There is a new project setting under Annotation > Task Reservation.
You can use this setting to determine how many minutes a task can be reserved by a user. For more information, see Project settings - Task Reservation.
By default, the task reservation time is set to one day (1440 minutes). This setting is only available when task distribution is set to Auto.
Fixed an issue that occurred when the value $undefined$ was present in the task data.
Previously, the projects that you could use with Prompts were limited to relatively simple labeling configurations (for example, only having one set of Choices or one set of Labels). With this change, you can now use complex projects with Prompts, for example, a project with multiple Choices, Labels, or a combination of both (see the example configuration below). This change also adds support for TextArea and Pairwise control tags.
Also as part of this change, you will no longer be asked to select a project Type when creating a new Prompt.
For more information, see Create a Prompt.
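For illustration, a configuration along the following lines, combining multiple Choices blocks, a Labels block, and a TextArea, would now be eligible for use with Prompts. The field names and label values here are hypothetical placeholders, not taken from the release notes:

```xml
<View>
  <Text name="text" value="$text"/>

  <!-- First set of choices: single-select sentiment -->
  <Choices name="sentiment" toName="text">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
    <Choice value="Neutral"/>
  </Choices>

  <!-- Second set of choices: content category -->
  <Choices name="category" toName="text">
    <Choice value="News"/>
    <Choice value="Opinion"/>
  </Choices>

  <!-- Span labels for NER-style annotation -->
  <Labels name="entities" toName="text">
    <Label value="Person"/>
    <Label value="Organization"/>
  </Labels>

  <!-- Free-text output, now also supported by Prompts -->
  <TextArea name="summary" toName="text" placeholder="Summarize the text"/>
</View>
```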
There is a new Export action available from the annotator performance dashboard.
We recently introduced the ability to perform video frame classification with the <TimelineLabels> tag.
You now have the ability to edit the frame spans you select in the timeline editor, making it easier to control which frames you want to label.
There are a number of project settings that are only applicable when auto distribution is enabled for users.
To prevent confusion, settings that are not applicable will now be hidden when Annotation > Distribute Labeling Tasks is set to Manual.
TextArea elements have been updated to reflect the look and feel of other labeling elements.
When using the Send Test Request action for a connected ML backend model, you will now see more descriptive error messages.
There is a new Enhance Prompt action available when writing a prompt.
You can use this feature to allow an LLM to enhance the prompt you already have, using your initial prompt and the tasks as context. For more information, see Draft and run prompts.
You will now see an estimated cost before running a prompt. This estimate is based on the number of tokens required.
You can now link comments to specific regions or fields within an annotation.
This change will help improve clarity between Annotators and Reviewers, enhancing the quality review process.
For more information, see Comments and notifications.
There is a new option on the Review page of the project settings: Allow reviewer to choose: Requeue or Remove.
When enabled, reviewers will see two reject options for an annotation: Remove (reject and remove the annotation from the queue) and Requeue (reject and then requeue the annotation).
This is useful for situations in which some annotations are incorrect or incomplete, but are acceptable with some fixes. In those cases, it may be more useful to send the task back to the annotator. In other cases, where the annotation is far from correct, it is better to remove it entirely.
For more information, see Project settings - Review.
You can now add a custom, self-hosted LLM to use with Prompts. This will allow you to use your fine-tuned models with your Prompts workflow.
Custom models must be compatible with OpenAI's JSON mode. For more information, see Model provider API keys.
There is a new Save As option when saving a prompt. This allows you to save named versions of a prompt as you refine it.
You can select between saved versions to compare their metrics. You can also manage older versions by updating them, deleting them, or renaming them.
When you run a prompt, you will now see additional metrics for the run.
For more information, see Draft and run prompts.
A new hotkey (Ctrl + h) has been added. Use this shortcut to hide all regions. Or, if no regions are visible, show all regions.
Prompts, our Enterprise tool for evaluating and refining LLM prompts, now supports multi-class text classification projects. This will enable you to label your text with more than one choice, allowing for more complex classification use cases.
This also means that projects with the choice="multiple" parameter set will now appear in the Target Project drop-down menu when creating a Prompt (assuming that the project meets all other eligibility criteria).
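As a minimal sketch, a project with a configuration like the following (field names and label values are placeholders) would now appear in that drop-down:

```xml
<View>
  <Text name="text" value="$text"/>
  <!-- choice="multiple" allows more than one class per task -->
  <Choices name="topics" toName="text" choice="multiple">
    <Choice value="Politics"/>
    <Choice value="Economy"/>
    <Choice value="Sports"/>
  </Choices>
</View>
```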
You can now apply labels to video frames. Previously, we only supported per-video classification.
This new feature allows you to apply labels at a per-frame level. You can implement this feature using a new tag: <TimelineLabels> (see the example below).
For more information, see New! Video Frame Classification.
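As a minimal sketch, assuming the video is referenced by a $video field in the task data, a configuration using the new tag might look like this (label values are placeholders):

```xml
<View>
  <Video name="video" value="$video"/>
  <!-- Apply labels to spans of frames on the video timeline -->
  <TimelineLabels name="timeline" toName="video">
    <Label value="Action"/>
    <Label value="Idle"/>
  </TimelineLabels>
</View>
```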
You can now select NER projects when creating a Prompt. Previously, Prompts only supported Text Classification projects.
For more information, see Named entity recognition (NER) in our Prompts documentation.
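For reference, a basic NER-style project configuration, with placeholder field names and labels, looks something like this:

```xml
<View>
  <Text name="text" value="$text"/>
  <!-- Span labels applied to selected text -->
  <Labels name="label" toName="text">
    <Label value="Person"/>
    <Label value="Location"/>
  </Labels>
</View>
```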
You can now add JavaScript to your Label Studio projects to further customize your labeling experience.
Note that due to security precautions, custom scripts must be enabled for your organization before you will see the Scripts option when configuring the labeling interface. Contact your account representative to request this feature.
For more information, see the custom scripts documentation.
You can now use Azure OpenAI when creating Prompts. For more information, see Model provider API keys.
Fixed an issue that occurred when created_by was null.

The Prompts tool leverages ChatGPT to help you evaluate and refine your LLM prompts. You can also use Prompts to generate predictions for automating your labeling process, and to quickly bootstrap labeling projects.
For more information, see Automate Data Labeling with HumanSignal and our Prompts documentation.
The Label Studio UI has been upgraded with updated colors and fonts, giving it a sleek new look while maintaining the same intuitive navigation you're familiar with. All Label Studio tools, features, and settings are still in the same place, ensuring a smooth transition.
Improved performance on the Projects list page due to improvements at the API level.
Fixed an issue with Google Cloud Storage when the connection has the Use pre-signed URLs option disabled. In these situations, Google was sending pre-signed URLs with the format https://storage.googleapis.com rather than sending BLOBs.
With this fix, Google Cloud Storage will begin returning BLOBs/base64-encoded data when Use pre-signed URLs is off. This means that Label Studio will start reading data from Google Cloud Storage buckets, which can result in large amounts of data being sent to your Label Studio instance, potentially affecting performance.
When using the annotator performance report, you can now filter by project or workspace.
For organizations with the HIDE_STORAGE_SETTINGS_FOR_MANAGER environment variable set to True, Managers will now be able to sync data from external storage as necessary rather than request assistance from an Admin user.
There is a new setting that can restrict users from uploading data directly to Label Studio, forcing them to only use cloud storage. If you would like to enable this setting, set the DISABLE_PROJECT_IMPORTS environment variable to True.
Fixed an issue that resulted in environment variables prefixed with LABEL_STUDIO_ appearing in context logs.

The default log level has been changed from DEBUG to INFO.