Introducing two new tags: Vector and VectorLabels.
These tags open up a multitude of new use cases, from skeletons, to polylines, to Bézier curves, and more.
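As a sketch of how such a configuration might look (the exact attributes of the new tags may differ; this assumes they follow the same pattern as other *Labels tags such as PolygonLabels):

```xml
<View>
  <!-- Hypothetical example: label vector shapes on an image -->
  <VectorLabels name="vectors" toName="img">
    <Label value="Skeleton"/>
    <Label value="Polyline"/>
  </VectorLabels>
  <Image name="img" value="$image"/>
</View>
```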
There is a new Model Providers page available at the organization level where you can configure API keys to use with LLM tasks.
If you have previously set up model providers as part of your Prompts workflow, they are automatically included in the list.
For more information, see Model provider API keys for organizations.
The Organization page (only accessible to Owner and Admin roles) has been redesigned to be more consistent with the rest of the app.
Note that as part of this change, the Access Token page has been moved under Settings.
Before:
After:
There is a new project setting available from Annotation > Annotating Options and Review > Reviewing Options called Show unused data columns to reviewers in the Data Manager.
This setting allows you to hide unused Data Manager columns from any Annotator or Reviewer who also has permission to view the Data Manager.
"Unused" Data Manager columns are columns that contain data that is not being used in the labeling configuration.
For example, you may include meta or system data that you want to view as part of a project, but you don't necessarily want to expose that data to Annotators and Reviewers.
Each user has a numeric ID that you can use in automated workflows. These IDs are now easier to quickly find through the UI.
You can find them listed on the Organization page and in the Annotation Summary table on the Members page for projects.
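For scripted workflows, user IDs can also be retrieved from the Label Studio users API. The sketch below is a minimal example; the host and token are placeholders, and the helper simply filters the returned list by email:

```python
import json
import urllib.request

BASE_URL = "https://label-studio.example.com"  # placeholder: your Label Studio host
TOKEN = "YOUR_ACCESS_TOKEN"                    # placeholder: your access token

def fetch_users():
    """List organization members via the Label Studio users API."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/users",
        headers={"Authorization": f"Token {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def user_id_by_email(users, email):
    """Return the numeric user ID matching an email, or None if not found."""
    return next((u["id"] for u in users if u.get("email") == email), None)
```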
Managers and Reviewers will now see a link to the Annotator Dashboard from the Home page.
The Annotator Dashboard displays information about their annotation history.
Managers:
Reviewers:
We have continued to add new endpoints to our SDK, including new endpoints to bulk assign and unassign members to tasks.
See our SDK releases and API reference.
Label Studio can now be used in dark mode.
Click your avatar in the upper right to find the toggle for dark mode.
Note that this feature is not available for environments using whitelabeling.
There is a new Annotator Evaluation section under Settings > Quality.
When there are ground truth annotations within the project, an annotator will be paused if their ground truth agreement falls below a configurable threshold.
For more information, see Annotator Evaluation.
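The pause decision can be illustrated with a small sketch. This is not Label Studio's implementation, just the underlying logic: compare an annotator's agreement with ground truth against a threshold (the 0.7 default here is an assumption):

```python
def should_pause(matches: int, total_gt: int, threshold: float = 0.7) -> bool:
    """Pause an annotator when their ground truth agreement drops below threshold."""
    if total_gt == 0:
        return False  # no ground truth annotations yet, nothing to evaluate
    agreement = matches / total_gt
    return agreement < threshold
```

For example, an annotator matching 5 of 10 ground truth annotations (50% agreement) would be paused at a 70% threshold, while one matching 8 of 10 would not.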
We have added support for the following:
Anthropic: Claude 3.7 Sonnet
Gemini/Vertex: Gemini 2.5 Pro
OpenAI: GPT-4.5
For a full list of supported models, see Supported base models.
A new Pages Processed indicator is available when using Prompts against a project that includes tasks with image albums (using the valueList parameter on an Image tag).
You can also see the number of pages processed for each task by hovering over the image thumbnail.
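An image album is configured by pointing valueList at a task field that holds a list of image URLs, along these lines (field names here are illustrative):

```xml
<View>
  <!-- Task data is expected to look like: {"images": ["page1.jpg", "page2.jpg"]} -->
  <Image name="pages" valueList="$images"/>
</View>
```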
You can now click and drag to adjust text span regions.
You can now export polygons created using the BrushLabels tag to COCO format.
If you have AI Assistant enabled and ask multiple questions without coming to a resolution, it will offer to create a support ticket on your behalf:
You can now clear your chat history to start a new chat.
Fixed an issue related to unsafe-eval usage.
Fixed an issue where the visibleWhen parameter was not working when used with a taxonomy.
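A configuration along these lines should now behave as expected (tag and field names here are illustrative; visibleWhen is set on a View that is revealed once a taxonomy choice is selected):

```xml
<View>
  <Text name="text" value="$text"/>
  <Taxonomy name="topic" toName="text">
    <Choice value="Sports"/>
    <Choice value="Politics"/>
  </Taxonomy>
  <!-- Shown only after the "Sports" choice is selected in the taxonomy -->
  <View visibleWhen="choice-selected" whenTagName="topic" whenChoiceValue="Sports">
    <Choices name="subtype" toName="text">
      <Choice value="Match report"/>
      <Choice value="Transfer news"/>
    </Choices>
  </View>
</View>
```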
You can now use image data with Prompts. Previously, only text data was supported.
This new feature will allow you to automate and evaluate labeling workflows for image captioning and classification. For more information, see our Prompts documentation.
We’ve trained an AI on our docs to help you create projects faster. If you’d like to give it a try early, send us an email.
There is a new labeling parameter available for the Taxonomy tag. When set to true, you can apply your Taxonomy classes to different regions in text. For more information, see Taxonomy as a labeling tool.
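A minimal configuration might look like the following (choice values are illustrative):

```xml
<View>
  <!-- labeling="true" lets taxonomy classes be applied to selected text regions -->
  <Taxonomy name="taxonomy" toName="text" labeling="true">
    <Choice value="People">
      <Choice value="Athlete"/>
      <Choice value="Politician"/>
    </Choice>
    <Choice value="Places"/>
  </Taxonomy>
  <Text name="text" value="$text"/>
</View>
```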