Projects and tasks will have states depending on the actions you have taken on them and where they are in the workflow.
State management is currently being rolled out so that we can backfill tasks and projects created before this feature was implemented. If you do not see it yet, you will in the coming days.
There is a new Enforce strict overlap limit setting under Quality > Overlap of Annotations.
Previously, it was possible to have more annotations than the number you set for Annotations per task.
This would most frequently happen when you set a low task reservation time, meaning that task locks expired before annotators submitted their tasks. Other annotators could then access and submit the task, potentially resulting in an excess of annotations.
When this new setting is enabled, if too many annotators try to submit a task, they will see an error message. Their draft will be saved, but they will be unable to submit their annotation.
Note that strict enforcement only applies to annotations created by users in the Annotator role.
Previously, when configuring annotator evaluation against ground truth tasks, you could set exactly how many ground truth tasks each annotator sees when they begin annotating. The remaining ground truth tasks would be shown to each annotator depending on where they are in the task queue and the task ordering method.
Now, you can set a specific number of ground truth tasks to be included in continuous evaluation.
You can use this as a way to ensure that not all annotators see the same ground truths, as some will see certain tasks during continuous evaluation and others will not.
Before:
After:
You can now set annotation overlap as high as 500 annotations. Previously, it was restricted to 20 when set through the UI.
When working in the template builder, you can now use Ctrl + F to search your labeling configuration XML.
There is a new option to run a Prompt only against tasks that do not already have predictions.
This is useful when you have failed tasks or want to target newly added tasks.
The following models have been deprecated:
gpt-4.5-preview
gpt-4.1
gpt-4.1-mini
gpt-4.1-nano
gpt-4
gpt-4-turbo
gpt-4o
gpt-4o-mini
o3-mini
o1
When clicking Show task source <> from the Data Manager, you will see a new Interactive view.
From here you can filter, search, and expand/collapse sections in the task source. You can also selectively copy sections of the JSON.
It is now clearer how to access the task summary view. The icon has been replaced with a Compare All button.
For additional clarity, the Compare tab has now been renamed Side-by-Side.
Before:
After:
The Project > Settings > Members page has been fully redesigned.
It includes the following changes:
Members page:
Add Members modal:
The members table on the Organization page has been redesigned and improved to include:
Members table:
Member details:
When setting up cloud storage for Databricks, you can now select whether you want to use a personal access token, Databricks Service Principal, or Azure AD Databricks Service Principal.
When you want to select multiple users in the Member Performance dashboard, there is a new All Members option in the members drop-down.
If you have a published project that is in a shared workspace and you move it to your Personal Sandbox workspace, the project will automatically revert to an unpublished state.
Note that published projects in Personal Sandboxes were never visible to other users. This change is simply to support upcoming enhancements to project work states.
In preparation for upcoming enhancements to agreements, we are deprecating some functionality.
In addition to the changes outlined here, we will be changing how custom weights work for class-based control tags.
Class-based control tags are tags in which you can assign individual classes to objects.
These include:
<Labels> (and related tags such as <RectangleLabels>, <PolygonLabels>, etc.)
<Choices>
<Pairwise>
<Rating>
<Taxonomy>
Instead of being able to set a percentage-based weight for each individual class, you will soon only be able to include or exclude classes from the calculation by turning them on or off.
For any existing projects that have custom class weights, those class weights will be reset to 100%.
You can still set a percentage-based weight for the parent control tag (<Labels> in the screenshot below).
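For reference, here is a minimal configuration sketch (the tag and class names are illustrative) of a class-based control tag, where each <Label> child is an individual class:

<View>
  <Text name="text" value="$text"/>
  <Labels name="label" toName="text">
    <Label value="Person"/>
    <Label value="Location"/>
  </Labels>
</View>

Under the new behavior, Person and Location can each only be included in or excluded from the agreement calculation, while the percentage-based weight still applies to the <Labels> tag as a whole.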
You can now click and drag to adjust panel widths when configuring your labeling interface.
We're introducing a new tag: <ReactCode>.
ReactCode represents a new evaluation and annotation engine for building fully programmable interfaces that better fit complex, real-world labeling and evaluation use cases.
With this tag, you can:
For more information, see the following resources:
How time is displayed across the app has been standardized to use the following format:
[n]h [n]m [n]s
For example: 10h 5m 22s
You can now select a Data Manager row and then, while holding Shift, select another row to select all rows between your selections.
Added support for the following models:
claude-sonnet-4-5
claude-haiku-4-5
claude-opus-4-5
To better utilize space, the annotation ID and the navigation controls for the labeling stream have been moved to below the labeling interface.
In preparation for upcoming enhancements to agreements, we are deprecating some functionality.
In the coming weeks, we will be changing how we support the following features:
Reach out to your CSM or Support if you have any questions or would like help preparing for the transition.
The Annotator Evaluation feature has been improved to include more robust ongoing evaluation functionality and improved clarity within the UI.
Enhancements include:
Before:
After:
Tip: You can disallow skipping for all project tasks. But if you want to allow skipping while ensuring that annotators cannot skip ground truth tasks, you can use the recently introduced unskippable tasks feature.
The Recent Projects list on the Home page will now include the most recently visited projects at the top of the list instead of pinned projects.
To provide visibility to unskippable tasks, there is a new Allow Skip column in the Data Manager.
This column is filterable, hidden by default, and is only visible to Managers, Admins, and Owners.
When you add OpenAI models to Prompts or to the organization model provider list, GPT-5.2 will now be included.
Fixed an issue with <HyperText> tags.
Fixed an issue with <DateTime> tags when using consensus-based agreement.
The command palette is an enhanced search tool that you can use to navigate to resources both inside and outside the app, find workspaces and projects, and (if used within a project) navigate to project settings.
For more information, see Command palette.
While you can hide the Skip action in the project settings, this enhancement allows you to configure individual tasks so that users in the Annotator or Reviewer role cannot skip them.
To make a task unskippable, you must specify "allow_skip": false as part of the JSON task definition that you import to your project.
For example, the following JSON snippet would result in one skippable task and one unskippable task:
[
  {
    "data": {
      "text": "Demo text 1"
    },
    "allow_skip": false
  },
  {
    "data": {
      "text": "Demo text 2"
    }
  }
]
For more information, see Skipping tasks.
When configuring Annotations per task for a project, only annotations from distinct users will count towards task overlap.
Previously, if a project had Annotations per task set to 2, and User A created and then submitted two annotations on a single task (which can be done in Quick View), then the task would be considered completed.
Now, the task would not be completed until a different user submitted an annotation.
The Member Performance dashboard has been moved under Analytics in the navigation menu. The page now also features an improved UI and more robust information about annotating and reviewing activities, including:
For more information, see Member Performance Dashboard.
When configuring SAML, you can now select from a list of common IdPs to pre-fill values with presets.
Fixed an issue where the required parameter was not always working in Chat labeling interfaces.
Annotation tabs have the following improvements:
Before:
After:
We have made a number of improvements to task summaries.
Before:
After:
Improvements include:
When calculating agreement, control tags that are left unpopulated in all annotations will now count as agreement.
Previously, agreement only considered control tags that were present in the annotation results. Going forward, all visible control tags in the labeling configuration are taken into consideration.
For example, the following result set would previously be considered 0% agreement between the two annotators, as only choices group 1 would be included in the agreement calculation.
Now it would be considered 50% agreement (choices group 1 has 0% agreement, and choices group 2 has 100% agreement).
Annotator 1: Choices group 1 / Choices group 2
Annotator 2: Choices group 1 / Choices group 2
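As a concrete sketch of this scenario (tag names and choice values here are illustrative; the structure follows the standard annotation result format), both annotators answer group 1 differently and leave group 2 empty:

Annotator 1's result:

[
  { "from_name": "group1", "to_name": "text", "type": "choices", "value": { "choices": ["Positive"] } }
]

Annotator 2's result:

[
  { "from_name": "group1", "to_name": "text", "type": "choices", "value": { "choices": ["Negative"] } }
]

Since neither result contains group 2, that tag now counts as full agreement, yielding the 50% overall score described above.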
Notes:
This change only applies to new projects created after November 13th, 2025.
Only visible tags are taken into consideration. For example, you may have tags that are conditional and hidden unless certain other tags are selected. These are not included in agreement calculations as long as they remain hidden.
When you add OpenAI models to Prompts or to the organization model provider list, GPT-5.1 will now be included.
There is a new page available from Organization > Settings > Permissions that allows users in the Owner role to refine permissions across the organization.
This page is only visible to users in the Owner role.
For more information, see Customize organization permissions.
You can now find the following templates in the in-app template gallery:
Fine-Tune an Agent with an LLM
Fine-tune an Agent without an LLM
Evaluate Production Conversations for RLHF
Previously, if using a machine learning model with a project, you had to set up your ML backend with a legacy API token.
You can now use personal access tokens as well.
The Chat tag now supports markdown and HTML in messages.
The Quality section of the project settings has been improved.
The onboarding checklist for projects has been improved to make it clearer which steps still need to be taken before annotators can begin working on a project:
There is a new option to select between the default column display and a more compact version.
There is a new Agreement (Selected) column that allows you to view agreement data between selected annotators, models, and ground truths.
This is different from the Agreement column, which displays the average agreement score between all annotators.
Also note that when you click a comparison between annotators in the agreement matrix, you are now taken to the Data Manager with the Agreement (Selected) column pre-filtered for those annotators and/or models.
For more information, see Agreement and Agreement (Selected) columns.
Fixed an issue with the value parameter.
When deleting a project, users will now be asked to enter text in the confirmation window:
Activity logs are now only retained for 180 days.
The Member Performance Dashboard now includes two new graphs for Reviewer metrics:
You can also now find a Review Time column in the Data Manager:
Note that data collection for review time began on September 25, 2025. You will not be able to view review time for reviewing activity that happened before data collection began.
There is a new <Markdown> tag, which you can use to add content to your labeling interface.
For example, adding the following to your labeling interface:
<View>
  <Markdown>
## Heading 2
### Heading 3

- bullet point one
- bullet point two

**Bold text** and *italic text*

`inline code`

```
code block
```

[Link](https://humansignal.com/changelog/)
  </Markdown>
</View>
Produces this:
Previously, the Table tag only accepted key/value pairs, for example:
{
  "data": {
    "table_data": {
      "user": "123456",
      "nick_name": "Max Attack",
      "first": "Max",
      "last": "Opossom"
    }
  }
}
It will now accept an array of objects as well as arrays of primitives/mixed values. For example:
{
  "data": {
    "table_data": [
      { "id": 1, "name": "Alice", "score": 87.5, "active": "true" },
      { "id": 2, "name": "Bob", "score": 92.0, "active": "false" },
      { "id": 3, "name": "Cara", "score": null, "active": "true" }
    ]
  }
}
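An array of primitives or mixed values is also accepted. For instance, a minimal sketch reusing the same table_data field (the values are illustrative):

{
  "data": {
    "table_data": ["Alice", 42, 87.5, true, null]
  }
}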
You can now perform page-level annotation on PDF files, such as for OCR, NER, and more.
This new functionality also supports displaying PDFs natively within the labeling interface, allowing you to zoom and rotate pages as needed.
The PDF functionality is now available for all Label Studio Enterprise customers. Contact sales to request a trial.
The Video tag now has the following optional parameters:
defaultPlaybackSpeed - The default playback speed when the video is loaded.
minPlaybackSpeed - The minimum allowed playback speed.
The default value for both parameters is 1.
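For example, a minimal configuration sketch (the name and value attributes follow standard Video tag usage; the speed values are illustrative):

<View>
  <Video name="video" value="$video" defaultPlaybackSpeed="1.5" minPlaybackSpeed="0.5"/>
</View>

Here the video loads at 1.5x speed, and annotators cannot slow playback below 0.5x.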
We have continued to add new endpoints to our SDK, including new endpoints for model and user stats.
See our SDK releases and API reference.
You can now select specific workspaces and projects when inviting new Label Studio users. Those users will automatically be added as members to the selected projects and/or workspaces:
There is also a new Invite Members action available from the Settings > Members page for projects. This is currently only available for Administrators and Owners.
This will create a new user within your organization, and also immediately add them as a member to the project:
The Annotation section of the project settings has been improved.
For more information, see Project settings - Annotation.
Fixed an issue when using splitchannels="true".
Chat conversations are now a native data type in Label Studio, so you can annotate, automate, and measure like you already do for images, video, audio, and text.
For more information, see:
Blog - Introducing Chat: 4 Use Cases to Ship a High Quality Chatbot
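As a rough sketch of what a Chat-based labeling configuration might look like (this assumes the Chat tag takes name and value attributes like other object tags; all names here are illustrative):

<View>
  <Chat name="chat" value="$messages"/>
  <Choices name="quality" toName="chat">
    <Choice value="Helpful"/>
    <Choice value="Unhelpful"/>
  </Choices>
</View>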
There is a new cloud storage option to connect your Databricks Unity Catalog to Label Studio.
For more information, see Databricks Files (UC Volumes).
When you click the Agreement column in the Data Manager, you can see a pop-up with an inter-annotator agreement matrix. This pop-up will now also identify annotations with ground truths.
For more information about adding ground truths, see Ground truth annotations.
You can now sort regions by media start time.
Previously, you could sort by time, but that reflected the time the region was created. The new option reflects the start time relative to the media.
When you add Gemini or Vertex AI models to Prompts or to the organization model provider list, you will now see the latest Gemini models.
gemini-2.5-pro
gemini-2.5-flash
gemini-2.5-flash-lite
You can now search the template gallery. You can search by template title, keywords, tag names, and more.
Note that template searches can only be performed if your organization has AI features enabled.