The Annotation section of the project settings has been improved.
For more information, see Project settings - Annotation.
splitchannels="true".
Chat conversations are now a native data type in Label Studio, so you can annotate, automate, and measure like you already do for images, video, audio, and text.
For more information, see:
Blog - Introducing Chat: 4 Use Cases to Ship a High Quality Chatbot
There is a new cloud storage option to connect your Databricks Unity Catalog to Label Studio.
For more information, see Databricks Files (UC Volumes).
When you click the Agreement column in the Data Manager, you can see a pop-up with an inter-annotator agreement matrix. This pop-up will now also identify annotations with ground truths.
For more information about adding ground truths, see Ground truth annotations.
You can now sort regions by media start time.
Previously you could sort by time, but this would reflect the time that the region was created. The new option reflects the start time in relation to the media.
When you add Gemini or Vertex AI models to Prompts or to the organization model provider list, you will now see the latest Gemini models.
gemini-2.5-pro
gemini-2.5-flash
gemini-2.5-flash-lite
You can now search the template gallery. You can search by template title, keywords, tag names, and more.
Note that template searches can only be performed if your organization has AI features enabled.
Introducing two new tags: Vector and VectorLabels.
These tags open up a multitude of new use cases, from skeletons, to polylines, to Bézier curves, and more.
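For illustration, a minimal configuration using VectorLabels on an image might look like the following sketch; the label values are placeholders, and the tag follows the usual control/object pairing via toName (see the tag documentation for the full attribute list):
<View>
  <Image name="image" value="$image"/>
  <!-- VectorLabels attaches labeled vector regions (polylines, curves, skeletons) to the image -->
  <VectorLabels name="vector" toName="image">
    <Label value="Road" background="blue"/>
    <Label value="River" background="green"/>
  </VectorLabels>
</View>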
There is a new Model Providers page available at the organization level where you can configure API keys to use with LLM tasks.
If you have previously set up model providers as part of your Prompts workflow, they are automatically included in the list.
For more information, see Model provider API keys for organizations.
The Organization page (only accessible to Owner and Admin roles) has been redesigned to be more consistent with the rest of the app.
Note that as part of this change, the Access Token page has been moved under Settings.
Before:
After:
There is a new project setting available from Annotation > Annotating Options and Review > Reviewing Options called Show unused data columns to reviewers in the Data Manager.
This setting allows you to hide unused Data Manager columns from any Annotator or Reviewer who also has permission to view the Data Manager.
"Unused" Data Manager columns are columns that contain data that is not being used in the labeling configuration.
For example, you may include meta or system data that you want to view as part of a project, but you don't necessarily want to expose that data to Annotators and Reviewers.
Each user has a numeric ID that you can use in automated workflows. These IDs are now easier to quickly find through the UI.
You can find them listed on the Organization page and in the Annotation Summary table on the Members page for projects.
Managers and Reviewers will now see a link to the Annotator Dashboard from the Home page.
The Annotator Dashboard displays information about their annotation history.
Managers:
Reviewers:
We have continued to add new endpoints to our SDK, including new endpoints for bulk assigning and unassigning members to tasks.
See our SDK releases and API reference.
If your project is using predictions, you will now see a Show Models toggle on the Members dashboard.
This will allow you to view model agreement as compared to annotators, other models, and ground truths.
For more information, see the Members dashboard.
When duplicating a project, you will now see a modal with an updated UI and more helpful text.
We have continued to add new endpoints to our SDK. See our SDK releases.
Administrators and Owners can now opt in to get an email notification when a new user logs in who has not yet been assigned a role.
When you have a labeling configuration that includes multiple <Labels> blocks, like the following:
<View>
  <Text name="text" value="$text" granularity="word"/>
  <Labels name="category" toName="text" choice="single">
    <Label value="Animal" background="red"/>
    <Label value="Plant" background="darkorange"/>
  </Labels>
  <Labels name="type" toName="text" choice="single">
    <Label value="Mammal" background="green"/>
    <Label value="Reptile" background="gray"/>
    <Label value="Bird" background="blue"/>
  </Labels>
</View>
You can now choose multiple labels to apply to the selected region.
When loading the Data Manager for a project into which you have not yet imported data, you will now see a more helpful interface.
We released a new version of the SDK, with multiple functional and documentation enhancements.
All imported predictions are now validated against your project’s labeling configuration and the required prediction schema.
Predictions that are missing required fields (for example, from_name, to_name, type, or value) or that don't match the labeling configuration (for example, to_name must reference an existing object tag) will be rejected with detailed, per-task error messages to help you correct the payloads.
You can now filter prediction results by selecting options that correspond to control tag values.
Previously, you could only filter using an unstructured text search.
The prediction results filter also includes a nested model version filter, which (if specified) ensures that your filter returns tasks only when the selected prediction result comes from the selected model.
There is a new See Logs option for custom agreement metrics, which you can use to view log history and error messages.
Fixed an issue with the leafsOnly parameter for taxonomies.
Fixed an issue when labeling Text or HyperText with multiple Taxonomy tags at the same time.
When using an OpenAI API key, you will now see the following models as options:
You can now connect your projects to Azure Blob Storage using Service Principal authentication.
Service Principal authentication uses Entra ID to authenticate applications rather than account keys, allowing you to grant specific permissions and can be easily revoked or rotated.
For more information, see Azure Blob Storage with Service Principal authentication.
The Organization > Usage & License page has new options to disable individual email notifications for all members in the organization.
If disabled, the notification will be disabled for all users and hidden from their options on their Account & Settings page.
When adding cloud storage, the modal has now been redesigned to add clarity and additional guidance to the process.
For example, you can now preview a list of files that will be imported in order to verify your settings.
When applying an annotation results filter, you will now see a nested Annotator option. This allows you to specify that the preceding filter should be related to the specific annotator.
For example, the following filter will retrieve any tasks that have an annotation with choice "bird" selected, and also retrieve any tasks that have an annotation submitted by "Sally Opossum."
This means if you have a task where "Max Opossum" and "Sally Opossum" both submitted annotations, but only Max chose "bird", the task would be returned in your filter.
With the new nested filter, you can specify that you only want tasks in which "Sally Opossum" selected "bird":
While you can still adjust the default height in the labeling configuration, users can now drag to resize the height as needed.
Next week, we are releasing version 2.0.0 of the Label Studio SDK, which will contain breaking changes.
If you use the Label Studio SDK package in any automated pipelines, we strongly recommend pinning your SDK version to <2.0.0.
When labeling paragraphs in dialogue format (layout="dialogue"), you can now apply labels at an utterance level.
There is a new button that you can click to apply the selected label to the entire utterance. You can also use the pre-configured Command + Shift + A hotkey:
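A minimal configuration for this workflow might look like the following sketch, which assumes dialogue data stored under a $dialogue key in the task:
<View>
  <!-- Dialogue layout renders each utterance with its author -->
  <Paragraphs name="dialogue" value="$dialogue" layout="dialogue"/>
  <ParagraphLabels name="label" toName="dialogue">
    <Label value="Question" background="orange"/>
    <Label value="Answer" background="green"/>
  </ParagraphLabels>
</View>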
You can now annotate time series data on the sub-second decimal level.
Note: Your time format must include .%f to support decimals. For example: timeFormat="%Y-%m-%d %H:%M:%S.%f"
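For reference, a time series configuration with sub-second timestamps might look like the following sketch (the CSV URL and column names are placeholders):
<View>
  <TimeSeries name="ts" valueType="url" value="$csv"
              timeColumn="time"
              timeFormat="%Y-%m-%d %H:%M:%S.%f">
    <Channel column="signal_1"/>
  </TimeSeries>
  <TimeSeriesLabels name="label" toName="ts">
    <Label value="Event" background="red"/>
  </TimeSeriesLabels>
</View>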
There is a new option on the Members page to export its data to CSV:
When listing organization members via the API, you can use two new query params to exclude project or workspace members:
exclude_project_id
exclude_workspace_id
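For example, assuming you are listing members through the standard organization memberships endpoint, a request like the following would exclude members who already belong to project 123 (adjust the path to match your API version):
GET /api/organizations/{id}/memberships?exclude_project_id=123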
The following API endpoints have been deprecated and will be removed on September 16, 2025.
GET /api/projects/{id}/dashboard-members
GET /api/projects/{id}/export
Fixed an issue where snap="pixel" was not included in autocomplete options for RectangleLabels.
You now have the option to view the Projects page in list format rather than as a grid of cards:
In the list view, you will see a condensed version of the project information that includes fewer metrics, but more projects per page:
(Admin view)
(Annotator view)
This change also includes a new option to sort projects (available in either view):
When you are using a labeling configuration that includes <TimelineLabels>, you will now see a settings icon.
From here you can specify the following:
The <Rectangle> and <RectangleLabels> tags now include the snap parameter, allowing you to snap bounding boxes to pixels.
Tip: To see a pixel grid when zoomed in on an image, you must disable pixel smoothing. This can be done as a parameter on the <Image> tag or from the user settings.
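A configuration using pixel snapping might look like the following sketch. The smoothing attribute shown on the Image tag is an assumption about the parameter mentioned above, so confirm the exact name in the Image tag documentation:
<View>
  <!-- smoothing="false" is assumed to be the pixel-smoothing parameter referenced above -->
  <Image name="image" value="$image" smoothing="false"/>
  <RectangleLabels name="bbox" toName="image" snap="pixel">
    <Label value="Defect" background="red"/>
  </RectangleLabels>
</View>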
The <Collapse> tag now includes an open parameter. You can use this to specify whether a content area should be open or collapsed by default.
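For example, a collapsed-by-default instructions panel might be configured like this sketch (the Panel nesting follows the existing Collapse tag usage; the header text is a placeholder):
<View>
  <!-- open="false" keeps the content area collapsed until the user expands it -->
  <Collapse open="false">
    <Panel value="Annotation instructions">
      <Header value="Read these guidelines before labeling"/>
    </Panel>
  </Collapse>
</View>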
/import API calls no longer apply to GET requests.
You can now configure global hotkeys for each user account. These are available from the Account & Settings page.
Previously, the bulk annotation actions were only available to users in the Reviewer role, or users in the Manager role and higher.
Now, users in the Annotator role can access these actions.
Note that this is only available when the project is using Manual distribution and annotators must have access to the Data Manager.
You can now search by project description and project ID.
You can now click a link in the project breadcrumbs to navigate back to a specific workspace.
Removed the default zoom level calculation for Audio, allowing it to render the full waveform by default.
Fixed an issue with the include and filter parameters.
Fixed an issue with the /api/tasks/{id} call for tasks with more than 10 annotations.
Fixed an issue where a <TextArea> field was still submitted even if the field was conditionally hidden.
We’ve introduced a new BitMask tag to support pixel-level image annotation using a brush and eraser. This new tag allows for highly detailed segmentation using brush-based regions and a cursor that reflects brush size down to single pixels for fine detail. We’ve also improved performance so it can handle more regions with ease.
Additionally, Mac users can now use two fingers to pinch zoom and pan images for all annotation tasks.
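A minimal configuration using the new tag might look like the following sketch. The labels-style variant shown here (BitMaskLabels) is an assumption based on how other region tags pair with labels, so confirm the exact tag and attribute names in the tag documentation:
<View>
  <Image name="image" value="$image"/>
  <!-- Brush-style, pixel-level masks; the Label children define the classes that can be painted -->
  <BitMaskLabels name="mask" toName="image">
    <Label value="Foreground" background="red"/>
    <Label value="Background" background="green"/>
  </BitMaskLabels>
</View>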
Email notifications have been added for important project events, including task assignments, project publishing, and data export completion. This helps annotators and project managers stay in sync without unnecessary distractions.
Users can manage email preferences in their user settings.
All Label Studio Starter Cloud and Enterprise SaaS users, including those on a free trial, can ask inline questions of an AI trained on our docs and even use AI to quickly create projects, including configuring labeling UIs, with natural language.
Account owners can enable the AI Assistant in Settings > Usage & License by toggling on “Enable AI” and “Enable Ask AI.” For more information, see the docs.
There is a new option to display audio files as spectrograms. You can further specify additional spectrogram settings such as windowing function, color scheme, dBs, mel bands, and more.
Spectrograms can provide a deeper level of audio analysis by visualizing frequency and amplitude over time, which is crucial for identifying subtle sounds (like voices or instruments) that might be missed with traditional waveform views.
There is a new Multichannel tag for visualizing time series data. You can use this tag to combine and view multiple time series channels simultaneously on a single plot, with synchronized interactions.
The Multichannel tag significantly improves the usability and correlation of time series data, making it easier for users to analyze and pinpoint relationships across different signals.
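As a sketch, a multichannel configuration might look like the following; the exact nesting of the Multichannel tag within TimeSeries and its attribute names should be confirmed against the tag documentation (column names are placeholders):
<View>
  <TimeSeries name="ts" valueType="url" value="$csv" timeColumn="time">
    <!-- Channels grouped under Multichannel render together with synchronized interactions -->
    <Multichannel>
      <Channel column="temperature"/>
      <Channel column="pressure"/>
    </Multichannel>
  </TimeSeries>
  <TimeSeriesLabels name="label" toName="ts">
    <Label value="Anomaly" background="red"/>
  </TimeSeriesLabels>
</View>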
When using the View All action, users who are in the Reviewer role or higher can now see a summary of the annotations for a specific task. This summary includes metadata, agreements, and side-by-side comparisons of labels.
You can use this summary for a more efficient and detailed review of annotated tasks and to better understand consensus and discrepancies, especially when needing to compare the work of multiple annotators.
When applying filters, you will see new options that correspond to annotation results.
These options are identified by the results chip and correspond to control tag names and support complex filtering for multiple annotation results. For example, you can filter by “includes all” or “does not include.”
This enhancement provides a much more direct, predictable, and reliable way to filter and analyze annotation results, saving time and reducing the chances of errors previously encountered with regex matching.
For more information, see Filter annotation results.
When deleting annotations, reviews, or assignments, you can now select a specific user for the delete action. Previously, you were only able to delete all instances.
With this change, you will have more granular control over data deletion, allowing for precise management of reviews and annotations.
This enhancement is available for the following actions:
Users can now opt into email notifications when they are invited to a project or workspace. These options are available from the Account & Settings page.
This ensures users are promptly aware of new project and workspace invitations, improving collaboration and onboarding workflows.
There are two UI changes related to storage proxies:
The Billing & Usage page has been renamed the Usage & License page. Previously this page was only visible to users in the Owner role. A read-only form of this page is now available to all users in the Admin role.
Organization owners can use the new Session Timeout Policies fields to control session timeout settings for all users within their organization. These fields are available from the Usage & License page.
Owners can configure both the maximum session age (total duration of a session) and the maximum time between activity (inactivity timeout).
You can now use the sync parameter to align audio and video streams with time-series sensor data by mapping each frame to its corresponding timestamp.
For more information, see Time Series Labeling with Audio and Video Synchronization.
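A sketch of a synchronized configuration is shown below. It assumes the video and time series tags reference each other through the sync parameter; see the linked documentation for the exact attribute values expected (the data keys and column names are placeholders):
<View>
  <Video name="video" value="$video" sync="ts"/>
  <TimeSeries name="ts" valueType="url" value="$sensor_csv"
              timeColumn="time" sync="video">
    <Channel column="acceleration"/>
  </TimeSeries>
  <TimeSeriesLabels name="label" toName="ts">
    <Label value="Impact" background="red"/>
  </TimeSeriesLabels>
</View>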
You can now configure the following aspects of the Grid View in the Data Manager:
Previously when exporting task information using the SDK, comment text was truncated at 255 characters. Now, you can export the full comment text when using the following:
tasks = list(ls.tasks.list(<PROJECT_ID>, fields="all"))
Improved the scrolling action for the workspace list, making it easier to navigate for organizations with very large workspace lists.
Made security improvements around webhook permissions.
Fixed an issue where a 0 character appeared in the Quick View when labeling audio data.
Fixed an issue that occurred when the Repeater tag is used.
Label Studio now supports more flexible JSON data import from cloud storage. When importing data, you can use JSONL format (where each line is a JSON object), and import Parquet files.
JSONL is the format needed for OpenAI fine-tuning, and the default output format from SageMaker and Hugging Face. Parquet enables smoother data imports and exports for enterprise-grade systems including Databricks, Snowflake, and the AWS feature store.
This change simplifies data import for data scientists, aligns with common data storage practices, reduces manual data preparation steps, and improves efficiency by handling large, compressed data files (Parquet).
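For example, a JSONL file where each line holds the data for one task might look like this (the field names depend on your labeling configuration; you can also use the full task format with a data key):
{"text": "The delivery arrived two days late.", "category": "shipping"}
{"text": "Great support experience, very fast reply.", "category": "support"}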
The Label Studio Playground is an interactive sandbox where you can write or paste your XML labeling configuration and instantly preview it on sample tasks—no local install required.
The playground has recently been updated and improved, now supporting a wider range of features, including audio labeling. It is also now a standalone app and automatically stays in sync with the main application.
Tip: To modify the data input, use a comment below the <View> tags:
A new PDF tag lets you directly ingest PDF URLs for classification without needing to use hypertext tags.
This also simplifies the process for using PDFs with Prompts, because you no longer need to first convert your PDFs to images.
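A minimal classification setup using the new tag might look like the following sketch; confirm the exact tag name and attributes in the tag documentation (the choice values are placeholders):
<View>
  <!-- Points at a PDF URL in the task data, e.g. {"pdf": "https://example.com/report.pdf"} -->
  <Pdf name="pdf" value="$pdf"/>
  <Choices name="doc_type" toName="pdf" choice="single">
    <Choice value="Invoice"/>
    <Choice value="Contract"/>
  </Choices>
</View>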
The View All feature when reviewing annotations has been improved so that you can now interact with all annotation elements side-by-side, making it easier to review annotations. For example, you can now play video and audio, move through timelines, and highlight regions.
Two notable performance improvements include:
Improved /api/users response time, in some cases from P50 > 40s to P50 < 30s.
Improvements related to httpx_client.
Previously, if you loaded JSON tasks from source storage, you could only configure one task per JSON file.
This restriction has been removed, and you can now specify multiple tasks per JSON file as long as all tasks follow the same format.
For more information, see the examples in our docs here.
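For example, a single JSON file in source storage can now contain a list of tasks that all share the same structure:
[
  { "data": { "text": "First task text" } },
  { "data": { "text": "Second task text" } },
  { "data": { "text": "Third task text" } }
]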
The Export Underlying Data option was recently introduced and is available from the Annotations chart in the annotator performance dashboard. This allows you to export information about the tasks that the selected users have annotated.
Previously, users were only identified by user ID within the CSV.
With this update, you can also identify users by email.
Added customizable column selectors for performance tuning and cost management.
This will help you better understand the data behind each request to the LLM for improved troubleshooting and cost management with Prompts.
Added support for the OpenAI GPT-4.1 series models in Prompts. See the complete list of supported base models here.
The cursor now adjusts dynamically to brush size to allow for more precision in segmentation tasks.
COCO and YOLO export formats now available for KeyPointLabels.
Gain clarity into annotator output and productivity; track performance across individuals and teams.
Click any agreement score to view pairwise agreement scores with others.
You can now restrict Plugin access to Administrators and above. By default, Managers have access to update plugins.
To request that this be restricted to Administrators, contact support.
Contribute to and browse the open source community templates repository.
View dozens of pre-built templates available today in the Templates Gallery.
Label Studio can now be used in dark mode.
Click your avatar in the upper right to find the toggle for dark mode.
Note that this feature is not available for environments using whitelabeling.
There is a new Annotator Evaluation section under Settings > Quality.
When there are ground truth annotations within the project, an annotator will be paused if their ground truth agreement falls below a certain threshold.
For more information, see Annotator Evaluation.
We have added support for the following:
Anthropic: Claude 3.7 Sonnet
Gemini/Vertex: Gemini 2.5 Pro
OpenAI: GPT 4.5
For a full list of supported models, see Supported base models.
A new Pages Processed indicator is available when using Prompts against a project that includes tasks with image albums (using the valueList parameter on an Image tag).
You can also see the number of pages processed for each task by hovering over the image thumbnail.
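For reference, an image-album configuration that drives this indicator uses the valueList parameter on the Image tag, along the lines of this sketch (it assumes the task data contains a pages list of image URLs):
<View>
  <!-- "pages" is a task data field containing a list of image URLs -->
  <Image name="pages" valueList="$pages"/>
  <Choices name="doc_class" toName="pages" choice="single">
    <Choice value="Approved"/>
    <Choice value="Rejected"/>
  </Choices>
</View>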
You can now click and drag to adjust text span regions.
You can now export polygons created using the BrushLabels tag to COCO format.
If you have AI Assistant enabled and ask multiple questions without coming to a resolution, it will offer to create a support ticket on your behalf:
You can now clear your chat history to start a new chat.
Removed unsafe-eval usage.
Fixed an issue where the visibleWhen parameter was not working when used with a taxonomy.
There are a number of new features and changes related to plugins:
When you open Label Studio, you will see a new Home page. Here you can find links to your most recent projects, shortcuts to common actions, and links to frequently used resources.
Note that this feature is not available for environments using whitelabeling.
When adding project storage, you now have the option to choose Google Cloud Storage WIF.
Unlike the standard GCS connection using application credentials, this allows Label Studio to request temporary credentials when connecting to your storage.
For more information, see Google Cloud Storage with Workload Identity Federation (WIF).
We have added support for the following Gemini models:
We have removed support for the following OpenAI models:
Label Studio is transitioning from IAM users to IAM roles for AWS S3 access.
This affects users who are using the Amazon S3 (IAM role access) storage type in projects.
Before you can add new projects with the Amazon S3 (IAM role access) storage type, you must update your AWS IAM policy to add the following principal:
arn:aws:iam::490065312183:role/label-studio-app-production
Please also keep the previous Label Studio principal in place to ensure that any project connections you set up prior to April 7, 2025 will continue to have access to AWS:
arn:aws:iam::490065312183:user/rw_bucket
For more information, see Set up an IAM role in AWS.
Fixed an issue where the prediction-changed value was not being reset after making manual changes to pre-annotations.
Prompts now supports the valueList parameter for the Image tag, meaning you can use Prompts with a series of images.
Importantly, this also means expanded support for PDFs in Prompts. For more information on using this parameter, see Multi-Page Document Annotation.
You can now configure Prompts to use AI Foundry models. For more information, see Model provider API keys.
You can now drag and drop to adjust the length of video timeline segments.
Fixed an issue with workspace permission checks.