We’ve introduced a new BitMask tag to support pixel-level image annotation with a brush and eraser. The tag enables highly detailed segmentation using brush-based regions, with a cursor that reflects brush size down to single pixels for fine detail. We’ve also improved performance so it can handle more regions with ease.
Additionally, Mac users can now use two fingers to pinch zoom and pan images for all annotation tasks.
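For illustration, a labeling config for the new tag might look like the sketch below. The usage is modeled on the existing BrushLabels pattern; the exact tag name and attributes are assumptions, so check the BitMask tag documentation before using it:

```xml
<View>
  <Image name="img" value="$image" zoom="true"/>
  <!-- Assumed usage, modeled on BrushLabels; verify the exact
       tag name and attributes against the BitMask docs. -->
  <BitMask name="mask" toName="img">
    <Label value="Object" background="green"/>
  </BitMask>
</View>
```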
Email notifications have been added for important project events, including task assignments, project publishing, and data export completion. This helps annotators and project managers stay in sync without unnecessary distractions.
Users can manage email preferences in their user settings.
All Label Studio Starter Cloud and Enterprise SaaS users, including those on a free trial, can ask inline questions of an AI trained on our docs and even use AI to quickly create projects, including configuring labeling UIs, with natural language.
Account owners can enable the AI Assistant under Settings > Usage & Licenses by toggling on “Enable AI” and “Enable Ask AI.” For more information, see the docs.
There is a new option to display audio files as spectrograms. You can further specify additional spectrogram settings such as windowing function, color scheme, dBs, mel bands, and more.
Spectrograms can provide a deeper level of audio analysis by visualizing frequency and amplitude over time, which is crucial for identifying subtle sounds (like voices or instruments) that might be missed with traditional waveform views.
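As a rough sketch, an audio labeling config could opt into the spectrogram display along these lines. The spectrogram attribute shown here is an illustrative assumption rather than confirmed syntax; see the Audio tag documentation for the actual attribute names and the available windowing, color scheme, and mel band settings:

```xml
<View>
  <!-- The spectrogram attribute below is an assumed name for
       illustration; check the Audio tag docs for exact syntax. -->
  <Audio name="audio" value="$audio" spectrogram="true"/>
  <Labels name="label" toName="audio">
    <Label value="Speech"/>
    <Label value="Noise"/>
  </Labels>
</View>
```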
There is a new Multichannel tag for visualizing time series data. You can use this tag to combine and view multiple time series channels simultaneously on a single plot, with synchronized interactions.
The Multichannel tag significantly improves the usability and correlation of time series data, making it easier for users to analyze and pinpoint relationships across different signals.
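For illustration, a config using the new tag might nest existing Channel tags inside it, as sketched below. The nesting and the column names are assumptions; consult the Multichannel tag documentation for the supported structure and attributes:

```xml
<View>
  <TimeSeries name="ts" value="$csv" valueType="url" timeColumn="time">
    <!-- Assumed nesting: Multichannel groups Channel tags
         so they render on a single plot. -->
    <Multichannel>
      <Channel column="accel_x"/>
      <Channel column="accel_y"/>
    </Multichannel>
  </TimeSeries>
  <TimeSeriesLabels name="label" toName="ts">
    <Label value="Spike"/>
  </TimeSeriesLabels>
</View>
```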
When using the View All action, users who are in the Reviewer role or higher can now see a summary of the annotations for a specific task. This summary includes metadata, agreements, and side-by-side comparisons of labels.
You can use this summary for a more efficient and detailed review of annotated tasks and to better understand consensus and discrepancies, especially when needing to compare the work of multiple annotators.
When applying filters, you will see new options that correspond to annotation results.
These options are identified by the results chip and correspond to control tag names and support complex filtering for multiple annotation results. For example, you can filter by “includes all” or “does not include.”
This enhancement provides a much more direct, predictable, and reliable way to filter and analyze annotation results, saving time and reducing the chances of errors previously encountered with regex matching.
For more information, see Filter annotation results.
When deleting annotations, reviews, or assignments, you can now select a specific user for the delete action. Previously, you were only able to delete all instances.
With this change, you will have more granular control over data deletion, allowing for precise management of reviews and annotations.
This enhancement is available for the following actions:
Users can now opt into email notifications when they are invited to a project or workspace. These options are available from the Account & Settings page.
This ensures users are promptly aware of new project and workspace invitations, improving collaboration and onboarding workflows.
There are two UI changes related to storage proxies:
The Billing & Usage page has been renamed the Usage & License page. Previously this page was only visible to users in the Owner role. A read-only form of this page is now available to all users in the Admin role.
Organization owners can use the new Session Timeout Policies fields to control session timeout settings for all users within their organization. These fields are available from the Usage & License page.
Owners can configure both the maximum session age (total duration of a session) and the maximum time between activity (inactivity timeout).
You can now use the sync parameter to align audio and video streams with time-series sensor data by mapping each frame to its corresponding timestamp.
For more information, see Time Series Labeling with Audio and Video Synchronization.
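As a sketch, the sync wiring might look like the config below, where each tag's sync attribute points at the name of the tag it should align with. The column names are placeholders, and the exact attribute wiring for TimeSeries synchronization is an assumption; see the linked documentation for the confirmed setup:

```xml
<View>
  <!-- Assumed wiring: sync attributes reference the other tag's name. -->
  <Video name="video" value="$video" sync="ts"/>
  <TimeSeries name="ts" value="$sensor_csv" valueType="url"
              timeColumn="timestamp" sync="video">
    <Channel column="gyro_z"/>
  </TimeSeries>
  <TimeSeriesLabels name="label" toName="ts">
    <Label value="Event"/>
  </TimeSeriesLabels>
</View>
```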
You can now configure the following aspects of the Grid View in the Data Manager:
Added customizable column selectors for performance tuning and cost management.
This will help you better understand the data behind each request to the LLM for improved troubleshooting and cost management with Prompts.
Added support for the OpenAI GPT-4.1 series models in Prompts. See the complete list of supported base models here.
The cursor now adjusts dynamically to brush size to allow for more precision in segmentation tasks.
COCO and YOLO export formats are now available for KeyPointLabels.
Gain clarity into annotator output and productivity; track performance across individuals and teams.
Click any agreement score to view pairwise agreement scores with others.
You can now restrict Plugin access to Administrators and above. By default, Managers have access to update plugins.
To request that this be restricted to Administrators, contact support.
Contribute to and browse the open source community templates repository.
View dozens of pre-built templates available today in the Templates Gallery.
Label Studio can now be used in dark mode.
Click your avatar in the upper right to find the toggle for dark mode.
Note that this feature is not available for environments using whitelabeling.
There is a new Annotator Evaluation section under Settings > Quality.
When there are ground truth annotations within the project, an annotator will be paused if their ground truth agreement falls below a certain threshold.
For more information, see Annotator Evaluation.
We have added support for the following:
Anthropic: Claude 3.7 Sonnet
Gemini/Vertex: Gemini 2.5 Pro
OpenAI: GPT 4.5
For a full list of supported models, see Supported base models.
A new Pages Processed indicator is available when using Prompts against a project that includes tasks with image albums (using the valueList parameter on an Image tag).
You can also see the number of pages processed for each task by hovering over the image thumbnail.
You can now click and drag to adjust text span regions.
You can now export polygons created using the BrushLabels tag to COCO format.
If you have AI Assistant enabled and ask multiple questions without coming to a resolution, it will offer to create a support ticket on your behalf:
You can now clear your chat history to start a new chat.
Fixed an issue related to unsafe-eval usage.
Fixed an issue where the visibleWhen parameter was not working when used with a taxonomy.
There are a number of new features and changes related to plugins:
When you open Label Studio, you will see a new Home page. Here you can find links to your most recent projects, shortcuts to common actions, and links to frequently used resources.
Note that this feature is not available for environments using whitelabeling.
When adding project storage, you now have the option to choose Google Cloud Storage WIF.
Unlike the standard GCS connection using application credentials, this allows Label Studio to request temporary credentials when connecting to your storage.
For more information, see Google Cloud Storage with Workload Identity Federation (WIF).
We have added support for the following Gemini models:
We have removed support for the following OpenAI models:
Label Studio is transitioning from an IAM user to an IAM role for Amazon S3 access.
This affects users who are using the Amazon S3 (IAM role access) storage type in projects.
Before you can add new projects with the Amazon S3 (IAM role access) storage type, you must update your AWS IAM policy to add the following principal:
arn:aws:iam::490065312183:role/label-studio-app-production
Please also keep the previous Label Studio principal in place to ensure that any project connections you set up prior to April 7, 2025 will continue to have access to AWS:
arn:aws:iam::490065312183:user/rw_bucket
For more information, see Set up an IAM role in AWS.
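Putting the two principals together, the relevant statement in your role's trust policy might look something like the sketch below. This is an illustrative fragment, not a complete policy; adapt it to your existing trust relationships and keep any conditions (such as an ExternalId) you already use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::490065312183:user/rw_bucket",
          "arn:aws:iam::490065312183:role/label-studio-app-production"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```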
Fixed an issue where the prediction-changed value was not being reset after making manual changes to pre-annotations.
There is a new Quality > Annotation Limit section in the project settings.
You can use these fields to set limits on how many tasks each user is able to annotate. Once the limit is reached, their progress will be paused.
For more information, see Annotation Limit.
You can now scroll forward and backward within audio files. This can be activated using a scrolling motion on a trackpad or a mouse.
We have introduced two new settings for audio tasks:
There are three new templates available from the template gallery:
Made security improvements regarding org membership visibility.
Prompts is now available in Label Studio Enterprise SaaS.
Prompts is a tool that leverages LLMs to automatically generate predictions for your labeling tasks, allowing you to pre-annotate data and quickly bootstrap projects. It can also be used to evaluate your LLM prompts against ground truth data.
For more information, see the Prompts documentation and Accelerate Labeling and Model Evaluation with Prompts—Now Generally Available.
There is a new action to pause annotators. This is available from the Members dashboard and via the API.
For more information, see Pause an annotator.
There is a new type of token available for API access. The new tokens use JWT standards and, unlike the current tokens in use, allow you to set an expiration date.
You can enable or disable these tokens from the Organization page. Once enabled, they will be available for users to generate from their Account & Settings page. Legacy tokens can still be used unless disabled from the organization level.
For more information, see Access tokens.
You can now link directly to specific annotations or regions within an annotation. These actions are available from the labeling interface in the overflow menus for the annotation and the region.
There is a new action from the Data Manager that allows you to mark the annotations submitted by a specific user as ground truth annotations.
Users in the Manager, Annotator, and Reviewer roles cannot currently retrieve their API access tokens from the Label Studio UI.
This does not affect the functionality of any existing scripts or automations currently using their access tokens.
There is a new Bulk label action available from the Data Manager. You can use this to quickly label multiple tasks at once.
This feature also includes enhancements to the Grid View in the Data Manager. Now when viewing images, you can zoom in/out, scroll, and pan.
For more information, see the Bulk labeling documentation and Bulk Labeling: How to Classify in Batches.
There is a new toggle on the Billing & Usage page (only available to users in the Owner role). You can use this to enable AI features throughout Label Studio. See AI features.
The Billing & Usage page (only accessible to users in the Owner role) has several new options:
You can now configure Prompts to use an Anthropic model. For more information, see Model provider API keys.
You can now use image data with Prompts. Previously, only text data was supported.
This new feature will allow you to automate and evaluate labeling workflows for image captioning and classification. For more information, see our Prompts documentation.
We’ve trained an AI on our docs to help you create projects faster. If you’d like to give it a try early, send us an email.
There is a new labeling parameter available for the Taxonomy tag. When set to true, you can apply your Taxonomy classes to different regions in text. For more information, see Taxonomy as a labeling tool.
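For illustration, a minimal config using the parameter might look like this; the choice values are placeholders for your own taxonomy:

```xml
<View>
  <Text name="text" value="$text"/>
  <!-- labeling="true" lets taxonomy items be applied to text regions. -->
  <Taxonomy name="topics" toName="text" labeling="true">
    <Choice value="Person"/>
    <Choice value="Organization"/>
  </Taxonomy>
</View>
```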
Paginated multi-image labeling allows you to label an album of images within a single task. When enabled, a page navigation tool is available within the labeling interface.
While you can use paginated multi-image labeling with any series of related images, it is especially useful for document annotation.
For example, you can pre-process a PDF to convert it into image files, and then use the pagination toolbar to navigate the PDF. For more information, see our Multi-Page Document Annotation template.
To enable this feature, use the valueList parameter on the <Image> tag.
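For example, a minimal multi-page config might look like the following, where the task data supplies a list of image URLs under the (placeholder) pages key:

```xml
<View>
  <!-- valueList expects a list of image URLs in the task data,
       e.g. {"pages": ["page1.jpg", "page2.jpg"]} -->
  <Image name="pages" valueList="$pages"/>
  <RectangleLabels name="boxes" toName="pages">
    <Label value="Header"/>
    <Label value="Table"/>
  </RectangleLabels>
</View>
```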
When your Prompt includes classification elements, you can now view a report that tells you how often each class was identified. This is available for the following tags:
There is a new project setting under Annotation > Task Reservation.
You can use this setting to determine how many minutes a task can be reserved by a user. For more information, see Project settings - Task Reservation.
By default, the task reservation time is set to one day (1440 minutes). This setting is only available when task distribution is set to Auto.
Fixed an issue that occurred when $undefined$ is present in the task data.
Previously, the projects that you could use with Prompts were limited to relatively simple labeling configurations (for example, only having one set of Choices or one set of Labels).
With this change, you can now use complex projects with Prompts. For example, a project with multiple Choices, Labels, and a combination of both. It also adds support for TextArea and Pairwise control tags.
Also as part of this change, you will no longer be asked to select a project Type when creating a new Prompt.
For more information, see Create a Prompt.
There is a new Export action available from the annotator performance dashboard.
We recently introduced the ability to perform video frame classification with the <TimelineLabels> tag.
You now have the ability to edit the frame spans you select in the timeline editor, making it easier to control which frames you want to label.
There are a number of project settings that are only applicable when auto distribution is enabled for users.
To prevent confusion, settings that are not applicable will be hidden when manual distribution is enabled.
This means the following settings will be hidden when Annotation > Distribute Labeling Tasks is set to Manual:
TextArea elements have been updated to reflect the look and feel of other labeling elements.
When using the Send Test Request action for a connected ML backend model, you will now see more descriptive error messages.
There is a new Enhance Prompt action available when writing a prompt.
You can use this feature to allow an LLM to enhance the prompt you already have, using your initial prompt and the tasks as context. For more information, see Draft and run prompts.
You will now see an estimated cost before running a prompt. This estimate is based on the number of tokens required:
You can now link comments to specific regions or fields within an annotation.
This change will help improve clarity between Annotators and Reviewers, enhancing the quality review process.
For more information, see Comments and notifications.
There is a new option on the Review page of the project settings: Allow reviewer to choose: Requeue or Remove.
When enabled, reviewers will see two reject options for an annotation: Remove (reject and remove the annotation from the queue) and Requeue (reject and then requeue the annotation).
This is useful for situations in which some annotations are incorrect or incomplete, but are acceptable with some fixes. In those cases, it may be more useful to send the task back to the annotator. In other cases, where the annotation is far from correct, it is better to remove it entirely.
For more information, see Project settings - Review.
You can now add a custom, self-hosted LLM to use with Prompts. This will allow you to use your fine-tuned models with your Prompts workflow.
Custom models must be compatible with OpenAI's JSON mode. For more information, see Model provider API keys.
There is a new Save As option when saving a prompt. This allows you to save named versions of a prompt as you refine it.
You can select between saved versions to compare their metrics. You can also manage older versions by updating them, deleting them, or renaming them.
When you run a prompt, you will now see the following metrics:
For more information, see Draft and run prompts.
A new hotkey (Ctrl + h) has been added. Use this shortcut to hide all regions. Or, if no regions are visible, show all regions.
You can now apply labels to video frames. Previously, we only supported per-video classification.
This new feature allows you to apply labels at a per-frame level. You can implement this feature using a new tag: <TimelineLabels>.
For more information, see New! Video Frame Classification.
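As a minimal sketch, a frame-classification config could pair the new tag with a Video tag like this; the label values are placeholders:

```xml
<View>
  <Video name="video" value="$video"/>
  <TimelineLabels name="timeline" toName="video">
    <Label value="Running"/>
    <Label value="Jumping"/>
  </TimelineLabels>
</View>
```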
You can now select NER projects when creating a Prompt. Previously, Prompts only supported Text Classification projects.
For more information, see Named entity recognition (NER) in our Prompts documentation.
You can now add JavaScript to your Label Studio projects to further customize your labeling experience.
Note that due to security precautions, custom scripts must be enabled for your organization before you will see the Scripts option when configuring the labeling interface. Contact your account representative to request this feature.
For more information, see the following resources:
You can now use Azure OpenAI when creating Prompts. For more information, see Model provider API keys.
The Prompts tool leverages ChatGPT to help you evaluate and refine your LLM prompts. You can also use Prompts to generate predictions for automating your labeling process, and to quickly bootstrap labeling projects.
For more information, see Automate Data Labeling with HumanSignal and our Prompts documentation.
The Label Studio UI has been upgraded with updated colors and fonts, giving it a sleek new look while maintaining the same intuitive navigation you're familiar with. All Label Studio tools, features, and settings are still in the same place, ensuring a smooth transition.
Improved performance on the Projects list page due to improvements at the API level.
Fixed an issue with Google Cloud Storage when the connection has the Use pre-signed URLs option disabled. In these situations, Google was sending pre-signed URLs with the format https://storage.googleapis.com rather than sending BLOBs.
With this fix, Google Cloud Storage will begin returning BLOBs/base64 encoded data when Use pre-signed URLs is off. This means that Label Studio will start reading data from Google Cloud Storage buckets, which can result in large amounts of data being sent to your Label Studio instance, potentially affecting performance.