
Customer Story

How Sense Street Increased Annotator Efficiency 120% With Label Studio

In Conversation With

Daria Paczesna

Linguist

Karolina Drabik

Linguist


  • 120% increase in annotations per labeler (depending on project)
  • 4x increase in team size made possible by streamlined operations and increased productivity
  • 50% increase in the scope of their data labeling practice

Introduction

Sense Street: Making Sense of Market Conversations

In the intricate world of capital markets, the ability to extract nuanced information from complex, unstructured trader chats is a game changer. Sense Street is developing large generative language models to understand a comprehensive array of financial jargon used across the life cycle of a trade. Their models enable banks to quickly report the details of transactions, discover missed and open trades, and keep track of their clients' interests. However, as the number of annotators grew, the legacy annotation software did not scale with the team. Sense Street needed a solution that could manage workflows, ensure quality control, and maintain consistency amid rapid growth, so they went looking for a new labeling platform. That’s when Label Studio Enterprise came into the picture.

Choosing Label Studio Enterprise has led to transformative results for Sense Street. The platform has enabled a remarkable 150% increase in the number of labels, allowing for a more nuanced and detailed categorization of their data and enhancing the quality and specificity of their data annotation efforts. Furthermore, the streamlined operations and boosted productivity facilitated by the platform have allowed Sense Street to expand its team size by a substantial 400%. The scope of data labeling at Sense Street has also increased by 50%, enabling the company to handle a wider range of data types and complexities. These impressive results underscore the platform's value in optimizing data labeling practices and driving growth at Sense Street.

Scope of Sense Street’s Data Labeling Operations

Sense Street's data labeling practice is extensive and diverse, involving annotators who label data in English, German, Italian, Spanish, and French. The team consists of 60% annotators and 40% linguists, each performing distinct yet interconnected roles. The linguists, apart from their principal task of reviewing the work of annotators, also undertake a diverse range of responsibilities. They formulate the annotation guidelines that serve as the foundation for the work of annotators, and they mentor new team members, providing them with the necessary training to efficiently execute their tasks. Above all, they oversee the standard of annotations across the team, acting as the gatekeepers for data processing and quality control within the company.

In a six-month period, Sense Street annotated around 15,000 complex conversations in five languages. The conversations are divided based on the type of transaction, which allows the team to better manage each transaction type's different requirements and nuances.

The diversity of their data is reflected in the many languages, asset classes, and transaction types found in these conversations. They deal with RFQs (requests for quote), where a party asks for a quote on a specific security; IOIs (indications of interest), where traders signal interest in buying or selling securities; and repo trading conversations, where traders negotiate the sale and future repurchase of securities.

To determine the success of its labeling efforts, Sense Street uses several metrics:

  • Inter-annotator agreement: This is used to verify whether the annotation schema is clear, understandable, and correctly executed by the annotators. It is especially useful at the beginning of a new project and helps identify the most problematic cases (a minimal sketch of this metric follows the list).
  • Annotator-reviewer agreement: This is an indicator of annotation quality delivered by individual annotators and helps to track the annotators' progress over time.
  • Number of annotated conversations per day: The objective is to keep a steady pace, balancing quality annotations with a satisfactory quantity of new data. Tracking the rate of dataset growth for each project plays a substantial role in planning the projects’ timeframes and setting realistic goals.
  • The number of positive examples found in the datasets: This helps to decide whether the datasets need further programmatic filtering to use the team’s resources as effectively as possible.
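
To make the first metric concrete, here is a minimal sketch, not Sense Street's actual tooling: pairwise inter-annotator agreement is commonly computed as Cohen's kappa over the labels two annotators assigned to the same conversations. The label values below are invented for illustration.

```python
# Minimal sketch: pairwise inter-annotator agreement as Cohen's kappa.
# The labels below are invented; in practice they would come from two
# annotators' exported annotations over the same set of conversations.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["RFQ", "IOI", "Repo", "RFQ", "IOI", "RFQ"]
annotator_b = ["RFQ", "IOI", "RFQ", "RFQ", "IOI", "Repo"]

# Kappa corrects raw percent agreement for the agreement expected by
# chance: 1.0 is perfect agreement, 0.0 is chance-level agreement.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Label Studio Enterprise reports agreement natively; the sketch shows the kind of calculation the team previously had to script by hand, as described under "Problems With Scaling" below.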

Complex Industry Challenges

As in many specialized fields, data labeling in the trading sector is a complex task: industry-specific jargon, abbreviations, and terms require a deep understanding of the domain. Misinterpretations can lead to incorrect labels, impacting data quality and machine learning model performance.

The subjectivity in interpretation further compounds the challenge. Different annotators may perceive the same conversation differently, leading to inconsistencies in the data labels. This issue is particularly prevalent in projects where multiple interpretations of a conversation may be valid.

Maintaining a high level of agreement among annotators requires comprehensive guidelines and regular reviews. The language used in trading conversations can vary widely, adding to the complexity.

The context of a conversation, such as current market conditions or specific client-trader relationships, can significantly influence its meaning. Maintaining this contextual understanding at scale is challenging, as is understanding the concepts being labeled, especially when dealing with unfamiliar transactions.

As new transactions emerge, they may require updates to the taxonomy or ontology used for labeling. This process can be complex and time-consuming, requiring a balance between adapting these frameworks and maintaining consistency with previous labeling.

The multilingual nature of trading conversations adds another layer of complexity, requiring annotators to be skilled in text interpretation and fluent in more than one foreign language. These challenges underscore the need for robust, flexible, and intuitive data labeling tools and practices in the trading industry.

Lastly, as Sense Street leverages generative modeling for commercial classification problems, they require schemas that are flexible across different modeling strategies. This challenge is relatively new, without many existing conventions.

Problems With Scaling

Sense Street's existing data labeling software could not scale to meet the company's ambitions. As the team of annotators expanded, they found the previous annotation tool too cumbersome for the increasing scale of their operations. The lack of team-management options, and the need to perform many tasks programmatically with their own scripts and management systems, led to a lot of background coding, making the process inefficient and laborious.

The interface of the previous tool was also less user-friendly and offered limited functionality: it displayed only the text and a single task, with no way to navigate forward or back. It also lacked the option to create hierarchy trees, a feature fundamental to easing complex annotation tasks and maintaining clear data representation. This limitation forced the team to draw arrows and relations manually.

Handling large datasets was another challenge with the legacy annotation software, which struggled to keep pace with new and increasingly large volumes of data. As they scaled their operations, the number of conversations they needed to annotate grew significantly. The tool also did not natively support calculating inter-annotator agreement; the team had to manually export annotations from multiple annotators and write custom scripts to calculate it.

The review process in the previous tool was also problematic. It was difficult to control which parts of the data would be annotated by several annotators and which by just one. There was also no history of changes: the team used to save the original annotation in one dataset and create another dataset for reviews, but they couldn’t view the two side by side.

Another significant issue was the lack of metrics, which forced the team to record annotations in a spreadsheet. This manual process was time-consuming and prone to errors.

Finally, dividing the available data into chunks for annotation was a long and complex process. The team couldn’t target a specific conversation or combine different annotation tasks. Moreover, annotations in the previous tool were anonymous and didn’t show who made them.

These challenges led Sense Street to look for a new solution that could better meet its needs and improve the efficiency of its operations.

The Solution

As a result of their search, Sense Street switched to Label Studio, a move that led to significant improvements in its operations. The Data Science department and the Machine Learning Operations/Annotation team saw a substantial rise in annotator efficiency, increased project capacity, and improved quality in their machine learning models.

  • 150% increase in labels led to more nuanced data classification.
  • 4x increase in team size made possible by streamlined operations and increased productivity.
  • 50% increase in the scope of their data labeling.

Sense Street's decision to choose Label Studio was driven by a multitude of factors that led to significant improvements in its operations:

  • Advanced Features and Flexibility: Label Studio offered a range of features unavailable on other platforms, including the ability to handle overlapping spans, relations, and multi-class classification, as well as the flexibility to combine these tasks.
  • Customization and Management: Label Studio allowed Sense Street to optimize operations. By leveraging customizable interfaces and labels, they enhanced data processing. Task distribution improved with advanced role assignments and the ability to manage multiple projects. Progress tracking and robust quality control mechanisms ensured they consistently delivered high-quality results on time. This level of customization and control was a significant advantage.
  • Progress Reports: Label Studio provided annotation progress reports, giving Sense Street a clear overview of their project status and progress.
  • Team Management: The platform supported multiple team members with different roles, enabling efficient team management and collaboration.
  • Review Assignment: Label Studio made it easy to assign annotations and reviews, streamlining the review process.
  • Annotator Agreement Reports: These reports provided valuable insights into the level of agreement between different annotators, helping to ensure consistency and accuracy in the annotations.
  • Pre-Annotation with Models: Sense Street was able to use its own models to pre-annotate data, saving time and improving efficiency (a minimal sketch of this workflow follows the list).
  • Review Status Filtering: Label Studio's ability to filter conversations by their review status allowed Sense Street to easily track which annotations were pending review, approved, or rejected.
  • Change Tracking: The platform tracked changes made to annotations over time, providing a clear record of modifications.
  • Performance Metrics: Label Studio provided metrics on annotator performance, agreement, and labeling quality, offering valuable insights for performance improvement and quality control.
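
As an illustration of the customization and pre-annotation points above, here is a minimal sketch using the Label Studio Python SDK. The URL, API key, project title, labeling config, label names, and example chat are all hypothetical, not Sense Street's actual setup.

```python
# Minimal sketch: create a project whose config combines span labeling
# and multi-class classification, then import a task that carries a
# model prediction as a pre-annotation. All names/values are hypothetical.
from label_studio_sdk import Client

ls = Client(url="https://app.humansignal.com", api_key="YOUR_API_KEY")

LABEL_CONFIG = """
<View>
  <Labels name="span" toName="chat">
    <Label value="RFQ"/>
    <Label value="IOI"/>
    <Label value="Repo"/>
  </Labels>
  <Text name="chat" value="$text"/>
  <Choices name="asset_class" toName="chat" choice="multiple">
    <Choice value="Bonds"/>
    <Choice value="Repo"/>
  </Choices>
</View>
"""
project = ls.start_project(title="Trader chats (sketch)", label_config=LABEL_CONFIG)

# The prediction is shown to the annotator as an editable starting point
# instead of labeling the conversation from scratch.
project.import_tasks([{
    "data": {"text": "client asks for a quote on the 10y bund"},
    "predictions": [{
        "model_version": "sketch-model-v1",  # hypothetical model tag
        "result": [{
            "from_name": "span",
            "to_name": "chat",
            "type": "labels",
            "value": {"start": 18, "end": 23, "text": "quote", "labels": ["RFQ"]},
        }],
    }],
}])
```

Predictions imported this way appear as pre-annotations in the labeling interface, so annotators correct a model's draft rather than starting from a blank task.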

These features and capabilities in Label Studio Enterprise have significantly improved the efficiency and effectiveness of Sense Street's annotation process, leading to a higher volume of better-quality data for their machine learning models.

Conclusion

Sense Street's journey with Label Studio has been transformative. The company has improved its operations and revolutionized how they extract nuanced insights from unstructured trader chats to shape their product development. The success of Sense Street serves as a testament to the power of machine learning and the potential it holds in transforming industries.

Sense Street benefits from HumanSignal in two ways. First, Label Studio's massive 250,000-user open source community drives fast development cycles and rich user feedback. Second, the enterprise features that allow them to scale, including team management and reporting capabilities like the Annotator Agreement Matrix, enable them to grow their team without creating roadblocks for the data scientists who use the data to train machine learning models.

With Label Studio, Sense Street was able to streamline its operations and increase productivity. The company significantly improved its ability to extract nuanced information from complex, unstructured trader chats. Moving to a highly capable, enterprise-grade labeling platform, with its advanced labeling capabilities and comprehensive quality control mechanisms, has enhanced both the efficiency and the quality of their data processing.

“Label Studio's review process, complete with commenting capabilities, has significantly enhanced the quality of our data annotations. This platform fosters clear and direct communication between annotators and reviewers, promoting a learning-oriented environment and leading to continuous improvement,” said Karolina Drabik, one of the lead linguists at Sense Street.