Fine-Tuning Llama 3: Adapting LLMs for Specialized Domains

Learn how to develop high-quality domain-specific Q&A datasets for model fine-tuning.

High-quality data is essential for fine-tuning models to solve domain-specific problems. Because Large Language Models (LLMs) are trained largely on data scraped from the internet, systems that rely on them tend to propagate misinformation or hallucinations, reflecting biases in the underlying datasets.

Micaela Kaplan, ML Evangelist at HumanSignal, will show you how to develop high-quality domain-specific Q&A datasets for model fine-tuning by leveraging LLMs for dataset curation and integrating human input throughout the process.

In this webinar, we’ll show you how to:

  • Generate a large specialized dataset in a cost-effective way using Label Studio
  • Fine-tune Llama 3 with this high-quality dataset (a minimal fine-tuning sketch appears below)
  • Incorporate human input throughout the process while using LLMs to aid with text generation and automation (see the first sketch after this list)
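To give a concrete sense of the curation step, here is a minimal sketch of LLM-assisted Q&A drafting followed by import into Label Studio for human review. The model name, prompt, server URL, API key, and project ID are illustrative assumptions rather than the webinar's exact setup, the OpenAI API stands in for whatever generation model you choose, and the legacy label_studio_sdk Client interface is assumed.

```python
# Sketch: draft Q&A pairs with an LLM, then queue them in Label Studio
# so annotators can accept, edit, or reject each pair before training.
import json

from label_studio_sdk import Client
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_qa(passage: str) -> dict:
    """Ask an LLM to draft one Q&A pair grounded in a source passage."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Write one question a domain expert would ask about the "
                    "passage, plus a faithful answer. Reply as JSON with keys "
                    "'question' and 'answer'."
                ),
            },
            {"role": "user", "content": passage},
        ],
    )
    return json.loads(response.choices[0].message.content)


# Hypothetical Label Studio instance and project ID.
ls = Client(url="http://localhost:8080", api_key="YOUR_API_KEY")
project = ls.get_project(1)

passages = ["Insert a domain document excerpt here."]
tasks = []
for passage in passages:
    qa = draft_qa(passage)
    qa["passage"] = passage  # keep the source text visible to reviewers
    tasks.append({"data": qa})

project.import_tasks(tasks)
```

Keeping the source passage on each task lets reviewers check the generated answer against the original text instead of judging it in isolation.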
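And for the fine-tuning step, a minimal LoRA sketch using the Hugging Face datasets/peft/trl stack. The file path and hyperparameters are assumptions, trl's API details shift between versions, and the Llama 3 weights are gated, so this is a sketch of the general shape rather than the webinar's exact recipe.

```python
# Sketch: LoRA fine-tuning of Llama 3 on a reviewed Q&A dataset.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Expect reviewed records like {"question": ..., "answer": ...} in a JSONL file
# (hypothetical path) exported after human review.
dataset = load_dataset("json", data_files="reviewed_qa.jsonl", split="train")


def to_chat_text(example):
    # Render each pair with the model's own chat template so the fine-tuned
    # model keeps the instruct formatting it was originally trained on.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}


dataset = dataset.map(to_chat_text)

trainer = SFTTrainer(
    model=model_id,
    train_dataset=dataset,
    # LoRA trains a small set of adapter weights instead of all 8B parameters,
    # which is what makes this cost-effective on modest hardware.
    peft_config=LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
    ),
    args=SFTConfig(
        output_dir="llama3-domain-qa",
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
)
trainer.train()
```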

This will be a highly technical, actionable demo, so you won’t want to miss it. We’ll see you there!

Speakers

Micaela Kaplan

Machine Learning Evangelist, HumanSignal

Micaela Kaplan is the Machine Learning Evangelist at HumanSignal. With a background in applied data science and a master’s degree in Computational Linguistics, she loves helping others understand AI tools and practices.