📽️ Live Workshop - Evaluating Models Using LLM-as-a-Judge: Helpful or Harmful?

Evaluating Models Using LLM-as-a-Judge: Helpful or Harmful?

LLM-as-a-Judge is an increasingly popular way to evaluate the output of AI models. But is it actually worth using? And if so, how?

Register Now

It’s easy to see the appeal of using an LLM to evaluate the output of other LLMs. After all, today's models are increasingly capable and efficient, and the potential time and cost savings are compelling. But using an LLM to evaluate your model may not be the silver bullet you're looking for.

In this live workshop, ML Evangelist Micaela Kaplan will show you:

  • The origin and evolution of LLM-as-a-Judge as an evaluation methodology.
  • The clear advantages and potential pitfalls of evaluating models with LLM-as-a-Judge.
  • Actionable strategies to mitigate the risks inherent in LLM-as-a-Judge.

Speakers

Micaela Kaplan

Machine Learning Evangelist, HumanSignal

Micaela Kaplan is the Machine Learning Evangelist at HumanSignal. With a background in applied data science and a master's degree in Computational Linguistics, she loves helping others understand AI tools and practices.
