
Generative AI Benchmarks: Evaluating Large Language Models
About this listen
There are many variables to consider when defining a Generative AI strategy. A clear understanding of the use case or business problem is crucial, but a good grasp of benchmarks and metrics also helps business leaders connect with this new field and judge its potential.
So whether you intend to:
- select a pretrained foundation LLM (like OpenAI's GPT-4) to connect to your project via API,
- select a base open-source LLM (like Meta's Llama 2) to train and customize,
- or evaluate the performance of your own LLM,
the available benchmarks are a crucial tool. In this video we explore a few examples; the short sketch below shows what such an evaluation can look like in code.
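As an illustration only, here is a minimal Python sketch of a benchmark-style evaluation loop. The multiple-choice items and the ask_model function are hypothetical placeholders, not drawn from any real benchmark or vendor API; in practice you would swap in a call to your chosen model, whether a hosted service such as GPT-4 or a self-hosted Llama 2.

```python
# Minimal sketch of a benchmark-style evaluation loop (illustrative only).

# A few MMLU-style multiple-choice items; these are made up, not real benchmark data.
QUESTIONS = [
    {
        "question": "Which planet is known as the Red Planet?",
        "choices": {"A": "Venus", "B": "Mars", "C": "Jupiter", "D": "Mercury"},
        "answer": "B",
    },
    {
        "question": "What is 12 * 12?",
        "choices": {"A": "144", "B": "124", "C": "112", "D": "154"},
        "answer": "A",
    },
]


def ask_model(prompt: str) -> str:
    """Hypothetical stub: return the model's single-letter answer for a prompt.

    Replace this with a real call to your chosen LLM (hosted API or local model).
    Here it always guesses "A" so the script runs on its own.
    """
    return "A"


def evaluate(questions) -> float:
    """Score the model by exact match on the answer letter and return accuracy."""
    correct = 0
    for item in questions:
        options = "\n".join(f"{key}. {text}" for key, text in item["choices"].items())
        prompt = f"{item['question']}\n{options}\nAnswer with a single letter."
        reply = ask_model(prompt).strip().upper()
        if reply.startswith(item["answer"]):
            correct += 1
    return correct / len(questions)


if __name__ == "__main__":
    print(f"Accuracy: {evaluate(QUESTIONS):.0%}")
```

Published benchmarks such as MMLU work on the same principle at much larger scale: a fixed question set, a fixed scoring rule, and an accuracy figure that lets different models be compared.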