fachiny17/llm-benchmarking
Benchmarking multiple LLMs in Streamlit

Learn how to compare large language models side-by-side using Streamlit! In this session, you’ll build an interactive dashboard that sends prompts to multiple LLMs, measures their performance, and visualizes the results. A practical way to evaluate speed, quality, and cost so you can choose the right model for your projects.
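The core measurement the dashboard makes can be sketched in plain Python: time each model call and derive a rough throughput figure. The `call_model` interface and the stubbed model below are hypothetical stand-ins for whatever LLM client the app actually uses; only the timing logic is the point here.

```python
import time

def benchmark(call_model, prompt):
    """Time one model call and compute simple speed metrics.

    call_model: any function taking a prompt string and returning
    the model's text response (a hypothetical interface).
    """
    start = time.perf_counter()
    response = call_model(prompt)
    latency = time.perf_counter() - start
    # Rough throughput proxy: whitespace-delimited tokens per second.
    tokens = len(response.split())
    return {
        "latency_s": latency,
        "tokens": tokens,
        "tokens_per_s": tokens / latency if latency > 0 else 0.0,
    }

# Stand-in for a real LLM client, used only for illustration.
def fake_model(prompt):
    time.sleep(0.01)
    return "This is a stubbed model response to: " + prompt

print(benchmark(fake_model, "Summarize Streamlit in one sentence."))
```

Running this for each model on the same prompt list gives the per-model numbers the dashboard can then chart side by side.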

Features

  • ✅ Understand the differences between popular models
  • ✅ Learn what benchmarking is and why it matters
  • ✅ Measure commonly used metrics: speed, quality, and cost
  • ✅ Build a clean interface with Streamlit
  • ✅ Deploy the app!

Run Streamlit

  1. Install Streamlit: pip install streamlit

  2. Run in your terminal: streamlit run app.py
