A Retrieval-Augmented Generation (RAG) chatbot template that answers questions based on your company's documents using LlamaIndex and OpenAI.
🎓 **For Students:** This is a template for your project. The main folder contains your workspace, and the `examples/` folder shows a complete working version for reference.
This project gives you a competitive edge. By building an AI-powered chatbot with industry-standard tools, you'll gain hands-on experience with technologies that Fortune 500 companies and cutting-edge startups use daily: from working with OpenAI's API and deploying to professional platforms like Hugging Face, to managing code with Git and GitHub. These aren't just buzzwords: they're resume-ready skills that distinguish you in any career path, whether you're pursuing roles in business, healthcare, law, marketing, or technology. You'll create a live, public portfolio piece that demonstrates technical problem-solving, modern AI fluency, and the ability to build real-world applications, capabilities that employers across industries increasingly value. While the low-tech option is perfectly valid, this path transforms your class project into a genuine professional asset.
- Repository Structure
- What This Does
- Prerequisites
- Setup Instructions
- Adding Your Data
- Development & Testing Options
- Testing with the Example
- Deploying to Hugging Face
- Embedding in Your Website
- Publishing to GitHub Pages
- Troubleshooting
- Project Integration
- Recommended Workflow
```
pitt-llama-project/
├── README.md            # ← You are here!
├── .env.example         # Template for your API key
├── .gitignore           # Protects sensitive files
├── requirements.txt     # Python dependencies
├── app.py               # YOUR chatbot (work here!)
├── data/                # YOUR documents go here (currently empty)
│   └── README.md
│
└── examples/            # 📚 Reference only
    ├── llama_test.ipynb # Learning notebook for Colab
    ├── index.html       # Full website (HTML, CSS, JS) saved locally w/ embedded chatbot script
    ├── visuals/         # Example images of the UI
    │   ├── embeddedui-demo.png  # Example website w/ embedded chatbot
    │   └── ui-demo.jpeg         # Example UI during local Streamlit testing
    ├── data/            # Example documents
    │   ├── taylor_swift_biography.html
    │   └── constitution.pdf
    └── storage/         # Pre-built index for the example
```
- `app.py` - Your main chatbot code (already complete, no edits needed!)
- `data/` - Put YOUR company documents here
- `examples/` - Look here if you get stuck (don't edit this!)
When you run the app, it will automatically create:
- `storage/` - Cached index of your documents (speeds up loading)
This chatbot uses Retrieval-Augmented Generation (RAG) to answer questions about your documents:
- 📄 Reads your documents from the `data/` folder
- 🔍 Creates a searchable index using AI embeddings
- 💬 Answers questions by finding relevant information and generating responses
- 🧠 Remembers conversation context within each chat session
Example Use Case: A customer support chatbot that answers questions about your company's products, policies, or services.
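To make the retrieval step concrete, here is a toy sketch of the idea. This is not the app's actual code (the real app uses OpenAI embedding models via LlamaIndex); it just shows the principle: both the documents and the question become vectors, and the most similar document is retrieved.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a word-count vector.
    # The real app uses OpenAI's text-embedding models instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question, docs):
    # Return the document most similar to the question
    q = embed(question)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "our refund policy lasts 30 days",
    "standard shipping takes 5 business days",
]
print(retrieve("what is the refund policy", docs))
# → our refund policy lasts 30 days
```

The real pipeline adds a generation step: the retrieved text is handed to the LLM, which writes the final answer.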
Before starting, make sure you have:
- A Google account for Google Colab (Sign up here)
- Google Colab Pro (FREE for students!) (Get it here)
- ✨ Faster execution
- ⏱️ Longer runtime limits
- 💾 More storage
- ⚡ Priority access to GPUs
- 🎓 100% FREE with your .edu email - verification takes ~2 seconds!
- An OpenAI API key
- 🔑 I, Amir, will provide a shared API key for the class
- No payment required! Use the key provided by me
- (Alternative: Use Gemini API within your Google Colab Workspace for free! For more info, see this link)
- A LlamaCloud API key (Optional but Recommended)
- 🆓 Free tier available at cloud.llamaindex.ai
- Enables advanced parsing of PDFs with tables, charts, and complex layouts
- Get 1,000 free pages per month
- Not required but highly recommended for processing complex documents
- A GitHub account (Sign up here)
- A Hugging Face account (Sign up here) - for deployment
- Go to Google Colab
- Sign in with your Google account
- Get Colab Pro for FREE:
- Go to Colab Pro pricing page
- Click "Get Colab Pro" and verify with your .edu email
- Instant approval! No payment required for students 🎉
- Enjoy faster runtimes and priority access
- Go to the repository on GitHub
- Click the "Fork" button in the top right
- This creates your own copy of the project
- In Google Colab, click File → Open notebook
- Select the GitHub tab
- Enter your repository URL or search for your username
- Open `examples/llama_test.ipynb` to start learning!
Option A: Using Colab Secrets (Recommended)
- In your Colab notebook, click the 🔑 key icon in the left sidebar
- Click "Add new secret"
- Add `OPENAI_API_KEY`:
  - Name: `OPENAI_API_KEY`
  - Value: `sk-proj-xxxxxxxxxxxxxxxxxxxxx` (your actual key)
  - Toggle "Notebook access" to ON
- (Optional) Add `LLAMA_CLOUD_API_KEY`:
  - Click "Add new secret" again
  - Name: `LLAMA_CLOUD_API_KEY`
  - Value: `llx-xxxxxxxxxxxxxxxxxxxxx` (your LlamaCloud key)
  - Toggle "Notebook access" to ON
Option B: Using Code (Less Secure)
```python
from google.colab import userdata
import os

# This retrieves your secret keys
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')

# Optional: Enable advanced document parsing
os.environ['LLAMA_CLOUD_API_KEY'] = userdata.get('LLAMA_CLOUD_API_KEY')
```

Supported file formats:

- PDF documents (`.pdf`) - with advanced OCR, table extraction, and chart recognition
- Word documents (`.docx`, `.doc`)
- PowerPoint presentations (`.pptx`, `.ppt`)
- Excel spreadsheets (`.xlsx`, `.xls`)
- HTML files (`.html`)
- Text files (`.txt`)
- Markdown files (`.md`)
- CSV files (`.csv`)
- JSON files (`.json`)
- XML files (`.xml`)
New Feature: The app now uses LlamaParse for advanced document parsing! LlamaParse provides:
- High-quality OCR for scanned documents
- Intelligent table extraction (even from images and charts)
- Multi-column layout handling
- Chart and graph text extraction
- Better handling of complex PDFs with mixed content
If `LLAMA_CLOUD_API_KEY` is not set, the app will fall back to `SimpleDirectoryReader` for all files.
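That fallback boils down to a single check on the environment variable. The function and return values below are illustrative (they are not the app's real names), but the decision logic mirrors what the app does:

```python
import os

def choose_parser():
    # LlamaParse needs a LlamaCloud key; otherwise use the built-in reader.
    # (Illustrative sketch - app.py wires this into its document loading.)
    if os.environ.get("LLAMA_CLOUD_API_KEY"):
        return "llamaparse"           # advanced OCR / table extraction
    return "simple_directory_reader"  # plain text extraction fallback
```

With the key set you get LlamaParse's richer extraction; without it, documents still load, just with simpler parsing.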
Method 1: Upload Directly (Quick Testing)
- In your Colab notebook, run:

  ```python
  from google.colab import files
  uploaded = files.upload()
  ```

- Select your documents to upload
- Files will be in the current directory
Method 2: Mount Google Drive (Recommended)
- Upload your documents to a folder in Google Drive (e.g., `My Drive/chatbot-data/`)
- In your Colab notebook:

  ```python
  from google.colab import drive
  drive.mount('/content/drive')
  ```

- Access files from: `/content/drive/MyDrive/chatbot-data/`
Method 3: Push to GitHub (For Deployment)
- Add your documents to the `data/` folder in your repository
- Commit and push to GitHub
- Pull the repository in Colab or deploy directly to Hugging Face
- ✅ Use clear, well-formatted documents
- ✅ Include only relevant company information
- ✅ Break very large documents into smaller, topic-focused files
- ❌ Don't include sensitive data (passwords, private info)
- ❌ Avoid image-only PDFs (text must be selectable)
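Breaking a large document into topic-sized pieces can be as simple as packing paragraphs up to a size limit. A rough illustrative sketch (the app also chunks text internally via LlamaIndex, so this is a convenience for organizing source files, not a requirement):

```python
def split_into_sections(text, max_chars=1000):
    # Greedily pack paragraphs (separated by blank lines) into sections
    # of at most max_chars characters each
    sections, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            sections.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        sections.append(current)
    return sections
```

Each returned section could then be saved as its own `.txt` file in `data/`, keeping every file focused on one topic.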
You have two options for developing and testing your chatbot. Choose the one that works best for you!
**Pros:** No installation needed, works in a browser, free GPU access. **Cons:** Temporary URLs; sessions expire after inactivity.
Use this if: You prefer browser-based development or don't want to install Python locally
**Pros:** Persistent environment, faster development, works offline. **Cons:** Requires Python installation and setup.
Use this if: You're comfortable with terminal/command line and want full control
The example notebook (`examples/llama_test.ipynb`) teaches you RAG concepts interactively.
1. **Open the example notebook in Colab:**
   - Go to your forked repository
   - Navigate to `examples/llama_test.ipynb`
   - Click the "Open in Colab" badge (or manually open via Colab)

2. **Install dependencies** (first cell - run this first!):

   ```python
   # STEP 1: Install all required packages
   print("📦 Installing dependencies...")
   !pip install -q streamlit==1.50.0
   !pip install -q llama-index==0.14.4
   !pip install -q llama-index-core==0.14.4
   !pip install -q llama-index-llms-openai==0.6.4
   !pip install -q llama-index-embeddings-openai==0.5.1
   !pip install -q openai==1.109.1
   !pip install -q python-dotenv==1.1.1
   !pip install -q jedi==0.19.2
   print("✅ All dependencies installed!")
   ```

   ⏱️ This takes 1-2 minutes. Wait for "✅ All dependencies installed!" before continuing.

3. **Set up your API key** (second cell):

   ```python
   # STEP 2: Configure OpenAI API key
   from google.colab import userdata
   import os

   # Get API key from Colab secrets (you must add this first!)
   os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
   print("✅ API key loaded")
   ```

4. **Load and index documents** (third cell):

   ```python
   # STEP 3: Load documents and create index
   from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
   from llama_index.llms.openai import OpenAI
   from llama_index.embeddings.openai import OpenAIEmbedding

   # Configure models
   llm = OpenAI(model="gpt-5-nano-2025-08-07", temperature=0.1)
   embed_model = OpenAIEmbedding(model="text-embedding-3-small")

   # Load documents from data folder
   documents = SimpleDirectoryReader("data").load_data()
   print(f"📄 Loaded {len(documents)} documents")

   # Create searchable index
   index = VectorStoreIndex.from_documents(
       documents,
       llm=llm,
       embed_model=embed_model
   )
   print("✅ Index created successfully!")
   ```

5. **Query the chatbot** (fourth cell):

   ```python
   # STEP 4: Ask questions!
   query_engine = index.as_query_engine()

   # Try your first question
   response = query_engine.query("Your question here")
   print(response)
   ```

6. **Test with example data first**, then replace with your own documents.
Once you understand how RAG works from the notebook, transition to testing your actual `app.py` Streamlit application.
- 📓 Notebook (`llama_test.ipynb`): Learning tool, shows RAG step-by-step
- 🚀 Streamlit app (`app.py`): Production-ready chatbot with UI, what you'll deploy
1. **Create a new Colab notebook** (or add cells to your existing one):
   - File → New notebook
   - Or continue in your existing notebook

2. **Install dependencies** (same as before):

   ```python
   !pip install -q streamlit==1.50.0 llama-index==0.14.4 llama-index-core==0.14.4 llama-index-llms-openai==0.6.4 llama-index-embeddings-openai==0.5.1 openai==1.109.1 python-dotenv==1.1.1 jedi==0.19.2
   ```

3. **Clone your repository** (if not already in Colab):

   ```python
   # Clone your forked repository
   !git clone https://github.com/YOUR-USERNAME/YOUR-REPO-NAME.git
   %cd YOUR-REPO-NAME
   ```

4. **Set up your API key** as an environment variable:

   ```python
   import os
   from google.colab import userdata

   # Set API key for the app to use
   os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
   ```

5. **Upload your documents** (if not already in the repo):

   ```python
   # Option A: Upload directly to Colab
   from google.colab import files
   uploaded = files.upload()

   # Move uploaded files to the data folder
   !mkdir -p data
   !mv *.pdf data/  # Adjust file extensions as needed

   # Option B: Mount Google Drive
   from google.colab import drive
   drive.mount('/content/drive')
   !cp -r /content/drive/MyDrive/chatbot-data/* data/
   ```

6. **Install localtunnel** to expose Streamlit:

   ```python
   # Install localtunnel for a public URL
   !npm install -g localtunnel
   ```

7. **Run Streamlit in the background:**

   ```python
   # Run the Streamlit app in the background
   !streamlit run app.py &>/content/logs.txt &

   # Wait for Streamlit to start
   import time
   time.sleep(5)

   # Verify it's running
   !curl http://localhost:8501
   ```

8. **Expose with localtunnel** to get a public URL:

   ```python
   # Get a public URL using localtunnel
   !npx localtunnel --port 8501 &

   # Wait a moment for the URL
   import time
   time.sleep(3)

   # The URL will appear in the output above
   # Look for: "your url is: https://xxxxx.loca.lt"
   ```

9. **Access your chatbot:**
   - Click the URL from the localtunnel output (looks like `https://xxxxx.loca.lt`)
   - Click "Click to Continue" on the localtunnel page
   - Your Streamlit chatbot interface will appear! 🎉

10. **Test your chatbot:**
    - Ask questions about your documents
    - Verify responses are accurate
    - Test different types of queries

⚠️ Keep in mind:
- Localtunnel URLs are temporary (they expire when Colab disconnects)
- Not suitable for permanent hosting
- Great for testing and development only
✅ When to Use This:
- Testing your app with real documents before deploying
- Showing your team the chatbot interface during development
- Debugging issues before Hugging Face deployment
🚀 For Production:
- After testing in Colab, deploy to Hugging Face Spaces (permanent hosting)
- Colab is for development and testing
- Hugging Face is for production and embedding
```
Step 1: Learn RAG concepts
└── Use llama_test.ipynb notebook

Step 2: Test with your data
├── Add your documents to data/
└── Run notebook cells to verify indexing works

Step 3: Test the Streamlit UI
├── Run app.py in Colab with localtunnel
└── Verify chatbot interface works correctly

Step 4: Deploy to production
├── Push to GitHub
├── Deploy to Hugging Face Spaces
└── Embed in your website

Step 5: Publish website
├── Enable GitHub Pages
└── Share your live URL!
```
Once you've tested everything in Colab and your chatbot works well:
- ✅ Make sure all your documents are in the `data/` folder
- ✅ Push your code to GitHub
- ✅ Deploy to Hugging Face Spaces (see next section)
- ✅ Embed the permanent Hugging Face URL in your website
If you prefer to develop on your local machine, follow these steps.
- Python 3.9+ installed
- Terminal/Command Prompt access
- Text editor or IDE (VS Code recommended)
1. **Clone your repository:**

   ```bash
   git clone https://github.com/YOUR-USERNAME/YOUR-REPO-NAME.git
   cd YOUR-REPO-NAME
   ```

2. **Create a virtual environment:**

   On macOS/Linux:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

   On Windows:

   ```bash
   python -m venv venv
   venv\Scripts\activate
   ```

3. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

4. **Set up your API keys:**

   ```bash
   # Copy the template
   cp .env.example .env

   # Edit .env and add your keys
   # OPENAI_API_KEY=your-provided-key-here
   # LLAMA_CLOUD_API_KEY=llx-your-key-here  (optional but recommended)
   ```

5. **Add your documents** to the `data/` folder:

   ```bash
   # Place your PDF, HTML, TXT files in data/
   ls data/
   ```

6. **Run the Streamlit app:**

   ```bash
   streamlit run app.py
   ```

   The app will open at `http://localhost:8501` 🎉
- First run: The app will index your documents (takes 10-30 seconds)
- Subsequent runs: Loads from the cached `storage/` folder (much faster)
- To re-index: Delete the `storage/` folder and restart
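The caching behavior comes down to one check: if a populated `storage/` folder exists, the app can load the persisted index instead of re-embedding everything. A minimal sketch of that decision (the helper name is hypothetical; in LlamaIndex the load path goes through `StorageContext` and `load_index_from_storage`):

```python
from pathlib import Path

def index_action(storage_dir="storage"):
    # Decide whether the app can reuse a cached index or must rebuild it
    storage = Path(storage_dir)
    if storage.is_dir() and any(storage.iterdir()):
        return "load"   # fast path: read the persisted index from disk
    return "build"      # slow path: re-embed all documents, then persist
```

This is also why deleting `storage/` forces a re-index: the check falls through to the build path on the next run.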
✅ Advantages:
- Faster iteration (no need to reinstall packages each time)
- Persistent storage (index cache survives between sessions)
- Works offline (once dependencies are installed)
- Better debugging experience
- Keep your virtual environment activated when working
- Never commit your `.env` file to GitHub
- Test thoroughly before deploying to Hugging Face
```bash
# 1. Activate environment
source venv/bin/activate  # or venv\Scripts\activate on Windows

# 2. Make changes to your code or data/

# 3. Test locally
streamlit run app.py

# 4. When satisfied, push to GitHub
git add .
git commit -m "Update chatbot"
git push

# 5. Deploy to Hugging Face (see next section)
```

| Factor | Google Colab | Local Development |
|---|---|---|
| Setup Time | ⚡ Instant | 🐢 10-15 minutes |
| No Installation | ✅ Yes | ❌ Need Python |
| Persistent Environment | ❌ Sessions expire | ✅ Always available |
| Speed | 🐢 Slower | ⚡ Faster |
| Best For | Beginners, quick tests | Serious development |
| Internet Required | ✅ Always | ❌ Only for deployment |
Recommendation: Start with Google Colab to learn, then switch to local development if you want a better experience!
📓 Colab Notebook → 🧪 Test RAG Logic → 🚀 Deploy to Hugging Face → 🌐 Embed in Website
- Colab: Development and testing environment
- Hugging Face: Production hosting for your Streamlit app
- Website: User-facing integration
- Open `examples/llama_test.ipynb` in Google Colab
- Run all cells to see the chatbot in action
- Ask questions like:
- "When did Taylor Swift become a superstar?"
- "What are the amendments in the Constitution?"
If you want to test with the example documents:
- Clone the example data to your Google Drive
- Or download from GitHub and upload to Colab
- Point your code to the example data folder
- ✨ Makes your chatbot publicly accessible
- 🆓 Free hosting for public projects
- 🔗 Easy to share with your team and embed in websites
- 🎨 Professional Streamlit interface
1. **Create a new Space** at huggingface.co/new-space
   - Name: `your-company-chatbot`
   - License: Apache 2.0
   - SDK: **Streamlit** ⚠️ Important!
   - Hardware: CPU Basic (free)

2. **Upload your files** from your GitHub repository:
   - `app.py` ✅
   - `requirements.txt` ✅
   - `data/` folder with YOUR documents ✅
   - `storage/` folder (optional - speeds up first load) ⚠️

3. **Add your API keys as Secrets:**
   - Go to Space Settings → Repository secrets
   - Add secret: `OPENAI_API_KEY=your-key-here` (required)
   - Add secret: `LLAMA_CLOUD_API_KEY=llx-your-key-here` (optional but recommended for better document parsing)

4. **Wait for the build** (2-3 minutes)
   - Check the "Logs" tab for any errors
   - Look for: "✅ Index loaded" or "✅ Index created"
   - Once running, your chatbot is live! 🎉
Your chatbot URL will be: https://huggingface.co/spaces/YOUR-USERNAME/your-company-chatbot
- Upload the `storage/` folder to skip indexing on first load (faster startup)
- Test thoroughly in Colab or locally before deploying
- Use descriptive Space names (e.g., `acme-support-bot`, not `test123`)
- The chatbot uses `gpt-5-nano-2025-08-07` for responses and `text-embedding-3-small` for indexing (configured in `app.py`)
Once deployed to Hugging Face, you can embed your chatbot in your company website HTML page.
See it in action: Check out `visuals/embeddedui-demo.html` for a working example!
Add this code before the closing `</body>` tag of your `index.html`:

```html
<!-- Chatbot Widget Styles -->
<style>
  .chat-widget-container {
    position: fixed;
    bottom: 20px;
    right: 20px;
    z-index: 9999;
    width: 400px;
    height: 600px;
    border-radius: 12px;
    box-shadow: 0 8px 32px rgba(0, 0, 0, 0.2);
    overflow: hidden;
    display: none;
    background: white;
  }

  .chat-widget-container.open {
    display: block;
    animation: slideUp 0.3s ease;
  }

  @keyframes slideUp {
    from {
      opacity: 0;
      transform: translateY(20px);
    }
    to {
      opacity: 1;
      transform: translateY(0);
    }
  }

  .chat-widget-button {
    position: fixed;
    bottom: 20px;
    right: 20px;
    z-index: 10000;
    width: 60px;
    height: 60px;
    border-radius: 50%;
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    border: none;
    color: white;
    font-size: 24px;
    cursor: pointer;
    box-shadow: 0 4px 15px rgba(0, 0, 0, 0.3);
    transition: all 0.3s ease;
  }

  .chat-widget-button:hover {
    transform: scale(1.1);
    box-shadow: 0 6px 20px rgba(0, 0, 0, 0.4);
  }

  @media (max-width: 768px) {
    .chat-widget-container {
      width: calc(100vw - 40px);
      height: calc(100vh - 140px);
      bottom: 10px;
      right: 10px;
    }
  }
</style>

<!-- Chatbot Toggle Button -->
<button class="chat-widget-button" onclick="toggleChat()" aria-label="Open chatbot">💬</button>

<!-- Chatbot Container -->
<div class="chat-widget-container" id="chatWidget">
  <iframe
    src="https://huggingface.co/spaces/YOUR-USERNAME/your-company-chatbot"
    width="100%"
    height="100%"
    frameborder="0"
    title="Company Chatbot">
  </iframe>
</div>

<!-- Toggle Script -->
<script>
  function toggleChat() {
    const widget = document.getElementById('chatWidget');
    const button = document.querySelector('.chat-widget-button');
    if (widget.classList.contains('open')) {
      widget.classList.remove('open');
      button.textContent = '💬';
      button.setAttribute('aria-label', 'Open chatbot');
    } else {
      widget.classList.add('open');
      button.textContent = '✕';
      button.setAttribute('aria-label', 'Close chatbot');
    }
  }
</script>
```

Alternative: a simple full-width iframe embed (no toggle button):

```html
<iframe
  src="https://huggingface.co/spaces/YOUR-USERNAME/your-company-chatbot"
  width="100%"
  height="600px"
  frameborder="0"
  title="Company Chatbot">
</iframe>
```

⚠️ Replace `YOUR-USERNAME/your-company-chatbot` with your actual Space URL!
- Change colors by editing the CSS `background` gradients
- Adjust size with the `width` and `height` properties
- Move position with the `bottom` and `right` values
- Customize the button emoji (💬, 🤖, 💡, etc.)
Once you have your chatbot embedded, publish your complete website live on GitHub Pages!
Make sure your repository has:
- ✅ `index.html` (your main website page with embedded chatbot)
- ✅ `style.css` (your website styles)
- ✅ `app.py` (your chatbot code)
- ✅ `data/` folder (your company documents)
- ✅ `requirements.txt`
- ✅ `README.md`
```bash
# Add all files
git add .

# Commit with a descriptive message
git commit -m "Add company website with AI chatbot"

# Push to your repository
git push origin main
```

- Go to your repository on GitHub
- Click Settings → Pages (in the left sidebar)
- Under "Source", select:
  - Branch: `main`
  - Folder: `/ (root)`
- Click Save
- Wait 1-2 minutes for deployment
Your website will be live at:
https://YOUR-USERNAME.github.io/YOUR-REPO-NAME/
🎉 Your chatbot is now embedded in a live website!
- ✅ Your `index.html` website
- ✅ All CSS, JavaScript, and assets
- ✅ The embedded Hugging Face chatbot iframe
- ❌ Backend files (`app.py`, `data/`) are not served by GitHub Pages
- ℹ️ The chatbot itself runs on Hugging Face, not GitHub Pages
Every time you push to GitHub, your site automatically updates:
```bash
# Make changes to your HTML/CSS
git add index.html style.css
git commit -m "Update website design"
git push origin main

# Site updates in 1-2 minutes!
```

- Test your website locally by opening `index.html` in a browser before pushing
- Make sure your Hugging Face Space URL in the iframe is correct
- Use relative paths for CSS/JS files (e.g., `./style.css`, not `/style.css`)
- Add a custom domain in GitHub Pages settings if you have one!
**API key not working:**
- ✅ Colab: Make sure you added the secret (🔑 icon) and toggled "Notebook access" to ON
- ✅ Local: Check that your `.env` file exists and contains the instructor-provided key
- ✅ Hugging Face: Verify the secret is set in Settings → Repository secrets
- ✅ Make sure the key is exactly as provided by your instructor (no extra spaces)

**Colab session expired:**
- ✅ Colab Pro (FREE for students!) has longer runtimes than the free tier
- ✅ Save your work frequently to GitHub or Google Drive
- ✅ Consider running critical tasks in shorter sessions

**Import or module errors:**
- ✅ Run the install cells at the start of your notebook
- ✅ Use `!pip install` (with the `!`) in Colab, not regular `pip install`
- ✅ Make sure you ran the entire installation cell and waited for it to complete
- ✅ If issues persist, restart the runtime (Runtime → Restart runtime) and run the install cell again

**No documents found:**
- ✅ Make sure you uploaded files to the data folder
- ✅ Check that files are in supported formats (PDF, HTML, TXT, etc.)
- 💡 Try the example: upload files from `examples/data/`

**Inaccurate or empty answers:**
- ✅ Make sure your documents contain the relevant information
- ✅ Try rephrasing your question more specifically
- ✅ Check that the document text is readable (not corrupted or image-only PDFs)
- 💡 Test with the example first to verify it's working

**Slow responses:**
- ⏱️ The first query after starting is always slower (building the index)
- ⚡ Subsequent queries should be faster (using the cached index)
- 🎓 Get Colab Pro for FREE with your .edu email for better performance

**Hugging Face Space fails to build:**
- ✅ Check the "Logs" tab for error messages
- ✅ Verify `OPENAI_API_KEY` is set in Repository secrets
- ✅ Make sure you selected "Streamlit" as the SDK
- ✅ Confirm you uploaded `requirements.txt` and `app.py`

**Embedded chatbot not loading:**
- ✅ Make sure your Hugging Face Space is running (check the Space URL directly)
- ✅ Try a hard refresh: Ctrl+Shift+R (Windows) or Cmd+Shift+R (Mac)
- ✅ Check the browser console for errors (F12 → Console tab)
- ✅ Verify the iframe `src` URL is correct

**GitHub Pages site not appearing:**
- ✅ Make sure GitHub Pages is enabled in Settings → Pages
- ✅ Wait 1-2 minutes after enabling for the initial deployment
- ✅ Check that the branch is set to `main` and the folder is `/ (root)`

**Iframe errors on the live site:**
- ✅ Verify your Hugging Face Space URL is correct in the iframe `src`
- ✅ Check the browser console for CORS or iframe errors
- ✅ Test the Hugging Face Space URL directly in a browser first

**CSS or JS not loading on GitHub Pages:**
- ✅ Use relative paths: `./style.css`, not `/style.css`
- ✅ Check that file names match exactly (case-sensitive on GitHub Pages)
- ✅ Clear the browser cache and hard refresh
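GitHub Pages is case-sensitive even when your local filesystem is not, so a page that works locally can 404 after deployment. A quick local check (illustrative helper, not part of this repo) catches capitalization mismatches before you push:

```python
from pathlib import Path

def case_exact_exists(path):
    # True only if the file exists with exactly this capitalization,
    # even on case-insensitive filesystems (macOS/Windows defaults)
    p = Path(path)
    return p.parent.exists() and p.name in [entry.name for entry in p.parent.iterdir()]

# Example: verify the assets your index.html references
# for asset in ["index.html", "style.css"]:
#     print(asset, case_exact_exists(asset))
```

If this returns False for a path your HTML references, fix the reference (or rename the file) before pushing.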
- Google Colab Documentation
- LlamaIndex Documentation
- Streamlit Documentation
- OpenAI API Documentation
- Hugging Face Spaces Documentation
This chatbot template is designed for your company project:
| Project Step | What to Do | Where |
|---|---|---|
| Steps 1-5 | Plan your company, identify documents needed | Team planning |
| Step 6 | Research and gather company documents | data/ folder |
| Steps 7-9 | Test and refine your chatbot | Google Colab |
| Step 8 | Deploy chatbot to production | Hugging Face Spaces |
| Step 8 | Build website and embed chatbot | HTML/CSS with iframe |
| Step 9 | Push repository and publish website | GitHub → GitHub Pages |
| Step 10 | Present your live website with chatbot | Final demo |
- ✅ Working chatbot with your company's documents
- ✅ Chatbot deployed to Hugging Face Spaces
- ✅ Company website with embedded chatbot
- ✅ Website live on GitHub Pages
- ✅ Complete repository pushed to GitHub
- ✅ Documentation (README, etc.)
- ✅ Google Colab notebook showing the development process
```
1. 📚 Learn RAG Concepts
   ├── Open examples/llama_test.ipynb in Google Colab
   └── Understand how document indexing and retrieval works

2. 🏢 Plan Your Company
   ├── Identify what documents your chatbot needs
   └── Gather company information (products, policies, FAQs)

3. 📄 Prepare Documents
   ├── Collect and organize documents in supported formats
   └── Add to data/ folder

4. 🧪 Choose Development Environment
   ├── Option A: Google Colab (browser-based, beginner-friendly)
   └── Option B: Local development (faster, more control)

5. 🔧 Test Your Chatbot
   ├── Google Colab: Use localtunnel for temporary testing
   ├── Local: Run streamlit run app.py for instant feedback
   └── Verify answers are accurate and relevant

6. 🚀 Deploy to Production
   ├── Push code to GitHub repository
   ├── Deploy to Hugging Face Spaces (permanent hosting)
   └── Get your permanent chatbot URL

7. 🌐 Build Company Website
   ├── Create index.html with company branding
   ├── Embed Hugging Face chatbot using iframe code
   └── Style with CSS

8. 📤 Publish Website
   ├── Push website files to GitHub
   ├── Enable GitHub Pages in repository settings
   └── Get your live website URL

9. ✅ Verify Everything Works
   ├── Test chatbot on live website
   ├── Ask various questions to ensure accuracy
   └── Check responsive design on mobile

10. 🎤 Present Your Project
    ├── Demo your live website with working AI chatbot
    ├── Explain your company and how the bot helps customers
    └── Share both GitHub and live website URLs
```
If you run into issues:
- ✅ Check the Troubleshooting section above
- 🧪 Try running `examples/llama_test.ipynb` to verify your setup
- 📖 Review your code against this README
- 📋 Check the Hugging Face Space logs for error messages
- 💬 Ask your instructor or TA for help
- `examples/llama_test.ipynb` - Jupyter notebook explaining RAG concepts (start here!)
- `examples/README.md` - How the example chatbot works
- `data/README.md` - Tips for adding documents
- `visuals/` - UI demos and screenshots for reference
Good luck building your AI-powered chatbot! 🎉
Remember: Develop in Google Colab, deploy to Hugging Face, embed in your website, publish on GitHub Pages!


