# LlamaIndex Query Engine Example

Complete example showing how to build a query engine using Skill Seekers nodes with LlamaIndex. This example:

- Loads Skill Seekers-generated LlamaIndex nodes
- Creates a persistent VectorStoreIndex
- Demonstrates query engine capabilities
- Provides an interactive chat mode with memory
## Prerequisites

```bash
# Install dependencies
pip install llama-index llama-index-llms-openai llama-index-embeddings-openai

# Set API key
export OPENAI_API_KEY=sk-...
```

## Generate Nodes

First, generate LlamaIndex nodes using Skill Seekers:
```bash
# Option 1: Use a preset config (e.g., Django)
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target llama-index

# Option 2: From a GitHub repo
skill-seekers github --repo django/django --name django
skill-seekers package output/django --target llama-index

# Output: output/django-llama-index.json
```

## Run the Example

```bash
cd examples/llama-index-query-engine

# Run the quickstart script
python quickstart.py
```

The script walks through:

- Nodes loaded from a JSON file
- Index created with embeddings
- Example queries demonstrating the query engine
- Interactive chat mode with conversational memory
Expected output:

```
============================================================
LLAMAINDEX QUERY ENGINE QUICKSTART
============================================================

Step 1: Loading nodes...
✅ Loaded 180 nodes
Categories: {'overview': 1, 'models': 45, 'views': 38, ...}

Step 2: Creating index...
✅ Index created and persisted to: ./storage
Nodes indexed: 180

Step 3: Running example queries...

============================================================
EXAMPLE QUERIES
============================================================

QUERY: What is this documentation about?
------------------------------------------------------------
ANSWER:
This documentation covers Django, a high-level Python web framework
that encourages rapid development and clean, pragmatic design...

SOURCES:
1. overview (SKILL.md) - Score: 0.85
2. models (models.md) - Score: 0.78

============================================================
INTERACTIVE CHAT MODE
============================================================
Ask questions about the documentation (type 'quit' to exit)

You: How do I create a model?
```
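Step 1 (loading nodes) amounts to parsing the exported JSON and tallying categories. Below is a minimal sketch of just that parsing step, assuming the export is a JSON list of node dicts whose `metadata` holds a `category` key — the actual schema produced by `skill-seekers package` may differ, and `load_node_dicts` is an illustrative helper, not part of quickstart.py:

```python
import json
from collections import Counter
from pathlib import Path

def load_node_dicts(path):
    """Load exported node dicts and count nodes per category."""
    raw = json.loads(Path(path).read_text())
    counts = Counter(
        n.get("metadata", {}).get("category", "unknown") for n in raw
    )
    return raw, counts

# Tiny inline export written to disk for demonstration
sample = [
    {"text": "Django overview", "metadata": {"category": "overview"}},
    {"text": "Model fields", "metadata": {"category": "models"}},
]
Path("sample-nodes.json").write_text(json.dumps(sample))

nodes, counts = load_node_dicts("sample-nodes.json")
print(f"Loaded {len(nodes)} nodes")  # Loaded 2 nodes
print(dict(counts))                  # {'overview': 1, 'models': 1}
```

Each dict can then be wrapped in a LlamaIndex `TextNode` and handed to `VectorStoreIndex` for embedding.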
## What This Demonstrates

- **Query Engine** - Semantic search over documentation
- **Chat Engine** - Conversational interface with memory
- **Source Attribution** - Shows which nodes contributed to answers
- **Persistence** - Index saved to disk for reuse
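The chat engine's conversational memory is, at its core, a rolling buffer of past turns trimmed to a token budget (in LlamaIndex, `ChatMemoryBuffer` fills this role). A pure-Python sketch of the trimming idea — `trim_history` and its crude word-count budget are illustrative stand-ins, not the library API:

```python
def trim_history(history, max_tokens=50):
    """Keep the most recent (role, text) turns that fit within a
    crude word-count token budget, preserving chronological order."""
    kept, used = [], 0
    for role, text in reversed(history):  # walk newest-first
        cost = len(text.split())          # crude token proxy
        if used + cost > max_tokens:
            break                         # older turns get dropped
        kept.append((role, text))
        used += cost
    return list(reversed(kept))           # restore chronological order

history = [
    ("user", "What is Django?"),
    ("assistant", "Django is a high-level Python web framework."),
    ("user", "How do I create a model?"),
]
print(trim_history(history, max_tokens=12))
# → [('user', 'How do I create a model?')]
```

LlamaIndex applies the same idea with a real tokenizer via `ChatMemoryBuffer` and its `token_limit` setting.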
## Files

- `quickstart.py` - Complete working example
- `README.md` - This file
- `requirements.txt` - Python dependencies
## Next Steps

- **Customize** - Modify for your specific documentation
- **Experiment** - Try different index types (Tree, Keyword)
- **Extend** - Add filters, custom retrievers, hybrid search
- **Deploy** - Build a production query engine
## Troubleshooting

**"Documents not found"**

- Make sure you've generated nodes first
- Check that `DOCS_PATH` in `quickstart.py` matches your output location

**"OpenAI API key not found"**

- Set the environment variable: `export OPENAI_API_KEY=sk-...`

**"Module not found"**

- Install dependencies: `pip install -r requirements.txt`
## Advanced Usage

### Load an Existing Index

```python
from llama_index.core import load_index_from_storage, StorageContext

# Rebuild the index from the persisted storage directory
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```

### Filter by Metadata

```python
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter

# Restrict retrieval to nodes whose category metadata is "models"
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="category", value="models")]
)
query_engine = index.as_query_engine(filters=filters)
```

### Stream Responses

```python
# Stream tokens as the LLM generates them
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("Explain Django models")
for text in response.response_gen:
    print(text, end="", flush=True)
```

Need help? Ask in GitHub Discussions.