diff --git a/docs/developer/index.md b/docs/developer/index.md
index 579c8a01c..1accaa480 100644
--- a/docs/developer/index.md
+++ b/docs/developer/index.md
@@ -10,30 +10,26 @@ GuideLLM is an open-source project that values community contributions. We maint
## Developer Resources
-
-
-- :material-handshake:{ .lg .middle } Code of Conduct
+- 📝 Code of Conduct
______________________________________________________________________
Our community guidelines ensure that participation in the GuideLLM project is a positive, inclusive, and respectful experience for everyone.
- [:octicons-arrow-right-24: Code of Conduct](code-of-conduct.md)
+ [Code of Conduct](code-of-conduct.md)
-- :material-source-pull:{ .lg .middle } Contributing Guide
+- 🔖 Contributing Guide
______________________________________________________________________
Learn how to effectively contribute to GuideLLM, including reporting bugs, suggesting features, improving documentation, and submitting code.
- [:octicons-arrow-right-24: Contributing Guide](contributing.md)
+ [Contributing Guide](contributing.md)
-- :material-tools:{ .lg .middle } Development Guide
+- 📚 Development Guide
______________________________________________________________________
Detailed instructions for setting up your development environment, implementing changes, and adhering to the project's coding standards and best practices.
- [:octicons-arrow-right-24: Development Guide](developing.md)
-
-
+ [Development Guide](developing.md)
diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md
index a8ddf3d54..862c9a943 100644
--- a/docs/getting-started/index.md
+++ b/docs/getting-started/index.md
@@ -12,38 +12,34 @@ GuideLLM makes it simple to evaluate and optimize your large language model depl
Follow the guides below in sequence to get the most out of GuideLLM and optimize your LLM deployments for production use.
-
-
-- :material-package-variant:{ .lg .middle } Installation
+- 🛠️ Installation
______________________________________________________________________
Learn how to install GuideLLM using pip, from source, or with specific version requirements.
- [:octicons-arrow-right-24: Installation Guide](install.md)
+ [Installation Guide](install.md)
-- :material-server:{ .lg .middle } Start a Server
+- 🚀 Start a Server
______________________________________________________________________
Set up an OpenAI-compatible server using vLLM or other supported backends to benchmark your LLM deployments.
- [:octicons-arrow-right-24: Server Setup Guide](server.md)
+ [Server Setup Guide](server.md)
-- :material-speedometer:{ .lg .middle } Run Benchmarks
+- ⏳ Run Benchmarks
______________________________________________________________________
Learn how to configure and run performance benchmarks against your LLM server under various load conditions.
- [:octicons-arrow-right-24: Benchmarking Guide](benchmark.md)
+ [Benchmarking Guide](benchmark.md)
-- :material-chart-bar:{ .lg .middle } Analyze Results
+- 📊 Analyze Results
______________________________________________________________________
Interpret benchmark results to understand throughput, latency, and reliability, and to optimize your deployments.
- [:octicons-arrow-right-24: Analysis Guide](analyze.md)
-
-
+ [Analysis Guide](analyze.md)
diff --git a/docs/guides/index.md b/docs/guides/index.md
index a362dad7a..72782fa6a 100644
--- a/docs/guides/index.md
+++ b/docs/guides/index.md
@@ -10,54 +10,50 @@ Whether you're interested in understanding the system architecture, exploring su
## Key Guides
-
-
-- :material-layers-outline:{ .lg .middle } Architecture
+- 🏗️ Architecture
______________________________________________________________________
Understand the modular design of GuideLLM and how its core components interact to evaluate LLM deployments.
- [:octicons-arrow-right-24: Architecture Overview](architecture.md)
+ [Architecture Overview](architecture.md)
-- :material-server:{ .lg .middle } Backends
+- 🖥️ Backends
______________________________________________________________________
Learn about supported LLM backends and how to set up OpenAI-compatible servers for benchmarking.
- [:octicons-arrow-right-24: Backend Guide](backends.md)
+ [Backend Guide](backends.md)
-- :material-database:{ .lg .middle } Datasets
+- 🗄️ Datasets
______________________________________________________________________
Configure and use different data sources for benchmarking, including synthetic data, Hugging Face datasets, and file-based options.
- [:octicons-arrow-right-24: Dataset Guide](datasets.md)
+ [Dataset Guide](datasets.md)
-- :material-chart-bar:{ .lg .middle } Metrics
+- 📊 Metrics
______________________________________________________________________
Explore the comprehensive metrics provided by GuideLLM to evaluate performance, including latency, throughput, and token-level analysis.
- [:octicons-arrow-right-24: Metrics Guide](metrics.md)
+ [Metrics Guide](metrics.md)
-- :material-file-export:{ .lg .middle } Outputs
+- 📄 Outputs
______________________________________________________________________
Learn about supported output formats and how to customize result reporting for your benchmarks.
- [:octicons-arrow-right-24: Output Guide](outputs.md)
+ [Output Guide](outputs.md)
-- :material-target:{ .lg .middle } Service Level Objectives
+- 🎯 Service Level Objectives
______________________________________________________________________
Define and implement SLOs and SLAs for your LLM deployments to ensure reliability and performance.
- [:octicons-arrow-right-24: SLO Guide](service_level_objectives.md)
-
-
+ [SLO Guide](service_level_objectives.md)
diff --git a/docs/index.md b/docs/index.md
index 948f814f8..859f7c925 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,10 +1,7 @@
# Home
-
-
-
-
+
@@ -23,38 +20,10 @@ SLO-Aware Benchmarking and Evaluation Platform for Optimizing Real-World LLM Inf
## Key Sections
-
+- 🚀 Getting Started: Install GuideLLM, set up your first benchmark, and analyze the results to optimize your LLM deployment. [Getting started](./getting-started/)
-- :material-rocket-launch:{ .lg .middle } Getting Started
+- 📚 Guides: Detailed guides covering backends, datasets, metrics, and service level objectives for effective LLM benchmarking. [Guides](./guides/)
- ______________________________________________________________________
+- 💻 Examples: Step-by-step examples demonstrating real-world benchmarking scenarios and optimization techniques. [Examples](./examples/)
- Install GuideLLM, set up your first benchmark, and analyze the results to optimize your LLM deployment.
-
- [:octicons-arrow-right-24: Getting started](./getting-started/)
-
-- :material-book-open-variant:{ .lg .middle } Guides
-
- ______________________________________________________________________
-
- Detailed guides covering backends, datasets, metrics, and service level objectives for effective LLM benchmarking.
-
- [:octicons-arrow-right-24: Guides](./guides/)
-
-- :material-code-tags:{ .lg .middle } Examples
-
- ______________________________________________________________________
-
- Step-by-step examples demonstrating real-world benchmarking scenarios and optimization techniques.
-
- [:octicons-arrow-right-24: Examples](./examples/)
-
-- :material-api:{ .lg .middle } API Reference
-
- ______________________________________________________________________
-
- Complete reference documentation for the GuideLLM API to integrate benchmarking into your workflow.
-
- [:octicons-arrow-right-24: API Reference](./api/)
-
-
+- 🔌 API Reference: Complete reference documentation for the GuideLLM API to integrate benchmarking into your workflow. [API Reference](./api/)