Commit 03f66f3: Updated LocalLab v0.1.4 and Updated Docs

1 parent 8603869 · commit 03f66f3

File tree

5 files changed: +44 −34 lines

CHANGELOG.md

Lines changed: 19 additions & 12 deletions

@@ -2,13 +2,15 @@
 
 All notable changes for version updates.
 
-## [0.1.3] - 2024-02-27
+## [0.1.4] - 2024-02-25
 
 ### Changed
 
-- Updated GitHub Actions workflow to use the --no-cache-dir flag in pip install commands, which prevents disk space issues during dependency installation (e.g., for large packages like torch).
+- Improved logging across startup: the banner, model details, configuration, system resources, API documentation, quick start guide, and footer are now fully logged and printed.
+- Updated the start_server function to extend the health check timeout to 60 seconds in Google Colab (when using ngrok) and to set an environment variable to trigger the Colab branch in run_server_proc.
+- Modified startup_event to load the model in the background, ensuring that the server's /health endpoint becomes available in time and that logging output is complete.
 
-## [0.1.2] - 2024-02-25
+## [0.1.3] - 2024-02-25
 
 ### Changed
 
@@ -22,23 +24,28 @@ All notable changes for version updates.
 - Removed duplicate architecture diagrams from the root `README.md` to streamline documentation.
 - Minor improvements to logging and error handling.
 
+## [0.1.2] - 2024-02-25
+
+### Changed
+
+- Updated GitHub Actions workflow to install the Locallab package along with its runtime dependencies in CI.
+
+### Fixed
+
+- Fixed RuntimeError related to SemLock sharing in multiprocessing by clearing logger handlers in `run_server_proc`.
+- Updated Mermaid diagrams to wrap node labels in double quotes, improving compatibility with GitHub rendering.
+- Improved build status badge aesthetics in the README.
+
 ## [0.1.1] - 2024-02-25
 
 ### Fixed
 
 - Fixed RuntimeError related to SemLock sharing in multiprocessing by clearing logger handlers in `run_server_proc`.
-- Updated Mermaid diagrams in `README.md` and `docs/colab/README.md` to wrap node labels in double quotes, improving compatibility with GitHub rendering.
+- Updated Mermaid diagrams to wrap node labels in double quotes, improving compatibility with GitHub rendering.
 - Improved build status badge aesthetics in the README.
 
 ## [0.1.0] - 2024-02-24
 
 ### Added
 
-- Upgraded package version from 0.0 to 0.1.0.
-- Initial release as a Python package with full Google Colab integration.
-- Basic API functionality including dynamic model loading and inference endpoints.
-- Enhanced logging with ASCII art banners, detailed model and system resource information, and robust error handling.
-- Ngrok tunnel management for public URL access in Colab.
-- Comprehensive documentation with absolute URLs for smooth navigation.
-- GitHub Actions workflow for automated PyPI publishing.
-- MIT License introduced.
+- Initial release as a Python package with full Google Colab integration, dynamic model loading, robust logging (with ASCII art banners), API endpoints for text generation and system monitoring, Ngrok tunnel management, and comprehensive documentation.
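
In practice, the 0.1.4 changes boil down to a single call from a notebook cell. A minimal usage sketch, assuming the package is installed from PyPI and importing `start_server` from `locallab.main` as defined in the main.py diff below (the import path is an assumption; the package may also re-export it at the top level):

```python
# Minimal Colab usage sketch (illustrative, not part of this commit).
# With use_ngrok=True, start_server sets COLAB_GPU=1 and waits up to
# 60 seconds for the /health endpoint, per the main.py changes below.
from locallab.main import start_server

start_server(use_ngrok=True)
```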

README.md

Lines changed: 7 additions & 8 deletions

@@ -1,7 +1,6 @@
 # 🚀 LocalLab
 
 [![Build Status](https://img.shields.io/github/actions/workflow/status/Developer-Utkarsh/LocalLab/ci.yml?style=flat-square)](https://github.com/Developer-Utkarsh/LocalLab/actions)
-[![Coverage Status](https://coveralls.io/repos/github/Developer-Utkarsh/LocalLab/badge.svg?branch=main&style=flat-square)](https://coveralls.io/github/Developer-Utkarsh/LocalLab?branch=main)
 [![LocalLab Version](https://img.shields.io/pypi/v/locallab.svg?style=flat-square)](https://pypi.org/project/locallab/)
 [![Python Version](https://img.shields.io/pypi/pyversions/locallab.svg?style=flat-square)](https://pypi.org/project/locallab/)
 [![License](https://img.shields.io/github/license/Developer-Utkarsh/LocalLab.svg?style=flat-square)](https://github.com/Developer-Utkarsh/LocalLab/blob/main/LICENSE)
@@ -50,13 +49,13 @@ LocalLab is a powerful, lightweight AI inference server designed to deliver cutt
 Below is a high-level diagram of LocalLab's architecture.
 
 ```mermaid
-graph TD;
-    A["User"] --> B["LocalLab Client (Python/Node.js)"];
-    B --> C["LocalLab Server"];
-    C --> D["Model Manager"];
-    D --> E["Hugging Face Models"];
-    C --> F["Optimizations"];
-    C --> G["Resource Monitoring"];
+graph TD
+    A["User"] --> B["LocalLab Client (Python/Node.js)"]
+    B --> C["LocalLab Server"]
+    C --> D["Model Manager"]
+    D --> E["Hugging Face Models"]
+    C --> F["Optimizations"]
+    C --> G["Resource Monitoring"]
 ```
 
 ## Google Colab Workflow

locallab/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 LocalLab - A lightweight AI inference server
 """
 
-__version__ = "0.1.3"
+__version__ = "0.1.4"
 
 from typing import Dict, Any, Optional

locallab/main.py

Lines changed: 16 additions & 12 deletions

@@ -376,7 +376,6 @@ async def startup_event():
 ╚══════╝ ╚═════╝ ╚═════╝╚═╝ ╚═╝╚══════╝╚══════╝╚═╝ ╚═╝╚═════╝
 {Style.RESET_ALL}"""
 
-    # In Google Colab, explicitly print the banner to ensure visibility
     print(banner)
     sys.stdout.flush()
     logger.info(banner)
@@ -430,10 +429,10 @@ async def startup_event():
     logger.info(model_config)
     sys.stdout.flush()
 
-    # Load model with progress indicator
+    # Load model with progress indicator (start in background to not block startup)
     logger.info(f"\n{Fore.YELLOW}⚡ Loading model: {hf_model}{Style.RESET_ALL}")
-    await model_manager.load_model(hf_model)
-    logger.info(f"{Fore.GREEN}Model loaded successfully!{Style.RESET_ALL}\n")
+    asyncio.create_task(model_manager.load_model(hf_model))
+    logger.info(f"{Fore.GREEN}Model loading started in background!{Style.RESET_ALL}\n")
     sys.stdout.flush()
 
     # System Resources with box drawing
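
The hunk above swaps a blocking `await` for a background task, so the startup hook returns quickly and `/health` can answer while the model is still loading. Below is a standalone sketch of that pattern, not the actual `main.py` code; `fake_load_model` and the `model_ready` flag are stand-ins for `model_manager.load_model(hf_model)` and whatever state the real model manager tracks:

```python
import asyncio
from fastapi import FastAPI

app = FastAPI()
model_ready = False  # illustrative readiness flag, not from the commit


async def fake_load_model() -> None:
    """Stand-in for model_manager.load_model(hf_model)."""
    global model_ready
    await asyncio.sleep(5)  # pretend the weights take a while to load
    model_ready = True


@app.on_event("startup")
async def startup_event() -> None:
    # Schedule loading in the background instead of awaiting it, so startup
    # completes and /health becomes reachable right away.
    asyncio.create_task(fake_load_model())


@app.get("/health")
async def health() -> dict:
    return {"status": "ok", "model_loaded": model_ready}
```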
@@ -509,7 +508,7 @@ async def startup_event():
 ╔════════════════════════════════════════════════════════════════════╗
 ║ ║
 {Fore.GREEN}LocalLab - Your Local AI Inference Server{Fore.CYAN}
-{Fore.GREEN}Made with ❤️ by Utkarsh Tiwari{Fore.CYAN}
+{Fore.GREEN}Made with ❤️ by Utkarsh{Fore.CYAN}
 ║ ║
 ╚══════════════════════════════════════════════════════════════════╝{Style.RESET_ALL}
@@ -714,8 +713,7 @@ def run_server_proc(log_queue):
         logger.error(f"Server startup failed: {str(e)}")
         raise
 
-# Modify start_server to accept a log_queue parameter and pass it to the child process
-
+# Modify start_server function
 def start_server(use_ngrok: bool = False, log_queue=None):
     import time
     import requests
@@ -724,14 +722,20 @@ def start_server(use_ngrok: bool = False, log_queue=None):
     if log_queue is None:
         ctx = multiprocessing.get_context("spawn")
         log_queue = ctx.Queue()
-
+
+    # If using ngrok, set environment variable to trigger colab branch in run_server_proc
+    if use_ngrok:
+        os.environ["COLAB_GPU"] = "1"
+        timeout = 60
+    else:
+        timeout = 30
+
     # Start the server in a separate process using spawn context with module-level run_server_proc
     ctx = multiprocessing.get_context("spawn")
     p = ctx.Process(target=run_server_proc, args=(log_queue,))
     p.start()
-
+
     # Wait until the /health endpoint returns 200 or timeout
-    timeout = 30
     start_time_loop = time.time()
    health_url = "http://127.0.0.1:8000/health"
     server_ready = False
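
The added block above sets `COLAB_GPU=1` before the child process is spawned; `run_server_proc` (whose body is not part of this diff) is described as branching on that variable. A hedged sketch of what such a check could look like, with a hypothetical helper name:

```python
import os


def running_in_colab() -> bool:
    # start_server sets COLAB_GPU=1 when use_ngrok=True so the spawned
    # process can take its Colab-specific path. Hypothetical helper; the
    # real run_server_proc implementation is not shown in this commit.
    return os.environ.get("COLAB_GPU", "0") == "1"
```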
@@ -744,10 +748,10 @@ def start_server(use_ngrok: bool = False, log_queue=None):
         except Exception:
             pass
         time.sleep(1)
-
+
     if not server_ready:
         raise Exception("Server did not become healthy in time.")
-
+
     if use_ngrok:
         public_url = setup_ngrok(port=8000)
         ngrok_section = f"\n{Fore.CYAN}┌────────────────────────── Ngrok Tunnel Details ─────────────────────────────┐{Style.RESET_ALL}\n\n│ 🚀 Ngrok Public URL: {Fore.GREEN}{public_url}{Style.RESET_ALL}\n\n{Fore.CYAN}└──────────────────────────────────────────────────────────────────────────────┘{Style.RESET_ALL}\n"
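
Taken together, the last two hunks implement a simple readiness poll: pick a timeout (60 seconds for Colab/ngrok, 30 otherwise), then hit `/health` once a second until it returns 200 or the deadline passes. A condensed sketch of that logic, not a verbatim copy of `start_server`:

```python
import time

import requests


def wait_for_health(health_url: str = "http://127.0.0.1:8000/health",
                    timeout: int = 30) -> None:
    """Poll the health endpoint until it responds or the timeout expires."""
    start = time.time()
    while time.time() - start < timeout:
        try:
            if requests.get(health_url, timeout=2).status_code == 200:
                return  # server is ready
        except Exception:
            pass  # server not up yet; keep polling
        time.sleep(1)
    raise Exception("Server did not become healthy in time.")
```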

setup.py

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 
 setup(
     name="locallab",
-    version="0.1.3",
+    version="0.1.4",
     packages=find_packages(include=["locallab", "locallab.*"]),
     install_requires=[
         "fastapi>=0.68.0,<1.0.0",
