diff --git a/docs.json b/docs.json
index 366190e..b84098a 100644
--- a/docs.json
+++ b/docs.json
@@ -282,6 +282,63 @@
    {
      "tab": "Blog",
      "href": "https://www.shuttle.dev/blog/tags/all"
    },
    {
      "tab": "Cobra",
      "groups": [
        {
          "group": "Cobra",
          "pages": [
            "python/welcome/introduction",
            "python/getting-started/quickstart",
            "python/getting-started/cli-installation"
          ]
        },
        {
          "group": "Tutorials",
          "pages": [
            "python/tutorials/your-first-app"
          ]
        },
        {
          "group": "How-to Guides",
          "pages": [
            {
              "group": "Storage",
              "icon": "database",
              "pages": [
                "python/how-to-guides/add-s3-bucket",
                "python/how-to-guides/add-database"
              ]
            }
          ]
        },
        {
          "group": "Reference",
          "pages": [
            {
              "group": "CLI",
              "icon": "terminal",
              "pages": [
                "python/reference/cli-reference"
              ]
            }
          ]
        },
        {
          "group": "Explanation",
          "pages": [
            {
              "group": "Core Concepts",
              "icon": "lightbulb",
              "pages": [
                "python/explanation/architecture",
                "python/explanation/infrastructure-from-code"
              ]
            }
          ]
        }
      ]
    }
  ]
}

diff --git a/python/explanation/architecture.mdx b/python/explanation/architecture.mdx
new file mode 100644
index 0000000..26b8974
--- /dev/null
+++ b/python/explanation/architecture.mdx
@@ -0,0 +1,142 @@
---
title: "Shuttle Architecture"
description: "Understand how Shuttle's infrastructure-from-code platform works under the hood"
icon: "building"
---

## Overview

Shuttle is designed around a fundamental principle: **Infrastructure from Code**. Unlike traditional platforms, where you configure infrastructure separately from your application code, Shuttle provisions and manages infrastructure directly from your Python code using type hints and decorators.

## Core Architecture

### The Shuttle Runtime

At the heart of Shuttle is how your application's entrypoint is handled, typically using `shuttle_runtime.main(your_function)` or a decorated function like `@shuttle_task.cron`.
This approach:

- **Analyzes your code** to understand what resources you need
- **Provisions infrastructure** automatically based on your type hints and resource options
- **Handles the deployment lifecycle**, including startup, shutdown, and health checks
- **Manages resource connections** and provides them to your application

```python
import shuttle_task
from shuttle_aws.s3 import Bucket, BucketOptions
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions
from typing import Annotated

@shuttle_task.cron(schedule="0 3 * * ? *")
async def run(
    bucket: Annotated[
        Bucket, BucketOptions(bucket_name="my-bucket", policies=[])
    ],
    db: Annotated[RdsPostgres, RdsPostgresOptions()],
):
    # Your app code here - infrastructure is ready
    pass
```

### Resource Provisioning System

Shuttle's resource system works through a combination of:

1. **Code Analysis**: Shuttle scans your code for type hints and resource options
2. **Infrastructure Planning**: The platform determines what resources to provision
3. **Automatic Provisioning**: Resources are created and configured
4. **Runtime Injection**: Live connections are provided to your application

This approach eliminates the need for:

- Manual infrastructure configuration
- Environment-specific connection strings
- Complex deployment scripts
- Infrastructure-as-code templates

### Deployment Pipeline

When you run `shuttle deploy`, here's what happens:

1. **Code Archive**: Your project is bundled and uploaded
2. **Build Phase**: Dependencies are installed and code is prepared in a secure build environment
3. **Resource Analysis**: The runtime analyzes required resources
4. **Infrastructure Provisioning**: Resources are created or updated
5. **Container Deployment**: Your app is deployed to AWS ECS (Fargate)
6. **Health Checks**: The platform verifies deployment success

### Isolation and Security

Each Shuttle project runs in its own:

- **ECS Service**: Dedicated compute isolation
- **Resource Namespace**: Logical separation of databases, secrets, etc.
- **Network Context**: Isolated networking and security groups

## Design Principles

### Developer Experience First

Shuttle prioritizes developer productivity by:

- Reducing boilerplate and configuration
- Providing immediate feedback during development
- Abstracting infrastructure complexity
- Maintaining familiar Python development patterns

### Infrastructure Transparency

While Shuttle abstracts infrastructure management, it maintains transparency by:

- Providing clear resource information in the console
- Offering detailed deployment logs
- Supporting custom resource configurations when needed
- Maintaining compatibility with standard Python libraries and frameworks

### Scalability by Design

The architecture supports growth through:

- Automatic resource scaling based on usage
- Support for custom resource configurations
- Integration with external services and databases
- Multi-region deployment capabilities (Enterprise)

## Comparison with Traditional Approaches

### Traditional Deployment

```
Code → Docker Image → Kubernetes/Docker Compose → Infrastructure Config → Deploy
```

### Shuttle Approach

```
Annotated Code → shuttle deploy → Running Application
```

This simplified flow reduces complexity while maintaining full control over your application logic.

## Resource Lifecycle

Resources in Shuttle follow a managed lifecycle:

1. **Declaration**: Resources are declared via type hints and decorators in your code
2. **Provisioning**: First deployment creates the resource
3. **Persistence**: Resources persist across deployments
4. **Management**: Resources can be managed via CLI or console
5. **Cleanup**: Resources are cleaned up when projects are deleted

This lifecycle ensures that your data persists while your application code evolves.

## Understanding the Platform Benefits

Shuttle's architecture provides several key advantages:

- **Faster Development**: No infrastructure setup time
- **Reduced Complexity**: Infrastructure and application code in one place
- **Better Reliability**: Managed infrastructure with automatic scaling
- **Cost Efficiency**: Pay only for what you use, automatic optimization
- **Security**: Built-in best practices and managed updates

Understanding these architectural principles helps you make the most of Shuttle's capabilities and design applications that leverage the platform's strengths.

diff --git a/python/explanation/infrastructure-from-code.mdx b/python/explanation/infrastructure-from-code.mdx
new file mode 100644
index 0000000..700b474
--- /dev/null
+++ b/python/explanation/infrastructure-from-code.mdx
@@ -0,0 +1,167 @@
---
title: "Infrastructure from Code"
description: "Deep dive into Shuttle's Infrastructure from Code philosophy and how it differs from traditional approaches"
icon: "code"
---

## What is Infrastructure from Code?

Infrastructure from Code (IfC) is Shuttle's foundational approach, where infrastructure requirements are expressed directly in your application code through language-native constructs like type hints, attributes, or decorators, rather than in separate configuration files or management consoles.

## The Philosophy

Traditional cloud development often separates concerns:

- **Application Logic**: Your business code
- **Infrastructure Configuration**: YAML files, Terraform, CloudFormation
- **Deployment Scripts**: CI/CD pipelines, Docker configurations

This separation, while architecturally sound, can introduce friction:

- Context switching between code and infrastructure definitions.
- Synchronization challenges between different environments.
- Complex dependency management.
- A higher barrier to entry for new developers.

Shuttle's Infrastructure from Code collapses this complexity by enabling you to express infrastructure needs directly where they are used in your application code.

## How Shuttle's IfC Works

At its core, IfC means that your application code becomes the single source of truth for both your business logic and your infrastructure requirements.

### 1. Declarative Resource Requirements

You declare the resources your application needs directly in your code, typically as function parameters or class members, using language-native features. Shuttle then interprets these declarations to provision the necessary cloud infrastructure.

For example, you might declare a database, a storage bucket, or a secret manager entry. Shuttle understands these declarations and handles the provisioning, configuration, and injection of the actual resources into your running application.

### 2. Automatic Lifecycle Management

When you deploy your application with Shuttle:

- **Provisioning**: Required resources are automatically created on first deployment.
- **Persistence**: Resources like databases or storage buckets persist across deployments, maintaining their state.
- **Management**: Shuttle manages updates, backups, and scaling of these resources where applicable.
- **Cleanup**: Resources are automatically torn down when they are no longer declared in your code, preventing orphaned infrastructure.

## IfC vs. Traditional Infrastructure Provisioning

To illustrate the difference, consider provisioning a PostgreSQL database for your application:

### Traditional Approach Example

With traditional methods, you'd define your infrastructure in separate files and manage environment variables:

```yaml
# docker-compose.yml or similar IaC tool
services:
  database:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password

  app:
    build: .
    depends_on:
      - database
    environment:
      DATABASE_URL: postgres://user:password@database:5432/myapp
```

And your application code would retrieve the connection string from environment variables:

```python
# Your app code
import os

from psycopg_pool import ConnectionPool

DATABASE_URL = os.environ.get("DATABASE_URL")
if not DATABASE_URL:
    raise ValueError("DATABASE_URL environment variable is not set")

pool = ConnectionPool(DATABASE_URL)
# ... use pool to interact with the database ...
```

This approach requires manual synchronization between the infrastructure definition and the application's environment variable consumption.

### Infrastructure from Code in Python Example

With Shuttle's IfC in Python, the database requirement is expressed directly in your application code:

```python
from typing import Annotated

import shuttle_task
from shuttle_aws.s3 import Bucket, BucketOptions, AllowWrite
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions
from shuttle_runtime import Secrets

@shuttle_task.cron(schedule="0 3 * * ? *")
async def run(
    # Dedicated AWS RDS Postgres with custom options
    production_db: Annotated[
        RdsPostgres,
        RdsPostgresOptions(
            database_name="prod_metrics",
            allocated_storage_gb=20,
        ),
    ],
    # Dedicated AWS S3 bucket with write permissions
    data_bucket: Annotated[
        Bucket,
        BucketOptions(
            bucket_name="my-app-data-bucket-unique",  # Must be globally unique
            policies=[
                AllowWrite(
                    account_id="123456789012",  # Example AWS account ID
                    role_name="MyExternalServiceRole",
                ),
            ],
        ),
    ],
    # Secrets automatically managed by Shuttle
    secrets: Secrets,
):
    # All resources are ready to use as function arguments
    print(f"Connected to production DB host: {production_db.get_connection().dsn}")
    print(f"Data bucket name: {data_bucket.options.bucket_name}")
    print(f"Secrets available: {secrets.get('MY_SECRET')}")
```

In this Python example, `RdsPostgres` and `Bucket` are type hints that tell Shuttle what resources are needed. The `Annotated` type allows for additional configuration (e.g., `RdsPostgresOptions`). Shuttle automatically provisions these resources and injects fully configured, ready-to-use instances into your function at runtime.

## Developer Experience & Efficiency

### Unified Codebase for App & Infra

Infrastructure requirements are defined directly within your application code using type hints and decorators. This creates a single source of truth, eliminating the need for separate configuration files (like Terraform or CloudFormation) and reducing context switching.

- **Version Control**: Since infrastructure is code, all changes to your application and its underlying resources are tracked together in version control, simplifying collaboration, code reviews, and rollbacks.

With commands like `shuttle deploy`, `shuttle logs`, and `shuttle destroy`, the platform automatically provisions, updates, and tears down resources.
The CLI provides immediate, human-readable feedback (diffs) on planned infrastructure changes, ensuring consistency and reproducibility across environments.

### Enhanced Local Development

Because resources are declared in code, `shuttle run` can execute your application locally while connecting it to the same remote resources provisioned in the cloud, so local testing exercises the infrastructure your deployment will use.

## Limitations and Considerations

### When IfC Might Not Be Ideal

Infrastructure from Code may not be ideal in scenarios such as complex multi-service architectures not fully managed by Shuttle, strict compliance requirements that demand external auditing of infrastructure, legacy system integration, or highly diverse multi-language environments where a common IaC tool might be preferred. In these cases, consider:

- **Hybrid Approaches**: Integrating Shuttle-managed parts with external infrastructure definitions managed by traditional IaC tools.
- **Custom Resources**: For complex infrastructure needs not directly supported by Shuttle's built-in types, you might define them externally and integrate them into your Shuttle project.
- **Traditional IaC**: Combining Shuttle with traditional Infrastructure as Code tools for legacy components or highly specialized requirements that fall outside Shuttle's immediate scope.

## The Future of Infrastructure from Code

As AI and machine learning technologies advance, they are poised to make Infrastructure from Code even more approachable and efficient. AI tools combined with succinct syntax will lower the barrier to entry for developers, making it easier to manage infrastructure through code.

- **AI-Enhanced Tooling**: AI will enable more intuitive interaction with infrastructure, allowing developers to focus on application logic rather than complex configurations.
- **Streamlined Orchestration**: With AI-driven command-line tools, orchestrating infrastructure provisioning will become more streamlined, reducing the need for verbose commands.
- **Short Context Prompting**: AI will improve the accuracy of infrastructure management tasks by understanding short, contextual prompts, an improvement over verbose and error-prone infrastructure tooling.

Understanding Infrastructure from Code helps you leverage Shuttle's full potential and design applications that are both powerful and maintainable.

diff --git a/python/getting-started/cli-installation.mdx b/python/getting-started/cli-installation.mdx
new file mode 100644
index 0000000..ecdcc02
--- /dev/null
+++ b/python/getting-started/cli-installation.mdx
@@ -0,0 +1,45 @@
---
title: "CLI Installation"
description: "How to install the Python-based Shuttle Command Line Interface (CLI) using uv"
icon: "download"
---

# Using Shuttle within your projects

For managing dependencies and running Shuttle commands within your projects, it is recommended to use `uv` directly.

1. Create a Virtual Environment:

   Navigate to your project directory and create a virtual environment with `uv`.

   ```bash
   uv venv
   source .venv/bin/activate
   ```

2. Add the Shuttle Dependency:

   Add the `shuttle-python` package to your project's dependencies.

   ```bash
   uv init
   uv add shuttle-python
   ```

3. Run Shuttle Commands:

   Execute Shuttle commands using `uv run -m shuttle` to ensure they run within your project's isolated environment.

   ```bash
   uv run -m shuttle deploy
   uv run -m shuttle logs
   uv run -m shuttle run
   ```

   Alternatively, if your virtual environment is activated, you can use the `shuttle` script directly.

   ```bash
   shuttle deploy
   shuttle logs
   shuttle run
   ```

diff --git a/python/getting-started/quickstart.mdx b/python/getting-started/quickstart.mdx
new file mode 100644
index 0000000..6c12ed3
--- /dev/null
+++ b/python/getting-started/quickstart.mdx
@@ -0,0 +1,82 @@
---
title: "Quickstart"
description: "Follow these steps to deploy your first Shuttle project using Python"
icon: "circle-play"
---

## Deploy your project

1. **Configure AWS Credentials**
   Before deploying, ensure you have authenticated with AWS using one of the following methods:
   - **SSO Enabled**: Use `aws configure sso` for AWS Single Sign-On.
   - **IAM User Account**: Use `aws configure` for standard IAM user credentials.
   - **Temporary IAM Credentials**: Set the following environment variables:
     ```sh
     export AWS_ACCESS_KEY_ID='your_access_key'
     export AWS_SECRET_ACCESS_KEY='your_secret_key'
     export AWS_SESSION_TOKEN='your_session_token'
     ```
   - **Other Options**: IAM role metadata and OIDC federation are also supported.

2. **Install uv for Virtual Environment Management**
   ```sh
   pip install uv
   ```

3. **Create and Enter Your Project Directory**
   Create a directory for your project and change into it:
   ```sh
   mkdir my-project
   cd my-project
   ```

4. **Create and Activate a Virtual Environment**
   ```sh
   uv venv
   source .venv/bin/activate
   ```

5. **Add Shuttle Dependency**
   ```sh
   uv init
   uv add shuttle-python
   ```

6. **Create Your Shuttle Project**
   There is no `shuttle init` command for Python projects. Instead, you must create your project files manually.

   Create a `main.py` file: you can use the [built-in user-project example](https://github.com/shuttle-hq/shuttle-python/blob/main/user-project/__main__.py) as a starting point.

7. **Deploy Your Project**
   Deploy your application to the Shuttle platform:
   ```sh
   shuttle deploy
   # or
   uv run -m shuttle deploy
   ```
   Follow the prompts to confirm deployment.

8. **View Application Logs**
   To view logs from your deployed application:
   ```sh
   shuttle logs
   # or
   uv run -m shuttle logs
   ```

9. **Run Your Project Locally (Optional)**
   To run your application locally using deployed resources:
   ```sh
   shuttle run
   # or
   uv run -m shuttle run
   ```
   Note: This will execute your application's `main` function (or equivalent entrypoint) and connect to the remote resources provisioned in the cloud.

10. **Destroy Your Project (Cleanup)**
    When you're finished, you can destroy all provisioned infrastructure and associated resources:
    ```sh
    shuttle destroy
    # or
    uv run -m shuttle destroy
    ```

diff --git a/python/how-to-guides/add-database.mdx b/python/how-to-guides/add-database.mdx
new file mode 100644
index 0000000..394f75d
--- /dev/null
+++ b/python/how-to-guides/add-database.mdx
@@ -0,0 +1,81 @@
---
title: "Add a Database"
description: "How to add a dedicated database to your Shuttle project."
icon: "database"
---

# Add a Dedicated Database

Add a managed, dedicated Postgres database to your Shuttle Python project. This allows your application to persist data, serving as a robust and scalable relational database solution without needing to manage the underlying infrastructure yourself. Shuttle automatically handles provisioning, scaling, and connection management.

## Prerequisites

- An existing Shuttle project (Python)
- [Shuttle CLI installed](/python/getting-started/cli-installation)
- `uv` installed and a virtual environment activated for dependency management
- AWS credentials configured locally (e.g., via `aws configure`, environment variables like `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.)

## Instructions: Provisioning a Dedicated Postgres Database

This guide will walk you through adding a dedicated Postgres database to your Shuttle Python application.

### 1. Install Dependencies

First, add the `shuttle-python` package to your project using `uv`:

```bash
uv init
uv add shuttle-python
```

This makes the `shuttle_aws.rds` module available, along with its Postgres-specific dependencies, including database drivers like `psycopg`.

### 2. Define Your Database in `main.py`

In your project's `main.py` file, import `RdsPostgres` and `RdsPostgresOptions` from `shuttle_aws.rds`.
Then, add the `RdsPostgres` resource as an argument to your `@shuttle_task.cron` (or other service) decorated function, using type annotations. You can customize the database using `RdsPostgresOptions` (e.g., instance size, storage, etc., though for a basic guide, default options are sufficient).

```python
from typing import Annotated

import shuttle_task
import shuttle_runtime
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions

@shuttle_task.cron(schedule="0 * * * ? *")  # This task will run hourly
async def main(
    postgres: Annotated[
        RdsPostgres,
        RdsPostgresOptions(),  # Default options for a basic Postgres instance
    ],
):
    """An example task that interacts with a dedicated Postgres database."""

    print("Accessing dedicated Postgres database...")

    # Get a database connection.
    # The connection object returned is compatible with psycopg's connection interface.
    # For async contexts, you might use asyncpg, and get_connection() would return an asyncpg connection.
    try:
        conn = postgres.get_connection()
        with conn.cursor() as cur:
            # Example: Create a simple table if it doesn't exist
            cur.execute("CREATE TABLE IF NOT EXISTS shuttle_test (id SERIAL PRIMARY KEY, message VARCHAR(255));")
            conn.commit()
            print("Table 'shuttle_test' ensured.")

            # Example: Insert data
            cur.execute("INSERT INTO shuttle_test (message) VALUES (%s);", ("Hello from Shuttle Postgres!",))
            conn.commit()
            print("Successfully inserted data.")

            # Example: Select data
            cur.execute("SELECT message FROM shuttle_test ORDER BY id DESC LIMIT 1;")
            result = cur.fetchone()
            print(f"Retrieved from DB: {result[0]}")

    except Exception as e:
        print(f"Error interacting with Postgres database: {e}")

# This line is essential for Shuttle to run your application locally or to deploy it
if __name__ == "__main__":
    shuttle_runtime.main(main)
```

diff --git a/python/how-to-guides/add-s3-bucket.mdx b/python/how-to-guides/add-s3-bucket.mdx
new file mode 100644
index 0000000..6acfa77
--- /dev/null
+++ b/python/how-to-guides/add-s3-bucket.mdx
@@ -0,0 +1,165 @@
---
title: "Add an S3 Bucket"
description: "Step-by-step guide to adding and configuring an S3 bucket in your Shuttle Python project."
icon: "cloud"
---

Add a managed S3 bucket to your Shuttle Python project. This allows your application to store and retrieve objects, serving as a flexible and cost-effective object storage solution.

## Prerequisites

- An existing Shuttle project (Python)
- [Shuttle CLI installed](/python/getting-started/cli-installation)
- `uv` installed and a virtual environment activated for dependency management
- AWS credentials configured locally (e.g., via `aws configure`, environment variables like `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.)

## Instructions: Provisioning an S3 Bucket

This guide will walk you through adding an S3 bucket to your Shuttle Python application.

### 1. Install Dependencies

First, add the `shuttle-python` package to your project using `uv`:

```bash
uv init
uv add shuttle-python
```

This makes the `shuttle_aws` package available, along with its S3-specific dependencies, including `boto3`.

### 2. Define Your S3 Bucket in `main.py`

In your project's `main.py` file, import `Bucket` and `BucketOptions` from `shuttle_aws.s3`. Then, add the `Bucket` resource as an argument to your `@shuttle_task.cron` (or other service) decorated function, using type annotations. You can specify a custom bucket name using `BucketOptions`.

```python
from typing import Annotated

import shuttle_task
import shuttle_runtime
from shuttle_aws.s3 import Bucket, BucketOptions

# Define a constant for your desired bucket name
BUCKET_NAME = "my-shuttle-python-bucket"

@shuttle_task.cron(schedule="0 * * * ? *")  # This task will run hourly
async def main(
    bucket: Annotated[
        Bucket,
        BucketOptions(bucket_name=BUCKET_NAME, policies=[])
    ],
):
    """An example task that interacts with an S3 bucket."""

    print(f"Accessing S3 bucket: {bucket.options.bucket_name}")

    # Get a boto3 S3 client to interact with the bucket
    s3_client = bucket.get_client()

    try:
        # Example: List objects in the bucket
        response = s3_client.list_objects_v2(Bucket=BUCKET_NAME)

        if "Contents" in response:
            print(f"Found {response['KeyCount']} objects in '{BUCKET_NAME}':")
            for obj in response["Contents"]:
                print(f"- {obj['Key']} ({obj['Size']} bytes)")
        else:
            print(f"No objects found in '{BUCKET_NAME}'.")

        # Example: Put a simple object
        s3_client.put_object(Bucket=BUCKET_NAME, Key="hello.txt", Body="Hello from Shuttle Python!")
        print("Successfully put 'hello.txt' into the bucket.")

    except Exception as e:
        print(f"Error interacting with S3 bucket: {e}")

# This line is essential for Shuttle to run your application
if __name__ == "__main__":
    shuttle_runtime.main(main)
```

### 3. Add External Write Permissions (Optional)

If another AWS service or account needs write access to your bucket, you can grant it using the `AllowWrite` policy from `shuttle_aws.s3`. This will attach an IAM policy to your bucket allowing the specified role/account to write objects.

```python
from typing import Annotated

import shuttle_runtime
import shuttle_task
from shuttle_aws.s3 import Bucket, BucketOptions, AllowWrite

BUCKET_NAME = "my-shuttle-python-bucket"

@shuttle_task.cron(schedule="0 * * * ? *")
async def main(
    bucket: Annotated[
        Bucket,
        BucketOptions(
            bucket_name=BUCKET_NAME,
            policies=[
                AllowWrite(account_id="842910673255", role_name="SessionTrackerService")  # Example: Grant write to a specific role
            ]
        ),
    ],
):
    """An example task with external write permissions for its S3 bucket."""
    # ... your S3 interaction logic here ...
```

### 4. Deploy to the Cloud

Deploy your project to the Shuttle cloud:

```bash
uv run -m shuttle deploy
```

Shuttle will provision your managed S3 bucket (if it doesn't already exist) and connect your application to it automatically. You will see output similar to this, showing the created resources:

```bash
Deploying...

Deploy complete! Resources created:

- shuttle_aws.s3.Bucket
    id = "my-shuttle-python-bucket-abcdef12"
    arn = "arn:aws:s3:::my-shuttle-python-bucket-abcdef12"

- shuttle_task.cron
    id = "my-shuttle-python-task-abcdef12"
    schedule = "0 * * * ? *"
    arn = "arn:aws:ecs:eu-west-2:123456789012:task/my-shuttle-python-project/...

Use `uv run -m shuttle logs` to view logs.
```

### 5. Test Locally

Run your project locally using the Shuttle CLI:

```bash
uv run -m shuttle run
```

Shuttle will execute your Python application locally. For S3 buckets, `shuttle run` will connect to the *remote* S3 bucket provisioned in your AWS account. No local S3 emulation is performed.
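The `Annotated` declarations used in steps 2 and 3 work because Python exposes annotation metadata at runtime. The stdlib-only sketch below shows how a runtime like Shuttle's can recover both the resource type and its options from a function signature; the `Bucket` and `BucketOptions` classes here are illustrative stand-ins, not Shuttle's real implementation:

```python
from typing import Annotated, get_type_hints

# Stand-in classes for illustration only; not Shuttle's actual API.
class Bucket: ...

class BucketOptions:
    def __init__(self, bucket_name: str):
        self.bucket_name = bucket_name

async def main(bucket: Annotated[Bucket, BucketOptions(bucket_name="my-shuttle-python-bucket")]):
    ...

# include_extras=True preserves the Annotated metadata instead of stripping it
hints = get_type_hints(main, include_extras=True)
resource_type = hints["bucket"].__origin__   # the underlying type: Bucket
options = hints["bucket"].__metadata__[0]    # the BucketOptions instance
print(resource_type.__name__, options.bucket_name)
```

A provisioning runtime can walk every parameter this way, build the declared resources, and then call the function with ready-to-use instances.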
## Troubleshooting

- **Deployment failed?**
  - Check `uv run -m shuttle logs` for detailed error messages.
  - Ensure your AWS credentials are correctly configured and have sufficient permissions to create S3 buckets and IAM policies.
  - Verify your `main.py` has no syntax errors and all required `shuttle_aws` imports are correct.
- **Local run issues?**
  - Ensure your AWS credentials are set up correctly on your local machine, as `shuttle run` uses the remote S3 bucket.
  - Check for network connectivity issues if your application struggles to reach AWS S3.
- **S3 permissions errors?**
  - Double-check the `account_id` and `role_name` in your `AllowWrite` configuration.
  - Ensure the external AWS role/service attempting to access the bucket has the correct IAM permissions.

## Next Steps

- [Add other resources](/reference/resources) like a managed database or secrets.
- Learn more about the [Shuttle Python framework](/getting-started/python).
- Explore [Shuttle Python examples](/examples/overview) for more advanced use cases.

diff --git a/python/reference/cli-reference.mdx b/python/reference/cli-reference.mdx
new file mode 100644
index 0000000..516c3bd
--- /dev/null
+++ b/python/reference/cli-reference.mdx
@@ -0,0 +1,261 @@
---
title: "CLI Reference"
description: "Complete reference for all Shuttle CLI commands and options for Python projects."
icon: "terminal"
---

## Overview

The Shuttle CLI for Python is your primary interface for developing, deploying, and managing applications on the Shuttle platform. This reference provides comprehensive documentation for all commands and options.

## Installation

It is recommended to use `uv` for dependency management and running the Shuttle CLI.

```bash
uv venv
source .venv/bin/activate
uv add shuttle-python
```

Once installed, you can invoke the Shuttle CLI using `uv run -m shuttle`. If your virtual environment is activated, you can also run commands directly using `shuttle`.
+For example: + +```bash +uv run -m shuttle deploy +# OR +shuttle deploy +``` + +## Global Options + +These options are common to many command-line interfaces and may be supported by the underlying Python `click` framework, but are not explicitly defined or handled by the `shuttle` Python application's `__main__.py` for all cases. + +| Option | Description | +| ---------------------------- | -------------------------------------------------------------------------------- | +| `--debug` | Turn on tracing output for Shuttle libraries (WARNING: can print sensitive data) | + +## Commands Reference + +### Project Management + +### Deployment + +#### `shuttle deploy` + +Provision, build, and deploy the application. + +```bash +uv run -m shuttle deploy [PATH] +# OR +shuttle deploy [PATH] +``` + +**Arguments:** + +- `[PATH]` - The path to your Shuttle project's root directory (defaults to current directory). + +**Examples:** + +```bash +uv run -m shuttle deploy # Deploy the project in the current directory +shuttle deploy # Deploy the project in the current directory + +uv run -m shuttle deploy my-project # Deploy a project located in 'my-project' directory +shuttle deploy my-project # Deploy a project located in 'my-project' directory +``` + +#### `shuttle destroy` + +Destroy the deployed stack and all associated resources for a project. + +```bash +uv run -m shuttle destroy [PATH] +# OR +shuttle destroy [PATH] +``` + +**Arguments:** + +- `[PATH]` - The path to your Shuttle project's root directory (defaults to current directory). + +**Examples:** + +```bash +uv run -m shuttle destroy # Destroy the project in the current directory +shuttle destroy # Destroy the project in the current directory + +uv run -m shuttle destroy my-project # Destroy a project located in 'my-project' directory +shuttle destroy my-project # Destroy a project located in 'my-project' directory +``` + +### Local Development + +#### `shuttle run` + +Run your Shuttle project locally. 
+ +```bash +uv run -m shuttle run [PATH] +# OR +shuttle run [PATH] +``` + +**Arguments:** + +- `[PATH]` - The path to your Shuttle project's root directory (defaults to current directory). + +**Examples:** + +```bash +uv run -m shuttle run # Run the project in the current directory locally +shuttle run # Run the project in the current directory locally + +uv run -m shuttle run my-project # Run a project located in 'my-project' directory locally +shuttle run my-project # Run a project located in 'my-project' directory locally +``` + +### Logs + +#### `shuttle logs` + +Show logs from the deployed application. + +```bash +uv run -m shuttle logs [PATH] +# OR +shuttle logs [PATH] +``` + +**Arguments:** + +- `[PATH]` - The path to your Shuttle project's root directory (defaults to current directory). + +**Examples:** + +```bash +uv run -m shuttle logs # View logs for the project in the current directory +shuttle logs # View logs for the project in the current directory + +uv run -m shuttle logs my-project # View logs for a project located in 'my-project' directory +shuttle logs my-project # View logs for a project located in 'my-project' directory +``` + +## Environment Variables + +The Shuttle Python CLI primarily relies on AWS authentication configured in your environment. + +| Variable | Description | +| ------------------------ | ------------------------------------------------------------------------------------------------------- | +| `AWS_ACCESS_KEY_ID` | Your AWS access key ID. Used for programmatic access to AWS. | +| `AWS_SECRET_ACCESS_KEY` | Your AWS secret access key. Used in conjunction with `AWS_ACCESS_KEY_ID`. | +| `AWS_SESSION_TOKEN` | (Optional) The session token for temporary AWS credentials. | +| `AWS_PROFILE` | The name of the AWS profile to use from your AWS credentials file (`~/.aws/credentials`). | +| `LOCALSTACK_AUTH_TOKEN` | Used for testing with LocalStack; typically not needed for standard deployments. 
|

You can configure your AWS credentials using `aws configure` or `aws configure sso` via the AWS CLI.

## Configuration Files

### `pyproject.toml` (and project structure)

Shuttle Python projects are standard Python projects, typically managed with `pyproject.toml`. Your application code, including `@shuttle_task.cron` decorated functions, resides within your project's Python source files (e.g., `main.py` or `src/my_project/task.py`).

```toml
[project]
name = "my-shuttle-project"
version = "0.1.0"
dependencies = [
    "shuttle-python",
    # ... other dependencies
]
```

## Common Workflows

### First Deployment

```bash
uv venv
source .venv/bin/activate
uv add shuttle-python
# Add your Shuttle Python code and dependencies (e.g., shuttle-aws, shuttle-db)
uv run -m shuttle deploy # Deploy to AWS infrastructure managed by Shuttle
# OR
shuttle deploy
uv run -m shuttle run # Test locally against provisioned infrastructure
# OR
shuttle run
```

### Development Cycle

```bash
uv run -m shuttle deploy # Deploy changes
# OR
shuttle deploy
uv run -m shuttle run # Develop locally against provisioned infrastructure
# OR
shuttle run
uv run -m shuttle logs # Check deployment logs
# OR
shuttle logs
```

### Project Management

```bash
# The Python CLI manages projects through their local paths.
# To remove a project and its deployed resources:
uv run -m shuttle destroy
# OR
shuttle destroy
```

### Debugging

```bash
uv run -m shuttle logs # Check recent logs
# OR
shuttle logs
uv run -m shuttle deploy --debug # Verbose deployment (if --debug is implemented)
# OR
shuttle deploy --debug
```

## Troubleshooting

### Common Issues

**AWS Authentication issues:**

- Ensure your AWS credentials are correctly configured via environment variables or `~/.aws/credentials`.
- Use `aws configure` or `aws configure sso`.
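For reference, credentials written by `aws configure` land in a profile file shaped like the following. The key values below are AWS's standard documentation placeholders, not real credentials — substitute your own:

```ini
; ~/.aws/credentials -- placeholder values; substitute your own keys
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

If you use a non-default profile, set `AWS_PROFILE` to its name so the CLI picks it up.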
**Local development issues:**

- Ensure you are in the correct virtual environment (`source .venv/bin/activate`).
- Verify that your project path is correct (e.g., `.` for the current directory).
- Check for `FileNotFoundError` if `main.py` or `__main__.py` is missing from your project root.

**Deployment issues:**

- Check `uv run -m shuttle logs` for build or deployment errors.
- Ensure all Python dependencies are specified in `pyproject.toml` and installed (`uv add` or `uv sync` as appropriate).

## Getting Help

```bash
uv run -m shuttle --help # General help
# OR
shuttle --help
uv run -m shuttle <command> --help # Command-specific help (if supported by click)
# OR
shuttle <command> --help
```

For additional support:

- [Discord Community](https://discord.gg/shuttle)
- [GitHub Issues](https://github.com/shuttle-hq/shuttle/issues)
- [Documentation](https://docs.shuttle.dev)

diff --git a/python/tutorials/your-first-app.mdx b/python/tutorials/your-first-app.mdx
new file mode 100644
index 0000000..e968f82
--- /dev/null
+++ b/python/tutorials/your-first-app.mdx

---
title: "Your First Shuttle App"
description: "Learn Shuttle fundamentals by building and deploying a scheduled background task with Infrastructure from Code"
icon: "rocket"
---

## Learning Objectives

By completing this tutorial, you'll master **foundational Shuttle concepts** and learn to:

- **Shuttle Tasks**: Use `@shuttle_task.cron` for scheduled background execution
- **Infrastructure from Code**: Provision an S3 bucket and PostgreSQL database with type-hinted parameters
- **Zero-Config Deployment**: Deploy with `shuttle deploy` - no YAML or containers needed
- **Local Development**: Test locally with `shuttle run` before deploying
- **CLI Workflow**: Use the Shuttle CLI for project management and deployment

## Prerequisites

- **Time Required**: 15 minutes
- **Python**: Version 3.12 or later ([install here](https://www.python.org/downloads/))
- 
**Tools**: [Shuttle CLI installed](/python/getting-started/cli-installation)
- **Accounts**: An AWS account of your own
- **Experience**: Basic Python knowledge (functions, async/await, type hints)

## What We're Building

We'll create a **Records Grafana Exporter** - a scheduled background task that processes data from an S3 bucket and inserts record counts into a PostgreSQL database.

This app demonstrates Shuttle's core value: turning Python code into production infrastructure with zero configuration.

**High-level components:**

- **Background Task** (`@shuttle_task.cron`)
- **S3 Bucket** (auto-provisioned by Shuttle)
- **PostgreSQL Database** (auto-provisioned by Shuttle)
- **Database Table Initialization** (SQL via `psycopg`)
- **S3 Object Processing** (`polars` for JSON parsing)

## Tutorial Steps

### Step 1: Create Your First Project

There is no `shuttle init` command for Python projects. Instead, we'll set up the project structure manually.

First, create a new directory for your project and navigate into it:

```bash
mkdir records-grafana-exporter
cd records-grafana-exporter
```

Next, initialize a `uv` virtual environment and activate it. Activating the virtual environment is recommended to ensure you're using the correct Python interpreter and installed packages.

```bash
uv venv
source .venv/bin/activate # On Windows, use `.venv\Scripts\activate`
```

Now, initialize the project and add the `shuttle-python` dependency. `uv` will automatically create or update `pyproject.toml` and install the package.

```bash
uv init
uv add shuttle-python
```

Finally, create the main application file `main.py` with the following content:

```python
# main.py
import shuttle_runtime
import shuttle_task

@shuttle_task.cron(schedule="0 3 * * ?
 *")
async def main():
    # Your scheduled task logic goes here
    print("Hello from your scheduled task!")

if __name__ == "__main__":
    shuttle_runtime.main(main)
```

This sets up a new directory with a basic Python project structure and a scheduled task.

### Step 2: Understand the Created Code

Examine the `main.py` file you just created.

**Key concepts:**

- `@shuttle_task.cron(schedule="0 3 * * ? *")` - Shuttle's decorator to define a scheduled task using a cron expression. This is the six-field AWS EventBridge cron format; this schedule runs at 3:00 AM UTC every day.
- `async def main()` - The asynchronous function that Shuttle will execute on that schedule.

### Step 3: Test Locally

Run your task locally:

```bash
shuttle run # or uv run -m shuttle run
```

You should see output similar to:

```
Running locally...

Starting local runner...

2025-07-03T10:00:00Z [task:records-grafana-exporter-1234abcd] Hello from your scheduled task!
```

**What happened:** Shuttle started a local server that mimicked the production environment for your scheduled task. It executes the `main` function immediately and then at subsequent intervals (if defined by the cron schedule, though for a quick test it runs once).

### Step 4: Add Infrastructure (S3 Bucket & Postgres)

Our task needs an S3 bucket to read from and a PostgreSQL database to write to. We'll add these as parameters to our `main` function, and Shuttle will automatically provision them.

Update `main.py` to include these resources:

```python
import shuttle_task
import shuttle_runtime
from shuttle_aws.s3 import Bucket, BucketOptions
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions
from typing import Annotated

@shuttle_task.cron(schedule="0 3 * * ?
 *")
async def main(
    bucket: Annotated[Bucket, BucketOptions(bucket_name="grafana-exporter-1234abcd", policies=[])],
    db: Annotated[RdsPostgres, RdsPostgresOptions()],
):
    # Your scheduled task logic goes here
    print(f"Hello from your scheduled task! Bucket: {bucket.options.bucket_name}, Postgres host: {db.output.host.get_value()}")

if __name__ == "__main__":
    shuttle_runtime.main(main)
```

### Step 5: Deploy to Production

Deploy your Records Grafana Exporter task to the Shuttle platform. Shuttle will analyze your code, generate a deployment plan, and proceed automatically:

```bash
shuttle deploy # or uv run -m shuttle deploy
```

Upon successful deployment, you'll see details of the created resources, similar to:

```
Deploy complete! Resources created:

- shuttle_aws.s3.Bucket
    id  = "records-grafana-exporter-bucket-7fd3a2c4"
    arn = "arn:aws:s3:::records-grafana-exporter-bucket-7fd3a2c4"

- shuttle_db.postgres.Postgres
    id   = "records-grafana-exporter-db-7fd3a2c4"
    host = "records-grafana-exporter-db-7fd3a2c4.pg.shuttle.run"

- shuttle_task.cron
    id       = "records-grafana-exporter-task-7fd3a2c4"
    schedule = "0 3 * * ? *"
    arn      = "arn:aws:ecs:eu-west-2:123456789012:task/records-grafana-exporter/abcdef1234567890"

Use `shuttle logs` to view logs.
```

Your task is now live! It will run daily at 3:00 AM UTC. You can view its logs with `shuttle logs` (or `uv run -m shuttle logs`).

### Step 6: Initialize Database Schema

Before we can insert data, we need to ensure our database table exists. We'll add logic to create the `record_counts` table if it doesn't already exist.

Update `main.py`:

```python
import shuttle_task
import shuttle_runtime
from shuttle_aws.s3 import Bucket, BucketOptions
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions
from typing import Annotated

TABLE = "record_counts"

@shuttle_task.cron(schedule="0 3 * * ?
 *")
async def main(
    bucket: Annotated[Bucket, BucketOptions(bucket_name="grafana-exporter-1234abcd", policies=[])],
    db: Annotated[RdsPostgres, RdsPostgresOptions()],
):
    pg_conn = db.get_connection()
    with pg_conn.cursor() as cur:
        cur.execute(f"""
            CREATE TABLE IF NOT EXISTS {TABLE} (
                ts TIMESTAMPTZ PRIMARY KEY,
                count INTEGER NOT NULL
            );
            """
        )
        pg_conn.commit()

    print(f"Hello from your scheduled task! Bucket: {bucket.options.bucket_name}, Postgres host: {db.output.host.get_value()}")

if __name__ == "__main__":
    shuttle_runtime.main(main)
```

### Step 7: Implement Business Logic

Now, let's add the core logic for our ETL task: reading JSON files from S3,
counting records with `polars`, and inserting the total into our Postgres
database. Add the `polars` dependency:

```bash
uv add polars
```

We're also updating the cron schedule to `0 * * * ? *` to run the task
every hour, instead of daily at 3 AM UTC.

Update `main.py` with the full application logic:

```python
import io
import polars as pl
from datetime import datetime, timedelta, timezone

import shuttle_task
import shuttle_runtime
from shuttle_aws.s3 import Bucket, BucketOptions
from shuttle_aws.rds import RdsPostgres, RdsPostgresOptions
from typing import Annotated

TABLE = "record_counts"


@shuttle_task.cron(schedule="0 * * * ?
 *")
async def main(
    bucket: Annotated[
        Bucket, BucketOptions(bucket_name="grafana-exporter-1234abcd", policies=[])
    ],
    db: Annotated[RdsPostgres, RdsPostgresOptions()],
):
    total_rows = 0

    now = datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=1)

    pg_conn = db.get_connection()
    with pg_conn.cursor() as cur:
        cur.execute(
            f"""
            CREATE TABLE IF NOT EXISTS {TABLE} (
                ts TIMESTAMPTZ PRIMARY KEY,
                count INTEGER NOT NULL
            );
            """
        )
        pg_conn.commit()

    s3_client = bucket.get_client()
    objects = s3_client.list_objects_v2(Bucket=bucket.options.bucket_name)
    if objects["KeyCount"] == 0:
        print(f"No objects in the bucket {bucket.options.bucket_name}.")
    else:
        for obj in objects["Contents"]:
            if obj["LastModified"] <= cutoff:
                continue

            try:
                content = s3_client.get_object(
                    Bucket=bucket.options.bucket_name, Key=obj["Key"]
                )
                body = content["Body"].read()
                json_buf = io.StringIO(body.decode("utf-8"))
                df = pl.read_json(json_buf)
                total_rows += df.height
            except Exception as e:
                print(f"Failed to parse {obj['Key']}: {e}")

    with pg_conn.cursor() as cur:
        cur.execute(
            f"""
            INSERT INTO {TABLE} (ts, count)
            VALUES (%s, %s)
            ON CONFLICT(ts) DO UPDATE SET count = EXCLUDED.count
            """,
            (now, total_rows),
        )
        pg_conn.commit()

    print(f"Inserted {total_rows} records into {TABLE} for {now.isoformat()}")


if __name__ == "__main__":
    shuttle_runtime.main(main)
```

### Step 8: Test Your Complete App Locally

Run the full application locally again:

```bash
shuttle run # or uv run -m shuttle run
```

You'll see logs indicating the task is running. Since there are no objects in a local S3 bucket (Shuttle's local runner does not emulate S3 by default), `total_rows` will likely be 0. The local runner will attempt to connect to the *remote* S3 bucket and Postgres database if they've been deployed, or simulate them if not.

```
Running locally...

Using existing deployed resources:

main.py
 ├── [=] shuttle_aws.s3.Bucket (remote)
 │     id = "records-grafana-exporter-bucket-xxxx"
 │
 └── [=] shuttle_db.postgres.Postgres (remote)
       id = "records-grafana-exporter-db-xxxx"

Starting local runner...

2025-07-03T10:00:00Z [task:records-grafana-exporter-1234abcd] Inserted 0 records into record_counts for 2025-07-03T10:00:00.000000+00:00
```

This confirms your code runs and interacts with the (remote or simulated) infrastructure.

### Step 9: Configure S3 Permissions (AllowWrite)

Often, other services need to write to your S3 bucket. Shuttle allows you to grant specific IAM permissions directly in your code. Let's say a microservice with role `SessionTrackerService` in AWS account `842910673255` needs write access.

Update `main.py`:

```python
# ... (imports and other code)
from typing import Annotated
from shuttle_aws.s3 import AllowWrite

TABLE = "record_counts"

@shuttle_task.cron(schedule="0 * * * ? *")
async def main(
    bucket: Annotated[
        Bucket,
        BucketOptions(
            bucket_name="grafana-exporter-1234abcd",
            policies=[
                AllowWrite(account_id="842910673255", role_name="SessionTrackerService")
            ]
        )
    ],
    db: Annotated[RdsPostgres, RdsPostgresOptions()],
):
    # ... (task logic as before)
```

Now, run `shuttle deploy` again. Shuttle will detect the change and apply it automatically:

```bash
shuttle deploy # or uv run -m shuttle deploy
```

Upon successful deployment, the S3 bucket's policy will be updated to grant write access to the specified IAM role.

### Step 10: Access Postgres Connection String

To connect Grafana (or any other external tool) to your PostgreSQL database, you'll need its connection details. Shuttle automatically provisions a secure database. You can find the connection host and other details from the `shuttle deploy` output, or by inspecting your project in the Shuttle console.
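As a quick sketch, those details can be assembled into a standard PostgreSQL URL in Python. Every value below is a placeholder (the host is taken from the example deploy output in Step 5; the user, password, and database name are assumptions), not a real credential:

```python
# Build a PostgreSQL connection URL from the details reported by `shuttle deploy`.
# All values here are placeholders -- substitute the ones for your own deployment.
user = "postgres"
password = "your-db-password"
host = "records-grafana-exporter-db-7fd3a2c4.pg.shuttle.run"
port = 5432
database = "records"

conn_str = f"postgresql://{user}:{password}@{host}:{port}/{database}"
print(conn_str)
```

The resulting URL can be pasted into Grafana's PostgreSQL data source configuration, or used with any other client.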
The pieces combine into a connection string of the form `postgresql://{user}:{password}@{host}:{port}/{database_name}`.

## What You've Learned

You've mastered these **key Shuttle concepts**:

- **Infrastructure from Code** - S3 and Postgres provisioned with simple function parameters
- **Zero-Config Deployment** - Production deployment without Docker or YAML files
- **Shuttle Tasks** - `@shuttle_task.cron` handles scheduled execution and infrastructure concerns
- **Local Development** - `shuttle run` provides production-like local testing
- **Automatic Infrastructure** - Shuttle automatically handles database and S3 provisioning, including connection details
- **IAM Permissions** - Configure fine-grained S3 bucket access using `AllowWrite` policies

## Troubleshooting

**Python environment issues?**

- Ensure you've activated your virtual environment: `source .venv/bin/activate`

**Local task not running?**

- Ensure you're in the project root directory when running `shuttle run` (or `uv run -m shuttle run`).
- Check `main.py` for syntax errors.

**Deployment failures?**

- Verify your code runs locally first with `shuttle run` (or `uv run -m shuttle run`).
- Check deployment logs with `shuttle logs` (or `uv run -m shuttle logs`).
- Ensure your AWS credentials are configured correctly (e.g., `aws configure`).

**S3 or Postgres connection errors (remote)?**

- Shuttle handles provisioning and connection. Ensure your code correctly uses the `Bucket` and `RdsPostgres` objects passed to `main`.
- For `AllowWrite` policies, double-check the AWS account ID and role name.

## Next Steps

Continue your Shuttle journey:

1. **Add More Resources**: Explore other available Shuttle resources like queues or caches.
2. **Advanced Data Processing**: Dive deeper into using Python libraries like `pandas`, `dask`, or other data tools with Shuttle.
3. **Monitor Your Task**: Learn how to integrate with monitoring solutions for your deployed tasks.
diff --git a/python/welcome/introduction.mdx b/python/welcome/introduction.mdx new file mode 100644 index 0000000..980c791 --- /dev/null +++ b/python/welcome/introduction.mdx @@ -0,0 +1,49 @@ +--- +title: "Welcome" +description: "Shuttle makes dealing with cloud infrastructure simple and enjoyable so our users can focus on creating great products." +icon: "hand-wave" +--- + + + + Installation and quickstart guide + + + Follow one of our tutorials + + + Get started from one of many templates + + + + + Complete reference for all Shuttle CLI commands and options + + + + Understand the core concepts and architecture of Shuttle + +