
CLI Advanced Workflows

Automate complex workflows and integrate Cursor AI into your development pipeline.

Reference: All features documented here are based on the official Cursor Documentation.

Note: The examples below show workflow patterns. In practice, use Cursor's Chat (Cmd/Ctrl + L) or Composer (Cmd/Ctrl + I) features. See CURSOR-USAGE-NOTE.md for details.

Workflow Automation Patterns

Basic Workflow Pattern

While Cursor doesn't have a CLI cursor-agent command, you can create workflows that prompt you to use Cursor features. (Later examples in this document use a hypothetical cursor-agent command as a placeholder for such a step.)

#!/bin/bash
# deploy.sh

# Run tests; if they pass, prompt for a Cursor review
if npm test; then
  echo "Tests passed. Use Cursor Chat (Cmd/Ctrl + L) to review before deploying."
  echo "Suggested prompt: 'review code quality and suggest improvements'"
else
  echo "Tests failed. Fix issues before deploying."
  exit 1
fi

Error Handling

Implement robust error handling:

#!/bin/bash

set -e  # Exit on error

# Workflow: Use Cursor to complete task, then verify
echo "Use Cursor Chat (Cmd/Ctrl + L) or Composer (Cmd/Ctrl + I) to complete: your task"

# After manual review, verify results
if [ ! -f "expected-output.txt" ]; then
  echo "Expected output not found"
  exit 1
fi

Retry Logic

Add retry logic for unreliable operations:

#!/bin/bash

MAX_RETRIES=3
RETRY_COUNT=0

while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
  if cursor-agent "your task"; then
    echo "Success"
    exit 0
  fi
  
  RETRY_COUNT=$((RETRY_COUNT + 1))
  echo "Attempt $RETRY_COUNT failed, retrying..."
  sleep 2
done

echo "Failed after $MAX_RETRIES attempts"
exit 1
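The fixed two-second delay above can be generalized to exponential backoff. The sketch below wraps the pattern in a reusable function; the retry helper name is our own, and true stands in for the real command:

```shell
#!/bin/bash
# retry.sh - retry a command with exponential backoff (illustrative sketch)
# Usage: retry <max_attempts> <command...>
retry() {
  local max_attempts=$1; shift
  local attempt=1 delay=1
  while true; do
    "$@" && return 0
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Failed after $attempt attempts" >&2
      return 1
    fi
    echo "Attempt $attempt failed, retrying in ${delay}s..." >&2
    sleep "$delay"
    attempt=$((attempt + 1))
    delay=$((delay * 2))  # double the wait between attempts
  done
}

# Example: a command that succeeds immediately
retry 3 true && echo "Success"
```

Doubling the delay between attempts gives transient failures (rate limits, flaky networks) time to clear without hammering the failing operation.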

Managing Multiple Tasks

Sequential Task Workflow

Since Cursor is an editor application, manage multiple tasks by organizing your work:

## Workflow for Multiple Tasks

1. **Task 1**: Open Cursor Chat, complete task 1
2. **Task 2**: Open Cursor Composer, complete task 2  
3. **Task 3**: Use inline edit (Cmd/Ctrl + K) for task 3

Document each task's status as you complete them.
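The sequential checklist above can be tracked with a small logging helper. Everything in this sketch is illustrative: the script name, log file, and task descriptions are placeholders, and the "work" step is where you would actually switch to Cursor:

```shell
#!/bin/bash
# track-tasks.sh - record completion status for a list of manual Cursor tasks
# (illustrative helper; file names and tasks are placeholders)

LOG_FILE="task-status.log"
: > "$LOG_FILE"  # start a fresh log

TASKS=(
  "review module A in Cursor Chat"
  "refactor module B in Composer"
  "inline-edit module C (Cmd/Ctrl + K)"
)

for i in "${!TASKS[@]}"; do
  echo "Task $((i + 1)): ${TASKS[$i]}"
  # In practice, pause here, do the work in Cursor, then record it:
  echo "$(date '+%Y-%m-%d %H:%M:%S') DONE task $((i + 1)): ${TASKS[$i]}" >> "$LOG_FILE"
done

echo "Logged ${#TASKS[@]} tasks to $LOG_FILE"
```

The log file doubles as the status document the section recommends keeping.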

Parallel Execution with Limits

Limit concurrent agents to avoid resource exhaustion:

#!/bin/bash

MAX_PARALLEL=3
PIDS=()

# Function to wait for a free slot
wait_for_slot() {
  while [ ${#PIDS[@]} -ge $MAX_PARALLEL ]; do
    local alive=()
    for pid in "${PIDS[@]}"; do
      # Keep only processes that are still running
      if kill -0 "$pid" 2>/dev/null; then
        alive+=("$pid")
      fi
    done
    PIDS=("${alive[@]}")
    sleep 1
  done
}

# Launch agents
for task in "task1" "task2" "task3" "task4" "task5"; do
  wait_for_slot
  cursor-agent "$task" &
  PIDS+=($!)
done

# Wait for all to complete
for pid in "${PIDS[@]}"; do
  wait $pid
done

CI/CD Integration

GitHub Actions

Integrate Cursor agents into GitHub Actions workflows:

name: Code Review with Cursor AI

on:
  pull_request:
    branches: [main]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Setup Cursor
        run: |
          # Install Cursor CLI
          # Add your installation steps here
      
      - name: Run AI Code Review
        env:
          CURSOR_API_KEY: ${{ secrets.CURSOR_API_KEY }}
        run: |
          cursor-agent --non-interactive "review this PR for code quality, security issues, and best practices"
      
      - name: Comment on PR
        uses: actions/github-script@v6
        with:
          script: |
            // Post review results as PR comment

GitLab CI

stages:
  - review

cursor-review:
  stage: review
  script:
    - cursor-agent --non-interactive "review code changes"
  only:
    - merge_requests

Jenkins Pipeline

pipeline {
    agent any
    
    stages {
        stage('AI Review') {
            steps {
                sh '''
                    cursor-agent --non-interactive "review code quality"
                '''
            }
        }
    }
}

Integration with Existing Workflows

npm Scripts Integration

Add Cursor agents to your package.json:

{
  "scripts": {
    "dev": "npm run dev:server",
    "dev:server": "node server.js",
    "ai:review": "cursor-agent 'review code quality'",
    "ai:test": "cursor-agent 'write tests for changed files'",
    "ai:docs": "cursor-agent 'generate documentation'",
    "pre-commit": "npm run ai:review"
  }
}

Git Hooks

Integrate with Git hooks (remember to make the hook file executable with chmod +x .git/hooks/pre-commit):

#!/bin/bash
# .git/hooks/pre-commit

# Run Cursor agent for code review
cursor-agent --non-interactive "review staged changes for issues"

if [ $? -ne 0 ]; then
  echo "AI review found issues. Commit aborted."
  exit 1
fi

Makefile Integration

Note that Makefile recipe lines must be indented with a tab character, not spaces:

.PHONY: ai-review ai-test ai-refactor

ai-review:
	cursor-agent "review code quality"

ai-test:
	cursor-agent "write tests for changed files"

ai-refactor:
	cursor-agent "refactor code following best practices"

# Combine with existing targets
test: ai-test
	npm test

Automating Code Reviews

Automated Review Script

#!/bin/bash
# auto-review.sh

BRANCH=$(git rev-parse --abbrev-ref HEAD)
FILES=$(git diff --name-only "main...$BRANCH")

echo "Reviewing changes in $BRANCH"
echo "Files changed: $FILES"

cursor-agent --non-interactive "review these files for code quality, security, and best practices: $FILES"

# Generate review report
cursor-agent "create a markdown report summarizing code review findings" > review-report.md

Pre-Push Review

#!/bin/bash
# .git/hooks/pre-push

REMOTE_NAME="$1"
URL="$2"

# Compare the local branch with its upstream
LOCAL=$(git rev-parse @)
UPSTREAM=$(git rev-parse @{u})
BASE=$(git merge-base @ @{u})

# Review new commits
if [ "$LOCAL" != "$UPSTREAM" ]; then
  COMMITS=$(git log --oneline "$BASE..$LOCAL")
  cursor-agent --non-interactive "review commits: $COMMITS"
fi

Automating Testing

Test Generation Script

#!/bin/bash
# generate-tests.sh

# Find source files without a matching test file
find src -name "*.ts" -not -name "*.test.ts" -not -name "*.spec.ts" | while read -r file; do
  TEST_FILE="${file%.ts}.test.ts"

  if [ ! -f "$TEST_FILE" ]; then
    echo "Generating tests for $file"
    cursor-agent "write comprehensive unit tests for $file" > "$TEST_FILE"
  fi
done

Test Fixing Script

#!/bin/bash
# fix-tests.sh

set -o pipefail  # propagate npm's exit status through the pipe

# Run tests, capturing output; if they fail, ask the agent to fix them
if ! npm test 2>&1 | tee test-output.log; then
  cursor-agent "analyze test failures in test-output.log and fix the failing tests"
fi

Automating Deployment

Deployment Pipeline

#!/bin/bash
# deploy.sh

set -e

echo "Running tests..."
npm test

echo "Building application..."
npm run build

echo "Running AI security scan..."
cursor-agent --non-interactive "scan code for security vulnerabilities"

echo "Deploying..."
# Your deployment commands here

Best Practices

  1. Error Handling: Always implement proper error handling
  2. Logging: Log agent actions for debugging
  3. Idempotency: Design scripts to be safely rerunnable
  4. Validation: Verify agent outputs before using them
  5. Security: Never expose API keys in scripts
  6. Testing: Test automation scripts before production use
  7. Documentation: Document what each script does
  8. Version Control: Track script changes in git
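Several of these practices (error handling, logging, idempotency, validation) can be combined in one script skeleton. This is a minimal sketch; the file names are placeholders, and the echo stands in for a real agent or build step:

```shell
#!/bin/bash
# workflow-skeleton.sh - error handling, logging, idempotency, and validation
set -euo pipefail  # error handling: stop on failures and unset variables

LOG_FILE="workflow.log"
OUTPUT_FILE="generated-report.md"  # placeholder artifact

# Logging: timestamped messages to both terminal and log file
log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') $*" | tee -a "$LOG_FILE"
}

# Idempotency: skip work that is already done, so reruns are safe
if [ -f "$OUTPUT_FILE" ]; then
  log "SKIP: $OUTPUT_FILE already exists"
  exit 0
fi

log "START: generating report"
echo "# Report" > "$OUTPUT_FILE"  # stand-in for the real agent/tool step

# Validation: verify the output before trusting it
if [ ! -s "$OUTPUT_FILE" ]; then
  log "ERROR: output missing or empty"
  exit 1
fi

log "DONE: wrote $OUTPUT_FILE"
```

Running the script twice performs the work once and logs a SKIP on the second run, which is what makes it safe to wire into hooks and CI retries.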

Example Scripts

See the examples directory for complete working examples.

Next Steps