Merged
Changes from 4 commits
2 changes: 0 additions & 2 deletions .eslintignore

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -15,7 +15,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
node-version: [ 16.x, 18.x, 20.x ]
node-version: [ 18.x, 20.x ]
os: [ windows-latest, ubuntu-latest, macOS-latest ]

# Go
22 changes: 22 additions & 0 deletions .kiro/steering/product.md
@@ -0,0 +1,22 @@
# Product Overview

Dynalite is a fast, in-memory implementation of Amazon DynamoDB built on LevelDB. It provides a local DynamoDB-compatible server for development and testing purposes.

## Key Features
- Full DynamoDB API compatibility (matches live instances closely)
- Fast in-memory or persistent storage via LevelDB
- Supports both CLI and programmatic usage
- SSL support with self-signed certificates
- Configurable table state transition timings
- Comprehensive validation matching AWS DynamoDB

## Use Cases
- Local development and testing
- Fast startup alternative to DynamoDB Local (no JVM overhead)
- CI/CD pipelines requiring DynamoDB functionality
- Offline development environments

## Target Compatibility
- Matches AWS DynamoDB behavior including limits and error messages
- Tested against live DynamoDB instances across regions
- Supports DynamoDB API versions: DynamoDB_20111205, DynamoDB_20120810
74 changes: 74 additions & 0 deletions .kiro/steering/structure.md
@@ -0,0 +1,74 @@
# Project Structure

## Root Files
- `index.js` - Main server module and HTTP request handler
- `cli.js` - Command-line interface entry point
- `package.json` - Project configuration and dependencies

## Core Directories

### `/actions/`
Contains implementation modules for each DynamoDB operation:
- Each file corresponds to a DynamoDB API action (e.g., `listTables.js`, `putItem.js`)
- Functions accept `(store, data, callback)` parameters
- Return results via callback with `(err, data)` signature

### `/validations/`
Input validation and type checking for API operations:
- `index.js` - Core validation framework and utilities
- Individual validation files match action names (e.g., `listTables.js`)
- Each exports `types` object defining parameter validation rules
- May include `custom` validation functions

### `/db/`
Database layer and expression parsing:
- `index.js` - Core database operations and utilities
- `*.pegjs` - PEG.js grammar files for DynamoDB expressions
- `*Parser.js` - Generated parsers (built from .pegjs files)

### `/test/`
Comprehensive test suite:
- `helpers.js` - Test utilities and shared functions
- Individual test files match action names
- Uses Mocha framework with `should` assertions
- Supports both local and remote DynamoDB testing

### `/ssl/`
SSL certificate files for HTTPS support:
- Self-signed certificates for development
- Used when `--ssl` flag is enabled

## Architecture Patterns

### Action Pattern
```javascript
// actions/operationName.js
module.exports = function operationName (store, data, cb) {
  // Implementation
  cb(null, result)
}
```

### Validation Pattern
```javascript
// validations/operationName.js
exports.types = {
  ParameterName: {
    type: 'String',
    required: true,
    // additional constraints
  }
}
```

### Database Operations
- Use `store.tableDb` for table metadata
- Use `store.getItemDb(tableName)` for item storage
- Use `store.getIndexDb()` for secondary indexes
- All operations are asynchronous with callbacks
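The bullets above can be combined into a runnable sketch. Note that `makeStore` below is a hypothetical stand-in for dynalite's real store — only the method name `getItemDb` and the `(store, data, cb)` / `(err, data)` signatures come from this document; the key shapes and the `putThenGet` action are invented for illustration.

```javascript
// Hypothetical in-memory stand-in for the store object described above.
function makeStore () {
  var items = {}
  return {
    getItemDb: function (tableName) {
      if (!items[tableName]) items[tableName] = {}
      var db = items[tableName]
      return {
        put: function (key, val, cb) { db[key] = val; process.nextTick(cb, null) },
        get: function (key, cb) {
          if (!(key in db)) {
            var err = new Error('NotFound')
            err.name = 'NotFoundError'
            return process.nextTick(cb, err)
          }
          process.nextTick(cb, null, db[key])
        },
      }
    },
  }
}

// A toy action in the documented (store, data, cb) shape:
// write an item, then read it back, reporting via (err, data).
function putThenGet (store, data, cb) {
  var itemDb = store.getItemDb(data.TableName)
  itemDb.put(data.Key, data.Item, function (err) {
    if (err) return cb(err)
    itemDb.get(data.Key, cb)
  })
}
```

All calls stay asynchronous (via `process.nextTick`) to mirror the callback-based flow the real LevelDB-backed store uses.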

## Naming Conventions
- Files use camelCase matching DynamoDB operation names
- Action functions use camelCase (e.g., `listTables`, `putItem`)
- Database keys use specific encoding schemes for sorting
- Test files mirror the structure of implementation files
51 changes: 51 additions & 0 deletions .kiro/steering/tech.md
@@ -0,0 +1,51 @@
# Technology Stack

## Core Technologies
- **Runtime**: Node.js (>=16)
- **Database**: LevelDB via LevelUP with memdown for in-memory storage
- **HTTP Server**: Node.js built-in http/https modules
- **Parsing**: PEG.js for expression parsing (condition, projection, update expressions)
- **Cryptography**: Node.js crypto module for hashing and SSL
- **Async Control**: async library for flow control

## Key Dependencies
- `levelup` + `leveldown`/`memdown` - Database layer
- `subleveldown` - Database partitioning
- `big.js` - Precise decimal arithmetic for DynamoDB numbers
- `buffer-crc32` - CRC32 checksums for response validation
- `lazy` - Stream processing utilities
- `pegjs` - Parser generator for expressions
- `minimist` - CLI argument parsing

## Build System
- **Build Command**: `npm run build` - Compiles PEG.js grammar files to JavaScript parsers
- **Test Command**: `npm test` - Runs linting and Mocha test suite
- **Lint Command**: `npm run lint` - ESLint with @architect/eslint-config
- **Coverage**: `npm run coverage` - Test coverage via nyc

## Development Commands
```bash
# Install dependencies
npm install

# Build parsers from grammar files
npm run build

# Run tests (includes linting)
npm test

# Run with coverage
npm run coverage

# Start server programmatically
node index.js

# Start CLI server
node cli.js --port 4567
```

## Parser Generation
The project uses PEG.js to generate parsers from grammar files in `/db/*.pegjs`:
- `conditionParser.pegjs` → `conditionParser.js`
- `projectionParser.pegjs` → `projectionParser.js`
- `updateParser.pegjs` → `updateParser.js`
6 changes: 3 additions & 3 deletions actions/batchGetItem.js
@@ -15,13 +15,13 @@ module.exports = function batchGetItem (store, data, cb) {
for (table in tableResponses) {
// Order is pretty random
// Assign keys before we shuffle
tableResponses[table].forEach(function (tableRes, ix) { tableRes._key = data.RequestItems[table].Keys[ix] }) // eslint-disable-line no-loop-func
tableResponses[table].forEach(function (tableRes, ix) { tableRes._key = data.RequestItems[table].Keys[ix] })
shuffle(tableResponses[table])
res.Responses[table] = tableResponses[table].map(function (tableRes) { // eslint-disable-line no-loop-func
res.Responses[table] = tableResponses[table].map(function (tableRes) {
if (tableRes.Item) {
// TODO: This is totally inefficient - should fix this
var newSize = totalSize + db.itemSize(tableRes.Item)
if (newSize > (1024 * 1024 + store.options.maxItemSize - 3)) {
if (newSize > ((1024 * 1024) + store.options.maxItemSize - 3)) {
if (!res.UnprocessedKeys[table]) {
res.UnprocessedKeys[table] = { Keys: [] }
if (data.RequestItems[table].AttributesToGet)
126 changes: 71 additions & 55 deletions actions/createTable.js
@@ -7,9 +7,11 @@ module.exports = function createTable (store, data, cb) {
tableDb.lock(key, function (release) {
cb = release(cb)

tableDb.get(key, function (err) {
tableDb.get(key, function (err, existingTable) {
if (err && err.name != 'NotFoundError') return cb(err)
if (!err) {

// Check if table exists and is valid
if (!err && existingTable && typeof existingTable === 'object' && existingTable.TableStatus) {
err = new Error
err.statusCode = 400
err.body = {
@@ -19,69 +21,83 @@
return cb(err)
}

data.TableArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' + data.TableName
data.TableId = uuidV4()
data.CreationDateTime = Date.now() / 1000
data.ItemCount = 0
if (!data.ProvisionedThroughput) {
data.ProvisionedThroughput = { ReadCapacityUnits: 0, WriteCapacityUnits: 0 }
}
data.ProvisionedThroughput.NumberOfDecreasesToday = 0
data.TableSizeBytes = 0
data.TableStatus = 'CREATING'
if (data.BillingMode == 'PAY_PER_REQUEST') {
data.BillingModeSummary = { BillingMode: 'PAY_PER_REQUEST' }
data.TableThroughputModeSummary = { TableThroughputMode: 'PAY_PER_REQUEST' }
delete data.BillingMode
}
if (data.LocalSecondaryIndexes) {
data.LocalSecondaryIndexes.forEach(function (index) {
index.IndexArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' +
data.TableName + '/index/' + index.IndexName
index.IndexSizeBytes = 0
index.ItemCount = 0
// If table exists but is corrupted, delete it first
if (!err && existingTable && (!existingTable.TableStatus || typeof existingTable !== 'object')) {
tableDb.del(key, function () {
// Ignore deletion errors and proceed with creation
createNewTable()
})
return
}
if (data.GlobalSecondaryIndexes) {
data.GlobalSecondaryIndexes.forEach(function (index) {
index.IndexArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' +

// Table doesn't exist, create it
createNewTable()

function createNewTable () {
data.TableArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' + data.TableName
data.TableId = uuidV4()
data.CreationDateTime = Date.now() / 1000
data.ItemCount = 0
if (!data.ProvisionedThroughput) {
data.ProvisionedThroughput = { ReadCapacityUnits: 0, WriteCapacityUnits: 0 }
}
data.ProvisionedThroughput.NumberOfDecreasesToday = 0
data.TableSizeBytes = 0
data.TableStatus = 'CREATING'
if (data.BillingMode == 'PAY_PER_REQUEST') {
data.BillingModeSummary = { BillingMode: 'PAY_PER_REQUEST' }
data.TableThroughputModeSummary = { TableThroughputMode: 'PAY_PER_REQUEST' }
delete data.BillingMode
}
if (data.LocalSecondaryIndexes) {
data.LocalSecondaryIndexes.forEach(function (index) {
index.IndexArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' +
data.TableName + '/index/' + index.IndexName
index.IndexSizeBytes = 0
index.ItemCount = 0
index.IndexStatus = 'CREATING'
if (!index.ProvisionedThroughput) {
index.ProvisionedThroughput = { ReadCapacityUnits: 0, WriteCapacityUnits: 0 }
}
index.ProvisionedThroughput.NumberOfDecreasesToday = 0
})
}
index.IndexSizeBytes = 0
index.ItemCount = 0
})
}
if (data.GlobalSecondaryIndexes) {
data.GlobalSecondaryIndexes.forEach(function (index) {
index.IndexArn = 'arn:aws:dynamodb:' + tableDb.awsRegion + ':' + tableDb.awsAccountId + ':table/' +
data.TableName + '/index/' + index.IndexName
index.IndexSizeBytes = 0
index.ItemCount = 0
index.IndexStatus = 'CREATING'
if (!index.ProvisionedThroughput) {
index.ProvisionedThroughput = { ReadCapacityUnits: 0, WriteCapacityUnits: 0 }
}
index.ProvisionedThroughput.NumberOfDecreasesToday = 0
})
}

tableDb.put(key, data, function (err) {
if (err) return cb(err)
tableDb.put(key, data, function (err) {
if (err) return cb(err)

setTimeout(function () {
setTimeout(function () {

// Shouldn't need to lock/fetch as nothing should have changed
data.TableStatus = 'ACTIVE'
if (data.GlobalSecondaryIndexes) {
data.GlobalSecondaryIndexes.forEach(function (index) {
index.IndexStatus = 'ACTIVE'
})
}
// Shouldn't need to lock/fetch as nothing should have changed
data.TableStatus = 'ACTIVE'
if (data.GlobalSecondaryIndexes) {
data.GlobalSecondaryIndexes.forEach(function (index) {
index.IndexStatus = 'ACTIVE'
})
}

if (data.BillingModeSummary) {
data.BillingModeSummary.LastUpdateToPayPerRequestDateTime = data.CreationDateTime
}
if (data.BillingModeSummary) {
data.BillingModeSummary.LastUpdateToPayPerRequestDateTime = data.CreationDateTime
}

tableDb.put(key, data, function (err) {
// eslint-disable-next-line no-console
if (err && !/Database is not open/.test(err)) console.error(err.stack || err)
})
tableDb.put(key, data, function (err) {

if (err && !/Database is (not open|closed)/.test(err)) console.error(err.stack || err)
})

}, store.options.createTableMs)
}, store.options.createTableMs)

cb(null, { TableDescription: data })
})
cb(null, { TableDescription: data })
})
}
})
})

16 changes: 14 additions & 2 deletions actions/deleteTable.js
@@ -7,6 +7,18 @@ module.exports = function deleteTable (store, data, cb) {
store.getTable(key, false, function (err, table) {
if (err) return cb(err)

// Handle corrupted table entries
if (!table || typeof table !== 'object') {
// Table entry is corrupted, treat as if table doesn't exist
err = new Error
err.statusCode = 400
err.body = {
__type: 'com.amazonaws.dynamodb.v20120810#ResourceNotFoundException',
message: 'Requested resource not found: Table: ' + key + ' not found',
}
return cb(err)
}

// Check if table is ACTIVE or not?
if (table.TableStatus == 'CREATING') {
err = new Error
@@ -38,8 +50,8 @@

setTimeout(function () {
tableDb.del(key, function (err) {
// eslint-disable-next-line no-console
if (err && !/Database is not open/.test(err)) console.error(err.stack || err)

if (err && !/Database is (not open|closed)/.test(err)) console.error(err.stack || err)
})
}, store.options.deleteTableMs)

19 changes: 14 additions & 5 deletions actions/listTables.js
@@ -3,17 +3,26 @@ var once = require('once'),

module.exports = function listTables (store, data, cb) {
cb = once(cb)
var opts, limit = data.Limit || 100
var opts = {}, limit = data.Limit || 100

if (data.ExclusiveStartTableName)
opts = { gt: data.ExclusiveStartTableName }
// Don't use opts.gt since it doesn't work in this LevelDB implementation
// We'll filter manually after getting all results

db.lazy(store.tableDb.createKeyStream(opts), cb)
.take(limit + 1)
.take(Infinity) // Take all items since we need to filter manually
.join(function (names) {
// Filter to implement proper ExclusiveStartTableName behavior
// LevelDB's gt option doesn't work properly in this implementation
if (data.ExclusiveStartTableName) {
names = names.filter(function (name) {
return name > data.ExclusiveStartTableName
})
}

// Apply limit after filtering
var result = {}
if (names.length > limit) {
names.splice(limit)
names = names.slice(0, limit)
result.LastEvaluatedTableName = names[names.length - 1]
}
result.TableNames = names
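The filter-then-limit pagination introduced in the `listTables` change can be sketched standalone. The helper name `pageTableNames` is invented for illustration; the logic mirrors the diff: filter names lexicographically past `ExclusiveStartTableName`, then apply the limit and report `LastEvaluatedTableName` when results were truncated.

```javascript
// Standalone sketch of the ExclusiveStartTableName paging logic:
// filter first (manual replacement for LevelDB's `gt` option),
// then trim to the limit and flag truncation.
function pageTableNames (names, exclusiveStartTableName, limit) {
  if (exclusiveStartTableName) {
    names = names.filter(function (name) {
      return name > exclusiveStartTableName
    })
  }
  var result = {}
  if (names.length > limit) {
    names = names.slice(0, limit)
    result.LastEvaluatedTableName = names[names.length - 1]
  }
  result.TableNames = names
  return result
}

console.log(pageTableNames([ 'a', 'b', 'c', 'd' ], 'a', 2))
// → { LastEvaluatedTableName: 'c', TableNames: [ 'b', 'c' ] }
```

Filtering before limiting is what makes `LastEvaluatedTableName` correct: the last name actually returned becomes the cursor for the next page.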