
Commit 1affd4f ("Performance test code"), parent cd94a96

File tree: 6 files changed, +849 additions, −0 deletions

6 files changed

+849
-0
lines changed
Lines changed: 88 additions & 0 deletions
# Performance test execution for ScalarDB Data Loader

## Instructions to run the script

Execute the `e2e_test.sh` script from the `performance-test` folder of the repository:

```
./e2e_test.sh [options]
```

## Available command-line arguments

```
./e2e_test.sh [--memory=mem1,mem2,...] [--cpu=cpu1,cpu2,...] [--data-size=size] [--num-rows=number] [--image-tag=tag] [--import-args=args] [--export-args=args] [--network=network-name] [--skip-data-gen] [--disable-import] [--disable-export] [--no-clean-data] [--database-dir=path] [--use-jar] [--jar-path=path]
```

Options:

- `--memory=mem1,mem2,...`: Comma-separated list of memory limits for the Docker containers (e.g., 1g,2g,4g)
- `--cpu=cpu1,cpu2,...`: Comma-separated list of CPU limits for the Docker containers (e.g., 1,2,4)
- `--data-size=size`: Size of the data to generate (e.g., 1MB, 2GB)
- `--num-rows=number`: Number of rows to generate (e.g., 1000, 10000)

  Note: Either `--data-size` or `--num-rows` must be provided, but not both.

- `--image-tag=tag`: Docker image tag to use (default: 4.0.0-SNAPSHOT)
- `--import-args=args`: Arguments passed to the import command
- `--export-args=args`: Arguments passed to the export command
- `--network=network-name`: Docker network name (default: my-network)
- `--skip-data-gen`: Skip the data-generation step
- `--disable-import`: Skip the import test
- `--disable-export`: Skip the export test
- `--no-clean-data`: Don't clean up generated files after the test
- `--database-dir=path`: Path to the database directory
- `--use-jar`: Run the Data Loader from a JAR file instead of a Docker container
- `--jar-path=path`: Path to the JAR file (required with `--use-jar`)
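Since exactly one of `--data-size` and `--num-rows` must be supplied, the script presumably validates this up front. A minimal sketch of such a check (the function name is an assumption, not taken from the actual script):

```shell
#!/bin/bash
# Hypothetical sketch: exactly one of --data-size / --num-rows must be given.
validate_size_args() {
  data_size="$1"
  num_rows="$2"
  if [ -n "$data_size" ] && [ -n "$num_rows" ]; then
    echo "Error: --data-size and --num-rows are mutually exclusive" >&2
    return 1
  fi
  if [ -z "$data_size" ] && [ -z "$num_rows" ]; then
    echo "Error: one of --data-size or --num-rows is required" >&2
    return 1
  fi
  return 0
}
```

Calling it as `validate_size_args "$DATA_SIZE" "$NUM_ROWS"` right after argument parsing fails fast instead of surfacing the conflict mid-test.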
Examples:

```
# Using data size
./e2e_test.sh --memory=1g,2g,4g --cpu=1,2,4 --data-size=2MB --image-tag=4.0.0-SNAPSHOT
```

```
# Using number of rows
./e2e_test.sh --memory=1g,2g,4g --cpu=1,2,4 --num-rows=10000 --image-tag=4.0.0-SNAPSHOT
```

Example with JAR:

```
./e2e_test.sh --use-jar --jar-path=./scalardb-data-loader-cli.jar --import-args="--format csv --import-mode insert --mode transaction --transaction-size 10 --data-chunk-size 500 --max-threads 16" --export-args="--format csv --max-threads 8 --data-chunk-size 500"
```

### Import-Only Examples

To run only the import test (skipping export):

```
# Using data size
./e2e_test.sh --disable-export --memory=2g --cpu=2 --data-size=1MB --import-args="--format csv --import-mode insert --mode transaction --transaction-size 10 --max-threads 16"
```

```
# Using number of rows
./e2e_test.sh --disable-export --memory=2g --cpu=2 --num-rows=10000 --import-args="--format csv --import-mode insert --mode transaction --transaction-size 10 --max-threads 16"
```

With JAR:

```
./e2e_test.sh --disable-export --use-jar --jar-path=./scalardb-data-loader-cli.jar --import-args="--format csv --import-mode insert --mode transaction --transaction-size 10 --max-threads 16"
```

### Export-Only Examples

To run only the export test (skipping import):

```
./e2e_test.sh --disable-import --memory=2g --cpu=2 --export-args="--format csv --max-threads 8 --data-chunk-size 500"
```

With JAR:

```
./e2e_test.sh --disable-import --use-jar --jar-path=./scalardb-data-loader-cli.jar --export-args="--format csv --max-threads 8 --data-chunk-size 500"
```
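Because `--memory` and `--cpu` take comma-separated lists, the script presumably runs the test once per resource combination. A minimal sketch of expanding the lists into a run matrix (the helper name and echoed output are assumptions for illustration):

```shell
#!/bin/bash
# Hypothetical sketch: expand comma-separated --memory/--cpu lists into
# one (memory, cpu) combination per test run.
run_matrix() {
  local memories cpus mem cpu
  IFS=',' read -ra memories <<< "$1"
  IFS=',' read -ra cpus <<< "$2"
  for mem in "${memories[@]}"; do
    for cpu in "${cpus[@]}"; do
      # In the real script this would launch the container, e.g.:
      #   docker run --memory "$mem" --cpus "$cpu" ...
      echo "run: --memory=$mem --cpu=$cpu"
    done
  done
}
```

For example, `run_matrix "1g,2g,4g" "1,2,4"` would yield nine runs, one per combination.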
Lines changed: 40 additions & 0 deletions

#!/bin/bash

# Set variables
NETWORK_NAME="my-network"
POSTGRES_CONTAINER="postgres-db"
SCALARDB_PROPERTIES="$(pwd)/scalardb.properties"
SCHEMA_JSON="$(pwd)/schema.json"

# Step 1: Create a Docker network (if it does not already exist)
docker network inspect $NETWORK_NAME >/dev/null 2>&1 || \
  docker network create $NETWORK_NAME

# Step 2: Start the PostgreSQL container
docker run -d --name $POSTGRES_CONTAINER \
  --network $NETWORK_NAME \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydatabase \
  -p 5432:5432 \
  postgres:16

# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL to start..."
sleep 10

# Step 3: Create the 'test' schema
docker exec -i $POSTGRES_CONTAINER psql -U myuser -d mydatabase -c "CREATE SCHEMA IF NOT EXISTS test;"

# Step 4: Run the ScalarDB Schema Loader
docker run --rm --network $NETWORK_NAME \
  -v "$SCHEMA_JSON:/schema.json" \
  -v "$SCALARDB_PROPERTIES:/scalardb.properties" \
  ghcr.io/scalar-labs/scalardb-schema-loader:3.15.2-SNAPSHOT \
  -f /schema.json --config /scalardb.properties --coordinator

# Step 5: Verify schema creation
docker exec -i $POSTGRES_CONTAINER psql -U myuser -d mydatabase -c "\dn"

echo "✅ Schema Loader execution completed."
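The fixed `sleep 10` in the setup script can be flaky on slow machines and wasteful on fast ones. A more robust pattern is to poll until the database answers; a minimal sketch, where `wait_for` is a hypothetical helper (it assumes `pg_isready` is available inside the `postgres:16` image, which ships it):

```shell
#!/bin/bash
# Hypothetical replacement for the fixed sleep: retry a probe command until it
# succeeds or the retry budget is exhausted.
wait_for() {
  local retries="$1"; shift
  local attempt
  for ((attempt = 1; attempt <= retries; attempt++)); do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  echo "Timed out waiting for: $*" >&2
  return 1
}

# In the setup script, the probe would be something like:
# wait_for 30 docker exec "$POSTGRES_CONTAINER" pg_isready -U myuser -d mydatabase
```

This bounds the worst-case wait at `retries` seconds while returning as soon as PostgreSQL accepts connections.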
Lines changed: 6 additions & 0 deletions

scalar.db.storage=jdbc
scalar.db.contact_points=jdbc:postgresql://postgres-db:5432/mydatabase
scalar.db.username=myuser
scalar.db.password=mypassword
scalar.db.cross_partition_scan.enabled=true
scalar.db.transaction_manager=single-crud-operation
Lines changed: 25 additions & 0 deletions

{
  "test.all_columns": {
    "transaction": true,
    "partition-key": [
      "col1"
    ],
    "clustering-key": [
      "col2",
      "col3"
    ],
    "columns": {
      "col1": "BIGINT",
      "col2": "INT",
      "col3": "BOOLEAN",
      "col4": "FLOAT",
      "col5": "DOUBLE",
      "col6": "TEXT",
      "col7": "BLOB",
      "col8": "DATE",
      "col9": "TIME",
      "col10": "TIMESTAMP",
      "col11": "TIMESTAMPTZ"
    }
  }
}
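The data-generation step presumably produces rows shaped like this `test.all_columns` table. A minimal sketch of emitting a CSV header plus N rows with one plausible value per column type (the function name and the sample values are assumptions, not the script's actual generator):

```shell
#!/bin/bash
# Hypothetical generator: CSV header plus N rows matching test.all_columns
# (BIGINT, INT, BOOLEAN, FLOAT, DOUBLE, TEXT, BLOB as base64, DATE, TIME,
# TIMESTAMP, TIMESTAMPTZ).
gen_rows() {
  local n="$1" i
  echo "col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11"
  for ((i = 1; i <= n; i++)); do
    echo "$i,$((i % 100)),true,1.5,2.25,text$i,3q2+vg==,2024-01-01,12:00:00,2024-01-01T12:00:00,2024-01-01T12:00:00Z"
  done
}
```

Using the row counter as the `col1` partition key keeps every generated row unique, which matters for an insert-mode import test.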
