This repository provides a native C++ client for Apache Spark Connect, enabling C++ applications to execute Spark SQL workloads against remote Spark clusters without requiring JVM dependencies.
Spark Connect introduces a decoupled client–server architecture for Apache Spark, where client applications construct logical query plans that are executed remotely by a Spark server. This project delivers an idiomatic, high-performance C++ interface for building and submitting Spark queries, with efficient Apache Arrow–based columnar data exchange over gRPC.
The library is intended for environments where:
- Native C++ integration is required
- JVM runtimes are impractical or undesirable
- High-throughput data movement is necessary
- Tight control over memory and system resources is important
- Existing performance-critical C++ systems need to integrate with Spark
Status: Work in Progress
- Native-first Spark integration for C++ ecosystems
- Efficient Arrow-based columnar data transfer
- Clear separation between client logic and remote Spark execution
- Compatibility with evolving Spark Connect protocols
- Predictable performance characteristics suitable for production systems
- Minimal runtime dependencies outside standard native infrastructure
The Spark Connect C++ client is designed for environments that require native system integration, efficient columnar data transfer, and low-overhead communication with remote Spark clusters.
- Enable high-performance C++ services to submit ingestion workloads and push structured datasets to Spark using Apache Arrow serialization, without JVM dependencies.
- Connect existing C++ infrastructure, such as industrial control systems, financial engines, robotics platforms, HPC pipelines, or scientific instrumentation, directly to Spark for large-scale analytics.
- Run high-speed inference using native libraries (e.g., TensorRT, ONNX Runtime, CUDA pipelines) and stream features, embeddings, or predictions into Spark for downstream analytics and training workflows.
- Build specialized connectors for proprietary data stores, binary protocols, in-memory engines, or high-throughput messaging systems that benefit from native execution and fine-grained memory control.
- Develop native dashboards, monitoring tools, and backend services that execute Spark SQL queries through Spark Connect with minimal client-side overhead.
- Use C++ aggregation services or gateway nodes to collect telemetry from distributed environments and forward structured datasets to centralized Spark clusters for processing.
- Evaluate Spark Connect performance characteristics, such as serialization overhead, network latency, query planning costs, and concurrent client behavior, using native workloads.
The Spark Connect C++ client is not a replacement for Python or Scala Spark APIs. Instead, it enables Spark integration in environments where native execution, performance constraints, or system-level integration are required.
Use this client when:

- You are integrating Spark into an existing C++ application or platform
- Your environment cannot depend on a JVM runtime
- You are building high-performance ingestion or producer services
- You require integration with:
  - HPC systems
  - scientific computing pipelines
  - robotics or industrial platforms
  - embedded gateways or native backend services
- You run AI/ML inference pipelines in C++ and need to forward structured outputs into Spark
- You are implementing proprietary data connectors or binary protocols
- You require precise control over memory layout and data transfer efficiency

Prefer the Python or Scala APIs when:

- You are building notebooks or data science workflows
- Your team primarily consists of data engineers or analysts
- You require the full Spark API surface immediately
- Rapid prototyping is more important than native performance
- Your applications already run comfortably in JVM or Python environments
| Category | API Feature | Status | Implementation |
|---|---|---|---|
| Session | Databricks / Local Conn | ● | Implemented |
| Reader | CSV | ● | Implemented |
| Reader | JSON | ● | Implemented |
| Reader | Parquet | ● | Implemented |
| Writer | Parquet (Overwrite/Gzip) | ● | Implemented |
| Schema | JSON / Print / Column List | ● | Implemented |
| Action | Show / Collect / Head / First | ● | Implemented |
| Query | SQL / Range | ● | Implemented |
| Logic | Filter / Where / Distinct | ● | Implemented |
| Relational | Joins (Inner, Outer, Expr) | ● | Implemented |
| Group | GroupBy & Global Aggs | ● | Implemented |
| Analytics | Window Functions | ○ | Planned |
| Catalog | Table/Database Management | ○ | Planned |
| Streaming | Structured Streaming | ◌ | Not Implemented |
| GraphFrames | Graph processing & analytics | ● | Implemented |
The Spark Connect C++ client follows the Spark Connect execution model:
Applications use a native C++ API to construct DataFrame operations and Spark SQL queries. These operations are translated into logical execution plans rather than executed locally.
The client builds Spark logical plans representing transformations, queries, and actions. No distributed computation occurs inside the client process.
Logical plans and data batches are serialized using:
- Protobuf for query plans and RPC communication
- Apache Arrow for efficient columnar data transfer
Communication with the Spark Connect server occurs via:
- gRPC streaming RPCs
- bidirectional execution channels
- Arrow batch streaming
The Spark Connect server:
- receives logical plans
- executes distributed workloads
- performs query planning and optimization
- returns Arrow-encoded results to the client
This separation enables native applications to leverage Spark’s distributed engine without embedding Spark or JVM runtimes locally.
- Apache Spark 3.5+ with Spark Connect enabled
- C++17 or later
- Required libraries:
  - gRPC
  - Protobuf
  - Apache Arrow
  - uuid
```sh
# --------------------------------
# Install all required dependencies
# --------------------------------
chmod +x ./install_deps.sh
./install_deps.sh

# ----------------------------------
# Build the Spark Connect client
# ----------------------------------
mkdir -p build && cd build
cmake ..
make -j$(nproc)
```
```sh
# --------------------------------
# Make sure Spark is running
# --------------------------------
docker compose up spark --build

# ---------------------------
# Run the full test suite
# ---------------------------
ctest --output-on-failure --verbose
# ctest --verbose --test-arguments=--gtest_color=yes

# -----------------------------
# Run a single test suite
# -----------------------------
ctest -R test_dataframe_reader --verbose

# ------------------------------
# Run a single test case
# ------------------------------
ctest -R test_dataframe_writer --test-args --gtest_filter=SparkIntegrationTest.ParquetWrite

# --------------------------------
# Run a test suite binary directly
# --------------------------------
./test_<suite_name>

# --------------------------------
# Run a single test case directly and show its output
# --------------------------------
./test_dataframe --gtest_filter=SparkIntegrationTest.DropDuplicates
```

To run the tests under Valgrind, configure a debug build with memory checking enabled:

```sh
mkdir -p build && cd build
cmake .. \
  -DCMAKE_BUILD_TYPE=Debug \
  -DCTEST_MEMORYCHECK_COMMAND=/usr/bin/valgrind \
  -DCTEST_MEMORYCHECK_TYPE=Valgrind
valgrind --leak-check=full --show-leak-kinds=all ./test_dataframe
```

This project uses gcovr for coverage reporting.
gcovr analyzes which parts of the compiled code were executed during the test suite execution.
```sh
sudo apt update
sudo apt install -y gcovr
```

Configure a coverage-enabled build, then generate the report:

```sh
cmake -S . -B build -DENABLE_COVERAGE=ON
cmake --build build -j
gcovr -r src \
  --object-directory build \
  --exclude '.*\.pb\.cc' \
  --exclude '.*\.grpc\.pb\.cc' \
  --exclude '.*\.h' \
  --html-details coverage.html \
  --html-theme green \
  --print-summary \
  --fail-under-line 70
```

This will generate a coverage report in HTML format.
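The `-DENABLE_COVERAGE=ON` flag assumes the project's CMakeLists defines such an option. A typical (hypothetical) definition, shown here only as a sketch of how gcov instrumentation is usually wired up, looks like:

```cmake
# Hypothetical sketch: compile and link with gcov instrumentation so that
# gcovr can find .gcda/.gcno files under the build directory.
option(ENABLE_COVERAGE "Build with coverage instrumentation" OFF)
if(ENABLE_COVERAGE)
  add_compile_options(--coverage -O0 -g)
  add_link_options(--coverage)
endif()
```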
```sh
mkdir -p build && cd build
cmake ..
make -j$(nproc)
cpack
```

This generates a compressed archive under the build directory:

```
CPack: Create package using TGZ
CPack: Install projects
CPack: - Run preinstall target for: spark_connect_cpp
CPack: - Install project: spark_connect_cpp []
CPack: Create package
CPack: - package: /home/user/spark-connect-cpp/build/spark-connect-cpp-0.0.1-linux-amd64.tar.gz generated.
```
Extract the contents of the archive:

```sh
tar -xzf spark-connect-cpp-0.0.1-linux-amd64.tar.gz
```

You can verify the contents by running:

```sh
ls -F spark-connect-cpp-0.0.1-linux-amd64
```

Copy the contents to a global path:

```sh
sudo cp -r ~/spark-connect-cpp/build/spark-connect-cpp-0.0.1-linux-amd64/include/spark_connect_cpp /usr/local/include/
sudo cp ~/spark-connect-cpp/build/spark-connect-cpp-0.0.1-linux-amd64/lib/libspark_connect_cpp.a /usr/local/lib/
```
The pkg-config directory serves as a central repository of metadata about libraries installed on a system. It contains `.pc` files describing each package: the location of its headers, its library binaries, and other descriptive information. The `pkg-config` command-line tool reads these files and reports the compilation and linking flags a consumer needs, so a codebase can be built portably against host-specific library locations that are stored outside of, yet referenced by, the codebase.
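A `.pc` file is plain-text metadata. A hypothetical `spark_connect_cpp.pc` (paths and fields shown here are illustrative, not the file shipped by the package) might look like:

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: spark_connect_cpp
Description: Native C++ client for Apache Spark Connect
Version: 0.0.1
Libs: -L${libdir} -lspark_connect_cpp
Cflags: -I${includedir}
```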
Create the pkg-config directory (if it does not exist) and install the metadata file:

```sh
sudo mkdir -p /usr/local/lib/pkgconfig
sudo cp ~/spark-connect-cpp/build/spark-connect-cpp-0.0.1-linux-amd64/lib/pkgconfig/spark_connect_cpp.pc /usr/local/lib/pkgconfig/
```

Verify that the contents were copied successfully:

```sh
ls /usr/local/lib/pkgconfig
```

Update the shared library cache:

```sh
sudo ldconfig
```

Verify Spark Connect C++ is accessible globally:

```sh
pkg-config --exists spark_connect_cpp && echo "Library is global" || echo "Library not found"
```

Running a sample application:
```cpp
#include <spark_connect_cpp/session.h>

int main()
{
    auto spark = &SparkSession::builder()
                      .master("sc://localhost")
                      .appName("demo")
                      .getOrCreate();

    auto df = spark->sql("SELECT 1 AS id");
    df.show();
}
```

Compile the sample application:

```sh
g++ src/main.cpp $(pkg-config --cflags --libs spark_connect_cpp) -o spark_app
```

Run the sample application:

```sh
./spark_app
```

Output:
```
+----------------------+
| id                   |
+----------------------+
| 1                    |
+----------------------+
```
For detailed development setup instructions, see:
Apache 2.0
