diff --git a/.github/workflows/spring-data-jdbc-sample.yaml b/.github/workflows/spring-data-jdbc-sample.yaml
index 0185a2905..7d5b4ea41 100644
--- a/.github/workflows/spring-data-jdbc-sample.yaml
+++ b/.github/workflows/spring-data-jdbc-sample.yaml
@@ -25,6 +25,9 @@ jobs:
with:
distribution: temurin
java-version: 17
- - name: Run tests
+ - name: Run tests on GoogleSQL
run: mvn test
- working-directory: samples/spring-data-jdbc
+ working-directory: samples/spring-data-jdbc/googlesql
+ - name: Run tests on PostgreSQL
+ run: mvn test
+ working-directory: samples/spring-data-jdbc/postgresql
diff --git a/samples/spring-data-jdbc/README.md b/samples/spring-data-jdbc/README.md
index da1d69532..4b6dbcc57 100644
--- a/samples/spring-data-jdbc/README.md
+++ b/samples/spring-data-jdbc/README.md
@@ -1,95 +1,17 @@
-# Spring Data JDBC Sample Application with Cloud Spanner PostgreSQL
+# Spring Data JDBC
-This sample application shows how to develop portable applications using Spring Data JDBC in
-combination with Cloud Spanner PostgreSQL. This application can be configured to run on either a
-[Cloud Spanner PostgreSQL](https://cloud.google.com/spanner/docs/postgresql-interface) database or
-an open-source PostgreSQL database. The only change that is needed to switch between the two is
-changing the active Spring profile that is used by the application.
-
-The application uses the Cloud Spanner JDBC driver to connect to Cloud Spanner PostgreSQL, and it
-uses the PostgreSQL JDBC driver to connect to open-source PostgreSQL. Spring Data JDBC works with
-both drivers and offers a single consistent API to the application developer, regardless of the
-actual database or JDBC driver being used.
-
-This sample shows:
-
-1. How to use Spring Data JDBC with Cloud Spanner PostgreSQL.
-2. How to develop a portable application that runs on both Google Cloud Spanner PostgreSQL and
- open-source PostgreSQL with the same code base.
-3. How to use bit-reversed sequences to automatically generate primary key values for entities.
-
-__NOTE__: This application does __not require PGAdapter__. Instead, it connects to Cloud Spanner
-PostgreSQL using the Cloud Spanner JDBC driver.
-
-## Cloud Spanner PostgreSQL
-
-Cloud Spanner PostgreSQL provides language support by expressing Spanner database functionality
-through a subset of open-source PostgreSQL language constructs, with extensions added to support
-Spanner functionality like interleaved tables and hinting.
-
-The PostgreSQL interface makes the capabilities of Spanner —__fully managed, unlimited scale, strong
-consistency, high performance, and up to 99.999% global availability__— accessible using the
-PostgreSQL dialect. Unlike other services that manage actual PostgreSQL database instances, Spanner
-uses PostgreSQL-compatible syntax to expose its existing scale-out capabilities. This provides
-familiarity for developers and portability for applications, but not 100% PostgreSQL compatibility.
-The SQL syntax that Spanner supports is semantically equivalent PostgreSQL, meaning schemas
-and queries written against the PostgreSQL interface can be easily ported to another PostgreSQL
-environment.
-
-This sample showcases this portability with an application that works on both Cloud Spanner PostgreSQL
-and open-source PostgreSQL with the same code base.
-
-## Spring Data JDBC
+This directory contains two sample applications for using Spring Data JDBC
+with the Spanner JDBC driver.
[Spring Data JDBC](https://spring.io/projects/spring-data-jdbc) is part of the larger Spring Data
-family. It makes it easy to implement JDBC based repositories. This module deals with enhanced
-support for JDBC based data access layers.
+family. It makes it easy to implement JDBC based repositories.
+This module deals with enhanced support for JDBC based data access layers.
Spring Data JDBC aims at being conceptually easy. In order to achieve this it does NOT offer caching,
lazy loading, write behind or many other features of JPA. This makes Spring Data JDBC a simple,
limited, opinionated ORM.
-## Sample Application
-
-This sample shows how to create a portable application using Spring Data JDBC and the Cloud Spanner
-PostgreSQL dialect. The application works on both Cloud Spanner PostgreSQL and open-source
-PostgreSQL. You can switch between the two by changing the active Spring profile:
-* Profile `cs` runs the application on Cloud Spanner PostgreSQL.
-* Profile `pg` runs the application on open-source PostgreSQL.
-
-The default profile is `cs`. You can change the default profile by modifying the
-[application.properties](src/main/resources/application.properties) file.
-
-### Running the Application
-
-1. Choose the database system that you want to use by choosing a profile. The default profile is
- `cs`, which runs the application on Cloud Spanner PostgreSQL. Modify the default profile in the
- [application.properties](src/main/resources/application.properties) file.
-2. Modify either [application-cs.properties](src/main/resources/application-cs.properties) or
- [application-pg.properties](src/main/resources/application-pg.properties) to point to an existing
- database. If you use Cloud Spanner, the database that the configuration file references must be a
- database that uses the PostgreSQL dialect.
-3. Run the application with `mvn spring-boot:run`.
-
-### Main Application Components
-
-The main application components are:
-* [DatabaseSeeder.java](src/main/java/com/google/cloud/spanner/sample/DatabaseSeeder.java): This
- class is responsible for creating the database schema and inserting some initial test data. The
- schema is created from the [create_schema.sql](src/main/resources/create_schema.sql) file. The
- `DatabaseSeeder` class loads this file into memory and executes it on the active database using
- standard JDBC APIs. The class also removes Cloud Spanner-specific extensions to the PostgreSQL
- dialect when the application runs on open-source PostgreSQL.
-* [JdbcConfiguration.java](src/main/java/com/google/cloud/spanner/sample/JdbcConfiguration.java):
- Spring Data JDBC by default detects the database dialect based on the JDBC driver that is used.
- This class overrides this default and instructs Spring Data JDBC to also use the PostgreSQL
- dialect for Cloud Spanner PostgreSQL.
-* [AbstractEntity.java](src/main/java/com/google/cloud/spanner/sample/entities/AbstractEntity.java):
- This is the shared base class for all entities in this sample application. It defines a number of
- standard attributes, such as the identifier (primary key). The primary key is automatically
- generated using a (bit-reversed) sequence. [Bit-reversed sequential values](https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key)
- are considered a good choice for primary keys on Cloud Spanner.
-* [Application.java](src/main/java/com/google/cloud/spanner/sample/Application.java): The starter
- class of the application. It contains a command-line runner that executes a selection of queries
- and updates on the database.
-
+- [GoogleSQL](googlesql): This sample uses the Spanner GoogleSQL dialect.
+- [PostgreSQL](postgresql): This sample uses the Spanner PostgreSQL dialect and the Spanner JDBC
+ driver. It does not use PGAdapter. The sample application can also be configured to run on
+ open-source PostgreSQL, and shows how a portable application can be developed using this setup.
diff --git a/samples/spring-data-jdbc/googlesql/README.md b/samples/spring-data-jdbc/googlesql/README.md
new file mode 100644
index 000000000..7abe334ef
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/README.md
@@ -0,0 +1,57 @@
+# Spring Data JDBC Sample Application with Spanner GoogleSQL
+
+This sample application shows how to use Spring Data JDBC with Spanner GoogleSQL.
+
+This sample shows:
+
+1. How to use Spring Data JDBC with Spanner GoogleSQL.
+2. How to use bit-reversed identity columns to automatically generate primary key values for entities (see the sketch after this list).
+3. How to set the transaction isolation level that is used by the Spanner JDBC driver.
+
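+The sketch below illustrates how an entity can map such a generated primary key. It is only an
+illustration; the actual `AbstractEntity` class in this sample defines more attributes (see the
+component overview further down).
+
+```java
+import org.springframework.data.annotation.Id;
+
+public abstract class AbstractEntity {
+  // The value is generated by the database. In GoogleSQL the backing column can, for example, be
+  // declared as: id INT64 NOT NULL GENERATED BY DEFAULT AS IDENTITY (BIT_REVERSED_POSITIVE).
+  @Id private Long id;
+
+  public Long getId() {
+    return id;
+  }
+
+  public void setId(Long id) {
+    this.id = id;
+  }
+}
+```
+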
+## Spring Data JDBC
+
+[Spring Data JDBC](https://spring.io/projects/spring-data-jdbc) is part of the larger Spring Data
+family. It makes it easy to implement JDBC based repositories. This module deals with enhanced
+support for JDBC based data access layers.
+
+Spring Data JDBC aims at being conceptually easy. In order to achieve this it does NOT offer caching,
+lazy loading, write behind or many other features of JPA. This makes Spring Data JDBC a simple,
+limited, opinionated ORM.
+
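+With Spring Data JDBC, a repository is a plain interface, and query methods are derived from the
+method names. As a rough sketch (the actual repository interfaces of this sample live under
+`src/main/java/com/google/cloud/spanner/sample/repositories`), a repository can look like this:
+
+```java
+import com.google.cloud.spanner.sample.entities.Singer;
+import java.util.List;
+import org.springframework.data.repository.CrudRepository;
+
+public interface SingerRepository extends CrudRepository<Singer, Long> {
+  // Spring Data JDBC derives a query with a `WHERE last_name LIKE ?` clause from this method name.
+  List<Singer> findSingersByLastNameStartingWith(String prefix);
+}
+```
+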
+### Running the Application
+
+The application by default runs on the Spanner Emulator.
+
+1. Modify the [application.properties](src/main/resources/application.properties) file to point to
+ an existing database. The database must use the GoogleSQL dialect.
+2. Run the application with `mvn spring-boot:run`.
+
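+Under the hood the application connects through the Spanner JDBC driver. As a rough sketch, a
+plain JDBC connection to the emulator looks like the following; the project, instance, and
+database names below are placeholders:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+public class ConnectToEmulator {
+  public static void main(String[] args) throws Exception {
+    // autoConfigEmulator=true connects without TLS and creates the instance and database on the
+    // emulator if they do not yet exist.
+    String url =
+        "jdbc:cloudspanner://localhost:9010/projects/test-project/instances/test-instance"
+            + "/databases/test-database;autoConfigEmulator=true";
+    try (Connection connection = DriverManager.getConnection(url);
+        ResultSet resultSet = connection.createStatement().executeQuery("SELECT 1")) {
+      while (resultSet.next()) {
+        System.out.println("SELECT 1 returned " + resultSet.getLong(1));
+      }
+    }
+  }
+}
+```
+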
+### Main Application Components
+
+The main application components are:
+* [DatabaseSeeder.java](src/main/java/com/google/cloud/spanner/sample/DatabaseSeeder.java): This
+ class is responsible for creating the database schema and inserting some initial test data. The
+ schema is created from the [create_schema.sql](src/main/resources/create_schema.sql) file. The
+ `DatabaseSeeder` class loads this file into memory and executes it on the active database using
+ standard JDBC APIs.
+* [SpannerDialectProvider](src/main/java/com/google/cloud/spanner/sample/SpannerDialectProvider.java):
+ Spring Data JDBC by default detects the database dialect based on the JDBC driver that is used.
+ Spanner GoogleSQL is not automatically recognized by Spring Data, so we add a dialect provider
+ for Spanner (a minimal sketch of such a provider is shown after this list).
+* [SpannerDialect](src/main/java/com/google/cloud/spanner/sample/SpannerDialect.java):
+ Spring Data JDBC requires a dialect for the database so that it knows which features are supported
+ and how to build clauses like `LIMIT` and `FOR UPDATE`. This class provides that information. It
+ is based on the built-in `AnsiDialect` in Spring Data JDBC.
+* [JdbcConfiguration.java](src/main/java/com/google/cloud/spanner/sample/JdbcConfiguration.java):
+ This configuration file serves two purposes:
+ 1. Make sure `OpenTelemetry` is initialized before any data sources.
+ 2. Add a converter for `LocalDate` properties. Spring Data JDBC by default maps these to `TIMESTAMP`
+ columns, but `DATE` is a better fit in Spanner.
+* [AbstractEntity.java](src/main/java/com/google/cloud/spanner/sample/entities/AbstractEntity.java):
+ This is the shared base class for all entities in this sample application. It defines a number of
+ standard attributes, such as the identifier (primary key). The primary key is automatically
+ generated using a (bit-reversed) sequence. [Bit-reversed sequential values](https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key)
+ are considered a good choice for primary keys on Cloud Spanner.
+* [Application.java](src/main/java/com/google/cloud/spanner/sample/Application.java): The starter
+ class of the application. It contains a command-line runner that executes a selection of queries
+ and updates on the database.
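+
+The dialect provider mentioned above plugs into the dialect resolution of Spring Data JDBC and is
+typically registered through a `META-INF/spring.factories` entry for
+`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider`. A minimal
+sketch of such a provider (the actual implementation in this sample may differ slightly):
+
+```java
+package com.google.cloud.spanner.sample;
+
+import java.sql.Connection;
+import java.util.Optional;
+import org.springframework.data.jdbc.repository.config.DialectResolver;
+import org.springframework.data.relational.core.dialect.Dialect;
+
+public class SpannerDialectProvider implements DialectResolver.JdbcDialectProvider {
+  @Override
+  public Optional<Dialect> getDialect(Connection connection) {
+    // Always return the custom Spanner dialect. A real implementation could first inspect the
+    // connection metadata to verify that it is connected to Spanner.
+    return Optional.of(SpannerDialect.INSTANCE);
+  }
+}
+```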
diff --git a/samples/spring-data-jdbc/googlesql/pom.xml b/samples/spring-data-jdbc/googlesql/pom.xml
new file mode 100644
index 000000000..57b0dea29
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/pom.xml
@@ -0,0 +1,138 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <groupId>org.example</groupId>
+  <artifactId>cloud-spanner-spring-data-jdbc-googlesql-example</artifactId>
+  <version>1.0-SNAPSHOT</version>
+
+  <description>Sample application showing how to use Spring Data JDBC with Cloud Spanner GoogleSQL.</description>
+
+  <properties>
+    <java.version>17</java.version>
+    <maven.compiler.source>17</maven.compiler.source>
+    <maven.compiler.target>17</maven.compiler.target>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+  </properties>
+
+  <dependencyManagement>
+    <dependencies>
+      <dependency>
+        <groupId>org.springframework.data</groupId>
+        <artifactId>spring-data-bom</artifactId>
+        <version>2024.1.5</version>
+        <scope>import</scope>
+        <type>pom</type>
+      </dependency>
+      <dependency>
+        <groupId>com.google.cloud</groupId>
+        <artifactId>google-cloud-spanner-bom</artifactId>
+        <version>6.91.1</version>
+        <scope>import</scope>
+        <type>pom</type>
+      </dependency>
+      <dependency>
+        <groupId>com.google.cloud</groupId>
+        <artifactId>libraries-bom</artifactId>
+        <version>26.59.0</version>
+        <scope>import</scope>
+        <type>pom</type>
+      </dependency>
+      <dependency>
+        <groupId>io.opentelemetry</groupId>
+        <artifactId>opentelemetry-bom</artifactId>
+        <version>1.49.0</version>
+        <type>pom</type>
+        <scope>import</scope>
+      </dependency>
+    </dependencies>
+  </dependencyManagement>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-data-jdbc</artifactId>
+      <version>3.4.5</version>
+    </dependency>
+
+    <dependency>
+      <groupId>com.google.cloud</groupId>
+      <artifactId>google-cloud-spanner-jdbc</artifactId>
+      <exclusions>
+        <exclusion>
+          <groupId>com.google.api.grpc</groupId>
+          <artifactId>proto-google-cloud-spanner-executor-v1</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>io.opentelemetry</groupId>
+      <artifactId>opentelemetry-sdk</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.google.cloud.opentelemetry</groupId>
+      <artifactId>exporter-trace</artifactId>
+      <version>0.34.0</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.cloud.opentelemetry</groupId>
+      <artifactId>exporter-metrics</artifactId>
+      <version>0.34.0</version>
+    </dependency>
+
+    <dependency>
+      <groupId>org.testcontainers</groupId>
+      <artifactId>testcontainers</artifactId>
+      <version>1.21.0</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.collections</groupId>
+      <artifactId>google-collections</artifactId>
+      <version>1.0</version>
+    </dependency>
+
+    <dependency>
+      <groupId>com.google.cloud</groupId>
+      <artifactId>google-cloud-spanner</artifactId>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.api</groupId>
+      <artifactId>gax-grpc</artifactId>
+      <classifier>testlib</classifier>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>4.13.2</version>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>com.spotify.fmt</groupId>
+        <artifactId>fmt-maven-plugin</artifactId>
+        <version>2.25</version>
+        <executions>
+          <execution>
+            <goals>
+              <goal>format</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+</project>
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/Application.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/Application.java
new file mode 100644
index 000000000..a75ea2fec
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/Application.java
@@ -0,0 +1,269 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import com.google.cloud.spanner.connection.SavepointSupport;
+import com.google.cloud.spanner.connection.SpannerPool;
+import com.google.cloud.spanner.jdbc.CloudSpannerJdbcConnection;
+import com.google.cloud.spanner.sample.entities.Album;
+import com.google.cloud.spanner.sample.entities.Singer;
+import com.google.cloud.spanner.sample.entities.Track;
+import com.google.cloud.spanner.sample.repositories.AlbumRepository;
+import com.google.cloud.spanner.sample.repositories.SingerRepository;
+import com.google.cloud.spanner.sample.repositories.TrackRepository;
+import com.google.cloud.spanner.sample.service.SingerService;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.Tracer;
+import io.opentelemetry.context.Scope;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Savepoint;
+import java.sql.Statement;
+import javax.sql.DataSource;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.boot.CommandLineRunner;
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+
+@SpringBootApplication
+public class Application implements CommandLineRunner {
+ private static final Logger logger = LoggerFactory.getLogger(Application.class);
+
+ public static void main(String[] args) {
+ // This automatically starts the Spanner emulator in a Docker container, unless the
+ // spanner.auto_start_emulator property has been set to false. In that case, it is a no-op.
+ EmulatorInitializer emulatorInitializer = new EmulatorInitializer();
+ try {
+ SpringApplication application = new SpringApplication(Application.class);
+ application.addListeners(emulatorInitializer);
+ application.run(args).close();
+ } finally {
+ SpannerPool.closeSpannerPool();
+ emulatorInitializer.stopEmulator();
+ }
+ }
+
+ private final DatabaseSeeder databaseSeeder;
+
+ private final SingerService singerService;
+
+ private final SingerRepository singerRepository;
+
+ private final AlbumRepository albumRepository;
+
+ private final TrackRepository trackRepository;
+
+ private final Tracer tracer;
+
+ private final DataSource dataSource;
+
+ public Application(
+ SingerService singerService,
+ DatabaseSeeder databaseSeeder,
+ SingerRepository singerRepository,
+ AlbumRepository albumRepository,
+ TrackRepository trackRepository,
+ Tracer tracer,
+ DataSource dataSource) {
+ this.databaseSeeder = databaseSeeder;
+ this.singerService = singerService;
+ this.singerRepository = singerRepository;
+ this.albumRepository = albumRepository;
+ this.trackRepository = trackRepository;
+ this.tracer = tracer;
+ this.dataSource = dataSource;
+ }
+
+ @Override
+ public void run(String... args) {
+ // Set the system property 'drop_schema' to true to drop any existing database
+ // schema when the application is executed.
+ if (Boolean.parseBoolean(System.getProperty("drop_schema", "false"))) {
+ logger.info("Dropping existing schema if it exists");
+ databaseSeeder.dropDatabaseSchemaIfExists();
+ }
+
+ logger.info("Creating database schema if it does not already exist");
+ databaseSeeder.createDatabaseSchemaIfNotExists();
+ logger.info("Deleting existing test data");
+ databaseSeeder.deleteTestData();
+ logger.info("Inserting fresh test data");
+ databaseSeeder.insertTestData();
+
+ Iterable<Singer> allSingers = singerRepository.findAll();
+ for (Singer singer : allSingers) {
+ logger.info(
+ "Found singer: {} with {} albums",
+ singer,
+ albumRepository.countAlbumsBySingerId(singer.getId()));
+ for (Album album : albumRepository.findAlbumsBySingerId(singer.getId())) {
+ logger.info("\tAlbum: {}, released at {}", album, album.getReleaseDate());
+ }
+ }
+
+ // Create a new singer and three albums in a transaction.
+ Singer insertedSinger =
+ singerService.createSingerAndAlbums(
+ new Singer("Amethyst", "Jiang"),
+ new Album(DatabaseSeeder.randomTitle()),
+ new Album(DatabaseSeeder.randomTitle()),
+ new Album(DatabaseSeeder.randomTitle()));
+ logger.info(
+ "Inserted singer {} {} {}",
+ insertedSinger.getId(),
+ insertedSinger.getFirstName(),
+ insertedSinger.getLastName());
+
+ // Create a new track record and insert it into the database.
+ Album album = albumRepository.getFirst().orElseThrow();
+ Track track = new Track(album, 1, DatabaseSeeder.randomTitle());
+ track.setSampleRate(3.14d);
+ // Spring Data JDBC supports the same base CRUD operations on entities as for example
+ // Spring Data JPA.
+ trackRepository.save(track);
+
+ // List all singers that have a last name starting with a 'J'.
+ logger.info("All singers with a last name starting with a 'J':");
+ for (Singer singer : singerRepository.findSingersByLastNameStartingWith("J")) {
+ logger.info("\t{}", singer.getFullName());
+ }
+
+ // The singerService.listSingersWithLastNameStartingWith(..) method uses a read-only
+ // transaction. You should prefer read-only transactions to read/write transactions whenever
+ // possible, as read-only transactions do not take locks.
+ logger.info("All singers with a last name starting with an 'A', 'B', or 'C'.");
+ for (Singer singer : singerService.listSingersWithLastNameStartingWith("A", "B", "C")) {
+ logger.info("\t{}", singer.getFullName());
+ }
+
+ // Run two concurrent transactions that conflict with each other to show the automatic retry
+ // behavior built into the JDBC driver.
+ concurrentTransactions();
+
+ // Use a savepoint to roll back to a previous point in a transaction.
+ savepoints();
+ }
+
+ void concurrentTransactions() {
+ // Create two transactions that conflict with each other to trigger a transaction retry.
+ // This sample is intended to show a couple of things:
+ // 1. Spanner will abort transactions that conflict. The Spanner JDBC driver will automatically
+ // retry aborted transactions internally, which ensures that both these transactions
+ // succeed without any errors. See
+ // https://cloud.google.com/spanner/docs/jdbc-session-mgmt-commands#retry_aborts_internally
+ // for more information on how the JDBC driver retries aborted transactions.
+ // 2. The JDBC driver adds information to the OpenTelemetry tracing that makes it easier to find
+ // transactions that were aborted and retried.
+ logger.info("Executing two concurrent transactions");
+ Span span = tracer.spanBuilder("update-singers").startSpan();
+ try (Scope ignore = span.makeCurrent();
+ Connection connection1 = dataSource.getConnection();
+ Connection connection2 = dataSource.getConnection();
+ Statement statement1 = connection1.createStatement();
+ Statement statement2 = connection2.createStatement()) {
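+ // Start a read/write transaction on each connection and give each transaction its own
+ // transaction tag. The tag must be set before the first statement in the transaction.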
+ statement1.execute("begin");
+ statement1.execute("set transaction_tag='update-singer-1'");
+ statement2.execute("begin");
+ statement2.execute("set transaction_tag='update-singer-2'");
+ long id = 0L;
+ statement1.execute("set statement_tag='fetch-singer-id'");
+ try (ResultSet resultSet = statement1.executeQuery("select id from singers limit 1")) {
+ while (resultSet.next()) {
+ id = resultSet.getLong(1);
+ }
+ }
+ String sql = "update singers set active=not active where id=?";
+ statement1.execute("set statement_tag='update-singer-1'");
+ try (PreparedStatement preparedStatement = connection1.prepareStatement(sql)) {
+ preparedStatement.setLong(1, id);
+ preparedStatement.executeUpdate();
+ }
+ statement2.execute("set statement_tag='update-singer-2'");
+ try (PreparedStatement preparedStatement = connection2.prepareStatement(sql)) {
+ preparedStatement.setLong(1, id);
+ preparedStatement.executeUpdate();
+ }
+ statement1.execute("commit");
+ statement2.execute("commit");
+ } catch (SQLException exception) {
+ span.recordException(exception);
+ throw new RuntimeException(exception);
+ } finally {
+ span.end();
+ }
+ }
+
+ void savepoints() {
+ // Run a transaction with a savepoint, and rollback to that savepoint.
+ logger.info("Executing a transaction with a savepoint");
+ Span span = tracer.spanBuilder("savepoint-sample").startSpan();
+ try (Scope ignore = span.makeCurrent();
+ Connection connection = dataSource.getConnection();
+ Statement statement = connection.createStatement()) {
+ // Enable savepoints for this connection.
+ connection
+ .unwrap(CloudSpannerJdbcConnection.class)
+ .setSavepointSupport(SavepointSupport.ENABLED);
+
+ statement.execute("begin");
+ statement.execute("set transaction_tag='transaction-with-savepoint'");
+
+ // Fetch a random album.
+ long id = 0L;
+ try (ResultSet resultSet =
+ statement.executeQuery(
+ "/*@statement_tag='fetch-album-id'*/ select id from albums limit 1")) {
+ while (resultSet.next()) {
+ id = resultSet.getLong(1);
+ }
+ }
+ // Set a savepoint that we can roll back to at a later moment in the transaction.
+ // Note that the savepoint name must be a valid identifier.
+ Savepoint savepoint = connection.setSavepoint("fetched_album_id");
+
+ String sql =
+ "/*@statement_tag='update-album-marketing-budget-by-10-percent'*/ update albums set marketing_budget=marketing_budget * 1.1 where id=?";
+ try (PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
+ preparedStatement.setLong(1, id);
+ preparedStatement.executeUpdate();
+ }
+
+ // Rollback to the savepoint that we set at an earlier stage, and then update the marketing
+ // budget by 20 percent instead.
+ connection.rollback(savepoint);
+
+ sql =
+ "/*@statement_tag='update-album-marketing-budget-by-20-percent'*/ update albums set marketing_budget=marketing_budget * 1.2 where id=?";
+ try (PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
+ preparedStatement.setLong(1, id);
+ preparedStatement.executeUpdate();
+ }
+ statement.execute("commit");
+
+ // Reset the state of the connection before returning it to the connection pool.
+ statement.execute("reset all");
+ } catch (SQLException exception) {
+ span.recordException(exception);
+ throw new RuntimeException(exception);
+ } finally {
+ span.end();
+ }
+ }
+}
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/DatabaseSeeder.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/DatabaseSeeder.java
new file mode 100644
index 000000000..3e370a097
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/DatabaseSeeder.java
@@ -0,0 +1,353 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import com.google.cloud.spanner.sample.entities.Singer;
+import com.google.common.collect.ImmutableList;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.Tracer;
+import io.opentelemetry.context.Scope;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.io.UncheckedIOException;
+import java.math.BigDecimal;
+import java.math.RoundingMode;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.time.LocalDate;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Random;
+import javax.annotation.Nonnull;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.core.io.Resource;
+import org.springframework.jdbc.core.BatchPreparedStatementSetter;
+import org.springframework.jdbc.core.JdbcTemplate;
+import org.springframework.stereotype.Component;
+import org.springframework.util.FileCopyUtils;
+
+/** This component creates the database schema and seeds it with some random test data. */
+@Component
+public class DatabaseSeeder {
+
+ /** Randomly generated names. */
+ public static final ImmutableList<Singer> INITIAL_SINGERS =
+ ImmutableList.of(
+ new Singer("Aaliyah", "Smith"),
+ new Singer("Benjamin", "Jones"),
+ new Singer("Chloe", "Brown"),
+ new Singer("David", "Williams"),
+ new Singer("Elijah", "Johnson"),
+ new Singer("Emily", "Miller"),
+ new Singer("Gabriel", "Garcia"),
+ new Singer("Hannah", "Rodriguez"),
+ new Singer("Isabella", "Hernandez"),
+ new Singer("Jacob", "Perez"));
+
+ private static final Random RANDOM = new Random();
+
+ private final JdbcTemplate jdbcTemplate;
+
+ private final Tracer tracer;
+
+ @Value("classpath:create_schema.sql")
+ private Resource createSchemaFile;
+
+ @Value("classpath:drop_schema.sql")
+ private Resource dropSchemaFile;
+
+ public DatabaseSeeder(JdbcTemplate jdbcTemplate, Tracer tracer) {
+ this.jdbcTemplate = jdbcTemplate;
+ this.tracer = tracer;
+ }
+
+ /** Reads a resource file into a string. */
+ private static String resourceAsString(Resource resource) {
+ try (Reader reader = new InputStreamReader(resource.getInputStream(), UTF_8)) {
+ return FileCopyUtils.copyToString(reader);
+ } catch (IOException e) {
+ throw new UncheckedIOException(e);
+ }
+ }
+
+ /**
+ * Removes any empty statements from the given array. This makes it safe to split the DDL script
+ * on semicolons, even if the script contains empty or trailing statements.
+ */
+ private String[] updateDdlStatements(String[] statements) {
+ // Remove any empty statements from the script.
+ return Arrays.stream(statements)
+ .filter(statement -> !statement.isBlank())
+ .toArray(String[]::new);
+ }
+
+ /** Creates the database schema if it does not yet exist. */
+ public void createDatabaseSchemaIfNotExists() {
+ // We can safely just split the script based on ';', as we know that there are no literals or
+ // other strings that contain semicolons in the script.
+ String[] statements = updateDdlStatements(resourceAsString(createSchemaFile).split(";"));
+ // Execute all the DDL statements as a JDBC batch. That ensures that Cloud Spanner will apply
+ // all statements in a single DDL batch, which again is a lot more efficient than executing them
+ // one-by-one.
+ jdbcTemplate.batchUpdate(statements);
+ }
+
+ /** Drops the database schema if it exists. */
+ public void dropDatabaseSchemaIfExists() {
+ // We can safely just split the script based on ';', as we know that there are no literals or
+ // other strings that contain semicolons in the script.
+ String[] statements = updateDdlStatements(resourceAsString(dropSchemaFile).split(";"));
+ // Execute all the DDL statements as a JDBC batch. That ensures that Cloud Spanner will apply
+ // all statements in a single DDL batch, which again is a lot more efficient than executing them
+ // one-by-one.
+ jdbcTemplate.batchUpdate(statements);
+ }
+
+ /** Deletes all data currently in the sample tables. */
+ public void deleteTestData() {
+ Span span = tracer.spanBuilder("deleteTestData").startSpan();
+ try (Scope ignore = span.makeCurrent()) {
+ // Delete all data in one batch.
+ jdbcTemplate.execute("set statement_tag='batch_delete_test_data'");
+ jdbcTemplate.batchUpdate(
+ "delete from concerts where true",
+ "delete from venues where true",
+ "delete from tracks where true",
+ "delete from albums where true",
+ "delete from singers where true");
+ } catch (Throwable t) {
+ span.recordException(t);
+ throw t;
+ } finally {
+ span.end();
+ }
+ }
+
+ /** Inserts some initial test data into the database. */
+ public void insertTestData() {
+ Span span = tracer.spanBuilder("insertTestData").startSpan();
+ try (Scope ignore = span.makeCurrent()) {
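+ // Run all inserts in one read/write transaction. The transaction tag must be set directly
+ // after 'begin', before any other statements in the transaction.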
+ jdbcTemplate.execute("begin");
+ jdbcTemplate.execute("set transaction_tag='insert_test_data'");
+ jdbcTemplate.execute("set statement_tag='insert_singers'");
+ jdbcTemplate.batchUpdate(
+ "insert into singers (first_name, last_name) values (?, ?)",
+ new BatchPreparedStatementSetter() {
+ @Override
+ public void setValues(@Nonnull PreparedStatement preparedStatement, int i)
+ throws SQLException {
+ preparedStatement.setString(1, INITIAL_SINGERS.get(i).getFirstName());
+ preparedStatement.setString(2, INITIAL_SINGERS.get(i).getLastName());
+ }
+
+ @Override
+ public int getBatchSize() {
+ return INITIAL_SINGERS.size();
+ }
+ });
+
+ List<Long> singerIds =
+ jdbcTemplate.query(
+ "select id from singers",
+ resultSet -> {
+ ImmutableList.Builder<Long> builder = ImmutableList.builder();
+ while (resultSet.next()) {
+ builder.add(resultSet.getLong(1));
+ }
+ return builder.build();
+ });
+ jdbcTemplate.execute("set statement_tag='insert_albums'");
+ jdbcTemplate.batchUpdate(
+ "insert into albums (title, marketing_budget, release_date, cover_picture, singer_id) values (?, ?, ?, ?, ?)",
+ new BatchPreparedStatementSetter() {
+ @Override
+ public void setValues(@Nonnull PreparedStatement preparedStatement, int i)
+ throws SQLException {
+ preparedStatement.setString(1, randomTitle());
+ preparedStatement.setBigDecimal(2, randomBigDecimal());
+ preparedStatement.setObject(3, randomDate());
+ preparedStatement.setBytes(4, randomBytes());
+ preparedStatement.setLong(5, randomElement(singerIds));
+ }
+
+ @Override
+ public int getBatchSize() {
+ return INITIAL_SINGERS.size() * 20;
+ }
+ });
+ jdbcTemplate.execute("commit");
+ } catch (Throwable t) {
+ try {
+ jdbcTemplate.execute("rollback");
+ } catch (Exception ignore) {
+ }
+ span.recordException(t);
+ throw t;
+ } finally {
+ span.end();
+ }
+ }
+
+ /** Generates a random title for an album or a track. */
+ static String randomTitle() {
+ return randomElement(ADJECTIVES) + " " + randomElement(NOUNS);
+ }
+
+ /** Returns a random element from the given list. */
+ static <T> T randomElement(List<T> list) {
+ return list.get(RANDOM.nextInt(list.size()));
+ }
+
+ /** Generates a random {@link BigDecimal}. */
+ BigDecimal randomBigDecimal() {
+ return BigDecimal.valueOf(RANDOM.nextDouble()).setScale(9, RoundingMode.HALF_UP);
+ }
+
+ /** Generates a random {@link LocalDate}. */
+ static LocalDate randomDate() {
+ return LocalDate.of(RANDOM.nextInt(200) + 1800, RANDOM.nextInt(12) + 1, RANDOM.nextInt(28) + 1);
+ }
+
+ /** Generates a random byte array with a length between 4 and 1024 bytes. */
+ static byte[] randomBytes() {
+ int size = RANDOM.nextInt(1020) + 4;
+ byte[] res = new byte[size];
+ RANDOM.nextBytes(res);
+ return res;
+ }
+
+ /** Some randomly generated nouns that are used to generate random titles. */
+ private static final ImmutableList<String> NOUNS =
+ ImmutableList.of(
+ "apple",
+ "banana",
+ "cherry",
+ "dog",
+ "elephant",
+ "fish",
+ "grass",
+ "house",
+ "key",
+ "lion",
+ "monkey",
+ "nail",
+ "orange",
+ "pen",
+ "queen",
+ "rain",
+ "shoe",
+ "tree",
+ "umbrella",
+ "van",
+ "whale",
+ "xylophone",
+ "zebra");
+
+ /** Some randomly generated adjectives that are used to generate random titles. */
+ private static final ImmutableList<String> ADJECTIVES =
+ ImmutableList.of(
+ "able",
+ "angelic",
+ "artistic",
+ "athletic",
+ "attractive",
+ "autumnal",
+ "calm",
+ "careful",
+ "cheerful",
+ "clever",
+ "colorful",
+ "confident",
+ "courageous",
+ "creative",
+ "curious",
+ "daring",
+ "determined",
+ "different",
+ "dreamy",
+ "efficient",
+ "elegant",
+ "energetic",
+ "enthusiastic",
+ "exciting",
+ "expressive",
+ "faithful",
+ "fantastic",
+ "funny",
+ "gentle",
+ "gifted",
+ "great",
+ "happy",
+ "helpful",
+ "honest",
+ "hopeful",
+ "imaginative",
+ "intelligent",
+ "interesting",
+ "inventive",
+ "joyful",
+ "kind",
+ "knowledgeable",
+ "loving",
+ "loyal",
+ "magnificent",
+ "mature",
+ "mysterious",
+ "natural",
+ "nice",
+ "optimistic",
+ "peaceful",
+ "perfect",
+ "pleasant",
+ "powerful",
+ "proud",
+ "quick",
+ "relaxed",
+ "reliable",
+ "responsible",
+ "romantic",
+ "safe",
+ "sensitive",
+ "sharp",
+ "simple",
+ "sincere",
+ "skillful",
+ "smart",
+ "sociable",
+ "strong",
+ "successful",
+ "sweet",
+ "talented",
+ "thankful",
+ "thoughtful",
+ "unique",
+ "upbeat",
+ "valuable",
+ "victorious",
+ "vivacious",
+ "warm",
+ "wealthy",
+ "wise",
+ "wonderful",
+ "worthy",
+ "youthful");
+}
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/EmulatorInitializer.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/EmulatorInitializer.java
new file mode 100644
index 000000000..afc55890e
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/EmulatorInitializer.java
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2024 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
+import org.springframework.context.ApplicationListener;
+import org.springframework.core.env.ConfigurableEnvironment;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.images.PullPolicy;
+import org.testcontainers.utility.DockerImageName;
+
+public class EmulatorInitializer
+ implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {
+ private GenericContainer<?> emulator;
+
+ @Override
+ public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
+ ConfigurableEnvironment environment = event.getEnvironment();
+ boolean useEmulator =
+ Boolean.TRUE.equals(environment.getProperty("spanner.emulator", Boolean.class));
+ boolean autoStartEmulator =
+ Boolean.TRUE.equals(environment.getProperty("spanner.auto_start_emulator", Boolean.class));
+ if (!(useEmulator && autoStartEmulator)) {
+ return;
+ }
+
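+ // Start the emulator in a Docker container and wait until its gRPC port (9010) accepts
+ // connections.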
+ emulator =
+ new GenericContainer<>(DockerImageName.parse("gcr.io/cloud-spanner-emulator/emulator"));
+ emulator.withImagePullPolicy(PullPolicy.alwaysPull());
+ emulator.addExposedPort(9010);
+ emulator.setWaitStrategy(Wait.forListeningPorts(9010));
+ emulator.start();
+
+ System.setProperty("spanner.endpoint", "//localhost:" + emulator.getMappedPort(9010));
+ }
+
+ public void stopEmulator() {
+ if (this.emulator != null) {
+ this.emulator.stop();
+ }
+ }
+}
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/JdbcConfiguration.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/JdbcConfiguration.java
new file mode 100644
index 000000000..2537ed7d7
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/JdbcConfiguration.java
@@ -0,0 +1,74 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import java.sql.JDBCType;
+import java.sql.SQLType;
+import java.time.LocalDate;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.context.annotation.DependsOn;
+import org.springframework.context.annotation.Lazy;
+import org.springframework.data.jdbc.core.convert.DefaultJdbcTypeFactory;
+import org.springframework.data.jdbc.core.convert.JdbcArrayColumns;
+import org.springframework.data.jdbc.core.convert.JdbcConverter;
+import org.springframework.data.jdbc.core.convert.JdbcCustomConversions;
+import org.springframework.data.jdbc.core.convert.MappingJdbcConverter;
+import org.springframework.data.jdbc.core.convert.RelationResolver;
+import org.springframework.data.jdbc.core.dialect.JdbcDialect;
+import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;
+import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
+import org.springframework.data.relational.core.dialect.Dialect;
+import org.springframework.data.relational.core.mapping.RelationalPersistentProperty;
+import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;
+
+/**
+ * This configuration class is registered as depending on OpenTelemetry, as the JDBC driver uses the
+ * globally registered OpenTelemetry instance. It also overrides the default jdbcConverter
+ * implementation to map LocalDate to the JDBC type DATE (the default implementation maps LocalDate
+ * to TIMESTAMP).
+ */
+@DependsOn("openTelemetry")
+@Configuration
+public class JdbcConfiguration extends AbstractJdbcConfiguration {
+
+ @Bean
+ @Override
+ public JdbcConverter jdbcConverter(
+ JdbcMappingContext mappingContext,
+ NamedParameterJdbcOperations operations,
+ @Lazy RelationResolver relationResolver,
+ JdbcCustomConversions conversions,
+ Dialect dialect) {
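+ // Re-create the same converter wiring as the default implementation in
+ // AbstractJdbcConfiguration; only the LocalDate-to-DATE mapping below is different.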
+ JdbcArrayColumns arrayColumns =
+ dialect instanceof JdbcDialect
+ ? ((JdbcDialect) dialect).getArraySupport()
+ : JdbcArrayColumns.DefaultSupport.INSTANCE;
+ DefaultJdbcTypeFactory jdbcTypeFactory =
+ new DefaultJdbcTypeFactory(operations.getJdbcOperations(), arrayColumns);
+ return new MappingJdbcConverter(
+ mappingContext, relationResolver, conversions, jdbcTypeFactory) {
+ @Override
+ public SQLType getTargetSqlType(RelationalPersistentProperty property) {
+ if (property.getActualType().equals(LocalDate.class)) {
+ return JDBCType.DATE;
+ }
+ return super.getTargetSqlType(property);
+ }
+ };
+ }
+}
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/OpenTelemetryConfiguration.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/OpenTelemetryConfiguration.java
new file mode 100644
index 000000000..076554473
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/OpenTelemetryConfiguration.java
@@ -0,0 +1,121 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import com.google.auth.oauth2.GoogleCredentials;
+import com.google.cloud.opentelemetry.metric.GoogleCloudMetricExporter;
+import com.google.cloud.opentelemetry.metric.MetricConfiguration;
+import com.google.cloud.opentelemetry.trace.TraceConfiguration;
+import com.google.cloud.opentelemetry.trace.TraceExporter;
+import com.google.cloud.spanner.SpannerOptions;
+import io.opentelemetry.api.OpenTelemetry;
+import io.opentelemetry.api.trace.Tracer;
+import io.opentelemetry.sdk.OpenTelemetrySdk;
+import io.opentelemetry.sdk.metrics.SdkMeterProvider;
+import io.opentelemetry.sdk.metrics.export.MetricExporter;
+import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
+import io.opentelemetry.sdk.resources.Resource;
+import io.opentelemetry.sdk.trace.SdkTracerProvider;
+import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
+import io.opentelemetry.sdk.trace.export.SpanExporter;
+import io.opentelemetry.sdk.trace.samplers.Sampler;
+import java.io.IOException;
+import java.util.concurrent.ThreadLocalRandom;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+// @AutoConfiguration(before = DataSourceAutoConfiguration.class)
+@Configuration
+public class OpenTelemetryConfiguration {
+
+ @Value("${open_telemetry.enabled}")
+ private boolean enabled;
+
+ @Value("${spanner.emulator}")
+ private boolean emulator;
+
+ @Value("${open_telemetry.project}")
+ private String project;
+
+ @Bean
+ public OpenTelemetry openTelemetry() {
+ if (!enabled || emulator) {
+ return OpenTelemetry.noop();
+ }
+
+ // Enable OpenTelemetry tracing in Spanner.
+ SpannerOptions.enableOpenTelemetryTraces();
+ SpannerOptions.enableOpenTelemetryMetrics();
+
+ if (!hasDefaultCredentials()) {
+ // Create a no-op OpenTelemetry object if this environment does not have any default
+ // credentials configured. This could for example be on local test environments that use
+ // the Spanner emulator.
+ return OpenTelemetry.noop();
+ }
+
+ TraceConfiguration.Builder traceConfigurationBuilder = TraceConfiguration.builder();
+ TraceConfiguration traceConfiguration = traceConfigurationBuilder.setProjectId(project).build();
+ SpanExporter traceExporter = TraceExporter.createWithConfiguration(traceConfiguration);
+
+ MetricConfiguration.Builder metricConfigurationBuilder = MetricConfiguration.builder();
+ MetricConfiguration metricConfiguration =
+ metricConfigurationBuilder.setProjectId(project).build();
+ MetricExporter metricExporter =
+ GoogleCloudMetricExporter.createWithConfiguration(metricConfiguration);
+
+ SdkMeterProvider sdkMeterProvider =
+ SdkMeterProvider.builder()
+ .registerMetricReader(PeriodicMetricReader.builder(metricExporter).build())
+ .build();
+
+ // Create an OpenTelemetry object and register it as the global OpenTelemetry object. This
+ // will automatically be picked up by the Spanner libraries and used for tracing.
+ return OpenTelemetrySdk.builder()
+ .setTracerProvider(
+ SdkTracerProvider.builder()
+ // Set sampling to 'AlwaysOn' in this example. In production, you want to reduce
+ // this to a smaller fraction to limit the number of traces that are being
+ // collected.
+ .setSampler(Sampler.alwaysOn())
+ .setResource(
+ Resource.builder()
+ .put(
+ "service.name",
+ "spanner-jdbc-spring-data-sample-"
+ + ThreadLocalRandom.current().nextInt())
+ .build())
+ .addSpanProcessor(BatchSpanProcessor.builder(traceExporter).build())
+ .build())
+ .setMeterProvider(sdkMeterProvider)
+ .buildAndRegisterGlobal();
+ }
+
+ private boolean hasDefaultCredentials() {
+ try {
+ return GoogleCredentials.getApplicationDefault() != null;
+ } catch (IOException exception) {
+ return false;
+ }
+ }
+
+ @Bean
+ public Tracer tracer(OpenTelemetry openTelemetry) {
+ return openTelemetry.getTracer("com.google.cloud.spanner.jdbc.sample.spring-data-jdbc");
+ }
+}
diff --git a/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/SpannerDialect.java b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/SpannerDialect.java
new file mode 100644
index 000000000..b89500584
--- /dev/null
+++ b/samples/spring-data-jdbc/googlesql/src/main/java/com/google/cloud/spanner/sample/SpannerDialect.java
@@ -0,0 +1,139 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.google.cloud.spanner.sample;
+
+import static java.time.ZoneId.systemDefault;
+
+import com.google.common.collect.ImmutableList;
+import java.sql.JDBCType;
+import java.sql.Timestamp;
+import java.time.LocalDateTime;
+import java.time.OffsetDateTime;
+import java.time.ZoneId;
+import java.util.Collection;
+import java.util.Date;
+import javax.annotation.Nonnull;
+import org.springframework.core.convert.converter.Converter;
+import org.springframework.data.convert.ReadingConverter;
+import org.springframework.data.convert.WritingConverter;
+import org.springframework.data.jdbc.core.mapping.JdbcValue;
+import org.springframework.data.relational.core.dialect.AnsiDialect;
+import org.springframework.data.relational.core.dialect.LimitClause;
+import org.springframework.data.relational.core.sql.IdentifierProcessing;
+import org.springframework.data.relational.core.sql.IdentifierProcessing.LetterCasing;
+import org.springframework.data.relational.core.sql.IdentifierProcessing.Quoting;
+
+/**
+ * The Spanner GoogleSQL dialect is relatively close to the standard ANSI dialect. We therefore
+ * create a custom dialect based on ANSI, but with a few overrides.
+ */
+public class SpannerDialect extends AnsiDialect {
+ public static final SpannerDialect INSTANCE = new SpannerDialect();
+
+ /** Spanner uses backticks for identifier quoting. */
+ private static final Quoting QUOTING = new Quoting("`");
+
+ /** Spanner supports mixed-case identifiers. */
+ private static final IdentifierProcessing IDENTIFIER_PROCESSING =
+ IdentifierProcessing.create(QUOTING, LetterCasing.AS_IS);
+
+ private static final LimitClause LIMIT_CLAUSE =
+ new LimitClause() {
+ private static final long DEFAULT_LIMIT_FOR_OFFSET = Long.MAX_VALUE / 2;
+
+ @Nonnull
+ @Override
+ public String getLimit(long limit) {
+ return String.format("LIMIT %d", limit);
+ }
+
+ @Nonnull
+ @Override
+ public String getOffset(long offset) {
+ // Spanner does not support an OFFSET clause without a LIMIT clause.
+ return String.format("LIMIT %d OFFSET %d", DEFAULT_LIMIT_FOR_OFFSET, offset);
+ }
+
+ @Nonnull
+ @Override
+ public String getLimitOffset(long limit, long offset) {
+ return String.format("LIMIT %d OFFSET %d", limit, offset);
+ }
+
+ @Nonnull
+ @Override
+ public Position getClausePosition() {
+ return Position.AFTER_ORDER_BY;
+ }
+ };
+
+ private SpannerDialect() {}
+
+ @Nonnull
+ @Override
+ public IdentifierProcessing getIdentifierProcessing() {
+ return IDENTIFIER_PROCESSING;
+ }
+
+ @Nonnull
+ @Override
+ public LimitClause limit() {
+ return LIMIT_CLAUSE;
+ }
+
+ @Nonnull
+ @Override
+ public Collection