Release Train 2021.1 (Q) Release Notes
- Upgrade to Querydsl 5.0
- MongoDB: @DocumentReference, schema derivation for encrypted fields, MongoDB 5.0 Time Series support
- Support for streaming large result sets in Spring Data JDBC, SQL Builder refinements around conditions, JOINs and SELECT projections
- Support for cyclic mapping in Neo4j projections and support for Querydsl in Neo4j
Details
Spring Data Build - 2.6
Domain models can now use jMolecules' @Identity annotation to denote the identifier property of an aggregate root, improving the developer experience when using jMolecules.
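A minimal sketch of an aggregate marking its identifier with @Identity (the Order type and its field are illustrative):

import org.jmolecules.ddd.annotation.AggregateRoot;
import org.jmolecules.ddd.annotation.Identity;

@AggregateRoot
public class Order {

    @Identity
    Long orderNumber; // picked up by Spring Data as the identifier property
}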
QuerydslPredicateExecutor, QueryByExampleExecutor and their reactive variants now define a findBy(…) query method that allows fluent definition of queries. The fluent API allows customization of projections and sort properties and offers various terminal methods that consume results as Stream or other return types. Support for the fluent API is available in all store modules that already support Querydsl or Query by Example.
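For example, a minimal sketch of the fluent API against a Querydsl-enabled repository (Person, QPerson and PersonProjection are illustrative types):

List<PersonProjection> result = repository.findBy(
        QPerson.person.lastname.eq("Matthews"),
        query -> query.as(PersonProjection.class)  // project results onto an interface or DTO
                .sortBy(Sort.by("firstname"))      // customize sorting
                .all());                           // terminal method; stream(), first(), page(…) are also available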
Repositories that support deleteInBatch and deleteAllInBatch methods now publish @DomainEvents when deleting multiple aggregates using batch functionality. Of Spring Data’s store modules, JPA currently supports this functionality.
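As a sketch, an aggregate can expose events through a @DomainEvents method; when such aggregates are removed via deleteAllInBatch(…), these events are now published (the entity and event types are illustrative):

@Entity
public class Order {

    @Id @GeneratedValue Long id;

    @DomainEvents
    Collection<Object> domainEvents() {
        return List.of(new OrderRemoved(id)); // published when the repository processes this aggregate
    }
}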
Repositories can now make use of SmallRye Mutiny types such as Uni and Multi in repository query methods. These types also serve as markers to detect whether a repository is reactive.
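A minimal sketch of a repository declaring Mutiny return types (the Person entity and query methods are illustrative):

import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;

public interface PersonRepository extends Repository<Person, String> {

    Multi<Person> findByLastname(String lastname); // Multi marks the repository as reactive

    Uni<Person> findByEmail(String email);
}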
RxJava 2 support is now deprecated for removal with Spring Data 3.0. RxJava 2 is end-of-life as of February 28, 2021 and we recommend using RxJava 3 instead.
Tickets
M3
RC1
JpaRepositoryFactory.getRepositoryFragments(RepositoryMetadata, EntityManager, EntityPathResolver, CrudMethodMetadata) allows customization of repository fragments and provides more contextual information without requiring reflective access to fields. The related ticket contains additional information.
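As a sketch, a custom factory can override the new hook to contribute an additional fragment; MyFragmentImpl is a hypothetical implementation, and the return type is assumed to be RepositoryComposition.RepositoryFragments as with the existing hook:

public class MyRepositoryFactory extends JpaRepositoryFactory {

    public MyRepositoryFactory(EntityManager entityManager) {
        super(entityManager);
    }

    @Override
    protected RepositoryComposition.RepositoryFragments getRepositoryFragments(RepositoryMetadata metadata,
            EntityManager entityManager, EntityPathResolver resolver, CrudMethodMetadata crudMethodMetadata) {

        // contribute a custom fragment on top of the default ones
        return super.getRepositoryFragments(metadata, entityManager, resolver, crudMethodMetadata)
                .append(RepositoryFragment.implemented(new MyFragmentImpl(entityManager)));
    }
}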
MongoDB 5.0 introduced Time Series collections that are optimized to efficiently store documents over time such as measurements or events. Those collections need to be created as such before inserting any data. Collections can be created by either running the createCollection command, defining time series collection options or extracting options from a @TimeSeries annotation used on domain classes.
@TimeSeries(collection = "weather", timeField = "timestamp")
public class Measurement {
String id;
Instant timestamp;
// ...
}
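A minimal usage sketch, assuming a MongoOperations instance named template; the collection can be created from the annotated type or with explicitly supplied options:

template.createCollection(Measurement.class); // derives time series options from @TimeSeries

// or programmatically, without the annotation:
template.createCollection("weather", CollectionOptions.timeSeries("timestamp"));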
See the Spring Data MongoDB documentation for further reference.
Wildcard indexes can be created programmatically or declaratively. The annotation-driven declaration style covers various use cases such as full-document indexes or indexes for maps. Consult the documentation for wildcard indexes to learn about the details.
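For example, a minimal sketch of the declarative and programmatic styles (the Product type and its attributes map are illustrative):

@Document
@WildcardIndexed
public class Product {
    String id;
    Map<String, String> attributes; // all keys of the map become indexable
}

// programmatic alternative, assuming a MongoOperations instance named template:
template.indexOps(Product.class).ensureIndex(new WildcardIndex("attributes"));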
Properties whose value is null are skipped when writing a Document from an entity. @Field(write=…) can now be used to control whether to skip such properties (the default) or to force a null property to be written to the Document.
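A minimal sketch, using a hypothetical middleName property to force a null value into the resulting Document:

public class Contact {

    String firstName;

    @Field(write = Field.Write.ALWAYS)
    String middleName; // written as null instead of being omitted
}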
MongoDB’s Client-Side Field-Level Encryption requires a schema map to let the driver transparently encrypt and decrypt fields of a document. To simplify the configuration, properties in the domain model can be annotated with @Encrypted. MongoJsonSchemaCreator can create the schema map for Mongo’s AutoEncryptionSettings based on the domain model. Schema generation considers the algorithm and key identifiers.
The documentation on Encrypted Fields explains the configuration in detail.
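As a sketch, assuming a configured MappingContext, annotated properties feed into the generated schema (the Patient type and its ssn property are illustrative):

@Document
@Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Random")
public class Patient {

    String id;

    @Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic")
    String ssn;
}

MongoJsonSchema schema = MongoJsonSchemaCreator.create(mappingContext)
        .createSchemaFor(Patient.class);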
Tickets
M1
M2
M3
Tickets
M1
- #1767 - DynamicMapping annotation should be applicable to any object field.
- #1454 - Allow disabling TypeHints.
- #1787 - Search with MoreLikeThisQuery should use Pageable.
- #1792 - Upgrade to Elasticsearch 7.12.1.
- #1800 - Improve handling of immutable classes.
- #1255 - Add pipeline aggregations to NativeSearchQuery.
- #1816 - Allow runtime_fields to be defined in the index mapping.
- #1831 - Upgrade to Elasticsearch 7.13.0.
- #1839 - Upgrade to Elasticsearch 7.13.1.
- #1862 - Add native support for range field types by using a range object.
- #1864 - Upgrade to Elasticsearch 7.13.3.
M3
RC1
- #1938 - Add @QueryAnnotation meta annotation to @Query.
- #1941 - Upgrade to Elasticsearch 7.15.0.
- #1909 - Add repository search for nullable or empty properties.
- #1950 - AbstractElasticsearchTemplate.searchForStream use Query scrolltime.
- #1945 - Enable custom converters for single fields.
- #1911 - Supply a custom Sort.Order providing Elasticsearch specific parameters.
- #769 - Support for field exclusion from source.
GA
Queries using the IN relation in combination with bind markers now use a single parameter bind marker for efficient statement reuse when using prepared statements. Using a single bind marker avoids unrolling bound collections into multiple bind markers, which made prepared statement caching depend on the actual parameters and previously led to increased memory usage.
PrimaryKeyClassEntityMetadataVerifier, which verifies mapping metadata for primary key types, no longer requires primary key types to subclass java.lang.Object directly. Records use java.lang.Record as their superclass, so the superclass check is no longer applied. We encourage using records as composite primary keys for partitioning primary keys, as those are not updatable in Cassandra itself.
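A minimal sketch, assuming the existing mapping annotations are placed on record components; the key and table names are illustrative:

@PrimaryKeyClass
public record StationReadingKey(
        @PrimaryKeyColumn(type = PrimaryKeyType.PARTITIONED) String stationId,
        @PrimaryKeyColumn(type = PrimaryKeyType.CLUSTERED, ordinal = 0) Instant recordedAt) {
}

@Table("station_reading")
public record StationReading(@PrimaryKey StationReadingKey key, double temperature) {
}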
It’s now possible to specify write options when using batch operations so you can customize TTL, timestamp and other options during batch writes.
Batch operations can now be configured to use Logged, Unlogged, or Counter batches by specifying the BatchType when obtaining CassandraBatchOperations. Previously, only Logged batches could be used.
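Putting both together, a minimal sketch assuming a CassandraTemplate named template and a person entity:

CassandraBatchOperations batchOps = template.batchOps(BatchType.UNLOGGED);
batchOps.insert(person, WriteOptions.builder().ttl(Duration.ofMinutes(10)).build());
batchOps.execute();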
As of this version, you can use a wide range of Redis 6.2 commands such as LMOVE/BLMOVE, ZMSCORE, ZRANDMEMBER, HRANDFIELD, and many more. Refer to the 2.6.0-M1 Release Notes for a full list of introduced commands.
LettuceConnectionFactory can now be configured by using a Lettuce RedisURI. This method creates a RedisConfiguration that can then be used to create LettuceConnectionFactory.
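A minimal sketch, assuming the static createRedisConfiguration(…) factory method; the URI is illustrative:

RedisURI uri = RedisURI.create("redis://localhost:6379/0");
RedisConfiguration configuration = LettuceConnectionFactory.createRedisConfiguration(uri);
LettuceConnectionFactory factory = new LettuceConnectionFactory(configuration);
factory.afterPropertiesSet();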
It’s now possible to configure a BatchStrategy for RedisCache. For now, the batch strategy supports cache clearing using either KEYS or SCAN with a configurable batch size.
For example, the following will configure a non-locking CacheWriter with a SCAN batching strategy:
RedisCacheManagerBuilder.fromCacheWriter(RedisCacheWriter.nonLockingRedisCacheWriter(connectionFactory, BatchStrategies.scan(42)));
SubscriptionListener is now supported when using MessageListener for subscription confirmation callbacks. ReactiveRedisMessageListenerContainer and ReactiveRedisOperations provide receiveLater(…) and listenToLater(…) methods to await until Redis acknowledges the subscription.
- #920 - The Postgres dialect now considers PGobject a simple type.