
Okaeri Persistence


Object Document Mapping (ODM) library for Java - store JSON documents in MongoDB, PostgreSQL, MariaDB, H2, Redis, or flat files with a consistent API. Write your data layer once, run it anywhere.

Features

  • Write Once, Run Anywhere: Swap databases with one line - the core Java philosophy (without the XML hell)
  • Fluent Query DSL: Filtering, ordering, and pagination across all backends (native for MongoDB/PostgreSQL/MariaDB/H2, in-memory for others)
  • Fluent Update DSL: Field and array operations (native atomic for MongoDB/PostgreSQL/MariaDB, in-memory for others)
  • Repository Pattern: Define method names, get auto-implemented finders (findByName, streamByLevel, etc.)
  • Unified Indexing: Declare indexes once, backends create native indexes when supported
  • Document-Based: Store data as JSON/YAML documents - flexible but not schema-free
  • Streaming Support: Process large datasets with Java streams and automatic batching

The Philosophy (and the Pitfalls)

The Good: Write your persistence code once against our interface, switch from MongoDB to PostgreSQL without changing application code. Your dev team can use H2, staging uses PostgreSQL, production uses MongoDB. Same code.

The Catch: You're trading database-specific optimizations for portability. Need MongoDB's aggregation pipeline? You'll have to fetch and process in Java. Need PostgreSQL's full-text search? Same deal. This library is for when you value flexibility and developer velocity over squeezing every bit of performance from your database.

Good For: Apps where data naturally clusters around an ID (user profiles, game state, session data), rapid prototyping, when you want to defer the database choice.

Not Good For: Complex joins, analytical queries, when you need database-specific features, when you need every bit of performance.

Requirements

Java

  • Java 8 or higher for library code
  • Java 21 for running tests (but your app can use Java 8)

Backends

Pick one (or multiple):

Native Document Support:

  • MongoDB (okaeri-persistence-mongo): Uses the official MongoDB driver. Native document store with automatic index creation and native filtering by properties.
  • PostgreSQL (okaeri-persistence-jdbc): Uses the official PostgreSQL JDBC driver with HikariCP. Stores documents as JSONB with native GIN indexes and JSONB operators for filtering.

Other Storage:

  • MariaDB (okaeri-persistence-jdbc): Uses HikariCP with MariaDB. Stores documents using the native JSON datatype with native query translation (JSON_EXTRACT, JSON_UNQUOTE). Native indexes via stored generated columns.
  • H2 (okaeri-persistence-jdbc): Uses HikariCP with H2. Stores documents as the native JSON type with native query translation using the field reference syntax (value)."field". No index support.
  • Redis (okaeri-persistence-redis): Uses the Lettuce client. Stores JSON as strings in Redis hashes. No index support; filtering is done in memory.
  • Flat Files (okaeri-persistence-flat): File-based storage using any okaeri-configs format (YAML/JSON/HOCON). In-memory indexes.
  • In-Memory (okaeri-persistence-core): Pure in-memory storage with in-memory indexes. Zero persistence.

Installation

Maven

<repositories>
    <repository>
        <id>okaeri-releases</id>
        <url>https://repo.okaeri.cloud/releases</url>
    </repository>
</repositories>
<dependency>
    <groupId>eu.okaeri</groupId>
    <artifactId>okaeri-persistence-mongo</artifactId>
    <version>3.0.1-beta.19</version>
</dependency>

Gradle (Kotlin DSL)

repositories {
    maven("https://repo.okaeri.cloud/releases")
}
dependencies {
    implementation("eu.okaeri:okaeri-persistence-mongo:3.0.1-beta.19")
}

Replace mongo with jdbc, redis, or flat, depending on your backend (PostgreSQL, MariaDB, and H2 all share the jdbc artifact).

Quick Start

1. Define Your Document

@Data
public class User extends Document {
    private String name;
    private int level;
    private Instant lastLogin;
    private List<String> achievements;
}

2. Create a Repository

@DocumentCollection(
    path = "users",
    // keyLength auto-detected: UUID=36, Integer=11, Long=20, others=255
    indexes = {
        @DocumentIndex(path = "name", maxLength = 32),  // Optional (default 255). Used only by MariaDB
        @DocumentIndex(path = "level")
    }
)
public interface UserRepository extends DocumentRepository<UUID, User> {

    // Method names are parsed automatically - no annotations needed!
    Optional<User> findByName(String name);
    Stream<User> streamByLevel(int level);
    List<User> findByLevelAndName(int level, String name);
}

3. Use It

import static eu.okaeri.persistence.filter.OrderBy.*;
import static eu.okaeri.persistence.filter.condition.Condition.*;
import static eu.okaeri.persistence.filter.predicate.SimplePredicate.*;

// Setup (MongoDB example - swap for any backend)
MongoClient mongo = MongoClients.create("mongodb://localhost");
DocumentPersistence persistence = new DocumentPersistence(
    new MongoPersistence(mongo, "mydb", JsonSimpleConfigurer::new)
);

// Create repository (convenience method)
UserRepository users = persistence.createRepository(UserRepository.class);

// Advanced: manual approach for custom ClassLoader or collection customization
// PersistenceCollection collection = PersistenceCollection.of(UserRepository.class);
// persistence.registerCollection(collection);
// UserRepository users = RepositoryDeclaration.of(UserRepository.class)
//     .newProxy(persistence, collection, customClassLoader);

// Create (UUID auto-generated on save)
User alice = new User();
alice.setName("alice");
alice.setLevel(42);
alice.setAchievements(List.of("speedrun", "pacifist"));
users.save(alice);

// Find by ID
User found = users.findByPath(alice.getPath()).orElseThrow();

// Find by indexed field (auto-implemented from method name)
User byName = users.findByName("alice").orElseThrow();

// Query with filtering and ordering
List<User> topPlayers = users.find(q -> q
  .where(on("level", gt(10)))
  .orderBy(desc("level"), asc("name"))
  .limit(10))
  .toList();

// Stream processing
users.streamByLevel(42)
  .filter(u -> u.getAchievements().size() > 1)
  .forEach(u -> System.out.println(u.getName()));

Query DSL

The find() method takes a lambda that builds a query and returns a Stream:

// Filtering
List<User> users = userRepo.find(q -> q
  .where(on("level", gt(10))))
  .toList();

// Multiple conditions
List<User> users = userRepo.find(q -> q
  .where(and(
    on("level", gte(10)),
    on("lastLogin", gt(yesterday)))))
  .toList();

// Ordering (single or multiple)
List<User> users = userRepo.find(q -> q
  .orderBy(desc("level")))
  .toList();

List<User> users = userRepo.find(q -> q
  .orderBy(desc("score"), asc("name")))
  .toList();

// Nested properties
List<Profile> profiles = profileRepo.find(q -> q
  .where(on("address.city", eq("London")))
  .orderBy(asc("profile.age")))
  .toList();

// Pagination
List<User> users = userRepo.find(q -> q
  .where(on("active", eq(true)))
  .orderBy(desc("score"))
  .skip(20)
  .limit(10))
  .toList(); // Page 3 of results

// Advanced: string predicates, case-insensitive matching, IN/NOT IN, null checks
List<User> results = userRepo.find(q -> q
  .where(and(
    on("name", contains("smith").ignoreCase()),      // .ignoreCase() works with startsWith/endsWith/contains
    on("username", eqi("alice")),                    // eqi() or eq().ignoreCase() for case-insensitive equals
    on("role", in("ADMIN", "MODERATOR")),            // in() and notIn() for collections
    on("level", between(10, 50)),                    // between() is sugar for gte + lte
    on("deletedAt", notNull()),                      // isNull()/notNull() for null checks
    or(
      on("verified", eq(true)),
      on("email", endsWith("@trusted.com"))
    )))
  .orderBy(desc("level"), asc("name"))
  .skip(0)
  .limit(25))
  .toList();

Backend Support:

  • MongoDB: Native query translation with $gt, $and, etc.
  • PostgreSQL: Native JSONB operators (->, ->>, @>) with GIN indexes
  • MariaDB: Native JSON functions (JSON_EXTRACT, JSON_UNQUOTE) with proper type casting
  • H2: Native JSON field reference syntax ((column)."field") with type casting
  • Redis, Flat Files, In-Memory: In-memory filter evaluation (fetch all, filter in Java)

Performance Note: Native backends (MongoDB, PostgreSQL, MariaDB, H2) push filtering to the database. Other backends fetch all documents and filter in memory.

Update DSL

Modify documents with field and array operations:

import static eu.okaeri.persistence.filter.UpdateBuilder.*;

// Update by ID - returns boolean (true if modified)
boolean updated = users.updateOne(userId, u -> u
  .set("level", 43)
  .increment("exp", 100));

// Update by entity - returns boolean
boolean updated = users.updateOne(alice, u -> u
  .push("achievements", "speedrun"));

// Update multiple with WHERE - returns count
long count = users.update(u -> u
  .where(on("level", gte(10)))
  .increment("exp", 50));

// Update and return NEW version
Optional<User> newVersion = users.updateOneAndGet(userId, u -> u
  .set("verified", true));

// Update and return OLD version
Optional<User> oldVersion = users.getAndUpdateOne(userId, u -> u
  .unset("tempToken"));

Operations:

// Field operations
.set("name", "bob")              // Set field value
.set("profile.age", 25)          // Nested fields supported
.unset("token")                  // Remove field (set to null)
.increment("score", 10)          // Add to number (use negative to subtract)
.multiply("damage", 1.5)         // Multiply number
.min("bestTime", 42.5)           // Update only if new value is smaller
.max("highScore", 1000)          // Update only if new value is larger
.currentDate("updatedAt")        // Set to current timestamp (ISO-8601)

// Array operations
.push("tags", "a")               // Append value(s) to array
.push("tags", "a", "b", "c")     // Varargs for multiple values
.popFirst("queue")               // Remove first element
.popLast("history")              // Remove last element
.pull("tags", "old")             // Remove all occurrences of value
.pull("flags", null)             // Supports null
.pullAll("roles", "A", "B")      // Remove multiple values (varargs)
.addToSet("badges", "new")       // Add if not present (varargs, supports null)

Important: Each field can only appear once per update. Use multiple .set() calls for different fields, or chain separate updateOne() calls for complex scenarios.
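
A minimal sketch of that rule, reusing the users repository and userId from the examples above:

```java
// Fine: each field appears once, even across different operations
users.updateOne(userId, u -> u
    .set("name", "bob")
    .set("profile.bio", "hello")    // different field - allowed
    .increment("score", 5));        // different field - allowed

// Not allowed: "score" appears twice in a single update
// users.updateOne(userId, u -> u.set("score", 1).increment("score", 5));

// Instead, chain separate updateOne() calls (note: not atomic as a pair)
users.updateOne(userId, u -> u.set("score", 1));
users.updateOne(userId, u -> u.increment("score", 5));
```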

Backend Support:

  • MongoDB/PostgreSQL: Native atomic operations
  • MariaDB: Native atomic* operations
    • *Non-atomic in-memory fallback for pull/pullAll/addToSet
  • In-Memory: Synchronized operations with per-document locking
  • H2/Redis/Flat Files: In-memory evaluation (non-atomic)

Repository Methods

Define methods in your repository interface and they're auto-implemented based on method name parsing (works for any field, but indexing recommended for performance):

@DocumentCollection(path = "players", indexes = {
    @DocumentIndex(path = "username", maxLength = 16),
    @DocumentIndex(path = "rank", maxLength = 32),
    @DocumentIndex(path = "stats.level")
})
public interface PlayerRepository extends DocumentRepository<UUID, Player> {

    // === Simple equality (parsed from method name) ===
    Optional<Player> findByUsername(String username);
    Stream<Player> streamByRank(String rank);
    List<Player> findByRank(String rank);

    // === Multiple conditions (AND/OR) ===
    List<Player> findByRankAndUsername(String rank, String username);
    List<Player> findByRankOrUsername(String rank, String username);
    // AND has precedence: A OR B AND C → A OR (B AND C)
    List<Player> findByUsernameOrRankAndLevel(String username, String rank, int level);

    // === Nested properties (auto-discovered from camelCase or use $ as separator) ===
    Stream<Player> findByStatsLevel(int level);      // statsLevel → stats.level
    List<Player> findByStats$Score(int score);       // stats$Score → stats.score (explicit)

    // === Ordering ===
    List<Player> findByRankOrderByUsernameAsc(String rank);
    List<Player> findAllOrderByStats$LevelDesc();
    Stream<Player> streamAllOrderByUsernameAscRankDesc();

    // === Limiting ===
    Optional<Player> findFirstByOrderByStats$LevelDesc();  // First = limit 1
    List<Player> findTop10ByRank(String rank);              // TopN = limit N

    // === Count/Exists/Delete ===
    long countByRank(String rank);
    boolean existsByUsername(String username);
    long deleteByRank(String rank);

    // === Alternative prefixes (all equivalent to find) ===
    Optional<Player> readByUsername(String username);
    Optional<Player> getByUsername(String username);
    List<Player> queryByRank(String rank);

    // === Underscores for readability (ignored in parsing) ===
    Optional<Player> findBy_username(String username);
    List<Player> findBy_rank_and_username(String rank, String username);

    // === Custom logic with default methods ===
    default boolean isUsernameTaken(String username) {
        return this.existsByUsername(username);
    }

    default Player getOrCreate(UUID id, String username) {
        return findByPath(id).orElseGet(() -> {
            Player p = new Player();
            p.setPath(id);
            p.setUsername(username);
            return save(p);
        });
    }
}

Method Name Syntax:

  • findBy{Field} (e.g. findByName(String)): simple equality
  • findBy{A}And{B} (e.g. findByNameAndLevel(String, int)): AND conditions
  • findBy{A}Or{B} (e.g. findByNameOrEmail(String, String)): OR conditions
  • findBy{Field}OrderBy{F}Asc/Desc (e.g. findByActiveOrderByLevelDesc(boolean)): with ordering
  • findAllOrderBy{Field} (e.g. findAllOrderByNameAsc()): all documents, with ordering
  • findFirst... (e.g. findFirstByOrderByLevelDesc()): limit to 1
  • findTop{N}... (e.g. findTop10ByActive(boolean)): limit to N
  • countBy{Field} (e.g. countByActive(boolean)): count matching
  • existsBy{Field} (e.g. existsByEmail(String)): check existence
  • deleteBy{Field} (e.g. deleteByLevel(int)): delete matching
  • streamBy{Field} (e.g. streamByLevel(int)): must return Stream<T>
  • {field}${nested} (e.g. findByProfile$Age(int)): nested field (profile.age)

Return Types:

  • Optional<T> - Single result or empty
  • Stream<T> - Lazy stream (required for stream prefix)
  • List<T>, Collection<T>, Set<T> - Collected results
  • T (naked entity) - Single result or null
  • long - For count/delete operations
  • boolean - For exists operations
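
A sketch pulling these return types together in one interface, borrowing the name and level fields from the Quick Start User document (@DocumentCollection annotation omitted for brevity):

```java
public interface UserFinders extends DocumentRepository<UUID, User> {
    Optional<User> findByName(String name);   // Optional<T>: single result or empty
    Stream<User> streamByLevel(int level);    // Stream<T>: lazy, required for stream prefix
    List<User> findByLevel(int level);        // List/Collection/Set: collected results
    User getByName(String name);              // bare entity: single result or null
    long countByLevel(int level);             // long: count of matches
    boolean existsByName(String name);        // boolean: existence check
}
```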

Note: For complex queries (comparisons like >, <, >=, regex, etc.), use the Query DSL instead:

users.find(q -> q.where(on("level", gt(10))).orderBy(desc("score")));

Built-in Methods (from DocumentRepository):

// Metadata
DocumentPersistence getPersistence()
PersistenceCollection getCollection()
Class<? extends Document> getDocumentType()

// Counting
long count()

// Finding - by path
Optional<T> findByPath(PATH path)
T findOrCreateByPath(PATH path)
Collection<T> findAll()
Collection<T> findAllByPath(Iterable<PATH> paths)
Collection<T> findOrCreateAllByPath(Iterable<PATH> paths)
Stream<T> streamAll()              // Safe but loads all data
Stream<T> stream(int batchSize)    // Memory-efficient, requires closing
Stream<T> stream()                 // stream(100) - requires closing

// Finding - with queries
Stream<T> find(FindFilter filter)
Stream<T> find(Function<FindFilterBuilder, FindFilterBuilder> function)
Stream<T> find(Condition condition)
Optional<T> findOne(Condition condition)

// Saving
T save(T document)
Iterable<T> saveAll(Iterable<T> documents)

// Deleting - by path
boolean deleteByPath(PATH path)
long deleteAllByPath(Iterable<PATH> paths)
boolean deleteAll()

// Deleting - with queries
long delete(DeleteFilter filter)
long delete(Function<DeleteFilterBuilder, DeleteFilterBuilder> function)

// Updating - by path
boolean updateOne(PATH path, Function<UpdateBuilder, UpdateBuilder> operations)
boolean updateOne(T entity, Function<UpdateBuilder, UpdateBuilder> operations)
Optional<T> updateOneAndGet(PATH path, Function<UpdateBuilder, UpdateBuilder> operations)
Optional<T> getAndUpdateOne(PATH path, Function<UpdateBuilder, UpdateBuilder> operations)

// Updating - with queries
long update(Function<UpdateFilterBuilder, UpdateFilterBuilder> updater)

// Existence
boolean existsByPath(PATH path)

Switching Backends

Change one line, everything else stays the same:

// MongoDB
new DocumentPersistence(new MongoPersistence(mongoClient, "mydb", JsonSimpleConfigurer::new));

// PostgreSQL
new DocumentPersistence(new PostgresPersistence(hikariDataSource, JsonSimpleConfigurer::new));

// MariaDB
new DocumentPersistence(new MariaDbPersistence(hikariDataSource, JsonSimpleConfigurer::new));

// H2
new DocumentPersistence(new H2Persistence(hikariDataSource, JsonSimpleConfigurer::new));

// Redis
new DocumentPersistence(new RedisPersistence(redisClient, JsonSimpleConfigurer::new));

// Flat files (YAML/JSON/HOCON)
new DocumentPersistence(new FlatPersistence(new File("./data"), YamlBukkitConfigurer::new));

// In-memory (volatile, no persistence)
new DocumentPersistence(new InMemoryPersistence());

Namespace support: Add PersistencePath.of("prefix") as first parameter to prevent collection name conflicts when multiple apps share storage (e.g., new MongoPersistence(PersistencePath.of("app"), mongoClient, "mydb", JsonSimpleConfigurer::new)).

Your repositories, queries, and business logic stay the same.

Builder API

All backends support a fluent builder pattern for more explicit configuration:

// MongoDB with builder
MongoPersistence.builder()
    .client(mongoClient)
    .databaseName("mydb")
    .configurer(JsonSimpleConfigurer::new)
    .serdes(new MySerdesPack())  // optional
    .basePath("myapp")           // optional namespace prefix
    .build();

// PostgreSQL with builder
PostgresPersistence.builder()
    .hikariConfig(hikariConfig)  // or .dataSource(hikariDataSource)
    .configurer(JsonSimpleConfigurer::new)
    .serdes(new MySerdesPack())
    .basePath("myapp")
    .build();

// MariaDB with builder
MariaDbPersistence.builder()
    .hikariConfig(hikariConfig)  // or .dataSource(hikariDataSource)
    .configurer(JsonSimpleConfigurer::new)
    .build();

// H2 with builder
H2Persistence.builder()
    .hikariConfig(hikariConfig)  // or .dataSource(hikariDataSource)
    .configurer(JsonSimpleConfigurer::new)
    .build();

// Redis with builder
RedisPersistence.builder()
    .client(redisClient)
    .configurer(JsonSimpleConfigurer::new)
    .basePath("myapp")
    .build();

// Flat files with builder
FlatPersistence.builder()
    .storageDir(new File("./data"))  // or .storageDir(Path.of("./data"))
    .configurer(YamlBukkitConfigurer::new)
    .extension("yml")                 // optional: override auto-detected extension
    .build();

Indexing

Declare indexes once in your @DocumentCollection:

@DocumentCollection(
    path = "users",
    // keyLength auto-detected: UUID=36, Integer=11, Long=20, others=255 (override by specifying explicitly)
    indexes = {
        @DocumentIndex(path = "username", maxLength = 32),  // Optional (default: 255). Used only by MariaDB
        @DocumentIndex(path = "email"),
        @DocumentIndex(path = "profile.age"),
        @DocumentIndex(path = "settings.notifications.email")
    }
)

Backend index behavior:

  • MongoDB: keyLength and maxLength ignored; native createIndex()
  • PostgreSQL: keyLength used for the key VARCHAR, maxLength ignored (uses JSONB GIN); native JSONB expression indexes
  • MariaDB: keyLength used for the key VARCHAR, maxLength used for the generated column (string fields only; numeric/boolean use fixed types); native stored generated columns
  • H2: keyLength used for the key VARCHAR, maxLength ignored; no indexes
  • Redis: keyLength and maxLength ignored; no indexes
  • Flat Files: keyLength and maxLength ignored; in-memory indexes (TreeMap + HashMap)
  • In-Memory: keyLength and maxLength ignored; in-memory indexes (TreeMap + HashMap)

keyLength is auto-detected (UUID=36, Integer=11, Long=20, others=255) and used by the JDBC backends for the primary key VARCHAR.
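
The auto-detection note says the key length can be overridden by specifying it explicitly; a sketch of what that might look like, assuming the annotation attribute is named keyLength (name not confirmed by this document):

```java
@DocumentCollection(
    path = "orders",
    keyLength = 64,  // assumed attribute name - overrides the auto-detected default (255 for String keys)
    indexes = {
        @DocumentIndex(path = "status", maxLength = 16)
    }
)
public interface OrderRepository extends DocumentRepository<String, Order> {
}
```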

Streaming Datasets

Two methods for processing collections:

streamAll() - Simple with Tradeoffs

Loads all data, no resource management required. Best for small collections:

// Stream all users - safe, no try-with-resources needed
userRepository.streamAll()
  .filter(u -> u.getLevel() > 50)
  .map(User::getName)
  .forEach(System.out::println);

// Custom queries return streams
userRepository.find(q -> q.where(on("active", eq(true))))
  .parallel() // Process in parallel
  .map(this::calculateStats)
  .toList();

stream(batchSize) - Memory Efficient

Fetches data in batches. Must be closed (use try-with-resources or @Cleanup):

// Memory-efficient streaming with batches of 100
try (Stream<User> stream = userRepository.stream(100)) {
    return stream
        .filter(u -> u.isActive())
        .map(User::getName)
        .collect(Collectors.toList());
}

// Process large collection without loading all into memory
try (Stream<User> stream = userRepository.stream(50)) {
    stream.forEach(user -> {
        // Process each user as it's fetched (e.g., export, transform)
        exportUser(user);
    });
}

// Alternative: Lombok @Cleanup
@Cleanup Stream<User> stream = userRepository.stream(100);
List<String> names = stream.map(User::getName).toList();

Backend-specific batching:

  • PostgreSQL: JDBC cursor (requires open transaction until closed)
  • H2/MariaDB: LIMIT/OFFSET pagination
  • MongoDB: Driver cursor with batchSize hint
  • Redis: HSCAN with custom step size

Advanced: Document References

Store references to other documents using EagerRef or LazyRef:

public class Book extends Document {
    private String title;
    // EagerRef: fetches authors immediately when Book is loaded
    // LazyRef: defers fetch until .get() is called
    private List<EagerRef<Author>> authors;
}

// Creating references
Author author = authorRepository.findOrCreateByPath(authorId);
author.setName("Alice");
author.save();

Book book = new Book();
book.setTitle("Some Book");
book.setAuthors(List.of(EagerRef.of(author))); // Store reference
book.save();

// Accessing references
Book loaded = bookRepository.findByPath(bookId).orElseThrow();
for (Ref<Author> authorRef : loaded.getAuthors()) {
    // EagerRef: already loaded, LazyRef: fetches now
    Author author = authorRef.orNull();
    System.out.println(author.getName());
}

How it works: Refs serialize as {"_collection": "author", "_id": "uuid"} in the database. The field type (EagerRef vs LazyRef) controls when referenced documents are fetched during deserialization.

N+1 Warning: Each ref triggers a separate database query (EagerRef on load, LazyRef on .get()). For documents with many refs, fetch referenced documents in bulk using findAllByPath() instead.
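
Under that constraint, a bulk fetch might be sketched as follows, reusing loaded and authorRepository from the example above; ref.getId() stands in for whatever accessor the real Ref type exposes for the serialized _id (the name here is hypothetical):

```java
// Collect the referenced ids first (ref.getId() is a hypothetical accessor
// for the serialized "_id" field - adapt to the actual Ref API)
List<UUID> authorIds = loaded.getAuthors().stream()
    .map(ref -> ref.getId())
    .collect(Collectors.toList());

// One batched query via the built-in findAllByPath() instead of one query per ref
Collection<Author> authors = authorRepository.findAllByPath(authorIds);
```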

Real-World Example

Complete user management system:

// Document model
@Data
public class UserAccount extends Document {
    private String email;
    private String username;
    private UserProfile profile;
    private String role; // e.g., "USER", "ADMIN", "MODERATOR"
    private Instant createdAt;
    private Instant lastLogin;
}

@Data
public class UserProfile {
    private String displayName;
    private String bio;
    private String avatarUrl;
    private Map<String, Object> preferences;
}

// Repository
@DocumentCollection(
    path = "accounts",
    indexes = {
        @DocumentIndex(path = "email"),
        @DocumentIndex(path = "username", maxLength = 32),
        @DocumentIndex(path = "role", maxLength = 16)
    }
)
public interface UserAccountRepository extends DocumentRepository<UUID, UserAccount> {

    // Method names are parsed automatically - no annotations needed!
    Optional<UserAccount> findByEmail(String email);
    Optional<UserAccount> findByUsername(String username);
    Stream<UserAccount> streamByRole(String role);

    default UserAccount register(String email, String username) {
        // WARNING: This has a race condition! In production, use:
        // - External locking (e.g., Redisson distributed locks)
        // - Action queue/message broker for sequential processing
        if (findByEmail(email).isPresent()) {
            throw new IllegalStateException("Email already registered");
        }

        UserAccount account = new UserAccount();
        account.setEmail(email);
        account.setUsername(username);
        account.setRole("USER");
        account.setCreatedAt(Instant.now());

        UserProfile profile = new UserProfile();
        profile.setDisplayName(username);
        account.setProfile(profile);

        return save(account);
    }

    default void updateLastLogin(UUID userId) {
        findByPath(userId).ifPresent(account -> {
            account.setLastLogin(Instant.now());
            save(account);
        });
    }

    default List<UserAccount> getAdmins() {
        return streamByRole("ADMIN").toList();
    }
}

// Usage
UserAccountRepository accounts = persistence.createRepository(UserAccountRepository.class);

// Register new user
UserAccount alice = accounts.register("alice@example.com", "alice");

// Login
accounts.findByEmail("alice@example.com").ifPresent(account -> {
    accounts.updateLastLogin(account.getPath());
    System.out.println("Welcome back, " + account.getUsername());
});

// Find all admins (using indexed field)
List<UserAccount> admins = accounts.getAdmins();

// Search users by role with ordering
List<UserAccount> moderators = accounts.find(q -> q
  .where(on("role", eq("MODERATOR")))
  .orderBy(asc("username")))
  .toList();

Backend Comparison

  • MongoDB: native indexes, native Query DSL, native atomic Update DSL. Best for document workloads.
  • PostgreSQL: native indexes (JSONB), native Query DSL, native atomic Update DSL. Best if you already use Postgres.
  • MariaDB: native indexes (generated columns), native Query DSL, native atomic Update DSL (in-memory fallback for pull/pullAll/addToSet). Best if you already use MariaDB.
  • H2: no indexes, native Query DSL, in-memory Update DSL. Best for testing and embedded use.
  • Redis: no indexes, in-memory Query DSL, in-memory Update DSL. Best for fast key-value access.
  • Flat Files: in-memory indexes, in-memory Query DSL, in-memory Update DSL. Best for config files and small apps.
  • In-Memory: in-memory indexes, in-memory Query DSL, synchronized in-memory Update DSL. Best for testing and temporary state.

Configurer Support

Serialization formats from okaeri-configs:

// JSON (all backends) - configurer passed to backend constructor
new DocumentPersistence(new MongoPersistence(mongoClient, "mydb", JsonSimpleConfigurer::new))

// YAML/HOCON/TOML (flat files only)
new DocumentPersistence(new FlatPersistence(new File("./data"), YamlBukkitConfigurer::new))

MongoDB, PostgreSQL, MariaDB, H2, and Redis require a JSON configurer. In-Memory uses an internal configurer. Flat Files support any format.
