robbiehanson edited this page Mar 26, 2013 · 6 revisions

Each database connection has its own dedicated cache. This cache layer sits within the YapDatabase architecture and provides caching at the object level. SQLite has caching too, but it caches pages, which hold your serialized objects in raw byte form. Keeping a cache of deserialized objects in the Objective-C layer reduces the cost of the deserialization process.


FLEXIBILITY

You have complete control over the cache at all times. Let's take a look at the related properties (from YapAbstractDatabaseConnection, which is the base class of both YapDatabaseConnection and YapCollectionsDatabaseConnection).

/**
 * Each database connection maintains an independent cache of deserialized objects.
 * This reduces the overhead of the deserialization process.
 * You can optionally configure the cache size, or disable it completely.
 *
 * The cache is properly kept in sync with the atomic snapshot architecture of the database system.
 *
 * By default the objectCache is enabled and has a limit of 250.
 *
 * You can configure the objectCache at any time, including within readBlocks or readWriteBlocks.
 * To disable the object cache entirely, set objectCacheEnabled to NO.
 * To use an infinite cache size, set the objectCacheLimit to zero.
**/
@property (atomic, assign, readwrite) BOOL objectCacheEnabled;
@property (atomic, assign, readwrite) NSUInteger objectCacheLimit;

/**
 * Each database connection maintains an independent cache of deserialized metadata.
 * This reduces the overhead of the deserialization process.
 * You can optionally configure the cache size, or disable it completely.
 *
 * The cache is properly kept in sync with the atomic snapshot architecture of the database system.
 *
 * By default the metadataCache is enabled and has a limit of 500.
 *
 * You can configure the metadataCache at any time, including within readBlocks or readWriteBlocks.
 * To disable the metadata cache entirely, set metadataCacheEnabled to NO.
 * To use an infinite cache size, set the metadataCacheLimit to zero.
**/
@property (atomic, assign, readwrite) BOOL metadataCacheEnabled;
@property (atomic, assign, readwrite) NSUInteger metadataCacheLimit;

As you can see, you can manage the caches for the objects & metadata separately. This enables some very powerful configurations. For example, you can crank up the metadataCacheLimit and keep the majority (or all) of your metadata in-memory. This delivers speed, while still allowing you to keep your large objects on disk, and out of memory.
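A configuration along those lines might look like the following sketch. The connection setup and the specific limits are illustrative, not prescriptive:

```objc
// Hypothetical setup: metadata is small, so keep all of it in memory,
// while keeping the object cache modest so large objects stay on disk.
YapDatabaseConnection *connection = [database newConnection];

connection.objectCacheLimit = 100;  // keep only a modest number of full objects
connection.metadataCacheLimit = 0;  // zero = unlimited; cache all metadata
```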

Furthermore, you can configure the cache limits from within transactions. If you're about to do a bunch of processing which may involve looping over a large number of objects multiple times, you can temporarily increase the cache size, and then decrease it again when you're done.
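For instance, a batch job might temporarily raise the limit around a set of passes over the data. The numbers and names here are arbitrary:

```objc
// Hypothetical batch job: bump the object cache while looping over many
// objects repeatedly, then restore the original limit afterwards.
NSUInteger originalLimit = connection.objectCacheLimit;
connection.objectCacheLimit = 1000;

[connection readWithBlock:^(YapDatabaseReadTransaction *transaction){
    // ... multiple passes over a large set of objects ...
}];

connection.objectCacheLimit = originalLimit;
```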


CONCURRENCY

The caches of each connection are integrated deep into the architecture. Every transaction provides an atomic snapshot of the database, and the caches are automatically synchronized with the snapshot.

If you make changes to an object on one connection, then those changes are automatically picked up by the caches of other connections once their snapshot catches up. In other words, everything just works.
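As a sketch of what this means in practice (the connection, object, and key names are illustrative):

```objc
// Two connections to the same database.
YapDatabaseConnection *connectionA = [database newConnection];
YapDatabaseConnection *connectionB = [database newConnection];

// Write on connectionA.
[connectionA readWriteWithBlock:^(YapDatabaseReadWriteTransaction *transaction){
    [transaction setObject:updatedUser forKey:@"user-123"];
}];

// Once connectionB's snapshot catches up, any stale copy of "user-123"
// in its cache has already been updated or flushed automatically.
[connectionB readWithBlock:^(YapDatabaseReadTransaction *transaction){
    id user = [transaction objectForKey:@"user-123"]; // up-to-date
}];
```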


PERFORMANCE

We weren't satisfied with the performance of NSCache. We knew we could make something faster. And so we did. In fact, we've benchmarked our cache at up to 85% faster than NSCache on an iPhone 5. (The benchmark code is included in the project if you want to run it yourself.)

But that's not all we improved upon. One of the things we didn't like about NSCache was its memory consumption. Although one can configure the cache with a countLimit, it doesn't strictly enforce it. NSCache has "various auto-removal policies", which means it evicts items when it darn-well feels like it, regardless of how you configure it. This is concerning from a memory management and memory footprint perspective. That's why our built-in cache strictly obeys the limits you set. And furthermore, it tracks the order in which objects are accessed, so it always evicts the least-recently used object.
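To illustrate the strict-LRU idea, here is a minimal sketch of a count-limited cache that always evicts the least-recently used item. This is not the actual YapDatabaseCache implementation (which is optimized; a real implementation would use a linked list for O(1) reordering), just the policy in its simplest form:

```objc
#import <Foundation/Foundation.h>

// Minimal strict-LRU sketch: a dictionary for storage plus an array
// tracking access order (most-recently used at the end).
@interface SimpleLRUCache : NSObject
@property (nonatomic, assign) NSUInteger countLimit;
- (void)setObject:(id)object forKey:(id<NSCopying>)key;
- (id)objectForKey:(id)key;
@end

@implementation SimpleLRUCache {
    NSMutableDictionary *storage;
    NSMutableArray *accessOrder;
}

- (id)init {
    if ((self = [super init])) {
        storage = [NSMutableDictionary dictionary];
        accessOrder = [NSMutableArray array];
    }
    return self;
}

- (void)setObject:(id)object forKey:(id<NSCopying>)key {
    storage[key] = object;
    [accessOrder removeObject:key];
    [accessOrder addObject:key];

    // Strictly enforce the limit: evict the least-recently used item.
    if (self.countLimit > 0 && accessOrder.count > self.countLimit) {
        id lruKey = accessOrder[0];
        [accessOrder removeObjectAtIndex:0];
        [storage removeObjectForKey:lruKey];
    }
}

- (id)objectForKey:(id)key {
    id object = storage[key];
    if (object) {
        // Mark as most-recently used.
        [accessOrder removeObject:key];
        [accessOrder addObject:key];
    }
    return object;
}

@end
```

Unlike NSCache, a cache built this way never holds more than countLimit items, so its memory footprint is predictable.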

Long story short: better performance and a predictable memory footprint.
