1. Set up database



Table of changes


CREATE TABLE `db_changes` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT,
    `entity_id` bigint(20) NOT NULL,
    `entity_class` varchar(255) NOT NULL,
    `time_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `operation` char(1) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Triggers


Create triggers that push a row into the table of changes every time an entity is inserted, updated, or deleted. Note the string 'orders' in the entity_class column. This string must be unique per table, and you must use the same string in the FreshDbEntity annotation's changesTableEntityIdentificator attribute.

DROP TRIGGER IF EXISTS trg_AFT_INS_orders;
DROP TRIGGER IF EXISTS trg_AFT_UPDATE_orders;
DROP TRIGGER IF EXISTS trg_AFT_DEL_orders;

DELIMITER @

CREATE TRIGGER trg_AFT_INS_orders AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    INSERT INTO db_changes (entity_id, entity_class, time_stamp, operation)
    VALUES (NEW.id, 'orders', CURRENT_TIMESTAMP(), 'I');
END@

CREATE TRIGGER trg_AFT_UPDATE_orders AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
    INSERT INTO db_changes (entity_id, entity_class, time_stamp, operation)
    VALUES (NEW.id, 'orders', CURRENT_TIMESTAMP(), 'U');
END@

CREATE TRIGGER trg_AFT_DEL_orders AFTER DELETE ON orders
FOR EACH ROW
BEGIN
    INSERT INTO db_changes (entity_id, entity_class, time_stamp, operation)
    VALUES (OLD.id, 'orders', CURRENT_TIMESTAMP(), 'D');
END@

DELIMITER ;


2. Add Maven dependency


<dependency>
    <groupId>com.freshdbcache</groupId>
    <artifactId>freshdbcache</artifactId>
    <version>1.0.0</version>
</dependency>

Fresh DB Cache is hosted on the Sonatype Nexus Maven repository. You may therefore also need to add a reference to this repository to your pom.xml or settings.xml:

<repository>
    <id>Nexus</id>
    <url>https://oss.sonatype.org/content/repositories/releases/</url>
</repository>



3. Define Spring beans


It's quite simple. Define one cache bean for each cached entity. You can then wire or autowire the cache beans into your service beans/controllers (a sketch of this follows the bean definitions below). Also define the manager and provide it with a standard data source and the list of defined caches.

<bean class="com.freshdbcacheexamples.ordersexample.cache.ItemsCache" id="itemsCache"></bean>
<bean class="com.freshdbcache.impl.MapByIdCache" id="ordersCache">
    <constructor-arg>
        <value>com.freshdbcacheexamples.ordersexample.domain.Order</value>
    </constructor-arg>
</bean>

<bean id="freshDbCacheManager" class="com.freshdbcache.manager.FreshDbCacheManager" init-method="start">
    <property name="freshCaches">
        <list>
            <ref bean="itemsCache"/>
            <ref bean="ordersCache"/>
        </list>
    </property>
    <property name="dataSource" ref="dataSource"></property>
    <property name="changesTableName"><value>db_changes</value></property>
</bean>
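
For illustration, a service could then consume a wired cache bean like this. This is only a sketch: OrderService and the getItemsOfOrder() getter are hypothetical, the getter being something you define on your own cache class (see step 5).

import java.util.Collection;
import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical service reading from a wired cache bean; getItemsOfOrder()
// is an illustrative getter defined on your own cache class (see step 5).
public class OrderService {

    @Autowired
    private ItemsCache itemsCache;

    public Collection<OrderItem> findItemsOfOrder(Long orderId) {
        // served from memory, no database round trip
        return itemsCache.getItemsOfOrder(orderId).values();
    }
}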



4. Annotate POJOs


Fresh DB Cache also needs you to define the SELECTs used to retrieve entity data and how to map that data to POJO entities. To avoid additional dependencies, we decided to implement a simple ORM configured via annotations on the POJO entities. There are two annotations: one for the class and one for each mapped column.

@FreshDbEntity(
    changesTableEntityIdentificator = "order_items",
    initializationSelect = "SELECT * FROM order_items WHERE id > :idFrom LIMIT :limit",
    multiIdSelect = "SELECT * FROM order_items WHERE id IN (:idsList)",
    entityPrimaryKeyClass = Long.class)
public class OrderItem implements Serializable {

The FreshDbEntity annotation must define the identifier used in the changes table (the same value the triggers store in the entity_class column), the two SELECTs with their predefined placeholders, and the primary key class. The primary key class must match all caches that store this entity, as well as the type of the attribute marked with the FreshDbEntityColumn(isId=true) annotation. It must also correctly implement the hashCode and equals methods.
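
If you need a composed primary key, it could be sketched as follows. The class and its fields are illustrative assumptions; the equals/hashCode requirement and the valueOf(String) factory (expected by the cache interface shown in step 5) come from the library.

import java.io.Serializable;

// Illustrative composed key; the fields are made up. The library requires
// correct equals/hashCode and a valueOf(String) factory (see step 5).
public class OrderItemKey implements Serializable {

    private final long orderId;
    private final long lineNo;

    public OrderItemKey(long orderId, long lineNo) {
        this.orderId = orderId;
        this.lineNo = lineNo;
    }

    // builds a key from its string form, e.g. "42-7"
    public static OrderItemKey valueOf(String str) {
        String[] parts = str.split("-");
        return new OrderItemKey(Long.parseLong(parts[0]), Long.parseLong(parts[1]));
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderItemKey)) return false;
        OrderItemKey other = (OrderItemKey) o;
        return orderId == other.orderId && lineNo == other.lineNo;
    }

    @Override
    public int hashCode() {
        return 31 * Long.valueOf(orderId).hashCode() + Long.valueOf(lineNo).hashCode();
    }
}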

The initialization SELECT is called during the initial load of all entities. It must contain two placeholders, which are replaced by the id of the last loaded entity and by the limit (the number of rows to retrieve); for example, with :idFrom = 0 and :limit = 1000 the SELECT above becomes SELECT * FROM order_items WHERE id > 0 LIMIT 1000. The multi-id SELECT is called when changed entities are retrieved from the database; the :idsList placeholder is replaced by a comma-separated list of the changed ids.

@FreshDbEntityColumn(column="creation_date", valueMappingClass=com.freshdbcache.impl.DateColumnValueMapper.class, valueMappingClassParameters="yyyy-MM-dd HH:mm:ss")

The FreshDbEntityColumn annotation must be used on every attribute that should be loaded from the database. In this simple mapping framework you define just the column name. For a non-standard attribute class, you also define a mapping class (see Documentation). On the attribute that is the primary key of the entity, set isId=true (exactly one id must be defined per entity).
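
Putting both annotations together, a complete annotated entity might look like this (the item_name column is an illustrative assumption; the rest mirrors the snippets above):

import java.io.Serializable;
import java.util.Date;

@FreshDbEntity(
    changesTableEntityIdentificator = "order_items",
    initializationSelect = "SELECT * FROM order_items WHERE id > :idFrom LIMIT :limit",
    multiIdSelect = "SELECT * FROM order_items WHERE id IN (:idsList)",
    entityPrimaryKeyClass = Long.class)
public class OrderItem implements Serializable {

    // exactly one attribute per entity must be marked with isId = true
    @FreshDbEntityColumn(column = "id", isId = true)
    private Long id;

    @FreshDbEntityColumn(column = "item_name")
    private String itemName;

    @FreshDbEntityColumn(column = "creation_date",
            valueMappingClass = com.freshdbcache.impl.DateColumnValueMapper.class,
            valueMappingClassParameters = "yyyy-MM-dd HH:mm:ss")
    private Date creationDate;

    // getters and setters omitted for brevity
}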



5. Implement your own cache


The last step to get everything up and running is to define how the cached data will be organized in memory. The simplest implementation (MapByIdCache) stores entities in one big ConcurrentHashMap, but Fresh DB Cache encourages you to implement your own cache. This is actually a great extension point, as you are not constrained to in-memory implementations only; you can use any local filesystem index/storage solution as well.
For example, I once used a local, embedded, single-partition ElasticSearch instance that was kept up to date by a FreshDbCache-like mechanism, and it was able to store, index and query millions of complex documents that otherwise wouldn't fit into memory. I know it was kind of abusing ElasticSearch, but this solution had reliable update/wait-for-update synchronization, and since it was embedded there was no need to manage a standalone ElasticSearch cluster at all.

So how to implement your own custom cache?

1. Think about how you would like your data organized. For example, the FreshDbCache mapping does not support 1:n collections, so you will probably need children grouped by parent id; use a ConcurrentHashMap of ConcurrentHashMaps for that, or a Map of ConcurrentSkipListMaps if you want the children ordered. If you want an ordered index or a collection traversable by value range, also use a ConcurrentSkipListMap. Concurrent collections are required because you read and iterate these collections while they are being modified in the processAdded/Updated/Deleted methods. The processXXX methods are always called by the same thread. If you are not sure about your implementation, feel free to contact us.

2. To access your custom data structures you need to create getters for them.

3. Implement the methods defined in the cache interface. The methods are listed below, followed by a sketch of a complete custom cache.

/**
 * Returns the class of the stored entity. From this class the cache manager
 * takes all the annotation metadata required for reading and mapping data into entities.
 */
public Class<T> getEntityClass();

/**
 * Returns the class of the entity primary key (usually Long, for example).
 * The key class must define the following method:
 * public K valueOf(String str), which returns a key instance from its string form
 * (a string is used because of the possibility of composed ids).
 */
public Class<K> getKeyClass();

/**
 * Callback invoked when a new entity is added.
 * @param entityId id of the added entity
 * @param newEntity entity that has been added to the database
 * @param changeId id of the change that describes this addition - can be used as a timestamp/vector clock
 */
public void processAdded(K entityId, T newEntity, Long changeId, Date changeTimestamp);

/**
 * Must return whether the cache already contains an entity with the provided key.
 * This method is checked when a new entity is added: if the newly added entity is
 * by accident already contained, it is deleted first.
 */
public boolean containsKey(K entityId);

/**
 * Callback invoked when an entity is updated.
 * @param entityId id of the updated entity
 * @param newEntity new state of the updated entity
 * @param changeId id of the change that describes this update - can be used as a timestamp/vector clock
 */
public void processUpdated(K entityId, T newEntity, Long changeId, Date changeTimestamp);

/**
 * Callback invoked when an entity is deleted.
 * @param entityId id of the deleted entity
 * @param changeId id of the change that describes this deletion
 */
public void processDeleted(K entityId, Long changeId, Date changeTimestamp);
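
Here is a minimal sketch of a custom cache that groups order items by their parent order id, as described in point 1 above. The interface name (FreshCache) and the OrderItem getters (getId(), getOrderId()) are illustrative assumptions; adapt them to the actual library interface and your entity.

import java.util.Collections;
import java.util.Date;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ItemsByOrderCache implements FreshCache<Long, OrderItem> {

    // orderId -> (itemId -> item); concurrent maps, because readers iterate
    // them while the single manager thread calls the processXXX callbacks
    private final ConcurrentHashMap<Long, ConcurrentHashMap<Long, OrderItem>> itemsByOrderId =
            new ConcurrentHashMap<>();

    // itemId -> orderId, so updates/deletes can locate the parent bucket
    private final ConcurrentHashMap<Long, Long> orderIdByItemId = new ConcurrentHashMap<>();

    // getter your services use to read cached data (point 2 above)
    public Map<Long, OrderItem> getItemsOfOrder(Long orderId) {
        Map<Long, OrderItem> items = itemsByOrderId.get(orderId);
        return items != null ? items : Collections.<Long, OrderItem>emptyMap();
    }

    public Class<OrderItem> getEntityClass() {
        return OrderItem.class;
    }

    public Class<Long> getKeyClass() {
        return Long.class;
    }

    public boolean containsKey(Long entityId) {
        return orderIdByItemId.containsKey(entityId);
    }

    public void processAdded(Long entityId, OrderItem newEntity, Long changeId, Date changeTimestamp) {
        ConcurrentHashMap<Long, OrderItem> bucket = itemsByOrderId.get(newEntity.getOrderId());
        if (bucket == null) {
            bucket = new ConcurrentHashMap<>();
            ConcurrentHashMap<Long, OrderItem> existing =
                    itemsByOrderId.putIfAbsent(newEntity.getOrderId(), bucket);
            if (existing != null) {
                bucket = existing;
            }
        }
        bucket.put(entityId, newEntity);
        orderIdByItemId.put(entityId, newEntity.getOrderId());
    }

    public void processUpdated(Long entityId, OrderItem newEntity, Long changeId, Date changeTimestamp) {
        // remove from the old bucket first, in case the item moved to another order
        processDeleted(entityId, changeId, changeTimestamp);
        processAdded(entityId, newEntity, changeId, changeTimestamp);
    }

    public void processDeleted(Long entityId, Long changeId, Date changeTimestamp) {
        Long orderId = orderIdByItemId.remove(entityId);
        if (orderId != null) {
            ConcurrentHashMap<Long, OrderItem> bucket = itemsByOrderId.get(orderId);
            if (bucket != null) {
                bucket.remove(entityId);
            }
        }
    }
}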


6. Update -> Wait for your update -> Read synchronization


Once you are happy with your caches and you can run queries and reads without touching the database, you will come to a situation where you need to display data immediately after a user has changed it. Of course you want to show the user the already changed values... so you have to wait until the changes are applied to the cache. With Fresh DB Cache this is fortunately quite simple (a consolidated sketch follows the steps below):

1. Do your update using your preferred framework (for simplicity I use plain JDBC here)

Connection conn = dataSource.getConnection();
PreparedStatement stmt = conn.prepareStatement(
        "INSERT INTO orders (creation_date, customer_name, customer_address) VALUES (now(), ?, ?)");
stmt.setString(1, customerName);
stmt.setString(2, customerAddress);
stmt.executeUpdate();

2. Get the current change id directly from the database (it is equal to or higher than the change id of the previous insert)

Long dbChangeId = cacheManager.getActualChangeIdFromDB();

3. Wait until that change is applied to all caches.

while (cacheManager.getActualChangeId() < dbChangeId) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

4. Continue with reads and queries; the changes have already been applied to the caches.
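
Put together, steps 1-3 can be wrapped in one helper method. This is a sketch, assuming the dataSource and cacheManager beans from step 3 and a fixed 100 ms polling interval:

import java.sql.Connection;
import java.sql.PreparedStatement;

// Sketch of steps 1-3 combined: write, then block until the caches have
// caught up with the change. dataSource and cacheManager are the beans
// defined in step 3.
public void insertOrderAndWait(String customerName, String customerAddress) throws Exception {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt = conn.prepareStatement(
                 "INSERT INTO orders (creation_date, customer_name, customer_address) VALUES (now(), ?, ?)")) {
        stmt.setString(1, customerName);
        stmt.setString(2, customerAddress);
        stmt.executeUpdate();
    }

    Long dbChangeId = cacheManager.getActualChangeIdFromDB();
    while (cacheManager.getActualChangeId() < dbChangeId) {
        Thread.sleep(100); // poll until the caches have applied the change
    }
    // safe to read from the caches now
}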