This Blog Has Moved!

Right, so yes, five years ago I moved to GitHub Pages, and never bothered to redirect any of these pages there. Now I've moved on from there, and... Finally I am using my real domain, trishagee.com. My blog is now at trishagee.com/blog. See you there!

AOP Caching

Today I would like to document my experiences implementing caching with Aspect Oriented Programming (AOP) and annotations. 

Background context
 
Caching may need to be implemented in your application for a number of reasons. OK, actually usually only one: performance. I would like to add my own tuppence-worth to this though: if you can get away without caching (specifically in applications that let users both view and change data) then do so, unless you are using a cache implementation that handles as much of the pain as possible for you. Implementing a home-grown cache from scratch is almost never the correct thing to do in my experience; you spend lots of time debugging and tweaking the cache when you should be working on your day job, not re-inventing something that someone, somewhere, has already done a perfectly good job of.

The example I'm about to show you is for a web application created to let users read and edit values from a database (not an unusual scenario!). 

Application Architecture
 
The application architecture I have assumed for this example is: 
Java 1.5, JSP, Spring MVC (2.0.1), Spring JDBC (2.0.4), running on Tomcat 5.5, connecting to a database (RDBMS type not important for this example). 

OSCache
 
A third party library, OSCache, provides the underlying cache for the application. This was chosen because it provides a simple solution which is easy to integrate into our Spring MVC layer and also provides JSP-level caching should we need it later. 

The application uses OSCache in a very basic way. The way I'm using it, caching could have been implemented with a HashMap without losing much functionality, but by using OSCache we can use the "group" functionality (which allows us to cache against a key AND a group name, so we can flush and reload a whole group if necessary), and we can potentially add timeouts and other more complex behaviour simply with configuration changes. See the OSCache documentation for full details.
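
To give a feel for the group functionality, here is a rough sketch of the handful of GeneralCacheAdministrator calls the application actually relies on. The key and group names are just examples, and customer stands in for whatever domain object you want to cache:

import com.opensymphony.oscache.base.NeedsRefreshException;
import com.opensymphony.oscache.general.GeneralCacheAdministrator;
// ...

GeneralCacheAdministrator cacheAdministrator = new GeneralCacheAdministrator();

// store an entry against a key AND a group
cacheAdministrator.putInCache("Customer-42", customer, new String[] {"Customer"});

// read it back; OSCache signals a miss (or an expired entry) with an exception
try {
    Customer cached = (Customer) cacheAdministrator.getFromCache("Customer-42");
} catch (NeedsRefreshException e) {
    // release the update lock, then go back to the database for a fresh copy
    cacheAdministrator.cancelUpdate("Customer-42");
}

// throw away everything associated with the Customer group in one call
cacheAdministrator.flushGroup("Customer");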

CacheManager
 
Primarily to aid unit testing, but also to provide some separation between the application and the implementation of the caching mechanism, I created a CacheManager interface; the implementation simply wraps the OSCache GeneralCacheAdministrator.
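
The interface is deliberately tiny. Something along these lines is all it needs to be (the method names here are my own choice, not anything prescribed), with the implementation delegating each call straight to the GeneralCacheAdministrator methods shown above:

public interface CacheManager {
    /** Returns the cached object for this key, or null on a cache miss. */
    Object get(String key);

    /** Stores an object against a key and one or more group names. */
    void put(String key, Object value, String[] groups);

    /** Removes every entry associated with the given group. */
    void flushGroup(String group);
}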

Aspect Oriented Programming for Caching
 
If caching is implemented in a very simple way, it can be easy to forget to handle caching on all methods that require it. Also, the code that checks whether something is in the cache, and retrieves it from the database and stores it in the cache if it is not, is much the same for most methods. Therefore it seemed to make sense to implement caching using Aspect Oriented Programming, so it can cut across all functionality without it having to be explicitly declared in every method that might need to utilise the cache.

Spring has built-in support for AOP (and that documentation also provides a good introduction to what AOP is), so given our use of Spring MVC it shouldn't be too complicated to add Aspects to our code. 

Implementation: Application Context file
 
You need to add a couple of things to the application context file for your app to set up the cache and enable the AOP. 

<!-- Initialisation for Caching -->
<!-- the actual cache -->
<bean id="cacheAdministrator"
      class="com.opensymphony.oscache.general.GeneralCacheAdministrator"
      destroy-method="destroy"/>

<bean id="cacheManager" class="com.mechanitis.examples.cache.impl.CacheManagerImpl">
    <property name="cacheAdministrator" ref="cacheAdministrator"/>
</bean>

<!-- Magic to get the Spring AspectJ-style AOP working -->
<aop:aspectj-autoproxy/>

<!-- Code that does the caching (the Aspect) -->
<bean id="cacheAOP" class="com.mechanitis.examples.aspect.CachingAspects">
    <property name="cache" ref="cacheManager"/>
</bean>

<!-- End of Caching setup -->

Note that, like the validation, this makes use of schema-based configuration. 
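
One thing to watch: the <aop:aspectj-autoproxy/> element only resolves if the aop namespace and schema location are declared on the root beans element, something along these lines for Spring 2.0:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">
    <!-- bean definitions, including the caching setup above -->
</beans>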

Now that these settings are in the configuration file, they should not need to be changed unless the cache provider is changed or caching is to be fundamentally altered.

Implementation: Defining Items to be Cached
 
Originally I had the application "magically" caching anything returned from a “get” method in the service layer and purging the cache on any “save” or “update” method. 

However, there are some types of objects that don't need to be cached, and that cause errors when they are, because sometimes the data needs to be "fresh" from the database. So the service layer declares what needs attention from the cache manager through annotations:

import com.mechanitis.examples.cache.Cache;
// ...more imports...

public class CustomerServiceImpl implements CustomerService {
    private static final String CUSTOMER_CACHE_GROUP = "Customer";
    private static final String BRANCH_CACHE_GROUP = "Branch";

    private CustomerDAO customerDAO;

    @Cache(groups=CUSTOMER_CACHE_GROUP)
    public Customer getCustomer(CustomerId customerId) {
        return customerDAO.getCustomer(customerId);
    }

    @Cache(groups=CUSTOMER_CACHE_GROUP)
    public CustomerId saveCustomer(Customer customer, String username) {
        return customerDAO.saveCustomer(customer, username);
    }

    @Cache(groups=CUSTOMER_CACHE_GROUP)
    public void updateCustomer(Customer customer) {
        this.customerDAO.updateCustomer(customer);
    }

    @Cache(groups={CUSTOMER_CACHE_GROUP, BRANCH_CACHE_GROUP})
    public void saveCustomerBranch(CustomerBranch customerBranch) {
        customerDAO.saveCustomerBranch(customerBranch);
    }
}

You may notice that this service doesn't really add much value – it forwards the request to the DAO and little else. The purpose of the service layer, however, is to provide a single place for things like caching and, potentially in future, security, logging, transactions, or for stringing together multiple DAO calls into a more complex transaction.

The methods that return objects that need to be cached or that affect items in the cache are tagged with the @Cache annotation. The single argument to this is a list of the groups in the cache that the Object should be or already is associated with. This group allows selective flushing of the cache – so when a new Customer is added, only the Customer group gets flushed (and consequently refreshed) rather than the whole cache. 
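
The @Cache annotation itself has almost nothing to it. A sketch of how it might be declared looks like this (the runtime retention is the important part, since the aspect needs to read the groups when a method is intercepted):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME) // must be visible when the proxy intercepts the method
@Target(ElementType.METHOD)
public @interface Cache {
    /** The cache group(s) the method's data belongs to. */
    String[] groups();
}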

Note that these annotations have to be on the implementation class of the service layer, not the interface – this is because it's the implementation that is wrapped by the AOP proxy. For more information see Understanding AOP Proxies.

Implementation: CachingAspects
This class is responsible for most of the work around the caching mechanism. It defines which methods in the service layer require attention from the caching mechanism and it performs the work around retrieving from the cache and dealing with cache misses. 

It uses the AspectJ-style AOP conventions introduced in Spring 2.0; more information can be found in the Spring documentation. The main areas of interest are the annotations on each advice method, which state when that method is to be called:
@Around("execution(@com.mechanitis.examples.cache.Cache java.util.List get* ())")
public Object cacheListWithNoArgs(ProceedingJoinPoint pjp) throws Throwable {...}

This states that this method should be called whenever a method that starts with the word "get", returns a List, and is tagged with the @Cache annotation is called. The @Around states that this method is responsible for calling the original method – so in this case the original service method (e.g. CustomerService.getAllCustomers()) will only be called if the list is not found in the cache. Another example is:
@Around("execution(@com.mechanitis.examples.cache.Cache “
+ " com.mechanitis.examples.common.domain.* "
+ "get* (com.mechanitis.examples.domain.id.*))")
public Object getIdentifiableObject(ProceedingJoinPoint pjp) throws Throwable {...}

This method is called when a service method is called that is tagged with the @Cache annotation, starts with “get”, is passed a domain ID and returns a domain object. This is a classic example of something to be dealt with by the cache manager – again it needs to check if the item is in the cache, return it if it is or retrieve it from its original source and store it in the cache if it is not. You can have similar methods for determining which methods need to flush the cache (e.g. "update" or "create" methods). 
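
The bodies of these advice methods aren't shown above, so here is a rough sketch of what the read-through advice and a matching flush advice could look like. The key generation is deliberately simplified, and the CacheManager methods are the hypothetical ones from earlier:

import java.lang.reflect.Method;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;

import com.mechanitis.examples.cache.Cache;

@Aspect
public class CachingAspects {
    private CacheManager cache;

    @Around("execution(@com.mechanitis.examples.cache.Cache "
        + "com.mechanitis.examples.common.domain.* "
        + "get* (com.mechanitis.examples.domain.id.*))")
    public Object getIdentifiableObject(ProceedingJoinPoint pjp) throws Throwable {
        // build a key from the method name and the ID argument
        String key = pjp.getSignature().getName() + ":" + pjp.getArgs()[0];
        Object result = cache.get(key);
        if (result == null) {
            // cache miss: call the real service method and remember the answer
            result = pjp.proceed();
            cache.put(key, result, groupsFor(pjp));
        }
        return result;
    }

    @Around("execution(@com.mechanitis.examples.cache.Cache * save*(..)) || "
        + "execution(@com.mechanitis.examples.cache.Cache * update*(..))")
    public Object flushOnChange(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed();    // do the save/update first
        for (String group : groupsFor(pjp)) {
            cache.flushGroup(group);      // then discard anything that might now be stale
        }
        return result;
    }

    // reads the groups declared on the intercepted method's @Cache annotation;
    // the method is looked up on the target class because the annotation lives on the implementation
    private String[] groupsFor(ProceedingJoinPoint pjp) throws NoSuchMethodException {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        Method method = pjp.getTarget().getClass()
                .getMethod(signature.getName(), signature.getParameterTypes());
        return method.getAnnotation(Cache.class).groups();
    }

    public void setCache(CacheManager cache) {
        this.cache = cache;
    }
}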

And hey presto! An almost magical cache which does not require your developers to re-write the same caching code for all the "get", "update" and "create" methods on your service layer. All they have to do is tag the appropriate methods with @Cache and the AOP will take care of the rest of it. 

Disadvantages to AOP
 
As with many “magical” implementations, the main issue I found with this AOP cache is that it can be difficult to debug. Caching can cause weird issues anyway: stale data being displayed because an update method doesn't flush the cache correctly, for example, or the cache-miss path not retrieving the data properly. But when you throw Aspects into the mix, it can cause some interesting bugs that are hard to track down.

The single most important thing in overcoming this is a good set of unit/functional tests for your cache. The advantage of testing a centralised AOP cache is that you don't have to thoroughly test every method that might have caching implemented, so writing a lot of good tests for the AOP cache probably pays off compared with implementing and testing caching for individual methods.

Still, strange things can creep in that can't be detected by unit tests. For example, a method that needs real-time data might be tagged as a cached method through careless copy-and-paste coding. Worse, as I found, if you don't have a way to explicitly state which methods require caching and instead rely on the magic of naming conventions, you need all your developers to be fully aware of those conventions (and to not slip up) in order to control which methods use the cache and which do not.

Although I probably spent more time tweaking and debugging the cache than almost any other individual area of the application when I used this in anger, I would still say it was worth implementing in this fashion. Keeping the "difficult" bits out of the service layer, so junior developers can happily work there, and making caching as simple as adding an annotation to the appropriate methods, improved productivity and allowed for much cleaner code (which also improves productivity), enough that I think it was the right choice to make.
