
Debugging caches using groovy logging

26 December 2014 (updated 30 January 2017)

Caches are difficult to get right, and difficult to maintain. To get insight into how well the Spring method caches we use actually work, I added some logging. Spring method caches are annotated with @Cacheable.
It seemed like a good idea to at least have some information on:

  • how often a cache is hit
  • how long the call would take if not cached
  • proof that the cache actually saves time
  • proof that the caches are cleaned according to TTL configuration

Or to sum it all up: how effective is each individual cache, and is it worth the maintenance? And while we're at it, are the caches being refreshed as intended?

So to get all this, I wanted to log how long each method call takes to finish and the time between identical method calls.

Groovy metaclass programming provides an easy way to apply logic to all methods at once. For those who know AOP, it's like an 'around advice'.
This simple example shows how long the first invocation takes and how long subsequent cached invocations take (hopefully nearing zero ms).

// whenever any method is invoked on the class (through groovy)...
CachingOrderService.metaClass.invokeMethod = { name, args ->
  long startMillis = System.currentTimeMillis()
        
  // get the name of the method to print something useful
  def metaMethod = delegate.metaClass.getMetaMethod(name, args)
  try {
    // actually let the invocation go through
    return metaMethod.invoke(delegate, args)
  } finally {
    long finishMillis = System.currentTimeMillis()
    long durationMillis = finishMillis - startMillis
    // let the instance that performs the call actually log the message
    delegate.log.debug "Call #${name}() finished in ${durationMillis} ms."
  }
}
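For readers on plain Java, the same 'around' interception can be sketched with a JDK dynamic proxy. The `TimingHandler` name and the `println` logging below are illustrative, not from the original post:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// wraps any interface implementation and times every call, like an 'around advice'
class TimingHandler implements InvocationHandler {
    private final Object target;

    TimingHandler(Object target) {
        this.target = target;
    }

    // creates a proxy for the given interface that routes all calls through invoke()
    static Object wrap(Object target, Class<?> iface) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface}, new TimingHandler(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long startMillis = System.currentTimeMillis();
        try {
            // actually let the invocation go through to the real object
            return method.invoke(target, args);
        } finally {
            long durationMillis = System.currentTimeMillis() - startMillis;
            System.out.println("Call #" + method.getName() + "() finished in "
                    + durationMillis + " ms.");
        }
    }
}
```

The proxy approach only works for interface types, which is the main convenience the Groovy metaclass version buys you: it intercepts calls on a concrete class without any interface.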

Putting all the caching declarations, conversion logic, and so on in a separate class leaves the original one (OrderService) nice and clean.
The @Delegate annotation saves us from writing a full wrapper class with delegation logic for every method!

class CachingOrderService {
  // all calls go to the delegate (if not handled here)
  @Delegate OrderService orderService
}
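To see what @Delegate saves, here is the hand-written equivalent in plain Java: one forwarding method per delegate method. The `OrderService` interface below is a simplified stand-in for the real service, not the original post's API:

```java
import java.util.List;

// simplified stand-in for the real service interface
interface OrderService {
    List<String> listOrders(String customer);
    void cancelOrder(long orderId);
}

// without @Delegate, every method must be forwarded by hand
class CachingOrderService implements OrderService {
    private final OrderService orderService;

    CachingOrderService(OrderService orderService) {
        this.orderService = orderService;
    }

    @Override
    public List<String> listOrders(String customer) {
        // a caching annotation or lookup would wrap this call
        return orderService.listOrders(customer);
    }

    @Override
    public void cancelOrder(long orderId) {
        orderService.cancelOrder(orderId);
    }
}
```

Groovy's @Delegate generates exactly this kind of forwarding method at compile time, for every public method of the delegate field.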

Knowing the timings of method calls is nice, but to verify that the TTL works as configured, and to see how much effect the caching has on the system as a whole, it helps to know how often calls are made.
To record the time between identical calls, we can simply keep a map of previous call times.

// a map of times per method call
private final ConcurrentHashMap callTimings = new ConcurrentHashMap()

// get the moment of the last invocation (or now if no recorded time available)
long lastTimeMillis = callTimings.get(name, System.currentTimeMillis())
// the time between invocations (assuming here the start time is available)
long inBetweenMillis = startMillis - lastTimeMillis 
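The same bookkeeping translated to plain Java (the class and method names here are mine): `ConcurrentHashMap.put` returns the previous value atomically, which gives the gap between calls in a single map operation:

```java
import java.util.concurrent.ConcurrentHashMap;

// records the time between successive calls with the same name
class CallTimings {
    private final ConcurrentHashMap<String, Long> lastCallMillis = new ConcurrentHashMap<>();

    // returns ms since the previous call with this name, or 0 for the first call
    long millisSincePreviousCall(String name) {
        long now = System.currentTimeMillis();
        // put() atomically stores the new time and hands back the old one
        Long previous = lastCallMillis.put(name, now);
        return previous == null ? 0 : now - previous;
    }
}
```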

Putting it all together gives something like the following (warning: large slab of code coming through):

import groovy.util.logging.Slf4j
import java.util.concurrent.ConcurrentHashMap
import javax.annotation.PostConstruct
import org.springframework.cache.annotation.Cacheable

/**
 * Provides caching calls if applicable for the {@link OrderService}.
 **/
@Slf4j
class CachingOrderService {

  // all calls go to the delegate if not handled here
  @Delegate OrderService orderService

  private final ConcurrentHashMap callTimings = new ConcurrentHashMap()

  @PostConstruct
  def init() {
    log.info "Initializing caching wrapper for orderService: ${orderService}"

    CachingOrderService.metaClass.invokeMethod = { name, args ->
      long lastTimeMillis = callTimings.get(name, System.currentTimeMillis())
      long startMillis = System.currentTimeMillis()
      long inBetweenMillis = startMillis - lastTimeMillis 
	
      def metaMethod = delegate.metaClass.getMetaMethod(name, args)
      try {
        callTimings.put(name, startMillis)
        return metaMethod.invoke(delegate, args)
      } finally {
        long finishMillis = System.currentTimeMillis()
        long durationMillis = finishMillis - startMillis
        delegate.log.debug("Call #${name}(), last occurrence was ${inBetweenMillis} ms ago." +
          " Finished in ${durationMillis} ms.")
      }
    }
  }

  @Cacheable("orders")
  synchronized List listOrders(
      BigDecimal statusFilter,
      BigDecimal company,
      String customer) {
    return orderService.listOrders(statusFilter, company, customer)
  }
}

Some notes

  • To use the caching, clients should work with the CachingOrderService, not the original.
  • Only the methods defined in this 'caching' service actually do any caching.
    There is no magic for all the other calls; they are simply delegated (hence the @Delegate) to the original orderService.

  • The @Slf4j annotation provides an implicit slf4j 'log' instance variable, added by an AST transformation.
  • The map keeps track of method call times so the time between calls can be calculated. The service is a Spring singleton bean, so multiple threads can access it at the same time; the map therefore needs to handle concurrent access well.
  • The methods should be kept simple, or the map keys should be made more unique, to prevent clashes (in case of method overloading, for instance).
  • The listOrders method is synchronized, which might seem strange and bad for performance. It is, however, the easiest way to prevent Spring from creating the same cache entry multiple times. It does not really hurt performance much, considering the difference between a backend call that can take up to a full second and the small locking overhead of the synchronized keyword (milliseconds at most). This is the quick alternative to a BlockingCache, which can be configured using EhCache.
  • The @PostConstruct annotation provides a trigger to apply the metaclass programming part exactly once.
  • It is not actually required to put the metaclass programming part inside the service, but it is easy to locate this way.
  • Another way to get insight is to use EhCache's statistics functionality. This requires a little configuration, but provides even more information. Having cache information in the logs is still valuable, though, since cache hits and misses can be crucial when tracking down bugs.
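As a middle ground between full EhCache statistics and plain timing logs, hits and misses can also be counted by hand. A minimal sketch in plain Java (the names are mine, not EhCache's API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// a memoizing map that counts hits and misses, mimicking cache statistics
class CountingCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
    final AtomicLong hits = new AtomicLong();
    final AtomicLong misses = new AtomicLong();

    // returns the cached value, loading and storing it on a miss
    V get(K key, Function<K, V> loader) {
        V cached = store.get(key);
        if (cached != null) {
            hits.incrementAndGet();
            return cached;
        }
        misses.incrementAndGet();
        return store.computeIfAbsent(key, loader);
    }
}
```

The hit/miss ratio answers the first question from the list at the top (how often is the cache hit) without depending on any particular cache provider.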