Learn caching

Dear new developer,

Caching is a common architectural pattern that helps with performance and scalability. Spending some time learning about this will help you build better systems and understand existing architectures.

What is a Cache?

At the most fundamental level, a cache is a secondary store of a set of values which pulls those values from a primary source. There are many reasons why you might want such a store, but a common one is that retrieving the data from the primary datastore is expensive. Retrieval could be costly because it involves an expensive calculation, a network call, a disk read, or something else entirely.

To avoid the cost of the retrieval, store the value in a cache, and then, when it is needed, retrieve it from there.
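Here’s a minimal sketch of that pattern (often called “cache-aside”) in Python. The dictionary, the sleep, and the function names are stand-ins for whatever cache and primary source you actually use:

```python
import time

cache = {}

def expensive_lookup(key):
    time.sleep(1)        # stand-in for a slow query, network call, or calculation
    return key.upper()   # stand-in for the real value

def get(key):
    if key in cache:
        return cache[key]          # cache hit: skip the expensive work
    value = expensive_lookup(key)  # cache miss: go to the primary source
    cache[key] = value             # store it so the next caller gets it cheaply
    return value
```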

Cache Types

There are two main ways to categorize caches.

  • How the data is stored
  • What is stored

Cache data can be stored the same way any data can be stored. In practice, for most applications, you’ll be looking at a few options:

  • Memory
  • Disk
  • Abstracted

Caches that are stored in memory are quick (yay!) but ephemeral (boo!). The cache can go away at any time (whenever the server holding the memory is shut down), which means you can lose access to the cached items at inopportune times. In addition, a memory based cache competes for RAM with other parts of your system, and most systems have far less memory than disk, the alternative storage mechanism.

Retrieving items from disk is slower than memory. Of course, this is true for all disk access, not just that of cached data. The upside of a disk based cache is that you can store far more information.

You can often switch between memory and disk easily and transparently to your application’s code; consult the cache’s documentation to learn more about this option.
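For example, in Django (just one framework that supports this kind of swap; check your own cache’s documentation), the choice of backend lives in configuration, so the code calling cache.get() and cache.set() doesn’t change:

```python
# settings.py -- pick one backend; the calling code stays the same either way.

# In-memory cache (fast, ephemeral, per-process):
CACHES = {
    "default": {"BACKEND": "django.core.cache.backends.locmem.LocMemCache"},
}

# Disk-backed cache (slower, survives restarts, more room):
# CACHES = {
#     "default": {
#         "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
#         "LOCATION": "/var/tmp/django_cache",
#     },
# }
```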

The last type of cache is ‘abstracted’. This means you use a cache as a service and don’t really care about the underlying implementation. This could be part of a framework (Rails caching, for example), a standalone program like memcached or redis, or a full-fledged service like a CDN. A browser is a cache too. You don’t have to care about the details of this abstraction to take advantage of it. However, you will need some understanding of the cache’s behavior and performance when you operate your software.
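Here’s a sketch of using redis as an abstracted cache through the redis-py client (this assumes a redis server is already running locally; the key and value are made up). You call get and set and let the service worry about how the data is stored:

```python
import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

# Store a value with a one-hour expiration; the service handles storage and expiry.
r.set("greeting", json.dumps({"text": "hello"}), ex=3600)

cached = r.get("greeting")   # bytes if present, None on a cache miss
if cached is not None:
    value = json.loads(cached)
```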

Cache Keys

Most caches have a key and a value. The value is the cached value. The key is something that can be constructed by a client to get to the value. There is an implicit communication between the process that puts the value in the cache and the processes that retrieve it: they have to agree on how the key gets constructed. Otherwise the cache readers won’t be able to read the correct values from the cache.

If you have product data that you are caching, you might have a key of product-<productid>. For a product with the id 15, the data would have a key of product-15. The product prefix namespaces the id in the cache so you can have multiple different types of objects cached (categories, deals, etc). The product id (15) needs to be known by the client to get the data.
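One common way to keep writers and readers in agreement is to build keys in a single shared helper, something like this hypothetical one:

```python
def product_key(product_id):
    # Writers and readers must construct keys identically,
    # so keep the construction in one place.
    return f"product-{product_id}"

product_key(15)  # -> "product-15"
```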

Evictions

At some point, your cache will run out of room to store new items. At that point, it will need to get rid of some old items. This process is called ‘eviction’, and there are multiple ways to configure a cache to evict old items:

  • Least recently used (LRU): evict the items that were accessed furthest in the past.
  • Least expensive: if you are caching calculations because they are expensive to perform, evict the cheapest ones first, since they cost the least to recompute.
  • Oldest: evict items that are the oldest. This is also known as first in, first out (FIFO).

Sometimes it can be hard to judge which of these will be the right fit for your system, especially since you may not have a good grasp of usage patterns initially. Generally LRU is a safe choice to begin with.
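As one concrete example, if you happen to be caching the results of a pure Python function, the standard library’s functools.lru_cache gives you a bounded cache with LRU eviction built in (the function here is just an illustration):

```python
from functools import lru_cache

# Once 1024 results are stored, the least recently used entry is evicted
# to make room for the next one.
@lru_cache(maxsize=1024)
def expensive_calculation(n):
    return sum(i * i for i in range(n))
```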

You should also think about how to trigger evictions yourself. You might want to do this when the item being cached materially changes. An example is a new logo for a website. You might want the logo cached in a CDN for years, because it rarely changes and is used everywhere (so you might set the cache period to a long time). But when a new logo is part of a launch, you want to ensure it is actually used. One way to manually trigger a cache eviction is to use a new name for the logo file.

You might also want to evict a cache item when the underlying source of truth has changed. For instance, if you are displaying the price of an item and that price changes in the underlying data, you are going to want to display the new price right away. So when the price changes, you’ll want to force-evict the stale cache entry. You don’t want someone thinking they can get the product at $X, only to find they are charged $X+1. Not a fun conversation with the customer.
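In code, that usually means deleting the cached entry in the same code path that writes the new price. A sketch, where db and cache (and their save_price and delete methods) are hypothetical stand-ins for your primary datastore and your cache client:

```python
def update_price(db, cache, product_id, new_price):
    # db and cache are stand-ins for your primary datastore and cache client.
    db.save_price(product_id, new_price)      # update the source of truth first
    cache.delete(f"product-{product_id}")     # then evict the stale cached value
```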

When Should You Use a Cache?

Caches optimize access but introduce complexity. A system is always easier to understand if you pull from a single source of truth.

Introduce a cache when the performance and scalability benefits are required; otherwise you are simply optimizing prematurely. How do you know if they are required? Evaluate the performance of your code. You can do this by inspection and by reasoning about the code, but if the cache is easy to integrate, it can be simpler to run tests with it disabled and then enabled. If you don’t see a big change in performance, the slow part of your system is probably somewhere else.

In addition, think about how expensive a retrieval from the primary datasource is. This will change over time based on your usage and the type of request. You’ll need some metrics to make a good decision as well as to determine if the cache actually helps. Yes, you can introduce a cache and have things be slower, especially if you are on a resource constrained system or you misconfigure it.

Also, consider how much effort it will be to introduce caching. If it is easily supported in your framework or you can add some headers to your HTTP response, caching can be simple to introduce. If it requires you to stand up a whole new system and refactor clients to use it, it can be difficult.
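As an example of the “add some headers” route, here’s a sketch using Flask (just one framework choice; the route and logo-v2.png path are made up) that tells browsers and CDNs they may keep the logo for a year:

```python
from flask import Flask, send_file

app = Flask(__name__)

# The version suffix in the URL is the "new name" trick from the logo example:
# changing the name forces clients past any long-lived cached copy.
@app.route("/logo-v2.png")
def logo():
    response = send_file("static/logo-v2.png")
    response.headers["Cache-Control"] = "public, max-age=31536000"  # one year
    return response
```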

Finally, there are some caches that are built into systems already. Take some time to learn about them; there may be easy wins from tweaking their configuration. Linux has caching built in, as do most databases.

Conclusion

As you might have gleaned from all the examples I mentioned, caches are everywhere. Learning about them and how they can be used is worth your time.

Sincerely,

Dan

Make it work

This is a guest post from Tim Bourguignon. Enjoy.

Dear new developer,

Let me tell you a story.

Once upon a time, there was a black chalkboard in a dark room. The board carried remnants of countless creations. Diagrams that could pass as hieroglyphs to the untrained eye. Or well-crafted UML to everyone else. From systems to modules, modules to components, and components to classes, it all seemed to flow. Generic, interoperable and reusable, it was beautiful. Those architectural blueprints guided the creation of the software. Countless tests secured it, using the best means at hand. Production servers welcomed the bits. But a few weeks later, the company shut down. There was no market for this software. There was not enough cash left for the company to pivot and explore a different idea. Too much time was lost writing the most beautiful and never-used piece of code in the world. Beautiful or well-crafted code that doesn’t solve a real problem doesn’t count.

Once upon another time, a piece of software crashed in production. A website was down. A shopping system was not taking orders. A customer was losing a fair amount of money for every hour, every minute, and every second that passed. And boy, was she furious. A few caffeine-doped hacking hours solved the problem. Thankfulness replaced the customer’s fury. The architecture of the solution lacked flexibility. The reuse potential was at an all-time low. A bitter smell of copy-paste still hung in the air. And the amount of technical debt in the codebase reached new heights. But the fix saved the business. Sometimes, ugly or imperfect code that “does the job” is the best you can hope for.

Here’s how David Heinemeier Hansson and Jason Fried described the launch of Basecamp:

When we launched Basecamp, we didn’t even have the ability to bill customers! Because the product billed in monthly cycles, we knew we had a thirty-day gap to figure it out. So we used the time before launch to solve more urgent problems that actually mattered on day one. Day 30 could wait.

Rework by DHH & Jason Fried

Let me rephrase this: they launched a paid product with no way to earn money! Does this sound sane to you? It actually is. Worst case scenario, nobody is willing to pay anyway: no users, no need for a payment system. Period. They concentrated their effort on making a great product right up until the launch date. If the product attracted customers, they would need a payment system. The month that ensued might have been smooth… or very bumpy. Either way, at the end of that month, some code had to be running. Good or bad, perfect or crappy looking, it had to do the job. Otherwise they would lose money.

Let’s generalize by putting code aside for a moment. Have you ever written a text only to throw it out seconds later, realizing how crappy it was? I do it countless times every single day. Have you ever had a great idea put you into ecstasy, only to throw it away the moment you put it on paper? Have you ever realized how dumb an idea was only after hearing yourself verbalize it? I do, every single day. Plans are perfect until they meet reality.

For everything you do, there is a right time. Sometimes it is “yesterday”, sometimes it is “now”. But more often than not, it is “later” or even “never”. The best way to find out is to strive to make “it” work. Whatever you do, try to make your plans meet reality as fast as possible. Only then can you be sure you are not working on false promises.

Don’t underestimate this first rule. We often forget about it at the first hint of peer pressure. If you write software as part of a team, you have probably experienced a variation of the following:

My coworkers will review my code. What will they think of me? I’d better refactor this component, make non-trivial changes to this module, enable future extensions in this class and clean my code up to its core to uphold the design standards…

What I could have done in a few minutes has grown into hours, and the peer pressure increases. Surely what has taken so long to craft must be excellent… or is it the other way around?

Can you live up to this first rule? Our formal education taught us to seek the unattainable best grade and to hunt for any flaw. This rule, by contrast, tells us to seek the “good-enough” and then iterate on it. Use those intermediate versions to foster discussion. Lay a “good-enough” version of your work on the table, be open about its flaws, and be sharp in criticizing it yourself. As a group, you can then decide whether to invest in making it suck less, or to leave it be. As J.B. Rainsberger said in his talk “7 minutes, 26 seconds, and the Fundamental Theorem of Agile Software Development”:

If every item in our shop either costs $8 or $13 and we hard-coded those values, we are done!

Of course, not every draft that does the job is production ready. Production is often far down the road. You will have to work hard to find out what “good-enough” means for every new challenge you face. But by seeking working solutions in your daily life, you will make true progress. Refine working software. Keep both feet on the ground and focus on tiny, measured steps. This is the basis for producing a good solution: the solution that is there when the deadline arrives, the solution that makes the business work.

— Tim

This post was originally published at Auswanderer Quatsch.

Tim has been building synapses between human beings since 1983. A passionate developer, mentoring advocate, and both mentor and mentee himself, he works as Chief Learning Officer, Head of Agile, and technical Agile Coach for the MATHEMA company in Germany. In his free time, he hosts the Software Developer’s Journey podcast and spends as much time as he can with his wife, with [1;3] kids clutched on his back!

Be a Just in Time Learner, part II

Dear new developer,

I previously wrote about being a JIT learner and talked about it in the context of a Just In Time compiler.

Just in time has another meaning, one that comes from manufacturing, where delivering the right parts to the right plant at the right time revolutionized the industry. Just in time learning means that you focus on what can give you the greatest bang for your buck, and that you learn it when you need to.

The world of software is immense, and as you navigate it more, you’ll begin to see patterns. When I see a new dependency management tool, I know it’ll operate roughly like the three other dependency management tools I’m experienced with. It will have:

  • a dependency tree, likely stored in text
  • a central repository or multiple repositories, where common code resides
  • a way to have a private repository for proprietary code
  • commands to update individual packages or an entire system

So, I don’t really need to master each dependency manager, because I can do the mental mapping between the ones that I know (like maven and bundler) and the ones I’m less familiar with (composer, npm). I learn just enough to do what I need to do. I do this to avoid being overwhelmed by each new tool.

In a similar manner, you can apply the same idea to software development in general. When you start to get overwhelmed, focus on one task at a time, and learn just enough to do that task. Now, I think you should try to understand why you are doing that task and not just copy pasta code, but there’s a balance to be struck. You also may need to keep a deep mental stack to do this (watch out for yak shaving), as you’ll be pulled from one task you don’t fully understand to another.

The way to defeat this is to continue to build the mental model. Try to understand the smaller pieces of a site or application before moving up to the medium size pieces and then to the larger pieces.

An example: when I’m starting a new project, the first thing I try to understand is how to get it running locally. Running a project locally is glorious! Even if the project isn’t under version control, I have absolute control of the local environment and can tweak and break things with abandon. The next thing I try to understand is how the software gets deployed to production. Here, obviously, I can’t break things with abandon, but I learn what the architecture is like. Finally, I’ll try to make a small change and see if I can get it through the deployment pipeline. This assures me I know how to connect the two key environments (local and production).

You can do the same thing with the first months of a new job. Map it to other jobs or schooling you’ve had. Think back to what worked in the past for learning new tasks.

In short, be a just in time learner. Focus on what’s in front of you, and learn that. Build models between what you know and what you don’t. Don’t fall into the trap of trying to understand everything.

Sincerely,

Dan