In the early days of Java, I recall reading a book about the performance characteristics of various things in Java. One of the tidbits that I latched onto was the suggestion that the default size of a Hashtable object was inefficient if it had to grow beyond that default.
As I recall the situation, the default size of a Hashtable was 101, and the hashing algorithm relied on the size being a prime number: lookups were faster for prime sizes than for non-prime ones. 101 is prime, so it provided good performance. But if the Hashtable ever needed to be resized larger and larger, its size very quickly stopped being prime and performance would degrade.
The book’s advice was to start with an initial size of 89, since growing from there would keep the size prime through many more resizings as the collection became larger.
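To see why 89 was supposedly the magic number, it helps to walk the growth chain. Here is a minimal sketch, assuming the old Hashtable grew on each rehash to roughly double its size (new capacity = old capacity * 2 + 1, which is how I remember it working); it prints the sequence of capacities from each starting point and marks which ones are prime:

    import java.math.BigInteger;

    public class PrimeGrowthSketch {
        public static void main(String[] args) {
            // Assumed growth rule from the old Hashtable implementation:
            // newCapacity = oldCapacity * 2 + 1 on each rehash.
            for (int start : new int[] {101, 89}) {
                StringBuilder chain = new StringBuilder("starting at " + start + ":");
                int capacity = start;
                for (int resize = 0; resize < 6; resize++) {
                    boolean isPrime = BigInteger.valueOf(capacity).isProbablePrime(30);
                    chain.append(' ').append(capacity).append(isPrime ? "*" : "");
                    capacity = capacity * 2 + 1;
                }
                System.out.println(chain + "   (* = prime)");
            }
        }
    }

Run that and the 101 chain goes non-prime on the very first resize (203 = 7 × 29), while the 89 chain stays prime for several doublings (179, 359, 719, 1439, 2879). That is the property the book was selling.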
This notion appealed to me, so for years I littered my code with new Hashtable(89) instead of just new Hashtable(). Clearly this is better, right? Wrong.
At some point (I think Java 5), they re-implemented the Hashtable class and the new algorithm favoured a size that is a power of two.
Had all my code called new Hashtable(), I would have got a nice speed increase right out of the box. For no extra work, my code would be better. But my code didn’t do that. I’d decided to be clever and had hard coded it to 89, which is very much not a power of two.
This is a perfect example of premature optimization. Had I focused on leaving my code maintainable and easy to read, it would have improved as the libraries improved. Instead, I had decided to be clever and actually made things worse. The interesting thing is that I didn’t make it worse immediately; I’d left a time bomb that would make it worse when the code was upgraded to a newer version of Java.
Learn from my mistakes. Don’t optimize your code until you have a measurable performance problem. Until then, write maintainable code, not clever code.