Slow Is Smooth, Smooth Is Fast - 25% of Our Time Refactoring
My team has been spending less of our “free” time working on bugs and features from the backlog, and more time refactoring our code and tests. As a result, and perhaps somewhat counterintuitively, we’ve noticed a significant increase in our throughput of features and bug fixes.
As it turns out, it’s easy to find bugs and add features to a well-written codebase that the entire team is familiar with. Go figure.
I’ve been paying attention to our throughput and efficiency as we’ve made a conscious effort to spend around 25% of our time refactoring, revisiting, and restyling existing code and tests. Let’s go over some of the benefits we’ve seen since making the change.
Context and Caveats
I’ve found that in articles like this it’s important to give as much context to the situation as possible, as certain development methodologies may work better or worse in teams with different sizes, tech stacks, or development processes. Here are some notable things about our situation:
- 4 engineers on my team, ~16 in the company
- Tech stack - Go, Postgres, Elasticsearch, and RabbitMQ
- Microservices architecture on Kubernetes
- Kanban-style development process - no Scrum
- Our team is responsible for ~15 repositories
- Each repo represents a small service in a data pipeline process that handles the sorting and NLP of social media posts

Code Familiarity
With only four engineers on my team responsible for ~15 repositories, it was hard for all four of us to be intimately familiar with all the code. When we needed a new microservice, one team member typically wrote the first iteration, and one other team member did a quick code review. The engineer who wrote the first iteration would then be primarily responsible for bug fixes and new features relating to that project.
Focusing more of our time on reviewing and refactoring existing code gave us a chance to hop into projects we otherwise never would have had a reason to become familiar with. Not only does getting more eyes on a project mean the overall code quality will likely go up, but it also means we aren't hosed if the original maintainer moves on to a new company.
Slow to Fix Bugs
When you get deep into spaghetti code, it can be really hard to find bugs. In a messy codebase, sometimes fixing a bug can add to the “uncleanliness” of the code. You may have to exacerbate or extend an already bad architectural pattern to get a bug fix in.
Ideally, you would do the refactoring first and then fix the bug (assuming the bug still exists after a good refactoring). Unfortunately, oftentimes there isn’t enough time to refactor a project before fixing a critical bug. For this reason, we should always be refactoring so that bug fixes can happen quickly without harming code quality.
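As a minimal, hypothetical Go sketch of what this looks like in practice (the function names are made up for illustration, not taken from our services): when the same text-cleanup logic is copy-pasted into several handlers, a bug fix has to be hunted down and applied in each copy. Extracting it first means the fix lands in exactly one place.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePost is a hypothetical helper. Before refactoring, this
// trim-and-lowercase logic was duplicated across several handlers;
// fixing a whitespace bug meant patching every copy. After extracting
// it, the fix (and its test) lives in one function.
func normalizePost(text string) string {
	text = strings.TrimSpace(text)
	return strings.ToLower(text)
}

func main() {
	fmt.Println(normalizePost("  Hello, World!  "))
}
```

The refactor doesn't change behavior on its own; it just shrinks the surface area the bug fix has to touch.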
Slow to Add Features
I don’t want to beat a dead horse; the reasoning here is largely the same as with bug fixes. Adding features to a messy codebase just makes it messier. It’s like frosting a cake that’s already been dropped on the ground. I guess the cake would taste better if you still felt inclined to eat it, but you’ve made the inevitable clean-up harder.

Try It Yourself
Since adding a consistent refactoring process to our team’s routine, we’ve been able to put more features through to production while spending less time working on them. Let me know what you think and if you have a different experience.