Software Development “Process Smells”

We’ve all heard of code smells, right? These are the tell-tale signs that things aren’t right. I haven’t heard anyone talking about software development process smells, the tell-tale signs that your process is awry, so I thought I’d put a stake in the ground. I’d love any of our readers to chip in with more suggestions and I’ll keep this page updated in the future.

So, here’s my initial list of the smelly stuff:

Manual Testing

This is a massive bugbear of mine, and one of the key process smells. If you do anything manually you aren’t guaranteeing repeatability. If you haven’t got repeatability you can’t guarantee quality. Since testing is supposed to be about assuring quality, manual testing is a contradiction. Stop it. Right now.

Well, maybe not right now, but as soon as you get automated testing in place.

Remember as well that manual testing is like a parasitic load on your development cost. OK, writing an automated test may be 3x more expensive than running a manual one, but once written you can run a regression sweep every time you check in code. This catches issues early and allows them to be fixed when it is cheap and effective to do so.

When you have the dead weight of manual testing on your project you tend to make testing an event rather than an integrated part of the process. You store up more and more code into a release with the intention of testing it once. You then find bugs and end up with stabilization phases. Don’t do this. Test early, test often, test automatically.
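To make “test automatically” concrete, here’s a minimal sketch of the kind of regression test a CI server (or pre-commit hook) can run on every check-in. The function and figures are hypothetical, purely for illustration:

```python
# A hypothetical pricing function and its regression tests.
# In a real project these live in your test suite and run
# automatically on every check-in via your CI server.

def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(200.0, 25) == 150.0
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass  # invalid discounts must be rejected
    else:
        raise AssertionError("expected ValueError for an invalid discount")

if __name__ == "__main__":
    test_apply_discount()
```

The point isn’t the test framework; it’s that this runs in seconds, identically, on every single commit, which no manual tester can promise.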

Manual Deployment

Ditto the above for the same reasons of quality, repeatability and parasitic effort / cost.

Manual deployment often comes around because of a lack of capability as much as anything.  You need to change your ops procedures from “click through the wizard” to “write a shell script” for everything they do.  Once you have the shell scripts to do everything, from provisioning and configuring infrastructure to application deployment, the task of automating end-to-end becomes a heck of a lot easier.  There are plenty of tools out there that will process a YAML file full of bash commands.
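The shape of “scripts instead of wizards” is just an ordered list of steps run the same way every time. Here’s a toy sketch of such a runner, with a dry-run mode so the plan can be reviewed; the step commands are illustrative placeholders, not a real deployment:

```python
import subprocess

# Toy deployment runner: an ordered list of steps executed in
# sequence, with a dry-run mode. The commands below are made-up
# placeholders standing in for your real provisioning/deploy CLIs.

DEPLOY_STEPS = [
    "provision-vm --size medium",      # hypothetical CLI calls
    "configure-app --env staging",
    "copy-artifact app.war /srv/app",
    "restart-service app",
]

def deploy(steps, dry_run=True):
    """Run each step in order; in dry-run mode just report the plan."""
    executed = []
    for step in steps:
        if not dry_run:
            subprocess.run(step, shell=True, check=True)
        executed.append(step)
    return executed

plan = deploy(DEPLOY_STEPS, dry_run=True)
```

Once every manual click is captured as a step like this, handing the list to a proper pipeline tool is a small jump rather than a big-bang rewrite.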

Sometimes I find the inability to deploy automatically is a key factor in release constipation.  The inability to consistently deploy flags releases as risky, and hence something to be avoided.  Releases should be seen as low-risk and something to be embraced.  You get this from automation, and automated quality control.

Planned Production Downtime

This follows hot on the heels of the above point.

For many in the .com economy, whether you’re running an ecommerce business or a SaaS service, the idea of taking down production for patching or releases is simply untenable. If you switch off your business you’re also switching off your clients. And they don’t like this. They might not come back.

Believe it or not, in the dark and murky world of the enterprise, the practice of planned outages is still alive and well. Businesses that operate Monday to Friday might get away with releasing over a weekend. But just because you can get away with it doesn’t mean it’s right.  It’s a sign your process smells.

When you take down your production environment for releases it usually means:

  • You have no disaster recovery / failover capability, as you can’t swap to your backup instances.
  • You have manual deployment processes.
  • You are tied to physical machines, rather than provisioning a fleet of newly minted virtual instances ready to swap in for your old ones.

This is also symptomatic of the “deployment is an event, not a process” smell outlined in the next section.
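One common way out of planned downtime is a blue/green swap: deploy to the idle set of instances while the live set keeps serving traffic, then flip the pointer. Here’s a toy sketch where the “router” is just a dict and the environment names are illustrative:

```python
# Toy blue/green deployment: two environments, traffic pointed at one.
# Deploying to the idle environment and then swapping the pointer
# means no production downtime, and instant rollback by swapping back.

router = {"live": "blue"}
environments = {"blue": "v1.0", "green": "v1.0"}

def deploy_new_version(version):
    """Deploy to the idle environment, then atomically swap traffic."""
    idle = "green" if router["live"] == "blue" else "blue"
    environments[idle] = version   # deploy while the old env serves traffic
    router["live"] = idle          # instant cutover, no outage window
    return idle

deploy_new_version("v1.1")
```

In real life the router is a load balancer or DNS entry and the environments are fleets of virtual instances, but the shape of the swap is exactly this.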

Release Constipation (or Deployment as an Event not a Process)

Otherwise known as releasing annually / biannually / quarterly.

See “crazy branching” below. The longer the gaps between deployments the more unreleased code you have. The more unreleased code you have the riskier each deployment becomes, and the greater the merging backwash you leave behind.  This is one of the uber process smells because it’s at the apex of a smell pyramid.

A smell for release constipation is the scheduled release cycle.

Why not release when a feature is ready?

OK, to be fair, I have worked on banking systems whose releases need to align with regulatory change. This often entailed a huge last-minute changeover of systems to meet the regulatory change. Having said this, you’re rarely penalised for being compliant with new regulation too early, only if you’re late. Most of the time you can get almost all of the features out there early, and sometimes you just need to change some configuration at the time the new regs come in.
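That “change some configuration when the regs come in” trick is essentially a feature flag: the code ships early and dark, and a config value turns it on at the moment it must take effect. A minimal sketch, with made-up flag names and dates:

```python
import datetime

# Minimal feature-flag sketch: the regulatory-change code is deployed
# weeks early but disabled; a configured go-live date (or a config
# switch) turns it on the moment the new regulation takes effect.
# Flag names and dates are illustrative.

FLAGS = {
    "new_regulatory_reporting": datetime.date(2024, 1, 1),  # go-live date
}

def is_enabled(flag, today):
    """A flag is on once its configured go-live date has arrived."""
    go_live = FLAGS.get(flag)
    return go_live is not None and today >= go_live

def generate_report(today):
    if is_enabled("new_regulatory_reporting", today):
        return "report-v2"   # new regulatory format, already deployed
    return "report-v1"       # old format until the regs take effect
```

The release and the go-live are now decoupled: you deploy on demand, and the regulator’s date is just data.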

Almost every excuse I hear for why you schedule your releases instead of releasing on demand actually points to something smelly in your processes. Inability to integration test. Lack of test environments. Lengthy manual test cycles. “The business not being ready”.  Manual deployments. Downtime.

Fix the things that prevent you from releasing, then experience the joy of frequent low-risk releases and put the pain of the sh*t-or-bust release behind you.

Crazy Branching

Ever been on one of those projects where someone tries to explain the branching strategy and you get lost after 2 minutes because there’s so much information to take in? Crazy branching is often a sign of release constipation, which is another of the process smells. When you can’t get a release out of the door and into production (maybe because you’ve frozen the features in order to undertake a lengthy manual testing phase), but your team is still working, you end up making extra branches for them to work on.

Of course, your release branch never stays still. You fix bugs. You get emergency feature requests and have to accommodate interface changes when integrating with other systems. These changes in your release branches need to be merged back into your feature branches, and you then end up spending half your time just making sure the merges are done and nothing is regressed.

The fix for this is to ship early, ship often, minimise the amount of unreleased code, keep the lifetime of feature branches short (a couple of days max, ideally) and avoid branching off branches (which themselves branch off yet more branches off master).
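One way to keep yourselves honest about that “couple of days max” rule is to flag any branch that outlives it. A sketch with fabricated branch data; in practice you’d pull the list from your git hosting API or `git for-each-ref`:

```python
import datetime

# Sketch of a branch-age check: flag feature branches that have
# lived longer than the agreed maximum. The branch list below is
# fabricated; in reality you'd read names and creation dates from
# your git hosting API or `git for-each-ref`.

MAX_AGE_DAYS = 2

def stale_branches(branches, today, max_age_days=MAX_AGE_DAYS):
    """Return branch names created more than max_age_days ago."""
    return [
        name for name, created in branches
        if (today - created).days > max_age_days
    ]

branches = [
    ("feature/quick-fix",   datetime.date(2024, 3, 10)),
    ("feature/big-rewrite", datetime.date(2024, 3, 1)),   # too old
]
flagged = stale_branches(branches, today=datetime.date(2024, 3, 11))
```

Run it nightly in CI and long-lived branches become a visible, nagging failure rather than a quiet merge debt building up.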

Zip Files of Source Code

If you find that someone has ever felt the need to keep zip files with a cut of your code in them, “in case I need to get back to this build”, you should break out in a cold sweat. There is no better illustration of the fact that you’re not managing your source code repository correctly (or you have got devs who don’t know how to manage source code).

Stop it. You should be able to go back to any commit and build from it. You can branch from any commit in the past. You can do all of this if you’re managing your source code right.

No zip files please. There’s no need and you’re only embarrassing yourself.

Sharing Dev/Test Servers

I remember working with a client once that had a thick-client trading app running against a SQL Server database. To “save on the licence costs” all of the dev team connected to the same database server to debug against.

What could possibly go right with this? Every build you’re running needs to be on a clean stack so you know you haven’t got cross-contamination from anyone else’s changes.

If different testers are running different sets of tests concurrently on the same environment how can you be sure that they’re not affecting each other? (A fix for that is to automate your test suite and run it as part of your release pipeline).

Quality depends on repeatability.  You need isolation of your environments to achieve this.  If you have no isolation your process smells.
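Isolation doesn’t have to mean a licence per developer: a fresh, throwaway instance per test gets you the clean stack. Here’s a toy sketch using SQLite in-memory databases as a stand-in for whatever database you actually run:

```python
import sqlite3

# Sketch of per-test isolation: every test gets its own brand-new
# in-memory database, so no test can contaminate another. The same
# idea applies with containers or per-build schemas in a real pipeline.

def fresh_db():
    """Create a clean, empty database for one test."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trades (id INTEGER, amount REAL)")
    return conn

def test_insert_works():
    db = fresh_db()
    db.execute("INSERT INTO trades VALUES (1, 99.5)")
    assert db.execute("SELECT COUNT(*) FROM trades").fetchone()[0] == 1

def test_starts_clean():
    db = fresh_db()  # unaffected by the insert in the other test
    assert db.execute("SELECT COUNT(*) FROM trades").fetchone()[0] == 0

test_insert_works()
test_starts_clean()
```

Run the two tests in either order, as many times as you like, and they can never tread on each other, which is exactly what a shared server can’t promise.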

Test Outcome Reports

It’s not unusual for system integrators to agree to write a report when they “deliver code” that describes its quality.  What’s the point?  If you’ve set up your CI properly you should always have 100% of tests passing, or else you’re not protecting your branches properly.  You should also be setting minimum coverage requirements on your branches.  I can give you the stats on quality after every build I do.  Why write this down in a report?

The answer, sadly, is that test reports are a process smell of testing-as-an-event.  If you have decent CI and automated results available from every build you don’t need to report on quality, you enforce it.
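“Enforce it” looks something like this: a gate that fails the build when any test fails or coverage drops below the agreed floor. The thresholds and numbers here are illustrative; the real inputs come from your test runner and coverage tool:

```python
# Sketch of a CI quality gate: instead of writing quality down in a
# report, the build fails unless the numbers meet the bar. The figures
# fed in would come from your test runner and coverage tool; the
# coverage floor is an illustrative value.

MIN_COVERAGE = 80.0

def quality_gate(tests_passed, tests_total, coverage_percent,
                 min_coverage=MIN_COVERAGE):
    """Return (ok, reason); CI fails the build when ok is False."""
    if tests_passed < tests_total:
        return False, f"{tests_total - tests_passed} test(s) failing"
    if coverage_percent < min_coverage:
        return False, f"coverage {coverage_percent}% below {min_coverage}%"
    return True, "all checks passed"
```

Wire this into the pipeline as a required check on your branches and the “quality report” writes itself, on every build, and nobody can ship around it.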


The above points are by no means exhaustive and are intended as a guide to get you thinking about the issue of process smells.  What is there in your organization that is a symptom of underlying process stinkers?  Contact us via the website or add to the comments below if you’ve got additions to the wall of shame.