Curious to know how many people do zero-downtime deployment of backend code and how many people regularly take their service down, even if very briefly, to roll out new code.
Zero-downtime deployment is valuable for some applications and a complete waste of effort for others, of course, but that doesn’t mean people actually do it where they should and skip it where it isn’t useful.
I write data pipeline code and there is zero downtime. We use Kafka to buffer messages from dozens of producers to dozens of consumers on Kubernetes.
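Not the poster’s actual code, but a minimal sketch of why that shape tolerates rolling deploys: each consumer commits offsets only after processing, so when Kubernetes replaces a pod mid-deploy the consumer group rebalances and the replacement resumes from the last committed offset. Broker, group, and topic names are placeholders.

```python
from confluent_kafka import Consumer  # pip install confluent-kafka

def process(payload: bytes) -> None:
    ...  # placeholder for the actual pipeline step

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",  # placeholder broker address
    "group.id": "pipeline-workers",     # all replicas share one consumer group
    "enable.auto.commit": False,        # commit only after successful processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])          # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        process(msg.value())
        # Offsets advance only after success, so a pod killed mid-deploy
        # simply replays this message on another replica.
        consumer.commit(message=msg)
finally:
    consumer.close()  # leaves the group cleanly, triggering a rebalance
```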
Yeah, zero downtime. You ship the new features but gate them behind some system you can control. Once all the new code is shipped, you turn the new features up gradually until they reach 100%. This lets you observe their real-world behavior: if they don’t cache well, or cause 500s, or what have you, you can turn them off without having to ship new code.
Also, if you keep all these feature flags around, then when you have capacity problems you can turn features down for the survival of the service as a whole.
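Not the commenter’s actual system, but a minimal sketch of that kind of percentage gate, with a hypothetical in-process flag table standing in for whatever runtime config service you’d really use:

```python
import hashlib

# Hypothetical in-process flag table; in practice this lives in a config
# service you can change at runtime without redeploying.
FLAG_ROLLOUT_PERCENT = {
    "new_checkout_flow": 25,  # ramp toward 100 while watching caching/500s
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket each user into [0, 100) so the same user keeps
    seeing the same variant as the rollout percentage is turned up."""
    percent = FLAG_ROLLOUT_PERCENT.get(flag, 0)
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < percent

def old_checkout(user_id: str) -> str:
    return f"old checkout for {user_id}"

def new_checkout(user_id: str) -> str:
    return f"new checkout for {user_id}"

# The old path stays in place until the flag is at 100%, so a misbehaving
# feature (or a capacity emergency) is handled by turning the number down,
# not by shipping new code.
def checkout(user_id: str) -> str:
    if is_enabled("new_checkout_flow", user_id):
        return new_checkout(user_id)
    return old_checkout(user_id)
```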
Answering my own question: My systems do zero-downtime deployment. Some of my services are managed using ECS and some using custom deployment scripts.
It’s interesting that people mostly focus on the mechanics of launching the new code. To me, the interesting thing about zero-downtime deployment is what happens while the release is in progress, when there will be a mix of the old and new code versions accessing the same resources (databases, microservices, etc.) at the same time.
For example, you don’t want to just drop a previously-mandatory column from a SQL database. Even if your new release no longer references the column, the new code will break if you deploy it before updating the database (inserts fail because nothing populates the still-mandatory column), and the old code will break if you update the database before deploying the code (it still reads and writes the now-missing column). Obviously there are ways to do this kind of thing (roll out the change in small backward-compatible steps, as sketched below), but they’re extra work and easy to get wrong even if you’re using ECS to launch the code. Whereas, if you’re allowed to take downtime, you can do it all in one step without worrying about mixed-version environments.
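Not from the original comment, but a sketch of what those backward-compatible steps typically look like when dropping a NOT NULL column. Table and column names (orders, legacy_status) are made up, and the exact ALTER syntax depends on your database:

```python
# Each entry is its own deploy or migration, run in order; you only move on
# once every running instance is past the previous step. Names are illustrative.
DROP_COLUMN_PLAN = [
    "release 1:  deploy code that no longer READS orders.legacy_status (still writes it)",
    "migration:  ALTER TABLE orders ALTER COLUMN legacy_status DROP NOT NULL",
    "release 2:  deploy code that no longer WRITES orders.legacy_status",
    "migration:  ALTER TABLE orders DROP COLUMN legacy_status",
]
```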
> if you’re allowed to take downtime, you can do it all in one step without worrying about mixed-version environments.
You don’t need to worry about mixed-version environments, but you do need to worry about whether you can roll back your changes without losing data. It’s not as hard, but it seems to get overlooked when there haven’t been any bad deployments lately.
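To make that concrete: one common trick is to keep dual-writing the old representation for a release cycle, so a rollback doesn’t strand data that only the new code understands. A rough sketch, assuming a DB-API style connection and made-up orders/shipping_addresses tables:

```python
import sqlite3  # stand-in; the pattern is the same for Postgres/MySQL, etc.

def save_shipping_address(conn: sqlite3.Connection, order_id: int, addr: dict) -> None:
    """New code path: write the new structured table AND keep the old flat
    column up to date, so rolling back to the previous release loses nothing."""
    conn.execute(
        "INSERT INTO shipping_addresses (order_id, street, city, postcode) "
        "VALUES (?, ?, ?, ?)",
        (order_id, addr["street"], addr["city"], addr["postcode"]),
    )
    # Old representation, still written until this release is known-good;
    # only a later release stops writing it and eventually drops it.
    conn.execute(
        "UPDATE orders SET shipping_address = ? WHERE id = ?",
        (f"{addr['street']}, {addr['city']} {addr['postcode']}", order_id),
    )
    conn.commit()
```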
Zero-downtime deployments, such as blue-green deployments, can get very complex for heavily used apps.
We decided to avoid the complexity with some practical workarounds.
- Most deployments happen at 4am: “develop” branch merges deploy at 4am, while “master” branch merges deploy immediately.
- We force a browser refresh if the front end detects that the back end has had breaking changes, and we attempt to re-populate form field values.
- During database migrations, we send a 503 with a Retry-After header in response to POSTs. Our client code knows to wait that long and try again (sketched below); if the wait is long, the user gets a friendly message that it will retry in X seconds. GETs are handled by an available read replica, if possible.
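Not their actual client, but the retry behaviour described in the last bullet looks roughly like this. It’s shown with Python’s requests for brevity; the browser version would do the same with fetch, and the URL and attempt limit are placeholders:

```python
import time
import requests  # the real client is browser JS; the retry logic is the same

def post_with_retry(url: str, payload: dict, max_attempts: int = 5) -> requests.Response:
    """Retry a POST that the server answers with 503 + Retry-After while a
    database migration is in progress (URL and attempt limit are made up)."""
    for _ in range(max_attempts):
        resp = requests.post(url, json=payload)
        if resp.status_code != 503:
            return resp
        # Assume Retry-After carries seconds; this is where the UI would show
        # "retrying in X seconds" if the delay is long.
        delay = int(resp.headers.get("Retry-After", "5"))
        time.sleep(delay)
    resp.raise_for_status()  # still 503 after all attempts: surface the error
    return resp
```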
> We force a browser refresh if the front end detects that the back end has had breaking changes, and we attempt to re-populate form field values.
Do users not find this disruptive?