I blame this post entirely on my long-time coding partner, as some time ago he blew away my long-held belief that you shouldn't make changes at work on a Friday. This age-old wisdom urges those of us in IT not to change something that could go wrong right before the weekend. Friday is POETS day, so don't make a rod for your own back! That thinking is changing now though, as I hope to unpack in this post.
Traditional thinking was that if something goes wrong when you make a change on a Friday, you're stuck fixing it. Given people generally don't want to be working late on a Friday, or worse over the weekend, the advice was not to make changes on a Friday. I'll be honest, the bit about breaking something and being responsible for fixing it hasn't changed.
Unfortunately, that thinking traps you in the belief that something probably will go wrong simply because it's a Friday. There's no logic behind that when you think about it. Perhaps it's a holdover from the days when computers (and technology in general) were less reliable: things have moved on.
As time has gone on, so has the thinking around deploying changes, and I suspect this was largely driven by rapid software development practices. New thinking says "make your change when you're ready" and "making lots of little changes is fine". In software development, you could almost deploy changes all the time thanks to continuous integration / continuous delivery (CI/CD) practices. When you consider rapid development and automated software testing, every day becomes a good day (or every day becomes a Friday, depending on how confident you're feeling).
We can apply similar thinking to other types of change too, even when there's no CI/CD pipeline involved. There's still a need for careful consideration, but we don't need to be frozen just because it's Friday.
Before making a change there should be planning involved. If you've planned the change, you've considered how to implement it, where it could go wrong, and how to revert it should you need to. Don't forget you need to know the current state of things too.
Relevant people should be consulted and informed about changes, and changes should be tested in a lab or staging environment wherever possible. Given numerous people have read about the change, considered it, and (presumably) agreed with it, you'd hope any problems would have been picked up already.
Your planning phase will vary in length depending on the extent of the change. Plugging in a new item of equipment probably doesn't need any formal planning, just a sense check. Changing the entire email routing for an organisation will require significantly more thought in order to avoid a negative impact on business continuity.
Determining the change's success criteria should also happen at the planning stage, along with defining a test plan. When I was responsible for upgrading the firmware running on our firewalls, my change documentation included a full test plan to make sure things still worked. Some example success criteria from the test plan:
- Can I still browse the Internet from here?
- Are websites published to the Internet still accessible from the public Internet?
- Can people still remote in using Citrix and the VPN?
- Are calls still coming in to the phone system? Are they still going out?
- Am I able to cross different firewall zones, for example from the trusted LAN into the DMZ?
All of those criteria had to pass for the change to be considered a success. If only most of them passed then I had to consider backing out the change - more on that below.
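A checklist like that lends itself to automation. Here's a minimal sketch of the idea; the check functions and their names are hypothetical placeholders, and in practice you'd replace their bodies with real probes (an HTTP request through the firewall, a VPN connection attempt, and so on). The point is the all-or-nothing rule: the change only counts as a success if every check passes.

```python
# Minimal sketch: run success criteria as automated checks.
# The checks below are placeholders - swap in real probes for your change.

def check_outbound_browsing():
    # e.g. fetch a known external URL from inside the network
    return True

def check_published_sites():
    # e.g. request your public websites from outside the network
    return True

def run_checks(checks):
    """Run every check and report each one; succeed only if all pass."""
    results = {name: fn() for name, fn in checks.items()}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(results.values())

checks = {
    "outbound browsing": check_outbound_browsing,
    "published sites reachable": check_published_sites,
}

if run_checks(checks):
    print("Change successful")
else:
    print("Consider backing out, or fixing forward")
```

Even if the real tests stay manual, writing them down in this shape forces you to decide up front what "working" means.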
Let me ask you this - do you perform less effective testing based on the day of the week? Do you perform better tests on a Friday than you would on a Monday? If you're testing at all I'd suggest the answer is no - you always aim for your tests to evaluate your intended change thoroughly.
Budget often limits the amount of testing you can do. I've never had the budget for a complete clone of my production environment in which to test my changes in advance. Certainly it's easier to test things with the advent of virtual machines and cloud computing, where you can create and destroy whole test environments in the blink of an eye, but there will be some changes that you cannot test completely. That's OK, you just have to think about them more.
If you don't trust it on Friday, why would you trust it on Monday?
Stuff happens, and sometimes your change won't go to plan, so it's important to know how you're going to undo your change. Backing out could be a really involved process, or it could simply be a case of telling your version control system (e.g. git) to go back to a previous state.
Before you can revert your change, you need to be in possession of a key piece of information: what was it like before? Without that you've no way to know that you've reverted completely.
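One way to capture "what it was like before" is to snapshot the state you're about to touch. As a minimal sketch, assuming your change touches plain config files (the path in the usage comment is made up): record a hash of each file before the change, then compare after a revert to confirm you're genuinely back where you started.

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 hash of each file, capturing the pre-change state."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def verify_revert(before, paths):
    """Return True only if every file matches its pre-change hash."""
    return snapshot(paths) == before

# Usage (hypothetical path - substitute whatever your change touches):
# before = snapshot(["/etc/firewall/rules.conf"])
# ... make the change, and later back it out ...
# assert verify_revert(before, ["/etc/firewall/rules.conf"])
```

The same idea applies beyond files: a dump of routing tables or a config export taken before the change gives you something concrete to diff against afterwards.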
Perhaps you're considering backing out because not all of the success criteria have been met. It's important to consider the implications of each failure: it may be better to "fix forward" than to roll back. Time is a factor here, and if the "broken" bit can be fixed quickly and doesn't have a huge impact on the business, then I'd be very tempted to fix forward. Change windows can be hard to come by.
If you do need to back out, you can follow the "how to revert this change" section of your plan. You did plan, right?
What's the worst that can happen?
Assuming you've planned and tested thoroughly, the worst that can happen is that you have to revert the change. Admittedly I'm oversimplifying - a change going badly wrong could result in the deletion of all company data - but even then, reverting that change involves restoring from backup.
"Push" Off Early, Tomorrow's Saturday