By Enterprise IT Planet Staff
January 13, 2006
Ah, Patch Tuesday.
To do: Develop patching guidelines by the time the next Patch Tuesday rolls around.
Recent WMF drama aside, the arrival of any patch is cause for concern. That's because, unlike most home users, companies rely on a broader set of software to conduct business.
To complicate matters, this software ends up 'talking' to other software and/or connecting to databases. With so many datalinks coursing through a company's network, no one wants to upset the balance or undo weeks, if not months, of performance tuning.
At worst, it can lead to an admin's worst nightmare: unplanned downtime.
Frankly, unlike home users, IT workers can't just run an updater, cross their fingers, and hope for the best ("patch and pray," as this ritual is called). It's one thing to inconvenience a user whose P2P program no longer works (bad employee!); it's quite another when a Web or application server crashes.
No one wants to field that call from the CIO. So a plan is required.
AntiOnline members discuss the strategies they rely on to minimize the risk of applying a patch that does more harm than good.
Note: Any opinions expressed below are solely those of the individual posters on the AntiOnline forums.
How do you deal with patching?
HTRegz kicks things off with a question about how to respond to the patch that fixes a TNEF flaw. Wait for successful reports or throw caution to the wind?
I'm looking at the TNEF decoding vulnerability and wondering how to treat it.
This isn't something where I can close a port or filter content (not that I have the ability anyway) while I read the results of others.
I don't have a test environment or even a single test machine. I was barely able to create a test XP machine (we had one extra license).
That leaves me with patch immediately or leave it open and wait to read about others' luck with it.
dynamoo advocates performing a risk analysis to determine a response.
Some of it depends on the seriousness of the flaw and the likelihood of it being exploited, versus the inherent risks of applying the patch.
For example, with the WMF patch I manually tried it out on a W2K and an XP workstation and then rolled it out straight away. In this case it was a serious flaw being actively exploited, and the risk of being hit by a nasty outweighed the risk of the patch screwing something important up. And even if the patch did cause problems, they were only going to be pretty limited.
On the other hand, when it comes to IE patching we are much more careful - exploits via IE tend to hit one machine at a time and can be largely mitigated by anti-virus and anti-spyware apps. IE patches have a tendency to break business-critical applications for us too, so on balance we tend to evaluate those for much longer.
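The weighing dynamoo describes can be sketched as a simple comparison; this is an illustrative toy, not a real methodology, and the factor names and 0-1 scores are assumptions made up for the example:

```python
def patch_now(flaw_severity, exploit_likelihood, patch_break_risk, blast_radius):
    """All inputs are rough 0-1 estimates made by the admin.

    Returns True when the expected harm from waiting (a serious flaw that
    is likely to be exploited) outweighs the expected harm from patching
    (a risky patch that could break widely used applications).
    """
    risk_of_waiting = flaw_severity * exploit_likelihood
    risk_of_patching = patch_break_risk * blast_radius
    return risk_of_waiting > risk_of_patching

# WMF-style case: serious flaw, actively exploited, limited patch fallout.
patch_now(0.9, 0.9, 0.3, 0.2)   # -> True: roll it out straight away
# IE-style case: largely mitigated by AV/anti-spyware, patches often break apps.
patch_now(0.6, 0.3, 0.7, 0.8)   # -> False: evaluate for much longer
```

The point is not the arithmetic but the discipline: score both sides of the trade-off before deciding, rather than treating every "Critical" label the same way.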
Another member discusses a strategy for a large organization where extensive testing just isn't in the cards.
When a new patch came out that was classified as "Critical," the company would just push it out to everyone immediately through SMS, and that included servers. All patches got pushed out; if anything broke, we figured it out afterward. (I can feel all of you cringing, just like I still do.)
Now, since we can't fully replicate a lot of our environment in a lab, we do this instead:
1. Push the patch out to a sample population.
2. If there are no issues within a 24-hour period, push it out to everyone.
3. Push the patch out to the non-critical servers.
4. Test for 24 hours, then hit all the critical servers.
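The staged rollout described above can be sketched in a few lines of Python. This is a hedged illustration only: the `apply_patch` and `healthy` stubs stand in for a real deployment tool (such as the SMS push the poster mentions) and real monitoring, and the 24-hour soak period mirrors the procedure in the post.

```python
import time

SOAK_PERIOD = 24 * 60 * 60  # seconds; the 24-hour observation window


def apply_patch(host):
    """Placeholder for the real deployment step (e.g. an SMS/SCCM push)."""
    print(f"patching {host}")


def healthy(hosts):
    """Placeholder health check; in practice: help-desk tickets, event
    logs, and monitoring alerts for the sample hosts."""
    return True


def staged_rollout(sample, everyone_else, soak=SOAK_PERIOD):
    """Patch a small sample first; only patch the rest if the sample
    shows no issues after the soak period. Returns the patched hosts."""
    patched = []
    for host in sample:            # stage 1: sample population only
        apply_patch(host)
        patched.append(host)
    time.sleep(soak)               # let the sample run for 24 hours
    if not healthy(sample):        # stage 2 gate: any problems? stop here
        raise RuntimeError("sample hosts reported problems; halting rollout")
    for host in everyone_else:     # stage 2: everyone else
        apply_patch(host)
        patched.append(host)
    return patched
```

The same function covers the server workflow by passing the non-critical servers as the sample and the critical servers as the second wave, e.g. `staged_rollout(noncritical_servers, critical_servers)`.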
Discuss your patching procedure here. You do have one, don't you?