
Sysadmins: Step away from the Big Mac. No more Heartbleed-style 2am patch dashes


Reefa


The bad news is that this is only the start. As software vendors move towards a more appliance-based approach, upgrades become that bit more difficult. Companies will end up running tens of appliance VMs, nearly all of them Linux-based. Black boxes, if you will.

Each company may have a different process to update. Some big players demand you redeploy an entire virtual appliance to patch it, making support that bit more time-consuming. Sometimes the updates don’t even work and you have to jump through several dozen hoops to get your data moved on to a new bug-fixed platform.

Patching costs a lot of resources, time and money. How can you do it efficiently and accurately?

Every site and situation is different, but how businesses implement patching depends a lot on the size of the company. As a rule, the bigger a company gets, the more red tape exists and the slower it moves.

This means the costs of rolling out a patch increase significantly due to the overheads incurred, both technical and non-technical. Each progression on this path from small to large environments increases the cost and complexity of patching exponentially. How can administrators manage this issue and costs at the same time?

Surprisingly, one of the bigger issues with larger vendors is the time scale between vulnerability identification and general patch availability. Without naming names, critical patch timescales have been known to stretch into several weeks for some vendors affected by Heartbleed and similar. Unfortunately, bugging them and escalating on a daily basis (assuming you have the clout) only has so much of an impact. They like to take their own sweet time. Once they arrive, what’s the plan?

How not to do it

My first job as a network administrator for a small single site with approximately one hundred users gives some insight into how patching used to happen within smaller companies. Patches were deployed once a month, by hand.

The company was too cheap to buy SMS or other patching infrastructure products, so it cost them a few hundred in overtime once a month (usually) for me to roll round the offices and server room, installing the latest patches whilst stuffing my face with McDonald's finest.

Testing was as simple as trying the desired patches on the IT administrators' machines or low-level servers for a week before rolling them out. Test infrastructure was something only larger companies had. The only other prep required was a catch-all email 48 hours beforehand informing all the users that systems would potentially be unavailable for the best part of the weekend. Change control or contingency plans weren't even a thought beyond a decent back-up. Fortunately, I never got bitten by the “OMG Noooooezzz” patch of death that truly busted a machine.

Doing it properly

The key aspect is forward planning. Patches are going to be needed; they are not optional (for decent sysadmins, anyway). Below is a list of steps that can help admins get on top of the nightmare that is Patch Tuesday (and the other vendors' equivalents, too).

1. Create a good, tested patch process, documenting the how and the where. Document what needs to happen for a patch deployment to be deemed successful, and include any paperwork or representations that need to be made to change meetings. Once the process exists and has been debugged of any issues, it can serve as a template for how to deploy patches in the future. This first step helps ensure uniformity and consistency of patching across the environment, which is desirable.

2. Ask if this patch affects you. This may seem like an obvious question, but not all bugs will affect all users. If the bug is in a service you don't use and don't have installed, there is little point in deploying the patch.
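To make that check concrete, here is a minimal sketch in Python, assuming an RPM-based box (Red Hat, CentOS and the like) and an advisory that names the affected packages. The package names below are placeholders rather than anything from a real advisory.

#!/usr/bin/env python3
"""Rough sketch: is the patch even relevant on this host?

Assumes an RPM-based system (rpm -q available). Package names are placeholders.
"""
import subprocess

ADVISORY_PACKAGES = ["openssl", "openssl-libs"]  # hypothetical advisory contents


def is_installed(package: str) -> bool:
    # rpm -q exits 0 if the package is installed, non-zero otherwise.
    result = subprocess.run(
        ["rpm", "-q", package],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


affected = [pkg for pkg in ADVISORY_PACKAGES if is_installed(pkg)]
if affected:
    print("Patch is relevant here; installed packages:", ", ".join(affected))
else:
    print("None of the advisory packages are installed; this patch can be skipped.")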

Give us the tools...

3. Get the right tools: manual patching doesn't work anymore. Once you scale beyond a handful of users, patch management becomes time-consuming and error-prone unless it is done in a managed way; manually repeating the same patch on machine after machine leaves plenty of room for error. There are many patch management products on the market for administrators to use, and some are even free. The cost of any tool you buy will quickly be offset by the time saved over manual patching. The one unfortunate part of getting the right tools is that if you run more than one type of OS, you may need a second set of patching tools. For example, if you maintain a sizeable Red Hat installation you will need Satellite or something similar to automate the process.
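As a taste of the repetitive work such tooling takes off your hands, here is a rough Python sketch that asks a handful of hosts over SSH whether they have updates pending. It assumes key-based SSH access to RPM-based hosts; the hostnames are placeholders, and a proper product such as Satellite or SCCM does vastly more than this.

#!/usr/bin/env python3
"""Rough sketch: report which hosts have pending updates.

Assumes key-based SSH and RPM-based hosts; hostnames are placeholders.
"""
import subprocess

HOSTS = ["web01.example.com", "web02.example.com", "db01.example.com"]  # placeholders


def pending_updates(host: str) -> str:
    # 'yum check-update' exits 100 when updates are available, 0 when there are none.
    result = subprocess.run(
        ["ssh", host, "yum", "-q", "check-update"],
        capture_output=True, text=True,
    )
    if result.returncode == 100:
        return "updates pending"
    if result.returncode == 0:
        return "up to date"
    return "check failed: " + result.stderr.strip()


for host in HOSTS:
    print(f"{host}: {pending_updates(host)}")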

4. Some machines are special, and hand deployment still has a place. On very critical machines, patching should be supervised or done manually. The same patches should still be applied, just not automatically. To give a quick example, in a virtualised environment there can be many dependencies, such as AD, SSO and similar. If you lose the ability to manage the VM estate from a single pane of glass, you will need to start trying to figure out where the affected machine is. Therefore, apply the patches by hand and ensure the machine comes back clean after a reboot. Only experience of your environment will tell you which machines may require this enhanced management.

5. Test, test and test your patches before deploying. It may sound obvious, but fixing a bad patch can be very problematic and expensive, especially if deskside intervention is required. An ideal testing set-up should include three distinct groups. The first is essentially a patch alpha release, going to knowledgeable IT staff who can cope with a bad patch. The second should comprise selected machines and users covering each of the different builds used within the environment. The last should be akin to the second, but with larger numbers, or indeed whole sites; it depends on the set-up being managed.
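By way of illustration, those three groups can be thought of as deployment rings with a soak period between each. The Python sketch below prints such a schedule; the ring sizes and soak periods are invented numbers for illustration, not a recommendation.

#!/usr/bin/env python3
"""Rough sketch: phased patch rollout in three rings.

Ring membership and soak periods are invented; adjust to the builds and
sites your environment actually contains.
"""
from datetime import date, timedelta

RINGS = [
    {"name": "Ring 0 - IT staff (alpha)",       "hosts": 15,  "soak_days": 3},
    {"name": "Ring 1 - one machine per build",  "hosts": 40,  "soak_days": 7},
    {"name": "Ring 2 - remaining estate/sites", "hosts": 900, "soak_days": 0},
]


def print_schedule(start: date) -> None:
    """Print when each ring gets the patch, allowing each ring's soak time before the next."""
    current = start
    for ring in RINGS:
        print(f"{current}: deploy to {ring['name']} ({ring['hosts']} hosts)")
        current += timedelta(days=ring["soak_days"])


print_schedule(date.today())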

6. The users are people too. Users realise that bugs have to be patched, but it helps if you get them on board early. They would much rather be told about the patch a week beforehand than make plans to work at the weekend only to find they suddenly can't because the network is down. Also, plan these patches for appropriate times: month end is not usually a good patching window for accountants, for example.

The fly in the ointment: appliances

Working in larger environments and ensuring patch compliance gets very interesting when it comes to appliances, both physical and virtual. All major operating systems have some form of patch management rolled into their management tools. However, appliance manufacturers whose products run CentOS or some other Linux variant will often only support their own patch update channels and provide limited support for other management tools, crying “unsupported configuration” when you broach the subject.

This may not be an issue if you have only one or two devices, but what if you have several dozen appliances, with one appliance per virtual host to provide a specific service, for example? Manually logging into each device several times in order to update it gets really boring, really quickly, yet it can be essential. Then what if you have tens of appliances from several vendors? More time wasted. Applying patches globally can quickly become a major logistical nightmare. At present there are not many tools that will patch appliances from several vendors AND are vendor-supported to boot.
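If you do end up scripting it yourself, something like the following Python sketch is about as far as you can safely go: loop over an inventory and run whatever update (or update-check) command each vendor documents. The hostnames and commands here are placeholders, and whether scripting the update keeps you inside a supported configuration is a question for each vendor.

#!/usr/bin/env python3
"""Rough sketch: run each appliance's own documented update command over SSH.

Hostnames and commands are placeholders - substitute whatever each vendor
actually documents and supports.
"""
import subprocess

# Hypothetical inventory: appliance host -> its vendor-documented update command.
APPLIANCES = {
    "backup-appliance.example.com": "appliance-update --check-only",
    "loadbalancer.example.com":     "system update status",
    "monitoring-vm.example.com":    "yum -q check-update",
}

for host, command in APPLIANCES.items():
    print(f"--- {host} ---")
    result = subprocess.run(["ssh", host, command], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip() or f"exit code {result.returncode}")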

To be sure, things are improving with centralised web GUIs to make updating appliances easier, but we are not there yet. Given time and the number of recent Linux security bugs, I think vendors will need to move towards a more integrated patch management model, above and beyond what a lot currently provide.

Patching is one of those activities that is essential to any environment of any size. It is without doubt a cost to companies that provides no extra business edge, but it can't be avoided. Using tools such as SCCM or Red Hat Satellite can help manage the environment and reduce deployment overhead.

That represents only half the story, as the management overhead and business requirements can easily push the cost of a patch up significantly. But it is a cost that has to be shouldered.

Optimising the process to be as efficient as possible is worth the investment in time, because it is a certainty that these steps will be repeated frequently, month after month and beyond.

http://www.theregister.co.uk/2015/03/04/patching_for_sanity/