Tag Archives: Patching

Equifax Breach Drives Home the Importance of Prompt Patching as GDPR Approaches

by Bharat Mistry

No organisation is breach-proof: we all know that the odds are stacked too high in the attackers’ favour. However, by following industry best practices we can make life as difficult as possible for hackers and discourage all but the most determined and well-resourced. That’s why it will dismay many in the industry to learn that Equifax knew about the vulnerability that it claims led to a massive breach at the firm this year all the way back in March. However, it was apparently only fully patched months later, once the damage had been done.

Given the scale of the breach, and the fact the firm could have been hit with fines of over $60m under the forthcoming GDPR regime, this should serve as yet another cautionary tale to IT leaders. Best practice security, including effective patch management, is called “best practice” for a reason. Continue reading

WannaCry & The Reality Of Patching

Mark Nunnikhoven, VP Cloud Research, Trend Micro

The WannaCry ransomware variant of 12-May-2017 has been engineered to take advantage of the most common security challenges facing large organizations today. Starting with a basic phish, this variant uses a recent vulnerability (CVE-2017-0144/MS17-010) to spread unchecked through weaker internal networks, wreaking havoc in large organizations.

The gut reaction from those on the sidelines was, understandably, “Why haven’t they patched their systems?” Like most issues in the digital world, it’s just not that simple. While it’s easy to blame the victims, this ransomware campaign really highlights the fundamental challenges facing defenders.

It’s not the latest zero-day (a patch for MS17-010 was available 59 days before the attack) or a persistent attacker that’s to blame. One of the biggest challenges facing the security community today is effectively communicating cybersecurity within the larger context of the business.
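The 59-day window is simple date arithmetic, using Microsoft's publication date for MS17-010 (14 March 2017) and the start of the WannaCry outbreak (12 May 2017):

```python
from datetime import date

# MS17-010 was published by Microsoft on 14 March 2017;
# the WannaCry outbreak began on 12 May 2017.
patch_released = date(2017, 3, 14)
outbreak = date(2017, 5, 12)

window = (outbreak - patch_released).days
print(f"Patch was available for {window} days before the attack")  # 59 days
```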

A common refrain in the security community is that patching is your first line of defence. Despite this, it’s not uncommon for it to take 100 days or more for organizations to deploy a patch. Why?

It’s complicated. But the reason can be boiled down roughly to the fact that IT is critical to the business. Interruptions are frustrating and costly.

From the user’s perspective, there is a growing frustration with the dreaded “Configuring updates. 25% complete. Do not turn off your computer” screen. The constant barrage of updates is tiring and gets in the way of work. Making matters worse is the unpredictable nature of application behaviour post-patch.

About 10 years ago, “best practices” formed around extensive testing of patches before deploying them. At the time, the primary motivator was patch quality: it wasn’t uncommon for a patch to crash a system. Today, patches occasionally cause these types of issues, but they’re the exception, not the rule.

The biggest challenge now is custom and third party applications that don’t follow recommended coding practices. These applications might rely on undocumented features, unique behaviours, or shortcuts that aren’t officially supported. Patches can change the landscape rendering critical business applications unusable until they too can be patched.

This cycle is why most businesses stick to traditional practices of testing patches, which significantly delays their deployment. Investing in automated testing to reduce deployment time is expensive and a difficult cost to justify given the long list of areas that need attention within the IT infrastructure.

This unrelenting river of patches makes it difficult for organizations to truly evaluate the risks and challenges of deploying critical security patches.

Legacy Weight

The argument around patching assumes, of course, that a patch is actually available to resolve the issue. When it isn’t, you’re dealing with a zero-day. While the threat of zero-days is real, long patch cycles mean the 30-day, the 180-day, and the forever-day vulnerabilities are far more likely to be used in an attack. The Verizon Data Breach Investigations Report consistently highlights how many organizations are breached via exploits of patchable vulnerabilities.

The WannaCry campaign used a vulnerability that was publicly known for 59 days. Unfortunately, we’ll continue to see this vulnerability exploited for weeks—if not months—to come.

Making matters worse, MS17-010 was initially only patched on supported platforms, a position Microsoft has since reversed by issuing a patch for all affected platforms (kudos to them for making that call). While it’s logical to provide patches only for supported platforms, the reality is that the “supported” number is far different from the “deployed” number.

We know that Windows XP, Windows Server 2003, and Windows 8 continue to live on – by some reports accounting for 11.6% of Windows desktops and 17.9% of Windows servers. That’s a lot of vulnerable systems that need to be protected.

While there are third-party security solutions (some from Trend Micro) that can help address the issue, these legacy systems are a weight on forward progress. As a system ages, it’s harder to maintain and poses a greater risk to the organization.

Malware, like the 12-May-2017 WannaCry variant, takes advantage of this fact to maximize the success of its attack…and its potential profits.

Security teams need to help the rest of IT explain to the business the need to invest in updating legacy infrastructure. It’s a hard argument to make successfully: after all, business processes have adapted to these systems and, from a workflow perspective, they are reliable.

The challenge is quantifying the risk they pose (maintenance and security-wise) or at least putting this risk in the proper perspective in order to make an informed business decision.

Critical…For Real

All too frequently, vulnerabilities are flagged as critical: 637 and counting so far in 2017, a faster pace than the 1,057 reported in 2016 (and these numbers cover only remotely exploitable vulnerabilities!). Your organization is not going to be impacted by all of these, but it’s fair to say that you’ll face a decision about a critical vulnerability once a month.

To make the decision to disrupt the business, you’re going to have to evaluate that impact. This is where organizations tend to falter. It’s extremely difficult to boil the decision down to numbers.

In theory, you should take the cost of downtime (when deploying the patch) and compare it to the cost of a breach. Ponemon and IBM put the average cost of a data breach in 2016 at $4 million USD (and, for EU companies, GDPR fines can reach 4% of worldwide turnover). This means you should always patch unless the downtime cost is more than $4 million.

Except that this doesn’t factor in the probability of that breach happening, or the cost of using security controls to mitigate the issue. This is where it gets really complicated and highly individualized.

The debate over how to properly evaluate this decision rages on in the IT community, but for WannaCry specifically, the equation was actually pretty straightforward.

Microsoft issued MS17-010 in March 2017 and flagged it as critical. A month later, a very high-profile and very public data dump contained an exploit for the vulnerabilities patched by MS17-010 that was easy to understand and execute. At this point, a security team can guarantee that their organization will see attacks taking advantage of this vulnerability.

That puts the probability of attack at 100 percent. So unless it’s going to cost $4 million to patch your systems, the patch should be rolled out immediately.
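The back-of-the-envelope comparison above can be written as a simple expected-cost check. The figures below are the article's illustrative numbers (the $4M average breach cost) plus a hypothetical downtime cost, not real estimates:

```python
def should_patch(breach_cost, attack_probability, downtime_cost):
    """Patch when the expected cost of a breach exceeds the cost of downtime."""
    expected_breach_cost = breach_cost * attack_probability
    return downtime_cost < expected_breach_cost

# With a public, weaponised exploit in the wild, the probability of attack
# is effectively 1.0, so patching wins unless downtime somehow costs more
# than the $4M average breach. Downtime cost of $50k is a made-up example.
print(should_patch(4_000_000, 1.0, 50_000))    # True: patch immediately
print(should_patch(4_000_000, 0.001, 50_000))  # False: expected loss is only $4,000
```

The hard part in practice is not this arithmetic but estimating `attack_probability` honestly; WannaCry was the rare case where a public exploit dump pushed that estimate to certainty.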

Un-patchable systems still need to be protected. With WannaCry, all affected systems are patchable now—again, thanks to a generous move by Microsoft. With other malware threats, that’s typically not the case.

This is where mitigations come into play. These mitigations also buy time for patches to be deployed.

WannaCry is a solid example of a new variant that caused significant damage before traditional, signature-based anti-malware scanning could catch it. This is where machine learning models and behavioural analysis running on the endpoint are critical.

These techniques provide continuous and immediate protection for new threats. In the case of WannaCry, systems with this type of endpoint protection were not impacted. After deeper analysis by the security community, traditional controls were able to detect and prevent the latest variant of WannaCry from taking root.

When in place, strong network controls (like intrusion prevention) were able to block WannaCry from spreading indiscriminately throughout corporate networks. This is another argument for microsegmentation within the network.
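WannaCry spread internally over SMB (TCP port 445), so one quick network-hygiene check that complements intrusion prevention and microsegmentation is confirming which hosts expose that port where they shouldn't. A minimal sketch using only the standard library follows; the host list is a hypothetical placeholder, and in practice you would scan your own subnets:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host to check; replace with addresses from your own network.
for host in ["10.0.0.5"]:
    if port_is_open(host, 445):
        print(f"{host} exposes SMB (port 445): verify it is patched or segmented")
```

A dedicated scanner does this better at scale, but even a crude inventory of SMB exposure tells you where a worm like WannaCry could move laterally.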

Finally, phishing emails continue to be the most effective method of malware distribution. 79 percent of all ransomware attacks in 2016 started via phishing. Aggressively scanning emails for threats and implementing strong web gateways are a must.

Protecting Against The Next Threat

WannaCry is a fast moving threat that’s had a significant real-world impact. In the process, it’s exposed fundamental challenges of real-world cybersecurity.

Patching is a critical issue and it needs the entire IT organization working with the rest of the business to be effective. Year after year, the majority of attacks take advantage of patchable vulnerabilities. This means that most cyberattacks are currently preventable.

Rapid patching combined with reasonable security controls for mitigating new and existing threats are the one-two punch your organization needs to reduce its risk of operating in the digital world.

While the problem and solutions are technical in nature, getting the work done starts with communications. There’s no better time to start than now.

Ransomware Server Threat Demands a Virtual Patching Response

by Bharat Mistry

We all know that ransomware is one of the biggest threats facing UK organisations today. You only have to take a look at the headlines to see the havoc it’s wreaking all over the country, and the world. But although the broad message seems to be getting through, Trend Micro research has revealed a troubling lack of awareness when it comes to the details.

As we head towards VMworld Europe in a fortnight it’s worth remembering that only a layered approach to protection offers the best chance of success. That’s because corporate servers are increasingly being singled out by the black hats as vulnerable targets. Continue reading

Layered Protection: The Only Cure for the Ransomware Epidemic

by Raimund Genes

What’s the number one challenge facing CISOs today? It’s not compliance, budgetary concerns, securing cloud computing or even data breaches – as important as all of these issues are. It’s ransomware. Every day there seems to be a new outbreak. The latest is a double-edged attack campaign apparently combining ransomware and DDoS. But while many cybercriminals are keen to exploit your organisation’s weakest point – its users – via web and email channels, some are looking to attack other parts of the IT infrastructure such as the network and servers.

That’s why CISOs need to ensure their organisation implements layered protection covering all possible weak points. It’s the only way to ensure you stand the maximum chance of avoiding ransomware infection. Continue reading