Information System Security (INFOSEC) is a perception. You can follow all the rules and best practices, implement the best technologies, check all of the boxes, and still get hacked. While a positive Security Assessment Report might be appreciated, it shouldn't lead to complacency. In fact, a positive report may create a false sense of security, because all it has really measured is compliance, not effectiveness. This is an era in which public sector entities ignore critical vulnerabilities and are subsequently hacked in the most public and embarrassing ways, and in which even commercial organizations find themselves desperately defending against state-sponsored attacks of unprecedented speed and sophistication.
Your information systems, regardless of the resources expended on security, have critical vulnerabilities that haven’t been discovered yet. Security is a rapidly moving target that occasionally defies the laws of physics or probability. Welcome to 2018 – this is the sobering lesson of Meltdown and Spectre.1
To successfully navigate a hostile cyberspace, organizations need a solid cybersecurity strategy, not a check-the-security-control-box mentality. Consider the complexities of a multi-tenant cloud, or even just a virtualized environment. As soon as tenant number two is introduced, securing both customer tenancies across a common infrastructure and the underlying management systems becomes significantly more complex.
In 2014, while taking one of the first cloud systems through the early Federal Risk and Authorization Management Program (FedRAMP) process 2, assessors from our Third-Party Assessment Organization (3PAO) came in to conduct desktop exercises with the author’s Cloud Operations Team. After a long session, the lead assessor said “I have one last exercise. It’s called the zombie apocalypse, and in a zombie apocalypse anything is possible.”
A zombie apocalypse desktop exercise is an efficient way to test systems in a manner that rapidly identifies architectural and process weaknesses through scenarios that have a near-zero probability of real-life occurrence. It lowers defenders' guard by turning the exercise into something that feels like a game, while simultaneously enhancing creativity and acceptance of incidents too bizarre to ever happen.
For example, zombies might attempt to penetrate systems logically using the latest strategies. If the zombies can't breach the systems logically, they'll physically attack datacenters and, for example, start a fire inside. If they can't physically breach the datacenter, they'll set adjacent buildings on fire, causing a 3-alarm fire that ultimately engulfs the datacenter. They can bring down the main power substation feeding your data center and then flood the fuel pump room so that backup generators, safely installed on the 18th floor, run out of fuel within hours, not days. They can flood the entire data center in less than four minutes, fully submerging it and killing 20 employees. They can even drive an SUV up the grass security berm, launching it over the perimeter fence and parked cars and crashing it directly into the main power transformer, causing two cascading power failures that prevent the chillers from restarting and turn the data center into an overheated oven in a matter of minutes.
Anything is possible in a zombie apocalypse desktop exercise. The problem is, these six scenarios actually happened to large enterprises. No zombies required.
Zombie apocalypse desktop exercises help to generate a cultural shift toward providing and prioritizing effective confidentiality, integrity and availability capabilities through the exploration of a wildly unexpected series of physical or logical events. If such exercises are a cultural bridge too far, organizations should make The Art of Cyber Conflict by Henry J. Sienkiewicz, former CIO and Designated Approval Authority (DAA) of the Defense Information Systems Agency (DISA), required reading. Sienkiewicz presents modern cybersecurity as a conflict, applying timeless strategies from Sun Tzu's The Art of War.
While a zombie apocalypse is an effective way to test real-world information security and instill a powerful cybersecurity culture, its results are developed over time and organizations need help now. The following are security controls that can and should be implemented immediately, with the goal of keeping an organization out of the headlines.
Fundamental security controls could have kept cybersecurity out of the headlines over the past five years.
Lessons learned from major security breaches about the security controls that could have remediated the vulnerabilities exploited remain relevant today. In fact, they’re more relevant today because they represent a roadmap of successful strategies for malicious actors.
By the time a system’s risk has been assessed, the assessment is already outdated.
According to the FedRAMP program, if an information system's vulnerabilities aren't identified at least every thirty days, it's at risk. This is the core of FedRAMP's continuous monitoring program, and it is a necessary improvement over earlier United States government certification and accreditation processes, which assessed systems only once every three years.
“Continuous” within the FedRAMP context means that systems are scanned at least every thirty (30) days to identify vulnerabilities, and that identified vulnerabilities are communicated to the relevant vendors, the responses tracked, interim compensating controls planned (if required), testing completed, and patches or configuration changes applied as needed. FedRAMP continuous monitoring requires critical vulnerabilities to be remediated within thirty (30) days from when they are identified. Moderate vulnerabilities must be remediated within ninety (90) days.
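The remediation windows above translate directly into deadline arithmetic. The sketch below, with illustrative helper names and data shapes (it is not FedRAMP tooling), shows how a security operations team might track whether a finding is inside or outside its window:

```python
from datetime import date, timedelta

# FedRAMP continuous-monitoring remediation windows, per the text:
# criticals within 30 days of identification, moderates within 90.
REMEDIATION_WINDOWS = {"critical": 30, "moderate": 90}

def remediation_deadline(severity: str, identified: date) -> date:
    """Date by which a finding of this severity must be remediated."""
    return identified + timedelta(days=REMEDIATION_WINDOWS[severity.lower()])

def is_overdue(severity: str, identified: date, today: date) -> bool:
    """True once a finding has exceeded its remediation window."""
    return today > remediation_deadline(severity, identified)

# A critical finding identified on 2017-03-08 had to be closed by 2017-04-07.
print(remediation_deadline("critical", date(2017, 3, 8)))  # 2017-04-07
```

Even this trivial calculation is worth automating: a dashboard of findings sorted by days-to-deadline keeps the thirty-day clock visible to everyone, not just the assessors.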
Thirty days is a lifetime
2017 saw the largest data breaches in IT history. The Equifax breach alone resulted in the theft of personally identifiable information (PII) belonging to over 143 million people, and it sent shockwaves through every sector: if Equifax could suffer a loss of this magnitude, anybody could.
The root cause of the breach wasn't a sophisticated state-sponsored attack, a flaw in the technical implementation of their architecture, a Snowden-like insider, or anything worthy of a zombie apocalypse desktop exercise. The breach was due to a known critical vulnerability in the Apache Struts web-application framework, for which a patch was released on March 8, 2017 (CVE-2017-5638). The Equifax breach occurred between May 13th and July 30th, more than 60 days after the release of the patch. Simply patching within a thirty-day timeframe could have prevented the largest breach in history.
Still, 30 days is a lifetime for a high value target since attackers already know where IT systems are, what they’re running, and who the key personnel are. Such information can be acquired through active scanning or by analyzing resumes and LinkedIn profiles of current or past employees that list technologies and skill sets. From this, attackers can derive useful knowledge of the target’s infrastructure. Attackers are just waiting for a vulnerability related to a technology used by a target so they can launch an attack.
Vulnerability scanning and remediation
The Vulnerability Scanning FedRAMP security control (RA-5)3 requires the identification of vulnerabilities vertically across the OSI stack 4 and horizontally across operating systems, web applications, databases and appliances, at regular, periodic intervals.
Identifying, reporting, testing and applying patches for known vulnerabilities isn’t trivial. Identification doesn’t mean that a vendor is going to provide hardware, firmware or software patches in a timely manner. This is especially true with appliances. Vendors may refuse access to their appliance and unapproved access may void the warranty. If a vendor providing a solution as a hardware appliance won’t permit vulnerability scanning with root access, that appliance poses a significant risk to your infrastructure.
An organization’s relationship with vendors in vulnerability remediation should be professional, consistent, persistent and produce results that continuously reduce risk across all IT assets. Additionally, an organization’s IT staff should be on a first name basis with the vendor’s key support personnel, with escalation paths to upper management clearly defined.
It should be made clear to vendors that, at a minimum, on a specified day of each month they can expect to receive a list of identified vulnerabilities, and that they are expected to respond within a specified time period with a plan and schedule for remediating each item on the list. They should also understand that follow-up is expected until a patch is issued. The same expectations should apply to storage and compute vendors as to firewall vendors. Computer and storage vendors often consider management interfaces to be out of band and may respond slowly to issues concerning them. This is unacceptable, as insider threats are responsible for about 43% of all breaches5 — and this is what happened to Sony in 2014.
Thirty days is still too long
For high-value targets, even the thirty-day scan period is inadequate. Monitoring should take place more frequently. Whether that means twice a month, weekly, twice a week, daily, or every time the scanner's vulnerability database is updated depends on the organization's internal capabilities.
Scanning takes time, and if the environment is large, full production system scans can take multiple days to complete. In such cases, a dedicated "scan farm" should be established: the baseline configurations of the servers, application stacks and devices in the production environment are deployed there and can be scanned in a matter of minutes. This provides a near-real-time view of existing vulnerabilities and makes it feasible to scan every time a new vulnerability database update is released.
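The scan-farm trigger can be sketched in a few lines. Everything here is an illustrative stand-in (the baseline names, the feed-fingerprinting approach), not a real scanner API; the point is that a change in the vulnerability database immediately queues every baseline for a fresh scan:

```python
import hashlib

# One representative instance per production baseline lives in the scan farm.
# Names are hypothetical examples.
BASELINES = {
    "web-tier": "hardened-rhel7-apache",
    "db-tier": "hardened-rhel7-postgres",
    "edge": "firewall-appliance-v9",
}

def feed_fingerprint(feed_bytes: bytes) -> str:
    """Fingerprint the scanner's vulnerability database content."""
    return hashlib.sha256(feed_bytes).hexdigest()

def baselines_to_rescan(last_fingerprint: str, feed_bytes: bytes) -> list:
    """Queue every baseline for a fresh scan whenever the feed changes."""
    current = feed_fingerprint(feed_bytes)
    return sorted(BASELINES) if current != last_fingerprint else []
```

Because each baseline is a small, representative image rather than the full production estate, the rescan completes in minutes and the results map back to every production asset built from that baseline.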
Vulnerability remediation goes beyond using scanning tools to identify known common vulnerabilities and exposures (CVEs) and patching requirements. Security operations staff should be actively monitoring many information feeds (dozens or even hundreds), such as news outlets, CERTs (US-CERT, DoD CERT, IC CERT), the Gartner Cyber Incident Response Team, the SANS Computer Incident Response Team, and others.
The broader impact of proper vulnerability scanning
Vulnerability management, when properly implemented, also helps mature other critical security controls, starting with configuration management.
A senior government official responsible for the certification and accreditation (C&A) of some of the first commercial cloud products offered to federal agencies (prior to the official creation of FedRAMP) once said “Any CSP [cloud service provider] who cannot prove that they have mastered Configuration Management has no business delivering service to the U.S. Federal Government”.
Vulnerability scanners and other related systems require strict configuration management (CM) because they have administrative access to every device on the network as they are required to perform fully authenticated scans. Security is therefore paramount and configurations for operating systems, applications and the scanner management interface must be reviewed and managed properly.
All vulnerability scanners can be configured to ignore certain known vulnerabilities and exclude them from reports, typically for vulnerabilities tied to functionality that has been deemed operationally required. Such exceptions should never be allowed in scanning tools. Every vulnerability, even one accepted as operationally required or covered by a compensating control, should appear in every report, every month. Security operations should still verify the status of all vulnerabilities with every vendor, every month, even those the vendor has stated it never intends to remediate.
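The reporting rule above — annotate accepted-risk findings, never filter them — is easy to enforce in the report generator itself. This is an illustrative sketch with made-up field names, not any particular scanner's export format:

```python
# Every finding appears in every monthly report; exceptions are flagged,
# never hidden. Field names ("cve", "accepted", "status") are illustrative.
def monthly_report(findings):
    """Return all findings, marking accepted-risk items instead of dropping them."""
    report = []
    for f in findings:
        entry = dict(f)
        entry["status"] = ("accepted-risk, compensating control in place"
                           if f.get("accepted") else "open")
        report.append(entry)
    return report

findings = [
    {"cve": "CVE-2017-5638", "accepted": False},
    {"cve": "CVE-2016-0800", "accepted": True},
]
print(len(monthly_report(findings)))  # 2: nothing is ever excluded
```

Keeping the suppression logic out of the scanner and in an auditable report step means an accepted risk can never silently disappear from management's view.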
In addition to maturing other controls such as CM, change control, risk assessment, maintenance, penetration testing, contingency plan testing, incident response testing, and security testing and evaluation, vulnerability scanning will also mature the automation of key processes. This is critical, given that attacks are now happening at the speed of automation and artificial intelligence. Information system automation should be positively impacted or improved in about 40 different areas.6
In March of 2014 and again in June of 2015, the U.S. Office of Personnel Management (OPM) suffered breaches that rank among the worst, if not the worst, ever disclosed by the government. As a result, the background investigation records of approximately 21.5 million people7 who had undergone security clearance checks were exposed.
In the aftermath of the breach it was discovered that OPM had not implemented basic security required by Homeland Security Presidential Directive 12 (HSPD-12), requiring assurances that every person granted access to facilities or information systems is the person they claim to be.
The days of castle and moat protection are over. Multi-factor authentication (MFA) is a necessity, not just at a system’s authorization boundary, but also for every asset behind the authorization boundary including operating systems, applications, remote access cards, switches, and firewalls. Insider threats are real, and the number is growing. Consequently, administrative access to vulnerability scanners must have MFA implemented, as these scanners must have root level access to all systems to properly assess vulnerabilities and risk.
The fundamental security controls mentioned throughout this article could have stopped some of the largest breaches in IT history: the OPM breach, the JP Morgan Chase breach (which impacted 76 million households and 7 million small businesses8), the Target breach (impacting up to 110 million people9), the Heartland Payment Systems breach (exposing 134 million credit cards10) and the Equifax breach (exposing the personal information of 143 million consumers11).
Organizations unable to perform cyber zombie apocalypse desktop exercises should, at the least, follow FedRAMP’s guidance and implement a disciplined Vulnerability Scanning and Remediation program and allow it to mature the other key security controls so that the organization doesn’t wind up in the headlines in 2018.
Since 2010, based on lessons learned from the DISA Rapid Access Computing Environment (RACE) project (the DOD's initial foray into highly secure cloud computing), CyLogic has been working to bring the highest level of cloud security to public and private sector organizations.
CyLogic, Inc. produces CyCloud, a FISMA High-compliant, high-performance true cloud platform with the most comprehensive set of cybersecurity capabilities available on the market. CyCloud implements every fundamental security capability noted in this article (and hundreds more) and is also available to U.S. commercial customers to protect critical corporate infrastructure.
The next critical vulnerability is coming. Until it arrives, CyLogic will continue conducting zombie apocalypse desktop exercises to prepare for it. What will your organization be doing?
This article was originally published in United States Cybersecurity Magazine.