The average organization today has multiple cloud solutions in play. Nearly all organizations have, or plan to implement, some sort of cloud-based infrastructure to do things like consolidate and centralize data centers or more efficiently process workflows. Infrastructure-as-a-Service (IaaS) offers the sort of scalability and elasticity that conventional networks were never designed to provide. Now organizations are deploying multiple IaaS environments, sometimes from different cloud providers.

Virtualized networks have now become private clouds, with many of the same advantages and challenges of public cloud environments. And cloud-based software services, such as sales management and off-network storage, are nearly ubiquitous. According to the Fortinet Threat Landscape Report for Q1 2017, organizations now use a median of 62 different cloud applications, which accounts for roughly one third of their applications. Furthermore, most organizations now have sensitive or critical data stored in the cloud using applications and services that IT doesn’t even know about. It’s called Shadow IT, and it’s a growing security concern.

Orchestrating all of these separate cloud environments has become a real challenge for many IT teams, who still have to manage their physical networks, a growing number of endpoint devices, the explosion of Internet of Things (IoT) devices and infrastructures, and increasingly, the connection of previously isolated operational technology (OT) environments to the network and Internet.

The ability to track devices and data in such a constantly shifting environment, to establish and maintain policies, and to ensure a consistent security posture across all of it has already become unmanageable. In addition, in a multi-cloud environment, the various cloud-based services that have been deployed are usually unable to see or talk to each other. We’ve traded a meshed network of highly interactive systems for what is essentially a hub-and-spoke design. And with that come serious security challenges around detecting breaches and malware, sharing and correlating real-time threat intelligence, and coordinating an effective network-wide response.

Fortunately, most organizations are still in the early stages of building out their cloud infrastructures, which means they can still plan for the risks associated with an organically evolving network. This planning stage is critical, because approaching security as an afterthought has largely been responsible for the financial success of the cybercrime industry.

A multi-cloud strategy has to start with intentional design. Organizations need to understand not only why they are adopting a particular set of cloud services but also how data, applications, and workflows will move across and between those services. They also need to clearly articulate where risks exist.

Once a secure baseline has been architected, organizations need to select tools that enhance and secure this dynamic and evolving infrastructure. For many organizations, some or all of the existing security solutions that were deployed to protect their static, physical environments will simply be unable to secure a distributed, multi-cloud environment. At the same time, they need to avoid building a separate security solution for each cloud environment or deploying specialized security tools that operate in isolation. IT resources are already overburdened, and the growing shortage of skilled cybersecurity professionals means that the answer to an expanding network can’t be an increasingly complex security environment. Instead, security solutions need to be selected on the basis of three capabilities: visibility, correlation, and automation.

Visibility
The old adage that you can’t protect what you can’t see is especially true for multi-cloud environments. Virtual machines (VMs) spin up and down on demand. Data moves to wherever it is needed. Workflows are rerouted based on dynamic changes in applications and end-user requirements. From a security perspective, you not only need to be able to see all of these changes but also need some way to apply and maintain policies as things shift around.

Seeing deep into every cloud instance is essential in order to baseline normal traffic, identify anomalous behavior, and track and monitor indicators of compromise. This requires security tools that can do things like follow asymmetric data flows, see and impose policies on new devices, tear down rules that are no longer needed, monitor traffic moving laterally across the network, and immediately adapt as the environment changes.
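As a toy illustration of that baselining, the sketch below keeps a rolling window of one per-host traffic metric and flags samples that fall far outside the learned norm. All names here are hypothetical, not any particular product's API; real visibility tools would ingest flow logs from each cloud rather than raw samples.

```python
from collections import deque
from statistics import mean, stdev


class TrafficBaseline:
    """Rolling baseline for one traffic metric (e.g. bytes/min per VM).

    Hypothetical sketch: keep a window of recent samples and flag
    values that sit far outside the observed norm.
    """

    def __init__(self, window: int = 60, sigmas: float = 3.0) -> None:
        self.samples: deque = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.sigmas * sigma
        self.samples.append(value)
        return anomalous
```

Because the window slides, the baseline continuously re-learns as the environment changes, which is the property the text calls out: the detector adapts instead of alerting forever on a "new normal".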

Correlation
Hand in hand with visibility is the need for correlation between security tools. A shift in policy in one place can have both an impact and unintended consequences elsewhere. To avoid this, security tools need to be able to constantly share what they are seeing, regardless of where they have been deployed, in order to enable the effective orchestration and updating of policies.

Likewise, essential threat intelligence needs to be immediately shared and correlated in order to detect today’s more sophisticated attacks — especially those that have been specifically designed to evade detection. Separate security architectures and devices using isolated management consoles create blind spots. To fill those gaps, IT teams are often required to manually correlate data in order to detect threats, which is why successful breaches often remain undetected for weeks or even months.
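A minimal sketch of what that correlation might look like, assuming each tool can export the indicators of compromise (IPs, file hashes, domains) it has seen — the feed names and the function below are hypothetical, not a real product's API. The point is that an indicator reported independently by several tools crosses an alert threshold that no single feed would:

```python
from collections import defaultdict


def correlate_sightings(feeds, min_tools=2):
    """Flag indicators reported by at least `min_tools` separate tools.

    `feeds` maps a tool name (e.g. "fw", "sandbox", "cloud-waf") to the
    set of indicators of compromise that tool observed. Weak signals
    from isolated consoles become a strong signal when combined.
    """
    seen_by = defaultdict(list)
    for tool, indicators in feeds.items():
        for ioc in indicators:
            seen_by[ioc].append(tool)
    return {ioc: sorted(tools) for ioc, tools in seen_by.items()
            if len(tools) >= min_tools}
```

For example, an IP that the firewall, the sandbox, and the cloud WAF each logged once would surface here, while indicators seen by only one console stay below the threshold — exactly the cross-console visibility that isolated management tools lack.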

Automation
Attacks happen at digital speed. Over the past couple of years, the time between a network breach and the compromise of data and resources has dropped from over half an hour to less than 10 minutes. That window can determine whether your organization avoids a break-in or makes front-page news.

Human intervention is no longer sufficient. In order to close the gap between detection and response, organizations need to implement automation. Where possible, that automation needs to be integrated with things like machine learning and artificial intelligence (AI) to enable autonomous decision making as close to a detected breach as possible.

But automation is much more than simply shutting down an attack where it is detected. Effective automation needs to be a multi-step process that enables the correlation of data and the coordination of devices across the entire distributed network.

When anomalous behavior, attachments, or applications are detected, that information needs to be immediately analyzed using a combination of signatures and sandboxing. Once that analysis produces a threat profile, compromised devices need to be isolated and flagged for remediation. Security alerts need to go to all security devices so they can hunt for other instances of the new threat and lock down the network from the cloud to the core. Forensic tools then need to backtrack the attack in order to determine where the compromise occurred, and close the breach.
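The steps above can be sketched as a single pipeline. Everything in this sketch is a hypothetical stand-in — the stub functions and module-level lists take the place of real firewall, sandbox, and forensics APIs — but the sequence mirrors the text: analyze, profile, isolate, broadcast, then investigate.

```python
SIGNATURE_DB = {"e99a18c4"}          # hypothetical known-bad hashes
QUARANTINED = []                     # hosts isolated for remediation
BLOCKED = []                         # IoCs pushed to other devices


def sandbox_detonate(artifact: bytes) -> str:
    """Stand-in: a real sandbox would safely execute the artifact."""
    return "malicious" if b"evil" in artifact else "benign"


def quarantine(host: str) -> None:
    QUARANTINED.append(host)         # e.g. move the VM to an isolated segment


def respond(event: dict, devices: list) -> dict:
    """Analyze -> profile -> isolate -> broadcast -> forensics."""
    # 1. Check signatures first; unknown artifacts go to the sandbox.
    verdict = ("known-bad" if event["hash"] in SIGNATURE_DB
               else sandbox_detonate(event["artifact"]))
    profile = {"ioc": event["hash"], "verdict": verdict}
    if verdict == "benign":
        return profile
    # 2. Isolate the compromised device and flag it for remediation.
    quarantine(event["host"])
    # 3. Alert every security device to hunt for other instances,
    #    locking down the network from the cloud to the core.
    for push_block in devices:
        push_block(profile["ioc"])
    # 4. Forensic backtracking to the point of compromise starts here
    #    (omitted in this sketch).
    return profile
```

The design choice worth noting is that `respond` never waits on a human: each stage feeds the next automatically, which is what closes the detection-to-response gap the section describes.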

To make this all happen, security tools need to be able to work together as an integrated system that can span and adapt to the network as it evolves. The multi-cloud network is here, and securing it can seem overwhelming. But if proper planning takes place, appropriate policies are developed, and tools are selected for visibility, correlation, and automation, organizations will be able to realize the advantages of the cloud without overwhelming their IT resources or introducing new and unnecessary risk.

This article was originally published on SDxCentral’s website.