
sss friendica

Are Containers More Secure Than VMs?

source: #^https://hubzilla.cyberwald.com/item/malware.news-topic-44142

Stop and think for a moment. How many virtual machines (VMs) do you have running in your production cloud environment? How many containers? Can you easily formulate the percentage split for your workload mix? 50/50? 80/20?

Here’s a better question: How secure do you think your environment is, based on that mix? Are virtual machines inherently more secure than containers, or is the opposite true? Are you running your container workloads on a service like AWS Fargate, or do they run on virtual machines themselves?

As it turns out, there’s no hard-and-fast answer to how your workload mix affects your environment’s security. The immutable nature of containers offers more consistency throughout the development lifecycle, whereas VMs are dynamic and often subject to configuration drift. Containers might offer their own unique technical challenges when it comes to security, but VMs present a broad attack surface. In this article, we’ll explore some of the key security advantages and disadvantages that each platform offers.

Virtualization



Virtual machines, obviously, are a product of virtualization technology. What may be less obvious is that containers are also enabled by the same technological paradigm. How are they different, and is one necessarily more or less secure than the other?

If you’ve been in IT or engineering roles the last few years, you’ve likely had some experience with virtualization. If not, this post from opensource.com does an excellent job of laying out the basics of how virtualization and virtual machines work. It all starts with the hypervisor, the brain of a virtualization platform. As detailed in the opensource.com post, hypervisors typically run either like an operating system, installed directly to the hardware, or more like an installed application, on top of an existing operating system.

If it’s not immediately obvious, the hypervisor acts as the metaphorical “keys to the kingdom,” providing a high-value target for threat actors. If it is compromised, it puts every workload and datastore on the physical server at risk as well. For any enterprise, this would be a high-severity compromise. For infrastructure-as-a-service providers, with multiple customer tenants per machine, the implications are catastrophic. Every individual VM also increases the attack surface, each one acting as an independent system, with an OS, open network ports, and a wide breadth of dependencies and installed packages. Despite the obvious need to secure such a critical piece of infrastructure, a cursory search for CVEs related to hypervisors reveals they are still prone to vulnerabilities.

In contrast, containers are bound to a single host, virtual or physical. They virtualize a subset of the underlying OS or kernel, providing a limited runtime environment for what is typically a single binary or executable. One of the most popular containerization engines, Docker, provides some helpful illustrations of the model and how it compares to VMs. Each container runs as an isolated process, but shares the host kernel with other containers.

In the case of containers, the Docker engine acts as a hypervisor of sorts, running, for example, as a daemon on a Linux node. If this is compromised in any way, the host and all other containers are potentially at risk. Containers themselves also provide a potential avenue of attack, since they could exploit the subset of privileged access they have to low-level system calls and subsystems. Consider the recently discovered Doki malware, an attack that takes advantage of misconfigured, publicly exposed Docker APIs. Using container escapes, the malware gains control of the host, and ultimately, the other containers. Docker provides fairly extensive documentation regarding the security of the daemon, containers, and other aspects of the ecosystem. One major caveat described in that documentation is that security and isolation features are enabled by default with new containers, but this is not the most secure profile possible and may offer incomplete or imperfect isolation. Major cloud providers such as AWS are also introducing technologies such as Firecracker MicroVM to help improve isolation and mitigate potential attacks.
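The daemon-level and container-escape risks described above can be narrowed with a least-privilege run profile. A minimal sketch using standard Docker CLI flags; the image name `my-app:latest` is a placeholder, not a real image:

```shell
# Drop all Linux capabilities, block setuid-based privilege escalation,
# keep the root filesystem read-only, and cap the process count
# (a simple fork-bomb mitigation). "my-app:latest" is hypothetical.
docker run -d \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  --pids-limit 100 \
  my-app:latest
```

None of this helps if the daemon API itself is exposed unauthenticated over TCP, which is exactly the misconfiguration Doki exploits; remote daemon access should be disabled or protected with mutual TLS.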



Networking



Once upon a time, networking for a web-facing enterprise implied a significant investment in physical infrastructure and support. With the advent of the cloud, complex network architecture can be created in a few API calls. Despite the paradigm shift, how resources actually use this network has not changed much.

The typical virtual machine utilizes network space in much the same way as traditional hardware nodes do, albeit with a couple of wrinkles. VMs often have two or more modes that define how networking behaves. In bridged mode, a VM is transparently allocated logical space on one of the host network interface cards (NICs), and it receives its own IP address and DHCP lease. To anything else on the network, that VM appears as an independent resource, with the underlying hypervisor acting transparently. In this instance, ports and interfaces are typically secured from within the VM operating system. Most cloud providers offer additional services to act as firewalls and filtering devices as needed. In Network Address Translation (NAT) mode, VMs share the external IP of their host, and outwardly it appears as if all traffic originates from the host node. In this mode, securing and managing ingress/egress on the host's network interface(s) requires extra care.
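The two modes can be seen concretely in a desktop hypervisor. A sketch using the VirtualBox CLI, with `app-vm` and `eth0` as placeholder VM and adapter names:

```shell
# Bridged mode: the guest gets its own IP address and DHCP lease on the LAN,
# attached through the host adapter eth0.
VBoxManage modifyvm "app-vm" --nic1 bridged --bridgeadapter1 eth0

# NAT mode: guest traffic appears to originate from the host's address, so
# inbound access needs an explicit forwarding rule (host 2222 -> guest 22).
VBoxManage modifyvm "app-vm" --nic1 nat
VBoxManage modifyvm "app-vm" --natpf1 "ssh,tcp,,2222,,22"
```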

In the container model, again using Docker as the base example, there are a multitude of networking options, owing to the different architectures available for orchestration and deployment. This article won't dive too deep into every possible option, but will instead focus on the default bridge mode. With Docker, the daemon maintains virtual networks for containers to connect to, including the default bridge network. Unless otherwise specified, all new containers connect to this network. Once a container starts, it receives its own IP address, gateway, and subnet, just as a VM might. However, the bridge network won't necessarily be in the same subnet or address space as the host itself, and without additional configuration you get only a basic subset of the possible options. This is where the added complexity can become a liability as much as a feature: Docker's own documentation acknowledges that the default bridge mode is inferior and less secure than user-defined networks. Unless you invest the time to understand that complexity, the default configuration will not provide the best security.
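The practical mitigation the Docker documentation points toward is user-defined bridge networks, which isolate groups of containers from one another. A sketch with placeholder network and image names:

```shell
# Create an isolated user-defined bridge instead of relying on the default.
docker network create --driver bridge app-net

# Containers attached here can resolve each other by name, but containers
# on other bridges cannot reach them without explicit configuration.
# "my-web:latest" and "my-db:latest" are hypothetical images.
docker run -d --network app-net --name web my-web:latest
docker run -d --network app-net --name db  my-db:latest

# Inspect the subnet and gateway the daemon allocated for this network.
docker network inspect app-net
```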

Develop, Deploy, and Run



To get a complete picture of the security implications for each platform, you need to evaluate the user experience of the engineering and ops teams. What are the added complexities of developing for a container engine? Does a vast fleet of VMs mean a lot of toil work for ops, patching and securing hundreds (or thousands) of servers? A smoother experience for these teams will make it that much easier to provide a secure environment for your applications and data.

Virtual machines provide the more traditional experience in terms of development, deployment, and ultimately, the production runtime environment. As previously discussed, VMs are simply a logical, virtual representation of the classic bare metal server. In a typical enterprise, VMs are managed through a combination of configuration management software and infrastructure as code. Unfortunately, because a VM is akin to a traditional server, the same traditional operational problems tend to surface. Each VM has a full complement of network ports, services, daemons, user accounts, and packages that must be carefully managed. Over time, patching and configuration changes will introduce drift into the environment, and production runtime will no longer align with development environments. Even if the production workload is not active, the VM will still need to be kept at maximum security posture.

In contrast, containers typically are bound to a single binary, and last only for the lifetime of the underlying process. Containers are also typically immutable, so there is a reasonable expectation that your runtime environments will be nearly identical to the development workloads. However, at each stage of the software lifecycle, containers add additional complexity and wrinkles. During development, engineers must be careful to ensure that the base image they utilize comes from a trusted source. Docker Hub, one of the largest public sources for Docker images, has encountered compromised images in past incidents. DevOps teams often have to spend additional cycles deploying internal image hosting architecture in order to provide a secure source for development teams. The runtime environment for containers also brings additional complexity, as additional configuration and tools often must be deployed to provide observability for containers and the underlying engine.
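Two common mitigations for the untrusted-base-image problem mentioned above are signature verification and digest pinning. A sketch; the digest is a deliberate placeholder, not a real value:

```shell
# Enforce signature verification on pull/push (Docker Content Trust).
export DOCKER_CONTENT_TRUST=1

# Pin a base image to an immutable content digest instead of a mutable tag,
# so a tampered re-tag of "latest" cannot slip in. <digest> is a placeholder.
docker pull alpine@sha256:<digest>
```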

Secure Everything



So should you choose one platform or the other, purely based on security? Ultimately, no. Modern cloud environments will almost always utilize a mixture of container- and VM-based workloads. Common architecture patterns often involve container-based microservices, with stateful services like databases housed on more traditional VM-based infrastructure. A successful security story will necessarily involve taking steps to secure both types of workloads. Containers have enabled new ways to design and scale highly available digital infrastructure, and it would be difficult to justify ignoring the business value they provide. Conversely, it would be foolish to focus entirely on the latest and greatest, without acknowledging the utility of the more traditional VM platform.

Solutions like Intezer Protect offer runtime security for container and VM workloads across the cloud. A key aspect of a successful DevSecOps effort is being able to integrate a low-friction security solution with existing processes. Having to depend on multiple tools or services to secure a mixed-workload environment will add unnecessary complexity and operational toil to your environment. Protect offers a one-stop solution for runtime cloud-native security.

Organizations should apply a pragmatic, data-driven approach to security, analyzing the pros and cons of both platforms and integrating them where it provides the most value. Too often, security efforts focus too strongly on implementing solutions for one type of workload, when the ecosystem is in fact a mix. It doesn’t have to be an either/or proposition, since modern runtime security solutions were designed with the cloud as a first-class environment, aiming to secure workloads across any type of compute resource an organization might depend on. With a robust security solution, organizations can take full advantage of both containers and virtual machines.




The post Are Containers More Secure Than VMs? appeared first on Intezer.

Article Link: https://www.intezer.com/blog/cloud-security/are-containers-more-secure-than-vms/

Iron Bug friendica (via ActivityPub)
the whole idea is crazy. people write buggy and vulnerable code first and then wrap that crap in fat wrappers just because they're not sure what they're running on their servers. this is madness and the rot of software development.
Iron Bug friendica (via ActivityPub)
I would call this totalblow, as opposed to suckless.
sss friendica
@Iron Bug unfortunately, nowadays the great set of features provided by virtualization is mostly misused. from my point of view, virtualization is great for a few things:
1. if you need to run an OS different from the host OS with the smallest possible speed overhead.
2. as for containers, they provide the possibility to use the same OS kernel with different environments. let's forget about security for a minute: a different environment may be useful to avoid conflicts with host software/libraries, which is also great.

and nowadays most users use containers and VMs only for a false sense of security. but still, virtualized software does provide some additional security.

in terms of security, I prefer sandboxes, as they have much lower overhead and are designed for this exact purpose: security )
Iron Bug friendica (via ActivityPub)
>if you need to run os different from host os
this means you picked the wrong OS for your server. it shouldn't be like that. this is nonsense.

>different environment may be useful to avoid conflict's with host software/libraries which is also great
this shouldn't be so either. the very idea of a Linux distribution is a stable, checked set of libraries that work together and support a certain ABI. but no, they drag the terrible windoze DLL hell to Linux. and this is something I would kill for.
sss friendica
@Iron Bug that's a narrow-minded point of view.

I am developing software for various platforms, and doing so mostly inside virtual machines.

some of these tasks are possible to implement on the host system, though they would take much more time; some are not possible at all. for example, my current project works on a custom FreeBSD kernel only, and no, I do not want FreeBSD as the host OS on my server, for a lot of reasons.
Iron Bug friendica (via ActivityPub)
this is just erroneous use of an OS.
I'm an educated systems programmer; I know why OSes exist, how they utilize resources, and why it is important to run native code for every OS and every CPU.
the hell that people have created from software nowadays only makes me facepalm.
Iron Bug friendica (via ActivityPub)
and all these VMs and containers are utter facepalm material. this is a shame of software development.
sss friendica
@Iron Bug so, are you suggesting buying dedicated hardware for each target platform for which I need to develop and test software?
Iron Bug friendica (via ActivityPub)
no, I suggest writing proper software and compiling it for the target system and hardware specifications.
Iron Bug friendica (via ActivityPub)
software is NOT meant to be easy or comfortable for the developer. software should be fast and optimized for a certain server.
sss friendica
@Iron Bug this does not contradict my previous statement.

one of the real purposes of VMs' existence is development for other archs/OS kernels, and they do it well. I do not see any reason not to use them for this task, and I completely disagree with you on this topic.

but I completely agree with your point that it is, to put it softly, not the best solution for increasing security.
Iron Bug friendica (via ActivityPub)
VMs are acceptable in development, just for prototyping and initial code debugging (not final, though, and not for performance testing). but NOT in production.
sss friendica
@Iron Bug that is just what I am saying. looks like we misunderstood each other.