Welcome to the first in a series of four articles discussing the containerization and cloud native movement led by Docker, Google and a burgeoning ecosystem of traditional and new players. In this series we venture to define and discuss this emerging and exciting space in an effort to help organizations better navigate it. This first article focuses on defining “cloud native” and some of its related moving parts; future articles will explore the important layers of networking, storage and container orchestration, and their related challenges, to give you a more complete view. But first, let’s start at the beginning.
In the beginning there were FreeBSD and Solaris, which laid the groundwork for what are now considered modern containerization technologies with FreeBSD Jails and Solaris Zones, respectively. Google helped bring containers to Linux by adding cgroups to the Linux kernel; combined with namespaces and chroot, the technical foundation was in place. Docker then made containers far more accessible by creating an easy workflow around container images and focusing on the developer experience.
Probably the best way to define the space is by posing some fundamental questions and getting guest experts to answer them. (See how I offload work to people who are more qualified!?)
What is a Container?
Guest Contributor: Cameron Brunner, Chief Architect, Navops
Containers allow applications to be moved reliably from one computing environment to another: from a developer’s laptop, to the QA environment, to production on-premises or in the cloud. The application’s software stack dependencies, such as the operating system and other software components and libraries, can largely be embedded inside the container, allowing it to run decoupled from the details of the underlying IT environment. Containers were originally designed to provide isolation between applications running on an operating system. They provide a combination of resource controls and boundary separation that helps isolate the execution of code within the container from other activities and other containers running on the same machine. Containers achieve this isolation by leveraging operating system functions such as cgroups and namespaces.
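To make those building blocks a little more concrete, here is a minimal sketch, in Go, of launching a process into fresh Linux namespaces, the same kernel primitive that container runtimes build on. This is an illustration rather than any runtime’s actual code: it assumes a Linux host, typically root privileges, and that /bin/sh exists.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in new UTS, PID and mount namespaces. Inside,
	// changing the hostname will not affect the host, and the shell
	// sees itself as PID 1, just as a containerized process does.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("failed to start namespaced shell: %v", err)
	}
}
```

Real runtimes add cgroups for resource limits and a chroot/pivot_root into an image’s filesystem on top of this, but the underlying kernel primitives are the same.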
What is Docker?
Docker is both a company and a commercial/open source implementation of containers. Containers existed long before Docker, with early implementations like FreeBSD Jails (2000) and Solaris Zones (2004), and Docker has done a fabulous job of making containers useful for the masses by greatly simplifying the creation and execution process. While Docker is quickly becoming the de facto standard container format, the company has made further moves toward openness and collaboration by creating the Open Container Initiative (OCI) under the Linux Foundation. OCI involves a number of industry participants and is working toward industry standards for container formats and runtimes. (See www.opencontainers.org/)
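To give a feel for that simplified workflow, the sketch below shells out to the docker CLI from Go to pull a public image and run a throwaway container. It assumes Docker is installed and its daemon is running; the image and command are arbitrary examples, not anything Docker prescribes.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output to the terminal.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", name, args, err)
	}
}

func main() {
	// Pull a small public image, then run a one-off container that
	// is cleaned up automatically (--rm) when its process exits.
	run("docker", "pull", "alpine:latest")
	run("docker", "run", "--rm", "alpine:latest", "echo", "hello from a container")
}
```

Those same two commands typed at a shell are the entire loop a developer needs to go from image to running container, which is a large part of Docker’s appeal.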
So what is Cloud Native Computing?
Guest Contributor: Joe Beda, founder and contributor to both Google Compute Engine and Kubernetes
Cloud Native is one of many new ways of thinking about building and managing applications at scale. At its root, Cloud Native is structuring teams, culture and technology to utilize automation and architectures to manage complexity and unlock velocity.
While containers and container management are often part of “Cloud Native” thinking, organizations such as Netflix have famously applied this thinking with VMs and VM images. In addition, you don’t have to run in the cloud to start realizing benefits from this shift in thinking: applications and teams can become more manageable even when deployed on-premises.
There are no hard and fast rules for what Cloud Native is. However, there are some themes that are emerging.
- DevOps and Ops automation: Engineers wearing the application developer hat play an active role in ensuring that applications can run reliably in production. Similarly, those fulfilling the operations role make sure that operational experience feeds back into development. Automation is key for managing lots of moving pieces.
- Containers: Containers provide a convenient way to create a deployable build artifact that can be tested and validated, which makes deployments predictable.
- Compute Clusters: An API-driven compute cluster and scheduling system allows a small number of engineers to manage a large number of workloads. Beyond that, it allows those workloads to be efficiently bin-packed onto nodes in order to drive up utilization rates. Finally, a well-run cluster reduces the operations burden on application teams.
- Microservices: Microservices split applications into smaller deployable units so that development teams can be fast and nimble. These ideas aren’t new, necessarily, but they are being applied in concert with tools that enable scalable management. We’ll talk more about this below.
- Deep Visibility: Cloud Native implies deeper insight into how services are running. Distributed tracing, log collection and indexing, and deep application monitoring all help shine a light on what is actually happening inside an application (a minimal sketch follows this list).
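As a small, hedged illustration of that visibility theme, here is a sketch in Go of a service that exposes a liveness endpoint and logs the method, path and latency of every request. The port and endpoint path are assumptions for the example, and a real deployment would layer dedicated tracing and metrics libraries on top of this pattern.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// withLogging wraps a handler so every request emits a log line with
// method, path and latency, a tiny stand-in for real tracing and
// metrics pipelines.
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("method=%s path=%s latency=%s", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	// A liveness endpoint that orchestrators and monitors can poll.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withLogging(mux)))
}
```

Monitoring systems can poll /healthz for liveness, while the per-request log lines become raw material for indexing, dashboards and alerting.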
What is a microservice-based architecture?
Guest Contributor: Joe Beda, founder and contributor to both Google Compute Engine and Kubernetes
Microservices are a new name for a concept that has been around for a very long time. Basically, it is a way to break up a large application into smaller pieces so that they can be developed and managed independently. Let’s look at some of the key aspects here:
- Strong and clear interfaces. Tight coupling between services must be avoided. Documented and versioned interfaces help to solidify that contract and retain a certain degree of freedom for both the consumers and producers of these services.
- Independently Deployed and Managed. It should be possible for a single microservice to be updated without synchronizing with all of the other services. It is also desirable to be able to roll back a version of a microservice easily. This means the binaries that are deployed must be forward and backward compatible both in terms of API and any data schemas. This can test the cooperation and communication mechanisms between the appropriate ops and dev teams.
- Resilience built in. Microservices should be built and tested to be independently resilient. Code that consumes a service should strive to continue working and do something reasonable in the event that the consumed service is down or misbehaving. Similarly, any service that is offered should have some defenses against unanticipated load and bad input (see the sketch after this list).
- Microservices are more about people than technology. Small teams are more nimble. Jeff Bezos is famous for suggesting keeping meetings and teams small enough so that they can be fed with 2 pizzas. By structuring a big project as a series of smaller teams and then getting out of the way, those teams can mind meld and own that part of the project.
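To ground the resilience point, here is a minimal sketch, in Go, of a consumer that calls a downstream service with a short timeout and degrades gracefully to a default value when that service is down or slow. The service URL, endpoint version and fallback value are hypothetical, purely for illustration.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchGreeting calls a (hypothetical) downstream service. The short
// client timeout keeps a slow dependency from stalling this service.
func fetchGreeting(client *http.Client, url string) (string, error) {
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	greeting, err := fetchGreeting(client, "http://greeting-service.local/v1/greeting")
	if err != nil {
		// Degrade gracefully instead of failing the whole request path.
		greeting = "hello (cached default)"
	}
	fmt.Println(greeting)
}
```

The versioned /v1/ path in the hypothetical URL also nods to the first point above: consumers pin to a documented interface version, so producers can roll out a /v2/ independently.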
We look forward to the next articles where we’ll discuss some of the important layers of the cloud native stack.
Rob Lalonde is a VP and General Manager of Navops. He is an active participant in various open source foundations including the Cloud Native Computing Foundation (CNCF), the Open Container Initiative (OCI) and the Linux Foundation. Rob has held executive positions in multiple, successful high tech companies and startups. He has completed MBA studies at York University's Schulich School of Business and holds a degree in computer science from Laurentian University.