Software Containers: What, Why, Who, and When?
November 20, 2018
I know that many of you reading this are in the transportation and logistics industry, and when you saw “containers” you probably thought “freight”. Hopefully this won’t be a disappointment, as we’ll be introducing and describing software containers instead. But there are analogies from freight containers that apply to software containers – more on that later.

If your software development team hasn’t already taken advantage of containers, there is a good chance you are at least evaluating how they can be used. I’ll introduce software containers at a high level, specifically discussing the Docker container engine on Windows, and the reasons Docker and containerized applications are being adopted at such a rapid pace.

What are containers?
Software containers are lightweight, standalone, executable packages of software that include everything needed to run an application – the application, runtime environment, system tools, system libraries, file system, etc. Unlike virtual machines, containers usually share the host operating system kernel with other containers, which is key to their increased efficiency over virtual machines.
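To make this concrete, here is a minimal sketch of a Dockerfile, the recipe from which a container image is built. The base image tag, paths, and application name below are illustrative only:

    # Start from a base image that supplies the operating system layer.
    FROM mcr.microsoft.com/windows/nanoserver:1809
    # Copy the pre-built application into the image's virtual file system.
    WORKDIR /app
    COPY ./publish .
    # Define what runs when a container is started from this image.
    ENTRYPOINT ["MyApp.exe"]

Building this file with docker build produces an image containing the application and everything it needs; docker run then starts an isolated container from that image.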

What is Docker?
Docker is a container engine that makes containers easier and safer to use. Docker container usage has grown at an incredible rate. In the past 5 years, approximately 3.5 million applications have been placed in Docker containers and 37 billion containerized applications have been downloaded. The annual revenue from the application container market is expected to quadruple, from $749 million in 2016 to more than $3.4 billion in 2021.

What are the benefits of containerization?
Consistency. Docker’s promise and key value:
“Containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.”

Isolation. Scheduling resource usage hasn’t typically been a design concern for developers, because operating system kernels have long provided virtual isolation of memory and CPU. This changed somewhat as multi-core processors became common and developers needed to design for synchronization of resources between threads, for example.

Most developers don’t have to worry much about how the operating system kernel orchestrates access to CPU, memory, and networking among applications on the same machine. However, operating systems have not provided the same scheduling or isolation for other resources: an application must follow best practices for sharing the file system and network ports, and on Windows there are important rules for sharing resources such as the registry and the .NET Global Assembly Cache. Containerization engines like Docker provide virtual isolation for those resources as well.
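For example, two containerized web servers can each listen on port 80 inside their own containers without conflicting, because Docker maps each container’s port to a different host port. A small sketch (the image name is a placeholder):

    # Both containers bind port 80 internally; Docker maps them to
    # different host ports, so they never collide.
    docker run -d --name web1 -p 8080:80 my-web-image
    docker run -d --name web2 -p 8081:80 my-web-image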

Security. Increased isolation naturally reduces security risks. With Docker on Windows, an additional isolation mode is available that provides greater security, reducing the possibility that a containerized application could reach the container host through a flaw in the shared kernel.

“Windows Server Containers” isolation, or simply “process isolation”, is the default mode for Docker on Windows Server. A second, more secure mode, “Hyper-V” isolation, takes advantage of the Windows hypervisor to provide a separate kernel for each container. The trade-off for this increased isolation is higher resource usage, slower startup times, etc.
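The isolation mode can be selected per container at run time using Docker’s --isolation flag (the image tag here is an example):

    # Windows Server Containers: the container shares the host kernel.
    docker run --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello
    # Hyper-V isolation: the hypervisor provides a separate kernel per container.
    docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello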

Smaller. Container images have a smaller footprint than virtual machines because they include only the minimum requirements for the application and can virtually share resources with both the host and other containers. A smaller operating system requires fewer updates (reducing downtime and the risk of breakage) and makes startup faster.

Control. Container images can be versioned, shared, and archived. Having this level of control over the complete application environment is one key to DevOps success. It also allows an application vendor to apply and test operating system updates to containers alongside their applications before making them available.
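In practice, this control looks much like version control for environments. A sketch, with a hypothetical registry and version number:

    # Build and tag a versioned image, publish it to a registry, and archive it.
    docker build -t registry.example.com/myapp:2.3.1 .
    docker push registry.example.com/myapp:2.3.1
    docker save -o myapp-2.3.1.tar registry.example.com/myapp:2.3.1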

Why Docker?
Microsoft, Amazon, Google, and Red Hat are among the numerous software companies that have embraced Docker and are working closely with Docker to standardize containerization. The development of Microsoft Nano Server and .NET Core is a direct result of Microsoft’s desire to provide a lightweight kernel and a lightweight application framework suitable for Docker images.

Platform independence. Both Windows and Linux containers are supported by the Docker engines that run on Windows Server 2016, Windows Server 2019, and Windows 10. Docker on Windows is able to run Linux containers by taking advantage of Hyper-V virtualization.
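As a rough sketch, with the Docker engine switched to the appropriate container mode, the same CLI runs either kind of image (the tags are examples):

    # A Windows-based container...
    docker run --rm mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo Hello from Windows
    # ...and a Linux-based container, running on Windows via Hyper-V virtualization.
    docker run --rm alpine echo Hello from Linux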

Licensing. Windows Server Containers (aka “process isolation”) do not require additional operating system licenses for each container, which allows increased density at no additional cost. Containers using Hyper-V isolation, however, require the same number of licenses as virtual machines – a trade-off to consider.

When should containers be used instead of virtual machines?
Ben Armstrong, a Principal Program Manager Lead at Microsoft, gives this general advice for when to use containers versus virtual machines:

  1. If you can run a workload in a container, you should.

  2. Containers require applications to be “headless”, i.e., no GUI applications.

  3. Microsoft only supports the latest versions of Windows in containers (Windows 10 and Server 2016/2019).

  4. Some Windows services aren’t (yet) available in containers.

How is ConnectShip | iShip using software containers?
We are experimenting with containers for internal build processes and as an additional means of delivery for our current on-premises Toolkit product and for future products based on .NET Core. We are also exploring how we can leverage containers behind our Progistics Toolkit Cloud product to make DevOps processes and deployments more reliable and our physical resource usage more efficient and cost-effective.

Who can benefit?
Developers


  • Containers allow developers to deliver and control the entire application environment. This removes restrictions typically imposed on applications that share their environment with other applications.

  • A containerized application no longer has to follow standard guidelines for creating application files and directories. Each application has a virtual file system within its container, shielded from applications on the host and in other containers.

  • Containers support continuous integration/continuous deployment (CI/CD) processes by allowing build environments themselves to be declared and containerized, making builds deterministic and consistent (see the sketch below).
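A hedged sketch of what a declarative build environment can look like, using a multi-stage Dockerfile; the image tags and application name are examples only. The first stage declares the complete build toolchain, and the final image contains only the runtime and the build output:

    # Build stage: the build environment itself is declared and containerized.
    FROM microsoft/dotnet:2.1-sdk AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /out

    # Final stage: a small runtime-only image holding just the build output.
    FROM microsoft/dotnet:2.1-aspnetcore-runtime
    WORKDIR /app
    COPY --from=build /out .
    ENTRYPOINT ["dotnet", "MyApp.dll"]

Because every CI run builds inside the same declared image, the build environment cannot drift between machines.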


Quality Assurance


  • Containers place the responsibility on developers to declare and build a complete environment for QA testing. This guarantees prerequisites are met and makes it easy to return the environment to its initial state (sketched below).
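A minimal sketch of that workflow (the image and container names are illustrative):

    docker run -d --name qa-env myapp:2.3.1   # start a fresh, known-good environment
    # ... run the test suite against the container ...
    docker rm -f qa-env                       # discard it, state and all
    docker run -d --name qa-env myapp:2.3.1   # instantly back to the initial state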


End-Users


  • Container-deployed applications are isolated and protected from changes to the environment by other applications.

  • Containers can protect the end-user’s host environment from inadvertent modifications by containerized applications.


Hosting Providers


  • Containers require significantly fewer resources than virtual machines because they’re able to share the operating system kernel and other resources. Hosting providers can achieve higher “density” on their physical hosts than with virtual machines.

  • Cloud-hosted applications are faster, more transient, and easier to deploy. A case in point: every time you use Gmail, Google Docs, Search, etc., new containers are created.

  • The Hyper-V isolation mode with Docker on Windows can provide increased security with the trade-off of higher resource usage.


DevOps


  • Pack, ship, and run any application as a lightweight, self-sufficient container, which can run virtually anywhere, providing portability between development, QA, staging, and deployment environments.


Experimentation


  • Quickly start using applications, database servers, or web servers with less initial setup, configuration, and administrative expertise.

  • Easily discard containers after experimentation so that no application “residue” is left on the host (see the example below).
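For instance, assuming the engine is running Linux containers (the image tag is an example), a database server can be tried and discarded in two commands:

    # --rm removes the container and its writable layer on exit,
    # leaving no residue on the host.
    docker run --rm -d --name try-postgres -p 5432:5432 postgres:11
    # ... experiment ...
    docker stop try-postgres   # stopping also removes it, thanks to --rm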


Back to the freight containers…
In the introduction I said that there were analogies between freight containers and software containers. Radek Ostrowski has written an excellent introduction to Docker, where he describes the similarities:

“In the international transportation industry, goods have to be transported by different means like forklifts, trucks, trains, cranes, and ships. These goods come in different shapes and sizes and have different storing requirements: sacks of sugar, milk cans, plants etc. Historically, it was a painful process depending on manual intervention at every transit point for loading and unloading.

It has all changed with the uptake of intermodal containers. As they come in standard sizes and are manufactured with transportation in mind, all the relevant machineries can be designed to handle these with minimal human intervention. The additional benefit of sealed containers is that they can preserve the internal environment like temperature and humidity for sensitive goods. As a result, the transportation industry can stop worrying about the goods themselves and focus on getting them from A to B.

And here is where Docker comes in and brings similar benefits to the software industry.”

The benefits that come from standardizing freight containers – portability (“manufactured with transportation in mind”), security, isolation, automation, density, efficiency, speed, etc. – are the same benefits to be gained by using software containers.

What about you?
How is your company using or planning to use containers? My hope is that this article has served as a high-level introduction to containerization, specifically with respect to Docker for Windows. I welcome your feedback and any questions you may have about my experiences with containers.

Additional Resources

- What is Docker? (Microsoft)
- Docker: Get Started (Docker)
- What is Docker? Docker containers explained (InfoWorld – Sep 6, 2018)
- Getting Started with Docker: Simplifying DevOps (Radek Ostrowski)
- Docker Containerization Unlocks the Potential for Dev and Ops (Docker)
- How To Get Started With Docker On Windows (Rafael Carvalho, Scalable Path)
- A Brief History of Containers: From the 1970s to 2017 (Rani Osnat, March 2018)
- Docker has raised $92 million in new funding (TechCrunch – Oct 22, 2018)
- What is Docker and why is it so darn popular? (Steven J. Vaughan-Nichols for Linux and Open Source, ZDNet – March 21, 2018)
- Containers: Docker, Windows, and Trends (Mark Russinovich, CTO, Microsoft Azure)
- Ben Armstrong AMA on Containers


Glenn Carr
Senior Software Engineer ~ ConnectShip | iShip
Glenn started with ConnectShip | iShip (then known as TanData) in 1999. He currently works in the ConnectShip | iShip software engineering department as a Senior Software Architecture Manager, but is best known for his funky sock collection which he sports around the office. Outside of work he enjoys spending time with family, running, and being outdoors as much as possible.