Roger Stoffers

Roger is a Certified SOA Architect, Security Specialist and Consultant, as well as a Certified Trainer for Arcitura Education. He is a TOGAF-certified Enterprise Architect and a senior Solution Architect with an affinity for service-orientation and Cloud at Hewlett Packard Enterprise in the Netherlands. He has been the lead for many SOA and Cloud projects, primarily in the telecommunications and media sectors, and has 20 years of international experience working with large organizations across many countries. He is a contributor to the book Service-Oriented Architecture: Analysis and Design for Services and Microservices by Thomas Erl, part of the Prentice Hall Service Technology Series.

As a Solution Architect, Roger has a wide interest in different types of architectures and leads many service-orientation initiatives. He is interested in application integration as well as its end-to-end consequences for businesses, organizations and business processes, and always looks for the business drivers behind requirements in order to meet them with the greatest potential for business satisfaction.

As an Enterprise Architect, he focuses on translating business drivers and goals into architecture principles and requirements. Lastly, he is interested in the relationships between business and IT: how business decisions affect IT, and how IT decisions can affect businesses. As a strategic and pragmatic thinker, he has architected several innovative future architectures, some of which have already been realized and some of which are still in progress due to their long-term nature.

Roger's work with Hewlett Packard Enterprise has been as an Enterprise and Solution Architect, leading many projects for Digital Transformation and Enterprise Application Transformation. His recent work includes being the Lead Architect for many projects in a large-scale business and IT transformation program, acting as strategic advisor to the program board, and providing advice on integration strategy and principles. Roger has also been responsible for the strategic architecture and governance principles of a Global Service Bus in a challenging multi-national environment where central governance was not possible.

Further expertise with a wide range of business capabilities includes but is not limited to the following: Billing, CRM, Retail, Credit Management, Product and Offer Management, Service Provisioning, Business Process Management, Lead Management, Sales and Ordering, and Order Fulfillment.




Containers and Containerization – Applications and Services on Steroids?

Published: June 30, 2016 • Service Technology Magazine Issue XCV

Containerization is a term originating from the transportation industry. A container is an independent, self-contained (autonomous) unit of freight with fixed dimensions, which effectively form a fixed interface. Containers are used in international transportation, where all transportation devices (trucks, ships, trains, cranes) handle containers (containing disparate payloads) in a unified way, meaning the payload can be moved back and forth in an automated fashion.

A similar approach has made its way into the IT world in recent years, with the advent of containerization as a promising technology to succeed or complement virtualization. Before discussing the implications of IT container technology, let's see how it is similar to, yet fundamentally different from, virtualization.

Both containers and virtual machines (virtualization) are means to increase the effectiveness and portability of hosting workloads on servers. Both allow workloads to be abstracted from the underlying physical hardware, but each takes a different approach.

Different Approach to Abstraction

A virtual machine abstracts a complete server at the hardware level, including resources like memory and disk; the hypervisor provides virtual devices to the guest operating system.

A container abstracts the operating system instead; as such, the host operating system offers direct access to the underlying hardware, removing the need for the resource-costly translation of virtual device access onto the physical server's devices. This is depicted in Figure 1.


Figure 1 – A virtual server abstracts a physical machine to offer virtual devices to the guest operating system. Translating virtual device access to physical devices is costly from a system resources point of view. A container abstracts the operating system instead, removing the need for costly device access translations and freeing up system resources.

To reap the real performance benefits and make more effective use of system resources and devices, containers should not be run inside virtual machines, but directly on a host operating system specialized to manage containers. Such lightweight operating systems are offered in the marketplace today.

If there is a desire not to manage the hardware, virtualization can still be used in addition to containerization, at the cost of less effective use of resources.

So What About Those Steroids?

Containers bring steroids to the enterprise (and consequently to our services) by offering significant advantages in these key areas:

  • Portability
  • Service or application density
  • Fault tolerance and resilience, through fault isolation and rapid replacement of faulty containers
  • Suitability for automation


By abstracting the operating system, containers introduce a significant amount of portability for the payload, as the entire container (including the application, its required libraries and APIs, and its required storage) can be moved from one environment to another, possibly onto an entirely different operating system. These could be entirely different types of environments: for example, development on a laptop, testing in a cloud environment, and production in a hybrid on-premise environment with cloud-bursting facilities.

As indicated, abstracting the operating system results in a massive increase in portability, as long as the container technology is supported by the host operating system and kernel API compatibility exists. Containers thus provide a complete runtime environment, including all dependencies such as platforms, libraries, programs and other binaries, in a configurable deployment. This level of dependency inclusion is paramount to the portability of the container.
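As a concrete illustration, this complete runtime environment can be captured in an image description of just a few lines. The sketch below uses Docker (one popular container engine); the file names, base image and the trivial service are illustrative assumptions, not taken from the article:

```shell
# A tiny service to bundle (contents are purely illustrative)
printf 'echo "service is up"\n' > service.sh

# The Dockerfile captures the complete runtime environment: the base OS
# libraries, the service itself, and how to start it.
cat > Dockerfile <<'EOF'
FROM alpine:3.4
COPY service.sh /app/service.sh
CMD ["/bin/sh", "/app/service.sh"]
EOF

# Building the image requires a Docker engine; skip gracefully otherwise.
if command -v docker >/dev/null 2>&1; then
  docker build -t demo-service . || echo "build skipped (no Docker daemon?)"
else
  echo "Dockerfile written; no Docker engine found on this machine"
fi
```

The resulting image can then be moved to any host whose operating system supports the container engine, which is exactly the dependency-inclusion property described above.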

Size Matters

Another significant difference between a virtual machine and a container is that a virtual machine is fairly large, offering support for almost any software or application to be hosted inside. The guest operating system makes sure that a complete set of features and operating system services is available to the applications hosted. This results in a bloated environment with lots of unused, hence unnecessary services and features. The startup time for a guest operating system inside a virtual machine can be several minutes.

Containers, by comparison, only offer the essential services and libraries required for the application or the SOA services to run. Any unnecessary operating system services are simply not present. This approach results in a container that can be started instantly, or in a matter of seconds.

Because of this size difference, a machine can host a significantly larger number of containers (dozens to hundreds) than virtual machines (a handful).

Application / Service Density

Containers consume much less space and system resources because they are tailored for a specific application or (group of) SOA service(s).

Even if a container hosts the same number of services as a virtual machine, all it needs in terms of resources, libraries and APIs is the minimum viable set to make those services work.

System services which allow a virtual machine to offer a wide variety of functionalities are simply not there.

Deploying services on a container engine has in several cases been shown to increase service density by several hundred percent.

Whether through the increased processing capacity of existing hardware, or through reduced requirements for new hardware and the corresponding reduction in power and cooling, these gains will make any CxO a star.

Resilience / Fault tolerance

As containers are so lightweight, and since they can be started in a matter of seconds, a failure in a container can be solved as easily as removing the faulty instance and deploying a new one, providing great tolerance of outages and container faults. We could even speculate that making a container immutable (resulting in a more secure implementation of applications and services) would be possible.

Unlike virtual machines, where a significant amount of operations effort is required to keep them up and running for as long as possible, with a container we could not care less: just destroy it, recreate it, and be up and running again in a matter of seconds. This provides a significant level of fault tolerance.

Services deployed in a container fail? Just replace the faulty containers. Services not coping with the load, causing erratic and slow behavior? Just deploy more container instances hosting those services.

Many applications today are a collection of distributed components, and if we host all those components in a set of containers, an issue in the application can be solved by a fairly local change affecting only one container, which can then be replaced in production while the other parts of the application keep running. This approach results in less downtime, as we don't need to bring down the entire application; consequently, business continuity is increased.
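In Docker terms, the destroy-and-recreate and scale-out tactics described above amount to one-liners. The sketch below assumes a hypothetical image named demo-service; the function and container names are illustrative:

```shell
# Replace-rather-than-repair: destroy a faulty container instance and start
# a fresh one from the same image. A new instance is up in seconds.
recover() {
  docker rm -f "$1"                        # remove the faulty instance
  docker run -d --name "$1" demo-service   # recreate it from the image
}

# Scale out under load: start extra instances of the same service image.
scale_out() {
  count="$1"
  for i in $(seq 1 "$count"); do
    docker run -d --name "demo-service-$i" demo-service
  done
}
```

A usage example would be `recover demo-service-3` after a health check fails, or `scale_out 5` when response times degrade; both rely on the image, not the instance, being the unit of management.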

Fault Isolation

Fault isolation is another key benefit of applying containers. In a monolithic application, a problem in one component can bring down an entire application (or group of applications), for example when a particular component consumes all available memory. Conversely, if a particular container has a fault, the fault is local to that container, and its effects cannot proliferate beyond the scope of the container instance itself.

Suitability for Automation

Containers are quite suitable for automation. As such, they form a really good foundation for DevOps: they are small, manageable pieces of software, platform or infrastructure components, for which a clear separation between development and operations can easily be drawn by looking at everything inside the container (Dev) versus the container itself, which must be managed (Ops).

DevOps provides continuous deployment, typically for your own bespoke applications and services; containers add 'full-stack' deployment to the mix, meaning that complete IaaS, PaaS and SaaS environments and the corresponding software can be managed in one unified approach. This increases the rate of change, offering even better business agility to the enterprise.
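A minimal sketch of such a container-centric pipeline could look as follows. The registry, image name and test script are hypothetical; the point is that the same immutable image artifact moves unchanged through every stage:

```shell
# Build-ship-run: each stage handles the same image artifact rather than
# individually configured servers. All names here are illustrative.
IMAGE="registry.example.com/demo-service"
VERSION="1.0.0"

build()  { docker build -t "$IMAGE:$VERSION" . ; }            # Dev: create the image
verify() { docker run --rm "$IMAGE:$VERSION" /app/run-tests.sh ; } # test the image itself
ship()   { docker push "$IMAGE:$VERSION" ; }                  # publish to the registry
deploy() { docker run -d "$IMAGE:$VERSION" ; }                # Ops: run it anywhere

echo "stages defined: build verify ship deploy"
```

Because the container is the unit of deployment, the same four stages apply whether the payload is a bespoke service, a middleware platform or an off-the-shelf application.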

Why Now?

Container technology has been around for more than a decade, and similar operating-system-level abstraction is almost commonplace; so why worry about it now?

By and large, two differentiating factors have been realized fairly recently: operating system manufacturers have optimized their operating systems to run containers, and companies have added ease of use to the container marketplace. Now that containers are easy to use and lightweight, they offer significant advantages over virtual machines.

Microservices / Containerized Service Deployment

The interest in the architectural approach of microservices is yet another reason to think about containers. Much can be said about microservices; the goal of this article, however, is not to analyze and explain what a microservice is, so please excuse any lack of detail.

A microservice is not a small-sized service, but a service (or very small group of services) that can be bundled, deployed and managed independently. This results in a very fine-grained approach for deploying and redeploying services in case of bugs or runtime failures.

What is important to understand about the microservices approach is that typically any underlying dependencies are deployed together with the microservice. This increases the service autonomy of the microservice tremendously, as it provides greater control over the underlying execution environment. This is depicted in Figure 2.


Figure 2 – A microservice is a service which is bundled together with its dependencies (including any service data required if possible). The deployment bundle is created, deployed and managed independently of other services. If required, service data replication mechanisms are utilized to provide the service with up-to-date and optimized-for-service-execution data from elsewhere in the enterprise.

Containerized service deployment is an approach where individual services, or a relatively small group of services, are packaged and deployed together with the corresponding container; for an illustration of this mechanism, see Figure 3. The services and supporting container configuration are prepared up front and deployed together when needed. This allows for a very flexible and fine-grained approach to managing services and their capacity. A failure in a service can be solved by redeploying the container and its contained services; a lack of capacity can be overcome by rapidly deploying one or more extra container instances, providing our consumers with a service that can be scaled horizontally very easily.
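For illustration, such a deployment bundle can be declared up front. The snippet below uses Docker Compose syntax with a hypothetical service name; scaling then amounts to requesting more instances of the same prepared bundle:

```shell
# Declare a containerized service bundle up front: the service image plus
# its runtime configuration, managed as one unit (names are illustrative).
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  invoice-service:
    image: demo/invoice-service:1.2
    mem_limit: 256m
EOF

# Deploy, and later scale horizontally (requires Docker Compose):
#   docker-compose up -d
#   docker-compose scale invoice-service=5
```

Capacity limits such as the memory cap are part of the prepared bundle, so every additional instance behaves identically to the first.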


Figure 3 – A containerized service is a service deployment bundle preconfigured inside a container. The total package can be instantiated / (re)deployed within seconds, offering great control over capacity, performance and fault tolerance, especially with the microservices approach.

For further details on containerized service deployment, see the related SOA pattern: Containerized Service Deployment.


Security Considerations

Regarding security, the air is less clear, as two streams of prominent opinion exist:

  • Containers are more secure as unused services, libraries and other operating system capabilities are simply not there.
  • Containers are less secure as they do not sufficiently abstract beyond process isolation.

Currently, Both Opinions Are Equally True

Indeed, what is not there cannot be abused, and traditional "hardening" follows the same philosophy. So if a container by default offers the minimum viable support for the deployed services and applications, it should be inherently more secure, as any unnecessary services and APIs are omitted from the container as a starting point.

There is, however, still one concern: the privileges of the processes running a container, and the privileges of the processes running inside a container. A container engine typically requires root/admin privileges. As processes in a container share the same kernel with the host operating system, great responsibility falls to us as container users: a process inside a container with root privileges has the same rights as any process in the host operating system. For this reason, never use anything inside a container that requires root access. Any process in a container running with root privileges is a disaster waiting to happen, and probably target no. 1 for a potential hacker trying to cause havoc. Unless you use an inherently more secure approach such as SELinux (Security-Enhanced Linux) or similar, if you follow only one rule, make it this one: don't use processes requiring root privileges.
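Applied to a Docker image, this rule amounts to dropping root before the service starts. In the sketch below (account, file and image names are illustrative), the USER directive ensures everything after it runs unprivileged:

```shell
# A tiny service to package (contents are illustrative)
printf 'echo "service is up"\n' > service.sh

# An image description whose service runs without root privileges.
cat > Dockerfile.nonroot <<'EOF'
FROM alpine:3.4
# Create a dedicated unprivileged account for the service
RUN addgroup -S svc && adduser -S -G svc svc
COPY service.sh /app/service.sh
# From this point on, nothing in the container runs as root
USER svc
CMD ["/bin/sh", "/app/service.sh"]
EOF
```

Even if the service is compromised, the attacker then holds only the rights of the unprivileged account, not those of root on the shared kernel.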

Extending the Reach of Containers Beyond "Applications" or "Services"

Containers can host what we could call workers. Many types of workers exist: services are workers, but Web APIs and many other types of software applications can also be considered workers.

Big data relies on workers as well. The Map/Shuffle/Reduce paradigm heavily relied upon in big data analysis is a specific way of indexing, sorting and aggregating data that runs multiple concurrent workers in parallel, each with a specific task. The ability to rapidly deploy workers, execute work and subsequently destroy the workers is key to the effective use of big data analysis resources. To execute a task, a set of dedicated workers is deployed; once the task is done, the related workers are destroyed.

Google already uses containers to a great extent: any search operation for any internet user is handled by a containerized search application. Once the search is done, the container is destroyed, and the freed-up resources can be offered to another search operation by deploying another container.

Immediate Need for the Industry

When looking at the present state of the industry, a few important factors must be addressed:

  • Container standardization is probably of immediate importance in the container space, to foster a significant increase in portability, even across cloud providers. The recently started Open Container Initiative (OCI) is the most significant attempt at establishing such standardization.
  • PaaS and SaaS providers could offer certified container images in a marketplace, bringing more security and ease of use to the deployment of platforms and software, and resulting in a greater ability to offer services and applications as a commodity. As an example, message-oriented middleware can be deployed in a container, as can a business rules engine, a source version control tool, or a complete email application; the possibilities are endless. The present lack of certification options for container images allows anyone (including malicious container image providers) to register and distribute images, which should be addressed.
  • Standardization of monitoring and management control, ultimately extended into what is commonly referred to as container management or container orchestration, where clusters of containers are monitored and managed together as a single distributed application, is one of the key requirements for container technology to become truly successful. Containers can run in many deployment configurations (including a mix of on-premise and in-the-cloud, using hybrid cloud approaches), and event correlation as well as an end-to-end understanding of the telemetry is key. Further concerns include inter-container dependencies between independently containerized application components, load distribution, redundant implementation of services, applications, resources and infrastructure, synchronization of resources, and many other complex technologies that must all be managed efficiently. Ease of use in managing containers is key here.
  • Configuration management issues arise when an application is decomposed into a group of containers. Often only specific versions of specific containers work well together, resulting in a need for effective baseline management of containers. A central container registry could play a key role here, as the pace of change is high, especially in a DevOps environment.
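One pragmatic baseline-management tactic, sketched below with hypothetical image names and versions, is to pin every container in the application group to an exact tag, so a known-good combination can be reproduced from the registry at any time:

```shell
# A versioned baseline: each image in the decomposed application is pinned
# to an exact tag known to work with the others (names are illustrative).
cat > docker-compose.baseline.yml <<'EOF'
version: '2'
services:
  frontend:
    image: demo/frontend:2.3.1
  order-service:
    image: demo/order-service:1.8.0
  billing-service:
    image: demo/billing-service:1.8.2
EOF

# Recreate the exact baseline later (requires Docker Compose):
#   docker-compose -f docker-compose.baseline.yml up -d
```

The baseline file itself can live in version control, so a change to any one container version is reviewed as a change to the whole group.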

IT Transformation Potential

Due to the significant increases in portability, fault tolerance, level of automation and density, the advent of containers could bring fundamental changes to the way we design, deliver and manage the entire IT landscape in our enterprises.

Whether the industry embraces containerization, and where it will lead us, remains to be seen in the next few years. However, if world-leading banks are actively using containers in production (and banks are known to stick with proven technology, making them slow to adopt new market trends), then chances are we are onto something big here.