Blog by Jason Normanton, Head of Cloud Services
Welcome to the final part of this Containerization blog series.
In the first blog post, I spoke about our vision to support clients on their Digital, Cloud, and Containerization Journey - and how we see ourselves very much like the Sherpa, guiding explorers to the summit, whilst avoiding the pitfalls, crevasses, and icefields on the way.
In the second blog, we discussed Application Modernization and the evolution of managing and operating modern microservices application architectures (from pets to chickens!) – plus some of the new toolsets and the major scaling benefits that can be achieved with Kubernetes.
In this third post, I will bring this together with some real-world examples of the benefits of containerization and the issues our clients are overcoming by leveraging our investment in technology, networks, and data centers to bring their vision to life.
When we began working with companies on their containerization projects, we saw many similar questions that we really needed to address. Things like, 'which tools should we use and why?' and 'how do I choose the right one?'. These questions come as no surprise when you consider that every client has individual requirements around tools to suit their needs.
Next came the important questions: 'which vendor do we choose, based on their commitment to open-source code, the size of the organization, and their ability to execute and support?' and 'do we build, buy, or integrate our technology stack from open-source products/projects?'.
The container ecosystem is so large that testing and evaluating each option, quickly understanding what good looked like, and then standing up a stable version of the selected toolset was a huge task.
We also had to bear in mind that we needed a solution that would remain integrated and stable in production as the components within the toolset moved to new releases.
As we aren't a software house and don't want to maintain the integrations internally, we opted to review the marketplace and investigate the major container Application Platform as a Service (APaaS) and Container as a Service (CaaS) offerings available.
It quickly became apparent that there were only two real options in this space - with Kubernetes support a given in both - Cloud Foundry or OpenShift.
Reviewing the Red Hat OpenShift distribution, we began to see some major advantages for our enterprise clients with its extensibility into the more traditional world of IBM. This was apparent through the tight integration that IBM has performed in their Cloud Paks approach to key industry themes like Multi-Cloud Management, Applications, Data, Integration, Automation and Security.
Another benefit of OpenShift is its ability to give containerized workloads access to existing legacy infrastructure, such as storage arrays, with products like IBM Spectrum Protect Plus acting as the translation layer, as outlined in the IBM slide above.
This richness of capability in the new container world, which also links existing assets to legacy investments within the data center, is exactly the kind of vision that Tectrade embraces and an approach we know will pay dividends to our clients. In the hybrid world in which we are operating, we want to provide our clients with choices, not dead ends, as shown in our approach to tooling.
In support of client container projects, Tectrade offers a native support option in both Amazon Web Services (AWS) and Microsoft Azure, providing full support for both Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS).
Our enhanced container offering, based on Red Hat OpenShift, can be deployed in your data center or our private cloud, or directly onto AWS, Azure, GCP, or IBM Cloud to support your vision for true multi-cloud capability.
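To give a sense of how portable an OpenShift deployment is across clouds, the sketch below shows a minimal `install-config.yaml` as consumed by the OpenShift installer; only the `platform` stanza changes between clouds. The domain, cluster name, and region are hypothetical values for illustration.

```yaml
# Illustrative install-config.yaml for an installer-provisioned OpenShift cluster.
apiVersion: v1
baseDomain: example.com        # hypothetical DNS domain
metadata:
  name: demo-cluster           # hypothetical cluster name
platform:
  aws:                         # swap this stanza for azure:, gcp:, or ibmcloud:
    region: eu-west-2
pullSecret: '...'              # obtained from the Red Hat console
sshKey: '...'                  # public key for node access
```

Because the rest of the configuration is cloud-agnostic, the same cluster definition can be re-targeted at a different provider with minimal changes.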
Our client, a large financial services Independent Software Vendor (ISV), wanted to achieve a 'Standard Development Environment' (SDE) for a select number of workstations with a standard set of developer tools loaded to each. The ISV has a complex development stack built around a backend of IBM Power Systems coupled to x86 middleware services and web front-end.
The SDE is required each time the ISV begins a new project or onboards a new group of developers, and it's important that identical SDEs can be built as quickly as possible and with minimal manual effort.
Leveraging our relationships with IBM, Red Hat, and HashiCorp, we developed an automated environment using Power Virtual Server capacity in the IBM Cloud to present a full SDE back to the client for use as a sandpit or pre-production development and User Acceptance Testing (UAT) environment.
We maintain the automation and deployment tasks required to spin up and connect the environment to the client's network, whilst the ISV is free to focus on developing products in a flexible, standardized environment, which can be completely torn down and replaced in a matter of minutes.
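To give a flavour of what that automation looks like, here is an illustrative Terraform sketch declaring one backend node of the SDE as a Power Virtual Server instance in IBM Cloud. All names, variables, and sizing values are hypothetical; the point is that the whole environment is declared as code, so it can be rebuilt identically or torn down in minutes.

```hcl
# Illustrative only: names, IDs, and sizing values are hypothetical.
variable "powervs_workspace_id" {}   # target Power Virtual Server workspace
variable "sde_image_id" {}           # pre-built SDE operating system image
variable "sde_network_id" {}         # subnet routed back to the client network

resource "ibm_pi_instance" "sde_backend" {
  pi_cloud_instance_id = var.powervs_workspace_id
  pi_instance_name     = "sde-backend-01"
  pi_image_id          = var.sde_image_id
  pi_processors        = 0.5           # fractional Power cores
  pi_proc_type         = "shared"
  pi_memory            = 8             # GB
  pi_sys_type          = "s922"        # POWER9 scale-out system
  pi_network {
    network_id = var.sde_network_id
  }
}
```

With the environment described this way, `terraform apply` produces an identical SDE for each new project or developer group, and `terraform destroy` removes it completely when the work is done.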
This type of environment can be expanded to hyperscale public clouds when used in conjunction with our Scaffold and FastTrack services. These are automated deployments of the foundational and governance resources and policies we have developed to speed up our clients' consumption of the leading cloud products, platforms, and capabilities.
More to come around this exciting offering in my next blog post “Treating Cloud Deployments like a Software Product”...
Whilst I acknowledge that there is an element of lock-in in our decision to choose OpenShift over Cloud Foundry (or Red Hat over Pivotal), we saw more positives than negatives in both the company and the technology.
Below is the final video segment, recorded during a client webinar in January.
In this episode, my esteemed IBM colleague Dr. Frank Lee explores some real-world examples of how IBM, Red Hat, and OpenShift are helping customers solve very modern problems and handle very large datasets.
Jason Normanton is Head of Cloud Services at CSI Group. If you wish to get in touch to discuss anything raised in this blog, please use our Contact Us Page.