Mainframe Modernization

Monoj Kanti Saha
8 min read · Jul 10, 2022

Mainframe modernization has been a long-held aspiration for technology leaders at many enterprises. With the maturing of cloud computing, enterprises now have real choices for migrating their legacy workloads to the cloud.

Who uses mainframes? — Enterprises across many domains of business, from banking, insurance, and healthcare to retail, government, and aviation (https://www.precisely.com/blog/mainframe/6-industries-mainframes-king).

“For more than 60 years, mainframe computers have powered mission-critical applications for industries central to the continuing operation of the global economy. Today, 45 of the top 50 banks, 4 of the top 5 airlines, 7 of the top 10 global retailers, and 67 of the Fortune 100 companies leverage the mainframe as their core platform. Mainframes handle almost 70% of the world’s production IT workloads.” — IBM Institute for Business Value

The time and the technology are now right to jump on the bandwagon and modernize the mainframe.

Why is Mainframe modernization gaining momentum?

1. Maturity of the cloud

2. Regulatory requirements that can now fit into the plan

3. The world’s rush towards AI/ML-driven systems

4. A ready ecosystem with both long- and short-term options in the arsenal

5. Sustainability

6. Cost advantage

7. A shrinking pool of developers to maintain legacy systems

8. Emerging market disruptors with advanced digital platforms

Mainframe Modernization Benefits

1. Reducing MIPS cost

2. Unlocking the data’s potential

3. Leveraging the cloud for Horizon 2/Horizon 3 level transformation

4. Modernizing the legacy stack, making it easier to find talent in the market

5. Readiness to compete with market disruptors and start-ups

The Gordian Knot — Before choosing a mainframe modernization solution, one needs to understand the client’s exact strategic problem: whether the ambition is to exit an IBM mainframe contract within 18–24 months, or whether it is a long-term plan to build a stable environment off the mainframe in 5–7 years. Funding, of course, is an important aspect here.

Roads forward — I have tried to create a decision flow chart for the target disposition of mainframe workloads. Broadly, the strategies divide into two: i) short term and ii) long term.

Short-term solutions suit enterprises that i) need to exit the data center quickly, ii) want to avoid an impending mainframe contract renewal, iii) need to manage TCO to fund future modernization, or iv) have relatively small mainframe workloads.
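The short-term triggers above can be sketched as a simple decision function. This is purely an illustration: the attribute names and the MIPS threshold below are assumptions of mine, not taken from any real assessment tool.

```python
def choose_strategy(workload: dict) -> str:
    """Pick a short- or long-term modernization strategy.

    `workload` is a hypothetical attribute dictionary; the keys and the
    MIPS threshold are illustrative assumptions, not industry standards.
    """
    short_term_triggers = (
        workload.get("exit_datacenter_quickly", False),
        workload.get("contract_renewal_imminent", False),
        workload.get("tco_relief_needed_to_fund_modernization", False),
        0 < workload.get("mips", 0) < 1000,  # "smaller" workload, assumed cutoff
    )
    return "short-term" if any(short_term_triggers) else "long-term"


print(choose_strategy({"mips": 500}))                          # small workload
print(choose_strategy({"mips": 15000}))                        # large, no urgency
print(choose_strategy({"contract_renewal_imminent": True}))    # renewal pressure
```

In a real assessment each trigger would be weighed against contract dates, funding, and risk, but the shape of the decision is this simple disjunction.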

Long-term solutions suit enterprises with huge workloads, such as banks or insurance companies. They can avoid a big-bang approach by logically dividing the workload into i) a core, mission-critical part and ii) a peripheral, loosely coupled part. The second should be quick to move to the cloud, in the form of cloud-native custom development or SaaS products. The core can then be moved out gradually, in smaller chunks and in a phased manner, with functions on the mainframe co-existing with the newly developed system (as in the strangler pattern) — this is hollowing out the core. The new system can be built on a highly agile, scalable microservice architecture.
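The hollowing-out co-existence described above is the essence of the strangler fig pattern: a routing facade sends each call either to the legacy mainframe or to a newly built microservice, and the set of migrated functions grows over time. A minimal sketch, with hypothetical function names and handlers:

```python
class StranglerFacade:
    """Route calls to new microservices once migrated, else to the legacy system."""

    def __init__(self, legacy_handler):
        self._legacy = legacy_handler
        self._migrated = {}  # function name -> new cloud-side handler

    def migrate(self, function_name, new_handler):
        """Register a function as having moved off the mainframe."""
        self._migrated[function_name] = new_handler

    def call(self, function_name, payload):
        # Migrated functions are served by new code; everything else
        # still flows to the mainframe, so both systems co-exist.
        handler = self._migrated.get(function_name, self._legacy)
        return handler(payload)


# Hypothetical usage: both systems serve traffic during the transition.
facade = StranglerFacade(legacy_handler=lambda p: ("mainframe", p))
facade.migrate("quote-premium", lambda p: ("microservice", p))

print(facade.call("quote-premium", {"policy": 42}))  # served by new code
print(facade.call("post-ledger", {"txn": 7}))        # still on the mainframe
```

In practice the facade is usually an API gateway or message router rather than in-process code, but the registry-of-migrated-functions idea is the same.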

Mainframe Modernization Patterns

There is also a middle road — a two-step process: i) quickly move workloads to an emulator-based architecture in the cloud (e.g., Micro Focus) with new SQL engines and bank some TCO savings, then ii) use those savings to fund modernization with cloud-native services.
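The self-funding logic of this two-step approach is simple arithmetic. All figures below are made-up assumptions for illustration, not benchmarks from any vendor or client.

```python
# Illustrative only: every number here is a made-up assumption.
annual_mainframe_cost = 10_000_000   # current MIPS-based annual spend
emulator_cost_fraction = 0.40        # assumed relative cost after re-platforming
annual_savings = annual_mainframe_cost * (1 - emulator_cost_fraction)

modernization_budget = 12_000_000    # assumed cost of the cloud-native rebuild
years_to_self_fund = modernization_budget / annual_savings

print(f"Annual savings from step one: ${annual_savings:,.0f}")
print(f"Years of savings needed to fund step two: {years_to_self_fund:.1f}")
```

Under these assumptions, two years of emulator-era savings would fund the cloud-native rebuild; the real ratio depends entirely on the enterprise's MIPS bill and the scope of step two.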

Decision Tree — Mainframe Modernization

Augmentation — Another important aspect is mainframe data and its backup. Tools such as IIDR, tcVISION, and Qlik can decouple the data and make it available read-only in the cloud for new or modernized applications, including data lakes and analytics. Other tools, such as Model9, can place backup data in the cheapest cloud object storage tiers (e.g., an S3 bucket) and restore it whenever necessary.

IBM Offerings — There would always be another angle to look at from IBM’s standpoint in the Mainframe modernization journey.

i) Off-loading non-prod mainframe environments to the cloud — IBM offers IBM Z Development and Test Environment (IBM ZD&T), which runs a z/OS distribution on a personal computer or Linux workstation; it can also run as a Docker container. Without costly Z mainframe hardware, it allows enterprises to create and run environments for mainframe application demonstration, development, testing, and employee education. But it is strictly restricted to development — not apt for production workloads or for any robust development or performance-testing environment. It can run on public cloud (e.g., AWS — aws.amazon.com/blogs/apn/deploying-ibm-mainframe-z-os-on-aws-with-ibm-zd-and-t/).

IBM has released another solution for dev and test — IBM Virtual Dev and Test for z/OS (IBM ZVDT). This is z/OS on Linux on IBM Z, virtualized to offer developers a sandbox for prototyping and version-to-version migration.

In-place modernization with hybrid cloud — With this approach, one can run Java, Python, Node.js, and Go programs on z/OS using IBM SDKs. In addition, Red Hat OpenShift can be provisioned to run containerized workloads, integrating seamlessly with OpenShift running anywhere. Real-time data sharing through streaming solutions such as Kafka, and exposing APIs from z/OS through z/OS Connect, are all possible without migrating the primary mainframe workloads and databases to the cloud. DevOps and automation tooling can likewise be established seamlessly across the enterprise (https://aws.amazon.com/blogs/apn/modernize-mainframe-applications-for-hybrid-cloud-with-ibm-and-aws/).

zCloud — This is an IBM mainframe-as-IaaS environment, hosted in IBM data centers, that uses advanced virtualization with secure logical partitions (LPARs) to cut the capital expenses of running one’s own data center — hardware, floor space, power, and cooling costs.


Approach to solution — The mainframe landscape is generally mammoth, so understanding the customer’s strategy and pain points is key. The key inputs for any solution are: i) current contract details and end date, ii) current expenditure and growth expectations, iii) the enterprise’s future roadmap, iv) the mainframe technical stack (standard, custom, or niche), v) security and regulations of the land, and vi) the current enterprise architecture. Based on this information, a solution blueprint or roadmap can be created, optimizing TCO and advancing digitization.

Popular Tools & Techniques for Mainframe Migration to Cloud

Micro Focus & Replatform — Micro Focus Enterprise Server is a software emulator of the IBM mainframe that runs on Linux and Windows VMs in public cloud. It provides an environment for batch execution and online transactions: IBM COBOL, IBM PL/I, JCL batch jobs, CICS and IMS TM transactions, web services, and common batch utilities including SORT.

Micro Focus Enterprise Server can be launched in both AWS and Azure. AWS has recently launched AWS Mainframe Modernization, where the Micro Focus runtime is available as a managed service.

Sample Target Architecture with Micro Focus Emulator in Cloud

TSRI and Refactor — TSRI takes an architecture-driven approach to automated refactoring and transformation. Along with translating source code to modern target languages, it also modernizes the architecture, database, and user interfaces.

TSRI — Three Step Approach for Modernization

TSRI’s target architectures support containerized environments on public or private cloud or in on-prem data centers. TSRI is steadily expanding its portfolio of supported source languages (35+), databases, UIs, and file systems.

TSRI — Modernization Methods (source — https://tsri.com/solution)

AWS Blu Age and Refactor — AWS Blu Age offers a service to refactor IBM or Unisys mainframe code and databases into modern languages, architectures, and databases. It can deploy applications on standalone EC2 instances, in containers running on EKS, ECS, or OpenShift, or in cost-effective serverless Lambda functions.

AWS Blu Age — Mainframe Modernization Path (source — https://aws.amazon.com/blogs/apn/automated-refactoring-from-mainframe-to-serverless-functions-and-containers-with-blu-age/)

Serverless COBOL — Another very interesting way to run COBOL programs in the cloud is Serverless COBOL: AWS Blu Age gives the option of continuing to develop in COBOL while deploying it as a Lambda function in AWS, with no servers or containers to manage and AWS handling the scaling. A further advantage is that it helps decompose COBOL applications into microservices, providing more agility and flexibility.
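The microservice decomposition works because Lambda invokes exactly one handler per request, so each former COBOL program or paragraph can become its own independently scaled function. The sketch below is a Python analogue of that shape (with Serverless COBOL the entry point would be compiled COBOL); the function and field names are hypothetical.

```python
# Python analogue of one decomposed mainframe function deployed on Lambda.
# The business logic and field names are hypothetical illustrations.
def handler(event, context=None):
    """Compute an insurance premium quote — one microservice-sized function.

    Lambda calls this with a JSON-derived `event` dict; there is no server
    or container for the team to manage, and scaling is per-invocation.
    """
    base = event.get("base_premium", 100.0)
    risk_factor = event.get("risk_factor", 1.0)
    return {"statusCode": 200, "premium": round(base * risk_factor, 2)}


# Local invocation, mimicking how Lambda would deliver an event:
print(handler({"base_premium": 250.0, "risk_factor": 1.2}))
```

Each such function can then be versioned, scaled, and billed independently, which is precisely the agility argument for decomposing the monolithic COBOL application.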

Model9 and Mainframe Data Management — Model9 backs up mainframe data to less costly cloud-native storage, with a recovery option. It also helps connect the data to cloud AI/ML and analytics, and it supports all leading cloud providers. It cuts costs by replacing expensive proprietary VTLs with affordable cloud storage and by reducing expensive mainframe CPU consumption.

Model9 — Data Management Solution (source- https://model9.io/solutions/model9-shield/)

Kafka (Confluent), tcVISION, and Digital Decoupling — Unlocking mainframe data is crucial. tcVISION is a data replication tool that synchronizes mainframe databases with cloud services. “Data can be replicated from IBM Db2 z/OS, Db2 z/VSE, VSAM, IMS/DB, CA IDMS, CA Datacom, or Software AG ADABAS. tcVISION can replicate data to many targets including Confluent Platform, Apache Kafka®, AWS, Google Cloud, Microsoft Azure, PostgreSQL, Snowflake, etc.”
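Conceptually, a change-data-capture pipeline like the one tcVISION feeds into Kafka emits insert/update/delete events that a cloud-side consumer applies to a replica, which new applications then read. The self-contained sketch below shows only the apply step; the event shape is my assumption for illustration, not tcVISION's actual wire format.

```python
def apply_change(replica: dict, event: dict) -> None:
    """Apply one CDC event to an in-memory replica keyed by primary key.

    The {"op", "key", "row"} event shape is an assumed illustration of
    the kind of change records a CDC tool publishes to Kafka.
    """
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]
    elif op == "delete":
        replica.pop(key, None)


# Simulated stream of changes replicated off a mainframe account table.
replica = {}
for ev in [
    {"op": "insert", "key": 1, "row": {"acct": 1, "balance": 500}},
    {"op": "update", "key": 1, "row": {"acct": 1, "balance": 450}},
    {"op": "insert", "key": 2, "row": {"acct": 2, "balance": 90}},
    {"op": "delete", "key": 2},
]:
    apply_change(replica, ev)

print(replica)  # the cloud-side read-only copy after replaying the stream
```

A real consumer would read these events from a Kafka topic and write to a cloud database or data lake, but the per-event apply logic is the core of digital decoupling: the mainframe remains the system of record while the cloud holds a continuously synchronized read copy.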

tcVISION, Confluent and Mainframe Data Decoupling (source — https://www.confluent.io/blog/unlock-db2-data-with-tcvision-and-confluent/)

CICS and Kafka — There is also an interesting way to publish messages directly from CICS, using the Liberty Kafka client running in CICS (https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/mark-cocker1/2020/08/07/cics-and-kafka-integration).

Mainframe Application Modernization Software (Source — https://services.google.com/fh/files/misc/mainframe_app_mod_europe_2022.pdf)

These are the tools and approaches most prevalent in the current market. Migration tools keep expanding their portfolios, covering a growing range of legacy source languages, databases, and other subsystems. Hopefully this blog gives a bird’s-eye view of the entire spectrum.

References

https://www.oracle.com/emea/news/announcement/blog/oracle-and-accenture-have-cracked-mainframe-to-cloud-conundrum-2022-04-05/

https://www.ibm.com/cloud/architecture/architectures/application-modernization-mainframe/

https://aws.amazon.com/blogs/apn/automated-refactoring-from-mainframe-to-serverless-functions-and-containers-with-blu-age/

https://tsri.com/cloud/oracle

https://tsri.com/cloud/azure

https://www.confluent.io/blog/unlock-db2-data-with-tcvision-and-confluent/

https://dzone.com/articles/mainframe-offloading-and-replacement-with-apache-k

https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/mark-cocker1/2020/08/07/cics-and-kafka-integration

The writer is a solution architect specializing in cloud-native serverless architecture, with experience in end-to-end solutioning of mainframe modernization for a number of enterprises. His cloud architecture experience covers domain-driven design, event-driven architectures, serverless-first design, data analytics, and composable architecture practices.
