
Microservice architectures and the IBM Mainframe - Recollections of similarity

As I learn about Docker, containerization, and microservices, I can't help but think we've been down (part of) this road before. Without a doubt, there's a lot of innovation in Docker, so don't misunderstand me. However, I want to acknowledge the good parts of what I experienced long ago, in my limited time on IBM mainframes, and perhaps draw out a few lessons as well.

My mainframe experience was with the IBM VM/ESA operating system from 1992-1994 at IBM's Boca Raton facility ("Home of the PC"). VM/ESA (now known as z/VM) is a (some would say "the") virtual machine operating system, with a history nearly as old as UNIX's.

One thing you have to know about VM/ESA is that it divides a mainframe up into thousands of virtual machines, each of which runs a single-user, single-tasking operating system called CMS (Conversational Monitor System). CMS is much like MS-DOS (in my opinion), and as a result it is a pretty small system (relatively speaking). It can mount disk drives (called minidisks) which, as in MS-DOS, are lettered A-Z. The traditional CMS filesystem is not hierarchical - it is just a set of minidisks. CMS has a command line, but most of the time you run programs like FILELIST, which is a bit like a Norton Commander or XTREE full-screen file manager. The primary text editor is XEDIT, a full-screen editor with an extension capability (like Emacs or Vim).

When you log into the mainframe, you get a single VM just for you! So, when there are 10,000 users, there are 10,000 VMs - and I'm pretty sure we were running numbers that high on our mainframes at the time.

The primary programming language on CMS was the scripting language REXX. REXX is an easy-to-learn, easy-to-use language, much like a simplified Perl or Tcl. It was not object-oriented (it is now), and it relied heavily on the host operating system for advanced features (which is easy to do on VM/ESA). For example, you don't usually read files directly in REXX on CMS. Instead, you use a command called EXECIO to read records and push them onto the QUEUE. The QUEUE is a built-in feature of CMS that REXX can use to pass data back and forth between programs. This is a lot like a pipe in UNIX (and CMS has Pipelines, too), but different in that the QUEUE exists outside the programs; it is not just hooking up one program's output to another's input as in UNIX.
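To make the EXECIO/QUEUE pattern concrete, here is a minimal REXX sketch as I remember the idiom. The file name PROFILE DATA A is made up for illustration (filename, filetype, and minidisk filemode); the EXECIO options shown are the common ones for reading a whole file.

```rexx
/* Sketch: read every record of an example file onto the stack,   */
/* then pull the records off one at a time.                       */
/* PROFILE DATA A is a hypothetical file name.                    */
'EXECIO * DISKR PROFILE DATA A (FINIS'  /* stack all records, close the file */
do while queued() > 0
  parse pull record      /* take the next record off the queue, preserving case */
  say record             /* echo it to the terminal */
end
```

Note how the program never opens a file handle itself: EXECIO (a CMS command, not a REXX statement) does the I/O and leaves the data on the queue for REXX to consume.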

One thing I picked up by talking with real VM programmers was the concept of the "service machine". A service machine is a virtual machine created for a specific purpose, much like a daemon on UNIX.

The service machine concept is that you start up a VM that doesn't shut down when you log out, and run one program that loops continuously, servicing requests. Somewhat like UNIX, VM has a "standard input" and "standard output", but these are called the "virtual reader" and "virtual punch", after the card reader and keypunch. You remember 80-column Hollerith cards, right? The service machine program reads from the reader and writes to the punch. Oh, and you can log into the service machine interactively if you want to. You could query the I/O subsystem to see how many records had been processed and where the file pointer/cursor was located in a file (a mindbender for UNIX programmers). Neat. (Though I can also see lots of problems with this. Still, it's nice to remember there is more than one way to do it (tm).)
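The main loop of such a service machine might look roughly like the sketch below. This is from memory and heavily simplified: WAKEUP was a commonly available CMS utility for waiting on spool activity, but the exact options, the file names, the PROCESS subroutine, and how you determine the requester's userid are all hypothetical here.

```rexx
/* Sketch of a service machine main loop (not working code).      */
do forever
  'WAKEUP (RDR'                    /* sleep until a file arrives in the reader */
  'READCARD REQUEST DATA A'        /* read the request from the virtual reader */
  call process                     /* hypothetical: handle the request,        */
                                   /* producing REPLY DATA A                   */
  requester = 'SOMEUSER'           /* hypothetical: userid of the caller,      */
                                   /* normally taken from the spool file       */
  'CP SPOOL PUNCH TO' requester    /* aim the virtual punch at the caller      */
  'PUNCH REPLY DATA A (NOHEADER'   /* send the response                        */
end
```

The shape is the same as a UNIX daemon's accept/handle/respond loop, except the "socket" is the spool system: requests and replies are card-image files moving between virtual readers and punches.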

Your other programs can connect to the service machine, make requests, and get responses. And it's a mainframe, so it's pretty darned fast.

What else did the mainframe get right? Well, like modern containerization, it shares the operating system image among many VMs, which reduces the resource (memory) burden of having thousands of them. In addition, CMS uses a single-tasking (non-multithreaded) model, which is easy to program and has low overhead (a bit like NodeJS). Finally, it was programmed heavily in a scripting language for productivity. REXX was also the extension language for XEDIT (the text editor) and was used for everyday scripting (like AWK or Perl). REXX has a compiler, too, if you need more speed.

Where did the mainframe go wrong? Not in too many places. The service machine concept provides a nicely isolated program, but I don't know whether you could scale them horizontally, and I'm uncertain about the mechanisms for sharing across a network (mainframes talked both SNA and TCP/IP when I used them). I do recall they had a mechanism for networking, but I can't remember how it worked. This was pre-SSH, so TN3270 was the primary way to connect as a user; programs talked over a different set of protocols. I also don't think the concept of service discovery was well established.

I have vastly simplified how things work and I'm sure I've gotten more than a few things wrong. Feel free to leave me comments and I'll be glad to make corrections.

