I recently came across drawio, and I’ve been using it to make Wardley Maps. In addition to the online version, there’s also a desktop version that you can download and use.
It doesn’t require me to sign up for an account, and it integrates well with common cloud storage options (Dropbox, Google Drive, OneDrive). As a result, I find myself using it more than I expected, especially when it comes to layers.
To see what I mean, I’ve placed the maps from the previous article on AWS S3 for you to try out. drawio exports a single HTML file, which can also be saved locally and opened in the browser.
Interact with the map – using buttons
To interact with the Wardley Map mentioned above, you can use the buttons provided or select which layers you’d like to see. The disadvantage of the buttons is that pressing them out of sequence messes up what’s displayed.
Use the “fullscreen” option to view the map; this avoids browser-specific behaviours.
Then click through the buttons at the top in this sequence:
1, 2, 3, then 3 again (to make the selection disappear)
5a, then click “back to map”
6, then click “back to map”
Interact with the map – using layers
To select the layers, hover your mouse over the map’s page to reach the icon for layers. There, you’ll see several checkboxes that can be selected.
Using drawio online with icons/symbols for Wardley Maps
If you’d like to try out drawio online for drawing Wardley Maps, I’ve created a drawio-specific template, or rather a set of icons, that should save us time. These are saved online and made available through a URL, because drawio online gives us the option of specifying URLs from which to load icons/symbols – see step 3 in the image below. For a quick try, see the section titled “P.S. Update on 09-April 2019”.
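As a sketch of how such a URL-hosted library can be opened directly: drawio online accepts a `clibs` URL parameter, where a `U` prefix marks a URL-encoded library address. The stencil location below is a hypothetical placeholder, not my actual link:

```python
from urllib.parse import quote

# Hypothetical location of a Wardley Map stencil library (an .xml exported from drawio).
STENCILS_URL = "https://example.com/wardley-stencils.xml"

def drawio_launch_url(library_url: str) -> str:
    """Build a drawio-online URL that pre-loads a custom shape library.

    The `clibs` parameter takes a library reference; the `U` prefix tells
    drawio to fetch the library from the (URL-encoded) address that follows.
    """
    return "https://app.diagrams.net/?clibs=U" + quote(library_url, safe="")

print(drawio_launch_url(STENCILS_URL))
```

Opening the printed link in a browser should start drawio online with the library already in the shapes panel, assuming the XML file is publicly reachable.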
I’m still discovering what drawio enables. E.g., with this link, the stencils are now linked to the corresponding GitHub repo, meaning that you’ll always get the latest version of the stencils. After updating the repo, I no longer need to copy them to S3 and Dropbox, while you, by getting the stencils from GitHub (if you’re using the desktop version), always have the latest version. Win-win for all. Wohoo!! 😎
I’ll be mapping a few important chapters of Lou Gerstner’s book, Who Says Elephants Can’t Dance, as illustrations of Wardley Mapping. Not that Gerstner draws maps for us, but his descriptions and narratives embody so much strategic thinking that I couldn’t help recalling Wardley’s Strategy Cycle, which led to an attempt at visualising them using Wardley Maps.
Justifying my application
But, before we explore the maps from Gerstner’s book, I’d like to explain why I think it’s the best book I’ve come across for illustrating Wardley Mapping, looping through the Strategy Cycle within a business context.
When I say “the best,” I mean it in the sphere of what I’ve come across and read. This sphere is naturally quite narrow.
Of all the materials out there, a subset has been published or made public. Of these, I’ve read a small portion. Of those that I’ve read, I see two that are relevant to Wardley Maps. I’ve restricted myself to books; articles are too short for this purpose. I could, on the other hand, wade through documents such as the annual reports of publicly traded companies – this I occasionally do – but these make for dull reading, let alone impress the mind with vivid illustrations.
First is the series of books by Peter Krass. These consist of collections of articles from the leading business men and women of the time, arranged around varied themes within the broader categories of Business, Management, Leadership, Entrepreneurship, and Investment. I mention this series because it contains many articles contributed by contemporaries of Gerstner — such as Bill Gates (Microsoft), Larry Ellison (Oracle), Andy Grove (Intel) — contemporaries that he speaks of.
The second is “Denial” by Richard Tedlow. The book has two parts: the first highlights companies that struggled to solve matters within their respective businesses, while the second features firms that successfully overcame obstacles.
From my perspective, the scope of what they did and didn’t do is too narrow compared to Gerstner’s book, in the following sense: Gerstner describes the context to us (the market, customers, competitors, employees, culture, leadership) followed by his actions, whereas in “Denial,” it’s Tedlow (the author) who tells us the context and then explains the actions or inaction of the business leaders at the time. This gives the impression (at least to me) that the companies spoken of didn’t know, or didn’t make explicit, the context in which they operated (at least the critical parts relevant to them), i.e., didn’t know their landscape. Of those that did, the landscape and corresponding value chains described were small – made up of a few components – in comparison to Gerstner’s. If you don’t know the landscape, how can you apply doctrine, climatic patterns, and gameplay at an industry/market level?
Regarding learning to map, the task is two-fold: on the one hand, to find materials ample enough to cover all the elements of Strategy (in business), and on the other, to express them on one or several Wardley Maps. Relevant books and articles furnish us with materials. What’s left, for us learners, is to map them. Laying aside how true they are and to what degree, these materials become common ground for those learning. Imagine a book club, with the added twist that the selected book is mapped, and the subsequent discussions revolve around the maps produced.
Gerstner’s book/memoir furnishes me with such materials at a large enough scope – that of a big, multinational corporation – and with an acknowledgment of the role luck plays in succeeding.
Assumed knowledge and how I’ll quote
Before I proceed to the parts of the Strategy Cycle, I hope you’ve already read Gerstner’s book; otherwise, I might spoil it for you. I’ll take it apart (figuratively speaking) and place chapters/sections where I think they fit on the Wardley Map and the Doctrine cheatsheet, without much regard to their sequential order in the book. This is a poor man’s version of Boyd’s “analysis” and “synthesis,” which he taught through a mental exercise that ends up with snowmobiles. I’m hoping this ends up being useful, however small in degree.
Secondly, to keep the article as short as possible, I’ll assume that you’re already familiar to some extent with Simon Wardley, Wardley Maps, and the corresponding terms and symbols (see Figures 60 and 61 in Chapter 6).
Thirdly, using Gerstner’s words to illustrate Wardley Maps requires quoting from him extensively. E.g., to illustrate the point of “removing bias and duplication” within the “Development” category of Doctrine, I’d show the current state with the appropriate quote (in RED), followed by the decisions reached and actions taken, and finally how this point looked afterwards (in ORANGE). Limiting myself to only those descriptions of the current state, here is what he says about “duplication” on page 42:
I returned home with a healthy appreciation of what I had been warned to expect: powerful geographic fiefdoms with duplicate infrastructure in each country. (Of the 90,000 EMEA employees, 23,000 were in support functions!)
Then again on page 64 (note he uses “division” instead of “geography” – the difference is huge especially in the context of a global company):
Today (circa 2001) IBM has one Chief Information Officer. Back then we had, by actual count, 128 people with CIO in their titles—all of them managing their own local systems architectures and funding home-grown applications. . . The result was the business equivalent of the railroad systems of the nineteenth century—different tracks, different gauges, different specifications for the rolling stock. If we had a financial issue that required the cooperation of several business units to resolve, we had no common way of talking about it because we were maintaining 266 different general ledger systems. At one time our HR systems were so rigid that you actually had to be fired by one division to be employed by another.
There are 15 more page numbers (in different parts of the book) that correspond to the different points of Wardley’s Doctrine and show us the situation at the time (what I’m referring to as the “current state”). And that’s just the first part – the current state. There are many other passages on the decisions and actions he took, and on the corresponding results. To reproduce all of that here would definitely overstep the “fair use” policy of copyright in books. Unless one of you knows him and can ask permission from him – after all, it’s for educational purposes 🙂
Therefore, I’ll state the page numbers in the relevant sections, which should help you find your way. As seen above, his descriptions are excellent. I know it’s cumbersome to read an article on the one hand, and on the other, to look up pages in another book. Nevertheless, I’d still recommend it. Who knows, you might find even more that I’ve probably missed. I’ll restrict myself to quoting where it matters to make an impression on the mind.
Looping around the Strategy Cycle
I’m referring to Wardley’s Strategy Cycle below:
In this book there are, it seems to me, several loops around it:
The first loop consists of chapters 3 to 7.
The second loop consists of chapters 8 to 10.
Another is in Part II of the book.
Yet another is Part III of the book.
Other parts of the book contain more iterations around the Strategy Cycle. For this article, I’ll start with the first loop.
First map and corresponding Doctrine
The map below shows the current state of IBM – recall it’s in the mid-1990s. I’ve kept it simple. I’ve placed the map of Wardley Maps on top of, or next to, what I’d represent as IBM’s map.
Because of this overlay, Point 1 in Figure 2 below shows what has an effect on Doctrine – the internal and external processes – and, most important of all, the messaging: whether it’s something important to the CEO, the senior leadership, and the company. If restricted to the CEO, this shows itself in what he says, the decisions he makes, and the actions he takes.
An example of this is when Gerstner, having decided to stop milking the Mainframe, re-invests in it in order to lower its prices – good for the customer; risky for a company trying to improve cashflow and remain profitable.
I’ve added the component of “Messaging and Communication” because of Gerstner’s estimation of it during transformational efforts, namely:
The sine qua non of any successful corporate transformation is public acknowledgment of the existence of a crisis. If employees do not believe a crisis exists, they will not make the sacrifices that are necessary to change. Nobody likes change. Whether you are a senior executive or an entry-level employee, change represents uncertainty and, potentially, pain. (p. 77)
The importance of this “Messaging and Communication” component is felt today in other companies. Consider the effect that Jeff Bezos’ letters have had on Doctrine at Amazon, or the effect of Warren Buffett’s annual letters at Berkshire Hathaway.
Point 2 in Figure 2 above shows what happened to cause the company to lose market share and money. As a result, most of the components have the characteristics and properties found in the “Custom” phase.
Point 3 shows that a lot of uncertainty surrounded the three major components, but not “Moral Imperative.” This uncertainty made these components risks: there was no guarantee, as Gerstner explains, that the company would succeed in stopping the bleeding, let alone be profitable. Nevertheless, the “Moral Imperative” was felt keenly and strongly – by the board members, by Gerstner, by the senior leadership team, and by many of the employees.
I’m taking liberties with where I put Doctrine. When I see it in the “Genesis” and “Custom” phases, I interpret it to mean that the Cheatsheet will contain lots of red. As we move through the stages of Evolution, it becomes orange, then green. As I mentioned earlier, I’ve added the page numbers in the relevant boxes, which should help you find your way.
Add Decisions and Actions to the map
The map below shows Gerstner’s decisions and the affected components. This corresponds to the “Decide and Act” part of the Strategy Cycle.
Besides the two decisions that he made early on — keeping the company together instead of splitting it and repositioning the mainframe — Gerstner introduces three initiatives that are shown in the map below.
Keep in mind that I’ve oversimplified the components “External facing” and “internal facing.” What I quoted at the beginning of this article is one of the things that Gerstner says about them.
He further added that:
Reengineering is difficult, boring, and painful. One of my senior executives at the time said: “Reengineering is like starting a fire on your head and putting it out with a hammer.” (p.64)
Areas of Doctrine affected by Initiatives
These initiatives affected these areas of Doctrine. I’ve kept them colour-coded with the parts of the map.
Initiatives change the map
These initiatives, running in parallel, took many years to complete. Nevertheless, even after one year, there was much improvement. Gerstner summarises some of it (see pp. 65-66):
By addressing some of the obvious excesses, we had already cut $2.8 billion from our expenses that year alone. Beyond the obvious, however, the overall task was enormous and daunting.
The map below shows how the components have moved. One point to note is that the dotted red arrows start from where a component was before the initiatives started.
He goes on to tell us:
From 1994 to 1998, the total savings from these reengineering projects was $9.5 billion. Since the reengineering work began, we’ve achieved more than $14 billion in overall savings.
Since he doesn’t mention the time period, I’m assuming approximately 8 years — from 1993 to 2001 — for the improvement in the Doctrine area of “Optimise Flow” (in the “Operation” category).
Hardware development was reduced from four years to an average of sixteen months—and for some products, it’s far faster. We improved on-time product delivery rates from 30 percent in 1995 to 95 percent in 2001; reduced inventory carrying costs by $80 million, write-offs by $600 million, delivery costs by $270 million; and avoided materials costs of close to $15 billion.
Doctrine is also changing
Some areas of Doctrine are no longer “red” but “amber.” They’re not “green” because the Doctrine component is not yet in the Utility phase and there are also more iterations to come around the Strategy Cycle.
Preparations for the second loop
We’re now in a position to start looping around the Strategy Cycle a second time. Describing it all might be too much for this single article. If you, dear reader, have read thus far, I’ll leave you with the starting point of the next iteration in the map below, where the node “previous Wardley Map” encapsulates the aforementioned maps.
I’ll admit that reading this book several times in order to follow the threads of each point of doctrine, pattern, and gameplay has been wearying at times. For repetitious readings, like the repeated use of a sharp knife, often blunt the impact of these impressions on my mind.
On the other hand, by the same repetitious process, I engrave again on my mind those traces that are bound to easily fade.
 – Page numbers are based on the edition, “HarperCollins e-books. Kindle Edition”. Complete title is “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change” by Gerstner Jr., Louis V.
What I’ve found helpful in maintaining my ardour as I get to grips with complex topics has been a Wardley Map of a “Cognitive Hierarchy,” a hierarchy I came across a while ago. Even though it’s explained in the context of war, I’ve found it useful when also applied to my studies. Perhaps it might do the same for yours.
Many subjects that I’d like to get into require time from me to understand, even if I limit myself to Hamerton’s definition of “soundness,” which is below:
The best time-savers are the love of soundness in all we learn or do, and a cheerful acceptance of inevitable limitations. There is a certain point of proficiency at which an acquisition begins to be of use, and unless we have the time and resolution necessary to reach that point, our labor is as completely thrown away as that of a mechanic who began to make an engine but never finished it. . . .
Now the time spent on these unsound accomplishments has been in great measure wasted, not quite absolutely wasted, since the mere labor of trying to learn has been a discipline for the mind, but wasted so far as the accomplishments themselves are concerned. . . .
I should define each kind of knowledge as an organic whole and soundness as the complete possession of all the essential parts. For example, soundness in violin-playing consists in being able to play the notes in all the positions, in tune, and with a pure intonation, whatever may be the degree of rapidity indicated by the musical composer. . . .
A man may be a sound botanist without knowing a very great number of plants, and the elements of sound botanical knowledge may be printed in a portable volume. . . .
Suppose, for example, that the student said to himself “I desire to know the flora of the valley I live in,” and then set to work systematically to make a herbarium illustrating that flora, it is probable that his labor would be more thorough, his temper more watchful and hopeful, than if he set himself to the boundless task of the illimitable flora of the world. . . .
Lastly, it is a deplorable waste of time to leave fortresses untaken in our rear. Whatever has to be mastered ought to be mastered so thoroughly that we shall not have to come back to it when we ought to be carrying the war far into the enemy’s country. But to study on this sound principle, we require not to be hurried. And this is why, to a sincere student, all external pressure, whether of examiners, or poverty, or business engagements, which causes him to leave work behind him which was not done as it ought to have been done, is so grievously, so intolerably vexatious.
Since some of these topics are not necessarily related to my job, it means spending some of my spare time on them. Suppose that acquiring soundness in topic ‘X’ requires 40 hours; if I have 2 hours per day that are free from interruptions, I would need 20 days (almost 3 weeks) for such an acquisition. This also assumes that I’m pursuing only that one topic. I leave it to you to imagine what happens when tackling other topics at the same time, varying them to break the monotony. After the initial enthusiasm wanes, endurance is called for, so as to maintain the consistency of working on it daily; or, as John Foster once called it, “this indefatigable patience of exertion.”
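The back-of-the-envelope arithmetic above can be sketched as a small calculation (the 40-hour figure and 2-hour daily budget are the illustrative numbers from the text, not measurements):

```python
def days_needed(total_hours: float, free_hours_per_day: float) -> float:
    """Days required to reach 'soundness' in a topic, given daily free time."""
    return total_hours / free_hours_per_day

# The example from the text: 40 hours of study at 2 uninterrupted hours per day.
days = days_needed(40, 2)
print(days)        # 20.0 days
print(days / 7)    # just under 3 weeks
```

Doubling the number of topics pursued in parallel halves the daily budget per topic and doubles the calendar time, which is exactly where the monotony-versus-endurance trade-off bites.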
For that, having an assurance that it’s worth it for me is encouraging. This is where having maps that I can frequently review maintains my ardour. They also help me decide whether to pursue a particular subject, and, most importantly, to what extent. After all, “the time given to the study of one thing is withdrawn from the study of another, and the hours of the day are limited alike for all of us.”
What would the map of an individual (in this case, me) look like? Some aspects are applicable to groups/teams, but I wanted to keep the scope as narrow as possible. Because it’s for an individual, most components are in the “Custom” phase; e.g., the person has to perform the analysis themselves — it’s not something that can be delegated/outsourced.
1. The first user need is to understand a topic, especially as it helps in arriving at a decision (perhaps “to decide” should be the top-level user need), in seeing patterns, or in anticipating the consequences of others’ (or my) actions. To decide, to see patterns, to anticipate consequences — all these are applicable in many contexts — at work, at home, and in the community. If a topic does not help me with these, I often lay it aside. I’m excluding those that are for amusement — though even with these, it’s almost an impossibility for them not to touch those three points.
2. Having selected, and committed to, such a topic, there’s the need to apply “judgement,” which is not only knowing that something is but why it is so. Based on one’s experience, expertise, and intuition, one also develops principles (or the inner workings) that can explain what’s going on. Yet where such experience or intuition is lacking, it needs to be built up; one cannot apply the judgement one does not possess. Hence the need for the next component: if I practise applying these cognitive functions to the processed data, I’ll be able to construct a mental model/picture of the current situation/environment and make decisions that anticipate what others might do, owing to the principles/laws/patterns that caused the situation in the first place.
3. Once we have various processed data that have been evaluated (either by oneself or by others), there’s the need to apply those “cognitive functions,” to harmonise them. If applied to studies, “comparing text with text,” as Sertillanges encourages us, “making the different sources of information complete, illustrating one with the other, and draw[ing] up your own article.” As he concludes, he reassures us, “It is an excellent gymnastic, which will give your mind flexibility, vigor, precision, breadth, hatred of sophistry and of inexactitude, and at the same time insure you a progressively increasing store of notions that will be clear, deep, consecutive, always linked up with their first principles and forming by their interadaptation a sound synthesis. ”  Such links to first principles lend themselves to drawing Wardley Maps 🙂
That’s where the difficulty manifests itself — where does one begin, and end, with the many resources (print, video, audio, etc.) on a topic? Which of them to gather? How many of them can one get through, knowing that each differs in style, breadth, and depth? How to bring them to terms, to find and state their propositions, their arguments, and their solutions, if any? Mortimer Adler and Charles Van Doren help us here. This calls, once more, for that “indefatigable patience of exertion.”
There’s also that feeling that John Foster so aptly describes:
Is it, then, in the first place, that a man can instantly place himself among the subjects of knowledge, and begin to take possession, without the cost of any tedious forms of introduction? No; he must consume in all a number of years in the acquisition of mere signs; in the irksome study of terms, languages, and dry elementary arrangements. Is it, that having thus fairly arrived within the boundary of the ample and diversified scene, he is certain to take a direction toward the richest part of it, and with the best guides? He may happen to be led by some casual circumstance, or to be attracted by some delusive appearance, to a department where his mind will exhaust its strength in endless toils, to reap nothing but a few vain and pernicious dogmas. He may be as if Adam, when “the world was all before him where to choose,” had been deserted by “Providence his guide,” and beguiled to wander into what is now Siberia.
Or if a man in quest of knowledge should have directed his view to a more valuable class of subjects, he may waste a great deal of labour and time, and be often tempted to renounce his purpose in disgust, through an unfortunate selection of instructors and guides.
Hence the reason to be severe with those who profess to instruct; and the necessity of critically reviewing each work, such that the criticism serves as a signpost to encourage other travellers to the helpful, or as a warning sign advising them not to approach. This is where Simon Wardley’s “Tomb of Tomes” would come in handy 🙂
4. For that sifting to occur, these have to be evaluated based on some criteria — e.g., their importance, relevance, and reliability. But then the question becomes: “important” to whom? What’s important to me may not be important to you. Moreover, even if I limit this check to myself, what’s important to me now may not be what’s important to me in a few months’ or years’ time. Because I have to apply these criteria frequently, I’ve left this activity in the “Custom” phase.
5. At this point, only one book is under consideration. For it to have been produced required someone to apply the processing functions to raw data.
What limits does such a map have? Firstly, this map is for a cognitive hierarchy. It’s generic: it’s not specific to any subject, nor to any industry/company/team. In order to apply it to something specific, e.g., to learn about AWS or Azure or Bash shell scripting, you’d need to map your industry/company/team to see where such knowledge sits in the phases of evolution. This, in turn, determines how much to invest and what the expected return is. Next is to overlay one map on the other. If the lower components of the chain are seen as commodities in their respective markets/industries, then you can focus on steps 1 and 2 only.
Secondly, this map doesn’t show any higher-order activities that result from, and build on, more commoditised components.
How does such a map help me? Firstly, because of the constraint of time, I’d prefer to only perform the activities in steps 1 and 2. But, lacking those, I have no choice but to continue descending the value chain. For topics that can be traced back to a few excellent books, I skip (or rather, defer) tackling the many books — the second pipeline — and focus on the books that expressed the initial idea. Sometimes, I find it necessary to descend lower, and apply the processing functions to the author’s raw data, if appropriate/applicable. This is one reason I read, with keen interest, the bibliography or references sections of books and articles. And why I, too, include them in what I write.
Secondly, I’ve noticed that the further down the chain I go, the more time I need to ascend again. I may be on step 5 for months (by the 2-hours-per-day metric), making some progress, but still at the bottom of the chain. Looking at the user need repeatedly refreshes and renews my vision of the goal and sustains me in my pursuit.
This, I’ve found to be a pleasant side-effect.
 – pp. 20–23 of “Naval Doctrine Publication (NDP) 6”
 – pp. 93-96 of “The Intellectual Life” by Philip Gilbert Hamerton
 – p. 113 of “The Intellectual Life: Its Spirit, Conditions, Methods” by A.G. Sertillanges, translated from the French by Mary Ryan, 1987 edition, reprinted in 1998
 – pp. 114–136 of “How to read a Book” by Mortimer J. Adler and Charles Van Doren
 – p. 118 of “The Improvement of Time” by John Foster, edited by J. E. Ryland (1863) — Google Book
Just as it’s now the norm for applications and systems to run on public cloud infrastructure and platforms, perhaps so too should it become the norm for Integrated Development Environments (IDEs) and the corresponding configuration — those tools that help us write software applications — to reduce the yak-shaving.
I recently got a new Samsung phone and didn’t have to reinstall all my apps. I signed in, was asked to confirm my previous handset, and all my apps, with their settings, were automatically retrieved and set up. Previously, the process involved remembering all the apps and their settings (especially security and privacy settings), downloading them, changing their settings, opening each one of them to check, and then using the app.
Is such a convenient “user experience” too much to ask for a developer’s workspace? Do we still need to set up our developer environment/workspace again and again each time we change laptops, teams, or projects? Is there a way to have what Ryan Boyd, in the context of creating a sandbox for the Neo4J graph database, so aptly phrased as “fast time to first line of code”?
To explore this, I’ll use a Wardley Map — something I’m learning from Simon Wardley and the community. A map, once created and discussed among the relevant people, helps one gain awareness of one’s current environment (situational awareness), which gives direction to actions that lead to serving users’ needs effectively and efficiently.
A word on terminology: writing software requires more than an IDE. Besides downloading it, one also needs to configure it, to obtain the project/program files to work on, and other settings/configurations that make it possible to run the program under development. I’ll refer to all these components as a “developer/development workspace”, just as Codenvy uses the term. It differs, naturally, from what the term means in the context of an Eclipse IDE or in the context of “Amazon WorkSpaces.”
Users, User Needs, and User Journeys
Users and their needs, being the anchor of a Wardley Map, will be my starting point. Who uses IDEs, and what do IDEs help them achieve? So far, I can think of 5 categories of users and the corresponding “transactions they’re likely to have with the program/application” (by application, I mean any piece of code that’s made available — from simple programs to learn from, such as those in books, to big applications).
The categories are:
Category A — the developer(s) writing the application and making it available;
Category B — those who might contribute to it, i.e., help fix open issues on it or extend it;
Category C — those who’d like to play around, experiment, with it;
Category D — those who’d like to use the application but have no interest in looking at the code — I’ll not go into much detail as far as these are concerned. However, this is, in most cases, why the application is built in the first place;
Category E — those who, having found that it solves a problem partially, want to incorporate it into their own application as a 3rd party library dependency.
Different users have different needs, which I’ve grouped into “primary” and “secondary” in Table 1 below.
Users’ needs and their relations
The “secondary” needs are the ones I’ll be focusing on. Figure 1, drawn with the Atlas Wardley Mapping Tool, shows the relationships between the “secondary” needs.
A few notes on this map:
The different users, depending on what they’re working on, can be found in any of the 4 stages of evolution. I put them in “Product” because it’s the most likely.
Depending on the idea, or the programming language, “Programming an Idea” can take place in any of the 4 phases of evolution. Ideally, I’d represent this as a pipeline.
A convention for naming needs that’s worked for me so far is to add the past tense suffix if they are required by other components. E.g., from the Programmer’s perspective, the need is to “program an idea;” but when this need is required by “Play/Experiment,” I read it as: “Play/Experiment” needs “Programmed Idea.”
I’ll focus on 3 users (“Programmer”, “Contributor”, and “Experimenter”) and on 3 needs (“Program(med) Idea,” “Enable(d) Contributions,” and “Play/Experiment”) because these users have to set up their developer workspace for the program. One of the underlying components all three need is “Developer Workspaces.” Figure 2 shows this.
Starting from the top of Figure 2, all three users need a workspace, and all three, in order to fulfill their needs, have to prepare their own. Hence I’m placing the component slightly past the custom phase.
In a non-job context (e.g., at home), a developer sets up his own local development workspace. I, like many others, do this several times: every time I have to replace my laptop, or when trying to build and run different projects on my laptop.
The setup instructions vary with how complex these projects are. As a developer, excited to get, build, and run the code, I read as accurately as possible and follow each step thoroughly. Sometimes all steps work the first time (always an unexpected but delightful surprise); most times they don’t (maybe a specific setting is required on the operating system, etc.), which leads to attempts to undo every change in order to start afresh on a clean canvas, until it works.
To give up one project and start on another means undoing all the previous changes, but the effort is sometimes not worth it. And so they stay; configuration for project upon project keeps piling up.
For a new project, the setup process is similar. After a few projects and repetitions, a simpler way is always welcomed.
From the project’s perspective (supplier/provider), any users interested in running the project’s application code will also go through the same process, unless it’s been somewhat automated/created for them (e.g., as a Virtual Machine). From the user’s (demand) perspective, for each different project our developer/experimenter/contributor is interested in, they’ll have to go through this again and again because each project is likely to be on a different technology stack that relies on different underlying components and versions.
For users, this on-boarding experience is slooow. Doing it for different projects turns the initial delight into tedium. But then, after several repetitions, we’ve become so used to it that we’re numb to it.
On the job, the process is similar but the scope is much wider. Depending on the technical hats I wear (developer, team lead, technology architect), I’m both a user and a provider/supplier.
As a developer, I use what’s available on the project — if it’s automated, great. Otherwise, do it manually & automate it over time.
As one responsible for a team, one component I’m responsible for is the on-boarding experience of new team members — to make it fast and smooth — in terms of tools, access, etc. The worst case is for every new team member to go through the process already described above.
As a technology architect [one definition of the scope of their differing responsibilities is on the company website], one of the activities on joining a project is, if not already done, to take care of (describe, specify, define the upgrade path of some components, build) the developers’ workspace in the context of build, deploy, release, and operational processes (including tools and the underlying infrastructure), beginning with the project files in source control, onto a developer’s machine, all the way to the different development, test, staging, and ultimately production environments.
If I could somehow provide that workspace so that other developers didn’t need to install anything but use existing components (e.g., their browser), then on-boarding would be much faster; and keeping the components standardised, up to date, secure, would simplify this step of the developers’ workflow.
Fast time to first line of code
This is what Eclipse Che, and its SaaS version — Codenvy (which Red Hat acquired in May 2017) — make possible. Figure 3 shows a simplified value chain.
To start working on a project’s codebase, I now need only a browser (or the desktop client). The URL to the workspace is provided by the project (the provider/supplier).
“You might find that you’re forced to treat the operating system as more of a product than a commodity because some essential business application [in our case, the build tools and runtime environments] is tightly coupled to the operating system. By understanding and breaking this link, such as forcing the application into a browser, you can often treat a wide number of other components as a commodity.”
As far as I know (and I’m happy to be corrected), other “Cloud IDEs” — Microsoft’s VSCode, AWS’ Cloud9 (which AWS acquired in July 2016)— don’t deal with this problem of developer workspaces. But they do solve the problem of working on the desktop and the browser.
So, once User A (developer/programmer) has created a workspace for a project and made it available via a URL, the other users, User B (contributor) and User C (experimenter), can have a very “fast time to first line of code,” which removes the friction from meeting their primary needs (job to be done, leisure, or learning).
One way to relax, after a major project release, or as near to it as to allow a deep breath, is to spend some time with others setting up, testing, and working on something at a Hackday. “What?” I hear someone exclaim, “you’ve just built software for a release; it’s exhausted you; and now you still want to work on software, though of another kind, and you find that relaxing? Come now, surely, you jest!”
Before responding to this, here’s an outline of this post:
The openjdk hackday and what it consisted of. The hackday happens once every month.
How to understand, or at least navigate, a complex codebase; and how this applies to complicated legacy codebases.
What’s involved in building and testing the openjdk; and thoughts on keeping up with the different versions of Java.
Like I said, it’s one of the ways to relax. Secondly, not all activities exhaust you in the same way nor to the same degree: he who has spent the week in Spreadsheets might find it relaxing to work in PowerPoint; he who works indoors might relax outdoors, and vice-versa; he whose mind has been wrestling with Prose, in its rhythmic irregularities, soothes himself in rhythmic Poetry. In all these examples, and in many others that occur to you, effort is expended yet the activities vary; each exercises different mental faculties, and breaks off that weariness of the monotonous exertion of the same faculties. Hence the recommendation to so arrange our activities that, being neither idle nor worn out, we find a certain rest in them.
Before going to the Hackday, I read the instructions, followed numerous links: one link to get the software I needed, then attracted by another, and then another, and there I stayed tangled in the web. One effect was that feeling of being overwhelmed by the information I came across, and the proficiency that would be expected, and my impatience to start contributing; I had forgotten that we all have to start somewhere. Even on the day, I was hesitant: to go or not to go. But, having committed, I went. I also realised that it’s better to download all the reading materials and the required software in case there’s no Wi-Fi.
Overlooking one part of London
At the hackday were several people, all with different experiences, all with varying degrees of proficiency; yet, all eager to learn, willing to be taught, and willing to teach. To this, you can add, as an auxiliary, the venue. The hackday was at the Salesforce office, on the 26th floor of the new Heron building.
After having breakfast, talking with the other attendees, connecting to the Wi-Fi, the host gathers us, explains the ground rules, outlines the main ideas on the agenda, and opens it up to the attendees to suggest more ideas that they’d like to work on. Most of us attending had a preference for what to work on: some wanted to work on openjdk, some on Scala, others on Play, some on Go, some on Clojure.
Brainstorming over food
The agenda had two pre-planned items:
Getting started on, and improving, the openjdk. The openjdk has been the Reference Implementation (RI) of the Java programming language since version 7.
Working on betterrev – a wrapper, written in Play, for the JDK build system. More details are here
The floor was then opened to the attendees to speak on what they would like to work on, and also what they’re working on.
One of the attendees (Norbert Radyk ) was working on a Scala wrapper for the Apache Commons POI library (a Java API for Microsoft Documents). Those who had come for Scala had something to work on; Norbert had helpers; and I think that explaining his project to strangers and seeing them productive was a good preparation for his talk at the London Scala User Group (video)
Another was working on Hugo, a static blog generator written in Go. I was interested, but I couldn’t be in two places at the same time: I’d like to do away with setting up WordPress and a database, and return to using Emacs orgmode for my notes.
Finding your group – Pairing
Now that we know who’s working on what, it becomes easy to find the group we’d like to work with. Having found the openjdk group, each of us explained where we were on the learning curve: some were having issues with the virtual machines, some with getting around Linux, some with understanding the relationships between the main JDK components. Some were further ahead – they had attended other sessions previously. We coalesced around Mani, around each other, asked for help, received help, and began experimenting. Having the prepackaged virtual machines helped much.
We followed two documents of instructions to get started:
One of the attendees had already downloaded the 2 virtual machines that were created for developing openjdk. The links to the virtual machines are kept up-to-date in the beginner’s guide. I was working on the virtual machine that had the jdk sources in Eclipse, not the one that uses IntelliJ for its IDE. Retrieving the sources of jdk9 and building it consumed most of the virtual machine’s hard disk space. Because of this, I skipped the section on building jdk9 and focused on those instructions which apply to jdk8 – I had to make the most of the day. Expanding the size of the virtual machine’s hard disk was something I could do later.
Level 1 directory hierarchy
Two questions occurred to me. The first: how do you grasp the different components that make up the JDK, and how quickly can you do that? How are they structured? What’s the biggest component? One way to see the structure of the jdk components is to use the UNIX tree command. The one below shows the tree structure of the first level hierarchy of the openjdk.
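As a cross-platform alternative to `tree`, the same level-1 view can be produced with a few lines of java.nio. A sketch, with the source root passed in as an argument; the class name and default path are mine, not part of the jdk tooling:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TreeLevelOne {
    // List the immediate subdirectories of a root, sorted by name:
    // the same view `tree -L 1 -d` gives on the command line.
    static List<String> levelOne(Path root) throws IOException {
        try (Stream<Path> entries = Files.list(root)) {
            return entries.filter(Files::isDirectory)
                          .map(p -> p.getFileName().toString())
                          .sorted()
                          .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // "." stands in for the openjdk source root; pass your own path.
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        levelOne(root).forEach(System.out::println);
    }
}
```

Run against an openjdk checkout, this prints the top-level component directories (hotspot, jdk, langtools, and so on).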
But it’s much easier to visualise these components, their size, and their complexity using SonarQube. Most tools used to build and test the jdk predate Maven, Hudson/Jenkins, and SonarQube; hence some jdk components will not appear in SonarQube’s reports.
For a codebase that we’re new to, these tools are a quick way of understanding its structure and complexity. On any new project, I always ask for documentation of the application. Not getting it, or finding it non-existent, I would mentally add a percentage to the estimate I had in mind. Now, tools like SonarQube, if we’re allowed to use them, reduce the time needed to build up knowledge of an application.
The second was: the virtual machine we were working on had three versions of Java, and we were installing a fourth. So how are these different versions used in building and testing a new version?
Using several jdk versions, and keeping up
What I found interesting was how the different versions of Java are needed to build and test new versions. At one of the LJC sessions on the new version of the JMS API, much was said about how it uses the new features of Java 6 and 7, and I never understood why it was so or why it had to be. Why should new versions of the Java language necessitate a re-write, or at least the perception of one, of the components that make up the Java EE platform? Doesn’t this seem like a waste of time and effort?
Seeing how new versions of the Java language are built answered this question that has been lingering within me. You currently have Java 7; how would you build Java 8? Isn’t it by using Java 7:
Using Java 7, write the new feature that you need for Java 8 – (I’m using Java as the umbrella term for both the jdk and the jre).
Test that Java 7 compiles your new Java 8 feature. Also ensure that the test you’ve written passes in Java 7.
Write a program to use the new Java 8 feature. Compile and run it using Java 8. Ensure that the test you’ve written in Java 8 for your program passes.
Now you can go back to Java 7, and mark your new feature as complete.
This process is also what the jdk regression test harness (jtreg) allows us to test. There are more complex tests, but that’s for another time.
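For a flavour of what such a test looks like: jtreg tests are ordinary Java classes with a special comment header, whose main method throws to signal failure. A minimal sketch; the class name and the feature exercised (String.join, new in Java 8) are illustrative choices of mine:

```java
/*
 * @test
 * @summary String.join was new in Java 8: the boot JDK compiles the
 *          test sources, but the test exercises the new feature.
 * @run main StringJoinTest
 */
public class StringJoinTest {
    public static void main(String[] args) {
        String joined = String.join("-", "a", "b", "c");
        // jtreg counts the test as passed when main returns normally,
        // and as failed when it throws.
        if (!joined.equals("a-b-c")) {
            throw new RuntimeException("expected a-b-c, got " + joined);
        }
        System.out.println("passed");
    }
}
```

The same class runs as a plain Java program, which is part of what makes the jtreg format approachable for newcomers.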
Extend this process to Java EE: it stands to reason that as new versions of the Java language are being created, the relevant components in Java EE will also need to be written using the features of the new Java version.
What, then, does this mean for us who work with the different versions of both Java the language and Java EE? Can’t I just learn one version thoroughly and not trouble myself with these “new” features? The answer, as always, depends on what our work entails, and on what we can ignore, or pass by, without being culpable of incompetence in our work. I like how Cobbett uses the word ignorance, restricting it to the relation between all knowledge and that proportion of it that we’re expected to know:
Ignorance consists in a want [lack] of knowledge of those things which your calling or state of life naturally supposes you to understand. A ploughman is not an ignorant man because he does not know how to read; if he knows how to plough. 
Hammerton goes further. He shows how we’re qualified for our calling, or profession, partly by our knowledge and partly by our ignorance since “everything we learn affects the whole character of the mind.” I would re-phrase him thus: we call a small proportion of knowledge ignorance, whereas we call a larger proportion of it science. “This larger quantity,” he continues, “is recommended as an unquestionable good, but the goodness of it is entirely dependent on the mental product that we want.”
Moreover, I think we’ll also save time, even in learning these new features, if we know what problems these versions and platforms are attempting to solve. I think we know, or are learning, about them; and we should frequently bring them to mind again as new versions and technologies are released, and weigh them. Consider Java EE – one central problem it attempts to solve is distributed computing: how do I, as a computer, communicate with another computer existing on some other network to get some work done? If that computer uses Java objects, how do I talk to it? Java EE provides Remote Method Invocation (RMI) or EJBs. What if that computer doesn’t use Java? If it uses HTTP, try Web Services over HTTP. But that requires me to wait for a response before I can continue with my work. I don’t want to wait; I have other things I could be doing while the other computer is processing my request. OK, use Messaging, which is in Java EE. As these initial solutions were improved, newer versions came out. At the same time, other problems were becoming apparent and were looked at, such as: how do we make it easier for developers to use what we’re providing? How do we ensure consistency between those implementing these services?
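In Java EE, the “don’t wait” option is JMS. Stripped of brokers and transports, the decoupling it buys can be sketched in-process, with a stdlib BlockingQueue standing in for the destination; all the names here are mine, not part of any Java EE API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagingSketch {
    public static void main(String[] args) throws Exception {
        // The queue stands in for a JMS destination: producer and
        // consumer never wait on each other directly.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Consumer: the "other computer", processing requests as they arrive.
        ExecutorService consumer = Executors.newSingleThreadExecutor();
        Future<String> reply = consumer.submit(() -> {
            String request = queue.take();   // blocks until a message arrives
            return "processed: " + request;
        });

        // Producer: hand off the request, then get on with other work.
        queue.put("order-42");
        System.out.println("request sent; doing other work...");

        System.out.println(reply.get());     // collect the result when ready
        consumer.shutdown();
    }
}
```

The producer returns from `put` immediately; only when it actually needs the answer does it block on `reply.get()`, which is the essence of the asynchrony messaging provides.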
Knowing the problem, and recognising it as a problem, gives us a standard which allows us to consider the extent to which the solution addresses the problem, to compare the different solutions, and to determine how just and solid are the arguments in their favour. Then, perhaps an order of these solutions is established in our minds, ordered by how they address the problem – either addressing part of the problem or addressing the whole problem. That standard also becomes the criteria we use to classify these solutions. And each subsequent version or new technology can find its place in our mental catalogue. How do we go about learning about these problems and solutions?
Reading blogs – how many would you need to read, notwithstanding the weak reasoning in some, or the repeated and diluted information in others?
How about writing sample programs – how many would you have to write to determine the correct behaviour for all scenarios? How would you know all the scenarios that are likely to occur?
Reading about the different frameworks, and probably their source code – how many to go through?
What about StackOverflow? How about mailing lists? Or even books?
Of course, each has its place. But for a comprehensive view which, like a map of a city, shows you the major towns, their relation to each other, in terms of distance and size, and leaves you free to decide how to get from one town to the other, there’s no better place to start than by reading the Specifications – those of the Java language itself and those of Java EE, the Java Specification Requests (JSR). What I quoted from Hammerton still applies: “the goodness of it is entirely dependent on the mental product that we want.” The best expression on comprehensive views that I could find was Cardinal Newman’s:
. . . we cannot gain real knowledge on a level ; we must generalize, we must reduce to method, we must have a grasp of principles, and group and shape our acquisitions by means of them. It matters not whether our field of operation be wide or limited ; in every case, to command it, is to mount above it. Who has not felt the irritation of mind and impatience created by a deep, rich country, visited for the first time, with winding lanes, and high hedges, and green steeps, and tangled woods, and every thing smiling indeed, but in a maze? The same feeling comes upon us in a strange city, when we have no map of its streets. Hence you hear of practised travellers, when they first come into a place, mounting some high hill or church tower, by way of reconnoitring its neighbourhood. In like manner, you must be above your knowledge, not under it, or it will oppress you ; and the more you have of it, the greater will be the load. . . Instances abound; there are authors who are as pointless as they are inexhaustible in their literary resources. They measure knowledge by bulk, without symmetry, without design. 
Here are some high hills from which to reconnoitre the Java EE neighbourhood: JEE 5 (JSR-244) ; JEE 6 (JSR-316); JEE 7 (JSR-342). Page 6 of each of those JSRs has a diagram showing what the new version contributes to the old one, and the context of the subsequent JSRs; just as the Executive Summary of Annual Reports describes the business and the environment in which it operates.
William Cobbett, (1829), Advice to Young Men, and (Incidentally) to Young Women, in the Middle and Higher Ranks of Life, p. 119. Online pdf.
Philip Gilbert Hammerton, (1873), The Intellectual Life, p. 52
John Henry Cardinal Newman, (1886), The Idea of a University, 6th ed., pp. 139-140. Online PDF.
At times, what we write ends up being longer than we thought, and we’re naturally uneasy that the length of it will dissuade our fellow readers from proceeding. In my previous entries is one sentence that I had started in order to justify the length of that article: that while considering the reader’s time and attention, it was in proportion to the subject-matter. But, in hastening to describe the event to you, I left the thought there incomplete. Now, adding a little to that thought here, is what this entry is about.
When we come across long articles, one of our initial reactions is reaching for that common, though inaccurate, saying, that “a picture is worth a thousand words,” placing it next to the article, and wondering whether it imperceptibly escaped the writer, while we ourselves are subjected to thousands of words when (or so we think), for the same purpose, a couple of pictures would have been just as adequate.
All of which could be true; yet, keeping articles short is often not possible, at least of those that I’ve tried to write. These have been on events whose content I thought worthwhile to transcribe. I am fully conscious of how we all are pressed for time, and how difficult it is to retain anyone’s attention for a long time; and how, even in those moments of relaxation or idleness, we prefer almost any thing that doesn’t tax our attention for long. But we do make time, and compel our attention, for those things we consider worthwhile.
It’s out of this concern, at least in part, that for those blog entries describing the events, I determined to make one or two statements to explain why the length is in proportion to the content. I may be going to more events, which means that, for subsequent blog entries, I’ll probably be repeating those statements. So, applying the DRY (Don’t Repeat Yourself) principle, I thought it convenient to have such an explanation in one place which I can refer to.
As for that common saying, “A picture is worth a thousand words,” wherever it’s appropriately applied, I would add “that describe it.” Consider this: here is a picture; is it worth any thousand words? Would it be worth the first thousand words in a dictionary? What about the last thousand words? I’m accounting for only the number of words, not the “picture,” or the “words” used, or the standard of “worthiness” appealed to.
Now, at these events, there are more things to describe than I have the inclination, and the ability to do so strikingly: there are those one-on-one conversations you have with others who are attending, either during the breaks or over drinks; there are those numerous hints from the expressions of people’s faces, their varying modulation of voice, and their actions which gradually rise from expressing nervousness to confidence; all of which would occupy a curious observer because of the probable lessons they contain.
Furthermore, on stage, all these are somewhat magnified. There, we see the earnestness of the speakers, which diverts us from the familiarity of the day’s routine; and which fixes our attention, at least for a while, on our temporary instructors and on their subject-matter. Stranger still, even those plain announcements take a transient hold on us.
To describe all these would take much more than a thousand words. So far, I, like some others, have limited these summaries to those talks that we’ve been to, and take it for granted that the contents of those talks are able, by themselves, to sustain your interest while reading about them in more than a thousand words.
As always, I’m happy to be corrected.
For others going to events, and writing summaries, I can’t wait to read them, however long they turn out to be.
A summary of this month’s JBoss Forum on Integration consisting of:
a description of the venue, speakers, and audience;
short summaries on what each speaker spoke about.
This short overview I have taken from my notes supported by my memory. I’ve verified the information that’s open to the public. Yet, inaccuracies and errors, if any, are my own.
At the entrance to the Stationer’s Hall
Yes, I couldn’t resist Red Hat’s red hat. I received one at the JBoss Enterprise Forum where Integration was the focus. This was on Thursday, 7th February 2013 at the Stationer’s Hall; a hall, well preserved from the 14th Century, and rich in history.
Though the hall’s history was a conversation starter, Integration was the theme. Red Hat had chosen its speakers well – addressing both the business and technical aspects of Red Hat and Integration. These were:
Werner Knoblich, VP and General Manager of Red Hat EMEA;
James Strachan, Senior Consultant, Software Engineering, Red Hat; (He created the Groovy programming language)
Rob Davies, Technical Director for Fuse Engineering, Red Hat; (One of the authors of the book, “ActiveMQ in Action”)
Steve Gaines, Head of Middleware, Red Hat UK and Ireland.
The audience was varied; varied in where they work – some in private companies, in institutions of education, in public sector organisations, some as industry analysts; varied in what they do, in their interests, concerns, and outlook; out of which Steve Gaines addressed three important ones bringing all the rest in harmony, as the different musical notes, when suitably combined, form a harmonious sound.
The start of the day was an indication that Red Hat had arranged the sessions optimally – they started with breakfast 🙂
Then onto the business overview, followed by the theoretical concepts of Integration and Messaging. After this was a short coffee break. Then James Strachan demonstrated Integration using Apache Camel with Fuse IDE. Next was Steve’s short and well-arranged talk, which led into the Question and Answer (Q & A) session.
Werner explained three important things:
That Open Source was not Red Hat’s business model. Rather, it was the way that Red Hat developed software;
Red Hat’s acquisition of FuseSource (link) gives Red Hat a stepping stone (and a place) into Enterprise Integration, which is to become part of their product suite centred around Integration.
Red Hat’s acquisition of Polymita (link) means that Red Hat is moving into BPM (Business Process Management).
In this way, Red Hat covers three areas: the front end with BPM, the middle with FuseSource, and the back end with the JBoss Application Server.
After the sessions with James and Rob, two questions required fuller explanations from the speakers, namely:
In JBoss, what is the recommended way to do Messaging to enable Integration without using standalone Messaging Providers?
What’s the relation between FuseSource Apache ActiveMQ and Apache Camel?
As to the first question, the background is this: in the JBoss Application Server (AS) 4, there was a JBossMQ component which you could use for messaging. Then in JBoss AS 5, 6, and 7, JBoss has a messaging component, which uses HornetQ internally. And now that JBoss has acquired FuseSource, which comes with its own FuseMQ, there’s more choice. What JBoss would like to do is to have one way to do messaging in the future, and that’s by using FuseMQ.
What, then, is FuseMQ, and how is it related to FuseSource? FuseMQ is a product developed by FuseSource. Building on top of ActiveMQ, FuseSource has made FuseMQ address a few important requirements of enterprise messaging, such as Clustering and High Availability.
Another requirement which FuseSource seeks to satisfy is that of providing patches and upgrades as soon as possible, if it’s part of your agreement (Service Level Agreement – SLA) with them. Suppose that during production, you found a FuseMQ (or ActiveMQ) bug which affects your application, submitted it, and according to its severity and SLA, required a patch in 48 hours, then FuseSource can provide the patch. If the same bug was submitted to the Apache Software Foundation (ASF), ASF would need at least 72 hours to vote to include the bug fix in the next release.
In fact, each Apache Software Foundation project has its own PMC (Project Management Committee) to determine committers, project direction, and overall management. . . .
The project is a meritocracy — the more work you have done, the more you will be allowed to do. The group founders set the original rules, but they can be changed by vote of the active PMC members. There is a group of people who have logins on our server and access to the source code repositories. Everyone has read-only access to the repositories. Changes to the code are proposed on the mailing list and usually voted on by active members — three +1 (‘yes’ votes) and no -1 (‘no’ votes, or vetoes) are needed to commit a code change during a release cycle; docs are usually committed first and then changed as needed, with conflicts resolved by majority vote. . . .
Anyone on the mailing list can vote on a particular issue, but only those made by active members or people who are known to be experts on that part of the server [application] are counted towards the requirements for committing. Vetoes must be accompanied by a convincing technical justification.
Since some of those who develop ActiveMQ, i.e., committers, work for FuseSource and FuseMQ is a fork of ActiveMQ, FuseSource is not as restrained, and is therefore free to provide patches when you need them. Later on, these patches are merged into the main ActiveMQ branch, after the usual ASF voting system.
The relation and process described above also exist between, and apply to, Apache Camel and FuseSource’s Mediation Router.
Steve Gaines was the final speaker. Using four case studies as examples, he described three requirements, of three different enterprises, that ActiveMQ was able to satisfy. Naturally, the numbers (or metrics) are still exciting to see.
Performance: For this example, their client, through ActiveMQ, was able to handle 32,000 transactions per second. I don’t remember if, in this case, transactions mean the same thing as messages. But since the context is Messaging and Integration, I’ll assume so for now.
Nevertheless, the default implementation of ActiveMQ can handle 6000 messages per second, assuming that the receiver consumes messages at least as fast as you produce them. Imagine producing 6000 messages per second while the receiver consumes only 10 messages per second. And of course, before discerning this, you raise a defect with ActiveMQ 🙂
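The arithmetic of that mismatch is worth making explicit: the backlog grows by the difference of the two rates, every second. A small sketch; the helper and its name are mine:

```java
public class BacklogSketch {
    // Messages left sitting on the queue after `seconds` of sustained rates.
    static long backlog(long producedPerSec, long consumedPerSec, long seconds) {
        return Math.max(0, (producedPerSec - consumedPerSec) * seconds);
    }

    public static void main(String[] args) {
        // The rates from the text: 6000 in, 10 out.
        // After one minute the queue is 359,400 messages behind.
        System.out.println(backlog(6000, 10, 60));
    }
}
```

At those rates the queue never drains; the broker isn’t the bottleneck, the consumer is.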
Global Scalability: For this client (SpecSavers), the challenge was threefold: rolling out Messaging to 1900 global retail stores, managing the installations, and affording to pay for the product to use in that many stores. The installation and service costs of ActiveMQ made it attractive to SpecSavers.
Proven Enterprise Quality of Service (QoS): this client (CERN) cared about running the operational grid for the Hadron Collider. The goal here was to distribute 100,000 messages to more than 140 facilities in 20 countries.
You’ve got to admit, that’s an impressive list of what JBoss and ActiveMQ were able to accomplish.
red hat in the office
During the demo and Q & A sessions, James Strachan, using the Fuse IDE, showed us how easy it is to use Apache Camel to integrate several systems. In addition, with Camel, your middleware logic is separate from your business logic, and this makes testing integration simple. Because most of the Camel classes have corresponding JMX MBeans, Fuse IDE takes advantage of that to make it easy to visualise messages as they pass through Camel and are routed to different destinations.
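That separation of middleware from business logic can be sketched without the Camel dependency itself: keep the business logic in plain functions, and let the “route” merely wire a source to a sink through them. All names here are mine, not Camel’s API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.function.Function;

public class RouteSketch {
    // Business logic: a plain function, unaware of any transport.
    static final Function<String, String> enrich =
        body -> body.toUpperCase(Locale.ROOT);

    // "Middleware" logic: moves messages from a source to a sink,
    // applying the business step in between. Swap the source/sink
    // (file, queue, in-memory list) without touching `enrich`.
    static void route(List<String> source, Function<String, String> step,
                      List<String> sink) {
        for (String msg : source) sink.add(step.apply(msg));
    }

    public static void main(String[] args) {
        List<String> in = Arrays.asList("hello", "camel");
        List<String> out = new ArrayList<>();
        route(in, enrich, out);   // in tests, lists stand in for endpoints
        System.out.println(out);  // [HELLO, CAMEL]
    }
}
```

Because the endpoints are interchangeable, the business step can be unit-tested with plain lists, which is exactly what makes testing integration logic simple.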
I enjoyed this forum; and it wasn’t all because of the red hat.