Does architecture emerge or is it planned?

As a good architect, the right answer is ‘it depends’. But I have also seen too many perplexed and ‘that cannot be all’ facial expressions to know that it is not the complete answer either. There are roughly four mechanisms:

  • The process of creation: The architect identifies and analyses a number of alternative solutions, derives a rationale and then guides the design process to its completion (i.e., code tested and released).
  • The process of evolution: Repeated executions of the ‘process of creation’ in response to new or changed requirements, environment changes or other constraints imposed on existing systems.
  • The process of guidance: The identification and specification of goals or references to guide and constrain both of the above processes.
  • The process of collaboration: The identification and realisation of opportunities to collaborate and share software between two or more groups of people.

The process of creation is predominantly a planned process, but it is not uncommon for parts of the solution to emerge due to unknown risks, constraints or other factors with uncertain details. Project-focused Solution Architects typically work within this category. Solution Architects are responsible for the search for the optimal (or good enough) solution within a set of possible solutions (also known as the ‘design space’).

The process of evolution is a gradual process where the architecture emerges through step-by-step planning. Solution Architects involved in several releases, or in multiple projects with dependencies on a shared set of existing applications, fall into this category. Here the architect is responsible for the gradual realisation of a solution (and interim solutions), or for altering the solution in light of changes to what’s considered ‘optimal’ (changes in the design space).

The process of guidance ensures a suitable direction for the process of evolution and appropriate constraints for the process of creation. Enterprise Architects typically work within this category. The focus on goals is about ensuring the continuous alignment with the objectives, strategy and business goals of the organisation – the actual realisation belongs as part of the processes of creation and evolution – i.e., define the benchmarks for what’s considered ‘optimal’ and set the outer boundaries for the design (search) space. In this context, architecture within the enterprise context emerges from the combined set of projects executed over time.

The process of collaboration is primarily for scenarios where company goals can only be achieved through the collaboration between multiple units either within or external to the company. In this context, both the goals and solutions would emerge as a result of market forces.

So it really does depend….

More isn’t just more…

Most have probably heard the expression ‘less is more’, or know of the ‘keep it simple, stupid’ principle. These are general and well-accepted principles for design and architecture, and something any software architect should aspire to. Similarly, Richard P. Gabriel (a major figure in the world of the Lisp programming language, accomplished poet, and currently an IBM Distinguished Engineer) coined the phrase ‘Worse Is Better’ in formulating a software concept that Christopher Alexander (a ‘real’ architect famous within software engineering) might have termed ‘piecemeal growth’ – i.e., start small and add later.

But what happens if we continue to add more, even if the additions are just small, simple pieces?

The suggestion is that more isn’t just… more.

A bit of history-digging reveals that, in 1962, Herbert Simon (granted the ACM Turing Award in 1975, and the Nobel Memorial Prize in 1978) published The Architecture of Complexity, describing the structure and properties of complex systems. He concluded that complex systems are inherently hierarchical in structure and exhibit the near-decomposability property – i.e., a complex system is essentially a system of mostly independent systems. In 1972, P.W. Anderson (indirectly) built on this line of thought in ‘More is Different’. Anderson argued that an understanding of how the parts (e.g., systems) work does not equate to an understanding of how the whole works. We can all relate to the point that biology, medicine, and science can explain, to a large degree, how our bodies work, yet when all the parts are combined into a human body, elements such as emotion, intelligence, and personality form part of the ‘whole’, and are distinctly not medicine or biology. Yet, in Simon’s terminology, the human body is a system of systems exhibiting the near-decomposability property – otherwise, heart transplants wouldn’t be possible.

Near-decomposability is a property we desire as part of software design – although it is better known as modularity. Software architecture is basically about separating a system into components or basic elements, based primarily on the fundamental principle of information hiding first formulated in Parnas’s seminal 1972 paper, On the criteria to be used in decomposing systems into modules. But as Richard Gabriel argued in his essay, Design Beyond Human Abilities, there is a key difference between software modularity and the near-decomposability property of systems of systems.

Within a software engineering context, a module is traditionally defined by what it does rather than by who produced it – the latter is the definition used by Simon and Anderson.

This is a significant difference, and one we need to get used to as our software systems become increasingly complex.
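Parnas’s ‘what it does’ notion of a module can be illustrated with a small sketch (a hypothetical example of mine, not taken from the paper): the module is defined by the design decision it hides – here, how a collection is stored – so that decision can change without touching any caller.

```python
# Illustrative sketch of Parnas-style information hiding (hypothetical example).
# The module hides a design decision -- how members are stored -- behind a
# small interface, so callers cannot come to depend on the representation.

class MemberRegistry:
    """Hides the storage representation of a collection of members."""

    def __init__(self):
        self._members = set()   # hidden design decision: a set, not a list

    def add(self, name: str) -> None:
        self._members.add(name)

    def contains(self, name: str) -> bool:
        return name in self._members

    def count(self) -> int:
        return len(self._members)

# The representation can later change (say, to a sorted list or a database)
# without touching any caller -- the change is confined to this one module.
registry = MemberRegistry()
registry.add("alice")
registry.add("bob")
registry.add("alice")            # duplicate, absorbed by the hidden set
print(registry.count())          # -> 2
print(registry.contains("bob"))  # -> True
```

Simon and Anderson’s ‘who produced it’ definition, by contrast, would carve the system along team boundaries, which is exactly the tension Gabriel points at.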

Those of you who know Conway’s law from 1968 shouldn’t be surprised, however. The ‘law’ states, “a software system necessarily will show a congruence with the social structure of the organization that produced it”. In other words, the modular structure of a software system reflects the organisation’s social structure – i.e., who rather than what. This doesn’t imply that you should skip your ‘structured design’, ‘object oriented programming’, or ‘patterns’ course at uni and head down to the bar to socialise instead, but I think there are a number of implications that software architects, especially, need to pay attention to when dealing with a system of software systems:

  • The architecture will be at the mercy of the organisational structure or social network. Although I don’t believe that software architects need to become experts in social science or organisational behaviour, it will be helpful to understand the basics. To get you started, I’d suggest a couple of books: The Hidden Power of Social Networks: Understanding How Work Really Gets Done in Organizations and Getting Things Done When You Are Not in Charge.
  • Software architects can no longer pretend that they are software versions of ‘real’ building architects. We’ll need skills more akin to those possessed by policy makers, negotiators, and other communicators (though not necessarily politicians). Robert Schaefer’s ‘Software maturity: design as a dark art’ is a place to start.
  • The duplication of software functionality or applications within organisations or commercial enterprises isn’t necessarily a bad thing in itself; instead we need to focus on warning signs such as duplicated organisational roles (or responsibility overlaps), or lacking communication channels (related to the first point). I know many might be up in arms at the mere suggestion of duplication (wasting resources is never good!), but we need to be serious about the ‘cost of re-use’ versus the ‘cost of (almost) re-implementation’. Even when viewed within the context of Total Cost of Ownership, my belief is that the former isn’t always the clear winner.
  • Focus on interoperability instead of integration. So what’s the difference? NEHTA’s Interoperability Framework captures the difference as the ability to maintain integration over time at minimum effort: the ability to maintain integration despite changes to, or replacement of, the communicating systems. Other references include the comprehensive ‘A Survey of Interoperability Measurement’ – if you can’t measure it, then you are not going to get it.
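The last point – integration that survives change – can be sketched as a ‘tolerant reader’: a consumer that keeps working as the sending system’s messages evolve. All field names below are invented for illustration; this is not from the NEHTA framework.

```python
# Hypothetical sketch of interoperability as integration that survives change:
# a tolerant reader that copes as the sending system's message format evolves.
# Field names are invented for illustration.

def read_patient_event(message: dict) -> dict:
    """Extract only what we need, tolerating added, renamed, or missing fields."""
    return {
        # accept the old field name as a fallback for the renamed one
        "patient_id": message.get("patientId", message.get("patient_id")),
        # supply a default when an optional field is absent
        "priority": message.get("priority", "routine"),
        # any unknown extra fields in `message` are simply ignored
    }

# Version 1 of the sender:
v1 = {"patient_id": "p-42"}
# Version 2 renamed a field and added new ones -- the same reader still works:
v2 = {"patientId": "p-42", "priority": "urgent", "ward": "3B"}

print(read_patient_event(v1))  # {'patient_id': 'p-42', 'priority': 'routine'}
print(read_patient_event(v2))  # {'patient_id': 'p-42', 'priority': 'urgent'}
```

A point-to-point integration would have broken at version 2; the interoperable reader absorbs the change at (near) zero effort, which is exactly the NEHTA definition quoted above.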

Today’s realities of software development are essentially about adding, changing, or removing parts of an existing complex software system, through a continuous process of negotiations, bargaining, and policing. Our ability to do this is directly linked to our software’s ability to interoperate.

Oracle’s Architect Day on Cloud Computing – a contradiction?


Given Oracle CEO Larry Ellison’s comments – “Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?” – and the recent closure of Sun’s Cloud, I figured this one should make for an interesting day. So I went along to the Grace Hotel, Sydney.

And I’m glad I did – the organised presentations and panel discussion demonstrated that Oracle and Cloud are by no means a contradiction – in fact, it was nice to see and hear (at least) one vendor preaching some calm amid the hype. And yes, the food was nice too.

According to Gartner’s Hype Cycle (as quoted by Oracle), Cloud Computing is at the Peak of Inflated Expectations, and we are about to descend into the Trough of Disillusionment. But rather than writing off Cloud Computing altogether, there are a number of potential benefits to emerge on the Slope of Enlightenment.

Better and Flexible Pricing Models for IT

Cloud computing enables IT departments to implement an effective ‘charge-back’ model for IT services. The difficulty of charging for technology services per usage rather than per server has, in my experience, been a major show stopper for effective sharing of infrastructure – resulting in later expensive, but inevitable, consolidation projects. Avoiding them in the first place would be a far more cost-effective approach, and a lot less risky.
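The per-usage charge-back idea reduces to metering and a rate card. A minimal sketch, with entirely invented rates and departments:

```python
# Minimal sketch of a per-usage charge-back model (hypothetical rates and
# departments): each department is billed for the resources it actually
# consumed, rather than a flat per-server fee.

RATES = {"cpu_hours": 0.12, "gb_stored": 0.05}  # invented unit prices

def charge_back(usage: dict) -> dict:
    """Compute each department's bill from its metered resource usage."""
    return {
        dept: round(sum(RATES[res] * qty for res, qty in metrics.items()), 2)
        for dept, metrics in usage.items()
    }

usage = {
    "claims":    {"cpu_hours": 1200, "gb_stored": 500},
    "marketing": {"cpu_hours": 150,  "gb_stored": 2000},
}
print(charge_back(usage))  # {'claims': 169.0, 'marketing': 118.0}
```

The hard part in practice is the metering, not the arithmetic – which is precisely what cloud platforms provide out of the box.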

Better Enterprise Architecture

Questions around security, privacy, vendor lock-in, and what to put in a public cloud are major concerns. But regardless of the applied technologies, it all starts and ends with effective governance. And effective governance doesn’t happen without a good approach to Enterprise Architecture. The Enterprise Architecture challenge is not just about effective and better implementations of current best practice; it is also about enhancing the frameworks to cope effectively with cross-company collaborations (as noted by Philip Boxer in his blog, Asymmetric Design).

Cheaper Infrastructure – Upfront and Total

Tim Rubin (Senior Enterprise Architect, Oracle) described the concept of ‘cloud bursting’ – i.e., where companies purchase additional capacity for the (few) times a year it’s needed. This would enable companies to size their own infrastructure (perhaps as a private cloud) for the typical performance and load requirements (lower upfront cost) and ‘rent’ infrastructure for the peaks. Neither a public nor a private cloud alone offers both, but a private cloud combined with the ability to burst into public clouds does.
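The bursting decision itself is simple enough to sketch (capacity figure and function names are my own, not Oracle’s):

```python
# Hypothetical sketch of 'cloud bursting': serve load from private capacity
# sized for typical demand, and rent public-cloud capacity only for the peaks.

PRIVATE_CAPACITY = 100  # units of load the in-house infrastructure can absorb

def place_load(demand: int) -> dict:
    """Split demand between the private cloud and a public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    burst = max(demand - PRIVATE_CAPACITY, 0)
    return {"private": private, "public_burst": burst}

print(place_load(80))   # typical day: {'private': 80, 'public_burst': 0}
print(place_load(140))  # peak period: {'private': 100, 'public_burst': 40}
```

The economics follow directly: the fixed cost is sized for `place_load(80)` days, and the variable cost is only paid when `public_burst` is non-zero.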