
Author Archive

What do you think of Brahms?

A brief but very relevant article on one man’s study of computer system design.

This New York Times article offers a glimpse into a lifetime of work and is a strong reminder of how easy it is to design with complexity, and how hard it is to design for simplicity (and robustness).


The Rise of Ecosystems…and the fall of Nokia

“The world is shifting from a battle of devices to a war of ecosystems,” said Stephen Elop, the new chief executive of Nokia, while announcing Nokia’s decision to drop its own Symbian OS efforts and move forward with the Microsoft Windows Phone 7 smartphone platform. He went on to say, “Nokia brings hardware and incredible industrial design, and Microsoft has the software.”

In other words…Nokia blew it. They lost the ecosystem battle. Elop’s own quote shows he understands the importance of ecosystems, but claiming your future is in hardware is pretty much giving up, unless Nokia can figure out some way to outdo the Japanese and the Chinese in hardware design. And even if they did, so what?

Ecosystems are built on software and services; hardware is non-strategic and rarely constitutes an ecosystem (and certainly not in cell/smart phones). Apple arguably has the most attractive hardware, but its secret sauce is its software and the ecosystem surrounding it. Think of it: would an iPhone running Windows Phone 7 have any real advantage? NOPE! Yet Stephen Elop thinks that is the future of his company.

For comparison, read this article from the New York Times on Apple and its platform as the driver of success. It is a nice overview of how Apple was, and is, so wildly successful (while Nokia spiraled down and, now admitting defeat and re-labeling the Windows platform, is completely out of the ecosystem competition).

http://www.nytimes.com/2011/01/30/business/30unbox.html?_r=1&scp=5&sq=apple%20ecosystem&st=cse


Categories: Uncategorized

Design for Change and HBS article

A key principle of Excellence by Design is that of ‘Design for Change’. While the ability of any system to change over time has always been important, it is becoming more so due to the accelerating pace of change in technology, globalization, and consumer demands. Simply put, the world is changing much faster these days than it did even five years ago, and systems (both technological and organizational) must be far more agile and responsive to that change.

Which means ‘Design for Change’ is more critical than ever. Designing a system intentionally to be more capable of change (in a managed, effective way) should now be a paramount consideration. It is also one reason why systems that are rigid and inflexible, regardless of their maturity and function, are becoming greater sources of dissatisfaction. Agility is becoming more important than function. The Apple iPhone is the perfect example: its ecosystem of thousands of apps allowed it to be very flexible to new needs, while the typical cell phone providers struggled to provide more ‘fixed’ features in their phones. That is fast becoming an unsustainable business, and ‘smartphones’ the norm, because they are ‘designed for change’.

An article by the Harvard Business School does an outstanding job of explaining this in more depth. While it is very technical, the critical results can be summarized, and they have great implications for Design for Change. The full article can be found here.

Summary:

The article analyzed complex software systems in terms of Core and Periphery subsystems. The authors’ interest was to measure the level of core vs. periphery use and understand its implications. The study analyzed large software systems with a minimum level of complexity and usage, as measured by having a large number of end-user deployments.

Any complex technological system can be decomposed into a number of subsystems and associated components, some of which are core to system function while others are only peripheral.  Core components are those that are tightly coupled to other components. Peripheral components are those that are only loosely coupled to other components.
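
The core/periphery distinction can be made concrete. Below is a minimal sketch (my own illustration, not the study’s actual method or code) that classifies the components of a dependency graph: the ‘core’ is taken to be the largest group of mutually dependent (cyclically coupled) components, and everything else is periphery. The function names and the toy graph are hypothetical.

```python
from collections import defaultdict

def reachable(deps, start):
    """All components transitively reachable from `start` via dependencies."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in deps.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def core_components(deps):
    """Return the largest group of mutually dependent (tightly coupled)
    components -- the 'core'.  Everything else is 'periphery'."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    reach = {n: reachable(deps, n) for n in nodes}
    groups = defaultdict(set)
    for n in nodes:
        # two components share a cyclic group iff each can reach the other
        key = frozenset(m for m in nodes if m in reach[n] and n in reach[m])
        groups[key or frozenset({n})].add(n)
    return max(groups.values(), key=len)

# toy example: a, b, c form a dependency cycle (core); d, e hang off it (periphery)
deps = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"], "e": ["d"]}
print(sorted(core_components(deps)))  # ['a', 'b', 'c']
```

On a real codebase the same idea applies at whatever granularity you extract dependencies (files, modules, services); the core percentage is then just the core size divided by the total component count.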

Some key findings:

  • “How such “core-periphery” structures evolve and become embedded in a firm’s innovation routines has been shown to be a major factor in predicting survival, especially in turbulent technology-based industries.”
  • They found “tremendous variation in the number of core components across systems, even when controlling for system size and function. Critically, these differences appear to be driven by differences in the structure of the developing organization.”
  • The article notes research showing: that “tightly coupled (‘core’) components tend to survive longer and are more costly to maintain, as compared to loosely coupled equivalents… higher levels of component coupling are associated with more frequent changes and higher defect levels…teams developing components with higher levels of coupling require increased amounts of  communication to achieve a given level of quality.”
  • There are substantial variations in the number of core components across systems of similar size. For example, the core ranged from 7% of a system (Linux) to 64% (myboks).
  • Most interestingly, different organizational forms appear to yield designs with different structures. The difference between systems constructed by distributed (especially open source) methods and those built in closed/commercial (single company/team) environments was striking. Even when comparing similar functionality, the closed/commercial offerings were significantly more core-based, averaging over 50% core, compared to an average of less than 10% core in systems constructed by distributed, independent teams.
  • And finally, their summary, which is telling:  “it is significant that a substantial number of systems lack such a structure. This implies that a considerable amount of managerial discretion exists when choosing the “best” architecture for a system. Such a conclusion is supported by the large variations we observe with respect to the characteristics of such systems.  In particular, there are major differences in the number of core components across a range of systems of similar size and function, indicating that the differences in design are not driven solely by system requirements. These differences appear to be driven instead, by the characteristics of the organization within which system development occurs.”

Implications/Recommendations:

Architecture is a key consideration in the design of systems, yet this study shows that architecture (in terms of subsystems) is variable, is influenced by the organizational design, and has very important ramifications for future extensibility, flexibility, and resilience to change. My experience has been heavily influenced by a focus on core/periphery subsystem design, and these findings match my observations. As such, an organization developing a system (whether software, hardware, process, or organizational) would be wise to remember:

  • Constructing your ‘system’ with a minimum of ‘core’ components, and well-interfaced (standardized, loosely coupled, independent of implementation) ‘periphery’ components, will lead to lower costs for change…which is inevitable, and increasingly the norm.
  • Avoid having your organizational structure unduly influence your system structure. For example, while a ‘small tight team’ may be good for driving an effective design effort, ensure they design with the intention of ‘least core/maximum periphery’.
  • These recommendations apply equally to business design. This is something to think about especially as a company expands globally, when it is critical to determine (design!) the right balance between adherence to corporate consistency (core) and regional/market adaptability (periphery).

In any case, some good aspects to consider when you ‘Design for Change’ as part of Excellence by Design.

Categories: Complexity

How Ford got its groove back

An article this week in CIO reviews how the IT transformation at Ford Motor Company helped drive, and support, the turnaround at that company. Before I comment and provide some personal experience from my participation, let’s review some of the very impressive news from Ford:

  • Profit is back. Ford reported its fifth consecutive profitable quarter, with $2.6B for the last period (2Q10)
  • US market share has grown. In fact, in every month of the last two years (except one)
  • US brand impression is MUCH higher. Both for perception of quality and, very importantly, innovation
  • US vehicles now receive high ratings. As measured by Consumer Reports, other consumer testing sources, and Ford’s internal research
  • US product winners abound. Taurus (especially the SHO) is back in a big way and finally sheds the ‘500’ fiasco years. Fusion continues to do very well. The F-150 is taking share and winning awards (as usual). The revamped Mustang is a hot hit, again. The new Edge interior with Ford’s new driver UI is gorgeous. The small Fiesta has entered the market to warm reviews. There really are few duds; perhaps the only complaints are that Lincoln is still not performing as well as one might like (though customer satisfaction, especially with the dealer experience, is very high) and that some vehicles, like the Flex, are not the runaway hits one might have hoped for.
  • Europe leadership grows. Ford is the #1 or #2 selling brand in Europe (depending on the period you select over the last two years), and its design leadership there has influenced the Company’s direction, leading to better perception, higher sales, and better US products. Ka, Transit, Focus, C-Max, Mondeo: numerous product hits demonstrate a strength and foundation for future success.

The article in CIO covers IT’s actions in this transformation and quotes CIO Nick Smither, who also gives due credit to the prior CIO, Marv Adams. I was at Ford from 2002 to 2008, worked for both CIOs, and was fortunate to participate closely in much of the work done to help IT become more effective and to help drive the corporate revitalization. That effort, which started before Alan Mulally arrived at Ford, really took hold once he took the reins. But the principles were the same. Here are a few of the key ones.

  • Reduce Complexity. This was a key IT strategy starting in 2002. Initially it started in areas IT could control, like infrastructure, and then moved slowly upward, toward business applications and information. Over time, this effort helped not only shed duplicate assets, but gain greater focus on the assets that remained, so they became better. This occurred in servers, storage, and networks, but also in key enterprise-wide application services like collaboration, data warehousing, and application hosting. This IT strategy bled into the business and took hold in product development, where Ford finally began to take seriously the needless complexity in platforms and components. The benefits IT saw also occurred in vehicles: product engineering costs dropped, quality rose, capability increased. IT customers saw better service levels. Ford customers saw better products. Complexity kills and focus saves. Of course, it’s not just reduction; you have to design for greater commonality.
  • Be truly Global. Ford had always acted like a multinational, not a single global company. IT did too. But over the last few years this balkanization of organizations finally ended. IT started working to leverage global talent, consolidate facilities, and share best practices. The business side of Ford has done so too. While Ford still seems very skillful at providing market-unique offerings when required, the ‘back office’ of IT and business functions works together much more effectively as a global entity. Note, importantly, that the reduction in complexity and greater commonization of IT and vehicle products makes this all possible. You can’t maximize global potential if you act like a million separate entities. You have to redesign your systems and processes to enable globalization.
  • Leverage the Community. Ford (IT and the business) has moved toward a model of true teaming, and toward methods that enable it. This not only builds camaraderie, it builds best practices, and it increases momentum. A single person’s great idea can be absorbed and magnified, instead of possibly resented or ignored. Team sport is something IT built with Computing Patterns, Centers of Excellence, and Communities of Practice. Alan Mulally brought it into Ford’s executive suite (where it had, ahem, been lacking) with his common Business Plan Review (BPR) process, which encouraged open, efficient dialogue about issues and where help was needed, and a fresh attitude of working together. The lesson here is that you have to design enablers and solutions that help leverage the community, not just yell at people to work together more (as many companies often do).
  • Pursue Product Leadership. Both IT and the Ford business rededicated themselves to building outstanding products (and in IT’s case, services). In IT, Ford led the industry in moving toward utility computing, introduced better methods of developing applications ‘like a product’ rather than as one-off order taking, and helped introduce new innovations like Sync. The business re-energized itself too. Under Derrick Kuzak, Global VP of Engineering, new methods, along with more ambitious objectives, were employed to better define the key attributes of excellence and aggressively design for them.

The points above could sound like motherhood and apple pie, but Ford (IT and the business) made them real by designing for them. It was the perfect example of excellence by design. Many IT and business leaders can talk the talk, but few have walked the walk as we did at Ford. It was truly a transformation in leadership, strategy, tactics, and results that Americans should be proud of. I know I am blessed for having been a part of it. I hope to share more insights soon in my forthcoming book, because the lessons learned in Ford’s successful transformation should be regularly taught in any business and IT organization.

Applying Excellence by Design…for Healthcare

Much of my professional time over the last few months has been focused on the area of Healthcare and considering the application of Excellence by Design techniques to it.

Here’s a look at Healthcare using just some of the Excellence by Design model facets:

  • Environment: Challenging! The Healthcare industry is perhaps the leading example today of a challenging Environment that exhibits the paradox of Chaos vs. Control. (Control) The industry is facing unprecedented standardization and regulatory pressures driven by government entities. These cover things like basic interoperability of protocols based on the National Information Exchange Model (NIEM), through which the US will guide the development of a health information exchange framework. There are also new content standards for specifying clinical diagnoses and procedures, among others. These new standards are (and will continue to be) significantly affecting the Environment that all players must live in, whether they be software product vendors, information value-added services vendors, hospitals, insurance carriers, or others. (Chaos) Of course, at the same time the desire to drive new competitive innovations marches on: in medical devices, in information (i.e., business) intelligence services, and in solutions that drive cost down and effectiveness up. But don’t forget that many, if not most, Healthcare systems are based on pretty antiquated technology, so all this change is occurring against a landscape that badly needs modernization of basic infrastructure. From my perspective, the Healthcare industry, which has been a laggard in IT evolution compared to other industries (in particular Manufacturing, Finance, and Travel) in both optimization (Control) and innovation (Chaos), now seems to be paying the piper by having to face simultaneous pressures from multiple directions, in a shorter (government-imposed, politically energized) timeframe.
  • Systems as Strategy: A Paradox. A key facet of Excellence by Design is the use of ‘systems as strategy’ (meaning structured approaches to problems and the design of systemic solutions to them). The Healthcare industry seems to have a dual personality in this regard. The medical/clinical side of the industry is the poster child for developing structured approaches to disease discovery, diagnosis, and treatment; it is a hallmark of the industry. Yet IT has not adopted this same level of rigor. Why? Typical reasons given are underinvestment in IT in general; relatively low competency (in staff and even in CIO roles, which are being posted with a flourish these days, as if the role were never regarded as important before!); and a lack of cross-industry desire to solve some of the broader IT challenges, as Automotive did with CAD and supply chain, or as Finance did with bank funds transfer interoperability and stock trade processing. The Healthcare industry and its functional organizations have generally tended to remain ‘islands’ that did not seek to cooperate across competing entities, technology providers, or even functions within a company. There was little application of broad ‘systems’ of execution as a strategic approach to business process design and technology solutions planning.
  • Products as Platforms: An Opportunity (again). As an industry, the IT solutions employed for Healthcare are very ‘siloed’, both in design and in implementation. Other industries have shown the advantages of greater integration of IT solutions into broad platforms that enable a wider class of functionality and information insight, in a more consistent and approachable (same UI, same interface, etc.) form. Of course, the classic examples are the ERP vendors, although their offerings have become so bloated and complex that they are not the model I would recommend. Better examples are Salesforce.com, Amazon, and eBay. These have become very successful not only because of their function and content, but because they are provided as extensible ‘platforms’. Other companies are following this trend: Facebook and Twitter are among the many social networking offerings trying to grow beyond being ‘an app’ to become a ‘platform’. So what is happening in Healthcare? Not clear yet. While there is some noise in this direction, I cannot say I have been impressed that what I have seen is more than marketing spin. Just adding function to an existing offering, or rebranding/bundling applications, does not a platform make. In my forthcoming book (or a future blog post) I’ll provide some general characteristics that I believe define a great product-as-platform.

In summary, Healthcare is either a scary place to be, or the best game to be in right now. The industry is facing great change, is ripe for all kinds of improvement, is forced to act with a sense of urgency by government, and has a noble mission to improve the lives of people. It can be a great podium for those wise and skilled enough to apply smart approaches to the challenge. It can also be a vast graveyard for those who are unable to think broadly and try to save the patient with the ‘one more band-aid and pray’ approach.

I am optimistic that, driven by the forces of today, the industry (and IT especially) will leverage the good capabilities that abound to improve the efficiency of operations, as well as patient outcomes. But of course I also believe the key to doing this most effectively is not brute force but Excellence by Design.

Simplicity and Design

I have emphasized the issue of Complexity in Design before in this blog.  It is an ongoing and critical aspect of understanding Excellence by Design.

In the talk above, George Whitesides does a nice job of providing a very simple introduction to Simplicity and Complexity. Excellence by Design requires the designer to be adept at using simplicity to create complex capabilities through what George refers to as stacking. It is not a new concept; he simply reminds us of the basic value of using small elements to build bigger things. He also tries to define what ‘simplicity’ is. Interestingly, he defines it as:

  • Cheap (low cost, so easy to reuse on a massive scale)
  • Functional (must provide some utility)
  • Reliable (does what it says with extreme predictability and consistency)
  • Stackable (has some characteristic to enable easy combination/connection with other things)
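
As a toy illustration of stacking in software terms (my own example, not from the talk): each building block below is cheap, functional, reliable, and shares the same shape of interface, so simple elements combine into a more capable whole.

```python
from functools import reduce

def stack(*layers):
    """Stack simple elements into a bigger capability: the output of each
    layer feeds the next, so anything with a compatible interface combines."""
    return lambda x: reduce(lambda acc, layer: layer(acc), layers, x)

# three trivially simple, predictable building blocks
strip = str.strip   # cheap
lower = str.lower   # reliable
words = str.split   # stackable

# a more complex capability, built purely by stacking
normalize = stack(strip, lower, words)
print(normalize("  The Quick Brown Fox  "))  # ['the', 'quick', 'brown', 'fox']
```

The point is not the string handling; it is that none of the pieces knows about the others, yet the combination does something no single piece could.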

Although George claims little study has been made of the subject of simplicity in general, the use of stacking is certainly not new. It is a basic concept that engineers (whether mechanical, chemical, or information technology) use as a fundamental part of their jobs.

I would note, however, that engineers typically strive to 1) ‘shorten the distance’ from building blocks to complex solutions by using the highest-level building blocks they can (use a light switch off the shelf instead of designing and manufacturing your own), and 2) build complex designs that are predictable and stable, not emergent.

Said another way, the traditional (engineering) view of simplicity and complexity is to SHORTEN the ‘distance’ between the two needed to accomplish a SPECIFIC result. What this yields is less understanding of the truly simple building blocks, in favor of using more complex ones. No problem, if the issue is one that lends itself to a ‘static’ goal, like building construction.

But below is a vastly different presentation, discussing the effects and factors that have contributed to the destruction of ocean life. The ‘distance’ between the simplest elements of ocean life and the ultimate effects on life on our planet is obviously a huge challenge to understand, because this is a dynamic, emergent system without fixed, predictable results.

Moral of this post: In business, when considering how to achieve Excellence by Design, the designer must be careful to understand whether the solution they are designing is really

  • one best served by shortening the distance to a specific/static solution

or

  • one that must enable dynamic/emergent behavior

or some combination of the two…

This ability to determine what level of ‘simplification’ to use, and how, and the effects it will enable, is a very challenging task. It would, frankly, be a great subject for a college course in advanced design…but perhaps we’ll get to that level of detail another day.

Einstein of Design

Several years ago I was astounded upon reading the book ‘A New Kind of Science’ by Stephen Wolfram. It provides a point of view I highly concur with: that the universe of complexity can be explained via computational models. Essentially, in my terms, it points out how brilliant design (that is, at its core, quite simple!) can produce infinite variety.

The video above is a talk by Stephen at TED, in which he provides an update on some new capabilities he and his team have subsequently produced (like Wolfram Alpha and WolframTones), but more importantly, expounds on his belief/vision that computation can provide the basis for understanding the fundamentals of the universe…indeed, for modeling alternative universes as well.

I believe Mr. Wolfram is well on his way to being the next Einstein for several reasons, and they are worth touching upon, I think, because they are directly related to the theme of Excellence by Design.

  • Great Design can be simple, yet yield infinite variety.  This is a core theme of Stephen’s work, my own beliefs, this blog, and is a key characteristic of great designers.   It is interesting to me to see, in the universe of IT professionals and organizations, how some embrace this deeply and some do not.  It is a capability I watch for in peers and colleagues, and a capability that this blog tries to show how to enable for IT organizations especially.
  • Models may be simple, but Results are irreducible. This is a very interesting paradox and, again, something many people may react strongly against. Stephen declares (and shows) that while simple designs enabling infinite diversity are understandable, their results are not predictable in reduced form. This has huge ramifications. It means you could design something that evolves with unintended consequences…a scary thought if you work in biotech or some field where the outcome of your experiment could create a deadly pathogen! On the brighter side (a lot brighter), it means that designers can be charged to create more ‘organic’ solutions that can evolve and react to new needs, not just mindless programs that do only what they were originally coded for.
  • Model modularity is a powerful concept. In the IT world, ‘SOA’ has followed ‘OO programming’, and ‘modular programming’ before that, as a more organized approach to producing, and reusing, functionality. Stephen certainly understands the concept, but extends the theory into his concept of computational modeling and into his products (like Wolfram Alpha). I love what Stephen is doing, both conceptually and practically.
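
To make the ‘simple model, irreducible results’ idea concrete, here is a small sketch (my own illustration, not Wolfram’s code) of his best-known example, the Rule 30 elementary cellular automaton: a one-line update rule that, starting from a single live cell, produces a famously complex, seemingly random triangular pattern.

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30: new cell = left XOR (center OR right).
    The row wraps around at the edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def run(width=63, steps=24):
    """Evolve from a single live cell and print the pattern."""
    row = [0] * width
    row[width // 2] = 1  # the entire 'design': one live cell and one rule
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

run()
```

The rule fits in a single expression, yet there is no known shortcut to predicting the pattern far down the page other than running it: that is computational irreducibility in miniature.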

There is a lot more to Stephen Wolfram, his contributions, and his concepts than I highlight here. But if I may make two grand statements:

Statement 1 (not SO grand): Any IT organization (or any business, for that matter) would be wise to deeply study what Stephen has done and is proposing to do, and to develop a core competence in its application to IT and business. There are deep implications for how to organize work, design products and solutions, and deliver value to your customers. I would argue that just as concepts like industrialization, mass production, process reengineering, and six sigma quality had their time of birth, adoption, and eventual incorporation into the DNA of business management, so will the concept of computational modeling be incorporated into the methods of planning, production, integration, and service of businesses. It is certainly happening today in many areas (again, SOA being a trivial example) but is not yet really recognized for the broader value it can provide.

Statement 2 (very grand): I believe the idea of simple computational models as the basis for understanding systems (whether mathematical systems, physical systems, biological systems, or the universe itself) is not only correct, but is, frankly, how God would have done it. Seriously. If you were God, would you build the world in seven days by painstakingly creating and positioning every molecule? Or would you, as the Great Designer, craft the ability for systems (the universe) to start and computationally evolve using simple models over eons of time? The idea is so appealing. And it can fit whether you are deeply religious, spiritual, or atheist. Given the fact of irreducibility, this Great Designer had ideas on what might evolve, yet enabled the freedom of evolution.

I hope you are intrigued enough by Stephen’s talk above to take a bit more time and think about this.  He has done a fabulous job of providing a fantastic view of, and methods for, Design, and one that still has very practical applications today.  He may well go down as the next Einstein in terms of contributing to the understanding of science, physics, and the universe.