Friday, January 28, 2011

Software Blog 3: Software Autobiography - Good Software, Bad Software, and Ugly Software.

After Ameritech, I went to Alcatel and became involved with technology strategy and M&A.  I had the opportunity to visit a lot of IP-related startups and saw teams with very different approaches to system design and development.  Most of them were building a product around some breakthrough piece of hardware technology: a novel product architecture (usually motivated by removing a performance limitation), a hero experiment in massive integration, or a new division of labor between hardware and software, usually doing functions in HW that had previously been done in SW.  A few of them were building software products or software stacks, tools and frameworks.  Rarer, but memorable, were a couple of startups that had a balanced approach to systems design... that is, they made really good choices about which functions should go in hardware, which should be kept in software, and which partners they should engage to develop parts of the HW and SW externally.
A common problem among the HW-driven teams was planning the software as an afterthought and then hiring a team of software folks to do everything internally, usually because they didn't have the budget or the time to work with external partners.  The systems that resulted usually went to market with fragile, poorly tested software and, if they didn't fail right away, didn't achieve stability until the second or third release.
The most impressive of the balanced teams I met had partnered for their key silicon components, had a simple, robust chassis, and had 5 or 6 software partners for various stacks and frameworks.  The internal resources were architects, testers and glue-software developers.  They followed the technology trends of their HW partner closely, working with each new chip well before it became generally available.  The software partners were each chosen because they were up-and-coming technologists who would, in 3-5 years, be the best in the business.  The resulting product was leading edge, not bleeding edge, and remarkably stable.  When changes in standards happened, it was their partners' problem to sort them out, and because each partner was totally focused on its function, it almost always got things right the first time and worked the standards bodies to stay on the winning side of the techno-politics.
When I moved to ADC, one of my early responsibilities was to consolidate network management from multiple platforms down to a single platform.  This was a very challenging activity, particularly since we had to execute it during the telecom meltdown in 2001.  We started with the best-of-breed EMS and then, over the course of a year, built a modular product architecture that maximized reuse as new releases and new products to manage came and went.  This flexible client/server architecture was eventually overtaken by thin-client products; however, the modular approach, user interface experience and test automation remained valuable for 10 years.  This software team was built up in Bangalore, India and eventually became the center of software development excellence (embedded and otherwise) for all of ADC.
Next time... lessons learned from my software journey.  What was your experience with the Good, the Bad and the Ugly of software systems?

Tuesday, January 25, 2011

Software Blog 2: Software Autobiography - Hands-On Software Management

For 2 years at Bell Labs, I was the Supervisor of the NETS Requirements Group.  The Nationwide Emergency Telecommunications System (NETS) was a research activity contracted by the US Federal Government to evaluate the survivability of national telecom assets against various threats, primarily nuclear war.  I supervised a number of activities: network data consolidation and analysis, network design and simulation, and development of hardening and routing requirements for network equipment.  In my first year, we wrote a lot of software to process and validate data, to map the physical and logical structure of the network, to estimate damage, and to simulate the performance of the network in its damaged state.  We also developed new design rules and routing tables to extract the maximum performance from the surviving assets.
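To make the damage-estimation idea concrete, here is a minimal sketch, written in modern Python rather than the shell and C we actually used, and with a purely hypothetical six-node network: remove the nodes a threat would destroy, then measure how much of what remains still hangs together.

```python
# Toy sketch of damage assessment (not the NETS code): remove destroyed nodes,
# then report the share of surviving nodes in the largest connected component.
from collections import deque

def largest_component_fraction(adjacency, destroyed):
    """Fraction of surviving nodes that sit in the largest connected piece."""
    survivors = {n for n in adjacency if n not in destroyed}
    if not survivors:
        return 0.0
    seen, best = set(), 0
    for start in survivors:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:                                  # breadth-first search
            node = queue.popleft()
            size += 1
            for neighbor in adjacency[node]:
                if neighbor in survivors and neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        best = max(best, size)
    return best / len(survivors)

# Hypothetical six-node ring with one cross-link; suppose nodes C and D are destroyed.
net = {
    "A": ["B", "F"], "B": ["A", "C"], "C": ["B", "D", "F"],
    "D": ["C", "E"], "E": ["D", "F"], "F": ["E", "A", "C"],
}
print(largest_component_fraction(net, destroyed={"C", "D"}))  # 1.0: A-B-F-E still connected
```

The real work, of course, was in the data consolidation, the damage models and the routing analysis; the sketch only shows the shape of the simulation loop.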
When the contract came up for renewal, I negotiated with the government to deliver all the design, simulation and visualization software as a stable product that non-research users could use to evaluate and enhance the survivability of other networks.  I later learned that the final development cost was about 10x what I had estimated.  Part of this was due to feature creep, negotiated when the professional software people took over; part of it was due to the difference in effectiveness between the average software developer and my best people (e.g., a PhD in theoretical physics who was brilliant at both the concepts and the execution of network design); and part of it was my own ignorance of what it takes to create a software product vs. a collection of research algorithms.  Also, everything we had done was on a dedicated Unix platform using shell scripts and C programs: we'd built our own data structures from scratch, there were no stacks, frameworks or significant third-party software we could leverage, and there was no structured programming to maximize reuse.  We did a lot of our testing and debugging the old-fashioned way... bit by bit and byte by byte.
After I left Bell Labs, I became an individual contributor again at Ameritech Science and Technology.  I evaluated technology and designed access and transport networks.  Most of my work was done within spreadsheets on Macintosh computers.  I became a power user of Wingz, for a time more powerful and graphically sophisticated than Excel.  I was able to characterize the technology, design the network, run all the economics and sensitivities, and develop the management presentation on one platform.  The efficiency and effectiveness of this platform, which combined database, scripting and graphical capabilities, had an impact on my attitude towards 3rd-party software.
Later, still at Ameritech, I became very interested in R&D focused on improving software productivity.  I saw software productivity as one of the primary challenges of technology management, especially for the telephone operating companies, which were stuck in a rat's maze of interlocking, inflexible and expensive operational support systems.  At that time, structured programming had promise but had not yet delivered on it.  Development frameworks were in their infancy, and almost all protocol stacks had to be laboriously built and maintained for each development project.
This takes me up to the mid-1990s.  What are your memories of software processes and structures from this period?

Thursday, January 20, 2011

Software Blog 1: Software Autobiography - The Early Days

Everyone’s perception is colored by their history.  I plan to write a few blogs on software, so I thought, in the spirit of full disclosure, I’d write a software memoir.
My first exposure to software was a manual for a PDP-8 minicomputer my father put in my hands when I was 15 or 16.  I recall machine language commands for manipulating binary word structures that resulted in addition, subtraction, multiplication, etc.  My father was a chemical process engineer at a nuclear fuel plant, so I have no idea how he got hold of it; probably from a control systems salesman.
Later, as a physics student at the University of Missouri, I went through a painful, but very useful, course on FORTRAN programming taught by a member of the physics faculty.  The punch cards, line printers, late hours and rigid syntax of IBM Job Control Language created antibodies that took years to recover from.  I also did a senior project to write a plotting utility for the department's brand-new PDP-11 computer.  I remember entering the program bit by bit, writing and then reading paper tapes to make it work.
My graduate work at the University of Wisconsin was organized to avoid computers altogether; however, as I later wrote papers and did my thesis, it was necessary to evaluate a 2-dimensional integral equation.  I managed to achieve this on a Hewlett-Packard desktop calculator that could store 128 commands.  I used a continued-fraction expression of the confluent hypergeometric function and a simple 1-d integration to get the job done without punch cards or standing in line.
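For the curious, here is what that trick looks like as a minimal sketch, in modern Python rather than 128 calculator steps: do the inner integral in closed form (in the thesis the closed form involved the confluent hypergeometric function, evaluated by continued fraction; a toy exponential stands in for it below), then finish the outer integral with a simple 1-d quadrature rule.  The integrand is purely illustrative, not the one from my thesis.

```python
import math

# Toy stand-in for the thesis calculation: evaluate the double integral
#   I = integral_0^1 integral_0^1 exp(-x*y) dy dx
# The inner integral has a closed form, so only a 1-d quadrature is needed.

def inner(x):
    """Closed-form inner integral: integral_0^1 exp(-x*y) dy = (1 - exp(-x)) / x."""
    if x == 0.0:
        return 1.0                      # limiting value as x -> 0
    return (1.0 - math.exp(-x)) / x

def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

print(simpson(inner, 0.0, 1.0))  # ~0.7966, the value of the double integral
```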
My post-doctoral experience, first at the Theoretical Physics Division of Harwell Labs and then at Kansas State University, involved significant data crunching on IBM mainframes, also in FORTRAN.  Harwell had a very useful library of mathematical subroutines that helped improve my efficiency, and I was able to directly enter programs and data into storage using a 110-baud teletype (enclosed in its own soundproof box in my office.)  At K-State, I had my first experience with CRT terminals and remember the great day my terminal was direct-connected at 9600 baud.
At Bell Labs in the 1980's, the first few items you received after being hired included the Bell Labs directory (very important), a book of Unix commands and "The C Programming Language" by Kernighan and Ritchie.  I threw myself wholeheartedly into C; however, what quickly became apparent was my greater aptitude for shell script, nroff, troff, eqn and tbl.  These were the scripting languages of document production in the Unix environment.
My organization, Government Network Planning, often ran proposal management and production in response to federal RFPs.  Project management was document management.  My supervisor and I organized teams of dozens of engineers and produced documents measured in inches of output (I remember one that was about 2 feet thick.)  We weren't satisfied to take the troff macros as found, but went back to the developers to automate page and paragraph security markings.
Next time: from writing software to managing software teams.  What are your memories of computers and programming prior to 1984?

Tuesday, January 18, 2011

Steve Jobs: the Power of Large M Marketing

With the recent announcement of Steve Jobs' medical leave of absence (http://www.nytimes.com/2011/01/18/technology/18apple.html), I have been thinking about his contribution to culture and technology.  He has been called many things, mostly superlative and positive; however, I believe his great contribution is showing technology companies how to market, truly market, their products and services.
Some background... when I was getting started in telecom at Bell Labs, I had a conversation with an older manager who was probably in service marketing in the Long Lines organization.  I asked what he did: he said he ran focus groups to determine willingness to pay for new services and then set the price point for the service.  In further conversation, I learned that he was an engineer by training who, over time, had moved into service marketing and management.  "Where are the marketing people?" I asked.  He responded that AT&T had difficulty hiring and retaining the good ones.  "Why's that?" I asked.  "Because they can make so much more money and have so much better careers in other industries like consumer products."  We then had a conversation about the value created by a good marketing person at companies like Pepsi or Procter & Gamble.  By understanding market segmentation, by manipulating perception and desire through advertising and branding, a dollar's worth of detergent can sell for $2.50 or more.  All that extra value is created by marketing.  In telecommunications, the additional value added by a top brand and an advertising campaign was rarely more than 15%.
Later, as I worked on new product and service launches in various companies, I came to distinguish between what I called "small m marketing" and "large M Marketing."  Small m marketing helps with the collateral, names, trade shows, publications, interviews, press releases, etc. required to launch and sell products and services.  Large M Marketing starts with a fundamental understanding of the market: its segments, the problems that need to be solved, how people solve those problems now, how our competitors address the problem, how our new product or service addresses the problem, and what a solution is worth to each segment.  Then, armed with that knowledge, the product is re-conceived (sometimes abandoned) to optimize value and position for each segment, and a launch plan is developed, segment by segment, customer by customer, until maximum value is extracted from the market.  Sadly, most of my career has been spent working with marketers, not with Marketers.
Getting back to Steve Jobs... he is the master Marketer of personal computing.  He understands the technology deeply, but that is not the most important thing.  The value the technology delivers is the great user experience.  He is the master at getting every detail of the user experience to line up and constantly exceed expectations, year after year, product after product.  His technology is a pleasure to buy, to install, to maintain, to use for work and for play.  The average, non-technical user is the key segment he addresses, not just the segment with the advanced technology degrees.  And, by the way, this allows Apple to command a premium compared to its competition... and not just 15%.  This is his contribution to our industry... I wish him a speedy recovery and return to daily leadership of Apple.  What are your thoughts about Steve Jobs' contribution to technology and culture?

Monday, January 17, 2011

A Reflection on M. L. King Day

Most of us know the story of Dr. King's struggle and of his shift, late in his life, to focus on economic rights as well as civil rights.  Most of us also know that Dr. King was strongly influenced by Mohandas Gandhi's philosophy of non-violent resistance and by the remarkable success of Gandhi's movement in freeing the Indian sub-continent from British rule.
I learned something recently that I found remarkable... that Gandhi in turn was strongly influenced by Leo Tolstoy's religious views (Tolstoyanism, characterized by his opponents alternately as Christian Pacifism and Christian Anarchism) and that the program for Indian independence was outlined by Tolstoy in "A Letter to a Hindoo," written in 1908.  This document, once in Gandhi's hands in 1909, prompted further correspondence between him and Tolstoy in the last year of Tolstoy's life.  Gandhi, in his autobiography, credited Tolstoy as "the greatest apostle of non-violence that the present age has produced."  Today, 100 years after his death, Tolstoy remains controversial in Russia and excommunicated by the Russian Orthodox Church.
I find it very interesting that the non-violent resistance program, demonstrated as successful by Gandhi and King, is so infrequently employed in our times.  I believe the reason for this is the rarity of the leadership that can inspire a large body of people to take the blows and not respond with violence.  This is one of the few ways the powerless can take the moral high ground from the powerful.  The method is simple, but the execution is almost superhuman.

Wikipedia at 10th Anniversary: Tensions in the Crowd-sourcing Model

There have been several good articles in the press stimulated by the 10th anniversary of Wikipedia going live on January 15, 2001 (for example, "Wikipleadia" in The Economist, http://www.economist.com/node/17911276.)  I, for one, have been a heavy user of Wikipedia for almost as long as it has existed.  So much so that I'm unloading my personal library of reference books and encyclopedias that I haven't touched for 5 or more years.  Most charitable organizations will not even take them as donations.
My use of Wikipedia ranges from serious research to entertainment.  The serious research usually starts with Wikipedia for orientation, but does not end there.  My entertainment use is either an aid to solving crossword puzzles or digging for background on movies, authors, actors, writers, etc.  I recognize the limitations of the volunteer author/editor model (crowd-sourcing) used by Wikipedia.  However, about 5 years ago, a study of accuracy indicated that Wikipedia attained about the same level of accuracy as professionally edited and authored reference works such as paper encyclopedias.  As an academic, I had been exposed to the dynamics of the encyclopedia business years ago.  Most articles were written not by top scholars, but by junior scholars (hoping to advance their careers) for very little money.  The editorial workload was also very high, so many of these articles got only limited review.  Wikipedia's model replaces the junior scholar with the passionate amateur (some of whom are also scholars) and the overworked editors and reviewers with volunteers and a structured system of article standards.  Consequently, articles are constantly being rewritten and reviewed, which addresses one of the fundamental problems with paper reference material... it rapidly becomes dated.
The Economist article notes that the number of regular contributors to the English-language encyclopedia has dropped from about 54,000 in March 2007 to about 35,000 in September 2010, and that Wikipedia has been increasingly accused of elitism in its editorial policies and practices.  One argument for the decline in contributors is that most of the subjects of interest to readers have already been written about.  The counter-argument is that, in order to deal with concerns about accuracy and vandalism, the editorial policies and practices have become increasingly restrictive and reactionary, thus frustrating the energy and good will of the volunteer labor force... a group that gets its gratification from seeing its work published and used by the greater Wikipedia community.
In any event, the Wikipedia phenomenon, which attracts 400 million visitors a month, is a remarkable and transformative story of the impact of Internet technology on people's lives.  What's your reaction to the tension between the volunteer authors and editors and Wikipedia's editorial policies and practices to ensure greater accuracy and immunity to vandalism?  How do you use Wikipedia?  How much do you trust what you find there?

Wednesday, January 12, 2011

iPhone on Verizon: What are the differences compared to AT&T?

After years of speculation, Apple and Verizon announced yesterday that, starting 3 February, the iPhone 4 (modified for CDMA technology) would be available to Verizon customers.  Differences between the iPhone on the AT&T network and the iPhone on the Verizon network have been discussed in the press (http://www.nytimes.com/2011/01/12/technology/12phone.html.)
  1. Because Apple had to re-engineer the iPhone 4 for 3G CDMA, they also fixed the external antenna problem that caused so many dropped calls for early iPhone 4 users on AT&T.
  2. Because of CDMA technology limitations, iPhone 4 users on Verizon will not have simultaneous voice and data capability (e.g., talking while looking up directions on the Maps app.)  Apple's COO, Timothy Cook, said he did not think most users would mind.
  3. Also because of CDMA technology limitations, international coverage will be less than for the iPhone on AT&T (which is based on GSM.)  However, Verizon users should have greater 3G coverage within North America.
  4. Finally, for an undisclosed additional cost, the Verizon iPhone 4 will be able to act as a Wi-Fi hotspot for up to 5 laptops or other devices (MiFi capability.)
Most interesting to me, given that Verizon has recently launched true 4G LTE capability, is that the iPhone 4 will not take advantage of it, nor has Verizon or Apple disclosed a date by which an LTE device will be available.  This means that some Android-based devices are likely to have both a processing-power and a bandwidth advantage over the iPhone for the foreseeable future.  This may result in a market segmentation for smart phones similar to the PC market segmentation... Apple focusing on user experience and image while ceding competition on price/performance to others.
The other point of discussion in the press is whether Verizon will be able to handle the traffic that will be generated by these new iPhones.  Since Verizon's network is larger, both in terms of customers and base stations, and because Verizon has put in place serious fiber-optic backhaul (to prepare for LTE), my speculation is that they will have fewer problems and complaints than AT&T has had.
Given that Verizon and AT&T are the top two carriers in the North American market, I don’t expect any competition on price... there will be minor differences in terms and conditions, but nothing compelling.  The competition is expected to be an image war, largely based on network quality perception, with Apple standing profitably on the sidelines.
What are your thoughts on the Verizon iPhone announcement?  Do you expect to own one?

Friday, January 7, 2011

Chinese J-20 Stealth Fighter: Advanced Weapons and Cyber Security

The high-speed taxi tests of the Chinese J-20 4th-generation (stealth) fighter have been getting press lately.  One of the most interesting articles is in Aviation Week (http://www.aviationweek.com/aw/generic/story.jsp?id=news/awst/2011/01/03/AW_01_03_2011_p18-279564.xml&channel=defense), my most reliable source of military technology information for the last 30 years.  In this article by Bill Sweetman, the size of the J-20 is discussed: it is about 75 feet long, longer than the F-22 and comparable to the F-111, suggesting longer range and a heavier munition capability than a general-purpose fighter/interceptor would have.  Several of the article's commenters suggest that this size is associated with delivering advanced air-to-surface missiles, perhaps in association with penetrating a carrier battle group some distance from China.  This is the kind of capability that could change the calculus of the projection of force in the Western Pacific.
The second interesting discussion (from my perspective) is the question of the link between the rapid development of the J-20 and the cyber attacks on the military and defense industries, which started in 2006 and reached a peak in 2009.  Sweetman quotes Lockheed Martin's Chief Information Security Officer Anne Mullins as saying that "six to eight companies" among its subcontractors "had been totally compromised—e-mails, their networks, everything" in late 2009 to early 2010.  Sweetman does not expect this link to be clarified by anyone soon; however, the timing of the attacks aligns well with the probable development schedule for the J-20.
Those of us who have been involved in large, high-tech development projects know that the fast-follower game is much easier to play, both in terms of time and expense, than true technology leadership.  If cyber espionage is less costly than independent development, then the rational strategy is to leverage cyber espionage to shorten development time and expense and close the gap with the leader.  It seems that without clarity about who the adversaries are (such as we had in the cold war), defense against cyber attack becomes a lower priority for the leader, and cyber espionage becomes a rational strategy for any power trying to catch up.
What are your reactions to the J-20?  Do you see a relationship between advanced weapon development in China and cyber attacks on the US defense industry?

Tuesday, January 4, 2011

The New Chip Frontier: Graphics Everywhere

Today's New York Times reports from the Consumer Electronics Show that the competition between the computer chip makers is now based on how well graphics can be processed and displayed, not just on MIPS or power efficiency (http://www.nytimes.com/2011/01/04/technology/04chip.html?nl=technology&emc=techupdateema3).  This is quite interesting given that processors were originally designed without any consideration for I/O; now, it would seem, processors are going to be optimized for very large, very rich, very graphical (pictures and video) I/O.  Nothing else, it would seem, is more valued by users nor more demanding of consumer device architectures.
If we go back in time 20-25 years, the state of the art for video processing was the DVI Pro750 Application Development Platform, developed by Sarnoff Labs and then commercialized by Intel.  This was a full 7-board architecture (3 main boards plus piggyback modules), built on the IBM PC/AT platform and available in 1989 for $22k.  It could process a full 1024 (pixels) x 585 (lines) image in full color at 30 fps.  If we go back another 10 years, there was no such thing as digital video, as processing power was architected for number crunching, not image processing.
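A quick back-of-the-envelope calculation shows why that took seven boards in 1989.  The numbers below assume 24-bit color, which is my assumption rather than the Pro750's documented internal format:

```python
# Rough data rate implied by the DVI Pro750 figures quoted above.
# 24 bits per pixel is an assumption; the actual internal format may have differed.
pixels_per_frame = 1024 * 585      # resolution quoted in the post
frames_per_second = 30
bytes_per_pixel = 3                # assumed 24-bit color

pixel_rate = pixels_per_frame * frames_per_second    # ~18 million pixels per second
byte_rate = pixel_rate * bytes_per_pixel             # ~54 MB per second, uncompressed
print(f"{pixel_rate / 1e6:.1f} Mpixel/s, {byte_rate / 1e6:.0f} MB/s")
```

That is roughly 18 Mpixel/s, on the order of 54 MB/s of uncompressed pixels, which goes a long way toward explaining the dedicated hardware.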
I think of this graphics optimization of the general-purpose processor as the logical result of the fundamental insight Steve Jobs had in the early days of Apple: why not take some of the incremental processing power delivered by Moore's Law and use it to improve the user interface and user experience?  Early Apple computers were significantly more graphical than their PC rivals (although some early game systems were even more graphically capable than the Apples.)  Play out this logic for 20 years and the data structures of the video display permeate the entire consumer appliance architecture.  As processing power increases, there is no need to build separate video processors to drive the displays.
This trend sets up full voice and gestural input.  In the not-too-distant future, we will type less, we will multi-touch less, and we will interact through voice, our facial expressions and body gestures.  The appliance will constantly watch our expressions, much like the family dog does now.  The appliance will react to our boredom, dejection, excitement and inadvertent expressions by modifying the current application or suggesting alternative applications.  This will mean all appliances will have video input as well as video output.  An appliance without video input will be considered "stupid."
What’s your reaction to this trend?  Am I extrapolating wildly or reasonably?  I would appreciate your perspective.

Saturday, January 1, 2011

Happy New Year: Economic Forecasts for 2011

As a corporate strategy leader, I find that the macroeconomic outlook weighs heavily on my mind this time of year.  One of the things I learned from the strategy experience is that knowledge of one's own markets, customers' plans and the projections of industry experts is easily trumped by the overall economy, particularly when much of the spending on telecommunications networks is in response to consumer and general business discretionary spending.  A rising tide can save a poor company and a falling tide can put a good company at risk.
Several discussions I've read about 2011 predict a continuation of the economic dilemma of 2010... that is, continued growth for US companies (at about 4%) and continued high unemployment for US workers (slowly falling from the current 9.3%, the unadjusted November figure.)  This reflects the fact that most large US-domiciled corporations are global and that developing markets are expected to continue strong growth in 2011.  It also reflects the fact that these corporations have continued to shift employment from high-cost regions (US and EU) to low-cost regions (Mexico, Brazil, India, China and eastern Europe.)  The current political dynamics in Washington seem unlikely to impact either of these trends in the coming year.  I agree with this view, although as a freshly unemployed person, I'm not happy about it.
Another big trend for the US is the impending retirement of the boomer generation.  The first wave of boomers will begin retiring in 2011.  I expect the US federal budget battles to be intense, and the impact on boomers' lives will either be the elephant in the room or a matter of open political warfare.  I expect more smoke than fire, but 2011 will be another in a series of battles over boomer entitlement programs.
When I think of the US in the 21st century, I often think about the UK in the 20th century.  The British empire was at its greatest height in 1945, just as it passed the baton to the US and we began to exercise our own superpower empire.  I lived in the UK from 1978 to 1980 and experienced the post-imperial discomfort of the British people, their uneasy relationship with the US, and their ambivalence about their own history.  It has been said that the financial and human costs of maintaining the empire became too great for the British in the first half of the 20th century, so they were willing to let it go.  The British economy had to pay for a global military, and a lot of common people were required to become soldiers and bureaucrats and move to the far corners of the planet to support the empire.
The US may be reaching the same breaking point on its empire, both in terms of cost and of political will.  The simultaneous rise of China as a world power and as a holder of much of the US debt adds a factor that could accelerate the US transition from imperial power to regional player more rapidly than any of us could predict.  This is unlikely to happen in 2011; however, it is something I will be watching over the next few years.

What are your thoughts about the economic prospects for 2011... Good, Bad or Ugly instability?