Sunday, July 10, 2011

Therapeutic Radiation Technology: 30-year technology update

Recently, I began studying the therapeutic radiation technology used to treat cancer with external radiation.  This was a deep update, since the last time I got into the details of X-ray technology was 25 to 30 years ago.  Some of my concerns centered on the spatial and spectral resolution of X-ray production, and I was delighted to find that several fundamental improvements had been made that allow fine tuning of both the spatial and spectral resolution of the X-rays used for radiation therapy.  This, combined with computer control and sophisticated 3-D simulation of radiation deposition, makes the current generation of this technology highly tunable for treating many forms of cancer, particularly brain tumors.  Challenges still remain for treating other forms of cancer, for example, lung tumors.
Long ago, one of the challenges of X-ray technology was the rise time of the high-voltage circuits that drove the X-ray tube.  A slow rise time meant that the harder X-rays were accompanied by soft X-rays during the rise and fall of the voltage.  Different tricks were used 30 years ago to minimize this effect; even so, these soft X-rays resulted in poorer resolution of X-ray imaging and lower spatial resolution of therapeutic X-rays.  Spatial resolution is important, because the objective of radiation therapy is to put radiation into the tumor and to avoid irradiating radiosensitive structures near or behind it.  A rotating gantry spreads this collateral radiation around the body; still, the better the spatial resolution, the more manageable the collateral damage becomes.
Current generation therapeutic X-ray machines don’t use transformers to switch the X-rays on and off; rather, electrons are generated in a linear accelerator and then velocity-selected in a magnetic field before they are allowed to hit a water-cooled tungsten target.  The Full Width at Half Maximum (FWHM) of a beam of 6 MeV electrons is then engineered to be small compared to the FWHM of the bremsstrahlung mechanism that generates the X-rays.  This ensures maximum spatial and spectral resolution, with soft X-radiation suppressed by orders of magnitude.  The beam of X-rays produced is then further collimated and shaped by dynamic baffles to achieve precise control of the radiation contours within the body.
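To make the FWHM point concrete, here is a minimal back-of-the-envelope sketch (my own illustration with assumed spot sizes, not the figures for any particular machine): if the electron-beam spot and the bremsstrahlung spreading are roughly independent and Gaussian, their widths combine in quadrature, so once the electron beam is much narrower than the bremsstrahlung contribution it adds almost nothing to the effective source size.
```python
import math

# Back-of-the-envelope sketch with assumed numbers (not specifications of any
# real machine): independent Gaussian contributions to source size combine in
# quadrature, so a small electron-beam FWHM barely enlarges the X-ray source.

def combined_fwhm(fwhm_electron_mm: float, fwhm_brems_mm: float) -> float:
    """Quadrature sum of two independent Gaussian FWHM contributions."""
    return math.sqrt(fwhm_electron_mm ** 2 + fwhm_brems_mm ** 2)

ASSUMED_BREMS_FWHM_MM = 2.0                  # hypothetical bremsstrahlung spread
for e_spot in (0.5, 1.0, 2.0):               # hypothetical electron-beam FWHMs (mm)
    total = combined_fwhm(e_spot, ASSUMED_BREMS_FWHM_MM)
    print(f"electron {e_spot} mm + brems {ASSUMED_BREMS_FWHM_MM} mm "
          f"-> effective {total:.2f} mm")
```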
Other radiation technologies, such as proton beam therapy, have advantages for certain types of cancer, for example, cancers of the spinal cord.  Unlike X-rays, protons do not irradiate tissue behind the tumor.  For distributed brain tumors, however, proton beams have few advantages over X-rays.  Brain tumors are co-located and interstitial with healthy tissue and require an approach that treats large volumes of the brain with radiation.  In the rare case that a benign tumor can be isolated (e.g., a pituitary tumor), precise radiosurgery is done, usually with 15 MeV or higher X-rays.
Current generation radiotherapy, radiosurgery and neurosurgery are all computer-assisted, using CAT and MRI scans extensively to calibrate and register the placement of radiation or surgical instruments within the body.  This approach de-skills the therapy and surgery and ensures a better outcome for the patient.  I was a young physicist when all these technologies were being developed.  It is wonderful to see how far they have come in the last 30 years.  What is your experience with CAT scans, MRI scans, therapeutic X-rays or computer-assisted surgery?  What’s your perspective on the most important directions for future development of these technologies?

Sunday, June 26, 2011

The Broadband Dilemma and Netflix

Jonathan A. Knee wrote a short article in the July/August edition of The Atlantic that should be required reading for every businessman: “Why Content isn’t King” (http://www.theatlantic.com/magazine/archive/2011/07/why-content-isn-8217-t-king/8551/).  In this article he explains why Netflix has been successful leveraging distribution economies of scale and building loyalty in its customer base despite the predictions of many market analysts.  The Netflix business model runs contrary to the conventional wisdom that consumers are only interested in the content, not in how it is delivered, and that the real profitability in the value chain lies with the content producers.  Knee argues that content producers suffer from two real problems... 1) there is no long-term loyalty to content producers, particularly to the production houses, and 2) there are no substantial economies of scale in production, despite a hundred years of “industrialization” of movie/TV/music/video production.  The only place in the value chain where economies of scale exist is in distribution, and it doesn’t take much to create customer loyalty when competing with the likes of the cable MSOs.
When I was at Ameritech Services in the 1990s, I was involved with a lot of study work and network planning around the deployment of broadband networks to the home (ADSL, FTTC, FTTP, etc.).  The Broadband Dilemma, as it was expressed then, was as follows... how do you build a business case justifying broadband deployment when the vast majority of the investment is required for the distribution network while the customer willingness to pay is associated with the broadband content?  If you could force the bundling of content with the distribution network, as cable companies did, then your business case would work.  If you were forced to unbundle the distribution network and offer it to your competition, then you were stuck with low rates of return on investment and you would never be able to attract capital for the build.  In the mid-1990s, Ameritech concluded that the regulatory environment was the problem, and our competitive response to the threat of cable MSOs offering broadband data and voice was to become a second cable operator in a few of our most vulnerable markets and compete with the MSOs on their turf under the same regulatory structure.  Subsequent regulatory change on the telecom side (the Telecommunications Act of 1996 and subsequent court decisions) allowed deployments like Verizon’s FiOS to move forward in the last decade.
The Netflix model does a number of interesting things.  First, Netflix leverages existing distribution infrastructure investments (the postal system and broadband deployments) and relies on existing customer premises equipment (PCs, Blu-ray players and game systems), so Netflix isn’t picking up all the cost of the distribution network, just some pay-as-they-grow costs for servers, processing centers, advertising, etc.  Second, they have carried the customer loyalty built on bettering the Blockbuster business model into the video-on-demand business... they have not allowed a particular distribution technology to define them, but rather have focused on customer satisfaction.
I have been a happy Netflix customer for years and use their instant streaming service through a Sony Blu-ray player attached to my cable modem via WiFi.  Ironically, Comcast provides the infrastructure for Netflix movies in my house.  What’s your experience with Netflix... how do you view their business model and long-term prospects for growth?

Saturday, June 18, 2011

Google’s Chromebook - The Return of the Dumb Terminal?

David Pogue reviewed his user experience with the Google Chromebook (http://www.nytimes.com/2011/06/16/technology/personaltech/16pogue.html?_r=1&ref=technology) and, not too surprisingly, was underwhelmed.  The Chromebook concept is an element of the vision of cloud computing.  If everything is done in the cloud... storage, software, etc., then the cloud access terminal doesn’t need anything but a browser and wireless access.  The Chromebook is a lightweight netbook, stripped of its hard drive and most local software, that relies on constant wireless access to the cloud for its functionality.  Samsung has built a Chromebook that recently went on sale for about $500.
About 18 years ago, when I was Director of Network Architecture at Ameritech, I ran into an old friend from my Bell Labs days.  He was selling mainframe software and dumb terminals for customer service applications.  This was an architecture already 10 years out of date; however, he emphasized that the PC had many problems when used as the vehicle for customer service, information operators, etc.  The primary problems were software maintenance and network security.  PC software gets out of date, corrupted and infected; users load personal software that degrades PC performance and interferes with approved applications.  In addition, PCs host malware that undermines network integrity and security.  The upshot is a high cost to maintain and fix PCs that are essentially doing no more than providing access to centralized databases.  So, why not solve the problem by returning to the previous architecture of dumb terminal access to a mainframe application or applications?  You can save capital costs and operational costs as well.  For a variety of reasons, this concept didn’t catch on in the mid-1990s; however, there have always been a number of settings where dumb terminals have never gone away... ATMs, POS terminals and lottery terminals all come to mind.
The first thing that strikes me is the cost issue... for giving up most of the functionality and flexibility of the PC, I don’t save much money.  For $500 I can’t get a lightweight laptop, but I can get a netbook or a cheap desktop PC.  Pogue is most disturbed by the issue of constant wireless access.  He found that when he didn’t have wireless access, the Chromebook was just a 3.3 lb paperweight.  On airplanes or in hotel rooms, to do anything he was forced to cough up $7 or more for WiFi access, adding insult to injury.  The suite of cloud functionality and services was not enough to make up for the lost functionality of customizable PC software.  He seemed particularly unhappy about the lack of compatibility with Apple products like the iPod, iPhone and iPad.  Finally, he just felt insecure about not having local storage for the documents and pictures he most values; you never really know if the cloud will preserve and protect your family photos as well as you do.
What’s your reaction to the Chromebook?  Is it the 2nd or 3rd return of the dumb terminal or just wishful cloud thinking?  What would the Chromebook have to cost to justify giving up PC flexibility?

Wednesday, June 15, 2011

Monitoring and Alarms: Persistent Problems of Intelligent Systems

Over the last week, I have spent a great deal of time in hospital rooms listening to monitoring systems in alarm and observing the problems of poorly designed sensors.  I was reminded of the development of large network Operational Support Systems (OSSs) in the 1980s and 1990s.  These systems were designed to improve the availability of network services through rapid fault detection, isolation and repair.  In those days, we talked about “alarm storms”... single or correlated events that would generate hundreds or thousands of alarms, more than operators were capable of analyzing and acting upon.  An example of an event that would generate a large number of alarms is the failure of DC power feeds to a rack of transmission equipment.  Hundreds or thousands of circuits passing through that rack would go into alarm, and these alarms would be reported from equipment all over the network.  The result of alarm storms was overwhelmed network control center personnel, who either couldn’t figure out the root cause of all the alarms or grew fatigued from the constant assault of systems in alarm.  Overwhelmed operators resulted in poor network service availability despite the promise of intelligent systems to improve it.
So to deal with this problem, alarm categorization, alarm filtering and alarm correlation capabilities were built into these systems.  Alarms could be categorized, and minor alarms could be ignored.  Circuit failures that could be mapped to the failure of a single board or system were rolled up into one alarm.  These capabilities are normal now; however, they had to be learned through experience as networks of intelligent systems scaled up.
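As a toy illustration of that kind of roll-up (my own sketch, not any vendor’s OSS), the idea is simply to filter alarms by severity and group the rest by a topology map, so that hundreds of circuit alarms collapse into one root-cause alarm:
```python
from collections import defaultdict

# Toy alarm correlation sketch (hypothetical data, for illustration only):
# filter minor alarms, then group the rest by the shared equipment they
# depend on so the operator sees one root cause instead of many symptoms.

alarms = [
    {"source": "circuit-101", "severity": "major"},
    {"source": "circuit-102", "severity": "major"},
    {"source": "circuit-103", "severity": "minor"},
]
depends_on = {"circuit-101": "rack-7", "circuit-102": "rack-7", "circuit-103": "rack-7"}

def correlate(alarm_list, topology):
    rolled = defaultdict(list)
    for alarm in alarm_list:
        if alarm["severity"] == "minor":          # categorization: ignore minor alarms
            continue
        root = topology.get(alarm["source"], alarm["source"])
        rolled[root].append(alarm)                # correlation: group by common cause
    return rolled

for root, children in correlate(alarms, depends_on).items():
    print(f"ROOT CAUSE {root}: {len(children)} correlated alarms suppressed")
```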
Back to the hospital intensive care unit... a single patient has multiple sensors and multiple machines providing services to the patient.  Each sensor (heart rate, blood pressure, respiration rate, blood oxygen level, etc.) generates an alarm when its parameters go out of range.  Ranges can be customized for each patient; however, some fundamentals do apply.  For example, a blood oxygen level below 90% is bad for any patient.  Machines providing services, such as an IV pump, go into alarm under a variety of circumstances... for example, if the supply bag is empty or if the IV tube is blocked.  Alarms can be suppressed; however, these machines go back into alarm after a time if the trouble is not resolved.
Not all alarms are equally life threatening; however, each monitor or machine behaves as if its alarm is.  Some sensors are very unreliable and are constantly in alarm.  Sometimes, different systems interact with each other to cause alarms.  For example, when a blood pressure cuff is measuring, it cuts off the blood supply to the oxygen sensor on the finger of the same arm, causing an alarm.  Nursing staff, like operators at a network control center, learn to ignore unreliable alarms and work around some monitoring system problems; however, it is easy for them to be overwhelmed by alarm overload and miss critical alarms.  Integrated alarm monitoring and management systems for intensive care seem to be emerging, but they still have a long way to go.  I found that there was no substitute for a patient advocate, providing an escalation path for the critical problems that get lost in the noise.
What’s your experience with monitoring, alarm management and intelligent systems?

Wednesday, June 8, 2011

6 Month Anniversary of Unemployment: Lessons Learned from the Job Market

I’ve looked for work before: 2000, 1996, 1988, 1982 and multiple times in the 1970s.  The current job market for technical executives like me is very difficult... the closest comparison is 1982, at the beginning of Reagan’s recession.  The problem with this job market is that it is a very strong hirer’s market, and the way you find a job is different in kind from how jobs were found during the candidate’s markets of the 1990s.  Job sites are worse than worthless... they are simply engines of automated commoditization of job functions.  LinkedIn is more useful because it does open doors for a differentiated job search; however, it doesn’t close deals.
What you need in this job market is someone special inside the hiring company... someone who is willing to take career risk for you.  This is all about depth of relationships, not breadth of networks.  Your inside person needs to vouch for you, to create a position for you, to create a sense of urgency for you, to unstick stuck hiring processes, to stand up and swear that you are worth the price premium, the special consideration or the extra incentive to close the deal.  This is different in kind from the most recent job markets I’ve participated in.  It is not just a question of doing all the old things you did before, only more often and faster.
In 1982, when I decided to leave physics and Kansas State University, I had a number of industries that were interested in me: oil and gas exploration, weapons development, and telecommunications.   As the recession deepened through the spring, opportunities dried up right and left.  One Oklahoma petroleum lab sent me tickets to fly for a round of interviews and then a few days later, asked for the tickets back as hiring had been shut down.  Bell Labs shut down hiring too, however, a professor friend at KSU had an old high-school buddy from Taiwan whose organization was still looking for physics talent.  They were able to work around the HR system to get me an interview and a job in the Government Communications Department.  In this hirer’s market, Bell Labs had sent me an offer letter with monthly and annual salary figures that didn’t agree.  I was asked to accept the lower of the two and I did.  I had no bargaining power and the salary was still more than twice what I’d made as an Assistant Professor.
Who are these special people who will take a risk on you?  For the most part they are people who have taken a risk on you before... either because you reported to them, they reported to you or in some extraordinary circumstance (special project, emergency task force, mentoring relationship, etc.) a strong bond was created that withstood adversity.  Most people do not have many of these relationships, even if they have worked hard and behaved with integrity.  The right strategy in this market is to dig deep, identify these people (say the “Golden 20” in your career) and work to increase the depth of your relationships.  Communicate frequently and personally, bring quality with each contact, be thoughtful and generous with your time. Do whatever you can to help these people, especially if they are out of work too.
What’s your experience in this job market?  Have you thought about your “Golden 20” professional relationships recently?

Friday, June 3, 2011

New Nikon P500 Camera: The Importance of Robust Usability Testing

For the last 2 months, I have been enjoying a new digital camera, a Nikon P500.  This is a hybrid camera, incorporating features of a full-size digital SLR on a much smaller platform.  Most remarkable is the 32x optical zoom, which makes wildlife photography, particularly of wild birds, much easier.  It is almost more zoom than I can handle.
One problem with the camera is interesting from the perspective of system testing.  The camera battery can be recharged by removing it and placing it in a charger, or you can set the camera to charge the battery through the power feed on its USB interface.  Since a charger was not included in the camera kit, the USB mode seems to be the manufacturer’s preferred mode.  When you connect the camera to a computer using the USB cord, the on/off light turns green, then flashes orange to indicate the charging state... after a few minutes or hours, the on/off light returns to steady green, indicating a full charge.  Thirty minutes later, the camera turns itself off.  While the camera is charging, you can transfer photographs and videos to your computer and perform other functions.
The problem occurs when you disconnect the camera from the computer.  If you press the on/off button first and then disconnect the USB cable, the camera turns off the port and holds its charge.  If you disconnect the USB cable first, the camera does not power down the port and, after a couple of hours, will completely discharge the battery.  This has happened to me on several occasions.  You expect to have a full charge; instead, all you have is a dead camera.  Any photographs for that session are impossible.  The correct procedure is documented in a notes section; however, there is no indication in the troubleshooting section about the danger of changing the disconnect sequence.
If you think about feature verification testing, the camera works perfectly as specified, and neither software testing nor hardware testing would have found this problem.  Only usability testing that explores all the different things users might naturally do wrong would have uncovered it.  The other option is to let the customers discover the problem.  Until I figured out what was wrong and how to fix it, I was ready to return the camera to the store.  Now that the problem is understood, the documentation should be improved to help users troubleshoot it and change their behavior; however, the right answer is to change the software control so that disconnecting the USB cord before pressing the button doesn’t leave the camera discharged.
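Here is a sketch of the kind of firmware change I have in mind (purely hypothetical logic, not Nikon’s actual code): treat removal of the USB cable as an event in its own right, so the port is powered down no matter which order the user chooses.
```python
# Hypothetical firmware sketch, for illustration only (not Nikon's code):
# handle "USB cable removed" as its own event so the port stops drawing
# power regardless of whether the on/off button was pressed first.

class CameraPower:
    def __init__(self):
        self.usb_port_powered = True
        self.camera_on = True

    def on_power_button(self):
        # existing safe path: button pressed before the cable is pulled
        self.camera_on = False
        self.usb_port_powered = False

    def on_usb_disconnected(self):
        # the missing behavior: also shut the port down when the cable is
        # pulled first, instead of letting it drain the battery overnight
        self.usb_port_powered = False

cam = CameraPower()
cam.on_usb_disconnected()        # user pulls the cable first
assert not cam.usb_port_powered  # battery no longer drains
print("port powered:", cam.usb_port_powered)
```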
I’m reminded of a problem that usability testing identified at Ameritech in the early 1990s.  At that time, Ameritech had deployed cross-connects and STPs from the same manufacturer.  Sometimes they were located near each other in central offices.  It turned out that cards from one machine could be inserted into slots on the other machine with disastrous effect.  There were no mechanical lockouts to prevent this card swapping if the operator got confused.  The result of the usability testing was a requirement to prevent the swap mechanically.
What’s your experience with usability testing?  Have you had a similar experience to the P500 camera discharge problem?

Thursday, May 26, 2011

Cell Phones: Sensors and Sensor Relay Platform

Nick Bilton wrote in his Bits Blog in the New York Times on May 19 (The Sensors Are Coming! - NYTimes.com - Bits Blog):  “The coming generation of mobile phones... will have so many new types of smart sensors that your current mobile phone will look pretty dumb.”  Among the sensors he discusses are altimeters, gyroscopic sensors, heart monitors, perspiration monitors, more microphones (presumably for stereophonic noise management and imaging), temperature and humidity monitors, and connections to sensors in your clothes and environment (presumably through Bluetooth and Bluetooth Low Energy interfaces).  Software associated with these sensors can be used for micro-location services, environmental characterization, another layer of user security and many other uses.  The message Nick delivers is primarily to get ready for a new set of applications and the associated privacy issues.
A few years ago, it became clear that a smart phone plus Bluetooth / Bluetooth Low Energy devices on the body or in the body constitutes an excellent health monitoring and management platform for a wide range of chronic conditions.  A three-step process of registering with a management server, downloading a health monitoring and management app and linking to a sensor could change the way some conditions are managed, with significant cost savings compared to institutionalization or frequent visits from a health professional.  I’m sure all these elements are already in place for some patients in some parts of the world.  Once cell phones become platforms for health care delivery, we may see a new set of requirements emerge for cell phones... focused on reliability, availability, partitioning, local caching of critical data, security, battery life, etc.  Something like this happened to wireline phone systems when E911 service became a necessity of life.
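A minimal sketch of that three-step flow, with every name, endpoint and function stubbed out and hypothetical (this is not any real health platform’s API), might look like this:
```python
import uuid

# Hypothetical three-step enrollment flow, for illustration only:
# 1) register with a management server, 2) configure the monitoring app,
# 3) pair a Bluetooth Low Energy sensor.

def register_patient(server_url: str, patient_name: str) -> str:
    """Stand-in for registration with a (hypothetical) management server."""
    print(f"registering {patient_name} at {server_url}")
    return str(uuid.uuid4())                     # patient/device token

def install_app(token: str, condition: str) -> dict:
    """Stand-in for downloading and configuring the monitoring app."""
    return {"token": token, "condition": condition, "sensors": []}

def pair_sensor(app: dict, sensor_id: str) -> None:
    """Stand-in for a BLE pairing step with a body-worn sensor."""
    app["sensors"].append(sensor_id)

token = register_patient("https://example-health-server.invalid", "patient-001")
app = install_app(token, "hypertension")
pair_sensor(app, "ble-bp-cuff-42")
print(app)
```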
What are your thoughts about the proliferation of sensors in cell phones and the cell phone as a sensor monitoring, relay and management platform?  Each new capability that has been added to the cell phone has had unforeseen consequences... for example, GPS capability potentially makes each of us more trackable, and photographic and video capability has made everyone a potential videographic witness to crimes and disasters.  New capabilities have also increased the value of the cell phone to users and increased usage and willingness to pay for advanced services.  What sensors would you like to see on a phone; what sensors would you like to turn off?

Monday, March 21, 2011

Cell Phone Reception: Quality Getting Worse

Alex Mindlin reports in the New York Times (http://www.nytimes.com/2011/03/21/technology/21drill.html) that J.D. Power and Associates has announced that cell phone quality has hit a plateau after steady improvement from 2003 to 2009.  The article attributes this quality plateau to the trend toward indoor usage as people replace or supplement landlines with cell phones.  Regardless of where the calls were made or over what handsets, call quality has declined over the last 6 months.  The article does not differentiate between carriers, but indicates that the worst city for reception is Washington, with 18% problem calls, three times the rate of Pittsburgh or Cincinnati, which have the best call quality.
Those of us who have worked in the indoor coverage niche for years have seen this coming.  Improvements in technology and additional spectrum have been overwhelmed both by subscriber growth and by the bandwidth thirst of smart phone applications.  Wireless technology was keeping pace with demand until the introduction of the iPhone.  On the consumer behavior side of the equation, the cell phone has evolved from the family road emergency tool to the personal communications device.  Consequently, if I want to reach an individual, I call the cell, and most of the time that person and phone are indoors.  The only solution to the problem is frequency reuse through cell splitting, and the new cells need to be inside office buildings, public venues and residential buildings.  The most efficient architecture for indoor cell service is the combination of a picocell and a distributed antenna system, which allows the geographical aggregation engineering to be separated from the traffic aggregation engineering within buildings.
Over the last 5 years, both the carriers and the OEMs have indulged in wishful thinking and have avoided directly confronting these trends in the serving architecture... they have continued to upgrade and build macro towers and have treated indoor as an afterthought at best or something that might go away somehow (700 MHz may fix it, smart antennas may fix it, better coding may fix it, etc.)
There has been a move by AT&T in the last year to recognize the importance of indoor coverage, particularly for large venues; however, there is no consensus in the industry.  One could expect a bad quality report to motivate carriers, but with the continued consolidation of the carriers, quality as a competitive differentiator may become less important (http://dealbook.nytimes.com/2011/03/20/att-to-buy-t-mobile-usa-for-39-billion/).
What’s your view... has the time come for indoor wireless coverage and capacity to take center stage and have well architected solutions?  Or will wishful thinking and market power continue to delay a solution to this problem?

Sunday, March 20, 2011

Technology Trends: Transmission Systems

I attended OFC-NFOEC a couple of weeks ago and listened to a panel talk on the issue of “what next beyond 100 Gbps systems.”  This is a familiar issue from the early 1990s, when I was involved in transmission planning for Ameritech.  It was also a critical issue in the late 1990s, when I helped plan transmission system products for Alcatel.  In the 1980s and 1990s, there were two parallel universes, transmission systems and data ports, both evolving aggressively as silicon and optical technology improved.  The transmission rule of thumb was 4x while the data port rule was 10x... that is, the next generation of transmission technology needed to be 4x in speed (and usually only about 2x in price) to justify transitioning whole networks to the new technology.  OC3 systems were replaced by OC12 systems, which were replaced by OC48 (2.4 Gbps) systems, which were replaced by 10 Gbps systems.  Meanwhile, 10 Mbps Ethernet ports were replaced by 100 Mbps ports, then 1 Gbps ports, then 10 Gbps ports.  Why 4x and why 10x?  Coming from carrier transmission planning, I knew that the 4x rule had its basis in transmission deployment business cases, comparing the cost of upgrading to the higher speed (Up) vs. building additional systems at the same speed (Out).  Up vs. Out business cases had different results for short-haul compared to long-haul deployment.  The 4x rule came from long haul, but it ushered in improved economics for short-haul systems whenever the long-haul upgrade hit volume deployment.
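For readers who haven’t lived through one of these studies, here is a toy Up-vs-Out comparison (my own illustration with made-up costs, not Ameritech’s or anyone’s actual planning model): adding capacity on a route either means upgrading terminals and regenerators in place, or deploying parallel systems at the old speed and possibly pulling new fiber.
```python
# Toy Up-vs-Out sketch with hypothetical, made-up cost units (illustration
# only): "Up" reuses the fiber but needs new terminals and regenerators;
# "Out" multiplies systems at the old speed and may need new fiber.

def up_cost(new_terminals: float, regen_sites: int, regen_cost: float) -> float:
    """Upgrade in place: new terminals plus new regenerators, reuse the fiber."""
    return new_terminals + regen_sites * regen_cost

def out_cost(extra_systems: int, terminal_cost: float, regen_sites: int,
             regen_cost: float, new_fiber_cost: float) -> float:
    """Build out: parallel systems at the old speed, plus any new fiber."""
    return extra_systems * (terminal_cost + regen_sites * regen_cost) + new_fiber_cost

# Hypothetical long-haul route with 10 regenerator sites and no spare fiber.
print("Up :", up_cost(new_terminals=2.0, regen_sites=10, regen_cost=0.3))
print("Out:", out_cost(extra_systems=3, terminal_cost=1.0, regen_sites=10,
                       regen_cost=0.2, new_fiber_cost=5.0))
```
The point of the exercise is not the numbers, which are invented, but that the answer flips depending on regenerator counts and fiber availability, which is why short-haul and long-haul cases came out differently.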
The data port 10x rule had its own Up vs. Out business cases; however, the ports were much more rapidly commoditized and depreciated than transmission systems, and the cost of cabling in those business cases was negligible compared to the cost of laying new fiber routes.  I also think the 10x rule was a kind of self-fulfilling prophecy... once people got on the 10x bandwagon, it was hard to get off.
Back to OFC-NFOEC: the curious thing about this panel was the absence of any discussion of these underlying business cases; rather, the subjective impressions of customers and technologists were explored, and people seemed to be equally divided between continuing the 4x rule (carrier planners and their fellow travelers) and continuing the 10x rule (data networking folks and their allies).
There is, however, an additional factor that may be more important than either traditional Up vs. Out business cases or the subjective impressions of the people with the purchasing power... the fundamental limitations of optical technology.  Transmission systems are beginning to reach the fundamental limits of glass media and optical coding.  Beyond 100 Gbps, we approach the Shannon limit for single channels and the power limit for glass fiber (before the lasers melt the core of the fiber).  It may not be possible to achieve either 4x or 10x capacity increases in the future, but rather the more modest 20 to 40% improvements of a mature technology.  The future may well be multi-fiber or require new, higher power waveguide technology.
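A rough Shannon-limit illustration (my own numbers, chosen only for intuition) shows why the game changes near 100 Gbps per channel: capacity grows linearly with bandwidth but only logarithmically with signal-to-noise ratio, and launch power is capped by the fiber itself.
```python
import math

# Shannon capacity per channel, C = B * log2(1 + SNR), with hypothetical
# bandwidth and SNR values chosen purely for intuition.

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_ghz * math.log2(1.0 + snr_linear)   # GHz * bits/s/Hz = Gbps

for snr_db in (10, 20, 30):   # each +10 dB is 10x the power, for ~50% more capacity
    c = shannon_capacity_gbps(50.0, snr_db)
    print(f"50 GHz channel, SNR {snr_db} dB -> about {c:.0f} Gbps")
```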
What are your impressions of the future of optical transmission systems?  Are you a 4x or 10x technologist?  How does that color your perspective on the march of optical systems?

Friday, March 4, 2011

Modular Design: Product Architecture Value-add

One of the disciplines I learned as a network architect was to explore alternative functional architectures... that is, to be deliberate about where a function is performed in the network.  For example, restoration on failure of a fiber optic link is a function that can be performed centrally (manual restoration coordination, or automated restoration using an operational support system) or in a distributed manner (rerouting in switching or routing vehicles, or more rapid restoration using 1+1, ring or mesh transport systems).  Each approach has its strengths and weaknesses; some work well together and many do not.
In product design, electronic systems architects generally do a good job of thinking through the functional architecture because there is a tradition of modular design in both hardware (backplanes, boards/blades, daughter boards, etc.) and software (operating system, services, applications, stacks, frameworks, etc.)  However, in the design of mechanical systems, the modular approach is less traditional and sometimes counter-intuitive.  The benefits of standardization of parts and subassemblies are clear from a procurement and manufacturing perspective, however, for simple products the additional planning, coordination and overhead of modularization may not be justified, particularly where the volumes of a new product are hard to predict.
Modular design for mechanical products requires a change of thought... the designer should be planning for success and expecting larger volumes, product evolution and related products.  That is, the designer should be planning a product family over time, not just one product to solve one problem for one customer.  That approach will identify components of the product that will be common over the product family, components that will evolve over time, components that will need to change to address different scales of applications and components that should be customizable to address different operating environments and operational procedures.  The result will be a product family with many variations, built from a smaller number of components.
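A toy sketch of that product-family thinking (hypothetical components and variants, just to illustrate the bookkeeping) composes each family member from a shared component library rather than designing it one-off:
```python
# Hypothetical product-family sketch, for illustration only: common parts are
# stable across the family, scaled parts change with application size, and
# regional parts change with the operating environment.

COMMON = ["base-frame", "standard-fasteners"]                      # common to all variants
SCALED = {"small": "2U-enclosure", "large": "6U-enclosure"}        # scale of application
REGIONAL = {"indoor": "vented-cover", "outdoor": "sealed-cover"}   # operating environment

def build_variant(scale: str, environment: str) -> list:
    """Compose a bill of materials for one family member from shared modules."""
    return COMMON + [SCALED[scale], REGIONAL[environment]]

for scale in SCALED:
    for env in REGIONAL:
        print(f"{scale}/{env}: {build_variant(scale, env)}")
```
Four variants fall out of five distinct components, which is the economic point of planning the family rather than the single product.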
The pioneers of this type of design for mechanical products have been Scandinavian engineering and manufacturing companies.  There are good economic and cultural reasons for this discipline developing in these countries... design traditions are strong, but also, these companies recognized early that to create a sustainable differentiation in a global market where the competition has much lower labor and material costs, they needed to create designs that used material more efficiently, could be shipped more cost effectively and addressed fundamental customer problems in unique ways.  I find a walk through an IKEA store is an education in modular mechanical design and global competition.
What are your perspectives on modular design, particularly for mechanical products?  Do you own any Scandinavian products?  If so, why?

Wednesday, February 23, 2011

3-D Printing: Is it ready for prime time?

Every few years, there is a rush of enthusiasm regarding 3-D printing technologies (see, for example, http://www.economist.com/node/18114221).  Each time this topic comes around, I have the same set of questions and wonder whether this is the time it will experience explosive growth and substitution for conventional fabrication technologies.  My questions are usually around materials, precision/resolution and cost.  What range of materials is suitable for this type of 3-D printing technology (plastics, metals, composites, etc.)?  How precise is the “machining” and how reproducible are successive pieces?  How much does it cost per piece (setup, first piece, first 10, 100, 1000, 10000, etc.), and how does this cost compare to conventional fabrication technologies?  What are the technology trends for 3-D printing (cost, precision, materials) compared to the steady improvement in CAD/CAM?
Earlier 3-D printing technologies were dependent on the peculiar properties of particular classes of polymers or gels.  The object formed by the printing was the right shape, but usually the wrong material.  The subsequent step of creating a mold from the printed object and molding in the preferred material reduced the appeal of the 3-D printing process.
This most recent Economist article, like many I’ve seen over the years, is full of vision and technology salesmanship; however, there are a couple of details that are more than just boosterism and may indicate that more widespread adoption will happen in a few years.  The first is a material... titanium.  Titanium has always been a problem material.  Light, strong, creep and crack resistant, with a high melting temperature, it has been a preferred material for supersonic airframes since the days of the SR-71 Blackbird.  However, it is difficult to machine (requiring an inert atmosphere and special tooling), and one fabrication technique for titanium has involved sintering... placing titanium powder in ceramic molds at very high temperatures until the powder fuses at the edges into a solid material.  The quality of the piece is dependent on process trade secrets.  My understanding is that there is a lot of art in the fabrication of titanium.
EADS engineers in Bristol have been using lasers and/or electron beams to fuse 20-30 micron layers of titanium powder into solid structures and then build up, layer by layer, finished titanium parts.  This sounds like a technology that could work for any powdered metal that you might want to fabricate.  Aerospace parts using titanium or titanium alloys could be the breakthrough application for the technology.
The second detail is an interview with Dr. Neil Hopkinson at Loughborough University that discusses an ink-jet technology his team has invented that prints an infra-red-absorbing ink on polymer powder and then uses infra-red heating to fuse the powder layer by layer.  This is sintering (at low temperatures) applied to polymers and should be useful for a variety of polymer compounds.  Dr. Hopkinson thinks their process is already competitive with injection-molding at production runs of around 1000 items and expects it to be competitive on 10,000 to 100,000 item runs in about 5 years.  This is the key economic comparison for adoption.
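The economics behind that claim can be sketched with a simple break-even comparison (all of these costs are made up for illustration; they are not Dr. Hopkinson’s figures): injection molding carries a large fixed tooling cost and a tiny per-piece cost, while an additive process has almost no tooling but a higher cost per piece.
```python
# Break-even sketch with invented cost figures (illustration only, not
# anyone's real data): molding amortizes expensive tooling over the run,
# while an additive process pays more per piece but needs almost no tooling.

def molding_cost(n: int, tooling: float = 20_000.0, per_piece: float = 0.50) -> float:
    return tooling + n * per_piece

def printing_cost(n: int, setup: float = 500.0, per_piece: float = 5.00) -> float:
    return setup + n * per_piece

for n in (100, 1_000, 10_000, 100_000):
    cheaper = "printing" if printing_cost(n) < molding_cost(n) else "molding"
    print(f"{n:>7} pieces: molding ${molding_cost(n):>9,.0f}  "
          f"printing ${printing_cost(n):>9,.0f}  -> {cheaper}")
```
With these invented numbers, printing wins at around a thousand pieces and molding wins at tens of thousands; driving down the additive per-piece cost is what would move the crossover to the 10,000 to 100,000 range Dr. Hopkinson describes.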

What do you think of 3-D printing?  Is the time now, soon or much later in your technological crystal ball?

Tuesday, February 22, 2011

Geoengineering: Technology of Last Resort?

As a resident of planet Earth with children, I have been watching the long, painful public discussions relating to greenhouse gas emissions for decades with a mixture of hope and despair... hope that increasing scientific evidence would galvanize action to address the problem, and despair that this is a problem too difficult for human beings to solve before an ecological disaster kills significant numbers.  I have written previously that in the 1980s one of my colleagues from Harwell Labs identified this as the most significant threat to human beings and that one solution was to introduce stratospheric aerosols to cool the earth and counteract the effects of additional greenhouse gas in the troposphere.  This concept, now called geoengineering, is getting more serious consideration in technical and policy communities (see, for example, http://www.economist.com/node/18175423).
My current thoughts on the subject:
  1. The ecological impact of increased greenhouse gas emissions has already arrived; however, most people can’t connect the human causes (fossil fuel burning) with the human effects (crop failures, starvation, population dislocations, war and genocide).
  2. Our current political and economic systems are too local and too short term to adequately address this problem... they have failed us and are not likely to change soon.  Furthermore, it is more cost effective for interested parties to inject noise into the public debate than to change the economic behavior that has created the problem.
  3. Independent of whether action is taken, or whatever action is taken, to reduce emissions, there is enough CO2 in the atmosphere and enough inertia in the economy that these consequences will continue for decades or generations.
  4. Since we can’t cure the disease, we are then forced to treat the symptoms.
  5. The only proposal that can be implemented quickly and cost effectively is a geoengineering solution that creates sufficient cloud cover in the stratosphere to reflect sunlight back into space to counterbalance the greenhouse effect in the troposphere.
  6. This is an action that can be taken by a few technologically advanced countries that find it in their best interest to stabilize planetary climate to reduce the cost and risk of the human consequences of greenhouse gas emissions.
  7. Although the politics of this solution are difficult, they are less difficult than the politics of previous attempts to limit emissions.  This will not require most people to change their lives and jobs.
  8. Because this is an entirely new technology, there will be a learning curve while cloud creation and control systems are developed.  This will take years to develop, but not decades.  I believe the solution can be developed quickly enough to address the problem.
What are your thoughts on geoengineering?  Are there other approaches to treating the symptoms of global warming that hold more promise?

Monday, February 21, 2011

3-D Audio: New Tricks for an Old Dog

In the March 2011 edition of The Atlantic, Hal Espen interviews Princeton professor Edgar Choueiri about his pure stereo filter (http://www.theatlantic.com/magazine/archive/2011/03/what-perfection-sounds-like/8377/).  He promises “truly 3-D reproduction of a recorded soundfield.”  Espen’s description of the demo in an anechoic room evokes an acoustic experience that surpasses all previous technologies for presence... both for natural sound and music.
The problem that Prof. Choueiri identified, and claims to have solved, is designing a filter that removes crosstalk between stereo channels without audible spectral coloration.  His approach was documented in a 24-page paper described by Mr. Espen as “fiendishly abstruse.”  Since then, Prof. Choueiri has obtained Project X funding at Princeton to code his filter and test it in his 3-D audio lab (not his primary work at Princeton... he teaches applied physics and develops plasma rocket engines for spacecraft propulsion).
Human hearing, including the audio processing in the human brain, is sensitive and sophisticated.  When we hear a conventional stereo recording, with channel cross talk, we have to “work” to process and interpret the signal.  This “work” creates a barrier between the listener and the audio experience the technology was trying to reproduce.  Prof. Choueiri: “The most tiring part of stereo is the fact that the image spatially doesn’t correspond to anything that you ordinarily hear.”  “That’s what drove me to create this thing.  Your brain is getting the right cues, and you relax.  Your brain stops trying to recreate reality.”
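For readers who want the flavor of the underlying problem, here is a textbook crosstalk-cancellation sketch (emphatically not Prof. Choueiri’s filter, which is specifically designed to avoid the spectral coloration this naive version introduces): at each frequency the two speakers reach the two ears through a 2x2 transfer matrix, and pre-filtering the stereo signal with the inverse of that matrix delivers each channel to only one ear.
```python
import numpy as np

# Naive crosstalk cancellation at a single frequency bin (illustration only):
# the speaker-to-ear paths form a 2x2 matrix C; applying C^-1 ahead of the
# speakers makes the net speaker-to-ear response the identity matrix.

def naive_xtc_filter(c_matrix: np.ndarray) -> np.ndarray:
    """Invert the speaker-to-ear transfer matrix at one frequency."""
    return np.linalg.inv(c_matrix)

# Hypothetical transfer matrix: diagonal terms are the direct (same-side)
# paths, off-diagonal terms are the crosstalk paths to the opposite ear.
C = np.array([[1.00, 0.35],
              [0.35, 1.00]])
H = naive_xtc_filter(C)
print(np.round(C @ H, 3))   # ~identity: crosstalk cancelled at this frequency
```
Done this way across the whole band, the inverse filter boosts some frequencies far more than others, which is exactly the audible coloration the Princeton work claims to avoid.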
I love this story because spatial sound reproduction is considered by most technologists to be a solved problem.  Most development since the introduction of the CD has focused on multi-speaker technologies that use brute force to add sound to action-adventure cinema experiences.  There has been some work on digital audio signal processing to allow customization of a stereo signal to reproduce different concert spaces or to correct artifacts from earlier recording technologies, but Prof. Choueiri’s work goes back to the fundamental problem of stereo fidelity, the focus of audio engineering before 1980.
I’m neither an audio engineer, nor an audiophile, however, I have a strong preference for live music over reproduced music because of the quality of the experience.  I’ve assumed that the problem with reproduced music has been the cost of creating a good audio environment in my home, not the fundamental technology.  This work causes me to reexamine that assumption.
What’s your perspective on Prof. Choueiri’s invention?  Is it a great leap forward or just an over-hyped incremental improvement?

Wednesday, February 16, 2011

Decision Support: The Multi-Variable Ranking Systems

In the most recent issue of The New Yorker, Malcolm Gladwell takes on the US News & World Report’s annual “Best Colleges” guide in The Order of Things: What college rankings really tell us (http://www.newyorker.com/reporting/2011/02/14/110214fa_fact_gladwell) (sorry, subscription required for the full-text article).  He dissects the US News college ranking system alongside other examples of ranking systems: Car and Driver’s ranking of three sports cars, a ranking of suicide rates by country, a ranking of law schools and a ranking of geographical regions by level of civilization.  He makes the point that these systems are highly arbitrary and can best be understood by inspecting the categories and weights against the needs of the various audiences that will make decisions based on the information provided.  Most ranking systems reflect the prejudices of the people doing the ranking and less frequently serve the needs of the people seeking advice.
I have also used multi-variable ranking systems to assist in decision making.  These systems were used to help sort out alternative architectures, product portfolio investment decisions or alternative corporate strategies, and as such they often stimulated intense political positioning and lobbying before, during and after decisions were made.  My experience has been that to drive good decisions, there are a number of best practices that have their origin in the politics of shared responsibility in corporate organizations.  Here they are, in no particular order:
  1. Score criteria that are naturally quantitative quantitatively; for other criteria, use qualitative rankings.  For example, project cost or profitability should be scored in dollars, while strategic alignment can be High, Medium or Low.
  2. Get political adversaries to participate in the structuring of the number and weights of criteria... be sure to include the top two or three things each cares most about in the system.
  3. Be prepared to explain everything: the structure of the system and the reasoning behind the scores.  To the extent that a score is controversial, it should have its origin and ownership in the organizations that bear the greatest responsibility for execution after the decision is made.
  4. Find several different points of view to cross-check the results of the ranking system.  If you choose certain proxies for cost, quality, flexibility, etc., cross-check with other proxies.
  5. Run sensitivities on the number of variables and the weights chosen to see how dependent the outcome is on the particular variable set and weights (see the sketch after this list).  When a sensitivity changes the outcome, study what you would have to believe to use the variables and weights that drive the different outcome.
  6. Recognize that it is impossible to put every decision criterion into the system and that ultimately an executive will make a decision based on the system and other factors outside it.  Also recognize that an important function of the system is to get the participants to develop a common language and common models, not to drive a particular result.
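To make practices 1 and 5 concrete, here is a minimal weighted-scoring sketch with a weight sensitivity (the options, criteria, scores and weights are all hypothetical, and higher scores are better on every criterion):
```python
# Hypothetical weighted-scoring sketch, for illustration only: two candidate
# architectures scored on three criteria, ranked under a base weighting and
# again under a cost-heavy weighting to expose sensitivity to the weights.

OPTIONS = {
    "Architecture A": {"cost_advantage": 7, "strategic_fit": 9, "operability": 5},
    "Architecture B": {"cost_advantage": 9, "strategic_fit": 6, "operability": 8},
}

def rank(weights: dict) -> list:
    scores = {name: sum(weights[c] * s for c, s in criteria.items())
              for name, criteria in OPTIONS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

BASE = {"cost_advantage": 0.3, "strategic_fit": 0.5, "operability": 0.2}
print("base weights      :", rank(BASE))

# Sensitivity: shift weight from strategic fit toward cost and see whether
# the winner changes; if it does, ask what you would have to believe to
# prefer that weighting.
SENSITIVITY = {"cost_advantage": 0.6, "strategic_fit": 0.2, "operability": 0.2}
print("cost-heavy weights:", rank(SENSITIVITY))
```
With these invented scores the winner flips between the two weightings, which is exactly the signal that the decision deserves a harder look rather than a mechanical read-out of the spreadsheet.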
What are your experiences with multi-variable ranking systems in support of decision making?  How do you react to Gladwell’s discussion of the limitations of these systems for making difficult apples to oranges comparisons?

Monday, February 14, 2011

Smart Phone Pricing Sophistication: Crisis for Nokia

In the February 12, 2011 issue of The Economist, the lead article in the Business section discusses the recent history of the cell phone, particularly the fall from leadership of Nokia (http://www.economist.com/node/18114689).  The charts on cell phone market share and profitability share in the article tell a remarkable story... Asian competitors like Samsung, HTC and LG have been eating away at Nokia’s market share for years, but the entry of Apple and its iPhone has been a disaster.  Apple rapidly took half the industry’s profitability (a position that Nokia held in 2007) and has crushed 75% of Nokia’s profitability.  This reminds me of another business lesson I learned in the mid-1980s.
In 1986, I was at Bell Labs, supporting new service introduction for AT&T Long Distance.  AT&T was just emerging from decades of rate-of-return regulation and was learning how to manage for profit and deal with anti-trust regulation.  Its 80%+ market share in long distance telephone service drew intense scrutiny from the DoJ and the FCC.  I attended a seminar by one of the AT&T business leaders that described the market position of IBM at the time... 40% market share and 60% of the market profitability.  He asserted that AT&T’s goal should be to manage its share down and its profitability up until it reached a similar position in its marketplace.  Only then would the intense scrutiny and pressure on AT&T subside.  AT&T tried and failed to execute this strategy... market share did come down; however, profitability came down at the same time.
One of the great difficulties companies in telecommunications have, and this applies both to carriers and to equipment providers, is getting pricing right.  I think of pricing as a capability maturity model... the lowest level of pricing maturity is ad hoc: try a price and see what happens.  The next level up is cost-based pricing.  This requires a product manager to have access to accounting information of sufficient quality to fully understand the loaded cost of the product, including customer service components, and how cost varies with volume.  The next level of sophistication is competitive pricing, i.e., understanding your cost structure and the cost structures of your competition so that the behavior of competitors can be anticipated as prices are raised and lowered.
The next level up in sophistication is value-based pricing: understanding the value a product or service brings to the customer by understanding the customer’s cost structure and alternatives to your product or service... not just competitive substitutes, but architectural and operational alternatives.  The highest level of sophistication is normally called brand management, where manipulation of perception, through advertising, drives up the value of your product in the mind of the customer and allows higher levels of price and profitability to be achieved.
Most telecommunications product and service pricing operates at the cost-based or competitive levels of maturity.  Apple, a supreme brand manager, operates a couple of levels higher in the pricing CMM hierarchy.  Consequently, they were able to price high and maintain their perceived difference in value while their competition was starved for profits.  They taught the smart phone market some tough lessons that are forcing companies like Nokia to change or die.
What are your thoughts on the smart phone competitive marketplace?  How do you react to the remarkable history of market share and profit share in mobile phones?

Wednesday, February 9, 2011

Optical Technology: A Few Thoughts about Product Differentiation

The first few years I worked at Bell Labs involved late analog and early digital transmission technology... for example, microwave routes using TD-2 radio technology, long-haul L-carrier coaxial cable systems, FDM and TDMA satellite links and short-haul T1 carrier.  But soon thereafter, and ever since, I’ve been involved with fiber-optic transmission technology, first asynchronous, then SONET/SDH, PON systems, Gigabit Ethernet, 10-Gig Ethernet, etc.  In the last 10 years, much of my work on fiber optics has been at layer 1... cable, connectors and components.
With my physics background, I tend to think of optical technology as a natural extension of radio technology, just at higher frequency... so the full richness of electromagnetic propagation through anisotropic materials is quite natural for me.  But what I’ve learned is that, complex as optical science is, the really difficult problems from an engineering perspective come down to two disciplines: mechanical engineering and materials.  This took me a long time to fully appreciate.
Optical technology is difficult mechanical engineering because optical wavelengths are very short compared to traditional fabrication tolerances.  Consequently, fractions of a micron matter... most transmission lasers have wavelengths in the 0.8 to 1.6 micron range.  Cleanliness matters, and process reproducibility matters.  In retrospect, it is amazing that optical connectors, assembled by hand, work as well as they do.  This is the result of designs that embed tight-tolerance components (ferrules and fibers) inside lower-tolerance connector components.  Equally important are connector assembly processes that are rigorously controlled and tested.
The second discipline, materials design, is so pervasive as to be invisible to many people.  The optical properties of glasses, coating materials, buffer and cabling materials, epoxies, ferrule materials, laser materials, detector materials, abrasives, cleaning materials... they all matter.  The inherent variability of these materials choices is enormous, and when you combine them together in a system, you can generate astronomical complexity.  This complexity is both a curse (how do you control them all?) and a blessing (plenty of opportunity to improve designs).
How do you create differentiated products in the optical space?  Great mechanical designs, protected by patents, are the first step.  These are necessary, but not sufficient, because it is easy enough to engineer around particular patents.  Process trade secrets are the second step, providing better protection because it is much more difficult to determine them by inspecting the product.  But trade secrets can also be lost or leaked to competitors as employees come and go.  The third step is getting access to custom materials, special formulations that enhance the performance of the mechanical designs and processes.  If all three components of this differentiation strategy are working well together, then your optical product will be strongly differentiated in a number of performance dimensions (cost, optical parameters, reliability, etc.) and that differentiation should be sustainable.
What are your thoughts on optical technology and product differentiation in this space?  Do you disagree with my elevation of mechanical and materials engineering above electrical and optical engineering disciplines?

Friday, February 4, 2011

IceCube: Relationship between science and technology

In the February 2011 issue of IEEE Spectrum, there is a report on the IceCube array by physicist Spencer Klein (IceCube: The Polar Particle Hunter) that I find a very interesting case study in the relationship between science and technology.  IceCube is a project to build a neutrino detector deep in the ice near the South Pole.  The scientific objective is to detect very energetic neutrinos of cosmic origin and determine whether they can be traced to a particular place in the cosmos, e.g., the galactic center, or whether they are more uniform, e.g., 3 K blackbody radiation.
The project will instrument a cubic kilometer of ice between 1.5 and 2.5 km below the surface when completed later this year.  It is the third generation of Antarctic ice neutrino detectors... its predecessor, AMANDA, placed photomultiplier detectors beneath the ice in the early 1990s using coaxial transmission technology and later redesigned its array with fiber optics in the late 1990s.  Each generation of technology reflects both deeper scientific understanding of the phenomenon being detected and improved drilling, detection, signal processing and communications technology.
I have two personal connections to this project.  On the scientific side, UW-Madison physics professor Francis Halzen, whom I knew as a grad student in the 1970s, was one of the scientific leaders of AMANDA and lists himself as the principal investigator and co-spokesperson for IceCube on his CV.  I heard him talk about AMANDA in the early 1990s at a special colloquium honoring my major professor, Marvin Ebel, upon his retirement.  I was particularly interested in this project because AMANDA is a network with a survivability requirement.  My engineering connection is through Wayne Kachmar, an extraordinary fiber optic engineer who designed and produced custom fiber optic cable for the second generation of AMANDA in the 1990s.  Wayne reported to me for many years at ADC.
The first generation of AMANDA learned that shallow ice had too many air bubbles to allow the array to act as a telescope, but that by going deeper, highly compressed ice, with no bubbles, could be found below 1400 meters.  The second generation of AMANDA determined that most of the neutrinos detected had their origin in the atmosphere, a product of energetic charged cosmic rays that have been steered through the cosmos by magnetic fields and have “forgotten” their points of origin.  The third generation, IceCube, will be able to distinguish between neutrinos of cosmic and atmospheric origin and will have sophisticated signal processing to screen events to reduce the quantity of data that is transmitted to the surface, relayed to the University of Wisconsin and ultimately analyzed to understand the nature and origin of cosmic neutrinos.  IceCube is a network of distributed processors in constant communication with their neighbors.
I encourage you to read this article and comment below.  I have learned that scientific and technical advancement go hand in hand and that great scientists are often great technologists and vice versa.  What’s your reaction to IceCube?

Wednesday, February 2, 2011

Software Blog 4: TNOP and Software Best Practices

When I joined Bell Labs and started my telecommunications career, one of the cohorts within the Labs (and there were many) was the TNOP alumni.  The Total Network Operations Plan (TNOP) was an activity launched in the 1970s to solve the problem of thousands of different minicomputer-based Operational Support Systems (OSSs) that had been developed within the Bell System’s manifold operations groups.  In some cases, there were dozens of slightly different versions of an OSS solving the same problem.  The TNOP people set out to create an overall OSS architecture for the Bell System and to create an orderly evolution from manual to semi-automated to fully automated operations, where possible.
TNOP alumni had a systems engineering methodology that I found particularly compelling.  They started with functional requirements (independent of the current division of labor between people and machines) and then made deliberate choices about what should be automated, why and when.  These choices were based on the inherent characteristics of work groups vs. machines, the volatility of the requirements, the technology trends of software systems and the needs of the business.  Preserving the Present Method of Operation (PMO) was a constraint, not an objective.  The methodology wasn’t foolproof, but it did keep the cultural biases and organizational politics under control.
When it comes to electronic systems that have a software component, I prefer an architectural methodology similar to TNOP's.  Start with functional requirements, driven as much as possible by solving a significant customer problem.  Make deliberate choices about what should be done in hardware and what should be done in software.  The bias should be towards doing as much in HW as possible because of the difficulty of managing software complexity: the (n+1)-th software feature increases test time roughly as n squared, since the new feature must be verified against its interactions with the n features already present, just to achieve the same level of quality as before.
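A back-of-the-envelope way to see the quadratic growth, treating each pair of features as one interaction that needs its own test (an illustrative model, not a measured rule):

# Illustrative model only: if every pair of features needs an interaction test,
# the number of such tests grows quadratically with the feature count.
def pairwise_tests(n: int) -> int:
    return n * (n - 1) // 2  # number of feature pairs among n features

for n in (5, 10, 20, 40):
    print(n, pairwise_tests(n))
# prints 10, 45, 190, 780: doubling the feature count roughly quadruples
# the interaction tests needed to hold quality constant.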
The next set of choices is to decide what software should be developed internally and what should be purchased or contracted externally.  The bias should be toward purchasing stacks and frameworks from leading suppliers, particularly for functionality that is still evolving in standards bodies and the like.  Internal software resources should be reserved for glue software, testing and market-differentiating capability.  I've seen too many products get bogged down creating 2nd-rate functionality internally when 1st-rate stuff is available in the marketplace.
Project schedules should be built backward from testing and market introduction, rather than forward from product requirements and prototyping.  Too many projects end up with no time to test and tons of pressure to go to market.  Those jeopardies will get called out earlier if there is agreement up front on the intervals required for unit test, system test and verification testing for the product being built.  Project schedules should never be built without the involvement of the software developers... they will have a more refined sense of product complexity than the hardware guys.
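A minimal sketch of the backward-scheduling idea (the milestones, durations and launch date below are hypothetical examples, not from any real project):

# Hypothetical sketch: build the schedule backward from the launch date.
from datetime import date, timedelta

launch = date(2011, 9, 1)             # agreed market-introduction date
intervals = [                         # (milestone, weeks reserved), agreed up front
    ("verification testing", 4),
    ("system test", 6),
    ("unit test", 4),
    ("feature development", 12),
]

end = launch
for milestone, weeks in intervals:
    start = end - timedelta(weeks=weeks)
    print(f"{milestone}: {start} to {end}")
    end = start
print(f"requirements must be frozen by {end}")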
Finally, don’t write any software before a scalable development environment is in place.  Change control of requirements and software is critical to delivery of quality software.
More on software best practices in a later blog, however, what is your reaction to these ideas? 

Friday, January 28, 2011

Software Blog 3: Software Autobiography - Good Software, Bad Software, and Ugly Software.

After Ameritech, I went to Alcatel and became involved with technology strategy and M&A.  I had the opportunity to visit a lot of IP-related startups and saw a lot of teams with very different approaches to system design and development.  Most of them were building a product based on some breakthrough piece of hardware technology: either a novel product architecture (usually motivated by removing a performance limitation), a hero experiment in massive integration, or a new division of labor between HW and SW, usually doing functions in HW that had previously been done in SW.  A few of them were building software products or software stacks, tools and frameworks.  Rarer, but memorable, were a couple of startups that took a balanced approach to systems design... that is, they made really good choices about what functions should go in hardware, what should be kept in software and which partners they should have to develop parts of the HW and SW externally.
A common problem of the HW guys was to plan the software as an afterthought and then to hire a team of software folks to do everything internally, usually because they didn’t have the budget or the time to work with external partners.  The systems that resulted usually went to market with fragile and poorly tested software and, if they didn’t fail right away, didn’t achieve stability until the second or third release of the software.
The most impressive of the balanced teams I met had partnered for their key silicon components, had a simple, robust chassis and had 5 or 6 software partners for various stacks and frameworks.  The internal resources were architects, testers and glue software developers.  They followed the technology trends of their HW partner closely, working with each new chip well before it became generally available.  The software partners were each chosen because they were up-and-coming technologists that would, in 3-5 years, be the best in the business.  The resulting product was leading edge, not bleeding edge, and remarkably stable.  When changes in standards happened, it was their partners’ problem to sort these things out and, because each partner was totally focused on its own function, they almost always got it right the first time and worked the standards bodies to stay on the winning side of the techno-politics.
When I moved to ADC, one of my early responsibilities was to consolidate network management from multiple platforms down to a single platform.  This was a very challenging activity, particularly since we had to execute it during the telecom meltdown in 2001.  We started with the best-of-breed EMS and then, over the course of a year, built a modular product architecture that maximized reuse as new releases and new products to manage came and went.  This flexible client/server architecture was eventually overtaken by thin-client products; however, the modular approach, user interface experience and test automation remained valuable for 10 years.  This software team was built up in Bangalore, India, and eventually became the center of software development excellence (embedded and otherwise) for all of ADC.
Next time... lessons learned from my software journey.  What was your experience with the Good, the Bad and the Ugly of software systems?

Tuesday, January 25, 2011

Software Blog 2: Software Autobiography - Hands-On Software Management

For 2 years at Bell Labs, I was the Supervisor of the NETS Requirements Group.  The Nationwide Emergency Telecommunications System (NETS) was a research activity contracted by the US Federal Government to evaluate the survivability of national telecom assets against various threats, primarily nuclear war.  I supervised a number of activities: network data consolidation and analysis, network design and simulation, and development of hardening and routing requirements for network equipment.  In my first year, we wrote a lot of software to process and validate data, to map the physical and logical structure of the network, to estimate damage and to simulate the performance of the network in its damaged state.  We also developed new design rules and routing tables to extract the maximum performance from the surviving assets.
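A toy version of that damage-and-survivability calculation, purely illustrative and not the NETS software (the topology and the damaged node are made up), would remove the damaged nodes from a small network graph and count how many node pairs can still reach each other:

# Toy survivability check: remove damaged nodes, count surviving connected pairs.
from collections import defaultdict, deque

links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "E"), ("E", "F")]
damaged = {"B"}  # hypothetical destroyed office

adj = defaultdict(set)
for u, v in links:
    if u not in damaged and v not in damaged:
        adj[u].add(v)
        adj[v].add(u)

nodes = {n for link in links for n in link} - damaged

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

pairs = [(u, v) for u in nodes for v in nodes if u < v]
connected = sum(1 for u, v in pairs if v in reachable(u))
print(f"{connected}/{len(pairs)} node pairs still connected after damage")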
When the contract came up for renewal, I negotiated with the government to deliver all the design, simulation and visualization software as a stable product that non-research users could use to evaluate and enhance the survivability of other networks.  I later learned that the final development cost was about 10x what I had estimated.  Part of this was due to creeping features, negotiated when the professional software people took over; part of it was due to the difference in effectiveness of the average software developer compared to my best people (e.g., a PhD in theoretical physics who was brilliant at both the concepts and the execution of network design); and part of it was my own ignorance of what it takes to create a software product vs. a collection of research algorithms.  Also, everything we had done was on a dedicated Unix platform using shell script and C programs; we’d built our own data structures from scratch; there were no stacks, frameworks or significant third-party software that we could leverage; and there was no structured programming to maximize reuse.  We did a lot of our testing and debugging the old-fashioned way... bit by bit and byte by byte.
After I left Bell Labs, I became an individual contributor again at Ameritech Science and Technology.  I evaluated technology and designed access and transport networks.  Most of my work was done within spreadsheets on Macintosh computers.  I became a power user of Wingz, for a time more powerful and graphically sophisticated than Excel.  I was able to characterize the technology, design the network, run all the economics and sensitivities and develop the management presentation on one platform.  The efficiency and effectiveness of this platform, which contained database, scripting and graphical capabilities, had a lasting impact on my attitude towards 3rd-party software.
Later, still at Ameritech, I became very interested in R&D focused on improving software productivity.  I saw the technology trends for software productivity as one of the primary challenges of technology management, especially for the telephone operating companies, which were stuck in a rat’s maze of interlocking, inflexible and expensive operational support systems.  At that time, structured programming had promise but had not yet delivered on it.  Development frameworks were in their infancy and almost all protocol stacks had to be laboriously built and maintained for each development project.
This takes me up to the mid-1990s.  What are your memories of software processes and structures from this period?

Thursday, January 20, 2011

Software Blog 1: Software Autobiography - The Early Days

Everyone’s perception is colored by their history.  I plan to write a few blogs on software, so I thought, in the spirit of full disclosure, I’d write a software memoir.
My first exposure to software was a manual for a PDP-8 minicomputer my father put in my hands when I was 15 or 16.  I recall machine language commands for manipulating binary word structures that resulted in addition, subtraction, multiplication, etc.  Since my father was a chemical process engineer at a nuclear fuel plant, I have no idea how he got this, probably from a control systems salesman.
Later, as a physics student at the University of Missouri, I went through a painful, but very useful, course on FORTRAN programming taught by a member of the physics faculty.  The punch cards, line printers, late hours and rigid syntax of IBM Job Control Language created antibodies that took years to recover from.  I also did a senior project to write a plotting utility for the department’s brand-new PDP-11 computer.  I remember entering the program bit by bit, writing and then reading paper tapes to make it work.
My graduate work at the University of Wisconsin was organized to avoid computers altogether; however, as I later wrote papers and did my thesis, it was necessary to evaluate a two-dimensional integral equation.  I managed to achieve this on a Hewlett-Packard desktop calculator that could store 128 commands.  I used a continued-fraction expression of the confluent hypergeometric function and a simple 1-D integration to get the job done without punch cards or standing in line.
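For flavor, here is a minimal modern sketch of that kind of calculation (not the original calculator program): it evaluates the confluent hypergeometric function M(a, b, z) by its power series, a simpler stand-in for the continued-fraction form I actually used, and does the 1-D integration with a basic trapezoid rule.

# Minimal sketch: M(a, b, z) via its power series sum_n (a)_n/(b)_n * z^n/n!,
# plus a simple trapezoid rule for 1-D integration.
def kummer_M(a, b, z, tol=1e-12):
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        term *= (a + n) * z / ((b + n) * (n + 1))  # next series term
        total += term
        n += 1
    return total

def trapezoid(f, lo, hi, steps=200):
    h = (hi - lo) / steps
    return h * (0.5 * f(lo) + sum(f(lo + i * h) for i in range(1, steps)) + 0.5 * f(hi))

# Check: M(1, 2, x) = (e^x - 1)/x, so this integral over [0, 1] should come
# out near 1.3179.
print(trapezoid(lambda x: kummer_M(1.0, 2.0, x), 0.0, 1.0))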
My post-doctoral experience, first at the Theoretical Physics Division of Harwell Labs and then at Kansas State University, involved significant data crunching on IBM mainframes, also in FORTRAN.  Harwell had a very useful library of mathematical subroutines that helped improve my efficiency, and I was able to directly enter programs and data into storage using a 110-baud teletype (enclosed in its own soundproof box in my office).  At K-State, I had my first experience with CRT terminals and remember the great day my terminal was direct-connected at 9600 baud.
At Bell Labs in the 1980s, the first few items you received after being hired included the Bell Labs directory (very important), a book of Unix commands and “The C Programming Language” by Kernighan and Ritchie.  I threw myself wholeheartedly into C; however, what quickly became apparent was my greater aptitude for shell script, nroff, troff, eqn and tbl.  These were the scripting languages of document production in the Unix environment.
My organization, Government Network Planning, often ran the proposal management and production in response to federal RFPs.  Project management was document management.  My supervisor and I organized teams of dozens of engineers and produced documents measured in inches of output (I remember one that was about 2 feet thick).  We weren’t satisfied to take the troff macros as found, but went back to the developers to automate page and paragraph security markings.
Next time: from writing software to managing software teams.  What are your memories of computers and programming prior to 1984?

Tuesday, January 18, 2011

Steve Jobs: the Power of Large M Marketing

With the recent announcement of Steve Jobs’ medical leave of absence (http://www.nytimes.com/2011/01/18/technology/18apple.html), I have been thinking about his contribution to culture and technology.  He has been called many things, mostly superlative and positive, however, I believe his great contribution is to show technology companies how to market, truly market their products and services.
Some background... when I was getting started in telecom at Bell Labs, I had a conversation with an older manager who was probably in service marketing in the Long Lines organization.  I asked what he did: he said he did focus groups to determine willingness to pay for new services and then he set the price point for the service.  In further conversation, I learned that he was an engineer by training, who, over time, had moved into service marketing and management.  “Where are the marketing people?” I asked.  He responded that AT&T had difficulty hiring and retaining the good ones.  “Why’s that?” I asked.  “Because they can make so much more money and have so much better careers in other industries like consumer products.”  We then had a conversation about the value created by a good marketing person at companies like Pepsi or Procter & Gamble.  By understanding market segmentation, by manipulating perception and desire through advertising and branding, a dollar’s worth of detergent can sell for $2.50 or more.  All that extra value is created by marketing.  In telecommunications, the additional value add of a top brand and an advertising campaign was rarely more than 15%.
Later, as I worked on new product and service launches in various companies, I came to distinguish between what I called “small m marketing” and “large M Marketing.”  What I meant by that is that small m marketing helps with the collateral, names, trade shows, publications, interviews, press releases, etc. required to launch and sell products and services.  Large M Marketing starts with a fundamental understanding of the market: its segments, the problems that need to be solved, how people solve those problems now, how our competitors address the problem, how our new product or service addresses the problem and what a solution is worth to each segment.  Then, armed with that knowledge, the product is re-conceived (sometimes abandoned) to optimize value and position for each segment and to develop a launch plan, segment by segment, customer by customer, until maximum value is extracted from the market.  Sadly, most of my career has been spent working with marketers, not with Marketers.
Getting back to Steve Jobs... he is the master Marketer of personal computing.  He understands the technology deeply, but that is not the most important thing.  The value the technology delivers is the great user experience.  He is the master at getting every detail of the user experience to line up and constantly exceed expectations, year after year, product after product.  His technology is a pleasure to buy, to install, to maintain, to use for work and for play.  The average, non-technical user is the key segment he addresses, not just the segment with the advanced technology degrees.  And, by the way, this allows Apple to command a premium compared to its competition... and not just 15%.  This is his contribution to our industry... I wish him a speedy recovery and return to daily leadership of Apple.  What are your thoughts about Steve Jobs’ contribution to technology and culture?