Wednesday, February 23, 2011

3-D Printing: Is it ready for prime time?

Every few years, there is a rush of enthusiasm for 3-D printing technologies (see, for example, http://www.economist.com/node/18114221). Each time this topic comes around, I have the same set of questions and wonder whether this is the time it will experience explosive growth and substitution for conventional fabrication technologies. My questions are usually about materials, precision/resolution and cost. What range of materials is suited to this type of 3-D printing technology (plastics, metals, composites, etc.)? How precise is the “machining,” and how reproducible are successive pieces? How much does it cost per piece (setup, first piece, first 10, 100, 1,000, 10,000, etc.), and how does this cost compare to conventional fabrication technologies? What are the technology trends for 3-D printing (cost, precision, materials) compared to the steady improvement in CAD/CAM?
Earlier 3-D printing technologies were dependent on the peculiar properties of particular classes of polymers or gels.  The object formed by the printing was the right shape, but usually the wrong material.  The subsequent step of creating a mold from the printed object and molding in the preferred material reduced the appeal of the 3-D printing process.
This most recent Economist article, like many I’ve seen over the years, is full of vision and technology salesmanship. However, there are a couple of details that are more than boosterism and may indicate that more widespread adoption will happen in a few years. The first is a material... titanium. Titanium has always been a problem material. Light, strong, creep- and crack-resistant, with a high melting temperature, it has been a preferred material for supersonic airframes since the days of the SR-71 Blackbird. However, it is difficult to machine (requiring an inert atmosphere and special tooling), and one fabrication technique for titanium has involved sintering... placing titanium powder in ceramic molds at very high temperatures until the powder fuses at the edges into a solid material. The quality of the piece depends on process trade secrets. My understanding is that there is a lot of art in the fabrication of titanium.
EADS engineers in Bristol have been using lasers and/or electron beams to fuse 20-30 micron layers of titanium powder into solid structures, adding layer after layer to build up titanium parts. This sounds like a technology that could work for any powdered metal you might want to fabricate. Aerospace parts using titanium or titanium alloys could be the breakthrough application for the technology.
The second detail is an interview with Dr. Neil Hopkinson at Loughborough University, who discusses an ink-jet technology his team has invented that prints an infra-red-absorbing ink on polymer powder and then uses infra-red heating to fuse the powder layer by layer. This is sintering (at low temperatures) applied to polymers and should be useful for a variety of polymer compounds. Dr. Hopkinson thinks his process is already competitive with injection molding at production runs of around 1,000 items and expects it to be competitive on runs of 10,000 to 100,000 items in about five years. This is the key economic comparison for adoption.
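To put Dr. Hopkinson’s claim in perspective, here is a minimal break-even sketch comparing an additive process (negligible setup, higher cost per part) with injection molding (expensive tooling, cheap parts thereafter). All of the numbers are hypothetical placeholders, not figures from the article or from the Loughborough process.

```python
# Hypothetical break-even comparison: 3-D printing vs. injection molding.
# Assumed cost structure: printing has negligible setup but a higher cost
# per part; molding has expensive tooling but cheap parts thereafter.

def total_cost(setup, per_part, quantity):
    """Total cost of a production run."""
    return setup + per_part * quantity

def break_even_quantity(setup_a, per_part_a, setup_b, per_part_b):
    """Run length at which option A and option B cost the same."""
    return (setup_b - setup_a) / (per_part_a - per_part_b)

# Placeholder numbers for illustration only.
PRINT_SETUP, PRINT_PER_PART = 500.0, 12.0    # $ per run, $ per piece
MOLD_SETUP, MOLD_PER_PART = 20000.0, 1.50    # tooling $, $ per piece

q_star = break_even_quantity(PRINT_SETUP, PRINT_PER_PART,
                             MOLD_SETUP, MOLD_PER_PART)
print(f"Break-even at roughly {q_star:.0f} pieces")
for q in (100, 1_000, 10_000, 100_000):
    print(q, total_cost(PRINT_SETUP, PRINT_PER_PART, q),
          total_cost(MOLD_SETUP, MOLD_PER_PART, q))
```

With these made-up numbers the crossover sits near 2,000 pieces; the interesting question is how materials and process improvements move that crossover toward the 10,000 to 100,000 range Dr. Hopkinson is targeting.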

What do you think of 3-D printing?  Is the time now, soon or much later in your technological crystal ball?

Tuesday, February 22, 2011

Geoengineering: Technology of Last Resort?

As a resident of planet Earth with children, I have been watching the long, painful public discussions of greenhouse gas emissions for decades with a mixture of hope and despair... hope that mounting scientific evidence would galvanize action to address the problem, and despair that this is a problem too difficult for human beings to solve before an ecological disaster kills significant numbers of people. I have written previously that in the 1980s one of my colleagues from Harwell Labs identified this as the most significant threat to human beings, and that one solution was to introduce stratospheric aerosols to cool the earth and counteract the effects of additional greenhouse gas in the troposphere. This concept, now called geoengineering, is getting more serious consideration in technical and policy communities (see, for example, http://www.economist.com/node/18175423).
My current thoughts on the subject:
  1. The ecological impact of increased greenhouse gas emissions is already being felt; however, most people can’t connect the human causes (fossil-fuel burning) with the human effects (crop failures, starvation, population dislocations, war and genocide).
  2. Our current political and economic systems are too local and too short-term to adequately address this problem... they have failed us and are not likely to change soon. Furthermore, it is more cost-effective for interested parties to inject noise into the public debate than to change the economic behavior that created the problem.
  3. Independent of whether or what action is taken to reduce emissions, there is enough CO2 in the atmosphere and enough inertia in the economy that these consequences will continue for decades or generations.
  4. Since we can’t cure the disease, we are then forced to treat the symptoms.
  5. The only proposal that can be implemented quickly and cost-effectively is a geoengineering solution that creates sufficient cloud cover in the stratosphere to reflect sunlight back into space to counterbalance the greenhouse effect in the troposphere (a rough sense of the scale involved is sketched after this list).
  6. This is an action that can be taken by a few technologically advanced countries that find it in their best interest to stabilize planetary climate to reduce the cost and risk of the human consequences of greenhouse gas emissions.
  7. Although the politics of this solution are difficult, this solution is less difficult than previous attempts to limit emissions.  This will not require most people to change their lives and jobs.
  8. Because this is an entirely new technology, there will be a learning curve while cloud creation and control systems are developed.  This will take years to develop, but not decades.  I believe the solution can be developed quickly enough to address the problem.
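To give a rough sense of the scale implied by item 5, here is a back-of-the-envelope sketch using the widely quoted simplified expression for CO2 radiative forcing (ΔF ≈ 5.35 ln(C/C0) W/m²) and round numbers for the planetary energy budget. The concentrations and albedo are textbook approximations; this is an order-of-magnitude illustration, not a design calculation.

```python
import math

# Simplified CO2 radiative forcing: delta_F = 5.35 * ln(C / C0)  [W/m^2]
def co2_forcing(c_now_ppm, c_pre_ppm=280.0):
    return 5.35 * math.log(c_now_ppm / c_pre_ppm)

# Offsetting that forcing by reflecting sunlight requires reducing absorbed
# solar radiation by the same amount.  Globally averaged absorbed sunlight
# is roughly S0/4 * (1 - albedo).
S0 = 1361.0      # solar constant, W/m^2
ALBEDO = 0.30    # approximate planetary albedo

absorbed = S0 / 4 * (1 - ALBEDO)        # ~238 W/m^2
forcing = co2_forcing(390.0)            # CO2 concentration circa 2011, ppm
fraction_to_reflect = forcing / absorbed

print(f"CO2 forcing relative to pre-industrial: {forcing:.2f} W/m^2")
print(f"Fraction of absorbed sunlight to reflect: {fraction_to_reflect:.1%}")
```

The answer comes out below one percent of absorbed sunlight, which is why stratospheric aerosols look feasible on paper; the hard parts are control, side effects and governance.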
What are your thoughts on geoengineering?  Are there other approaches to treating the symptoms of global warming that hold more promise?

Monday, February 21, 2011

3-D Audio: New Tricks for an Old Dog

In the March 2011 edition of The Atlantic, Hal Espen interviews Princeton professor Edgar Choueiri about his pure stereo filter (http://www.theatlantic.com/magazine/archive/2011/03/what-perfection-sounds-like/8377/).  He promises “truly 3-D reproduction of a recorded soundfield.”  Espen’s description of the demo in an anechoic room evokes an acoustic experience that surpasses all previous technologies for presence... both for natural sound and music.
The problem that Prof. Choueiri identified, and claims to have solved, is designing a filter that removes crosstalk between stereo channels without audible spectral coloration. His approach was documented in a 24-page paper described by Mr. Espen as “fiendishly abstruse.” Since then, Prof. Choueiri has obtained Project X funding at Princeton to code his filter and test it in his 3-D audio lab (not his primary work at Princeton... he teaches applied physics and develops plasma rocket engines for spacecraft propulsion).
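For readers unfamiliar with the underlying problem, here is a minimal single-frequency sketch of textbook crosstalk cancellation: if the acoustic paths from the two speakers to the two ears are known, a regularized inverse of that 2×2 transfer matrix delivers each channel mostly to its intended ear. This is the generic formulation, not Prof. Choueiri’s specific filter, and the transfer-matrix values below are made-up placeholders.

```python
import numpy as np

# Textbook crosstalk cancellation at a single frequency.
# H[i, j] = acoustic transfer function from speaker j to ear i.
# Pre-filtering the binaural signal with an (approximate) inverse of H
# makes each ear receive mostly its intended channel.

def crosstalk_canceller(H, regularization=1e-3):
    """Regularized inverse of the 2x2 speaker-to-ear transfer matrix."""
    I = np.eye(2)
    return np.linalg.inv(H.conj().T @ H + regularization * I) @ H.conj().T

# Placeholder transfer matrix: direct paths near 1, crosstalk paths
# attenuated and phase-shifted; the values are purely illustrative.
H = np.array([[1.00 + 0.0j, 0.35 * np.exp(-1j * 0.8)],
              [0.35 * np.exp(-1j * 0.8), 1.00 + 0.0j]])

C = crosstalk_canceller(H)
binaural = np.array([1.0 + 0.0j, 0.0 + 0.0j])  # signal meant for the left ear only
at_ears = H @ (C @ binaural)
print("Signal arriving at the ears:", np.round(at_ears, 3))  # ~[1, 0]
```

In practice the inversion must be carried out across the whole audio band, and the regularization needed to keep such a filter stable is one source of the spectral coloration that Prof. Choueiri’s filter is reported to avoid.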
Human hearing, including the audio processing in the human brain, is sensitive and sophisticated.  When we hear a conventional stereo recording, with channel cross talk, we have to “work” to process and interpret the signal.  This “work” creates a barrier between the listener and the audio experience the technology was trying to reproduce.  Prof. Choueiri: “The most tiring part of stereo is the fact that the image spatially doesn’t correspond to anything that you ordinarily hear.”  “That’s what drove me to create this thing.  Your brain is getting the right cues, and you relax.  Your brain stops trying to recreate reality.”
I love this story because spatial sound reproduction is considered by most technologists to be a solved problem. Most development since the introduction of the CD has gone into multi-speaker technologies that use brute force to add sound to action-adventure cinema experiences. There has been some work on digital audio signal processing to allow customization of a stereo audio signal to reproduce different concert spaces or to correct artifacts from earlier recording technologies, but Prof. Choueiri’s work goes back to the fundamental problem of stereo fidelity, the focus of audio engineering before 1980.
I’m neither an audio engineer nor an audiophile; however, I have a strong preference for live music over reproduced music because of the quality of the experience. I’ve assumed that the problem with reproduced music has been the cost of creating a good audio environment in my home, not the fundamental technology. This work causes me to reexamine that assumption.
What’s your perspective on Prof. Choueiri’s invention?  Is it a great leap forward or just an over-hyped incremental improvement?

Wednesday, February 16, 2011

Decision Support: The Multi-Variable Ranking Systems

In the most recent issue of The New Yorker, Malcolm Gladwell takes on the US News & World Report’s annual “Best Colleges” guide in The Order of Things: What college rankings really tell us (http://www.newyorker.com/reporting/2011/02/14/110214fa_fact_gladwell) (sorry, subscription required for the full-text article). He dissects the US News college ranking system alongside other examples of ranking systems: Car and Driver’s ranking of three sports cars, a ranking of suicide rates by country, a ranking of law schools and a ranking of geographical regions by level of civilization. He makes the point that these systems are highly arbitrary and are best understood by inspecting the categories and weights against the needs of the various audiences that will make decisions based on the information provided. Most ranking systems reflect the prejudices of the people doing the ranking and less frequently serve the needs of the people seeking advice.
I have also used multi-variable ranking systems to assist in decision making in the past.  These systems were used to help sort out alternative architectures, product-portfolio investment decisions or alternative corporate strategies, and as such often stimulated intense political positioning and lobbying before, during and after decisions were made. My experience has been that to drive good decisions, there are a number of best practices that have their origin in the politics of shared responsibility in corporate organizations.  Here they are in no particular order:
  1. Score criteria that are naturally quantitative quantitatively; for other criteria, use qualitative rankings, e.g., project cost or profitability should be scored in $, while strategic alignment can be High, Medium or Low.
  2. Get political adversaries to participate in the structuring of the number and weights of criteria... be sure to include the top two or three things each cares most about in the system.
  3. Be prepared to explain everything: the structure of the system and the reasoning behind the scores.  To the extent that a score is controversial, it should have its origin and ownership in the organizations that bear the greatest responsibility for execution after the decision is made.
  4. Find several different points of view to cross-check the results of the ranking system.  If you choose certain proxies for cost, quality, flexibility, etc., cross-check with other proxies.
  5. Run sensitivities on the number of variables and the weights chosen to see how dependent the outcome is on the particular variable set and weights (a small scoring-and-sensitivity sketch follows this list).  When a sensitivity changes the outcome, study what you would have to believe to use the alternative variables and weights that drive the different outcome.
  6. Recognize that it is impossible to put every decision criterion into the system and that ultimately an executive will make a decision based on the system and other factors outside the system.  Also recognize that an important function of the system is to get the participants to develop common language and models, not to drive a particular result.
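As a concrete illustration of practices 1 and 5, here is a minimal scoring sketch. The alternatives, criteria, weights and scores are invented placeholders; the point is only the mechanics of weighted ranking plus a crude weight-perturbation sensitivity check.

```python
import itertools

# Hypothetical alternatives scored 1-5 on each criterion (qualitative
# criteria such as strategic alignment mapped to Low=1, Medium=3, High=5).
scores = {
    "Architecture A": {"cost": 4, "time_to_market": 3, "strategic_alignment": 5},
    "Architecture B": {"cost": 5, "time_to_market": 4, "strategic_alignment": 2},
    "Architecture C": {"cost": 2, "time_to_market": 5, "strategic_alignment": 4},
}
weights = {"cost": 0.5, "time_to_market": 0.3, "strategic_alignment": 0.2}

def rank(scores, weights):
    totals = {alt: sum(weights[c] * s[c] for c in weights)
              for alt, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print("Baseline ranking:", rank(scores, weights))
baseline_winner = rank(scores, weights)[0][0]

# Crude sensitivity: move 0.1 of weight between each pair of criteria and
# report whenever the winner changes.
for giver, taker in itertools.permutations(weights, 2):
    w = dict(weights)
    delta = min(0.1, w[giver])
    w[giver] -= delta
    w[taker] += delta
    winner = rank(scores, w)[0][0]
    if winner != baseline_winner:
        print(f"Winner flips to {winner} when weight shifts from {giver} to {taker}")
```

If small weight shifts flip the winner, the ranking is telling you more about the weights than about the alternatives, which is exactly Gladwell’s point.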
What are your experiences with multi-variable ranking systems in support of decision making?  How do you react to Gladwell’s discussion of the limitations of these systems for making difficult apples to oranges comparisons?

Monday, February 14, 2011

Smart Phone Pricing Sophistication: Crisis for Nokia

In the February 12, 2011 issue of The Economist, the lead article in the Business section discusses the recent history of the cell phone, focusing on Nokia’s fall from leadership (http://www.economist.com/node/18114689).  The charts on cell phone market share and profitability share (below) tell a remarkable story... Asian competitors like Samsung, HTC and LG have been eating away at Nokia’s market share for years; however, the entry of Apple and its iPhone has been a disaster.  Apple rapidly took half the industry’s profits (a position that Nokia held in 2007) and has crushed 75% of Nokia’s profitability.  This reminds me of another business lesson I learned in the mid-1980s.
[Chart: cell phone market share and share of industry profits, from The Economist]
In 1986, I was at Bell Labs, supporting new service introduction for AT&T Long Distance.  AT&T was just emerging from decades of rate-of-return regulation and was learning how to manage for profit and deal with anti-trust regulations.  Its 80%+ market share in long distance telephone service drew intense scrutiny from the DoJ and the FCC.  I attended a seminar by one of the AT&T business leaders that described the market position of IBM at the time... 40% market share and 60% of the market’s profits.  He asserted that AT&T’s goal should be to manage our share down and our profitability up until we reached a similar position in our marketplace.  Only then would the intense scrutiny and pressure on AT&T subside.  AT&T tried and failed to execute this strategy... market share did come down; however, profitability came down at the same time.
One of the great difficulties companies in telecommunications have, and this applies to both carriers and to equipment providers, is getting pricing right.  I think of pricing as a capability maturity model... the lowest level of pricing maturity is ad hoc: try a price and see what happens.  The next level up is cost-based pricing.  This requires a product manager to have access to accounting information of sufficient quality to fully understand the loaded cost of the product, including customer service components, and how cost varies with volume.  The next level of sophistication is competitive pricing, i.e., understanding your cost structure and the cost structures of your competition so that the  behavior of competition can be understood as prices are raised and lowered.
The next level up in sophistication is value-based pricing: understanding the value a product or service brings to the customer by understanding the customer’s cost structure and the alternatives to your product or service... not just competitive substitutes, but architectural and operational alternatives.  The highest level of sophistication is normally called brand management, where manipulation of perception, through advertising, drives up the value of your product in the mind of the customer and allows higher levels of price and profitability to be achieved.
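Here is a minimal sketch of that maturity ladder, contrasting the price logic at the cost-based and value-based levels. The level names follow the text above; the functions and numbers are hypothetical illustrations, not anything from the article.

```python
from enum import IntEnum

# The pricing maturity ladder described above, lowest to highest.
class PricingMaturity(IntEnum):
    AD_HOC = 1
    COST_BASED = 2
    COMPETITIVE = 3
    VALUE_BASED = 4
    BRAND_MANAGED = 5

def cost_based_price(loaded_unit_cost, target_margin=0.30):
    """Price floor: recover the fully loaded cost plus a target margin."""
    return loaded_unit_cost * (1 + target_margin)

def value_based_price(value_created_per_unit, value_share=0.5):
    """Price ceiling: capture a share of the value created for the customer."""
    return value_created_per_unit * value_share

# Placeholder numbers for a hypothetical product.
print(PricingMaturity.COST_BASED, cost_based_price(loaded_unit_cost=250.0))
print(PricingMaturity.VALUE_BASED, value_based_price(value_created_per_unit=900.0))
```

The gap between those two numbers is the profit pool a brand manager like Apple defends through perception, while cost-based pricers compete it away.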
Most telecommunications product and service pricing operates at the cost-based or competitive levels of maturity.  Apple, a supreme brand manager, operates a couple of levels higher in the pricing CMM hierarchy.  Consequently, it was able to price high and maintain its perceived difference in value while its competitors were starved for profits.  Apple taught the smart phone market some tough lessons that are forcing companies like Nokia to change or die.
What are your thoughts on the smart phone competitive marketplace?  How do you react to the remarkable history of market share and profit share in mobile phones?

Wednesday, February 9, 2011

Optical Technology: A Few Thoughts about Product Differentiation

The first few years I worked at Bell Labs involved late analog and early digital transmission technology... for example, microwave routes using TD-2 radio technology, long-haul L-carrier coaxial cable systems, FDM and TDMA satellite links and short-haul T1 carrier.  But soon thereafter, and ever since, I’ve been involved with fiber-optic transmission technology, first asynchronous, then SONET/SDH, PON systems, Gigabit Ethernet, 10-Gig Ethernet, etc.  In the last 10 years, much of my work on fiber optics has been at layer 1... cable, connectors and components.
With my physics background, I tend to think of optical technology as a natural extension of radio technology, just at higher frequency... so the full richness of electromagnetic propagation through anisotropic materials is quite natural for me.  But what I’ve learned is that, complex as optical science is, the really difficult problems from an engineering perspective come down to two disciplines: mechanical engineering and materials.  This took me a long time to fully appreciate.
Optical technology is difficult mechanical engineering because optical wavelengths are very short compared to traditional fabrication tolerances.  Consequently, fractions of a micron matter... most transmission lasers have wavelengths in the 0.8 to 1.6 micron range.  Cleanliness matters, and process reproducibility matters.  In retrospect, it is amazing that optical connectors, assembled by hand, work as well as they do.  This is the result of designs that embed tight-tolerance components (ferrules and fibers) inside lower-tolerance connector components.  Equally important are connector assembly processes that are rigorously controlled and tested.
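To see why fractions of a micron matter, here is a back-of-the-envelope sketch using the standard Gaussian-mode approximation for single-mode connector loss from lateral core offset, loss ≈ 4.343·(d/w)² dB, where w is the mode-field radius. The mode-field diameter is a typical value for standard single-mode fiber at 1310 nm; the offsets are illustrative.

```python
# Gaussian-mode approximation for the loss caused by a lateral offset d
# between two identical single-mode fibers:
#   loss_dB ≈ 4.343 * (d / w)^2,  w = mode-field radius
MODE_FIELD_DIAMETER_UM = 9.2   # typical for standard SMF at 1310 nm
W_UM = MODE_FIELD_DIAMETER_UM / 2

def lateral_offset_loss_db(offset_um, mode_field_radius_um=W_UM):
    return 4.343 * (offset_um / mode_field_radius_um) ** 2

for d in (0.25, 0.5, 1.0, 2.0):   # microns of core misalignment
    print(f"offset {d:4.2f} um -> {lateral_offset_loss_db(d):.3f} dB")
```

Even a one-micron core offset costs roughly 0.2 dB, which is why ferrule concentricity, polishing and assembly process control dominate connector performance.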
The second discipline, materials design, is so pervasive as to be invisible to many people.  The optical properties of glasses, coating materials, buffer and cabling materials, epoxies, ferrule materials, laser materials, detector materials, abrasives, cleaning materials... it all matters.  The inherent variability of these materials choices is enormous, and when you combine them together in a system, you can generate astronomical complexity.  This complexity is both a curse (how do you control them all?) and a blessing (plenty of opportunity to improve designs).
How do you create differentiated products in the optical space?  Great mechanical designs, protected by patents, are the first step.  These are necessary, but not sufficient, because it is easy enough to engineer around particular patents.  Process trade secrets are the second step, providing better protection because it is much more difficult to determine them by inspection of the product.  But, trade secrets can also be lost to you or leaked to competitors as employees come and go.  The third step is getting access to custom materials, special formulations of materials that enhance the performance of mechanical designs and process improvement. If all three components of this differentiation strategy are working well together, then your optical product will be strongly differentiated in a number of performance dimensions:  cost, optical parameters, reliability, etc. and that differentiation should be sustainable.
What are your thoughts on optical technology and product differentiation in this space?  Do you disagree with my elevation of mechanical and materials engineering above electrical and optical engineering disciplines?

Friday, February 4, 2011

IceCube: Relationship between science and technology

In the February 2011 issue of IEEE Spectrum, there is a report on the IceCube array by physicist Spencer Klein (IceCube: The Polar Particle Hunter) that I find to be a very interesting case study in the relationship between science and technology.  IceCube is a project to build a neutrino detector deep in the ice near the South Pole.  The scientific objective is to detect very energetic neutrinos of cosmic origin and determine whether they can be traced to a particular place in the cosmos, e.g., the galactic center, or whether they are more uniform, e.g., like the 3 K blackbody radiation.
When completed later this year, the project will instrument a cubic kilometer of ice between 1.5 and 2.5 km below the surface.  IceCube is the third generation of Antarctic ice neutrino detectors... its predecessor, AMANDA, placed photomultiplier detectors beneath the ice in the early 1990s using coaxial transmission technology, and the array was later redesigned with fiber optics in the late 1990s.  Each generation of technology reflects both deeper scientific understanding of the phenomenon being detected and improved drilling, detection, signal processing and communications technology.
I have two personal connections to this project.  On the scientific side, UW-Madison physics professor Francis Halzen, whom I knew as a grad student in the 1970s, was one of the scientific leaders of AMANDA and lists himself as the principal investigator and co-spokesperson for IceCube on his CV.  I heard him talk about AMANDA in the early 1990s at a special colloquium honoring my major professor, Marvin Ebel, upon his retirement.  I was particularly interested in this project because AMANDA is a network with a survivability requirement.  My engineering connection is through Wayne Kachmar, an extraordinary fiber optic engineer who designed and produced custom FO cable for the second generation of AMANDA in the 1990s.  Wayne reported to me for many years at ADC.
The first generation of AMANDA learned that shallow ice had too many air bubbles to allow the array to act as a telescope, but that by going deeper, highly compressed ice, with no bubbles, could be found below 1400 meters.  The second generation of AMANDA determined that most of the neutrinos detected had their origin in the atmosphere, a product of energetic charged cosmic rays that have been steered through the cosmos by magnetic fields and have “forgotten” their points of origin.  The third generation, IceCube, will be able to distinguish between neutrinos of cosmic and atmospheric origin and will have sophisticated signal processing to screen events to reduce the quantity of data that is transmitted to the surface, relayed to the University of Wisconsin and ultimately analyzed to understand the nature and origin of cosmic neutrinos.  IceCube is a network of distributed processors in constant communication with their neighbors.
I encourage you to read this article and comment below.  I have learned that scientific and technical advancement go hand in hand and that great scientists are often great technologists and vice versa.  What’s your reaction to IceCube?

Wednesday, February 2, 2011

Software Blog 4: TNOP and Software Best Practices

When I joined Bell Labs and started my telecommunications career, one of the cohorts within the Labs (and there were many) was the TNOP alumni. The Total Network Operations Plan (TNOP) was an activity launched in the 1970s to solve the problem of thousands of different minicomputer-based Operations Support Systems (OSSs) that had been developed within the Bell System’s manifold operations groups.  In some cases, there were dozens of slightly different versions of an OSS solving the same problem.  The TNOP people set out to create an overall OSS architecture for the Bell System and an orderly evolution from manual, to semi-automated, to fully automated operations, where possible.
TNOP alumni had a systems engineering methodology that I found particularly compelling.  They started with functional requirements (independent of the current division of labor between people and machines) and then made deliberate choices about what should be automated, why and when.  These choices were based on the inherent characteristics of work groups vs. machines, the volatility of the requirements, the technology trends of software systems and the needs of the business.  Preserving the Present Method of Operation (PMO) was a constraint, not an objective.  The methodology wasn’t foolproof, but it did keep the cultural biases and organizational politics under control.
When it comes to electronic systems that have a software component, I prefer an architectural methodology similar to TNOP.  Start with functional requirements, driven as much as possible by solving a significant customer problem.  Make deliberate choices about what should be done in hardware and what should be done in software.  The bias should be toward doing as much in hardware as possible because of the difficulty of managing software complexity: roughly speaking, adding the (n+1)-th software feature increases the test time as n squared if you want to achieve the same level of quality you had with n features.
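A minimal sketch of the intuition behind that quadratic claim: if every pair of features can interact and each interaction needs at least one test, the pairwise test count grows as n(n-1)/2, so each new feature adds more test burden than the one before it.

```python
# Pairwise feature-interaction test counts: if each pair of features can
# interact and needs at least one test, test volume grows quadratically.

def pairwise_tests(n_features):
    return n_features * (n_features - 1) // 2

for n in (5, 10, 20, 40):
    added_by_latest = pairwise_tests(n) - pairwise_tests(n - 1)  # = n - 1
    print(f"{n:>3} features: {pairwise_tests(n):>4} pairwise tests "
          f"({added_by_latest} added by the latest feature alone)")
```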
The next set of choices is deciding what software should be developed internally and what should be purchased or contracted externally.  The bias should be toward purchasing stacks and frameworks from leading suppliers, particularly for functionality that is still evolving in standards bodies, etc.  Internal software resources should be reserved for glue software, testing and market-differentiating capability.  I’ve seen too many products get bogged down creating second-rate functionality internally when first-rate software is available in the marketplace.
Project schedules should be built backward from testing and market introduction, rather than forward from product requirements and prototyping.  Too many projects end up with no time to test and tons of pressure to go to market.  Those jeopardies will get called out earlier if there is agreement up front on the intervals required for unit test, system test and verification testing for the product being built.  Project schedules should never be built without the involvement of the software developers... they will have a more refined sense of product complexity than the hardware guys.
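Here is a minimal sketch of building a schedule backward from the market-introduction date, reserving the test intervals first. The phase names and interval lengths are hypothetical placeholders to be replaced by whatever the team agrees to up front.

```python
from datetime import date, timedelta

# Work backward from the launch date, reserving agreed test intervals first.
launch = date(2011, 9, 1)          # hypothetical market introduction
intervals = [                      # (phase, weeks), consumed from launch backward
    ("verification testing", 4),
    ("system test", 6),
    ("unit test", 4),
    ("development", 16),
]

milestone = launch
plan = []
for phase, weeks in intervals:
    start = milestone - timedelta(weeks=weeks)
    plan.append((phase, start, milestone))
    milestone = start

for phase, start, end in reversed(plan):
    print(f"{phase:<22} {start} -> {end}")
print("Required project start:", milestone)
```

If the required start date is already in the past, the jeopardy shows up on day one instead of in the last month before launch.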
Finally, don’t write any software before a scalable development environment is in place.  Change control of requirements and software is critical to the delivery of quality software.
More on software best practices in a later blog, however, what is your reaction to these ideas?