Monday, March 21, 2011

Cell Phone Reception: Quality Getting Worse

Alex Mindlin reports in the New York Times (http://www.nytimes.com/2011/03/21/technology/21drill.html) that J.D. Power and Associates has announced that cell phone quality has hit a plateau after steady improvement from 2003 to 2009.  The article attributes the plateau to the trend toward indoor usage as people replace or supplement landlines with cell phones.  Regardless of where the calls were made or over what handsets, the quality of calls has declined over the last 6 months.  The article does not differentiate between carriers, but it indicates the worst city for reception is Washington with 18% problem calls, three times the rate for Pittsburgh or Cincinnati, which have the best call quality.
Those of us who have worked in the indoor coverage niche for years have seen this coming.  Improvements in technology and additional spectrum have been overwhelmed both by subscriber growth and by the bandwidth thirst of smartphone applications.  Wireless technology was keeping pace with demand until the introduction of the iPhone.  On the consumer behavior side of the equation, cell phones have evolved from a family road-emergency tool into a personal communications device.  Consequently, if I want to reach an individual, I call the cell, and most of the time that person and phone are indoors.  The only solution to the problem is frequency reuse through cell splitting, and the new cells need to be inside office buildings, public venues and residential buildings.  The most efficient architecture for indoor cell service is the combination of a picocell and a distributed antenna system, which allows the geographical aggregation engineering to be separated from the traffic aggregation engineering within buildings.
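To see why cell splitting is the lever that matters, here is a back-of-the-envelope capacity model... a minimal sketch in which the spectrum, spectral efficiency and cell counts are hypothetical illustrations, not carrier data:

```python
# Back-of-the-envelope capacity model: total capacity over an area
# scales with the number of cells, because each cell reuses the
# same licensed spectrum. All numbers are hypothetical.

SPECTRUM_MHZ = 20.0           # assumed spectrum for one carrier
EFFICIENCY_BPS_PER_HZ = 1.5   # assumed average spectral efficiency

def area_capacity_mbps(num_cells):
    """Aggregate capacity of an area served by num_cells cells,
    each reusing the full spectrum."""
    per_cell_mbps = SPECTRUM_MHZ * EFFICIENCY_BPS_PER_HZ
    return per_cell_mbps * num_cells

# One macro cell vs. the same footprint split into 10 indoor picocells:
print(area_capacity_mbps(1))   # 30.0 Mbps
print(area_capacity_mbps(10))  # 300.0 Mbps -- a 10x gain from reuse alone
```

No coding or antenna improvement delivers that kind of multiplier; only adding cells does, and indoors is where the cells have to go.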
Over the last 5 years, both the carriers and the OEMs have indulged in wishful thinking and have avoided directly confronting these trends in the serving architecture... they have continued to upgrade and build macro towers and have treated indoor coverage as an afterthought at best, or as something that might somehow go away (700 MHz may fix it, smart antennas may fix it, better coding may fix it, etc.).
There has been a move by AT&T in the last year to recognize the importance of indoor coverage, particularly for large venues; however, there is no consensus in the industry.  One could expect a bad quality report to motivate carriers, but with the continued consolidation of the industry, quality as a competitive differentiator may become less important (http://dealbook.nytimes.com/2011/03/20/att-to-buy-t-mobile-usa-for-39-billion/).
What’s your view... has the time come for indoor wireless coverage and capacity to take center stage and have well-architected solutions?  Or will wishful thinking and market power continue to delay a solution to this problem?

Sunday, March 20, 2011

Technology Trends: Transmission Systems

I attended OFC-NFOEC a couple of weeks ago and listened to a panel talk on the issue of “what next beyond 100 Gbps systems.”  This is a familiar issue from the early 1990s, when I was involved in transmission planning for Ameritech.  It was also a critical issue in the late 1990s, when I helped plan transmission system products for Alcatel.  In the 1980s and 1990s, there were two parallel universes, transmission systems and data ports, both evolving aggressively as silicon and optical technology improved.  The transmission rule of thumb was 4x while the data port rule was 10x... that is, the next generation of transmission technology needed to be 4x in speed (and usually only about 2x in price) to justify transitioning whole networks to the new technology.  OC3 (155 Mbps) systems were replaced by OC12 (622 Mbps) systems, which were replaced by OC48 (2.4 Gbps) systems, which were replaced by 10 Gbps systems.  Meanwhile, 10 Mbps Ethernet ports were replaced by 100 Mbps ports, then 1 Gbps ports, then 10 Gbps ports.  Why 4x and why 10x?  Coming from carrier transmission planning, I knew that the 4x rule had its basis in transmission deployment business cases, comparing the cost of upgrading to the higher speed (Up) vs. building additional systems at the same speed (Out.)  Up vs. Out business cases had different results for short haul compared to long haul deployment.  The 4x rule was for long haul, but it ushered in improved economics for short haul systems whenever a long haul upgrade hit volume deployment.
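As an illustration of the Up vs. Out arithmetic, here is a toy comparison built on the 4x-speed / ~2x-price rule of thumb above; all costs are hypothetical, in arbitrary units, chosen only to show why the answer flips between long haul and short haul:

```python
import math

# Toy "Up vs. Out" business case. All costs are hypothetical (arbitrary units).
CURRENT_SYSTEM_COST = 1.0   # one system at today's speed, on existing fiber
NEXT_GEN_COST = 2.0         # ~2x the price for 4x the speed (rule of thumb)

def up(capacity_needed):
    """Upgrade in place: each next-gen system carries 4 units of capacity
    and reuses the existing fiber route."""
    return math.ceil(capacity_needed / 4) * NEXT_GEN_COST

def out(capacity_needed, fiber_route_cost):
    """Build out: one additional same-speed system per unit of capacity,
    each needing fiber (cheap for short haul, expensive for long haul)."""
    return capacity_needed * (CURRENT_SYSTEM_COST + fiber_route_cost)

for demand in (2, 4, 8):
    print(demand, up(demand),
          out(demand, fiber_route_cost=5.0),   # long haul fiber
          out(demand, fiber_route_cost=0.2))   # short haul fiber
# With long haul fiber costs, Up wins decisively; with short haul costs,
# the comparison is much closer, which is why the 4x rule was a long haul rule.
```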
The data port 10x rule had its own Up vs. Out business cases; however, the ports were much more rapidly commoditized and depreciated than transmission systems, and the cost of cabling in those business cases was negligible compared to the cost of laying new fiber routes.  I also think the 10x rule was a kind of self-fulfilling prophecy... once people got on the 10x bandwagon, it was hard to get off.
Back to OFC-NFOEC: the curious thing about this panel was the absence of any discussion of these underlying business cases; rather, the subjective impressions of customers and technologists were explored, and people seemed to be equally divided between continuing the 4x rule (carrier planners and their fellow travelers) and continuing the 10x rule (data networking folks and their allies.)
There is, however, an additional factor that may be more important than either traditional Up vs. Out business cases or the subjective impressions of the people with the purchasing power... the fundamental limitations of optical technology.  Transmission systems are beginning to reach the fundamental limits of glass media and optical coding.  Beyond 100 Gbps, we approach the Shannon limit for single channels and the power limit for glass fiber (before the lasers melt the core of the fiber.)  It may not be possible to achieve either 4x or 10x capacity increases in the future, but rather the more modest 20 to 40% improvements of a mature technology.  The future may well be multi-fiber or require new higher-power waveguide technology.
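For readers who want the arithmetic behind the Shannon limit claim, here is a rough single-channel calculation; the channel bandwidth and SNR are illustrative assumptions, not measured values:

```latex
% Shannon capacity of a single channel: C = B * log2(1 + SNR).
% Illustrative numbers: a 50 GHz optical channel at 20 dB SNR (SNR = 100).
\[
  C = B \log_2(1 + \mathrm{SNR})
    = 50\,\mathrm{GHz} \times \log_2(101)
    \approx 50\,\mathrm{GHz} \times 6.66
    \approx 333\,\mathrm{Gbps}
\]
% Pushing a single channel well past this requires exponentially more SNR
% (each extra bit/s/Hz roughly doubles the required signal power), which
% collides with the fiber power limit noted above.
```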
What are your impressions of the future of optical transmission systems?  Are you a 4x or 10x technologist?  How does that color your perspective on the march of optical systems?

Friday, March 4, 2011

Modular Design: Product Architecture Value-add

One of the disciplines I learned as a network architect was to explore alternative functional architectures... that is, to be deliberate in the choice of where a function is performed in the network.  For example, restoration on failure of a fiber optic link is a function that can be performed centrally (manual restoration coordination, or automated restoration using an operational support system) or in a distributed manner (rerouting in switches or routers, or more rapid restoration using 1-1, ring or mesh transport systems.) Each approach has its strengths and weaknesses; some work well together and many do not.
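To make the alternatives concrete, here is a small sketch of the restoration-time scales commonly cited for these approaches; the figures are order-of-magnitude values from the SONET era, not measurements from any particular network:

```python
# Rough restoration-time scales for the alternatives named above.
# Order-of-magnitude values only, as commonly cited for SONET-era networks.
restoration_options = {
    "manual coordination (centralized)":            "hours",
    "OSS-driven restoration (centralized)":         "minutes",
    "rerouting in switches/routers (distributed)":  "seconds to minutes",
    "1-1, ring or mesh protection (distributed)":   "tens of milliseconds",
}

for approach, timescale in restoration_options.items():
    print(f"{approach:47} {timescale}")
```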
In product design, electronic systems architects generally do a good job of thinking through the functional architecture because there is a tradition of modular design in both hardware (backplanes, boards/blades, daughter boards, etc.) and software (operating system, services, applications, stacks, frameworks, etc.)  However, in the design of mechanical systems, the modular approach is less traditional and sometimes counter-intuitive.  The benefits of standardizing parts and subassemblies are clear from a procurement and manufacturing perspective; however, for simple products the additional planning, coordination and overhead of modularization may not be justified, particularly where the volumes of a new product are hard to predict.
Modular design for mechanical products requires a change of thought... the designer should be planning for success and expecting larger volumes, product evolution and related products.  That is, the designer should be planning a product family over time, not just one product to solve one problem for one customer.  That approach will identify components that will be common across the product family, components that will evolve over time, components that will need to change to address different scales of application, and components that should be customizable to address different operating environments and operational procedures.  The result will be a product family with many variations, built from a smaller number of components.
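A quick sketch of the "many variants from few components" arithmetic; the modules and options below are hypothetical, chosen only to show the combinatorics:

```python
from itertools import product

# Hypothetical module options for a product family; the point is that
# 2 + 3 + 2 = 7 component designs yield 2 * 3 * 2 = 12 product variants.
modules = {
    "chassis":  ["small", "large"],        # different scales of application
    "mounting": ["wall", "rack", "pole"],  # different operating environments
    "finish":   ["indoor", "outdoor"],     # different customer requirements
}

variants = list(product(*modules.values()))
print(len(variants))   # 12
print(variants[0])     # ('small', 'wall', 'indoor')
```

The ratio of variants to component designs only grows as modules are added, which is what makes the up-front planning pay off at volume.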
The pioneers of this type of design for mechanical products have been Scandinavian engineering and manufacturing companies.  There are good economic and cultural reasons for this discipline developing in these countries... design traditions are strong, but also, these companies recognized early that to create sustainable differentiation in a global market where the competition has much lower labor and material costs, they needed designs that used material more efficiently, could be shipped more cost effectively and addressed fundamental customer problems in unique ways.  I find a walk through an IKEA store is an education in modular mechanical design and global competition.
What are your perspectives on modular design, particularly for mechanical products?  Do you own any Scandinavian products?  If so, why?