Sunday, July 10, 2011

Therapeutic Radiation Technology: 30-year technology update

Recently, I began studying the therapeutic radiation technology used to treat cancer with external radiation.  This was a deep update, since the last time I got into the details of X-ray technology was 25 to 30 years ago.  Some of my concerns centered on the spatial and spectral resolution of X-ray production, and I was delighted to find that several fundamental improvements have been made that allow fine tuning of both the spatial and spectral resolution of the X-rays used for radiation therapy.  This, combined with computer control and sophisticated 3-D simulation of radiation deposition, makes the current generation of this technology highly tunable for treating many forms of cancer, particularly brain tumors.  Challenges still remain for treating other forms of cancer, for example, lung tumors.
Long ago, one of the challenges of X-ray technology was the rise time of the high-voltage circuits that drove the X-ray tube.  A slow rise time meant that the harder X-rays were accompanied by soft X-rays produced during the rise and fall of the voltage.  Different tricks were used 30 years ago to minimize this effect; even so, these soft X-rays meant poorer resolution in X-ray imaging and lower spatial resolution in therapeutic X-rays.  Spatial resolution is important because the objective of radiation therapy is to put radiation into the tumor while avoiding radiosensitive structures near or behind it.  A rotating gantry spreads this collateral radiation around the body; still, the better the spatial resolution, the more manageable the collateral damage becomes.
Current-generation therapeutic X-ray machines don’t use transformers to switch the X-rays on and off.  Instead, electrons are generated in a linear accelerator and velocity-selected in a magnetic field before they are allowed to hit a water-cooled tungsten target.  The Full Width at Half Maximum (FWHM) of a beam of 6 MeV electrons is then engineered to be small compared to the FWHM of the bremsstrahlung mechanism that generates the X-rays.  This ensures maximum spatial and spectral resolution, with soft X-radiation suppressed by orders of magnitude.  The beam of X-rays produced is then further collimated and shaped by dynamic baffles to achieve precise control of the radiation contours within the body.
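To see why the electron beam’s spread hardly matters once it is small compared to the intrinsic bremsstrahlung width, here is a minimal sketch that models both broadening mechanisms as Gaussians (an assumption for illustration only; the real bremsstrahlung spectrum is a broad continuum) and combines their widths in quadrature.  The numbers are hypothetical, not measurements from any particular machine.

```python
import math

def combined_fwhm(fwhm_beam, fwhm_brem):
    """When two independent Gaussian broadening mechanisms are convolved,
    their variances add, so the FWHMs combine in quadrature."""
    return math.sqrt(fwhm_beam**2 + fwhm_brem**2)

# Hypothetical widths (MeV), chosen only to show the scaling.
fwhm_beam = 0.05   # tightly velocity-selected electron beam
fwhm_brem = 1.0    # intrinsic bremsstrahlung broadening

total = combined_fwhm(fwhm_beam, fwhm_brem)
excess = 100 * (total - fwhm_brem) / fwhm_brem
print(f"total FWHM = {total:.4f} MeV ({excess:.2f}% above bremsstrahlung alone)")
```

With the beam term twenty times smaller than the bremsstrahlung term, the combined width is only a little over a tenth of a percent larger than the bremsstrahlung width alone, which is the sense in which a tight electron beam stops degrading the spectrum.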
Other radiation technologies, such as proton beam therapy, have advantages for certain types of cancer, for example, cancers of the spinal cord.  Unlike X-rays, protons do not irradiate tissue behind the tumor.  For distributed brain tumors, however, proton beams have few advantages over X-rays.  Brain tumors are co-located and interstitial with healthy tissue and require an approach that treats large volumes of the brain with radiation.  In the rare case that a benign tumor can be isolated (e.g., a pituitary tumor), precise radiosurgery can be done, usually with 15 MeV or higher X-rays.
Current-generation radiotherapy, radiosurgery and neurosurgery are all computer-assisted, using CAT and MRI scans extensively to calibrate and register the placement of radiation or surgical instruments within the body.  This approach de-skills the therapy and surgery and ensures a better outcome for the patient.  I was a young physicist when all these technologies were being developed.  It is wonderful to see how far they have come in the last 30 years.  What is your experience with CAT scans, MRI scans, therapeutic X-rays or computer-assisted surgery?  What’s your perspective on the most important directions for future development of these technologies?

Sunday, June 26, 2011

The Broadband Dilemma and Netflix

Jonathan A. Knee wrote a short article in the July/August edition of The Atlantic that should be required reading for every businessman, “Why Content isn’t King” (http://www.theatlantic.com/magazine/archive/2011/07/why-content-isn-8217-t-king/8551/).  In it, he explains why Netflix has been successful leveraging distribution economies of scale and building loyalty in its customer base, despite the predictions of many market analysts.  The Netflix business model runs contrary to the conventional wisdom that consumers are only interested in the content, not in how it is delivered, and that the real profitability in the value chain lies with the content producers.  Knee argues that content producers suffer from two real problems... 1) there is no long-term loyalty to content producers, particularly to the production houses, and 2) there are no substantial economies of scale in production, despite a hundred years of “industrialization” of movie/TV/music/video production.  The only place in the value chain where economies of scale exist is in distribution, and it doesn’t take much to create customer loyalty when competing with the likes of the cable MSO’s.
When I was at Ameritech Services in the 1990’s, I was involved with a lot of study work and network planning around the deployment of broadband networks to the home (ADSL, FTTC, FTTP, etc.).  The Broadband Dilemma, as it was expressed then, was this... how do you build a business case justifying broadband deployment when the vast majority of the investment is required for the distribution network, while the customer willingness to pay is associated with the broadband content?  If you could force the bundling of content with the distribution network, as cable companies did, then your business case would work.  If you were forced to unbundle the distribution network and offer it to your competition, then you were stuck with low rates of return on investment and you would never be able to attract capital for the build.  In the mid-1990s, Ameritech concluded that the regulatory environment was the problem, and our competitive response to the threat of cable MSO’s offering broadband data and voice was to become a second cable operator in a few of our most vulnerable markets and compete with the MSO’s on their turf under the same regulatory structure.  Subsequent regulatory change on the telecom side (the Telecommunications Act of 1996 and the court decisions that followed) allowed deployments like Verizon’s FIOS to move forward in the last decade.
The Netflix model does a number of interesting things.  First, Netflix leverages existing distribution infrastructure investments (the postal system and broadband deployments) and relies on existing customer premises equipment (PCs, Blu-ray players and game systems), so Netflix isn’t picking up all the cost of the distribution network, just some pay-as-they-grow costs for servers, processing centers, advertising, etc.  Netflix has also bridged the customer loyalty built on bettering the Blockbuster business model into the video-on-demand business... it has not allowed a particular distribution technology to define it, but rather has focused on customer satisfaction.
I have been a happy Netflix customer for years and use their instant streaming service through a Sony Blu-ray player attached to my cable modem via WiFi.  Ironically, Comcast provides the infrastructure for Netflix movies in my house.  What’s your experience with Netflix... how do you view their business model and long-term prospects for growth?

Saturday, June 18, 2011

Google’s Chromebook - The Return of the Dumb Terminal?

David Pogue reviewed his user experience with the Google Chromebook (http://www.nytimes.com/2011/06/16/technology/personaltech/16pogue.html?_r=1&ref=technology) and, not too surprisingly, was underwhelmed.  The Chromebook concept is an element of the vision of cloud computing.  If everything is done in the cloud... storage, software, etc., then the cloud access terminal doesn’t need anything but a browser and wireless access.  The Chromebook is a lightweight netbook, stripped of a hard drive and most software, that relies on constant wireless access to the cloud for its functionality.  Samsung has built a Chromebook that recently went on sale for about $500.
About 18 years ago, when I was Director of Network Architecture at Ameritech, I ran into an old friend from my Bell Labs days.  He was selling mainframe software and dumb terminals for customer service applications.  This was an architecture already 10 years out of date; however, he emphasized that the PC had many problems when used as the vehicle for customer service, information operators, etc.  The primary problems were software maintenance and network security.  PC software gets out of date, corrupted and infected; users load personal software that impacts PC performance and other applications.  In addition, PCs host malware that undermines network integrity and security.  The upshot is a high cost to maintain and fix PCs that are essentially doing no more than providing access to centralized databases.  So why not solve the problem by returning to the previous architecture of dumb terminal access to mainframe applications?  You save capital costs and operational costs as well.  For a variety of reasons, this concept didn’t catch on in the mid-1990s; however, there have always been settings where dumb terminals never went away... ATMs, POS terminals and lottery terminals all come to mind.
The first thing that strikes me is the cost issue... in exchange for giving up most of the functionality and flexibility of the PC, I don’t save much money.  For $500 I can’t get a lightweight laptop, but I can get a netbook or a cheap desktop PC.  Pogue was most disturbed by the requirement for constant wireless access.  He found that when he didn’t have wireless access, the Chromebook was just a 3.3 lb paperweight.  On airplanes or in hotel rooms, to do anything he was forced to cough up $7 and up for WiFi access, adding insult to injury.  The suite of cloud functionality and services was not enough to make up for the lost functionality of customizable PC software.  He seemed particularly unhappy about the lack of compatibility with Apple products like the iPod, iPhone and iPad.  Finally, he just felt insecure about not having local storage for the documents and pictures he values most; you never really know whether the cloud will preserve and protect your family photos as well as you do.
What’s your reaction to the Chromebook?  Is it the 2nd or 3rd return of the dumb terminal or just wishful cloud thinking?  What would the Chromebook have to cost to justify giving up PC flexibility?

Wednesday, June 15, 2011

Monitoring and Alarms: Persistent Problems of Intelligent Systems

Over the last week, I have spent a great deal of time in hospital rooms listening to monitoring systems in alarm and observing the problems of poorly designed sensors.  I was reminded of the development of large network Operational Support Systems (OSSs) in the 1980’s and 1990’s.  These systems were designed to improve the availability of network services through rapid fault detection, isolation and repair.  In those days, we talked about “alarm storms”... single or correlated events that would generate hundreds or thousands of alarms, more than operators were capable of analyzing and acting upon.  An example of an event that would generate a large number of alarms is the failure of the DC power feeds to a rack of transmission equipment.  Hundreds or thousands of circuits passing through that rack would go into alarm, and the alarms would be reported from equipment all over the network.  The result of an alarm storm was overwhelmed network control center personnel, who either couldn’t figure out the root cause of all the alarms or grew fatigued from the constant assault of systems in alarm.  Overwhelmed operators meant poor network service availability, despite the promise of intelligent systems to improve it.
So, to deal with this problem, alarm categorization, alarm filtering and alarm correlation capabilities were built into these systems.  Alarms could be categorized, and minor alarms could be ignored.  Circuit failures that could be mapped to the failure of a single board or system were rolled up into one alarm.  These capabilities are normal now; however, they had to be learned through experience as networks of intelligent systems scaled up.
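As a minimal sketch of what that kind of categorization, filtering and roll-up looks like (the field names, severities and grouping rule here are hypothetical, not taken from any particular OSS):

```python
from collections import defaultdict

SEVERITY_RANK = {"minor": 0, "major": 1, "critical": 2}

def correlate_alarms(raw_alarms):
    """Filter out minor alarms and roll up alarms that share a root-cause
    resource (e.g., the rack whose DC power feed failed) into one
    correlated alarm per resource."""
    groups = defaultdict(list)
    for alarm in raw_alarms:
        if alarm["severity"] == "minor":
            continue                                  # filtering
        groups[alarm["root_resource"]].append(alarm)  # correlation key
    return [
        {
            "root_resource": resource,
            "severity": max((a["severity"] for a in group),
                            key=SEVERITY_RANK.get),
            "raw_alarm_count": len(group),            # how many were rolled up
        }
        for resource, group in groups.items()
    ]

raw = [
    {"circuit": "c101", "root_resource": "rack-7", "severity": "major"},
    {"circuit": "c102", "root_resource": "rack-7", "severity": "critical"},
    {"circuit": "c103", "root_resource": "rack-7", "severity": "minor"},
]
print(correlate_alarms(raw))  # one correlated alarm for rack-7, not three
```

The point is that the operator sees one actionable alarm pointing at the failed rack instead of a flood of circuit alarms.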
Back to the hospital intensive care unit... a single patient has multiple sensors and multiple machines providing services to the patient.  Each sensor (heart rate, blood pressure, respiration rate, blood oxygen level, etc.) generates an alarm when its parameter gets out of range.  Ranges can be customized for each patient, though some fundamentals apply; for example, a blood oxygen level below 90% is bad for any patient.  Machines providing services, such as an IV machine, go into alarm under a variety of circumstances... for example, if the supply bag is empty or if the IV tube is blocked.  Alarms can be suppressed; however, these machines go back into alarm after a time if the trouble is not resolved.
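Here is a hedged sketch of the threshold-and-suppression behavior just described (the range and the re-alarm interval are illustrative only, not clinical values):

```python
import time

class VitalSignChannel:
    """Toy model of one bedside sensor channel: a per-patient range,
    manual suppression, and automatic re-alarm after a timeout."""

    def __init__(self, name, low, high, realarm_seconds=120):
        self.name = name
        self.low, self.high = low, high
        self.realarm_seconds = realarm_seconds
        self.suppressed_until = 0.0

    def suppress(self):
        # Silence the alarm, but only for a limited time.
        self.suppressed_until = time.time() + self.realarm_seconds

    def check(self, reading):
        out_of_range = not (self.low <= reading <= self.high)
        if out_of_range and time.time() >= self.suppressed_until:
            return f"ALARM: {self.name} = {reading}"
        return None

# Illustrative thresholds only, customized per patient in practice.
spo2 = VitalSignChannel("blood oxygen (%)", low=90, high=100)
print(spo2.check(87))   # alarms
spo2.suppress()
print(spo2.check(87))   # silent now, but re-alarms once the timeout expires
```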
Not all alarms are equally life threatening; however, each monitor or machine behaves as if its alarms are.  Some sensors are very unreliable and are constantly in alarm, and sometimes different systems interact with each other to cause alarms.  For example, when a blood pressure cuff is taking a measurement, it cuts off the blood supply to the oxygen sensor on the finger of the same arm, causing an alarm.  Nursing staff, like operators at a network control center, learn to ignore unreliable alarms and work around some monitoring system problems; however, it is easy for them to be overwhelmed by alarm overload and miss critical alarms.  Integrated alarm monitoring and management systems for intensive care seem to be emerging, but they still have a long way to go.  I found that there was no substitute for a patient advocate providing an escalation path for the critical problems that get lost in the noise.
What’s your experience with monitoring, alarm management and intelligent systems?

Wednesday, June 8, 2011

6 Month Anniversary of Unemployment: Lessons Learned from the Job Market

I’ve looked for work before: 2000, 1996, 1988, 1982 and multiple times in the 1970’s.  The current job market for technical executives like me is very difficult... the closest comparison is 1982, at the beginning of the Reagan recession.  The problem is that this is a very strong hirer’s market, and the way you find a job is different in kind from how jobs were found during the candidate’s markets of the 1990’s.  Job sites are worse than worthless... they are simply engines of automated commoditization of job functions.  LinkedIn is more useful because it does open doors for a differentiated job search; however, it doesn’t close deals.
What you need in this job market is someone special inside the hiring company... someone who is willing to take career risk for you.  This is all about depth of relationships, not breadth of networks.  Your inside person needs to vouch for you, to create a position for you, to create a sense of urgency for you, to unstick stuck hiring processes, and to stand up and swear that you are worth the price premium, the special consideration or the extra incentive needed to close the deal.  This is different in kind from the most recent job markets I’ve participated in.  It is not just a question of doing all the old things you did before, only more often and faster.
In 1982, when I decided to leave physics and Kansas State University, a number of industries were interested in me: oil and gas exploration, weapons development, and telecommunications.  As the recession deepened through the spring, opportunities dried up right and left.  One Oklahoma petroleum lab sent me tickets to fly out for a round of interviews and then, a few days later, asked for the tickets back because hiring had been shut down.  Bell Labs shut down hiring too; however, a professor friend at KSU had an old high-school buddy from Taiwan whose organization was still looking for physics talent.  They were able to work around the HR system to get me an interview and a job in the Government Communications Department.  In that hirer’s market, Bell Labs sent me an offer letter with monthly and annual salary figures that didn’t agree.  I was asked to accept the lower of the two, and I did.  I had no bargaining power, and the salary was still more than twice what I’d made as an Assistant Professor.
Who are these special people who will take a risk on you?  For the most part, they are people who have taken a risk on you before... either because you reported to them, they reported to you, or some extraordinary circumstance (a special project, an emergency task force, a mentoring relationship, etc.) created a strong bond that withstood adversity.  Most people do not have many of these relationships, even if they have worked hard and behaved with integrity.  The right strategy in this market is to dig deep, identify these people (say, the “Golden 20” of your career) and work to increase the depth of those relationships.  Communicate frequently and personally, bring quality with each contact, and be thoughtful and generous with your time.  Do whatever you can to help these people, especially if they are out of work too.
What’s your experience in this job market?  Have you thought about your “Golden 20” professional relationships recently?

Friday, June 3, 2011

New Nikon P500 Camera: The Importance of Robust Usability Testing

For the last two months, I have been enjoying a new digital camera, a Nikon P500.  This is a hybrid camera, incorporating features of a full-size digital SLR in a much smaller platform.  Most remarkable is the 32x optical zoom, which makes wildlife photography, particularly of wild birds, much easier.  It is almost more zoom than I can handle.
One problem with the camera is interesting from the perspective of system testing.  The camera battery can be recharged by removing it and placing it in a charger, or you can configure the camera to charge the battery through the power feed of a USB interface.  Since a standalone charger was not included in the camera kit, the USB mode seems to be the manufacturer’s preferred mode.  When you connect the camera to a computer using the USB cord, the on/off light turns green, then flashes orange to indicate charging... after a few minutes or hours, the on/off light returns to steady green, indicating a full charge, and 30 minutes later the camera turns itself off.  While the camera is charging, you can transfer photographs and videos to your computer and perform other functions.
The problem occurs when you disconnect the camera from the computer.  If you press the on/off button first and then disconnect the USB cable, the camera turns off the port and holds its charge.  If you disconnect the USB cable first, the camera does not power down the port and, after a couple of hours, will completely discharge the battery.  This has happened to me on several occasions.  You expect a full charge; instead, all you have is a dead camera, and any photographs for that session are impossible.  The correct procedure is documented in a notes section; however, there is no indication in the troubleshooting section of the danger of changing the disconnection sequence.
If you think about feature verification testing, the camera works perfectly as specified, and neither software testing nor hardware testing would have found this problem.  Only usability testing that explores the different things users might naturally do wrong would have uncovered it.  The other option is to let customers discover the problem.  Until I figured out what was wrong and how to fix it, I was ready to return the camera to the store.  Once the problem is discovered, the documentation should be improved to help users troubleshoot it and to change user behavior; however, the right answer is to change the software control so that disconnecting the USB cord before pressing the on/off button doesn’t leave a discharged camera, as sketched below.
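Here is a minimal sketch of the kind of control change I have in mind (entirely hypothetical; I have no knowledge of Nikon’s actual firmware): treat the power button and the USB disconnect as equivalent triggers, so the port is powered down no matter which event arrives first.

```python
class CameraPowerControl:
    """Toy model of the charging logic: whichever event happens first
    (power button or USB unplug), the port is powered down so the
    battery is not slowly drained.  Hypothetical, not Nikon firmware."""

    def __init__(self):
        self.port_powered = True

    def _power_down_port(self):
        if self.port_powered:
            self.port_powered = False
            print("USB port powered down; charge preserved")

    def on_power_button(self):
        self._power_down_port()

    def on_usb_disconnect(self):
        # The reported failure mode: this path left the port powered.
        # Handling it the same way removes the order dependence.
        self._power_down_port()

camera = CameraPowerControl()
camera.on_usb_disconnect()  # cable pulled first: port still powers down
camera.on_power_button()    # no harm done in either order
```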
I’m reminded of a problem that usability testing identified at Ameritech in the early 1990’s.  At that time, Ameritech had deployed cross-connects and STPs from the same manufacturer, and sometimes they were located near each other in central offices.  It turned out that cards from one machine could be inserted into slots on the other machine with disastrous effect.  There were no mechanical lockouts to prevent this card swapping if the operator got confused.  The result of the usability testing was a requirement to prevent the swap mechanically.
What’s your experience with usability testing?  Have you had a similar experience to the P500 camera discharge problem?

Thursday, May 26, 2011

Cell Phones: Sensors and Sensor Relay Platform

Nick Bilton wrote in his Bits Blog in the New York Times on May 19 (The Sensors Are Coming! - NYTimes.com - Bits Blog):  “The coming generation of mobile phones... will have so many new types of smart sensors that your current mobile phone will look pretty dumb.”  Among the sensors he discusses are altimeters, gyroscopic sensors, heart monitors, perspiration monitors, more microphones (presumably for stereophonic noise management and imaging), temperature and humidity monitors, and connections to sensors in your clothes and environment (presumably through Bluetooth and Bluetooth Low Energy interfaces).  Software associated with these sensors can be used for micro-location services, environmental characterization, another layer of user security and many other purposes.  The message Nick delivers is primarily to get ready for a new set of applications and the associated privacy issues.
A few years ago, it became clear that a smart phone plus Bluetooth / Bluetooth Low Energy devices on or in the body constitutes an excellent health monitoring and management platform for a wide range of chronic conditions.  A three-step process... registering with a management server, downloading a health monitoring and management app, and linking to a sensor... could change the way some conditions are managed, with significant cost savings compared to institutionalization or frequent visits from a health professional.  I’m sure all these elements are already in place for some patients in some parts of the world.  Once cell phones become platforms for health care delivery, we may see a new set of requirements emerge for them... focused on reliability, availability, partitioning, local caching of critical data, security, battery life, etc.  Something like this happened to wireline phone systems when E911 service became a necessity of life.
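As a sketch of that three-step flow (every endpoint, identifier, threshold and reading below is hypothetical, invented only to show the shape of such a system, not any real product’s API):

```python
import json
import random

MANAGEMENT_SERVER = "https://example.com/health"   # hypothetical endpoint

def register_patient(patient_id):
    """Step 1: register the phone/patient with the management server.
    Here we only build the request payload; a real app would POST it."""
    return {"url": MANAGEMENT_SERVER + "/register",
            "body": json.dumps({"patient": patient_id})}

def read_heart_rate():
    """Steps 2 and 3 stand-in: pretend the downloaded app has linked to a
    Bluetooth Low Energy heart-rate sensor.  A real app would use the
    platform's BLE APIs; here we simulate a reading."""
    return random.randint(55, 95)

def relay_reading(patient_id, bpm):
    """Relay the reading for remote monitoring, flagging out-of-range
    values (the threshold is illustrative only)."""
    payload = {"patient": patient_id,
               "heart_rate_bpm": bpm,
               "alert": bpm < 50 or bpm > 120}
    return {"url": MANAGEMENT_SERVER + "/readings",
            "body": json.dumps(payload)}

print(register_patient("patient-001"))
print(relay_reading("patient-001", read_heart_rate()))
```

The interesting engineering is less in this happy path than in the requirements listed above... what the app does when the network, the sensor or the battery fails.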
What are your thoughts about the proliferation of sensors in cell phones and the cell phone as a sensor monitoring, relay and management platform?  Each new capability added to the cell phone has had unforeseen consequences... for example, GPS potentially makes each of us more trackable, and photographic and video capability has made everyone a potential videographic witness to crimes and disasters.  New capabilities have also increased the value of the cell phone to users, increasing usage and willingness to pay for advanced services.  What sensors would you like to see on a phone; what sensors would you like to turn off?