Wednesday, February 2, 2011

Software Blog 4: TNOP and Software Best Practices

When I joined Bell Labs and started my telecommunications career, one of the cohorts within the labs (and there were many) was the TNOP alumni. The Total Network Operations Plan (TNOP) was an activity launched in the 1970s to solve the problem of thousands of different minicomputer-based Operational Support Systems (OSSs) that had been developed within the Bell System's manifold operations groups. In some cases, there were dozens of slightly different versions of an OSS solving the same problem. The TNOP people set out to create an overall OSS architecture for the Bell System and to chart an orderly evolution from manual, to semi-automated, to fully automated operations, where possible.
TNOP alumni had a systems engineering methodology that I found particularly compelling. They started with functional requirements (independent of the current division of labor between people and machines) and then made deliberate choices about what should be automated, why, and when. These choices were based on the inherent characteristics of work groups vs. machines, the volatility of the requirements, the technology trends of software systems, and the needs of the business. Preserving the Present Method of Operation (PMO) was a constraint, not an objective. The methodology wasn't foolproof, but it did keep cultural biases and organizational politics under control.
When it comes to electronic systems that include a software component, I prefer an architectural methodology similar to TNOP. Start with functional requirements, driven as much as possible by solving a significant customer problem. Make deliberate choices about what should be done in hardware and what should be done in software. The bias should be toward doing as much in hardware as possible because of the difficulty of managing software complexity: the (n+1)th software feature can interact with every feature already in the product, so the test effort required to hold quality constant grows roughly as n squared rather than linearly.
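To make that scaling concrete, here is a back-of-the-envelope sketch in Python (my own illustration, not a formula from TNOP): if every pair of features can interact, the number of interaction tests grows as n(n-1)/2, on the order of n squared.

    # Back-of-the-envelope illustration: if every feature can interact
    # with every other feature, pairwise interaction tests grow
    # quadratically with feature count.

    from math import comb

    def interaction_tests(n_features: int) -> int:
        """Pairwise feature-interaction tests: n choose 2, ~ n^2 / 2."""
        return comb(n_features, 2)

    for n in (10, 20, 40, 80):
        print(f"{n:3d} features -> {interaction_tests(n):5d} pairwise tests")

    # Doubling the feature count roughly quadruples the interaction
    # test load -- the n-squared growth referred to above.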
The next set of choices is deciding what software should be developed internally and what should be purchased or contracted externally. The bias should be toward purchasing stacks and frameworks from leading suppliers, particularly for functionality that is still evolving in standards bodies. Internal software resources should be reserved for glue software, testing, and market-differentiating capability. I've seen too many products get bogged down creating second-rate functionality internally when first-rate stuff is available in the marketplace.
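As a sketch of what I mean by glue software (all of the names here are hypothetical), internal code can wrap a purchased component behind a thin interface of your own, so the supplier can be swapped without rewriting the product:

    # Hypothetical sketch of "glue software": wrap a purchased stack
    # behind a thin internal interface so the supplier can be replaced
    # without touching product code. VendorCodec stands in for any
    # third-party library; the names are illustrative only.

    class VendorCodec:                      # stand-in for a purchased component
        def vendor_encode(self, data: bytes) -> bytes:
            return data[::-1]               # placeholder behavior

    class Codec:
        """Internal interface the rest of the product codes against."""
        def encode(self, data: bytes) -> bytes:
            raise NotImplementedError

    class VendorCodecAdapter(Codec):
        """Glue layer: adapts the vendor API to the internal interface."""
        def __init__(self) -> None:
            self._impl = VendorCodec()

        def encode(self, data: bytes) -> bytes:
            return self._impl.vendor_encode(data)

    # Product code depends only on Codec; swapping suppliers means
    # writing a new adapter, not changing the product.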
Project schedules should be built backward from testing and market introduction, rather than forward from product requirements and prototyping.  Too many projects end up with no time to test and tons of pressure to go to market.  Those jeopardies will get called out earlier if there is agreement up front on the intervals required for unit test, system test and verification testing for the product being built.  Project schedules should never be built without the involvement of the software developers... they will have a more refined sense of product complexity than the hardware guys.
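Here is a minimal sketch of backward scheduling in Python; the launch date and interval lengths are invented for illustration, but the mechanics are the point: fix the market introduction date and subtract the agreed test intervals to find when code must be complete.

    # Minimal backward-scheduling sketch. The launch date and interval
    # lengths are invented for illustration; the point is that code-complete
    # falls out of the test intervals, not the other way around.

    from datetime import date, timedelta

    launch = date(2011, 9, 1)               # market introduction (fixed)

    # Test intervals agreed up front, in weeks (assumed values),
    # listed from closest to launch back toward development.
    intervals = [
        ("verification testing", 4),
        ("system test",          6),
        ("unit test",            3),
    ]

    milestone = launch
    for name, weeks in intervals:
        milestone -= timedelta(weeks=weeks)
        print(f"{name:>20} must start by {milestone}")

    print(f"{'code complete':>20} must land by {milestone}")
    # Any slip in development visibly eats a named test interval,
    # so the jeopardy gets called out early instead of at launch.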
Finally, don't write any software before a scalable development environment is in place. Change control of requirements and software is critical to delivering quality software.
More on software best practices in a later blog. In the meantime, what is your reaction to these ideas?
