My favorite bytes from the wonderful book [Facts and Fallacies of Software Engineering](https://www.goodreads.com/book/show/83792.Facts_and_Fallacies_of_Software_Engineering) by [Robert Glass](http://www.robertlglass.com/)
* The problem is a culture that puts schedule conformance, using impossible schedules, above all else—a culture that values schedule so highly that there is no time to learn about new concepts.
* Runaway projects, at least those that stem from poor estimation, do not usually occur because the programmers did a poor job of programming. Those projects became runaways because the estimation targets to which they were being managed were largely unreal to begin with.
* Most often, software estimation is done by the people who want the software product. Upper management. Marketing. Customers and users. Software estimation, in other words, is currently more about wishes than reality.
* Management was "the best I've ever worked with." Why? "Because the team was given the freedom to develop a good design," because there was no "scope creep," and because "I never felt pressure from the schedule."
* "projects where no estimates were prepared at all fared best on productivity" (versus projects where estimates were performed by technologists [next best] or their managers [worst]).
* When the programmers felt in control of their fate, they were much more productive.
* My suspicion is that management, upon reading this story and reflecting on this fact, would in general be horrified that a project so "obviously" a failure could be seen as a success by these technologists. My further suspicion is that most technologists, upon reading this story and reflecting on this fact, would find it all quite reasonable. If my suspicions are correct, there is essentially an unspoken controversy surrounding the issue this fact addresses. And that controversy is about what constitutes project success. If we can't agree on a definition of a successful project, then the field has some larger problems that need sorting out.
* Since no one has ever been able to solve the problems we are able to solve, we believe that no new problem is too tough for us to solve.
* Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem. There is a tendency in the computing world to assume that any good idea that comes along must be a new idea. Case in point: reuse.
* The primary controversy here is that too many people in the computing field think that reuse is a brand-new idea. As a result, there is enormous (and often hyped) enthusiasm for this concept, an enthusiasm that would be more realistic if people understood its history and its failure to grow over the years.
* Two "rules of three" in reuse:
* (a) It is three times as difficult to build reusable components as single-use components.
* (b) A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
* Reusable components are harder to develop and require more verification than their single-task brethren.
* Modification of reused code is particularly error-prone. If more than 20 to 25 percent of a component is to be revised, it is more efficient and effective to rewrite it from scratch.
* Design patterns emerge from practice, not from theory.
* Missing requirements are the hardest requirements errors to correct.
* Software people have dreamed of automating software processes throughout the history of the field. One by one, those dreams have been dashed.
* Notice that the underlying theme of many of the facts in this book is that the construction of software is a complex, deeply intellectual task, one that shows little possibility of being made simple. Automation is the ultimate trivialization of this nontrivial activity, and those who claim that it has been achieved are doing serious harm to the software field in its quest for better realistic tools and techniques.
* The debugging process is the detective story of programming. You play Sherlock Holmes in pursuit of the elusive software bug. And, like Sherlock Holmes, you need to enlist your brain and any brain-supporting things you can think of.
* In this era of automated everything, it is all too easy to leave the work of software testing to tools and techniques. But doing so would be a mistake.
* Research studies have shown that inspections can detect up to 90 percent of the errors in a software product before any test cases have been run.
* Error removal is a complex task, and it requires all the armament the tester can muster.
* Computer scientists have long said that formal verification, if done sufficiently rigorously, will be enough. Fault-tolerance advocates have taken the position that self-checking software, which detects and recovers from errors, will be enough. Testing people have sometimes said that 100 percent test coverage will be enough. Name your favorite error removal poison, and someone has probably made grandiose claims for it.
* In any case, I think most would agree that our field is so busy with its foot pressed to the gas pedal that it rarely has time to think about how it could be going better, not just faster. We speak of working smarter, not harder. But who has time to get in a position of working smarter?
* **See also** [IEEE. 2002. "Knowledge Management in Software Engineering. Special issue."](http://publications.aston.ac.uk/id/eprint/2841/1/Chapter.pdf) IEEE Software, May. Contains several articles on postmortem reviews and the experience factory.
* The key word here is rigor. It is vitally important that the participants in a review process be totally dedicated to and focused on what they are doing.
* **See also** [Peer Reviews in Software: A Practical Guide](https://www.goodreads.com/book/show/846008.Peer_Reviews_in_Software)
* Maintenance typically consumes 40 to 80 percent (average, 60 percent) of software costs. Therefore, it is probably the most important life cycle phase of software.
* Software's errors aren't due to material fatigue, but rather to errors made when the software was being built or errors made as the software is being changed.
* There's an old software saying that I'd like to make into the following corollary: Old hardware becomes obsolete; old software goes into production every night.
* Software people tend to behave as if the original development of the software product is all that matters. So do academics teaching software classes.
* **See also** Boehm, Barry W. 1975. "The High Cost of Software." In Practical Strategies for Developing Large Software Systems, edited by Ellis Horowitz. Reading, MA: Addison-Wesley.
* **See also** Lientz, Bennet P., E. Burton Swanson, and G. E. Tompkins. 1976. "Characteristics of Applications Software Maintenance." UCLA Graduate School of Management.
* When software is originally developed, the customers and future users really have only a partial vision of what that product can and will do for them. It's only after the product goes into production and the users use it for a while that they begin to realize how much more the software product could be revised to do. And, frequently, they request that those changes be made.
* Changing an existing product is always difficult, no matter how "soft" the software product really is.
* The 60/60 rule: 60 percent of software's dollar is spent on maintenance, and 60 percent of that maintenance is enhancement. Enhancing old software is, therefore, a big deal.
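* A quick arithmetic note of my own, not from the book: 0.60 × 0.60 = 0.36, so enhancement alone accounts for roughly 36 percent of the total software dollar, nearly as much as the 40 percent left over for all of original development.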
* **Maintenance is a solution, not a problem.**
* **See also** : Glass, Robert L. 1991. Software Conflict. Englewood Cliffs, NJ: Yourdon Press, where this fact is stated more elaborately (a chapter is devoted to it).
* Far too many people see software maintenance as a problem, something to be diminished and perhaps even "obliterated." In saying that, they are really expressing their ignorance.
* Instead, modifying software to do something different is comparatively simple. (Notice the key word comparatively. Making changes to software is nontrivial. It's simply easier than its tangible product alternatives.)
* When we examine the tasks of software development versus software maintenance, we find that most of the tasks are the same, except for the additional maintenance task of "understanding the existing product." This task consumes roughly 30 percent of the total maintenance time and is the dominant maintenance activity.
* Research data tells us that understanding the existing product is the most difficult task of software maintenance.
* Why? Well, in a sense the answer lies in many of our previous facts: The explosion when requirements are transformed into design (Fact 26). The fact that there is no single design solution to most problems (Fact 27). The fact that design is a complex, iterative process (Fact 28). The fact that for every 25 percent increase in problem complexity, there is a 100 percent increase in solution complexity (Fact 21).
* There is another reason why understanding the existing product, or undesign, is difficult. The original designer created what we call a design envelope, a framework within which the problem, as it was known at development time, could be solved.
* The maintenance life cycle is different in one major way. Here is the maintenance life cycle according to Fjeldstad and Hamlen (1979):

| Activity | Share of maintenance effort |
| --- | --- |
| Defining and understanding the change | 15 percent |
| Reviewing the documentation for the product | 5 percent |
| Tracing logic | 25 percent |
| Implementing the change | 20 percent |
| Testing and debugging | 30 percent |
| Updating the documentation | 5 percent |
* In a survey of Air Force sites in 1983, researchers found that the **"biggest problem of software maintenance" was "high [staff] turnover,"** at 8.7 (on a scale of 10). Close behind that, in second and third places, were "understanding and the lack of documentation," at 7.5, and "determining the place to make a change."
* Notice the small percentages in the maintenance life cycle devoted to documentation activities. The maintainer spends 5 percent of his or her time "reviewing documentation" and another 5 percent "updating documentation." If you thought about those numbers at all, you may have been surprised at how small they were.
* "Unworthiness" or its complexity is for you to decide. But one of the things that lack of respect leads to is an almost total lack of what we might call maintenance documentation.
* Ongoing maintenance drives the specs and the product even further apart. The fact of the matter is, design documentation is almost completely untrustworthy when it comes to maintaining a software product. The result is, almost all of that undesign work involves the reading of code (which is invariably up to date) and ignoring the documentation (which commonly is not).
* Here again, the underlying problem is our old enemy schedule pressure. There is too much demand for the modified product to be ready.
* Perhaps it's a stretch, but I like to tell people that, because of all of the above, maintenance is a more difficult task than software development. Few people want to hear that, so I tend to say it in something of a whisper.
* **See also** Glass, Robert L. 1981. "Documenting for Software Maintenance: We're Doing It Wrong." In Software Soliloquies, Computing Trends.
* Better software engineering development leads to more maintenance, not less.
* **See also** Dekleva, Sasa M. 1992. "The Influence of the Information System Development Approach on Maintenance." Management Information Systems Quarterly, Sept. This study looked at the effect of using "modern development methods" on software projects from the point of view of their subsequent maintenance.
* These systems took longer to maintain than the others because more modifications were being made to them. And more modifications were being made because it was easier to enhance these better-built systems.
* We neither agree on a workable definition of quality nor on whose responsibility quality in the software product is.
* Modifiability, one of those attributes, is a matter of knowing how to build software in such a way that it can be easily modified.
* Reliability is about building software in ways that minimize the chance of it containing errors and then following that up with an error removal process that uses as many of the multifaceted error removal options as makes sense.
* Quality is one of the most deeply technical issues in the software field.
* Management's job, far from taking responsibility for achieving quality, is to facilitate and enable technical people and then get out of their way.
* It is nearly impossible to put a number on understandability or modifiability or testability or most of the other quality *-ilities*.
* **See also** Glass, Robert L. 1992. Building Quality Software. Englewood Cliffs, NJ: Prentice-Hall. Section 3.9.1, State of the Theory, analyzes the DoD report discussed earlier.
* Quality in the software field is about a collection of seven attributes that a quality software product should have: portability, reliability, efficiency, usability (human engineering), testability, understandability, and modifiability.
1. Portability is about creating a software product that is easily moved to another platform.
2. Reliability is about a software product that does what it's supposed to do, dependably.
3. Efficiency is about a software product that economizes on both running time and space consumption.
4. Human engineering (also known as usability) is about a software product that is easy and comfortable to use.
5. Testability is about a software product that is easy to test.
6. Understandability is about a software product that is easy for a maintainer to comprehend.
7. Modifiability is about a software product that is easy for a maintainer to change.
* Quality is not user satisfaction, meeting requirements, meeting cost and schedule targets, or reliability.
* User satisfaction = Meets requirements + delivered when needed + appropriate cost + quality product.
* **See also** Gramms, Timm. 1987. Paper presented on "biased errors" and "thinking traps." Notices of the German Computing Society Technical Interest Group on Fault-Tolerant Computing Systems, Bremerhaven, West Germany.
* Errors tend to cluster.
* "Half the errors are found in 15% of the modules" (Davis 1995, quoting Endres 1975). 80% of all errors are found in just 2% (sic) of the modules" (Davis 1995, quoting Weinberg 1992). Given the quote that follows, it makes you wonder if 2 percent was a misprint.)
* There is no single best approach to software error removal.
* The advocates of silver bullets will continue to make exaggerated claims for whatever technique they are selling.
* **See also** Glass, Robert L. 1992. Building Quality Software. Englewood Cliffs, NJ: Prentice-Hall.
* Residual errors will always persist. The goal should be to minimize or eliminate severe errors.
* It would be nice to remove all those other errors too (for example, documentation errors, redundant code errors, unreachable path errors, errors in numerically insignificant portions of an algorithm, and so on), but it's not always necessary.
* "Almost 90% of the downtime comes from, at most, 10% of the defects"
* Efficiency stems more from good design than from good coding.
* Interface and internal inefficiencies pale to insignificance compared to I/O inefficiencies. Still, it is possible, through poorly designed looping strategies, for a coder to make a program's logic wheels spin an inordinately long time.
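* To make the I/O point concrete, here is a minimal sketch of my own (not from the book); the file names and record count are invented for illustration:

```python
# The same logical work, dominated by how the I/O is designed.
# Flushing one record at a time forces an OS round trip per record;
# batching the writes removes almost all of that cost. No amount of
# micro-tuning the loop body matters as much as this design choice.

records = [f"record {i}\n" for i in range(100_000)]

# Inefficient design: one write plus one flush per record.
with open("out_slow.txt", "w") as f:
    for rec in records:
        f.write(rec)
        f.flush()  # per-record round trip to the operating system

# Better design: assemble the output once, write it in a single call.
with open("out_fast.txt", "w") as f:
    f.write("".join(records))
```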
* *To some eager programmers, coding is the most important task of software construction, and the sooner we get to it, the better. Design, to people in that camp, is simply something that puts off the ultimate problem solution activity.* As long as these people continue to address relatively simple problems, (a) they will probably never be convinced otherwise, and (b) there may be nothing terribly wrong with what they are doing. But it doesn't take much problem complexity before that minimal-design, quick-to-code approach begins to fall apart. (Recall Fact 21, about how quickly problem complexity drives up solution complexity?)
* Somehow it seems like anything that makes a program more time-efficient will also make it more size-efficient. But that is not true.
* Many software researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth far less than their advocates believe, and (b) there is a shortage of evaluative research to help determine what the value of such concepts really is.
* It wasn't until the notion of the GQM approach (originally proposed by Vic Basili)—establish Goals to be satisfied by the metrics, determine what Questions should be asked to meet those goals, and only then collect the Metrics needed to answer just those questions—that there began to be some rationality in metrics approaches.
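* As a minimal sketch of the GQM idea, assuming an invented goal, questions, and metrics (none of these examples come from Basili or Glass):

```python
# Goal-Question-Metric: start from a goal, derive the questions that
# would tell you whether the goal is met, and only then choose metrics
# that answer exactly those questions -- nothing is collected "just in case."

gqm = {
    "goal": "Improve the maintainability of the billing subsystem",
    "questions": {
        "How hard is the existing code to understand?": [
            "average time to comprehend a module before changing it",
            "fraction of modules with up-to-date design notes",
        ],
        "How error-prone are changes?": [
            "defects introduced per 100 changes",
            "fraction of changes requiring rework",
        ],
    },
}

# Walk the tree: every metric traces back to a question, and every
# question traces back to the goal.
print(gqm["goal"])
for question, metrics in gqm["questions"].items():
    print(" ", question)
    for metric in metrics:
        print("   -", metric)
```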
* No matter how many people believed that management was responsible for product quality, there was too much technology to the subject of software quality to leave it up to management.
* So what's the fallacy here? That quality is a management job. Management, of course, does have a vitally important role in achieving quality. They can establish a culture in which the task of achieving quality is given high priority. They can remove barriers that prevent technologists from instituting quality. They can hire quality people, by far the best way of achieving product quality. And they can get out of the way of those quality people, once the barriers are down and the culture is established, and let them do what they have wanted to do all along—build something they can be proud of.
* What's the alternative to an ego-invested programmer? A team-player programmer.
* Programmers really do need to be open to critique; the fact that we cannot write error-free programs, hammered home so many times in this book, means that programmers will always have to face up to their technical foibles and frailties.