A Fool with a Tool Is an Amplified Fool

Nov 10, 2024

Tools can amplify a software developer’s capability, but ineffective or inappropriate tool usage amplifies their shortcomings as well.


Image by KamranAydinov on Freepik

My friend Norm is an expert woodworker (that’s not Norm in the photo above). He designed and built his woodshop, including the building itself. His shop contains countless hand and power tools, and he knows how and when to use each of them properly and safely. Expert software engineers also know the right tools to use and how to apply them effectively.

Perhaps you’ve heard that “A fool with a tool is still a fool,” sometimes attributed to software engineer Grady Booch. That’s too generous. A tool gives someone who doesn’t quite know what they’re doing a way to do it more quickly and perhaps more dangerously. That leverage just amplifies their ineffectiveness. All tools have benefits and limitations. To reap the full benefit, practitioners need to understand the tool’s concepts and methods so that they can apply it correctly to appropriate problems.

When I say tool here, I’m referring both to software products that facilitate or automate some project work (estimation, modeling, testing, collaboration) and to specialized software development techniques, such as use cases. These days, AI can also be considered a tool. Its capabilities and limitations are still being explored in many domains. While various AI tools can be valuable assistants for software development, they aren’t magic or infallible. Overreliance on the correctness or efficacy of results from an AI could indeed lead to amplified foolishness.

Tools can make skilled team members more productive, but they don’t make untrained people better. Providing less capable developers with tools can actually inhibit their productivity if they don’t use them appropriately and effectively. If people don’t understand a technique and know when—and when not—to use it, a tool that lets them do it faster and prettier won’t help.

A Tool Must Add Value

I’ve seen numerous examples of ineffective tool use. My software group once adopted Microsoft Project for project planning. Most of us found Project helpful for recording and sequencing tasks, estimating their duration, and tracking progress. One team member got carried away, though. He was the sole developer on a project with three-week development iterations. He spent a couple of days at the start of each iteration creating a detailed Microsoft Project plan for the iteration, down to one-hour resolution. I’m in favor of planning, but this was time-wasting overkill.

I know of a government agency that purchased a high-end requirements management (RM) tool but benefited little from it. They recorded hundreds of requirements for their project in a traditional requirements specification document. Then they imported those requirements into the RM tool, but the document remained the definitive repository. Whenever the requirements changed, the business analyst (BA) had to update both the document and the contents stored in the RM tool’s database: extra work.

The only major tool feature that the team exploited was to define a complex network of traceability links between requirements. That’s useful, but later they discovered that no one ever used the extensive traceability reports they generated! This agency’s ineffective tool use consumed considerable time and money while yielding little value.

Modeling tools are easily misused. Analysts and designers sometimes spend excessive effort perfecting models, sometimes called “mousing around.” I’m a big fan of visual modeling to facilitate iterative thinking and reveal errors, but people should create models selectively. Modeling portions of the system that are already well understood and drilling down to the finest details don’t add proportionate value to the project.

Besides automated tools, specialized software practices also can be applied inappropriately. As an example, use cases help me understand what users need to do with a system so that I can then deduce the necessary functionality to implement. But I’ve known some people who tried to force-fit every known bit of functionality into a use case simply because that’s the requirements technique their project employed. If you already know about some needed functionality, I see little value in repackaging it just to proudly declare you have a complete set of use cases.

A Tool Must Be Used Sensibly

I was at a consulting client’s site the same day that one of their team members was configuring a change-request tool they’d just purchased. I endorse sensible change control mechanisms, including using a tool to collect change requests and track their status over time. However, the team member configured the tool with no fewer than twenty possible change-request statuses: submitted, evaluated, approved, deferred, and so forth. Even if they’re logically sensible, nobody’s going to use twenty statuses. Six or seven should suffice. Making it so complex imposes an unrealistic burden on the tool users. It could even discourage them from using the tool at all, making them think it’s more trouble than it’s worth.
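A lean workflow like the one described above can be sketched in code. This is an illustrative sketch, not any real tool's configuration; the status names and transitions are hypothetical examples of a deliberately small state model:

```python
# Hypothetical sketch: a small change-request workflow with a handful of
# statuses and explicit allowed transitions, instead of twenty fine-grained
# states that nobody will actually use.
from enum import Enum


class Status(Enum):
    SUBMITTED = "submitted"
    EVALUATED = "evaluated"
    APPROVED = "approved"
    DEFERRED = "deferred"
    IMPLEMENTED = "implemented"
    CLOSED = "closed"


# Anything not listed here is an invalid transition.
TRANSITIONS = {
    Status.SUBMITTED: {Status.EVALUATED},
    Status.EVALUATED: {Status.APPROVED, Status.DEFERRED, Status.CLOSED},
    Status.APPROVED: {Status.IMPLEMENTED},
    Status.DEFERRED: {Status.EVALUATED, Status.CLOSED},
    Status.IMPLEMENTED: {Status.CLOSED},
    Status.CLOSED: set(),
}


def advance(current: Status, new: Status) -> Status:
    """Move a change request to a new status, or raise if the move is invalid."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {new.value}")
    return new
```

Keeping the model this small means every team member can hold the entire workflow in their head, which makes it far more likely the tool gets used at all.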

While teaching a class on software engineering best practices one time, I asked the students if they used any static code analysis tools, such as lint. The project manager said, “Yes, I have ten copies of PC-lint in my desk.” My first reaction was, “You might want to distribute those to the developers, as they aren’t doing any good in your desk.” If tools aren’t in the hands of people who could benefit from them, they’re useless.

I asked the same question about static code analysis at another company. One student said that when his team ran lint on their system’s codebase, it reported about 10,000 errors and warnings, so they didn’t use it again. If a sizable program has never been passed through an automated checker, it will probably trigger many alerts. Many of the reports were false positives, inconsequential warnings, or issues the team would decide to ignore. But there were likely some real problems in there, lost in the noise. Configure the tools so that you can focus on items of real concern and not be overwhelmed by distracting minor issues.
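One common way to cope with a 10,000-warning backlog is a "baseline" filter: record the existing findings once, then surface only new ones, so fresh problems aren't drowned out by legacy noise. Here is a minimal sketch, assuming a simple line-based report format (`file:line: message`); a real tool's output format and suppression mechanisms would differ:

```python
# Minimal "baseline" filter sketch for static-analysis output.
# Assumes one finding per line in a "file:line: message" style report;
# adapt the parsing to whatever your analyzer actually emits.

def load_findings(report_text: str) -> set:
    """Parse one finding per non-blank line of a report."""
    return {line.strip() for line in report_text.splitlines() if line.strip()}


def new_findings(baseline: set, current_report: str) -> set:
    """Return only the findings that were not in the recorded baseline."""
    return load_findings(current_report) - baseline
```

The team fixes only what `new_findings` reports, and the legacy baseline can be burned down gradually instead of all at once.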

A Tool Is Not a Process

People sometimes think that using a good tool means their problem is solved. However, a tool is not a substitute for a process; it supports a process. When one of my clients told me that they used a problem-tracking tool, I asked some questions about the process that the tool supported. I learned that they had no defined process for receiving and processing problem reports; they only had the tool. Without an accompanying practical process, a tool can increase chaos if people don’t use it appropriately.

Tools can lead people to think that they’re doing a better job than they are. Automated testing tools aren’t any better than the tests stored in them. Just because you can run automated regression tests quickly doesn’t mean those tests find errors effectively. A code coverage tool could report a high percentage of statement coverage, but that doesn’t guarantee that all the important code was executed. Even a high statement coverage percentage doesn’t tell you what will happen when the untested code is executed, whether all the logic branches were tested in both directions, or what will happen with different input data values. Nor do tools fully replace human effort. People who test software will find issues beyond those that are loaded into testing tools.
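The coverage point above is easy to see in a tiny, contrived example. The single test below executes every statement in the function, so a coverage tool would report 100 percent statement coverage, yet a whole class of inputs is untested and defective:

```python
# Contrived illustration: 100% statement coverage, yet a lurking defect.

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    total = 0
    for v in values:
        total += v
    return total / len(values)  # crashes with ZeroDivisionError on []

# A test like average([2, 4]) == 3.0 runs every statement above,
# so statement coverage is 100% -- but the empty-list input still crashes,
# because the zero-iteration path through the loop was never exercised.
```

This is why coverage numbers are a floor for confidence, not a ceiling: they tell you what ran, not what was verified.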

I’ve spoken to people who claimed their project was doing a fine job on requirements because they stored them in a requirements management tool. RM tools do offer many valuable capabilities. However, the ability to generate nice reports doesn’t mean that the requirements stored in the database are any good. RM tools are a vivid illustration of the old computing expression GIGO: garbage in, garbage out. The tool won’t know if the requirements are accurate, clearly written, or complete. It won’t detect missing requirements.

You need to know both the capabilities and the limitations of each tool. Some tools can scan a set of requirements for conflicts, duplicates, and ambiguous words, but that assessment doesn’t tell you if the requirements are logically correct or even necessary. A team that uses a requirements tool first needs to learn how to do a good job of eliciting, analyzing, and specifying requirements. Buying an RM tool doesn’t make you a skilled BA. You should learn how to use a technique manually and prove to yourself that it works for you before automating it.
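An ambiguous-word scan of the kind mentioned above is simple enough to sketch, which also makes its limits obvious: it can flag vague wording, but it cannot tell you whether a requirement is correct or necessary. The word list here is a hypothetical sample, not any real tool's vocabulary:

```python
# Sketch of an ambiguous-word scan over a set of requirements.
# The vocabulary below is a small illustrative sample; real checklists
# of weak words are much longer and project-specific.
AMBIGUOUS = {"fast", "user-friendly", "flexible", "appropriate", "easy", "etc"}


def flag_ambiguous(requirements):
    """Return (requirement number, ambiguous words found) for each hit."""
    findings = []
    for i, req in enumerate(requirements, 1):
        words = {w.strip(".,").lower() for w in req.split()}
        hits = words & AMBIGUOUS
        if hits:
            findings.append((i, sorted(hits)))
    return findings
```

Such a scan never flags a requirement that is unambiguous but wrong, which is exactly the GIGO limitation described above.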

Properly applied tools and practices can add great value to a project team by increasing quality and productivity, improving planning and collaboration, and bringing order out of chaos. But even the best tools won’t overcome weak processes, untrained team members, challenging change initiatives, or cultural issues in the organization. And always remember one of Wiegers’s Laws of Computing: “Artificial intelligence is no substitute for the real thing.” That might not be true someday, but it still applies for now.


Author: Karl Wiegers

This article is adapted from Software Development Pearls: Lessons from Fifty Years of Software Experience by Karl Wiegers. Karl is the author of numerous other books, including Software Requirements Essentials (with Candase Hokanson), Software Requirements (with Joy Beatty), The Thoughtless Design of Everyday Things, and Successful Business Analysis Consulting.

Copyright 2006-2024 by Modern Analyst Media LLC