Most of us have seen praise and recognition at work go to the people who react quickly when a problem occurs:
- The IT person who takes care of technical issues at critical moments, like restoring access to a demo site right before a sales rep is scheduled to present to a hot prospect.
- The salesperson who closes a deal on the last day of the quarter, preventing the sales department from facing the negative consequences of missing quota.
- The business analyst who works extra hours to make sure late-breaking requirements are properly documented in time to prevent delays in the next development cycle.
But what about the people preventing those problems from happening in the first place? The IT person who is always making sure credentials don't accidentally expire, the salesperson who plans her work so that her quarterly quota is met or exceeded several weeks before the deadline, the BA who takes all the right steps to avoid last-minute requirements changes?
Preventing problems tends to be much cheaper and more effective than fixing them. The analyst working long hours to finish the acceptance criteria for late-breaking requirements to avoid delays in implementation is much more likely to make a mistake than the one who ensures a complete set of requirements is properly reviewed during sprint planning. But due to its “save the day” nature, reactive work is more likely to trigger recognition and reward than the quiet and reserved work that “keeps the day from needing saving”.
This phenomenon even manifests itself in much higher stakes situations. In his new book Upstream: The Quest to Solve Problems Before They Happen, Dan Heath offers the example of two police officers:
"The first officer spends half a shift standing on a street corner where many accidents happen; her visible presence makes drivers more careful and might prevent collisions. The second officer hides around the corner, nabbing cars for prohibited turn violations. It's the first officer who did more to help public safety [...], but it's the second officer who will be rewarded, because she has a stack full of tickets to show for her efforts."
As Heath points out, we tend to favor reaction because it's more tangible. Reactive work is easier to see and measure than the proactive actions that keep problems from happening.
But this reality shouldn't discourage business analysts who wish to make a bigger impact in their organizations. The path from good to great requires being more proactive and less reactive. But how can an analyst avoid ending up like the police officer who does more to help public safety yet receives no recognition because she has nothing tangible to show for her efforts?
This is a valid concern. The Stoic doctrine that "virtue is its own reward" doesn't translate well here. Even if an analyst focuses on proactive work for selfless reasons (because she knows her efforts are more likely to improve outcomes for her company), without proper acknowledgement and recognition that behavior is unlikely to achieve the "critical mass" required to build a culture of excellence.
The solution is to collect evidence that your preventive efforts are making a difference. The same way a police chief can gather good evidence that the number of crashes is going down at the street corner where an officer is making her presence visible, managers and analysts can use data as proof of the positive results achieved by being proactive. Data about processes, outputs, and outcomes is often already available in tools like requirements management and issue tracking systems, requiring minimal effort to produce the desired evidence.
Case Study 1: Organization with an individual performance measurement system
A team of BAs was in charge of writing the requirements for various internal systems at an organization that used an external software development provider to create and maintain its systems. Individual performance measures were selected based on what mattered most for the business: cost of development, time-to-market, and the effectiveness of the solutions delivered based on the requirements defined by the analysts. As described in the ebook Measuring the Performance of Business Analysts, their performance measurement system served multiple purposes:
- Remove ambiguities about roles and responsibilities.
- Establish how individuals and the team are doing and what areas of improvement can deliver the best return on investment.
- Hold individuals accountable for what they can control, as opposed to external factors or unreasonable management expectations about the time and resources required to complete a task or project.
- Provide early detection of process problems that may affect performance.
- Enable fair recognition and reward for doing the right things.
The measurement system relied mostly on the automatic extraction of information from issue tracking and survey tools. The measures included requirements defect density, on-time delivery of work products, and NPS (net promoter score, based on stakeholder surveys performed at the end of each quarter and calculated by subtracting the percentage of unhappy stakeholders from the percentage of happy ones).
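For readers unfamiliar with the metric, the NPS calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not the team's actual tooling; it assumes the standard NPS convention of 0–10 survey scores, with 9–10 counted as "happy" (promoters) and 0–6 as "unhappy" (detractors):

```python
def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6).

    Assumes each score is an integer on the standard 0-10 NPS scale.
    Returns a value between -100 and 100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: two promoters, one passive (7-8), one detractor
print(nps([10, 9, 7, 3]))  # -> 25.0
```

A quarterly drop or rise in this single number is exactly the kind of tangible, easy-to-compare evidence the case study relies on.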
While the indicators weren't designed to directly measure proactive behavior, they were capable of detecting the positive outcomes of preventive efforts. For example, the BA manager was able to link the superior performance of two business analysts to their efforts to involve all relevant stakeholder groups in backlog grooming and sprint planning activities. By ensuring that even less obvious groups of stakeholders (compliance, IT security) were heard before acceptance criteria were finalized, the analysts avoided requirements defects and increased customer satisfaction. With the positive impact seen in lower defect density and higher NPS, it wasn't difficult for the BA manager to convince the other analysts on the team to adopt stakeholder analysis as part of their workflow.
Case Study 2: Organization without individual performance measures
Imagine that you've just taken the role of product owner for a software-as-a-service product that thousands of customers use for different purposes. New features and feature enhancements are deployed on a monthly basis. Because of the diversity of use cases, the members of the customer support team are always bracing themselves for a barrage of complaints the day after a release that includes user-facing changes.
Sometimes the enhancements increase the value of the software for the vast majority of customers, and it makes business sense to expect customers unable to adapt to the new reality to switch to another provider with a solution more suitable for their needs. But in many cases the product team is able to address complaints by making adjustments to eliminate the negative consequences for an affected customer segment without degrading the experience of the non-affected segments.
Until now, the team had taken for granted that friction and rework were part of the "growing pains" as the application evolved to provide more value to existing and new customers. But as the new person on the team, you could decide to take a "problem preventer" approach and seek to eliminate avoidable rework and customer frustration associated with the release of new functionality.
Through stakeholder interviews, you realize that the customer support team can usually tell ahead of time when an enhancement is going to affect a segment of customers. Because of the nature of their work, support reps are more familiar than the product team with the different ways teams using the software operate, and can more easily detect potential issues with proposed changes meant to enhance the user experience.
You also learn that the support team is only introduced to new features right before they are released, to prepare for questions from users when the changes go live. By then, it’s too late for the product management team to react to feedback and warnings from the support reps.
After studying the situation, you decide to adjust the workflow so that the support team, in addition to the customers on the user advisory board, is presented with mock-ups of any change affecting the user experience during the planning phase. This way, support reps can act as "proxies" for a wider variety of end users and raise concerns with plenty of time for the PM and UX teams to evaluate the situation. Often their feedback includes an alternative that provides the same value without the negative side effects of the original solution.
This scenario did happen in real life. After a couple of releases, the product owner compiled a report showing the positive outcomes achieved with the change. The managers of the customer support and software development teams had already noticed the improvement, but the data confirmed that the number of customer complaints and change requests approved after go-live had dropped significantly. The software development team was spending less time fixing issues with new features and more time on the delivery of additional capabilities that customers had been impatiently waiting for.
When management saw the impact for the business, the product owner received a raise after just six months on the job, along with the additional responsibility to help other product teams implement the same practice of involving customer support during the planning phase of new features.
# # #
The key takeaway from these two case studies is well summarized by this quote from the book Upstream:
“With some forethought, we can prevent problems before they happen, and even when we can’t stop them entirely, we can often blunt their impact.”
Organizations are constantly handling short-term urgent problems and thanking the people who show up to save the day. Instead of being drawn to the glory of the rescue and the response to restore things to normal, analysts can achieve greater impact if they adopt the habit of looking for ways to stop problems from happening. Using the power of prevention, BAs can generate substantially better outcomes in terms of higher productivity, quality, sales, and profits.
Author: Adriana Beal
Adriana Beal moved to the U.S. in 2004 and has since helped several Fortune 100 companies, innovation companies, and startups build better software that solves the right problem. In 2016 she decided to follow her passion for quantitative analysis and earned a certificate in Big Data and Data Analytics from UT. Since then she has been working on data science projects in healthcare, mobility, IoT, and homelessness interventions. Currently the Lead Data Scientist at Social Solutions, Adriana has two IT strategy books published in her native country, Brazil, and work published internationally by IEEE and IGI Global. You can find more of her useful advice for business analysts at bealprojects.com.