Second-Order Thinking (Mental Models for Business Analysts, Part I)


Photo by Josh Riemer on Unsplash

Learning about mental models, and how to apply them to their work, is one of the best investments business analysts can make if they want to achieve the kind of deep thinking that leads to better outcomes for their projects and organizations.

But what are mental models? Shane Parrish from Farnam Street offers a good definition:

Mental models are a representation of how something works. We can’t keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks. Whether we realize it or not, we then use these models every day to think, decide, and understand our world.

The Farnam Street website is a great place to learn more about mental models in general. In this series, though, I’m going to focus on a few models that are particularly useful for business analysts.

We’ll start with the concept of second-order thinking.

First-order thinking happens when we look for something that solves the immediate problem without considering the consequences. Second-order thinking means thinking in terms of interactions and time, taking into account that, despite our best intentions, our interventions may have negative side effects or stop working over time.

Let’s look at a case study that illustrates the benefits of adopting second-order thinking in BA work.

Consider an internal system used to monitor software defects. Users have asked for a new feature: the ability to add labels to individual defect tickets. An initial analysis indicates that the new feature will bring value to various stakeholders: managers would be able to more easily build reports by filtering tickets by label; developers would be able to flag defects they want to monitor until they are closed; testers would be able to use labels to escalate defects to the development team; and so forth.

First-order thinking made the labeling solution look good.

The feature was implemented, and initially all went well: end users provided positive feedback and reported achieving the desired outcomes.

However, over time, the method of using labels to classify tickets for multiple purposes became unsustainable: the number of labels grew substantially, and lack of uniformity caused the same label to be used for different purposes (e.g., "Q2" being used to identify tickets scheduled to be fixed in both Q2-2018 and Q2-2019). People also started to forget to remove labels that were no longer applicable, causing label-based filter results to become cluttered and unreliable. For example, filtering tickets by the label "question" started to bring up not only new questions that needed an answer, but also old ones that had already been answered, because nobody remembered to remove the original label from them.
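To make the staleness problem concrete, here is a tiny Python sketch (the ticket data and the filter_by_label helper are hypothetical illustrations, not part of the actual defect-tracking system): a filter that looks only at labels keeps returning answered questions for as long as nobody remembers to remove the label.

# Hypothetical tickets illustrating the stale-label problem described above.
tickets = [
    {"id": 101, "status": "open",   "labels": {"question"}},
    {"id": 102, "status": "closed", "labels": {"question"}},  # answered long ago, label never removed
    {"id": 103, "status": "open",   "labels": {"Q2"}},        # Q2 of which year?
]

def filter_by_label(label):
    """Return every ticket carrying the label, regardless of its current status."""
    return [ticket for ticket in tickets if label in ticket["labels"]]

# Surfaces closed ticket 102 alongside the genuinely open question 101, which is
# exactly how label-based filters become cluttered and unreliable over time.
print(filter_by_label("question"))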

Initially, applying labels had a positive impact: as predicted, they made it easier to identify tickets that needed to be worked on, escalated, etc. But as users applied labels more and more often to the tickets they were working on or trying to monitor, it became harder to find the right label to apply, and to remember to remove labels from tickets when they were no longer applicable. The benefits of filtering tickets by label began to erode, and after a period of overshooting, label usage declined drastically.

What is happening here is the result of interacting feedback loops. The ability to retrieve tickets by label for follow-up and easy reporting is an amplifying factor that at first increases label usage, establishing a positive feedback loop: the more labels are created and applied, the more valuable the labeling feature becomes, stimulating more labeling activity.

The system, however, also contains a negative feedback loop: the more labels are created, the more frustrating it becomes to find the right ones to apply, and the larger the risk of someone creating a different label for the same purpose (e.g., some people using the label "Release 2.0" to flag tickets to be included in the second release, while others adopt "release-2" with the same intent). As the situation evolves, users also forget to remove labels that are no longer applicable to a given ticket.

As a consequence, retrieving tickets by label became less and less reliable, forcing people to find other ways to flag tickets for escalation and to build status reports. Over time, fewer and fewer users bothered to apply labels to tickets, choosing instead to escalate a ticket by adding a comment naming the person responsible, and to build reports using rules based on ticket creation date and other metadata that produced more reliable information than labels.
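For readers who want to see this dynamic play out, below is a minimal Python sketch of the two interacting loops. The parameter values and the simulate_label_usage function are made-up assumptions for illustration, not measurements from the real system; the point is only the shape of the curve: usage grows while the reinforcing loop dominates, then collapses as label proliferation and staleness erode the value of filtering.

# Minimal system-dynamics sketch of the two interacting loops described above.
# All parameter values are illustrative assumptions, not data from the real system.

def simulate_label_usage(months=24):
    """Simulate monthly label activity under a reinforcing and a balancing loop."""
    usage = 10.0           # labels applied per month
    distinct_labels = 5.0  # number of distinct labels in the system
    stale_share = 0.0      # fraction of applied labels that are stale or duplicated
    history = []

    for month in range(months):
        # Reinforcing loop: the more labels are used, the more valuable
        # filtering and reporting by label appears, which drives further usage.
        perceived_value = usage / (1.0 + 5.0 * stale_share)

        # Balancing loop: more distinct labels and more stale labels make it
        # harder to pick the right label and erode trust in label-based filters.
        distinct_labels += 0.15 * usage                 # users keep minting new labels
        stale_share = min(0.9, stale_share + 0.002 * distinct_labels)

        usage = max(0.0, usage + 0.4 * perceived_value - 2.0 * stale_share * usage)
        history.append((month, usage, int(distinct_labels), stale_share))

    return history

for month, usage, labels, stale in simulate_label_usage():
    print(f"month {month:2d}: usage={usage:6.1f}  distinct labels={labels:3d}  stale share={stale:.2f}")

Running the sketch shows label usage peaking after a few months and then collapsing toward zero, mirroring the overshoot-and-collapse pattern described above.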

Second-order thinking can help us anticipate this kind of system degradation so we can avoid it. Using this kind of thinking during the analysis, before the feature was implemented, would have helped the team make better decisions. The risks of lack of standardization and excessive proliferation of labels could have been identified and addressed. Controls could have been put in place, such as granting the privilege of creating new labels to only a few users, who would be able to study the requests received and educate other users when an existing label could be applied to achieve the desired intent. Standards could also have been introduced to improve the quality of the labels, such as using Q2-2018 and Q2-2019 as labels instead of Q2 to distinguish the year a quarter belongs to. Combined, these actions might have prevented the overshoot-and-collapse effect that quickly erased the initial benefits of introducing the new feature.
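As an illustration of how such controls and standards could be operationalized, here is a hypothetical validation rule written in Python. The naming conventions, pattern names, and validate_label function are my own assumptions for the sake of the example, not features of any particular tracking tool.

import re

# Illustrative label governance rules, assuming conventions along the lines
# discussed above (e.g., quarter labels must include the year). Hypothetical.
LABEL_PATTERNS = {
    "quarter": re.compile(r"^Q[1-4]-\d{4}$"),         # e.g., Q2-2019, never a bare "Q2"
    "release": re.compile(r"^release-\d+(\.\d+)*$"),  # e.g., release-2, release-2.0
    "topic":   re.compile(r"^[a-z][a-z0-9-]{2,20}$"), # short, lowercase, hyphenated
}

def validate_label(name):
    """Accept a new label only if it matches one of the agreed conventions."""
    return any(pattern.match(name) for pattern in LABEL_PATTERNS.values())

assert validate_label("Q2-2019")
assert not validate_label("Q2")            # ambiguous: which year?
assert not validate_label("Release 2.0")   # competing spelling of release-2

A rule like this would complement, rather than replace, the gatekeeping workflow suggested above, in which only a few users are allowed to create new labels.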

Second-order thinking is designed to help avoid the unintended consequences and waste caused by an idea that seemed to work based on first-order thinking but fails to deliver the desired outcomes in the long run. The best way to incorporate this mental model into your business analysis practice is to think beyond the immediate results expected from a design decision, new feature, or change in process. The idea is to think through time (what do the consequences look like in 1 month, 6 months, a year?) and to use systems thinking to take into account how other parts of the system are likely to respond to the proposed change.

What happens when the volume of daily transactions, initially low, increases over time? Or when the assets produced (labels, coupons, articles, employees, images, alerts, product categories, etc.) start to accumulate? Or when more people, including external users, start to use a feature that initially serves only a few employees? Will the proposed feature, process, design, or procedure still work then?

By asking those kinds of questions, you’ll be incorporating second-order thinking into your analysis. Try this approach in your next project, and you’ll likely start noticing things that other people can’t see, and consequently elevate the contributions you deliver to your team and organization.


Author: Adriana Beal, Data Scientist, Carnegie Technologies

 

Since 2004, Adriana Beal has been helping Fortune 100 companies, innovation companies, and startups in the U.S. gain business insight from their data, make better decisions, and build better software that solves the right problem. In 2016 she decided to follow her passion for quantitative analysis and got a certificate in Big Data and Data Analytics from UT. In her current role as Data Scientist with Carnegie Technologies, Adriana works with IoT and satellite data in support of the company’s mission to empower businesses and people to connect with greater reach, ease, and trust. She has two IT strategy books published in her native country, Brazil, and work published internationally by IEEE and IGI Global. Her latest publication is the guide How to Get the Tech Job You Want, for professionals interested in shifting careers in tech.

 


COMMENTS

Duane Banks posted on Sunday, November 10, 2019 9:00 PM
Hi Adriana!

Perhaps working through a causal loop diagram would’ve helped.

https://m.youtube.com/watch?v=dBjh0io0B6c


DivisionOne
Adriana Beal posted on Thursday, March 26, 2020 3:51 PM
Hi, DivisionOne!

Sure, causal loop diagrams, like so many other tools and approaches available to us (including systems thinking and integrative thinking), can be very useful to identify unintended consequences of a change.

But causal diagrams, like any other tool, have their limitations. I'm not sure that, in my example (the degradation of what initially might look like a great solution for making tickets more findable), creating a causal diagram would have surfaced the problem.

Sometimes, a good question is the best approach to identify potential pitfalls. In this particular example, a useful question would be, "How might users adjust to the new function of adding labels?"

Thanks for stopping by and sharing your link!
abeal

 


