Getting up to Speed on Threat Risk Assessment Finding Reports


It’s Monday morning and, as you arrive at your desk, you know that it is going to be a busy day. The new portal project is going to be promoted into production in a couple of weeks and there are still a few items to clear up.

As you fire up your e-mail client and take the first sip of coffee, your shoulders start to tense up. The subject line of one of your e-mails reads “Threat Risk Assessment Finding Report” and it is marked important.

This is not the way you wanted to start the week, but you remember the report was due on Friday, the day you decided to “call in sick”.

As you open the message, realizing you can no longer avoid it, you cross your fingers hoping it won’t be too bad.

Then you remember why you hate these reports so much: they are confusing and seem overly alarming in their findings.

Critical, Severe, High, Medium and Important findings all over the place; red, orange and yellow screaming in your face; reams and reams of technical output; patches missing, vulnerabilities exposed, buffer overflows, exploitations amok, privilege escalations, and on and on. Oh, your head hurts now.

Where do you begin? What’s important, and how much is this going to push the timeline back? You know the launch is in two weeks and there are functional issues that need to be addressed; there is no time to deal with all this!

Take a Deep Breath

If you have ever found yourself in this situation, you’re not alone. Project managers, especially, seem to dread this part of the project as it always equals more work.

The stress comes from a lack of understanding of the details in the reports that security analysts provide to project teams. I can honestly say that most of this is our fault, but on the other hand, no one ever asks us to sit down with them and explain the findings or what they mean; the only interest is in what they can skip.

As a business analyst, if you cannot understand the information you are trying to analyze, well, what does that get you?

A proper Threat Risk Assessment for a project should give you a clear “where we stand” viewpoint and should not be confusing.

Unfortunately, this is rarely the case, and part of the blame here is that there are few places for a security analyst to learn how to write a report that gives the details and skips the confusion.

It took me over five years before I felt comfortable in releasing my reports to executive management. The difference came when I started getting feedback instead of silence.

Silence, or zero feedback, means that they didn’t read it, they didn’t understand it, or you didn’t address the point that interested them.

This is true for any report, be it from a security analyst or a business analyst.

Taking it all Apart

Having been in this field for over a decade, I can tell you every report is different. As I highlighted in my last article “Security Tool Chest: Checklists”, if you take three analysts and have them report on the same target, you will get three different finding reports.

However, what will be mostly consistent is the language they use in the report. Over time, you might have picked up some of the terms, but unless you can put them in context, you will still have a hard time reading a findings report.

The structure of a report may also differ between organizations, and further between analysts. Even so, just as the terminology is largely shared, the reports will most likely all cover three common areas:

  • What is the threat to the organization?
  • Is there a vulnerability that could be exploited by the threat?
  • What is the risk to the organization if the vulnerability is exploited?

There are many terms in those three questions that form the foundation for any information security assessment findings report.

Let’s use a common everyday scenario to describe these terms:

You are walking down the street and come to an intersection. You notice the red signal light is not functioning, only yellow and green. Cars are traveling through the intersection in both directions as no one seems to receive a red light signal. You want to cross the street.

What is the threat to the organization?

Well, you are the organization in this instance, and the threat is the drivers of the cars.

Is there a vulnerability that could be exploited by this threat?

Yes, the red signal light is not working, and if a driver does not notice that fact, they may proceed into the intersection.

What is the risk if the vulnerability is exploited?

If you cross the street, you could be hit by a car, causing death or serious injury.

Were you thinking the car would be the threat in this case? After all, it is the car that hits you, not the driver.

Well, you are partially correct, but the car is known as a threat agent: the method by which the actual threat is delivered. Although the car is hitting your body, the car cannot move without the input of the driver.

Another significant part of a findings report is the criticality factor. This is often the part where color gets overused by an analyst and you are exposed to screaming reds, oranges, and yellows, as I described at the top of this article.

Criticality is used to describe the significance of the event if the vulnerability is actually exploited.

Would you say the result of death is Critical, High, Medium, or Low? Most of us would select Critical, and the same is true in the IT security space.

If you have an application that has a vulnerability that is easy to exploit and would result in some catastrophic event like loss of service or destruction of data, it would get a critical rating.

So there are actually four questions that need to be asked and answered, with the fourth being: what is the criticality factor?
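To see how the four questions fit together, here is a minimal sketch in Python. The structure and field names are my own illustration, not any standard report format; it simply captures the crosswalk scenario as a single finding record:

```python
from dataclasses import dataclass

# Hypothetical structure for one finding; field names are illustrative,
# not part of any standard Threat Risk Assessment format.
@dataclass
class Finding:
    threat: str          # Who or what could cause harm?
    vulnerability: str   # What weakness could the threat exploit?
    risk: str            # What happens if the vulnerability is exploited?
    criticality: str     # Critical / High / Medium / Low

# The crosswalk scenario expressed as a finding record:
crosswalk = Finding(
    threat="Drivers of the cars (the car itself is the threat agent)",
    vulnerability="The red signal light is not functioning",
    risk="Pedestrian is struck, causing death or serious injury",
    criticality="Critical",
)

print(crosswalk.criticality)
```

Each row in an analyst's findings report is, in effect, one such record: the four answers side by side.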

To help make further sense of all this, I have prepared some charts outlining and defining common terms you might come across in a Threat Risk Assessment findings report (see below). Also, check out the Security Threat Risk Assessment - Findings Report Template.

I hope it makes your Monday mornings a little more enjoyable.

Author: Stewart Allen is a certified Information Security Consultant with over 12 years of experience specializing in Health Care and Financial Service industries. Acting as an Information Security Advisor, Mr Allen is responsible for finding opportunities for his clients to achieve their business goals, while helping to ensure information assets are secure. If you would like to learn more about the author he can be found on LinkedIn at


Risk Ratings:

Critical
  Technical Meaning: There is an existing threat that could take advantage of the finding if it is not corrected. Exploitation would lead to loss of service, root-level access, or unauthorized disclosure of data. (virus/worm/exploit)
  Business Risk Meaning: The finding will result in a direct violation of a corporate policy or governance item, or would put the business in peril if the finding became a reality. (catastrophic failure of a critical service)
  Action to Take: Treat these as the name states: Critical. Do not move forward in the project until each finding is addressed.

High
  Technical Meaning: The finding would be easy to exploit if discovered and would bypass any security controls in place. It could lead to loss of service, root-level access, or unauthorized disclosure of data. (missing patches)
  Business Risk Meaning: The finding is in violation of a corporate policy or governance item, or the business reputation could be in jeopardy if the finding became a reality. (client data hosted live on the internet is stolen)
  Action to Take: These findings need to be addressed before the project is put into production. Any item not addressed will require a risk-acceptance sign-off from senior management.

Medium
  Technical Meaning: There are security controls in place to protect against the finding; however, a skilled malicious user could exploit it. (buffer overflow condition in application source code)
  Business Risk Meaning: The finding illustrates that the organization is not following best practice. (clients are not notified to expect a fax before it is sent)
  Action to Take: If you cannot correct them now, make sure to correct them after the go-live. If you fail to correct them, expect them to escalate to a High rating.

Low
  Technical Meaning: Security controls are in place to protect against the finding, or there is no risk to the organization based on the security controls in place. (missing patch for MSN Messenger, which is not used)
  Business Risk Meaning: The finding highlights information the organization should be aware of but does not present any risk. (client data is retained for 7 years when policy requires only 5 years)
  Action to Take: Read and understand these findings. Sometimes they will uncover bigger findings down the road or for the next project.

Common Terms:

Buffer / Heap or Stack Overflow
  Technical Meaning: An attempt to overwrite a known memory buffer to cause it to overflow. When it overflows, the application using the buffer could cease to work, or the attacker can inject new instructions into the buffer to gain command-and-control access.
  Business Risk Meaning: If a system is vulnerable to buffer, heap, or stack overflows, the availability of the service cannot be relied on. SLAs will be affected. Time-sensitive services should not be run on this system.
  Action to Take: This is a challenging issue that companies like Microsoft correct on a regular basis (patch your systems). If it is an in-house application, assign a specific developer the responsibility for correcting it.

Cross-Site Scripting
  Technical Meaning: An attacker injects malicious scripts into an otherwise trusted website to trick unsuspecting users into running code, or redirects them to an untrusted version of the website.
  Business Risk Meaning: This is the most common form of phishing attack and could result in loss of reputation or trust from your business partners if your clients are constant victims.
  Action to Take: Corrective action is straightforward once detected, but detecting this vulnerability requires a skilled analyst. Take the time to find and correct it before going into production.

SQL Injection
  Technical Meaning: An attacker issues SQL queries (Insert/Update/Delete) via an input field such as a form on a web page or a client utility. If successful, they can execute administrative operations on the database and, in some cases, issue commands to the operating system.
  Business Risk Meaning: A hacker is able to bypass front-end security controls and gain direct access to information stored in a database. The integrity of the data can no longer be trusted, and this will lead to information disclosure.
  Action to Take: A powerful exploit that is straightforward to correct and must be validated. Ensure this is corrected before going into production.

Remote Code Execution
  Technical Meaning: A code-injection attack, usually very specific to a vulnerable application running on a remote system, that requires a skilled hacker to pull off correctly. (malicious SQL stored procedure)
  Business Risk Meaning: A hacker is able to remotely issue commands to your servers. In other words, you can no longer trust the services, information, or availability of this system.
  Action to Take: Have a complete review of the security architecture performed before going to production, or if there are patches specific to the findings, ensure they are applied.

Information Disclosure
  Technical Meaning: Security controls are not preventing the information asset from being disclosed to unauthorized persons.
  Business Risk Meaning: If someone outside the confidentiality ring of trust gains access to an information resource, there is unauthorized disclosure of information. This is what happens when unencrypted laptops go missing.
  Action to Take: Information disclosure is a serious issue that must be addressed to avoid making the headlines. Encryption and multi-factor authentication are used to combat it.
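To make the SQL Injection entry above concrete, here is a minimal Python sketch using the standard sqlite3 module. The table, column, and payload are made up for illustration; it contrasts the vulnerable string-concatenation pattern with the parameterized query that the "Action to Take" guidance calls for:

```python
import sqlite3

# In-memory database with a made-up users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: user input concatenated straight into the query string.
# The payload turns the WHERE clause into a tautology, so every row leaks.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Corrected: a parameterized query; the driver treats the input as data
# only, so the payload matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # every row comes back
print(safe)        # no rows match
```

This is why the fix is described as straightforward: switching to bound parameters neutralizes the attack without changing what legitimate queries return.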

For more Information Security terms and definitions, NIST has an excellent document you can reference:




Copyright 2006-2024 by Modern Analyst Media LLC