If you don’t want to bother with reading the full post, here’s the list of all metrics mentioned in this post. If you decide to read further, you’ll get a bit more reasoning behind each KPI and its importance.
Metric | Explanation |
---|---|
% of Applicable requirements fully documented | Tracks progress in understanding applicable requirements and designing and documenting the corresponding controls. |
% of Applicable requirements implemented & rolled out | Tracks whether documented controls have been fully implemented and integrated into daily operations. |
Mean time to rollout | Measures the average time taken to implement each requirement, indicating progress and estimating completion. |
Number of audit findings (by severity) | Highlights key areas of vulnerability and opportunities for improvement. |
Number of incidents | Tracks the number of incidents, aiming for fewer occurrences over time. |
% of Assets / asset groups with identified and documented risks | Measures thoroughness of risk management across the asset inventory. |
Total risk score | Tracks the overall risk landscape as new risks emerge and old ones are mitigated. |
Total residual risk score | Shows the remaining risk after applying mitigations, indicating overall risk exposure. |
Total score of mitigations | Shows progress in designing and implementing controls to reduce risk. |
Mean time to resolve issues / incidents / non-conformities | Measures the speed of resolving identified issues, indicating response efficiency. |
Time to implement corrective actions post-incident | Measures how quickly fixes are applied after identifying an issue. |
Uptime / Availability | A technical metric that reflects system availability, often included in SLAs. |
# of OFIs (Opportunities for Improvement) identified | Tracks the active identification of improvements, fostering a culture of continuous enhancement. |
# of OFIs implemented | Measures the execution of improvement plans, highlighting follow-through. |
Average implementation time for OFIs | Indicates agility in making improvements, useful for identifying delays or inefficiencies. |
% of recommendations from previous audits implemented | Reflects commitment to addressing audit feedback, showing readiness for subsequent audits. |
Time/Effort spent on internal & external audits | Tracks the time spent coordinating with auditors and preparing audit materials. |
Time spent on collecting and organising evidence | Measures the effort required for evidence management, highlighting efficiency opportunities. |
Number of Controls Automated | Tracks the extent of automated controls, highlighting areas where manual work has been reduced. |
Information security and compliance is way too often seen as a cost center. It takes a lot of money, effort and time, and all the organisation seemingly gets back is a badge on their homepage with a security framework’s name.
In order for everyone to take information security seriously and prioritise it, we need to shift the perspective and show the progress we’re making in information security initiatives and the value they bring to the organisation.
In this post, I’ll go through a number of metrics and KPIs an organisation might track at different stages.
By no means do I think all these metrics should be tracked at once. Rather, this should serve as an inspirational list from which you can cherry-pick the ones that make the most sense in your organisation – or maybe it inspires you to discover a KPI that isn’t listed here but works in your current situation.
Adopting a new framework is a huge effort for any organisation, whether they already have an information security program in place or not. It can be difficult to communicate how big of an effort it actually takes and dodge the “are we there yet” questions from the stakeholders.
The “are we there yet” questions often come because the new framework implementation process looks like a black box which seemingly only has one status – complete and audited or not. But it doesn’t have to be like that. It is possible to successfully report on information security and information security compliance at all stages of the lifecycle, the key is to report on the most relevant information at each stage.
Covering any requirement from a framework is a multi-step process. To get a realistic view of how things are progressing, it makes sense to track how work moves through the “pipeline” from documentation to actual rollout into day-to-day work, whether through policy change or technical implementation.
It all starts with understanding which requirements are even applicable to the organisation and then starting to design and document the controls.
Documenting, however, is only one piece of the puzzle. Before we can say that a control has truly been implemented, we also need to roll it out.
Some controls will require a bigger organisational change – as is often the case with policies. That change might require collaboration from other teams or departments, and the rollout might need to be timed to match the organisation’s capacity for change.
Similarly to organisational change that policies bring, some technical controls also take time to implement.
That means that many of the controls will stay at this stage for a while, and reporting them as such is a great way to communicate where we’re at and, maybe, where the bottlenecks are.
Mean time to rollout is a great metric that can hint at when the entire framework implementation might be completed. One just needs to look at the time it has taken to cover the requirements so far and, based on that, calculate the remaining effort. In theory this sounds awesome, but keep in mind that not all organisations are equally equipped to measure this. So only do this when it’s easy and meaningful to measure in your organisation.
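If your tooling records when work on each requirement started and when it was fully rolled out, the arithmetic itself is simple. Here’s a minimal sketch in Python; the data shape and control names are made-up placeholders, and the linear projection is deliberately naive:

```python
from datetime import date, timedelta

# Hypothetical rollout log: (requirement, work started, fully rolled out).
# In practice this data lives in your GRC tool or a spreadsheet.
completed = [
    ("A.5.1 Policies for information security", date(2024, 1, 8), date(2024, 2, 2)),
    ("A.5.9 Inventory of assets", date(2024, 1, 15), date(2024, 3, 1)),
    ("A.8.7 Protection against malware", date(2024, 2, 1), date(2024, 2, 20)),
]
remaining = 42  # applicable requirements not yet rolled out

durations = [(done - started).days for _, started, done in completed]
mean_days = sum(durations) / len(durations)

# Naive projection: assumes requirements are tackled one at a time at a
# stable pace, which is rarely true -- treat it as a conversation starter.
estimated_finish = date.today() + timedelta(days=mean_days * remaining)

print(f"Mean time to rollout: {mean_days:.1f} days")
print(f"Estimated completion: {estimated_finish}")
```

If several requirements are worked on in parallel, divide the projection by your effective parallelism before sharing it with stakeholders.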
Once a framework has been rolled out, it’s time for it to serve its purpose as part of your Information Security Management System (ISMS).
The primary purpose of an ISMS is to protect an organisation’s (information) assets by systematically managing risks, ensuring data integrity, confidentiality, and availability.
So at this stage we need to focus on whether the ISMS is functioning, and how effectively and efficiently it’s doing so. Because, you know, efficiency is good but effectiveness is a must – no point in doing pointless things efficiently.
In essence, we will be looking at 4 things:
This is one of the most difficult aspects to report on; coincidentally, it’s also one of the most important. The thing is that until something hits the fan, everything is 100%, right?
There are two sure-fire ways to know when your ISMS is not up to par – the auditor finds something or there’s an incident.
Highlights key areas of vulnerability and opportunities for improvement.
A good auditor will always find something. So in order for the story to make sense, you might want to split the findings up by severity.
Plain and simple, we want as few incidents as possible and for the number to go down over time.
Both of these, however, are lagging indicators. They are the outcome, or culmination, of our efforts (or lack thereof) so far. Considering the core mission of an ISMS – protecting the confidentiality, integrity and availability of an organisation’s (information) assets – you might also consider splitting the incidents up by type, to really understand where the issues lie.
Measuring the breadth and depth of the ISMS
Similarly to the metrics in the implementation stage, we want to show and inspire progress in all the leading indicators that ultimately bring us to this number of incidents. Because you always get what you measure.
One way to do this is to make sure that our ISMS has breadth and depth. This is important to note because, of course, we might have 100% coverage of the requirements from ISO 27001, SOC 2 or others, but not-so-great information security. The popular “SOC in a box” topic comes to mind.
So, how do we measure the breadth and depth of our ISMS and make sure it goes further than ticking the boxes in a requirements checklist?
Measures the thoroughness of risk management across the asset inventory. You could also just track the absolute number, whichever makes more sense for you.
It’s entirely possible to be compliant with frameworks having completed risk assessments for only some of your assets, and that’s a good starting point. But over time, we want to expand to less critical and low-value assets as well. Because any chain is only as strong as its weakest link.
Quoting our favourite AI assistants: in this dynamic, ever-evolving landscape, it would be nearly impossible for an organisation’s risk score to stay still. New risks are constantly emerging, and mitigating them means more effort, which might mean more resources required.
Hence, this is a good metric to keep an eye on and use in your stakeholder conversations to explain what the heck you are doing all the time and why you might need more headcount.
This is one of the few KPIs in this post that I would argue every team should be tracking. If you were to only choose one, take this. It’s great at showing your overall risk exposure over time.
Shows the progress the team is making in designing and evolving controls as part of improving the ISMS and reducing risk.
The total risk and total residual risk scores will tend to keep climbing as new risks are identified. It could feel like the situation is always worsening and we are not making any progress… No matter how much the other numbers go up, this is the number that will show progress. Assuming that you are working on it, of course.
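To make the relationship between these three scores concrete, here’s a minimal sketch assuming a simple additive scoring model (residual = inherent minus mitigations, floored at zero). Real methodologies – likelihood × impact matrices, for instance – will differ, but the totals relate the same way:

```python
# Hypothetical risk register: (asset, inherent risk score, score of mitigations).
risks = [
    ("Customer database", 20, 12),
    ("Build pipeline", 12, 4),
    ("Office Wi-Fi", 6, 0),
]
assets_in_inventory = 25  # total assets, for the coverage metric

coverage = len({asset for asset, _, _ in risks}) / assets_in_inventory
total_risk = sum(inherent for _, inherent, _ in risks)
total_mitigation = sum(mitigation for _, _, mitigation in risks)
total_residual = sum(max(inherent - mitigation, 0) for _, inherent, mitigation in risks)

print(f"Risk coverage: {coverage:.0%} of assets")
print(f"Total risk: {total_risk}, mitigations: {total_mitigation}, residual: {total_residual}")
```

Reported together, a growing total risk next to a growing mitigation score and a flat or shrinking residual score tells the “we’re keeping up” story in one line.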
Measures the speed and efficiency of response once an issue is identified. Indicates the effectiveness of mitigation and remediation practices.
“Issue” is a broad term, for sure. It could mean a finding from an audit, a non-conformity or a (breach) incident. It makes sense to make this one specific to whatever you want to improve. Having trouble with the implementation time of technical controls after non-conformities? Start measuring it.
Having trouble getting systems up and running after an incident? Start measuring it. And then make it a problem for the right person who is in the position to move the needle on the metric.
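A minimal sketch of that measurement, assuming you can export issues with a type and opened/resolved timestamps from your ticketing or GRC tool (the field names here are assumptions):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical issue log: (type, opened, resolved).
issues = [
    ("incident", datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 17, 0)),
    ("incident", datetime(2024, 6, 3, 8, 0), datetime(2024, 6, 5, 12, 0)),
    ("non-conformity", datetime(2024, 5, 10), datetime(2024, 6, 20)),
    ("audit finding", datetime(2024, 4, 2), datetime(2024, 5, 15)),
]

# Mean time to resolve, split by issue type, so each number can be owned
# by the person actually in a position to move it.
hours_by_type = defaultdict(list)
for kind, opened, resolved in issues:
    hours_by_type[kind].append((resolved - opened).total_seconds() / 3600)

for kind, hours in sorted(hours_by_type.items()):
    print(f"{kind}: mean time to resolve {sum(hours) / len(hours):.1f} h")
```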
Measures the speed of applying fixes after identifying an issue.
One metric that most stakeholders can definitely resonate with and understand is availability. It’s one technical metric that even makes it into customer contracts through service level agreements (SLAs). It’s one of the measures we can show before anything has hit the fan, to demonstrate that things are good (for a reason).
An ISMS can never be static; it requires ongoing maintenance and monitoring to ensure it remains effective over time as things change (assets change, risks evolve, regulations and frameworks get updated). Continuous improvement is even an expectation in some frameworks like ISO 27001.
Improvement is change over time (hopefully for the better). We’ve already covered one type of change – increasing the depth and breadth of our ISMS: how many assets are covered with risk identification and mitigations.
Another type of change is finding and implementing opportunities for improvement (OFIs) in existing documentation and controls.
Demonstrates active identification of enhancements; drives continuous improvement culture.
Shows execution of improvement plans; highlights effectiveness of follow-through.
Indicates agility in implementing improvements; prompts efficiency analysis. You might want to track this if you’re having trouble engaging other departments in the collaborative efforts.
Reflects commitment to addressing audit feedback; ensures audit preparation readiness.
One of the biggest challenges in GRC that we hear from information security managers is the fact that audits take a ridiculously long time. Interestingly, a lot of people keep using the word ridiculous.
What makes this a big or dare I say a ridiculous problem, is the fact that the time wasted in audits is taken away from meaningful information security efforts.
So, keeping with the premise that you get what you measure, it makes sense to measure the time we spend on maintaining and auditing the ISMS so we can maximise the time that goes to meaningful information security work.
Time spent on audits and governance in general
Measuring the time we spend on governance is no easy undertaking. It’s vague – it consists of different technical activities, meetings, discussions, chats, evidence collection and so on.
It’s not important to be super exact in this measurement. What’s important is to keep the measurement consistent so that the margin of error stays the same. What we want to keep an eye on are the trends and ballpark amounts.
At a minimum, count the hours spent coordinating with the auditors and discussing audit topics internally. Considering that audits are frequently quoted as the most time-consuming part of information security, there will be parts that offer opportunities for optimisation.
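Even a coarse, self-reported time log is enough to see the trend. A minimal sketch, assuming nothing fancier than (month, activity, hours) tuples – the categories are placeholders:

```python
from collections import defaultdict

# Hypothetical time log; consistency matters more than precision, so even
# rough weekly estimates recorded the same way every time are fine.
time_log = [
    ("2024-04", "auditor coordination", 6),
    ("2024-04", "evidence collection", 14),
    ("2024-05", "auditor coordination", 9),
    ("2024-05", "evidence collection", 22),
]

totals = defaultdict(float)
for month, activity, hours in time_log:
    totals[(month, activity)] += hours

# The trend, not the absolute number, is what goes into the stakeholder report.
for (month, activity), hours in sorted(totals.items()):
    print(f"{month} {activity}: {hours:.0f} h")
```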
Unless you’re using an integrated GRC platform like Kordon, most of your time during internal and external audits will be spent on collecting and organising evidence. Measuring that time will surprise you and the stakeholders, and will allow you to justify investment in software tooling and/or productivity-boosting initiatives that streamline the process with the tools already in the organisation.
Considering the popularity of “compliance on autopilot”, this metric is coming up more frequently. I want to stress, though, that automation for automation’s sake does not make sense.
You should keep track of this indicator if you have identified that the upkeep of your ISMS is taking too much time and you’ve pinpointed the controls where the ROI of automation makes sense. We’ve all heard stories where hundreds of hours have been spent to save a team 8 hours of work a year.
Implementing and maintaining an information security program is a continuous journey, not a one-time effort. By tracking meaningful and actionable KPIs at each stage, from framework implementation to operational effectiveness and continuous improvement, organizations can clearly demonstrate progress and value. Remember, the goal is not just to collect data but to use it to drive better security practices, optimize resources, and make informed decisions that align with your organization’s risk appetite. Choose the metrics that best reflect your current priorities and maturity level, and use them to build a resilient and adaptive information security program.
P.S. I know I could have made it a round 20, but I first wrote the post and then counted, and, well, I didn’t want to pull another one out of thin air just to make the number nicer.