Readers of this series of articles have seen the progression of steps taken in the implementation of 1 particular element
of governing and managing enterprise IT (GEIT): managing security, as defined in COBIT 5’s APO13
Manage security process. The preceding articles have described the activities involved in identifying the issues,
developing the mitigation plans, defining the appropriate work products, implementing those products and plans, and verifying
the process’s operational effectiveness. This article, the sixth and final installment in the series, examines 2 objectives:
how to measure APO13’s performance and how to use that measurement to identify areas for potential process improvement.
In addition to these 2 objectives, it is important to monitor the performance of all processes, not solely APO13, to
determine whether their goals support enterprise objectives.
After the governance structure for APO13 has been running for some time, there will be work products that can be collected
and assessed to determine how well the process is performing. The methodology used to perform this assessment is presented
in 2 ISACA publications within the COBIT 5 product family: COBIT Process Assessment Model (PAM): Using COBIT 5 and COBIT Assessor Guide: Using COBIT 5. As a reference, the practitioner might also need to use COBIT 5: Enabling Processes to confirm process goals and metrics and base practice work products. These 3 publications (PAM, Assessor Guide and Enabling Processes) are referred to throughout this article.
The overall process of performing an assessment is presented in this article, but the focus is on data collection, validation
and rating. The reader is advised to reference the PAM and the Assessor Guide for greater detail on project management
of an assessment. This article also discusses some of the pitfalls to avoid in performance measurements.
Setting Up an Assessment
The assessment program is made up of 7 steps. They are:
- Initiation
- Planning
- Briefing
- Data collection
- Data validation
- Process attribute rating
- Assessment reporting
The first 3 steps (1-3) and the reporting step (7) pertain to every assessment initiative and cover the entire scope of each assessment. In an assessment of multiple processes, data collection, validation and attribute rating (steps 4-6) would be performed for each process examined. In the example in this series, the assessment looks only at APO13 Manage security.
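As a sketch of how these steps fit together, the short Python fragment below models the flow: steps 1 through 3 and step 7 run once per assessment, while steps 4 through 6 loop over each in-scope process. The function name and messages are purely illustrative; they are not part of COBIT 5 or the Assessor Guide.

```python
# Illustrative driver for the 7-step program (names invented): steps 1-3
# and 7 span the whole assessment; steps 4-6 repeat per in-scope process.
def run_assessment(processes: list[str]) -> None:
    print("1. Initiation: scope, participants, authorization, target levels")
    print("2. Planning: approach, owners, evidence to be requested")
    print("3. Briefing: communicate the assessment to all participants")
    for proc in processes:            # steps 4-6 run once per process
        print(f"4. Data collection for {proc}")
        print(f"5. Data validation for {proc}")
        print(f"6. Process attribute rating for {proc}")
    print("7. Assessment reporting across the entire scope")

run_assessment(["APO13"])  # this series assesses only APO13
```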
1. Initiation
Initiation is primarily a project management step. In it, the practitioner defines what will be assessed, who the participants
will be, the authorization of the project and other details.
A vital part of initiation is defining the target assessment level for the process being examined. Across
an enterprise, target levels vary, and the ultimate evaluation of performance includes a comparison to the target. Every
effort must be made to secure buy-in from the project (or assessment) sponsor to ensure the target level is fully supported.
The initiation section of the Assessor Guide goes into great detail on determining the class of the assessment and confirming
scope.
2. Planning
The planning stage confirms the approach that will be followed for the assessment and identifies the process owner and other resources who will need to provide work effort to complete the assessment. It is imperative to clearly define what evidence the assessor, or a team member, will request and where that evidence will come from.
3. Briefing
The briefing step extends communication of the assessment to all participants, especially those who perform the work being examined. The Management Support section of the Assessor Guide states that it should be made clear that it is the process that is under examination, not the performance of the employees themselves. This point is critical: the lead assessor must emphasize it from the first meeting with process employees and restate it throughout the assessment when interacting with process practitioners. A lack of enthusiasm from those who perform the process tasks can prove disastrous to the assessment.
4. Data Collection
Data collection is the step in which evidentiary material is collected from practitioners. The planning step defined what
will be collected, and those artifacts must be consistent with process and supporting practice definitions, which label
them as “outputs.”
The terminology used in Enabling Processes is not always identical to that used in the PAM (figure 1). Process owners will have learned the Enabling Processes terminology in the course of building and using the governance structure; the PAM-specific terminology will likely be new to them. It is useful to review these differences before requesting and collecting data.
Figure 1—Terminology Used in Enabling Processes and the PAM

| PAM | Enabling Processes |
| --- | --- |
| Outcomes (Os) | Process Goal |
| Base Practices (BPs) | Management Practice |
| Inputs | Inputs |
| Outputs | Outputs |
| Work Products (WPs) | None—refers collectively to all inputs and outputs for a particular practice |
Another important difference between Enabling Processes and the PAM is that the PAM connects, or maps, work product outputs to base practices. In Enabling Processes, each process goal is presented with related metrics, but there is no connectivity shown for how the practice outputs support each goal. For APO13, the relationship of the outputs to the process goals is shown in figure 2.
Figure 2—Outputs and Process Goals

| Output | Practice | Process Goal |
| --- | --- | --- |
| Information security management system (ISMS) policy | APO13.01 | 1 |
| ISMS scope statement | APO13.01 | 1 |
| Information security risk treatment plan | APO13.02 | 2 |
| Information security business cases | APO13.02 | 2 |
| ISMS audit reports | APO13.03 | 1, 3 |
| Recommendations for improving the ISMS | APO13.03 | 1, 3 |
It is important to understand which WPs support each outcome, or goal, in order to perform the assessment.
Figure 2 provides this detail.
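As an illustration, the figure 2 mapping can be held as a simple lookup so the assessor can pull the WPs behind each goal. The dict shape and function below are assumptions made for this sketch, but their contents come straight from figure 2.

```python
# Figure 2 as a lookup: work product -> (practice, process goals supported).
# Work product names are abbreviated from figure 2.
outputs_to_goals = {
    "ISMS policy":                              ("APO13.01", {1}),
    "ISMS scope statement":                     ("APO13.01", {1}),
    "Information security risk treatment plan": ("APO13.02", {2}),
    "Information security business cases":      ("APO13.02", {2}),
    "ISMS audit reports":                       ("APO13.03", {1, 3}),
    "Recommendations for improving the ISMS":   ("APO13.03", {1, 3}),
}

def work_products_for_goal(goal: int) -> list[str]:
    return [wp for wp, (_practice, goals) in outputs_to_goals.items()
            if goal in goals]

print(work_products_for_goal(1))  # WPs the assessor needs for goal 1
```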
The degree of data collection depends greatly on the nature of the output and the defined scope of the assessment. For example, APO13.01 will produce an ISMS policy, and there will not be more than one such policy produced in a single year. By contrast, DSS05.04 Manage user identity and logical access produces user access rights reports, which are generated on an ongoing basis, driven by changes in personnel: new hires, terminations and internal movements. For user access reports, an assessment would request much more evidence.
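To illustrate, the amount of evidence requested can be keyed to how often an output is produced. The frequencies and sample sizes below are invented for illustration, not prescriptions from the PAM.

```python
# Hypothetical sampling guide: annual artifacts need one instance, while
# continuously generated reports call for a broader sample.
SAMPLE_SIZES = {"annual": 1, "quarterly": 2, "ongoing": 25}

def evidence_to_request(frequency: str) -> int:
    return SAMPLE_SIZES[frequency]

print(evidence_to_request("annual"))   # ISMS policy: one instance
print(evidence_to_request("ongoing"))  # DSS05.04 access reports: sample of 25
```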
As each data artifact is collected, its origin, date of collection and source must be carefully recorded. The record of evidence collected will be needed if the assessment results are challenged.
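One way to make that record concrete is a small structure per artifact. The field names below are assumptions made for illustration, not PAM terminology.

```python
# Hypothetical evidence log entry: every artifact keeps its provenance so
# the trail can withstand a challenge to the assessment results.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EvidenceRecord:
    work_product: str   # e.g., "ISMS policy"
    origin: str         # repository or system the artifact came from
    source: str         # person or role who provided it
    collected_on: date  # date of collection

record = EvidenceRecord("ISMS policy", "policy repository",
                        "information security manager", date.today())
```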
5. Data Validation
For data to be useful in the assessment, they must be reliable. The assessor must determine that each piece of data collected came from an appropriate source and that all instances are consistent with respect to source. All evidence collected must be separately examined and confirmed as to fitness for purpose.
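Continuing the EvidenceRecord sketch above, a minimal validation pass might check that every instance came from an approved source and that sourcing is consistent across instances. The notion of an "approved sources" list is an assumption made for this sketch.

```python
# Flag artifacts from unexpected sources and inconsistent sourcing across
# instances of the same work product (builds on EvidenceRecord above).
def validate(records: list[EvidenceRecord], approved: set[str]) -> list[str]:
    problems = [f"{r.work_product}: unexpected source '{r.source}'"
                for r in records if r.source not in approved]
    sources = {r.source for r in records}
    if len(sources) > 1:
        problems.append(f"inconsistent sources across instances: {sources}")
    return problems
```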
6. Process Attribute Rating
Once all required data are collected, the assessor must apply professional judgment to determine the level at which the process
is operating. The assessor must understand that level 1 pertains to the creation of the work products as defined in the
management practices. Once that hurdle is reached, the assessor may determine whether a higher level is appropriate,
but those higher levels are no longer COBIT-centric. Higher levels are based on International Organization for Standardization (ISO) standard ISO/IEC 15504 (being superseded by the ISO/IEC 33000 series), and the assessment criteria move to process performance management, definition, actual deployment and so on. The move from COBIT to ISO is not well understood by many practitioners who are not experienced assessors.
The Assessor Guide defines the process by which an assessor moves up the capability levels. In short, a process can be assessed at a higher level only if it was first assessed as fully achieving every level beneath it. For example, level 3 can be assessed if, and only if, levels 0, 1 and 2 were each assessed as fully achieved. Level 3 itself can then be rated as fully or, perhaps, only partially achieved.
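The ladder rule lends itself to a short sketch. The fragment below follows the simplified rule stated in this article (a level counts only if every level beneath it was fully achieved); the N/P/L/F rating scale (not/partially/largely/fully achieved) comes from ISO/IEC 15504, but the input shape is an assumption, not the PAM's data model.

```python
# Climb the rating ladder: stop at the first level that is not fully
# achieved; the highest fully achieved level is the result.
def achieved_level(ratings: dict[int, str]) -> int:
    """ratings maps capability level (1..5) to 'N', 'P', 'L' or 'F'."""
    level = 0
    for n in sorted(ratings):
        if n == level + 1 and ratings[n] == "F":
            level = n       # fully achieved, so the ladder continues
        else:
            break           # anything less and the climb stops here
    return level

# Level 3 was only partially achieved, so the process sits at level 2.
print(achieved_level({1: "F", 2: "F", 3: "P"}))  # -> 2
```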
The final determination to make is the comparison of the achieved level against the target determined earlier. If there
is a gap between these 2, the assessor should make a separate note of the gap and whether the level achieved is greater
or less than the target.
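Continuing the rating sketch above, the comparison against the target from the initiation step reduces to a signed difference; the values here are invented for illustration.

```python
# Compare the achieved level with the target agreed at initiation.
# A negative gap means the process falls short of its target.
target_level = 3                                     # set during initiation
achieved = achieved_level({1: "F", 2: "F", 3: "P"})  # -> 2
print(f"achieved {achieved}, target {target_level}, "
      f"gap {achieved - target_level:+d}")           # gap -1
```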
7. Assessment Reporting
The assessment report can take many forms. There is no prescribed format as this is not a regulatory compliance activity.
The assessor should have presented a report template and had it approved during the initiation step.
When assessments are performed over multiple processes, the assessment report includes numerous ratings. It is important
to understand that these ratings must be considered independently of one another. Combining them by calculating averages
across processes creates numerous problems: process assessments look at disparate artifacts, so averaging across them
is an “apples and oranges” comparison. Averaging across processes can also mask issues; lower ratings are balanced
out by higher ratings in other processes, and the average across them might look acceptable. Using this model to calculate
or report an assessment score for the overall enterprise should be avoided.
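A small worked example makes the masking effect concrete; the processes and ratings below are invented for illustration.

```python
# Two invented process ratings: averaging them hides the failure.
ratings = {"APO13": 4, "DSS05": 0}    # DSS05 effectively not performed
average = sum(ratings.values()) / len(ratings)
print(average)                        # 2.0 -- looks tolerable on paper
print({p: r for p, r in ratings.items() if r == 0})  # but DSS05 sits at 0
```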
Process Improvement
Assessment ratings are compared to enterprise targets for the process(es) assessed. Any ratings that fall below target represent
potential opportunities for improvement. Those situations are easy to identify. However, there are other benefits from
performing an assessment that can lead to process improvement. When enterprise objectives or stakeholder requirements
evolve, the governance structure created using COBIT 5 makes examination of capabilities a simple matter of going back
through the process capability assessment. New requirements or changes can bring about changed target levels. Comparing
these new targets to existing or newly refreshed capability levels can quickly highlight improvement opportunities.
COBIT 5 Implementation describes 7 phases of an implementation cycle. This cycle is generally discussed when planning the implementation of a new or revised governance structure, but it is also used for continuous improvement projects. In such projects, the second phase calls for an assessment of process capability, and phase 3 performs a comparison of actual levels to target levels. These 2 phases in the continuous improvement model are exactly what an assessor performs in the activities described in this article. The only difference is that, in continuous improvement initiatives, the driver for the activity can be as simple as a periodic review, a response to some stimulus or integration with other enterprise initiatives.
Conclusion
This series of articles has focused on one COBIT process as a means to discuss how to use several parts of the COBIT 5 product family to design and implement a governance structure and measure its performance. Focusing on a particular process has enabled the articles to offer specific information about the steps involved and to detail each from the perspective of an IT governance practitioner. Despite the detail provided over the course of the 6 articles, no single series can provide a comprehensive tutorial on what is ultimately a long, complex program of activities undertaken by multiple participants. Nevertheless, these articles describe a general approach that should help the reader navigate a very dense, but effective, governance framework and its supporting pieces.
Peter C. Tessin, CISA, CRISC, CISM, CGEIT
Is a senior manager at Discover Financial Services. He leads the governance group within business technology (BT) risk. In this role, he is responsible for ensuring that policy, standards and procedures align with corporate objectives. He serves as the internal party responsible for regulatory exam management and is the internal liaison to corporate risk management. Prior to this role, Tessin was a technical research manager at ISACA where he was the project manager for COBIT 5 and led the development of other COBIT 5-related publications, white papers and articles. Tessin also played a central role in the design of COBIT Online, ISACA’s website that offers convenient access to the COBIT 5 product family and includes interactive digital tools to assist in the use of COBIT. Prior to joining ISACA, Tessin was a senior manager at an internal audit firm, where he led client engagements and was responsible for IT and financial audit teams. Previously, he worked in various industry roles including staff accountant, application developer, accounting systems consultant and trainer, business analyst, project manager, and auditor. He has worked in many countries outside of his native United States, including Australia, Canada, France, Germany, Italy, Jordan, Mexico and the United Kingdom.