Key Insights:
Audit teams lose time and budget when early risk calls are vague, outdated, or undocumented. The downstream impact shows up in over-testing low-risk areas, missing what matters, and struggling to defend planning decisions in review. This article covers what risk identification means under current standards, the techniques that work best for audit and advisory teams, and how to translate identified risks into a defensible, risk-based audit plan.
Risk identification is the process of recognizing and describing risks along with what drives them and what they could lead to. ISO 31073 draws a clear line between identification and the broader risk assessment process, which also includes analysis and evaluation. That distinction matters in your workpapers: identification comes first, and everything downstream depends on getting it right.
Under the IIA framework, management and the board own risks while internal audit provides an independent check on how well those risks are managed. Blurring that line puts your independence at risk. The principle that you can't test what you haven't identified applies across engagement types, whether internal audit (governed by IIA Standards), financial statement audits (AICPA/PCAOB), or SOC 2 engagements (AICPA), though each follows its own professional standards for how risk identification is performed.
Treating risk identification as a check-the-box exercise carries real consequences. A recent PCAOB inspection of RSM US LLP found a 35% Part I.A deficiency rate, meaning insufficient evidence on more than a third of inspected engagements. A separate PCAOB action assessed $3.375 million in penalties across nine KPMG network firms for quality control violations that included risk assessment elements. For smaller practices, the stakes scale differently but hit harder: threatened registration, individual bars, and reputational damage that's difficult to recover from.
The pattern is consistent. Planning files that don't explain scoping decisions and procedures that don't trace back to identified risks are among the most common deficiencies. If your risk identification doesn't clearly drive your audit plan, it will eventually show up in review.
On the financial audit side, SAS No. 145 tightened expectations for identifying and assessing risks of material misstatement. The PCAOB’s inspection priorities emphasize recurring deficiency areas, so firms with past risk assessment findings should expect continued attention.
You identify risks at multiple levels, and skipping any of them tends to create gaps in your engagement plan. Your annual view of risk sets priorities, but your engagement-level work is where scope gets sharp enough to defend.
The IIA’s Global Internal Audit Standards (often called “the Standards”) require the CAE to build a risk-based internal audit plan from at least an annual assessment of the organization’s strategies, objectives, and risks, then get senior management and board approval. You use this assessment to prioritize engagements by risk exposure and identify the resources your plan requires. When risks, operations, or controls change, the plan should change with them.
The Standards require a preliminary risk assessment for each engagement, carrying forward a long-standing expectation from the former IPPF. That means reviewing prior reports, confirming what’s changed, and talking with stakeholders before you finalize scope. Your engagement-level work should deepen the annual assessment, not restart it. Organizational risks set context; engagement planning turns that context into testable risks.
Fraud risk assessment should be integrated early in engagement planning, not treated as a late add-on. The IIA practice guide describes an approach within engagement planning: gather context about the engagement's purpose and operating environment, brainstorm fraud scenarios specific to the area you're auditing, then assess and prioritize which scenarios warrant deeper evaluation in fieldwork. Teams that bolt fraud risk onto the end of planning consistently struggle to defend those decisions in review.
The “best” technique is usually the one that fits your constraints: complexity, uncertainty, available time, and how much quantitative output you need.
Mind mapping is one of the most effective ways to surface risk relationships that linear lists miss, especially when you work through them with process owners who see the work every day. You start with a central risk theme or process area and branch outward into sources, events, causes, impacts, and existing controls. That visual structure forces connections between risks that often stay siloed in traditional risk registers.
If you need expert judgment without group-dynamic bias, the Delphi technique can be a strong complement. It's especially useful when the risk is emerging and the expertise you need is spread across teams.
A risk matrix or heat map is often how you turn a long list of “possible risks” into something you can scope and staff. A practical workflow that aligns to common IIA-style approaches is to start with example risks and calibrate them with process owners, then hold validation meetings to agree on impact and likelihood scoring. As you score, you also capture the key controls tied to each risk and document how those controls change your view of residual risk and control importance.
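The scoring step above can be sketched in a few lines. This is an illustrative example, not an IIA-prescribed method: the 1-to-5 scales, the multiplicative score, and the band thresholds are all hypothetical choices you would calibrate with process owners in validation meetings.

```python
def rate(impact: int, likelihood: int) -> str:
    """Bucket 1-5 impact and 1-5 likelihood scores into a heat-map band."""
    score = impact * likelihood  # simple multiplicative scale, 1-25
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Scores as agreed with process owners in validation meetings (hypothetical).
risks = [
    {"risk": "vendor master changes unreviewed", "impact": 4, "likelihood": 4},
    {"risk": "manual journal entries at period end", "impact": 5, "likelihood": 2},
    {"risk": "stale user access reviews", "impact": 2, "likelihood": 3},
]

for r in risks:
    r["rating"] = rate(r["impact"], r["likelihood"])
```

The point of codifying the scale, even in a spreadsheet formula, is consistency: every engagement rates "impact 4, likelihood 4" the same way, which is what makes the resulting heat map defensible in review.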
If you’re prioritizing competing risks and need to explain tradeoffs, business impact analysis can supplement heat map work by spelling out the downstream consequences of risk events.
Scenario analysis explores what-if situations and potential impacts, which makes it a good fit for emerging risks or areas with limited historical data. The main discipline here is forcing a current-year view instead of letting last year’s approach quietly set this year’s risk assumptions, a point echoed in the Journal of Accountancy.
Data analytics takes this further by letting you analyze entire populations rather than just samples, using pattern recognition to identify exceptions and anomalies. When you pair scenario thinking with analytics, you often get a stronger fraud risk picture because traditional sampling can miss low-frequency, high-impact events.
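As a minimal sketch of full-population testing, the snippet below flags outliers in a transaction population using an interquartile-range test. A real engagement would use richer features (counterparty, timing, approver patterns); the data and the 1.5×IQR threshold here are illustrative assumptions only.

```python
import statistics

def iqr_outliers(amounts, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR] for the whole population."""
    q1, _, q3 = statistics.quantiles(amounts, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [a for a in amounts if a < lo or a > hi]

# Hypothetical population: every entry is tested, not a sample.
flagged = iqr_outliers([10, 11, 11, 12, 12, 13, 500])
```

Because the test runs over every record, a single low-frequency, high-impact entry surfaces even when a random sample would likely miss it.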
The Standards raised the bar on documentation, and planning files are part of that. If your risk register shows ratings but not the basis for those ratings, reviewers will notice.
Effective risk documentation should meet established quality criteria: it should be relevant, reliable, and sufficient to support the conclusions drawn.
A useful benchmark: could another competent professional understand the work performed and the conclusions reached? That means your risk documentation needs to capture the "why," not just the score.
Centralizing engagement documentation makes it easier to apply these criteria consistently across teams and engagements. To keep quality from drifting, build in structured review at multiple levels. Otherwise, even a well-designed template becomes inconsistent over time.
The IIA's implementation guides illustrate how review responsibilities can be structured: supervisors check for accuracy, relevance, and completeness, while the CAE approves documentation on higher-risk work. Firms should align current review practices with the Standards and any updated guidance. One common trap is over-documenting processes while under-documenting risk judgments. Pressure-test your planning files with three questions: Is it relevant? Is it reliable? Is it sufficient to support your conclusions?
Your risk identification process needs its own quality checks, not just a sign-off on final conclusions. If you're the CAE, the IIA's Quality Assurance and Improvement Program (QAIP) guidance expects you to maintain a program that covers the internal audit activity end to end, including how you plan engagements and identify risks. That includes ongoing monitoring plus periodic self-assessments.
You also need an independent check on a longer cycle. To claim conformance with IIA Standards, an internal audit function must have an external quality assessment conducted by a qualified, independent assessor at least once every five years from the date of adopting the Standards. These reviews assess conformance across the board, including the engagement planning standards that govern risk identification.
A risk register that doesn’t drive your audit plan is just documentation. The translation from identified risks to planned work happens through prioritization. You assess impact and likelihood in the uncontrolled state (inherent risk), factor in existing controls, and decide where you get the most audit value for your hours. Audit frequency should track risk factors and change signals, not a fixed rotation. If a process was stable last year but had major personnel turnover or a system migration this year, you’ll usually want a fresh look.
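The inherent-to-residual translation can be sketched as follows. The control-strength adjustment factors and the scored areas are hypothetical, not drawn from the IIA Standards; the structure is what matters: score inherent risk, discount for controls, rank, and let the ranking drive hours.

```python
# Assumed adjustment scale: stronger controls reduce inherent risk more.
CONTROL_FACTOR = {"strong": 0.5, "adequate": 0.75, "weak": 1.0}

def residual(inherent: float, controls: str) -> float:
    """Adjust an inherent risk score for the strength of existing controls."""
    return inherent * CONTROL_FACTOR[controls]

# Hypothetical auditable areas with inherent scores from the heat-map exercise.
areas = [
    {"area": "procure-to-pay", "inherent": 20, "controls": "adequate"},
    {"area": "payroll",        "inherent": 12, "controls": "strong"},
    {"area": "revenue",        "inherent": 16, "controls": "weak"},
]

# Highest residual risk gets audit hours first.
plan = sorted(areas, key=lambda a: residual(a["inherent"], a["controls"]),
              reverse=True)
```

Note how the ranking can differ from the inherent-risk ranking: a high-inherent area with strong controls may drop below a mid-inherent area with weak ones, which is exactly the judgment a reviewer will expect your planning file to document.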
Keeping the plan current matters as much as building it. Your risk profile shifts throughout the year, and your plan should shift with it. Define measurable risk indicators that are sensitive to change, then use them to redirect resources. When an indicator spikes, you adjust scope or add work; when indicators stabilize, you can reallocate effort elsewhere.
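One way to make "indicator spikes trigger plan changes" concrete is a simple threshold check. The indicator names and threshold values below are hypothetical; the mechanic is just comparing current readings against agreed limits and surfacing the breaches for scope review.

```python
# Assumed key risk indicators and their agreed thresholds (hypothetical).
THRESHOLDS = {
    "staff_turnover_pct": 15.0,
    "system_changes": 5,
    "open_findings": 10,
}

def breached(indicators: dict) -> list:
    """Return the indicators whose current value exceeds its threshold."""
    return [name for name, value in indicators.items()
            if value > THRESHOLDS.get(name, float("inf"))]

# Current-period readings (hypothetical).
current = {"staff_turnover_pct": 22.0, "system_changes": 3, "open_findings": 12}
breaches = breached(current)  # these areas warrant a scope or frequency review
```

The value of even this trivial mechanism is that plan changes become traceable: the file shows which indicator moved, by how much, and what the plan did in response.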
One requirement that’s easy to underplay is communicating coverage limits. If resource constraints mean you can’t cover a high-risk area, you need to make that tradeoff visible to senior management and the board.
The techniques in this article work best when your platform supports them rather than slowing you down. Fieldguide is built for audit and advisory firms, designed by practitioners for practitioners.
For financial audit teams, the platform's AI reviews documents related to the account being evaluated, providing movement analysis and suggesting areas to investigate further. AI Chat provides context-aware assistance at the document, request, or workspace level when you need to interpret evidence, clarify requirements, or pressure-test risk hypotheses. All AI outputs require practitioner review and approval before they support engagement conclusions.
Maxwell Locke & Ritter grew their Risk & Compliance practice 5x in six months using Fieldguide. Book a demo to see how Fieldguide helps audit and advisory firms strengthen documentation quality while reducing manual effort.