Key Insights:

  • Operational risk comes from four sources: process breakdowns, people failures, system failures, and external events.
  • No single framework covers operational risk end-to-end. COSO handles governance, ISO 31000 handles process, and the IIA standards handle assurance over both.
  • Documentation-heavy work eats most of the engagement hours in ORM. AI takes that load off practitioners so the time goes to risk analysis and judgment instead.

Operational risk management (ORM) sits underneath nearly every engagement an audit or advisory firm runs. When the structure is solid, your team focuses on the right risks. When it isn't, scope drifts, frameworks overlap, and audit conclusions miss what the audit committee actually wants to know. This article covers what ORM is, how the major frameworks fit together, and how risk advisory and internal audit teams use it in practice.

Defining Operational Risk Management

Operational risk is the day-to-day risk that a business runs by virtue of operating: the risk that a process fails, a person makes a mistake, a system goes down, or something external disrupts the work. It is distinct from strategic risk (which is about the choices the business makes) and financial risk (which is about market and credit exposure), and it is the category that audit and advisory firms spend the most time on.

The standard way to break operational risk down is into four sources: process breakdowns, people failures, system failures, and external events. ORM is the practice of identifying those failure points and managing them before they cause loss.

For practitioners at audit and advisory firms, ORM means evaluating where the client's failure points actually sit and whether existing controls address them at the right level of the process.

Core Components of an Operational Risk Management Program

No single framework covers operational risk end-to-end. The practical move on an ORM engagement is matching the right framework to the right phase of the work, then knowing where each one's coverage ends.

COSO ERM: Governance and Strategy Structure

When you're evaluating board oversight and strategy alignment, COSO ERM gives you a clear structure. It organizes enterprise risk management around five components:

  • Governance and Culture
  • Strategy and Objective-Setting
  • Performance
  • Review and Revision
  • Information, Communication, and Reporting

For SOX engagements, the COSO Internal Control-Integrated Framework remains your anchor for assessing internal controls over financial reporting.

ISO 31000: The Process Model

When you need a practical process model, ISO 31000:2018 gives you principles and guidelines that work across industries and organization sizes. Because it is principles-based rather than a certifiable management system standard, you can use it as a process reference instead of a compliance target, which makes it well suited to embedding risk management into day-to-day business activity.

Connecting the Frameworks

You can think of the frameworks as playing different roles in the same engagement. COSO gives you governance structure. ISO 31000 gives you process discipline. The IIA standards define how you provide assurance over both. Most firms get better results by using each framework for the role it plays best.

When an engagement touches IT risk or cybersecurity, it's worth knowing that NIST SP 800-221 maps aspects of enterprise technology risk management to ISO 31000 and COSO concepts. That makes it a useful cross-reference rather than a separate framework to choose among.

The Operational Risk Management Process: From Identification to Monitoring

The frameworks above tell you what good operational risk management looks like. The ORM process is how it actually gets done on an engagement. Most ORM processes follow the same arc: identify the risks the client is exposed to, assess how serious each one is, decide how to respond, and monitor whether the response is working.

Risk Identification and Assessment

The process typically starts with mapping the client's key processes, objectives, failure points, and external dependencies. It works best when iterative and organization-wide. Without context around risk appetite and system boundaries, assessments lack the specificity that makes them actionable. From there, teams assess likelihood and impact through a mix of qualitative judgment and quantitative data where available.
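The likelihood-and-impact step above often reduces to a simple qualitative scoring matrix. The sketch below illustrates the idea; the five-point scales, score thresholds, and priority bands are assumptions for illustration, not a standard any framework prescribes:

```python
# Minimal sketch of a qualitative risk-scoring step. Scales, thresholds,
# and band labels are illustrative assumptions, not a prescribed methodology.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def score_risk(likelihood: str, impact: str) -> tuple[int, str]:
    """Combine likelihood and impact ratings into a score and a priority band."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 15:
        band = "high"      # escalate to engagement leadership
    elif score >= 8:
        band = "medium"    # candidate for targeted control testing
    else:
        band = "low"       # monitor via KRIs
    return score, band

# Hypothetical risks mapped to (likelihood, impact) ratings
risks = {
    "vendor data breach": ("possible", "major"),
    "manual journal entry error": ("likely", "minor"),
}
for name, (lik, imp) in risks.items():
    score, band = score_risk(lik, imp)
    print(f"{name}: score={score}, band={band}")
```

In practice the numeric score matters less than the conversation it forces about where each risk sits relative to the client's stated risk appetite.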

Risk Response and Control Activities

Once risks are assessed, management usually decides to accept, avoid, mitigate, or transfer the exposure based on risk appetite, control cost, and operational constraints. The practitioner's role is to evaluate whether that response makes sense and whether the controls support it in practice.

Teams typically test those decisions through four approaches:

  • RCSAs: Capture process-level views of risk and control design through structured self-assessments.
  • KRIs: Track changes in exposure or control conditions over time.
  • Scenario analysis: Explore plausible disruptions and their downstream effects.
  • Targeted control testing: Confirm whether controls are designed and operating effectively.

The mix depends on engagement scope and client maturity.
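Of the four approaches, KRIs are the most mechanical: each indicator is essentially a tracked metric compared against a threshold. A minimal sketch, where the metric names, thresholds, and breach logic are all illustrative assumptions:

```python
# Minimal KRI threshold-check sketch. Metric names, threshold values, and
# the simple "greater than" breach rule are illustrative assumptions.

KRI_THRESHOLDS = {
    "failed_login_rate_pct": 5.0,    # alert if more than 5% of logins fail
    "open_vulns_over_30_days": 10,   # alert if more than 10 stale vulnerabilities
    "vendor_reviews_overdue": 0,     # alert on any overdue vendor review
}

def breached_kris(observations: dict) -> list[str]:
    """Return the KRIs whose observed value exceeds its threshold."""
    return [
        name for name, threshold in KRI_THRESHOLDS.items()
        if observations.get(name, 0) > threshold
    ]

obs = {
    "failed_login_rate_pct": 7.2,
    "open_vulns_over_30_days": 4,
    "vendor_reviews_overdue": 2,
}
print(breached_kris(obs))  # two of the three indicators breach their thresholds
```

The value of a KRI program is in trend direction over time, not any single reading, which is why the monitoring section below matters as much as the indicators themselves.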

Monitoring and Reporting

In the NIST Risk Management Framework, the Monitor step focuses on maintaining ongoing situational awareness of security and privacy risks, including tracking controls and environmental changes that may affect their effectiveness. In practice, that means looking at whether the client has ongoing monitoring mechanisms rather than only periodic assessments. It also means considering whether those mechanisms feed reporting to the right decision-makers. A control environment that looks solid during annual testing but lacks continuous monitoring is worth highlighting in your findings.

How Risk Advisory Services Use Operational Risk Management in Client Engagements

Risk advisory firms don't usually run pure ORM engagements. The work shows up as something else: a SOC 2 audit, a HITRUST certification, a vendor program review, a SOX third-party assessment. ORM is the structure underneath those engagements that determines whether you're testing the right things in the right order.

When that structure is missing, clients feel it through duplicated requests, overlapping control reviews, and repeated testing across assurance teams. When it's there, the same body of work supports multiple framework deliverables, and the team spends its time on analysis rather than reconciliation. This section covers how that plays out across the engagement types where ORM thinking changes the most.

SOC 2 and HITRUST Engagements

The same SOC 2 engagement looks different depending on where you start. Controls-first engagements list and test what's already in place; ORM-first engagements identify what could break, then evaluate whether existing controls actually address those risks. The second approach maps directly to the AICPA's Trust Services Criteria, which require service organizations to demonstrate controls over real risks to security, availability, processing integrity, confidentiality, and privacy.

HITRUST takes the same approach further. Its CSF harmonizes over 60 regulations and standards and tailors control requirements based on the organization's risk profile, system characteristics, and regulatory factors. Done well, a HITRUST engagement is an ORM exercise that produces a certification, not a checklist that produces documentation.

Cross-Framework Integration

Most clients face overlapping framework obligations: SOC 2 for service customers, HITRUST for healthcare partners, PCI DSS for payment processing, ISO 27001 for international markets. ORM is what makes "test once, report many" possible. Without that structure, every framework becomes a separate engagement.

The integration usually organizes around shared control themes: access management, logging, vulnerability management, third-party oversight. A single set of access controls evaluated once can support testing across all four frameworks, with framework-specific reporting layered on top.
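The "test once, report many" idea can be sketched as a control-to-framework map. The control names below are hypothetical examples and the framework assignments are not an authoritative crosswalk; the point is the data structure, not the mapping itself:

```python
# Illustrative control-to-framework map for "test once, report many".
# Control names and framework assignments are hypothetical, not an
# official crosswalk between these standards.

CONTROL_MAP = {
    "quarterly access review": ["SOC 2", "HITRUST", "PCI DSS", "ISO 27001"],
    "centralized audit logging": ["SOC 2", "PCI DSS", "ISO 27001"],
    "monthly vulnerability scans": ["PCI DSS", "ISO 27001"],
}

def frameworks_covered(control: str) -> list[str]:
    """Every framework deliverable a single test of this control can support."""
    return CONTROL_MAP.get(control, [])

def controls_for(framework: str) -> list[str]:
    """Invert the map: which tested controls feed this framework's report."""
    return [name for name, fws in CONTROL_MAP.items() if framework in fws]

print(frameworks_covered("quarterly access review"))
print(controls_for("PCI DSS"))
```

Testing the access review once and reusing the evidence across all four deliverables is what eliminates the duplicated requests clients otherwise feel.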

Third-party risk is another natural integration point. Vendor management auditing benefits from reviewing available independent assurance reports such as SOC 1 and SOC 2, along with ISACA guidance on vendor program controls. When you structure vendor risk within ORM, it connects to your SOC, cybersecurity, and financial control engagements rather than sitting in its own silo.

How Technology Supports Operational Risk Management

Even with cross-framework integration, documentation work still consumes a disproportionate share of ORM engagement time, leaving teams less capacity for analysis, client communication, and judgment-intensive work.

AI is changing the landscape. EY documented one engagement where AI cut risk assessment task time from 50 hours to 6 hours, an 88% reduction. Adoption is uneven, though. The IIA Pulse report found that about 41% of internal audit functions use generative AI, but most usage is infrequent and concentrated in narrow tasks. Most firms sit at Level 0 or 1 of the AI Maturity Framework, using general-purpose AI for isolated tasks rather than embedding it in the engagement workflow itself.

How Fieldguide Supports ORM Engagements

Fieldguide is an AI-native platform built for audit and advisory firms. Its engagement AI comes in two layers built to work together. AI Assist gives practitioners on-demand AI for the tasks they want to drive themselves: AI Chat for document-grounded answers with cited references, and AI Actions for column-level content generation. The Agent Workforce takes on the multi-step procedural work end-to-end: Field Agents like Field Auditor execute full workflows while practitioners review the output and apply judgment. For document-heavy ORM work, the two layers cover both ends of the engagement.

Two capabilities matter most:

  • Testing Agent matches evidence to samples, validates data, identifies exceptions, and produces reviewable documentation for SOC 2, PCI DSS, and HITRUST control testing. It executes up to 70% of the work, with all outputs subject to practitioner review and approval.
  • AI Actions generates outputs across an entire column in one click within workpaper sheets, using multi-step reasoning with automatic context from documents and linked rows. It's useful for tasks like drafting walkthrough narratives or suggested test procedures. All outputs require practitioner review, revision, and approval before use in workpapers or reports.

The BerryDunn case study illustrates what this looks like in practice: the firm reported 30–50% efficiency gains and doubled its engagement capacity after adopting the platform. For ORM engagements that lean heavily on documentation, that capacity goes to the work that earns the fee: risk analysis, client communication, and the judgment AI can't make.

Build a Stronger ORM Practice with Fieldguide

Operational risk management sits at the center of every risk advisory engagement your firm delivers. Fieldguide is an end-to-end AI-native platform purpose-built for audit and advisory firms, with the Agent Workforce, methodology depth, and audit-grade rigor needed to manage engagements from initial scoping through final reporting.

Practitioners stay accountable for scoping, testing, and forming conclusions while the Agent Workforce handles execution. Request a demo to see how it works.

Amanda Waldmann

Increasing trust with AI for audit and advisory firms.
