Key Insights: AI can address audit capacity constraints by executing procedural work: document extraction, evidence matching, and risk-based sample analysis, all within practitioner-defined parameters. Some firms report 21% higher billable hours per practitioner after reducing manual tasks, though results vary by implementation. Professional standards permit AI use without explicit AI-specific guidance, requiring practitioners to apply general principles around evidence sufficiency, professional skepticism, and documentation to AI-assisted procedures.


Audit engagements often generate substantial procedural work that doesn't require professional judgment but still consumes significant practitioner time. PwC research found that document-related activities represent the most time-consuming element of audits, with documentation and supporting materials consuming up to 71% of audit preparation time.

Associates dedicate hours to extracting data from PDFs, matching evidence to control requirements, and reviewing transaction populations, yet traditional audit software stores and organizes outputs without executing the work itself.

AI can shift this dynamic by executing procedural steps within practitioner-defined parameters. Firms consistently report measurable capacity gains when AI handles data transfer and document review tasks, though results vary by implementation. This article examines AI use cases showing strong potential for returns, data quality foundations that influence success, and how AICPA and PCAOB standards apply to AI-assisted procedures.

What Is Agentic AI in Engagement Automation?

Agentic AI handles routine audit procedures, including data extraction, evidence matching, and sample-based analysis, within parameters practitioners define at engagement start. Unlike traditional audit software that only stores and organizes workpapers, engagement automation platforms embed purpose-built AI into specific workflow steps while maintaining a unified system of record. When you map client-provided bank statements to revenue testing procedures, AI can extract transaction data and flag items for practitioner review based on your testing criteria.

AI can compress those tasks from days to hours, creating the capacity expansion firms need without adding headcount.

The distinction between assistance and automation matters here. When AI assists with sampling, it analyzes population data you provide and identifies high-risk transactions based on your defined risk criteria, but you retain control over methodology and final selection. Evidence extraction works differently: once you map documents to specific requirements, AI reads those documents and pulls relevant data into structured formats without requiring manual transcription. Practitioners review all outputs and apply professional judgment to reach conclusions.

Why Are Audit and Advisory Firms Adopting AI Now?

AI adoption across client organizations has accelerated rapidly: 58% of finance functions now use AI, with similar trends emerging in IT, operations, and compliance functions that audit and advisory firms regularly assess.

This shift creates both opportunity and pressure for practitioners. On one hand, AI-native platforms can differentiate your practice in competitive RFPs. On the other, clients increasingly expect the efficiency and responsiveness that AI-powered workflows deliver, and may question why their auditors aren't using the same tools they've adopted internally.

What Are the Top AI Use Cases in Auditing?

The following use cases represent high-ROI applications of AI in audit workflows: engagement automation, document review, population analysis, fraud detection, and continuous monitoring. Each addresses a specific capacity constraint audit teams face by combining high procedural volume with clear parameters for oversight.

Engagement Automation Across the Complete Lifecycle

Modern engagement automation platforms can provide a single system of record for the full audit lifecycle from planning through reporting. Rather than coordinating across separate tools for document requests, testing procedures, review notes, and report generation, teams work within a unified platform where information stays centralized and practitioners review and approve work at each stage.

This can reduce the version control challenges that affect distributed teams. Managers get portfolio-wide visibility into engagement status without chasing email updates, while partners see real-time dashboards showing completion rates, outstanding requests, and resource allocation across their book of business.

Fieldguide, for example, embeds purpose-built AI into specific workflow steps within this unified platform:

  • AI Audit Testing Agent: Extracts defined data fields from supporting documents and writes results directly into Sample Sheets with direct source references, helping teams populate sample testing data more consistently within assessor-defined parameters.
  • Testing Agent: Automates end-to-end controls testing for Risk Advisory engagements, generating test plans, mapping evidence, executing tests, and documenting results with exception flagging.
  • Request Agent: Analyzes evidence uploaded to engagement requests to assess relevance, audit-period currency, and alignment to selected samples, reducing back-and-forth during the request process.
  • AI Actions: Executes prompt-driven content generation within sheet columns on a row-by-row basis, supporting drafting, analysis, and standardized documentation directly within workpapers.
  • AI Chat: Provides contextual answers, analysis, and guidance at the document, sheet, and workspace level, scoped to the specific engagement surface where it's invoked.

Warren Averett reported 25% higher realization rates attributed to streamlined reporting and real-time collaboration features. The platform supports complete audit methodologies including trial balance import, adjusting journal entry tracking, and risk-based sampling assistance where practitioners determine methodology and AI analyzes population data you provide.

Document Review and Evidence Analysis

AI can transform one of the most time-intensive audit procedures: reviewing client-provided documentation to extract relevant evidence. Traditional approaches require associates to manually read contracts, invoices, bank statements, and supporting schedules, then copy relevant data into testing templates.

Machine learning has demonstrated time savings of 20% to 90% in contract document review on engagements involving more than 100,000 documents, with results varying based on document complexity, AI training quality, and the baseline manual process. Time savings for contract and legal document analysis tend to scale with document volume: the larger the population, the greater the potential efficiency gain from automated extraction versus manual review.

The workflow starts with practitioners mapping documents to specific testing requirements. Once you've identified which bank statements support revenue cutoff testing, AI extracts transaction dates, amounts, and counterparty information into structured formats your team validates. Practitioners retain the determination of what constitutes sufficient appropriate evidence while eliminating hours of manual data transfer.
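As a concrete illustration of the kind of structured output an extraction step might hand back for validation, the sketch below models one extracted bank-statement row plus the period and amount checks a reviewer could apply. The field names, `ExtractedTransaction` class, and specific checks are assumptions for illustration, not a description of any particular platform's schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExtractedTransaction:
    """One structured row an extraction step might produce from a bank statement."""
    txn_date: date
    amount: float
    counterparty: str
    source_page: int  # reference back to the source document for reviewer validation

def validate_row(row: ExtractedTransaction,
                 period_start: date, period_end: date) -> list[str]:
    """Checks a reviewer might apply to each extracted row before relying on it."""
    issues = []
    if not (period_start <= row.txn_date <= period_end):
        issues.append("date outside audit period")
    if row.amount <= 0:
        issues.append("non-positive amount")
    return issues
```

Keeping a `source_page` reference on every row preserves the trail back to the underlying evidence, so the practitioner, not the tool, makes the final sufficiency call.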

Full-Population Analysis as a Market Trend

Statistical sampling exists because manually reviewing every transaction in a population is impractical. PCAOB AS 2315 defines sampling risk as "the risk that the auditor's conclusion based on a sample may be different from the conclusion if the test were applied to all items in the population."

AI is beginning to change this calculus for certain engagement types. Some analytics tools now allow practitioners to review entire populations and flag anomalies requiring detailed examination, rather than relying solely on statistical sampling. Instead of selecting 25 revenue transactions from 10,000 based on sampling methodology, practitioners can analyze all 10,000 and focus attention on those that exhibit unusual patterns.

For certain engagement types, particularly those with large transaction volumes, some firms are exploring risk-based selection through AI-assisted review to identify the riskiest items for sample inclusion. This approach can supplement traditional statistical sampling formulas, shifting some audit focus from sample design to risk criteria definition. However, full-population testing remains an emerging capability, and most current AI tools focus on supporting risk-based sample selection rather than replacing sampling entirely.
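The shift described above, from sample design toward risk criteria definition, can be sketched as scoring every item in the population against practitioner-defined criteria and selecting the riskiest for detailed testing. The criteria, weights, and field names below are hypothetical illustrations; real engagements would define these at planning.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    has_approval: bool

def risk_score(txn: Transaction, materiality: float) -> float:
    """Score one transaction against illustrative, practitioner-defined criteria."""
    score = 0.0
    if txn.amount >= materiality:
        score += 2.0  # large relative to materiality
    if not txn.has_approval:
        score += 3.0  # missing required approval
    if txn.amount % 100 == 0:
        score += 1.0  # round-dollar amount
    return score

def select_sample(population: list[Transaction],
                  materiality: float, sample_size: int) -> list[Transaction]:
    """Rank the full population by risk and return the riskiest items."""
    ranked = sorted(population, key=lambda t: risk_score(t, materiality),
                    reverse=True)
    return ranked[:sample_size]
```

The point of the sketch is the division of labor: the practitioner defines `risk_score`, and the tool merely applies it to every item rather than a statistical subset.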

Fraud Detection and Anomaly Detection in Journal Entries

Journal entry testing represents one of the most judgment-intensive audit procedures. PCAOB AS 2401 requires auditors to design procedures to test the appropriateness of journal entries, yet manually reviewing thousands of entries to identify fraud indicators proves impractical on most engagements.

AI-powered anomaly detection tends to be more reliable when based on proper parameters set by auditors. The key is defining what constitutes unusual activity for each engagement: this might include entries posted outside normal business hours, transactions lacking required approvals, unusual account combinations that don't fit typical posting patterns, or round-dollar amounts appearing in contexts where precision would be expected. AI then flags entries meeting those criteria across the complete journal entry population.
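The criteria listed above amount to a rule set applied across the full journal. A minimal sketch, assuming hypothetical field names (`posted_at`, `approved_by`, `amount`) and illustrative thresholds, might look like this:

```python
from datetime import datetime, time

BUSINESS_HOURS = (time(8, 0), time(18, 0))  # illustrative engagement parameter

def flag_entry(entry: dict) -> list[str]:
    """Return every practitioner-defined criterion this entry trips."""
    flags = []
    posted: datetime = entry["posted_at"]
    if not (BUSINESS_HOURS[0] <= posted.time() <= BUSINESS_HOURS[1]):
        flags.append("posted outside business hours")
    if not entry.get("approved_by"):
        flags.append("missing required approval")
    if entry["amount"] >= 1_000 and entry["amount"] % 1_000 == 0:
        flags.append("round-dollar amount")
    return flags

def screen_population(entries: list[dict]) -> dict[str, list[str]]:
    """Apply the criteria to the entire journal, not just a sample."""
    return {e["entry_id"]: f for e in entries if (f := flag_entry(e))}
```

Because the criteria are explicit code rather than an opaque model, the workpapers can document exactly what "unusual" meant on this engagement.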

The fraud detection value comes from comprehensive coverage. Manual procedures might review high-dollar journal entries and a sample of routine entries. AI-assisted procedures can analyze the entire population of entries against your defined risk parameters. This comprehensive approach can significantly reduce the risk of missing fraud indicators buried in high-volume transaction populations.

Continuous Controls Monitoring for Risk Advisory

Traditional audit procedures test controls at a point in time. The industry is moving toward continuous monitoring approaches where controls can be assessed on an ongoing basis throughout the period. The IIA distinguishes continuous monitoring (management-driven) from continuous auditing (audit function-driven assessment of risks and controls).

For SOC 2 and compliance engagements, the concept involves embedding rules that surface control exceptions (such as unauthorized access attempts) closer to when they occur rather than discovering issues during quarterly testing. Clients often value this ongoing visibility into control effectiveness rather than waiting months for formal reports documenting issues that may have existed for extended periods. As AI capabilities mature, practitioners can expect more robust continuous monitoring options to emerge.

How Can You Ensure Data Quality for AI Success?

AI outputs are only as reliable as the data inputs, which means firms should validate data completeness and standardize formats across clients before deploying AI tools on engagements.

Start by confirming that record counts match general ledger totals and that required fields contain complete information. Missing data (blank dates, null amounts, incomplete descriptions) degrades AI accuracy and creates exceptions requiring manual follow-up. Format inconsistencies matter too: if one client provides dates as MM/DD/YYYY and another uses DD-MM-YYYY, establish conversion protocols before analysis begins.
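The completeness and format checks above can be sketched as a pre-flight validation step. The field names, tolerance, and function signatures are assumptions for illustration; a real protocol would fix the client's date format during planning, since strings like "03/04/2024" are ambiguous without it.

```python
from datetime import datetime

REQUIRED_FIELDS = ("txn_date", "amount", "description")

def validate_population(records: list[dict],
                        gl_total: float, gl_count: int) -> list[str]:
    """Completeness checks to run before handing a population to an AI tool."""
    issues = []
    if len(records) != gl_count:
        issues.append(f"record count {len(records)} does not match GL count {gl_count}")
    total = sum(r["amount"] for r in records if r.get("amount") is not None)
    if abs(total - gl_total) > 0.01:
        issues.append("population total does not tie to the general ledger")
    for i, r in enumerate(records, start=1):
        for field in REQUIRED_FIELDS:
            if r.get(field) in (None, ""):
                issues.append(f"record {i}: missing {field}")
    return issues

def normalize_date(raw: str, client_format: str) -> str:
    """Convert a client's agreed date format (e.g. '%d-%m-%Y') to ISO 8601."""
    return datetime.strptime(raw, client_format).date().isoformat()
```

Running checks like these first turns data problems into a clean exception list for client follow-up instead of silent AI misreads downstream.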

Documentation requirements don't diminish with efficiency gains. When AI flags transactions for review, your workpapers must document the criteria applied, the population analyzed, and the evaluation of flagged items, applying the same standards as any other audit procedure.

What Does AICPA and PCAOB Guidance Say About AI in Audits?

Professional standards permit AI use but provide limited prescriptive guidance. The AICPA has not issued authoritative standards specifically addressing AI in financial statement audits; their only AI-specific guidance, responsible AI guidelines for Forensic and Valuation Services, is explicitly non-authoritative.

The PCAOB adopted amendments to AS 1105 (Audit Evidence) and AS 2301 (Responses to Risks) in June 2024, effective for audits of fiscal years beginning on or after December 15, 2025. These amendments establish technology-neutral requirements that apply broadly to technology-assisted audit procedures without mandating AI-specific protocols.

Practitioners apply general principles around evidence sufficiency, professional skepticism, and documentation to AI-assisted work. PCAOB staff have indicated that existing standards support firms using technology-based tools in ways that can enhance audit quality.

How Can You Maintain Professional Skepticism With AI?

PCAOB AS 1015 requires auditors to maintain a questioning mind and critically assess audit evidence, a standard that applies equally to automated procedures.

The primary risk is automation bias: over-reliance on AI outputs without critical evaluation. When AI flags 12 transactions out of 10,000 as unusual, the tendency is to assume the other 9,988 are acceptable. Maintaining skepticism requires questioning both what AI identifies as unusual and what it classifies as routine. Practical safeguards include having team members independently review samples of AI-classified transactions and periodically verifying that classifications meet your defined criteria.

Expand Your Firm's Capacity With Purpose-Built AI

Firms looking to expand capacity without adding headcount are finding traction with Field Agents in specific workflows: sample-based data extraction, evidence validation, and request management. Fieldguide embeds purpose-built AI into key workflow steps within an end-to-end engagement platform built for audit and advisory firms.

The AI Audit Testing Agent handles sample-sheet extraction with direct source references, while the Request Agent validates evidence relevance and period alignment, and AI Chat provides contextual guidance scoped to your specific engagement. The platform maintains ISO 42001 certification for AI management systems and is deployed across nearly 40 of the top 100 CPA firms. Request a demo to see how Fieldguide's AI capabilities apply to your engagement workflows.

Amanda Waldmann

Increasing trust with AI for audit and advisory firms.
