Resource Articles

AI in Audit Preparation and Reporting

Written by Amanda Waldmann | May 8, 2026 5:15:16 PM

Key Insights: Audit activities remain heavily manual, consuming engagement time on repetitive tasks: request tracking, evidence extraction, workpaper-to-report transfers. Traditional automation can't adapt when scope changes or data arrives in unexpected formats. AI-assisted workflows handle procedural work within defined parameters, freeing auditors to focus on risk assessment, control evaluation, and client advisory where judgment matters.

Why Audit and Advisory Firms Are Adopting AI

The accounting profession faces a capacity challenge. Data shows that accounting graduates in the U.S. have declined for several years, with total bachelor's and master's degrees falling 6.6% in the 2023–24 academic year alone. Partners managing client relationships find it harder to staff engagements, forcing firms to decline profitable work while competitors with capacity capture market share.

AI adoption addresses this gap directly. For example, engagement automation customers report spending 66% less time drafting test procedures, and BerryDunn more than doubled their engagement capacity through AI-driven automation. Beyond efficiency, AI enables consistent application of firm methodologies across engagements, reducing manual errors while freeing staff to focus on risk assessment and client advisory.

This article covers what AI means in an audit context, how it transforms key stages of audit preparation, real-world outcomes from firms already implementing these tools, and a phased approach for responsible adoption that maintains audit quality and compliance.

What Is AI in Auditing?

AI touches many aspects of audit work, from transaction-level anomaly detection and predictive analytics to natural language processing for contract review. One increasingly adopted category is engagement automation: AI capabilities that streamline the operational workflow of audit preparation and reporting.

Within engagement automation, AI can assist practitioners with individual tasks like drafting procedures, analyzing evidence, and summarizing findings. Agentic AI goes further, with the capability to execute multi-step workflows under assessor-defined scope: controls testing, evidence validation, and sample-based data extraction. This isn't ChatGPT writing audit opinions; it's purpose-built agentic AI trained on audit methodologies, controls frameworks, and compliance standards.

The distinction matters for understanding practical implementation. AI copilots accelerate discrete tasks: drafting PBC requests, extracting field values from PDFs, summarizing document content.

Agentic AI handles orchestrated workflows: applying test parameters to control documentation, pulling data fields from source documents into sample testing sheets, and flagging gaps for practitioner review. Both layers operate within parameters auditors establish, and all outputs require assessor review and approval.

Addressing Partner Concerns

The critical question partners ask: does AI undermine the professional judgment that defines audit work? The answer lies in understanding AI's role as assistant, not replacement. Although risk quantifications can help inform risk assessments, professional skepticism and auditor judgment cannot be automated regardless of AI capabilities. Under both AICPA and PCAOB standards, auditors retain responsibility for:

  • Risk assessment design and materiality determinations
  • Sampling methodology and population analysis
  • Evidence evaluation and sufficiency judgments
  • Opinion formation and engagement conclusions

AI tools assist with data work and drafting, but all outputs require assessor review and approval before finalization. Partner engagement reviews and engagement quality reviews maintain the same standards as pre-AI engagements. The difference is that staff arrive at review checkpoints faster because they spent hours on judgment work instead of manual data extraction.

The value proposition for staff retention deserves emphasis. Associates drowning in copy-paste tedium and routine procedural work leave for industry positions offering better work-life balance. When AI eliminates the procedural burden, staff focus on control evaluation, risk assessment, client communication, and higher-value work that builds professional skills and creates engagement satisfaction.

How AI Transforms Audit Preparation

Fieldguide's AI Maturity Framework maps out practical AI adoption stages. Implementing AI in audit workflows supports these critical tasks:

Stage 1: Evidence Gathering and Request Management

Traditional approaches scatter PBC lists across email threads, requiring manual coordination and client follow-up. Agentic request management analyzes evidence uploaded to engagement requests and can be configured to assess relevance and associate documents with the appropriate controls, workpapers, or samples, reducing manual review. Real-time dashboards track document status, significantly reducing the administrative burden of coordinating multiple concurrent engagements.

Request Agents help teams quickly understand evidence readiness and reduce back-and-forth during the request process, freeing capacity for higher-value scope decisions.

Stage 2: Test Procedure Design

Partners traditionally draft procedures from firm templates while staff customize language for client-specific circumstances, a process that consumes significant time per engagement. AI-assisted procedure drafting generates standardized first drafts from firm methodology and engagement context; Fieldguide customers report reducing drafting time by 66%.

Time savings allow practitioners to focus on risk assessment rather than formatting and template management overhead.

Stage 3: Risk-Based Sampling and Data Extraction

Manual sampling requires staff to select samples, then extract data fields by hand, often spending entire days copying PDF values into Excel. AI can perform sample-based data extraction by pulling defined data fields from supporting source documents and writing results directly into Sample Testing Sheets with dynamic citations.

Assessors determine sampling methodology; AI assists with extracting defined fields and highlighting inconsistencies within provided sample context for practitioner review. This shifts hours from data processing to analysis.
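A stripped-down sketch of the extraction step, assuming document text has already been OCR'd: defined fields are pulled into a testing-sheet row that keeps a citation back to the source file, with missing fields left as gaps for review. The field names and patterns are invented for illustration; real extraction engines are far more robust than regular expressions.

```python
# Illustrative only: pull defined fields (invoice number, amount) from
# already-OCR'd document text into a sample testing sheet row, keeping a
# citation back to the source evidence file.
import re

FIELD_PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*#?\s*([\w-]+)"),
    "amount": re.compile(r"Total:\s*\$([\d,]+\.\d{2})"),
}

def extract_row(doc_name: str, text: str) -> dict:
    row = {"source": doc_name}   # dynamic citation to the evidence file
    for field, pattern in FIELD_PATTERNS.items():
        m = pattern.search(text)
        row[field] = m.group(1) if m else None  # gaps flagged for review
    return row

sample_doc = "Invoice # INV-4417 ... Total: $12,450.00"
print(extract_row("inv_4417.pdf", sample_doc))
# {'source': 'inv_4417.pdf', 'invoice_no': 'INV-4417', 'amount': '12,450.00'}
```

The `source` field is what makes the output reviewable: every extracted value traces back to a specific document, so the practitioner can verify rather than re-perform the extraction.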

Stage 4: Controls Testing and Findings Compilation

Staff traditionally test controls, manually summarize results in Word documents, and managers compile findings across multiple engagement sections. Within Risk Advisory engagements, AI can automate key aspects of controls testing by applying practitioner-defined test parameters to control documentation and supporting evidence.

Agents surface indicators related to control design and operating effectiveness, identify gaps and inconsistencies, and prepare preliminary findings summaries for practitioner review and determination. Hours previously spent on manual testing and summarization become focused review time.

Stage 5: Audit Reporting

Reporting automation can populate report templates with validated data from workpapers. Data flows automatically from workpapers to reports, while AI suggests language based on historical findings patterns. Assessors review, edit, customize, and approve all content before delivery; reports are never auto-generated without human oversight.
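A minimal sketch of template-driven report population under a human approval gate, assuming hypothetical field names: values flow from an approved workpaper into a report draft, and drafting is blocked if the approval flag is absent.

```python
# Illustrative sketch: populate a report template from approved workpaper
# data; refuse to draft anything from an unapproved workpaper.
from string import Template

REPORT_TEMPLATE = Template(
    "Opinion section: We tested $controls_tested controls; "
    "$exceptions exception(s) were noted."
)

def draft_report(workpaper: dict) -> str:
    if not workpaper.get("reviewer_approved"):
        raise ValueError("workpaper not approved: report drafting blocked")
    return REPORT_TEMPLATE.substitute(
        controls_tested=workpaper["controls_tested"],
        exceptions=workpaper["exceptions"],
    )

wp = {"controls_tested": 42, "exceptions": 1, "reviewer_approved": True}
print(draft_report(wp))
# Opinion section: We tested 42 controls; 1 exception(s) were noted.
```

The approval check is the design point: data flows automatically, but only from workpapers that have passed the defined review checkpoint.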

Current implementations show practitioners can recover time from routine tasks, with firms reporting measurable improvements in staff utilization and engagement delivery capacity. These workflow improvements translate into concrete business outcomes.

What Results Are Firms Actually Seeing?

Research on data analytics implementation in auditing concludes that adopting data analytics can materially improve efficiency and effectiveness of internal audit processes, including reducing time spent on testing in many engagements. The Human + AI in Accounting field study reports that accountants using generative AI tools reallocate approximately 8.5% of their time from routine data entry to higher-value tasks (about 3.4 hours in a 40-hour week), achieve a 12% increase in general ledger granularity, and close the books about 7.5 days faster at month-end.

These outcomes aren't limited to early adopters or large firms. Fieldguide customers such as UHY's advisory practice report similar patterns: reduced time on manual evidence gathering, faster turnaround on client deliverables, and staff reallocation from procedural tasks to advisory work. The common thread across implementations is that efficiency gains compound: time saved on one engagement phase creates capacity for deeper analysis in subsequent phases.

Firms documenting measurable AI outcomes share three characteristics: they target high-volume repetitive processes first, maintain auditor oversight at defined checkpoints, and measure time savings against baseline workflows established before implementation. These metrics demonstrate that AI-assisted workflows meaningfully improve both staff utilization and engagement delivery capacity as adoption matures.

Implementing AI Responsibly

AICPA standards require auditors to exercise professional judgment when evaluating audit evidence obtained through AI-assisted methods. Statement on Auditing Standards (SAS) No. 142 requires auditors to evaluate whether audit evidence is sufficient and appropriate, while SAS No. 145 requires auditor judgment in designing appropriate risk assessment procedures. Materiality determinations and control evaluation remain auditor-led decisions.

While AI can assist with data extraction, anomaly detection, and pattern recognition in large datasets, auditors must:

  • Understand the controls surrounding AI processes
  • Verify that original data input agrees to the financial statements
  • Evaluate the reliability of AI-processed information
  • Remain responsible for validating accuracy and completeness

These requirements ensure that AI-assisted procedures meet the same evidentiary standards as traditional audit methods.

Quality Assurance and Governance

Quality assurance procedures require adaptation to address AI-assisted work processes. Partner engagement reviews must evaluate whether AI-assisted audit work meets professional standards for sufficiency and appropriateness of evidence. The key distinction: reviewers evaluate both the AI tool's output reliability and the auditor's exercise of professional judgment in accepting or modifying that output.

Under AICPA's SQMS No. 1 quality management framework, firms must establish risk-based quality objectives and governance protocols that address all relevant quality risks, which in practice should include those arising from technology-assisted procedures. This includes:

  • Documenting AI tool configuration and test parameters
  • Maintaining assessor approval trails
  • Aligning with AICPA auditing standards including SAS 145 for risk assessment and documentation
  • Addressing PCAOB requirements—AS 1105 and AS 2301 were amended in June 2024 to address technology-assisted audit procedures
  • Ensuring data handling complies with GDPR, HIPAA, SOX, and industry-specific requirements

Regardless of how preliminary work was performed, audit documentation must show assessor decisions and final determinations.

Build Capacity Without Compromising Quality

The firms seeing the strongest results from AI adoption share a common approach: they choose platforms purpose-built for audit and advisory work rather than adapting generic tools. The difference matters because audit workflows require domain-specific understanding that general-purpose AI lacks: compliance frameworks, evidence standards, and professional documentation requirements.

Fieldguide's end-to-end platform is designed specifically for accounting and advisory firms. Its agentic AI supports and executes defined workflow segments under practitioner oversight, and the platform maintains SOC 2 Type 2 attestation and ISO 42001 AI management system certification for firms prioritizing security and responsible AI practices.

For firms evaluating their next step, the key questions remain consistent: Does the platform understand audit methodology? Can it operate within your firm's defined parameters? Does it maintain the documentation trails your quality processes require? Schedule a demo to see how Fieldguide addresses these requirements for your specific workflows.