Key Insights
Audit quality often breaks down in the handoffs between steps, not in any one procedure itself. The planning file doesn't track to the controls work, the controls work gets re-tested in substantive procedures, and the evidence gets chased twice because the tracker and the binder live in different tools. This article covers the seven core steps of a financial statement audit, where SOX requirements fit for public-company engagements, and where Fieldguide can handle execution.
Pre-audit work happens before fieldwork starts: deciding whether to take the engagement, confirming independence and capacity, and establishing which standards govern.
Which standards apply depends on the entity. Non-public companies fall under AICPA Auditing Standards; public companies follow PCAOB standards. Many public companies are also subject to the Sarbanes-Oxley Act (SOX), specifically Section 404, which sets requirements for internal control over financial reporting (ICFR). ICFR refers to the processes a company uses to keep its financial reporting reliable: approvals, reconciliations, access restrictions, and similar safeguards.
Acceptance and continuance work confirms the firm has the capacity, competence, and independence to take on the engagement. Integrated audits also require staffing coverage for both the financial statement and ICFR components.
For SOX engagements, the readiness assessment looks at whether the client's control environment can support an integrated audit. The work checks whether management has selected and documented an ICFR framework, whether management has completed its own assessment, and whether process-level controls are documented. Sparse documentation tends to extend timelines and strain budgets, and both risks are better surfaced during scoping than mid-engagement.
Planning is where the audit shifts from strategy to specifics. Weak planning shows up later as budget overruns, samples that miss the riskiest balances, and review rounds that pile up right before reporting. A well-built planning file gives reviewers and inspectors a logic they can trace.
Setting materiality early is what scopes the rest of the audit. It shapes how big the samples need to be, which accounts get the most attention, and what counts as a finding worth reporting.
Materiality is the threshold at which a misstatement becomes large enough to influence a reader's judgment about the financial statements. It starts with a dollar number, but that number alone isn't enough. A misstatement that looks immaterial by size can still matter if it turns a reported loss into income or triggers a debt covenant breach.
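The interplay between the quantitative threshold and qualitative overrides can be sketched in a few lines. Everything below is illustrative: the 5% of pre-tax income benchmark is one common rule of thumb, not a prescribed rule, and the override flags are hypothetical examples of qualitative factors.

```python
def planning_materiality(pretax_income: float, pct: float = 0.05) -> float:
    """Illustrative only: one common rule of thumb benchmarks overall
    materiality at a percentage of pre-tax income. Both the benchmark
    and the percentage are firm-methodology choices."""
    return abs(pretax_income) * pct

def is_material(misstatement: float, materiality: float,
                flips_loss_to_income: bool = False,
                breaches_covenant: bool = False) -> bool:
    # Quantitative screen first, then qualitative overrides: a small
    # misstatement still matters if it changes the story the numbers tell.
    if abs(misstatement) >= materiality:
        return True
    return flips_loss_to_income or breaches_covenant

m = planning_materiality(2_000_000)  # 100,000 under the 5% assumption
print(is_material(40_000, m))                             # False: small, benign
print(is_material(40_000, m, flips_loss_to_income=True))  # True: small, but qualitative
```

The point of the structure, not the specific numbers: the dollar threshold is a starting filter, and the qualitative checks can pull an item back in regardless of size.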
Materiality tells you what counts; risk assessment tells you where to look. It operates at two levels: the financial statements as a whole, and the assertion level for individual transactions, account balances, and disclosures. Typical risk assessment procedures include inquiries of management, analytical procedures, and observation and inspection of the client's processes.
The clearer this picture, the better you allocate evidence. For integrated audits, identify common controls and shared risks once and apply them to both objectives.
Step 3 is where the audit moves from desk work to fieldwork. The risks identified during planning have to be tested against how the client's controls actually run, and that starts with walkthroughs. The gap between how a process is documented and how it runs is where control failures hide. Walkthroughs close that gap.
A walkthrough traces a single transaction from origination through the process to the financial records, observing and testing the controls along the way. A defensible walkthrough typically includes four components: inquiry, observation, inspection, and re-performance.
All four steps contribute to the team's understanding of how the control runs, but inquiry alone is rarely enough; observation, inspection, and re-performance are what give the walkthrough its evidentiary weight.
Walkthroughs leave a lot of documentation: process narratives, control notes, and exception write-ups, all spread across the workpapers. Fieldguide's AI Actions drafts that documentation in one click. The team's time can then go to the procedural work itself, including the inspection and re-performance that anchor the walkthrough.
Now the audit has to decide how much of its evidence will come from the client's controls and how much will have to come from direct testing of transactions. If a control runs reliably, the team can rely on it and scale back the substantive work that covers the same risk. Get this call wrong and the file has a hole that surfaces during review or inspection.
Controls testing is needed in two situations: when substantive procedures alone can't give sufficient evidence for a relevant assertion, and when you rely on system- or client-generated data in other procedures. Recent PCAOB inspection reports have cited firms for relying on such data without adequately testing the underlying information or controls. Treat its reliability as a testing question, not an assumption.
Reliance on a control depends on testing both how it's designed and how it actually runs. Design effectiveness asks whether the control can prevent or detect material misstatements as designed. Operating effectiveness asks whether it actually functions that way in practice. Skip either and your controls reliance is unsupported, which means the substantive plan works harder than it should.
Substantive procedures are the tests that look directly at the numbers in the financial statements: confirming balances, examining supporting documents, recalculating amounts, and tracing transactions back to source. They consume a large share of audit hours because they turn assessed risk into direct evidence. Efficiency matters, but coverage matters more: substantive gaps are hard to repair once reporting deadlines close in.
Even with controls reliance, you still need substantive procedures for each relevant assertion tied to each significant account and disclosure. Reliance reduces the work; it doesn't eliminate it.
The work gets heavier as risk does. Significant risks require tests of details responsive to the exposure; substantive analytical procedures alone are not enough. Fraud risks demand their own responsive tests, and period-end material adjustments need examination as a separate item.
Day to day, that means matching evidence to samples, extracting data from invoices and journal entries, and documenting test results across dozens of accounts.
Fieldguide's Field Auditor is built for exactly this work. It runs audit testing across 70+ Performer Agents organized by use case: For Profit (AR, AP, payroll, expense, inventory price testing), Not For Profit, Employee Benefit Plans, Lending & Regulatory, Investment Institutions, and Request & Evidence. Field Auditor extracts defined data fields from source documents into sample sheets and provides direct source references back to the support. Practitioners review and approve all outputs. What's left for the team is the judgment.
Two evaluation tasks dominate the late-engagement work: deciding which misstatements matter and how to classify each control deficiency. Both depend on aggregation. Items that look immaterial in isolation can shift significance once you combine them.
Misstatement evaluation is the process of deciding whether the errors and adjustments the team found during testing add up to a material problem with the financial statements. It starts with the best estimate of total misstatement, then asks whether each item matters on its own and whether items matter in combination. Aggregation flips many of these calls. Qualitative considerations shift them further: does a misstatement change trend analysis, affect management compensation, or shift a reported result in a way that matters to investors? Prior-year carryover effects can compound the math.
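The aggregation effect is easiest to see with numbers. The misstatements and the 100,000 threshold below are hypothetical; the mechanic is the point: three items that each clear the individual test can still fail the combined one.

```python
# Illustrative aggregation: each misstatement is individually below a
# hypothetical 100,000 materiality threshold, but they combine to exceed
# it. Signs matter in practice: offsetting errors can net down, though
# the gross picture usually gets evaluated before accepting the net.
materiality = 100_000
misstatements = {
    "revenue cutoff": 45_000,
    "inventory pricing": 38_000,
    "accrual understatement": 30_000,
}

individually_material = [name for name, amount in misstatements.items()
                         if abs(amount) >= materiality]
aggregate = sum(misstatements.values())

print(individually_material)                 # [] -- nothing material alone
print(aggregate, aggregate >= materiality)   # 113000 True -- material combined
```

A real evaluation layers on the qualitative factors and prior-year carryover effects described above; this sketch captures only the quantitative roll-up.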
Classifying control deficiencies is how the audit signals to the client and the audit committee how serious each control problem is. The classification drives the remediation work that follows and the way deficiencies get reported externally. For SOX engagements, deficiencies need severity classifications (control deficiency, significant deficiency, or material weakness) that stand up to scrutiny from reviewers, audit committees, and management. The distinction comes down to two questions: how likely is it that a material misstatement would not be prevented or detected on a timely basis, and how important is the issue for oversight?
Aggregation matters here too. Multiple deficiencies affecting the same account or process can amount to a material weakness in combination, even when each looks manageable on its own. Teams evaluating deficiencies one at a time miss those clusters.
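Catching those clusters is mostly a grouping exercise. The deficiency log below is hypothetical; the sketch shows the mechanical step of grouping by process so that combined evaluation happens at all. Whether a flagged cluster actually rises to a material weakness remains a judgment call.

```python
from collections import defaultdict

# Hypothetical deficiency log as (process, severity) pairs. Evaluating
# these one at a time would miss that three deficiencies hit the same
# revenue process.
deficiencies = [
    ("revenue", "control deficiency"),
    ("revenue", "significant deficiency"),
    ("revenue", "control deficiency"),
    ("payroll", "control deficiency"),
]

by_process = defaultdict(list)
for process, severity in deficiencies:
    by_process[process].append(severity)

# Any process with more than one deficiency gets flagged for combined
# evaluation rather than item-by-item classification.
clusters = {p: sevs for p, sevs in by_process.items() if len(sevs) > 1}
print(clusters)  # only the revenue cluster is flagged
```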
Up to this point, the work has lived inside the engagement file. The final stretch is where it becomes public-facing: the team translates the findings into the report investors, audit committees, and regulators will rely on, and ties off the file so next year's team has what they need. Strong closeout discipline keeps the final file, communications, and opinions aligned.
The audit report is the formal opinion the auditor issues on whether the financial statements are presented fairly. For public companies, the report also has to flag the matters that took the most challenging judgment, called Critical Audit Matters or CAMs. Public-company reports under PCAOB standards follow AS 3101 reporting requirements, and the auditor determines whether there are CAMs to communicate.
Evaluating potential CAMs starts with a practical question: is the matter central enough to the audit committee dialogue and to material accounts that readers would expect to understand it? The harder call is whether the work involved especially challenging, subjective, or complex auditor judgment. That's where borderline issues separate from true CAMs.
Once a matter is a CAM, the write-up should help readers understand why it mattered: the issue itself, what made it especially difficult, how you addressed it, and the related accounts or disclosures. That structure is what makes the disclosure defensible in the file.
Reporting isn't quite the end. Before the engagement archives, the team still has to finish the remaining closeout work.
Teams that treat closeout as an administrative afterthought lose context between engagements. A disciplined wrap-up gives next year's team a stronger start.
Connecting risk assessment, controls testing, substantive work, and reporting inside one workflow separates a clean engagement from one that runs over budget. Fieldguide is the audit and advisory profession's only end-to-end AI-native platform, covering every step from scoping through report delivery. Auditors direct the work through Field Orchestrator, which routes each task to the specialist Field Agent that executes it. Practitioners review every output and own the methodology, judgment, and final conclusions. Firms moving now are building advantages that compound. Request a demo to see how Fieldguide handles the engagement lifecycle in practice.