Engagement quality, realization rates, and your firm's standing with regulators all hinge on how well you design and execute substantive testing. Getting it wrong is the fastest path to inspection deficiencies; getting it right means confident sign-offs and defensible workpapers. This article walks through what substantive testing requires under current standards, how it relates to control testing, and where AI-powered platforms fit into the picture.
Substantive testing is the set of audit procedures you perform to detect material misstatements in account balances, transaction classes, and disclosures. In practice, that means inspecting invoices, confirming balances with third parties, recalculating estimates, and investigating variances between expected and recorded amounts.
You're tying balances and disclosures back to evidence through targeted tests, expectation setting, and follow-up so you can decide whether the remaining risk of material misstatement is acceptable. PCAOB AS 2301 and AS 2305 divide substantive procedures into two categories: tests of details of accounts and disclosures, and substantive analytical procedures.
Tests of details are exactly what they sound like. You pull a sample of transactions or balances, examine the supporting evidence, and determine whether the recorded amounts are accurate, complete, and properly classified. Think of vouching a sample of revenue transactions to shipping documents and customer contracts, or confirming accounts receivable balances directly with customers.
Substantive analytical procedures take a different angle. Instead of testing individual transactions, you develop an expectation for what a balance or ratio should be and investigate significant differences between that expectation and what's recorded. A straightforward example: comparing gross margin percentages by product line against prior periods and industry benchmarks, then digging into any material variance.
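To make the mechanics concrete, here is a minimal sketch (in Python) of that expectation-versus-recorded comparison. The product lines, margins, and the five-point investigation threshold are all hypothetical; in practice the threshold comes from your materiality and the precision of your expectation, not from a fixed rule.

```python
# Illustrative only: hypothetical product-line margins and an assumed
# investigation threshold. Real thresholds are set by the engagement team
# based on materiality and the precision of the expectation.

product_lines = {
    # line: (prior-period margin, industry benchmark, recorded margin)
    "Hardware": (0.42, 0.40, 0.41),
    "Services": (0.61, 0.58, 0.52),
    "Subscriptions": (0.74, 0.72, 0.73),
}

INVESTIGATION_THRESHOLD = 0.05  # assumed: differences above 5 points get follow-up

for line, (prior, benchmark, recorded) in product_lines.items():
    expectation = (prior + benchmark) / 2          # simple blended expectation
    difference = recorded - expectation
    status = "investigate" if abs(difference) > INVESTIGATION_THRESHOLD else "within threshold"
    print(f"{line}: expected {expectation:.1%}, recorded {recorded:.1%}, "
          f"difference {difference:+.1%} -> {status}")
```

The flagged variance is where the real audit work begins: corroborating management's explanation and obtaining evidence that supports it.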
Both approaches are subject to minimum requirements that determine when each is sufficient on its own.
No matter how well your controls testing goes, AS 2301.36 sets a mandatory floor: substantive procedures are required for each relevant assertion of each significant account and disclosure, regardless of assessed control risk.
That floor gets higher when significant risks are involved. AS 2305.09 establishes that substantive analytical procedures alone are unlikely to provide sufficient audit evidence for significant risks of material misstatement, so you need to plan tests of details for those areas. AS 2301 also requires that substantive procedures respond to assessed risks of material misstatement, including fraud risks, but the standards do not require tests of details exclusively for fraud.
These two categories of audit procedures answer different questions, and understanding the distinction shapes how you plan every engagement.
Tests of controls evaluate whether a specific control operated effectively during the audit period. You're asking: did this control actually prevent or detect material misstatements as designed? The evidence you gather relates to the control's operation, not to the dollar amounts in the financial statements.
Substantive tests go straight to the numbers, asking whether a balance or transaction class is materially correct. The evidence relates directly to financial statement assertions like existence, completeness, valuation, and accuracy.
In some situations, tests of controls become mandatory. You must test controls when substantive procedures alone cannot provide sufficient appropriate audit evidence for a relevant assertion, and when you need to support your reliance on the accuracy and completeness of financial information used in performing other audit procedures. Highly automated environments with few paper trails are the classic example; you can't substantively test what you can't independently recreate without understanding the system controls.
Substantive procedures, as noted above, are always required for significant accounts. That asymmetry matters for planning. You can sometimes skip control testing if your substantive approach is comprehensive enough (and the assertion doesn't fall into the mandatory-controls category), but you can never skip substantive work by relying solely on controls.
When both objectives apply to the same population, there's an efficient way to address them together.
When both types of procedures apply to the same population, you can design a dual-purpose sample that addresses both objectives simultaneously. In practice, firms generally size dual-purpose samples to meet at least the larger of the control or substantive requirements; combining objectives does not reduce the required sample volume.
For nonissuer audits, firms often reference the PCAOB's analogous-standards mapping to align PCAOB requirements with AU-C standards (e.g., AU-C 330 for responses to assessed risks and AU-C 530 for sampling).
Weak sampling design is one of the fastest ways to undermine otherwise solid substantive work. AS 2315 governs the framework, defining audit sampling as testing less than 100 percent of a population to reach conclusions about the entire balance or transaction class.
Both statistical and nonstatistical sampling can produce sufficient audit evidence when you design and execute them well. AS 2315 confirms that your choice between the two doesn't directly affect which procedures you perform or the appropriateness of evidence for individual items.
Statistical sampling gives you mathematical precision around your results, which helps when you need to quantify sampling risk or project misstatements across a population. Nonstatistical sampling relies more heavily on your professional judgment in both design and evaluation, which works well when population characteristics make stratification difficult or when the account isn't large enough to justify the additional rigor. Whichever approach you choose, sample size determination follows the same core logic.
Three factors drive sample size, and each pulls in a specific direction. Lower tolerable misstatement means larger samples. Lower allowable risk of incorrect acceptance (meaning you want more assurance) also pushes sample sizes up. And populations with high variability or expected misstatements require more items to achieve the same confidence level. AS 2315 requires you to consider all three when determining your sample size.
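As a rough numerical illustration of those directional relationships, the sketch below uses a simplified monetary-unit-sampling formulation of the kind found in introductory sampling texts; it is not the method AS 2315 prescribes, and the dollar amounts and factors are assumptions chosen only to show how each input moves the sample size.

```python
import math

def mus_sample_size(population_value, tolerable_misstatement,
                    expected_misstatement, confidence_factor, expansion_factor):
    """Simplified monetary-unit-sampling size calculation (illustrative).

    Lower tolerable misstatement, a lower acceptable risk of incorrect
    acceptance (a higher confidence factor), and higher expected
    misstatement all push the required sample size up.
    """
    cushion = tolerable_misstatement - expected_misstatement * expansion_factor
    if cushion <= 0:
        raise ValueError("expected misstatement too close to tolerable misstatement")
    return math.ceil(population_value * confidence_factor / cushion)

# Hypothetical engagement: $10M receivables, $500K tolerable misstatement,
# $50K expected misstatement, factors assumed for a ~5% risk of incorrect
# acceptance (illustrative values, not a firm methodology).
print(mus_sample_size(10_000_000, 500_000, 50_000, 3.0, 1.6))   # -> 72 items

# Lowering tolerable misstatement to $300K raises the required size.
print(mus_sample_size(10_000_000, 300_000, 50_000, 3.0, 1.6))   # -> 137 items
```

Whatever sampling tool your firm uses will apply its own factors and stratification, but the direction of each input's effect is the same.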
Beyond current requirements, the sampling standards are about to change in ways that affect how you handle unexpected findings.
The PCAOB adopted amendments to AS 2315 in 2024 that, among other things, clarify how auditors handle items identified for investigation in tests of details. The amended standard is effective for audits of fiscal years beginning on or after December 15, 2026. You should start familiarizing yourself with the updated requirements now, particularly the expanded guidance on what to do when sample items raise questions beyond the original test objective.
Your substantive procedures and ICFR work should inform each other throughout the engagement; that two-way relationship is where firms either strengthen their audits or create inspection deficiencies. AS 2201, together with AS 2301.36, makes this explicit: substantive procedures are required over relevant assertions for significant accounts and disclosures, even when you plan to rely on controls.
When ICFR testing reveals deficiencies, your substantive plan needs to respond directly. AS 2201 requires you to factor identified control deficiencies, including those related to fraud, into your risk assessment for the financial statement audit. A material weakness or significant deficiency in controls over revenue recognition, for example, means expanding the nature, timing, and extent of your substantive revenue testing to compensate for the reduced assurance from controls.
Recent inspection results show how frequently this connection between control findings and substantive responses breaks down.
Many firms aren't making those adjustments effectively. The PCAOB's 2024 inspection spotlight shows that while the aggregate Part I.A deficiency rate among U.S. Global Network Firms dropped to 26% in 2024 from 34% in 2023, revenue and related accounts remained the most frequently cited category. Three failure patterns stand out:
That last pattern connects to a broader issue. The KPMG 2025 SOX Survey found that even as average testing hours per control increased from 12 to 16 hours between FY22 and FY24, automated controls accounted for just 17% of total controls, down from 21% in FY22.
When firms rely heavily on manual processes for IPE validation, the risk of incomplete or inaccurate data feeding into substantive tests rises. If you're not rigorously testing the completeness and accuracy of entity-produced information before using it as audit evidence, you're building your conclusions on unstable ground.
Technology-assisted analysis can strengthen substantive testing, but the bar for evaluating and documenting that evidence is rising.
What the new evidence standards require
If you're using technology-assisted analysis in your audits, the documentation bar just got higher, and firms that don't clear it are generating deficiencies instead of eliminating them. PCAOB amendments related to AS 1105, effective for fiscal years beginning on or after December 15, 2025, now spell out what you need to support when your procedures involve analyzing entity-produced electronic information with technology-based tools. AICPA SAS No. 142 similarly modernized how auditors evaluate evidence obtained through automated tools and data analytics.
The takeaway from both is the same: technology use is fine, but you need to document how you validated the data feeding into it. Meeting those requirements is easier when your testing platform is designed around them.
Platforms purpose-built for financial audit workflows can address several of the procedural pain points that show up in inspection findings. Fieldguide's AI Audit Testing Agent, for example, matches evidence to samples and extracts defined data fields from source documents into Sample Sheets, providing dynamic citations back to the source material. The agent handles extraction only; it does not validate data or draw conclusions, so all outputs require practitioner review and approval before they become part of your workpapers.
That kind of structured extraction helps you maintain consistency across large sample populations while preserving the documentation trail regulators expect. Beyond extraction, AI is already reshaping how firms approach the broader substantive testing workflow.
The Journal of Accountancy describes current AI applications in audit as supporting journal entry testing, anomaly detection, and processing large volumes of documents during planning and substantive procedures. As agentic AI capabilities mature, they can take on multistep task execution, such as gathering information and populating workpapers, while you focus review time on the exceptions and judgments that actually require expertise.
If you're running multiple concurrent audits, the practical benefit is straightforward: less time spent on manual data extraction and reconciliation means more time spent evaluating the results. In one Fieldguide case study, UHY reported 20-30% engagement time reductions, with some tasks dropping from 3 hours to 15 minutes.
Substantive testing demands precision, thorough documentation, and consistent execution across every significant account. Purpose-built for audit and advisory firms, Fieldguide's engagement automation platform embeds AI directly into your workflows: from sample-based data extraction with direct source references to evidence validation and workpaper documentation. The result is faster fieldwork without sacrificing the rigor that regulators and your review partners expect. See Fieldguide in action.