
Payment processors and merchants face a challenge: AI systems that process cardholder data must comply with PCI DSS v4.0.1, with compliance deadlines already in effect. As a result, organizations deploying AI for fraud detection, transaction analysis, or payment processing can’t treat these systems as exempt from security requirements.

This guide breaks down the impact of AI on payment security compliance. 

In it, we'll focus on four key PCI DSS requirements (3, 6, 7, and 10) where AI creates specific risks, provide mitigation strategies validated by NIST and international regulators, and examine how PCI DSS v4.0.1 (published June 2024, with all requirements mandatory as of March 31, 2025) affects assessment processes.

For Partners managing PCI assessments and IT leaders deploying AI payment systems, understanding these requirements prevents compliance violations before they occur.

What is PCI DSS compliance?

The Payment Card Industry Data Security Standard (PCI DSS) is a security framework that establishes mandatory requirements for all organizations that process, store, or transmit payment card data. 

PCI DSS protects cardholder data through 12 requirements organized under six control objectives. Version 4.0.1 was published in June 2024 and version 4.0 was officially retired on December 31, 2024; as of March 31, 2025, all v4.0.1 requirements, including those that were previously future-dated, are mandatory.

Organizations fall into four compliance levels based on annual transaction volume:

  • Level 1: Processors handling over 6 million transactions annually face the most rigorous requirements, including mandatory Qualified Security Assessor (QSA) validation.
  • Level 2: Organizations processing 1-6 million transactions annually require less extensive documentation but still maintain significant compliance obligations.
  • Level 3 and 4: Lower-volume merchants follow progressively simplified requirements tailored to their reduced risk profile.

Your compliance level determines whether you need external QSA validation or can complete self-assessment questionnaires. Note that as of March 31, 2025, all PCI DSS v4.0.1 requirements apply equally to AI-based payment systems, with no exemptions based on compliance level.
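
As a quick illustration of how these thresholds translate into practice, here is a minimal sketch, assuming only the Level 1 and Level 2 cutoffs cited above; the split between Levels 3 and 4 varies by card brand, so it is not encoded.

```python
def pci_merchant_level(annual_transactions: int) -> str:
    """Map annual card transaction volume to a PCI DSS compliance level.

    Only the Level 1 and Level 2 thresholds cited above are encoded; the
    Level 3 / Level 4 split varies by card brand.
    """
    if annual_transactions > 6_000_000:
        return "Level 1: QSA-validated Report on Compliance required"
    if annual_transactions >= 1_000_000:
        return "Level 2: self-assessment questionnaire typically accepted"
    return "Level 3 or 4: simplified requirements; exact tier depends on card brand rules"


if __name__ == "__main__":
    for volume in (10_000_000, 2_500_000, 50_000):
        print(f"{volume:>12,} transactions/year -> {pci_merchant_level(volume)}")
```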

The four PCI DSS requirements below are particularly critical for AI implementations:

  • Requirement 3: Mandates protecting stored cardholder data through encryption, tokenization, or masking to prevent unauthorized access to sensitive information (a minimal masking and tokenization sketch follows this list).
  • Requirement 6: Covers secure systems and software development practices, which extend to how AI models are developed, trained, and deployed.
  • Requirement 7: Restricts access to cardholder data based on business need, directly affecting who can access AI training datasets and model parameters.
  • Requirement 10: Requires comprehensive logging and monitoring of all access to system components, including AI model queries and decisions for auditability.
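
To make Requirement 3 concrete for AI pipelines, the sketch below renders PANs unreadable before a transaction record enters a training dataset. The field names and the HMAC-based surrogate tokens are illustrative assumptions rather than a prescribed PCI DSS mechanism; production systems typically rely on a dedicated tokenization service or format-preserving encryption with managed keys.

```python
import hashlib
import hmac

# A managed secret (KMS/HSM) in practice; hard-coded here only to keep the
# sketch self-contained.
TOKENIZATION_KEY = b"replace-with-a-managed-secret"


def tokenize_pan(pan: str) -> str:
    """Replace a PAN with a deterministic surrogate token (HMAC-SHA256)."""
    digest = hmac.new(TOKENIZATION_KEY, pan.encode(), hashlib.sha256).hexdigest()
    return f"tok_{digest[:24]}"


def mask_pan(pan: str) -> str:
    """Keep at most the first six and last four digits, a common masking convention."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]


def prepare_training_record(record: dict) -> dict:
    """Strip raw cardholder data from a transaction record before model training."""
    safe = dict(record)
    pan = safe.pop("pan")                  # the raw PAN never reaches the training set
    safe["pan_token"] = tokenize_pan(pan)  # stable identifier for joins and features
    safe["pan_masked"] = mask_pan(pan)     # human-readable reference for review
    return safe


if __name__ == "__main__":
    raw = {"pan": "4111111111111111", "amount": 42.50, "merchant_id": "M-1029"}
    print(prepare_training_record(raw))
```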

With these compliance fundamentals established, let's examine how major payment processors are implementing AI within this regulatory framework.

How is AI used in payment systems today?

AI now underpins payment security functions across multiple applications, delivering significant security improvements while creating new compliance challenges.

Here are a few examples of how AI is used in payment systems today:

  • Fraud detection: Mastercard's Decision Intelligence processes 143 billion transactions annually, achieving 20% average improvement in fraud detection rates and up to 300% improvement in specific scenarios
  • Network analysis: AI systems map connections between accounts, devices, and transactions across payment networks using graph technology
  • Transaction scoring: Real-time risk assessment evaluates transactions as they occur (a toy scoring sketch follows this list)
  • Identity verification: Behavioral biometrics authenticate users through typing patterns and device interaction
  • Dispute resolution: AI chatbots handle fraud claims and documentation
  • Adaptive security: Machine learning frameworks continuously retrain to address emerging fraud patterns
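
As a toy illustration of the transaction-scoring pattern, the sketch below combines a few hand-picked risk signals into a single score. Real deployments use trained models and far richer features; the feature names, weights, and threshold here are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    country: str          # where the transaction originated
    card_country: str     # where the card was issued
    minutes_since_last_txn: float


def risk_score(txn: Transaction) -> float:
    """Toy real-time risk score in [0, 1]; higher means riskier."""
    score = 0.0
    if txn.amount > 1_000:
        score += 0.4      # unusually large amount
    if txn.country != txn.card_country:
        score += 0.3      # cross-border mismatch
    if txn.minutes_since_last_txn < 1:
        score += 0.3      # rapid-fire activity on the same card
    return min(score, 1.0)


if __name__ == "__main__":
    txn = Transaction(amount=1_250.0, country="DE", card_country="US",
                      minutes_since_last_txn=0.5)
    decision = "review" if risk_score(txn) >= 0.7 else "approve"
    print(f"score={risk_score(txn):.2f} -> {decision}")
```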

While AI can improve security, it introduces compliance risks related to data handling, system security, and regulatory requirements, which must be carefully managed under PCI DSS.

What are the AI compliance risks under PCI DSS?

AI systems processing payment data don't operate outside PCI DSS requirements. Instead, they introduce new failure modes where standard compliance controls break down, creating systematic violations that traditional security frameworks weren't designed to address.

There are four categories of systematic PCI DSS violations that organizations must address:

  • Cardholder data exposure in AI training. Organizations violate Requirement 3 when they use production payment data for model training without proper protection. Training datasets containing primary account numbers (PANs) must be tokenized, redacted, or otherwise rendered unreadable, in accordance with PCI DSS controls for protecting stored account data. 
  • Vulnerable AI infrastructure. NIST's COSAIS framework recognizes that machine learning systems face unique vulnerabilities requiring AI-specific security measures. For example, prompt injection attacks implicate Requirement 6's secure development and input validation objectives, while frameworks like TensorFlow require systematic vulnerability management under Requirement 6.3.
  • Audit trail and logging gaps. AI systems can experience "data drift," where model behavior shifts over time without a corresponding audit trail. Organizations must log inference metadata (model versions, parameter changes, inference IDs, and access events) with pseudonymized references rather than raw cardholder data (a minimal logging sketch follows this list). These logs must be protected and retained per Requirement 10.2's mandate to implement audit logs for anomaly detection and Requirement 10.5's retention requirements.
  • Access control and scope management. AI service accounts granted overprivileged database access violate Requirement 7's least-privilege principle. Unrestricted "SELECT *" permissions and shared credentials across development teams contradict PCI SSC guidance limiting access to "only those individuals whose job requires such access." Shadow AI compounds this risk when developers deploy AI coding assistants that access payment code repositories without appropriate risk management under Requirement 12.3.
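
One way to close the logging gap described above is to record inference metadata with a pseudonymized card reference instead of the raw PAN, as in the minimal sketch below. The field names and hashing scheme are assumptions; protecting and retaining the resulting logs per Requirements 10.2 and 10.5 remains the job of the surrounding logging infrastructure.

```python
import hashlib
import hmac
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_inference_audit")

# A managed secret (KMS/HSM) in practice; hard-coded only to keep the sketch self-contained.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"


def pseudonymize(pan: str) -> str:
    """Derive a stable reference to a card without ever logging the PAN itself."""
    return hmac.new(PSEUDONYM_KEY, pan.encode(), hashlib.sha256).hexdigest()[:16]


def log_inference(model_version: str, pan: str, decision: str, score: float) -> str:
    """Emit an audit record for a single model decision and return its inference ID."""
    inference_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "event": "ai_inference",
        "inference_id": inference_id,
        "model_version": model_version,  # ties the decision to a specific model build
        "card_ref": pseudonymize(pan),   # pseudonymized reference, never the raw PAN
        "decision": decision,
        "score": round(score, 4),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }))
    return inference_id


if __name__ == "__main__":
    log_inference("fraud-model-2025.03.1", "4111111111111111", "decline", 0.93)
```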

These systematic violations create compounding compliance risks that expose organizations to penalties and also increase the likelihood of data breaches.

AI in PCI DSS Assessments

In spring 2025, the PCI Security Standards Council published formal guidance for payment security assessors on the use of AI. The 12-page document maintains that human experts should lead assessments while permitting AI to automate document review and report generation.

Agentic AI platforms like Fieldguide help QSAs improve assessment efficiency through automated evidence analysis and documentation review. These tools process large volumes of compliance documentation while maintaining the security controls required for handling sensitive payment card data, allowing assessors to focus on substantive risk evaluation rather than manual document processing.

How to use Agentic AI for efficient PCI assessments

Partners and Managers can implement Agentic AI across four workflows:

  1. Consolidating interview notes. AI-powered transcription tools convert interviews into searchable text with speaker identification. Assessors can focus on substantive follow-up questions while AI captures complete conversation records. Post-interview, AI summarizes key points by PCI DSS requirement and flags potential compliance gaps requiring verification.
  2. Summarizing documentation and evidence. Agentic AI processes documentation that assessors have mapped to specific PCI DSS requirements, extracting relevant evidence quotes and key information. It summarizes documentation content, highlights potential compliance gaps within assigned requirements, and assists in creating matrices linking evidence to specific controls (though the initial requirement mapping requires qualified assessor judgment).
  3. Sampling and data comparison. For PCI DSS Requirement 8, Agentic AI assists with risk-based sampling by analyzing provided population data to identify anomalous access patterns and privileged accounts requiring enhanced scrutiny (see the flagging sketch after this list). Where validated population files are available and client permissions allow, AI can process larger datasets for anomaly detection, though practical implementation depends on data availability, format, and scope. In most assessments, QSAs apply risk-based sampling methods informed by AI-assisted analysis.
  4. Quality assurance and alignment verification. AI can check for consistency across assessment work papers before final report delivery. 
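
As a rough illustration of the sampling assistance described in item 3, the sketch below flags accounts in a user-access population that may warrant enhanced scrutiny. The population fields, thresholds, and naming conventions are assumptions about how a client export might look; the QSA's risk-based judgment still drives the final sample selection.

```python
from datetime import date, datetime

# Assumed population-file fields; real exports differ by client system.
POPULATION = [
    {"user_id": "svc_ai_scoring", "role": "db_admin", "last_login": "2025-01-03", "mfa": "no"},
    {"user_id": "jdoe", "role": "analyst", "last_login": "2025-06-20", "mfa": "yes"},
    {"user_id": "shared_ops", "role": "operator", "last_login": "2025-06-01", "mfa": "no"},
]

PRIVILEGED_ROLES = {"db_admin", "domain_admin", "root"}
STALE_DAYS = 90


def flag_for_review(population, as_of=date(2025, 7, 1)):
    """Return accounts that merit enhanced scrutiny during Requirement 8 sampling."""
    flagged = []
    for account in population:
        reasons = []
        if account["role"] in PRIVILEGED_ROLES:
            reasons.append("privileged role")
        if account["mfa"] != "yes":
            reasons.append("no MFA recorded")
        last_login = datetime.strptime(account["last_login"], "%Y-%m-%d").date()
        if (as_of - last_login).days > STALE_DAYS:
            reasons.append("stale login")
        if "shared" in account["user_id"] or account["user_id"].startswith("svc_"):
            reasons.append("shared or service account")
        if reasons:
            flagged.append((account["user_id"], reasons))
    return flagged


if __name__ == "__main__":
    for user_id, reasons in flag_for_review(POPULATION):
        print(f"{user_id}: {', '.join(reasons)}")
```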

When implementing AI for PCI assessments, QSAs should follow data security protocols: ensure AI platforms maintain SOC 2 Type 2 or equivalent attestations (request the vendor's current report and confirm that its scope covers PCI-relevant processing controls), maintain clear audit trails of AI-assisted work, and establish explicit engagement letters documenting AI tool usage with client consent.

Where feasible, redact sensitive client information before AI processing, though assessors should weigh whether the effort of redaction outweighs the efficiency gains from AI assistance.
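
Here is a minimal redaction sketch, assuming the evidence has already been extracted to plain text: it replaces digit runs that look like PANs (13 to 19 digits passing a Luhn check) before the content reaches any external AI platform. Real evidence arrives as PDFs, screenshots, and spreadsheets, so this shows only the core pattern, not a complete redaction pipeline.

```python
import re

# Runs of 13-19 digits, optionally separated by spaces or dashes.
CANDIDATE_PAN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")


def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to cut down false positives on long digit runs."""
    total, alternate = 0, False
    for ch in reversed(digits):
        d = int(ch)
        if alternate:
            d *= 2
            if d > 9:
                d -= 9
        total += d
        alternate = not alternate
    return total % 10 == 0


def redact_pans(text: str) -> str:
    """Replace likely PANs with a fixed placeholder before AI processing."""
    def _replace(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        return "[REDACTED PAN]" if luhn_valid(digits) else match.group()

    return CANDIDATE_PAN.sub(_replace, text)


if __name__ == "__main__":
    evidence = "Test card 4111 1111 1111 1111 was approved; ticket 1234567890123 is unrelated."
    print(redact_pans(evidence))
```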

Implement AI to increase margins on PCI assessments

PCI DSS v4.0.1 compliance extends fully to AI systems processing cardholder data, requiring proper integration of security controls across data protection, secure development, access management, and comprehensive logging. 

For audit and advisory firms, Fieldguide’s Agentic AI improves PCI assessments through automated documentation review and evidence analysis while maintaining professional standards and client data security. Request a demo to learn more.

Deirdre Dolan

Sr. Director of Product Marketing

Increasing trust with AI for audit and advisory firms.
