Equifax passed their compliance audits. They had documented patch management processes, quarterly reviews, and policies requiring timely patching. Their audit reports showed they were compliant with industry standards.
Then attackers exploited an Apache Struts vulnerability that had gone unpatched for 143 days. 147 million records exposed. $1.4 billion in settlements. Multiple compliance violations identified retroactively.
The audits checked documentation. Attackers checked actual patch status. One of these things mattered more than the other.
This is the fundamental problem with compliance-focused patch management: organizations optimize for passing audits rather than actually being secure. They have policies, procedures, and documentation that auditors approve while their systems remain vulnerable to known exploits. Compliance becomes a checkbox exercise divorced from actual risk reduction.
The gap between documented compliance and actual security is where most organizations live—and where breaches happen.
What Compliance Actually Requires (And What It Doesn’t)
Different regulations have different patching requirements, but most are vague enough to cause confusion:
PCI-DSS Requirement 6.2: Install critical security patches within one month of release. This is specific—you have 30 days maximum for critical patches affecting systems that process, store, or transmit cardholder data. Fail this, and you risk losing your ability to process credit cards.
HIPAA Security Rule 164.308: Implement procedures for reviewing security updates and patches. Document decisions about whether and when to install them. “Reasonable and appropriate” timeline based on risk. This is intentionally vague—what’s reasonable for a hospital might not be for a small clinic.
GDPR Article 32: State-of-the-art technical measures to ensure security. Courts have interpreted unpatched known vulnerabilities as failure to maintain appropriate security. British Airways paid £20 million partially because systems were inadequately patched during their breach.
SOC 2: Controls to identify and address vulnerabilities in a timely manner. “Timely” is defined by your documented policies, which auditors then verify you’re following.
The pattern: Regulations require documented processes and evidence of following them. They don’t specify exactly how fast you must patch (except PCI-DSS). This means organizations define “timely” themselves, then auditors verify compliance with self-imposed standards.
The problem: Organizations set achievable targets (“we’ll patch within 90 days”) that satisfy auditors but leave them vulnerable. They’re compliant with their own inadequate policies.
The Compliance Theater Problem
Most audits check three things:
- Do you have documented policies?
- Do you have evidence of following those policies?
- Do you have controls to prevent/detect violations?
Notice what’s missing? Auditors rarely check whether your policies actually reduce risk or whether your evidence reflects reality.
Common compliance theater
Your patch management policy says critical patches deploy within 30 days. Your reports show 95% compliance. Auditors approve. Reality: the 5% gap includes all your internet-facing servers because they’re “too critical to patch without extensive testing.” The numbers look good, but your most exposed systems remain vulnerable.
You have quarterly access recertification showing managers approved all access. Auditors check the box. Reality: managers approved 50 pages of technical permissions they didn’t understand because denying access might break something.
Your vulnerability scans run monthly and generate reports. Auditors verify the reports exist. Reality: nobody prioritizes remediation based on those scans because they find too many vulnerabilities to address.
This is compliance without security—documentation without substance.
After seeing this pattern repeatedly, I’m convinced that “compliance-focused” security programs are actually security theater focused on audit performance. They measure whether you’re doing security activities, not whether those activities reduce risk.
Evidence Auditors Actually Demand
Understanding what auditors check helps you maintain both compliance and security:
Patch deployment logs: Who deployed which patches to which systems when. Timestamped evidence that patches were actually installed, not just attempted. Solutions like Action1 maintain this automatically—when auditors ask “prove you patched these systems in January,” you provide detailed deployment logs with timestamps and success confirmations, not manually maintained spreadsheets that may or may not reflect reality.
Vulnerability scan results: Regular scanning showing identified vulnerabilities. More importantly, evidence you acted on scan results—not just that scans occurred. Auditors look for patterns where the same vulnerabilities appear repeatedly across multiple scans, suggesting you’re scanning but not remediating.
Exception documentation: Why certain systems aren’t patched on schedule. Business justification, risk acceptance, compensating controls. Every exception needs a documented reason and approval. If 30% of your systems have exceptions, auditors start questioning whether your process works.
Policy documentation and revisions: Your patch management policy, and evidence that it’s reviewed and updated regularly. Policies that haven’t changed in five years suggest you’re not adapting to evolving threats or organizational changes.
Incident response records: When patches fail or cause issues, documented response and resolution. Auditors want to see you handle patch failures systematically, not just hope everything works.
Access controls: Who can deploy patches, who approves exceptions, separation of duties. Evidence that not just anyone can push code to production systems.
The organizations that struggle with audits aren’t those with inadequate security—they’re those with inadequate documentation of adequate security. You can have excellent patch management but fail audits because you can’t prove it. Conversely, you can have terrible patch management but pass audits because your documentation looks good.
The Regulatory Consequence Reality
Compliance violations cost real money, though less than breaches:
Equifax (2017): Multiple compliance violations identified after the breach. A $575 million settlement with regulators, part of roughly $1.4 billion in total breach costs. The compliance violations weren’t the primary cost—the breach was—but they eliminated any legal defense.
British Airways (2019): £20 million GDPR fine (reduced from £183 million). Inadequate security measures including unpatched systems contributed to the penalty. The fine wasn’t just for the breach but for failure to maintain appropriate security.
Marriott (2020): £18.4 million GDPR fine for breach affecting 339 million guests. Inadequate security measures including poor patch management. Again, the issue wasn’t just the breach but demonstrable failure to meet security obligations.
Morgan Stanley (2020): $60 million fine to OCC for inadequate risk management including poor decommissioning and patching of systems. This wasn’t even a breach—just regulators finding inadequate controls during routine examination.
The pattern: Fines for compliance violations are typically millions, not billions. The real cost is the breach itself. But compliance violations eliminate your legal defenses and increase penalties. If you can show you were reasonably compliant when breached, penalties are lower. If audits reveal you weren’t even trying, penalties are severe.
Risk-Based Compliance That Actually Works
The organizations that achieve both compliance and security don’t treat them as separate goals. They build patch management that satisfies regulatory requirements while actually reducing risk.
Start with regulatory requirements as minimum baselines
PCI-DSS says 30 days for critical patches? Set your target at 7 days. When auditors check compliance, you’re not just meeting requirements—you’re exceeding them. More importantly, you’re patching fast enough to stay ahead of exploit development.
HIPAA requires “reasonable and appropriate” patching? Define that specifically: critical vulnerabilities within 7 days, high-severity within 30 days, everything else within 90 days. Document why these timeframes are appropriate for your risk profile. Now you have specific, measurable targets that both satisfy compliance and provide actual security.
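Tiers like these can be expressed as a small policy table that tooling can enforce automatically. This is a minimal sketch: the severities and day counts simply mirror the example SLAs above, and the function names are illustrative, not any product's API.

```python
from datetime import date, timedelta

# Example SLA tiers mirroring the text: critical within 7 days,
# high within 30, everything else within 90.
PATCH_SLA_DAYS = {"critical": 7, "high": 30, "default": 90}

def patch_deadline(released: date, severity: str) -> date:
    """Date by which a patch of this severity must be deployed."""
    days = PATCH_SLA_DAYS.get(severity, PATCH_SLA_DAYS["default"])
    return released + timedelta(days=days)

def is_overdue(released: date, severity: str, today: date) -> bool:
    """True when the deadline has passed without deployment."""
    return today > patch_deadline(released, severity)
```

Encoding the policy this way gives you exactly the specific, measurable targets auditors can verify against deployment data.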
Automate evidence collection
Manual compliance reporting is both time-consuming and unreliable. Platforms like Action1 provide automated audit-ready reporting showing which systems received which patches when—the paper trail auditors demand. More importantly, this reporting is accurate because it’s generated from actual deployment data, not manually updated spreadsheets someone forgot to maintain.
Implement risk-based prioritization that satisfies both security and compliance
Regulators care most about timely patching of critical systems. Security teams care about patching based on exploitability. These align well: internet-facing systems processing sensitive data need both rapid patching for security and documented rapid patching for compliance.
Action1’s vulnerability management framework aligns with compliance requirements across PCI-DSS, HIPAA, and SOC 2 by providing risk-based prioritization and automated deployment documentation. You patch based on actual risk while automatically generating the compliance evidence auditors need.
Document exceptions properly
Every unpatched system is a potential audit finding. If you have valid reasons for delayed patching (incompatibility, critical system requiring extended testing, compensating controls), document them. Auditors accept documented, approved exceptions with risk acceptance. They don’t accept “we didn’t get around to it yet.”
Common Audit Failures (And How to Avoid Them)
After managing programs through multiple audits, these failures appear repeatedly:
Gap between policy and practice: Policy says critical patches within 30 days. Practice averages 60 days. Auditors compare documented policy to actual metrics and find non-compliance. Fix: either improve your practice or revise your policy to reflect reality (though the second option may violate regulatory requirements).
Missing evidence: You patched systems but can’t prove it because logs weren’t retained. You approved exceptions verbally but didn’t document them. Auditors need evidence, not assertions. Fix: automated evidence collection that doesn’t rely on humans remembering to document things.
Inconsistent processes: You have a formal process for patching servers but not for endpoints, or for Windows but not Linux. Auditors look for systematic approaches applied consistently. Fix: unified patch management across all systems, not separate processes for each platform.
Stale documentation: Your patch management policy references tools you stopped using two years ago, or describes processes that no longer reflect how you actually operate. Fix: annual policy reviews as part of your compliance program.
No metrics or KPIs: You can’t demonstrate patch management effectiveness because you don’t measure it. Auditors want evidence that your program works, not just that it exists. Fix: track mean time to patch, patch success rates, and vulnerability remediation metrics.
Ineffective vulnerability management: You scan monthly but don’t prioritize or remediate findings. Scans occur but vulnerabilities remain. Auditors question what the scanning accomplishes if you’re not acting on results. Fix: documented vulnerability management process linking scanning to remediation.
Your Patch Compliance Readiness Checklist
Assess whether you’re actually ready for your next audit:
Documentation Requirements:
☐ Current patch management policy (reviewed within past year)
☐ Documented patching schedules and SLAs by system criticality
☐ Exception approval process with documented risk acceptance
☐ Incident response procedures for patch failures
☐ Access controls defining who can deploy patches and approve exceptions
Evidence Maintenance:
☐ Automated patch deployment logs with timestamps
☐ Vulnerability scan results from past 12 months
☐ Remediation tracking showing scan findings were addressed
☐ Exception documentation with business justification and approvals
☐ Patch success/failure metrics with root cause analysis for failures
Process Controls:
☐ Separation of duties (different people scan, approve, deploy)
☐ Regular policy reviews (at least annually)
☐ Change management integration for patches
☐ Rollback procedures and testing for critical systems
☐ Communication processes for scheduled patching
Compliance Metrics:
☐ Mean time to patch for critical vulnerabilities (<7 days target)
☐ Patch compliance percentage by system criticality (>95% target)
☐ Exception rate (<10% of systems target)
☐ Vulnerability remediation rates (>90% within SLA target)
☐ Patch failure/rollback rate (<5% target)
Audit Preparedness:
☐ Can produce patch deployment evidence for any system within 15 minutes
☐ Can demonstrate compliance with documented policies using actual data
☐ Have evidence of following exception approval processes
☐ Can show trending metrics demonstrating program effectiveness
☐ Have documentation explaining any gap between target and actual performance
Regulatory Alignment:
☐ PCI-DSS: Critical patches to the cardholder data environment within 30 days
☐ HIPAA: Documented process for evaluating and installing security updates
☐ GDPR: State-of-the-art security measures including timely patching
☐ SOC 2: Controls for timely vulnerability identification and remediation
☐ Industry-specific requirements addressed in documented policies
Scoring (30 items):
- 27-30 checked: Audit-ready
- 20-26 checked: Significant gaps requiring attention before audit
- 12-19 checked: Will likely have findings; needs immediate remediation
- 0-11 checked: Not ready for audit, serious compliance risk
Measuring Compliance Effectiveness
Compliance programs need metrics demonstrating they’re working:
Audit finding trends: Are findings decreasing over time? If you have the same findings every audit, your remediation process isn’t working.
Time to remediate findings: How quickly do you address audit findings? Organizations that take 6+ months to remediate findings get escalated in subsequent audits.
Patch compliance percentage by regulation: Track compliance specifically for systems subject to different regulations. Your PCI environment might be 98% compliant while your general infrastructure is 80% compliant—regulators care about the first number.
Exception aging: How long do exceptions persist? “Temporary” exceptions lasting 12+ months suggest your exception process is actually how you avoid patching.
Evidence production time: How long does it take to produce compliance evidence? If gathering evidence for an audit takes weeks, your evidence collection process needs improvement.
Policy vs. practice gap: What’s the delta between your documented SLAs and actual performance? If your policy says 30 days but reality averages 60, either improve performance or admit your policy isn’t realistic.
When Compliance and Security Conflict
Sometimes regulatory requirements and security best practices diverge:
Testing requirements vs. emergency patching: Compliance frameworks often require testing before production deployment. Security sometimes demands deploying critical patches immediately. Resolution: have pre-approved emergency processes that satisfy both—minimal testing for actively exploited vulnerabilities with enhanced monitoring and fast rollback capability.
Documentation burden vs. operational speed: Maintaining detailed evidence for every patch consumes resources that could go toward faster patching. Resolution: automate documentation so it doesn’t slow operations.
Conservative change management vs. rapid patching: Compliance teams want careful change control. Security teams want patches deployed before exploit code appears. Resolution: risk-based change management with expedited approvals for security patches.
Vendor certification requirements: Some compliance frameworks require using vendor-certified configurations. Security patches sometimes break certifications. Resolution: documented risk acceptance and compensating controls during the period between patching and recertification.
In all these cases, document your decisions and risk tradeoffs. Auditors understand that perfect compliance isn’t always possible—they want evidence you made informed risk decisions, not that you ignored risks entirely.
The Future of Patch Compliance
Regulatory requirements are getting more specific and demanding:
Shorter timelines: PCI-DSS already requires 30 days. Future regulations may require 14 days or 7 days for critical patches as exploit development accelerates.
Continuous compliance: Annual audits are being supplemented with continuous monitoring. Organizations may need to demonstrate real-time compliance, not just point-in-time compliance during audit windows.
Supply chain requirements: Regulations increasingly hold organizations responsible for third-party risk, including patching by vendors and service providers. You’ll need evidence that your suppliers maintain adequate patch management.
Automated evidence: Expect regulatory requirements for automated evidence collection and reporting. Manual compliance tracking won’t satisfy future requirements as it’s too easy to manipulate.
AI-driven audits: Regulators are experimenting with automated audit tools that continuously analyze your environment rather than periodic human audits. These tools check actual patch status, not just documentation.
The Bottom Line on Compliance
Compliance is necessary but insufficient. Passing audits proves you have documented processes and maintain evidence. It doesn’t prove you’re secure.
The organizations that succeed treat compliance as the minimum baseline, not the ultimate goal. They build patch management that satisfies regulatory requirements while actually reducing risk. They automate evidence collection so compliance doesn’t slow security operations. They measure both compliance metrics and security outcomes.
Equifax was compliant until the breach proved they weren’t really secure. British Airways passed audits before paying £20 million in fines. Marriott had compliance programs before facing £18.4 million in penalties. Compliance protects you only if it reflects actual security.
If your patch management looks good on paper but your systems remain vulnerable, you’re building liability, not protection. When the next major vulnerability disclosure hits and you’re asked “were you compliant with industry standards?” the answer needs to be yes—and more importantly, you need to actually be secure, not just documented as secure.
The audit matters. The breach matters more. Build programs that address both.
About Action1
Action1 is an autonomous endpoint management platform trusted by many Fortune 500 companies. Cloud-native, infinitely scalable, highly secure, and configurable in 5 minutes—it just works and is always free for the first 200 endpoints, with no functional limits. By pioneering autonomous OS and third-party patching with peer-to-peer patch distribution and real-time vulnerability assessment without needing a VPN, it eliminates routine labor, preempts ransomware and security risks, and protects the digital employee experience. In 2025, Action1 was recognized by Inc. 5000 as the fastest-growing private software company in America. The company is founder-led by Alex Vovk and Mike Walters, American entrepreneurs who previously founded Netwrix, a multi-billion-dollar cybersecurity company.