
Patch Management Audits: What Actually Matters (And What Doesn’t)

December 1, 2025

By Gene Moody


Equifax knew about the Apache Struts vulnerability for 143 days before their breach. They had patch management policies, quarterly audits, and compliance reports showing 98% coverage. None of it mattered. 147 million records were exposed, and the company paid $1.4 billion in settlements.

That disconnect between what audits measure and what actually protects you? That’s the problem we need to fix.

Most patch management audits check whether you have documentation. They verify you have policies, procedures, and regular scanning schedules. They confirm you’re generating reports. What they don’t check is whether any of this actually reduces your risk.

The One Question Your Audit Should Answer

Can you deploy a critical security patch to 95% of production systems within 72 hours?

If you can’t answer “yes” with confidence, everything else is secondary.

When Microsoft released the MS17-010 patch for EternalBlue in March 2017, security teams had exactly two months before WannaCry hit. Organizations that could move fast—deploy, test, and roll out patches quickly—survived relatively unscathed. Those that couldn’t? WannaCry infected 200,000+ computers across 150 countries in a single day. The global economic impact exceeded $4 billion.

NotPetya hit just weeks later using the same vulnerability. Organizations that still hadn’t patched after WannaCry became easy targets. Maersk lost $300 million. FedEx’s TNT Express division reported $400 million in losses. These weren’t sophisticated attacks—they exploited a known vulnerability with an available patch.

What Traditional Audits Get Wrong

Traditional patch management audits follow a predictable pattern: verify policy exists, check scanning frequency, review documentation, confirm compliance reporting. The problem? You can check every box and still be vulnerable.

I’ve seen audit reports showing 95% patch compliance that were technically accurate but operationally meaningless. The 5% gap included all the internet-facing web servers and the domain controllers—you know, the systems that actually matter. Raw compliance percentages without context tell you almost nothing about risk.

The three most common audit failures:

  1. Measuring activity instead of outcomes: Tracking how many patches you deployed doesn’t tell you whether you’re less vulnerable. What matters is whether you’re closing critical exposures faster than attackers can exploit them.
  2. Treating all systems equally: Patching 95% of endpoints means nothing if the unpatched 5% includes your VPN servers, firewalls, and domain controllers. Yet most audits treat every system the same.
  3. Ignoring speed: Compliance percentages are snapshot metrics. They don’t measure your mean time to patch (MTTP) for critical vulnerabilities. An organization at 90% compliance that patches critical flaws in 48 hours is safer than one at 98% compliance that takes three weeks.

Metrics That Actually Matter

Forget the vanity metrics. Focus on these instead:

Mean Time to Patch (MTTP) for Critical Vulnerabilities: This should be under 7 days, ideally under 72 hours. When Log4Shell (CVSS 10.0) hit in December 2021, organizations with MTTP under a week contained the damage. Those taking 2-3 weeks faced widespread exploitation.

Percentage of Critical Assets Patched Within SLA: Not overall patch compliance—specifically your crown jewels. Internet-facing systems, domain controllers, systems processing sensitive data. These should be at 98%+ compliance within your SLA window.

Patch Failure Rate: How often do patches fail to deploy or cause issues requiring rollback? Target should be under 5%. Higher rates indicate testing problems or deployment issues that slow your entire program.

Vulnerability Window: Time between CVE publication and patch deployment. Every day you’re vulnerable is a day attackers have to develop and deploy exploits. According to Verizon’s Data Breach Investigations Report, 60% of breaches involve known, unpatched vulnerabilities—not zero-days.
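If you want to sanity-check these numbers yourself instead of trusting a dashboard, the math is simple enough to script. Here's a minimal Python sketch that computes critical-severity MTTP and the open vulnerability window from a list of patch records; the record fields (published, deployed, severity) are illustrative assumptions, not any particular tool's export format.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class PatchRecord:
    cve_id: str
    severity: str               # e.g. "critical", "high"
    published: datetime         # CVE publication date
    deployed: datetime | None   # when rollout finished; None = still exposed

def mttp_days(records: list[PatchRecord], severity: str = "critical") -> float | None:
    """Mean time to patch, in days, for deployed patches of the given severity."""
    durations = [
        (r.deployed - r.published).total_seconds() / 86400
        for r in records
        if r.severity == severity and r.deployed is not None
    ]
    return round(mean(durations), 1) if durations else None

def open_vulnerability_windows(records: list[PatchRecord], as_of: datetime) -> dict[str, int]:
    """Days each still-unpatched CVE has been exposed, keyed by CVE ID."""
    return {r.cve_id: (as_of - r.published).days for r in records if r.deployed is None}

if __name__ == "__main__":
    now = datetime(2025, 12, 1)
    records = [
        PatchRecord("CVE-2025-0001", "critical", datetime(2025, 11, 1), datetime(2025, 11, 3)),
        PatchRecord("CVE-2025-0002", "critical", datetime(2025, 11, 10), datetime(2025, 11, 20)),
        PatchRecord("CVE-2025-0003", "critical", datetime(2025, 11, 25), None),
    ]
    print("Critical MTTP (days):", mttp_days(records))                  # 6.0
    print("Open windows:", open_vulnerability_windows(records, now))    # {'CVE-2025-0003': 6}
```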

The Real Audit Checklist

This isn’t a theoretical list of best practices. These are specific, measurable items that separate functional patch management programs from security theater.

Asset Management & Visibility

  1. Verify asset inventory completeness: Can you account for 100% of devices on your network? Last time I checked, most organizations discover 10-15% more assets when they actually look.
  2. Confirm inventory updates in real-time: Asset lists updated monthly are useless. Verify your inventory reflects changes within 24 hours.
  3. Validate agent deployment rate: Check that 95%+ of managed endpoints have functioning agents reporting status. Dead or misconfigured agents create blind spots.
  4. Review discovery of unmanaged devices: Confirm you’re actively scanning for shadow IT, contractor devices, and IoT equipment. These are often the least patched. (A quick cross-check covering items 1, 3, and 4 is sketched after this list.)
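One fast way to quantify those gaps is to diff your network discovery results against the set of endpoints actively reporting an agent. The sketch below assumes two plain sets of hostnames, however you export them; the data sources and field names are placeholders, not a specific product's API.

```python
def coverage_gaps(discovered: set[str], reporting_agents: set[str]) -> dict[str, object]:
    """Compare network-discovered hosts against hosts with a functioning agent."""
    unmanaged = discovered - reporting_agents   # on the network, no agent: blind spots
    ghosts = reporting_agents - discovered      # agent reports, but discovery missed it: stale inventory
    rate = len(reporting_agents & discovered) / len(discovered) if discovered else 0.0
    return {
        "agent_coverage_pct": round(rate * 100, 1),
        "unmanaged_hosts": sorted(unmanaged),
        "not_in_discovery": sorted(ghosts),
    }

# Example: discovery found 5 hosts, only 3 of them report an agent.
discovered = {"web01", "web02", "dc01", "printer-7", "nas-lab"}
agents = {"web01", "web02", "dc01", "laptop-042"}
print(coverage_gaps(discovered, agents))
# {'agent_coverage_pct': 60.0, 'unmanaged_hosts': ['nas-lab', 'printer-7'], 'not_in_discovery': ['laptop-042']}
```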

Vulnerability Assessment

  1. Check scanning frequency for critical assets: Internet-facing systems should be scanned at least weekly, preferably daily. Monthly scans are too slow.
  2. Verify vulnerability data freshness: Are you using vulnerability feeds updated within 24 hours? Stale data means you’re reacting to threats attackers already know about.
  3. Validate scan coverage: Review the last three vulnerability scans. Did they cover all assets? Missed scans indicate configuration problems. (A simple staleness check is sketched after this list.)
  4. Confirm authenticated scanning: Unauthenticated scans miss 30-40% of vulnerabilities. Verify you’re using credentialed scans with appropriate permissions.
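Items 2 and 3 are easy to spot-check with the scan export itself. A minimal sketch, assuming each asset record carries a last-scan timestamp (the field names are placeholders):

```python
from datetime import datetime, timedelta

def stale_assets(assets: list[dict], as_of: datetime, max_age_days: int = 7) -> list[str]:
    """Hostnames whose most recent vulnerability scan is older than the allowed window."""
    cutoff = as_of - timedelta(days=max_age_days)
    return sorted(
        a["hostname"]
        for a in assets
        if a["last_scanned"] is None or a["last_scanned"] < cutoff
    )

assets = [
    {"hostname": "web01", "last_scanned": datetime(2025, 11, 30)},
    {"hostname": "dc01", "last_scanned": datetime(2025, 11, 10)},  # three weeks stale
    {"hostname": "nas-lab", "last_scanned": None},                 # never scanned
]
print(stale_assets(assets, as_of=datetime(2025, 12, 1)))
# ['dc01', 'nas-lab']
```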

Prioritization & Risk Assessment

  1. Review prioritization methodology: Verify you’re not just sorting by CVSS score. Check whether you’re considering exploitability, asset criticality, and threat intelligence.
  2. Validate critical asset identification: Confirm you’ve actually documented which systems are business-critical and why. Most organizations haven’t.
  3. Check false positive handling: Review how you’re filtering out false positives without missing real vulnerabilities. Manual verification should be under 10% of findings.

Patch Deployment & Testing

  1. Measure actual MTTP for last 10 critical patches: Don’t ask what your target is—measure what you actually achieved. Gaps between policy and reality reveal operational problems.
  2. Review emergency patching capability: Can you bypass normal change control for actively exploited vulnerabilities? Test whether you can actually execute emergency deployments in under 24 hours.
  3. Validate patch testing procedures: Check whether testing is proportional to risk. Critical security patches shouldn’t spend two weeks in testing while attackers are actively exploiting the vulnerability.
  4. Confirm rollback capability: Verify you’ve successfully rolled back problematic patches in the last quarter. If you haven’t tested this, it won’t work when you need it.

Compliance & Reporting

  1. Calculate patch compliance by asset criticality: Overall compliance percentages hide risk. Break out compliance for internet-facing systems, domain controllers, and data processing systems separately.
  2. Review patching SLA adherence: Check whether you’re meeting your own deadlines. Consistent SLA misses indicate resourcing problems or unrealistic targets.
  3. Validate exception management: Review patch exceptions granted in the last quarter. Are these temporary with remediation plans, or permanent exemptions that never get addressed?
  4. Check reporting accuracy: Spot-check reported patch status against actual system state. Reporting discrepancies indicate data quality problems.

Automation & Tooling

  1. Assess automation coverage: What percentage of patches deploy automatically versus requiring manual intervention? Target should be 80%+ for standard patches.
  2. Review tool effectiveness: Check whether your patch management platform is actually reducing workload or just generating reports. I’ve seen organizations replace manual spreadsheets with automated spreadsheets—same problem, fancier tools.

Where Automation Actually Helps

Automation isn’t magic, but it’s necessary at scale. The question isn’t whether to automate—it’s what to automate and how.

The difference comes from eliminating manual bottlenecks: automated discovery, risk-based prioritization, scheduled deployments, and automated verification.

The key capability to look for: automated prioritization based on exploitability, not just severity scores. Action1’s approach combines CVSS scores with active exploitation data, which means you’re patching vulnerabilities attackers are actually using rather than just chasing theoretical risks. When you’re dealing with 50+ critical CVEs per month, this filtering matters enormously.
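The logic behind that kind of filtering is straightforward to reason about, even if the data feeds are the hard part. Below is a minimal, vendor-neutral sketch (not Action1's implementation) that ranks findings by combining CVSS, a known-exploited flag such as a CISA KEV match, and asset criticality. The weights are illustrative assumptions you would tune to your own environment.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # base severity score, 0-10
    actively_exploited: bool  # e.g. listed in CISA's Known Exploited Vulnerabilities catalog
    asset_criticality: int    # 1 = lab box ... 5 = internet-facing / domain controller

def risk_score(f: Finding) -> float:
    """Toy prioritization: active exploitation and asset criticality dominate raw CVSS."""
    score = f.cvss                                   # start from severity
    score *= 1 + 0.5 * (f.asset_criticality - 1)     # up to 3x for crown-jewel assets
    if f.actively_exploited:
        score *= 2                                   # known exploitation doubles the urgency
    return round(score, 1)

findings = [
    Finding("CVE-A", cvss=9.8, actively_exploited=False, asset_criticality=1),
    Finding("CVE-B", cvss=7.5, actively_exploited=True, asset_criticality=5),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, risk_score(f))
# CVE-B (7.5, exploited, critical asset) outranks CVE-A (9.8, lab machine): 45.0 vs 9.8
```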

What automation should handle:

  • Continuous asset discovery and inventory updates
  • Automated vulnerability scanning on a schedule
  • Risk-based prioritization using threat intelligence
  • Scheduled patch deployment to non-critical systems
  • Automated verification of successful installation
  • Exception tracking and remediation workflows

What still needs humans:

  • Emergency patching decisions for zero-days
  • Critical system patching that requires coordination
  • Testing patches that could impact production
  • Handling patch failures and rollbacks
  • Making risk-based exceptions with clear timelines

Quick aside—anyone who tells you they can patch production databases with zero downtime is either lying or has a much simpler environment than yours. Some patches require restarts. Plan for it.

The Change Control Problem

The most common reason patch management programs fail isn’t technology—it’s change management boards that take three weeks to approve emergency patches.

I understand why change control exists. I’ve seen enough botched deployments to appreciate careful planning. But when you have an actively exploited vulnerability and the patch has been available for a week, your change control board becomes the problem, not the solution.

Options for fixing this:

Create a separate fast-track approval process for security patches under active exploitation. Define criteria (CVSS >8.0 + confirmed exploitation + internet-facing system = emergency process). Get buy-in from business stakeholders before you need it.
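Whatever criteria you agree on, write them down as executable logic so the on-call engineer isn't interpreting policy at 2 a.m. A trivial sketch using the example thresholds above (my assumptions, not a standard):

```python
def qualifies_for_emergency_process(cvss: float, confirmed_exploitation: bool,
                                    internet_facing: bool) -> bool:
    """Example fast-track rule: CVSS > 8.0 AND confirmed exploitation AND internet-facing."""
    return cvss > 8.0 and confirmed_exploitation and internet_facing

# A CVSS 9.1 flaw, exploited in the wild, on a VPN gateway: skip normal change control.
print(qualifies_for_emergency_process(9.1, True, True))   # True
# Same flaw on an isolated internal system: the normal process applies.
print(qualifies_for_emergency_process(9.1, True, False))  # False
```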

Implement pre-approved maintenance windows for security patching. Instead of requesting approval for each patch, get blanket approval for regular security updates during defined windows.

Use automated patch deployment for non-critical systems outside of change control. Your marketing team’s laptops don’t need a change control ticket. Save the process for systems that actually require coordination.

Third-Party Software: The Blind Spot

Most organizations have decent coverage for operating system patches. Microsoft and Apple make this relatively easy. The real gap is third-party applications—Java, Adobe products, web browsers, productivity tools.

According to research, third-party software accounts for 75%+ of vulnerabilities but often gets 25% of patching attention. Your audit should specifically verify:

  • Coverage for all third-party applications, not just OS patches (one way to cross-check this is sketched after this list)
  • Update mechanisms for software without centralized management
  • Process for handling end-of-life software that no longer receives patches
  • Shadow IT applications that bypass standard deployment
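One concrete audit test: export the installed-software inventory from a sample of endpoints and diff it against the list of applications your patching tool actually manages. A minimal sketch, assuming simple CSV exports with a "name" column (the file names and columns are placeholders):

```python
import csv

def unmanaged_applications(installed_csv: str, managed_csv: str) -> set[str]:
    """Applications present on endpoints but absent from the patching tool's catalog."""
    def names(path: str) -> set[str]:
        with open(path, newline="", encoding="utf-8") as fh:
            return {row["name"].strip().lower() for row in csv.DictReader(fh)}
    return names(installed_csv) - names(managed_csv)

if __name__ == "__main__":
    gaps = unmanaged_applications("installed_software.csv", "patch_tool_catalog.csv")
    for app in sorted(gaps):
        print("No patch coverage:", app)
```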

When auditing remote workforce patching, tools like Action1 handle the hybrid challenge well—combining agent-based enforcement for managed devices with agentless scanning for BYOD and contractor equipment. Most traditional patch management solutions assume everything is on your corporate network, which hasn’t been true since 2020.

Common Audit Findings and What They Mean

After reviewing dozens of patch management audits, certain findings appear repeatedly. More importantly, some findings matter more than others.

High-risk findings that demand immediate action:

  • MTTP >30 days for critical vulnerabilities (indicates systemic process failure)
  • <90% compliance for internet-facing systems (you’re giving attackers easy targets)
  • No emergency patching procedure (you can’t respond to zero-days)
  • Patch testing taking longer than vulnerability window (testing becomes the risk)

Medium-risk findings requiring attention:

  • Missing patches >90 days old (indicates tracking/prioritization problems)
  • High patch failure rates >10% (deployment or compatibility issues)
  • Incomplete asset inventory (can’t patch what you don’t know exists)
  • Manual-heavy processes that don’t scale (works until it doesn’t)

Low-risk findings that are often noise:

  • Missing non-security updates (nice to have, not urgent)
  • Documentation gaps (fix it, but not a security risk)
  • Inconsistent naming conventions (annoying but not exploitable)

The Cloud and Container Problem

Traditional patch management focuses on operating systems and applications. But most organizations now run significant workloads in cloud environments and containers, which follow different patching models.

Cloud infrastructure: Whose responsibility is patching? In IaaS (AWS EC2, Azure VMs), you’re responsible. In PaaS (Azure App Service, AWS Lambda), the provider handles infrastructure. Your audit must clarify which patches are your responsibility and which aren’t. I’ve seen organizations assume AWS was patching their EC2 instances. AWS wasn’t.

Containers: Container images can contain vulnerable libraries and dependencies. When did you last scan your base images? How do you update running containers? Most organizations discover they’re running containers with known vulnerabilities from images built months or years ago.

Serverless: You’re not patching the runtime, but you are responsible for application dependencies. When Log4Shell hit, serverless applications using Java with Log4j were just as vulnerable as traditional deployments.

Your audit checklist should include cloud-specific items verifying responsibility matrices, container image scanning, and processes for updating deployed resources.
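For the container item, even a scheduled scan of your base images beats nothing. Here's a minimal sketch that shells out to an open-source scanner (Trivy here, purely as an assumption about your tooling) and fails the check when high or critical findings appear:

```python
import subprocess
import sys

def scan_image(image: str) -> bool:
    """Return True if the image has no HIGH/CRITICAL findings (per the scanner's exit code)."""
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    base_images = ["registry.example.com/app-base:latest"]  # placeholder image names
    if not all(scan_image(img) for img in base_images):
        sys.exit("Base image scan found HIGH/CRITICAL vulnerabilities")
```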

Measuring Program Effectiveness Over Time

The real test of patch management isn’t a single audit—it’s improvement over time. Track these trends quarterly:

MTTP trend: Is your mean time to patch decreasing? If it’s holding steady or increasing, you have scaling problems.

Vulnerability exposure: Total number of critical unpatched vulnerabilities across the environment. This should trend downward as you improve processes.

Compliance by asset class: Break out internet-facing, internal servers, and endpoints. Target 98%/95%/90% respectively. Watch for classes that aren’t improving.
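Tracking that breakdown doesn't require a dedicated platform to get started. A minimal sketch, assuming each asset record carries a class label and a fully-patched flag (field names and targets are illustrative):

```python
from collections import defaultdict

TARGETS = {"internet_facing": 98.0, "internal_server": 95.0, "endpoint": 90.0}

def compliance_by_class(assets: list[dict]) -> dict[str, float]:
    """Percent of fully patched assets per class; asset dicts use illustrative keys."""
    totals, patched = defaultdict(int), defaultdict(int)
    for a in assets:
        totals[a["asset_class"]] += 1
        patched[a["asset_class"]] += a["fully_patched"]
    return {cls: round(100 * patched[cls] / totals[cls], 1) for cls in totals}

assets = [
    {"asset_class": "internet_facing", "fully_patched": True},
    {"asset_class": "internet_facing", "fully_patched": False},
    {"asset_class": "endpoint", "fully_patched": True},
]
for cls, pct in compliance_by_class(assets).items():
    status = "OK" if pct >= TARGETS.get(cls, 95.0) else "BELOW TARGET"
    print(f"{cls}: {pct}% ({status})")
# internet_facing: 50.0% (BELOW TARGET); endpoint: 100.0% (OK)
```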

Patch failure rate: Should decrease as you refine testing and deployment. Increasing failure rates indicate environmental changes or rushed processes.

Action1’s vulnerability management platform provides these trending metrics automatically, which is useful for demonstrating program improvement to leadership. Manual tracking is possible but time-consuming enough that most organizations don’t bother—which means they don’t know if they’re improving.

What “Good” Actually Looks Like

After auditing patch management programs across different organizations, the successful ones share common characteristics:

They can deploy critical patches to 95% of systems in under a week. They maintain that pace consistently, not just during fire drills. Their patch failure rate stays under 5%. They know what assets they have, what’s running on them, and what’s missing patches—in real-time, not based on last month’s scan.

They’ve automated routine patching but maintain human oversight for critical systems. They have clear escalation paths for emergency patching that bypass normal change control when necessary. They measure risk reduction, not just patching activity.

Most importantly, they treat patch management as an ongoing operational capability, not a compliance checkbox. The difference is obvious in how they respond to the next zero-day disclosure.

When You Find Problems

Your audit will find gaps. That’s the point. What matters is how you prioritize remediation.

Start with the issues that create immediate exploitable risk: unpatched internet-facing systems, missing critical patches >30 days old, systems with no visibility or management. These need to be fixed within 30 days.

Then address systemic problems that prevent scaling: manual processes, lack of automation, incomplete asset inventory, slow approval workflows. These take 60-90 days to fix but have the biggest long-term impact.

Finally, handle the documentation and compliance gaps. Important for audits but not urgent for security.

The organizations that fail are the ones that treat every finding equally and get overwhelmed trying to fix everything simultaneously. Prioritize ruthlessly based on risk, not based on audit finding severity levels.

The Real ROI of Better Patch Management

The average cost of a data breach is $4.45 million, according to IBM’s 2023 Cost of a Data Breach Report. Verizon’s DBIR reports that 60% of breaches involve known, unpatched vulnerabilities. Basic math: improve your patch management enough to prevent just one breach, and you’ve likely paid for your entire security program several times over.

But there’s opportunity cost too. Time your team spends chasing patches manually is time they’re not spending on threat hunting, architecture improvements, or strategic initiatives. When teams cut their patching time from weeks to days through automation, they don’t just reduce risk—they free up resources for higher-value work.

The question isn’t whether you can afford to improve patch management. It’s whether you can afford not to. Especially when the next Equifax-scale incident happens and everyone asks why you didn’t patch a known vulnerability.

Start with the audit checklist above. Measure where you are today. Then focus on the gaps that create the most risk. You don’t need perfection—you need to be good enough that attackers choose easier targets. Most organizations are nowhere close to that threshold, which means there’s plenty of room for improvement.

About Action1

Action1 is an autonomous endpoint management platform trusted by many Fortune 500 companies. Cloud-native, infinitely scalable, highly secure, and configurable in 5 minutes—it just works and is always free for the first 200 endpoints, with no functional limits. By pioneering autonomous OS and third-party patching with peer-to-peer patch distribution and real-time vulnerability assessment without needing a VPN, it eliminates routine labor, preempts ransomware and security risks, and protects the digital employee experience. In 2025, Action1 was recognized by Inc. 5000 as the fastest-growing private software company in America. The company is founder-led by Alex Vovk and Mike Walters, American entrepreneurs who previously founded Netwrix, a multi-billion-dollar cybersecurity company.
