Most organisations treat vulnerability scanning as a periodic event — a quarterly assessment, an annual penetration test, a compliance checkbox before an audit. The problem is that attackers don't operate on your audit schedule. New vulnerabilities are disclosed daily, configurations drift continuously, and the window between your last scan and the next one is exactly the window attackers exploit.
The Vulnerability Window
A vulnerability window is the time between when a weakness is introduced and when it's detected and remediated. Every day in this window is a day the vulnerability can be exploited. Consider the timeline:
- Day 0: A developer deploys a configuration change that removes the HSTS header.
- Day 1-89: The site is vulnerable to SSL stripping attacks. Automated scanners probe it. Your quarterly scan isn't scheduled yet.
- Day 90: Your quarterly scan detects the missing header.
- Day 91-97: The finding is triaged, prioritised, and eventually fixed.
That's a 97-day vulnerability window for a fix that takes 30 seconds to implement. With weekly scanning, the window shrinks to 7-14 days; with daily scanning, to 1-2 days (assuming triage keeps pace with detection). The frequency of detection directly determines the duration of exposure.
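The arithmetic above can be sketched in a few lines. This is an illustrative model, not a ShieldReport feature: the `expected_window_days` helper is hypothetical, and it assumes a weakness is introduced at a uniformly random point between scans, so on average it waits half the scan interval before detection (the worst case is the full interval, which with a 7-day fix gives the 90 + 7 = 97-day figure from the timeline).

```python
# Sketch: average exposure window for a weakness introduced at a random
# time between scans. Assumes uniform introduction time, so on average a
# weakness waits half the scan interval before detection, then sits in
# triage/remediation for remediation_days. All numbers are illustrative.

def expected_window_days(scan_interval_days: int, remediation_days: int = 7) -> float:
    """Average days from introduction to fix, given a fixed scan cadence."""
    average_wait_for_detection = scan_interval_days / 2
    return average_wait_for_detection + remediation_days

for interval in (90, 7, 1):  # quarterly, weekly, daily
    print(f"scan every {interval:>2} days -> ~{expected_window_days(interval):.1f}-day average window")
```

Even this crude model makes the lever obvious: remediation time is roughly constant, so the scan interval dominates the window.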
What Changes Between Scans
The assumption behind infrequent scanning — that nothing changes between scans — is systematically wrong. Between any two scans:
- Deployments happen: New code goes to production, potentially introducing new vulnerabilities or changing security configurations.
- Certificates approach expiration: TLS certificates have fixed expiration dates. A certificate that was valid last quarter might expire next week.
- DNS records change: New subdomains are created, old records become dangling, email authentication records are modified.
- Third-party scripts update: External JavaScript loaded from CDNs or vendors can be modified at any time, introducing new risks without any change to your codebase.
- New CVEs are published: The software running your site may have newly discovered vulnerabilities that didn't exist during your last scan.
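Most of these changes are detectable by diffing consecutive scan snapshots. As a minimal sketch (the `header_drift` function and the hard-coded snapshots are hypothetical; a real scanner would capture headers from live HTTP responses):

```python
# Sketch: detecting configuration drift between two scans by diffing
# security headers. The snapshots are hard-coded for illustration; in a
# real scanner they would come from successive HTTP responses.

def header_drift(previous: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report security headers that were removed, added, or changed."""
    prev_keys, curr_keys = set(previous), set(current)
    return {
        "removed": sorted(prev_keys - curr_keys),
        "added": sorted(curr_keys - prev_keys),
        "changed": sorted(k for k in prev_keys & curr_keys if previous[k] != current[k]),
    }

last_quarter = {
    "Strict-Transport-Security": "max-age=31536000",
    "Content-Security-Policy": "default-src 'self'",
}
today = {
    "Content-Security-Policy": "default-src 'self'",
}

print(header_drift(last_quarter, today))
```

Run daily, a diff like this flags the missing HSTS header from the earlier timeline on day 1 rather than day 90.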
Matching Frequency to Risk
The right scanning frequency depends on your risk profile. Consider these factors:
- Deployment frequency: If you deploy daily, you should scan daily. Every deployment is a potential configuration change.
- Data sensitivity: Sites handling financial data, health records, or personal information need higher scanning frequency than a static marketing site.
- Regulatory requirements: PCI DSS requires quarterly external vulnerability scanning at minimum. GDPR requires "regular testing and evaluation." SOC 2 expects continuous monitoring.
- Change velocity: Dynamic applications with frequent content changes, user-generated content, and third-party integrations have more opportunities for configuration drift.
For most production web applications, weekly scanning is the minimum defensible frequency. Applications handling sensitive data should scan daily. Critical infrastructure warrants continuous monitoring.
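The guidance above can be expressed as a simple policy function. This is a sketch with illustrative thresholds (the `recommended_cadence` name and the five-deploys-per-week cutoff are assumptions, not a standard):

```python
# Sketch: mapping the risk factors above to a scan cadence.
# Thresholds are illustrative, not prescriptive.

def recommended_cadence(deploys_per_week: int,
                        handles_sensitive_data: bool,
                        is_critical_infrastructure: bool) -> str:
    if is_critical_infrastructure:
        return "continuous"
    if handles_sensitive_data or deploys_per_week >= 5:
        return "daily"   # sensitive data or near-daily deployments
    return "weekly"      # minimum defensible frequency for production apps

print(recommended_cadence(deploys_per_week=1,
                          handles_sensitive_data=False,
                          is_critical_infrastructure=False))  # → weekly
```

Encoding the policy as code also makes it reviewable: when your deployment frequency or data sensitivity changes, the cadence decision changes with it.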
The Compliance vs Security Gap
Compliance requirements often specify a minimum scanning frequency that's inadequate for actual security. PCI DSS's quarterly external scan requirement is a compliance floor, not a security best practice. Organisations that scan only as often as compliance requires are optimising for passing audits, not for protecting their users.
The gap is measurable. A quarterly scan catches a vulnerability within 0-90 days of introduction. A weekly scan catches it within 0-7 days. That 83-day difference is the gap between compliance-driven security and threat-driven security.
Automated vs Manual Assessment
Frequency and depth are different dimensions. Automated scanning can run daily or continuously, checking for known misconfigurations, certificate issues, header changes, and DNS record modifications. Manual penetration testing provides deeper analysis but can't run at the same frequency due to cost and resource constraints.
The optimal approach layers both: continuous automated scanning for configuration monitoring and drift detection, complemented by periodic manual testing (annually or after major changes) for logic flaws and complex vulnerability chains that automated tools miss.
The Cost of Over-Scanning vs Under-Scanning
Over-scanning has negligible cost — modern scanning tools are designed for frequent automated execution. Under-scanning has potentially catastrophic cost — a breach during the vulnerability window can result in data loss, regulatory fines, and reputation damage. The asymmetry is clear: scanning too often wastes minutes of compute time; scanning too infrequently risks everything.
ShieldReport supports automated recurring scans on your preferred schedule — weekly, daily, or on-demand — so your vulnerability window stays as narrow as possible and configuration drift is detected before it becomes an exploitable weakness.