Every year, researchers and companies discover more software flaws. Each one is assigned a CVE (Common Vulnerabilities and Exposures) ID. The chart shows the cumulative total of all CVEs published through each year, so the line only ever goes up.
As the volume exploded, MITRE authorized companies called CNAs (CVE Numbering Authorities) to assign their own CVE IDs. Today, companies assign the majority of new CVEs.
MITRE is the organization that originally managed all CVE assignment before delegating to CNAs, which now include Microsoft, Google, Red Hat, and hundreds of others.
Each vulnerability gets a severity score from 0 to 10. The score measures how much damage it could cause if exploited. Click a category below to learn what each level means.
CVSS base scores of 9.0–10.0 are considered the most “severe.” In practice, these are supposed to be the most dangerous flaws: an attacker could take full control of your system remotely, often without needing a password or any interaction from the victim. For example, an attacker could take over your computer from across the internet after you merely visit a malicious website. These are typically treated as emergencies and receive out-of-band patches.
Serious vulnerabilities that could let attackers steal data, crash systems, or gain significant access. They usually require some preconditions (like being on the same network or tricking you into clicking a link), but the damage potential is high.
Moderate-risk flaws. These typically need multiple conditions to exploit and cause limited damage. By themselves they're not perceived as emergencies, but attackers sometimes chain several medium-severity bugs together to build a full attack. This is also the compliance cutoff for standards like PCI-DSS, the payment card industry's security standard.
Minor issues with minimal direct impact. An attacker would gain very little from exploiting these alone. They're worth fixing eventually, but they don't typically keep security teams up at night.
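The four categories above follow the standard CVSS v3 qualitative severity bands (None, Low, Medium, High, Critical), which can be sketched as a simple lookup:

```python
def severity_label(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity band."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Note that the bands are not evenly sized: "Critical" covers only 9.0–10.0, while "Medium" spans 4.0–6.9.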
CVSS scores aren't evenly distributed, and exploited vulnerabilities cluster at higher scores, but not exclusively: according to CISA's Known Exploited Vulnerabilities (KEV) list, some medium-severity bugs are targeted too. Attackers don't seem to care what the CVSS base score is, as long as the vulnerability gets the job done.
When you toggle to CISA's Known Exploited Vulnerabilities (KEV), the distribution shifts toward higher scores, but not exclusively. Some medium-severity vulnerabilities and even low-severity vulnerabilities are exploited too.
The National Vulnerability Database (NVD) fell behind processing CVEs, and therefore many vulnerabilities have no score at all. You can't easily prioritize what you can't measure.
CISA (the US Cybersecurity and Infrastructure Security Agency) maintains a list of vulnerabilities that are known to be actively exploited. Federal agencies are required to patch these. Many companies also use this list to prioritize patching. Attackers don't appear to care how old a vulnerability is, as long as it is useful to them.
The high counts for 2021 and the years immediately after reflect the catalog's launch in late 2021 with a large backfill of historically exploited vulnerabilities.
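CISA publishes the KEV catalog as a JSON feed, and the per-year counts come from bucketing entries by the year in their dateAdded field. A minimal sketch; the sample records below are illustrative, though cveID and dateAdded are the feed's actual field names:

```python
import json
from collections import Counter

# Illustrative sample in the KEV feed's shape; the real feed holds 1000+ entries.
kev_feed = json.loads("""
{"vulnerabilities": [
  {"cveID": "CVE-2021-44228", "dateAdded": "2021-12-10"},
  {"cveID": "CVE-2017-0144",  "dateAdded": "2021-11-03"},
  {"cveID": "CVE-2023-4966",  "dateAdded": "2023-10-18"}
]}
""")

# Bucket entries by the year they were added to the catalog.
added_per_year = Counter(v["dateAdded"][:4] for v in kev_feed["vulnerabilities"])
print(dict(added_per_year))  # {'2021': 2, '2023': 1}
```

Bucketing by dateAdded (when CISA listed it) rather than by the CVE's own year is exactly why the backfill shows up as a launch-era spike.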
A severity label measures potential damage, not likelihood of use by adversaries. Most critical CVEs are never exploited. And many exploited CVEs aren't rated Critical.
Out of 100 Critical CVEs, how many are actually exploited, according to CISA?
Out of 100 Exploited CVEs, what severity were they?
Proof-of-concept exploit code exists for far more vulnerabilities than are actually exploited in the wild. Having exploit code doesn't mean anyone will use it.
CVEs with exploit code (cvelistV5 references) vs. actually exploited (CISA KEV). The gap shows that public exploit code is a weak predictor of actual exploitation.
When a vulnerability is discovered, it takes time before its CVE is officially published. That delay leaves a window where the flaw has been added to the CVE catalog but its details are withheld from the public.
How long between a vulnerability being discovered and its CVE being published?
Publication delays vary widely. Run the pipeline to see median, 90th percentile, and mean delay from real data.
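Those summary statistics are straightforward to compute once each CVE's delay (in days) is known. A sketch using Python's standard library, with made-up delay values:

```python
import statistics

def delay_summary(delays_days: list[int]) -> dict[str, float]:
    """Median, 90th percentile, and mean of publication delays in days."""
    ordered = sorted(delays_days)
    # quantiles with n=10 yields the nine deciles; index 8 is the 90th percentile.
    p90 = statistics.quantiles(ordered, n=10)[8]
    return {
        "median": statistics.median(ordered),
        "p90": p90,
        "mean": statistics.fmean(ordered),
    }

# Hypothetical delays: most CVEs publish quickly, a long tail takes months.
print(delay_summary([1, 2, 3, 5, 7, 14, 30, 45, 120, 400]))
```

Reporting the median and 90th percentile alongside the mean matters here because a long tail of slow publications drags the mean far above the typical case.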
Comparing scores across CVSS versions is like comparing Fahrenheit to Celsius. The numbers mean different things.
CVSS scores aren't evenly distributed. The formula's math creates spikes at certain scores. 7.5 and 9.8 appear far more often than 7.4 or 9.7. This spikiness is a quirk of the CVSS calculators, not a reflection of a meaningful gradient of risk.
The CVSS formula uses discrete inputs (like "Low/High" for complexity) that collapse into certain numeric outputs. This means some scores are mathematically impossible to reach, creating gaps and spikes in the distribution.
The mathematical distribution of all possible base scores from each CVSS formula. Each bar shows how many metric combinations produce that score. Not all scores are reachable, and some are far more likely than others, so the 0–10 scale suggests a finer-grained range of outcomes than actually exists. CVSS 4.0 has the fewest possible scores but requires the most inputs to calculate.
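The gaps and spikes can be reproduced directly by enumerating every CVSS v3.1 base-metric combination through the formula. A condensed sketch, with constants taken from the FIRST CVSS v3.1 specification:

```python
import itertools

# Metric weights from the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope unchanged
      "C": {"N": 0.85, "L": 0.68, "H": 0.5}}    # Scope changed

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest 0.1 increment >= x (spec Appendix A)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, s, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (6.42 * iss if s == "U"
              else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15)
    if impact <= 0:
        return 0.0
    expl = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    raw = impact + expl if s == "U" else 1.08 * (impact + expl)
    return roundup(min(raw, 10))

scores = [base_score(av, ac, pr, ui, s, c, i, a)
          for av, ac, pr, ui, s, c, i, a in itertools.product(
              AV, AC, "NLH", UI, "UC", CIA, CIA, CIA)]
# 2592 metric combinations collapse onto far fewer than the 101 nominal
# 0.0-10.0 values; counting how many land on each score reproduces the spikes.
print(len(scores), len(set(scores)))
```

Tallying `scores` with `collections.Counter` gives the per-score bar heights; scores that never appear are the unreachable gaps.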
EPSS (Exploit Prediction Scoring System) tries to predict which vulnerabilities will actually be exploited in the next 30 days. Unlike CVSS which measures theoretical severity, EPSS attempts to measure the likelihood of exploitation.
CISA KEV vulnerabilities are confirmed as actively exploited. If CVSS (severity) and EPSS (exploitation probability) agreed, points would fall on the diagonal line. Instead the scores scatter widely. Neither score reliably predicts the other, and neither appears to predict actual exploitation activity.
Points on the diagonal = perfect agreement (CVSS 10 would mean EPSS 100%). The scatter shows that high-severity KEVs often have low EPSS, and low-severity KEVs can have high EPSS. Both metrics have limited predictive value for actual exploitation.
According to CISA KEV, attackers overwhelmingly target vulnerabilities that are network-accessible and low-complexity, and that require no authentication and no user interaction.
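Checking those four properties amounts to inspecting a CVE's CVSS v3 vector string. A sketch; the example vector is Log4Shell's published v3.1 vector:

```python
def parse_vector(vector: str) -> dict[str, str]:
    """Split a CVSS v3 vector like 'CVSS:3.1/AV:N/...' into metric: value pairs."""
    parts = vector.split("/")[1:]  # drop the 'CVSS:3.x' prefix
    return dict(p.split(":") for p in parts)

def low_friction(vector: str) -> bool:
    """Network-accessible, low complexity, no privileges, no user interaction."""
    m = parse_vector(vector)
    return (m.get("AV") == "N" and m.get("AC") == "L"
            and m.get("PR") == "N" and m.get("UI") == "N")

# Log4Shell's vector hits all four attacker-friendly properties.
print(low_friction("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H"))  # True
```

Running this predicate over the KEV catalog's vectors is one way to measure how strongly exploitation skews toward low-friction bugs.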
Different organizations track exploited vulnerabilities, and they don't agree on what to include. They can't all be correct. Is one of them right, or are they all wrong?
Different criteria produce wildly different counts. CVSS Critical is a severity threshold; EPSS Critical is a probability threshold; KEV lists track confirmed exploitation. No single list is comprehensive.
Products at the top of the list are not necessarily there because they're less secure, but often because they're the most widely deployed. Researchers and attackers go where the users are.
When both NVD and CISA ADP/CNA score the same vulnerability, agreement rates vary. Run the pipeline to see exact agreement and disagreement percentages from real data.
EUVD (European Union), JVN (Japan), BDU (Russia, when available), and CNVD (China) each maintain their own vulnerability lists. They don't agree on what to include, and when they overlap with cvelistV5 they often don't agree on base scores either. Since none of them agree, either one of them is correct, or they are all incomplete. Which do you think it is?
Each bar shows what % of cvelistV5 CVEs appear in that database. Score disagreement: when the same CVE is scored by both cvelistV5 and NVD (US), how often do they assign different CVSS values?
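The disagreement rate is just a join on CVE ID followed by a comparison. A minimal sketch with hypothetical scores from two sources:

```python
def disagreement_rate(a: dict[str, float], b: dict[str, float]) -> float:
    """Share of CVEs scored by both sources whose CVSS values differ."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    differing = sum(1 for cve in shared if a[cve] != b[cve])
    return differing / len(shared)

# Hypothetical scores; only the overlap (two CVEs here) is compared.
cna = {"CVE-2024-0001": 9.8, "CVE-2024-0002": 7.5, "CVE-2024-0003": 5.3}
nvd = {"CVE-2024-0001": 9.8, "CVE-2024-0002": 8.8, "CVE-2024-0004": 6.1}
print(disagreement_rate(cna, nvd))  # 0.5
```

Only CVEs scored by both sources count toward the rate; CVEs unique to one database are excluded rather than treated as disagreements.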
As CVE volume has exploded, so have quality problems. Run the pipeline to see how many CVEs have been rejected due to duplicates, errors, and non-vulnerabilities that slipped through the system.
CVSS calculators have many possible vector combinations that can produce each score. If scores simply fell out of the math, each score's share of CVEs would roughly track its share of combinations. Instead, certain scores appear far more often than that baseline predicts. This suggests the researchers and vendors who assign scores have strong preferences for certain values, despite the calculators not favoring them.
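One way to quantify that preference is to compare each score's observed share of CVEs against its theoretical share of metric combinations. A sketch with illustrative counts:

```python
def over_representation(observed: dict[float, int],
                        theoretical: dict[float, int]) -> dict[float, float]:
    """Ratio of a score's observed share of CVEs to its share of vector
    combinations. Values well above 1.0 mean the score is assigned more
    often than the math alone would predict."""
    obs_total = sum(observed.values())
    theo_total = sum(theoretical.values())
    return {s: (observed.get(s, 0) / obs_total) / (theoretical[s] / theo_total)
            for s in theoretical}

# Illustrative counts: 7.4 and 7.5 have the same number of producing
# combinations here, yet 7.5 shows up in nine times as many CVEs.
obs = {7.4: 100, 7.5: 900}
theo = {7.4: 50, 7.5: 50}
ratios = over_representation(obs, theo)
print(ratios)  # {7.4: 0.2, 7.5: 1.8}
```

A ratio of 1.0 for every score would mean the distribution is fully explained by the calculator's math; the spikes at 7.5 and 9.8 show up as ratios well above it.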