David W. Flater
Counting known vulnerabilities and correlating different factors with the vulnerability track records of software products after the fact is obviously feasible. The harder challenge is to produce “evidence to tell how vulnerable a piece of software is” with respect to vulnerabilities and attack vectors that are currently unknown. This means forecasting the severity and the rate at which currently unknown vulnerabilities will be discovered or exploited in the future, given a candidate system and its environment.

Meteorologists can observe the present state of a weather system and assume that the future state must evolve from it through the application of known physics. Small features that are below the resolution of the radar are correspondingly limited in their impact, so the uncertainty can be bounded. But for computer system vulnerabilities, there are no analogous limits. High-impact exploits of tiny, obscure quirks that were not on anyone’s “radar” appear with regularity. Although the resolution of that “radar” is continuously improved, the complexity of systems is increasing faster, so the relevant details are inexorably receding into the background.

Under these conditions, our best available predictors of future vulnerabilities in systems that were responsibly designed and implemented may be nothing more than metrics of size, complexity, and transparency. Unexciting as it may be, there is rationality to this approach. To develop a market for smaller, simpler, more verifiable systems would not be too modest a goal for a large government effort to attempt.
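To make the closing suggestion concrete, the following sketch computes two crude stand-ins for such metrics: size as non-blank source lines, and complexity as a count of branch points (a rough proxy for cyclomatic complexity). The specific metric definitions here are illustrative choices of this sketch, not the author's methodology.

```python
import ast

def crude_metrics(source: str) -> dict:
    """Return illustrative size/complexity metrics for Python source.

    These definitions are stand-ins chosen for this sketch:
    - sloc: non-blank source lines
    - branches: branch-point nodes, a rough cyclomatic-complexity proxy
    """
    tree = ast.parse(source)
    sloc = sum(1 for line in source.splitlines() if line.strip())
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                    ast.BoolOp, ast.ExceptHandler)
    branches = sum(isinstance(node, branch_nodes) for node in ast.walk(tree))
    return {"sloc": sloc, "branches": branches}

sample = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(crude_metrics(sample))
```

Even metrics this simple support the essay's point: they are cheap to compute before any vulnerability is known, and a system that scores lower on both is, all else being equal, easier to inspect and harder to hide a quirk in.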