I recently attended a private event that included a talk on security metrics. Security metrics can be used to determine whether action x is actually reducing risk y. Software security metrics typically involve counting the number of defects discovered over time to see if things are getting better. Most of these metrics cover issues discovered during the testing (QA/development) or post-production (pen test) phases, and they are considered an industry-accepted measurement. Typically these security 'defects' are logged in some sort of defect tracking system that can be pulled up at a later time.
Anyone who has been involved in application requirements or design knows that security-related issues pop up and are remediated before a single line of code is written. Flawed business requirements and designs are typically not filed in a defect tracking system; instead, the design/requirements document is simply updated. I got into an interesting discussion about this and about the lack of security metrics covering business and architectural flaws discovered and remediated prior to development. I asked a simple question: 'Are people measuring architectural flaws, and if so, how?' Only one person (out of 15 or so highly qualified individuals/companies) was doing this sort of tracking, albeit in an ad hoc manner. The other participants weren't aware of a document or format available to do this either. We all agreed this was an important missing measurement that hasn't been well explored in the infosec community.
You may be wondering, 'why measure architectural/business requirement flaws?' Consider 10 different product teams that each need to transfer data across an internet pipe. Three teams wish to use FTP, one team wishes to use an HTTP service over SSL, four teams wish to use SSH, and two teams have no clue how to accomplish the goal. In this situation there is a clear common task being performed by different groups, with no standard or formal guidance available on how it should be handled. By measuring these 'issues/requirement changes' you can identify missing standards, training opportunities, the success rate of secure design review, and the frequency of certain bad designs or group assumptions.
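Even a lightweight log of design-phase findings makes this kind of analysis possible. The following is a minimal sketch (the record fields, team names, and the `summarize` helper are all illustrative, not from any existing standard or tool) showing how tallying proposed designs for a common task could surface the pattern in the scenario above:

```python
from collections import Counter

# Hypothetical design-review findings for the file-transfer scenario.
# Field names and values are illustrative assumptions, not a standard format.
findings = [
    {"team": "team-01", "task": "file-transfer", "proposed": "FTP"},
    {"team": "team-02", "task": "file-transfer", "proposed": "FTP"},
    {"team": "team-03", "task": "file-transfer", "proposed": "FTP"},
    {"team": "team-04", "task": "file-transfer", "proposed": "HTTPS"},
    {"team": "team-05", "task": "file-transfer", "proposed": "SSH"},
    {"team": "team-06", "task": "file-transfer", "proposed": "SSH"},
    {"team": "team-07", "task": "file-transfer", "proposed": "SSH"},
    {"team": "team-08", "task": "file-transfer", "proposed": "SSH"},
    {"team": "team-09", "task": "file-transfer", "proposed": None},  # no clue
    {"team": "team-10", "task": "file-transfer", "proposed": None},  # no clue
]

def summarize(findings, task):
    """Count proposed designs for a given common task.

    A cluster of insecure choices (FTP) or missing answers (None)
    signals a missing standard or a training opportunity.
    """
    return Counter(f["proposed"] for f in findings if f["task"] == task)

tally = summarize(findings, "file-transfer")
print(tally.most_common())
```

With data like this accumulated over time, the same tally per task and per review cycle would also show whether issuing a standard actually changes what teams propose.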
Is anyone aware of any articles, documentation, or metrics pertaining to recording flaws identified and addressed during the design/requirements phase? Please reply in the comments form below.