Top 10 2017 - Call for Data and Weightings Discussion


Why

What

What sort of data does the Top 10 need, where and from whom should we ask for it, and how do we weight the various types of responses (see Brian Glas’ blog on “tools augmented with humans” versus “humans augmented with tooling”)? This weighting of the data will help define the approach for OWASP Top 10 2017 RC2, and will also be used for the 2020 and 2023 editions of the OWASP Top 10. I want the community to drive this discussion.
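
To make the weighting question concrete, here is a minimal sketch (in Python) of how per-method weights might be applied when combining contributed incidence data. The source names, weights, and figures are entirely hypothetical; choosing them is precisely what this discussion needs to settle.

    # Hypothetical sketch of weighting contributed data by collection method.
    # All source names, weights, and incidence figures here are illustrative,
    # not an agreed OWASP methodology.

    # Each contribution: (collection_method, {category: incidence_rate})
    contributions = [
        ("tools_augmented_with_humans", {"Injection": 0.23, "XSS": 0.18}),
        ("humans_augmented_with_tooling", {"Injection": 0.31, "XSS": 0.12}),
    ]

    # Assumed per-method weights; picking these is exactly the open question.
    weights = {
        "tools_augmented_with_humans": 0.4,
        "humans_augmented_with_tooling": 0.6,
    }

    def weighted_incidence(contributions, weights):
        """Weighted average incidence rate per vulnerability category."""
        totals, weight_sums = {}, {}
        for method, rates in contributions:
            w = weights[method]
            for category, rate in rates.items():
                totals[category] = totals.get(category, 0.0) + w * rate
                weight_sums[category] = weight_sums.get(category, 0.0) + w
        return {c: totals[c] / weight_sums[c] for c in totals}

    print(weighted_incidence(contributions, weights))
    # -> Injection ≈ 0.278, XSS ≈ 0.144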

Outcomes

Synopsis and Takeaways

We talked about how data was collected and the process by which it was analysed. For Top 10 2017, there was an open call for data, but it wasn’t widely publicised nor sufficiently promoted once open, which likely resulted in fewer responses than we might otherwise have received. There was a lot of discussion around the process, such as “if we use data scientists, can we use the existing data?” and “should we re-open the data collection?” It was an incredibly valuable discussion and struck a good pragmatic balance. We want to drive a release this year, but RC2 will not come out this week, so we will not be running editing/creation sessions this week; instead, we will work on collecting more data.

The outcomes from this session were:

  • A data collection process and timeline will be published to the wiki, so everyone knows how data is collected and analysed and when the next data call will be held. Some of this will also appear in the Top 10 text itself, probably as an appendix, to make sure our process is transparent and open.

  • Andrew van der Stock will work on a process with Foundation staff to ensure that we can maximise publicity for the next data call round in 2019. There was a suggestion to keep the data call open permanently, but this was felt to be unworkable without a volunteer data scientist. For smaller consultancies, obtaining this data is already difficult, and we don’t want people to be overly burdened by the data call.

  • A data call extension will be pushed out for interested parties; Andrew will take care of this on Tuesday 12 June, 2017. As long as the data is roughly in the same Excel format as the existing data and is provided by the end of July, it ought to be possible to use it.

  • Dave Wichers will reach out to Brian Glas for his feedback ahead of tomorrow morning’s data weighting session.

  • For 2020, we will try to find data scientists to help us improve our data methodology and analysis, so that we can at least ensure data drives inclusion for the non-forward-looking categories.

  • Ordering will never be strictly data order. To provide continuity, there is a decision (which will now be documented) that if, say, A1 … A3 in 2010 are the same in 2017 but in a slightly different order, they will retain their previous order. This helps folks map results year on year and prevents massive upheaval between editions. (A minimal sketch of this tie-breaking rule appears after this list.)

  • Feedback obtained from the OWASP Top 10 mailing list will be entered as GitHub issues tomorrow. For feedback sent privately to Dave, Andrew will reach out to those individuals to ask permission to create GitHub issues on their behalf. This will help with project transparency. From now on, if you have feedback, please provide it at GitHub: https://github.com/OWASP/Top10/issues
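
To illustrate the continuity rule above, here is a minimal sketch (in Python) that orders categories by a data-derived score but breaks near-ties using the previous edition’s order. The categories, scores, and closeness threshold are hypothetical; the real decision remains an editorial one.

    # Hypothetical sketch of continuity-preserving ordering. Categories,
    # scores, and the tie threshold are illustrative only.

    previous_order = ["Injection", "Broken Auth", "XSS"]  # a prior edition's order

    # New data-derived scores; Injection and Broken Auth are nearly tied.
    scores = {"Injection": 0.95, "Broken Auth": 0.94, "XSS": 0.70}

    def continuity_sort(scores, previous_order, threshold=0.05):
        """Sort by score, keeping the previous order for near-tied categories."""
        prev_rank = {name: i for i, name in enumerate(previous_order)}
        def key(name):
            # Primary key: score bucketed by `threshold`, negated so higher
            # scores sort first. Secondary key: rank in the previous edition.
            return (-round(scores[name] / threshold),
                    prev_rank.get(name, len(prev_rank)))
        return sorted(scores, key=key)

    print(continuity_sort(scores, previous_order))
    # -> ['Injection', 'Broken Auth', 'XSS'] (previous order kept for the tie)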

Who

The target audience for this Working Session is:

  • OWASP Top 10 2017 Track participants
