This is a link to last month's fascinating Beesley lecture by Stefan Hunt, the FCA's Head of Behavioural Economics & Data Science. It focuses on how machine learning can help the FCA solve what he describes as its "prediction" problems.

Regulators have always been fascinated by the possibilities of data and the importance of getting better at (1) identifying significant patterns, (2) discerning the nature of the problem they reveal, and (3) deciding how to fix it. Stefan's speech demonstrates that the FCA is making significant strides - though they're still hard yards - on the first of these, but only alludes in general terms to the other two.

To put this in perspective: in the very early days of the FSA, back in 2001 I think, Professor Malcolm Sparrow, author of "The Regulatory Craft", was invited over from Harvard to help us get better at these issues. He spoke in particular about the now famous "Boston Gun Project", and the FSA worked extremely hard to apply this learning. But it succeeded only fitfully, hampered in part by the problems of obtaining clean, accurate data sets.

This is clearly still a hurdle, but it is not as insuperable as it once was, and Regtech (#FCAsprint) holds out the alluring promise - equally attractive to firms - of greatly reducing ambiguity in, and thereby increasing the accuracy of, regulatory data sets.

Stages 2 and 3, however - interpreting and framing the true nature of the problem (and doing so early), and then working out how to fix it - are likely to remain ferociously difficult for a while yet.