Hidden Hot Battle Lessons of Cold War: All Learning Models Have Flaws, Some Have Casualties

BSidesLV 2017

Presented by: Davi Ottenheimer
Date: Tuesday, July 25, 2017
Time: 11:30 - 12:00
Location: Ground Truth

In pursuit of realistic expectations for learning models, can we better prepare for adversarial environments by examining failures in the field? All models have flaws, given the usual menu of problems with learning; it is the rapidly increasing risk of catastrophic-level failure that makes data robustness a far more immediate concern. This talk pulls forward surprising and obscured learning errors from the Cold War to give context to modern machine learning successes, and to how things may quickly fall apart in evolving domains of cyber conflict.

Davi Ottenheimer

flyingpenguins, Cyberwar History, Threat Intel, Hunt, Active Defense, Cyber Letters of Marque, Cloudy Virtualization Container Security, Adversarial Machine Learning, Data Integrity and Ethics in Machine Learning (Formerly Known as Realities of Securing Big Data).

