Algorithms control more and more of the systems we interact with on a daily basis. Critical decisions are now made by machine learning models without direct human oversight. These systems, like any other system, should be continuously taken apart and inspected to see how they work, yet examining a machine learning model is not as easy as examining source code. This talk goes into detail on how to hack machine learning models and similar systems. Could an algorithm be racist? How can we detect it? Live examples in Python will be demoed and made available on GitHub; only basic programming knowledge is required to understand the talk and reproduce the examples.
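To give a flavour of the kind of check the talk alludes to, here is a minimal sketch (illustrative only, not the talk's actual GitHub demo) that trains a classifier on synthetic data with scikit-learn and NumPy, then compares how often each demographic group receives a positive decision, a simple demographic-parity probe:

```python
# Hypothetical sketch of a bias probe; the data, features, and threshold
# are invented for illustration, not taken from the talk's examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: one "legitimate" feature plus a binary group attribute
# that is correlated with the label, mimicking historical bias.
n = 5000
group = rng.integers(0, 2, size=n)                       # protected attribute: 0 or 1
score = rng.normal(loc=group * 0.5, scale=1.0, size=n)   # feature correlated with group
label = (score + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Compare the positive-decision rate per group; a large gap suggests
# the model treats the groups differently.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: positive-decision rate = {rate:.2f}")
```

A gap between the two printed rates does not by itself prove discrimination, but it is the kind of measurable signal the talk uses as a starting point for inspecting a model's behaviour.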