Adversarial

To prevent a panda from being misclassified as random pixels or a gibbon, you need checks on the data before it is used in the training set. This is challenging to get right, though, because you don't want to add too many restrictions: the model needs some flexibility to find unintuitive relationships and optimise.
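As a rough sketch of what such a check might look like, the snippet below rejects candidate training samples whose values fall outside their legal range or sit far from the data already vetted. The function name, the z-score test and the threshold are illustrative assumptions, not a prescribed defence.

```python
import numpy as np

def passes_sanity_checks(sample, train_mean, train_std, z_limit=6.0):
    """Screen a candidate sample before it joins the training set.

    sample     -- 1-D feature vector (e.g. flattened image pixels in [0, 1])
    train_mean -- per-feature mean of the vetted training data
    train_std  -- per-feature standard deviation of the vetted training data
    z_limit    -- how many standard deviations we tolerate (kept loose,
                  so the model retains the flexibility mentioned above)
    """
    # Range check: pixel data has legal bounds, and anything outside
    # them is a strong sign of tampering or corruption.
    if sample.min() < 0.0 or sample.max() > 1.0:
        return False
    # Outlier check: reject samples that look nothing like data we have
    # already accepted, without being so strict that unusual but
    # legitimate samples are lost.
    z_scores = np.abs(sample - train_mean) / (train_std + 1e-9)
    return z_scores.mean() <= z_limit
```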

How do you defend against adversarial machine learning?

1) Add security measures to automated training of machine learning

2) Protect access to machine learning models (see the rate-limiting sketch after this list)

3) Make the creation of results transparent (e.g. Netflix explaining why a title was recommended), which will help you find the culprit

4) Notify when an input falls outside what the model was trained on (see the second sketch after this list)
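On point 2, attackers typically need a high volume of probing queries to craft adversarial inputs or reconstruct a model, so simply throttling access raises the cost of an attack. Below is a minimal sketch of per-caller rate limiting; the class name and the limits are assumptions to be tuned, not recommendations.

```python
import time
from collections import defaultdict

class QueryRateLimiter:
    """Cap how often any single caller can query the model endpoint."""

    def __init__(self, max_queries=100, window_seconds=60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(list)  # caller id -> recent timestamps

    def allow(self, caller_id):
        now = time.monotonic()
        # Keep only the queries that fall inside the current window.
        recent = [t for t in self.history[caller_id] if now - t < self.window]
        if len(recent) >= self.max_queries:
            self.history[caller_id] = recent
            return False  # deny, and in practice log the caller for review
        recent.append(now)
        self.history[caller_id] = recent
        return True

# Gate every prediction call, e.g.:
#   if limiter.allow(client_ip):
#       score = model.predict(features)
```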
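On point 4, one simple way to notice an input that is "outside the model" is to measure how far it sits from the training distribution. The sketch below uses the Mahalanobis distance as that measure; the training statistics and the alert threshold are assumed to be precomputed and calibrated on known-good data.

```python
import numpy as np

def out_of_distribution_score(x, train_mean, train_cov_inv):
    """Mahalanobis distance of an input from the training distribution.

    A large distance means the model never saw anything like this
    input, so its prediction should not be trusted blindly.
    """
    diff = np.asarray(x) - train_mean
    return float(np.sqrt(diff @ train_cov_inv @ diff))

def check_input(x, train_mean, train_cov_inv, threshold):
    # 'threshold' would be calibrated on held-out, known-good inputs.
    score = out_of_distribution_score(x, train_mean, train_cov_inv)
    if score > threshold:
        print(f"ALERT: input scored {score:.1f}, outside the model's experience")
    return score
```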

Machine learning results presented transparently are rare, yet that transparency is a critical component in catching issues in the process. If you see an IP address next to a risk score, you probably have no insight into what was used to create the score, so you either have to trust it blindly or understand how the model turned that information into the score. That is not as easy as showing an arithmetic equation, due to the nature of machine learning, but there are some things that would help and that machine learning can provide:

  1. Machine learning model: this tells you the approximate technique used
  2. Key training samples: the training examples that most closely match this result
  3. Top factors: hundreds or more data points feed these models, but for each result a handful of them had the biggest impact in reaching it
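As a minimal sketch of surfacing all three, assuming a scikit-learn-style linear model (the data and variable names below are placeholders): the model type falls out of the object itself, nearest neighbours in the training set stand in for key training samples, and coefficient times feature value approximates each factor's pull on one particular result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Placeholder data standing in for a real, vetted training set.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)
x = rng.normal(size=5)  # the single input we want to explain

model = LogisticRegression().fit(X_train, y_train)

# 1. Machine learning model: the approximate technique used.
print("model:", type(model).__name__)

# 2. Key training samples: the vetted examples closest to this input.
neighbours = NearestNeighbors(n_neighbors=3).fit(X_train)
_, idx = neighbours.kneighbors(x.reshape(1, -1))
print("closest training rows:", idx[0].tolist())

# 3. Top factors: for a linear model, coefficient * feature value
#    approximates how much each feature pushed this particular result.
contributions = model.coef_[0] * x
top = np.argsort(np.abs(contributions))[::-1][:3]
print("top factors (index, contribution):",
      [(int(i), round(float(contributions[i]), 3)) for i in top])
```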
With this information available you can either identify what needs to be adjusted or place more trust in the result. Machine learning is everywhere, integrated into our lives, and the more we trust it without being able to verify it, the more vulnerable we become, so it is essential that steps are taken to protect your organisation.