Machine Learning Explainability | Avast


This post was written by the following Avast researchers:

Petr Somol, Avast Director AI Research
Tomáš Pevný, Avast Principal AI Scientist
Andrew B. Gardner, Avast VP Research & AI

The automated detection of threats by analyzing emails, downloaded files, log files, or browsing history, for example, is a key requirement of today’s cybersecurity products. Machine learning (ML) is a great tool for achieving this automation, but most applications are black box; in other words, the models provide detections with little or no context or explanation. This is problematic for humans (more specifically, the security analysts who handle threat response, the developers who maintain protection systems, and sometimes even the users who rely on the products for protection) because it makes it difficult to understand and trust the product’s performance, track down and correct spurious detections, investigate newly emerging or zero-day threats, and even ensure fairness and compliance.
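To make the black-box problem concrete, here is a minimal sketch, not Avast's pipeline, contrasting a bare model verdict with one generic explanation technique, permutation feature importance. It assumes scikit-learn, synthetic data, and hypothetical feature names (attachment_entropy, num_urls, sender_reputation) chosen purely for illustration:

```python
# A minimal sketch of the black-box problem, assuming scikit-learn
# and a toy dataset with hypothetical feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical features extracted from an email or downloaded file.
feature_names = ["attachment_entropy", "num_urls", "sender_reputation"]
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # synthetic labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# Black box: a bare verdict with no context for the analyst.
print("verdict:", model.predict(X[:1]))  # e.g. [1] -> flagged as malicious

# One generic explanation technique: permutation feature importance,
# which reveals which inputs the verdict actually depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

With an attribution like this, an analyst can at least see which signals drove a detection, rather than having to take the verdict on faith.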
