The Security of Pattern Recognition Systems in Future Internet

Pattern recognition systems will play a crucial role in the future Internet, and are already used in applications such as biometric identity recognition and the prediction of user preferences. This makes them likely targets of several kinds of attacks by malicious users.
This project, funded by Regione Autonoma della Sardegna (the regional government of Sardinia), aims to extend the theoretical foundations and design methods of pattern recognition systems, including machine learning techniques, to adversarial environments, which are not taken into account by the classical theoretical framework.

Adversarial learning is a novel research field at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings such as spam filtering, computer security, and biometric recognition.

The problem is motivated by the fact that machine learning techniques were not originally designed to cope with intelligent and adaptive adversaries; thus, in principle, the security of the whole system may be compromised by exploiting specific vulnerabilities of learning algorithms through a careful manipulation of the input data.
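To make the idea of "careful manipulation of the input data" concrete, the following is a minimal, purely illustrative sketch of an evasion attack against a toy linear spam filter. The feature names, weights, and messages are all hypothetical and are not taken from the project itself:

```python
# Hypothetical sketch: evading a toy linear "spam filter" by
# manipulating input features at classification time.
# All feature names, weights, and thresholds are illustrative.

# Linear decision rule: classify as spam if w . x + b > 0
weights = {"viagra": 2.0, "free": 1.5, "meeting": -1.0, "report": -1.2}
bias = -0.5

def score(features):
    # Words unknown to the filter contribute nothing to the score.
    return sum(weights.get(f, 0.0) * c for f, c in features.items()) + bias

original = {"viagra": 1, "free": 2}        # obvious spam
print(score(original))                     # 4.5 -> classified as spam

# Adversarial manipulation: obfuscate a "bad" word (so the filter no
# longer recognizes it) and inject "good" words to drive the score
# below the decision threshold.
evasive = {"vi4gra": 1, "free": 2, "meeting": 2, "report": 2}
print(score(evasive))                      # -1.9 -> classified as ham
```

The message content an end user sees is essentially unchanged, yet the classifier's decision flips, which is precisely the kind of vulnerability the classical learning framework does not model.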

Accordingly, to improve the security of learning algorithms, the field of adversarial learning addresses the following main open issues:

  1. identifying potential vulnerabilities of machine learning algorithms during learning and classification;
  2. devising the corresponding attacks and evaluating their impact on the attacked system;
  3. proposing countermeasures to improve the security of machine learning algorithms against the considered attacks.
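Issues 1 and 2 above also cover attacks at learning time, not only at classification time. As a hedged illustration, the following toy sketch shows a poisoning attack, in which an attacker injects mislabeled training samples to shift the learned decision boundary; the data and the learning rule are invented for this example:

```python
# Hypothetical sketch of a poisoning attack during learning.
# The scores, labels, and the toy learner are all illustrative.

def learn_threshold(ham_scores, spam_scores):
    # Toy learner: place the threshold midway between the class means.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(ham_scores) + mean(spam_scores)) / 2

ham = [0.1, 0.2, 0.3]
spam = [0.8, 0.9, 1.0]
clean_t = learn_threshold(ham, spam)
print(clean_t)       # 0.55

# Attacker injects high-score samples mislabeled as "ham", dragging
# the learned threshold upward so that real spam slips through.
poisoned_ham = ham + [0.9, 0.95, 1.0]
poisoned_t = learn_threshold(poisoned_ham, spam)
print(poisoned_t)    # 0.7375: a message scoring 0.6 now passes as ham
```

Evaluating how far such manipulations can move the decision boundary, and at what cost to the attacker, is the kind of impact analysis that issue 2 refers to.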


These are the main steps of the project:

  • Analysis of the foundations of the security of pattern recognition systems.
  • Development of a new security model based on evolutionary game theory.
  • Design of secure pattern recognition systems.
  • Case studies and development of proof-of-concept systems.