The increasing attention on Artificial Intelligence (AI) regulation has led to the definition of a set of ethical principles grouped under the Sustainable AI framework. In this article, we identify Continual Learning, an active area of AI research, as a promising approach to designing systems that comply with the Sustainable AI principles. While Sustainable AI outlines general desiderata for ethical applications, Continual Learning provides the means to put those desiderata into practice.
Sustainable Artificial Intelligence through Continual Learning
Continual Learning is the most promising candidate for Sustainable AI
Andrea Cossu – Scuola Normale Superiore, University of Pisa
Marta Ziosi – University of Oxford
Vincenzo Lomonaco – University of Pisa
CAIP 2021
Continual Learning objectives
● Mitigate catastrophic forgetting
● Quickly learn new tasks
● Build new concepts incrementally
● ...
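Rehearsal (experience replay) is one common way Continual Learning methods pursue the first objective: a small buffer of past examples is kept and replayed alongside new data. A minimal sketch using reservoir sampling, so every example seen so far has an equal chance of being retained; the `ReplayBuffer` class and its interface are illustrative assumptions, not something defined in the article:

```python
import random

class ReplayBuffer:
    """Fixed-size memory filled by reservoir sampling: each of the
    `seen` examples has probability capacity/seen of being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.storage = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            # Overwrite a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = example

    def sample(self, k):
        # Mini-batch of old examples to interleave with the new ones.
        return self.rng.sample(self.storage, min(k, len(self.storage)))

buffer = ReplayBuffer(capacity=100)
for x in range(1000):       # a stream of 1000 incoming examples
    buffer.add(x)
print(len(buffer.storage))  # memory stays bounded at its capacity
```

During training, each mini-batch of fresh data would be mixed with `buffer.sample(k)` so that old tasks keep contributing to the gradient.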
Continual Learning for Efficiency and Scalability
● AI requires long training phases on gigantic datasets
● AI is not able to manage drifts → requires retraining
● CL systems learn over time
● The CL advantage grows with the degree of non-stationarity of the environment
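The contrast with retraining can be sketched with a toy online learner: instead of retraining from scratch on the full dataset after a drift, the model updates one example at a time and tracks the new data distribution. The synthetic drift and all names below are illustrative assumptions:

```python
def sgd_step(w, b, x, y, lr=0.05):
    """One online update of a 1-D linear model y ≈ w*x + b on the
    squared error; the model adapts example by example."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0
# Phase 1: the stream follows y = 2x.
for i in range(2000):
    x = (i % 10) / 10.0
    w, b = sgd_step(w, b, x, 2.0 * x)
# Drift: the relationship changes to y = -x + 1.
for i in range(2000):
    x = (i % 10) / 10.0
    w, b = sgd_step(w, b, x, -x + 1.0)
print(round(w, 2), round(b, 2))  # w, b now track the drifted relationship
```

The same updates keep running through the drift; no stored dataset and no separate retraining phase are needed, which is the efficiency argument the slide makes.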
Continual Learning for Fairness, Privacy & Security
Fairness
● CL makes it possible to monitor the emergence of biases
● CL makes it easy to incorporate new information to correct possible biases
Privacy & Security
● low-powered devices → data must be moved to central facilities, which exposes the systems to external attacks
● CL can train directly on-device!
Continual Learning for Accuracy & Robustness
● Offline AI → drastic drop in performance when data arrives dynamically
● CL improves robustness by mitigating catastrophic forgetting
● CL systems can live less isolated in real-world environments
● CL can improve final accuracy through positive interaction between tasks (transfer learning)
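Besides rehearsal, a second family of CL strategies mitigates forgetting by penalising changes to parameters that mattered for earlier tasks (Elastic Weight Consolidation is the best-known example). A deliberately tiny scalar sketch of that idea; the quadratic "task losses" and the penalty strength `lam` are toy assumptions chosen so the effect is easy to see:

```python
def train(loss_grad, w0, steps=500, lr=0.1):
    """Plain gradient descent on a scalar parameter."""
    w = w0
    for _ in range(steps):
        w -= lr * loss_grad(w)
    return w

# Task A: loss (w - 2)^2, minimised at w = 2.
grad_a = lambda w: 2.0 * (w - 2.0)
w_a = train(grad_a, 0.0)

# Task B trained naively: loss (w + 2)^2 pulls w to -2,
# completely forgetting the task-A solution.
grad_b = lambda w: 2.0 * (w + 2.0)
w_naive = train(grad_b, w_a)

# Task B with an anchor penalty lam * (w - w_a)^2 that keeps w
# close to the task-A solution (an EWC-style regulariser).
lam = 1.0
grad_b_anchored = lambda w: 2.0 * (w + 2.0) + 2.0 * lam * (w - w_a)
w_cl = train(grad_b_anchored, w_a)
print(w_naive, w_cl)  # naive training forgets; the anchored run compromises
```

Here `w_naive` lands at the task-B optimum (-2), wiping out task A, while `w_cl` settles at the trade-off point (0) that balances both quadratics; in real EWC the penalty is weighted per parameter by an importance estimate rather than a single `lam`.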
Continual Learning for Explainability, Transparency & Accountability
● CL does not address these problems directly
● Offline AI solutions can be adopted for very narrow, specific tasks
● The biological plausibility of CL systems may lead to new mechanisms for studying their behavior
● Are there alternatives when no assumptions on the task can be made?
From here to there
Continual Learning is still in its infancy:
● Catastrophic forgetting is not yet solved → training on previous data is still partially needed
● Continual Learning models do not fully exploit the incremental learning paradigm → still too similar to offline, large machine learning models
● Experiments are usually run in toy environments → they need to scale to realistic applications