Gabriele Ciravegna
I am a Postdoctoral Researcher in the DBDMG team at Politecnico di Torino, dedicated to advancing Artificial Intelligence. My work focuses on making Deep Neural Networks not only more powerful but also understandable and trustworthy.
Previously, I worked as a Postdoc in the MAASAI team of Inria in Sophia Antipolis (France) from 2022 to 2023. In 2022, I received my Ph.D. with honours from the University of Florence under the supervision of Professor Marco Gori. In 2018, I received my master’s degree in Computer Engineering with honours from Politecnico di Torino. Besides machine learning, I also enjoy football, volleyball, and playing the piano.
Recent Publications
- A constraint-based approach to learning and explanation, AAAI, 2020.
- Human-Driven FOL Explanations of Deep Learning, IJCAI, 2020.
- Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers, TPAMI, 2021.
- Entropy-based logic explanations of neural networks, AAAI, 2022.
- Concept embedding models: Beyond the accuracy-explainability trade-off, NeurIPS, 2022.
- Logic explained networks, Artificial Intelligence, 2023.
- Interpretable Neural-Symbolic Concept Reasoning, ICML, 2023.
- Knowledge-driven active learning, ECML, 2023.
Awards
- Best Ph.D. Thesis Award – Premio Caianiello 2023
- Best Paper Award – AIxIA Conference 2023
Conference & Journal Organization
- AAAI Program Committee
- IJCAI Reviewer
- ECML Reviewer
- TNNLS Reviewer
Courses
- Advanced Deep Learning, Université Côte d’Azur (2022-2023)
- Machine Learning for Networking, Politecnico di Torino (2023-2024)
Invited Talks
- “On the Two-fold Role of Logic Constraints in Deep Learning”, Artificial Intelligence Research Group Talks, Cambridge University, 2021.
- “Entropy-Based Logic Explanations of Neural Networks”, 1st Nice Workshop on Interpretability, 2022.
- “Concept-based Models: Towards Interpretable-by-design Neural Networks”, Trusted AI – The future of creating ethical and responsible AI systems, AI4Media Theme Development Workshop, 2023.
- “XAI Is Dead, Long Live C-XAI: A Paradigm Shift in Explainable Artificial Intelligence”, 2nd Nice Workshop on Interpretability, 2023.