Research is formalized curiosity. It is poking and prying with a purpose. – Zora Neale Hurston
On January 13 and 14, 2022, we are holding a series of short seminars (“bites”) on recent research topics.
The seminars are part of the “Data Science Lab: Process and methods” master's degree course.
Speakers on Thursday 13
Moreno La Quatra
Abstract
Is it fake or real? This time it is not about news, and it is not about humans either. Neural networks can be trained to fool each other and, as they do, they become more and more accurate.
Generative Adversarial Networks (GANs) are a class of deep learning architectures that aim to generate realistic data (e.g., images or text). This short seminar introduces the main concepts behind their design, together with some interesting examples.
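To make the adversarial game concrete, here is a minimal training-loop sketch in PyTorch. It is not material from the talk; the toy data, network sizes, and hyperparameters are all illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy setup: the generator maps random noise to fake "data" vectors;
# the discriminator scores vectors as real (1) or fake (0).
NOISE_DIM, DATA_DIM, BATCH = 16, 64, 32

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM)
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1)
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM) + 3.0  # stand-in for a real batch
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: learn to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator predict "real".
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(BATCH, 1))
    loss_g.backward()
    opt_g.step()
```

The two optimizers pull in opposite directions: as the discriminator gets better at spotting fakes, the generator's loss pushes it to produce ever more realistic samples.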
Additional resources
NLP
– Text-To-Text Generative Adversarial Networks https://ieeexplore.ieee.org/document/8489624
– Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources https://arxiv.org/abs/2011.08678
Audio
– CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion https://arxiv.org/abs/1904.04631
– Adversarial Audio Synthesis https://arxiv.org/abs/1802.04208
Finance
– cCorrGAN: Conditional Correlation GAN for Learning Empirical Conditional Distributions in the Elliptope https://arxiv.org/pdf/2107.10606.pdf
Flavio Giobergia
Abstract
Representing words using vectors is not a novel idea. Term frequency, TF-IDF, LSA and many other approaches have been in place for decades. However, word2vec has radically changed the way in which we use vectors to represent words.
In this talk, we will look at what word2vec is, how it works and why it produces such useful vector representations.
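As a quick taste (not from the talk), here is a minimal sketch using the gensim library; the toy corpus and hyperparameters are illustrative assumptions:

```python
from gensim.models import Word2Vec

# A tiny toy corpus: each sentence is a list of tokens.
sentences = [
    ["data", "science", "is", "fun"],
    ["word", "vectors", "capture", "meaning"],
    ["science", "uses", "data"],
]

# Train a skip-gram word2vec model (sg=1) with small vectors.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Each word is now a dense vector ...
vec = model.wv["science"]

# ... and, on a real corpus, related words end up close together.
print(model.wv.most_similar("science", topn=2))
```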
Giuseppe Attanasio
Abstract
Where did recurrent models (e.g., RNNs) stem from? What were their main applications? How did we end up using Attention instead?
This talk answers these questions, giving a temporal perspective on NLP research from 2014 to late 2016.
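As a pointer to where the story ends up, here is a minimal NumPy sketch (not from the talk) of the scaled dot-product attention that eventually displaced recurrence; the shapes and inputs are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight the values V by the similarity between queries and keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V                             # weighted sum of values

# Toy example: 3 queries attending over 4 key/value pairs of dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

Unlike an RNN, nothing here is sequential: every query looks at every key at once, which is a large part of why attention is so parallelizable.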
Speakers on Friday 14
Eliana Pastor, Salvatore Greco
Abstract
How can we explain the behavior of a model? Should we trust it? Is it right for the right reasons? The talk introduces the basic concepts of Explainable AI, starting with an overview of explanation methods. We will then focus on explanation techniques for textual data, showing some interesting examples and discussing the challenges of validating explanations.
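As one concrete example of the kind of technique discussed, here is a minimal sketch (not from the talk) using the LIME library to explain a scikit-learn text classifier; the dataset and pipeline are illustrative assumptions:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Train a simple two-class text classifier.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(data.data, data.target)

# Ask LIME which words drive the prediction for one document.
explainer = LimeTextExplainer(class_names=data.target_names)
explanation = explainer.explain_instance(
    data.data[0], pipeline.predict_proba, num_features=6
)
print(explanation.as_list())  # (word, weight) pairs, positive or negative
```

Inspecting the weighted words is a first step toward the “right for the right reasons” question: if the top features are topic-irrelevant artifacts, the model may be right for the wrong reasons.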
Cover image credits: Photo by Alexandre Pellaes on Unsplash