I am Antonio Norelli, a Computer Science PhD student at Sapienza University of Rome, working on AI and deep learning in the GLADIA research group, advised by Emanuele Rodolà.

Before joining the group, I graduated in Physics in 2016 (BSc) with a thesis on using neural networks to spot dark photons at CERN, advised by Stefano Giagu; two years later I graduated in Computer Science (MSc) by defeating the Italian Othello champion with a cheap from-scratch reimplementation of AlphaGo Zero, advised by Alessandro Panconesi. Along with Roberto Di Leonardo, Alessandro was also my tutor at the formidable Sapienza School for Advanced Studies.
Before and during my PhD, I worked as a Research Intern for Spiketrap with Andrea Vattani, and as an Applied Scientist Intern for the Amazon Lablet in Tübingen with Francesco Locatello. Recently, I spent time with Alex Bronstein's group at Technion in Haifa.


What makes us human? I am interested in the kind of intelligence that accounts for the difference between human beings and other animals.

To me, this gap is on par with the one between life and non-life, and is the consequence of a fundamental property of nature that should be understood in terms of information processing. Currently, I am persuaded that this intelligence coincides with our use of symbols, and that we should model the mechanism through which humans link meaning to new signs, as when a scientist proposes a new theory.

Highlights of my research in this direction

  • When an ML system becomes an artificial scientist: mastering the game of Zendo with Transformers.
    Explanatory Learning: Beyond Empiricism in Neural Networks, 2022 [arXiv] [code] [Twitter thread] [Judea Pearl about this work].
  • The meaning was already there: connecting text and images without training a neural network to do so (sketched right below this list).
    ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training, 2022 [arXiv] [code] [Alex Smola about this work].
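
To give a feel for how simple ASIF is, here is a minimal NumPy sketch of the core idea, with random vectors standing in for the outputs of two frozen unimodal encoders; all names and dimensions are illustrative, and the actual method additionally sparsifies the relative representations.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for frozen unimodal embeddings: in practice, a pretrained
# vision encoder and a pretrained text encoder, never trained jointly.
n_anchors, d_img, d_txt = 1000, 512, 384
anchor_imgs = normalize(rng.normal(size=(n_anchors, d_img)))  # embedded anchor images
anchor_txts = normalize(rng.normal(size=(n_anchors, d_txt)))  # embedded anchor captions

def relative(z, anchors):
    # Represent z by its cosine similarities to the anchors. Since the
    # anchors are paired across modalities, these coordinates live in a
    # space shared by images and texts.
    return normalize(z @ anchors.T)

# Zero-shot classification: compare an image with candidate captions
# directly in the shared relative space -- no training involved.
image = normalize(rng.normal(size=(1, d_img)))
captions = normalize(rng.normal(size=(5, d_txt)))
scores = relative(image, anchor_imgs) @ relative(captions, anchor_txts).T
print("best caption:", int(scores.argmax()))
```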

Other featured research

  • It turns out that different neural networks trained on the same data learn intrinsically equivalent latent spaces (see the sketch at the end of this list).
    Relative Representations Enable Zero-shot Latent Space Communication, 2022 [arXiv] [code] [Oral at ICLR23 with 8-8-10 reviews].
  • AlphaGo Zero for Othello, with two ideas to speed up learning, tested in a live match against a former world champion.
    OLIVAW: Mastering Othello without Human Knowledge, nor a Penny, 2022 [arXiv] [Trailer of the match].
  • With the right geometric prior, 11 samples are enough to train a generative model for 3D shapes of humans or animals.
    LIMP: Learning Latent Shape Representations with Metric Preservation Priors, 2020 [arXiv] [code] [Oral at ECCV 2020 (2 min and 10 min videos)].
  • The task with the widest gap between human and machine performance in BIG-bench, a collaborative effort to test Language Models.
    Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models, 2022 [arXiv] [SIT task].
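
The invariance behind relative representations is easy to demonstrate on toy data. In the sketch below (random vectors in place of real trained embeddings; all names are illustrative), two latent spaces that differ by a rotation yield identical representations once each point is described by its cosine similarities to a shared set of anchors.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

d, n_anchors, n_points = 64, 32, 10
latents_a = rng.normal(size=(n_points, d))           # embeddings from "network A"
rotation, _ = np.linalg.qr(rng.normal(size=(d, d)))  # an arbitrary isometry
latents_b = latents_a @ rotation                     # the same data as seen by "network B"

anchors_a = rng.normal(size=(n_anchors, d))          # a shared set of anchor samples
anchors_b = anchors_a @ rotation

def relative(z, anchors):
    # Cosine similarity to each anchor is invariant to rotations and
    # rescalings of the latent space, so A and B become comparable.
    return normalize(z) @ normalize(anchors).T

# The two relative representations coincide despite the rotation.
print(np.allclose(relative(latents_a, anchors_a),
                  relative(latents_b, anchors_b)))   # True
```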

Check my Google Scholar profile for a complete list of my articles.

Selected Invited Talks

Advising

I enjoy mentoring younger students and co-advising them on their theses with Prof. Rodolà. If you are super passionate about AI and looking for what to do next or for a thesis topic (maybe an AI agent for a board game involving a cool challenge?), feel free to reach out at [my last name] at di.uniroma1.it!

Students I am advising or have advised on their BSc/MSc theses:

  • Robert Adrian Minut, MSc, Backward LLMs (Now PhD at Sapienza)
  • Alessandro Zirilli, BSc, AlphaZero for Hex
  • Ahmedeo Shokry, MSc, AI and Feynman Diagrams
  • Giovanni Quadraroli, BSc, DeepRL for Space Invaders
  • Guido Maria D'Amely Di Melendugno, MSc, DL for Contract Bridge (Now PhD at Sapienza)

News