Valentino Maiorca

Apple MLR Intern | ELLIS Ph.D. Student (Sapienza & ISTA)


I’m interested in how the semantics of data shape the latent geometry of neural networks and enable information transfer between them.

I study how to act on this shared geometry, from aligning representational spaces to steering them toward task-relevant properties. The goal is to better understand what models learn and how to control, transfer, or repurpose that knowledge.

I’m always open to collaborations, discussions, and new ideas, so feel free to contact me!


Full CV available here.

Selected Publications

  1. ResiDual Transformer Alignment with Spectral Decomposition
    Lorenzo Basile*, Valentino Maiorca*, Luca Bortolussi, Emanuele Rodolà, and Francesco Locatello
    TMLR, 2025
  2. Relative representations enable zero-shot latent space communication
    Luca Moschella*, Valentino Maiorca*, Marco Fumero, Antonio Norelli, Francesco Locatello, and Emanuele Rodolà
    In ICLR; top 5% (oral), 2023
  3. ASIF: Coupled data turns unimodal models to multimodal without training
    Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, and Francesco Locatello
    In NeurIPS, 2023
  4. Latent Space Translation via Semantic Alignment
    Valentino Maiorca*, Luca Moschella*, Antonio Norelli, Marco Fumero, Francesco Locatello, and Emanuele Rodolà
    In NeurIPS, 2023
  5. Multi-subject neural decoding via relative representations
    Valentino Maiorca, Simone Azeglio, Marco Fumero, Clémentine Dominé, Emanuele Rodolà, and Francesco Locatello
    In COSYNE, 2024