<aside>
🔥
Interested? Got an idea? Send an email to [email protected]
</aside>
Keep an eye on the projects; they change often.
Things I’ve been working on: https://scholar.google.it/citations?user=1xd52jMAAAAJ&hl=en
Current Research Areas
Here are some of the research directions I’m working on. These are broad themes that guide multiple ongoing and future projects:
- Factuality and Misinformation Detection: Designing systems and methodologies to assess and improve the truthfulness of AI-generated content, especially in high-stakes domains like health and politics.
- LLMs in Human-AI Collaboration: Studying how large language models can augment or simulate crowd workers, including trust calibration, agreement, and hybrid workflows.
- Interpretability and Model Behavior Analysis: Applying mechanistic interpretability, probing, and counterfactual techniques to better understand how LLMs reason, decide, and fail.
- Retrieval-Augmented Generation (RAG): Exploring how information retrieval and generative models can be combined, optimized, and evaluated for tasks like question answering, fact-checking, and legal reasoning.
- LLMs for Scientific and Industrial Tasks: Adapting language models for specialized applications in healthcare, robotics, optimization, and scientific discovery.
- Fairness, Bias, and Robustness in LLMs: Investigating bias, robustness, and alignment in model behavior, including demographically sensitive tasks and underrepresented domains.
- Multi-agent and Self-play Systems: Leveraging reinforcement learning, AlphaZero-style strategies, and multi-agent simulation to train models that improve through interaction and feedback.
- Synthetic Data and Task Design: Using LLMs to generate high-quality synthetic data, benchmark tasks, and evaluation protocols to advance LLM testing and development.
- Interactive and Real-Time Systems: Building infrastructure for real-time fact-checking, interactive retrieval, autonomous agents, and user-facing applications powered by LLMs.
- Evaluation and Benchmarking of LLMs: Developing new metrics, test collections, and evaluation strategies to measure model performance across truthfulness, reasoning, fairness, and usability dimensions.
Main Collaborations