Achleshwar Luthra
Ph.D. Student in Computer Science at Texas A&M University.
Focusing on Self-Supervised Learning (SSL) and Multimodal Learning to enable models to learn general-purpose representations.
About Me
I am a Computer Science Ph.D. student at Texas A&M University, advised by Tomer Galanti. My research focuses on the theoretical understanding of SSL and Multimodal Learning. My primary goal is to develop methods that allow AI models to learn robust, general-purpose representations from data and adapt effectively across modalities.
Before my doctoral studies, I worked at the intersection of 3D Computer Vision and AI, focusing on Neural Scene Representations and 3D Scene Understanding. I earned my Master's degree in Computer Vision from the Robotics Institute, CMU, where I was advised by David Held and Ben Eisner on 3D Scene Understanding. During my undergraduate studies at BITS Pilani, I collaborated with Narendra Ahuja on 3D Reconstruction of Animals and with Jitendra Malik's group on Single-View 3D Reconstruction of inanimate objects.
News
April 2026
Directional Neural Collapse in SSL accepted at ICML 2026!
Mar 2026
New preprint released on few-shot transferability in SSL.
Jan 2026
CL~NSCL v2 accepted at ICLR 2026!
Oct 2025
New preprint released on CL-NSCL representation alignment.
Sep 2025
CL~NSCL accepted at NeurIPS 2025!
Jan 2025
Started Ph.D. in Computer Science at Texas A&M University.
May 2024
Received Master's degree in Computer Vision from Carnegie Mellon University.
Feb 2024
Joined Futurewei as a Research Engineer (Graphics Rendering) working on Generalizable Radiance Fields.
Selected Publications
Directional Neural Collapse Explains Few-Shot Transfer in Self-Supervised Learning
Achleshwar Luthra*, Yash Salunkhe*, and Tomer Galanti*
ICML 2026
TL;DR: We identify the geometric properties that explain multitask few-shot adaptation in self-supervised learning.
On The Alignment Between Supervised And Self-Supervised Learning
Achleshwar Luthra, Priyadarsi Mishra, and Tomer Galanti
ICLR 2026
TL;DR: We show that self-supervised contrastive learning is aligned with its supervised counterpart, not just at a loss level but also at a representation level.
Self-Supervised Contrastive Learning Is Approximately Supervised Contrastive Learning
Achleshwar Luthra, Tianbao Yang, and Tomer Galanti
NeurIPS 2025
TL;DR: We theoretically and empirically show that self-supervised contrastive learning operates similarly to its supervised counterpart, and we explain transferability via representation geometry.
Deblur-NSFF: Neural Scene Flow Fields For Blurry Dynamic Scenes
Achleshwar Luthra, Shiva Gantha, Heather Yu, Liang Peng, Zongfang Lin, and Xiyun Song
WACV 2024
TL;DR: We propose a novel method to address motion blur in neural scene representations, enabling high-quality, novel space-time synthesis from blurry videos.
Experience
Research Engineer, GenAI for Rendering Team
Feb 2024 - Nov 2024 | May - Aug 2023
Futurewei Technologies, San Jose, CA
- Developed a generalized TensoRF method for efficient 3D scene understanding and interactive editing, leading to a research submission to the ACM SIGGRAPH I3D Symposium 2025.
- Pioneered Deblur-NSFF, a novel method to address motion blur in neural scene representations, enabling high-quality, novel space-time synthesis from blurry videos; this work was accepted to WACV 2024.
Best way to reach me
I'm always open to discussing research, new ideas, or potential collaborations.