Achleshwar Luthra
Ph.D. Student in Computer Science at Texas A&M University.
Focusing on Self-Supervised Learning (SSL) and Multimodal Learning to enable models to learn general-purpose representations.

About Me
I am a Computer Science Ph.D. student at Texas A&M University, advised by Tomer Galanti, where my research focuses on the theoretical understanding of SSL and Multimodal Learning. My primary goal is to develop methods that allow AI models to learn robust, general-purpose representations from data and adapt effectively across various modalities.
Before my doctoral studies, I worked at the intersection of 3D Computer Vision and AI, focusing on Neural Scene Representations and 3D Scene Understanding. I earned my Master’s degree in Computer Vision from the Robotics Institute, CMU, where I was advised by David Held and Ben Eisner on 3D Scene Understanding. During my undergraduate studies at BITS Pilani, I collaborated with Narendra Ahuja on 3D Reconstruction of Animals and with Jitendra Malik’s group on Single-View 3D Reconstruction of inanimate objects.
News
Sep 2025
🎉 Our paper "Self-Supervised Contrastive Learning Is Approximately Supervised Contrastive Learning" was accepted at NeurIPS 2025!
Jan 2025
🎓 Started Ph.D. in Computer Science at Texas A&M University.
May 2024
👨‍🎓 Received my Master’s degree in Computer Vision from Carnegie Mellon University.
Feb 2024
💼 Joined Futurewei as a Research Engineer (Graphics Rendering) working on Generalizable Radiance Fields.
Jan 2024
🖥️ Presented our work on Deblur Neural Scene Flow Fields at WACV 2024.
Selected Publications
On The Alignment Between Supervised And Self-Supervised Learning
Achleshwar Luthra, Priyadarsi Mishra, and Tomer Galanti
Under Review
TL;DR: We show that self-supervised contrastive learning is aligned with its supervised counterpart, not only at the loss level but also at the representation level.
Self-Supervised Contrastive Learning Is Approximately Supervised Contrastive Learning
Achleshwar Luthra, Tianbao Yang, and Tomer Galanti
NeurIPS 2025
TL;DR: We theoretically and empirically show that self-supervised contrastive learning operates similarly to its supervised counterpart, and we explain transferability via representation geometry.
Deblur-NSFF: Neural Scene Flow Fields For Blurry Dynamic Scenes
Achleshwar Luthra, Shiva Gantha, Heather Yu, Liang Peng, Zongfang Lin, and Xiyun Song
WACV 2024
TL;DR: We propose a novel method that addresses motion blur in neural scene representations, enabling high-quality novel space-time view synthesis from blurry videos.
Experience
Research Engineer, GenAI for Rendering Team
Feb 2024 – Nov 2024 | May – Aug 2023
Futurewei Technologies, San Jose, CA
- Developed a generalized TensoRF method for efficient 3D scene understanding and interactive editing, leading to a research submission to the ACM SIGGRAPH I3D Symposium 2025.
- Pioneered Deblur-NSFF, a novel method that addresses motion blur in neural scene representations, enabling high-quality novel space-time view synthesis from blurry videos; this work was accepted to WACV 2024.
Best way to reach me
I'm always open to discussing research, new ideas, or potential collaborations.