Raj Ghugare

rg9360@princeton.edu

I am a PhD student at Princeton, advised by Ben Eysenbach. Previously, I spent 1.5 years at Mila and the Montreal Robotics and AI Lab. Before that, I completed my bachelor's degree at NIT Nagpur. Broadly, my research goal is to develop simpler and more scalable AI algorithms. I am interested in machine learning and reinforcement learning, and I enjoy working on a wide range of topics. Some keywords:

  • long-horizon inference using
    • contrastive and non-contrastive representations
    • generative models
  • characteristics of intelligent reasoning
    • combinatorial / compositional generalization
    • dynamic programming
    • abstractions

Research

Please refer to Google Scholar for a complete list of my publications.

Closing the Gap between TD Learning and Supervised Learning -- A Generalisation Point of View [ICLR 2024]
Raj Ghugare, Matthieu Geist, Glen Berseth, Benjamin Eysenbach

paper, code


This paper studies the link between trajectory stitching and combinatorial generalization, and shows how simpler, supervised-learning-style decision-making algorithms can make significant progress on this form of generalization.
Searching for High-Value Molecules Using Reinforcement Learning and Transformers [ICLR 2024]
Raj Ghugare, Santiago Miret, Adriana Hugessen, Mariano Phielipp, Glen Berseth

website, paper, code


Through extensive experiments spanning datasets with 100 million molecules and 25+ reward functions, we uncover the algorithmic choices essential for efficient search with RL, as well as phenomena such as reward hacking of protein docking scores.
Simplifying Model-based RL: Learning Representations, Latent-space Models and Policies with One Objective [ICLR 2023]
Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

website, paper, code


We present a joint objective for latent-space model-based RL that lower bounds the RL objective. Maximising this bound jointly over the encoder, model, and policy boosts sample efficiency, without relying on techniques like ensembles of Q-networks or high replay ratios.

Last updated: September 2024.