
Hey there! I am a final-year PhD student in Computer Science at the University of California, Los Angeles, working on Natural Language Processing. I am advised by Prof. Nanyun Peng and Prof. Kai-Wei Chang. I have been awarded the Bloomberg Data Science Ph.D. Fellowship, the Amazon Science Hub Fellowship, and the UCLA Computer Science Fellowship, and received a Best Paper nomination at NAACL 2024. I am on the job market, actively looking for roles on Applied Research teams.
My research focuses on improving the planning and reasoning capabilities of Large Language Models (LLMs), with an emphasis on long-context understanding and efficiency. These include, but are not limited to, (1) parallel exploration for efficient search in planning, (2) personalizing LLM generations through target signals, (3) using fine-tuning and preference-tuning to drive LLM efficiency, and (4) unconstrained LLM generation and context compression to improve long-context reasoning. I have applied these methods and frameworks in my work across a wide range of tasks spanning Information Extraction (IE), Question Answering (QA), and Code Generation; in a diverse set of domains such as biomedicine, social media, news, and code; and in multilingual settings spanning 40+ languages.
I graduated with a Masters degree (MLT) from the Language Technologies Institute (LTI) at Carnegie Mellon University. There, I was advised by Prof. Alan W. Black and worked on topics like style transfer and code-switched dialogue systems. Previously, I received my undergraduate degree, a Bachelor of Technology in Computer Science, from IIT Bombay. As part of my thesis, I worked on language modeling for code-switched languages under the guidance of Prof. Preethi Jyothi.
In Summer 2025, I interned at Bloomberg AI as part of my Bloomberg Data Science Ph.D. Fellowship. Previously, I interned at Meta Gen AI (Summer 2024) and Amazon Lab126 (Summer 2022). Before that, I worked full-time for a year on the Machine Learning team at Amazon.
Updates
| Nov 2025: | Recognized as an Outstanding Reviewer for EMNLP 2025 |
| Nov 2025: | Presented our works DiCoRe and SNaRe for improving low-resource Event Detection at EMNLP 2025 |
| Oct 2025: | Completed my internship in the Code Generation team at Bloomberg AI |
| Sep 2025: | Our works on DiCoRe and SNaRe got accepted at EMNLP 2025 |
| Jun 2025: | Started my internship in the Code Generation team at Bloomberg AI |
| Jun 2025: | ArXived our work on DiCoRe: Enhancing Zero-shot Event Detection via Divergent-Convergent LLM Reasoning [paper] |
| Jun 2025: | Updated our work on FIG to SNaRe: Domain-aware Data Generation for Low-Resource Event Detection [paper] |
| Apr 2025: | Presented our work DyPlan for efficient planning at NAACL 2025 |
| Feb 2025: | ArXived our work on FIG: Forward-Inverse Generation for Low-Resource Domain-specific Event Detection [paper] |
| Jan 2025: | Our work on DyPlan got accepted at NAACL Findings 2025 |
| Nov 2024: | Presented our works SPEED++ and QUDSelect at EMNLP 2024 |
| Nov 2024: | Inaugurated the new UCLA NLP Seminar Series with Yufei, Ashima, and Salman |
| Oct 2024: | ArXived my Meta internship work on Dynamic Strategy Planning for Efficient Question Answering with Large Language Models [paper] |
| Oct 2024: | Featured in UCLA News for the Bloomberg Data Science Ph.D. Fellowship. |
| Oct 2024: | Awarded the Bloomberg Data Science Ph.D. Fellowship for the academic year 2025-2026 |
| Sep 2024: | Completed my internship in the Gen AI team at Meta |
| Sep 2024: | Two of our works on QUDSelect and SPEED++ got accepted at EMNLP 2024 |
| Jul 2024: | ArXived our work on QUDSELECT: Selective Decoding for Questions Under Discussion Parsing [paper] |
| Jun 2024: | Started my internship in the Gen AI team at Meta |
| Jun 2024: | Gave a poster presentation for CLAP at NAACL 2024 |
| Jun 2024: | Gave an oral presentation for SPEED at NAACL 2024 |
| Jun 2024: | Awarded the Amazon Fellowship for the academic year 2024-2025 |
| May 2024: | Our work on TextEE got accepted at ACL Findings 2024 |
| Mar 2024: | Two of my works on CLAP and SPEED got accepted at NAACL 2024 |
| Mar 2024: | Passed my Oral Qualifying Examination and became a PhD Candidate |
| Nov 2023: | ArXived our work on Reevaluation of Event Extraction: Past, Present, and Future Challenges [paper] |
| Oct 2023: | Served as the Program Chair for the SoCal NLP Symposium 2023 |
| Sep 2023: | ArXived our work on Contextual Label Projection for Cross-lingual Structure Extraction [paper] |
| Jul 2023: | Presented our work on Generalizability Benchmarking Dataset for Event Argument Extraction at ACL 2023 [paper] |
| Jun 2023: | Completed Teaching Assistantships for three courses: Machine Learning, Natural Language Processing (undergrad), and Natural Language Processing (graduate) |
| May 2023: | Our work on Generalizability Benchmarking Dataset for Event Argument Extraction accepted at ACL 2023 [paper] |
| May 2023: | Gave a guest lecture for the CS 263 (Natural Language Processing) class at UCLA |
| Jun 2022: | Started my summer internship at Amazon Alexa as an Applied Scientist Intern |
| May 2022: | ArXived our work on Generalizability Benchmarking Dataset [paper] |
| Apr 2022: | Cleared the Written Qualifying Exam (WQE) towards my PhD |
| Oct 2021: | Granted the UCLA PhD Fellowship for the first year |
| Sep 2021: | Started my PhD in Computer Science at UCLA |
| Aug 2021: | Graduated with a Masters degree in Language Technologies from CMU |
| Nov 2020: | Presented our work on linguistic accommodation for code-switched dialogues at CoNLL '20 [paper] |
| Aug 2020: | Represented CMU at the Alexa Socialbot Challenge 3 and reached the Semifinals. |
| Jul 2020: | Our work on politeness transfer got featured in SCS CMU News, TechCrunch, CNET, Pittsburgh Post-Gazette, MSN, Hindustan Times, and Axios |
| Jul 2020: | Presented our work on politeness transfer at ACL '20 [paper] |
| May 2020: | Reached the semifinals of the Alexa SocialBot Challenge 2020 |
| Aug 2019: | Joined the MLT program at LTI, CMU for Fall '19 |
| Mar 2019: | Our work on Named Entity Recognition in a partially and noisily labelled setting got accepted at AMLC '19 |
| Nov 2018: | Presented our work on dual RNNs to improve code-switched language models at EMNLP '18 [paper] |
| Sep 2018: | Received the ISCA Student Grant |
| Sep 2018: | Presented our work on dual language models to improve code-switched speech recognition at Interspeech '18 [paper] |
| Aug 2018: | Graduated from IIT Bombay |
| Jul 2018: | Started working as Applied Scientist at Amazon in the Machine Learning team |
| Dec 2017: | Invited to Microsoft Research India to ideate and develop Indian language technologies |
| May 2017: | Summer Internship at Goldman Sachs |
| May 2016: | Summer Internship at Philips Innovation Center |
| Dec 2015: | Winter Internship at Edelweiss |
| Jul 2015: | Secured branch change to Computer Science |
| May 2015: | Summer Internship at Sportz Interactive |
| Jul 2014: | Joined IIT Bombay |