Research & Publications
Investigating the frontiers of intelligence through the lens of robustness, efficient adaptation, and multi-modal synthesis.
LLM Post-Training
Focus Area
Optimization strategies for large language models, focusing on Parameter-Efficient Fine-Tuning (PEFT) and continual domain adaptation without catastrophic forgetting.
SMT: Fine-tuning large language models with sparse matrices
TL;DR: Instead of low-rank updates, selecting and fine-tuning only task-relevant sub-matrices lets PEFT outperform LoRA and more closely match full fine-tuning.
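The core idea lends itself to a short illustration. Below is a minimal sketch of block selection by warm-up gradient magnitude; the class and method names are ours for illustration, and the paper's exact selection criterion and training code may differ.

```python
import torch
import torch.nn as nn

class SparseSubmatrixLinear(nn.Module):
    """Hedged sketch of sparse sub-matrix tuning: freeze the pretrained
    weight and train only the k blocks deemed most task-relevant."""

    def __init__(self, weight: torch.Tensor, block: int = 64, k: int = 8):
        super().__init__()
        self.register_buffer("frozen_w", weight.detach().clone())
        self.block = block
        self.k = k
        self.coords: list[tuple[int, int]] = []   # (row, col) of chosen blocks
        self.deltas = nn.ParameterList()          # trainable per-block updates

    def select_blocks(self, grad: torch.Tensor) -> None:
        """Score each (block x block) tile by accumulated warm-up gradient
        magnitude and keep the top-k (the scoring rule is an assumption)."""
        b = self.block
        scores = grad.abs().unfold(0, b, b).unfold(1, b, b).sum(dim=(-1, -2))
        for idx in scores.flatten().topk(self.k).indices.tolist():
            i, j = divmod(idx, scores.shape[1])
            self.coords.append((i, j))
            self.deltas.append(nn.Parameter(torch.zeros(b, b)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Add the trainable deltas onto the frozen weight; gradients flow
        # only into the selected blocks.
        w = self.frozen_w.clone()
        b = self.block
        for (i, j), d in zip(self.coords, self.deltas):
            w[i * b:(i + 1) * b, j * b:(j + 1) * b] += d
        return x @ w.T
```

In practice the blocks would be selected once, after a warm-up pass and before the optimizer is constructed, so that only the selected deltas receive updates.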
Finetuning MoE LLMs with Condenser Experts
TL;DR: We stabilize MoE fine-tuning by eliminating auxiliary losses and preserving the knowledge of rarely activated experts, achieving stronger downstream performance.
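As a rough illustration, one way to realize this recipe is to drop the auxiliary loss term from the objective and freeze experts the router rarely activates on a calibration set; the function below is a hedged sketch (the threshold, counting scheme, and names are our assumptions, not the paper's code).

```python
import torch
import torch.nn as nn

def freeze_rare_experts(experts: nn.ModuleList,
                        activation_counts: torch.Tensor,
                        min_fraction: float = 0.02) -> None:
    """Freeze experts whose routing frequency on a calibration set falls
    below `min_fraction`, so rarely used knowledge is preserved rather
    than overwritten during fine-tuning."""
    freqs = activation_counts.float() / activation_counts.sum()
    for expert, freq in zip(experts, freqs.tolist()):
        if freq < min_fraction:
            for p in expert.parameters():
                p.requires_grad_(False)

# During fine-tuning, the auxiliary term is simply omitted:
#   loss = task_loss            # not: task_loss + aux_weight * balance_loss
```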
Robust AI
Focus Area
Ensuring model safety and consistency through adversarial training, uncertainty quantification, and structural bias mitigation in high-stakes environments; a generic attack sketch follows the paper list below.
On adversarial robustness of large-scale audio visual learning
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Adversarial music: Real world audio adversary against wake-word detection system
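The adversarial-training theme above can be grounded with a standard L-infinity PGD attack (Madry et al.); this is a generic sketch rather than code from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Project gradient-ascent steps on the loss into an eps-ball
    around the clean input x (pixels assumed to lie in [0, 1])."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # ascend the loss
            delta.clamp_(-eps, eps)                    # stay in the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixels valid
    return (x + delta).detach()
```

Adversarial training then simply substitutes `pgd_attack(model, x, y)` outputs for clean batches in the training loop.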
Audio & Multimodal
Focus Area
Cross-modal representation learning and generative audio synthesis, exploring the intersection of visual semantics and spatial audio reconstruction.
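A common starting point for the cross-modal representation learning mentioned here is a symmetric InfoNCE objective over paired audio and visual embeddings; the sketch below is generic and not tied to a specific publication.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(audio_emb, video_emb, temperature=0.07):
    """Symmetric contrastive alignment of paired audio/visual embeddings.
    Matched pairs sit on the diagonal of the similarity matrix."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.T / temperature                      # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    # Pull matched pairs together and push mismatches apart,
    # in both retrieval directions (audio->video and video->audio).
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```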