Sanghamitra Dutta
@sanghamd.bsky.social
Assistant Professor at UMD College Park | Past: JPMorgan, CMU, IBM Research, Dataminr, IIT KGP | Trustworthy ML, Interpretability, Fairness, Information Theory, Optimization, Stat ML
✈️ Traveling to #ICML2025. Presenting our paper: Quantifying Prediction Consistency Under Fine-Tuning Multiplicity in Tabular LLMs #TabularLLM #Uncertainty #PredictionConsistency #Robustness
🕟 Wed 16 Jul, 4:30 p.m. to 7 p.m. PDT
➡️ East Exhibition Hall A-B E-901
🔗 https://arxiv.org/abs/2407.04173
July 12, 2025 at 6:29 PM
📢 Knowledge distillation trains smaller student models from complex teacher models. But are all teachers equally helpful? Can we formally quantify useful distillable knowledge? Our paper at #AISTATS2025 explains distillation using Partial Information Decomposition. arxiv.org/abs/2411.07483
Quantifying Knowledge Distillation Using Partial Information Decomposition
Knowledge distillation deploys complex machine learning models in resource-constrained environments by training a smaller student model to emulate internal representations of a complex teacher model. ...
arxiv.org
May 2, 2025 at 10:45 PM
🔈 Sharing our recent paper on "Counterfactual Explanations for Model Ensembles Using Entropic Risk Measures" 🎉 🎉 Accepted at #AAMAS2025
Joint work with: Erfaun Noorani, Pasan Dissanayake, and Faisal Hamman. #Explainability #XAI #AlgorithmicRecourse #EntropicRisk arxiv.org/abs/2503.07934
Counterfactual Explanations for Model Ensembles Using Entropic Risk Measures
Counterfactual explanations indicate the smallest change in input that can translate to a different outcome for a machine learning model. Counterfactuals have generated immense interest in high-stakes...
arxiv.org
April 7, 2025 at 1:16 AM
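The abstract above defines a counterfactual explanation as the smallest input change that flips a model's outcome. As a toy illustration only (a hypothetical two-feature linear classifier, not the paper's ensemble or entropic-risk method), the closed-form minimal L2 counterfactual is just a projection onto the decision boundary:

```python
import numpy as np

# Hypothetical linear classifier: predict 1 if w·x + b > 0.
w = np.array([2.0, -1.0])
b = -1.0

def predict(x):
    return int(w @ x + b > 0)

def counterfactual(x):
    """Smallest L2 change to x that flips this linear model's decision:
    project x onto the boundary w·x + b = 0, then overshoot slightly
    so the new point lands on the other side."""
    margin = w @ x + b
    step = (margin / (w @ w)) * w   # component of x along the normal, up to the boundary
    return x - 1.001 * step         # 0.1% overshoot to actually cross it

x = np.array([0.0, 0.0])            # classified as 0 (margin = -1)
x_cf = counterfactual(x)            # nearest point (in L2) with the opposite label
```

For nonlinear models or model ensembles there is no closed form, which is where optimization-based formulations like the one in the paper come in.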
Excited to share that my PhD student Sachindra Pasan Dissanayake has been awarded the Outstanding Graduate Assistant Award by the Graduate School (top 2% of campus graduate assistants). #proudadvisor

pasandissanayake.github.io
February 19, 2025 at 1:01 AM
Are you interested in serving as a Program Committee member for the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2025)? PC Members are expected to review papers in their area of expertise. Expression of interest form: forms.gle/dmhCPbRBTEzF...
#FAccT2025
Expression of Interest in ACM FAccT 2025 Program Committee
Please fill out this form if you are interested in serving as a Program Committee (PC) member for the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2025). The Program Committ...
forms.gle
December 13, 2024 at 6:50 PM
✈️ Headed to #NeurIPS2024. Presenting our paper "Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory"
🎉 Wed 11 Dec, 4:30 pm, Poster Session 2, East Exhibit Hall A-C #3303. #NeurIPS #XAI #Explainability #Privacy #Counterfactuals
arXiv: arxiv.org/abs/2405.05369
December 10, 2024 at 3:33 AM