Siddharth Srivastava
@sidsrivast.bsky.social
Associate Professor at ASU. Research focus on safe and reliable AI and robotics; systems that learn generalizable knowledge for long-term planning and reasoning.

http://siddharthsrivastava.net
If you’re at #ICLR2025, consider stopping by our poster, presented by @danielrbramblett.bsky.social this Friday! We present new methods for objective evaluation of truth maintenance in LLM and LRM translation tasks using formal verifiers, removing the need for labeled datasets.
Can we automatically evaluate the semantic accuracy of LLM translations without human annotation?
Our #ICLR ’25 work introduces a novel approach for assessing truth maintenance in formal language translation.
Joint w/ Rushang Karia, Daksh Dobhal, @sidsrivast.bsky.social.
(1/3)
April 24, 2025 at 1:20 AM
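A minimal sketch of the idea in the post above, assuming a toy propositional-logic setting: a formal check can decide whether a formula produced by an LLM translation preserves the truth conditions of the original, with no labeled data required. The formula encoding and helper functions below are hypothetical and for illustration only; they are not the pipeline from the ICLR ’25 paper.

```python
# Illustrative only: check whether two propositional formulas are logically
# equivalent by exhaustive truth-table enumeration. This stands in for the
# "formal verifier" role described in the post; names are hypothetical.
from itertools import product

def evaluate(formula, assignment):
    """Evaluate a formula, given as a nested tuple like ("and", "p", ("not", "q")),
    under a dict mapping variable names to truth values."""
    if isinstance(formula, str):
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return all(evaluate(a, assignment) for a in args)
    if op == "or":
        return any(evaluate(a, assignment) for a in args)
    if op == "implies":
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    """Collect the variable names appearing in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(a) for a in formula[1:]))

def equivalent(f1, f2):
    """Return True iff f1 and f2 agree under every assignment (no labels needed)."""
    vs = sorted(variables(f1) | variables(f2))
    return all(
        evaluate(f1, dict(zip(vs, vals))) == evaluate(f2, dict(zip(vs, vals)))
        for vals in product([True, False], repeat=len(vs))
    )

# Example: an original formula and a candidate LLM translation of it.
original = ("implies", "rain", "wet")
translated = ("or", ("not", "rain"), "wet")
print(equivalent(original, translated))  # True: truth is maintained
```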
Reposted by Siddharth Srivastava
🚀 Join us for the #GenPlan workshop on March 4th in Philadelphia at #AAAI2025! Learn about generalization in sequential decision-making in AI from an incredible lineup of invited speakers. Don’t miss out! Check out the full program schedule here: aair-lab.github.io/genplan25/in...
GenPlan: Generalization in Planning | AAAI 2025
aair-lab.github.io
February 22, 2025 at 8:14 PM
Reposted by Siddharth Srivastava
#AAAI2025 is almost here!

I’ll co-organize a tutorial with @sidsrivast.bsky.social on User-Driven Capability Assessment of Taskable AI Systems. The schedule is now live, so mark your calendars!

📅 26 February 2025
📍 Room 115A, Pennsylvania Convention Center
🔗 bit.ly/aia25-tutorial
February 22, 2025 at 10:31 PM
Reposted by Siddharth Srivastava
Excited to organize a half-day tutorial at #AAAI2025 on User-Driven Capability Assessment of Taskable AI Systems with @sidsrivast.bsky.social.

📅 26 February 2025
⏱️ 8:30 AM - 12:30 PM EST
📍 AAAI 2025, Philadelphia, USA
🔗 bit.ly/aia25-tutorial

(1/3)
January 18, 2025 at 7:41 AM
Looking fwd to meetups at #NeurIPS ‘24!

@danielrbramblett.bsky.social will be presenting some of our work on user-aligned AI systems that operate reliably under partial observability. Consider stopping by during the Wednesday afternoon poster session (poster 6505)!
Can we align an AI system with users’ expectations when it has limited, noisy information about the real world?
Our #NeurIPS ’24 work allows users to fuse high-level objectives with preferences and constraints based on the agent’s current belief about its environment.
Joint w/ @sidsrivast.bsky.social
December 11, 2024 at 1:54 AM
Reposted by Siddharth Srivastava
@louiseadennis.bsky.social and I are program chairs for #AAMAS2026. I'm looking to try some new ideas for the reviewing process. Any thoughts on your favorite (or least favorite) things other conferences have tried?
December 9, 2024 at 9:51 PM