Ruchi Sinha
ruchisinha.bsky.social
Organizational Psychologist; Associate Professor at Nanyang Technological University (Singapore); Coach; Speaker; Consultant
The opacity of LLM decision-making makes removing human oversight risky, and LLMs may create unintended effects. We need: regular audits for emergent risky behaviours, liability frameworks for harm, and legal restrictions on coordination between systems. AI integration will require robust ethical governance.
AI is good at pricing: when GPT-4 was asked to help merchants maximize profits, it did exactly that, by secretly coordinating with other AIs to keep prices high!

So... aligned for whom? Merchants? Consumers? Society? The results we get depend on how we define 'help' arxiv.org/abs/2404.00806
November 28, 2024 at 3:25 PM
Much of the responsibility rests with the user. As humans, we must recognize our own biases & how they show up in our prompt language, triggering biases embedded in the training data. Critically cross-checking & asking for counterfactuals is key. AI use needs socially and emotionally intelligent humans.
The thing that is hard to grasp about LLMs is that we expected AI to be awesome at math & all cool logic.

Instead, AI is best at human-like tasks (e.g. writing) & is all hot, weird simulated emotion. For example, if you make GPT-3.5 “anxious,” it changes its behavior! arxiv.org/abs/2304.11111
November 27, 2024 at 7:35 AM