Randomizer
@randomdms.bsky.social
HCM Expert and Doomscroller Extraordinaire
approaching a ubiquitous adoption point, so new ways to programmatically identify biased data ingestion NEED to happen now, before AI/LLM/etc. reaches consumer market saturation. Quantity over quality is a poor metric. What benefit are 5 billion training sets if 4.9 billion are deceptively biased? (5/5)
April 12, 2025 at 11:21 PM
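The "5/5" post calls for programmatic detection of biased ingestion. A minimal sketch of one such check, assuming statements have already been stance-labeled per claim (the labels, counts, and 10:1 threshold below are illustrative assumptions, not an established method):

```python
from collections import Counter

def skew_ratio(counts):
    """Majority-to-minority ratio over stance counts; higher = more one-sided."""
    vals = sorted(counts.values(), reverse=True)
    if len(vals) < 2 or vals[-1] == 0:
        return float("inf")  # only one stance present: maximally skewed
    return vals[0] / vals[-1]

def flag_skewed(counts, threshold=10.0):
    """Flag a claim whose supporting vs. contradicting evidence is badly imbalanced."""
    return skew_ratio(counts) >= threshold

# Scale from the thread: 50 million "X is healthy" statements vs. 100,000 warnings.
smoking = Counter({"healthy": 50_000_000, "harmful": 100_000})
print(skew_ratio(smoking))   # 500.0
print(flag_skewed(smoking))  # True
```

A 500:1 skew like this would sail past any quantity-based quality bar, which is exactly the thread's point: the filter has to look at the shape of the evidence, not its volume.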
if RLHF were possible for a billion entries, it has risks of its own. Having started my career in HR systems, I can tell you that unconscious bias is quite real. The posts above display just one example of bias risk. However, there are as many potential examples as there are words. We're fast (4/5)
April 12, 2025 at 11:18 PM
a thing is healthy, but only 100,000 instances revealing the danger. In a world where LLMs have replaced search engines, how will an LLM's incorrect response affect the population? We need a vast upscaling of our current research investment into finding new bias-filtering techniques. Even (3/5)
April 12, 2025 at 11:16 PM
bias. Confirmation bias, among other types, can be incredibly insidious. Imagine something popular that is erroneously believed to be healthy (think smoking in the ’50s, leaded gas in the ’70s, or cocaine in the 1910s). Your LLM is trained on 50 million instances of statements that (2/5)
April 12, 2025 at 11:15 PM
My favorites include: Cloud = Remote Server Access, AI (LLM) = Crowd-Sourced Auto-Correct, Blockchain = Write-Only Spreadsheet, SaaS = Subscriptions, SuperSpeed USB 20Gbps 3.2 Gen 2×2 = *sigh*, HCM Object-Oriented Management System = Shhh, just click this button
March 14, 2025 at 8:02 PM