DCWebGuy
@dcwebguy.bsky.social
Also on Xitter: @DCWebGuy

Malware hunter/analyst. PCAP denizen. Old-school webdev. (re-)Tweets mainly infosec IOCs, plus some politics and science. Consilience bias. I hate ideologies.
Reposted by DCWebGuy
This is absolutely nightmarish and dangerous, and more people need to raise hell over them doing this!
Reporter: The FDA has a new AI tool that's intended to speed up drug approvals. But several FDA employees say the new AI helper is making up studies that do not exist. One FDA employee telling us, 'Anything that you don't have time to double check is unreliable. It hallucinates confidently'
July 24, 2025 at 3:54 PM
Reposted by DCWebGuy
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
June 19, 2025 at 11:21 AM
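(A minimal sketch of the claim in the post above, not part of the original thread: a toy "model" that only counts which word follows which in a corpus and samples the next word from those frequencies. The corpus and function names here are made up for illustration; real LLMs are neural networks over tokens, but the point that output probability tracks how often a pattern appears in the training text, not whether it is true, is the same.)

```python
# Toy illustration of probabilistic pattern mimicry: count word-following
# frequencies, then sample the next word from those counts. The model has
# no notion of truth; it just reproduces whichever pattern is most frequent.
import random
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon . "   # a rarer, wrong continuation
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Most samples say "paris" only because that pattern is more frequent in the
# data, not because the model "knows" any geography.
print(Counter(next_word("is") for _ in range(1000)))
```

Running this typically prints roughly a 2:1 split of "paris" over "lyon", mirroring the corpus frequencies rather than any knowledge of facts.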