Rachel Flood Heaton
@rachelfloodheaton.bsky.social
Cognitive and perceptual psychologist, industrial designer, & electrical engineer. Assistant Professor of Industrial Design at University of Illinois Urbana-Champaign. I make neurally plausible bio-inspired computational process models of visual cognition.
it would have to be a fundamentally different solution. Different architecture.
November 2, 2025 at 9:41 PM
Yes, but a sufficiently sophisticated machine could outstrategize those who would turn it off before they even realized there was a problem. We're not even close to that, but I think the show Person of Interest did a good job of illustrating orchestration of distant moving parts.
November 2, 2025 at 9:25 PM
That would be impressive.
October 28, 2025 at 3:39 PM
Avoid getting yourself into any situation where you can be kept as a hostage by a bad actor because you have no other options. For example, don't let your whole research program be dependent on a grant with someone who has not proven repeatedly that they are a wonderful and reasonable human being.
October 28, 2025 at 3:35 PM
This summer I got disgusted and left a keynote where someone did this, and I inadvertently made more of a scene than I anticipated because literally everyone else in the huge hall was so rapt that they didn't even seem to be breathing as I walked past them. I guess it was just me.
October 28, 2025 at 1:43 AM
Some verifications are more independent and reliable than others. For example, I don't test my own code or review my own papers. I also don't test my own code or review my own papers and then just have someone visually check what I did. A different mind doing the stress testing is crucial.
October 4, 2025 at 8:18 PM
Well, whether the method is 'sound' depends on how likely an error is in the activity. If one asks an unreliable system to generate the verification tool and then only visually checks that tool, one might be more likely to miss an error than if one had written the tool oneself.
October 4, 2025 at 8:12 PM
Although I submit that even a Fields medalist can make errors, so I wouldn't actually take the result as rock solid without some other sort of independent verification.
October 4, 2025 at 7:52 PM
I am not a fan of current AI but even I acknowledge that it gives the right answers sometimes. The issue is that you have to already be an expert to know whether it was the right answer, and most users are not. So sometimes it causes legit tragedies by pushing naive people towards bad conclusions.
October 4, 2025 at 2:32 PM
What replaced the program was subject-specific tracking that can change dynamically and is based on long-term classroom performance. I think it serves the same purpose with respect to pacing but does it more effectively.
October 2, 2025 at 11:06 PM
It is amazing watching one set of students be treated like special geniuses and the other be treated like actual criminals when the difference might literally have been one point on a test that only some parents knew how to prepare their children for.
October 2, 2025 at 9:59 PM
In my area, gifted placement was best predicted by having parental resources to train for the gifted testing. It perpetuated inequality, especially because there was no other onramp into the program past that initial testing. Kids who should have been considered 'gifted' were excluded too early.
October 2, 2025 at 9:57 PM
I would be unhappy to be forced back to an internal combustion engine car. Riding in a car literally rattling from little explosions going off inside of it seems absurd now, like other ingenious but excessively complicated Victorian era contraptions. EVs are fast, clean, quiet, and easy to maintain.
October 1, 2025 at 12:25 PM
Having 'scratch' space to process pieces of information is useful across most contexts and types of computations. Even in humans this is true, like moving things in and out of working memory. However because this is so generally true, any true relationship to human-AI alignment is likely illusory.
September 30, 2025 at 3:23 PM
The only thing I learned about antennas in my EE program is that antennas are weird and hard to design.
September 30, 2025 at 3:17 AM
If people get this cancer I hope they are 74 years old.
September 26, 2025 at 9:51 PM
Reposted by Rachel Flood Heaton
OpenAI needs at least $500 billion in cash to fund its operations in the next 4 years, and $432.5 billion *on the low end* to meet their other obligations - more than the combined available capital of the top 10 PE firms ($477bn) and US venture capital ($164bn).
www.wheresyoured.at/openai-onetr...
September 26, 2025 at 5:02 PM