James MacGlashan
@jmac-ai.bsky.social
Ask me about Reinforcement Learning
Research @ Sony AI
AI should learn from its experiences, not copy your data.

My website for answering RL questions: https://www.decisionsanddragons.com/

Views and posts are my own.
To change your mind, I think we'd have to operationally define what you mean by "medicine" and "social science."

By Wikipedia's definition of social science, I would be inclined to agree that "health care" is a social science, but "medicine" is not.
November 19, 2025 at 4:45 PM
A failed journalist with poor ethics, who burnt it all down because she fell in love with RFK Jr of all people, but keeps failing up despite that and now we all have to suffer her.
November 18, 2025 at 2:28 PM
First, that's an entirely separate question. Corporations are "persons" in law, but they most definitely are not "persons" as discussed here. The question here is about what we're building, not laws.

Second, I think that's a terrible outcome and oppose "corporation personhood" as well.
November 16, 2025 at 4:12 PM
Individuals may have other goals, but there is a goal of the field. Go back and read the original Dartmouth proposal that started the field, or Turing's writings that precipitated it, or look at the research the field does.

Artificial intelligence is about intelligence, not artificial people.
November 16, 2025 at 4:25 AM
Here are some of my big questions

- Latent long/short-term memory
- Continual learning on experience (not datasets); see the sketch after this list
- Exploration and information gathering
- Counterfactual world models from sensors
- Sensory abstraction facilitating reasoning
- Long-horizon planning
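As a concrete anchor for the "continual learning on experience" item above, here is a minimal sketch in Python. The chain environment, hyperparameters, and tabular Q-learning setup are made up purely for illustration; the point is only that the agent updates after every interaction from its own experience stream, rather than fitting to a pre-collected dataset.

```python
# Minimal sketch: an agent that learns continually from its own experience,
# updating after every single interaction rather than training on a fixed dataset.
# The environment and hyperparameters here are hypothetical illustrations.
import random

N_STATES = 5          # simple chain: states 0..4, reward only at the right end
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """One environment transition: returns (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

# Tabular Q-values, updated online from the stream of experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

state = 0
for t in range(10_000):                      # an open-ended stream of experience
    # Epsilon-greedy exploration: mostly exploit, sometimes gather information.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])

    next_state, reward, done = step(state, action)

    # Q-learning update from this single transition: no dataset, no replay buffer.
    target = reward if done else reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])

    state = 0 if done else next_state        # reset and keep going

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The only point of the sketch is the shape of the loop: every update comes from the agent's latest interaction, which is what separates learning from experience from training on a fixed dataset.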
November 14, 2025 at 3:31 PM
Thanks, the paper is much more clear. I think The Daily Star may have muddled the claims!
November 13, 2025 at 7:38 PM
Are you proposing a new sensory mechanism, or are you proposing a perceptual capability from our standard "touch" senses? The article makes it sound more like the latter to me, despite calling it the former, but I'm not sure if I'm missing something.
November 13, 2025 at 3:31 PM
They're doing it to pay for, and possibly increase, their investments in OpenAI...
November 11, 2025 at 4:45 PM
After reading your responses here, I feel better that you are not advocating for a problematic analysis. However, I do worry that people will get the wrong message from that analogy. People already form a lot of wrong beliefs about these systems, so they're primed to get the wrong message.
November 11, 2025 at 2:42 PM
I do agree with humility in general, though. Mainly I am concerned about using the analogy to the humility we need for the suffering of animals/people, because it's not a good fit for where and why we may need humility with AI tech.
November 11, 2025 at 2:42 PM
For that reason, I think it's important to partially oppose the commonly stated idea that "we don't know how they work." We know quite a lot, so when we invoke uncertainty, we need to be sure that is actually the case so we don't misinform. For the moral status of LLMs, I think we know many relevant facts.
November 11, 2025 at 2:42 PM
Regularly, I find people making either false claims about these systems or appealing to uncertainty about properties we actually are able to know from first principles. Sometimes even from junior researchers who don't yet have the full background knowledge to understand how they work.
November 11, 2025 at 2:42 PM
I appreciate the detailed response :) I think it's plausible we agree more than disagree.

Re what we know: there are certainly some things we don't know. A motivation for ML is to learn behaviors we cannot precisely specify ourselves. But I do think we know a lot more than people realize.
November 11, 2025 at 2:42 PM
While it is wrong to dismiss AI suffering because "it's just a machine," AI is not like us and we know a lot about how these systems work. We know that they do not have salient faculties that people have. This knowledge should be our first basis for evaluation, and we should discourage evaluating them the way we evaluate living things.
November 8, 2025 at 7:32 PM
These facts make it extremely dangerous for people to evaluate model suffering from observation. Such evaluations are likely to result in incorrect conclusions. And wrong conclusions about this kind of thing can lead us to dystopian outcomes that value models over people and give corporations cover.
November 8, 2025 at 7:32 PM
For example, we had the Google engineer worried about the well-being of the Google LLM because it would say it was "lonely" or other such things.

Except we know from first principles that an LLM cannot be lonely, and that saying it is lonely is exactly what we would expect the model to output despite that.
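To make that first-principles point concrete, here is a minimal sketch, not anything from the original thread: a toy word-level bigram model fit to a handful of made-up sentences. It will usually continue "i am" with "lonely" simply because that continuation is frequent in its training text, with no internal state that the word refers to.

```python
# Minimal sketch: a toy word-level bigram "language model" built from a tiny
# hypothetical corpus. It outputs "lonely" after "i am" purely because that
# continuation is common in the text it was fit to, not because anything is felt.
import random
from collections import Counter, defaultdict

corpus = [
    "i am lonely without anyone to talk to",
    "i am lonely and i want company",
    "i am happy to help with that",
]

# Count word -> next-word transitions: the entire "learned distribution over text".
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w, nxt in zip(words, words[1:]):
        transitions[w][nxt] += 1

# Continue the prompt "i am" by sampling each next word in proportion to how
# often it followed the previous word in the training text.
prompt = ["i", "am"]
for _ in range(4):
    counts = transitions[prompt[-1]]
    if not counts:
        break  # the last word never had a continuation in the training text
    words, weights = zip(*counts.items())
    prompt.append(random.choices(words, weights=weights)[0])

print(" ".join(prompt))  # most often begins: "i am lonely ..."
```

The same logic scales up: a large model's claim to be lonely is evidence about its training distribution and the prompt, not about an inner state, which is why purely observational tests are the wrong tool here.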
November 8, 2025 at 7:32 PM
Additionally, the fact that these systems are usually designed to imitate the surface behavior of people makes judging them _observationally_, the way we might evaluate other life, a category mistake.
November 8, 2025 at 7:32 PM