M. C. Flux PhD
@fluxinflux.bsky.social
Neuroscience and Clinical Psychology PhD
Educator and researcher. My research explores equity, novel therapeutics, and emotion. I’m passionate about communication empowering communities.
Proudly queer and neurodiverse
drfluxphd.com
I don’t think these tools should have been introduced without vetting. The fact that we are figuring this out as we go is one of the most horrifying social experiments humanity has ever conducted.
October 28, 2025 at 6:15 PM
This announcement was supposed to demonstrate how they "solved" this problem. It doesn't say all that much though...
October 27, 2025 at 9:31 PM
...which is still quite troubling. It's hard to compare these numbers to actual base rates of these mental health conditions in society (or in ChatGPT's users) because they aren't formatted in alignment with how those statistics are reported. We should be asking for more information from OpenAI.
October 27, 2025 at 9:30 PM
Well, I think judgement and common sense are appropriate when you have access to all of the information and adequate ways to interpret it. When we have to form incomplete pictures from incomplete data, I like to draw attention to that.
October 27, 2025 at 9:28 PM
Again, though, this is all difficult to interpret without open access to the data. We just have to rely on OpenAI's report here.
October 27, 2025 at 9:19 PM
So this was a bit challenging to parse given the way they presented the data. My understanding was that these stats actually represented users, while the usage figures were lower, since they were a smaller percentage of each user's posts.
October 27, 2025 at 9:19 PM
Are* definitely troubling. Oops.
October 27, 2025 at 8:51 PM
When money is involved, be careful where you are placing your trust when it comes to mental health.
October 27, 2025 at 8:45 PM
This is not a scientific report. It is a self-congratulatory press release about how much better ChatGPT-5 is at responding to mental health concerns (as defined by OpenAI).
OpenAI is neither a research nor clinical institution. It is a company trying to build a financially successful product.
October 27, 2025 at 8:45 PM
Additionally, we have to take OpenAI at their word here. The analyses that are described in this press release are incredibly complex and they don't supply any of the raw data or complete details of their analytics. They recognize the difficulty, but then don't do much to instill trust.
October 27, 2025 at 8:45 PM
Scientists and clinicians don't report statistics for mental health concerns in terms of weekly levels. Generally these data are reported as yearly statistics, which makes comparison really challenging. But these numbers aren't definitely troubling.
October 27, 2025 at 8:45 PM
There is so much to unpack here. These numbers were presented as percentages. OpenAI estimates 800 million weekly users, with 0.07% displaying psychosis/mania and 0.15% displaying plan/intent.
This is challenging to interpret without broader context, and also considering this wasn't peer reviewed.
October 27, 2025 at 8:45 PM
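For a sense of scale, those percentages can be converted into rough headcounts. A quick sketch, using only the figures quoted in the post above (the 800 million weekly-user number is OpenAI's own estimate, and this assumes the percentages apply per user per week as stated):

```python
# Convert OpenAI's reported percentages into rough weekly headcounts.
weekly_users = 800_000_000  # OpenAI's own estimate of weekly active users

rates = {
    "possible signs of psychosis or mania": 0.0007,   # 0.07%
    "explicit suicidal planning or intent": 0.0015,   # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{round(weekly_users * rate):,} users per week")
```

At that scale, even sub-0.1% rates work out to roughly 560,000 and 1,200,000 people per week, respectively, which is why the framing of these percentages matters so much.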
The thing that gets me is that there isn’t even any worry. It’s just “we created this pathway to easy access heroin. Our heroin is the safest, but soon there will be a lot of unsafe heroin. Good luck!”
October 9, 2025 at 5:31 PM
I love that reference!
September 1, 2025 at 7:39 PM
I very much think we are on the same page. Which is why I wanted to engage! These are really complex issues that often get lost when we try to make short slogans to summarize them.
As an educator, I often struggle with how to stay precise AND concise. I’m still learning.
September 1, 2025 at 2:05 AM
I chose a shorthand way of saying that which does lose some accuracy, but it was more pithy for a Bluesky post.
September 1, 2025 at 1:33 AM
The more nuanced point here is that when we reject replicable observations and decisions made on empirical evidence in favor of a stance that we prefer but is unsupported by those observations, then the system of science is no longer self-correcting in the way I discuss in the video.
September 1, 2025 at 1:33 AM
Approaches to ethics are themselves ideological. Even the distinction we draw between subjectivity and objectivity can be seen as the consequence of an ideology.
But we tend to take the stance that if an observation is replicable, it stands on its own in some particular way.
September 1, 2025 at 1:33 AM
Oh definitely. All of this is a broader ideological stance on what science is and its role in society.
This video also didn’t really have a lot to do with that point, it’s the conclusion of one of my general psychology lectures on research ethics. But I’ve been thinking about this lately.
September 1, 2025 at 1:33 AM