centagis.bsky.social
@centagis.bsky.social
In the context of academia, sources would be controlled, obviously. If the task even requires sources. So with AI filtering and analyzing objectively correct information, you still deny its ability to be useful?
November 9, 2025 at 2:28 AM
My reply was that AI could be, and increasingly seems to be, an inevitable asset in research. The fields that use it seem to agree. Neither you nor I can speak on whatever example OP has, as literally 0 information was provided. There are cases where synthetic data is just as useful as original data.
November 9, 2025 at 2:20 AM
The point of discussion was whether or not we could trust AI to perform the tasks it’s given in research. I’ve never claimed it could do the research and present its own information. Yes, of course the AI relies on input of some kind. I’ve never disputed that.
November 2, 2025 at 11:08 PM
No. My position is that they have a wide variety of applications. AI does not generate information. Every response you get is the product of gathering information from hundreds of other sources. The information they use was first brought forth by a human.
November 2, 2025 at 11:06 PM
I see they also claim AI could rapidly spread misinformation, which is true. But it would require it being FED misinformation. Which isn’t a criticism of AI but of whatever academic figure is feeding it. They didn’t address its ability to perform the given tasks at all. Makes my case for me.
November 2, 2025 at 10:48 PM
Interesting read. Only a very small part of the paper is relevant to our conversation. Looking through that section I see your shared criticism, but the only actual argument made in its favor is that AI in academia is funded by AI companies. Not a strong argument. Unless I’m missing something?
November 2, 2025 at 10:46 PM
But the outcomes are indistinguishable. You can learn the model just as well as if you’d used actual data.
October 30, 2025 at 11:25 PM
Oh that’s the first time you’ve said that if I’m not mistaken. Could you give an actual example?
October 30, 2025 at 11:18 PM
Appearing believable is legit all that matters in training. If you can’t tell the difference, is there really a difference?
October 30, 2025 at 11:11 PM
In opposition to the experts’ and investors’ positions, sure!
October 30, 2025 at 11:02 PM
I’m not interested in switching this to some broad sociological conversation. But I’m glad we’re both on the same page that AI is already used in science and is expanding its role rapidly with the confidence of the experts who are investing in it!
October 30, 2025 at 11:00 PM
I don’t think the two are even remotely comparable. The tech world and their investors agree with me. AI is receiving unprecedented amounts of investment from tech giants. There are clearly some serious expectations from the world.
October 30, 2025 at 10:54 PM
The claim that AI is not an asset in research even now is just false. It’s already used widely.

When I say advancements, I mean technological ones. Which is good for humans overall. Science will speed up.
October 30, 2025 at 10:48 PM
I’m also talking about the future. I can’t comment on how advanced AI is right now, but it’s growing rapidly, and to not apply the tool in science is just a detriment to us.
October 30, 2025 at 10:38 PM
I think you’re not understanding my position. I’m not saying AI will do the science (though I won’t even rule that out). I’m saying it’ll play an ever-growing role in science. Obviously humans will need to be involved sometimes, but machines can perform lots of tasks.
October 30, 2025 at 10:38 PM
3 continued. AI will likely be far more accurate in this kind of analysis. Analyzing data 1,000 times through 5 different machines should be way more accurate than what people can do. Obviously not right now, but eventually. It’s inevitable.
October 30, 2025 at 10:29 PM
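(Not from the thread, just an illustration of the repeated-runs idea above: a minimal Python sketch that bootstrap-resamples a dataset 1,000 times, so the analysis yields a spread of estimates rather than a single pass. The dataset and all numbers here are hypothetical.)

```python
import numpy as np

# Stand-in dataset; a real analysis would load actual measurements.
rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=500)

# Re-run the same analysis (here, estimating the mean) 1,000 times
# on bootstrap resamples of the data.
estimates = []
for _ in range(1000):
    resample = rng.choice(data, size=data.size, replace=True)
    estimates.append(resample.mean())
estimates = np.array(estimates)

# Many runs give both a point estimate and an uncertainty band,
# which a single pass over the data would not provide.
print(f"mean estimate: {estimates.mean():.3f}")
print(f"95% interval: {np.percentile(estimates, [2.5, 97.5])}")
```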
1. I hear arguments like this all the time, and it’s absurd. One thing doesn’t take away from the other. You can do both.

2. I mean, it can be. Especially math heavy science.

3. Disagree. It could eventually just run the program 1,000 times, or more. In seconds.
October 30, 2025 at 10:27 PM
1. Well, humans lack the ability to analyze dozens of pages of data in a few seconds, whereas AI does not.

2. Well, duh. That doesn’t mean there aren’t big, time-consuming tasks that could be hyper-accelerated by machine learning.

3. I don’t think this sentence makes any sense.
October 30, 2025 at 10:13 PM
You’re taking your experience, where it doesn’t impact quality, and applying it to OP’s experience in science. They’re not comparable. That’s why an actual example (which would be very easy to provide) would have been good, but we didn’t get any.
October 30, 2025 at 10:10 PM
Well, I guess even I see stuff like that. But again, in training, it doesn’t make any difference. The contents of the profiles don’t actually matter. I’ve trained on several different DMS systems this way. It doesn’t compromise anything, it’s cost-effective, and it doesn’t expose information.
October 30, 2025 at 10:09 PM
I agree, but like I said OP didn’t actually give an example. Can’t criticize something you know nothing of. This is the only time I’ve seen synthetic data used for AI.
October 30, 2025 at 11:27 AM
Also, no I’m not a professional in the field. I just like to read.
October 30, 2025 at 3:04 AM
An actual criticism is the context I’d be looking for. An example of it and an argument against it. The post basically just wagged a finger and said AI bad.
October 30, 2025 at 3:02 AM
Synthetic as in fabricated data based on models of real data. They get a comprehensive idea of what the data would look like, and use those models to generate as much synthetic data as needed. It’s incredibly valuable.
October 30, 2025 at 2:52 AM
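(A minimal sketch of the technique described above, under the simplest possible assumptions: fit a model, here a multivariate Gaussian, to real data, then sample as many synthetic rows as needed. This is my own illustration; the data, model choice, and numbers are all hypothetical, not anyone’s actual pipeline.)

```python
import numpy as np

# Hypothetical "real" dataset: 200 rows of two numeric features.
rng = np.random.default_rng(0)
real_data = rng.normal(loc=[50.0, 1.2], scale=[10.0, 0.3], size=(200, 2))

# Build a comprehensive model of what the data looks like;
# here, a multivariate Gaussian fitted to the observed mean and covariance.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Generate as much synthetic data as needed from the fitted model.
synthetic_data = rng.multivariate_normal(mean, cov, size=1000)

# The synthetic rows follow the same statistical shape as the
# originals without exposing any real record.
print(synthetic_data.mean(axis=0), mean)
```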