adriandolinay.bsky.social
@adriandolinay.bsky.social
An aspiring STEM geek! I am fascinated by all things STEM, but in particular Machine Learning, Statistics, Data Science, Education, Cyber Security and the Energy industry.
For any instructors who want to contribute to OER, here is the form to contact OpenStax admin: openstax.org/contact/
Open-source textbooks available from OpenStax: openstax.org/subjects
OpenStax Assignable: openstax.org/assignable
OpenStax for K-12 Teachers: openstax.org/k12
December 3, 2025 at 3:02 PM
If you are a student or instructor, please share this with your administration, and hopefully we can get more free educational resources into the hands of the people who need them. Thank you, and I hope you enjoy the conversation!
December 3, 2025 at 3:01 PM
I want to thank the OpenStax organization for all their work. Providing textbooks to students, instructors, and schools that are struggling financially is genuinely impactful. I also want to thank all the experts who have contributed to open-source textbooks.
December 3, 2025 at 3:01 PM
In the episode we talk about how OpenStax develops open-source textbooks, how different universities are utilizing open educational resources, the experts who contribute to open-source textbooks, and how open educational resources compare to proprietary textbook publishers.
December 3, 2025 at 3:00 PM
Claiming that “More Articles Are Now Created by AI Than Humans” is a stretch given how limited this observational study is. I would suggest changing the title to something less sensational that reflects the study's uncertainty.
October 26, 2025 at 7:23 PM
To summarize: the authors do not provide the article URLs, the accuracy they claim is potentially overly optimistic, and only a single LLM was used to generate the AI articles used to test detection.
October 26, 2025 at 7:23 PM
3. A peer-reviewed study linked below stated that text generated by Claude was much harder to detect than text generated by GPT-4o. A more robust study would have tested the detector against AI-generated articles from multiple LLMs.
October 26, 2025 at 7:23 PM
3. Within the “Limitations” section, the authors state, “We only evaluate the false negative rate on articles generated by GPT-4o.”
October 26, 2025 at 7:23 PM
2. It is hard to believe that Surfer, a company with just under 100 employees according to LinkedIn, developed an AI detection tool that is more accurate than OpenAI’s own tool.
October 26, 2025 at 7:23 PM
2. New AI classifier for indicating AI-written text: openai.com/index/new-ai...
October 26, 2025 at 7:23 PM
2. This seems to be a suspiciously low false positive rate. OpenAI themselves released an article stating that their AI text detector “incorrectly label[ed] human-written text as AI-written 9% of the time”.
October 26, 2025 at 7:23 PM
2. To mark an article as “AI Generated”, the authors used SurferSEO’s AI detection tool. I could not find peer-reviewed papers on the tool itself, but the claimed false positive rate is 4.2% at the document level.
October 26, 2025 at 7:23 PM
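To make the inflated-accuracy concern concrete, here is a minimal back-of-the-envelope sketch in Python. The only figures taken from the posts above are the two false positive rates (Surfer's claimed 4.2% and OpenAI's reported 9%); the observed flag rate and the true positive rates are hypothetical assumptions chosen purely for illustration.

```python
# Rogan-Gladen style prevalence correction:
#   observed_flag_rate = TPR * p + FPR * (1 - p)
#   =>  p = (observed_flag_rate - FPR) / (TPR - FPR)
# Only the two FPR values come from the discussion above; everything else is a
# hypothetical assumption for illustration.

def corrected_ai_share(observed, tpr, fpr):
    """Estimate the true share of AI-generated articles from detector output."""
    return (observed - fpr) / (tpr - fpr)

observed = 0.52  # hypothetical: just over half of all articles flagged as AI-written

for tpr in (0.95, 0.80):       # hypothetical true positive rates
    for fpr in (0.042, 0.09):  # Surfer's claimed FPR vs OpenAI's reported FPR
        share = corrected_ai_share(observed, tpr, fpr)
        print(f"TPR {tpr:.0%}, FPR {fpr:.1%} -> corrected AI share {share:.1%}")

# Under these assumptions the corrected share swings between roughly 50% and 63%,
# so a "more than half" headline hinges on detector error rates that have not
# been independently verified.
```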
1. If 50% of the dataset consisted of these types of “GlobeNewswire articles”, this would introduce significant bias, and the AI detector would likely overperform relative to the true population of articles (a rough mixture sketch after the next post makes this concrete).
October 26, 2025 at 7:23 PM
1. This is an issue because we do not know whether the articles were selectively chosen. For example, even five years ago, GlobeNewswire was pushing out earnings or dividend announcements where the “articles” were generated automatically from a template.
October 26, 2025 at 7:23 PM
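As a hedged illustration of the composition concern above: the 50% template share and the per-group flag rates below are assumptions rather than figures from the Graphite study, but they show how a heavily templated subset can dominate the corpus-wide “AI-generated” rate.

```python
# Hypothetical mixture showing how templated press-release items can dominate
# the corpus-wide flag rate. All numbers are illustrative assumptions, not
# figures from the Graphite study.

template_share = 0.50       # assumed share of templated GlobeNewswire-style items
flag_rate_template = 0.95   # assumed: the detector flags nearly all templated text
flag_rate_original = 0.15   # assumed: the detector rarely flags organically written articles

overall_flag_rate = (template_share * flag_rate_template
                     + (1 - template_share) * flag_rate_original)

print(f"Corpus-wide flag rate: {overall_flag_rate:.0%}")  # 55% under these assumptions
```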
1. The article "https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans" links to its “raw data”. The claim is that 65k articles were parsed; however, the article URLs and the content of the articles themselves are not provided.
October 26, 2025 at 7:23 PM
I will highlight three issues with the article: 1. Transparency, 2. Potentially inflated accuracy, and 3. Single-model AI generation bias.
October 26, 2025 at 7:23 PM