#OverFitting
And they want to take TikTok away from kids.
December 14, 2025 at 2:25 AM
Re: overfitting in ChatGPT word choice:

"In Nigerian English, it’s more ordinary to speak in a heightened register; words like “delve” are not unusual.

AI is trained on the entire internet… Nigeria has one of the world’s largest English-speaking populations."

free link: archive.ph/2025.12.16-1...
Why Does A.I. Write Like … That?
www.nytimes.com
December 16, 2025 at 2:22 PM
Was heartbreaking to hear about students using ChatGPT to "sound white".

It seems more accurate to say that it instead makes people sound like an uncanny imitation of a well-educated African English speaker, since the models were trained on the English-language internet and are subject to regional overfitting.
Reminded again of the exchange @fractalecho.bsky.social had w/ their student whose writing quality tanked, was noticed, and then, when asked what was going on, revealed they'd been using "AI" to rewrite papers to sound more white.

Our trust in these systems does us real epistemic & hermeneutic harm
Not a chance in the world I’d ever admit this.
December 16, 2025 at 2:37 PM
Why is it a butterfly … (overfitting …)
December 15, 2025 at 11:01 AM
*everyone trying to show off on the same benchmarks like solving specific math problems* hmm, I wonder if this could lead to massive overfitting that no one is talking about
December 12, 2025 at 7:01 PM
New releases for tidymodels! 📦

We've launched filtro and important for supervised feature selection in #RStats. These tools simplify predictor ranking, reduce overfitting, and include advanced dynamic importance calculations.

Learn more: tidyverse.org/blog/2025/11...
December 9, 2025 at 3:21 PM
Dropout is a simple but very effective regularization technique used in neural networks to reduce overfitting. Visualizing the effect of dropout on the performance of neural networks is a great help in grasping the basics. t.co/nHKI9MbYkf
December 12, 2025 at 7:12 PM
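The mechanism behind that post can be shown in a few lines. A minimal NumPy sketch of the standard "inverted dropout" formulation (the `dropout` function, the drop probability, and the array sizes here are illustrative assumptions, not taken from the linked visualization): each unit is zeroed with probability `p_drop` during training, and survivors are rescaled by `1/(1 - p_drop)` so the expected activation is unchanged at test time.

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop,
    then scale survivors by 1/(1 - p_drop) so E[output] == E[input]."""
    if not training or p_drop == 0.0:
        return x
    mask = rng.random(x.shape) >= p_drop  # True = unit survives
    return x * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
x = np.ones(10_000)
y = dropout(x, p_drop=0.5, rng=rng)
print(y.mean())  # hovers near 1.0: the rescaling preserves the mean
```

Because the rescaling happens at train time, inference needs no special handling; the layer is simply the identity when `training=False`.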
yeah it's true, but also what does overfitting even mean in this context? RL -- the other proposed system for answering this -- will also systematically overfit to its operational domain
December 6, 2025 at 9:23 PM
I think one of the bigger unexplored angles of attack in @rechelon 's book is that, at higher levels of abstraction, lossy statistical models are necessary, but such models are very sensitive to overfitting (which corresponds to the "reality is patchy" anti-realism) and the opposite, bias (which […]
Original post on mastodon.social
mastodon.social
December 3, 2025 at 6:18 PM
Overfitting aside, when was the last time a project or paper used Mistral models? Nowadays, they mostly use Qwen.
December 2, 2025 at 8:34 PM
Regularization Techniques — Keeping Your SLM Stable During Training

Learn the essential regularization techniques needed to keep Small Language Model training stable and prevent overfitting, including dropout, weight decay, gradient clipping, data augmentation, and early stopping.
nanolanguagemodels.com
November 30, 2025 at 7:33 PM
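Two of the techniques that post lists, gradient clipping and weight decay, fit into a single SGD update. A minimal NumPy sketch, where the `sgd_step` helper and its default hyperparameters are illustrative assumptions rather than anything from the linked article:

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, weight_decay=0.01, clip_norm=1.0):
    """One SGD update with gradient clipping and decoupled weight decay."""
    # Gradient clipping: rescale the gradient if its norm exceeds clip_norm,
    # which bounds the size of any single update and stabilizes training.
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    # Weight decay: add a pull toward zero, discouraging large weights
    # that often accompany overfitting.
    return w - lr * (grad + weight_decay * w)

w = np.zeros(2)
spiky_grad = np.array([30.0, 40.0])  # norm 50, far above clip_norm
w = sgd_step(w, spiky_grad, lr=1.0, weight_decay=0.0, clip_norm=1.0)
print(np.linalg.norm(w))  # the update's norm is capped at clip_norm
```

Dropout, data augmentation, and early stopping live outside the optimizer step, which is why libraries implement them at the layer, dataset, and training-loop levels respectively.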
Overfitting vs. Underfitting: Making Sense of the Bias-Variance Trade-Off

The best models live in the sweet spot: generalizing well, learning enough, but not too much

Telegram AI Digest
#ai #overfitting #underfitting
towardsdatascience.com
November 23, 2025 at 9:08 AM
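The sweet spot that article describes can be demonstrated with polynomial regression. A small NumPy sketch (the sine target, noise level, and polynomial degrees are illustrative choices, not from the article): a degree-1 fit underfits (high bias), a moderate degree generalizes well, and a high degree chases the noise (high variance), driving training error down while test error climbs.

```python
import numpy as np

rng = np.random.default_rng(42)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, size=20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free target for test error

errors = {}
for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    errors[degree] = (train_mse, test_mse)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The gap between train and test error is the practical symptom: the degree-15 model has the lowest training error of the three but the memorization buys nothing on held-out points.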
In our next Perspectives on Science / Helsinki Philosophy Colloquium seminar on Thursday, Nov 27, Jan-Willem Romeijn (University of Groningen) will give a talk titled “Overfitting in statistics and machine learning”. More here: tint-helsinki.fi/2025/11/17/p...

#philsky #philsci
Perspectives on Science / Helsinki Philosophy Colloquium Seminar 27.11 with Jan-Willem Romeijn – Centre for Philosophy of Social Science
tint-helsinki.fi
November 21, 2025 at 8:33 AM
overfitting-the-local-correctness-maximum.com
November 18, 2025 at 2:18 PM
If you think your model is overfitting — it probably is.
If you think it’s perfect — it definitely is.
(Overfitting, I mean.) 😂

#MachineLearning #AI
#buildinpublic #DataScience #100DaysOfCode
November 12, 2025 at 6:38 PM
This AI overfitting issue is mentioned in Disney's lawsuit against Midjourney. Even if you use a generic word like "superhero," GenAI image models will still generate copyrighted characters because they just go with whatever appeared most frequently in the training data.
November 11, 2025 at 1:13 AM
elvis was basically ai-generated before ai existed: a white man trained on black culture until overfitting occurred
November 7, 2025 at 5:36 PM
it is annoying to me that pollsters keep learning the wrong lessons from their failures

They manipulate models to get the "right" results for the previous outcome, and in the process end up overfitting the existing data, which means they will get the next election wrong as well, just the other way.
November 6, 2025 at 4:17 PM
October 31, 2025 at 7:03 AM
Over-reacting to outliers == Overfitting
October 28, 2025 at 11:47 PM
"We were raised to believe over the decades that machines will outgrow us[, but] automatons are not consuming the globe brilliantly. They are not outsmarting us—they are overfitting on us."

Snehamol Joseph & Jeena Joseph (2025)
Digital decay: when AI eats its own homework doi.org/10.1007/s001...
October 29, 2025 at 9:49 AM
🚨The Formalism-Implementation Gap in RL research🚨

Lots of progress in RL research over the last 10 years, but too much of it is performance-driven => overfitting to benchmarks (like the ALE).

1⃣ Let's advance science of RL
2⃣ Let's be explicit about how benchmarks map to formalism

1/X
October 28, 2025 at 1:56 PM
The NY Post appears unwilling to accept the fact that Mamdani is genuinely & organically popular.

The Medium analysis mentioned employs a convoluted process to analyze the data, and then presents a nearly perfect correlation chart, which is always suspicious and suggests overfitting.
October 28, 2025 at 6:26 PM
He does specify that he only made this chart because statistically illiterate repliers asked for it. The issue is really the overfitting.
October 28, 2025 at 2:04 AM
Like as someone with schizotypy who has almost superhuman pattern-recognition abilities and is prone to overfitting and divining patterns that just aren't there (often resulting in paranoid fits), LLMs are just like me frfr.
October 27, 2025 at 5:07 PM