#OverFitting
fMRI's core challenge is its indirect measurement of brain activity via BOLD signals, often facing low signal-to-noise ratios. This can lead to overfitting and misinterpretation, as non-neuronal factors can influence results. #BrainImaging 2/6
December 17, 2025 at 8:00 PM
The community values benchmarks for AI image models but worries about overfitting—models optimized specifically for existing tests. Robust, less predictable benchmarks are suggested to ensure fair and accurate evaluation. #AIBenchmarks 3/6
December 17, 2025 at 8:00 PM
Overfitting Random Fourier Features: Universal Approximation Property

https://thierrymoudiki.github.io/blog/2025/12/13/r/python/Overfitting-Random-Fourier-Features

#Techtonique #DataScience #Python #rstats #MachineLearning
December 17, 2025 at 5:45 PM
Dual Language Models: Balancing Training Efficiency and Overfitting Resilience
Read more: https://arxiv.org/html/2512.14549v1
December 17, 2025 at 1:42 PM
Ever since seeing examples of it I always think about overfitting

A small original piece could be in there word for word, but it's still not always possible to trace the source back
December 17, 2025 at 12:40 PM
David Samuel, Lucas Georges Gabriel Charpentier
Dual Language Models: Balancing Training Efficiency and Overfitting Resilience
https://arxiv.org/abs/2512.14549
December 17, 2025 at 6:43 AM
Ludwig, Bakas, Barmpas, Zoumpourlis, Adamos, Laskaris, Panagakis, Zafeiriou: EEG-D3: A Solution to the Hidden Overfitting Problem of Deep Learning Models https://arxiv.org/abs/2512.13806 https://arxiv.org/pdf/2512.13806 https://arxiv.org/html/2512.13806
December 17, 2025 at 6:33 AM
David Samuel, Lucas Georges Gabriel Charpentier: Dual Language Models: Balancing Training Efficiency and Overfitting Resilience https://arxiv.org/abs/2512.14549 https://arxiv.org/pdf/2512.14549 https://arxiv.org/html/2512.14549
December 17, 2025 at 6:30 AM
To be clear, I meant humans using AI to comb through years of audio records or hundreds of thousands of trail cam photos. Or humans using machine learning techniques for which there are best practices to avoid overfitting. Not AI agents doing sloppy analysis. No. Eww. Just no.
December 17, 2025 at 2:00 AM
Joe McLean, a Miro product manager, warns that designing only from use cases can cause "overfitting," making designs too complex—similar to our concept of Experience Rot, where excess features hurt UX.

https://bootcamp.uxdesign.cc/overfitting-and-the-problem-with-use-cases-337d9f4bf4d7
Overfitting and the problem with use cases
As a Miro PM with lots of collaborators, I sit in a lot of early-stage product and design reviews (including my own). I hear a question…
bootcamp.uxdesign.cc
December 16, 2025 at 11:00 PM
His problem with Bouie/HCR is that they analogize current developments to historical events, arguing meaningful patterns exist.

Silver, a statistician, thinks history is largely coincidence, and Bouie/HCR are overfitting the data, weaving a phantom narrative to advance their lefty ideology.
also i am a fairly prolific newspaper columnist and yet none of these people seem to be able to accurately describe my views in the least
December 16, 2025 at 8:42 PM
Without proper data curation, more data usually results in worse performance from an ANN (eg. overfitting). Additionally, and easy to miss, they're going to use the drivers responses as an indication of how to navigate the edge case. Without knowing why, we can't say if it's the right response.
December 16, 2025 at 5:50 PM
Was heartbreaking to hear students using ChatGPT to "sound white".

It seems like it's more accurate to say that it instead makes people sound like an uncanny dupe of a well-educated African English-speaker since the models were trained on English internet and subject to regional overfitting.
Reminded again of the exchange @fractalecho.bsky.social had w/ their student whose writing quality tanked, was noticed, and then, when asked what was going on, revealed they'd been using "AI" to rewrite papers to sound more white.

Our trust in these systems does us real epistemic & hermeneutic harm
Not a chance in the world I’d ever admit this.
December 16, 2025 at 2:37 PM
Re: overfitting in ChatGPT word choice:

"In Nigerian English, it’s more ordinary to speak in a heightened register; words like “delve” are not unusual.

AI is trained on the entire internet… Nigeria has one of the world’s largest English-speaking populations."

free link: archive.ph/2025.12.16-1...
Why Does A.I. Write Like … That?
www.nytimes.com
December 16, 2025 at 2:22 PM
*Test for overfitting: a verification of whether the tested model is really capable of solving any complex problem, or is 'just' trained to solve the given problems in the public set.
December 16, 2025 at 12:42 PM
The problems are divided into three subsets:
🔹 Public: 731 instances openly available on the Hugging Face platform
🔹 Commercial: 276 instances sourced from startup repositories; only the results are publicly accessible
🔹 Held-Out: 858 problems that mirror the public set, used to test for overfitting
December 16, 2025 at 12:42 PM
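One way to operationalize the held-out overfitting test described above: compare solve rates on the public and held-out subsets, since a large gap suggests the model was tuned to the public instances rather than learning a general capability. A sketch with the subset sizes from the post but invented solved counts:

```python
# Subset sizes come from the benchmark description; the "solved"
# numbers are hypothetical, for illustration only.
results = {
    "public":   {"total": 731, "solved": 512},
    "held_out": {"total": 858, "solved": 420},
}

def solve_rate(subset):
    return subset["solved"] / subset["total"]

public = solve_rate(results["public"])
held_out = solve_rate(results["held_out"])
gap = public - held_out  # positive gap hints at overfitting to the public set

print(f"public: {public:.1%}, held-out: {held_out:.1%}, gap: {gap:.1%}")
```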
OLA and BFA methodologies make it possible to extract reliable medical knowledge, reducing overfitting and identifying the best algorithm
The quiet revolution of medical machine learning: the methodologies that reveal which algorithm interprets clinical data best
www.fundacionmuyinteresante.org
December 16, 2025 at 8:36 AM
Hayk Amirkhanian, Marco F. Huber: From Overfitting to Reliability: Introducing the Hierarchical Approximate Bayesian Neural Network https://arxiv.org/abs/2512.13111 https://arxiv.org/pdf/2512.13111 https://arxiv.org/html/2512.13111
December 16, 2025 at 6:34 AM
...Your argument about overfitting, in my own words, is that it's actually my data or success criteria failing to align with the thing I care about. So it is with publications and science (or engineering or whatever)...
December 16, 2025 at 1:56 AM
No need to correct the record. I was just having a bit of fun with overfitting and overpublication sharing the same prefix.

But to elaborate a bit...
December 16, 2025 at 1:52 AM
I'm not following the connection between overpublication and overfitting. But I wouldn't be surprised if my blogging lacks temporal consistency... help me out and I'll correct the record.
December 16, 2025 at 1:11 AM
Just poking Ben a bit. He had a piece a while ago that asked what overfitting was and then argued that overfitting does not exist
December 15, 2025 at 10:10 PM
- an incorrect forecasting model (a. the model is built on mistaken assumptions; b. data important to the outcome are not taken into account);
- yes/no instead of a probabilistic approach;
- the overfitting problem (a solution for a particular case is not a general principle);
- correlation ≠ causation;
December 15, 2025 at 9:43 PM
Why a butterfly, though … (overfitting …)
December 15, 2025 at 11:01 AM
Features:
- navigate logs and compare runs
- downsampled summaries designed to fit in LLM context
- Sparkline trends: loss: ▇▆▅▃▂▁ ↓
- Anomaly flags: ⚠️ Overfitting detected
- markdown export
- support for W&B, TensorBoard, JSONL
December 15, 2025 at 3:09 AM
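A sketch of how an anomaly flag like "⚠️ Overfitting detected" might work, assuming the common heuristic of validation loss rising while training loss keeps falling (the loss curves, the function name, and the `patience` rule are all invented for illustration; the actual tool's logic is not described in the post):

```python
# Hypothetical loss histories, one value per epoch.
train_loss = [2.1, 1.4, 0.9, 0.6, 0.4, 0.3]
val_loss   = [2.2, 1.6, 1.2, 1.1, 1.3, 1.6]

def overfitting_flag(train, val, patience=2):
    # Flag if validation loss has risen for `patience` consecutive epochs
    # while training loss kept falling over the same window.
    rising = sum(b > a for a, b in zip(val[-patience - 1:], val[-patience:]))
    falling = sum(b < a for a, b in zip(train[-patience - 1:], train[-patience:]))
    return rising == patience and falling == patience

print("⚠️ Overfitting detected" if overfitting_flag(train_loss, val_loss) else "ok")
```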