Adam Parker
@foreverska.bsky.social
My cry for more compute is not unlike that of a certain mouse with a love for cookies. But I do swear, one more compute is all I need.
December 16, 2025 at 5:20 PM
Academic writing and the act of being judged on it has all but stifled my love for sharing my ideas.
December 15, 2025 at 2:23 AM
Hyperparameter tuning your new ML/RL algorithm against an existing algorithm is team sports for nerds.
December 6, 2025 at 12:00 AM
Me (internal monologue): That's the last time I listen to an LLM's advice on hyperparameter tuning.
Editor's note: It wasn't.
November 25, 2025 at 3:33 PM
A lobster is a tree that reduces to a caterpillar when pruning all leaf nodes. A caterpillar is a tree that reduces to a path graph when pruning all leaf nodes; setting p2 to zero produces a caterpillar.

People of the graphs, what kinda horcrux bs is this? Sounds like a damn autobattler ruleset.
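For anyone who wants to check the horcrux math themselves: those definitions read like the docstring for networkx's random_lobster(n, p1, p2) generator, so here's a minimal sketch under that assumption. prune_leaves is my own helper, not a library function.

```python
# A minimal sketch of the ruleset above, assuming it refers to networkx's
# random_lobster(n, p1, p2) generator (the "p2" suggests so).
# prune_leaves is my own helper, not a library call.
import networkx as nx

def prune_leaves(tree):
    """Return a copy of the tree with every leaf (degree-1 node) removed."""
    pruned = tree.copy()
    pruned.remove_nodes_from([n for n, d in tree.degree() if d == 1])
    return pruned

lobster = nx.random_lobster(10, 0.5, 0.5, seed=1)
caterpillar = prune_leaves(lobster)   # lobster minus leaves -> caterpillar
spine = prune_leaves(caterpillar)     # caterpillar minus leaves -> path graph

# A connected tree whose nodes all have degree <= 2 is a path graph.
assert all(d <= 2 for _, d in spine.degree())
```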
October 1, 2025 at 1:17 AM
GPT-OSS-20B signed off a message with "Happy Debugging!" after writing the whole project I was contemplating building. Oddly self-aware.
September 26, 2025 at 4:06 AM
It's an incredibly weird feeling to have an LLM hallucinate knowledge of your paper. It hallucinated all kinds of vague extensions to the algorithm which might serve as interesting research directions. New tech?
September 12, 2025 at 2:38 PM
In the DFW metroplex, society never really ends no matter how far you drive. Unless you’re on 121. Then it definitely did.
September 8, 2025 at 1:52 PM
Say what?
August 15, 2025 at 3:27 AM
How many pages have been (arguably) wasted explaining that, in the following paper, we define a trajectory exactly the way every other Reinforcement Learning paper does? Or on other similar platitudes, like explaining the exploration-exploitation tradeoff. I've done it. But still.
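(For the record, here's the platitude in question, in the standard notation you'd find in any RL text:)

```latex
% The boilerplate trajectory definition every paper restates, in its usual form:
\tau = (s_0, a_0, r_1, s_1, a_1, r_2, \dots, s_T)
```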
August 3, 2025 at 8:01 PM
How does one become a scholar without ending up with the classic hunch? Is a permanently tilted head a job requirement? Asking for a friend... whose neck is starting to creak.
August 3, 2025 at 4:39 PM
Is there a medal for beating the results in an under-hyperparameterized paper? Is it Nobel or...
July 29, 2025 at 3:00 AM
The metric for a gucci home computer has moved. No longer is it "Can it run Crysis?" The true metric is "How many SB3 DQN Atari agents can you run at once?"
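For calibration, here's roughly what one unit of the new metric looks like: a minimal sketch assuming stable-baselines3 with the Atari extras (ale-py/AutoROM) installed. The env id and settings are illustrative, not a tuned config.

```python
# One "SB3 DQN Atari agent," sketched. Assumes stable-baselines3 plus the
# Atari extras (ale-py, AutoROM) are installed; settings are illustrative.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Standard Atari preprocessing (frame skip, resize, grayscale) + 4-frame stack.
env = VecFrameStack(make_atari_env("BreakoutNoFrameskip-v4", n_envs=1, seed=0),
                    n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    buffer_size=100_000,  # the replay buffer is what actually eats your RAM
    verbose=0,
)
model.learn(total_timesteps=1_000_000)
```

The answer mostly comes down to how many replay buffers fit in RAM and how many CNNs fit on the GPU at once.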
July 25, 2025 at 4:11 PM
ProtonMail now has an LLM service based on Mistral AI. The sunk cost of my GPUs has always driven local to be my default "privacy"-conscious method of interacting with an LLM. But it's charming to see a privacy-focused group like Proton try to take on privacy-focused LLM use.
July 24, 2025 at 4:00 PM
Is there a scientist out there who forms hypotheses by reading papers, not by writing random code until something weird happens? And do they, by chance, find the background section easier than the other totally hypothetical scientist who is definitely not me?
July 11, 2025 at 1:06 AM
I recently had high praise for Tidal and I feel I must balance it out. Whoever over there is running the AI behind the daily playlists: Slint to Reel Big Fish is a bit too far a jump in a single song, even for my musical tastes.
June 4, 2025 at 6:22 PM
I know it's technically cheaper to buy a subscription to journals but it feels cheaper to go back to school.
May 29, 2025 at 2:32 PM
@jetbrains.com I know what you're going for, but I don't always want a new conda environment every time I start a new project in DataSpell. Can we get a "use existing conda environment" option, or why am I wrong for wanting this?
May 17, 2025 at 6:17 PM
My first bit of RL research is finally published. Hopefully the first in a line of exploration-based inquiries. I'll be at FLAIRS next week giving a presentation on it.

journals.flvc.org/FLAIRS/artic...
Biasing Exploration towards Positive Error for Efficient Reinforcement Learning | The International FLAIRS Conference Proceedings
May 17, 2025 at 3:03 PM
I have some personalization in ChatGPT to sharpen its critiquing. But every now and again it inserts a phrase to remind me of its prompt, like a junior actor a bit too excited for their role: "I'll now go over this line by line like the pedant we both know I am." Has anyone else noticed this?
May 6, 2025 at 1:41 AM
The modern Sisyphus: an RL algorithm perfectly capable of learning its environment. The boulder: a non-stationary bandit, meticulously designed to erase all progress.
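In case anyone wants a boulder of their own, here's a sketch of the idea; the class and parameters are my own illustration, not from any library.

```python
# The boulder, sketched: a k-armed bandit whose arm means take a random-walk
# step after every pull, quietly invalidating whatever the agent has learned.
# All names and parameters here are my own illustration.
import numpy as np

class SisypheanBandit:
    def __init__(self, k=10, drift=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.means = self.rng.normal(0.0, 1.0, size=k)  # true arm values
        self.drift = drift

    def pull(self, arm):
        reward = self.rng.normal(self.means[arm], 1.0)
        # Erase all progress: every arm's mean wanders after each pull.
        self.means += self.rng.normal(0.0, self.drift, size=self.means.shape)
        return float(reward)
```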
May 2, 2025 at 2:09 AM
Tidal's monthly summary of my listening activity almost has the vibe of the yearly Wrapped for which Spotify is notorious. However, it is missing one critical component: an uncalled-for assault on my personality. Tidal: "Here's who you listen to." Spotify: "Have you tried smiling?"
May 1, 2025 at 6:36 PM
Advertising your masters program in a rejection email for a PhD is a wild choice. Especially to someone with a masters.
April 11, 2025 at 4:11 PM
Researcher, after a statistically significant improvement over SOTA:
"We're getting published with this one!"
April 6, 2025 at 3:43 PM
Total exploration is a neat ideal. But if modeling indicates that something is a fantastically bad idea, maybe the algorithm just shouldn't try it until that changes? Doing irreparably bad things is kind of a hallmark of human intelligence, but we are looking for something smarter than us, after all.
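Something like this, say. It's my own sketch of the idea, not any published algorithm, and the catastrophe threshold is an illustrative knob.

```python
# Epsilon-greedy that refuses to explore actions the current model rates as
# catastrophically bad. My own sketch of the idea; threshold is illustrative.
import numpy as np

def guarded_epsilon_greedy(q_values, epsilon=0.1, catastrophe=-100.0, rng=None):
    rng = rng or np.random.default_rng()
    safe = np.flatnonzero(q_values > catastrophe)
    if safe.size == 0:
        # Everything currently looks catastrophic; take the least-bad action.
        return int(np.argmax(q_values))
    if rng.random() < epsilon:
        # Explore, but only among actions not currently modeled as disastrous.
        return int(rng.choice(safe))
    return int(np.argmax(q_values))
```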
April 3, 2025 at 10:40 PM