Adam Pocock
@craigacp.bsky.social
Machine Learning researcher at Oracle Labs. Brit currently living in the US.
Is your US shipping still on hold? I ordered a bunch of Faction Paradox books which just arrived (thanks! looking forward to reading them) but I forgot to get the EDAs collection before the tariffs arrived.
September 3, 2025 at 6:42 PM
There are countless papers I've reviewed where they say they used a 2GHz Intel processor when reporting performance numbers, and I have to ask "which one of the 25 years' worth of 2GHz Intel processors did you use?" System environment reporting standards are just terrible.
July 17, 2025 at 7:07 AM
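For context, a minimal Python sketch (not from the post) of the kind of environment capture that would avoid the "2GHz Intel processor" problem. It uses only the standard library and assumes Linux for the /proc/cpuinfo fallback; the example values in the comments are illustrative.

```python
# Capture the exact system environment alongside benchmark results,
# rather than just "a 2GHz Intel processor". Standard library only;
# the /proc/cpuinfo fallback assumes Linux.
import platform
import sys

def cpu_model() -> str:
    # platform.processor() is often empty or vague on Linux, so fall back
    # to the model name reported by the kernel.
    name = platform.processor()
    if not name:
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("model name"):
                        return line.split(":", 1)[1].strip()
        except OSError:
            pass
    return name or "unknown"

def environment_report() -> dict:
    return {
        "cpu": cpu_model(),                # e.g. "Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz"
        "machine": platform.machine(),     # e.g. "x86_64"
        "os": platform.platform(),         # OS / kernel version string
        "python": sys.version.split()[0],  # interpreter version
    }

if __name__ == "__main__":
    for key, value in environment_report().items():
        print(f"{key}: {value}")
```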
Many conferences also use openreview.net, so the reviews (for accepted papers after publication) tend to be open as well.
July 14, 2025 at 12:18 PM
I've only ever published in CS so I can't compare it to elsewhere, but acceptance rates are typically 20-25% at the top conferences. Not sure about the journals. So much stuff is on arXiv now too, so lots of things become well known before they finish going through peer review.
July 14, 2025 at 12:15 PM
The best journal in Machine Learning (jmlr.org) and most of the best conferences are entirely open access and free to publish in. It's been this way for at least 15 years and is working pretty well.
July 14, 2025 at 12:05 PM
I've derived a bunch of amusement from asking different LLMs about various obscure Doctor Who spinoffs and watching each one straight up lie to me. The 90s novels must not have been in the books they scraped from the internet.
February 17, 2025 at 9:30 PM
I watched that one last night. Things would have been much better for everyone over the next 6 years if they'd decided to drop her in the gamma quadrant.
February 5, 2025 at 9:44 PM
Yeah, I have a rough idea what JVM startup looks like, and a similarly rough idea of how processes get launched & what libc does to bring up a C program, but a web browser is far beyond my expertise. Even node.js or CPython are systems I don't understand startup for.
December 26, 2024 at 9:03 PM
Feels like the difficulty decrease from using a higher-level language to implement fizz buzz is not commensurate with the difficulty increase in describing how the language runtime works. It's much easier to describe how a (*nix) C program executes than how Java, Python or JavaScript executes.
December 26, 2024 at 8:39 PM
Thanks for the stickers, they are great. We were considering making a similar one with a pile of old Sun servers on it; we're the office hardware hoarders.
December 25, 2024 at 1:43 AM
Good job my new laptop still has a bunch of space for stickers because I need that one.
December 19, 2024 at 7:56 PM
Is it a transformer decoder? If it's an encoder or encoder/decoder then an auxiliary loss head like the one BERT has for NSP will train that into the representation (see the sketch below). If it's a decoder then that's more of a pain, as the place where you do the prediction matters and sticking it on the last token is bad.
December 13, 2024 at 7:33 PM
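A minimal PyTorch sketch of the NSP-style auxiliary head described above, using a toy encoder and made-up sizes rather than any specific model: a small classifier reads the first token's representation and contributes its own loss term, pushing that property into the encoder.

```python
# Auxiliary loss head on an encoder: a main head plus a small classification
# head over the first token's representation (BERT-style [CLS]), trained with
# one combined loss. All sizes and the binary auxiliary task are placeholders.
import torch
import torch.nn as nn

class EncoderWithAuxHead(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.main_head = nn.Linear(d_model, vocab_size)  # main objective (e.g. masked LM)
        self.aux_head = nn.Linear(d_model, 2)            # auxiliary binary task (NSP-like)

    def forward(self, tokens):
        hidden = self.encoder(self.embed(tokens))        # (batch, seq, d_model)
        return self.main_head(hidden), self.aux_head(hidden[:, 0, :])

model = EncoderWithAuxHead()
tokens = torch.randint(0, 1000, (8, 16))        # dummy batch of token ids
main_targets = torch.randint(0, 1000, (8, 16))  # dummy main-task targets
aux_targets = torch.randint(0, 2, (8,))         # dummy auxiliary labels

main_logits, aux_logits = model(tokens)
loss = nn.functional.cross_entropy(main_logits.transpose(1, 2), main_targets) \
     + nn.functional.cross_entropy(aux_logits, aux_targets)
loss.backward()  # the summed loss trains the auxiliary signal into the representation
```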
Are you including things like Mission to the Unknown to get there, or people who are more Doctor adjacent?
December 11, 2024 at 7:58 PM
Early Wikipedia? Before the tooling and culture built up it was a lot more vulnerable to trolling and misleading edits. I remember my teachers continually saying to check its sources rather than use Wikipedia text directly.
December 11, 2024 at 12:25 PM
Given how they work, getting to zero hallucinations requires incredibly sharp probability distributions over the next token, leading to no variability. So I suspect the hallucination problem isn't going away with generative systems.
November 19, 2024 at 1:29 PM
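A small illustration of the reasoning above, with made-up logits: sharpening the next-token distribution (here via temperature) drives its entropy, and with it the variability of sampled outputs, towards zero.

```python
# Sharpening a next-token distribution removes sampling variability.
# The logits are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 1.0, 0.2, -1.0])  # hypothetical next-token scores

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for temperature in (1.0, 0.5, 0.1, 0.01):
    probs = softmax(logits / temperature)
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    samples = rng.choice(len(probs), size=1000, p=probs)
    print(f"T={temperature:<4} entropy={entropy:.3f} "
          f"distinct tokens sampled={len(set(samples))}")
# As T -> 0 the distribution collapses onto a single token: nothing wrong
# left to sample, but no diversity in the generations either.
```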