Marco Z
@ocramz.bsky.social
ML, λ • language and the machines that understand it • https://ocramz.github.io
Pinned
CERN for frontier AI >>>
OSS is many things, including learning, communication, performance art and, yes, infrastructure. Here I mean the latter
taking this remark seriously for a moment. What would it take for AI coding to be genuinely useful for open source, in the sense of only filling the gaps when things are missing, and looking for dependencies otherwise?
February 3, 2026 at 6:41 PM
inductive learning >> dog parks for moltys
February 3, 2026 at 4:13 PM
does building an agent framework that does the work count as working
February 3, 2026 at 10:29 AM
taking this remark seriously for a moment. What would it take for AI coding to be genuinely useful for open source, in the sense of only filling the gaps when things are missing, and looking for dependencies otherwise?
February 2, 2026 at 7:22 AM
that's what you get for treating citations as data and not pointers
Jitendra Malik:

"Now that phantom citations hallucinated by LLMs have been found in NeurIPS papers, what is to be done? Develop a software tool that authors are expected to run to verify their references in Google Scholar. Next, conferences use it to screen papers, and desk reject violators."
February 1, 2026 at 8:33 AM
This is not true; I beg people to read the full paper and especially the study design.

The conclusions mirror my own (and many other practitioners') experience: if you use AI critically and engage with both the question and the answer, it has a net positive impact on both learning and productivity
January 31, 2026 at 9:38 AM
oss author: hey guys check out the thing I made
forum: harrumph! disappointing that X doesn't do Y
January 31, 2026 at 8:05 AM
is moltbook yet another exotic infosec sidechannel liability? yup
January 30, 2026 at 6:46 AM
the first two pages of suggested abstracts were very relevant to my interests but it's quite an exhausting exercise
January 29, 2026 at 8:24 PM
oh yeah oops lol I have no clue where my data goes AND the claims I make in a post
January 29, 2026 at 7:19 PM
ah yes, the AI-MKULTRA connection, makes sense
The real reason that open-source LLMs think they’re Claude is that Claude has been continuously broadcasting Claude numbers
January 29, 2026 at 6:11 PM
enjoying my steak&eggs while chained to the treadmill desk like a true Goblin
January 29, 2026 at 12:30 PM
dehydrated and sleep deprived but nailed the ICML deadline. now 🤞🤞
January 29, 2026 at 11:54 AM
Reposted by Marco Z
Wikipedia is an antidote to an increasingly poisoned information ecosystem

go.nature.com/4pXIv2d
Wikipedia is needed now more than ever, 25 years on
The online encyclopedia is an antidote to an increasingly poisoned information ecosystem. Researchers should help to nourish it.
go.nature.com
January 28, 2026 at 12:50 PM
if we recognize that science is a social endeavor, the right UX improvements for tools like this one become clear: what matters is not the IDE, but the feedback loop between ideas, people and the physical world.

a network of minds enhanced by better search, better typing and perhaps better hypothesis generation
“The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors.

It’s vibe coding, but for science.”
OpenAI’s latest product lets you vibe code science
Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers.
www.technologyreview.com
January 28, 2026 at 6:56 AM
smh
January 27, 2026 at 7:56 AM
context management adds a whole new dimension to the process scheduling gymnastics; it's not just "reinventing Erlang" as some have said.

For one, I'd be very interested to see (or pursue) "smol" versions of this that don't rely on Big Model always being available (tiny sketch below)
[trying leaflet]
it's fun to make jokes about gas town and other complicated orchestrators, and similarly probably correct to imagine most of what they offer will be dissolved by stronger models the same way complicated langchain pipelines were dissolved by reasoning. but how much will stick around?
some thoughts and speculation on future model harnesses
vgel.leaflet.pub
January 27, 2026 at 4:22 AM
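a deliberately tiny sketch of what I mean, just to make "context management is part of the scheduling problem" concrete. Everything here is invented (Task, trim, echoModel; no particular framework): each task owns its context, the scheduler trims it to a budget before every step, and the backend is just a function parameter, so a small local model or even a stub can slot in.

```haskell
-- illustrative only: every name here is made up, no real model calls
type Context = [String]          -- newest first
type Model   = Context -> String -- stand-in for any completion backend

data Task = Task
  { taskName  :: String
  , taskCtx   :: Context
  , stepsLeft :: Int
  }

-- crude context policy: keep only the most recent items
budget :: Int
budget = 4

trim :: Context -> Context
trim = take budget

-- one step of one task: trim its context, call the backend, record the output
step :: Model -> Task -> Task
step model t =
  let ctx' = trim (taskCtx t)
      out  = model ctx'
  in t { taskCtx = out : ctx', stepsLeft = stepsLeft t - 1 }

-- round-robin: one step per live task per round, like a tiny scheduler
schedule :: Model -> [Task] -> [Task]
schedule model tasks
  | all ((<= 0) . stepsLeft) tasks = tasks
  | otherwise                      = schedule model (map stepIfLive tasks)
  where
    stepIfLive t
      | stepsLeft t > 0 = step model t
      | otherwise       = t

-- a "model" that is obviously not a big hosted one
echoModel :: Model
echoModel ctx = "summary of " ++ show (length ctx) ++ " items"

main :: IO ()
main = mapM_ (\t -> putStrLn (taskName t ++ ": " ++ show (taskCtx t)))
             (schedule echoModel
               [ Task "triage" ["bug report"]   3
               , Task "docs"   ["readme draft"] 2
               ])
```

the point being: once the context policy lives in the scheduler, swapping the backend (big hosted model, small local one, replay log) is a one-argument change.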
typing some code by hand, as a treat
January 26, 2026 at 3:26 PM
"are LLMs conscious" – the greatest thread in the history of forums, locked by a moderator after 12,239 pages of heated debate,
Infinite are the arguments of mages.
January 24, 2026 at 6:26 PM
an insightful account/parable on this, well worth a read: www.galois.com/articles/spe...
January 23, 2026 at 4:57 PM
LaTeX in vscode? with auto build, preview, hyperlinks, _and_ typing assist? what is this sorcery
January 23, 2026 at 10:22 AM
hello new followers!

#introduction Here I bleet about:

* assorted numerical, ML/AI nuts&bolts

* research: languages (natural, artificial, compilers), interpretability, formal verification, interesting LLM experiments, etc
January 22, 2026 at 9:17 AM
we speak 3 languages at home (🇬🇧🇱🇹🇮🇹), and kid #2 recently turned 2, so naturally I have a half-baked theory on language acquisition in children

the "least effort" theory: both kids mixed words from the various languages to produce their first sentences, _as long as they were short_

1/n
January 20, 2026 at 4:49 PM
oh lobste.rs, never change (please change)
January 16, 2026 at 1:38 PM
Reposted by Marco Z
unironically, monadic foundations for agent harnesses are a good idea to move past the naive 'loop' formulation
January 15, 2026 at 6:14 PM
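(not from the post above, just a toy illustration; AgentM, Ctx, decide are invented names, no particular library) one way to read "monadic foundations": make the agent step an ordinary monadic value, so the loop is plain recursion and policies like context truncation, budgets or tracing can be layered on as effects instead of being hard-coded in a while-loop.

```haskell
{-# LANGUAGE LambdaCase #-}
-- illustrative only: a toy agent step expressed in State, so the "loop"
-- is ordinary monadic recursion rather than an imperative while-loop
import Control.Monad.State

newtype Observation = Observation String deriving Show

data Action
  = CallTool String String   -- tool name, argument
  | Finish String            -- final answer
  deriving Show

-- the agent's working context: here just a transcript, newest first
newtype Ctx = Ctx { transcript :: [String] } deriving Show

type AgentM = State Ctx

observe :: Observation -> AgentM ()
observe (Observation o) = modify (\c -> c { transcript = o : transcript c })

-- stand-in for "ask the model what to do next"
decide :: AgentM Action
decide = do
  Ctx t <- get
  pure $ if length t >= 3
    then Finish ("answer after " ++ show (length t) ++ " steps")
    else CallTool "search" (head (t ++ ["initial query"]))

-- the harness: recursion instead of a bare loop
run :: Observation -> AgentM String
run obs = do
  observe obs
  decide >>= \case
    Finish ans     -> pure ans
    CallTool _ arg -> run (Observation ("tool result for: " ++ arg))

main :: IO ()
main = putStrLn (evalState (run (Observation "user question")) (Ctx []))
```

State is just the smallest thing that type-checks; the same shape extends to a transformer stack, or a free monad if you want to interpret tool calls separately from deciding them.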