Maciej Rudziński
@rudzinskimaciej.bsky.social
Entrepreneur, pursuer of noise in neuroscience, mechanistic interpretability and interventions in "AI", complexity; focused on practical applications of theoretically working solutions. Deeptech, startups.
Anything multiscale, iterative, nonlinear
🥲 how to block politics here?
Bluesky has a much better research feed than X, yet I have to scroll through so much random material to get each piece 😓
And I can't block people, as the best writers repost tons of this stuff
October 26, 2025 at 11:23 AM
Iterative self-reference cfg as a basis for prediction refinement?

That's a simplification of a really elegant theory, but the principles shown admit multiple types of implementation (as shown), some even more elegant (to me), e.g. latent-based rather than example-based

So many possibilities in this approach
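
Rough toy sketch of what I mean (entirely my own construction, not the paper's model): a feed-forward guess gets iteratively corrected by feedback from a learned prior. The quadratic costs, weights, and step size are illustrative assumptions.

```python
import numpy as np

# Toy sketch of prediction refinement via feedback from a learned prior.
# Not the paper's model: the quadratic costs, weights, and step size are
# illustrative assumptions only.

def refine(observation, prior_mean, obs_w=1.0, prior_w=0.5, lr=0.2, steps=50):
    """Iteratively pull the estimate toward both the bottom-up observation
    and the learned prior (the 'feedback' term)."""
    x = observation.copy()                      # feed-forward initial guess
    for _ in range(steps):
        grad = obs_w * (x - observation) + prior_w * (x - prior_mean)
        x -= lr * grad                          # feedback correction step
    return x

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = truth + 0.3 * rng.normal(size=100)      # noisy bottom-up input
prior = truth.copy()                            # pretend this prior was learned

refined = refine(noisy, prior)
print("MSE before:", round(float(np.mean((noisy - truth) ** 2)), 4))
print("MSE after: ", round(float(np.mean((refined - truth) ** 2)), 4))
```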
How does our brain excel at complex object recognition, yet get fooled by simple illusory contours? What unifying principle governs all Gestalt laws of perceptual organization?

We may have an answer: integration of learned priors through feedback. New paper with @kenmiller.bsky.social! 🧵
October 26, 2025 at 10:36 AM
I've just tested an EEG system for "emotions"* measurement during the Google keynote
It was so bad my brain nearly froze, and the only moment shown as engaging was due to me tripping over a cable during disgust 🥲

Gemini 2.5 is quite good at interpreting such visualisations (video)

*BIS-BAS
May 23, 2025 at 9:57 PM
Reposted by Maciej Rudziński
What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
May 23, 2025 at 5:00 PM
This seems like a particularly interesting technology
Valve is going toward Neuropixels: 2×4 mm & 18 kHz
www.roadtovr.com/valve-founde...
Valve Founder's Neural Interface Company to Release First Brain Chip This Year
Valve founder Gabe Newell’s neural chip company Starfish Neuroscience announced it’s developing a custom chip designed for next-generation, minimally invasive brain-computer interfaces—and it may be c...
www.roadtovr.com
May 23, 2025 at 8:46 AM
The fact that this operation could now be copied and automated for below €1M is scary

The most expensive part would be the code
Then the AI API
but even at US scale that's pennies, as even the smallest models can match the quality of this campaign

The worst part is that it could be escalated, improved, and hidden better
1/31

Meet project "Good Old USA", the now-unsealed DoJ file on the Russian influence operation in the US to sway opinion on the war in Ukraine.

Something Trump has bought into hook, line, and sinker.

It was held under seal because we got the literal playbook revealing their methods
March 9, 2025 at 11:08 AM
I spend a ton of time with AIs, R1, pro, etc., and study their potential impact on information spread and thinking frames
Main conclusion?
Read books!
Preferably hard, demanding ones written before social media
Regularly, as detoxification from simplified, skipped-over thinking and deciding
...
I’m not kidding: those who delegate all their writing, thinking and creative expression to machines are going to wake up one day and discover that they can no longer write or think. You need to make your own art. You need to keep your brain working. You need to stay human.
January 31, 2025 at 2:06 PM
Reposted by Maciej Rudziński
I wasn’t super excited by o1, but as reasoning models go open-weights I’m starting to see how they make this interesting again. The 2022-24 “just scale up” period was both very effective and very boring.
January 23, 2025 at 3:03 PM
Hierarchical Multiscale in Neurosciences
Honey 🍯 for my brain 🧠🙏
Hi BlueSky fam, for my first post and to celebrate our recent paper being physically published I thought I’d do a summary thread!

This has been my most favourite (and toughest) work to date.

Please help share around!!

www.cell.com/cell/abstrac...
(Reach out if you can’t access)
December 19, 2024 at 11:36 AM
Really nice and deep overview of LLM algos for inference-time scaling (reasoning included)

leehanchung.github.io/blogs/2024/1...

I'm starting to think that we are reinventing beam search in a more complex way, deluding ourselves that there is a deeper theory behind it 🙃
But there is so much more...
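
For reference, here's the thing I claim we keep rediscovering, stripped to its core. The toy scorer standing in for a model is made up; any real LLM would supply the log-probabilities instead.

```python
import math

# Bare-bones beam search over token sequences. `score_next` stands in for
# whatever model assigns log-probabilities to continuations (LLM, n-gram, ...).

def beam_search(score_next, start, beam_width=3, max_len=10, eos="<eos>"):
    beams = [([start], 0.0)]                       # (sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == eos:                     # keep finished hypotheses as-is
                candidates.append((seq, logp))
                continue
            for tok, lp in score_next(seq):
                candidates.append((seq + [tok], logp + lp))
        # prune: keep only the `beam_width` best partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == eos for seq, _ in beams):
            break
    return beams[0]

# Made-up scorer: every step offers two continuations with fixed probabilities.
def toy_scorer(seq):
    return [("a", math.log(0.6)), ("<eos>", math.log(0.4))]

print(beam_search(toy_scorer, "<bos>", beam_width=2, max_len=5))
```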
Reasoning Series, Part 4: Reasoning with Compound AI Systems and Post-Training
Explore how compound AI systems and post-training approaches can make large language models (LLMs) more reliable and scalable by improving their reasoning capabilities. Learn about validation, verific...
leehanchung.github.io
November 25, 2024 at 2:58 PM
Reposted by Maciej Rudziński
Just an anecdote, but when we present people their live EEG-based BIS-BAS indicator and show 8 timescales at once, from ~300 ms to ~30 s, the one they "feel" corresponds to their internal state is the ~2.5 s one, and for people with high introspection ~1 s
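
A minimal sketch of what "one indicator, 8 timescales" could look like in practice. The synthetic signal, the 250 Hz sampling rate, and plain moving-average smoothing are placeholder assumptions, not the actual processing pipeline.

```python
import numpy as np

# Minimal sketch: one raw indicator viewed at 8 timescales at once.
# The synthetic signal, 250 Hz sampling rate, and moving-average smoothing
# are placeholder assumptions, not the actual pipeline.

fs = 250                                                  # samples per second
raw = np.random.default_rng(1).normal(size=fs * 60)       # 1 minute of a fake indicator

def smooth(signal, window_s, fs=fs):
    """Moving-average view of the signal at a given timescale (seconds)."""
    w = max(1, int(window_s * fs))
    kernel = np.ones(w) / w
    return np.convolve(signal, kernel, mode="same")

timescales_s = [0.3, 0.6, 1.2, 2.5, 5, 10, 20, 30]        # ~300 ms to ~30 s
views = {t: smooth(raw, t) for t in timescales_s}

for t, view in views.items():
    print(f"{t:>5} s window -> fluctuation (std) {view.std():.3f}")
```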
November 23, 2024 at 2:58 PM
How do you keep people engaged if they are meant to read for a few hours?
They not only need to see the text, they need to concentrate on its meaning.
The ability to read longer texts is dying,
but since we measure ~emotions, we can see what people react to.
So the answer is a recommender system for text
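
A crude sketch of the idea, with made-up names and a bag-of-words similarity standing in for whatever a real system would use: rank the next candidate passages by similarity to passages the reader reacted well to.

```python
from collections import Counter

# Crude sketch of a reaction-driven text recommender: rank candidate passages
# by similarity to passages the reader reacted positively to. The bag-of-words
# similarity and the reaction scores are made-up stand-ins.

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def recommend(read_passages, reactions, candidates, top_k=3):
    """`reactions`: one measured engagement score per already-read passage."""
    liked = [bow(p) for p, r in zip(read_passages, reactions) if r > 0]
    scored = [(max((cosine(bow(c), l) for l in liked), default=0.0), c) for c in candidates]
    return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]

history = ["feedback and learned priors in cortex", "quarterly tax footnotes", "multiscale brain dynamics"]
scores = [0.8, -0.2, 0.6]                 # stand-ins for EEG-derived reactions
pool = ["hierarchical models of the cortex", "accounting rule changes", "feedback loops in perception"]
print(recommend(history, scores, pool, top_k=2))
```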
November 23, 2024 at 12:51 PM
Fresh sample from an EEG study on SFT materials for LLMs (watched by people).
The sample matches the participant's reading speed and reactions.
Colours represent standardized BIS-BAS reactions of a single person.

The hardest thing is to keep participants interested in these materials 😅

text in Polish
September 19, 2024 at 9:12 PM