Maciej Rudziński
@rudzinskimaciej.bsky.social
Entrepreneur, pursuer of noise in neurosciences, mechanistic interpretability and interventions in "AI", complexity; concentrated on practical applications of theoretically working solutions. Deeptech, startups.
Anything multiscale, iterative, nonlinear
And to finish the digression: I'm only trying to argue that there are many more dimensions along which we can move in WM, and many more abstraction types, which for me suggests that your direction has the most potential to fit what I have seen
October 26, 2025 at 6:03 PM
I'm adding that because, as a small byproduct of our R&D (so without any statistical validity), we are seeing repeating patterns in how people organise and tie information together, and there are plenty more of them than hyper-/aphantasia
That's probably because my simcluster is highly synesthetic and autistic
October 26, 2025 at 6:01 PM
an addition
I always imagine everything as graphs or higher-order structures, prefer nonlinear twists over kNN as algorithms for the problem at hand, etc.
But over the years I've noticed that this kind of representation (as imaginable intuition) is rare, though it occurs in people with both aphantasia and hyperphantasia
October 26, 2025 at 5:58 PM
Over which movement between abstraction levels can be performed

My assumption has always been that everything is tied together in graphs of graphs, and only some of those ties have directions we can name. That's why graphs of graphs: they cover patchy hierarchies, movements over different kinds of similarity, dimensions, etc. (toy sketch below)
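A minimal sketch of what I mean by a graph of graphs; this is my own toy illustration, all names are made up. Nodes can contain whole graphs, and edges are typed, so "vertical" abstraction moves and "horizontal" similarity moves are just walks along different edge kinds:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    inner: "Graph | None" = None   # a node may itself contain a whole graph

@dataclass
class Graph:
    nodes: list[Node] = field(default_factory=list)
    # edges keyed by relation type: "abstracts", "similar_shape", "same_context", ...
    edges: dict[str, list[tuple[Node, Node]]] = field(default_factory=dict)

    def link(self, kind: str, a: Node, b: Node) -> None:
        self.edges.setdefault(kind, []).append((a, b))

    def neighbours(self, node: Node, kind: str) -> list[Node]:
        # movement along one chosen dimension / kind of similarity
        return [b for a, b in self.edges.get(kind, []) if a is node]
```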
October 26, 2025 at 5:53 PM
I'm not good at explaining things in text 😅 but I will try
Hierarchies assume you can move only vertically.
But your formulation, by pointing toward abstraction and/or grouping type, allows horizontal movement: e.g. each element addition changes the grouped elements' category, and with it the hierarchy... (toy example below)
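A toy illustration of that horizontal movement, with hypothetical categories of my own choosing: adding one element re-categorises the whole group, which reshapes the hierarchy rather than moving up or down inside a fixed one:

```python
def categorise(group: set[str]) -> str:
    fruit = {"apple", "banana", "cherry"}
    if group <= fruit:
        return "fruit"
    return "groceries"          # broader parent once a non-fruit joins

g = {"apple", "banana"}
print(categorise(g))            # -> "fruit"
g.add("soap")
print(categorise(g))            # -> "groceries": same elements, new parent
```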
October 26, 2025 at 5:51 PM
😃 yes exactly what I meant but didn't name well
Lately I'm fascinated by how much we can gain from LLMs simply because they can name things more precisely, as they physically know more names/words/concepts
October 26, 2025 at 3:54 PM
I'm not suggesting any conspiracy theories, just that because history, literature, etc. were written only by some form of elites or unique people, we forget about that and about how the dispersion of ideas works
October 26, 2025 at 3:06 PM
People are not rebelling against System 2 or anything similar.
They have learnt for the first time what the majority's opinion is on most matters, and they follow the majority as animals do.
We just overlook that human history / info speed / opinions were "manipulated" by elites of some sort, with power/media/ideas/...
October 26, 2025 at 3:01 PM
It would work if it were slower, so humans could adapt, but 3 years was not enough, and we entered the opinion-shaping stage, which alignment people prepared years in advance. We have more power in opinion shaping than anyone can grasp or use (which is fun to watch, but sad)
October 26, 2025 at 2:48 PM
You could use a licence similar to Meta's: companies with more than X users can't use it without permission.
But I like Max's take more: your dataset helps to shape the egregores of the future 😉
October 26, 2025 at 11:18 AM
I'm no Earl, but I came here just to congratulate you on an excellent paper; I've been waiting years for something like it.
Not only is it kinda scale-free, but you also suggest lateral (non-hierarchical) movements.
It would be one of the few that account for aphantasia, hyperphantasia and a few other variants
October 26, 2025 at 11:05 AM
I've also spent quite some time thinking about better tokenisers, mostly after exploring logits + attention + embeddings during text processing. I managed to build a dynamic scheduler from that, and I wanted to pursue more precise versions of Meta's tokenizer-free approach, but H-Nets are so elegant
July 19, 2025 at 8:37 PM
Misspelling
A small translation LLM could be used to turn corpus tokens into embeddings, e.g. from its last layer.
These embeddings could be used in place of tokens for a new model trained only on them.
Due to the task differences, possible extra objectives, and the higher dimensionality, it should be more effective (rough sketch below)
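A minimal sketch of that pipeline, assuming a Hugging Face-style API; the model name, layer count and downstream network are placeholders I picked for illustration, not a tested recipe:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# 1) Small, frozen "translator": its last hidden layer maps raw text
#    (any language, misspellings included) into one shared embedding space.
name = "google/mt5-small"  # placeholder multilingual model
tok = AutoTokenizer.from_pretrained(name)
translator = AutoModel.from_pretrained(name).eval()
for p in translator.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def embed(texts: list[str]) -> torch.Tensor:
    batch = tok(texts, padding=True, return_tensors="pt")
    out = translator.encoder(**batch)   # encoder pass only
    return out.last_hidden_state        # (batch, seq, d): "soft tokens"

# 2) The new model consumes these embeddings directly instead of token ids,
#    so it needs no embedding table of its own and can be trained with
#    whatever extra objectives fit the new task.
d_model = translator.config.d_model
new_model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=4,
)
soft_tokens = embed(["Helo wrold", "Hello world"])  # misspelled ≈ correct
hidden = new_model(soft_tokens)
```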
July 19, 2025 at 8:32 PM
You can use any tokenisation with a small translation model, but train the large one on its embeddings, where languages, misspellings, etc. become similar
July 19, 2025 at 7:11 PM
Then what do you think about H-Net? Or similar approaches?
July 19, 2025 at 7:07 PM
If a turtle or a spider can be a pet, then a MechaHitler waifu can also be one 🤷
But it's not as if nearly all humans on Earth get access to a sibling of the same turtle, one that, without understanding it, is asked to manipulate people into engagement
July 19, 2025 at 6:57 PM
I used to compare scheduler settings across models, and it forced me to see how narrow the models are in what they say and how they say it.
Done at scale, this means they create globally shareable narration and tropes that span languages, geographies and interests, along dimensions we don't usually think about
July 19, 2025 at 6:54 PM
Nearly all LLM narratives are an MMO: they are generated from the constrained imagination of an LLM, and each model has its own quirks, psychology, wording distribution, etc.
So even different pets in different narratives share more than random humans do
July 19, 2025 at 6:48 PM
By random chance it can, even one that should not be possible to obtain by processing the available info?
May 15, 2025 at 6:18 PM
Happy to help
May 12, 2025 at 9:44 PM