Clément Girault
seascape.bsky.social
WFC is probably too painful to implement yourself, and each tile needs adjacency rules manually attached to it. I’d recommend you grab a map generator off GitHub and tweak it. Or use Unity, which has lots of assets covering mapgen, though since you’ve started with JavaScript you’d have to port your code.
December 23, 2025 at 2:41 PM
For procedural map generation, look up “wave function collapse” (WFC) and find yourself a JavaScript implementation if that’s what you’re working with.
December 23, 2025 at 12:33 AM
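To give a sense of what WFC involves, here is a minimal tile-based sketch in JavaScript. The tileset (sea/coast/land) and its adjacency rules are illustrative assumptions, not anything from this thread, and a real implementation would handle contradictions by backtracking or restarting rather than throwing.

```javascript
// Minimal wave-function-collapse sketch over a 2D grid.
// Tiles and adjacency rules are made up for illustration.
const TILES = ["sea", "coast", "land"];
const ALLOWED = {                      // which tiles may sit side by side
  sea:   new Set(["sea", "coast"]),
  coast: new Set(["sea", "coast", "land"]),
  land:  new Set(["coast", "land"]),
};

function wfc(width, height, rng = Math.random) {
  // Each cell starts in "superposition": every tile is still possible.
  const cells = Array.from({ length: width * height }, () => new Set(TILES));
  const neighbors = (i) => {
    const x = i % width, y = Math.floor(i / width), out = [];
    if (x > 0) out.push(i - 1);
    if (x < width - 1) out.push(i + 1);
    if (y > 0) out.push(i - width);
    if (y < height - 1) out.push(i + width);
    return out;
  };

  for (;;) {
    // Pick the uncollapsed cell with the fewest remaining options (lowest entropy).
    let best = -1;
    for (let i = 0; i < cells.length; i++) {
      if (cells[i].size > 1 && (best === -1 || cells[i].size < cells[best].size)) best = i;
    }
    if (best === -1) break;            // every cell is down to one tile

    // Collapse it to one random remaining tile.
    const options = [...cells[best]];
    cells[best] = new Set([options[Math.floor(rng() * options.length)]]);

    // Propagate: each neighbor keeps only tiles compatible with some option here.
    const stack = [best];
    while (stack.length) {
      const i = stack.pop();
      for (const n of neighbors(i)) {
        const before = cells[n].size;
        cells[n] = new Set([...cells[n]].filter((t) =>
          [...cells[i]].some((s) => ALLOWED[s].has(t))));
        if (cells[n].size === 0) throw new Error("contradiction; restart in real code");
        if (cells[n].size < before) stack.push(n); // keep propagating the shrinkage
      }
    }
  }
  return cells.map((s) => [...s][0]);  // flat array of chosen tiles
}
```

Calling `wfc(8, 8)` yields a 64-tile map where sea never directly touches land, because the constraint propagation forces a coast tile in between. This is also where the manual work the post above mentions comes in: the `ALLOWED` table has to be written (or learned from a sample image) for every tile you add.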
Extending human rights to AIs is indeed absurd. AIs are not humans. But what about another type of right? Just as parents are liable for the damage caused by their children, why not have a new class of rights that holds parent organisations responsible for their AI models? Then you could tax the robots?
May 6, 2025 at 8:52 AM
And yes, I’m making many assumptions and this is all still fantasy! There’s no denying that. But if we assume actual intelligence from an AI model, and if we assume that intelligence will be equal to or greater than ours, then I am also going to assume it may seek equal rights! Good luck denying it that!
April 28, 2025 at 1:14 PM
I’m not trying to argue anything though so apologies if it came across that way. I was simply asking questions that popped into my head, and thought you might have some insight.
April 28, 2025 at 12:54 PM
Your paper seems to imply that LLMs are capable of sentience. Some would disagree and say that, being probabilistic models, they are merely getting better at faking intelligence. A neuro-symbolic model would by definition be a reasoning model, though. One that passes the ARC2 tests would qualify as AGI.
April 28, 2025 at 12:51 PM
Assuming sentience from a neuro-symbolic model capable of AGI, wouldn’t granting it equal rights actually be a safeguard as to what it can and can’t legally do? And couldn’t its “parent” company also be held liable for any damage caused, just like any parent is when their children cause damage?
April 28, 2025 at 11:32 AM
I’ve been designing such a generalist neuro-symbolic model built around a novel algorithm. I would love to run it past you to get your thoughts. At the early stages but it shows promise for self-reasoning across domains. I’ve architected it to be modular. Adding a domain is virtually plug and play.
March 20, 2025 at 2:20 AM
Then again…

Will we ever be able to design a machine with the efficiency of the human brain, which is capable of so much while consuming a tiny amount of power and resources compared to electronic devices?

Could the AGI machines of the future be bio-organic, grown in a lab? With dopamine present?
January 19, 2025 at 12:16 AM