Brendon LeCount
@brendonlecount.bsky.social
Video game programmer & designer, UCSC & Cal Poly alum, avid swimmer

Mod author, and Development/Project Director at Mechanical Moonworks

https://next.nexusmods.com/profile/BrendonLeCount/mods

https://www.mechanicalmoonworks.com/
Call it Deus Excrement
November 17, 2025 at 4:00 AM
This could be a stealth game if you change it to "anywhere just don't get caught".
November 17, 2025 at 3:59 AM
The other option is to have your character controller determine the character's movement, then adjust the playback speed of the animation to match. That's what I started with, but it didn't look right for the skeletons: their average speed is all over the place as they lurch.
November 9, 2025 at 1:30 AM
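The playback-speed approach described above can be sketched roughly like this (a minimal, engine-agnostic illustration in Python; the function and parameter names are hypothetical, not from any particular engine):

```python
def playback_speed(controller_speed, clip_average_speed,
                   min_scale=0.5, max_scale=2.0):
    """Scale an animation clip's playback rate so foot travel roughly
    matches the character controller's actual movement speed."""
    if clip_average_speed <= 0.0:
        return 1.0  # clip has no net motion; play at normal rate
    scale = controller_speed / clip_average_speed
    # Clamp so clips with erratic average speed (like a lurching
    # skeleton) don't get sped up or slowed down absurdly
    return max(min_scale, min(max_scale, scale))
```

The clamp is exactly where this approach falls apart for lurching gaits: any single scale factor is wrong for most of the clip, which is why root motion can look better for those characters.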
Root motion is part of an animation clip. The animator includes how the character's overall position is changing in addition to the motion of the limbs. You can then use it to determine how the character moves through the gameworld, in a way that ensures their footsteps match their movements.
November 9, 2025 at 1:29 AM
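The root-motion idea above can be sketched as follows (hypothetical, engine-agnostic Python; real engines such as Unity expose a per-frame root delta that you consume much like this):

```python
import math

def apply_root_motion(world_pos, world_yaw, root_delta_pos, root_delta_yaw):
    """Advance a character through the world using the root deltas baked
    into the current animation frame, so footsteps never slide.

    world_pos:      (x, z) position on the ground plane
    world_yaw:      facing angle in radians
    root_delta_pos: (x, z) movement stored in the clip, in clip space
    root_delta_yaw: rotation stored in the clip for this frame
    """
    # Rotate the clip-space delta into world space by the character's facing
    cos_y, sin_y = math.cos(world_yaw), math.sin(world_yaw)
    dx = root_delta_pos[0] * cos_y - root_delta_pos[1] * sin_y
    dz = root_delta_pos[0] * sin_y + root_delta_pos[1] * cos_y
    new_pos = (world_pos[0] + dx, world_pos[1] + dz)
    return new_pos, world_yaw + root_delta_yaw
```

Because the movement comes from the clip itself rather than from a controller pushing the character along, uneven gaits stay in sync with the feet by construction.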
The ghoul really looks like an imp from old school Doom in that shot.
October 12, 2025 at 7:00 PM
Here's a link to the conversation if anyone wants to dig deeper:

chatgpt.com/share/688f92...

(6/6)
ChatGPT - Reality testing concept
August 3, 2025 at 4:46 PM
ChatGPT stated it can't determine when a result has been influenced by developer intervention, partly due to technical limitations, partly philosophical. The philosophical excuse is that people might trust intervention-derived responses more than data-derived ones. Seems a little backwards. (5/?)
August 3, 2025 at 4:45 PM
Interestingly, ChatGPT didn't immediately list developer intervention into the model as a method of reality testing, though it compared it to humans getting feedback from their peers. Unlike humans, however, it can't take that feedback with a grain of salt. It just accepts it as truth. (4/?)
August 3, 2025 at 4:36 PM
Part of the way they try to distinguish factual data from fictional data is by looking at phrasing. This can leave them vulnerable to conspiracy theories and misinformation, because they are often written as statements of fact. (3/?)
August 3, 2025 at 4:30 PM
LLM results are gleaned from data more via statistical analysis and pattern recognition than understanding. They are capable of testing for logical coherence, however. (2/?)
August 3, 2025 at 4:25 PM
Worth noting that part of that 3% is Musk investing in a ginormous AI data center that I'm sure he'll find plenty of nefarious uses for.
August 2, 2025 at 7:29 PM
But... can't we just wait until they're drug addicted, depressed, or homeless and indefinitely institutionalize them?
July 31, 2025 at 4:33 PM
Toughest kid in school sees a kid getting bullied, says: "I'm giving you two hours to stop it or else..."
July 29, 2025 at 2:05 AM
Left to its own devices, AI is too dumb to have an opinion. It just mirrors the data it was trained on. Naive objectivity could be its biggest strength, but... of course not.
July 24, 2025 at 2:06 PM