Lupin
lp1.bsky.social
💻 Hacker with a short attention span // 📚 Information Security Trainer // 🎓 Epitech Alumni // http://keybase.io/lp1 // Pentest+ / CySA+
Sure, but imho that's an exploitation/circumvention of a text-generation tool to output a valid answer, not a way to make an agent "think"
November 6, 2024 at 2:36 PM
I'm quite sure even today's state-of-the-art LLMs can't do either. Best case, they generate an output with a valid way of counting if you give a precise enough input, because that would be the statistically most probable output — but they are far from doing any kind of thinking.
November 6, 2024 at 2:34 PM