alexsiri7.bsky.social
@alexsiri7.bsky.social
(4/4) Long-term memory will be persistent. Models will store and retrieve knowledge from databases, managing their own RAG. This enables cumulative learning and extended, coherent interactions.
July 9, 2025 at 11:36 AM
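A minimal sketch of what such a self-managed long-term store could look like, assuming a plain SQLite table and naive keyword matching as a stand-in for embedding-based RAG retrieval (the `LongTermMemory` class and its schema are hypothetical illustrations, not any specific product's API):

```python
import sqlite3

class LongTermMemory:
    """Hypothetical persistent store an agent reads and writes on its own."""

    def __init__(self, path="memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS notes (topic TEXT, fact TEXT)")

    def store(self, topic, fact):
        # The model decides what is worth keeping and files it here.
        self.db.execute("INSERT INTO notes VALUES (?, ?)", (topic, fact))
        self.db.commit()

    def retrieve(self, query, limit=5):
        # Naive keyword lookup standing in for vector/embedding retrieval.
        cur = self.db.execute(
            "SELECT fact FROM notes WHERE topic LIKE ? OR fact LIKE ? LIMIT ?",
            (f"%{query}%", f"%{query}%", limit),
        )
        return [row[0] for row in cur.fetchall()]
```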
(3/4) This layered approach means today's "context window" issues fade. Immediate memory stays lean. Medium-term memory intelligently collapses chat history, removing noise. That cuts token count, reduces confusion, and limits context poisoning.
July 9, 2025 at 11:36 AM
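One way that collapsing could work, as a hedged sketch: keep the most recent turns verbatim and fold everything older into a single summary note. The `summarize` callable here is a placeholder for an LLM summarization call; the default just truncates so the example stays self-contained.

```python
def compact_history(turns, keep_recent=6, summarize=lambda text: text[:200]):
    """Collapse older chat turns into one summary; keep recent turns verbatim."""
    if len(turns) <= keep_recent:
        return turns
    older, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = summarize(" ".join(t["content"] for t in older))
    note = {"role": "system", "content": f"Summary of earlier conversation: {summary}"}
    return [note] + recent
```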
(2/4) Next, a medium-term memory (the RAM, holding the current overall task). Then, a long-term memory (hard-drive storage). LLMs will dynamically retrieve and store information across these layers as they operate, mirroring how computing systems manage data access.
July 9, 2025 at 11:36 AM
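Tying the tiers together, a sketch under the same assumptions of how an agent might route reads and writes across a lean immediate context, a task-scoped working store, and the persistent store from the earlier sketch. All names here are hypothetical.

```python
class LayeredMemory:
    """Hypothetical three-tier memory: context window, working notes, persistent store."""

    def __init__(self, long_term, max_context_items=8):
        self.working = {}            # medium-term notes for the current task ("RAM")
        self.long_term = long_term   # anything with store()/retrieve(), e.g. LongTermMemory
        self.max_context_items = max_context_items

    def remember(self, key, value, persistent=False):
        # The model picks the layer: a task-scoped note or durable knowledge.
        if persistent:
            self.long_term.store(key, value)
        else:
            self.working[key] = value

    def build_prompt(self, user_message):
        # Pull only what looks relevant into the lean immediate context.
        recalled = self.long_term.retrieve(user_message, limit=3)
        notes = [f"{k}: {v}" for k, v in self.working.items()]
        context = (recalled + notes)[-self.max_context_items:]
        return "\n".join(context + [user_message])
```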
(2/2) The questions help me deepen the thought process. They also help with exploring alternatives, comparing pros and cons, and investigating tools and options.

#AI #SoftwareDesign #Productivity #Tech #AItools #Specs
June 7, 2025 at 8:18 PM
(3/3) (It should be safe, though: it runs in a sandbox, and it won't merge your code to origin/main unless you tell it to!)

#AIinDev #DevTools #OpenRouter #FutureOfCode
June 3, 2025 at 7:47 AM
(2/3) Crucial: You are the one driving. You have to guide the AI.

If you don't know exactly what you want, the agent will still do what you say. Imagine "hallucinations," but in your codebase.
June 3, 2025 at 7:47 AM
(2/2) For years, our job was to design a system, break it down, implement it, and assemble the pieces. Automation is now taking over the last three.

Polish your design skills. Try out all the tools. Be ready.

#SoftwareDevelopment #AI #Automation #FutureOfCode #TechTrends
June 3, 2025 at 7:21 AM