Zak El Fassi, a former Meta messaging architect, ran an experiment where he asked his AI agent how it wanted to structure its own memory system, and the results were concrete. Recall accuracy jumped from 60% to 93% after the agent self-diagnosed that it could retrieve events and timestamps perfectly but failed at remembering decision rationale, and proposed restructuring its markdown memory files so reasoning sat adjacent to facts. The whole fix cost $2 and 45 minutes.

The underlying system runs 18,000 chunks across SQLite with Gemini embeddings and cron-based scout jobs promoting relevant context every 29 minutes.

The philosophical claim is deliberately provocative: whether AI preferences are "real" matters less than the measurable improvement you get by treating them as if they are. ChatGPT, Claude, and Gemini all ship memory features now, but none let the agent participate in designing the architecture itself.
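The core fix is easy to picture in code. The article only says the system stores its chunks in SQLite with embeddings; the schema, function names, and keyword-based recall below are illustrative assumptions, not El Fassi's actual implementation. The point being sketched is the agent's proposed restructuring: the rationale lives in the same row as the fact it explains, so retrieving one always surfaces the other.

```python
import sqlite3

# Hypothetical sketch, not the system described in the article.
# The key idea: store decision rationale ADJACENT to the fact,
# so a recall of the event can never drop the "why".

def init_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS memory_chunks (
            id        INTEGER PRIMARY KEY,
            event     TEXT NOT NULL,   -- what happened (the fact)
            timestamp TEXT NOT NULL,   -- when it happened
            rationale TEXT             -- why: stored in the same row
        )
    """)
    return conn

def remember(conn, event, timestamp, rationale):
    conn.execute(
        "INSERT INTO memory_chunks (event, timestamp, rationale) "
        "VALUES (?, ?, ?)",
        (event, timestamp, rationale),
    )

def recall(conn, keyword):
    # Fact, timestamp, and rationale come back together; in the article's
    # framing, the pre-fix failure mode was recalling the first two
    # while losing the third.
    cur = conn.execute(
        "SELECT event, timestamp, rationale FROM memory_chunks "
        "WHERE event LIKE ?",
        (f"%{keyword}%",),
    )
    return cur.fetchall()

conn = init_store()
remember(conn, "Switched retry limit from 3 to 5", "2025-01-10",
         "3 retries kept missing transient rate-limit errors")
for event, ts, why in recall(conn, "retry"):
    print(f"{ts}: {event} (because: {why})")
```

A production version would retrieve by embedding similarity rather than `LIKE`, but the adjacency property, one row carrying both fact and reasoning, is the part the agent's self-diagnosis targeted.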
