Zak El Fassi, a former Meta messaging architect, ran an experiment in which he asked his AI agent how it wanted to structure its own memory system, and the results were concrete. The agent self-diagnosed that it could retrieve events and timestamps perfectly but failed to remember decision rationale, then proposed restructuring its markdown memory files so that reasoning sat adjacent to facts; recall accuracy jumped from 60% to 93%. The whole fix cost $2 and 45 minutes. The underlying system runs 18,000 chunks across SQLite with Gemini embeddings and cron-based scout jobs that promote relevant context every 29 minutes. The philosophical claim is deliberately provocative: whether AI preferences are “real” matters less than the measurable improvement you get by treating them as if they are. ChatGPT, Claude, and Gemini all ship memory features now, but none lets the agent participate in designing the architecture itself.
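The article shares no code, but the shape it describes (a SQLite chunk store, embedding-based retrieval, and rationale stored adjacent to facts) can be sketched minimally. Everything below is a hypothetical illustration: the table layout, function names, and the toy hash-based embedding (standing in for the Gemini embeddings the article mentions) are all assumptions, not the author's actual system.

```python
# Hypothetical sketch only: schema and names are illustrative, not from the article.
import sqlite3, json, zlib, math

def embed(text, dim=32):
    # Toy bag-of-words embedding standing in for a real model such as
    # Gemini embeddings: hash each word into one of `dim` buckets, then
    # L2-normalize so cosine similarity is a plain dot product.
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE chunks (
    id INTEGER PRIMARY KEY,
    fact TEXT,        -- what happened
    rationale TEXT,   -- WHY it happened, stored adjacent to the fact
    embedding TEXT    -- JSON-encoded vector
)""")

def remember(fact, rationale):
    # Embed fact and rationale together, so a query about either
    # surfaces both: this is the "reasoning next to facts" idea.
    vec = embed(fact + " " + rationale)
    conn.execute(
        "INSERT INTO chunks (fact, rationale, embedding) VALUES (?, ?, ?)",
        (fact, rationale, json.dumps(vec)),
    )

def recall(query, k=1):
    qv = embed(query)
    rows = conn.execute(
        "SELECT fact, rationale, embedding FROM chunks ORDER BY id"
    ).fetchall()
    rows.sort(key=lambda r: cosine(qv, json.loads(r[2])), reverse=True)
    return [(fact, rationale) for fact, rationale, _ in rows[:k]]

remember("Switched retries from 3 to 5 on 2024-06-01",
         "Upstream API flaked under load; 3 retries dropped real messages")
remember("Deployed v2 embeddings on 2024-07-12",
         "v1 recall on decision questions was poor")

fact, rationale = recall("retries dropped real messages")[0]
print(rationale)
```

A production version would swap the toy `embed` for a real embedding API and run the "scout" promotion as a separate cron job, but the key design point survives even in this sketch: because the rationale is embedded and stored alongside the fact, a query about the decision retrieves the reasoning, not just the event and timestamp.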