opinionatedintelligence.substack.com
Chandra Narayanan, former head of analytics at Facebook and Sequoia operating partner, and Julie Zhuo, ex-VP Product Design at Facebook, published a sharp breakdown of why AI analysis defaults to useless generalities. The core insight is that LLMs fill gaps with statistically common patterns – a product showing 5M MAU and 80% DAU/MAU looks great until you add that sessions last 30 seconds, and the interpretation flips entirely depending on whether it’s a payments app or a social product. Their fix is orthogonal context: feeding the model independent information dimensions like retention cohorts, unit economics, and external drivers rather than going deep on a single metric. The piece reads like a diagnostic manual for anyone who’s been frustrated by bland ChatGPT business analysis and didn’t know why. Prompting technique discussions have mostly focused on creative and coding tasks – structured analytical work like this is still underexplored.
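The orthogonal-context idea can be sketched in a few lines. This is a minimal illustration, not the authors' actual method: the dimension names, metric values, and the `build_prompt` helper are all hypothetical, and the point is only the contrast between a single-metric prompt and one that stacks independent context dimensions.

```python
# Hypothetical sketch of "orthogonal context" prompting.
# All metric names and values below are illustrative, not from the article.

def build_prompt(product: str, context: dict[str, str]) -> str:
    """Assemble an analysis prompt from independent context dimensions."""
    lines = [f"Analyze this {product}. Context:"]
    for dimension, value in context.items():
        lines.append(f"- {dimension}: {value}")
    lines.append("Interpret the engagement numbers in light of ALL dimensions above.")
    return "\n".join(lines)

# Single-metric prompt: the model fills the gaps with generic patterns.
shallow = build_prompt("social product", {
    "MAU": "5M",
    "DAU/MAU": "80%",
})

# Orthogonal context: independent dimensions constrain the interpretation,
# e.g. a 30-second session length flips the read on an 80% DAU/MAU ratio.
rich = build_prompt("social product", {
    "MAU": "5M",
    "DAU/MAU": "80%",
    "median session length": "30 seconds",
    "retention (week-4 cohort)": "22%",
    "unit economics": "CAC $4, LTV $3",
    "external driver": "viral short-video trend last month",
})
```

The design choice mirrors the article's framing: each added dimension is independent of the others (retention, economics, external drivers), so the model cannot paper over a gap with the statistically most common story.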
