Simon Willison offers a new analogy for large language models (LLMs): an LLM is "a lossy encyclopedia." LLMs contain a vast number of facts compressed into their weights, but the compression is inherently lossy, much as a blurry JPEG discards detail from the original image. The key insight is learning to tell which questions an LLM can answer well and which depend on detail the compression has thrown away.

The idea came from a Hacker News comment asking why an LLM cannot "create a boilerplate Zephyr project skeleton for Pi Pico with st7789 SPI display drivers configured." Simon argues this is a "lossless encyclopedia" question: it demands precise, niche detail that no LLM can be expected to retain without additional context. His solution is to give the model a correct example rather than expect it to recall very specific facts on its own, treating the LLM as a tool that acts on the facts it is given rather than as an exhaustive database.
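To make that workflow concrete, here is a minimal sketch of the "provide a correct example" approach: paste a known-good reference into the prompt and ask the model to adapt it, instead of asking it to reconstruct niche details from its compressed memory. The OpenAI Python client, the model name, and the file path are illustrative assumptions, not details from the original post.

```python
# Sketch: supply the LLM with a known-good example rather than
# relying on its lossy recall of niche configuration details.
# The client, model name, and file path are assumptions for
# illustration, not from Simon Willison's post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A correct, working skeleton you already have on hand, e.g. a
# Zephyr project known to build for a similar board.
reference = open("known_good_zephyr_skeleton.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Adapt the user's reference project. Do not invent "
                "board names, drivers, or options absent from it."
            ),
        },
        {
            "role": "user",
            "content": (
                "Here is a known-good Zephyr project skeleton:\n\n"
                f"{reference}\n\n"
                "Adapt it for a Raspberry Pi Pico with an st7789 "
                "SPI display driver configured."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

With the reference in context, the model only has to transform facts placed in front of it rather than reconstruct them from lossy compression, which is exactly the division of labor the analogy recommends.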