The paper "On the Theoretical Limitations of Embedding-Based Retrieval," by Orion Weller, Michael Boratko, Iftekhar Naim, and Jinhyuk Lee, examines fundamental constraints of vector embedding models used for retrieval. As embeddings are applied to an ever-wider range of queries across tasks such as reasoning, instruction following, and coding, a common assumption is that failures stem from unrealistic queries and can be overcome with better training and larger models. The authors show instead that theoretical limitations surface even for simple, realistic queries.

Drawing on known results from learning theory, they prove that the number of distinct top-k document subsets an embedding model can return is bounded by the embedding dimension. They confirm this empirically with a "free embedding" optimization that fits query and document vectors directly to the test set: even for k=2, a fixed dimension can realize only a limited number of top-k combinations (see the sketch below).

To probe this bound in a realistic setting, the authors introduce LIMIT, a simple dataset designed to stress-test these theoretical limits, and find that state-of-the-art embedding models fail on it despite the task's simplicity. The study concludes that the single-vector embedding paradigm has inherent limits, highlighting the need for new methods that move past these fundamental barriers in embedding-based retrieval. The work spans Information Retrieval, Computation and Language, and Machine Learning.
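The free-embedding experiment is straightforward to reproduce in spirit. Below is a minimal PyTorch sketch, not the authors' code: it creates one query per document pair and optimizes all query and document vectors directly so that each query's top-2 results are exactly its target pair. The sizes, hinge loss, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of a "free embedding" test: can d-dimensional vectors
# realize every top-2 subset of n documents? (Illustrative, not the paper's code.)
import itertools
import torch

n_docs, dim = 8, 4                      # assumed toy sizes for the demo
pairs = list(itertools.combinations(range(n_docs), 2))

# Optimize the embeddings themselves, with no encoder in between.
docs = torch.randn(n_docs, dim, requires_grad=True)
queries = torch.randn(len(pairs), dim, requires_grad=True)

# Relevance matrix: qrel[q, d] = 1 iff document d is in query q's target pair.
qrel = torch.zeros(len(pairs), n_docs)
for q, (i, j) in enumerate(pairs):
    qrel[q, i] = qrel[q, j] = 1.0

opt = torch.optim.Adam([docs, queries], lr=0.1)
for step in range(2000):
    scores = queries @ docs.T           # dot-product similarity
    # Hinge loss: each query's worst relevant score should beat its
    # best irrelevant score by a margin.
    rel = scores.masked_fill(qrel == 0, float("inf"))
    irr = scores.masked_fill(qrel == 1, float("-inf"))
    margin = rel.min(dim=1).values - irr.max(dim=1).values
    loss = torch.relu(1.0 - margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Count how many pairs are retrieved exactly as a query's top-2 set.
with torch.no_grad():
    top2 = (queries @ docs.T).topk(2, dim=1).indices
    hits = sum(set(t.tolist()) == {i, j} for t, (i, j) in zip(top2, pairs))
print(f"solved {hits}/{len(pairs)} top-2 subsets at dim={dim}")
```

Shrinking `dim` while holding `n_docs` fixed should make the solved count drop even though the vectors are fit directly to the test data, which is the qualitative behavior behind the paper's dimension-dependent bound.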