The paper "Adaptive LLM Routing under Budget Constraints," submitted on August 28, 2025, and accepted at EMNLP 2025 (Findings), addresses the challenge of efficiently selecting among large language models (LLMs) for natural language processing tasks, given the models' varying capabilities and costs. Traditional supervised LLM routing assumes full knowledge of the best model-query pairings, which is impractical in evolving real-world scenarios.

The authors instead frame LLM routing as a contextual bandit problem, enabling adaptive decision-making from bandit feedback without exhaustively querying every LLM for each input. They develop a shared embedding space that aligns query and LLM embeddings so that their affinity can be scored directly; this space is initially learned from offline human preference data and refined online with bandit feedback. On top of it, the paper introduces PILOT (Preference-prior Informed LinUCB for adaptive routing), a novel extension of the LinUCB algorithm tailored to this setting.

To handle varying user budget constraints, the authors additionally incorporate an online cost policy modeled as a multi-choice knapsack problem, keeping routing decisions resource-efficient. The work sits in machine learning, specifically adaptive model selection and resource-aware LLM deployment, and contributes a practical framework for dynamically routing queries to appropriate LLMs under budget constraints using online learning techniques. The full paper and source are accessible on arXiv at https://arxiv.org/abs/2508.21141 (DOI: https://doi.org/10.48550/arXiv.2508.21141).
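The shared embedding space reduces routing to a similarity lookup: queries and LLMs live in one space, and their affinity is a simple inner product. A minimal sketch, assuming cosine-normalized embeddings (the function name and normalization choice are illustrative, not the paper's exact scoring rule):

```python
import numpy as np

def affinity_scores(query_emb, model_embs):
    """Score each LLM by cosine affinity with the query in a shared
    embedding space (a stand-in for the paper's learned alignment)."""
    q = query_emb / np.linalg.norm(query_emb)
    M = model_embs / np.linalg.norm(model_embs, axis=1, keepdims=True)
    return M @ q  # one affinity score per LLM
```

In the paper these embeddings are first fit offline from human preference data and then adjusted online from bandit feedback, so the scores sharpen as routing decisions accumulate.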
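PILOT builds on LinUCB, which keeps a per-arm ridge-regression estimate of reward as a function of the query context and routes to the arm with the highest upper confidence bound. A minimal sketch of disjoint LinUCB over LLM "arms," with a hypothetical prior-informed warm start loosely standing in for the paper's preference prior (the class, parameters, and initialization are assumptions, not the authors' exact PILOT algorithm):

```python
import numpy as np

class LinUCBRouter:
    """Disjoint LinUCB: each LLM 'arm' keeps its own ridge-regression
    reward model over query embeddings; selection is by UCB score."""

    def __init__(self, n_arms, dim, alpha=1.0, prior_theta=None):
        self.alpha = alpha                               # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm response vectors
        # Hypothetical warm start: choose b so theta_hat = prior_theta at t=0,
        # mimicking (not reproducing) PILOT's preference-informed prior.
        if prior_theta is not None:
            for a in range(n_arms):
                self.b[a] = self.A[a] @ prior_theta[a]

    def select(self, x):
        """Return the arm index with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # ridge estimate
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Incorporate the observed reward for the routed arm only."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

The bandit feedback here is partial: only the chosen LLM's reward is observed per query, which is exactly why the prior matters early on, before the online estimates are reliable.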
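The multi-choice knapsack framing means exactly one model must be picked per query, with total spend capped by the budget. The paper solves this as an online cost policy; the sketch below is a simplified offline dynamic program over integer costs, with hypothetical cost/value pairs, meant only to illustrate the combinatorial structure:

```python
def mckp(groups, budget):
    """Multi-choice knapsack DP: pick exactly one (cost, value) item per
    group so that total cost <= budget, maximizing total value.
    Returns -inf if no feasible selection exists."""
    NEG = float("-inf")
    best = [NEG] * (budget + 1)  # best[c] = max value at exact spend c
    best[0] = 0.0
    for items in groups:         # one group per query
        nxt = [NEG] * (budget + 1)
        for spent in range(budget + 1):
            if best[spent] == NEG:
                continue
            for cost, value in items:  # one candidate LLM per item
                if spent + cost <= budget:
                    nxt[spent + cost] = max(nxt[spent + cost],
                                            best[spent] + value)
        best = nxt
    return max(best)
```

For example, with two queries, a cheap and an expensive model each, the DP trades off spending the budget on whichever query gains more value from the stronger model.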