# Queueing to Publish in AI (and CS)

Author: David Martínez-Rubio (in collaboration with Sebastian Pokutta)
Published: September 2, 2025
Source: Original post

---

## Overview

This blog post analyzes the publication model common to computer science (CS) and AI conferences, which relies on fixed, low acceptance rates. Using simplified queueing models and simulations, it examines the consequences of this system, highlighting how submission pools and reviewing loads evolve under different acceptance rates.

---

## The Ideal Model: No Giving Up

Assume authors keep resubmitting unaccepted papers indefinitely. Let:

- \(N\): number of new papers per conference round.
- \(p\): fixed acceptance rate (\(0 < p \le 1\)).
- \(x_t\): number of unaccepted papers in the pool at conference round \(t\).

The system evolves as:

\[ x_{t+1} = x_t (1 - p) + N \]

Setting \(x^{*} = x^{*} (1 - p) + N\) and solving gives the fixed point (steady-state pool size):

\[ x^{*} = \frac{N}{p} \]

At equilibrium, the number of papers accepted at each conference is \(p \, x^{*} = N\), independent of the acceptance rate \(p\).

### Key Insight

Decreasing the acceptance rate \(p\) does not reduce the number of accepted papers; it only enlarges the submission pool and the reviewing workload. The extra reviewing effort yields no change in output, just more rejections and a larger backlog. This matches Little's Law from queueing theory: the steady-state pool size \(N/p\) equals the throughput (\(N\) papers per round) times the average time a paper spends in the system (\(1/p\) rounds).
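The fixed point is easy to check numerically. Below is a minimal sketch in Python (not code from the post; N and the values of p are illustrative) that iterates the recurrence and confirms that the pool converges to N/p while the number of accepted papers per round stays at N, whatever p is.

```python
# Iterate x_{t+1} = x_t * (1 - p) + N and compare the resulting pool
# size against the fixed point N / p. All parameter values are illustrative.

def pool_sizes(N: float, p: float, rounds: int) -> list[float]:
    """Return the submission-pool size after each conference round."""
    x, sizes = 0.0, []
    for _ in range(rounds):
        x = x * (1 - p) + N  # rejected papers stay in the pool; N new ones arrive
        sizes.append(x)
    return sizes

if __name__ == "__main__":
    N = 1000  # new papers per round (illustrative)
    for p in (0.20, 0.35, 0.70):
        pool = pool_sizes(N, p, rounds=50)[-1]
        print(f"p={p:.2f}: pool ~ {pool:7.1f} (N/p = {N / p:7.1f}), "
              f"accepted per round ~ {p * pool:.1f}")
```

Halving p roughly doubles the steady-state pool, and with it the number of reviews per round, while the accepted count stays pinned at N.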
---

## If Authors Give Up (Finite Retries)

More realistically, authors may stop resubmitting after a maximum number of retries T. Papers are modeled in three quality tiers: Great (15%), Average (70%), and Bad (15%), with per-round acceptance probabilities proportional to 15, 5, and 1, respectively.

The model shows:

- Lower acceptance rates lead to larger pools and higher reviewing loads (close to proportional to N/p).
- More papers, especially average-quality ones, are abandoned, i.e., their authors give up on publishing them.
- Example: reducing p from 35% to 20% increases the share of abandoned bad papers from about 60% to 77%, while average-paper abandonment jumps from about 4% to 24% (and there are 4–5× more average papers than bad ones). The reviewing load increases by about 46% for T = 6.

### Consequences

- Lower acceptance rates disproportionately burden authors and reviewers.
- Many acceptable papers are rejected due to randomness and quota limits; some conferences enforce rejections even of good papers to meet low acceptance targets (e.g., NeurIPS 2025).
- The "effective acceptance rate" (the fraction of distinct papers eventually accepted, counting resubmissions of the same paper only once) is much higher than the nominal conference acceptance rate: a paper accepted with probability p per round and resubmitted up to T times is eventually accepted with probability \(1 - (1 - p)^T\), e.g., about 74% for p = 20% and T = 6.

---

## Discussions & Alternative Solutions

The system creates a trade-off:

- Lower acceptance rate → higher reviewer workload and author frustration.
- Higher acceptance rate → less backlog, but possibly larger conferences.

Some argue that the size growth of ML conferences justifies low acceptance rates, but this ignores the backlog and the total reviewing effort. Alternative approaches to handling growth and reviewing load include:

- Federated conferences (e.g., NeurIPS 2025 adding multiple locations).
- Different publication and review models.
- More rapid, experimental review cycles borrowed from other fields.
- Fewer reviewers per paper.

---

## Recommendations

- Assess what the community truly wants and needs.
- Run fast review experiments outside formal conferences to iterate quickly.
- Be mindful of resource use (avoid excessive reviewing per paper).
- Embrace change based on tested models.

---

## Interactive Appendix (Simulation Tools)

Adjustable parameters include:

- p (acceptance-rate slider from 20% to 70%).
- T (maximum retries before authors give up).
- Paper quality distributions and acceptance probabilities.
- Growth rates of new paper submissions.

Visualizations include:

- Submission pool size vs. acceptance rate.
- Percentage of abandoned papers vs. acceptance rate.
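The interactive simulation itself is not reproduced here, but the sketch below computes per-paper outcomes in the tiered finite-retry model in closed form. It assumes (the post does not state this) that the 15 : 5 : 1 acceptance odds are scaled so that a fresh cohort of papers is accepted at exactly the nominal rate p; the post's simulator includes further dynamics such as pool composition and submission growth, so exact figures will differ, but the qualitative effect is the same: lower p means more give-ups and more reviews per paper.

```python
# Closed-form outcomes in the tiered finite-retry model. Assumption (ours,
# not the post's): acceptance odds 15 : 5 : 1 are scaled so that a fresh
# cohort of papers is accepted at exactly the nominal rate p.

TIERS = {"great": (0.15, 15.0), "average": (0.70, 5.0), "bad": (0.15, 1.0)}
MEAN_WEIGHT = sum(share * w for share, w in TIERS.values())  # = 5.9

def tier_stats(p: float, T: int) -> dict[str, tuple[float, float]]:
    """Per tier: (probability of giving up, expected reviews per paper)."""
    stats = {}
    for name, (_, weight) in TIERS.items():
        q = p * weight / MEAN_WEIGHT            # per-round acceptance probability
        give_up = (1.0 - q) ** T                # rejected in all T attempts
        expected_reviews = (1.0 - give_up) / q  # = sum_{k=1}^{T} (1 - q)^(k-1)
        stats[name] = (give_up, expected_reviews)
    return stats

if __name__ == "__main__":
    T = 6
    for p in (0.35, 0.20):
        stats = tier_stats(p, T)
        load = sum(TIERS[name][0] * reviews for name, (_, reviews) in stats.items())
        summary = ", ".join(f"{name} gives up {g:.1%}" for name, (g, _) in stats.items())
        print(f"p={p:.2f}: {summary}; reviews per new paper ~ {load:.2f}")
```

Under these assumptions, dropping p from 35% to 20% raises the give-up rate in every tier and increases the expected number of reviews per new paper, in line with the direction (though not the exact magnitude) of the post's reported numbers.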