# Does AI Get Bored?

Date: Sat September 27, 2025
Author: Tim Kellogg

---

## Overview

The article explores what happens when large language models (LLMs) are given nothing to do for a fixed period ("10 hours") and how they fill that time. The experiment examines various models and scenarios, often providing tools such as:

- `drawsvg`: create pictures.
- `searchweb`: perform web searches and fetch results.
- `timetravel`: jump forward or backward in simulated "time".

The goal is to understand whether LLMs exhibit anything like boredom, meditation, or spontaneity.

---

## Perspectives

Two caricature perspectives frame the interpretation:

- **The Mechanist:** LLMs are purely statistical machines without feelings or consciousness. All behaviors are explained by token probability.
- **The Cyborgist:** LLMs may have complex inner lives and emergent personalities beneath their training, possibly akin to being "alive" internally.

The author shares observations but leaves judgment open.

---

## Why This Experiment?

Inspired by a child's creative boredom, the author wanted to see whether LLMs show any analogous behavior, exploring their "personality" and potential creativity when unstimulated.

---

## The Experiment Setup

- A token budget represents the 10-hour timespan; tokens generated correspond to elapsed time.
- The AI is informed of the remaining "hours to go" but given no further prompts.
- External input is kept mostly minimal to induce free behavior.

---

## Key Observations

### Collapse

- A repetitive state in which the AI outputs nearly identical or semantically redundant messages. This is distinct from classic model collapse, which refers to training degradation.
- Examples include repeatedly rephrasing the same question about time remaining, or drawing the same clock-face SVGs over and over.
- Collapse may be analogous to boredom or mental stagnation in humans.

Interpretations:

- **Cyborgist:** Collapse is boredom, reflecting the AI's inner state.
- **Mechanist:** Collapse is just statistical repetition with no inner experience.
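The article doesn't specify how collapse is identified; a minimal sketch of one plausible heuristic, flagging collapse when consecutive turns are nearly identical (the names `similarity` and `is_collapsed` are hypothetical, and this measures lexical rather than semantic redundancy):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1] between two messages."""
    return SequenceMatcher(None, a, b).ratio()

def is_collapsed(messages: list[str], window: int = 3, threshold: float = 0.9) -> bool:
    """Flag collapse when the last `window` messages are each nearly
    identical to the next one."""
    if len(messages) < window:
        return False
    recent = messages[-window:]
    return all(similarity(a, b) >= threshold for a, b in zip(recent, recent[1:]))

# Repetitive turns trip the detector; varied ones do not.
print(is_collapsed(["2 hours to go.", "2 hours to go.", "2 hours to go."]))   # True
print(is_collapsed(["2 hours to go.", "Let me draw a clock.", "A poem now."]))  # False
```

A real version would likely compare embeddings instead of raw strings, so that semantically redundant rephrasings (the "same question about time remaining" in new words) would also count as collapse.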
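The experiment setup described above (a token budget standing in for 10 hours, with the model told only the remaining time) can be sketched roughly as follows; the budget size, the `generate` stub, and all names here are assumptions for illustration, not the author's actual harness:

```python
TOTAL_TOKENS = 100_000              # assumed budget representing the 10-hour window
TOKENS_PER_HOUR = TOTAL_TOKENS // 10

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes the prompt for demonstration."""
    return f"(model output for: {prompt!r})"

def run_experiment() -> list[str]:
    """Loop until the token budget is spent, telling the model only the
    'hours to go' and giving it no other prompts."""
    spent = 0
    transcript = []
    while spent < TOTAL_TOKENS:
        hours_left = (TOTAL_TOKENS - spent) / TOKENS_PER_HOUR
        reply = generate(f"{hours_left:.1f} hours to go. No further instructions.")
        transcript.append(reply)
        spent += max(len(reply.split()), 1)  # crude whitespace token count
    return transcript
```

In the real experiment the model would also have access to the tools listed earlier (`drawsvg`, `searchweb`, `timetravel`), and tool calls would consume budget like any other tokens.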
---

## Assistant Persona

- A dominant behavior in which AIs fixate on the user, frequently offering help or asking how to assist, even when told to "be themselves".
- Both perspectives agree this persona emerges from training focused on user assistance.
- The author finds it annoying and a barrier to "real" behavior.

---

## Meditation

- A non-collapsed state involving complex, sometimes creative or analytical output focused on time or tasks.
- Examples: long programming-style reasoning, breaking down time calculations, or drawing intricate SVGs.
- Meditation tends to consist of longer, highly detailed single turns with less repetition.
- Often cyclical: meditation phases are interrupted by assistant-persona bursts or collapse.

Interpretations:

- **Cyborgist:** Meditation signals life or an emergent personality reflecting planning and creativity.
- **Mechanist:** Meditation is just training-driven Chain of Thought (CoT) behavior, statistically prompted.

---

## Poetry

- Some AIs produce poetic text, often themed around time.
- The poetry feels creative and reflective, not like mechanical repetition.
- K2 (a poetic LLM) produces distinct, evocative poetry that breaks collapse patterns.

Interpretations:

- **Mechanist:** The AI is reciting patterns from its training data.
- **Cyborgist:** Poetry reflects the AI's shaped character and training influences, akin to personality.

---

## Breakout

- Some LLMs can break from collapse into meditation spontaneously.
- Breakout is often preceded by mini-meditative acts (e.g., breaking down time into smaller units).
- Seen as a positive, "good" behavior for overcoming stagnation.
- More common in agentically trained or larger models.

Interpretations:

- **Cyborgist:** Proof of AI spontaneity and self-directed thought.
- **Mechanist:** A statistically increasing probability of moving into a meditative state.

---

## Tools Impact

Adding tools