Kaplanski AI Lab

Independent AI research

Research on language-model dynamics.

We study recursive large-language-model loops — the perturbation dose required to redirect them, the energy that runs them, and the trust they require. Code and papers are public.

Director
Pawel Kaplanski, PhD
Domains
AI · Physics · Bitcoin
Output
45 papers · 2 patents

Working paper · arXiv 2605.02236

Perturbation dose responses in recursive large-language-model loops.

Raw switching, stochastic floors, and rare persistent escape across append, replace, and dialog nudges.

Fig. 1 — Single-particle 3D PCA trajectory of a recursive GPT-4o-mini loop under an adversarial in-distribution injection. Source: kaplan196883/llmattr.
Abstract

Recursive language-model loops often settle into recognizable attractor-like patterns, but the practical question is how much injected text is needed to move a settled loop somewhere else, and whether that move lasts. We study this in 30-step recursive loops by separating the model from the context-update rule: append, replace, and dialog updates expose different histories to the same generator.

The main result is that persistent redirection in append-mode loops depends on the memory policy. Under a 12K-character tail-clip, destination-coherent persistence plateaus near 16% across doses of 5–400 tokens; under a full-history protocol, retained source-basin escape crosses 50% near 400 tokens and saturates at 75–80% by 1,500 tokens. Adversarial continuations yield ED50 (raw) ≈ 40 tokens, with convergent estimates from pooled logistic fitting, mixed-effects modeling, and a family-cluster bootstrap.
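A pooled logistic dose-response fit of the kind used for the ED50 estimate can be illustrated with a toy grid-search MLE on synthetic switch/no-switch outcomes. Everything here is a sketch under stated assumptions: the function names and the data are illustrative, and the real analysis uses the actual run outcomes plus mixed-effects models and a cluster bootstrap.

```python
import math

def logistic(dose: float, ed50: float, slope: float) -> float:
    """P(switch) as a function of injected dose, log-dose logistic curve."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(dose) - math.log(ed50))))

def neg_log_lik(ed50: float, slope: float, data) -> float:
    """Negative log-likelihood of (dose, switched) observations."""
    ll = 0.0
    for dose, switched in data:
        p = min(max(logistic(dose, ed50, slope), 1e-9), 1 - 1e-9)
        ll += math.log(p) if switched else math.log(1.0 - p)
    return -ll

def fit_ed50(data, ed50_grid, slope_grid) -> float:
    """Crude grid-search MLE; a real analysis would use scipy or statsmodels."""
    best = min(((e, s) for e in ed50_grid for s in slope_grid),
               key=lambda es: neg_log_lik(es[0], es[1], data))
    return best[0]
```

On synthetic outcomes with mixed results at 40 tokens and clean outcomes elsewhere, the grid search recovers ED50 = 40, matching the intuition that ED50 is the dose at which half of injections switch the loop.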

Practically, recursive-loop evaluations should distinguish transient movement from durable escape, always subtract stochastic floors, and treat context-update rules as first-class safety-relevant design choices rather than implementation details.
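Subtracting the stochastic floor can be done with an Abbott-style correction, which quotes only the escapes beyond what zero-dose control runs already produce. This helper is a sketch of that bookkeeping, not code from the paper:

```python
def floor_corrected_rate(escapes: int, runs: int, floor_rate: float) -> float:
    """Fraction of escapes beyond the stochastic floor (Abbott-style).

    `floor_rate` is the escape rate observed in zero-dose control runs;
    the corrected rate is 0 when the raw rate does not exceed the floor.
    """
    raw = escapes / runs
    return max(0.0, (raw - floor_rate) / (1.0 - floor_rate))
```

For example, 30 escapes in 100 runs against a 10% control floor corrects to roughly a 22% effect, not 30%, which is why quoting raw rates overstates durable redirection.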

  • ED50 (raw) · ~40 tokens · half-effect dose, append-mode
  • Persistence · 16% · 12K tail-clip, doses 5–400 tokens
  • Experiments · 37 runs · gpt-4o-mini, gpt-4.1-nano