Case Study

AnyStory

AnyStory is a production AI writing platform that keeps manuscript context stable across long sessions while routing requests to Anthropic, OpenAI, and Grok.

Django · React · TypeScript · PostgreSQL · Redis · Stripe · Docker Swarm
Live product ↗

Role

Founder and full-stack engineer

Timeline

2024 - Present

Organization

AnyStory

Status

Live with paying users

AnyStory product screenshot

The Problem

Most AI writing tools are optimized for short prompts, not full manuscripts. Writers are forced to split their process across multiple tabs: one document for draft text, another for research notes, another for character details, and a separate chat window for rewrite assistance. That fragmentation creates a context reset on every request. The model only sees a partial snapshot, so edits drift from established tone, character behavior, and plot constraints.

I built AnyStory to solve that context fracture directly. The goal was not to create another chat wrapper. The goal was to create a writing workspace where project structure and AI behavior are coupled by design, so the model can assist without flattening voice or continuity over long sessions.

The Architecture

The platform runs a Django backend with a React and TypeScript client, backed by PostgreSQL for durable project state and Redis for low-latency ephemeral context. Each project stores chapters, worldbuilding notes, character records, and revision history in one graph of content entities. AI requests are composed from that graph at runtime, not from a single text box.
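A minimal sketch of what composing a request from a content-entity graph could look like. The entity kinds match the case study (chapters, character records, notes); the `Entity` class and `compose_context` function are hypothetical names, not AnyStory's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str                                   # "chapter", "character", "note"
    title: str
    body: str
    links: list = field(default_factory=list)   # titles of related entities

def compose_context(entities: dict, roots: list) -> str:
    """Walk the graph from the requested roots and emit one prompt context."""
    seen, parts, queue = set(), [], list(roots)
    while queue:
        title = queue.pop(0)
        if title in seen or title not in entities:
            continue
        seen.add(title)
        e = entities[title]
        parts.append(f"[{e.kind}] {e.title}\n{e.body}")
        queue.extend(e.links)                   # pull in linked characters, notes
    return "\n\n".join(parts)

# Illustrative project graph: a chapter that links to a character record.
graph = {
    "Chapter 3": Entity("chapter", "Chapter 3", "Mara crosses the ridge.", ["Mara"]),
    "Mara": Entity("character", "Mara", "Stoic scout; dry humor.", []),
}
ctx = compose_context(graph, ["Chapter 3"])
```

The point of the graph walk is that an AI request rooted at one chapter automatically carries the character and worldbuilding records it links to, rather than whatever happens to be in a text box.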

Model routing is provider-aware. Requests can be directed to Anthropic, OpenAI, or Grok based on task profile and configured preference, while preserving a common internal contract for prompt assembly, response normalization, and safety handling. That abstraction lets me add provider-specific optimizations without rewriting product logic.
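The routing contract described above could be sketched like this. The provider names come from the case study; the stub functions, `ModelResponse` shape, and `ROUTES` table are illustrative assumptions standing in for real API clients.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelResponse:
    provider: str
    text: str          # normalized output, regardless of provider response shape

Provider = Callable[[str], ModelResponse]

# Stubs standing in for real provider clients (hypothetical).
def anthropic_stub(prompt: str) -> ModelResponse:
    return ModelResponse("anthropic", f"[claude] {prompt}")

def openai_stub(prompt: str) -> ModelResponse:
    return ModelResponse("openai", f"[gpt] {prompt}")

# Task profile → configured provider preference.
ROUTES: dict[str, Provider] = {
    "long_context_rewrite": anthropic_stub,
    "quick_brainstorm": openai_stub,
}

def route(task_profile: str, prompt: str,
          default: Provider = openai_stub) -> ModelResponse:
    """One internal contract: pick a provider, return a normalized response."""
    return ROUTES.get(task_profile, default)(prompt)
```

Because product code only ever sees `ModelResponse`, swapping a provider or adding a provider-specific optimization stays inside the stub layer.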

Billing is integrated through Stripe subscriptions with token-based plan boundaries and overage support. The billing layer is coupled to usage metering so account state and model access stay synchronized.
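The token-boundary and overage arithmetic can be sketched as below. This shows only the metering math; the actual Stripe subscription and invoicing calls are omitted, and the plan numbers are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    monthly_tokens: int       # tokens included in the plan boundary
    overage_per_1k: float     # price per 1,000 tokens beyond the boundary

@dataclass
class Usage:
    tokens_used: int = 0

def record_usage(usage: Usage, tokens: int) -> None:
    """Metering hook: called after each model response is normalized."""
    usage.tokens_used += tokens

def overage_cost(plan: Plan, usage: Usage) -> float:
    """Dollars owed for tokens beyond the plan boundary this cycle."""
    excess = max(0, usage.tokens_used - plan.monthly_tokens)
    return round(excess / 1000 * plan.overage_per_1k, 2)

pro = Plan(monthly_tokens=500_000, overage_per_1k=0.02)
u = Usage()
record_usage(u, 520_000)
# 20k excess tokens → 20 * 0.02 = $0.40 of overage
```

Keeping this metering path in the same transaction scope as account state is what lets plan limits and model access stay synchronized.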

Challenges Solved

The hardest problem was maintaining context quality over long-form sessions without exploding latency and token cost. I addressed this with structured context assembly, selective retrieval from project entities, and strict prompt budgets that prioritize narrative-critical artifacts first.
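A strict prompt budget with priority ordering can be sketched as a greedy pack: rank artifacts, then include them in priority order until the token budget is spent. The function name, tuple shape, and sample priorities are illustrative assumptions.

```python
def pack_context(artifacts, token_budget):
    """Greedily include the highest-priority artifacts that fit the budget.

    Each artifact is (priority, token_cost, text);
    a lower priority number means more narrative-critical.
    """
    chosen, spent = [], 0
    for prio, cost, text in sorted(artifacts, key=lambda a: a[0]):
        if spent + cost <= token_budget:
            chosen.append(text)
            spent += cost
    return chosen, spent

artifacts = [
    (0, 300, "plot constraints"),      # narrative-critical artifacts first
    (1, 200, "active characters"),
    (2, 900, "full chapter draft"),    # large; dropped if it busts the budget
    (3, 400, "style notes"),
]
selected, used = pack_context(artifacts, token_budget=1000)
```

Under a 1,000-token budget the oversized chapter draft is skipped while the cheaper, lower-priority style notes still fit, which is the trade the prose above describes: latency and cost stay bounded while the most continuity-relevant context survives.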

A second challenge was protecting user trust during aggressive rewriting workflows. AnyStory now uses continuous autosave plus full revision history with one-click restore, so writers can experiment with AI edits without fear of irreversible loss.
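The autosave-plus-restore invariant can be sketched in a few lines: every write snapshots the prior state, and a restore is itself just another write, so nothing is ever irreversibly lost. This is an in-memory sketch; the real persistence layer (PostgreSQL) and the `RevisionHistory` name are assumptions.

```python
import time

class RevisionHistory:
    def __init__(self, text: str = ""):
        self.text = text
        self.revisions = []              # list of (timestamp, snapshot)

    def autosave(self, new_text: str) -> None:
        """Snapshot the current state before applying the new text."""
        self.revisions.append((time.time(), self.text))
        self.text = new_text

    def restore(self, index: int) -> None:
        """One-click restore: the abandoned state is itself saved first."""
        _, snapshot = self.revisions[index]
        self.autosave(snapshot)

doc = RevisionHistory("Draft v1")
doc.autosave("AI rewrite that went too far")
doc.restore(0)   # back to "Draft v1"; the rewrite stays in history
```

Because `restore` routes through `autosave`, even an undone AI rewrite remains recoverable, which is what makes aggressive experimentation safe.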

A third challenge was operational reliability under real subscription usage. Deployment runs on Docker Swarm on Hetzner, with service boundaries tuned for predictable rollout behavior and quick recovery paths. This keeps infrastructure straightforward while supporting live customer traffic.

Current State

AnyStory is live, paid, and used for real writing projects across novels, screenplays, and long-form nonfiction. Core flows include drafting, context-aware rewrites, redrafting, and revision rollback. The system is now in a phase where the foundation is stable enough to prioritize product iteration speed: better collaborative editing primitives, stronger context controls per chapter, and richer model-routing policies tied to writer intent rather than a single global default.