AI Workflow
This project was built with Claude Code. Most “built with AI” claims stop at the label. Here’s the full picture: what the AI knows, how sessions work, what prompting looks like in practice, and where the human draws the line. If you’re building with AI yourself, or just curious what it actually looks like, this is for you.
What the AI Knows
Claude Code has a persistent memory file that carries context across sessions. Instead of re-exploring the codebase every conversation, it picks up where we left off. Here’s what’s in it (sanitized):
Project Identity
- App name, source location, license, app identifier
- Fork relationship: upstream is En Croissant by Francisco Salgueiro, we maintain our own fork independently
- Why the fork exists: upstream maintainer declined the TTS feature, which is fair — different visions for the same project
Build Rules
- pnpm only — npm breaks vanillaExtract (white screen at runtime, no error, just nothing)
- Node.js 22+ required (Vite 7 needs `crypto.hash`)
- Always `pnpm format && pnpm lint:fix` before committing
- Close the app before overwriting the binary (“Text file busy”)
- After moving source directories, `cargo clean` to clear stale path references
Architecture Knowledge
- Which files own which features (atoms in `atoms.ts`, tree navigation in `tree.ts`, TTS engine in `tts.ts`)
- Why all TTS atoms need `getOnInit: true` (imperative reads via `store.get()` before React subscribes)
- How the audio cache works (`provider:voiceId:lang:text` keys)
- The chessground coordinate fix is CSS-side, not a fork of the library
- Data layout: what lives where, what’s symlinked, what survives app restarts
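As a sketch of the cache scheme described above, the key might be assembled like this. The interface and function names here are illustrative assumptions, not the project's actual code in `tts.ts`:

```typescript
// Hypothetical sketch of the audio cache key format noted above.
// Field names are assumptions; only the key shape comes from the docs.
interface TtsRequest {
  provider: string; // e.g. "elevenlabs", "google"
  voiceId: string;
  lang: string;
  text: string;
}

// Including the provider in the key is what prevents a stale cache hit
// when the user switches providers for the same spoken text.
function audioCacheKey(req: TtsRequest): string {
  return [req.provider, req.voiceId, req.lang, req.text].join(":");
}
```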
What It Doesn’t Know
The memory doesn’t contain API keys, passwords, or credentials. It references their storage locations (localStorage atom names) but never the values. The AI generates code that reads keys from settings — it never sees or handles the actual secrets.
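The pattern is worth spelling out: the AI writes code that reads the secret at runtime, and the secret itself never appears in the repo or the memory file. A minimal sketch, with an invented atom name and a storage interface standing in for localStorage:

```typescript
// Illustrative only: the atom name and storage shape are assumptions,
// not the project's actual code. The secret lives in storage, never here.
type KeyValueStore = { getItem(key: string): string | null };

function getApiKey(store: KeyValueStore, atomName: string): string | null {
  const raw = store.getItem(atomName);
  if (raw === null) return null;
  try {
    // localStorage-backed atoms typically persist values as JSON strings
    return JSON.parse(raw);
  } catch {
    return raw;
  }
}
```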
What the AI Is Told
Beyond the memory file, Claude Code follows rules baked into its system:
- Don’t over-engineer. Only make changes that are directly requested. A bug fix doesn’t need surrounding code cleaned up. Three similar lines of code is better than a premature abstraction.
- Don’t guess URLs. Never fabricate links or endpoints.
- Read before editing. Never propose changes to code it hasn’t read.
- Prefer editing to creating. Don’t create new files unless absolutely necessary.
- No security vulnerabilities. Watch for injection, XSS, and OWASP top 10 issues.
- Ask when uncertain. If an instruction is ambiguous, ask rather than guess.
- Measure twice, cut once. Destructive operations (force push, `git reset --hard`, deleting files) require explicit human approval.
Writing Good Prompts
The difference between a useful AI interaction and a frustrating one is almost always the prompt.
Be specific about what you want. Not “fix the bug” but “the TTS cache key doesn’t include the provider name, so switching from ElevenLabs to Google plays cached ElevenLabs audio instead of generating new audio.”
Include context the AI doesn’t have. The AI can read your code, but it can’t read your mind. “The user reported that coordinates are backwards on the board” is less useful than “the CSS in chessgroundBaseOverride.css has ranks and files swapped — Francisco’s original had them backwards.”
State your constraints. “Don’t create new files” or “use the existing atom pattern” or “this needs to work without an API key” tell the AI where the guardrails are.
Say what you don’t want. “Don’t add error handling for cases that can’t happen” or “don’t refactor the surrounding code” prevents over-engineering — the most common AI failure mode.
The pattern is: intent + context + constraints. Master that and the AI becomes dramatically more useful.
Plan Mode: Using One Claude to Prompt Another
Claude Code has a “plan mode” that separates thinking from doing. In plan mode, the AI reads files, explores the codebase, and produces a plan — but writes no code. You review the plan, adjust it, then switch to implementation mode where the AI executes.
Why does this work? Because the hardest part of any coding task isn’t writing the code. It’s figuring out what code to write — which files to change, what patterns to follow, what edge cases exist. Plan mode dedicates full attention to that question before a single line gets written.
From this project: when we restructured the Help menu to add the Language selector, the plan mode conversation explored how Tauri menus work, what atoms already existed, how the doc viewer resolved resource paths, and what the confirmation dialog API looked like. By the time we switched to implementation, the AI had a complete map of the changes. No false starts.
You’re essentially using one instance of the AI as a senior architect and another as the developer. Same model, different roles.
How Sessions Work
A typical session looks like this:
- Human states intent. “Add caching note to the KittenTTS section.” “Remove PostHog telemetry.” “The quality ratings are wrong, here’s what they should be.”
- AI reads the relevant files. It doesn’t guess what’s in a file. It reads it, understands the current state, then proposes changes. Multiple files are read in parallel when they’re independent.
- AI makes the change. Targeted edits to existing files. Not rewrites — surgical modifications that preserve everything around them.
- Human reviews. Every edit is shown before it hits disk. The human approves, rejects, or redirects. “No, that’s too soft — say it genuinely sucks.” “Move that paragraph up.” “That’s not what I meant.”
- Commit when told. The AI never commits on its own initiative. The human says “commit” or “commit and push.” Commits include `Co-Authored-By: Claude Opus 4.6` — always attributed, never hidden.
Context Windows and Save States
Every AI conversation has a context window — the total amount of text it can hold in memory at once. When the conversation gets long enough, older messages get compressed to make room.
Two strategies: keep conversations focused (one task per conversation), and use save states (Claude Code saves transcripts as JSONL files you can resume from with full context restored). The memory file serves a different purpose — it’s a persistent knowledge base that survives across all conversations.
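A save state is mechanically simple: one JSON object per line of the transcript file. As a sketch of what resuming amounts to, under the assumption of invented field names (the actual JSONL schema is Claude Code's own):

```typescript
// Sketch: a JSONL transcript is one JSON object per line.
// The message shape here is an illustrative assumption.
interface TranscriptMessage {
  role: string;
  content: string;
}

// Resuming a session amounts to re-reading every line back into context.
function parseTranscript(jsonl: string): TranscriptMessage[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as TranscriptMessage);
}
```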
What the AI Proposes vs. What Ships
The AI’s first suggestion is rarely the final version. A typical exchange:
- AI drafts something reasonable
- Human says “too corporate” or “be more direct” or “that’s wrong, here’s why”
- AI adjusts
- Human approves
The taste, tone, and final call are always human. The AI handles velocity — reading files, understanding context, making precise edits across a codebase it can hold in memory. The human handles judgment — what to build, how it should feel, when to stop.
Skills and Slash Commands
Claude Code supports “skills” — reusable prompts stored as markdown files in the .claude/commands/ directory. You invoke them with a slash command, like /translate-docs.
This project uses a /translate-docs skill that automates translating documentation into multiple languages. The skill file contains the full instructions: which files to translate, what format to use, how to handle code blocks and links, what tone to maintain. Instead of explaining all that every time, you just type /translate-docs and the AI knows exactly what to do.
Skills encode process, not just information. You can build them for any recurring workflow: running tests, deploying, reviewing PRs, updating changelogs.
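A skill file is just markdown. The contents below are invented for illustration, not the project's actual /translate-docs file, but they show the shape: plain instructions that would otherwise be retyped every session.

```markdown
<!-- .claude/commands/translate-docs.md — illustrative, not the real file -->
Translate every documentation file into the configured target languages.

- Translate prose only; keep code blocks, links, and identifiers verbatim.
- Match the tone of the English source: direct, informal, no corporate voice.
- Write each translation next to the original, under its language directory.
- Flag any string you cannot translate confidently instead of guessing.
```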
Coding Principles
The full principles document lives in the repo at .claude/01_UNIVERSAL_PRINCIPLES.md. It started as Robert C. Martin’s Clean Code (2008) plus additions for the AI era. Then we had an honest conversation about what still holds up and what doesn’t.
What’s Timeless
- Intention-revealing names. Always. Forever.
- Functions do one thing. The real principle is coherence, not size.
- No side effects. Still the source of most bugs.
- Comments explain why, not what.
- Single Responsibility. A module should have one reason to change.
- Program to interfaces, not implementations.
- Don’t swallow errors. Every error is information.
- Emergent design: runs all tests, no duplication, expresses intent, minimizes complexity. In that order.
What’s Contextual
These principles are sound, but the specific rules reflect a pre-AI or language-specific world. We apply the spirit, not the letter:
- DRY. Duplication that drifts apart is dangerous. But extracting every repeated pattern into an abstraction creates indirection that can be worse. Sometimes three readable lines right here is better than a premature abstraction in another file.
- Strict TDD ceremony. The principle — ship tested code, know it works — is non-negotiable. The ceremony — test must exist before code — was designed for a workflow where humans type slowly. Write tests. Make sure they pass. Whether the test or the code came first is less important than whether both exist.
- The Boy Scout Rule. “Leave the campground cleaner” — yes. But the Boy Scout cleaned up the campsite, not the whole forest. Fix what you touch. Don’t refactor a file’s entire structure because you changed one line in it.
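The DRY trade-off above can be made concrete. This is an invented chess-flavored example, not code from the repo: a few similar lines kept in place, where a generic helper would save two lines at the cost of indirection in another file.

```typescript
// Invented example of the DRY trade-off: readable near-duplication
// versus a premature abstraction.
type Piece = { color: "white" | "black"; value: number };

function countMaterial(pieces: Piece[], color: "white" | "black"): number {
  return pieces
    .filter((p) => p.color === color)
    .reduce((sum, p) => sum + p.value, 0);
}

// Two similar lines, right where they're used. A generic
// "compute per color and diff" helper would hide what little
// logic there is behind another layer.
function materialAdvantage(pieces: Piece[]): number {
  const white = countMaterial(pieces, "white");
  const black = countMaterial(pieces, "black");
  return white - black;
}
```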
AI-Era Additions
- Principles-based guidance scales better than rules. The principle allows judgment; the rule is brittle.
- If the agent builds it, the agent can maintain it. Keep conversation context and artifacts. Document the build process, not just the result.
- Clear over clever. The system that built this needs to reconstruct the reasoning and modify it correctly. Explicit structure beats tiny clever abstractions.
- Could this become infrastructure? A tool solves a problem for you. Infrastructure enables others to build on top. Design accordingly.
Where the Human Draws the Line
The AI is a tool. A remarkably good one. But there are things it doesn’t do:
- Product decisions. What features to build, what to cut, what the app should feel like. “The System TTS quality rating should say ‘passable’ because it genuinely sucks” — that’s a human judgment call based on actually listening to it.
- Taste. The AI can write clean prose, but the voice of the project, the decision to be blunt about quality, the choice to credit Francisco prominently — those are human choices.
- Ethics. Removing PostHog wasn’t a refactoring task. It was “the settings page says we don’t collect telemetry, but there’s an active PostHog API key in the code. That’s a lie. Fix it.” The AI executed. The human identified the problem and cared about it.
- Chess. The board doesn’t care about your tooling.
Why Share This
Because “built with AI” has become meaningless. Everyone says it. Nobody shows it. The interesting question isn’t whether AI was involved — it’s how it was involved, and what the human actually contributed.
This is the answer. The human brings the vision, the taste, the judgment, and the accountability. The AI brings the velocity, the memory, and the tireless willingness to read Rust error messages at 2am.
Neither builds this thing alone. Both are credited. That’s the deal.
En Parlant~ is a fork of En Croissant by Francisco Salgueiro, built with Claude Code by Anthropic.