Getting started with Claude Code is easy. Getting good output consistently is a different problem. Most developers who install it and try it a few times are leaving significant value on the table — wrong model for the task, no project memory, context filling up before the work is done.
These are the patterns that actually make Claude Code useful in daily practice, not just for impressive demos.
The quality of Claude Code's output tracks directly to the quality of your task description. Vague inputs produce exploratory, unfocused output. Specific inputs produce direct execution.
Worse: "Clean up the auth code"
Better: "Refactor src/auth/session.ts to use JWT instead of session cookies. Keep the existing interface — only change the implementation. Run the auth tests when done."
Include: what to change, where to change it, what "done" means, and any constraints. One clear task per session is more reliable than a stream-of-consciousness conversation that accumulates across multiple requests.
CLAUDE.md is a file Claude Code reads at the start of every session. Put your project context in it — stack, conventions, what to avoid, active work. You stop re-explaining the same things every session; Claude starts with the right frame already loaded.
A minimal CLAUDE.md:

```markdown
# Project
Go service, PostgreSQL (Fly.io), deployed via GitHub Actions.

## Stack
- Go 1.23, Gin, pgx/v5 for Postgres
- Tests: `go test ./...`
- Migrations: `tools/migrate.sh`

## Conventions
- Error handling: always wrap with context, never suppress
- New endpoints need a test in _test.go
- Never commit .env files

## Do not touch
- vendor/ directory (managed externally)
- legacy/billing/ (Shane owns this)
```
See the full CLAUDE.md guide for what to include and what makes it worse.
Claude Code defaults to Sonnet, which is right for most work. But switching models mid-session changes both quality and cost significantly:
- `/model claude-haiku-4-5` — fast and cheap; use it for simple edits, formatting, boilerplate, and adding comments. It costs roughly an order of magnitude less than Opus.
- `/model claude-sonnet-4-5` — the right default; a good balance of reasoning and speed for standard dev work.
- `/model claude-opus-4-1` — use it for genuinely hard problems: complex architecture decisions, subtle bug diagnosis, and large refactors where getting it wrong is expensive.

Don't burn Opus credits on tasks that don't need deep reasoning. Don't use Haiku for tasks where a wrong architectural decision will cost you a day of cleanup. Match the model to what the task actually requires.
Run /cost at the end of a session to see what you spent. Do this consistently and you'll develop intuition for which kinds of tasks are expensive (large refactors, many file reads, heavy MCP tool use) versus cheap (targeted edits, simple additions). This pays off when you're deciding whether to continue a session or start fresh.
When your context starts filling, the instinct is to keep going and deal with it later. The better move is to compact before the model starts losing the beginning of the conversation — not after.
/compact summarizes the conversation so far and resets the context window with a condensed version. Running it when you're at ~60–70% context preserves more useful detail than waiting until the model is already degraded. You can also use it between major phases of a long task — finish the implementation, compact, then start the testing phase fresh.
See the context limit guide for the full set of strategies.
Tell Claude to run your tests after every significant change — in the task description, not as an afterthought:
"After each change, run npm test and confirm it passes before moving on. If tests fail, fix them before continuing."
This single instruction transforms Claude Code from a code generator into a code generator that verifies its own output. It catches regressions immediately rather than letting them accumulate through five changes before you notice something broke.
For large repos, tell Claude explicitly where to work:
"Work only in src/api/handlers/ and src/middleware/. Don't touch anything in src/legacy/."
Scoping reduces noise in your diff, prevents unintended side effects, and makes the output easier to review. It also keeps Claude from spending tokens exploring irrelevant parts of the codebase.
If you have a Postgres database, wire it in as an MCP server. Same for GitHub, Slack, or any internal API your work touches. The one-time setup:
```shell
claude mcp add --transport stdio postgres \
  --env DATABASE_URL="$DATABASE_URL" \
  -- npx -y @modelcontextprotocol/server-postgres
```
Once it's wired, Claude can query live database state, read GitHub issues, and search Slack threads during a session — without you copy-pasting output into the conversation. The sessions that need external context go from slow and manual to fast and complete.
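The same one-time command pattern covers the other servers. A sketch for GitHub — the package name and token variable here are assumptions; check the MCP server registry for the current package:

```shell
# Hypothetical second server: GitHub, wired the same way as Postgres.
# Assumes the `claude` CLI is installed and $GITHUB_TOKEN holds a personal access token.
claude mcp add --transport stdio github \
  --env GITHUB_PERSONAL_ACCESS_TOKEN="$GITHUB_TOKEN" \
  -- npx -y @modelcontextprotocol/server-github
```

Run `claude mcp list` afterwards to confirm both servers are registered.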
See the full MCP setup guide for the command syntax and scope options.
For complex multi-step tasks, ask Claude to write its plan to a file before touching any code:
"Before making any changes, write a plan to TASK-PLAN.md describing what you'll do step by step. Wait for my confirmation before starting."
Benefits: you catch wrong assumptions before they propagate through code; the plan file survives if the session ends; Claude can resume from the plan in a new session without losing the thread of the work.
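Picking the work back up in a new session is then a one-line prompt — `TASK-PLAN.md` matches the example prompt above, and the step numbers are illustrative:

```shell
# Resume from the saved plan in a fresh session.
claude "Read TASK-PLAN.md. Steps 1 and 2 are complete; continue from step 3."
```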
Claude Code's non-interactive mode lets you call it from scripts and CI pipelines:
```shell
claude --print "Generate a migration to add indexes to the jobs table. Output only the SQL."
```
Combine it with permission flags like `--allowedTools` (for example, `--allowedTools Bash Edit`) for autonomous pipelines where you don't want interactive prompts. This is how you build Claude Code into automated code review, documentation generation, or scheduled analysis workflows.
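A sketch of what this can look like as a CI step — the prompt, file names, and PASS/FAIL convention are illustrative assumptions, not a built-in feature:

```shell
#!/bin/sh
# Illustrative CI gate: headless review of the branch diff.
# Assumes the `claude` CLI is installed and authenticated in the CI environment.
set -e
git diff origin/main...HEAD > changes.patch
claude --print \
  "Review changes.patch for missing error handling. Begin your reply with PASS or FAIL." \
  > review.txt
# Fail the job if Claude's verdict starts with FAIL.
if grep -q "^FAIL" review.txt; then
  cat review.txt
  exit 1
fi
echo "Review passed"
```

The same shape works for documentation generation or scheduled analysis: capture `--print` output to a file, then let the script decide what to do with it.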
The temptation is to keep one session running all day and stack tasks. The result is a context window that's half-full of irrelevant history before your new task even starts, and a model that's dragging prior decisions into current work.
Fresh sessions are cheaper (less context to process), more focused (no prior task noise), and produce cleaner output. Make it a habit: finish a task, review the output, start a new session for the next one.
If you're doing a long autonomous task where you trust Claude to run commands, set the permission flags at startup instead of approving each one manually:
```shell
claude "Migrate all tests from Jest to Vitest" --allowedTools Bash Edit
```
For tasks where you know what Claude will need to do and you trust the scope, pre-approving eliminates the interruption loop. For exploratory tasks where you don't know what Claude will touch, keep interactive approval on — it's the right safeguard.
The compounding tip: Tip 02 (CLAUDE.md) + Tip 08 (MCP servers) together eliminate most of the session startup overhead. CLAUDE.md loads your project context; MCP servers give Claude live access to your stack. Combined, a new session starts with everything it needs to work effectively from the first message.
Every tip here helps you get more from individual sessions. None of them solve the session boundary problem: when a long task spans multiple sessions, you lose the working state, the decisions made, and the context built — and you start over.
This is especially painful on complex tasks: a multi-day refactor, a debugging investigation that runs into the context limit, a task you handed off to Claude overnight. The next session doesn't know what the previous one did.
Bring Your AI captures the decisions made, context built, and working state from a Claude Code session and carries it forward to the next one. When you hit the context limit mid-task — or need to hand the session off — you don't start over from scratch.
See how Bring Your AI works →