RYDIR

Agentic Coding with Claude Code

Mar 2, 2026

From Autocomplete to Agency

For years my AI-assisted coding story was the same as everyone else’s: tab-complete suggestions in the editor, the occasional “explain this function” prompt, and a vague feeling that there had to be more. Then I started using Claude Code - an agentic coding tool that lives in the terminal - and the way I build software changed fundamentally.

This post is a practical look at what agentic coding actually means in day-to-day work, where it shines, and where you still need to stay sharp.

What Is Agentic Coding?

The word “agentic” gets thrown around a lot, so let me be specific. Traditional code assistants like GitHub Copilot operate in a suggestion loop: you type, the model predicts the next few lines, you accept or reject. The context window is narrow - usually the current file and maybe an open tab or two.

Agentic coding is different. The model doesn’t just predict your next line; it reasons about your request, reads files across the project, runs shell commands, and iterates on its own output. It has access to tools - file search, grep, the terminal - and it decides which ones to use and in what order to accomplish a goal.

The practical difference is enormous. Instead of “complete this function signature,” you can say “add a dark-mode toggle that respects the user’s system preference and persists the choice in localStorage” and watch the agent read your existing theme setup, find the right components, make the changes, and verify the build still passes.
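That dark-mode request boils down to one pure decision: a stored choice wins, otherwise fall back to the OS preference. Here's a minimal sketch of that rule in TypeScript - the function name and storage key are my own invention, not from any particular codebase:

```typescript
type Theme = 'dark' | 'light'

// Stored choice wins; otherwise fall back to the system preference.
function resolveTheme(stored: string | null, systemPrefersDark: boolean): Theme {
	if (stored === 'dark' || stored === 'light') return stored
	return systemPrefersDark ? 'dark' : 'light'
}

// In the browser, the two inputs would come from localStorage and matchMedia:
//   resolveTheme(localStorage.getItem('theme'),
//                window.matchMedia('(prefers-color-scheme: dark)').matches)
```

Keeping the decision pure like this also makes the agent's diff easy to review: the browser plumbing and the logic are separable.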

My Setup

My workflow is straightforward:

  • Claude Code running in the terminal alongside VS Code
  • A CLAUDE.md file at the project root that gives Claude persistent context - build commands, architecture notes, conventions, and file structure
  • Standard Git workflow - Claude makes changes, I review diffs before committing

The CLAUDE.md pattern is worth calling out. It’s a plain Markdown file that gets loaded into Claude’s context automatically. Mine typically includes:

# CLAUDE.md - example of the structured notes I keep there

- Build: npm run dev / npm run build
- Stack: SvelteKit 2, TailwindCSS, TypeScript, mdsvex
- Components: src/lib/components/ui/ (shadcn-svelte pattern)
- Data: src/lib/data/resume.ts (single config object)
- Formatting: tabs, no trailing commas, print width 100

This means Claude already knows the project’s conventions before I ask my first question. No re-explaining the stack every session.

A Real Workflow Example

Here’s a concrete scenario from this portfolio site. I wanted to replace two placeholder blog posts with real content, which meant:

  1. Expanding the Categories type to include new values
  2. Adding syntax highlighting languages to the Shiki config
  3. Deleting old files and creating new Markdown posts
  4. Verifying the build still passes

Without agentic coding, I'd context-switch between files, look up the Shiki API, and manually verify that the type changes propagate. With Claude Code, the conversation looked roughly like this:

# Me: "Replace the example blog posts with two real posts about
# agentic coding and self-hosting AI. Update the Categories type
# and add bash/yaml syntax highlighting."

# Claude: reads types.ts, svelte.config.js, existing posts
# → edits Categories union
# → adds languages to Shiki config
# → deletes old posts, creates new ones
# → runs npm run check to verify

The agent handled the multi-file coordination. I reviewed each change before committing. The whole operation took a fraction of the time it would have taken manually - not because typing is slow, but because the cognitive overhead of context-switching between files is what actually eats your time.
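Step 1 - widening the Categories union - is the kind of change that's trivial in isolation but easy to get wrong across files. Sketched in TypeScript with made-up category values (the real union lives in types.ts; these names are illustrative):

```typescript
// Widened union - the two new values are illustrative, not the site's actual ones.
type Categories = 'sveltekit' | 'svelte' | 'agentic-coding' | 'self-hosting'

const ALL_CATEGORIES: readonly Categories[] = [
	'sveltekit',
	'svelte',
	'agentic-coding',
	'self-hosting'
]

// A guard like this is where a stale union surfaces when npm run check runs.
function isCategory(value: string): value is Categories {
	return (ALL_CATEGORIES as readonly string[]).includes(value)
}
```

The verification step at the end of the transcript is what makes the one-line edit safe: the type checker flags any usage the edit missed.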

What Works Well

After several months of daily use, these are the patterns where agentic coding delivers the most value:

Multi-file refactoring. Renaming a component, updating its imports across the codebase, and adjusting the tests - the agent handles the tedious coordination while you focus on whether the refactoring direction is right.

Pattern matching across a codebase. “Find everywhere we handle authentication errors and make sure we’re consistent” is a natural-language code review that would take an hour of grepping manually.
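For a sense of what that hour of grepping replaces, here's a rough Node/TypeScript equivalent of the manual search - a recursive scan for auth-error patterns. The regex and directory are placeholders, not a real project's conventions:

```typescript
import { readdirSync, readFileSync, statSync } from 'node:fs'
import { join } from 'node:path'

// Recursively list files whose contents match a pattern - the manual
// version of "find everywhere we handle authentication errors".
function findMatches(root: string, pattern: RegExp): string[] {
	const hits: string[] = []
	for (const entry of readdirSync(root)) {
		const path = join(root, entry)
		if (statSync(path).isDirectory()) {
			hits.push(...findMatches(path, pattern))
		} else if (pattern.test(readFileSync(path, 'utf8'))) {
			hits.push(path)
		}
	}
	return hits
}

// e.g. findMatches('src', /AuthError|status\(401\)/)
```

The agent runs this kind of search for you - and, more importantly, reads the results and judges whether the handling is actually consistent.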

Explaining unfamiliar code. Jumping into a new codebase or a dependency’s source? Ask Claude to read the relevant files and explain the data flow. It can trace through multiple files and give you a coherent narrative.

Boilerplate with context. Unlike snippets or templates, the agent generates boilerplate that’s already tailored to your project’s conventions - correct import paths, matching naming patterns, consistent error handling.

Where to Stay Sharp

Agentic coding is powerful, but it’s not autopilot. Here’s where I’ve learned to pay attention:

Always review the diff. Claude is good, but it can occasionally introduce subtle issues - an unnecessary dependency, a slightly off type assertion, or a pattern that works but isn’t idiomatic for your codebase. The diff review is non-negotiable.

Security considerations. Be thoughtful about what context the agent has access to. Don’t paste API keys into prompts. Review any code that touches authentication, authorization, or user input with extra care - the same scrutiny you’d give a junior developer’s PR.

Know when to take the wheel. Some tasks are genuinely easier to do by hand - a quick one-line fix, a CSS tweak you can see in the browser, or a decision that requires product context the agent doesn’t have. Agentic coding is a tool, not a mandate.

Keep your mental model. The biggest risk isn’t bad code - it’s losing your understanding of your own codebase. If the agent made a change and you can’t explain why it works, stop and understand it before moving on.

Closing Thoughts

Agentic coding isn’t replacing developers. It’s changing what “writing code” means in practice. Less time on mechanical tasks, more time on architecture, design, and the judgment calls that actually require a human.

For senior engineers especially, it’s a productivity multiplier. You already know what good code looks like - now you have an agent that can execute on that vision faster than your fingers can type. The skill shifts from “writing code” to “directing code” - and honestly, that’s where most of the value was all along.

If you haven’t tried it yet, start small. Pick a refactoring task you’ve been putting off, set up a CLAUDE.md with your project context, and see what happens. You might be surprised at how natural it feels.