Author here. I've been tuning my Claude Code setup for a while and noticed most of the value comes from CLAUDE.md quality, hooks, and guard rails, not prompts.
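For anyone unfamiliar with what "hooks" look like in Claude Code: they're shell commands configured in `.claude/settings.json` that run around tool calls. A minimal sketch (the script path and project layout here are hypothetical, just to show the shape):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/guard-commands.sh"
          }
        ]
      }
    ]
  }
}
```

This runs a guard script before any Bash tool invocation, which is one way to enforce the kind of guardrails I mean.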
Built a skill called /refine that audits your config across 8 dimensions and gives you a letter grade. It then interviews you about your workflow to generate rules that actually match how you work.
The skill is open and installable: https://skills.sh/meetdave3/refine-skill/refine
Source: https://github.com/meetdave3/refine-skill