
7 AI coding techniques that quietly make you elite



ZDNET’s key takeaways

  • Treat the AI like another developer, not a magic box.
  • Encode design systems and user profiles in system prompts.
  • Every fixed bug becomes a permanent lesson learned in the project’s DNA.

Ever since the days of punched cards, I’ve self-identified as a programmer and a computer scientist. The programmer side is the practical side of my engineering identity, the person who crafts code line by line. The computer scientist is the theoretician, the scientist, the strategist, and the planner.

While I love the theory and science of computers, I’ve always enjoyed the hands-on feeling of cutting code. I think it’s probably akin to how some woodworkers prefer hand tools over power tools for the visceral feel of working with wood.

Also: Worried about AI coding? Why the invention of power tools is the blueprint for your career future

Unfortunately, I’ve never had much time to code. My day-to-day job has been as a company executive, founder, educator, and writer. I do love making software products, but I’ve never managed to get more than one small product done each year, using little bits of available nights and weekend time.

All that changed this past September. That’s when I started using agentic vibe coding tools, such as OpenAI’s Codex and Claude Code.

Since September, I’ve built and shipped four major products (WordPress security add-ons), built a working iPhone app for managing 3D printer filament, and am close to having a beta of an app my wife requested for managing sewing patterns. These last two are being built simultaneously for iPhone, iPad, Apple Watch, and Mac.

As a sole coder, agentic AI has been a force multiplier of almost breathtaking capability.

Also: I got 4 years of product development done in 4 days for $200, and I’m still stunned

In this article, I’m going to take you through seven best practices I use. These practices help me work with AI as a partner, and generate products of a quality suitable for production use. At the end, I’ll also share a bonus best practice that comes in handy more often than you might expect.

This is vibe coding. But it’s vibe coding with engineering discipline, and an underlying framework designed for robustness and product quality. If you want to use AI to build your apps, follow these best practices.

Primary practice: Written instructions

The items listed below are specific, deliberate practices. Each one comes from something I purposely built into my workflow.

The way I make these practices stick is by adding them to the “ini” files for the AIs, the CLAUDE.md and AGENTS.md files. I’ve also added other files that document the project itself. I’ll describe those in more depth as you read the rest of this article.
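To make this concrete, here’s a sketch of what the top of such an instruction file might look like. This is a hypothetical illustration, not the actual contents of my files; the section names are placeholders, though the file names and rules it references appear throughout this article:

```markdown
# CLAUDE.md (illustrative sketch)

## Session startup
- Read MEMORY.md and its linked topic files before doing anything else.
- Log every user prompt to PROMPT_LOG.md, timestamped.

## Working rules
- Do NOT use background agents or background tasks.
- Process files ONE AT A TIME, sequentially. Update the user on each step.

## Project references
- Migration log: Docs/IOS_CHANGES_FOR_MIGRATION.md
- Design language: see the design system section of this file.
```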

Also: 10 ChatGPT Codex secrets I only learned after 60 hours of pair programming with it

Let’s start with my first best practice, codified when I found that agent behavior in Xcode was unreliable for multiple parallel processes.

Definitely keep reading until the end, because the aforementioned bonus best practice can be a real game-changer.

1. Sequential visibility over parallel speed

The AI companies are touting the new ability to run multiple agents in parallel. However, it’s very difficult to manage multiple agents running in parallel, especially when you can’t see what they are all doing.

Worse, I found that this approach causes crashes and hangs, leaving projects in limbo. Until this becomes a manageable and visible technique, I only want to run one agent at a time.

Also: 10 things I wish I knew before trusting Claude Code to build my iPhone app

Manageability must take precedence over speed, especially when the AIs hide so much more of what you’d normally see line by line if you were coding it all yourself.

My rule: “Do NOT use background agents or background tasks. Do NOT split into multiple agents. Process files ONE AT A TIME, sequentially. Update the user regularly on each step.” In this rule, “the user” is me, since it’s an instruction to the AI about its own usage.

Why it’s elite: I learned this the hard way. Claude seemed to like launching parallel agents in Xcode. But after a few times when one or more parallel agents got stuck, became unresponsive, had to be killed, and left the codebase changes half-finished and in an indeterminate state, I got fed up.

The principle: I chose slower but visible over faster but opaque. Yes, I’ll admit that waiting for the AI can get tedious. But I’ll take predictability and recoverability over rash speed every time.

2. Migration tracking as a first-class artifact

My two Apple projects are being built for four platforms each: Mac, iPhone, Watch, and iPad. The capabilities and interfaces of these devices are quite different, especially for my apps that rely on NFC (available only on the iPhone) and other features that are most appropriate for different platforms.

For example, in the sewing pattern app, there’s a very powerful AI component for scanning and categorizing PDFs imported from the Mac’s file system, which is a workflow less likely to be used on an iPhone.

Also: AI agents are fast, loose, and out of control, MIT study finds

The challenge here is that when I’m working on one platform, I don’t want to lose track of changes needed on the others. So I keep details about migrating platform-wide changes, specifically encoded as an artifact that can be observed, tracked, and referenced.

My rule: “Every time you make a change to an app that would also need to be applied to iOS, iPad, Mac, or Watch apps, log it in Docs/IOS_CHANGES_FOR_MIGRATION.md. Include: date, files changed, which platforms it applies to, what specifically changed (old to new values, code snippets if helpful), any notes about platform-specific adaptations completed and/or needed.”

Why it’s elite: I don’t trust myself (or the AI) to remember changes across sessions. Within the AI, I built a structured change log that acts as a migration checklist for bringing other platforms to parity. I use it as an operational tool to prevent drift between platforms.

The principle: Every change generates a technical debt ticket for every platform it hasn’t reached yet.
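As an example, a single entry in the migration log might look like this. The date, file names, and values here are invented for illustration; only the entry structure follows the rule above:

```markdown
## 2025-11-14: Filament color badge update
- Files changed: FilamentDetailView.swift, ColorBadge.swift
- Applies to: iPadOS, macOS, watchOS (completed on iOS)
- What changed: badge corner radius 8pt to 12pt; added a hex color field
  to the item model
- Notes: watchOS needs a smaller badge variant; macOS adaptation not
  yet started
```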

3. Persistent memory with semantic organization

Both the AI and I learn lots when building these apps. Some techniques we try fail, and others become best practices. As part of the process, I have the AI build a knowledge base that is filled with those learnings. For better classification and easier access, I have the AI organize the knowledge base by topic rather than notes added to the bottom of a log.

My rule: I have the AI maintain a MEMORY.md that persists across conversations, organized by topic (not chronologically), with separate topic files for detailed notes. I gave the AI this instruction: “Update or remove memories that turn out to be wrong or outdated. Do not write duplicate memories.”

Why it’s elite: AI sessions are stateless by default, but I wanted to retain state information. Just dumping everything into a log file would have been messy and inefficient. Instead, I had the AI build and maintain a curated knowledge base that the AI reads on startup. The knowledge base has API signatures, scoring algorithms, layout measurements, and hard-won lessons (tangible examples where it took us a while to figure out how to make something work).

The principle: These lessons and learnings can be applied further down the development path, or to sister projects that use the same foundational structure. Don’t reinvent the wheel.
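Here’s a rough sketch of how such a topic-organized MEMORY.md might be structured. The specific topics and paths below are hypothetical; only the file name and the organizing idea come from my actual setup:

```markdown
# MEMORY.md: curated knowledge, organized by topic

## PDF import and scoring
- Scoring algorithm details live in Docs/memory/pdf-scoring.md

## Platform quirks
- NFC is available only on iPhone; gate all NFC flows by platform.

## Hard-won lessons
- Detailed notes in Docs/memory/lessons.md
- Update or remove memories that turn out to be wrong or outdated.
- Do not write duplicate memories.
```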

4. Prompt logging as an audit trail

By contrast, I also want the AI to log every instruction I give it chronologically. This approach is a great way to reference what was worked on previously, especially when I might not return to the project for days or even weeks.

Also: From Clawdbot to OpenClaw: This viral AI agent is evolving fast – and it’s nightmare fuel for security pros

Additionally, this approach allows us to go back and see whether my prompt was inadequate or misleading, or whether some other prompt-related factor caused a failure or produced a strong win.

My rule: “Every session, after reading these instructions, log each user prompt to PROMPT_LOG.md. Timestamp each entry with date and time.”

Why it’s elite: This approach gives me (and the AI) a complete, timestamped record of every instruction I’ve ever given the AI across all sessions. This serves multiple purposes:

  • I can reconstruct what happened when something goes wrong.
  • I can see how a feature evolved through prompts.
  • The AI and I can pick up exactly where we left off.

It’s version control for my collaboration with the AI.

The principle: If we can’t replay the conversation, we can’t debug the collaboration. More to the point, the approach enables both of us (the AI and me) to go back to reference specific instructions, replay certain actions, and correct issues that may have come out of unclear or incorrect prompting.
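A prompt log built this way ends up looking something like the following. These entries are invented for illustration; the format simply follows the timestamping rule above:

```markdown
## 2025-11-14 09:12
Add an NFC scan button to the filament detail screen. iPhone only.

## 2025-11-14 09:47
The new button crowds the toolbar on smaller screens. Move it into the
detail view body instead, and log this change in the migration file.
```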

5. User profile as a design constraint

My two Apple apps use similar features, but have radically different user profiles. The filament inventory project is meant for technically strong individuals managing a fairly large set of 3D printers and filament types.

Also: I built an iOS app in just two days with just my voice – and it was electrifying

On the other hand, the sewing pattern inventory project is intended for active sewists with a collection of hundreds or even thousands of sewing patterns. Sewists are technically capable with specialized sewing machinery, but they tend to prefer more intuitive app interfaces than the 3D printer geeks. They are often extremely fussy about the quality of their collections and their information.

Because I often turn to the AI for help with design and implementation, I find it advantageous for the AI to understand the user profiles. When the AI does something different from what a typical user would be comfortable with, I tell it, “remind me what the user profile is for the app.” This approach forces the AI to remember that info and restate it to me. In doing so, the AI immediately updates its work while focusing on those requirements and constraints.

My rule: “My sewing pattern inventory users are predominantly over 50. Many are grandparents. They typically have limited technical skills. They tend to have large collections with a strong ‘got to keep it’ collector mentality.”

I did not go into the nuances of the different types of machines these users know, but kept the profile simple as a guide for the AI. The technical complexity of what sewists can produce is often astounding. And critically, the contrast: “The sewing app needs to be noticeably more approachable than the filament app.” Yes, it’s stereotypical. But stereotypes work pretty well with the AI’s training corpus. They get the job done.

Why it’s elite: Since I was relying on the AI for design help, I wanted to give it a mental model of the actual human using the app. That user profile included age, technical comfort, and collector psychology, because these factors influence design choices. When the AI makes a design recommendation, it has a profile for the person the product is designed for. This setup echoes my overall collaboration approach — think of AI as just another developer on the other end of a Slack channel.

The principle: Telling the AI who uses the software helps it understand how to build the software.
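In the instruction file, that profile can be captured as a short, scannable block the AI rereads every session. Something like this, paraphrasing the rule above into list form (the exact formatting is my illustration, not a transcript of my actual file):

```markdown
## User profile: sewing pattern inventory app
- Predominantly over 50; many are grandparents
- Limited technical skills; favor intuitive, low-friction interfaces
- Large collections (hundreds to thousands of patterns)
- Strong "got to keep it" collector mentality; fussy about data quality
- Contrast: this app must be noticeably more approachable than the
  filament app
```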

6. Codified design system in the project prompt file

As a former creative director and designer, I know how important it is to create a design language for a company or a product. While it might seem like Apple apps have their own design language by virtue of being on Apple platforms, there’s still lots of room for inconsistency.

Also: AI agents are already causing disasters – and this hidden threat could derail your safe rollout

To mitigate this possibility, I’ve encoded the design language for the projects right in the main project instruction file, so the AI can always reference it when building out designs. This approach provides us with a very consistent, attractive, and understandable interface that works with every update or change during development.

My rule: I embedded an entire iOS and macOS design system directly in the CLAUDE.md main project prompt file. These details include specific font sizes (24pt bold for sheet titles, 15pt medium for list items), exact color RGB values, component patterns (card structure, icon badge sizing, button styles), and named reference implementations.

Why it’s elite: Every new view the AI creates automatically matches the existing ones because the design tokens are in the system prompt and immediately available to the AI. I don’t have to tell it, “make it look like the other views,” and hope the AI can figure out what “the other views” look like. The reference data means the AI has a detailed design language for all UI elements.

The principle: Design consistency shouldn’t depend on the AI’s memory of what it built last time, or on its ability to derive design cues from previous implementation code.
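For reference, a design-system block in CLAUDE.md might look like this. The 24pt and 15pt values come from my actual tokens mentioned above; the color values and the view name are illustrative placeholders:

```markdown
## Design language

### Typography
- Sheet titles: 24pt, bold
- List items: 15pt, medium

### Color
- Accent: RGB(0, 122, 255)
- Card background: RGB(28, 28, 30)

### Components
- Card: rounded corners, icon badge at top-left, title/subtitle stack
- Reference implementation for lists: see PatternListView
```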

7. Hard-won lessons encoded as rules

There are many, many ways for software to fail. One of the gotchas about coding for Apple is that you sometimes need to go outside its canned interfaces and features. If you do that (and even sometimes when you code directly to its design), stuff breaks.

Also: True agentic AI is years away – here’s why and how we get there

Rather than re-debug everything each time around, I have the AI encode lessons learned, especially after a long session of trying to figure out what broke. This way, we can make it work again later. This approach is particularly powerful if the AI decides to scrap a block of code and recreate it. With lessons encoded as rules, the AI knows what not to do.

My rule: Scattered throughout my AI instruction files are lessons from things that went wrong, encoded as permanent rules. At the end of every session, I tell the AI to record its learnings. The result is a series of reusable instructions based on our development experiences.

Here are some examples.

  • “Never stack more than 4 .sheet() modifiers on the same view on macOS.” We learned this when a PDF picker silently failed as the 7th stacked sheet.
  • “NSOpenPanel.runModal() must not be called from inside a sheet’s onAppear.” We learned this from a crash.
  • “NEVER use .secondary, .gray, or low-opacity white for text” on watchOS. Instructed the AI based on OLED readability testing.
  • “Navigation titles use system styling (gray) to preserve back button functionality.” Learned and instructed when custom toolbar items hid the back button.

Why it’s elite: Many developers fix a bug and move on. My approach is that when we fix a bug, we write it into the project’s DNA as a lesson. Bug fixes become guidelines and constraints that the AI must follow for the life of the project, so future sessions don’t run into the same problems. Solved problems become encoded as development guardrails.

The principle: Every AI mistake should only happen once, because avoiding it becomes a guardrail rule.

Bonus best practice: Code review

These seven best practices form a system. The AI starts each session reading its memory, its design system, its rules, and brings itself up to speed on the migration tracker data and the learnings we carefully encoded. The AI logs every prompt. It works visibly, so it’s not getting stuck with multiple parallel agents running amok. It also knows how to design for a real person, encoded in the user profile.

Effectively, this approach goes beyond the idea of vibe coding, where you say stuff, and the AI makes what it wants. This approach is a carefully designed and engineered collaboration engine more akin to traditional software engineering management practices.

Speaking of software engineering management practices, here’s a bonus: use the AI for code review.

Also: 5 custom ChatGPT instructions I use to get better AI results – faster

Every so often, I start up a new session. But before the AI reads all the instructions and notes, I tell it to analyze the project and all its files. I ask it to flag issues and problems. That way, I get the equivalent of “fresh eyes.” The AI often finds little details that need to be addressed.

Powerful. Easy to do. Enormously effective. What’s not to love?

Have you adopted any structured practices when working with AI coding tools, or are you still in full vibe-coding mode?

Do you run multiple agents in parallel, or have you found that slower, more visible workflows produce better results? Have you built persistent memory files, migration logs, or prompt audit trails into your projects? If so, how has that changed your output quality?

What about design constraints and user profiles? Are you explicitly teaching your AI who it’s building for? I’d love to hear how you’re collaborating with AI, what’s worked, what’s backfired, and whether you think disciplined AI workflows really do separate casual users from elite builders. Comment below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
