“Code Smarter, Not Harder” – Evergreen Practices in an AI-Accelerated World

Stef Dyankov · May 20, 2025 · 9 min read
We used to send our code as ZIP files to the sysadmin over email so he could deploy. No CI, no git—just vibes. Now AI writes a good part of our code. But best practices? Those are still written by hand.

TL;DR

In 2025, generative AI tools are built into nearly every part of a developer’s workflow. And this is just the beginning. From auto-generating React components to writing cloud infrastructure code, tools like GitHub Copilot, ChatGPT, Claude, and Cursor are shaping how we work. But the core principles of solid engineering—clarity, maintainability, resilience—haven’t changed. In fact, in a world where AI can accelerate both good and bad decisions, these principles are more critical than ever.

1. Prioritize Clarity and Simplicity in Design

In the late 2000s, I was learning how to program by searching forums and reverse-engineering code downloaded from torrent sites (ouch). I’d wait hours, sometimes days, for someone on a random forum (do we even remember what a forum is in 2025?) to respond and help me past an issue. In contrast, today you can ask an LLM to build a login flow and get a working prototype in seconds. But fast doesn’t always mean good.


Simple systems are easier to test, debug, reason about, and extend. They’re also easier for AI to understand and work with, because models operate best when your codebase provides clear context. If your project is a tangle of side-effects and magic variables, you’ll end up with bad AI suggestions, longer onboarding, and more production surprises. I’ve seen code so bad, even AI refused to guess what it was doing.

Ask yourself: Could a teammate—or an AI reviewer—understand this code's purpose by skimming the function signature, the type definitions, and an example of how it’s used? If not, keep improving it.
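As a minimal sketch of what that looks like (a hypothetical pricing function, before and after):

```ts
// Before: magic values and opaque names. Neither a teammate nor an LLM
// can tell what "t === 1" means without spelunking.
function calc(u: any, d: number): number {
  return u.t === 1 ? d * 0.8 : d;
}

// After: the signature and types tell the story on their own.
type CustomerTier = "standard" | "premium";

interface Customer {
  tier: CustomerTier;
}

const PREMIUM_DISCOUNT = 0.2; // premium customers get 20% off

export function applyTierDiscount(customer: Customer, price: number): number {
  return customer.tier === "premium" ? price * (1 - PREMIUM_DISCOUNT) : price;
}
```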

2. Write and Rely on Tests

In 2025, with AI tools capable of generating implementation details faster than ever, tests now serve a second role: guarding against hallucination.

An LLM can (and does) generate a method that looks right, passes type checks, and even compiles, yet fails in subtle ways. And if there are no tests, guess who’s cleaning up that mess when prod breaks? Spoiler: it’s not the AI.

A robust test suite of unit tests and even snapshot tests provides a protective boundary around your system, and it also acts as ground truth when using AI to refactor or extend functionality.
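As a sketch of that ground truth, reusing the hypothetical applyTierDiscount from earlier and assuming a Vitest setup:

```ts
import { describe, expect, it } from "vitest";
import { applyTierDiscount } from "./pricing";

// These tests pin the intended behavior down, so an AI-generated
// "improvement" that silently changes the discount logic fails loudly.
describe("applyTierDiscount", () => {
  it("gives premium customers 20% off", () => {
    expect(applyTierDiscount({ tier: "premium" }, 100)).toBe(80);
  });

  it("leaves standard customers at full price", () => {
    expect(applyTierDiscount({ tier: "standard" }, 100)).toBe(100);
  });
});
```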

I know it’s not possible to test everything, but at least cover the most critical parts of your system. Otherwise, prepare for late-night coding sessions with your manager poking you on every chat app you own.

Ask yourself: If someone (you) “vibe codes” something in production and breaks one of the most critical features of your system, is there anything to alert you (aside from your angry boss)?

“Vibe coding is when you let autocomplete or intuition lead the way—without verifying the outcome.”

3. Make Small, Contextual Git Commits

Git is your timeline, your safety net, and your audit trail. Yet far too many developers treat it like a backup system instead of a tool for clarity.


In AI-augmented workflows, small commits become essential. They allow you to isolate side-effects and bisect regressions. When Copilot or ChatGPT is helping you review a file, feeding it a short, well-scoped diff gives it far more context than dumping in a 500-line pull request. The same goes for your co-workers; believe me, they hate you (at least a little bit) when you ask them to review your 126-file PR.
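For a made-up feature, a well-scoped history reads like a story, and each commit can be reviewed, reverted, or bisected on its own:

```
feat(auth): add password strength meter to signup form
test(auth): cover strength meter edge cases (empty input, unicode)
fix(auth): debounce strength check to avoid re-render on every keystroke
docs(auth): note why the strength check runs client-side only
```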

Ask yourself: Could someone else learn what happened in this file just by reading your commit messages and looking at the diffs, or will they need a crystal ball?

4. Automate Every Repetitive and Deterministic Task

There’s no glory in remembering setup or deployment steps. If something is repetitive and predictable, it should be codified into scripts or pipelines.

Whether it’s linting, seeding databases, or provisioning infrastructure, automation is what protects you from human error and saves you a lot of time in the process.

On a recent project, I spent an entire evening building a setup script that syncs all environment variables from/to AWS SSM into your local .env file — while my wife stared at me from the couch with that look.

It wasn’t glamorous, but it solved a real problem. The project has over 50 env vars, and every time someone added a new one and didn’t tell the team, someone else got stuck staring at a white screen.

Now imagine that pain multiplied across 15 developers.

It took me a night to build, but the time saved since? Massive. Fewer Slack messages asking, “Why is the app blank?” Fewer setup headaches. And way less guesswork.

It’s not perfect (and never will be), but now we just run yarn setup and it pulls everything in. Feels like magic. Almost.
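The real script is project-specific, but the pull direction is surprisingly small. Here’s a minimal sketch assuming AWS SDK v3; the /my-app/dev parameter path and the function name are placeholders, not the actual project’s:

```ts
import { writeFileSync } from "node:fs";
import { GetParametersByPathCommand, SSMClient } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

async function pullEnv(path = "/my-app/dev"): Promise<void> {
  const lines: string[] = [];
  let nextToken: string | undefined;

  do {
    // Fetch one page of parameters stored under the given path.
    const page = await ssm.send(
      new GetParametersByPathCommand({
        Path: path,
        Recursive: true,
        WithDecryption: true,
        NextToken: nextToken,
      })
    );

    for (const param of page.Parameters ?? []) {
      // "/my-app/dev/DATABASE_URL" becomes "DATABASE_URL=..."
      const key = param.Name?.split("/").pop();
      if (key) lines.push(`${key}=${param.Value ?? ""}`);
    }
    nextToken = page.NextToken;
  } while (nextToken);

  writeFileSync(".env", lines.join("\n") + "\n");
  console.log(`Wrote ${lines.length} variables to .env`);
}

pullEnv().catch(console.error);
```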

Ask yourself: If you gave a new dev a laptop and git clone, would they be productive within 30 minutes?

5. Explain the Reasoning, Not Just the Result

One of the biggest benefits of working with senior engineers isn’t just cleaner code—it’s visibility into decision-making. AI tools can describe what code does, but they struggle to explain why a particular pattern was chosen or a tradeoff was made. AI wasn’t there for the team meetings, it wasn’t there for the edge cases, and it definitely wasn’t there to understand that "weird" client requirement.


This is where human context matters most. If you’re solving a problem in a non-obvious way, document it using inline comments so it's clear why you wrote what you wrote. Then create a detailed pull request description (we love those) too.
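For example (an imaginary billing API, but a real pattern), a "why" comment can stop the next refactor from undoing a deliberate choice:

```ts
interface Invoice {
  id: string;
  total: number;
}

// NOTE: These calls are intentionally sequential, not Promise.all.
// The (imaginary) legacy billing API rate-limits concurrent requests
// per account, and parallel fetches caused intermittent 429s in prod.
// Please don't "optimize" this without checking with the billing team.
async function fetchInvoices(
  ids: string[],
  getInvoice: (id: string) => Promise<Invoice>
): Promise<Invoice[]> {
  const invoices: Invoice[] = [];
  for (const id of ids) {
    invoices.push(await getInvoice(id));
  }
  return invoices;
}
```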

We’ve seen countless cases where a subtle performance optimization or non-standard behavior was removed by a later developer—or an AI refactor—because the rationale wasn’t documented and nobody could grasp why the code was there or what it did.

Ask yourself: Would your future self understand the decision-making process just by reading the diff or the PR summary? If not, help your future self by documenting it now.

6. Monitor and Profile Continuously in Production

Performance doesn’t live in the code editor—it lives in real-world usage. You can’t optimize what you don’t measure, and no amount of AI inference can replace live observability.

With AI now suggesting performance-related changes (if you ask it nicely), it’s even more important to understand what those changes are doing.

One of our projects got DDoS-ed for a few days. Since we run serverless and our infra scales indefinitely, nobody noticed. But guess what happened?

We had anomaly detection set up — and it caught it.

We served 17 terabytes (yes, terabytes) of traffic in a single day. Because we had monitoring turned on, we were notified and able to stop the attack in time, saving ourselves from many more terabytes and a much larger AWS bill.
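For the curious, here’s roughly what such an alarm looks like when created with the AWS SDK. This is a sketch, not our actual setup: the distribution ID, band width, and SNS topic are placeholders, and you may prefer to define it in CloudFormation or CDK instead:

```ts
import {
  CloudWatchClient,
  PutMetricAlarmCommand,
} from "@aws-sdk/client-cloudwatch";

// CloudFront metrics are reported in us-east-1.
const cloudwatch = new CloudWatchClient({ region: "us-east-1" });

async function createTrafficAnomalyAlarm(): Promise<void> {
  await cloudwatch.send(
    new PutMetricAlarmCommand({
      AlarmName: "cloudfront-bytes-anomaly",
      ComparisonOperator: "GreaterThanUpperThreshold",
      EvaluationPeriods: 3,
      ThresholdMetricId: "band", // alarm when m1 exceeds the band below
      Metrics: [
        {
          Id: "m1",
          ReturnData: true,
          MetricStat: {
            Metric: {
              Namespace: "AWS/CloudFront",
              MetricName: "BytesDownloaded",
              Dimensions: [
                { Name: "DistributionId", Value: "EXXXXXXXXXXXXX" },
                { Name: "Region", Value: "Global" },
              ],
            },
            Period: 300, // 5-minute windows
            Stat: "Sum",
          },
        },
        {
          Id: "band",
          ReturnData: true,
          // Expected-value band of 2 standard deviations around the model.
          Expression: "ANOMALY_DETECTION_BAND(m1, 2)",
        },
      ],
      AlarmActions: ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    })
  );
}

createTrafficAnomalyAlarm().catch(console.error);
```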

Ask yourself: Do you have insights into usage trends, performance bottlenecks, and system health in real time (or any time)? Are you going to be alerted to anomalies?

7. Improve Your Ability to Read Before You Write

In 2025, writing code is often the easy part. Reading code—especially stuff someone pasted from ChatGPT—is what separates engineers from editors.


With AI accelerating the production of boilerplate and scaffolding, it’s easy to lose the habit of deeply (or at all) understanding a system before you change it. But that’s where bugs hide, where patterns live, and where architectural insight begins.

When I’m onboarding onto a project, the first few days always look slow — at least from the outside. But what I’m actually doing is reading every single file in the project. I’m mapping the structure, understanding the context, and figuring out how everything fits together.

After those few days, my output ramps up fast — because now I know what’s in the project, what’s missing, what’s solid, what’s fragile, and where everything lives.

Ask yourself: Before I make this change, do I know what else in the system relies on this code? What will happen and what will be affected if I change it?

8. Shorten the Feedback Loop With Real Users

It’s tempting to keep iterating in isolation—especially when AI tools give you an endless stream of suggestions. Until users touch it, it’s all theory. AI will hallucinate all kinds of beautiful nonsense. But only real users show the truth.

AI is great at simulating edge cases and optimizing small logic blocks—but it can’t predict what real users will do (nobody can predict some users). The only true signal comes from production usage.

Lately, I’ve had plenty of cases where AI told me that what I was building was needed and relevant, but then nobody used it in production...

Ask yourself: Can we validate this feature with users in the next sprint, even if it’s behind a flag?
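If "behind a flag" sounds like heavy machinery, it doesn’t have to be. A first version can be as small as this sketch (an env-var flag source and a made-up new-checkout flag; a proper rollout tool adds targeting and kill switches):

```ts
// Simplest possible flag source: a comma-separated env var.
const FLAGS = new Set(
  (process.env.ENABLED_FLAGS ?? "").split(",").filter(Boolean)
);

export function isEnabled(flag: string): boolean {
  return FLAGS.has(flag);
}

// Usage: ship the new path dark, then set ENABLED_FLAGS=new-checkout
// for a small group and watch what real users actually do.
if (isEnabled("new-checkout")) {
  // render the new flow
} else {
  // keep the battle-tested one
}
```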

9. Master the Fundamentals—They Don't Expire

Technologies come and go. Frameworks evolve. But fundamentals are eternal.

The more we use AI, the more we need to sharpen our judgment. Because models are trained on existing code, they surface the median—not the optimal.

A strong foundation in systems thinking helps you reject bad AI suggestions and extend good ones. There’s no substitute for intuition grounded in first principles.

Ask yourself: Do you understand how the system you’re building on top of actually works? How the internet works, how the cloud works, how a user stays logged in on a website?

Final Thoughts

AI tools are transforming how we write, review, and deploy code—but they don’t replace engineering judgment. If anything, they raise the stakes.

The faster we can produce software, the more important it becomes to make good decisions at speed. These evergreen practices help you do just that. They keep your systems healthy, your teammates productive, and your future self grateful.

At WIARA, we’re committed to pairing the best tools with the best habits. We’ve gone from emailing ZIPs to having AI write our deploy scripts. AI can write the code. But it’s up to us to write the judgment.


Want to work with a team that still cares about the craft? We follow the practices we just shared.
