
How AI Is Quietly Changing the Way We Write Code

AI coding tools are not just hype anymore. After a year of daily use, here is what actually works, what fails, and how to use them without losing the skills that matter.

admin · Apr 17, 2026 · 10 min read

If you have been writing code for more than a couple of years, you have probably noticed something interesting lately. AI tools are not just autocompleting your brackets anymore — they are suggesting entire functions, catching bugs before you even run the code, and sometimes writing better tests than you would.

This is not a hype piece. I have been using AI coding tools daily for about a year now, and I want to share what I have actually learned — the good parts, the bad parts, and the things nobody talks about.

The Shift Nobody Expected

I remember when GitHub Copilot first came out back in 2021. Most developers I knew were skeptical, myself included. "It will just copy-paste Stack Overflow answers," we said. "It will introduce security vulnerabilities." "Real developers do not need autocomplete on steroids."

But that is not what happened.

What actually happened is more subtle and, honestly, more interesting. AI coding assistants learned the patterns of good software. They picked up on naming conventions, common error handling approaches, and even architectural patterns that experienced developers use instinctively. They did not just memorize code — they learned the shape of well-written software.

Today, tools like GitHub Copilot, Cursor, Windsurf, and Claude have become genuinely useful daily companions. Not perfect, not magic, but genuinely useful.

What AI Does Well Right Now

Let me be honest about what works and what does not. After a year of daily use, here is my assessment.

Works Great

Boilerplate code is where AI really shines. Need a DTO class with validation annotations? A mapper between two similar objects? A standard CRUD endpoint? AI handles these in seconds, and the output is usually correct on the first try.

Java
// I typed this comment, AI generated the rest:
// Create a DTO for user registration with validation
import jakarta.validation.constraints.Email;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.Size;
import lombok.Getter;
import lombok.Setter;

@Getter @Setter
public class RegisterRequest {

    @NotBlank(message = "Username is required")
    @Size(min = 3, max = 50)
    private String username;

    @NotBlank(message = "Email is required")
    @Email(message = "Invalid email format")
    private String email;

    @NotBlank(message = "Password is required")
    @Size(min = 8, message = "Password must be at least 8 characters")
    private String password;
}

Perfect output, took about two seconds. Writing this by hand would have taken me a couple of minutes — not a huge deal for one class, but multiply that across an entire project.

Test generation works surprisingly well for straightforward logic. Give AI a function and ask it to write tests, and it will usually cover the happy path, edge cases, and null checks. I still review every test, but it gives me a solid starting point.
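To make that "solid starting point" concrete, here is a hypothetical little utility together with the kind of checks an assistant typically generates for it: a happy path, an edge case, and a null check. The function and its tests are illustrative, not from a real project, and I am using plain assertions to keep the sketch dependency-free; in a real codebase these would be JUnit test methods.

```java
public class SlugUtil {
    // Small utility: turn a post title into a URL slug
    static String slugify(String title) {
        if (title == null || title.isBlank()) return "";
        return title.trim().toLowerCase()
                .replaceAll("[^a-z0-9\\s-]", "") // drop punctuation
                .replaceAll("\\s+", "-");        // collapse whitespace to dashes
    }

    public static void main(String[] args) {
        // The three kinds of cases an assistant usually covers unprompted:
        assert slugify("Hello World").equals("hello-world");               // happy path
        assert slugify("  Spaces  &  Symbols! ").equals("spaces-symbols"); // edge case
        assert slugify(null).equals("");                                   // null input
    }
}
```

Run with assertions enabled (`java -ea SlugUtil`). The point is not that these tests are exhaustive, but that they arrive in seconds and give you something concrete to extend.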

Refactoring suggestions are another strong suit. "Extract this into a separate method," "Convert this to use streams," "Add error handling" — AI handles these transformations cleanly because they follow well-known patterns.
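As an illustration of the "convert this to use streams" case, here is a hypothetical before-and-after, the kind of mechanical transformation assistants handle reliably because it is pure pattern matching:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class StreamRefactor {
    record User(String name, boolean active) {}

    // Before: imperative loop collecting the names of active users
    static List<String> activeNamesLoop(List<User> users) {
        List<String> names = new ArrayList<>();
        for (User u : users) {
            if (u.active()) {
                names.add(u.name().toUpperCase());
            }
        }
        return names;
    }

    // After: the stream pipeline an assistant typically suggests
    static List<String> activeNamesStream(List<User> users) {
        return users.stream()
                .filter(User::active)
                .map(u -> u.name().toUpperCase())
                .collect(Collectors.toList());
    }
}
```

Both versions behave identically; the value of the suggestion is that the transformation is applied correctly and consistently across a whole file.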

Language translation between similar languages is remarkably good. I have used AI to convert Python scripts to Java and TypeScript to Java with minimal manual fixes. It understands the idioms of each language well enough to produce natural-looking code.
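To give a flavor of what "understands the idioms" means, here is an invented example. The Python one-liner `lengths = {w: len(w) for w in words if len(w) > 3}` does not get transliterated as a dict-building loop; a good assistant produces the idiomatic Java form instead:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Translated {
    // Idiomatic Java equivalent of a Python dict comprehension:
    // filter, then collect into a Map keyed by the word itself
    static Map<String, Integer> lengths(List<String> words) {
        return words.stream()
                .filter(w -> w.length() > 3)
                .collect(Collectors.toMap(w -> w, String::length));
    }
}
```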

Still Needs Work

Complex business logic is where AI struggles the most. If your code requires deep understanding of a specific domain — healthcare regulations, financial calculations, logistics constraints — AI will give you something that looks reasonable but is often subtly wrong.

Architectural decisions are beyond AI's current capability. "Should I use microservices or a monolith?" "Should this be an event-driven system?" These questions require understanding your team, your scale, your timeline, and your business context. AI cannot reason about these tradeoffs meaningfully.

Performance optimization for specific use cases requires understanding your data distribution, your hardware, and your access patterns. AI might suggest using a HashMap where a TreeMap would be better for your sorted data, or recommend an index without understanding your write-heavy workload.
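To make the HashMap-versus-TreeMap point concrete, here is a small illustrative sketch (the data and names are made up): when you need keys in sorted order or range queries over them, a TreeMap gives you order-aware navigation that a HashMap simply does not have, and an assistant that has not seen your access patterns cannot know which one you need.

```java
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

public class SortedIndex {
    // TreeMap keeps keys sorted, so range queries are cheap and ordered
    static NavigableMap<Integer, String> buildIndex() {
        NavigableMap<Integer, String> byScore = new TreeMap<>();
        byScore.put(70, "carol");
        byScore.put(95, "alice");
        byScore.put(88, "bob");
        return byScore;
    }

    // Range query: all scores at or above the threshold, already in order,
    // via O(log n) navigation instead of scanning and sorting a HashMap
    static Set<Integer> scoresAtLeast(int min) {
        return buildIndex().tailMap(min, true).keySet();
    }
}
```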

Security-critical code is dangerous territory. AI-generated code might have SQL injection vulnerabilities, improper input validation, or incorrect authentication checks. Always review security-sensitive code with extra care.
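A classic example of the risk, in hypothetical JDBC code: string concatenation versus a parameterized query. An assistant prompted carelessly can happily produce the first form, and it looks fine until someone passes `x' OR '1'='1` as a username.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // VULNERABLE: user input is concatenated straight into the SQL text,
    // so a crafted "name" can rewrite the query (SQL injection)
    static String unsafeQuery(String name) {
        return "SELECT * FROM users WHERE username = '" + name + "'";
    }

    // SAFE: parameterized query; the driver treats the value as data,
    // never as SQL, no matter what characters it contains
    static ResultSet safeQuery(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE username = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```

This is exactly the kind of difference that is invisible in a quick skim of generated code, which is why the extra review pass matters.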

A Real Example From My Week

Last week I was building a REST API endpoint that needed cursor-based pagination. Instead of writing everything from scratch, I described what I needed in a comment:

Java
// Cursor-based pagination: decode cursor from Base64,
// fetch N+1 records to determine hasNext,
// return results with encoded next cursor
public CursorPage<PostDto> getPostsByCursor(String cursor, int size) {

The AI generated about 80 percent of what I needed — the Base64 decoding, the N+1 fetch trick, the response construction. I still had to adjust several things:

  • The cursor decoding did not handle malformed input gracefully
  • It used findAll instead of a paginated query
  • The hasNext logic had an off-by-one error

But fixing those three issues took 10 minutes instead of the 30 it would have taken to write everything from scratch. That is a real, measurable productivity gain — not 10x, but meaningful.
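For the curious, here is a sketch of roughly what the corrected cursor handling looked like. The names are simplified stand-ins for the real project code, and I am assuming numeric record IDs as cursor values; the repository query itself is omitted.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CursorCodec {
    // Fix #1: decode defensively. A malformed cursor falls back to
    // "start from the top" instead of throwing a 500
    static Long decodeCursor(String cursor) {
        if (cursor == null || cursor.isBlank()) return null;
        try {
            String raw = new String(Base64.getUrlDecoder().decode(cursor),
                    StandardCharsets.UTF_8);
            return Long.parseLong(raw);
        } catch (IllegalArgumentException e) {
            return null; // malformed cursor: treat as the first page
        }
    }

    static String encodeCursor(long id) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(Long.toString(id).getBytes(StandardCharsets.UTF_8));
    }

    // Fix #3: fetch size + 1 rows; hasNext is true only when the
    // extra row actually came back (no off-by-one)
    static boolean hasNext(int fetchedCount, int size) {
        return fetchedCount > size;
    }
}
```

Fix #2, swapping `findAll` for a real paginated query, lives in the repository layer and is not shown here.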

The Productivity Question Everyone Gets Wrong

Here is the thing people get wrong about AI productivity: it does not make you 10x faster. Anyone who claims that is either selling something or working on very simple projects.

In my experience, AI makes me about 1.2 to 1.5x faster on an average day. Some days more, some days less. But the important thing is not raw speed — it is where the speed comes from.

AI makes the boring parts faster. The boilerplate, the repetitive patterns, the "I have written this exact same code fifty times" stuff. This means you spend more of your mental energy on the parts that actually matter — system design, debugging tricky issues, thinking about edge cases, and understanding user requirements.

That is not a small thing. Developer fatigue is real, and anything that reduces the cognitive load of routine tasks is genuinely valuable. I find myself less mentally drained at the end of the day, which means I make better decisions in the afternoon instead of just grinding through repetitive work.

The Quality Question

Does AI-generated code meet production quality standards? It depends.

For well-known patterns — CRUD operations, standard algorithms, common design patterns — AI code is often indistinguishable from human-written code. Sometimes it is even better because it consistently follows naming conventions and adds proper error handling.

For anything novel or domain-specific, AI code needs careful review. I treat it like code from a junior developer — it is usually structurally correct but might miss important edge cases or make incorrect assumptions about the business domain.

My rule: never commit AI-generated code you do not fully understand. If you cannot explain every line to a colleague, do not ship it.

What This Means for Learning

If you are just starting out in programming, here is my honest advice: do not let AI write everything for you. You need to understand why code works, not just that it works.

Use AI as a study partner instead. Let it generate something, then read through it line by line. Ask it to explain its choices. Try to modify the code and see what breaks. This is actually a powerful learning approach because you get instant examples of patterns you might not have seen before.

For experienced developers, lean into it. Use AI for the stuff that bores you, and invest the saved time in learning things AI cannot do yet — distributed systems design, performance tuning, understanding your users, and building good abstractions.

The Tools I Actually Use

I rotate between a few tools depending on the task:

  • GitHub Copilot for in-editor autocomplete while coding
  • Claude for longer conversations about architecture and debugging
  • Cursor/Windsurf for AI-aware IDE features

Each has strengths. Copilot is great for flow-state coding where you do not want to break context. Claude is better for complex reasoning and explaining tricky bugs. The AI-native IDEs are interesting for larger refactoring tasks.

Looking Ahead

I think we are about two years away from AI being able to handle entire feature branches with minimal supervision. Not because the technology is not there — it mostly is — but because trust takes time to build. Teams need to develop review processes for AI-generated code, and we need better tools for verifying AI output.

The developers who will thrive are not the ones who resist AI or the ones who blindly trust it. They are the ones who learn to collaborate with it effectively — knowing when to accept its suggestions, when to push back, and when to just write the code yourself.

And honestly? That is just good engineering — using the best tool for the job, understanding its limitations, and never stopping learning.

Oğuzhan Berke Özdil
Author

I have been connected to computers since childhood. On this website, I share what I learn and experience while trying to build a strong foundation in software. I completed my BSc in Computer Science at AGH University of Krakow and I am currently pursuing an MSc in Computer Science with a focus on AI & Data Analysis at the same university.