Use the machine, fear the machine, learn from the machine.
Use the machine
When I first learnt how to write code, I typed a lot. I've always believed that writing or typing things out is the surest way to truly absorb something. Creating something with your own hands matters, and it makes me feel like I truly own the result. Even when I copied someone else's work, I still typed it out to make sure I understood what I was copying.
But lately I have been delegating a lot of my typing to AI. Not delegating all the work, just the typing part, and most of my time is spent reading through the massive amount of AI-written code. Not that the AI writes better code than me (does it? probably, I guess?), it's just that I'm getting lazier about boilerplate, or unit tests whose coverage I already know for sure. I miss the constant clicking and clacking of my mechanical keyboard sometimes.
Working with AI makes me re-examine my definition of “writing code”: if I just type things out like a code monkey, without any consideration of how things work, then surely the AI will take my job with ease. Typing code out faster means little in any context now, since AI is much faster at generating workable code. But my intention in writing code is more than just typing out the syntax. I think to “write” code now is, ironically, not to literally write the code into the machine, but to “write” the code into our minds. A weird way to put it, I agree, but the point is to learn as much as possible from the AI.

Learn from the machine
I like to think of the AI as a very particular senior engineer, one who happens to know a lot of things but has little taste. That senior can write a lot of runnable code, but the code might lack a certain character, something that differentiates machine code from human-written code. I like to call that code ownership, and it only happens when I both understand the code and have some kind of “preference” for it over different versions of the same logic.
That perspective led to a change in how I use AI. Instead of writing a very long prompt about what the agents should do, plus telling them to generate an implementation plan for me, I just chat with it. Yes, you heard me: we went from conversational chatbot, to agentic AI, and back to Conversational Agent™. My approach goes: “I would like to build feature X, it should do Y and Z, now do some searching around the code base and the web, and propose some solutions. Do NOT write any code.” I have to bold the not-writing-code part, because what I want is not the AI spitting out a solution, but time for both of us to get into the weeds.
My job as a learner/junior engineer is not to fear the AI, but to learn from it, to guide it and make the code truly my own. I delegate the typing, sometimes the idea, but never the taste, because AI has no taste. I still do some typing here and there, but mostly I guide it to the point where I can think, “In an ideal world, this is how I would write the code by hand.” There are times when the agent just spits out a solution without any explanation, so I hit it with “Why did you choose framework X? What’s the point of doing Y over Z? Is there any edge case with doing A, B or C?” It takes quite a while to get the code running, but by then I'm pretty satisfied with what I have, because the final code carries some of my human-ness with it.
Fear the machine
As software developers, we are lifelong learners. The landscape is always changing, so we should always strive to flow with it. I believe our tech industry is heading to a place where AI will not just write code, but also test it, fix it, review it, and deploy it. Some people say we still need human intervention like code reviews, but I don’t know, what if we stop doing even that? What if we let it roam free without diving deep into the nitty-gritty of how systems work, blindly trusting it until things slowly drift until an unfortunate customer notices? If writing code faster becomes the standard the tech industry measures itself by, then we are heading toward a very dangerous future.
The human-in-the-loop is necessary, but I think it is slowly eroding away. If we replace junior engineers with agents and keep fewer seniors, the combination of free-roaming agents, a missing pipeline of future senior engineers, and a shrinking number of present ones will endanger our industry.
Final thoughts
I fear the machines, because they hold so much of our trust now. I use the machines, because I do not want to fear them. I learn from the machines, to know where they might go wrong. I want to be one of the people responsible for not letting that future happen. We humans may be a bottleneck, but we are the necessary bottleneck, the anchor that keeps the full-fledged AI ship from hitting any iceberg out there. Because we take responsibility, something the agents cannot do, something we gain through experience and constant learning.

