On Learning, Vibe-Coding, and the Lack of AI Understanding

A personal reflection on how everything I do day-to-day ends up vibe-coded, as fast as possible (no matter how much I want to do it well), without my learning anything about the subject along the way.

My case with websites: I built a website (not this one) with interesting technology, but that page is now obsolete, and I learned absolutely nothing from it. I only know that you can do cool things like converting Markdown to HTML with a framework, but nothing beyond that. Everything stayed extremely superficial.
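To make the point concrete, the kind of Markdown-to-HTML conversion mentioned above can be sketched in a few lines. This is a toy illustration, not the framework I actually used; real converters handle far more of the syntax:

```python
import re

def md_to_html(text: str) -> str:
    """Convert a tiny subset of Markdown (headings, bold, italics) to HTML."""
    out = []
    for line in text.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            # '# Title' -> '<h1>Title</h1>', '## Sub' -> '<h2>Sub</h2>', etc.
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        else:
            # Inline emphasis: '**bold**' and '*italic*'.
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
            if line.strip():
                line = f"<p>{line}</p>"
        out.append(line)
    return "\n".join(l for l in out if l)

print(md_to_html("# Title\nSome **bold** and *italic* text."))
```

The irony, of course, is that understanding even a toy version like this teaches more than wiring up a whole framework on autopilot.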

I hold the deep belief that people who are creative and who create special things share one fundamental characteristic (a necessary condition, not a sufficient one): they know many things, they are curious about many different things, and they understand what lies behind everything. In short, they understand the complete system. And I believe that understanding is the key to everything.

What I am about to say, I say with some reservation: I acknowledge how limited my knowledge is compared to that of people more experienced in the field and, above all, compared to everything none of us knows yet. It would be wonderful to be proven wrong and have the world shut me up.


There is a difference. Humans do not tolerate it when our hypotheses about the world clash. It is not that we hold them all in mind at every moment, but when we detect a conflict, we try to fix it. It is not a perfect system: often we patch it horribly by blaming external factors or by dismissing isolated cases. But the truth is that, quite literally, we suffer when our house of cards starts to collapse. An LLM has no such incentive. It does not try to reconcile all its knowledge by building a causal tree (or something causal-like) and checking what conflicts; it simply updates that isolated case and settles into a point of global stability. (Not to mention that during inference it does not update at all, which is a much larger and still unsolved problem.)

Current AIs, however, are expected to reach this indirectly. And not only is it indirect; it is not even incentivised. We feed them a massive (even ridiculous and embarrassing) dataset and hope that, out of the sheer need to compress so much knowledge into less space, they find causal shortcuts and a genuine understanding of the world. The truth is that there are signs this sometimes happens, but we have also witnessed genuine atrocities: LLM failures that expose complete and utter ignorance.

The trend among LLMs suggests the leading laboratories aim to map EVERYTHING, until the difference from a human becomes imperceptible. But will that actually lead to understanding what is happening? Or will we simply have forced the neural network to build a ridiculous map of what humans do?

My belief is that what current AI models are missing is not only architecture but also the training method. Learning from data by brute force, as is done today, is very interesting for mapping human knowledge, but it will probably not scale indefinitely the way we expect. It will keep improving at coding, security, and so on, yes, but it will remain bounded by human limits.

Reinforcement learning in LLM training is a good attempt at solving this, but it is still too superficial. Algorithms like GRPO generate several reasoning paths for the same problem and reward the best ones, but there is no feedback on why they are the best. We simply hope that, by doing this a ridiculous number of times, the model will learn.
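The group-relative scoring at the heart of GRPO can be sketched as follows. This is a deliberately simplified illustration (real GRPO also involves policy probability ratios, clipping, and a KL penalty, and the rewards below are made up), but it shows the point: each path gets a scalar score relative to its group, with no signal about *why* it scored well:

```python
from statistics import mean, pstdev

def group_advantages(rewards: list[float]) -> list[float]:
    """Score each sampled reasoning path relative to its group:
    advantage_i = (r_i - mean(group)) / std(group)."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero when all rewards are equal
    return [(r - mu) / sigma for r in rewards]

# Four reasoning paths sampled for the same problem, each with a scalar reward.
# The model is nudged towards the high-advantage paths; the advantage carries
# no explanation of what made those paths better.
rewards = [0.0, 1.0, 1.0, 0.0]
print(group_advantages(rewards))
```

Everything the model "learns" about quality is compressed into that one number per path, which is exactly the superficiality described above.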


Coming back to my experience with web tasks: I think falling into the autopilot of delegating everything to AI without knowing what you are doing, knowing only the objectives, is a disaster for the developer community. The world needs leaders, and I am sure leading AI models will matter just as much as leading humans, but the world also needs curious people, people who have doubts, people who want more. We will never create new knowledge if people do not push themselves to the edge, and vibe-coding everything on autopilot does not get you to the edge. You have to expose yourself to new challenges and improve every day; doing that is what will allow AI to truly, genuinely, create unique things.

For now, I will hold on to one principle: if, after doing something, I have not learned anything new, I have not improved; I have simply patched a short-term problem. Learning remains the foundation of human progress.

Beware of vibe-coding. Beware of autopilot.