
On Purpose in an AGI Future


This note is adapted from my Medium post. It has been edited for this site, while keeping the original meaning intact.

The arrival of artificial general intelligence is inevitable. Whether it occurs this year or within the next decade is a secondary question. What is certain is that many tasks we still consider uniquely human will, in time, be performed by machines, with greater speed, precision, and consistency than any individual could offer.

This prospect raises a question that I find difficult to set aside: what, then, is the meaning of human work?

AGI will not eliminate the need for judgement. It will concentrate it. An AGI could optimise a national economy with a precision no team of economists could match, but the values that define what "optimal" means remain a human responsibility. The machine computes the most efficient path; we decide whether efficiency is the right objective. That distinction is not trivial. It is, arguably, the only distinction that matters.

The same logic applies to darker possibilities. Superintelligent systems could be weaponised to destabilise economies, manipulate institutions, or undermine democratic processes in ways that are computationally impossible to detect without equivalent capability. This creates a structural arms race, not unlike nuclear deterrence, but applied to the very faculty that defines our civilisation: intelligence itself.

The fracture

The most serious risk is not technological displacement. It is concentration of access. A world in which AGI remains the exclusive domain of a small number of states or corporations is a world in which the majority of humanity becomes structurally irrelevant: present, but without meaningful participation. That is not an abstract concern. It is a trajectory already visible in the distribution of current AI capabilities.

Democratised access, by contrast, could produce the opposite effect: elevating collective ambition, expanding the space of problems people are capable of pursuing, and freeing human effort from execution toward vision.

The real threat

What concerns me most is not that machines will displace human purpose. It is that human ambition will be surrendered before the question is even seriously posed.

This erosion is already underway, and it predates AI. Institutions, whether political, commercial, or cultural, frequently operate to keep populations comfortable enough not to question, distracted enough not to act. The threat to human purpose is not artificial intelligence. It is the willingness to relinquish ambition in exchange for convenience.

If ambition is preserved, no technology can substitute for human direction. If it is surrendered, no technology can restore it.


Machines replace execution, not purpose. A translator whose goal is merely to convert text between languages is vulnerable to automation. One whose purpose is to preserve cultural nuance, to build bridges between communities, to make meaning portable across languages, finds in AGI a tool, not a replacement. The same distinction applies across every discipline. The question is never what you do, but what you are trying to achieve.

The challenge ahead is not technical. It is distributive, political, and ultimately ethical. Ensuring broad access to these tools is the defining task of this generation. As long as access remains unequal, purpose is not in short supply: it is simply misallocated.

"Life is never made unbearable by circumstances, but only by lack of meaning and purpose."
Viktor Frankl