
OpenAI's Codex

The only way out is ideas

A few days ago I watched the OpenAI Codex demo, and I am still thinking about what to do, because it means that I will lose my job as a software engineer pretty soon. This post explores what I could do to prolong that period for some time.

OpenAI what?

OpenAI's Codex is a computer program that knows how to write computer programs. You should check out their first demo, and maybe this video here, showing that it might not take that long before Codex can write itself...

Codex is still in beta and has some very rough edges. As of now, it is not as capable as a "good" human developer. It can solve 37% of the tasks in the coding benchmark the OpenAI team developed... But that's already bonanza crazy anyway.

Considering that it has only been around for a short time, and that the OpenAI team makes giant progress every year, it is probably at most a few years until it surpasses human-level coding abilities across the board. This development of AI is scary because it moves us closer to superintelligence.

Essentially, Codex, or something like it, will in the future interpret commands given in plain English and use them to write a computer program that fulfills your request exactly. And it will be smart enough to decipher your actual intent, and to debug and fix its own output!

The last Programming Language

This development will transform computer programming from an "arcane" form of art and wizardry into something much more powerful and easy to use.

You won't need to know the exact language the computer uses anymore, nor all the knowledge about how computers work in general, to make the computer do what you want, because Codex will make the computer understand plain English. Writing computer programs in English has been a dream of programmers for a long time. Languages have slowly moved to "higher" levels, away from the 0s and 1s the computer knows and the machine instructions the CPU executes, toward things that somewhat resemble English.

Now Codex will lift that to a much higher level. The final level. It will abstract away all the nitty-gritty details of the high-level programming languages of the past, which leaves people with only one prerequisite: English. AI will take care of the rest, and people can go on and program whatever they want.

In a way, that's what Codex is: an awesomely clever compiler that can infer intent from English requirements and translate them into a high-level programming language, producing a program that fulfills what the person asked for.

The job of that compiler sounds familiar... because that's pretty much exactly what my job is at the moment. As a software developer/engineer/whatever, this is exactly what you do: translate English requirements into code that solves the problem described. When a person can instruct an AI to do the same thing, there is no more need for programmers - or rather - everybody who knows English suddenly turns into a programmer and can make the computer solve their problems, no special knowledge required!
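To make the compiler analogy concrete, here is a hand-written sketch of the translation such a system performs. Both the prompt and the function are my own illustration (no model is actually called, and this is not the real Codex API): a plain-English request goes in, and runnable code that satisfies it comes out.

```python
# The plain-English "source code" a user might hand to a Codex-style system:
prompt = "Give me the n longest unique words in a text, longest first."

# The kind of target-language output the AI "compiler" might produce.
# (Written by hand here for illustration; nothing is generated by Codex.)
def longest_words(text: str, n: int) -> list[str]:
    """Return the n longest unique words in text, longest first."""
    unique = list(dict.fromkeys(text.split()))  # dedupe, keep first-seen order
    return sorted(unique, key=len, reverse=True)[:n]

print(longest_words("the quick brown fox jumps over the lazy dog", 3))
# → ['quick', 'brown', 'jumps']
```

The hard part, of course, is the step this sketch fakes: inferring that mapping from the English sentence alone. That inference is exactly what Codex is learning to do.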

So that is what Codex and its successors will do to programming. This development is going to enable everybody to code as soon as they know the English language.

Programming -> democratized

What does all of this mean? It means that soon you can program... You and everybody else who can read this text and write English will become able to program, period. Even better: everybody will get better at programming from now on, even without learning anything, because the tools will keep improving, and therefore the possible output of everybody will improve too. If AI can solve problems better over time, so can you, by making good use of these tools.

That's why I am going to lose my job pretty soon: I can not compete with an AI that can code better than I can, and I guess that AI will soon be able to code better than everybody else on the planet too. At some point in the not-so-distant future, there will be an AI that enables everybody who knows plain English to write rock-solid computer programs which do exactly what they want and only that, without bugs and with incredibly high performance. It might even be able to prove that the programs it writes are correct.

Learning Programming Stuff becomes obsolete

A consequence of this is that spending a lot of time on learning something that might seem valuable now is going to be a waste of time and effort. The skills learned will become useless rapidly. And that is something that frustrates me deeply, because I like learning and the competitive edge that it gave me.

It means that learning new programming languages or how to develop crypto applications, shader coding, or how to write game engines and any other such thing, is going to be a waste of time if you only do it to stay competitive in the software engineering industry.

There is nothing I can do about becoming obsolete as a programmer, so I might as well accept it and pivot to another activity. But I love programming, which makes this development deeply sad for me personally.

But if coding can be automated - then - what's left?

The answer to that question is pretty grim. Because there is no real answer. Everything from here on out can and will be automated... And people will do so sooner rather than later if there are economic incentives to do it. But I think there is one exception that will take some more time than all the others: jobs that involve the creation and sifting of ideas.

In a way, what OpenAI builds are awesome tools to express ideas of human creation in the digital world (code, text, images, everything represented in bits and bytes really). The tools they build will be extremely easy to use (the only requirement will be knowing English, no fancy jargon or deeper understanding of domain expertise required) and these tools will be available to everyone (for a small fee probably).

But... this tooling (for now at least) leaves one aspect of the human intellect un-automated: creativity. Things that can generate ideas already exist (generative models like GANs and auto-encoders), but sifting through their output and choosing only the ideas that "work", because humans like them, is still something that humans need to do. The AI has no intent to program anything yet; it can not dream up a new startup or something utterly new that has never been done before. It can not innovate, at least for now. And so that's what I could be doing in the future: use AI to build cool stuff.

At least for the medium-term future, that's the way out I can see. Because I can still do that, create and sift through ideas, for maybe the next five to ten years until that will also be automated away by the next neural network...

A very weird future.

When Codex can generate and implement its own ideas and develop its own notion of "where to go", the world might collapse into a singularity anyway. A singularity is what might happen when a machine can generate its own novel ideas, implement and test them with amazing speed, and upgrade itself over and over again, in an upward spiral of exponential growth. At that point, worrying about a job, or almost anything else, becomes meaningless. We will have created our version of something that, after a few iterations of improvement, might quickly deserve the label "god".

Even if the Singularity or a fast take-off doesn't happen, there will be enough tools like Codex around that can do everything better than humans can, including science and engineering and even the generation of ideas. The whole thing might not explode into a singularity where everything changes, but the world would still look vastly different and would have to adapt to this drastic change in available intelligence.

Two Options

Complete automation like this would result in radical abundance that is, at least theoretically, available to everybody. With this abundance, I think there are essentially two options. Either the spoils of automation are divided somewhat fairly - in other words, everybody can have everything necessary for survival and infinite leisure time for free, perhaps via some sort of universal basic "income" that is enough to "buy" everything one needs.

Or the spoils of AI aren't divided fairly, and access is restricted by a crazy elite trying to keep everybody else poor and wretched so that they can enjoy their status symbols. In that case, one needs enough capital to own enough of the production machinery to be able to live from it and be part of that elite. While maybe secretly working to destroy the elite from the inside ^^

The people who are not part of the elite might try to revolt against this injustice, which in a way would be the revolution Marx was thinking about over a century ago. But with powerful AI one-sidedly available, such a revolt might not work, or might never happen in the first place.

This second, unfair future is why I am even thinking about working for the years until that point comes. I do not want to end up at the bottom of that curve of wealth distribution, where one is part of the poor and can't do much about it, because work has become an impossibility.

When the point of complete automation comes, however hard you might be able to work, you can not change anything about your economic status anymore, since by that time everything you might be able to do can already be done better without you. There is no need for you anymore; you can not create value, and therefore new wealth can not be gained, except as returns from already owning a part of the production machinery. If you are not wealthy at that point, you never will be, so it's best to accumulate wealth before AI automation fully arrives. Owning a piece of AI-productive machinery is the way to keep a high standard of living, even after the automation transition.

The Kantian imperative?

All of this makes me feel very gloomy, almost doomsday-like. In the sense that if everything comes to this point eventually anyway, without me doing anything for it - then why am I not spending all of my time not working at all, but enjoying all the life I can, while the world is still the way it is now?

Why do I bother with learning, with reading, with becoming good at things? The thought crossing my mind is to stop competing entirely, take the money that I have now, and bridge the gap of time until complete automation happens with stuff that I enjoy doing... There are enough people out there; why should I bother helping to bring about AI and a world of complete abundance, if I am already sure that this will happen within my lifetime anyway?

I think this is the fundamental problem of cooperation, which leads to the tragedy of the commons - and that's where the genius of the Kantian imperative lies. The idea - if everybody acted as you do, how would the world look? - is a test for the tragedy of the commons: using rationality to sift suboptimal but evolutionarily stable strategies out of one's behavior.

So that's the answer then: if everybody were to think like that, everybody would start to travel and do whatever the fuck they want, leading to a collapsing society without values except one's own pleasure. A society which would end up forever delaying that ideal dream of utopian abundance until some other day, tomorrow. Utopia would never happen, because of all the individuals waiting on the sidelines, traveling, having a good life, waiting for somebody else to come around and build this future - thinking that it would happen without them, and thereby giving up their responsibility to make this good future come about in the first place.

And that's bad and the main reason why I am still worrying about working and not already retiring with some good books on a beach somewhere in the middle of nowhere...

The human purpose crisis

The last problem I see with all of this is the loss of purpose in people's lives. In a way, people have defined and identified themselves by what they have been doing. And if that gets taken away from them, the question of what is left becomes very loud and clear.

If there are no problems left to be solved, nothing to do, only infinite leisure time–then why does anything matter? How can you stretch your mind, exert your creative powers, and your freedom, and adopt responsibility if everything is already taken care of by machines? In this world, what is your purpose? And how do you decide to spend your time?

If those questions can not be sufficiently answered, then we should start worrying about whether or not we want a future with powerful AI... Maybe we don't want to go all the way with automation, leaving some aspects of work–maybe the creation of ideas–still up to us humans.