SSI Inc, Situational Awareness, and Runway Gen3 – Live and Learn #45
Welcome to this edition of Live and Learn. This time with the announcement of Safe Superintelligence Inc. by Ilya Sutskever, an essay series on how AGI is going to happen soon and what that means for the world, as well as the release of the Gen3 video generation model by RunwayML. As always, I hope you enjoy this edition of Live and Learn!
✨ Quote ✨
Just as the human world is a society of selves, your body is a society of cells.
– Kasra (source)
Links
Genomes by Design by Arc Institute. Researchers have discovered a new way to make precise, programmable changes to genomes. The new method, built around so-called "bridge RNAs", can insert, excise, and invert DNA sequences, making it more flexible and powerful than traditional CRISPR/Cas systems. It's a big step forward for synthetic biology, and it will be interesting to see how this technology gets used in the future. The details are described in two papers published just last week in Nature, titled "Bridge RNAs direct programmable recombination of target and donor DNA" and "Structural mechanism of bridge RNA-guided recombination". Both papers are open access.
DeepMind V2A (Video-to-Audio) by Google. This model generates the sounds for a video from its raw pixels as input, while the output can additionally be steered with text prompts. This gives artists a high level of control over the type of sounds and music that get generated, while the audio always "fits" the video. This work reminds me of similar things that ElevenLabs has been working on. The main difference is that Google's model takes the video as its main input, whereas the ElevenLabs version is "just" text-to-sound, which can make it hard to describe exactly what you want. Google's version just "gets it" from the video input, and you can refine the result with text if needed.
Gen3 Video Generation Model by RunwayML. Runway was one of the first companies to make AI video generation models available for creatives to use. Now they have released their Gen3 model, which is again a big step up from their previous models. It is much closer in performance to OpenAI's Sora, and the output quality is quite insane. Especially the adherence to more complicated text prompts is impressive and something I have not seen before, not even from Sora.
Situational Awareness by Leopold Aschenbrenner. This essay series gave me the chills many times while reading it, and I highly recommend reading the full thing. It reminded me a lot of the WaitButWhy articles on AI. The basic takeaway is this: we are about to hit AGI levels soon, and we are not prepared for the consequences at all. As soon as that happens, we are off into an arms race to create superhuman intelligence and weaponize it. Nation-states like China, Russia, and America will literally fight wars over who gets access to AGI first, and the world will be in a state of extreme disequilibrium because of it. Current AI labs are not secure enough to protect the secrets to building super-powerful weapon systems (AGI), and therefore the government will (and should) step in and take control of these research programs to protect them from nation-state actors. Some people hold very contrary opinions, like the Goldilocks Zone argument presented by NotBoring. It's interesting to listen to Leopold defend his position in the interview on the Dwarkesh Patel Podcast. In the end, we don't know what is going to happen, but his argument, as presented in this essay series, is at least coherent. And we should take the possibility of a race towards superintelligence, and all its consequences, seriously. Because if he's right, this will be a history-defining moment.
Safe Superintelligence Inc. by Ilya Sutskever. A short announcement of what Ilya is going to do now that he has resigned from OpenAI. TLDR: he's helping to build a company whose sole goal is to create safe superintelligence, without the need to ever turn it into a product.
AI Search - The Bitter-er Lesson in AI by Aidan McLaughlin. This essay argues that the single biggest unlock for LLMs is yet to come: search. Essentially, an LLM should be able to spend more compute thinking and reasoning about a problem, instead of always committing to the next most likely token. Right now we don't have that capability, but once we do, LLMs could use their stream of produced tokens to refine and think through an answer, searching over multiple possible solutions instead of taking the first thing that "comes to mind". It's trading inference compute for more accurate solutions, instead of "only" scaling training compute to get more raw intelligence out of bigger models. To me, what he's describing sounds like "System 2" thinking for LLMs. McLaughlin further argues that the intelligence inherent in current LLMs is already enough to get to ASI (artificial superintelligence) if we succeed in giving them this kind of search capability. Let's see.
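To make the idea a bit more concrete, here is a minimal best-of-n sketch in Python of what "search over solutions" can mean in its simplest form. The `generate_candidate` and `score_candidate` functions are hypothetical stand-ins (in a real system both would call an LLM, e.g. a generator and a verifier); this is my own illustration of the general idea, not code from the essay.

```python
import random

# Hypothetical stand-in: in a real system this would sample a full
# answer (including its chain of reasoning) from an LLM.
def generate_candidate(prompt: str) -> str:
    return f"candidate #{random.randint(0, 9999)} for: {prompt!r}"

# Hypothetical stand-in: in a real system this would be a verifier
# model (or a unit test, proof checker, etc.) judging the candidate.
def score_candidate(candidate: str) -> float:
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Trade extra inference compute for answer quality: sample n full
    candidates and keep the best-scoring one, instead of committing to
    the first stream of tokens that 'comes to mind'."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score_candidate)

if __name__ == "__main__":
    # Larger n = more inference compute spent searching for a good answer.
    print(best_of_n("How many r's are in 'strawberry'?", n=8))
```

Real search methods go further than this (tree search over reasoning steps, for example), but the compute trade-off is the same: spend more at inference time to find better answers.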
🌍 Traveling 🌍
I've spent the last two weeks in Berlin, meeting friends again, going to ZuBerlin, and enjoying the summer. It's been a tremendously fun (and slightly crazy) time, and I am looking forward to the next weeks in Berlin.
🎶 Song 🎶
Early Summer by Ryo Fukui
That's all for this time. I hope you found this newsletter useful, beautiful, or even both!
Have ideas for improving it? As always please let me know.
Cheers,
– Rico