AlphaEvolve, CTM and Sequoia's AI Ascent Conference - Live and Learn #68

Welcome to this edition of Live and Learn. This time with Google's new AlphaEvolve system, showing that AI can solve hard novel problems, the Continuous Thought Machine, a new brain-inspired approach to reasoning AI from Sakana, and some interviews from Sequoia's AI Ascent event. As always, I hope you enjoy this edition of Live and Learn.
✨ Quote ✨
The unsatisfied human is an active human, less likely to get eaten by tigers and more likely to outwork lazy competitors.
- Michael Edward Johnson (source)
Links
AlphaEvolve by Google DeepMind. To me, this might be one of Google's biggest announcements yet. Pair LLMs with reinforcement learning (something DeepMind is very good at - see AlphaZero or AlphaGo) and a bit of evolutionary search, and you get an AI system that can reason through and solve novel problems in mathematics and computer science, including important aspects of AI development itself. In the words of the authors: "AlphaEvolve enhanced the efficiency of Google's data centers, chip design, and AI training processes - including training the large language models underlying AlphaEvolve itself." The recursive, self-improving nature of AI is starting to show itself, and who knows what problems AlphaEvolve has already solved within Google that they haven't shown to the public yet? So far they only say that they have tried it on mathematical problems, where in 20% of cases it discovered novel solutions that advance the state of the art. This is another feeling-the-AGI sort of moment. Read the whole thing on their blog; it's very interesting how they combine different LLMs with more traditional algorithms and a host of other techniques to make it all come together and work.
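The core loop is easier to picture in code. Here is a deliberately tiny sketch of the evolutionary-search idea as I understand it from the blog post - the `mutate` function stands in for the LLM proposing edited candidates, and the evaluator is a plain objective function (all names and numbers here are my own toy illustration, not AlphaEvolve's actual method):

```python
import random

random.seed(42)

def evaluate(x: float) -> float:
    # Automated evaluator: higher is better. AlphaEvolve likewise relies
    # on machine-checkable scoring of every candidate it generates.
    return -(x - 3.0) ** 2

def mutate(x: float) -> float:
    # Stand-in for the LLM: propose a slightly edited candidate.
    return x + random.gauss(0, 0.5)

# Start from random candidates, then repeatedly score, select, and mutate.
population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(200):
    population.sort(key=evaluate, reverse=True)  # best candidates first
    survivors = population[:10]                  # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=evaluate)
print(round(best, 2))  # converges near the optimum at 3.0
```

The real system replaces the Gaussian nudge with LLM-written code edits and the toy objective with real benchmarks (chip area, data-center scheduling cost, etc.), but the select-and-mutate skeleton is the same.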
Continuous Thought Machines by SakanaAI. The people at SakanaAI have invented an entirely new architecture paradigm for AI that seems promising and is more closely inspired by how real brains work. Essentially, their approach uses the timing between artificial neurons' "firing" to solve problems. Whereas a traditional transformer pushes everything through one big batch of matrix multiplications at once, here each neuron has access to its own history of firing patterns and its connections to other neurons, and can use that to make "decisions". The whole interactive report and the website linked above are simply wild, and I'm excited to see where this goes. It seems a lot more brain-like, and at least their initial results look promising and more generally intelligent than current approaches.
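To make the idea concrete, here is a toy sketch of what "each neuron has access to its own history" could look like - this is my own illustration of the concept, not Sakana's actual architecture; the history length, weights, and update rule are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

HISTORY = 5   # how many past ticks each neuron remembers (arbitrary)
NEURONS = 8

# Per-neuron weights over that neuron's OWN history - unlike a pointwise
# nonlinearity, each neuron gets a private little model of its past.
history_weights = rng.normal(size=(NEURONS, HISTORY))

def step(history: np.ndarray) -> np.ndarray:
    # history: (NEURONS, HISTORY). Each neuron mixes its own past
    # activations to decide its next output.
    return np.tanh(np.einsum("nh,nh->n", history_weights, history))

history = np.zeros((NEURONS, HISTORY))
history[:, -1] = rng.normal(size=NEURONS)  # initial input pulse

for _ in range(10):  # unroll internal "thinking" ticks
    out = step(history)
    history = np.roll(history, -1, axis=1)  # slide the window forward
    history[:, -1] = out                    # append newest, drop oldest

print(out.shape)  # (8,)
```

The interesting contrast with a transformer is that the loop above runs for an internal number of ticks decoupled from the input length, which is roughly the "continuous thought" framing of the report.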
Seven Wonders Essay by Lewis Thomas. This is just a beautiful piece of writing about seven natural wonders that exist in this world, things that, from a biological perspective, are just interesting and make you go: "wow, I didn't know that this existed, or possibly could exist". There are truly a lot of beautiful and intricately complex things in this world, and we often forget this in our day-to-day lives. This essay tries to instill that sense of wonder again and asks the reader to pay attention to all the marvelous things out there once more.
How much Information is in a piece of DNA? by Asimov Press. This is a seemingly simple question with a surprisingly difficult answer, and this post goes into a lot of detail explaining what one could even mean by "information". It explains how the biological nature of DNA and its interaction with the rest of the cell during transcription and translation make it very difficult to specify exactly how much information DNA really contains. Hint: it's not as easy as counting up the possible base pairs and putting a bit count on that ^^ For a more exact answer, read the post.
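For reference, the naive count the post warns against is easy to compute: treat each base as one of four equally likely symbols and multiply. A quick sketch (the human genome length is my own rough figure for illustration, not from the post):

```python
import math

# Each DNA base is one of four symbols (A, C, G, T), so under a uniform
# model a single base carries log2(4) = 2 bits.
BITS_PER_BASE = math.log2(4)  # = 2.0

# Rough length of the human genome, ~3.1 billion base pairs (assumption).
HUMAN_GENOME_BP = 3.1e9

total_bits = HUMAN_GENOME_BP * BITS_PER_BASE
total_megabytes = total_bits / 8 / 1e6

print(f"{total_bits:.2e} bits ≈ {total_megabytes:.0f} MB")
```

That yields roughly 775 MB, and the post's whole point is why this number is misleading: compression, redundancy, and the cellular machinery that interprets the sequence all complicate the picture.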
AI Ascent Conference by Sequoia Capital. Sequoia, a big VC firm, hosted its annual AI event, inviting the who's who of AI CEOs and researchers for interviews and talks. I really liked Jim Fan's presentation on the Physical Turing Test, the interview with Jeff Dean on what Google is cooking up, and Sam Altman talking about what OpenAI is doing and the bigger picture of AI development. There are many more speakers and interviews that you might find interesting, so go check out their event page 🙂
🚲 Travel 🚲
I've been busy cycling for the last week, and I'm in a hurry to get myself to Peru so as not to miss my flight back home. Every day, I need to do at least 60km on average to arrive in time... for the next two MONTHS. Conditions with rain, mud, and bad gravel roads make it difficult sometimes, but overall I've made decent progress and am on schedule... at least for now 🙂
The landscapes are absolutely worth every bit of pain, though. Crazy mountains, deserts, waterfalls, rivers, and lakes, everywhere I go. Colombia is truly a marvel of natural beauty, and I'm excited for more of this.
🎶 Song 🎶
Coming to Get You Nowhere by This Is the Kit
I've been listening to the music of This Is the Kit a lot while cycling; it's the right mix of tranquility and energy, and I love it. This is just one of their amazing songs, and you should check out all the rest of their music too hehe.
That's all for this time. I hope you found this newsletter useful, beautiful, or even both!
Have ideas for improving it? As always please let me know.
Cheers,
ā Rico
Subscribe to Live and Learn 🌱
Join the Live and Learn Newsletter to receive updates on what happens in the world of AI and technology every two weeks on Sunday!