An “AI scientist” invents and conducts its own experiments

At first glance, a recent batch of research papers produced by a prominent artificial intelligence lab at the University of British Columbia in Vancouver might not seem so remarkable. Incorporating incremental improvements to existing algorithms and ideas, they read like the content of an average AI conference or journal.

But the research is actually remarkable. That’s because it’s entirely the work of an “AI scientist” developed in the UBC lab along with researchers from the University of Oxford and a startup called Sakana AI.

The project demonstrates an early step toward what could prove to be a revolutionary trick: letting AI learn by inventing and exploring new ideas. The ideas just aren’t especially novel yet. Several papers describe tweaks for improving an image-generation technique known as diffusion modeling; another outlines an approach to speed up learning in deep neural networks.

“These are not breakthrough ideas. They’re not extremely creative,” admits Jeff Clune, the professor who directs the UBC lab. “But they seem like pretty cool ideas that someone might try.”

As amazing as today’s AI programs can be, they are limited by their need to consume human-generated training data. If AI programs can instead learn in an open-ended way, experimenting and exploring “interesting” ideas, they can unlock possibilities that go beyond anything humans have shown them.

Clune’s lab has previously developed AI programs designed to learn in this way. For example, a program called Omni tried to generate the behavior of virtual characters in several video game-like environments by singling out the ones that seemed interesting and then iterating on them with new designs. Such programs used to require hand-coded instructions to define what counts as interesting. Large language models, however, give these programs a way to identify what is most intriguing on their own, thanks to their ability to mimic human judgment. Another recent project from Clune’s lab uses this approach to let AI programs come up with the code that allows virtual characters to do all sorts of things in a Roblox-like world.

The AI scientist is one example of Clune’s lab exploring these possibilities. The program dreams up machine-learning experiments, uses an LLM to decide which look most promising, then writes and runs the necessary code, and repeats the cycle. Despite the underwhelming results so far, Clune says that open-ended learning programs, like the language models themselves, could become much more capable as the computing power that feeds them increases.

“It feels like exploring a new continent or a new planet,” Clune says of the possibilities unlocked by LLMs. “We don’t know what we’re going to find, but everywhere we turn there’s something new.”