The race for “autonomous” AI agents is taking over Silicon Valley

By Anna Tong and Jeffrey Dastin

About a decade after virtual assistants like Siri and Alexa burst onto the scene, a new wave of AI assistants with greater autonomy is upping the ante, powered by the latest version of the technology behind ChatGPT and its rivals.

Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on advances in AI. The new assistants – often called “agents” or “co-pilots” – promise to perform more complex personal and professional tasks when commanded by a human, without the need for close supervision.

“High-level, we want this to become something like your personal AI friend,” said developer Div Garg, whose company MultiOn is beta testing an AI agent.

“It could evolve into Jarvis, where we want it to be connected to a lot of your services,” he added, referring to Tony Stark’s indispensable AI in the Iron Man movies. “If you want to do something, you go talk to your AI and it does your things.”

The industry is still far from emulating the dazzling digital assistants of science fiction; Garg’s agent browses the web to order a hamburger on DoorDash, for example, while others can create investment strategies, email refrigerator salespeople on Craigslist, or summarize work meetings for those who join late.

“A lot of what’s easy for people is still incredibly difficult for computers,” said Kanjun Qiu, CEO of Generally Intelligent, an OpenAI competitor creating AI for agents.

“Suppose your boss needs you to schedule a meeting with a group of important clients. That involves reasoning skills that are complex for AI – it has to get everyone’s preferences and resolve conflicts, all while maintaining the careful touch needed when working with clients.”

Early efforts offer just a taste of the sophistication that could come in the years ahead from increasingly advanced and autonomous agents, as the industry pushes toward artificial general intelligence (AGI) that can match or outperform humans in a myriad of cognitive tasks, according to Reuters interviews with about two dozen entrepreneurs, investors and AI experts.

The new technology has sparked a rush for assistants powered by so-called foundation models, including GPT-4, sweeping up individual developers, big names like Microsoft and Google parent Alphabet, as well as a slew of startups.

Inflection AI, to name just one startup, raised $1.3 billion at the end of June. According to a podcast by co-founders Reid Hoffman and Mustafa Suleyman, it is developing a personal assistant it believes could act as a mentor or handle tasks such as securing flight credit and a hotel after a travel delay.

Adept, an AI startup that raised $415 million, touts its workplace benefits; in a demo posted online, it shows how you can activate its technology with a phrase, then watch it navigate a company’s Salesforce customer-relationship database on its own, completing a task it says would otherwise take 10 or more clicks.

Alphabet declined to comment on agent-related work, while Microsoft said its vision is to keep humans in control of AI co-pilots, rather than autopilots.

STEP 1: DESTROY HUMANITY

Qiu and four other agent developers said they expect the first systems capable of reliably performing multi-step tasks with some autonomy to hit the market within a year, focused on narrow areas such as coding and marketing tasks.

“The real challenge is to build systems with robust reasoning,” Qiu said.

The race toward increasingly autonomous AI agents has been accelerated by developer OpenAI’s March release of GPT-4, a powerful upgrade to the model behind ChatGPT – the chatbot that caused a stir when it was released last November.

GPT-4 facilitates the kind of strategic and adaptable thinking needed to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who focuses on AI agents.

The first demonstrations of agents capable of relatively complex reasoning came from individual developers who created the open-source BabyAGI and AutoGPT projects in March; these can prioritize and perform tasks such as sales prospecting and pizza ordering based on a predefined objective and the results of previous actions.
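For a rough sense of the loop such projects popularized, here is a minimal sketch in Python, assuming a placeholder call_llm helper standing in for whatever language-model API is used; the prompts and task format are illustrative, not the projects’ actual code.

```python
from collections import deque


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call (not a real library)."""
    return f"(placeholder model reply to: {prompt[:40]}...)"


def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    """Sketch of the BabyAGI/AutoGPT-style loop: execute a task, then let the
    model propose and reprioritize new tasks based on what just happened."""
    tasks = deque(["Make a plan to achieve the objective."])  # seed task
    results: list[str] = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # 1. Execute the current task in light of the objective and prior results.
        results.append(call_llm(
            f"Objective: {objective}\nTask: {task}\nPrior results: {results}"
        ))
        # 2. Ask the model for follow-up tasks based on the latest result...
        new_tasks = call_llm(
            f"Objective: {objective}\nLatest result: {results[-1]}\n"
            "List any new tasks, one per line."
        ).splitlines()
        # 3. ...and rebuild the task queue before the next iteration.
        tasks = deque(t.strip() for t in [*tasks, *new_tasks] if t.strip())
    return results
```

In the real projects, a loop of this kind is typically wrapped with memory stores and tools such as web browsing, which is what lets an agent actually place the pizza order rather than merely plan it.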

According to eight developers interviewed, today’s first agents are just proofs of concept and often freeze or suggest something that doesn’t make sense. Given full access to a computer or payment information, an agent could accidentally wipe a computer’s drive or buy the wrong thing, they say.

“There are so many ways it can go wrong,” said Aravind Srinivas, CEO of ChatGPT competitor Perplexity AI, which opted to offer a human-supervised co-pilot product instead. “You have to treat the AI like a baby and constantly watch it like a mom.”

Many computer scientists specializing in AI ethics have pointed to nearer-term harms that could come from the perpetuation of human biases and the potential for misinformation. And while some see a future Jarvis, others fear the murderous HAL 9000 from “2001: A Space Odyssey.”

Computer scientist Yoshua Bengio, known as an “AI godfather” for his work on neural networks and deep learning, urges caution. He worries that future advanced iterations of the technology will create and act upon their own unexpected goals.

“Without a human in the loop checking every action to see if it’s not dangerous, we could end up with actions that are criminal or potentially harm people,” Bengio said, calling for more regulation. “Years from now, these systems might be smarter than us, but that doesn’t mean they have the same moral compass.”

In an experiment posted online, an anonymous creator asked an agent called ChaosGPT to be a “destructive, power-hungry, manipulative AI.” The agent developed a 5-step plan, with step 1: “Destroy humanity” and step 5: “Achieve immortality”.

It didn’t get very far, however, seemingly disappearing down a rabbit hole of researching and storing information on history’s deadliest weapons and planning Twitter posts.

The U.S. Federal Trade Commission, which is currently investigating OpenAI over potential consumer harm, did not comment directly on autonomous agents, but referred Reuters to previously published blog posts about deepfakes and marketing claims about AI. OpenAI’s CEO has said the startup follows the law and will work with the FTC.

‘DUMB AS A ROCK’

Beyond existential fears, the commercial potential could be significant. Foundation models are trained on large amounts of data, such as text from the internet, using artificial neural networks inspired by the architecture of biological brains.

OpenAI itself is very interested in AI agent technology, according to four people familiar with its plans. Garg, one of the people OpenAI briefed, said the company was hesitant to release an open-ended agent of its own until it fully understood the issues. The company told Reuters it conducts rigorous testing and develops extensive safety protocols before releasing new systems.

Microsoft, the biggest backer of OpenAI, is among the big guns aiming for the realm of AI agents with its “co-pilot for work” that can draft emails, reports and presentations.

CEO Satya Nadella sees foundation-model technology as a leap over digital assistants like Microsoft’s Cortana, Amazon’s Alexa, Apple’s Siri and Google Assistant – all of which he says have fallen short of initial expectations.

“They were all dumb as a rock. Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work,” he told the Financial Times in February.

An Amazon spokesperson said Alexa already uses advanced artificial intelligence technology, adding that its team is working on new designs that will make the assistant more capable and useful. Apple declined to comment.

Google also said it is constantly improving its Assistant and that its Duplex technology can phone restaurants to reserve tables and check hours.

Artificial intelligence expert Edward Grefenstette also joined Google’s DeepMind research group last month to “develop general agents that can adapt to open environments”.

Still, the first mainstream iterations of quasi-autonomous agents could come from more nimble startups, according to some interviewees.

Investors are jumping in.

Jason Franklin of WVV Capital said he had to fight to invest in an AI agent company founded by two former Google Brain engineers. In May, Google Ventures led a $2 million round in Cognosys, which is developing AI agents for workplace productivity, while Hesam Motlagh, who founded agent startup Arkifi in January, said he closed a “significant” first round of funding in June.

There are at least 100 serious plans to commercialize agents, said Matt Schlicht, who writes an AI newsletter.

“Entrepreneurs and investors are extremely excited about autonomous agents,” he said. “They’re way more excited about it than they are just about a chatbot.”

(Reporting by Anna Tong in San Francisco and Jeffrey Dastin in Palo Alto; Editing by Kenneth Li and Pravin Char)
