
In the coming years, agents are widely expected to take over more and more chores on behalf of humans, including using computers and smartphones. For now, though, they're too error-prone to be of much use.
A new agent called S2, created by the startup Simular AI, combines frontier models with models specialized for using computers. The agent achieves state-of-the-art performance on tasks like using apps and manipulating files—and suggests that turning to different models in different situations may help agents advance.
“Computer-using agents are different from large language models and different from coding,” says Ang Li, cofounder and CEO of Simular. “It’s a different type of problem.”
In Simular's approach, a powerful general-purpose AI model, like OpenAI's GPT-4o or Anthropic's Claude 3.7, is used to reason about how best to complete the task at hand, while smaller open-source models step in for tasks like interpreting web pages.
Li, who was a researcher at Google DeepMind before founding Simular in 2023, explains that large language models excel at planning but aren’t as good at recognizing the elements of a graphical user interface.
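In code, that division of labor might look something like the following sketch. To be clear, this is purely illustrative and not Simular's actual implementation; every function name here is a hypothetical stand-in. The idea is that a frontier model decides what to do next in natural language, and a smaller GUI-specialized model translates that decision into screen coordinates.

```python
# Illustrative sketch of a planner/grounder split for a computer-using agent.
# NOT Simular's code; the model calls below are stand-in stubs.

def plan_next_step(goal: str, screenshot: bytes) -> str:
    """Stand-in for a frontier LLM: reasons about the next high-level action."""
    return "click the 'Compose' button"  # returned as natural language

def locate_element(instruction: str, screenshot: bytes) -> tuple[int, int]:
    """Stand-in for a small GUI-specialized model: grounds text to pixels."""
    return (640, 120)  # (x, y) coordinates of the matching on-screen element

def run_step(goal: str, screenshot: bytes) -> tuple[int, int]:
    step = plan_next_step(goal, screenshot)   # the big model plans
    return locate_element(step, screenshot)   # the small model grounds

print(run_step("send an email", b""))  # -> (640, 120)
```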
S2 is also designed to learn from experience: an external memory module records the agent's actions and the user's feedback, and S2 draws on those recordings to improve future attempts.
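A toy version of such a memory module, again an illustrative Python sketch rather than Simular's implementation, might record (state, action, feedback) tuples and surface them when a similar state comes up again:

```python
# Hypothetical external memory for an agent: store what was tried and how it
# went, then recall it so the planner can avoid repeating mistakes.

from dataclasses import dataclass, field

@dataclass
class EpisodeMemory:
    records: list[tuple[str, str, str]] = field(default_factory=list)

    def record(self, state: str, action: str, feedback: str) -> None:
        """Store an attempted action and the feedback it received."""
        self.records.append((state, action, feedback))

    def recall(self, state: str) -> list[tuple[str, str]]:
        """Return past (action, feedback) pairs from matching states."""
        return [(a, f) for s, a, f in self.records if s == state]

memory = EpisodeMemory()
memory.record("login page", "clicked 'Sign up'", "wrong button")
print(memory.recall("login page"))  # -> [("clicked 'Sign up'", "wrong button")]
```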
On particularly complex tasks, S2 outperforms every other agent on OSWorld, a benchmark that measures an agent's ability to use a computer operating system.
For example, S2 completes 34.5 percent of tasks that involve 50 steps, beating OpenAI's Operator, which completes 32 percent. Similarly, S2 scores 50 percent on AndroidWorld, a benchmark for smartphone-using agents, while the next-best agent scores 46 percent.
Victor Zhong, a computer scientist at the University of Waterloo in Canada and one of the creators of OSWorld, believes that future big AI models may incorporate training data that helps them understand the visual world and make sense of graphical user interfaces.
“This will help agents navigate GUIs with much higher precision,” Zhong says. “I think in the meantime, before such fundamental breakthroughs, state-of-the-art systems will resemble Simular in that they combine multiple models to patch the limitations of single models.”
To prepare for this column, I used Simular to book flights and scour Amazon for deals, and it seemed better than some of the open-source agents I tried last year, including AutoGen and vimGPT.
But even the smartest AI agents are, it seems, still troubled by edge cases and occasionally exhibit odd behavior. In one instance, when I asked S2 to help find contact information for the researchers behind OSWorld, the agent got stuck in a loop hopping between the project page and the login for OSWorld’s Discord.
OSWorld's benchmarks show why agents remain more hype than reality for now. While humans can complete 72 percent of OSWorld tasks, even the best agents still fail roughly two-thirds of the time on complex tasks. That said, when the benchmark was introduced in April 2024, the best agent could complete only 12 percent of the tasks.
Zhong says that the amount of training data available may limit how good agents can become.
Perhaps one solution is to add human intelligence to the mix. While looking into Simular, I discovered a research project that shows how effective it can be to blend human skills with those of an AI agent.
CowPilot, a Chrome plugin developed by a team at Carnegie Mellon University, allows a human to intervene when an AI agent gets stuck. With CowPilot running, I can step in and click or type if the agent seems to be dithering.
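The underlying pattern is simple enough to sketch in a few lines of Python. The snippet below is purely illustrative, not CowPilot's code: the agent acts until a crude repetition check suggests it's dithering, at which point control passes to the human.

```python
# Hypothetical human-in-the-loop handoff: if the agent proposes the same
# action too many times in a row, ask the human what to do instead.

def agent_propose_action(goal: str) -> str:
    """Stand-in for the agent's policy; a real agent would call a model."""
    return "click login"

def execute(action: str) -> bool:
    """Stand-in executor; returns True once the task is finished."""
    print("executing:", action)
    return action == "done"

def run_with_handoff(goal: str, max_repeats: int = 2) -> None:
    """Let the agent act, but hand control to the human if it loops."""
    recent: list[str] = []
    while True:
        action = agent_propose_action(goal)
        if recent.count(action) >= max_repeats:  # crude dithering check
            action = input(f"Agent is stuck on {action!r}; your move: ")
        recent = (recent + [action])[-5:]        # keep a short action history
        if execute(action):
            break

# run_with_handoff("log into the project Discord")  # interactive demo
```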
Jeffrey Bigham, a professor at CMU who oversaw the project, which was developed by his student, Faria Huq, says the idea of having a human work with an agent “is almost so obvious that it’s hard to believe it’s not the way most people are thinking about it.”
Most interestingly, Bigham and Huq say that a human and agent working together can perform more tasks than either party working alone. In a limited test, the human-agent combo completed 95 percent of the jobs it was given, while requiring humans to perform only 15 percent of the total steps.
“Web pages are often hard to use, especially if you’re not familiar with a particular page, and sometimes the agent can help you find a good path through that would have taken you longer to figure out on your own,” Bigham adds.
I don’t know about you, but I like the idea of an agent that makes me more productive and less error prone.