In fiction, an invisible, non-human hand moving the cursor across your computer screen and typing without touching the keyboard is usually a sign of a malicious AI hijacking the machine (or a friendly ghost helping you solve mysteries, as in the TV show Ghostwriter). Thanks to Anthropic's new computer use feature for its AI assistant Claude, there's now a much more sympathetic explanation.
Powered by an upgraded version of the Claude 3.5 Sonnet model, the feature – known as "computer use" – lets Claude interact with your computer the same way you would. It takes the AI assistant concept a step beyond text and a voice, with virtual hands typing, clicking and otherwise manipulating your computer.
Anthropic pitches computer use as a way for Claude to take over tedious tasks. It can help you fill out a form, find and organize information on your hard drive, and move information from one place to another. While OpenAI, Microsoft, and other developers have demonstrated similar ideas, Anthropic is the first to ship a publicly available feature, even if it is still in beta.
"With computer use, we are trying something fundamentally new," Anthropic explains in a blog post. "Instead of creating specific tools to help Claude complete individual tasks, we teach it general computer skills, allowing it to use a wide range of standard tools and software programs designed for humans."
The computer use feature builds on the improved performance of Claude 3.5 Sonnet, especially with digital tools and coding software. Although somewhat overshadowed by the spectacle of computer use, Anthropic also debuted a new model, Claude 3.5 Haiku, an upgraded version of its cheapest model that the company says can rival its previous top-performing model, Claude 3 Opus, while still being much cheaper.
Invisible AI support
You can't just give Claude an order and walk away, though. Its control over your computer comes with some technical hiccups as well as intentional limitations. On the technical side, Anthropic admits that Claude has difficulty scrolling and zooming on a screen. That's because the AI interprets what's on your screen as a series of screenshots and then tries to stitch them together, like flipping through a camera roll rather than watching live video. Anything that happens too quickly or changes the perspective on the screen can confuse it. Still, Claude can do quite a bit by manipulating your computer, as seen above.
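For developers, the feature currently lives in Anthropic's API rather than a consumer app: your code sends Claude screenshots of the screen, Claude replies with the mouse and keyboard actions it wants taken, and your code performs them and reports back. The snippet below is a minimal sketch of that loop using the Anthropic Python SDK, assuming the tool identifiers from the October 2024 computer-use beta; the execute_action helper is a hypothetical placeholder for the real screenshot and input handling, not part of Anthropic's SDK.

```python
# Minimal sketch of the computer-use agent loop (Anthropic Python SDK).
# Assumes the October 2024 beta identifiers and an ANTHROPIC_API_KEY env var.
import anthropic

client = anthropic.Anthropic()

# Describe the virtual display Claude will "see" through screenshots.
computer_tool = {
    "type": "computer_20241022",
    "name": "computer",
    "display_width_px": 1024,
    "display_height_px": 768,
}

messages = [{"role": "user", "content": "Open the spreadsheet and fill in the form."}]


def execute_action(tool_input):
    """Hypothetical placeholder: a real agent would move the mouse, type,
    or take a screenshot here, then return the result for Claude to see."""
    print("Claude requested:", tool_input)
    return "screenshot-or-result-goes-here"


while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[computer_tool],
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )

    # Collect any actions Claude asked for in this turn.
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = execute_action(block.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": result,
            })

    if not tool_results:
        break  # No more actions requested; Claude answered in text.

    # Feed the assistant turn and our results back so the loop continues.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": tool_results})
```

Because Claude only ever sees those periodic screenshots, anything that changes between frames is invisible to it, which is why fast-moving or scrolling content trips it up.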
Unbridled automation poses obvious dangers, even when it works perfectly, as many science fiction films and books have shown. Claude is no Skynet, but Anthropic has imposed limitations on the AI for more prosaic reasons. For example, there are guardrails that prevent Claude from interacting with social media or government websites. Registering domain names or posting content is not allowed without human verification.
"Because computer use can be a new vector for more familiar threats such as spam, disinformation or fraud, we are taking a proactive approach to promote its safe deployment. We have developed new classifiers that can identify when computer use is happening and whether harm is occurring," Anthropic wrote. "Learning from the early deployments of this technology, while it is still in its early stages, will help us better understand both the potential and the implications of increasingly capable AI systems."