Image Playground turned me into a wizard, but I’m still waiting for that Siri magic
Apple Intelligence takes a giant leap forward this week with the release of iOS 18.2. Across the iPhone landscape (at least on models that support Apple’s artificial intelligence), millions of people are discovering their iPhone’s ability to turn them into cartoon-like characters and, via Image Playground, create fantastic mashups and strange scenarios.
I am not immune to this attraction. I used Image Playground to transform myself into a magician, an image I shared with my unimpressed wife, who told me, “I don’t need this in my life.” Later I made a little wizard, Genmoji Lance, that I can use as an emoji in Messages. Maybe she’ll like that better.
At times, Apple’s commitment to its Apple Intelligence efforts seems almost provisional. Take the Genmoji beta: it’s hidden in Messages, first behind the emoji icon and then behind an even smaller version of that icon with a small “+” next to the “Describe an emoji” field. It’s a nice tool; why hide its light under a bushel?
After spending way too much time in Image Playground and with Genmoji, I turned my attention to Siri and remained unimpressed.
Apple’s late entry into the generative AI space puts more pressure on the company to bring something truly useful to market, and if I’m honest, Genmoji and Image Playground aren’t it. They’re fun, but what excited me most when Apple first introduced Apple Intelligence at WWDC 2024 was how your iPhone (as well as your Mac and iPad) could become more self-aware.
In a press release on Apple Intelligence, Apple promised that Siri could “take hundreds of new actions in and across Apple and third-party apps.” Apple claimed I could ask Siri to “Send the photos from the barbecue on Saturday to Malia,” and Siri would take care of it.
To be fair, Apple did say that Apple Intelligence would roll out through late 2024 and into 2025. That’s an incredibly slow schedule in a space where competitors and partners like OpenAI are shipping major updates almost daily (see the 12 Days of OpenAI).
On the other hand, the lack of clarity about what exactly is possible in the current version of Siri is frustrating. Siri looks very different now, and it’s beautiful, but it mainly amounts to a facelift; the underlying bone structure is essentially the same.
Siri still struggles to hold a conversation, and when I tried to replicate that photo request and asked Siri to send photos from a recent holiday party to my wife, Siri simply said, “I can only send screenshots from here.” That’s a long way from being system-aware and truly helpful. I would at least expect Siri to guess which recent photos I was talking about.
In Photos, when I asked Siri to “Open Screenshots” because I now have trouble finding that album in the redesigned Photos app, Siri took a screenshot of the page instead. Thanks, Siri, for another screenshot I’ll have a hard time finding later.
There are things Siri can now do with the system. I can easily switch to dark mode by asking. The digital assistant parses my mumbling better than before. I can access Home through Siri, but Siri can’t help me solve my home problems, such as scanning my home network to see whether there are smart devices that aren’t yet part of my Home setup.
Siri can still be witty. I asked it what it was doing today and it said, “I’m thinking about eternity; it takes forever.” But when I asked what that meant, Siri didn’t answer; holding a conversation is still not Siri’s strong suit.
Likewise, third-party app integration and on-screen awareness aren’t really a thing yet. For example, Siri can open Threads, but when I asked it to write a new post, Siri got stuck in a “to whom?” loop. Since Threads is a public social media platform, the answer would be “to everyone.” That request got me nowhere.
Some of Apple’s most powerful AI tricks aren’t even its own. iOS 18.2 brought a Visual Intelligence update to Camera Control (that new “button” on the side of your iPhone). You press and hold Camera Control to open a special window for capturing images. Press the shutter button and you’re presented with two options: Ask or Search.
If you choose “Ask,” your query is sent to OpenAI’s ChatGPT; if you choose “Search,” it goes, yes, to Google. Both work well, but where is Apple Intelligence in this picture? Where is Siri? Apple can’t really claim other companies’ AI work as its own, can it?
I understand that Apple Intelligence is a work in progress, but the standout feature of any generative AI platform is still the chatbot. The voice assistants on platforms like Gemini and ChatGPT can hold long conversations, see what you see, understand context, and take action.
Even after the iOS 18.2 update, Siri is still miles away from that. I know how cautious Apple likes to be with these things, but if the company keeps checking the engine before hitting the accelerator, the rest of the AI race cars will leave it far behind. You can’t win a race that’s already over.