This year’s Adobe Max 2022 was big on 3D design and mixed-reality headsets, but the AI-generated elephant in the room was the rise of text-to-image generators like Dall-E. How does Adobe plan to respond to these revolutionary tools? Slow and careful, according to the keynote – but an important feature buried in the new version of Photoshop shows that the process has already begun.
This is far from Adobe’s Dall-E rival, however. It’s only available in Photoshop Beta, a testbed separate from the main app, and you’re currently limited to typing colors to produce different photo backgrounds, rather than bizarre concoctions from the darkest corners of your imagination. But the ‘Backdrop Neural Filter’ is clear evidence that Adobe, however cautious, is dipping its toes further into AI image generation. And the keynote at Adobe Max shows that it thinks this frictionless way of creating visuals is undoubtedly the future of Photoshop and Lightroom – once it’s solved the small matter of copyright and ethical standards.
Creative co-pilots

Adobe didn’t say much about the ‘Backdrop Neural Filter’ at Adobe Max 2022, but it did explain where the technology will eventually end up.
David Wadhwani, Adobe’s President of Digital Media Business, effectively said the company has the same technology as Dall-E, Stable Diffusion, and Midjourney; it has simply chosen not to build it into its apps yet. “Over the past few years, we’ve been investing more and more in Adobe Sensei, our AI engine. I like to call Sensei your creative co-pilot,” said Wadhwani.
“We’re working on new capabilities that could take our core flagship applications to a whole new level. Imagine asking your creative co-pilot in Photoshop to add an object to the scene by simply describing what you want, or asking your co-pilot to give you an alternative idea based on what you’ve already built. It’s like magic,” he added. It certainly goes a few steps beyond Photoshop’s Sky Replacement tool.
(Image credit: Adobe) He said this while standing in front of a mocked-up version of what Photoshop could look like with Dall-E powers (above). The message was clear: Adobe could ship text-to-image generation at this scale right now; it has simply chosen not to.
But it was Wadhwani’s Lightroom example that showed how this kind of technology could be more sensibly integrated into Adobe’s creative apps.
“Imagine if you could combine generative technology with Lightroom. For example, you could ask Sensei to turn night into day, or a sunny photo into a beautiful sunset – move shadows or change the weather,” he explained, with an unsubtle nod to Adobe’s new rivals.
So why hold back while others eat your AI-generated lunch? The official reason, and one that certainly has some merit, is that Adobe has a responsibility to ensure this new power isn’t used recklessly.
“For those of you unfamiliar with generative AI, it can conjure up an image simply from a text description. And we’re really excited about what this can do for all of you, but we also want to do this thoughtfully,” explained Wadhwani. “We want to do this in a way that protects and defends the needs of creators.”
What does this mean in practice? While the details are still a little fuzzy, Adobe will move more slowly and cautiously than Dall-E. “Here’s our commitment to you,” Wadhwani told the Adobe Max audience. “We’re approaching generative technology from a creator-centric perspective. We believe AI should enhance, not replace, human creativity, and it should benefit creators, not replace them.”
This partly explains why Adobe has only gone as far as Photoshop’s ‘Backdrop Neural Filter’ so far. But that’s only part of the story. Consider professional photo books: where photographers once worked only with the shots they captured, tools like Dall-E could let them alter, improve, and adjust those photos, producing a better, more dynamic result for both the photographer and their client.
The long game

Despite being the largest provider of creative apps, Adobe is undoubtedly still very innovative – check out some of the projects in Adobe Labs, especially those that can turn real-world objects into 3D digital assets.
But Adobe is also prone to being outflanked by fast-moving rivals. Photoshop and Lightroom were built as desktop-first tools, which is how Canva stole a march with its easy-to-use, cloud-based design tools. It’s also why Adobe spent $20 billion on Figma last month – more than Facebook paid for WhatsApp in 2014.
(Image credit: Microsoft) Could the same thing happen with Dall-E and Midjourney? Quite possibly: Microsoft just announced that Dall-E 2 will be integrated into its new Designer graphic design app (above), part of its 365 productivity suite. AI image generators are flying toward the mainstream, whatever Adobe’s doubts about the speed at which it’s happening.
And yet Adobe also has a point about the ethical issues surrounding this fascinating new technology. A significant copyright cloud hangs over the rise of AI image generation – and Adobe, understandably, as one of the founders of the Content Authenticity Initiative (CAI), designed to tackle deepfakes and other manipulated content, wants to make sure generative AI doesn’t undermine that work.
Still, Adobe Max 2022 and the arrival of the ‘Backdrop Neural Filter’ show that AI image generation will undoubtedly become a big part of Photoshop, Lightroom, and photo editing in general – it might just take a little longer to reach your favorite Adobe app.