Nifty AI tool turns your bad sketches into artwork in seconds – and it DOESN’T need the internet

  • Qualcomm has unveiled a new model that converts text and images into AI illustrations
  • ControlNet does not upload data to ‘the cloud’ and does not require internet access to operate
  • Bad sketches can be turned into masterpieces in just under 12 seconds

Many of us dream of being an artist at some point in our lives, but shaky sketching skills often keep us from getting there.

Now those dreams may be within reach, as a new tool can transform your bad doodles into masterpieces thanks to the power of artificial intelligence (AI).

Tech giant Qualcomm unveiled its groundbreaking ControlNet software earlier this week, which converts image and text prompts into whatever artwork you want in under 12 seconds.

Surprisingly, unlike many other models of its kind – such as Adobe Firefly – ControlNet does not require the internet to function and could soon power a major mobile phone app.

While not yet released, the company claims that image generation will be completely private, with no data uploaded to an external cloud.

Bad sketches can be turned into masterpieces in just under 12 seconds with ControlNet. In this demonstration, a user entered a drawing of a kitten and prompted the model to make it ‘yellow’, ‘photorealistic’ and in ‘4k’ quality using a text prompt. The final image is shown on the right

WHAT IS THE CLOUD?

The cloud refers to servers located in data centers around the world, but accessible over the internet.

When companies use cloud computing, they don’t have to manage these servers themselves or run energy-intensive software on their machines.

The cloud also allows users to access their files from virtually any device, as their data is stored in a dedicated center rather than on their own device.

This is how social media account data such as Instagram logins can be transferred from a broken phone to a new one very quickly.

Source: Cloudflare

“Generative AI has taken the world by storm, disrupting traditional ways of creating content,” said a Qualcomm spokesperson.

“ControlNet allows users to enter a text description of an image, as well as an additional image to control the generative process.”

ControlNet arrives amid a wave of similar AI tools, commonly referred to as language-vision models (LVMs).

These generally merge an image encoder and a text encoder to read instructions from a user before producing new content.
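The idea of fusing the two encoders can be illustrated with a toy sketch. This is not Qualcomm's implementation – the encoders and the blending step here are deliberately simplistic stand-ins – but it shows how an image embedding and a text embedding might be combined into a single conditioning signal that steers a generator:

```python
import math

DIM = 8  # illustrative embedding size, chosen arbitrarily for this sketch

def _normalise(vec):
    """Scale a vector to unit length so the two embeddings are comparable."""
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def encode_image(pixels):
    """Stand-in image encoder: bucket flattened pixel values into DIM slots."""
    emb = [0.0] * DIM
    for i, p in enumerate(pixels):
        emb[i % DIM] += p
    return _normalise(emb)

def encode_text(prompt):
    """Stand-in text encoder: bucket character codes into DIM slots."""
    emb = [0.0] * DIM
    for i, ch in enumerate(prompt):
        emb[i % DIM] += ord(ch)
    return _normalise(emb)

def condition(image_emb, text_emb, guidance=0.5):
    """Blend the embeddings; `guidance` weights the text prompt's influence."""
    return [guidance * t + (1 - guidance) * im
            for im, t in zip(image_emb, text_emb)]

doodle = [0.1] * 16  # the user's rough sketch, flattened to pixel values
cond = condition(encode_image(doodle),
                 encode_text("photorealistic yellow kitten, 4k"))
print(len(cond))  # 8
```

In a real model the encoders would be deep networks and the conditioning vector would guide a diffusion process, but the contract is the same: two modalities in, one conditioning signal out.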

While ControlNet is not yet available for public use, demonstrations show that it can create illustrations from text prompts, image prompts, and both at the same time.

Input images can be anything from personal drawings to photos, while the text prompt can specify the style or ‘material’ the AI should use to create a new version.

For example, an image can be rendered in watercolor or oil paint, then output in 4K quality.

Since this process runs entirely on the device itself, Qualcomm claims that both run time and power consumption are significantly reduced.

The spokesperson added, “Images are generated in less than 12 seconds to provide an interactive user experience that is reliable and consistent.

“On-device AI offers cost, performance, personalization, privacy and security benefits on a global scale.”

In this ControlNet demonstration, a user entered a photo of herself and it appears the model was asked to create an antique-style piece of art

It’s not clear when ControlNet will be available for public use, but it will be usable on phones, as shown in this Qualcomm demonstration. Here a user took advantage of the image prompt and text prompt and asked for a ‘photorealistic’ 4k picture of them

Qualcomm’s new product follows a backlash against AI-generated image models, with numerous artists raising copyright concerns.

This was largely fueled by Disney illustrator Hollie Mengert, after she discovered that her work was being used without permission to train a new model in Canada.

Many have since debated the ethics of using artwork to train AI, with the legality of this also being a gray area around the world.

It’s not yet clear whose artwork was used to train ControlNet, but MailOnline has approached Qualcomm for more information.

Text-to-image AI ‘DALL-E’ can now imagine what lies beyond the frames of famous paintings

OpenAI, a San Francisco-based company, has created a new tool called “Outpainting” for its text-to-image AI system, DALL-E.

Outpainting allows the system to imagine what goes beyond the frame of famous paintings such as Girl with a Pearl Earring, Mona Lisa, and Dogs Playing Poker.

As users have shown, this can be done with any type of image, such as the man on the Quaker Oats logo and the cover of the Beatles album ‘Abbey Road’.

DALL-E uses artificial neural networks (ANNs), which simulate the way the brain works in order to learn and to generate images from text.

DALL-E already allows changes within a generated or uploaded image – a capability known as Inpainting.

It is able to automatically fill in details, such as shadows, when an object is added, or even adjust the background when an object is moved or removed.
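The core idea behind inpainting – filling a masked region so it blends with its surroundings – can be shown with a toy sketch. This is not OpenAI's method (DALL-E uses a diffusion model); here missing pixels are simply filled from the average of their unmasked neighbours:

```python
def inpaint(image, mask):
    """Toy inpainting: image is a 2-D list of floats, mask is a 2-D list of
    bools where True marks pixels to fill. Each masked pixel becomes the
    average of its unmasked 4-connected neighbours."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                neighbours = [image[ny][nx]
                              for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                              if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                if neighbours:
                    out[y][x] = sum(neighbours) / len(neighbours)
    return out

image = [[1.0, 1.0, 1.0],
         [1.0, 0.0, 1.0],   # centre pixel is missing
         [1.0, 1.0, 1.0]]
mask = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
print(inpaint(image, mask)[1][1])  # 1.0 – filled to match its neighbours
```

A generative model replaces the neighbour average with learned image statistics, which is what lets it invent plausible shadows and backgrounds rather than just smooth colour.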

DALL-E can also create a completely new image from a text description, such as ‘an armchair in the shape of an avocado’ or ‘a cross-section of a walnut’.

Another classic example of DALL-E’s work is “teddy bears working underwater on new AI research using 1990s technology.”
