Scientists use six-month-old baby named Sam to teach AI how humanity is evolving – amid fears technology could destroy us

Scientists have trained an AI through the eyes of a baby in an attempt to teach the technology how humanity is evolving – amid fears it could destroy us.

Researchers from New York University strapped a headcam recorder to Sam from when he was just six months old until his second birthday.

The footage, containing roughly 250,000 words alongside the images they accompanied, was fed to an AI model, which learned to recognize different objects in much the same way Sam did.

The AI developed its knowledge in the same way as the child: by observing the environment, listening to people nearby and connecting the dots between what was seen and heard.

The experiment also examined the link between visual and linguistic representation in a child’s development.

Researchers at NYU captured a first-person perspective of a child by attaching a camera to six-month-old Sam (pictured) until he was about two years old.

Researchers wanted to discover how people link words to visual representations, such as associating the word “ball” with a round, bouncy object rather than with other features, objects or events.

The camera randomly captured Sam’s daily activities, such as meals, reading books and playtime, amounting to approximately 61 hours of data.

‘By using AI models to study the real language learning problem children face, we can address classic debates about what ingredients children need to learn words – whether they need language-specific biases, innate knowledge, or simply associative learning to get started,’ says Brenden Lake, an assistant professor at NYU’s Center for Data Science and Department of Psychology and the senior author of the paper.

The camera captured 61 hours of footage, approximately one percent of Sam’s waking hours, which was used to train the CVCL model to match words to images. The AI was able to determine that it was seeing a cat

The CVCL model accurately matched images and text about 61.6 percent of the time. Pictured are objects the AI was able to identify from the images

‘It seems that we can achieve more with learning alone than is often thought,’ Lake added.

The researchers used a vision encoder and a text encoder to translate the images and written language into a form the AI model could interpret, working from the footage obtained through Sam’s headcam.

Although the footage often did not directly connect words and images, the Child’s View for Contrastive Learning (CVCL) model was able to recognize the meanings.

The model used a contrastive learning approach, building up associations that let it predict which images and words go together.
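In broad strokes, contrastive learning of this kind embeds frames and words in a shared space and rewards the true pairings over mismatched ones. The sketch below illustrates the idea with toy numpy arrays; the linear “encoders,” dimensions and temperature value are illustrative assumptions, not the paper’s actual architecture.

```python
# Minimal sketch of the contrastive idea behind models like CVCL
# (not the authors' code). All names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Toy 'encoder': a linear map followed by L2 normalization."""
    v = x @ W
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in features for a batch of 4 co-occurring frame/word pairs.
frames = rng.normal(size=(4, 32))   # placeholder vision-encoder inputs
words  = rng.normal(size=(4, 16))   # placeholder text-encoder inputs

W_img = rng.normal(size=(32, 8))    # vision projection (learned in practice)
W_txt = rng.normal(size=(16, 8))    # text projection (learned in practice)

img_emb = embed(frames, W_img)
txt_emb = embed(words, W_txt)

# Similarity of every frame with every word; the diagonal holds true pairs.
logits = img_emb @ txt_emb.T / 0.07          # 0.07: assumed temperature

# InfoNCE-style loss: each frame should pick out its own word.
targets = np.arange(len(logits))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[targets, targets].mean()
print(f"contrastive loss: {loss:.3f}")
```

Training would then adjust the two projections to shrink this loss, pulling matching frame and word embeddings together and pushing mismatched ones apart.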

Researchers ran several tests using 22 individual words and images present in the child’s video footage and found that the model was able to correctly match many of the words with their images.

Their findings showed that the AI model could generalize what it learned with 61.6 percent accuracy and correctly identified unseen examples such as “apple” and “dog” 35 percent of the time.
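One plausible way to read that evaluation is as a similarity lookup: embed a test frame, compare it against the embeddings of the candidate words, and score a hit when the nearest word is the labelled one. The sketch below assumes that protocol; the vocabulary, dimensions and helper names are hypothetical, not taken from the paper.

```python
# Hedged sketch of a word-matching evaluation (assumed protocol, not the
# paper's exact code): pick the vocabulary word closest to a frame embedding.
import numpy as np

rng = np.random.default_rng(1)

vocab = ["ball", "cat", "apple", "dog"]      # stand-ins for the 22 test words
word_embs = rng.normal(size=(len(vocab), 8))
word_embs /= np.linalg.norm(word_embs, axis=1, keepdims=True)

def classify(frame_emb):
    """Return the vocabulary word whose embedding is most similar."""
    sims = word_embs @ frame_emb             # cosine sims (unit vectors)
    return vocab[int(np.argmax(sims))]

# Toy labelled test set: one noisy 'cat' frame; accuracy is the match rate.
frame = word_embs[1] + 0.1 * rng.normal(size=8)
frame /= np.linalg.norm(frame)
test = [(frame, "cat")]
hits = sum(classify(f) == y for f, y in test)
print(f"accuracy: {hits / len(test):.1%}")
```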

“We show for the first time that a neural network trained on this developmentally realistic input from one child can learn to associate words with their visual counterparts,” said Wai Keen Vong, a researcher at NYU’s Center for Data Science and the first author of the paper.

‘Our results demonstrate how recent algorithmic advances combined with one child’s naturalistic experience have the potential to reshape our understanding of early language and concept acquisition.’

Researchers found that there are still drawbacks to the AI model: while the test showed promise for understanding how babies develop cognitive functions, the model was limited by its inability to fully experience the baby’s life.

One example showed that CVCL had difficulty learning the word “hand,” something a baby usually learns very early in life.

“Babies have their own hands, they have a lot of experience with them,” Vong told Nature, adding: “That’s definitely a missing part of our model.”

The researchers plan to conduct additional research to replicate early language learning in young children around two years old.

While the information wasn’t perfect, Lake said it was “completely unique” and “provides the best insight we’ve ever had into what a single child has access to.”