For years, linguists have debated how children acquire language, and a team of scientists has now weighed in with a trained AI model. Until now, some researchers have proposed that babies start as “blank slates,” absorbing language through everyday experiences: listening, seeing, and interacting with their surroundings. Others argue that innate brain mechanisms are necessary, alongside experience, for language learning.
Recent AI advances, such as GPT-4, have not resolved the debate. These models learn language differently, sifting through vast amounts of text scraped from the internet, a far cry from a baby’s experience.
To explore this issue, researchers at New York University conducted a unique experiment: they trained an AI model on the experiences of a single infant named Sam. Over several months, Sam wore a head-mounted camera for an hour each week, capturing playtime with toys, outings to the park, and encounters with his pet cats. The recorded footage, along with transcribed audio, was fed into an AI model, allowing it to associate images and words that occurred at the same moment.
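The core idea, learning word meanings from co-occurrence alone, can be sketched as a similarity search in a shared embedding space. The toy simulation below is an illustration only, not the researchers’ actual model: the vector dimensions, noise level, and number of moments are all invented for the demo.

```python
import numpy as np

# Toy sketch of co-occurrence learning (an illustration only; the study's
# actual model is a trained neural network, not this simulation).
rng = np.random.default_rng(42)

def normalize(v):
    """Scale each row to unit length so dot products act as cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical setup: 100 moments from the recordings, each an "image"
# vector paired with a "word" vector derived from it plus noise, standing
# in for a word heard while that scene was in view.
n_pairs, dim = 100, 32
image_vecs = normalize(rng.normal(size=(n_pairs, dim)))
word_vecs = normalize(image_vecs + 0.1 * rng.normal(size=(n_pairs, dim)))

# Score every word against every image; entry (i, j) is their similarity.
sims = word_vecs @ image_vecs.T

# If co-occurrence carries signal, each word's best-matching image should
# be the one it appeared alongside.
matches = sims.argmax(axis=1)
accuracy = (matches == np.arange(n_pairs)).mean()
print(f"retrieval accuracy: {accuracy:.2f} (chance is {1 / n_pairs:.2f})")
```

Because each simulated word vector is just its scene vector plus mild noise, retrieval lands far above the 1-in-100 chance level, which is the kind of signal co-occurrence learning relies on.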
Results of the Trained AI
Despite the limited training data, the AI successfully identified objects and learned the corresponding words. When tested, it correctly identified objects Sam had seen before with 62% accuracy, well above chance. Surprisingly, the AI could even recognize objects Sam had never encountered. Although the AI learned around 40 different words, it did not match Sam’s vocabulary by the study’s end.
In the study, published in the journal Science, the researchers argue that experiential learning alone may suffice for associating words with objects. Skeptics, however, question whether the AI can grasp language beyond concrete nouns, such as verbs and abstract words, casting doubt on how closely its learning resembles human language acquisition. Thus, the mystery of language acquisition remains unsolved.