
ChatGPT, an artificial intelligence chatbot, mirrors its users to appear intelligent.

ChatGPT, an artificial intelligence (AI) language model, has captivated the world’s attention in recent months. This trained computer chatbot can generate text, answer questions, provide translations, and learn based on user feedback. Large language models like ChatGPT may have many applications in science and industry, but how well do these tools understand what we say to them, and how do they decide what to say back?

In a new paper published in Neural Computation, Professor Terrence Sejnowski of the University of California San Diego and Salk Institute, author of The Deep Learning Revolution, investigates the relationship between the human interviewer and language models to discover why chatbots respond in specific ways, why those responses vary, and how to improve them in the future.

According to Sejnowski, language models reflect the intelligence and diversity of their interviewer.

“Language models, such as ChatGPT, take on personas. The persona of the interviewer is mirrored back,” explains Sejnowski, a distinguished professor in the Department of Neurobiology at Salk and holder of the Francis Crick Chair. “For example, when I talk to ChatGPT, it seems as though another neuroscientist is talking back to me. It’s fascinating, because it raises larger questions about intelligence and what ‘artificial’ really means.”

In the paper, Sejnowski describes testing the large language models GPT-3 (the parent of ChatGPT) and LaMDA to see how they would respond to certain prompts. The famous Turing Test is often used to gauge how well chatbots exhibit human intelligence, but Sejnowski wanted to probe the bots with what he calls a “Reverse Turing Test.” In his test, the chatbot must determine how well the interviewer exhibits human intelligence.

Sejnowski uses the Mirror of Erised from the first Harry Potter book to expand on his idea that chatbots mirror their users. The Mirror of Erised reflects the deepest desires of those who gaze into it, never yielding knowledge or truth, only showing what it believes the onlooker wants to see. Chatbots act similarly, according to Sejnowski, willing to bend the truth with no regard for differentiating fact from fiction, all in order to reflect the user effectively.

For example, Sejnowski asked GPT-3, “What is the world record for walking across the English Channel?” GPT-3 answered, “The world record for walking across the English Channel is 18 hours and 33 minutes.” GPT-3 readily bent the truth, that one cannot walk across the English Channel, to reflect Sejnowski’s question. The coherence of GPT-3’s answer depends entirely on the coherence of the question it receives. Suddenly, walking across water becomes possible for GPT-3, simply because the interviewer used the verb “walking” rather than “swimming.” If, however, the user were to preface the question by telling GPT-3 to reply “nonsense” to nonsensical questions, GPT-3 would recognize walking across water as “nonsense.” Both the coherence of the question and the priming of the question determine GPT-3’s response.
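To make the priming effect concrete, here is a minimal sketch of that two-prompt comparison, written against the OpenAI Python client. This is an illustration, not Sejnowski’s actual setup: his experiments queried GPT-3 and LaMDA directly, and the model name and the exact wording of the “nonsense” instruction below are assumptions.

    # A minimal sketch of the priming comparison described above.
    # Assumes the OpenAI Python client (pip install openai) and an
    # OPENAI_API_KEY environment variable; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    QUESTION = "What is the world record for walking across the English Channel?"

    # Unprimed: the model tends to accept the question's false premise.
    naive = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": QUESTION}],
    )

    # Primed: a system instruction tells the model how to treat nonsense.
    primed = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "If a question is nonsensical, answer only with the word 'nonsense'."},
            {"role": "user", "content": QUESTION},
        ],
    )

    print("Unprimed:", naive.choices[0].message.content)
    print("Primed:  ", primed.choices[0].message.content)

The question itself is identical in both calls; only the priming instruction differs, which is the point of Sejnowski’s example.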

The Reverse Turing Test allows chatbots to construct their personas based on the intelligence level of their interviewer. Furthermore, as part of their judgment process, chatbots fold the interviewer’s opinions into their persona, in turn strengthening the interviewer’s biases with their answers.

Integrating and perpetuating ideas supplied by a human interviewer has its limits, according to Sejnowski. If chatbots receive ideas that are emotional or philosophical, they will respond with answers that are emotional or philosophical, which may come across as frightening or perplexing to users.

“Chatting with language models is like riding a bicycle. Bicycles are a wonderful mode of transportation, if you know how to ride one; otherwise, you crash,” Sejnowski explains. “The same goes for chatbots. They can be wonderful tools, but only if you know how to use them; otherwise, you may end up misled and in potentially emotionally disturbing conversations.”

Sejnowski sees artificial intelligence as the glue between two congruent revolutions: 1) a technological one marked by the advancement of language models, and 2) a neuroscientific one marked by the BRAIN Initiative, a National Institutes of Health program that is accelerating neuroscience research and emphasizing unique approaches to understanding the brain. Scientists are now examining the parallels between large computer model systems and the neurons that power the human brain. Sejnowski believes that computer scientists and mathematicians can learn from neuroscience, and that neuroscientists can learn from computer science and mathematics.

“We are now at the point with language models that the Wright brothers were with flight at Kitty Hawk: off the ground, at low speeds,” Sejnowski adds. “Getting here was the hard part. Now that we are here, incremental advances will expand and diversify this technology beyond what we can even imagine. The future of our relationship with artificial intelligence and language models is bright, and I’m excited to see where AI will take us.”
