Verbal nonsense reveals limitations of AI chatbots

The era of artificial intelligence (AI) chatbots that appear to understand and use language in a human-like manner has dawned. These chatbots rely on large language models, a type of neural network. However, a recent study has revealed a vulnerability in these large language models, as they can sometimes mistake nonsense for natural language. Researchers at Columbia University see this flaw as an opportunity to enhance chatbot performance and gain insights into how humans process language.

In their paper published in Nature Machine Intelligence, the scientists describe how they conducted experiments using nine different language models. They presented hundreds of pairs of sentences to human participants and asked them to select the sentence they believed sounded more natural, i.e., the one more likely to be encountered in everyday communication. The researchers then evaluated whether the AI models would provide the same judgments as the human participants.
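
In spirit, the comparison boils down to checking, pair by pair, whether a model's preferred sentence matches the human choice. The short sketch below illustrates only that bookkeeping; it is not the authors' code, and `model_score` is a hypothetical stand-in for whatever naturalness measure a given model provides (for example, a sentence probability).

```python
# Illustrative sketch only (not the study's code): measure how often a model's
# preferred sentence in each pair matches the sentence humans chose.
# `model_score` is a hypothetical function returning a naturalness score.

def agreement_rate(pairs, human_choices, model_score):
    """pairs: list of (sentence_a, sentence_b); human_choices: 0 or 1 per pair."""
    matches = 0
    for (sent_a, sent_b), human_pick in zip(pairs, human_choices):
        # The model "picks" whichever sentence it scores as more natural.
        model_pick = 0 if model_score(sent_a) >= model_score(sent_b) else 1
        matches += int(model_pick == human_pick)
    return matches / len(pairs)
```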

In head-to-head comparisons, the more advanced AI models based on transformer neural networks generally outperformed simpler models, such as recurrent neural networks and statistical models that simply tally how often word pairs occur on the internet or in online databases. However, all of the models made errors, sometimes preferring sentences that sounded like gibberish to humans.
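
To give a feel for the simpler end of that spectrum, the sketch below shows how a word-pair (bigram) frequency model might score sentences. The toy corpus and counts are stand-ins for illustration, not the study's data or models.

```python
# Minimal sketch of a simple statistical baseline: score a sentence by how
# often its adjacent word pairs (bigrams) appear in a reference corpus.
# The toy corpus below is a stand-in, not the study's data.
from collections import Counter

toy_corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "the cat and the dog ran on the mat"
).split()
bigram_counts = Counter(zip(toy_corpus, toy_corpus[1:]))

def bigram_score(sentence: str) -> int:
    """Higher scores mean the sentence's word pairs are more familiar."""
    words = sentence.lower().rstrip(".").split()
    # Add-one smoothing so unseen word pairs do not zero out the comparison.
    return sum(bigram_counts[(a, b)] + 1 for a, b in zip(words, words[1:]))

print(bigram_score("the cat sat on the mat"))   # familiar word pairs, higher score
print(bigram_score("mat the on sat cat the"))   # same words scrambled, lower score
```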

Dr. Nikolaus Kriegeskorte, a principal investigator at Columbia’s Zuckerman Institute and a coauthor of the paper, noted, “That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing. That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.”

For example, consider the following sentence pair:

  1. That is the narrative we have been sold.
  2. This is the week you have been dying.

Human participants in the study judged the first sentence as more natural. BERT, one of the more advanced models tested, instead rated the second sentence as more natural, while GPT-2, another widely known model, agreed with the human judgment and picked the first.
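
As a rough illustration of how such preferences can be probed (a sketch under simple assumptions, not the authors' evaluation pipeline), one can score each sentence with the publicly available GPT-2 checkpoint from the Hugging Face transformers library and compare summed token log-probabilities as a proxy for naturalness:

```python
# Illustrative sketch (not the study's method): score two sentences with GPT-2
# and pick the one with the higher total log-probability, as a rough proxy
# for "sounds more natural".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of token log-probabilities under GPT-2 (higher = more predictable)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the average cross-entropy loss over
        # the predicted tokens; scale it back up to a total log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

pair = [
    "That is the narrative we have been sold.",
    "This is the week you have been dying.",
]
scores = {s: sentence_log_prob(s) for s in pair}
for sentence, lp in scores.items():
    print(f"{lp:8.2f}  {sentence}")
print("Model prefers:", max(scores, key=scores.get))
```

Which sentence wins will depend on the checkpoint and scoring convention used; the sketch is meant only to show the kind of comparison involved.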

Christopher Baldassano, an assistant professor of psychology at Columbia and the senior author of the study, emphasized that all models had blind spots and labeled some sentences as meaningful when human participants considered them gibberish. He cautioned against relying too heavily on AI systems for important decisions, at least in their current state.

One of the study's intriguing findings is that many of the models perform well, yet still fall short of human judgments. Dr. Kriegeskorte emphasized the importance of understanding why these gaps exist and why certain models outperform others, as this knowledge can drive progress in language models.

The researchers are also curious about whether the computations in AI chatbots can inspire new scientific questions and hypotheses, potentially guiding neuroscientists toward a better understanding of human brain function. Analyzing the strengths and weaknesses of various chatbots and their underlying algorithms may contribute to answering this question.

Tal Golan, the paper’s corresponding author, who recently established his own lab at Ben-Gurion University of the Negev in Israel, noted that the team is ultimately interested in how people think, and that the distinctive way AI tools process language offers a fresh perspective on human cognition.
