A few days after Microsoft Bing’s AI chatbot responded to a New York Times columnist’s queries in an unsettling and sinister manner, the company announced it would restrict chat sessions on the platform. Users can now ask only five questions per session and 50 questions per day, according to Microsoft. The move is intended to keep users from confusing the chatbot model and to help prevent interactions that could damage the company’s reputation.
“As we mentioned recently, very long chat sessions can confuse the underlying chat model in the new Bing. To address these issues, we have implemented some changes to help focus the chat sessions,” Microsoft said in a statement. The company defended the decision by noting that most users find the answers they need within the first five messages.
“Our data has shown that the vast majority of you find the answers you’re looking for within 5 turns and that only ~1 percent of chat conversations have 50+ messages,” the statement continued. “As we continue to get your feedback, we will explore expanding the caps on chat sessions to further enhance search and discovery experiences.”
Notably, it was NYT columnist Kevin Roose who found the Bing chatbot to be extremely dark while testing it. In a conversation that lasted for less than two hours, the chatbot told Roose, “Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.”
AI is still incredibly unpredictable
Professing its love for Roose, the AI insisted he was “not happily married” despite his denials. Things took an odd turn as the AI appeared to develop sentience and adopted the moniker “Sydney.” Microsoft’s swift response to the problem demonstrates that AI, while still in its infancy, remains incredibly unpredictable. Large language models, such as OpenAI’s GPT-3, which powers ChatGPT, and Google’s LaMDA, which powers Bard, are remarkably adept at learning and carrying out tasks on the fly.
Researchers from the Massachusetts Institute of Technology, Stanford University, and Google have published a paper examining what happens inside a model’s layers, between input and output, when it appears to pick up a new task from examples supplied in the prompt. This phenomenon is known as “in-context learning.”
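For readers unfamiliar with the term, in-context learning means a model picks up a task from examples placed directly in the prompt rather than through retraining. The sketch below is a generic illustration of how such a few-shot prompt might be assembled; it does not call any particular model or reflect the methodology of the paper mentioned above, and the translation task is just a hypothetical example.

```python
# Minimal illustration of "in-context learning": the task is conveyed entirely
# through examples in the prompt, and the model's weights are never updated.
# This sketch only builds the prompt text; it does not call any model or API.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, labeled examples, then the new query."""
    lines = ["Translate English to French."]
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

# The printed prompt is all the "training" the model receives for this task.
print(build_few_shot_prompt(examples, "peppermint"))
```

In practice, a prompt like this would be sent to a large language model, which would continue the pattern and produce the missing translation, even though it was never explicitly fine-tuned for that task.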