ChatGPT introduces privacy-friendly chat history management: here's how it works


ChatGPT, OpenAI’s AI-powered chatbot, has captured our attention with its capabilities. However, as impressive as it is, data privacy has long been a point of contention for the chatbot’s critics. OpenAI has now decided to tighten ChatGPT’s privacy settings, giving users more say in how their data is handled. Let’s take a look at how this works.

The age of advanced AI models has arrived. There is no turning back now. Human-like artificial intelligence systems are our present and future. Their human-like talents come at a price: the massive volumes of data used to train them. Users and governments all over the world are becoming increasingly concerned about potential privacy infringement by firms and their AI-powered products.

You can disable history in ChatGPT’s ‘Settings’

The new ChatGPT privacy feature allows users to disable their chat history, which prevents OpenAI from using your inputs for training. You can disable history in ChatGPT’s ‘Settings’, where ‘Chat History & Training’ appears as a new option. If you turn the option off, your recent chats will no longer appear in the sidebar.

Conversations will still be kept for 30 days

In addition to disabling chat history, you can opt out of having your content used to train ChatGPT. However, there is a catch to the new functionality: even with history and training turned off, OpenAI will retain your conversations for 30 days. The company says it will review them only to detect abuse.

Personal information from the internet is used to train AI models

Companies such as OpenAI train their AI models by scraping millions of web pages, Reddit posts, books, social networking sites, and other sources. Data from platforms like Reddit and Twitter is needed to build a conversational text-generation engine. The problem is that the scraped data will almost certainly contain some personal information.
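For a rough sense of what this collection looks like, here is a minimal, purely illustrative sketch of pulling text from a single public web page using Python’s requests and BeautifulSoup libraries. The URL is a placeholder, and real training pipelines run this kind of crawl across millions of pages, which is exactly how personal details can end up in a training corpus.

```python
# Purely illustrative sketch of web-page text collection.
# The URL below is a placeholder, not an actual data source used by OpenAI.
import requests
from bs4 import BeautifulSoup


def fetch_page_text(url: str) -> str:
    """Download a page and return its visible text content."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Remove scripts and styles so only human-readable text remains.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)


if __name__ == "__main__":
    # A single hypothetical page; real pipelines repeat this at massive scale.
    text = fetch_page_text("https://example.com")
    print(text[:500])
```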

Italy requested that OpenAI provide users with data control

In Italy, ChatGPT was recently blocked over concerns that OpenAI was processing user data unlawfully. Seen from that perspective, ChatGPT’s new feature is telling. The Italian data protection authority has published a list of requirements the company must meet in order to operate in Italy, and giving users control over their data is one of them. It appears that OpenAI wants to avoid similar problems in other countries.
