5 mind-blowing things that GPT-4 can do but ChatGPT couldn’t

In early tests and a company demo the day after its release, GPT-4 astounded users by drafting lawsuits, passing standardized tests, and building a working website from a hand-drawn design. The newest iteration of the artificial intelligence system that powers ChatGPT, OpenAI’s popular chatbot, was unveiled on Tuesday. The more powerful GPT-4 promises to outperform earlier versions, potentially changing how we work, play, and create online. But it may also raise harder questions about how AI tools can reshape professions, enable students to cheat, and alter our relationship with technology.

GPT-4 is the latest update to the company’s large language model, which is trained on vast amounts of online data to generate sophisticated responses to user prompts. Access is currently limited to a waitlist, but the model has already surfaced in a few third-party applications, including Microsoft’s new AI-powered Bing search engine. Early adopters are sharing their impressions and highlighting some of the tool’s most compelling use cases.

GPT-4 can analyze images

The biggest change is GPT-4’s ability to work with uploaded images. One of the most striking use cases so far came from an OpenAI video demo showing how a hand-drawn sketch could quickly be turned into a working website: after the image was uploaded to GPT-4, the presenter pasted the generated code into a preview to show what the functioning site would look like.
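For developers wondering what that workflow could look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK’s chat completions interface with image inputs (a capability that was not yet publicly available when this was written); the file name, model name, and prompt are placeholders rather than anything from OpenAI’s demo.

```python
# Hypothetical sketch: send a hand-drawn mockup to an image-capable GPT-4
# model and ask it to return the HTML for a working page. Model name,
# file name, and prompt are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("napkin_sketch.png", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for an image-capable GPT-4 model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this sketch into a single-file HTML page "
                         "with inline CSS and working JavaScript."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{sketch_b64}"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # the generated HTML
```

The returned markup could then be saved to an .html file and opened in a browser, mirroring the copy-into-a-preview step from the demo.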

In its announcement, OpenAI also showed GPT-4 being asked to explain the humor in a series of photos showing a smartphone being charged incorrectly. The task sounds simple, but recognizing a joke requires context that artificial intelligence tools have historically struggled to grasp.

In a separate test, The New York Times gave GPT-4 a photo of the inside of a refrigerator and asked it to suggest a dinner using the ingredients. The image feature is not yet publicly available, but OpenAI plans to roll it out in the coming weeks.

Making coding even simpler

Some early GPT-4 users with little to no coding experience have also followed the tool’s step-by-step instructions to recreate classic games such as Pong, Tetris, and Snake, while others have built their own original games. (According to OpenAI, GPT-4 can write code in all major programming languages.)
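As a rough illustration of how such a request could be made programmatically, here is a hedged sketch, assuming the OpenAI Python SDK; the prompt, model name, and output file are illustrative, and any generated code should be read before it is run.

```python
# Hypothetical sketch: ask GPT-4 for a runnable Pong clone and save
# whatever code it returns. Prompt, model name, and file handling are
# illustrative assumptions, not a documented OpenAI workflow.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a complete, runnable Pong clone in Python using only the "
    "standard-library turtle module. Return just the code, no prose."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

code = response.choices[0].message.content
with open("pong.py", "w") as f:
    f.write(code)

print("Saved generated game to pong.py; review it before running.")
```

The same prompt, save, review, run loop applies whether the request goes through the API or a chat interface.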

“The powerful language capabilities of GPT-4 will be used for everything from storyboarding, and character creation to gaming content creation,” said Arun Chandrasekaran, an analyst at Gartner Research. “This could give rise to more independent gaming providers in the future. But beyond the game itself, GPT-4 and similar models can be used for creating marketing content around game previews, generating news articles, and even moderating gaming discussion boards.”

Much as it could change gaming, GPT-4 may also change how apps are built. On Twitter, one user said they had created a simple drawing app in a matter of minutes, and another said they had programmed an app that recommends five new movies each day, along with trailers and information on where to watch them.

“Coding is like learning how to drive — as long as the beginner gets some guidance, anyone can code,” said Lian Jye Su, an analyst at ABI Research. “AI can be a good teacher.”

Exams passed with flying colors

OpenAI says the update remains “less capable” than humans in many real-world scenarios but performs at a “human level” on a variety of professional and academic benchmarks. According to the company, GPT-4 recently passed a simulated bar exam with a score in the top 10% of test takers; by comparison, the previous version, GPT-3.5, scored in the bottom 10%. OpenAI says the new version also performed well on the LSAT, GRE, SATs, and several AP exams.

ChatGPT made headlines in January for passing notable graduate-level exams, including one from the University of Pennsylvania’s Wharton School of Business, though not with particularly high marks. The company said it spent months using lessons from its testing program and from ChatGPT to improve the new system’s accuracy and ability to stay on topic.

Giving more specific responses

According to the company, GPT-4 can produce longer, more detailed, and more reliable written responses than the previous version.

The latest version can now handle up to 25,000 words of text, up from roughly 4,000 before, and can give thorough directions for even the most unusual requests, such as how to clean a piranha’s fish tank or extract the DNA of a strawberry. According to one early user, it offered detailed pickup-line suggestions based on a prompt from a dating profile.
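Because model limits are actually measured in tokens rather than words, developers who want to know whether a long prompt will fit can count tokens before sending it. This is a minimal sketch, assuming the tiktoken library and a hypothetical input file; the 25,000-word figure above is the article’s, not an exact API limit.

```python
# Hypothetical sketch: estimate how long a prompt really is in tokens,
# since GPT-4's limits are defined in tokens, not words. The input file
# name is an illustrative assumption.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

with open("long_prompt.txt") as f:
    draft = f.read()

tokens = encoding.encode(draft)

print(f"Words:  {len(draft.split())}")
print(f"Tokens: {len(tokens)}")
```

As a rough rule of thumb, an English word comes out to slightly more than one token, which is why the jump from about 4,000 to 25,000 words is such a large practical change.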

Streamlining work in a variety of industries

In an early sign of GPT-4’s immense potential to change how people work across industries, Joshua Browder, CEO of the legal services chatbot DoNotPay, said his company is already working on using the tool to generate “one-click lawsuits” to fight robocalls.

“Imagine receiving a call, clicking a button, [the] call is transcribed and the 1,000-word lawsuit is generated. GPT-3.5 was not good enough, but GPT-4 handles the job extremely well,” Browder tweeted.

Meanwhile, Jake Kozloski, CEO of the dating site Keeper, said the technology is being used to better match its members.

According to Su of ABI Research, we may also see big breakthroughs in “connected car [dashboards], remote diagnosis in healthcare, and other AI applications that were previously not possible.”

This is a work in progress

Despite significant advances in the company’s AI model, GPT-4 retains limitations similar to prior versions. According to OpenAI, the system lacks knowledge of events that occurred after the cutoff of its training data (September 2021) and does not learn from its experience. The company also says it can make “basic logic errors,” be “overly gullible in accepting evident false statements from a user,” and fail to double-check its work.

Gartner’s Chandrasekaran said this is also reflective of many AI models today. “Let us not forget that these AI models aren’t perfect,” Chandrasekaran said. “They can produce inaccurate information from time to time and can be black-box in nature.” For now, OpenAI said GPT-4 users should exercise caution and use “great care” particularly “in high-stakes contexts.”
