What the New GPT-4 AI Can Do

Tech research company OpenAI has just released an updated version of its text-generating artificial intelligence program, called GPT-4.

Not only can GPT-4 produce more natural-sounding text and solve problems more accurately than its predecessor; it can also process images in addition to text. But the AI is still vulnerable to some of the same problems that plagued earlier GPT models.

Perhaps the most significant change is that GPT-4 is “multimodal,” meaning it works with both text and images.

Although it cannot output pictures the way generative AI models such as DALL-E and Stable Diffusion can, it can process and respond to the visual inputs it receives.
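The article doesn't show what sending an image alongside text actually looks like for a developer, so here is a minimal sketch of how such a request can be assembled, following OpenAI's documented Chat Completions format for vision input. The model name, question, and image URL are illustrative placeholders, and the payload is only built and printed, not sent to any API.

```python
import json

def build_vision_request(question: str, image_url: str) -> dict:
    """Assemble a chat payload that pairs a text question with an image.

    The structure mirrors OpenAI's multimodal message format, in which a
    user message's content is a list of typed parts (text and image_url).
    """
    return {
        "model": "gpt-4-vision-preview",  # assumed vision-capable model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

payload = build_vision_request(
    "What objects are in this photo?",
    "https://example.com/photo.jpg",
)
print(json.dumps(payload, indent=2))
```

In practice, this payload would be posted to the chat completions endpoint with an API key; the key point is simply that text and image inputs travel together in a single message.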

A device with the ability to analyze and then describe images could be enormously valuable for people who are visually impaired or blind.

One accessibility app for blind users recently incorporated GPT-4 into a “virtual volunteer” that, according to a statement on OpenAI’s website, can answer questions about the images users send it.

But GPT-4’s image analysis goes beyond describing the picture. In one demonstration, an OpenAI representative sketched a mock-up of a simple website, and GPT-4 generated working code for the page from the drawing.

OpenAI says it has run both GPT-3.5 and GPT-4 through a variety of tests designed for humans, including a simulated bar exam for lawyers.

GPT-4 achieved human-level scores on many of these benchmarks and consistently outperformed its predecessor, GPT-3.5.
