On Tuesday, Meta Platforms said it will give researchers access to components of a new “human-like” artificial intelligence model, I-JEPA, which it says analyzes and completes unfinished images more accurately than existing models.
Rather than relying solely on nearby pixels, as earlier generative AI models do, I-JEPA fills in missing portions of an image by drawing on background knowledge about the world, the company said.
That approach reflects the kind of human-like reasoning advocated by Meta’s top AI scientist, Yann LeCun, and helps the technology avoid errors common in AI-generated images, such as hands with extra fingers, the company said.
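Roughly, a joint-embedding predictive architecture predicts abstract representations of missing image regions from the representations of the visible regions, rather than reconstructing pixels directly. The toy sketch below is a loose illustration of that idea, not Meta’s released I-JEPA code; the module names, sizes, masking split, and simple pooling predictor are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Hypothetical sizes chosen only for illustration.
NUM_PATCHES, PATCH_DIM, EMBED_DIM = 196, 768, 256

class TinyEncoder(nn.Module):
    """Maps image patches to abstract embeddings (stand-in for a real vision backbone)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(PATCH_DIM, EMBED_DIM), nn.GELU(),
                                  nn.Linear(EMBED_DIM, EMBED_DIM))

    def forward(self, patches):              # patches: (batch, n_patches, PATCH_DIM)
        return self.proj(patches)

class TinyPredictor(nn.Module):
    """Predicts embeddings of masked target patches from the visible context."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMBED_DIM, EMBED_DIM), nn.GELU(),
                                 nn.Linear(EMBED_DIM, EMBED_DIM))

    def forward(self, context_embeddings):
        # Pool the context and emit one predicted embedding (broadcast to all targets).
        pooled = context_embeddings.mean(dim=1, keepdim=True)
        return self.net(pooled)

def jepa_style_loss(patches, context_idx, target_idx,
                    context_encoder, target_encoder, predictor):
    """The loss is computed between embeddings, never between raw pixels."""
    ctx = context_encoder(patches[:, context_idx])       # visible (context) patches
    with torch.no_grad():                                 # target embeddings are not backpropagated through
        tgt = target_encoder(patches[:, target_idx])      # masked (target) patches
    pred = predictor(ctx).expand_as(tgt)                  # predicted target embeddings
    return nn.functional.mse_loss(pred, tgt)

# Usage: a random "image" of 196 flattened patches, with the last 20 masked out.
patches = torch.randn(2, NUM_PATCHES, PATCH_DIM)
context_idx = torch.arange(0, 176)
target_idx = torch.arange(176, 196)
loss = jepa_style_loss(patches, context_idx, target_idx,
                       TinyEncoder(), TinyEncoder(), TinyPredictor())
loss.backward()
```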
Meta, which owns Facebook and Instagram, publishes a large amount of open-source AI research through its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models built by Meta’s researchers can help the business by spurring innovation, surfacing safety flaws, and lowering costs.
“For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make,” he told investors in April.
Meta’s leadership has brushed aside warnings from others in the industry about the technology’s risks, declining last month to sign a statement, backed by top executives at OpenAI, DeepMind, Microsoft, and Google, that compared the dangers of AI to those of pandemics and war.
LeCun, one of the “godfathers of AI,” has instead denounced “AI doomerism” and argued for building safety measures into AI systems.
Meta is also beginning to build generative AI features into its consumer products, including Instagram tools that can edit users’ photos and advertising tools that can generate image backgrounds from text prompts.