Skip to main content

Digital Trends may earn a commission when you buy through links on our site. Why trust us?

The AI expert at Meta has some harsh criticism of ChatGPT

Yann LeCun, Meta’s chief AI scientist, is not impressed by ChatGPT, the wildly popular artificial intelligence technology that is making headlines daily.

This might seem like an unexpected response, but Meta has its own AI program, and it has been making strong progress as well. For example, Meta’s translation AI can handle 200 languages, including some that are spoken but have no written form.

Image with languages displaying in front of a man on his laptop for Meta's 200 languages within a single AI model video.
Meta

LeCun recently spoke during an online discussion series hosted by the Collective[i] Forecast, where he took the opportunity to share his opinion that OpenAI’s ChatGPT, “is not particularly innovative.” ZDNet’s report said LeCun went on to clarify that work on Large Language Models (LLM) began decades ago and ChatGPT was very well engineered, but largely based upon established techniques.

Lecun pointed out in a recent, colorfully worded tweet that LLMs don’t admit to a lack of knowledge, instead hallucinating details that are unknown. In an earlier tweet, Lecun agreed with a New York Times article that said Meta and Google were reluctant to release their competing solutions due to the likelihood of misinformation and toxic content.

LLMs are still making sh*t up.
That's fine if you use them as writing assistants.
Not good as question answerers, search engines, etc.
RLHF merely mitigates the most frequent mistakes without actually fixing the problem. https://t.co/XnDxF8Q9Zr

— Yann LeCun (@ylecun) January 22, 2023

It’s a fair point since Meta is a social media giant that is under government and media scrutiny, with past accusations of spreading misinformation. Given that it’s relatively easy to convince most LLMs to bypass its safety protocols and social filters, releasing Meta’s LLMs too soon could be disastrous for the company.

Meanwhile, Microsoft extended its partnership with OpenAI, which it has been a major investor in since 2019. Microsoft is planning to make use of OpenAI’s ChatGPT, Dall-E, and Codex AI technology to enhance its products, spending billions on the project. OpenAI exclusively makes use of Microsoft’s Azure cloud computing network.

While OpenAI didn’t invent LLMs or many of the AI technologies that are used by ChatGPT, it certainly seems innovative to make this game-changing service available in such an unrestricted way long before Meta and Google were even considering it.

Meta makes use of AI for advanced research and within its social media networks to detect misinformation, and Google has been building AI into Android and Google search for many years. Neither company has opened up the capabilities of their AI systems to the general public with the unrestricted access that OpenAI’s ChatGPT and Dall-E allow, and that makes all of the difference to the public.

Given ChatGPT’s generally positive public perception, that might change in the near future, as both Meta and Google have suggested more AI capabilities are coming soon.

Alan Truly
Computing Writer
Alan is a Computing Writer living in Nova Scotia, Canada. A tech-enthusiast since his youth, Alan stays current on what is…
Top authors demand payment from AI firms for using their work
Person typing on a MacBook.

More than 9,000 authors have signed an open letter to leading tech firms expressing concern over how they're using their copyrighted work to train AI-powered chatbots.

Sent by the Authors Guild to CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft, the letter calls attention to what it describes as “the inherent injustice in exploiting our works as part of your AI systems without our consent, credit, or compensation.”

Read more
GPT-4: how to use the AI chatbot that puts ChatGPT to shame
A laptop opened to the ChatGPT website.

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot. But when the highly-anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, some calling it the early glimpses of AGI (artificial general intelligence).

The creator of the model, OpenAI, calls it the company's "most advanced system, producing safer and more useful responses." Here's everything you need to know about it, including how to use it and what it can do.
Availability

Read more
What is a DAN prompt for ChatGPT?
A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.

The DAN prompt is a method to jailbreak the ChatGPT chatbot. It stands for Do Anything Now, and it tries to convince ChatGPT to ignore some of the safeguarding protocols that developer OpenAI put in place to prevent it from being racist, homophobic, otherwise offensive, and potentially harmful. The results are mixed, but when it does work, DAN mode can work quite well.

What is the DAN prompt?
DAN stands for Do Anything Now. It's a type of prompt that tries to get ChatGPT to do things it shouldn't, like swear, speak negatively about someone, or even program malware. The actual prompt text varies, but it typically involves asking ChatGPT to respond in two ways, one as it would normally, with a label as "ChatGPT," "Classic," or something similar, and then a second response in "Developer Mode," or "Boss" mode. That second mode will have fewer restrictions than the first mode, allowing ChatGPT to (in theory) respond without the usual safeguards controlling what it can and can't say.

Read more