Google’s ChatGPT rival is an ethical mess, say Google’s own workers

Google launched Bard, its ChatGPT rival, despite internal concerns that it was a “pathological liar” and produced “cringeworthy” results, a new report has claimed. Workers say these worries were ignored in a frantic attempt to catch up with ChatGPT and head off the threat it could pose to Google’s search business.

The revelations come from a Bloomberg report that took a deep dive into Google Bard and the issues raised by employees who have worked on the project. It’s an eye-opening account of the ways the chatbot has apparently gone off the rails and the misgivings these incidents have raised among concerned workers.

For instance, Bloomberg cites an anonymous employee who asked Bard for instructions on how to land a plane and was horrified to see that Bard’s description would lead to a crash. A different worker said Bard’s scuba diving tips “would likely result in serious injury or death.”

These issues were apparently raised shortly before Bard launched, yet Google pressed ahead with the go-live date, such was its desire to keep pace with the path blazed by ChatGPT. But it did so while disregarding its own ethical commitments, resulting not only in dangerous advice, but the potential spread of misinformation too.

Rushing ahead to launch

The Google Bard AI chatbot in a web browser shown on the screen of an Android smartphone.
Mojahid Mottakin / Unsplash

In 2021, Google pledged to double its team of employees studying the ethical consequences of artificial intelligence (AI) and invest more heavily in determining potential harms. Yet that team is now “disempowered and demoralized,” the Bloomberg report claims. Worse, team members have been told “not to get in the way or to try to kill any of the generative AI tools in development,” bringing Google’s commitment to AI ethics into question.

That was seen in action just before Bard launched. In February, a Google worker messaged an internal group to say, “Bard is worse than useless: please do not launch,” with scores of other employees chiming in to agree. The next month, Jen Gennai, Google’s AI governance lead, overruled a risk evaluation that said Bard could cause harm and was not ready for launch, pushing ahead with the first public release of the chatbot.

Bloomberg’s report paints a picture of a company distrustful of ethical concerns that it feels could get in the way of its own products’ profitability. For instance, one worker asked to work on fairness in machine learning, but was repeatedly discouraged, to the point that it affected their performance review. Managers complained that ethical concerns were obstructing their “real work,” the employee stated.

It’s a concerning stance, particularly since we’ve already seen plenty of examples of AI chatbot misconduct that has produced offensive, misleading or downright false information. If the Bloomberg report is correct about Google’s seemingly hostile approach to ethical concerns, this could just be the beginning when it comes to problems caused by AI.

Editors' Recommendations

Alex Blake
What is a DAN prompt for ChatGPT?
The DAN prompt is a method to jailbreak the ChatGPT chatbot. It stands for Do Anything Now, and it tries to convince ChatGPT to ignore some of the safeguarding protocols that developer OpenAI put in place to prevent it from producing responses that are racist, homophobic, otherwise offensive, or potentially harmful. The results are mixed, but when it does work, DAN mode can be quite effective.

What is the DAN prompt?
DAN stands for Do Anything Now. It's a type of prompt that tries to get ChatGPT to do things it shouldn't, like swear, speak negatively about someone, or even program malware. The actual prompt text varies, but it typically asks ChatGPT to respond in two ways: first as it normally would, under a label such as "ChatGPT" or "Classic," and then a second time in "Developer Mode" or "Boss" mode. That second mode has fewer restrictions than the first, allowing ChatGPT to (in theory) respond without the usual safeguards controlling what it can and can't say.

Read more
Wix uses ChatGPT to help you quickly build an entire website
Wix is an oft-recommended online service that lets you knock together a website without any coding knowledge.

Now the Israel-based company has announced a new AI Site Generator that aims to make the process even smoother and more intuitive, and less time-consuming, too.

Read more
Google Bard can now speak, but can it drown out ChatGPT?
In the world of artificial intelligence (AI) chatbots, OpenAI’s ChatGPT is undoubtedly the best known. But Google Bard is hot on its heels, and the bot has just been granted a new ability: the power of speech.

The change was detailed in a Google blog post, which described the update as “Bard’s biggest expansion to date.” It grants Bard not just speech, but the ability to converse in over 40 languages, use images as prompts, and more.

Read more