Microsoft has a new way to keep ChatGPT ethical, but will it work?

Microsoft caught a lot of flak when it shut down its artificial intelligence (AI) Ethics & Society team in March 2023. It wasn’t a good look given the near-simultaneous scandals engulfing AI, but the company has now laid out how it intends to keep its future AI efforts responsible and in check.

In a post on Microsoft’s On the Issues blog, Natasha Crampton — the Redmond firm’s Chief Responsible AI Officer — explained that the ethics team was disbanded because “A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.”

[Image: Bing Chat shown on a laptop. Jacob Roach / Digital Trends]

Instead, Microsoft adopted the approach it has taken with its privacy, security, and accessibility teams, and “embedded responsible AI across the company.” In practice, this means Microsoft has senior staff “tasked with spearheading responsible AI within each core business group,” as well as “a large network of responsible AI ‘champions’ with a range of skills and roles for more regular, direct engagement.”

Beyond that, Crampton said Microsoft has “nearly 350 people working on responsible AI, with just over a third of those (129 to be precise) dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs.”

Crampton noted that after Microsoft shuttered its Ethics & Society team, some of its members were embedded into teams across the company. However, seven members of the group were let go as part of the sweeping job cuts that saw 10,000 Microsoft workers laid off at the start of 2023.

Navigating the scandals

[Image: Bing Chat saying it wants to be human. Jacob Roach / Digital Trends]

AI has hardly been free of scandals in recent months, and it’s those worries that fueled the backlash against Microsoft’s disbanding of its AI ethics team. If Microsoft lacked a dedicated team to help guide its AI products in responsible directions, the thinking went, it would struggle to curtail the kinds of abuses and questionable behavior its Bing chatbot has become notorious for.

The company’s latest blog post is surely aimed at alleviating those public concerns. Rather than abandoning its responsible AI efforts entirely, Microsoft appears to be seeking to ensure that teams across the company have regular contact with experts in responsible AI.

Still, there’s no doubt that shutting down its AI Ethics & Society team didn’t go over well, and chances are Microsoft still has some way to go to ease the public’s collective mind on this topic. Indeed, even Microsoft itself thinks ChatGPT, whose developer OpenAI has received billions of dollars in Microsoft investment, should be regulated.

Just yesterday, Geoffrey Hinton — the “godfather of AI” — quit Google and told The New York Times he had serious misgivings about the pace and direction of AI expansion, while a group of leading tech experts recently signed an open letter calling for a pause on AI development so that its risks can be better understood.

Microsoft might not be disregarding worries about ethical AI development, but whether its new approach is the right one remains to be seen. After the controversial start Bing Chat has endured, Natasha Crampton and her colleagues will be hoping things change for the better.
