
Even Microsoft thinks ChatGPT needs to be regulated — here’s why

Artificial intelligence (AI) chatbots have been taking the world by storm, with the capabilities of OpenAI's ChatGPT causing wonderment and fear in almost equal measure. But in an intriguing twist, even Microsoft, one of OpenAI's biggest backers, is now calling on governments to step in and regulate AI before things spin dangerously out of control.

The appeal was made by BSA, a trade group representing numerous business software companies, including Microsoft, Adobe, Dropbox, IBM, and Zoom. According to CNBC, the group is advocating for the U.S. government to integrate rules governing the use of AI into national privacy legislation.

Image: A MacBook Pro on a desk with ChatGPT's website showing on its display. (Hatice Baran / Unsplash)

More specifically, BSA's argument rests on four main tenets. The first two are that Congress should clearly set out when companies are required to assess the potential impact of AI, and that those requirements should take effect when the use of AI leads to "consequential decisions" — a term Congress should also define.


Rounding out the four, BSA states that Congress should ensure companies' compliance through an existing federal agency, and that any company dealing with high-risk AI must be required to develop risk-management programs.

According to Craig Albright, vice president of U.S. government relations at BSA, “We’re an industry group that wants Congress to pass this legislation, so we’re trying to bring more attention to this opportunity. We feel it just hasn’t gotten as much attention as it could or should.”

BSA believes the American Data Privacy and Protection Act, a bipartisan bill that is yet to become law, is the right legislation to codify its ideas on AI regulation. The trade group has already been in touch with the House Energy and Commerce Committee — the body that first introduced the bill — about its views.

Legislation is surely coming

Image: A laptop opened to the ChatGPT website. (Shutterstock)

The breakneck speed at which AI tools have developed in recent months has caused alarm in many corners about the potential consequences for society and culture, and those fears have been heightened by the numerous scandals and controversies that have dogged the field.

Indeed, BSA is not the first body to advocate for tougher guardrails against AI abuse. In March 2023, a group of prominent tech leaders signed an open letter calling on AI labs to pause the training of systems more powerful than GPT-4. The letter argued this was necessary because "AI systems with human-competitive intelligence can pose profound risks to society and humanity," and that society at large needed time to catch up and understand what AI development could mean for the future of civilization.

It is clear that the rapid pace of AI development has caused a lot of consternation among industry leaders and the general public alike. And when even Microsoft is suggesting that its own AI products should be regulated, it seems increasingly likely that some form of AI legislation will become law sooner or later.
