Microsoft has updated its terms of service, which take effect at the end of September, clarifying that its Copilot AI services should not be used as a replacement for advice from actual humans.
AI-based agents are popping up across industries, with chatbots increasingly handling customer service calls, health and wellness applications, and even legal advice. However, Microsoft is once again reminding its customers that its chatbots’ responses should not be taken as gospel. “AI services are not designed, intended, or to be used as substitutes for professional advice,” the updated Service Agreement reads.
The company specifically pointed to its health bots as an example. The bots “are not designed or intended as substitutes for professional medical advice or for use in the diagnosis, cure, mitigation, prevention, or treatment of disease or other conditions,” the new terms explain. “Microsoft is not responsible for any decision you make based on information you receive from health bots.”
The revised Service Agreement also details additional AI practices that are explicitly prohibited. Users, for example, cannot use its AI services to extract data. “Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services,” the agreement reads. The company is also banning reverse engineering attempts to reveal the models’ weights, as well as the use of its data “to create, train, or improve (directly or indirectly) any other AI service.”
“You may not use the AI services to discover any underlying components of the models, algorithms, and systems,” the new terms read. “For example, you may not try to determine and remove the weights of models or extract any parts of the AI services from your device.”
Microsoft has long been vocal about the potential dangers of generative AI’s misuse. With these new terms of service, the company appears to be staking out legal cover for itself as its AI products become ubiquitous.