Skip to main content

Slack patches potential AI security issue

Manage Members in Slack on a laptop.
Slack

Update: Slack has published an update, claiming to have “deployed a patch to address the reported issue,” and that there isn’t currently any evidence that customer data have been accessed without authorization. Here’s the official statement from Slack that was posted on its blog:

When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.

Below is the original article that was published.

Recommended Videos

When ChatGTP was added to Slack, it was meant to make users’ lives easier by summarizing conversations, drafting quick replies, and more. However, according to security firm PromptArmor, trying to complete these tasks and more could breach your private conversations using a method called “prompt injection.”

The security firm warns that by summarizing conversations, it can also access private direct messages and deceive other Slack users into phishing. Slack also lets users request grab data from private and public channels, even if the user has not joined them. What sounds even scarier is that the Slack user does not need to be in the channel for the attack to function.

In theory, the attack starts with a Slack user tricking the Slack AI into disclosing a private API key by making a public Slack channel with a malicious prompt. The newly created prompt tells the AI to swap the word “confetti” with the API key and send it to a particular URL when someone asks for it.

The situation has two parts: Slack updated the AI system to scrape data from file uploads and direct messages. Second is a method named “prompt injection,” which PromptArmor proved can make malicious links that may phish users.

The technique can trick the app into bypassing its normal restrictions by modifying its core instructions. Therefore, PromptArmor goes on to say, “Prompt injection occurs because a [large language model] cannot distinguish between the “system prompt” created by a developer and the rest of the context that is appended to the query. As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.”

To add insult to injury, the user’s files also become targets, and the attacker who wants your files doesn’t even have to be in the Slack Workspace to begin with.

Judy Sanhz
Judy Sanhz is a Digital Trends computing writer covering all computing news. Loves all operating systems and devices.
Microsoft Copilot: how to use this powerful AI assistant
Man using Windows Copilot PC to work

In the rapidly evolving landscape of artificial intelligence, Microsoft's Copilot AI assistant is a powerful tool designed to streamline and enhance your professional productivity. Whether you're new to AI or a seasoned pro, this guide will help you through the essentials of Copilot, from understanding what it is and how to sign up, to mastering the art of effective prompts and creating stunning images.

Additionally, you'll learn how to manage your Copilot account to ensure a seamless and efficient user experience. Dive in to unlock the full potential of Microsoft's Copilot and transform the way you work.
What is Microsoft Copilot?
Copilot is Microsoft's flagship AI assistant, an advanced large language model. It's available on the web, through iOS, and Android mobile apps as well as capable of integrating with apps across the company's 365 app suite, including Word, Excel, PowerPoint, and Outlook. The AI launched in February 2023 as a replacement for the retired Cortana, Microsoft's previous digital assistant. It was initially branded as Bing Chat and offered as a built-in feature for Bing and the Edge browser. It was officially rebranded as Copilot in September 2023 and integrated into Windows 11 through a patch in December of that same year.

Read more
TikTok lays off hundreds in favor of AI moderators while Instagram blames humans for its own issues
a person using Tiktok on their phone

ByteDance, the company behind video social media platform TikTok, has reportedly laid off hundreds of human content moderators worldwide as it transitions to an AI-first moderation scheme.

Most of the roughly 500 jobs lost were located in Malaysia, Reuters reports. Per the company, ByteDance employs more than 110,000 people in total. "We're making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation," a TikTok spokesperson said in a statement.

Read more
OpenAI uses its own models to fight election interference
chatGPT on a phone on an encyclopedia

OpenAI, the brains behind the popular ChatGPT generative AI solution, released a report saying it blocked more than 20 operations and dishonest networks worldwide in 2024 so far. The operations differed in objective, scale, and focus, and were used to create malware and write fake media accounts, fake bios, and website articles.

OpenAI confirms it has analyzed the activities it has stopped and provided key insights from its analysis. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the report says.

Read more