
OpenAI uses its own models to fight election interference

OpenAI, the company behind the popular ChatGPT generative AI chatbot, has released a report saying it has blocked more than 20 operations and deceptive networks worldwide so far in 2024. The operations differed in objective, scale, and focus, and used OpenAI's models to create malware and to write fake social media accounts, bios, and website articles.

OpenAI says it has analyzed the activity it stopped and shared key insights from that analysis. “Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report says.


This is especially important given that 2024 is an election year in many places, including the United States, Rwanda, India, and the European Union. In early July, for example, OpenAI banned a number of accounts that were generating comments about the elections in Rwanda, which were then posted on X (formerly Twitter) by separate accounts. It's reassuring, then, that OpenAI says the threat actors couldn't make much headway with these campaigns.

Another win for OpenAI was disrupting a China-based threat actor known as “SweetSpecter” that attempted to spear-phish OpenAI employees' corporate and personal email addresses. The report goes on to say that in August, Microsoft exposed a set of domains it attributed to an Iranian covert influence operation known as “STORM-2035.” “Based on their report, we investigated, disrupted and reported an associated set of activity on ChatGPT,” the report adds.

OpenAI also says the social media posts its models created didn't get much attention, receiving few or no comments, likes, or shares. The company pledges it will continue to anticipate how threat actors use advanced models for harmful ends and plans to take the necessary actions to stop them.

Judy Sanhz
Judy Sanhz is a Digital Trends computing writer covering all computing news. Loves all operating systems and devices.
Nvidia just released an open-source LLM to rival GPT-4

Nvidia, which builds some of the most sought-after GPUs in the AI industry, has announced the release of an open-source large language model that reportedly performs on par with leading proprietary models from OpenAI, Anthropic, Meta, and Google.

The company introduced its new NVLM 1.0 family in a recently released white paper; the lineup is spearheaded by the 72-billion-parameter NVLM-D-72B model. “We introduce NVLM 1.0, a family of frontier-class multimodal large language models that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models,” the researchers wrote.
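Because this is an open-weights release, the model should be loadable with standard tooling. Below is a minimal, untested sketch using Hugging Face Transformers; the repo id "nvidia/NVLM-D-72B" and the loading flags are assumptions based on the model's name and on how comparable open multimodal models are typically published, so check Nvidia's release for the authoritative instructions.

```python
# Hypothetical sketch: loading an open-weights multimodal LLM with Hugging Face
# Transformers. The repo id below is an assumption based on the model's name;
# consult Nvidia's release for the real location and usage details.
from transformers import AutoModel, AutoTokenizer

repo = "nvidia/NVLM-D-72B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the 72B parameters across available GPUs
    trust_remote_code=True,  # custom multimodal code often ships with the repo
).eval()
```

A model this size won't fit on a single consumer GPU, which is why the sketch uses device_map="auto" (via the accelerate library) to spread the weights across whatever hardware is available.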

California governor vetoes expansive AI safety bill

California Gov. Gavin Newsom has vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, arguing in a letter to lawmakers that it "establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology."

"I do not believe this is the best approach to protecting the public from real threats posed by the technology," he wrote. SB 1047 would have required "that a developer, before beginning to initially train a covered model … comply with various requirements, including implementing the capability to promptly enact a full shutdown … and implement a written and separate safety and security protocol.”

Meta and Google made AI news this week. Here were the biggest announcements

From Meta's AI-empowered AR glasses to its new Natural Voice Interactions feature to Google's AlphaChip breakthrough and ChromaLock's chatbot-on-a-graphing-calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today's leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it did, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours, rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.
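To give a flavor of the problem AlphaChip tackles, here is a toy sketch of chip placement framed as an optimization task: blocks go on a grid, and shorter total wire between connected blocks is better. For simplicity it uses random search rather than AlphaChip's actual reinforcement-learning approach, and the grid size, block count, and netlist are invented for illustration.

```python
# Toy illustration of chip placement as an optimization problem. The grid,
# blocks, and netlist are made up for this example, and random search stands
# in for AlphaChip's actual reinforcement-learning agent.
import random

GRID = 8                 # placements live on an 8x8 grid of cells
BLOCKS = 6               # number of circuit blocks to place
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]  # connected block pairs

def wirelength(placement):
    # Total Manhattan distance between every pair of connected blocks;
    # real placers optimize richer costs (congestion, density, timing).
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_placement():
    # Drop each block onto a distinct random cell.
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], BLOCKS)
    return dict(enumerate(cells))

# Search: keep the lowest-wirelength layout found across many random tries.
best = min((random_placement() for _ in range(10_000)), key=wirelength)
print("best wirelength:", wirelength(best))
print("placement:", best)
```

An RL agent like AlphaChip's replaces the blind sampling here with a learned policy that places blocks one at a time and is rewarded for low-cost layouts, which is what lets it improve with experience rather than just getting lucky.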
