
Meta and Google made AI news this week. Here are the biggest announcements

Ray-Ban Meta Smart Glasses will be available in clear frames.

From Meta’s AI-empowered AR glasses to its new Natural Voice Interactions feature to Google’s AlphaChip breakthrough and ChromaLock’s chatbot-on-a-graphing calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.


Google taught an AI to design computer chips

Deciding how and where all the bits and bobs go into today’s leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it did, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which predicts protein structures to aid drug discovery, AlphaChip uses reinforcement learning to generate new chip layouts in a matter of hours rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.
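AlphaChip's actual reinforcement-learning setup isn't spelled out here, but the core idea, searching over candidate placements while scoring each one by a wirelength-style cost, can be illustrated with a toy stand-in. The sketch below uses plain random search (not RL) over a small grid, scoring candidates by half-perimeter wirelength, a standard proxy cost in chip placement; all names and numbers are illustrative.

```python
import random

def wirelength(placement, nets):
    """Half-perimeter wirelength: for each net, half the perimeter of the
    bounding box around its cells' coordinates."""
    total = 0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def place_cells(cells, nets, grid=4, trials=500, seed=0):
    """Try random one-cell-per-slot placements on a grid x grid canvas
    and keep the cheapest layout found."""
    rng = random.Random(seed)
    spots = [(x, y) for x in range(grid) for y in range(grid)]
    best, best_cost = None, float("inf")
    for _ in range(trials):
        rng.shuffle(spots)
        candidate = dict(zip(cells, spots))  # each cell gets a unique slot
        cost = wirelength(candidate, nets)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

cells = ["cpu", "cache", "io"]
nets = [("cpu", "cache"), ("cache", "io")]
layout, cost = place_cells(cells, nets)
```

The real system replaces the random proposals with a learned policy that places one macro at a time, which is what lets it improve across designs instead of starting from scratch each run.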



Microsoft outlines Recall security: ‘The user is always in control’

Microsoft got itself raked over the proverbial coals back in June when it attempted to foist its Recall feature upon users. The AI-powered tool was billed as a way for users to search their computing history using natural language queries, except it did so by automatically capturing screenshots as users worked, which led to a huge outcry from both users and data privacy advocates. This week, Microsoft published a blog post attempting to regain users’ trust by laying out the steps it is taking to prevent data misuse, including restrictions on which apps the feature can track and which hardware systems it can run on, all while reasserting that “the user is always in control.”


Meta rolls out its own version of Advanced Voice Mode

Fancy Ray-Ban smart glasses weren’t the only items to debut at Meta’s Connect 2024 event this past Wednesday. The company also announced the release of its new Natural Voice Interactions feature for Meta AI. Just as with Gemini Live and Advanced Voice Mode, Natural Voice Interactions enables you to speak directly with the chatbot as you would another person, rather than type or dictate your prompts to the AI. The new feature is available to play with right now and, unlike AVM, is completely free to use.


OpenAI drops nonprofit status in large-scale reorganization

In what should come as a surprise to nobody, OpenAI CEO Sam Altman is taking steps to further consolidate his control over the multibillion-dollar AI startup. Reuters reported this week that OpenAI is discussing plans to reorganize its core business not as the nonprofit it's been since its founding in 2015, but as a for-profit entity. The company is apparently trying to make itself more “attractive to investors,” but the fact that the nonprofit board of directors, which briefly ousted Altman last November, will no longer have jurisdiction over his actions is of obvious benefit to him specifically.


A modder just put ChatGPT on a TI-84 graphing calculator

The latest version of the large language model that ChatGPT runs on, GPT-4o, is not what you’d call petite; it reportedly has more than 200 billion parameters. Yet, despite its girth, YouTuber ChromaLock managed to bring the chatbot’s capabilities to a TI-84 graphing calculator. Granted, they didn’t load the AI onto the calculator itself to run locally, but the modder did manage to reach the online service through the clever application of a custom Wi-Fi module and an open-source software suite. Best I could ever do with my old TI-83 was make crude anatomical references.
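ChromaLock's firmware and link-port protocol are specific to the project, but the general pattern is a bridge: forward the calculator's prompt to a hosted model, then reflow the reply for the calculator's narrow screen. A minimal sketch of that reflow step, assuming the TI-84's 16-character home-screen width and using a placeholder `send_to_model` instead of any real network call:

```python
import textwrap

TI84_COLS = 16  # the TI-84 home screen shows 16 characters per row

def send_to_model(prompt: str) -> str:
    """Placeholder for the network request the Wi-Fi module would make
    to a hosted chat API; hardcoded here so the sketch is self-contained."""
    return "The answer is 42, obviously."

def reply_for_calculator(prompt: str, cols: int = TI84_COLS) -> list[str]:
    """Forward a prompt and break the reply into screen-width lines."""
    answer = send_to_model(prompt)
    return textwrap.wrap(answer, width=cols)

lines = reply_for_calculator("What is 6 x 7?")
```

The real mod additionally has to shuttle these lines over the calculator's link port, which is where the custom hardware and open-source suite come in.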

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
OpenAI Project Strawberry: Here’s everything we know so far

Even as it is reportedly set to spend $7 billion on training and inference costs (with an overall $5 billion shortfall), OpenAI is steadfastly seeking to build the world's first Artificial General Intelligence (AGI).

Project Strawberry is the company's next step toward that goal, and as of mid-September, it's officially been announced.
What is Project Strawberry?
Project Strawberry is OpenAI's latest (and potentially greatest) large language model, one that is expected to broadly surpass the capabilities of current state-of-the-art systems with its "human-like reasoning skills" when it rolls out. It just might power the next generation of ChatGPT.
What can Strawberry do?
Project Strawberry will reportedly be a reasoning powerhouse. Using a combination of reinforcement learning and “chain of thought” reasoning, the new model will reportedly be able to solve math problems it has never seen before and act as a high-level agent, creating marketing strategies and autonomously solving complex word puzzles like the NYT's Connections. It can even "navigate the internet autonomously" to perform "deep research," according to internal documents viewed by Reuters in July.

A new definition of ‘open source’ could spell trouble for Big AI

The Open Source Initiative (OSI), self-proclaimed steward of the open source definition, the most widely used standard for open-source software, announced an update to what constitutes an "open source AI" on Thursday. The new wording could now exclude models from industry heavyweights like Meta and Google.

"Open Source has demonstrated that massive benefits accrue to everyone after removing the barriers to learning, using, sharing, and improving software systems," the OSI wrote in a recent blog post. "For AI, society needs the same essential freedoms of Open Source to enable AI developers, deployers, and end users to enjoy those same benefits."

GPT-4: everything you need to know about ChatGPT’s standard AI model

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot originally powered by the GPT-3.5 large language model. But when the highly anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, with some calling it the early glimpses of AGI (artificial general intelligence).
What is GPT-4?
GPT-4 is a large language model created by OpenAI that generates text closely resembling human writing. It advances the technology used by ChatGPT, which was previously based on GPT-3.5 but has since been updated. GPT stands for Generative Pre-trained Transformer, a deep learning architecture that uses artificial neural networks to write like a human.

According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even "learning a user's writing style."
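Visual input in practice means a single request can mix text and image parts in one user message. Here is a sketch of such a request payload, following the shape of OpenAI's public Chat Completions API; the model name and image URL are placeholders, and no network call is made:

```python
def build_vision_request(text: str, image_url: str,
                         model: str = "gpt-4-turbo") -> dict:
    """Assemble a chat request whose user message combines a text part
    and an image part, as the vision-capable GPT-4 endpoints accept."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": text},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

req = build_vision_request("What is in this photo?",
                           "https://example.com/cat.jpg")
```

Longer context works the same way from the caller's side: the `messages` list simply carries more conversation history before the model runs out of room.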
