
Google is bringing AI to the classroom — in a big way


Google is already incorporating its Gemini AI assistant into the rest of its product ecosystem to help individuals and businesses streamline their existing workflows. Now, the Silicon Valley titan is looking to bring AI into the classroom.

While we’ve already seen the damage that teens can do when given access to generative AI, Google argues that it is taking steps to ensure the technology is employed responsibly by students and academic faculty alike.


When Google first rolled out a teen-safe version of Gemini for personal use last year, it chose not to enable the AI for school-issued accounts. That will change in the coming months as Google makes the AI available free of charge to students in over 100 countries through its Google Workspace for Education accounts and school-issued Chromebooks.

Teens who meet Google’s minimum age requirements — 13 or older in the U.S.; 18 or older in the European Economic Area (EEA), Switzerland, Canada, and the U.K. — will be able to converse with Gemini as they would on their personal accounts. That includes access to features like Help me write, Help me read, generative AI backgrounds, and AI-powered noise cancellation. The company was quick to point out that no personal data from this program will be used to train AI models, and that school administrators will be able to enable or disable features as needed.

What’s more, teens will be able to organize and track their homework assignments through Google Tasks and Calendar integrations, as well as collaborate with their peers using Meet and Assignments.

Google Classroom will also integrate with the school’s Student Information System (SIS), allowing educators to set up classes and import pertinent data such as student lists and grading settings. They’ll also have access to an expanded Google for Education App Hub with 16 new app integrations including Kami, Quizizz, and Screencastify available at launch.

Students will also have access to the Read Along in Classroom feature, which provides them with real-time, AI-based reading help. In turn, educators will receive AI-generated feedback on each student’s reading accuracy, speed, and comprehension.

In the coming months, Google also hopes to introduce the ability for teachers to generate personalized stories tailored to each student’s specific educational needs. The feature is currently available in English, with more than 800 books for teachers to choose from, though it will soon gain support for other languages, starting with Spanish.

Additionally, Google is piloting a suite of Gemini in Classroom tools that will enable teachers to “define groups of students in Classroom to assign different content based on each group’s needs.” The recently announced Google Vids, which helps users quickly and easily cut together engaging video clips, will be coming to the classroom as well. A non-AI version of Vids arrives on Google Workspace for Education Plus later this year, while the AI-enhanced version will only be available as a Workspace add-on.

That said, Google has apparently not forgotten just how emotionally vicious teenagers can be. As such, the company is incorporating a number of safety and privacy tools into the new AI system. For example, school administrators will be able to prevent students from initiating direct messages and creating spaces, helping to curb bullying.

Admins will also have the option to block access to Classroom from compromised Android and iOS devices, and can require multiparty approval (i.e., sign-off from at least two school officials) before security-sensitive changes, such as turning off two-step verification, can be implemented.

Google is introducing a slew of accessibility features as well. Chromebooks will get a new Read Aloud feature in the Chrome browser, for example. Extract Text from PDF will leverage OCR technology to make PDFs accessible to screen readers through the Chrome browser, while the Files app will soon offer augmented image labels to assist screen readers with relaying the contents of images in Chrome.

Later this year, Google also plans to release a feature that will allow users to control their Chromebooks using only their facial expressions and head movements.

These features all sound impressive and should help bring AI into the classroom in a safe and responsible manner — in theory, at least. Though given how quickly today’s teens can exploit security loopholes to bypass their school’s web filters, Google’s good intentions could ultimately prove insufficient.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…