
Google may build Gemini AI directly into Chrome

The Google Gemini AI logo.
Google

Google is fleshing out its newly unified Gemini AI system in its browser, making its first attempt at building Chat with Gemini into the Chrome Omnibox.

This latest effort will update Google Chrome with a Chat with Gemini shortcut in the Chrome Omnibox, allowing users to access the AI chatbot without having to go to the Gemini website, according to Windows Report. The Omnibox serves as both an address bar and a search bar, and it handles several other browser tasks as well. Now, with a simple @ prompt, you can also call up Google’s AI chatbot to answer questions, create images, and generate summaries, among other tasks.

A screenshot of Chrome chat with Gemini, as taken by Windows Report.
Windows Report

Currently, Chat with Gemini for the Chrome Omnibox is being tested at the Canary level; however, it can be enabled manually via a Chrome flag. Follow these instructions to enable the feature:

  • Open Google Chrome.
  • Visit chrome://flags in the Omnibox.
  • Find Expansion pack for the Site Search starter pack.
  • Select Enable.
  • Restart the browser.
  • Visit chrome://settings/searchEngines in the address bar.
  • Note the Chat with Gemini shortcut under Site Search.
  • In a new tab, type the @ symbol, which will drop down the Gemini shortcut alongside other Omnibox shortcuts, including search tabs, history, and bookmarks.
  • Select the Gemini shortcut.
  • Enter your query and press Enter. This will direct you to the Gemini website.
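For readers who prefer the terminal, Chrome features toggled in chrome://flags can, in principle, also be passed at launch via the --enable-features switch. This sketch only builds and prints the launch command; the feature name used here is an assumption inferred from the flag's display name, not something the article confirms.

```shell
# Hypothetical command-line equivalent of flipping the chrome://flags toggle.
# ASSUMPTION: the internal feature name "SiteSearchStarterPackExpansion" is
# inferred from the flag's display name ("Expansion pack for the Site Search
# starter pack"); confirm the exact name in chrome://flags before relying on it.
FEATURE="SiteSearchStarterPackExpansion"

# Build the launch command as a string rather than executing it, so the
# sketch can be inspected without actually opening a browser window.
CMD="google-chrome --enable-features=${FEATURE}"
echo "$CMD"
```

Launching Chrome this way only lasts for that session; the chrome://flags route described above persists the setting across restarts.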

While the Canary test of Chat with Gemini for the Chrome Omnibox requires manual setup, users would likely receive the feature through a seamless Chrome update if it is released publicly.

Google’s rollout of its AI system has been slow over the last year, but now that the brand has unified under a single name, Gemini, it will likely pick up the pace of its product releases. Google recently announced its Google One AI Premium service, which includes Gemini Advanced, a paid tier of the AI that showcases the power of its latest large language model, Ultra 1.0. Google pairs this service with its productivity services and a premium-tier storage plan.

Notably, several competing browsers have already implemented their own iterations of AI chatbots. Microsoft has its Copilot chatbot connected to its Edge browser through a direct collaboration with OpenAI. The browser company Opera offers several browsers with built-in AI features, including its flagship Opera One and its gaming-focused Opera GX. Both include the brand’s proprietary AI system, called Aria, in addition to integrations with other models such as ChatGPT.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…
Seven nuclear reactors to power Google’s AI ambitions
Four nuclear power plants.

Google announced on Tuesday that it has signed a deal with nuclear energy startup Kairos Power to purchase 500 megawatts of “new 24/7 carbon-free power” from seven of the company's small modular reactors (SMRs). The companies are reportedly looking at an initial delivery from the first SMR in 2030 and a full rollout by 2035.

"The grid needs new electricity sources to support AI technologies that are powering major scientific advances, improving services for businesses and customers, and driving national competitiveness and economic growth," Michael Terrell, Google's senior director of Energy and Climate, wrote in a Google Blog on Tuesday. "This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone."

Google expands its AI search function, incorporates ads into Overviews on mobile
A woman paints while talking on her Google Pixel 7 Pro.

Google announced on Thursday that it is "taking another big leap forward" with an expansive round of AI-empowered updates for Google Search and AI Overview.
Earlier in the year, Google incorporated generative AI technology into its existing Lens app, which allows users to identify objects within a photograph and search the web for more information on them. With that change, the app returns an AI Overview based on what it sees rather than a list of potentially relevant websites. At the I/O conference in May, Google promised to expand that capability to video clips.
With Thursday's update, "you can use Lens to search by taking a video, and asking questions about the moving objects that you see," Google's announcement reads. The company suggests that the app could be used to, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.
Whether this works on more complex subjects, like analyzing your favorite NFL team's previous play, or on fast-moving objects, like identifying the makes and models of cars in traffic, remains to be seen. If you want to try the feature for yourself, it's available globally (though only in English) through the Google app for iOS and Android. Navigate to the Search Lab and enroll in the “AI Overviews and more” experiment to get access.

You won't necessarily have to type out your question either. Lens now supports voice questions, which allows you to simply speak your query as you take a picture (or capture a video clip) rather than fumbling across your touchscreen in a dimly lit room. 
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying "dramatically more helpful results," per the announcement. Those include reviews of the specific product you're looking at, price comparisons from across the web, and information on where to buy the item. 

Google’s Gemini Live now speaks nearly four dozen languages
A demonstration of Gemini Live on a Google Pixel 9.

Google announced Thursday that it is making Gemini Live available in more than 40 languages, allowing global users (no longer just English speakers) to access the conversational AI feature, as well as enabling the full Gemini AI to connect with additional Google apps in more languages.

Gemini Live is Google's answer to OpenAI's Advanced Voice Mode or Meta's Voice Interactions. The feature enables users to converse with the AI as if it were another person, eliminating the need for text-based prompts. Gemini Live made its debut in May during the company's I/O 2024 event and was initially released for Gemini Advanced subscribers in August before being made available to all users (on Android, at least) in September.
