
Gemini’s data-analyzing abilities aren’t as good as Google claims



One of the selling points of Google’s flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can supposedly process and analyze. In press briefings and demos, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their “long context,” like summarizing multiple hundred-page documents or searching across scenes in film footage.

But new research suggests that the models aren’t, in fact, very good at those things.

Two separate studies investigated how well Google’s Gemini models and others make sense out of an enormous amount of data — think “War and Peace”-length works. Both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40%-50% of the time.

“While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoc at UMass Amherst and a co-author on one of the studies, told TechCrunch.

Gemini’s context window is lacking

A model’s context, or context window, refers to input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, show or audio clip. And as context windows grow, so does the size of the documents being fit into them.

The newest versions of Gemini can take in upward of 2 million tokens as context. (“Tokens” are subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”) That’s equivalent to roughly 1.4 million words, two hours of video or 22 hours of audio — the largest context of any commercially available model.
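The tokens-to-words figure above is a back-of-the-envelope conversion. As a rough sketch (the ~0.7 words-per-token ratio is a common rule of thumb for English text, not an official Google figure):

```python
# Rough conversion between tokens and words for English text.
# Assumption: ~0.7 words per token, a widely used rule of thumb,
# not a figure published by Google.
WORDS_PER_TOKEN = 0.7

def tokens_to_words(tokens: int) -> int:
    """Estimate the number of English words that fit in a token budget."""
    return round(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(2_000_000))  # roughly 1.4 million words
```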

In a briefing earlier this year, Google showed several pre-recorded demos meant to illustrate the potential of Gemini’s long-context capabilities. One had Gemini 1.5 Pro search the transcript of the Apollo 11 moon landing telecast — around 402 pages — for quotes containing jokes, and then find a scene in the telecast that looked similar to a pencil sketch.

Oriol Vinyals, VP of research at Google DeepMind, who led the briefing, described the model as “magical.”

“[1.5 Pro] performs these sorts of reasoning tasks across every single page, every single word,” he said.

That might have been an exaggeration.

In one of the aforementioned studies benchmarking these capabilities, Karpinska, along with researchers from the Allen Institute for AI and Princeton, asked the models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so that the models couldn’t “cheat” by relying on foreknowledge, and they peppered the statements with references to specific details and plot points that’d be impossible to comprehend without reading the books in their entirety.

Given a statement like “By using her skills as an Apoth, Nusis is able to reverse engineer the type of portal opened by the reagents key found in Rona’s wooden chest,” Gemini 1.5 Pro and 1.5 Flash — having ingested the relevant book — had to say whether the statement was true or false and explain their reasoning.


On one book around 260,000 words (~520 pages) long, 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. Averaging all the benchmark results, neither model managed to achieve much higher than random chance in question-answering accuracy.

“We’ve noticed that the models have more difficulty verifying claims that require considering larger portions of the book, or even the entire book, compared to claims that can be solved by retrieving sentence-level evidence,” Karpinska said. “Qualitatively, we also observed that the models struggle with verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text.”

The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason over” videos — that is, search through and answer questions about the content in them.

The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in the images (e.g., “What cartoon character is on this cake?”). To evaluate the models, they picked one of the images at random and inserted “distractor” images before and after it to create slideshow-like footage.

Flash didn’t perform all that well. In a test that had the model transcribe six handwritten digits from a “slideshow” of 25 images, Flash got around 50% of the transcriptions right. The accuracy dropped to around 30% with eight digits.

“On real question-answering tasks over images, it appears to be particularly hard for all the models we tested,” Michael Saxon, a PhD student at UC Santa Barbara and one of the study’s co-authors, told TechCrunch. “That small amount of reasoning — recognizing that a number is in a frame and reading it — might be what is breaking the model.”

Google is overpromising with Gemini

Neither of the studies has been peer-reviewed, nor do they probe the releases of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts. (Both tested the 1-million-token context releases.) And Flash isn’t meant to match Pro in performance; Google advertises it as a low-cost alternative.

Nevertheless, both studies lend weight to the argument that Google has been overpromising — and under-delivering — with Gemini from the beginning. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google is the only model provider that has given the context window top billing in its advertisements.

“There’s nothing wrong with the simple claim, ‘Our model can take X number of tokens’ based on the objective technical details,” Saxon said. “But the question is, what useful thing can you do with it?”

Generative AI, broadly speaking, is coming under increased scrutiny as businesses (and investors) grow frustrated with the technology’s limitations.

In a pair of recent surveys from Boston Consulting Group, about half of the respondents — all C-suite executives — said that they don’t expect generative AI to bring about substantial productivity gains and that they’re worried about the potential for mistakes and data compromises arising from generative AI-powered tools. PitchBook recently reported that, for two consecutive quarters, generative AI dealmaking at the earliest stages has declined, plummeting 76% from its Q3 2023 peak.

Faced with meeting-summarizing chatbots that conjure up fictional details about people and AI search platforms that basically amount to plagiarism generators, customers are on the hunt for promising differentiators. Google — which has raced, at times clumsily, to catch up to its generative AI rivals — was desperate to make Gemini’s context one of those differentiators.

But the bet was premature, it seems.

“We haven’t settled on a way to really show that ‘reasoning’ or ‘understanding’ over long documents is taking place, and basically every group releasing these models is cobbling together their own ad hoc evals to make these claims,” Karpinska said. “Without the knowledge of how long context processing is implemented — and companies do not share these details — it is hard to say how realistic these claims are.”

Google didn’t respond to a request for comment.

Both Saxon and Karpinska believe the antidotes to hyped-up claims around generative AI are better benchmarks and, in the same vein, greater emphasis on third-party critique. Saxon notes that one of the more common tests for long context (liberally cited by Google in its marketing materials), “needle in the haystack,” only measures a model’s ability to retrieve particular info, like names and numbers, from datasets — not its ability to answer complex questions about that info.
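The “needle in the haystack” setup Saxon describes is simple enough to sketch: bury one known fact in a mass of filler text, ask the model to retrieve it, and score exact-match recall. A minimal version (where `query_model` is a hypothetical stand-in for a real API call, not any vendor’s actual interface):

```python
# Minimal sketch of a "needle in a haystack" long-context eval.
# Hide one fact in filler text and score whether it can be retrieved.
import random

def build_haystack(needle: str, filler: str, n_paragraphs: int, seed: int = 0) -> str:
    """Insert the needle at a random position among filler paragraphs."""
    random.seed(seed)
    paragraphs = [filler] * n_paragraphs
    paragraphs.insert(random.randrange(n_paragraphs + 1), needle)
    return "\n\n".join(paragraphs)

def score(answer: str, expected: str) -> bool:
    # Retrieval-style scoring: did the exact fact come back?
    return expected.lower() in answer.lower()

haystack = build_haystack(
    needle="The magic number is 417.",
    filler="The quick brown fox jumps over the lazy dog.",
    n_paragraphs=1000,
)
# answer = query_model(haystack + "\n\nWhat is the magic number?")
# print(score(answer, "417"))
```

Note that a pass here only shows the model can find a planted fact, which is exactly Saxon’s point: it says nothing about reasoning over the surrounding text.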

“All scientists and most engineers using these models are essentially in agreement that our existing benchmark culture is broken,” Saxon said, “so it’s important that the public understands to take these giant reports containing numbers like ‘general intelligence across benchmarks’ with a massive grain of salt.”

Updated 7/3: A previous version of this article stated that Gemini 1.5 Pro and 1.5 Flash’s accuracy was below random chance on the task of reasoning over long text. In fact, their accuracy was above random chance. We’ve made the correction. Google PR also sent links to studies that suggest Gemini’s long-context performance is stronger than implied here: Extended Multi-Doc QA, Video MME, longer queries subset on LMSYS, Ruler.
