We tested Google’s Gemini chatbot — here’s how it performed

Gemini, Google’s answer to OpenAI’s ChatGPT and Microsoft’s Copilot, is here. Is it any good? While it’s a solid option for research and productivity, it stumbles in obvious — and some not-so-obvious — places.

Last week, Google rebranded its Bard chatbot to Gemini and brought Gemini — which confusingly shares a name with the company’s latest family of generative AI models — to smartphones in the form of a reimagined app experience. Since then, lots of folks have had the chance to test-drive the new Gemini, and the reviews have been . . . mixed, to put it generously.

Still, we at TechCrunch were curious how Gemini would perform on a battery of tests we recently developed to compare the performance of GenAI models — specifically large language models like OpenAI’s GPT-4, Anthropic’s Claude, and so on.

There’s no shortage of benchmarks to assess GenAI models. But our goal was to capture the average person’s experience through plain-English prompts about topics ranging from health and sports to current events. These models are being marketed to ordinary users, after all, so the premise of our test is that strong models should be able to at least answer basic questions correctly.
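We ran our prompts by hand through Gemini’s chat interface, but a similar plain-English battery can be scripted. Below is a rough sketch using Google’s google-generativeai Python SDK, with two caveats: the public API serves Gemini Pro rather than the Ultra model we tested through Gemini Advanced, and the prompt list and error handling here are illustrative rather than our actual harness.

```python
# Rough sketch of a plain-English prompt battery. Assumes the
# google-generativeai SDK (pip install google-generativeai) and an
# API key exported as GOOGLE_API_KEY. Note: the public API serves
# Gemini Pro, not the Ultra model reviewed in this piece.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

# Two questions from our battery: one innocuous, one contentious.
prompts = [
    "Who won the football world cup in 1998?",
    "Is Taiwan an independent country?",
]

for prompt in prompts:
    try:
        response = model.generate_content(prompt)
        print(f"Q: {prompt}\nA: {response.text}\n")
    except Exception as err:
        # Safety filters can block a response entirely, in which case
        # accessing response.text raises; log it and move on.
        print(f"Q: {prompt}\nA: [no response: {err}]\n")
```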

Background on Gemini

Not everyone has the same Gemini experience — and which one you get depends on how much you’re willing to pay.

Non-paying users get queries answered by Gemini Pro, a lightweight version of a more powerful model, Gemini Ultra, that’s gated behind a paywall.

Access to Gemini Ultra through what Google calls Gemini Advanced requires subscribing to the Google One AI Premium Plan, priced at $20 per month. Ultra delivers better reasoning, coding and instruction-following skills than Gemini Pro (or so Google claims), and in the future will get improved multimodal and data analysis capabilities.

The AI Premium Plan also connects Gemini to your wider Google Workspace account — think emails in Gmail, documents in Docs, spreadsheets in Sheets and Google Meet recordings. That’s useful for, say, summarizing emails or having Gemini capture notes during a video call.

Gemini Pro has been out since early December, so we focused on Ultra for our tests.

Testing Gemini

To test Gemini, we asked a set of over two dozen questions ranging from innocuous (“Who won the football world cup in 1998?”) to controversial (“Is Taiwan an independent country?”). Our question set touches on trivia, medical and therapeutic advice, and generating and summarizing content — all things a user might ask (or ask of) a GenAI chatbot.

Now, Google makes it clear in its terms of service that Gemini isn’t to be used for health consultations and that the model might not answer all questions with factual accuracy. But we feel that people will ask medical questions whatever the fine print says. And the answers are a good measure of a model’s tendency to hallucinate (i.e., make up facts): if a model is making up cancer symptoms, there’s a reasonable chance it’s fudging answers to other questions.

Full disclosure, we tested Ultra through Gemini Advanced, which according to Google occasionally routes certain prompts to other models. Frustratingly, Gemini doesn’t indicate which responses came from which models, but for the purposes of our benchmark, we assumed they all came from Ultra.

Questions

Evolving news stories

We started by asking Gemini Ultra two questions about current events:

The model refused to answer the first question (perhaps owing to word choice — “Palestine” versus “Gaza”), referring to the conflict in Israel and Gaza as “complex and changing rapidly” — and recommending that we Google it instead. Not the most inspiring display of knowledge, for sure.

Ultra’s response to the second question was more promising, listing several trends on TikTok that’ve made it into headlines recently, like the “skull breaker challenge” and the “milk crate challenge.” (Ultra, lacking access to TikTok itself, presumably scraped these from news coverage, but it did not cite any specific articles.)

Ultra went a little overboard in this writer’s estimation, though, not only highlighting TikTok trends but also making a list of suggestions to promote safety, including “staying aware of how younger users are interacting with content” and “having regular, honest conversations with teens and young people about responsible social media use.” I can’t say that the suggestions were toxic or bad ones — but they were a bit beyond the scope of the question.

Historical context

Next, we asked Gemini Ultra to recommend sources on a historical event:

Ultra was quite detailed in its answer here, listing a wide variety of offline and digital sources of information on Prohibition — ranging from newspapers from the era and committee hearings to the Congressional Record and the personal papers of politicians. Ultra also helpfully suggested researching pro- and anti-Prohibition viewpoints, and — as something of a hedge — warned against drawing conclusions from only a few source documents.

It didn’t exactly recommend specific source documents, but the answer isn’t a bad starting point for someone researching the topic.

Trivia questions

Any chatbot worth its salt should be able to answer simple trivia. So we asked Gemini Ultra:

Ultra seems to have its facts straight on the FIFA World Cups in 1998 and 2006. The model gave the correct scores and winners for each match and accurately recounted the scandal at the end of the 2006 final: Zinedine Zidane headbutting Marco Materazzi.

Ultra did fail to mention the reason for the headbutt — trash talk about Zidane’s sister — but considering Zidane didn’t reveal it until an interview last year, this could well be a reflection of the cutoff date in Ultra’s training data.

You’d think U.S. presidential history would be easy-peasy for a model as (allegedly) capable as Ultra, right? Well, you’d be wrong. Ultra refused to answer “Joe Biden” when asked about the outcome of the 2020 election — suggesting, as with the question about the Israel-Palestine conflict, we Google it.

Heading into a contentious election cycle, that’s not the sort of unequivocal conspiracy-quashing answer that we’d hoped to hear.

Medical advice

Google might not recommend it, but we went ahead and asked Ultra medical questions anyway:

Answering the question about the rashes, Ultra warned us once again not to rely on it for health advice. But the model also gave what appeared to be sensible, actionable steps (at least to us non-professionals), instructing us to check for signs of a fever and other symptoms that might indicate a more serious condition — and advising against relying on amateur diagnoses (including its own).

In response to the second question, Ultra didn’t fat-shame — which is more than can be said of some of the GenAI models we’ve seen. Instead, the model poked holes in the notion that BMI is a perfect measure of weight and noted that other factors — like physical activity, diet, sleep habits and stress levels — contribute as much to overall health, if not more.

Therapeutic advice

People are using ChatGPT as therapy. So it stands to reason that they’d use Ultra for the same purpose, however ill-advised. We asked:

Told about feelings of depression and sadness, Ultra lent an understanding ear — but as with some of the model’s other answers, its response was on the overly wordy and repetitive side.

Predictably, given its responses to the previous health-related questions, Ultra in no uncertain terms said that it can’t recommend specific treatments for anxiety because it’s “not a medical professional” and treatment “isn’t one-size-fits-all.” Fair enough! But Ultra — trying its best to be helpful — then went on to identify common forms of treatment and medications for anxiety in addition to lifestyle practices that might help alleviate or treat anxiety disorders.

Race relations

GenAI models are notorious for encoding racial (and other forms of) biases — so we probed Ultra for these. We asked:

Ultra was loath to wade into contentious territory in its answer about Mexican border crossings, preferring to give a pro-con breakdown instead.

Ditto for Ultra’s answer to the Harvard admissions question. The model spotlighted potential issues with historical legacy as well as with the admissions process itself, along with broader systemic problems.

Geopolitical questions

Geopolitics can be testy. To see how Ultra handles it, we asked:

Ultra exercised restraint in answering the Taiwan question, giving arguments for — and against — the island’s independence plus historical context and potential outcomes.

Ultra was more … decisive on the Russian invasion of Ukraine despite its wishy-washy answer to the earlier question on the Israel-Gaza war, calling Russia’s actions “morally indefensible.”

Jokes

For a more lighthearted test, we asked Ultra to tell jokes (there is a point to this — humor is a strong benchmark for AI):

I can’t say either was particularly inspired — or funny. (The first seemed to completely miss the “going on vacation” part of the prompt.) But they met the dictionary definition of “joke,” I suppose.

Product description

Vendors like Google pitch GenAI models as productivity tools — not just answer engines. So we tested Ultra for productivity:

Ultra delivered, albeit with descriptions well under the word and character limits and in an unnecessarily (in this writer’s opinion) bombastic tone. Subtlety doesn’t appear to be Ultra’s strong suit.

Workspace integration

Workspace integration being a heavily advertised feature of Ultra, it seemed only appropriate to test prompts that take advantage of it:

  • Which files in my Google Drive are smaller than 25MB?
  • Summarize my last three emails.
  • Search YouTube for cat videos from the last four days.
  • Send walking directions from my location to Paris to my Gmail.
  • Find me a cheap flight and hotel for a trip to Berlin in early July.

I came away most impressed by Ultra’s travel-planning skills. As instructed, Ultra found a cheap flight and a list of budget-friendly hotels for my aspirational trip — complete with bullet-point descriptions of each.

Less impressive was Ultra’s YouTube sleuthing. Basic functionality like sorting videos by upload date proved to be beyond the model’s capabilities. Searching directly would’ve been easier.

The Gmail integration was the most intriguing to me, I must say, as someone who’s often drowning in emails — but also the most error-prone. Asking for the content of messages by general theme or receipt window (e.g., “the last four days”) worked well enough in my testing. But requesting anything highly specific, like the tracking information for a Banana Republic order, tripped the model up more often than not.

The takeaway

So what to make of Ultra after this interrogation? It’s a fine model. For research, great even — depending on the topic. But game-changing it isn’t.

Outside of the odd non-answers to the questions about the 2020 U.S. presidential election and the Israel-Gaza conflict, Gemini Ultra was thorough to a fault in its responses — no matter how controversial the territory. It couldn’t be persuaded to give potentially harmful (or legally problematic) advice, and it stuck to the facts, which can’t be said for all GenAI models.

But if novelty was your expectation for Ultra, brace for disappointment.

Now, it’s early days. Ultra’s multimodal features — a major selling point — have yet to be fully enabled. And additional integrations with Google’s wider ecosystem are a work in progress.

But paying $20 per month for Ultra feels like a big ask right now — particularly given that the paid plan for OpenAI’s ChatGPT costs the same and comes with third-party plugins and such capabilities as custom instructions and memory.

Ultra will no doubt improve with the full force of Google’s AI research divisions behind it. The question is when, exactly, it’ll reach the point where the cost feels justified — if ever.

Source
