“AI” was everywhere this year at CES; you couldn’t swing a badge without hitting some company claiming generative AI was going to revolutionize your sleep, teeth or business. But a few applications of machine learning stood out as genuinely helpful or surprising — here are some examples of AI that might actually do some good.
The whole idea that AI might not be a total red flag occurred to me when I chatted with Whispp at a press event. This small team is working on voicing the voiceless, meaning people who have trouble speaking normally due to a condition or illness.
The name refers to the company’s focus on people who can form words but whose vocal cords have been taken out of the picture by, for example, throat cancer or an injury. These folks can whisper just fine, but not speak — often having to rely on a decidedly last-century electronic voice box. So Whispp’s first big feature is to synthesize their voices and turn those whispers into full speech.
The synthetic voice is created much the way it is on other platforms: with a few old recordings of someone, you can train a voice model to sound sufficiently like them. Whispp’s main challenge, it seems, was building a speech recognition model that works well with whispers and other affected speech. Interestingly, this also works for people who stutter, since for whatever reason whispering often reduces stuttering dramatically. You can read more about Whispp in our post about it here.
Around the corner from Whispp I ran into the extremely cheerful women of Louise, a French startup focused on fertility tracking and advice for both men and women looking to improve their chances of conceiving.
Louise is largely a B2B affair, working with hospitals and fertility clinics to analyze patient data. It uses machine learning as a signal detector, sorting through thousands of data points from tests and surveys and looking for biomarkers that could give insight into the complex process of improving fertility. AI is good at finding subtle correlations or patterns in large collections of data, and fertility is definitely an area that could benefit from more of that.
The company was actually at CES to promote its new app, Olly, which is its first B2C offering: a start-to-finish “fertility journey” app for both women and men (whose part in the process is often overlooked), from the decision to go for it all the way through to success. It tracks appointments, offers documentation on medications and strategies, and so on. And the icon is a cute little chick. Olly is planned for global release on February 14.
The rabbit r1 got a fair amount of hype at CES, as a candy-colored pocket AI assistant should. But while it’s anyone’s guess whether the company will survive long enough to reproduce (the r2, one presumes), this little doodad’s capabilities might actually be more helpful to people with vision impairments than sighted folks who just don’t want to pull out their phones.
Alexa, Siri, and other voice assistants have been transformative for countless people for whom navigating a smartphone or desktop computer is a pain because of its fundamentally graphical interface. Being able to just speak and get basic info like the weather, the news and so on is a huge enabler.
The problem is that these so-called assistants can’t do much outside a set of strictly defined tasks and APIs. So you might be able to find out when a flight is leaving, but you can’t rebook it. You can book a car but not customize the ride for accessibility, or look up vacation destinations but not rent a cabin on the beach there. Stuff like that. The r1 is built not just to handle basic assistant queries through a ChatGPT-like voice interface, but also to operate any normal phone or web app.
If the device and service match the company’s claims, the r1 could be a useful helper indeed for anyone who has trouble interacting with a traditional computer. If you can talk, you can get things done — and if you use Whispp, you don’t even need to talk!
Elder care is another area where some of the common criticisms of generative AI fall short. I don’t think anyone should have to rely on a computer for companionship, but that doesn’t mean that the computers we already have to interact with couldn’t be a little nicer about it. Even though I’ve made it clear (at great length) that I don’t think they should pretend to be people, they can still be friendly and helpful.
ElliQ makes devices (“robots”) for places like assistive care facilities, where having a gadget in the room that can remind patients of things or ask them how they are is a win-win addition. The latest device uses a large language model to produce more natural conversation, and like ChatGPT and others, it can talk about just about anything. Many older folks are starved for conversation or struggle to understand new developments in tech or the news, so this could be a genuinely good way for them to stay engaged. Plus it adds an element of care and safety: it can hear the person if they call for help, relay requests to caregivers, or just check in.
Those aren’t all the good applications of AI at CES, but they’re enough of a sample to show that while AI may be the latest overhyped thing, that doesn’t mean there are no good ways to apply machine learning in everyday life.