Betaworks is no stranger to investing in artificial intelligence and machine learning, but the latest cohort of its Camp “thematic accelerator” indicates confidence in the field beyond the present fascination with chatbots. Founder and CEO John Borthwick described the firm as “rabidly interested” in AI as augmentation rather than just a product in itself.
They’re not the only ones, either: “This particular Camp had twice as many applicants as last year,” Borthwick told me. “The fun part of these is that you put out an open call, and under that banner, that thesis, you get more diversity than you expect. We believe that over the next two to three years, we’re going to see an incredible number of companies building and using AI models to augment human workflows and behaviors.”
Perhaps ChatGPT’s most universally useful quality is that (assuming you can tell when it’s putting you on) it can quickly and satisfactorily answer a question on nearly any topic, or give a reasonable answer to something like a coding problem. Few talk with AIs just for the pleasure of it (though there are those who do); if one can make your work easier, why not let it?
Borthwick noted that Betaworks has been investing in AI and ML since 2016, when it was far more rudimentary.
“We started by going systematically through the intersection of ML and a particular modality: machine learning and audio, synthetic media, all those different objects of data or media,” he said. “Over the last year or two we’ve been thinking about the role of AI as it relates to human workflows, and we firmly believe, and want to invest in and move the market towards augmentation.”
This is like thinking of AI as “a bicycle for the mind” rather than a purely generative or self-contained product. That’s visible in the selected companies, many of which build or use AI to speed up or improve existing processes rather than to do something completely new. Each will receive $500,000 in funding, in addition to anything they’ve already raised.
“We’re looking across the AI stack; certain things in this Camp are almost apps, then there are things that are much more in the middleware category,” Borthwick continued. “The program is really about finding product-market fit and developing a product roadmap, it’s less about performative fundraising exercises. About half of the companies do their raise before or during the program.”
They brought in three co-investors this year: Greycroft, Differential and Mozilla, all of which will co-invest and make their resources and networks available to the startups. Betaworks still does all the actual accelerator stuff.
Here are the 12 companies in this year’s cohort, condensed from the descriptions they sent over; I asked each company the most obvious question I could think of (in italics) after hearing what they’re trying to do. In the interest of brevity I have also summarized their sometimes extensive answers. There’s more detail on each, including founders and their backgrounds, over at Betaworks.
- Armilla Assurance: A service for assessing the quality and reliability of AI systems. The company then offers insurance against losses due to AI performing below its assessed level.
*What metrics are used to assess AI risk and fitness, and if they’re industry standard, why would the company not just assess them internally?*
Armilla uses both industry standards and proprietary testing methods to provide an objective measure of quality and a performance warranty, though they are no substitute for including these measures in the development process.
- Bionic Health: Preventative healthcare using an AI-driven model trained on data (“real-world practices, protocols and workflows of doctors, practitioners and patients”) from their own clinic in North Carolina. Has also built a smarter electronic health record system that uses embeddings for improved search and insights. $3.5 million already raised in a seed round.
*Why would I want to use an AI model based on decisions by doctors and health specialists, rather than asking a doctor or other accredited health specialist directly?*
The system is assistive to doctors, not a direct-to-consumer product, and the improved EHR should reduce clerical work in this setting, allowing doctors and patients to focus on making well-informed care decisions.
- Deftly: An ML platform that aggregates and synthesizes customer feedback and other signals into more easily actionable product changes and features.
*How would an early-stage startup come by “troves of dispersed product feedback” to aggregate and synthesize?*
Not directly answered, but what data there is in any feedback forms, meeting notes and other channels is ingested and shared in a dashboard for easier interpretation by product teams.
- Globe: Creates large language models for teams that need to “gather, exchange, and understand complex information,” like in large-scale studies or product development. The LLM ingests all relevant documents and can be consulted at any level of detail, from overview to technical details or exact quotes from relevant documents.
*Given LLMs’ limitations, why would I trust one to provide multiple levels of detail of complex data or projects?*
The goal is surfacing useful information, especially information one may not have been aware of to begin with, rather than distilling new conclusions from the data. It seems to act more as a semantically enhanced search.
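For the curious, the basic shape of semantically enhanced search is easy to sketch. The snippet below is a generic illustration, not Globe’s system: documents and a query are mapped to vectors, then ranked by cosine similarity. The embed() function here is a deliberately crude bag-of-words stand-in (my assumption, purely so the example runs); a real product would use a learned embedding model that matches on meaning rather than on shared words.

```python
# Generic sketch of embedding-based search -- not Globe's implementation.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a neural embedding model (assumption): bag of words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Embed the query once, rank every document by similarity, return top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Phase 2 trial protocol and enrollment criteria",
    "Quarterly budget review notes",
    "Adverse event reporting workflow for the study",
]
print(search("how do we report adverse events", docs, k=1))
# -> ['Adverse event reporting workflow for the study']
```

A learned embedding would also score the “report”/“reporting” and “events”/“event” pairs as near-matches, which is exactly what lifts this pattern above keyword search.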
- GroupLang: Working on software that allows LLMs to interact with groups of people instead of individuals, a task that involves redefining user preferences, privacy and other interesting questions.
*What’s an example of a group having to interact collectively with an LLM?*
It’s more that collective use could be beneficial, they say, such as a shared complex task where a central system is tracking information important to all involved.
- Open Souls: Aims to create conversational AI models that “autonomously think and behave like real people,” complete with feelings and personalities and internal complexity.
*This is quite a claim. But doesn’t it more or less amount to a fine-tuned model with an artificial persona loaded via initial instructions?*
Fine-tuning personas primarily produces a change in speech patterns but not how the model operates internally. Their approach is to augment LLMs with extra non-visible processes to simulate “rich inner monologues” that inform behavior.
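That pattern is simple enough to sketch in a few lines. The following is my own rough reconstruction of the general idea, not Open Souls’ architecture: one model call produces a private monologue, and a second call conditions the visible reply on it. The llm() function is a hypothetical stand-in for any text-completion API, stubbed here so the sketch runs.

```python
# Rough sketch of the hidden-inner-monologue pattern -- not Open Souls' code.
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real text-completion API call (assumption).
    return f"<completion for: {prompt[:40]}...>"

PERSONA = "Sam, a wry, slightly impatient barista"

def respond(user_msg: str) -> str:
    # Step 1: generate private reasoning and feelings the user never sees.
    monologue = llm(
        f"You are {PERSONA}. Privately, in first person, note how you feel "
        f"about this message and what you want from the exchange: {user_msg}"
    )
    # Step 2: condition the visible, in-character reply on that hidden state.
    return llm(
        f"You are {PERSONA}. Your private thoughts: {monologue}\n"
        f"Reply to the user in character: {user_msg}"
    )

print(respond("Can I get a triple-shot oat latte, like, now?"))
```

The point of the extra step is that the reply is shaped by state the user never sees, which is what distinguishes it from simply prepending a persona to a single prompt.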
- Pangaea: Using AI and some custom back-end tech to build games faster and take on time-consuming tasks, with first-party development of a rogue-lite battle royale (Project Rise) with procedurally-generated maps.
*Competitive multiplayer games require careful gameplay and map balance. How can that be achieved with this level of procedural generation?*
Some games depend on perfect balance more than others, and in this case it’s more important to make sure the game is “fair” and that a loss doesn’t result directly from bad proc gen. There will be hand-designed rooms, challenges, levels and rules to make sure the experience is well tuned. Plus, if you die you are reborn as a monster and keep some of your progress.
- Plastic Labs: Aims to improve LLM viability by “securely managing the flow of intimate psychological data between users and models.” So you get customization across different agents without each one having to learn and stash your various preferences and tendencies from scratch every time.
*What does this framework actually consist of, and how can it remain effective if the AI apps in question all use different foundation models or tuning processes?*
A “secure middleware relay.” Certain approaches work across LLMs because all the foundation models seem to share an ability to “construct and comprehend predictions about internal mental states.” What exactly this ability amounts to is not clear (though the team has their theories) but they claim it enables their portable personalization.
- Shader: A social camera app that lets users create AR filters through a simple no-code interface, using voice plus taps and swipes.
*What does the process of creation look like and how can the filter be shared to proprietary platforms like Instagram or Snapchat?*
You describe what you want with a traditional prompt like “cyberpunk elf face” and then it can be mapped onto your face live. The filter itself stays on Shader; you’ll have to export videos to other services. There are several examples on their Instagram and TikTok.
- Unakin: Also aiming to reduce development time with AI code assistants. First is a UI programming agent that builds functioning game interfaces with text or visual prompts, with more to come.
*Does the proposed agent exist, and what specifically is it capable of right now compared to other code-generating LLMs?*
They’re using it internally for improved code search, code generation (not yet benchmarked, but expected to be competitive in UI creation in particular) and an image-to-code process whereby Figma and Adobe files can be turned directly into in-game UI.
- Vera: Helps workplaces adopt AI by filtering what goes in and out of the models, according to rules set up by the company. It’s basically the kind of oversight IT gets for other business software, but for generative AI.
*So this records all inputs and outputs from AIs used by an enterprise and allows closer controls over what is asked or answered?*
Basically yes — it addresses security and privacy concerns by making the interactions observable and intercepting things like sensitive info before they get sent to the LLM. Responses can also be checked for consistency and errors.
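As a rough illustration of what such a gateway does (a generic sketch of the pattern, not Vera’s product; the call_llm() stub and the redaction patterns are my own assumptions): prompts pass through a policy layer that scrubs sensitive strings and logs both sides of the exchange before anything reaches the model.

```python
# Generic sketch of an LLM policy gateway -- not Vera's implementation.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace anything matching a sensitive pattern before it leaves the company.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the actual model API call (assumption).
    return "<model response>"

def gateway(prompt: str, audit_log: list) -> str:
    safe_prompt = redact(prompt)               # enforce policy on the way in
    response = call_llm(safe_prompt)           # only the scrubbed prompt goes out
    audit_log.append((safe_prompt, response))  # make every interaction observable
    return response

log = []
print(gateway("Email jane@example.com, SSN 123-45-6789", log))
```

Response-side checks for consistency and errors would slot into the same gateway function on the way back out.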
- Waverly: A “social network of ideas” that uses AI to “remix” them, with conversational AI as the way you control your feed.
*How exactly does the AI model “remix” ideas, and how does a conversational AI provide a better way to control one’s feed?*
The “WordDJ” tool has no keyboard but lets you move blocks of text around like fridge magnets or combine them. The conversational agent allows users to describe more specifically what they’d like to see more or less of rather than muting accounts or the like.