Garry Tan, president and CEO of Y Combinator, told a crowd at The Economic Club of Washington, D.C. this week that “regulation is likely necessary” for artificial intelligence.
Tan spoke with Teresa Carlson, a General Catalyst board member, as part of a one-on-one interview where he discussed everything from how to get into Y Combinator to AI, noting that there is “no better time to be working in technology than right now.”
Tan said he was “overall supportive” of the National Institute of Standards and Technology’s (NIST) attempt to construct a GenAI risk mitigation framework, and said that “large parts of the EO by the Biden administration are probably on the right track.”
NIST’s framework proposes measures such as requiring GenAI to comply with existing laws governing data privacy and copyright; disclosing GenAI use to end users; and establishing regulations that ban GenAI from creating child sexual abuse materials. Biden’s executive order covers a wide range of dictums, from requiring AI companies to share safety data with the government to ensuring that small developers have fair access.
But Tan, like many Valley VCs, was wary of other regulatory efforts. He called AI-related bills moving through the California and San Francisco legislatures “very concerning.”
One such California bill that’s causing a stir is the one put forth by state Sen. Scott Wiener that would allow the attorney general to sue AI companies if their wares are harmful, Politico reports.
“The big discussion broadly in terms of policy right now is what does a good version of this really look like?” Tan said. “We can look to people like Ian Hogarth, in the U.K., to be thoughtful. They’re also mindful of this idea of concentration of power. At the same time, they’re trying to figure out how we support innovation while also mitigating the worst possible harms.”
Hogarth is a former YC entrepreneur and AI expert who’s been tapped by the U.K. to create an AI model taskforce.
“The thing that scares me is that if we try to address a sci-fi concern that is not present at hand,” Tan said.
As for how YC manages responsibility, Tan said that if the organization doesn’t agree with a startup’s mission or what that product would do for society, “YC just doesn’t fund it.” He noted that there have been several times when he read about a company in the media and realized it had applied to YC.
“We go back and look at the interview notes, and it’s like, we don’t think this is good for society. And thankfully, we didn’t fund it,” he said.
Artificial intelligence leaders keep messing up
Tan’s guideline still leaves room for Y Combinator to crank out a lot of AI startups as cohort grads. As my colleague Kyle Wiggers reported, the Winter 2024 cohort had 86 AI startups, nearly double the number from the Winter 2023 batch and close to triple the number from Winter 2021, according to YC’s official startup directory.
And recent news events are making people wonder if they can trust those selling AI products to be the ones to define responsible AI. Last week, TechCrunch reported that OpenAI is getting rid of its AI responsibility team.
Then there was the debacle over the company using a voice that sounded like actress Scarlett Johansson’s when demoing its new GPT-4o model. It turns out OpenAI had asked Johansson about using her voice, and she turned the company down. OpenAI has since removed the Sky voice, though it denied the voice was based on Johansson’s. That, along with issues around OpenAI’s ability to claw back vested employee equity, led folks to openly question Sam Altman’s scruples.
Meanwhile, Meta made AI news of its own when it announced the creation of an AI advisory council that only had white men on it, effectively leaving out women and people of color, many of whom played a key role in the creation and innovation of that industry.
Tan didn’t reference any of these instances. Like most Silicon Valley VCs, he sees opportunities for new, huge, lucrative businesses.
“We like to think about startups as an idea maze,” Tan said. “When a new technology comes out, like large language models, the whole idea maze gets shaken up. ChatGPT itself was probably one of the fastest-to-success consumer products to be released in recent memory. And that’s good news for founders.”
Artificial intelligence of the future
Tan also said that San Francisco is at the center of the AI movement. That’s where Anthropic, started by YC alums, got its start, and where OpenAI, which was a YC spinout, began.
Tan also joked that he wasn’t going to follow in Altman’s footsteps, noting that Altman “had my job a number of years ago, so no plans on starting an AI lab.”
One of the other YC success stories is legal tech startup Casetext, which sold to Thomson Reuters for $600 million in 2023. Tan said he believes Casetext was one of the first companies in the world to get access to generative AI, and that it then became one of the first exits in generative AI.
When looking to the future of AI, Tan said that “obviously, we have to be smart about this technology” as it relates to risks around bioterror and cyberattacks. At the same time, he said there should be “a much more measured approach.”
He also predicted that there isn’t likely to be a “winner take all” model, but rather an “incredible garden of consumer choice of freedom and of founders to be able to create something that touches a billion people.”
At least, that’s what he wants to see happen. That would be in his and YC’s best interest — lots of successful startups returning lots of cash to investors. So what scares Tan most isn’t run-amok evil AIs, but a scarcity of AIs to choose from.
“We might actually find ourselves in this other really monopolistic situation where there’s great concentration in just a few models. Then you’re talking about rent extraction, and you have a world that I don’t want to live in.”