Awash in private money and led by charismatic founders, tech startups are putting corporate governance to the test. Jennifer Fan suggests ways to bring law to the frontier.
By Michael Blanding
For decades, power within companies emerged following a familiar script. Startups began with founders and moved through successive rounds of venture capital — seed, Series A, B, and C — with each investor taking a place at the table. “There was a well-worn way boards used to be constituted,” says Jennifer Fan, professor of law and Therese Maynard Chair in Business Law at LMU Loyola Law School (LLS). “With each round, the lead investor would ask for a seat on the board.” Eventually, as companies matured, they’d take on outside experts from the industry or related sectors who could provide counsel on the way toward a public offering.
In recent years, Fan argues, that model has quietly unraveled. Today’s startups are fueled not only by venture capital, but also by a dizzying array of nontraditional funders, including private equity, hedge funds, mutual funds, and even the investment arms of tech giants, who aren’t insisting on board seats. “What they bring is a lot of money — and sometimes they don’t do as much due diligence as they should,” Fan says. With companies staying private longer, the result is a “wild west” of corporate governance, marked by a cult of personality around founders, looser deal terms, and sometimes novel governance structures that raise red flags for corporate oversight.
Nowhere is that more dangerous than with artificial intelligence, where the breakneck speed of innovation has turned startups into global powerhouses within the space of only a few years. Investors’ fear of missing out on the next new technology and regulators’ fear of stifling innovation have combined to place unprecedented power in founders’ hands. “That often means under-regulation, where we grant new technologies wide latitude in the hope that they will self-correct somehow,” notes Fan. “That’s not always a good tactic.”
Fan has long examined corporate governance around new and emerging technologies, especially in private companies, which are often opaque to the outside world, with little information made public until there’s a problem. Oftentimes her work comes down to a deceptively simple question: Who holds power inside the companies shaping our future, and what keeps that power in check? “What are the legal guardrails we’re putting in place around companies as they are developing?” she asks. “That often sets the foundation for a company as it matures.”
A “Frontier Model”
A self-described “accidental academic,” Fan graduated from the University of Pennsylvania’s Carey Law School in 1998 before embarking on a career as a corporate securities attorney at global law firms, representing high-tech and biotech companies and advising startups and public companies with offices around the world. That experience gave her a “front-row seat to how governance structures evolve around new technologies,” states Fan, who began lecturing at the University of Washington in 2010 before becoming a professor there in 2017.
By the time she joined LMU Loyola Law School in 2024, she was already a prolific author of papers published in “Harvard Business Law Review,” “Boston College Law Review,” “Columbia Business Law Review,” and other publications. She’s since helped launch Loyola’s Transactional Law Clinic, giving students hands-on experience in advising clients on corporate and intellectual property matters, and recently became associate dean for Faculty and Academic Affairs.
Fan first sounded the alarm about the sharp rise of nontraditional investors in a 2022 article, the first descriptive account of how their hands-off approach has led to less investor oversight and due diligence. She’s elaborated on the dangers in a new working paper, warning that the new “frontier model” of corporate governance strips away traditional checks and balances, narrows avenues for accountability, and concentrates decision-making power in a small group of insiders.
“Everyone wants to be first to market, and investors in the AI space are driven by the belief that the next trillion-dollar company is around the corner, creating intense pressure to secure a stake in whatever seems promising,” she observes. “Terms that would have been unthinkable a decade ago are now considered routine.”
It’s not only private investors who are driving the trend, she adds, but also the investment arms of tech giants such as Microsoft, Amazon, and Google, which are providing the huge infusions of cash necessary to develop AI technologies while largely giving founders free rein. “Founders have been elevated to an almost mythical status,” she says, describing a pattern in which charismatic leaders are granted extraordinary latitude by their boards. “They’re always deferring to the founder as opposed to thinking: ‘Is this the right thing to do?’ They’re unlikely to challenge the status quo as long as the prospective financial returns are substantial.”
With the “move fast and break things” ethos that characterizes startup culture, it’s all too tempting to cut corners on governance, says Fan. In addition to placing more board seats in the hands of founders or a close-knit group of insiders, the trend has meant less formal governance overall. A company preparing to go public would typically put more board committees in place; with the sharp decline in companies going public, such committees are less likely to be established at all. Recent high-profile tech scandals involving WeWork’s Adam Neumann, Theranos founder Elizabeth Holmes, and FTX’s Sam Bankman-Fried have revealed what can happen when governance fails to keep pace with ambition.
Fan says that founders’ appeal can be particularly seductive when they wrap their missions in moral language. Bankman-Fried promoted “effective altruism,” pledging to ultimately give his fortune away, even as he was misappropriating customer funds at his crypto exchange. Similarly, Holmes promised to revolutionize medicine while cultivating a distinctive persona, complete with black turtleneck and husky voice, and stacking her board with big names such as George Shultz and Henry Kissinger, neither of whom had a background in health care. She ultimately faced fraud charges and prison time.
When trust in founders proves misplaced, the fallout can be severe, not just for the company, but also for employees, consumers, and entire industries, Fan states. Studies have shown, for example, that female founders in the biotech space have had a harder time raising money in the wake of Holmes’s downfall.
Dangers of AI
While such corporate implosions aren’t necessarily new, AI presents some unique challenges, says Fan. For starters, there is the issue of bias in the data that models are trained on. While algorithms are often perceived as neutral, they can be shaped by the backgrounds and demographics of those who create them. “People who make technology tend to be a fairly monolithic group, with very few women and people of color,” she observes. “Once you examine who builds the system, and whose experiences are and aren’t represented, it becomes pretty clear that’s going to amplify some of the biases we see in society.”
That lack of diversity flows from the top, Fan argued as far back as 2019 in a paper for the “Florida State University Law Review.” She found that women held just 8 percent of board seats at private high-tech firms, and recommended reforms to hiring and retention practices to increase the representation of women and people of color. In the wake of #MeToo and Black Lives Matter, the number has increased only slightly: “You saw a little bit more movement in those areas, but not as much as we would have liked to see.”
Concerns extend to the sources of the data themselves, raising issues of privacy and intellectual property violations. In the wake of a class-action lawsuit, AI developer Anthropic recently agreed to a $1.5 billion settlement with authors who alleged it used their copyrighted works to train its Claude AI system, a cost that could have been avoided with more oversight, Fan argues. “Some companies may just look at that as the ‘cost of doing business,’ but it’s a lawyer’s job to anticipate problems and head them off before they snowball and get out of control,” she says.

Some AI companies have explored novel corporate governance structures as a way of trying to show that the dangers of AI are being considered and addressed. OpenAI, for example, was organized under a nonprofit umbrella meant to keep safety at the center of its mission; in practice, the structure proved unworkable, contributing to the board’s brief ouster of founder Sam Altman, and the company has since restructured its for-profit arm along more conventional lines. In the “Harvard Journal of Law & Technology,” Fan argued that such novel structures create more problems than they solve.
Instead of reinventing the wheel, she says, companies should return to tried-and-true practices, insisting on a diverse board representing multiple stakeholders, and constituting committees to deal with issues of bias and safety early in the process. “You have to be sure you have the right people in the room thinking about these issues,” she notes, “especially independent directors and outside experts who can point out things that might potentially become a problem.”
The question is: Who will ensure that those guardrails are instituted? Back in 2016, Fan advocated for regulations that would implement such requirements for companies once they reached a certain size. With the federal government taking a laissez-faire approach to AI regulation, however, she holds out little hope for such reforms. States such as California have had more luck experimenting with their own approaches, she says. “Not all of them are good, but at least they are trying,” states Fan.
Ultimately, however, governance may depend less on outside regulation than on internal pressure. Lawyers, in particular, can provide a critical voice insisting on structural changes that increase oversight, but doing so requires a level of savvy among legal counsel. “How do we train lawyers to navigate the rules around this new technology?” Fan asks.
She answers that question in three parts. First, technological fluency is vital for lawyers to be taken seriously by others within the company. “You don’t have to learn to code, but you have to understand the biases inherent in the technology,” she says. Second, interdisciplinary breadth: lawyers advising AI companies must be conversant not only in corporate law, but also in privacy, intellectual property, and anti-discrimination law, among other fields.
Third, training in legal ethics, so that lawyers can develop their own internal compass and separate right from wrong even amid the powerful pull founders can exert. “When you see your client doing something wrong, what are you going to do?” she says. “These are murky areas, and they demand clarity about your own legal obligations and ethical values so you can navigate emerging challenges in a principled way.”
AI, Fan believes, will continue to test the limits of existing institutions and challenge traditional notions of corporate governance. The pressure it puts on boundaries, however, also represents a rare opportunity, she says, provided the right training is in place. “We’re at a critical juncture right now. I’m hopeful that the lawyers and future lawyers drawn to this field will play a key role in making this technology more equitable and in holding companies accountable.”
Michael Blanding is a Boston-based investigative journalist whose work has appeared in The New York Times, WIRED, Smithsonian, Slate, The Nation, The Boston Globe Magazine, and Boston. His newest book, "The Gospel According to Hobby Lobby: Inside a Billionaire Family’s Quest to Craft a Christian Nation," will be published by PublicAffairs in July 2026.
