October 8, 2025

How a 23-year-old former OpenAI researcher turned a viral AI prophecy into profit, with a $1.5 billion hedge fund and outsize influence from Silicon Valley to D.C.

Many spoke of Aschenbrenner with a mixture of admiration and wariness—“intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well-packaged and well-timed. Yet, while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.

There is no doubt, however, that Aschenbrenner’s rise reflects a unique convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China.

Within certain corners of the AI world, Leopold Aschenbrenner was already familiar as someone who had written blog posts, essays, and research papers that circulated among AI safety circles, even before joining OpenAI. But for most people, he appeared seemingly overnight in June 2024. That’s when he self-published online a 165-page monograph called Situational Awareness: The Decade Ahead. The long essay borrowed for its title a phrase already familiar in AI circles, where “situational awareness” usually refers to models becoming aware of their own circumstances—a safety risk. But Aschenbrenner used it to mean something else entirely: the need for governments and investors to recognize how quickly AGI might arrive, and what was at stake if the U.S. fell behind.

In a sense, Aschenbrenner intended his manifesto to be the AI era’s equivalent of George Kennan’s “Long Telegram,” in which the American diplomat and Russia expert sought to awaken elite opinion in the U.S. to what he saw as the looming Soviet threat to Europe. In the introduction, Aschenbrenner sketched a future he claimed was visible only to a few hundred prescient people, “most of them in San Francisco and the AI labs.” Not surprisingly, he included himself among those with “situational awareness,” while the rest of the world had “not the faintest glimmer of what is about to hit them.” To most, AI looked like hype or, at best, another internet-scale shift. What he insisted he could see more clearly was that LLMs were improving at an exponential rate, scaling rapidly toward AGI, and then beyond to “superintelligence”—with geopolitical consequences and, for those who moved early, the chance to capture the biggest economic windfall of the century.

To drive the point home, he invoked the example of COVID in early 2020—arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and those who act early stand to capture historic gains. And once again, he cast himself among the prescient few.

But the core of Situational Awareness’s argument wasn’t the COVID parallel. It was the argument that the math itself—the scaling curves that suggested AI capabilities increased exponentially with the amount of data and compute thrown at the same basic algorithms—showed where things were headed.
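
For readers who want a sense of what those curves actually look like: the relationships in the scaling-law literature (Kaplan et al.’s 2020 paper is the canonical example) are usually written as empirical power laws tying a model’s error to the resources poured into training it. A representative form, shown here purely as an illustration and not as a formula from Aschenbrenner’s essay, is

    L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},

where L is the model’s test loss, C is the training compute, and C_c and \alpha_C are fitted constants. Because the exponent is small, each order-of-magnitude increase in compute has bought a roughly steady drop in loss, and it is that straight line on a log-log plot that the essay extrapolates years into the future.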

Sholto Douglas, now a tech lead on scaling reinforcement learning at Anthropic, is both a friend and former roommate of Aschenbrenner’s who discussed the monograph with him. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.

Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a website founded by AI theorist and “doomer” extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas.

But Situational Awareness hit differently. Scott Aaronson, a computer science professor at UT Austin who spent two years at OpenAI overlapping with Aschenbrenner, remembered his initial reaction: “Oh man, another one.” But after reading, he told Fortune: “I had the sense that this is actually the document some general or national security person is going to read and say: ‘This requires action.’” In a blog post, he called the essay “one of the most extraordinary documents I’ve ever read,” saying Aschenbrenner “makes a case that, even after ChatGPT and all that followed it, the world still hasn’t come close to ‘pricing in’ what’s about to hit it.”

A longtime AI governance expert described the essays as “a big achievement,” but emphasized that the ideas were not new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a much broader audience at a fever-pitch moment in the AI conversation.

Among AI safety researchers, who worry primarily about the ways in which AI might pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “Some people who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.

Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”

That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the sort of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA assessment.

As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”

Alongside the essays, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI that places its bets in publicly traded companies rather than private startups.

The fund was seeded by Silicon Valley heavyweights like investor and current Meta AI product lead Nat Friedman—Aschenbrenner reportedly connected with him after Friedman read one of his blog posts in 2023—as well as Friedman’s investing partner Daniel Gross, and Patrick and John Collison, Stripe’s cofounders. Patrick Collison reportedly met Aschenbrenner at a 2021 dinner set up by a connection “to discuss their shared interests.” Aschenbrenner also brought on Carl Shulman—a 45-year-old AI forecaster and governance researcher with deep ties in the AI safety field and a past stint at Peter Thiel’s Clarium Capital—to be the new hedge fund’s director of research.

In a four-hour podcast with Dwarkesh Patel tied to the launch, Aschenbrenner touted the explosive growth he expects once AGI arrives, saying, “The decade after is also going to be wild,” in which “capital will really matter.” If done right, he said, “there’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x.”

Together, the manifesto and the fund reinforced each other: Here was a book-length investment thesis paired with a prognosticator so convinced of it that he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said Friedman is known for “zeitgeist hacking”—backing people who could capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.

Situational Awareness’s strategy is straightforward: It bets on global stocks likely to benefit from AI—semiconductors, infrastructure, and power companies—offset by shorts on industries that could lag behind. Public filings reveal part of the portfolio: A June SEC filing showed stakes in U.S. companies including Intel, Broadcom, Vistra, and former Bitcoin-miner Core Scientific (which CoreWeave announced it would acquire in July), all seen as beneficiaries of the AI build-out. So far, it has paid off: The fund quickly swelled to over $1.5 billion in assets and delivered 47% gains, after fees, in the first half of this year.

According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions, and endowments. In addition, the spokesperson said, Aschenbrenner “has almost all of his net worth invested in the fund.”

To be sure, any picture of a U.S. hedge fund’s holdings is incomplete. Publicly available 13F filings cover only long positions in U.S.-listed securities (plus exchange-traded options on them); short positions, most other derivatives, and international holdings aren’t disclosed, leaving an inevitable layer of mystery around what the fund is really betting on. Still, some observers have questioned whether Aschenbrenner’s early results reflect skill or fortunate timing. For example, his fund disclosed roughly $459 million in Intel call options in its first-quarter filing—positions that later looked prescient when Intel’s shares climbed over the summer following a federal investment and a subsequent $5 billion stake from Nvidia.

But at least some experienced financial industry professionals have come to view him differently. Veteran hedge-fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and Shulman weren’t outsiders scanning opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who spotted the subprime collapse before it hit—people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant.”

He pointed to Situational Awareness’s reaction to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s rising AI capabilities despite limited funding and export controls. While most investors panicked, he said Aschenbrenner and Shulman had already been tracking it and saw the sell-off as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility—though Duncan acknowledged “he could yet be proven wrong.”

Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the most obvious choice.

“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I am a bit stunned by how fast they have come up the learning curve…they are way more sophisticated on AI investing than anyone else I speak to in the public markets.”

Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance expert quoted earlier, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate.

“I heard about him as, ‘oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz-kid sort of thing.”

That wunderkind reputation only deepened. At 17, Aschenbrenner won a grant from economist Tyler Cowen’s Emergent Ventures, and Cowen called him an “economics prodigy.” While still at Columbia, he also interned at the Global Priorities Institute, co-authoring a paper with economist Phillip Trammell, and contributed essays to Works in Progress, a Stripe-funded publication that gave him another foothold in the tech-intellectual world.

He was already embedded in the Effective Altruism community—a controversial philosophy-driven movement influential in AI safety circles—and co-founded Columbia’s EA chapter. That network eventually led him to a job at the FTX Futures Fund, a charity established by cryptocurrency exchange founder Sam Bankman-Fried. Bankman-Fried was another EA adherent who donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.

The FTX Futures Fund was designed to support EA-aligned philanthropic priorities, although it was later found to have used money from Bankman-Fried’s FTX cryptocurrency exchange that was essentially looted from account holders. (There is no evidence that anyone who worked at the FTX Futures Fund knew the money was stolen or did anything illegal.)

At the FTX Futures Fund, Aschenbrenner worked with a small team that included William MacAskill, a co-founder of Effective Altruism, and Avital Balwit—now chief of staff to Anthropic CEO Dario Amodei and, according to a Situational Awareness LP spokesperson, currently engaged to Aschenbrenner. Balwit wrote in a June 2024 essay that “these next five years might be the last few years that I work,” because AGI might “end employment as I know it,” a striking mirror image of Aschenbrenner’s conviction that the same technology will make his investors rich.

But when Bankman-Fried’s FTX empire collapsed in November 2022, the Futures Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told Dwarkesh Patel. “That was incredibly tough.”

Just months after FTX collapsed, however, Aschenbrenner reemerged—at OpenAI. He joined the company’s newly launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Existing methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate a model’s outputs—something that might no longer be possible once systems surpass human comprehension.
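
For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration, written in Python with PyTorch, of the reward-modeling step at the core of RLHF: a small model is trained to score the response a human labeler preferred above the one they rejected, and the chatbot is then tuned to chase that score. The model, data, and names here are invented for illustration; this is not OpenAI’s code, only the general idea whose reliance on human judgment the superalignment team was created to move beyond.

# Minimal, hypothetical sketch of RLHF's reward-modeling step (illustrative, not OpenAI's code).
# A tiny reward model learns to give the human-preferred ("chosen") response a higher score
# than the rejected one, using a pairwise Bradley-Terry-style loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Toy stand-in for a reward model: maps a fixed-size text embedding to a scalar score."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise objective: push the chosen response's reward above the rejected response's reward.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyRewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in "embeddings" of response pairs that a human labeler compared (chosen vs. rejected).
    chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)
    for _ in range(100):
        loss = preference_loss(model(chosen), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"final preference loss: {loss.item():.3f}")

In a full pipeline, the language model would then be fine-tuned, for instance with a policy-gradient method such as PPO, to produce outputs this reward model rates highly, which is why the whole scheme leans on humans being able to tell a good answer from a bad one in the first place.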

Aaronson, the UT computer science professor, joined OpenAI before Aschenbrenner and said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’”

Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI staffer, who said they first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting previewing themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Multiple researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then-Scale AI CEO Alexandr Wang how many GPUs OpenAI had—“just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people were taken aback, they explained, at how casually Aschenbrenner had shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.

“This account is entirely false,” a representative of Aschenbrenner told Fortune. “Leopold never discussed private information with Alex. Leopold often discusses AI scaling trends such as in Situational Awareness, based on public information and industry trends.”

In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “leak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback, something he said was “totally normal” at OpenAI at the time. He argued that the real reason for his dismissal was an earlier memo in which he had called OpenAI’s security “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors.”

According to news reports, OpenAI responded, via a spokesperson, that the security concerns he raised internally (including to the board) “did not lead to his separation.” The spokesperson also said they “disagree with many of the claims he has since made” about OpenAI’s security and the circumstances of his departure.

Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, the “superalignment” team where he had worked—led by OpenAI cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike—dissolved after both leaders departed the company.

Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.

Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund drawn by a child unless I felt there was really strong governance in place.”

Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the US-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another.

One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there’s a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized—since Effective Altruists often view making money for the purpose of then giving it away as virtuous—the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.

The deeper worry, said Aaronson, is that Aschenbrenner’s message—that the U.S. must accelerate the pace of AI development at all costs in order to beat China—has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks, and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China.

If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric—the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.

This story was originally featured on Fortune.com


