How We Would Approach an Investment Thesis for Decentralised AI
Amidst the hype around Decentralised AI (DeAI), with tokens like $VIRTUAL, $AI16Z and $AIXBT popping, Paul Joe (@0xPeejay, who writes at Fundamentals) and I got to talking about how we would approach developing a thesis for investing in DeAI. This article captures some of that thinking - not the thesis itself, but the kinds of questions we would ask and what we would need to understand more deeply before formulating a position on why, what and where to invest in DeAI. What follows is a framework for developing such a thesis, drawing from questions and insights that came up during our conversation.
To understand whether there are real investment opportunities in DeAI, we want to look beyond the hype and evaluate the potential of DeAI to create real, lasting value. Thinking about DeAI (and specifically in contrast to centralised AI), the kinds of questions we would ask are: "Is there a real use case? Is there real value created? Can users really extract that value? Can the protocol/network really accrue value? Can the ecosystem building on the protocol really create new incremental value? What needs to be true for DeAI to be successful, and how likely is that to happen?" Here's how we would critically develop a thesis, if there were one to develop at all:
1. Focus on value creation and use cases
At its core, a DeAI investment thesis should hinge on whether the technology creates genuine value that wasn't possible with centralised AI (CeAI). So we'd start by looking for use cases that are uniquely enabled by decentralisation, much as stablecoins (and the decentralised real-world payments they enable) are a valuable use case of decentralised finance (DeFi) that wasn't previously possible with centralised finance (CeFi). In this regard, we would consider whether the decentralised approach offers a solution that is not only different but also superior to what centralised AI can do.
In hype-driven sectors, the question of what constitutes value can be contentious. Taking another (controversial) example from DeFi, Pump Dot Fun has made over $300 million in revenue in a year with a small team by building what could be characterised as a super-scalable casino that facilitates memecoin investment as a form of gambling. In some people's eyes, that's certainly some kind of value (investing in casinos can be very profitable - for the investor in the establishment, not the gambler), but not what everyone might consider real, lasting value (though we're open to being proven wrong). Unsurprisingly, similar propositions can be found in DeAI.
2. Identify valuable problems that DeAI can solve
A parallel approach to focusing on valuable use cases is to identify the set of valuable problems that DeAI can meaningfully and uniquely solve better than CeAI. The key question here is - even if DeAI can offer a solution that CeAI can't, does anyone really need that solution?
Verifiable inference is one such problem suggested by proponents of DeAI. At one level, the need for verifiable inference in the vast majority of AI use cases seems questionable. It's hard to imagine that anyone will care whether a specific model was used to run a prompt and generate an inference, or whether an output was indeed generated by a given model and prompt. But a stronger argument is that, in reality, verifiability is not a reason to use DeAI; it's a requirement that follows from using DeAI. If you choose DeAI, you cannot trust the provider ("honesty is not assumed, it must be verified"), and hence verifiability becomes necessary. (In other words, verifiable inference isn't a reason to use DeAI over CeAI, it's a problem arising from DeAI.) So what's the reason to use DeAI in the first place? One rational answer is that you don't trust the centralised provider with your data, which implies either a society-wide collapse in trust towards major centralised platforms, or a very narrow user segment with an acute need for data privacy. (See our DePIN article for an elaboration of this perspective, but the tl;dr is: most people trust centralised providers, so this argument has little traction today.)
Training data ownership and privacy is another oft-touted problem for DeAI to solve. Again, we're inclined to ask whether users really care about this enough to move from CeAI to DeAI. When extended to data attribution and incentivisation (i.e. contributing training data and earning revenue from that contribution), the argument starts to sound interesting, but given the vast amount of data required to make a dent in training, we remain sceptical about whether there is a feasible path to general adoption. As for enterprise customers that do (and should) care about data privacy, trust in centralised providers offering data sandbox and privacy solutions, combined with legal and reputational assurances, seems to keep those customers happy enough, while locally hosted open-source models appear to satisfy the ones with higher bars, without the need for decentralisation.
Will current attitudes towards trust in centralised providers and indifference towards training data privacy change, ushering in a more widespread recognition of the value of DeAI? That's where the key questions - "What needs to be true in the world for this to be successful?" and "Why now?" - are most relevant, and that's how we would approach validating such a thesis.
On the agent side, if we set aside the ideological imperative for decentralisation, it behooves us to identify problems where a decentralised agent is really necessary (i.e. serves a purpose that a centralised AI agent could not). When do we really need fully autonomous decentralised (as opposed to centralised) AI agents? Perhaps this is a space worth exploring both broadly and deeply in order to find out.
3. Question the necessity of decentralisation
As we inspect potential use cases and problems to solve, we would critically evaluate whether a decentralised approach is truly necessary for a given use case or problem. Is there a real need for decentralisation, or could a centralised solution perform just as well, or even better? Starting from the assumption that decentralisation isn't inherently valuable, what crucial and distinct advantage does decentralisation provide in each use case or problem, if any?
4. Analyse the potential for adoption
We would also ask whether people will actually care about the purported differentiation that DeAI offers over CeAI, and, as in the examples above, whether problems like verifiable inference and training data privacy are in fact important enough to users to drive adoption. Will users recognise and adopt the value created by DeAI? Does the success of DeAI depend on network effects and on DeAI becoming the dominant mode? More broadly, is there a realistic and believable path for the world to transition from CeAI to DeAI, similar to a CeFi-to-DeFi transition (which, in all frankness, hasn't really happened yet)? Or are we looking at CeAI and DeAI coexisting, with DeAI limited to specific use cases and customer segments?
5. Evaluate the viability of the ecosystem
An interesting angle to consider in DeAI is the opportunity to create ecosystems built on top of an underlying protocol, where incentives are aligned and actions coordinated through a token economy (the latter aspect being what differentiates this from open-source AI). Here, we would take an objective look at whether an aligned and coordinated ecosystem built on top of a protocol, in which participants build components, agents and applications, a) is actually possible, b) can truly create value for users, c) will actually accrue value to the protocol or network, and d) allows participants to extract that value in ways that don't negatively impact the economy. A tall order, and one that DeFi, too, has yet to definitively fulfil.
6. Don't get distracted by hype
Keep in mind that, today, DeAI appears to be (far) behind CeAI in terms of stage and speed of advancement, as well as the volume of top AI talent it's able to attract. Many DeAI propositions and pitches focus on the current narrative, while the state of the art in DeAI lags behind the rapid advances in CeAI. As such, one of the biggest risks we see for DeAI investors is that they're looking at what DeAI builders are pitching to them, and not at the state of the art in CeAI (which is sometimes many cycles ahead, moving a lot faster, and often not (yet?) possible in DeAI) - skating to where the puck is now, not where the puck is going.
Ultimately, developing any investment thesis requires a critical evaluation of real-world value creation, unique use cases and valuable problems, the necessity of the proposition (in this case, decentralisation), and a believable path to adoption. By focusing on these aspects, we can make more informed decisions and identify projects with genuine long-term potential and lasting value. All this is not to suggest that we don’t believe there’s an investment thesis for DeAI, but that we would approach the exercise with extreme scepticism in order to uncover the meaningful propositions that truly matter. Surely that’s what they pay VCs for?