Paul Joe (@0xPeejay, Investment Associate at Lyrik Ventures) and I co-wrote this article based on our observations from investing in web3 projects. You can read more of his writing at Fundamentals.
This article is a meander that sprang from broader research on Crypto x AI. One of the most popular propositions at this intersection is decentralised compute for AI. While looking into decentralised compute, and DePIN (decentralised physical infrastructure networks) more generally, we began to ask: “Is there demand for this?”, “How do you differentiate and ensure defensibility?”, and “Can these projects really compete with centralised infrastructure, and along what vectors?” We conclude that providing supply and competing on cost alone are not compelling propositions; that trust and platform risk can be compelling, but only in some circumstances; and that DePIN projects must find stronger ways to differentiate themselves from centralised infrastructure before customers will find them compelling. This approach may work in certain specialised domains, but is unlikely to apply in more generalised and commoditisable domains of infrastructure (including generalised high-performance compute resources such as for AI).
At a high level, decentralised compute provides access to compute resources (CPUs, GPUs, etc.) through a marketplace that connects those renting out compute with those seeking to rent it, together with some means of verifying that providers performed the computations correctly. For example, io.net sources GPU compute from data centres, cryptocurrency miners, and decentralised storage providers, using Ray.io, an open-source library and framework for distributed computing, to scale model training across thousands of GPUs. Pooling spare and underutilised compute supposedly reduces the cost of renting GPUs, making them accessible to a wider range of customers, and io.net claims that its decentralised platform can offer compute at a fraction of the cost1 of centralised alternatives by increasing supply through better utilisation of the excess capacity found in independent data centres around the world2.
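As a rough illustration of the marketplace mechanics described above, here is a minimal sketch of how supply and demand might be matched. This is a hypothetical toy model, not io.net's actual matching logic; the names and the greedy cheapest-first strategy are our own assumptions.

```python
# Hypothetical sketch of a compute marketplace: renters are matched to the
# cheapest providers with spare capacity. Not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    gpus_free: int
    price_per_gpu_hour: float  # USD

def match_order(providers, gpus_needed):
    """Greedily fill an order from the cheapest providers first."""
    allocation = []
    for p in sorted(providers, key=lambda p: p.price_per_gpu_hour):
        if gpus_needed == 0:
            break
        take = min(p.gpus_free, gpus_needed)
        if take > 0:
            allocation.append((p.name, take))
            gpus_needed -= take
    if gpus_needed > 0:
        raise RuntimeError("insufficient supply to fill the order")
    return allocation

providers = [
    Provider("datacentre-a", gpus_free=8, price_per_gpu_hour=1.20),
    Provider("miner-b", gpus_free=4, price_per_gpu_hour=0.80),
]
print(match_order(providers, 10))  # cheapest supply is drawn down first
```

Even in this toy version, the two-sided nature of the problem is visible: the marketplace is only as cheap as the supply it can actually onboard.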
But while surging demand for GPUs for AI has led to shortages, long wait times and high prices, and the average crypto miner stands to gain by renting out their hardware to compete with Amazon Web Services (AWS) for AI compute customers3, are customers really turning to decentralised compute as a solution? How much adoption do platforms like io.net and other DePIN solutions, such as Render Network for graphics/video rendering and Akash Network for cloud compute, really have?
We looked into the data for Akash Network, a decentralised compute marketplace that has been operational since March 2021. Since inception, customers have spent only $608K in total to rent computing power on the network. New leases paid for per day tend to hover around 250, and current active leases (which we can take as the number of concurrent active customers) sit at around 900. Render Network is a decentralised GPU rendering platform: a blockchain-based marketplace for idle GPU compute that lets customers scale graphics/video rendering work on demand to high-performance GPU nodes worldwide, at a fraction of the cost of a centralised GPU cloud and, it claims, orders of magnitude faster. Since its inception in 2017, the network has rendered about 36 million frames, which even at 60 frames per second is the equivalent of only about 167 hours of video. Looking at their data, the number of current active users since their migration to Solana sits at under 700, with new users added each week in the single digits. At the time of writing, io.net claims about 34K cluster-ready (ready-to-use) GPUs and 5K cluster-ready CPUs. But these are supply metrics and, unfortunately, their demand or utilisation metrics have been hard to come by.
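The frame arithmetic above is easy to sanity-check:

```python
# 36 million frames rendered over the network's lifetime, expressed as
# hours of 60 fps video.
frames_rendered = 36_000_000
fps = 60
hours = frames_rendered / fps / 3600
print(round(hours))  # about 167 hours
```

That is roughly seven days of continuous footage across seven years of operation, which underlines how small the realised demand is.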
These numbers don’t seem like a lot when compared to centralised cloud computing providers like AWS ($25 billion in revenue in the first quarter of 2024), Microsoft Azure ($26.7 billion in the same quarter) and Google Cloud ($9.6 billion). In terms of customer demand, AWS reportedly has over 1.45 million businesses using it, including, ironically, many blockchain and crypto companies.
While supply derived from excess capacity can (theoretically) lead to cost savings, this depends heavily on the platform being able to attract qualified suppliers, and this is where the promise may fall short. Yes, decentralised compute platforms can point to the volume of underutilised GPUs around the world, but converting their owners into suppliers is another matter, and not so simply done by offering a financial incentive mediated through a crypto token4. Infrastructure providers also need to drive demand, not just aggregate supply, and even if a token can incentivise supply, the same isn’t necessarily true for demand.
It’s also worth questioning whether crowdsourced compute can capture economies of scale, since supply needs to grow with demand. As broader market demand for compute outgrows the onboardable excess capacity (for the reasons mentioned above), and non-data-centre GPUs (i.e. those owned by ordinary users) aren’t a viable source of high-performance supply (since the most performant GPUs are not owned by retail consumers or end users), decentralised platforms lack the economies of scale as purchasers to secure incremental GPUs. At the margin of saturated supply and excess demand, the unit economics are arguably better for centralised providers.5
So then, under what circumstances is a decentralised solution cheaper/faster/better?
Proponents of decentralisation often fall back on trust as a key rationale, asserting that we can’t trust centralised platforms not to change terms, raise prices, shut down, etc., especially where there is some degree of market concentration. To address this, crypto offers a “trustless” system of validation and verification, in which services can be provided without the customer needing to trust any specific provider: token network protocols and smart contracts hardcode the rules of operation into the system, immutably. But how applicable is this idea to the cloud compute market?
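To make the idea of “trustless” verification concrete, one common approach (a minimal sketch of redundant execution, not a description of any specific protocol's scheme, which would typically involve staking, slashing, or cryptographic proofs) is to have several independent providers run the same task and accept a result only when a majority agree:

```python
# Illustrative sketch of verification by redundant execution: no single
# provider needs to be trusted, because results are cross-checked.
import hashlib
from collections import Counter

def digest(result: bytes) -> str:
    return hashlib.sha256(result).hexdigest()

def verify_by_majority(results):
    """Accept the result a strict majority of providers agree on."""
    counts = Counter(digest(r) for r in results)
    winner, votes = counts.most_common(1)[0]
    if votes * 2 <= len(results):
        raise ValueError("no majority: cannot accept any result")
    return next(r for r in results if digest(r) == winner)

honest = b"output-of-task-42"
results = [honest, honest, b"tampered-output"]
print(verify_by_majority(results) == honest)  # True
```

Note the cost this implies: the same work is paid for multiple times, which is part of why trustlessness is only worth it when trust is genuinely the problem.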
Is trust really a problem that needs solving in this space? Crypto enthusiasts may argue that centralised platforms have been known to unilaterally change their terms, but this doesn’t seem to bother the vast majority of customers. As uncomfortable as crypto enthusiasts might be with the idea, ordinary customers implicitly trust centralised compute providers like Amazon Web Services, Google Cloud and Microsoft Azure, partly because of their non-trivial investments in building a reliable brand and reputation, and mostly because customers have terms of service (TOS), service level agreements (SLAs) and legal recourse to fall back on. Decentralised platforms don’t provide such assurances, which is why they need the extra layer of verification and validation: either implicit trust is absent, or we don’t want to depend on it and prefer to operate in a “trustless” manner.6 Unsurprisingly, with limited legal recourse against a token network protocol that no one controls or owns, users of decentralised platforms may feel they have even fewer protections (and more difficulty seeking recourse) in the event of quality-of-service disputes, service disruptions or downtime, a hack, or a rugpull by the developers, making these platforms seem even less trustworthy despite their “trustless” intentions.
If trust isn’t a realistically compelling proposition for decentralised compute, then decentralised compute platforms end up competing with centralised platforms on supply and price (which are intricately linked), and in most cases, price either isn’t compelling enough or will end up becoming a race to the bottom, which doesn’t seem like a good game to play.
We believe this line of thinking also applies to infrastructure provision and DePIN more generally. When choosing an infrastructure platform (centralised or decentralised), customers care most about trust in the provider, quality assurance, service levels and customer experience. But in many of these cases trust isn’t a real problem, and most ordinary customers are satisfied with the legal assurances (TOS, SLAs and legal recourse) that centralised platforms offer. A decentralised infrastructure platform therefore needs to solve some other key customer pain beyond trust (not perceived as a problem) and supply/price (not a good game to play) in order to have a strong proposition, and more so if it can solve that pain in a differentiated and defensible way. One example: a unique user experience, task pipeline, or domain-specific complexity that competitors without domain expertise will find difficult to replicate, and for which decentralisation offers a uniquely suited approach. This test would rule out generalised DePIN propositions that can easily be commoditised into a race to the bottom across both centralised and decentralised offerings.
Alternatively, when is trust a real problem that customers want solved, and when is trustlessness (and the decentralisation underpinning it) genuinely helpful? One example is platform risk: customers genuinely worried that the infrastructure platform they rely on might be shut down or have its terms (especially prices) abruptly changed. In the game infrastructure market (specifically Game Backend as a Service), game developers who rely on third-party infrastructure faced exactly this situation when Amazon acquired and discontinued GameSparks and Microsoft acquired PlayFab and integrated it with Azure. Beamable, a decentralised alternative, seeks to address this real platform risk, sparing game developers from building their own game backend, by delivering its scalable game live-operations service over a decentralised software-plus-hardware infrastructure platform, which makes for a potentially compelling DePIN proposition.
The interesting characteristics of the game backend infrastructure market that make Beamable’s DePIN proposition a potentially suitable solution are:
(a) trust (in the form of platform risk) is a real problem that customers need solved;
(b) no need for the most performant compute resources (unlike AI), so that underutilised excess capacity can viably be onboarded as supply; and
(c) a relatively niche market that requires domain-specific expertise and proprietary software solutions on top of what would otherwise be commoditisable hardware.
To summarise, if the foregoing analysis is correct, then
(a) in contexts where trust isn’t a real problem (as far as users/customers perceive it, which includes many “status quo” situations today), decentralised platforms/providers need to offer strong enough alternative value propositions to be compelling (and in the case of decentralised compute, this seems to be absent); and
(b) in contexts where trust is indeed a real problem (in the sense that customers are unable or unwilling to trust the provider)7, decentralisation (and blockchain-based network protocols) can play a valuable role.
We think that’s what we should be looking out for in DePIN opportunities, and in all likelihood, that ship has sailed for AI cloud compute, with centralised platforms far ahead of decentralised alternatives. It sounds like a simple thesis, that decentralised infrastructure providers must solve problems that customers care about (instead of problems that customers don’t care about) in a differentiated and defensible way, but perhaps one that’s too easily overlooked amidst the hype around narrative (especially in AI) and token launches.
If you liked this article and are reading it on the web or received it from a friend, please consider subscribing to my regular newsletter (so you’ll get articles like this delivered fresh to your inbox) by clicking the subscribe button below.
1. Supposedly up to 90% cheaper than incumbent suppliers.
2. io.net claims that “there are thousands of independent data centres in the U.S. alone, with an average utilisation rate of 12%–18%.”
3. Again, io.net claims that “the average miner using a 40GB A100 makes $0.52 a day, while AWS is selling the same card for AI computing for $59.78 a day,” so there’s apparently a lot of room for arbitrage.
4. See this earlier article on how financial incentives don’t always play out the way crypto enthusiasts hope, because “behavioural economics”.
5. i.e. Can AWS get incremental GPUs more cheaply if supply is constrained for everyone? The answer is probably yes.
6. Serious question: is this just an education/awareness problem, or is the concept of “trust” for a non-crypto native actually different from that of a crypto enthusiast? It almost feels as if the latter is solving for a different concept of “trust” that isn’t quite relevant to the non-crypto native. The naughty question: if this is true for cloud compute, might it also be true for (gasp!) DeFi?
7. Transactions between criminals, and transactions among ordinary citizens living under totalitarian or corrupt government regimes, come to mind, but surely these are not the only examples.