Decentralized Machine Learning: How Gensyn Leverages DLT to Train AI Models


Neither the author, Tim Fries, nor this website, The Tokenist, provides financial advice. Please consult our website policy prior to making financial decisions.

Just after announcing the opening of its new international office in London, a16z Crypto also announced its latest investment. The VC firm led a £34.25 million ($43 million) Series A funding round for Gensyn, a firm seeking to leverage blockchain technology to train AI models.

Previously, Galaxy Digital provided $6.5 million in seed funding for Gensyn in March 2022. Veteran computer science and machine learning experts Ben Fielding and Harry Grieve founded the company in 2020. Other notable investors include CoinFund, Protocol Labs, Canonical Crypto, and Eden Block.

The financial boost will undoubtedly grow the firm beyond its currently listed two employees, the co-founders. With ChatGPT breaking into the mainstream and igniting broad interest in AI, Gensyn’s Machine Learning Compute Protocol can finally take off as a pay-as-you-go distributed cloud computing model for software engineers and researchers.

A Decentralized Trust Layer for Machine Learning

Interfacing blockchain with AI is often thought of as inefficient. After all, blockchains provide immutable records that many network nodes must verify, while AI models require high data throughput and frequent updates.

At the same time, it would be highly beneficial if computational tasks could be verified and distributed across global hardware. a16z sees the UK-based Gensyn as the breakthrough needed to provide what co-founder Harry Grieve calls a “decentralized trust layer for machine learning.”

Just as Bitcoin removes the need for trusted third parties to create sound money, Gensyn’s peer-to-peer (P2P) network aims to disintermediate cloud computing, going up against oligopolies such as Amazon Web Services (AWS).

As of today, Amazon’s node-hosting share is even greater for Ethereum, at 64.7%. Image credit: Messari.

Gensyn’s network protocol endeavors to accomplish this by connecting the world’s underutilized computing devices. These include consumer GPUs used in video gaming, custom ASICs built for mining, and the System-on-Chip (SoC) integrated circuits commonly found in smartphones and tablets.

In particular, consumer SoC devices would be in high demand for machine learning, i.e., training neural networks. They integrate multiple components – memory, CPU, GPU, and storage – on a single microchip. This compactness lends itself to scalability, since many such devices can be pooled into a computing cluster for a distributed system.
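To make the cluster idea concrete, below is a minimal, hypothetical sketch (not Gensyn’s code) of data-parallel training: a dataset is split across simulated worker devices, each computes a gradient for a shared linear model on its own shard, and the gradients are averaged into a single update. This is the basic mechanism by which a pool of small devices can behave like one large trainer.

```python
import numpy as np

# Hypothetical illustration only: simulate a pool of small devices doing
# data-parallel training of a linear model with plain NumPy.
rng = np.random.default_rng(0)

# Synthetic dataset: y = X @ w_true + noise
X = rng.normal(size=(1200, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=1200)

n_devices = 4                      # e.g., phones, gaming GPUs, idle ASIC hosts
shards = np.array_split(np.arange(len(X)), n_devices)

w = np.zeros(8)                    # shared model parameters
lr = 0.1

def local_gradient(w, idx):
    """Mean-squared-error gradient computed on one device's data shard."""
    Xb, yb = X[idx], y[idx]
    return 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)

for step in range(200):
    # Each "device" computes a gradient on its own shard...
    grads = [local_gradient(w, idx) for idx in shards]
    # ...and the cluster averages them into one shared update.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```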

But wouldn’t a blockchain network inherently slow down such a system?


Is Gensyn Scalable?

Like Ethereum or Solana, Gensyn’s network is a layer-1 proof-of-stake protocol. More precisely, it is built on Substrate, a framework for building blockchains, which allows network nodes to communicate with each other on a peer-to-peer basis.

According to Gensyn’s Python simulations of the protocol’s performance, training on the Modified National Institute of Standards and Technology (MNIST) dataset, a widely used benchmark for machine learning, shows low overhead compared to native runtime.

Image credit: Gensyn.ai
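The figures above come from Gensyn’s own simulations, but the general shape of such an overhead benchmark is easy to sketch. The hypothetical example below (not Gensyn’s simulation code) trains a small classifier on scikit-learn’s built-in 8x8 digits set, used here as a lightweight stand-in for MNIST, and compares native runtime against a run that also hashes the model parameters after every step, a crude stand-in for the per-step commitments a verification protocol might record.

```python
import hashlib
import time

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

# Hypothetical benchmark sketch (not Gensyn's simulation code).
X, y = load_digits(return_X_y=True)        # small MNIST-like digit images
X = X / 16.0                               # scale pixel values to [0, 1]
classes = np.unique(y)

def train(epochs, commit=False):
    """Train with plain SGD; optionally 'commit' to the weights each epoch."""
    clf = SGDClassifier(random_state=0)
    start = time.perf_counter()
    for _ in range(epochs):
        clf.partial_fit(X, y, classes=classes)
        if commit:
            # Stand-in for a per-step commitment: hash the current weights.
            hashlib.sha256(clf.coef_.tobytes()).hexdigest()
    return time.perf_counter() - start

native = train(epochs=50)
committed = train(epochs=50, commit=True)
print(f"native: {native:.3f}s, with commitments: {committed:.3f}s, "
      f"overhead: {(committed / native - 1) * 100:.1f}%")
```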

Harry Grieve told Decrypt that Gensyn is “unlimited in scale and super low cost in terms of verification overhead.” The core issue to resolve is building a trustless consensus mechanism that can execute arbitrary small computations while overcoming six GHOSTLY problems:

  • Generalisability – ML developers must be able to receive a trained neural network regardless of custom architecture or dataset. 
  • Heterogeneity – The protocol must utilize different processing architectures across different operating systems. 
  • Overhead – Verifying computation must add negligible overhead to be feasible. Case in point: Gensyn notes that using Ethereum for ML would incur a ~7,850x computational overhead versus AWS running Nvidia GPUs such as the V100 Tensor Core. (A simple spot-check sketch illustrating low-overhead verification follows this list.)
  • Scalability – Reliance on specialized hardware limits scalability. 
  • Trustlessness – Scaling cannot happen unless the need for trust is removed; otherwise, a trusted party would be required to verify the work. 
  • Latency – Training neural networks demands low latency, as high latency negatively affects inference, the stage at which a trained machine learning model makes real-world predictions on newly fed data. 
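Gensyn’s own verification design is more involved, but the intuition behind low-overhead, trustless verification can be sketched generically: rather than re-running a solver’s entire training job, a verifier re-executes only a few randomly chosen steps from the solver’s published checkpoints and checks that they match. The hypothetical Python sketch below (the train_step, solve, and spot_check names are illustrative, not Gensyn’s protocol) shows the idea.

```python
import numpy as np

# Hypothetical spot-check sketch (not Gensyn's protocol): a verifier re-runs
# only a few randomly chosen training steps instead of the whole job.
rng = np.random.default_rng(42)

X = rng.normal(size=(500, 4))
w_true = rng.normal(size=4)
y = X @ w_true + 0.01 * rng.normal(size=500)

def train_step(w, lr=0.05):
    """One deterministic full-batch gradient step on the fixed dataset."""
    return w - lr * (2.0 / len(X)) * X.T @ (X @ w - y)

def solve(steps=100):
    """Solver: run the full training job, publishing a checkpoint per step."""
    w = np.zeros(4)
    checkpoints = [w.copy()]
    for _ in range(steps):
        w = train_step(w)
        checkpoints.append(w.copy())
    return checkpoints

def spot_check(checkpoints, k=5):
    """Verifier: re-execute k random steps and compare against the claims."""
    steps = rng.choice(len(checkpoints) - 1, size=k, replace=False)
    for i in steps:
        if not np.allclose(train_step(checkpoints[i]), checkpoints[i + 1]):
            return False  # dispute: step i was reported incorrectly
    return True

checkpoints = solve()
print("honest solver passes:", spot_check(checkpoints))

checkpoints[60] += 0.5  # a cheating solver tampers with one checkpoint
# Checking every step is guaranteed to catch the tampering; a small random
# sample catches it only with some probability, which is the overhead trade-off.
print("tampered solver passes:", spot_check(checkpoints, k=len(checkpoints) - 1))
```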

In short, Gensyn aims to create an incentive structure that allows “the unit cost of ML compute to settle into its fair equilibrium.” By the firm’s projections, this would position Gensyn ahead of even AWS.

Gensyn’s projection of future performance in the real-world competitive arena. Image credit: Gensyn.ai

In the end, if AI is to become a part of everyone’s lives, it would be beneficial if its models were trained trustlessly.

“Instead of placing our trust in corporations, we can place our trust in community-owned and -operated software, transforming the internet’s governing principle from ‘don’t be evil’ back to ‘can’t be evil’.”

Chris Dixon, head of a16z crypto.


Do you trust corporations to conduct ML training ethically? Let us know in the comments below.

