Incentive networks could save millions on AI compute costs

AI compute costs are rising fast. Incentive-network-driven compute could be the key to saving you and your investors millions of dollars.

Decentralized networks make everything more complicated, but they can also handle complicated things. When it comes to artificial intelligence's (AI) gluttonous demand for computing power, the problem might just be complicated enough to justify decentralization.

Incentive networks are a form of decentralized network that rewards individual behavior benefiting the network as a whole, creating an "ecosystem" mentality. The difference between a simple ecosystem and an incentive network lies in intentionality and mechanism design. An ecosystem is often a lucky accident: competing actors decide they are better off working within certain limits than remaining outside the group. An incentive network, by contrast, is designed for shared success from day one.

But how does this tie back to AI? Think of a scalable AI application as a machine that turns ludicrously large data sets into simple answers, burning computational power the way a car burns gas. The more data you haul and the faster you want the answers, the more fuel you burn, and the biggest, most complex AI models burn many times what smaller ones do: training OpenAI's GPT-4 cost an estimated $78 million in compute, while Google's Gemini Ultra cost an estimated $191 million. With numbers that big, a system that can cut hardware investment and dynamically allocate resources to reduce overall costs, while remaining impartial to its participants, is vital. That is exactly what incentive networks are designed to do.
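To make the idea concrete, here is a minimal sketch in Python of how such a network could work. All names, prices, and numbers are hypothetical, and real incentive networks layer staking, verification, and payment rails on top of this; the sketch only shows the two core ideas the article describes: routing jobs to the cheapest spare capacity, and rewarding providers in proportion to the work they contribute.

```python
from dataclasses import dataclass

# Hypothetical illustration: providers advertise spare GPU-hours at a price,
# jobs are routed to the cheapest providers with capacity, and rewards are
# paid out in proportion to the work each provider actually performed.

@dataclass
class Provider:
    name: str
    price_per_gpu_hour: float   # what the provider asks, in dollars
    capacity_gpu_hours: float   # spare capacity offered to the network
    work_done: float = 0.0      # GPU-hours actually contributed

@dataclass
class Job:
    name: str
    gpu_hours: float            # compute the job needs


def allocate(jobs: list[Job], providers: list[Provider]) -> float:
    """Greedy allocation: send each job to the cheapest providers with
    remaining capacity. Returns the total cost of running all jobs."""
    total_cost = 0.0
    for job in jobs:
        remaining = job.gpu_hours
        for p in sorted(providers, key=lambda p: p.price_per_gpu_hour):
            if remaining <= 0:
                break
            take = min(remaining, p.capacity_gpu_hours - p.work_done)
            if take <= 0:
                continue
            p.work_done += take
            total_cost += take * p.price_per_gpu_hour
            remaining -= take
        if remaining > 0:
            raise RuntimeError(f"not enough network capacity for {job.name}")
    return total_cost


def distribute_rewards(providers: list[Provider], reward_pool: float) -> dict[str, float]:
    """Split a reward pool in proportion to each provider's contribution,
    so behavior that benefits the network (offering cheap, reliable
    capacity) is exactly the behavior that gets paid."""
    total_work = sum(p.work_done for p in providers)
    return {p.name: reward_pool * p.work_done / total_work for p in providers}


if __name__ == "__main__":
    providers = [
        Provider("idle-datacenter", price_per_gpu_hour=1.20, capacity_gpu_hours=5_000),
        Provider("university-cluster", price_per_gpu_hour=1.50, capacity_gpu_hours=3_000),
        Provider("hobbyist-rigs", price_per_gpu_hour=2.00, capacity_gpu_hours=1_000),
    ]
    jobs = [Job("fine-tune-llm", 6_000), Job("batch-inference", 1_500)]

    cost = allocate(jobs, providers)
    rewards = distribute_rewards(providers, reward_pool=cost)
    print(f"total cost: ${cost:,.2f}")
    for name, reward in rewards.items():
        print(f"  {name}: ${reward:,.2f}")
```

In this toy run, the cheapest idle capacity absorbs most of the work, the buyer pays only for GPU-hours actually consumed, and each provider's payout tracks its contribution, which is the incentive alignment the model depends on.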
