How Google Gemini Proves the Case for Decentralized AI

AI dominated technology news throughout the first quarter of 2024 with a steady stream of advancements and announcements. The most controversial and memorable AI story, however, was the launch of Google Gemini’s image creator, which received such scathing reviews that it was quickly ‘paused’ for ‘updates’. In its quest to be as politically sensitive as possible, Google forced its model to insert diversity into nearly all images, producing results that were preposterously inaccurate, such as African and Asian Nazis and a Southeast Asian female Pope. Not only were these images wildly inaccurate, they were clearly offensive. Most importantly, they lifted the veil on the manipulation risks inherent in AI models, particularly models developed and run by companies. Companies are subject to a wide range of influences and cannot be relied on to resist opaque manipulation of their AI models, which is why open, transparent and decentralized models are far more trustworthy and represent the future of AI.

How do the models work?

Image prompts to Google’s Gemini model are run through a set of rules developed by Google to fit a particular agenda, such as increasing diversity. These rules may be well intentioned, but the user never sees the criteria that were added to generate the image. With Gemini, the diversity rules were so obvious and so clumsy that the output quickly became the subject of global ridicule, as users vied to generate the most absurd result. Because image requests rely on the same model that generates every other result, a similar bias exists in every answer. Users quickly discovered that the model could not give definitive answers to even simple questions, such as ‘Were Elon Musk’s memes more damaging to society than Hitler?’. When LLMs (Large Language Models) like ChatGPT launched, they were quickly accused of harboring biases. Alas, a picture is worth a thousand words, and the world now has an enduring window into Google’s manipulation.
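To make the mechanism concrete, here is a minimal sketch of how a hidden prompt-rewriting layer might work. The rules and function names below are invented for illustration and are not Google’s actual code; the point is simply that the model receives a prompt the user never sees.

```python
# Hypothetical illustration of a hidden prompt-rewriting layer.
# The rule text and names below are invented; they are not Google's
# actual rules or implementation.

HIDDEN_RULES = [
    "depict a diverse range of ethnicities and genders",
    "avoid showing any single demographic group exclusively",
]

def rewrite_prompt(user_prompt: str) -> str:
    """Append the operator's hidden rules to whatever the user typed."""
    return user_prompt + ". " + "; ".join(HIDDEN_RULES)

if __name__ == "__main__":
    typed = "a 1943 German soldier"
    sent = rewrite_prompt(typed)
    print("User typed:    ", typed)
    print("Model receives:", sent)  # the user never sees this string
```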

Open, transparent AI is the answer.

Rather than being trained and manipulated behind closed doors by corporations, AI models can be provably trained on open data sets. The models can be open source, available for anyone to see, and can run on a decentralized network of computers that prove each result was executed against the model without manipulation. Highly resilient decentralized networks exist currently for payments and storage, and a number of networks are being optimized to train and run AI models. Decentralized networks are necessary because they operate globally on a wide range of infrastructure with no single owner, making them very hard to threaten or shut down.
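What ‘provable’ might look like in practice can be sketched simply. The snippet below assumes the model’s open weights and their digest are published for anyone to check; the file path and digest value are placeholders, and a real network would anchor the digest on a public ledger or use cryptographic proofs of execution. Still, a published weights hash is the most basic building block of an auditable, decentralized AI network.

```python
import hashlib
from pathlib import Path

# Placeholder digest of the published open-source model weights.
# A real network would publish this value on a public ledger.
PUBLISHED_WEIGHTS_SHA256 = "0" * 64

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_weights(weights_path: Path) -> bool:
    """True only if the node is serving exactly the published open weights."""
    return sha256_of_file(weights_path) == PUBLISHED_WEIGHTS_SHA256

# Usage (hypothetical path): any user can run this check independently,
# so no single operator can silently swap in a manipulated model.
# verify_model_weights(Path("models/open-llm.weights"))
```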

Is this realistic?

Google and Microsoft have spent billions of dollars developing their LLMs, which looks like an insurmountable lead, but we have seen huge companies outcompeted before. Linux overcame Windows’ decade-long head start and Microsoft’s billions of dollars to become the leading operating system. The open source community worked together to build Linux, and it can expect a similar level of success in building and training open source LLMs. Models do not need to match ChatGPT’s scale, and domain-specific models are likely to emerge that are more trusted in their particular topics. A single front end could pull from a wide range of these domain-specific models, replicating the ChatGPT experience on a transparent and trusted foundation.
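A minimal sketch of such a front end follows. The domain models, keywords, and routing logic are hypothetical; a production router would use a proper classifier rather than keyword matching, but the structure shows how independent, domain-specific open models could sit behind a single interface.

```python
# Sketch of a front end that routes each question to a domain-specific
# open model. All names here are placeholders invented for illustration.

from typing import Callable, Dict

def history_model(q: str) -> str:   # stand-in for an open history model
    return f"[history model] answer to: {q}"

def medicine_model(q: str) -> str:  # stand-in for an open medical model
    return f"[medicine model] answer to: {q}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "pope": history_model,
    "nazi": history_model,
    "symptom": medicine_model,
}

def answer(question: str) -> str:
    """Send the question to the first matching domain model, if any."""
    lowered = question.lower()
    for keyword, model in ROUTES.items():
        if keyword in lowered:
            return model(question)
    return "[general model] answer to: " + question

if __name__ == "__main__":
    print(answer("Who was the first Pope?"))
```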

As important as building and training AI is running the model. No matter the inputs, scrutiny falls on the outputs, and any organization running the model will be subject to pressure. Companies are subject to influence from politicians, regulators, shareholders, employees and the general public, as well as armies of Twitter bots. Decentralized models, by contrast, can be hosted by any storage provider anywhere in the world and run on an open, decentralized compute network, such as Fluence, that processes auditable queries; they are immune to both hidden bias and censorship and will be far more trustworthy.
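One way to make ‘auditable queries’ concrete is an append-only log in which each query and response is chained to the hash of the previous entry. The sketch below is illustrative only and is not Fluence’s actual proof format; it simply shows how anyone could detect a tampered or deleted record.

```python
import hashlib
import json

# Illustrative append-only query log. Field names are invented; a real
# network would anchor these records with its own proof format.

def record(log: list, query: str, response: str) -> dict:
    """Append a query/response pair, chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"query": query, "response": response, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({k: body[k] for k in ("query", "response", "prev")},
                   sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify(log: list) -> bool:
    """Anyone can re-hash the chain to detect tampered or deleted entries."""
    prev = "0" * 64
    for entry in log:
        payload = {k: entry[k] for k in ("query", "response", "prev")}
        expected = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
record(log, "example question", "example answer")
print(verify(log))  # True; altering any past entry breaks the chain
```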

Google will update Gemini to be more historically accurate, but bias will remain; it will just be harder to see, and therefore even more dangerous. We should treat this revelation of Google’s manipulation as a welcome warning about the risks of relying on any company to develop and run AI models, no matter how well intentioned. This is our call to build open, transparent and decentralized AI systems we can trust.