
AI Model Training Is Changing. Is It A Step Toward Democratization?

Jiahao Sun, Founder and CEO of FLock.io.

Training AI models used to mean billion-dollar data centers and massive infrastructure. Smaller players had no real path to competing.

That’s starting to shift. New open-source models and better training techniques have lowered costs, making it possible for smaller teams to enter the space. I’ve seen it happen, and it’s clear AI development isn’t as exclusive as it once was.

A Shift In AI Model Training

DeepSeek made headlines for training a competitive AI model, R1, on what first seemed like minimal resources. Initial reports circulated that the model was trained for a mere $6 million, a far cry from the training costs of the OpenAI models that R1 tied or even exceeded on a range of benchmarks. Industry analysts then told a different story, clarifying the number and types of GPUs involved as well as the $1.3 billion investment in DeepSeek’s parent company.

Lately, we’ve seen companies fine-tune existing open-source models, modifying the last 10% to 20% instead of building everything from the ground up. That approach has become more common thanks to open-source efforts like Meta’s Llama, which provide a foundation for others to build on. Mistral has taken a similar path, releasing smaller, highly optimized models based on open architectures. A team of researchers from Stanford University and the University of Washington recently released an AI model competitive with both DeepSeek’s and OpenAI’s by optimizing existing models instead of developing entirely new ones.
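The core idea behind that kind of fine-tuning — freeze the pretrained base, retrain only a small task-specific head — can be sketched in a few lines. This is a toy illustration in plain Python, not how any of the models above were actually built: the "pretrained" base here is just a fixed feature transform, and only the small head on top gets updated.

```python
# Toy sketch of "fine-tune only the last layers": a frozen base plus a
# small trainable head. Real fine-tuning of a model like Llama would use
# a framework such as PyTorch; this is illustrative only.

def features(x):
    """Stand-in for a pretrained, frozen base model: a fixed transform
    whose parameters are never updated during fine-tuning."""
    return [x, x * x]

def fine_tune_head(data, lr=0.05, epochs=200):
    """Train only the small task-specific head on top of frozen features."""
    w = [0.0, 0.0]  # head weights (trainable)
    b = 0.0         # head bias (trainable)
    for _ in range(epochs):
        for x, y in data:
            f = features(x)                 # frozen forward pass
            pred = sum(wi * fi for wi, fi in zip(w, f)) + b
            err = pred - y
            for i in range(len(w)):         # gradient step on head only
                w[i] -= lr * err * f[i]
            b -= lr * err
    return w, b

# Hypothetical downstream task: y = 3x^2 - x + 2, which the frozen
# features [x, x^2] can represent exactly.
data = [(k / 10, 3 * (k / 10) ** 2 - k / 10 + 2) for k in range(-10, 11)]
w, b = fine_tune_head(data)
```

Because only the head's handful of parameters move, each step is far cheaper than updating the full base — the same economics that let small teams adapt open models instead of training from scratch.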

AI model training is changing, and smaller players are finding ways to compete. But open-source AI alone doesn’t fix the bigger problem I see—centralization. Even Apple’s latest AI push still relies on cloud processing for complex tasks. The real democratization of AI won’t happen until models run entirely on personal devices, where users control their own data.

That’s where things are headed. Federated learning is one of the most promising steps in that direction. Instead of sending raw data to a central server for training, federated learning allows AI models to be trained directly on personal devices. Each device processes data locally, updating the model without exposing user information. This approach significantly reduces privacy risks while making AI development less dependent on large-scale cloud infrastructure.
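The training loop described above can be sketched in code. This is a minimal, illustrative version of federated averaging (the standard baseline algorithm, sometimes called FedAvg) with a toy linear model and three simulated devices — production systems layer secure aggregation, device sampling and compression on top of this:

```python
# Minimal federated averaging sketch: each "device" trains on its own
# private data, and only model weights -- never raw data -- are shared
# and averaged. Toy linear model; illustrative only.
import random

def local_update(weights, data, lr=0.1, epochs=5):
    """Train on one device's private data; the data never leaves here."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def federated_round(global_weights, device_datasets):
    """Each device trains locally; only the weight updates are averaged."""
    local = [local_update(global_weights, d) for d in device_datasets]
    avg_w = sum(w for w, _ in local) / len(local)
    avg_b = sum(b for _, b in local) / len(local)
    return (avg_w, avg_b)

# Three devices, each holding private samples of the same underlying
# relationship y = 2x + 1.
random.seed(0)
devices = [[(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(20))]
           for _ in range(3)]

weights = (0.0, 0.0)
for _ in range(50):
    weights = federated_round(weights, devices)
```

After enough rounds the shared model converges toward the underlying relationship even though no device ever saw another's data — which is the privacy property the paragraph above describes.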

GDPR And The Global Push For AI Privacy

GDPR has made it considerably more difficult for AI companies in Europe to operate the way they do in the U.S. These companies need user data to train models, but GDPR restricts how that data can be transferred, stored and processed.

Right now, most AI systems work in a way that’s fundamentally at odds with GDPR. Companies like OpenAI and Google would need fully independent data centers in Europe to comply, which OpenAI is offering in some areas, but that’s an expensive workaround.

Decentralized AI offers a real alternative. Instead of sending user data to a central server, federated learning allows AI models to be trained locally—keeping data within the user’s control and within legal boundaries. That’s why I see Europe as a place where decentralized AI could take off faster than in the U.S. If leading AI companies can’t meet GDPR requirements under the current structure, those companies will have to rethink their approach altogether.

What DeepSeek’s Case Signals About The Future Of AI

I don’t see DeepSeek as an example of decentralization, but I do see it as part of a much bigger trend. Whether or not their model holds up as a true low-cost innovation, the emergence of a powerful, open-source model shows that AI development outside of the usual players is gaining attention.

The fact that so many were eager to believe a company could train an AI model at such a low cost tells me there’s demand for something different.

AI development shouldn’t be limited to just one or two dominant players. It should be open, adaptable and customized to how people actually want to use it. Right now, too much power is concentrated in the hands of a few companies. If AI development keeps heading in that direction, we could end up in a situation where there are only two AI models to choose from, which is really just an illusion of choice. That’s not the future I want to see.

More challengers will continue to emerge, and the best models won’t always come from the biggest companies. Competing directly with major AI companies is difficult, but finding the right niche—whether through efficiency, cost or customization—creates opportunity.

The future of AI shouldn’t be about forcing people to pick between centralized models. It should be about giving users real control over how AI works for them. That means not just breaking AI out of the hands of a few corporations but also creating AI that people can own, fine-tune and run on their own devices. That’s the direction I want to see this industry move toward—more competition, more customization and more control in the hands of users.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

