Opinion by: Naman Kabra, co-founder and CEO of NodeOps Network
Graphics processing units (GPUs) have become the default hardware for many AI workloads, especially when training large models. That thinking is everywhere. While it makes sense in some contexts, it has also created a blind spot that is holding us back.
GPUs have earned their reputation. They are incredible at crunching massive numbers in parallel, which makes them perfect for training large language models or running high-speed AI inference. That is why companies such as OpenAI, Google and Meta spend a lot of money building GPU clusters.
While GPUs may be the preferred choice for running AI, we cannot forget about central processing units (CPUs), which are still very capable. Forgetting this could be costing us time, money and opportunity.
CPUs are not obsolete. More people need to realize that they can be used for AI tasks. They are sitting idle in millions of machines worldwide, capable of running a wide range of AI tasks efficiently and affordably, if only we give them the chance.
Where CPUs shine in AI
It is easy to see how we got here. GPUs are built for parallelism. They can handle massive amounts of data simultaneously, which is excellent for tasks such as image recognition or training a chatbot with billions of parameters. CPUs cannot compete on those jobs.
AI is not just model training, and it is not just high-speed matrix math. Today, AI includes tasks such as running smaller models, interpreting data, managing logic chains, making decisions, fetching documents and answering questions. These are not just “dumb math” problems. They require flexible thinking. They require logic. They require CPUs.
While GPUs get all the headlines, CPUs quietly handle the backbone of many AI workflows, especially when you look at how AI systems actually run in the real world.
CPUs excel at what they were designed for: flexible, logic-based operations. They are built to handle one or a few tasks at a time, extremely well. That may not sound impressive next to the massive parallelism of GPUs, but many AI tasks do not need that kind of firepower.
Consider autonomous agents, those fancy tools that can use AI to complete tasks such as searching the web, writing code or planning a project. Sure, the agent might call a large language model running on a GPU, but everything around that call, the logic, the planning, the decision-making, runs just fine on a CPU.
Even inference (AI-speak for actually using the model after it has been trained) can be done on CPUs, especially if the models are smaller, optimized, or running in situations where ultra-low latency is not necessary.
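To make that concrete, here is a minimal sketch, not from the author, of CPU-only inference using the Hugging Face Transformers library; the task and model name are illustrative assumptions, not anything the article prescribes.

```python
# Minimal sketch: CPU-only inference with a small model.
# Assumes the "transformers" package (with a PyTorch backend) is installed.
# The task and model below are illustrative choices only.
from transformers import pipeline

# device=-1 pins the pipeline to the CPU; no GPU is required.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=-1,
)

# Prints a list of dicts of the form [{'label': ..., 'score': ...}]
print(classifier("Idle CPUs can still do useful AI work."))
```

Smaller, quantized or distilled models like this one are exactly the kind of workload that runs comfortably without specialized hardware.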
CPUs can handle a huge variety of AI tasks very well. We are so fixated on GPU performance, though, that we are not using what is already right in front of us.
We do not need to keep building expensive new data centers packed with GPUs to meet the growing demand for AI. We just need to use what is already out there efficiently.
That’s where things get interesting, because now we have a way to actually do that.
How decentralized computing networks change the game
DePINs, or decentralized physical infrastructure networks, are a viable solution. It’s a mouthful, but the idea is simple: People contribute their unused computing power (such as idle CPUs), which is pooled into a global network that others can tap into.
Instead of renting time on a centralized cloud provider’s GPU cluster, you can run AI workloads across a decentralized network of CPUs anywhere in the world. These platforms create a kind of peer-to-peer computing layer where jobs can be distributed, executed and verified securely.
This model has some clear benefits. First, it’s much cheaper. You don’t need to pay premium prices for a scarce GPU when a CPU will do the job just fine. Second, it scales naturally.
The available compute grows as more people connect their machines to the network. Third, it brings computing closer to the edge. Tasks can run on machines near where the data lives, reducing latency and increasing privacy.
Think of it as Airbnb for compute. Instead of building more hotels (data centers), we make better use of all the empty rooms (idle CPUs) people already have.
By shifting our thinking and using decentralized networks to route AI workloads to the right processor type, GPUs when necessary and CPUs whenever possible, we unlock scale, efficiency and resilience.
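As a rough illustration of that routing idea, here is a purely hypothetical scheduler sketch; none of these names correspond to NodeOps or to any real network’s API. It simply sends each job to the cheapest class of hardware that can meet its requirements.

```python
# Hypothetical sketch of routing AI jobs to GPUs or pooled idle CPUs.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    needs_training: bool        # full model training usually wants GPUs
    latency_ms_budget: int      # how quickly a response is needed
    model_params_millions: int  # rough model size

def route(job: Job) -> str:
    """Return the processor class for a job: GPUs only when necessary."""
    if job.needs_training or job.model_params_millions > 7_000:
        return "gpu-cluster"        # large training or very large models
    if job.latency_ms_budget < 50:
        return "gpu-cluster"        # ultra-low-latency serving
    return "idle-cpu-network"       # everything else goes to pooled CPUs

jobs = [
    Job("agent-planning-step", False, 2_000, 1_300),
    Job("fine-tune-70b-model", True, 0, 70_000),
]
for job in jobs:
    print(job.name, "->", route(job))
```

The exact thresholds matter less than the principle: most everyday AI work, agent logic, document retrieval, small-model inference, falls through to the CPU pool.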
The bottom line
It’s time to stop treating CPUs as second-class citizens in the AI world. Yes, GPUs are critical. No one is denying that. CPUs are everywhere, though. They are underused but still perfectly capable of powering many of the AI tasks we care about.
Instead of throwing more money at the GPU shortage, let’s ask a smarter question: Are we even using the compute we already have?
With decentralized compute platforms stepping up to connect idle CPUs to the AI economy, we have a huge opportunity to rethink how we scale AI infrastructure. The real constraint is not just GPU availability. It is a mindset shift. We are so conditioned to chase high-end hardware that we overlook the untapped potential sitting idle across the network.
Opinion by: Naman Kabra, co-founder and CEO of NodeOps Network.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.