By Stephen Sefton. Sunday August 3, 2025 2:55 pm
The struggle over the development of Artificial Intelligence technology is fundamental to the future of humanity. Many observers consider that, as in the case of climate change, this is a struggle for humanity’s very survival. A recent article by Maria Zakharova, spokesperson for the Russian Foreign Ministry, emphasizes the geopolitical and geoeconomic elements of the issue by noting how the Western neocolonial bias is integral to the development of Artificial Intelligence by Western digital mega-corporations.
This is another example of the conflict between the focus of the capitalist elites of the collective West on their insanely growing concentration of wealth, and the focus on the development of the human person by governments responsive to the needs of their people, such as China and Russia. Naturally, as Comrade Zakharova points out, the prejudices of those who provide the inputs for training Artificial Intelligence will dictate its results. It’s another variation on the basic computing principle of “garbage in, garbage out.”
Since the field’s inception, experts have debated the most appropriate direction to ensure the successful development of Artificial Intelligence. In recent decades, the greatest research and investment resources have been dedicated to the development of large language models. Proponents of these models assert that the best path to genuine Artificial Intelligence is through learning by so-called neural networks, based on their ability to respond to assigned tasks by reviewing and analyzing enormous amounts of data at tremendous speed.
The fundamental problem with this model of Artificial Intelligence is that it cannot be trusted to recognize and correct its own errors. Large language models can recognize patterns or structures without constant human guidance, but they struggle to assimilate unfamiliar contexts because they favor high-probability outcomes, not necessarily true ones. Their responses to assigned tasks rest on a supposedly exhaustive statistical review of past data, without taking into account anomalous or alternative future outcomes that do not follow past statistical patterns.
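Purely as a schematic illustration of that point about favoring high-probability over true outcomes (an invented example, not a description of any real model), consider greedy next-token selection over a made-up probability table:

```python
# Toy illustration: a language model chooses the most probable continuation,
# which is not necessarily the true one. The probabilities below are invented.
next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,     # frequent in training text, but wrong
        "Canberra": 0.40,   # correct, but statistically less favored
        "Melbourne": 0.05,
    }
}

prompt = "The capital of Australia is"
# Greedy decoding: pick whichever continuation has the highest probability.
choice = max(next_token_probs[prompt], key=next_token_probs[prompt].get)
print(choice)  # prints "Sydney" -- the statistically favored answer, not the true one
```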
For this reason, large language models can suffer from episodes of so-called “hallucinations,” which occur when their neural networks lack sufficient appropriate information and fabricate it in a spurious manner. Some specialists argue that a fundamental problem related to “hallucinations” is that these systems lack a comprehensive mechanism for verifying reality in real time. Attempting to correct this type of technical problem entails significantly higher costs and much more complex solutions. Aside from these technical criticisms, other specialists point to the lack of transparency of current Artificial Intelligence systems and their lack of ethical sense.
Prominent specialists such as Gary Marcus and Subbarao Kambhampati recognize the strengths of large language models but insist on the importance of also recognizing their limitations as reasoning models capable of solving highly complex problems. Subbarao Kambhampati’s work has demonstrated how the logical processes of Artificial Intelligence based on neural networks can appear correct yet still produce erroneous results. These specialists argue that neural networks must be combined with a neuro-symbolic approach to Artificial Intelligence capable of following normative rules.
In this way, they argue, Artificial Intelligence can develop a cognitive model capable of obeying normative rules without relying on learning from large volumes of data. The idea is to achieve a generative Artificial Intelligence that produces responses to assigned tasks based on correctly structured and explainable reasoning, capable of assimilating unknown situations. These debates and arguments are of utmost importance to ensuring the successful application of Artificial Intelligence in healthcare, finance, agriculture, maritime and aviation navigation, and countless other areas vital to human life.
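As a purely illustrative toy sketch of the neuro-symbolic idea described above (not a description of any of the cited researchers’ systems; all names, numbers, and rules here are hypothetical), a statistical component proposes an answer and an explicit rule layer accepts it only if it satisfies stated normative rules:

```python
# Toy neuro-symbolic pipeline: a statistical component proposes, a rule layer disposes.

def neural_propose(task: str) -> dict:
    """Stand-in for the neural/statistical component: returns the candidate
    answer it considers most probable for the assigned task."""
    return {"decision": "approve_loan", "applicant_income": 12_000, "loan_amount": 100_000}

def symbolic_check(candidate: dict) -> bool:
    """Explicit, human-readable normative rules the candidate must obey,
    regardless of how probable the neural component considered it."""
    rules = [
        lambda c: c["loan_amount"] > 0,                            # sanity rule
        lambda c: c["loan_amount"] <= 5 * c["applicant_income"],   # affordability rule
    ]
    return all(rule(candidate) for rule in rules)

candidate = neural_propose("Should this loan application be approved?")
if symbolic_check(candidate):
    print("Accepted:", candidate["decision"])
else:
    print("Rejected: the proposal violates an explicit rule; refer it to a human reviewer.")
```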
On the other hand, as Comrade Maria Zakharova explains, “building a just and multipolar world order will depend on the ability to curb attempts to reproduce the neocolonial inequalities and suppressions of the past in the digital sphere.” Of course, there is a direct connection between the policies of Western elites determined to impose their Artificial Intelligence technology on the majority world and their support for the genocide of the Palestinian people, their military and economic war against Russia, and their ongoing hybrid war against China. It’s all part of their desperate defense of the undeserved advantages they’ve achieved through centuries of genocidal conquest and slavery.
This desperation of Western elites is reflected in the poor profitability of their massive investment. Investment in the sector by Western companies to date is estimated at over US$330 billion, generating gross revenues, not profits, of less than US$30 billion. The proclaimed benefits of increased productivity, other than improvements in industrial efficiency resulting from the use of robotic systems, have yet to be significantly demonstrated. On the contrary, credible surveys have found that the application of models like ChatGPT in administrative, managerial, or academic contexts can lead to unintended consequences that diminish efficiency.
Another highly negative aspect of the application of large language models is the enormous consumption of electricity and water by their data centers. The International Energy Agency reportedly forecasts that data centers’ electricity consumption will be double the current figure by 2030, roughly equivalent to the electricity consumption of an entire advanced economy like Japan. Excessive water consumption to cool data centers also fuels the arguments of experts who call for a complete halt to the application of Artificial Intelligence because of the irreversible environmental damage it causes.
It is estimated that Amazon, Google, and Microsoft alone operate more than 630 data centers around the world, with an average water consumption of two million liters per day for every 100 megawatts of electricity, sometimes in areas where water is already scarce. Thus, the collective Western model for developing Artificial Intelligence is absolutely unsustainable. Furthermore, as Maria Zakharova explains, all of this does not take into account the neocolonial extraction of rare earth resources from countries around the world, materials essential to the technological applications of Artificial Intelligence.
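To put the water figure cited above into more familiar units, here is a rough back-of-the-envelope conversion using only the numbers reported in this article (an illustration, not an independent estimate):

```python
# Convert the cited figure (2,000,000 liters/day per 100 MW of data-center load)
# into liters of cooling water per megawatt-hour of electricity consumed.
liters_per_day = 2_000_000
megawatts = 100
megawatt_hours_per_day = megawatts * 24          # 100 MW running all day = 2,400 MWh
liters_per_mwh = liters_per_day / megawatt_hours_per_day
print(f"≈ {liters_per_mwh:.0f} liters of water per MWh")  # ≈ 833 liters per MWh
```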
The response in Latin America to this problem has followed the predictable ideological patterns prevailing in the region. The dignified and sovereign response of the ALBA-TCP countries has been the initiative to create a Regional Center for the Development of Artificial Intelligence for Latin America and the Caribbean. The initiative promotes technological autonomy so as to enable solutions appropriate and relevant to our regional and national realities. It aims to strengthen value-added processes in production chains, promote greater economic integration at the national and regional levels, and prioritize historically marginalized regions and populations.
The initiative recognizes that Artificial Intelligence is a new battlefield in the hybrid war waged against our countries by the declining powers of the collective West. By contrast, the sovereign aspect of the Brazilian Artificial Intelligence Plan, launched in August of last year, has been largely co-opted by the North American companies Amazon, Google, and Microsoft, which have received tax benefits for their investments in data centers in the country. This concessionary treatment in the face of an extractive assault on Brazil’s sovereignty is typical of neoliberal social-democratic governments and exemplifies precisely the neocolonialism Maria Zakharova warns against.
The cases of Chile, Uruguay, and other countries in the region are similar to that of Brazil. Local natural resources are exploited for the benefit of North American elites and their local business counterparts, leaving no benefits for the populations under attack. Greater political will is needed in the region to defend national sovereignty firmly enough to ensure the implementation of Artificial Intelligence focused on the human development of peoples: health, education, productive activities, disaster prevention and mitigation, and many other sectors of national life.
At the level of the BRICS+ countries, China’s DeepSeek application has already proven more competitive than Western applications like ChatGPT, being more efficient and consuming far fewer natural resources. By itself, DeepSeek has exposed the great lie of capitalism’s supposedly superior efficiency in promoting innovation. Initiatives by China, India, and Russia to develop Artificial Intelligence, and to a lesser extent by the public sector in other BRICS+ countries such as Brazil, South Africa, Egypt, and Ethiopia, signal an awareness among the BRICS countries of the threat of neocolonial monopolization of this technology by the collective West.
Like our ALBA-TCP countries, the BRICS+ countries, both collectively and nationally, support investments and initiatives that promote digital sovereignty, both at the software level and in the development of the required physical technology. In this regard, China, India, and Iran are developing their Artificial Intelligence technology through the new disciplines of quantum computing and neuromorphic computing. India and China are even developing semiconductors based on so-called “photonic chips” and exploring the use of new materials to overcome the limitations of conventional silicon chip technology.
New techniques and materials to improve the speed, efficiency, and sustainability of digital technologies will radically change the paths open to the development of Artificial Intelligence. It is no coincidence that the dynamic economies of the leading countries of the majority world, and not predatory giant Western corporations, are driving these new initiatives. This reality is another sign of the unstoppable advance of commercial, financial, and technological democratization in international relations, developing thanks to collaborative solidarity based on respect and equality among the nations and peoples of the majority world.
I don’t think there’s any evidence to suggest that gen-AI atrophies the brain. It simply lets you focus on different things. It’s like arguing that a high level programming language atrophies the brain compared to writing assembly.
All workers are exploited under capitalism; that’s an inherent aspect of the system. Every technology will be used by capitalists to increase exploitation. Stopping technological progress is not the answer here.
I don’t think I’ve made an argument regarding AI being important to humanity’s survival. Very much agree that capitalism is the biggest problem facing humanity.
Democratizing creativity is not a bullshit phrase. I’ve explained explicitly what I meant there. More people are able to express themselves because less technical skill is required for the production of art. We already see this being put into practice with many political memes being generated using gen-AI tools.
The argument that a grassroots boycott can stop the development of this tech is beyond silly. Companies are pouring billions (maybe trillions) of dollars into it right now. There is no chance of any sort of boycott stopping this technology. Furthermore, there is already a critical mass of people who have absolutely no qualms about using it.
All a boycott accomplishes is that people who actually have moral considerations regarding this tech will shut themselves out of the process of developing it. This is simply ceding power to capitalists.
The rational thing to do is to follow the proven model of open source and create community-driven alternatives developed outside the capitalist incentive system.
The argument that we’re joining an arms race that does nothing for the workers is completely false. These tools can already be run locally; they don’t have to run on some server farm. They are already in the hands of the workers.
My whole argument is literally against using AI as a service, and for building tools that can either be run locally or run in a decentralized fashion using technologies similar to BitTorrent and SETI@home. I’m frankly disappointed that you would ignore this key aspect of my argument to make a straw man.
And no, public does not mean government controlled. It means an open source model that’s community driven. It means developing AI in a way that individuals can run themselves and do with as they please.
Your whole argument here is based on the false premise that you need a data centre to run models. This hasn’t been true for years now.
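As a concrete but purely illustrative sketch of what this looks like in practice, the snippet below queries an open-weight model served on the same machine through Ollama’s local HTTP API (assuming Ollama is installed and a model has already been pulled; the model name and prompt are just placeholders):

```python
# Minimal sketch: query a locally served open-weight model via Ollama's HTTP API.
import json
import urllib.request

payload = {
    "model": "deepseek-r1:7b",   # placeholder: any locally pulled open-weight model
    "prompt": "Summarise the case for community-run AI infrastructure.",
    "stream": False,             # return one complete response instead of streaming
}
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```

No remote service is involved here; the request never leaves localhost.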
People who reject AI today will almost certainly end up being left behind as this technology evolves. It’s the same thing we already see happening to older generations who are struggling to use computers or navigate the web. Keeping up with the technology allows gradually learning it, getting used to it, and developing intuition for how it works and what it can do.
Meanwhile, the only take unbecoming of a Marxist is to reject the notion that people develop skills using their tools. The skill of photography is honed through constant practice. You learn to identify composition, lighting, subjects, and so on, by doing photography. Dismissing the role of experience and skill in the production of art is frankly the height of absurdity. If you think experience is not important, then I encourage you to try doing photography and see how your results stack up against professionals.
It’s also ironic how you proceed to contradict yourself in the next paragraph where you discuss the need for skill when it comes to programming. Turns out you understand why skill is important in the domain you’re personally familiar with, but you dismiss it in other domains such as photography. It seems to me that you’re falling for the classic trap of thinking that the domain you work in requires skill, while photography is just pointing the camera at something and pressing a button.
I also get the impression you don’t have an appreciation for how many layers there are in modern computer systems. If you think you understand what lies underneath because you know a “low level” language like C, I have some bad news for you: https://dl.acm.org/doi/pdf/10.1145/3212477.3212479
That said, I completely agree that using AI tools for coding does require understanding the underlying concepts. This is why I repeatedly point out that AI is just a tool, and skill, which you dismissed earlier, is a key aspect of this.
The only argument that’s ignorant here is your own. Training AI systems is done rarely, and the vast majority of the time it’s not even done from scratch. People start from foundation models and tweak the weights. Meanwhile, the majority of energy usage comes from actual prompting, which is already incredibly efficient. The latest GLM 4.5 model uses only 13% of the resources that DeepSeek uses, and DeepSeek itself can already be run on a consumer desktop. The argument is so dishonest that it makes me very angry.
Finally, it’s quite obvious to anybody who is minimally rational that AI isn’t going anywhere. It’s literally being developed all over the world, and there is no chance of all of humanity agreeing to stop its development. The only question is how it will be developed and who will control it.
Honestly, I started reading this, but after like five or more parts just saying “no” I lost interest.
I think it is incredibly disrespectful to counter arguments with just saying the opposite.
If you are interested in meaningful debate, let me know.
What’s actually disrespectful is dismissing the work I put into addressing your points with a glib reply, then acting like you’re interested in a meaningful debate. I’ve provided arguments and examples of why I disagree with you, and if you’re unable to engage with that then it’s entirely your problem.