By Stephen Sefton. Sunday August 3, 2025 2:55 pm
The struggle over the development of Artificial Intelligence technology is fundamental to the future of humanity. Many observers consider that, as in the case of climate change, this is a struggle for humanity’s very survival. A recent article by Maria Zakharova, spokesperson for the Russian Foreign Ministry, emphasizes the geopolitical and geoeconomic elements of the issue by noting how the Western neocolonial bias is integral to the development of Artificial Intelligence by Western digital mega-corporations.
This is another example of the conflict between the capitalist elites of the collective West, focused on their insanely growing concentration of wealth, and governments responsive to the needs of their people, such as China and Russia, focused on the development of the human person. Naturally, as Comrade Zakharova points out, the prejudices of those who provide the inputs for training Artificial Intelligence will dictate its results. It’s another variation on the basic computing principle “garbage in, garbage out.”
Since its inception, experts in the field have debated the most appropriate direction to ensure the successful development of Artificial Intelligence. In recent decades, the greatest research and investment resources have been dedicated to large language models, whose proponents assert that the best path to genuine Artificial Intelligence is learning by so-called neural networks, which respond to assigned tasks by reviewing and analyzing enormous amounts of data at tremendous speed.
The fundamental problem with this model of Artificial Intelligence is that it cannot be trusted to recognize and correct errors. Large language models can recognize patterns or structures without requiring constant human guidance, but they struggle to assimilate unfamiliar contexts because they favor high-probability outcomes, not necessarily true outcomes. Their responses to assigned tasks are based on a supposedly exhaustive statistical review of past data without taking into account possible anomalous or alternative future outcomes that do not obey statistical probabilities.
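As a rough illustration of the point about probability versus truth, here is a minimal sketch with a toy, hypothetical next-token distribution (the words and numbers are invented purely for illustration; real models work over enormous vocabularies):

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is"
# (hypothetical numbers for illustration only).
next_token_probs = {
    "Sydney": 0.45,    # most frequent in casual text, but wrong
    "Canberra": 0.40,  # correct, yet less probable in this toy distribution
    "Melbourne": 0.15,
}

def greedy_pick(probs):
    """Return the highest-probability token -- likelihood, not truth."""
    return max(probs, key=probs.get)

def sample_pick(probs):
    """Sample in proportion to probability; wrong answers remain possible."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy_pick(next_token_probs))   # "Sydney" -- plausible, not factual
print(sample_pick(next_token_probs))
```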
For this reason, large language models can suffer from episodes of so-called “hallucinations,” which occur when their neural networks lack sufficient relevant information and fabricate it in a spurious manner. Some specialists argue that a fundamental problem related to “hallucinations” is that these systems lack a comprehensive mechanism for verifying reality in real time. Attempting to correct this type of technical problem entails significantly higher costs and much more complex solutions. Aside from these technical criticisms, other specialists point to the lack of transparency of current Artificial Intelligence systems and their lack of ethical sense.
Prominent specialists such as Gary Marcus and Subbarao Kambhampati recognize the strengths of large language models but insist on the importance of also recognizing their limitations as reasoning models capable of solving highly complex problems. Subbarao Kambhampati’s work has demonstrated how the logical processes of Artificial Intelligence based on neural networks can appear correct yet still produce erroneous results. These specialists argue that neural networks must be combined with symbolic reasoning, in a neuro-symbolic approach to Artificial Intelligence capable of following normative rules.
In this way, they argue, Artificial Intelligence can develop a cognitive model capable of obeying normative rules without relying on learning from large volumes of data. The idea is to achieve a generative Artificial Intelligence that produces responses to assigned tasks based on correctly structured and explainable reasoning, capable of assimilating unknown situations. These debates and arguments are of utmost importance to ensuring the successful application of Artificial Intelligence in healthcare, finance, agriculture, maritime and aviation navigation, and countless other areas vital to human life.
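A minimal sketch of the neuro-symbolic idea just described, assuming a hypothetical neural scorer and a hand-written rule layer (the function names and the toy example are illustrative assumptions, not taken from the researchers cited):

```python
# Hypothetical neuro-symbolic pipeline: a statistical model proposes answers,
# and explicit symbolic rules accept or reject them before anything is returned.

def neural_propose(question):
    """Stand-in for a neural model: returns candidate answers with scores.
    (Hard-coded here purely for illustration.)"""
    return [("8", 0.7), ("9", 0.3)]  # candidates for "What is 4 + 5?"

def symbolic_check(question, answer):
    """Normative rule layer: verify arithmetic symbolically instead of
    trusting the statistically most likely candidate."""
    if question == "What is 4 + 5?":
        return int(answer) == 4 + 5
    return True  # no rule applies; fall back to the neural ranking

def answer(question):
    for candidate, _score in neural_propose(question):
        if symbolic_check(question, candidate):
            return candidate
    return "no verified answer"

print(answer("What is 4 + 5?"))  # "9": the rule overrides the higher-scored "8"
```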
On the other hand, as Comrade Maria Zakharova explains, “building a just and multipolar world order will depend on the ability to curb attempts to reproduce the neocolonial inequalities and suppressions of the past in the digital sphere.” Of course, there is a direct connection between the policies of Western elites determined to impose their Artificial Intelligence technology on the majority world and their support for the genocide of the Palestinian people, their military and economic war against Russia, and their ongoing hybrid war against China. It’s all part of their desperate defense of the undeserved advantages they’ve achieved through centuries of genocidal conquest and slavery.
This desperation of Western elites is reflected in the poor profitability of their massive investment. Investment in the sector by Western companies to date is estimated at over US$330 billion, generating gross revenues, not profits, of less than US$30 billion. The proclaimed benefits of increased productivity, other than improvements in industrial efficiency resulting from the use of robotic systems, have yet to be significantly demonstrated. On the contrary, credible surveys have found that the application of models like ChatGPT in administrative, managerial, or academic contexts can lead to unintended consequences that diminish efficiency.
Another highly negative aspect of the application of large language models is the enormous consumption of electricity and water by their data centers. The International Energy Agency reportedly forecasts that data centers’ electricity consumption will double by 2030, reaching the equivalent of the electricity consumption of an entire advanced economy like Japan. Excessive water consumption to cool data centers also fuels the arguments of experts who call for a complete halt to the application of Artificial Intelligence due to the irreversible environmental damage.
It is estimated that Amazon, Google, and Microsoft alone operate more than 630 data centers around the world, with an average water consumption of two million liters per day for every 100 megawatts of electricity, sometimes in areas where water is already a scarce resource. Thus, the collective Western model for developing Artificial Intelligence is absolutely unsustainable. Furthermore, as Maria Zakharova explains, all of this does not take into account the neocolonial extraction of rare earth resources from countries around the world, materials that are essential for the technological applications of Artificial Intelligence.
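To make the scale of that figure concrete, here is a quick back-of-the-envelope calculation using only the ratio quoted above; the 300 MW facility size is a hypothetical example, not a number from the article:

```python
# Back-of-the-envelope water use from the article's figure:
# roughly 2 million liters per day for every 100 MW of data-center load.
LITERS_PER_DAY_PER_100_MW = 2_000_000

def daily_water_liters(load_mw):
    return LITERS_PER_DAY_PER_100_MW * (load_mw / 100)

# Hypothetical 300 MW facility (illustrative size only):
print(f"{daily_water_liters(300):,.0f} liters/day")                      # 6,000,000
print(f"{daily_water_liters(300) * 365 / 1e9:.2f} billion liters/year")  # ~2.19
```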
The response in Latin America to this problem has followed the predictable ideological patterns prevailing in the region. The dignified and sovereign response of the ALBA-TCP countries has been the initiative to create a Regional Center for the Development of Artificial Intelligence for Latin America and the Caribbean. The initiative promotes technological autonomy in order to enable solutions appropriate and relevant to our regional and national realities. It aims to strengthen value-added processes in production chains, promote greater economic integration at the national and regional levels, and prioritize historically marginalized regions and populations.
The initiative recognizes that Artificial Intelligence is a new battlefield in the hybrid war waged against our countries by the declining powers of the collective West. By contrast, the sovereign aspect of the Brazilian Artificial Intelligence Plan, launched in August of last year, has been largely co-opted by the North American companies Amazon, Google, and Microsoft, which have received tax benefits for their investments in data centers in the country. This concessionary treatment in the face of the extractive assault on Brazil’s sovereignty is typical of neoliberal social-democratic governments and exemplifies precisely the neocolonialism warned against by Maria Zakharova.
The cases of Chile, Uruguay, and other countries in the region are similar to that of Brazil. Local natural resources are exploited for the benefit of North American elites and their local business counterparts, leaving no benefits for the populations under attack. Greater political will is needed in the region to defend national sovereignty firmly enough to ensure the implementation of Artificial Intelligence focused on the human development of peoples: health, education, productive activities, disaster prevention and mitigation, among many other sectors of national life.
At the level of the BRICS+ countries, China’s DeepSeek application has already proven to be more competitive than Western applications like ChatGPT, being more efficient and consuming far fewer natural resources. By itself, DeepSeek has exposed the great lie of capitalism’s supposed superior efficiency in promoting innovation. Initiatives by China, India, and Russia to develop Artificial Intelligence, and to a lesser extent by the public sector in other BRICS+ countries such as Brazil, South Africa, Egypt, and Ethiopia, signal an awareness among the BRICS countries of the threat of neocolonial monopolization of this technology by the collective West.
Like our ALBA-TCP countries, the BRICS+ countries, both collectively and nationally, support investments and initiatives that promote digital sovereignty, both at the software level and in the development of the required physical technology. In this regard, China, India, and Iran are developing their Artificial Intelligence technology through the new disciplines of quantum computing and neuromorphic computing. India and China are even developing semiconductors based on so-called photonic chips and exploring the use of new materials to overcome the limitations of existing conventional silicon chip technology.
New techniques and materials to improve the speed, efficiency, and sustainability of digital technologies will radically change the paths open to the development of Artificial Intelligence. It is no coincidence that the dynamic economies of the leading countries of the majority world, and not predatory Western giant corporations, are driving these new initiatives. This reality is another sign of the unstoppable advance of commercial, financial, and technological democratization in international relations, thanks to collaborative solidarity based on respect and equality among the nations and peoples of the majority world.
I’d argue this highlights why it’s so important for this tech to be developed outside the control of western corps. People who think that it’s just going to go away if a niche group of leftists starts boycotting it are delusional. The reality is that it’s mainstream already, and the majority of people do not have any qualms about using it. The focus has to be on how we can build tools that are outside corporate control and that aren’t built with corporate values imbued into them.
I’m also rather optimistic that the economic bubble around AI in the west is going to burst, which will be the end of the corporate hype train. What we see coming out of China now is very promising as well; Chinese companies have embraced the open source model and treat AI as foundational tech. They’re directly undercutting the whole business model of US companies that are trying to monetize this tech directly. By keeping their models closed, US companies are ensuring that the rest of the world will be using ones from China, making China the standard setter going forward.
They’re directly undercutting the whole business model of US companies that are trying to monetize this tech directly.
Very true, and the article also mentions one important aspect, which is the lack of profitability of AI in the West. While working in the industry, I can see this with my own eyes:
- This desperation of Western elites is reflected in the poor profitability of their massive investment. Investment in the sector by Western companies to date is estimated at over US$330 billion, generating gross revenues, not profits, of less than US$30 billion. The proclaimed benefits of increased productivity, other than improvements in industrial efficiency resulting from the use of robotic systems, have yet to be significantly demonstrated. On the contrary, credible surveys have found that the application of models like ChatGPT in administrative, managerial, or academic contexts can lead to unintended consequences that diminish efficiency.
It also amazed me that some western Marxists are willing to denounce AI and project western cruelty onto non-westerners, while plenty of Marxists in the Global South celebrate it and share it with joy. I really hope that this western refusal and neo-Luddism among our comrades disappears.
I have to apologize for my distaste for it. It’s entirely chauvinistic, because it’s entirely based on what Western LLMs are. Please forgive me.
Exactly. I find it kind of funny how people on the left correctly say that the whole AI hype in the west is a bubble, yet at the same time scream that the sky is falling and AI needs to be boycotted. If we all agree that what the west is doing is not a productive use of the technology, then what’s all the panic about?
It’s also quite clear to me how a lot of westerners, including ones who identify as Marxists, have deeply internalized liberal values. The whole idea of boycotting AI is just a form of voting. The real solution is to engage materially with the problem, and to recognize that if we’re not happy with what corporations are doing, then the solution is to roll up our sleeves and put in the effort to build a compelling alternative.
@haui@lemmygrad.ml this article touches on lots of arguments from both sides that we wrote in here -> https://lemmygrad.ml/post/8734686 .
I really couldn’t find a better article than this one that encompasses my thoughts that I wanted to share with you in that Socialist music thread.
Also, this article will help you too @bunbun@lemmygrad.ml . I really hope you give this a read and see why it is important to not ignore the global south. I literally beg you to not ignore us.
Okay. So I now had the time to read yogthos’ article about it. At one point I was pretty angry, to be honest, but in general it is pretty well put. That does not mean I agree with one iota of it though. Here’s my answer to it:
Here are my thoughts on AI, as an IT professional.
Atrophying the brain
Gen-AI is like a bicycle. You can cover large distances in a short time with less effort than running. If you use it all the time, though, you will lose the ability to walk.
Copyright
While it is true that large companies will circumvent or outright break copyright laws and are massively overadvantaged by them, the fact that AI does the same but faster does not help small creators at all. They remain the massively overexploited class, now also through AI.
Humanity’s survival
The argument that AI is somehow important to humanity’s survival is absolute bullshit. Our survival currently depends on breaking capitalism before it breaks the planet, either by war or by overheating.
Don’t oppose progress
It is correct that Marxists should not oppose progress on principle. “Democratizing creativity,” on the other hand, is a bullshit phrase, suggesting that creativity itself was somehow held hostage (which it isn’t) instead of ideas being held hostage (which can be changed).
Planted thought
The mere existence of the arguments I debunked above shows the actual problem: capitalists planting positive ideas in our heads so that we somehow view AI in this favorable light. They do the same in the opposite direction with Russia, China, and other countries.
Democratizing Creativity
It is true that a factory worker can now make a poster. But it also means that the same poster can pop up next to it saying the opposite. That is not a good development. The barrier of creativity is the same as an entry barrier to a market. If you pull it down, more and lower-quality products enter the scene. That by itself might not be a problem, but you have to maintain oversight to ensure that the chaos does not spark pandemonium. In a food context, we are talking about mass deaths in the absence of sufficient controls and regulations. China is the best example of why “excessive” control leads to great outcomes.
Grassroots boycott is bad
The argument that grassroots boycott does not help is also false. There are thousands of examples where grassroots boycott prevented larger issues. Especially in the society we aim to create, everything will be handled at the grassroots level (communism). Using and adopting AI and normalizing it in leftist spaces makes it impossible, for example, to adhere to “that’s a fake, leftists don’t use AI.” The same argument can be made for using a flamethrower. There is a natural ethical border. This, by the way, does not mean AI is not okay to use as a testing or planning tool; it is just not to be used in an outward, open way, so as not to align oneself with it.
Losing the arms race, resistance is futile
The problem with the argument that we need to join the arms race is that doing so does nothing for workers. We are not making AI tools or building server farms to be used only by us. We are using the enemy’s tools, which helps them. Every prompt we make can be analyzed and used against us. Especially leftist creators can therefore be complicit in helping dismantle the left. We should instead use guerilla strategies to counter this threat. For example, we could poison information wherever we have access, and ask everyone to delete and/or poison their data as well. We could think of new ways to attack this instead of making funny pictures.
Demand public AI
This is equally futile, as “public” means government controlled, which means corporate controlled with some roundabouts. What we need is black sites: privately or collectively owned basements with computers and GPUs, to be used to purposefully build countermeasures. We need no oversight, no “moderation” in terms of a less “radical” approach. We need John Connor-style resistance to a Terminator-level threat.
The tool is inert
Of course, there is no need to quickly draw something anymore with AI. But because of that, there won’t be anyone able to draw it. It is the deindustrialization of the West to the benefit of India and China at the moment, and the de-skilling of the people in the face of AI in the future. A person who uses a calculator from the beginning will never learn to calculate by hand or in their head, because we are not able to know without practice. Tools are okay to use in general. But we are in a giant hype bubble. The people who reject AI today will be the ones people ask about drawing or photographing ideas tomorrow.
Infinite ableism and hubris
“Hand my camera to a novice, and it is unlikely they would produce anything interesting with it.” This is a take unbecoming of a Marxist. All humans (and animals) are the same. We can all learn the same things, separated only by certain traits which are not important. This argument leads to race theory if you follow the path consistently. There is of course a difference that comes from years of experience, but it is not nearly as important as people would like it to be.
Real coders
As a coder myself, I totally agree that high-level languages are cheating. You absolutely need to understand THAT there is something underneath, and you OUGHT to understand the basics of what lies underneath. Otherwise you have script kiddies who can make a beautiful website but don’t know how to turn off the computer. It is again the capitalist way of specializing people into dependency. Therefore, someone who does AI, for example, would need at least some understanding and possibly training to understand the stuff they are creating. Instead, they produce hundreds or thousands of misses, using electricity and water, both of which are scarce due to capitalism.
Skill issue
The argument here is that a worker does not have the time to learn how to use the camera to take the great picture. The more important question is “Should they?” The counterargument here is that there are more than enough pictures of anything and everything out there, and reality is so vast that we can’t even grasp it (dialectical materialism), so there is absolutely no point in adding fake reality into the mix.
Energy usage
This argument is either deeply ignorant or plainly made in bad faith. AI is not only the power it uses to answer prompts; it always includes the power of training. By using AI, one already has insanely high power draws (about enough to boil a liter of water per prompt), but additionally you have the scaling effects of the hype, meaning millions of people using the stuff, and scripts that produce a hundred prompts a second and mix the results. You have mass training efforts which use exactly the whole data centers that were mentioned, and one computer chews through kilowatts; a data center chews through megawatts. And they would not be built on every corner, and especially not in low-income neighborhoods, if they did not suck this much energy and produce this much noise pollution. The argument is so dishonest, it makes me very angry.
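For reference, and taking the “boiling a liter of water per prompt” figure purely at face value rather than endorsing it, a quick physics conversion shows what that would mean in electrical terms, assuming 1 liter of water heated from 20 °C to 100 °C with vaporization ignored:

```python
# Convert "boiling a liter of water" into electrical energy terms.
# Assumptions: 1 L of water = 1 kg, heated from 20 C to 100 C,
# specific heat 4.186 kJ/(kg*K); energy of vaporization is ignored.
mass_kg = 1.0
delta_t = 100 - 20
specific_heat_kj = 4.186

energy_kj = mass_kg * specific_heat_kj * delta_t   # ~335 kJ
energy_kwh = energy_kj / 3600                      # ~0.093 kWh

print(f"{energy_kj:.0f} kJ = {energy_kwh:.3f} kWh per prompt (if the figure held)")
```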
Who will control the tools
Like the atomic bomb, the AI arms race will definitely kill us, and someone needs to stop it. We should be the ones to very deliberately boycott AI, especially in overt usage, so as not to spread the hype any further. It is the democratic model. We only have a finite amount of time in our lives. We should spend it on healthy outcomes, not unhealthy ones.
I don’t think there’s any evidence to suggest that gen-AI atrophies the brain. It simply lets you focus on different things. It’s like arguing that a high level programming language atrophies the brain compared to writing assembly.
All workers are exploited under capitalism, that’s the inherent aspect of the system. Every technology will be used by capitalists to increase exploitation. Stopping technological progress is not the answer here.
I don’t think I’ve made an argument regarding AI being important to humanity’s survival. Very much agree that capitalism is the biggest problem facing humanity.
Democratizing creativity is not a bullshit phrase. I’ve explained explicitly what I meant there. More people are able to express themselves because there is less technical skill required for production of art. We already see this being put into practice with many political memes being generated using gen-AI tools.
The argument that a grassroots boycott can stop the development of this tech is beyond silly. Companies are pouring billions (maybe trillions) of dollars into it right now. There is no chance of any sort of boycott stopping this technology. Furthermore, there is already a critical mass of people who have absolutely no qualms about using it.
All a boycott accomplishes is that people who actually have moral considerations regarding this tech will shut themselves out of the process of this tech being developed. This is simply ceding power to capitalists.
The rational thing to do is to follow the proven model of open source and create community-driven alternatives that are developed outside the capitalist incentive system.
The argument that joining the arms race does nothing for the workers is completely false. These tools can already be run locally, and they’re not running on some server farms. They are already in the hands of the workers.
My whole argument is literally against using AI as a service and building tools that can either be run locally or using technologies similar to bittorrent and SETI@home in decentralized fashion. I’m frankly disappointed that you would ignore this key aspect of my argument to make a straw man.
And no, public does not mean government controlled. It means an open source model that’s community driven. It means developing AI in a way that individuals can run and do with as they please.
Your whole argument here is based on the false premise that you need a data centre to run models. This hasn’t been true for years now.
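As a concrete illustration of the “run it locally” point, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder for any quantized GGUF model file you have downloaded yourself, and nothing in it talks to a hosted service:

```python
# Minimal local inference sketch using llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder:
# point it at any quantized GGUF model file on your own machine.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

result = llm(
    "Q: Summarize the case for community-run AI in one sentence.\nA:",
    max_tokens=64,
)
print(result["choices"][0]["text"].strip())
```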
People who reject AI today will almost certainly end up being left behind as this technology evolves. It’s the same thing we already see happening to older generations who are struggling to use computers or navigate the web. Keeping up with the technology allows gradually learning it, getting used to it, and developing intuition for how it works and what it can do.
Meanwhile, the only take unbecoming of a Marxist is to reject the notion that people develop skills using their tools. The skill of photography is honed through constant practice. You learn to identify composition, lighting, subjects, and so on, by doing photography. Dismissing the role of experience and skill in production of art is frankly the height of absurdity. If you think experience is not important then I encourage you to try doing photography and see how your results stack up against professionals.
It’s also ironic how you proceed to contradict yourself in the next paragraph, where you discuss the need for skill when it comes to programming. Turns out you understand why skill is important in the domain you’re personally familiar with, but you dismiss it in other domains such as photography. It seems to me that you’re falling for the classic trap of thinking that the domain you work in requires skill, while photography is just pointing the camera at something and pressing a button.
I also get the impression that you don’t have an appreciation for how many layers there are in modern computer systems. If you think you understand what lies underneath because you know a “low level” language like C, I have some bad news for you https://dl.acm.org/doi/pdf/10.1145/3212477.3212479
That said, I completely agree that using AI tools for coding does require understanding the underlying concepts. This is why I repeatedly point out that AI is just a tool, and skill, which you dismissed earlier, is a key aspect of this.
The only argument that’s ignorant here is your own. Training AI systems is done rarely, and the vast majority of the time it’s not even done from scratch; people use foundational models whose weights they tweak. Meanwhile, the majority of energy usage comes from actual prompting, which is already incredibly efficient. The latest GLM 4.5 model uses only 13% of the resources that DeepSeek uses, which itself can already be run on a consumer desktop. The argument is so dishonest, it makes me very angry.
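To make the “tweak the weights of a foundational model” point concrete, here is a minimal sketch of parameter-efficient fine-tuning with the Hugging Face peft library; the base model name is a placeholder and the target module names vary by architecture, so treat this as an assumption-laden outline rather than a recipe:

```python
# Sketch of parameter-efficient fine-tuning (LoRA) on top of an existing
# foundational model, rather than training from scratch.
# "some-org/base-model" is a placeholder; target_modules depend on the model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "some-org/base-model"  # placeholder name, not a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                   # low-rank adapter size
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # architecture-dependent assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Only a tiny fraction of weights become trainable; the base model stays frozen.
model.print_trainable_parameters()
```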
Finally, it’s quite obvious to anybody who is minimally rational that AI isn’t going anywhere. It’s literally being developed all over the world, and there is no chance of all of humanity agreeing to stop its development. The only question is how it will be developed and who will control it.
Honestly, I started reading this, but after like 5 or more parts just saying “no” I lost interest.
I think it is incredibly disrespectful to counter arguments by just saying the opposite.
If you are interested in meaningful debate, let me know.
What’s actually disrespectful is dismissing the work I put into addressing your points with a glib reply, then acting like you’re interested in a meaningful debate. I’ve provided arguments and examples of why I disagree with you, and if you’re unable to engage with that then it’s entirely your problem.
I’m confused. Me calling the “communist ai video” out as ironic trash equals me ignoring the Global South? And you’re begging me to, I presume, save it by watching more ai slop? And tagging me in an article that lists all the negatives of the industry? 🤔
Me calling the “communist ai video” out as ironic trash equals me ignoring the Global South?
The video had a beautiful message and was acknowledged by plenty of people, as you could see in the YouTube comments. It was also shared in plenty of Cuban Telegram channels. If this AI video is helpful in raising class consciousness, then it is not trash but a helpful tool.
And you’re begging me to, I presume, save it by watching more ai slop?
I am begging you to acknowledge that the video wasn’t AI slop and to actually go beyond the western bias that you currently have. If a tool is useful, why not use it for the good of all workers? Why shut down people who use that tool and project the capitalist filth onto them, as you did in that thread? Why assume that everyone uses western AI when a plethora of AI tools exist outside of western control?
And tagging me in an article that lists all the negatives of the industry?
The article mentions plenty of negatives, but it also mentions positives. Let me help you out here. The article could be summarized as this:
- The product of a tool depends on its wielder. If the tool is handled by a despicable western capitalist, then the product is exploitation of the working class. However, if the tool is handled by workers in socialism, the product will be successes that are vital to human life.
The video had a beautiful message and was acknowledged by plenty of people
Jordan Peterson has a beautiful message of cleaning your room, Andrew Tate has a beautiful message of working out, and Benito Mussolini had a beautiful message of trains running on time. If it came from an asshole - it’s shit. Don’t touch it yourself and definitely don’t spread it here.
However, if the tool is handled by workers in socialism, the product will be successes that are vital to human life.
It is not handled by workers in socialism. It is predominantly and mainly used against workers in capitalism. I stg this is the crypto bros all over again, but now the blockchain itself talks to y’all, saying “please share my glory, don’t let me die”. I understand the tech very well from inside and out, and my gripe is not with the bits and pixels, but with who uses them and for what. I’d rather never see an ai-generated video or use decentralized currency ever again than have them in the state that they are today. /thread
Dog, I love the coverage you share here on African affairs, particularly on Sahel States progress. I barely get that news anywhere else, and I love to see the positive developments on the planet. This is good work and I really appreciate you doing it. But please leave me out of your ai crusade, I see plenty of brave warriors on every other platform.
Jordan Peterson has a beautiful message of cleaning your room, Andrew Tate has a beautiful message of working out, and Benito Mussolini had a beautiful message of trains running on time. If it came from an asshole - it’s shit. Don’t touch it yourself and definitely don’t spread it here.
Are you seriously comparing the message of the Dialectical Fire song to reactionary scum like Jordan Peterson, Andrew Tate, and fucking Mussolini? It is very disappointing to see just another piece of horseshoe theory crap grouping together two completely different and opposing groups and messages.
Dog, I love the coverage you share here on African affairs, particularly on Sahel States progress. I barely get that news anywhere else, and I love to see the positive developments on the planet. This is good work and I really appreciate you doing it. But please leave me out of your ai crusade, I see plenty of brave warriors on every other platform.
I am not a dog btw but thanks. This whole exchange was very depressing to read.