Control of the Machines?
Benjamin Bratton: “The State takes on the armature of a machine because machines have already assumed the functions and the register of the State.”
Thomas Hobbes, in his famous book Leviathan (1651), conceived society as a great artificial automaton, a self-organized system possessing its own life and (artificial) intelligence. Human society would constitute a new form of life. It was a collective organism that transcended individual beings and the institutional organs of which it was composed. Hobbes’ Leviathan-automaton replicated itself, and bureaucracy exercised control functions akin to a brain.
“[…] through art, that great Leviathan, which we call a commonwealth or state, is created, which is but an artificial man, though of greater stature and strength than the natural, for whose protection and defense it was intended; and in which the sovereignty is an artificial soul giving life and motion to the whole body; the magistrates and other officers […] are the nerves that do the same in the natural body; the wealth and riches of all the particular members are its strength; […]”
"Darwin Among the Machines" is an article written by Samuel Butler and published in The Press on June 13, 1863, in New Zealand. It proposed the possibility that machines could be a form of "mechanical life" in constant evolution, and that eventually, increasingly perfected and autonomous machines could replace humans as the dominant species. Butler perceived that control of the society he lived in seemed to be slipping away from humans.
Samuel Butler (1863): “Machines are gaining control over us; with each passing day, we are becoming more their servants; more and more humans live daily enslaved to serve and maintain them; many men devote the energy of their entire lives to the development of mechanical life. […] What kind of creature will likely be the next successor to man in the supremacy of the earth? We have often heard this debated; but it seems to us that we ourselves are creating our own successors; daily we increase the beauty and delicacy of their physical organization; daily we give them greater power and provide them, through all kinds of ingenious inventions, with that power of self-regulation and action that will be to them what intellect has been to the human race. Over the centuries, we will become the inferior race.”
The mathematician and physicist John von Neumann (1903-1957) used the term “universal constructor” to describe a certain type of self-replicating machine. A machine of this kind, provided with appropriate instructions, would be capable of building a copy of itself. Each of the two machines would then proceed to build another; the four would become eight, and so on. The fundamental details of the machine were published in his posthumous book Theory of Self-Reproducing Automata.
Von Neumann, the son of a powerful Budapest banker, fled the city with his family during the 1919 Hungarian communist uprising, which expropriated his father’s bank. Developing artificial intelligence to produce better and more powerful thermonuclear bombs to eliminate the socialist threat became the main goal of his career. In 1950, he declared: “If you tell me not to bomb the Soviets tomorrow, I say: Why not bomb them today? If you tell me at five, I say: Why not at one?” He was one of the pioneers of the stored-program computer.
From working on the fission bomb, he moved on to the hydrogen bomb (the so-called “superbomb”) and then to intercontinental missiles. “We can pack into a single airplane more explosive power than the sum of all fleets and all combatants during World War II.” The progress he achieved in modern computing was crucial not only for creating new weapons but also for simulating war situations on a virtual battlefield rehearsed at the Pentagon. In 1950, he served on the Weapons Systems Evaluation Group and the Armed Forces Special Weapons Project; in 1951-1952 he was a consultant for the CIA, a member of the General Advisory Committee of the Atomic Energy Commission (AEC), a consultant for the Livermore weapons laboratory, and a member of the Scientific Advisory Board of the U.S. Air Force.
The Capital Account
Beatrice K. Rome & Sidney C. Rome (1997): “Once living agents are incorporated into our dynamic computational system, the result may be a machine capable of teaching humans how to function better within the context of a large man-machine system in which information is processed and decisions are made.”
The nebulous sense shared by Samuel Butler, Benjamin H. Bratton, George Dyson, and countless science fiction writers that machines were taking over rested on a perception, albeit distorted and delusional, of the real and unstoppable advance of another artificial, monstrous, and self-replicating entity that was developing under their feet, effectively taking control not only of the machines but of the very society in which they lived. Man had not been reduced to slavery by machines but to the most abject servitude to the capital account.
From the 16th century onward in Western Europe, a new social formation began to develop with exceptional characteristics; its strength and resilience were based on a new and original form of wealth accumulation outside the central authorities and their bureaucracies. What defined the new structure in formation was no longer an aristocracy of blood or a military or religious hierarchy but numbers carefully written in well-bound and sizable books called the daybook, ledger, and auxiliary books (warehouse entries and exits, cash book, etc.). In the ledger, the largest book, there were accounts where money inflows and outflows were recorded. One of them, the last and most important, was the capital account. All the movements recorded in those books were intrinsically related, as if they were part of a large spreadsheet, and all their movements ultimately translated into an increase or decrease in the figures recorded in that powerful and unsettling account. The new society, Hobbes’ Leviathan, truly behaved like an automaton, replicating, self-reproducing, and evolving according to the genetic code embedded in that account.
But over time, the capital account gained permanence and autonomy. It left behind its connection to the life of a single machine or a concrete number of them to incorporate a permanent park of machines that had to replicate and grow. It ceased to obey a particular merchant or group of partners to become an “anonymous” entity, with its own independent life. Samuel Butler perceived this in his own way in 1863: “Why could machines not become something as complicated as we are, or reach a sufficient degree of complexity to qualify as living beings in nature as we are?”
In reality, the capital account was learning to design increasingly perfected machines to perpetuate itself and, of course, destroy any hint of resistance to its advance; it was gradually approaching the most sophisticated and relentless form of tracking, control, and annihilation of any kind of opposition, the tandem of the atomic bomb and Artificial Intelligence.
Marx, when trying to analyze its insatiable gluttony, never had a clear awareness of the real magnitude of the new self-replicating creature he was studying. He dreamed, illusorily, of one day neutralizing it and wresting control of the machines from it.

The Toxic Cloud of AI

M. Fourcade and K. Healy: “Information networks produce enormous amounts of data at the individual level, which are analyzed to classify people […] Storing and studying the daily activities of people, even seemingly mundane ones, has become the rule and not the exception. […] The digital traces of individual behaviors (where classification tools define what a ‘behavior’ is and how it should be measured) are increasingly aggregated, stored, and analyzed. […] In all institutional domains, monitoring and measurement are expanding and becoming ever more detailed. We see it in everyday consumption, in housing and credit markets, in health, employment, education, social relations, including intimate ones, legal services, and even in political life and the private sphere.”
Peter Thiel (founder of PayPal and co-founder of Palantir): “Forget the fantasy of science fiction; what is powerful about AI is its application in relatively mundane tasks such as computer vision and data analysis. […] These tools are, however, valuable for any army.”
Since the early 20th century, the capital account has been perfecting a lethal weapon: Artificial Intelligence (AI), the ability to process large amounts of data in a short time, which allows for more informed decisions and thus faster and more automated tasks that previously required human intervention.
Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology, was one of the fathers of cybernetics and AI. Aware of the enormous power that the development of such research could place in the hands of irresponsible individuals, along with Norbert Wiener, another MIT professor, he demanded that scientists and technologists take responsibility for the use of what they discover and develop.
The first conversational computer program of the kind imagined by Alan Turing in 1950 was created by Weizenbaum between 1964 and 1966. This system, called ELIZA, executed a series of instructions that made the computer pass as a psychotherapist. The doctor (the computer) was not infallible, but the simple set of programmed instructions allowed ELIZA to maintain some plausible conversations, posing as a Rogerian psychologist who asked the user to reflect on any comment offered.
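ELIZA’s mechanism was strikingly simple: keyword patterns, pronoun reflection, and canned Rogerian templates. The toy sketch below, in Python with made-up patterns (not Weizenbaum’s original DOCTOR script), shows how little machinery is needed to sustain a plausible exchange.

```python
import re

# Words swapped so the reply points back at the user ("my boss" -> "your boss").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Illustrative keyword rules: each pairs a pattern with a Rogerian template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching template, or a stock prompt if none matches."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no keyword is recognized

print(respond("I am afraid of my boss"))
# -> How long have you been afraid of your boss?
```

A handful of rules like these is enough to produce the illusion of understanding that so alarmed Weizenbaum.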
While Turing’s assumption that by the year 2000 there would be a “thinking machine” can be debated, his prediction that computers would plausibly interact with people using language as an interface (the Turing test) was confirmed by ELIZA.
Weizenbaum published a surprising book in 1976 titled Computer Power and Human Reason: From Judgment to Calculation, in which he expressed his deep concern both about the vehement pro-technological and anti-humanistic declarations and behaviors of some computer scientists and about the remarkably passive acceptance by the public of computer innovations.
Any useful tool can always be misused; the particular dangers of computers stem from their versatility and complexity. Because computers are general-purpose symbol-manipulation devices, they are versatile enough to be used (and also misused) in applications that affect life or have serious and irreversible social consequences. Computer programs are often so complex that no one person understands the basis of a program’s decisions. As a result, decision-makers do not feel very responsible for the consequences of the decisions proposed by computers. A large part of the book explains these dangers, and Weizenbaum uses them to argue that there are decision-making tasks we should not entrust to computers.
Therefore, he warns that it is dangerous to order computers to deal with human affairs, that is, to perform tasks that require understanding and empathy for human problems (essentially because computers lack the complete vision that a person has of other people, they lack the empathy of H. sapiens). Weizenbaum calls for no research on programs, methods, or computer tools with obvious potential misuse: he finds no benefit worth the price of getting entangled with dangerous tools "that could represent an attack on life itself." He also advocates abandoning projects with "irreversible and not fully foreseeable side effects."
Google (initially BackRub) emitted its first babblings in 1996 with the idea of generating an ordered list or ranking (PageRank) of the most relevant websites responding to a search term, based on the number of links pointing to each website from other pages. The idea placed Google at the forefront of the existing search engines. To respond to any search, Google’s computers had to capture the entire internet of the moment, which in its beginnings (1996) fit on a few hard drives. By the year 2000, Google’s engineers noticed that user search behavior, captured and encapsulated in logs that could be analyzed and mined, could be used to improve search results and for other things. Almost without realizing it, Google had created a spying machinery that would become the most powerful detective agency ever imagined.
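The core of PageRank can be sketched in a few lines: each page repeatedly distributes its score among the pages it links to, until the scores settle. The tiny three-page graph and damping value below are illustrative only; the production system was vastly more elaborate.

```python
# Toy PageRank by power iteration (illustrative sketch, not Google's system).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outgoing in links.items():
            # Each page splits its current rank evenly among its outbound links.
            share = damping * rank[p] / len(outgoing)
            for q in outgoing:
                new[q] += share
        rank = new
    return rank

# Three pages: both A and C link to B, so B ends up ranked highest.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
print(max(ranks, key=ranks.get))  # -> B
```

The point is that relevance is inferred purely from the link structure: no page content is read at all.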
Steven Levy: “Amit Patel was the first to realize the value of Google’s logs. Patel was one of Google’s first employees and arrived in early 1999. […] He realized that Google could function as a sensor of human behavior. For example, he noticed that questions about school assignments increased on weekends. ‘People waited until Sunday night to do their homework and then searched on Google’ […] Every aspect of user behavior had value. How many queries there were, how long they lasted, what were the main words used in the queries, how often they clicked on the first result, who had referred them to Google, where they were geographically located. […] Those logs told stories. Not only when or how people used Google, but what kind of people the users were and how they thought […] Until then, Google had not been methodical in storing the information that told them who their users were and what they were doing. 'In those days, data was stored on disks that failed very often and those machines were often reused for something else,' says Patel. One day, to Patel’s horror, one of the engineers pointed to three machines and announced that he needed them for his project and was going to reformat the disks, which at that time contained thousands of query logs. Patel began working on systems that would transfer this data to a safe place.”
With the smartphone becoming a true appendage of the human being, the Internet began to offer the possibility of massive data extraction, without any permission (images uploaded to websites, photo-sharing services, and especially social media platforms), for use by the machine learning algorithms of biometric and computer vision systems. Suddenly, a new natural resource had appeared, a new mineral to be mined and consumed to feed the furnaces of AI. “Everything became water for the mills of machine learning. […] Everything that was online was being prepared to become a training dataset for AI. […] When data is considered a form of capital, then everything is justified in collecting more.” (Kate Crawford) Accumulating data has become a new form of capital accumulation. Machine learning models need continuous flows of data to become more accurate. Fulfilling the imperative of data implies more than simply collecting data passively; it means actively creating data, which implies increasingly intrusive systems of permanent and exhaustive surveillance of people, places, processes, things, and the relationships between them.
The Whitewashing of AI
Nick Bostrom: “Time and again we see Cartesian dualism in AI: the fantasy that AI systems are disembodied brains that absorb and produce knowledge independently of their creators […] These illusions distract from the more relevant questions: Who do these systems serve? What are the political economies responsible for their construction?”
What is AI? (according to Repsol): “You wake up in the morning and your phone unlocks after scanning your face. You get in the car to go to work, and the navigator suggests an alternative route because the one you usually take is more congested than usual. Your favorite shopping app suggests a pair of shoes that fit your style perfectly, and your bank sends you a notification about a product that fits your savings plans. Is it magic? No, it’s progress. It’s artificial intelligence, which has ceased to be science fiction to improve numerous aspects of our daily lives.”
Two twin discourses about AI have been insistently reproduced. On the one hand, the dystopia of a dangerous superintelligent machine that will dominate the world; on the other, the techno-orthopedic chimera that any problem we have will be solved by AI. Both perspectives, based on technological determinism, leave aside what AI is really for, who promotes and is really in charge of its development, and what social consequences derive from it.
The AlphaGo show, as in the case of Deep Blue, was organized as a great worldwide spectacle. AI had to be whitewashed, sold well-packaged, presented as a smiling and adorable Mogwai, which, however, would soon show its true and terrible nature. Even as AI directs the bombardment of the population of Gaza and Lebanon, researchers at Stanford working with Google DeepMind have presented Mobile ALOHA, a mobile two-armed robot based on the ALOHA system (A Low-cost Open-source Hardware System for Bimanual Teleoperation). The robot uses imitation learning (human demonstrations in which a user instructs the robot through their own movement) and, well-taught, seems capable of imitating complex domestic manipulations (folding clothes, watering plants, making coffee, making the bed, serving a drink, feeding a pet, etc.). An ideal gem to replace service staff.
Automated Annihilation
Tung-Hui Hu: “The mistake is to believe that acts of war are exceptions to the normal operations of the cloud.”
Frank Rose: “The computerization of society... has essentially been a side effect of the computerization of war.”
The Snowden archive constituted an extract of a world in which data collection had metastasized. Phones, social networks, emails, etc., were becoming sources of data. TREASUREMAP was a program designed to build an interactive map tracking the location of the owner of any connected computer, mobile phone, or router. “Mapping the entire Internet, any device, anywhere, all the time.”
With the industrial revolution, machines filled cities with a thick layer of soot and dark, sticky, harmful gases. With the arrival of AI, a new atmospheric layer covers us all wherever we go, a cloud, even stickier, of monstrous toxicity that permanently and systematically brutalizes, and that, like the Holy Inquisition, stalks, watches, denounces, and kills.
The field of AI has always been guided by military priorities. “As it was the project with the least immediate utility and the most far-reaching ambitions, AI came to depend heavily and unusually on DARPA funding. As a result, DARPA became the main sponsor during the first twenty years of AI research. […] It all began, ultimately, from research funded by DARPA.” The general logic of AI has been marked by its explicitly battlefield-oriented origins: collecting large datasets remotely to gain knowledge about groups or communities, spying on what text messages are drafted or read, target detection, anomaly detection, sorting people into high-, medium-, or low-risk categories, the need for constant alert and targeting, tracking people, identification by metadata, identifying threats, assigning guilt or innocence. “AI is basically a multitude of war machines that never rest. […] The AI industry is challenging and reshaping the traditional role of states. […] Algorithmic governance […] exceeds traditional state governance.” (Kate Crawford)
Although they are algorithms and systems specifically designed for surveillance, espionage, repression, blackmail, and annihilation, they are being presented as innocuous applications to supposedly improve consumers’ lives. Virtual assistants based on AI, such as Siri, Alexa, or Google Assistant, can answer questions, perform tasks, and provide information (in reality, they spy on what we do at home). Chatbots seem to provide quick and accurate answers to frequently asked questions, but in reality, they are probing their users. AI can be used in “personalized product and service recommendations” based on the supposed interests of the customer (guided bombardment of ads) or to launch email marketing campaigns (another type of less selective bombardment), etc., but this “civil” application is secondary; its main use (and the most lucrative) is in the implementation of a new generation of weapons and war systems.
Palantir, established in 2004 and co-founded by Peter Thiel, is among the most secretive and least-studied surveillance companies in the world. The company supplies information technology solutions for data integration and tracking to police and government agencies, humanitarian organizations, and corporations (such as Walmart). In addition to data extraction through spying devices, it uses network filtering methods to track and evaluate people and targets. It has designed software for managing deportations of immigrants by the Immigration and Customs Enforcement (ICE) agency and for detecting health insurance fraud for the U.S. Department of Health and Human Services. The FBI uses it to identify criminals, while the Department of Homeland Security uses it to monitor air travelers.
In April 2017, the U.S. Department of Defense announced the creation of the Algorithmic Warfare Cross-Functional Team. Its codename was Project Maven, and its goal was to create an AI system, an automated search engine to detect and track enemy combatants. Since machine-learning expertise was concentrated in the commercial sector, it was decided that the Department of Defense would pay technology companies to analyze data collected by military satellites and drones. This was what GAFAM (or FAANG) was waiting for. The lucrative contract was awarded to Google, which offered the TensorFlow infrastructure to sweep through drone images to detect objects or people as they moved between different locations. In 2018, some Google employees discovered the existence of the contract. It was made public that the identification targets included vehicles, buildings, and human beings. There were loud protests, and more than three thousand employees signed a protest letter. Faced with the evident threat to its brand image, Google ended up withdrawing from the project and declined to bid for the Pentagon’s subsequent $10 billion JEDI cloud contract. Less timid were its comrades in Silicon Valley: Microsoft won that contract after outbidding Amazon. Google soon ended the dissent among its employees and joined the bandwagon of state contracts. The big tech companies were determined to test their AI algorithms for the purposes for which they were designed from the beginning: battlefields and programmed annihilation.
Credit Score and Signature Strike
Kevin Williams (2024): “The U.S. Department of Homeland Security under the Biden administration has already allocated $5 million in its 2025 budget to open an AI office. AI-assisted surveillance towers, ‘Robodogs,’ and facial recognition tools, all from private industry, were already being used and could be further intensified under the mass deportation plan proposed by Trump (‘the largest mass deportation in the history of our country’). […] Security experts are concerned about how a Trump-led DHS could handle untested AI for its plans. […] With these tools at its disposal, a surveillance network not only on the border but also inland could capture communities across the United States.”
“Credit score” and “signature strike” seem innocuous terms linked to peaceful commercial activity, but in reality they are closely tied to military logic. We are all under the permanent surveillance of AI, which assigns “points” of suspicion to each of us; once a suspicious pattern is found in the data and it crosses a certain threshold, the “signature strike” is automatically activated (your metadata signature has accumulated enough “suspicious points” for you to be annihilated).
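The mechanism described above reduces to a crude accumulator. A minimal sketch, assuming entirely hypothetical features, weights, and threshold (no real system’s parameters are public), shows how mechanical the logic is.

```python
# Purely illustrative "signature" scoring: hypothetical features and weights,
# not any real system's parameters.
SUSPICION_WEIGHTS = {
    "frequent_sim_swaps": 30,
    "contact_with_flagged_number": 40,
    "unusual_travel_pattern": 20,
    "encrypted_messaging": 10,
}
THRESHOLD = 60  # arbitrary cut-off: crossing it triggers the "signature"

def suspicion_score(observed_features):
    """Sum the weights of whatever features were observed in the metadata."""
    return sum(SUSPICION_WEIGHTS.get(f, 0) for f in observed_features)

def flagged(observed_features, threshold=THRESHOLD):
    # No reasoning, evidence, or causality: just a number against a threshold.
    return suspicion_score(observed_features) >= threshold

print(flagged(["encrypted_messaging", "unusual_travel_pattern"]))      # -> False
print(flagged(["contact_with_flagged_number", "frequent_sim_swaps"]))  # -> True
```

Where the bar is set, and what counts as a feature, is a policy decision hidden inside the arithmetic, which is precisely the author’s point.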
In 2014, the legal organization Reprieve published a report showing that drone strikes programmed to kill 41 individuals resulted in the deaths of 1,147 people. The credit score is used in all countries and in all areas (all spheres of daily life, both municipal and domestic, susceptible to being tracked and scored), probing “anomalous data patterns” to detect deviations from “social solvency models” (models that do not call into question the functioning of the system) and penalize those who deviate. In this “credit rating” converge state agencies, commercial firms, and military structures.
AI Goes to War
With Israel’s attack on the Palestinians, which began in October 2023, the testing ground for the perfection of AI entered full operation. In its war against the Palestinians and Lebanese, the Israeli army is using two sophisticated AI applications, The Gospel and Lavender. The Gospel is an AI that automatically combs data (obtained from the internet, telephone communications, photogrammetry, tracking by drones or satellites, etc.) for buildings, equipment, and people believed to belong to the enemy and, upon finding them, recommends the targets for bombing to a human analyst, who can decide whether or not to strike. The other application, Lavender, is capable of identifying and locating people allegedly linked to Hamas or the Palestinian Islamic Jihad, and is used to recommend targets to be eliminated.
“At its peak, the system managed to generate 37,000 people as possible human targets. But the numbers changed all the time because it depends on where the bar is set for what is a Hamas operative. […] There were times when a Hamas operative was defined more broadly, and then the machine began to bring us all kinds of civil defense personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they do not really endanger soldiers.”
The Gospel is believed to combine surveillance data from various sources (for example, scans of cancerous tissue, facial expression photographs, and surveillance of Hamas or Hezbollah members identified by human analysts). Its recommendations to attack are based on pattern matching: a person with enough similarities to other people labeled as enemy combatants can themselves be labeled a combatant and, therefore, eliminated. Such AI algorithms are notoriously flawed, and high error rates are observed in their results.
Despite the fact that the Lavender system is known to have an error rate of at least 10%, the soldiers deciding the attacks spent only about 20 seconds reviewing the AI’s proposal before approving the assassination. The decisions of such a system depend entirely on the data with which it is trained and are not based on reasoning, factual evidence, or causality, but solely on probability statistics. Military operators, treating people as mere “data” or “targets,” trusted the system so much that they approved targets suggested by the AI in a matter of seconds. This reliance on AI for life-or-death decisions reduces the role of human deliberation and dehumanizes operations.
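The scale implied by the two figures quoted in this section (an error rate of at least 10%, and the 37,000 people the system flagged at its peak) is easy to compute: thousands of the “targets” are misidentified by construction.

```python
# Implication of the figures cited in the testimony above.
flagged_people = 37_000   # peak number of people the system marked as targets
error_rate = 0.10         # "at least 10%", per the reported error rate

misidentified = round(flagged_people * error_rate)
print(misidentified)  # -> 3700: at least 3,700 people wrongly marked
```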
In early November 2023, the Israeli Defense Forces declared that more than 12,000 targets in Gaza had been identified by the target management division using The Gospel. As a result of the applications’ notable margin of error, thousands of Palestinians (most of them women and children, or people who did not take part in the fighting) were annihilated. For the supposed young militants it considers legitimate but low-importance targets, Lavender is paired with cheap weaponry, bombs without guidance systems and therefore more likely to cause “collateral damage,” while more precise but more expensive smart bombs are reserved for attacks on commanders and leaders.
Hamdan Ballal: “I really didn’t expect such a savage response. My family was sleeping. My father, my sister, my niece… Twenty-one people were sleeping in my house. They were in the south. There were no militants in the area. They bombed them and killed everyone. Of the 21, only two were men, the rest women and children. My father was 75 years old, a very good, peaceful man. Why would anyone commit such a crime? They do it to send the Palestinians the message that no one is safe, that anyone can and should be killed.”
Microsoft is a major provider of cloud and artificial intelligence services to the Israeli military, according to internal documents related to contracts between the Israeli Ministry of Defense and Microsoft Israel obtained by Drop Site News. The leaked documents show that Israel’s use of these services increased dramatically in the months following October 7, 2023, as Israel used AI and other technologies to wage its brutal war in Gaza.
AI vs. Non-Artificial Intelligence

GENEVA (18 April 2024) – UN experts today expressed grave concern about the pattern of attacks on schools, universities, teachers and students in the Gaza Strip, raising serious alarm about the systemic destruction of the Palestinian education system. “Given that more than 80 per cent of educational facilities in Gaza have been damaged or destroyed, it is reasonable to ask whether there is an intentional effort to completely destroy the Palestinian education system, an action known as ‘scholasticide’.”
After the Nakba (catastrophe of 1948), Palestinians saw in education a potential lifeline to a better future; a heritage that, unlike land or housing, they thought, could never be taken from them.
In the Oslo Accords, education was one of the areas of competence transferred by Israel to the new Palestinian Authority. UNRWA (the United Nations Relief and Works Agency for Palestine Refugees in the Near East) helped build 288 schools in Gaza. Primary education in Gaza was universal, and secondary enrollment exceeded 80% (the West Bank and Gaza led the Middle East and North Africa region in schooling). In 2022, the illiteracy rate was only 1.8% (against 13.3% worldwide). At the beginning of the 2023-2024 academic year, there were 803 schools in Gaza, housed in 550 buildings and serving approximately 625,000 students. Due to overcrowding and a shortage of facilities, many of these schools operated on double or triple shifts. Gaza was also home to 19 higher education institutions, serving some 88,000 students and employing 5,100 staff members.
From the very start of the offensive, it was clear that education was one of the Israeli command’s main strategic objectives. The Islamic University, the oldest and most respected in the enclave, was razed to the ground on 11 October. Al-Israa University and the archaeological museum it housed were thoroughly dynamited by troops. Al-Azhar University, founded by Yasser Arafat, disappeared under the bombing in November. From the first minute, academics, scientists and intellectual figures were systematically eliminated by targeted airstrikes on their homes, without warning; by September 2024, more than 500 teachers and some 10,500 students had already perished under the storm of Israeli fire and shrapnel. Tens of thousands had been seriously injured. After the universities, the AI and the bombs turned on the rest of the Palestinian school system: 88% of schools were deliberately bombed, many of them repeatedly destroyed. 625,000 school-age children have been left without schools and teachers.
AI was driving scholasticide: artificial intelligence against human intelligence. Joseph Weizenbaum’s predictions are coming true one after another. Powerful technology companies are enlisting states to provide them with real testing grounds where they can test and refine their sophisticated control tools. Machines rule the planet, and behind the machines, in the control room, the capital account operates. The capital account dances alone.