Intro
We translated this insightful article on artificial intelligence by comrades from Wildcat in Germany, because we feel the general debate is dominated by both fearful demonisation and uncritical reverence when it comes to automation technologies.
We can explain this by the fact that within the left milieu, as in society at large, the separation between social and political commentators on one side and technical ‘experts’ on the other is deepening. We faced a similar, and indeed fatal, separation during the pandemic, when the left was largely divided between people who neglected the medical and scientific aspects and focussed primarily on the state’s attempts to use the lockdown to repress any form of discontent, and people who uncritically supported the state measures out of reliance on medical ‘experts’. The need for expertise is real. In the case of the pandemic, however, meeting it would have required collaboration between the working class communities and patients who could report on the immediate impact of the pandemic; the nurses and health care workers who could assess the outcome of ‘medical decisions’ within the hospitals; and the science workers within the global medical industries and research departments who were critical of the disjointed response of the state, which bowed to diverging national and corporate interests.
Such collaboration can only establish itself as a combative international movement that defends the immediate working class interest, questions the manual and intellectual hierarchies within the class and takes on the responsibility to develop a working class plan as a social alternative. In the meantime it takes a collective effort of organised comrades to undermine the separation between social critique and scientific knowledge. We recommend also reading these previous articles and interviews on the crisis of microchip production, in order to deepen the understanding of the current limits of so-called artificial intelligence.
We can afford neither a primitivist external rejection of capitalist technology, nor an instrumentalist affirmation à la ‘fully automated communism’. For working class counter-engineering!
---
Capitalist intelligence?
(from: Wildcat 112, Autumn 2023)
“Future generations would then have the opportunity to see in amazement how one caste, by making it possible to say what it had to say to the entire world, made it possible at the same time for the world to see that actually it had nothing to say.”
(Bertolt Brecht, Radio Theory)
On the 30th of November 2022, ChatGPT, a conversational AI – or, in the jargon, a ‘large language model’ – was released. For the first time, a generative AI that can create independent texts and pretend to understand the questions it is asked was publicly available free of charge. Within five days, one million people had registered on the chat.openai.com website. By January 2023, this figure had risen to one hundred million. It was a stroke of genius for OpenAI (Microsoft) to make its chatbot publicly accessible. No marketing department could have advertised it better than the hysterical debate that ensued. All competitors had to follow suit and publish chatbots of their own.
The open letter from the 22nd of March 2023 calling for a six-month moratorium on AI development was a major publicity stunt for the entire industry. The signatories were a who’s who of Silicon Valley. (By the way, calling for regulation is also the usual way of keeping smaller competitors out). They formulated their demands as questions. The first question: should we let the machines flood our information channels with propaganda and falsehoods? This was asked by Musk, who had just sacked all the moderators on Twitter and cancelled the EU’s voluntary code of conduct against disinformation only a week before the open letter. On the 17th of April, Musk announced that he had founded his own AI company, X.AI, at the beginning of March and wanted to create a large language model with TruthGPT that was not as ‘politically correct’ as ChatGPT.
At the end of May the chatbot elite, together with a few artists and Taiwan’s Digital Minister Audrey Tang, even warned of “the extinction of humanity through AI”. They put AI on a par with pandemics and nuclear war (mind you: not with the climate crisis, which they consider harmless). The signatories include Sam Altman (head of OpenAI), Demis Hassabis (head of Google DeepMind), Microsoft’s head of technology, and numerous AI experts from the world of research and business. It is hardly possible to imagine a more obscene form of advertising for your own product.
What’s behind the hype surrounding AI?
Why do we see such a boom in AI now? And why did chatbots, of all things, trigger it? First, tech companies urgently needed a new business model. Second, language is seen as a sign of intelligence, and there is clearly a great social need for dialogue partners. Third, while there are fewer and fewer fundamental innovations, the expectations placed on them are rising.
Tech crisis
Text, voice and image generators – ‘generative AI’ – are the bootstraps with which the five big tech companies Apple, Amazon, Facebook, Google and Microsoft are trying to pull themselves out of the ‘big tech’ crisis of 2021 and 2022, during which they laid off 200,000 employees. The big five dominate over 90 per cent of the AI market. A sixth company, Nvidia, is taking the biggest slice of the cake by providing the hardware. Nvidia used to produce graphics cards, and still does, but just over ten years ago it was discovered that graphics processing units (GPUs) have enormous parallel computing power. The first boom was in computer games, the second in cryptocurrency mining, and now it’s generative AI. GPUs are notorious for consuming large amounts of electricity.
After years of losses, the 2023 rise of the US stock markets hinges on just seven companies (the six mentioned above plus Tesla). In mid-July 2023, these seven companies accounted for 60 per cent of the Nasdaq 100, an important index for technology stocks. The boom is based on a single expectation: that “AI will change everything”. Economically, it has not yet been enough of a boom to offset the recession in the chip industry. Investments in chip manufacturing are being postponed because turnover and profits are collapsing. The manufacturers of memory chips are all reporting losses. Despite production cuts, Samsung’s operating profit fell by 95 per cent in the second quarter and by 80 per cent in the third. Qualcomm announced a fall in turnover of almost 23 per cent, and redundancies.
There is a social need for chatbots
Joseph Weizenbaum built the first chatbot in 1966. His ELIZA was already able to pretend to be human in short, written conversations. Weizenbaum was surprised that many people entrusted this relatively simple programme with their most intimate secrets. They were convinced that the ‘dialogue partner’ had a real understanding of their problems, because the answers to their questions seemed ‘human’. This so-called ‘Eliza effect’ is exploited by many chatbots today. An unwanted by-product has become a bestseller and a business model.
Since 2017, the company Luka Inc. has been marketing its chatbot Replika as a “companion”, a substitute for a romantic friend – though you still have to buy an upgrade for “romantic interactions”. There are women who can’t have children and create AI children. There are men who create a kind of AI harem, while abandoned people turn to chatbots for comfort and people who feel misunderstood find reassurance in communicating with them. In the US, the story of a woman who married her chatbot went viral in summer 2023. In spring, it made the news when a Belgian man committed suicide after his chatbot had counselled him on how to do it.
Chatbots are trained on huge amounts of human dialogue data, and can therefore also parrot the expression of emotions. Not only Replika, but also ChatGPT and others seem to be designed as a kind of romance scammer. In order to feign understanding or for the sake of a good story, these models like to spontaneously invent sources and supposed facts. These “social hallucinations” (Emily Bender) are desirable and are used to build customer loyalty.
“You might ask yourself what kind of friends they are, who constantly assure you how great you are, who reply to even the most boring retelling of a confused dream: ‘Wow, that’s so fascinating’. Who, like well-behaved dogs, find nothing nicer than when you come home and greet them. On the other hand, users in forums and chats appreciate just that. … There may be good reasons not to make your happiness dependent on a real person. But if you share your life with an AI, you’re sharing sensitive data, not just with a smiling avatar, but with a tech company.” (“A husband to put aside”, Süddeutsche Zeitung, 1st of August 2023)
Many users do not understand that they are also feeding and training the AI with new data through their questions. In early 2023, Samsung discovered that programme code from its developers had been uploaded to ChatGPT. In the middle of the year, Samsung, JPMorgan Chase, Verizon, Amazon, Walmart and others officially banned their employees from using chatbots on company computers. They are also not allowed to enter any company-related information or personal data into generative AI on their private computers.
Few real innovations
Hardly anyone still believes that the world will become a pleasant place in the foreseeable future. Ecological crises are piling up higher and higher, wars are getting closer and social problems are growing.
Perhaps this is why utopian energies are increasingly attached to technology, be it nuclear fusion, electric cars or AI. Yet capitalist technologies do not create a new world; they preserve the old one. Weizenbaum said in an interview in 1985 that the invention of the computer had primarily saved the status quo. His example: because the financial and banking system continued to swell, it was barely controllable by manual transfers and cheques. The computer solved this problem. Everything went on as before – only digitised, and therefore faster.
At the beginning of 2023, the magazine Nature published a study according to which “groundbreaking findings” have become less frequent. Earlier studies had already shown this in relation to the development of semiconductors and medicines, for example. Many things are just improvements to an invention that has already been made, not ‘real innovations’. Scientific and technological progress has slowed down despite the continued rise in spending on science and technology, and the significant increase in the number of knowledge workers. The article in Nature sees the cause in too much knowledge and too much specialisation. The amount of scientific and technical knowledge has increased by leaps and bounds in recent decades, and the scientific literature has doubled every 17 years. However, there is a big difference between the availability of knowledge and its actual use. Scientists are increasingly focussing on narrow topics, and primarily cite themselves (third-party funding; publish or perish). [1]
This mixed situation leads to constant talk of ‘technological breakthroughs’. Even if – as with mRNA – research has been going on for six decades, or – as with chatbots – they only utilise collateral effects that have been known for half a century.
“The AI seems reassuringly stupid to me”
(German comedian Helge Schneider)
AI is everywhere. Especially in advertising. Smartphones and tablets sort photos by topic; they are unlocked using facial recognition; the railway uses image recognition for maintenance; financial service providers use machines to assess the risk of borrowers…
But these examples have nothing to do with generative AI. They are simply algorithms for big data analysis. For marketing reasons, everything that has to do with big data is currently labelled as AI. After all, even the simplest programming loop for data analysis can be sold more effectively this way. In the summer, the Hamburg-based start-up Circus raised money from investors. Its business idea: home delivery of meals that are “cooked by artificial intelligence depending on the customer’s preferences”.
There are also productive examples: a team has used AI to develop new proteins in pharmaceutical research. In chip production, self-learning systems save human rework. Amazon uses AI for predictive shipping, even though a classic probability calculation would be just as good.
The term ‘artificial intelligence’ was coined in the 1950s for advertising purposes, and it has also made what is understood by ‘intelligence’ compatible with capitalism.
In 1959, the electrical engineer Arthur Samuel wrote a programme for the board game checkers, which for the first time was able to play better than humans. The breakthrough was that Samuel taught an IBM mainframe computer to play against itself and record which move increased the chances of winning in which game situation. Machine playing against machine and learning in the process is the beginning of ‘artificial intelligence’ – artificial indeed, but why ‘intelligence’?
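To make the principle concrete, here is a minimal sketch of the same self-play idea – our illustration applied to a trivial Nim variant, not Samuel’s actual checkers program: a table of game positions starts out neutral, and after each machine-versus-machine game the positions the winner created are valued up and the loser’s valued down.

```python
import random
from collections import defaultdict

# Value of a position = learned estimate of the creator's chance of winning
# after leaving the opponent that many stones. Start neutral at 0.5.
values = defaultdict(lambda: 0.5)

def play_one_game(epsilon=0.1, alpha=0.1):
    """Misère Nim: 10 stones, take 1-3 per turn, whoever takes the last
    stone loses. Machine plays against machine; afterwards every position
    a player created is nudged towards that player's outcome (win=1, loss=0)."""
    stones, player = 10, 0
    created = {0: [], 1: []}
    while stones > 0:
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < epsilon:                  # occasionally explore
            move = random.choice(moves)
        else:                                          # otherwise exploit learned values
            move = max(moves, key=lambda m: values[stones - m])
        stones -= move
        created[player].append(stones)
        player = 1 - player
    winner = player  # the other player just took the last stone and lost
    for p in (0, 1):
        outcome = 1.0 if p == winner else 0.0
        for pos in created[p]:
            values[pos] += alpha * (outcome - values[pos])

for _ in range(20000):
    play_one_game()

# Leaving the opponent 1, 5 or 9 stones is the known winning strategy;
# those positions end up valued near 1 -- learned from recorded outcomes,
# with no 'understanding' of the game whatsoever.
print(sorted((pos, round(v, 2)) for pos, v in values.items()))
```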
The term ‘artificial intelligence’ had been invented four years earlier by the US computer scientist John McCarthy. He was researching data processing alongside many others, including the cyberneticist Norbert Wiener. But McCarthy didn’t just want to follow in the footsteps of others; he wanted to collect the laurels for something of his own. So instead of ‘cybernetics’, he wrote ‘artificial intelligence’ in his application to the Rockefeller Foundation for funding for the Dartmouth Summer Research Project. “The seminar will be based on the assumption that, in principle, all aspects of learning and other features of intelligence can be described so precisely that a machine can be built to simulate these processes. The aim is to find out how machines can be made to use language…” The application was approved – but not in full: the Rockefeller Foundation only paid 7,500 Dollars, enough for around eight scientists to meet for a summer. The conference lasted only a month and was nothing more than an “extended brainstorming session” with no results. But today it is regarded as the beginning of AI, and all participants became internationally renowned experts in artificial intelligence.
McCarthy later wrote that he wanted to use the term to “nail the flag to the mast”. But he was replacing intelligence with something else. The Latin word intellegere means “to realise, understand, grasp”. People become intelligent by grasping. ‘Intelligence’ arises in interaction with the environment (no cognition without a body) and in social interaction. People developed language so that they could cook together. The taste of chocolate and the smell of rosemary are qualitative experiences that cannot be stored as ‘data’. But McCarthy had shown the way: “simulation of these processes” – meaning, a simulation of understanding. [2] In the euphoric phase of the 1960s, AI researchers thought they could feed computers with sufficient data and interconnect them so skilfully that they would outperform the human brain. But disillusionment soon followed. The more we understood about the human brain, the clearer it became that it would never be possible to replicate it with a machine (almost 100 billion nerve cells, interconnected by 5,800,000 kilometres of neural pathways…). The EU’s flagship Human Brain Project has made no progress in this respect in ten years. [3]
A long ‘AI winter’ began in the early 1970s.
The victory of the IBM computer Deep Blue over the reigning world chess champion in 1997 was celebrated as another major appearance of ‘artificial intelligence’ on the world stage. However, Deep Blue was not an ‘artificially intelligent’ system that learnt from its mistakes. It was merely an extremely fast computer that could evaluate 200 million chess positions per second (brute force). More significant was AlphaGo’s victory over the world’s best Go player in 2016. The machine had previously played against itself many millions of times, and independently developed moves that no human had thought of before.
“Lies, damn lies – and statistics”
(Mark Twain)
McCarthy’s use of the term ‘neural networks’ in his proposal was an equally skilful advertising ploy. The term conjures up images of an artificial brain simulated with computer chips. But the ‘neural networks of AI’ bear no resemblance to the network of neurons in the brain. They are a statistical process used to arrange so-called ‘nodes’ in several layers. As a rule, a node is connected to a subset of nodes in the layer below. If you want a particular computer to be able to recognise horses, you feed it with many horse photos. From these, the system extracts a ‘feature set’: ears, eyes, hooves, short coat, etc. If it is then to assess a new image, the program proceeds hierarchically: the first layer analyses only brightness values, the next layer horizontal and vertical lines, the third circular shapes, the fourth eyes and so on. Only the last layer puts together an overall model.
The subsequent fine-tuning consists of praising the system when it has correctly recognised an image (the connections between the nodes are strengthened) or criticising it when it recognises a dog as a horse (the connections between the nodes are rearranged). In this way, the system becomes faster and more accurate – but without ever ‘understanding’ what a horse is.
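To make this praise-and-criticise loop concrete, here is a deliberately tiny network in Python – our toy sketch, not any production vision system. The four input ‘features’ (ears, hooves, etc.) are invented stand-ins for what the lower layers would extract from pixels, and ‘praise’ and ‘criticism’ become nothing more than arithmetic on the connection weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pipeline described above: instead of raw pixels,
# each "image" is a vector of already-extracted features
# (ears, hooves, coat length, mane); 1.0 = feature present.
X = (rng.random((200, 4)) < 0.5).astype(float)
# Toy ground truth: call it a "horse" when ears AND hooves are present.
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# Two layers of weighted connections between "nodes".
W1 = rng.normal(0, 1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 2.0
for epoch in range(5000):
    # Forward pass: each layer combines the outputs of the layer below.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # "Praise or criticise": the error signal strengthens or weakens the
    # connections between nodes (gradient descent on the squared error).
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ grad_out / len(X); b2 -= lr * grad_out.mean(0)
    W1 -= lr * X.T @ grad_h / len(X);  b1 -= lr * grad_h.mean(0)

# Accuracy typically ends up close to 1.0 -- yet at no point does the
# network acquire any notion of what a horse is.
print("accuracy:", ((out > 0.5) == y).mean())
```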
Chatbots create language this way. They are neither the highest nor the most important, neither the most powerful nor the most dangerous, type of AI. When it comes to multiplying large numbers, they are inferior to any 70s pocket calculator. The technology behind so-called ‘generative AI’ is essentially based on statistical inference from huge amounts of data. Statistics is an auxiliary science. Economists, epidemiologists, sociologists, etc. apply statistics ‘intuitively’ in order to obtain an approximate orientation in certain contexts. They are aware that statistical predictions are rarely accurate; they make mistakes and sometimes lead to dead ends. Generative AI presents statistical predictions as a result. This is the basis of its performance. By definition, the models are not able to derive or justify their results. They are trained until the results fit.
You can’t tell an AI system that it has made a mistake: “Don’t do that again!” Because the system has no idea what ‘that’ is, or how to avoid it. AI systems based on machine learning and trained on the basis of vast amounts of data, rather than on general principles or rules of thumb, are not able to take advice.
A chatbot stitches together sequences of language forms from its training data without any reference to the meaning of the words. When ChatGPT is asked what Berlin is, it spits out that Berlin is the capital of Germany. Not because it has any idea what Berlin is, what a city is or where Germany is located, but because it is the statistically most likely answer.
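The principle fits in a few lines of code. The following bigram model is a drastic simplification – real chatbots use transformer networks over huge corpora of subword tokens, not word counts over three sentences – but the statistical logic is the same: the ‘answer’ emerges purely from counting which word most often follows which.

```python
from collections import defaultdict

# A bare-bones "language model": count which word follows which,
# then always emit the statistically most likely successor.
corpus = """berlin is the capital of germany .
paris is the capital of france .
berlin is a city in germany ."""

counts = defaultdict(lambda: defaultdict(int))
tokens = corpus.split()
for a, b in zip(tokens, tokens[1:]):
    counts[a][b] += 1

def complete(word, length=6):
    out = [word]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break
        # Pick the most frequent successor -- no notion of what words mean.
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(complete("berlin"))  # -> "berlin is the capital of germany ."
```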
Chatbots get dumber as they progress. This is partly because they are also fed with products from other chatbots during machine learning, and partly because poorly paid clickworkers sometimes use ChatGPT themselves for fine-tuning in order to generate supposedly handwritten texts faster. Only six months after its release, complaints began to mount that the performance of ChatGPT was becoming increasingly riddled with errors and poorer, with usage time dropping by ten per cent overall and downloads of this AI falling by 38 per cent. The AI industry reacts in its paradoxical but typical way: it further increases the amount of training data and parameters – despite the fact that data overload created the problem in the first place.
Big Data
It is pretty crazy to generate language by machine, based not on logical rules and meaning, but on how likely it is that one word or text module will follow another – because the process requires huge computer capacities, enormous power consumption and a lot of reworking. But it is precisely this insanity that is at the heart of the business model. Because only the big tech companies have such huge data centres, and they have accumulated the necessary data volumes and money over the last two decades. Large language models are therefore a business model in which nobody can compete with them; not even state research institutions or top international universities have the necessary computers, let alone the data!
Google, Facebook, Amazon etc. have captured the digital footprint of the entire human race. Google, for example, has used special crawlers to mine 1.56 trillion words from public dialogue data and web texts for its training data over the past twelve years. Crawlers are data suckers that capture everything on the public Internet. What was accepted for many years as data collection for advertising purposes can now no longer be reversed. Once training models have processed the data, it can no longer be deleted.
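Google’s crawlers are proprietary, but the core of any crawler is a very simple loop. The sketch below is a deliberately naive illustration of the principle only: fetch a page, strip out the text, queue every link, repeat. Industrial crawlers add politeness rules (robots.txt, rate limits), deduplication and distributed storage.

```python
import re
import urllib.request
from collections import deque

def crawl(seed, max_pages=10):
    """Naive breadth-first crawler: harvest raw text from pages
    reachable from a seed URL."""
    queue, seen, corpus = deque([seed]), {seed}, []
    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # dead link, timeout, non-text content: skip
        corpus.append(re.sub(r"<[^>]+>", " ", html))  # strip tags, keep the text
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return corpus

if __name__ == "__main__":
    pages = crawl("https://example.org", max_pages=3)
    print(f"harvested {len(pages)} page(s)")  # nobody was asked
```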
However, the chatbots’ training data not only includes the billions and billions of data that we have ‘voluntarily’ made available to them, but also copyright-protected texts. The AIs are also trained with databases that illegally make protected works available. Journalists from the US magazine The Atlantic searched the approximately 100 gigabyte Books3 database that feeds every artificial intelligence. As a result, they published a searchable database with around 183,000 titles with ISBNs on the 25th of September.
The same applies to the image generators: billions of photos on the internet are the building material for the images in programmes such as Dall-E. Some of the photos were created by professional photographers and simply scraped from their professional websites. Nobody asked them whether they agreed to this, let alone offered them any remuneration. And they cannot prove whether their photos were used during the training of the generators: by design, it is not possible to reconstruct which individual photos went into a machine-made image.
Resource consumption
Perhaps the biggest problem with the current spread of generative AI in chatbots and image generators is their enormous resource consumption. In 2010, it was still possible to train an AI on a standard notebook, whereas today special computers with many thousands of GPUs are used for this purpose.
Energy
Twelve percent of global energy consumption is attributable to digital applications; just over half of this (six to eight percent) is accounted for by large data centres. They are barely keeping pace with the AI boom. According to the head of HPE, data centres could consume 20 percent of the world’s energy in five years’ time. Training AI models consumes more energy than any other computing work. This development has only really taken off since 2019, when GPT-2 was published; it worked with 1.5 billion parameters. The algorithm of GPT-3 comprises 175 billion parameters, and GPT-4 reportedly works with 1.7 trillion. Each new model multiplies the parameter count by one to two orders of magnitude, and energy consumption grows with the amount of data processed. The last training run of GPT-3 alone consumed 189 megawatt hours of energy, whose generation corresponds to around nine times the annual per-capita CO₂ emissions of Germany. And for every model that actually goes online, hundreds were discarded beforehand.
But it’s not just the AI training – actually using these programs requires a lot more power. A single request of around 230 words requires 581 watt hours. The one billion requests made to ChatGPT in February 2023 would therefore have consumed 581 gigawatt hours. In May, it was already 1.9 billion just for ChatGPT. That corresponds to almost 464,000 tonnes of CO₂. And the energy hunger of the successor model GPT-4 is even greater. AI now consumes more electricity than crypto mining (Bitcoin’s electricity requirements were estimated at 120 terawatt hours in 2021).
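Taking the article’s figures at face value, the arithmetic works out as follows (the emission factor of roughly 0.42 kg CO₂ per kilowatt hour is our back-calculation from the stated numbers, not a figure given in the text):

$$1.9\times10^{9}\ \text{requests}\times 581\ \text{Wh} \approx 1{,}104\ \text{GWh}, \qquad 1{,}104\ \text{GWh}\times 0.42\ \tfrac{\text{kg CO}_2}{\text{kWh}} \approx 464{,}000\ \text{t CO}_2$$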
In the old days – in 2016 – Google calculated that processing a search query consumed as much energy as lighting a 60-watt light bulb for 17 seconds. Google therefore consumed around 900 gigawatt hours of electricity for the approximately 3.3 trillion search queries per year at the time. This was equivalent to the power consumption of 300,000 households with two people, but was paltry compared to the power consumption of AI.
At a conference a year ago, it was stated that the energy required to train an AI had increased 18,000-fold in the past two years – a figure that already takes into account the energy savings from new chips. [4] On the 25th of September, the German daily newspaper FAZ reported: “Due to AI strategy: nuclear power to supply Microsoft’s data centre”. “A fleet of small nuclear reactors” is to supply the company’s data centres with “secure electricity”. Bill Gates has also founded the company TerraPower, which is currently building a nuclear power plant in the state of Wyoming.
But it’s not just a lack of electricity; the development of computing power is also reaching its limits. The computing power used to train AI had already increased 300,000-fold between 2015 and 2021. According to Moore’s Law, the number of computing operations that computers can perform per second doubles approximately every twenty months. The demand for computing operations through machine learning is currently doubling every three to four months.
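Put as doubling rates (our arithmetic, using the figures above): over 24 months, Moore’s Law supplies a factor of about 2.3, while demand that doubles every 3.5 months grows by a factor of more than 100 – a roughly fifty-fold gap opening up every two years.

$$2^{24/20} \approx 2.3, \qquad 2^{24/3.5} \approx 117, \qquad \frac{117}{2.3} \approx 51$$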
Water
Water may be an even bigger problem. It is needed to produce the chips and to cool the data centres. “The production of a chip weighing two grams … consumes 35 litres of water. A modern [chip factory needs] up to 45 million litres of water per day, a large part of it ‘ultra pure water’…” (The Summer of Semis, Wildcat 110).
Chip factories and data centres are built wherever governments are stupid enough to give the companies not only billions in subsidies and cheap electricity, but also water practically for free (just like the Tesla factory in the middle of an area that supplies drinking water in Brandenburg). In 2021, Google began building a huge data centre in Uruguay that requires seven million litres of fresh water every day to cool the computers. There was a water crisis in Uruguay in the summer; more than one million people have no access to clean drinking water. [5] Even the Taipei Times, which is otherwise not particularly hostile to technology, warned in mid-September of the “heavy ecological costs of ChatGPT”. Microsoft draws 43.5 million litres of water from the rivers in a hot summer month for a supercomputer in Iowa (10,000 GPUs) on which it trains GPT-4 – which becomes a problem for neighbouring agriculture. According to its own figures, Microsoft’s global water consumption in 2022 was 34 per cent higher than in 2021, while Google reported an increase of 20 per cent. For both, the sharp increase is almost exclusively due to AI. [6]
In Iowa, new data centres are now only permitted if they use water more sparingly. Saxony and Saxony-Anhalt, by contrast, have not yet woken up to the problem. The chip industry in Saxony needs so much water that the groundwater is no longer sufficient. “No problem,” say the politicians, “we’ll take it from the Elbe.” Now that a TSMC chip factory is to be added 200 kilometres downstream, people are starting to worry after the initial euphoria. According to early official estimates, the Intel plant in Magdeburg will use record amounts of water for production; the state estimates 6.5 million cubic metres per year. This means the Intel plant would consume more than the Tesla plant in Brandenburg. It is not yet clear from which sources the water will be drawn; there are considerations for an Elbe waterworks. By the way, ChatGPT not only consumes water during training: it also swallows half a litre during use whenever someone asks it five to 50 questions in a session.
Search engines and business models
“Bing is based on AI, so surprises and errors are possible”
(Microsoft, from the homepage of its search engine)
On the 30th of April 1993, the World Wide Web was opened to the public free of charge. The Google search engine went online on the 15th of September 1997. It has shaped the world wide web, and will transform it further. A significant part of the www works according to the formula: sites create content, Google leads people to that content, everyone places adverts. Even large sites get up to 40 per cent of their clicks via the search engine, and the position in the search results has a big influence on how many clicks you get. Websites use search engine optimisation to get as high up as possible. The internet looks the way we know it because Google demands it – right down to specific standards for page design, technology and content. Advertising finances almost the entire Internet. It converts attention (clicks) into money. (This ‘attention economy’ rewards sensationalist headlines and fake news, but that’s another story).
Over the past few years, Google has continued to evolve from a search engine to an answer engine. Certain questions are answered directly instead of displaying a long list of website links. Suitable results are delivered in response to a search query, and the corresponding adverts are displayed. Amazon has also been recommending books and other products based on your previous purchases or browsing history for a long time. Other websites suggest friends, predict flu epidemics, signal changing consumer habits and know your taste in music (YouTube, Facebook, Netflix, Spotify, etc). Millions and millions of people use these services every day. Two thirds of people google what their symptoms could mean before they go to the doctor, and there are thousands of health apps worldwide. And 300 million patient data records have now been fed into Google’s medical AI Med-PaLM 2 in the USA.
In the course of this, Google has got worse and worse as a search engine: the search results more irrelevant, the searches more fruitless. Attempts to use machine learning to enable Google to ‘understand’ what people are ‘really’ looking for sometimes have the opposite effect. This is partly due to the algorithm, which interprets results that are frequently clicked on as more relevant than others (also self-reinforcing!). Many people now only use Google to search their favourite websites, by appending terms such as ‘github’, ‘reddit’ or ‘wiki’ to their queries.
With the integration of its chatbot Bard into search, Google is once again fundamentally changing the www and its business models. Bard is supposed to read the results and then summarise them. For many, these summaries will be enough. Why keep clicking when you already have all the answers? However, if Google no longer drives the same number of clicks to their websites, this will mean the end for many operators. If companies can no longer finance themselves through advertising, they would have to switch to payment systems and shield their content from AI. This would significantly change the internet as we know it.
The parrot paper
In March 2021, the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Timnit Gebru, Margaret Mitchell and four other colleagues from Google’s AI ethics department in cooperation with computational linguist Emily Bender, was published. They began working on their paper in 2020, just as the precursors of ChatGPT were attracting attention for producing texts that appeared error-free and trustworthy at first glance. By the time the paper became public, Timnit Gebru and Margaret Mitchell, the two Google employees who had refused to withdraw their signatures in response to threats from their boss, had already been fired. In the final sections, they do make suggestions for the “mindful development” of AI. Nevertheless, the paper is a frontal attack; it criticises the very thing about chatbots that makes up their business model: they are big (so that no one can keep up); they suck up everything like a black hole: computing power, electricity, water, research funds; and to make them sell, an irrational hype is created and a rational discussion about the possibilities of AI deliberately sidestepped.
The Parrot paper also criticises – “at a time of unprecedented environmental change worldwide” – the huge waste of resources caused by chatbots (electricity, CO₂, water, etc). The majority of the electricity required for AI comes from fossil fuels. Although the tech industry is betting that everything will soon come from renewable energy sources, this is unrealistic and renewable energy is not ‘free’. The Global South is paying for the development of English-language models for high earners with the ecological consequences.
Chatbots have led to a huge misallocation of research funds and scientific resources. Ultimately, they are preventing real linguistic progress and work on real ‘artificial intelligence’.
The language models are racist and anti-minority because they ‘over-represent’ mainstream opinion. AI increases bias in a self-reinforcing cycle. In practice, AI reproduces racist and other discriminatory patterns (black people have been denied legitimate insurance claims, medical services and state social benefits). In the USA, AI systems are involved in handing minorities comparatively longer prison sentences.
The language generators do not understand or produce ‘language’. Language always has form and meaning, but chatbots only have ‘form’. They are only successful in tasks that can be approached by manipulating linguistic forms. However, as they produce grammatically largely error-free, genuine-sounding texts, they exploit people’s tendency to find meaning in language and to interpret sequences of characters as meaningful communicative acts. Due to this potential for manipulation, work on ‘synthetic human behaviour’ is a ‘glaring red line’ in AI development. ‘Synthetic’ can be translated as artificial, but it does not exactly hit the nail on the head. The authors (these sections can probably be traced back to the computational linguist Emily Bender) criticise the approach of using artificial language to imitate human speech in order to deliberately and purposefully confuse users.
Clickworker
“Digitalisation first, concerns second”
(The German liberal party FDP‘s slogan for the 2017 federal election)
Machine learning has prerequisites, and its results need constant correction. Generative AI is based on the work of so-called clickworkers, who analyse texts, tag images with keywords, listen to audio recordings and sometimes collect data themselves – for example by taking photos on predefined topics – often under precarious conditions. Without these poorly paid clickworkers and content moderators in Kenya, Venezuela, Argentina, Bulgaria, etc., ChatGPT would not exist any more than social media did before it. The work of these people often remains hidden because it doesn’t fit in with the corporate narrative that everything will take care of itself with AI. For ChatGPT, for example, three dozen workers in Kenya created pre-training filters for the equivalent of an hourly wage of between 1.32 and 2 US Dollars. They are not paid per hour, however, but on a piecework system (in Eastern Europe, Latin America and Asia, you get a maximum of one Dollar per processed data record, text passage, etc). [7]
It is difficult to find out how many clickworkers are working on AIs, and even more difficult to estimate the volume of work. Providers such as Applause or Clickworker claim to have several million clickworkers each – Clickworker alone around 4.5 million. They do not talk about the labour time required to train bots, etc. OpenAI, Google, Microsoft and Amazon say nothing about it, and there are no serious independent studies.
Milagros Miceli, who researches the work behind AI systems at the Weizenbaum Institute in Berlin, also only speaks of “millions”: “There are millions of people behind the applications, moderating content and labelling training data. They also help to generate the data in the first place by uploading images and speaking words. There are even employees who pretend to be AI to users.” For example, one such case became known in Madagascar: 35 people lived in a house with only one toilet; they had to constantly monitor cameras and raise the alarm if something happened. A Parisian start-up had previously sold the system to large French supermarkets for a lot of money as “AI-controlled camera surveillance against shoplifting”. In another case, refugees from the Middle East monitored hospital patients via camera from Bulgaria and had to trigger an alarm if, for example, someone fell out of bed or needed help. They earned hourly wages of around half a US Dollar. Some also worked directly from Syria.
Miceli estimates that 80 per cent of the costs for an AI go to the computing power required, 20 per cent to the manpower required, of which 90 per cent is likely to go to the engineers in the USA.
“The workers gather a lot of expertise. They are the experts in dealing directly with data because they have to deal with it on a daily basis. Nobody has learnt the trade better, not even the engineers. Some resist the miserable working conditions. It helps the workers most if they organise themselves. Our conversations with them have also shown this.” (Milagros Miceli) [8]
In a petition to the German parliament at the end of June, hundreds of content moderators for online networks such as Facebook and TikTok demanded better working conditions. Previously, employees of the Kenyan Meta subcontractor Sama had sued their employer for illegal dismissals. Meta did not wish to comment on this issue.
Digitalisation is not the same as increased productivity
“The degradation of workers is not caused by systems that are actually capable of replacing them. Rather, they are already having effects when people are led to believe that such systems can replace workers.” (Meredith Whittaker at re:publica 2023)
“From 2035, there will no longer be a job that has nothing to do with artificial intelligence” (German Federal Labour Minister Hubertus Heil)
AI helps to circumvent labour laws and prescribed rest periods. In 2015, for example, a new staff scheduling software programme at Starbucks made headlines by scheduling shifts for employees in an extremely chaotic and short-term manner. Based on a database of customer flows in real time, the programme only ever called as many workers into the shift as necessary – and always assigned them less than 30 hours per week so that Starbucks did not have to pay statutory health insurance.
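The logic of such software fits in a few lines. The sketch below is a hypothetical reconstruction with invented numbers, not Starbucks’ actual code: staff each day according to a demand forecast, and cap everyone just below the threshold that would trigger employer-paid health insurance.

```python
# Toy demand-driven scheduler with a statutory-threshold cap.
# All numbers are invented; the point is the 30-hour ceiling, not the forecast.
forecast = {"Mon": 3, "Tue": 5, "Wed": 2, "Thu": 6, "Fri": 8}  # staff needed per day
workers = {f"w{i}": 0 for i in range(6)}  # hours assigned so far this week
SHIFT_HOURS = 6
HOURS_CAP = 30  # stay below the level that triggers statutory health insurance

rota = {}
for day, needed in forecast.items():
    # Call in only as many workers as the real-time demand forecast requires,
    # preferring whoever has the fewest hours so nobody crosses the cap.
    available = [w for w, h in sorted(workers.items(), key=lambda x: x[1])
                 if h + SHIFT_HOURS < HOURS_CAP]
    crew = available[:needed]
    for w in crew:
        workers[w] += SHIFT_HOURS
    rota[day] = crew

print(rota)
print(workers)  # nobody ever reaches 30 hours
```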
The threat to translators, journalists, actors, authors, etc. is of a different kind. AI-supported journalism would cut many jobs; translators who ‘only’ correct a DeepL translation earn much less; and so on. It is therefore no wonder that authors and actors in the USA have started the first strikes against AI (see below).
But does AI also help to increase productivity?
An increase in productivity (“increase in the productive power of labour”) occurs when the working time required to produce a commodity is reduced. A smaller amount of labour creates a constant or even larger amount of use value. Social progress would be achieved if this increase in productivity meant that people had to work less and the standard of living (living space, mobility, good food) remained at least the same or even increased (whereby more freely available time in itself increases the quality of life). Historically, this has usually led to longer life expectancy.
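Written compactly (our notation, not the article’s):

$$\text{productivity} \;=\; \frac{\text{use-values produced}}{\text{labour time expended}}$$

An increase in the productive power of labour shrinks the denominator for a constant or growing numerator; whether that translates into less work at a stable or rising standard of living is the social question.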
This has been true for a large part of humanity over the last 200 years. The average working time for a worker fell from over 3,000 hours per year in 1870 to around 1,500 in 2017. General life expectancy rose from around 30-40 to 70-80 years.
Today, however, it is more important than in the past which income bracket you belong to and where you live, i.e. how much access you have to the ‘productive forces’ in the areas of medicine, infrastructure, etc. Depending on their country and sex, people in the highest income brackets live five to 15 years longer than those in the lowest. Never before has there been such a large difference in wealth and income between the rich and the poor. Consequently, there has never been such a difference in life expectancy between rich and poor. While it is now possible to live healthier and longer, the life expectancy of the poor is falling.
Progress and productive forces
In the 19th and 20th centuries, employers ultimately responded to labour struggles by increasing productivity. A turning point was the introduction of computerised just-in-time production without warehousing (lean production) in response to the ‘crisis of Fordism’. What was propagated in the West as ‘Toyotism’ was not a change in the labour process to increase productivity, as could be seen in the transition from water to steam power or in the development of the assembly line. It was a shift to Asia and subcontracting. The same assembly line factories were built in China as in the West – only the labour costs were much lower.
On a political level, it succeeded in dismantling the huge factories with tens of thousands of workers and thus breaking the fighting power of the working class. But since then, rates of productivity growth have been falling and are nowhere near those of the pre-1970s. Car companies have spent the last 20 years making profits not through ingenious production processes, but through financial transactions, price increases, emissions-promoting sales regulations and cheating their suppliers. Growth is based on infrastructure wear and tear, withheld investments and credit expansion.
The ‘smart factory’ is a reaction to falling sales and the implosion of the just-in-time system.
Against a backdrop of stagnation in the tech and automotive industries, the two have joined forces to propagate a new business model. Production facilities and logistics are to ‘organise themselves’ through consistent automation and digitalisation, with goods production from order to delivery functioning without people, because the smart factory networks ‘everything with everything’. At the Hannover Messe trade fair in April 2023, there was talk of ten million factories worldwide that “are waiting to be digitalised”; the market for smart factory components is already worth 86 billion Dollars a year. For automation companies such as Siemens, SAP, ABB, General Electric, etc., ‘digitalisation’ is indeed the big revenue driver – the market for AI alone accounted for almost 400 billion euros in 2021 (the total turnover of the automotive industry is just under two trillion euros). Now they are starting to introduce AI applications at factory level.
However, compared to the pre-coronavirus phase, many employers have toned down their fantasies about how many workers in factories could be replaced by AI. They tend to promote the smart factory as a means of saving resources, improving the eco-balance, parts quality and monitoring supply chains. Mass sales can no longer be expanded – which is why many are switching to luxury production. The profitable production of small batch sizes is crucial – hence the talk of ‘individual customer requirements’ and ‘batch size 1’.
Mercedes equips machines and parts with chips that collect all kinds of data for the cloud. The AI is supposed to derive useful measures from this, and the 800 colleagues per shift in ‘Factory 56’, the digitalised model factory in Sindelfingen, are then supposed to ‘implement’ them. Management calls this ‘data democratisation’. Mercedes is working with Siemens and Microsoft to achieve this, with Microsoft providing the AI and the cloud.
At BMW, 15,000 employees work in Omniverse, a real-time graphic collaboration platform that virtually maps a factory. The aim is to get plants up and running faster and more smoothly and to optimise them continuously. This would reduce development and maintenance costs. Computers with the RTX graphics card from Nvidia, which costs several hundred euros, are required. Nvidia is also the provider and owner of the Omniverse, in which 700 companies are currently working.
Further development of monitoring
BMW – like other companies – has been working on AI personnel planning since 2022 and is currently negotiating with the trade union IG Metall. Jens Rauschenbach, ‘Head of Standards/Methods of Value Added Production System and Industrial Engineering’ at BMW, sees the opportunity to finally monitor all factories live and centrally and compare them with each other: “Until now, it was almost impossible to compare personnel scheduling between two plants, but in future we will have access to standardised data for all functional levels, which will be available at the touch of a button.” [9] In the immediate sphere of production, AI is supposed to find optimisation potential by, for example, looking for correlations between rework, rejects, frequent cycle changes and tool changes. (A lot of this is knowledge that workers themselves have, but do not disclose even when they are offered a bonus through ‘continuous improvement’ schemes). The data collected can help make production processes more precise and reduce wear and tear, but at the other end the massive accumulation of data creates new costs. From their offices and meeting rooms, managers imagine using computers and sensors to track workers’ movements and turn the breathing spaces of ‘free time’ workers create for themselves into productive labour time. But they may soon find out that digitalisation and increased productivity are two different things. At the end of August, Toyota ran out of server memory during maintenance and all 14 Japanese plants were shut down for a day. At the end of September, a faulty computing process on a server at VW’s main plant in Wolfsburg multiplied to such an extent that almost the entire global production network went down. [10]
The first strike against AI
Since the beginning of May 2023, more than 11,000 screenwriters organised in the Writers Guild of America have been on strike in the USA. In mid-July, the actors also went on strike. Their union, the Screen Actors Guild, has around 160,000 members. According to union president Fran Drescher, 86 per cent of them earn less than 26,000 Dollars a year. Both unions are demanding higher wages and regulation of the use of AI.
After almost five months of strike action, the writers’ union reported success. In addition to higher allowances for retirement and health care, and wage increases of five per cent this year, four per cent next year and 3.5 per cent in 2025, it was agreed that an AI may not replace a human writer or take their pay. The studios themselves are not allowed to use AI to write scripts or develop ideas, but the authors may use it. In future, streaming services will have to disclose to authors how often their series are watched and pay them accordingly. On the 9th of October, union members approved the collective agreement, which runs until May 2026.
The strike by the actors’ union continues. Negotiations were held for the first time since the strike began at the beginning of October, but they failed and were interrupted. The future of the industry is being negotiated here even more than with the authors: Generative AI that builds scenes and sequences for films is the wet dream of second-tier producers and directors who supply the entertainment industry’s daily bread of TV series and B-movies. The boom in the streaming industries has put even more pressure on these producers of run-of-the-mill picture goods. Eliminating the insecurities and costs associated with the creative proletariat of actors is supposed to save the arse of their business model. There have already been some attempts with digital tools. Murdered rapper Tupac made another virtual appearance via hologram at the Coachella festival in 2012. Scorsese had the main actors in his film The Irishman digitally rejuvenated, etc. However, these were digital methods for individual details or shots in films that do not make the work of actors superfluous. With the use of generative AIs, however, they would become largely obsolete. Generative AIs could create any number of sequences using recordings of them in various situations and emotional states. [11]
The US actors’ union SAG-AFTRA does not want to completely rule out the use of AIs in the production of films. However, its demand for appropriate compensation for actors has already met with firm resistance from the Alliance of Motion Picture and Television Producers. The union decided to target both the production and the marketing of all TV, film and streaming productions with the strike, and this was binding for all union members. The union is demanding mandatory approval and appropriate remuneration when AIs are used to change scenes or generate new ones. The companies only want to pay for half a working day of recordings, which could then be used to generate further scenes at will – without the consent of the actors and without further payment. If the bosses had their way, it would also be possible to use these recordings to train AIs without consent.
The production power of screenwriters and actors was enough to bring the industry to a screeching halt. But this shows the corporations all the more why they are relying on generative AI – it’s about breaking this power. The outcome of the strike was still unclear at the time of going to press with this issue of Wildcat.
At the end of July, writers in the USA also made their voices heard. In an open letter from the Authors Guild, more than 9,000 of them protested against the free use of their works in the development of AI. “Our writings are food for a machine that eats incessantly without paying for it,” it says. The president of the Authors Guild, Maya Shanbhag Lang, said in an interview that the results of AI will always be “derivative” and that the technology can only “regurgitate” what has been fed to it by humans. The petition also points out that the average income of professional authors has already fallen by up to forty per cent over the past decade. This coincided with the news that the largest book company in the US, Penguin Random House, is laying off more people. The New York Times quoted a letter from the CEO to his employees: “I’m sad to share the news that yesterday some of our colleagues across the company were informed that their roles will be eliminated.” That sounds like an AI horror story. Dietmar Dath pointed out in an FAZ article: “It’s not AI that’s the problem, but the mindset of Bill Gates, for example, who recently said at a Goldman Sachs event that personal AI assistants would soon make Amazon redundant because they could ‘read the stuff you don’t have time to read’. The fact that Gates obviously doesn’t know what reading and writing are good for, apart from making money, could have been seen from the functional design of his company Microsoft’s products.” [12]
“Do not expect to be downloaded in an android body-form any time soon.”
(This was the advice of the Fugs, in their song Advice from the Fugs back in 2003).
Very often, new technologies in capitalism were brought into the world with fantasies of redemption. Werner von Siemens wanted to raise people to a “higher level of existence” in the “scientific age”. Emil Rathenau believed that electricity would enable a “civilisation that made people happy”. Henry Ford promised that his “motor car” would bring “paradise on earth”, and James Martin, the computer pioneer of the 1960s, had a vision of a wired society in which computers would create more democracy, more leisure time, more security, more clean air and more peace.
Internet companies sell us expropriation (data theft) and manipulation (romance scammers) as progress. Complex problems are supposedly solved by reducing them to technical problems, and it is “society’s task” to “better adapt” to the new technologies, as Musk and Co. put it in their open letter in March.
Umberto Eco saw the cult of technology as a hallmark of fascism. And indeed, Henry Ford was not only crazy, but also a great admirer of National Socialism. The state of mind and political orientation of today’s Silicon Valley celebrities is very similar. When it comes to their ‘visions’, it is often difficult to separate marketing from megalomania. Do they possibly believe in it themselves? With AI, “we can make the world and people’s lives wonderful. We can cure diseases and increase material prosperity. We can help people live happier and more fulfilling lives,” said the head of OpenAI Sam Altman – in the same month that he warned of the extinction of humanity through AI!
Will AI bring salvation, or the extinction of humanity? Tremendous happiness, or the end of the world? This stark juxtaposition emotionalises and narrows the debate. There is no more room for criticism and questions – including the extent to which large parts of the hype are instigated for publicity reasons, and the promises are just hot air and staged deception. This strategy is typical of ‘long-termism’.
Twitter is now called X
‘Long-termism’ sees humanity’s primary moral obligation as securing the conditions for the well-being of trillions and trillions of sentient beings in the distant future. To do this, however, humanity must survive, which is why all moral questions are reduced to ‘existential risk’ (the so-called ‘xrisk’).
A central thought experiment of Long-termism is the consideration that, in the long run, peace and a nuclear war that kills 99 per cent of the world’s population could have more in common with each other than that nuclear war has with something that wipes out the entire human race. Such predictions rest on assumptions such as interstellar space colonisation, mind uploading and the digital replicability of consciousness – although these are likely to take a few more weeks (see Human Brain Project)! At the same time, they dismiss current problems, such as the consequences of global heating or social inequality, as “morally negligible”. After all, compared to the trillions and trillions of happy beings in the distant future, the few billion people of today are just a rounding error, and their problems are negligible.
The distinguishing feature of Long-termism is the x (xrisk). Musk attested to Long-termism having a “close alignment with my own philosophy”. He named a son X Æ A-12, his space tourism company “Space X”. Twitter is now called X – and is “a logical next step towards superintelligence”, in the words of digital marketing expert Helén Orgis, on LinkedIn. This is because X provides an enormous amount of data (communication, financial movements, purchasing behaviour). In turn, Musk can use this to feed his AI company X.AI and his brain implant company Neuralink. In general, Musk is conducting ‘research’ in all the fields specified by long-termism.
The shell of Long-termism is “Effective Altruism” (there is no morality; good is what benefits the most people; make as much money as possible in order to do as much good as possible). This sees itself as a further development of Ayn Rand’s ‘Objectivism’. In addition to Altman and Musk, other prominent supporters include Peter Thiel, representatives of the crypto scene (for example Sam Bankman-Fried) and the founder of the website Our World in Data. The UN report Our Common Agenda, published in 2021, is said to have adopted key concepts and approaches of long-termism.
An entrepreneurial philosophy straight out of a book
Timnit Gebru also wrote in November 2022 that in her two decades in Silicon Valley, she had witnessed how “the Effective Altruism movement has gained a disturbing level of influence” and is increasingly dominating AI research. Thiel and Musk, for example, were speakers at the Effective Altruist conferences in 2013 and 2015 respectively. [13]
Like Long-termism, Effective Altruism also works with apocalyptic threats. The biggest threat is that a general artificial intelligence will wipe out humanity. The only way to prevent this would be to create a good AI as quickly as possible. To this end, Elon Musk and Peter Thiel founded the company OpenAI in 2015 to “ensure that artificial intelligence benefits all of humanity”, as their website states. Unfortunately, things turned out differently: OpenAI released ChatGPT at the end of 2022. Four years earlier, Musk had withdrawn from the company because he was unable to take sole control; Microsoft has been the main shareholder since 2019. Musk and Thiel have invested heavily in similar companies to develop “good AI”, such as DeepMind and MIRI.
Musk likes to play with codes of the QAnon movement. Thiel is also a fascist technocrat who wants to hand over power to a “superior ruling class”; a “single individual” could change people’s fate for the better, he wrote. In 2009, he declared that freedom and democracy were not “compatible”. During the 2016 election campaign, he publicly sided with Trump and donated millions to his campaign. Since then, he has financed ultra-right-wing Republicans. His company Palantir is intertwined with secret services and the military industry.
Only an elite can save humanity; selfishness, ingenuity and efficiency are the highest virtues; self-interested big industrialists are the “engine of the world”; stopping this engine means the end of civilisation; therefore all state intervention is immoral. Ayn Rand turned this kind of stuff into bestselling literature in the 1950s. In the USA, she is still one of the most influential and most widely read political authors. Her novel Atlas Shrugged from 1957 has been repeatedly translated into German under new titles, most recently in 2021 as Der freie Mensch. Rand regarded Kant, the philosopher of the Enlightenment, as “the most evil man in the history of mankind”. Alan Greenspan, the former chairman of the Federal Reserve, was a close friend of Rand’s and adopted her political-economic ideas, as did the Tea Party movement. The Ayn Rand Institute, which propagates her Objectivism, also played an important role in the protests against Barack Obama’s healthcare reform.
“We must free ourselves from the alternative that [Effective Altruism] sells us: either to be subjugated by an AI or to be saved by an increasingly elusive techno-utopia promised to us by the Silicon Valley elites.” (Gebru, ibid.)
Technology is not a driver of social progress
A data leak at Tesla revealed in June that its Autopilot had already caused more than 1,000 accidents. Technically it was on the same level as VW’s solution, which was already several years old, with one crucial difference: unlike VW’s system, it did not switch itself off as a precaution. The nonchalance with which Musk and his companies ignore technical deficits and keep making untenable promises is not the “quirk of an entrepreneurial genius” – it is his business principle.
Using the startup method of the Minimum Viable Product (launching a barely viable product onto the market), he shoots rockets into space, puts cars on the road and unleashes dangerous software on us. Ultimately, human lives do not matter to him if they get in the way of his long-term mission.
The tech billionaires got rich through big data, share deals and company sales. They are anything but progressive. They use their wealth to finance reactionary forces (Thiel). They push the development of AI in the direction that suits them: black-box systems, disclaimed responsibility, the exploitation of social needs, as with the chatbot app Replika. AI is indeed accelerating some social developments – but not in a progressive direction.
If, for example, ‘learning’ no longer means understanding something, but merely passing a multiple-choice test, sooner or later the hour of generative AI will strike.
If think tanks can replace political debate and manipulate governments, as we described in the editorial, then for big business, AI is “an even more ingenious instrument of indirect political-economic trickery … than the popular ‘foundations’.” (Dietmar Dath, Finde ein Kürzel…)
Ford’s ‘motor car’ led, among other things, to the dismantling of functioning public transport systems (electric buses, rail networks, pneumatic tube systems). But the car was a very successful business model for more than a century. This is not yet the case with generative AI. ChatGPT costs around $700,000 a day to operate – too expensive by a factor of 90 to be financed through advertising like the Google search engine. Twitter’s monthly US advertising revenue has halved since Musk’s takeover in October 2022. So subscription models are needed. Microsoft charges $10 a month for Copilot on its developer platform GitHub – and apparently makes a loss of $20 per user per month, or as much as $80 for power users. With a few million developers, Microsoft can certainly cope with that deficit. But it cannot offer its AI helpers on those terms to the hundreds of millions of users of its Office programmes or its operating systems. For business customers, Microsoft and Google have now settled on $30 per user per month – a price most private users will certainly not pay for additional AI tools. The big five tech companies are still expanding their power by cross-subsidising AI. But at some point, profits will have to be generated. Otherwise Dietmar Dath is right, and AI was the abbreviation for “awaiting insolvency”.
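A quick back-of-the-envelope check of the Copilot figures (the subscription price and the reported per-user losses come from the press reports cited above; the implied operating costs are simply our own arithmetic from those numbers):

\[
\text{implied cost per user} \approx \underbrace{\$10}_{\text{subscription}} + \underbrace{\$20}_{\text{reported loss}} = \$30 \text{ per month}
\]
\[
\text{implied cost per power user} \approx \$10 + \$80 = \$90 \text{ per month}
\]

On these numbers, even the $30 business tariff would at best break even on an average user – and would still lose money on every heavy one.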
Customers decide the success of the business (nobody has to stay on X). The people in Brandenburg, Magdeburg and Saxony will also decide how the story continues: will they keep letting their water be taken away? And Musk’s workers, who are only paid irregularly and face a high risk of accidents – will they suck it up forever? For a year now, sickness rates of up to 30 per cent have been reported at the car plant in Grünheide. Yesterday (on the 9th of October), the first major action was taken against extreme workloads, excessive production targets and a lack of occupational safety.
Footnotes
[1] M. Park, E. Leahey, R.J. Funk: Papers and patents are becoming less disruptive over time. Nature 613, 138-144 (2023).
An interesting commentary on this by Florian Rötzer on Telepolis: “Fewer breakthroughs, sluggish progress”.
[2] “Cooking ties smell, flavor and language together in a way seldom recognised: the smells and flavors of cooking were likely a prime factor in the development of language.” Gordon M. Shepherd: Neurogastronomy: How the Brain Creates Flavor and Why It Matters.
[3] In 2013, the EU provided brain researcher Henry Markram with 600 million euros to set up the Human Brain Project, the largest brain research project in the world. Markram had promised to simulate the entire human brain one-to-one in a computer model and to develop therapies for everything from Alzheimer’s to schizophrenia. The project ended in October 2023 after ten years. It came nowhere near recreating the human brain, and neither schizophrenia nor Alzheimer’s has been defeated. Neuroscience has no clear theory at all; there is not even agreement on central concepts such as memory, cognition or consciousness. It plugs this hole with computer metaphors, which generates research funding – but does not advance science.
[4] Brian Bailey: AI Power Consumption Exploding. semiengineering.com, 15 August 2022.
[5] Oliver Schenk (CDU), head of the Saxon State Chancellery and responsible for the billions of euros in subsidies to the chip industry, celebrated the announced investment by TSMC: “TSMC is one of the most important companies in the world. … These companies tie their investment decisions to three conditions: public funding is essential in competition with other countries, sufficient staff must be available, and the water supply must be secured.” Die Zeit, 4 October 2023: “Water shortage in Saxony: just tap into the Elbe”.
[6] Matt O’Brien, Hannah Fingerhut: AI technology behind ChatGPT carries hefty costs. Taipei Times, 14 September 2023.
[7] Billy Perrigo: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. time.com, 18 January 2023.
[8] Interview with Milagros Miceli: How millions of people work for AI. netzpolitik.org, 17 March 2023.
[9] With AI to the optimal shift plan. AutomotiveIT, 17 October 2022.
[10] To read more: Industry 4.0 in Wildcat 104 – Sabine Pfeiffer: Digitalisation as a distributive force; Adrian Mengay: Production system criticism.
[11] The union’s strike resolution is available online at: https://www.sagaftrastrike.org/post/sag-aftra-strike-order-for-tv-theatrical-streaming-contracts – The Alliance of Motion Picture and Television Producers includes Amazon/MGM, Apple, Disney/ABC/Fox, NBCUniversal, Netflix, Paramount/CBS, Sony, Warner Bros. Discovery/HBO and others.
[12] Dietmar Dath: FAZ, 1 June 2023.
[13] Timnit Gebru: Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’. Wired, 30 November 2022.