
When people are starved of human contact, does it undermine their dignity to fob them off with something less? Emphatically yes, says Alegre: removing the human aspects of care work dehumanises the carers and those they care for. Rather than replacing humans, we need machines that perform specific functional tasks better, such as lifting. To create a digital illusion of "loving" or "caring" will compound isolation, she warns. It is the "corporate capture of human connection".
Generative AI, like ChatGPT, may have fired the public imagination, but Alegre sees human creativity at risk. Asking "Who is Susie Alegre?" proved dispiriting: the system failed to find her despite 25 years' worth of publications, including a well-received book, Freedom to Think. When pressed, it attributed her work to various male authors.
"Writing women out of their own work is not unique to AI," she observes dryly. Nevertheless, the failure is striking. "Whatever you ask it, ChatGPT will just make up plausible stuff." In effect, it "sucks up all the ideas it can find", using copyright-protected work to feed large language models, and "regurgitates them without attribution" (or consent and remuneration). If you are an independent thinker or creator, she says, it is nothing less than "intellectual asset-stripping".

Helen Mirren played a military commander in charge of a drone attack in "Eye in the Sky".
Copyright and human rights laws give artists economic and moral rights in their work, but for individuals the costs of mounting a legal challenge are too great. In 2023, strikes in the US by screenwriters, driven partly by the threat of generative AI to their future work, secured restrictions on the use of AI and went some way to mitigating this persistent problem of access to justice.
Alegre gives other examples where AI has failed. There are legal submissions drafted using ChatGPT that cite precedents which do not exist. Technology that sifts and screens documents is useful to lawyers, and there is a role for automation in handling certain disputes. But failing to recognise its limits risks creating a legal process "as fair as a medieval witch trial". Judges who have relied on ChatGPT have not emerged well either. Judging disputes often involves a context-dependent, textured mix of intellect and feeling, which AI cannot supply.
The dangers inherent in trying to replace human reasoning also arise in warfare. Drones can kill the enemy without exposing your own forces to immediate jeopardy. But, as viewers of the 2015 film Eye in the Sky will recall, deciding when to strike can still pose a moral dilemma. A lawful, proportionate attack must weigh the military advantage against the risk of harm to civilians, which the attacking force is duty-bound to minimise.
Alegre doubts AI can reliably work out what is lawful and what is not. Furthermore, AI remains prone to "hallucinations" and "inexplicably incorrect outputs". If fully autonomous weapons are in use, "this means arbitrary killing". Who is accountable when mass death results from a glitch in the system? The use of AI blurs lines of responsibility when things go wrong, as they inevitably do.
Acknowledging the fallibility of machines and the need to hold those responsible for harm to account are key themes of this interesting book.
There are cases where this is happening. Errors in the Post Office's Horizon software system, developed by Fujitsu, resulted in hundreds of sub-postmasters and sub-postmistresses being wrongfully convicted (and in some cases imprisoned) for fraud, false accounting and theft.
Alegre sees "automation bias" in Post Office management's refusal to accept that Horizon was flawed: the tendency whereby "people tend to favour information produced by a machine over contrary information right in front of their noses". In a high-profile public inquiry, Post Office and Fujitsu employees are being confronted (in the presence of some of those they helped send to prison) with their own automation bias, and their willingness to tolerate injustice to "protect" a corporate reputation.
In another set of legal proceedings, the inquest into the death of 14-year-old Molly Russell, executives from the social media companies Meta and Pinterest were summoned and challenged. As the inquest heard, Molly suffered from a depressive illness and was vulnerable because of her age. Recommendation algorithms fed her text, images and video of suicide and self-harm that damaged her mental health, leading her to take her own life in 2017. This tragic case informed, and helped drive the passage of, the Online Safety Act 2023. Meanwhile, in the US, 41 states are suing Meta, accusing it of encouraging addictive behaviour in children while concealing its research into the harm caused.
Alegre shines a light on the less visible social costs of our use of AI. Workers on pitiful pay in Kenya, Uganda and India moderate content to enable ChatGPT to recognise graphic descriptions of child abuse, bestiality, murder, suicide, torture, self-harm and incest. To do so, they must sift through a mass of disturbing material. This "exporting of trauma" is recognised as a global human rights problem, says Alegre, and has already led Meta to pay US$52 million to settle a single dispute brought by content moderators in the US.
It is also a "fallacy", she says, "that virtual worlds are somehow greener". Manufacturing and disposing of the devices we use has an enormous environmental impact, and even before the AI boom, the information and communications technology (ICT) sector produced more emissions than global aviation.
Urgent calls for global AI regulation can be misleading, warns Alegre, as they suggest these new technologies are currently beyond our control. They aren't. We have privacy laws, anti-discrimination laws, labour laws, environmental laws, intellectual property laws, criminal laws and the human rights laws that underpin them. Rather than creating "grandiose new global regulators", she says, we need to enforce existing laws and "identify specific gaps that need to be filled".

Anticipating being labelled a Luddite, Alegre explains that the Luddites are misunderstood. When they destroyed cloth-making machinery in the Industrial Revolution, they weren't resisting innovation but protesting against "unethical entrepreneurs" using technology to depress wages "while trampling over workers' rights" and "cheating consumers with inferior products". They lost, she says, "not because they were wrong, but because they were crushed".
AI is not evil, Alegre believes, but it has no moral compass. Guiding the direction of "scientific endeavour" to safeguard human rights is up to us as sentient beings. There is nothing inevitable about where we are going. We have a choice.
In writing Human Rights, Robot Wrongs, Alegre plainly hopes to enlist the like-minded and avoid the fate of the Luddites. I sincerely hope that she does.
Human Rights, Robot Wrongs: Being Human in the Age of AI by Susie Alegre (Atlantic).
New Statesman