Managing the AI Backlash

By Edoardo Campanella

MILAN – Disruptive technologies are rarely welcomed by workers or others with a significant stake in the status quo. Innovation requires adaptation, and adaptation is costly. Powerful incumbents’ resistance to revolutionary technologies has been a major factor in past periods of stagnant growth. Predictably, the initial enthusiasm for generative artificial intelligence, following ChatGPT’s release last year, has given way to fears of technological unemployment.

No one expects the disruption caused by AI to be minor. According to Goldman Sachs, “generative AI could substitute up to one-fourth of current work” in Europe and the United States, with administrative and legal professions more exposed than physically intensive ones, such as construction and maintenance. Already, AI can produce text, videos, and pictures that are indistinguishable from human-created content. It far outperforms humans at many tasks involving pattern recognition, and it is increasingly adept at making basic judgment calls (for example, in responding to customer-service queries).

History offers hints about how the backlash against AI will play out, though some parallels are more useful than others. The most common analogy is to the Luddites, who reacted to industrialization in early-nineteenth-century England by destroying the machines. But this comparison is inapt: AI is a digital tool, with no machinery to smash. Similarly, AI is unlikely to revive trade unions, which were born of industrialization, because it primarily threatens white-collar jobs rather than assembly-line ones.

To find an adequate historical comparison, we need to go back further, to the Middle Ages, when powerful craft guilds – associations of lawyers, notaries, artisans, scribes, painters, sculptors, musicians, physicians, and so forth – regulated skilled professions across Europe. Though guilds benefited society by ensuring product quality and certifying practitioners’ qualifications, their main purpose was to protect and enrich their members by excluding competitors. Such monopolization generated large profits with which to reward political elites at the expense of consumers – “a conspiracy against the public,” as Adam Smith put it.

Guilds were generally conservative when it came to innovation. Though they promoted a favorable environment for technological change through technical specialization, artisan upward mobility, and monopoly rents, they resisted new devices and products, banned members from adopting novel processes, and boycotted wares and workers from places that used forbidden techniques. In response to guilds’ petitions, local rulers often passed laws blocking innovation.

The printing press is a case in point. Invented around 1440, it did not become widely used until the eighteenth century, owing to obstruction by scribes’ guilds. Similarly, in fifteenth-century Cologne, masters of the linen-twisters’ guild banned horse-powered twisting wheels for fear that the new machines would take their jobs.

But this is not to suggest that guilds were anti-technology. While they fiercely opposed labor-replacing innovations, they were generally open to labor-augmenting ones that saved working capital and improved quality.

Nowadays, many of those threatened by generative AI – such as lawyers, doctors, and architects – are organized into professional associations that descend directly from the old guilds. In Europe especially, these organizations still restrict competition by imposing entry barriers, setting professional fees, establishing quality standards, and curtailing advertising, among other measures.

Looking ahead, we may see many professions form a common front to control AI through general rules concerning data regulation, ethical standards, or capital taxation. But the fiercest reactions will likely be confined to specific professions, given how much the threat of automation varies from one occupation to another. Like medieval guilds, today’s professional groups will likely appeal to sympathetic politicians and lobby for regulations to control AI. Ideally, these efforts will steer the technology toward labor-augmenting uses, rather than labor-replacing ones.

Moreover, some professions will police themselves: by establishing new standards for interactions between clients and AI; prohibiting the automation of certain tasks on ethical grounds; or limiting access to sensitive client data for privacy reasons, thereby curbing the technology’s learning potential. Medical doctors, for example, will probably insist on having the last word in AI-assisted diagnoses, and reputable news outlets will want to verify the facts reported in AI-written articles.

As is always the case, some members of a given profession will be more vulnerable than others. In the Middle Ages, the wealthiest masters exerted the most influence over public authorities, and often shaped policies to their own benefit – rather than to the benefit of all guild members. Today, partners in law firms will welcome the cost-cutting automation of tasks performed by younger associates (such as writing standard contracts or finding legal precedents), so long as they can retain the high-value-added tasks that they themselves perform.

Some countries will be much more exposed to a guild-style backlash than others. In Europe, guilds formally disappeared after the French Revolution, when rulers came under pressure to establish more egalitarian societies and unleash industrialization. Nonetheless, the corporatist mindset has survived across the continent, as evidenced by persistent over-regulation of the services sector. By contrast, the United States has no comparable history of craft guilds, which suggests that any backlash there will be weaker.

To realize AI’s full potential, policymakers and innovators alike should promote uses that elevate, rather than suppress, human agency. If AI is seen as a threat rather than a source of empowerment, well-organized lobbies will delay or even derail its adoption in many sectors. The slow diffusion of the printing press is a cautionary reminder. In their eagerness to usher in the future, AI developers should heed the lessons of the past.