As with the printing press and the dotcom boom, early excitement and speculation often overshadow the enduring impact of emerging technologies.
“Innovation,” as economist William Janeway put it in his influential book “Doing Capitalism in the Innovation Economy,” “commences with discovery and concludes with speculation.” That sentiment captures much of 2023. The discovery was AI (personified by ChatGPT); the current scenario is a speculative bubble. Major corporations are releasing products capable of “hallucinating” (a term of art associated with large language models) while spending lavishly on the hardware needed to build even bigger ones. Microsoft, according to recent reports, intends to buy 150,000 Nvidia chips next year, each priced at $30,000 (£24,000). It looks somewhat irrational. Viewed through Janeway’s lens, however, such episodes have always been part of the landscape of innovation.
He notes that the revolutions shaping the structure of the market economy — from canals to the internet — demanded substantial investment in networks whose eventual utility could not have been envisioned at their inception. Put bluntly, what we retrospectively identify as technological progress often emerged from periods of irrational enthusiasm that brought significant waste, investor bankruptcies, and societal upheaval — in other words, bubbles. Consider the dotcom frenzy of the late 1990s, or the US railway expansion that began in the 1850s and saw no fewer than five railway lines built between New York and Chicago. In both cases, many people suffered significant financial losses. Yet, as economist Brad DeLong observed in his 2003 Wired article “Profits of Doom,” “Americans and the American economy benefited immensely from the resulting network of railroad tracks spanning the continent.” Amid railroad bankruptcies and price wars that slashed shipping costs and rail rates nationwide, a peculiar thing happened: new industries emerged.
The historical lesson of tech bubbles, then, concerns what remains once the bubble inevitably bursts. Which brings us back to the current frenzy surrounding AI. Undoubtedly, AI’s ability to help those who struggle to compose coherent sentences is impressive. And, as Cory Doctorow has noted, it is remarkable that Dungeons & Dragons-playing teenagers can now generate epic illustrations of their characters battling monsters, even if those depictions feature “six-fingered swordspeople with three pupils in each eye.” The technology offers many other engaging features that captivate millions, mostly free of charge. But what will last? What will future historians recognise as the enduring legacy of this technology?
Predicting such outcomes now seems impossible, chiefly because we habitually overrate the immediate influence of groundbreaking technologies while vastly underestimating their long-term consequences. Imagine assessing the societal impact of printing in 1485, a mere 40 years after Gutenberg printed his first Bible. At that point, no one could have foreseen its role in undermining the authority of the Catholic church, fuelling the Thirty Years’ War, fostering the emergence of modern science, creating new industries and professions, and, as cultural critic Neil Postman noted, reshaping our conceptions of childhood. In essence, print shaped human society for four centuries. If machine learning proves as revolutionary as its proponents claim, its enduring impact may be comparably profound.
Where might we look for indications of how this could unfold? Three areas merit attention. First, despite its present imperfections, the technology appears poised to offer a substantial enhancement of human capacity — a kind of power steering for the mind. That same amplification, however, extends to distorted and malicious perspectives. Second, the technology’s sustainability is in question, given its voracious appetite for energy and for natural and human resources. (Much of the output of current AI depends on the unrecognised labour of poorly paid workers in less affluent countries.) Third, there is the question of whether, and how quickly, the technology can become economically viable. At present, the assumption is that enthusiasm from the public, governments, and the tech industry will translate seamlessly into widespread adoption and tangible returns on the monumental cost of running these systems. According to figures such as the head of Accenture, the global consultancy firm, this may be overly optimistic. “Most companies,” she said recently, “aren’t prepared to implement generative artificial intelligence extensively due to lacking robust data infrastructure or the necessary controls ensuring safe technology usage.” Here’s to a more grounded perspective in the new year!