[ICML 2025] Position: AI Safety Should Prioritize the Future of Work
Authors: Sanchaita Hazra, Bodhisattwa Prasad Majumder, Tuhin Chakrabarty
Paper: https://arxiv.org/abs/2504.13959, ICML submission
At ICML 2025, two papers received the Outstanding Position Paper Award. Here, I review one of them.
TL;DR
WHAT was done? The authors argue that the current AI safety paradigm is dangerously narrow, focusing on technical and long-term existential risks while overlooking the immediate, systemic disruption AI causes to the future of work. In this position paper, they use established economic theories—such as rent-seeking (where firms pursue wealth by manipulating policy rather than creating value), intertemporal consumption, and institutional economics—to frame the societal risks of unchecked AI deployment. These risks include destabilizing economies through job insecurity, exacerbating inequality by favoring capital over labor, creating an "algorithmic monoculture" that impairs learning, and devaluing creative labor through widespread copyright infringement.
WHY it matters? Its most crucial contribution is reframing the very definition of existential risk. The paper compellingly argues that we should be as concerned with "accumulative x-risks"—death by a thousand cuts from systemic job loss, decaying institutions, and data colonialism—as we are with a single, "decisive" event like a rogue superintelligence. This shifts the timeline for "safety" from a hypothetical future to a pressing now. By proposing a pro-worker governance framework, the paper provides a crucial bridge between technical AI research and the tangible, human-centric policy needed to steer AI development towards shared prosperity rather than systemic disruption.
Details
The field of AI safety has been largely dominated by concerns over decisive, long-term existential risks—scenarios involving rogue superintelligence, bioterrorism, or large-scale manipulation. While these concerns are valid, a recent position paper argues that this narrow focus misses the forest for the trees. The authors make a compelling case that the most immediate and most consequential risk stems from the systemic disruption of human agency and economic dignity in the workforce, and that AI safety, as a discipline, must prioritize the future of work.
A New Framework for AI-Induced Risk
Instead of introducing a new algorithm, this paper presents a new lens through which to view AI-induced harm. The authors' methodology is to apply established economic and social theories to the current AI landscape, identifying a series of systemic risks that are often treated as secondary externalities rather than core safety issues.
The paper outlines six central claims (P1-P6), painting a concerning picture of AI's current trajectory:
Economic Destabilization (P1): The competitive "arms race" among AI developers leads to rushed deployments, accumulating "technical debt" and creating widespread job insecurity that disrupts traditional patterns of economic stability.
Accelerated Skill Disparity (P2): AI-driven automation disproportionately benefits high-skilled workers and capital owners, displacing lower-skilled labor and widening the economic divide without adequate workforce adaptation.
Extractive Economics (P3): Dominant AI firms are framed as "extractive institutions" (systems designed to transfer resources from the many to a powerful few) that concentrate wealth and power, diminishing worker bargaining power and hindering the shared prosperity that underpins stable societies.
Uneven Global Democratization (P4): The benefits and control of AI are concentrated in high-income nations, fostering a form of "data colonialism" where lower-income countries become dependent consumers rather than co-creators of technology.
Impaired Learning and Creativity (P5): Over-reliance on generative AI in education and research risks creating an "algorithmic monoculture," eroding critical thinking skills and homogenizing human expression.
Devaluation of Creative Labor (P6): The practice of training models on vast troves of copyrighted data without fair compensation is identified as a direct threat to the livelihoods of artists, writers, and other creators.
This framework is powerful because it moves the discussion from abstract, future hypotheticals to concrete, present-day harms grounded in well-understood economic principles like rent-seeking behavior (as described in Krueger, A. O. The Political Economy of the Rent-Seeking Society. The American Economic Review, 1974) and collective action problems.
A Pro-Worker Path Forward
After diagnosing the problems, the authors propose a comprehensive, pro-worker framework for AI governance built on six key recommendations (R1-R6):
Worker Support and Policy: Governments must establish robust social safety nets and retraining programs to support workers displaced by AI (R1).
Promoting Openness and Competition: The dominance of big tech should be countered by promoting open-source AI, including open data and open weights, to foster a more competitive and equitable ecosystem (R2).
Accountability through Technical Safeguards: Mandating watermarking of all generative AI content and funding research into reliable detection tools are crucial for accountability and mitigating misinformation (R3, R4); a toy illustration of how such a statistical watermark could be detected appears after this list.
Fair Compensation for Data: The paper strongly advocates for policies that mandate training data disclosure and introduce royalty-based compensation systems to ensure creators are fairly paid for their work (R5).
Inclusive Governance: To avoid "regulatory capture," policy-making must involve a broad set of stakeholders, including worker unions and advocacy groups, to ensure that corporate lobbying does not override public and worker interests (R6).
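To make the watermarking recommendations (R3, R4) less abstract, here is a minimal, illustrative sketch of the kind of statistical text watermark the literature proposes (e.g., the "green list" scheme of Kirchenbauer et al., 2023): a generator secretly biases sampling toward a pseudorandom subset of the vocabulary, and a detector checks whether that subset is over-represented. This is not the position paper's own proposal; every function name, key, and threshold below is a made-up illustration.
```python
import hashlib

GREEN_FRACTION = 0.5  # share of the vocabulary assigned to the "green list" per context


def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudorandomly decide whether `token` is on the green list, seeded by the
    preceding token and a private key (a toy stand-in for a real keyed PRF)."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def green_fraction(tokens: list[str], key: str = "demo-key") -> float:
    """Fraction of tokens that land on the green list given their context."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


def looks_watermarked(tokens: list[str], threshold: float = 0.65) -> bool:
    """Flag text whose green fraction sits well above the ~0.5 expected by chance.
    A real detector would use a proper statistical test over long texts, not a fixed cutoff."""
    return green_fraction(tokens) > threshold


# A watermarking generator would bias its sampling toward green tokens, so its
# output shows an elevated green fraction; ordinary human-written text should not.
print(looks_watermarked("this sentence was written by a person".split()))
```
Even this simplified sketch makes the policy point concrete: detection only works if the watermarking key and scheme are standardized or disclosed, which is why the paper pairs a watermarking mandate with funded research into reliable detection tools (R3, R4).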
A Necessary Broadening of the AI Safety Agenda
The primary strength of this paper is its timely and well-reasoned argument to expand the AI safety paradigm. By grounding its claims in a rich body of economic theory and real-world examples—from lawsuits against AI labs for using pirated books (The Verge, 2024) to the measurable decline in creative freelance jobs (Demirci et al., 2024)—the authors make the issue of economic impact feel both urgent and tangible.
An interesting tension within the paper’s recommendations, worthy of further discussion, is the potential conflict between openness and safety. For example, while promoting open-source AI (R2) is a powerful tool to counter the dominance of large corporations, it could inadvertently accelerate the proliferation of fine-tuned, hard-to-detect models. This complicates efforts to watermark content (R4) and prevent the "impaired learning" (P5) the authors rightly caution against. A truly robust framework must therefore not only propose these individual solutions but also address the inherent trade-offs between them.
While the paper is a position piece and thus lacks original empirical data, its synthesis of existing evidence is compelling. The central challenge, which the paper acknowledges, lies in the implementation of its recommendations. Achieving global consensus on copyright, preventing regulatory arbitrage, and funding massive worker transition programs are monumental tasks. However, by clearly articulating the risks of inaction, the paper provides a strong impetus to begin tackling these challenges.
Conclusion
"Position: AI Safety Should Prioritize the Future of Work" is a significant and timely contribution to the AI discourse. It serves as a powerful call to action for researchers, policymakers, and developers to look beyond long-term, speculative risks and address the immediate, systemic harm that unchecked AI development poses to our economic structures and social fabric. It is an essential read for anyone who believes that the goal of building beneficial AI must include protecting the value and future of human work.