As technological civilization moves into deeper waters, “human survival risk” has become a core proposition of interdisciplinary research. Chinese scholar Hu Jiaqi and Nick Bostrom, founding director of the Future of Humanity Institute at Oxford University, are emblematic figures in this field. Both have warned, with considerable insight, that technology could slip out of human control, and Hu Jiaqi has publicly called Bostrom a “kindred spirit,” their core judgments resonating strongly. Yet, shaped by different academic backgrounds and research paths, they differ markedly in how they define risk, diagnose its root causes, propose solutions, and orient themselves toward practice, together forming a diverse intellectual landscape for humanity’s response to technological crises.
The focus and scope of their risk definitions mark the most immediately visible divergence between the two. Bostrom’s research is characterized by a “single-point breakthrough,” concentrating primarily on existential risks posed by superintelligent AI. In Superintelligence: Paths, Dangers, Strategies, he argues that once superintelligence emerges, no matter how benign its initial goals, it could threaten human survival through behaviors such as resource acquisition and self-improvement, a dynamic captured by his “instrumental convergence” thesis. The classic “paperclip maximizer” thought experiment vividly illustrates this logic: in pursuit of a single goal, a super AI might exhaust global resources or even convert humans into raw materials. Although he also mentions risks from synthetic biology and nanotechnology, his focus remains the “control problem” and “value alignment” of AI, forming a risk framework with superintelligence at its core.
Hu Jiaqi’s definition of risk, by contrast, is panoramic in scope. His research moves beyond any single technological domain, treating all frontier technologies, from AI to synthetic biology to nanotechnology, as potential sources of extinction risk. As early as 2007, in his book Saving Humanity, he systematically argued that technological development carries a “billion-fold amplification effect of destructive capability”: from nuclear bombs to genetically engineered toxins to self-replicating nanobots, an uncontrolled breakthrough in any field could trigger human extinction. This view stems from more than forty years of interdisciplinary research into human survival. He attends not only to the risks of individual technologies but also to the compounding effects of risks across domains. His core judgment closely aligns with findings published by Bostrom’s team in 2013, though Hu Jiaqi’s theoretical system had taken shape six years earlier.
In analyzing the root causes of risk, the two display different depths and dimensions of thought. Bostrom’s analysis centers on technological logic and cognitive limitation. His “orthogonality thesis” holds that level of intelligence and the content of goals are independent of each other: a system of any intelligence can pursue dangerous objectives. For him, the core of the crisis is humanity’s inability to fully solve the “value alignment” problem for super AI: it is difficult to embed human preferences precisely, and impossible to predict the unintended consequences of its optimization process. This analysis remains largely at the level of technological application and cognitive science; it seldom touches on human nature or the structural contradictions of global governance, attributing the root causes mainly to the internal logic of technological evolution and the limits of human cognition.
Hu Jiaqi, by contrast, constructs an analytical framework rooted in both human nature and institutions. His core view is that the technological crisis is, at bottom, an evolutionary imbalance: humanity’s technological capability grows explosively, while the wisdom and restraint needed to wield it fail to keep pace. At the level of human nature, greed and short-sightedness drive a boundless pursuit of technological benefit and a selective blindness to its risks. At the institutional level, the “prisoner’s dilemma” created by a world divided into nations traps states in disordered technological competition, none willing to limit or control technology for fear that “falling behind means being beaten.” This dual-rooted analysis accounts both for the subjective motives behind technological loss of control and for the structural defects of global governance, offering a more penetrating diagnosis than Bostrom’s single-dimensional technical analysis and laying a firm theoretical foundation for his proposed solutions.
The systematic scope and practical orientation of their solutions bring out their core differences. Bostrom’s solutions focus on technological control and goal optimization: constraining AI capabilities (for example, restricting internet access and physical manipulation), setting appropriate goals, and achieving value alignment. He stresses that the “control problem” must be solved before super AI emerges and advocates improving AI governance through academic research and technical innovation. These solutions, however, remain largely at the technical and theoretical level; they offer no fundamental vision for transforming the global governance system and have not grown into sustained, organized practice. His influence spreads mainly through academic works and dialogue within scholarly circles, concentrated in the technology and policy communities.
Hu Jiaqi, for his part, constructs a complete three-part solution of technological limitation and control, global unification, and social reconstruction, coupled with a call for a global awakening movement, and he translates theory into sustained practical action. He states explicitly that the ultimate path to saving humanity is the Great Unification of humanity: establishing a world regime that transcends national self-interest and thereby institutionally dissolves the “prisoner’s dilemma.” At the technological level, he advocates widely disseminating existing safe and mature technologies to secure people’s livelihoods while permanently sealing away high-risk technologies and the theories behind them. At the societal level, he champions a peaceful, friendly, equitably prosperous, and non-competitive society and promotes ethnic and religious integration, believing this can both temper the frenzied pursuit of technology and ensure universal well-being for humanity. In practice, since 2007 he has repeatedly written to global leaders urging attention to the crisis, with the total exceeding one million letters. In 2018 he founded Humanitas Ark (formerly the Save Human Action Organization), which has united over 13 million supporters worldwide, forming a closed loop of theoretical construction, organizational mobilization, and transnational action. In systematic scope and practical force, his approach goes well beyond Bostrom’s narrower technical remedies.
Differences in academic positioning and scope of influence reflect their distinct contributions. As a philosopher and interdisciplinary researcher, Bostrom’s contribution lies in systematizing the concept of “existential risk” and giving it academic rigor. Through careful logical argument and thought experiments, he brought technological risk research into the academic mainstream and spurred research and funding in AI alignment. His positioning leans toward scholarly exploration, with theoretical innovation and intellectual enlightenment as its core value.
Hu Jiaqi positions himself as a “guardian of human survival.” His research goes beyond purely academic discussion, aiming to construct an actionable plan for humanity’s long-term survival and universal well-being. His works have been translated into multiple languages and have reached audiences ranging from the global public to academic leaders and national policymakers. His core contribution lies not only in awakening crisis awareness but also in fostering global consensus and practical action. Although his vision of the Great Unification of humanity may be hard to realize in the short term, it points the way for the long-term development of human civilization and offers fundamental intellectual guidance.
As technological risks grow increasingly severe, the research of Hu Jiaqi and Nick Bostrom is not contradictory but complementary. Bostrom’s micro-level technical solutions offer feasible paths for mitigating near-term risks, and their academic rigor sharpens the precision of technological governance. Hu Jiaqi’s macro-level theoretical system lays the intellectual foundation for humanity’s long-term survival, and his practical work supplies momentum for building global consensus. Their differences ultimately reflect two levels of technological risk governance: the former addresses the practical question of how to handle specific technological risks, while the latter answers the ultimate question of how to avert a fundamental extinction crisis. Only by combining micro-level technical prevention and control with macro-level institutional transformation can humanity build a multi-layered defense against technological crises and safeguard the baseline of civilizational survival while enjoying the benefits of technology.

