This article is an invitation to discuss a recurrent question in the field of Humanities: “Does it make any sense to talk about Humanities in the age of generative AI?”. While much has already been written on this topic, this paper brings together three distinct disciplinary viewpoints, namely the humanities, computer science, and education, to examine it from complementary angles. It discusses labor market transformations, ethical issues related to authorship and plagiarism, cultural bias, as well as the effects of AI on students’ cognitive processes and learning practices. It also considers how these issues are being reshaped in current expert discussions. From these three perspectives, the analysis suggests that profound changes and transformations are inevitable, and the Humanities are no exception. In this context, it becomes necessary to distinguish between those tasks that cannot be replaced by artificial intelligence and those that are likely to be progressively taken over by it.
This paper explores how artificial intelligence is transforming engagement with cultural heritage from static preservation toward interactive knowledge production. Drawing on media-historical perspectives, it proposes the concept of knowledge liberation to interpret successive stages in the evolution of knowledge environments, from oral transmission and print culture to digital networks and AI-mediated interaction. Within this framework, cultural knowledge becomes progressively less constrained by the material conditions of its transmission. The paper examines several AI-enabled platforms developed at Peking University that support large-scale digitisation, structured data extraction, knowledge-graph construction, and multimodal cultural content generation. It argues that AI is emerging as a new knowledge medium that reshapes research methodologies, expands modes of cultural representation, and strengthens connections between humanities scholarship and public knowledge production.
Artificial Intelligence (AI) has become a significant influence on how we teach, how we communicate, and how we imagine the modern workplace. To understand what it means and how it will shape our lives, it is necessary to examine it from a humanities perspective, with special attention to history, ethics, and literary representations of AI. Our science-fiction literary tradition provides ample material to work with in works by Mary Shelley, Karel Čapek, Isaac Asimov, Ridley Scott, and William Gibson. In these works, the iconic literary figures of the Luddite, the robot, and the console cowboy embody some of the most important tendencies in the advance-theorization of AI. They reveal anxieties about employee obsolescence, a fixation on anthropomorphism, and a lingering hope for organized resistance to power imbalances. A carefully considered humanities framework provides a check on exaggerated claims from both pro- and anti-AI constituencies.
This article examines the methodological transformation that AI’s entry into literary criticism has set in motion. This transformation proceeds along four interrelated dimensions—technological capacity, critical method, literary ontology, and paradigm formation—that do not unfold in linear sequence but are mutually constitutive. At the technological level, the shift from machine reading to AI reading has given literary studies a new capacity for large-scale semantic analysis, though a qualitative gap persists between AI “reading” and human reading. At the methodological level, computational analysis has expanded the scale and verifiability of criticism without being equivalent to “objectivity”; its value lies in rendering the research process more transparent and reproducible. At the ontological level, generative AI poses substantive challenges to such core categories as “author,” “text,” and “literariness,” yet this disruption extends, rather than originates, the destabilizing work already undertaken by twentieth-century literary theory. Building on these three interconnected transformations, the article proposes the paradigm of “computational literary criticism,” distinguishing it from Franco Moretti’s “distant reading,” Matthew Jockers’s “macroanalysis,” and the broader field of digital humanities. The article argues that the core value of computational literary criticism lies not in replacing interpretation with computation, but in constructing a collaborative framework of sustained interaction between computational discovery and humanistic interpretation—a framework whose viability depends on methodological self-discipline, awareness of data ethics, and an unwavering attentiveness to the humanistic core of literary inquiry.