This article examines the methodological transformation that AI’s entry into literary criticism has set in motion. This transformation proceeds along four interrelated dimensions—technological capacity, critical method, literary ontology, and paradigm formation—that do not unfold in linear sequence but are mutually constitutive. At the technological level, the shift from machine reading to AI reading has given literary studies a new capacity for large-scale semantic analysis, though a qualitative gap persists between AI “reading” and human reading. At the methodological level, computational analysis has expanded the scale and verifiability of criticism without being equivalent to “objectivity”; its value lies in rendering the research process more transparent and reproducible. At the ontological level, generative AI poses substantive challenges to such core categories as “author,” “text,” and “literariness,” yet this disruption extends, rather than originates, the destabilizing work already undertaken by twentieth-century literary theory. Building on these three interconnected transformations, the article proposes the paradigm of “computational literary criticism,” distinguishing it from Franco Moretti’s “distant reading,” Matthew Jockers’s “macroanalysis,” and the broader field of digital humanities. The article argues that the core value of computational literary criticism lies not in replacing interpretation with computation, but in constructing a collaborative framework of sustained interaction between computational discovery and humanistic interpretation—a framework whose viability depends on methodological self-discipline, data-ethical awareness, and an unwavering attentiveness to the humanistic core of literary inquiry.
The date of a historical document, if it was published, can be determined from its preface or copyright page. Dating an unpublished manuscript, however, is much more difficult. In the case of old Hangeul documents, various linguistic features can be used to estimate the approximate date of a document, but as the number of features grows, the task tends to exceed the purview of an individual human researcher and becomes better suited to AI. This paper shows how artificial neural networks can be trained to estimate the date of documents using material whose date is known. For this purpose, several kinds of neural networks are examined: bag-of-words models, CNNs, RNNs, and Transformers. These models can be further subdivided by feature type: unigram or bigram, and character- (syllable-) based or grapheme- (phoneme-) based. After being trained on documents with known dates, the models are applied to new (unseen) data, and the results are evaluated.
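The simplest of the model families mentioned above, a bag-of-words classifier over character unigrams, can be sketched as a single-layer softmax network trained by gradient descent. Everything here is illustrative: the toy Latin-alphabet corpus, the two coarse date bins, and the names `TRAIN`, `featurize`, and `predict` are assumptions for the sketch, not the paper’s actual data or architecture.

```python
import math
from collections import Counter

# Toy stand-in corpus: (text, date-bin label). The real task uses old Hangeul
# documents with known dates; these strings are placeholder data only.
TRAIN = [
    ("thou hast spoken", 0), ("thou art gone hence", 0),
    ("you have spoken well", 1), ("you are going home", 1),
]
LABELS = 2  # two coarse date bins (illustrative)

# Character-unigram bag-of-words features, normalized to frequencies.
vocab = sorted({ch for text, _ in TRAIN for ch in text})
idx = {ch: i for i, ch in enumerate(vocab)}

def featurize(text):
    counts = Counter(ch for ch in text if ch in idx)
    total = sum(counts.values()) or 1
    return [counts.get(ch, 0) / total for ch in vocab]

# One-layer softmax classifier: weights W, biases b.
W = [[0.0] * len(vocab) for _ in range(LABELS)]
b = [0.0] * LABELS

def predict_probs(x):
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b[k]
              for k, row in enumerate(W)]
    m = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(300):                      # SGD over cross-entropy loss
    for text, y in TRAIN:
        x = featurize(text)
        p = predict_probs(x)
        for k in range(LABELS):
            g = p[k] - (1.0 if k == y else 0.0)   # gradient w.r.t. score k
            b[k] -= 0.5 * g
            for j in range(len(vocab)):
                W[k][j] -= 0.5 * g * x[j]

def predict(text):
    """Return the most probable date bin for an unseen text."""
    p = predict_probs(featurize(text))
    return max(range(LABELS), key=lambda k: p[k])
```

On this toy data the model learns to separate the two bins from character frequencies alone; swapping the featurizer to bigrams, or to grapheme- rather than syllable-level units, corresponds to the sub-divisions the abstract describes.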
This paper examines contemporary society through the concept of algorithmic hyper-late modernity, a new phase emerging from the convergence of digital technologies and global mobilities. Rather than viewing the digital revolution as a purely technological development, it conceptualizes it as a profound transformation of social systems, everyday life, and modes of human existence. Drawing on both classical and contemporary sociological theory, the paper situates this transformation in relation to earlier phases of modernity and late modernity, engaging with thinkers such as Durkheim, Giddens, Jameson, and Elliott. It further analyzes how social media platforms, algorithmic infrastructures, and artificial intelligence increasingly mediate political processes, public opinion, affective dynamics, and risk. The paper also advances the concept of Mobilities 3.0 to describe a condition in which mobility becomes deeply entangled with digital systems, generating algorithmic forms of movement, communication, subjectivity, and governance that operate across national borders. The paper concludes by arguing that human life is now constituted within networks linking humans and technologies, rather than in opposition to them. In this context, the humanities play a crucial role in critically reflecting on these transformations and in articulating ethical boundaries for human life in the age of artificial intelligence.
This paper explores how artificial intelligence is transforming engagement with cultural heritage from static preservation toward interactive knowledge production. Drawing on media-historical perspectives, it proposes the concept of knowledge liberation to interpret successive stages in the evolution of knowledge environments, from oral transmission and print culture to digital networks and AI-mediated interaction. Within this framework, cultural knowledge becomes progressively less constrained by the material conditions of its transmission. The paper examines several AI-enabled platforms developed at Peking University that support large-scale digitisation, structured data extraction, knowledge-graph construction, and multimodal cultural content generation. It argues that AI is emerging as a new knowledge medium that reshapes research methodologies, expands modes of cultural representation, and strengthens connections between humanities scholarship and public knowledge production.
Large language models (LLMs) have made language technologies widely accessible to humanities scholars, but they have also intensified concerns about transparency, reproducibility, and interpretive responsibility. This article argues that graph-based meaning representations, especially Abstract Meaning Representation (AMR) and Uniform Meaning Representation (UMR), can function as interpretive infrastructure for humanities research in the age of LLMs. Meaning graphs are “thin” by design in that they deliberately encode a constrained set of semantic distinctions. That selectivity is not a weakness for humanistic inquiry; rather, it enables a disciplined workflow in which researchers can separate (i) the semantic commitments that a text licenses (events, participants, temporal and modal dependencies) from (ii) richer interpretive claims (stance, ideology, affect, narrative framing) that can be layered on top. I review AMR and UMR at a level accessible to humanities audiences, discuss what changes in the LLM era (including both opportunities and limits of using LLMs for semantic parsing), and propose humanities-centered workflows and research questions. Several compact sample analyses illustrate how meaning graphs can support interpretive tasks in historiography, narrative analysis, and translation studies. A final section explicitly lists resources (datasets, tools, and guidelines) to support reproducible experimentation.
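To make the “thin by design” idea concrete, an AMR graph in the standard Penman notation for a sentence like “The boy wants to go” encodes only the events, their participants, and the reentrant agent, and nothing else. The example below follows the general conventions of the published AMR guidelines (PropBank-style frame labels such as `want-01` are assumptions of this sketch, not drawn from the article itself):

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```

Note what the graph deliberately omits: tense, definiteness, stance, and framing. Those richer interpretive claims would be layered on top of, not baked into, the representation, which is precisely the separation of commitments the abstract describes.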
Artificial Intelligence (AI) has become a significant influence on how we teach, how we communicate, and how we imagine the modern workplace. To understand what it means and how it will shape our lives, it is necessary to examine it from a humanities perspective with special attention to history, ethics, and literary representations of AI. Our science-fiction literary tradition provides ample material to work with in works by Mary Shelley, Karel Čapek, Isaac Asimov, Ridley Scott, and William Gibson. In these works, the iconic literary figures of the Luddite, the robot, and the console cowboy embody some of the most important tendencies in the advance-theorization of AI. They reveal anxieties about employee obsolescence, a fixation on anthropomorphism, and a lingering hope for organized resistance to power imbalances. A carefully considered humanities framework provides a check on exaggerated claims from both pro- and anti-AI constituencies.
This article is an invitation to discuss a recurrent question in the field of the humanities: “Does it make any sense to talk about the humanities in the age of generative AI?” While many papers have been devoted to this topic, this paper brings together three distinct disciplinary viewpoints, namely the humanities, computer science, and education, to examine it from complementary angles. It discusses labor market transformations, ethical issues related to authorship and plagiarism, cultural bias, as well as the effects of AI on students’ cognitive processes and learning practices. It also considers how these issues are being reshaped in current expert discussions. From these three perspectives, the analysis suggests that profound transformations are inevitable, and the humanities are no exception. In this context, it becomes necessary to distinguish between those tasks that cannot be replaced by artificial intelligence and those that are likely to be progressively taken over by it.