On March 21, 2026, East China Normal University (ECNU) released findings from its large-scale "AI as the Lead Author" social experiment. Co-organized with the National Alliance of University Laboratories for Philosophy and Social Sciences, the launch event and symposium were held at ECNU in Shanghai. Scholars, educators, publishers, and industry representatives gathered to examine how AI is reshaping academic writing, collaboration, and scholarly norms, as well as evolving definitions of authorship, responsibility, and academic standards.
A Real-World Stress Test for Human–AI Collaboration
In opening remarks, ECNU Vice President Lei Qili described the experiment as a real-world stress test under conditions of intensive human–AI collaboration. He emphasized that the initiative is not intended to replace human researchers, but to explore how AI may support research while humans retain responsibility for research direction, scholarly judgment, and value guardianship.
Speaking on behalf of the National Alliance of Philosophy and Social Science Laboratories in Chinese Universities, Chu Xiaobo of Peking University characterized the experiment as a milestone. He noted that positioning humans as "second authors" does not diminish human subjectivity, but reflects a new division of labor: AI expands the breadth of data-processing, while humans safeguard intellectual depth and the value dimension of scholarship.
Yuan Zhenguo of ECNU, one of the experiment's initiators, highlighted the broader significance of the initiative, noting that traditional systems of knowledge production are being fundamentally reshaped by advances in generative AI. He called on researchers to respond proactively and engage constructively with the evolving landscape of human–AI collaboration.
Key Findings: Opportunities, Limits, and Emerging Patterns
A central feature of the event was the release of the Landscape Report on the "AI as the Lead Author" Large-Scale Social Experiment, presented by Zhang Zhi of ECNU. Following a global call for papers in September 2025, the project received 820 submissions, of which 724 were valid, involving 1,177 human authors.
Instead of traditional awards, organizers released three ranked lists (Pioneer Papers, Nominated Pioneer Papers, and AI-Recommended Emerging Papers) to more accurately reflect the current stages of human–AI co-creation.
The report identifies ten key findings. AI is now deeply embedded in educational research writing, with nearly one hundred AI tools reported; DeepSeek was the most frequently used. AI demonstrates clear strengths in idea generation, data analysis, and logical structuring, while also showing notable limitations such as fabricated references and superficial argumentation. The report further suggests that AI-driven research innovation often operates through recombination, cross-domain transfer, and boundary expansion.
A sample survey found a 76% consistency rate between AI and human expert evaluation, rising to over 80% in identifying both high-quality and clearly inadequate papers. Zhang called for moving beyond debates over whether AI should be used, toward strengthening researchers' capacity for critical and effective use, while remaining alert to risks such as cognitive outsourcing and intellectual passivity.
From Debate to Practice: Expanding the Human–AI Research Agenda
The symposium also featured a roundtable, keynote speeches, and four parallel sub-forums exploring the implications of AI for academic roles, educational practice, and institutional governance. Key topics included creativity ownership, ethical responsibility, editorial standards, and the evolving relationship between human and machine intelligence.
The roundtable, moderated by Chen Shuangye of ECNU, brought together participants across disciplines and generations. Under the theme "When Humans Become Second Authors: Redefining Roles in Human–AI Co-Creation," discussions converged on a shared view that the future lies not in opposition between humans and AI, but in coexistence and symbiosis.
Wang Yanfeng of Shanghai Jiao Tong University delivered a keynote introducing three AI agents (TeachMaster, LearnMaster, and SciMaster) and highlighting their application in teaching. He emphasized AI's potential to improve efficiency, as well as the practical challenges of implementation, including domain limitations, policy constraints, and user resistance.
In her keynote, "What Should AI Do?", Yuan Wen of Shanghai Normal University called for renewed attention to the fundamental purposes of education. She cautioned against the phenomenon of "brain rot" and argued for strengthening human capacities for lifelong learning and meaningful engagement in an AI-driven era.
Four sub-forums examined AI-assisted authorship and expression, ethical reconstruction and responsibility sharing, the co-evolution of human and artificial intelligence, and the transformation of publishing paradigms. Bringing together perspectives from academia and industry, discussions addressed issues such as AI's impact on academic writing, the "responsibility vacuum," and quality control in AI-assisted editing.
Toward New Academic Norms in the AI Era
The "AI as the Lead Author" experiment forms part of ECNU's broader exploration of AI in educational research and what has been described as the emerging "fifth paradigm" of scientific research. By creating a realistic setting for human–AI co-authorship, it provides a timely reference point for global discussions on academic integrity, human responsibility, and AI governance.
As AI continues to reshape knowledge production, the central challenge is no longer whether it can participate in research, but how academic communities can establish norms that preserve human understanding, accountability, and intellectual rigor in increasingly AI-mediated environments. In this evolving landscape, AI is not merely a tool, but a collaborative partner—pointing toward a future of deeper human–AI (carbon–silicon) symbiosis.