
Generative AI projects persist in public administration, even when AI tools fail to perform as promised

02.17.26 | University of Eastern Finland


New ethnographic research reveals nine justifications that make AI innovations almost “irresistible” across organisational and professional boundaries. The study, conducted at the University of Eastern Finland and Aalto University, provides rich empirical insight into how innovation teams mobilise multiple conceptions of the common good to keep AI projects moving forward.

“Our findings show that generative AI projects in public administration often continue not because the tools work well, but because a compelling set of justifications makes them hard to stop and declare the tools nonfunctional,” notes Marta Choroszewicz, a Senior Researcher at the University of Eastern Finland and co-author of the study.

Generative AI is rapidly entering public administration worldwide, driven by optimism and policy pressure to adopt cutting-edge technologies. Understanding how AI projects persist even when tools underperform is crucial for improving public sector innovation, preventing resource lock-in, and enabling the deployment of fit-for-purpose AI solutions aligned with the work actually performed in public administration.

Drawing on nearly 1.5 years of ethnographic fieldwork in Finland, a new study sheds light on the development of a large language model (LLM)-based generative AI decision support tool designed to help claims specialists navigate complex, scattered and ever-changing guidance documents. Like many other tools currently being developed and tested in the Finnish public sector and elsewhere, this tool addressed a widely recognised challenge: managing, identifying and retrieving the overwhelming volume of guidance documents essential for delivering welfare benefits and services to citizens.

The study shows how a team of innovators sustained innovation momentum through nine justificatory frames. Five tool-oriented frames emphasised familiar AI promises of efficiency, cost savings, employee well-being, fairness and desirability, and were invoked especially when limitations in accuracy, precision and consistency became evident. Four process- and ideology-oriented frames legitimised speed, bold initiative and experimentation, normalising setbacks and sustaining momentum.

Together, these justifications formed a protective structure around the tool’s development, keeping the innovation moving through continuous testing phases and insulating it from criticism. They protected the tool’s imagined value, normalised setbacks as essential “learning” for successful AI innovation, made the tool compelling across organisational boundaries, and limited consideration of alternative innovation pathways.

Boundary work: alliances and divides

The study also shows that effective boundary work, i.e., operating within, across and beyond organisational and professional lines, played a crucial role. Public sector innovators’ collaborative and configurational work built powerful alliances with managers and consultants and reconfigured existing organisational boundaries to secure resources crucial to the tool’s development. By contrast, their competitive boundary work with claims specialists reinforced a divide between the flexible world of innovation and the controlled routines of frontline work, asserting innovators’ authority over the shaping of the tool.

When the tool’s promises proved unattainable, success was reframed as contingent on organisational change, users’ engagement and their AI skills rather than on the tool’s performance.

Normalising failure as “business as usual”

Repeated breakdowns and the tool’s failure to perform as promised did not halt the project. Instead, failures were reframed as expected steps in a learning process with emerging technologies, sustaining the sense of progress and justifying continued investment.

“By normalising setbacks as learning, the team maintained innovation momentum even when accuracy, precision and consistency remained out of reach,” notes co-author Antti Rannisto, a Doctoral Researcher at Aalto University.

The technical opacity of the tool and the allure surrounding generative AI made it hard to pinpoint the causes of failure and to pause for critical re-evaluation. The team’s attention shifted away from the tool’s technical limitations to the changes required of users and the organisation to accommodate the tool.

Big Data & Society

10.1177/20539517261424159

AI innovation at the boundaries: Justifying a generative AI decision support tool.


Contact Information

Maj Vuorre
University of Eastern Finland
maj.vuorre@uef.fi
