Wrapping up 2025! 🌲🥂✨

2025 is nearly over, and looking back at it, what stays with me most is not the accumulation of roles, publications, or events, although each of them matters, but the sense that the work is gradually finding its place, creating space for better questions, and being shaped, refined, and sustained by people and conversations rather than metrics alone. This year marked a consolidation of my research agenda around digital, data, and AI governance, with a strong emphasis on responsible AI adoption, public value, and sustainable data ecosystems. It also continued a transition that began earlier – from building a research profile to more explicitly shaping research spaces, communities, and conversations.

From Research Agenda to Institutional Responsibility

In 2025, I was appointed Associate Professor of Applied AI and Information Systems, a role that formalized something that had already been happening in practice – working at the intersection of AI innovation, governance, and responsibility. This appointment aligned naturally with my work on responsible AI adoption in the public sector; my research on data-centric challenges, governance models, and institutional readiness; and a growing focus on sustainable and trustworthy digital ecosystems and, as of this year, Green AI, including collaborative work with the KNOW Center and ENFIELD.

At the University of Tartu, we also laid important foundations for the future – the establishment of the IDEAS Lab (Intelligent Distributed Environments and Systems) and, with it, my new role as lead of the Responsible Innovation and Digital Governance Team (RISING) – a space that, hopefully, will become more visible in 2026.

Research Milestones: Asking Better Questions About AI

A symbolic but meaningful milestone this year was publishing my 100th and 101st scientific papers (IT people will get the point of the latter number, too). These special papers are “Responsible AI Adoption in the Public Sector: A Data-Centric Taxonomy of AI Adoption Challenges” – the work that crystallizes our empirical research into a structured understanding of why responsible AI adoption remains difficult even when technical solutions exist – and “Reflections on the nature of digital government research: Marking the 50th anniversary of Government Information Quarterly” – the top journal I read extensively as a master’s student and on whose editorial board I now serve, contributing reflections on the journal’s past, present, and future trajectory. That continuity — from reader to contributor to steward — feels particularly meaningful.

Beyond this, 2025 brought further contributions to Government Information Quarterly, Computer Law & Security Review, Telematics and Informatics, Information Polity, Data & Policy, EGOV2025, HICSS2025, CAiSE2025, DGO2025, and other venues, with topics ranging from data ecosystems, data and AI governance, post-bureaucratic governance, dark data, and the UX of open data portals to AI in education. More importantly, some of these papers were contributions by my students – from early-stage master’s students to doctoral candidates – both my own and the more “adopted” ones I am always happy to collaborate with. All in all, seven students published their work this year, and I hope for many more in the years to come!

Global Dialogue: From Keynotes to Fireside Chats

In 2025, I had the privilege of contributing to global conversations on AI and governance through keynotes, invited talks, panels, and workshops, including an invited talk at EU Open Data Days 2025 on “Data for AI or AI for data,” panels on “AI and Data Science Revolutions” and “National Data Strategies in Europe” at Data for Policy, the keynote “Responsible AI Adoption for a Sustainable Future: Balancing Opportunities and Risks” at the International Conference on Innovative Approaches and Applications for Sustainable Development, an invited talk on “Mapping the Roadblocks: Towards Responsible Artificial Intelligence Adoption in the Public Sector” at the International Summer School on Digital Government, and further seminars and fireside chats with Cambridge University, LSE, The Governance Lab, and the Microsoft Open Data Policy Lab on the future of open data in the age of (Gen)AI and on AI governance, among other topics. These conversations reaffirmed that responsible data practices – and even more so responsible AI – are no longer a niche concern; they are now a core governance challenge across regions, policy domains, and institutional contexts.

Community Building: Workshops, Tracks, and Field-Shaping

2025 was also a year of active field-building, through organizing and leading scholarly spaces where new ideas can emerge. Together with my colleagues, we organized workshops at ECAI2025 (Green-aware AI), IJCAI2025 and PRICAI2025 (AI and democracy, and AI in the public sector), and CBI-EDOC2025 (Enterprise Architecture for Augmented Intelligence), as well as (mini)tracks at HICSS2026 (Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation), dg.o2025 (Sustainable Public and Open Data Ecosystems for Inclusive and Innovative Digital Government), and EGOV2025 (Emerging Issues and Innovations). For the latter, we also organized the Junior Faculty School and the Doctoral Colloquium. Apart from this, I took on several new editorial roles, including Senior Editor at IEEE TTS, and joined initiatives such as the AIS Women’s Network College (incl. as mentor), Women in AI, and the Digital Statecraft Academy, which aims to guide fellows in navigating complex digital governance challenges and to contribute to advancing responsible, inclusive, and sustainable policy and technology practices. I am very eager to see how all of these evolve and look forward to contributing to the success of these joint efforts!

Perhaps one of the most surreal moments was hosting a Turing Award winner – a reminder of how far the field has come, and how much responsibility comes with shaping its future direction. Unfortunately, though, I missed meeting Yoshua Bengio in Montréal this year, while the colleagues with whom I co-organized the workshop at IJCAI2025 made it there and visited his MILA lab… But one Turing Award recipient at a time, I guess…

This year also brought external recognition, such as being ranked a Top Voice in Data Science in Estonia (as per Favikon), the Top-1 researcher globally in Open Government and a top researcher in Government and in Engineering and CS (as per ScholarGPS, based on the last five years of achievements), and among the top 2% of scientists in Artificial Intelligence (as per Stanford University’s database). While I am grateful for all of these, the most rewarding part was witnessing the success of my students, and I hope for much more to come along both lines in the future. Their growth is a constant reminder that academic impact is not only measured in citations — but in confidence built, curiosity nurtured, and doors opened. At the end of the day, it is all about people. I am thus grateful to all the collaborators I am surrounded by – those I continue to learn from and those who now learn with me — both equally shape the work and sustain the motivation to carry it forward, as well as bring some fun, which is a special kind of fuel for our work!

Looking Ahead to 2026

The coming year will bring new responsibilities and, hopefully, opportunities. But above all, I hope 2026 continues what 2025 reinforced. As of now, we are already working hard on preparing several events, and I warmly invite you to consider joining us.

If you’d like to continue these conversations in person, you can also find me speaking at events such as ICDEc2026, ISIoT2026, and AI Summit Europe 2026, discussing questions like What happens when AI ambitions collide with governance capacity, legitimacy, and readiness? How do we design AI-enabled systems that don’t collapse under institutional and societal pressure? Are we moving from e-government to AI_government or maybe even toward something closer to an agentic state — and what does that really mean? If you work on AI, data, governance, sustainability, or public value, I would love to meet you — to exchange ideas, challenge assumptions, and think together about how to design systems that are not only intelligent, but also legitimate, resilient, and trustworthy.

Wishing a peaceful and joyful holiday season, and a thoughtful, kind, and inspiring year ahead to all of us!

Advancing Democracy & AI: Reflections from IJCAI, PRICAI, and ICA 2025 Workshops

Artificial intelligence is rapidly reshaping how societies govern, deliberate, and make collective decisions. Over the past year, our Democracy & AI workshop series—held across IJCAI, PRICAI, and ICA—has become a global forum for examining both the promise and the perils of AI in democratic contexts. From Montréal to Wellington to Wuhan, our community continues to grow, connecting researchers across AI, political science, HCI, law, design, ethics, and public administration.

DemocrAI at IJCAI 2025: AI at the Service of Society

As part of IJCAI (the International Joint Conference on Artificial Intelligence) in Montréal, themed “AI at the service of society,” we (Jawad Haqbeen, Takayuki Ito, Rafik Hadfi, and myself) convened the 6th International Workshop on Democracy & AI (DemocrAI25).
Although I could not attend in person, I am deeply grateful to my co-organizers for leading the workshop and for representing our team—as well as for the chance to meet Yoshua Bengio, one of the pioneers of modern deep learning and, recently, the first still-active researcher to reach the milestone of 1 million citations!

The workshop opened with two outstanding keynote talks:

  • Mary Lou Maher (UNC Charlotte) — “The Imperative for AI Literacy”
  • Michael Inzlicht (University of Toronto) — “In Praise of Empathic AI”

Across 13 diverse presentations, contributors explored:

  • AI’s impact on trust, civic engagement, and deliberation
  • risks and governance of LLMs in judicial settings and policymaking
  • collective intelligence and value aggregation for democratic processes
  • AI applications in education, law, and policy design
  • governance, fairness, inclusion, and global research equity

We were delighted to recognize several exceptional contributions:

  • Best Paper Award – “LLMs in Court: Risks and Governance of LLMs in Judicial Decision-Making” (Djalel Bouneffouf & Sara Migliorini)
  • Best Student Paper Award – “Finding Our Moral Values: Guidelines for Value System Aggregation” (Víctor Abia Alonso, Marc Serramia & Eduardo Alonso Sánchez)
  • Best Extended Abstract Award – “Group Discussions Are More Positive with AI Facilitation” (Sofia Sahab, Jawad Haqbeen & Takayuki Ito)
  • Best Presentation Award – “Democracy as a Scaled Collective Intelligence Process” (Marc-Antoine Parent)

A key message echoed throughout the day: AI can enhance social cohesion, participation, and equity—but only through responsible design and robust governance frameworks.

DemocrAI at PRICAI 2025: Participation, Values, and Governance

Following IJCAI, I joined the organizing committee for the 7th Democracy & AI Workshop at PRICAI 2025, held in Wellington, New Zealand. Two years ago, I had the privilege of giving a keynote at PRICAI DemocrAI on the symbiotic relationship of Artificial Intelligence, Data Intelligence, and Collaborative Intelligence for Innovative Decision-Making and Problem Solving. This year, I am excited to help shape the conversation from the organizing side.

The workshop explored the expanding role of AI in democratic life, including:

  • AI-assisted policy design and decision-making
  • AI in governance, elections, and public administration
  • citizen participation and deliberative democracy tools
  • behavioral impacts of AI on trust, engagement, and polarization
  • transparency, accountability, and legitimacy of algorithmic decisions
  • ethics, socio-technical risks, and AI’s impact on societal wellbeing
  • reimagining democracy in the LLM era

Special Track at ICA 2025: AI in e-Government & Public Administration

Our workshop series expands further with a dedicated Special Track on AI in e-Government & Public Administration at the IEEE International Conference on Agentic AI (ICA 2025), held in Wuhan, China.

Co-organized with Jawad Haqbeen, Takayuki Ito, and Torben Juul Andersen, this track examines how AI-driven tools are transforming public governance—from policy co-creation and civic engagement to service delivery and institutional decision-making.

Topics include:

  • AI for participatory and deliberative governance
  • AI’s impact on societal wellbeing
  • AI in public service delivery and policy design
  • Ethics and risk governance in public-sector AI
  • Case studies and experiments with deployed systems
  • Transparency, accountability, and responsible administration

Across IJCAI, PRICAI, and ICA, one theme is clear: AI’s role in democracy is neither predetermined nor neutral. It can support inclusion, transparency, and collective intelligence—or undermine trust, equity, and participation. The outcome depends on the choices we make now: the values we embed, the governance we build, and the communities we bring together.

Our Democracy & AI workshop series exists to advance this work—uniting technologists, policymakers, social scientists, designers, and ethicists in a shared mission: to ensure AI serves democracy, rather than the other way around.

Huge thanks to all speakers, awardees, participants, and co-organizers.
Onward to DemocrAI at PRICAI and ICA 2025!

From Krems to Linz: Reflections from EGOV 2025 and a Research Visit to Austria

September brought a truly inspiring and intense sequence of events: EGOV 2025, from the Doctoral Colloquium and Junior Faculty School to the main IFIP EGOV 2025 conference in Krems, followed by a research stay at Johannes Kepler University Linz. Five days of discussion, mentoring, presenting, and connecting in Krems, with several more in Linz, where the intensive research stay was enriched by a memorable dive into the Ars Electronica Festival and its conversations on technology, fear, and democratic futures.

What follows is a reflection on an academically dense but deeply rewarding journey across two Austrian cities.

EGOV2025: Doctoral Colloquium & Junior Faculty School

We began in the breathtaking setting of Göttweig Abbey with the EGOV 2025 Doctoral Colloquium, where 13 PhD students presented their research, shared challenges of the doctoral journey, and engaged in open discussions with mentors.

The following day, the Junior Faculty School expanded these conversations to early career researchers (up to five years post-PhD). Together with a wonderfully engaged group, we explored questions about career trajectories, researcher identity, publishing strategies, gender inequalities in academia, and the importance of being in a workplace that supports—not drains—well-being.

A recurring theme across both days was impact. We examined it from multiple perspectives:

  • during the Colloquium’s mentor panel
  • in Tomasz Janowski’s keynote
  • through the “from research to policy” workshop by Paula Rodriguez Müller, Sven Schade, and Luca Tangi
  • in the panel on publishing in top journals with Panos Panagiotopoulos and Manuel Pedro Rodríguez Bolívar
  • and in roundtable discussions on career development

Sincere thanks to the organizers—Gabriela Viale Pereira, Ida Lindgren, Lieselot Danneels, J. Ramon Gil-Garcia, and Michael Koddebusch—for making these events equally enriching for participants and mentors.

EGOV 2025 Conference

With the main conference underway, we launched the Emerging Technologies and Innovations track, which I co-chaired with Francesco Mureddu and Paula Rodriguez Müller. This year, we welcomed Paula (European Commission JRC) to the team and continued pushing the track beyond academic silos, aiming to strengthen the bridge between research, policy, and practice.

We were delighted to see a record number of submissions—double compared to last year. A growing Information Systems community joined us, fulfilling the long-term ambition that Marijn Janssen and I have shared for the track.

Across three sessions, we explored topics that shape the future of governance:

  • the potential of generative AI and LLMs for administrative literacy and public sector transformation
  • trust frameworks and platform governance
  • GovTech incubators and the gap between prototypes and long-term implementation
  • self-assessment tools for climate adaptation
  • digital transformation patterns in smart city strategies

These studies together illustrated how emerging technologies and governance innovation are reshaping public institutions.

A special highlight was the Best Paper Award in the category “Most Innovative Research Contribution or Case Study”, received by Lukas Daßler for “GovTech Incubators: Bridging the Gap Between Prototypes and Long-Term Implementation” (co-authored with Andreas Hein and Helmut Krcmar). Congratulations once again!

I was happy to present two papers at the conference:

  1. “Proactive Public Services in the Age of Artificial Intelligence: Towards Post-Bureaucratic Governance” with Paula Rodriguez Müller, Luca Tangi, and Jaume Martin Bosch – the first (or “step 0”) output of our ongoing research on AI-enabled proactive service provision.
  2. “May the Data Be with You: Towards an AI-Powered Semantic Recommender for Unlocking Dark Data” based on the master thesis of my former student, now at Microsoft, Ramil Huseynov; co-authored with Dimitris Simeonidis and David Duenas-Cid – a project that combines technical exploration with a generous dose of nerdiness and fun.

Research Visit to Linz

Right after EGOV, I travelled to JKU Linz, hosted by Christoph Schuetz at the Institute of Business Informatics – Data & Knowledge Engineering. During the visit, I delivered an invited talk titled “Responsible Data Ecosystems: From Data Governance to AI Adoption.”
We discussed how to establish trustworthy, effective data practices while responsibly integrating AI, and explored opportunities for future collaboration.

Beyond the academic exchange, Linz offered its own inspiration: diverse, vibrant, and beautifully intertwined with nature and art – such as…

Ars Electronica 2025: Panic – Yes/No?

One of the standout experiences was the Ars Electronica Festival, which this year examined the theme “Panic – Yes/No?”. The exhibitions brought together over 1,400 contributors—artists, scientists, developers, entrepreneurs, and activists—questioning our collective sense of alarm and exploring whether “collective panic” is a rational response or a product of sensationalism.

AI and its societal implications stood at the heart of many installations: Who designs these systems? For whom? According to which values? These questions resonate strongly with the core of my own research and offered a refreshing, interdisciplinary lens on technology and democratic futures.

From mentoring early-stage researchers and running a dynamic track, to presenting new work, reconnecting with colleagues, expanding the Information Systems presence within EGOV, and diving into Linz’s research and cultural landscape—it was an intense but profoundly rewarding start to the semester. Weeks like these are a good reminder of why mentoring, connecting, and building research communities matter so much—and why an early Sunday alarm can indeed be worth it.

Green-Aware AI 2025 Workshop at ECAI2025

Join us – Riccardo Cantini, Luca Ferragina, Davide Mario Longo, Anastasija Nikiforova, Simona Nisticò, Francesco Scarcello, Reza Shahbazian, Dipanwita Thakur, Irina Trubitsyna, Giovanna Varricchio (University of Calabria & University of Tartu) – at the 2nd Workshop on Green-Aware Artificial Intelligence (Green-Aware AI 2025), to take place in conjunction with the 28th European Conference on Artificial Intelligence (ECAI2025) in Bologna, Italy, October 25-30, to examine the sustainability challenges posed by the widespread adoption of AI systems, particularly those powered by increasingly complex models, and to push toward responsible AI development with a timely response to these challenges.

The widespread adoption of AI systems, particularly those powered by increasingly complex models, necessitates a critical examination of the sustainability challenges posed by this technological revolution. The call for green awareness in AI extends beyond energy efficiency—it encompasses the integration of sustainability principles into system design, theoretical modeling, and real-world applications.

Green-aware AI requires a multidisciplinary effort to ensure sustainability in its fullest sense, that is, where the green dimension is interpreted broadly, fostering the creation of inherently green-aware AI systems aligned with human-centered values. These systems should uphold sustainability principles such as transparency, accountability, safety, robustness, reliability, non-discrimination, eco-friendliness, interpretability, and fairness—principles reflected in the 17 Sustainable Development Goals (SDGs) defined by the United Nations. The ethical and sustainable advancement of AI systems faces diverse challenges across every stage, including architectural and framework design, algorithm conceptualization, user interaction, data collection, and deployment. This involves designing tools that are inherently green-aware or introducing mechanisms, such as incentives, to encourage agents in AI systems to adopt green-aware behaviors. This principle can be applied across various domains of AI, including but not limited to Algorithm Design, Fairness, Ethics, Game Theory and Economic Paradigms, Machine Learning, Multiagent Systems, and all their applications.

It is worthwhile noting that machine learning systems rank among the most energy-intensive computational applications, significantly impacting the environment through their substantial carbon emissions. Notable examples include the training of large-scale, cutting-edge AI models like those used in ChatGPT and AlphaFold. The creation of such systems demands vast resources, including high-performance computing infrastructure, extensive datasets, and specialized expertise. These requirements create barriers to the democratization of AI, limiting access to large organizations or well-funded entities while excluding smaller businesses, under-resourced institutions, and individuals. The lack of interpretability in AI systems further exacerbates these challenges, raising significant concerns about trustworthiness, accountability, and reliability. Such systems often function as black boxes, making it difficult to understand their underlying decision-making processes. This opaqueness can erode public trust and create barriers to holding developers accountable for harmful outcomes. Additionally, AI systems are prone to biases embedded in their training data and reinforced through user interactions, perpetuating discrimination and unfair treatment, disproportionately affecting marginalized and underrepresented groups.
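
As a small, purely illustrative sketch of what making this footprint visible can look like in practice (my own hypothetical example, not something the workshop prescribes), a training run can be wrapped in an emissions tracker such as the open-source CodeCarbon library; the model, dataset, and file names below are assumptions chosen for brevity:

# Minimal sketch (assumption: codecarbon and scikit-learn are installed).
# Estimates the energy use and CO2-equivalent emissions of a small training run.
from codecarbon import EmissionsTracker
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

tracker = EmissionsTracker(project_name="green-aware-demo", output_file="emissions.csv")
tracker.start()
try:
    X, y = load_digits(return_X_y=True)               # small stand-in dataset
    model = RandomForestClassifier(n_estimators=200)  # stand-in for a larger model
    model.fit(X, y)                                   # the energy-consuming step being measured
finally:
    emissions_kg = tracker.stop()                     # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

Reporting such estimates alongside accuracy is one modest way to make topics like the “energy-aware training strategies” listed below concretely measurable.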

By addressing these pressing challenges, the workshop aligns with the global push toward responsible AI development and provides a timely response to the environmental and social implications of AI technologies. The primary goal of this workshop is to foster discussions among scholars from diverse disciplines, facilitating the integration of technological advancements with environmental responsibility to drive progress toward a sustainable future. As such, Green-Aware AI 2025 invites contributions on the following topics of interest (though not limited to them exclusively):
💡Green-aware AI frameworks and applications;
💡AI methodologies for energy-efficient computing;
💡Human-centered and ethical AI design;
💡Reliable, transparent, interpretable, and explainable AI;
💡Trustworthy AI for resilient and adaptive systems;
💡Fairness in machine learning models and applications;
💡Impact of AI on underrepresented communities, bias mitigation, and exclusion studies (datasets and benchmarks);
💡Theoretical analysis of energy efficiency in AI systems;
💡Green and sustainable AI applications in environmental and social sciences, healthcare, smart cities, education, finance, and law;
💡Compression techniques and energy-aware training strategies for language models;
💡Approximate computing and efficient on-device learning;
💡Green-oriented models in game theory, economics, and computational social choice;
💡Green-awareness in multi-agent systems;
💡Security and privacy concerns in machine learning models.

Stay tuned – information about the keynote speakers is coming soon!

📆Important dates:
Abstract submission: May 23
Paper submission: May 30
Notification of acceptance: July 25
Camera-ready: July 31

Join us at Green-Aware AI to help facilitate the integration of technological advancements with environmental responsibility and drive progress toward a sustainable future.

The workshop is supported by Future AI Research (FAIR), the Italian Ministry of Education, Universities and Research, and Italia Domani.

IJCAI2025 Workshop on Democracy and AI (DemocrAI 2025)

Join us – Jawad Haqbeen, Rafik Hadfi, Takayuki Ito, Anastasija Nikiforova (Kyoto University & University of Tartu) – at the 6th International Workshop on Democracy and AI (DemocrAI 2025), to take place in conjunction with the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025) in Montreal, Canada, August 16-22, to examine the opportunities and risks associated with AI in democratic contexts.

Recent technical advances in machine learning, natural language processing, and multi-agent systems have greatly expanded the use of artificial intelligence (AI) applications in our daily lives. AI-driven systems are transforming the way we process, monitor, and manage data and services, offering innovative solutions for evidence-based policy planning and decision management. AI offers enormous potential to boost efficiency and improve decision-making by processing large amounts of data. For example, AI-assisted conversational chatbots can help strengthen democratic processes by delivering better public services, customizing services for citizens, facilitating engagement with large groups, connecting their ideas and fostering social participation. However, alongside these benefits, AI may pose risks to individuals, organizations, and society as a whole. One significant concern is that machines lack accountability while generating (mis)information, yet can make decisions that fundamentally affect the lives of ordinary citizens. The focus of this workshop will be on both the current and potential uses of AI in society.

This workshop welcomes research on the intersection of AI and democracy, focusing on, but not limited to:

  • Systems to Support Digital Citizen Participation
  • Tools to Support Decision-Making Process
  • The behavioral impacts of AI – e.g., on civic motivation & engagement, trust, etc.
  • The impact of AI on planning & policy development
  • The role of Societal factors in the implementation of AI
  • Rebooting Democracy in the Age of AI
  • AI and the Future of Wellbeing
  • AI in governance and public participation 
  • AI and the Future of Elections (the legitimacy of algorithmic decisions)
  • The ethics and risk governance of AI and algorithms in society
  • Transparency, Accountability, and Ethical Issues in Artificial Intelligence

Important dates:

  • Paper submission deadline: June 15, 2025
  • Notification of acceptance: July 15, 2025
  • Camera ready submission: August 1, 2025
  • Workshop Date: August 16-22, 2025

Join us at IJCAI 2025 to help shape the future of AI for democratic governance.