IJCAI 2025 Workshop on Democracy and AI (DemocrAI 2025)

Join us – Jawad Haqbeen, Rafik Hadfi, Takayuki Ito, and Anastasija Nikiforova (Kyoto University & University of Tartu) – at the 6th International Workshop on Democracy and AI (DemocrAI 2025), to take place in conjunction with the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025) in Montreal, Canada, August 16-22, to examine opportunities and risks associated with AI in democratic contexts.

Recent technical advances in machine learning, natural language processing, and multi-agent systems have greatly expanded the use of artificial intelligence (AI) applications in our daily lives. AI-driven systems are transforming the way we process, monitor, and manage data and services, offering innovative solutions for evidence-based policy planning and decision management. AI offers enormous potential to boost efficiency and improve decision-making by processing large amounts of data. For example, AI-assisted conversational chatbots can help strengthen democratic processes by delivering better public services, customizing services for citizens, facilitating engagement with large groups, connecting their ideas, and fostering social participation. However, alongside these benefits, AI may pose risks to individuals, organizations, and society as a whole. One significant concern is that machines lack accountability: they can generate (mis)information and make decisions that fundamentally affect the lives of ordinary citizens. The focus of this workshop will be on both the current and potential uses of AI in society.

This workshop welcomes research at the intersection of AI and democracy, focusing on topics including, but not limited to:

  • Systems to Support Digital Citizen Participation
  • Tools to Support Decision-Making Processes
  • The behavioral impacts of AI – e.g., on civic motivation & engagement, trust, etc.
  • The impact of AI on planning & policy development
  • The role of societal factors in the implementation of AI
  • Rebooting Democracy in the Age of AI
  • AI and the Future of Wellbeing
  • AI in governance and public participation 
  • AI and the Future of Elections (the legitimacy of algorithmic decisions)
  • The ethics and risk governance of AI and algorithms in society
  • Transparency, Accountability, and Ethical Issues in Artificial Intelligence

Important dates:

  • Paper submission deadline: June 15, 2025
  • Notification of acceptance: July 15, 2025
  • Camera-ready submission: August 1, 2025
  • Workshop Date: August 16-22, 2025

Join us at IJCAI 2025 to help shape the future of AI for democratic governance.

HICSS 2026 “Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation” mini-track


Are you researching sustainable and trustworthy digital ecosystems? Then submit your work to the HICSS 2026 “Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation” mini-track, which we chair together with Daniel Staegemann and Asif Gill at the Association for Information Systems Hawaii International Conference on System Sciences (HICSS-59)!

In an era where data is the foundation of digital transformation, well-designed and well-managed, sustainable, and trustworthy digital and data ecosystems are critical for artificial intelligence (AI), strategic innovation, governance, competitive advantage, and trust in increasingly digital societies. With the rise of new data architectures (e.g., data meshes and data lakehouses), the shift from centralized to decentralized systems, and the integration of AI in data governance and management, among other emerging technologies (e.g., blockchain, cloud computing), these ecosystems are becoming more dynamic, interconnected, and complex. However, alongside their potential benefits, which are a common focus of research on these ecosystems, challenges related to trustworthiness, transparency, security, sustainability, and governance must be addressed.

The mini-track invites research on how digital and data ecosystems evolve in terms of resilience, trustworthiness, and sustainability while enabling strategic innovation and societal transformation. We welcome studies that explore the interplay between AI, data governance, policies, methodologies, human factors, and digital transformation across sectors such as finance, government, healthcare, and education.
We seek theoretical, empirical, design science, case study, and interdisciplinary contributions on topics including, but not limited to:

  1. AI, trustworthiness, and governance in digital and data ecosystems:
    • AI as an actor and stakeholder in data ecosystems;
    • AI-augmented governance, security, and data quality management;
    • human factors in AI-integrated ecosystems (trust, user acceptance, participation);
    • interoperability, observability, and data linking across ecosystems;
  2. Emerging technologies and strategic innovation:
    • transition from centralized to decentralized data architectures (e.g., data lakehouses, data meshes);
    • emerging technologies for trustworthy ecosystems;
    • AI-driven business process augmentation and decision-making;
    • industry and government case studies on evolving data ecosystems;
  3. Resilience and sustainability of data ecosystems:
    • ethical AI and responsible innovation in data ecosystems;
    • sustainability and long-term governance of digital and data infrastructures;
    • cross-sectoral and interdisciplinary approaches for building sustainable ecosystems;
    • impact of data democratization on digital transformation and innovation.

By combining the strengths of strategic innovation, trustworthy AI, and data ecosystem governance, this mini-track aims to offer a holistic perspective at the intersection of information systems, AI governance, data science, and digital transformation. It will serve as a platform for researchers and practitioners to explore how digital and data ecosystems can be made sustainable, resilient, and trustworthy while driving innovation and societal transformation.

We welcome conceptual, empirical, design science, case study, and theoretical papers from fields such as information systems, computer science, data science, management and process science, policy-making, behavioral economics, and social sciences.

This mini-track is part of the HICSS-59 “Organizational Systems and Technology” track (chairs: Hugh Watson and Dorothy Leidner); more information about it can be found here.

Panel on Trust in AI @Digital Life Norway: In AI we trust!? (or don’t we? / should we?)

This October, I had the opportunity to participate in a panel on Trust in AI held as part of the Digital Life Norway conference, organized by the Centre for Digital Life Norway (Norwegian University of Science and Technology, NTNU) and hosted in the very peaceful Hurdal, Norway 🇳🇴🇳🇴🇳🇴.


As part of this panel, together with M. Nicolas Cruz B. (KIWI-biolab) and Korbinian Bösl (ELIXIR; both of us also being part of the EOSC Association), and with Anamika Chatterjee (Norwegian University of Science and Technology, NTNU) masterfully chairing the discussion, we discussed trust in AI and in data (as an integral part of it), emphasizing the need for transparency, reproducibility, and responsibility in managing them.


What made this discussion rather insightful – for ourselves and, hopefully, for the audience as well – is that each of us represented a distinct stage in the data lifecycle and debated where trust concerns arise as data moves from the lab to inform AI tools [in biotechnology].
As such, we:
✅highlighted the interconnectedness of human actors involved in data production, governance, and application;
✅highlighted the importance of proper documentation to make data usable and trustworthy, along with the need for transparency – not only for data but also for AI in general, including explainable AI;
✅discussed how responsibility becomes blurred as AI-driven methodologies become more prevalent, agreeing that responsibility for AI systems must be shared across teams.
Lastly, despite being an openness advocate, I used this opportunity to touch on the risks of open data, including the potential for misuse and ethical concerns, especially when it comes to medical and biotechnology-related topics.


All in all, although the discussion was rather short, with more things we would have loved to cover but had to omit this time, it was very lively and insightful. Sounds interesting? Watch the video, including the keynote by Nico Cruz 👇.

Of no less interest was the diverse set of other events – keynotes, panels, posters, etc. – with takeaways to bring back home (not straight back home, as from DLN I went to the Estonian Open Data Forum, from there to ECAI, and only then finally back home to digest all the insights). “Storytelling: is controversy good? How to pitch your research to a non-academic audience” by Kam Sripada and the panel on supervision are probably the main things I take with me.


Many thanks go to the organizers for having me and for the hospitality, where the latter also extends to Hurdal 🇳🇴 in general, as we were lucky enough to have very sunny weather, which made this very first trip to Norway – and, hopefully, not the last one – very pleasant!