AMCIS2026 Human–AI Collaboration and Governance for Responsible and Sustainable Digital Ecosystems mini-track

As digital transformation accelerates, the convergence of AI, data governance, and ecosystem thinking is reshaping how organizations create strategic value, build competitiveness, and sustain innovation advantage. Digital and data ecosystems are increasingly complex, spanning cloud, edge, and decentralized architectures such as data meshes and lakehouses, raising critical questions of trustworthiness, responsibility, and sustainability in AI integration.

This AMCIS2026 mini-track (organized under the Association for Information Systems (AIS)) explores how AI, including increasingly agentic systems, acts as both a strategic enabler and an active participant in digital and data ecosystems: enhancing governance, augmenting and automating decision-making, and transforming how organizations create value, while raising important governance, ethical, and human-agency considerations. We invite research examining how these ecosystems can remain responsible, resilient, and sustainable while enhancing organizational agility, competitiveness, and long-term strategic performance across sectors such as government, healthcare, finance, manufacturing, and education.

The track bridges perspectives from information systems, data science, AI governance, and sustainability research to understand how the strategic and responsible design and management of AI-driven data ecosystems can support long-term value creation, competitiveness, and societal transformation. We invite interdisciplinary contributions from fields such as computer science, management science, data science, process science, decision science, organizational design, policy-making, complexity, behavioral economics, and the social sciences. Submissions may include conceptual, design science, empirical, theoretical, or case-based studies, including literature reviews.

Topics of interest include but are not limited to:

  • AI for governance, accountability, and trustworthiness in digital and data ecosystems;
  • human–AI collaboration and delegation, human-in-the-loop and hybrid governance;
  • responsible, sustainable, and strategically aligned management of AI-augmented data ecosystems, including Green AI;
  • governance and data management in emerging architectures (e.g., data mesh, data lakehouse), including data quality, transparency, and explainability;
  • transition from centralized to decentralized data architectures – organizational and design challenges;
  • ethical, interoperable, observable, and explainable AI in connected and cross-sectoral data ecosystems;
  • co-evolution of digital and data ecosystem components;
  • coopetition between digital and data ecosystems;
  • resilience, sustainability, and long-term governance of digital infrastructures;
  • socio-technical, organizational, and policy approaches to trustworthy and responsible data ecosystems;
  • emerging technologies (e.g., blockchain, edge computing, generative AI, digital twins, IoT, AR/VR) shaping responsible, sustainable, and energy- or resource-efficient strategic ecosystem innovation;
  • empirical studies and sectoral case analyses (e.g., healthcare, finance, government, education) on evolving AI-driven ecosystems;
  • design science, conceptual, and interdisciplinary frameworks for responsible, sustainable, and strategically effective data ecosystem innovation.

This mini-track will serve as a platform for interdisciplinary dialogue on the critical role of responsible, sustainable, and strategically oriented digital and data ecosystems in driving competitive and societal innovation. Researchers and practitioners are invited to share insights, theoretical perspectives, and empirical findings in this rapidly evolving domain.

📌 Submission Deadline: March 1, 2026
📍 Venue: AMCIS 2026 — Reno, Nevada (August 20–22)

Mini-Track Chairs

Anastasija Nikiforova – University of Tartu, Estonia
Daniel Staegemann – Otto von Guericke University Magdeburg, Germany
Asif Gill – University of Technology Sydney, Australia
Martin Lnenicka – University of Hradec Králové, Czech Republic
George Marakas – Florida International University, USA

Read more and submit papers via the AMCIS 2026 website.

Green-Aware AI 2025 Workshop at ECAI2025

Join us – Riccardo Cantini, Luca Ferragina, Davide Mario Longo, Anastasija Nikiforova, Simona Nisticò, Francesco Scarcello, Reza Shahbazian, Dipanwita Thakur, Irina Trubitsyna, Giovanna Varricchio (University of Calabria & University of Tartu) – at the 2nd Workshop on Green-Aware Artificial Intelligence (Green-Aware AI 2025), to take place in conjunction with the 28th European Conference on Artificial Intelligence (ECAI2025) in Bologna, Italy, October 25-30. The workshop examines the sustainability challenges posed by the widespread adoption of AI systems, particularly those powered by increasingly complex models, as a timely contribution to the push toward responsible AI development.

The widespread adoption of AI systems, particularly those powered by increasingly complex models, necessitates a critical examination of the sustainability challenges posed by this technological revolution. The call for green awareness in AI extends beyond energy efficiency—it encompasses the integration of sustainability principles into system design, theoretical modeling, and real-world applications.

Green-aware AI requires a multidisciplinary effort to ensure sustainability in its fullest sense, that is, where the green dimension is interpreted broadly, fostering the creation of inherently green-aware AI systems aligned with human-centered values. These systems should uphold sustainability principles such as transparency, accountability, safety, robustness, reliability, non-discrimination, eco-friendliness, interpretability, and fairness—principles reflected in the 17 Sustainable Development Goals (SDGs) defined by the United Nations. The ethical and sustainable advancement of AI systems faces diverse challenges across every stage, including architectural and framework design, algorithm conceptualization, user interaction, data collection, and deployment. This involves designing tools that are inherently green-aware or introducing mechanisms, such as incentives, to encourage agents in AI systems to adopt green-aware behaviors. This principle can be applied across various domains of AI, including but not limited to Algorithm Design, Fairness, Ethics, Game Theory and Economic Paradigms, Machine Learning, Multiagent Systems, and all their applications.

It is worthwhile noting that machine learning systems rank among the most energy-intensive computational applications, significantly impacting the environment through their substantial carbon emissions. Notable examples include the training of large-scale, cutting-edge AI models like those used in ChatGPT and AlphaFold. The creation of such systems demands vast resources, including high-performance computing infrastructure, extensive datasets, and specialized expertise. These requirements create barriers to the democratization of AI, limiting access to large organizations or well-funded entities while excluding smaller businesses, under-resourced institutions, and individuals. The lack of interpretability in AI systems further exacerbates these challenges, raising significant concerns about trustworthiness, accountability, and reliability. Such systems often function as black boxes, making it difficult to understand their underlying decision-making processes. This opaqueness can erode public trust and create barriers to holding developers accountable for harmful outcomes. Additionally, AI systems are prone to biases embedded in their training data and reinforced through user interactions, perpetuating discrimination and unfair treatment, disproportionately affecting marginalized and underrepresented groups.
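The scale of these emissions is often sketched with the standard energy × carbon-intensity estimate used by trackers such as CodeCarbon. The snippet below is a minimal back-of-envelope illustration of that approach; the function name and all parameter values (power draw, PUE, grid carbon intensity) are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of training-run emissions:
#   energy (kWh) = power (W) * time (h) * PUE / 1000
#   CO2 (kg)     = energy (kWh) * grid carbon intensity (kg CO2 / kWh)
def estimate_training_co2(power_watts: float, hours: float,
                          pue: float = 1.5,
                          intensity_kg_per_kwh: float = 0.4) -> float:
    """Return an estimated kg of CO2 for a training run.

    pue: data-center Power Usage Effectiveness (infrastructure overhead);
    intensity_kg_per_kwh: grid carbon intensity. Both defaults are
    illustrative assumptions, not measured values.
    """
    energy_kwh = power_watts * hours * pue / 1000
    return energy_kwh * intensity_kg_per_kwh

# Hypothetical run: 8 GPUs drawing ~400 W each for 24 hours.
print(round(estimate_training_co2(8 * 400, 24), 2))  # 46.08 kg CO2
```

Even this toy example makes the point: a single day on a small GPU node already accounts for tens of kilograms of CO2, and frontier-scale training runs are orders of magnitude larger.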

By addressing these pressing challenges, the workshop aligns with the global push toward responsible AI development and provides a timely response to the environmental and social implications of AI technologies. Its primary goal is to foster discussions among scholars from diverse disciplines, facilitating the integration of technological advancements with environmental responsibility to drive progress toward a sustainable future. As such, Green-Aware AI 2025 invites contributions on the following topics of interest (though not limited to them):
💡Green-aware AI frameworks and applications;
💡AI methodologies for energy-efficient computing;
💡Human-centered and ethical AI design;
💡Reliable, transparent, interpretable, and explainable AI;
💡Trustworthy AI for resilient and adaptive systems;
💡Fairness in machine learning models and applications;
💡Impact of AI on underrepresented communities, bias mitigation, and exclusion studies (datasets and benchmarks);
💡Theoretical analysis of energy efficiency in AI systems;
💡Green and sustainable AI applications in environmental and social sciences, healthcare, smart cities, education, finance, and law;
💡Compression techniques and energy-aware training strategies for language models;
💡Approximate computing and efficient on-device learning;
💡Green-oriented models in game theory, economics, and computational social choice;
💡Green-awareness in multi-agent systems;
💡Security and privacy concerns in machine learning models.

Stay tuned – keynote speaker information coming soon!

📆Important dates:
Abstract submission: May 23
Paper submission: May 30
Notification of acceptance: July 25
Camera-ready: July 31

Join us at Green-Aware AI 2025 to help integrate technological advancements with environmental responsibility and drive progress toward a sustainable future.

The workshop is supported by Future AI Research (FAIR), the Italian Ministry of Education, Universities and Research, and Italia Domani.

Panel on Trust in AI @Digital Life Norway: In AI we trust!? (or don’t we? / should we?)

This October, I had the opportunity to participate in a panel on Trust in AI held as part of the Digital Life Norway conference, organized by the Centre for Digital Life Norway (Norwegian University of Science and Technology (NTNU)) and taking place in the very peaceful Hurdal (Norway) 🇳🇴.


As part of this panel, together with M. Nicolas Cruz B. (KIWI-biolab) and Korbinian Bösl (ELIXIR; both of us also being part of the EOSC Association), and with Anamika Chatterjee (Norwegian University of Science and Technology (NTNU)) masterfully chairing the discussion, we discussed trust in AI and in data (as an integral part of it), emphasizing the need for transparency, reproducibility, and responsibility in managing them.


What made this discussion rather insightful – for ourselves and, hopefully, for the audience as well – is that each of us represented a distinct stage in the data lifecycle, debating where trust concerns arise as data moves from the lab into AI tools [in biotechnology].
As such we:
✅highlighted the interconnectedness of human actors involved in data production, governance, and application;
✅highlighted the importance of proper documentation to make data usable and trustworthy, along with the need for transparency – not only for data but also for AI in general, incl. explainable AI;
✅discussed how responsibility becomes blurred as AI-driven methodologies become more prevalent, agreeing that responsibility for AI systems must be shared across teams.
Lastly, despite being an openness advocate, I used this opportunity to touch on the risks of open data, including the potential for misuse and ethical concerns, esp. when it comes to medical and biotechnology-related topics.


All in all, it was a rather short discussion – with more topics we would have loved to cover but had to omit this time – yet very lively and insightful. Sounds interesting? Watch the video, incl. the keynote by Nico Cruz 👇.

And not of least interest was the diverse set of other events – keynotes, panels, posters, etc. – with takeaways to bring back home (not straight home, though: from DLN I went to the Estonian Open Data Forum, from there to ECAI, and only then finally back home to digest all the insights). “Storytelling: is controversy good? How to pitch your research to a non-academic audience” by Kam Sripada and the panel on supervision are probably the main things I take with me.


Many thanks go to the organizers for having me and for their hospitality; the latter also extends to Hurdal 🇳🇴 in general, as we were lucky enough to have very sunny weather, which made this very first trip to Norway – and, hopefully, not the last one – very pleasant!