HICSS2026 Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation mini-track


Are you researching sustainable and trustworthy digital ecosystems? Then submit your work to our HICSS2026 “Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation” mini-track, which we chair together with Daniel Staegemann and Asif Gill at the Association for Information Systems Hawaii International Conference on System Sciences (HICSS-59)!

In an era where data is the foundation of digital transformation, well-designed and well-managed sustainable and trustworthy digital and data ecosystems are critical for artificial intelligence (AI), strategic innovation, governance, competitive advantage, and trust in increasingly digital societies. With the rise of new data architectures (e.g., data meshes and data lakehouses), the shift from centralized to decentralized systems, and the integration of AI in data governance and management, among other emerging technologies (e.g., blockchain, cloud computing), these ecosystems are becoming more dynamic, interconnected, and complex. However, alongside their potential benefits, which are a common focus of research on these ecosystems, challenges related to trustworthiness, transparency, security, sustainability, and governance must be addressed.

The HICSS2026 “Sustainable and Trustworthy Digital and Data Ecosystems for Societal Transformation” mini-track, which we chair together with Daniel Staegemann and Asif Gill, invites research on how digital and data ecosystems evolve in terms of resilience, trustworthiness, and sustainability while enabling strategic innovation and societal transformation. We welcome studies that explore the interplay between AI, data governance, policies, methodologies, human factors, and digital transformation across sectors such as finance, government, healthcare, and education.
We seek theoretical, empirical, design science, case study, and interdisciplinary contributions on topics including, but not limited to:

  1. AI, trustworthiness, and governance in digital and data ecosystems:
    • AI as an actor and stakeholder in data ecosystems;
    • AI-augmented governance, security, and data quality management;
    • human factors in AI-integrated ecosystems (trust, user acceptance, participation);
    • interoperability, observability, and data linking across ecosystems;
  2. Emerging technologies and strategic innovation:
    • transition from centralized to decentralized data architectures (e.g., data lakehouses, data meshes);
    • emerging technologies for trustworthy ecosystems;
    • AI-driven business process augmentation and decision-making;
    • industry and government case studies on evolving data ecosystems;
  3. Resilience and sustainability of data ecosystems:
    • ethical AI and responsible innovation in data ecosystems;
    • sustainability and long-term governance of digital and data infrastructures;
    • cross-sectoral and interdisciplinary approaches for building sustainable ecosystems;
    • impact of data democratization on digital transformation and innovation.

By combining the strengths of strategic innovation, trustworthy AI, and data ecosystem governance, this track aims to offer a holistic perspective at the intersection of information systems, AI governance, data science, and digital transformation. It will serve as a platform for researchers and practitioners to explore how digital and data ecosystems can be sustainable, resilient, and trustworthy while driving innovation and societal transformation.

We welcome conceptual, empirical, design science, case study, and theoretical papers from fields such as information systems, computer science, data science, management and process science, policy-making, behavioral economics, and social sciences.

This mini-track is part of the HICSS-59 “Organizational Systems and Technology” track (chairs: Hugh Watson and Dorothy Leidner), and more information about it can be found here.

Panel on Trust in AI @Digital Life Norway: In AI we trust!? (or don’t we? / should we?)

This October, I had the opportunity to participate in a panel on Trust in AI held as part of the Digital Life Norway conference, organized by the Centre for Digital Life Norway (Norwegian University of Science and Technology, NTNU), which took place in the very peaceful Hurdal, Norway 🇳🇴🇳🇴🇳🇴.


As part of this panel, together with M. Nicolas Cruz B. (KIWI-biolab) and Korbinian Bösl (ELIXIR; both of us are also part of the EOSC Association), and with Anamika Chatterjee (Norwegian University of Science and Technology, NTNU), who masterfully chaired the discussion, we discussed trust in AI and in data (as an integral part of it), emphasizing the need for transparency, reproducibility, and responsibility in managing them.


What made this discussion rather insightful – for ourselves and, hopefully, for the audience as well – is that each of us represented a distinct stage of the data lifecycle and debated where trust matters and where concerns arise as data moves from the lab to inform AI tools [in biotechnology].
As such we:
✅highlighted the interconnectedness of human actors involved in data production, governance, and application;
✅highlighted the importance of proper documentation to make data usable and trustworthy, along with the need for transparency – not only for data but also for AI in general, incl. explainable AI;
✅discussed how responsibility becomes blurred as AI-driven methodologies become more prevalent, agreeing that responsibility for AI systems must be shared across teams.
Lastly, despite being an openness advocate, I used this opportunity to touch on the risks of open data, including the potential for misuse and ethical concerns, especially when it comes to medical and biotechnology-related topics.


All in all, although the discussion was rather short, with some more things we would have loved to cover but had to omit this time, it was very lively and insightful. Sounds interesting? Watch the video, including the keynote by Nico Cruz 👇.

Of no less interest was the diverse set of other events – keynotes, panels, posters, etc. – with takeaways to bring back home (not straight home, as from DLN I went to the Estonian Open Data Forum, from there to ECAI, and only then finally back home to digest all the insights), where “Storytelling: is controversy good? How to pitch your research to a non-academic audience” by Kam Sripada and the panel on supervision are probably the main things I take with me.


Many thanks go to the organizers for having me and for their hospitality, where the latter also goes to Hurdal 🇳🇴 in general, as we were lucky enough to have very sunny weather, which made this very first trip to Norway – and, hopefully, not the last one – very pleasant!

CFP for The International Symposium on Foundation and Large Language Models (FLLM2023)

On behalf of the organizers of The International Symposium on Foundation and Large Language Models (FLLM2023), co-located with The 10th International Conference on Social Networks Analysis, Management and Security (SNAMS-2023), I am inviting everyone conducting research in this area to consider submitting a paper. Hurry up, as the submission deadline is October 28! 📢📢📢

Call for Papers:

With the emergence of foundation models (FMs) and large language models (LLMs) that are trained on large amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is experiencing a paradigm revolution. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP are now the foundation for new applications ranging from computer vision to protein sequence study and from speech recognition to coding. Earlier models had a reputation of starting from scratch with each new challenge. The capacity to experiment with, examine, and comprehend the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are currently inaccessible, as the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently not released due to their demand in the real-time industry. At the moment, mostly large tech companies such as OpenAI, Google, Facebook, and Baidu can afford to construct FMs and LLMs. Despite the expected and widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, because of their emergent qualities. To deal with these problems, we believe that much critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their essentially social and technical structure. The International Symposium on Foundation and Large Language Models (FLLM) addresses their architectures, applications, challenges, approaches, and future directions. We invite the submission of original papers on all topics related to FLLMs, with special interest in, but not limited to:

💡Architectures and Systems

  • Transformers and Attention
  • Bidirectional Encoding
  • Autoregressive Models
  • Prompt Engineering
  • Fine-tuning

💡Challenges

  • Hallucination
  • Cost of Creation and Training
  • Energy and Sustainability Issues
  • Integration
  • Safety and Trustworthiness
  • Interpretability
  • Fairness
  • Social Impact

💡Future Directions

  • Generative AI
  • Explainability
  • Federated Learning for FLLM
  • Data Augmentation

💡Natural Language Processing Applications

  • Generation
  • Summarization
  • Rewrite
  • Search
  • Question Answering
  • Language Comprehension and Complex Reasoning
  • Clustering and Classification

💡Applications

  • Natural Language Processing
  • Communication Systems
  • Security and Privacy
  • Image Processing and Computer Vision
  • Life Sciences
  • Financial Systems

Read more here and join us in Abu Dhabi, UAE, on November 22-23, 2023!

CFP for Data For Policy 2024 is open!

The CFP for Data for Policy 2024, scheduled for 9-11 July 2024, is open! All submissions are welcome, with a deadline of 27 November 2023.

This year, the Data for Policy conference, which is organized in collaboration with Imperial College London and Cambridge University Press, will take place in London, UK, under the title “Decoding the Future: Trustworthy Governance with AI” – trendy, isn’t it? In this edition, the conference “[is] focusing on the future of governance and decision making with AI. Firstly, what are the emerging capabilities, use cases, and best practices enabling innovation that could contribute to improved governance with AI? Secondly, what concerns are being raised regarding these advancements in areas such as data, algorithms, privacy, security, fairness, and potential risks? For both discussions, we invite proposals that delve into the role and capacity of governance in preventing AI-related harms and explore the potential for governance to generate added value through responsible AI deployment.” For a more thorough consideration of the conference theme, please read this informative blog post by Zeynep Engin and the conference co-chairs.

Data for Policy is looking forward to your submission to one of the six areas of the associated Data & Policy journal, which have been turned into the tracks for this conference. In addition, this list is complemented by a rich set of 11 special tracks.

Of course, my personal recommendation is to consider Area 1 “Digital & Data-driven Transformations in Governance” (chairs: Sarah Giest, Sharique Manazir, Francesco Mureddu, Keegan McBride, Anastasija Nikiforova, Sujit Sikder). More specifically, the track seeks contributions on topics that include, but are not necessarily limited to:

  • From data to decisions: knowledge generation and evidence formation;
  • Process, psychology and behaviour of decision-making in the digital era;
  • Government operations and services;
  • Government-citizen interactions; and open government;
  • Democracy, public deliberation, public infrastructure, justice, media;
  • Public, private and voluntary sector governance and policy-making.


Of course, do not ignore other tracks since each and every track definitely deserves your attention:

  • Area 1: Digital & Data-Driven Transformations in Governance – the one I just suggested;
  • Area 2: Data Technologies & Analytics for Governance;
  • Area 3: Policy & Literacy for Data;
  • Area 4: Ethics, Equity & Trustworthiness;
  • Area 5: Algorithmic Governance;
  • Area 6: Global Challenges & Dynamic Threats;
  • Special Track 1: Establishing an Allied by Design AI ecosystem
  • Special Track 2: Anticipating Migration for Policymaking: Data-Based Approaches to Forecasting and Foresight
  • Special Track 3: AI, Ethics and Policy Governance in Africa
  • Special Track 4: Social Media and Government
  • Special Track 5: Data and AI: critical global perspectives on the governance of datasets used for artificial intelligence
  • Special Track 6: Generative AI for Sound Decision-making: Challenges and Opportunities
  • Special Track 7: Governance of Health Data for AI Innovation
  • Special Track 8: Accelerating collective decision intelligence
  • Special Track 9: Artificial Intelligence, Bureaucracy, and Organizations
  • Special Track 10: AI and data science to strengthen official statistics
  • Special Track 11: Data-driven environmental policy-making

To sum up:

🗓️ WHEN? 9-11 July, 2024 -> deadline for papers and abstracts – 27 November, 2023

WHERE? London, UK

WHY? To understand which emerging capabilities, use cases, and best practices enabling innovation could contribute to improved governance with AI, and what concerns are being raised regarding these advancements in areas such as data, algorithms, privacy, security, fairness, and potential risks. For a more thorough consideration of the conference theme, please read this.

Find your favorite among the tracks and submit! See details on the official website.

The International Conference on Intelligent Metaverse Technologies & Applications (iMeta) and the 8th IEEE International Conference on Fog and Mobile Edge Computing (FMEC) in Tartu

This year we – the University of Tartu, Institute of Computer Science – have the pleasure of hosting FMEC 2023, taking place in conjunction with iMETA, where iMETA, as you may guess, is associated with the metaverse (more precisely, the International Conference on Intelligent Metaverse Technologies & Applications), while FMEC stands for the Eighth IEEE International Conference on Fog and Mobile Edge Computing.

The FMEC 2023 conference aims to investigate the opportunities and requirements for Mobile Edge Computing dominance and seeks novel contributions that help mitigate Mobile Edge Computing challenges. That is, the objective of FMEC 2023 is to provide a forum for scientists, engineers, and researchers to discuss and exchange new ideas, novel results, and experience on all aspects of Fog and Mobile Edge Computing (FMEC), covering its major areas, which include, but are not limited to, the following tracks:

  • Track 1: Fog and Mobile Edge Computing fuels Smart Mobility
  • Track 2: Edge-Cloud Continuum and Networking
  • Track 3: Industrial Fog and Mobile Edge Computing Applications
  • Track 4: Trustworthy AI for Edge and Fog Computing
  • Track 5: Security and privacy in Fog and Mobile Edge Computing
  • Track 6: Decentralized Data Management and Streaming Systems in FMEC
  • Track 7: FMEC General Track

The iMETA conference, in turn, aims to provide attendees with a comprehensive understanding of the communication, computing, and system requirements of the metaverse. Through keynote speeches, panel discussions, and presentations, attendees had the opportunity to engage with experts and learn about the latest developments and future trends in the field, covering areas such as:

  • AI
  • Security and Privacy
  • Networking and Communications
  • Systems and Computing
  • Multimedia and Computer Vision
  • Immersive Technologies and Services
  • Storage and Processing

As part of these conferences, I had the pleasure of chairing one of the sessions, where the room was carefully selected by the organizers to make me feel as if I were at home – we were located in the so-called Baltic rooms of the VSpa conference center, i.e., Estonia, Lithuania, and Latvia, so guess which room the session took place in? Bingo, Latvia! All in all, five talks were delivered:

  • Federated Object Detection for Quality Inspection in Shared Production by Vinit Hegiste
  • Federated Bayesian Network Ensembles by Florian van Daalen
  • Hyperparameters Optimization for Federated Learning System: Speech Emotion Recognition Case Study by Mohammadreza Mohammadi
  • Towards Energy-Aware Federated Traffic Prediction for Cellular Networks by Vasileios Perifanis
  • RegAgg: A Scalable Approach for Efficient Weight Aggregation in Federated Lesion Segmentation of Brain MRIs by Muhammad Irfan Khan, Esa Alhoniemi, Elina Kontio, Suleiman A. Khan and Mojtaba Jafaritadi

Each of the above was followed by a very lively discussion, which also continued after the session. This, in turn, was followed by an insightful keynote delivered by Mérouane Debbah on “Immersive Media and Massive Twinning: Advancing Towards the Metaverse”.

Also, thanks to our colleagues from EEVR (the Estonian VR and AR Association), I briefly went back to my school days and chemistry lessons, having a bit of fun – to be fair, I’ve always loved them (nerd and weirdo, I know…).

Thanks to the entire FMEC and iMETA organizing team!