CFP for AMCIS2025 “Sustainable Digital and Data Ecosystems – Navigating the Age of AI” mini-track

The Americas Conference on Information Systems (AMCIS), organized by the Association for Information Systems (AIS), is coming! This year it will be held in Montreal (Canada), running under the general theme of “Intelligent technologies for a better future” and with a revised list of (mini-)tracks, among which I would like to draw your special attention to the new “Sustainable Digital and Data Ecosystems – Navigating the Age of AI” mini-track (chairs: Anastasija Nikiforova, Daniel Staegemann, George Marakas, Martin Lnenicka).

In an increasingly data-driven world, well-designed and managed digital and data ecosystems are critical to strategic innovation and competitive advantage. With the rise of new data architectures, the shift from centralized to decentralized systems, and the integration of artificial intelligence (AI) in data management, these ecosystems are becoming more dynamic, interconnected, and complex.

The growing importance of emerging data architectures such as data lakehouses and data meshes, coupled with emerging technologies such as AI, blockchain, and cloud computing, to name a few, requires us to rethink how we manage, govern, and secure data across these ecosystems. Moreover, AI is no longer a mere component but an active agent/actor in these ecosystems, transforming processes such as data governance, data quality management, and security. Simultaneously, there is a pressing need to address how these systems can remain resilient and sustainable in the face of technological disruption and societal challenges, and how interdisciplinary approaches can provide new insights into managing these digital environments.

This mini-track seeks to explore the evolving nature of these ecosystems and their role in fostering sustainable, resilient, and innovative digital environments.

We encourage research from an ecosystem perspective (grounded in systems theory) that takes a holistic view, as well as more focused studies on specific components such as policies, strategies, interfaces, methodologies, or technologies. Special attention will be paid to the ongoing evolution of these ecosystems, especially their capacity to remain trustworthy, sustainable, and resilient over time.

Potential topics include but are not limited to:

  • data management and governance in emerging data architectures (data lakehouse, data mesh, etc.), including data governance, data quality management, and security;
  • the role of AI in data management, including AI-augmented governance, data quality management, and security;
  • AI-driven resilience and sustainability in digital and data ecosystems, including AI augmentation of data lifecycle and business processes;
  • conceptualization and evolution of digital and data ecosystem components and their interrelationships;
  • emerging technologies, such as blockchain, cloud computing, sensors etc., shaping the strategic development of digital and data ecosystems;
  • case studies on the transition from centralized (data warehouse, data lake, data lakehouse) to decentralized data architectures (e.g., data mesh);
  • human/user factors in digital and data ecosystems (acceptance, interactions, participation etc.);
  • empirical studies on the sustainability, trustworthiness, and resilience of digital ecosystems;
  • methodologies and strategies for managing evolving digital ecosystems in different sectors (e.g., finance, healthcare, government / public sector, education);
  • interdisciplinary approaches to building, managing, and sustaining digital and data ecosystems.

Research and innovation in digital and data ecosystems require an interdisciplinary approach. Therefore, this track invites papers from various disciplines, such as information systems, computer science, management science, data science, decision science, organizational design, policy making, complexity science, behavioral economics, and social science, to continue the problematization and exploration of concepts, theories, models, and tools for building, managing, and sustaining ecosystems. Contributions can be conceptual papers, design science research, empirical studies, industry and government case studies, and theoretical papers, including literature reviews.

As such, this mini-track will serve as a platform for interdisciplinary dialogue on the critical role of sustainable digital and data ecosystems in driving strategic innovation and competitive advantage. We invite researchers and practitioners alike to share their insights, theoretical perspectives, and empirical findings in this rapidly evolving domain.

This mini-track is part of the “Strategic & Competitive Uses of Information and Digital Technologies (SCUIDT)” track (chairs: Jack Becker, Russell Torres, Parisa Aasi, Vess Johnson).

For more information, see the AMCIS2025 website (for this mini-track, navigate to the “Strategic & Competitive Uses of Information and Digital Technologies (SCUIDT)” track).

Is your research related to any of the above topics? Then do not wait – submit! 📅📅📅 Submissions are due February 28, 2025.

📢📜New paper alert! Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions

This post is dedicated to the paper “Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions” (Ukpabi, D.C., Karjaluoto, H., Bötticher, A., Nikiforova, A., Petrescu, D.I., Schindler, P., Valtenbergs, V., Lehmann, L.), which has just been published open access in the Futures journal (Elsevier; Q1 in (1) Business and International Management, (2) Development, and (3) Sociology and Political Science).

Recently, there has been increasing awareness of the tremendous opportunities inherent in quantum computing. It is expected that the speed and efficiency of quantum computing will significantly impact the Internet of Things, cryptography, finance, and marketing. Accordingly, there has been increased quantum computing research funding from national and regional governments and private firms. However, critical concerns exist regarding legal, political, and business-related policies germane to quantum computing adoption. Therefore, a call has recently been made for a framework developed from an interdisciplinary perspective to help understand the potential impact of quantum computing on society, which is vital for improving strategic planning and management by governments and other stakeholders. The lack of such a framework is due to the fact that quantum computing per se is a highly technical domain; hence, most existing studies focus heavily on its technical aspects. In contrast, our study highlights its practical and social use cases, which are needed given the increased interest of governments. More specifically, our study took up this call and offers a preliminary version of a framework for understanding the social, economic, and political use cases of quantum computing, identifies possible areas of market disruption, and provides empirically based recommendations that are critical for forecasting, planning, and strategically positioning quantum computing for accelerated diffusion, including the definition of 52 research questions that will be critical for the adoption of quantum computing.


To this end, we conducted gray literature research, whose outputs were structured in accordance with Dwivedi et al. (2021), covering environment, users, and application areas. We then validated the findings by discussing them with the quantum computing community at QWorld Quantum Science Days 2023 (QSD 2023) (on which I posted before 👉 here).

In short:

  • the hottest application areas are 🔥🔥🔥 business & finance, renewable energy, medicine & pharmaceuticals, & manufacturing 🔥🔥🔥;
  • at the level of environment – ecosystem, security, jurisprudence, institutional change & geopolitics;
  • users – customers, firms, and countries or, to be more precise, governments, with reference to both national and local governments.

We then dived into these areas and came up with the most popular, promising, and overlooked topics, and, as the very end result, defined 52 research questions, i.e., very specific questions that are expected to be addressed in the future to understand the current state of the art, as well as the transformations needed at various levels. The insights offered by contributors from diverse disciplines – business, information systems, quantum computing, political science, and law – provide a broad-based view of the potential of quantum computing for different aspects of our technological, economic, and social development. This framework is intended to help identify possible areas of market disruption, offering empirically based recommendations that are critical for forecasting, planning, and strategic positioning prior to the emergence of quantum computing.

This is truly a “happy end!” for the consortium that we built ~3 years ago – with Germany, Spain, Finland, Romania, and Latvia – while working on a project proposal for the CHANSE call “Transformations: Social and Cultural Dynamics in the Digital Age”. We went much further than I had expected; in fact, we were notified only at the very last stage that we would not be granted funding for the project, having gone through all the intermediate evaluation rounds, which was already fascinating news (at least for me). While working on the proposal and building our network, we conducted a preliminary analysis of the area, which, regardless of the outcome of the application, we decided to continue and bring to at least some logical end. We like our result, so we decided to make it publicly available.

All in all, consider this our warm invitation to read the paper -> here

And just in case you prefer a condensed version, you can just watch the video of the talk I delivered at QWorld Quantum Science Days 2023 (QSD 2023) 👇

References:

Ukpabi, D.C., Karjaluoto, H., Bötticher, A., Nikiforova, A., Petrescu, D.I., Schindler, P., Valtenbergs, V., & Lehmann, L. (2023). Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions. Futures, 103277. ISSN 0016-3287. https://doi.org/10.1016/j.futures.2023.103277

Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., … & Wang, Y. (2021). Setting the future of digital and social media marketing research: Perspectives and research propositions. International Journal of Information Management, 59, 102168.

CyberCommando’s meetup and my talk on Internet of Things Search Engines and their role in detecting vulnerable open data sources

October is Cybersecurity Awareness Month, as part of which the CyberCommando’s meetup 2023 took place in the very heart of Latvia – Riga – where I was invited to deliver a talk that I devoted to IoTSE and entitled “What do Internet of Things Search Engines know about you? or IoTSE as a vulnerable open data sources detection tool”.

CyberCommando’s meetup organizers claim it to be the most anticipated vendor-independent industry event in the realm of cybersecurity – a conference designed to empower local and regional IT security professionals facing the evolving challenges of the digital age by bringing together high-level ICT professionals from local, regional, and international businesses, governments and government agencies, tech communities, and the financial, public, and critical infrastructure sectors. The meetup covered a broad set of topics, ranging from the development of ICT security skills and awareness raising, to modern market developments and numerous technological solutions in Cloud, Data, Mobility, Network, Application, Endpoint, Identity & Access, and SecOps, to corporate and government strategies and the future of the sector. It featured three parallel sessions and numerous talks delivered by 20+ local and international experts, including but not limited to IT-Harvest, Radware, DeepInstinct, Pentera, ForeScout Technologies, CERT.LV, and ESET. It is a great honor to complement this list with the University of Tartu, which I represented by delivering my talk on the main stage 🙂

Let’s turn to my talk – “What do Internet of Things Search Engines know about you? or IoTSE as a vulnerable open data sources detection tool”. Luckily, very few attendees knew of or had used OSINT (Open Source INTelligence) or Internet of Things Search Engines (IoTSE) (although perhaps they were just too shy to raise their hands when I asked), so, hopefully, this was a good choice of topic. So, what was it about?

Today, there are billions of interconnected devices that form Cyber-Physical Systems (CPS), Internet of Things (IoT), and Industrial Internet of Things (IIoT) ecosystems. As the number of devices and systems in use and the volume and value of data increase, the risks of security breaches increase as well.

As I discussed previously, this “has become even more relevant in terms of COVID-19 pandemic, when in addition to affecting the health, lives, and lifestyle of billions of citizens globally, making it even more digitized, it has had a significant impact on business [3]. This is especially the case because of challenges companies have faced in maintaining business continuity in this so-called “new normal”. However, in addition to those cybersecurity threats that are caused by changes directly related to the pandemic and its consequences, many previously known threats have become even more desirable targets for intruders, hackers. Every year millions of personal records become available online [4-6]. Lallie et al. [3] have compiled statistics on the current state of cybersecurity horizon during the pandemic, which clearly indicate a significant increase of such. As an example, Shi [7] reported a 600% increase in phishing attacks in March 2020, just a few months after the start of the pandemic, when some countries were not even affected. Miles [8], however, reported that in 2021, there was a record-breaking number of data compromises, where “the number of data compromises was up more than 68% when compared to 2020”, when LinkedIn was the most exploited brand in phishing attacks, followed by DHL, Google, Microsoft, FedEx, WhatsApp, Amazon, Maersk, AliExpress and Apple.”

And while Risk Based Security & Flashpoint (2021) [5] suggest that the vulnerability landscape is returning to normal, owing, among other things, to various activities such as the INTERPOL #WashYourCyberHands campaign and “vaccinate your organization” movements, another trigger closely related to cybersecurity that is now affecting the world is geopolitical upheaval. Additionally, according to Cybersecurity Ventures, by 2025 cybercrime will cost the world economy around $10.5 trillion annually, up from $3 trillion in 2015. Moreover, we are at risk of what is called a Cyber Apocalypse or Cyber Armageddon, as was discussed during the World Economic Forum (and according to Forbes), which is considered very likely to happen in the coming 2 years (hopefully, it will not).

According to Forbes, the key drivers for this are the ongoing digitization of society, behavioral changes due to the COVID-19 pandemic, political instability such as wars, and the global economic downturn, while the WEF relates this to the fact that technology is becoming more complex – in particular, breakthrough technologies such as AI (considering the current state of the art, I would stress the role of quantum computing here). I would add that this “complexity” is two-fold, i.e., technologies are becoming more advanced while at the same time easier to use, including those that can be used to detect and expose vulnerabilities. At the same time, although society is being digitized, it tends to lack digital literacy, data literacy, and security literacy.

Hence, when we ask what should be done to tackle the associated issues, the answer is also multi-fold. Some recommendations being actively discussed, including by Forbes and Accenture, are to “secure the core”, which involves ensuring that security and resilience are built into every aspect of the organization and understanding that cybersecurity is not something discussed only within the IT department but rather at all levels of the organization; organizations also need to address the skills shortage within the cybersecurity domain, and this should involve utilizing automation where possible.

To put it simply:

  • (cyber)security governance
  • digital literacy
  • cybersecurity is not a one-time event, but a continuous process
  • automation whenever possible
  • «security first!» as a principle for all artifacts, processes and ecosystem
  • preferably – «security-by-design» and «absolute security», which, of course, is rather a utopia, but still something we have to strive for (despite the fact that we know it is impossible to achieve this level).

Or even simpler, as I typically say – “security to every home!”.

In the light of the above, i.e., “security first!” as a principle for all artifacts and the need to “secure the core” – are our data management systems always protected by default (i.e., secure-by-design)? While it may sound surprising and weird in 2023, it is a fact that, while various security protection mechanisms have been widely implemented, such a “primitive” artifact as the data management system seems to have been rather neglected, and the number of unprotected or insufficiently protected data sources is enormous. Recent research has demonstrated that weak data protection, and weak database protection in particular, is one of the key security threats [4,6,9-11]. According to a list drawn up by Bekker [5] and Identity Force of major security breaches in 2020, a large number of data leaks occur due to unsecured databases. As an example:

  • Estee Lauder – 440 million customer records
  • Prestige Software hotel reservation platform – over 10 million hotel guests, including Expedia, Hotels.com, Booking.com, Agoda etc.
  • U.K.-based security firm – data of Adobe, Twitter, Tumblr, LinkedIn, etc. users, with a total of over 5 billion records
  • Marijuana Dispensaries – 85 000 medical patient and recreational user records

to name just a few… At times this is due to their (mis)configuration; at times, due to vulnerabilities in the products or services for which additional security mechanisms would be required. Sometimes, of course, it is due to very targeted attacks, for which the remainder of this post will have limited value; let us rather focus on the very critical cases referred to above, especially in the context of the above-mentioned fact that recent advances in ICT have decreased the complexity of searching for connected devices on the Internet and made access to them easy even for novices, thanks to the widespread popularity of step-by-step guides on how to use IoTSE – aka Internet of Everything (IoE) or Open Source Intelligence (OSINT) search engines such as Shodan, BinaryEdge, Censys, ZoomEye, Hunter, Greynoise, and IoTCrawler – to find and gain access to insufficiently protected webcams, routers, databases, refrigerators, power plants, and even wind turbines. As a result, OSINT has been recognized as one of the five major categories of CTI (Cyber Threat Intelligence) sources (at times more than five are named, but OSINT remains part of them), along with Human Intelligence (HUMINT), Counter Intelligence, Internal Intelligence, and Finished Intelligence (FINTEL).
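
To make this concrete, here is a minimal sketch of how such a search looks with the official shodan Python library (not part of the talk; the API key is a hypothetical placeholder, and the query is just an illustrative example – search filters require an API plan that supports them):

```python
# pip install shodan
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # hypothetical placeholder
api = shodan.Shodan(API_KEY)

try:
    # Look for Elasticsearch instances exposed to the Internet;
    # similar product/port filters work for the other engines listed above.
    results = api.search('product:"Elastic" port:9200')
    print(f"Instances found: {results['total']}")
    for match in results["matches"][:5]:
        location = match.get("location", {}) or {}
        print(match["ip_str"], match.get("org", "n/a"), location.get("country_name"))
except shodan.APIError as e:
    print(f"Shodan API error: {e}")
```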

While these tools may represent a security risk, they also provide many positive and security-enhancing opportunities. They provide an overview of network security, i.e., of the devices connected to the Internet within the company; they are useful for market research and adapting business strategies; they allow tracking of the growing number of smart devices representing the IoT world and of ransomware – the number and nature of devices affected by it – and therefore allow determining the appropriate actions to protect yourself in the light of current trends. However, almost every one of these white-hat-oriented objectives can also be exploited by black hats.

In this talk I raised several questions that can be at least partly answered with the help of IoTSE, such as:

  • Is the data source visible and even accessible outside the organization?
  • What data can be gathered from it, and what is their “value” for external actors such as attackers and fraudsters? I.e., can these data pose a threat to the organization by being used to deploy an attack?
  • Are stronger security mechanisms needed? Is the vulnerability related to internal (mis)configuration or to the database in use?

To answer the above questions, I referred to a study conducted some time ago by me and my former student, Artjoms Daškevičs (a very talented student, whose bachelor thesis was even nominated for the best Computer Science thesis in Latvia). As part of that study, an Internet of Things Search Engine- (IoTSE-)based tool called ShoBEVODSDT (Shodan- and BinaryEdge-based Vulnerable Open Data Sources Detection Tool) was developed. This “toy example” of IoTSE conducts a passive assessment – it does not harm the databases but rather checks for potentially existing bottlenecks or weaknesses which could be exposed if an attack took place. It allows either a comprehensive analysis of all unprotected data sources falling into a list of predefined data sources – MySQL, PostgreSQL, MongoDB, Redis, Elasticsearch, CouchDB, Cassandra, and Memcached – or the definition of an IP range to examine what can be seen about the data source from outside the organization (read more in Daskevics and Nikiforova, 2021).
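
To give a flavor of what such a passive check can look like (this is my own simplified sketch, not ShoBEVODSDT’s actual code), one can open a TCP connection and see whether the service answers a harmless command without any credentials – for Redis and Memcached this takes just a few lines:

```python
import socket

def probe(host: str, port: int, payload: bytes) -> bytes:
    """Send a single harmless command and return the raw reply (passive check only)."""
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(payload)
        return s.recv(1024)

host = "203.0.113.7"  # hypothetical address from the TEST-NET-3 documentation range

# Redis: an unauthenticated instance replies "+PONG", a protected one "-NOAUTH ...".
print(probe(host, 6379, b"PING\r\n"))

# Memcached: an open instance returns its "stats" dump without any credentials.
print(probe(host, 11211, b"stats\r\n"))
```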

The remainder was mostly built around four questions (and articles / book chapters) that we addressed with its help, namely:

  • Which data sources have proven to be the most vulnerable and visible outside the organization?
  • What data can be gathered from open data sources (if any), and what is their “value” for external actors such as attackers and fraudsters? Can these data pose a threat to the organization by being used to deploy an attack?

This part was built around our conference paper and this book chapter. In short (for a somewhat longer answer, refer to the article), the share of data sources accessible outside the organization is less than 2% (more than 98% of data sources are not accessible via a simple IoTSE tool). However, there are some data sources that may pose risks to organizations, and 12% of open data sources – i.e., data sources the IoTSE tool was able to reach – were already compromised or contained data that could be used to compromise them. Elasticsearch and Memcached had the highest ratio of instances to which it was possible to connect, while MongoDB, PostgreSQL, and Elasticsearch demonstrated the most negative trend in terms of already compromised databases (not compromised by us, of course).

In addition, we might be interested in comparing SQL and NoSQL databases, where the latter are less likely to provide security measures, including sometimes very primitive and simple measures such as authentication, authorization (Sahafizadeh et al., 2015), and data encryption. This is what we explored in the book chapter. We were not able to find significant differences: from the “most secure” service viewpoint, CouchDB demonstrated very good security results as a NoSQL database, and MySQL as a relational database. However, if the developer needs to use Redis or Memcached, additional security mechanisms and/or activities should be introduced to protect them. It must be understood, however, that these results cannot be broadly generalized with regard to the security of the data storage facilities themselves, as they mostly demonstrate how many data storage holders were concerned about the security of their facilities, since many of these facilities have a series of built-in mechanisms that can be applied. As for the “most insecure” service, Elasticsearch is characterized by weaker and less frequently used security protection mechanisms, which means that the database holder should be wary when using it. A similar conclusion can be drawn for Memcached (although this contradicts CVE Details), for which the total number of vulnerabilities found was the highest. However, the risk of these vulnerabilities was lower compared to Elasticsearch, so it can be assumed that CVE Details either does not count such “low-level” weaknesses or has not yet identified them. In the future, an in-depth analysis of what CVE Details counts as a vulnerability, and further exploration of the correlation with our results, could be carried out.
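
As a hedged illustration of what such additional protection might look like for Redis (my own minimal sketch, not taken from the chapter): enable requirepass, bind, and TLS on the server side, and connect with credentials from the client, e.g., via the redis-py library (host and password below are hypothetical):

```python
# pip install redis
import redis

# Client side of a hardened setup; the host and password are hypothetical.
# Server side (redis.conf) would set at least: requirepass, bind, protected-mode yes.
r = redis.Redis(
    host="redis.internal.example",    # hypothetical host, not Internet-facing
    port=6379,
    password="a-long-random-secret",  # must match the server's `requirepass`
    ssl=True,                         # assumes TLS is enabled on the server (Redis 6+)
)
print(r.ping())  # True only if authentication and the TLS handshake succeed
```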

The next question we were interested in was:

  • Which Baltic country – Latvia, Lithuania, or Estonia – has the most open & vulnerable data sources? And will the technological development of Estonia be visible here as well?

This question was raised and partially answered in another conference paper. It is impossible to give an unambiguous answer here, since, while Latvia showed the highest ratio of successful connections (and Estonia the lowest), Lithuania showed the most negative result in terms of already compromised data sources, and Estonia in terms of sensitive and non-sensitive data. Estonia, however, had the largest number of data sources from which data could not be obtained (with Latvia having a slightly lower but still relatively good result in this regard). Based on the average value of the data that could be obtained from these data sources, Lithuania again demonstrated the most negative result, which, however, differed only slightly from the results demonstrated by Estonia and Latvia (and may be a statistical artifact, since the total number of data sources found by our tool differed significantly across these countries). When examining the specific data sources most likely causing the lower results, they vary from one country to another, so it is impossible to single out one most insecure database as the root of all problems.

And one more question I raised was:

  • Do “traditional” vulnerability registries provide a sufficiently comprehensive view of DBMS security, or should DBMSs be subject to intensive and dynamic inspection by their owners?

This was covered in the book chapter, which provides a comparative analysis of the results extracted from the CVE database against the results obtained by applying the IoTSE-based tool. Unsurprisingly, the results are in most cases rather complementary, and one source cannot completely replace the other. This is not only due to the scope limitations of both sources – CVE Details covers some databases not covered by ShoBEVODSDT and provides insights into a more diverse set of vulnerabilities, while not providing the most up-to-date information and offering very limited insight into MySQL. At the same time, there are cases when both sources point to the same security-related issue and its frequency, which can be seen as a trend and should be acted upon by users, taking steps to secure databases that definitely do not comply with the “secure by design” principle. This refers to MongoDB, PostgreSQL, and Redis.

All in all, it can be said that the answers to some of these questions may seem obvious or expected; however, as our research has shown, firstly, not all of them are obvious to everyone (i.e., there are no secure-by-design databases/data sources, so the data source owner has to think about their security), and, secondly, not all of these “obvious” answers are 100% correct.

All in all, both the talk and these studies show an obvious reality, which, however, is not always visible to the company. “This may seem surprising in light of current advances, [but] the first step that still needs to be taken when thinking about data security is to make sure that the database uses the basic security features […] Ignorance or non-awareness can have serious consequences, leading to data leakages if these vulnerabilities are exploited.” Data security and appropriate database configuration are not only about NoSQL, which is typically considered to be much less secure, but also about RDBMS – this study has shown that RDBMS are also relatively prone to various types of vulnerabilities. Moreover, there is no “secure by design” database, which is not surprising, since absolute security is known to be impossible. However, this does not mean that actions should not be taken to improve it. More precisely, it should be a continuous process consisting of a set of interrelated steps, sometimes referred to as “reveal-prioritize-remediate”. It should be noted that 85% of breaches in 2021 were due to a human factor, with social engineering recognized as the most popular pattern [12]. The reason for this is that even with highly developed and mature data and system protection mechanisms (e.g., IDS), the human factor remains very difficult to control. Therefore, education and training of system users regarding digital literacy, as well as the definition, implementation, and maintenance of security policies and a risk management strategy, must complement technical advances.

Or, to put it even simpler, once again: digital literacy “to every home”; cybersecurity is not a one-time event but a continuous process; automation whenever possible; cybersecurity governance; the “security first!” principle for all artifacts, processes, and ecosystems; and, preferably, the “security-by-design” principle whenever and wherever possible. Or, as I concluded the talk – “We have got to start locking that door!” (by Ross, F.R.I.E.N.D.S) – before we have to act as Commando.

Big thanks go to the organizers of the event, especially Andris Soroka, and to the sponsors who supported such a wonderful event – HeadTechnology, ForeScout, LogPoint, DeepInstinct, IT-Harvest, Pentera, GTB Technologies, Stellar Cyber, Appgate, OneSpan, ESET Digital Security, Veriato, Radware, Riseba, the Ministry of Defence of Latvia, CERT.LV, Latvijas Sertificēto Personas Datu Aizsardzības Speciālistu Asociācija, Dati Group, Latvijas Kiberpsiholoģijas Asociācija, Optimcom, Vidzeme University of Applied Sciences, Stallion, ITEksperts, and Kingston Technology.

P.S. If, considering the topics I typically cover, you are wondering why I am talking about security this time, let me briefly answer. First, for those who know me better, it is a well-known fact that cybersecurity was my first choice in the big IT world – it was, is, and probably will remain my passion, although now it is rather a hobby. It was also a central part of my duties in one of my previous workplaces, including the one where I worked with the organizer of this event (oh, my first honeypot…). Second, and related to the first point, this was the topic about which one of my professors (during the first or second year of my studies) told me that I must become a researcher (“yes, sure 😀 😀 😀 you must be kidding” was my thought at that point, but I do not laugh at this “ridiculous joke” anymore, and am rather grateful that I was noticed so early and then constantly reminded about it by other colleagues, which resulted in the current version of me). Third, the data quality and open data that I talk about a lot are all about the value of data, and two main prerequisites for this value are (1) data quality and (2) data security; so, in fact, data security is an inevitable component that we must think and talk about.

References:

CFP for Data For Policy 2024 is open!

And the CFP for Data For Policy 2024, scheduled for 9-11 July 2024, is open! All submissions are welcome, with a deadline of 27 November 2023.

This year’s Data for Policy conference, organized in collaboration with Imperial College London and Cambridge University Press, will take place in London, UK, and will run under the title “Decoding the Future: Trustworthy Governance with AI” – trendy, isn’t it? In this edition, “[we] are focusing on the future of governance and decision making with AI. Firstly, what are the emerging capabilities, use cases, and best practices enabling innovation that could contribute to improved governance with AI? Secondly, what concerns are being raised regarding these advancements in areas such as data, algorithms, privacy, security, fairness, and potential risks? For both discussions, we invite proposals that delve into the role and capacity of governance in preventing AI-related harms and explore the potential for governance to generate added value through responsible AI deployment.” For a more thorough consideration of the conference theme, please read this informative blog by Zeynep Engin and the conference co-chairs.

Data for Policy is looking forward to your submission to one of the six areas of the respective Data & Policy journal, which have been transformed into the tracks for this conference. In addition, this list is complemented by a rich set of 11 special tracks.

Of course, my personal recommendation is to consider Area 1 “Digital & Data-driven Transformations in Governance” (chairs: Sarah Giest, Sharique Manazir, Francesco Mureddu, Keegan McBride, Anastasija Nikiforova, Sujit Sikder). More specifically, the track seeks contributions on topics that include but are not necessarily limited to:

  • From data to decisions: knowledge generation and evidence formation;
  • Process, psychology and behaviour of decision-making in digital era;
  • Government operations and services;
  • Government-citizen interactions; and open government;
  • Democracy, public deliberation, public infrastructure, justice, media;
  • Public, private and voluntary sector governance and policy-making.


Of course, do not ignore the other tracks, since each and every one of them definitely deserves your attention:

  • Area 1: Digital & Data-Driven Transformations in Governance – the one I just suggested;
  • Area 2: Data Technologies & Analytics for Governance;
  • Area 3: Policy & Literacy for Data;
  • Area 4: Ethics, Equity & Trustworthiness;
  • Area 5: Algorithmic Governance;
  • Area 6: Global Challenges & Dynamic Threats;
  • Special Track 1: Establishing an Allied by Design AI ecosystem
  • Special Track 2: Anticipating Migration for Policymaking: Data-Based Approaches to Forecasting and Foresight
  • Special Track 3: AI, Ethics and Policy Governance in Africa
  • Special Track 4: Social Media and Government
  • Special Track 5: Data and AI: critical global perspectives on the governance of datasets used for artificial intelligence
  • Special Track 6: Generative AI for Sound Decision-making: Challenges and Opportunities
  • Special Track 7: Governance of Health Data for AI Innovation
  • Special Track 8: Accelerating collective decision intelligence
  • Special Track 9: Artificial Intelligence, Bureaucracy, and Organizations
  • Special Track 10: AI and data science to strengthen official statistics
  • Special Track 11: Data-driven environmental policy-making

To sum up:

🗓️ WHEN? 9-11 July, 2024 -> deadline for papers and abstracts – 27 November, 2023

WHERE? London, UK

WHY? To understand the emerging capabilities, use cases, and best practices enabling innovation that could contribute to improved governance with AI, and the concerns being raised regarding these advancements in areas such as data, algorithms, privacy, security, fairness, and potential risks. For a more thorough consideration of the conference theme, please read this.

Find your favorite among the tracks and submit! See details on the official website.

The International Conference on Intelligent Metaverse Technologies & Applications (iMeta) and the 8th IEEE International Conference on Fog and Mobile Edge Computing (FMEC) in Tartu

This year we – the University of Tartu, Institute of Computer Science – have the pleasure of hosting FMEC 2023, which is taking place in conjunction with iMETA, where iMETA, as you can guess, is associated with the metaverse (more precisely, the International Conference on Intelligent Metaverse Technologies & Applications), while FMEC stands for the Eighth IEEE International Conference on Fog and Mobile Edge Computing.

The FMEC 2023 conference aims to investigate the opportunities and requirements for Mobile Edge Computing dominance and seeks novel contributions that help mitigate Mobile Edge Computing challenges. That is, the objective of FMEC 2023 is to provide a forum for scientists, engineers, and researchers to discuss and exchange new ideas, novel results, and experience on all aspects of Fog and Mobile Edge Computing (FMEC), covering its major areas, which include, but are not limited to, the following tracks:

  • Track 1: Fog and Mobile Edge Computing fuels Smart Mobility
  • Track 2: Edge-Cloud Continuum and Networking
  • Track 3: Industrial Fog and Mobile Edge Computing Applications
  • Track 4: Trustworthy AI for Edge and Fog Computing
  • Track 5: Security and privacy in Fog and Mobile Edge Computing
  • Track 6: Decentralized Data Management and Streaming Systems in FMEC
  • Track 7: FMEC General Track

The iMETA conference, in turn, aimed to provide attendees with a comprehensive understanding of the communication, computing, and system requirements of the metaverse. Through keynote speeches, panel discussions, and presentations, attendees had the opportunity to engage with experts and learn about the latest developments and future trends in the field, covering areas such as:

  • AI
  • Security and Privacy
  • Networking and Communications
  • Systems and Computing
  • Multimedia and Computer Vision
  • Immersive Technologies and Services
  • Storage and Processing

As part of these conferences, I had the pleasure of chairing one of the sessions, where the room had been carefully selected by the organizers to make me feel as if I were at home – we were located in the so-called Baltic rooms of the VSpa conference center, i.e., Estonia, Lithuania, and Latvia, so guess which room the session took place in? Bingo, Latvia! All in all, 5 talks were delivered:

  • Federated Object Detection for Quality Inspection in Shared Production by Vinit Hegiste
  • Federated Bayesian Network Ensembles by Florian van Daalen
  • Hyperparameters Optimization for Federated Learning System: Speech Emotion Recognition Case Study by Mohammadreza Mohammadi
  • Towards Energy-Aware Federated Traffic Prediction for Cellular Networks by Vasileios Perifanis
  • RegAgg: A Scalable Approach for Efficient Weight Aggregation in Federated Lesion Segmentation of Brain MRIs by Muhammad Irfan Khan, Esa Alhoniemi, Elina Kontio, Suleiman A. Khan and Mojtaba Jafaritadi

Each of the above was followed by a very lively discussion, which continued also after the session. This, in turn, was followed by an insightful keynote delivered by Mérouane Debbah on “Immersive Media and Massive Twinning: Advancing Towards the Metaverse”.

Also, thanks to our colleagues from EEVR (the Estonian VR and AR Association), I briefly went back to my school days and chemistry lessons, having a bit of fun – a good point, since I’ve always loved them (nerd and weirdo, I know…).

Thanks to the entire FMEC and iMETA organizing team!