The United Nations University EGOV’s repository platform and five of my articles it recommends 📖📚🧐

Recently, the United Nations University announced the launch of the United Nations University EGOV’s repository platform – a centralized hub of specialized repositories tackling global challenges. It is dedicated to two topics: EGOV for Emergencies, which provides a set of content on innovations in digital governance for emergency response, and Data for EGOV, the repository intended “to support policymakers, decision-makers, researchers, and the community interested in digitally transforming the public sector through emerging technologies and data. The repository combines diverse academic documents, use cases, rankings, best practices, standards, benchmarking, portals, datasets, and pilot projects to support open data, quality and purpose of open data, application of data analytics techniques in the public sector, and making cities smarter. This repository results from the “INOV.EGOV-Digital Governance Innovation for Inclusive, Resilient and Sustainable Societies” project on the role of open data and data science technologies in the digital transformation of State and Public Administration institutions“. The latter recommends 286 reading materials (reports, articles, standards, etc.) that I find very relevant to the above and highly recommend browsing through. However, what made me especially happy while browsing this collection is the fact that five of these reading materials are articles (co-)authored by me. Therefore, considering that I do not always keep track of what I have done in the past, let me use this opportunity to reflect on those studies, in case you have not come across them before, and to refresh my own memory (some of them date back to the time when I was working on my PhD thesis).

By the way, every article is accompanied by tags that enrich the keywords with which the authors described it, with particular attention paid to the main topics, incl. “data analytics”, “smart city”, “open data”, “sustainability”, etc. For the latter, “sustainability”, articles are tagged according to their compliance with specific Sustainable Development Goals (SDGs), which allows filtering relevant articles by a specific SDG, or finding out which SDGs your own article contributes to. Although while conducting research I kept in mind the SDGs I found my work most suited to, for one of the articles (the last one in the list) I was pretty surprised to see how SDG-compliant it is, being mapped to 11 SDGs (incl. SDG-2, SDG-3, SDG-6, SDG-7, SDG-9, SDG-11, SDG-13, SDG-14, SDG-15).

So, back to those studies that the United Nations University recommends…

A multi-perspective knowledge-driven approach for analysis of the demand side of the Open Government Data portal, which proposes a multi-perspective approach where an OGD portal is analyzed from (1) the citizens’ perspective, (2) the users’ perspective, (3) the experts’ perspective, and (4) the state of the art. By considering these perspectives, we can define how to improve the portal in question by focusing on its demand side. In view of the complexity of the analysis, we look for ways to simplify it by reusing data and knowledge on the subject, thereby proposing a knowledge-driven analysis that supports the very idea underlying OGD – reuse. The Latvian open data portal is used as an example demonstrating how this analysis should be carried out, validating the proposed approach at the same time. We aim to find (1) the level of citizens’ awareness of the portal’s existence and its quality by means of a simple survey, (2) the key challenges that may negatively affect users’ experience, identified in the course of a usability analysis carried out by both users and experts, and (3) to combine these results with those already known from external sources. These data serve as the input, while the output is an assessment of the current situation that allows corrective actions to be defined. Since debates on the Latvian OGD portal, which serves as the use case, appear increasingly frequently, this study also brings significant benefit at the national level.

Transparency of open data ecosystems in smart cities: Definition and assessment of the maturity of transparency in 22 smart cities, which focuses on the transparency maturity of open data ecosystems, seen as key to the development and maintenance of sustainable, citizen-centered, and socially resilient smart cities. This study inspects smart cities’ data portals and assesses their compliance with transparency requirements for open (government) data. The expert assessment of 34 portals representing 22 smart cities against 36 features allowed us to rank them and determine their level of transparency maturity according to four predefined maturity levels – developing, defined, managed, and integrated. In addition, recommendations for identifying and improving the current maturity level and specific features are provided. An open data ecosystem in the smart city context has been conceptualized, and its key components determined. Our definition considers the components of the data-centric and data-driven infrastructure using a systems theory approach. We have defined five predominant types of current open data ecosystems based on prevailing data infrastructure components. The results of this study should contribute to the improvement of current data ecosystems and to building sustainable, transparent, citizen-centered, and socially resilient open data-driven smart cities.
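As a quick illustration of how such a feature-based assessment can be mapped to the four maturity levels, here is a minimal sketch; the binary scoring and the thresholds below are purely illustrative assumptions of mine, not the cut-offs used in the study.

```python
# A minimal sketch: derive a portal's transparency maturity level from expert scores
# over the 36 assessed features. Thresholds are hypothetical, not those of the paper.
LEVELS = ["developing", "defined", "managed", "integrated"]

def maturity_level(feature_scores, max_score_per_feature=1):
    """Map the share of satisfied transparency features to one of four maturity levels."""
    share = sum(feature_scores) / (len(feature_scores) * max_score_per_feature)
    if share < 0.25:
        return LEVELS[0]
    if share < 0.50:
        return LEVELS[1]
    if share < 0.75:
        return LEVELS[2]
    return LEVELS[3]

# Example: a portal satisfying 20 of the 36 binary features
print(maturity_level([1] * 20 + [0] * 16))  # -> 'managed'
```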

Smarter open government data for society 5.0: Are your open data smart enough?, in which we note that the open (government) data initiative, as well as users’ intent for open (government) data, is changing continuously: today, in line with IoT and smart city trends, real-time and sensor-generated data are of higher interest to users, being considered one of the crucial drivers of a sustainable economy that might have an impact on ICT innovation and become a creativity bridge in developing a new ecosystem for Industry 4.0 and Society 5.0. The paper therefore examines 51 OGD portals for the presence of the relevant data and their suitability for further reuse, analyzing their machine-readability, currency or frequency of updates, the ability to submit a request/comment/complaint/suggestion and its visibility to other users, and the ability to see how the value of these data has been assessed by others, i.e., rating, reuse, comments, etc. – an analysis usually considered very time-consuming and complex, and therefore rarely conducted. The analysis leads to the conclusion that although many OGD portals and data publishers are working hard to make open data a useful tool on the way towards Industry 4.0 and Society 5.0, many portals do not even respect basic open data principles such as machine-readability. Moreover, according to the lists of the most competitive countries by topic, there are no leaders that provide their users with excellent data and service, so there is room for improvement for all portals. The paper shows that open data that are published and updated in time, provided in machine-readable formats, and accompanied by support for their users attract audience interest and are used to develop solutions that benefit the entire society (as is the case in France, Spain, Cyprus, the Netherlands, Taiwan, Austria, Switzerland, etc.). Thus, open data should be published not only because it is a modern trend, but also because it incentivizes scientists, researchers and enthusiasts to reuse the data, transforming it into knowledge and value, providing solutions, improving the world, and moving towards Society 5.0, the super smart society.
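To make the assessment criteria a bit more tangible, here is a minimal sketch that checks a single dataset’s metadata record for two of the properties examined – machine-readability of the distribution format and currency of updates; the field names, the set of formats and the 30-day threshold are my own assumptions for the example, not the protocol used in the paper.

```python
from datetime import datetime, timezone

# Hypothetical set of formats treated as machine-readable for this example
MACHINE_READABLE = {"csv", "json", "xml", "rdf", "geojson", "xlsx"}

def assess_dataset(meta, max_age_days=30):
    """Score a dataset metadata record on machine-readability and currency of updates."""
    fmt = str(meta.get("format", "")).lower()
    modified = meta.get("modified")  # ISO 8601 timestamp expected
    age_days = None
    if modified:
        age_days = (datetime.now(timezone.utc) - datetime.fromisoformat(modified)).days
    return {
        "id": meta.get("id"),
        "machine_readable": fmt in MACHINE_READABLE,
        "recently_updated": age_days is not None and age_days <= max_age_days,
    }

# Example: two hypothetical records from a portal's metadata catalogue
datasets = [
    {"id": "air-quality-sensors", "format": "JSON", "modified": "2023-06-20T00:00:00+00:00"},
    {"id": "annual-report", "format": "PDF", "modified": "2021-01-15T00:00:00+00:00"},
]
for d in datasets:
    print(assess_dataset(d))
```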

Definition and evaluation of data quality: User-oriented data object-driven approach to data quality assessment proposes a data object-driven approach to data quality evaluation. This user-oriented solution is based on 3 main components: the data object, the data quality specification, and the process of data quality measuring. These components are defined by 3 graphical DSLs that are simple enough even for non-IT experts. The approach ensures data quality analysis depending on the use case and allows analysing the quality of “third-party” data. The proposed solution is applied to open data sets. The results of this approbation demonstrated that open data have numerous data quality issues. Common data quality problems were also highlighted, detected not only in Latvian open data but also in the open data of 3 European countries – Estonia, Norway, and the United Kingdom. For instance, none of the very simple, intuitive and even obvious use cases in which the values of the primary parameters were analysed was fully satisfied by any company register. However, the Estonian and Norwegian registers can be used to identify any company by its name and registration number, since only they passed the quality checks of the relevant fields.
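To give a flavour of the data object-driven idea in code, here is a minimal sketch in which the data object, the data quality specification and the measuring process are expressed programmatically; in the approach itself these are defined via graphical DSLs, and the fields and conditions below (e.g. the registration number pattern) are illustrative assumptions rather than the actual checks applied to the company registers.

```python
import re

# Data object: the fields of a company register record relevant to a given use case
DATA_OBJECT = ["name", "registration_number"]

# Data quality specification: one (hypothetical) condition per field of the data object
DQ_SPEC = {
    "name": lambda v: isinstance(v, str) and v.strip() != "",
    "registration_number": lambda v: isinstance(v, str) and re.fullmatch(r"\d{11}", v) is not None,
}

def measure(records):
    """Data quality measuring process: apply the specification to every record
    and count violations per field of the data object."""
    issues = {f: 0 for f in DATA_OBJECT}
    for record in records:
        for f in DATA_OBJECT:
            if not DQ_SPEC[f](record.get(f)):
                issues[f] += 1
    return issues

# Example run on two made-up company register records
sample = [
    {"name": "SIA Example", "registration_number": "40003123456"},
    {"name": "", "registration_number": "N/A"},
]
print(measure(sample))  # -> {'name': 1, 'registration_number': 1}
```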

Open Data Hackathon as a Tool for Increased Engagement of Generation Z: To Hack or Not to Hack? examines the role of open data hackathons – known as a form of civic innovation in which participants representing citizens can point out existing problems or social needs and propose a solution – in the OGD initiative. Given the high social, technical, and economic potential of open government data (OGD), the concept of open data hackathons is becoming popular around the world. It has also become popular in Latvia, with annual hackathons organised for a specific cluster of citizens – Generation Z. Contrary to the general opinion, the organizer suggests that the main goal of open data hackathons – to raise awareness of OGD – has been achieved, and there has been a debate about whether they need to be continued. This study presents the latest findings on the role of open data hackathons and the benefits that they can bring to society, participants, and government alike. First, a systematic literature review is carried out to establish a knowledge base. Then, an empirical study of 4 open data hackathons for Generation Z participants, held between 2018 and 2021 in Latvia, is conducted to understand which ideas dominated and what the main results of these events were for the OGD initiative. It demonstrates that, despite the widespread belief that young people are indifferent to current societal and natural problems, the ideas developed correspond to the current situation and are aimed at solving real problems, revealing aspects for improvement in data provision, infrastructure, culture, and government-related areas.

More to come, so let’s keep track of updates to this repository! Do not forget to also check other works in the repository, as well as more of my work, which you can find here.

Keynote at the 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023)

On June 28 I had the honor to participate in the opening of CARMA2023 – the 5th International Conference on Advanced Research Methods and Analytics “Internet and Big Data in Economics and Social Sciences” – delivering my keynote “Public data ecosystems in and for smart cities: how to make open / Big / smart / geo data ecosystems value-adding for SDG-compliant Smart Living and Society 5.0?” in the spectacular city of Sevilla, Spain 🇪🇸 🇪🇸 🇪🇸. What an honor to open the conference, immediately after the inaugural speech by the organizers and sponsors, including representatives of the Joint Research Center, European Commission (JRC), who even mentioned the topics I covered in my keynote (not limited to them, of course) as those that make this conference an event to attend and to learn from!!!

In this talk, as the title suggests, I:

  • elaborated on the concepts of public / open data (incl. OGD), smart city and SDG, and how they are related;
  • introduced the concept of Society 5.0 and how it relates to open data;
  • and finally, and most importantly, the public / open data ecosystem – what is it? what does it consist of?

I then dived into (1) data-related aspects of the public data ecosystem, i.e. what are the data-related prerequisites for a sustainable and resilient data ecosystem? (2) data portals / platforms as entry points – how to make them sufficiently attractive for the target audience? (3) stakeholder engagement – how to involve the target audience? what are the benefits of their involvement? and some more things.

The public data ecosystem part was built around our “Transparency of open data ecosystems in smart cities: Definition and assessment of the maturity of transparency in 22 smart cities“, with some references to other studies such as “Transparency-by-design: What is the role of open data portals?”, “Timeliness of Open Data in Open Government Data Portals Through Pandemic-related Data: A long data way from the publisher to the user“, and “Open government data portal usability: A user-centred usability analysis of 41 open government data portals“, which were previously noticed by the Living Library, which recommends studies it sees as the “signal in the noise”, and by the Open Data Institute.

For the data part, apart from the almost “classical” things, I referred to the topic of “high-value datasets” and dived into the taxonomy we presented in “Towards High-Value Datasets determination for data-driven development: a systematic literature review” (also recommended by the Living Library as the “signal in the noise”), enriched by the results of my earlier study “Towards enrichment of the open government data: a stakeholder-centered determination of High-Value Data sets for Latvia” as well as the results of two international workshops we organized.

The part on public / open data, smart cities, SDGs and Society 5.0 and how they are interrelated was, in turn, based on our Chapter “The Role of Open Data in Transforming the Society to Society 5.0: A Resource or a Tool for SDG-Compliant Smart Living?”, which was called “a groundbreaking research” by FIT Academy.

And for the engagement part, it was mostly about workshops, datathons, hackathons and data competitions, as well as co-creation – how a co-creation ecosystem emerges, what the prerequisites for it are, etc. – incl. references to “Open data hackathon as a tool for increased engagement of Generation Z: to hack or not to hack?” and “The Role of Open Government Data and Co-creation in Crisis Management: Initial Conceptual Propositions from the COVID-19 Pandemic”.

CARMA is a forum for researchers and practitioners to exchange ideas and advances on how emerging research methods and sources are applied to different fields of social sciences, as well as to discuss current and future challenges. Its main focus is on topics such as:

  • Internet and Big Data sources in economics and social sciences, including Social media and public opinion mining, Web scraping, Google Trends and Search Engine data, Geospatial and mobile phone data, Open data and public data;
  • Big Data methods in economics and social sciences, such as Sentiment analysis, Internet econometrics, AI and Machine learning applications, Statistical learning, Information quality and assessment, Crowdsourcing, Natural Language processing, Explainability and interpretability;
  • applications of the above, including but not limited to Politics and social media, Sustainability and development, Finance applications, Official statistics, Forecasting and nowcasting, Bibliometrics and scientometrics, Social and consumer behaviour, Mobility patterns, eWOM and social media marketing, Labor market, Business analytics with social media, Advances in travel, tourism and leisure, Digital management, Marketing Intelligence analytics, Data governance, and Digital transition and global society, which, in turn, expects contributions in relation to Privacy and legal aspects, Electronic Government, Data Economy, Smart Cities, and Industry adoption.

In addition to the regular sessions, a poster session and two keynotes, a special JRC (EC) session took place, during which Luca Barbaglia, Nestor Duch Brown, Matteo Sostero and Paolo Canfora presented the projects they are working on.

Great thanks go to the organizers and sponsors of CARMA2023 – Universidad de Sevilla, Cátedra Metropol Parasol, Cátedra Digitalización Empresarial, IBM, Universitat Politècnica de València, Joint Research Center – European Commission, and Coca-Cola – who made this event a true success. Enjoyed this experience very much! Excellent venue! Great audience! ¡Muchas gracias!

Towards data quality by design – ISO/IEC 25012-based methodology for managing DQ requirements in the development of IS – one of the most downloaded articles of DKE as of July 2023

It is obvious that users should trust the data that are managed by the software applications constituting Information Systems (IS). This means that organizations should ensure an appropriate level of quality of the data they manage in their IS. Therefore, the requirement for an adequate level of quality of the data to be managed by IS must be an essential requirement for every organization. Many advances have been made in recent years in software quality management, both at the process and product level. This is also supported by the fact that a number of global standards have been developed and adopted, addressing specific issues: quality models (ISO 25000, ISO 9126), process maturity models (ISO 15504, CMMI), and standards focused mainly on software verification and validation (ISO 12207, IEEE 1028, etc.). These standards have been used worldwide for over 15 years.

However, awareness of software quality depends on other variables, such as the quality of the information and data managed by the application. This is recognized by the SQuaRE standards (ISO/IEC 25000), which highlight the need to deal with data quality as part of the assessment of the quality level of a software product, according to which “the target computer system also includes computer hardware, non-target software products, non-target data, and the target data, which is the subject of the data quality model”. This means that organizations should take data quality concerns into account when developing software, as data is a key factor. To this end, we stress that such data quality concerns should be considered at the initial stages of software development, following the “data quality by design” principle (with reference to “quality by design”, which is considered relatively often, while significantly less attention (if any) is paid to “data quality” as a subset of the “quality” concept when referring to data / information artifacts).

The “data quality” concept is considered to be multidimensional and largely context-dependent. For this reason, the management of specific requirements is a difficult task. Thus, the main objective of our new paper, titled “ISO/IEC 25012-based methodology for managing data quality requirements in the development of information systems: Towards data quality by design”, is to present a methodology for Project Management of Data Quality Requirements Specification, called DAQUAVORD, aimed at eliciting DQ requirements arising from different users’ viewpoints. These specific requirements should then serve as typical requirements, both functional and non-functional, during the development of IS that take data quality into account by default, leading to smarter and collaborative development.

In a bit more detail, we introduce the concept of a Data Quality Software Requirement as a means to implement a Data Quality Requirement in an application. A Data Quality Software Requirement is described as a software requirement aimed at satisfying a Data Quality Requirement. The justification for this concept lies in the fact that we want to capture the Data Quality Requirements that best match the data used by a user in each usage scenario and, later, derive the consequent Data Quality Software Requirements that will complement the normal software requirements linked to each of those scenarios. Addressing multiple Data Quality Software Requirements is indisputably a complex process, taking into account the existence of strong dependencies, such as internal constraints and interaction with external systems, and the diversity of users. As a result, they tend to impact and show the consequences of contradictory overlaps on both process and data models.
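As a minimal sketch of how such viewpoint-specific requirements could be represented, consider the structures below; the viewpoint, scenario and ISO/IEC 25012 characteristic used in the example are hypothetical illustrations and are not taken from the paper, which defines these artifacts methodologically rather than in code.

```python
from dataclasses import dataclass

@dataclass
class DataQualityRequirement:
    """A data quality requirement expressed from one viewpoint (in the VORD sense)."""
    viewpoint: str   # role / actor expressing the requirement
    scenario: str    # usage scenario the requirement applies to
    data_item: str   # data managed by the IS that the requirement constrains
    dimension: str   # ISO/IEC 25012 characteristic, e.g. accuracy, completeness
    condition: str   # condition to be satisfied

@dataclass
class DataQualitySoftwareRequirement:
    """A software requirement derived to satisfy a data quality requirement."""
    dq_requirement: DataQualityRequirement
    description: str

# Hypothetical example: a clerk's viewpoint on customer registration data
dqr = DataQualityRequirement(
    viewpoint="Registration clerk",
    scenario="Register a new customer",
    data_item="customer.email",
    dimension="accuracy",
    condition="E-mail address must follow a valid syntax and be unique",
)
dqsr = DataQualitySoftwareRequirement(
    dq_requirement=dqr,
    description="The registration form shall validate e-mail syntax and reject duplicates before saving",
)
print(dqsr.description)
```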

Considering this complexity, and attempting to improve development efforts, we introduce DAQUAVORD, a Methodology for Project Management of Data Quality Requirements Specification, which is based on the Viewpoint-Oriented Requirements Definition (VORD) method and the latest and most generally accepted ISO/IEC 25012 standard. It is universal and easily adaptable to different information systems in terms of their nature, the number and variety of actors, and other aspects. The paper presents both the concept of the proposed methodology and an example of its application, which serves as a kind of step-by-step manual on how to use it to achieve smarter software development with data quality by design. This paper is a continuation of our previous study. It establishes the following research questions (RQs):

RQ1: What is the state of the art regarding the “data quality by design” principle in the area of software development? What are the current approaches (if any) to data quality management during the development of IS?

RQ2: How should the concepts of Data Quality Requirements (DQR) and the Viewpoint-Oriented Requirements Definition (VORD) method be defined and implemented in order to promote the “data quality by design” principle?

Sounds interesting? Read the full text of the article published in Elsevier’s Data & Knowledge Engineering – here.

The first comprehensive approach to this problem is presented in this paper, setting out a methodology for project management of the data quality requirements specification. Given the relative nature of the concept of “data quality” and the active discussions on a universal view of data quality dimensions, we have based our proposal on the latest and most generally accepted ISO/IEC 25012 standard, thus seeking to achieve better integration of this methodology with existing documentation and with systems or projects already in place in the organization. We expect that this methodology will help Information System developers to plan and execute a proper elicitation and specification of the specific data quality requirements expressed by the different roles (viewpoints) that interact with the application. It can be seen as a guide that analysts can follow when writing a Requirements Specification Document supplemented with data quality management. The identification and classification of data quality requirements at the initial stage makes it easier for developers to be aware of the quality of the data to be implemented for each function throughout the entire development process of the application.

As for future work, we plan to consider the advantages provided by the Model Driven Architecture (MDA), focusing mainly on its abstraction and modelling capabilities. This will make it much easier to integrate our results into the development of “Data Quality aware Information Systems” (DQ-aware-IS) together with other software development methodologies and tools. This, however, is expected to expand the scope of the developed methodology and consider various features related to data quality, including the development of a conceptual measure of data value, i.e., intrinsic value, as proposed in.

UPDATE: In July 2023 it also became one of the most downloaded articles from Data & Knowledge Engineering (Elsevier) in the last 90 days – haven’t read it yet? Take a look, it is waiting for you 😉

César Guerra-García, Anastasija Nikiforova, Samantha Jiménez, Héctor G. Perez-Gonzalez, Marco Ramírez-Torres, Luis Ontañon-García, ISO/IEC 25012-based methodology for managing data quality requirements in the development of information systems: Towards Data Quality by Design, Data & Knowledge Engineering, 2023, 102152, ISSN 0169-023X, https://doi.org/10.1016/j.datak.2023.102152

14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K): how it was and who got the Best Paper Award?

In this post I would like to briefly elaborate on the truly insightful 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), where I was honored to participate as a speaker, presenting our paper “Putting FAIR principles in the context of research information: FAIRness for CRIS and CRIS for FAIRness” (authors: Otmane Azeroual, Joachim Schopfel, Janne Polonen, and Anastasija Nikiforova), and as a chair of two absolutely amazing sessions, where lively and fruitful discussions took place, which is a real indicator of their success! And spoiler: our paper was recognized as the Best Paper! (i.e., the best paper award goes to… :))

IC3K consists of three sub-conferences, namely the 14th International Conference on Knowledge Discovery and Information Retrieval (KDIR), the 14th International Conference on Knowledge Engineering and Ontology Development (KEOD), and the 14th International Conference on Knowledge Management and Information Systems (KMIS), where the latter is the one to which our paper was accepted and in which it won the Best Paper Award – I know, this is a repetition, but I am glad to have received it, and the euroCRIS community is proud of us too – read more here…!

Briefly about our study, with which we mostly wanted to issue a call for action in the area of CRIS and their FAIRness. Of course, this is all about digitization, which takes place in various domains, including but not limited to the research domain, where it refers to the increasing integration and analysis of research information as part of the research data management process. However, it is not clear whether this research information is actually used and, more importantly, whether this information and data are of sufficient quality, and whether value and knowledge can be extracted from them. The FAIR principles (Findability, Accessibility, Interoperability, Reusability) are considered a promising asset to achieve this. Since their publication (by one of the colleagues I work with in the European Open Science Cloud), they have rapidly proliferated and have become part of both national and international research funding programs. A special feature of the FAIR principles is the emphasis on the legibility, readability, and understandability of data. At the same time, they set a prerequisite for the reliability, trustworthiness, and quality of data. In this sense, the importance of applying FAIR principles to research information and the respective systems, such as Current Research Information Systems (CRIS, also known as RIS, RIMS), which is an underrepresented subject in research, is the subject of our study. What should be kept in mind is that research information is not just research data, and research information management systems such as CRIS are not just repositories for research data. They are much more complex, alive, dynamic, interactive and multi-stakeholder objects. However, in the real world they are not directly subject to the FAIR research data management guiding principles. Thus, supporting the call for a ”one-stop-shop and register-once use-many” approach, we argue that CRIS is a key component of the research infrastructure landscape / ecosystem, directly targeted and enabled by the operational application and promotion of FAIR principles. We hypothesize that the improvement of FAIRness is a bidirectional process, where CRIS promotes the FAIRness of data and infrastructures, and FAIR principles push further improvements to the underlying CRIS. All in all, the three propositions that we elaborate on in our paper, and invite everyone representing this domain to think about, are:

1. research information management systems (CRIS) are helpful to assess the FAIRness of research data and data repositories;

2. research information management systems (CRIS) contribute to the FAIRness of other research infrastructure;

3. research information management systems (CRIS) can be improved through the application of the FAIR principles.

Here, we have raised a discussion on this topic, showing that the improvement of FAIRness is a dual or bidirectional process, where CRIS promotes and contributes to the FAIRness of data and infrastructures, and FAIR principles push for further improvement in the underlying CRIS data model and format, positively affecting the sustainability of these systems and the underlying artifacts. CRIS are beneficial for FAIR, and FAIR is beneficial for CRIS. Nevertheless, as pointed out by Tatum and Brown (2018), the impact of CRIS on FAIRness is mainly focused on (1) findability (the “F” in FAIR) through the use of persistent identifiers and (2) interoperability (the “I” in FAIR) through standard metadata, while the impact on the other two principles, namely accessibility and reusability (the “A” and “R” in FAIR), seems to be more indirect, related to and conditioned by metadata on licensing and access. Paraphrasing the statement that “FAIRness is necessary, but not sufficient for ‘open’” (Tatum and Brown, 2018), our conclusion is that “CRIS are necessary but not sufficient for FAIRness”.

This study differs significantly from what I typically talk about, but I was glad to contribute to it, thereby sharing the experience I have gained in the European Open Science Cloud (EOSC) and the respective Task Force I am involved in – “FAIR metrics and data quality”. It also allowed me to provide some insights on what we are dealing with within this domain and how our activities contribute to the currently limited body of knowledge on this topic.

A bit about the sessions I chaired and the topics raised within them, which were very diverse but equally relevant and interesting. I was kindly invited to chair two sessions, namely “Big Data and Analytics” and “Knowledge Management Strategies and Implementations”, where papers on the following topics were presented:

  • Decision Support for Production Control based on Machine Learning by Simulation-generated Data (Konstantin Muehlbauer, Lukas Rissmann, Sebastian Meissner, Landshut University of Applied Sciences, Germany);
  • Exploring the Test Driven Development of a Fraud Detection Application using the Google Cloud Platform (Daniel Staegemann, Matthias Volk, Maneendra Perera, Klaus Turowski, Otto-von-Guericke University Magdeburg, Germany) – this paper was also recognized as the best student paper;
  • Decision Making with Clustered Majority Judgment (Emanuele D’ajello, Davide Formica, Elio Masciari, Gaia Mattia, Arianna Anniciello, Cristina Moscariello, Stefano Quintarelli, Davide Zaccarella, University of Napoli Federico II, Copernicani, Milano, Italy);
  • Virtual Reality (VR) Technology Integration in the Training Environment Leads to Behaviour Change (Amy Rosellini, University of North Texas, USA);
  • Innovation in Boutique Hotels in Valletta, Malta: A Multi-level Investigation (Kristina, University of Malta, Malta).

And, of course, as is the case for each and every conference, the keynotes and panels are those that gather the highest number of attendees, which is obvious, considering the topics they elaborate on, raise and discuss. IC3K was not an exception, and the conference started with a very insightful panel discussion on Current Data Security Regulations and whether they Serve or rather Restrict the Application of the Tools and Techniques of AI. Each of the three speakers, namely Catholijn Jonker, Bart Verheijen, and Giancarlo Guizzardi, presented their views considering the domain they represent. As a result, the views were very different, but each of them left you with an “I cannot agree more” feeling!

One of the panelists, Catholijn Jonker (TU Delft), then delivered an absolutely exceptional keynote speech on Self-Reflective Hybrid Intelligence: Combining Human with Artificial Intelligence and Logic. I enjoyed not only the content, but also the style, where the propositions were critically elaborated on, pointing out that they are not intended to serve as a silver bullet, and that their scope, as well as side effects, should be determined and considered. A truly insightful and, I would say, inspiring talk.

All in all, thank you, organizers – INSTICC (Institute for Systems and Technologies of Information, Control and Communication), for bringing us together!

Europe Biobank Week 2021

This is a short note about Europe Biobank Week 2021, which took place online this year on 8-10 November. EBW is jointly organized by ESBB (European, Middle Eastern & African Society for Biopreservation and Biobanking) and BBMRI-ERIC, with this year’s theme “Biobanking for our Future – Opportunities Unlocked”. The programme was full of very different events and opportunities, including a rich programme of live presentations from high-level experts and a collection of selected posters grouped into 14 main topics, in two of which I was honored to be represented as part of the Latvian Biomedical Research & Study centre team, where I work as an IT expert.

One of them, authored by me, was presented within the “Novel IT solutions, effective data storage, processing and analysis” section. This poster, titled “Towards efficient data management of biobank, health register and research data: the use-case of BBMRI-ERIC Latvian National Node” (authors: Anastasija Nikiforova, Vita Rovīte, Laura Ansone), was devoted to the ongoing project funded by the European Union under Horizon 2020 – INTEGROMED – Integration of knowledge and biobank resources in a comprehensive translational approach for personalized prevention and treatment of metabolic disorders. It presented some preliminary results of my activities on inspecting and improving the ecosystem of the Latvian Biomedical Research and Study centre, which were then summarized and transformed into a set of guidelines towards efficient data management for heterogeneous data holders and exchangers.

European Biobank Week 2021, poster
“Towards efficient data management of biobank, health register and research data: the use-case of BBMRI-ERIC Latvian National Node” (authors: A. Nikiforova, V. Rovīte, L. Ansone)

Another poster, titled “Development of a dynamic informed consent system for Latvian national biobank and citizen science data management, quality control and integration” (authors: Kante N., Nikiforova A., Kalēja J., Svandere A., Mezinska S., Rovīte V.), was presented under the “Population-based cohorts – addressing global challenges for future generations” section and was dedicated to another project, funded by the European Regional Development Fund (ERDF) – “DECIDE – Development of a dynamic informed consent system for biobank and citizen science data management, quality control and integration“.

“Development of a dynamic informed consent system for Latvian national biobank and citizen science data management, quality control and integration” (authors: Kante N., Nikiforova A., Kalēja J., Svandere A., Mezinska S., Rovīte V.)

This was another very nice experience!