CyberCommando’s meetup and my talk on Internet of Things Search Engines and their role in detecting vulnerable open data sources

October is Cybersecurity Awareness Month, as part of which the CyberCommando’s meetup 2023 took place in the very heart of Latvia – Riga – where I was invited to deliver a talk devoted to IoTSE, entitled “What do Internet of Things Search Engines know about you? or IoTSE as a vulnerable open data sources detection tool“.

CyberCommando’s meetup organizers describe it as the most anticipated vendor-independent industry event in the realm of cybersecurity: a conference designed to empower local and regional IT security professionals facing the evolving challenges of the digital age by bringing together high-level ICT professionals from local, regional, and international businesses, governments and government agencies, tech communities, and the financial, public and critical infrastructure sectors. The meetup covered a broad set of topics, from the development of ICT security skills and awareness raising, to modern market developments and numerous technological solutions in Cloud, Data, Mobility, Network, Application, Endpoint, Identity & Access, and SecOps, to corporate and government strategies and the future of the sector. It featured three parallel sessions and numerous talks delivered by 20+ local and international experts, including but not limited to IT-Harvest, Radware, DeepInstinct, Pentera, ForeScout Technologies, CERT.LV and ESET. It is a great honor to complement this list with the University of Tartu, which I represented by delivering my talk on the main stage 🙂

Now to my talk – “What do Internet of Things Search Engines know about you? or IoTSE as a vulnerable open data sources detection tool“. As it turned out, very few attendees knew of or had used OSINT (Open Source INTelligence) or Internet of Things Search Engines (IoTSE) (although perhaps they were just too shy to raise their hands when I asked), so, hopefully, this was a good choice of topic. So, what was it about?

Today, there are billions of interconnected devices that form Cyber-Physical Systems (CPS), Internet of Things (IoT) and Industrial Internet of Things (IIoT) ecosystems. As the number of devices and systems in use and the volume and value of data increase, the risks of security breaches increase as well.

As I discussed previously, this “has become even more relevant in terms of the COVID-19 pandemic, which, in addition to affecting the health, lives, and lifestyle of billions of citizens globally and making life even more digitized, has had a significant impact on business [3]. This is especially the case because of the challenges companies have faced in maintaining business continuity in this so-called “new normal”. However, in addition to those cybersecurity threats that are caused by changes directly related to the pandemic and its consequences, many previously known threats have become even more desirable targets for intruders and hackers. Every year millions of personal records become available online [4-6]. Lallie et al. [3] have compiled statistics on the current state of the cybersecurity horizon during the pandemic, which clearly indicate a significant increase in such incidents. As an example, Shi [7] reported a 600% increase in phishing attacks in March 2020, just a few months after the start of the pandemic, when some countries were not yet even affected. Miles [8], in turn, reported that 2021 saw a record-breaking number of data compromises, where “the number of data compromises was up more than 68% when compared to 2020”, with LinkedIn being the most exploited brand in phishing attacks, followed by DHL, Google, Microsoft, FedEx, WhatsApp, Amazon, Maersk, AliExpress and Apple.”

And while Risk Based Security & Flashpoint (2021) [5] suggest that the vulnerability landscape is returning to normal, partly due to various activities such as INTERPOL’s #WashYourCyberHands campaign and “vaccinate your organization” movements, another trigger closely related to cybersecurity that is now affecting the world is geopolitical upheaval. Additionally, according to Cybersecurity Ventures, by 2025 cybercrime will cost the world economy around $10.5 trillion annually, up from $3 trillion in 2015. Moreover, we are at risk of what is called a Cyber Apocalypse or Cyber Armageddon, as was discussed during the World Economic Forum (and according to Forbes), which is considered quite likely to happen in the coming two years (hopefully, it will not).

According to Forbes, the key drivers for this are the ongoing digitization of society, behavioral changes due to the COVID-19 pandemic, political instability such as wars, and the global economic downturn, while the WEF relates it to the fact that technology is becoming more complex, in particular breakthrough technologies such as AI (considering the current state of the art, I would also stress the role of quantum computing here). I would add that this “complexity” is two-fold: technologies become more advanced, while at the same time easier to use, including those that can be used to detect and expose vulnerabilities. And although society is being digitized, it tends to lack digital literacy, data literacy and security literacy.

Hence, when we ask what should be done to tackle the associated issues, the answer is also multi-fold. Some recommendations being actively discussed, including by Forbes and Accenture, are to “secure the core”, which involves ensuring that security and resilience are built into every aspect of the organization; understanding that cybersecurity is not something discussed only within the IT department but rather at all levels of the organization; addressing the skills shortage within the cybersecurity domain; and utilizing automation where possible.

To put it simply:

  • (cyber)security governance
  • digital literacy
  • cybersecurity is not a one-time event, but a continuous process
  • automation whenever possible
  • «security first!» as a principle for all artifacts, processes and ecosystem
  • preferably – «security-by-design» and «absolute security», which, of course, is rather a utopia, but still something we have to try to achieve (despite the fact that we know it is impossible to reach this level).

Or even simpler, as I typically say – “security to every home!”.

In the light of the above, i.e., “security first!” as a principle for all artifacts and the need to “secure the core” – are our data management systems always protected by default (i.e., secure-by-design)? While it may sound surprising and weird in 2023, the fact is that while various security protection mechanisms have been widely implemented, such a “primitive” artifact as the data management system seems to have been neglected, and the number of unprotected or insufficiently protected data sources is enormous. Recent research demonstrated that weak data and, in particular, database protection is one of the key security threats [4,6,9-11]. According to a list drawn up by Bekker [5] and Identity Force of major security breaches in 2020, a large number of data leaks occur due to unsecured databases. As an example:

  • Estee Lauder – 440 million customer records
  • Prestige Software hotel reservation platform – over 10 million hotel guests, including Expedia, Hotels.com, Booking.com, Agoda etc.
  • A U.K.-based security firm gained access to data of Adobe, Twitter, Tumblr, LinkedIn etc. and their users, with a total of over 5 billion records
  • Marijuana Dispensaries – 85 000 medical patient and recreational user records

to name just a few… Sometimes this is due to (mis)configuration, sometimes due to vulnerabilities in the products or services themselves, for which additional security mechanisms would be required. Sometimes, of course, it is due to very targeted attacks, for which the remainder of this post will be of limited value; let’s rather focus on those very critical cases referred to above, especially given the above-mentioned fact that recent advances in ICT have decreased the complexity of searching for connected devices on the Internet and have made access to them easy even for novices, thanks to the widespread popularity of step-by-step guides on how to use IoTSE – aka Internet of Everything (IoE) or Open Source Intelligence (OSINT) search engines such as Shodan, BinaryEdge, Censys, ZoomEye, Hunter, Greynoise and IoTCrawler – to find and gain access to insufficiently protected webcams, routers, databases, refrigerators, power plants, and even wind turbines. As a result, OSINT was recognized as one of the five major categories of CTI (Cyber Threat Intelligence) sources (at times more than five categories are named, but OSINT remains part of the list regardless), along with Human Intelligence (HUMINT), Counter Intelligence, Internal Intelligence and Finished Intelligence (FINTEL).
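To give a feel for how low the entry barrier really is, here is a minimal, hedged Python sketch using the official shodan client; the environment variable, the query and the printed fields are illustrative assumptions, and an account with sufficient query credits is needed.

```python
# Minimal sketch: discovering Internet-facing MongoDB banners via Shodan.
# Assumptions: the official "shodan" package is installed and SHODAN_API_KEY
# (hypothetical variable name) holds a key with search credits.
import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

try:
    # Ask Shodan for services it has fingerprinted as MongoDB.
    results = api.search("product:MongoDB", limit=10)
    print(f"Total indexed matches: {results['total']}")
    for match in results["matches"]:
        # Each match describes one banner Shodan collected.
        print(match["ip_str"], match["port"], match.get("org", "n/a"))
except shodan.APIError as exc:
    print(f"Shodan API error: {exc}")
```

The point is not the ten lines of code but the fact that no scanning of your own is required – the engine has already indexed the exposed services.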

While these tools may represent a security risk, they also provide many positive and security-enhancing opportunities. They provide an overview of network security, i.e., of the devices connected to the Internet within the company; they are useful for market research and adapting business strategies; they allow tracking the growing number of smart devices representing the IoT world and tracking ransomware – the number and nature of devices affected by it – and therefore allow determining the appropriate actions to protect yourself in the light of current trends. However, almost every one of these white-hat-oriented objectives can also be exploited by black hats.

In this talk I raised several questions that can be at least partly answered with the help of IoTSE, such as:

  • Is the data source visible and even accessible outside the organization?
  • What data can be gathered from it, and what is their “value” for external actors such as attackers and fraudsters? I.e., can these data pose a threat to the organization by being used to deploy an attack?
  • Are stronger security mechanisms needed? Is the vulnerability related to internal (mis)configuration or to the database in use?

To answer the above questions, I referred to a study conducted some time ago by me and my former student Artjoms Daškevičs (a very talented student, whose bachelor thesis was even nominated for the best Computer Science thesis in Latvia). As part of that study, an Internet of Things Search Engine- (IoTSE-) based tool called ShoBEVODSDT (Shodan- and Binary Edge-based Vulnerable Open Data Sources Detection Tool) was developed. This “toy example” of IoTSE conducts a passive assessment – it does not harm the databases but rather checks for potentially existing bottlenecks or weaknesses which, if an attack were to take place, could be exposed. It allows either a comprehensive analysis of all unprotected data sources falling into a list of predefined data sources – MySQL, PostgreSQL, MongoDB, Redis, Elasticsearch, CouchDB, Cassandra and Memcached – or defining an IP range to examine what can be seen about the data source from outside the organization (read more in (Daskevics and Nikiforova, 2021)).
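To illustrate what such a passive check boils down to, here is a minimal sketch (this is not the actual ShoBEVODSDT code, just an assumption-laden illustration using pymongo, with a placeholder host and port): the probe simply asks whether the server answers without credentials and what it is willing to reveal.

```python
# Minimal sketch of a passive, read-only MongoDB probe (not ShoBEVODSDT itself).
# Assumptions: "pymongo" is installed; HOST/PORT are placeholders for a host
# you are authorized to test.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

HOST, PORT = "203.0.113.10", 27017  # placeholder address (TEST-NET-3)

client = MongoClient(HOST, PORT, serverSelectionTimeoutMS=3000)
try:
    # If authentication is enforced, this call fails; if not, the instance
    # happily lists its databases to an anonymous client.
    names = client.list_database_names()
    print("Unauthenticated access possible, databases:", names)
except OperationFailure:
    print("Server reachable, but authentication/authorization is required.")
except ServerSelectionTimeoutError:
    print("Server not reachable from the outside (or filtered).")
```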

The remainder was mostly built around four questions (and articles / book chapters) that we addressed with its help, namely:

  • Which data sources have proven to be the most vulnerable and visible outside the organization?
  • What data can be gathered from open data sources (if any), and what is their “value” for external actors such as attackers and fraudsters? Can these data pose a threat to the organization by being used to deploy an attack?

This part was built around our conference paper and this book chapter. In short (for a slightly longer answer refer to the article), the number of data sources accessible outside the organization is less than 2% (more than 98% of data sources are not accessible via a simple IoTSE tool). However, there are some data sources that may pose risks to organizations, and 12% of open data sources – i.e., data sources the IoTSE tool was able to reach – were already compromised or contain data that can be used to compromise them. Elasticsearch and Memcached had the highest ratio of instances to which it was possible to connect, while MongoDB, PostgreSQL and Elasticsearch demonstrated the most negative trend in terms of already compromised databases (not compromised by us, of course).

In addition, we might be interested in comparing SQL and NoSQL databases, where the latter are less likely to provide security measures, including sometimes very primitive and simple ones such as authentication, authorization (Sahafizadeh et al., 2015) and data encryption. This is what we explored in the book chapter. We were not able to find significant differences. From the “most secure” service viewpoint, CouchDB demonstrated very good results among the NoSQL databases, and MySQL among the relational ones. However, if the developer needs to use Redis or Memcached, additional security mechanisms and/or activities should be introduced to protect them. It must be understood, however, that these results cannot be broadly generalized to the security of open data storage facilities as such; they rather demonstrate how many data storage holders were concerned about the security of their facilities, since many of these facilities offer a series of built-in mechanisms that could have been applied. On the “most insecure” side, Elasticsearch is characterized by weaker and less frequently used security protection mechanisms, which means its holders should be wary when using it. A similar conclusion can be drawn for Memcached (although this contradicts CVE Details), for which the total number of vulnerabilities found was the highest. However, the risk of these vulnerabilities was lower compared to Elasticsearch, so it can be assumed that CVE Details either does not consider such “low-level” weaknesses or has not yet identified them. In the future, an in-depth analysis of what CVE Details counts as a vulnerability, and further exploration of the correlation with our results, could be carried out.
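Since Redis and Memcached surfaced as the services most in need of extra protection, a quick self-check along the lines of the following hedged sketch (using the redis-py client; host and port are placeholders, and this is an illustration rather than the study’s tooling) tells you whether an instance answers unauthenticated commands at all:

```python
# Minimal sketch: does a Redis instance accept unauthenticated commands?
# Assumption: the "redis" (redis-py) package is installed; HOST is a placeholder
# for an instance you own or are authorized to test.
import redis

HOST, PORT = "203.0.113.20", 6379  # placeholder address

r = redis.Redis(host=HOST, port=PORT, socket_timeout=3)
try:
    r.ping()  # succeeds only if no AUTH is required
    info = r.info("server")
    print("Unauthenticated access possible, Redis version:", info.get("redis_version"))
except redis.exceptions.AuthenticationError:
    print("AUTH required - requirepass/ACLs are in place.")
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
    print("Instance not reachable from the outside (bind/firewall in place).")
```

If the first branch fires, fixing it is usually a matter of configuration (binding to internal interfaces, enabling authentication) rather than of any advanced tooling.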

The next question we were interested in was:

  • Which Baltic country – Latvia, Lithuania or Estonia – has the most open and vulnerable data sources? And is the technological development of Estonia visible here as well?

This question was raised and partially answered in another conference paper. It is impossible to give an unambiguous answer here: while Latvia showed the highest ratio of successful connections (and Estonia the lowest), Lithuania showed the most negative result in terms of already compromised data sources, and Estonia – in terms of sensitive and non-sensitive data. Estonia, however, had the largest number of data sources from which data could not be obtained (with Latvia having a slightly lower but still relatively good result in this regard). Based on the average value of the data that could be obtained from these data sources, Lithuania again demonstrated the most negative result, which, however, was only slightly different from the results demonstrated by Estonia and Latvia (which may be a statistical error, since the total number of data sources found by our tool differed significantly between these countries). When examining the specific data sources most likely causing the lower results, they vary from one country to another, so it is impossible to single out one most insecure database that is the root of all problems.

And one more question I raised was:

  • Do “traditional” vulnerability registries provide a sufficiently comprehensive view of DBMS security, or should DBMSs be subject to intensive and dynamic inspection by their owners?

This was covered in the book chapter, which provides a comparative analysis of the results extracted from the CVE database with the results obtained by applying the IoTSE-based tool. Not surprisingly, the results are in most cases complementary, and one source cannot completely replace the other. This is due not only to the scope limitations of both sources: CVE Details covers some databases not covered by ShoBEVODSDT and provides insights on a more diverse set of vulnerabilities, while not providing the most up-to-date information and offering only very limited insight on MySQL. At the same time, there are cases when both sources point to the same security-related issue and its frequency, which can be seen as a trend and should be treated by users accordingly, i.e., by taking action to secure databases that definitely do not comply with the “secure by design” principle. This refers to MongoDB, PostgreSQL and Redis.
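For readers who want to reproduce the registry side of such a comparison, a hedged sketch like the following can pull rough vulnerability counts per DBMS from the public NVD CVE API (used here as a stand-in for CVE Details, which builds on NVD data; the keyword list is an assumption, keyword matching is only a crude proxy for product matching, and unauthenticated requests are heavily rate-limited):

```python
# Minimal sketch: per-DBMS CVE counts from the public NVD CVE API 2.0.
# Assumptions: "requests" is installed; no API key is used, so requests are
# rate-limited; keyword matching is a rough proxy for product matching.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

for product in ["MongoDB", "PostgreSQL", "Redis", "Memcached", "Elasticsearch"]:
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": product, "resultsPerPage": 1},
        timeout=30,
    )
    resp.raise_for_status()
    # totalResults gives the number of CVEs matching the keyword.
    print(product, resp.json().get("totalResults"))
```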

All in all, it can be said that the answers to some of those questions may seem obvious or expected, however, as our research has shown, firstly, not all of them are obvious to everyone (i.e., there are no secure-by-design databases/data sources, so the data source owner has to think about its security), and, secondly, not all of these “obvious” answers are 100% correct.

All in all, both the talk and these studies point to an obvious reality, which, however, is not always visible to the company. While “this may seem surprising in light of current advances, the first step that still needs to be taken when thinking about data security is to make sure that the database uses the basic security features […] Ignorance or non-awareness can have serious consequences leading to data leakages if these vulnerabilities are exploited. Data security and appropriate database configuration is not only about NoSQL, which is typically considered to be much less secure, but also about RDBMS. This study has shown that RDBMS are also relatively susceptible to various types of vulnerabilities. Moreover, there is no “secure by design” database, which is not surprising since absolute security is known to be impossible. However, this does not mean that actions should not be taken to improve it. More precisely, it should be a continuous process consisting of a set of interrelated steps, sometimes referred to as “reveal-prioritize-remediate”. It should be noted that 85% of breaches in 2021 were due to the human factor, with social engineering recognized as the most popular pattern [12]. The reason for this is that even in the case of highly developed and mature data and system protection mechanisms (e.g., IDS), the human factor remains very difficult to control. Therefore, education and training of system users regarding digital literacy, as well as the definition, implementation and maintenance of security policies and a risk management strategy, must complement technical advances.

Or, to put it even simpler, once again: digital literacy “to every home”, cybersecurity is not a one-time event but a continuous process, automation whenever possible, cybersecurity governance, the “security first!” principle for all artifacts, processes and ecosystems, and, preferably, the “security-by-design” principle whenever and wherever possible. Or, as I concluded the talk – “We have got to start locking that door!” (Ross, F.R.I.E.N.D.S) – before we act as Commando.

Big thanks go to the organizers of the event, especially to Andris Soroka, and to the sponsors who supported such a wonderful event – HeadTechnology, ForeScout, LogPoint, DeepInstinct, IT-Harvest, Pentera, GTB Technologies, Stellar Cyber, Appgate, OneSpan, ESET Digital Security, Veriato, Radware, Riseba, Ministry of Defence of Latvia, CERT.LV, Latvijas Sertificēto Personas Datu Aizsardzības Speciālistu Asociācija, Dati Group, Latvijas Kiberpsiholoģijas Asociācija, Optimcom, Vidzeme University of Applied Sciences, Stallion, ITEksperts, Kingston Technology.

P.S. If, considering the topics I typically cover, you are wondering why I am talking about security this time, let me briefly answer. First, for those who know me better, it is a well-known fact that cybersecurity was my first choice in the big IT world – it was, is and probably will remain my passion, although now it is rather a hobby. It was also the central part of my duties in one of my previous workplaces, incl. the one where I worked with the organizer of this event (oh, my first honeypot…). Second, and related to the first point, this was the topic about which one of my professors (during the first or the second year of my studies) told me that I must become a researcher (“yes, sure 😀 😀 😀 you must be kidding” was my thought at that point, but I do not laugh at this “ridiculous joke” anymore, and am rather grateful that I was noticed so early and was then constantly reminded about it by other colleagues, which resulted in the current version of me). Third, the data quality and open data that I talk about a lot are all about the value of data, for which the two main prerequisites are (1) data quality and (2) data security; so, in fact, data security is an inevitable component that we must think and talk about.

References:

📢New paper alert📢 “Predictive Analytics intelligent decision-making framework and testing it through sentiment analysis on Twitter data”, or: what do people think – and what will they think – about ChatGPT?

This paper alert is dedicated to the paper “Predictive Analytics intelligent decision-making framework and testing it through sentiment analysis on Twitter data” (authors: Otmane Azeroual, Radka Nacheva, Anastasija Nikiforova, Uta Störl, Amel Fraisse), which is now publicly available in the ACM Digital Library!

In this paper we present a predictive analytics-driven decision framework based on machine learning and data mining methods and techniques. We then demonstrate it in action by predicting sentiments and emotions in social media posts as a use case, choosing perhaps the trendiest topic – ChatGPT. In other words, we check whether it is eternal love and complete trust or rather 🤬?
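To make the use case concrete, here is a minimal, hedged sentiment-scoring sketch using NLTK’s VADER on a few ChatGPT-related example posts; it only illustrates the kind of step such a pipeline automates, not the models or method used in the paper, and the sample tweets are invented.

```python
# Minimal sketch: scoring sentiment of ChatGPT-related posts with VADER.
# Assumptions: nltk is installed; this illustrates one step of a sentiment
# pipeline, not the paper's method; the sample texts are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

sample_posts = [
    "ChatGPT saved me hours of boilerplate writing today, love it!",
    "ChatGPT confidently gave me a completely wrong answer again...",
]
for post in sample_posts:
    scores = sia.polarity_scores(post)  # neg/neu/pos/compound in [-1, 1]
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05 else "neutral")
    print(label, scores["compound"], post)
```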

Why PA?

Predictive Analytics is seen to be useful in business and the medical/healthcare domain, incl. but not limited to crisis management, where, in addition to health-related crises, Predictive Analytics has proven useful in natural disaster management; in industrial use cases, such as energy, to forecast supply and demand and predict the impact of equipment costs, downtimes/outages etc.; in aerospace, to predict the impact of specific maintenance operations on aircraft reliability, fuel use, and uptime, while the biggest airlines use it to predict travel patterns, set ticket prices and flight schedules, and predict the impact of, e.g., price changes, policy changes, and cancellations. And, of course, in business process management and specifically retail, where Predictive Analytics allows retailers to follow customers in real time, delivering targeted marketing and incentives, forecast inventory requirements, and configure their website (or store) to increase sales. In the business process management area, in turn, Predictive Analytics gives rise to what is called predictive process monitoring (PPM). Uses of Predictive Analytics have also been found in the Smart Cities and Smart Transportation domain, e.g., to support smart transportation services using open data, but also in education, e.g., to predict performance in MOOCs.

This popularity can be easily explained by examining its key strategic objectives, which IBM (Siegel, 2015) has summarized as: (1) competition – to secure the most powerful and unique stronghold of competitiveness, (2) growth – to increase sales and keep customers competitively, (3) enforcement – to maintain business integrity by managing fraud, (4) improvement – to advance core business capacity competitively, (5) satisfaction – to meet rising consumer expectations, (6) learning – to employ today’s most advanced analytics, (7) acting – to render business intelligence and analytics truly actionable. Marketing, sales, fraud detection, call centers and the core businesses of business units, as well as customers and the enterprise as a whole, are expected to gain benefits, which makes PA a “must”.

According to (MicroStrategy, 2020), in 2020, 52% of companies worldwide used predictive analytics to optimize operations as part of a business intelligence platform solution. So far, however, predictive analytics has been used mostly by large companies (65% of companies with $100 million to $500 million in revenue, compared to 46% of companies under $10 million in revenue), with less adoption in medium-sized companies, not to mention small ones.

Based on management theory and Gartner’s Business Intelligence and Performance Management Maturity Model, our framework covers four management levels of business intelligence – (a) Operational, (b) Tactical, (c) Strategic and (d) Pervasive. These are the levels that determine the need to manage data in organizations, transform them into information and turn them into knowledge, which is also the basis for making forecasts. The end result of applying it for business purposes is to generate effective solutions for each of these levels.

Sounds catchy? Read the paper here.

Many thanks to my co-authors – Radka and Otmane, who invited me to contribute to this study, and drove the entire process!

Cite paper as:

O. Azeroual, R. Nacheva, A. Nikiforova, U. Störl, and A. Fraisse. 2023. Predictive Analytics intelligent decision-making framework and testing it through sentiment analysis on Twitter data. In Proceedings of the 24th International Conference on Computer Systems and Technologies (CompSysTech ’23). Association for Computing Machinery, New York, NY, USA, 42–53. https://doi.org/10.1145/3606305.3606309

UT & Swedbank Data Science Seminar “When, Why and How? The Importance of Business Intelligence”

Last week I had the pleasure of taking part in a Data Science Seminar titled “When, Why and How? The Importance of Business Intelligence”. In this seminar, organized by the Institute of Computer Science (University of Tartu) in cooperation with Swedbank, we (me, Mohammad Gharib, Jurgen Koitsalu and Igor Artemtsuk) discussed the importance of BI with some focus on data quality. More precisely, two of the four talks were delivered by representatives of the University of Tartu and were more theoretical in nature – we both decided to focus our talks on data quality (for my talk, however, this was not the main focus this time) – while the other two talks were delivered by representatives of Swedbank, mainly elaborating on BI – what it can give, what it already gives, how it is achieved and much more. These talks were followed by a panel moderated by prof. Marlon Dumas.

In a bit more detail… In my presentation I talked about:

  • “Data warehouse vs. data lake – what are they and what is the difference between them?” – in very few words: structured vs unstructured, static vs dynamic (real-time data), schema-on-write vs schema-on-read, ETL vs ELT (see the small code sketch after this list for the schema-on-write vs schema-on-read contrast), with further elaboration on: What are their goals and purposes? What is their target audience? What are their pros and cons?
  • “Is the data warehouse the only data repository suitable for BI?” – no, (today) data lakes can also be suitable. Even more, both are considered the key to “a single version of the truth”. Although, if descriptive BI is the only purpose, it might still be better to stay within a data warehouse. But if you want to have predictive BI or use your data for ML (or do not have a specific idea of how you want to use the data, but want to be able to explore your data effectively and efficiently), you know that a data warehouse might not be the best option.
  • “So, the data lake will save my resources a lot, because I do not have to worry about how to store/allocate the data – just put it in one storage and voila?!” – no, in this case your data lake will turn into a data swamp! And you are forgetting about the data quality you should (must!) be thinking of!
  • “But how do you prevent the data lake from becoming a data swamp?” – in short and simple terms, proper data governance & metadata management is the answer (but it is not as easy as it sounds – do not forget about your data engineer and be friendly with him, always… literally always :D), and also think about the culture in your organization.
  • “So, the use of a data warehouse is the key to high quality data?” – no, it is not! Having ETL does not guarantee the quality of your data (transform & load is not data quality management). Think about data quality regardless of the repository!
  • “Are data warehouses and data lakes the only options to consider or are we missing something?” – we are indeed missing one! The data lakehouse!
  • “If a data lakehouse is a combination of the benefits of a data warehouse and a data lake, is it a silver bullet?” – no, it is not! It is another option (relatively immature) to consider that may be the best fit for you, but not a panacea. Dealing with data is (still) not easy…
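As promised in the first item above, here is a minimal, hedged Python sketch of the schema-on-write vs schema-on-read contrast (pandas is assumed to be available and the tiny event records are invented): the “lake” path accepts whatever arrives and interprets it at read time, while the “warehouse” path types and validates each record before it is allowed into the store.

```python
# Minimal sketch of the schema-on-write vs. schema-on-read contrast.
# Assumptions: pandas is installed; the tiny event records are invented.
import json
import pandas as pd

raw_events = [
    '{"user": "u1", "amount": "12.50", "ts": "2023-10-01T10:00:00"}',
    '{"user": "u2", "amount": null, "note": "refund pending"}',  # messy record
]

# Data lake style (schema-on-read): land the raw records as-is,
# interpret the structure only when reading/analyzing.
lake = [json.loads(line) for line in raw_events]
df_on_read = pd.json_normalize(lake)  # schema emerges at read time
print(df_on_read.dtypes)

# Data warehouse style (schema-on-write): validate and type the record
# before it is stored; bad rows are rejected or fixed first.
def to_warehouse_row(event: dict) -> dict:
    return {
        "user": str(event["user"]),
        "amount": float(event["amount"]),  # fails on null -> reject/fix
        "ts": pd.Timestamp(event["ts"]),
    }

clean_rows = []
for event in lake:
    try:
        clean_rows.append(to_warehouse_row(event))
    except (KeyError, TypeError, ValueError):
        pass  # in a real ETL flow this row would go to a quarantine area
df_on_write = pd.DataFrame(clean_rows)
print(df_on_write.dtypes)
```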

In addition, in this talk I also briefly introduced the ongoing research into the integration of the data lake as a data repository with data wrangling, seeking increased data quality in IS. In short, this is somewhat like an improved data lakehouse, where we emphasize the need for data governance and data wrangling to be integrated in order to really get the benefits that data lakehouses promise (although we still call it a data lake, since the data lakehouse, although not a super new concept, is still debated a lot, including but not limited to its very definition).

My colleague Mohamad Gharib, in turn, discussed what data quality and, more specifically, data quality requirements are, why they really matter, and provided a very interesting perspective on how to define high-quality data, which would further serve as the basis for defining these requirements.

All in all, although we did not know each other before and had a very limited idea of what each of us would talk about, we all admitted that the seminar turned out to be very coherent, where we and our talks, respectively, complemented each other, extending some previously touched but not thoroughly elaborated points. This allowed us not only to make the seminar a success, but also to establish a very lively discussion (although the prevailing part of this discussion took place during the coffee break – as usually happens – so, unfortunately, it is not available in the recordings, the link to which is available below).

Research and Innovation Forum 2022: panel organizer, speaker, PC member, moderator and Best panel moderator award

As I wrote earlier, this year I was invited to organize my own panel session within the Research and Innovation Forum (Rii Forum). This invitation was a follow-up to several articles that I have recently published (article#1, article#2, article#3) and a chapter to be published in “Big data & decision-making: how big data is relevant across fields and domains” (Emerald Studies in Politics and Technology) that I was developing at that time. I was glad to accept this invitation, but I did not even think about how many roles I would play at the Rii Forum and how many emotions I would experience. So, how was it?

First, what was my panel about? It was dedicated to data security, entitled “Security of data storage facilities: is your database sufficiently protected?“, and was part of the track called “ICT, safety, and security in the digital age: bringing the human factor back into the analysis“.

My own talk was titled “Data security as a top priority in the digital world: preserve data value by being proactive and thinking security first“, making it part of the panel described above. In this talk I elaborated on the main idea of the panel, referring to a study I had recently conducted. In short, today, in the age of information and Industry 4.0, billions of data sources, including but not limited to interconnected devices (sensors, monitoring devices) forming Cyber-Physical Systems (CPS) and the Internet of Things (IoT) ecosystem, continuously generate, collect, process, and exchange data. With the rapid increase in the number of devices and information systems in use, the amount of data is increasing. Moreover, due to digitization and the variety of data being continuously produced and processed with reference to Big Data, their value is also growing. As a result, the risk of security breaches and data leaks increases as well. The value of data, however, depends on several factors, of which data quality and data security – the latter also affecting data quality if the data are accessed and corrupted – are the most vital. Data serve as the basis for decision-making and as input for models, forecasts, simulations etc., which can be of high strategic and commercial/business value. This has become even more relevant in the context of the COVID-19 pandemic, which, in addition to affecting the health, lives, and lifestyle of billions of citizens globally and making life even more digitized, has had a significant impact on business. This is especially the case because of the challenges companies have faced in maintaining business continuity in this so-called “new normal”. However, in addition to those cybersecurity threats that are caused by changes directly related to the pandemic and its consequences, many previously known threats have become even more desirable targets for intruders and hackers. Every year millions of personal records become available online. Moreover, the popularity of IoTSE has decreased the complexity of searching for connected devices on the Internet and made access easy even for novices, due to the widespread popularity of step-by-step guides on how to use IoT search engines to find and gain access to insufficiently protected webcams, routers, databases and other artifacts. Recent research demonstrated that weak data and, in particular, database protection is one of the key security threats. Various measures can be taken to address the issue. The aim of the study to which this presentation refers is to examine whether “traditional” vulnerability registries provide a sufficiently comprehensive view of DBMS security, or whether DBMSs should be intensively and dynamically inspected by their holders by referring to Internet of Things Search Engines, moving towards a sustainable and resilient digitized environment. The study brings attention to this problem and makes you think about data security before looking for and introducing more advanced security and protection mechanisms, which, in the absence of the above, may bring no value.

Other presentations delivered during this session were “Information Security Risk Awareness Survey of non-governmental Organization in Saudi Arabia”, “Fake news and threats to IoT – the crucial aspects of cyberspace in the times of cyber war” and “Minecraft as a Tool to Enhance Engagement in Higher Education” – all of them incredibly interesting, and all three talks were delivered by women, while only the moderator of the session was a male researcher, which he found quite remarkable given the topic and the ICT orientation – not a very typical case 🙂 Nevertheless, we had a great session and a very lively and fruitful discussion, mostly around GDPR-related questions, which seems to be one of the hottest areas of discussion for people representing different ICT “subbranches”. The main question that we discussed was: is the GDPR more a supportive tool and a “great thing”, or rather a “headache” that sometimes even interferes with development?

In addition, shortly before the start of the event, I was asked to become the moderator of the panel “Business in the era of pervasive digitalization“. Although, as you may know, this is not exactly in line with my area of expertise, it is in line with what I am interested in. This is not surprising, since management, business and economics are all very closely connected to and dependent on ICT. Moreover, they affect ICT, thereby pointing out the critical areas that we as IT people need to address. All in all, we had a great session with excellent talks and a lively discussion at the end, where we discussed different session-related topics, shared our experience, thoughts etc. Although it was a brilliant experience, there is one thing that made it even better… A day later, a ceremony was held where the best contributions of the forum were announced, and I was named the best panel moderator in recognition of “the academic merit, quality of moderation, scheduling, and discussion held during the panel”!!!

These were three wonderful days of the forum with very positive emotions and so many roles – panel organizer, speaker/presenter, program committee member and panel moderator – with the cherry on the cake at the very end of the event. Thank you, Research and Innovation Forum!!! Even with us being at home and participating online, you managed to give us an absolutely amazing experience and even the feeling that we were all together in Athens!