On the Challenge of Sound Code for Operating Systems

J. Klimt, M. Kröning, St. Lankes, A. Monti

The memory-safe systems programming language Rust is gaining increasing attention in operating-system development communities, as it provides memory safety without sacrificing performance or control. However, these safety guarantees apply only to the safe subset of Rust, while bare-metal programming requires some parts of the program to be written in unsafe Rust. Writing abstractions for these parts of the software that are sound, meaning that they guarantee the absence of undefined behavior and thus uphold the invariants of safe Rust, can be challenging. Producing sound code, however, is essential to avoid breakage when the code is used in new ways or when compiler behavior changes.

In this paper, we present common patterns of unsound abstractions derived from the experience of reworking the soundness of our kernel. During this process, we removed over 400 unsafe expressions while discovering and fixing several hard-to-spot concurrency bugs along the way.


The Role of Open-Source Software in the Energy Sector

J. Klimt, N. Eiling, F. Wege, A. Monti

The fast digitalization of power grids and the adoption of innovative software solutions are key to a successful energy transition. In other sectors, such as telecommunications or cloud computing, open-source software has already proven capable of transforming entire industries by speeding up development and lowering development costs while achieving high levels of stability, interoperability, and security. However, the energy sector has not yet embraced open-source software to the same degree. We discuss how existing open-source software principles can be applied to the unique challenges of the energy sector during the transition towards a higher penetration of renewable energy resources. To provide an overview of the current state of the open-source software landscape, we collected and analyzed 388 open-source projects in terms of project activity, community composition, relevant licenses, and commonly used programming languages. One finding was that the majority of projects are currently driven by academic contributors, but commercial players also play a role, and we identify positive examples of collaboration between the two, mostly related to standardization.


ML as a Service (MLaaS): Enhancing IoT with intelligence with adaptive online deep and reinforcement learning, model sharing and zero knowledge model verification

J. Mira, I. Moreno, H. Bardisbanian, J. Gorroñogoitia

AI has changed our lives in many respects, including the way we (as humans) interact with the internet and computational devices, but also the way devices interact with us and with each other, in most processes of industry and other socioeconomic domains, where machine learning (ML) based applications are gaining increasing influence. The Internet of Things (IoT) plays a key role in these process interactions by providing contextual information that must be processed to extract intelligence that can largely improve them. However, the delivery of ML-based applications for IoT domains faces the intrinsic complexity of ML operations and of online interoperability with IoT devices. In this chapter, we present the IoT-NGIN ML as a service (MLaaS) platform, an MLOps platform devised for the delivery of intelligent applications for IoT. Its services for online deep learning (DL) training and inference, ML model conversion and sharing, and zero-knowledge model verification based on blockchain technology are also presented.


Analysis of Privacy Preservation Enhancements in Federated Learning Frameworks

Z. Anastasakis, S. Bourou, T. H. Velivasaki, A. Voulkidis, and D. Skias

Machine learning (ML) plays a growing role in Internet of Things (IoT) applications and has efficiently contributed to many aspects, both for businesses and consumers, including proactive intervention, tailored experiences, and intelligent automation. Traditional cloud-computing machine learning applications need the data generated by IoT devices to be uploaded and processed on a central server, giving data access to third parties and raising privacy and data ownership concerns. Federated learning (FL) is able to overcome these privacy concerns by enabling the on-device collaborative training of a machine learning model without sharing any data over the network. However, model sharing can also potentially reveal sensitive information. Therefore, federated learning needs additional privacy-preserving techniques to enable fully private machine learning model sharing and training. In this chapter, privacy-preserving techniques for federated learning are studied. In addition, a comparative analysis of state-of-the-art federated learning frameworks against privacy-preserving techniques is presented. The analysis comprises the identification of the main advantages and disadvantages of eight FL frameworks as well as the investigation of the frameworks under criteria related to their FL features and privacy preservation options.


Intelligent Management at the Edge

M. Mosahebfard, C. Torres-Pérez, E. Carmona-Cejudo, A. Cárdenas Córdova, A. Pino Martínez, J. Sebastian Camargo Barragan, E. Coronado, M. Shuaib Siddiqui

AI/ML techniques play a key role in 5G/6G networks providing connectivity to IoT devices. In such scenarios, not only is it necessary to run time-sensitive applications with strict latency requirements without human intervention, but it is also key to apply automation techniques at both the application and the network levels. The chapter is composed of three sections. In the first section, we present different cloud native (CN) technologies enabling scalable, cost-efficient, and reliable IoT solutions. The second section details different distributed and hierarchical monitoring frameworks and metrics collection schemes as inputs to AI engines. In the last section, application placement problems focused on delay minimization in geographically distributed single-cluster environments are first discussed. Afterwards, application placement issues ensuring latency requirements for the applications and energy consumption in distributed multi-access edge computing (MEC) systems using AI pipelines are presented.


Impact of an ML-Based Demand Response Mechanism on the Electrical Distribution Network: A Case Study in Terni

M. A. Bucarelli, M. Ghoreishi, F. Santori, J. Mira, J. Gorroñogoitia

The development of smart grids requires the active participation of end users through demand response mechanisms, which provide technical benefits to the distribution network in exchange for economic savings. Integrating advanced machine learning tools makes it possible to optimise the network and manage the mechanism so as to maximise the benefits. This paper proceeds by forecasting consumption for the next 24 h using a recurrent neural network and by processing these data with a reinforcement learning-based optimisation model to identify the best demand response policy. The model is tested in a real environment: a portion of the Terni electrical distribution network. Several scenarios are identified, considering users’ participation at different levels and limiting the potential with various constraints.
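As a toy illustration of the optimisation step, the following Python sketch replaces the paper's reinforcement-learning model with a simple greedy rule that shifts a flexible load into the cheapest hours of a 24-hour forecast. The function name, prices, and load figures are invented for illustration; they are not the authors' model.

```python
def shift_flexible_load(forecast_price, flexible_kwh, slots):
    """Greedy stand-in for an RL demand-response optimiser: spread a
    flexible load evenly over the `slots` cheapest forecast hours."""
    order = sorted(range(len(forecast_price)), key=forecast_price.__getitem__)
    cheapest = sorted(order[:slots])
    return {hour: flexible_kwh / slots for hour in cheapest}

# Hypothetical day-ahead prices (EUR/kWh) with cheap hours 18-20.
prices = [0.30] * 18 + [0.10, 0.10, 0.12, 0.30, 0.30, 0.30]
plan = shift_flexible_load(prices, flexible_kwh=6.0, slots=3)
print(plan)  # {18: 2.0, 19: 2.0, 20: 2.0}
```

A real policy would additionally respect user participation levels and network constraints, which is where the reinforcement-learning formulation earns its keep.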


Accuracy analysis of power sensors in compute servers using high precision power analyzer

M. Pitz, N. Eiling, F. Wege, I. Köster, S. Lankes, A. Monti

As power densities increase and energy prices become more relevant for operational costs, the demand for analysis, optimization, and evaluation of the energy efficiency of computing systems grows. An indispensable tool for evaluating energy efficiency is power measurement. Today, CPUs, GPUs, and platform management controllers have power sensors that allow evaluating energy consumption. However, these sensors have limited time resolution, and reading them introduces computational load, leading to distorted data. In this poster, we use a dedicated, calibrated power monitoring device to determine the accuracy of the integrated power sensors of a typical server platform. We show that the integrated power sensors alone are not sufficient to characterize the energy consumption of a computing system under varying loads. We find that not all measurement methodologies give an accurate picture of the energy consumption of the entire system, which can lead to false conclusions.
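The distortion caused by limited sensor time resolution can be reproduced with a toy example: integrating the same power trace at a high and a low sampling rate yields very different energy figures when the coarse samples alias with short load bursts. The trace, rates, and burst pattern below are synthetic illustrations, not measurements from the poster.

```python
def energy_joules(powers, dt):
    """Trapezoidal integration of a power trace (W) sampled every dt seconds."""
    return sum((powers[i] + powers[i + 1]) / 2 * dt
               for i in range(len(powers) - 1))

# Synthetic 1 s workload: 100 W idle with a 200 W spike every 10 ms.
fine_dt = 0.001                      # 1 kHz "power analyzer"
trace = [200.0 if i % 10 == 0 else 100.0 for i in range(1001)]

coarse_dt = 0.01                     # 100 Hz "integrated sensor"
coarse = trace[::10]                 # unluckily samples exactly on the spikes

e_fine = energy_joules(trace, fine_dt)      # ~110 J
e_coarse = energy_joules(coarse, coarse_dt)  # ~200 J: 82% overestimate
print(e_fine, e_coarse)
```

Here the coarse sensor happens to sample only the spikes, so it nearly doubles the energy estimate; sampling between spikes would instead underestimate it, which is why a calibrated external analyzer is needed as ground truth.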


Technical–economic Impact of the Deployment of Renewable Energy Communities: An Italian Case Study

M. A. Bucarelli, T. Bragatto, A. Curci, M. Maccioni, J. Dell’Olmo, A. Geri, M. Paulucci

Many European governments have defined ambitious energy policies that newly promote the installation of small distributed generators and encourage the customers’ aggregation through energy communities. This paper assesses the impact of Renewable Energy Communities (REC) on the electricity distribution grid, with a focus on the case study of Terni (Italy). In order to assess the impact of RECs, power flows on the grid were calculated while varying the composition and the behavior of the energy communities. Assuming the full development of a scenario based on the European renewable energy growth program REPowerEU, the study examines the integration of renewable energy sources within local communities, their effects on the distribution grid’s performance, and their economic viability. Key technical aspects analyzed include the grid’s capacity to accommodate RES integration, reverse power flow, and self-consumption. In addition, the economic analysis evaluates the financial feasibility of REC implementation, considering potential revenue streams, within the Italian regulatory framework.


Investigating the impact of short-circuit faults in different neutral configurations: a real case study

M. A. Bucarelli, D. Agostini, A. Geri, F. M. Gatta, M. Maccioni, M. Paulucci

This paper presents a comprehensive analysis of different types of short-circuit faults and various neutral management configurations in ASM Terni’s network. The study focused on a portion of the Medium Voltage network comprising 102 nodes and 34.5 km of lines. The main objective was to assess the effects of short-circuit faults under different neutral management configurations to determine the best strategies for enhancing system reliability and stability. The analyses were conducted using two open-source software tools, OpenDSS and ATPdraw, which provide versatile platforms for analyzing electrical distribution systems. Various neutral management configurations were considered, including grounded neutral, isolated neutral, and neutral connection through resistance or reactance. Through simulations, different parameters of interest were evaluated, such as short-circuit currents, phase voltages, and currents in the shields. The obtained results provided valuable insights into the behaviour of the network under different fault conditions and neutral management configurations. The findings can assist in optimizing the network’s design and operation, leading to improved system performance and enhanced reliability.


Nearby connections strategies: Features, usage, and empirical performance evaluation. Internet of Things. 2023

T. Lagos Jenschke, M. Dias de Amorim, S. Fdida

Device-to-Device (D2D) communication involves a direct, peer-to-peer link between devices that operates independently of fixed network infrastructures. It can either complement existing network infrastructures or be used as a standalone network to provide services such as distributing content, supporting emergency services during natural disasters, and enabling smartphone applications like AirDrop (iOS) and Nearby Share (Android). However, the behavior of D2D communication is not always predictable, and there are many challenges to implementing and deploying D2D communication systems in real-world situations. In particular, it is hard to know what throughput one can obtain, as the nominal numbers provided by the documentation are seldom observed in practice. This paper focuses on the Nearby Connections Application Programming Interface (API) provided by Google. This API offers distinct strategies depending on the network’s topology, and choosing the right one for an application is a challenging task, as they perform differently. In this paper, we contribute to the research community in two ways. Firstly, we investigate the real-world throughput performance of two strategies (STAR and POINT-TO-POINT, as they are the only ones to use Bluetooth and Wi-Fi Direct to transfer data). Our study shows that the POINT-TO-POINT strategy generally achieves high throughput and is suitable for use cases involving high network traffic. In contrast, the STAR strategy performs poorly in throughput, which can result in link instability and slow transfers in some cases. Secondly, we disclose a tool to help network designers evaluate their networks and fine-tune their protocols and algorithms according to their specificities.


Implementation Challenges of Single-hop, Device-to-Device Communication in 5G Networks. EuCNC & 6G Summit. SPS9 6G enabled Network Applications for the Future of Connected Robotics. 2023

M. Sophocleous, A. Karkotis, L. Christofi, T. Lagos Jenschke, M. Dias de Amorim, S. Fdida, J. Costa-Requena, S. Katta

We consider the important emerging scenario of a private 5G packet core supporting proximity services that enable coverage extension for end-devices through relays using Wi-Fi Direct connectivity. We demonstrate and evaluate the implementation of Device-to-Device (D2D) communication using off-the-shelf User Equipment through the development of a customized 5G Packet Core with Local Area Network capability and a mobile/server application that allows direct communication, discovery, and relay selection between the end-device and the relay(s). The system was tested in a laboratory-based testbed, and latency, throughput, and jitter measurements were obtained for multiple devices. We conclude that 5G networks are suitable for industrial applications, although current 5G solutions are focused on consumer communications and require configuration changes to offer similar capacity and delay for both download and upload traffic.


GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems. Electronics. 2023

K. Psychogyios, T.H. Velivassaki, St. Bourou, A. Voulkidis, D. Skias, Th. Zahariadis

Federated learning (FL) is an emerging machine learning technique where machine learning models are trained in a decentralized manner. The main advantage of this approach is the data privacy it provides because the data are not processed in a centralized device. Moreover, the local client models are aggregated on a server, resulting in a global model that has accumulated knowledge from all the different clients. This approach, however, is vulnerable to attacks because clients can be malicious or malicious actors may interfere within the network. In the first case, these types of attacks may refer to data or model poisoning attacks where the data or model parameters, respectively, may be altered. In this paper, we investigate the data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN). This network is trained jointly by the malicious clients using a concatenated malicious dataset. Due to dataset sample limitations, the architecture and learning procedure of the GAN are adjusted accordingly. Through the experiments, we demonstrate that these types of attacks are effective in achieving their task and managing to fool common federated defenses (stealth). We also propose a mechanism to mitigate these attacks based on clean label training on the server side. In more detail, we see that the model degradation attack causes an accuracy degradation of up to 25%, while common defenses can only alleviate this for a percentage of ∼5%. Similarly, the targeted label attack results in a misclassification of 56% compared to 2.5% when no attack takes place. Moreover, our proposed defense mechanism is able to mitigate these attacks.
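A minimal sketch of the label-flipping idea, stripped of the GAN-generated images the paper actually uses, fits in a few lines of Python. The function and toy dataset below are hypothetical illustrations of what a malicious client submits, not the authors' implementation.

```python
import random

def flip_labels(dataset, src, dst, fraction=1.0, seed=0):
    """Return a copy of (sample, label) pairs in which a fraction of
    labels equal to `src` are flipped to `dst`, as a malicious FL
    client would do before local training (targeted label attack)."""
    rng = random.Random(seed)
    poisoned = []
    for x, y in dataset:
        if y == src and rng.random() < fraction:
            poisoned.append((x, dst))
        else:
            poisoned.append((x, y))
    return poisoned

clean = [(i, i % 3) for i in range(9)]       # toy samples with labels 0..2
poisoned = flip_labels(clean, src=1, dst=2)  # flip every label 1 -> 2
print([y for _, y in poisoned])              # [0, 2, 2, 0, 2, 2, 0, 2, 2]
```

The paper's variants go further by training a GAN on the concatenated malicious datasets so the flipped labels are attached to synthetic, harder-to-detect samples.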


Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. Global Information Infrastructure and Networking Symposium (GIIS). 2022

Z. Anastasakis, K. Psychogyios, T.-H. Velivassaki, St. Bourou, A. Voulkidis, D. Skias, A. Gonos, Th. Zahariadis

Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising threats to cybersecurity, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often desired to remain private. Federated Learning (FL) provides a potential solution by enabling the collaborative training of an attack detection model among a set of federated nodes, while preserving privacy as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model which can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and configurations of the participating nodes. We show that the proposed model protects privacy without actually compromising performance: it incurs a limited performance impact of only ~7% lower testing accuracy compared to the baseline while simultaneously guaranteeing security and applicability.
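Differential privacy in FL settings like this one typically means clipping each client update and adding calibrated Gaussian noise before it leaves the device. The Python sketch below shows that sanitization step with invented parameter values; it is a schematic illustration, not the paper's implementation.

```python
import math
import random

def dp_sanitize(update, clip_norm, noise_multiplier, seed=0):
    """Clip an update vector to L2 norm `clip_norm`, then add Gaussian
    noise with standard deviation noise_multiplier * clip_norm
    (the per-client sanitization used in DP-SGD-style training)."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    sigma = noise_multiplier * clip_norm
    return [v + rng.gauss(0.0, sigma) for v in clipped]

update = [3.0, 4.0]  # toy gradient with L2 norm 5.0
private = dp_sanitize(update, clip_norm=1.0, noise_multiplier=0.5)
```

Raising the noise multiplier strengthens the privacy guarantee but degrades accuracy, which is exactly the trade-off the paper quantifies at roughly 7% testing accuracy.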


Quantifying Direct Link Establishment Delay Between Android Devices. IEEE 47th Conference on Local Computer Networks (LCN). 2022

T. Lagos Jenschke, M. Dias de Amorim, S. Fdida

The enormous success of direct communication applications has shed light on the practical interest of Device-to-Device (D2D) communications. However, to set up a direct link, two neighboring nodes first have to detect each other, which introduces a delay before they can start sending and receiving data. The link establishment delay can be particularly unfavorable in situations of strong mobility, as the availability of the direct communication link depends on how long the devices stay within communication range of each other. This paper reports on our experiments to evaluate the link establishment delay. We focus on Android devices and use the Nearby Connections Application Programming Interface (API), which supports Bluetooth Classic and Bluetooth Low Energy (BLE) for link establishment. In a nutshell, we observe that link establishment requires several seconds to complete in the case of Bluetooth Classic and even tens of seconds for BLE.


A Low-Cost Smart Monitoring Device For Demand-Side Response Campaigns. International Congress on Information and Communication Technology. 2022

A. Geri, F. M. Gatta, M. Maccioni, J. Dell’Olmo, F. Carere, M. A. Bucarelli, P. Poursoltan, N. Hadifar, M. Paulucci

The energy transition requires an increasing penetration of renewable resources, particularly at the MV/LV levels. The emerging production scheme, characterized by distributed power plants, imposes a capillary control of production and consumption across the Distribution Network (DN). The implementation of Demand-side Response (DSR) campaigns is widely seen as a solution that can increase grid stability, but such campaigns require a complex and expensive monitoring infrastructure to select the optimal operating point of the production/consumption systems. This paper suggests a cheap and reliable smart monitoring device based on Raspberry Pi technology. The communication infrastructure adopted in the smart building of ASM S.p.A., the Distribution System Operator (DSO) of the city of Terni, shows the feasibility of implementing this prototype on a large scale.


Capabilities-based access control for IoT devices using Verifiable Credentials. SafeThings 2022: IEEE Workshop on the Internet of Safe Things. 2022 (2nd Best Paper Award)

N. Fotiou, V. A. Siris, G. Polyzos, Y. Kortesniemi, D. Lagutin

Capabilities-based access control is a promising paradigm that can handle the particularities of IoT systems. Nevertheless, existing systems are not interoperable and have limitations, such as the lack of proof of possession, inefficient revocation mechanisms, and reliance on trusted third parties. In this paper, we overcome these limitations by designing and implementing a system that leverages Verifiable Credentials (VCs) to encode access rights. Our solution specifies protocols for requesting and using VCs that can be mapped to OAuth 2.0, includes an efficient and privacy-preserving proof-of-possession mechanism, and supports revocation. We implement and evaluate our solution and show that it can be used directly even by constrained devices.


Distributed Generation Monitoring: a cost-effective Raspberry Pi-based device. 2022 2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET). 2022

A. Geri, F.M. Gatta, M. Maccioni, J. Dell’Olmo, F. Carere, M.A. Bucarelli, P. Poursoltan, N. Hadifar, M. Paulucci

The evolution of the distribution grids towards the smart grid paradigm requires the implementation of a telecommunication network overlaid on the distribution grid. To achieve this target, a new generation of reliable, cheap, and easily deployable smart meters needs to be developed. This paper presents a smart meter that fits a range of possible applications, from household metering to distributed generation monitoring. The Raspberry Pi ecosystem is chosen for this purpose as a low-cost and highly reliable technology for developing an easily deployable smart meter that collects the principal magnitudes of interest on the monitored side and makes them accessible from a laptop or mobile phone. The designed device is realized and deployed in a secondary substation to monitor a PV power plant in the ASM Terni distribution network.


A Review of Tabular Data Synthesis using GANs on an IDS Dataset. Information. 2021

St. Bourou, A. El Saer, T.-H. Velivassaki, A. Voulkidis, Th. Zahariadis

Recent technological innovations, along with the vast amount of available data worldwide, have led to the rise of cyberattacks against network systems. Intrusion Detection Systems (IDS) play a crucial role as a defense mechanism in networks against adversarial attackers. Machine Learning methods provide various cybersecurity tools. However, these methods require plenty of data to be trained efficiently, data which may be hard to collect or to use due to privacy reasons. One of the most notable Machine Learning tools is the Generative Adversarial Network (GAN), which has great potential for tabular data synthesis. In this work, we start by briefly presenting the most popular GAN architectures: VanillaGAN, WGAN, and WGAN-GP. Focusing on tabular data generation, the CTGAN, CopulaGAN, and TableGAN models are used for the creation of synthetic IDS data. Specifically, the models are trained and evaluated on the NSL-KDD dataset, considering the limitations and requirements of this procedure. Finally, based on certain quantitative and qualitative methods, we argue for and evaluate the most prominent GANs for tabular network data synthesis.
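One simple quantitative check of the kind used when comparing synthetic and real tables is to measure how far the synthetic data's per-column marginals drift from the real ones. The crude score below is an illustrative stand-in written for this summary, not one of the paper's actual evaluation metrics.

```python
def column_stats(rows):
    """Per-column mean of a numeric table given as equal-length rows."""
    n = len(rows)
    return [sum(r[c] for r in rows) / n for c in range(len(rows[0]))]

def marginal_gap(real, synthetic):
    """Mean absolute difference between per-column means: a crude
    marginal-fidelity score for synthetic tabular data (lower is better)."""
    rs, ss = column_stats(real), column_stats(synthetic)
    return sum(abs(a - b) for a, b in zip(rs, ss)) / len(rs)

real = [[0.0, 10.0], [2.0, 12.0], [4.0, 14.0]]  # column means: 2.0, 12.0
synth = [[1.0, 11.0], [3.0, 13.0]]              # column means: 2.0, 12.0
print(marginal_gap(real, synth))                # 0.0
```

Matching marginals is only a necessary condition; serious evaluations also compare pairwise correlations and train-on-synthetic/test-on-real classifier performance.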


Cricket: A virtualization layer for distributed execution of CUDA applications with checkpoint/restart support. Concurrency and computation. 2021

N. Eiling, J. Baude, S. Lankes, A. Monti

In high-performance computing and cloud computing, the introduction of heterogeneous computing resources such as GPU accelerators has led to a dramatic increase in performance and efficiency. While the benefits of virtualization features in these environments are well researched, GPUs do not offer virtualization support that enables fine-grained control, increased flexibility, and fault tolerance. In this article, we present Cricket: a transparent and low-overhead solution to GPU virtualization that, due to its open-source nature, enables future research into other virtualization techniques. Cricket supports remote execution and checkpoint/restart of CUDA applications. Both features enable the dynamic and flexible distribution of GPU tasks across computing nodes and the multi-tenant usage of GPU resources, thereby improving flexibility and utilization for high-performance and cloud computing.


Discovering the Digital Twin Web - From singular applications to a scalable network

Juuso Autiosalo

Digital twins are virtual entities that are linked to real-world entities. Twins mirror their real counterparts and consist of features that are selected to serve the underlying use cases. Over the past decade, digital twins rose from an engineering curiosity to a concept praised by consulting agencies, offered by corporations, and pursued by governmental agencies. Digital twins are expected to bridge the gap between digital and physical worlds.

This dissertation set out to build machine design-focused digital twins for industrial products, specifically for an industrial overhead crane. However, during the work, it turned out that the existing tools and techniques did not allow the scalability required to create a twin for each manufactured product, suggesting that a more ground-up research approach was required. Hence, the latter part of the dissertation focused on initiating the development of fundamental digital twin technologies.

The four main results are a feature-based digital twin framework, a case study on practical digital twin development, a preliminary digital twin document specification as a method for describing digital twins, and Twinbase, a server software for distributing digital twin documents. 1) The framework presents a conceptual approach for building digital twins, consisting of features and connecting software blocks to build modular digital twins. 2) The case study describes lessons learned from developing a multi-component digital twin for an industrial crane across multiple teams and organizations. 3) A digital twin document enables the integration of twin components and acts as a master metadata source for a digital twin. 4) The open-source server software Twinbase allows twin owners to publish digital twin documents as effortlessly as possible, allowing rapid development cycles and innate collaboration.
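To make result 3) concrete, a digital twin document can be pictured as a small, machine-readable metadata record pointing at the twin's features. The field names and URLs in this Python/JSON sketch are illustrative assumptions, not the dissertation's actual preliminary specification.

```python
import json

# Hypothetical digital twin document: every field name and URL here is
# invented for illustration, not taken from the preliminary specification.
twin_doc = {
    "dt-id": "https://dtid.example.org/overhead-crane-42",  # permanent twin ID
    "name": "Overhead crane 42",
    "hosting-server": "https://twinbase.example.org",
    "features": [
        {"name": "position", "api": "https://api.example.org/crane/position"},
        {"name": "maintenance-log", "api": "https://api.example.org/crane/log"},
    ],
}

# The document is plain JSON, so both humans and machines can read it.
print(json.dumps(twin_doc, indent=2))
```

A server in the envisioned Digital Twin Web would serve such documents the way a web server serves pages, with the permanent identifier resolving to the document's current location.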

The developed technologies will accelerate the creation and standardization of scalable digital twin entities. As a synthesis of the results, the dissertation introduces the Digital Twin Web (DTW) as an initiative for building a global network of digital twins. In the network, each digital twin is represented by a digital twin document. The structure of DTW is analogous to the World Wide Web (WWW), consisting of servers and clients and leveraging open specifications. Key differences are the direct relation to the real world and readability for both humans and machines. Standardization of the digital twin document is seen as an imperative topic for future research. Other topics for standardization were identified as well, for example defining a transfer protocol for twin documents.

During the digital twin development, user-friendly Application Programming Interfaces (APIs) proved to be valuable resources, and open-source software and open standards also supported the development.