Curriculum Ingegneria dell'Informazione (Information Engineering)
Contact persons: Giuseppe Liotta and Walter Didimo - The use of visualization to analyze large amounts of data is a well-established methodology, and it is considered a valuable approach in a variety of application domains. Visual analytics systems are especially relevant for analyzing and mining data that can be modeled as graphs and networks, which are often large, complex, dynamic, and uncertain. This type of data is ubiquitous in many real-life applications, such as the social sciences, finance and economics, biology, and telecommunications. In particular, the design of visualization algorithms to gain a deep understanding of a complex and large network, of its structural properties, and of its recurrent patterns is of utmost importance in several decision-making processes. The aim of this research project is to design and develop efficient algorithms and visual analytics systems for big data, with special focus on networks that are complex, uncertain, and dynamically evolving over time. From a theoretical perspective, the research will concentrate on the design of graph drawing techniques capable of handling geometric and topological constraints. It will investigate the combinatorial properties of graphs and explore different graph visualization paradigms, also in dynamic scenarios. From a practical perspective, the research aims to conduct experimental validations of the designed algorithms and to integrate them into visual analytics systems that can be effectively used in different application domains. Overall, this research involves topics and skills from several broader areas, such as Algorithm Engineering, Software Engineering, Information Visualization, Human-Computer Interaction, Big Data Analytics, and Data Science.
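As a minimal illustration of layout computation under positional constraints (a sketch only, not the project's actual algorithms; the networkx calls and the choice of pinned vertices are our assumptions):

```python
# A minimal sketch: force-directed layout with a simple geometric
# constraint, using networkx. Nodes listed in `fixed` keep their initial
# coordinates, a basic way to impose positional constraints on a drawing.
import math
import networkx as nx

G = nx.karate_club_graph()  # a small, well-known social network

# Pin two "anchor" vertices to fixed positions (a toy geometric constraint).
initial = {0: (0.0, 0.0), 33: (1.0, 1.0)}
pos = nx.spring_layout(G, pos=initial, fixed=[0, 33], seed=42)

# A real system would evaluate readability metrics such as edge crossings;
# here we only report the mean edge length as a crude proxy.
lengths = [math.dist(pos[u], pos[v]) for u, v in G.edges()]
print(f"mean edge length: {sum(lengths) / len(lengths):.3f}")
```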
Contact person: Andrea Fronzetti Colladon - Unconscious social signals complement conscious language and can be studied to understand the intentions, goals, and values of employees and customers. Honest signals cannot be measured through time-consuming and costly surveys: they are subtle patterns in how we interact with other people. In recent years, computer algorithms have become capable of reading emotions from emails and body sensors, and of recommending interventions to team members for better collaboration, in ways that were never possible before. Virtual mirroring of digital communication dynamics makes social actors aware of their behavior, beliefs, and emotions; improves collaboration; and nudges change and work commitment within organizations. At the same time, research on "Words and Networks" has led to eminent work, e.g., on language change, recommender systems, collaborative work, semantic computing, and the diffusion and adoption of (mis)information offline and online. In addition, the study of social networks has emerged as a major research trend in management science. Methods and tools from these disciplines can serve business in radically new ways. To face the complexity and unpredictability of today's business and social systems, innovation and rapid adaptation of emerging technologies are a key imperative. The advent of machine learning, natural language processing (NLP), and artificial intelligence allows humans to collaborate and communicate in new ways. While text mining, social network analysis, and machine learning have evolved into mature yet still quickly advancing fields, work at their intersection lags behind in theoretical, empirical, and methodological foundations. PhD students are expected to extend the research on the advantages of combining social network analysis, machine learning, and text mining for business intelligence. For example, actions meant to support successful interactions with clients and employees can come from a better understanding of the impact that language use has within and across organizations. Analogously, text mining can help brand managers identify (virtual) consumer tribes or develop customized marketing strategies. Students should explore new applications of social network analysis to support decision-making processes in medium and large enterprises - for example, in the fields of innovation management, human resource management, brand management, knowledge management, and organizational communication. Social network analysis should be complemented with methods and tools from other disciplines, such as semantic analysis, big data analysis, and machine learning. Students are also encouraged to explore new applications of advanced analytics, such as the Semantic Brand Score, a methodology for the assessment of brand importance that combines text mining and social network analysis. Indeed, brand-management tools should provide answers in almost real time and take into account the 'spontaneous' discourse of a brand's stakeholders. Gaining a deeper understanding of brand importance and image can change the way executives make decisions and manage organizations in the era of big data.
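As a rough, illustrative sketch of how text mining and social network analysis can be combined in the spirit of the Semantic Brand Score (the tokenization, window size, and the three simple centrality proxies below are our simplifying assumptions, not the published methodology):

```python
# Build a word co-occurrence network from a tiny corpus and compute three
# crude proxies for a brand's prevalence, diversity, and connectivity.
import networkx as nx

docs = [
    "brandx launches new eco line",
    "customers praise brandx eco packaging",
    "rival brand cuts prices again",
]
WINDOW = 3  # co-occurrence window (assumed)

G = nx.Graph()
freq = {}
for doc in docs:
    tokens = doc.split()
    for t in tokens:
        freq[t] = freq.get(t, 0) + 1
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + WINDOW, len(tokens))):
            G.add_edge(tokens[i], tokens[j])

brand = "brandx"
prevalence = freq[brand]                             # how often the brand is mentioned
diversity = G.degree[brand]                          # heterogeneity of its associations
connectivity = nx.betweenness_centrality(G)[brand]   # brokerage in the discourse
print(prevalence, diversity, round(connectivity, 3))
```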
Contact person: Gianluca Reali - The ongoing refactoring of traditional network functions into virtualized, software-based counterparts holds the promise of dramatically enhancing service delivery and deployment agility. Indeed, business cycles shrink, and network function virtualization (NFV) technologies will allow network stakeholders to move faster than ever, change offerings, promptly add new services, and get better insight, consistency, troubleshooting, and visibility into the network status. Computing platforms and virtualization layers therefore have a fundamental role in the evolution of network function technologies and application services. The introduction of lightweight virtualization approaches is paving the way for new, better infrastructures for deploying network functions in terms of resource exploitation. In particular, serverless and function-as-a-service (FaaS) technologies are gaining popularity, given their capability of scaling computing and storage resources according to user demand. However, nothing comes for free, and the major drawback of these techniques is a potential increase in service latency.
This research proposal considers a class of applications that are candidates for deployment on edge computing platforms, e.g., for latency and data protection reasons. Some IoT-based and vehicle-based applications fall into this category: for these applications, both latency and resource usage efficiency may be important key performance indicators (KPIs). The goal is to investigate the suitability of these new serverless and FaaS virtualization solutions and to pursue an innovative orchestration strategy, driven by artificial intelligence, for deploying container-based virtualized network middleboxes and critical service components. In fact, AI will play a critical role in designing and optimizing future architectures, protocols, and operations, including forthcoming services fostered by 6G. Going further, the stakeholders using this approach will not deal with AI algorithms directly, but will simply define their intents, and an AI-empowered decision and orchestration engine will translate these intents into detailed, operational network configurations.
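The latency/resource trade-off mentioned above can be made concrete with a toy warm-pool model (entirely our assumption, not an existing orchestrator's policy; the cold-start penalty and cost unit are placeholders):

```python
# Toy model of the serverless trade-off: keeping containers warm consumes
# resources, while scaling from zero adds cold-start latency.
COLD_START_MS = 450.0   # assumed cold-start penalty
WARM_COST = 1.0         # assumed cost of one warm container per interval

def expected_latency_ms(arrivals: int, warm: int, exec_ms: float = 20.0) -> float:
    """Mean latency if the first `warm` requests hit warm containers."""
    cold = max(0, arrivals - warm)
    return exec_ms + cold * COLD_START_MS / max(arrivals, 1)

def best_warm_pool(arrivals: int, latency_budget_ms: float) -> int:
    """Smallest warm pool meeting a latency KPI (brute force)."""
    for warm in range(arrivals + 1):
        if expected_latency_ms(arrivals, warm) <= latency_budget_ms:
            return warm
    return arrivals

w = best_warm_pool(arrivals=10, latency_budget_ms=60.0)
print(f"{w} warm containers, resource cost {w * WARM_COST}")
```

An AI-driven orchestrator would, in essence, learn such decisions online from observed demand instead of solving them by enumeration.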
Contact person: Stefania Bonafoni - Remote sensing, the technique of observing and analyzing targets without being in direct contact with them, makes it possible to gather information about the Earth's atmosphere and surface. Earth Observation (EO) data are used to map and model the spatial and temporal patterns of Earth surface/atmosphere parameters (e.g., land cover types, surface temperature, albedo, water vapor, precipitation events, pollutants, etc.). Advances in space-based remote sensing and data availability have contributed to the growing number of EO studies, exploiting the global spatial coverage and the revisit time of satellite missions. Satellite sensor measurements are extensively used to retrieve geophysical parameters: for instance, suitable image processing and statistical analysis make it possible to detect land changes and thermal patterns. Machine Learning (ML), i.e., the automated approach of inferring empirical models from the data alone, is a successful technique for processing remote sensing data and images, and is currently revolutionizing many areas of research, science, and technology. ML is also routinely used to work with large volumes of data (big data) in different formats. For a successful application of ML, two aspects are essential: a machine learning algorithm and a comprehensive training data set. Once the training phase has been performed, an independent validation test is necessary to assess the accuracy of the ML technique. If the ML algorithm provides poor performance, the model must be modified and/or a more complete training data set acquired. Today, several open-source tools and common programming environments are available to facilitate the use of ML. The research activity will be based on the following tasks:
- Definition of the area of interest (urban and/or rural).
- Choice of the geophysical parameters to monitor. For instance, in an urban area, useful parameters are surface and air temperature, albedo, land cover types, and pollutants.
- Choice of the satellite platforms and sensors (e.g., multispectral radiometers): data download, calibration, and atmospheric correction.
- Retrieval and modeling of the selected parameters using ML techniques. Regressions and neural networks will be the typical (but not the only) ML techniques implemented for parameter estimation and image classification (see the sketch below).
- Assessment of the ML algorithm performance: modification/optimization of the algorithms.
- Application of ML techniques to image downscaling: from existing downscaling models to new, better-performing models.
Novel and original results are expected; the starting point is the study of the broad existing literature.
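A minimal sketch of the retrieval-and-validation workflow described above, on synthetic data (the band set, the toy linear relation, and the random-forest choice are illustrative assumptions, not a real retrieval model):

```python
# Train an ML regressor to map spectral bands to a surface parameter,
# then validate on held-out samples, as the workflow above prescribes.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
bands = rng.uniform(0.0, 1.0, size=(n, 4))            # e.g., red, NIR, TIR1, TIR2
lst = 280 + 30 * bands[:, 2] + 10 * bands[:, 3] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(bands, lst, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Independent validation: if the RMSE is poor, revise the model and/or
# enlarge the training data set, as described above.
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"validation RMSE: {rmse:.2f} K")
```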
Contact person: Stefania Bonafoni - Remote sensing has the capability of observing several variables for hydrological applications over large areas on a repetitive basis, allowing the modelling of hydrological phenomena. These variables include surface soil moisture, rainfall, surface temperature, water depth, flow velocity, and river discharge. In particular, river discharge is fundamental to the hydrologic cycle, flood forecasting, hydraulic risk mitigation, and water resources management. Improving discharge monitoring for various flow conditions at river sites can be accomplished by leveraging new technologies developed for ground measurements and remote sensing, thus enhancing the understanding of surface hydrologic processes. Satellite platforms have paved the way for the development of new approaches for discharge monitoring from space at ungauged river sites. In this context, research efforts have focused on multiple remote sensing platforms, thanks to the growing data availability for Earth Observation and advanced processing techniques. The extensive availability of different satellite missions represents a huge enhancement for the hydrology field, allowing their continuous use over time on a global scale. The research activity will be based on the following tasks, aimed at assessing the effective support of satellite data to hydrological studies and their benefit with respect to ground measurements:
- Definition of the sites of interest.
- Choice of the satellite platforms and sensors (e.g., multispectral radiometers) at different spatial resolutions (MODIS, Landsat, Sentinel-2, etc.).
- Integration of multi-sensor observations.
- Reflectance processing and computation of spectral indices for river discharge and flow velocity estimation, applying different methodologies (a minimal example follows this list).
- Comparison with ground-based measurements.
Novel and original results are expected; the starting point is the study of the broad existing literature.
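A minimal sketch of the spectral-index step, assuming synthetic reflectances and gauge data (a real study would use calibrated imagery, rating-type models, multiple sensors, and independent validation periods):

```python
# Compute a normalized-difference index from two reflectance bands and
# relate it to observed discharge with a simple linear fit.
import numpy as np

nir = np.array([0.30, 0.28, 0.22, 0.18, 0.15])   # assumed NIR reflectances
red = np.array([0.10, 0.11, 0.12, 0.13, 0.14])   # assumed red reflectances
index = (nir - red) / (nir + red)                # normalized-difference index

discharge = np.array([120.0, 150.0, 240.0, 330.0, 420.0])  # assumed gauge data, m^3/s

# Toy calibration of index against discharge; the comparison with
# ground-based measurements plays the role of the validation task above.
slope, intercept = np.polyfit(index, discharge, 1)
print(f"Q ~ {slope:.1f} * index + {intercept:.1f}")
```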
Contact person: Gabriele Costante - The realization of autonomous robotic platforms with high-level reasoning skills has recently become one of the most crucial elements for taking a substantial step towards technological advancement in several contexts, ranging from logistics and supply chains to industry and human assistance. Recent research solutions from the Artificial Intelligence (AI) and Robotics communities have shown impressive results in a wide variety of applications. Furthermore, the number of robotics-based commercial products is growing exponentially, proving that the level of maturity and robustness of these systems has considerably improved. This has been made possible because several medium- and low-level capabilities related to navigation and localization tasks (such as depth estimation, object detection, and obstacle avoidance) have reached a degree of efficiency and robustness previously unthinkable, mostly thanks to the advent of Deep Learning technologies. For instance, Micro Aerial Vehicles (MAVs) are certainly among the platforms that have benefited the most from these advancements. These platforms have been successfully equipped with autonomous perception and planning capabilities (including Simultaneous Localization And Mapping (SLAM), path planning, and visual odometry), running complex and computationally intensive multi-sensor and vision-based control systems in real time. However, the majority of these success cases have been designed for narrow and specific problems, such as navigation and exploration in structured environments or delivery systems in well-defined and controlled areas. The next generation of robots, on the other hand, should be able to fulfill more complex tasks in unstructured, unknown, and dynamic contexts. Interacting with humans, understanding a given objective, and executing it in a complex and unknown scenario require multiple high-level capabilities: i) understanding the task and identifying the steps and the objects to interact with; ii) recognizing the entities in the environment and modeling their relationships; iii) mapping the information acquired from sensors to actions. To this end, stand-alone deep architectures are not sufficient, since a tight interaction between different modules and capabilities is required. A step in this direction has recently been made by relying on the Deep Reinforcement Learning (DRL) paradigm. The DRL framework allows modeling complex perception-to-action relationships in an end-to-end manner, avoiding the need to explicitly build several disconnected modules for localization, detection, and mapping. DRL is grounded in Reinforcement Learning (RL), which has a long history in Artificial Intelligence research. However, it is only in recent years, with the adoption of Deep Neural Networks (DNNs), that the first great results have been achieved; the pioneering works showing the potential of DRL focused on gaming. The ISARLab research group has developed strong expertise on these topics, positioning itself as one of the reference research groups on vision-based navigation algorithms for robotic platforms that take advantage of deep learning and deep reinforcement learning techniques. Inspired by the previous considerations, one of the most important research directions of the group in the immediate future will focus on these themes. The Ph.D.
project aims at the development of innovative solutions to provide robotic platforms (both ground and aerial) with advanced AI capabilities for different robotic tasks (e.g., navigation, localization, exploration, target tracking, and target-driven navigation), accounting for different platform constraints. Research activities include the implementation and testing of the proposed solutions in real applications. As a first stage, besides an accurate review of the literature, state-of-the-art solutions will be implemented to provide baseline schemes for comparison purposes. The key project goals are:
- developing algorithms for analysis and observation across the different conditions/limitations of autonomous systems;
- developing Deep Learning and Deep Reinforcement Learning algorithms for perception, tracking, sensor fusion, and localization;
- devising perception-to-action strategies based on deep reinforcement learning for global/local planning and navigation (see the toy example below);
- exploring scalable algorithms for perception, tracking, sensor fusion, and localization.
The whole set of solutions will be accurately tested in real-world applications with different robotic platforms, in both indoor and outdoor scenarios.
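To illustrate the perception-to-action loop that DRL scales up, here is a deliberately simple tabular Q-learning sketch on a toy grid-world navigation task (the grid, rewards, and hyper-parameters are our assumptions; the project itself targets deep, vision-based variants):

```python
# Tabular Q-learning on a 5x5 grid: the agent learns to reach a goal cell.
import numpy as np

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
rng = np.random.default_rng(0)

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), SIZE - 1)
    nc = min(max(c + dc, 0), SIZE - 1)
    reward = 1.0 if (nr, nc) == GOAL else -0.01   # sparse goal reward
    return (nr, nc), reward, (nr, nc) == GOAL

for _ in range(2000):                             # training episodes
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection (perception -> action)
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next action
        Q[s][a] += 0.1 * (r + 0.95 * np.max(Q[s2]) - Q[s][a])
        s = s2

print("greedy action from start:", int(np.argmax(Q[0, 0])))
```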
Contact persons: Federico Alimenti and Paolo Mezzanotte - High-performance transceivers are becoming increasingly important in commercial and scientific cubesat missions. Nowadays, apparatuses operating in the K/Ka frequency bands are required to reach data rates of up to 100 Mbit/s in both uplink and downlink. They are typically composed of a 17.8-20.2 GHz (K-band) transmitter and a 27.8-30 GHz (Ka-band) receiver. The transmitter power is around 1 W, whereas the receiver noise figure is close to 3 dB. These systems are equipped with a 20 dBi, dual-band, circularly polarized horn antenna, draw 20 W from the satellite power supply unit, and fit inside a 3U cubesat. Besides the analog front-end, a Software Defined Radio (SDR) is used to perform the baseband signal processing and to make system reconfiguration straightforward. The main novelty of the above transceivers is the use of Commercial Off-The-Shelf (COTS) components, i.e., electronic devices originally developed for high-reliability, ground-based applications (automotive, military, etc.). This choice is aimed at reducing the production costs of the cubesat electronics and is intended for Low-Earth Orbit (LEO). As a consequence, design reliability is one of the major challenges to address. Millimeter-wave radiometers are complex electronic apparatuses adopted for remote sensing and imaging purposes. Radiometers are passive sensors that reveal the microwave black-body radiation of matter and, through this principle, are capable of measuring, in a contactless way, the (brightness) temperature of objects. Furthermore, since various molecular and atomic resonances occur in the microwave range, radiometers can identify specific substances, such as water vapor in the atmosphere. In addition, long-wavelength microwaves can penetrate cloud cover, haze, dust, and even rain. Finally, radiometers are at the heart of radio telescopes and radio astronomy, and today's knowledge of the universe is also based on their discoveries: from the Cosmic Microwave Background (CMB) to the atomic hydrogen in stellar clouds and nebulae. The present Ph.D. project aims at exploring a high-performance, reconfigurable mm-wave radio that can operate either in transceiver mode or in radiometer mode, enabling cubesat radio-science missions. For example, many Moon missions are planned for cubesats in the coming years. In these missions, there is the need to establish high-speed communications with the Earth (through the Lunar Gateway) as well as to perform a complete brightness temperature mapping of the Moon's crust. A reconfigurable radio could perform both tasks, thus saving space, mass, and power consumption, quantities that are extremely precious onboard cubesats. In summary, the system to be studied promises a significant technological impact, since it can provide high data rate links to cubesats, thus enabling them as cost-effective platforms for a great variety of missions. The scientific impact, instead, is related to the reconfiguration of the mm-wave receiver as a radiometer. Ground observation (atmospheric studies, soil analysis, etc.), solar flare detection (microwave emission from solar flares), and other radio astronomy experiments can be conceived with this flexible instrument.
An engineering model of the transceiver has already been developed through a cooperation between the Italian Space Agency (ASI), the European Space Agency (ESA), Picosats Trieste, the University of Trieste, and the University of Perugia. Such a transceiver is now under extensive testing. The next steps will be: i) the development of the flight model, ii) in-orbit testing, and iii) modifications to the receiver front-end to perform radiometric experiments with the same hardware.
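Two textbook relations behind radiometer operation can be sketched as follows (hot/cold load values and system parameters are illustrative assumptions):

```python
# Two-point (hot/cold) radiometer calibration and the ideal radiometric
# resolution of a total-power radiometer.
import math

def brightness_temp(v, v_cold, v_hot, t_cold=80.0, t_hot=300.0):
    """Linear two-point calibration: map detector voltage to temperature (K)."""
    return t_cold + (v - v_cold) * (t_hot - t_cold) / (v_hot - v_cold)

def radiometric_resolution(t_sys, bandwidth_hz, tau_s):
    """Ideal total-power radiometer sensitivity: dT = T_sys / sqrt(B * tau)."""
    return t_sys / math.sqrt(bandwidth_hz * tau_s)

print(f"T_B = {brightness_temp(0.62, 0.40, 1.00):.1f} K")
print(f"dT  = {radiometric_resolution(500.0, 1e9, 10e-3) * 1e3:.1f} mK")
```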
Contact persons: Giuseppe Liotta and Fabrizio Montecchiani - Numerous computational problems of wide interest are known to be NP-hard in general; that is, unless P=NP, there exists no polynomial-time algorithm that solves such problems for every input instance. This has motivated a long-standing, systematic search for tractability results for various problems on specific classes of instances, and research in this direction constitutes one of the fundamental areas of Computer Science. Yet the kind of instances that make a problem hard is generally far from frequent in real datasets. Indeed, it is often possible to exploit the structure implicitly underlying many real-world instances to find exact solutions efficiently. The relatively young field of parameterized complexity offers new tools for delivering efficient algorithms to practitioners working on applications. In the parameterized setting, we associate each instance with a numerical parameter, which captures how structured the instance is. This allows the design and engineering of algorithms whose performance strongly depends on the parameter. In other words, parameterized algorithms naturally scale with the amount of structure contained in the instance and are in fact often used to efficiently process real datasets. The parameterized complexity landscape also consists of a variety of companion notions, such as XP-tractability, kernelization, and W-hardness. Among others, parameterized complexity tools are profitably applied in the fields of computational geometry and graph drawing, which in turn play a key role in various application scenarios such as social network analysis, financial fraud detection, and network visualization. In this context, the main objectives of the research will be the development of new combinatorial characterizations for problems of interest, the design and engineering of algorithms to solve such problems, and the development of systems to be adopted and evaluated by practitioners. Examples of problems of interest are linear layouts of graphs; constrained, hybrid, and beyond planarity; and geometric graph representations.
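As a small, classical example of a parameterized algorithm (not one of the project's target problems), the bounded-search-tree method decides Vertex Cover in O(2^k (n + m)) time, where the parameter k bounds the solution size:

```python
# Bounded search tree for Vertex Cover: for any uncovered edge, one of
# its two endpoints must join the cover, so branch on both choices and
# decrement the budget k. The running time depends exponentially on k
# only, not on the instance size.
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return set()                      # no edges left: empty cover suffices
    if k == 0:
        return None                       # edges remain but budget exhausted
    u, v = uncovered
    for w in (u, v):                      # branch: either endpoint joins the cover
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

print(vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], k=2))  # e.g. {1, 3}
```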
Contact persons: Daniele Passeri and Pisana Placidi - Present radiation pixel sensors and the related read-out electronics force designers to face several critical issues in microelectronic design:
- the increased number of read-out channels demands a more accurate power budget (reduced power consumption for each channel);
- the increased amount of read data demands advanced algorithms for data handling;
- any integrated circuit will be exposed to extremely large radiation doses, up to 1 Grad in 10 years.
Optimal IC design is therefore required for high-performance, low-power, and low-cost systems. These results can be achieved through custom design and performance optimization, since different experiments and applications require different specifications in terms, for instance, of radiation hardness, power supply, power consumption, and accuracy. The purpose of the present research is to exploit innovative CMOS technology options in terms of:
- minimum gate size (65 nm, 45 nm, 32 nm, 28 nm, or below);
- technology features (CMOS-bulk, CMOS-FinFET, CMOS-SOI, etc.);
- technology access and cost (not only for prototyping and investigation, but also for the final production).
This will enable:
- higher digital circuit density, to include more digital functionalities;
- higher speed, to achieve improved performance in terms of larger bandwidth, enabling new applications.
In particular, the main outcome of the project is to develop innovative CMOS monolithic pixel detectors that can replace standard hybrid pixel and silicon strip detectors in a wide range of applications, such as High Energy Physics experiments, medical applications, and space applications. The proposed development relies on three key elements:
- a sensor fabrication technology that, starting from the experience gained in previous research activity within national and international collaborations and projects, will be improved so as to be suitable for a wide range of applications;
- a set of smaller-size test structures to investigate relevant issues that can be addressed without full-size prototypes (e.g., radiation resistance, monitoring of the substrate properties, influence of different pixel architectures on charge collection efficiency);
- versatile and scalable front-end electronics and architectures, capable of effectively supporting the development of sensors with realistic size and performance.
The CMOS electronics will be common to the different sensor options, which will be explored by changing only the substrate material and/or some steps of the production process. All the design improvements and modifications introduced in the process flow will be validated with the help of Technology Computer-Aided Design (TCAD) simulations, relying on process data provided by the foundry. A proper TCAD model of bulk and surface radiation damage effects should also be devised and validated for the selected technology, thus fostering its application to the comparison of different layouts/doping profiles, aiming at optimizing the radiation resistance of the device in terms of SNR and breakdown effects. Aiming at low-power design, alternative pixel front-end architectures will be investigated.
A so-called Weak Inversion Pixel Sensor (WIPS) exploits dedicated yet simple circuitry, based on a pre-charge/evaluation scheme, which allows for a "sparse" access mode and thus speeds up the read-out phase.
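A toy numerical illustration (not the actual WIPS circuitry) of why sparse access shortens read-out, assuming a 512 x 512 matrix with a low hit occupancy:

```python
# Only pixels whose charge exceeds the discriminator threshold are read,
# instead of scanning the full matrix, so the cycle count scales with the
# number of hits rather than with the matrix size.
import numpy as np

rng = np.random.default_rng(0)
matrix = rng.poisson(0.01, size=(512, 512))   # sparse hit pattern (assumed occupancy)

threshold = 1
hits = np.argwhere(matrix >= threshold)       # addresses flagged by pre-charge/evaluate

full_scan_cycles = matrix.size                # read every pixel
sparse_cycles = len(hits)                     # read only flagged pixels
print(f"full: {full_scan_cycles} cycles, sparse: {sparse_cycles} cycles "
      f"({full_scan_cycles / max(sparse_cycles, 1):.0f}x fewer)")
```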
Contact person: Paolo Banelli - We are currently experiencing a new technological revolution, where the massive growth of data traffic, transmitted and generated in wireless and wired telecommunication networks and user equipment (mobile phones, wearable devices, autonomous vehicles, etc.), is accompanied by a pervasive use of artificial intelligence (AI) and machine learning (ML) algorithms, aimed at extracting specific, task-oriented meaning from the data, as well as at enabling communication functionalities or optimizing communication network performance. In communication systems, the widespread success of (deep) ML approaches was first envisaged to fulfill classical telecommunications goals, such as channel estimation, detection, coding, traffic shaping, network discovery, and so forth. The motivation for resorting to data-driven ML algorithms is that they can outperform classical model-based solutions and designs when the models are not matched to the practical scenarios, thus representing a viable alternative for communication and network reconfiguration in heterogeneous scenarios. However, a much more intriguing and challenging framework looks at a deeper interplay of ML (computation) tasks with wireless communication networks that, rather than being perceived only as technological enablers for ML tasks performed by devices connected to the Internet, are conversely integrated with, and optimally managed for, the specific statistical learning task they have to support, a paradigm known as Edge Machine Learning. The massive data to be processed nowadays by AI applications are typically distributed across a cloudified network, or generated in real time by thousands of IoT devices. As is well known, this makes it impossible to analyze the data centrally, both because of network capacity issues and because of the prohibitive computational complexity, which in most cases does not even scale linearly with the data size. To overcome these criticalities, in the last decade a huge research effort has been dedicated to developing statistical learning algorithms and platforms with distributed and collaborative architectures. However, such algorithmic and architectural solutions are not well suited to situations where the learning result is requested back in a noticeably short time, possibly by the same equipment that generated the data, such as users or devices connected by a wireless network, whose specific nature and constraints are not always taken into account. For example, a challenging situation arises when the inference or learning tasks must be performed by mobile or simple devices, which may not have at their disposal enough energy, computational capability, or both. This could be the case of small drones that have to identify or classify objects during their mission, or that exploit computer vision deep learning algorithms for collision avoidance and pose estimation: if they do not have enough computational or energy capability on board, they may ask a server of a (wirelessly connected) network to perform the task and send back the outcome. The problem becomes even more challenging in the presence of a swarm of drones, or a network of IoT devices, that need to cooperate for a common mission or learning task.
The drones must rely on a secure, fast, and low-latency communication network to act as a seamless networked system, and possibly employ what they learn within their control algorithms to stabilize flight and perform the mission. As another example, a plethora of IoT devices may have extremely limited batteries and computational capabilities, although possibly a less stringent end-to-end delay for their (possibly cooperative) learning task. In this framework, the recently introduced 5G networks offer impressive wireless communication capabilities, enabling different new services on the same communication platform and providing a significant communication performance enhancement, by means of: a) Ultra-Reliable Low-Latency Communications (URLLC); b) Enhanced Mobile Broadband (eMBB); c) Massive Machine-Type Communications (mMTC); d) pervasive use of cloud capabilities at the wireless network edge. This last characteristic of 5G networks, differently from classical cloud-based architectures, moves computational services to the network edge, i.e., closer to the end users and the wirelessly connected devices. This is crucial, together with the other three pillars of 5G networks, to enable a high-data-rate learning framework where the computational burden of a (mobile) device (drone, sensor, IoT device, mobile phone, etc.) can be off-loaded to the network while still preserving an acceptable end-to-end delay. This Edge Computing vision, formerly known as Fog Computing, has also recently been standardized by ETSI under the name of Multi-Access Edge Computing (MEC). In the described framework, the research activity will adopt a holistic view of the learning goals and algorithms, the network resources (bandwidth, rate, power), and the edge computational capabilities. More specifically, the research activity, positioned in the hot topic of Edge Machine Learning, will focus on: a) exploring the trade-offs of learning accuracy and resource allocation for dynamic training and inference of (single-agent) machine learning tasks, which can possibly be split between the end user and the edge of the wireless network, rather than being simply off-loaded to the edge; the goal is to explore trade-offs between energy consumption, delay, and learning accuracy, through proper allocation of the available resources (radio, data rate, scheduling, edge computation, etc.), as proposed in some seminal works on classical computation off-loading; b) data compression techniques to reduce the wireless transmission rate while preserving the accuracy of the learning tasks; for instance, the research will investigate approaches inspired by the information-theoretic bottleneck method, which has been shown to play a fundamental role in the design and performance evaluation of deep machine learning algorithms; c) collaborative machine learning at the edge of 5G wireless networks and beyond, where the existing literature on wireless distributed estimation and Federated Learning will be cast and extended into the proposed holistic view of joint network and learning design and optimization, which will be the core of the research activity.
These research objectives will be pursued through an interdisciplinary approach that finds its theoretical roots in statistical learning and signal processing, wireless communications, information theory, network science, and distributed and stochastic optimization, coupled with deep machine learning architectures such as (graph) convolutional neural networks.
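As one concrete instance of the collaborative edge learning mentioned in point c), here is a compact Federated Averaging (FedAvg) sketch on a synthetic linear model (client data, learning rates, and round counts are illustrative assumptions; it shows the communication pattern, not a deployable system):

```python
# FedAvg: each client trains locally on its own data, and the edge server
# averages the client models once per communication round.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])

def local_data(n=100):
    X = rng.normal(size=(n, 2))
    return X, X @ w_true + rng.normal(0, 0.1, n)

clients = [local_data() for _ in range(5)]   # e.g., 5 wireless devices
w = np.zeros(2)                              # global model at the edge server

for _ in range(20):                          # communication rounds
    updates = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):                   # local epochs (full-batch gradient steps)
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)
    w = np.mean(updates, axis=0)             # server averages client models

print("learned:", np.round(w, 3), "target:", w_true)
```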
Contact persons: Manuela Cecconi and Pisana Placidi - In the framework of the Internet of Things (IoT), communication technologies can improve current monitoring methods, supporting an appropriate real-time response for a wide range of applications in different fields of engineering. In the environmental field, sensors are designed to collect information (e.g., temperature, pressure, light, humidity, soil moisture, etc.), whereas network-capable microcontrollers are able to process, store, and interpret information, building intelligent wireless sensor networks (WSNs). A clear advantage of wireless transmission is a significant reduction and simplification of wiring and harnesses, in the perspective of a sustainable economy. Recently, an experimental characterization of a commercial, low-cost "capacitive" soil moisture sensor that can be housed in distributed nodes for IoT applications has been carried out at our Engineering Department. The chosen sensor is the cheapest and most easily available on the market. A preliminary validation of the sensor for the determination of soil water content has recently been carried out on silica sandy soil samples. In this scenario, the objectives of this work can be summarized as follows:
• acquisition of awareness of Low Power Wide Area Networks (LPWAN) in different environmental fields, e.g., precision agriculture;
• acquisition of basic physical parameters of plants (vegetation), soil, and the ambient environment: soil water content, soil temperature, greenhouse relative humidity (RH), temperature, and light (see the reading-path sketch below);
• availability of a modular system built with cheap off-the-shelf components, aiming to compare its performance with expensive commercially available systems and to select possible applications in the IoT scenario;
• laboratory experimental investigation aimed at validating the applicability of the developed sensor to different soil types, in terms of mineralogical constituents, physical soil-state parameters, degree of saturation, etc.
The design of a modular architecture subdivided into different layers will be investigated: i) wireless nodes (encompassing sensors, actuators, a low-power embedded processor, and a battery); ii) an Internet gateway/concentrator; iii) the user interface; and iv) database applications placed in a virtual machine in the cloud. The proposed development relies on these key elements:
- participation;
- building a system using a smart hardware and software platform available on the market and enabling real-time software execution;
- the capability to store the acquired information for real-time or post-processing elaboration;
- getting acquainted with electronics operating also in harsh environments.
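A minimal sketch of the node's reading path, assuming a two-point (dry/water) calibration; the raw counts and porosity below are placeholders for the values obtained in the laboratory characterization described above:

```python
# Convert raw ADC counts from a capacitive soil-moisture probe into an
# approximate volumetric water content (VWC) via linear interpolation
# between the dry-soil and in-water calibration points.
RAW_DRY = 52000    # ADC counts in dry soil (assumed)
RAW_WATER = 21000  # ADC counts in water (assumed)

def volumetric_water_content(raw: int, porosity: float = 0.45) -> float:
    """Map raw counts to an approximate VWC in [0, porosity]."""
    frac = (RAW_DRY - raw) / (RAW_DRY - RAW_WATER)
    frac = min(max(frac, 0.0), 1.0)          # clamp sensor noise
    return porosity * frac

# On a real node this raw value would come from the microcontroller ADC.
print(f"VWC ~ {volumetric_water_content(36000):.3f} m3/m3")
```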
Contact person: Paolo Carbone - The development of hardware and software technologies to improve human-computer interaction (HCI) has produced a new generation of highly sophisticated sensors that open up novel application domains. This is the case, for instance, of 24 GHz and 76 GHz radar sensors that offer, at very reasonable cost, processing platforms to detect falls of elderly people, count people in a room, and track the movement of people and objects. As another example, consider time-to-digital measurement-based devices that, at unprecedentedly low cost, offer the possibility of measuring very accurately the position of small objects at close distance, enabling gesture recognition and improved HCI. As a third example, consider magnetic technologies applied to short-range movement tracking, developed in the lab of the research proponent, which enable applications in healthcare (e.g., monitoring of Parkinson's disease) or telecontrol. All these technologies share a common denominator: the possibility of better interacting with reality can be empowered by a clever application of dedicated numerical processing procedures aimed at feature extraction and precise measurements. The research lab has extensive experience in the development of technologies for the interaction of a user with their environment, through the development of applications based on ultrasound, ultra-wideband, and magnetic physical principles. The lab also has several years of experience in the design and tuning of algorithms and estimators aimed at easing the measurement of complex quantities: these techniques range from the design of very simple test signals having specific spectral properties, to the development of a 1-bit spectral analyzer, using a 1-bit DAC and a 1-bit ADC for the online identification of linear and nonlinear systems. The proposed research activity aims at exploring the many possibilities offered by the development of dedicated data processing algorithms and estimators for maximal exploitation of available sensors and HCI technologies. The PhD candidate will work both on the development of new, simple hardware to better interact with available hardware technologies and on the development of new estimators to realize low-cost, low-energy, and low-complexity measurement systems, specifically for the IoT application domain. This will include the development of a framework for very low-complexity 1-bit data acquisition systems, to ease data transfer and processing in huge multichannel, massive multiple-input multiple-output (MIMO) systems. It will also include the design and implementation of clever algorithms for the design of low-resolution test signals to shorten the measurement time of otherwise lengthy measurement procedures, such as in the case of electrochemical impedance spectroscopy applied to battery state-of-health monitoring, or sensing for cultural heritage or healthcare. The overall subject of frugal measurements will be explored: finding simple-to-realize measurement architectures that enable the user to easily interact with reality and capture the needed information.
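The feasibility of 1-bit spectral analysis rests on a classical result: for zero-mean Gaussian signals, the Van Vleck arcsine law recovers the true normalized autocorrelation from sign-quantized samples. A small numerical check (the signal model and lag below are our assumptions):

```python
# Van Vleck correction: rho = sin(pi/2 * rho_1bit), where rho_1bit is the
# normalized autocorrelation of the sign-quantized (1-bit) samples.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = np.convolve(rng.normal(size=n), np.ones(8) / 8, mode="same")  # correlated Gaussian
s = np.sign(x)                                                    # ideal 1-bit ADC

def norm_autocorr(y, lag):
    return np.mean(y[:-lag] * y[lag:]) / np.mean(y * y)

lag = 3
rho_true = norm_autocorr(x, lag)
rho_1bit = np.sin(np.pi / 2 * norm_autocorr(s, lag))  # Van Vleck correction
print(f"true: {rho_true:.3f}  reconstructed from 1-bit: {rho_1bit:.3f}")
```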
Contact person: Mauro Femminella - The 6G technologies under definition are expected to face several challenges. 6G presents itself as the umbrella under which a number of heterogeneous technologies can coexist, ranging from molecular communications, in access segments where classical electromagnetic communications are not possible, to system-wide cloud-native deployment and testing. In addition, given the claims about the potential improvements brought by 6G in terms of key performance indicators (KPIs), ever more challenging applications are being defined to fully leverage these capabilities, from time-sensitive networking to metaverse applications, opening the way to future evolutions of the network itself. Given this heterogeneous scenario, the research proposal of the candidate can be shaped according to several different themes, which will be selected at the beginning of the doctoral course. Potential activities include, but are not limited to:
• Molecular communications, based on the transport of molecules as information carriers through advection/diffusion, in biological scenarios where e.m. communications are not possible or recommended. The research activity could focus on the use of multiple molecules to improve the amount/quality of transferred information in different scenarios (a minimal channel-model sketch follows this list). In this regard, artificial intelligence techniques could allow selecting the most suitable set of molecules and their optimal release processes, to minimize potential interference with ongoing biological processes.
• Tools for validation of 6G KPIs: design and development of a novel validation suite for 6G KPIs, able to abstract all the underlying operations regarding configuration and testing of individual technologies or services. The proposed tool could also be used for root-cause analysis of failures in 6G networks. The adoption of robust deep learning techniques allows extracting information from the inner components of the network with minimal overhead, by leveraging novel data acquisition tools. In this regard, the proposed approach will pursue the realization of cloud-native, serverless software probes leveraging data-plane programmability to limit intrusiveness and resource consumption.
• Usage of edge computing for metaverse applications: cloud-native serverless technologies allow deploying agile software modules acting as micro-functions to manipulate objects in augmented/virtual/extended reality applications. Deployment in edge nodes will guarantee limited latency for accessing objects. Research will focus on technologies guaranteeing efficient usage of edge resources and latency-bounded services through the adoption of artificial intelligence tools.
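For the molecular-communication theme above, the simplest channel model is purely diffusive: the Green's function of Fick's second law gives the concentration seen at a receiver after an impulsive release (all parameter values below are illustrative assumptions):

```python
# Concentration at distance r and time t after an impulsive release of Q
# molecules into a 3-D diffusive medium (no advection, for simplicity).
import math

def concentration(q, d, r, t):
    """Molecules per m^3 at distance r (m), time t (s); d = diffusion coeff (m^2/s)."""
    return q / (4 * math.pi * d * t) ** 1.5 * math.exp(-r**2 / (4 * d * t))

Q = 1e6          # released molecules (assumed)
D = 1e-10        # diffusion coefficient, typical order for small molecules in water
r = 10e-6        # 10 micrometers to the receiver (assumed)

# Peak arrival time for a point receiver in 3-D: t_peak = r^2 / (6 D).
t_peak = r**2 / (6 * D)
print(f"t_peak = {t_peak:.3f} s, C(t_peak) = {concentration(Q, D, r, t_peak):.3e} /m^3")
```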
Contact person: Federico Alimenti - In recent years we have witnessed a mighty development of electronics and its pervasive use in all spheres of society: electronics is counted among the technologies necessary for the progress of advanced economies, as demonstrated by the recent semiconductor crisis and its impact on many production chains. Modern electronic applications in key sectors such as energy, automotive, and aerospace require extreme miniaturization, state-of-the-art functional specifications, and extremely high reliability. The safety of self-driving vehicles, for example, relies on on-board equipment. Such applications are called "safety critical" and require the large-scale extension of methods (design, manufacturing, testing, and qualification) typically adopted by the aerospace industry. In particular, semiconductor manufacturers will need to replace the current sample tests, which are valid for non-critical applications, with tests on all manufactured devices (100% coverage). It is estimated that, without the development of ad hoc technologies compatible with the smart factory vision (Industry 4.0), this will increase manufacturing costs by 25 to 50 percent. One way to address this challenge is to move test equipment from the laboratory onto the silicon itself. The idea is to miniaturize instruments and test benches, or at least parts of them, by integrating them together with the electronics to be tested. This approach, known in the literature as Built-In Self-Test (BIST), has attracted increasing scientific interest in recent years, in fields ranging from automotive to aerospace and from digital to analog and radio-frequency electronics. In 2013, the IEEE 1149.1-2013 standard for managing BIST system interfaces in integrated circuits was defined. BISTs could be used in reliability qualification either for collecting on-chip data (electrical parameters, temperature, etc.) or as actuators of stress factors to accelerate device aging (heaters, etc.). Such a use, however, has not yet emerged from the literature. Similarly, there are no publications devoted to the integration of BISTs with the high-power transistors required for power generation and management. Finally, the adoption of BISTs in radio-frequency integrated circuits appears relevant both because of the application implications (telecommunications, radar sensors) and because of the significant cost of measuring instruments in the millimeter-wave bands (> 100 GHz), where sixth-generation (6G) services will be allocated.
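The classic digital BIST pattern can be sketched in software: a linear-feedback shift register (LFSR) generates pseudo-random stimuli and a multiple-input-signature-register-style compactor reduces the responses to a signature compared against a golden value (the polynomial, widths, and the stand-in "circuit" below are toy assumptions):

```python
# Software model of LFSR stimulus generation + signature compaction.
def lfsr(state, taps=(16, 14, 13, 11), width=16):
    """One Fibonacci-LFSR step (x^16 + x^14 + x^13 + x^11 + 1)."""
    fb = 0
    for t in taps:
        fb ^= (state >> (t - 1)) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def circuit_under_test(x, faulty=False):
    y = (x * 0x9E37) & 0xFFFF           # stand-in for the logic being tested
    return y ^ 0x0004 if faulty else y  # a flipped bit models a defect

def signature(faulty, n=1000):
    pat, sig = 0xACE1, 0
    for _ in range(n):
        pat = lfsr(pat)                               # pseudo-random stimulus
        sig = lfsr(sig ^ circuit_under_test(pat, faulty))  # response compaction
    return sig

golden = signature(faulty=False)
print("pass" if signature(faulty=True) == golden else "fail (defect detected)")
```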
In this context, there are three scientifically relevant and still under-researched areas that can benefit from a systematic use of the BIST approach: reliability qualification, high-power electronic devices, and millimeter-wave integrated circuits. These topics, which are characterized by a high degree of novelty, will form the core of the proposed research. The present Ph.D. project has the following objectives: 1) to investigate microelectronic technologies and BIST circuit design methods for millimeter-wave electronic circuits; 2) to develop a library of integrated components on silicon (elementary blocks) for BIST applications, prototype the circuits, and characterize them experimentally with laboratory tests; these blocks should be combinable to implement the specific measurement technique, increasing the capability to stimulate and diagnose defects and to measure the performance of the electronics during qualification tests (Test for Reliability); 3) to apply the BIST approach to the reliability qualification of silicon-integrated millimeter-wave circuits and to their performance monitoring in operational contexts (In-Field Monitoring).
The PhD research will be carried out at the Department of Engineering of the University of Perugia, in strict cooperation with Eles SpA, Todi, Italy (https://www.eles.com/). The Technical University of Chemnitz, Germany (https://www.tu-chemnitz.de) will also be a partner in this project: an Erasmus+ agreement is active, and there is strong interest in the issues and objectives of the research. The Fraunhofer Institut ENAS, Chemnitz, Germany (https://www.enas.fraunhofer.de) is initiating a cooperation with both Eles SpA and the Department of Engineering on aspects related to reliability and BIST systems. The research will require the design, realization, and experimental characterization of integrated circuits in Si CMOS and SiGe BiCMOS technologies through Multi-Project Wafers (MPW).
Contact person: Federico Alimenti - The context and motivation of this project are the same as those of the preceding BIST proposal: safety-critical electronic applications demand full (100%) test coverage at acceptable cost, and the Built-In Self-Test (BIST) approach described above moves miniaturized test instruments onto the silicon, together with the electronics to be tested.
In this context, there are three scientifically relevant and still under-researched areas that can benefit from a systematic use of the BIST approach: reliability qualification, high-power electronic devices, and millimeter-wave integrated circuits. These topics, which are characterized by a high degree of novelty, will form the core of the proposed research. The present Ph.D. project has the following objectives: 1) to investigate microelectronic technologies and BIST circuit design methods for analog and power electronic devices; 2) to develop a library of integrated components on silicon (elementary blocks) for BIST applications, prototype the circuits, and characterize them experimentally with laboratory tests; these blocks should be combinable to implement the specific measurement technique, increasing the capability to stimulate and diagnose defects and to measure the performance of the electronics during qualification tests (Test for Reliability); 3) to apply the BIST approach to the reliability qualification of silicon-integrated analog and power electronic devices and to their performance monitoring in operational contexts (In-Field Monitoring).
The PhD research will be carried out at the Department of Engineering of the University of Perugia, in strict cooperation with Eles SpA, Todi, Italy (https://www.eles.com/). The Technical University of Chemnitz, Germany (https://www.tu-chemnitz.de) will also be a partner in this project: an Erasmus+ agreement is active, and there is strong interest in the issues and objectives of the research. The Fraunhofer Institut ENAS, Chemnitz, Germany (https://www.enas.fraunhofer.de) is initiating a cooperation with both Eles SpA and the Department of Engineering on aspects related to reliability and BIST systems. The research will require the design, realization, and experimental characterization of integrated circuits in Si CMOS and SiGe BiCMOS technologies through Multi-Project Wafers (MPW).
Referente: Gianluca Reali - Physics of Failure (PoF) for reliability is the application of knowledge of physical degradation processes in semiconductor devices to determine the life expectancy of an integrated circuit under specified conditions. Central to this approach is the documentation of expected environmental stresses and the identification of failure modes, failure locations, and failure mechanisms occurring within the device. Physics-based models, which model the time to failure due to the identified failure mechanisms, are used to quantify life expectancy and prioritize failures. PoF for reliability can be applied in all phases of the life cycle of an integrated circuit (IC) device and can help improve integrated circuit design, select appropriate qualification tests, perform accelerated testing and virtual reliability evaluation, monitor the health of a product and predict failures. Evaluating the reliability of electronic systems using prediction manuals can lead to erroneous reliability predictions due to the assumption of a constant failure rate and the imprecision of the proposed semi-empirical models. Although PoF is a practical approach to evaluate the reliability of semiconductor devices, and PoF modeling has been reported to provide the best estimates of reliability, the methodologies used are generally insufficient to address multi-mechanism failures. The first objective of the project is methodological, and consists in the identification of innovative metrics that can be associated, right from the design phase, with the problems of reliability, as well as those of functional correctness, testability and manufacturability, extending the concepts of Design for Testability (DfT) to reliability testing and introducing the concept of Design for Reliability Test (DfRT), with the automation of the DfRT process. The standardization of the test phase (TfR) and the improvement of the analysis processes in the Learn from Failure (LFF) phase are further strategic objectives. The activities consist, first of all, in identifying the current problems and gaps and defining the future challenges. Secondly, they aim to define relevant data to characterize the failure mechanisms, addressing design and manufacturing process issues. The last step is to determine how the TfR (Test for Reliability), always based on the PoF, the testing data and the data from the field can improve the failure model. The PhD research will be carried out at the Department of Engineering of the University of Perugia in close cooperation with Eles SpA, Todi, Italy (https://www.eles.com/). The Technical University of Chemnitz, Germany (https://www.tu-chemnitz.de) will also be a partner of this project. An Erasmus+ agreement is active and there is a strong interest in the issues and the objectives of the research. The Fraunhofer Institut ENAS, Chemnitz, Germany (https://www.enas.fraunhofer.de) is initiating a cooperation with both Eles SpA and the Department of Engineering on these aspects.
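To make the notion of a physics-based life model concrete, the following minimal sketch (illustrative only; the activation energy and temperatures are assumed values, not project data) computes the classical Arrhenius thermal acceleration factor used to relate accelerated-test time to use-condition time:

# Minimal sketch (assumed values): Arrhenius thermal acceleration factor,
# a standard physics-of-failure relation mapping accelerated-test time at
# T_STRESS to equivalent time at T_USE.
import math

K_B = 8.617e-5                     # Boltzmann constant [eV/K]
E_A = 0.7                          # activation energy of the mechanism [eV] (assumed)
T_USE, T_STRESS = 328.15, 398.15   # 55 degC use vs. 125 degC stress [K] (assumed)

af = math.exp(E_A / K_B * (1.0 / T_USE - 1.0 / T_STRESS))
print(f"Acceleration factor AF = {af:.1f}")
# 1000 h of stress testing would then correspond to roughly AF * 1000 h
# at use conditions, for this single mechanism.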
Referente: Gianluca Reali - Physics of Failure (PoF) is a research area oriented towards determining the root causes of failures of electrical, electronic or electromechanical equipment. PoF is aimed at understanding the physical, chemical, mechanical, thermal or electrical stresses that can degrade equipment or cause failure. The discipline also provides algorithms for estimating the probability of failure in specific systems and includes the study of design and analysis functions during the development cycle phases, which are progressively becoming data-oriented. PoF research has evolved into the engineering approach of Reliability Physics Analysis (RPA). RPA implements the principles of PoF for specific applications, producing guidelines and analysis tools called Design for Test (DfT). These assist the designer in creating testable designs without introducing additional design time. In the electronics industry, manufacturers are turning to the latest chip-scale device packaging technologies and other solutions that provide the required functionality and miniaturization. However, in the implementation of the solutions identified, difficulties emerged in accessing the printed circuit boards and in configuring the devices. These issues are addressed in the IEEE 1149.1 standard, which provides pin-level access regardless of device packaging technology. Nowadays almost all popular complex integrated circuits support IEEE 1149.1 test features. At the PCB level, it is the responsibility of hardware designers and project managers to use the available IEEE 1149.1 device features to achieve better testability and implement DfT at the board level. By making use of this technology, data collection functions have been introduced during the test, whose processing requires the adoption of data management and machine learning techniques to make a definitive qualitative leap in analysis and design techniques, aimed at improving the reliability of the devices. The project has the following objectives: • Define relevant data to characterize the failure mechanisms: design and manufacturing process issues need to be addressed. • Determine how TfR (Test for Reliability) can improve the model (test data capture driven by PoF modeling). • Data-oriented refinement of the PoF model (experimental/field data), linking other data in the overall test chain (from IC@Wafer to SLT, from process data and field data) to the model. • Enhance the LFF (Learn From Failure) feedback and provide better diagnosis to IC design via DfR (Design for Reliability) libraries. • Define techniques to identify model parameters, focused on experimental test data; this approach will complement and supplement the traditional structural approach based on the use of model equations, producing a complete data-driven modeling process. • Develop inference and artificial intelligence methodologies for the extraction of the useful information present in the experimental datasets. The PhD research will be carried out at the Department of Engineering of the University of Perugia in close cooperation with Eles SpA, Todi, Italy (https://www.eles.com/). The Technical University of Chemnitz, Germany (https://www.tu-chemnitz.de) will also be a partner of this project. An Erasmus+ agreement is active and there is a strong interest in the issues and the objectives of the research.
The Fraunhofer Institut ENAS, Chemnitz, Germany (https://www.enas.fraunhofer.de) is initiating a cooperation with both Eles SpA and the Department of Engineering on these aspects.
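As an illustration of the data-driven identification of model parameters mentioned among the objectives, the sketch below (synthetic data and an assumed Arrhenius life model, not project results) fits model parameters to accelerated life-test measurements by non-linear least squares:

# Minimal sketch (synthetic data): identification of Arrhenius model
# parameters (A, Ea) from accelerated life-test results at several
# temperatures, via non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def ttf_model(T, A, Ea):
    """Arrhenius time-to-failure model: TTF = A * exp(Ea / (kB * T))."""
    return A * np.exp(Ea / (K_B * T))

# Hypothetical median lifetimes [h] measured at three stress temperatures [K]
T_data = np.array([358.15, 378.15, 398.15])
ttf_data = np.array([4200.0, 1450.0, 560.0])

popt, pcov = curve_fit(ttf_model, T_data, ttf_data, p0=(1e-6, 0.7))
A_fit, Ea_fit = popt
print(f"A = {A_fit:.3e} h, Ea = {Ea_fit:.2f} eV")
print(f"Predicted TTF at 55 degC: {ttf_model(328.15, *popt):.0f} h")

The same fitting scheme extends naturally to other PoF model equations once the relevant experimental features have been defined.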
Curriculum Ingegneria Industriale
Referente: Francesco Bianconi - The definition of mathematical models to compute shape and texture features from planar and volumetric images is central to a number of applications, including product inspection, object classification, surface grading, content-based multimedia retrieval and computer-assisted medicine. Until not long ago the approach to the problem used to be model-based, with the visual features being designed by hand (hence the term 'hand-crafted'). In recent years, however, research has been shifting towards data-based models (Deep Learning). It is still an open issue, however, how the knowledge from the mathematical features defined by hand ('engineered') can be integrated with Deep Learning models to produce high-performance visual models. The overall objective of this research activity is to investigate suitable shape and texture analysis methods to extract meaningful features from planar and volumetric images. Particular attention will be devoted to the development of descriptors for symmetry. Applications will focus on the following topics: 1. recognition and characterization of the visual appearance of industrial materials; 2. analysis of three-dimensional medical scans (e.g. PET, CT and MRI) for computer-assisted diagnosis and prognostication.
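For concreteness, the following minimal sketch (illustrative; it uses scikit-image and one of its sample images) computes a classical hand-crafted texture descriptor, the Local Binary Pattern histogram, of the kind that could be combined with learned deep features:

# Minimal sketch (illustrative): Local Binary Pattern (LBP) histogram,
# a classical hand-crafted texture descriptor.
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

image = data.camera()                  # sample grayscale image
P, R = 8, 1.0                          # 8 neighbours on a radius-1 circle
lbp = local_binary_pattern(image, P, R, method="uniform")

# Normalized histogram of the uniform patterns = the texture feature vector
n_bins = P + 2                         # the 'uniform' method yields P + 2 codes
hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
print(hist.round(3))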
Referente: Linda Barelli - In the European scenario, production from renewable energy sources (RES) is strongly encouraged by Community policies to achieve the full decarbonisation target by 2050. However, the penetration of renewable energy in the electricity mix causes problems of grid congestion and perturbation due to its high variability over time (i.e. fluctuating and intermittent production profiles, particularly for solar and wind power plants). To mitigate grid instability, RES plants are often curtailed during low-consumption periods, with negative effects on both revenues and DSCR. In some European local areas, the need for RES curtailment is also greater due to critical infrastructure issues of specific grid districts affected by large RES installations. Synchronization of network reserves and the integration of energy storage systems (ESSs) in the electric grid can be seen as two effective and complementary solutions to overcome the above-mentioned RES technical limits. This is consistent with the objectives of the IEC TC 120 work program, where ESSs are identified as a solution to efficiently deliver sustainable, economic and secure electricity supplies, allowing a better RES exploitation and penetration. Moreover, ESSs coupled to RES plants deserve particular interest with reference to micro-grids in remote and non- or low-interconnected areas, including developing countries. However, installed energy storage capacity is currently very limited in the European and worldwide scenarios. The challenge for energy storage penetration is technological and, above all, economic. Regarding technological issues, the key to their penetration is the improvement of ESS performance in terms of availability, durability, efficiency, energy density and response time, together with a cost reduction with respect to the current state of the art. Moreover, safety issues also have to be addressed. As is well known, lithium-ion battery technology is the one most limited by safety issues, and the history of fire and explosion incidents shows numerous events. The thermal runaway phenomenon originates from an internal short circuit leading to fast heat release, temperature increase, explosion, fire and emission of hazardous substances. Battery Management Systems perform the function of active protection, but the residual risk must be taken into account in life-cycle and plant feasibility analyses. System vulnerability is intrinsic, because of a small safety window in the T-V diagram, with operational limits of -10 to 90 °C and 2.3 to 4.1 V. Sodium-sulfur batteries still exhibit risks, even if safety measures like internal fuses and anti-fire boards are implemented. In any case sodium burns in contact with air and moisture, which is the most important issue to take into account, particularly to avoid external causes of incident like natural calamities. Hazardous substances are contained in the batteries and may be released in case of fire. Power-to-gas systems, based on water electrolysis, also exhibit risks connected with flammable gas. A specific issue is hydrogen-material compatibility. Dedicated metal alloys can be used as hydrogen permeation barriers, whose adoption must be evaluated to avoid metal embrittlement in piping and the consequent risk of fracture and leaks. Polyphenylene sulphide liners can be used in high-pressure pipelines, and high-density polyethylene in pressure vessels.
Equipment such as pressure regulators in decompression stations is based on elastomeric rubber membranes, whose materials are often subject to hydrogen permeation, leading to the risk of pressure increase in the downstream systems. Hydrogen permeation is a phenomenon that can also affect manometers and pressure transmitters, with consequences such as signal loss and measurement distortion. Countermeasures can be taken by using dedicated products and by adopting accurate equipment control planning. For the reasons indicated above, the development of enhanced energy storage technologies, and also their implementation in hybrid storage systems, is strongly needed, with attention to the use of sustainable, available and cheap materials and to safety issues at both component and system level. The activity should address one or more of the topics detailed in the following, which describe the framework of the research activity on the development and integration of energy storage technologies (for both stationary and transportation applications). The research activity in this field, performed through both experimental and simulation work, is aimed at the development of both innovative storage technologies and hybrid configurations, as well as at their application. Concerning enhanced technologies, the research is mainly focused on innovative flow batteries, considering their advantages for large-scale stationary installations, such as the independent scaling of power and capacity, high efficiency and cycling (long lifespan) and safety. Vanadium redox flow batteries are a promising technology also with respect to safety: the vanadium electrolyte is an aqueous solution, non-flammable, with no risk of ignition, explosion or thermal runaway thanks to the flowing electrolyte. The toxicity of the solutions is lower than that of lead-acid batteries, and the risk of corrosive leaks can be contained by double containment. Air-flow battery technology and Na/seawater batteries are also of interest as innovative and promising technologies. To provide a general view of the possible scope of the activity, solid oxide cells for reversible operation (electrolyzer/fuel cell, rSOC) and solid oxide electrolyzers for power-to-gas applications are also objects of study, exploring new and optimized operating conditions on the basis of previous work of the research group and focusing, at system level, on safety aspects concerning hydrogen. Molten carbonate technology is also of interest for the innovative application as electrolyzer or in reversible operation. Concerning hybrid systems, our research group has already investigated flywheel hybridization with other technologies characterized by higher energy capacity (such as rSOCs and batteries), extending its application range. At the same time, hybridization provides, mainly thanks to the fast response of the flywheel, beneficial effects towards the other base technologies. If batteries are considered, our research has already highlighted the potential extension of their lifetime due to the flywheel peak-shaving function, as well as a significant reduction of fluctuations towards the grid in the case of grid-connected systems. The design of these hybrid systems is innovative, particularly at small size. The research activity will be further focused in this direction to optimize and design hybrid systems customized for specific applications (stationary, transport, ...) and to assess their environomic impact.
The integration of enhanced and hybrid ESSs in complex systems will also be investigated in view of Multi-Energy System applications, whereby multiple energy sectors (e.g. electricity, transport) are optimally integrated to increase flexibility, allowing a smart integration of renewable sources in the energy system. Finally, social aspects related to the development of energy storage technologies, with attention to the social impact produced and also to the social perception of innovative devices and systems, can be objects of the research activity.
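As a minimal illustration of the flywheel peak-shaving concept recalled above (synthetic power profile and an assumed filter time constant, not project data), a first-order low-pass filter can split the power demand between a battery, which covers the slow component, and a flywheel, which absorbs the fast fluctuations:

# Minimal sketch (synthetic data): frequency-based power sharing in a hybrid
# storage system; a low-pass filter assigns the slow component to the battery
# and the fast residual (peak shaving) to the flywheel.
import numpy as np

dt, tau = 1.0, 60.0                 # time step [s], filter time constant [s] (assumed)
alpha = dt / (tau + dt)             # first-order low-pass coefficient

rng = np.random.default_rng(1)
t = np.arange(0, 3600, dt)
p_demand = 50e3 * np.sin(2 * np.pi * t / 1800) + 10e3 * rng.standard_normal(t.size)

p_batt = np.empty_like(p_demand)    # slow component -> battery [W]
p_batt[0] = p_demand[0]
for k in range(1, t.size):
    p_batt[k] = p_batt[k - 1] + alpha * (p_demand[k] - p_batt[k - 1])
p_fw = p_demand - p_batt            # fast component -> flywheel [W]

print(f"mean battery ramp: {np.abs(np.diff(p_batt)).mean():.0f} W/s, "
      f"flywheel power std: {np.std(p_fw):.0f} W")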
Referente: Linda Barelli - Enzymatic biofuel cells (EFCs) are bioelectronic devices that use oxidoreductase enzymes as electrocatalysts for the oxidation of an organic substrate and/or the reduction of oxygen or peroxide, aimed at the direct conversion of energy to electricity. Enzymes provide excellent specificity towards the substrates, avoiding, in some cases, the need for membranes and noble metals, thus realizing very compact systems suitable for miniaturization. Other advantages include high catalytic activity with low overvoltage for substrate conversion, mild operating conditions (such as ambient temperature and near-neutral pH) and low cost. EFCs can be utilized in a variety of applications which need low power input and biocompatibility of the device, including implantable or wearable biofuel cells, self-powered biosensors and, generally, portable battery-free power solutions. For enzymatic biofuel cell design, an effective immobilization of enzymes on the electrodes is an important challenge in order to obtain direct electron transfer without mediators, resulting in higher performance and improved long-term stability. The use of conductive nanomaterials and different types of polymers as electrodes allows a high specific surface to be achieved, increasing the number of wired enzymes per unit volume, and facilitates the electron transfer between the enzyme active site and the electrode. Interest in ammonia has recently grown strongly, as an energy and hydrogen vector also for feeding high-temperature fuel cells. Ammonia is nowadays substantially produced from fossil fuels, and an enzymatic path for renewable ammonia production is a current challenge. The activity should address one or more of the topics detailed in the following, which describe the framework of the research activity. The research activity focuses on the development and prototyping at lab scale of an innovative EFC design, with particular attention to glucose fuel cells (GFCs), a subtype of conventional EFCs able to oxidize the glucose provided by many metabolic processes. Specifically, the activity aims at the realization of a prototype with continuous substrate feeding at both cathode and anode, realizing a flow cell. The design of both the bioanode and the cathode is mainly focused on the use of glucose oxidase (GOx) and glucose dehydrogenase (GDH), but different enzymes (e.g. copper oxidases, such as laccase, or bilirubin oxidase (BOD) for biocathodes) could also be considered. Since glucose is an essential, relatively abundant and almost unlimited source of energy in living organisms, possible applications are the development of implantable GFCs as well as the exploitation of agro-industrial wastes in the framework of a circular economy. An activity on the integration of suitable enzymes in engineered devices for ammonia production is also a topic of great interest in the present research framework.
Referente: Linda Barelli - Green hydrogen enables several applications. Power-to-power applications provide storage of renewable energy for off-grid communities and remote locations, while power-to-gas allows the use of hydrogen from electrolysers directly as an energy carrier for several users, enabling different and multiple uses. Some examples are hydrogen-powered turbines, hydrogen-powered vehicles, hydrogen injection into the natural gas grid, and the use of hydrogen, combined with biogas or CO2, to produce clean methane or methanol. On-site industrial applications of green hydrogen deserve increasing attention for reducing CO2 emissions, since the gains provided by renewables are expected to remain largely confined to electricity generation, leaving other carbon-intensive industrial sectors unaffected. For steel and other industries using large quantities of hydrogen, on-site hydrogen generation using electrolysers coupled to renewable plants can be a greener alternative. However, the OPEX for a Direct Reduction Process of iron ore combined with an Electric Arc Furnace (DRP+EAF) running on hydrogen is estimated to be about 80% higher than that of the current reference production route, that is, a Blast Furnace combined with a Basic Oxygen Furnace (BF+BOF), which is based mainly on the use of coal. On the other hand, the OPEX for a DRP+EAF process running on natural gas is expected to be about 30% more than that of a BF+BOF process. For this reason, CCU technologies such as dry reforming of methane, which converts fossil CO2 emissions into a hydrogen-rich syngas (hydrogen mixed with CO) by exploiting waste heat to sustain the process, could also play a relevant role. The aim of the study is the experimental and numerical investigation and optimization of specific CCU and green hydrogen technologies, as well as their implementation in on-site industrial applications. Specifically, steelmaking and other industrial sectors which require large quantities of hydrogen, such as ammonia production or oil refining, are considered. In detail, the technological focus of the research activity is: - hydrogen production through electrolysis, or also co-electrolysis, by means of high-temperature electrolyzers; - dry reforming of methane, using fossil CO2 as reforming agent and waste heat to sustain the process. The activity therefore also includes the investigation of catalysts, to reduce the process temperature as much as possible while avoiding their deactivation by coking.
Referente: Francesco Di Maria - Modern cities can be considered as organisms that consume, for the needs of their population, food, materials, resources and energy, returning metabolites (e.g. products, energy) and catabolites. The latter are the main rejects of this metabolic activity, among which wastewater, waste, gases and airborne particles. Furthermore, considering that by 2050 70% of the world population will live in cities (UN, 2015), and that the 27 rapidly growing megacities currently consume about 9% of the world's electricity, generate 13% of its waste and host 7% of the global population (Kennedy et al., 2015), monitoring the health conditions in these areas will be of paramount importance for preventing any future pandemic disease. In effect, cities also represent a relevant source of rejected energy and materials that need to be properly reused, recycled and recovered in order to increase global environmental sustainability. For these complex aims, several research activities are ongoing, focusing on the use of urban catabolites as important means of environmental surveillance (Urban Catabolites Surveillance - UCS) for preventing future pandemic diseases, and on the development of new technologies and techniques for their successful reuse, recycling and recovery. Some results concerning UCS have already been achieved, with particular focus on the recent SARS-CoV-2 disease. In fact, wastewater, aerosols and also solid waste have been successfully analyzed and exploited for the early detection of the virus in given communities and areas. However, more work is necessary for a better development and implementation of such methodologies. The research activity will be focused on the analysis of new approaches and methodologies for the reuse, recycling and recovery of wastewater and solid waste generated in urban environments. Furthermore, these catabolites will also be further investigated for their exploitation for the early detection of present and future diseases, as well as for monitoring the health conditions of given communities and areas. The research activity will be carried out through literature review, mathematical model development and experimental analysis performed in collaboration with other research entities.
Referente: Francesco Castellani - Early fault diagnosis for rotating machines is fundamental in the optimization of the remaining operational life, future condition, or probability of reliable operation of the equipment based on the acquired condition monitoring data. Condition monitoring of rotating machinery is particularly important in energy conversion technology, especially when the operation regime is not steady. For example, wind turbines operate under non-stationary conditions, so that condition monitoring is challenging and interesting for developing research test cases. The literature on this topic is particularly focused on the drive-train and the bearings. Several techniques are possible, from physical models to data-driven prognostic models. Each of them has its pros and cons: basically, the main drawback of the former category is that real-life systems are often too stochastic to be successfully modelled, while the main drawback of the latter category is that feature extraction might be very complex, especially if the signal-to-noise ratio under an incoming damage is low. As regards data-mining methods, several approaches are possible: basically, the main categories are time-domain and frequency-domain feature extraction. Cyclostationarity has emerged in the last decades as a framework for characterizing certain types of non-stationary mechanical signals, such as those from rotating machines like wind turbines. It is argued that conventional cyclic spectral estimators end up with similar asymptotic results, and the rationale for selecting the most appropriate ones depends on the specific application. On these grounds, it is important to critically analyse the techniques for condition monitoring in relation to the target technology. At the Department of Engineering, several test cases for condition monitoring studies of rotating machines are available. For example, a small horizontal-axis wind turbine has been designed and its dynamic behaviour has been studied through wind tunnel tests (also in unsteady conditions) and aero-elastic simulations. Small horizontal-axis wind turbines are characterized by a very high rotational speed, modulated through a small rotor, in order to guarantee a reasonable energy conversion efficiency. This implies that this kind of wind turbine can be affected by severe noise and vibration issues, and devoted condition monitoring techniques are therefore needed. MW-scale wind turbines have a rotational speed of the order of 15 revolutions per minute, and most often the slow rotation of the main shaft is converted to the fast rotation of the generator through a gearbox. Condition monitoring of the gearbox is therefore crucial, and a considerable amount of scientific literature is devoted to this topic. Finally, test benches are available at the Department of Engineering for the detailed study of rotating devices, such as components of wind turbines. The condition monitoring of these components can also be inspiring for optimizing the mechanical design of rotating machinery. On these grounds, the objective of the project is the study of innovative techniques for condition monitoring of rotating machinery through the analysis of real test cases spanning a vast range.
A detailed match between the features of each target technology and the techniques for condition monitoring is expected to be discussed and developed; a further step, the application of these methods, is the main goal of the research.
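As an example of the frequency-domain feature extraction mentioned above, the following minimal sketch (synthetic signal; the demodulation band and fault frequency are assumed values) computes an envelope spectrum, a classical tool for bearing diagnostics in which fault frequencies appear as peaks:

# Minimal sketch (synthetic data): envelope-spectrum analysis for bearing
# diagnostics: band-pass around an excited resonance, envelope via the
# Hilbert transform, then the spectrum of the envelope.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 20_000                              # sampling rate [Hz] (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 87.0                        # e.g. an outer-race fault frequency [Hz]
# Periodic impacts (3 kHz resonance bursts at the fault rate) buried in noise
impacts = np.sin(2 * np.pi * 3_000 * t) * (np.sin(2 * np.pi * fault_freq * t) > 0.99)
x = impacts + 0.5 * np.random.randn(t.size)

b, a = butter(4, [2_000, 4_000], btype="band", fs=fs)   # demodulation band
xb = filtfilt(b, a, x)
env = np.abs(hilbert(xb))                # envelope from the analytic signal

spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, d=1 / fs)
print(f"dominant envelope line at {freqs[spec.argmax()]:.1f} Hz")  # near 87 Hz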
Referente: Lucio Postrioti and Ermanno Cardelli - Despite the exponential growth of electric vehicle registrations in recent years, the main barriers to their penetration remain specific energy, specific power, safety, cost (related to cycle life and calendar life), low-temperature performance (e.g., cold start-up in winter) and charge time, beyond the gaps in knowledge about the Battery Management System (BMS). For an efficient vehicle operation, the development of an efficient BMS is crucial. In particular, the design of BMS strategies requires a detailed knowledge of the battery pack characteristics, based either on simulation (typically according to a lumped-parameters approach) or on experimental analysis. The battery performance is significantly affected by temperature, hence battery thermal analysis is crucial both for the BMS and for the battery pack design. It is important to understand how heat is generated inside a cell and how to dissipate it properly. However, the factors influencing the thermal behaviour of the cell are quite complex: multiple mechanisms, including electrical, electrochemical and heat-transfer phenomena, are coupled, and the involved parameters change with time, temperature, State of Charge (SoC), State of Health (SoH), etc. The present research activity aims at developing a comprehensive analysis methodology for the characterization of cells and battery packs for automotive applications, based both on lumped-parameter simulation (by Gamma Technology AutoLi-Ion) and on experimental analysis (by an ITech 18 kW-500 V battery emulator/tester). The experimental activity will be aimed at building up the battery model for BMS strategy design and at supporting the battery pack design by CFD-3D simulation (Ansys Fluent). The CFD simulation tools, supported and validated with experimental tests, constitute one of the most flexible and efficient instruments for the thermal analysis of the single battery, the battery package and its cooling system. The objective of the thermal management pillar of the proposal consists in understanding the driving parameters of the heat transfer in batteries and the connection with the BMS.
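A minimal sketch of the lumped-parameters approach cited above is given below (all parameter values are hypothetical): a first-order equivalent-circuit cell model coupled with a single thermal node, of the kind typically embedded in a BMS:

# Minimal sketch (assumed, illustrative parameter values): first-order
# equivalent-circuit cell model with coulomb counting and a lumped thermal node.
import numpy as np

R0, R1, C1 = 2e-3, 1e-3, 5e3      # ohmic resistance, polarization RC pair
Q = 10.0 * 3600                    # capacity [As]
Rth, Cth = 2.0, 400.0              # thermal resistance [K/W], heat capacity [J/K]

def ocv(soc):
    return 3.0 + 1.2 * soc         # crude linear OCV(SoC) curve (hypothetical)

def step(soc, v1, T, i, dt=1.0, T_amb=25.0):
    """Advance SoC, RC-branch voltage and lumped temperature by one time step."""
    soc = soc - i * dt / Q                          # coulomb counting
    v1 = v1 + dt * (-v1 / (R1 * C1) + i / C1)       # polarization dynamics
    v = ocv(soc) - R0 * i - v1                      # terminal voltage
    q_gen = R0 * i**2 + v1**2 / R1                  # irreversible heat [W]
    T = T + dt * (q_gen - (T - T_amb) / Rth) / Cth  # lumped thermal node
    return soc, v1, T, v

soc, v1, T = 0.9, 0.0, 25.0
for _ in range(600):                                # 10 min at 20 A discharge
    soc, v1, T, v = step(soc, v1, T, i=20.0)
print(f"SoC={soc:.3f}, V={v:.3f} V, T={T:.1f} degC")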
Referente: Filippo Cianetti - Smart structures, i.e. structures able to self-monitor some physical quantities without the need for external sensors, are rapidly spreading in several engineering sectors such as aerospace, automotive and bioengineering. Additive manufacturing, due to its several advantages in terms of geometric complexity and low production cost, is the most widely used manufacturing technique for the realization of structures with embedded sensors. At the moment there are two possibilities for the realization of smart structures through additive manufacturing: a hybrid approach, where the sensing element is manually inserted during the printing process, or a multi-material technique in which the structure and the sensor are printed in a single process. The recent possibilities of additive manufacturing allow, by exploiting the piezoresistive effect, the efficient realization of embedded 3D-printed sensors such as accelerometers and strain gauges. Given the well-known feasibility of 3D-printed sensors, in recent years research has moved towards their optimization, trying to increase both their static and dynamic performance. A set of printing parameters that guarantees the maximum sensitivity of the instrument is the current research focus. However, few research activities have addressed the optimization of the shape and positioning of the sensor within the structure. These last two aspects are nevertheless of considerable importance when dealing with the fabrication of smart structures, for three main reasons: (i) both the shape and the position of the 3D-printed sensor may contribute to a possible increase of the sensor performance; (ii) particular locations may introduce geometrical imperfections in the base structure, thus decreasing the structural strength of the component under external excitations; (iii) if the sensor fails due to external loads, is it still able to monitor the physical quantity for which it was realized? Since additively manufactured smart structures are frequently replacing components fabricated with conventional techniques, the previous questions must necessarily be answered. Only in this way will it be possible to define a general procedure for the realization of 3D-printed smart structures that guarantees high performance of the 3D-printed sensors, reliable smart structures and high structural strength of the component. The purpose of the proposed research project is to design 3D-printed smart structures, trying to optimize (through numerical simulation, machine learning and advanced approaches) the shape and the position of the 3D-printed sensor within the base structure. The main targets will be to maximize the sensor's performance and its ability to monitor physical quantities under whatever stress condition, while trying to maintain good structural strength properties of the base structure. The results initially obtained will be experimentally validated on simple structures, with ad-hoc tests, in order to certify their accuracy. Once satisfactory results are obtained, the 3D-printed sensors will be inserted into real mechanical components or systems, existing and/or newly produced, with applications ranging from aerospace and automotive to the agricultural sector.
Since smart structures are of general applicability, many engineering disciplines will be involved, facing problems related to structural strength, piezoresistivity and thermal effects.
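As a reference for the piezoresistive sensing principle exploited by the 3D-printed sensors, the standard textbook relation between relative resistance change and strain (quoted here for context, not a project-specific result) is:

\[
\frac{\Delta R}{R_0} = k\,\varepsilon,
\qquad
k = 1 + 2\nu + \frac{\Delta\rho/\rho_0}{\varepsilon},
\]

where \(k\) is the gauge factor, \(\nu\) the Poisson ratio and \(\Delta\rho/\rho_0\) the relative resistivity change; the printing parameters mentioned above act essentially on \(k\).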
Referente: Filippo Cianetti - Vibration environments generated by rotating machinery are often characterized by sinusoidal waves superimposed on background random noise with a wideband frequency range. This category of signals, known as Sine-on-Random (SoR) excitations, is specified in international standards such as MIL-STD-810G or RTCA DO-160G, and covers helicopters, propeller aircraft and rocket engines as typical applications. While in the time domain a SoR signal is readily obtained by superimposing (adding) one or more sinusoidal waves on random noise, its description in the frequency domain becomes more complicated: in fact, it is customary to represent the random signal in terms of a one-sided PSD, while the pure tones are expressed by means of their Fourier transform. The result is a composite spectrum, where the power densities of the random noise and the amplitudes of the sinusoids coexist. For these reasons the assessment of Sine-on-Random excitation, in terms of vibration response and related fatigue damage, in the frequency domain is still a challenge, as testified by the significant works produced by Braccesi, Cianetti, Kihm, Angeli et al. As is known, the main issue in spectral methods is to find a relation between the shape of the response PSD and the reliability of the analysed structure. The key point, in applications where structural failure may occur due to exceeding the overload threshold or due to the accumulation of fatigue damage, is to know the probability distributions of the appropriate processes (e.g. envelope, range, mean...). Several methods have been developed for dealing with random processes and, obviously, for the sinusoidal process no special remarks are needed, considering its deterministic nature. However, the determination of the statistical properties of n sinusoidal waves added to random noise has been an issue for more than a century. First described by Lord Rayleigh as a vibratory investigation, the question assumed a mathematical fashion in the formulation proposed by Pearson in the well-known article "The Problem of the Random Walk". During the 1940s, understanding the statistics of instantaneous and extreme values of the sum of n sinusoids and random noise became a pivotal problem in radio transmission. The pioneering studies by Rice and Slack well defined the boundaries of what would later be described as "Rician fading", solving the case n = 1. In the 1970s several authors again addressed the issue, providing their contributions: Esposito et al. found a closed-form solution for n = 2, and Goldman solved the case n = 3. In the same years Barakat, while working in optoelectronics and investigating applications to lasers, also gave relevant contributions: he found the solution for n fixed and Poisson distributed, and he cast the results in a form suitable for numerical computation. It has to be noticed that the field of application (e.g. communications, optics, pure mathematics) modifies the hypotheses, the approach to the problem, and therefore the solution: the works of Rice are based on the assumption of "[...] noise as being confined to a relatively narrow band and the frequencies of the sine waves lying within, or close to, this band". What happens if the sine waves are far (below or above in frequency) from the random band? As a matter of fact, in typical SoR spectra the conditions are completely different: the sine frequencies are lower than a wide random noise band.
It was realised that the distance, in the Fourier domain, between the central frequency of the sine and that of the random noise is decisive for the process. In particular, if \(Z(t)\) is a process composed of a sine wave at angular frequency \(\omega_s\) superimposed on random noise with central frequency \(\omega_r\), and if \(Z_{\max}(t)\) is the process of the maxima of \(Z(t)\), then a statistical distribution \(p_{Z_{\max}}(z)\) of \(Z_{\max}(t)\) is determined under each of the following conditions: (i) \(\omega_s \approx \omega_r\); (ii) \(\omega_s \ll \omega_r\); (iii) \(\omega_s \gg \omega_r\). The first case was assessed by Rice in "Mathematical Analysis of Random Noise" and led to the well-known Rician distribution; the second and the third are results of a previous PhD work, obtained by the summation of the appropriate random variables. To conclude, in the research activity of a previous PhD the very first step in the statistical description of Sine-on-Random spectra was taken, deriving the probability distribution of the maxima. The purpose of the proposed research project is to extend the previous activity to the case of multiple sine summation and to fatigue damage evaluation. The joint probability density function of mean and amplitude will be sought, and compared with rainflow counting simulations. The effect of complex dynamic systems on the response PDF will also be investigated.
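For reference, the classical result for case (i), quoted here from Rice's analysis, is the Rician probability density of the envelope \(R\) of a single sine wave of amplitude \(A\) in Gaussian noise of variance \(\sigma^2\):

\[
p_R(r) = \frac{r}{\sigma^{2}}\,
\exp\!\left(-\frac{r^{2}+A^{2}}{2\sigma^{2}}\right)
I_{0}\!\left(\frac{A r}{\sigma^{2}}\right),
\qquad r \ge 0,
\]

where \(I_0\) is the modified Bessel function of the first kind of order zero; for \(A = 0\) it reduces to the Rayleigh distribution.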
Advanced industrial solutions for waste recycling and recovery and related sustainability assessment
Referente: Francesco Di Maria - The implementation of the so-called waste management hierarchy is a fundamental step towards the circular use of resources. Of particular importance in this sector is the development of advanced industrial solutions able to maximize the effectiveness and efficiency of this goal. Nowadays the most widespread solutions are based on the separation at source of the different recyclable streams for their delivery to the recycling industry. However, several shortcomings are associated with this approach, such as costs, quality of the collected materials, environmental impact and social acceptance. Furthermore, other relevant waste streams, such as bio-waste and residual waste, cannot be managed without the adoption of complex and integrated plants. Nowadays such plants are mainly based on incineration, anaerobic digestion, composting, mechanical-biological treatment and similar facilities. It is clear, however, that their sustainability, in the broadest sense of the word, including their efficiency, cannot be maximized without an integrated approach. Integrating biological processes with thermal and mechanical ones is a key factor for maximizing the implementation of the waste hierarchy and the related effectiveness in the best use of waste materials. Beyond the identification of the most effective integrated plant treatment schemes, another important aspect is the assessment of their sustainability. Sustainability is a broad concept involving at least three main dimensions: social, environmental and economic. How to assess it is another challenging issue. Many solutions have been proposed, such as those based on multi-criteria analysis, cost-benefit analysis and risk assessment; however, the need for a more objective approach remains. The research activity will be focused on innovative and advanced integrated industrial waste treatment, recycling and recovery facilities. The integration will be based on the coupling of thermal, biological, physical and mechanical treatments able to maximize the extraction of recyclables, energy and fuels from bio-waste and residual waste. Together with this main issue, the development of advanced sustainability assessment methods will also be investigated.
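As a minimal illustration of the multi-criteria analysis mentioned above (criteria, weights and scores are entirely hypothetical, not project data), a weighted-sum scoring can rank candidate plant configurations on sustainability criteria:

# Minimal sketch (hypothetical data): weighted-sum multi-criteria analysis
# for ranking waste-treatment plant configurations on sustainability criteria.
import numpy as np

criteria = ["environmental", "economic", "social"]
weights = np.array([0.4, 0.35, 0.25])          # must sum to 1 (assumed priorities)

# Candidate plant schemes with raw criterion scores (higher = better, assumed)
options = {
    "AD + composting":        np.array([0.8, 0.6, 0.7]),
    "MBT + incineration":     np.array([0.5, 0.7, 0.5]),
    "Integrated bio+thermal": np.array([0.7, 0.8, 0.6]),
}

for name, scores in options.items():
    print(f"{name:25s} -> {float(weights @ scores):.3f}")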
Referente: Michele Battistoni - A novel process in post-combustion technology is carbon capture using a cryogenic process. A CO2-laden flue gas is cooled to desublimation temperatures (−100 to −135 °C), separating the CO2 in solid form. A sprayed contact liquid can be used to condense the gas to a solid on liquid droplets, so that a liquid CO2 stream can be recovered and separated. No equivalent industrial process exists, and therefore there is no prior operating experience with these systems. The performance of the Cryogenic Carbon Capture (CCC) system depends on the effective performance of the desublimation spray-tower heat exchanger. The process design and optimization require insights from high-fidelity multi-dimensional simulations, validated against novel data. To this end, a fundamental understanding of the underlying physical and chemical processes is critical. The proposed PhD project aims to enable and facilitate the development of CCC technology by enhancing predictive capabilities through high-fidelity modeling validated against parallel novel experiments. Since capturing detailed phase interfaces is impossible at the device scale, a subgrid-scale (SGS) model for interface heat and mass transfer needs to be developed. The models will account for the desublimation of carbon dioxide from the flue gas on the drop surface and for the dissolution from the frost layer into the inner liquid core. The closure models will then be coupled to existing LES multi-phase frameworks. These new spray model capabilities will be validated against experimental data, collected in spray chamber experiments by partners at the collaborating institutions, and then applied to assist the design and optimization of the contact-liquid desublimating heat exchanger, a crucial component of the cryogenic carbon capture system. The PhD activity is part of a large project also involving the KAUST and Cambridge universities. The research aims to develop a new two-phase flow solver for liquid sprays with a cryogenic desublimation phase-change model. The specific focus is on low-temperature conditions, multicomponent thermodynamics, and SGS models for the heat and mass transfer associated with CO2 desublimation on cold spray droplets. The code will be developed in OpenFOAM.
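To fix ideas on the droplet-scale transfer models involved, the following back-of-the-envelope sketch (all property values and conditions are assumptions, not project results) estimates the CO2 deposition rate onto a single cold droplet using a film model with the classical Ranz-Marshall correlation:

# Minimal sketch (illustrative assumptions): film-model estimate of the CO2
# desublimation rate onto a single cold droplet, with the Ranz-Marshall
# correlation for the Sherwood number.
import math

d = 200e-6                       # droplet diameter [m] (hypothetical)
u_rel = 2.0                      # gas-droplet relative velocity [m/s] (hypothetical)
rho_g, mu_g = 1.6, 1.1e-5        # flue-gas density [kg/m^3], viscosity [Pa s] (assumed)
D_co2 = 8e-6                     # CO2 diffusivity in the gas at low T [m^2/s] (assumed)
rho_inf, rho_surf = 0.25, 0.02   # CO2 partial density far away vs. at the
                                 # frost-covered surface [kg/m^3] (assumed)

Re = rho_g * u_rel * d / mu_g
Sc = mu_g / (rho_g * D_co2)
Sh = 2.0 + 0.6 * math.sqrt(Re) * Sc ** (1.0 / 3.0)   # Ranz-Marshall
k_m = Sh * D_co2 / d                                 # mass-transfer coefficient [m/s]
area = math.pi * d ** 2
mdot = k_m * area * (rho_inf - rho_surf)             # deposition rate [kg/s]
print(f"Re={Re:.1f}, Sh={Sh:.2f}, mdot={mdot:.3e} kg/s")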
Referente: Carlo Nazareno Grimaldi - The process of technological development of internal combustion engines (ICEs) is nowadays driven by two main factors: the need to reduce pollutant emissions harmful to human health, and the need to reduce the specific consumption of the engines and therefore to limit CO2 production. This development is the consequence of increasingly stringent regulations concerning emissions (carbon monoxide, unburnt hydrocarbons, particulates and nitrogen oxides [1]), both for engines used in transport vehicles and for stationary power plants for electricity generation or cogeneration. The transition to an economy completely independent of fossil fuels is not immediately possible; therefore, a phase of gradual transition from internal combustion engines to the large-scale use of electric motors and the exploitation of renewable energy sources is necessary. In the context of this transition, there is a need to pursue research on the various types of internal combustion engines in order to understand even more deeply the various phenomena underlying the conversion of energy, with a view to developing subsystems aimed at maximizing the efficiency (with lower CO2 production) and at limiting the production of pollutants in the combustion chamber, in order to reduce the quantity of chemical species to be converted in the aftertreatment systems.
The research project concerns the study of physical phenomena occurring inside the engine, in particular non-equilibrium plasma ignition and low-temperature combustion, and their optimization through the use of advanced systems and concepts. This path will be divided into an initial phase of analysis of publications on the state of the art in the engine industry, and subsequent phases of instrumentation and equipment integration for experimental research, such as those present in the laboratories of the Engineering Department of the University of Perugia. CFD-3D numerical simulations of the combustion generated by these concepts will be performed as well, and a tuning of the models will be carried out through comparison with experimental data. Finally, particular attention will also be given to the development of methodologies and tools for the analysis of system behavior, as well as of industrial production systems. The two main areas of research are: - The study of the behavior of spark ignition (SI) engines operating with lean and/or strongly diluted mixtures (lean combustion), such that high thermal efficiencies can be achieved. In such cases the operating limits are set by the approach to unstable conditions, described by factors such as the COV of IMEP (Coefficient of Variation of the Indicated Mean Effective Pressure). To extend these limits, innovative ignition systems such as multiple-spark systems, corona-effect ignition systems (e.g. ACIS - Advanced Corona Ignition System, and BDI - Barrier Discharge Igniter) or microwave systems are used, in order to overcome the intrinsic problems of conventional ignition systems. - The use of Low Temperature Combustion (LTC). Compression ignition (CI) engines have a generally higher efficiency than spark-ignition engines, but in the standard operating regimes with lean mixture, large quantities of nitrogen oxides are formed due to the high temperatures that provide the activation energies for the dissociation of chemical species into polluting products such as NOx. Due to the presence of areas in which the mixture is locally rich, on the other hand, particulate matter formation occurs; therefore the compression of a homogeneous or premixed charge at reduced temperatures would allow the simultaneous reduction of both main pollutants, together with an improvement of the overall efficiency due to a lower heat flux transferred from the working fluid to the chamber walls. As part of the present research project, different technologies potentially able to increase global efficiency and reduce emissions will be tested. Pursuing the two above-mentioned lines, the performance of prototype ignition systems for SI engines will be analyzed. A system designed to enable innovative low-temperature GCI combustion processes will also be implemented and studied. The physical comprehension of the phenomena will be supported by the use of CFD-3D simulations. The project will be able to exploit equipment and systems supplied by the Engineering Test Laboratory of the Engineering Department of Perugia. As far as LTC research is concerned, the transformation of the optical-access engine is planned in order to use a typical diesel common-rail system for the high-pressure injection of gasoline. The injection system is managed through a system based on software-controlled input/output cards developed as part of this research.
On the optical-access engine, which will also be used, in another configuration, to analyze the corona and/or microwave ignition systems, it is possible to perform non-intrusive observation of the in-cylinder combustion processes. This analysis is possible through the use of a high-speed camera (up to 1 Mfps) and a subsequent post-processing phase of the images through calculation codes developed in-house. This methodology is particularly useful for the analysis of the evolution of the flame front in its early propagation phases, in which the phenomenon is so rapid that it requires high temporal and spatial resolutions. Last but not least, cyclic dispersion phenomena, which are characteristic of internal combustion engines and can limit their operation at many operating points, are expected to be investigated (thanks to high-speed acquisitions extended over several cycles) through synchronized acquisitions of both the images of the evolution of the deflagration front and the combustion parameters that can be calculated from the measured in-cylinder pressure values. The analysis of innovative ignition systems will be performed not only with an experimental approach, but also with numerical simulations: models will be set up for the implementation of the particular types of igniters, which are often characterized by different shapes, materials and operational behavior. The ignition system itself will also be studied in terms of mechanical (both static and dynamic) characterization; the electromagnetic behaviour will also be analyzed and optimized. The different approaches and analyses can be considered separately or in conjunction in the proposed projects.
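As a reference for the stability metric cited above, the sketch below (hypothetical per-cycle data) computes the COV of IMEP from a series of consecutive engine cycles:

# Minimal sketch (hypothetical data): combustion-stability metric COV of IMEP,
# computed from per-cycle indicated mean effective pressure values.
import numpy as np

rng = np.random.default_rng(0)
imep = rng.normal(loc=9.5, scale=0.25, size=300)   # [bar], 300 consecutive cycles

cov_imep = 100.0 * imep.std(ddof=1) / imep.mean()  # [%]
print(f"COV_IMEP = {cov_imep:.2f} %")              # stable SI operation is typically
                                                   # associated with COV_IMEP of no
                                                   # more than a few percent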
Referente: Maria Cristina Valigi - The advancements in the fields of material science, soft robotics and pneumatics have enabled rapid progress in research on soft grippers, giving particular importance to the development and optimization of advanced grasping architectures, technologies and devices. Nowadays the most widespread solutions are based on application scenarios in which a closer cooperation between humans and robots is favoured. In particular, under-actuated and flexible robotic grippers represent innovative solutions in many industrial applications and human operations performed with collaborative robots, as they improve grasping and manipulation ability, allowing compliance with free-form objects. Soft grippers can be used on a large variety of objects of different shapes, textures and consistencies, and have enabled the increasing automation of many tasks previously deemed too delicate to be performed by robotic manipulation. Soft grippers enable compliant and controlled grasping by designing proper actuation, controlled stiffness and controlled adhesion. Furthermore, exploring the combination of different technologies and different material sets is an exciting direction for the conception of innovative compliant grippers. The research activity will be focused on the development and implementation of innovative and advanced grasping architectures and devices for soft robotics. Different materials and structures will be analysed and compared, with particular attention to the actuation technologies, the stiffness of the joints and the contact mechanics. For this reason, both numerical simulations and experimental campaigns will be performed to evaluate the grasping abilities and compliance. Particular attention will also be paid to the sustainability of the devices in terms of costs, quality of the used materials, environmental impact and social acceptance.
Referente: Silvia Logozzo - Robots and robotic devices are more and more often devoted to helping humans in their work in a human-robot collaborative environment, and the collaboration of humans and robots in close proximity is a stimulating feature of Industry 4.0. Collaborative robotic solutions are mainly employed in grasping and manipulation tasks next to human operators, or in close cooperation with them while preserving humans from hazards. The tribological aspects related to these kinds of devices concern the interaction of the end-effectors with operators, objects and the environment, the performance of their components in contact mediation in dry and wet conditions, and the durability of the devices, with attention to sustainability. Regarding the interaction between robotic end-effectors and the operators, objects and environment, the study of contact mechanics is a fundamental aspect, to be analysed both by theoretical models and by experimental approaches.
The study of the friction performance of the contact surfaces in dry and wet conditions is also a fundamental approach to predict and control contact stability during the operation of a robotic device. Exploring and investigating these tribological aspects is the aim of the proposed doctoral project, in order to optimize devices for robotic applications. The research activity will be based on both a theoretical and an experimental approach: on the one hand, the study and development of mathematical models to simulate the tribological behaviour of devices for robotic applications; on the other hand, the design and preparation of experimental setups to characterize devices and components. The activities will also be focused on the optimization of devices and components with regard to all the studied tribological aspects, with particular attention to the sustainable development goals.
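As an example of the theoretical contact-mechanics models mentioned above, the classical Hertz solution for a sphere of radius \(R\) pressed with force \(F\) against a flat surface (a textbook reference case, not a project-specific model) gives the contact radius and peak pressure:

\[
a = \left(\frac{3 F R}{4 E^{*}}\right)^{1/3},
\qquad
p_0 = \frac{3F}{2\pi a^{2}},
\qquad
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2},
\]

where \(E_i\) and \(\nu_i\) are the elastic moduli and Poisson ratios of the two bodies. Soft-gripper contacts typically fall outside the small-strain Hertzian assumptions, which is precisely why dedicated models and experimental characterization are needed.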