If available, calls for Ph.D. positions are listed on this page (please check the application deadlines). Access to a Ph.D. in Italy is subject to a public examination. Please refer to the following Ph.D. program page for the specific requirements and deadlines: ScuDO – Politecnico di Torino: requirements and call for applications
If you are interested in a proposal and wish to submit your application for the position, send us an email with the following information:
your updated CV
a summary of your Master’s thesis
references from two tutors or supervisors.
Available Theses
Enhancing Verbal Communication with Multiple Virtual Agents in Collaborative VR Environments
Thesis @CGVG Tutors: Edoardo Battegazzorre, Andrea Bottino TAGS: Virtual Reality, Multi-Agent Systems, Natural Language Processing, Human-Agent Interaction, Collaborative Interfaces
Virtual reality (VR) environments increasingly rely on the interaction between users and virtual agents, particularly in tasks requiring teamwork and collaboration. However, managing verbal communication with multiple agents presents unique challenges, such as discerning the intended recipient and ensuring accurate interpretation of commands. This thesis aims to address these challenges by investigating and developing systems that facilitate seamless communication with multiple virtual agents in VR settings.
The project will begin with an analysis of current solutions and approaches for multi-agent communication in VR. Based on these findings, a VR application will be developed to implement some of these systems for managing verbal interactions with multiple agents in a shared context. These systems may involve different strategies for agent recognition, dialogue management, and Natural Language Processing (NLP).
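As a purely illustrative example of one such strategy, the following Unity/C# sketch implements a simple gaze-based addressee detector (all names and thresholds are hypothetical, not part of an existing system): when a voice command is detected, the agent closest to the centre of the user's gaze is taken as the intended recipient.

using UnityEngine;

// Hypothetical sketch of one simple agent-recognition strategy: when the user
// speaks, the agent closest to the centre of their gaze is assumed to be the
// intended recipient of the command. Names and thresholds are illustrative.
public class GazeAddresseeDetector : MonoBehaviour
{
    public Transform head;         // e.g., the VR camera transform
    public Transform[] agents;     // the virtual agents in the scene
    public float maxAngle = 25f;   // tolerance cone in degrees (assumed value)

    // Returns the agent the user is most likely addressing, or null if the
    // gaze is not sufficiently aligned with any of them.
    public Transform GetAddressee()
    {
        Transform best = null;
        float bestAngle = maxAngle;
        foreach (Transform agent in agents)
        {
            float angle = Vector3.Angle(head.forward, agent.position - head.position);
            if (angle < bestAngle)
            {
                bestAngle = angle;
                best = agent;
            }
        }
        return best;
    }
}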
The final phase of the thesis will involve conducting user tests to evaluate the implemented systems. These tests will assess key factors such as usability, efficiency, and user satisfaction, providing insights into the strengths and weaknesses of each approach.
Required Skills:
Unity development
VR implementation techniques
Basic understanding of Natural Language Processing (NLP)
Using Large Language Models (LLMs) to Generate Complex Branching Stories: Analysis, Design, and Experimentation
Thesis @CGVG in collaboration with University of Turin and Museo Egizio di Torino Tutors: Andrea Bottino TAGS: Narrative Theory, Large Language Models, Interactive Storytelling, Branching Narratives, Story Design, Prompt Engineering
Large Language Models (LLMs), such as ChatGPT, represent a promising tool for the creation of interactive narratives, particularly branching stories where user choices lead to different paths and outcomes. This thesis aims to explore the design and implementation of a system capable of generating such stories, scene by scene, while maintaining stylistic and structural coherence throughout the narrative.
The core of the project involves the design of a framework for structuring the skeleton of a branching story. This framework organizes the story into interconnected scenes, where branching points represent choices that split the narrative into different paths. For each scene, the creator defines key elements, such as characters, locations, and objects, alongside a brief description of the intended events, mood, and tone. The system will use these inputs to generate the entire story, ensuring consistency across all scenes while respecting the branching structure.
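For illustration only, the skeleton described above could be represented with a minimal data structure along these lines (a hedged C# sketch; all names are hypothetical):

using System.Collections.Generic;

// Hedged sketch (all names are hypothetical) of a branching-story skeleton:
// a graph of scenes, where each choice links a scene to the next scene along
// one narrative path, and a scene without choices is an ending.
public class SceneNode
{
    public string Id;
    public List<string> Characters = new List<string>();
    public List<string> Locations = new List<string>();
    public List<string> Objects = new List<string>();
    public string EventsSummary;   // brief description of the intended events
    public string MoodAndTone;     // e.g., "tense", "lighthearted"
    public List<Choice> Choices = new List<Choice>();
}

public class Choice
{
    public string Text;            // the option presented to the reader
    public string NextSceneId;     // the scene this branch leads to
}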
The primary objective of the thesis is to conduct prompt engineering to enable LLMs to generate scenes that adhere to these specifications. This process will involve crafting and iterating on prompts that effectively guide the model to produce coherent, engaging, and contextually appropriate scenes based on the predefined structure and elements. Achieving this will require integrating insights from established theoretical frameworks for designing branching narratives and studying best practices in the literature on LLM applications for storytelling tasks.
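To make the idea concrete, a first naive scene prompt could be assembled from the SceneNode sketch above roughly as follows; this is a hypothetical starting point for the prompt engineering work, not a prescribed solution:

// Purely hypothetical example of a scene-generation prompt assembled from the
// SceneNode sketch above; designing and iterating on the real prompts is
// precisely the work this thesis is expected to carry out.
public static class ScenePrompt
{
    public static string Build(SceneNode scene, string previousSceneText)
    {
        return
            "You are writing one scene of a branching interactive story.\n" +
            $"Characters: {string.Join(", ", scene.Characters)}\n" +
            $"Location: {string.Join(", ", scene.Locations)}\n" +
            $"Key objects: {string.Join(", ", scene.Objects)}\n" +
            $"Intended events: {scene.EventsSummary}\n" +
            $"Mood and tone: {scene.MoodAndTone}\n" +
            $"Previous scene, for coherence:\n{previousSceneText}\n" +
            "Write the scene and stop; do not reveal future branches.";
    }
}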
Improving Training and Learning Methods in eXtended Reality
Thesis @CGVG, available for multiple students; Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada TAGS: Mixed Reality, Animation, Training, Education
Training and learning in XR (VR/AR) in several scenarios (industrial, medical, educational) can be envisaged as a sequence of activities that can be organized into procedures. The organization of the activities can differ according to the specific scenario (e.g., activities can be sequential, alternative, looped, and so on), and can generally be represented as a graph of activities, as sketched below.
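As a minimal illustration (all names are hypothetical), such a graph of activities could be expressed in C# as follows:

using System.Collections.Generic;

// Hedged sketch (hypothetical names) of a procedure represented as a graph of
// activities: each node is one activity, and its outgoing edges encode the
// admissible successors (a single edge = sequence, several edges =
// alternatives, an edge back to an earlier node = loop).
public class Activity
{
    public string Id;
    public string Description;    // what the learner has to do
    public List<Activity> Next = new List<Activity>();
}

public class Procedure
{
    public Activity Start;

    // Depth-first traversal, e.g., to validate or visualize the graph.
    public IEnumerable<Activity> Traverse()
    {
        var visited = new HashSet<Activity>();
        var stack = new Stack<Activity>();
        stack.Push(Start);
        while (stack.Count > 0)
        {
            Activity a = stack.Pop();
            if (!visited.Add(a)) continue;   // skip nodes already visited (loops)
            yield return a;
            foreach (Activity n in a.Next) stack.Push(n);
        }
    }
}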
The learning program is then usually organized in different steps (or phases):
Learning: in a learning session, users are guided through the various actions the given procedure is organized into. Each action is introduced by visual and audio hints aimed at explaining to learners what they have to do, why, which objects they have to interact with, and through which mechanics. Learners are then required to complete the activity in the XR environment, and the system monitors the action execution and provides appropriate feedback to the learner.
Rehearsal: users can train on the procedures they have learned in the previous phase. Users can freely perform any action, but they cannot benefit from the visual and audio instructions available in the learning mode. However, the system can provide cognitive feedback (in terms of audio or other hints) to the user. Game/gamification mechanics can be included in this phase to help learners achieve the expected learning goals.
Evaluation: learned skills are automatically assessed by the system. Users are required to perform the same procedure they have practiced before, without any feedback from the system, which records execution times, correct and incorrect executions, and any other problems arising during evaluation. Assessment is usually performed against a list of predefined metrics (which depend on the learning/training scenario).
As said before, this structure is standard in many application fields. The general objective of this proposal is to facilitate the development of such learning programs and to improve their effectiveness.
Topic 1: research methods
An educational path can be structured through different learning methodologies, different assessment systems and activities organization (looped sequences, repeat only mistakes, and so on). However, the effectiveness of these approaches and the best combination of learning/assessment/organization methods is also related to the context where learning/training activities take place.
The objective of this thesis is to review the state of the art to identify the most promising approaches, and to validate their effectiveness in different real-world contexts, in terms not only of knowledge and skill acquisition but also of retention over time.
Topic 2: software framework for rapid prototyping of MR-based learning environments
This topic involves developing a software framework that allows the fast deployment of a learning program by defining the structure supporting activities, procedures, and their scheduling, so that the designer of the educational intervention is only required to: i) create the assets to be used in the MR environment, ii) define the logic of each single activity, and iii) design the activity graph that defines the procedures.
Students are required to implement the framework and develop at least two different use case scenarios (in different fields, e.g., medical and industrial) to test it.
Topic 3: usability and UX of the learning environments
This topic analyzes the problem in terms of usability and User eXperience (UX), i.e., how to better deliver instructional/educational content in immersive AR/VR experiences, how to develop effective HCIs (in terms of input/output), and how to actively support users during their learning program (e.g., by adding AI-driven virtual instructors that can provide natural, "face-to-face" support for the user).
Students are required to analyze the problem, propose alternative solutions that can be implemented in the HCI, and validate them through quantitative/qualitative assessment involving a panel of users. For this task, at least one use case scenario (in any field) must be developed from scratch.
Topic 4: development of an effective debriefing support
This topic concerns the development of a debriefing companion application, which relies on the analytics (and other data) collected during the rehearsal and evaluation sessions. The availability of a debriefing step is extremely relevant for knowledge retention, since it helps learners reflect on what they did, draw insights from their experience, and make meaningful connections with the real world, thus enhancing the transfer of knowledge and skills. Even when results are not as successful as the learners hoped, debriefing can still promote active learning by helping them analyze the mistakes made and explore alternative solutions.
Students are required to analyze the problem, propose alternative solutions that can be implemented in the debriefing companion app, and validate them through quantitative/qualitative assessment involving a panel of users. For this task, at least one use case scenario (in any field) must be developed from scratch.
XR Framework for Collaborative Learning and Collaborative Work
Thesis @CGVG, available for multiple students; Tutors: Andrea Bottino, Francesco Strada TAGS: Mixed Reality, Animation, Training, Education
The goal of this work is to evaluate and implement solutions that allow simultaneous access to a three-dimensional environment in which two or more users interact with each other and with the environment. Three application scenarios are proposed below:
Use for educational purposes in a classroom where the professor and students access the same application from different devices. A plausible scenario consists of:
Professor using a tablet or PC version of the application to highlight objects/points of interest;
VR device used by one or more students to perform operations on three-dimensional objects within the scene;
Touch device (e.g., an interactive whiteboard, LIM) that allows interacting with objects in the scene and performing the same operations intended for virtual reality devices, but with a different input modality (touch screen);
Use for remote assistance, which allows a technician/student (equipped with smart glasses) who needs to perform work on a machine or a training activity to receive real-time assistance from a senior operator/expert, who can connect to the machine via an app (desktop or VR) and view its 3D model and related data. The senior can geographically locate the junior using the 3D model or the camera on the smart glasses and direct them to the work area. It would also be interesting to give the junior the feeling of the senior's presence.
The objective of the thesis will be to evaluate the effectiveness of the possible solutions through the implementation of different use cases in both medical and industrial scenarios.
Entertainment games for complex problem-solving: implementing game design features for fostering knowledge and skill acquisition
Thesis @CGVG Tutors: Andrea Bottino, Carlo Fabricatore, Dimitar Gyaurov TAGS: Serious Games, Complex Problem Solving, Design guidelines, Game design
Serious games are an innovative alternative to traditional educational methods, capable of promoting modern learning processes through engaging gameplay experiences. However, while entertainment games demonstrate high effectiveness in fostering learning through gameplay, educational games often struggle to achieve the same levels of engagement and transferability of skills to the real world. Specifically, developing cognitive skills for complex problem solving (CPS) is a challenge that many formal learning environments and educational games fail to address adequately.
This thesis aims to explore which design features from entertainment games can be leveraged to create serious games that effectively stimulate CPS and promote effective and transferable learning.
Thesis Work Definition:
The thesis will focus on the development of a serious game based on a set of design features identified through a recent systematic literature review of entertainment games and their potential to promote CPS. The project involves the design and creation of a game prototype that incorporates these features and the subsequent evaluation of its ability to foster learning through gameplay.
The candidate will be responsible for defining the game mechanics, programming, and developing the game using the Unity engine, as well as implementing tools for analyzing interactions and learning outcomes.
Required Skills:
Game design
Programming in C#
Proficiency with the Unity engine
Ability to analyze and implement methodological frameworks for game design
Pick-and-place VR exergame for motion cost estimation
Thesis @CGVG in collaboration with Rehab Technologies Lab, IIT Genova Tutors: Andrea Bottino, Francesco Strada, Stefano Calzolari TAGS: Neuro-Robotics, Virtual Reality (VR), Exergame Development, Motion Cost Estimation, Rehabilitation Technology
The Laboratory
Rehab Technologies Lab is an innovation lab jointly created by IIT and INAIL (National Institute for Insurance against Accidents at Work) to develop new high-tech rehabilitation solutions with high social impact and market potential. The projects so far include the CE-marked poly-articulated hand prosthesis (Hannes) and the upper-limb (Float) and lower-limb (TWIN) exoskeletons, all developed in compliance with the ISO standards for medical devices. Moreover, the laboratory leads and participates in neuro-rehabilitation projects aimed at studying cognitive and physical workload to help neurological patients improve their quality of life.
Motivations and general objectives
The proposed activities fall within the framework of the "NRTWIN - NeuroRobotic TWINning" project, which aims to design, develop, and test a set of neuro-robotic solutions (sensors, computational models, control systems) for virtually replicating physio-motor activities, in order to study mental and physical effort in different populations of subjects: healthy individuals, prosthetic users, and people with Multiple Sclerosis (MS).
In general, the analysis and understanding of the pick-and-place task, which involves moving objects from one location to another, is fundamental in various industrial and rehabilitation settings. Improving the performance of this task and reducing its motion cost can have significant implications for ergonomics, productivity, and physical rehabilitation. Virtual Reality (VR) offers a unique platform for creating immersive and controlled environments to study and enhance motor skills. This thesis aims to contribute to the development of a VR-based pick-and-place exergame and to collect and analyze game data, in order to understand how individuals interact with the virtual environment through motion tracking data related to the execution of the task.
Required skills
The exergame is developed with the Unity engine and C#. The candidate is required to expand the existing game structure within the engine and to create customized scripts for data recording (C#) and analysis (MATLAB or Python).
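As a hedged illustration of the kind of data-recording script meant here (component and file names are assumptions), a minimal Unity logger could write timestamped hand positions to a CSV file for later analysis in MATLAB or Python:

using System.Globalization;
using System.IO;
using UnityEngine;

// Hypothetical sketch of the kind of data-recording script mentioned above:
// it logs the timestamped position of a tracked hand to a CSV file that can
// later be analyzed in MATLAB or Python. All names are illustrative.
public class MotionLogger : MonoBehaviour
{
    public Transform trackedHand;    // e.g., the VR controller or hand anchor
    private StreamWriter writer;

    void Start()
    {
        string path = Path.Combine(Application.persistentDataPath, "motion_log.csv");
        writer = new StreamWriter(path);
        writer.WriteLine("time,x,y,z");
    }

    void Update()
    {
        Vector3 p = trackedHand.position;
        writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
            "{0},{1},{2},{3}", Time.time, p.x, p.y, p.z));
    }

    void OnDestroy()
    {
        writer?.Dispose();   // flush and close the file when the scene ends
    }
}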
Working location
The candidate can choose between working and being supervised remotely, or working on site at the Rehab Technologies Laboratory, IIT (Morego, Genoa, Italy). The remote option will require some short stays (up to a week) in Genoa for data acquisition or application testing.
Max number of students: 1
Computer Vision algorithms for pose estimation of space objects
Thesis @Astroscale France (Toulouse), internship + research grant available Tutors: Vincenzo Pesce, Andrea Bottino TAGS: Computer Vision, Machine Learning, Space Industry, Python Programming, Synthetic Data
This thesis focuses on the design and development of computer vision algorithms for pose estimation of space objects, a critical task for enhancing in-orbit services and space sustainability. The project involves creating and validating these algorithms using both synthetic data, generated through tools like Blender, and realistic data collected from a test bench. The research aims to contribute to the advancement of technologies that support safe and efficient space operations, aligning with the mission of Astroscale to promote long-term orbital sustainability.
Main Missions
Design and development of Computer Vision algorithms for pose estimation of space objects
Validation of the algorithms using synthetic data, as well as realistic data acquired from a test bench.
Essential Skills
Currently enrolled in a general or specialized engineering school in the computer vision and machine learning field.
Experience with synthetic image generation software (e.g., Blender, SurRender)
Proficiency in Python and deep learning computer vision frameworks (e.g., TensorFlow, PyTorch)
Willingness and ability to work in a multidisciplinary and international environment.
Innovative and proactive mindset.
Excellent communication skills.
Fluent in French and English.
Desired Skills
Experience in aerospace projects.
About Astroscale
Founded in 2013, Astroscale is the first private company with a mission to secure long-term spaceflight safety and orbital sustainability for the benefit of future generations. Astroscale develops in-orbit services that improve the characterization and deorbiting of space debris or extend the life of aging satellites. Headquartered in Japan, Astroscale has offices in the United Kingdom, the United States, Israel, and France. Astroscale has launched its pioneering technology missions, ELSA-d in 2020 and ADRAS-J in 2024, achieving the world's first successful approach to a non-cooperative space object. Astroscale France (ASFR) was founded in mid-2023 and has grown to approximately 20 staff. Located in the central quarter of Toulouse, we are developing cutting-edge on-orbit services projects, such as a multi-target inspection mission.
Contact
v.pesce@astroscale.com
c.magueur@astroscale.com
Using generative AI to create annotated datasets for wet damage identification
Thesis @CGVG, available for multiple students. Research grant available Tutors: Andrea Bottino, Federico D'Asaro, Alessandro Emmanuel Pecora TAGS: Generative AI, Annotated Datasets, Instance Segmentation, Wet Damage Identification, Synthetic Data Generation, Machine Learning
This work focuses on the automatic detection of wet damage from images.
A major challenge in this work is the lack of annotated datasets large enough to effectively train instance segmentation algorithms. The core idea of this thesis proposal is to use the capabilities of generative AI (GenAI) to create a synthetic dataset of annotated images.
Using a small set of existing annotated images, the work aims to develop GenAI approaches that can extrapolate and generate a comprehensive dataset simulating different scenarios and conditions of wet damage. This dataset will then be used to train robust instance segmentation algorithms, improving their accuracy and effectiveness in real-world applications.
The expected outcome of this work is twofold. First, to successfully demonstrate the feasibility of using generative AI to create large, diverse and reliable annotated datasets from a minimal number of real annotated images. Second, to evaluate the performance of instance segmentation algorithms trained on these synthetic datasets.
Developing a Comprehensive Digital Twin for the CARLA Simulator: A Case Study in Turin's Urban Environment
Thesis @CGVG available for multiple students Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino TAGS: Digital Twin, CARLA Simulator, Urban Environment Replication, Unreal Engine, VR
The aim of this thesis is to create an advanced digital twin of an urban environment, specifically focusing on a neighborhood in Turin, Italy, and integrating it with the CARLA simulator. This project encompasses the development and integration of various components required for a highly effective and realistic driving simulator.
Goals and Objectives:
Urban Environment Replication:
Accurately replicate roads, buildings, and landmarks of a specific neighborhood in Turin within the CARLA simulator.
Ensure the digital twin includes detailed environmental features to enhance realism.
Integration of Autonomous Agents and Realistic Elements:
Integrate autonomous agents, such as vehicles and vulnerable road users, ensuring their behavior mimics real-world scenarios.
Simulate realistic traffic flow and patterns to mirror actual urban driving conditions.
Incorporate dynamic weather conditions to test vehicle performance under varying environmental factors.
Real-Time VR Environment Implementation:
Enable the use of this digital twin within a real-time VR environment for immersive simulation experiences.
Focus on achieving high performance and low latency to ensure a seamless VR experience.
Methodology for Continuous Update and Replication:
Develop a methodology that allows for continuous updating and modification of the digital twin.
Create a framework that can be replicated for different urban and rural contexts, enhancing the versatility of the project.
Use of Advanced Technologies:
The project should be implemented with Unreal Engine 5 for its cutting-edge graphical capabilities, ensuring a visually realistic simulation.
The implementation will leverage CARLA’s open-source framework for its modularity, facilitating integration with various simulators and programs.
Research-Oriented Tool Development:
Aim to create a tool that can be used for research purposes, particularly in fields like autonomous driving, urban planning, and traffic management.
Ensure that the tool is adaptable for future technological advancements and research needs.
This thesis represents a blend of simulation technology, urban planning, and software engineering. It offers a unique opportunity for students to contribute to the growing field of digital twins, particularly in the context of urban environments and autonomous driving simulation. The project not only aims to create a realistic digital replica of a neighborhood but also establishes a replicable framework that can be applied to various urban settings, thereby contributing significantly to research and development in this field.
Development of a Modular HUD Design Tool for the CARLA Driving Simulator
Thesis @CGVG available for multiple students Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino TAGS: HUD Design, CARLA Simulator, Unreal Engine 5, VR
This thesis concerns the development of a tool for creating sophisticated and modular Head-Up Displays (HUDs) for the CARLA driving simulator. HUDs, which project important information onto a vehicle's windshield, have become increasingly prevalent in modern vehicles and offer the advantage of reducing driver distraction and increasing road safety. The tool aims to replicate and extend the functionality of real HUDs in a virtual driving environment.
Aims and objectives:
Development of a modular HUD tool:
Create a HUD tool that allows for extensive customization, including adjustable dimensions, transparency levels, and color schemes.
Ensure that the tool is capable of implementing various interfaces and systems, such as navigation aids, warning systems, safety distance indicators and support for Non-Driving Related Tasks (NDRT).
Integration into the CARLA simulator:
Develop the HUD tool using Unreal Engine 5 to ensure seamless integration with the CARLA driving simulator, a widely recognized open-source platform for autonomous driving research.
Focus on compatibility and optimization for Virtual Reality (VR) to enhance the immersive experience of the driving simulator.
Diverse interface and system testing:
Enable the HUD tool to support a range of test cases, such as different driving scenarios, environmental conditions and vehicle types.
Integrate functions for the simulation of advanced driver assistance systems (ADAS) and autonomous driving technologies.
Design the user interface and driving experience:
Emphasize a user-friendly HUD tool design that allows users to easily create and modify HUD elements.
Implement intuitive controls and visualization techniques to enable efficient design processes.
Evaluation through case studies:
Conduct a detailed case study to evaluate the effectiveness, usability and potential of the developed HUD tool.
Analyze the impact of different HUD designs on driver behavior, attention and overall driving experience in the simulator.
Contribution to research:
Contribute to research in the field of driving simulation and automotive user interfaces.
Explore the potential of HUDs to improve driver awareness and safety, especially in the context of autonomous vehicles and advanced driving simulators.
MASTER THESES AT EST@ENERGY CENTER
Thesis In collaboration with Energy Center, Politecnico di Torino Tutors: Andrea Bottino, Francesco Strada, Daniele Grosso, Ettore Bompard TAGS: Climate and Energy Transition, Large Language Models (LLM), Prompt Engineering, Data Integration, Jupyter Notebooks, Data Visualization, Interactive Environment, City Sustainability, Scenario Planning, Digital Twin, 3D Modeling, Unity, Decision Theatre.
Details of the theses are below (or here).
PRODUCTION OF A GENERATIVE BOOK ON THE CLIMATE AND ENERGY TRANSITIONS IN THE MEDITERRANEAN AREA APPLYING A LARGE LANGUAGE MODEL (LLM)
The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data, and references. The reports gather information on all energy technologies (from hydrocarbons to renewables), on maritime transport, and on their greenhouse gas emissions.
The problem: to structure that volume of information in a manner that is ready and fit for all end users, enables assisted interactions (open and with predefined prompts), and can grow with the addition of new information. This goal stems from the limitations of traditional books, whose contents are static and overly profuse: they quickly become outdated, are difficult to consult, and do not lend themselves to rapid answers to the reader's requests.
The thesis activity: to develop a so-called Generative Book (GB) using LLM technologies, applying the BLOOM platform. Using the five already available reports and related sources as its basic content, the new GB will enable users to interact with the contents in different ways (quick summaries, asking questions, developing questionnaires based on the text, obtaining usable output for specific requests, etc.); will let users contribute to the contents, for instance by indicating useful sources of information, pointing out shortcomings, or commenting; and will act as an evolving platform that facilitates the growth and evolution of the contents.
CUSTOM TRAINING AND PROMPT ENGINEERING OF AN LLM PLATFORM ON THE ENERGY AND CLIMATE TRANSITIONS IN THE MEDITERRANEAN AREA
The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data, and references. The reports refer to a vast set of documents and data. EST intends to upload all those elements into an LLM-based Generative Book (GB).
The problem: to accelerate the adaptation of an out-of-the-box LLM (BLOOM) for use with knowledge and contents referring to the Mediterranean energy and climate area.
The thesis activity: to compose and format prompts that maximize the model's performance on the tasks defined for the GB, and to custom-train the GB model with datasets taken from the EST reports. This will involve setting up the training environment, tuning the training parameters, and fine-tuning the GB model.
INTEGRATION OF JUPYTER NOTEBOOKS WITH AN LLM PLATFORM ON THE ENERGY AND CLIMATE TRANSITIONS IN THE MEDITERRANEAN AREA
The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data, and references. Many values are supported by equations and formulae, which are not made explicit in the reports.
The problem: to produce an interactive environment composed of computational documents that use the data, equations, and explanations present in the EST reports on energy and climate in the Mediterranean, ready for customized use, visualization, and analysis, and integrated into a Generative Book (GB).
The thesis activity: to produce Jupyter notebooks concerning energy and climate in the Mediterranean area, to be integrated into an LLM-based GB, connecting software code, data analytics, and text, so that they work interactively and can be customized by end users.
DEVELOPMENT OF A DIGITAL PLATFORM FOR SUPPORTING TABLE-TOP EXERCISES APPLIED TO THE CLIMATE AND ENERGY TRANSITIONS IN CITIES
The context: EST supports cities in the elaboration of i) their transition towards climate neutrality and the related production of Climate City Contracts, and ii) plans for their sustainability. To these ends, EST is developing two digital platforms, CLICC and CITTA, composed of interactive tools for dealing with data and text in a multimedia environment and a full set of scientific instruments for the calculation and analysis of data. The study of future scenarios for climate neutrality and sustainability demands interaction with all stakeholders, and the joint study of the best alternatives concerning all potential contingencies. These interactions can be structured in the form of Table-Top Exercises (TTXs).
The problem: to facilitate the arrangement and implementation of TTXs for cities by means of digital applications in an interactive environment, with the management of narrative scripts, a diversity of timing scales, alternative paths in the presence of contingencies, etc., while recording the decisions and actions of all participants. TTXs are role-playing activities in which players respond to scenarios presented by the facilitators.
The thesis activity: to produce an interactive digital platform using open-source technologies for supporting the preparation and running of TTXs applied to the climate neutrality and sustainability of cities, taking advantage of the existing platforms CLICC and CITTA. The platform should provide facilities for the ex-ante preparation of the TTX, for the work of the participants (i.e., players, observers, facilitators, note takers), and for the ex-post analysis and reporting.
DESIGN OF AN INTERACTIVE INTERFACE FOR THE DIGITAL TWIN OF CITIES FOR THE STUDY OF CLIMATE NEUTRALITY AND SUSTAINABILITY
The context: EST supports the city of Torino in its plans for climate neutrality and sustainability. A crucial aspect of this support is enabling the city administrators and all city stakeholders to visualize and interact with a digital twin of the main components of the city (e.g., energy, transport, waste, green areas) directly related to the production and mitigation of emissions. EST is developing two digital platforms, CLICC and CITTA, composed of interactive tools for dealing with data and text in a multimedia environment and a full set of scientific instruments for the calculation and analysis of data. EST operates a Decision Theatre with a 180-degree, 3-meter-tall wall on which to display interactive software applications.
The problem: to facilitate interaction with the manifold aspects related to the climate neutrality and sustainability of cities, including the virtual representation of the city systems as a digital twin. This representation should include both past data and future scenarios, with the possibility of displaying the evolution of those scenarios over time.
The thesis activity: to produce an interactive interface based on tools such as Unity, to be used both in EST's Decision Theatre and on the web, able to dynamically exhibit data in 2D/3D and to create game-like experiences based on the climate neutrality and sustainability scenarios produced by CLICC and CITTA. The activity will be applied to the city of Torino.
Development of sampling patterns for first-order aberrations in ray tracing rendering
Thesis @CGVG, available for multiple students. Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino, Bartolomeo Montrucchio TAGS: Ray Tracing, Rendering, Sampling pattern, Point Spread Function, optical transfer function
The quality of renderings produced with ray tracing has become increasingly high, thanks to recent advancements in computer graphics. Despite these advancements, some significant optical defects that characterize photographs taken with real lenses have yet to be implemented in various rendering engines.
This thesis aims to implement the optical defects of a physical lens in a virtual renderer. Specifically, the aim is to introduce first-order optical defects by manipulating the sampling pattern of the renderer to achieve a realistic appearance in both out-of-focus and in-focus planes of the rendered image.
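For context, one standard first-order relation that such a sampling pattern must reproduce is the diameter of the circle of confusion of a thin lens: for focal length f, aperture diameter A, and focus distance s_1, a point at distance s_2 is blurred into a circle of diameter

c = A \cdot \frac{f}{s_1 - f} \cdot \frac{|s_2 - s_1|}{s_2}

so that points on the focal plane (s_2 = s_1) render sharp, while defocused points spread according to the shape of the sampling pattern over the aperture.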
This thesis seeks to re-implement the technology in the open-source renderer Mitsuba (https://www.mitsuba-renderer.org/), starting from a previous algorithm implementation in Blender. The resulting algorithm will be tested using objective and subjective measures to assess the quality of its renders.
Enhancing locomotion in Virtual Reality: The Development and Analysis of Walking Seat V2
Thesis @CGVG Tutors: Leonardo Vezzani, Francesco Strada TAGS: VR, locomotion metaphors, leaning interfaces, input devices
Walking Seat V2 represents an advanced solution for locomotion in virtual reality (VR), utilizing seat pressure sensors for more intuitive and responsive movement.
Despite the various approaches to VR locomotion discussed in the literature (see the Locomotion Vault), the challenge remains largely unresolved. Our innovative device, the Walking Seat, is designed to significantly improve VR navigation. This thesis is dedicated to advancing this technology, focusing on critical aspects such as increasing sensor density and refining data interpretation. A key challenge to address is distinguishing between leaning movements intended for navigation and those intended for object interaction within the VR space.
This research goes beyond mere development; it involves rigorous testing of the Walking Seat, alongside a comparative analysis with existing locomotion methods. Key objectives of this study include:
Constructing the second iteration of the device, integrating technologies like Raspberry Pi, Arduino, and programming in C/Python.
Developing a sophisticated control system for precise avatar manipulation using Unity, AI principles, and control theory.
Executing extensive testing to assess interface quality and benchmarking it against other VR locomotion techniques.
Additionally, this study encourages the exploration of various implementation alternatives and innovative approaches to further refine VR locomotion.
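As a purely illustrative sketch of the underlying control idea (sensor layout, dead zone, and gain are all assumptions, not the actual device design), the centre of pressure measured on the seat can be mapped to a locomotion vector as follows:

using UnityEngine;

// Purely illustrative sketch of the control idea behind a leaning interface:
// the centre of pressure measured by the seat sensors is mapped to a 2D
// locomotion vector. Sensor layout, dead zone, and gain are all assumptions.
public class WalkingSeatController : MonoBehaviour
{
    public Vector2[] sensorPositions;   // sensor layout on the seat, in metres
    public float deadZone = 0.02f;      // ignore small shifts due to normal sitting
    public float gain = 3.0f;           // metres/second of motion per metre of lean

    // pressures[i] is the current reading of sensor i (same order as above).
    public Vector2 ComputeVelocity(float[] pressures)
    {
        float total = 0f;
        Vector2 cop = Vector2.zero;     // centre of pressure, relative to seat centre
        for (int i = 0; i < pressures.Length; i++)
        {
            cop += pressures[i] * sensorPositions[i];
            total += pressures[i];
        }
        if (total <= 0f) return Vector2.zero;   // nobody seated
        cop /= total;

        // The lean direction and magnitude drive the avatar; leans inside the
        // dead zone are discarded so that posture noise does not cause drift.
        return cop.magnitude < deadZone ? Vector2.zero : cop * gain;
    }
}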
Animating Virtual Characters in Unity Using Generative AI: A Prompt-Based Approach
Thesis @CGVG, available for multiple students Tutors: Stefano Calzolari, Andrea Bottino, Francesco Strada TAGS: diffusion models, prompt-based generative AI, NPC animation, VR, Unity
This Master's thesis delves into the innovative intersection of generative artificial intelligence and virtual character animation within the Unity environment. The primary focus is on exploring and utilizing diffusion models for prompt-based animation generation, a cutting-edge approach in the realm of AI-driven content creation.
Key Tasks:
Literature Review on Diffusion Models for Prompt-Based Animation Generation: The student will conduct a comprehensive review of existing literature. This involves studying the current state and advancements in diffusion models, specifically how they are applied to generate animations based on textual prompts. This review will help in understanding the theoretical foundation and practical applications of these models in the context of animation.
Implementation of a Generative AI Solution for Unity Character Animation: The practical aspect of this thesis involves implementing a solution, potentially building upon existing models. The objective is to develop a system capable of animating a standard character in Unity based on prompts. This will require integration of AI models with the Unity engine, ensuring that the system is not only functional but also efficient and user-friendly.
Expected Outcomes:
A detailed understanding of how diffusion models can be leveraged for animating characters in a virtual environment.
A functional prototype that demonstrates the capabilities of prompt-based animation generation within Unity.
Insights into the challenges and potential of integrating generative AI with real-time game engines like Unity.
This thesis is an opportunity to contribute to the emerging field of AI in game development and animation, offering practical experience in implementing advanced AI techniques in a popular game development platform.
MPAI-MMM: MPAI Metaverse Model Architecture
Thesis @CGVG in collaboration with MPAI consortium, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: metaverse, distributed VR, MPAI-MMM, virtual classroom
Metaverse is the name given to a collection of application contexts in which humans, represented by avatars, engage in educational, social, work-related, recreational, and other activities. MPAI (Moving Picture, Audio, and Data Coding by Artificial Intelligence), of which PoliTo is a founding member, has developed a standard for a portable avatar format (Portable Avatar Format) and a standard for metaverse architecture (MPAI Metaverse Model – Architecture). PoliTo is creating the reference code for the Avatar-Based Videoconference use case, in which humans participate in a virtual conference through their portable avatars. The use case is implemented as a stand-alone solution.
The proposed thesis, however, concerns the study of an innovative teaching method carried out in the context of the MPAI Metaverse Model – Architecture in which students and the teacher attend the lesson through their portable avatars, exploiting the functionalities of the metaverse.
Details about the MPAI Metaverse standard can be found here.
The Use of Interactive Virtual Scenarios in Personnel Training and Product Presentation: An Analysis of Graphic Optimization for Different Devices
Industrial thesis @SynArea Academic tutors: Andrea Bottino, Francesco Strada TAGS: VR, Training, Product presentation
Technological advancements have made it possible to use interactive virtual scenarios to simulate reality using advanced rendering and graphic techniques. These scenarios can be used in various contexts, such as product presentation and personnel training in the management of industrial machinery. However, graphic optimization for different devices is crucial to ensure a smooth and high-quality experience.
The objective of this thesis is to analyze the use of interactive virtual scenarios in personnel training and product presentation, with particular attention to graphic optimization for different devices. The expected results include highlighting the advantages and analyzing the challenges associated with the current proposal. The results obtained can be used by companies to improve the user experience and ensure good quality of interactive virtual scenarios.
Required skills
Basic skills in the field of 3D graphics, software development and game engine programming.
The activities will take place in Turin:
SynArea Consultants, C.so Tortona 17
Politecnico di Torino (when possible)
Assigned, ongoing, and deprecated theses
Innovative multimedia science-based tools for supporting policy decision making in the energy area
Thesis @CGVG, in collaboration with the ENERGY Center, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: What-if analysis, Virtual interactive environments, storytelling and computational narrative
The energy transition cannot be deferred. The decisions of policy makers at various levels (supra-national, national, local) must be based on quantitative analyses and assessments of their impacts on environmental and socio-economic aspects. In this context, "what-if" tools are needed to provide an "in silico" environment in which different strategies can be compared in terms of their impacts. Those tools should create a "virtual world" in which decision makers can immerse themselves and experience the world around them through its virtual representation. This requires both the development of innovative software tools and the design of communication languages and metalanguages.
The "technologies" that make this possible, presently developed at EST@energycenter/polito.it (energy security transition lab), range from interactive web interfaces to storytelling and computational narrative. A decision theatre installed at the Energy Center of Politecnico (round screen 225°, 4 class-1 laser projectors Panasonic PT-RZ660 with a resolution of 1920 x 1200 pixels each, Dolby 7.1 multichannel speaker system with a professional Denon processor, DELL Precision 7920 workstation for image generation) provides an immersive venue in which decision makers can confront their decision-making process. The student will be involved in and contribute to developing this vision, selecting one of the possible "communication technologies" and working in the facilities available at the ENERGY Center.
Digital Twins
Thesis @CGVG, available for multiple students; thesis in collaboration with Applied Tutors: Andrea Bottino, Francesco Strada TAGS: Mixed Reality, Animation, Training, Education
The Digital Twin (DT) differs from traditional simulation tools in that it integrates IoT protocols that transmit synchronized data from a real machine, obtaining information that allows real-time monitoring and more accurate diagnoses and predictions.
The objective of the work is to evaluate and implement solutions that allow this data to be processed and visualized within a 3D simulation, thus increasing the usability of, and the user experience with, the machine. In particular, the work will address the following challenges:
How to effectively organize, catalog, and navigate the data (e.g., to clearly distinguish between standard behavior, warnings, and errors) by developing an effective user experience.
How to develop solutions for automatic data processing and the generation of machine control commands that substitute user input.
Possible application contexts, which will be defined by Applied company, are industrial plants, automotive and home automation.
Virtual Reality (VR) in Blended Learning settings, with a particular focus on education in the field of robotics
Design and development of a Virtual Learning application characterized by a 3D environment in which students will be able to remotely follow the physical activities carried out in the laboratory and use interactive virtual procedures for their training at home.
In particular, the activities will be carried out using a collaborative robot present in the laboratory of the Politecnico di Torino and will consist of:
use of Unity as the 3D engine
modeling of the virtual laboratory, likely using photos and videos taken in the laboratory itself
import and use of a simplified 3D model of the robot in the Unity 3D scene (the virtual laboratory)
integration of an IoT protocol (e.g., MQTT) to develop a "digital twin" of the robot (see the sketch after this list)
This feature will allow students to remotely follow, in real time, the movements of the robot during the physical lesson managed by the teacher in the PoliTo laboratory, exploring the whole virtual space of the 3D scene and approaching the virtual robot in a safe mode.
analysis and development of some interactive virtual procedures for training: these will be defined during the design phase of the project, as agreed with the laboratory manager:
e.g., how to program a linear or circular move, or how to define the robot TCP (Tool Center Point) to perform special processing work such as welding, pick & place, and so on
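As a hedged illustration of the digital-twin item above (using the M2Mqtt library as one possible choice; the broker address, topic, and payload format are invented for this example), the basic MQTT link between the real and the virtual robot could look as follows in Unity/C#:

using System;
using System.Globalization;
using UnityEngine;
using uPLibrary.Networking.M2Mqtt;           // M2Mqtt, one library commonly
using uPLibrary.Networking.M2Mqtt.Messages;  // used for MQTT in Unity

// Hedged sketch of the digital-twin link: joint angles published by the real
// robot over MQTT are applied to the joints of the virtual robot.
public class RobotTwin : MonoBehaviour
{
    public Transform[] joints;                 // joints of the virtual robot, in order
    private volatile float[] latestAngles;     // written by the MQTT thread
    private MqttClient client;

    void Start()
    {
        client = new MqttClient("broker.example.org");
        client.MqttMsgPublishReceived += OnMessage;
        client.Connect(Guid.NewGuid().ToString());
        client.Subscribe(new[] { "lab/robot/joint_angles" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE });
    }

    // Called on a background thread: only parse and store the data here.
    void OnMessage(object sender, MqttMsgPublishEventArgs e)
    {
        // assumed payload: comma-separated joint angles in degrees
        string[] parts = System.Text.Encoding.UTF8.GetString(e.Message).Split(',');
        float[] angles = new float[parts.Length];
        for (int i = 0; i < parts.Length; i++)
            angles[i] = float.Parse(parts[i], CultureInfo.InvariantCulture);
        latestAngles = angles;
    }

    // Unity objects must be touched on the main thread, hence in Update().
    void Update()
    {
        float[] angles = latestAngles;
        if (angles == null) return;
        for (int i = 0; i < joints.Length && i < angles.Length; i++)
            joints[i].localRotation = Quaternion.Euler(0f, 0f, angles[i]);
    }
}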
The main objective is to provide virtual solutions with which the teacher can better explain the laboratory activities, which at the moment are particularly limited.
Under normal conditions of access to the laboratory, these solutions will also be used to visualize the activities on the robotic cell better and more safely, as only a few students can stay near the work area.
Some technological choices will be defined during the development of the thesis.
The adopted solutions will be tested and used as a use case in the context of a more structured project, so the thesis will focus only on some of the previously listed topics, to be agreed with the student.
The activities will take place in Turin:
SynArea Consultants, C.so Tortona 17
Politecnico di Torino (when possible)
Virtual Upper Limb Embodiment
Thesis @CGVG in collaboration with VR@POLITO and IIT, available for multiple students Tutors: Andrea Bottino, Fabrizio Lamberti, Giacinto Barresi (IIT) TAGS: Virtual Reality, Virtual Embodiment, Simulation
The thesis will focus, in collaboration with IIT (Istituto Italiano di Tecnologia), on the design, implementation, and experimental use of a virtual setting to improve/train the integration – embodiment – of a simulated upper limb into the body scheme of an individual. Such an approach is currently being explored to improve the embodiment of actual prosthetic systems, a prerequisite for promoting the actual usage of the device and reducing the risk of its abandonment.
After a brief overview of the literature on this topic, the master's candidate will develop a virtual environment based on the Unity game engine to obtain a setup for "virtual hand illusion" experiments, collecting subjective and objective data on the degree of embodiment of the simulated limbs. Initially, the subjects involved in these studies will be people without impairments (evaluating their reactions as in the classic literature on embodiment phenomena). However, the recruitment of amputees could become viable during these activities: if such an opportunity arises, the student will be free to involve actual prosthetic users in the thesis work.
Depending on the experimental results, this thesis could lead to a scientific publication in peer-reviewed journals or conferences.
Gamification in Multiple Sclerosis Rehabilitation
Thesis @CGVG in collaboration with VR@POLITO and IIT, available for multiple students Tutors: Andrea Bottino, Fabrizio Lamberti, Giacinto Barresi (IIT) TAGS: Mixed Reality, Gamification, Rehabilitation
The thesis will focus on the development of virtual/mixed reality gamification settings to improve the engagement of people with Multiple Sclerosis during motor and cognitive rehabilitation exercises.
The activities will include the creation of interactive environments in Unity and their usage in experimental sessions for data collection and analysis, involving people with and without impairments to validate the capability of such solutions to engage the user.
The candidate will collaborate with experts of IIT (Istituto Italiano di Tecnologia) and AISM (Associazione Italiana Sclerosi Multipla) to devise, implement, and test each game-based system that will be described in the thesis.
Depending on the experimental results, this could lead to a scientific publication in peer-reviewed journals or conferences. The field activities will be performed in Genova in clinical settings to directly involve the research participants.
Holo-ACLS: team-based XR training for emergency first responders
Holo-BLSD is a software tool for training laypersons and professionals in Basic Life Support and Defibrillation (BLSD), i.e., the sequence of actions to recognize a patient in cardiac arrest and perform first aid. The proposed tool is able to independently manage the phases of learning (i.e., teaching the BLSD procedure), training (where trainees can practice the learned concepts), and final assessment of the acquired skills.
The training content is delivered via an HMD-based (HoloLens) mixed reality (MR) application that provides an experiential learning approach by integrating a low-cost standard cardiopulmonary resuscitation (CPR) manikin with virtual digital elements (integrated into the physical environment in which the training activity is conducted) that replicate elements of the emergency scenario.
The proposed project aims to further develop and improve the current prototype version of the system with three main objectives:
redesigning the prototype (developed for HoloLens 1) to adapt it to HoloLens 2, especially with regard to the human-computer interaction component, which has been revolutionized in HoloLens 2 with significant improvements in the available features, offering new possibilities to developers and users. In particular, HoloLens 2 introduces accurate and detailed detection of the joint movements of both hands. This information allows natural and immediate interaction with virtual objects by touching them, picking them up, and moving them.
implementing adaptive learning approaches for delivering training content according to the needs and abilities of the learners. This module will employ a virtual instructor able to interact realistically with the user through voice and gestures, so that the learner can turn to it for alternative explanations or repetition of the concepts presented when difficulties arise. The development of this component requires the integration of Natural Language Processing modules, procedural management of animations, and (if possible) the integration of empathic components in the management of communication between the virtual teacher and the learner.
managing team-based training activities for professional operators. The current version of Holo-BLSD is designed for use by individual learners. In contrast, the field of professional resuscitation also needs support in training the highly skilled operators of medical emergency teams (METs) in Advanced Life Support (ALS) practices. These activities address the management of advanced resuscitation operations through a systematic and highly organized series of assessments and treatments that occur simultaneously and must be performed efficiently and effectively in the shortest possible time.
Students will work with the Unity engine, so basic knowledge of Unity, C#, and XR SDKs is required. Basic knowledge of 3D modeling software (Blender, Maya, 3DS Max) is also advised.
MetaHumans for Unreal (and other platforms)
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada TAGS: Mixed Reality, CG, Animation
Virtual humans are gaining attention given their increasing diffusion in real-time applications for various purposes, such as guides, companions, or non-playable characters (NPCs) in large-scale simulations. However, current tools are either unsatisfactory or too costly. Recently, Unreal Engine released MetaHumans, a free tool to rapidly create textured 3D models of human avatars that are fully rigged and equipped with standard facial blend shapes.
The overall objective of this thesis is to evaluate the possibilities and limitations of the MetaHumans tool, analyzing how it can be integrated within the Unity environment. The following steps of the production pipeline should be evaluated: 3D model creation and import, animations (body and face), animation retargeting from popular human body animation libraries (e.g., Mixamo), and usage in real-time immersive environments (VR/AR).
If repetitive operations emerge in the pipeline, the student should also develop Unity interface tools (editor scripts) to automate them as much as possible. Once the entire pipeline has been evaluated, the student should apply the gained knowledge (and developed tools) to create a general-purpose library of virtual humans that can be easily exported (as a Unity package) to other projects.
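As an illustration of the kind of automation meant here (the menu path and the specific import setting are hypothetical), a minimal Unity editor script could batch-switch the selected model assets to a Humanoid rig, a typical prerequisite for retargeting Mixamo-style animations:

using UnityEditor;
using UnityEngine;

// Hypothetical example of a pipeline-automation tool: a menu command that
// switches every model selected in the Project window to a Humanoid rig.
// As an editor script, it must live in an Editor folder of the project.
public static class MetaHumanImportTools
{
    [MenuItem("Tools/MetaHumans/Set Humanoid Rig On Selection")]
    private static void SetHumanoidRig()
    {
        foreach (Object obj in Selection.objects)
        {
            string path = AssetDatabase.GetAssetPath(obj);
            var importer = AssetImporter.GetAtPath(path) as ModelImporter;
            if (importer == null) continue;          // not a model asset

            importer.animationType = ModelImporterAnimationType.Human;
            importer.SaveAndReimport();              // re-run the import pipeline
            Debug.Log("Humanoid rig applied to " + path);
        }
    }
}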
Learning by making serious games
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Dimitar Gyaurov, Francesco Strada TAGS: Serious Games, Learning, Game Engine, Collaboration
Playing serious games (i.e., games that have another purpose besides entertainment) is widely recognized as an effective approach for teaching a new subject, training on a complex procedure, or raising awareness of complex topics (e.g., sustainability). However, we can see this from the opposite perspective: making serious games as a method for building knowledge on the specific topic addressed by the serious game. Moreover, making games (serious or not) in a collaborative fashion can be envisioned as a tool that provides support and helps create a communication space for kids with special needs (e.g., kids with autism or attention deficit disorder). In fact, these kids tend to spend a lot of their time playing games, and when asked to interact with one another they mostly communicate through and about the games they play.
However, in both these scenarios (i.e., making games for learning or for interacting), some challenges arise. The main problem is that the target audience for these interventions is not acquainted with the tools generally used to create games, whether complex or simple game engines. The objective is to develop a simple tool for creating 3D games, equipped with high-level building blocks that simplify the whole game-making process. This tool should not be a game engine developed from scratch, but rather an extension of pre-existing game engines (e.g., Unity, Google Game Builder). The developed tool should employ the same approach as Scratch, where complex behaviours are achieved by combining visual Lego-like blocks. Finally, in the case of collaborative game making, this activity (and the underlying tool) should be envisaged as a playful activity in itself (i.e., the game is making the game). The general activities of the thesis student should be:
Define a target audience (this can be discussed with tutors). This choice will ultimately drive the design and development choices.
Conduct an exhaustive survey of readily available tools for easily creating games, with particular interest in those that allow the development of extensions
Based on the target audience and the subject, define a set of requirements for the tool, i.e., the building blocks that should be developed
Develop the designed tool
Assess whether the designed tool is indeed effective and simple enough for the defined target audience
Developing an AR location-based serious game capable of integrating mobile and remote users
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Dimitar Gyaurov, Francesco Strada TAGS: Serious Games, AR location-based games, mobile AR, integration of mobile and remote players
The objective of the thesis is the development of an AR game that combines two modalities of participation. The first modality is a location-based one, in which users have to complete problem-solving tasks by exploring an urban area, collecting information, solving quizzes, completing quests and interacting with each other. The second modality is a remote one, in which users have to participate in problem-solving tasks by solving puzzles, constructing maps, deciphering encrypted texts and interacting with each other using different platforms (VR/AR, web based).
Players have to collaborate to identify, recreate and interpret clues by engaging in three types of tasks:
Finding clues (primary physical activity) represented by location-based information (e.g. a name of a street, a place or a building), artefacts (e.g. a statue, or a barge placed in a street) or NPCs (e.g. theater actors playing roles).
Producing clues (primary mental activity) in the form of artefacts (e.g. painting or photographing a garden marker as a key location on a map) or information (e.g. deciphering the name of a key building from key positions in a crossword puzzle).
Puzzling out clues (primary social activity), collectively interpreting locations of artefacts based on clues already found/produced (e.g. combining the name of the street, a picture of the statue, the painting of a garden and the name of a key building to identify part of a map leading to the hidden location).
Players’ actions and achievements will determine how the AR game unfolds in the style of an interactive novel, effectively making them co-authors of the story they interpret.
The underlying rationale of the game is to develop a serious game aimed at supporting (through in-game and after-game activities) different types of intervention (e.g., helping persons with dementia cope with and adapt to their condition, helping students deal with challenges and sustain high levels of academic proficiency despite life adversities), promoting interactions among different local communities/groups (e.g., patients, students, caregivers, family and community) and supporting activities outside the game (e.g., supporting families and caregivers; promoting dialogue within the family; informing counseling and coaching activities within and outside the school environment).
Students are required to develop an initial prototype implementing the core elements that will be used within the game (i.e., location-based content management and augmentation of real places, communication system and information storage, integration between mobile and remote participating users).
The use case scenario, which aims at involving a large user community with different roles, can be developed by the students, or chosen among a list of proposals.
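To give a concrete flavour of the location-based content management mentioned above, here is a minimal Unity sketch that unlocks a clue when the player's GPS position comes within a given radius of a target. It uses Unity's standard LocationService; the clue coordinates, trigger radius and ClueReached() hook are illustrative assumptions, and location permissions are assumed to be already granted.

```csharp
using UnityEngine;

// Hypothetical sketch of location-based clue activation.
public class ClueTrigger : MonoBehaviour
{
    public double clueLat = 45.0703;   // example coordinates: Torino city centre
    public double clueLon = 7.6869;
    public float triggerRadiusMeters = 25f;

    void Start() => Input.location.Start();  // begin GPS updates

    void Update()
    {
        if (Input.location.status != LocationServiceStatus.Running) return;
        var d = Input.location.lastData;
        if (HaversineMeters(d.latitude, d.longitude, clueLat, clueLon) < triggerRadiusMeters)
            ClueReached();
    }

    // Great-circle distance between two lat/lon points, in meters.
    static double HaversineMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0;   // mean Earth radius
        double dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        double dLon = (lon2 - lon1) * Mathf.Deg2Rad;
        double a = System.Math.Sin(dLat / 2) * System.Math.Sin(dLat / 2) +
                   System.Math.Cos(lat1 * Mathf.Deg2Rad) * System.Math.Cos(lat2 * Mathf.Deg2Rad) *
                   System.Math.Sin(dLon / 2) * System.Math.Sin(dLon / 2);
        return R * 2 * System.Math.Atan2(System.Math.Sqrt(a), System.Math.Sqrt(1 - a));
    }

    void ClueReached() => Debug.Log("Clue unlocked: show the AR content here.");
}
```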
Agent-based model for large (urban) scale simulation of pandemic spread
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada TAGS: Agent Based Models, Medical Simulations, Unity ECS/DOTS
The recent worldwide COVID-19 pandemic highlighted the importance of planning intervention strategies in urban areas, to safeguard the population’s health while also minimizing the impact on the economy. Agent-based simulations can be useful tools for administrators to evaluate the impact of the epidemic and the effects of different containment policies.
While several epidemic simulations can be found in the literature, some factors that proved to be critical in the spread of the COVID pandemic are still missing from these models.
The objective of this thesis is the development of a large-scale agent-based simulation leveraging Unity’s DOTS (Data Oriented Technology Stack) and ECS (Entity Component System), which allow the development of optimized real-time simulations with numbers of agents in the order of hundreds of thousands.
The students will implement a number of features on top of a pre-existing framework. The framework currently includes some basic classes of buildings (houses, offices, pubs, convenience stores, parks), a population module for controlling the agents based on the BDI (Belief, Desire, Intention) model, and a customizable epidemic module.
Some of the most relevant features that need to be implemented are (a minimal ECS sketch follows the list):
Key classes of buildings (schools, hospitals, retirement homes).
Implementation of (currently missing) agents’ characteristics such as age, sex, and occupation.
Development of a graph of social relationships between agents (households, co-workers, schoolmates, friends, etc.).
Implementation of a transportation system, focusing primarily on public transport.
Implementation of several intervention policies, such as conditional lockdowns and different vaccination policies.
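As a rough illustration of the DOTS/ECS programming model involved, the sketch below advances a per-agent epidemic state in parallel over a large entity population. It assumes the Unity Entities 1.x API (details vary across package versions); the component fields, time scale and 14-day recovery threshold are placeholders, and the existing framework's epidemic module will differ.

```csharp
using Unity.Entities;

// Minimal ECS sketch: one unmanaged component per agent.
public struct EpidemicState : IComponentData
{
    public int Stage;            // 0 = susceptible, 1 = infected, 2 = recovered
    public float DaysInfected;   // simulated days since infection
}

public partial class DiseaseProgressionSystem : SystemBase
{
    const float SimDaysPerSecond = 1f;   // assumed time scale of the simulation

    protected override void OnUpdate()
    {
        float elapsedDays = SystemAPI.Time.DeltaTime * SimDaysPerSecond;

        // Burst-compiled lambda, scheduled in parallel across worker threads.
        Entities.ForEach((ref EpidemicState s) =>
        {
            if (s.Stage != 1) return;
            s.DaysInfected += elapsedDays;
            if (s.DaysInfected >= 14f) s.Stage = 2;   // recover after ~two weeks
        }).ScheduleParallel();
    }
}
```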
Following the development and testing of the simulation in a toy scenario, it will also be applied to a model of a real city environment (Torino), to validate the model by comparing the results with real data.
Embodied Pedagogical Agents (EPA) for Adaptive VR Medical Emergency Training Framework
Thesis @CGVG in collaboration with VR@POLITO, available for multiple students Tutors: Edoardo Battegazzorre, Andrea Bottino TAGS: Intelligent Agents, Medical Simulations, VR, Adaptive Learning
Medical education is a field that encompasses many skills, including knowledge acquisition, operation of medical equipment, and development of communication skills. Virtual and Mixed reality can offer a valuable contribution in medical training, as they provide a safe and flexible environment for trainees to practice these skills. These systems do not require the physical presence of an instructor, and they are able to support institutions and learners with standardized computer-based training and automatic assessments. Furthermore, they can foster self-learning and be easily adjusted, in terms of difficulty, to suit the learning pace for students at different levels.
To fully leverage the potential of VMR digital training, one of the most prominent approaches is the Adaptive Learning philosophy. In general, Adaptive Learning refers to systems capable of modulating the content and pace of learning based on a User Model, which is different for every learner (a user model contains information about the person’s preferred learning style, previous knowledge, attention threshold, gender, etc.).
The student will work on developing an Embodied Pedagogical Agent for a VR Adaptive Learning framework for training doctors in emergency procedures (specifically: Airway Management, Pericardiocentesis, Central Line, Chest Drainage). Embodied Pedagogical Agents (EPA) are a specific category of Intelligent Agents able to tutor, guide and assess trainees in a Virtual Reality training application. In this particular context (VR training for emergency procedures), the EPA should be able to:
Explain all the required steps to perform the procedure correctly, and all the available options depending on the environment, patient condition, and available tools and personnel.
Give feedback on the trainee’s overall performance and point out mistakes and/or out-of-sequence actions.
Give hints and answer questions (via GUI and NLP interfaces) when prompted by the trainee.
The student will work on refining a pre-existing ECA (Embodied Conversational Agent) framework and its integration into the Adaptive Learning system driving the VR simulation. Students will work with the Unity Engine, so basic knowledge of Unity, C#, and XR SDKs is required. Moreover, the project will possibly involve the creation of domain-specific 3D assets and animations, so basic knowledge of 3D modeling and animation software (Blender, Maya, 3DS Max) is also advised.
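A minimal sketch of the out-of-sequence detection the EPA needs is given below. The step identifiers and the Debug.Log feedback are placeholders for the framework's real action events and the agent's spoken responses.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: checking trainee actions against the expected procedure order.
public class ProcedureChecker : MonoBehaviour
{
    // Simplified step list for an airway management scenario (assumed names).
    readonly List<string> expectedSteps = new List<string>
        { "check_airway", "preoxygenate", "insert_laryngoscope", "place_tube" };

    int nextStep = 0;

    // Called by the simulation whenever the trainee performs an action.
    public void OnTraineeAction(string action)
    {
        string expected = nextStep < expectedSteps.Count
            ? expectedSteps[nextStep] : "(procedure complete)";

        if (action == expected)
        {
            nextStep++;
            Debug.Log($"EPA: '{action}' is correct, proceed.");
        }
        else
        {
            Debug.Log($"EPA: '{action}' is out of sequence; expected {expected}.");
        }
    }
}
```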
Dynamic Virtual Patient Avatar for Adaptive VR Medical Emergency Training Framework
Thesis @CGVG, available for multiple students Tutors: Edoardo Battegazzorre, Andrea Bottino TAGS: Intelligent Agents, Medical Simulations, VR, Adaptive Learning
Medical education is a field that encompasses many skills, including knowledge acquisition, operation of medical equipment, and development of communication skills. Virtual and Mixed reality can offer a valuable contribution in medical training, as they provide a safe and flexible environment for trainees to practice these skills. These systems do not require the physical presence of an instructor, and they are able to support institutions and learners with standardized computer-based training and automatic assessments. Furthermore, they can foster self-learning and be easily adjusted, in terms of difficulty, to suit the learning pace for students at different levels.
To date, the standard approach still relies on classroom or on-the-job learning, or on interactions with human actors, the so-called «standardized patients». Virtual Patients are a novel and valid alternative to Standardized Patients that is becoming increasingly popular as a training medium. Virtual Patients are interactive computer-based simulations capable of portraying patients and clinical scenarios in a realistic way. Patients are portrayed by Embodied Conversational Agents, virtual agents with a human appearance and the ability to respond to users and engage in communication patterns typical of a real conversation.
The student will work on developing a Dynamic Virtual Patient Avatar for a VR Adaptive Learning framework for training doctors in emergency procedures (specifically: Airway Management, Pericardiocentesis, Central Line, Chest Drainage). The patient's condition is the primary variable to consider when performing these procedures in order to determine the correct course of action. The doctor should consider the patient's anamnesis (existing and previous conditions, discovered through documents detailing the patient’s history or by directly dialoguing with the patient) and the patient's current physical condition. The student(s) will focus their efforts on the development of a dynamic patient avatar with the following characteristics:
The Virtual Patient must communicate in natural language with the user, answering their questions and describing their physical condition.
The Virtual Patient must show various visible symptoms and characteristics (obstructed nostrils/mouth, broken jaw, broken teeth, bruises, burns, obesity, pregnancy, different ages, skin tone, etc.). Also, the Virtual Patient must be able to respond realistically to the medical procedures performed by the doctor (soft tissue deformation following the use of medical tools, bleeding after skin incisions, etc.).
Students will work with the Unity Engine, so basic knowledge of Unity, C#, and XR SDKs is required. The student will also need to create custom features for the avatar, such as specific blend shapes and materials, so basic knowledge of 3D modeling software (Blender, Maya, 3DS Max) is also advised.
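As an example of the kind of avatar customization involved, the following sketch drives a visible symptom through a blend shape using Unity's standard SkinnedMeshRenderer API. The blend-shape names in the usage comment are assumptions and must match those authored on the patient mesh.

```csharp
using UnityEngine;

// Minimal sketch: driving visible symptoms through authored blend shapes.
public class PatientAppearance : MonoBehaviour
{
    public SkinnedMeshRenderer face;   // the patient's skinned mesh

    public void SetSymptom(string blendShapeName, float weight01)
    {
        int index = face.sharedMesh.GetBlendShapeIndex(blendShapeName);
        if (index < 0) { Debug.LogWarning($"No blend shape '{blendShapeName}'"); return; }
        // Unity blend-shape weights range from 0 to 100.
        face.SetBlendShapeWeight(index, Mathf.Clamp01(weight01) * 100f);
    }
}

// Usage example when loading a scenario (hypothetical blend-shape names):
//   appearance.SetSymptom("jaw_swelling", 0.8f);
//   appearance.SetSymptom("bruise_left_cheek", 1.0f);
```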
Always Open: development of web-AR applications for indoor and outdoor museum visits
Thesis @CGVG in collaboration with Parco Paleontologico Astigiano, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: webAR, mobile devices, location based AR
The project envisions the development of Web-AR applications (which provide access to AR content over the Internet without the need to download specific applications). These Web-AR applications will provide users with multimedia content to support museum visits.
Specifically, the main objectives of this project are the following:
Evaluate the current capabilities of available Web-AR frameworks.
Manage the delivery of content related to museum artifacts in different ways (marker-based, image-based, SLAM-based).
Manage the delivery of AR content in location-based applications for visiting cultural sites in an open territorial environment.
Develop a framework that allows the modification of the content and logic of the web-AR application by non-experts.
Generating avatar motion from head and hands position
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada TAGS: Mixed Reality, CG, Animation, ML
In shared VR environments, seeing a realistic animation of the participating peers is of paramount relevance, as it helps increase the feeling of immersion and presence. In the case of shared environments where users are co-located in the same physical space, the availability of the full pose of the peer avatars becomes a mandatory requirement for enforcing the safety of the simulation environment and avoiding collisions with other users. However, obtaining a realistic animation would require the availability of the full pose of the users, which can only be captured with external (and often expensive) devices, thus preventing the implementation of low-cost and off-the-shelf solutions.
A possible alternative (which this thesis proposal aims to explore) is to leverage the tracking data available with current HMDs (i.e., position and orientation of the HMD and the controllers, which can be mapped to the head and hands) to reconstruct a believable, fluid and natural animation of the full avatar body. The problem is ill-posed, since the available data are not enough to reconstruct the full degrees of freedom of a real body. However, the scope of the work is NOT to reconstruct the current posture precisely, but to estimate a posture that (i) mimics the real one in a reasonable way and (ii) evolves in a fluent and realistic way through time. One possible solution is to use Inverse Kinematics for the upper body and Machine Learning approaches to extract a plausible lower-body pose from a repository (a minimal upper-body IK sketch is given below). Other solutions will be devised and investigated during the research project.
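For illustration, the sketch below binds the tracked HMD and controllers to the head gaze and hand goals of Unity's built-in humanoid IK. It assumes a humanoid rig with the "IK Pass" option enabled on the Animator layer, and deliberately leaves the ML-based lower-body estimation out.

```csharp
using UnityEngine;

// Upper-body sketch using Unity's built-in humanoid IK: hand goals come from the
// tracked controllers, the gaze target from the HMD.
[RequireComponent(typeof(Animator))]
public class UpperBodyIK : MonoBehaviour
{
    public Transform leftController, rightController, hmd;  // tracked transforms
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    // Called by Unity during the IK pass of the Animator.
    void OnAnimatorIK(int layerIndex)
    {
        animator.SetLookAtWeight(1f);
        animator.SetLookAtPosition(hmd.position + hmd.forward);  // gaze follows the HMD

        BindHand(AvatarIKGoal.LeftHand, leftController);
        BindHand(AvatarIKGoal.RightHand, rightController);
    }

    void BindHand(AvatarIKGoal goal, Transform target)
    {
        animator.SetIKPositionWeight(goal, 1f);
        animator.SetIKRotationWeight(goal, 1f);
        animator.SetIKPosition(goal, target.position);
        animator.SetIKRotation(goal, target.rotation);
    }
}
```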
Improving the design and effectiveness of Virtual Patients
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada TAGS: Mixed Reality, Medical Learning
Today, standardized patients (SPs, i.e., actors who are instructed to represent a patient during a clinical encounter with a healthcare provider) are considered the gold standard in health care education for training activities such as the simulation of clinical processes, decision making, developing clinical reasoning skills, and medical communication. SPs provide students with the opportunity to learn and practice both technical and non-technical skills in an environment capable of reproducing the realism of the doctor-patient relationship. These simulated environments are less stressful for the students, who are not required to interact with a real patient, and not harmful for the patient. However, SPs are difficult to standardize, since their performance heavily depends on the actors' skills, and their recruitment and training can become very costly.
A practical alternative to SPs is represented by virtual patients (VPs), i.e., virtual agents that have a human appearance and the ability to respond to users and engage in communication patterns typical of a real conversation. They can be equipped with external sensors capable of capturing a wide range of non-verbal cues (the user's gestures and motions, expressions and line of sight) and use them to modulate the evolution of the conversation. They are cost-effective solutions, since they can be developed once and used many times. They can be deployed as in-class or self-learning tools that students can use at their own pace and in any place. Finally, VP simulations, compared to SPs, are easier to standardize. That said, the current state of the art on VPs reveals several limitations and potentially unexplored areas that these theses aim to address.
Students will work on (and extend) a software library for the creation and management of Embodied Conversational Agents (ECAs, i.e., avatars capable of sustaining a realistic and empathic conversation with a human being) that CG&VG is currently developing.
Topic 1. Authoring tools for VPs
Implementing VPs is a cumbersome and complicated process, which requires taking into account several different elements (Natural Language Processing, emotion modelling, affective computing, 3D animations, etc.), which, in turn, involve specific technological and technical skills. Usually, the development of a VP is a cyclical process of research, refinement and validation with experts that can take a considerable amount of time. Thus, there is a need for simple (and effective) authoring tools that allow developers to support clinical educators in the rapid design, prototyping and deployment of VPs in a variety of use cases.
Students are required to develop such an authoring tool and assess its usability with a panel of volunteers.
Topic 2. Detecting real users' non-verbal cues (body language, prosodic features) to drive rich emotional interactions with ECAs
The unfolding of the simulation's narrative should be dictated (in tandem) by both the user's verbal and non-verbal behaviours. To this end, VPs should fully leverage non-verbal cues as a factor that actively influences the state of the agent. For instance, the same utterance should have a different outcome if the user maintains eye contact with the patient, looks in another direction, fidgets, or exhibits an incoherent facial expression.
Students are required to develop software modules capable of tackling various issues:
Capture and analyze body pose and body-language features, possibly with off-the-shelf and low-cost devices (e.g., cameras and ML/Computer Vision algorithms).
Develop computational mechanisms capable of extracting para-linguistic factors such as tone of voice, loudness, inflection, rhythm, and pitch, which can provide information about the actual emotional state of the other peer in the communication (a minimal loudness-extraction sketch follows this list).
Develop text-to-speech libraries capable of exploiting the same para-linguistic factors to modulate the VP's response according to its emotional state.
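As a taste of the para-linguistic pipeline, the sketch below extracts one such factor, RMS loudness, from the microphone using Unity's standard Microphone API; pitch and inflection estimation would follow the same capture pattern. The window size and sample rate are arbitrary assumptions.

```csharp
using UnityEngine;

// Sketch of one para-linguistic feature: RMS loudness from the default microphone.
public class LoudnessProbe : MonoBehaviour
{
    const int SampleRate = 44100;
    const int Window = 1024;            // ~23 ms analysis window
    AudioClip micClip;
    readonly float[] buffer = new float[Window];

    void Start() => micClip = Microphone.Start(null, true, 1, SampleRate);

    void Update()
    {
        int pos = Microphone.GetPosition(null) - Window;
        if (pos < 0) return;             // not enough samples captured yet
        micClip.GetData(buffer, pos);    // read the most recent window

        float sum = 0f;
        foreach (var s in buffer) sum += s * s;
        float rms = Mathf.Sqrt(sum / Window);   // 0 = silence, ~1 = full scale
        // Feed 'rms' into the VP's emotion model here.
    }
}
```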
Deep learning approaches for video action recognition
Thesis @CGVG + @VANDAL, available for multiple students Tutors: Andrea Bottino; Mirco Planamente, Chiara Plizzari, Barbara Caputo TAGS: ML, Deep Learning, Domain Adaptation, Source Free Domain Adaptation, Self supervised tasks
This topic includes a list of thesis proposals related to video action recognition (either in first or third person), with a specific focus on addressing the domain shift that affects models trained on a source domain and applied to a target domain, through Domain Adaptation approaches (i.e., methods that attempt in various ways to adapt the representation learned from the labeled "source" domain used for training to that of the unseen "target" domain, using a set of unlabeled target data).
The full list of available topics (and their details) can be found here
Multi-user (co-located) interaction paradigms in VR (VERA)
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: Distributed VR Environments, Collaboration in VR
We recently developed a custom framework for managing shared virtual environments where a large number of users (i.e., up to 50) are present and active simultaneously. This framework has been adopted in the development of the musical performance VERA. The project aimed at experimenting with the possibilities offered by immersive VR with an extended audience of colocated (i.e., sharing the same physical space) spectators. In this experience, users shared the same virtual environment and were able to interact with it (e.g., gazing at objects or pushing buttons). Although users could virtually see each other (in the form of avatars), they could not interact with one another. The objective of this thesis is to expand our previously designed framework to encompass this feature as well. The thesis student is thus required to:
Analyze the current state of the art for colocated multi-user interaction paradigms (e.g., how multiple users can manipulate objects and move within the environment without bumping into each other; a minimal proximity-check sketch follows this list).
Evaluate natural methods of interaction exploiting hand tracking, now readily available on low-cost VR headsets (e.g., Oculus Quest 2).
Extend our framework (developed within the Unity environment) with components allowing developers to rapidly define and program the aforementioned interactions.
Develop two scenarios and evaluate (through user testing) the effectiveness and appropriateness of the proposed solution.
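A minimal version of the colocated safety check mentioned above could look as follows. How peers' tracked positions are obtained depends on the framework's networking layer, which is abstracted here as a simple array; the warning distance is an arbitrary assumption.

```csharp
using UnityEngine;

// Minimal sketch of a colocated-safety check: warn when two tracked users'
// physical positions get too close.
public class ProximityGuard : MonoBehaviour
{
    public Transform localHead;          // local user's tracked HMD
    public Transform[] peerHeads;        // co-located peers, synced by the framework
    public float warnDistance = 1.0f;    // meters

    void Update()
    {
        foreach (var peer in peerHeads)
        {
            if (Vector3.Distance(localHead.position, peer.position) < warnDistance)
                Debug.Log($"Too close to {peer.name}: fade in a safety boundary.");
        }
    }
}
```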
Automatic extraction of video analytics labels
Thesis @CGVG, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: Collaborative Mixed Reality, Signal processing, ML
The experimental evaluation of collaborative behaviours is usually carried out through manual audio/video labelling. This process is extremely time-consuming, because it requires a person to carefully watch and listen to hours of recorded material, manually annotating (i.e., labelling) specific events (e.g., people looking at each other, sharing material) or contents (e.g., what they are saying). However, these practices are extremely important for providing quantitative evidence (data) to experimentally assess the effectiveness of novel collaborative technologies (e.g., mixed reality environments). Moreover, automating these processes would allow the evaluation of collaborative behaviours in real time.
The objective of this thesis is to propose and develop a software solution capable of processing and combining data, captured from multiple sources (e.g., cameras for body tracking data as well as microphones for audio), to automatically label and extract the presence or absence of collaborative behaviours.
The thesis student(s) will be required to:
Analyze the current state of the art of collaborative labelling methodologies (frameworks).
From this analysis, define an exhaustive list of the most significant behaviours (what people do) and contents (what people say) exploited in collaborative labelling methodologies.
Evaluate hardware and software solutions for capturing and analyzing audio contents and body-language features, possibly with off-the-shelf and low-cost devices (e.g., cameras and ML/Computer Vision algorithms).
Develop computational mechanisms capable of combining the data collected from multiple sources in order to extract the collaborative behaviours/contents defined in the previous steps (a minimal mutual-gaze labelling sketch follows this list).
Evaluate the effectiveness of the proposed solution with groups of people performing collaborative activities.
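As an example of an automatically extractable label, the sketch below marks two users as engaged in mutual gaze when each head's forward vector points roughly toward the other. The head poses are assumed to come from the body-tracking pipeline, and the angular threshold is an arbitrary assumption.

```csharp
using UnityEngine;

// Sketch of one automatically extractable collaboration label: mutual gaze.
public static class GazeLabeler
{
    // angleThresholdDeg: how far gaze may deviate and still count as "looking at".
    public static bool MutualGaze(Transform headA, Transform headB, float angleThresholdDeg = 15f)
    {
        return LooksAt(headA, headB.position, angleThresholdDeg) &&
               LooksAt(headB, headA.position, angleThresholdDeg);
    }

    static bool LooksAt(Transform head, Vector3 target, float thresholdDeg)
    {
        Vector3 toTarget = (target - head.position).normalized;
        return Vector3.Angle(head.forward, toTarget) < thresholdDeg;
    }
}
```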
AI-Driven Video Innovation: Revolutionizing Internal Communication through a Video Series Format
Thesis In collaboration with Reply S.p.A.; available for multiple students Tutors: Andrea Bottino, Francesco Strada Reply Tutor: Edoardo Raffaele (e.raffaele@reply.com) TAGS: Video content, AI-powered tools, Video production, Communication strategy, AI-generated elements
In today's digital age, video content has become a powerful tool for effective communication. The rise of AI-powered tools for generating images and videos presents exciting opportunities to streamline the video production process and produce dynamic, personalized content at scale.
The project focuses on creating a new video series format, from shooting to the final editing, that leverages AI-powered generation of images and videos. The project will also involve experimentation with new video ideas and strategizing the best approach to launch and communicate the new format.
Objectives:
Create a New Video Series Format: Design and develop a fresh, engaging, and informative video series to be published on an employee-dedicated video platform.
Utilize AI-Powered Video Generation: Explore and integrate AI-powered tools and techniques for generating images and videos to streamline the video production workflow and enhance content creation efficiency.
Video Creation and Editing: Produce the videos from shooting to the final editing, incorporating AI-generated elements where applicable, while maintaining a high standard of visual quality and storytelling.
Launch and Communication Strategy: Develop a strategic plan for launching the new video format and effectively communicating its value and purpose to the employees.
Required skills: Video shooting, video editing, basic knowledge of audio editing
The activity will take place mainly in Turin, at Reply S.p.A.
Patient-physician relationship in VR
Thesis In collaboration with University of Turin, Department of Neuroscience, prof. Elisa Carlino; available for multiple students Tutors: Andrea Bottino, Francesco Strada, Elisa Carlino TAGS: virtual production, character modeling, animated avatars, Internship
The presence of an external context can change the perception of symptoms. This phenomenon has been recognized in the medical field, where the role of the therapeutic context has been extensively documented by placebo research studies. Popularly known as a therapeutic effect derived from inert pills, the placebo effect is more aptly described as a “context-effect” whereby internal and external variables, ranging from the physical aspect of a treatment to the physician-patient relationship, are meaningful and capable of producing remarkable clinical improvements when an inert treatment is administered. However, no studies have deeply investigated the effects of a virtual physician-patient interaction on healing processes and symptom/pain perception.
In collaboration with the “ContExp Lab” of the University of Turin, this project aims to investigate the possible use of virtual reality, and of virtual physician-patient interactions, to modulate the pain experience when a treatment (real or inert) is delivered. The study aims to reach an understanding of this phenomenon at the behavioral level, i.e., the level of pain experienced, and at the cerebral level, combining VR technology with brain recording techniques such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS).
In a typical neurophysiology study on the placebo effect, healthy volunteers are recruited to participate in a study where painful stimulations are experimentally delivered in a specific context (in this case, a virtual context). Participants rate the painful stimulations before and after the administration of a treatment. EEG and/or fNIRS recordings accompany the entire process in order to identify the biological components related to the placebo effect.
For the thesis, the student(s) will work on developing a Dynamic Virtual Environment with several variables to create and modulate, in order to understand which are the virtual determinants of placebo effects on pain perception. Examples of such variables are: aspects of the virtual hospital in which the user will receive the treatment to reduce pain, aspects of the interaction between the virtual physician and the user, level of empathic interaction, etc. The student(s) will focus their efforts on:
the creation of the hospital environment, with different degrees of detail and possible interactions with the virtual physician,
and the development of a dynamic physician avatar interaction with the following characteristics:
Modulation of the interactions with the Virtual Physician (e.g., verbal and nonverbal communication with the user);
Modulation of the emotional interactions with the Virtual Physician (e.g., modulation of her/his behavior and emotional involvement depending on the user's status, i.e., pain scores; a minimal modulation sketch is given below).
The final aim is to investigate the effects of these scenarios on pain perception, using behavioral and neurophysiological approaches in collaboration with the University of Turin.
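Purely as an illustration of the modulation idea, the sketch below maps a self-reported pain score to a few behavioural parameters of the virtual physician. The parameter names and ranges are assumptions; the real mapping will be defined together with the ContExp Lab.

```csharp
using UnityEngine;

// Illustrative sketch: mapping the user's pain score (0-10) to behaviour
// parameters of the virtual physician avatar.
public class PhysicianEmpathyModulator : MonoBehaviour
{
    [Range(0, 10)] public float painScore;   // updated from the user's ratings

    public float VoiceWarmth { get; private set; }    // 0 = neutral, 1 = warm
    public float GazeOnPatient { get; private set; }  // fraction of time looking at user
    public float BodyLean { get; private set; }       // forward lean, degrees

    void Update()
    {
        float p = painScore / 10f;                    // normalize to 0..1
        VoiceWarmth   = Mathf.Lerp(0.3f, 1.0f, p);    // warmer voice at high pain
        GazeOnPatient = Mathf.Lerp(0.5f, 0.9f, p);    // more eye contact
        BodyLean      = Mathf.Lerp(0f, 12f, p);       // lean in when pain is high
    }
}
```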
Creation of virtual models for monitoring virtual museum visit experiences
Thesis In collaboration with Museo Nazionale Etrusco di Roma (ETRU), Unito (prof. Annamaria Berti, prof. Raffaella Ricci); available for multiple students Tutors: Andrea Bottino, Michela Benente, Valeria Minucciani TAGS: modelling of museum environments, virtual visit, visitor behaviour/reactions
This proposal is part of a strand of research that examines how museum spaces are experienced, based on the assumption that visitor involvement is partly conscious and partly unconscious. It is strongly influenced by the characteristics of the exhibition space, which have a positive or negative impact on the visit. Some design solutions are capable of triggering very different motor and emotional responses.
For this reason, the research will investigate how neurophysiological parameters change in relation to the exhibition space. For this project, it is thus necessary to create virtual environments that reflect both the current layout of some rooms in the Etruscan National Museum in Rome and alternative design proposals. These environments will then be used to test the responses of (virtual) visitors, both in terms of behavior and neurophysiological parameters measured with biosensors.
In particular, the main objectives of this work are the following:
Acquire digital data and model at least three alternative virtual environments (the current situation plus two different setup proposals).
Explore the way virtual environments are experienced by monitoring users' behavioral and neurophysiological parameters (a minimal logging sketch follows this list).
Compare and interpret the different responses in relation to the different spatial arrangements.
Compare with the behaviors observed during the real visit, recorded in the physical museum and serving as a baseline for the data obtained in the virtual environments.
Reflect on the different mechanisms of visitor engagement in the virtual space compared to the physical space.
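A minimal example of the behavioral monitoring involved is sketched below: the visitor's head position and view direction are periodically written to a CSV file, to be aligned offline with the biosensor streams. The file name and sampling rate are assumptions.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch of behavioral logging during a virtual museum visit.
public class VisitLogger : MonoBehaviour
{
    public Transform visitorHead;        // tracked HMD
    public float sampleInterval = 0.2f;  // seconds between samples
    StreamWriter writer;
    float nextSample;

    void Start()
    {
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "visit_log.csv"));
        writer.WriteLine("t,x,y,z,fx,fy,fz");   // time, position, view direction
    }

    void Update()
    {
        if (Time.time < nextSample) return;
        nextSample = Time.time + sampleInterval;
        Vector3 p = visitorHead.position, f = visitorHead.forward;
        writer.WriteLine($"{Time.time:F2},{p.x:F3},{p.y:F3},{p.z:F3},{f.x:F3},{f.y:F3},{f.z:F3}");
    }

    void OnDestroy() => writer?.Dispose();   // flush and close the log
}
```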
MPAI-ARA: Avatar Representation and Animation standard
Thesis @CGVG in collaboration with the MPAI consortium, available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: virtual videoconferencing, avatar animation, body and facial animation, MPAI-ARA, distributed VR environments
Avatar Representation and Animation (ARA) is a Technical Specification being developed to provide data format specifications enabling a party to represent and animate an avatar transmitted by another independent party.
The goal is represented by the following use case involving an Avatar-Based Videoconference: avatars representing humans with a high degree of accuracy participate in a videoconference held in a virtual room. The virtual environment is distributed and shared among all participants. Users' avatars can be animated with available third-party AI-based facial and motion capture systems.
The virtual conference includes a virtual secretary (VS), represented as an avatar, which creates an online summary of the meeting by recording the utterances of the speakers (by means of available speech-to-text APIs).
The goal of this thesis project is to develop a prototypal implementation of the MPAI-ARA standard including:
The development of a client-server architecture for managing communications and VE state consistency (a minimal sketch of the avatar state to be synchronized follows this list).
The development and management of the virtual meeting rooms.
The management of the avatar animations in the shared environment.
The management of the VS.
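To make the state-consistency problem concrete, the sketch below shows the kind of per-avatar message a client-server layer could exchange each frame. The field layout is an illustrative assumption: MPAI-ARA itself defines the actual, normative data formats.

```csharp
using UnityEngine;

// Sketch of per-avatar state the client-server layer must keep consistent.
[System.Serializable]
public class AvatarPoseMsg
{
    public int avatarId;
    public float timestamp;               // sender's local time, for interpolation
    public Vector3 rootPosition;
    public Quaternion rootRotation;
    public Quaternion[] jointRotations;   // one entry per skeleton joint
    public float[] faceBlendWeights;      // facial animation channels

    // JSON keeps the sketch simple; a real server would use a compact binary codec.
    public string ToJson() => JsonUtility.ToJson(this);
    public static AvatarPoseMsg FromJson(string json) => JsonUtility.FromJson<AvatarPoseMsg>(json);
}
```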
Required skills
Basic skills in the field of 3D graphics, software development and game engine programming.
Thesis available for multiple students
MPAI-SPG: Server-based Predictive Multiplayer Gaming standard
Thesis @CGVG in collaboration with Synesthesia and MPAI consortium, available for multiple students Tutors: Andrea Bottino, Marco Mazzaglia, Francesco Strada TAGS: online gaming, MPAI-SPG, authoritative servers, online cheat detection
MPAI-SPG aims to develop a standard for a software architecture that minimises the audio-visual and gameplay discontinuities caused by high latency or packet losses during an online real-time game. In case information from a client is missing, the data collected from the clients involved in a particular game are fed to an AI-based system that predicts the moves of the client whose data are missing. The same technologies provide a response to the need to detect whether a player is cheating. Details about the standard can be found here.
The goal of this thesis project is twofold:
To work on a prototype Racing game to test the architecture of MPAI-SPG.
To use the architecture as a tool to intercept cheating attempts by certain clients (a minimal sketch of the server-side input fallback follows).
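The core server-side idea can be sketched as follows: when a client's input for the current tick is missing, a predicted input is substituted. The PredictInput() placeholder simply repeats the last known input, standing in for the AI predictor described by the standard; comparing real and predicted inputs is also the natural hook for cheat detection.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Conceptual sketch of the MPAI-SPG idea on an authoritative server.
public class PredictiveGameServer
{
    // Last known input per client (e.g., steering/throttle for a racing game).
    readonly Dictionary<int, Vector2> lastInputs = new Dictionary<int, Vector2>();

    public Vector2 ResolveInput(int clientId, Vector2? receivedThisTick)
    {
        if (receivedThisTick.HasValue)
        {
            // Normal path: real input arrived. A cheat detector could compare
            // it against the prediction here and flag large, implausible gaps.
            lastInputs[clientId] = receivedThisTick.Value;
            return receivedThisTick.Value;
        }
        return PredictInput(clientId);   // fallback: predict the missing move
    }

    // Placeholder predictor: repeat the last input. The real system would query
    // an AI model trained on all clients' game state, as described above.
    Vector2 PredictInput(int clientId) =>
        lastInputs.TryGetValue(clientId, out var last) ? last : Vector2.zero;
}
```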
AI and Art: narrate the emotional impact of musical performances to audiences through generated videos [Assigned]
Thesis @CGVG in collaboration with Lucas ETS - “Narrazioni Parallele” Project Tutors: Tatiana Mazali, Andrea Bottino TAGS: Generative AI, Interdisciplinary, Humanizing Technology, Art & Technology, Digital Creativity
Thesis available for two students
The project focuses on creating videos produced through one or more generative AI technologies by processing data, images, and texts collected during musical events of the Narrazioni Parallele Festival (October-November 2024 - Spring 2025).
Musical artistic expression is transcoded into language and images by the collective intelligences of the audience, which are then further transcoded through AI. What result will be returned to the audience?
Final videos produced during the thesis will be presented on June 21-22, 2025.
Objectives:
Identify the most effective method for data collection during and after the events, engaging both performers and audiences with technologies that actively involve them in the process.
Collect and categorize data that will serve as the basis for prompts to generate videos using AI software.
Use generative AI to map potentially discriminatory stereotypes in the content with an approach open to diversity (in terms of gender, abilities, cultures, etc.).
Create one or more creative AI-generated videos, starting from a "script" and prompts based on inputs from the audience at the Narrazioni Parallele project concerts, generating an artistic artifact with its own aesthetic autonomy.
"Validate" the outputs through a qualitative assessment involving the users.
Virtual Production with Artificial Intelligence for Motion Capture in Broadcast: Benefits, Limitations, and Production Implications [Assigned]
Virtual production is a cutting-edge technology that combines real-time visualization with pre-visualization in the film and television industry. Through the use of virtual reality and artificial intelligence, virtual production enables more efficient and creative ways of authoring and capturing media content. The goal of this thesis is to investigate the potential of virtual production and artificial intelligence, with special focus on motion capture techniques, in the field of broadcast production. Through a literature review, case studies, and interviews with industry professionals, the benefits and limitations of using artificial intelligence for motion capture in virtual production will be explored, as well as impacts on the production value chain (costs, workflow, human resources, and skills to name a few). Among the expected outcomes of this research is a better understanding of how virtual production and AI can improve the production process, increase the accuracy and realism of motion capture, and reduce production costs while maintaining or even improving the quality of the final product.
During the internship, the student will work with qualified staff both in the Studio TV production center in Turin and in the Rai Research & Innovation Center offices.
Possible thesis topics
3D Content creation for broadcast virtual studio
Markerless mocap vs. marker-based mocap
Integration of real-time MC systems with Unreal Engine
Analysis of the Production value chain (costs, workflow, skills…)
Type of Thesis
Experimental – Possibility of Internship
Required skills
Basic skills in the field of 3D graphics, software development and game engine programming.
Thesis available for multiple students
DIVINE: Utilizing Advanced AI for Precision Diagnosis of Vine Diseases in Compliance with the European Green Deal [Assigned]
Thesis @CGVG in collaboration with Pro-Logic, Torino, available for multiple students. Internship + research grant available Tutors: Alessandro Emmanuel Pecora, Andrea Bottino TAGS: Artificial Intelligence, Deep Learning, Computer Vision, Precision Agriculture, Vine Disease Diagnosis, Image Analysis, Neural Networks, Sustainability, European Green Deal
The DIVINE (DIagnosi delle malattie della VIte per immagini tramite le reti NEurali e il deep learning) project is a pioneering initiative aligned with the European Green Deal, aimed at transforming the way vine diseases are detected and treated. This thesis aims at developing Computer Vision (deep-learning based) methodologies for automatically and accurately diagnosing, from images (in the visible and multispectral range), major vine diseases in Italy, such as Downy Mildew (Peronospora) and Powdery Mildew (Oidio).
This project is a collaborative effort, bringing together entities from various sectors, including enterprises, academic institutions, agronomists, and sensor technology experts. The collaborative nature of the project is aimed at leveraging a wide spectrum of expertise to develop effective solutions and build a comprehensive, annotated dataset, from both controlled experiments and real-world crop scenarios, that can be used to train the devised models.
Goals:
Literature Review and Model Exploration:
Conduct an extensive review of the literature to identify state-of-the-art AI and deep learning models suited for image-based plant disease diagnosis.
Investigate existing datasets and assess data availability for training AI models in the context of vine disease diagnosis.
Data Collection and Annotation:
Work in conjunction with project partners to construct a detailed dataset of grapevine leaf images, encompassing both healthy and diseased specimens.
Collaborate with agronomists and sensor technology specialists for precise annotation of these images in a controlled experimental setting.
Model Training and Validation:
Develop and refine AI models using the annotated dataset, with a focus on achieving high accuracy and versatility under varying field conditions (i.e., in-the-wild analysis).
Validate the effectiveness of these models in diverse environmental settings.
In-the-wild Application and Evaluation:
Implement the trained AI models in actual vineyard scenarios.
Assess the models' performance in diagnosing Downy Mildew and Powdery Mildew in different vineyard environments.
Objectives:
Reduce the reliance on pesticides in vine cultivation by facilitating AI-driven diagnosis of critical vine diseases.
Advance the understanding and application of AI within the field of precision agriculture, specifically for disease diagnosis.
Align with the sustainability goals set forth in the European Green Deal, promoting environmentally friendly agricultural practices.
Establish a valuable, annotated image database to support ongoing AI research in the agricultural sector.
This thesis represents an opportunity to contribute significantly to sustainable agriculture through the integration of cutting-edge AI technology.
Theses in collaboration with Centro Ricerche RAI about AI, synthetic humans, motion capture, and multimedia [Assigned]
Thesis In collaboration with Centro Ricerche, Innovazione Tecnologica e Sperimentazione; available for multiple students Tutors: Andrea Bottino, Francesco Strada TAGS: virtual production, character modeling, animated avatars, generative AI, motion capture, virtual studios, Internship
In the following section, we present a series of thesis proposals, some of which must be managed as internships due to constraints related to access to equipment or facilities at the CRR.
It is important to note that the CRR has a limited capacity for hosting students. As of the current date, there is only one available spot for a thesis project. Any additional projects may start upon the completion of the theses already underway.
All the following theses may be made available for multiple students working on the same project.
Sign language LIS [Internship]
The thesis aims to develop an innovative, avatar-based interface for Italian Sign Language (LIS) communication, enhancing accessibility and interaction for the deaf and hard-of-hearing community. The system will leverage motion capture suits and gloves (Xsens & Rokoko) to accurately capture and translate LIS gestures. This technology will enable the detailed tracking of hand movements and body language, essential for conveying the nuances of LIS. The recorded movements will be applied to a digital avatar to replicate LIS gestures in real time. The avatar will serve as a visual representation, translating LIS into a visual format that can be easily understood (a minimal retargeting sketch is given below).
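As a minimal illustration of the real-time replication step, the sketch below applies streamed joint rotations to the avatar's bones each frame. How the rotations arrive from the Xsens/Rokoko SDKs is outside the sketch, and is assumed to fill the latestRotations array; bone ordering is assumed to match the mocap skeleton.

```csharp
using UnityEngine;

// Minimal retargeting sketch: drive the avatar's bones from a mocap stream.
public class LisAvatarRetargeter : MonoBehaviour
{
    public Transform[] avatarBones;        // ordered to match the mocap skeleton
    public Quaternion[] latestRotations;   // updated each frame by the capture stream

    void LateUpdate()
    {
        int n = Mathf.Min(avatarBones.Length, latestRotations.Length);
        for (int i = 0; i < n; i++)
            avatarBones[i].localRotation = latestRotations[i];  // replicate the gesture
    }
}
```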