AIBusinessONo

Develop an artificial intelligence platform that monitors social networks to analyze user sentiment, detect emerging trends and provide real-time alerts.

Technologies:

Python, NoSQL, NLP, LLM, API, Web Scraping, Data Visualization


Keywords: AI, ML, NLP, API, Data Visualization
Complexity: 8

The project consists of creating a tool that uses advanced AI and natural language processing (NLP) techniques to analyze content from social networks such as X, Facebook, Instagram, TikTok, and other widely used platforms. The system should be able to categorize sentiment (positive, negative, neutral), identify emerging themes or keywords, and generate automated reports that include data visualizations for quick and effective interpretation.
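The sentiment-categorization step described above can be sketched as follows. This is a toy keyword-lexicon approach purely for illustration; the actual system would rely on NLP models or LLMs, and the word lists below are assumptions, not a real lexicon.

```python
# Toy sentiment categorization: positive / negative / neutral.
# The lexicons are illustrative placeholders, not real resources.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def categorize_sentiment(post: str) -> str:
    words = set(post.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The same three-way output (positive, negative, neutral) is what the automated reports and visualizations would aggregate over time.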

The platform will enable organizations to monitor their reputation, anticipate potential crises, and adapt their communication and marketing strategies based on trends and audience sentiment. It will also allow them to identify market opportunities based on audience interests.

This project is not only applicable for private companies, but also for non-profit organizations, government institutions, and other entities interested in better understanding public perception and responding proactively.

ArtAIchemy

Web platform for the "modernization" of works of art using artificial intelligence techniques.

Technologies:

Python, PyTorch, Stable Diffusion


Keywords: Keywords, separated by commas
Complexity: 8

The objective of this project is to develop a web platform to modernize works of art, offering advanced tools such as portrait segmentation (for example, to create personalized stickers), image animation to bring static works to life, and the generation of stereoscopic images for immersive 3D viewing. This platform will allow users to transform and revitalize classic works using cutting-edge technologies, opening new possibilities for interacting with and enjoying art.

To accomplish this task, in-house models can be developed and/or models available to the general public can be used, taking their licenses into consideration.

The web platform should take high-resolution images into consideration, since many artwork sources are of high quality, adapting the output accordingly or allowing users to select their output preferences.
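The resolution-adaptation step mentioned above could be sketched as follows: scale an input image down so its longest side fits a model-friendly bound while preserving aspect ratio. The `max_side` default of 1024 is an illustrative assumption, not a requirement of any particular model.

```python
def fit_resolution(width: int, height: int, max_side: int = 1024) -> tuple:
    """Scale (width, height) so the longest side is at most max_side,
    preserving aspect ratio. max_side=1024 is an illustrative default."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height          # already small enough
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

A user-preference setting would simply override `max_side` (or skip the call entirely to keep the original resolution).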

AbuseGame 2

Continuation of the AbuseGame project, an online video game for child abuse and bullying risk scoring.

Technologies:

C#, Unity, Golang, OAuth2, Minikube

Keywords: Child abuse, bullying, videogame, 2D, web development, Cloud
Complexity: 7

AbuseGame, a video game developed in Unity (C#) and aimed at children, is used to detect situations of child abuse through a scoring system obtained from a decision tree. This year we have the collaboration of Cepsicap, a specialized psychological center that will support us in adjusting the scoring system already implemented.
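The decision-tree scoring idea can be sketched as below. The questions, branches and weights here are entirely hypothetical illustrations; they are not the real instrument, which is the part Cepsicap would help calibrate.

```python
# Hypothetical sketch of a decision-tree risk score: nested conditions
# accumulate points. The indicators and weights are invented for
# illustration only, not clinical criteria.
def risk_score(answers: dict) -> int:
    score = 0
    if answers.get("avoids_home"):
        # a second co-occurring indicator weighs more heavily
        score += 2 if answers.get("unexplained_injuries") else 1
    if answers.get("isolated_at_school"):
        score += 2
    return score
```

In the game, the `answers` dictionary would be filled in implicitly from the choices the child makes while playing, rather than from a questionnaire.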

AbuseGame 2 will aim at integrating the web version of the video game (WebGL) into the cloud. It will be a secure, high-availability ecosystem where forensic or teaching staff can manage students, tutors and results so that it can be applied in a real environment.

This ecosystem will ideally be composed of a Kubernetes-based cluster hosting a UI (frontend) and several microservices forming the backend, with a focus on scalability. This composition would allow teachers to manage their sites, identify possible early indicators through an alert system, collect metrics, and even make changes to the scoring system.

BadmintonTracker

Badminton player and shuttlecock tracking, potential continuation of BasketballTracker project.

Technologies:

Python, PyTorch, OpenCV

Keywords: AI, video analysis, ML, tracking, badminton
Complexity: 9

The “BadmintonTracker” project aims to develop software for the detection and analysis of the position and speed of badminton players, as well as the shuttlecock, using images captured by tactical cameras during official matches. This project is situated at the intersection of software engineering, computer vision and sports analysis.

The project objectives are divided into two phases:

• Software Development: software will be designed that applies advanced image processing and computer vision techniques. The software will consist of detection and tracking algorithms that accurately identify the position and trajectory of the players and the shuttlecock.

• Data Extraction: Kinematic parameters such as position, velocity and acceleration of the players and the shuttlecock will be extracted and calculated in real time. The variables will be structured and stored in a suitable format for subsequent quantitative and qualitative analysis.

Regarding the methodology of the project, images from tactical cameras of badminton matches will be used. Preprocessing techniques such as noise filtering, distortion correction and illumination normalization will be applied. Then, convolutional neural network (CNN) based algorithms will be implemented for the detection and segmentation of players and shuttlecock in each frame. Tracking techniques such as the Kalman filter and the SORT (Simple Online and Realtime Tracking) algorithm will be used to maintain continuous tracking of the objects through the video sequence. Data association techniques will also be implemented to solve occlusion and temporary loss-of-detection problems. Finally, the velocity and acceleration of the players and the shuttlecock will be calculated using numerical differentiation methods and time series analysis. If possible, algorithms will be developed for the detection of critical events such as smashes, drop shots and fast displacements based on kinematic thresholds.
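The kinematics step can be sketched as a finite-difference computation over the per-frame positions produced by the tracker. This is a minimal illustration (central differences in the interior, one-sided at the ends); the positions and frame rate below are made-up example values.

```python
# Velocity and acceleration from per-frame positions (metres) at a
# given camera frame rate, via finite differences.
def derivative(values, fps):
    """Central differences in the interior, one-sided at the ends."""
    n = len(values)
    dt = 1.0 / fps
    out = []
    for i in range(n):
        if i == 0:
            out.append((values[1] - values[0]) / dt)
        elif i == n - 1:
            out.append((values[-1] - values[-2]) / dt)
        else:
            out.append((values[i + 1] - values[i - 1]) / (2 * dt))
    return out

# Example: x-coordinates of the shuttlecock over 4 consecutive frames
positions = [0.0, 0.5, 1.0, 1.5]
velocity = derivative(positions, fps=50)        # m/s per frame
acceleration = derivative(velocity, fps=50)     # m/s^2 per frame
```

Event detection (smashes, drop shots) would then threshold these velocity and acceleration series.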

Example video camera: https://www.youtube.com/live/XyK0nwAdO_g?si=F_ia_t52eYCtOOxx&t=678

This project may be supported by Fernando Rivas, Carolina Marin's coach.

BocciaSinLimites

Application/web that allows interconnecting Boccia games for people in different locations, analyzing the game status on each field, and sending this information to the other players.

Technologies:

Python, C#, C++, Go, React, Angular, Android

Keywords: Object detection, computer vision, Boccia, augmented reality, AI, cameras, web application, paralympics, mobile application.
Complexity: 8

Boccia, whose origins date back to Classical Greece, is a complex combination of tactics and skill. It is played individually, in pairs, or in teams on a rectangular court, where players attempt to throw their balls as close as possible to the white target ball while also trying to push their opponents' balls away, in a continuous exercise of tension and precision.

This sport has been part of the Paralympic program since the 1984 New York Games and is included in the 2024 Paris Paralympic Games.

However, Boccia has a problem: it is not always easy for athletes to find matches of different levels in their local area. So, once again, we turn to technology for a solution.

We want to develop an application or web server that allows athletes to connect and play matches regardless of their location. To achieve this, we need to detect each player's playing field using an overhead camera, as well as track every move made on this field. This way, we can send the opponent the state of the balls and the white target ball after each player's throw. Once the first player's move is completed and received, the opponent will then place the balls on their own field to replicate the exact state of the previous play.

Regarding the solution to communicate the play to the remote player, several possibilities exist. The simplest option is to display the balls in a virtual field (a drawing), waiting for the player or their assistant to manually place them and confirm their correct positioning. This approach can be enhanced by using the remote player's camera to verify the approximate position, indicating whether the placement is accurate or needs adjustment. Another option is to implement an augmented reality solution, allowing the player to "see" where to place the balls. Finally, a laser pointer attached to a pan-and-tilt system could be used to indicate the exact placement of each ball on the real playing field.
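Whichever display option is chosen, the state of a throw must be serialized and sent to the remote player. A minimal sketch of such a message is shown below; the field names and the normalized court coordinates are illustrative assumptions, not a defined protocol.

```python
# Encode/decode the board state after a throw as a JSON message.
# Coordinates are assumed normalized to [0, 1] over the court.
import json

def encode_state(jack, red_balls, blue_balls, throw_number):
    state = {
        "throw": throw_number,
        "jack": {"x": jack[0], "y": jack[1]},
        "red": [{"x": x, "y": y} for x, y in red_balls],
        "blue": [{"x": x, "y": y} for x, y in blue_balls],
    }
    return json.dumps(state)

def decode_state(message):
    return json.loads(message)
```

The receiving side would render this state on the virtual field, drive the augmented-reality overlay, or aim the laser pointer, depending on the option implemented.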

ChatWithHistoTwin

AI/LLMs-based conversation with a specific historical character, with voice cloning as an optional feature.

Technologies:

Python, PyTorch, HF Transformers, FastAPI, Reflex

Keywords: AI, LLM, Chatbot, DigitalTwin, History

Complexity: 9

In this work we propose to implement a digital twin of a historical character through an LLM. The student will implement an application based on an LLM so that, through different techniques (fine-tuning, prompt engineering, RAG, etc.), a user will be able to talk to the LLM as if they were talking to the selected historical character, and discuss the different historical events in which that character was involved.

For the implementation, databases about the character must be searched (biographies, etc.). As an extension of the work, it is also possible to search for recordings and other digital files to clone the character's voice. The model should be served in a simple webapp that allows users to converse with it easily. It will also be possible to incorporate several historical characters into the same web, and even to compare the answers given to the same question by several of them.
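The RAG part of the pipeline can be sketched as follows: retrieve the biography passage most relevant to the user's question and prepend it to the prompt sent to the LLM. A real system would use embedding models for retrieval; here a toy bag-of-words overlap stands in, and the passages are illustrative examples.

```python
# Toy retrieval step for RAG: pick the passage sharing the most words
# with the question. Real systems would use embeddings instead.
def retrieve(question: str, passages: list) -> str:
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

passages = [
    "Napoleon was crowned Emperor of the French in 1804.",
    "The Battle of Waterloo took place in 1815.",
]
context = retrieve("when was the battle of waterloo", passages)
prompt = f"Answer in character, using this context:\n{context}"
```

The persona itself (tone, era, first-person voice) would come from the system prompt and/or fine-tuning, while retrieval keeps the factual grounding.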

ChronoStreetTourist3.0

ChronoStreetTourist is a mobile app that uses augmented reality to display historical images on a mobile device's camera when pointed at a building or location for which old images are available.

Technologies:

Unity, C#, Android, Google Dev, Vuforia

Keywords: Computer vision, augmented reality, tourism, geolocation

Complexity: 8

ChronoStreetTourist is a mobile app that uses augmented reality to display historical images on a mobile device's camera when pointed at a building or location for which old images are available.

It was developed in previous Final Year Projects (FYP) during the 2021-22 and 2022-23 academic years (the latter by the co-director Ánder). Now, the goal is to further develop the ChronoStreetTourist project.

1. Get it up and running: Import the project into Unity and familiarize yourself with it. Update the Vuforia tokens where necessary and get the existing app up and running.

Once operational, the ChronoStreetTourist project aims to evolve in several aspects:

2. Simplify architecture: The previous project had two components: a mobile app that displays augmented reality when viewing a monument and a web interface for managing images, users, etc.

a. Since the hosting is down, the goal is to eliminate the web interface and make the app function autonomously.

b. Implement an admin user role with access to all functionalities and a user role with limited access.

3. Geolocation: Integrate a map marking all the monuments included in the database. When clicking on these points on the map, the old photo would be displayed along with a link to a website with more information about it.

4. Push notifications: Notify the user if they approach within approximately 100 meters of any of the map points.

5. Additional improvements: If time permits, consider enhancements to the app’s aesthetics, QR code reading, etc. The student has the freedom to propose other improvements.

CognitiveCarSimulator

Realistic urban driving simulator for training and assessment of cognitive impairment after acquired brain injury.

Technologies:

C++, Unreal Engine, Blender, BlueprintUE

Keywords: Simulation, driving, car, cognitive damage, brain damage

Complexity: 9

This project is carried out in collaboration with the ASPAYM Foundation, whose mission is to promote and encourage all kinds of actions and activities aimed at improving the quality of life of people with spinal cord injury and severe physical disabilities.

It has been demonstrated that the driving of a person who has suffered an acquired brain injury is usually affected as a consequence of that injury.

Sometimes, such conditions can be solved with the implementation of different physical adaptations in vehicles: adapted pedals or controls, pedal actuation with maximum force, alternative adapted steering system controlled with one hand or one arm, etc. However, affected persons may also present cognitive alterations that impact their ability to drive. Some of these alterations may be present in different aspects such as attention, visuospatial skills, understanding of signs and signals, etc.

The proposal is to create a driving simulator that, through certain scenarios created according to the impairment to be evaluated, can be used both to practice and improve in overcoming the most frequent cognitive alterations and to monitor the evolution of the affected person.

The simulator must be usable by keyboard (and potentially mouse), but additional technologies such as virtual reality or steering wheel and pedal controls can be incorporated.

It is proposed to carry out this project using Unreal Engine, developing the logic both in C++ and BlueprintUE and incorporating those assets available in the Unreal Engine marketplace that are considered necessary (as long as their license allows their use in this project).

Finally, in this project we will count on the advice of ASPAYM professionals.

ConceptIA

Semantic network in Spanish that represents relationships between words and concepts, improving the accuracy of language models.

Technologies:

Python, Neo4j, Word2Vec, spaCy, Hugging Face Transformers

Keywords: Semantic Networks, LLM, Graph Databases, Spanish Language, Machine Learning

Complexity: 8

ConceptIA is a project focused on building a semantic network for Spanish that represents relationships between words and concepts, helping language models to improve their contextual understanding. This structured system will allow capturing complex and semantic relationships, which is key to increase the accuracy of models such as LLM (Large Language Models) and SLM (Small Language Models).
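The core data structure can be sketched as typed relations between words, as below. The real project targets a graph database (Neo4j); this in-memory adjacency list and the Spanish triples in it are illustrative examples only.

```python
# Minimal in-memory semantic network: directed, typed relations
# between words/concepts, queryable by relation type.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.edges = defaultdict(list)   # head -> [(relation, tail)]

    def add(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def related(self, word, relation):
        return [t for r, t in self.edges[word] if r == relation]

net = SemanticNetwork()
net.add("perro", "es_un", "animal")     # hypernymy
net.add("perro", "sinonimo", "can")     # synonymy
```

A language model could consult such typed relations at inference time (for example, via RAG over the graph) to disambiguate or enrich its contextual understanding of Spanish.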

ConceptIA has the potential to revolutionize the way linguistic models process and relate information in Spanish.

CropSense

Project consisting of 3 parts: The first is an Arduino device that collects crop data with the help of sensors. The second is a server that collects and stores this data. The third is a frontend to view the data and control the irrigation, schedule it, etc.

Technologies:

C (Arduino), HTTP, NodeJs, JavaScript, Angular, MySQL, Redis

Keywords: Crops, Watering, Sensors, Humidity, Temperature

Complexity: 8

This app addresses the need to keep field management as up to date as possible and to save water in irrigated crops.

The components of the project are:

1. Arduino code that performs the following tasks:

  • a. Automatic server discovery by MAC (optional).
  • b. Collect sensor information from:
  • i. Temperature
  • ii. Air humidity
  • iii. Soil humidity
  • c. Transmit the data to the server.
  • d. Visually report the status of the connection via LED.
  • e. Synchronize time with the server.
  • f. Receive commands from the server (when to water and when not to water, start watering, end watering...).
  • g. Send status information.
  • h. Send water flow information when watering (optional).

2. Backend:

  • a. Receive data from the Arduino and manage it.
  • b. Save the data in a database.
  • c. Decide when the system should water.
  • d. API for the frontend.
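The backend's watering decision (item 2c) can be sketched as a simple rule over the sensor data and the forecast. The backend is planned in Node.js; this Python sketch is for illustration only, and the 30% humidity threshold is an invented placeholder that a real deployment would tune per crop.

```python
# Illustrative watering rule: skip if rain is expected, otherwise
# water when soil humidity drops below a (crop-specific) threshold.
def should_water(soil_humidity_pct, rain_expected, min_humidity=30):
    if rain_expected:
        return False                       # let the weather do the work
    return soil_humidity_pct < min_humidity
```

The resulting decision would be pushed to the Arduino as one of the server commands listed in item 1f (start watering, end watering).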

3. Frontend AdminLTE 3 | Dashboard:

  • a. Access control.
  • b. Display live data from sensors (Dashboard).
  • c. Show when watering and when not watering (Dashboard).
  • d. Weather forecast.
  • e. Arduino management (Configured, not configured).
  • f. Board configuration.
  • g. Display historical sensor data.
  • h. Display watering intervals.

DataShotExtractor

AI-based web application capable of extracting data from an image containing one or more tables and exporting it to different standard data formats (CSV, XML, JSON, etc.) or Markdown. The system will be able to queue the different data extraction jobs, because the inference process is time-consuming, and to automatically detect whether the inference can be performed on a GPU, falling back to the CPU by default otherwise.

Technologies:

Python, PyTorch, Web frameworks, ONNX

Keywords: Segmentation, AI, inference, HuggingFace, ONNX

Complexity: 8

The “Data Shot Extractor” project aims to develop a web application that, through artificial intelligence (AI), allows the accurate extraction of tabular data from images. This tool will be able to process images containing tables, segment them, and extract the information in an automated way and then export it in various standard formats such as CSV, XML, JSON or Markdown, facilitating its integration into different workflows.

The system will be designed to efficiently handle the computational load associated with AI inference, especially in segmentation and complex pattern recognition tasks in images. To achieve this, a job queue will be implemented, which will allow multiple data extraction requests to be handled in an orderly fashion, avoiding system overload and optimizing processing time.
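The job queue can be sketched with the standard library: requests are enqueued and a single worker thread runs the time-consuming inference one job at a time. `process_image` below is a stand-in for the real segmentation/extraction step, not part of any actual API.

```python
# Minimal job queue: one worker drains the queue sequentially so
# concurrent requests never overload the inference backend.
import queue
import threading

jobs = queue.Queue()
results = {}

def process_image(path):
    return f"extracted:{path}"      # placeholder for AI inference

def worker():
    while True:
        path = jobs.get()
        if path is None:            # sentinel: shut the worker down
            break
        results[path] = process_image(path)
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
jobs.put("table1.png")
jobs.put(None)
t.join()
```

In the web application the worker would run for the lifetime of the server, with job IDs returned to the client so it can poll for results.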

A prominent feature of the project will be the automatic detection of the availability of a GPU to perform inference. If a GPU is detected, the system will use this option to speed up the process; otherwise, it will fall back to the CPU as the default option, ensuring that the application works optimally in different environments.
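The GPU/CPU selection described above can be sketched with PyTorch's standard CUDA check, wrapped so the application still works in environments where PyTorch (or a GPU) is absent:

```python
# Pick the inference device: CUDA when PyTorch reports a usable GPU,
# otherwise fall back to CPU.
def pick_device():
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```

The returned string can be passed directly where a device identifier is expected (e.g. when loading the model or an ONNX execution provider mapping).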

The development of this application will be carried out using Python as the main language, PyTorch for the modeling and training of the AI models, and web frameworks for the creation of the user interface. In addition, pre-trained models available on platforms such as HuggingFace will be explored and optimized for deployment using ONNX, ensuring efficient performance in different environments.

“Data Shot Extractor” is positioned as a versatile and practical solution for automated data extraction from images, aimed at users who need to integrate tabular data quickly and accurately into their systems, with the flexibility to adapt to different hardware capabilities.

Diaberse: AI assistant

Diaberse extension to add artificial intelligence to our favorite avatar to help us manage our diabetes.

Technologies:

Unity, C#, Meta SDK, Python, PyTorch

Keywords: Unity, Metaverse, virtual reality, 3D models, diabetes

Complexity: 8

During the last two years, the Technological Observatory has developed the “Diaberse” project, an immersive virtual reality experience for diabetes education. Through videos and games, children and adults, especially those who are new to this pathology, are taught how to face different real-life situations that arise when living with diabetes.

In this extension, our diabetes consultation will be more interactive than ever: our avatar, in addition to understanding voice commands, will be able to help us control our diabetes in an easy, intuitive and safe way. First, we will configure the avatar to understand voice commands which, once processed, are passed to an LLM (which we may or may not re-train) so that it really understands what we want to know about diabetes. The avatar will then answer by voice, optionally complemented with more information in the form of text, images or videos. Thus, we will be able to hold a conversation with our avatar aimed at controlling diabetes, whether type 1 or type 2.

For the realization of this project, the student will be provided with access to all the code of previous editions.

EDUCAQuest

Brief description: Tracking and gamification of school assignments through a platform with missions, rewards and social network components.

Technologies:

Android, iOS, Python, C#, Java, Kotlin, Go, React, Angular

Keywords: Homework, social network, mobile applications, web, education

Complexity: 8.5

The Montemadrid Foundation is a non-profit organization that works for inclusion, access to education, employment, culture and environmental conservation. In one of its spaces, Casa San Cristobal, located in the neighborhood of San Cristobal de los Angeles in the south of Madrid, the foundation's staff carries out various activities related to social integration, reading promotion, education, etc. One of these activities is the monitoring of the school homework of the children in the neighborhood.

To facilitate this task and make it more attractive to students, we want to develop an educational application called EDUCAQuest, which uses gamification techniques to motivate school children to keep up with their homework. In this platform, homework assignments will be represented as quests, with associated notifications, which will be rewarded, while children will be able to create their own avatar, customized to their liking, to see themselves represented in the platform.

The platform will also have social components, such as group work and access to chats to discuss homework progress and homework strategies. It should therefore protect the privacy of students and include a moderation system to make it a safe and secure environment.

Regarding implementation, EDUCAQuest will be accessible both from mobile devices (iOS, Android) and from a web browser. It is possible to consider creating specific applications for each device or, alternatively, implementing a responsive web application that works on any platform. There will also be an administration interface for educators and parents, allowing them not only to configure the system but also to track students' homework.

Finally, integration of the platform with other educational tools, such as Moodle, is desirable, so that the status of the children's assignments can be synchronized through bidirectional communication.

ESGenerator

Extrapolation of Unknown Data for Inclusion in ESG Reports.

Technologies:

Python, PyTorch, C#, React

Keywords: ESG, reporting, sustainability, LLMs, SLMs, RAG, AI

Complexity: 9

Every year, companies include in their annual report, in the non-financial information section, what is known as the "ESG report," if they so choose (large companies are required to do so, and SMEs will be required starting in 2027 in the European Union). ESG stands for "Environmental, Social, and Governance," representing the company's environmental, social, and governance criteria.

This report includes all corporate social and environmental responsibility activities: the environmental impact of their operations, the decarbonization measures they are implementing, the activities they carry out with various associations and non-governmental organizations, etc. There are various standards for preparing this report, such as the Global Reporting Initiative [1] or specific ISO standards, but companies have considerable freedom in drafting it.

A significant part of the data included in ESG reports relates to the impact of the company’s operations: electricity consumption, its sources (thermal, nuclear, renewable, hydro, wind, etc.), fuel consumption, water usage, and the condition in which it is returned to the environment, among others. Some data is well-known, but others—such as the energy consumption of a specific fleet of computers—are more challenging to obtain.

This project aims to develop a system that, by aggregating data from various sources and potentially using artificial intelligence techniques, can extrapolate missing data and incorporate it into the ESG report. Additionally, language models (LLMs, SLMs), complemented by Retrieval-Augmented Generation (RAG) techniques, will be used to align with the aforementioned reporting standards, minimizing the human effort required.
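The extrapolation idea can be illustrated with the fleet example from the previous paragraph: estimate the annual energy use of a whole computer fleet from the subset of machines whose consumption is actually measured. All figures below are made up for illustration; a real system would combine several such sources and apply more sophisticated models.

```python
# Toy extrapolation: scale the mean consumption of measured machines
# to the full fleet size. Figures are illustrative, not real data.
def estimate_fleet_kwh(known_kwh, fleet_size):
    avg = sum(known_kwh) / len(known_kwh)
    return avg * fleet_size

# 3 measured laptops (annual kWh) -> estimate for a fleet of 200
estimate = estimate_fleet_kwh([120.0, 100.0, 110.0], fleet_size=200)
```

The LLM/RAG layer would then place such estimates into the report sections mandated by the chosen standard (e.g. GRI indicators), flagging which figures are measured and which are extrapolated.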

The solution may be presented as a set of scripts, a Power BI-style report visualization platform, or a web application, depending on the project's progress.

[1] https://www.globalreporting.org/

FindMyBuddy

Implementation of an intelligent search algorithm based on knowledge graphs to find colleagues within large organizations with similar skills and know-how. The system will be based on user activity in code control systems such as GitHub and project management systems such as Jira.

Technologies:

Python, Angular, React, PyTorch

Keywords: Networks, Knowledge Graphs, project management, web applications, ML, AI

Complexity: 8

In all companies there is a hierarchy and a division into teams/areas/organizations set by company objectives and management teams. However, in large corporations, often teams and employees work with similar tools, develop similar solutions and know-how, even if they are not directly connected in their management hierarchy. The development of similar know-how creates a sort of "parallel organization" in which people from different teams are connected by the technologies, challenges and solutions they developed.

To improve horizontal knowledge transfer between employees within a large corporation, and to increase its competitiveness, we want to build a smart search tool with which any employee can find a close match within the corporation.

In this project we want to integrate with code control and project management tools through their public APIs (for example, those of GitHub and Jira). Issues and user stories will be studied to see which problems and challenges users have worked on. In addition, the code itself will be analysed to discover the languages, libraries and projects in which people have expertise.

Once the data ingestion component has been developed, a knowledge graph can be created, based on semantic similarity, that connects employees according to their know-how and expertise. This knowledge graph will have to be visualized using graph drawing tools to see this "parallel organization". For a given employee, it will be possible to find which colleague is the closest in terms of know-how and expertise, to contact them, and to start a discussion that opens a transversal collaboration. Each employee will also be able to search for other colleagues through a textual search.
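The "closest colleague" query can be sketched as cosine similarity over per-employee skill vectors derived from repositories and tickets. The employees and skill counts below are invented for illustration; a real system would use richer features (embeddings of issue text, library co-occurrence, etc.).

```python
# Closest-colleague lookup via cosine similarity over sparse
# skill-count vectors. Names and counts are illustrative.
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

skills = {
    "ana":  {"python": 10, "pytorch": 5},
    "ben":  {"python": 8, "pytorch": 4},
    "carl": {"java": 12, "spring": 6},
}

def closest(name):
    others = [e for e in skills if e != name]
    return max(others, key=lambda e: cosine(skills[name], skills[e]))
```

The same similarity scores can serve as edge weights when drawing the knowledge graph of the "parallel organization".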

Although not strictly necessary, the student may consider the application of artificial intelligence techniques to the data to find hidden relationships between members of a company.

Finally, the development will not use real HP data for confidentiality reasons. The student will work with public data available on community and/or open-source projects.

FootballPerformanceScore

Tool for analyzing player performance in a specific match, providing a numerical rating based on the team's needs for each position

Technologies:

Python, PowerBI, PyTorch

Keywords: Bigdata, analysis, football, performance, scoring

Complexity: 7

The aim is to create a tool that allows the coach to evaluate their players based on a set of positional data, taking into account the specific needs of each match. For example, for a full-back position, various options are offered such as offensive full-back, defensive full-back, or complete full-back. Depending on the selection, a rating will be assigned, giving greater or lesser weight to certain data points. Finally, a chart will display all the ratings of the players who participated in the match based on the chosen options.

Currently there are websites and applications that assign a score to each player based on their “performance”, but they do so in a generic way, without taking into account the specific needs of each team or the guidelines set by the coach. Thus, the coach may decide that a player should have a more offensive or defensive character, or that they should focus on covering another player or a certain area of the field.
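The role-dependent weighting can be sketched as follows: the same match statistics yield different ratings depending on the profile the coach selects. The profiles, statistics and weights below are illustrative assumptions, not a validated scoring model.

```python
# Same stats, different rating depending on the selected profile.
# Profiles and weights are invented for illustration.
PROFILES = {
    "offensive_fullback": {"crosses": 0.5, "tackles": 0.2, "km_run": 0.3},
    "defensive_fullback": {"crosses": 0.1, "tackles": 0.6, "km_run": 0.3},
}

def rating(stats, profile):
    weights = PROFILES[profile]
    return round(sum(stats.get(k, 0) * w for k, w in weights.items()), 2)

stats = {"crosses": 8, "tackles": 3, "km_run": 10}
```

The final chart would simply plot `rating(stats, profile)` for every player who took part in the match, under the options chosen by the coach.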

For the realization of this project, techniques based on algorithms or techniques related to artificial intelligence can be applied. The student must perform an analysis of the state of the art as well as the available datasets to create the required system.

Gateway

Recommendations for Academic Training Based on the User’s Goals within the Tech Sector.

Technologies:

React, NodeJS, Python

Keywords: Web application, web scraping, AI, frontend, backend

Complexity: 9

Many times, people wish to achieve a specific outcome, such as reaching a senior developer position in a particular technology or a Technical Project Manager role, without knowing exactly what skills they need to qualify for that position.

To help them, we aim to develop a web application that, based on the user's request, can generate a training recommendation, a path to follow, suggesting resources that may help the user achieve their goal.

The application will have a frontend to interact with the user and allow them to create a request, based on which they will gain access to a training path with different levels and resources, both free and/or paid (with the option for the user to exclude the latter). Additionally, the user will be able to indicate which trainings they have already completed and which ones are pending.

On the other hand, to support the web frontend, there will also be a backend that will actually generate the training recommendations and provide services to the UI, such as storing user information.

Regarding the data to be incorporated, it is suggested to create a system based on artificial intelligence using available datasets, such as those found on Kaggle [1], regarding Coursera, freeCodeCamp, or LinkedIn (to connect positions and training). APIs such as the Coursera Catalog API [2], EdX API [3], Udemy API [4], Google Courses Search [5], Skillshare (unofficial) API [6], or GitHub Education [7] could also be used. Alternatively, web scraping techniques could be applied if necessary.

[1] https://www.kaggle.com/
[2] https://www.coursera.org/about/partner-catalog
[3] https://api.edx.org/
[4] https://www.udemy.com/developers/
[5] https://developers.google.com/custom-search
[6] https://github.com/topics/skillshare-api
[7] https://docs.github.com/en/rest

HappyTransplant

Survival model design for liver transplant patients.

Technologies:

Android, iOS, Java, Kotlin, Flutter, C#, Go

Keywords: Transplant, cancer, healthcare, mobile application, web application

Complexity: 6

According to data from the National Transplant Organization (ONT), the survival rate for patients receiving a liver transplant is 81% in the first year, 68% at 5 years, 56% at 10 years, and 36% at 20 years, with some patients surviving over 30 years.

The leading causes of death 10 years after liver transplantation are related to the development of tumors, cardiovascular problems, infections, and issues related to the graft. Therefore, it is important for patients to maintain good control of their blood pressure, weight, blood glucose levels, and good adherence to treatment. This could all be summed up as leading a healthy lifestyle, with regular exercise and a balanced diet. Primary care specialists are responsible for monitoring the patients' metabolic factors, but due to the current workload, it is not always possible to provide exhaustive patient monitoring.

Thus, the goal is to develop an app that allows the patient to:

• Record important health data (blood pressure, blood glucose, weight, etc.).
• Log daily physical activity and any substance use (alcohol, tobacco, etc.).
• Have an alert/notification system for the time to take anti-rejection medication, the correct dosage, and other chronic medications, and track medication intake to ensure adherence to treatment.
• Health education section: the patient would have access to an interactive program with information related to liver transplantation.

When the app's data is abnormal or the target is not met, an alert will appear, prompting the patient to connect with the transplant center.
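The out-of-range check behind that alert can be sketched as follows. The target ranges used here are illustrative placeholders only, not clinical guidance; the real targets would be set by the transplant team.

```python
# Flag measurements outside their target range. Ranges are
# illustrative placeholders, not medical advice.
TARGETS = {
    "systolic_bp": (90, 140),      # mmHg
    "glucose": (70, 180),          # mg/dL
}

def needs_alert(readings):
    """Return the names of measurements outside their target range."""
    out = []
    for name, value in readings.items():
        low, high = TARGETS[name]
        if not (low <= value <= high):
            out.append(name)
    return out
```

A non-empty result would trigger the in-app prompt to contact the transplant center.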

The app must comply with health data protection regulations, so a section will be included on the home screen informing patients of their rights and requesting their consent to share data with the transplant center.

[1] SPANISH LIVER TRANSPLANT REGISTRY (ont.es)
[2] Liver transplantation - EASL-The Home of Hepatology.
[3] Long-Term Management of the Adult Liver Transplant | AASLD
[4] Long-Term Management of the Pediatric Liver Transplant | AASLD

HCC-AI

Detection and delineation of hepatocarcinomas based on AI image analysis.

Technologies:

Python, PyTorch, OpenCV, C#, React, Angular

Keywords: HCC, cancer, healthcare, computer vision, AI, CNNs

Complexity: 7

Hepatocellular carcinoma or hepatocarcinoma (HCC) is a liver cancer that constitutes 80-90% of malignant liver tumours. According to World Health Organisation statistics, it is currently the sixth most prevalent cancer in the world, but is the third leading cause of cancer deaths.

90% of HCCs occur in patients with liver cirrhosis (regardless of the aetiology of the cirrhosis) and curative treatments are available at early stages, ranging from surgical resection to liver transplantation [....]. For this reason, early diagnosis is very important and all scientific societies around the world recommend screening [2][3]. This screening is performed by liver ultrasound at 6-month intervals. If a suspicious lesion is observed, a CT scan or MRI should be performed and if unclear, a biopsy.

Ultrasound is a convenient, inexpensive, minimally invasive method with a sensitivity of 65-80% and a specificity of over 90%; as a consequence, the tumour is sometimes only detected once it has reached a large size, and the differential diagnosis with other benign tumour lesions can be difficult.

The aim of this project is to develop a model for the detection and delimitation of this type of cancer based on ultrasound images. The model will not only indicate a positive or negative diagnosis, but will also indicate the tumour's grade and evolution, and delineate the affected area of the liver within the images. If the model achieves high sensitivity and specificity (as well as high positive and negative predictive values) on the ultrasound images, a high diagnostic certainty can be obtained without resorting to additional tests or a biopsy of the lesion.
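
The diagnostic targets mentioned above (sensitivity, specificity, PPV, NPV) all derive from the model's binary confusion matrix; a minimal evaluation helper could be:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-test metrics from a binary confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```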

Artificial intelligence techniques will be applied to make this diagnosis, either by developing our own models or by adapting detection, segmentation and classification models developed by third parties whose licence allows them to be used and fine-tuned for a specific case. Once the model has been developed, it will be desirable to be able to evaluate it by developing an application, which could be a web application or a desktop application.

Finally, this project will be carried out in collaboration with professionals from the Hospital Rio Hortega in Valladolid, who will provide labelled images of the different types of hepatocarcinoma and will also provide feedback on the results obtained by the model and the associated application.

LCEConectada

Specific services at La Casa Encendida (Madrid) based on location obtained via AP Wi-Fi

Technologies:

Android, iOS, Python, C#, Go, Java, Kotlin, React, Flutter, React Native

Keywords: Mobile application, Wi-Fi, location based services, networking

Complexity: 9

The Montemadrid Foundation is a non-profit organisation that works for inclusion, access to education, employment, culture and environmental conservation. La Casa Encendida in Madrid, is a social and cultural space managed by this foundation in which various activities take place, including exhibitions, concerts, film screenings and other activities around culture, solidarity and the environment.

The idea is to create a specific application for this space that not only facilitates connection to Wi-Fi but is also capable of providing specific content to visitors based on their location within La Casa. To this end, the application will have access to a database with the access points (APs) of the network and, based on their relative position (connected AP + triangulation with nearby APs according to their signal level), it will deduce the approximate location of the user, offering details about that room or area, its available services, and providing an interactive map of the environment.
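
As a rough sketch of the AP-based location step, a weighted-centroid approximation is a common first pass before proper triangulation; the AP coordinates and the RSSI-to-weight mapping below are illustrative assumptions:

```python
def estimate_position(ap_positions: dict, rssi: dict) -> tuple:
    """Weighted centroid of nearby APs; stronger signal -> larger weight.

    rssi values are in dBm (e.g. -40 strong, -90 weak); the linear
    rescaling to weights is a simple first approximation.
    """
    weights = {ap: max(0.0, 100.0 + level) for ap, level in rssi.items()}
    total = sum(weights.values())
    x = sum(ap_positions[ap][0] * w for ap, w in weights.items()) / total
    y = sum(ap_positions[ap][1] * w for ap, w in weights.items()) / total
    return x, y
```

The estimated coordinates would then be matched against the floor plan to pick the room or area whose content should be shown.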

The application will offer registration and login functionalities, both with specific credentials and with social network connections (Google login, Facebook login, etc.). Depending on the user's behaviour patterns, specific notifications can be triggered, taking into account the user's preferences and past visits, related to events, special offers or changes in the centre.

On the other hand, the administrators of the application will be able to use it for visitor analysis purposes, obtaining metrics on visitor flows (global and in each area), their stay in each space and their preferences, always respecting their privacy and anonymity. In addition, administrators must have access to an interface that allows them to configure the contents of each of the rooms, adapting them to the current exhibitions or events schedule.

Finally, with regard to the implementation itself, it is desired that the development be a mobile application with access to the user's signal and location data. To support this, web services and the administration interface mentioned above will also be developed.

LexIA

Open source tokeniser for the Spanish language with the capacity to contextualise legal texts, and to adapt to other sectors such as medicine, optimising the processing of compound terms.

Technologies:

PyTorch, Python, Hugging Face Transformers, spaCy, NLTK, FastAPI, ANCORA

Keywords: Tokenizer, NLP, Open Source, LegalTech, PyTorch

Complexity: 7

LexIA is a project that seeks to develop a specialised tokeniser for the Spanish language, capable of identifying compound terms such as ‘Tribunal Supremo’ efficiently and as a single unit (one token instead of two).
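
A minimal, dependency-free sketch of this compound-merging behaviour (the real tokeniser would build on spaCy or Hugging Face tooling) might look like:

```python
def merge_compounds(tokens: list, compounds: list) -> list:
    """Greedy longest-match merge of multi-word terms into single tokens."""
    # Try longer compounds first so e.g. three-word terms win over two-word ones.
    comps = sorted((c.split() for c in compounds), key=len, reverse=True)
    out, i = [], 0
    while i < len(tokens):
        for comp in comps:
            if tokens[i:i + len(comp)] == comp:
                out.append(" ".join(comp))
                i += len(comp)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out
```

Domain adaptation (legal, medical, etc.) would then amount to swapping in a different compound lexicon.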

It aims to improve the processing of legal data and to adapt to other industries, such as medicine, through customisable configurations. This tokeniser will be able to be integrated into natural language processing (NLP) workflows, optimising the speed and accuracy of legal text analysis.

The tool will be open source, facilitating its adoption and customisation by developers and companies.

ListenToTheStadium

Analysis of the feelings of those attending a football match based on the relationship between the ambient sound of the match and what happens on the pitch, with the possibility of transferring it to other sports.

Technologies:

Python, PyTorch

Keywords: AI, sports, audio, time series, sentiment analysis

Complexity: 9

The ‘ListenToTheStadium’ project aims to develop software for analysing the sentiment of fans attending football matches in stadiums. Using the ambient audio captured during matches and combining it with a detailed archive of events on the pitch (passes, fouls, goals, dribbles, etc.), this software seeks to understand and quantify the reactions of spectators in relation to each moment and event of the match. The ultimate goal is to detect and analyse the mood of the spectators, relating it to the actions of the team and the players, both their own and their opponents'.

The objectives of the project are divided into two phases:

  • Software Development: Design and implement a software system that integrates advanced audio processing and sentiment analysis techniques. Develop algorithms capable of identifying and quantifying spectator sentiment from ambient audio.
  • Data Mining: Audio data captured during matches will be extracted and analysed. This data will be integrated with an archive containing detailed information of all events on the pitch.

Regarding the methodology, audio pre-processing techniques will be implemented to filter out unwanted noise and improve the quality of the signal. Signal processing algorithms will be developed for the detection of audio patterns associated with different feelings (elation, disappointment, anger, etc.). Machine learning and sentiment analysis techniques will be used to classify and quantify the emotional reactions of viewers. A synchronisation system will be developed between the audio data and the match event archive, ensuring that each spectator reaction is correctly related to the corresponding event.
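
One simple way to relate the audio to the event archive, as described above, is to average the crowd-audio energy in a short window after each event timestamp; the window length and the pre-computed energy envelope are assumptions:

```python
import numpy as np

def reactions_around_events(energy: np.ndarray, sr: int,
                            event_times: list, window: float = 5.0) -> dict:
    """Mean crowd-audio energy in a window after each event.

    energy: per-sample envelope of the stadium audio; sr: samples/second;
    event_times: seconds from kickoff, taken from the match event archive.
    """
    out = {}
    for t in event_times:
        start = int(t * sr)
        end = int((t + window) * sr)
        out[t] = float(np.mean(energy[start:end]))
    return out
```

Peaks in these per-event values would then be fed to the sentiment classifier to label the type of reaction (elation, disappointment, anger, etc.).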

LLMImpact

CO2 emissions calculator for LLMs

Technologies:

FastAPI, NodeJS, JavaScript, TypeScript, Python, HF transformers, React, Angular

Keywords: Web services, Cloud, AI, responsible AI, LLM

Complexity: 7

The impact of artificial intelligence, especially large language models (LLMs) on the environment is not minor and is one of the major concerns surrounding the current revolution in the application of AI to all areas of life.

In this project we want to build a calculator of the CO2 expenditure produced by an execution of a Deep Learning model, specifically an LLM. The calculator will take the task as input (if it is a simple conversation, or if it is a search and also includes a RAG in its execution), calculate the number of tokens expected (input + output) and translate them into CO2 with a simple conversion. This token calculation is an inexpensive process, compared to the execution of the complete model, which can be done on CPU.
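
The token-to-CO2 conversion described above could be sketched as follows; both conversion factors are placeholder assumptions that the project would calibrate per model and hardware:

```python
def co2_grams(input_tokens: int, output_tokens: int,
              wh_per_token: float = 0.3, g_co2_per_kwh: float = 400.0) -> float:
    """Token count -> energy -> CO2 grams.

    wh_per_token and g_co2_per_kwh are illustrative placeholders; real
    values depend on the model, the hardware, and the local energy mix.
    """
    kwh = (input_tokens + output_tokens) * wh_per_token / 1000.0
    return kwh * g_co2_per_kwh
```

A RAG-style search task would simply inflate the expected input-token count before calling the same conversion.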

The student will have to research sources of CO2 consumption linked to machine learning, such as the analysis done around a DeepMind paper [1], adapting the calculation to current models, the state of the art, and the hardware used for their execution.

The tool will be implemented as a backend with its API, and then it can be consumed through a frontend, as a webapp that will be custom developed and where the user can select their LLMs, the tasks to be performed and calculate the impact of CO2 emissions. On the other hand, it is also desired that the calculation service (backend) is usable from third party tools and can be integrated into existing frontends, so the web service must be designed with these parameters in mind and not only to serve the developed web application.

  1. https://152334h.github.io/blog/scaling-exponents/

LoQueHayQueVer

Application for the quick and improvised design of a sightseeing tour of a requested environment.

Technologies:

GPT4, LLMs, Python, PyTorch, React, Angular, Java, NodeJS

Keywords: Tourism, web development, AI

Complexity: 8

Imagine the following situation: a group of friends or a family is about to take a long car journey to spend their holiday in an idyllic location, 6 hours away from home. Obviously, rest and relaxation stops will be necessary so that the driver or passengers can stretch their legs, clear their minds and take a break from the long drive. Ideally, it would be great to take the opportunity to visit an interesting place along the way. But of course... When and where will we want to stop? We can't predict that, as it will depend on unpredictable factors such as fatigue or the need to go to the toilet. And once we have decided where we will stop, who will be the brave one in the group who will look for the ‘must-sees of Villatripas de Arriba’, its ‘10 places to visit’ or ‘what you can't miss’? And will he or she be able to locate them on a map and then plan how to visit them?

This example makes it clear that preparing a tourist visit to a place can be a laborious and often boring task. What is proposed for this TFG is the creation of an application that, taking advantage of the facilities provided by artificial intelligence, is able to collect all the basic information to organise a tourist visit to a destination.

In particular, the application could take care of the following steps, which the design of a sightseeing tour usually involves:

  1. Identification, location and categorisation of places of interest. The application should be able to provide a list of places to visit (which could be limited in number or not). It should also categorise them according to some key aspects, such as whether they are outdoors or indoors, free or paid, etc.
  2. Location on a map and selection of places of interest. This location could be done using tools such as OpenStreetMap, or facilitate the export of the data so that it can be easily imported into other popular applications, such as Google Maps. In this step, the application should allow applying filters based on the categorisation made in the previous step.
  3. Exploration of places of interest. This step could be carried out in one of the following modes:

Simple mode: It would only show the places of interest on a map, along with the user's real-time location.

MyNaps

Android app for the visualisation of personal maps (my maps) in a better way than Google Maps itself.

Technologies:

Unity, Android Studio, Google Dev, Android, Java, Kotlin

Keywords: Mobile app, Google API, Google Maps

Complexity: 8

When we have a personal Google map (myMap), we can add images or documents to each point we mark. The images can be in Google Drive or can be uploaded directly to the map. However, the visualisation app for these MyMaps (which is Google Maps itself) has very poor visualisation options: images look small, cannot be enlarged, and attached documents cannot be opened. Features are also missing, such as a notification (vibration, message, or similar) when you are near one of the marked points. Moreover, there are no third-party applications that improve on Google Maps' own solution.

It is proposed to make an Android app that serves for the visualisation of personal maps in an improved way.

Process:

  • Obtaining previous knowledge (Android programming, Google API...).
  • Design of the app.
  • Visual development of the app.
  • Connection with Google OAuth.
  • Reading and displaying myMaps.
  • Improvements.
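
The proximity-notification improvement mentioned above boils down to a distance check against each marked point; a haversine-based sketch (the 100 m radius is an arbitrary illustrative default):

```python
import math

def within_radius(lat1: float, lon1: float,
                  lat2: float, lon2: float, radius_m: float = 100.0) -> bool:
    """Haversine check: is the user within radius_m of a marked point?"""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m
```

The app would run this check against the points read from the user's myMap whenever a location update arrives, and fire the vibration or message on a hit.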

MyTravelPlanning

Travel planning application including transport, accommodation and sightseeing, integrated in a single application.

Technologies:

Angular, Electron, JavaScript, HTML, CSS, TypeScript

Keywords: Maps, travelling, web application, mobile application, planning

Complexity: 8

When you’re planning a trip, you always spend hours checking transportation, hotels/apartments, things to see/do at each destination… in the end, you end up writing all that down in a notebook or, worse, on loose pieces of paper, and after a few days, you don’t even remember where you were planning to go.

This app would simplify all of that. On one hand, it allows you to create all the possible trips you want for your vacation. For example, you’re unsure whether to go to an island to relax or take a cultural trip across several European countries. The app will allow you to make multiple simultaneous plans in which you can store:

  • A description of the trip to be made (e.g., Beach Vacation 2025).

  • The approximate start and end dates.

  • The different researched transportation options (train, plane, car, bus).

  • The different researched accommodations.

  • The list of points of interest to visit.

The key part of this app is the last one: the points of interest to visit. When you go to a city like Vienna, for example, you’ll find that you have hundreds of things to see in just a few days, and many of them are quite far apart from each other. This part of the app will allow you to view the various points of interest on a map and generate optimized routes to go from one to another according to your preferences. That is, if you want to walk a maximum of 20 km on the first day, the system will generate an optimized route to visit x points of interest without exceeding that distance during the day, showing the optimal times to visit each point and the total cost for each one (of course, all this information will have been researched by the user when looking up the points of interest to visit).
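
A first approximation of this route generation under a daily distance cap could be a greedy nearest-neighbour walk; straight-line distances on a local km grid are assumed here, whereas a real implementation would use street distances from a routing API:

```python
def daily_route(start: tuple, pois: list, max_km: float) -> tuple:
    """Greedy nearest-neighbour visit order, stopping at the distance cap.

    start and each POI are (name, x, y) with x/y in km on a local grid.
    Returns (ordered names, total km walked).
    """
    def dist(a, b):
        return ((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5

    route, walked, here = [], 0.0, start
    remaining = list(pois)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(here, p))
        if walked + dist(here, nxt) > max_km:
            break
        walked += dist(here, nxt)
        route.append(nxt[0])
        remaining.remove(nxt)
        here = nxt
    return route, walked
```

Opening hours and per-point visit times would be added as further constraints on top of this basic skeleton.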

In this way, you can have your trip completely planned, for example:

  • Bus from home to Madrid T1.

  • Flight from Madrid T1 to Vienna.

  • Transportation from Vienna Airport to accommodation in Vienna.

  • Daily planning of visits to points of interest based on accommodation location, available days, opening hours of points of interest, and available hours each day (it may be that the first or last day cannot be fully utilized due to the inbound/outbound transfers). The plan can also include places to eat/dine. Bars/restaurants of interest can be planned as points of interest, scheduled with a time range (e.g., from 1:00 p.m. to 3:00 p.m.).

  • Transportation from Vienna to Bratislava.

  • Planning visits to points of interest in Bratislava.

  • Transportation from Bratislava to Vienna.

  • Transportation from Vienna to Vienna airport.

  • Transportation from Vienna to Madrid T1.

  • Transportation from Madrid T1 to home.

The app will also allow you to store the prices of the different transportation methods you’ve researched. We know that flight prices (for example) change over time.

External APIs will be used to:

  • Locate points of interest in the cities to visit and download resources about them.

  • Generate optimized visit routes based on the points of interest.

Nolik

Web application to search Jira for solutions to problems reported by customers.

Technologies:

Python, React, Angular, TypeScript

Keywords: Service, Jira, System error, bugs, AI, LLMs

Complexity: 7

Typically, large format printer firmware/hardware teams receive escalations from service technicians when they are unable to resolve a problem on a printer. These escalations generate a case in Jira, reflecting all the investigation carried out and the solution to the problem (if any). The proposed application would search Jira for solutions to similar problems on the same product, to provide a list of possible solutions and the Jira case associated with each of them. This application could save time for the support/fmw/hw teams, reduce the number of parts (and their cost) incorrectly replaced, save customer visits and improve customer perception of HP's support quality.

For the implementation of this application, which will desirably be a web application, it is proposed to use open language models that are capable of analysing a large amount of data, in such a way that they are able to find support cases similar to the concrete request made by the user. Once these cases are identified, which can be found by a similarity search in a vector database (from embeddings created on a case-by-case basis in Jira), they will be provided to the user for inspection or, optionally, provided as context to an LLM so that the LLM can directly propose a solution or answer user questions about them.
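
The similarity search over case embeddings can be sketched with plain cosine similarity, which is essentially what the vector database performs internally:

```python
import numpy as np

def top_k_similar(query_vec: np.ndarray, case_vecs: np.ndarray, k: int = 3) -> list:
    """Indices of the k stored case embeddings closest to the query.

    Rows of case_vecs are per-case embeddings (e.g. one per Jira issue).
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = case_vecs / np.linalg.norm(case_vecs, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity, since both sides are unit-normalised
    return list(np.argsort(-sims)[:k])
```

The retrieved cases would then either be listed for inspection or passed as context to the LLM, as described above.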

As HP support cases are subject to HP's confidentiality policy, third party cases will be used for this project, such as issues found in open source projects on platforms like GitHub. In order to subsequently connect the system with Jira, the student will have to design a modular architecture that allows it to communicate with various incident management systems.

NotAStartupAnymore

Detection of newly large companies, rapid growth, and company relocation through web scraping and language modelling.

Technologies:

Python, Beautiful Soup, Scrapy, Requests, React

Keywords: Companies, growth, LLMs, web scraping, report

Complexity: 8

Although many companies publish this information in the non-financial part of their annual reports, it is not easy to know a company's exact number of employees, whether it has experienced rapid growth, or whether it has decided to change its registered office and/or head office.

For strategic reasons (sales, operations, etc.), it would be useful to obtain a report, for a given period of time, on newly created companies with more than a certain number of employees (suggested: 400), on companies whose headcount has grown by at least a specific number (e.g. 300), or on companies that have changed their headquarters or head-office location (city, country, etc.).

It is therefore desired to build a system that, based on information obtained from social networks and news sites, is able to detect these three types of events. To this end, the project will consist of three different modules:

  • An aggregation/web-scraping module that will be responsible for collecting the information necessary for the operation of the system. This module will be configurable with different sources and open to future expansion.
  • An artificial intelligence system, based on language models (LLMs, SLMs) and/or field segmenters (RPA) to extract the relevant information.
  • A report and alert manager that will alert when the indicated events occur and will also issue a report (PDF) on all the companies that have undergone any of these changes in the desired time period.

The system should be able to operate in an unattended/automatic way, periodically scanning the sources of information to be taken into consideration and issuing the corresponding reports and alerts. It is suggested to additionally develop a small web application (backend/frontend) to be able to manage the configuration of the system (limit values of employees, time periods, subscriptions, etc).
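
As an illustration of the extraction step, a naive pattern-based headcount extractor is shown below; the project would replace this with the LLM/SLM-based module described above, which handles free-form phrasing far better:

```python
import re

def extract_headcount(text: str):
    """Pull an employee count out of a news snippet with a simple pattern.

    Handles both '1,200' and European '1.200' digit grouping; returns None
    when no figure is found. A placeholder for the real LLM extraction step.
    """
    m = re.search(r"([\d.,]+)\s+employees", text, re.IGNORECASE)
    if not m:
        return None
    return int(m.group(1).replace(",", "").replace(".", ""))
```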

PrintingSegmentationToPrint

AI-based web application capable of segmenting an image into layers to be sent to print. The basic idea is to let the user upload an image, run the segmentation inference process, and extract the layers as separate images. The application should also automatically detect whether GPU inference is available and, if not, fall back to the default option of running on CPU.

Technologies:

Python, PyTorch, Web Frameworks, ONNX, HuggingFace

Keywords: Segmentation, AI, inference

Complexity: 8

The ‘Printing Segmentation To Print’ project aims to develop an innovative web application that uses artificial intelligence (AI) to segment images into different layers, facilitating their subsequent printing. This tool will allow users to upload an image and, through an AI-based inference process, decompose it into multiple layers according to the elements present in the image. Each layer will be extracted as an independent image, ready to be sent for printing or processing according to the user's needs.

The application will be designed to be highly efficient and adaptive, incorporating automatic hardware detection functionality. If the system detects the availability of a GPU, it will use this to speed up the segmentation process; otherwise, it will perform the inference using the default CPU. This ensures that the application works optimally in different environments, from high-performance machines to more modest devices.
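
The hardware-detection behaviour described above could be sketched as follows, checking for PyTorch and a CUDA device before falling back to CPU:

```python
import importlib.util

def detect_device() -> str:
    """Pick the inference device: GPU when PyTorch reports CUDA, else CPU.

    The import is probed first so the function also works (returning "cpu")
    in environments where PyTorch is not installed at all.
    """
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"
```

The chosen device string would then be passed to the model-loading code (or mapped to the corresponding ONNX Runtime execution provider).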

The development of the project will be based on the use of technologies such as Python for programming, PyTorch for implementation, and web frameworks for the creation of an intuitive and user-friendly interface. In addition, ONNX will be used to optimise the models and ensure efficient performance on various platforms.

‘Printing Segmentation To Print’ is emerging as a practical solution for professionals and amateurs who need to accurately segment images for printing. With the ability to handle different hardware capabilities, this tool promises to be versatile and efficient, facilitating the workflow from segmentation to final print.

ProPicMe

Android application that allows you to take a photograph and then improve it to get a portrait with a professional look

Technologies:

Python, PyTorch, Stable Diffusion, Android, Java, Kotlin

Keywords: AI, mobile applications, image generation, portraits

Complexity: 7

The goal of this project is to develop an Android application that allows users to capture a portrait (selfie) and convert it into a professional quality image by using advanced image processing and artificial intelligence techniques, such as Stable Diffusion or similar technologies.

To carry out this process, a client-server architecture can be created where the image generation model runs on a separate machine and the mobile application makes requests to it, or the possibility of using small embedded models (applying techniques such as LoRA) within the device can be explored.
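
In the client-server variant, the app could send the captured portrait and a style prompt to the generation server; a sketch of the request payload follows, where the field names are illustrative rather than a fixed API:

```python
import base64
import json

def build_request(image_bytes: bytes, style_prompt: str) -> str:
    """JSON body the Android client could POST to the generation server.

    The image is base64-encoded for transport; "image_b64" and "prompt"
    are hypothetical field names for illustration only.
    """
    return json.dumps({
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "prompt": style_prompt,
    })
```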

Developing this project will require adjusting the prompts to the chosen models accordingly, and potentially fine-tuning them to adapt them to the generation of professional portraits.

ProspectIA

Mobile application with AI to recognize medications through the camera and interact with the prospectus (consultation, summary, questions/answers).

Technologies:

NodeJS, Go, Kotlin, PyTorch, Python, OpenCV, LLM

Keywords: AI, mobile app, Cloud, healthcare

Complexity: 8

A common problem today is that, for many people, the prospectus (package leaflet) is a complicated document to read and to find relevant information in, or it gets lost just when some of its information is needed. The Spanish Agency of Medicines and Health Products (AEMPS) has made CIMA (Center for Information on Medicines) publicly available.

The objective of this project is to use the information available in CIMA to offer the user a simple experience of managing the information of a medication. For this, the mobile application must be able to recognize a medication through the camera (photo of the box), then display the prospectus and be able to summarize it with useful information for the patient and answer user questions with the information contained in that prospectus (via chatbot).

The system must be hosted in the cloud, where it will synchronize with CIMA via API or web scraping (weekly) saving all the necessary information (text, data, images, links to videos, etc.) only for authorized, marketed, and non-hospital use medications. The mobile application will connect to the cloud and make the necessary requests.

The recognition of the medication with the camera can be done with OCR and/or ML techniques. The user must also be given the possibility to identify the medication with a form requesting the name or National Code.
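
For the form-based lookup by National Code, the cloud service could query CIMA's public REST API; the endpoint shape below is an assumption that should be verified against the AEMPS documentation:

```python
def cima_lookup_url(national_code: str) -> str:
    """URL for looking up a medication in CIMA by National Code (cn).

    Assumption: CIMA exposes a REST endpoint of this shape; confirm the
    exact path and parameters in the AEMPS developer documentation.
    """
    return f"https://cima.aemps.es/cima/rest/medicamento?cn={national_code}"
```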

The summary and the answers must also use an ML model chosen according to the project's needs.

RetroRevive

Web platform for the restoration of old images

Technologies:

Python, PyTorch, Stable Diffusion, OpenCV

Keywords: AI, web development, art, heritage

Complexity: 9

The objective of this project is to develop a web application that allows users to restore old photographs in a simple and effective way. This platform will integrate various advanced image processing techniques and artificial intelligence to carry out the restoration. Users will be able to choose from different options to customize the restoration process according to their specific needs.

Restoration can comprise different subtasks, such as restoring the original color (or at least more vivid color tones) or filling in parts of the artwork that are deteriorated or have been lost.
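
As an example of a classical restoration subtask, a simple contrast stretch recovers the faded tone range of an old photograph without any learned model; filling in lost regions would instead use inpainting (OpenCV or a diffusion model):

```python
import numpy as np

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Stretch a faded grayscale image to the full 0-255 range.

    A classical, model-free first step; img is a uint8 array.
    """
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return img.copy()  # flat image: nothing to stretch
    return ((img.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```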

Students may create their own models, use open models such as Stable Diffusion, or apply classical processing techniques with libraries such as OpenCV.

RoboHeliostat

Creation of an automated robotic heliostat for the improvement of renewable energy elements.

Technologies:

C++, Arduino, 3D printing

Keywords: 3D, Arduino, solar energy, renewable energy, robots, IoT

Complexity: 7

With the rise of solar-based renewable energy, this project proposes researching and applying an age-old system, the heliostat, combining it with new technologies that can automate and optimize the angles of incidence and redirection towards the photovoltaic receiver cells at certain times of the solar cycle.

Since the installation angles of solar panels are usually fixed, limiting their positioning and their hours of direct incidence, the aim is to study and build a small-scale prototype of a device that redirects sunlight using reflectors (mirrors or other reflective surfaces) at times when the panels' efficiency drops considerably and redirection from the prototype is feasible.

In the first part of the project, the prototype will be developed on a feasible scale using 3D printing technologies, servomotors and an Arduino controller to locate the most intense point of light and its optimal degree of incidence. Subsequently, a model of reflective surfaces will be used to redirect the light to a desired point chosen by the user.
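
The light-seeking control loop could start as simple as comparing opposing light sensors and stepping the servos toward the brighter side; the four-sensor layout and the step size are assumptions about the prototype:

```python
def servo_adjust(ldr_top: int, ldr_bottom: int,
                 ldr_left: int, ldr_right: int, step: int = 1) -> tuple:
    """One control-loop iteration for a two-axis light tracker.

    Compares opposing light-dependent resistor (LDR) readings and returns
    (d_pan, d_tilt) in degrees; on the real device this logic would run
    on the Arduino driving the servomotors.
    """
    d_pan = step if ldr_right > ldr_left else (-step if ldr_left > ldr_right else 0)
    d_tilt = step if ldr_top > ldr_bottom else (-step if ldr_bottom > ldr_top else 0)
    return d_pan, d_tilt
```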

As a final section, it is proposed to make an application study in real environments towards photovoltaic receiver cells, along with a calculation of performance improvement with a full-scale prototype to be applied with the data obtained in the previous stages.

For the realization of this project, Arduino modules, servomotors and light sensors will be provided to the student. The 3D printing can be carried out by the student himself/herself if he/she has the equipment or, otherwise, it can be carried out by the main tutor who does have such infrastructure.

Sawubona

Use facial emotion recognition to identify the emotional response that an event (e.g. scenes from a movie) produces in a live audience, obtaining information that indicates whether it is the desired one.

Technologies:

C#, OpenCV, SQLite, MAUI, Angular

Keywords: AI, facial recognition, sentiment analysis

Complexity: 9

In today's society it is very important to know whether the message being transmitted produces the desired reaction in the target audience. Identifying whether a sad scene in a movie or a part of a political speech produces the desired effect on a control sample can translate into achieving the desired objectives when exposed in public.

The goal of this project is to use facial recognition technologies to detect live audience expressions and to know if the message is producing the desired reaction in the audience.

It will be necessary to identify how many people, based on images, are watching the show in question. Subsequently, classification models will be applied to detect the type of feeling expressed by each of the spectators, whether it is joy, sadness, laughter, interest, boredom, anger, etc. Publicly available models may be used, or one's own models may be developed, as well as finetuning of third-party models. The student will be responsible for finding or generating the appropriate datasets for this task.
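
Once per-viewer emotions have been classified, aggregating them into an audience-level reading for a given moment is straightforward; a minimal sketch:

```python
from collections import Counter

def audience_reaction(frame_emotions: list) -> dict:
    """Share of each emotion among viewers detected in one frame.

    frame_emotions: one predicted label per detected face, e.g. from the
    classification model applied to each spectator.
    """
    counts = Counter(frame_emotions)
    total = sum(counts.values())
    return {emotion: c / total for emotion, c in counts.items()}
```

Comparing this distribution against the intended reaction for each scene gives the "is the message landing?" signal the project is after.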

SilentDome

Development of a scaled prototype for a portable system to reduce construction machinery noise to acceptable thresholds using active noise cancellation.

Technologies:

Python, Raspberry Pi, Arduino

Keywords: Active Noise Cancellation (ANC), IoT, noise, construction

Complexity: 6

Noise pollution becomes a more severe problem every year, affecting the mental and auditory health of the population. One of the most characteristic sources is construction works. Moreover, construction works are problematic because they need to be carried out at times that don't disrupt traffic and are restricted to a specific time frame for making noise.

The aim of the project is to create a prototype to help with this issue. The idea for the final product would be a kind of tripod placed around the construction site. When the machinery is turned on, the device would significantly reduce the noise through active noise cancellation (ANC). This would allow more flexible working hours for the construction, enabling them to avoid peak hours and the sun during its hottest moments. This device could also be used in construction projects in natural terrains, reducing the impact of noise pollution on wildlife.

The prototype would be a scaled-down version of these tripods. To perform active sound cancellation, the prototype would require microphones, speakers, an Arduino, and a Raspberry Pi. The microphone would read the sound waves and send them to the Raspberry Pi, which would send the inverse signal to the speakers to cancel the sound. The objectives would be to create a scaled prototype, test if it's truly possible to create a "sound cancellation dome" using this prototype, and analyze alternatives for this goal.
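
The core ANC idea described above is to emit a phase-inverted copy of the captured waveform; the real system must additionally compensate for propagation delay and speaker response, but the inversion itself is this simple:

```python
import numpy as np

def anti_noise(samples: np.ndarray) -> np.ndarray:
    """Phase-inverted copy of the captured waveform.

    Played through the speakers in sync with the noise source, the two
    signals sum to (ideally) silence at the listener's position.
    """
    return -samples
```

Most of the engineering difficulty lies in timing: the inverted signal must reach the listener aligned with the original to well under a millisecond for the cancellation to hold at audible frequencies.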

SurpriseDinner

Last-minute tables for canceled or no-show restaurant reservations.

Technologies:

Android, Java, Kotlin, Flutter, Go, C#, NodeJS, Angular, React

Keywords: Restaurants, reservations, last minute, no show, mobile application

Complexity: 7

Last-minute canceled reservations (or those where diners don’t show up at all) are a serious problem for restaurants: on the one hand, they lose the use of one of their tables, with the opportunity cost this entails (along with other fixed costs such as staff, heating, etc.), and on the other hand, they may have to discard certain perishable goods they have based on that demand, such as meat, fish, seafood, etc.

At the same time, there is a group of people who would like to have lunch or dinner at certain restaurants with very limited availability but are not going to call or visit daily to check whether any reservation happens to have been canceled.

This project aims to connect restaurants and diners around canceled reservations. The goal is to develop a mobile application that fulfills a dual purpose: first, allowing restaurants to report last-minute cancellations or "no-shows," indicating at what time the table would be available again and the number of diners accepted (usually a range like "2-4 diners" depending on the table's size and the original reservation). Secondly, users can receive notifications based on their preferences and geographical location, notifying them that a table has become available, and they can confirm a new reservation through the application.

The application will therefore have two types of users: restaurants that register and offer their tables, and everyone else who subscribes to alerts about available tables. Ideally, profiles will be dual: a restaurant owner may also wish to use the application in its other mode, with owning a business being an "addition" similar to Google Business, where you add your business to your profile. However, this functionality is not 100% necessary.

It is suggested to create an Android application, either natively or through another technology (Flutter, web views, etc.), which accesses a cloud-hosted backend where the app's logic and user, restaurant, and reservation data will be managed.
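The core matching step, deciding which subscribed users to notify when a table frees up, can be sketched as follows. This is a minimal in-memory sketch: the `Alert` model and its coordinate/radius fields are illustrative assumptions, not part of the proposal.

```python
from dataclasses import dataclass
import math

@dataclass
class Alert:
    """A user's table alert (hypothetical model)."""
    user: str
    party_size: int
    lat: float
    lon: float
    radius_km: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def users_to_notify(alerts, table_min, table_max, lat, lon):
    """Users whose party size fits the freed table and who are in range."""
    return [a.user for a in alerts
            if table_min <= a.party_size <= table_max
            and haversine_km(a.lat, a.lon, lat, lon) <= a.radius_km]

alerts = [
    Alert("ana", 2, 40.42, -3.70, 5.0),   # nearby, fits a 2-4 table
    Alert("luis", 6, 40.42, -3.70, 5.0),  # nearby, party too large
    Alert("eva", 3, 41.39, 2.17, 5.0),    # fits, but too far away
]
print(users_to_notify(alerts, 2, 4, 40.4168, -3.7038))  # ['ana']
```

In the backend this filter would run server-side over the alert subscriptions, with the resulting user list fanned out as push notifications.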

SkillMAItrix

Web application for monitoring and analyzing capabilities and skills within teams.

Technologies:

React, Angular, Python, MariaDB, Postgres, Linux

Keywords: Web Application, Full stack, Cloud, AI, Big Data

Complexity: 5

Within multidisciplinary teams, it is always challenging to understand the skills and knowledge each member possesses. These skills tend to be individualized rather than treated collectively as a team resource.

For this reason, the development of a web application is proposed to help identify the current skills of a team, analyze them, and generate metrics on their distribution. The application will enable the creation of graphs and reports for further analysis, allowing users to assess potential risks within the team.

Examples:

  • Too many team members sharing certain skills, while too few hold other necessary ones.

  • Analyzing team members' adaptability to new challenges that require new skills.

  • Training and certification planning for the team.
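The first example above, skills concentrated in too few people, reduces to a simple coverage metric. A minimal sketch (the data model is an assumption; the real application would read this from its database and render it as graphs):

```python
def skill_coverage(team):
    """team: dict mapping member name -> set of skills.
    Returns per-skill head-counts and the skills at risk,
    i.e. held by fewer than two people (a 'bus factor' of one)."""
    counts = {}
    for skills in team.values():
        for s in skills:
            counts[s] = counts.get(s, 0) + 1
    at_risk = sorted(s for s, n in counts.items() if n < 2)
    return counts, at_risk

team = {
    "ana":  {"python", "sql"},
    "luis": {"python", "react"},
    "eva":  {"python", "sql"},
}
counts, at_risk = skill_coverage(team)
print(counts)   # {'python': 3, 'sql': 2, 'react': 1} (key order may vary)
print(at_risk)  # ['react']
```

The same counts feed naturally into the proposed graphs and reports, e.g. a histogram of head-count per skill.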

The project will consist of at least three main components:

Back-end:

  • A cross-platform back-end service (Linux and Windows) with a REST API implementation will be required.

  • The choice of programming language and REST API framework will be up to the student.

Front-end:

  • A front-end interface will be needed to manage and visualize the application.

  • The choice of language is up to the student, keeping in mind that it must be designed to display multiple metrics through graphs and reports.

Database:

  • A non-relational database must be created for data storage.

  • A relational database may be used for managing application-related data.

TEAventuras

Social story creation to help children with ASD navigate their daily lives.

Technologies:

React, Angular, Flutter, Python, Go, C#, Java

Keywords: ASD, ASC, TEA, story-telling, generative AI, AI, web application, mobile application

Complexity: 8

The Montemadrid Foundation is a non-profit organization that works towards inclusion, access to education, employment, culture, and environmental conservation. As part of its programs, the foundation works with children with ASD (Autism Spectrum Disorder) and helps them face everyday situations that they do not understand and in which they do not know how to act. To achieve this, they explain to them in a personalized way what is going to happen, what they can and cannot do, where they can do it, etc., in those moments that are so confusing for them.

This project involves developing a platform for creating social stories, presenting them in an entirely visual way and introducing at the beginning of each story where the situation takes place, when it happens, what will happen first and what will happen next, and with whom it will occur. Subsequently, the story will unfold, and the child will interact with the platform, evaluating whether they act appropriately, always with the support of trainers and tutors.

It would be desirable for the situations to be customizable with photos of the children and their families, as well as real places where the events take place, such as their home, school, a neighborhood store, etc. This will eliminate abstraction difficulties and ensure the comprehension of the story. For this, artificial intelligence techniques for face replacement or more traditional methods such as drawings, masks, and others can be used.

Regarding implementation, it is desired that the platform for generating stories be a web platform or a mobile application. Additionally, it should feature an interface for administrators, i.e., members of the foundation, allowing them to configure each story and customize them for each child. Therefore, the stories will also be associated with a specific user, which implies adding a user management and permissions system.

Terranix

OSS infrastructure deployment tool using the Nix declarative language. (Infrastructure as Code)

Technologies:

Nix, Go

Keywords: Cloud, Kubernetes, Infrastructure as code (IaC), Nix, DevOps, declarative

Complexity: 5

Infrastructure as code has become a standard in cloud infrastructure deployment. Terraform, a tool widely used in this sector, recently changed its license, prohibiting some previously permitted uses.

On the other hand, Nix, a declarative language for describing a system's configuration, is growing in popularity among developers and DevOps profiles, thanks to the language's expressiveness and its ability to define reproducible environments.

The project aims to build a tool capable of analyzing the differences between the state of the selected platform (Kubernetes, AWS, Azure, etc.) and the configuration generated from Nix files, and of performing the necessary reconciliation operations.

The system should be modular, accepting the import of plugins much like Terraform accepts "providers", so that it is not necessary to define operations for every cloud provider on the market up front and new ones can be added over time.
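The reconciliation step can be illustrated with plain set differences over resource maps. This is a simplified sketch in Python for brevity (the proposal itself points to Go and Nix); the resource names and fields are invented:

```python
def plan(desired, actual):
    """Compute the reconciliation operations between the state generated
    from Nix files (desired) and the state read from the platform (actual).
    Both are dicts mapping resource id -> configuration dict."""
    create = {k: desired[k] for k in desired.keys() - actual.keys()}
    delete = sorted(actual.keys() - desired.keys())
    update = {k: desired[k] for k in desired.keys() & actual.keys()
              if desired[k] != actual[k]}
    return {"create": create, "update": update, "delete": delete}

desired = {"web": {"replicas": 3}, "db": {"size": "10Gi"}}
actual  = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
r = plan(desired, actual)
print(r)  # create db, update web, delete cache
```

Each operation in the resulting plan would then be dispatched to the plugin responsible for the target platform.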

Troll Hunter

Troll or hater? Find out whether a review or comment comes from a hater, and boost your brand's image by crafting a creative and original response.

Technologies:

Python, C#, Go, NodeJS, PyTorch

Keywords: AI, sentiment analysis, LLMs

Complexity: 8

Nowadays, social media entertains us as users, but for many brands, it is their main promotional tool, where they gain visibility or simply seek to enhance their brand image positively. However, the path is not easy; it is filled with trolls and haters who use social media as their stronghold to discredit, spread misinformation, or cancel brands.

For this reason, Troll Hunter aims to use AI to identify when a comment or review may come from a troll account or a hater, and propose to a brand a witty and creative response to improve its image.

To carry out this task, public data regarding comments on social media will be incorporated, particularly those linked to brands and their community managers with the responses they provide. Using LLMs (Large Language Models), a classification system will be generated to categorize each comment into one of the two categories (or neither), and a suitable response will be created.
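One way to drive the LLM stage is a zero-shot prompt that combines classification with response drafting. A hypothetical template sketch (the label set, wording, and output format are assumptions; the choice of model and API is left to the implementation):

```python
def build_prompt(brand, comment):
    """Assemble a zero-shot classification-and-reply prompt for an LLM.
    The template below is illustrative, not a fixed specification."""
    return (
        f"You are the community manager for the brand '{brand}'.\n"
        "Classify the following comment as TROLL, HATER, or LEGITIMATE, "
        "then draft a witty, brand-safe reply.\n\n"
        f"Comment: {comment}\n"
        "Answer in the form:\nlabel: <LABEL>\nreply: <REPLY>"
    )

print(build_prompt("AcmeCola", "Your drink tastes like regret."))
```

In practice, the scraped community-manager replies could also be used as few-shot examples inside this prompt, or to fine-tune the classifier.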

UnmuteMe

Web platform that allows you to upload a video (with or without audio), generates audio according to the content of the frames and synchronizes them.

Technologies:

Python, PyTorch

Keywords: AI, web development

Complexity: 9.5

This project proposes the use of Artificial Intelligence models and techniques for the generation of audio from an input video. The aim is to develop a web application that allows users to add a new audio track to a video. The user will upload a video to the platform; the original audio will be removed and, using artificial intelligence, a new audio track will be generated, synchronised with the content of the video frames.

To achieve this, it is proposed to implement a video to text conversion process and, subsequently, from text to audio, in order to generate the synchronised audio track. The implementation must consider the possible overlapping between the different sounds present in the scene, the temporal resolution necessary to achieve a good synchronisation between image and audio and whether only ambient noise is generated or the recreation of human voices.
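The proposed two-stage pipeline (video to text, then text to audio) can be sketched with stubs standing in for the real models; the captions, timing scheme, and `fps` parameter are illustrative assumptions:

```python
def describe_frames(frames):
    """Stage 1 (stub): a video-captioning model would map frames to text.
    Here a placeholder caption stands in for the real model output."""
    return [f"scene {i}: <caption>" for i, _ in enumerate(frames)]

def text_to_audio(captions, fps):
    """Stage 2 (stub): a text-to-audio model would synthesize each caption
    into a clip; anchoring each clip to its frame's timestamp is what keeps
    image and audio synchronised."""
    return [{"start_s": i / fps, "text": c} for i, c in enumerate(captions)]

def generate_track(frames, fps=1.0):
    return text_to_audio(describe_frames(frames), fps)

track = generate_track(["f0", "f1", "f2"], fps=2.0)
print(track[2]["start_s"])  # 1.0
```

The open design questions named above (overlapping sounds, temporal resolution, ambient noise vs. voices) all live inside these two stages.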

VRRecoveryGym

Virtual reality environment for completing puzzles (currently physical) aimed at promoting recovery and mobility for individuals with motor impairments in their upper limbs.

Technologies:

C++, Unreal Engine, Blender, BlueprintUE

Keywords: VR, simulation, games, puzzle, physical recovery

Complexity: 8

This project is carried out in collaboration with the ASPAYM Foundation, whose mission is to promote and encourage all kinds of actions and activities aimed at improving the quality of life of people with spinal cord injuries and severe physical disabilities.

To support the recovery of individuals affected by these issues, the foundation has created specific 3D-printed puzzles designed to encourage users to perform certain physical movements: wrist rotation, elbow extension, arm abduction, etc. It has been verified that these puzzles achieve their intended purpose. However, they require individuals to physically visit the association to perform the exercises and to be accompanied and supervised by a professional from the association to analyze how the activity is being carried out.

The proposal is to create a virtual environment where individuals, through a VR headset and controllers (hand-tracking functionality could also be considered), perform puzzles analogous to the physical ones. This application will serve a dual purpose:

  • Allow users to perform the exercises remotely, in their home environment, while enabling tracking of each activity (number of sessions, time invested, etc.).
  • Conduct movement monitoring to supervise each user's progress in all planned exercises. This would enable the tracking of evolution parameters such as: number of wrist rotations, degrees of elbow extension, degrees of arm abduction, number of movements required to solve a puzzle, etc. Ideally, this data should be able to be sent remotely for professionals to review.
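As an illustration of the kind of evolution parameter mentioned in the second point, completed wrist rotations can be counted from a stream of angle samples with a threshold-crossing counter. Shown in Python for brevity (the real logic would live in the Unreal Engine project, and the threshold value is an assumption):

```python
def count_rotations(angles_deg, threshold=90.0):
    """Count how many times the measured wrist angle crosses the target
    threshold from below -- a simple proxy for one completed rotation.
    Illustrative only; real tracking data is noisier and 3-D."""
    count = 0
    below = True
    for a in angles_deg:
        if below and a >= threshold:
            count += 1
            below = False
        elif a < threshold:
            below = True
    return count

session = [10, 45, 95, 60, 20, 92, 110, 30, 91]
print(count_rotations(session))  # 3
```

Per-session counts like this one are exactly the data that would be sent remotely for professionals to review.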

The project proposes using Unreal Engine, developing the logic both in C++ and BlueprintUE, and incorporating necessary assets available in the Unreal Engine marketplace, provided their licensing allows their use in this project. Lastly, the project will be carried out with guidance from ASPAYM professionals.

Webgit

Version control system over HTTP

Technologies:

Go, Git, HTTP, databases

Keywords: Version Control System (VCS), Git, HTTP

Complexity: 6

Machine learning requires managing large quantities of information that keep changing over time and must be used concurrently by many users for different purposes, which calls for a version control system.

Git has become a standard, but it is designed for managing source code (text), and the information must be cloned to be accessed (it is distributed). This means, for example, that to run a training job on a certain version of a dataset, all of its files must be downloaded.

We would like to design a tool that stores, manages, and versions files using the same model as Git (objects, commits, branches, merges, tags, etc.), but optimized for the following cases:

  • Allow uploading and downloading large files in a paginated, concurrent manner (optimized for large files).
  • Allow users to manipulate the repository remotely, without downloading the whole project (remote editing of files through GET/PUT/PATCH).
  • Allow downloading a file quickly, without a checkout or diff calculations, so the tool is fast and does not become a bottleneck for training.
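The object model behind these requirements can be sketched as a content-addressed store with range reads. This is an in-memory illustration only; a real service would persist objects and serve partial reads over HTTP (e.g. with `Range` headers):

```python
import hashlib

class ObjectStore:
    """Minimal content-addressed store in the spirit of Git objects.
    Objects are keyed by the SHA-1 of their content, so identical data
    is stored once and ids are verifiable."""

    def __init__(self):
        self.objects = {}

    def put(self, data: bytes) -> str:
        """Store a blob and return its content-derived object id."""
        oid = hashlib.sha1(data).hexdigest()
        self.objects[oid] = data
        return oid

    def get_range(self, oid: str, start: int, length: int) -> bytes:
        """Fetch part of an object without transferring the whole file,
        as a GET with a Range header would."""
        return self.objects[oid][start:start + length]

store = ObjectStore()
oid = store.put(b"dataset-v1: lots of training samples")
print(store.get_range(oid, 0, 10))  # b'dataset-v1'
```

Layering commits, branches, and tags on top of such a store is then a matter of versioned metadata pointing at object ids, which is exactly how Git itself works.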
