3DBrightener

Application for the automatic enrichment of data contained in the standard format used for 3D printing

Technologies:

Python, PyTorch, C++


Keywords: 3D printers, AI, 3MF, 3D printing formats
Complexity: 9

3D printers are poised to revolutionize the manufacturing process of a wide variety of objects that were previously produced through other techniques, as well as to open the possibility of manufacturing parts that are currently unfeasible due to their high costs. For example, in industrial sectors like automotive and medical prosthetics, the improvements that 3D printers can offer have already been demonstrated. Additionally, in the near future, applications may arise in areas that we currently cannot conceive of.

In order for 3D printers to continue evolving and expanding their capabilities, the research and development departments of the companies creating these devices play a fundamental role. In these environments, the massive collection of data from printing tests is as crucial as the process of creating physical objects itself. This data allows for the analysis of prototypes and the derivation of conclusions that can drive improvements in 3D printing technologies.

In this context, 3MF files play a crucial role, as they are the standard format for 3D printing models (similar to the role of PDF files in conventional printing), and the metadata of these files becomes invaluable. Unfortunately, there is currently no tool that automatically fills in or verifies whether the metadata of a 3MF file match the 3D models stored in the file. This task is tedious and typically carried out manually by the creators of the files, which makes it prone to errors or omissions.
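
As an illustration of how accessible this metadata is, the sketch below lists the metadata entries of a 3MF file using only the Python standard library; the part path and XML namespace follow the usual 3MF Core conventions, and "example.3mf" is a placeholder.

# Minimal sketch: list the metadata entries of a 3MF file.
# Assumes the model part lives at the conventional path 3D/3dmodel.model.
import zipfile
import xml.etree.ElementTree as ET

NS = {"m": "http://schemas.microsoft.com/3dmanufacturing/core/2015/02"}

def read_3mf_metadata(path):
    with zipfile.ZipFile(path) as archive:          # a 3MF file is a ZIP package
        with archive.open("3D/3dmodel.model") as part:
            root = ET.parse(part).getroot()
    # <metadata name="Title">My part</metadata> entries under the <model> root
    return {md.get("name"): (md.text or "") for md in root.findall("m:metadata", NS)}

if __name__ == "__main__":
    print(read_3mf_metadata("example.3mf"))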

Therefore, this project proposes the development of an application that automates or facilitates the metadata filling process in 3MF files. The project would consist of several phases, which could be tackled independently based on the complexity encountered by the student, and it could be completed by achieving only one of them. These phases are:

  1. Character recognition in the different components contained within the 3D models of a 3MF file.
  2. Categorization of the components contained in a 3MF file into categories predefined in a parts library.
  3. Segmentation of the 3D models in a 3MF file so that each entity corresponds exclusively to a physical object.

Furthermore, if the student is interested in these technologies, they could develop a web application that allows users to interact with the tool. However, the priority would be to provide the application with a command-line interface.

Finally, it's important to highlight that the software used to develop this application would preferably be free (open-source), unless the student decides otherwise.

3DScenePrint

2D printing of views on ultra-realistic 3D scenes using a web platform

Technologies:

Python, Node.js


Keywords: 3D, 2D, print, web, cloud, rendering
Complexity: 8

In this project we want to develop a web platform for printing arbitrary 2D views (on paper) of complex 3D scenes. Content creators will upload 3D models (scenes) and users will be able to request certain views of them, with control over the camera, position, and other additional parameters (filters, quality, etc.).

Because the rendering of these 3D scenes can be very expensive in terms of computing power, and advanced and demanding techniques such as raytracing may be required, the user in their browser will not see the original scene but a simplified version of it, which will be generated automatically. Once the user selects the view they want in this simplified preview, it will be the server itself (or delegated application containers) that will be in charge of rendering the scene in high quality.

After that, the user will be able to view the image for printing and download it or send it to a third-party print service provider for printing. Since rendering the scene can be a lengthy process, the user will be able to check its status at any time, without having to remain on the page while it is being rendered. It would also be desirable to be able to receive some kind of notification (browser notifications, email, etc) when the view is finished.
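
A minimal sketch of this job-based flow is shown below, using Flask and an in-memory job store purely for illustration (a real deployment would delegate the render to a task queue or application containers); the endpoint names and parameters are assumptions.

# Minimal sketch of the asynchronous rendering API.
import threading, time, uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job_id -> {"status": ..., "result": ...}

def render_scene(job_id, params):
    time.sleep(10)                      # stand-in for the expensive ray-traced render
    jobs[job_id].update(status="done", result=f"/renders/{job_id}.png")

@app.post("/views")
def request_view():
    params = request.get_json()         # camera, position, filters, quality...
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "rendering", "result": None}
    threading.Thread(target=render_scene, args=(job_id, params), daemon=True).start()
    return jsonify(job_id=job_id), 202

@app.get("/views/<job_id>")
def view_status(job_id):                # the user can poll this at any time
    return jsonify(jobs.get(job_id, {"status": "unknown"}))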

Regarding the monetary part of the platform, creators will be able to set a price for each of the views on their 3D scene, as well as a limit on the total number of views that can be printed. They will also be able to add, if they wish, a set of predetermined views that can be purchased at a lower price (in any case, configurable by the creator). It would be desirable to connect the platform with a payment processor; this part is considered optional, but the platform should at least be prepared for it.

AbuseGame

Online videogame to score risk of child abuse and bullying

Technologies:

JavaScript, C++, C#, Java, TypeScript, React


Keywords: Child abuse, bullying, videogame, 2D, web development, AI
Complexity: 7

Detecting situations of child abuse and bullying is always complicated, because in many cases the victims of this type of behavior hide their situation, either out of shame or fear of potential retaliation by the abusers that may occur after reporting the situation.

In this project we propose to develop a 2D video game, ideally accessible and playable from a web browser, that can be used to detect such situations. The players of this videogame will go through different phases and, depending on their answers, the decisions they make, etc., the risk score of child abuse and/or bullying will be determined.

The information from the players will be obtained through various forms of interaction: dialogues with non-playable characters (NPC) and responses to them, bifurcations in the game based on decisions, areas with which the player decides to interact or not, etc. The game design will most likely be RPG-like, although the project is open to other types of video games.

For the implementation of this project we will have the advice of staff from the company A Un Click Seguridad, made up of police experts in security for minors, whose work covers both preventing minors from becoming victims of a crime and detecting when they may be at risk of committing one (another behaviour that the video game will ideally also detect). The target audience of the videogame will be minors between 13 and 18 years of age, so there will be freedom to incorporate complex game systems.

Regarding the videogame engine to be used, the student will be free to choose among several possibilities, such as libGDX (Java), Unity (C#), Cocos (C++), Phaser (JS/TS) or Godot (GDScript, C#, C++), as long as one of the execution platforms can be a web page. Free resources (sprites, sounds, backgrounds, etc) can be used for the development of the videogame, however, the knowledge of graphic design and digital art that the student may have can be useful to create an experience of a higher final quality.

Finally, the inclusion of AI algorithms to improve the detection of abusive situations will be considered.

AIAIAIWhatAPain

Application of artificial intelligence to analyze large amounts of social network data, predict disease outbreaks and public health risks, and evaluate the effectiveness of public health campaigns

Technologies:

Python, TypeScript, PyTorch, Angular, React


Keywords: AI, web scraping, NLP, API, backend, frontend, web development
Complexity: 9

This project aims to develop an artificial intelligence-driven system that analyzes large amounts of social network data to understand public sentiment on health topics, predict disease outbreaks and public health risks, and evaluate the effectiveness of public health campaigns. Using deep learning and NLP techniques, the system will turn web-scraped posts and comments into actionable public health insights.

Basic components:

  • AI model for sentiment analysis: the model categorizes comments and posts on social networks to understand the mood and perception of the public towards certain health topics. The dataset will be obtained through web scraping tools for data collection in various social networks and forums (see the sketch after this list).
  • Outbreak prediction model: uses historical and current data patterns to predict the probability of a disease outbreak in a given geographic region.
  • Health campaign evaluator: this part will be used to evaluate the effectiveness of public health campaigns by measuring the change in conversations before and after the campaign.
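
A minimal sketch of the sentiment-analysis component, assuming a pretrained Hugging Face model is an acceptable starting point; the model name and the example posts are placeholders.

# Minimal sketch: categorize scraped posts with a pretrained sentiment model.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "The new vaccination campaign is working really well in my city",
    "Another week with the flu, everyone around me is sick",
]
for post, result in zip(posts, classifier(posts)):
    print(result["label"], round(result["score"], 3), "-", post)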

AItrittion

AI techniques applied to employee data within a company to try to reduce the attrition risk

Technologies:

PyTorch, Python

Keywords: AI, HR, attrition, employees, company
Complexity: 7

Attrition is one of the biggest problems a modern company faces. When an employee leaves, especially if it happens suddenly, it often leaves a gap that is difficult to fill and a net loss of talent for the company. Although there are some obvious factors that companies can work on to avoid this phenomenon (salary, stress level, etc.), other parameters, such as age, distance to the workplace or the frequency with which an employee travels for work, influence it in ways that are less well understood.

In this project we propose to apply artificial intelligence techniques to try to obtain more information about this problem and be able to act early to tackle it. To do this, the student will incorporate information from open datasets available on the web [1][2][3], and their HP SCDS tutors will guide them in the use of the data in order to ensure, in the future, a certain compatibility with the internal data of our company. Research on the data will therefore be necessary, and it may also be necessary to create a specific dataset based on other data sources (employment services, recruitment websites, etc.).

Initially the study will be carried out on an individual basis, but the results should be extrapolated to complete teams (using averages of the individual values). An analysis will be performed using machine learning techniques (random forest, k-means clustering, etc.) and deep learning, to relate the input parameters (salary, seniority, etc.) with an output parameter, which will be whether the employee leaves the company or not.
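
A minimal sketch of the supervised approach, assuming a CSV with columns similar to those of the IBM HR Analytics dataset [3]; the file name and column names are assumptions.

# Minimal sketch: relate input parameters to the attrition label with a random forest.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("attrition.csv")                      # placeholder file name
y = (df["Attrition"] == "Yes").astype(int)             # target: leaves the company or not
X = pd.get_dummies(df.drop(columns=["Attrition"]))     # one-hot encode categorical inputs

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Feature importances hint at which parameters (salary, seniority, distance...)
# weigh most on the attrition risk.
print(pd.Series(model.feature_importances_, index=X.columns).nlargest(10))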

This project will deliver at least a trained model with the data used, as well as a set of scripts to evaluate the model with specific input parameters and, also, to train and re-train the model with new data or build a new dataset if third party sources have been incorporated. Optionally, additional visualizations can also be provided such as a UI (web, mobile, desktop, etc) to use the model and analyze the results, or a dashboard where each team and its attrition risk can be checked.

  1. Employee attrition
  2. Predicting employee attrition
  3. IBM HR Analytics attrition dataset

AIWantTheJob

Artificial intelligence-driven general job interview simulator designed to prepare candidates for specific positions and aims to improve interviewees' skills, provide detailed analysis and track progress

Technologies:

Python, PyTorch, Angular, TypeScript

Keywords: Generative AI, NLP, API, backend, frontend, web development
Complexity: 6

This project aims to develop an artificial intelligence-driven general job interview simulator to prepare candidates for a specific position. Using Deep Learning and NLP techniques, the system generates realistic and personalized interview scenarios that provide instant feedback to improve interviewees' skills.

Basic components:

  • Generative model of interview questions: this algorithm generates realistic, job-specific relevant interview questions based on job and industry information. Existing models will be searched or a specific one can be trained with a given dataset for a set of 1-2 roles that can then be extended.
  • Response analyzer: uses an AI model to evaluate the quality of candidate responses, both in terms of content and presentation.
  • Instant feedback system: provides real-time feedback and tips on how to improve interview comments and motivation.
  • Progress and performance monitoring: monitors user improvement over time.

AutoSSL

Automatic replacement and renewal of SSL certificates based on a local daemon and a web application, supporting popular web server software (Nginx, Apache)

Technologies:

C#, Java, Go, React, Linux

Keywords: SSL, HTTP, security, web server, service, web application, queues
Complexity: 6

One of the repetitive tasks performed by system administrators is the renewal of SSL certificates associated with the web servers they manage. There is an automatic "bot" for certificate renewal called certbot, but it only works with Let’s Encrypt certificates, which may not be suitable for corporate environments.

In this project we want to develop a bot or daemon that runs on the machines where the web servers are located, analyzes their configuration (supporting at least Nginx and Apache) and is ready to apply new certificates when required. Rather than handling automatic renewal itself, the bot will communicate with a web backend (making direct HTTP calls to the backend and receiving queued messages from it using RabbitMQ, ZeroMQ, Kafka or similar), both to report which certificates are installed, their status and which domain they are associated with, and to receive new certificates and install them on the web server, restarting it if necessary.

Users will be able to access a web application to see the status of their servers (what web server each one has, what machine it is on, what domains are managed on it, the status of the certificates, etc) and send them a certificate renewal file (usually a ZIP containing the private key, certificate, certification chain, etc) to be validated and applied, keeping a copy of the previous certificates in case a rollback is needed. The bot will know where to place each file based on the current configuration and, if necessary, combine some of the files (e.g. certificate and certificate chain in the case of Nginx).
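
A minimal sketch of two bot-side helpers, using the Python cryptography library purely for illustration (the project itself targets C#/Java/Go): reporting a certificate's domains and expiry, and combining certificate and chain into the single file Nginx expects. File names are placeholders.

# Minimal sketch of certificate inspection and file combination.
from cryptography import x509

def certificate_info(cert_path):
    with open(cert_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    return {
        "subject": cert.subject.rfc4514_string(),
        "domains": san.value.get_values_for_type(x509.DNSName),
        "expires": cert.not_valid_after.isoformat(),
    }

def combine_for_nginx(cert_path, chain_path, out_path):
    # Nginx wants the leaf certificate followed by the intermediate chain in one file.
    with open(out_path, "wb") as out:
        for part in (cert_path, chain_path):
            with open(part, "rb") as f:
                out.write(f.read())

print(certificate_info("server.crt"))
combine_for_nginx("server.crt", "chain.pem", "fullchain.pem")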

It would also be desirable to be able to generate a CSR (Certificate Signing Request) on the server because sometimes, if it is not generated by the system, it is necessary for renewal.

The daemons or bots should automatically register against the backend, reporting the configuration and status of the machine.

The latest major versions of Nginx and Apache will be supported, detecting unsupported versions to avoid causing damage to older servers.

Azure4All

Implement a Microsoft Azure DevOps solution to integrate it with other development platforms such as GitHub and Jira

Technologies:

Node.js, HTML5, CSS, Python, C#

Keywords: Azure, DevOps, Project Management, Jira
Complexity: 7

Azure DevOps is one of the main tools for managing and controlling the software project lifecycle, including features such as implementation task tracking and software repository management.

However, its integration with external tools is precarious and insufficient, and it does not allow us to take full advantage of other project management platforms when we use them as a complement to Azure DevOps.

This project challenges you to implement an Azure DevOps extension that allows configuring and executing a complete, visual synchronization of information, events and code repositories with other project management platforms such as Jira and GitHub.

BasketballTracker

Tracking of the basketball and the players within a court to create a movement dataset

Technologies:

Python, PyTorch, OpenCV

Keywords: AI, video analysis, tracking, ML, DL, basketball
Complexity: 9

In order to improve the performance of a basketball team, it is important to know the movements of each of the players on the court and, based on these, to determine the types of moves they have made (shots, passes, blocks, etc.). Therefore, it is of interest to build a dataset of X and Y positions within a basketball court of both the players and the ball.

During 2015 and 2016, the NBA installed cameras in basketball arenas to collect and share this type of data. At a certain point they stopped sharing it and the project took a more commercial direction; however, the data from that time is still available [1][2].

In this project we would like to replicate an analogous, low-cost data collection process. To this end, alternatives can be evaluated with IoT devices for radio frequency (BT) localization [3][4], taking into account the cost of these systems and the potential refusal of the visiting team to use them. However, image and video analysis technologies [5][6] will preferably be used for player and ball tracking.

Therefore, it is requested:

  • Creation of a low-cost data collection system, based on IoT or (preferably) on video image analysis, that generates data analogous to the SportsVU system.
  • The data will be annotated with the X and Y positions of each player within the court at a given time, at whatever resolution in seconds is deemed appropriate.
  • In case the camera is not zenithal but lateral, it will be necessary to perform the appropriate transformation to obtain positions on an orthogonal basketball court [5] (a minimal sketch follows this list).
  • Also, the X and Y (and ideally, Z) position of the ball will be included.
  • Players will be identified based on the color of their kit and, ideally, by their jersey number, to know who is who.
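
As referenced in the list above, a minimal sketch of the lateral-to-overhead transformation with OpenCV; the pixel coordinates are placeholders that a per-camera calibration would provide.

# Minimal sketch: map court corners seen by the camera onto an orthogonal court plane.
import cv2
import numpy as np

# Court corners as seen in the video frame (pixels) ...
image_corners = np.float32([[120, 410], [1180, 430], [980, 700], [300, 690]])
# ... and the same corners in court coordinates (metres, 28 x 15 m FIBA court).
court_corners = np.float32([[0, 0], [28, 0], [28, 15], [0, 15]])

H = cv2.getPerspectiveTransform(image_corners, court_corners)

def to_court(points_px):
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)   # X, Y on the court

print(to_court([[640, 560]]))   # e.g. a detected player's feet position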

[1] https://github.com/sealneaward/nba-movement-data

[2] https://github.com/neilmj/BasketballData

[3] https://www.mdpi.com/1424-8220/18/6/1940

[4] https://www.computer.org/csdl/proceedings-article/icdmw/2017/3800a894/12OmNzV70CJ

[5] https://dev.to/stephan007/open-source-sports-video-analysis-using-maching-learning-2ag4

[6] https://www.kaggle.com/datasets/trainingdatapro/basketball-tracking-dataset

ChronoStreetTurist3

Evolution of ChronoStreetTourist with linking to points on Google Maps and reading of QR codes from physical stickers on the street.

Technologies:

Android, Unity

Keywords: Computer vision, augmented reality
Complexity: 8

ContractMe

ContractMe is a blockchain-based application that simplifies contracting and verification through smart contracts, GPS tracking, and secure smartphone identification

Technologies:

Python, TypeScript, Android

Keywords: Blockchain, Smart Contracts, GPS Location/Fencing, Smartphone Biometric Authentication, Data Encryption, Mobile Application Development, Payment Processing
Complexity: 8

The project aims to develop a practical solution for contracting and verification using blockchain technology, smart contracts, GPS location/fencing, and secure smartphone identification. The goal is to streamline the contracting process, with a particular focus on sectors such as agriculture, household services, and freelance self-employment.

The core idea is to use smart contracts on a blockchain network to automate contract management from creation to execution, including time tracking and payment. The app will integrate smartphone-based biometric authentication and QR code scanning for secure user identification, maintaining data privacy through decentralized identity solutions.

A possible additional feature of the project is the integration of GPS technology, allowing employers to track worker locations and enforce specific work areas. This feature will ensure accurate time tracking and help prevent unauthorized work outside designated zones.

The user interface will be designed for easy interaction on smartphones, facilitating contract creation, tracking, and payment. Seamless integration with blockchain and GPS services will provide a secure and user-friendly experience, with an emphasis on practicality, ongoing improvements, and data security.

Ultimately, the project seeks to create a straightforward solution that simplifies contracting and verification processes, encouraging legal and transparent employment practices.

CropSense

Crops data gathering using IoT (Arduino) for its storage at a server and subsequent application of AI for watering decisions

Technologies:

Node.js, JavaScript, Angular

Keywords: IoT, crops, watering, sensors, humidity, Temperature
Complexity: 8

This project aims to implement a comprehensive system for field management and water conservation in irrigation crops. To achieve this, the following components will be implemented:

  • An IoT component based on Arduino, which will be responsible for collecting data from sensors connected to it (temperature, air humidity, soil humidity). This component will send this data to the server for storage. It will display its status (network connection, server connection) through LEDs and can receive commands from the server, such as the command to initiate irrigation. The following parts will be implemented for this:
    • Automatic server discovery by MAC (optional).
    • Collect information from sensors:
      • Temperature.
      • Air humidity.
      • Soil humidity.
    • Transmit the data to the server.
    • Visual indication of the connection status through LEDs.
    • Synchronize time with the server.
    • Receive commands from the server (when to irrigate, start irrigation, stop irrigation, etc.).
    • Send status information.
    • Send water flow information when irrigation is in progress (optional).
  • Note: Ideally, network configuration can be done through a web server embedded in an ad-hoc Wi-Fi created by the device.
  • A web server (backend) that will receive and store the data sent by the Arduino (a minimal sketch of its irrigation-decision logic follows this list). It will consist of the following parts:
    • Receive data from the Arduino and manage it (Redis).
    • Store this data in a database.
    • Decide when the system should initiate irrigation.
    • Provide an API for the frontend.
  • A web frontend to display real-time sensor information and when irrigation is taking place. It will also show historical data for temperature and humidity values, as well as irrigation intervals. The frontend will include the following features:
    • Access control.
    • Real-time sensor data display (Dashboard).
    • Display when irrigation is active or not (Dashboard).
    • Weather forecast.
    • Arduino management (configured, not configured).
    • Configuration of boards.
    • Display historical sensor data.
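
A minimal sketch of the backend's irrigation decision, in Python purely for illustration (the project itself lists Node.js); the endpoint names, JSON fields and threshold are assumptions.

# Minimal sketch: receive readings from the board and decide whether to irrigate.
from flask import Flask, jsonify, request

app = Flask(__name__)
readings = []                      # in a real backend: database plus Redis queue
SOIL_HUMIDITY_THRESHOLD = 30.0     # percent, below which irrigation starts

@app.post("/readings")
def receive_reading():
    data = request.get_json()      # {"temperature": ..., "air_humidity": ..., "soil_humidity": ...}
    readings.append(data)
    irrigate = data["soil_humidity"] < SOIL_HUMIDITY_THRESHOLD
    # The response doubles as a command channel for the board.
    return jsonify(command="start_irrigation" if irrigate else "none")

@app.get("/readings/latest")
def latest_reading():              # consumed by the frontend dashboard
    return jsonify(readings[-1] if readings else {})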

For the development of this project, the student will receive all the necessary materials from HP SCDS: an Arduino board with built-in Wi-Fi, temperature sensors, air humidity sensors, soil humidity sensors, wiring, protoboards, LEDs, etc. This material will be returned to the company upon completion of the project.

DeWhatsAppifAIMe

Texting bot for summarizing group chats

Technologies:

Python, PyTorch, Android

Keywords: Text bot, group chats, AI, summarization
Complexity: 8

In the age of digital communication, group chats have evolved into vital channels for collaboration and information exchange. However, the sheer volume of messages can overwhelm individuals, making it difficult to stay updated and engaged. To address this challenge, this project will develop a texting bot powered by large language models (LLMs). The primary goal is to create a solution that can efficiently summarize group chat conversations, extracting relevant insights and helping users stay well-informed.

First, the aim is to craft a texting bot that can seamlessly process and analyse group chat discussions. Leveraging the capabilities of LLMs, the bot will generate concise yet informative summaries, distilling the essence of the conversations. This feature will empower users to receive tailored updates from group chats, significantly reducing the need to sift through extensive message histories. By streamlining the understanding of group chat discussions, users can save time and effort while remaining actively engaged in conversations.

The system architecture is designed to facilitate this functionality. The process begins with ingesting data from various messaging platforms through their APIs. Subsequently, incoming messages undergo preprocessing to remove noise and standardize the text for analysis. Integration with a pretrained LLM allows the bot to understand the conversation and synthesize human-like summaries. Automated summaries are sent to users based on their preferences, enabling them to stay informed without the burden of information overload. Personalization is a key theme, with customization options for summarization levels, notifications, and quiet hours.
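
A minimal sketch of the summarization step, assuming the messages have already been ingested and cleaned; the model name is only an example, and an instruction-tuned LLM could be used instead.

# Minimal sketch: condense a batch of chat messages into a short summary.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

messages = [
    "Ana: can we move Friday's demo to 12:00?",
    "Luis: fine for me, the test environment will be ready by Thursday",
    "Marta: ok, I'll update the invite and bring the printed handouts",
]
chat_text = "\n".join(messages)
summary = summarizer(chat_text, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])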

The project's success hinges on addressing several challenges, including reliable and robust LLM integration, ensuring data privacy and security, and designing a user-friendly interface.

Possible enhancements include sentiment analysis, multilingual support, interactive queries, and AI-enhanced interaction.

Diaberse: the return of the hypoglycemia

Extension of "Diaberse" to include a fun game, aimed at children who are new to diabetes, to learn how to control their blood glucose levels while having fun.

Technologies:

Unity, C#

Keywords: Metaverse, virtual reality, 3D models
Complexity: 8

In last year's Observatory, a student developed the "Diaberse" project (original name, “Educaverse”), an immersive virtual reality experience for diabetes education.

In this extension of Diaberse, where an avatar teaches us through voice commands how to care for and control our diabetes, we propose a video game focused on children who have just been diagnosed with diabetes, so that they learn to control it.

Good control of diabetes involves several types of care throughout the day, among them the portions of carbohydrates that are ingested, the units and types of insulin that are injected, physical activity, stress, and even the weather. Therefore, so that the child can learn and understand in an easy and fun way how all these factors affect them, we will develop a virtual reality video game where the child will learn how the intake of different types of carbohydrates, the application of different types of insulin, and daily physical activity affect their "energy bar", which should stay in a range between 70 and 120 units.

FreshStock

Android App that allows the user to create a shopping list, once a product has been purchased, add its expiration date and store it, displaying notifications when the product is close to expiring

Technologies:

Android, Java, Kotlin

Keywords: Products, stock, shopping list, expiration dates
Complexity: 7

An Android application that allows the user to create lists of products stored in their pantry/refrigerator and notifies the user when they are nearing their expiration date. Additionally, as products are removed, it provides the option to add them back to a shopping list, thus maintaining a minimum stock at home. There is also a shopping list feature: as products are purchased, they are added to the stock lists with their quantity and expiration date.

Use Cases:

  1. Product Manager: Add/edit products that are purchased, indicating a desired minimum stock level.
  2. Stock Input: Assign the current stock for each product.
  3. Automatic Shopping List: Based on the stock of each product and the desired minimum stock, generate a shopping list to ensure the desired stock level for each product.
  4. Stock Output: When a product is used, reduce its stock. This option is also used to remove expired products.
  5. Expiry Control: List products nearing their expiration date.

App Requirements:

  1. Compatible with phones and tablets, in both vertical and horizontal orientations.
  2. Each product should have a property indicating where it is usually purchased, allowing users to filter where to buy each product.
  3. Storage List: After a user has purchased products, they should be able to easily add the product to a storage list with quantity and optional expiration date. Two options for adding the expiration date should be available:
    • manual entry with a date field or
    • using OCR with the device's camera.
  4. Cloud Data Storage: Since users change phones frequently, data should be easily transferable between devices. For this purpose: Export data to Google Drive, which requires implementing Google API login. This has the advantage of saving data in the user's storage space, hidden, and not occupying device storage.
  5. Notifications: Notifications are a crucial part of the app. Users should be notified when a product is nearing its expiration date. Three notifications should be sent: one when a product has one week left until expiration, another when it expires the next day, and a final one when it has already expired.

Technical Requirements:

  1. Respect the pattern recommended by Google, MVVM: this requires the student to use LiveData, RxJava, ViewModel, etc.
  2. Clear and precise logs: this will help identify errors quickly when they arise; the use of the Timber library is recommended.
  3. For local data storage it is advisable to use SQLite, already incorporated into Android.
  4. The interface is not the most important thing, but the user experience must follow a certain logic; that is, the app is not intended to be pretty, but easy to use.
  5. A possible problem with this app is that not everyone will want it: as a product runs out, the user may not want to go to the phone. As an alternative, a tablet can be placed in the kitchen, which is the most convenient place for comfortable use. This means that the interface must be responsive not only across different mobile screens, but also for tablets.

HPPrintingSegmentation: extract and printing from your imagination

Automatic segmentation of images for printing, allowing to select specific categories and with optional replacement of layers using in-painting techniques

Technologies:

Python, PyTorch, C++, C#

Keywords: AI, semantic segmentation, in-painting, printing
Complexity: 8

Semantic segmentation is a computer vision task that allows separating different areas of an image into categories, depending on what each part of the image is. A common application of this technique is autonomous driving, identifying what is road, what is sidewalk, what are other vehicles, what are road signs, etc. Normally segmentation models are tightly coupled to a specific task and have a set of fixed categories, as in the example above. However, recently models such as OneFormer [1][2] have appeared that allow segmentation to a very large number of different categories (or classes): people, sky, trees, grass, cars, etc.

In this project we propose to apply a model like OneFormer to images that will be printed. The user, after segmentation, will be able to select which classes to print, eliminating those categories that are not of interest. For the discarded areas, a transparent mask will be applied that will make them take the color of the paper (usually white), optionally also being able to choose a uniform fill color.
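
A minimal sketch of the segmentation-and-masking step with a OneFormer [1] checkpoint published on the Hugging Face hub; the checkpoint name, the file names and the kept classes are examples only.

# Minimal sketch: semantic segmentation with OneFormer, then mask discarded classes.
import numpy as np
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

checkpoint = "shi-labs/oneformer_ade20k_swin_tiny"
processor = OneFormerProcessor.from_pretrained(checkpoint)
model = OneFormerForUniversalSegmentation.from_pretrained(checkpoint)

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
seg = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]

# Keep only the classes the user selected; every other pixel becomes paper white.
keep = {i for i, name in model.config.id2label.items() if name in {"person", "tree"}}
mask = torch.isin(seg, torch.tensor(sorted(keep))).numpy()
pixels = np.array(image)
pixels[~mask] = 255
Image.fromarray(pixels).save("to_print.png")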

The idea is to be able to integrate this module into HP printing applications. For this, reusable code must be developed that can be used in a generic way, although for the development itself a separate application can be generated.

On the other hand, there are generative image models such as Stable Diffusion [3] that, in addition to generating images from a user's free text, can also perform so-called "in-painting" tasks [4], i.e., remove an area of an image and generate in it whatever is desired, also from text. It would also be desirable in this project to incorporate such a model, so that if the user deletes a specific category (e.g., a cloudy sky), they can replace the empty area with something else (e.g., a cloudless sky).

Finally, it should be noted that the idea is to run the models locally, but if due to computational constraints this is not possible, a client/server model can be chosen where the AI models are run on more powerful machines (with GPUs).

[1] https://praeclarumjj3.github.io/oneformer/

[2] https://huggingface.co/spaces/shi-labs/OneFormer

[3] https://stability.ai/blog/stable-diffusion-public-release

[4] https://huggingface.co/runwayml/stable-diffusion-inpainting

MiraclePlus

Reimagining of the paintings "Los Milagros de San Isidoro" from the Doña Sancha chamber of the Royal Collegiate Church of San Isidoro in León using artificial intelligence and augmented reality visualization (or personalized printing)

Technologies:

Python, PyTorch, ARCore, Android

Keywords: Art, paintings, AI, AR, mobile
Complexity: 9

MyParallelOrg

Deduction of an alternative hierarchy in a company based on user interactions in code control systems such as GitHub and project management systems such as Jira.

Technologies:

PyTorch, Python, React, Angular

Keywords: Hierarchy, version control, project management, APIs, web applications, ML, AI
Complexity: 8

In all companies there is a hierarchy and a division into teams/areas/organizations set by company objectives and management teams. However, in collaborative environments and projects composed of multiple components, workers interact through various tools, creating a sort of "parallel organization" in which they may work more with people from different teams than their own or receive requests from people who are not directly in their hierarchical vertical.

In this project we want to integrate with code control and project management tools through their public APIs (for example, those of GitHub and Jira) and analyze user collaborations outside the official hierarchy. Thus, issues and user stories will be studied to see which users are responsible for opening them, which are responsible for updating their status and which are responsible for closing them. In addition, the code itself will be analyzed (commits in each project, users who modify certain files, etc.) to discover which people collaborate on the same development base.
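
A minimal sketch of the ingestion side against the public GitHub API: count, for each pair of users, how often they touch the same files in a repository. The repository name and token are placeholders, and pagination is omitted.

# Minimal sketch: derive collaboration pairs from recent commits of a repository.
from collections import Counter, defaultdict
from itertools import combinations
import requests

REPO = "octocat/Hello-World"
HEADERS = {"Authorization": "Bearer <token>"}   # a personal access token

file_authors = defaultdict(set)
commits = requests.get(f"https://api.github.com/repos/{REPO}/commits",
                       headers=HEADERS, params={"per_page": 50}).json()
for item in commits:
    detail = requests.get(item["url"], headers=HEADERS).json()   # per-commit file list
    author = (detail.get("author") or {}).get("login", "unknown")
    for f in detail.get("files", []):
        file_authors[f["filename"]].add(author)

pairs = Counter()
for authors in file_authors.values():
    pairs.update(combinations(sorted(authors), 2))   # users collaborating on the same file
print(pairs.most_common(10))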

Once the data ingestion component has been developed, it will be necessary to create a visualization of this data in order to see the "parallel organization". In this visualization, which can be a separate web application or a dashboard in tools such as PowerBI, users will be able to explore different arrangements of the data: for a specific user, which other users they interact with the most (a kind of collaboration cloud), and, for a specific project or organization, which users are involved in the work within it.

Although not strictly necessary, the student may consider the application of artificial intelligence techniques to the data in order to find hidden relationships between members of a company.

Finally, the development will not use real HP data for confidentiality reasons. The student will work with public data available on community and/or open source projects.

NoteThisTag

Mobile app where the user takes notes and they are automatically tagged into user-defined categories

Technologies:

React, Python, Go, Kotlin, Android

Keywords: App, mobile, AI, notes, tagging
Complexity: 7

Mobile application to take notes with auto tagging functionality.

User writes a note and the app tags it automatically. These tags are previously defined by the user. Allows searching by tag.

Since AI models are computationally expensive, a client/server model may be required.
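
A minimal sketch of the server-side tagging, treating the user-defined tags as candidate labels for zero-shot classification; the model name and the 0.5 threshold are assumptions.

# Minimal sketch: tag a note against user-defined categories with zero-shot classification.
from transformers import pipeline

tagger = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

user_tags = ["work", "shopping", "ideas", "health"]
note = "Remember to buy a new ink cartridge before the meeting on Monday"

result = tagger(note, candidate_labels=user_tags, multi_label=True)
chosen = [label for label, score in zip(result["labels"], result["scores"]) if score > 0.5]
print(chosen)   # tags assigned to the note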

Requirements:

  • Automatic content-aware tagging.
  • Easy to use. Ideas are fleeting, so we want to note them down as soon as possible; it should not require too many clicks.
  • Tags definition by the user.
  • Search by tag feature.

Nice to have:

  • Delete/modify tags.
  • Async behaviour to allow usage without internet connection. Once connection is back, app will sync to tag the notes that weren’t tagged yet.
  • Tag re-evaluation if some are deleted/modified.

NoTimePrinting

Printer locator based on their availability, status and capabilities to be used in a corporate environment with a multi-printer fleet

Technologies:

Android, Kotlin, Java, C#, React, Angular

Keywords: Print, fleet, mobile, web, web services
Complexity: 7

It is common, in corporate or academic environments, to have dozens or hundreds of printers in the same location to be used by employees or customers. Depending on the setup, users either send jobs to these printers in a centralized fashion, submitting them to a virtual printer (the “print on the go” model), or they must choose which printer they want to print to.

In any case, it may happen that when the user goes to a specific printer, they find that the printer does not meet the required conditions for printing their job: no ink or paper, too large of a print queue, inadequate capabilities (paper size, color printing, etc). Therefore, we want to develop an application (web or, ideally, mobile) that, in the form of a map and depending on the user location, indicates the nearest printer that meets their requirements and has an acceptable printing time, balancing if necessary to another printer a little further away.

The project should have the following parts:

  • A web backend that will connect to the printers and store their status and location.
  • A frontend, either web or mobile, in which you can see the printers, their status and location.
    • The presentation should be in the form of a map with the current location of the user and the location of the printers.
    • It will be possible to obtain a printer recommendation based on the user's needs, the distance to the printers and the status of each printer (ink, paper, queue size, capabilities, etc.).
  • A simulated printer fleet. Since the developer will not have multiple printers, or they may be of very heterogeneous models, it will be necessary to create a mini printer simulator with the data needed to carry out the project. This simulator will be a very simple web service (backend) that will simply serve a JSON with the printer data: name, location, status, capabilities, queue, etc.
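
A minimal sketch of the simulated printer fleet described in the last point: a tiny web service that serves the printer JSON; all field names and values are examples.

# Minimal sketch: simulated fleet serving printer data as JSON.
from flask import Flask, jsonify

app = Flask(__name__)

PRINTERS = [
    {"name": "hp-floor1-01", "location": {"lat": 42.5987, "lon": -5.5671},
     "status": "ready", "ink_level": 73, "paper": True,
     "queue_size": 2, "capabilities": ["A4", "A3", "color"]},
    {"name": "hp-floor2-03", "location": {"lat": 42.5989, "lon": -5.5668},
     "status": "out_of_paper", "ink_level": 12, "paper": False,
     "queue_size": 0, "capabilities": ["A4"]},
]

@app.get("/printers")
def printers():
    return jsonify(PRINTERS)

if __name__ == "__main__":
    app.run(port=8080)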

Optionally there could be an administration frontend to add and remove printers, change their location, etc.

OhMyLog: What have I done?

Generation of an event timeline based on logs using a configurable tool for customized traces with integration with Jira

Technologies:

Python, C++, Java, C#, Angular, React

Keywords: Logs, printers, timeline, Jira, web, tool
Complexity: 7

When development engineers need to diagnose a problem, they often have access to a log of events that have occurred on a particular software system or hardware device. This log contains a series of traces that record what has been happening in the system. These traces are (or should be) annotated with a timestamp, that is, a time associated with each event. Part of their job is to inspect these logs and be able to know what has happened on the system or machine, as well as the actions performed by its users.

In this project we want to develop a system that, given a structured log (in which the traces follow a specific format, with fixed and variable information), generates a timeline to easily visualize what has been happening in the system. This will help engineers to verify the problem information against the events reported by the reporter and also to know the history of the system up to the end of the traces.

The user will be able to configure what each of the traces means, for example, if there is a trace that says "[07/08/2023] Starting printing job plano1.pdf", that means that a job named "plano1.pdf" has started printing. To configure this, templates of the traces will be defined, optionally using regular expressions, indicating which is the common part, which parts are discardable, the variables to be captured (in the example above, the name "plano1.pdf") and to which event each trace is associated (in the example, the start of a printing job).
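
A minimal sketch of such a template applied to the example trace above, using a regular expression to capture the timestamp and the job name and to map the trace to an event; the template format itself is an assumption.

# Minimal sketch: a configurable trace template turning a log line into an event.
import re

TEMPLATES = [
    {
        "event": "print_job_started",
        "pattern": re.compile(r"\[(?P<date>\d{2}/\d{2}/\d{4})\] Starting printing job (?P<job>\S+)"),
    },
]

def parse_line(line):
    for template in TEMPLATES:
        match = template["pattern"].search(line)
        if match:
            return {"event": template["event"], **match.groupdict()}
    return None   # trace not covered by any template

print(parse_line("[07/08/2023] Starting printing job plano1.pdf"))
# {'event': 'print_job_started', 'date': '07/08/2023', 'job': 'plano1.pdf'}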

For the development, the student will be able to use whatever logs they consider appropriate, or the project tutors will provide a set of test logs similar to the final logs to which the tool will be applied, which are confidential.

Regarding the application itself, it can be a command-line application (a set of scripts), a desktop application or a web application. The important thing is to be able to generate an image with the timeline of the events that have occurred and to be able to consult them in an easy way.

Finally, we also want to integrate this tool with Jira, a project management system. Thus, the user will be able to indicate an issue number and, using the Jira API (there are libraries to access in different languages), the associated logs will be downloaded, potentially in a compressed format (ZIP, tar.gz, etc), analyzed, the timeline will be generated and added, as an image, to the original issue.

PackAPTtack

Analysis of packers in malware for targeted attacks (APTs) from 2019 to present

Technologies:


Keywords: Cybersecurity, packers, APT, malware
Complexity: 6

We would like to do a study of the evolution of packers applied to malware used in targeted attacks known as APTs.

To do this, the student must initially be trained in the area of cybersecurity, understanding what APTs are and which packers are used by malware. They should then obtain a repository of malware (PE32 executables) from APT attacks that occurred between 2019 and today.

Using the PEframe tool, the student should perform a static analysis of the collected PE32 executables and extract the imported functions (IMPORTS) and the packers used in those samples, and compare them with those extracted in two other similar experiments (which we already have) on malware samples from 2010-2015 and 2015-2019. For this, the results of the static analysis will be stored in a MariaDB database.
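
Purely as an illustration of what the extracted information looks like (the project itself will rely on PEframe), the sketch below reads the IMPORTS of a PE32 sample with the pefile library and stores them in a MariaDB table; connection parameters, file and table names are placeholders.

# Illustrative sketch only: extract imports with pefile and store them in MariaDB.
import pefile
import mysql.connector

def extract_imports(sample_path):
    pe = pefile.PE(sample_path)
    imports = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="ignore")
        for imp in entry.imports:
            name = imp.name.decode(errors="ignore") if imp.name else f"ordinal_{imp.ordinal}"
            imports.append((dll, name))
    return imports

conn = mysql.connector.connect(host="localhost", user="apt", password="...",
                               database="apt_packers")
cur = conn.cursor()
for dll, func in extract_imports("sample.exe"):
    cur.execute("INSERT INTO imports (sample, dll, function) VALUES (%s, %s, %s)",
                ("sample.exe", dll, func))
conn.commit()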

Finally, a study of the evolution of packers in this type of attacks over the last 12 years should be carried out.

PhotoContest

Online photo contest platform

Technologies:

Angular, Node.js

Keywords: Web services, REST, API, Frontend, Backend, DB
Complexity: 6

We propose the creation of a platform for creating and managing online photo contests. It will have a system of roles that limits the available functionality according to each role.

Requirements:

  • Contest creation (admin): creation dates, end dates, number of votes per user, kind of contest.
  • Photo upload (admin):
    • Direct upload related to a contest
    • Automatic upload from Instagram (or other platforms) based on a hashtag and a date range
  • Login against an LDAP/ActiveDirectory/similar user system
  • Traditional photo voting (all roles):
    • Display of contest photos (gallery and detail of each photo)
    • Number of photos to vote according to the maximum defined in the contest
    • Only one vote per contest and user (according to logged in user)
  • Voting per photo battle (all roles):
    • System that randomly pits 2 photos against each other, making the winners go through rounds until the final
    • Weighting of results among all votes to determine the winner
  • Visualization of results (all roles):
    • Last contest
    • Previous contests

Notes:

  • All frontend visualizations should be responsive to enable use on both computer and mobile/tablet.
  • The frontend photo visualization will perform an image reduction in order not to have too high transfer rates.

PhotoCopy

Photo album scanning using the HP Workpath platform (Android for office) with integration with image management systems

Technologies:

Android, Java, Kotlin, OpenCV

Keywords: Printers, photo albums, scan, cloud, image analysis
Complexity: 8

Often, people have "analog" photo albums that they have not yet digitized. The normal process of digitizing these albums involves removing the photographs from the pages (there may be several photographs per page), scanning them one by one and putting them back into the album. If the photographs are glued to the pages, it is necessary to peel them off or to scan the entire page and cut them out using photo editing software.

There are already some applications for scanning this type of album with mobile phones, separating the photographs; the problem is the brightness, scanning angle and quality that can be achieved for each photo. In this project we propose to develop an HP Workpath application to perform this process in a fast and friendly way. Workpath is an application development platform for HP printers based on Android, so the technologies to be used will be the same as for development on that operating system. In addition, the student will have a simulator that can be run on their PC for testing, with access to real machines in our offices when desired.

The application will scan each of the pages that the user will put in the printer scanner. It will identify the photographs and, ideally, separate them into individual photographs. To perform this task, the student can incorporate image processing libraries such as OpenCV and identify the edges or apply some simple algorithm to do so. The student will have access to a more powerful GS2 series printer to test this part of the application.

Finally, it would be desirable that the scanned images flow to the online library of photographs used by the user. For this purpose, a connection to an image management platform (ideally several), such as Google Photos, Amazon Photos or Apple iCloud, will be provided. The user will enter their platform credentials into the printer and the images will be automatically uploaded after scanning them in the printer. If integration proves to be too complicated, uploading to an FTP server or sending by email is a valid alternative.

PipelineAI

AI for filtering and categorizing images coming from an image search engine

Technologies:

Python, PyTorch

Keywords: Image classification, image search engines, OpenAI, CLIP, computer vision, zero-shot image classification, image to text, semantic similarity
Complexity: 7

Nowadays, image search engines (Google, Bing, etc.) allow us to find images related to a query entered by the user. Many of the images returned by the search are indeed related to that query, others are only loosely similar, and others are unrelated or have nothing to do with it.

On a small scale this is a negligible problem, since for a handful of images the user can detect and discard the ones that are not related to the requested search. It becomes impractical, however, when the number of images to be discarded is in the hundreds or thousands; it is simply not an efficient process. This is where traditional computer-vision models have been superseded by "zero-shot image classification" and "image-to-text" models.

The project proposed below aims to improve the image selection process through a set of iterative user inputs, with the goal of obtaining a quality in the final image set that cannot be achieved with current image search engines.

The objective of the project is to investigate and develop a system that, after obtaining a set of images from an image search engine or through web scraping, allows iterating over each of these images. Images that are not matched by the artificial intelligence model will be discarded based on the filter introduced by the user, and the user can add as many filters as desired. The final result will be the set of images that have passed all filters. The filters introduced by the user will consist of phrases about the image that will be passed to the model, and the model will return a score for each phrase according to how truthfully it describes the image.

The images with the highest scores pass to the next filter, and the process is repeated. At the end of the cycle, the user obtains a set of images classified with respect to a set of phrases ("semantic similarity" models can also be incorporated).
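
A minimal sketch of one filtering pass with CLIP via the transformers library: each image is scored against the user's phrase and only those above a threshold survive to the next filter. The model name, the phrase and the threshold are examples, and the score is only an approximate cosine similarity.

# Minimal sketch: keep only the images that match a user-defined phrase.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def filter_images(paths, phrase, threshold=0.25):
    kept = []
    for path in paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(text=[phrase], images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # logits_per_image is the scaled image-text similarity; dividing by the
        # usual scale of ~100 gives an approximate cosine similarity.
        score = outputs.logits_per_image.item() / 100.0
        if score >= threshold:
            kept.append((path, round(score, 3)))
    return kept

print(filter_images(["img1.jpg", "img2.jpg"], "a red sports car on a racetrack"))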

During the project, the student will have to develop a graphical interface for designing the filters, which will also act as the launcher of the filter-iteration process. The student will have to investigate the most suitable technology to support the inference process of the artificial intelligence models.

SearchPayAndGym

Application where gyms publish their prices for daily/weekly use of their facilities and that serves as a gateway for users to hire their services

Technologies:

Kotlin

Keywords: Gym, app, offers, subscriptions, sharing, web, mobile
Complexity: 7

As a result of the growing number of people who work as digital nomads, and the idea of being able to enjoy going to the gym while on vacation anywhere in the world, we would like to create an app that connects users and gyms so that users can buy temporary subscriptions to the service.

This app will be a feed filtered by location where users can choose the offers of temporary vouchers (one day, three days, weekly...) that gyms publish in it. Both gyms and users should be able to register there. Users will be able to search for gyms on a map and hire their services, and gyms will be able to publish their catalog of services, where they are and, optionally, pictures of their facilities.

On the other hand, as an extension, the application must be able to give users access to the gyms with the purchased voucher through a QR code, barcode, or some authentication service and, if possible, allow users to pay by debit or credit card or wallet technology (Apple Pay, Google Pay, PayPal, etc.) so that the money is sent to the corresponding gym.

Sugar Race: save the diabetics

Collaborative solution so a diabetic can raise an SOS signal and a collaborator locates and helps them with the sugar they need

Technologies:

Android, Go, Node.js, React

Keywords: Web app, mobile app
Complexity: 7

Having diabetes, having a low blood sugar level and not having anything at hand that can raise that level is one of the greatest fears of every diabetic. A blood sugar level below 70 is dangerous for a diabetic, since at that level they enter hypoglycaemia, and they must correct it by ingesting fast-absorbing carbohydrates to return to more stable and safer levels.

It often happens that due to carelessness or other circumstances, there is no access to this rescue ration, and it is likely that hypoglycaemia can cause loss of consciousness in the person. To avoid this situation, we propose the creation of a collaborative help system, in which we will have two types of users, collaborators and diabetics.

A collaborator will be registered with the purpose of quickly assisting a diabetic who is in an optimal distance range to ensure that he/she does not lose consciousness. The diabetic will be the one to launch an SOS signal that will be received by the closest collaborators. One of them will respond to the request and assist the diabetic. We will have a mobile application with two different roles, and a web application from which to manage the entire system.

WhatIsMyPlay

Identification of basketball plays (passes, shots, steals, etc.) based on a time series of player and ball positions

Technologies:

Python, PyTorch

Keywords: AI, trajectories, tracking, ML, DL, geometry, basketball
Complexity: 9.5

During the 2015 and 2016 seasons, the NBA installed in its arenas a system of cameras and video processing, called SportsVU, which made it possible to collect the X and Y positions of each player at a given time as a time series of events, also indicating the specific type of play that had occurred (a shot, a miss, a rebound, etc.). Initially the NBA shared and disseminated this data, but later the project took a more commercial direction and stopped doing so. However, the original data is still available to anyone who wants to use it [1][2].

In this project we are asked to develop a Machine Learning system (including Deep Learning) that, starting from a dataset like the one presented in the previous paragraph, is able to identify, based on the movements of each player and the ball, the type of play that has occurred. To do this it will be necessary to identify the trajectories of each player, treating them geometrically (classically or with AI), and also to split the temporal sequence of movements into events that frame each of these plays (for example: all these movements from timestamp X to timestamp Y, involving players A, B and C, are an offensive rebound).
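
A minimal rule-based sketch of a first pass over such a dataset: assign ball possession to the nearest player at each timestamp and turn possession changes into pass or steal events. The frame format is an assumption about the SportsVU-style data.

# Minimal sketch: derive pass/steal events from player and ball positions.
import math

def nearest_player(frame):
    # frame = {"t": ..., "ball": (x, y), "players": [{"id", "team", "x", "y"}, ...]}
    bx, by = frame["ball"]
    return min(frame["players"], key=lambda p: math.hypot(p["x"] - bx, p["y"] - by))

def possession_events(frames):
    events, holder = [], None
    for frame in frames:
        current = nearest_player(frame)
        if holder is not None and current["id"] != holder["id"]:
            kind = "pass" if current["team"] == holder["team"] else "steal"
            events.append({"t": frame["t"], "type": kind,
                           "from": holder["id"], "to": current["id"]})
        holder = current
    return events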

Ideally the system will be able to detect the following events:

  • Shot (the ball goes out of a player towards the basket).
  • Pass (the ball moves between players of the same team).
  • Steal (the ball moves from a player of one team to a player of the other team).
  • Movement of players both with and without the ball.
  • Penetration (movement with the ball towards the basket).
  • Rebound (offensive or defensive, depending on location and player).
  • Block (normal or blind).

[1] https://github.com/sealneaward/nba-movement-data

[2] https://github.com/neilmj/BasketballData

WhatTheNoise

System to listen to events within a physical system (machine, device, etc.) with two microphones, locating their origin and type and registering them

Technologies:

C++, Python, Linux, Node.js

Keywords: Raspberry, web, audio, signals
Complexity: 9.5

One of the things we have learned from mechanical technicians is that many of the noises from moving parts or electrical devices give us a clue as to what is happening. We want to do this automatically. To do this, this project is divided into phases:

Phase 1: Location and recording of sounds

In this phase of the project, we will implement a system that is capable of detecting nearby sounds using two microphones and locating the source (approximately) using TDOA (Time Difference Of Arrival) techniques. We will have a Raspberry Pi with two USB microphones.
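
A minimal sketch of the TDOA estimate: cross-correlate the two microphone signals to find the delay and convert it into an angle of arrival under a far-field approximation; the sample rate and microphone separation are example values.

# Minimal sketch: estimate the direction of a sound from the inter-microphone delay.
import numpy as np

FS = 48_000          # sample rate in Hz (example value)
MIC_DISTANCE = 0.5   # metres between the two USB microphones (example value)
SPEED_OF_SOUND = 343.0

def tdoa_angle(signal_left, signal_right):
    # Delay (in samples) of the left signal relative to the right one,
    # estimated from the peak of the cross-correlation.
    corr = np.correlate(signal_left, signal_right, mode="full")
    lag = np.argmax(corr) - (len(signal_right) - 1)
    tdoa = lag / FS
    # Far-field approximation: tdoa * c = d * sin(angle of arrival)
    sin_angle = np.clip(tdoa * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_angle))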

The system shall allow the configuration of sound thresholds and calibrations. For that, a small web interface will be provided to configure it.

Precision must be a priority in this project, as the distance to the microphones may vary between 0.5 and 4 meters (approx.).

At a minimum, the system must detect and register three types of sounds:

  • Continuous, fixed (motors, fans, etc.), where only a sample of X seconds and its relative position are recorded.
  • Continuous, in movement (when the same sound moves between the two points), where only a sample and the position “in movement” are recorded.
  • Punctual (a hit, a click, etc.), where the whole sound and its relative position are recorded.

Phase 2: creation of a dataset for the sound classification

A small application (web, cloud or desktop) will be created to cross-reference the previously recorded sound data with the events and states of the machine/system recorded in a separate, given log file, which also contains timestamps. It will look for a relationship between the sounds and those events and states and classify the sounds based on this.

The project must use the current state of the art (research) and incorporate it as far as possible.

YourTurn!

Web application where two users can play a card game in physical format. Two cameras record each tabletop, and the application allows zooming in on the opponent's cards as well as translating them.

Technologies:

React, Go, OpenCV

Keywords: Web, computer vision, image recognition, video recognition
Complexity: 8

Web application to play card games online with physical format cards.

Two users connect to play an online game. After selecting the game type, cameras start recording the tabletop of each player. Players can zoom in any opponent card to see the details. The app translates cards if players speak different languages.

Cameras use an image/video recognition program with a card database to identify and translate them.

Requirements:

  • Connection between two players.
  • Camera access.
  • Interactive tabletop to zoom cards.

  • Card translation if players speak different languages.

Collaborators

We collaborate with the most prestigious Spanish universities.
