January 2024 - Present
- Lead machine learning efforts: discuss technology requirements, gather data, train models, and handle deployment and integration in the production environment.
- Improve the response time of a production service by 200% and reduce its cost by 80% by deploying the NVIDIA Triton Inference Server.
- Improve the maintainability, scalability, and overall developer experience of the Python code of several FastAPI services.
January 2023 - January 2024
- Participate in face recognition and face PAD projects, ranking first in 3 attack types among 82 participants in the NIST FATE PAD evaluation.
- Constantly research new deep learning architectures, models, and methods to improve our technology, achieving error-rate reductions of up to 50%.
January 2021 - January 2023
- Participate in the Face Recognition and Face PAD teams, achieving a top-20% position in the NIST FRVT.
- Lead the Image Quality Assessment team, improving the user experience and increasing the conversion rate by 10%.
- Collaborate with C++ teams to deploy deep learning and machine learning models on edge devices.
2022 - Present
University of Alicante
2018 - 2019
Technical University of Madrid
2014 - 2019
Technical University of Madrid
Music classification is a prominent research area within Music Information Retrieval. While Deep Learning methods are capable of adequately performing this task, their classification space remains fixed once trained, which conflicts with the dynamic nature of the ever-evolving music landscape. This work explores, for the first time, the application of Continual Learning (CL) in the context of music classification. Specifically, we thoroughly evaluate five state-of-the-art CL approaches across four different music classification tasks. Additionally, we showcase that a foundation model might be the key to CL in music classification. To this end, we study a new approach called Pre-trained Class Centers, which leverages pre-trained features to create dynamic class-center spaces. Our results reveal that existing CL methods struggle when applied to music classification tasks, whereas this simple method consistently outperforms them. This highlights the need for CL methods tailored specifically to music classification.
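To make the class-center idea concrete, the sketch below shows one common way such a classifier can work: a frozen pre-trained encoder produces embeddings, each class keeps a running mean of its embeddings as its center, and prediction picks the nearest center, so new classes can be added without revisiting old data. This is a minimal illustration of the general concept, not the paper's exact implementation; the `ClassCenterClassifier` name, the 128-dimensional random "embeddings", and the toy labels are assumptions made for the example.

```python
import numpy as np

class ClassCenterClassifier:
    """Keeps one center (mean embedding) per class over frozen pre-trained
    features; new classes can be added at any time without replaying data
    from previously learned classes."""

    def __init__(self):
        self.centers = {}  # class label -> running mean embedding
        self.counts = {}   # class label -> number of examples seen

    def update(self, label, feature):
        # Incrementally update the center of `label` with one embedding.
        feature = np.asarray(feature, dtype=np.float64)
        if label not in self.centers:
            self.centers[label] = np.zeros_like(feature)
            self.counts[label] = 0
        self.counts[label] += 1
        self.centers[label] += (feature - self.centers[label]) / self.counts[label]

    def predict(self, feature):
        # Assign the class whose center is nearest in the embedding space.
        feature = np.asarray(feature, dtype=np.float64)
        labels = list(self.centers)
        dists = [np.linalg.norm(feature - self.centers[c]) for c in labels]
        return labels[int(np.argmin(dists))]

# Toy usage: random vectors stand in for embeddings from a frozen encoder.
rng = np.random.default_rng(0)
clf = ClassCenterClassifier()
for _ in range(20):
    clf.update("rock", rng.normal(0.0, 1.0, size=128))
    clf.update("jazz", rng.normal(3.0, 1.0, size=128))
print(clf.predict(rng.normal(3.0, 1.0, size=128)))  # likely "jazz"
```

Because only per-class means are stored, adding a class (or more examples of an existing one) never touches previously learned centers, which is what makes this kind of approach naturally resistant to forgetting.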
In the field of Document Image Analysis (DIA), it is common to find great heterogeneity in terms of the possible graphic domains. In this sense, it is interesting to build neural models that can be sequentially adapted to new domains without losing the knowledge from the domains already learned. This learning paradigm is known as Continual (or Lifelong) Learning (CL). Although the adaptation comes with a training set for the new domain, neural networks suffer from what is known as "catastrophic forgetting". Therefore, assuming the constraint of not keeping data from the domains already addressed, this paradigm represents a challenge yet to be solved. This work presents an approach for CL in document image binarization, one of the most widely studied tasks within the DIA field. Our results show that it is indeed feasible to address CL in this field, as the approach outperforms the baseline by a wide margin in most of the analyzed scenarios.