Learning from interpreting transitions in explainable deep learning for biometrics

Author: Wang, Zilong
Contributors: Ortega de la Puente, Alfonso (director), Ribeiro, Tony (director), UAM. Departamento de Ingeniería Informática
Year of publication: 2020
Subject:
Source: Biblos-e Archivo: Repositorio Institucional de la UAM
Universidad Autónoma de Madrid
Consejo Superior de Investigaciones Científicas (CSIC)
Description: Máster Universitario en Métodos Formales en Ingeniería Informática
With the rapid development of machine learning, its algorithms have been applied to almost every kind of task, from natural language processing to marketing prediction, and their use is also growing in human-resources settings such as the hiring pipeline. However, typical machine learning algorithms learn from data collected from society, so the learned model may inherently reflect current and historical biases; indeed, some machine learning systems have been shown to make decisions largely influenced by gender or ethnicity. How to reason about the bias in decisions made by machine learning algorithms has therefore attracted increasing attention. Neural architectures, such as deep learning (the most successful machine learning approach, based on statistical learning), lack the ability to explain their decisions. The domain described here is just one example in which explanations are needed; situations like this are at the origin of explainable AI, which is the domain of interest of this project. The nature of explanations is declarative rather than numerical. The hypothesis of this project is that declarative approaches to machine learning could be crucial for explainable AI.
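The title's "learning from interpreting transitions" refers to inducing declarative (logic) rules from observed state transitions of a dynamic system. Purely as a hedged illustration of the declarative idea (the toy two-variable dynamics, the variable names, and the brute-force single-literal search below are assumptions for this sketch, not the thesis's actual method), such rule induction might look like:

```python
from itertools import product

# Toy Boolean system with hidden dynamics a(t+1)=b(t), b(t+1)=a(t).
# Goal: recover human-readable rules from its observed transitions.
VARS = ["a", "b"]

def step(state):
    # Hypothetical dynamics used only to generate example transitions.
    return {"a": state["b"], "b": state["a"]}

# Enumerate all (state, next_state) transitions of the toy system.
transitions = []
for bits in product([False, True], repeat=len(VARS)):
    s = dict(zip(VARS, bits))
    transitions.append((s, step(s)))

def consistent(head, lit_var, lit_val):
    # Does "head(t+1) is true iff lit_var(t) == lit_val" hold in
    # every observed transition?
    return all((s[lit_var] == lit_val) == t[head] for s, t in transitions)

# Brute-force search over single-literal rule bodies.
rules = []
for head in VARS:
    for lit_var, lit_val in product(VARS, [True, False]):
        if consistent(head, lit_var, lit_val):
            rules.append((head, lit_var, lit_val))

for head, v, val in rules:
    print(f"{head}(t+1) <- {'' if val else 'not '}{v}(t)")
# prints:
#   a(t+1) <- b(t)
#   b(t+1) <- a(t)
```

The recovered rules are declarative: each one can be read and checked by a person, unlike the weights of a neural network, which is the kind of explainability contrast the abstract draws.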
Database: OpenAIRE