Showing 1 - 10 of 32 for the search: '"Sami Romdhani"'
Published in:
FG
Generative Adversarial Networks (GANs) are able to learn mappings between simple, relatively low-dimensional random distributions and points on the manifold of realistic images in image-space. The semantics of this mapping, however, are typically …
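As a rough illustration of the mapping described above (not the paper's model), the sketch below samples a low-dimensional latent vector and pushes it through a stand-in generator; the generator, its weights, and all dimensions are made-up placeholders.

```python
import numpy as np

# Illustrative sketch only: a stand-in "generator" that maps a simple,
# low-dimensional random latent vector z to a point in image-space.
# Real GAN generators are trained deep networks; the random weights here
# merely show the shape of the latent-to-image mapping.
rng = np.random.default_rng(0)

LATENT_DIM = 128          # dimensionality of the simple latent distribution
IMG_SIDE = 64             # side length of the generated (grayscale) image

W = rng.normal(scale=0.01, size=(IMG_SIDE * IMG_SIDE, LATENT_DIM))

def generate(z: np.ndarray) -> np.ndarray:
    """Map a latent sample z to an image-shaped array (a linear toy model)."""
    x = np.tanh(W @ z)                  # squash into [-1, 1] like typical GAN outputs
    return x.reshape(IMG_SIDE, IMG_SIDE)

z = rng.normal(size=LATENT_DIM)         # z ~ N(0, I): the "simple" distribution
img = generate(z)
print(img.shape)                        # (64, 64)
```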
Published in:
IJCB
Generative Adversarial Networks (GANs) are now capable of producing synthetic face images of exceptionally high visual quality. In parallel to the development of GANs themselves, efforts have been made to develop metrics to objectively assess the …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::91f8949cc75f8656331351c81f7b35d2
Published in:
CVPR
Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images. Many examples of identities are needed, and for each identity, a large variety of images is needed in order for the network to learn …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::670f00f6426f34f861b291584baafb28
Published in:
IEEE Transactions on Image Processing. 17:2456-2464
In this paper, a novel method for reducing the runtime complexity of a support vector machine classifier is presented. The new training algorithm is fast and simple. This is achieved by an over-complete wavelet transform that finds the optimal approximation …
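As a hedged sketch of the motivation (not the paper's wavelet-based training algorithm), the snippet below shows the standard kernel-SVM decision function, whose evaluation cost grows linearly with the number of support vectors; the data, kernel parameters, and helper names are illustrative assumptions.

```python
import numpy as np

# f(x) = sum_i alpha_i * y_i * k(sv_i, x) + b touches every support vector,
# so runtime grows with their number. The paper's approximation (not
# reproduced here) aims to cut that cost while keeping f(x) close.
def rbf(a, b, gamma=0.1):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_decision(x, support_vectors, coeffs, bias):
    # Cost: O(len(support_vectors)) kernel evaluations per test point.
    return sum(c * rbf(sv, x) for sv, c in zip(support_vectors, coeffs)) + bias

rng = np.random.default_rng(0)
svs = rng.normal(size=(500, 20))    # 500 support vectors, 20-D features
coeffs = rng.normal(size=500)       # alpha_i * y_i folded together
x = rng.normal(size=20)
print(svm_decision(x, svs, coeffs, bias=0.0))
```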
Published in:
Proceedings of the IEEE. 94:1977-1999
Unconstrained illumination and pose variation lead to significant variation in the photographs of faces and constitute a major hurdle preventing the widespread use of face recognition systems. The challenge is to generalize from a limited number of …
Published in:
2014 5th IEEE European Workshop on Visual Information Processing (EUVIP)
2014 5th IEEE European Workshop on Visual Information Processing (EUVIP), Dec 2014, Paris, France. pp.1-6
EUVIP
The deployment of cameras for security control allows video streams to be used as input for face recognition (FR). However, most state-of-the-art FR SDKs are generally tuned specifically for dealing with frontal and neutral …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::83db1762758dbb30c586d866d08ab97a
https://hal.archives-ouvertes.fr/hal-01313196
Published in:
Image and Vision Computing. 20:307-318
Modelling the appearance of 3D objects undergoing large pose variation relies on recovering correspondence of both shape and texture across views. The problem is hard because changes in pose not only introduce self-occlusions, and hence inconsistent 2D features …
Published in:
27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2014, Columbus, Ohio, United States. pp.1-8
CVPR
Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state-of-the-art face recognition SDKs with robustness to facial expression variations and pose changes …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9e19dfe513524959d5df45ad962e712f
https://hal.archives-ouvertes.fr/hal-01271809
Published in:
Handbook of Face Recognition ISBN: 9780857299314
In this chapter, we present the Morphable Model, a three-dimensional (3D) representation that enables the accurate modeling of any illumination and pose as well as the separation of these variations from the rest (identity and expression). …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::df2760fa52af20eae1d906f43fe826ba
https://doi.org/10.1007/978-0-85729-932-1_6
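As a minimal sketch of the linear Morphable Model idea this chapter covers (a face's shape, and analogously its texture, expressed as the average plus a weighted combination of principal components), assuming toy dimensions and random placeholder data rather than a real trained model:

```python
import numpy as np

# Toy linear Morphable Model: shape(alpha) = mean + U @ alpha.
# Dimensions and the random "model" below are placeholders only;
# real models are built from registered 3D face scans.
rng = np.random.default_rng(0)

N_VERTICES = 1000                    # toy vertex count (real models use tens of thousands)
N_COMPONENTS = 50                    # number of shape principal components

mean_shape = rng.normal(size=3 * N_VERTICES)                 # stacked (x, y, z) coordinates
shape_basis = rng.normal(size=(3 * N_VERTICES, N_COMPONENTS))

def morphable_shape(alpha: np.ndarray) -> np.ndarray:
    """Shape instance s(alpha) = mean + U @ alpha, reshaped to (N_VERTICES, 3)."""
    return (mean_shape + shape_basis @ alpha).reshape(N_VERTICES, 3)

alpha = rng.normal(size=N_COMPONENTS)   # low-dimensional identity coefficients
vertices = morphable_shape(alpha)
print(vertices.shape)                   # (1000, 3)
```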