The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals rendered as "XNUMX").
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Dadet PRAMADIHANTO, Yoshio IWAI, Masahiko YACHIDA, "Integrated Person Identification and Expression Recognition from Facial Images," IEICE TRANSACTIONS on Information, vol. E84-D, no. 7, pp. 856-866, July 2001.
Abstract: In this paper we propose an integration of face identification and facial expression recognition. A face is modeled as a graph where the nodes represent facial feature points. This model is used for automatic face and facial feature point detection, and facial feature points tracked by applying flexible feature matching. Face identification is performed by comparing the graphs representing the input face image with individual face models. Facial expression is modeled by finding the relationship between the motion of facial feature points and expression change. Individual and average expression models are generated and then used to identify facial expressions under appropriate categories and the degree of expression changes. The expression model used for facial expression recognition is chosen by the results of face identification.
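The identification step described in the abstract (comparing a graph built from the input face image against stored individual face models) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature-point names, the centroid alignment, and the mean-displacement similarity measure are all assumptions made for the example.

```python
# Hypothetical sketch of graph-based face identification (not the paper's
# actual algorithm): each face is a graph whose nodes are labeled facial
# feature points with 2-D positions. An input graph is matched against each
# stored individual model; the person whose model yields the smallest mean
# node displacement (after centroid alignment) is returned.
import math

def centroid(graph):
    """Mean position of all feature points in a face graph."""
    xs = [p[0] for p in graph.values()]
    ys = [p[1] for p in graph.values()]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def graph_distance(input_graph, model_graph):
    """Mean displacement of shared feature points after centroid alignment."""
    cx_i, cy_i = centroid(input_graph)
    cx_m, cy_m = centroid(model_graph)
    shared = input_graph.keys() & model_graph.keys()
    total = 0.0
    for node in shared:
        xi, yi = input_graph[node]
        xm, ym = model_graph[node]
        total += math.hypot((xi - cx_i) - (xm - cx_m),
                            (yi - cy_i) - (ym - cy_m))
    return total / len(shared)

def identify(input_graph, models):
    """Return the name of the individual model closest to the input graph."""
    return min(models, key=lambda name: graph_distance(input_graph, models[name]))

# Toy models with three hypothetical feature points per face.
models = {
    "person_a": {"eye_l": (30, 40), "eye_r": (70, 40), "mouth": (50, 80)},
    "person_b": {"eye_l": (25, 45), "eye_r": (75, 45), "mouth": (50, 90)},
}
probe = {"eye_l": (31, 41), "eye_r": (69, 39), "mouth": (51, 81)}
print(identify(probe, models))  # person_a
```

In the paper, this identification result then selects which individual expression model to use for expression recognition; a default (average) expression model would cover unidentified faces.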
URL: https://global.ieice.org/en_transactions/information/10.1587/e84-d_7_856/_p
@ARTICLE{e84-d_7_856,
author={Dadet PRAMADIHANTO and Yoshio IWAI and Masahiko YACHIDA},
journal={IEICE TRANSACTIONS on Information},
title={Integrated Person Identification and Expression Recognition from Facial Images},
year={2001},
volume={E84-D},
number={7},
pages={856-866},
abstract={In this paper we propose an integration of face identification and facial expression recognition. A face is modeled as a graph where the nodes represent facial feature points. This model is used for automatic face and facial feature point detection, and facial feature points tracked by applying flexible feature matching. Face identification is performed by comparing the graphs representing the input face image with individual face models. Facial expression is modeled by finding the relationship between the motion of facial feature points and expression change. Individual and average expression models are generated and then used to identify facial expressions under appropriate categories and the degree of expression changes. The expression model used for facial expression recognition is chosen by the results of face identification.},
month={July},}
TY - JOUR
TI - Integrated Person Identification and Expression Recognition from Facial Images
T2 - IEICE TRANSACTIONS on Information
SP - 856
EP - 866
AU - Dadet PRAMADIHANTO
AU - Yoshio IWAI
AU - Masahiko YACHIDA
PY - 2001
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E84-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2001
AB - In this paper we propose an integration of face identification and facial expression recognition. A face is modeled as a graph where the nodes represent facial feature points. This model is used for automatic face and facial feature point detection, and facial feature points tracked by applying flexible feature matching. Face identification is performed by comparing the graphs representing the input face image with individual face models. Facial expression is modeled by finding the relationship between the motion of facial feature points and expression change. Individual and average expression models are generated and then used to identify facial expressions under appropriate categories and the degree of expression changes. The expression model used for facial expression recognition is chosen by the results of face identification.
ER -