You searched for subject:(Speech animation). Showing records 1 – 15 of 15 total matches.

1. Ding, Yu. Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent.

Degree: Docteur es, Signal et images, 2014, Paris, ENST

Our objective is to simulate expressive multimodal behaviours for embodied conversational agents (ECAs). These are entities endowed with affective and communicative capabilities;… (more)

Subjects/Keywords: Modèle de Markov caché; Agent conversationnel animé; Synthèse d’animation; Animation de la parole; Animation du rire; Hidden Markov model; Embodied conversational agent; Animation synthesis; Speech animation; Laughter animation

APA (6th Edition):

Ding, Y. (2014). Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent. (Doctoral Dissertation). Paris, ENST. Retrieved from http://www.theses.fr/2014ENST0050

Chicago Manual of Style (16th Edition):

Ding, Yu. “Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent.” 2014. Doctoral Dissertation, Paris, ENST. Accessed August 18, 2019. http://www.theses.fr/2014ENST0050.

MLA Handbook (7th Edition):

Ding, Yu. “Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent.” 2014. Web. 18 Aug 2019.

Vancouver:

Ding Y. Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent. [Internet] [Doctoral dissertation]. Paris, ENST; 2014. [cited 2019 Aug 18]. Available from: http://www.theses.fr/2014ENST0050.

Council of Science Editors:

Ding Y. Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé : Data-driven expressive animation model of speech and laughter for an embodied conversational agent. [Doctoral Dissertation]. Paris, ENST; 2014. Available from: http://www.theses.fr/2014ENST0050


University of Manchester

2. Deena, Salil Prashant. Visual speech synthesis by learning joint probabilistic models of audio and video.

Degree: PhD, 2012, University of Manchester

 Visual speech synthesis deals with synthesising facial animation from an audio representation of speech. In the last decade or so, data-driven approaches have gained prominence… (more)

Subjects/Keywords: 006.54; Visual Speech Synthesis; Speech-Driven Facial Animation; Artificial Talking Head; Gaussian Processes; Machine Learning; Speech Synthesis; Computer Vision

APA (6th Edition):

Deena, S. P. (2012). Visual speech synthesis by learning joint probabilistic models of audio and video. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442

Chicago Manual of Style (16th Edition):

Deena, Salil Prashant. “Visual speech synthesis by learning joint probabilistic models of audio and video.” 2012. Doctoral Dissertation, University of Manchester. Accessed August 18, 2019. https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442.

MLA Handbook (7th Edition):

Deena, Salil Prashant. “Visual speech synthesis by learning joint probabilistic models of audio and video.” 2012. Web. 18 Aug 2019.

Vancouver:

Deena SP. Visual speech synthesis by learning joint probabilistic models of audio and video. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2019 Aug 18]. Available from: https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442.

Council of Science Editors:

Deena SP. Visual speech synthesis by learning joint probabilistic models of audio and video. [Doctoral Dissertation]. University of Manchester; 2012. Available from: https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442


Texas A&M University

3. Zavala Chmelicka, Marco Enrique. Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation.

Degree: 2005, Texas A&M University

 Facial animations capable of articulating accurate movements in synchrony with a speech track have become a subject of much research during the past decade. Most… (more)

Subjects/Keywords: Facial Animation; Visual Prosody; Speech-Driven Facial Animation

APA (6th Edition):

Zavala Chmelicka, M. E. (2005). Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/2436

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zavala Chmelicka, Marco Enrique. “Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation.” 2005. Thesis, Texas A&M University. Accessed August 18, 2019. http://hdl.handle.net/1969.1/2436.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zavala Chmelicka, Marco Enrique. “Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation.” 2005. Web. 18 Aug 2019.

Vancouver:

Zavala Chmelicka ME. Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation. [Internet] [Thesis]. Texas A&M University; 2005. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/1969.1/2436.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zavala Chmelicka ME. Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation. [Thesis]. Texas A&M University; 2005. Available from: http://hdl.handle.net/1969.1/2436

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


The Ohio State University

4. Somasundaram, Arunachalam. A facial animation model for expressive audio-visual speech.

Degree: PhD, Computer and Information Science, 2006, The Ohio State University

 Expressive facial speech animation is a challenging topic of great interest to the computer graphics community. Adding emotions to audio-visual speech animation is very important… (more)

Subjects/Keywords: Computer Science; expressive facial speech animation; expressive audio-visual speech; facial animation; speech animation; facial expressions; face; speech; emotions

APA (6th Edition):

Somasundaram, A. (2006). A facial animation model for expressive audio-visual speech. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1148973645

Chicago Manual of Style (16th Edition):

Somasundaram, Arunachalam. “A facial animation model for expressive audio-visual speech.” 2006. Doctoral Dissertation, The Ohio State University. Accessed August 18, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1148973645.

MLA Handbook (7th Edition):

Somasundaram, Arunachalam. “A facial animation model for expressive audio-visual speech.” 2006. Web. 18 Aug 2019.

Vancouver:

Somasundaram A. A facial animation model for expressive audio-visual speech. [Internet] [Doctoral dissertation]. The Ohio State University; 2006. [cited 2019 Aug 18]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1148973645.

Council of Science Editors:

Somasundaram A. A facial animation model for expressive audio-visual speech. [Doctoral Dissertation]. The Ohio State University; 2006. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1148973645


University of Edinburgh

5. Hofer, Gregor Otto. Speech-driven animation using multi-modal hidden Markov models.

Degree: 2010, University of Edinburgh

 The main objective of this thesis was the synthesis of speech synchronised motion, in particular head motion. The hypothesis that head motion can be estimated… (more)

Subjects/Keywords: 502.85; synthesis of speech synchronised motion; head motion; Hidden Markov Models; computer animation

APA (6th Edition):

Hofer, G. O. (2010). Speech-driven animation using multi-modal hidden Markov models. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/3786

Chicago Manual of Style (16th Edition):

Hofer, Gregor Otto. “Speech-driven animation using multi-modal hidden Markov models.” 2010. Doctoral Dissertation, University of Edinburgh. Accessed August 18, 2019. http://hdl.handle.net/1842/3786.

MLA Handbook (7th Edition):

Hofer, Gregor Otto. “Speech-driven animation using multi-modal hidden Markov models.” 2010. Web. 18 Aug 2019.

Vancouver:

Hofer GO. Speech-driven animation using multi-modal hidden Markov models. [Internet] [Doctoral dissertation]. University of Edinburgh; 2010. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/1842/3786.

Council of Science Editors:

Hofer GO. Speech-driven animation using multi-modal hidden Markov models. [Doctoral Dissertation]. University of Edinburgh; 2010. Available from: http://hdl.handle.net/1842/3786


Vilnius Gediminas Technical University

6. Mažonavičiūtė, Ingrida. Lietuvių kalbos animavimo technologija taikant trimatį veido modelį.

Degree: PhD, Informatics Engineering, 2013, Vilnius Gediminas Technical University

Speech animation is widely used in technical devices to give deaf people, children, and middle-aged and older people equal opportunities to communicate. People are highly sensitive to changes in facial appearance,… (more)

Subjects/Keywords: Kalbos animacija; Kalbanti galva; Fonema; Vizema; Speech animation; Talking head; Phoneme; Viseme

APA (6th Edition):

Mažonavičiūtė, I. (2013). Lietuvių kalbos animavimo technologija taikant trimatį veido modelį. (Doctoral Dissertation). Vilnius Gediminas Technical University. Retrieved from http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130218_112524-41830

Chicago Manual of Style (16th Edition):

Mažonavičiūtė, Ingrida. “Lietuvių kalbos animavimo technologija taikant trimatį veido modelį.” 2013. Doctoral Dissertation, Vilnius Gediminas Technical University. Accessed August 18, 2019. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130218_112524-41830.

MLA Handbook (7th Edition):

Mažonavičiūtė, Ingrida. “Lietuvių kalbos animavimo technologija taikant trimatį veido modelį.” 2013. Web. 18 Aug 2019.

Vancouver:

Mažonavičiūtė I. Lietuvių kalbos animavimo technologija taikant trimatį veido modelį. [Internet] [Doctoral dissertation]. Vilnius Gediminas Technical University; 2013. [cited 2019 Aug 18]. Available from: http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130218_112524-41830.

Council of Science Editors:

Mažonavičiūtė I. Lietuvių kalbos animavimo technologija taikant trimatį veido modelį. [Doctoral Dissertation]. Vilnius Gediminas Technical University; 2013. Available from: http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130218_112524-41830


Universidade do Estado do Rio de Janeiro

7. Thiago Roseiro da Silva. Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias.

Degree: Master, 2016, Universidade do Estado do Rio de Janeiro

This dissertation analyses the speech-language therapy practices in the Anima Animação workshops, which use the production of animated films as a means of stimulating the… (more)

Subjects/Keywords: Learning disabilities; Speech and learning therapy; Adolescentes e jovens; POLITICAS PUBLICAS; Fonoaudiologia; Dificuldades de aprendizagem; Animação; Adolescentes; Animation; Adolescents and youths

APA (6th Edition):

Silva, T. R. d. (2016). Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias. (Masters Thesis). Universidade do Estado do Rio de Janeiro. Retrieved from http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=10827

Chicago Manual of Style (16th Edition):

Silva, Thiago Roseiro da. “Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias.” 2016. Masters Thesis, Universidade do Estado do Rio de Janeiro. Accessed August 18, 2019. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=10827.

MLA Handbook (7th Edition):

Silva, Thiago Roseiro da. “Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias.” 2016. Web. 18 Aug 2019.

Vancouver:

Silva TRd. Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias. [Internet] [Masters thesis]. Universidade do Estado do Rio de Janeiro; 2016. [cited 2019 Aug 18]. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=10827.

Council of Science Editors:

Silva TRd. Artes, Aprendizagens, Juventudes e Cidadanias: por práticas fonoaudiológicas revolucionárias. [Masters Thesis]. Universidade do Estado do Rio de Janeiro; 2016. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=10827


Vilnius Gediminas Technical University

8. Mažonavičiūtė, Ingrida. Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai.

Degree: Master, Informatics Engineering, 2008, Vilnius Gediminas Technical University

This thesis investigates the relationship between Lithuanian speech sounds and their visual information. Animation algorithms for talking-head models are analysed, their shortcomings identified, and with this in mind… (more)

Subjects/Keywords: Kalbantis galvos modelis; Kalbos garso ir vaizdo sintetinimo technologijos; Vizema; Fonema; Talking head; Speech animation technologies; Viseme; Phoneme

APA (6th Edition):

Mažonavičiūtė, Ingrida. (2008). Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai. (Masters Thesis). Vilnius Gediminas Technical University. Retrieved from http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_150724-72069

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

Mažonavičiūtė, Ingrida. “Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai.” 2008. Masters Thesis, Vilnius Gediminas Technical University. Accessed August 18, 2019. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_150724-72069.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

Mažonavičiūtė, Ingrida. “Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai.” 2008. Web. 18 Aug 2019.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

Mažonavičiūtė, Ingrida. Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai. [Internet] [Masters thesis]. Vilnius Gediminas Technical University; 2008. [cited 2019 Aug 18]. Available from: http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_150724-72069.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

Mažonavičiūtė, Ingrida. Anglų kalbos vizemų pritaikymas lietuvių kalbos garsų animacijai. [Masters Thesis]. Vilnius Gediminas Technical University; 2008. Available from: http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_150724-72069

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

9. AlTarawneh, Enas Khaled Ahm. A Cloud-Based Extensible Avatar For Human Robot Interaction.

Degree: MSc -MS, Electrical Engineering and Computer Science, 2019, York University

 Adding an interactive avatar to a human-robot interface requires the development of tools that animate the avatar so as to simulate an intelligent conversation partner.… (more)

Subjects/Keywords: HCI; HRI; Robotics; Avatar; Text-to-speech; Speech-to-text; AI; Cloud-Based; Parallel processing; Distributed processing; Artificial intelligence; Human-computer Interaction; Human-robot interaction; Rendering; Animation; XML

APA (6th Edition):

AlTarawneh, E. K. A. (2019). A Cloud-Based Extensible Avatar For Human Robot Interaction. (Masters Thesis). York University. Retrieved from http://hdl.handle.net/10315/36254

Chicago Manual of Style (16th Edition):

AlTarawneh, Enas Khaled Ahm. “A Cloud-Based Extensible Avatar For Human Robot Interaction.” 2019. Masters Thesis, York University. Accessed August 18, 2019. http://hdl.handle.net/10315/36254.

MLA Handbook (7th Edition):

AlTarawneh, Enas Khaled Ahm. “A Cloud-Based Extensible Avatar For Human Robot Interaction.” 2019. Web. 18 Aug 2019.

Vancouver:

AlTarawneh EKA. A Cloud-Based Extensible Avatar For Human Robot Interaction. [Internet] [Masters thesis]. York University; 2019. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/10315/36254.

Council of Science Editors:

AlTarawneh EKA. A Cloud-Based Extensible Avatar For Human Robot Interaction. [Masters Thesis]. York University; 2019. Available from: http://hdl.handle.net/10315/36254


The Ohio State University

10. King, Scott Alan. A Facial Model and Animation Techniques for Animated Speech.

Degree: PhD, Computer and Information Science, 2001, The Ohio State University

 Creating animated speech requires a facial model capable of representing the myriad shapes the human face experiences during speech and a method to produce the… (more)

Subjects/Keywords: Computer Science; Facial Animation; Facial Modeling; Speech Synchronization; Lip-Synch

APA (6th Edition):

King, S. A. (2001). A Facial Model and Animation Techniques for Animated Speech. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221

Chicago Manual of Style (16th Edition):

King, Scott Alan. “A Facial Model and Animation Techniques for Animated Speech.” 2001. Doctoral Dissertation, The Ohio State University. Accessed August 18, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221.

MLA Handbook (7th Edition):

King, Scott Alan. “A Facial Model and Animation Techniques for Animated Speech.” 2001. Web. 18 Aug 2019.

Vancouver:

King SA. A Facial Model and Animation Techniques for Animated Speech. [Internet] [Doctoral dissertation]. The Ohio State University; 2001. [cited 2019 Aug 18]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221.

Council of Science Editors:

King SA. A Facial Model and Animation Techniques for Animated Speech. [Doctoral Dissertation]. The Ohio State University; 2001. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221

11. Deena, Salil Prashant. Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video.

Degree: 2012, University of Manchester

Visual speech synthesis deals with synthesising facial animation from an audio representation of speech. In the last decade or so, data-driven approaches have gained prominence… (more)

Subjects/Keywords: Visual Speech Synthesis; Speech-Driven Facial Animation; Artificial Talking Head; Gaussian Processes; Machine Learning; Speech Synthesis; Computer Vision

APA (6th Edition):

Deena, S. P. (2012). Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:158236

Chicago Manual of Style (16th Edition):

Deena, Salil Prashant. “Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video.” 2012. Doctoral Dissertation, University of Manchester. Accessed August 18, 2019. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:158236.

MLA Handbook (7th Edition):

Deena, Salil Prashant. “Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video.” 2012. Web. 18 Aug 2019.

Vancouver:

Deena SP. Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2019 Aug 18]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:158236.

Council of Science Editors:

Deena SP. Visual Speech Synthesis by Learning Joint Probabilistic Models of Audio and Video. [Doctoral Dissertation]. University of Manchester; 2012. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:158236


University of Johannesburg

12. Hodgkinson, Warren. Interactive speech-driven facial animation.

Degree: 2008, University of Johannesburg

One of the fastest developing areas in the entertainment industry is digital animation. Television programmes and movies frequently use 3D animations to enhance or replace… (more)

Subjects/Keywords: Computer animation; Speech processing systems; Three dimensional imaging; Computer simulation; Signal processing (digital techniques)

APA (6th Edition):

Hodgkinson, W. (2008). Interactive speech-driven facial animation. (Thesis). University of Johannesburg. Retrieved from http://hdl.handle.net/10210/807

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hodgkinson, Warren. “Interactive speech-driven facial animation.” 2008. Thesis, University of Johannesburg. Accessed August 18, 2019. http://hdl.handle.net/10210/807.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hodgkinson, Warren. “Interactive speech-driven facial animation.” 2008. Web. 18 Aug 2019.

Vancouver:

Hodgkinson W. Interactive speech-driven facial animation. [Internet] [Thesis]. University of Johannesburg; 2008. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/10210/807.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hodgkinson W. Interactive speech-driven facial animation. [Thesis]. University of Johannesburg; 2008. Available from: http://hdl.handle.net/10210/807

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Curtin University of Technology

13. Xiao, He. An affective personality for an embodied conversational agent.

Degree: 2006, Curtin University of Technology

Curtin University's Embodied Conversational Agents (ECA) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue… (more)

Subjects/Keywords: facial animation engine; email agent system; virtual human markup language; text to emotional speech synthesiser; affective personality model; Embodied Conversational Agents; multi-modal Dialogue Manager

APA (6th Edition):

Xiao, H. (2006). An affective personality for an embodied conversational agent. (Thesis). Curtin University of Technology. Retrieved from http://hdl.handle.net/20.500.11937/167

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Xiao, He. “An affective personality for an embodied conversational agent.” 2006. Thesis, Curtin University of Technology. Accessed August 18, 2019. http://hdl.handle.net/20.500.11937/167.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Xiao, He. “An affective personality for an embodied conversational agent.” 2006. Web. 18 Aug 2019.

Vancouver:

Xiao H. An affective personality for an embodied conversational agent. [Internet] [Thesis]. Curtin University of Technology; 2006. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/20.500.11937/167.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Xiao H. An affective personality for an embodied conversational agent. [Thesis]. Curtin University of Technology; 2006. Available from: http://hdl.handle.net/20.500.11937/167

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


ETH Zürich

14. Kalberer, Gregor Arthur. Realistic face animation for speech.

Degree: 2003, ETH Zürich

Subjects/Keywords: MATHEMATISCHE BILDVERARBEITUNG; COMPUTERANIMATION (COMPUTERGRAFIK); GESICHT (ANATOMIE UND PHYSIOLOGIE); SPRECHEN UND ARTIKULATION (ANATOMIE UND PHYSIOLOGIE); MATHEMATICAL IMAGE PROCESSING; COMPUTER ANIMATION (COMPUTER GRAPHICS); FACE (ANATOMY AND PHYSIOLOGY); SPEECH AND ARTICULATION (ANATOMY AND PHYSIOLOGY); info:eu-repo/classification/ddc/510; Mathematics

APA (6th Edition):

Kalberer, G. A. (2003). Realistic face animation for speech. (Doctoral Dissertation). ETH Zürich. Retrieved from http://hdl.handle.net/20.500.11850/147734

Chicago Manual of Style (16th Edition):

Kalberer, Gregor Arthur. “Realistic face animation for speech.” 2003. Doctoral Dissertation, ETH Zürich. Accessed August 18, 2019. http://hdl.handle.net/20.500.11850/147734.

MLA Handbook (7th Edition):

Kalberer, Gregor Arthur. “Realistic face animation for speech.” 2003. Web. 18 Aug 2019.

Vancouver:

Kalberer GA. Realistic face animation for speech. [Internet] [Doctoral dissertation]. ETH Zürich; 2003. [cited 2019 Aug 18]. Available from: http://hdl.handle.net/20.500.11850/147734.

Council of Science Editors:

Kalberer GA. Realistic face animation for speech. [Doctoral Dissertation]. ETH Zürich; 2003. Available from: http://hdl.handle.net/20.500.11850/147734


Pontifical Catholic University of Rio de Janeiro

15. PAULA SALGADO LUCENA RODRIGUES. [en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING.

Degree: 2008, Pontifical Catholic University of Rio de Janeiro

[pt] This thesis presents a system for generating dynamic facial expressions synchronized with speech on a realistic three-dimensional face. Facial expressions are understood as… (more)

Subjects/Keywords: [pt] MPEG-4; [en] MPEG-4; [pt] ANIMACAO FACIAL; [en] FACIAL ANIMATION; [pt] EXPRESSOES FACIAIS DINAMICAS; [en] DYNAMIC FACIAL EXPRESSIONS; [pt] PROCESSAMENTO DE FALA; [en] SPEECH PROCESSING; [pt] MODELO DE EMOCAO; [en] EMOTION MODEL; [pt] HIPERCUBO EMOCIONAL; [en] EMOTIONAL HYPERCUBE; [pt] PERSONAGENS VIRTUAIS FALANTES; [en] VIRTUAL TALKING CHARACTERS

APA (6th Edition):

RODRIGUES, P. S. L. (2008). [en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING. (Thesis). Pontifical Catholic University of Rio de Janeiro. Retrieved from http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

RODRIGUES, PAULA SALGADO LUCENA. “[en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING.” 2008. Thesis, Pontifical Catholic University of Rio de Janeiro. Accessed August 18, 2019. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

RODRIGUES, PAULA SALGADO LUCENA. “[en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING.” 2008. Web. 18 Aug 2019.

Vancouver:

RODRIGUES PSL. [en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING. [Internet] [Thesis]. Pontifical Catholic University of Rio de Janeiro; 2008. [cited 2019 Aug 18]. Available from: http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

RODRIGUES PSL. [en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING. [Thesis]. Pontifical Catholic University of Rio de Janeiro; 2008. Available from: http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
