You searched for subject:(Visual speech). Showing records 1 – 30 of 106 total matches.

University of Southern California

1. Files, Benjamin Taylor. Selectivity for visual speech in posterior temporal cortex.

Degree: PhD, Neuroscience, 2013, University of Southern California

Visual speech perception, also known as lipreading or speech reading, involves extracting linguistic information from seeing a talking face. What information is available in a… (more)

Subjects/Keywords: lipreading; visual speech perception; behavior; discrimination; electroencephalography; visual speech mismatch negativity

APA (6th Edition):

Files, B. T. (2013). Selectivity for visual speech in posterior temporal cortex. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/340643/rec/5769

Chicago Manual of Style (16th Edition):

Files, Benjamin Taylor. “Selectivity for visual speech in posterior temporal cortex.” 2013. Doctoral Dissertation, University of Southern California. Accessed July 08, 2020. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/340643/rec/5769.

MLA Handbook (7th Edition):

Files, Benjamin Taylor. “Selectivity for visual speech in posterior temporal cortex.” 2013. Web. 08 Jul 2020.

Vancouver:

Files BT. Selectivity for visual speech in posterior temporal cortex. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2020 Jul 08]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/340643/rec/5769.

Council of Science Editors:

Files BT. Selectivity for visual speech in posterior temporal cortex. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/340643/rec/5769


University of Minnesota

2. Bernstein, Sara. Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech.

Degree: MA, Speech-Language-Hearing Sciences, 2015, University of Minnesota

 The presence of subtle but meaningful within-category sound differences has been documented in acoustic and articulatory analyses of children's speech. This study explored visual analog… (more)

Subjects/Keywords: development; perception; speech sound; visual analog scale

APA (6th Edition):

Bernstein, S. (2015). Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech. (Masters Thesis). University of Minnesota. Retrieved from http://hdl.handle.net/11299/174721

Chicago Manual of Style (16th Edition):

Bernstein, Sara. “Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech.” 2015. Masters Thesis, University of Minnesota. Accessed July 08, 2020. http://hdl.handle.net/11299/174721.

MLA Handbook (7th Edition):

Bernstein, Sara. “Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech.” 2015. Web. 08 Jul 2020.

Vancouver:

Bernstein S. Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech. [Internet] [Masters thesis]. University of Minnesota; 2015. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/11299/174721.

Council of Science Editors:

Bernstein S. Individual differences in the acquisition of the /t/-/k/ contrast: A study of adults' perception of children's speech. [Masters Thesis]. University of Minnesota; 2015. Available from: http://hdl.handle.net/11299/174721


Boston University

3. Rajaram, Siddharth. Selective attention and speech processing in the cortex.

Degree: PhD, Neuroscience, 2014, Boston University

 In noisy and complex environments, human listeners must segregate the mixture of sound sources arriving at their ears and selectively attend a single source, thereby… (more)

Subjects/Keywords: Neurosciences; attention; audio-visual speech; cross-modal; EEG; MEG; speech tracking

APA (6th Edition):

Rajaram, S. (2014). Selective attention and speech processing in the cortex. (Doctoral Dissertation). Boston University. Retrieved from http://hdl.handle.net/2144/13312

Chicago Manual of Style (16th Edition):

Rajaram, Siddharth. “Selective attention and speech processing in the cortex.” 2014. Doctoral Dissertation, Boston University. Accessed July 08, 2020. http://hdl.handle.net/2144/13312.

MLA Handbook (7th Edition):

Rajaram, Siddharth. “Selective attention and speech processing in the cortex.” 2014. Web. 08 Jul 2020.

Vancouver:

Rajaram S. Selective attention and speech processing in the cortex. [Internet] [Doctoral dissertation]. Boston University; 2014. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2144/13312.

Council of Science Editors:

Rajaram S. Selective attention and speech processing in the cortex. [Doctoral Dissertation]. Boston University; 2014. Available from: http://hdl.handle.net/2144/13312


University of Minnesota

4. Johnson, Julie M. The role of clinical experience in listening for covert contrasts in children’s speech.

Degree: MA, Speech-Language Pathology, 2010, University of Minnesota

University of Minnesota M.A. thesis. June 2010. Major: Speech-Language Pathology. Advisor: Benjamin Munson, Ph.D. 1 computer file (PDF); vii, 61 pages, appendices A-B. Children acquire… (more)

Subjects/Keywords: Speech sounds; Acoustic studies; Speech-language pathologists; Stimuli; Visual-analog scale (VAS); Speech-Language Pathology

APA (6th Edition):

Johnson, J. M. (2010). The role of clinical experience in listening for covert contrasts in children’s speech. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/93163

Chicago Manual of Style (16th Edition):

Johnson, Julie M. “The role of clinical experience in listening for covert contrasts in children’s speech.” 2010. Masters Thesis, University of Minnesota. Accessed July 08, 2020. http://purl.umn.edu/93163.

MLA Handbook (7th Edition):

Johnson, Julie M. “The role of clinical experience in listening for covert contrasts in children’s speech.” 2010. Web. 08 Jul 2020.

Vancouver:

Johnson JM. The role of clinical experience in listening for covert contrasts in children’s speech. [Internet] [Masters thesis]. University of Minnesota; 2010. [cited 2020 Jul 08]. Available from: http://purl.umn.edu/93163.

Council of Science Editors:

Johnson JM. The role of clinical experience in listening for covert contrasts in children’s speech. [Masters Thesis]. University of Minnesota; 2010. Available from: http://purl.umn.edu/93163


Loughborough University

5. Ahmad, Nasir. A motion based approach for audio-visual automatic speech recognition.

Degree: PhD, 2011, Loughborough University

 The research work presented in this thesis introduces novel approaches for both visual region of interest extraction and visual feature extraction for use in audio-visual(more)

Subjects/Keywords: 005.3; Automatic speech recognition (ASR); Audio-visual automatic speech recognition (AVASR); Bi-modal speech recognition; Visual front-end; Features extraction; Visual ROI; Speech dynamics

APA (6th Edition):

Ahmad, N. (2011). A motion based approach for audio-visual automatic speech recognition. (Doctoral Dissertation). Loughborough University. Retrieved from http://hdl.handle.net/2134/8564

Chicago Manual of Style (16th Edition):

Ahmad, Nasir. “A motion based approach for audio-visual automatic speech recognition.” 2011. Doctoral Dissertation, Loughborough University. Accessed July 08, 2020. http://hdl.handle.net/2134/8564.

MLA Handbook (7th Edition):

Ahmad, Nasir. “A motion based approach for audio-visual automatic speech recognition.” 2011. Web. 08 Jul 2020.

Vancouver:

Ahmad N. A motion based approach for audio-visual automatic speech recognition. [Internet] [Doctoral dissertation]. Loughborough University; 2011. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2134/8564.

Council of Science Editors:

Ahmad N. A motion based approach for audio-visual automatic speech recognition. [Doctoral Dissertation]. Loughborough University; 2011. Available from: http://hdl.handle.net/2134/8564


RMIT University

6. Shaikh, A. Robust visual speech recognition using optical flow analysis and rotation invariant features.

Degree: 2011, RMIT University

 The focus of this thesis is to develop computer vision algorithms for visual speech recognition system to identify the visemes. The majority of existing speech(more)

Subjects/Keywords: Fields of Research; Lip-reading; visual speech recognition; Speech reading; visual feature extraction; temporal speech segmentation; classification of visual features; Support Vector Machines

APA (6th Edition):

Shaikh, A. (2011). Robust visual speech recognition using optical flow analysis and rotation invariant features. (Thesis). RMIT University. Retrieved from http://researchbank.rmit.edu.au/view/rmit:160103

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Shaikh, A. “Robust visual speech recognition using optical flow analysis and rotation invariant features.” 2011. Thesis, RMIT University. Accessed July 08, 2020. http://researchbank.rmit.edu.au/view/rmit:160103.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Shaikh, A. “Robust visual speech recognition using optical flow analysis and rotation invariant features.” 2011. Web. 08 Jul 2020.

Vancouver:

Shaikh A. Robust visual speech recognition using optical flow analysis and rotation invariant features. [Internet] [Thesis]. RMIT University; 2011. [cited 2020 Jul 08]. Available from: http://researchbank.rmit.edu.au/view/rmit:160103.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Shaikh A. Robust visual speech recognition using optical flow analysis and rotation invariant features. [Thesis]. RMIT University; 2011. Available from: http://researchbank.rmit.edu.au/view/rmit:160103

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Toronto

7. Kearney, Elaine Katrina. The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback.

Degree: PhD, 2018, University of Toronto

 This dissertation comprises three studies that address the goals of better understanding the effects of Parkinson’s disease (PD) on speech movements and the development of… (more)

Subjects/Keywords: Augmented visual feedback; Parkinson's disease; Rehabilitation science; Speech intelligibility; Speech kinematics; 0460

APA (6th Edition):

Kearney, E. K. (2018). The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback. (Doctoral Dissertation). University of Toronto. Retrieved from http://hdl.handle.net/1807/89669

Chicago Manual of Style (16th Edition):

Kearney, Elaine Katrina. “The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback.” 2018. Doctoral Dissertation, University of Toronto. Accessed July 08, 2020. http://hdl.handle.net/1807/89669.

MLA Handbook (7th Edition):

Kearney, Elaine Katrina. “The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback.” 2018. Web. 08 Jul 2020.

Vancouver:

Kearney EK. The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback. [Internet] [Doctoral dissertation]. University of Toronto; 2018. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/1807/89669.

Council of Science Editors:

Kearney EK. The Speech Movement Disorder and its Rehabilitation in Parkinson’s Disease Using Augmented Visual Feedback. [Doctoral Dissertation]. University of Toronto; 2018. Available from: http://hdl.handle.net/1807/89669

8. Roxburgh, Zoe. Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate.

Degree: PhD, 2018, Queen Margaret University

 Background: Ultrasound Tongue Imaging (UTI) is growing increasingly popular for assessing and treating Speech Sound Disorders (SSDs) and has more recently been used to qualitatively… (more)

Subjects/Keywords: Cleft Palate; Speech; Ultrasound; Visual Biofeedback; Visual Articulatory Models; Phonetic Transcriptions; Perceptual Evaluation

APA (6th Edition):

Roxburgh, Z. (2018). Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate. (Doctoral Dissertation). Queen Margaret University. Retrieved from https://eresearch.qmu.ac.uk/handle/20.500.12289/8899 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760544

Chicago Manual of Style (16th Edition):

Roxburgh, Zoe. “Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate.” 2018. Doctoral Dissertation, Queen Margaret University. Accessed July 08, 2020. https://eresearch.qmu.ac.uk/handle/20.500.12289/8899 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760544.

MLA Handbook (7th Edition):

Roxburgh, Zoe. “Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate.” 2018. Web. 08 Jul 2020.

Vancouver:

Roxburgh Z. Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate. [Internet] [Doctoral dissertation]. Queen Margaret University; 2018. [cited 2020 Jul 08]. Available from: https://eresearch.qmu.ac.uk/handle/20.500.12289/8899 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760544.

Council of Science Editors:

Roxburgh Z. Visualising articulation : real-time ultrasound visual biofeedback and visual articulatory models and their use in treating speech sound disorders associated with submucous cleft palate. [Doctoral Dissertation]. Queen Margaret University; 2018. Available from: https://eresearch.qmu.ac.uk/handle/20.500.12289/8899 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760544


University of Minnesota

9. Julien, Hannah M. Modifying speech to children: an acoustic study of adults’ fricatives.

Degree: MA, Speech-Language Pathology, 2010, University of Minnesota

University of Minnesota M.A. thesis. May 2010. Major: Speech-Language Pathology. Advisor: Professor Benjamin Munson. 1 computer file (PDF); vi, 51 pages, appendices A-B. Ill. (some… (more)

Subjects/Keywords: Visual analog scale (VAS); Speech sound; Listen-rate-say task; Children’s speech; Centroid; Speech-Language Pathology

APA (6th Edition):

Julien, H. M. (2010). Modifying speech to children: an acoustic study of adults’ fricatives. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/93169

Chicago Manual of Style (16th Edition):

Julien, Hannah M. “Modifying speech to children: an acoustic study of adults’ fricatives.” 2010. Masters Thesis, University of Minnesota. Accessed July 08, 2020. http://purl.umn.edu/93169.

MLA Handbook (7th Edition):

Julien, Hannah M. “Modifying speech to children: an acoustic study of adults’ fricatives.” 2010. Web. 08 Jul 2020.

Vancouver:

Julien HM. Modifying speech to children: an acoustic study of adults’ fricatives. [Internet] [Masters thesis]. University of Minnesota; 2010. [cited 2020 Jul 08]. Available from: http://purl.umn.edu/93169.

Council of Science Editors:

Julien HM. Modifying speech to children: an acoustic study of adults’ fricatives. [Masters Thesis]. University of Minnesota; 2010. Available from: http://purl.umn.edu/93169


University of Manchester

10. Deena, Salil Prashant. Visual speech synthesis by learning joint probabilistic models of audio and video.

Degree: PhD, 2012, University of Manchester

Visual speech synthesis deals with synthesising facial animation from an audio representation of speech. In the last decade or so, data-driven approaches have gained prominence… (more)

Subjects/Keywords: 006.54; Visual Speech Synthesis; Speech-Driven Facial Animation; Artificial Talking Head; Gaussian Processes; Machine Learning; Speech Synthesis; Computer Vision

APA (6th Edition):

Deena, S. P. (2012). Visual speech synthesis by learning joint probabilistic models of audio and video. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442

Chicago Manual of Style (16th Edition):

Deena, Salil Prashant. “Visual speech synthesis by learning joint probabilistic models of audio and video.” 2012. Doctoral Dissertation, University of Manchester. Accessed July 08, 2020. https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442.

MLA Handbook (7th Edition):

Deena, Salil Prashant. “Visual speech synthesis by learning joint probabilistic models of audio and video.” 2012. Web. 08 Jul 2020.

Vancouver:

Deena SP. Visual speech synthesis by learning joint probabilistic models of audio and video. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2020 Jul 08]. Available from: https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442.

Council of Science Editors:

Deena SP. Visual speech synthesis by learning joint probabilistic models of audio and video. [Doctoral Dissertation]. University of Manchester; 2012. Available from: https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553442


University of Cincinnati

11. HARNISH, STACY M. The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease.

Degree: PhD, Allied Health Sciences : Communication Science and Disorders, 2008, University of Cincinnati

 Confrontation naming abilities decline with normal aging and in Alzheimer's disease (AD). The focus of this research was to investigate at which stage the breakdown… (more)

Subjects/Keywords: Speech Therapy; Alzheimer's Disease; Aging; Confrontation Naming; Visual Perception; Semantic Memory

APA (6th Edition):

HARNISH, S. M. (2008). The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease. (Doctoral Dissertation). University of Cincinnati. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=ucin1217438958

Chicago Manual of Style (16th Edition):

HARNISH, STACY M. “The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease.” 2008. Doctoral Dissertation, University of Cincinnati. Accessed July 08, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1217438958.

MLA Handbook (7th Edition):

HARNISH, STACY M. “The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease.” 2008. Web. 08 Jul 2020.

Vancouver:

HARNISH SM. The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease. [Internet] [Doctoral dissertation]. University of Cincinnati; 2008. [cited 2020 Jul 08]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1217438958.

Council of Science Editors:

HARNISH SM. The Relationship between Visual Perception and Confrontation Naming Abilities of Elderly and Individuals with Alzheimer's Disease. [Doctoral Dissertation]. University of Cincinnati; 2008. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1217438958


Kansas State University

12. Pennington, Natalie R.D. No consequences: an analysis of images and impression management on Facebook.

Degree: MA, Department of Communication Studies, Theatre, and Dance, 2010, Kansas State University

 Goffman (1959) suggests that it is through communication that we are able to form impressions of self and express our identity to society. With the… (more)

Subjects/Keywords: Facebook; Visual communication; Computer-mediated Communication; Impression management; Speech Communication (0459)

APA (6th Edition):

Pennington, N. R. D. (2010). No consequences: an analysis of images and impression management on Facebook. (Masters Thesis). Kansas State University. Retrieved from http://hdl.handle.net/2097/4118

Chicago Manual of Style (16th Edition):

Pennington, Natalie R D. “No consequences: an analysis of images and impression management on Facebook.” 2010. Masters Thesis, Kansas State University. Accessed July 08, 2020. http://hdl.handle.net/2097/4118.

MLA Handbook (7th Edition):

Pennington, Natalie R D. “No consequences: an analysis of images and impression management on Facebook.” 2010. Web. 08 Jul 2020.

Vancouver:

Pennington NRD. No consequences: an analysis of images and impression management on Facebook. [Internet] [Masters thesis]. Kansas State University; 2010. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2097/4118.

Council of Science Editors:

Pennington NRD. No consequences: an analysis of images and impression management on Facebook. [Masters Thesis]. Kansas State University; 2010. Available from: http://hdl.handle.net/2097/4118


Boston University

13. Cheng, Cheng. Can visual feedback improve English speakers' Mandarin tone production?

Degree: MS, Sargent College of Health and Rehabilitation Sciences, 2017, Boston University

 Non-native tones are considered challenging for adult second language speakers to perceive and produce. The current study examined the effect of a laboratory-based intensive training… (more)

Subjects/Keywords: Speech therapy; Mandarin lexical tones; Production training; Visual feedback

APA (6th Edition):

Cheng, C. (2017). Can visual feedback improve English speakers' Mandarin tone production? (Masters Thesis). Boston University. Retrieved from http://hdl.handle.net/2144/27056

Chicago Manual of Style (16th Edition):

Cheng, Cheng. “Can visual feedback improve English speakers' Mandarin tone production?” 2017. Masters Thesis, Boston University. Accessed July 08, 2020. http://hdl.handle.net/2144/27056.

MLA Handbook (7th Edition):

Cheng, Cheng. “Can visual feedback improve English speakers' Mandarin tone production?” 2017. Web. 08 Jul 2020.

Vancouver:

Cheng C. Can visual feedback improve English speakers' Mandarin tone production? [Internet] [Masters thesis]. Boston University; 2017. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2144/27056.

Council of Science Editors:

Cheng C. Can visual feedback improve English speakers' Mandarin tone production? [Masters Thesis]. Boston University; 2017. Available from: http://hdl.handle.net/2144/27056


Boston University

14. Campbell, Rachael Elizabeth. A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia.

Degree: MS, Sargent College of Health and Rehabilitation Sciences, 2018, Boston University

 Many persons with aphasia (PWA), who have trouble communicating after a stroke, have difficulty naming objects, frequently producing speech errors. Picture (confrontation) naming tasks are… (more)

Subjects/Keywords: Speech therapy; Anomia; Lexical access; Visual world paradigm

APA (6th Edition):

Campbell, R. E. (2018). A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia. (Masters Thesis). Boston University. Retrieved from http://hdl.handle.net/2144/31278

Chicago Manual of Style (16th Edition):

Campbell, Rachael Elizabeth. “A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia.” 2018. Masters Thesis, Boston University. Accessed July 08, 2020. http://hdl.handle.net/2144/31278.

MLA Handbook (7th Edition):

Campbell, Rachael Elizabeth. “A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia.” 2018. Web. 08 Jul 2020.

Vancouver:

Campbell RE. A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia. [Internet] [Masters thesis]. Boston University; 2018. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2144/31278.

Council of Science Editors:

Campbell RE. A novel eye tracking paradigm for detecting semantic and phonological activation in aphasia. [Masters Thesis]. Boston University; 2018. Available from: http://hdl.handle.net/2144/31278


University of Colorado

15. Kim, Daniel Hyunjae. The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media.

Degree: PhD, Communication, 2016, University of Colorado

  Across multiple contexts within photography’s relatively brief history as a medium of ‘light inscription,’ a ubiquitous relationship pairing photographer with subject has dominated a… (more)

Subjects/Keywords: Aesthetics; Image; Media; Photography; Rhetoric; Visual; Communication; Speech and Rhetorical Studies

APA (6th Edition):

Kim, D. H. (2016). The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/comm_gradetds/64

Chicago Manual of Style (16th Edition):

Kim, Daniel Hyunjae. “The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media.” 2016. Doctoral Dissertation, University of Colorado. Accessed July 08, 2020. https://scholar.colorado.edu/comm_gradetds/64.

MLA Handbook (7th Edition):

Kim, Daniel Hyunjae. “The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media.” 2016. Web. 08 Jul 2020.

Vancouver:

Kim DH. The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media. [Internet] [Doctoral dissertation]. University of Colorado; 2016. [cited 2020 Jul 08]. Available from: https://scholar.colorado.edu/comm_gradetds/64.

Council of Science Editors:

Kim DH. The Rhetoric of Visual Aesthetics: Image, Convention, and Form in New Media. [Doctoral Dissertation]. University of Colorado; 2016. Available from: https://scholar.colorado.edu/comm_gradetds/64


University of Colorado

16. Foster, Maha Saliba. Visual Speech Perception of Arabic Emphatics and Gutturals.

Degree: PhD, Linguistics, 2016, University of Colorado

  This investigation explores the potential effect on perception of speech visual cues associated with Arabic gutturals (AGs) and Arabic emphatics (AEs); AEs are pharyngealized… (more)

Subjects/Keywords: Arabic Emphatics and Gutturals; Visual Speech Perception; Cognitive Psychology; Linguistics

APA (6th Edition):

Foster, M. S. (2016). Visual Speech Perception of Arabic Emphatics and Gutturals. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/ling_gradetds/58

Chicago Manual of Style (16th Edition):

Foster, Maha Saliba. “Visual Speech Perception of Arabic Emphatics and Gutturals.” 2016. Doctoral Dissertation, University of Colorado. Accessed July 08, 2020. https://scholar.colorado.edu/ling_gradetds/58.

MLA Handbook (7th Edition):

Foster, Maha Saliba. “Visual Speech Perception of Arabic Emphatics and Gutturals.” 2016. Web. 08 Jul 2020.

Vancouver:

Foster MS. Visual Speech Perception of Arabic Emphatics and Gutturals. [Internet] [Doctoral dissertation]. University of Colorado; 2016. [cited 2020 Jul 08]. Available from: https://scholar.colorado.edu/ling_gradetds/58.

Council of Science Editors:

Foster MS. Visual Speech Perception of Arabic Emphatics and Gutturals. [Doctoral Dissertation]. University of Colorado; 2016. Available from: https://scholar.colorado.edu/ling_gradetds/58


University of Colorado

17. Rickard, Carolyn Eryl. Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It.

Degree: PhD, Speech, Language & Hearing Sciences, 2012, University of Colorado

  Research on joint attention and language learning has focused primarily on cues requiring visual access. However, this narrow focus cannot account for the emergence… (more)

Subjects/Keywords: joint attention; language acquisition; visual impairment; Linguistics; Speech Pathology and Audiology

APA (6th Edition):

Rickard, C. E. (2012). Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/slhs_gradetds/44

Chicago Manual of Style (16th Edition):

Rickard, Carolyn Eryl. “Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It.” 2012. Doctoral Dissertation, University of Colorado. Accessed July 08, 2020. https://scholar.colorado.edu/slhs_gradetds/44.

MLA Handbook (7th Edition):

Rickard, Carolyn Eryl. “Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It.” 2012. Web. 08 Jul 2020.

Vancouver:

Rickard CE. Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It. [Internet] [Doctoral dissertation]. University of Colorado; 2012. [cited 2020 Jul 08]. Available from: https://scholar.colorado.edu/slhs_gradetds/44.

Council of Science Editors:

Rickard CE. Multimodal Cues in the Socialization of Joint Attention in Young Children with Varying Degrees of Vision: Getting the Point Even When You Can't See It. [Doctoral Dissertation]. University of Colorado; 2012. Available from: https://scholar.colorado.edu/slhs_gradetds/44


University of Waterloo

18. Makkook, Mustapha. A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition.

Degree: 2007, University of Waterloo

 A key requirement for developing any innovative system in a computing environment is to integrate a sufficiently friendly interface with the average end user. Accurate… (more)

Subjects/Keywords: Multimodal fusion; Visual speech recognition

APA (6th Edition):

Makkook, M. (2007). A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/3065

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Makkook, Mustapha. “A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition.” 2007. Thesis, University of Waterloo. Accessed July 08, 2020. http://hdl.handle.net/10012/3065.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Makkook, Mustapha. “A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition.” 2007. Web. 08 Jul 2020.

Vancouver:

Makkook M. A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition. [Internet] [Thesis]. University of Waterloo; 2007. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/10012/3065.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Makkook M. A Multimodal Sensor Fusion Architecture for Audio-Visual Speech Recognition. [Thesis]. University of Waterloo; 2007. Available from: http://hdl.handle.net/10012/3065

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Texas – Austin

19. Gevarter, Cindy B. A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD.

Degree: PhD, Special Education, 2015, University of Texas – Austin

The variety of augmentative and alternative communication (AAC) applications available on devices such as the Apple iPad® necessitates research comparing different application components. AAC applications… (more)

Subjects/Keywords: Autism; Speech generating devices; Augmentative alternative communication; Visual scene displays

APA (6th Edition):

Gevarter, C. B. (2015). A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/31584

Chicago Manual of Style (16th Edition):

Gevarter, Cindy B. “A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed July 08, 2020. http://hdl.handle.net/2152/31584.

MLA Handbook (7th Edition):

Gevarter, Cindy B. “A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD.” 2015. Web. 08 Jul 2020.

Vancouver:

Gevarter CB. A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2152/31584.

Council of Science Editors:

Gevarter CB. A comparison of schematic and taxonomic iPad® AAC systems for teaching multistep navigational AAC requests to children with ASD. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/31584


Loughborough University

20. Ibrahim, Zamri. A novel lip geometry approach for audio-visual speech recognition.

Degree: PhD, 2014, Loughborough University

 By identifying lip movements and characterizing their associations with speech sounds, the performance of speech recognition systems can be improved, particularly when operating in noisy… (more)

Subjects/Keywords: 621.39; Lip reading; Lip geometry; Audio-visual speech recognition

APA (6th Edition):

Ibrahim, Z. (2014). A novel lip geometry approach for audio-visual speech recognition. (Doctoral Dissertation). Loughborough University. Retrieved from http://hdl.handle.net/2134/16526

Chicago Manual of Style (16th Edition):

Ibrahim, Zamri. “A novel lip geometry approach for audio-visual speech recognition.” 2014. Doctoral Dissertation, Loughborough University. Accessed July 08, 2020. http://hdl.handle.net/2134/16526.

MLA Handbook (7th Edition):

Ibrahim, Zamri. “A novel lip geometry approach for audio-visual speech recognition.” 2014. Web. 08 Jul 2020.

Vancouver:

Ibrahim Z. A novel lip geometry approach for audio-visual speech recognition. [Internet] [Doctoral dissertation]. Loughborough University; 2014. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2134/16526.

Council of Science Editors:

Ibrahim Z. A novel lip geometry approach for audio-visual speech recognition. [Doctoral Dissertation]. Loughborough University; 2014. Available from: http://hdl.handle.net/2134/16526


University of Tennessee – Knoxville

21. Cannistraci, Ryan Andrew. Do you see what I mean? The role of visual speech information in lexical representations.

Degree: MA, Psychology, 2017, University of Tennessee – Knoxville

Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. The majority of previous… (more)

Subjects/Keywords: Language learning; visual speech information; lexical representations; audiovisual communication

APA (6th Edition):

Cannistraci, R. A. (2017). Do you see what I mean? The role of visual speech information in lexical representations. (Thesis). University of Tennessee – Knoxville. Retrieved from https://trace.tennessee.edu/utk_gradthes/4992

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cannistraci, Ryan Andrew. “Do you see what I mean? The role of visual speech information in lexical representations.” 2017. Thesis, University of Tennessee – Knoxville. Accessed July 08, 2020. https://trace.tennessee.edu/utk_gradthes/4992.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cannistraci, Ryan Andrew. “Do you see what I mean? The role of visual speech information in lexical representations.” 2017. Web. 08 Jul 2020.

Vancouver:

Cannistraci RA. Do you see what I mean? The role of visual speech information in lexical representations. [Internet] [Thesis]. University of Tennessee – Knoxville; 2017. [cited 2020 Jul 08]. Available from: https://trace.tennessee.edu/utk_gradthes/4992.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cannistraci RA. Do you see what I mean? The role of visual speech information in lexical representations. [Thesis]. University of Tennessee – Knoxville; 2017. Available from: https://trace.tennessee.edu/utk_gradthes/4992

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of British Columbia

22. Weikum, Whitney Marie. Visual language discrimination.

Degree: 2008, University of British Columbia

 Recognizing and learning one’s native language requires knowledge of the phonetic and rhythmical characteristics of the language. Few studies address the rich source of language… (more)

Subjects/Keywords: Infants; Language development; Visual speech

APA (6th Edition):

Weikum, W. M. (2008). Visual language discrimination. (Thesis). University of British Columbia. Retrieved from http://hdl.handle.net/2429/481

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Weikum, Whitney Marie. “Visual language discrimination.” 2008. Thesis, University of British Columbia. Accessed July 08, 2020. http://hdl.handle.net/2429/481.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Weikum, Whitney Marie. “Visual language discrimination.” 2008. Web. 08 Jul 2020.

Vancouver:

Weikum WM. Visual language discrimination. [Internet] [Thesis]. University of British Columbia; 2008. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/2429/481.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Weikum WM. Visual language discrimination. [Thesis]. University of British Columbia; 2008. Available from: http://hdl.handle.net/2429/481

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Western Sydney

23. Fitzpatrick, Michael F. Auditory and auditory-visual speech perception and production in noise in younger and older adults.

Degree: 2014, University of Western Sydney

 The overall aim of the thesis was to investigate spoken communication in adverse conditions using methods that take into account that spoken communication is a… (more)

Subjects/Keywords: speech perception; speech; auditory perception; visual perception; lipreading; noise; Thesis (Ph.D.) – University of Western Sydney, 2014

APA (6th Edition):

Fitzpatrick, M. F. (2014). Auditory and auditory-visual speech perception and production in noise in younger and older adults. (Thesis). University of Western Sydney. Retrieved from http://hdl.handle.net/1959.7/uws:31936

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Fitzpatrick, Michael F. “Auditory and auditory-visual speech perception and production in noise in younger and older adults.” 2014. Thesis, University of Western Sydney. Accessed July 08, 2020. http://hdl.handle.net/1959.7/uws:31936.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Fitzpatrick, Michael F. “Auditory and auditory-visual speech perception and production in noise in younger and older adults.” 2014. Web. 08 Jul 2020.

Vancouver:

Fitzpatrick MF. Auditory and auditory-visual speech perception and production in noise in younger and older adults. [Internet] [Thesis]. University of Western Sydney; 2014. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/1959.7/uws:31936.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Fitzpatrick MF. Auditory and auditory-visual speech perception and production in noise in younger and older adults. [Thesis]. University of Western Sydney; 2014. Available from: http://hdl.handle.net/1959.7/uws:31936

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Miami University

24. Schnobrich, Kathleen Marie. The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners.

Degree: MA, Speech Pathology and Audiology, 2009, Miami University

 The current study seeks to identify the relationship between the linguistic skills of auditory and visual perception and literacy readiness in kindergarteners. The purpose of… (more)

Subjects/Keywords: Education; Literacy; Reading Instruction; Special Education; Speech Therapy; literacy readiness; kindergarten; auditory perception; visual perception; visual memory; DIBELS

APA (6th Edition):

Schnobrich, K. M. (2009). The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners. (Masters Thesis). Miami University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=miami1241010453

Chicago Manual of Style (16th Edition):

Schnobrich, Kathleen Marie. “The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners.” 2009. Masters Thesis, Miami University. Accessed July 08, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1241010453.

MLA Handbook (7th Edition):

Schnobrich, Kathleen Marie. “The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners.” 2009. Web. 08 Jul 2020.

Vancouver:

Schnobrich KM. The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners. [Internet] [Masters thesis]. Miami University; 2009. [cited 2020 Jul 08]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=miami1241010453.

Council of Science Editors:

Schnobrich KM. The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners. [Masters Thesis]. Miami University; 2009. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=miami1241010453


Universidade do Estado do Rio de Janeiro

25. Carla Cunha Rodrigues. Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais.

Degree: Master, 2014, Universidade do Estado do Rio de Janeiro

The aim of this research was to problematize the meanings of the notion of difference asserted in discourses of contemporary visual arts education. To this end, the study investigated… (more)

Subjects/Keywords: Difference; Visual arts; Discourse; Meaning; Education; Arts – Study and teaching; Speech

APA (6th Edition):

Rodrigues, C. C. (2014). Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais. (Masters Thesis). Universidade do Estado do Rio de Janeiro. Retrieved from http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7768 ;

Chicago Manual of Style (16th Edition):

Rodrigues, Carla Cunha. “Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais.” 2014. Masters Thesis, Universidade do Estado do Rio de Janeiro. Accessed July 08, 2020. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7768 ;.

MLA Handbook (7th Edition):

Rodrigues, Carla Cunha. “Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais.” 2014. Web. 08 Jul 2020.

Vancouver:

Rodrigues CC. Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais. [Internet] [Masters thesis]. Universidade do Estado do Rio de Janeiro; 2014. [cited 2020 Jul 08]. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7768 ;.

Council of Science Editors:

Rodrigues CC. Significações da diferença nos discursos do ensino contemporâneo de Artes Visuais. [Masters Thesis]. Universidade do Estado do Rio de Janeiro; 2014. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7768 ;


University of Rochester

26. Brown, Meredith. Interpreting prosodic variation in context.

Degree: PhD, 2014, University of Rochester

 This dissertation reports novel evidence that listeners evaluate the prosodic characteristics of speech with respect to surrounding linguistic context, and that these prosodic expectations influence… (more)

Subjects/Keywords: Expectations; Language processing; Prosody; Speech perception; Spoken word recognition; Visual world Paradigm; Generative models

APA (6th Edition):

Brown, M. (2014). Interpreting prosodic variation in context. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/28876

Chicago Manual of Style (16th Edition):

Brown, Meredith. “Interpreting prosodic variation in context.” 2014. Doctoral Dissertation, University of Rochester. Accessed July 08, 2020. http://hdl.handle.net/1802/28876.

MLA Handbook (7th Edition):

Brown, Meredith. “Interpreting prosodic variation in context.” 2014. Web. 08 Jul 2020.

Vancouver:

Brown M. Interpreting prosodic variation in context. [Internet] [Doctoral dissertation]. University of Rochester; 2014. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/1802/28876.

Council of Science Editors:

Brown M. Interpreting prosodic variation in context. [Doctoral Dissertation]. University of Rochester; 2014. Available from: http://hdl.handle.net/1802/28876


University of Kansas

27. Walker, Corinne Nicole. Intensive Eye Gaze Training for AAC Access: A Case Study.

Degree: MA, Hearing and Speech, 2016, University of Kansas

 This was a case study investigating intensive eye gaze intervention for accessing an augmentative and alternative communication device. The participant was an individual with cortical… (more)

Subjects/Keywords: Speech therapy; Augmentative and Alternative Communication; Cortical Visual Impairment; Eye gaze access; Multiple disabilities

APA (6th Edition):

Walker, C. N. (2016). Intensive Eye Gaze Training for AAC Access: A Case Study. (Masters Thesis). University of Kansas. Retrieved from http://hdl.handle.net/1808/21909

Chicago Manual of Style (16th Edition):

Walker, Corinne Nicole. “Intensive Eye Gaze Training for AAC Access: A Case Study.” 2016. Masters Thesis, University of Kansas. Accessed July 08, 2020. http://hdl.handle.net/1808/21909.

MLA Handbook (7th Edition):

Walker, Corinne Nicole. “Intensive Eye Gaze Training for AAC Access: A Case Study.” 2016. Web. 08 Jul 2020.

Vancouver:

Walker CN. Intensive Eye Gaze Training for AAC Access: A Case Study. [Internet] [Masters thesis]. University of Kansas; 2016. [cited 2020 Jul 08]. Available from: http://hdl.handle.net/1808/21909.

Council of Science Editors:

Walker CN. Intensive Eye Gaze Training for AAC Access: A Case Study. [Masters Thesis]. University of Kansas; 2016. Available from: http://hdl.handle.net/1808/21909


University of Colorado

28. Morton, Margaret Anne. Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome.

Degree: MA, Speech, Language & Hearing Sciences, 2014, University of Colorado

A descriptive case study design was used with a 5-year-old male subject diagnosed with Fetal Alcohol Syndrome (FAS). The purpose of the study… (more)

Subjects/Keywords: Event retell; verbal interactions; visual aid; distractive behaviors; Speech and Hearing Science

APA (6th Edition):

Morton, M. A. (2014). Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/slhs_gradetds/23

Chicago Manual of Style (16th Edition):

Morton, Margaret Anne. “Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome.” 2014. Masters Thesis, University of Colorado. Accessed July 08, 2020. https://scholar.colorado.edu/slhs_gradetds/23.

MLA Handbook (7th Edition):

Morton, Margaret Anne. “Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome.” 2014. Web. 08 Jul 2020.

Vancouver:

Morton MA. Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome. [Internet] [Masters thesis]. University of Colorado; 2014. [cited 2020 Jul 08]. Available from: https://scholar.colorado.edu/slhs_gradetds/23.

Council of Science Editors:

Morton MA. Use of a Visual Cueing System to Retell Events: Child with Fetal Alcohol Syndrome. [Masters Thesis]. University of Colorado; 2014. Available from: https://scholar.colorado.edu/slhs_gradetds/23

29. Cooper, David G. Computational Affect Detection for Education and Health.

Degree: PhD, Computer Science, 2011, University of Massachusetts

Emotional intelligence has a prominent role in education, health care, and day-to-day interaction. With the increasing use of computer technology, computers are… (more)

Subjects/Keywords: affect detection; intelligent tutoring systems; linear classifiers; multimodal sensors; speech prosody; visual tracking; Computer Sciences

APA (6th Edition):

Cooper, D. G. (2011). Computational Affect Detection for Education and Health. (Doctoral Dissertation). University of Massachusetts. Retrieved from https://scholarworks.umass.edu/open_access_dissertations/437

Chicago Manual of Style (16th Edition):

Cooper, David G. “Computational Affect Detection for Education and Health.” 2011. Doctoral Dissertation, University of Massachusetts. Accessed July 08, 2020. https://scholarworks.umass.edu/open_access_dissertations/437.

MLA Handbook (7th Edition):

Cooper, David G. “Computational Affect Detection for Education and Health.” 2011. Web. 08 Jul 2020.

Vancouver:

Cooper DG. Computational Affect Detection for Education and Health. [Internet] [Doctoral dissertation]. University of Massachusetts; 2011. [cited 2020 Jul 08]. Available from: https://scholarworks.umass.edu/open_access_dissertations/437.

Council of Science Editors:

Cooper DG. Computational Affect Detection for Education and Health. [Doctoral Dissertation]. University of Massachusetts; 2011. Available from: https://scholarworks.umass.edu/open_access_dissertations/437


University of Western Sydney

30. Paris, Tim. Audiovisual prediction using brain and behaviour measures.

Degree: 2014, University of Western Sydney

 The brain’s ability to generate predictions provides a foundation for efficient processing. When one event reliably follows another, the presence of the first event provides… (more)

Subjects/Keywords: cognition; speech perception; auditory perception; visual cognition; Thesis (Ph.D.) – University of Western Sydney, 2014

APA (6th Edition):

Paris, T. (2014). Audiovisual prediction using brain and behaviour measures. (Thesis). University of Western Sydney. Retrieved from http://handle.uws.edu.au:8081/1959.7/uws:32646

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Paris, Tim. “Audiovisual prediction using brain and behaviour measures.” 2014. Thesis, University of Western Sydney. Accessed July 08, 2020. http://handle.uws.edu.au:8081/1959.7/uws:32646.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Paris, Tim. “Audiovisual prediction using brain and behaviour measures.” 2014. Web. 08 Jul 2020.

Vancouver:

Paris T. Audiovisual prediction using brain and behaviour measures. [Internet] [Thesis]. University of Western Sydney; 2014. [cited 2020 Jul 08]. Available from: http://handle.uws.edu.au:8081/1959.7/uws:32646.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Paris T. Audiovisual prediction using brain and behaviour measures. [Thesis]. University of Western Sydney; 2014. Available from: http://handle.uws.edu.au:8081/1959.7/uws:32646

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
