You searched for subject:(Auditory scene analysis). Showing records 1 – 30 of 43 total matches.


Queens University

1. Raynor, Graham Komei. Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization .

Degree: Psychology, 2011, Queens University

 Effective everyday hearing requires the auditory system to organize auditory input into perceptual streams corresponding to objects of interest. Changes in this process may be… (more)

Subjects/Keywords: Auditory Scene Analysis; Aging; Hearing

APA (6th Edition):

Raynor, G. K. (2011). Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/6741

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Raynor, Graham Komei. “Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization .” 2011. Thesis, Queens University. Accessed February 24, 2020. http://hdl.handle.net/1974/6741.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Raynor, Graham Komei. “Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization .” 2011. Web. 24 Feb 2020.

Vancouver:

Raynor GK. Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization . [Internet] [Thesis]. Queens University; 2011. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1974/6741.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Raynor GK. Effects of Aging, Continuity and Frequency Difference on the Time Course of Auditory Perceptual Organization . [Thesis]. Queens University; 2011. Available from: http://hdl.handle.net/1974/6741

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Queen Mary, University of London

2. Sauvé, Sarah A. Prediction in polyphony : modelling musical auditory scene analysis.

Degree: PhD, 2018, Queen Mary, University of London

 How do we know that a melody is a melody? In other words, how does the human brain extract melody from a polyphonic musical context?… (more)

Subjects/Keywords: Electronic Engineering & Computer Science; auditory scene analysis

APA (6th Edition):

Sauvé, S. A. (2018). Prediction in polyphony : modelling musical auditory scene analysis. (Doctoral Dissertation). Queen Mary, University of London. Retrieved from http://qmro.qmul.ac.uk/xmlui/handle/123456789/46805 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.766239

Chicago Manual of Style (16th Edition):

Sauvé, Sarah A. “Prediction in polyphony : modelling musical auditory scene analysis.” 2018. Doctoral Dissertation, Queen Mary, University of London. Accessed February 24, 2020. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46805 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.766239.

MLA Handbook (7th Edition):

Sauvé, Sarah A. “Prediction in polyphony : modelling musical auditory scene analysis.” 2018. Web. 24 Feb 2020.

Vancouver:

Sauvé SA. Prediction in polyphony : modelling musical auditory scene analysis. [Internet] [Doctoral dissertation]. Queen Mary, University of London; 2018. [cited 2020 Feb 24]. Available from: http://qmro.qmul.ac.uk/xmlui/handle/123456789/46805 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.766239.

Council of Science Editors:

Sauvé SA. Prediction in polyphony : modelling musical auditory scene analysis. [Doctoral Dissertation]. Queen Mary, University of London; 2018. Available from: http://qmro.qmul.ac.uk/xmlui/handle/123456789/46805 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.766239


University of California – Irvine

3. Yao, Justin. Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway.

Degree: Biological Sciences, 2015, University of California – Irvine

 In a complex auditory scene, listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This process is referred to… (more)

Subjects/Keywords: Neurosciences; Auditory; Auditory Scene Analysis; Cortex; Midbrain; Spatial Hearing; Thalamus

APA (6th Edition):

Yao, J. (2015). Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway. (Thesis). University of California – Irvine. Retrieved from http://www.escholarship.org/uc/item/2fg8f6dt

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yao, Justin. “Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway.” 2015. Thesis, University of California – Irvine. Accessed February 24, 2020. http://www.escholarship.org/uc/item/2fg8f6dt.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yao, Justin. “Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway.” 2015. Web. 24 Feb 2020.

Vancouver:

Yao J. Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway. [Internet] [Thesis]. University of California – Irvine; 2015. [cited 2020 Feb 24]. Available from: http://www.escholarship.org/uc/item/2fg8f6dt.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yao J. Neural Mechanisms of Spatial Stream Segregation Along the Ascending Auditory Pathway. [Thesis]. University of California – Irvine; 2015. Available from: http://www.escholarship.org/uc/item/2fg8f6dt

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Toronto

4. Gillingham, Susan. Auditory Search: The Deployment of Attention within a Complex Auditory Scene.

Degree: 2012, University of Toronto

Current theories of auditory attention are largely based upon studies examining either the presentation of a single auditory stimulus or requiring the identification and labeling… (more)

Subjects/Keywords: Auditory Attention; Attention Deployment; Auditory Search; Scene Analysis; MEG; 0623

APA (6th Edition):

Gillingham, S. (2012). Auditory Search: The Deployment of Attention within a Complex Auditory Scene. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/33220

Chicago Manual of Style (16th Edition):

Gillingham, Susan. “Auditory Search: The Deployment of Attention within a Complex Auditory Scene.” 2012. Masters Thesis, University of Toronto. Accessed February 24, 2020. http://hdl.handle.net/1807/33220.

MLA Handbook (7th Edition):

Gillingham, Susan. “Auditory Search: The Deployment of Attention within a Complex Auditory Scene.” 2012. Web. 24 Feb 2020.

Vancouver:

Gillingham S. Auditory Search: The Deployment of Attention within a Complex Auditory Scene. [Internet] [Masters thesis]. University of Toronto; 2012. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1807/33220.

Council of Science Editors:

Gillingham S. Auditory Search: The Deployment of Attention within a Complex Auditory Scene. [Masters Thesis]. University of Toronto; 2012. Available from: http://hdl.handle.net/1807/33220


University of California – Berkeley

5. Moore, Richard Channing. Invariant Recognition of Vocal Features.

Degree: Biophysics, 2011, University of California – Berkeley

 Animals and humans are able to communicate vocally in very challenging acoustic conditions. Background noise, especially from other individuals of the same species, may mask… (more)

Subjects/Keywords: Neurosciences; Auditory scene analysis; Invariance; STRF; Zebra Finch

APA (6th Edition):

Moore, R. C. (2011). Invariant Recognition of Vocal Features. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/4zp9m856

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Moore, Richard Channing. “Invariant Recognition of Vocal Features.” 2011. Thesis, University of California – Berkeley. Accessed February 24, 2020. http://www.escholarship.org/uc/item/4zp9m856.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Moore, Richard Channing. “Invariant Recognition of Vocal Features.” 2011. Web. 24 Feb 2020.

Vancouver:

Moore RC. Invariant Recognition of Vocal Features. [Internet] [Thesis]. University of California – Berkeley; 2011. [cited 2020 Feb 24]. Available from: http://www.escholarship.org/uc/item/4zp9m856.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Moore RC. Invariant Recognition of Vocal Features. [Thesis]. University of California – Berkeley; 2011. Available from: http://www.escholarship.org/uc/item/4zp9m856

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Washington

6. Oster, Monika-Maria. Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds.

Degree: PhD, 2018, University of Washington

 Infants have greater difficulties processing speech in the presence of competing sounds than adults. The mature auditory system solves this task in part by separating… (more)

Subjects/Keywords: Auditory Perception in Infants; Auditory Scene Analysis; Double-Vowels; Hearing; Infant Auditory Development; Sound Source Segregation; Audiology; Speech

APA (6th Edition):

Oster, M. (2018). Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds. (Doctoral Dissertation). University of Washington. Retrieved from http://hdl.handle.net/1773/43151

Chicago Manual of Style (16th Edition):

Oster, Monika-Maria. “Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds.” 2018. Doctoral Dissertation, University of Washington. Accessed February 24, 2020. http://hdl.handle.net/1773/43151.

MLA Handbook (7th Edition):

Oster, Monika-Maria. “Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds.” 2018. Web. 24 Feb 2020.

Vancouver:

Oster M. Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds. [Internet] [Doctoral dissertation]. University of Washington; 2018. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1773/43151.

Council of Science Editors:

Oster M. Infants’ Use of Temporal Cues in the Segregation of Concurrent Sounds. [Doctoral Dissertation]. University of Washington; 2018. Available from: http://hdl.handle.net/1773/43151


The Ohio State University

7. Shao, Yang. Sequential organization in computational auditory scene analysis.

Degree: PhD, Computer and Information Science, 2007, The Ohio State University

  A human listener's ability to organize the time-frequency (T-F) energy of the same sound source into a single stream is termed auditory scene analysis(more)

Subjects/Keywords: Computer Science; Sequential Organization; Sequential Grouping; Auditory Scene Analysis; Computational Auditory Scene Analysis; Speech Organization; Robust Speaker Recognition; Auditory Feature; Speaker Quantization

APA (6th Edition):

Shao, Y. (2007). Sequential organization in computational auditory scene analysis. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412

Chicago Manual of Style (16th Edition):

Shao, Yang. “Sequential organization in computational auditory scene analysis.” 2007. Doctoral Dissertation, The Ohio State University. Accessed February 24, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412.

MLA Handbook (7th Edition):

Shao, Yang. “Sequential organization in computational auditory scene analysis.” 2007. Web. 24 Feb 2020.

Vancouver:

Shao Y. Sequential organization in computational auditory scene analysis. [Internet] [Doctoral dissertation]. The Ohio State University; 2007. [cited 2020 Feb 24]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412.

Council of Science Editors:

Shao Y. Sequential organization in computational auditory scene analysis. [Doctoral Dissertation]. The Ohio State University; 2007. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412


University of Kentucky

8. Unnikrishnan, Harikrishnan. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.

Degree: 2010, University of Kentucky

Auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener… (more)

Subjects/Keywords: Audio Scene Segmentation; Sound Source Tracking; Computational Auditory Scene Analysis; Microphone Arrays; Speaker Recognition; Electrical and Computer Engineering

APA (6th Edition):

Unnikrishnan, H. (2010). AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. (Masters Thesis). University of Kentucky. Retrieved from http://uknowledge.uky.edu/gradschool_theses/622

Chicago Manual of Style (16th Edition):

Unnikrishnan, Harikrishnan. “AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.” 2010. Masters Thesis, University of Kentucky. Accessed February 24, 2020. http://uknowledge.uky.edu/gradschool_theses/622.

MLA Handbook (7th Edition):

Unnikrishnan, Harikrishnan. “AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.” 2010. Web. 24 Feb 2020.

Vancouver:

Unnikrishnan H. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. [Internet] [Masters thesis]. University of Kentucky; 2010. [cited 2020 Feb 24]. Available from: http://uknowledge.uky.edu/gradschool_theses/622.

Council of Science Editors:

Unnikrishnan H. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. [Masters Thesis]. University of Kentucky; 2010. Available from: http://uknowledge.uky.edu/gradschool_theses/622


Texas Christian University

9. Hutchison, Joanna Lynn. Boundary extension in the auditory domain.

Degree: PhD, 2007, Texas Christian University

Subjects/Keywords: Memory.; Auditory perception.; Auditory scene analysis.; Musical perception.

APA (6th Edition):

Hutchison, J. L. (2007). Boundary extension in the auditory domain. (Doctoral Dissertation). Texas Christian University. Retrieved from https://repository.tcu.edu:443/handle/116099117/3994

Chicago Manual of Style (16th Edition):

Hutchison, Joanna Lynn. “Boundary extension in the auditory domain.” 2007. Doctoral Dissertation, Texas Christian University. Accessed February 24, 2020. https://repository.tcu.edu:443/handle/116099117/3994.

MLA Handbook (7th Edition):

Hutchison, Joanna Lynn. “Boundary extension in the auditory domain.” 2007. Web. 24 Feb 2020.

Vancouver:

Hutchison JL. Boundary extension in the auditory domain. [Internet] [Doctoral dissertation]. Texas Christian University; 2007. [cited 2020 Feb 24]. Available from: https://repository.tcu.edu:443/handle/116099117/3994.

Council of Science Editors:

Hutchison JL. Boundary extension in the auditory domain. [Doctoral Dissertation]. Texas Christian University; 2007. Available from: https://repository.tcu.edu:443/handle/116099117/3994


Arizona State University

10. Patten, Kristopher Jakob. Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis.

Degree: Psychology, 2017, Arizona State University

Auditory scene analysis (ASA) is the process through which listeners parse and organize their acoustic environment into relevant auditory objects. ASA functions by exploiting natural… (more)

Subjects/Keywords: Cognitive psychology; Audiology; Psychology; audition; auditory scene analysis; multidimensional scaling; perception; spectral envelope; timbre

APA (6th Edition):

Patten, K. J. (2017). Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis. (Doctoral Dissertation). Arizona State University. Retrieved from http://repository.asu.edu/items/46351

Chicago Manual of Style (16th Edition):

Patten, Kristopher Jakob. “Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis.” 2017. Doctoral Dissertation, Arizona State University. Accessed February 24, 2020. http://repository.asu.edu/items/46351.

MLA Handbook (7th Edition):

Patten, Kristopher Jakob. “Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis.” 2017. Web. 24 Feb 2020.

Vancouver:

Patten KJ. Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis. [Internet] [Doctoral dissertation]. Arizona State University; 2017. [cited 2020 Feb 24]. Available from: http://repository.asu.edu/items/46351.

Council of Science Editors:

Patten KJ. Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis. [Doctoral Dissertation]. Arizona State University; 2017. Available from: http://repository.asu.edu/items/46351


Macquarie University

11. Weisser, Adam. Complex acoustic environments: concepts, methods and auditory perception.

Degree: 2018, Macquarie University

Thesis by publication.

Bibliography: pages 191-215.

1. Introduction  – 2. The ambisonic recordings of typical environments (ARTE) database  – 3. Conversational speech levels and signal-to-noise… (more)

Subjects/Keywords: Auditory scene analysis; Hearing  – Research; complex acoustic environments; complexity; hearing; auditory; perception; auditory scene analysis; 3D sound; spatial listening; auditory information; information loss; communication; complex systems; adaptive systems; information overload; Lombard effect

APA (6th Edition):

Weisser, A. (2018). Complex acoustic environments: concepts, methods and auditory perception. (Doctoral Dissertation). Macquarie University. Retrieved from http://hdl.handle.net/1959.14/1266534

Chicago Manual of Style (16th Edition):

Weisser, Adam. “Complex acoustic environments: concepts, methods and auditory perception.” 2018. Doctoral Dissertation, Macquarie University. Accessed February 24, 2020. http://hdl.handle.net/1959.14/1266534.

MLA Handbook (7th Edition):

Weisser, Adam. “Complex acoustic environments: concepts, methods and auditory perception.” 2018. Web. 24 Feb 2020.

Vancouver:

Weisser A. Complex acoustic environments: concepts, methods and auditory perception. [Internet] [Doctoral dissertation]. Macquarie University; 2018. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1959.14/1266534.

Council of Science Editors:

Weisser A. Complex acoustic environments: concepts, methods and auditory perception. [Doctoral Dissertation]. Macquarie University; 2018. Available from: http://hdl.handle.net/1959.14/1266534

12. Delmotte, Varinthira Duangudom. Computational auditory saliency.

Degree: PhD, Electrical and Computer Engineering, 2012, Georgia Tech

 The objective of this dissertation research is to identify sounds that grab a listener's attention. These sounds that draw a person's attention are sounds that… (more)

Subjects/Keywords: Auditory attention; Perception; Saliency; Auditory scene analysis; Auditory perception; Computational auditory scene analysis; Computer sound processing

APA (6th Edition):

Delmotte, V. D. (2012). Computational auditory saliency. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/45888

Chicago Manual of Style (16th Edition):

Delmotte, Varinthira Duangudom. “Computational auditory saliency.” 2012. Doctoral Dissertation, Georgia Tech. Accessed February 24, 2020. http://hdl.handle.net/1853/45888.

MLA Handbook (7th Edition):

Delmotte, Varinthira Duangudom. “Computational auditory saliency.” 2012. Web. 24 Feb 2020.

Vancouver:

Delmotte VD. Computational auditory saliency. [Internet] [Doctoral dissertation]. Georgia Tech; 2012. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1853/45888.

Council of Science Editors:

Delmotte VD. Computational auditory saliency. [Doctoral Dissertation]. Georgia Tech; 2012. Available from: http://hdl.handle.net/1853/45888

13. Schuett, Jonathan Henry. Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications.

Degree: PhD, Psychology, 2019, Georgia Tech

 Previous work in the auditory display community has discussed the impact of both display design and individual listener differences on how successfully listeners can use… (more)

Subjects/Keywords: Auditory display; Sonification; Stream segregation; Auditory scene analysis; Auditory perception

APA (6th Edition):

Schuett, J. H. (2019). Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61808

Chicago Manual of Style (16th Edition):

Schuett, Jonathan Henry. “Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications.” 2019. Doctoral Dissertation, Georgia Tech. Accessed February 24, 2020. http://hdl.handle.net/1853/61808.

MLA Handbook (7th Edition):

Schuett, Jonathan Henry. “Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications.” 2019. Web. 24 Feb 2020.

Vancouver:

Schuett JH. Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/1853/61808.

Council of Science Editors:

Schuett JH. Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61808

14. Shiratori, Takaaki. Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成.

Degree: 博士(情報理工学) (Doctor of Information Science and Technology), 2017, The University of Tokyo / 東京大学

 Recently, demands for synthesizing realistic human motions are rapidly increasing in computer graphics (CG) and robotics fields. One of the easy solutions to this issue… (more)

Subjects/Keywords: motion capture; auditory scene analysis; human motion synthesis

APA (6th Edition):

Shiratori, T. (2017). Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成. (Thesis). The University of Tokyo / 東京大学. Retrieved from http://hdl.handle.net/2261/25870

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Shiratori, Takaaki. “Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成.” 2017. Thesis, The University of Tokyo / 東京大学. Accessed February 24, 2020. http://hdl.handle.net/2261/25870.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Shiratori, Takaaki. “Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成.” 2017. Web. 24 Feb 2020.

Vancouver:

Shiratori T. Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成. [Internet] [Thesis]. The University of Tokyo / 東京大学; 2017. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/2261/25870.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Shiratori T. Synthesis of Dance Performance Based on Analyses of Human Motion and Music : 人体動作と音楽の解析に基づく舞踊動作生成. [Thesis]. The University of Tokyo / 東京大学; 2017. Available from: http://hdl.handle.net/2261/25870

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

15. Devergie, Aymeric. Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis.

Degree: Docteur es, Acoustique, 2010, Université Claude Bernard – Lyon I

 Perceiving speech in noise is a complex operation for our perceptual system. To analyze this auditory scene, we bring into play… (more)

Subjects/Keywords: Analyse de scènes auditives; Interactions audiovisuelles; Perception de la parole; Auditory scene analysis; Audiovisual interactions; Speech perception

APA (6th Edition):

Devergie, A. (2010). Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis. (Doctoral Dissertation). Université Claude Bernard – Lyon I. Retrieved from http://www.theses.fr/2010LYO10283

Chicago Manual of Style (16th Edition):

Devergie, Aymeric. “Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis.” 2010. Doctoral Dissertation, Université Claude Bernard – Lyon I. Accessed February 24, 2020. http://www.theses.fr/2010LYO10283.

MLA Handbook (7th Edition):

Devergie, Aymeric. “Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis.” 2010. Web. 24 Feb 2020.

Vancouver:

Devergie A. Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis. [Internet] [Doctoral dissertation]. Université Claude Bernard – Lyon I; 2010. [cited 2020 Feb 24]. Available from: http://www.theses.fr/2010LYO10283.

Council of Science Editors:

Devergie A. Interactions audiovisuelles pour l'analyse de scènes auditives : Audiovisual interactions for auditory scene analysis. [Doctoral Dissertation]. Université Claude Bernard – Lyon I; 2010. Available from: http://www.theses.fr/2010LYO10283

16. McMullan, Amanda R. Electroencephalographic measures of auditory perception in dynamic acoustic environments .

Degree: 2013, University of Lethbridge

 We are capable of effortlessly parsing a complex scene presented to us. In order to do this, we must segregate objects from each other and… (more)

Subjects/Keywords: Auditory scene analysis  – Research; Electroencephalography; Auditory perception  – Research; Dissertations, Academic

APA (6th Edition):

McMullan, A. R. (2013). Electroencephalographic measures of auditory perception in dynamic acoustic environments . (Thesis). University of Lethbridge. Retrieved from http://hdl.handle.net/10133/3354

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

McMullan, Amanda R. “Electroencephalographic measures of auditory perception in dynamic acoustic environments .” 2013. Thesis, University of Lethbridge. Accessed February 24, 2020. http://hdl.handle.net/10133/3354.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

McMullan, Amanda R. “Electroencephalographic measures of auditory perception in dynamic acoustic environments .” 2013. Web. 24 Feb 2020.

Vancouver:

McMullan AR. Electroencephalographic measures of auditory perception in dynamic acoustic environments . [Internet] [Thesis]. University of Lethbridge; 2013. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/10133/3354.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

McMullan AR. Electroencephalographic measures of auditory perception in dynamic acoustic environments . [Thesis]. University of Lethbridge; 2013. Available from: http://hdl.handle.net/10133/3354

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

17. Goel, Priyank. Harmonic Sound Source Separation in Monaural Music Signals.

Degree: 2013, Indian Institute of Science

 Sound Source Separation refers to separating sound signals according to their sources from a given observed sound. It is efficient to code and very easy… (more)

Subjects/Keywords: Sound Source Separation; Harmonic Musical Instruments; Harmonic Sound Source Seperation; Monaural Music Signals; Sinusoidal Modeling; Monaural Sound Source Seperation; Auditory Scene Analysis; Monaural Musical Recordings; Communication Engineering

APA (6th Edition):

Goel, P. (2013). Harmonic Sound Source Separation in Monaural Music Signals. (Thesis). Indian Institute of Science. Retrieved from http://hdl.handle.net/2005/2803

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Goel, Priyank. “Harmonic Sound Source Separation in Monaural Music Signals.” 2013. Thesis, Indian Institute of Science. Accessed February 24, 2020. http://hdl.handle.net/2005/2803.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Goel, Priyank. “Harmonic Sound Source Separation in Monaural Music Signals.” 2013. Web. 24 Feb 2020.

Vancouver:

Goel P. Harmonic Sound Source Separation in Monaural Music Signals. [Internet] [Thesis]. Indian Institute of Science; 2013. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/2005/2803.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Goel P. Harmonic Sound Source Separation in Monaural Music Signals. [Thesis]. Indian Institute of Science; 2013. Available from: http://hdl.handle.net/2005/2803

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Université de Grenoble

18. Deleforge, Antoine. Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores.

Degree: Docteur es, Mathématiques et Informatique, 2013, Université de Grenoble

In this thesis, we address the long-studied problem of binaural (two-microphone) sound source separation and localization through supervised learning. In this… (more)

Subjects/Keywords: Sensorimoteur; Robotique; Analyse de scène auditive; Perception; Apprentissage automatisé; Modèles bayesiens; Sensorimotor; Robotics; Auditory Scene Analysis; Perception; Machine Learning; Bayesian Models; 620

APA (6th Edition):

Deleforge, A. (2013). Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores. (Doctoral Dissertation). Université de Grenoble. Retrieved from http://www.theses.fr/2013GRENM033

Chicago Manual of Style (16th Edition):

Deleforge, Antoine. “Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores.” 2013. Doctoral Dissertation, Université de Grenoble. Accessed February 24, 2020. http://www.theses.fr/2013GRENM033.

MLA Handbook (7th Edition):

Deleforge, Antoine. “Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores.” 2013. Web. 24 Feb 2020.

Vancouver:

Deleforge A. Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores. [Internet] [Doctoral dissertation]. Université de Grenoble; 2013. [cited 2020 Feb 24]. Available from: http://www.theses.fr/2013GRENM033.

Council of Science Editors:

Deleforge A. Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization : Projection d'espaces acoustiques : Une approche par apprentissage automatisé de la séparation et de la localisation de sources sonores. [Doctoral Dissertation]. Université de Grenoble; 2013. Available from: http://www.theses.fr/2013GRENM033


University of Florida

19. Woodruff Carr, Kali. Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis.

Degree: 2012, University of Florida

 Making sense of everyday acoustic environments relies on the ability to organize and correctly parse concurrent sounds into segregated streams originating from discrete sources. Resolution… (more)

Subjects/Keywords: Incidental music; Instrumental music; Musical memory; Musical perception; Musical performance; Musical pitch; Musical register; Musical statistics; Musical talent; Musicians; Auditory scene analysis; Music; Sound

APA (6th Edition):

Woodruff Carr, K. (2012). Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis. (Thesis). University of Florida. Retrieved from http://ufdc.ufl.edu/AA00060615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Woodruff Carr, Kali. “Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis.” 2012. Thesis, University of Florida. Accessed February 24, 2020. http://ufdc.ufl.edu/AA00060615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Woodruff Carr, Kali. “Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis.” 2012. Web. 24 Feb 2020.

Vancouver:

Woodruff Carr K. Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis. [Internet] [Thesis]. University of Florida; 2012. [cited 2020 Feb 24]. Available from: http://ufdc.ufl.edu/AA00060615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Woodruff Carr K. Experience-Dependent Enhancement of Musical Training for Auditory Scene Analysis. [Thesis]. University of Florida; 2012. Available from: http://ufdc.ufl.edu/AA00060615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

20. Javier, Lauren. Psychophysics and Neurophysiology of Stream Segregation in the Cat.

Degree: Biological Sciences, 2016, University of California – Irvine

 Listeners have the remarkable ability to disentangle multiple competing sound sequences and organize this mixture into distinct sound sources. A previous study in human listeners… (more)

Subjects/Keywords: Audiology; Neurosciences; Biology; Auditory Scene Analysis; Stream Segregation

APA (6th Edition):

Javier, L. (2016). Psychophysics and Neurophysiology of Stream Segregation in the Cat. (Thesis). University of California – Irvine. Retrieved from http://www.escholarship.org/uc/item/5z27m70b

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Javier, Lauren. “Psychophysics and Neurophysiology of Stream Segregation in the Cat.” 2016. Thesis, University of California – Irvine. Accessed February 24, 2020. http://www.escholarship.org/uc/item/5z27m70b.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Javier, Lauren. “Psychophysics and Neurophysiology of Stream Segregation in the Cat.” 2016. Web. 24 Feb 2020.

Vancouver:

Javier L. Psychophysics and Neurophysiology of Stream Segregation in the Cat. [Internet] [Thesis]. University of California – Irvine; 2016. [cited 2020 Feb 24]. Available from: http://www.escholarship.org/uc/item/5z27m70b.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Javier L. Psychophysics and Neurophysiology of Stream Segregation in the Cat. [Thesis]. University of California – Irvine; 2016. Available from: http://www.escholarship.org/uc/item/5z27m70b

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

21. Mouterde, Solveig. Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal.

Degree: Docteur es, Biologie et physiologie animale, 2014, Saint-Etienne

One of the greatest challenges of communication is that the information encoded by the sender is always modified before it reaches the receiver, and that the receiver… (more)

Subjects/Keywords: Communication acoustique; Signature individuelle; Dégradation due à la propagation; Discrimination individuelle; Analyse de scène auditive; Oiseau chanteur; Acoustic communication; Individual signature; Propagation-induced degradation; Individual discrimination; Auditory scene analysis; Songbird

APA (6th Edition):

Mouterde, S. (2014). Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal. (Doctoral Dissertation). Saint-Etienne. Retrieved from http://www.theses.fr/2014STET4012

Chicago Manual of Style (16th Edition):

Mouterde, Solveig. “Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal.” 2014. Doctoral Dissertation, Saint-Etienne. Accessed February 24, 2020. http://www.theses.fr/2014STET4012.

MLA Handbook (7th Edition):

Mouterde, Solveig. “Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal.” 2014. Web. 24 Feb 2020.

Vancouver:

Mouterde S. Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal. [Internet] [Doctoral dissertation]. Saint-Etienne; 2014. [cited 2020 Feb 24]. Available from: http://www.theses.fr/2014STET4012.

Council of Science Editors:

Mouterde S. Long-range discrimination of individual vocal signatures by a songbird : from propagation constraints to neural substrate : Discrimination à longue distance des signatures vocales individuelles chez un oiseau chanteur : des contraintes de propagation au substrat neuronal. [Doctoral Dissertation]. Saint-Etienne; 2014. Available from: http://www.theses.fr/2014STET4012

22. David, Marion. Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position.

Degree: Docteur es, Acoustique, 2014, Vaulx-en-Velin, Ecole nationale des travaux publics

 In an acoustic environment made up of several sound sources, auditory scene analysis aims to build an accurate and useful representation of the sounds… (more)

Subjects/Keywords: Analyse de scènes auditives; Ségrégation séquentielle; Différences spatiales; Indices de position; Signaux de parole; Auditory scene analysis; Sequential segregation; Spectral differences; Spatial cues; Speech sounds

APA (6th Edition):

David, M. (2014). Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position. (Doctoral Dissertation). Vaulx-en-Velin, Ecole nationale des travaux publics. Retrieved from http://www.theses.fr/2014ENTP0013

Chicago Manual of Style (16th Edition):

David, Marion. “Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position.” 2014. Doctoral Dissertation, Vaulx-en-Velin, Ecole nationale des travaux publics. Accessed February 24, 2020. http://www.theses.fr/2014ENTP0013.

MLA Handbook (7th Edition):

David, Marion. “Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position.” 2014. Web. 24 Feb 2020.

Vancouver:

David M. Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position. [Internet] [Doctoral dissertation]. Vaulx-en-Velin, Ecole nationale des travaux publics; 2014. [cited 2020 Feb 24]. Available from: http://www.theses.fr/2014ENTP0013.

Council of Science Editors:

David M. Toward sequential segregation of speech sounds based on spatial cues : Vers la ségrégation séquentielle de signaux de parole sur la base d'indices de position. [Doctoral Dissertation]. Vaulx-en-Velin, Ecole nationale des travaux publics; 2014. Available from: http://www.theses.fr/2014ENTP0013


Linköping University

23. Ardam, Nagaraju. Study of ASA Algorithms.

Degree: Electronics System, 2010, Linköping University

Hearing aid devices are used to help people with hearing impairment. The number of people who require hearing aid devices is possibly constant over the… (more)

Subjects/Keywords: Auditory Scene Analysis (ASA); Noise suppression; ICA; Blind Source Separation; Hearing-Aid

APA (6th Edition):

Ardam, N. (2010). Study of ASA Algorithms. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ardam, Nagaraju. “Study of ASA Algorithms.” 2010. Thesis, Linköping University. Accessed February 24, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ardam, Nagaraju. “Study of ASA Algorithms.” 2010. Web. 24 Feb 2020.

Vancouver:

Ardam N. Study of ASA Algorithms. [Internet] [Thesis]. Linköping University; 2010. [cited 2020 Feb 24]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ardam N. Study of ASA Algorithms. [Thesis]. Linköping University; 2010. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Michigan State University

24. Aaronson, Neil L. Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods.

Degree: PhD, Department of Physics, 2008, Michigan State University

Subjects/Keywords: Acoustic localization; Architectural acoustics; Auditory scene analysis; Sound – Reverberation; Speech perception; Psychoacoustics

APA (6th Edition):

Aaronson, N. L. (2008). Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods. (Doctoral Dissertation). Michigan State University. Retrieved from http://etd.lib.msu.edu/islandora/object/etd:39170

Chicago Manual of Style (16th Edition):

Aaronson, Neil L. “Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods.” 2008. Doctoral Dissertation, Michigan State University. Accessed February 24, 2020. http://etd.lib.msu.edu/islandora/object/etd:39170.

MLA Handbook (7th Edition):

Aaronson, Neil L. “Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods.” 2008. Web. 24 Feb 2020.

Vancouver:

Aaronson NL. Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods. [Internet] [Doctoral dissertation]. Michigan State University; 2008. [cited 2020 Feb 24]. Available from: http://etd.lib.msu.edu/islandora/object/etd:39170.

Council of Science Editors:

Aaronson NL. Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods. [Doctoral Dissertation]. Michigan State University; 2008. Available from: http://etd.lib.msu.edu/islandora/object/etd:39170


University of South Florida

25. Ravulapalli, Sunil Babu. Association of Sound to Motion in Video Using Perceptual Organization.

Degree: 2006, University of South Florida

 Technological developments and innovations of the first forty years of the digital era have primarily addressed either the audio or the visual senses. Consequently, designers… (more)

Subjects/Keywords: Video Surveillance; Sound Association; Auditory Scene Analysis; Auditory Object; Motion Detection; American Studies; Arts and Humanities; Computer Engineering

APA (6th Edition):

Ravulapalli, S. B. (2006). Association of Sound to Motion in Video Using Perceptual Organization. (Thesis). University of South Florida. Retrieved from https://scholarcommons.usf.edu/etd/3769

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ravulapalli, Sunil Babu. “Association of Sound to Motion in Video Using Perceptual Organization.” 2006. Thesis, University of South Florida. Accessed February 24, 2020. https://scholarcommons.usf.edu/etd/3769.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ravulapalli, Sunil Babu. “Association of Sound to Motion in Video Using Perceptual Organization.” 2006. Web. 24 Feb 2020.

Vancouver:

Ravulapalli SB. Association of Sound to Motion in Video Using Perceptual Organization. [Internet] [Thesis]. University of South Florida; 2006. [cited 2020 Feb 24]. Available from: https://scholarcommons.usf.edu/etd/3769.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ravulapalli SB. Association of Sound to Motion in Video Using Perceptual Organization. [Thesis]. University of South Florida; 2006. Available from: https://scholarcommons.usf.edu/etd/3769

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

26. Patten, Kristopher Jakob. Psychophysical and Neural Correlates of Auditory Attraction and Aversion.

Degree: Psychology, 2014, Arizona State University

 This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based… (more)

Subjects/Keywords: Psychology; Neurosciences; Acoustics; auditory perception; auditory scene analysis; consonance; emotion; fMRI; psychoacoustics

APA (6th Edition):

Patten, K. J. (2014). Psychophysical and Neural Correlates of Auditory Attraction and Aversion. (Masters Thesis). Arizona State University. Retrieved from http://repository.asu.edu/items/27518

Chicago Manual of Style (16th Edition):

Patten, Kristopher Jakob. “Psychophysical and Neural Correlates of Auditory Attraction and Aversion.” 2014. Masters Thesis, Arizona State University. Accessed February 24, 2020. http://repository.asu.edu/items/27518.

MLA Handbook (7th Edition):

Patten, Kristopher Jakob. “Psychophysical and Neural Correlates of Auditory Attraction and Aversion.” 2014. Web. 24 Feb 2020.

Vancouver:

Patten KJ. Psychophysical and Neural Correlates of Auditory Attraction and Aversion. [Internet] [Masters thesis]. Arizona State University; 2014. [cited 2020 Feb 24]. Available from: http://repository.asu.edu/items/27518.

Council of Science Editors:

Patten KJ. Psychophysical and Neural Correlates of Auditory Attraction and Aversion. [Masters Thesis]. Arizona State University; 2014. Available from: http://repository.asu.edu/items/27518


Kyoto University / 京都大学

27. Otsuka, Takuma. Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理.

Degree: Doctor of Informatics (博士(情報学)), 2014, Kyoto University / 京都大学

New-system, course-based doctorate (新制・課程博士)

Doctoral degree no. 18412 (甲第18412号)

Informatics doctorate no. 527 (情博第527号)

Subjects/Keywords: microphone array processing; sound source separation; statistical signal processing; computational auditory scene analysis; Bayesian nonparametrics

APA (6th Edition):

Otsuka, T. (2014). Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理. (Thesis). Kyoto University / 京都大学. Retrieved from http://hdl.handle.net/2433/188871 ; http://dx.doi.org/10.14989/doctor.k18412

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Otsuka, Takuma. “Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理.” 2014. Thesis, Kyoto University / 京都大学. Accessed February 24, 2020. http://hdl.handle.net/2433/188871 ; http://dx.doi.org/10.14989/doctor.k18412.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Otsuka, Takuma. “Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理.” 2014. Web. 24 Feb 2020.

Vancouver:

Otsuka T. Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理. [Internet] [Thesis]. Kyoto University / 京都大学; 2014. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/2433/188871 ; http://dx.doi.org/10.14989/doctor.k18412.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Otsuka T. Bayesian Microphone Array Processing : ベイズ法によるマイクロフォンアレイ処理. [Thesis]. Kyoto University / 京都大学; 2014. Available from: http://hdl.handle.net/2433/188871 ; http://dx.doi.org/10.14989/doctor.k18412

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


The Ohio State University

28. Roman, Nicoleta. Auditory-based algorithms for sound segregation in multisource and reverberant environments.

Degree: PhD, Computer and Information Science, 2005, The Ohio State University

 At a cocktail party, we can selectively attend to a single voice and filter out other interferences. This perceptual ability has motivated a new field… (more)

Subjects/Keywords: computational auditory scene analysis (CASA); binaural speech segregation; monaural speech segregation; robust automatic speech segregation; adaptive filtering; room impulse response; reverberation

APA (6th Edition):

Roman, N. (2005). Auditory-based algorithms for sound segregation in multisource and reverberant environments. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1124370749

Chicago Manual of Style (16th Edition):

Roman, Nicoleta. “Auditory-based algorithms for sound segregation in multisource and reverberant environments.” 2005. Doctoral Dissertation, The Ohio State University. Accessed February 24, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1124370749.

MLA Handbook (7th Edition):

Roman, Nicoleta. “Auditory-based algorithms for sound segregation in multisource and reverberant environments.” 2005. Web. 24 Feb 2020.

Vancouver:

Roman N. Auditory-based algorithms for sound segregation in multisource and reverberant environments. [Internet] [Doctoral dissertation]. The Ohio State University; 2005. [cited 2020 Feb 24]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1124370749.

Council of Science Editors:

Roman N. Auditory-based algorithms for sound segregation in multisource and reverberant environments. [Doctoral Dissertation]. The Ohio State University; 2005. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1124370749


Kyoto University

29. Otsuka, Takuma. Bayesian Microphone Array Processing .

Degree: 2014, Kyoto University

Subjects/Keywords: microphone array processing; sound source separation; statistical signal processing; computational auditory scene analysis; Bayesian nonparametrics

APA (6th Edition):

Otsuka, T. (2014). Bayesian Microphone Array Processing . (Thesis). Kyoto University. Retrieved from http://hdl.handle.net/2433/188871

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Otsuka, Takuma. “Bayesian Microphone Array Processing .” 2014. Thesis, Kyoto University. Accessed February 24, 2020. http://hdl.handle.net/2433/188871.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Otsuka, Takuma. “Bayesian Microphone Array Processing .” 2014. Web. 24 Feb 2020.

Vancouver:

Otsuka T. Bayesian Microphone Array Processing . [Internet] [Thesis]. Kyoto University; 2014. [cited 2020 Feb 24]. Available from: http://hdl.handle.net/2433/188871.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Otsuka T. Bayesian Microphone Array Processing . [Thesis]. Kyoto University; 2014. Available from: http://hdl.handle.net/2433/188871

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

30. Νταλαμπίρας, Σταύρος. Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων [Digital processing and automatic classification of environmental sounds].

Degree: 2010, University of Patras

Chapter 1 presents a general overview of the automatic recognition of generalized acoustic events. In addition, we discuss the applications of acoustic signal recognition technology and give a… (more)

Subjects/Keywords: Κατηγοριοποίηση ήχων; Θεωρία πιθανοτήτων; Εντοπισμός ηχητικού γεγονότος; Κρυμμένα μοντέλα Μαρκώφ; Αναγνώριση γενικευμένου ακουστικού σήματος; Υπολογιστική ακουστική ανάλυση σκηνής; Ανίχνευση καινοτομίας; 621.382 8; Sound classification; Probability theory; Sound event detection; Hidden Markov models; Generalized recognition of audio signals; Computational auditory scene analysis; Novelty detection

APA (6th Edition):

Νταλαμπίρας, Σ. (2010). Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων. (Doctoral Dissertation). University of Patras. Retrieved from http://nemertes.lis.upatras.gr/jspui/handle/10889/3705

Chicago Manual of Style (16th Edition):

Νταλαμπίρας, Σταύρος. “Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων.” 2010. Doctoral Dissertation, University of Patras. Accessed February 24, 2020. http://nemertes.lis.upatras.gr/jspui/handle/10889/3705.

MLA Handbook (7th Edition):

Νταλαμπίρας, Σταύρος. “Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων.” 2010. Web. 24 Feb 2020.

Vancouver:

Νταλαμπίρας Σ. Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων. [Internet] [Doctoral dissertation]. University of Patras; 2010. [cited 2020 Feb 24]. Available from: http://nemertes.lis.upatras.gr/jspui/handle/10889/3705.

Council of Science Editors:

Νταλαμπίρας Σ. Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων. [Doctoral Dissertation]. University of Patras; 2010. Available from: http://nemertes.lis.upatras.gr/jspui/handle/10889/3705

[1] [2]
