You searched for subject:(scene recognition). Showing records 1 – 30 of 91 total matches.

McMaster University

1. Wade, Mark. The Nature of the Facilitative Effect of Locomotion on Scene Recognition.

Degree: MSc, 2010, McMaster University

Scene recognition performance is reduced when an observer undergoes a viewpoint shift. However, the cost of a viewpoint shift is less when it is… (more)

Subjects/Keywords: locomotion; scene recognition; mental transformation

APA (6th Edition):

Wade, M. (2010). The Nature of the Facilitative Effect of Locomotion on Scene Recognition. (Masters Thesis). McMaster University. Retrieved from http://hdl.handle.net/11375/19186

Chicago Manual of Style (16th Edition):

Wade, Mark. “The Nature of the Facilitative Effect of Locomotion on Scene Recognition.” 2010. Masters Thesis, McMaster University. Accessed December 15, 2019. http://hdl.handle.net/11375/19186.

MLA Handbook (7th Edition):

Wade, Mark. “The Nature of the Facilitative Effect of Locomotion on Scene Recognition.” 2010. Web. 15 Dec 2019.

Vancouver:

Wade M. The Nature of the Facilitative Effect of Locomotion on Scene Recognition. [Internet] [Masters thesis]. McMaster University; 2010. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/11375/19186.

Council of Science Editors:

Wade M. The Nature of the Facilitative Effect of Locomotion on Scene Recognition. [Masters Thesis]. McMaster University; 2010. Available from: http://hdl.handle.net/11375/19186


Boston College

2. Linsley, Drew. A revised framework for human scene recognition.

Degree: PhD, Psychology, 2016, Boston College

 For humans, healthy and productive living depends on navigating through the world and behaving appropriately along the way. But in order to do this, humans… (more)

Subjects/Keywords: Biological vision; Computer vision; Scene recognition

APA (6th Edition):

Linsley, D. (2016). A revised framework for human scene recognition. (Doctoral Dissertation). Boston College. Retrieved from http://dlib.bc.edu/islandora/object/bc-ir:106986

Chicago Manual of Style (16th Edition):

Linsley, Drew. “A revised framework for human scene recognition.” 2016. Doctoral Dissertation, Boston College. Accessed December 15, 2019. http://dlib.bc.edu/islandora/object/bc-ir:106986.

MLA Handbook (7th Edition):

Linsley, Drew. “A revised framework for human scene recognition.” 2016. Web. 15 Dec 2019.

Vancouver:

Linsley D. A revised framework for human scene recognition. [Internet] [Doctoral dissertation]. Boston College; 2016. [cited 2019 Dec 15]. Available from: http://dlib.bc.edu/islandora/object/bc-ir:106986.

Council of Science Editors:

Linsley D. A revised framework for human scene recognition. [Doctoral Dissertation]. Boston College; 2016. Available from: http://dlib.bc.edu/islandora/object/bc-ir:106986


University of Victoria

3. Moria, Kawther. Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices.

Degree: Department of Computer Science, 2016, University of Victoria

 Advances in social networks and multimedia technologies greatly facilitate the recording and sharing of video data on violent social and/or political events via the Internet.… (more)

Subjects/Keywords: Object detection; Fire detection; Object recognition; Violent scene detection; Crowd recognition.

APA (6th Edition):

Moria, K. (2016). Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices. (Thesis). University of Victoria. Retrieved from http://hdl.handle.net/1828/7423

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Moria, Kawther. “Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices.” 2016. Thesis, University of Victoria. Accessed December 15, 2019. http://hdl.handle.net/1828/7423.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Moria, Kawther. “Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices.” 2016. Web. 15 Dec 2019.

Vancouver:

Moria K. Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices. [Internet] [Thesis]. University of Victoria; 2016. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/1828/7423.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Moria K. Computer vision-based detection of fire and violent actions performed by individuals in videos acquired with handheld devices. [Thesis]. University of Victoria; 2016. Available from: http://hdl.handle.net/1828/7423

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Oxford

4. Mathibela, Bonolo. Situational awareness in autonomous vehicles : learning to read the road.

Degree: PhD, 2014, University of Oxford

 This thesis is concerned with the problem of situational awareness in autonomous vehicles. In this context, situational awareness refers to the ability of an autonomous… (more)

Subjects/Keywords: 629.04; Engineering & allied sciences; Robotics; autonomous driving; intelligent transport systems; scene recognition; scene interpretation

APA (6th Edition):

Mathibela, B. (2014). Situational awareness in autonomous vehicles : learning to read the road. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:f9a788c4-1ce5-4733-be2b-ab3918ed079b ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658467

Chicago Manual of Style (16th Edition):

Mathibela, Bonolo. “Situational awareness in autonomous vehicles : learning to read the road.” 2014. Doctoral Dissertation, University of Oxford. Accessed December 15, 2019. http://ora.ox.ac.uk/objects/uuid:f9a788c4-1ce5-4733-be2b-ab3918ed079b ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658467.

MLA Handbook (7th Edition):

Mathibela, Bonolo. “Situational awareness in autonomous vehicles : learning to read the road.” 2014. Web. 15 Dec 2019.

Vancouver:

Mathibela B. Situational awareness in autonomous vehicles : learning to read the road. [Internet] [Doctoral dissertation]. University of Oxford; 2014. [cited 2019 Dec 15]. Available from: http://ora.ox.ac.uk/objects/uuid:f9a788c4-1ce5-4733-be2b-ab3918ed079b ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658467.

Council of Science Editors:

Mathibela B. Situational awareness in autonomous vehicles : learning to read the road. [Doctoral Dissertation]. University of Oxford; 2014. Available from: http://ora.ox.ac.uk/objects/uuid:f9a788c4-1ce5-4733-be2b-ab3918ed079b ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658467


University of Southern California

5. Kim, Jiye Gina. The neural coding of inter-object relations.

Degree: PhD, Psychology, 2011, University of Southern California

 Human vision is extraordinary in the speed and facility at which complex novel scenes can be understood. What is the mechanism by which such effortless… (more)

Subjects/Keywords: fMRI; inter-object relations; lateral occipital complex; object recognition; scene recognition; TMS

APA (6th Edition):

Kim, J. G. (2011). The neural coding of inter-object relations. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/667773/rec/7012

Chicago Manual of Style (16th Edition):

Kim, Jiye Gina. “The neural coding of inter-object relations.” 2011. Doctoral Dissertation, University of Southern California. Accessed December 15, 2019. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/667773/rec/7012.

MLA Handbook (7th Edition):

Kim, Jiye Gina. “The neural coding of inter-object relations.” 2011. Web. 15 Dec 2019.

Vancouver:

Kim JG. The neural coding of inter-object relations. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2019 Dec 15]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/667773/rec/7012.

Council of Science Editors:

Kim JG. The neural coding of inter-object relations. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/667773/rec/7012


University of Edinburgh

6. Malcolm, George Law. Target template guidance of eye movements during real-world search.

Degree: PhD, 2010, University of Edinburgh

 Humans must regularly locate task-relevant objects when interacting with the world around them. Previous research has identified different types of information that the visual system… (more)

Subjects/Keywords: 617.7; active vision; eye movement; search; visual cognition; scene recognition

APA (6th Edition):

Malcolm, G. L. (2010). Target template guidance of eye movements during real-world search. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/4487

Chicago Manual of Style (16th Edition):

Malcolm, George Law. “Target template guidance of eye movements during real-world search.” 2010. Doctoral Dissertation, University of Edinburgh. Accessed December 15, 2019. http://hdl.handle.net/1842/4487.

MLA Handbook (7th Edition):

Malcolm, George Law. “Target template guidance of eye movements during real-world search.” 2010. Web. 15 Dec 2019.

Vancouver:

Malcolm GL. Target template guidance of eye movements during real-world search. [Internet] [Doctoral dissertation]. University of Edinburgh; 2010. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/1842/4487.

Council of Science Editors:

Malcolm GL. Target template guidance of eye movements during real-world search. [Doctoral Dissertation]. University of Edinburgh; 2010. Available from: http://hdl.handle.net/1842/4487


University of Alberta

7. Parnes, Michael. The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment.

Degree: MS, Department of Psychology, 2012, University of Alberta

 In two experiments, subjects either freely walked around a virtual building or watched a recording made by a matched (free walk) subject. Subjects then… (more)

Subjects/Keywords: scene recognition; implicit/explicit; navigation; JRDs; active/passive; spatial cognition

APA (6th Edition):

Parnes, M. (2012). The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/g445cd92z

Chicago Manual of Style (16th Edition):

Parnes, Michael. “The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment.” 2012. Masters Thesis, University of Alberta. Accessed December 15, 2019. https://era.library.ualberta.ca/files/g445cd92z.

MLA Handbook (7th Edition):

Parnes, Michael. “The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment.” 2012. Web. 15 Dec 2019.

Vancouver:

Parnes M. The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment. [Internet] [Masters thesis]. University of Alberta; 2012. [cited 2019 Dec 15]. Available from: https://era.library.ualberta.ca/files/g445cd92z.

Council of Science Editors:

Parnes M. The Effect of Instructions on Landmark, Route, and Directional Memory for Active vs. Passive Learners of a Virtual Reality Environment. [Masters Thesis]. University of Alberta; 2012. Available from: https://era.library.ualberta.ca/files/g445cd92z

8. Arroyo Esquivel, Esteban. An adaptive image pre-processing system for quality control in production lines.

Degree: 2012, Instituto Politécnico de Bragança

 Adaptive and self-optimized behaviours in automated quality control systems based on computer vision and hence on digital image processing, constitute an approach that may signi… (more)

Subjects/Keywords: Adaptive systems; Image pre-processing; Industrial quality control; Scene recognition

APA (6th Edition):

Arroyo Esquivel, E. (2012). An adaptive image pre-processing system for quality control in production lines. (Thesis). Instituto Politécnico de Bragança. Retrieved from https://www.rcaap.pt/detail.jsp?id=oai:bibliotecadigital.ipb.pt:10198/7984

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Arroyo Esquivel, Esteban. “An adaptive image pre-processing system for quality control in production lines.” 2012. Thesis, Instituto Politécnico de Bragança. Accessed December 15, 2019. https://www.rcaap.pt/detail.jsp?id=oai:bibliotecadigital.ipb.pt:10198/7984.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Arroyo Esquivel, Esteban. “An adaptive image pre-processing system for quality control in production lines.” 2012. Web. 15 Dec 2019.

Vancouver:

Arroyo Esquivel E. An adaptive image pre-processing system for quality control in production lines. [Internet] [Thesis]. Instituto Politécnico de Bragança; 2012. [cited 2019 Dec 15]. Available from: https://www.rcaap.pt/detail.jsp?id=oai:bibliotecadigital.ipb.pt:10198/7984.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Arroyo Esquivel E. An adaptive image pre-processing system for quality control in production lines. [Thesis]. Instituto Politécnico de Bragança; 2012. Available from: https://www.rcaap.pt/detail.jsp?id=oai:bibliotecadigital.ipb.pt:10198/7984

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

9. Graves, Benjamin Lee. Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images.

Degree: MS in Psychology, Psychology, 2016, Missouri State University

 Facial recognition has been long held as a special perceptual process at which humans excel, and is primarily a function of perceptual experience. However, there… (more)

Subjects/Keywords: visual scanning; ecological stimuli; recognition memory; scene perception; individual differences; Psychology

APA (6th Edition):

Graves, B. L. (2016). Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images. (Masters Thesis). Missouri State University. Retrieved from https://bearworks.missouristate.edu/theses/2963

Chicago Manual of Style (16th Edition):

Graves, Benjamin Lee. “Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images.” 2016. Masters Thesis, Missouri State University. Accessed December 15, 2019. https://bearworks.missouristate.edu/theses/2963.

MLA Handbook (7th Edition):

Graves, Benjamin Lee. “Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images.” 2016. Web. 15 Dec 2019.

Vancouver:

Graves BL. Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images. [Internet] [Masters thesis]. Missouri State University; 2016. [cited 2019 Dec 15]. Available from: https://bearworks.missouristate.edu/theses/2963.

Council of Science Editors:

Graves BL. Methods Of Measuring Visual Scanning Of Upright And Inverted Ecological Images. [Masters Thesis]. Missouri State University; 2016. Available from: https://bearworks.missouristate.edu/theses/2963


Virginia Tech

10. Nguyen, Chuong Hoang. Features identification and tracking for an autonomous ground vehicle.

Degree: MS, Mechanical Engineering, 2013, Virginia Tech

 This thesis attempts to develop a features identification and tracking system for an autonomous ground vehicle by focusing on four fundamental tasks: Motion detection, object tracking,… (more)

Subjects/Keywords: Motion detection; object tracking; scene recognition; object detection

APA (6th Edition):

Nguyen, C. H. (2013). Features identification and tracking for an autonomous ground vehicle. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/33127

Chicago Manual of Style (16th Edition):

Nguyen, Chuong Hoang. “Features identification and tracking for an autonomous ground vehicle.” 2013. Masters Thesis, Virginia Tech. Accessed December 15, 2019. http://hdl.handle.net/10919/33127.

MLA Handbook (7th Edition):

Nguyen, Chuong Hoang. “Features identification and tracking for an autonomous ground vehicle.” 2013. Web. 15 Dec 2019.

Vancouver:

Nguyen CH. Features identification and tracking for an autonomous ground vehicle. [Internet] [Masters thesis]. Virginia Tech; 2013. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/10919/33127.

Council of Science Editors:

Nguyen CH. Features identification and tracking for an autonomous ground vehicle. [Masters Thesis]. Virginia Tech; 2013. Available from: http://hdl.handle.net/10919/33127


Curtin University of Technology

11. Dillon, Craig. A theory of scene understanding and object recognition.

Degree: 1996, Curtin University of Technology

 This dissertation presents a new approach to image interpretation which can produce hierarchical descriptions of visually sensed scenes based on an incrementally learnt hierarchical knowledge… (more)

Subjects/Keywords: scene understanding; object recognition; Cite

APA (6th Edition):

Dillon, C. (1996). A theory of scene understanding and object recognition. (Thesis). Curtin University of Technology. Retrieved from http://hdl.handle.net/20.500.11937/194

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dillon, Craig. “A theory of scene understanding and object recognition. ” 1996. Thesis, Curtin University of Technology. Accessed December 15, 2019. http://hdl.handle.net/20.500.11937/194.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dillon, Craig. “A theory of scene understanding and object recognition. ” 1996. Web. 15 Dec 2019.

Vancouver:

Dillon C. A theory of scene understanding and object recognition. [Internet] [Thesis]. Curtin University of Technology; 1996. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/20.500.11937/194.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dillon C. A theory of scene understanding and object recognition. [Thesis]. Curtin University of Technology; 1996. Available from: http://hdl.handle.net/20.500.11937/194

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Rochester Institute of Technology

12. Wilbee, Aaron J. A Framework For Learning Scene Independent Edge Detection.

Degree: MS, Electrical Engineering, 2014, Rochester Institute of Technology

  In this work, a framework for a system which will intelligently assign an edge detection filter to an image based on features taken from… (more)

Subjects/Keywords: Cellular automata; Edge detection; Learning algorithms; Scene recognition

APA (6th Edition):

Wilbee, A. J. (2014). A Framework For Learning Scene Independent Edge Detection. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/8642

Chicago Manual of Style (16th Edition):

Wilbee, Aaron J. “A Framework For Learning Scene Independent Edge Detection.” 2014. Masters Thesis, Rochester Institute of Technology. Accessed December 15, 2019. https://scholarworks.rit.edu/theses/8642.

MLA Handbook (7th Edition):

Wilbee, Aaron J. “A Framework For Learning Scene Independent Edge Detection.” 2014. Web. 15 Dec 2019.

Vancouver:

Wilbee AJ. A Framework For Learning Scene Independent Edge Detection. [Internet] [Masters thesis]. Rochester Institute of Technology; 2014. [cited 2019 Dec 15]. Available from: https://scholarworks.rit.edu/theses/8642.

Council of Science Editors:

Wilbee AJ. A Framework For Learning Scene Independent Edge Detection. [Masters Thesis]. Rochester Institute of Technology; 2014. Available from: https://scholarworks.rit.edu/theses/8642


Indian Institute of Science

13. Kumar, Deepak. Methods for Text Segmentation from Scene Images.

Degree: 2014, Indian Institute of Science

Recognition of text from camera-captured scene/born-digital images helps in the development of aids for the blind, unmanned navigation systems and spam filters. However, text in… (more)

Subjects/Keywords: Text Recognition; Digital Images; Scene Images; Text Segmentation; Kannada Word Recognition; Born-Digital Images; Scene Word Images Recognition; Text Segmentation Scene Images; Camera-Captured Scene Image Analysis; Segmented Images; Multi-Script Annotation Toolkit (MAST); Scenic Text; Born-Digital Word Images; Computer Science

APA (6th Edition):

Kumar, D. (2014). Methods for Text Segmentation from Scene Images. (Thesis). Indian Institute of Science. Retrieved from http://hdl.handle.net/2005/2693

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kumar, Deepak. “Methods for Text Segmentation from Scene Images.” 2014. Thesis, Indian Institute of Science. Accessed December 15, 2019. http://hdl.handle.net/2005/2693.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kumar, Deepak. “Methods for Text Segmentation from Scene Images.” 2014. Web. 15 Dec 2019.

Vancouver:

Kumar D. Methods for Text Segmentation from Scene Images. [Internet] [Thesis]. Indian Institute of Science; 2014. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/2005/2693.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kumar D. Methods for Text Segmentation from Scene Images. [Thesis]. Indian Institute of Science; 2014. Available from: http://hdl.handle.net/2005/2693

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

14. Kumar, Deepak. Methods for Text Segmentation from Scene Images.

Degree: 2014, Indian Institute of Science

Recognition of text from camera-captured scene/born-digital images helps in the development of aids for the blind, unmanned navigation systems and spam filters. However, text in… (more)

Subjects/Keywords: Text Recognition; Digital Images; Scene Images; Text Segmentation; Kannada Word Recognition; Born-Digital Images; Scene Word Images Recognition; Text Segmentation Scene Images; Camera-Captured Scene Image Analysis; Segmented Images; Multi-Script Annotation Toolkit (MAST); Scenic Text; Born-Digital Word Images; Computer Science

APA (6th Edition):

Kumar, D. (2014). Methods for Text Segmentation from Scene Images. (Thesis). Indian Institute of Science. Retrieved from http://etd.iisc.ernet.in/handle/2005/2693 ; http://etd.ncsi.iisc.ernet.in/abstracts/3514/G25891-Abs.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kumar, Deepak. “Methods for Text Segmentation from Scene Images.” 2014. Thesis, Indian Institute of Science. Accessed December 15, 2019. http://etd.iisc.ernet.in/handle/2005/2693 ; http://etd.ncsi.iisc.ernet.in/abstracts/3514/G25891-Abs.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kumar, Deepak. “Methods for Text Segmentation from Scene Images.” 2014. Web. 15 Dec 2019.

Vancouver:

Kumar D. Methods for Text Segmentation from Scene Images. [Internet] [Thesis]. Indian Institute of Science; 2014. [cited 2019 Dec 15]. Available from: http://etd.iisc.ernet.in/handle/2005/2693 ; http://etd.ncsi.iisc.ernet.in/abstracts/3514/G25891-Abs.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kumar D. Methods for Text Segmentation from Scene Images. [Thesis]. Indian Institute of Science; 2014. Available from: http://etd.iisc.ernet.in/handle/2005/2693 ; http://etd.ncsi.iisc.ernet.in/abstracts/3514/G25891-Abs.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Kentucky

15. Unnikrishnan, Harikrishnan. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.

Degree: 2010, University of Kentucky

 Auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener… (more)

Subjects/Keywords: Audio Scene Segmentation; Sound Source Tracking; Computational Auditory Scene Analysis; Microphone Arrays; Speaker Recognition; Electrical and Computer Engineering

APA (6th Edition):

Unnikrishnan, H. (2010). AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. (Masters Thesis). University of Kentucky. Retrieved from http://uknowledge.uky.edu/gradschool_theses/622

Chicago Manual of Style (16th Edition):

Unnikrishnan, Harikrishnan. “AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.” 2010. Masters Thesis, University of Kentucky. Accessed December 15, 2019. http://uknowledge.uky.edu/gradschool_theses/622.

MLA Handbook (7th Edition):

Unnikrishnan, Harikrishnan. “AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES.” 2010. Web. 15 Dec 2019.

Vancouver:

Unnikrishnan H. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. [Internet] [Masters thesis]. University of Kentucky; 2010. [cited 2019 Dec 15]. Available from: http://uknowledge.uky.edu/gradschool_theses/622.

Council of Science Editors:

Unnikrishnan H. AUDIO SCENE SEGEMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES. [Masters Thesis]. University of Kentucky; 2010. Available from: http://uknowledge.uky.edu/gradschool_theses/622

16. Higgins, James S. Canonical views of objects and scenes.

Degree: PhD, 0338, 2011, University of Illinois – Urbana-Champaign

 People frequently encounter and interact with objects and scenes from various vantage points in everyday life. The present set of studies explored canonical viewpoints in… (more)

Subjects/Keywords: object; scene; viewpoint; canonical; preference; object recognition; scene recognition; navigation

APA (6th Edition):

Higgins, J. S. (2011). Canonical views of objects and scenes. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/24313

Chicago Manual of Style (16th Edition):

Higgins, James S. “Canonical views of objects and scenes.” 2011. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed December 15, 2019. http://hdl.handle.net/2142/24313.

MLA Handbook (7th Edition):

Higgins, James S. “Canonical views of objects and scenes.” 2011. Web. 15 Dec 2019.

Vancouver:

Higgins JS. Canonical views of objects and scenes. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2011. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/2142/24313.

Council of Science Editors:

Higgins JS. Canonical views of objects and scenes. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2011. Available from: http://hdl.handle.net/2142/24313


University of California – San Diego

17. Dixit, Mandar. Semantic transfer with deep neural networks.

Degree: Electrical Engineering (Intelsys, Robotics and Cont), 2017, University of California – San Diego

 Visual recognition is a problem of significant interest in computer vision. The current solution to this problem involves training a very deep neural network using… (more)

Subjects/Keywords: Computer science; Electrical engineering; Artificial intelligence; Convolutional Neural Networks; Object Recognition; Scene Classification; Transfer Learning

APA (6th Edition):

Dixit, M. (2017). Semantic transfer with deep neural networks. (Thesis). University of California – San Diego. Retrieved from http://www.escholarship.org/uc/item/61v536xt

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dixit, Mandar. “Semantic transfer with deep neural networks.” 2017. Thesis, University of California – San Diego. Accessed December 15, 2019. http://www.escholarship.org/uc/item/61v536xt.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dixit, Mandar. “Semantic transfer with deep neural networks.” 2017. Web. 15 Dec 2019.

Vancouver:

Dixit M. Semantic transfer with deep neural networks. [Internet] [Thesis]. University of California – San Diego; 2017. [cited 2019 Dec 15]. Available from: http://www.escholarship.org/uc/item/61v536xt.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dixit M. Semantic transfer with deep neural networks. [Thesis]. University of California – San Diego; 2017. Available from: http://www.escholarship.org/uc/item/61v536xt

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


UCLA

18. Zhu, Yixin. Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility.

Degree: Statistics, 2018, UCLA

 Reasoning about commonsense from visual input remains an important and challenging problem in the field of computer vision. It is important because the ability to… (more)

Subjects/Keywords: Statistics; Computer science; Causal Reasoning; Computer Vision; Intuitive Physics; Manipulation; Object Recognition; Scene Understanding

APA (6th Edition):

Zhu, Y. (2018). Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/7sm0389z

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zhu, Yixin. “Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility.” 2018. Thesis, UCLA. Accessed December 15, 2019. http://www.escholarship.org/uc/item/7sm0389z.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zhu, Yixin. “Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility.” 2018. Web. 15 Dec 2019.

Vancouver:

Zhu Y. Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility. [Internet] [Thesis]. UCLA; 2018. [cited 2019 Dec 15]. Available from: http://www.escholarship.org/uc/item/7sm0389z.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zhu Y. Visual Commonsense Reasoning: Functionality, Physics, Causality, and Utility. [Thesis]. UCLA; 2018. Available from: http://www.escholarship.org/uc/item/7sm0389z

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Mid Sweden University

19. Meng, Zhaoxin. A deep learning model for scene recognition.

Degree: Information Systems and Technology, 2019, Mid Sweden University

Scene recognition is a hot research topic in the field of image recognition. It is necessary that we focus on the research on scene(more)

Subjects/Keywords: Scene recognition; CNN; convolutional supervised; Fisher Vector; transfer learning; Software Engineering; Programvaruteknik

APA (6th Edition):

Meng, Z. (2019). A deep learning model for scene recognition. (Thesis). Mid Sweden University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Meng, Zhaoxin. “A deep learning model for scene recognition.” 2019. Thesis, Mid Sweden University. Accessed December 15, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Meng, Zhaoxin. “A deep learning model for scene recognition.” 2019. Web. 15 Dec 2019.

Vancouver:

Meng Z. A deep learning model for scene recognition. [Internet] [Thesis]. Mid Sweden University; 2019. [cited 2019 Dec 15]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Meng Z. A deep learning model for scene recognition. [Thesis]. Mid Sweden University; 2019. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Arizona

20. Del Pero, Luca. Top-Down Bayesian Modeling and Inference for Indoor Scenes .

Degree: 2013, University of Arizona

 People can understand the content of an image without effort. We can easily identify the objects in it, and figure out where they are in… (more)

Subjects/Keywords: Bayesian inference; Computer Vision; Indoor scenes; Object recognition; Scene understanding; Computer Science; 3D reconstruction

APA (6th Edition):

Del Pero, L. (2013). Top-Down Bayesian Modeling and Inference for Indoor Scenes . (Doctoral Dissertation). University of Arizona. Retrieved from http://hdl.handle.net/10150/297040

Chicago Manual of Style (16th Edition):

Del Pero, Luca. “Top-Down Bayesian Modeling and Inference for Indoor Scenes .” 2013. Doctoral Dissertation, University of Arizona. Accessed December 15, 2019. http://hdl.handle.net/10150/297040.

MLA Handbook (7th Edition):

Del Pero, Luca. “Top-Down Bayesian Modeling and Inference for Indoor Scenes .” 2013. Web. 15 Dec 2019.

Vancouver:

Del Pero L. Top-Down Bayesian Modeling and Inference for Indoor Scenes . [Internet] [Doctoral dissertation]. University of Arizona; 2013. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/10150/297040.

Council of Science Editors:

Del Pero L. Top-Down Bayesian Modeling and Inference for Indoor Scenes . [Doctoral Dissertation]. University of Arizona; 2013. Available from: http://hdl.handle.net/10150/297040


University of Central Florida

21. Liu, Jingen. Learning Semantic Features For Visual Recognition.

Degree: 2009, University of Central Florida

 Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications… (more)

Subjects/Keywords: visual recognition; action recognition; scene recognition; pattern recognition; Computer Sciences; Engineering

APA (6th Edition):

Liu, J. (2009). Learning Semantic Features For Visual Recognition. (Doctoral Dissertation). University of Central Florida. Retrieved from https://stars.library.ucf.edu/etd/4002

Chicago Manual of Style (16th Edition):

Liu, Jingen. “Learning Semantic Features For Visual Recognition.” 2009. Doctoral Dissertation, University of Central Florida. Accessed December 15, 2019. https://stars.library.ucf.edu/etd/4002.

MLA Handbook (7th Edition):

Liu, Jingen. “Learning Semantic Features For Visual Recognition.” 2009. Web. 15 Dec 2019.

Vancouver:

Liu J. Learning Semantic Features For Visual Recognition. [Internet] [Doctoral dissertation]. University of Central Florida; 2009. [cited 2019 Dec 15]. Available from: https://stars.library.ucf.edu/etd/4002.

Council of Science Editors:

Liu J. Learning Semantic Features For Visual Recognition. [Doctoral Dissertation]. University of Central Florida; 2009. Available from: https://stars.library.ucf.edu/etd/4002


The Ohio State University

22. Shao, Yang. Sequential organization in computational auditory scene analysis.

Degree: PhD, Computer and Information Science, 2007, The Ohio State University

  A human listener's ability to organize the time-frequency (T-F) energy of the same sound source into a single stream is termed auditory scene analysis… (more)

Subjects/Keywords: Computer Science; Sequential Organization; Sequential Grouping; Auditory Scene Analysis; Computational Auditory Scene Analysis; Speech Organization; Robust Speaker Recognition; Auditory Feature; Speaker Quantization

APA (6th Edition):

Shao, Y. (2007). Sequential organization in computational auditory scene analysis. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412

Chicago Manual of Style (16th Edition):

Shao, Yang. “Sequential organization in computational auditory scene analysis.” 2007. Doctoral Dissertation, The Ohio State University. Accessed December 15, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412.

MLA Handbook (7th Edition):

Shao, Yang. “Sequential organization in computational auditory scene analysis.” 2007. Web. 15 Dec 2019.

Vancouver:

Shao Y. Sequential organization in computational auditory scene analysis. [Internet] [Doctoral dissertation]. The Ohio State University; 2007. [cited 2019 Dec 15]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412.

Council of Science Editors:

Shao Y. Sequential organization in computational auditory scene analysis. [Doctoral Dissertation]. The Ohio State University; 2007. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1190127412

23. Blachon, David. Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition.

Degree: Docteur es, Informatique, 2016, Grenoble Alpes

 Context: This thesis is set in the contexts of ambient intelligence and (mobile) scene recognition. Historically, the project originates from… (more)

Subjects/Keywords: Reconnaissance de scène; Audio; Multimodalité; Mobile; Apprentissage artificiel; Intelligence ambiante; Scene Recognition; Audio; Multimodal; Mobile; Machine Learning; Ambiant intelligence; 621; 510

APA (6th Edition):

Blachon, D. (2016). Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition. (Doctoral Dissertation). Grenoble Alpes. Retrieved from http://www.theses.fr/2016GREAM001

Chicago Manual of Style (16th Edition):

Blachon, David. “Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition.” 2016. Doctoral Dissertation, Grenoble Alpes. Accessed December 15, 2019. http://www.theses.fr/2016GREAM001.

MLA Handbook (7th Edition):

Blachon, David. “Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition.” 2016. Web. 15 Dec 2019.

Vancouver:

Blachon D. Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition. [Internet] [Doctoral dissertation]. Grenoble Alpes; 2016. [cited 2019 Dec 15]. Available from: http://www.theses.fr/2016GREAM001.

Council of Science Editors:

Blachon D. Reconnaissance de scènes multimodale embarquée : Embedded multimodal scene recognition. [Doctoral Dissertation]. Grenoble Alpes; 2016. Available from: http://www.theses.fr/2016GREAM001


Tokyo Institute of Technology / 東京工業大学

24. 小島, 諒介. Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition.

Degree: 博士(工学), 2017, Tokyo Institute of Technology / 東京工業大学

 This thesis addresses scene analysis, which is essential for environmental monitoring and understanding. The main problems for scene analysis are twofold; extraction of attributes and… (more)

Subjects/Keywords: 環境理解; Scene analysis; ロボット聴覚; robot audition; プラン認識; plan recognition

APA (6th Edition):

小島, . (2017). Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition. (Thesis). Tokyo Institute of Technology / 東京工業大学. Retrieved from http://t2r2.star.titech.ac.jp/cgi-bin/publicationinfo.cgi?q_publication_content_number=CTT100736587

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

小島, 諒介. “Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition.” 2017. Thesis, Tokyo Institute of Technology / 東京工業大学. Accessed December 15, 2019. http://t2r2.star.titech.ac.jp/cgi-bin/publicationinfo.cgi?q_publication_content_number=CTT100736587.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

小島, 諒介. “Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition.” 2017. Web. 15 Dec 2019.

Vancouver:

小島 . Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition. [Internet] [Thesis]. Tokyo Institute of Technology / 東京工業大学; 2017. [cited 2019 Dec 15]. Available from: http://t2r2.star.titech.ac.jp/cgi-bin/publicationinfo.cgi?q_publication_content_number=CTT100736587.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

小島 . Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition : ロボット聴覚とプラン認識に基づく環境理解のためのPlan-Intention-Eventフレームワーク; Plan-Intention-Event Framework for Scene Analysis Based on Robot Audition and Plan Recognition. [Thesis]. Tokyo Institute of Technology / 東京工業大学; 2017. Available from: http://t2r2.star.titech.ac.jp/cgi-bin/publicationinfo.cgi?q_publication_content_number=CTT100736587

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


McMaster University

25. Teeter, Christopher J. AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING.

Degree: PhD, 2011, McMaster University

Successful navigation requires an accurate mental spatial representation of the environment that can be updated during movement. Experiments with animals and humans have demonstrated… (more)

Subjects/Keywords: spatial reference frames; spatial updating; facilitative effect of locomotion; scene recognition; Cognitive Psychology; Other Psychology; Cognitive Psychology

APA (6th Edition):

Teeter, C. J. (2011). AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING. (Doctoral Dissertation). McMaster University. Retrieved from http://hdl.handle.net/11375/11247

Chicago Manual of Style (16th Edition):

Teeter, Christopher J. “AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING.” 2011. Doctoral Dissertation, McMaster University. Accessed December 15, 2019. http://hdl.handle.net/11375/11247.

MLA Handbook (7th Edition):

Teeter, Christopher J. “AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING.” 2011. Web. 15 Dec 2019.

Vancouver:

Teeter CJ. AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING. [Internet] [Doctoral dissertation]. McMaster University; 2011. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/11375/11247.

Council of Science Editors:

Teeter CJ. AN INVESTIGATION OF SPATIAL REFERENCE FRAMES AND THE CHARACTERISTICS OF BODY-BASED INFORMATION FOR SPATIAL UPDATING. [Doctoral Dissertation]. McMaster University; 2011. Available from: http://hdl.handle.net/11375/11247


McMaster University

26. Comishen, Michael A. CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION.

Degree: MSc, 2013, McMaster University

When an observer detects changes in a scene from a viewpoint that is different from the learned viewpoint, viewpoint change caused by observer’s locomotion… (more)

Subjects/Keywords: change detection; proprioception; scene recognition; spatial reference direction; spatial updating; viewer locomotion; viewpoint change; Cognitive Psychology; Cognitive Psychology

APA (6th Edition):

Comishen, M. A. (2013). CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION. (Masters Thesis). McMaster University. Retrieved from http://hdl.handle.net/11375/13536

Chicago Manual of Style (16th Edition):

Comishen, Michael A. “CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION.” 2013. Masters Thesis, McMaster University. Accessed December 15, 2019. http://hdl.handle.net/11375/13536.

MLA Handbook (7th Edition):

Comishen, Michael A. “CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION.” 2013. Web. 15 Dec 2019.

Vancouver:

Comishen MA. CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION. [Internet] [Masters thesis]. McMaster University; 2013. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/11375/13536.

Council of Science Editors:

Comishen MA. CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION. [Masters Thesis]. McMaster University; 2013. Available from: http://hdl.handle.net/11375/13536


University of South Florida

27. Sulman, Noah Patrick. The Role of Contextual Associations in the Selection of Objects.

Degree: 2011, University of South Florida

 This paper describes a sequence of experiments addressing basic questions about the control of visual attention and the relationship between attention and object recognition. This… (more)

Subjects/Keywords: attentional capture; attentional control; object recognition; scene context; American Studies; Arts and Humanities; Behavioral Disciplines and Activities; Clinical Psychology; Computer Sciences

APA (6th Edition):

Sulman, N. P. (2011). The Role of Contextual Associations in the Selection of Objects. (Thesis). University of South Florida. Retrieved from https://scholarcommons.usf.edu/etd/3372

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Sulman, Noah Patrick. “The Role of Contextual Associations in the Selection of Objects.” 2011. Thesis, University of South Florida. Accessed December 15, 2019. https://scholarcommons.usf.edu/etd/3372.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Sulman, Noah Patrick. “The Role of Contextual Associations in the Selection of Objects.” 2011. Web. 15 Dec 2019.

Vancouver:

Sulman NP. The Role of Contextual Associations in the Selection of Objects. [Internet] [Thesis]. University of South Florida; 2011. [cited 2019 Dec 15]. Available from: https://scholarcommons.usf.edu/etd/3372.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Sulman NP. The Role of Contextual Associations in the Selection of Objects. [Thesis]. University of South Florida; 2011. Available from: https://scholarcommons.usf.edu/etd/3372

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Adelaide

28. Hu, Qichang. Dynamic Scene Understanding with Applications to Traffic Monitoring.

Degree: 2017, University of Adelaide

 Many breakthroughs have been witnessed in the computer vision community in recent years, largely due to deep Convolutional Neural Networks (CNN) and large-scale datasets. This… (more)

Subjects/Keywords: Traffic scene perception; Object subcategorization; Traffic sign detection; Car detection; Cyclist detection; Pedestrian detection; Fine-grained recognition; Car model classification; Vehicle color recognition; Deep learning

APA (6th Edition):

Hu, Q. (2017). Dynamic Scene Understanding with Applications to Traffic Monitoring. (Thesis). University of Adelaide. Retrieved from http://hdl.handle.net/2440/119678

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hu, Qichang. “Dynamic Scene Understanding with Applications to Traffic Monitoring.” 2017. Thesis, University of Adelaide. Accessed December 15, 2019. http://hdl.handle.net/2440/119678.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hu, Qichang. “Dynamic Scene Understanding with Applications to Traffic Monitoring.” 2017. Web. 15 Dec 2019.

Vancouver:

Hu Q. Dynamic Scene Understanding with Applications to Traffic Monitoring. [Internet] [Thesis]. University of Adelaide; 2017. [cited 2019 Dec 15]. Available from: http://hdl.handle.net/2440/119678.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hu Q. Dynamic Scene Understanding with Applications to Traffic Monitoring. [Thesis]. University of Adelaide; 2017. Available from: http://hdl.handle.net/2440/119678

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


EPFL

29. Fornoni, Marco. Saliency-based representations and multi-component classifiers for visual scene recognition.

Degree: 2014, EPFL

 Visual scene recognition deals with the problem of automatically recognizing the high-level semantic concept describing a given image as a whole, such as the environment… (more)

Subjects/Keywords: visual scene recognition; saliency maps; feature pooling; multi-component classification; multi-class classification; locally linear SVM; latent SVM; naive Bayes nearest neighbor

APA (6th Edition):

Fornoni, M. (2014). Saliency-based representations and multi-component classifiers for visual scene recognition. (Thesis). EPFL. Retrieved from http://infoscience.epfl.ch/record/203531

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Fornoni, Marco. “Saliency-based representations and multi-component classifiers for visual scene recognition.” 2014. Thesis, EPFL. Accessed December 15, 2019. http://infoscience.epfl.ch/record/203531.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Fornoni, Marco. “Saliency-based representations and multi-component classifiers for visual scene recognition.” 2014. Web. 15 Dec 2019.

Vancouver:

Fornoni M. Saliency-based representations and multi-component classifiers for visual scene recognition. [Internet] [Thesis]. EPFL; 2014. [cited 2019 Dec 15]. Available from: http://infoscience.epfl.ch/record/203531.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Fornoni M. Saliency-based representations and multi-component classifiers for visual scene recognition. [Thesis]. EPFL; 2014. Available from: http://infoscience.epfl.ch/record/203531

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

30. Portaz, Maxime. Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition.

Degree: Docteur es, Informatique, 2018, Grenoble Alpes

 As part of the GUIMUTEIC project, which aims to equip museum visitors with a camera-equipped visit-assistance tool, this thesis… (more)

Subjects/Keywords: Recherche d'information; Traitement d'image; Recherche d'image; Modèle de données; Reconnaissance de gestes; Information Retrieval; Image Processing; Image Retrieval; Matching Model; Scene Recognition; Image and Sensor Fusion; 004

APA (6th Edition):

Portaz, M. (2018). Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition. (Doctoral Dissertation). Grenoble Alpes. Retrieved from http://www.theses.fr/2018GREAM053

Chicago Manual of Style (16th Edition):

Portaz, Maxime. “Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition.” 2018. Doctoral Dissertation, Grenoble Alpes. Accessed December 15, 2019. http://www.theses.fr/2018GREAM053.

MLA Handbook (7th Edition):

Portaz, Maxime. “Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition.” 2018. Web. 15 Dec 2019.

Vancouver:

Portaz M. Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition. [Internet] [Doctoral dissertation]. Grenoble Alpes; 2018. [cited 2019 Dec 15]. Available from: http://www.theses.fr/2018GREAM053.

Council of Science Editors:

Portaz M. Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets : Information Access in mobile environment for museum visits : Deep Neraul Networks for Instance and Gesture Recognition. [Doctoral Dissertation]. Grenoble Alpes; 2018. Available from: http://www.theses.fr/2018GREAM053
