
You searched for subject:(Computer Vision). Showing records 1 – 30 of 2109 total matches.

University of Georgia

1. Khatamian Oskooei, Seyed Alireza. Feature extraction and analysis for 3D point cloud-based object recognition.

Degree: PhD, Computer Science, 2016, University of Georgia

 Object recognition is one of the most challenging problems in computer vision, robotics, autonomous agents, and other fields. Image processing and machine learning collaborate to solve… (more)

Subjects/Keywords: Computer Vision


APA (6th Edition):

Khatamian Oskooei, S. A. (2016). Feature extraction and analysis for 3D point cloud-based object recognition. (Doctoral Dissertation). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/khatamian-oskooei_seyed_a_201608_phd

Chicago Manual of Style (16th Edition):

Khatamian Oskooei, Seyed Alireza. “Feature extraction and analysis for 3D point cloud-based object recognition.” 2016. Doctoral Dissertation, University of Georgia. Accessed July 16, 2019. http://purl.galileo.usg.edu/uga_etd/khatamian-oskooei_seyed_a_201608_phd.

MLA Handbook (7th Edition):

Khatamian Oskooei, Seyed Alireza. “Feature extraction and analysis for 3D point cloud-based object recognition.” 2016. Web. 16 Jul 2019.

Vancouver:

Khatamian Oskooei SA. Feature extraction and analysis for 3D point cloud-based object recognition. [Internet] [Doctoral dissertation]. University of Georgia; 2016. [cited 2019 Jul 16]. Available from: http://purl.galileo.usg.edu/uga_etd/khatamian-oskooei_seyed_a_201608_phd.

Council of Science Editors:

Khatamian Oskooei SA. Feature extraction and analysis for 3D point cloud-based object recognition. [Doctoral Dissertation]. University of Georgia; 2016. Available from: http://purl.galileo.usg.edu/uga_etd/khatamian-oskooei_seyed_a_201608_phd


Queens University

2. Hughes, Kevin. Subspace Bootstrapping and Learning for Background Subtraction .

Degree: Electrical and Computer Engineering, 2013, Queens University

 A new background subtraction algorithm is proposed based on using a subspace model. The key components of the algorithm include a novel method for initializing… (more)
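The abstract names a subspace model for background subtraction but is truncated before the details. As a rough illustration only (not the algorithm proposed in this thesis), the sketch below builds a background subspace from the top principal components of a stack of training frames and flags pixels with large reconstruction error as foreground; the array shapes, component count, and threshold are all assumptions.

    # Minimal subspace background-subtraction sketch (illustrative, not the thesis method).
    import numpy as np

    def fit_background_subspace(frames, n_components=5):
        # frames: (N, H*W) array of flattened grayscale training frames.
        mean = frames.mean(axis=0)
        centered = frames - mean
        # Rows of vt span the dominant background subspace.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:n_components]

    def foreground_mask(frame, mean, basis, threshold=25.0):
        # Project the frame onto the subspace and back; large residuals are likely foreground.
        centered = frame - mean
        reconstruction = basis.T @ (basis @ centered)
        return np.abs(centered - reconstruction) > threshold

    # Hypothetical usage with 240x320 frames:
    # mean, basis = fit_background_subspace(train.reshape(len(train), -1).astype(float))
    # mask = foreground_mask(new_frame.ravel().astype(float), mean, basis).reshape(240, 320)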

Subjects/Keywords: Computer Vision


APA (6th Edition):

Hughes, K. (2013). Subspace Bootstrapping and Learning for Background Subtraction . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/8154

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hughes, Kevin. “Subspace Bootstrapping and Learning for Background Subtraction .” 2013. Thesis, Queens University. Accessed July 16, 2019. http://hdl.handle.net/1974/8154.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hughes, Kevin. “Subspace Bootstrapping and Learning for Background Subtraction .” 2013. Web. 16 Jul 2019.

Vancouver:

Hughes K. Subspace Bootstrapping and Learning for Background Subtraction . [Internet] [Thesis]. Queens University; 2013. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1974/8154.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hughes K. Subspace Bootstrapping and Learning for Background Subtraction . [Thesis]. Queens University; 2013. Available from: http://hdl.handle.net/1974/8154

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Hong Kong

3. Lin, Angran. Discriminative parts in computer vision : discovery and application.

Degree: M. Phil., 2015, University of Hong Kong

Discriminative part-based approaches have become increasingly popular in the past few years. Their popularity can be attributed to the fact that discriminative… (more)

Subjects/Keywords: Computer vision


APA (6th Edition):

Lin, A. (2015). Discriminative parts in computer vision : discovery and application. (Masters Thesis). University of Hong Kong. Retrieved from http://dx.doi.org/10.5353/th_b5610992 ; http://hdl.handle.net/10722/221188

Chicago Manual of Style (16th Edition):

Lin, Angran. “Discriminative parts in computer vision : discovery and application.” 2015. Masters Thesis, University of Hong Kong. Accessed July 16, 2019. http://dx.doi.org/10.5353/th_b5610992 ; http://hdl.handle.net/10722/221188.

MLA Handbook (7th Edition):

Lin, Angran. “Discriminative parts in computer vision : discovery and application.” 2015. Web. 16 Jul 2019.

Vancouver:

Lin A. Discriminative parts in computer vision : discovery and application. [Internet] [Masters thesis]. University of Hong Kong; 2015. [cited 2019 Jul 16]. Available from: http://dx.doi.org/10.5353/th_b5610992 ; http://hdl.handle.net/10722/221188.

Council of Science Editors:

Lin A. Discriminative parts in computer vision : discovery and application. [Masters Thesis]. University of Hong Kong; 2015. Available from: http://dx.doi.org/10.5353/th_b5610992 ; http://hdl.handle.net/10722/221188


Oregon State University

4. Amer, Mohamed R. Recognizing human group activities in video through mining optimal features.

Degree: MS, Electrical and Computer Engineering, 2011, Oregon State University

 Given a video, we would like to recognize group activities, localize video parts where these activities occur, and detect actors involved in them. To this… (more)

Subjects/Keywords: Computer Vision


APA (6th Edition):

Amer, M. R. (2011). Recognizing human group activities in video through mining optimal features. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/21793

Chicago Manual of Style (16th Edition):

Amer, Mohamed R. “Recognizing human group activities in video through mining optimal features.” 2011. Masters Thesis, Oregon State University. Accessed July 16, 2019. http://hdl.handle.net/1957/21793.

MLA Handbook (7th Edition):

Amer, Mohamed R. “Recognizing human group activities in video through mining optimal features.” 2011. Web. 16 Jul 2019.

Vancouver:

Amer MR. Recognizing human group activities in video through mining optimal features. [Internet] [Masters thesis]. Oregon State University; 2011. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1957/21793.

Council of Science Editors:

Amer MR. Recognizing human group activities in video through mining optimal features. [Masters Thesis]. Oregon State University; 2011. Available from: http://hdl.handle.net/1957/21793


California State Polytechnic University – Pomona

5. Hsu, Yu-Ching. Object Detection Through Image Processing for Unmanned Aerial Vehicles.

Degree: MS, Department of Computer Sciences, 2018, California State Polytechnic University – Pomona

 An Unmanned Aerial Vehicle (UAV) is an aircraft operated without a human pilot aboard and comes with the ability to fly to desired locations… (more)

Subjects/Keywords: computer vision


APA (6th Edition):

Hsu, Y. (2018). Object Detection Through Image Processing for Unmanned Aerial Vehicles. (Masters Thesis). California State Polytechnic University – Pomona. Retrieved from http://hdl.handle.net/10211.3/206495

Chicago Manual of Style (16th Edition):

Hsu, Yu-Ching. “Object Detection Through Image Processing for Unmanned Aerial Vehicles.” 2018. Masters Thesis, California State Polytechnic University – Pomona. Accessed July 16, 2019. http://hdl.handle.net/10211.3/206495.

MLA Handbook (7th Edition):

Hsu, Yu-Ching. “Object Detection Through Image Processing for Unmanned Aerial Vehicles.” 2018. Web. 16 Jul 2019.

Vancouver:

Hsu Y. Object Detection Through Image Processing for Unmanned Aerial Vehicles. [Internet] [Masters thesis]. California State Polytechnic University – Pomona; 2018. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10211.3/206495.

Council of Science Editors:

Hsu Y. Object Detection Through Image Processing for Unmanned Aerial Vehicles. [Masters Thesis]. California State Polytechnic University – Pomona; 2018. Available from: http://hdl.handle.net/10211.3/206495

6. Narayanan, Maruthi. Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition.

Degree: School of Engineering, 2017, Brown University

 Automatic recognition and segmentation of objects in images is a central open problem in computer vision. The failure of the segmentation-then-recognition paradigm to produce reliable… (more)

Subjects/Keywords: Computer Vision


APA (6th Edition):

Narayanan, M. (2017). Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition. (Thesis). Brown University. Retrieved from https://repository.library.brown.edu/studio/item/bdr:733453/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Narayanan, Maruthi. “Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition.” 2017. Thesis, Brown University. Accessed July 16, 2019. https://repository.library.brown.edu/studio/item/bdr:733453/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Narayanan, Maruthi. “Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition.” 2017. Web. 16 Jul 2019.

Vancouver:

Narayanan M. Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition. [Internet] [Thesis]. Brown University; 2017. [cited 2019 Jul 16]. Available from: https://repository.library.brown.edu/studio/item/bdr:733453/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Narayanan M. Bottom-Up Perceptual Organization of Images into Object Part Proposals Using Medial Visual Fragments: Application to Visual Recognition. [Thesis]. Brown University; 2017. Available from: https://repository.library.brown.edu/studio/item/bdr:733453/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

7. Yuan, Rong. Scale-disstected Pose Estimation in Visual Odometry.

Degree: Electrical Sciences and Computer Engineering, 2017, Brown University

 Traditional visual odometry approaches often rely on estimating the world in the form of a 3D cloud of points from keyframes, which are then projected onto… (more)

Subjects/Keywords: Computer Vision


APA (6th Edition):

Yuan, R. (2017). Scale-disstected Pose Estimation in Visual Odometry. (Thesis). Brown University. Retrieved from https://repository.library.brown.edu/studio/item/bdr:733572/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yuan, Rong. “Scale-disstected Pose Estimation in Visual Odometry.” 2017. Thesis, Brown University. Accessed July 16, 2019. https://repository.library.brown.edu/studio/item/bdr:733572/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yuan, Rong. “Scale-disstected Pose Estimation in Visual Odometry.” 2017. Web. 16 Jul 2019.

Vancouver:

Yuan R. Scale-disstected Pose Estimation in Visual Odometry. [Internet] [Thesis]. Brown University; 2017. [cited 2019 Jul 16]. Available from: https://repository.library.brown.edu/studio/item/bdr:733572/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yuan R. Scale-disstected Pose Estimation in Visual Odometry. [Thesis]. Brown University; 2017. Available from: https://repository.library.brown.edu/studio/item/bdr:733572/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Oregon State University

8. Inouye, Jennifer A. Analysis of bio-based composites for image segmentation with the aid of games.

Degree: MS, Computer Science, 2012, Oregon State University

 A fundamental problem in computer vision is to partition an image into meaningful segments. While image segmentation is required by many applications, the thesis focuses… (more)

Subjects/Keywords: Computer; Computer vision


APA (6th Edition):

Inouye, J. A. (2012). Analysis of bio-based composites for image segmentation with the aid of games. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/29507

Chicago Manual of Style (16th Edition):

Inouye, Jennifer A. “Analysis of bio-based composites for image segmentation with the aid of games.” 2012. Masters Thesis, Oregon State University. Accessed July 16, 2019. http://hdl.handle.net/1957/29507.

MLA Handbook (7th Edition):

Inouye, Jennifer A. “Analysis of bio-based composites for image segmentation with the aid of games.” 2012. Web. 16 Jul 2019.

Vancouver:

Inouye JA. Analysis of bio-based composites for image segmentation with the aid of games. [Internet] [Masters thesis]. Oregon State University; 2012. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1957/29507.

Council of Science Editors:

Inouye JA. Analysis of bio-based composites for image segmentation with the aid of games. [Masters Thesis]. Oregon State University; 2012. Available from: http://hdl.handle.net/1957/29507


Oregon State University

9. Harmon, Paul L. (Paul Lucas). Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration.

Degree: MS, Nuclear Engineering, 2015, Oregon State University

 Mechanical vibrations compromise the integrity of key components of thermal power plants. Without careful design, strong resonances during steady state operation can wear these components… (more)

Subjects/Keywords: Helix; Computer vision


APA (6th Edition):

Harmon, P. L. (2015). Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/56378

Chicago Manual of Style (16th Edition):

Harmon, Paul L (Paul Lucas). “Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration.” 2015. Masters Thesis, Oregon State University. Accessed July 16, 2019. http://hdl.handle.net/1957/56378.

MLA Handbook (7th Edition):

Harmon, Paul L (Paul Lucas). “Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration.” 2015. Web. 16 Jul 2019.

Vancouver:

Harmon PL. Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration. [Internet] [Masters thesis]. Oregon State University; 2015. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1957/56378.

Council of Science Editors:

Harmon PL. Comprehensive Characterization of Motion of a Helical Coil Due to Flow Induced Vibration. [Masters Thesis]. Oregon State University; 2015. Available from: http://hdl.handle.net/1957/56378


University of Victoria

10. Szabo, Jason Leslie. Automated detection of photogrammetric pipe features.

Degree: Department of Mechanical Engineering, 2018, University of Victoria

 This dissertation presents original computer vision algorithms to automate the identification of piping and photogrammetric piping features in individual digital images of industrial installations. Automatic… (more)

Subjects/Keywords: Computer vision; Photogrammetry


APA (6th Edition):

Szabo, J. L. (2018). Automated detection of photogrammetric pipe features. (Thesis). University of Victoria. Retrieved from https://dspace.library.uvic.ca//handle/1828/9141

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Szabo, Jason Leslie. “Automated detection of photogrammetric pipe features.” 2018. Thesis, University of Victoria. Accessed July 16, 2019. https://dspace.library.uvic.ca//handle/1828/9141.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Szabo, Jason Leslie. “Automated detection of photogrammetric pipe features.” 2018. Web. 16 Jul 2019.

Vancouver:

Szabo JL. Automated detection of photogrammetric pipe features. [Internet] [Thesis]. University of Victoria; 2018. [cited 2019 Jul 16]. Available from: https://dspace.library.uvic.ca//handle/1828/9141.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Szabo JL. Automated detection of photogrammetric pipe features. [Thesis]. University of Victoria; 2018. Available from: https://dspace.library.uvic.ca//handle/1828/9141

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Cornell University

11. Chu, Hang. Vision-Based Localization With Map Information .

Degree: 2015, Cornell University

 Maps are available for various types of environments. Most people can easily read maps and localize themselves. In this thesis we address this problem: Can… (more)

Subjects/Keywords: Localization; Computer Vision


APA (6th Edition):

Chu, H. (2015). Vision-Based Localization With Map Information . (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/40929

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chu, Hang. “Vision-Based Localization With Map Information .” 2015. Thesis, Cornell University. Accessed July 16, 2019. http://hdl.handle.net/1813/40929.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chu, Hang. “Vision-Based Localization With Map Information .” 2015. Web. 16 Jul 2019.

Vancouver:

Chu H. Vision-Based Localization With Map Information . [Internet] [Thesis]. Cornell University; 2015. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1813/40929.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chu H. Vision-Based Localization With Map Information . [Thesis]. Cornell University; 2015. Available from: http://hdl.handle.net/1813/40929

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Waterloo

12. Li, Zhizhou. Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization.

Degree: 2017, University of Waterloo

 Accompanying the rise of smart cities and the development of the IoT (Internet of Things), people are looking forward to monitoring and regulating the traffic… (more)

Subjects/Keywords: Computer Vision; IoT


APA (6th Edition):

Li, Z. (2017). Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12310

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Li, Zhizhou. “Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization.” 2017. Thesis, University of Waterloo. Accessed July 16, 2019. http://hdl.handle.net/10012/12310.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Li, Zhizhou. “Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization.” 2017. Web. 16 Jul 2019.

Vancouver:

Li Z. Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization. [Internet] [Thesis]. University of Waterloo; 2017. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10012/12310.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Li Z. Study of Implementation of CNN on Low-power Platform for Smart Traffic Optimization. [Thesis]. University of Waterloo; 2017. Available from: http://hdl.handle.net/10012/12310

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Nelson Mandela Metropolitan University

13. Matthews, Timothy. Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals.

Degree: MSc, Faculty of Science, 2012, Nelson Mandela Metropolitan University

 Pre-visualisation is an important tool for planning films during the pre-production phase of filmmaking. Existing pre-visualisation authoring tools do not effectively support the user in… (more)

Subjects/Keywords: Computer graphics; Computer vision


APA (6th Edition):

Matthews, T. (2012). Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals. (Masters Thesis). Nelson Mandela Metropolitan University. Retrieved from http://hdl.handle.net/10948/d1008430

Chicago Manual of Style (16th Edition):

Matthews, Timothy. “Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals.” 2012. Masters Thesis, Nelson Mandela Metropolitan University. Accessed July 16, 2019. http://hdl.handle.net/10948/d1008430.

MLA Handbook (7th Edition):

Matthews, Timothy. “Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals.” 2012. Web. 16 Jul 2019.

Vancouver:

Matthews T. Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals. [Internet] [Masters thesis]. Nelson Mandela Metropolitan University; 2012. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10948/d1008430.

Council of Science Editors:

Matthews T. Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals. [Masters Thesis]. Nelson Mandela Metropolitan University; 2012. Available from: http://hdl.handle.net/10948/d1008430


University of Hong Kong

14. 李冠彬; Li, Guanbin. Deep saliency detection and color sketch generation.

Degree: PhD, 2016, University of Hong Kong

In recent years, with the wide spread of camera-equipped mobile devices, images have become an important medium for people to record and share their… (more)

Subjects/Keywords: Computer vision; Computer drawing


APA (6th Edition):

李冠彬; Li, G. (2016). Deep saliency detection and color sketch generation. (Doctoral Dissertation). University of Hong Kong. Retrieved from http://hdl.handle.net/10722/233924

Chicago Manual of Style (16th Edition):

李冠彬; Li, Guanbin. “Deep saliency detection and color sketch generation.” 2016. Doctoral Dissertation, University of Hong Kong. Accessed July 16, 2019. http://hdl.handle.net/10722/233924.

MLA Handbook (7th Edition):

李冠彬; Li, Guanbin. “Deep saliency detection and color sketch generation.” 2016. Web. 16 Jul 2019.

Vancouver:

李冠彬; Li G. Deep saliency detection and color sketch generation. [Internet] [Doctoral dissertation]. University of Hong Kong; 2016. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10722/233924.

Council of Science Editors:

李冠彬; Li G. Deep saliency detection and color sketch generation. [Doctoral Dissertation]. University of Hong Kong; 2016. Available from: http://hdl.handle.net/10722/233924


University of California – Berkeley

15. Karayev, Sergey. Anytime Recognition of Objects and Scenes.

Degree: Computer Science, 2014, University of California – Berkeley

 Humans are capable of perceiving a scene at a glance, and obtain deeper understanding with additional time. Computer visual recognition should be similarly robust to… (more)

Subjects/Keywords: Computer science; computer vision; recognition


APA (6th Edition):

Karayev, S. (2014). Anytime Recognition of Objects and Scenes. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/38j8b41p

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karayev, Sergey. “Anytime Recognition of Objects and Scenes.” 2014. Thesis, University of California – Berkeley. Accessed July 16, 2019. http://www.escholarship.org/uc/item/38j8b41p.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karayev, Sergey. “Anytime Recognition of Objects and Scenes.” 2014. Web. 16 Jul 2019.

Vancouver:

Karayev S. Anytime Recognition of Objects and Scenes. [Internet] [Thesis]. University of California – Berkeley; 2014. [cited 2019 Jul 16]. Available from: http://www.escholarship.org/uc/item/38j8b41p.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karayev S. Anytime Recognition of Objects and Scenes. [Thesis]. University of California – Berkeley; 2014. Available from: http://www.escholarship.org/uc/item/38j8b41p

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Hong Kong University of Science and Technology

16. Cheng, Sijing. Modeling the neural population of simple cells tuned to horizontal disparity.

Degree: 2016, Hong Kong University of Science and Technology

 The responses of complex cells in the mammalian visual system are often modeled by the disparity energy model. The model linearly combines inputs from binocular… (more)

Subjects/Keywords: Binocular vision; Mathematical models; Computer vision


APA (6th Edition):

Cheng, S. (2016). Modeling the neural population of simple cells tuned to horizontal disparity. (Thesis). Hong Kong University of Science and Technology. Retrieved from https://doi.org/10.14711/thesis-b1610790 ; http://repository.ust.hk/ir/bitstream/1783.1-86632/1/th_redirect.html

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cheng, Sijing. “Modeling the neural population of simple cells tuned to horizontal disparity.” 2016. Thesis, Hong Kong University of Science and Technology. Accessed July 16, 2019. https://doi.org/10.14711/thesis-b1610790 ; http://repository.ust.hk/ir/bitstream/1783.1-86632/1/th_redirect.html.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cheng, Sijing. “Modeling the neural population of simple cells tuned to horizontal disparity.” 2016. Web. 16 Jul 2019.

Vancouver:

Cheng S. Modeling the neural population of simple cells tuned to horizontal disparity. [Internet] [Thesis]. Hong Kong University of Science and Technology; 2016. [cited 2019 Jul 16]. Available from: https://doi.org/10.14711/thesis-b1610790 ; http://repository.ust.hk/ir/bitstream/1783.1-86632/1/th_redirect.html.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cheng S. Modeling the neural population of simple cells tuned to horizontal disparity. [Thesis]. Hong Kong University of Science and Technology; 2016. Available from: https://doi.org/10.14711/thesis-b1610790 ; http://repository.ust.hk/ir/bitstream/1783.1-86632/1/th_redirect.html

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Boston College

17. Linsley, Drew. A revised framework for human scene recognition.

Degree: PhD, Psychology, 2016, Boston College

 For humans, healthy and productive living depends on navigating through the world and behaving appropriately along the way. But in order to do this, humans… (more)

Subjects/Keywords: Biological vision; Computer vision; Scene recognition


APA (6th Edition):

Linsley, D. (2016). A revised framework for human scene recognition. (Doctoral Dissertation). Boston College. Retrieved from http://dlib.bc.edu/islandora/object/bc-ir:106986

Chicago Manual of Style (16th Edition):

Linsley, Drew. “A revised framework for human scene recognition.” 2016. Doctoral Dissertation, Boston College. Accessed July 16, 2019. http://dlib.bc.edu/islandora/object/bc-ir:106986.

MLA Handbook (7th Edition):

Linsley, Drew. “A revised framework for human scene recognition.” 2016. Web. 16 Jul 2019.

Vancouver:

Linsley D. A revised framework for human scene recognition. [Internet] [Doctoral dissertation]. Boston College; 2016. [cited 2019 Jul 16]. Available from: http://dlib.bc.edu/islandora/object/bc-ir:106986.

Council of Science Editors:

Linsley D. A revised framework for human scene recognition. [Doctoral Dissertation]. Boston College; 2016. Available from: http://dlib.bc.edu/islandora/object/bc-ir:106986

18. Iscen, Ahmet. Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images.

Degree: Docteur es, Informatique, 2017, Rennes 1; Université de Rennes I

This thesis studies indexing and the query-expansion mechanism in image retrieval. Indexing sacrifices search quality for greater efficiency;… (more)

Subjects/Keywords: Vision par ordinateur; Indexation; Computer vision; Indexing


APA (6th Edition):

Iscen, A. (2017). Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images. (Doctoral Dissertation). Rennes 1; Université de Rennes I. Retrieved from http://www.theses.fr/2017REN1S039

Chicago Manual of Style (16th Edition):

Iscen, Ahmet. “Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images.” 2017. Doctoral Dissertation, Rennes 1; Université de Rennes I. Accessed July 16, 2019. http://www.theses.fr/2017REN1S039.

MLA Handbook (7th Edition):

Iscen, Ahmet. “Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images.” 2017. Web. 16 Jul 2019.

Vancouver:

Iscen A. Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images. [Internet] [Doctoral dissertation]. Rennes 1; Université de Rennes I; 2017. [cited 2019 Jul 16]. Available from: http://www.theses.fr/2017REN1S039.

Council of Science Editors:

Iscen A. Continuous memories for representing sets of vectors and image collections : Mémoires continues représentant des ensembles de vecteurs et des collections d’images. [Doctoral Dissertation]. Rennes 1; Université de Rennes I; 2017. Available from: http://www.theses.fr/2017REN1S039


Rutgers University

19. Mi, Xiaofeng, 1978-. Representation and depiction of 2D shapes using parts.

Degree: PhD, Computer Science, 2010, Rutgers University

We describe a 2D shape abstraction system that aims to clarify the structure without loss of the expressiveness of the original shape. To do this,… (more)

Subjects/Keywords: Computer graphics; Rendering (Computer graphics); Computer vision


APA (6th Edition):

Mi, X. (2010). Representation and depiction of 2D shapes using parts. (Doctoral Dissertation). Rutgers University. Retrieved from http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056580

Chicago Manual of Style (16th Edition):

Mi, Xiaofeng, 1978-. “Representation and depiction of 2D shapes using parts.” 2010. Doctoral Dissertation, Rutgers University. Accessed July 16, 2019. http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056580.

MLA Handbook (7th Edition):

Mi, Xiaofeng, 1978-. “Representation and depiction of 2D shapes using parts.” 2010. Web. 16 Jul 2019.

Vancouver:

Mi X. Representation and depiction of 2D shapes using parts. [Internet] [Doctoral dissertation]. Rutgers University; 2010. [cited 2019 Jul 16]. Available from: http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056580.

Council of Science Editors:

Mi X. Representation and depiction of 2D shapes using parts. [Doctoral Dissertation]. Rutgers University; 2010. Available from: http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056580


Georgia Tech

20. Li, Yin. Learning embodied models of actions from first person video.

Degree: PhD, Interactive Computing, 2017, Georgia Tech

 Advances in sensor miniaturization, low-power computing, and battery life have enabled the first generation of mainstream wearable cameras. Millions of hours of videos are captured… (more)

Subjects/Keywords: First person vision; Egocentric vision; Action recognition; Gaze estimation; Computer vision


APA (6th Edition):

Li, Y. (2017). Learning embodied models of actions from first person video. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59207

Chicago Manual of Style (16th Edition):

Li, Yin. “Learning embodied models of actions from first person video.” 2017. Doctoral Dissertation, Georgia Tech. Accessed July 16, 2019. http://hdl.handle.net/1853/59207.

MLA Handbook (7th Edition):

Li, Yin. “Learning embodied models of actions from first person video.” 2017. Web. 16 Jul 2019.

Vancouver:

Li Y. Learning embodied models of actions from first person video. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1853/59207.

Council of Science Editors:

Li Y. Learning embodied models of actions from first person video. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/59207


University of Georgia

21. Tang, Yarong. Computer vision-based reconstructive plastic surgery.

Degree: MS, Artificial Intelligence, 2003, University of Georgia

 High energy traumatic impact of the craniofacial skeleton is an inevitable consequence of today’s fast paced society. The work in the thesis leverages recent advances… (more)

Subjects/Keywords: Computer Vision


APA (6th Edition):

Tang, Y. (2003). Computer vision-based reconstructive plastic surgery. (Masters Thesis). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/tang_yarong_200312_ms

Chicago Manual of Style (16th Edition):

Tang, Yarong. “Computer vision-based reconstructive plastic surgery.” 2003. Masters Thesis, University of Georgia. Accessed July 16, 2019. http://purl.galileo.usg.edu/uga_etd/tang_yarong_200312_ms.

MLA Handbook (7th Edition):

Tang, Yarong. “Computer vision-based reconstructive plastic surgery.” 2003. Web. 16 Jul 2019.

Vancouver:

Tang Y. Computer vision-based reconstructive plastic surgery. [Internet] [Masters thesis]. University of Georgia; 2003. [cited 2019 Jul 16]. Available from: http://purl.galileo.usg.edu/uga_etd/tang_yarong_200312_ms.

Council of Science Editors:

Tang Y. Computer vision-based reconstructive plastic surgery. [Masters Thesis]. University of Georgia; 2003. Available from: http://purl.galileo.usg.edu/uga_etd/tang_yarong_200312_ms


University of Alberta

22. Metcalf, Adam. Pinball: High-Speed Real-Time Tracking and Playing.

Degree: MS, Department of Computing Science, 2011, University of Alberta

 Pinball is a fast-paced arcade-style game whose origins date back hundreds of years. Game-playing robots exist for billiards, foosball, and soccer, and each… (more)

Subjects/Keywords: Artificial Intelligence; Computer Vision; Pinball


APA (6th Edition):

Metcalf, A. (2011). Pinball: High-Speed Real-Time Tracking and Playing. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/js956h040

Chicago Manual of Style (16th Edition):

Metcalf, Adam. “Pinball: High-Speed Real-Time Tracking and Playing.” 2011. Masters Thesis, University of Alberta. Accessed July 16, 2019. https://era.library.ualberta.ca/files/js956h040.

MLA Handbook (7th Edition):

Metcalf, Adam. “Pinball: High-Speed Real-Time Tracking and Playing.” 2011. Web. 16 Jul 2019.

Vancouver:

Metcalf A. Pinball: High-Speed Real-Time Tracking and Playing. [Internet] [Masters thesis]. University of Alberta; 2011. [cited 2019 Jul 16]. Available from: https://era.library.ualberta.ca/files/js956h040.

Council of Science Editors:

Metcalf A. Pinball: High-Speed Real-Time Tracking and Playing. [Masters Thesis]. University of Alberta; 2011. Available from: https://era.library.ualberta.ca/files/js956h040


Texas A&M University

23. Conway, Dylan Taylor. Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach.

Degree: 2016, Texas A&M University

Vision-aided navigation is the process of fusing data from visual cameras with other information sources to provide vehicle state estimation. Fusing information from multiple sources… (more)

Subjects/Keywords: Navigation; Computer vision; Robust estimation


APA (6th Edition):

Conway, D. T. (2016). Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/159100

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Conway, Dylan Taylor. “Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach.” 2016. Thesis, Texas A&M University. Accessed July 16, 2019. http://hdl.handle.net/1969.1/159100.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Conway, Dylan Taylor. “Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach.” 2016. Web. 16 Jul 2019.

Vancouver:

Conway DT. Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach. [Internet] [Thesis]. Texas A&M University; 2016. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1969.1/159100.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Conway DT. Vision-Aided Navigation: Improved Measurements Models and a Data Driven Approach. [Thesis]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/159100

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Texas A&M University

24. Kejriwal, Gaurav. AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR.

Degree: 2014, Texas A&M University

 Collation is an important step in textual criticism and is most often an arduous task for scholars involved in scholarly editing. Finding variations is… (more)

Subjects/Keywords: Digital Humanities; Collation; Computer Vision


APA (6th Edition):

Kejriwal, G. (2014). AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/152504

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kejriwal, Gaurav. “AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR.” 2014. Thesis, Texas A&M University. Accessed July 16, 2019. http://hdl.handle.net/1969.1/152504.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kejriwal, Gaurav. “AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR.” 2014. Web. 16 Jul 2019.

Vancouver:

Kejriwal G. AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR. [Internet] [Thesis]. Texas A&M University; 2014. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1969.1/152504.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kejriwal G. AIDING MODERN TEXTUAL SCHOLARSHIP USING A VIRTUAL HINMAN COLLATOR. [Thesis]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/152504

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Canterbury

25. Grant, Robert. Signal-linear representations of colour for computer vision.

Degree: Computer Science and Software Engineering, 2010, University of Canterbury

 Most cameras detect colour using sensors that separate red, green, and blue wavelengths of light, much as the human eye does. As such… (more)

Subjects/Keywords: computer vision; colour models


APA (6th Edition):

Grant, R. (2010). Signal-linear representations of colour for computer vision. (Thesis). University of Canterbury. Retrieved from http://hdl.handle.net/10092/5685

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Grant, Robert. “Signal-linear representations of colour for computer vision.” 2010. Thesis, University of Canterbury. Accessed July 16, 2019. http://hdl.handle.net/10092/5685.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Grant, Robert. “Signal-linear representations of colour for computer vision.” 2010. Web. 16 Jul 2019.

Vancouver:

Grant R. Signal-linear representations of colour for computer vision. [Internet] [Thesis]. University of Canterbury; 2010. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10092/5685.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Grant R. Signal-linear representations of colour for computer vision. [Thesis]. University of Canterbury; 2010. Available from: http://hdl.handle.net/10092/5685

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Universidade do Minho

26. Abreu, Hélder Paulo Monteiro. Visual speech recognition for European Portuguese .

Degree: 2014, Universidade do Minho

 Speech recognition based on visual features began in the 1980s, integrated into audiovisual speech recognition systems. In fact, the… (more)

Subjects/Keywords: Speech recognition; Kinect; Computer vision


APA (6th Edition):

Abreu, H. P. M. (2014). Visual speech recognition for European Portuguese . (Masters Thesis). Universidade do Minho. Retrieved from http://hdl.handle.net/1822/37465

Chicago Manual of Style (16th Edition):

Abreu, Hélder Paulo Monteiro. “Visual speech recognition for European Portuguese .” 2014. Masters Thesis, Universidade do Minho. Accessed July 16, 2019. http://hdl.handle.net/1822/37465.

MLA Handbook (7th Edition):

Abreu, Hélder Paulo Monteiro. “Visual speech recognition for European Portuguese .” 2014. Web. 16 Jul 2019.

Vancouver:

Abreu HPM. Visual speech recognition for European Portuguese . [Internet] [Masters thesis]. Universidade do Minho; 2014. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1822/37465.

Council of Science Editors:

Abreu HPM. Visual speech recognition for European Portuguese . [Masters Thesis]. Universidade do Minho; 2014. Available from: http://hdl.handle.net/1822/37465


Universiteit Utrecht

27. Zumbrink, T.A. Video-Based Scene and Material Editing.

Degree: 2013, Universiteit Utrecht

 The technique presented in this report alters the appearance of objects in video by substituting the original material for another, synthetic, material. Inspired by methods… (more)

Subjects/Keywords: computer vision; scene editing; rendering


APA (6th Edition):

Zumbrink, T. A. (2013). Video-Based Scene and Material Editing. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/276056

Chicago Manual of Style (16th Edition):

Zumbrink, T A. “Video-Based Scene and Material Editing.” 2013. Masters Thesis, Universiteit Utrecht. Accessed July 16, 2019. http://dspace.library.uu.nl:8080/handle/1874/276056.

MLA Handbook (7th Edition):

Zumbrink, T A. “Video-Based Scene and Material Editing.” 2013. Web. 16 Jul 2019.

Vancouver:

Zumbrink TA. Video-Based Scene and Material Editing. [Internet] [Masters thesis]. Universiteit Utrecht; 2013. [cited 2019 Jul 16]. Available from: http://dspace.library.uu.nl:8080/handle/1874/276056.

Council of Science Editors:

Zumbrink TA. Video-Based Scene and Material Editing. [Masters Thesis]. Universiteit Utrecht; 2013. Available from: http://dspace.library.uu.nl:8080/handle/1874/276056


California State Polytechnic University – Pomona

28. Faust, Jeffrey J. Tracking small falling objects in video.

Degree: MS, Computer Science, 2006, California State Polytechnic University – Pomona

 This thesis identifies falling flowers and leaves in color video sequences of jacaranda trees. The spatial resolution is 320x240 pixels, and the temporal resolution is… (more)

Subjects/Keywords: Computer vision


APA (6th Edition):

Faust, J. J. (2006). Tracking small falling objects in video. (Masters Thesis). California State Polytechnic University – Pomona. Retrieved from http://hdl.handle.net/10211.3/99392

Chicago Manual of Style (16th Edition):

Faust, Jeffrey J. “Tracking small falling objects in video.” 2006. Masters Thesis, California State Polytechnic University – Pomona. Accessed July 16, 2019. http://hdl.handle.net/10211.3/99392.

MLA Handbook (7th Edition):

Faust, Jeffrey J. “Tracking small falling objects in video.” 2006. Web. 16 Jul 2019.

Vancouver:

Faust JJ. Tracking small falling objects in video. [Internet] [Masters thesis]. California State Polytechnic University – Pomona; 2006. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/10211.3/99392.

Council of Science Editors:

Faust JJ. Tracking small falling objects in video. [Masters Thesis]. California State Polytechnic University – Pomona; 2006. Available from: http://hdl.handle.net/10211.3/99392


Queens University

29. Balaketheeswaran, Sai. Recovery of Arthroscope Head Rotation From the Arthroscope Image .

Degree: Computing, 2015, Queens University

 An arthroscope consists of a shaft which is inserted into the patient and a camera head which can rotate about the shaft axis. In computer-assisted… (more)

Subjects/Keywords: arthroscope camera rotation; computer vision


APA (6th Edition):

Balaketheeswaran, S. (2015). Recovery of Arthroscope Head Rotation From the Arthroscope Image . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/13051

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Balaketheeswaran, Sai. “Recovery of Arthroscope Head Rotation From the Arthroscope Image .” 2015. Thesis, Queens University. Accessed July 16, 2019. http://hdl.handle.net/1974/13051.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Balaketheeswaran, Sai. “Recovery of Arthroscope Head Rotation From the Arthroscope Image .” 2015. Web. 16 Jul 2019.

Vancouver:

Balaketheeswaran S. Recovery of Arthroscope Head Rotation From the Arthroscope Image . [Internet] [Thesis]. Queens University; 2015. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1974/13051.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Balaketheeswaran S. Recovery of Arthroscope Head Rotation From the Arthroscope Image . [Thesis]. Queens University; 2015. Available from: http://hdl.handle.net/1974/13051

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Queens University

30. Deretey, Edith. Visual Localization in Underground Mines and Indoor Environments using PnP .

Degree: Electrical and Computer Engineering, 2016, Queens University

 This thesis presents a visual technique for localization in underground mines and indoor environments that exploits the use of a calibrated monocular camera. The objective… (more)
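The abstract mentions localization with a calibrated monocular camera using PnP (Perspective-n-Point) but is cut off before the method. For readers unfamiliar with PnP, here is a minimal, hedged sketch (not this thesis's pipeline) using OpenCV's solvePnPRansac; the landmark coordinates, pixel detections, and camera intrinsics below are placeholder assumptions.

    # Minimal PnP localization sketch (illustrative values, not from the thesis).
    import numpy as np
    import cv2

    # Hypothetical known 3D landmarks (e.g., mapped features in a mine), in metres.
    object_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                              [0.0, 1.0, 0.0], [0.5, 0.5, 1.0], [0.2, 0.8, 0.5]],
                             dtype=np.float32)
    # Their detected pixel locations in the current image (also hypothetical).
    image_points = np.array([[320.0, 240.0], [400.0, 238.0], [398.0, 320.0],
                             [322.0, 318.0], [360.0, 280.0], [340.0, 300.0]],
                            dtype=np.float32)
    # Intrinsics of the calibrated monocular camera (assumed values).
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]],
                 dtype=np.float32)
    dist = np.zeros(4)  # assume the image is already undistorted

    # RANSAC-robust PnP estimates the camera pose relative to the landmark frame.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)        # rotation: world frame -> camera frame
        camera_position = -R.T @ tvec     # camera centre expressed in the world frame
        print("Estimated camera position:", camera_position.ravel())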

Subjects/Keywords: Indoor Localization; Computer Vision


APA (6th Edition):

Deretey, E. (2016). Visual Localization in Underground Mines and Indoor Environments using PnP . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/13928

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Deretey, Edith. “Visual Localization in Underground Mines and Indoor Environments using PnP .” 2016. Thesis, Queens University. Accessed July 16, 2019. http://hdl.handle.net/1974/13928.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Deretey, Edith. “Visual Localization in Underground Mines and Indoor Environments using PnP .” 2016. Web. 16 Jul 2019.

Vancouver:

Deretey E. Visual Localization in Underground Mines and Indoor Environments using PnP . [Internet] [Thesis]. Queens University; 2016. [cited 2019 Jul 16]. Available from: http://hdl.handle.net/1974/13928.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Deretey E. Visual Localization in Underground Mines and Indoor Environments using PnP . [Thesis]. Queens University; 2016. Available from: http://hdl.handle.net/1974/13928

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
