
You searched for subject:(RGBD). Showing records 1 – 30 of 30 total matches.



Princeton University

1. Halber, Maciej Stanislaw. RGBD Pipeline for Indoor Scene Reconstruction and Understanding .

Degree: PhD, 2019, Princeton University

 In this work, we consider the problem of reconstructing a 3D model from a sequence of color and depth frames. Generating such a model has… (more)

Subjects/Keywords: Indoor; Reconstruction; RGBD


APA (6th Edition):

Halber, M. S. (2019). RGBD Pipeline for Indoor Scene Reconstruction and Understanding . (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01jd4730399

Chicago Manual of Style (16th Edition):

Halber, Maciej Stanislaw. “RGBD Pipeline for Indoor Scene Reconstruction and Understanding .” 2019. Doctoral Dissertation, Princeton University. Accessed May 07, 2021. http://arks.princeton.edu/ark:/88435/dsp01jd4730399.

MLA Handbook (7th Edition):

Halber, Maciej Stanislaw. “RGBD Pipeline for Indoor Scene Reconstruction and Understanding .” 2019. Web. 07 May 2021.

Vancouver:

Halber MS. RGBD Pipeline for Indoor Scene Reconstruction and Understanding . [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2021 May 07]. Available from: http://arks.princeton.edu/ark:/88435/dsp01jd4730399.

Council of Science Editors:

Halber MS. RGBD Pipeline for Indoor Scene Reconstruction and Understanding . [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp01jd4730399


Rutgers University

2. Ren, Baozhang. Development of an efficient RGB-D annotation tool for video sequence.

Degree: MS, Computer Science, 2020, Rutgers University

 With the prevalence of neural networks and deep learning models, more data is required to expand the domain as well as to improve the accuracy… (more)

Subjects/Keywords: RGBD; Image processing  – Digital techniques


APA (6th Edition):

Ren, B. (2020). Development of an efficient RGB-D annotation tool for video sequence. (Masters Thesis). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/62750/

Chicago Manual of Style (16th Edition):

Ren, Baozhang. “Development of an efficient RGB-D annotation tool for video sequence.” 2020. Masters Thesis, Rutgers University. Accessed May 07, 2021. https://rucore.libraries.rutgers.edu/rutgers-lib/62750/.

MLA Handbook (7th Edition):

Ren, Baozhang. “Development of an efficient RGB-D annotation tool for video sequence.” 2020. Web. 07 May 2021.

Vancouver:

Ren B. Development of an efficient RGB-D annotation tool for video sequence. [Internet] [Masters thesis]. Rutgers University; 2020. [cited 2021 May 07]. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/62750/.

Council of Science Editors:

Ren B. Development of an efficient RGB-D annotation tool for video sequence. [Masters Thesis]. Rutgers University; 2020. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/62750/


Université de Grenoble

3. Quiroga Sepúlveda, Julián. Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD.

Degree: Docteur es, Mathématiques et Informatique, 2014, Université de Grenoble

This thesis addresses the problem of reliably computing a 3D motion field, called scene flow, from a pair of RGBD images… (more)

Subjects/Keywords: Mouvement; Profondeur; RGBD; Semi-rigide; Flot de scene; Variationnelle; Motion; Depth; RGBD; Semi-rigid; Scene flow; Variational; 004


APA (6th Edition):

Quiroga Sepúlveda, J. (2014). Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD. (Doctoral Dissertation). Université de Grenoble. Retrieved from http://www.theses.fr/2014GRENM057

Chicago Manual of Style (16th Edition):

Quiroga Sepúlveda, Julián. “Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD.” 2014. Doctoral Dissertation, Université de Grenoble. Accessed May 07, 2021. http://www.theses.fr/2014GRENM057.

MLA Handbook (7th Edition):

Quiroga Sepúlveda, Julián. “Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD.” 2014. Web. 07 May 2021.

Vancouver:

Quiroga Sepúlveda J. Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD. [Internet] [Doctoral dissertation]. Université de Grenoble; 2014. [cited 2021 May 07]. Available from: http://www.theses.fr/2014GRENM057.

Council of Science Editors:

Quiroga Sepúlveda J. Scene Flow Estimation from RGBD Images : Estimation du flot de scène à partir des images RGBD. [Doctoral Dissertation]. Université de Grenoble; 2014. Available from: http://www.theses.fr/2014GRENM057

4. Zarrouati-Vissière, Nadège. La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation.

Degree: Docteur es, Mathématique et automatique, 2013, Paris, ENMP

This thesis studies algorithms for visually augmented reality applications. Several requirements exist for such applications, which are addressed in… (more)

Subjects/Keywords: Observateur asymptotique; Realité augmentée; Systèmes de navigation; Données RGBD; Algorithmes SLAM; Asymptotic observer; Augmented reality; Navigation systems; RGBD data; SLAM algorithms


APA (6th Edition):

Zarrouati-Vissière, N. (2013). La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation. (Doctoral Dissertation). Paris, ENMP. Retrieved from http://www.theses.fr/2013ENMP0061

Chicago Manual of Style (16th Edition):

Zarrouati-Vissière, Nadège. “La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation.” 2013. Doctoral Dissertation, Paris, ENMP. Accessed May 07, 2021. http://www.theses.fr/2013ENMP0061.

MLA Handbook (7th Edition):

Zarrouati-Vissière, Nadège. “La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation.” 2013. Web. 07 May 2021.

Vancouver:

Zarrouati-Vissière N. La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation. [Internet] [Doctoral dissertation]. Paris, ENMP; 2013. [cited 2021 May 07]. Available from: http://www.theses.fr/2013ENMP0061.

Council of Science Editors:

Zarrouati-Vissière N. La réalité augmentée : fusion de vision et navigation : Augmented reality : the fusion of vision and navigation. [Doctoral Dissertation]. Paris, ENMP; 2013. Available from: http://www.theses.fr/2013ENMP0061

5. David da Silva Pires. Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos.

Degree: 2012, University of São Paulo

Depth-sensing devices capable of capturing the texture and geometry of a scene in real time have recently appeared. With this, several Computer Vision techniques,… (more)

Subjects/Keywords: casamento entre grafos; estimação de movimento; imagens RGBD; segmentação de movimento; graph matching; motion estimation; motion segmentation; RGBD images


APA (6th Edition):

Pires, D. d. S. (2012). Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos. (Doctoral Dissertation). University of São Paulo. Retrieved from http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13022014-152114/

Chicago Manual of Style (16th Edition):

Pires, David da Silva. “Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos.” 2012. Doctoral Dissertation, University of São Paulo. Accessed May 07, 2021. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13022014-152114/.

MLA Handbook (7th Edition):

Pires, David da Silva. “Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos.” 2012. Web. 07 May 2021.

Vancouver:

Pires DdS. Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos. [Internet] [Doctoral dissertation]. University of São Paulo; 2012. [cited 2021 May 07]. Available from: http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13022014-152114/.

Council of Science Editors:

Pires DdS. Estimação de movimento a partir de imagens RGBD usando homomorfismo entre grafos. [Doctoral Dissertation]. University of São Paulo; 2012. Available from: http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13022014-152114/


University of Alberta

6. Saini, Amritpal S. Real time spatio temporal segmentation of RGBD cloud and applications.

Degree: MS, Department of Computing Science, 2015, University of Alberta

There is considerable research work going on in the segmentation of RGB-D clouds due to its applications in tasks like scene understanding, robotics, etc. The availability of inexpensive… (more)

Subjects/Keywords: Segmentation; Object discovery; Object detection; GPU; Point cloud; RGBD; Microsoft Kinect


APA (6th Edition):

Saini, A. S. (2015). Real time spatio temporal segmentation of RGBD cloud and applications. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/k3569693x

Chicago Manual of Style (16th Edition):

Saini, Amritpal S. “Real time spatio temporal segmentation of RGBD cloud and applications.” 2015. Masters Thesis, University of Alberta. Accessed May 07, 2021. https://era.library.ualberta.ca/files/k3569693x.

MLA Handbook (7th Edition):

Saini, Amritpal S. “Real time spatio temporal segmentation of RGBD cloud and applications.” 2015. Web. 07 May 2021.

Vancouver:

Saini AS. Real time spatio temporal segmentation of RGBD cloud and applications. [Internet] [Masters thesis]. University of Alberta; 2015. [cited 2021 May 07]. Available from: https://era.library.ualberta.ca/files/k3569693x.

Council of Science Editors:

Saini AS. Real time spatio temporal segmentation of RGBD cloud and applications. [Masters Thesis]. University of Alberta; 2015. Available from: https://era.library.ualberta.ca/files/k3569693x


Cornell University

7. Koppula, Hema. Understanding People From Rgbd Data For Assistive Robots.

Degree: PhD, Computer Science, 2016, Cornell University

 Understanding people in complex dynamic environments is important for many applications such as robotic assistants, health-care monitoring systems, self driving cars, etc. This is a… (more)

Subjects/Keywords: Human Activity Understanding; Semantic Labeling; RGBD for Assistive Robots


APA (6th Edition):

Koppula, H. (2016). Understanding People From Rgbd Data For Assistive Robots. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/43620

Chicago Manual of Style (16th Edition):

Koppula, Hema. “Understanding People From Rgbd Data For Assistive Robots.” 2016. Doctoral Dissertation, Cornell University. Accessed May 07, 2021. http://hdl.handle.net/1813/43620.

MLA Handbook (7th Edition):

Koppula, Hema. “Understanding People From Rgbd Data For Assistive Robots.” 2016. Web. 07 May 2021.

Vancouver:

Koppula H. Understanding People From Rgbd Data For Assistive Robots. [Internet] [Doctoral dissertation]. Cornell University; 2016. [cited 2021 May 07]. Available from: http://hdl.handle.net/1813/43620.

Council of Science Editors:

Koppula H. Understanding People From Rgbd Data For Assistive Robots. [Doctoral Dissertation]. Cornell University; 2016. Available from: http://hdl.handle.net/1813/43620

8. Amamra, A. Robust 3D registration and tracking with RGBD sensors.

Degree: PhD, 2015, Cranfield University

 This thesis investigates the utilisation of cheap RGBD sensors in rigid body tracking and 3D multiview registration for augmented and Virtual reality applications. RGBD sensors… (more)

Subjects/Keywords: Kalman filtering; RGBD camera


APA (6th Edition):

Amamra, A. (2015). Robust 3D registration and tracking with RGBD sensors. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/9291

Chicago Manual of Style (16th Edition):

Amamra, A. “Robust 3D registration and tracking with RGBD sensors.” 2015. Doctoral Dissertation, Cranfield University. Accessed May 07, 2021. http://dspace.lib.cranfield.ac.uk/handle/1826/9291.

MLA Handbook (7th Edition):

Amamra, A. “Robust 3D registration and tracking with RGBD sensors.” 2015. Web. 07 May 2021.

Vancouver:

Amamra A. Robust 3D registration and tracking with RGBD sensors. [Internet] [Doctoral dissertation]. Cranfield University; 2015. [cited 2021 May 07]. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/9291.

Council of Science Editors:

Amamra A. Robust 3D registration and tracking with RGBD sensors. [Doctoral Dissertation]. Cranfield University; 2015. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/9291


George Mason University

9. Paton, Michael. Dynamic RGB-D Mapping .

Degree: 2012, George Mason University

 Localization and mapping has been an area of great importance and interest to the robotics and computer vision community. Localization and mapping has traditionally been… (more)

Subjects/Keywords: SLAM; RGB-D; Robotics; RGBD; Computer Vision; Kinect


APA (6th Edition):

Paton, M. (2012). Dynamic RGB-D Mapping . (Thesis). George Mason University. Retrieved from http://hdl.handle.net/1920/7497

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Paton, Michael. “Dynamic RGB-D Mapping .” 2012. Thesis, George Mason University. Accessed May 07, 2021. http://hdl.handle.net/1920/7497.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Paton, Michael. “Dynamic RGB-D Mapping .” 2012. Web. 07 May 2021.

Vancouver:

Paton M. Dynamic RGB-D Mapping . [Internet] [Thesis]. George Mason University; 2012. [cited 2021 May 07]. Available from: http://hdl.handle.net/1920/7497.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Paton M. Dynamic RGB-D Mapping . [Thesis]. George Mason University; 2012. Available from: http://hdl.handle.net/1920/7497

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

10. Guerry, Joris. Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!.

Degree: Docteur es, Informatique, 2017, Université Paris-Saclay (ComUE)

The main objective of this thesis work is visual recognition for a mobile robot under difficult conditions. In particular, we are interested in networks… (more)

Subjects/Keywords: Classification; Réseaux de neurones; Apprentissage profond; Rgbd; Segmentation; Apprentissage supervisé


APA (6th Edition):

Guerry, J. (2017). Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!. (Doctoral Dissertation). Université Paris-Saclay (ComUE). Retrieved from http://www.theses.fr/2017SACLX080

Chicago Manual of Style (16th Edition):

Guerry, Joris. “Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!.” 2017. Doctoral Dissertation, Université Paris-Saclay (ComUE). Accessed May 07, 2021. http://www.theses.fr/2017SACLX080.

MLA Handbook (7th Edition):

Guerry, Joris. “Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!.” 2017. Web. 07 May 2021.

Vancouver:

Guerry J. Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!. [Internet] [Doctoral dissertation]. Université Paris-Saclay (ComUE); 2017. [cited 2021 May 07]. Available from: http://www.theses.fr/2017SACLX080.

Council of Science Editors:

Guerry J. Reconnaissance visuelle robuste par réseaux de neurones dans des scénarios d'exploration robotique. Détecte-moi si tu peux ! : Robust visual recognition by neural networks in robotic exploration scenarios. Detect me if you can!. [Doctoral Dissertation]. Université Paris-Saclay (ComUE); 2017. Available from: http://www.theses.fr/2017SACLX080


King Abdullah University of Science and Technology

11. Bibi, Adel. Advances in RGB and RGBD Generic Object Trackers.

Degree: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, 2016, King Abdullah University of Science and Technology

 Visual object tracking is a classical and very popular problem in computer vision with a plethora of applications such as vehicle navigation, human computer interface,… (more)

Subjects/Keywords: Trackers; Correlation Filters; Convolution Filters; Sparse representation; RGBD Trackers; Synchronization; Registration


APA (6th Edition):

Bibi, A. (2016). Advances in RGB and RGBD Generic Object Trackers. (Thesis). King Abdullah University of Science and Technology. Retrieved from http://hdl.handle.net/10754/609455

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bibi, Adel. “Advances in RGB and RGBD Generic Object Trackers.” 2016. Thesis, King Abdullah University of Science and Technology. Accessed May 07, 2021. http://hdl.handle.net/10754/609455.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bibi, Adel. “Advances in RGB and RGBD Generic Object Trackers.” 2016. Web. 07 May 2021.

Vancouver:

Bibi A. Advances in RGB and RGBD Generic Object Trackers. [Internet] [Thesis]. King Abdullah University of Science and Technology; 2016. [cited 2021 May 07]. Available from: http://hdl.handle.net/10754/609455.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bibi A. Advances in RGB and RGBD Generic Object Trackers. [Thesis]. King Abdullah University of Science and Technology; 2016. Available from: http://hdl.handle.net/10754/609455

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

12. Twinanda, Andru Putra. Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD.

Degree: Docteur es, Image et vision, 2017, Université de Strasbourg

This thesis aims at designing methods for the automatic recognition of surgical activities. This recognition is a key element for the development… (more)

Subjects/Keywords: Reconnaissance d’activités chirurgicales; Deep learning; Vidéo laparoscopique; Vidéo multi-vues RGBD; Computer vision; Deep learning; Laparoscopic video; Multi-view RGBD video; Machine learning; Activity recognition; Surgical workflow modeling; 006.4; 617.9


APA (6th Edition):

Twinanda, A. P. (2017). Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD. (Doctoral Dissertation). Université de Strasbourg. Retrieved from http://www.theses.fr/2017STRAD005

Chicago Manual of Style (16th Edition):

Twinanda, Andru Putra. “Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD.” 2017. Doctoral Dissertation, Université de Strasbourg. Accessed May 07, 2021. http://www.theses.fr/2017STRAD005.

MLA Handbook (7th Edition):

Twinanda, Andru Putra. “Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD.” 2017. Web. 07 May 2021.

Vancouver:

Twinanda AP. Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD. [Internet] [Doctoral dissertation]. Université de Strasbourg; 2017. [cited 2021 May 07]. Available from: http://www.theses.fr/2017STRAD005.

Council of Science Editors:

Twinanda AP. Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos : Approches basées vision pour la reconnaissance d’activités chirurgicales à partir de vidéos laparoscopiques et multi-vues RGBD. [Doctoral Dissertation]. Université de Strasbourg; 2017. Available from: http://www.theses.fr/2017STRAD005

13. Melbouci, Kathia. Contributions au RGBD-SLAM : RGBD-SLAM contributions.

Degree: Docteur es, Vision pour la Robotique, 2017, Université Clermont Auvergne‎ (2017-2020)

To ensure the autonomous navigation of a mobile robot, the processing performed for its localization must be done online and must guarantee sufficient accuracy… (more)

Subjects/Keywords: Localisation et cartographie simultanées par vision; Capteur 3D; Ajustement de faisceaux; Plans; RGBD-SLAM; Simultaneous Localisation and Mapping; 3D sensor; Bundle adjustment; Plans; RGBD-SLAM


APA (6th Edition):

Melbouci, K. (2017). Contributions au RGBD-SLAM : RGBD-SLAM contributions. (Doctoral Dissertation). Université Clermont Auvergne‎ (2017-2020). Retrieved from http://www.theses.fr/2017CLFAC006

Chicago Manual of Style (16th Edition):

Melbouci, Kathia. “Contributions au RGBD-SLAM : RGBD-SLAM contributions.” 2017. Doctoral Dissertation, Université Clermont Auvergne‎ (2017-2020). Accessed May 07, 2021. http://www.theses.fr/2017CLFAC006.

MLA Handbook (7th Edition):

Melbouci, Kathia. “Contributions au RGBD-SLAM : RGBD-SLAM contributions.” 2017. Web. 07 May 2021.

Vancouver:

Melbouci K. Contributions au RGBD-SLAM : RGBD-SLAM contributions. [Internet] [Doctoral dissertation]. Université Clermont Auvergne‎ (2017-2020); 2017. [cited 2021 May 07]. Available from: http://www.theses.fr/2017CLFAC006.

Council of Science Editors:

Melbouci K. Contributions au RGBD-SLAM : RGBD-SLAM contributions. [Doctoral Dissertation]. Université Clermont Auvergne‎ (2017-2020); 2017. Available from: http://www.theses.fr/2017CLFAC006


University of Ottawa

14. El Ahmar, Wassim. Head and Shoulder Detection using CNN and RGBD Data .

Degree: 2019, University of Ottawa

 Alex Krizhevsky and his colleagues changed the world of machine vision and image processing in 2012 when their deep learning model, named Alexnet, won the… (more)

Subjects/Keywords: deep learning; convolutional neural networks; machine learning; artificial intelligence; ai; cnn; rgbd; machine vision


APA (6th Edition):

El Ahmar, W. (2019). Head and Shoulder Detection using CNN and RGBD Data . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/39448

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

El Ahmar, Wassim. “Head and Shoulder Detection using CNN and RGBD Data .” 2019. Thesis, University of Ottawa. Accessed May 07, 2021. http://hdl.handle.net/10393/39448.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

El Ahmar, Wassim. “Head and Shoulder Detection using CNN and RGBD Data .” 2019. Web. 07 May 2021.

Vancouver:

El Ahmar W. Head and Shoulder Detection using CNN and RGBD Data . [Internet] [Thesis]. University of Ottawa; 2019. [cited 2021 May 07]. Available from: http://hdl.handle.net/10393/39448.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

El Ahmar W. Head and Shoulder Detection using CNN and RGBD Data . [Thesis]. University of Ottawa; 2019. Available from: http://hdl.handle.net/10393/39448

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

15. Des Bouvrie, S.L. Improving RGBD Indoor Mapping with IMU data.

Degree: 2011, Delft University of Technology

With the release of RGBD-cameras (cameras that provide both RGB as well as depth information) researchers have started evaluating how these devices can be used… (more)

Subjects/Keywords: RGBD camera; IMU; Indoor Mapping


APA (6th Edition):

Des Bouvrie, S. L. (2011). Improving RGBD Indoor Mapping with IMU data. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:6015a91c-201a-4716-b374-d237c67aa1df

Chicago Manual of Style (16th Edition):

Des Bouvrie, S. L. “Improving RGBD Indoor Mapping with IMU data.” 2011. Masters Thesis, Delft University of Technology. Accessed May 07, 2021. http://resolver.tudelft.nl/uuid:6015a91c-201a-4716-b374-d237c67aa1df.

MLA Handbook (7th Edition):

Des Bouvrie, S. L. “Improving RGBD Indoor Mapping with IMU data.” 2011. Web. 07 May 2021.

Vancouver:

Des Bouvrie SL. Improving RGBD Indoor Mapping with IMU data. [Internet] [Masters thesis]. Delft University of Technology; 2011. [cited 2021 May 07]. Available from: http://resolver.tudelft.nl/uuid:6015a91c-201a-4716-b374-d237c67aa1df.

Council of Science Editors:

Des Bouvrie SL. Improving RGBD Indoor Mapping with IMU data. [Masters Thesis]. Delft University of Technology; 2011. Available from: http://resolver.tudelft.nl/uuid:6015a91c-201a-4716-b374-d237c67aa1df


Southern Illinois University

16. Coen, Paul Dixon. Human Activity Recognition and Prediction using RGBD Data.

Degree: MS, Computer Science, 2019, Southern Illinois University

  Being able to predict and recognize human activities is an essential element for us to effectively communicate with other humans during our day to… (more)

Subjects/Keywords: Artificial Intelligence; CAD-120; Convolutional LSTM; Human Activity; Neural Networks; RGBD Data


APA (6th Edition):

Coen, P. D. (2019). Human Activity Recognition and Prediction using RGBD Data. (Masters Thesis). Southern Illinois University. Retrieved from https://opensiuc.lib.siu.edu/theses/2562

Chicago Manual of Style (16th Edition):

Coen, Paul Dixon. “Human Activity Recognition and Prediction using RGBD Data.” 2019. Masters Thesis, Southern Illinois University. Accessed May 07, 2021. https://opensiuc.lib.siu.edu/theses/2562.

MLA Handbook (7th Edition):

Coen, Paul Dixon. “Human Activity Recognition and Prediction using RGBD Data.” 2019. Web. 07 May 2021.

Vancouver:

Coen PD. Human Activity Recognition and Prediction using RGBD Data. [Internet] [Masters thesis]. Southern Illinois University; 2019. [cited 2021 May 07]. Available from: https://opensiuc.lib.siu.edu/theses/2562.

Council of Science Editors:

Coen PD. Human Activity Recognition and Prediction using RGBD Data. [Masters Thesis]. Southern Illinois University; 2019. Available from: https://opensiuc.lib.siu.edu/theses/2562

17. Amamra, A. Robust 3D registration and tracking with RGBD sensors.

Degree: PhD, 2015, Cranfield University

 This thesis investigates the utilisation of cheap RGBD sensors in rigid body tracking and 3D multiview registration for augmented and Virtual reality applications. RGBD sensors… (more)

Subjects/Keywords: 629.8; Kalman filtering; RGBD camera


APA (6th Edition):

Amamra, A. (2015). Robust 3D registration and tracking with RGBD sensors. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/9291 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.656007

Chicago Manual of Style (16th Edition):

Amamra, A. “Robust 3D registration and tracking with RGBD sensors.” 2015. Doctoral Dissertation, Cranfield University. Accessed May 07, 2021. http://dspace.lib.cranfield.ac.uk/handle/1826/9291 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.656007.

MLA Handbook (7th Edition):

Amamra, A. “Robust 3D registration and tracking with RGBD sensors.” 2015. Web. 07 May 2021.

Vancouver:

Amamra A. Robust 3D registration and tracking with RGBD sensors. [Internet] [Doctoral dissertation]. Cranfield University; 2015. [cited 2021 May 07]. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/9291 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.656007.

Council of Science Editors:

Amamra A. Robust 3D registration and tracking with RGBD sensors. [Doctoral Dissertation]. Cranfield University; 2015. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/9291 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.656007

18. Xia, Lu, active 21st century. Recognizing human activity using RGBD data.

Degree: PhD, Electrical and Computer Engineering, 2014, University of Texas – Austin

 Traditional computer vision algorithms try to understand the world using visible light cameras. However, there are inherent limitations of this type of data source. First,… (more)

Subjects/Keywords: Activity recognition; RGBD; Depth sensing; 3D; Human detection; First-person; Human interaction


APA (6th Edition):

Xia, L. (2014). Recognizing human activity using RGBD data. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/24981

Chicago Manual of Style (16th Edition):

Xia, Lu, active 21st century. “Recognizing human activity using RGBD data.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed May 07, 2021. http://hdl.handle.net/2152/24981.

MLA Handbook (7th Edition):

Xia, Lu, active 21st century. “Recognizing human activity using RGBD data.” 2014. Web. 07 May 2021.

Vancouver:

Xia L. Recognizing human activity using RGBD data. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 May 07]. Available from: http://hdl.handle.net/2152/24981.

Council of Science Editors:

Xia L. Recognizing human activity using RGBD data. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/24981


University of Kentucky

19. Li, Sen. Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU.

Degree: 2016, University of Kentucky

 Ever since the Kinect brought low-cost depth cameras into consumer market, great interest has been invigorated into Red-Green-Blue-Depth (RGBD) sensors. Without calibration, a RGBD camera’s… (more)

Subjects/Keywords: 3D Reconstruction; Camera Calibration; RGBD; Image Processing; Digital Communications and Networking; Electrical and Electronics


APA (6th Edition):

Li, S. (2016). Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU. (Masters Thesis). University of Kentucky. Retrieved from https://uknowledge.uky.edu/ece_etds/93

Chicago Manual of Style (16th Edition):

Li, Sen. “Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU.” 2016. Masters Thesis, University of Kentucky. Accessed May 07, 2021. https://uknowledge.uky.edu/ece_etds/93.

MLA Handbook (7th Edition):

Li, Sen. “Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU.” 2016. Web. 07 May 2021.

Vancouver:

Li S. Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU. [Internet] [Masters thesis]. University of Kentucky; 2016. [cited 2021 May 07]. Available from: https://uknowledge.uky.edu/ece_etds/93.

Council of Science Editors:

Li S. Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU. [Masters Thesis]. University of Kentucky; 2016. Available from: https://uknowledge.uky.edu/ece_etds/93

20. Li, Francis. RGB-D Scene Flow via Grouping Rigid Motions.

Degree: 2016, University of Waterloo

 Robotics and artificial intelligence have seen drastic advancements in technology and algorithms over the last decade. Computer vision algorithms play a crucial role in enabling… (more)

Subjects/Keywords: Scene flow; RGBD; Motion analysis; Spectral grouping


APA (6th Edition):

Li, F. (2016). RGB-D Scene Flow via Grouping Rigid Motions. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/10799

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Li, Francis. “RGB-D Scene Flow via Grouping Rigid Motions.” 2016. Thesis, University of Waterloo. Accessed May 07, 2021. http://hdl.handle.net/10012/10799.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Li, Francis. “RGB-D Scene Flow via Grouping Rigid Motions.” 2016. Web. 07 May 2021.

Vancouver:

Li F. RGB-D Scene Flow via Grouping Rigid Motions. [Internet] [Thesis]. University of Waterloo; 2016. [cited 2021 May 07]. Available from: http://hdl.handle.net/10012/10799.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Li F. RGB-D Scene Flow via Grouping Rigid Motions. [Thesis]. University of Waterloo; 2016. Available from: http://hdl.handle.net/10012/10799

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Texas State University – San Marcos

21. Ekram, Mohammad Azim Ul. Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera.

Degree: MS, Engineering, 2017, Texas State University – San Marcos

 Omnidirectional treadmill systems are an effective platform that allows unconstrained locomotion possibilities to a user for effective VR exploration. There are two most common problems… (more)

Subjects/Keywords: Omnidirectional treadmill; Depth camera; Kinect; Geometric calculation; Virtual environment interface; RGBD; VR; Virtual reality; Image processing; Computer vision


APA (6th Edition):

Ekram, M. A. U. (2017). Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera. (Masters Thesis). Texas State University – San Marcos. Retrieved from https://digital.library.txstate.edu/handle/10877/7722

Chicago Manual of Style (16th Edition):

Ekram, Mohammad Azim Ul. “Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera.” 2017. Masters Thesis, Texas State University – San Marcos. Accessed May 07, 2021. https://digital.library.txstate.edu/handle/10877/7722.

MLA Handbook (7th Edition):

Ekram, Mohammad Azim Ul. “Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera.” 2017. Web. 07 May 2021.

Vancouver:

Ekram MAU. Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera. [Internet] [Masters thesis]. Texas State University – San Marcos; 2017. [cited 2021 May 07]. Available from: https://digital.library.txstate.edu/handle/10877/7722.

Council of Science Editors:

Ekram MAU. Measurement of Control parameters for Omnidirectional Treadmills using RGBD camera. [Masters Thesis]. Texas State University – San Marcos; 2017. Available from: https://digital.library.txstate.edu/handle/10877/7722


IUPUI

22. Dale, Ashley S. 3D Object Detection Using Virtual Environment Assisted Deep Network Training.

Degree: 2020, IUPUI


An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with… (more)

Subjects/Keywords: Machine Learning; MASK R-CNN; ARTIFICIAL INTELLIGENCE; IMAGE PROCESSING; 3D IMAGE; SIGNAL PROCESSING; OBJECT DETECTION; THREAT DETECTION; VIRTUAL ENVIRONMENTS; SYNTHETIC DATASET; IMAGE SEGMENTATION; RGBD; RGBD VIDEO; RGBZ; ALGORITHM; MS COCO; TRANSFER LEARNING


APA (6th Edition):

Dale, A. S. (2020). 3D Object Detection Using Virtual Environment Assisted Deep Network Training. (Thesis). IUPUI. Retrieved from http://hdl.handle.net/1805/24756

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dale, Ashley S. “3D Object Detection Using Virtual Environment Assisted Deep Network Training.” 2020. Thesis, IUPUI. Accessed May 07, 2021. http://hdl.handle.net/1805/24756.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dale, Ashley S. “3D Object Detection Using Virtual Environment Assisted Deep Network Training.” 2020. Web. 07 May 2021.

Vancouver:

Dale AS. 3D Object Detection Using Virtual Environment Assisted Deep Network Training. [Internet] [Thesis]. IUPUI; 2020. [cited 2021 May 07]. Available from: http://hdl.handle.net/1805/24756.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dale AS. 3D Object Detection Using Virtual Environment Assisted Deep Network Training. [Thesis]. IUPUI; 2020. Available from: http://hdl.handle.net/1805/24756

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

23. Jacques, Maxime. Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot .

Degree: 2012, University of Ottawa

 The recent advent of consumer grade Brain-Computer Interfaces (BCI) provides a new revolutionary and accessible way to control computers. BCI translate cognitive electroencephalography (EEG) signals… (more)

Subjects/Keywords: RGBD; Kinect; BCI; Mobile Robot; NXT; Human-computer interface


APA (6th Edition):

Jacques, M. (2012). Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/22896

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jacques, Maxime. “Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot .” 2012. Thesis, University of Ottawa. Accessed May 07, 2021. http://hdl.handle.net/10393/22896.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jacques, Maxime. “Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot .” 2012. Web. 07 May 2021.

Vancouver:

Jacques M. Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot . [Internet] [Thesis]. University of Ottawa; 2012. [cited 2021 May 07]. Available from: http://hdl.handle.net/10393/22896.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jacques M. Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot . [Thesis]. University of Ottawa; 2012. Available from: http://hdl.handle.net/10393/22896

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


KTH

24. Möckelind, Christoffer. Improving deep monocular depth predictions using dense narrow field of view depth images.

Degree: RPL, 2018, KTH

In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of… (more)

Subjects/Keywords: Deep learning; Monocular; Depth estimation; Narrow field of view; RGB; RGBD; Noisy depth; Dense depth; Narrow depth; Sparse depth; Computer Sciences; Datavetenskap (datalogi)


APA (6th Edition):

Möckelind, C. (2018). Improving deep monocular depth predictions using dense narrow field of view depth images. (Thesis). KTH. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Möckelind, Christoffer. “Improving deep monocular depth predictions using dense narrow field of view depth images.” 2018. Thesis, KTH. Accessed May 07, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Möckelind, Christoffer. “Improving deep monocular depth predictions using dense narrow field of view depth images.” 2018. Web. 07 May 2021.

Vancouver:

Möckelind C. Improving deep monocular depth predictions using dense narrow field of view depth images. [Internet] [Thesis]. KTH; 2018. [cited 2021 May 07]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Möckelind C. Improving deep monocular depth predictions using dense narrow field of view depth images. [Thesis]. KTH; 2018. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

25. Orts-Escolano, Sergio. A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs .

Degree: 2014, University of Alicante

 The research described in this thesis was motivated by the need of a robust model capable of representing 3D data obtained with 3D sensors, which… (more)

Subjects/Keywords: 3D representation method; Growing neural gas; Self-organizing maps; Topology preservation; Parallel computing; CUDA; Real-time; Point cloud; 3D reconstruction; GPGPU; RGBD; Noisy 3D data; Object recognition


APA (6th Edition):

Orts-Escolano, S. (2014). A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs . (Thesis). University of Alicante. Retrieved from http://hdl.handle.net/10045/36484

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Orts-Escolano, Sergio. “A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs .” 2014. Thesis, University of Alicante. Accessed May 07, 2021. http://hdl.handle.net/10045/36484.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Orts-Escolano, Sergio. “A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs .” 2014. Web. 07 May 2021.

Vancouver:

Orts-Escolano S. A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs . [Internet] [Thesis]. University of Alicante; 2014. [cited 2021 May 07]. Available from: http://hdl.handle.net/10045/36484.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Orts-Escolano S. A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs . [Thesis]. University of Alicante; 2014. Available from: http://hdl.handle.net/10045/36484

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

26. Feng, David. RGB-D Scene Representations for Prosthetic Vision .

Degree: 2017, Australian National University

 This thesis presents a new approach to scene representation for prosthetic vision. Structurally salient information from the scene is conveyed through the prosthetic vision display.… (more)

Subjects/Keywords: rgbd saliency; depth saliency; rgbd salient object detection; depth salient object detection; local background enclosure; lbe; depth structural descriptor; dsd; histogram of surface orientation; hoso; prosthetic vision; rgbd edge detection; surface irregularities; salient structure; deep edge detection


APA (6th Edition):

Feng, D. (2017). RGB-D Scene Representations for Prosthetic Vision . (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/167000

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Feng, David. “RGB-D Scene Representations for Prosthetic Vision .” 2017. Thesis, Australian National University. Accessed May 07, 2021. http://hdl.handle.net/1885/167000.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Feng, David. “RGB-D Scene Representations for Prosthetic Vision .” 2017. Web. 07 May 2021.

Vancouver:

Feng D. RGB-D Scene Representations for Prosthetic Vision . [Internet] [Thesis]. Australian National University; 2017. [cited 2021 May 07]. Available from: http://hdl.handle.net/1885/167000.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Feng D. RGB-D Scene Representations for Prosthetic Vision . [Thesis]. Australian National University; 2017. Available from: http://hdl.handle.net/1885/167000

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Linköping University

27. Stynsberg, John. Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking.

Degree: Computer Vision, 2018, Linköping University

Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications… (more)

Subjects/Keywords: Tracking; Visual; Deep; Learning; Machine; Learning; CNN; Convolutional; Neural; Network; Unsupervised; Learning; Clustering; Genetic Algorithms; Features; Visual features; Channel; Coding; RGBD; Scene; Depth; Map; Kinect; Discriminative; Correlation; Filters; SRDCF; DCF; Spatial; Spatially; Regularized; Hyperparameter; Search; Occlusion; Detection; Handling; Kalman; Filters; Normalized; Convolution; Bayesian; Gaussian; Mixture; Scale; Estimation; Conjugate; Gradient; Linkoping; Sweden; Visuell; Följning; Särdrag; Djupa; Faltningsnätverk; Maskininlärning; Djup; Inlärning; Genetiska; Algoritmer; Klustring; Djup; RGBD; Linköping; Sverige; Computer Vision and Robotics (Autonomous Systems); Datorseende och robotik (autonoma system)


APA (6th Edition):

Stynsberg, J. (2018). Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Stynsberg, John. “Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking.” 2018. Thesis, Linköping University. Accessed May 07, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Stynsberg, John. “Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking.” 2018. Web. 07 May 2021.

Vancouver:

Stynsberg J. Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking. [Internet] [Thesis]. Linköping University; 2018. [cited 2021 May 07]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Stynsberg J. Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking. [Thesis]. Linköping University; 2018. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

28. Naik, Narmada. Real Time Face Recognition on GPU using OPENCL.

Degree: MSc Engg, Faculty of Engineering, 2018, Indian Institute of Science

Face recognition finds various applications in surveillance, law enforcement, etc. These applications require fast image processing in real time. Modern GPUs have evolved fully programmable… (more)

Subjects/Keywords: Real Time Face Recognition; Face Recognition; GPU; Local Binary Pattern (LBP); Video Based Face Recognition; Tracking after Recognition; Local Ternary Pattern (LTP); Enhanced Local Ternary Patterns (ELTP); Face Identification; Face Identification from Depth (RGBD); Kinect Sensor; OpenCL Memory Model; Electrical Engineering


APA (6th Edition):

Naik, N. (2018). Real Time Face Recognition on GPU using OPENCL. (Masters Thesis). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3596

Chicago Manual of Style (16th Edition):

Naik, Narmada. “Real Time Face Recognition on GPU using OPENCL.” 2018. Masters Thesis, Indian Institute of Science. Accessed May 07, 2021. http://etd.iisc.ac.in/handle/2005/3596.

MLA Handbook (7th Edition):

Naik, Narmada. “Real Time Face Recognition on GPU using OPENCL.” 2018. Web. 07 May 2021.

Vancouver:

Naik N. Real Time Face Recognition on GPU using OPENCL. [Internet] [Masters thesis]. Indian Institute of Science; 2018. [cited 2021 May 07]. Available from: http://etd.iisc.ac.in/handle/2005/3596.

Council of Science Editors:

Naik N. Real Time Face Recognition on GPU using OPENCL. [Masters Thesis]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3596

29. WU ZHE. TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY.

Degree: 2014, National University of Singapore

Subjects/Keywords: isotropic BRDF; half-vector symmetry; uncalibrated photometric stereo; appearance capture; material sensing; RGBD-M sensor


APA (6th Edition):

ZHE, W. (2014). TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY. (Thesis). National University of Singapore. Retrieved from http://scholarbank.nus.edu.sg/handle/10635/118901

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

ZHE, WU. “TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY.” 2014. Thesis, National University of Singapore. Accessed May 07, 2021. http://scholarbank.nus.edu.sg/handle/10635/118901.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

ZHE, WU. “TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY.” 2014. Web. 07 May 2021.

Vancouver:

ZHE W. TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY. [Internet] [Thesis]. National University of Singapore; 2014. [cited 2021 May 07]. Available from: http://scholarbank.nus.edu.sg/handle/10635/118901.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

ZHE W. TOWARDS CASUAL APPEARANCE CAPTURE BY REFLECTANCE SYMMETRY. [Thesis]. National University of Singapore; 2014. Available from: http://scholarbank.nus.edu.sg/handle/10635/118901

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

30. Mateo Agulló, Carlos. Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica .

Degree: 2017, University of Alicante

Recently, with the appearance of new low-cost visual sensors capable of acquiring and reconstructing 3D data, and with the development of new methods,… (more)

Subjects/Keywords: Percepción visual 3D; Visión por computador 3D; Detección de forma 3D; Reconocimiento de objetos; Reconocimiento de objetos 3D; Reconocimiento geométrico de objetos; Nubes de puntos; Descripción de características; Supervisión de deformaciones; Percepción visual de deformaciones; Superficies; Curvaturas; RGBD; Algoritmos de visión para manipulación; Sensorizado para manipulación robótica; Manipulación robótica; Interacción Hombre-Robot


APA (6th Edition):

Mateo Agulló, C. (2017). Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica . (Thesis). University of Alicante. Retrieved from http://hdl.handle.net/10045/72265

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Mateo Agulló, Carlos. “Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica .” 2017. Thesis, University of Alicante. Accessed May 07, 2021. http://hdl.handle.net/10045/72265.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Mateo Agulló, Carlos. “Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica .” 2017. Web. 07 May 2021.

Vancouver:

Mateo Agulló C. Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica . [Internet] [Thesis]. University of Alicante; 2017. [cited 2021 May 07]. Available from: http://hdl.handle.net/10045/72265.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Mateo Agulló C. Reconocimiento geométrico de objetos 3D y detección de deformaciones en manipulación robótica . [Thesis]. University of Alicante; 2017. Available from: http://hdl.handle.net/10045/72265

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
