
You searched for subject:(Action Recognition). Showing records 1 – 30 of 166 total matches.


1. Arslan, Ali. Exploring the Role of Motion and Depth in Action Perception.

Degree: PhD, Cognitive Sciences, 2015, Brown University

 Recognizing the content of human actions is an important skill in our adaptation to the environment. We rely on our visual system to constantly interpret… (more)

Subjects/Keywords: action recognition


APA (6th Edition):

Arslan, A. (2015). Exploring the Role of Motion and Depth in Action Perception. (Doctoral Dissertation). Brown University. Retrieved from https://repository.library.brown.edu/studio/item/bdr:674175/

Chicago Manual of Style (16th Edition):

Arslan, Ali. “Exploring the Role of Motion and Depth in Action Perception.” 2015. Doctoral Dissertation, Brown University. Accessed June 17, 2019. https://repository.library.brown.edu/studio/item/bdr:674175/.

MLA Handbook (7th Edition):

Arslan, Ali. “Exploring the Role of Motion and Depth in Action Perception.” 2015. Web. 17 Jun 2019.

Vancouver:

Arslan A. Exploring the Role of Motion and Depth in Action Perception. [Internet] [Doctoral dissertation]. Brown University; 2015. [cited 2019 Jun 17]. Available from: https://repository.library.brown.edu/studio/item/bdr:674175/.

Council of Science Editors:

Arslan A. Exploring the Role of Motion and Depth in Action Perception. [Doctoral Dissertation]. Brown University; 2015. Available from: https://repository.library.brown.edu/studio/item/bdr:674175/
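Each record in this list repeats one set of metadata (author, year, title, degree type, institution, URL) rendered into several citation styles. As a rough illustration of that mapping, the short Python sketch below rebuilds the APA and Vancouver strings for record 1; the Record container and the apa/vancouver helpers are illustrative assumptions, not the repository's actual export code.

    # Sketch only: field names and helpers are assumptions, not the repository's export code.
    from dataclasses import dataclass

    @dataclass
    class Record:
        author_last: str       # e.g. "Arslan"
        author_initials: str   # e.g. "A."
        year: int
        title: str
        degree: str            # e.g. "Doctoral Dissertation"
        institution: str
        url: str

    def apa(r: Record) -> str:
        # APA 6th: Last, Initials (Year). Title. (Degree). Institution. Retrieved from URL
        return (f"{r.author_last}, {r.author_initials} ({r.year}). {r.title}. "
                f"({r.degree}). {r.institution}. Retrieved from {r.url}")

    def vancouver(r: Record, cited: str = "2019 Jun 17") -> str:
        # Vancouver: Last Initials. Title. [Internet] [Degree]. Institution; Year. [cited ...]. Available from: URL.
        initials = r.author_initials.replace(".", "").replace(" ", "")
        degree = r.degree.lower().capitalize()   # "Doctoral Dissertation" -> "Doctoral dissertation"
        return (f"{r.author_last} {initials}. {r.title}. [Internet] [{degree}]. "
                f"{r.institution}; {r.year}. [cited {cited}]. Available from: {r.url}.")

    rec = Record("Arslan", "A.", 2015,
                 "Exploring the Role of Motion and Depth in Action Perception",
                 "Doctoral Dissertation", "Brown University",
                 "https://repository.library.brown.edu/studio/item/bdr:674175/")
    print(apa(rec))        # matches the APA (6th Edition) entry above
    print(vancouver(rec))  # matches the Vancouver entry above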


Virginia Tech

2. AlBahar, Badour A Sh A. Im2Vid: Future Video Prediction for Static Image Action Recognition.

Degree: MS, Electrical and Computer Engineering, 2018, Virginia Tech

 Static image action recognition aims at identifying the action performed in a given image. Most existing static image action recognition approaches use high-level cues present… (more)

Subjects/Keywords: Human Action Recognition; Static Image Action Recognition; Video Action Recognition; Future Video Prediction


APA (6th Edition):

AlBahar, B. A. S. A. (2018). Im2Vid: Future Video Prediction for Static Image Action Recognition. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/83602

Chicago Manual of Style (16th Edition):

AlBahar, Badour A Sh A. “Im2Vid: Future Video Prediction for Static Image Action Recognition.” 2018. Masters Thesis, Virginia Tech. Accessed June 17, 2019. http://hdl.handle.net/10919/83602.

MLA Handbook (7th Edition):

AlBahar, Badour A Sh A. “Im2Vid: Future Video Prediction for Static Image Action Recognition.” 2018. Web. 17 Jun 2019.

Vancouver:

AlBahar BASA. Im2Vid: Future Video Prediction for Static Image Action Recognition. [Internet] [Masters thesis]. Virginia Tech; 2018. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10919/83602.

Council of Science Editors:

AlBahar BASA. Im2Vid: Future Video Prediction for Static Image Action Recognition. [Masters Thesis]. Virginia Tech; 2018. Available from: http://hdl.handle.net/10919/83602


Delft University of Technology

3. Barbadillo Amor, J. Single person pose recognition and tracking:.

Degree: 2010, Delft University of Technology

 The goal of this research is to improve a system capable of detecting and tracking a single person and recognizing poses in real time for controlling a… (more)

Subjects/Keywords: pose recognition; computer vision; human action recognition


APA (6th Edition):

Barbadillo Amor, J. (2010). Single person pose recognition and tracking:. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:944a11fd-eea8-445e-9a46-98855f7766d3

Chicago Manual of Style (16th Edition):

Barbadillo Amor, J. “Single person pose recognition and tracking:.” 2010. Masters Thesis, Delft University of Technology. Accessed June 17, 2019. http://resolver.tudelft.nl/uuid:944a11fd-eea8-445e-9a46-98855f7766d3.

MLA Handbook (7th Edition):

Barbadillo Amor, J. “Single person pose recognition and tracking:.” 2010. Web. 17 Jun 2019.

Vancouver:

Barbadillo Amor J. Single person pose recognition and tracking:. [Internet] [Masters thesis]. Delft University of Technology; 2010. [cited 2019 Jun 17]. Available from: http://resolver.tudelft.nl/uuid:944a11fd-eea8-445e-9a46-98855f7766d3.

Council of Science Editors:

Barbadillo Amor J. Single person pose recognition and tracking:. [Masters Thesis]. Delft University of Technology; 2010. Available from: http://resolver.tudelft.nl/uuid:944a11fd-eea8-445e-9a46-98855f7766d3


NSYSU

4. Pan, Po-Hsun. The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture.

Degree: Master, Electrical Engineering, 2011, NSYSU

 In recent years, automatic human action recognition has been widely researched within the computer vision and image processing communities. To identify human behavior which achieve… (more)

Subjects/Keywords: action recognition; SIMD; CELL; parallelize


APA (6th Edition):

Pan, P. (2011). The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0823111-143005

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Pan, Po-Hsun. “The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture.” 2011. Thesis, NSYSU. Accessed June 17, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0823111-143005.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Pan, Po-Hsun. “The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture.” 2011. Web. 17 Jun 2019.

Vancouver:

Pan P. The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture. [Internet] [Thesis]. NSYSU; 2011. [cited 2019 Jun 17]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0823111-143005.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Pan P. The Optimal Design for Action Recognition Algorithm on Cell Processor Architecture. [Thesis]. NSYSU; 2011. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0823111-143005

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Technology, Sydney

5. Chen, Y. Human action recognition based on key postures.

Degree: 2009, University of Technology, Sydney

 Human motion analysis has gained considerable interest in the computer vision area due to the large number of potential applications and its inherent complexity. Currently,… (more)

Subjects/Keywords: Human action recognition.; Posture.; Computer vision.


APA (6th Edition):

Chen, Y. (2009). Human action recognition based on key postures. (Thesis). University of Technology, Sydney. Retrieved from http://hdl.handle.net/10453/30251

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Y. “Human action recognition based on key postures.” 2009. Thesis, University of Technology, Sydney. Accessed June 17, 2019. http://hdl.handle.net/10453/30251.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Y. “Human action recognition based on key postures.” 2009. Web. 17 Jun 2019.

Vancouver:

Chen Y. Human action recognition based on key postures. [Internet] [Thesis]. University of Technology, Sydney; 2009. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10453/30251.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen Y. Human action recognition based on key postures. [Thesis]. University of Technology, Sydney; 2009. Available from: http://hdl.handle.net/10453/30251

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Dublin City University

6. Jargalsaikhan, Iveel. An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph.

Degree: School of Electronic Engineering; Dublin City University. INSIGHT Centre for Data Analytics; Dublin City University. School of Computing, 2017, Dublin City University

 The task of automatic categorization and localization of human action in video sequences is valuable for a variety of applications such as detecting relevant activities… (more)

Subjects/Keywords: Artificial intelligence; computer vision; action recognition


APA (6th Edition):

Jargalsaikhan, I. (2017). An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph. (Thesis). Dublin City University. Retrieved from http://doras.dcu.ie/21816/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jargalsaikhan, Iveel. “An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph.” 2017. Thesis, Dublin City University. Accessed June 17, 2019. http://doras.dcu.ie/21816/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jargalsaikhan, Iveel. “An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph.” 2017. Web. 17 Jun 2019.

Vancouver:

Jargalsaikhan I. An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph. [Internet] [Thesis]. Dublin City University; 2017. [cited 2019 Jun 17]. Available from: http://doras.dcu.ie/21816/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jargalsaikhan I. An action recognition framework for uncontrolled video capture based on a spatio-temporal video graph. [Thesis]. Dublin City University; 2017. Available from: http://doras.dcu.ie/21816/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Penn State University

7. Felemban, Noor. ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION.

Degree: MS, Computer Science and Engineering, 2016, Penn State University

 With the widespread use of mobile devices with built-in cameras, the number of captured videos has increased. Videos are a rich source of information,… (more)

Subjects/Keywords: wireless network; action recognition; video processing; offloading


APA (6th Edition):

Felemban, N. (2016). ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION. (Masters Thesis). Penn State University. Retrieved from https://etda.libraries.psu.edu/catalog/w0892992w

Chicago Manual of Style (16th Edition):

Felemban, Noor. “ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION.” 2016. Masters Thesis, Penn State University. Accessed June 17, 2019. https://etda.libraries.psu.edu/catalog/w0892992w.

MLA Handbook (7th Edition):

Felemban, Noor. “ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION.” 2016. Web. 17 Jun 2019.

Vancouver:

Felemban N. ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION. [Internet] [Masters thesis]. Penn State University; 2016. [cited 2019 Jun 17]. Available from: https://etda.libraries.psu.edu/catalog/w0892992w.

Council of Science Editors:

Felemban N. ON-DEMAND VIDEO PROCESSING IN WIRELESS NETWORKS AN APPLICATION: ACTION RECOGNITION. [Masters Thesis]. Penn State University; 2016. Available from: https://etda.libraries.psu.edu/catalog/w0892992w

8. Sheikh, Mohammad Masudul Ahsan. A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究.

Degree: Doctor of Engineering (博士(工学)), 2017, Kyushu Institute of Technology / 九州工業大学

In recent years, analyzing human motion and recognizing a performed action from a video sequence has become very important and has been a well-researched topic… (more)

Subjects/Keywords: Action Recognition; DMHI; LBP; Histogram; RLBPD_SELSN_H; D_RLBPD_MEI_H


APA (6th Edition):

Sheikh, M. M. A. (2017). A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究. (Thesis). Kyushu Institute of Technology / 九州工業大学. Retrieved from http://hdl.handle.net/10228/5702

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Sheikh, Mohammad Masudul Ahsan. “A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究.” 2017. Thesis, Kyushu Institute of Technology / 九州工業大学. Accessed June 17, 2019. http://hdl.handle.net/10228/5702.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Sheikh, Mohammad Masudul Ahsan. “A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究.” 2017. Web. 17 Jun 2019.

Vancouver:

Sheikh MMA. A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究. [Internet] [Thesis]. Kyushu Institute of Technology / 九州工業大学; 2017. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10228/5702.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Sheikh MMA. A Study on Human Actions Representation and Recognition : 人の行動の表現と認識に関する研究. [Thesis]. Kyushu Institute of Technology / 九州工業大学; 2017. Available from: http://hdl.handle.net/10228/5702

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Urbana-Champaign

9. Fedorov, Igor. Kinect depth video compression for action recognition.

Degree: MS, 1200, 2014, University of Illinois – Urbana-Champaign

 Since the advent of the Kinect camera, depth videos have become easily accessible to consumers and researchers, allowing a variety of complex classification tasks to… (more)

Subjects/Keywords: Kinect; Depth; Video; Compression; Action; Recognition


APA (6th Edition):

Fedorov, I. (2014). Kinect depth video compression for action recognition. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/49462

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Fedorov, Igor. “Kinect depth video compression for action recognition.” 2014. Thesis, University of Illinois – Urbana-Champaign. Accessed June 17, 2019. http://hdl.handle.net/2142/49462.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Fedorov, Igor. “Kinect depth video compression for action recognition.” 2014. Web. 17 Jun 2019.

Vancouver:

Fedorov I. Kinect depth video compression for action recognition. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2014. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2142/49462.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Fedorov I. Kinect depth video compression for action recognition. [Thesis]. University of Illinois – Urbana-Champaign; 2014. Available from: http://hdl.handle.net/2142/49462

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Missouri – Columbia

10. Gong, Wei. Action recognition via sequence embedding.

Degree: 2011, University of Missouri – Columbia

 [ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT AUTHOR'S REQUEST.] A comb structural exemplar embedding based approach is introduced for action recognition. We propose a… (more)

Subjects/Keywords: computer vision; action recognition; machine learning


APA (6th Edition):

Gong, W. (2011). Action recognition via sequence embedding. (Thesis). University of Missouri – Columbia. Retrieved from http://hdl.handle.net/10355/14908

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gong, Wei. “Action recognition via sequence embedding.” 2011. Thesis, University of Missouri – Columbia. Accessed June 17, 2019. http://hdl.handle.net/10355/14908.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gong, Wei. “Action recognition via sequence embedding.” 2011. Web. 17 Jun 2019.

Vancouver:

Gong W. Action recognition via sequence embedding. [Internet] [Thesis]. University of Missouri – Columbia; 2011. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10355/14908.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gong W. Action recognition via sequence embedding. [Thesis]. University of Missouri – Columbia; 2011. Available from: http://hdl.handle.net/10355/14908

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Georgia Tech

11. Ciptadi, Arridhana. Interactive tracking and action retrieval to support human behavior analysis.

Degree: PhD, Computer Science, 2016, Georgia Tech

 The goal of this thesis is to develop a set of tools for continuous tracking of behavioral phenomena in videos to support human behavior study.… (more)

Subjects/Keywords: Behavior analysis; Tracking; Action recognition; Action retrieval; Attachment security


APA (6th Edition):

Ciptadi, A. (2016). Interactive tracking and action retrieval to support human behavior analysis. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54987

Chicago Manual of Style (16th Edition):

Ciptadi, Arridhana. “Interactive tracking and action retrieval to support human behavior analysis.” 2016. Doctoral Dissertation, Georgia Tech. Accessed June 17, 2019. http://hdl.handle.net/1853/54987.

MLA Handbook (7th Edition):

Ciptadi, Arridhana. “Interactive tracking and action retrieval to support human behavior analysis.” 2016. Web. 17 Jun 2019.

Vancouver:

Ciptadi A. Interactive tracking and action retrieval to support human behavior analysis. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1853/54987.

Council of Science Editors:

Ciptadi A. Interactive tracking and action retrieval to support human behavior analysis. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/54987


University of Technology, Sydney

12. Zare Borzeshi, E. Action recognition by graph embedding and temporal classifiers.

Degree: 2014, University of Technology, Sydney

 With the improved accessibility to an exploding amount of video data and growing demand in a wide range of video analysis applications, video-based action recognition(more)

Subjects/Keywords: Machine learning.; Pattern recognition.; Action recognition.; Time series analysis.


APA (6th Edition):

Zare Borzeshi, E. (2014). Action recognition by graph embedding and temporal classifiers. (Thesis). University of Technology, Sydney. Retrieved from http://hdl.handle.net/10453/28064

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zare Borzeshi, E. “Action recognition by graph embedding and temporal classifiers.” 2014. Thesis, University of Technology, Sydney. Accessed June 17, 2019. http://hdl.handle.net/10453/28064.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zare Borzeshi, E. “Action recognition by graph embedding and temporal classifiers.” 2014. Web. 17 Jun 2019.

Vancouver:

Zare Borzeshi E. Action recognition by graph embedding and temporal classifiers. [Internet] [Thesis]. University of Technology, Sydney; 2014. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10453/28064.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zare Borzeshi E. Action recognition by graph embedding and temporal classifiers. [Thesis]. University of Technology, Sydney; 2014. Available from: http://hdl.handle.net/10453/28064

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Manitoba

13. Naha, Shujon. Zero-shot Learning for Visual Recognition Problems.

Degree: Computer Science, 2015, University of Manitoba

 In this thesis we discuss different aspects of zero-shot learning and propose solutions for three challenging visual recognition problems: 1) unknown object recognition from images… (more)

Subjects/Keywords: Zero-shot Learning; Computer Vision; Object Recognition; Action Recognition; Object Segmentation


APA (6th Edition):

Naha, S. (2015). Zero-shot Learning for Visual Recognition Problems. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/31806

Chicago Manual of Style (16th Edition):

Naha, Shujon. “Zero-shot Learning for Visual Recognition Problems.” 2015. Masters Thesis, University of Manitoba. Accessed June 17, 2019. http://hdl.handle.net/1993/31806.

MLA Handbook (7th Edition):

Naha, Shujon. “Zero-shot Learning for Visual Recognition Problems.” 2015. Web. 17 Jun 2019.

Vancouver:

Naha S. Zero-shot Learning for Visual Recognition Problems. [Internet] [Masters thesis]. University of Manitoba; 2015. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1993/31806.

Council of Science Editors:

Naha S. Zero-shot Learning for Visual Recognition Problems. [Masters Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/31806


University of North Texas

14. Janmohammadi, Siamak. Classifying Pairwise Object Interactions: A Trajectory Analytics Approach.

Degree: 2015, University of North Texas

 We have a huge amount of video data from widely available surveillance cameras and ever-improving technology to record the motion of a moving object… (more)

Subjects/Keywords: action recognition; machine learning; trajectory analysis; supervised classification methods; activity recognition; Human activity recognition.; Pattern recognition systems.; Machine learning.; Electronic surveillance.


APA (6th Edition):

Janmohammadi, S. (2015). Classifying Pairwise Object Interactions: A Trajectory Analytics Approach. (Thesis). University of North Texas. Retrieved from https://digital.library.unt.edu/ark:/67531/metadc801901/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Janmohammadi, Siamak. “Classifying Pairwise Object Interactions: A Trajectory Analytics Approach.” 2015. Thesis, University of North Texas. Accessed June 17, 2019. https://digital.library.unt.edu/ark:/67531/metadc801901/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Janmohammadi, Siamak. “Classifying Pairwise Object Interactions: A Trajectory Analytics Approach.” 2015. Web. 17 Jun 2019.

Vancouver:

Janmohammadi S. Classifying Pairwise Object Interactions: A Trajectory Analytics Approach. [Internet] [Thesis]. University of North Texas; 2015. [cited 2019 Jun 17]. Available from: https://digital.library.unt.edu/ark:/67531/metadc801901/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Janmohammadi S. Classifying Pairwise Object Interactions: A Trajectory Analytics Approach. [Thesis]. University of North Texas; 2015. Available from: https://digital.library.unt.edu/ark:/67531/metadc801901/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of North Texas

15. Santiteerakul, Wasana. Trajectory Analytics.

Degree: 2015, University of North Texas

 The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small-size object… (more)

Subjects/Keywords: trajectory analytics; action recognition; activity recognition; Pattern recognition systems.; Machine learning.; Human activity recognition.; Electronic surveillance.


APA (6th Edition):

Santiteerakul, W. (2015). Trajectory Analytics. (Thesis). University of North Texas. Retrieved from https://digital.library.unt.edu/ark:/67531/metadc801885/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Santiteerakul, Wasana. “Trajectory Analytics.” 2015. Thesis, University of North Texas. Accessed June 17, 2019. https://digital.library.unt.edu/ark:/67531/metadc801885/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Santiteerakul, Wasana. “Trajectory Analytics.” 2015. Web. 17 Jun 2019.

Vancouver:

Santiteerakul W. Trajectory Analytics. [Internet] [Thesis]. University of North Texas; 2015. [cited 2019 Jun 17]. Available from: https://digital.library.unt.edu/ark:/67531/metadc801885/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Santiteerakul W. Trajectory Analytics. [Thesis]. University of North Texas; 2015. Available from: https://digital.library.unt.edu/ark:/67531/metadc801885/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

16. Murray, Thomas Simmons. Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models.

Degree: 2015, Johns Hopkins University

 This dissertation explores computational methods to address the problem of physics-based modeling and ultimately doing inference from data in multiple modalities where there exists large… (more)

Subjects/Keywords: ultrasound; active acoustics; human action recognition; action recognition; machine learning; deep belief network; hidden Markov model


APA (6th Edition):

Murray, T. S. (2015). Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models. (Thesis). Johns Hopkins University. Retrieved from http://jhir.library.jhu.edu/handle/1774.2/37891

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Murray, Thomas Simmons. “Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models.” 2015. Thesis, Johns Hopkins University. Accessed June 17, 2019. http://jhir.library.jhu.edu/handle/1774.2/37891.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Murray, Thomas Simmons. “Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models.” 2015. Web. 17 Jun 2019.

Vancouver:

Murray TS. Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models. [Internet] [Thesis]. Johns Hopkins University; 2015. [cited 2019 Jun 17]. Available from: http://jhir.library.jhu.edu/handle/1774.2/37891.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Murray TS. Human Action Recognition from Active Acoustics: Physics Modelling for Representation Learning and Inference Using Generative Probabilistic Graphical Models. [Thesis]. Johns Hopkins University; 2015. Available from: http://jhir.library.jhu.edu/handle/1774.2/37891

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Edinburgh

17. Kalogeiton, Vasiliki. Localizing spatially and temporally objects and actions in videos.

Degree: PhD, 2018, University of Edinburgh

 The rise of deep learning has facilitated remarkable progress in video understanding. This thesis addresses three important tasks of video understanding: video object detection, joint… (more)

Subjects/Keywords: action localization; action recognition; object detection; video analysis; computer vision; deep learning; machine learning


APA (6th Edition):

Kalogeiton, V. (2018). Localizing spatially and temporally objects and actions in videos. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/28984

Chicago Manual of Style (16th Edition):

Kalogeiton, Vasiliki. “Localizing spatially and temporally objects and actions in videos.” 2018. Doctoral Dissertation, University of Edinburgh. Accessed June 17, 2019. http://hdl.handle.net/1842/28984.

MLA Handbook (7th Edition):

Kalogeiton, Vasiliki. “Localizing spatially and temporally objects and actions in videos.” 2018. Web. 17 Jun 2019.

Vancouver:

Kalogeiton V. Localizing spatially and temporally objects and actions in videos. [Internet] [Doctoral dissertation]. University of Edinburgh; 2018. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1842/28984.

Council of Science Editors:

Kalogeiton V. Localizing spatially and temporally objects and actions in videos. [Doctoral Dissertation]. University of Edinburgh; 2018. Available from: http://hdl.handle.net/1842/28984


University of Texas – Austin

18. Yu, Qingfeng. Human extremity detection and its applications in action detection and recognition.

Degree: Electrical and Computer Engineering, 2009, University of Texas – Austin

 It is proven that locations of internal body joints are sufficient visual cues to characterize human motion. In this dissertation I propose that locations of… (more)

Subjects/Keywords: Human extremity detection; Action detection; Contour tracking; Human action recognition; Motion detection


APA (6th Edition):

Yu, Q. (2009). Human extremity detection and its applications in action detection and recognition. (Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/7650

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yu, Qingfeng. “Human extremity detection and its applications in action detection and recognition.” 2009. Thesis, University of Texas – Austin. Accessed June 17, 2019. http://hdl.handle.net/2152/7650.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yu, Qingfeng. “Human extremity detection and its applications in action detection and recognition.” 2009. Web. 17 Jun 2019.

Vancouver:

Yu Q. Human extremity detection and its applications in action detection and recognition. [Internet] [Thesis]. University of Texas – Austin; 2009. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2152/7650.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yu Q. Human extremity detection and its applications in action detection and recognition. [Thesis]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/7650

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Boston University

19. Ma, Shugao. Learning space-time structures for action recognition and localization.

Degree: PhD, Computer Science, 2016, Boston University

 In this thesis the problem of automatic human action recognition and localization in videos is studied. In this problem, our goal is to recognize the… (more)

Subjects/Keywords: Computer science; Action localization; Action recognition; Computer vision; Deep learning; Machine learning; Space-time structures


APA (6th Edition):

Ma, S. (2016). Learning space-time structures for action recognition and localization. (Doctoral Dissertation). Boston University. Retrieved from http://hdl.handle.net/2144/17720

Chicago Manual of Style (16th Edition):

Ma, Shugao. “Learning space-time structures for action recognition and localization.” 2016. Doctoral Dissertation, Boston University. Accessed June 17, 2019. http://hdl.handle.net/2144/17720.

MLA Handbook (7th Edition):

Ma, Shugao. “Learning space-time structures for action recognition and localization.” 2016. Web. 17 Jun 2019.

Vancouver:

Ma S. Learning space-time structures for action recognition and localization. [Internet] [Doctoral dissertation]. Boston University; 2016. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2144/17720.

Council of Science Editors:

Ma S. Learning space-time structures for action recognition and localization. [Doctoral Dissertation]. Boston University; 2016. Available from: http://hdl.handle.net/2144/17720


University of Manchester

20. Wang, Qian. ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING.

Degree: 2018, University of Manchester

 Traditional supervised visual recognition methods require a large number of annotated examples for each class of interest. The collection and annotation of visual data (e.g., images… (more)

Subjects/Keywords: Zero-shot learning; Human action recognition; Object recognition; Semantic representation; Multi-label learning


APA (6th Edition):

Wang, Q. (2018). ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:312951

Chicago Manual of Style (16th Edition):

Wang, Qian. “ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING.” 2018. Doctoral Dissertation, University of Manchester. Accessed June 17, 2019. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:312951.

MLA Handbook (7th Edition):

Wang, Qian. “ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING.” 2018. Web. 17 Jun 2019.

Vancouver:

Wang Q. ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING. [Internet] [Doctoral dissertation]. University of Manchester; 2018. [cited 2019 Jun 17]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:312951.

Council of Science Editors:

Wang Q. ZERO-SHOT VISUAL RECOGNITION VIA LATENT EMBEDDING LEARNING. [Doctoral Dissertation]. University of Manchester; 2018. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:312951


University of Manchester

21. Wang, Qian. Zero-shot visual recognition via latent embedding learning.

Degree: PhD, 2018, University of Manchester

 Traditional supervised visual recognition methods require a large number of annotated examples for each class of interest. The collection and annotation of visual data (e.g., images… (more)

Subjects/Keywords: 004; Semantic representation; Zero-shot learning; Human action recognition; Object recognition; Multi-label learning


APA (6th Edition):

Wang, Q. (2018). Zero-shot visual recognition via latent embedding learning. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740350

Chicago Manual of Style (16th Edition):

Wang, Qian. “Zero-shot visual recognition via latent embedding learning.” 2018. Doctoral Dissertation, University of Manchester. Accessed June 17, 2019. https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740350.

MLA Handbook (7th Edition):

Wang, Qian. “Zero-shot visual recognition via latent embedding learning.” 2018. Web. 17 Jun 2019.

Vancouver:

Wang Q. Zero-shot visual recognition via latent embedding learning. [Internet] [Doctoral dissertation]. University of Manchester; 2018. [cited 2019 Jun 17]. Available from: https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740350.

Council of Science Editors:

Wang Q. Zero-shot visual recognition via latent embedding learning. [Doctoral Dissertation]. University of Manchester; 2018. Available from: https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740350


University of Texas – Austin

22. Gupta, Sonal. Activity retrieval in closed captioned videos.

Degree: Computer Sciences, 2009, University of Texas – Austin

 Recognizing activities in real-world videos is a difficult problem exacerbated by background clutter, changes in camera angle & zoom, occlusion and rapid camera movements. Large… (more)

Subjects/Keywords: Activity Recognition; Action Recognition; Video Retrieval; Machine Learning; Computer Vision; Multimedia; Closed Captions


APA (6th Edition):

Gupta, S. (2009). Activity retrieval in closed captioned videos. (Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2009-08-305

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gupta, Sonal. “Activity retrieval in closed captioned videos.” 2009. Thesis, University of Texas – Austin. Accessed June 17, 2019. http://hdl.handle.net/2152/ETD-UT-2009-08-305.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gupta, Sonal. “Activity retrieval in closed captioned videos.” 2009. Web. 17 Jun 2019.

Vancouver:

Gupta S. Activity retrieval in closed captioned videos. [Internet] [Thesis]. University of Texas – Austin; 2009. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2152/ETD-UT-2009-08-305.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gupta S. Activity retrieval in closed captioned videos. [Thesis]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/ETD-UT-2009-08-305

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Dalhousie University

23. Salmon, Joshua. HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION.

Degree: PhD, Department of Psychology and Neuroscience, 2013, Dalhousie University

Manuscript-based dissertation. One introductory chapter, one concluding chapter, and five manuscripts (seven chapters in total).

In our environment we do two things with objects: identify… (more)

Subjects/Keywords: object recognition; identification; action; manipulable; manipulability; naming; categorization; photographs


APA (6th Edition):

Salmon, J. (2013). HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION. (Doctoral Dissertation). Dalhousie University. Retrieved from http://hdl.handle.net/10222/35395

Chicago Manual of Style (16th Edition):

Salmon, Joshua. “HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION.” 2013. Doctoral Dissertation, Dalhousie University. Accessed June 17, 2019. http://hdl.handle.net/10222/35395.

MLA Handbook (7th Edition):

Salmon, Joshua. “HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION.” 2013. Web. 17 Jun 2019.

Vancouver:

Salmon J. HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION. [Internet] [Doctoral dissertation]. Dalhousie University; 2013. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10222/35395.

Council of Science Editors:

Salmon J. HOW MANIPULABILITY (GRASPABILITY AND FUNCTIONAL USAGE) INFLUENCES OBJECT IDENTIFICATION. [Doctoral Dissertation]. Dalhousie University; 2013. Available from: http://hdl.handle.net/10222/35395


Universidade do Rio Grande do Norte

24. Bezerra, Giuliana Silva. A framework for investigating the use of face features to identify spontaneous emotions .

Degree: 2014, Universidade do Rio Grande do Norte

 Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interface. Following this trend, the use… (more)

Subjects/Keywords: Facial expression recognition; Face biometrics; Emotion analysis; Action units


APA (6th Edition):

Bezerra, G. S. (2014). A framework for investigating the use of face features to identify spontaneous emotions . (Thesis). Universidade do Rio Grande do Norte. Retrieved from http://repositorio.ufrn.br/handle/123456789/19595

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bezerra, Giuliana Silva. “A framework for investigating the use of face features to identify spontaneous emotions .” 2014. Thesis, Universidade do Rio Grande do Norte. Accessed June 17, 2019. http://repositorio.ufrn.br/handle/123456789/19595.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bezerra, Giuliana Silva. “A framework for investigating the use of face features to identify spontaneous emotions .” 2014. Web. 17 Jun 2019.

Vancouver:

Bezerra GS. A framework for investigating the use of face features to identify spontaneous emotions . [Internet] [Thesis]. Universidade do Rio Grande do Norte; 2014. [cited 2019 Jun 17]. Available from: http://repositorio.ufrn.br/handle/123456789/19595.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bezerra GS. A framework for investigating the use of face features to identify spontaneous emotions . [Thesis]. Universidade do Rio Grande do Norte; 2014. Available from: http://repositorio.ufrn.br/handle/123456789/19595

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Universidade do Rio Grande do Norte

25. Bezerra, Giuliana Silva. A framework for investigating the use of face features to identify spontaneous emotions .

Degree: 2014, Universidade do Rio Grande do Norte

 Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interface. Following this trend, the use… (more)

Subjects/Keywords: Facial expression recognition; Face biometrics; Emotion analysis; Action units


APA (6th Edition):

Bezerra, G. S. (2014). A framework for investigating the use of face features to identify spontaneous emotions . (Masters Thesis). Universidade do Rio Grande do Norte. Retrieved from http://repositorio.ufrn.br/handle/123456789/19595

Chicago Manual of Style (16th Edition):

Bezerra, Giuliana Silva. “A framework for investigating the use of face features to identify spontaneous emotions .” 2014. Masters Thesis, Universidade do Rio Grande do Norte. Accessed June 17, 2019. http://repositorio.ufrn.br/handle/123456789/19595.

MLA Handbook (7th Edition):

Bezerra, Giuliana Silva. “A framework for investigating the use of face features to identify spontaneous emotions .” 2014. Web. 17 Jun 2019.

Vancouver:

Bezerra GS. A framework for investigating the use of face features to identify spontaneous emotions . [Internet] [Masters thesis]. Universidade do Rio Grande do Norte; 2014. [cited 2019 Jun 17]. Available from: http://repositorio.ufrn.br/handle/123456789/19595.

Council of Science Editors:

Bezerra GS. A framework for investigating the use of face features to identify spontaneous emotions . [Masters Thesis]. Universidade do Rio Grande do Norte; 2014. Available from: http://repositorio.ufrn.br/handle/123456789/19595


Anna University

26. Gomathi V. Multi view based human action recognition and behavior understanding using shape features;.

Degree: 2013, Anna University

Humans have the ability to recognize an event from a single still image. It is the natural tendency of human beings to give more… (more)

Subjects/Keywords: Human action recognition; Triangulated shape orientation context; centroid orientation context


APA (6th Edition):

V, G. (2013). Multi view based human action recognition and behavior understanding using shape features;. (Thesis). Anna University. Retrieved from http://shodhganga.inflibnet.ac.in/handle/10603/11691

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

V, Gomathi. “Multi view based human action recognition and behavior understanding using shape features;.” 2013. Thesis, Anna University. Accessed June 17, 2019. http://shodhganga.inflibnet.ac.in/handle/10603/11691.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

V, Gomathi. “Multi view based human action recognition and behavior understanding using shape features;.” 2013. Web. 17 Jun 2019.

Vancouver:

V G. Multi view based human action recognition and behavior understanding using shape features;. [Internet] [Thesis]. Anna University; 2013. [cited 2019 Jun 17]. Available from: http://shodhganga.inflibnet.ac.in/handle/10603/11691.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

V G. Multi view based human action recognition and behavior understanding using shape features;. [Thesis]. Anna University; 2013. Available from: http://shodhganga.inflibnet.ac.in/handle/10603/11691

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Technology, Sydney

27. Moghaddam, Z. Towards practical automated human action recognition.

Degree: 2012, University of Technology, Sydney

 Modern video surveillance requires addressing high-level concepts such as humans' actions and activities. Automated human action recognition is an interesting research area, as well as… (more)

Subjects/Keywords: Video surveillance.; Motion perception.; Computer vision.; Human action recognition.


APA (6th Edition):

Moghaddam, Z. (2012). Towards practical automated human action recognition. (Thesis). University of Technology, Sydney. Retrieved from http://hdl.handle.net/10453/30247

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Moghaddam, Z. “Towards practical automated human action recognition.” 2012. Thesis, University of Technology, Sydney. Accessed June 17, 2019. http://hdl.handle.net/10453/30247.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Moghaddam, Z. “Towards practical automated human action recognition.” 2012. Web. 17 Jun 2019.

Vancouver:

Moghaddam Z. Towards practical automated human action recognition. [Internet] [Thesis]. University of Technology, Sydney; 2012. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10453/30247.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Moghaddam Z. Towards practical automated human action recognition. [Thesis]. University of Technology, Sydney; 2012. Available from: http://hdl.handle.net/10453/30247

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

28. Koperski, Michal. Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation.

Degree: Doctorate (Docteur ès), Computer Science (Informatique), 2017, Côte d'Azur

This thesis studies the problem of human action recognition in videos. Action recognition can be defined as the ability to decide… (more)

Subjects/Keywords: Vision par ordinateur; Reconnaissance d'actions; Computer vision; Action recognition; Machine learning


APA (6th Edition):

Koperski, M. (2017). Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation. (Doctoral Dissertation). Côte d'Azur. Retrieved from http://www.theses.fr/2017AZUR4096

Chicago Manual of Style (16th Edition):

Koperski, Michal. “Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation.” 2017. Doctoral Dissertation, Côte d'Azur. Accessed June 17, 2019. http://www.theses.fr/2017AZUR4096.

MLA Handbook (7th Edition):

Koperski, Michal. “Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation.” 2017. Web. 17 Jun 2019.

Vancouver:

Koperski M. Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation. [Internet] [Doctoral dissertation]. Côte d'Azur; 2017. [cited 2019 Jun 17]. Available from: http://www.theses.fr/2017AZUR4096.

Council of Science Editors:

Koperski M. Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale : Human action recognition in videos with local representation. [Doctoral Dissertation]. Côte d'Azur; 2017. Available from: http://www.theses.fr/2017AZUR4096


NSYSU

29. Lin, Tzu-chun. Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit.

Degree: Master, Electrical Engineering, 2010, NSYSU

Action recognition has developed rapidly and has been broadly applied in several sectors, from homeland security, personal property, and home care to the smart environment… (more)

Subjects/Keywords: SIMD; Action Recognition; Embedded computer vision; MMX; Streaming Processing


APA (6th Edition):

Lin, T. (2010). Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0803110-142110

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Tzu-chun. “Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit.” 2010. Thesis, NSYSU. Accessed June 17, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0803110-142110.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Tzu-chun. “Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit.” 2010. Web. 17 Jun 2019.

Vancouver:

Lin T. Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit. [Internet] [Thesis]. NSYSU; 2010. [cited 2019 Jun 17]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0803110-142110.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin T. Implementation of Action Recognition Algorithm on Multiple-Streaming Multimedia Unit. [Thesis]. NSYSU; 2010. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0803110-142110

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Dayton

30. Youssef, Menatoallah M. Hull Convexity Defect Features for Human Action Recognition.

Degree: PhD, Electrical Engineering, 2011, University of Dayton

  Human action recognition is a rapidly developing field in computer vision. Accurate algorithmic modeling of action recognition must contend with a multitude of challenges.… (more)

Subjects/Keywords: Electrical Engineering; Human Action Recognition; Computer Vision; Biometrics; Convex Hulls


APA (6th Edition):

Youssef, M. M. (2011). Hull Convexity Defect Features for Human Action Recognition. (Doctoral Dissertation). University of Dayton. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=dayton1312225825

Chicago Manual of Style (16th Edition):

Youssef, Menatoallah M. “Hull Convexity Defect Features for Human Action Recognition.” 2011. Doctoral Dissertation, University of Dayton. Accessed June 17, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1312225825.

MLA Handbook (7th Edition):

Youssef, Menatoallah M. “Hull Convexity Defect Features for Human Action Recognition.” 2011. Web. 17 Jun 2019.

Vancouver:

Youssef MM. Hull Convexity Defect Features for Human Action Recognition. [Internet] [Doctoral dissertation]. University of Dayton; 2011. [cited 2019 Jun 17]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=dayton1312225825.

Council of Science Editors:

Youssef MM. Hull Convexity Defect Features for Human Action Recognition. [Doctoral Dissertation]. University of Dayton; 2011. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=dayton1312225825
