You searched for subject:(Activity recognition). Showing records 1 – 30 of 346 total matches.

University of Melbourne
1.
Cheng, Weihao.
Accurate and efficient human activity recognition.
Degree: 2018, University of Melbourne
URL: http://hdl.handle.net/11343/224027
Human Activity Recognition (HAR) is a promising technology that enables artificial intelligence systems to identify a user's physical activities such as walking, running, and cycling. Recently, the demand for HAR has been increasing continuously, in pace with the rapid development of ubiquitous computing techniques. Major applications of HAR, including fitness tracking, safety monitoring, and contextual recommendation, are now widespread in people's daily lives. For example, a music app on a smartphone can use HAR to detect the user's current activity and recommend activity-related songs.
State-of-the-art HAR methods are based on machine learning, where a classification model is trained on a dataset to infer a number of predefined activities. The data for HAR is usually in the form of time series, which can be collected by sensors such as accelerometers, microphones, and cameras. In this thesis, we mainly focus on HAR using data from inertial sensors, such as acceleration signals from accelerometers.
A large number of existing studies on HAR aim for high recognition accuracy. However, efficiency is also an important aspect of HAR. In this thesis, we attempt to improve HAR methods in both accuracy and efficiency. Toward this goal, we first devise accurate HAR methods, and then improve the efficiency of HAR while maintaining the accuracy. More specifically, we tackle three problems. The first problem is to accurately recognize the current activity during activity transitions. Existing HAR methods train classification models on tailored time series containing a single activity. However, in practical scenarios, a piece of time series data could capture multiple interleaving activities, causing activity transitions. Thus, recognizing the current activity, i.e., the most recent one, is a critical problem to investigate. The second problem is to accurately predict complex activities from ongoing observations. Many time-critical applications, such as safety monitoring, require early recognition of complex activities, which are performed over a long period of time. Without being fully observed, however, complex activities are hard to recognize due to their complicated patterns. Therefore, predicting complex activities from ongoing observations is an important task to study. The third problem is to improve the energy efficiency of HAR on mobile devices while maintaining high accuracy. Many applications of HAR run on mobile devices. However, due to limited battery capacity, real-time HAR requires minimizing energy cost to extend the operating span of the device. Generally, the cost can be cut by reducing algorithmic computations and sensing frequencies. The challenge is to find the maximal cost reduction that still preserves high recognition accuracy.
In this thesis, we present a set of algorithms to address the proposed problems. The key contributions of the thesis can be summarized as follows:
1. We propose a method to accurately recognize the current activity in the presence of…
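The classification pipeline this abstract describes (windowed sensor time series fed to a trained model) can be sketched roughly as follows. This is only an illustration of the general approach, not Cheng's method; the centroids and the toy acceleration stream are invented.

```python
import math

def window_features(samples, width, step):
    """Slide a fixed-width window over a 1-D accelerometer stream and
    emit simple statistical features (mean, standard deviation) per window."""
    feats = []
    for start in range(0, len(samples) - width + 1, step):
        w = samples[start:start + width]
        mean = sum(w) / width
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / width)
        feats.append((mean, std))
    return feats

def nearest_centroid(feat, centroids):
    """Label a window with the activity whose feature centroid is closest."""
    return min(centroids, key=lambda a: math.dist(feat, centroids[a]))

# Hypothetical per-activity centroids, as if learned from labeled data.
centroids = {"walking": (1.0, 0.12), "running": (2.5, 0.2), "cycling": (1.8, 0.05)}
stream = [0.9, 1.1, 1.0, 0.8, 2.4, 2.6, 2.7, 2.3]   # toy acceleration magnitudes
for feat in window_features(stream, width=4, step=4):
    print(nearest_centroid(feat, centroids))         # walking, then running
```

In a real HAR system the centroid rule would be replaced by a trained classifier, but the window-then-classify structure is the same.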
Subjects/Keywords: activity recognition
APA (6th Edition):
Cheng, W. (2018). Accurate and efficient human activity recognition. (Doctoral Dissertation). University of Melbourne. Retrieved from http://hdl.handle.net/11343/224027
Chicago Manual of Style (16th Edition):
Cheng, Weihao. “Accurate and efficient human activity recognition.” 2018. Doctoral Dissertation, University of Melbourne. Accessed March 07, 2021.
http://hdl.handle.net/11343/224027.
MLA Handbook (7th Edition):
Cheng, Weihao. “Accurate and efficient human activity recognition.” 2018. Web. 07 Mar 2021.
Vancouver:
Cheng W. Accurate and efficient human activity recognition. [Internet] [Doctoral dissertation]. University of Melbourne; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/11343/224027.
Council of Science Editors:
Cheng W. Accurate and efficient human activity recognition. [Doctoral Dissertation]. University of Melbourne; 2018. Available from: http://hdl.handle.net/11343/224027

University of Texas – Austin
2.
Chen, Chao-Yeh.
Learning human activities and poses with interconnected data sources.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/40260
Understanding human actions and poses in images or videos is a challenging problem in computer vision. Related topics include action recognition, pose estimation, human-object interaction, and activity detection. Knowledge of actions and poses could benefit many applications, including video search, surveillance, auto-tagging, event detection, and human-computer interfaces. To understand humans' actions and poses, we need to address several challenges. First, humans can perform an enormous number of poses. For example, simply to move forward, we can crawl, walk, run, or sprint. These poses all look different, and examples are needed to cover the variations. Second, the appearance of a person's pose changes when viewed from different angles. The learned action model needs to cover the variations across views. Third, many actions involve interactions between people and other objects, so we need to consider the appearance change corresponding to that object as well. Fourth, collecting such data for learning is difficult and expensive. Last, even if we can learn a good model for an action, localizing when and where the action happens in a long video remains a difficult problem due to the large search space. My key idea for alleviating these obstacles in learning humans' actions and poses is to discover the underlying patterns that connect the information from different data sources. Why should there be underlying patterns? The intuition is that all people share the same articulated physical structure. Though we can change our pose, there are common constraints on what our poses can be and how they can change over time. All types of human data will follow these rules, which can serve as prior knowledge or regularization in our learning framework. If we can exploit these tendencies, we can extract additional information from data and use it to improve learning of humans' actions and poses.
In particular, we can find patterns for how our pose varies over time, how our appearance looks from a specific view, what our pose is when we interact with objects with certain properties, and how parts of our body configuration are shared across different poses. If we can learn these patterns, they can be used to interconnect and extrapolate knowledge between different data sources. To this end, I propose several new ways to connect human activity data. First, I show how to connect snapshot images and videos by exploring the patterns of how our pose changes over time. Building on this idea, I explore how to connect humans' poses across multiple views by discovering the correlations between different poses and the latent factors that affect viewpoint variations. In addition, I consider whether there are also patterns connecting our poses and nearby objects when we interact with them. Furthermore, I explore how we can utilize the predicted interaction as a cue to better address existing recognition problems…
Advisors/Committee Members: Grauman, Kristen Lorraine, 1979- (advisor), Aggarwal, Jake K. (committee member), Mooney, Raymond J. (committee member), Ramanan, Deva (committee member), Stone, Peter (committee member).
Subjects/Keywords: Activity recognition; Activity detection
APA (6th Edition):
Chen, C. (2016). Learning human activities and poses with interconnected data sources. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/40260
Chicago Manual of Style (16th Edition):
Chen, Chao-Yeh. “Learning human activities and poses with interconnected data sources.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 07, 2021.
http://hdl.handle.net/2152/40260.
MLA Handbook (7th Edition):
Chen, Chao-Yeh. “Learning human activities and poses with interconnected data sources.” 2016. Web. 07 Mar 2021.
Vancouver:
Chen C. Learning human activities and poses with interconnected data sources. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/2152/40260.
Council of Science Editors:
Chen C. Learning human activities and poses with interconnected data sources. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/40260

University of Illinois – Urbana-Champaign
3.
Arora, Rohan R.
Metrics for analytics and visualization of big data with applications to activity recognition.
Degree: MS, Electrical & Computer Engr, 2016, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/90953
Activity recognition systems detect the hidden actions of an agent from sensor measurements made on the agent's actions and the environmental conditions. For such systems, metrics are important for both performance evaluation and visualization. In this thesis, such metrics are developed and illustrated. For human activity recognition datasets, a reporting structure is described to visualize the metrics in a systematic manner. The other contribution of this thesis is a visualization tool for estimating the orientation (attitude) of a rigid body from streaming motion sensor (accelerometer and gyroscope) data. A feedback particle filter (FPF) is implemented algorithmically to solve the estimation problem.
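The feedback particle filter used in the thesis is a specific algorithm not sketched here; the snippet below substitutes a generic bootstrap particle filter for a single orientation angle, just to illustrate the predict-update-resample loop behind sensor-fusion attitude estimation. All noise parameters and signals are invented.

```python
import math, random

random.seed(0)

def particle_filter(gyro_rates, angle_meas, n=500, dt=0.01,
                    process_std=0.02, meas_std=0.1):
    """Bootstrap particle filter for a scalar orientation angle:
    propagate particles with the gyro rate, weight them by the
    accelerometer-derived angle measurement, then resample."""
    particles = [random.gauss(0.0, 0.5) for _ in range(n)]
    estimates = []
    for rate, z in zip(gyro_rates, angle_meas):
        # Predict: integrate the gyro rate plus process noise.
        particles = [p + rate * dt + random.gauss(0, process_std) for p in particles]
        # Update: weight each particle by the Gaussian likelihood of z.
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample particles in proportion to their weights.
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

# Constant true angle of 0.3 rad, stationary gyro, noisy angle measurements.
est = particle_filter([0.0] * 50, [0.3 + random.gauss(0, 0.1) for _ in range(50)])
print(round(est[-1], 2))
```

The FPF differs by applying a feedback control term to each particle instead of importance-weight resampling, but the filtering problem it solves is the one shown.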
Advisors/Committee Members: Mehta, Prashant G (advisor).
Subjects/Keywords: activity; recognition; metrics
APA (6th Edition):
Arora, R. R. (2016). Metrics for analytics and visualization of big data with applications to activity recognition. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/90953
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Arora, Rohan R. “Metrics for analytics and visualization of big data with applications to activity recognition.” 2016. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021.
http://hdl.handle.net/2142/90953.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Arora, Rohan R. “Metrics for analytics and visualization of big data with applications to activity recognition.” 2016. Web. 07 Mar 2021.
Vancouver:
Arora RR. Metrics for analytics and visualization of big data with applications to activity recognition. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/2142/90953.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Arora RR. Metrics for analytics and visualization of big data with applications to activity recognition. [Thesis]. University of Illinois – Urbana-Champaign; 2016. Available from: http://hdl.handle.net/2142/90953
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Georgia Tech
4.
Ahsan, Unaiza.
Leveraging mid-level representations for complex activity recognition.
Degree: PhD, Interactive Computing, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/61199
Dynamic scene understanding requires learning representations of the components of the scene, including objects, environments, actions, and events. Complex activity recognition from images and videos requires annotating large datasets with action labels, which is a tedious and expensive task. Thus, there is a need to design a mid-level or intermediate feature representation which does not require millions of labels, yet is able to generalize to semantic-level recognition of activities in visual data. This thesis makes three contributions in this regard. First, we propose an event concept-based intermediate representation which learns concepts via the Web and uses this representation to identify events even with a single labeled example. To demonstrate the strength of the proposed approaches, we contribute two diverse social event datasets to the community. We then present a use case of event concepts as a mid-level representation that generalizes to sentiment recognition in diverse social event images. Second, we propose to train Generative Adversarial Networks (GANs) on video frames (which does not require labels), use the trained discriminator from the GAN as an intermediate representation, and fine-tune it on a smaller labeled video activity dataset to recognize actions in videos. This unsupervised pre-training step avoids any manual feature engineering, video frame encoding, or searching for the best video frame sampling technique. Our third contribution is a self-supervised learning approach on videos that exploits both spatial and temporal coherency to learn feature representations on video data without any supervision. We demonstrate the transfer learning capability of this model on smaller labeled datasets. We present comprehensive experimental analysis of the self-supervised model to provide insights into the unsupervised pre-training paradigm and how it can help with activity recognition on target datasets which the model has never seen during training.
Advisors/Committee Members: Essa, Irfan (advisor), Hays, James (committee member), De Choudhury, Munmun (committee member), Kira, Zsolt (committee member), Parikh, Devi (committee member), Sun, Chen (committee member).
Subjects/Keywords: Activity recognition; Self-supervised learning; Event recognition
APA (6th Edition):
Ahsan, U. (2019). Leveraging mid-level representations for complex activity recognition. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61199
Chicago Manual of Style (16th Edition):
Ahsan, Unaiza. “Leveraging mid-level representations for complex activity recognition.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/61199.
MLA Handbook (7th Edition):
Ahsan, Unaiza. “Leveraging mid-level representations for complex activity recognition.” 2019. Web. 07 Mar 2021.
Vancouver:
Ahsan U. Leveraging mid-level representations for complex activity recognition. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/61199.
Council of Science Editors:
Ahsan U. Leveraging mid-level representations for complex activity recognition. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61199

Hong Kong University of Science and Technology
5.
Hu, Hao.
Learning-based human activity recognition.
Degree: 2012, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-7811 ; https://doi.org/10.14711/thesis-b1206070 ; http://repository.ust.hk/ir/bitstream/1783.1-7811/1/th_redirect.html
Recognizing human activities has been an extensive and interesting research topic since the early 1980s. However, when deploying human activity recognition solutions in the real world, the solutions we provide must satisfy a series of requirements. We would expect our solution to learn a reasonable model from as little training data as possible. We also hope our solution can deal with the complex relationships which exist in human activities. As with almost all machine learning solutions, we would hope that our solution is scalable and efficient. In this thesis, we start by surveying related work and then study solutions to specific challenges that are important for deploying activity recognition systems in the real world. Specifically, we first analyze how to recognize multiple activities in the physical world, especially when such activities have concurrent and interleaving relationships. Next, we extend this framework to the problem of Web query classification, exploiting the relatedness of search queries to activities with interleaving relationships, and propose a context-aware query classification algorithm. Secondly, we study the problem of abnormal activity recognition. Abnormal activities occur rarely, and it is difficult to collect enough training data about them. We design an algorithm based on the Hierarchical Dirichlet Process and the one-class Support Vector Machine to recognize abnormal activities when training data is scarce. Finally, when deploying activity recognition systems in the real world, it is impractical to collect enough training data for every scenario, especially when data must be collected for different persons and even different actions.
To solve this problem, we have developed an activity recognition framework based on transfer learning, which borrows useful information from previously collected and learned activity recognition domains and reuses it in the new target domain. Furthermore, we have conducted extensive experiments to demonstrate the effectiveness of our proposed approaches on real-world datasets collected from smart homes and sensor environments. We have also shown that our context-aware query classification algorithm outperforms state-of-the-art query classification approaches on real-world search engine query logs. At the end of this thesis, we discuss possible directions and problems for future work and extensions.
Subjects/Keywords: Human activity recognition; Machine learning
APA (6th Edition):
Hu, H. (2012). Learning-based human activity recognition. (Thesis). Hong Kong University of Science and Technology. Retrieved from http://repository.ust.hk/ir/Record/1783.1-7811 ; https://doi.org/10.14711/thesis-b1206070 ; http://repository.ust.hk/ir/bitstream/1783.1-7811/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Hu, Hao. “Learning-based human activity recognition.” 2012. Thesis, Hong Kong University of Science and Technology. Accessed March 07, 2021.
http://repository.ust.hk/ir/Record/1783.1-7811 ; https://doi.org/10.14711/thesis-b1206070 ; http://repository.ust.hk/ir/bitstream/1783.1-7811/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Hu, Hao. “Learning-based human activity recognition.” 2012. Web. 07 Mar 2021.
Vancouver:
Hu H. Learning-based human activity recognition. [Internet] [Thesis]. Hong Kong University of Science and Technology; 2012. [cited 2021 Mar 07].
Available from: http://repository.ust.hk/ir/Record/1783.1-7811 ; https://doi.org/10.14711/thesis-b1206070 ; http://repository.ust.hk/ir/bitstream/1783.1-7811/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Hu H. Learning-based human activity recognition. [Thesis]. Hong Kong University of Science and Technology; 2012. Available from: http://repository.ust.hk/ir/Record/1783.1-7811 ; https://doi.org/10.14711/thesis-b1206070 ; http://repository.ust.hk/ir/bitstream/1783.1-7811/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Western Carolina University
6.
Shouse, Kirke.
Activity recognition using Grey-Markov model.
Degree: 2011, Western Carolina University
URL: http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=9032
Activity Recognition (AR) is the process of identifying the actions and goals of one or more agents of interest. AR techniques have been applied to both large- and small-scale activity identification; examples include genetic algorithms and Markov chains. This research proposes a novel method, the Grey-Markov Model (GMM), for detection and prediction of pre-defined activities. The research had three objectives: first, to establish a database of pre-defined human activities; second, to establish the Grey-Markov Model; and finally, to verify the model's performance using the established database. This thesis describes the methodology of the test setup and data collection, as well as the procedures of model generation. Experimental results of the model performance verification tests are also reported.
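A Grey-Markov model combines a grey prediction model with a Markov chain; the grey component is omitted here, and this sketch covers only the Markov half, which is the part doing activity prediction: estimating first-order transition probabilities from an observed activity sequence and predicting the most probable next activity. The activity labels are invented for illustration.

```python
from collections import Counter, defaultdict

def transition_matrix(sequence):
    """Estimate first-order Markov transition probabilities from an
    observed sequence of activity labels."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

def predict_next(matrix, current):
    """Predict the most probable next activity given the current one."""
    return max(matrix[current], key=matrix[current].get)

seq = ["sit", "stand", "walk", "walk", "stand", "sit", "stand", "walk"]
m = transition_matrix(seq)
print(predict_next(m, "stand"))  # "walk" follows "stand" most often here
```

In the full GMM, the grey model would first smooth or forecast the underlying signal before the Markov step refines the state prediction.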
Advisors/Committee Members: James Zhang (advisor).
Subjects/Keywords: Human activity recognition; Markov processes
APA (6th Edition):
Shouse, K. (2011). Activity recognition using Grey-Markov model. (Masters Thesis). Western Carolina University. Retrieved from http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=9032
Chicago Manual of Style (16th Edition):
Shouse, Kirke. “Activity recognition using Grey-Markov model.” 2011. Masters Thesis, Western Carolina University. Accessed March 07, 2021.
http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=9032.
MLA Handbook (7th Edition):
Shouse, Kirke. “Activity recognition using Grey-Markov model.” 2011. Web. 07 Mar 2021.
Vancouver:
Shouse K. Activity recognition using Grey-Markov model. [Internet] [Masters thesis]. Western Carolina University; 2011. [cited 2021 Mar 07].
Available from: http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=9032.
Council of Science Editors:
Shouse K. Activity recognition using Grey-Markov model. [Masters Thesis]. Western Carolina University; 2011. Available from: http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=9032

University of Edinburgh
7.
Vafeias, Efstathios.
Recognising activities by jointly modelling actions and their effects.
Degree: PhD, 2015, University of Edinburgh
URL: http://hdl.handle.net/1842/14182
With the rapid increase in adoption of consumer technologies, including inexpensive but powerful hardware, robotics appears poised at the cusp of widespread deployment in human environments. A key barrier that still prevents this is machine understanding and interpretation of human activity through a perceptual medium such as computer vision, or RGB-D sensing such as with the Microsoft Kinect sensor. This thesis contributes novel video-based methods for activity recognition. Specifically, the focus is on activities that involve interactions between the human user and objects in the environment. Based on streams of poses and object tracking, machine learning models are provided to recognize several such interactions. The thesis's main contributions are (1) a new model for interactions that explicitly learns the human-object relationships through a latent distributed representation, (2) a practical framework for labeling chains of manipulation actions in temporally extended activities, and (3) an unsupervised sequence segmentation technique that relies on slow feature analysis and spectral clustering. These techniques are validated by experiments on publicly available data sets, such as the Cornell CAD-120 activity corpus, one of the most extensive publicly available data sets of this kind that is also annotated with ground-truth information. Our experiments demonstrate the advantages of the proposed methods over state-of-the-art alternatives from the recent literature on sequence classifiers.
Subjects/Keywords: 006.3; robotic vision; activity recognition
APA (6th Edition):
Vafeias, E. (2015). Recognising activities by jointly modelling actions and their effects. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/14182
Chicago Manual of Style (16th Edition):
Vafeias, Efstathios. “Recognising activities by jointly modelling actions and their effects.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed March 07, 2021.
http://hdl.handle.net/1842/14182.
MLA Handbook (7th Edition):
Vafeias, Efstathios. “Recognising activities by jointly modelling actions and their effects.” 2015. Web. 07 Mar 2021.
Vancouver:
Vafeias E. Recognising activities by jointly modelling actions and their effects. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1842/14182.
Council of Science Editors:
Vafeias E. Recognising activities by jointly modelling actions and their effects. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/14182

Louisiana State University
8.
Karki, Manohar.
Symbolic and Deep Learning Based Data Representation Methods for Activity Recognition and Image Understanding at Pixel Level.
Degree: PhD, Computer Sciences, 2017, Louisiana State University
URL: etd-06162017-134908 ; https://digitalcommons.lsu.edu/gradschool_dissertations/4298
Efficient representation of large amounts of data, particularly images and video, helps in the analysis, processing, and overall understanding of the data. In this work, we present two frameworks that encapsulate the information present in such data. First, we present an automated symbolic framework to recognize particular activities in real time from videos. The framework uses regular expressions to symbolically represent (possibly infinite) sets of motion characteristics obtained from a video. It is a uniform framework that handles trajectory-based and periodic articulated activities and provides polynomial-time graph algorithms for fast recognition. The regular expressions representing motion characteristics can either be provided manually or learnt automatically from positive and negative examples of strings (that describe dynamic behavior) using offline automata learning frameworks. Confidence measures are associated with recognitions using the Levenshtein distance between a string representing a motion signature and the regular expression describing an activity. We have used our framework to recognize trajectory-based activities such as vehicle turns (U-turns, left and right turns, and K-turns), vehicle starts and stops, and people running and walking, as well as periodic articulated activities such as digging, waving, boxing, and clapping, in videos from the VIRAT public dataset, the KTH dataset, and a set of videos obtained from YouTube.
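The idea of matching symbolic motion signatures against regular expressions, with an edit-distance-based confidence for near misses, can be sketched briefly. The symbol alphabet (L = turning left, F = moving forward, R = turning right) and the U-turn pattern are invented for illustration, not taken from the thesis.

```python
import re

def levenshtein(a, b):
    """Edit distance between two motion-symbol strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical motion alphabet: F = forward, L = left turn, R = right turn.
U_TURN = re.compile(r"F+L{2,}F+")   # forward, a sustained left turn, forward again

signature = "FFFLLLFFF"
if U_TURN.fullmatch(signature):
    print("u-turn recognized")

# Confidence for a near-miss signature via edit distance to an exemplar string.
d = levenshtein("FFLLRFFF", "FFFLLLFFF")
confidence = 1 - d / max(len("FFLLRFFF"), len("FFFLLLFFF"))
print(round(confidence, 2))  # 0.78
```

The thesis measures distance between a string and the expression itself; distance to an exemplar accepted string, as here, is a simpler stand-in for the same idea.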
Next, we present a core sampling framework that is able to use activation maps from several layers of a Convolutional Neural Network (CNN) as features to another neural network, using transfer learning to provide an understanding of an input image. The intermediate map responses of a CNN contain information about an image that can be used to extract contextual knowledge about it. Our framework creates a representation that combines features from the test data with the contextual knowledge gained from the responses of a pretrained network, processes it, and feeds it to a separate Deep Belief Network. We use this representation to extract more information from an image at the pixel level, thereby gaining understanding of the whole image. We experimentally demonstrate the usefulness of our framework by using a pretrained VGG-16 model to perform segmentation on the BAERI dataset of Synthetic Aperture Radar (SAR) imagery and on the CamVid dataset.
Using this framework, we also reconstruct images by removing noise from noisy character images. The reconstructed images are encoded using quadtrees, which can be an efficient representation for learning from sparse features. Handwritten character images are quite susceptible to noise, so preprocessing stages that make the raw data cleaner can improve the efficacy of their use. We improve upon the efficiency of probabilistic quadtrees by using a pixel-level classifier to extract the character pixels and remove noise from the images. The pixel-level denoiser uses a pretrained CNN trained on a…
Subjects/Keywords: activity recognition; deep learning
APA (6th Edition):
Karki, M. (2017). Symbolic and Deep Learning Based Data Representation Methods for Activity Recognition and Image Understanding at Pixel Level. (Doctoral Dissertation). Louisiana State University. Retrieved from etd-06162017-134908 ; https://digitalcommons.lsu.edu/gradschool_dissertations/4298

Iowa State University
9.
Rahman, Mohammed Shaiqur.
Activity recognition and animation of activities of daily living.
Degree: 2020, Iowa State University
URL: https://lib.dr.iastate.edu/etd/18383
► Activities of Daily Living (ADL) can give us information about an individual’s health, both physical and mental. They are captured using sensors and then processed…
(more)
▼ Activities of Daily Living (ADL) can give us information about an individual's health, both physical and mental. They are captured using sensors and then processed and recognized into different activities. Activity recognition is the process of understanding a person's movement and actions. In this work, we develop a language with a simple grammar that describes an activity and use it to recognize the activity. We call this language the Activities of Daily Living Description Language, or A(DL)2 for short. Even after an activity has been recognized, the data it represents is still digital data, and it would take some expertise and time to understand it. To overcome this problem, a system can be built that visualizes and animates individuals' activity in real time without violating privacy. This will not only help in understanding the current state of the individual but will also help those in charge of monitoring them remotely, such as nurses, doctors, and family members, thereby rendering better care and support, especially to elderly people. We propose a real-time activity recognition and animation system that recognizes and animates an individual's activity. We experimented with one of the basic ADLs, walking, and found the results satisfactory. An individual's location is tracked using sensors and sent to the recognition system, which decides the type of activity in real time by using the language to describe it; the data is then sent to a visualization system that animates the activity. When fully developed, this system intends to serve the purpose of providing better health care and immediate support to people in need.
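As an illustration of the description-language idea, here is a toy rule set in the spirit of A(DL)2; the actual grammar is not reproduced in the abstract, so the predicates, thresholds, and the `recognize` helper below are all invented:

```python
# Each activity is named by a predicate over the speeds derived from
# consecutive (x, y, t) location samples. Thresholds are assumptions.

def speed(p, q):
    """Average speed between two (x, y, t) samples, in units/s."""
    dx, dy, dt = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return ((dx * dx + dy * dy) ** 0.5) / dt

RULES = {
    "walking":  lambda v: 0.5 <= v <= 2.0,   # m/s, hypothetical bounds
    "standing": lambda v: v < 0.5,
}

def recognize(samples):
    """Label the activity if every consecutive pair satisfies one rule."""
    speeds = [speed(a, b) for a, b in zip(samples, samples[1:])]
    for name, ok in RULES.items():
        if all(ok(v) for v in speeds):
            return name
    return "unknown"

track = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (2.2, 0.0, 2.0)]
print(recognize(track))  # walking
```

The point of such a language is that the recognizer stays declarative: adding an ADL means adding a rule, not retraining a model.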
Subjects/Keywords: Activity Recognition; ADL; Animation
APA (6th Edition):
Rahman, M. S. (2020). Activity recognition and animation of activities of daily living. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/18383

University of Melbourne
10.
Li, Han.
Low-cost leaving home activity recognition using mobile sensing.
Degree: 2017, University of Melbourne
URL: http://hdl.handle.net/11343/129509
► Leaving home activity recognition (LHAR) is essential in context-aware applications. For example, on a rainy day, a smartphone can remind a user to bring an…
(more)
▼ Leaving home activity recognition (LHAR) is essential in context-aware applications. For example, on a rainy day, a smartphone can remind a user to bring an umbrella when the leaving home activity is recognized. However, research in this field is substantially lacking. Most existing studies require extra hardware, such as sensors installed at home, to help recognize such activities, which limits their applicability. With the ubiquity of mobile sensing techniques, it has become feasible for a smartphone to sense the ambient environment and a user's current context. In this thesis, we develop a low-cost system using mobile sensing for timely recognition of leaving home activities. To the best of our knowledge, we are the first to recognize leaving home activities using only sensors on smartphones. Overall, our system can recognize leaving home activities within 20 seconds after the home door is closed, with a precision of 93.1% and a recall of 100%.
Recognizing leaving home activities while leveraging only sensors on smartphones is challenging in two aspects: 1) the diversity of home environments results in inconsistent features, which significantly affects recognition performance; and 2) mobile sensing is restricted by the limited resources on smartphones, e.g., power and computation capability.
To overcome these limitations, we first investigate sensors available on commodity smartphones and find that features extracted from WiFi, barometer, cell tower, magnetic field sensor, and accelerometer readings are relatively robust for recognizing leaving home activities. Second, due to the variety of residential environments, we propose a sensor selection algorithm that adaptively selects the most suitable sensors for each home to personalize the training. Both classification performance and sensing cost are taken into consideration, giving the user the option to trade power consumption for recognition accuracy, and vice versa. Inspired by the observation that a leaving home activity usually involves walking, we activate the power-hungry sensors to start recognition only when a walking event is detected. Thus, we reduce the sensing cost by 76.65%.
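The per-home sensor selection described above can be sketched as a greedy accuracy-versus-power trade-off. The thesis does not give its algorithm in the abstract, so the gain-per-cost heuristic, the numbers, and the power budget below are all invented for illustration:

```python
# Hypothetical (accuracy gain, power cost in mW) per sensor; the greedy
# loop adds the best gain-per-mW sensor that still fits the budget and
# skips sensors that would exceed it.

SENSORS = {
    "wifi":          (0.40, 30),
    "barometer":     (0.25, 5),
    "cell_tower":    (0.15, 10),
    "magnetometer":  (0.10, 8),
    "accelerometer": (0.20, 6),
}

def select_sensors(budget_mw):
    chosen, used = [], 0
    remaining = dict(SENSORS)
    while remaining:
        # best gain-per-mW among the sensors not yet considered
        best = max(remaining, key=lambda s: remaining[s][0] / remaining[s][1])
        gain, cost = remaining.pop(best)
        if used + cost <= budget_mw:
            chosen.append(best)
            used += cost
    return chosen, used

chosen, used = select_sensors(budget_mw=20)
print(chosen, used)  # ['barometer', 'accelerometer', 'magnetometer'] 19
```

Raising or lowering `budget_mw` is the knob that trades power consumption for recognition accuracy, mirroring the user option mentioned in the abstract.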
Subjects/Keywords: mobile sensing; activity recognition; leaving home activity recognition
APA (6th Edition):
Li, H. (2017). Low-cost leaving home activity recognition using mobile sensing. (Masters Thesis). University of Melbourne. Retrieved from http://hdl.handle.net/11343/129509

University of North Texas
11.
Janmohammadi, Siamak.
Classifying Pairwise Object Interactions: A Trajectory Analytics Approach.
Degree: 2015, University of North Texas
URL: https://digital.library.unt.edu/ark:/67531/metadc801901/
► We have a huge amount of video data from extensively available surveillance cameras and increasingly growing technology to record the motion of a moving object…
(more)
▼ We have a huge amount of video data from widely available surveillance cameras and a growing capability to record the motion of a moving object in the form of trajectory data. With the proliferation of location-enabled devices, ongoing growth in smartphone penetration, and advances in image processing techniques, tracking moving objects is becoming ever more achievable. In this work, we explore some domain-independent qualitative and quantitative features of raw trajectory (spatio-temporal) data in videos captured by a fixed, single wide-angle-view camera sensor in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms, and show how each feature affects the learning algorithms' overall accuracy, either as a single factor or confounded with others.
Advisors/Committee Members: Buckles, Bill P., 1942-, Huang, Yan, Namuduri, Kamesh.
Subjects/Keywords: action recognition; machine learning; trajectory analysis; supervised classification methods; activity recognition; Human activity recognition.; Pattern recognition systems.; Machine learning.; Electronic surveillance.

University of North Texas
12.
Santiteerakul, Wasana.
Trajectory Analytics.
Degree: 2015, University of North Texas
URL: https://digital.library.unt.edu/ark:/67531/metadc801885/
► The numerous surveillance videos recorded by a single stationary wide-angle-view camera persuade the use of a moving point as the representation of each small-size object…
(more)
▼ The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations that can represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to obtain an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of such sequences, we use an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained via crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
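The tokenization step above can be sketched with a deliberately tiny relation alphabet; the thesis defines its own richer set of atomic relations, so the three tokens here (A = approaching, R = receding, S = stable distance) are invented stand-ins:

```python
# Turn a pair of concurrent 2D trajectories into a relationship string:
# one token per step, from the change in inter-agent distance.

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def relation_sequence(t1, t2, eps=1e-6):
    tokens = []
    for (a0, b0), (a1, b1) in zip(zip(t1, t2), zip(t1[1:], t2[1:])):
        d0, d1 = dist(a0, b0), dist(a1, b1)
        if d1 < d0 - eps:
            tokens.append("A")   # agents got closer
        elif d1 > d0 + eps:
            tokens.append("R")   # agents moved apart
        else:
            tokens.append("S")   # distance held steady
    return "".join(tokens)

t1 = [(0, 0), (1, 0), (2, 0), (3, 0)]
t2 = [(4, 0), (3, 0), (4, 0), (6, 0)]
print(relation_sequence(t1, t2))  # "ASR": approach, hold, recede
```

Strings like `"ASR"` are then the inputs one would feed to a clustering step such as k-medians to group interaction patterns.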
Advisors/Committee Members: Buckles, Bill P., 1942-, Swigger, Kathleen M., Mikler, Armin, Huang, Yan.
Subjects/Keywords: trajectory analytics; action recognition; activity recognition; Pattern recognition systems.; Machine learning.; Human activity recognition.; Electronic surveillance.

Georgia Tech
13.
Haresamudram, Harish.
The role of representations in human activity recognition.
Degree: MS, Electrical and Computer Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62706
► We investigate the role of representations in sensor based human activity recognition (HAR). In particular, we develop convolutional and recurrent autoencoder architectures for feature learning…
(more)
▼ We investigate the role of representations in sensor-based human activity recognition (HAR). In particular, we develop convolutional and recurrent autoencoder architectures for feature learning and compare their performance to a distribution-based representation as well as a supervised deep learning representation based on the DeepConvLSTM architecture. This is motivated by the promises deep learning methods offer: they learn end-to-end, eliminate the need for hand-crafted features, and generalize well across tasks and datasets. The choice of studying unsupervised learning methods is motivated by the fact that they afford the possibility of learning meaningful representations without the need for labeled data. Such representations allow for leveraging large, unlabeled datasets for feature and transfer learning. The study is performed on five datasets which are diverse in terms of the number of subjects, activities, and settings. The analysis is performed from a wearables standpoint, considering factors such as memory footprint, the effect of dimensionality, and computation time. We find that the convolutional and recurrent autoencoder based representations outperform the distribution-based representation on all datasets. Additionally, we conclude that autoencoder based representations offer comparable performance to the supervised DeepConvLSTM based representation. On larger datasets with multiple sensors, such as Opportunity and PAMAP2, the convolutional and recurrent autoencoder based representations are observed to be highly effective. Resource-constrained scenarios justify the use of the distribution-based representation, which has low computational costs and memory requirements. Finally, when the number of sensors is low, we observe that vanilla autoencoder based representations perform well.
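The distribution-based baseline compared against above can be sketched as simple per-window statistics over a sensor channel; the specific statistics chosen here (mean, standard deviation, min, max) are an assumption, not the exact feature set from the thesis:

```python
import statistics

def window_features(window):
    """Collapse a window of samples into a small distribution summary."""
    return [
        statistics.fmean(window),   # central tendency
        statistics.pstdev(window),  # spread
        min(window),
        max(window),
    ]

def sliding_windows(signal, size, step):
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

# One accelerometer axis, segmented into non-overlapping windows of 4.
accel_x = [0.0, 0.1, 0.2, 0.1, 0.0, -0.1, -0.2, -0.1]
feats = [window_features(w) for w in sliding_windows(accel_x, size=4, step=4)]
```

Each window collapses to a fixed-length vector regardless of window size, which is exactly why such representations are cheap in memory and computation on wearables.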
Advisors/Committee Members: Ploetz, Thomas (advisor), Anderson, David V. (advisor), Essa, Irfan (committee member), Vela, Patricio (committee member).
Subjects/Keywords: Unsupervised learning; Human activity recognition; Autoencoder models
APA (6th Edition):
Haresamudram, H. (2019). The role of representations in human activity recognition. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62706

Nelson Mandela Metropolitan University
14.
[No author].
A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills.
Degree: Faculty of Science, 2015, Nelson Mandela Metropolitan University
URL: http://hdl.handle.net/10948/6204
► Fundamental movement skills (FMSs) are considered to be one of the essential phases of motor skill development. The proper development of FMSs allows children to…
(more)
▼ Fundamental movement skills (FMSs) are considered one of the essential phases of motor skill development. The proper development of FMSs allows children to participate in more advanced forms of movements and sports. To perform an FMS correctly, children need to learn the right way of performing it. By making use of technology, a system can be developed to help facilitate the learning of FMSs. The objective of the research was to propose an effective natural user interface (NUI) architecture for detecting FMSs using the Kinect. In order to achieve the stated objective, an investigation into FMSs and the challenges faced when teaching them was presented. An investigation into NUIs was also presented, including the merits of the Kinect as the most appropriate device to facilitate the detection of an FMS. An NUI architecture was proposed that uses the Kinect to facilitate the detection of an FMS, and a framework was implemented from the design of the architecture. The successful implementation of the framework provides evidence that the design of the proposed architecture is feasible. An instance of the framework incorporating the jump FMS was used as a case study in the development of a prototype that detects the correct and incorrect performance of a jump. The evaluation of the prototype proved the following:
- The developed prototype was effective in detecting the correct and incorrect performance of the jump FMS; and
- The implemented framework was robust for the incorporation of an FMS.
The successful implementation of the prototype shows that an effective NUI architecture using the Kinect can be used to facilitate the detection of FMSs. The proposed architecture provides a structured way of developing a system using the Kinect to facilitate the detection of FMSs, allowing developers to add future FMSs to the system.
This dissertation therefore makes the following contributions:
- An experimental design to evaluate the effectiveness of a prototype that detects FMSs;
- A robust framework that incorporates FMSs; and
- An effective NUI architecture to facilitate the detection of fundamental movement skills using the Kinect.
Subjects/Keywords: Human activity recognition; Human-computer interaction
APA (6th Edition):
[No author]. (2015). A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills. (Thesis). Nelson Mandela Metropolitan University. Retrieved from http://hdl.handle.net/10948/6204

University of Houston
15.
Sharma, Sarthak 1994-.
Device Free Activity Recognition using Ultra-Wideband Radio Communication.
Degree: MS, Computer Science, 2018, University of Houston
URL: http://hdl.handle.net/10657/3303
► Human Activity Recognition (HAR) is a fundamental building block in many Internet of Things (IoT) applications. Although there has been a lot of interest in…
(more)
▼ Human Activity Recognition (HAR) is a fundamental building block in many Internet of Things (IoT) applications. Although there has been a lot of interest in HAR, research in non-intrusive activity recognition is still in its nascent stages. This research investigates the capability of Ultra-Wideband (UWB) communication technology to be used for HAR. In this work, UWB radio devices are placed in the periphery of a monitored area. This setup infers user activities without the need for any additional sensor or physical device. Packets are exchanged between these UWB devices, and received packets are used to obtain information about the environment. The key idea is that these received packets are affected by environmental modifications caused by human activities. We collect Channel Impulse Response (CIR) data from the received packets of the UWB signals. We then use machine learning algorithms to classify the activity (standing, sitting, lying) being performed.
The experiments show that by using CIR data as features we can classify simple activities such as standing, sitting, and lying, and detect when the room is empty, with an accuracy of 95%. To compare this performance, we trained classification models using Wi-Fi Channel State Information (CSI) and found that UWB CIR significantly outperformed Wi-Fi CSI in activity classification for all models. This study also includes an application of this system: caloric expenditure estimation over a time period. We use HAR to infer the pose and the time spent in each pose, and use models from the literature to estimate the caloric expenditure for each pose. Our approach reports 32% more calories than what is reported by commercial devices, which are known to severely under-report calories when the subjects are not very active.
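The classification step can be sketched with a nearest-centroid classifier over CIR-derived feature vectors; the abstract does not name the specific algorithms used, so both the classifier choice and the toy feature vectors below are assumptions for illustration:

```python
# Train: average the feature vectors per activity to get one centroid each.
# Classify: assign a new vector to the activity with the nearest centroid.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled):
    """labelled: {activity: [feature vectors]} -> {activity: centroid}."""
    return {label: centroid(vs) for label, vs in labelled.items()}

def classify(model, x):
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: sqdist(model[label]))

# Invented 2D "CIR features" per activity, purely for demonstration.
model = train({
    "standing": [[0.9, 0.1], [1.0, 0.2]],
    "lying":    [[0.1, 0.8], [0.2, 0.9]],
})
print(classify(model, [0.85, 0.15]))  # standing
```

In practice the feature vectors would be much higher-dimensional summaries of the CIR taps, but the train/classify split is the same.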
Advisors/Committee Members: Gnawali, Omprakash (advisor), Gabriel, Edgar (committee member), Kim, Kyungki (committee member).
Subjects/Keywords: Ultra-Wideband; Device Free; Activity recognition
APA (6th Edition):
Sharma, S. (2018). Device Free Activity Recognition using Ultra-Wideband Radio Communication. (Masters Thesis). University of Houston. Retrieved from http://hdl.handle.net/10657/3303

Delft University of Technology
16.
Breider, Bas (author).
Automatic Recognition of Safety and Performance Related Activities in Motocross.
Degree: 2017, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:d0c04789-61f5-421a-aaf6-078280c336c7
► Motocross is a popular, but dangerous sport: improvements in performance and safety should be made to make it more attractive and less dangerous. By automatically…
(more)
▼ Motocross is a popular but dangerous sport: improvements in performance and safety should be made to make it more attractive and less dangerous. By automatically recognizing the activities of a rider on the track, riders can be informed about dangerous situations and fans can be provided with insights into rider performance. The goal of this study is to develop and validate an automatic activity recognition methodology that can determine safety- and performance-related activities in motocross. A 3D accelerometer and gyroscope were used to collect movement data of the rider and motorcycle. Time and frequency domain features were extracted and used to evaluate several machine-learning classifiers: a decision tree, a k-nearest neighbor model, a support vector machine, and a multilayer perceptron neural network. These classifiers were evaluated on accuracy, precision, recall, and speed to show overall classifier performance in real time and to identify classification patterns for individual activities. The results were validated for multiple riders at different types of motocross tracks to test the generalizability of the approach. Overall accuracy showed no large differences between the individual classifiers (74%-78% ± 6.8%). Similar results were found when the approach was validated with new riders and tracks (73%-79% and 68%-72%). The neural network classifier showed the highest precision for the safety-related activities: stopping and falling (82%-95%). However, low precision was found for the performance-related activities: jumping, turning, and driving straight (20%-78%). To conclude, the neural network approach can be used for the detection of safety-related activities, but more data from different riders is needed to confirm the proposed approach.
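The time- and frequency-domain feature extraction mentioned above can be sketched over one IMU window using only the standard library; the particular features (mean, energy, dominant frequency bin) are assumptions, not the thesis's exact feature set:

```python
import cmath
import math

def dft_magnitudes(window):
    """Magnitude spectrum via a direct DFT (fine for short windows)."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def features(window):
    n = len(window)
    mean = sum(window) / n                      # time domain
    energy = sum(x * x for x in window) / n     # time domain
    mags = dft_magnitudes(window)
    # dominant non-DC bin in the first half of the spectrum
    dom = max(range(1, n // 2 + 1), key=lambda k: mags[k])
    return mean, energy, dom

# Pure tone at 2 cycles per 8-sample window -> dominant bin 2.
window = [math.sin(2 * math.pi * 2 * t / 8) for t in range(8)]
mean, energy, dom = features(window)
```

A real pipeline would use an FFT and many more windows per axis, but the feature vector fed to each classifier has this shape.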
Advisors/Committee Members: van Gemert, Jan (mentor), van der Helm, Frans (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Activity Recognition; Machine Learning; Sport; Motocross
APA (6th Edition):
Breider, B. (2017). Automatic Recognition of Safety and Performance Related Activities in Motocross. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:d0c04789-61f5-421a-aaf6-078280c336c7
17.
Zhu, Shangyue.
Human activity localization and recognation based on radar sensors for smart homes.
Degree: Thesis (M.S.), 2017, Ball State University
URL: http://cardinalscholar.bsu.edu/handle/123456789/201073
► The smart home is going through a rapid development in which predicting behaviors provides convenient service in human daily life. Tracking a user and recognizing…
(more)
▼ The smart home is undergoing rapid development, in which predicting behaviors provides convenient services in daily life. Tracking a user and recognizing activities in a living space are mature technologies in the smart home that solve many real-life problems such as elderly monitoring and healthcare. In this thesis, we propose a smart home system with two main functions: tracking the user's location and recognizing the activity.
For location tracking, we present DiLT, a noninvasive distance-based user localization and tracking solution that collects data using commodity off-the-shelf ultrasonic sensors for minimal invasion of privacy in smart systems. For activity recognition, we present an ambient radar sensor based solution to recognize the activities that humans normally perform in indoor environments. In many experiments in a laboratory room, our proposed algorithms have demonstrated high effectiveness and accuracy in a real environment.
Advisors/Committee Members: Wu, Shaoen, 1976- (advisor).
Subjects/Keywords: Acoustic localization.; Human activity recognition.; Home automation.
APA (6th Edition):
Zhu, S. (2017). Human activity localization and recognation based on radar sensors for smart homes. (Masters Thesis). Ball State University. Retrieved from http://cardinalscholar.bsu.edu/handle/123456789/201073

Massey University
18.
Ranhotigmage, Chagitha.
Human activities & posture recognition : innovative algorithm for highly accurate detection rate : a thesis submitted in fulfilment of the requirements for the degree of Master of Engineering in Electronics & Computer Systems Engineering at Massey University, Palmerston North, New Zealand.
Degree: 2013, Massey University
URL: http://hdl.handle.net/10179/4339
► The main purpose of thesis is to introduce new innovative algorithm for “unintentional fall detection” with 100% accuracy of detecting falls on hard surfaces which…
(more)
▼ The main purpose of this thesis is to introduce a new algorithm for unintentional fall detection that achieves 100% accuracy in detecting falls on hard surfaces, which can cause severe and sometimes fatal injuries. The thesis also explains how the same algorithm detects deliberate human activities, such as running and walking, with near-perfect accuracy; a subset of the algorithm is used for posture recognition as well.
The algorithm is implemented as real-time detection software in the Java programming language, and a graphical user interface is developed to display human posture and activity information.
Most pre-existing algorithms need an expensive and wide range of sensors to achieve this level of accuracy. This thesis explains how to achieve better accuracy using just one tri-axial accelerometer with a wireless ZigBee communication module. Most other sensor types violate human privacy, making them unethical to use in the residence of a vulnerable elderly or sick individual, and the majority are very expensive compared to a tri-axial accelerometer, which costs only around NZ$5.
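The abstract does not disclose the algorithm's internals, but accelerometer-based fall detection is commonly built on a threshold over the acceleration magnitude. A minimal, purely illustrative sketch; the threshold value and sample data below are invented, not taken from the thesis:

```python
import math

# Hypothetical impact threshold in g; the thesis's actual parameters are not disclosed.
IMPACT_THRESHOLD_G = 2.5

def magnitude(sample):
    """Euclidean norm of one tri-axial accelerometer sample (ax, ay, az) in g."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, threshold=IMPACT_THRESHOLD_G):
    """Return True if any sample's magnitude exceeds the impact threshold."""
    return any(magnitude(s) > threshold for s in samples)

# Walking-like data stays near 1 g; a fall shows a sharp spike.
walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]
fall = walking + [(1.8, 2.0, 2.4)]
print(detect_fall(walking), detect_fall(fall))  # False True
```

A real detector would also look at post-impact inactivity and orientation change, but the thresholded magnitude is the usual starting point.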
Subjects/Keywords: Human activity recognition;
Mathematical models;
Algorithm
APA (6th Edition):
Ranhotigmage, C. (2013). Human activities & posture recognition : innovative algorithm for highly accurate detection rate : a thesis submitted in fulfilment of the requirements for the degree of Master of Engineering in Electronics & Computer Systems Engineering at Massey University, Palmerston North, New Zealand. (Thesis). Massey University. Retrieved from http://hdl.handle.net/10179/4339

University of New South Wales
19.
Wei, Bo.
Embedded Sensing for Acoustic Classification, Activity Recognition and Localisation.
Degree: Computer Science & Engineering, 2015, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/54910 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:36161/SOURCE02?view=true
▼ Embedded sensing aims to use low-cost computing, sensing and communication components to realise various sensing tasks, and it has been used successfully in different applications. Large amounts of sensing data contain abundant information but create huge overheads for resource-limited embedded nodes. Furthermore, interference from other sources introduces noise, which degrades sensing performance. In this thesis, we apply the Sparse Approximation-based Classification (SAC) method and electronically switched directional (ESD) antennas to address the challenges of limited resources in embedded systems and of interference. Three problems are addressed. The first is to reduce the overhead of real-time classification in Acoustic Sensor Networks (ASNs). The main challenges of in-network classification in ASNs include effective feature selection, intensive computation requirements and high noise levels. To address them, we propose a sparse-representation-based featureless, low-computational-cost, and noise-resilient framework for in-network classification in ASNs, which makes the computation feasible on resource-constrained ASN platforms. The second problem is to make radio-based device-free activity recognition robust to radio frequency interference (RFI). Device-free activity recognition has the advantage that it avoids the privacy concerns of cameras and the subjects do not have to carry a device. Recently, it has been shown that channel state information (CSI) can be used for activity recognition in a device-free setting. We investigate the impact of RFI on device-free CSI-based location-oriented activity recognition and propose a number of SAC-based fusion methods to mitigate that impact and improve recognition performance. The third problem is to reduce the impact of multipath propagation on radio tomographic imaging (RTI). RTI enables device-free localisation of people and objects in many challenging environments and situations, but its localisation accuracy suffers from complicated multipath propagation behaviours in radio links. We propose using inexpensive and energy-efficient ESD antennas to improve the localisation accuracy of RTI, and we implement a directional RTI system to understand how directional antennas can improve it.
Advisors/Committee Members: Chou, Chun Tung, Computer Science & Engineering, Faculty of Engineering, UNSW, Hu, Wen, Computer Science & Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: Activity recognition; Embedded sensing; Acoustic classification; Localisation
APA (6th Edition):
Wei, B. (2015). Embedded Sensing for Acoustic Classification, Activity Recognition and Localisation. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/54910 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:36161/SOURCE02?view=true

Oklahoma State University
20.
Kallur, Dharmendra Chandrashekar.
Human Localization and Activity Recognition Using Distributed Motion Sensors.
Degree: Electrical Engineering, 2014, Oklahoma State University
URL: http://hdl.handle.net/11244/14924
▼ The purpose of this thesis is to localize a human and recognize his/her activities in indoor environments using distributed motion sensors. We use a test bed simulating a mock apartment for our experiments. The two parts of the thesis are localization and activity recognition of the elderly person, and we explain the complete hardware and software setup used to provide these services. The hardware setup consists of two types of sensor end nodes and two sink nodes. The end nodes are Passive Infrared (PIR) sensor nodes, containing PIR sensors for motion detection, and GridEye sensor nodes, containing thermal array sensors. Data from these sensors are acquired using Arduino boards and transmitted via Xbee modules to the sink nodes, which consist of receiver Xbee modules connected to a computer. The sensor nodes were strategically placed at different locations inside the apartment. The thermal array sensor provides 64 pixel temperature values, while the PIR sensor provides binary information about motion in its field of view. Since the thermal array sensors provide more information, they were placed in large rooms such as the living room and bedroom, while PIR sensors were placed in the kitchen and bathroom. Initially, the GridEye sensors are calibrated to obtain the transformation between pixel and real-world coordinates. Data from the sensors were processed on a computer, enabling us to localize the human inside the apartment; we compared the localization accuracy against ground truth obtained from the OptiTrack system. GridEye sensors were also used for activity recognition. Basic human activities such as sitting, sleeping, standing and walking were recognized: a Support Vector Machine (SVM) recognizes sitting and sleeping, while the human's gait speed is used to distinguish standing from walking. Experiments were performed to obtain the classification accuracy for these activities.
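As a rough illustration of the gait-speed rule mentioned above, the following sketch labels an upright subject as walking or standing from estimated floor positions; the speed threshold and the position format are assumptions made for illustration, not details taken from the thesis:

```python
import math

# Hypothetical cut-off between standing drift and deliberate walking.
WALK_SPEED_THRESHOLD = 0.3  # metres per second, illustrative value

def classify_upright(positions, timestamps, threshold=WALK_SPEED_THRESHOLD):
    """Label an upright subject 'walking' or 'standing' from estimated (x, y)
    positions such as those produced by a thermal-array localization stage."""
    if len(positions) < 2:
        return "standing"
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dt = timestamps[-1] - timestamps[0]
    speed = math.hypot(x1 - x0, y1 - y0) / dt  # average speed over the window
    return "walking" if speed > threshold else "standing"

print(classify_upright([(0.0, 0.0), (2.0, 0.0)], [0.0, 2.0]))  # walking
print(classify_upright([(0.0, 0.0), (0.1, 0.0)], [0.0, 2.0]))  # standing
```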
Advisors/Committee Members: Sheng, Weihua (advisor), Cheng, Qi (committee member), Ramakumar, Rama (committee member).
Subjects/Keywords: activity recognition; home automation; indoor human localization
APA (6th Edition):
Kallur, D. C. (2014). Human Localization and Activity Recognition Using Distributed Motion Sensors. (Thesis). Oklahoma State University. Retrieved from http://hdl.handle.net/11244/14924

Oklahoma State University
21.
Li, Gang.
ASCCbot: An Open Mobile Robot Platform.
Degree: School of Electrical & Computer Engineering, 2011, Oklahoma State University
URL: http://hdl.handle.net/11244/10237
▼ ASCCbot, an open mobile platform built in the ASCC lab, is presented in this thesis. The hardware and software design of the ASCCbot makes it a robust, extendable and duplicable robot platform suitable for most mobile robotics research, including navigation, mapping and localization. ROS is adopted as the major software framework, which not only makes the ASCCbot an open-source project but also extends its network functions so that multi-robot network applications can easily be implemented on multiple ASCCbots. Collaborative localization is designed to test the network features of the ASCCbot. A telepresence robot is built based on the ASCCbot, with a Kinect-based human gesture recognition method implemented for intuitive human-robot interaction. For the telepresence robot, a GUI is also created that presents basic control commands, video streaming and 2D metric map rendering. Last but not least, semantic mapping through human activity recognition is proposed as a novel approach to semantic mapping. For the human activity recognition part, a power-aware wireless motion sensor is designed and evaluated. The overall semantic mapping system is explained and tested in a mock apartment. The experimental results show that the activity recognition results are reliable and that the semantic map updating process creates an accurate semantic map matching the real furniture layout. In summary, the ASCCbot is a versatile mobile robot platform with both basic and feature functions implemented, upon which complex high-level functions can be built. With its duplicability, extendability and open-source nature, the ASCCbot will be very useful for mobile robotics research.
Advisors/Committee Members: Sheng, Weihua (advisor), Cheng, Qi (committee member), Hagan, Martin (committee member).
Subjects/Keywords: activity recognition; mobile robot; semantic mapping; telepresence
APA (6th Edition):
Li, G. (2011). ASCCbot: An Open Mobile Robot Platform. (Thesis). Oklahoma State University. Retrieved from http://hdl.handle.net/11244/10237

University of Adelaide
22.
Abedin, Alireza.
Deep Learning Methods for Human Activity Recognition using Wearables.
Degree: 2020, University of Adelaide
URL: http://hdl.handle.net/2440/129607
▼ Wearable sensors provide an infrastructure-less multi-modal sensing method. Current trends point to a pervasive integration of wearables into our lives, with these devices providing the basis for wellness and healthcare applications across rehabilitation, caring for a growing older population, and improving human performance. Fundamental to these applications is our ability to automatically and accurately recognise human activities from the often tiny sensors embedded in wearables. In this dissertation, we consider the problem of human activity recognition (HAR) using multi-channel time-series data captured by wearable sensors.
Our collective know-how regarding the solution of HAR problems with wearables has progressed immensely through the use of deep learning paradigms. Nevertheless, this field still faces unique methodological challenges. As such, this dissertation focuses on developing end-to-end deep learning frameworks that promote HAR application opportunities using wearable sensor technologies and mitigate specific associated challenges. The investigated problems cover a diverse range of HAR challenges and span from fully supervised to unsupervised problem domains.
To enhance automatic feature extraction from multi-channel time-series data for HAR, the problem of learning enriched and highly discriminative activity feature representations with deep neural networks is considered. Accordingly, novel end-to-end network elements are designed which: (a) exploit the latent relationships between multi-channel sensor modalities and specific activities, (b) employ effective regularisation through data-agnostic augmentation for multi-modal sensor data streams, and (c) incorporate optimisation objectives that encourage minimal intra-class representation differences while maximising inter-class differences, yielding more discriminative features.
To promote new opportunities in HAR with emerging battery-less sensing platforms, the problem of learning from irregularly sampled and temporally sparse readings captured by passive sensing modalities is considered. For the first time, an efficient set-based deep learning framework is developed to address this problem. The framework learns directly from the generated data, bypassing the conventional interpolation pre-processing stage.
To address the multi-class window problem and create potential solutions for the challenging task of concurrent human activity recognition, the problem of enabling simultaneous prediction of multiple activities for sensory segments is considered. The flexibility provided by emerging set learning concepts is further leveraged to introduce a novel formulation that treats HAR as a set prediction problem and elegantly caters for segments carrying sensor data from multiple activities. To address this set prediction problem, a unified deep HAR architecture is designed that: (a) incorporates a set objective to learn mappings…
Advisors/Committee Members: Ranasinghe, Damith Chinthana (advisor), School of Computer Science (school).
Subjects/Keywords: Deep learning; human activity recognition; wearable sensors
APA (6th Edition):
Abedin, A. (2020). Deep Learning Methods for Human Activity Recognition using Wearables. (Thesis). University of Adelaide. Retrieved from http://hdl.handle.net/2440/129607

California State University – Sacramento
23.
Ghorpade, Madhuri.
A novel semi-supervised learning framework for new human activity recognition.
Degree: MS, Computer Science, 2020, California State University – Sacramento
URL: http://hdl.handle.net/10211.3/217496
▼ Human Activity Recognition (HAR) has been an attractive research topic for its applications in areas such as healthcare, smart environments, assisted living, home monitoring, and personal fitness assistants. Traditional HAR systems are used to recognize a common set of activities such as walking, running, sitting, and cycling. However, these Activity Recognition (AR) systems ask users to provide a large number of annotations (labels) for each activity to achieve acceptable performance. This limitation makes traditional AR systems difficult to extend to new activities of interest when labelled training data is limited, and it is impractical to assume that users will provide large amounts of annotations, since labeling activities is a time-consuming and labor-intensive process. Being able to learn new activities with a limited amount of training data is in demand for practical AR systems.
This master's project addresses this issue by introducing SMART, a novel semi-supervised learning framework for recognizing new activities that leverages knowledge of the mappings between existing activities and their semantic attributes, using only a limited amount of labelled training data. The model operates in three spaces. (1) Activity space: a recurrent neural network (i.e., LSTM) is trained to transform each activity instance to its semantic attribute representation and to extract neural embeddings. (2) Semantic attribute space: based on the activity-attribute knowledge graph, the top-k most likely candidate activities are identified. (3) Embedding space: density-based clustering (i.e., DBSCAN) is performed on the embeddings of all instances belonging to the top-k candidate activities.
The model is evaluated using the REALDISP activity recognition dataset [1], with 33 physical activities performed by 17 different users. Extensive experiments on real-world data showed that SMART outperformed state-of-the-art approaches on various metrics for effectively recognizing new/emerging activities.
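The embedding-space step above, density-based clustering of candidate-activity embeddings, can be illustrated with a minimal DBSCAN. The `eps`/`min_pts` values and the 2-D toy embeddings below are invented for illustration; SMART itself clusters LSTM-derived embeddings:

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Return a cluster label per point (-1 marks noise/outliers)."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:       # not a core point
            labels[i] = -1
            continue
        cluster += 1                  # start a new cluster and expand it
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:       # noise reachable from a core point: border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:    # j is also a core point: keep expanding
                seeds.extend(jn)
    return labels

# Two tight groups of embeddings plus one outlier.
emb = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),
       (9.0, 0.0)]
print(dbscan(emb))  # [0, 0, 0, 1, 1, 1, -1]
```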
Advisors/Committee Members: Chen, Haiquan.
Subjects/Keywords: Human activity recognition; Semi supervised based learning; Embedding based human activity recognition
APA (6th Edition):
Ghorpade, M. (2020). A novel semi-supervised learning framework for new human activity recognition. (Masters Thesis). California State University – Sacramento. Retrieved from http://hdl.handle.net/10211.3/217496

University of Cincinnati
24.
Snyder, Kristian.
Utilizing Convolutional Neural Networks for Specialized Activity Recognition: Classifying Lower Back Pain Risk Prediction During Manual Lifting.
Degree: MS, Engineering and Applied Science: Computer Science, 2020, University of Cincinnati
URL: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1583999458096255
▼ Classification of specialized human activity datasets using methods that do not require manual feature extraction is an underserved area of research in the field of human activity recognition (HAR). In this thesis, we present a convolutional neural network (CNN)-based method to classify a dataset consisting of subjects lifting an object from various positions relative to their bodies, labeled by the level of back-pain risk attributed to the action. Specific improvements over other CNN-based models, for both general and activity-based purposes, include the use of average pooling and dropout layers. Methods to reshape accelerometer and gyroscope sensor data are also presented to encourage the model's use with other datasets. The model was developed on a dataset previously collected by the National Institute for Occupational Safety and Health (NIOSH), consisting of 720 total trials of accelerometer and gyroscope data from subjects lifting an object at various relative distances from the body. In testing, 90.6% accuracy was achieved on the NIOSH lifting dataset, a significant improvement over the other models tested. Saliency results are also presented to investigate the underlying feature extraction and justify the results collected.
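The thesis's exact reshaping procedure is not given in the abstract, but the common approach of slicing multi-channel accelerometer/gyroscope streams into fixed-size windows for a CNN can be sketched as follows; the window length, stride and channel ordering here are assumptions, not details from the thesis:

```python
def make_windows(channels, window_len, stride):
    """Slice equal-length per-channel sample lists (e.g. ax, ay, az, gx, gy, gz)
    into overlapping 2-D windows of shape (channels x window_len), the usual
    input layout for a 1-D/2-D CNN over inertial data."""
    n = len(channels[0])
    windows = []
    for start in range(0, n - window_len + 1, stride):
        windows.append([ch[start:start + window_len] for ch in channels])
    return windows

# Toy stream: 6 channels (3 accelerometer + 3 gyroscope axes), 10 samples each.
acc_gyro = [[float(i + c) for i in range(10)] for c in range(6)]
wins = make_windows(acc_gyro, window_len=4, stride=2)
print(len(wins), len(wins[0]), len(wins[0][0]))  # 4 6 4
```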
Advisors/Committee Members: Jha, Rashmi (Committee Chair).
Subjects/Keywords: Artificial Intelligence; activity recognition; back pain; accelerometer; convolutional neural network; deep learning; human activity recognition
APA (6th Edition):
Snyder, K. (2020). Utilizing Convolutional Neural Networks for Specialized Activity Recognition: Classifying Lower Back Pain Risk Prediction During Manual Lifting. (Masters Thesis). University of Cincinnati. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=ucin1583999458096255

University of Adelaide
25.
Ruan, Wenjie.
Device-free human localization and activity recognition for supporting the independent living of the elderly.
Degree: 2017, University of Adelaide
URL: http://hdl.handle.net/2440/112860
▼ Given the continuous growth of the aging population, the rising cost of health care, and the preference of the elderly to live independently and safely in their own homes, the demand for an innovative living-assistive system that facilitates independent living for the elderly is becoming increasingly urgent. Such a system is envisioned to be device-free, intelligent, and maintenance-free, as well as deployable in a residential environment. The key to realizing it is to study low-cost sensor technologies that are practical for device-free human indoor localization and activity recognition, particularly in a cluttered residential home. By exploring the latest low-cost and unobtrusive RFID sensor technology, this thesis intends to design a new device-free system for better supporting the independent living of the elderly. Arising from this living-assistive system, the thesis targets the following six research problems. First, to deal with severe missing readings from passive RFID tags, it proposes a novel tensor-based low-rank sensor reading recovery method, formulating RFID sensor data as a high-dimensional tensor that naturally preserves the sensors' spatial and temporal information. Second, using passive RFID hardware alone, we build a novel data-driven device-free localization and tracking system. We formulate the human localization problem as finding the location with the maximum posterior probability given the observed RSSIs (Received Signal Strength Indicators) from passive RFID tags. For tracking a moving target, we model the task as searching for the maximum-likelihood location sequence under a Hidden Markov Model (HMM) framework. Third, to tackle the decrease in tracking accuracy in a cluttered residential environment, we leverage Human-Object Interaction (HOI) events to enhance the performance of the proposed RFID-based system. This idea is motivated by the intuition that HOI events, detected by pervasive sensors, can reveal people's interleaved locations during daily living activities such as watching TV or opening the fridge door. Furthermore, to recognize the resident's daily activities, we propose a device-free human activity recognition (HAR) system that deploys passive RFID tags as an array attached to the wall. This HAR system operates by learning how RSSIs are distributed when a resident performs different activities. Moreover, considering that falls are among the leading causes of hospitalization for the elderly, we develop a fine-grained fall detection system capable not only of recognizing regular actions and fall events simultaneously, but also of sensing fine-grained fall orientations. Lastly, to remotely control the smart electronic appliances in an intelligent environment, we design a device-free multi-modal hand gesture recognition (HGR) system that can accurately sense the hand's in-air speed, waving direction, moving range and duration around a…
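The HMM tracking formulation described above — finding the maximum-likelihood location sequence given observed RSSIs — can be illustrated with a minimal Viterbi decoder. This is a generic sketch, not the thesis's actual model: the two locations, the transition matrix, and the coarse "weak"/"strong" signal observations below are hypothetical stand-ins for the real RSSI emission model.

```python
import math

def viterbi(states, start_p, trans_p, emit_p, observations):
    """Most likely state (location) sequence under an HMM, via dynamic programming."""
    # V[t][s] = log-probability of the best path ending in state s at time t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][observations[0]]) for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for s, scored by previous log-prob + transition
            best_prev, best_lp = max(
                ((p, V[-2][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1],
            )
            V[-1][s] = best_lp + math.log(emit_p[s][obs])
            new_path[s] = path[best_prev] + [s]
        path = new_path
    last = max(V[-1], key=V[-1].get)
    return path[last]

# Hypothetical two-location home: emission probabilities stand in for
# P(observed RSSI pattern | person is at this location).
states = ["sofa", "fridge"]
start_p = {"sofa": 0.5, "fridge": 0.5}
trans_p = {"sofa": {"sofa": 0.8, "fridge": 0.2},
           "fridge": {"sofa": 0.2, "fridge": 0.8}}
emit_p = {"sofa": {"weak": 0.7, "strong": 0.3},
          "fridge": {"weak": 0.2, "strong": 0.8}}
track = viterbi(states, start_p, trans_p, emit_p, ["weak", "weak", "strong"])
# track -> ["sofa", "sofa", "sofa"]: the sticky transition prior outweighs
# the single "strong" observation at the end.
```

The same machinery scales to a grid of candidate locations; only the sizes of the transition and emission tables change.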
Advisors/Committee Members: Sheng, Michael (advisor), Falkner, Nickolas John Gowland (advisor), Yao, Lina (advisor), Li, Xue (advisor), School of Computer Science (school).
Subjects/Keywords: indoor localization; human activity recognition; RFID; hand gesture recognition; tensor; decomposition
APA (6th Edition):
Ruan, W. (2017). Device-free human localization and activity recognition for supporting the independent living of the elderly. (Thesis). University of Adelaide. Retrieved from http://hdl.handle.net/2440/112860

University of Illinois – Chicago
26.
Monna, Giovanni Clemente.
MY-AIR Project: Study on Semantic Location and Activity Recognition Algorithms for iOS Systems.
Degree: 2018, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/23034
▼ SLAR (Semantic Location and Activity Recognition) algorithms studied on iOS systems. This thesis provides an algorithm for concurrent detection of the user's semantic location and activity, within a range of nine possibilities. These are combinations of two semantic location states ("indoor" and "outdoor") and five human activities ("stationary", "walking", "running", "biking" and "automotive"). The final output values are {automotive, indoor stationary, indoor walking, indoor running, indoor biking, outdoor stationary, outdoor walking, outdoor running, outdoor biking}. Recognition of these nine states is based on data from different smartphone sensors, selected from among the least power-consuming ones and guided by previous research, so that the application is feasible to implement. This branch of the research was conducted on iOS, trying to overcome the limitations this operating system presents compared to Android. These SLAR algorithms will then be used in a larger project to recognize the user's daily pollutant intake level.
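The nine output states can be enumerated directly: "automotive" is location-independent, while the other four activities are each qualified by the indoor/outdoor semantic location. A short sketch (state names copied from the abstract):

```python
# Enumerate the SLAR output space: 1 location-independent state plus
# 2 locations x 4 location-dependent activities = 9 states.
locations = ["indoor", "outdoor"]
activities = ["stationary", "walking", "running", "biking"]
slar_states = ["automotive"] + [f"{loc} {act}" for loc in locations for act in activities]
# len(slar_states) -> 9
```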
Advisors/Committee Members: Wolfson, Ouri (advisor), Lin, Jie (committee member), Baralis, Elena (committee member), Wolfson, Ouri (chair).
Subjects/Keywords: deep learning; activity recognition; location recognition; machine learning; iOS; Core ML

Hong Kong University of Science and Technology
27.
Sun, Lin ECE.
Deeply learned representations for human action recognition.
Degree: 2018, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-96003 ; https://doi.org/10.14711/thesis-991012637468003412 ; http://repository.ust.hk/ir/bitstream/1783.1-96003/1/th_redirect.html
▼ Unlike in image recognition, human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Previous research has mainly focused on hand-designed local features, such as SIFT, HOG and SURF, to solve the video-based recognition problem. However, these approaches have complex implementations and are difficult to extend to real-world data. Inspired by the success of deeply learned features for image classification, recent attempts have been made to learn deep features for video analysis. However, unlike in image analysis, few deep learning models have been proposed for video analysis, and only limited success has been reported. In particular, most such models either deal with simple datasets or rely on low-level local spatio-temporal features for their final precision. Transferring the success of two-dimensional (2D) Convolutional Neural Networks (CNNs) to videos by implementing 3D CNNs is a direct approach to action recognition. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. We therefore investigate a new deep architecture that can handle 3D signals more effectively. We propose a factorized spatio-temporal convolutional network (FSTCN) that factorizes the original 3D convolution kernel learning into a sequential process: learning 2D spatial kernels in the lower layers (spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (temporal convolutional layers). To enhance the spatio-temporal representations for videos without losing the advantage of speed, we propose adding another modality, the difference between neighboring RGB frames, to the spatio-temporal modeling.
CNN-based methods are effective at learning spatial appearance but are limited in modeling long-term motion dynamics. Recurrent Neural Networks (RNNs), on the other hand, can learn temporal motion dynamics by iteratively feeding back the previous hidden features. In this thesis, we present RNNs as an alternative to CNNs. We establish that a feedback-based approach such as RNNs has several fundamental advantages over feedforward approaches, beyond comparable performance. We further apply RNNs, particularly the long short-term memory (LSTM), to human action recognition. In our experiments, we find that, compared with CNNs, RNNs better model the temporal relations in videos. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. To address this invalid assumption, we propose the Lattice-LSTM (L2STM), which extends the LSTM by learning…
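One reason the factorization described above helps is a straightforward reduction in parameters: a full 3D kernel of size k × k × t is replaced by a 2D spatial kernel (k × k) plus a 1D temporal kernel (t). The back-of-the-envelope comparison below illustrates the idea only; it ignores channel counts and is not the network's actual layer configuration.

```python
# Parameter count of one full 3D spatio-temporal kernel: k * k * t weights.
def params_3d(k, t):
    return k * k * t

# Factorized alternative: a k x k spatial kernel followed by a length-t
# temporal kernel, i.e. k*k + t weights for the same receptive field.
def params_factorized(k, t):
    return k * k + t

k, t = 3, 3
full = params_3d(k, t)          # -> 27 weights
fact = params_factorized(k, t)  # -> 12 weights
```

The gap widens with larger temporal extents, which is part of why sequential 2D-then-1D learning is easier to train than a monolithic 3D kernel.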
Subjects/Keywords: Human activity recognition; Data processing; Pattern recognition systems; Computer vision

University of Illinois – Urbana-Champaign
28.
Yu, Wenbo.
Good-walk recognition using Android smartphone accelerometer with application on senior patients.
Degree: MS, Computer Science, 2016, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/90611
▼ Good walks from one's everyday activities can be used towards chronic disease diagnosis. Smartphones have become increasingly popular among people of all ages. Properties such as light weight and computational power make smartphones ideal platforms for activity tracking and analysis. This work focuses on good-walk recognition using smartphone accelerometer readings. The algorithms are validated with activity data collected from a large pool of healthy college students and senior patients. Software is implemented for walk recognition and pulmonary function evaluation, and is integrated into a pipeline as part of a sequence of activity data analyses.
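Accelerometer-based walk recognition typically starts from the per-sample acceleration magnitude, which is orientation-independent. The sketch below is a deliberately crude illustration of that first step, not the thesis's validated algorithm: the peak threshold and minimum peak count are made-up parameters.

```python
import math

def magnitude(samples):
    """Per-sample acceleration magnitude from (x, y, z) accelerometer readings."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def looks_like_walking(samples, peak_thresh=11.0, min_peaks=3):
    """Crude heuristic: walking produces periodic magnitude peaks above gravity
    (~9.8 m/s^2). Thresholds here are illustrative, not validated values."""
    mags = magnitude(samples)
    # Count local maxima that exceed the peak threshold.
    peaks = sum(
        1 for prev, cur, nxt in zip(mags, mags[1:], mags[2:])
        if cur > peak_thresh and cur >= prev and cur >= nxt
    )
    return peaks >= min_peaks

# Synthetic data: an oscillating vertical acceleration vs. a phone at rest.
walking = [(0.0, 0.0, 9.8 + (3.0 if i % 4 == 0 else -1.0)) for i in range(20)]
still = [(0.0, 0.0, 9.8)] * 20
# looks_like_walking(walking) -> True; looks_like_walking(still) -> False
```

A real pipeline would add windowing, filtering, and a trained classifier on top of features like these.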
Advisors/Committee Members: Schatz, Bruce R. (advisor).
Subjects/Keywords: bioinformatics; walk recognition; activity recognition; smartphone; accelerometer; senior patient

University of Georgia
29.
Aitha, Naveen Kumar.
A hybrid multi-layer wavelet-based video encoding scheme for computer vision applications on mobile resource constrained devices.
Degree: 2014, University of Georgia
URL: http://hdl.handle.net/10724/26843
▼ The use of multimedia-enabled mobile devices such as pocket PCs, smart cell phones and PDAs is increasing rapidly. Networked environments comprising these multimedia-enabled mobile devices are typically resource-constrained in terms of battery capacity and available bandwidth. Real-time computer vision applications typically entail the analysis, storage, transmission, and rendering of video data, and are hence resource-intensive. Consequently, it is very important to develop a content-aware video encoding scheme that adapts dynamically to, and makes efficient use of, the available resources. A Hybrid Multi-Layered Video (HMLV) encoding scheme is proposed, comprising content-aware, multi-layer wavelet-based encoding of the image texture and motion, and a generative sketch-based representation of the object outlines. Each video layer in the proposed scheme is characterized by a distinct resource consumption profile. Experimental results on real video data show that the proposed scheme is effective for computer vision and multimedia applications such as face recognition and activity recognition in resource-constrained mobile network environments.
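The multi-layer wavelet idea can be illustrated with a one-level Haar transform, the simplest wavelet building block: each pair of samples becomes a low-pass average (coarse layer) and a high-pass difference (detail layer). This is a generic sketch, not the HMLV scheme's actual filter bank.

```python
# One level of a 1-D Haar wavelet transform: split a signal into a coarse
# (average) layer and a detail (difference) layer of half the length.
def haar_step(signal):
    assert len(signal) % 2 == 0, "Haar step needs an even-length signal"
    averages = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return averages, details

# Exact inverse: each (average, detail) pair reconstructs its two samples.
def haar_inverse(averages, details):
    out = []
    for avg, d in zip(averages, details):
        out += [avg + d, avg - d]
    return out

coarse, detail = haar_step([9, 7, 3, 5])
# coarse -> [8.0, 4.0], detail -> [1.0, -1.0]
```

Dropping the detail layer yields a cheaper, lower-fidelity stream for constrained receivers; keeping it reconstructs the signal exactly, which is what gives each layer a distinct resource-consumption profile.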
Subjects/Keywords: layered media; Video streaming; Activity Recognition; Face Recognition

Washington State University
30.
[No author].
Scaling Activity Discovery and Recognition to Large, Complex Datasets.
Degree: 2011, Washington State University
URL: http://hdl.handle.net/2376/2861
▼ In the past decade, activity discovery and recognition has been studied by many researchers. However, there are still many challenges to be addressed before deploying such technologies in the real world. We try to address some of these challenges in order to achieve a more scalable solution that can be used in the real world.
First, we introduce a novel data mining method called the Discontinuous Varied-order Sequence Mining method (DVSM). It is able to discover activity pattern sequences even if those patterns are disrupted or have varied step orders. We further extend DVSM into another data mining method called the Continuous varied Order Multi-threshold activity discovery method (COM). COM is able to handle issues such as rare events across time and space. Furthermore, to discover patterns in real time, we extend COM into a stream mining method called StreamCOM.
In addition to discovering activity patterns, we propose several methods for transferring discovered patterns from one setting to another: transferring the activity models of one resident to another, the activity models of one physical space to another, and the activity models of multiple spaces to another. We also show a method for selecting the most promising sources when multiple sources are available.
To further expedite the learning process, we also propose two novel active learning methods that construct generic active learning queries. Our generic queries are shorter and more intuitive and encompass many similar cases. We show how we can achieve a higher accuracy rate with fewer queries compared to traditional active learning methods.
All of our methods have been tested on real data collected from CASAS smart apartments. In several cases, we also tested our algorithms on various other datasets.
Advisors/Committee Members: Cook, Diane J (advisor).
Subjects/Keywords: Computer Science; Activity Discovery; Activity Recognition; Data Mining; Machine Learning