You searched for subject: (Sensor Fusion).
Showing records 1 – 30 of 455 total matches.
1.
Sampaio, Luiz Gustavo Moreira.
Robust Orientation Estimation of a Mobile Camera for Interaction with Large Displays.
Degree: Nara Institute of Science and Technology
URL: http://hdl.handle.net/10061/10440
Subjects/Keywords: Sensor Fusion

University of Sydney
2.
Morton, Peter Michael.
Multi-target tracking using appearance models for identity maintenance.
Degree: 2014, University of Sydney
URL: http://hdl.handle.net/2123/11644
This thesis considers perception systems for urban environments. It focuses on the task of tracking dynamic objects and in particular on methods that can maintain the identities of targets through periods of ambiguity. Examples of such ambiguous situations occur when targets interact with each other, or when they are occluded by other objects or the environment. With the development of self driving cars, the push for autonomous delivery of packages, and an increasing use of technology for security, surveillance and public-safety applications, robust perception in crowded urban spaces is more important than ever before.

A critical part of perception systems is the ability to understand the motion of objects in a scene. Tracking strategies that merge closely-spaced targets together into groups have been shown to offer improved robustness, but in doing so sacrifice the concept of target identity. Additionally, the primary sensor used for the tracking task may not provide the information required to reason about the identity of individual objects.

There are three primary contributions in this work. The first is the development of 3D lidar tracking methods with improved ability to track closely-spaced targets and that can determine when target identities have become ambiguous. Secondly, this thesis defines appearance models suitable for the task of determining the identities of previously-observed targets, which may include the use of data from additional sensing modalities. The final contribution of this work is the combination of lidar tracking and appearance modelling, to enable the clarification of target identities in the presence of ambiguities caused by scene complexity.

The algorithms presented in this work are validated on both carefully controlled and unconstrained datasets. The experiments show that in complex dynamic scenes with interacting targets, the proposed methods achieve significant improvements in tracking performance.
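As a loose illustration of the appearance-model idea in the abstract above (not the thesis's actual model), a target's appearance can be summarized as a normalized color histogram and compared with the Bhattacharyya coefficient to re-identify a previously-observed target after an ambiguous interaction; the bin count here is an arbitrary choice:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized RGB histogram over an (N, 3) array of pixel values
    in [0, 256) -- a minimal appearance model for a tracked target."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms in [0, 1];
    1.0 means identical distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))
```

A tracker could store one such histogram per confirmed target and, when identities become ambiguous, assign each re-detected object to the stored model with the highest Bhattacharyya similarity.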
Subjects/Keywords: Robotics; Tracking; Sensor fusion

University of New South Wales
3.
Howarth, Blair David Sidney.
Real time 3D mapping for small wall climbing robots.
Degree: Mechanical & Manufacturing Engineering, 2012, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/51598 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:10265/SOURCE02?view=true
Small wall climbing robots are useful because they can access difficult environments which preclude the use of more traditional mobile robot configurations. This could include an industrial plant or collapsed building which contains numerous obstacles and enclosed spaces. These robots are very agile and they can move fully through three dimensional (3D) space by attaching to nearby surfaces. For autonomous operation, they need the ability to map their environment to allow navigation and motion planning between footholds. This surface mapping must be performed onboard as line-of-sight and wireless communication may not always be available.

As most of the methods used for robotic mapping and navigation were developed for two dimensional usage, they do not scale well or generalise for 3D operation. Wall climbing robots require a 3D map of nearby surfaces to facilitate navigation between footholds. However, no suitable mapping method currently exists. A 3D surface mapping methodology is presented in this thesis to meet this need.

The presented 3D mapping method is based on the fusion of range and vision information in a novel fashion. Sparse scans from a laser range finder and a low resolution camera are used, along with feature extraction, to significantly reduce the computational cost. These features are then grouped together to act as a basis for the surface fitting. Planar surfaces, with full uncertainty, are generated from the grouped range features with the image features being used to generate planar polygon boundaries. These surfaces are then merged together to build a 3D map surrounding a particular foothold position.

Both experimental and simulated datasets are used to validate the presented surface mapping method. The surface fitting error is satisfactory and within the required tolerances of a wall climbing robot prototype. An analysis of the computational cost, along with experimental runtime results, indicates that onboard real time operation is also achievable. The presented surface mapping methodology will therefore allow small wall climbing robots to generate real time 3D environmental maps. This is an important step towards achieving autonomous operation.
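The planar-surface fitting step named in the abstract can be sketched as a total-least-squares plane fit via SVD; this is a generic formulation under simplified assumptions, and the thesis's own method additionally carries full uncertainty through the fit:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit to an (N, 3) array of range points.

    Returns a unit normal n and offset d such that n . p = d for points
    p lying on the fitted plane.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, float(n @ centroid)
```

Grouped range features would each be passed through such a fit, with the resulting planes later bounded by image-derived polygons and merged into the local map.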
Advisors/Committee Members: Katupitiya, Jayantha, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW, Guivant, Jose, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: Sensor Fusion; 3D Mapping; Robotics

University of New South Wales
4.
Laird, John.
Modelling the impact of sensor placement and fusion for traffic monitoring.
Degree: Computer Science & Engineering, 2013, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/53000 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:11678/SOURCE01?view=true
This thesis develops models to evaluate the impact of multi-modal sensor placement configurations on obtaining traffic parameters required for a variety of traffic monitoring and management applications. Existing traffic management strategies generally rely on the commonly used induction loop sensors, which are highly accurate presence detectors; however, they have a limited sensing area. Alternate sensor modalities may provide a higher information gain in comparison, especially vision based sensors. An inherent problem with using vision based sensors in traffic management is the occlusion between vehicles, which can make detection of individual vehicles difficult. Information fusion from multiple sensors provides much richer information for scene understanding, leading to a greater ability to coordinate traffic management efficiently. Thus, effective sensor placement and fusion of data can improve the efficiency of traffic management. The aim of this thesis is to evaluate the impact of multi-modal sensor placement, and as a result improve the estimation accuracy of road traffic parameters obtained from various sensor configurations. More specifically, vision based sensors are studied, and later fused with inductive loops.

In order to achieve this, models are developed to simulate various traffic flows, and to ensure consistency and relevance of the simulations to real world traffic. Models for single sensor modality and multi-modal sensor fusion are developed and are also validated. These models enable evaluation of sensor placement configurations to determine effectiveness for specific traffic applications.

Results of various single sensor configurations demonstrate that the impact of view occlusion on the ability to detect vehicles can be improved by considering sensor placement. Combining inductive loops and video cameras by sensor fusion was found to overcome the problem of occlusion, resulting in a decided improvement in parameter estimation for traffic management and monitoring applications. For example, the sensor fusion models developed resulted in the queue length estimation accuracy being improved by up to 20%. Finally, by applying the models presented in this thesis to current ramp metering strategies, viable alternative sensor deployment solutions are recommended.
Advisors/Committee Members: Chou, Chun Tung, Computer Science & Engineering, Faculty of Engineering, UNSW, Geers, D. Glenn, NICTA, Wang, Yang, NICTA.
Subjects/Keywords: Traffic management; Sensor placement; Sensor fusion

Penn State University
5.
Sinsley, Gregory.
Distributed Data Fusion Across Multiple Hard and Soft Mobile Sensor Platforms.
Degree: 2012, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/16157
One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions.

This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods.

The second problem this thesis addresses is that of distributed data fusion. Distributed data fusion is a younger field than centralized fusion. The main issues in distributed fusion that are addressed are distributed classification and distributed tracking. There are several well established methods for performing distributed fusion that are first reviewed. The chapter on distributed fusion concludes with a multiple unmanned vehicle collaborative test involving an unmanned aerial vehicle and an unmanned ground vehicle.

The third issue this thesis addresses is that of soft sensor only data fusion. Soft-only fusion is a newer field than centralized or distributed hard sensor fusion. Because of the novelty of the field, the chapter on soft-only fusion contains less background information and instead focuses on some new results in soft sensor data fusion. Specifically, it discusses a novel fuzzy logic based soft sensor data fusion method. This new method is tested using both simulations and field measurements.

The biggest issue addressed in this thesis is that of combined hard and soft fusion. Fusion of hard and soft data is the newest area for research…
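As a toy sketch of fuzzy-logic hard/soft fusion in the spirit of this abstract (the membership functions and all numbers here are invented for illustration, not taken from the thesis), a precise hard measurement and a vague human report can each be encoded as fuzzy sets over candidate target range, combined conjunctively, and defuzzified:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular fuzzy membership function evaluated over the array x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

x = np.linspace(0.0, 100.0, 1001)       # candidate target range (m)
hard = trimf(x, 40.0, 50.0, 60.0)       # range sensor: "about 50 m"
soft = trimf(x, 30.0, 55.0, 80.0)       # human report: "roughly 55 m out"
fused = np.minimum(hard, soft)          # conjunctive (min) combination
# Centroid defuzzification collapses the fused set to one range estimate.
estimate = float((x * fused).sum() / fused.sum())
```

The min combination keeps only ranges both sources consider plausible; a less conservative design could use a product or averaging t-norm instead.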
Advisors/Committee Members: Lyle Norman Long, Dissertation Advisor/Co-Advisor, William Kenneth Jenkins, Committee Chair/Co-Chair, David Miller, Committee Member, David J Hall, Committee Member, John Yen, Committee Member, Joseph Francis Horn, Committee Chair/Co-Chair.
Subjects/Keywords: sensor data fusion; information fusion; soft sensor data fusion; random set theory; particle filter

University of California – Santa Cruz
6.
Bruce, Jonathan.
Design, Building, Testing, and Control of SUPERball: A Tensegrity Robot to Enable New Forms of Planetary Exploration.
Degree: Computer Engineering (Robotics and Control), 2016, University of California – Santa Cruz
URL: http://www.escholarship.org/uc/item/0274v214
Presented in this work are the concepts to build, sense and control a completely untethered tensegrity robotic system called SUPERball (Spherical Underactuated Planetary Exploration Robot), which is a compliant icosahedron tensegrity robot designed to enable research into tensegrity robots for planetary landing and exploration as part of a NASA funded program. Tensegrity robots are structurally compliant machines, uniquely able to absorb forces and interact with unstructured environments through the use of multiple rigid bodies stabilized by a network of cables. However, instead of engineering a single new robot, a fundamentally reusable component for tensegrity robots was developed by creating a modular tensegrity robotic strut which contains an integrated system of power, sensing, actuation, and communications. SUPERball utilizes six of these modular struts, making the SUPERball system analogous to a swarm of 6 individual robots, mutually constrained by a cable network. Since SUPERball is intended for use on planetary surfaces without the support of GPS, state estimation and control policies only utilize the sensors on board the robotic system. When external sensors are used, they must be able to account for imprecise placement and automatic calibration. Also, dynamic tensegrity systems do not exhibit continuous dynamics due to nonlinear cable conditions and interactions with the environment, thus non-traditional control development methods are implemented. In this work, control policies are developed using Monte Carlo, evolutionary algorithms, and advanced supervised learning through Guided Policy Search. Each system is evaluated in simulation, while state estimation and the Guided Policy Search method are additionally evaluated on the physical SUPERball robotic system.
Subjects/Keywords: Robotics; Machine Learning; Sensor Fusion; Tensegrity

Penn State University
7.
Wen, Yicheng.
Heterogeneous Sensor Fusion in Sensor Networks: A Language-theoretic Approach.
Degree: 2011, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/12213
This dissertation presents a framework for feature-level heterogeneous sensor data fusion in sensor networks via a language-theoretic approach. Probabilistic finite state automata (PFSA) are used to model the semantic patterns in the observations of the sensors. A novel pattern discovery algorithm is developed to extract the PFSA model from symbol sequences. It is shown that this algorithm can capture semantic structures more effectively than the existing techniques. In order to formulate the data fusion problem for semantic features, a link is established between the formal language theory and functional analysis by constructing a Hilbert space over a class of stochastic regular languages represented by PFSA. New algebraic operations are defined for PFSA with a family of parametrized inner products. The norm induced by the inner product is interpreted as a measure of the information contained in PFSA. Applications of this technique are discussed in the following areas: a) Orthogonal projection in the Hilbert space to solve the model reduction problem of PFSA. Numerical examples and experimental results are provided to elucidate the process of model order reduction. b) Supervised learning of semantic features of heterogeneous sensor data in the product Hilbert space. The semantic features are combined optimally for classification using linear discriminant analysis (LDA). The proposed algorithm has a set of parameters that can be potentially configured by the users to adapt the algorithm to environment changes. The proposed algorithm is validated for object recognition at the US-Mexican border. An architecture of fusion-driven sensor networks is introduced to incorporate the proposed fusion framework in sensor networks. The network protocol, called dynamic time-space clustering (DSTC), and its heterogeneous version, are designed to adapt the network to the fusion algorithms. A sensor network for selectively tracking mobile targets is implemented in the network simulator NS-2 for both homogeneous and heterogeneous sensor fields in an urban scenario for validating the proposed architecture.
Advisors/Committee Members: Asok Ray, Dissertation Advisor/Co-Advisor, Asok Ray, Committee Chair/Co-Chair, Shashi Phoha, Committee Chair/Co-Chair, Qiang Du, Committee Member, Alok Sinha, Committee Member, Qian Wang, Committee Member, Ishanu Chattopadhyay, Committee Member.
Subjects/Keywords: wireless sensor network; pattern recognition; Information fusion
8.
Aeberhard, Michael Peter.
Object-level fusion for surround environment perception in automated driving applications.
Degree: 2017, Technische Universität Dortmund
URL: http://dx.doi.org/10.17877/DE290R-18029
Driver assistance systems have increasingly relied on more sensors for new functions. As advanced driver assistance systems continue to improve towards automated driving, new methods are required for processing the data from the sensors of such complex systems in an efficient and economical manner. The detection of dynamic objects is one of the most important aspects required by advanced driver assistance systems and automated driving. In this thesis, an environment model approach for the detection of dynamic objects is presented in order to realize an effective method for sensor data fusion. A scalable high-level fusion architecture is developed for fusing object data from several sensors in a single system, where processing occurs in three levels: sensor, fusion and application. A complete and consistent object model, which includes the object’s dynamic state, existence probability and classification, is defined as a sensor-independent and generic interface for sensor data fusion across all three processing levels. Novel algorithms are developed for object data association and fusion at the fusion level of the architecture. An asynchronous sensor-to-global fusion strategy is applied in order to process sensor data immediately within the high-level fusion architecture, giving driver assistance systems the most up-to-date information about the vehicle’s environment. Track-to-track fusion algorithms are uniquely applied for dynamic state fusion, where the information matrix fusion algorithm produces results comparable to a low-level central Kalman filter approach. The existence probability of an object is fused using a novel approach based on the Dempster-Shafer evidence theory, where the individual sensor’s existence estimation performance is considered during the fusion process. A similar novel approach with the Dempster-Shafer evidence theory is also applied to the fusion of an object’s classification. The developed high-level sensor data fusion architecture and its algorithms are evaluated using a prototype vehicle equipped with 12 sensors for surround environment perception. A thorough evaluation of the complete object model is performed on a closed test track using vehicles equipped with hardware for generating an accurate ground truth. Existence and classification performance is evaluated using labeled data sets from real traffic scenarios. The evaluation demonstrates the accuracy and effectiveness of the proposed sensor data fusion approach. The work presented in this thesis has additionally been extensively used in several research projects as the dynamic object detection platform for automated driving applications on highways in real traffic.
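The Dempster-Shafer existence fusion mentioned above can be illustrated with a minimal sketch of the underlying combination rule. The masses below are made-up values, and the thesis's actual algorithm additionally weights each sensor's estimation performance, which is omitted here:

```python
# Minimal Dempster-Shafer combination over the frame {exists, not_exists},
# showing how two sensors' existence evidence for one object can be fused.
# All mass values are illustrative.

def combine(m1, m2):
    """Dempster's rule of combination for mass functions given as dicts
    mapping frozenset hypotheses to masses (each summing to 1)."""
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory pairs
    norm = 1.0 - conflict            # renormalise by the non-conflicting mass
    return {h: m / norm for h, m in fused.items()}

E = frozenset({"exists"})
NE = frozenset({"not_exists"})
THETA = E | NE  # total ignorance

# Sensor 1 (e.g. radar) is fairly sure the object exists; sensor 2 (e.g. a
# camera) weakly supports existence and is otherwise uncommitted.
m_radar = {E: 0.7, NE: 0.1, THETA: 0.2}
m_camera = {E: 0.5, NE: 0.0, THETA: 0.5}

fused = combine(m_radar, m_camera)
print({tuple(sorted(h)): round(m, 3) for h, m in fused.items()})
```

Note how the fused belief in "exists" exceeds either sensor's alone: agreeing evidence reinforces, while conflicting mass is discarded and renormalised away.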
Advisors/Committee Members: Bertram, Torsten (advisor), Wünsche, Hans-Joachim (referee).
Subjects/Keywords: Autonomous driving; Perception; Sensor data fusion; 620
APA (6th Edition):
Aeberhard, M. P. (2017). Object-level fusion for surround environment perception in automated driving applications. (Doctoral Dissertation). Technische Universität Dortmund. Retrieved from http://dx.doi.org/10.17877/DE290R-18029

Linköping University
9.
Alsén, Victoria.
GNSS Aided Inertial Human Body Motion Capture.
Degree: Automatic Control, 2016, Linköping University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133291
Human body motion capture systems based on inertial sensors (gyroscopes and accelerometers) are able to track the relative motions in the body precisely, often with the aid of supplementary sensors. The sensor measurements are combined through a sensor fusion algorithm to create estimates of, among other parameters, position, velocity and orientation for each body segment. As this algorithm requires integration of noisy measurements, some drift, especially in the position estimate, is expected. Taking advantage of the knowledge about the tracked subject, a human body, models have been developed that improve the estimates, but position still displays drift over time.
In this thesis, a GNSS receiver is added to the motion capture system to give a drift-free measurement of the position as well as a velocity measurement. The inertial data and the GNSS data complement each other well, particularly in terms of observability of global and relative motions. To enable the models of the human body at an early stage of the fusion of sensor data, an optimization-based maximum a posteriori algorithm was used, which is also better suited for the nonlinear system tracked compared to the conventional method of using Kalman filters.
One of the models that improves the position estimate greatly, without adding additional sensing, is the contact detection, with which the velocity of a segment is set to zero whenever it is considered stationary in comparison to the surrounding environment, e.g. when a foot touches the ground. This thesis looks at both a scenario where this contact detection can be applied and a scenario where it cannot be applied, to see what possibilities the addition of a GNSS sensor could bring to the human body motion tracking case. The results display a notable improvement in position, both with and without contact detection. Furthermore, the heading estimate is improved at a full-body scale and the solution makes the estimates depend less on acceleration bias estimation. These results show great potential for more accurate estimates outdoors and could prove valuable for enabling motion tracking of scenarios where the contact detection model cannot be used, such as e.g. biking.
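The contact (zero-velocity) detection described above can be sketched with a toy stationarity test on accelerometer magnitudes. The window length and thresholds below are illustrative, not the thesis's tuned values:

```python
# Toy zero-velocity (contact) detector: a foot is flagged as stationary when
# the accelerometer magnitude stays close to gravity with low variance over a
# short trailing window. Thresholds and window length are made-up values.
from statistics import pvariance

G = 9.81  # gravity, m/s^2

def stance_phases(acc_norms, window=5, mean_tol=0.5, var_tol=0.05):
    """Return one boolean per sample: True where the segment looks stationary."""
    flags = []
    for i in range(len(acc_norms)):
        w = acc_norms[max(0, i - window + 1): i + 1]
        mean = sum(w) / len(w)
        stationary = abs(mean - G) < mean_tol and pvariance(w) < var_tol
        flags.append(stationary)
    return flags

# Synthetic signal: quiet stance, a swing phase with large accelerations, stance again.
acc = [9.80, 9.82, 9.81, 9.79, 9.81,   # stance
       12.0, 6.5, 14.2, 7.1, 11.3,     # swing
       9.81, 9.80, 9.82, 9.81, 9.80]   # stance
flags = stance_phases(acc)
print(flags)
```

During the flagged samples a filter would set the segment velocity to zero (a zero-velocity update), which is what bounds the position drift.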
Subjects/Keywords: sensor fusion; GNSS; IMU; Control Engineering; Reglerteknik
APA (6th Edition):
Alsén, V. (2016). GNSS Aided Inertial Human Body Motion Capture. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133291
10.
Nilsson, Sanna.
Sensor Fusion for Heavy Duty Vehicle Platooning.
Degree: The Institute of Technology, 2012, Linköping University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78970
The aim of platooning is to enable several Heavy Duty Vehicles (HDVs) to drive in a convoy and act as one unit to decrease the fuel consumption. By introducing wireless communication and tight control, the distance between the HDVs can be decreased significantly. This implies a reduction of the air drag and consequently the fuel consumption for all the HDVs in the platoon.
The challenge in platooning is to keep the HDVs as close as possible to each other without endangering safety. Therefore, sensor fusion is necessary to get an accurate estimate of the relative distance and velocity, which is a prerequisite for the controller.
This master's thesis aims at developing a sensor fusion framework based on on-board sensor information as well as other vehicles’ sensor information communicated over a WiFi link. The most important sensors are GPS, which gives a rough position of each HDV, and radar, which provides the relative distance for each pair of HDVs in the platoon. A distributed solution is developed, where an Extended Kalman Filter (EKF) estimates the state of the whole platoon. The state vector includes the position, velocity and length of each HDV, which is used in a Model Predictive Control (MPC) scheme. Furthermore, a method is discussed on how to handle vehicles outside the platoon and how various road surfaces can be managed.
This master's thesis is part of a project consisting of three parallel master's theses. The other two theses investigate and implement rough pre-processing of data, time synchronization and the MPC associated with platooning.
It was found that the three implemented systems could reduce the average fuel consumption by 11.1 %.
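At the heart of the EKF described above is the idea of weighting each measurement source by its uncertainty: GPS gives rough absolute positions, radar a precise relative range. The scalar static case of that measurement update reduces to inverse-variance fusion, sketched below with illustrative noise levels (not values from the thesis):

```python
# Inverse-variance fusion of two measurements of the same inter-vehicle
# distance: a rough GPS-derived estimate and a precise radar range. This is
# the scalar special case of a Kalman measurement update; the noise standard
# deviations below are made-up values.

def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent scalar measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)  # fused variance is below either input's
    return estimate, variance

# GPS positions are a few metres off, so the differenced distance is rough;
# the radar measures the gap directly to within ~0.1 m.
gps_distance, gps_var = 21.3, 4.0        # sigma = 2 m
radar_distance, radar_var = 19.95, 0.01  # sigma = 0.1 m

d, var = fuse(gps_distance, gps_var, radar_distance, radar_var)
print(round(d, 3), round(var, 5))
```

The fused estimate lands close to the radar value, as expected: the far more certain sensor dominates, while the GPS still contributes the absolute reference the radar alone cannot provide.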
Subjects/Keywords: Extended Kalman Filter; EKF; platooning; sensor fusion
APA (6th Edition):
Nilsson, S. (2012). Sensor Fusion for Heavy Duty Vehicle Platooning. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78970

Delft University of Technology
11.
Carisi, Stefano (author).
Multimodal Sensor-Fusion for Context-Aware Semi-Autonomous Control of a Multi Degree-of-Freedom Upper Limb Prosthesis.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:df9c8c3f-ba3b-4616-870a-5cbbda76ff1b
Objective. Dexterous control of myoelectric upper limb prostheses is still limited by the capabilities of modern human-machine interfaces. The first goal of the current work was to develop a system that supplements the academic state-of-the-art myoelectric interface during the interaction with objects (e.g., grasping, manipulation), with the goal of increasing the overall performance and robustness of the prosthetic device. Additionally, the current study aims to define guidelines for a larger-scale experiment to be performed in the immediate future. Approach. I developed algorithms, which provide context- and user-awareness to the system by fusing multimodal sensory input data, and a control scheme that employs such context-awareness to estimate the user’s grasp intentions and automatically preshape the prosthesis for grasping in real time. The control scheme was compared against the major academic state-of-the-art myoelectric control scheme (i.e., pattern recognition) in two able-bodied subjects. The experimental tests consisted of grasping, reorienting, and relocating sets of common objects using a multi-degree-of-freedom prosthesis with two grip types and a two-degrees-of-freedom actuated wrist. Main Results. The proposed semi-autonomous system was able to function in realistic and time-varying cluttered environments. The obtained results illustrate better and more consistent performance (i.e., lower task completion time and standard deviation) of the developed control scheme with respect to its state-of-the-art counterpart. Improvements in control robustness during object manipulation (i.e., a lower number of object drops) were also obtained. The current study helped in defining guidelines for the future larger-scale experiment: more than one experimental session, data logging, and subjective measurements recording. Significance. The proposed system improves multiple aspects involved in the control of myoelectric multi-degree-of-freedom upper limb prostheses. The guidelines defined in this work are essential for evaluating, during the future larger-scale study, the impact of the proposed system on users’ experience (e.g., workload and ease of use).
Advisors/Committee Members: Plettenburg, Dick (mentor), Markovic, Marko (graduation committee), Abbink, David (graduation committee), van der Helm, Frans (graduation committee), Smit, Gerwin (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: sensor-fusion; context-aware; upper limb prosthesis
APA (6th Edition):
Carisi, S. (2018). Multimodal Sensor-Fusion for Context-Aware Semi-Autonomous Control of a Multi Degree-of-Freedom Upper Limb Prosthesis. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:df9c8c3f-ba3b-4616-870a-5cbbda76ff1b

Delft University of Technology
12.
GAO, Xinyu (author).
Sensor Data Fusion of Lidar and Camera for Road User Detection.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a
Object detection is one of the most important research topics in autonomous vehicles. The detection systems of autonomous vehicles nowadays are mostly image-based ones, which detect target objects in images. Although image-based detectors can provide a rather accurate 2D position of an object in the image, it is necessary to obtain the accurate 3D position of the object for an autonomous vehicle, since it operates in the real 3D world. The relative position of the objects heavily influences the vehicle control strategy. This thesis work aims to find a solution for 3D object detection by combining the Lidar point cloud and camera images, considering that these are two of the most commonly used perception sensors of autonomous vehicles. Lidar performs much better than the camera in 3D object detection since it reconstructs the surface of the surroundings with the point cloud. What is more, combining Lidar with the camera provides the system redundancy in case of a single sensor failure. Due to the development of Neural Networks (NN), past research achieved great success in detecting objects in images. Similarly, by applying deep learning algorithms to parsing the point cloud, the proposed 3D object detection system obtains a competitive result on the KITTI 3D object detection benchmark.
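A standard first step when fusing a Lidar point cloud with camera images, as above, is projecting the 3D points into the image plane so that point-cloud returns can be associated with image detections. A minimal pinhole-camera sketch follows; the intrinsics are made-up values, and the Lidar-to-camera extrinsic transform and lens distortion are omitted:

```python
# Projecting a 3D point into the camera image with a pinhole model.
# FX/FY/CX/CY are illustrative intrinsics for a 1280x720 image.

FX, FY = 700.0, 700.0  # focal lengths in pixels (illustrative)
CX, CY = 640.0, 360.0  # principal point

def project(point):
    """Map a 3D point (x right, y down, z forward, camera frame) to pixel (u, v)."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera, not visible
    u = FX * x / z + CX
    v = FY * y / z + CY
    return u, v

# A point 10 m ahead, 1 m to the right, 0.5 m below the optical axis.
print(project((1.0, 0.5, 10.0)))  # -> (710.0, 395.0)
print(project((0.0, 0.0, -2.0)))  # -> None
```

Once projected, each Lidar point can be tested against 2D detection boxes, giving image detections an accurate range from the point cloud.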
Vehicle Engineering
Advisors/Committee Members: Gavrila, Dariu (mentor), Domhof, Joris (mentor), Kooij, Julian (graduation committee), Pan, Wei (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: 3D object detection; Lidar; Camera; sensor fusion
APA (6th Edition):
GAO, X. (2018). Sensor Data Fusion of Lidar and Camera for Road User Detection. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a
13.
Sentkerestiová, Jana.
Development of Hall sensors for steady state magnetic diagnostic of fusion reactors.
Degree: 2014, Czech University of Technology
URL: http://hdl.handle.net/10467/20229
Hall sensors development and evaluation for fusion reactors
Advisors/Committee Members: Ďuran Ivan (advisor).
Subjects/Keywords: Hall sensor; magnetic diagnostics; fusion reactors; ITER
APA (6th Edition):
Sentkerestiová, J. (2014). Development of Hall sensors for steady state magnetic diagnostic of fusion reactors. (Thesis). Czech University of Technology. Retrieved from http://hdl.handle.net/10467/20229

Georgia Tech
14.
Liu, Kaibo.
Data fusion for system modeling, performance assessment and improvement.
Degree: PhD, Industrial and Systems Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/52937
Due to rapid advancements in sensing and computation technology, multiple types of sensors have been embedded in various applications, on-line automatically collecting massive production information. Although this data-rich environment provides great opportunity for more effective process control, it also raises new research challenges on data analysis and decision making due to the complex data structures, such as heterogeneous data dependency, and large-volume and high-dimensional characteristics.
This thesis contributes to the area of System Informatics and Control (SIAC) to develop systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These advanced methodologies enable (1) a better handling of the rich data environment communicated by complex engineering systems, (2) a closer monitoring of the system status, and (3) a more accurate forecasting of future trends and behaviors. The research bridges the gaps in methodologies among advanced statistics, engineering domain knowledge and operation research. It also forms close linkage to various application areas such as manufacturing, health care, energy and service systems.
This thesis started from investigating the optimal sensor system design and conducting multiple sensor data fusion analysis for process monitoring and diagnosis in different applications. In Chapter 2, we first studied the couplings or interactions between the optimal design of a sensor system in a Bayesian Network and quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named “Best Allocation Subsets by Intelligent Search” (BASIS) with optimality proof is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements.
Chapter 3 extended this line of research by proposing a novel adaptive sensor allocation framework, which can greatly improve the monitoring and diagnosis capabilities of the previous method. A max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner. The methodology was tested and validated based on a hot forming process and a cap alignment process.
Next, in Chapter 4, we proposed a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme of using the sum of top-r local detection statistics is developed, which is scalable, effective and robust in detecting a wide range of possible shifts in all directions. This research provides a generic guideline for practitioners on determining (1) the appropriate sensor layout; (2) the “ON” and “OFF” states of different sensors; and (3) which part of the acquired data should be transmitted to and analyzed at the fusion center, when only limited resources are available.
To improve the…
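The "sum of top-r local detection statistics" scheme mentioned above can be sketched as follows. The local statistics, the choice of r and the threshold are illustrative, not values from the thesis:

```python
# Sketch of a top-r monitoring scheme: each sensor maintains a local detection
# statistic (e.g. a squared standardised deviation from its in-control mean),
# and an alarm is raised when the sum of the r largest statistics crosses a
# threshold. The numbers below are illustrative.

def top_r_statistic(local_stats, r):
    """Sum of the r largest local detection statistics."""
    return sum(sorted(local_stats, reverse=True)[:r])

def alarm(local_stats, r=3, threshold=15.0):
    return top_r_statistic(local_stats, r) > threshold

# In control: all sensors show small deviations.
in_control = [0.4, 1.1, 0.2, 0.9, 0.5, 1.3, 0.8, 0.1]
# A sparse shift: only two sensors react, but strongly. The top-r sum still
# responds even though most coordinates are unaffected, which is what makes
# the scheme robust to shifts of unknown sparsity and direction.
shifted = [0.4, 9.6, 0.2, 0.9, 0.5, 8.8, 0.8, 0.1]

print(alarm(in_control))  # -> False
print(alarm(shifted))     # -> True
```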
Advisors/Committee Members: Shi, Jianjun (advisor), Gebraeel, Nagi (committee member), Mei, Yajun (committee member), Kvam, Paul (committee member), Li, Jing (committee member).
Subjects/Keywords: Data fusion; Multiple Sensors; Sensor allocation; Prognostics
APA (6th Edition):
Liu, K. (2013). Data fusion for system modeling, performance assessment and improvement. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/52937

University of Georgia
15.
Bottone, Michael Anthony.
A system to bring real time reconstructions of objects into the virtual environment for use in interaction-intensive applications.
Degree: 2016, University of Georgia
URL: http://hdl.handle.net/10724/35569
Bringing real objects into the virtual world has been shown to increase usability and presence in virtual reality applications. This paper presents a system to generate a real time virtual reconstruction of real world user interface elements for use in a head mounted display based driving simulator. Our system uses sensor fusion algorithms to combine data from depth and color cameras to generate an accurate, detailed, and fast rendering of the user’s hands while using the simulator. We tested our system and show in our results that the inclusion of real objects in the virtual environment increases the immersion, presence, and usability of the simulation. Our system can also be used to bring other real objects into the virtual world, especially when accuracy, detail, and real time updates are desired.
Subjects/Keywords: Virtual Reality; Sensor Fusion; Human Computer Interaction
APA (6th Edition):
Bottone, M. A. (2016). A system to bring real time reconstructions of objects into the virtual environment for use in interaction-intensive applications. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/35569

Delft University of Technology
16.
Moyers Barrera, Gerardo (author).
Sensor Fusion for Localization of Autonomous Ground Drone in Indoor Environments.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:af31d773-3d35-49fe-b48f-b950c17a7a64
Technology is transforming almost all aspects of our lives, one of them being automation. The main motivation of automation is to help humans avoid performing tedious, high-risk jobs. Automated driving, also known as autonomous driving, has been at the center of industrial and academic attention for a few decades now, thanks to its potential of making driving risk-free by letting a highly efficient machine control the vehicle on roads. Apart from the common outdoor use-cases, several applications in indoor environments have also been extensively investigated. The primary ones include process automation and management in large factories and warehouses. Localization of the autonomous vehicle is crucial to determine the path to be followed to reach the desired destination. Sensor fusion techniques are extensively investigated for this. However, the major challenge arising in indoor environment localization is obtaining accuracy on the scale of a few centimeters in real time. In this thesis, we intend to address this challenge. The contributions of this thesis are two-fold. Firstly, we develop a low-cost testbed, the Autonomous Ground Drone (AGD), that enables us to develop a sensor fusion and localization scheme for autonomous driving. Secondly, we employ an Extended Kalman Filter (EKF) on the sensor combination of UWB, IMU, and radar, and achieve a localization accuracy of 8 cm. Our localization scheme outperforms the state of the art in this field in terms of accuracy, latency, and power consumption. Keywords: Localization, Sensor Fusion, EKF, AGD, low-cost, real-time
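The thesis fuses UWB, IMU, and radar with an EKF. The core intuition behind such a scheme, drift-free but noisy absolute fixes correcting a smooth but drifting inertial estimate, can be shown with a much simpler 1D complementary-filter simulation. All gains, biases, and noise levels below are made up; this is not the thesis's EKF:

```python
# 1D complementary-filter sketch: integrate a biased IMU acceleration for a
# smooth short-term position estimate, and pull the result toward noisy but
# drift-free UWB position fixes each step. Values are illustrative.
import random

random.seed(0)          # deterministic run for reproducibility
DT, GAIN = 0.01, 0.05   # 100 Hz update, correction gain
ACC_BIAS = 0.3          # uncorrected accelerometer bias (m/s^2)

true_pos, true_vel = 0.0, 1.0  # ground truth: constant 1 m/s motion
est_pos, est_vel = 0.0, 1.0

for _ in range(2000):          # 20 s of simulation
    true_pos += true_vel * DT
    # IMU propagation drifts because of the uncorrected bias...
    est_vel += ACC_BIAS * DT
    est_pos += est_vel * DT
    # ...but a noisy UWB fix (sigma ~ 0.1 m) pulls the position back each step.
    uwb = true_pos + random.gauss(0.0, 0.1)
    est_pos += GAIN * (uwb - est_pos)

error = abs(est_pos - true_pos)
drift_only = 0.5 * ACC_BIAS * (2000 * DT) ** 2  # error if UWB were absent
print(round(error, 3), round(drift_only, 1))
```

Dead reckoning alone would be off by tens of metres after 20 s, while the corrected estimate stays within roughly a metre; an EKF improves on this further by also estimating the velocity and bias errors instead of correcting position only.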
Electrical Engineer | Embedded Systems
Advisors/Committee Members: Venkatesha Prasad, R.R. (mentor), van den Heuvel, Dirk (graduation committee), Gokhale, V. (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Localization; Sensor Fusion; Real-time; Low-cost

University of South Carolina
17.
Rahman, Sharmin.
A Multi-Sensor Fusion-Based Underwater Slam System.
Degree: PhD, Computer Science and Engineering, 2020, University of South Carolina
URL: https://scholarcommons.sc.edu/etd/5987
This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies that allow autonomous robots to navigate unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging, GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure such as bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor-intensive; an underwater robot is therefore an excellent fit to build a map of the environment while simultaneously localizing itself in it.
The main contribution of this dissertation is the design and development of a real-time, robust SLAM algorithm for small- and large-scale underwater environments. We present SVIn, a novel tightly-coupled, keyframe-based non-linear optimization framework fusing sonar, visual, inertial, and water-depth information, with robust initialization, loop-closing, and relocalization capabilities. Introducing acoustic range information to aid the visual data improves reconstruction and localization. The availability of depth information from water pressure enables a robust initialization, refines the scale factor, and helps reduce drift in the tightly-coupled integration. The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them the ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and numerous real-world scenarios, and has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle (AUV) Aqua2 in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness. To aid the sparse reconstruction, a contour-based reconstruction approach utilizing the well-defined edges between well-lit areas and darkness has been developed. In particular, low lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours. The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a…
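The water-depth cue mentioned above comes from a pressure sensor via the hydrostatic relation depth = (p − p_surface) / (ρ g). A one-function sketch; the density constant is a typical seawater value assumed for illustration, not a figure from the dissertation:

```python
# Hydrostatic depth from absolute pressure (illustrative, not the SVIn code).
RHO_WATER = 1025.0   # kg/m^3, typical seawater density (assumption)
G = 9.81             # m/s^2, gravitational acceleration

def depth_from_pressure(p_pa, p_surface_pa=101325.0):
    """Depth in metres below the surface from absolute pressure in pascals."""
    return (p_pa - p_surface_pa) / (RHO_WATER * G)
```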
Advisors/Committee Members: Ioannis Rekleitis.
Subjects/Keywords: computer vision; Robotics; sensor fusion; SLAM

University of Kentucky
18.
Zhao, Jian.
Camera Planning and Fusion in a Heterogeneous Camera Network.
Degree: 2011, University of Kentucky
URL: https://uknowledge.uky.edu/ece_etds/2
Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties, and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion is the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application.
I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters to the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework for comparing the entire spectrum of approximation algorithms, from greedy and Markov Chain Monte Carlo (MCMC) methods to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes, including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency, and scalability of each technique are analyzed and compared in depth, and extensive experimental results illustrate the strengths and weaknesses of each method.
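Camera placement of this kind is a set-cover-style problem, and the greedy scheme named above can be sketched in a few lines. The candidate cameras and their coverage sets below are made up for illustration; this is not the dissertation's BIP code:

```python
# Greedy approximation for camera placement: choose k candidate cameras to
# maximise the number of covered targets (toy data, hypothetical names).

def greedy_placement(coverage, k):
    """coverage: dict camera_id -> set of target ids that camera sees."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the camera that adds the most not-yet-covered targets.
        best = max(coverage,
                   key=lambda c: len(coverage[c] - covered) if c not in chosen else -1)
        if best in chosen or not (coverage[best] - covered):
            break  # nothing new can be covered
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

coverage = {
    "cam_A": {1, 2, 3},
    "cam_B": {3, 4},
    "cam_C": {5, 6, 7, 8},
    "cam_D": {1, 5},
}
chosen, covered = greedy_placement(coverage, k=2)
```

For submodular coverage objectives like this one, the greedy choice carries a classical (1 − 1/e) approximation guarantee, which is why it is a standard baseline against MCMC and relaxation methods.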
The second problem, heterogeneous camera fusion, is very complex. Information can be fused at different levels, from pixels or voxels to semantic objects, with large variation in accuracy, communication, and computation costs. My focus is on the geometric transformation of shapes between objects observed in different camera planes. This so-called geometric fusion approach usually provides the most reliable fusion at the expense of high computation and communication costs. To tackle the complexity, a hierarchy of camera models with different levels of complexity is proposed to balance the effectiveness and efficiency of camera network operation, and different calibration and registration methods are proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system to improve the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
Subjects/Keywords: Sensor Planning; Camera Placement; Sensor Fusion; Human Segmentation; Multi-camera Fusion; Electrical and Computer Engineering

Michigan Technological University
19.
Demars, Casey D.
Target detection, tracking, and localization using multi-spectral image fusion and RF Doppler differentials.
Degree: PhD, Department of Electrical and Computer Engineering, 2018, Michigan Technological University
URL: https://digitalcommons.mtu.edu/etdr/720
It is critical for defense and security applications to have a high probability of detection and a low false alarm rate while operating over a wide variety of conditions. Sensor fusion, the process of combining data from two or more sensors, has been utilized to improve the performance of a system by exploiting the strengths of each sensor. This dissertation presents algorithms to fuse multi-sensor data that improve system performance by increasing detection rates, lowering false alarms, and improving track performance. Furthermore, it presents a framework for comparing algorithm error for image registration, a critical pre-processing step for multi-spectral image fusion.
First, I present an algorithm to improve detection and tracking performance for moving targets in a cluttered urban environment by fusing foreground maps from multi-spectral imagery. Most research in image fusion considers the visible and long-wave infrared bands; I examine these bands along with the near-infrared and mid-wave infrared bands. To localize and track a particular target of interest, I present an algorithm to fuse the output of the multi-spectral image tracker with a constellation of RF sensors measuring a specific cellular emanation. The fusion algorithm matches the Doppler differential from the RF sensors with the theoretical Doppler differential of the video tracker output by selecting the sensor pair that minimizes the absolute difference or root-mean-square difference. Finally, a framework to quantify shift-estimation error for both area- and feature-based algorithms is presented: by exploiting synthetically generated visible and long-wave infrared imagery, error metrics are computed and compared for a number of area- and feature-based shift-estimation algorithms.
A number of key results are presented in this dissertation. The multi-spectral image tracker improves location accuracy while improving the detection rate and lowering false alarms for most spectral bands. All 12 moving targets were tracked through the video sequence, with only one lost track that was later recovered. Targets from the multi-spectral tracking algorithm were correctly associated with their corresponding cellular emanation for all targets at lower measurement uncertainty using the root-mean-square difference, while also having a high confidence ratio for selecting the true target from background targets. For the area-based algorithms and the synthetic airfield image pair, the DFT and ECC algorithms produce sub-pixel shift-estimation error in regions such as shadows and high-contrast painted-line regions. The edge-orientation feature descriptors increase the number of sub-field estimates while improving the shift-estimation error compared to the Lowe descriptor.
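The association rule described above (pick the RF sensor pair whose measured Doppler differential best matches the one predicted from the video track) can be sketched as follows; all names and numbers are hypothetical, not the dissertation's data:

```python
import math

def best_pair(measured, predicted):
    """Select the RF sensor pair minimizing the root-mean-square difference
    between measured and predicted Doppler differentials (Hz)."""
    def rms_diff(pair):
        m, p = measured[pair], predicted[pair]
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, p)) / len(m))
    return min(measured, key=rms_diff)

# Hypothetical Doppler-differential time series for two sensor pairs.
measured  = {"s1-s2": [10.1, 9.8, 10.3], "s1-s3": [4.0, 4.2, 3.9]}
predicted = {"s1-s2": [10.0, 10.0, 10.0], "s1-s3": [6.0, 6.1, 5.9]}
```

Swapping `rms_diff` for a mean absolute difference gives the other criterion the abstract mentions.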
Advisors/Committee Members: Dr. Michael C. Roggemann.
Subjects/Keywords: Sensor fusion; image processing; data fusion; target tracking; Signal Processing

Carnegie Mellon University
20.
Han, Jun.
Advantages and Risks of Sensing for Cyber-Physical Security.
Degree: 2018, Carnegie Mellon University
URL: http://repository.cmu.edu/dissertations/1161
With the emergence of the Internet of Things (IoT) and Cyber-Physical Systems (CPS), modern computing is transforming from residing only in the cyber domain to the cyber-physical domain. I focus on one important aspect of this transformation, namely the shortcomings of traditional security measures. Security research over the last couple of decades has focused on protecting data with respect to identities or similar static attributes. In the physical world, however, data depend more on physical relationships, so CPS must verify identities together with relative physical context to provide security guarantees. Such verification requires devices to prove a unique relative physical context available only to the intended devices. In this work, I study how varying levels of constraints on the physical boundary of co-located devices determine the relative physical context. Specifically, I explore application scenarios with varying levels of constraints, including smart homes, semi-autonomous vehicles, and in-vehicle environments, and analyze how different constraints affect the binding of identities to physical relationships, ultimately enabling IoT devices to perform such verification. Furthermore, I demonstrate that sensing may also pose risks for CPS by presenting an attack on personal privacy in a smart-home environment.
Subjects/Keywords: Context-based Pairing; IoT Security; Sensor Fusion; Sensor Security
21.
Tonderski, Adam.
Sensor Fusion with Failure Robustness: A Deep Learning 3D Object Detection Architecture.
Degree: Chalmers tekniska högskola / Institutionen för fysik, 2019, Chalmers University of Technology
URL: http://hdl.handle.net/20.500.12380/300780
Autonomous driving is the task of navigating a vehicle without driver interaction. This requires accurate perception of the surroundings, which includes three-dimensional object detection. In this area, deep learning methods show great results, usually using images, point clouds, or a fusion of both. These methods are often evaluated with full sensor availability; however, in a safety-critical system, unexpected scenarios such as sensor failure must be accounted for. The objective of this thesis is to develop a deep learning architecture that is robust to the case where some sensors stop sending data.
The proposed architecture is inspired by leading LIDAR object detection models, which have reached high performance. To be able to make detections during LIDAR failure, the network learns to convert the 2D image to the same representation as the LIDAR, in the form of an estimated 3D point cloud. The two point clouds are merged into a common representation, which allows the model to perform the detections jointly and thus keep working if either sensor fails. The final contribution is a novel training procedure with simulated sensor failure.
The results show that the model is robust to sensor failure, reaching close to state-of-the-art performance for camera, LIDAR, and fusion with a single model. Additionally, on the KITTI dataset, the model outperforms three specialized versions trained on camera, LIDAR, and fusion respectively. A video of the model's detections can be viewed at youtu.be/_rKN_USMUoo.
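The simulated-sensor-failure training procedure can be pictured as randomly blanking one modality per training sample, so a single fused model sees camera-only and LIDAR-only inputs during training. A hedged sketch: the function name, the flat feature lists, and the failure probability are assumptions, not the thesis's code.

```python
import random

def simulate_failure(camera_feat, lidar_feat, p_fail=0.3, rng=random):
    """With probability p_fail, zero out exactly one randomly chosen modality,
    mimicking a sensor that has stopped sending data."""
    if rng.random() < p_fail:
        if rng.random() < 0.5:
            camera_feat = [0.0] * len(camera_feat)   # simulated camera outage
        else:
            lidar_feat = [0.0] * len(lidar_feat)     # simulated LIDAR outage
    return camera_feat, lidar_feat
```

In an actual training loop this would be applied per batch before the fusion layer, forcing the joint detector to stay useful under either single-sensor condition.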
Subjects/Keywords: Deep Learning; Sensor Fusion; Object Detection; Sensor Failure; KITTI

Delft University of Technology
22.
de Vries, Maarten (author).
Multi-Rate Unscented Kalman Filtering for Pose Estimation: Using a car-like vehicle-platform.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:e5d2506a-df25-4187-8ffa-10ffd2c819c6
Pose estimation through the fusion of GNSS with secondary sensors has long been an established field, and with the developments surrounding autonomous navigation over the past decade the topic has gained extra importance. In the current literature, GNSS-based pose estimation and localization are often improved through fusion with either an IMU or a vision sensor (VS), with the goal of improving on stand-alone GNSS localization results as well as coping with GNSS outages. In this thesis, however, all three of these sensors are fused using a cascade of an IMU orientation filter and a Multi-Rate UKF. This filter structure is evaluated in simulations and on real-world data obtained with a vehicle platform built for this purpose. The simulated results indicate that a Multi-Rate Unscented Kalman Filter for pose estimation is promising: when configured properly, the filter outperforms stand-alone GNSS receivers. The real-world experiments, however, show that the sensors used lack the accuracy and precision needed to obtain satisfactory results.
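The multi-rate idea (predict at the fast IMU rate, correct only on the time steps where a slow GNSS sample arrives) can be shown with a scalar toy filter standing in for the UKF. The rates, noise values, and one-dimensional state are assumptions for illustration, not the thesis's filter:

```python
def multi_rate_filter(imu_accel, gnss_pos, dt=0.01, gnss_every=10, r=0.5):
    """Scalar multi-rate filter: IMU at every step, GNSS every gnss_every steps."""
    x, v, p = 0.0, 0.0, 1.0           # position, velocity, position variance
    for k, a in enumerate(imu_accel):
        v += a * dt                   # fast prediction from the IMU
        x += v * dt
        p += 0.01                     # process-noise inflation per step
        if k % gnss_every == 0 and k // gnss_every < len(gnss_pos):
            z = gnss_pos[k // gnss_every]
            g = p / (p + r)           # scalar Kalman gain
            x += g * (z - x)          # slow-rate GNSS correction
            p *= (1.0 - g)
    return x, p

# Stationary vehicle, GNSS repeatedly reporting position 5.0:
x, p = multi_rate_filter([0.0] * 100, [5.0] * 10)
```

The estimate converges toward the GNSS reading while the variance stays bounded, illustrating how intermittent corrections discipline a high-rate dead-reckoning prediction.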
Biomechanical Design - BioRobotics
Advisors/Committee Members: Mazo Espinosa, Manuel (mentor), Seiffers, John (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: GNSS; Kalman Filter; Unscented Kalman; Pose estimation; sensor fusion; Multi-Rate; IMU sensor fusion; Vehicle-sensor; Protocol
23.
Antigny, Nicolas.
Estimation continue de la pose d'un équipement tenu en main par fusion des données visio-inertielles pour les applications de navigation piétonne en milieux urbains : Continuous pose estimation of handheld device by fusion of visio-inertial data for pedestrian navigation applications in urban environments.
Degree: Docteur es, Signal, Image, Vision, 2018, Ecole centrale de Nantes
URL: http://www.theses.fr/2018ECDN0027
To support pedestrian navigation in urban and indoor spaces, an accurate estimate of the pose (i.e. the 3D position and 3D orientation) of a handheld device is an essential point in the development of mobility assistance tools (e.g. augmented reality applications). Assuming the pedestrian is equipped only with general-public devices, pose estimation is restricted to the low-cost sensors embedded in them (i.e. a GNSS receiver, an inertial and magnetic measurement unit, and a monocular camera). Moreover, urban and indoor spaces, with their closely spaced buildings and ferromagnetic elements, are challenging areas for localization and pose estimation over large pedestrian displacements. However, the recent development and availability of the information contained in 3D Geographical Information Systems (GIS) constitutes a new source of data usable for localization and pose estimation. To address these challenges, this thesis proposes several solutions to improve the localization and pose estimation of the device held in the pedestrian's hand while moving through urban and indoor spaces. The proposed solutions integrate inertial- and magnetic-based attitude estimation, monocular visual odometry scaled using the estimated pedestrian displacement, absolute pose estimation based on the recognition of perfectly known 3D GIS objects, and position updates for pedestrian dead reckoning. All of these solutions feed into a fusion process that improves localization accuracy and continuously estimates a qualified pose of the handheld device; this qualification is necessary for on-site augmented reality display.
To evaluate the proposed solutions, experimental data were collected during pedestrian walks in an urban space with reference objects and indoor passages.
Advisors/Committee Members: Renaudin, Valérie (thesis director).
Subjects/Keywords: Navigation piétonne; Fusion de données; Réalité augmentée; Pedestrian navigation; Sensor fusion; Augmented reality; 004

Delft University of Technology
24.
Bootsma, B.G.N. (author).
Learning Interactively to Resolve Ambiguity in Sensor Policy Fusion for Robot.
Degree: 2021, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:1f4a04fe-f57b-46d2-87dd-2a9e1c7d30e4
This work applies interactive imitation learning to the navigation of a mobile robot. The algorithm "Learning Interactively to Resolve Ambiguity in Sensor Policy Fusion" (LIRA-SPF) is introduced in the field of machine learning for robot navigation. It extends existing methods by allowing the ambiguity-free fusion of existing single-sensor policy behaviors through active, interactive querying of a human expert. The ambiguous situations investigated in this work arise from the possible perspective mismatch of each sensor: LIRA-SPF aims to detect these situations and store the correct solution in a new fused policy. As a consequence, we provide an alternative to training a new behavior from scratch, leveraging the knowledge of existing expert behaviors and reducing the required teacher effort. Thanks to its modular implementation, the algorithm is tested with different supervised and unsupervised disambiguation strategies. This work summarizes multiple simulated and real-robot tests, showing the advantages of the proposed disambiguation module over state-of-the-art approaches. In particular, the analysis shows that less human-robot interaction is needed during the training process. Finally, the conclusions identify the missing pieces of the approach and how they could benefit the sensor fusion procedure.
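The disagreement-triggered querying at the heart of LIRA-SPF can be illustrated with a toy sketch (the policies, states, and simulated expert below are invented stand-ins, not the thesis's components): two single-sensor policies are reused where they agree, and the teacher is queried once per ambiguous state, with the answer cached in the fused policy so teacher effort shrinks over time.

```python
# Toy sketch of disagreement-triggered expert querying (illustrative only).
def camera_policy(state):
    return "left" if state < 0 else "right"

def lidar_policy(state):
    return "left" if state < 2 else "right"

fused = {}  # corrections learned from the expert, keyed by state

def fused_policy(state, expert):
    if state in fused:                     # previously disambiguated: reuse
        return fused[state]
    a, b = camera_policy(state), lidar_policy(state)
    if a == b:                             # unambiguous: reuse prior skills
        return a
    fused[state] = expert(state)           # ambiguous: ask the teacher once
    return fused[state]

queries = []
def expert(state):
    # Stand-in for the human teacher; records how often it is queried.
    queries.append(state)
    return "right"

actions = [fused_policy(s, expert) for s in [-1, 1, 1, 3]]
```

Note that the repeated ambiguous state (1) triggers only one query: the cached answer is replayed the second time, which is the mechanism by which the fused policy reduces teacher effort.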
Mechanical Engineering
Advisors/Committee Members: Kober, J. (mentor), Franzese, G. (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Interactive Imitation Learning; Navigation; End-to-End; Robot; Collision Avoidance; Sensor fusion; Policy fusion
APA (6th Edition):
Bootsma, B. G. N. (2021). Learning Interactively to Resolve Ambiguity in Sensor Policy Fusion for Robot. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:1f4a04fe-f57b-46d2-87dd-2a9e1c7d30e4
25.
Dia, Roxana.
Towards Environment Perception using Integer Arithmetic for Embedded Application : Vers une perception de l'environnement en utilisant l'arithmétique entière pour une application sur systèmes embarqués.
Degree: Docteur es, Mathématiques et informatique, 2020, Université Grenoble Alpes
URL: http://www.theses.fr/2020GRALM038
The main drawback of using grid-based representations for SLAM and for global localization is the exponential computational complexity required in terms of grid size (of the map and of the pose grids). The grid size required to model the environment surrounding a robot or a vehicle can be on the order of thousands of millions of cells. For instance, a square 2D space of 100 m × 100 m with a cell size of 10 cm is modelled with a grid of 1 million cells; including a height of 2 m to represent the third dimension requires 20 million cells. Consequently, classical grid-based SLAM and global-localization approaches need a parallel computing unit to meet the latency imposed by safety standards. Such computation is generally performed on workstations integrating graphics processing units (GPUs) and/or high-end processors. However, autonomous vehicles cannot carry such platforms for reasons of cost and certification, and these platforms draw more power than the limited energy source available in some robots can supply.
Embedded hardware platforms are commonly used as an alternative in automotive applications: they meet low-cost, low-power, and small-footprint constraints, and some of them are automotive-certified under the ISO 26262 standard. Most of them, however, are not equipped with a floating-point unit, which limits their computational performance. The sigma-fusion project team of the LIALP laboratory at CEA-Leti has developed an integer-based perception method suited to embedded devices: it builds an occupancy grid via Bayesian fusion using integer arithmetic only, hence its "embeddability" on embedded computing platforms without a floating-point unit. This is the major contribution of Tiana Rakotovao's PhD thesis [Rakotovao Andriamahefa 2017]. The objective of the present thesis is to extend this integer perception framework to SLAM and to global localization, thus offering solutions that can be embedded on resource-constrained systems.
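The integer-only Bayesian fusion described above can be sketched in a few lines (a minimal illustration, not the CEA-Leti implementation; the fixed-point scale and the sensor-model probabilities 0.7/0.4 are invented): occupancy probabilities are stored as integer-scaled log-odds, so floating point is needed only offline to precompute the sensor model, and every run-time cell update is a single saturated integer addition.

```python
# Sketch: integer fixed-point log-odds occupancy update (illustrative values).
from math import log

S = 1000  # fixed-point scale: stored value = round(S * ln(p / (1 - p)))

def to_fixed_logodds(p):
    # Offline helper: floats are used only to precompute the sensor model.
    return round(S * log(p / (1.0 - p)))

HIT  = to_fixed_logodds(0.7)   # evidence when the sensor reports "occupied"
MISS = to_fixed_logodds(0.4)   # evidence when the sensor reports "free"

def update_cell(cell, occupied):
    # Run-time path: pure integer addition, saturated to avoid overflow.
    cell += HIT if occupied else MISS
    return max(-10 * S, min(10 * S, cell))

cell = 0                       # prior p = 0.5 corresponds to log-odds 0
for _ in range(3):             # three consecutive "occupied" observations
    cell = update_cell(cell, occupied=True)
```

If a probability is ever needed back, it can be recovered offline as p = 1 / (1 + exp(-cell / S)); the embedded target itself never touches floats.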
Advisors/Committee Members: Lesecq, Suzanne (thesis director), Parissis, Ioannis (thesis director).
Subjects/Keywords: Cartographie et Localisation Simultanées; Fusion entière; Capteur; Integer Fusion; Simultaneous localization and mapping; Sensor; 510
APA (6th Edition):
Dia, R. (2020). Towards Environment Perception using Integer Arithmetic for Embedded Application : Vers une perception de l'environnement en utilisant l'arithmétique entière pour une application sur systèmes embarqués. (Doctoral Dissertation). Université Grenoble Alpes. Retrieved from http://www.theses.fr/2020GRALM038

University of Manchester
26.
Zebin, Tahmina.
Wearable Inertial Multi-Sensor System for Physical Activity Analysis and Classification with Machine Learning Algorithms.
Degree: 2018, University of Manchester
URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:314312
Wearable sensors such as inertial measurement units (IMUs) have been widely used to measure the quality of physical activities during daily living, in healthy people and in people with movement disorders, through activity classification. These sensors have the potential to provide valuable information about movement during the activities of daily living (ADL), such as walking, sitting down, and standing up, which could help clinicians to monitor rehabilitation and therapeutic interventions. However, high accuracy in the detection and segmentation of these activities is necessary for proper evaluation of the quality of performance for a given activity. In this research, we devised a wearable inertial sensor system to measure physical activities and to calculate spatio-temporal gait parameters. We presented advanced signal processing and machine learning algorithms for accurate measurement of gait parameters from the sensor values. We implemented a fusion-factor-based method to deal adaptively with the accumulated drift and integration noise in inertial sensor data. We further implemented a quaternion sensor fusion algorithm for joint angle measurement and achieved less noisy values for static and dynamic joint angles with a fourth-order Runge-Kutta method.
For classification of daily-life activities, we rigorously analyzed and hand-crafted sixty-six statistical and frequency-domain features from the accelerometer and gyroscope time series. This feature set was then used to train several state-of-the-art and novel classifiers: decision trees, support vector machines, k-nearest neighbour, and ensemble algorithms. Our investigation revealed that a support vector machine classifier with a quadratic kernel and a bagged ensemble classifier met the required accuracy of above 90%. Since hand-crafting features requires substantial domain knowledge, we devised a novel deep convolutional neural network to extract and select features from the raw sensor signals automatically. For this, we proposed and implemented a CNN with dropout regularization and batch normalization, achieving 96.7% accuracy and proving the superiority of automatically learned features over hand-crafted ones.
Dataset, demo, and code: https://github.com/TZebin/Thesis-Supporting-Files
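The quaternion sensor fusion step with fourth-order Runge-Kutta integration mentioned above can be sketched as follows (a minimal illustration under assumed conventions, not the thesis's code; the gyro rate and timestep are example values): the attitude kinematics dq/dt = 0.5 · q ⊗ (0, ω) are stepped with classical RK4 and the quaternion is renormalized to fight numerical drift.

```python
# Sketch: quaternion attitude integration with a classical RK4 step.
from math import pi, sqrt

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qdot(q, w):
    # Attitude kinematics for body-frame angular rate w = (wx, wy, wz).
    return tuple(0.5 * c for c in qmul(q, (0.0,) + tuple(w)))

def rk4_step(q, w, dt):
    k1 = qdot(q, w)
    k2 = qdot(tuple(qi + 0.5 * dt * ki for qi, ki in zip(q, k1)), w)
    k3 = qdot(tuple(qi + 0.5 * dt * ki for qi, ki in zip(q, k2)), w)
    k4 = qdot(tuple(qi + dt * ki for qi, ki in zip(q, k3)), w)
    q = tuple(qi + dt / 6.0 * (a + 2*b + 2*c + d)
              for qi, a, b, c, d in zip(q, k1, k2, k3, k4))
    n = sqrt(sum(c * c for c in q))    # renormalize to keep unit length
    return tuple(c / n for c in q)

# Example: rotate at 90 deg/s about the z axis for one second.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = rk4_step(q, (0.0, 0.0, pi / 2.0), 0.01)
# q should now be close to a 90-degree rotation about z.
```

In a full joint-angle pipeline this gyro propagation would be corrected by accelerometer (and magnetometer) measurements; only the integration step is sketched here.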
Advisors/Committee Members: Scully, Patricia, Ozanyan, Krikor.
Subjects/Keywords: Wearable Sensor; Machine Learning; Convolutional Neural Network; Sensor Fusion; Activity Recognition; Gait Analysis
APA (6th Edition):
Zebin, T. (2018). Wearable Inertial Multi-Sensor System for Physical Activity Analysis and Classification with Machine Learning Algorithms. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:314312

University of Colorado
27.
Iglesias Echevarria, David I.
Cooperative Robot Localization Using Event-Triggered Estimation.
Degree: MS, 2017, University of Colorado
URL: https://scholar.colorado.edu/asen_gradetds/206
Multiple-robot systems that need to cooperate to perform certain activities or tasks are known to incur high energy costs that hinder their autonomous operation and limit the benefits these platforms provide to humans. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in wireless sensor networks but rarely for robot localization, agents send measurements to their neighbors only when the expected novelty of this information is high. Since all agents know the condition that triggers a transmission, the lack of a measurement is itself informative and is fused into the state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations illustrate the effectiveness of the method.
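The covariance intersection rule mentioned above can be sketched for the 2×2 case (illustrative values; the mixing weight is found here by brute-force grid search rather than the closed-form or numerical optimization a real implementation would use): the fused inverse covariance is a convex combination of the two inverse covariances, with the weight chosen to minimize the fused trace, which keeps the estimate consistent even when the cross-correlation between the two inputs is unknown.

```python
# Sketch: 2x2 covariance intersection with a grid search over the weight.
def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def addw(m, n, w):
    # Elementwise convex combination w*m + (1-w)*n of two 2x2 matrices.
    return tuple(tuple(w * a + (1.0 - w) * b for a, b in zip(rm, rn))
                 for rm, rn in zip(m, n))

def matvec(m, v):
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

def ci_fuse(x1, P1, x2, P2, steps=200):
    # P^-1 = w*P1^-1 + (1-w)*P2^-1; pick w minimizing trace(P).
    I1, I2 = inv2(P1), inv2(P2)
    i1x1, i2x2 = matvec(I1, x1), matvec(I2, x2)
    best = None
    for i in range(steps + 1):
        w = i / steps
        P = inv2(addw(I1, I2, w))
        if best is None or P[0][0] + P[1][1] < best[2][0][0] + best[2][1][1]:
            y = tuple(w * a + (1.0 - w) * b for a, b in zip(i1x1, i2x2))
            best = (matvec(P, y), w, P)
    return best

x, w, P = ci_fuse((0.0, 0.0), ((4.0, 0.0), (0.0, 1.0)),
                  (2.0, 2.0), ((1.0, 0.0), (0.0, 4.0)))
```

With each input confident in a different axis, the optimal weight lands in the interior (here w = 0.5), unlike a naive Kalman-style fusion that would require the cross-covariance to be known.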
Advisors/Committee Members: Nisar R. Ahmed, Jay W. McMahon, Eric W. Frew.
Subjects/Keywords: localization; mobile robotics; sensor fusion; state estimation; statistical inference; wireless sensor networks; Aerospace Engineering; Robotics
APA (6th Edition):
Iglesias Echevarria, D. I. (2017). Cooperative Robot Localization Using Event-Triggered Estimation. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/asen_gradetds/206

University of Colorado
28.
Iglesias Echevarria, David I.
Cooperative Robot Localization Using Event-Triggered Estimation.
Degree: MS, 2017, University of Colorado
URL: https://scholar.colorado.edu/asen_gradetds/185
Multiple-robot systems that need to cooperate to perform certain activities or tasks are known to incur high energy costs that hinder their autonomous operation and limit the benefits these platforms provide to humans. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in wireless sensor networks but rarely for robot localization, agents send measurements to their neighbors only when the expected novelty of this information is high. Since all agents know the condition that triggers a transmission, the lack of a measurement is itself informative and is fused into the state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations illustrate the effectiveness of the method.
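The event-triggering idea, that silence itself is informative, can be caricatured in a scalar toy example (the threshold and measurement sequence are invented): the sender transmits only measurements that differ enough from what the receiver can already predict, and a missing message tells the receiver the truth lies within DELTA of its prediction.

```python
# Toy scalar sketch of event-triggered measurement sharing.
DELTA = 0.5  # triggering threshold, known to both sender and receiver

def maybe_send(measurement, prediction):
    # Sender: transmit only when the expected novelty is high.
    return measurement if abs(measurement - prediction) > DELTA else None

def receive(msg, prediction):
    # Receiver: a missing message still tightens the estimate, because
    # silence implies |truth - prediction| <= DELTA.
    if msg is not None:
        return msg, 0.0          # explicit measurement received
    return prediction, DELTA     # implicit, bounded-error update

est, sent = 0.0, []
for truth in [0.1, 0.2, 1.5, 1.6]:
    msg = maybe_send(truth, est)
    sent.append(msg is not None)
    est, _ = receive(msg, est)
```

Only one of the four samples is transmitted, yet the receiver's error stays bounded by DELTA throughout, which is the communication saving the abstract describes.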
Advisors/Committee Members: Nisar R. Ahmed, Eric W. Frew, Jay W. McMahon.
Subjects/Keywords: localization; mobile robotics; sensor fusion; state estimation; statistical inference; wireless sensor networks; Aerospace Engineering; Robotics
APA (6th Edition):
Iglesias Echevarria, D. I. (2017). Cooperative Robot Localization Using Event-Triggered Estimation. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/asen_gradetds/185

University of Illinois – Chicago
29.
Khan, Nishat Anjum.
Multi-Sensor Preprocessing for Traffic Light Detection.
Degree: 2017, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/22181
With the exponential growth of smartphone usage and computational capability, there is an opportunity today to build a usable navigation system for the visually impaired. A smartphone contains virtually all the sensors needed to sense the surrounding environment, such as GPS, cameras, and inertial sensors. However, building a complete navigation system poses many challenges, such as low-level methods of environment sensing, accuracy, and efficient data processing. In this dissertation, we address some of these challenges and present a system for traffic light detection, which is fundamental for outdoor pedestrian navigation by the visually impaired. The system analyzes the video feed from a smartphone's camera using model-based computer vision techniques to detect traffic lights. Specifically, we utilize both color and shape information, as they are the most prominent features of traffic lights. Additionally, we use the smartphone's inertial sensors to compute its 3D orientation and predict the subpart of a video frame that is most likely to contain the traffic lights. By processing only that subpart, we improve the computation time by an order of magnitude on average. Furthermore, because a subpart is processed instead of the whole frame, the system achieves higher accuracy through fewer false positives. Finally, we recognize walk and stop signs for pedestrians in addition to regular traffic lights to obtain higher confidence during navigation. We evaluated this system in various lighting conditions, such as cloudy, sunny, and at night, and achieved over 95% accuracy in traffic light and sign detection and recognition.
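The orientation-gated region-of-interest step described above can be sketched as follows (the frame size, field of view, expected light elevation, and band width are illustrative assumptions, not the dissertation's parameters): the phone's pitch, obtained from its inertial sensors, is mapped through a pinhole model to the band of image rows where traffic lights can plausibly appear, so only that band needs to be searched.

```python
# Sketch: predict the sub-band of image rows likely to contain traffic lights.
from math import radians, tan

FRAME_H = 1080        # vertical resolution of the video frame (illustrative)
VFOV_DEG = 60.0       # assumed vertical field of view of the camera

def roi_rows(pitch_deg, light_elev_deg=10.0, band_deg=15.0):
    # All angles in degrees; returns (top_row, bottom_row), row 0 at the top.
    def elev_to_row(rel_elev_deg):
        # Pinhole projection of an elevation angle to a pixel row,
        # clamped to the frame.
        off = (FRAME_H / 2.0) * tan(radians(rel_elev_deg)) \
              / tan(radians(VFOV_DEG / 2.0))
        return int(min(max(FRAME_H / 2.0 - off, 0.0), FRAME_H))
    rel = light_elev_deg - pitch_deg   # light elevation relative to camera axis
    return elev_to_row(rel + band_deg / 2.0), elev_to_row(rel - band_deg / 2.0)

top, bot = roi_rows(pitch_deg=0.0)     # phone held level
```

Because the band is a small fraction of the frame, the color/shape detector runs on far fewer pixels, which is where the order-of-magnitude speedup and the reduction in false positives come from.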
Advisors/Committee Members: Ansari, Rashid (advisor), Cetin, Ahmet E (advisor), Soltanalian, Mojtaba (committee member), Ansari, Rashid (chair).
Subjects/Keywords: Image Processing; Traffic Lights; Walk Signs; Pedestrian Navigation; Video Analytics; Sensor Fusion; Inertial Sensor
APA (6th Edition):
Khan, N. A. (2017). Multi-Sensor Preprocessing for Traffic Light Detection. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22181
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Manchester
30.
Zebin, Tahmina.
Wearable inertial multi-sensor system for physical activity analysis and classification with machine learning algorithms.
Degree: PhD, 2018, University of Manchester
URL: https://www.research.manchester.ac.uk/portal/en/theses/wearable-inertial-multisensor-system-for-physical-activity-analysis-and-classification-with-machine-learning-algorithms(57df353c-e3e8-405b-bc8f-86c883554223).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.779579
Wearable sensors such as inertial measurement units (IMUs) have been widely used to measure the quality of physical activities during daily living, in healthy people and in people with movement disorders, through activity classification. These sensors have the potential to provide valuable information about movement during the activities of daily living (ADL), such as walking, sitting down, and standing up, which could help clinicians to monitor rehabilitation and therapeutic interventions. However, high accuracy in the detection and segmentation of these activities is necessary for proper evaluation of the quality of performance for a given activity. In this research, we devised a wearable inertial sensor system to measure physical activities and to calculate spatio-temporal gait parameters. We presented advanced signal processing and machine learning algorithms for accurate measurement of gait parameters from the sensor values. We implemented a fusion-factor-based method to deal adaptively with the accumulated drift and integration noise in inertial sensor data. We further implemented a quaternion sensor fusion algorithm for joint angle measurement and achieved less noisy values for static and dynamic joint angles with a fourth-order Runge-Kutta method. For classification of daily-life activities, we rigorously analyzed and hand-crafted sixty-six statistical and frequency-domain features from the accelerometer and gyroscope time series. This feature set was then used to train several state-of-the-art and novel classifiers: decision trees, support vector machines, k-nearest neighbour, and ensemble algorithms. Our investigation revealed that a support vector machine classifier with a quadratic kernel and a bagged ensemble classifier met the required accuracy of above 90%.
Since hand-crafting features requires substantial domain knowledge, we devised a novel deep convolutional neural network to extract and select features from the raw sensor signals automatically. For this, we proposed and implemented a CNN with dropout regularization and batch normalization, achieving 96.7% accuracy and proving the superiority of automatically learned features over hand-crafted ones.
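A miniature of the hand-crafted feature extraction described above (the thesis computes sixty-six statistical and frequency-domain features over accelerometer and gyroscope channels; this sketch computes four common time-domain ones on a single window of one channel, with invented sample values):

```python
# Sketch: per-window time-domain features for one sensor channel.
from statistics import mean, stdev

def window_features(samples):
    # Basic statistics of one windowed sensor channel, as used to feed
    # a classical activity classifier.
    return {
        "mean": mean(samples),
        "std": stdev(samples),                          # sample std dev
        "range": max(samples) - min(samples),
        "energy": sum(x * x for x in samples) / len(samples),
    }

feats = window_features([0.1, 0.4, 0.2, 0.3])
```

In practice such features are computed per sliding window and per axis, concatenated across sensors, and passed to the classifiers named in the abstract; the learned-feature CNN replaces exactly this stage.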
Subjects/Keywords: Gait Analysis; Activity Recognition; Sensor Fusion; Machine Learning; Wearable Sensor; Convolutional Neural Network
APA (6th Edition):
Zebin, T. (2018). Wearable inertial multi-sensor system for physical activity analysis and classification with machine learning algorithms. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/wearable-inertial-multisensor-system-for-physical-activity-analysis-and-classification-with-machine-learning-algorithms(57df353c-e3e8-405b-bc8f-86c883554223).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.779579