
You searched for +publisher:"ETH Zürich" +contributor:("Stachniss, Cyrill"). Showing records 1 – 3 of 3 total matches.



ETH Zürich

1. Pfeiffer, Mark. Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots.

Degree: 2018, ETH Zürich

Robotic navigation in static, known environments is well understood; however, operation in unknown, dynamic, or unstructured environments still poses major challenges. In order to support humans in a varied set of applications, such as assistance of elderly or disabled people, transportation, search and rescue, or agriculture, operation in workspaces shared between robots and humans is a key factor. This thesis introduces four data-driven navigation approaches for mobile robot navigation in challenging real-world environments. It covers the problems of navigation among humans in shared workspaces and through environments where no map is available. In both areas, humans show outstanding capabilities by relying on their "common sense", i.e. the experience gained over many years of their lives. The underlying goal of using data-driven approaches instead of classical hand-engineered solutions is to reduce the amount of hand-tuning and to improve the navigation performance and social acceptance of robots. When navigating in dynamic environments shared with other agents, forecasting the evolution of the environment is an important factor. Part A of this thesis therefore introduces two approaches for interaction-aware robot navigation. First, we present a framework for fully cooperative robot motion planning in environments shared with pedestrians. A maximum-entropy probability distribution is learned from pedestrian demonstrations using inverse reinforcement learning in order to avoid hand-tuning of the model parameters. Using this approach, cooperative real-world robot navigation is demonstrated in environments shared with pedestrians. The results highlight the importance of interaction-aware navigation strategies for improving the social compliance of robots.
The second contribution is a data-driven model for interaction-aware pedestrian prediction in real-world environments with both static and dynamic obstacles, based on Long Short-Term Memory (LSTM) neural networks. This model is designed to be used with standard predict-react planners while still taking interactions among pedestrians into account. The presented results show state-of-the-art prediction accuracy, and the importance of accounting for static obstacles in pedestrian prediction is evaluated. Part B of this thesis covers another challenging problem in mobile robot navigation: map-less end-to-end navigation. While classical approaches rely on the interplay of various modules and prior knowledge of the map, map-less navigation targets the flexible deployment of mobile robots in unknown environments. Two data-driven approaches for target-driven end-to-end navigation are presented. First, imitation learning based on expert demonstrations is used to find the complex mapping between sensor data and robot motion commands, represented by a neural network model. Second, this work is combined with deep reinforcement learning in order to find a more general and… Advisors/Committee Members: Siegwart, Roland Y., Stachniss, Cyrill, Abbeel, Pieter.
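The maximum-entropy formulation mentioned in the abstract assigns each candidate trajectory a probability proportional to the exponential of its negative cost, with cost weights learned from demonstrations via inverse reinforcement learning. A minimal numpy sketch of that distribution (the feature values and weights below are invented for illustration, not taken from the thesis):

```python
import numpy as np

def maxent_trajectory_probs(features, theta):
    """Maximum-entropy distribution over candidate trajectories:
    P(tau_i) proportional to exp(-theta . f(tau_i)), where f maps a
    trajectory to a feature vector (e.g. path length, pedestrian proximity)."""
    costs = features @ theta                 # linear cost per trajectory
    logits = -costs - np.max(-costs)         # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Three hypothetical candidate trajectories, described by two features:
# [path length in m, pedestrian-proximity penalty]
features = np.array([[5.0, 0.0],    # long detour, no interaction
                     [3.0, 2.0],    # short, passes close to a pedestrian
                     [4.0, 0.5]])   # compromise
theta = np.array([0.4, 1.0])        # weights; learned by IRL in the thesis

probs = maxent_trajectory_probs(features, theta)
```

With these made-up weights, the long but socially compliant detour receives the highest probability, which is the qualitative effect the interaction-aware planner aims for.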

Subjects/Keywords: Robotics; Robot navigation; Motion planning; Machine learning; Object prediction; Cooperative planning; Autonomous mobile robots; Autonomous navigation


APA (6th Edition):

Pfeiffer, M. (2018). Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots. (Doctoral Dissertation). ETH Zürich. Retrieved from http://hdl.handle.net/20.500.11850/324253

Chicago Manual of Style (16th Edition):

Pfeiffer, Mark. “Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots.” 2018. Doctoral Dissertation, ETH Zürich. Accessed November 20, 2019. http://hdl.handle.net/20.500.11850/324253.

MLA Handbook (7th Edition):

Pfeiffer, Mark. “Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots.” 2018. Web. 20 Nov 2019.

Vancouver:

Pfeiffer M. Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots. [Internet] [Doctoral dissertation]. ETH Zürich; 2018. [cited 2019 Nov 20]. Available from: http://hdl.handle.net/20.500.11850/324253.

Council of Science Editors:

Pfeiffer M. Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots. [Doctoral Dissertation]. ETH Zürich; 2018. Available from: http://hdl.handle.net/20.500.11850/324253



2. Dubé, Renaud. Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds.

Degree: 2018, ETH Zürich

Multi-robot systems offer several advantages over their single-robot counterparts, such as robustness to robot failure and faster exploration in time-critical search and rescue missions. In order to collaborate in these scenarios, the robots need to jointly build a unified map representation in which they can co-localize each other. The goal of this thesis is to develop a real-time solution to the Simultaneous Localization and Mapping (SLAM) problem for multiple robots equipped with 3D sensors. In particular, we focus on LiDAR sensors, which can generate precise reconstructions of the environment and are robust to changes in lighting conditions. The multi-robot SLAM problem with 3D point clouds poses several challenges. First, a global place recognition technique is often required, as the relative transformations between the robots are not always known. Second, multi-robot systems generate large quantities of data which need to be processed efficiently to achieve real-time performance. Finally, such systems often operate over bandwidth-limited wireless communication channels; a compact representation that can easily be stored and transmitted is therefore required. This thesis specifically targets these three challenges. In our work, we perform global localization using a novel segment extraction and matching algorithm. In essence, 3D point cloud measurements are segmented and each segment is compressed into a compact descriptor. Matching descriptors are retrieved from a map and subsequently filtered based on geometric consistency. The output of this algorithm is a 6 Degrees of Freedom (DoF) pose in a global map, obtained without prior position information. Globally recognizing places on the basis of segments can be more efficient than using key-point descriptors, as fewer descriptors are usually required to describe places.
Additionally, we have developed a set of incremental and time-effective algorithms that exploit the inherently sequential nature of 3D LiDAR measurements. To further address the real-time requirement, we present an ego-motion estimator which attains efficiency by non-uniformly sampling knots over a continuous-time trajectory. A compact map representation is achieved by using a novel data-driven descriptor for 3D point clouds. This descriptor can be extracted by a Convolutional Neural Network (CNN) with an autoencoder-like architecture. The novelty of this approach is that it simultaneously allows us to perform robot localization, 3D environment reconstruction, and semantic extraction. These compact point cloud descriptors can easily be transmitted and used, for example, to provide structural feedback to end-users operating in remote locations. We have incorporated all of these functionalities into a complete multi-robot SLAM solution that operates in real time. The effectiveness of our system has been demonstrated in multiple experiments, both in urban driving and in search and rescue environments. Specifically, we achieve LiDAR-based global localization at 10 Hz in… Advisors/Committee Members: Siegwart, Roland, Kaess, Michael, Stachniss, Cyrill.
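The geometric-consistency filtering step described above can be illustrated with a small sketch: candidate segment matches are kept only if the pairwise distances between their centroids agree in the local and global maps. This is a generic greedy version for illustration (the tolerance, the greedy seeding, and the example centroids are assumptions, not the thesis implementation):

```python
import numpy as np

def geometric_consistency_filter(src, dst, tol=0.5):
    """Keep candidate segment matches whose pairwise centroid distances
    agree between the local map (src) and the global map (dst).
    src, dst: (N, 3) centroids of N candidate matches (src[i] <-> dst[i]).
    Returns indices of a mutually consistent subset, grown greedily."""
    n = len(src)
    # consistent[i, j] is True if matches i and j preserve their distance
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    consistent = np.abs(d_src - d_dst) < tol
    # seed with the match that is consistent with the most others
    seed = int(consistent.sum(axis=1).argmax())
    keep = [seed]
    for i in range(n):
        if i != seed and all(consistent[i, j] for j in keep):
            keep.append(i)
    return sorted(keep)

# Toy example: five matches related by a pure translation, one corrupted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src + np.array([2.0, -1.0, 0.0])   # translation preserves all distances
dst[4] += np.array([5.0, 5.0, 0.0])      # corrupt the fifth match
inliers = geometric_consistency_filter(src, dst)
```

The corrupted match violates the pairwise-distance constraint against every inlier and is rejected; from the surviving matches a 6-DoF pose could then be estimated.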


APA (6th Edition):

Dubé, R. (2018). Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds. (Doctoral Dissertation). ETH Zürich. Retrieved from http://hdl.handle.net/20.500.11850/323152

Chicago Manual of Style (16th Edition):

Dubé, Renaud. “Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds.” 2018. Doctoral Dissertation, ETH Zürich. Accessed November 20, 2019. http://hdl.handle.net/20.500.11850/323152.

MLA Handbook (7th Edition):

Dubé, Renaud. “Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds.” 2018. Web. 20 Nov 2019.

Vancouver:

Dubé R. Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds. [Internet] [Doctoral dissertation]. ETH Zürich; 2018. [cited 2019 Nov 20]. Available from: http://hdl.handle.net/20.500.11850/323152.

Council of Science Editors:

Dubé R. Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds. [Doctoral Dissertation]. ETH Zürich; 2018. Available from: http://hdl.handle.net/20.500.11850/323152



3. Khanna, Raghav. Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference.

Degree: 2019, ETH Zürich

To feed a growing world population with a limited amount of arable land, we must develop new methods of sustainable farming that maintain or increase yield while minimizing chemical inputs such as fertilizers, herbicides, and pesticides. Precision agriculture techniques seek to address this challenge by monitoring key indicators of crop health and targeting timely treatment only to plants or infested areas that need it. Such monitoring is often still a time-consuming and expensive activity and is thus not performed as standard practice. Developing automated methods for such monitoring using unmanned aerial and ground vehicles can therefore provide a major impetus for the adoption of precision agriculture practices at scale. This thesis deals with improving the perception capabilities of autonomous systems that can be deployed for environmental monitoring, especially in agricultural contexts. Deploying autonomous systems on agricultural fields is a challenging task, since the perception system must deal with an unstructured, dynamic environment under widely varying illumination and weather conditions. This thesis contributes to three parts of the perception pipeline required by most autonomous systems: calibration, mapping, and inference. In the first part we look at radiometrically calibrating vision sensors (i.e. cameras) in a field context. In contrast to existing laboratory-based approaches requiring specialized and expensive equipment, we develop a practical and modular method to radiometrically calibrate monochrome, colour, and hyperspectral cameras with data that can be collected in the field. Our data-driven, parameter-free approach, based on maximum likelihood estimation, allows robust estimation of the sensor response, lens vignetting, and global illuminant with minimal prior knowledge about the camera and lens setup in use.
In the second part, leveraging developments in 3D photogrammetry, we propose methods to metrically map and extract plant trait indicators from crop fields using a low-cost Unmanned Aerial Vehicle (UAV) carrying a camera. We show that crop parameters extracted from these 3D maps compare reasonably well with measurements taken on the ground and hence can be used to estimate crop status, providing a practical and effective means of monitoring fields at high spatial and temporal resolution with minimal user intervention. Additionally, we propose a method by which an Unmanned Ground Vehicle (UGV) can localize itself in such a 3D map, enabling higher-resolution map updates and automated targeted interventions, such as delivering fertilizer or herbicide. We show that our collaborative mapping approach, which combines the 3D geometry of the crop field with a weakly semantic feature, a vegetation index, performs well on a wide variety of crop fields and outperforms several state-of-the-art map registration techniques in such scenarios. In the third part of this thesis, we present a generic, machine-learning-based framework for determining the types and severities of… Advisors/Committee Members: Siegwart, Roland Y., Stachniss, Cyrill, Walter, Achim.
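The abstract mentions using a vegetation index as a weakly semantic feature for map registration. One common such index (chosen here only as an illustration; the thesis does not specify which index in this abstract) is Excess Green, ExG = 2g − r − b, computed on chromatic coordinates:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index (ExG = 2g - r - b) on chromatic coordinates.
    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (H, W) map; larger values indicate more vegetation."""
    total = rgb.sum(axis=-1, keepdims=True)
    total = np.where(total == 0, 1.0, total)     # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)    # chromatic coordinates
    return 2 * g - r - b

# Hypothetical 1x2 image: one grass-like and one soil-like pixel
img = np.array([[[0.2, 0.6, 0.1],    # grass-like pixel (green dominant)
                 [0.4, 0.3, 0.2]]])  # soil-like pixel (red dominant)
exg = excess_green(img)
mask = exg > 0.2    # illustrative threshold for a vegetation mask
```

Because the index is computed on chromatic (intensity-normalized) coordinates, it is comparatively robust to the illumination changes between UAV and UGV imagery, which is what makes a feature like this useful for cross-platform registration.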

Subjects/Keywords: Robotics in Agriculture and Forestry; Calibration techniques; Inference drawing; Unmanned aerial systems; Engineering & allied operations; Data processing, computer science


APA (6th Edition):

Khanna, R. (2019). Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference. (Doctoral Dissertation). ETH Zürich. Retrieved from http://hdl.handle.net/20.500.11850/360967

Chicago Manual of Style (16th Edition):

Khanna, Raghav. “Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference.” 2019. Doctoral Dissertation, ETH Zürich. Accessed November 20, 2019. http://hdl.handle.net/20.500.11850/360967.

MLA Handbook (7th Edition):

Khanna, Raghav. “Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference.” 2019. Web. 20 Nov 2019.

Vancouver:

Khanna R. Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference. [Internet] [Doctoral dissertation]. ETH Zürich; 2019. [cited 2019 Nov 20]. Available from: http://hdl.handle.net/20.500.11850/360967.

Council of Science Editors:

Khanna R. Robotic Perception for Precision Agriculture: Calibration, Mapping and Inference. [Doctoral Dissertation]. ETH Zürich; 2019. Available from: http://hdl.handle.net/20.500.11850/360967
