You searched for subject:(MAVLAB). Showing records 1 – 3 of 3 total matches.


Delft University of Technology

1. Braber, T.I. (author). Vision-based stabilization of micro quadrotors.

Degree: 2017, Delft University of Technology

On-board stabilization of quadrotors is often done using an Inertial Measurement Unit (IMU), aided by additional sensors to combat IMU drift. For example, GPS readings can aid when flying outdoors, while in GPS-denied environments, such as indoors, visual information from one or more camera modules can be used. A single downward-facing camera, however, cannot determine the absolute height of the quadrotor, leaving the Optical Flow (OF) results known only up to scale. To estimate the velocity of the quadrotor, an additional range sensor, such as an Ultrasonic Sensor (US), is typically used to solve this scaling problem. These solutions are difficult to scale down to micro quadrotors, as the platform becomes too small to fit and lift additional sensors. Stabilizing a quadrotor with only a single camera and an IMU would therefore pave the way for the development of even smaller quadrotors. This master's thesis presents an adaptive control strategy to stabilize a micro quadrotor in all three axes using only an IMU and a monocular camera. This is achieved by extending the stability-based approach for a single, vertical axis by De Croon in "Distance estimation with efference copies and optical flow maneuvers: a stability-based strategy" [1]. This stability-based method increases the control gain in the visual feedback loop until the quadrotor detects that it is oscillating, which is done by checking whether the covariance of the commanded thrust inputs and the measured divergence passes a threshold. Next, the height can be estimated from the predetermined relationship between the gain and the height at which these self-induced oscillations occur, and proper gains can be set for the estimated height. An analysis is done in simulation to present a proof of concept of the stabilization method in three axes and to determine the effects of scaling and of varying effective Frames per Second (FPS) caused by computational load. It was shown that the adaptive gain strategy can stabilize the simulated quadrotor and prevent it from drifting. Furthermore, the control gains were scaled such that the effects of scaling a quadrotor could be mostly negated, though at about a tenth of the original scale the simulated noise became too influential for the scaled gains to compensate. The minimum effective FPS required to stabilize an ARDrone 2 was determined to be 15, and it was shown that an increase in effective FPS helps stabilize the smaller-scale quadrotors that became unstable due to the scaling effects. Finally, flights on a Parrot ARDrone 2 and a Parrot Bebop were performed to show the usability of this control strategy in real life. It was shown that both quadrotors could achieve a stable, drift-free hover at multiple heights, using various strategies.

Advisors/Committee Members: Babuska, Robert (mentor), de Croon, Guido (mentor), de Wagter, Christophe (mentor), de Bruin, Tim (graduation committee), Bregman, Sander (graduation committee), Delft University of Technology (degree granting institution).
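The gain-adaptation loop described in the abstract can be summarized in a few lines. The sketch below is a hypothetical illustration, not the thesis implementation: the window size, covariance threshold, gain step, and the linear gain-to-height calibration are all assumed values chosen for readability.

```python
import numpy as np

# Hypothetical sketch of the covariance-triggered gain adaptation described
# above; window size, threshold, gain step and the gain-to-height calibration
# are illustrative assumptions, not values from the thesis.

WINDOW = 30            # recent samples kept for the covariance test
COV_THRESHOLD = 0.05   # covariance magnitude taken to indicate self-induced oscillation
GAIN_STEP = 0.02       # increment applied to the divergence-feedback gain each cycle

def height_from_gain(oscillation_gain):
    """Assumed pre-calibrated linear map from the gain at which oscillations
    start to the current height above the ground."""
    GAIN_PER_METER = 0.5                          # illustrative calibration constant
    return oscillation_gain / GAIN_PER_METER

def adaptive_hover(read_divergence, send_thrust, base_thrust=0.5):
    """Raise the visual-feedback gain until oscillation is detected, then
    return the estimated height and a gain suited to that height."""
    gain = 0.1
    thrusts, divergences = [], []
    while True:
        divergence = read_divergence()            # optical-flow divergence from the camera
        thrust = base_thrust - gain * divergence  # vertical visual feedback loop
        send_thrust(thrust)

        thrusts.append(thrust)
        divergences.append(divergence)
        thrusts, divergences = thrusts[-WINDOW:], divergences[-WINDOW:]

        if len(thrusts) == WINDOW:
            cov = np.cov(thrusts, divergences)[0, 1]
            if abs(cov) > COV_THRESHOLD:
                height = height_from_gain(gain)   # use the gain-height relationship
                return height, 0.5 * gain         # back off to a gain that is stable here
        gain += GAIN_STEP                         # no oscillation yet: keep raising the gain
```

In words: the vertical feedback gain is raised until the covariance between the thrust commands and the measured divergence indicates a self-induced oscillation, at which point the height is read off a pre-calibrated gain-height curve and the gains are fixed accordingly.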

Subjects/Keywords: Quadrotor; Adaptive control; vision; Monocular; ARDrone; Bebop; stabilization; MAV; MAVLAB; micro quadrotor; quadcopter


APA (6th Edition):

Braber, T. I. (2017). Vision-based stabilization of micro quadrotors. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:6e3ce742-a974-491d-97c2-1cafc090b3d9

Chicago Manual of Style (16th Edition):

Braber, T I (author). “Vision-based stabilization of micro quadrotors.” 2017. Masters Thesis, Delft University of Technology. Accessed January 27, 2021. http://resolver.tudelft.nl/uuid:6e3ce742-a974-491d-97c2-1cafc090b3d9.

MLA Handbook (7th Edition):

Braber, T I (author). “Vision-based stabilization of micro quadrotors.” 2017. Web. 27 Jan 2021.

Vancouver:

Braber TI. Vision-based stabilization of micro quadrotors. [Internet] [Masters thesis]. Delft University of Technology; 2017. [cited 2021 Jan 27]. Available from: http://resolver.tudelft.nl/uuid:6e3ce742-a974-491d-97c2-1cafc090b3d9.

Council of Science Editors:

Braber TI. Vision-based stabilization of micro quadrotors. [Masters Thesis]. Delft University of Technology; 2017. Available from: http://resolver.tudelft.nl/uuid:6e3ce742-a974-491d-97c2-1cafc090b3d9


Delft University of Technology

2. van Dijk, Tom (author). Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments.

Degree: 2017, Delft University of Technology

This thesis presents a visual route following method that minimizes memory consumption to the point that even Micro Aerial Vehicles (MAVs) equipped with only a simple microcontroller can traverse distances of a few hundred meters. Existing Simultaneous Localization and Mapping (SLAM) algorithms are too complex for use on a microcontroller. Instead, the route is modeled as a sequence of snapshots that can be followed back using a combination of visual homing and odometry. Three visual homing methods are evaluated to compare their memory efficiency. Of these methods, Fourier-based homing performed best: it still succeeds when snapshots are compressed to less than twenty bytes. Visual homing only works from a small region surrounding the snapshot; therefore, odometry is used to travel the longer distances between snapshots. The proposed route following technique is tested in simulation and on a Parrot AR.Drone 2.0. The drone can successfully follow long routes with a map that consumes only 17.5 bytes per meter.
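As a rough illustration of how a snapshot could fit in under twenty bytes, the sketch below compresses a one-dimensional horizontal intensity profile to a handful of low-frequency Fourier coefficients. The coefficient count, quantization, and distance measure are assumptions made for illustration only; they are not the homing methods evaluated in the thesis.

```python
import numpy as np

# Illustrative Fourier-based snapshot compression (assumed, not the thesis code):
# 7 complex coefficients stored as 14 int8 values plus a float32 scale,
# i.e. roughly 18 bytes per snapshot.

N_COEFFS = 7  # complex low-frequency coefficients kept per snapshot

def compress_snapshot(gray_image):
    """gray_image: 2-D uint8 array. Returns a tiny frequency-domain descriptor."""
    profile = gray_image.mean(axis=0)              # collapse rows into a 1-D horizontal profile
    spectrum = np.fft.rfft(profile)                # real FFT of the profile
    coeffs = spectrum[1:1 + N_COEFFS]              # skip the DC term, keep low frequencies
    scale = float(np.abs(coeffs).max()) or 1.0     # normalization factor for quantization
    quantized = np.round(127.0 * np.concatenate([coeffs.real, coeffs.imag]) / scale)
    return quantized.astype(np.int8), np.float32(scale)

def snapshot_distance(desc_a, desc_b):
    """Smaller values mean the two views are more alike (closer to the snapshot)."""
    (qa, sa), (qb, sb) = desc_a, desc_b
    return float(np.linalg.norm(qa.astype(np.float32) * sa - qb.astype(np.float32) * sb))
```

A route would then be stored as a list of such descriptors plus odometry segments: the vehicle homes onto each snapshot in turn and switches to odometry for the stretch to the next one.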

BioMechanical Design & Systems and Control

Advisors/Committee Members: McGuire, Kimberly (mentor), de Croon, Guido (mentor), Campoy Cervera, Pascual (mentor), Jonker, Pieter (mentor), Delft University of Technology (degree granting institution).

Subjects/Keywords: navigation; route following; visual homing; odometry; indoor; micro aerial vehicle; MAV; unmanned aerial vehicle; UAV; quadrotor; vision; ARDrone; MAVLAB


APA (6th Edition):

van Dijk, T. (2017). Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:82c91d74-6c01-4718-a574-221df210f01a

Chicago Manual of Style (16th Edition):

van Dijk, Tom (author). “Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments.” 2017. Masters Thesis, Delft University of Technology. Accessed January 27, 2021. http://resolver.tudelft.nl/uuid:82c91d74-6c01-4718-a574-221df210f01a.

MLA Handbook (7th Edition):

van Dijk, Tom (author). “Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments.” 2017. Web. 27 Jan 2021.

Vancouver:

van Dijk T. Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments. [Internet] [Masters thesis]. Delft University of Technology; 2017. [cited 2021 Jan 27]. Available from: http://resolver.tudelft.nl/uuid:82c91d74-6c01-4718-a574-221df210f01a.

Council of Science Editors:

van Dijk T. Low-memory Visual Route Following for Micro Aerial Vehicles in Indoor Environments. [Masters Thesis]. Delft University of Technology; 2017. Available from: http://resolver.tudelft.nl/uuid:82c91d74-6c01-4718-a574-221df210f01a


Delft University of Technology

3. Kisantal, Máté (author). Deep Reinforcement Learning for Goal-directed Visual Navigation.

Degree: 2018, Delft University of Technology

Safe navigation in a cluttered environment is a key capability for the autonomous operation of Micro Aerial Vehicles (MAVs). This work explores a (deep) Reinforcement Learning (RL) based approach to monocular-vision-based obstacle avoidance and goal-directed navigation for MAVs in cluttered environments. We investigated this problem in the context of forest flight under the tree canopy. Our focus was on training an effective and practical neural control module that is easy to integrate into conventional control hierarchies and can extend the capabilities of existing autopilot software stacks. This module has the potential to greatly improve the autonomous capabilities of MAVs and their applicability to many interesting real-world use cases. We demonstrated training this module in a visually highly realistic virtual forest environment, created with a state-of-the-art computer game engine.
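To make the idea of a "neural control module" concrete, the sketch below shows one plausible shape of such a module: a small convolutional policy network that maps a monocular camera frame and a relative goal bearing to a single steering command a conventional autopilot could track. The architecture, input sizes, and PyTorch framing are illustrative assumptions, not the network trained in the thesis.

```python
import torch
import torch.nn as nn

class GoalDirectedPolicy(nn.Module):
    """Hypothetical monocular policy: camera frame + goal bearing -> steering command."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # encoder for a 3x84x84 frame
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                          # fuse image features with the goal
            nn.Linear(32 * 9 * 9 + 1, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),                   # steering command in [-1, 1]
        )

    def forward(self, image, goal_bearing):
        """image: (B, 3, 84, 84) tensor; goal_bearing: (B, 1) relative goal angle."""
        features = self.encoder(image)
        return self.head(torch.cat([features, goal_bearing], dim=1))

# Example forward pass with dummy inputs.
policy = GoalDirectedPolicy()
steering = policy(torch.zeros(1, 3, 84, 84), torch.zeros(1, 1))
```

In an RL setup, a module of this kind would be trained in the simulated forest environment and then dropped into the existing control hierarchy as a heading-setpoint generator, leaving low-level stabilization to the autopilot.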

Aerospace Engineering | Control & Simulation

Advisors/Committee Members: de Croon, Guido (mentor), van Hecke, Kevin (mentor), Delft University of Technology (degree granting institution).

Subjects/Keywords: reinforcement learning; deep reinforcement learning; artificial intelligence; machine learning; computer vision; MAV; UAV; MAVLAB; drone; autonomous navigation; Autonomous Vehicles; deep learning; neural networks


APA (6th Edition):

Kisantal, M. (2018). Deep Reinforcement Learning for Goal-directed Visual Navigation. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:07bc64ba-42e3-4aa7-ba9b-ac0ac4e0e7a1

Chicago Manual of Style (16th Edition):

Kisantal, Máté (author). “Deep Reinforcement Learning for Goal-directed Visual Navigation.” 2018. Masters Thesis, Delft University of Technology. Accessed January 27, 2021. http://resolver.tudelft.nl/uuid:07bc64ba-42e3-4aa7-ba9b-ac0ac4e0e7a1.

MLA Handbook (7th Edition):

Kisantal, Máté (author). “Deep Reinforcement Learning for Goal-directed Visual Navigation.” 2018. Web. 27 Jan 2021.

Vancouver:

Kisantal M. Deep Reinforcement Learning for Goal-directed Visual Navigation. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 27]. Available from: http://resolver.tudelft.nl/uuid:07bc64ba-42e3-4aa7-ba9b-ac0ac4e0e7a1.

Council of Science Editors:

Kisantal M. Deep Reinforcement Learning for Goal-directed Visual Navigation. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:07bc64ba-42e3-4aa7-ba9b-ac0ac4e0e7a1
