
You searched for subject:(Visual Teach AND Repeat). Showing records 1 – 2 of 2 total matches.


No search limiters apply to these results.



University of Toronto

1. McManus, Colin. Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration.

Degree: 2011, University of Toronto

Future missions to Mars will place heavy emphasis on scientific sample and return operations, which will require a rover to revisit sites of interest. Visual Teach and Repeat (VT&R) has proven to be an effective method to enable autonomous repeating of any previously driven route without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, as this can drastically change the appearance of the scene. In an effort to achieve lighting invariance, this thesis details the design of a VT&R system that uses a laser scanner as the primary sensor. The key novelty is to apply appearance-based vision techniques traditionally used with camera systems to laser intensity images for motion estimation. Field tests were conducted in an outdoor environment over an entire diurnal cycle, covering more than 11 km with an autonomy rate of 99.7% by distance.
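The abstract above describes the two phases of VT&R: a teach pass that records a driven route as a chain of keyframes, and a repeat pass that localizes against the nearest keyframe and steers back onto the taught path. A minimal sketch of that control structure (illustrative only — the thesis localizes by matching laser intensity images, not by metric pose distance, and all names, spacings, and gains here are assumptions):

```python
import math

def teach(path, spacing=1.0):
    """Teach pass: subsample the driven path into a chain of keyframes
    spaced at least `spacing` apart."""
    keyframes = [path[0]]
    for p in path[1:]:
        if math.dist(p, keyframes[-1]) >= spacing:
            keyframes.append(p)
    return keyframes

def repeat_step(pose, keyframes, gain=0.5):
    """Repeat pass: localize against the nearest keyframe and return a
    proportional correction that pulls the rover toward the taught route."""
    nearest = min(keyframes, key=lambda k: math.dist(pose, k))
    # Planar position error relative to the matched keyframe.
    err = (nearest[0] - pose[0], nearest[1] - pose[1])
    return (gain * err[0], gain * err[1])
```

In the real system the "nearest keyframe" step would be an appearance-based match (here, on laser intensity images) rather than a metric distance, and the correction would feed a proper path-tracking controller.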

M.A.Sc.

Advisors/Committee Members: Barfoot, Timothy D., Aerospace Science and Engineering.

Subjects/Keywords: mobile robotics; vision systems; visual teach and repeat; 0538



APA (6th Edition):

McManus, C. (2011). Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration. (Master's thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/31339

Chicago Manual of Style (16th Edition):

McManus, Colin. “Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration.” 2011. Master's thesis, University of Toronto. Accessed November 11, 2019. http://hdl.handle.net/1807/31339.

MLA Handbook (7th Edition):

McManus, Colin. “Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration.” 2011. Web. 11 Nov 2019.

Vancouver:

McManus C. Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration. [Internet] [Master's thesis]. University of Toronto; 2011. [cited 2019 Nov 11]. Available from: http://hdl.handle.net/1807/31339.

Council of Science Editors:

McManus C. Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration. [Master's thesis]. University of Toronto; 2011. Available from: http://hdl.handle.net/1807/31339


University of Toronto

2. Zhang, Nan. Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat.

Degree: 2018, University of Toronto

This thesis presents an unsupervised learning framework within the Visual Teach and Repeat system to enable improved localization performance in the presence of lighting and scene changes. The resulting place-and-time-dependent binary descriptor can be updated as new experiences are gathered. We hypothesize that adapting the description function to a specific environment will improve localization performance and allow the system to operate for a longer period before localization failure. We also present a low-cost monocular Visual Teach and Repeat system, which uses a calibrated camera and wheel odometry measurements for navigation in both indoor and outdoor environments. These two parts are then combined into a low-cost, robust, and easily deployable system that enables navigation in complex indoor and outdoor environments, with the eventual goal of long-term operation.
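The descriptor-adaptation idea above can be illustrated with a BRIEF-style binary descriptor, whose bits are pairwise intensity comparisons, plus a per-bit stability weight estimated from repeated experiences of the same place. This is a hedged sketch under those assumptions — the thesis's actual learned descriptor and unsupervised update rule are not reproduced here:

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """Bit i is 1 if the pixel at pairs[i][0] is brighter than at pairs[i][1]
    (a BRIEF-style comparison descriptor)."""
    flat = patch.ravel()
    return (flat[pairs[:, 0]] > flat[pairs[:, 1]]).astype(np.uint8)

def bit_stability(descriptors):
    """Per-bit agreement across repeated observations of the same place;
    1.0 means a bit never flipped between experiences."""
    d = np.stack(descriptors)
    p = d.mean(axis=0)
    return np.maximum(p, 1.0 - p)

def weighted_hamming(a, b, weights):
    """Match score: weighted Hamming distance (lower is a better match).
    Down-weighting unstable bits adapts matching to the environment."""
    return float(np.sum(weights * (a != b)))
```

As more repeat traversals are gathered, `bit_stability` can be recomputed so that bits sensitive to lighting change contribute less to the match score — the same adaptation-over-time intent described in the abstract, in a much simpler form.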

M.A.Sc.

Advisors/Committee Members: Barfoot, Timothy D., Aerospace Science and Engineering.

Subjects/Keywords: Learning Descriptors; Long-Term Localization; SLAM; Visual Descriptor; Visual Teach and Repeat; 0771



APA (6th Edition):

Zhang, N. (2018). Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat. (Master's thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/91690

Chicago Manual of Style (16th Edition):

Zhang, Nan. “Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat.” 2018. Master's thesis, University of Toronto. Accessed November 11, 2019. http://hdl.handle.net/1807/91690.

MLA Handbook (7th Edition):

Zhang, Nan. “Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat.” 2018. Web. 11 Nov 2019.

Vancouver:

Zhang N. Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat. [Internet] [Master's thesis]. University of Toronto; 2018. [cited 2019 Nov 11]. Available from: http://hdl.handle.net/1807/91690.

Council of Science Editors:

Zhang N. Towards Long-Term Vision-Based Localization in Support of Monocular Visual Teach and Repeat. [Master's thesis]. University of Toronto; 2018. Available from: http://hdl.handle.net/1807/91690
