You searched for +publisher:"Vanderbilt University" +contributor:("Dr. Alan Peters"). Showing records 1 – 3 of 3 total matches.


Vanderbilt University

1. Patki, Akash. Particle Filter based SLAM to map random environments using “iRobot Roomba”.

Degree: MS, Computer Science, 2011, Vanderbilt University

For any mobile robot application, it is important that the robot knows its location in its operating environment. A map of that environment may not always be available, so the robot needs to build one as it explores its surroundings; in other words, the robot must localize itself and map the environment at the same time. This is the "Simultaneous Localization and Mapping" (SLAM) problem. SLAM has applications in many real-life situations in which automated vehicles must map their environment, such as disaster relief, underwater navigation, airborne systems, minimally invasive surgery, and visual tracking. Statistical techniques such as Kalman filters and particle filters provide a robust framework for mapping an environment. Based on particle filtering, this work presents a working prototype and analysis of a SLAM implementation on an iRobot Roomba, together with simulations of it in MATLAB and Blender. Advisors/Committee Members: Dr. Alan Peters (committee member), Dr. Gabor Karsai (Committee Chair).
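
The particle-filter approach the abstract describes can be illustrated with a short sketch. The snippet below is a hypothetical, simplified update cycle in Python, not code from the thesis: it assumes a 2D pose (x, y, heading), a single known landmark, Gaussian motion and range noise, and made-up numbers, whereas a full FastSLAM-style implementation would additionally have each particle carry its own map estimate.

# Hypothetical sketch of one particle-filter update step (not code from the thesis).
# Illustrative assumptions: 2D pose (x, y, theta), one known landmark, an odometry
# input (forward distance, turn), and a single noisy range measurement.
import numpy as np

rng = np.random.default_rng(0)

N = 500                                   # number of particles
particles = np.zeros((N, 3))              # each row: x, y, theta
weights = np.full(N, 1.0 / N)
landmark = np.array([2.0, 1.0])           # in full SLAM, each particle carries its own map

def predict(particles, d, dtheta, motion_noise=(0.02, 0.01)):
    """Propagate every particle through a noisy odometry model."""
    n = len(particles)
    d_n = d + rng.normal(0.0, motion_noise[0], n)
    t_n = dtheta + rng.normal(0.0, motion_noise[1], n)
    particles[:, 2] += t_n
    particles[:, 0] += d_n * np.cos(particles[:, 2])
    particles[:, 1] += d_n * np.sin(particles[:, 2])
    return particles

def update(particles, weights, z_range, range_std=0.05):
    """Reweight particles by the likelihood of the observed range to the landmark."""
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    likelihood = np.exp(-0.5 * ((z_range - expected) / range_std) ** 2)
    weights = weights * likelihood + 1e-300          # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Resampling: duplicate likely particles, drop unlikely ones."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative cycle: the robot drives 0.1 m, turns slightly, and ranges the landmark.
particles = predict(particles, d=0.1, dtheta=0.05)
weights = update(particles, weights, z_range=2.1)
particles, weights = resample(particles, weights)
print("pose estimate:", np.average(particles, axis=0, weights=weights))

Resampling is what keeps the filter focused: particles whose predicted range matches the measurement are duplicated and implausible ones are dropped, so the particle cloud converges on poses consistent with both odometry and observations.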

Subjects/Keywords: Particle Filter; Monte Carlo; Mapping; SLAM; Localization



APA (6th Edition):

Patki, A. (2011). Particle Filter based SLAM to map random environments using “iRobot Roomba”. (Thesis). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/15177

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Patki, Akash. “Particle Filter based SLAM to map random environments using ‘iRobot Roomba’.” 2011. Thesis, Vanderbilt University. Accessed January 18, 2021. http://hdl.handle.net/1803/15177.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Patki, Akash. “Particle Filter based SLAM to map random environments using ‘iRobot Roomba’.” 2011. Web. 18 Jan 2021.

Vancouver:

Patki A. Particle Filter based SLAM to map random environments using “iRobot Roomba”. [Internet] [Thesis]. Vanderbilt University; 2011. [cited 2021 Jan 18]. Available from: http://hdl.handle.net/1803/15177.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation

Council of Science Editors:

Patki A. Particle Filter based SLAM to map random environments using “iRobot Roomba”. [Thesis]. Vanderbilt University; 2011. Available from: http://hdl.handle.net/1803/15177

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation


Vanderbilt University

2. Kumar, Ankur N. Quantifying in vivo motion in video sequences using image registration.

Degree: PhD, Electrical Engineering, 2014, Vanderbilt University

Image registration is a pivotal part of many medical imaging analysis systems that provide clinically relevant information. One fundamental problem addressed by image registration is accounting for a subject’s motion. This dissertation broadly addresses the problem of quantifying in vivo motion in video sequences for two different applications using image registration. The first is the correction of motion in in vivo time-series microscopy imaging of islets of Langerhans in mice; for this application, a fully automatic algorithm is developed to correct the in vivo time-series microscopy images. The second is delivering near real-time 3D intraoperative movements of the cortical surface to a computational biomechanical model framework for the compensation of brain shift during brain tumor surgery. For this application, the dissertation demonstrates a clinical microscope-based digitization platform capable of reliably providing temporally dense 3D textured point clouds of the field of view in near real-time, for the entire duration of a neurosurgery and under realistic conditions. A fully automatic technique is developed for robustly digitizing 3D points intraoperatively with an operating microscope at 1 Hz, and a further algorithm tracks points on the cortical surface intraoperatively, which can potentially deliver 3D displacements of the cortical surface at different time points during brain tumor surgery. Advisors/Committee Members: Dr. Michael Miga (committee member), Dr. Reid Thompson (committee member), Dr. Alan Peters (committee member), Dr. Bobby Bodenheimer (committee member), Dr. Dave Piston (committee member), Dr. Benoit Dawant (Committee Chair).
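
As an illustration of the kind of building block such motion-correction pipelines rest on, the sketch below performs purely translational registration of a time series by phase correlation. It is a minimal, hypothetical example, not the dissertation's actual algorithm; the frame sizes, the synthetic test data, and the choice of phase correlation are all assumptions made here for brevity.

# Minimal sketch of translational registration by phase correlation (an illustrative
# building block for motion correction, not the dissertation's actual pipeline).
import numpy as np

def register_translation(fixed, moving):
    """Estimate the (dy, dx) shift that, applied to `moving` with np.roll, aligns it to `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative displacements.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx

def correct_series(frames):
    """Align every frame of a time series to the first frame by whole-pixel shifts."""
    reference = frames[0]
    corrected = [reference]
    for frame in frames[1:]:
        dy, dx = register_translation(reference, frame)
        corrected.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return corrected

# Synthetic check: a circularly shifted copy of a random frame is registered back.
frame0 = np.random.rand(128, 128)
frame1 = np.roll(frame0, shift=(3, -5), axis=(0, 1))
print(register_translation(frame0, frame1))          # (-3, 5): the shift that rolls frame1 back onto frame0

In a real motion-correction pipeline the estimated transform would typically be richer (subpixel, affine, or deformable) and the reference frame chosen more carefully, but the register-then-resample structure is the same.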

Subjects/Keywords: stereovision; image registration; in vivo; brain tumor surgery; image guided surgery; magnification



APA (6th Edition):

Kumar, A. N. (2014). Quantifying in vivo motion in video sequences using image registration. (Doctoral Dissertation). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/14690

Chicago Manual of Style (16th Edition):

Kumar, Ankur N. “Quantifying in vivo motion in video sequences using image registration.” 2014. Doctoral Dissertation, Vanderbilt University. Accessed January 18, 2021. http://hdl.handle.net/1803/14690.

MLA Handbook (7th Edition):

Kumar, Ankur N. “Quantifying in vivo motion in video sequences using image registration.” 2014. Web. 18 Jan 2021.

Vancouver:

Kumar AN. Quantifying in vivo motion in video sequences using image registration. [Internet] [Doctoral dissertation]. Vanderbilt University; 2014. [cited 2021 Jan 18]. Available from: http://hdl.handle.net/1803/14690.

Council of Science Editors:

Kumar AN. Quantifying in vivo motion in video sequences using image registration. [Doctoral Dissertation]. Vanderbilt University; 2014. Available from: http://hdl.handle.net/1803/14690


Vanderbilt University

3. Krootjohn, Soradech. Video Image Processing using MPEG Technology for a Mobile Robot.

Degree: PhD, Electrical Engineering, 2007, Vanderbilt University

Estimating egomotion from a video sequence is intrinsically difficult and requires high-level mathematics and programming skills. This work exploits existing technology to leverage the development of a mobile robot’s navigation. An open-source MPEG encoder package is modified so that its motion vectors and encoded frame types are accessible. As a result, the process of estimating a motion field from the MPEG motion vectors is far less complicated and time-consuming than in conventional methods. The main contribution is the creation of multiple low-cost sensors for a mobile robot. Two real-time applications, visual odometry and precipice detection, are presented. Despite employing only simple trigonometry, the visual odometry performs consistently well on moderately textured surfaces with low specular reflection. A proposed novel approach to detecting a precipice in real time is shown to be successful even when the robot runs at a very high speed. The experimental results substantiate the use of both applications in real situations. Advisors/Committee Members: Dr. George Cook (committee member), Dr. Andrew Dozier (committee member), Dr. Alan Peters (committee member), Dr. Douglas Hardin (committee member), Dr. Mitch Wilkes (Committee Chair).
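
To make the idea concrete, the sketch below shows one hypothetical way block motion vectors, such as those an MPEG encoder computes during motion estimation, could be turned into a per-frame ground displacement. The downward-looking pinhole geometry, the median aggregation, and all numeric values are illustrative assumptions, not the equations used in the thesis.

# Hypothetical sketch of turning an MPEG-style block motion field into a ground
# displacement estimate; the geometry and numbers are illustrative assumptions only.
import numpy as np

def ground_displacement(motion_vectors, camera_height_m, focal_length_px):
    """Convert per-block motion vectors (pixels/frame) into metres of robot motion.

    motion_vectors: array of shape (n_blocks, 2) holding (dx, dy) for each macroblock,
    as an MPEG encoder would produce during motion estimation.
    """
    # A median over all blocks gives a robust estimate of the dominant image motion,
    # discounting blocks that track independently moving objects or encoding noise.
    global_motion_px = np.median(motion_vectors, axis=0)

    # Simple pinhole trigonometry for a camera looking straight down from height h:
    # a ground point that moves X metres shifts by x = f * X / h pixels, so X = x * h / f.
    return global_motion_px * camera_height_m / focal_length_px

# Example: motion vectors for one frame pair (mostly ~[4, 0] px, one outlier block).
vectors = np.array([[4.0, 0.0]] * 20 + [[30.0, -12.0]])
step_m = ground_displacement(vectors, camera_height_m=0.3, focal_length_px=600.0)
print(step_m)   # ~[0.002, 0.0] m of motion over this frame interval

Integrating such per-frame displacements over time yields a visual-odometry estimate of the robot's path; the median step is one simple way to suppress blocks whose vectors track moving objects rather than the ground.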

Subjects/Keywords: mobile robots; precipice detection; MPEG; Visual odometry; Robot vision



APA (6th Edition):

Krootjohn, S. (2007). Video Image Processing using MPEG Technology for a Mobile Robot. (Doctoral Dissertation). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/12567

Chicago Manual of Style (16th Edition):

Krootjohn, Soradech. “Video Image Processing using MPEG Technology for a Mobile Robot.” 2007. Doctoral Dissertation, Vanderbilt University. Accessed January 18, 2021. http://hdl.handle.net/1803/12567.

MLA Handbook (7th Edition):

Krootjohn, Soradech. “Video Image Processing using MPEG Technology for a Mobile Robot.” 2007. Web. 18 Jan 2021.

Vancouver:

Krootjohn S. Video Image Processing using MPEG Technology for a Mobile Robot. [Internet] [Doctoral dissertation]. Vanderbilt University; 2007. [cited 2021 Jan 18]. Available from: http://hdl.handle.net/1803/12567.

Council of Science Editors:

Krootjohn S. Video Image Processing using MPEG Technology for a Mobile Robot. [Doctoral Dissertation]. Vanderbilt University; 2007. Available from: http://hdl.handle.net/1803/12567
