You searched for +publisher:"Delft University of Technology" +contributor:("Pool, Ewoud"). Showing records 1 – 3 of 3 total matches.



Delft University of Technology

1. Duan, Wen Jie (author). Suction Grasp Pose Planning Using Self-supervision and Transfer Learning.

Degree: 2018, Delft University of Technology

Planning grasp poses for a robot on unknown objects in cluttered environments is still an open problem. Recent research suggests that deep learning is a promising approach to planning grasp poses on unknown objects in cluttered environments. In this field, three types of data are used for training: (a) human-labeled data; (b) synthetic data; (c) real robot data, each with different properties in terms of collection cost and label accuracy. Recent approaches use only a single type of data to train a model. The problem with such a methodology is that human-labeled data is costly and inaccurate, synthetic data is scalable but inaccurate, and real robot data is accurate but costly. In this thesis, we combine synthetic data and real robot data to train a Grasp Quality Convolutional Neural Network (GQ-CNN). We collect a real robot dataset of 10,000 data points without human annotation by running a UR5 equipped with a pneumatic suction gripper under an algorithmic supervisor, and we use this dataset to fine-tune a GQ-CNN model. We evaluate models both by classifying the collected data and by running physical robot grasping experiments on 50 unknown objects with prismatic and complex shapes. Our method achieves a 100% grasp success rate on these objects, and the results suggest that the fine-tuned model learns to account for the diameter and the strong suction force of the suction cup.
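The abstract describes a transfer-learning recipe: pre-train a grasp-quality network on scalable synthetic data, then fine-tune it on a small but accurate self-supervised real-robot dataset. Below is a minimal PyTorch sketch of that recipe; the architecture, checkpoint name, crop size, and hyperparameters are illustrative assumptions, not the thesis's actual GQ-CNN.

```python
import os
import torch
import torch.nn as nn

class GraspQualityCNN(nn.Module):
    """Predicts a grasp-success logit from a depth crop centred on a
    candidate suction point, plus the gripper's approach depth."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                  # convolutional trunk
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(                      # classification head
            nn.Linear(32 * 5 * 5 + 1, 128), nn.ReLU(),  # assumes 32x32 crops
            nn.Linear(128, 1),
        )

    def forward(self, depth_crop, approach_depth):
        f = self.features(depth_crop).flatten(1)
        return self.head(torch.cat([f, approach_depth], dim=1))

model = GraspQualityCNN()
# Start from weights learned on cheap but less accurate synthetic data
# (hypothetical checkpoint path).
if os.path.exists("gqcnn_synthetic.pt"):
    model.load_state_dict(torch.load("gqcnn_synthetic.pt"))
# Freeze the trunk; only the head adapts to the real robot.
for p in model.features.parameters():
    p.requires_grad = False

# Fine-tune on the small, accurate real-robot dataset: success/failure
# labels come from the robot's own grasp attempts, not human annotation.
opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()
depth = torch.randn(8, 1, 32, 32)             # placeholder depth crops
z = torch.randn(8, 1)                         # placeholder approach depths
labels = torch.randint(0, 2, (8, 1)).float()  # did the grasp succeed?
opt.zero_grad()
loss = loss_fn(model(depth, z), labels)
loss.backward()
opt.step()
```

Freezing the convolutional trunk is one common choice when the fine-tuning set is small; the thesis may instead adapt all layers.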

Mechanical Engineering

Advisors/Committee Members: Wisse, Martijn (mentor), Kober, Jens (graduation committee), Pool, Ewoud (graduation committee), Delft University of Technology (degree granting institution).

Subjects/Keywords: Grasping; Deep Learning; Transfer learning; Self-supervision; Robot



APA (6th Edition):

Duan, W. J. (2018). Suction Grasp Pose Planning Using Self-supervision and Transfer Learning. (Master's thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:f41bde0e-b9c9-46de-a52c-fc3b2885b850

Chicago Manual of Style (16th Edition):

Duan, Wen Jie. “Suction Grasp Pose Planning Using Self-supervision and Transfer Learning.” 2018. Master's thesis, Delft University of Technology. Accessed February 25, 2021. http://resolver.tudelft.nl/uuid:f41bde0e-b9c9-46de-a52c-fc3b2885b850.

MLA Handbook (7th Edition):

Duan, Wen Jie. “Suction Grasp Pose Planning Using Self-supervision and Transfer Learning.” 2018. Web. 25 Feb 2021.

Vancouver:

Duan WJ. Suction Grasp Pose Planning Using Self-supervision and Transfer Learning. [Internet] [Master's thesis]. Delft University of Technology; 2018. [cited 2021 Feb 25]. Available from: http://resolver.tudelft.nl/uuid:f41bde0e-b9c9-46de-a52c-fc3b2885b850.

Council of Science Editors:

Duan WJ. Suction Grasp Pose Planning Using Self-supervision and Transfer Learning. [Master's thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:f41bde0e-b9c9-46de-a52c-fc3b2885b850


Delft University of Technology

2. Wang, Ziqi (author). Depth-aware Instance Segmentation with a Discriminative Loss Function.

Degree: 2018, Delft University of Technology

This work explores the possibility of incorporating depth information into a deep neural network to improve the accuracy of RGB instance segmentation. The baseline of this work is semantic instance segmentation with a discriminative loss function. The baseline work proposes a novel discriminative loss function with which the semantic network can learn an n-D embedding for all pixels belonging to instances. Embeddings of the same instance are attracted to their own center, while centers of different instances repulse each other. Two limits are set for attraction and repulsion, namely the in-margin and the out-margin. A post-processing procedure (clustering) is required to infer instance indices from the embeddings, governed by an important parameter, the bandwidth, which acts as the clustering threshold. The contributions of this thesis are several new methods to incorporate depth information into the baseline work. One simple method, named scaling, adds scaled depth directly to the RGB embeddings. Through theory and experiments, this work also proposes that depth pixels can be encoded into 1-D embeddings with the same discriminative loss function and combined with the RGB embeddings; the explored combination methods are fusion and concatenation. Additionally, two depth pre-processing methods are proposed: replication and coloring. The experimental results show that both scaling and fusion lead to significant improvements over the baseline work, while concatenation contributes more to classes with many similarities.
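The discriminative loss the abstract refers to (the baseline matches De Brabandere et al.'s “Semantic Instance Segmentation with a Discriminative Loss Function”, 2017) pulls pixel embeddings toward their instance center once they stray past the in-margin, and pushes centers of different instances apart until they clear the out-margin. A minimal PyTorch sketch follows, with illustrative margin values:

```python
import torch

def discriminative_loss(emb, inst, delta_v=0.5, delta_d=1.5):
    """emb: (N, D) pixel embeddings; inst: (N,) ground-truth instance ids."""
    ids = inst.unique()
    # per-instance embedding centers
    centers = torch.stack([emb[inst == i].mean(0) for i in ids])

    # Pull term: pixels attracted to their own center, hinged at the
    # in-margin delta_v (no penalty once within the margin).
    pull = 0.0
    for k, i in enumerate(ids):
        d = (emb[inst == i] - centers[k]).norm(dim=1)
        pull = pull + torch.clamp(d - delta_v, min=0).pow(2).mean()
    pull = pull / len(ids)

    # Push term: centers of different instances repel each other,
    # hinged at twice the out-margin delta_d.
    push = 0.0
    if len(ids) > 1:
        dist = torch.cdist(centers, centers)
        off_diag = ~torch.eye(len(ids), dtype=torch.bool)
        push = torch.clamp(2 * delta_d - dist[off_diag], min=0).pow(2).mean()

    return pull + push

# toy example: 1000 pixels with 8-D embeddings spread over 5 instances
emb = torch.randn(1000, 8)
inst = torch.randint(0, 5, (1000,))
print(discriminative_loss(emb, inst))
```

At inference, instance indices are recovered by clustering the embeddings, with the bandwidth mentioned above as the clustering threshold (e.g. via mean shift).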

Cognitive Robotics Lab

Advisors/Committee Members: Pool, Ewoud (mentor), Kooij, Julian (mentor), Gavrila, Dariu (graduation committee), Delft University of Technology (degree granting institution).

Subjects/Keywords: Deep Learning; Computer Vision; instance segmentation; Intelligent Vehicles



APA (6th Edition):

Wang, Z. (2018). Depth-aware Instance Segmentation with a Discriminative Loss Function. (Master's thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418

Chicago Manual of Style (16th Edition):

Wang, Ziqi. “Depth-aware Instance Segmentation with a Discriminative Loss Function.” 2018. Master's thesis, Delft University of Technology. Accessed February 25, 2021. http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418.

MLA Handbook (7th Edition):

Wang, Ziqi. “Depth-aware Instance Segmentation with a Discriminative Loss Function.” 2018. Web. 25 Feb 2021.

Vancouver:

Wang Z. Depth-aware Instance Segmentation with a Discriminative Loss Function. [Internet] [Master's thesis]. Delft University of Technology; 2018. [cited 2021 Feb 25]. Available from: http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418.

Council of Science Editors:

Wang Z. Depth-aware Instance Segmentation with a Discriminative Loss Function. [Master's thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418


Delft University of Technology

3. Bos, Evert (author). Including traffic light recognition in general object detection with YOLOv2.

Degree: 2019, Delft University of Technology

An in-vehicle camera can serve many functions that are essential for ADAS or an autonomous driving mode. First, it can be used for detection of general objects, for example cars, cyclists, or pedestrians. Second, it can be used for traffic light recognition, i.e. localizing traffic lights and recognizing their states. At present no method performs general object detection and traffic light recognition at the same time; therefore this work proposes methods to combine the two. The novel approach presented here includes traffic light recognition in a general object detection framework. The single-shot object detector YOLOv2 is used as the base detector, with COCO as the general object dataset and LISA as the traffic light dataset. Two different methods for combined detection are proposed: adaptive combined training and YOLOv2++. For adaptive combined training, YOLOv2 is trained on both datasets with the network unchanged and the loss function adapted to optimize training on both datasets. For YOLOv2++, the feature extractor of YOLOv2 pre-trained on COCO is reused, and a small sub-network is trained on these features to detect the LISA traffic light states. The best performing method is adaptive combined training, which at an IoU of 0.5 reaches an AUC of 24.02% for binary and 21.23% for multi-class classification; at an IoU of 0.1 this increases to 56.74% for binary and 41.87% for multi-class classification. The performance of the adaptive combined detector is 20% lower than the baseline of a detector only detecting LISA traffic light states and 5% lower than the baseline of a detector only detecting COCO classes; however, detecting classes from both datasets is almost twice as fast as separate detection with different networks for each dataset.
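The abstract does not spell out how the loss is adapted for combined training; one plausible reading is that the classification loss is masked per sample, so COCO images are not penalized on LISA's traffic-light-state classes and vice versa, since each dataset annotates only its own classes. The sketch below illustrates that masking idea under this assumption; the names, shapes, and four LISA state classes are hypothetical, not the thesis's code.

```python
import torch
import torch.nn.functional as F

NUM_COCO, NUM_LISA = 80, 4        # e.g. LISA states: go/stop/warning/off
NUM_CLASSES = NUM_COCO + NUM_LISA

def masked_class_loss(logits, targets, source):
    """logits/targets: (B, NUM_CLASSES) per-box class scores and one-hot
    labels; source: (B,) 0 for a COCO image, 1 for a LISA image."""
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    mask = torch.zeros_like(loss)
    mask[source == 0, :NUM_COCO] = 1.0   # COCO sample: score COCO classes only
    mask[source == 1, NUM_COCO:] = 1.0   # LISA sample: score LISA classes only
    return (loss * mask).sum() / mask.sum().clamp(min=1)

# toy batch: 6 boxes, first 3 from a COCO image, last 3 from a LISA image
logits = torch.randn(6, NUM_CLASSES)
targets = torch.zeros(6, NUM_CLASSES)
source = torch.tensor([0, 0, 0, 1, 1, 1])
print(masked_class_loss(logits, targets, source))
```

Without such a mask, every unannotated class would be treated as a hard negative, and each dataset would suppress the other's detections.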

Mechanical Engineering

Advisors/Committee Members: Kooij, Julian (mentor), Pool, Ewoud (graduation committee), Gavrila, Dariu (graduation committee), Kober, Jens (graduation committee), Delft University of Technology (degree granting institution).

Subjects/Keywords: Traffic Light recognition; machine learning; YOLO; object detection; COCO; LISA



APA (6th Edition):

Bos, E. (2019). Including traffic light recognition in general object detection with YOLOv2. (Master's thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03

Chicago Manual of Style (16th Edition):

Bos, Evert. “Including traffic light recognition in general object detection with YOLOv2.” 2019. Master's thesis, Delft University of Technology. Accessed February 25, 2021. http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03.

MLA Handbook (7th Edition):

Bos, Evert. “Including traffic light recognition in general object detection with YOLOv2.” 2019. Web. 25 Feb 2021.

Vancouver:

Bos E. Including traffic light recognition in general object detection with YOLOv2. [Internet] [Master's thesis]. Delft University of Technology; 2019. [cited 2021 Feb 25]. Available from: http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03.

Council of Science Editors:

Bos E. Including traffic light recognition in general object detection with YOLOv2. [Master's thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03
