You searched for +publisher:"Delft University of Technology" +contributor:("Gavrila, Dariu").
Showing records 1 – 16 of 16 total matches.
No search limiters apply to these results.

Delft University of Technology
1.
Ammerlaan, Jelle (author).
Traffic Gesture Classification for Intelligent Vehicles.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:6272db65-b324-40cf-83aa-6d7caf3c7917
Self-driving vehicles have shown rapid development in recent years and continue to move towards full autonomy. For high or full automation, self-driving vehicles will have to be able to address and solve a broad range of situations, one of which is interaction with traffic agents. For correct and safe maneuvering through these situations, reliable detection of agents followed by an accurate classification of the traffic gestures used by agents is essential. This problem has received limited attention in the literature to date. The objective of this work is to establish and investigate a working traffic gesture pipeline by leveraging the latest developments in the fields of computer vision and machine learning. This work investigates and compares how well state-of-the-art methods translate to traffic gesture recognition and what application-specific problems are encountered. Multiple configurations based on skeletal features, estimated using OpenPose, and classified using recurrent neural networks (RNN) were investigated. Skeleton estimation using OpenPose and feature representations were evaluated using an action recognition dataset with motion capture ground truth. Three RNN architectures, varying in complexity and size, were evaluated on traffic gestures. The robustness of the developed system to viewpoint variation is explored, together with the viability of transfer learning for traffic gestures. To train and validate these methods, a new traffic gesture dataset is introduced, on which an mAP of 0.70 is achieved. The results show that the proposed methods are able to classify traffic gestures within reasonable computation time and illustrate the value of transfer learning for gesture recognition. These promising results validate the methodology used and show that this direction warrants further research.
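The thesis text is not included here; purely as an illustration of the kind of pipeline the abstract describes, the sketch below feeds normalized per-frame skeleton keypoints (such as those a pose estimator like OpenPose produces) into a small recurrent classifier in PyTorch. The class count, joint count, and network sizes are assumptions, not values from the thesis.

```python
import torch
import torch.nn as nn

class GestureRNN(nn.Module):
    """Minimal GRU classifier over per-frame skeleton keypoints (illustrative only)."""
    def __init__(self, num_joints=18, num_classes=5, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=num_joints * 2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, keypoints):
        # keypoints: (batch, time, num_joints, 2) in image coordinates
        b, t, j, _ = keypoints.shape
        # Normalize each frame: center on the mean joint position and scale by the
        # per-axis spread, so the classifier sees pose shape rather than image position.
        center = keypoints.mean(dim=2, keepdim=True)
        scale = keypoints.std(dim=2, keepdim=True) + 1e-6
        feats = ((keypoints - center) / scale).reshape(b, t, j * 2)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])           # logits from the last time step

model = GestureRNN()
dummy = torch.randn(4, 30, 18, 2)               # 4 clips, 30 frames, 18 joints (COCO-style)
print(model(dummy).shape)                       # torch.Size([4, 5])
```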
Advisors/Committee Members: Flohr, Fabian (mentor), Kooij, Julian (mentor), Gavrila, Dariu (graduation committee), de Winter, Joost (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Machine Learning; Intelligent Vehicles; Traffic Gestures; Pose estimation; Gesture recognition
APA (6th Edition):
Ammerlaan, J. (2020). Traffic Gesture Classification for Intelligent Vehicles. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:6272db65-b324-40cf-83aa-6d7caf3c7917
Chicago Manual of Style (16th Edition):
Ammerlaan, Jelle (author). “Traffic Gesture Classification for Intelligent Vehicles.” 2020. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:6272db65-b324-40cf-83aa-6d7caf3c7917.
MLA Handbook (7th Edition):
Ammerlaan, Jelle (author). “Traffic Gesture Classification for Intelligent Vehicles.” 2020. Web. 23 Jan 2021.
Vancouver:
Ammerlaan J. Traffic Gesture Classification for Intelligent Vehicles. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:6272db65-b324-40cf-83aa-6d7caf3c7917.
Council of Science Editors:
Ammerlaan J. Traffic Gesture Classification for Intelligent Vehicles. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:6272db65-b324-40cf-83aa-6d7caf3c7917

Delft University of Technology
2.
Hafner, Frank (author).
Cross-Modal Re-identification of Persons between RGB and Depth.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:6797b0e2-5a20-444f-8d32-a73581e00ff5
Cross-modal person re-identification is the task of re-identifying a person who was sensed in a first modality, such as visible light (RGB), in a second modality, such as depth. The challenge is therefore to match inputs from separate modalities, without information from both modalities at the same time step. Lately, the scientific challenge of cross-modal person re-identification between depth and RGB has been receiving increasing attention due to the needs of intelligent vehicles, as well as of interested parties in the surveillance domain, where sensing in poor illumination is desirable. Techniques for cross-modal person re-identification have to solve several concurrent tasks. First, techniques have to be robust against variations in the single modalities. Occurring challenges are viewpoint changes, pose variations or variations in camera resolution. Second, the challenge of re-identifying a person has to be solved across the modalities within a heterogeneous network of RGB and depth cameras. At the present day, work on cross-modal re-identification between infrared images and RGB images exists. At the same time, almost no work has been done on re-identification between depth images and visible light images. The objective of this work is to fill this gap by comparing the performance of different techniques for cross-modal re-identification of persons. The main contributions of this work are two-fold. First, different deep neural network architectures for cross-modal re-identification of persons between depth and visible light are investigated and compared. Second, a new technique for cross-modal person re-identification is presented. The technique is based on two-step cross-distillation and allows similar features to be extracted from the depth and visible light modalities. Therefore, the task of matching persons sensed between depth and visible light is facilitated and can be solved with higher accuracy. Within the evaluation, it was possible to report state-of-the-art results for two relevant datasets for cross-modal person re-identification between depth and RGB. For the BIWI RGBD-ID dataset the pre-existing state of the art was improved by more than 15% in mean average precision. Additionally, it was possible to validate the performance of the method with the RobotPKU dataset. Although the method was successfully applied in cross-modal person re-identification between depth and RGB, it was shown that for other modality combinations, such as RGB and infrared, the technique in its current definition cannot be considered state-of-the-art. Finally, an outlook on the implications of the results for the intelligent vehicles domain is given. For a successful deployment in this area, more thorough datasets have to be developed and the performance on sparse depth maps, as provided by lidars or radars, has to be investigated.
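The abstract names two-step cross-distillation without giving details, so the sketch below shows only a generic cross-modal metric-learning setup, not the thesis method: two small encoders map RGB and depth crops into a shared embedding space, and a contrastive-style loss pulls matching identities together. All architectures, dimensions, and the margin value are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """Tiny CNN encoder mapping one modality to a shared embedding space."""
    def __init__(self, in_channels, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-length embeddings

rgb_enc, depth_enc = Branch(3), Branch(1)

def cross_modal_loss(rgb, depth, labels, margin=0.3):
    """Pull same-identity RGB/depth embeddings together, push different ones apart."""
    dist = torch.cdist(rgb_enc(rgb), depth_enc(depth))       # pairwise distances
    same = labels[:, None] == labels[None, :]
    pos = dist[same].mean()                                   # matched identities: small distance
    neg = F.relu(margin - dist[~same]).mean()                 # mismatched: at least `margin` apart
    return pos + neg

labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])               # toy identity labels
loss = cross_modal_loss(torch.randn(8, 3, 64, 32), torch.randn(8, 1, 64, 32), labels)
```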
Advisors/Committee Members: Gavrila, Dariu (mentor), Kooij, Julian (mentor), Tax, David (mentor), Pan, Wei (mentor), Delft University of Technology (degree granting institution).
APA (6th Edition):
Hafner, F. (2018). Cross-Modal Re-identification of Persons between RGB and Depth. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:6797b0e2-5a20-444f-8d32-a73581e00ff5
Chicago Manual of Style (16th Edition):
Hafner, Frank (author). “Cross-Modal Re-identification of Persons between RGB and Depth.” 2018. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:6797b0e2-5a20-444f-8d32-a73581e00ff5.
MLA Handbook (7th Edition):
Hafner, Frank (author). “Cross-Modal Re-identification of Persons between RGB and Depth.” 2018. Web. 23 Jan 2021.
Vancouver:
Hafner F. Cross-Modal Re-identification of Persons between RGB and Depth. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:6797b0e2-5a20-444f-8d32-a73581e00ff5.
Council of Science Editors:
Hafner F. Cross-Modal Re-identification of Persons between RGB and Depth. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:6797b0e2-5a20-444f-8d32-a73581e00ff5

Delft University of Technology
3.
Ai, Zhiwei (author).
Semantic Segmentation of Large-scale Urban Scenes from Point Clouds.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:a9cedaac-42ae-4cb0-9c14-67bab8e96a6d
Deep learning methods have been demonstrated to be promising in semantic segmentation of point clouds. Existing works focus on extracting informative local features based on individual points and their local neighborhood. They lack consideration of the general structures and latent contextual relations of underlying shapes among points. To this end, we design geometric priors to encode contextual relations of underlying shapes between corresponding point pairs. A geometric prior convolution operator is proposed to explicitly incorporate the contextual relations into the computation. Then, GP-net, which contains geometric prior convolution and a backbone network, is constructed. Our experiments show that the performance of our backbone network can be improved by up to 6.9 percent in terms of mean Intersection over Union (mIoU) with the help of geometric prior convolution. We also analyze different design options of geometric prior convolution and GP-net. GP-net has been tested on the Paris and Lille 3D benchmark, where it achieves state-of-the-art performance of 74.7% mIoU.
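GP-net's geometric prior convolution is not specified in the abstract; the sketch below is a generic point convolution that weights neighbor features by an MLP over relative geometry (offset and distance), offered only to illustrate the idea of encoding pairwise geometric relations between point pairs. Layer sizes and the neighborhood size are assumptions.

```python
import torch
import torch.nn as nn

class RelativeGeometryConv(nn.Module):
    """Aggregate neighbor features weighted by an MLP over relative geometry
    (offset vector and distance). A generic point-convolution sketch, not GP-net."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.geo_mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, out_dim))
        self.feat_lin = nn.Linear(in_dim, out_dim)

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (N, 3) coordinates, feats: (N, in_dim), neighbor_idx: (N, K)
        nbr_xyz = xyz[neighbor_idx]                            # (N, K, 3)
        offset = nbr_xyz - xyz[:, None, :]                     # relative position of each neighbor
        dist = offset.norm(dim=-1, keepdim=True)               # (N, K, 1)
        geo = self.geo_mlp(torch.cat([offset, dist], dim=-1))  # geometric weighting, (N, K, out_dim)
        nbr_feats = self.feat_lin(feats)[neighbor_idx]         # (N, K, out_dim)
        return (geo * nbr_feats).mean(dim=1)                   # aggregate over neighbors

xyz = torch.randn(1000, 3)
feats = torch.randn(1000, 16)
neighbor_idx = torch.cdist(xyz, xyz).topk(8, largest=False).indices   # 8 nearest neighbors
out = RelativeGeometryConv(16, 32)(xyz, feats, neighbor_idx)
print(out.shape)   # torch.Size([1000, 32])
```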
Mechanical Engineering
Advisors/Committee Members: Nan, Liangliang (mentor), Gavrila, Dariu (graduation committee), Lindenbergh, Roderik (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Deep Learning; Point Clouds; Semantic Segmentation
APA (6th Edition):
Ai, Z. (2019). Semantic Segmentation of Large-scale Urban Scenes from Point Clouds. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:a9cedaac-42ae-4cb0-9c14-67bab8e96a6d
Chicago Manual of Style (16th Edition):
Ai, Zhiwei (author). “Semantic Segmentation of Large-scale Urban Scenes from Point Clouds.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:a9cedaac-42ae-4cb0-9c14-67bab8e96a6d.
MLA Handbook (7th Edition):
Ai, Zhiwei (author). “Semantic Segmentation of Large-scale Urban Scenes from Point Clouds.” 2019. Web. 23 Jan 2021.
Vancouver:
Ai Z. Semantic Segmentation of Large-scale Urban Scenes from Point Clouds. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:a9cedaac-42ae-4cb0-9c14-67bab8e96a6d.
Council of Science Editors:
Ai Z. Semantic Segmentation of Large-scale Urban Scenes from Point Clouds. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:a9cedaac-42ae-4cb0-9c14-67bab8e96a6d

Delft University of Technology
4.
van Schouwenburg, Sietse (author).
Evaluating SLAM in an urban dynamic environment.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52
Simultaneous Localization And Mapping (SLAM) algorithms provide accurate localization for autonomous vehicles and provide essential information for the path planning module. However, SLAM algorithms assume a static environment in order to estimate a location. This assumption influences the pose estimation in dynamic urban environments. The impact of this assumption on day-to-day scenarios of an intelligent vehicle is unknown. A deeper understanding of the effect of dynamic scenarios in an urban environment could lead to simple and robust solutions for SLAM algorithms in intelligent vehicles. The objective of this research is to develop a methodology that isolates the effect of an urban dynamic environment on the performance of a SLAM algorithm. This requires constant environment conditions, including constant weather conditions, lighting conditions and identical trajectories over time. The methodology is tested with a stereo feature-based V-SLAM algorithm called ORB SLAM [19], which illustrates the in-depth analysis that is possible with this experiment. The main research question is: how does a dynamic urban environment influence the pose estimation accuracy of stereo ORB SLAM? Two specific dynamic scenarios are designed to represent a dynamic urban environment: driving behind another vehicle and vehicles approaching on the other side of the road. On these scenarios, an in-depth analysis of ORB SLAM is performed to observe how the algorithm's design influences the robustness to a dynamic environment. Functions within the algorithm are bypassed to analyze the effect on the performance. Specifically, the place recognition function and map point filtering function are bypassed. The analysis proves which functions assist in the overall robustness to a dynamic environment. Moreover, an analysis of the algorithm in localization mode is performed to research the effect of utilizing maps that were created under different conditions. The knowledge gained from the full analysis can be utilized to improve other V-SLAM algorithms. The experiment is performed in CARLA [6], an open-source simulator. CARLA provides an elaborate sensor suite which supports multiple camera setups and LIDAR sensors. Furthermore, the simulator provides free maps which represent realistic urban environments and allows for easy and accurate access to the ground truth position. A setup is designed with the simulator that allows complete isolation of the effect of a dynamic environment. The setup allows full control of lighting conditions and weather conditions, and allows identical trajectories over time in different dynamic scenarios. Each scenario is simulated over several different trajectories in which the camera images are converted to rosbags. Each variation of the ORB SLAM algorithm is tested on the produced rosbags. The resulting pose estimations in dynamic conditions are compared to the pose estimations made during static conditions to analyze the effect of dynamic scenarios on the performance of the algorithm. The method…
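As a minimal illustration of the kind of comparison the abstract describes, the sketch below computes a positional RMSE between an estimated trajectory and ground truth sampled at the same timestamps (trajectory alignment, which a full evaluation such as ATE would include, is omitted). The trajectories and noise levels are synthetic stand-ins, not data from the thesis.

```python
import numpy as np

def absolute_trajectory_rmse(est_xyz, gt_xyz):
    """Root-mean-square positional error between an estimated trajectory and
    ground truth sampled at the same timestamps (alignment step omitted here)."""
    err = est_xyz - gt_xyz                      # (T, 3) per-frame position error
    return np.sqrt((err ** 2).sum(axis=1).mean())

# Hypothetical example: compare the same trajectory under static and dynamic runs.
gt = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)
static_run = gt + np.random.randn(500, 3) * 0.02
dynamic_run = gt + np.random.randn(500, 3) * 0.08
print(absolute_trajectory_rmse(static_run, gt), absolute_trajectory_rmse(dynamic_run, gt))
```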
Advisors/Committee Members: Kooij, Julian (mentor), Hehn, Thomas (mentor), Gavrila, Dariu (graduation committee), Hernandez Corbato, Carlos (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: SLAM; simulation; computer vision; simultaneous localization and mapping; localization; mapping; visual SLAM; ORB SLAM; CARLA
APA (6th Edition):
van Schouwenburg, S. (2019). Evaluating SLAM in an urban dynamic environment. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52
Chicago Manual of Style (16th Edition):
van Schouwenburg, Sietse (author). “Evaluating SLAM in an urban dynamic environment.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52.
MLA Handbook (7th Edition):
van Schouwenburg, Sietse (author). “Evaluating SLAM in an urban dynamic environment.” 2019. Web. 23 Jan 2021.
Vancouver:
van Schouwenburg S. Evaluating SLAM in an urban dynamic environment. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52.
Council of Science Editors:
van Schouwenburg S. Evaluating SLAM in an urban dynamic environment. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52

Delft University of Technology
5.
Wang, Ziqi (author).
Depth-aware Instance Segmentation with a Discriminative Loss Function.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418
This work explores the possibility of incorporating depth information into a deep neural network to improve the accuracy of RGB instance segmentation. The baseline of this work is semantic instance segmentation with a discriminative loss function. The baseline work proposes a novel discriminative loss function with which the semantic network can learn an n-D embedding for all pixels belonging to instances. Embeddings of the same instance are attracted to their own centers, while centers of different instance embeddings repulse each other. Two limits are set for attraction and repulsion, namely the in-margin and out-margin. A post-processing procedure (clustering) is required to infer instance indices from embeddings, with an important parameter bandwidth, the threshold for clustering. The contributions of the work in this thesis are several new methods to incorporate depth information into the baseline work. One simple method is adding scaled depth directly to RGB embeddings, which is named scaling. Through theorizing and experiments, this work also proposes that depth pixels can be encoded into 1-D embeddings with the same discriminative loss function and combined with RGB embeddings. Explored combination methods are fusion and concatenation. Additionally, two depth pre-processing methods are proposed, replication and coloring. From the experimental results, both scaling and fusion lead to significant improvements over the baseline work, while concatenation contributes more to classes with many similarities.
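The baseline discriminative loss the abstract refers to can be sketched roughly as follows: a variance term pulls pixel embeddings toward their instance center once they exceed an in-margin, and a distance term pushes instance centers apart until they exceed an out-margin. This is a simplified sketch with assumed margin values, not the exact formulation or weighting used in the thesis.

```python
import torch
import torch.nn.functional as F

def discriminative_loss(embeddings, instance_ids, delta_v=0.5, delta_d=1.5):
    """Simplified sketch: pixels are pulled to their instance mean within margin
    delta_v, instance means are pushed at least delta_d apart.
    embeddings: (P, D) per-pixel embeddings, instance_ids: (P,)."""
    ids = instance_ids.unique()
    means, var_term = [], 0.0
    for i in ids:
        emb_i = embeddings[instance_ids == i]
        mu = emb_i.mean(dim=0)
        means.append(mu)
        # in-margin: only penalize pixels farther than delta_v from their center
        var_term += F.relu((emb_i - mu).norm(dim=1) - delta_v).pow(2).mean()
    means = torch.stack(means)                                   # (C, D) instance centers
    dist = torch.cdist(means, means)                             # pairwise center distances
    off_diag = ~torch.eye(len(ids), dtype=torch.bool)
    # out-margin: only penalize centers closer than delta_d to each other
    dist_term = F.relu(delta_d - dist[off_diag]).pow(2).mean() if len(ids) > 1 else 0.0
    return var_term / len(ids) + dist_term

emb = torch.randn(1024, 8)                                       # 8-D embedding per pixel
ids = torch.randint(0, 4, (1024,))                               # 4 instances
print(discriminative_loss(emb, ids))
```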
Cognitive Robotics Lab
Advisors/Committee Members: Pool, Ewoud (mentor), Kooij, Julian (mentor), Gavrila, Dariu (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Deep Learning; Computer Vision; instance segmentation; Intelligent Vehicles
APA (6th Edition):
Wang, Z. (2018). Depth-aware Instance Segmentation with a Discriminative Loss Function. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418
Chicago Manual of Style (16th Edition):
Wang, Ziqi (author). “Depth-aware Instance Segmentation with a Discriminative Loss Function.” 2018. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418.
MLA Handbook (7th Edition):
Wang, Ziqi (author). “Depth-aware Instance Segmentation with a Discriminative Loss Function.” 2018. Web. 23 Jan 2021.
Vancouver:
Wang Z. Depth-aware Instance Segmentation with a Discriminative Loss Function. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418.
Council of Science Editors:
Wang Z. Depth-aware Instance Segmentation with a Discriminative Loss Function. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:02bd3582-3304-4595-baa6-c6fcca755418

Delft University of Technology
6.
Wout, Daan (author).
Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:d6cff61f-8e74-4714-b713-f127c1392b7a
A prevalent approach for learning a control policy in the model-free domain is Reinforcement Learning (RL). A well-known disadvantage of RL is the necessity for extensive amounts of data for a suitable control policy. For systems that concern physical application, acquiring this vast amount of data might take an extraordinary amount of time. In contrast, humans have shown to be very efficient in detecting a suitable control policy for reference tracking problems. Employing this intuitive knowledge has proven to render model-free learning strategies suitable for physical applications. Recent studies have shown that learning a policy by directive action corrections is a very efficient approach for employing this domain knowledge. Moreover, feedback-based methods do not necessarily require expert knowledge on modelling and control and are therefore more generally applicable. The current state of the art regarding directional feedback was introduced by Celemin and Ruiz-del Solar (2015) and coined COrrective Advice Communicated by Humans (COACH). In this framework the trainer is able to correct the observed actions by providing directive advice for iterative policy updates. However, COACH employs Radial Basis Function (RBF) networks, which limit the ability to apply the framework on higher-dimensional problems due to an infeasible tuning process. This study introduces Gaussian Process Coach (GPC), an algorithm preserving COACH's structure, but introducing Gaussian Processes (GPs) as an alternative to RBF networks. Moreover, the employment of GPs allows for uncertainty estimation of the policy, which is used to 1) inquire high-informative feedback samples in an Active Learning (AL) framework, 2) introduce an Adaptive Learning Rate (ALR) that adapts the learning rate to the coarse or refinement-focused learning phase of the trainer, and 3) establish a novel sparsification technique that is specifically designed for iterative GP policy updates. We show, by employing synthesized and human teachers, that the novel algorithm outperforms COACH on every domain tested, with the most outspoken difference on higher-dimensional problems. Furthermore, we prove the independent contributions of AL and ALR.
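To make the GP ingredients concrete, the sketch below computes a standard GP regression posterior over a 1-D state-to-action mapping and uses its variance in the two ways the abstract mentions: picking the state most in need of feedback (active learning) and scaling the update (adaptive learning rate). The kernel, length scale, and learning-rate schedule are assumptions, not the GPC algorithm itself.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """Standard GP regression posterior mean and variance (1-D state for brevity)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_query, x_train)
    K_inv = np.linalg.inv(K)
    mean = k_star @ K_inv @ y_train
    var = 1.0 - np.sum(k_star @ K_inv * k_star, axis=1)   # prior variance is 1 for this kernel
    return mean, var

# Policy = GP mapping state -> action, fitted to actions after human corrections.
states = np.array([0.1, 0.4, 0.9])
actions = np.array([0.0, 0.3, -0.2])
query = np.linspace(0, 1, 5)
mean, var = gp_posterior(states, actions, query)
# Active-learning flavor: ask for feedback where the policy is least certain,
# and scale the policy update (adaptive learning rate) with that uncertainty.
print("state most in need of feedback:", query[np.argmax(var)])
learning_rate = 0.1 + 0.9 * var / var.max()
```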
Systems and Control
Advisors/Committee Members: Kober, Jens (mentor), Celemin Paez, Carlos (mentor), Gavrila, Dariu (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Machine Learning; Interactive Learning; Gaussian Process; Regression; Feedback
APA (6th Edition):
Wout, D. (2019). Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:d6cff61f-8e74-4714-b713-f127c1392b7a
Chicago Manual of Style (16th Edition):
Wout, Daan (author). “Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:d6cff61f-8e74-4714-b713-f127c1392b7a.
MLA Handbook (7th Edition):
Wout, Daan (author). “Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework.” 2019. Web. 23 Jan 2021.
Vancouver:
Wout D. Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:d6cff61f-8e74-4714-b713-f127c1392b7a.
Council of Science Editors:
Wout D. Policy Learning with Human Teachers: Using directive feedback in a Gaussian framework. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:d6cff61f-8e74-4714-b713-f127c1392b7a

Delft University of Technology
7.
Wymenga, Jan (author).
Weather Condition Estimation in Automated Vehicles.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:421b3c6d-b85e-4876-a963-4094b35dea94
This work presents a multi-sensor approach for weather condition estimation in automated vehicles. Using combined data from weather sensors (barometer, hygrometer, etc.) and an in-vehicle camera, a machine learning and computer vision framework is employed to estimate the current weather condition in real time and in-vehicle. The use of different sensor types is shown to improve robustness and reduce noise. The resulting framework is modular, so it can be used with different sensor configurations and allows changes in sensor configuration with minimal effort. Finally, a proof-of-concept experiment is presented: a dataset is recorded using a test vehicle and used for model evaluation. The resulting dataset contains 20,000 pairs of video frames and sensor measurements recorded in different weather situations.
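As a toy illustration of the multi-sensor idea, the sketch below fuses a small image encoder with an encoder for scalar weather-sensor readings and classifies the weather condition. The sensor channels, class count, and architecture are assumptions; the thesis framework is not described at this level of detail in the abstract.

```python
import torch
import torch.nn as nn

class WeatherFusionNet(nn.Module):
    """Toy late-fusion classifier: camera frame + scalar weather-sensor readings
    (e.g. pressure, humidity, temperature) -> weather class logits."""
    def __init__(self, num_sensor_channels=3, num_classes=4):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sensor_enc = nn.Sequential(nn.Linear(num_sensor_channels, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, image, sensors):
        fused = torch.cat([self.image_enc(image), self.sensor_enc(sensors)], dim=1)
        return self.head(fused)

net = WeatherFusionNet()
logits = net(torch.randn(2, 3, 128, 128), torch.randn(2, 3))
print(logits.shape)   # torch.Size([2, 4])
```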
ME-BMD-BR
Advisors/Committee Members: Domhof, Joris (mentor), Gavrila, Dariu (graduation committee), Kooij, Julian (graduation committee), Kober, Jens (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: weather types; machine learning; intelligent vehicles; vision; driving; weather
APA (6th Edition):
Wymenga, J. (2018). Weather Condition Estimation in Automated Vehicles. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:421b3c6d-b85e-4876-a963-4094b35dea94
Chicago Manual of Style (16th Edition):
Wymenga, Jan (author). “Weather Condition Estimation in Automated Vehicles.” 2018. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:421b3c6d-b85e-4876-a963-4094b35dea94.
MLA Handbook (7th Edition):
Wymenga, Jan (author). “Weather Condition Estimation in Automated Vehicles.” 2018. Web. 23 Jan 2021.
Vancouver:
Wymenga J. Weather Condition Estimation in Automated Vehicles. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:421b3c6d-b85e-4876-a963-4094b35dea94.
Council of Science Editors:
Wymenga J. Weather Condition Estimation in Automated Vehicles. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:421b3c6d-b85e-4876-a963-4094b35dea94

Delft University of Technology
8.
Jargot, Dominik (author).
Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:6389d77c-007d-455f-8e84-10a4f9b57a9d
Nowadays, autonomous driving is a trending topic in the automotive field. One of the most crucial challenges of autonomous driving research is environment perception. Currently, many techniques achieve satisfactory performance in 2D object detection using camera images. Nevertheless, such 2D object detection might not be sufficient for autonomous driving applications, as the vehicle is operating in a 3D world where all dimensions have to be considered. In this thesis a new method for 3D object detection, using a deep learning approach, is presented. The proposed architecture is able to detect cars using data from images and point clouds. The proposed network does not use any hand-crafted features and is trained in an end-to-end manner. The network is trained and evaluated with the widely used KITTI dataset. The proposed method achieves an average precision of 81.38%, 67.02%, and 65.30% on the easy, moderate, and hard subsets of the KITTI validation dataset, respectively. The average inference time per scene is 0.2 seconds.
Mechanical Engineering | Vehicle Engineering
Advisors/Committee Members: Gavrila, Dariu (mentor), Roth, Markus (mentor), Kober, Jens (graduation committee), Kooij, Julian (graduation committee), Kok, Manon (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: 3D object detection; Thesis; Intelligent Vehicles; Deep Learning; Machine Learning; Camera; Lidar
APA (6th Edition):
Jargot, D. (2019). Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:6389d77c-007d-455f-8e84-10a4f9b57a9d
Chicago Manual of Style (16th Edition):
Jargot, Dominik (author). “Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:6389d77c-007d-455f-8e84-10a4f9b57a9d.
MLA Handbook (7th Edition):
Jargot, Dominik (author). “Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving.” 2019. Web. 23 Jan 2021.
Vancouver:
Jargot D. Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:6389d77c-007d-455f-8e84-10a4f9b57a9d.
Council of Science Editors:
Jargot D. Deep End-to-end Network for 3D Object Detection in the Context of Autonomous Driving. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:6389d77c-007d-455f-8e84-10a4f9b57a9d

Delft University of Technology
9.
Kossen, Rebecca (author).
Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:b97942a0-61c3-4ce4-b961-8121230cba17
Autonomous driving is a development that has gained a lot of attention lately, because it can lead to major improvements in the mobility sector. One example of a research project that aims to develop vehicles capable of reaching the highest level of autonomy in driving is the WEpods project. The goal of this research is in line with this aim, with the thesis objective defined as follows: let the WEpods continue driving in autonomous mode more often than is currently the case. The WEpod shuttles are not yet completely able to drive autonomously due to their inability to handle unexpected behavior (terminology: faults). Currently, such faults need to be detected and solved by a steward, who will manually initiate a safe stop if necessary. The localization module, which is responsible for localizing the vehicle on a map, sometimes generates unreliable location estimates. This poses two challenges. First, the mismatch between reality and the sensor outcomes of the localization module needs to be detected. Second, there is the question of how to prevent the system from showing behavior that is different from what is desired (terminology: failure) in case such a fault is present (terminology: fault tolerant control). Fault tolerant control can be performed in either a passive or an active manner. The passive approach ensures that either the faults are prevented or the system is able to mitigate them by anticipation in the design. The approach evolves from passive to active fault tolerant control when an on-line adaptation of the system control is made. For applications in autonomous driving, it is apparent that it is important to handle not only anticipated faults, but also to be able to deal with unexpected faults in an on-line manner. This on-line fault tolerant control approach involves two fault diagnosis steps that lead to solving the first challenge: detection and isolation. A so-called model-based fault diagnosis approach turned out to be most suitable, as it has been used for similar applications in the past. However, a model-based fault diagnosis approach has not yet been implemented for detecting and isolating faults in a localization module for autonomous driving, indicating the scientific relevance of this research. In the model-based approach, kinematic and dynamic equations of the research vehicle (WEpod) are used to build a computational model. This model is then subjected to an observer that is able to compare the model outcomes with the actual measurements in an off-line way. A residual is drawn up by taking the difference between the model outcomes and the measurements. A threshold, computed from the noise on the measurements, is used for comparison with the residual. When the residual exceeds the threshold, an alarm is raised. This way, the system itself has been enabled to detect faults when they occur internally. Inclusion of the suggested fault diagnosis approach in an on-line manner into the system is a big step towards fully autonomous driving of the…
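The residual-and-threshold logic described in the abstract can be sketched in a few lines: compare measurements against model predictions, derive a threshold from the measurement noise, and raise an alarm when the residual exceeds it. The signal, noise level, and the 3-sigma threshold below are illustrative assumptions, not the thesis's observer design.

```python
import numpy as np

def detect_faults(measured, predicted, noise_std, k=3.0):
    """Model-based fault detection sketch: residual = measurement - model prediction;
    raise an alarm when |residual| exceeds a noise-derived threshold (here k sigma)."""
    residual = measured - predicted
    threshold = k * noise_std
    return np.abs(residual) > threshold, residual, threshold

# Hypothetical localization signal: the model matches the sensor except for a fault at t >= 70.
t = np.arange(100)
predicted = 0.5 * t
measured = predicted + np.random.randn(100) * 0.2
measured[70:] += 2.0                                    # injected localization fault
alarms, residual, thr = detect_faults(measured, predicted, noise_std=0.2)
print("first alarm at t =", int(np.argmax(alarms)))
```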
Advisors/Committee Members: Ferrari, Riccardo M.G. (mentor), Gaisser, Floris (mentor), Gavrila, Dariu (graduation committee), Mugge, Winfred (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Autonomous Vehicles; Fault Detection; Localization; model-based
APA (6th Edition):
Kossen, R. (2019). Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:b97942a0-61c3-4ce4-b961-8121230cba17
Chicago Manual of Style (16th Edition):
Kossen, Rebecca (author). “Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:b97942a0-61c3-4ce4-b961-8121230cba17.
MLA Handbook (7th Edition):
Kossen, Rebecca (author). “Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case.” 2019. Web. 23 Jan 2021.
Vancouver:
Kossen R. Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:b97942a0-61c3-4ce4-b961-8121230cba17.
Council of Science Editors:
Kossen R. Fault Diagnosis of Self-Localization in Autonomous Vehicles Using a Model-Based Approach: The WEpods Case. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:b97942a0-61c3-4ce4-b961-8121230cba17

Delft University of Technology
10.
Bos, Evert (author).
Including traffic light recognition in general object detection with YOLOv2.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03
With an in-vehicle camera, many different tasks can be performed that are essential for ADAS or an autonomous driving mode in a vehicle. First, it can be used for detection of general objects, for example cars, cyclists or pedestrians. Secondly, the camera can be used for traffic light recognition, which is localization of the traffic light position and recognition of the traffic light state. No method currently exists that is able to perform general object detection and traffic light recognition at the same time; therefore this work proposes methods to combine general object detection and traffic light recognition. The novel method presented includes traffic light recognition in a general object detection framework. The single-shot object detector YOLOv2 is used as base detector. COCO is used as the general object class dataset and LISA as the traffic light dataset. Two different methods for combined detection are proposed: adaptive combined training and YOLOv2++. For combined training, YOLOv2 is trained on both datasets with the YOLOv2 network unchanged and the loss function adapted to optimize training on both datasets. For YOLOv2++, the feature extractor of YOLOv2 pre-trained on COCO is used as feature extractor, and LISA traffic light states are trained on these features with a small sub-network. It is concluded that the best performing method is adaptive combined training, which at an IOU of 0.5 reaches an AUC of 24.02% for binary and 21.23% for multi-class classification. At an IOU of 0.1 this increases to 56.74% for binary and 41.87% for multi-class classification. The performance of the adaptive combined detector is 20% lower than the baseline performance of a detector only detecting LISA traffic light states and 5% lower than the baseline of a detector only detecting COCO classes; however, detection of classes from both datasets is almost twice as fast as separate detection with different networks for each dataset.
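One common way to adapt a loss for training on two datasets with disjoint label spaces (a generic sketch, not necessarily the thesis's adapted YOLOv2 loss) is to mask out, per sample, the classes its source dataset does not annotate. The class counts and the unified label layout below are assumptions.

```python
import torch
import torch.nn.functional as F

NUM_GENERAL = 80          # hypothetical: COCO-style general-object classes 0..79
NUM_TL = 6                # hypothetical: LISA-style traffic-light states 80..85

def masked_class_loss(logits, targets, source):
    """Per-class BCE that only counts classes annotated by each sample's source dataset,
    a generic sketch of training one detector head on two datasets with disjoint labels."""
    mask = torch.zeros_like(logits)
    mask[source == 0, :NUM_GENERAL] = 1.0     # samples from the general-object dataset
    mask[source == 1, NUM_GENERAL:] = 1.0     # samples from the traffic-light dataset
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

logits = torch.randn(8, NUM_GENERAL + NUM_TL)
targets = torch.zeros_like(logits)
targets[torch.arange(8), torch.randint(0, NUM_GENERAL + NUM_TL, (8,))] = 1.0
source = torch.randint(0, 2, (8,))            # 0 = general dataset, 1 = traffic lights
print(masked_class_loss(logits, targets, source))
```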
mech
Advisors/Committee Members: Kooij, Julian (mentor), Pool, Ewoud (graduation committee), Gavrila, Dariu (graduation committee), Kober, Jens (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Traffic Light recognition; machine learning; YOLO; object detection; COCO; LISA
APA (6th Edition):
Bos, E. (2019). Including traffic light recognition in general object detection with YOLOv2. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03
Chicago Manual of Style (16th Edition):
Bos, Evert (author). “Including traffic light recognition in general object detection with YOLOv2.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03.
MLA Handbook (7th Edition):
Bos, Evert (author). “Including traffic light recognition in general object detection with YOLOv2.” 2019. Web. 23 Jan 2021.
Vancouver:
Bos E. Including traffic light recognition in general object detection with YOLOv2. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03.
Council of Science Editors:
Bos E. Including traffic light recognition in general object detection with YOLOv2. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:09f32632-04eb-4907-9100-766590dc2d03

Delft University of Technology
11.
Katsaounis, Georgios (author).
Extended Object Tracking of Pedestrians in Automotive Applications.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:d7226685-9ffe-417f-9939-2167a9dfd749
Recent advances in sensor technology have led to increased resolution of novel sensors, while tracking applications where the distance between sensors and objects of interest is very small have recently gained research interest. In these cases, it is possible that multiple sensor detections are generated by each object of interest. Extended Object Tracking (EOT) approaches consist of algorithms which make use of multiple sensor detections per object to jointly estimate their kinematic and shape extent attributes within the Bayesian tracking framework. In the last decade, various EOT algorithms have been proposed for different types of tracking applications. This M.Sc. thesis project addresses the problem of extended tracking of a single pedestrian walking in the area of a stationary vehicle (referred to as the ego-vehicle in this report) during a real automotive scenario. The objective is to achieve accurate estimation of both the kinematic attributes (2D centroid position/velocity) and the shape extent in the x-y plane. In more detail, PreScan software is used to design a simulation scenario that is very close to a real automotive application, in terms of motion characteristics of objects of interest and sensor data acquisition. In the considered scenario, different sensor modalities are mounted on the ego-vehicle, namely a Lidar sensor and a mono camera sensor. Moreover, the OpenPose library is employed to obtain pose detections of human body parts from the obtained camera images. Concerning shape extent representation, the simplest and most popular approach in previous studies, in general and especially for VRU tracking, is to assume an elliptical shape. In fact, the Random Matrix Model (RMM), proposed originally by Koch, 2008, is a state-of-the-art EOT state modeling approach that allows for joint estimation of centroid kinematics and physical extent for considered elliptical objects of interest. Based on that, an RMM-based filter using Lidar position measurements has been proposed by Feldmann, 2011. In this project, this algorithm is used as a baseline filter for comparison with our proposed algorithm. In addition, an alternative tracking algorithm is proposed in this study, which differs from the baseline filter as follows. State initialization of the filter: in our proposed version of the tracking algorithm, human pose detections of shoulders and ankles are associated with obtained Lidar position measurements in order to provide initial values for the kinematic state (2D position/velocity) and shape parameters (ellipse orientation and semi-axes lengths) of the pedestrian. Measurement update step of the filter: in our proposed version of the tracking algorithm, camera-obtained pose detections of pedestrian shoulders are associated with obtained Lidar position measurements in order to create an extra measurement for the pedestrian heading angle. Subsequently, a nonlinear filtering update step fusing Lidar-obtained point cloud data for pedestrian position and human-pose-obtained…
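As a small illustration of representing a pedestrian's extent as an ellipse from point detections, the sketch below fits a centroid and a scatter-based orientation and semi-axes to a 2-D lidar cluster. This is a one-shot estimate for intuition only; the thesis uses a Random Matrix Model filter that updates the extent recursively within the Bayesian framework.

```python
import numpy as np

def ellipse_from_points(points):
    """Estimate an elliptical extent (centroid, orientation, semi-axes) from 2-D
    point detections, e.g. a lidar cluster on a pedestrian. Illustrative only."""
    centroid = points.mean(axis=0)
    cov = np.cov(points.T)                          # 2x2 scatter of the cluster
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    orientation = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])   # major-axis angle
    semi_axes = 2.0 * np.sqrt(eigvals[::-1])        # ~2-sigma extent, major axis first
    return centroid, orientation, semi_axes

# Hypothetical cluster of lidar returns around a pedestrian torso
pts = np.random.randn(40, 2) * np.array([0.25, 0.15]) + np.array([3.0, 1.0])
print(ellipse_from_points(pts))
```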
Advisors/Committee Members: Alonso Mora, Javier (mentor), Domhof, Joris (mentor), Tasoglou, Athanasios (mentor), Gavrila, Dariu (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Extended Object Tracking; Vulnerable Road Users; Pedestrians; Environmental Perception; Automotive Applications; Lidar sensor; Mono camera sensor; Sensor Fusion; Random Matrix Model; Elliptical shape; OpenPose library; Human Pose Detections; position measurement; heading angle measurement; Extended Kalman Filter; Kalman Filter
APA (6th Edition):
Katsaounis, G. (2019). Extended Object Tracking of Pedestrians in Automotive Applications. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:d7226685-9ffe-417f-9939-2167a9dfd749
Chicago Manual of Style (16th Edition):
Katsaounis, Georgios (author). “Extended Object Tracking of Pedestrians in Automotive Applications.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:d7226685-9ffe-417f-9939-2167a9dfd749.
MLA Handbook (7th Edition):
Katsaounis, Georgios (author). “Extended Object Tracking of Pedestrians in Automotive Applications.” 2019. Web. 23 Jan 2021.
Vancouver:
Katsaounis G. Extended Object Tracking of Pedestrians in Automotive Applications. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:d7226685-9ffe-417f-9939-2167a9dfd749.
Council of Science Editors:
Katsaounis G. Extended Object Tracking of Pedestrians in Automotive Applications. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:d7226685-9ffe-417f-9939-2167a9dfd749

Delft University of Technology
12.
YANG, MINGHAO (author).
Efficient Neural Network Architecture Search.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:9985c543-cb4e-4259-b6f8-b44ba433f1e3
One-Shot Neural Architecture Search (NAS) is a promising method to significantly reduce search time without any separate training. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, there are two issues associated with most one-shot NAS methods. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this thesis, a classic Bayesian learning approach is applied to alleviate these two issues. Unlike other NAS methods, we train the over-parameterized network for only one epoch before updating the network architecture. Impressively, this enabled us to find the optimal architecture in both proxy and proxyless tasks on CIFAR-10 within only 0.2 GPU days using a single GPU. As a byproduct, our approach can be transferred directly to convolutional neural network compression by enforcing structural sparsity, which achieves extremely sparse networks without accuracy deterioration.
Mechanical Engineering | Vehicle Engineering
Advisors/Committee Members: Pan, Wei (mentor), Zhou, Hongpeng (mentor), Gavrila, Dariu (graduation committee), van de Plas, Raf (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: NAS; Deep Learning; ICML; Artificial Intelligence
APA (6th Edition):
YANG, M. (2019). Efficient Neural Network Architecture Search. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:9985c543-cb4e-4259-b6f8-b44ba433f1e3
Chicago Manual of Style (16th Edition):
YANG, MINGHAO (author). “Efficient Neural Network Architecture Search.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:9985c543-cb4e-4259-b6f8-b44ba433f1e3.
MLA Handbook (7th Edition):
YANG, MINGHAO (author). “Efficient Neural Network Architecture Search.” 2019. Web. 23 Jan 2021.
Vancouver:
YANG M. Efficient Neural Network Architecture Search. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:9985c543-cb4e-4259-b6f8-b44ba433f1e3.
Council of Science Editors:
YANG M. Efficient Neural Network Architecture Search. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:9985c543-cb4e-4259-b6f8-b44ba433f1e3

Delft University of Technology
13.
Glastra, Thom (author).
Enabling GLOSA for on-street operating traffic light controllers.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:63a21739-a200-4375-8f19-641352e504b2
The bottleneck for the maximum road volume in urban areas is the maximum capacity of the traffic flow at the intersection, which is coordinated with Traffic Light Controllers (TLCs). A promising method to decrease the number of stops is Green Light Optimal Speed Advice (GLOSA) systems. These systems give a speed advice to arriving vehicles based on the schedule of the TLCs, which needs to be known and fixed. However, most on-street controllers change their schedule until the last moment to maximize performance. In this thesis a predictive controller is developed that is suitable for real-world application, based on DIRECTOR, a state-of-the-art predictive controller. A prediction model is used to predict future arrivals based on available measurements in order to optimize and fix the schedule in advance. The proposed controller can enable GLOSA systems to improve performance. Appropriate pre-processing steps are implemented and the optimal input features are selected to improve the performance of a Long Short-Term Memory (LSTM) network that predicts future arrivals. All detection data is made stationary over time by using the differenced series. The time data is divided into workdays and weekend days to create a binary input, and undesirable jumps during midnight are removed. The combination of stop line detectors, queue detectors, arrival detectors and signal states of the controlled and preceding intersections as input maximized the performance. The prediction horizon of the proposed prediction model could be extended. The Normalized Root Mean Square Error (NRMSE) decreased by 17% compared to DIRECTOR. The proposed controller extends the control horizon and uses multiple prediction models to predict the arrivals for the entire control horizon. The proposed controller outperforms DIRECTOR with a 14 - 38% reduction in vehicle delay and a 5 - 32% reduction in number of stops, depending on the scheduling mode. The GLOSA system is an add-on to the controller, which is able to operate without it. The control horizon of the proposed controller always has a fixed length, which is needed to determine the time until green. The implemented GLOSA system determines the optimal speed based on the time until green and the expected delay due to the surrounding vehicles. The proposed controller is a cloud-controlled application; therefore, it is possible to adjust the setup (i.e. scheduling modes) during the day. Enabling GLOSA all day except during rush hours leads to a 3 - 4% reduction in vehicle delay and a 29 - 32% reduction in number of stops, depending on the scheduling mode. This setup of the proposed controller is also competitive with the hand-crafted, non-predictive on-street controller. Compared with this controller, the proposed controller reduces the number of stops by 26% at the cost of a 16% increase in vehicle delay. The proposed controller is designed to conform to the safety standards used on-street. Permission has been received from the provincial government to do on-street pilots with the…
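The core GLOSA computation the abstract builds on can be sketched as advising the speed that brings a vehicle to the stop line exactly when the light turns green, clipped to a sensible range. The speed limits below are assumptions, and the thesis's handling of delay caused by surrounding vehicles is omitted.

```python
def glosa_speed_advice(distance_to_stopline_m, time_until_green_s,
                       v_min_kmh=15.0, v_max_kmh=50.0):
    """Basic GLOSA idea: advise the speed that makes the vehicle reach the stop line
    just as the light turns green, clipped to a comfortable/legal range."""
    if time_until_green_s <= 0:
        return v_max_kmh                       # already (about to be) green: keep cruising
    v_advice = 3.6 * distance_to_stopline_m / time_until_green_s   # m/s -> km/h
    return min(max(v_advice, v_min_kmh), v_max_kmh)

print(glosa_speed_advice(distance_to_stopline_m=150.0, time_until_green_s=18.0))  # ~30 km/h
```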
Advisors/Committee Members: Gavrila, Dariu (graduation committee), Kooij, Julian (mentor), Wang, Meng (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Traffic flow; Traffic light controller; TLC; Predictive controller; Prediction model; Decentralized control; Model predictive control; green light optimal speed advice; GLOSA
APA (6th Edition):
Glastra, T. (2020). Enabling GLOSA for on-street operating traffic light controllers. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:63a21739-a200-4375-8f19-641352e504b2
Chicago Manual of Style (16th Edition):
Glastra, Thom (author). “Enabling GLOSA for on-street operating traffic light controllers.” 2020. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:63a21739-a200-4375-8f19-641352e504b2.
MLA Handbook (7th Edition):
Glastra, Thom (author). “Enabling GLOSA for on-street operating traffic light controllers.” 2020. Web. 23 Jan 2021.
Vancouver:
Glastra T. Enabling GLOSA for on-street operating traffic light controllers. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:63a21739-a200-4375-8f19-641352e504b2.
Council of Science Editors:
Glastra T. Enabling GLOSA for on-street operating traffic light controllers. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:63a21739-a200-4375-8f19-641352e504b2

Delft University of Technology
14.
Uittenbogaard, Ries (author).
Moving object detection and image inpainting in street-view imagery.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:5528c3a3-8ff8-4c96-b181-a9bbae0e6d28
In this thesis, a pipeline is created consisting of two parts. In the first part, the moving objects (cars, cyclists, pedestrians) are detected in street-view imagery using image segmentation neural networks and a LIDAR-based moving object detection approach. In the second part, those moving objects are deleted from the image data and an image inpainting approach is used to inpaint the hole. This is a unique approach in which information from multiple views is used as an input for a Generative Adversarial Network (GAN).
Advisors/Committee Members: Gavrila, Dariu (mentor), Sebastian, Clint (mentor), Vijverberg, Julien (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Image inpainting; Moving object detection; Image segmentation; Generative Adversarial Networks; LIDAR
APA (6th Edition):
Uittenbogaard, R. (2018). Moving object detection and image inpainting in street-view imagery. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:5528c3a3-8ff8-4c96-b181-a9bbae0e6d28
Chicago Manual of Style (16th Edition):
Uittenbogaard, Ries (author). “Moving object detection and image inpainting in street-view imagery.” 2018. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:5528c3a3-8ff8-4c96-b181-a9bbae0e6d28.
MLA Handbook (7th Edition):
Uittenbogaard, Ries (author). “Moving object detection and image inpainting in street-view imagery.” 2018. Web. 23 Jan 2021.
Vancouver:
Uittenbogaard R. Moving object detection and image inpainting in street-view imagery. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:5528c3a3-8ff8-4c96-b181-a9bbae0e6d28.
Council of Science Editors:
Uittenbogaard R. Moving object detection and image inpainting in street-view imagery. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:5528c3a3-8ff8-4c96-b181-a9bbae0e6d28

Delft University of Technology
15.
GAO, Xinyu (author).
Sensor Data Fusion of Lidar and Camera for Road User Detection.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a
Object detection is one of the most important research topics in autonomous vehicles. The detection systems of autonomous vehicles today are mostly image-based, detecting target objects in camera images. Although image-based detectors can provide a rather accurate 2D position of an object in the image, an autonomous vehicle also needs the object's accurate 3D position, since it operates in the real 3D world and the relative position of objects heavily influences the vehicle control strategy. This thesis aims to find a solution for 3D object detection by combining the Lidar point cloud and camera images, as these are two of the most commonly used perception sensors on autonomous vehicles. Lidar performs much better than the camera in 3D object detection, since its point cloud reconstructs the surface of the surroundings. Moreover, combining Lidar with the camera provides system redundancy in case of a single sensor failure. Owing to the development of neural networks (NNs), past research has achieved great success in detecting objects in images. Similarly, by applying deep learning algorithms to parse the point cloud, the proposed 3D object detection system obtains a competitive result on the KITTI 3D object detection benchmark.
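As a purely illustrative sketch of one common Lidar-camera fusion scheme (not necessarily the method used in this thesis), the code below projects LIDAR points into the image with a given projection matrix, keeps the points that fall inside a 2D camera detection box, and takes their centroid as a crude 3D position estimate for that road user. The calibration matrix and detection box are made-up placeholders.
```python
import numpy as np

def project_to_image(points_xyz: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 LIDAR points to pixel coordinates using a 3x4 projection matrix P."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # N x 4
    uvw = (P @ homo.T).T                                               # N x 3
    return uvw[:, :2] / uvw[:, 2:3]                                    # N x 2 pixels

def points_in_box(pixels: np.ndarray, box: tuple) -> np.ndarray:
    """Boolean mask of projected points inside a 2D box (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = box
    return ((pixels[:, 0] >= u1) & (pixels[:, 0] <= u2) &
            (pixels[:, 1] >= v1) & (pixels[:, 1] <= v2))

def estimate_3d_position(points_xyz: np.ndarray, P: np.ndarray, box: tuple) -> np.ndarray:
    """Centroid of the LIDAR points that project into the 2D detection box."""
    inside = points_in_box(project_to_image(points_xyz, P), box)
    return points_xyz[inside].mean(axis=0)

# Toy usage: identity-like projection and a synthetic point cloud in front of the sensor.
P = np.hstack([np.eye(3), np.zeros((3, 1))])            # hypothetical calibration
cloud = np.random.uniform([-5, -2, 5], [5, 2, 30], size=(1000, 3))
box = (-0.2, -0.2, 0.2, 0.2)                            # hypothetical 2D detection
print(estimate_3d_position(cloud, P, box))
```
A deep-learning detector, as used in the thesis, would replace the simple centroid step with a learned 3D bounding-box regressor, but the projection-and-association geometry above is the common starting point.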
Vehicle Engineering
Advisors/Committee Members: Gavrila, Dariu (mentor), Domhof, Joris (mentor), Kooij, Julian (graduation committee), Pan, Wei (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: 3D object detection; Lidar; Camera; sensor fusion
APA (6th Edition):
GAO, X. (2018). Sensor Data Fusion of Lidar and Camera for Road User Detection. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a
Chicago Manual of Style (16th Edition):
GAO, Xinyu (author). “Sensor Data Fusion of Lidar and Camera for Road User Detection.” 2018. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a.
MLA Handbook (7th Edition):
GAO, Xinyu (author). “Sensor Data Fusion of Lidar and Camera for Road User Detection.” 2018. Web. 23 Jan 2021.
Vancouver:
GAO X. Sensor Data Fusion of Lidar and Camera for Road User Detection. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a.
Council of Science Editors:
GAO X. Sensor Data Fusion of Lidar and Camera for Road User Detection. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:e310da67-98b2-4288-b656-15da36e3f12a

Delft University of Technology
16.
Yu, Rui (author).
Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:a4ea860a-46d1-498b-9742-152e92f55ace
The development of intelligent vehicles and autonomous driving places higher functional requirements on ADAS. Currently, ADAS systems are able to detect and segment urban and highway driving scenes, but they cannot, in general, extract 'meaning' from this segmentation yet. Learning the intentions of other road users will help ADAS understand the surroundings and respond to them. In a highway scenario, understanding what the preceding vehicle is about to do is the minimum level of environmental understanding needed to decide on the ego vehicle's own actions. Among the driving behaviors the preceding vehicle can perform, a lane change is a complex and dangerous one. We therefore aimed to develop a real-time lane change intention recognition model. This report presents three models inspired by Hidden Markov Models (HMMs) and Relevance Vector Machines (RVMs). Besides these two methods, we propose a new model that combines them and overcomes both of their main shortcomings. According to the test results, the proposed model correctly recognizes more than 95% of the driving behaviors within 1 second after the behavior starts, while the F1 score is as high as 0.98. Besides its high accuracy, the model also performs well in terms of flexibility, testing complexity, and generalization ability.
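As a hedged illustration of the HMM side of such a recognizer (not the combined HMM+RVM model proposed in the thesis), the sketch below trains one Gaussian-emission HMM per maneuver class on feature sequences (here, lateral offset and lateral velocity) and labels a new sequence with the class whose model assigns it the highest log-likelihood. It assumes the hmmlearn package is available; the feature layout, class names, and synthetic data are invented for the example.
```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed available (pip install hmmlearn)

def make_sequences(n_seq: int, drift: float, seq_len: int = 30) -> list:
    """Synthetic [lateral offset, lateral velocity] sequences for one maneuver class."""
    seqs = []
    for _ in range(n_seq):
        vel = drift + 0.05 * np.random.randn(seq_len)
        offset = np.cumsum(vel)
        seqs.append(np.stack([offset, vel], axis=1))
    return seqs

def fit_class_hmm(seqs: list) -> GaussianHMM:
    """Fit one HMM on all sequences of a class (concatenated, with per-sequence lengths)."""
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(models: dict, seq: np.ndarray) -> str:
    """Pick the class whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda c: models[c].score(seq))

# Train one model per class on synthetic data, then classify a left-lane-change sequence.
class_drifts = {"lane_keep": 0.0, "lane_change_left": 0.1, "lane_change_right": -0.1}
models = {c: fit_class_hmm(make_sequences(20, d)) for c, d in class_drifts.items()}
test_seq = make_sequences(1, drift=0.1)[0]
print(classify(models, test_seq))
```
The thesis's contribution lies in combining this likelihood-based HMM view with an RVM discriminator; the per-class-HMM comparison shown here is only the generative half of that picture.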
Mechanical Engineering | Vehicle Engineering | Cognitive Robotics
Advisors/Committee Members: Gavrila, Dariu (mentor), Tejada Ruiz, Arturo (graduation committee), Kooij, Julian (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Intention Recognition; Lane Change; Hidden Markov Model; Relevance Vector Machine
APA (6th Edition):
Yu, R. (2019). Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:a4ea860a-46d1-498b-9742-152e92f55ace
Chicago Manual of Style (16th Edition):
Yu, Rui (author). “Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines.” 2019. Masters Thesis, Delft University of Technology. Accessed January 23, 2021.
http://resolver.tudelft.nl/uuid:a4ea860a-46d1-498b-9742-152e92f55ace.
MLA Handbook (7th Edition):
Yu, Rui (author). “Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines.” 2019. Web. 23 Jan 2021.
Vancouver:
Yu R. Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Jan 23].
Available from: http://resolver.tudelft.nl/uuid:a4ea860a-46d1-498b-9742-152e92f55ace.
Council of Science Editors:
Yu R. Lane Change Intention Recognition Models Using Hidden Markov Models and Relevance Vector Machines. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:a4ea860a-46d1-498b-9742-152e92f55ace