
You searched for subject:(Learning from Demonstrations). Showing records 1 – 3 of 3 total matches.

No search limiters apply to these results.

1. Rozemuller, C.G. Action learning from human demonstrations for personal robots.

Degree: 2013, Delft University of Technology

Household robots need to perform tasks specific to their owner. With Learning from Demonstration (LfD), a robot can learn new tasks from human demonstrations without requiring the owner to have programming skills. This thesis investigates a novel representation of actions that can be learned using only a 3D camera and an object tracker. The action representation is object-based, so it is independent of the morphology of the robot. Actions are represented by the average and standard deviation of multiple demonstrated trajectories with six degrees of freedom; the standard deviation serves as a weight factor for the required accuracy of the recognized or synthesized trajectory. Three novel methods proposed in this thesis aim to reduce variation in the demonstrations that is not specific to the action. First, the demonstrations are aligned in time using a novel action signature. Second, a novel time warp algorithm approximates the alignment of multiple multidimensional signals in quadratic computing time. Third, a dynamically optimized choice of reference frame ensures that variations in start and end position have little influence on the variance of the trajectory. The approach has been tested on a database of five actions repeatedly demonstrated by six subjects. The results show that a 90 percent action recognition rate is possible with only three demonstrations in the database. It is also shown that a robot can use this action representation to synthesize four out of five actions with varying object positions. Advisors/Committee Members: Jonker, P.P., Rudinac, M.
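
The core of the representation described in this abstract can be illustrated with a short sketch. The code below is not the thesis implementation: the alignment step is reduced to uniform resampling instead of the action-signature time warp, and every function name (`align_by_resampling`, `action_model`, `recognition_score`) is invented for this example. It only shows the general idea of modelling an action as the per-sample mean and standard deviation of several 6-DoF demonstrations and using the inverse standard deviation as an accuracy weight.

```python
import numpy as np

def align_by_resampling(demos, n_samples=100):
    """Roughly align demonstrations in time by resampling each (T_i x 6)
    trajectory to a common length. The thesis uses an action-signature-based
    time warp instead; uniform resampling is only a stand-in here."""
    aligned = []
    for d in demos:
        t_old = np.linspace(0.0, 1.0, len(d))
        t_new = np.linspace(0.0, 1.0, n_samples)
        aligned.append(np.column_stack(
            [np.interp(t_new, t_old, d[:, k]) for k in range(d.shape[1])]))
    return np.stack(aligned)                      # (n_demos, n_samples, 6)

def action_model(demos):
    """Per-sample mean and standard deviation over the aligned demonstrations.
    A small std means the demonstrations agreed, i.e. that part of the
    trajectory must be reproduced accurately."""
    aligned = align_by_resampling(demos)
    mean = aligned.mean(axis=0)
    std = aligned.std(axis=0) + 1e-6              # avoid division by zero
    return mean, std

def recognition_score(traj, mean, std):
    """Deviation of a new trajectory from the model, weighted by 1/std,
    so deviations count most where the demonstrations were consistent.
    Lower means a better match."""
    t = align_by_resampling([traj], n_samples=mean.shape[0])[0]
    return float(np.mean(((t - mean) / std) ** 2))

# Hypothetical usage with random stand-in data for three demonstrations:
demos = [np.cumsum(np.random.randn(150 + 10 * i, 6), axis=0) for i in range(3)]
mean, std = action_model(demos)
print(recognition_score(demos[0], mean, std))
```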

Subjects/Keywords: learning from demonstrations




APA (6th Edition):

Rozemuller, C. G. (2013). Action learning from human demonstrations for personal robots:. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:4dbcecf9-e143-4f16-b315-f4a509140b7d

Chicago Manual of Style (16th Edition):

Rozemuller, C G. “Action learning from human demonstrations for personal robots:.” 2013. Masters Thesis, Delft University of Technology. Accessed March 29, 2020. http://resolver.tudelft.nl/uuid:4dbcecf9-e143-4f16-b315-f4a509140b7d.

MLA Handbook (7th Edition):

Rozemuller, C G. “Action learning from human demonstrations for personal robots:.” 2013. Web. 29 Mar 2020.

Vancouver:

Rozemuller CG. Action learning from human demonstrations for personal robots:. [Internet] [Masters thesis]. Delft University of Technology; 2013. [cited 2020 Mar 29]. Available from: http://resolver.tudelft.nl/uuid:4dbcecf9-e143-4f16-b315-f4a509140b7d.

Council of Science Editors:

Rozemuller CG. Action learning from human demonstrations for personal robots:. [Masters Thesis]. Delft University of Technology; 2013. Available from: http://resolver.tudelft.nl/uuid:4dbcecf9-e143-4f16-b315-f4a509140b7d



2. Lundell, Jens. Dynamic movement primitives and reinforcement learning for adapting a learned skill.

Degree: 2016, Luleå University of Technology

Traditionally, robots have been preprogrammed to execute specific tasks. This approach works well in industrial settings where robots have to execute highly accurate movements, such as when welding. However, preprogramming a robot is also expensive, error prone and time consuming, because every feature of the task has to be considered. In some cases, where a robot has to execute complex tasks such as playing the ball-in-a-cup game, preprogramming it might even be impossible due to unknown features of the task. With all this in mind, this thesis examines the possibility of using a modern learning framework, known as Learning from Demonstrations (LfD), to first teach a robot how to play the ball-in-a-cup game by demonstrating the movement for the robot, and then having the robot improve this skill by itself with subsequent Reinforcement Learning (RL). The skill the robot has to learn is demonstrated with kinesthetic teaching, modelled as a dynamic movement primitive, and subsequently improved with the RL algorithm Policy Learning by Weighted Exploration with the Returns. Experiments performed on the industrial robot KUKA LWR4+ showed that robots are capable of successfully learning a complex skill such as playing the ball-in-a-cup game.

Traditionally, robots have been preprogrammed to perform specific tasks. This approach works well in industrial environments where robots must perform very precise movements, such as welding. However, preprogramming robots is expensive, error prone and time consuming, since every aspect of the task has to be taken into account. These drawbacks can even make it impossible to preprogram a robot to perform complex tasks such as playing the ball-in-a-cup game. With all this in mind, this thesis investigates the possibility of using a modern framework, called learning from demonstrations, to teach a robot how to play the ball-in-a-cup game by demonstrating the task for it, and then having the robot improve the learned task itself using reinforcement learning. The task the robot must learn is demonstrated with kinesthetic teaching, modelled as dynamic movement primitives, and later improved with the reinforcement learning algorithm Policy Learning by Weighted Exploration with the Returns. Experiments performed on the industrial KUKA LWR4+ robot showed that robots are capable of successfully learning to play the ball-in-a-cup game.
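
Since the abstract names the two main ingredients, a dynamic movement primitive obtained from a kinesthetic demonstration and a return-weighted, PoWER-style policy update, a minimal one-dimensional sketch of how they fit together is given below. It follows the standard discrete-DMP formulation rather than the thesis's implementation, and every function name, parameter value, and the toy return used here is an illustrative assumption.

```python
import numpy as np

def dmp_rollout(w, y0, g, n_steps=200, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Roll out a 1-D discrete dynamic movement primitive whose forcing
    term is shaped by Gaussian basis weights w (standard DMP form)."""
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, len(w)))   # basis centres in phase space
    widths = 1.0 / (np.diff(centers, append=centers[-1] * 0.5) ** 2)
    y, dy, x = y0, 0.0, 1.0
    dt = 1.0 / n_steps
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)       # forcing term
        ddy = alpha * (beta * (g - y) - dy) + f                  # transformation system
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x * dt                                   # canonical system (phase)
        traj.append(y)
    return np.array(traj)

def power_update(w, rollouts):
    """Return-weighted (PoWER-style) update: average the exploration offsets
    of the rollouts, weighted by their returns R."""
    num = sum(R * (wk - w) for wk, R in rollouts)
    den = sum(R for _, R in rollouts) + 1e-10
    return w + num / den

# Hypothetical improvement loop: explore around the current weights,
# score each rollout with a toy return, and re-weight.
w = np.zeros(10)          # in the thesis setting these would be fitted to the demonstration
for _ in range(20):
    rollouts = []
    for _ in range(15):
        wk = w + np.random.normal(0.0, 5.0, size=w.shape)
        traj = dmp_rollout(wk, y0=0.0, g=1.0)
        R = np.exp(-abs(traj[-1] - 1.0))          # toy return: end near the goal
        rollouts.append((wk, R))
    w = power_update(w, rollouts)
```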

Subjects/Keywords: Learning from Demonstrations; Dynamic Movement Primitives; Reinforcement Learning; Robotics; Robotics and Automation (Robotteknik och automation)



APA (6th Edition):

Lundell, J. (2016). Dynamic movement primitives and reinforcement learning for adapting a learned skill. (Thesis). Luleå University of Technology. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-45925

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lundell, Jens. “Dynamic movement primitives and reinforcement learning for adapting a learned skill.” 2016. Thesis, Luleå University of Technology. Accessed March 29, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-45925.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lundell, Jens. “Dynamic movement primitives and reinforcement learning for adapting a learned skill.” 2016. Web. 29 Mar 2020.

Vancouver:

Lundell J. Dynamic movement primitives and reinforcement learning for adapting a learned skill. [Internet] [Thesis]. Luleå University of Technology; 2016. [cited 2020 Mar 29]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-45925.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lundell J. Dynamic movement primitives and reinforcement learning for adapting a learned skill. [Thesis]. Luleå University of Technology; 2016. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-45925

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

3. YUAN JINQIANG. LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING.

Degree: 2019, National University of Singapore

Subjects/Keywords: Learning from Demonstrations; Combined Task and Motion Planning; Manipulation Planning; Robotic Learning; Dynamic Movement Primitives; Task Planning



APA (6th Edition):

JINQIANG, Y. (2019). LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING. (Thesis). National University of Singapore. Retrieved from https://scholarbank.nus.edu.sg/handle/10635/158088

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

JINQIANG, YUAN. “LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING.” 2019. Thesis, National University of Singapore. Accessed March 29, 2020. https://scholarbank.nus.edu.sg/handle/10635/158088.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

JINQIANG, YUAN. “LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING.” 2019. Web. 29 Mar 2020.

Vancouver:

JINQIANG Y. LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING. [Internet] [Thesis]. National University of Singapore; 2019. [cited 2020 Mar 29]. Available from: https://scholarbank.nus.edu.sg/handle/10635/158088.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

JINQIANG Y. LEARNING ACTIONS FROM DEMONSTRATIONS FOR MANIPULATION TASK PLANNING. [Thesis]. National University of Singapore; 2019. Available from: https://scholarbank.nus.edu.sg/handle/10635/158088

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
