You searched for subject:(Inverse reinforcement learning). Showing records 1 – 30 of 30 total matches.



Rice University

1. Daptardar, Saurabh. The Science of Mind Reading: New Inverse Optimal Control Framework.

Degree: MS, Engineering, 2018, Rice University

 Continuous control and planning by the brain remain poorly understood and are a major challenge in the field of neuroscience. To truly say that we… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Inverse Optimal Control; Reinforcement Learning; Optimal Control; Neuroscience

APA (6th Edition):

Daptardar, S. (2018). The Science of Mind Reading: New Inverse Optimal Control Framework. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105893

Chicago Manual of Style (16th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Masters Thesis, Rice University. Accessed October 26, 2020. http://hdl.handle.net/1911/105893.

MLA Handbook (7th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Web. 26 Oct 2020.

Vancouver:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Internet] [Masters thesis]. Rice University; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/1911/105893.

Council of Science Editors:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Masters Thesis]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105893


University of Illinois – Urbana-Champaign

2. Zaytsev, Andrey. Faster apprenticeship learning through inverse optimal control.

Degree: MS, Computer Science, 2017, University of Illinois – Urbana-Champaign

 One of the fundamental problems of artificial intelligence is learning how to behave optimally. With applications ranging from self-driving cars to medical devices, this task… (more)

Subjects/Keywords: Apprenticeship learning; Inverse reinforcement learning; Inverse optimal control; Deep learning; Reinforcement learning; Machine learning

APA (6th Edition):

Zaytsev, A. (2017). Faster apprenticeship learning through inverse optimal control. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed October 26, 2020. http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Web. 26 Oct 2020.

Vancouver:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

3. Perundurai Rajasekaran, Siddharthan. Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks.

Degree: MS, 2017, Worcester Polytechnic Institute

 "This thesis focuses on two key problems in reinforcement learning: How to design reward functions to obtain intended behaviors in autonomous systems using the learning-based… (more)

Subjects/Keywords: Learning with uncertainty; Unsupervised learning; Reinforcement Learning; Inverse Reinforcement Learning

APA (6th Edition):

Perundurai Rajasekaran, S. (2017). Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks. (Thesis). Worcester Polytechnic Institute. Retrieved from etd-083017-144531 ; https://digitalcommons.wpi.edu/etd-theses/1205

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Perundurai Rajasekaran, Siddharthan. “Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks.” 2017. Thesis, Worcester Polytechnic Institute. Accessed October 26, 2020. etd-083017-144531 ; https://digitalcommons.wpi.edu/etd-theses/1205.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Perundurai Rajasekaran, Siddharthan. “Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks.” 2017. Web. 26 Oct 2020.

Vancouver:

Perundurai Rajasekaran S. Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks. [Internet] [Thesis]. Worcester Polytechnic Institute; 2017. [cited 2020 Oct 26]. Available from: etd-083017-144531 ; https://digitalcommons.wpi.edu/etd-theses/1205.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Perundurai Rajasekaran S. Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks. [Thesis]. Worcester Polytechnic Institute; 2017. Available from: etd-083017-144531 ; https://digitalcommons.wpi.edu/etd-theses/1205

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Texas – Austin

4. -8073-3276. Parameterized modular inverse reinforcement learning.

Degree: MS in Computer Sciences, Computer Science, 2015, University of Texas – Austin

Reinforcement learning and inverse reinforcement learning can be used to model and understand human behaviors. However, due to the curse of dimensionality, their use as… (more)

Subjects/Keywords: Reinforcement learning; Artificial intelligence; Inverse reinforcement learning; Modular inverse reinforcement learning; Reinforcement learning algorithms; Human navigation behaviors

APA (6th Edition):

-8073-3276. (2015). Parameterized modular inverse reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Masters Thesis, University of Texas – Austin. Accessed October 26, 2020. http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Web. 26 Oct 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-8073-3276. Parameterized modular inverse reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2015. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-8073-3276. Parameterized modular inverse reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


NSYSU

5. Tseng, Yi-Chia. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning (RL) techniques use a reward function to correct a learning agent to solve sequential decision making problems through interactions with a dynamic environment,… (more)

Subjects/Keywords: Apprenticeship Learning; Feature weight; Inverse Reinforcement learning; Reward function; Reinforcement learning

APA (6th Edition):

Tseng, Y. (2015). An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Thesis, NSYSU. Accessed October 26, 2020. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Web. 26 Oct 2020.

Vancouver:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Internet] [Thesis]. NSYSU; 2015. [cited 2020 Oct 26]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

6. Lin, Hung-shyuan. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

 It's a study on reinforcement learning, in which the agent learns through interaction with a dynamic environment to obtain the reward function R, update the policy, and converge its learning, and… (more)

Subjects/Keywords: Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning

APA (6th Edition):

Lin, H. (2015). Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed October 26, 2020. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Web. 26 Oct 2020.

Vancouver:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2020 Oct 26]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

7. van der Wijden, R. (author). Preference-driven demonstrations ranking for inverse reinforcement learning.

Degree: 2016, Delft University of Technology

New flexible teaching methods for robotics are needed to automate repetitive tasks that are currently still done by humans. For limited batch sizes, it is… (more)

Subjects/Keywords: robotics; reinforcement learning; preference learning; inverse reinforcement learning

APA (6th Edition):

van der Wijden, R. (2016). Preference-driven demonstrations ranking for inverse reinforcement learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98

Chicago Manual of Style (16th Edition):

van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Masters Thesis, Delft University of Technology. Accessed October 26, 2020. http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.

MLA Handbook (7th Edition):

van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Web. 26 Oct 2020.

Vancouver:

van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Internet] [Masters thesis]. Delft University of Technology; 2016. [cited 2020 Oct 26]. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.

Council of Science Editors:

van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Masters Thesis]. Delft University of Technology; 2016. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98


University of Illinois – Chicago

8. Tirinzoni, Andrea. Adversarial Inverse Reinforcement Learning with Changing Dynamics.

Degree: 2017, University of Illinois – Chicago

 Most work on inverse reinforcement learning, the problem of recovering the unknown reward function being optimized by a decision-making agent, has focused on cases where… (more)

Subjects/Keywords: Machine Learning; Inverse Reinforcement Learning; Reinforcement Learning; Adversarial Prediction; Markov Decision Process; Imitation Learning

APA (6th Edition):

Tirinzoni, A. (2017). Adversarial Inverse Reinforcement Learning with Changing Dynamics. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Thesis, University of Illinois – Chicago. Accessed October 26, 2020. http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Web. 26 Oct 2020.

Vancouver:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

9. Cheng, Tien-yu. Inverse Reinforcement Learning based on Critical State.

Degree: Master, Electrical Engineering, 2014, NSYSU

Reinforcement Learning (RL) lets an agent learn by interacting with a dynamic environment. One fundamental assumption of existing RL algorithms is that the reward function, the… (more)

Subjects/Keywords: reward feature construction; Apprenticeship Learning; Inverse Reinforcement learning; reward function; Reinforcement learning

APA (6th Edition):

Cheng, T. (2014). Inverse Reinforcement Learning based on Critical State. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Thesis, NSYSU. Accessed October 26, 2020. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Web. 26 Oct 2020.

Vancouver:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Internet] [Thesis]. NSYSU; 2014. [cited 2020 Oct 26]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Thesis]. NSYSU; 2014. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

10. Chen, Xiangli. Robust Structured Prediction for Process Data.

Degree: 2017, University of Illinois – Chicago

 Processes involve a series of actions performed to achieve a particular result. Developing prediction models for process data is important for many real problems such… (more)

Subjects/Keywords: Structured Prediction; Optimal Control; Reinforcement Learning; Inverse Reinforcement Learning; Imitation Learning; Regression; Covariate Shift

APA (6th Edition):

Chen, X. (2017). Robust Structured Prediction for Process Data. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Thesis, University of Illinois – Chicago. Accessed October 26, 2020. http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Web. 26 Oct 2020.

Vancouver:

Chen X. Robust Structured Prediction for Process Data. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen X. Robust Structured Prediction for Process Data. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Southern California

11. Kalakrishnan, Mrinal. Learning objective functions for autonomous motion generation.

Degree: PhD, Computer Science, 2014, University of Southern California

 Planning and optimization methods have been widely applied to the problem of trajectory generation for autonomous robotics. The performance of such methods, however, is critically… (more)

Subjects/Keywords: robotics; machine learning; motion planning; trajectory optimization; inverse reinforcement learning; reinforcement learning; locomotion; manipulation

APA (6th Edition):

Kalakrishnan, M. (2014). Learning objective functions for autonomous motion generation. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787

Chicago Manual of Style (16th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Doctoral Dissertation, University of Southern California. Accessed October 26, 2020. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.

MLA Handbook (7th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Web. 26 Oct 2020.

Vancouver:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2020 Oct 26]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.

Council of Science Editors:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787


University of New South Wales

12. Nguyen, Hung. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.

Degree: Engineering & Information Technology, 2018, University of New South Wales

 Apprenticeship Learning (AL) uses data collected from humans on tasks to design machine-learning algorithms to imitate the skills used by humans. Such a powerful approach… (more)

Subjects/Keywords: Apprenticeship Learning; Reinforcement learning; Inverse Reinforcement Learning; Apprenticeship Bootstrapping; UAV and UGVs

APA (6th Edition):

Nguyen, H. (2018). Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. (Masters Thesis). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Masters Thesis, University of New South Wales. Accessed October 26, 2020. http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

MLA Handbook (7th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Web. 26 Oct 2020.

Vancouver:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Internet] [Masters thesis]. University of New South Wales; 2018. [cited 2020 Oct 26]. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

Council of Science Editors:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Masters Thesis]. University of New South Wales; 2018. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true


NSYSU

13. Chiang, Hsuan-yi. Action Segmentation and Learning by Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning allows agents to learn behaviors through trial and error. However, as the level of difficulty increases, the reward function of the mission also… (more)

Subjects/Keywords: Upper Confidence Bounds; Adaboost classifier; reward function; Inverse Reinforcement learning; Reinforcement learning

APA (6th Edition):

Chiang, H. (2015). Action Segmentation and Learning by Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed October 26, 2020. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Web. 26 Oct 2020.

Vancouver:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2020 Oct 26]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

14. Bogert, Kenneth Daniel. Inverse reinforcement learning for robotic applications.

Degree: 2017, University of Georgia

 Robots deployed into many real-world scenarios are expected to face situations that their designers could not anticipate. Machine learning is an effective tool for extending… (more)

Subjects/Keywords: robotics; inverse reinforcement learning; machine learning; Markov decision process

APA (6th Edition):

Bogert, K. D. (2017). Inverse reinforcement learning for robotic applications. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Web. 26 Oct 2020.

Vancouver:

Bogert KD. Inverse reinforcement learning for robotic applications. [Internet] [Thesis]. University of Georgia; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bogert KD. Inverse reinforcement learning for robotic applications. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

15. Das, Indrajit. Inverse reinforcement learning of risk-sensitive utility.

Degree: 2017, University of Georgia

 The uncertain and stochastic nature of the real world poses a challenge for autonomous cars in making decisions to ensure appropriate motion, considering the safety… (more)

Subjects/Keywords: Inverse Reinforcement Learning; One Switch Utility Functions; Entropy; Apprenticeship Learning; Markov Decision Process

APA (6th Edition):

Das, I. (2017). Inverse reinforcement learning of risk-sensitive utility. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36698

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/36698.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Web. 26 Oct 2020.

Vancouver:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Internet] [Thesis]. University of Georgia; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/36698.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36698

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

16. Jain, Vinamra. Maximum likelihood approach for model-free inverse reinforcement learning.

Degree: 2018, University of Georgia

 Preparing an intelligent system in advance to respond optimally in every possible situation is difficult. Machine learning approaches like Inverse Reinforcement Learning can help learning(more)

Subjects/Keywords: Inverse Reinforcement Learning; Maximum Likelihood Estimation; Markov Decision Process; Learning from Demonstrations

APA (6th Edition):

Jain, V. (2018). Maximum likelihood approach for model-free inverse reinforcement learning. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37796

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/37796.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Web. 26 Oct 2020.

Vancouver:

Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Internet] [Thesis]. University of Georgia; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/37796.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37796

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Oxford

17. Wulfmeier, Markus. Efficient supervision for robot learning via imitation, simulation, and adaptation.

Degree: PhD, 2018, University of Oxford

 In order to enable more widespread application of robots, we are required to reduce the human effort for the introduction of existing robotic platforms to… (more)

Subjects/Keywords: Machine learning; Robotics; Domain Adaptation; Imitation Learning; Inverse Reinforcement Learning; Mobile Robotics; Transfer Learning; Autonomous Driving

APA (6th Edition):

Wulfmeier, M. (2018). Efficient supervision for robot learning via imitation, simulation, and adaptation. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819

Chicago Manual of Style (16th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Doctoral Dissertation, University of Oxford. Accessed October 26, 2020. http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

MLA Handbook (7th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Web. 26 Oct 2020.

Vancouver:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Internet] [Doctoral dissertation]. University of Oxford; 2018. [cited 2020 Oct 26]. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

Council of Science Editors:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Doctoral Dissertation]. University of Oxford; 2018. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819


University of Plymouth

18. Loviken, Pontus. Fast online model learning for controlling complex real-world robots.

Degree: PhD, 2019, University of Plymouth

 How can real robots with many degrees of freedom - without previous knowledge of themselves or their environment - act and use the resulting observations… (more)

Subjects/Keywords: model learning; Reinforcement learning; Online learning; Goal babbling; inverse models; Micro data learning; Developmental robotics; real-world robots; sensorimotor control

APA (6th Edition):

Loviken, P. (2019). Fast online model learning for controlling complex real-world robots. (Doctoral Dissertation). University of Plymouth. Retrieved from http://hdl.handle.net/10026.1/15078

Chicago Manual of Style (16th Edition):

Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Doctoral Dissertation, University of Plymouth. Accessed October 26, 2020. http://hdl.handle.net/10026.1/15078.

MLA Handbook (7th Edition):

Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Web. 26 Oct 2020.

Vancouver:

Loviken P. Fast online model learning for controlling complex real-world robots. [Internet] [Doctoral dissertation]. University of Plymouth; 2019. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10026.1/15078.

Council of Science Editors:

Loviken P. Fast online model learning for controlling complex real-world robots. [Doctoral Dissertation]. University of Plymouth; 2019. Available from: http://hdl.handle.net/10026.1/15078

19. Chandramohan, Senthilkumar. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.

Degree: Docteur es, Informatique, 2012, Avignon

Recent progress in the field of language processing has brought significant interest to the implementation of spoken dialogue systems.… (more)

Subjects/Keywords: Simulation d'utilisateurs; Systèmes de dialogue parlé; Apprentissage par renforcement; Apprentissage par renforcement inverse; Gestion de dialogue; User simulation; Spoken dialogue systems; Reinforcement learning; Inverse reinforcement learning; Dialogue management

APA (6th Edition):

Chandramohan, S. (2012). Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. (Doctoral Dissertation). Avignon. Retrieved from http://www.theses.fr/2012AVIG0185

Chicago Manual of Style (16th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Doctoral Dissertation, Avignon. Accessed October 26, 2020. http://www.theses.fr/2012AVIG0185.

MLA Handbook (7th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Web. 26 Oct 2020.

Vancouver:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Internet] [Doctoral dissertation]. Avignon; 2012. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2012AVIG0185.

Council of Science Editors:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Doctoral Dissertation]. Avignon; 2012. Available from: http://www.theses.fr/2012AVIG0185


University of Georgia

20. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: 2018, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design needed to implement autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting

APA (6th Edition):

Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37116

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/37116.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 26 Oct 2020.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/37116.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37116

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

21. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: 2018, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design needed to implement autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting

APA (6th Edition):

Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37273

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/37273.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 26 Oct 2020.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/37273.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37273

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Virginia Tech

22. Shiraev, Dmitry Eric. Inverse Reinforcement Learning and Routing Metric Discovery.

Degree: MS, Computer Science, 2003, Virginia Tech

 Uncovering the metrics and procedures employed by an autonomous networking system is an important problem with applications in instrumentation, traffic engineering, and game-theoretic studies of… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Routing; Network Metrics

APA (6th Edition):

Shiraev, D. E. (2003). Inverse Reinforcement Learning and Routing Metric Discovery. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/34728

Chicago Manual of Style (16th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Masters Thesis, Virginia Tech. Accessed October 26, 2020. http://hdl.handle.net/10919/34728.

MLA Handbook (7th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Web. 26 Oct 2020.

Vancouver:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Internet] [Masters thesis]. Virginia Tech; 2003. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10919/34728.

Council of Science Editors:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Masters Thesis]. Virginia Tech; 2003. Available from: http://hdl.handle.net/10919/34728


Wright State University

23. Nalamothu, Abhishek. Abusive and Hate Speech Tweets Detection with Text Generation.

Degree: MS, Computer Science, 2019, Wright State University

 According to a Pew Research study, 41% of Americans have personally experienced online harassment and two-thirds of Americans have witnessed harassment in 2017. Hence, online… (more)

Subjects/Keywords: Computer Science; Text generation; Generative adversarial network; Inverse Reinforcement Learning; Online Harassment detection

APA (6th Edition):

Nalamothu, A. (2019). Abusive and Hate Speech Tweets Detection with Text Generation. (Masters Thesis). Wright State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305

Chicago Manual of Style (16th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Masters Thesis, Wright State University. Accessed October 26, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

MLA Handbook (7th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Web. 26 Oct 2020.

Vancouver:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Internet] [Masters thesis]. Wright State University; 2019. [cited 2020 Oct 26]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

Council of Science Editors:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Masters Thesis]. Wright State University; 2019. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305


University of Pennsylvania

24. Wen, Min. Reinforcement Learning With High-Level Task Specifications.

Degree: 2019, University of Pennsylvania

Reinforcement learning (RL) has been widely used, for example, in robotics, recommendation systems, and financial services. Existing RL algorithms typically optimize reward-based surrogates rather than… (more)

Subjects/Keywords: Game theory; Inverse reinforcement learning; Learning-based control; Learning from demonstration; Reinforcement learning; Temporal logic specifications; Artificial Intelligence and Robotics; Computer Sciences

APA (6th Edition):

Wen, M. (2019). Reinforcement Learning With High-Level Task Specifications. (Thesis). University of Pennsylvania. Retrieved from https://repository.upenn.edu/edissertations/3509

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Thesis, University of Pennsylvania. Accessed October 26, 2020. https://repository.upenn.edu/edissertations/3509.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Web. 26 Oct 2020.

Vancouver:

Wen M. Reinforcement Learning With High-Level Task Specifications. [Internet] [Thesis]. University of Pennsylvania; 2019. [cited 2020 Oct 26]. Available from: https://repository.upenn.edu/edissertations/3509.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wen M. Reinforcement Learning With High-Level Task Specifications. [Thesis]. University of Pennsylvania; 2019. Available from: https://repository.upenn.edu/edissertations/3509

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

25. Monfort, Mathew. Methods in Large Scale Inverse Optimal Control.

Degree: 2016, University of Illinois – Chicago

 As our technology continues to evolve, so does the complexity of the problems that we expect our systems to solve. The challenge is that these… (more)

Subjects/Keywords: machine learning; artificial intelligence; inverse optimal control; graph search; autonomous agents; reinforcement learning; path distributions; robotic control; robotics; robots; activity recognition

APA (6th Edition):

Monfort, M. (2016). Methods in Large Scale Inverse Optimal Control. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21540

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Monfort, Mathew. “Methods in Large Scale Inverse Optimal Control.” 2016. Thesis, University of Illinois – Chicago. Accessed October 26, 2020. http://hdl.handle.net/10027/21540.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Monfort, Mathew. “Methods in Large Scale Inverse Optimal Control.” 2016. Web. 26 Oct 2020.

Vancouver:

Monfort M. Methods in Large Scale Inverse Optimal Control. [Internet] [Thesis]. University of Illinois – Chicago; 2016. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10027/21540.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Monfort M. Methods in Large Scale Inverse Optimal Control. [Thesis]. University of Illinois – Chicago; 2016. Available from: http://hdl.handle.net/10027/21540

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

26. Trivedi, Maulesh. Inverse learning of robot behavior for ad-hoc teamwork.

Degree: 2017, University of Georgia

 Machine Learning and Robotics present a very intriguing combination of research in Artificial Intelligence. Inverse Reinforcement Learning (IRL) algorithms have generated a great deal of… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Markov Decision Process; Bayes Adaptive Markov Decision Process; Best Response Model; Dec MDP; Optimal Policy; Reward Function

APA (6th Edition):

Trivedi, M. (2017). Inverse learning of robot behavior for ad-hoc teamwork. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36912

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2017. Thesis, University of Georgia. Accessed October 26, 2020. http://hdl.handle.net/10724/36912.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2017. Web. 26 Oct 2020.

Vancouver:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Internet] [Thesis]. University of Georgia; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10724/36912.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36912

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

27. Johnson, Miles. Inverse optimal control for deterministic continuous-time nonlinear systems.

Degree: PhD, 4048, 2014, University of Illinois – Urbana-Champaign

Inverse optimal control is the problem of computing a cost function with respect to which observed state input trajectories are optimal. We present a new… (more)

Subjects/Keywords: optimal control; inverse reinforcement learning; inverse optimal control; apprenticeship learning; Learning from demonstration; iterative learning control

Full-text matches (fragments): the max-margin inverse reinforcement learning method of Abbeel et al.; an elastic rod whose cost is recovered from inverse optimal control; time-optimal control and the new method of inverse optimal control; the result of applying each method of inverse optimal control; observation of a physical 3D elastic rod used for inverse optimal control, where t denotes the arc length.

APA (6th Edition):

Johnson, M. (2014). Inverse optimal control for deterministic continuous-time nonlinear systems. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/46747

Chicago Manual of Style (16th Edition):

Johnson, Miles. “Inverse optimal control for deterministic continuous-time nonlinear systems.” 2014. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed October 26, 2020. http://hdl.handle.net/2142/46747.

MLA Handbook (7th Edition):

Johnson, Miles. “Inverse optimal control for deterministic continuous-time nonlinear systems.” 2014. Web. 26 Oct 2020.

Vancouver:

Johnson M. Inverse optimal control for deterministic continuous-time nonlinear systems. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2014. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2142/46747.

Council of Science Editors:

Johnson M. Inverse optimal control for deterministic continuous-time nonlinear systems. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2014. Available from: http://hdl.handle.net/2142/46747

28. Mangin, Olivier. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.

Degree: Docteur es, Informatique, 2014, Bordeaux

This thesis considers the learning of recurrent patterns in multimodal perception. It aims to develop robotic models of these faculties as observed in children,… (more)

Subjects/Keywords: Apprentissage multimodal; Acquisition du langage; Ancrage de symboles; Apprentissage de concepts; Compréhension de comportement humains; Décomposition du mouvement; Primitive motrice; Décomposition de taches; Factorisation de matrice positive; Apprentissage par renforcement inverse factorisé; Multimodal learning; Language acquisition; Symbol grounding; Concept learning; Human behavior understanding; Motion decomposition; Motion primitive; Task decomposition; Nonnegative matrix factorization; Factorial inverse reinforcement learning; Developmental robotics

APA (6th Edition):

Mangin, O. (2014). Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. (Doctoral Dissertation). Bordeaux. Retrieved from http://www.theses.fr/2014BORD0002

Chicago Manual of Style (16th Edition):

Mangin, Olivier. “Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.” 2014. Doctoral Dissertation, Bordeaux. Accessed October 26, 2020. http://www.theses.fr/2014BORD0002.

MLA Handbook (7th Edition):

Mangin, Olivier. “Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.” 2014. Web. 26 Oct 2020.

Vancouver:

Mangin O. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. [Internet] [Doctoral dissertation]. Bordeaux; 2014. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2014BORD0002.

Council of Science Editors:

Mangin O. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. [Doctoral Dissertation]. Bordeaux; 2014. Available from: http://www.theses.fr/2014BORD0002

29. -7202-287X. Quantifying grasp quality using an inverse reinforcement learning algorithm.

Degree: MSin Engineering, Mechanical Engineering, 2017, University of Texas – Austin

 This thesis considers the problem of using a learning algorithm to recognize when a mechanical gripper and sensor combination has achieved a robust grasp. Robotic… (more)

Subjects/Keywords: Robotics; Nuclear; Grasp; Grasping; Validation; Grasp validation; Radiation; Glovebox; Grasp quality; Machine learning; Inverse reinforcement learning; Learning; Algorithm; Reinforcement; Safety

Table-of-contents matches (fragments): 2.2 Non-Learning Based Grasp Validation; 2.3 Introduction to Probabilistic Methods in Robotics (2.3.1 Supervised Learning, 2.3.2 Reinforcement Learning, 2.3.3 Fuzzy Logic, 2.3.4 Machine Learning Components); 2.4 Efforts Involving Learning and Grasp Validation; 3.2 Inverse Reinforcement Learning Algorithm Design (3.2.1 Features).

APA (6th Edition):

-7202-287X. (2017). Quantifying grasp quality using an inverse reinforcement learning algorithm. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/47303

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-7202-287X. “Quantifying grasp quality using an inverse reinforcement learning algorithm.” 2017. Masters Thesis, University of Texas – Austin. Accessed October 26, 2020. http://hdl.handle.net/2152/47303.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-7202-287X. “Quantifying grasp quality using an inverse reinforcement learning algorithm.” 2017. Web. 26 Oct 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-7202-287X. Quantifying grasp quality using an inverse reinforcement learning algorithm. [Internet] [Masters thesis]. University of Texas – Austin; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2152/47303.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-7202-287X. Quantifying grasp quality using an inverse reinforcement learning algorithm. [Masters Thesis]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/47303

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


Université de Montréal

30. Ganin, Iaroslav. Natural image processing and synthesis using deep learning.

Degree: 2020, Université de Montréal

Subjects/Keywords: Apprentissage profond; Vision artificielle; Réseaux de neurones; Réseaux de neurones convolutionnels; Détections de bords; Correction du regard; Transformateurs spatiaux; Adaptation de domaine; Adversaire; Modèles génératifs; Apprentissage par renforcement; Graphisme inverse; Deep learning; Computer vision; Neural networks; Convolutional neural networks; Edge detection; Gaze correction; Spatial transformers; Domain adaptation; Adversarial; Generative models; Reinforcement learning; Inverse graphics; Applied Sciences - Artificial Intelligence / Sciences appliqués et technologie - Intelligence artificielle (UMI : 0800)

APA (6th Edition):

Ganin, I. (2020). Natural image processing and synthesis using deep learning. (Thesis). Université de Montréal. Retrieved from http://hdl.handle.net/1866/23437

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ganin, Iaroslav. “Natural image processing and synthesis using deep learning.” 2020. Thesis, Université de Montréal. Accessed October 26, 2020. http://hdl.handle.net/1866/23437.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ganin, Iaroslav. “Natural image processing and synthesis using deep learning.” 2020. Web. 26 Oct 2020.

Vancouver:

Ganin I. Natural image processing and synthesis using deep learning. [Internet] [Thesis]. Université de Montréal; 2020. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/1866/23437.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ganin I. Natural image processing and synthesis using deep learning. [Thesis]. Université de Montréal; 2020. Available from: http://hdl.handle.net/1866/23437

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
