You searched for subject:(Modular inverse reinforcement learning). Showing records 1 – 30 of 63500 total matches.

University of Texas – Austin

1. -8073-3276. Parameterized modular inverse reinforcement learning.

Degree: MS in Computer Sciences, Computer Science, 2015, University of Texas – Austin

Reinforcement learning and inverse reinforcement learning can be used to model and understand human behaviors. However, due to the curse of dimensionality, their use as… (more)

Subjects/Keywords: Reinforcement learning; Artificial intelligence; Inverse reinforcement learning; Modular inverse reinforcement learning; Reinforcement learning algorithms; Human navigation behaviors


APA (6th Edition):

-8073-3276. (2015). Parameterized modular inverse reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Masters Thesis, University of Texas – Austin. Accessed March 07, 2021. http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Web. 07 Mar 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-8073-3276. Parameterized modular inverse reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-8073-3276. Parameterized modular inverse reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


Rice University

2. Daptardar, Saurabh. The Science of Mind Reading: New Inverse Optimal Control Framework.

Degree: MS, Engineering, 2018, Rice University

Continuous control and planning by the brain remain poorly understood and are a major challenge in the field of Neuroscience. To truly say that we… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Inverse Optimal Control; Reinforcement Learning; Optimal Control; Neuroscience


APA (6th Edition):

Daptardar, S. (2018). The Science of Mind Reading: New Inverse Optimal Control Framework. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105893

Chicago Manual of Style (16th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Masters Thesis, Rice University. Accessed March 07, 2021. http://hdl.handle.net/1911/105893.

MLA Handbook (7th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Web. 07 Mar 2021.

Vancouver:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Internet] [Masters thesis]. Rice University; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1911/105893.

Council of Science Editors:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Masters Thesis]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105893


University of Illinois – Urbana-Champaign

3. Zaytsev, Andrey. Faster apprenticeship learning through inverse optimal control.

Degree: MS, Computer Science, 2017, University of Illinois – Urbana-Champaign

 One of the fundamental problems of artificial intelligence is learning how to behave optimally. With applications ranging from self-driving cars to medical devices, this task… (more)

Subjects/Keywords: Apprenticeship learning; Inverse reinforcement learning; Inverse optimal control; Deep learning; Reinforcement learning; Machine learning


APA (6th Edition):

Zaytsev, A. (2017). Faster apprenticeship learning through inverse optimal control. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Web. 07 Mar 2021.

Vancouver:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

4. Tseng, Yi-Chia. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning (RL) techniques use a reward function to correct a learning agent to solve sequential decision making problems through interactions with a dynamic environment,… (more)
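The abstract above describes the standard RL loop, in which a reward signal corrects a learning agent through interaction with a dynamic environment. A minimal, generic sketch of that loop (tabular Q-learning on a tiny hypothetical chain environment, not the method of this thesis) might look like this:

```python
import random

# Hypothetical 5-state chain: the agent starts at state 0 and is rewarded only at the goal.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]
alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # The reward "corrects" the agent: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, the "step right" action dominates along the chain
```

Inverse reinforcement learning, the topic of this record, works in the opposite direction: it assumes such behavior is observed and tries to recover the reward function that explains it.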

Subjects/Keywords: Apprenticeship Learning; Feature weight; Inverse Reinforcement learning; Reward function; Reinforcement learning


APA (6th Edition):

Tseng, Y. (2015). An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Web. 07 Mar 2021.

Vancouver:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

5. Lin, Hung-shyuan. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

It's a study on Reinforcement Learning, learning interaction of agents and dynamic environment to get reward function R, and update the policy, converge learning and… (more)

Subjects/Keywords: Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning


APA (6th Edition):

Lin, H. (2015). Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Web. 07 Mar 2021.

Vancouver:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

6. van der Wijden, R. (author). Preference-driven demonstrations ranking for inverse reinforcement learning.

Degree: 2016, Delft University of Technology

New flexible teaching methods for robotics are needed to automate repetitive tasks that are currently still done by humans. For limited batch sizes, it is… (more)

Subjects/Keywords: robotics; reinforcement learning; preference learning; inverse reinforcement learning


APA (6th Edition):

van der Wijden, R. (2016). Preference-driven demonstrations ranking for inverse reinforcement learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98

Chicago Manual of Style (16th Edition):

van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Masters Thesis, Delft University of Technology. Accessed March 07, 2021. http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.

MLA Handbook (7th Edition):

van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Web. 07 Mar 2021.

Vancouver:

van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Internet] [Masters thesis]. Delft University of Technology; 2016. [cited 2021 Mar 07]. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.

Council of Science Editors:

van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Masters Thesis]. Delft University of Technology; 2016. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98


Tampere University

7. Dewundara Liyanage, Ishira Uthkarshini. Reward Learning from Demonstrations for Autonomous Earthmoving .

Degree: 2020, Tampere University

 With the increasing complexity of specific tasks, automation engineers look at various machine learning methods as opposed to methods that require laborious task specifications. Imitation… (more)

Subjects/Keywords: inverse reinforcement learning ; reinforcement learning ; automation ; reward function ; unsupervised perceptual rewards


APA (6th Edition):

Dewundara Liyanage, I. U. (2020). Reward Learning from Demonstrations for Autonomous Earthmoving . (Masters Thesis). Tampere University. Retrieved from https://trepo.tuni.fi/handle/10024/123546

Chicago Manual of Style (16th Edition):

Dewundara Liyanage, Ishira Uthkarshini. “Reward Learning from Demonstrations for Autonomous Earthmoving .” 2020. Masters Thesis, Tampere University. Accessed March 07, 2021. https://trepo.tuni.fi/handle/10024/123546.

MLA Handbook (7th Edition):

Dewundara Liyanage, Ishira Uthkarshini. “Reward Learning from Demonstrations for Autonomous Earthmoving .” 2020. Web. 07 Mar 2021.

Vancouver:

Dewundara Liyanage IU. Reward Learning from Demonstrations for Autonomous Earthmoving . [Internet] [Masters thesis]. Tampere University; 2020. [cited 2021 Mar 07]. Available from: https://trepo.tuni.fi/handle/10024/123546.

Council of Science Editors:

Dewundara Liyanage IU. Reward Learning from Demonstrations for Autonomous Earthmoving . [Masters Thesis]. Tampere University; 2020. Available from: https://trepo.tuni.fi/handle/10024/123546


University of Illinois – Chicago

8. Tirinzoni, Andrea. Adversarial Inverse Reinforcement Learning with Changing Dynamics.

Degree: 2017, University of Illinois – Chicago

 Most work on inverse reinforcement learning, the problem of recovering the unknown reward function being optimized by a decision-making agent, has focused on cases where… (more)

Subjects/Keywords: Machine Learning; Inverse Reinforcement Learning; Reinforcement Learning; Adversarial Prediction; Markov Decision Process; Imitation Learning


APA (6th Edition):

Tirinzoni, A. (2017). Adversarial Inverse Reinforcement Learning with Changing Dynamics. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Thesis, University of Illinois – Chicago. Accessed March 07, 2021. http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Web. 07 Mar 2021.

Vancouver:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

9. Cheng, Tien-yu. Inverse Reinforcement Learning based on Critical State.

Degree: Master, Electrical Engineering, 2014, NSYSU

Reinforcement Learning (RL) makes an agent learn through interacting with a dynamic environment. One fundamental assumption of existing RL algorithms is that reward function, the… (more)

Subjects/Keywords: reward feature construction; Apprenticeship Learning; Inverse Reinforcement learning; reward function; Reinforcement learning


APA (6th Edition):

Cheng, T. (2014). Inverse Reinforcement Learning based on Critical State. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Web. 07 Mar 2021.

Vancouver:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Internet] [Thesis]. NSYSU; 2014. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Thesis]. NSYSU; 2014. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

10. Chen, Xiangli. Robust Structured Prediction for Process Data.

Degree: 2017, University of Illinois – Chicago

 Processes involve a series of actions performed to achieve a particular result. Developing prediction models for process data is important for many real problems such… (more)

Subjects/Keywords: Structured Prediction; Optimal Control; Reinforcement Learning; Inverse Reinforcement Learning; Imitation Learning; Regression; Covariate Shift


APA (6th Edition):

Chen, X. (2017). Robust Structured Prediction for Process Data. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Thesis, University of Illinois – Chicago. Accessed March 07, 2021. http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Web. 07 Mar 2021.

Vancouver:

Chen X. Robust Structured Prediction for Process Data. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen X. Robust Structured Prediction for Process Data. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Southern California

11. Kalakrishnan, Mrinal. Learning objective functions for autonomous motion generation.

Degree: PhD, Computer Science, 2014, University of Southern California

 Planning and optimization methods have been widely applied to the problem of trajectory generation for autonomous robotics. The performance of such methods, however, is critically… (more)

Subjects/Keywords: robotics; machine learning; motion planning; trajectory optimization; inverse reinforcement learning; reinforcement learning; locomotion; manipulation


APA (6th Edition):

Kalakrishnan, M. (2014). Learning objective functions for autonomous motion generation. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787

Chicago Manual of Style (16th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.

MLA Handbook (7th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Web. 07 Mar 2021.

Vancouver:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Mar 07]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.

Council of Science Editors:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787


University of New South Wales

12. Nguyen, Hung. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.

Degree: Engineering & Information Technology, 2018, University of New South Wales

 Apprenticeship Learning (AL) uses data collected from humans on tasks to design machine-learning algorithms to imitate the skills used by humans. Such a powerful approach… (more)

Subjects/Keywords: Apprenticeship Learning; Reinforcement learning; Inverse Reinforcement Learning; Apprenticeship Bootstrapping; UAV and UGVs


APA (6th Edition):

Nguyen, H. (2018). Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. (Masters Thesis). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Masters Thesis, University of New South Wales. Accessed March 07, 2021. http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

MLA Handbook (7th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Web. 07 Mar 2021.

Vancouver:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Internet] [Masters thesis]. University of New South Wales; 2018. [cited 2021 Mar 07]. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

Council of Science Editors:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Masters Thesis]. University of New South Wales; 2018. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true


NSYSU

13. Chiang, Hsuan-yi. Action Segmentation and Learning by Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning allows agents to learn behaviors through trial and error. However, as the level of difficulty increases, the reward function of the mission also… (more)

Subjects/Keywords: Upper Confidence Bounds; Adaboost classifier; reward function; Inverse Reinforcement learning; Reinforcement learning


APA (6th Edition):

Chiang, H. (2015). Action Segmentation and Learning by Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Web. 07 Mar 2021.

Vancouver:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

14. Bogert, Kenneth Daniel. Inverse reinforcement learning for robotic applications.

Degree: 2017, University of Georgia

 Robots deployed into many real-world scenarios are expected to face situations that their designers could not anticipate. Machine learning is an effective tool for extending… (more)

Subjects/Keywords: robotics; inverse reinforcement learning; machine learning; Markov decision process


APA (6th Edition):

Bogert, K. D. (2017). Inverse reinforcement learning for robotic applications. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Web. 07 Mar 2021.

Vancouver:

Bogert KD. Inverse reinforcement learning for robotic applications. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bogert KD. Inverse reinforcement learning for robotic applications. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

15. Paraskevopoulos, Vasileios. Design of optimal neural network control strategies with minimal a priori knowledge.

Degree: PhD, 2000, University of Sussex

Subjects/Keywords: 629.8; Reinforcement learning; Real time; Modular


APA (6th Edition):

Paraskevopoulos, V. (2000). Design of optimal neural network control strategies with minimal a priori knowledge. (Doctoral Dissertation). University of Sussex. Retrieved from https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189

Chicago Manual of Style (16th Edition):

Paraskevopoulos, Vasileios. “Design of optimal neural network control strategies with minimal a priori knowledge.” 2000. Doctoral Dissertation, University of Sussex. Accessed March 07, 2021. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.

MLA Handbook (7th Edition):

Paraskevopoulos, Vasileios. “Design of optimal neural network control strategies with minimal a priori knowledge.” 2000. Web. 07 Mar 2021.

Vancouver:

Paraskevopoulos V. Design of optimal neural network control strategies with minimal a priori knowledge. [Internet] [Doctoral dissertation]. University of Sussex; 2000. [cited 2021 Mar 07]. Available from: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.

Council of Science Editors:

Paraskevopoulos V. Design of optimal neural network control strategies with minimal a priori knowledge. [Doctoral Dissertation]. University of Sussex; 2000. Available from: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189


University of Georgia

16. Das, Indrajit. Inverse reinforcement learning of risk-sensitive utility.

Degree: 2017, University of Georgia

 The uncertain and stochastic nature of the real world poses a challenge for autonomous cars in making decisions to ensure appropriate motion, considering the safety… (more)

Subjects/Keywords: Inverse Reinforcement Learning; One Switch Utility Functions; Entropy; Apprenticeship Learning; Markov Decision Process


APA (6th Edition):

Das, I. (2017). Inverse reinforcement learning of risk-sensitive utility. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36698

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/36698.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Web. 07 Mar 2021.

Vancouver:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/36698.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36698

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

17. Jain, Vinamra. Maximum likelihood approach for model-free inverse reinforcement learning.

Degree: 2018, University of Georgia

 Preparing an intelligent system in advance to respond optimally in every possible situation is difficult. Machine learning approaches like Inverse Reinforcement Learning can help learning(more)

Subjects/Keywords: Inverse Reinforcement Learning; Maximum Likelihood Estimation; Markov Decision Process; Learning from Demonstrations


APA (6th Edition):

Jain, V. (2018). Maximum likelihood approach for model-free inverse reinforcement learning. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37796

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37796.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Web. 07 Mar 2021.

Vancouver:

Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37796.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37796

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

18. Chen, Yan-heng. Analysis of Another Left Shift Binary GCD Algorithm.

Degree: Master, Computer Science and Engineering, 2009, NSYSU

In general, computing the modular inverse is very important in information security; many encryption/decryption and signature algorithms need to use it. In 2007,… (more)
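The abstract above refers to computing the modular inverse, i.e. finding x with a·x ≡ 1 (mod m). As a small, self-contained illustration of that operation (the classic extended Euclidean algorithm, not the left-shift binary GCD variant analyzed in the thesis):

```python
def modular_inverse(a, m):
    """Return x with (a * x) % m == 1, or raise if gcd(a, m) != 1."""
    # Extended Euclidean algorithm; invariant: old_r ≡ old_s * a (mod m).
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a has no inverse modulo m (gcd != 1)")
    return old_s % m

assert (3 * modular_inverse(3, 26)) % 26 == 1   # e.g. 3^-1 mod 26 == 9
```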

Subjects/Keywords: Self-test; Modular inverse; GCD


APA (6th Edition):

Chen, Y. (2009). Analysis of Another Left Shift Binary GCD Algorithm. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Yan-heng. “Analysis of Another Left Shift Binary GCD Algorithm.” 2009. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Yan-heng. “Analysis of Another Left Shift Binary GCD Algorithm.” 2009. Web. 07 Mar 2021.

Vancouver:

Chen Y. Analysis of Another Left Shift Binary GCD Algorithm. [Internet] [Thesis]. NSYSU; 2009. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen Y. Analysis of Another Left Shift Binary GCD Algorithm. [Thesis]. NSYSU; 2009. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Oxford

19. Wulfmeier, Markus. Efficient supervision for robot learning via imitation, simulation, and adaptation.

Degree: PhD, 2018, University of Oxford

 In order to enable more widespread application of robots, we are required to reduce the human effort for the introduction of existing robotic platforms to… (more)

Subjects/Keywords: 006.3; Machine learning; Robotics; Domain Adaptation; Imitation Learning; Inverse Reinforcement Learning; Mobile Robotics; Transfer Learning; Autonomous Driving


APA (6th Edition):

Wulfmeier, M. (2018). Efficient supervision for robot learning via imitation, simulation, and adaptation. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819

Chicago Manual of Style (16th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Doctoral Dissertation, University of Oxford. Accessed March 07, 2021. http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

MLA Handbook (7th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Web. 07 Mar 2021.

Vancouver:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Internet] [Doctoral dissertation]. University of Oxford; 2018. [cited 2021 Mar 07]. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

Council of Science Editors:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Doctoral Dissertation]. University of Oxford; 2018. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819


University of Plymouth

20. Loviken, Pontus. Fast online model learning for controlling complex real-world robots.

Degree: PhD, 2019, University of Plymouth

 How can real robots with many degrees of freedom - without previous knowledge of themselves or their environment - act and use the resulting observations… (more)

Subjects/Keywords: model learning; Reinforcement learning; Online learning; Goal babbling; inverse models; Micro data learning; Developmental robotics; real-world robots; sensorimotor control


APA (6th Edition):

Loviken, P. (2019). Fast online model learning for controlling complex real-world robots. (Doctoral Dissertation). University of Plymouth. Retrieved from http://hdl.handle.net/10026.1/15078

Chicago Manual of Style (16th Edition):

Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Doctoral Dissertation, University of Plymouth. Accessed March 07, 2021. http://hdl.handle.net/10026.1/15078.

MLA Handbook (7th Edition):

Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Web. 07 Mar 2021.

Vancouver:

Loviken P. Fast online model learning for controlling complex real-world robots. [Internet] [Doctoral dissertation]. University of Plymouth; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10026.1/15078.

Council of Science Editors:

Loviken P. Fast online model learning for controlling complex real-world robots. [Doctoral Dissertation]. University of Plymouth; 2019. Available from: http://hdl.handle.net/10026.1/15078

21. Chandramohan, Senthilkumar. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.

Degree: Docteur es, Informatique, 2012, Avignon

Recent progress in the field of language processing has brought significant interest to the implementation of spoken dialogue systems.… (more)

Subjects/Keywords: Simulation d'utilisateurs; Systèmes de dialogue parlé; Apprentissage par renforcement; Apprentissage par renforcement inverse; Gestion de dialogue; User simulation; Spoken dialogue systems; Reinforcement learning; Inverse reinforcement learning; Dialogue management


APA (6th Edition):

Chandramohan, S. (2012). Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. (Doctoral Dissertation). Avignon. Retrieved from http://www.theses.fr/2012AVIG0185

Chicago Manual of Style (16th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Doctoral Dissertation, Avignon. Accessed March 07, 2021. http://www.theses.fr/2012AVIG0185.

MLA Handbook (7th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Web. 07 Mar 2021.

Vancouver:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Internet] [Doctoral dissertation]. Avignon; 2012. [cited 2021 Mar 07]. Available from: http://www.theses.fr/2012AVIG0185.

Council of Science Editors:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Doctoral Dissertation]. Avignon; 2012. Available from: http://www.theses.fr/2012AVIG0185


University of Georgia

22. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: 2018, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design of implementing autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting


APA (6th Edition):

Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37116

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37116.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 07 Mar 2021.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37116.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37116

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

23. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: 2018, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design of implementing autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting


APA (6th Edition):

Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37273

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37273.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 07 Mar 2021.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37273.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37273

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Virginia Tech

24. Shiraev, Dmitry Eric. Inverse Reinforcement Learning and Routing Metric Discovery.

Degree: MS, Computer Science, 2003, Virginia Tech

 Uncovering the metrics and procedures employed by an autonomous networking system is an important problem with applications in instrumentation, traffic engineering, and game-theoretic studies of… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Routing; Network Metrics


APA (6th Edition):

Shiraev, D. E. (2003). Inverse Reinforcement Learning and Routing Metric Discovery. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/34728

Chicago Manual of Style (16th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Masters Thesis, Virginia Tech. Accessed March 07, 2021. http://hdl.handle.net/10919/34728.

MLA Handbook (7th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Web. 07 Mar 2021.

Vancouver:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Internet] [Masters thesis]. Virginia Tech; 2003. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10919/34728.

Council of Science Editors:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Masters Thesis]. Virginia Tech; 2003. Available from: http://hdl.handle.net/10919/34728


Wright State University

25. Nalamothu, Abhishek. Abusive and Hate Speech Tweets Detection with Text Generation.

Degree: MS, Computer Science, 2019, Wright State University

 According to a Pew Research study, 41% of Americans have personally experienced online harassment and two-thirds of Americans have witnessed harassment in 2017. Hence, online… (more)

Subjects/Keywords: Computer Science; Text generation; Generative adversarial network; Inverse Reinforcement Learning; Online Harassment detection


APA (6th Edition):

Nalamothu, A. (2019). Abusive and Hate Speech Tweets Detection with Text Generation. (Masters Thesis). Wright State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305

Chicago Manual of Style (16th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Masters Thesis, Wright State University. Accessed March 07, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

MLA Handbook (7th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Web. 07 Mar 2021.

Vancouver:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Internet] [Masters thesis]. Wright State University; 2019. [cited 2021 Mar 07]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

Council of Science Editors:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Masters Thesis]. Wright State University; 2019. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305


University of Pennsylvania

26. Wen, Min. Reinforcement Learning With High-Level Task Specifications.

Degree: 2019, University of Pennsylvania

Reinforcement learning (RL) has been widely used, for example, in robotics, recommendation systems, and financial services. Existing RL algorithms typically optimize reward-based surrogates rather than… (more)

Subjects/Keywords: Game theory; Inverse reinforcement learning; Learning-based control; Learning from demonstration; Reinforcement learning; Temporal logic specifications; Artificial Intelligence and Robotics; Computer Sciences


APA (6th Edition):

Wen, M. (2019). Reinforcement Learning With High-Level Task Specifications. (Thesis). University of Pennsylvania. Retrieved from https://repository.upenn.edu/edissertations/3509

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Thesis, University of Pennsylvania. Accessed March 07, 2021. https://repository.upenn.edu/edissertations/3509.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Web. 07 Mar 2021.

Vancouver:

Wen M. Reinforcement Learning With High-Level Task Specifications. [Internet] [Thesis]. University of Pennsylvania; 2019. [cited 2021 Mar 07]. Available from: https://repository.upenn.edu/edissertations/3509.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wen M. Reinforcement Learning With High-Level Task Specifications. [Thesis]. University of Pennsylvania; 2019. Available from: https://repository.upenn.edu/edissertations/3509

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Oregon State University

27. Proper, Scott. Scaling multiagent reinforcement learning.

Degree: PhD, Computer Science, 2009, Oregon State University

Reinforcement learning in real-world domains suffers from three curses of dimensionality: explosions in state and action spaces, and high stochasticity or "outcome space" explosion. Multiagent… (more)

Subjects/Keywords: Reinforcement learning


APA (6th Edition):

Proper, S. (2009). Scaling multiagent reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/13662

Chicago Manual of Style (16th Edition):

Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Doctoral Dissertation, Oregon State University. Accessed March 07, 2021. http://hdl.handle.net/1957/13662.

MLA Handbook (7th Edition):

Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Web. 07 Mar 2021.

Vancouver:

Proper S. Scaling multiagent reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 2009. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1957/13662.

Council of Science Editors:

Proper S. Scaling multiagent reinforcement learning. [Doctoral Dissertation]. Oregon State University; 2009. Available from: http://hdl.handle.net/1957/13662


Oregon State University

28. Mehta, Neville. Hierarchical structure discovery and transfer in sequential decision problems.

Degree: PhD, Computer Science, 2011, Oregon State University

 Acting intelligently to efficiently solve sequential decision problems requires the ability to extract hierarchical structure from the underlying domain dynamics, exploit it for optimal or… (more)

Subjects/Keywords: hierarchical reinforcement learning; Reinforcement learning


APA (6th Edition):

Mehta, N. (2011). Hierarchical structure discovery and transfer in sequential decision problems. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/25199

Chicago Manual of Style (16th Edition):

Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Doctoral Dissertation, Oregon State University. Accessed March 07, 2021. http://hdl.handle.net/1957/25199.

MLA Handbook (7th Edition):

Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Web. 07 Mar 2021.

Vancouver:

Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Internet] [Doctoral dissertation]. Oregon State University; 2011. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1957/25199.

Council of Science Editors:

Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Doctoral Dissertation]. Oregon State University; 2011. Available from: http://hdl.handle.net/1957/25199

29. Heikkilä, Filip. Autonomous Mapping of Unknown Environments Using a UAV .

Degree: Chalmers tekniska högskola / Institutionen för matematiska vetenskaper, 2020, Chalmers University of Technology

 Automatic object search in a bounded area can be accomplished using cameracarrying autonomous aerial robots. The system requires several functionalities to solve the task in… (more)

Subjects/Keywords: Deep reinforcement learning; autonomous exploration and navigation; feature extraction; object detection; voxel map; UAV; modular framework.


APA (6th Edition):

Heikkilä, F. (2020). Autonomous Mapping of Unknown Environments Using a UAV . (Thesis). Chalmers University of Technology. Retrieved from http://hdl.handle.net/20.500.12380/300894

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV .” 2020. Thesis, Chalmers University of Technology. Accessed March 07, 2021. http://hdl.handle.net/20.500.12380/300894.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV .” 2020. Web. 07 Mar 2021.

Vancouver:

Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV . [Internet] [Thesis]. Chalmers University of Technology; 2020. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/20.500.12380/300894.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV . [Thesis]. Chalmers University of Technology; 2020. Available from: http://hdl.handle.net/20.500.12380/300894

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

30. Zhang, Ruohan. Action selection in modular reinforcement learning.

Degree: MS in Computer Sciences, Computer Sciences, 2014, University of Texas – Austin

Modular reinforcement learning is an approach to resolve the curse of dimensionality problem in traditional reinforcement learning. We design and implement a modular reinforcement learning(more)

Subjects/Keywords: Modular reinforcement learning; Action selection; Module weight

Full-text matches: "…in a RL problem with large state space. We propose to take a modular reinforcement learning…"; "…Modular reinforcement learning [7, 10, 12, 20] decomposes original RL problem into…"; "…results suggest modular reinforcement learning might be a promising approach to curse of dimensionality problem. A close relative to modular reinforcement learning is hierarchical…"
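This record's keywords ("Action selection", "Module weight") and the full-text matches above concern decomposing an RL problem into modules and choosing one joint action from their preferences. A generic, hypothetical sketch of one common scheme, weighted summation of per-module Q-values, is shown below; it is not taken from the thesis itself:

```python
from typing import Dict, List

def select_action(q_values: Dict[str, List[float]],
                  weights: Dict[str, float]) -> int:
    """Pick the action whose weight-scaled sum of module Q-values is largest."""
    n_actions = len(next(iter(q_values.values())))
    combined = [
        sum(weights[m] * q_values[m][a] for m in q_values)
        for a in range(n_actions)
    ]
    return max(range(n_actions), key=lambda a: combined[a])

# Hypothetical example: two modules (avoid obstacles, reach goal) scoring 3 actions.
q = {"avoid": [0.2, -1.0, 0.5], "reach": [0.1, 0.9, 0.3]}
w = {"avoid": 1.0, "reach": 0.5}
print(select_action(q, w))  # -> 2: the best trade-off under these weights
```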


APA (6th Edition):

Zhang, R. (2014). Action selection in modular reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25916

Chicago Manual of Style (16th Edition):

Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Masters Thesis, University of Texas – Austin. Accessed March 07, 2021. http://hdl.handle.net/2152/25916.

MLA Handbook (7th Edition):

Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Web. 07 Mar 2021.

Vancouver:

Zhang R. Action selection in modular reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2014. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2152/25916.

Council of Science Editors:

Zhang R. Action selection in modular reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/25916
