You searched for subject:(Modular inverse reinforcement learning).
Showing records 1 – 30 of 63,500 total matches.
University of Texas – Austin
1. -8073-3276. Parameterized modular inverse reinforcement learning.
Degree: MS in Computer Sciences, Computer Science, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/46987
Subjects/Keywords: Reinforcement learning; Artificial intelligence; Inverse reinforcement learning; Modular inverse reinforcement learning; Reinforcement learning algorithms; Human navigation behaviors
APA (6th Edition):
-8073-3276. (2015). Parameterized modular inverse reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46987
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Masters Thesis, University of Texas – Austin. Accessed March 07, 2021. http://hdl.handle.net/2152/46987.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Web. 07 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-8073-3276. Parameterized modular inverse reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2152/46987.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-8073-3276. Parameterized modular inverse reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/46987
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Rice University
2. Daptardar, Saurabh. The Science of Mind Reading: New Inverse Optimal Control Framework.
Degree: MS, Engineering, 2018, Rice University
URL: http://hdl.handle.net/1911/105893
Subjects/Keywords: Inverse Reinforcement Learning; Inverse Optimal Control; Reinforcement Learning; Optimal Control; Neuroscience
APA (6th Edition):
Daptardar, S. (2018). The Science of Mind Reading: New Inverse Optimal Control Framework. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105893
Chicago Manual of Style (16th Edition):
Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Masters Thesis, Rice University. Accessed March 07, 2021. http://hdl.handle.net/1911/105893.
MLA Handbook (7th Edition):
Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Web. 07 Mar 2021.
Vancouver:
Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Internet] [Masters thesis]. Rice University; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1911/105893.
Council of Science Editors:
Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Masters Thesis]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105893
University of Illinois – Urbana-Champaign
3. Zaytsev, Andrey. Faster apprenticeship learning through inverse optimal control.
Degree: MS, Computer Science, 2017, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/99228
Subjects/Keywords: Apprenticeship learning; Inverse reinforcement learning; Inverse optimal control; Deep learning; Reinforcement learning; Machine learning
APA (6th Edition):
Zaytsev, A. (2017). Faster apprenticeship learning through inverse optimal control. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99228
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/99228.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Web. 07 Mar 2021.
Vancouver:
Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/99228.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99228
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
NSYSU
4. Tseng, Yi-Chia. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.
Degree: Master, Electrical Engineering, 2015, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716
Subjects/Keywords: Apprenticeship Learning; Feature weight; Inverse Reinforcement learning; Reward function; Reinforcement learning
APA (6th Edition):
Tseng, Y. (2015). An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Web. 07 Mar 2021.
Vancouver:
Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
NSYSU
5. Lin, Hung-shyuan. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.
Degree: Master, Electrical Engineering, 2015, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021
Subjects/Keywords: Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning
APA (6th Edition):
Lin, H. (2015). Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Web. 07 Mar 2021.
Vancouver:
Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Delft University of Technology
6. van der Wijden, R. (author). Preference-driven demonstrations ranking for inverse reinforcement learning.
Degree: 2016, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98
Subjects/Keywords: robotics; reinforcement learning; preference learning; inverse reinforcement learning
APA (6th Edition):
van der Wijden, R. (2016). Preference-driven demonstrations ranking for inverse reinforcement learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98
Chicago Manual of Style (16th Edition):
van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Masters Thesis, Delft University of Technology. Accessed March 07, 2021. http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.
MLA Handbook (7th Edition):
van der Wijden, R (author). “Preference-driven demonstrations ranking for inverse reinforcement learning.” 2016. Web. 07 Mar 2021.
Vancouver:
van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Internet] [Masters thesis]. Delft University of Technology; 2016. [cited 2021 Mar 07]. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98.
Council of Science Editors:
van der Wijden R. Preference-driven demonstrations ranking for inverse reinforcement learning. [Masters Thesis]. Delft University of Technology; 2016. Available from: http://resolver.tudelft.nl/uuid:4a85d32d-79da-4983-97d7-530c7bb1da98
Tampere University
7. Dewundara Liyanage, Ishira Uthkarshini. Reward Learning from Demonstrations for Autonomous Earthmoving.
Degree: 2020, Tampere University
URL: https://trepo.tuni.fi/handle/10024/123546
Subjects/Keywords: inverse reinforcement learning; reinforcement learning; automation; reward function; unsupervised perceptual rewards
APA (6th Edition):
Dewundara Liyanage, I. U. (2020). Reward Learning from Demonstrations for Autonomous Earthmoving. (Masters Thesis). Tampere University. Retrieved from https://trepo.tuni.fi/handle/10024/123546
Chicago Manual of Style (16th Edition):
Dewundara Liyanage, Ishira Uthkarshini. “Reward Learning from Demonstrations for Autonomous Earthmoving.” 2020. Masters Thesis, Tampere University. Accessed March 07, 2021. https://trepo.tuni.fi/handle/10024/123546.
MLA Handbook (7th Edition):
Dewundara Liyanage, Ishira Uthkarshini. “Reward Learning from Demonstrations for Autonomous Earthmoving.” 2020. Web. 07 Mar 2021.
Vancouver:
Dewundara Liyanage IU. Reward Learning from Demonstrations for Autonomous Earthmoving. [Internet] [Masters thesis]. Tampere University; 2020. [cited 2021 Mar 07]. Available from: https://trepo.tuni.fi/handle/10024/123546.
Council of Science Editors:
Dewundara Liyanage IU. Reward Learning from Demonstrations for Autonomous Earthmoving. [Masters Thesis]. Tampere University; 2020. Available from: https://trepo.tuni.fi/handle/10024/123546
University of Illinois – Chicago
8. Tirinzoni, Andrea. Adversarial Inverse Reinforcement Learning with Changing Dynamics.
Degree: 2017, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/22081
Subjects/Keywords: Machine Learning; Inverse Reinforcement Learning; Reinforcement Learning; Adversarial Prediction; Markov Decision Process; Imitation Learning
APA (6th Edition):
Tirinzoni, A. (2017). Adversarial Inverse Reinforcement Learning with Changing Dynamics. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22081
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Thesis, University of Illinois – Chicago. Accessed March 07, 2021. http://hdl.handle.net/10027/22081.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Web. 07 Mar 2021.
Vancouver:
Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10027/22081.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/22081
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
NSYSU
9. Cheng, Tien-yu. Inverse Reinforcement Learning based on Critical State.
Degree: Master, Electrical Engineering, 2014, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500
Subjects/Keywords: reward feature construction; Apprenticeship Learning; Inverse Reinforcement learning; reward function; Reinforcement learning
APA (6th Edition):
Cheng, T. (2014). Inverse Reinforcement Learning based on Critical State. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Web. 07 Mar 2021.
Vancouver:
Cheng T. Inverse Reinforcement Learning based on Critical State. [Internet] [Thesis]. NSYSU; 2014. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Cheng T. Inverse Reinforcement Learning based on Critical State. [Thesis]. NSYSU; 2014. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
University of Illinois – Chicago
10. Chen, Xiangli. Robust Structured Prediction for Process Data.
Degree: 2017, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/21987
Subjects/Keywords: Structured Prediction; Optimal Control; Reinforcement Learning; Inverse Reinforcement Learning; Imitation Learning; Regression; Covariate Shift
APA (6th Edition):
Chen, X. (2017). Robust Structured Prediction for Process Data. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21987
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Thesis, University of Illinois – Chicago. Accessed March 07, 2021. http://hdl.handle.net/10027/21987.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Web. 07 Mar 2021.
Vancouver:
Chen X. Robust Structured Prediction for Process Data. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10027/21987.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Chen X. Robust Structured Prediction for Process Data. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/21987
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
University of Southern California
11. Kalakrishnan, Mrinal. Learning objective functions for autonomous motion generation.
Degree: PhD, Computer Science, 2014, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787
Subjects/Keywords: robotics; machine learning; motion planning; trajectory optimization; inverse reinforcement learning; reinforcement learning; locomotion; manipulation
APA (6th Edition):
Kalakrishnan, M. (2014). Learning objective functions for autonomous motion generation. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787
Chicago Manual of Style (16th Edition):
Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.
MLA Handbook (7th Edition):
Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Web. 07 Mar 2021.
Vancouver:
Kalakrishnan M. Learning objective functions for autonomous motion generation. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Mar 07]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787.
Council of Science Editors:
Kalakrishnan M. Learning objective functions for autonomous motion generation. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3787
University of New South Wales
12. Nguyen, Hung. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.
Degree: Engineering & Information Technology, 2018, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true
Subjects/Keywords: Apprenticeship Learning; Reinforcement learning; Inverse Reinforcement Learning; Apprenticeship Bootstrapping; UAV and UGVs
APA (6th Edition):
Nguyen, H. (2018). Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. (Masters Thesis). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true
Chicago Manual of Style (16th Edition):
Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Masters Thesis, University of New South Wales. Accessed March 07, 2021. http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.
MLA Handbook (7th Edition):
Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Web. 07 Mar 2021.
Vancouver:
Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Internet] [Masters thesis]. University of New South Wales; 2018. [cited 2021 Mar 07]. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.
Council of Science Editors:
Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Masters Thesis]. University of New South Wales; 2018. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true
NSYSU
13. Chiang, Hsuan-yi. Action Segmentation and Learning by Inverse Reinforcement Learning.
Degree: Master, Electrical Engineering, 2015, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230
Subjects/Keywords: Upper Confidence Bounds; Adaboost classifier; reward function; Inverse Reinforcement learning; Reinforcement learning
APA (6th Edition):
Chiang, H. (2015). Action Segmentation and Learning by Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Web. 07 Mar 2021.
Vancouver:
Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
University of Georgia
14. Bogert, Kenneth Daniel. Inverse reinforcement learning for robotic applications.
Degree: 2017, University of Georgia
URL: http://hdl.handle.net/10724/36625
Subjects/Keywords: robotics; inverse reinforcement learning; machine learning; Markov decision process
APA (6th Edition):
Bogert, K. D. (2017). Inverse reinforcement learning for robotic applications. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36625
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/36625.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Web. 07 Mar 2021.
Vancouver:
Bogert KD. Inverse reinforcement learning for robotic applications. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/36625.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bogert KD. Inverse reinforcement learning for robotic applications. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36625
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
15. Paraskevopoulos, Vasileios. Design of optimal neural network control strategies with minimal a priori knowledge.
Degree: PhD, 2000, University of Sussex
URL: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189
Subjects/Keywords: 629.8; Reinforcement learning; Real time; Modular
APA (6th Edition):
Paraskevopoulos, V. (2000). Design of optimal neural network control strategies with minimal a priori knowledge. (Doctoral Dissertation). University of Sussex. Retrieved from https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189
Chicago Manual of Style (16th Edition):
Paraskevopoulos, Vasileios. “Design of optimal neural network control strategies with minimal a priori knowledge.” 2000. Doctoral Dissertation, University of Sussex. Accessed March 07, 2021. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.
MLA Handbook (7th Edition):
Paraskevopoulos, Vasileios. “Design of optimal neural network control strategies with minimal a priori knowledge.” 2000. Web. 07 Mar 2021.
Vancouver:
Paraskevopoulos V. Design of optimal neural network control strategies with minimal a priori knowledge. [Internet] [Doctoral dissertation]. University of Sussex; 2000. [cited 2021 Mar 07]. Available from: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.
Council of Science Editors:
Paraskevopoulos V. Design of optimal neural network control strategies with minimal a priori knowledge. [Doctoral Dissertation]. University of Sussex; 2000. Available from: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189
University of Georgia
16. Das, Indrajit. Inverse reinforcement learning of risk-sensitive utility.
Degree: 2017, University of Georgia
URL: http://hdl.handle.net/10724/36698
Subjects/Keywords: Inverse Reinforcement Learning; One Switch Utility Functions; Entropy; Apprenticeship Learning; Markov Decision Process
APA (6th Edition):
Das, I. (2017). Inverse reinforcement learning of risk-sensitive utility. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36698
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/36698.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2017. Web. 07 Mar 2021.
Vancouver:
Das I. Inverse reinforcement learning of risk-sensitive utility. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/36698.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Das I. Inverse reinforcement learning of risk-sensitive utility. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36698
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
University of Georgia
17. Jain, Vinamra. Maximum likelihood approach for model-free inverse reinforcement learning.
Degree: 2018, University of Georgia
URL: http://hdl.handle.net/10724/37796
Subjects/Keywords: Inverse Reinforcement Learning; Maximum Likelihood Estimation; Markov Decision Process; Learning from Demonstrations
APA (6th Edition):
Jain, V. (2018). Maximum likelihood approach for model-free inverse reinforcement learning. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37796
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37796.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Jain, Vinamra. “Maximum likelihood approach for model-free inverse reinforcement learning.” 2018. Web. 07 Mar 2021.
Vancouver:
Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37796.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Jain V. Maximum likelihood approach for model-free inverse reinforcement learning. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37796
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
NSYSU
18. Chen, Yan-heng. Analysis of Another Left Shift Binary GCD Algorithm.
Degree: Master, Computer Science and Engineering, 2009, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741
Subjects/Keywords: Self-test; Modular inverse; GCD
APA (6th Edition):
Chen, Y. (2009). Analysis of Another Left Shift Binary GCD Algorithm. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Chen, Yan-heng. “Analysis of Another Left Shift Binary GCD Algorithm.” 2009. Thesis, NSYSU. Accessed March 07, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Chen, Yan-heng. “Analysis of Another Left Shift Binary GCD Algorithm.” 2009. Web. 07 Mar 2021.
Vancouver:
Chen Y. Analysis of Another Left Shift Binary GCD Algorithm. [Internet] [Thesis]. NSYSU; 2009. [cited 2021 Mar 07]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Chen Y. Analysis of Another Left Shift Binary GCD Algorithm. [Thesis]. NSYSU; 2009. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0714109-121741
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
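Record 18 is the one hit above that matches the query's “modular inverse” reading rather than “inverse reinforcement learning”: it analyzes a left-shift binary GCD algorithm, the kind of routine used to compute modular inverses. As a quick illustration of the operation itself, here is a minimal sketch using the classic extended Euclidean algorithm (not the thesis's left-shift binary variant):

```python
def modular_inverse(a: int, m: int) -> int:
    """Return x such that (a * x) % m == 1, via the extended Euclidean algorithm."""
    old_r, r = a % m, m   # remainders; invariant: old_s * a == old_r (mod m)
    old_s, s = 1, 0       # running Bezout coefficient for a
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:        # gcd(a, m) != 1 means no inverse exists
        raise ValueError(f"{a} is not invertible modulo {m} (gcd = {old_r})")
    return old_s % m

assert (3 * modular_inverse(3, 7)) % 7 == 1   # 3^-1 mod 7 == 5
```

A binary GCD variant replaces the division step with shifts and subtractions; analyzing that trade-off is the subject of the thesis.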
University of Oxford
19. Wulfmeier, Markus. Efficient supervision for robot learning via imitation, simulation, and adaptation.
Degree: PhD, 2018, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819
Subjects/Keywords: 006.3; Machine learning; Robotics; Domain Adaptation; Imitation Learning; Inverse Reinforcement Learning; Mobile Robotics; Transfer Learning; Autonomous Driving
APA (6th Edition):
Wulfmeier, M. (2018). Efficient supervision for robot learning via imitation, simulation, and adaptation. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819
Chicago Manual of Style (16th Edition):
Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Doctoral Dissertation, University of Oxford. Accessed March 07, 2021. http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.
MLA Handbook (7th Edition):
Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Web. 07 Mar 2021.
Vancouver:
Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Internet] [Doctoral dissertation]. University of Oxford; 2018. [cited 2021 Mar 07]. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.
Council of Science Editors:
Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Doctoral Dissertation]. University of Oxford; 2018. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819
University of Plymouth
20. Loviken, Pontus. Fast online model learning for controlling complex real-world robots.
Degree: PhD, 2019, University of Plymouth
URL: http://hdl.handle.net/10026.1/15078
Subjects/Keywords: model learning; Reinforcement learning; Online learning; Goal babbling; inverse models; Micro data learning; Developmental robotics; real-world robots; sensorimotor control
APA (6th Edition):
Loviken, P. (2019). Fast online model learning for controlling complex real-world robots. (Doctoral Dissertation). University of Plymouth. Retrieved from http://hdl.handle.net/10026.1/15078
Chicago Manual of Style (16th Edition):
Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Doctoral Dissertation, University of Plymouth. Accessed March 07, 2021. http://hdl.handle.net/10026.1/15078.
MLA Handbook (7th Edition):
Loviken, Pontus. “Fast online model learning for controlling complex real-world robots.” 2019. Web. 07 Mar 2021.
Vancouver:
Loviken P. Fast online model learning for controlling complex real-world robots. [Internet] [Doctoral dissertation]. University of Plymouth; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10026.1/15078.
Council of Science Editors:
Loviken P. Fast online model learning for controlling complex real-world robots. [Doctoral Dissertation]. University of Plymouth; 2019. Available from: http://hdl.handle.net/10026.1/15078
21. Chandramohan, Senthilkumar. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.
Degree: Docteur es, Informatique, 2012, Avignon
URL: http://www.theses.fr/2012AVIG0185
Subjects/Keywords: User simulation; Spoken dialogue systems; Reinforcement learning; Inverse reinforcement learning; Dialogue management
APA (6th Edition):
Chandramohan, S. (2012). Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. (Doctoral Dissertation). Avignon. Retrieved from http://www.theses.fr/2012AVIG0185
Chicago Manual of Style (16th Edition):
Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Doctoral Dissertation, Avignon. Accessed March 07, 2021. http://www.theses.fr/2012AVIG0185.
MLA Handbook (7th Edition):
Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Web. 07 Mar 2021.
Vancouver:
Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Internet] [Doctoral dissertation]. Avignon; 2012. [cited 2021 Mar 07]. Available from: http://www.theses.fr/2012AVIG0185.
Council of Science Editors:
Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Doctoral Dissertation]. Avignon; 2012. Available from: http://www.theses.fr/2012AVIG0185
University of Georgia
22. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.
Degree: 2018, University of Georgia
URL: http://hdl.handle.net/10724/37116
Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting
APA (6th Edition):
Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37116
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37116.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 07 Mar 2021.
Vancouver:
Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37116.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37116
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
University of Georgia
23. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.
Degree: 2018, University of Georgia
URL: http://hdl.handle.net/10724/37273
Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting
APA (6th Edition):
Bhat, S. G. (2018). Learning driver preferences for freeway merging using multitask irl. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37273
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/37273.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2018. Web. 07 Mar 2021.
Vancouver:
Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Thesis]. University of Georgia; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/37273.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Thesis]. University of Georgia; 2018. Available from: http://hdl.handle.net/10724/37273
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Virginia Tech
24. Shiraev, Dmitry Eric. Inverse Reinforcement Learning and Routing Metric Discovery.
Degree: MS, Computer Science, 2003, Virginia Tech
URL: http://hdl.handle.net/10919/34728
Subjects/Keywords: Inverse Reinforcement Learning; Routing; Network Metrics
APA (6th Edition):
Shiraev, D. E. (2003). Inverse Reinforcement Learning and Routing Metric Discovery. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/34728
Chicago Manual of Style (16th Edition):
Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Masters Thesis, Virginia Tech. Accessed March 07, 2021. http://hdl.handle.net/10919/34728.
MLA Handbook (7th Edition):
Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Web. 07 Mar 2021.
Vancouver:
Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Internet] [Masters thesis]. Virginia Tech; 2003. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10919/34728.
Council of Science Editors:
Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Masters Thesis]. Virginia Tech; 2003. Available from: http://hdl.handle.net/10919/34728
Wright State University
25. Nalamothu, Abhishek. Abusive and Hate Speech Tweets Detection with Text Generation.
Degree: MS, Computer Science, 2019, Wright State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305
Subjects/Keywords: Computer Science; Text generation; Generative adversarial network; Inverse Reinforcement Learning; Online Harassment detection
APA (6th Edition):
Nalamothu, A. (2019). Abusive and Hate Speech Tweets Detection with Text Generation. (Masters Thesis). Wright State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305
Chicago Manual of Style (16th Edition):
Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Masters Thesis, Wright State University. Accessed March 07, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.
MLA Handbook (7th Edition):
Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Web. 07 Mar 2021.
Vancouver:
Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Internet] [Masters thesis]. Wright State University; 2019. [cited 2021 Mar 07]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.
Council of Science Editors:
Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Masters Thesis]. Wright State University; 2019. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305
University of Pennsylvania
26. Wen, Min. Reinforcement Learning With High-Level Task Specifications.
Degree: 2019, University of Pennsylvania
URL: https://repository.upenn.edu/edissertations/3509
Subjects/Keywords: Game theory; Inverse reinforcement learning; Learning-based control; Learning from demonstration; Reinforcement learning; Temporal logic specifications; Artificial Intelligence and Robotics; Computer Sciences
APA (6th Edition):
Wen, M. (2019). Reinforcement Learning With High-Level Task Specifications. (Thesis). University of Pennsylvania. Retrieved from https://repository.upenn.edu/edissertations/3509
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Thesis, University of Pennsylvania. Accessed March 07, 2021. https://repository.upenn.edu/edissertations/3509.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Wen, Min. “Reinforcement Learning With High-Level Task Specifications.” 2019. Web. 07 Mar 2021.
Vancouver:
Wen M. Reinforcement Learning With High-Level Task Specifications. [Internet] [Thesis]. University of Pennsylvania; 2019. [cited 2021 Mar 07]. Available from: https://repository.upenn.edu/edissertations/3509.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Wen M. Reinforcement Learning With High-Level Task Specifications. [Thesis]. University of Pennsylvania; 2019. Available from: https://repository.upenn.edu/edissertations/3509
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Oregon State University
27. Proper, Scott. Scaling multiagent reinforcement learning.
Degree: PhD, Computer Science, 2009, Oregon State University
URL: http://hdl.handle.net/1957/13662
Subjects/Keywords: Reinforcement learning
APA (6th Edition):
Proper, S. (2009). Scaling multiagent reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/13662
Chicago Manual of Style (16th Edition):
Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Doctoral Dissertation, Oregon State University. Accessed March 07, 2021. http://hdl.handle.net/1957/13662.
MLA Handbook (7th Edition):
Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Web. 07 Mar 2021.
Vancouver:
Proper S. Scaling multiagent reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 2009. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1957/13662.
Council of Science Editors:
Proper S. Scaling multiagent reinforcement learning. [Doctoral Dissertation]. Oregon State University; 2009. Available from: http://hdl.handle.net/1957/13662
Oregon State University
28. Mehta, Neville. Hierarchical structure discovery and transfer in sequential decision problems.
Degree: PhD, Computer Science, 2011, Oregon State University
URL: http://hdl.handle.net/1957/25199
Subjects/Keywords: hierarchical reinforcement learning; Reinforcement learning
APA (6th Edition):
Mehta, N. (2011). Hierarchical structure discovery and transfer in sequential decision problems. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/25199
Chicago Manual of Style (16th Edition):
Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Doctoral Dissertation, Oregon State University. Accessed March 07, 2021. http://hdl.handle.net/1957/25199.
MLA Handbook (7th Edition):
Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Web. 07 Mar 2021.
Vancouver:
Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Internet] [Doctoral dissertation]. Oregon State University; 2011. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1957/25199.
Council of Science Editors:
Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Doctoral Dissertation]. Oregon State University; 2011. Available from: http://hdl.handle.net/1957/25199
29. Heikkilä, Filip. Autonomous Mapping of Unknown Environments Using a UAV.
Degree: Chalmers tekniska högskola / Institutionen för matematiska vetenskaper, 2020, Chalmers University of Technology
URL: http://hdl.handle.net/20.500.12380/300894
Subjects/Keywords: Deep reinforcement learning; autonomous exploration and navigation; feature extraction; object detection; voxel map; UAV; modular framework.
APA (6th Edition):
Heikkilä, F. (2020). Autonomous Mapping of Unknown Environments Using a UAV. (Thesis). Chalmers University of Technology. Retrieved from http://hdl.handle.net/20.500.12380/300894
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV.” 2020. Thesis, Chalmers University of Technology. Accessed March 07, 2021. http://hdl.handle.net/20.500.12380/300894.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV.” 2020. Web. 07 Mar 2021.
Vancouver:
Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV. [Internet] [Thesis]. Chalmers University of Technology; 2020. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/20.500.12380/300894.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV. [Thesis]. Chalmers University of Technology; 2020. Available from: http://hdl.handle.net/20.500.12380/300894
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
30. Zhang, Ruohan. Action selection in modular reinforcement learning.
Degree: MS in Computer Sciences, Computer Sciences, 2014, University of Texas – Austin
URL: http://hdl.handle.net/2152/25916
Subjects/Keywords: Modular reinforcement learning; Action selection; Module weight
Abstract excerpts: “…in a RL problem with large state space. We propose to take a modular reinforcement learning…” “…introduces a test domain, and demonstrates our modular reinforcement learning algorithm. In Chapter…” “…Modular reinforcement learning [7, 10, 12, 20] decomposes original RL problem into…” “…results suggest modular reinforcement learning might be a promising approach to curse of dimensionality problem. A close relative to modular reinforcement learning is hierarchical…”
APA (6th Edition):
Zhang, R. (2014). Action selection in modular reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25916
Chicago Manual of Style (16th Edition):
Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Masters Thesis, University of Texas – Austin. Accessed March 07, 2021. http://hdl.handle.net/2152/25916.
MLA Handbook (7th Edition):
Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Web. 07 Mar 2021.
Vancouver:
Zhang R. Action selection in modular reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2014. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2152/25916.
Council of Science Editors:
Zhang R. Action selection in modular reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/25916
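The abstract excerpts under record 30 describe decomposing an RL problem with a large state space into modules and then choosing a single action from the modules' outputs via module weights. As a rough sketch of that general idea only (weighted greatest-mass arbitration over per-module Q-values; the module set, weights, and Q-tables below are hypothetical, not Zhang's actual algorithm):

```python
import numpy as np

def modular_action_selection(q_tables, weights, state):
    """Pick the action maximizing the weight-scaled sum of module Q-values.

    q_tables: one dict per module, mapping state -> array of per-action Q-values
    weights:  one scalar per module (cf. the "Module weight" thesis keyword)
    """
    # Greatest-mass arbitration: each module votes with its Q-values, scaled.
    combined = sum(w * q[state] for q, w in zip(q_tables, weights))
    return int(np.argmax(combined))

# Hypothetical two-module example with three actions in state "s0".
q_tables = [{"s0": np.array([1.0, 0.2, 0.0])},   # e.g., a "reach goal" module
            {"s0": np.array([0.0, 0.5, 0.9])}]   # e.g., an "avoid obstacle" module
print(modular_action_selection(q_tables, [0.5, 1.0], "s0"))  # -> 2
```

How the weights are obtained, and whether summing module values is well-founded at all, is precisely the action-selection question such theses examine.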