You searched for subject:(inverse reinforcement learning). Showing records 1 – 22 of 22 total matches.

Rice University

1. Daptardar, Saurabh. The Science of Mind Reading: New Inverse Optimal Control Framework.

Degree: MS, Engineering, 2018, Rice University

Continuous control and planning by the brain remain poorly understood and pose a major challenge in the field of Neuroscience. To truly say that we… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Inverse Optimal Control; Reinforcement Learning; Optimal Control; Neuroscience

APA (6th Edition):

Daptardar, S. (2018). The Science of Mind Reading: New Inverse Optimal Control Framework. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105893

Chicago Manual of Style (16th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Masters Thesis, Rice University. Accessed November 12, 2019. http://hdl.handle.net/1911/105893.

MLA Handbook (7th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Web. 12 Nov 2019.

Vancouver:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Internet] [Masters thesis]. Rice University; 2018. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/1911/105893.

Council of Science Editors:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Masters Thesis]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105893


University of Texas – Austin

2. -8073-3276. Parameterized modular inverse reinforcement learning.

Degree: MS in Computer Sciences, Computer Science, 2015, University of Texas – Austin

Reinforcement learning and inverse reinforcement learning can be used to model and understand human behaviors. However, due to the curse of dimensionality, their use as… (more)

Subjects/Keywords: Reinforcement learning; Artificial intelligence; Inverse reinforcement learning; Modular inverse reinforcement learning; Reinforcement learning algorithms; Human navigation behaviors

APA (6th Edition):

-8073-3276. (2015). Parameterized modular inverse reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Masters Thesis, University of Texas – Austin. Accessed November 12, 2019. http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-8073-3276. “Parameterized modular inverse reinforcement learning.” 2015. Web. 12 Nov 2019.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-8073-3276. Parameterized modular inverse reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2015. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/2152/46987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-8073-3276. Parameterized modular inverse reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


NSYSU

3. Tseng, Yi-Chia. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning (RL) techniques use a reward function to correct a learning agent to solve sequential decision making problems through interactions with a dynamic environment,… (more)

Subjects/Keywords: Apprenticeship Learning; Feature weight; Inverse Reinforcement learning; Reward function; Reinforcement learning

APA (6th Edition):

Tseng, Y. (2015). An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Thesis, NSYSU. Accessed November 12, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tseng, Yi-Chia. “An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Web. 12 Nov 2019.

Vancouver:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Internet] [Thesis]. NSYSU; 2015. [cited 2019 Nov 12]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tseng Y. An Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

4. Lin, Hung-shyuan. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

It's a study on Reinforcement Learning, learning interaction of agents and dynamic environment to get reward function R, and update the policy, converge learning and… (more)

Subjects/Keywords: Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning

APA (6th Edition):

Lin, H. (2015). Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed November 12, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Web. 12 Nov 2019.

Vancouver:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2019 Nov 12]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

5. Tirinzoni, Andrea. Adversarial Inverse Reinforcement Learning with Changing Dynamics.

Degree: 2017, University of Illinois – Chicago

 Most work on inverse reinforcement learning, the problem of recovering the unknown reward function being optimized by a decision-making agent, has focused on cases where… (more)

Subjects/Keywords: Machine Learning; Inverse Reinforcement Learning; Reinforcement Learning; Adversarial Prediction; Markov Decision Process; Imitation Learning

APA (6th Edition):

Tirinzoni, A. (2017). Adversarial Inverse Reinforcement Learning with Changing Dynamics. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Thesis, University of Illinois – Chicago. Accessed November 12, 2019. http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tirinzoni, Andrea. “Adversarial Inverse Reinforcement Learning with Changing Dynamics.” 2017. Web. 12 Nov 2019.

Vancouver:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10027/22081.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tirinzoni A. Adversarial Inverse Reinforcement Learning with Changing Dynamics. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/22081

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

6. Chen, Xiangli. Robust Structured Prediction for Process Data.

Degree: 2017, University of Illinois – Chicago

 Processes involve a series of actions performed to achieve a particular result. Developing prediction models for process data is important for many real problems such… (more)

Subjects/Keywords: Structured Prediction; Optimal Control; Reinforcement Learning; Inverse Reinforcement Learning; Imitation Learning; Regression; Covariate Shift

APA (6th Edition):

Chen, X. (2017). Robust Structured Prediction for Process Data. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Thesis, University of Illinois – Chicago. Accessed November 12, 2019. http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Web. 12 Nov 2019.

Vancouver:

Chen X. Robust Structured Prediction for Process Data. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen X. Robust Structured Prediction for Process Data. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

7. Cheng, Tien-yu. Inverse Reinforcement Learning based on Critical State.

Degree: Master, Electrical Engineering, 2014, NSYSU

Reinforcement Learning (RL) makes an agent learn through interacting with a dynamic environment. One fundamental assumption of existing RL algorithms is that reward function, the… (more)

Subjects/Keywords: reward feature construction; Apprenticeship Learning; Inverse Reinforcement learning; reward function; Reinforcement learning

APA (6th Edition):

Cheng, T. (2014). Inverse Reinforcement Learning based on Critical State. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Thesis, NSYSU. Accessed November 12, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cheng, Tien-yu. “Inverse Reinforcement Learning based on Critical State.” 2014. Web. 12 Nov 2019.

Vancouver:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Internet] [Thesis]. NSYSU; 2014. [cited 2019 Nov 12]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cheng T. Inverse Reinforcement Learning based on Critical State. [Thesis]. NSYSU; 2014. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1028114-170500

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Southern California

8. Kalakrishnan, Mrinal. Learning objective functions for autonomous motion generation.

Degree: PhD, Computer Science, 2014, University of Southern California

 Planning and optimization methods have been widely applied to the problem of trajectory generation for autonomous robotics. The performance of such methods, however, is critically… (more)

Subjects/Keywords: robotics; machine learning; motion planning; trajectory optimization; inverse reinforcement learning; reinforcement learning; locomotion; manipulation

APA (6th Edition):

Kalakrishnan, M. (2014). Learning objective functions for autonomous motion generation. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3781

Chicago Manual of Style (16th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Doctoral Dissertation, University of Southern California. Accessed November 12, 2019. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3781.

MLA Handbook (7th Edition):

Kalakrishnan, Mrinal. “Learning objective functions for autonomous motion generation.” 2014. Web. 12 Nov 2019.

Vancouver:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2019 Nov 12]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3781.

Council of Science Editors:

Kalakrishnan M. Learning objective functions for autonomous motion generation. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369146/rec/3781


University of New South Wales

9. Nguyen, Hung. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.

Degree: Engineering & Information Technology, 2018, University of New South Wales

 Apprenticeship Learning (AL) uses data collected from humans on tasks to design machine-learning algorithms to imitate the skills used by humans. Such a powerful approach… (more)

Subjects/Keywords: Apprenticeship Learning; Reinforcement learning; Inverse Reinforcement Learning; Apprenticeship Bootstrapping; UAV and UGVs

APA (6th Edition):

Nguyen, H. (2018). Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. (Masters Thesis). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Masters Thesis, University of New South Wales. Accessed November 12, 2019. http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

MLA Handbook (7th Edition):

Nguyen, Hung. “Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles.” 2018. Web. 12 Nov 2019.

Vancouver:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Internet] [Masters thesis]. University of New South Wales; 2018. [cited 2019 Nov 12]. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true.

Council of Science Editors:

Nguyen H. Apprenticeship Bootstrapping: Multi-Skill Reinforcement Learning for Autonomous Unmanned Aerial Vehicles. [Masters Thesis]. University of New South Wales; 2018. Available from: http://handle.unsw.edu.au/1959.4/60412 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:52104/SOURCE02?view=true


NSYSU

10. Chiang, Hsuan-yi. Action Segmentation and Learning by Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning allows agents to learn behaviors through trial and error. However, as the level of difficulty increases, the reward function of the mission also… (more)

Subjects/Keywords: Upper Confidence Bounds; Adaboost classifier; reward function; Inverse Reinforcement learning; Reinforcement learning

APA (6th Edition):

Chiang, H. (2015). Action Segmentation and Learning by Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed November 12, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chiang, Hsuan-yi. “Action Segmentation and Learning by Inverse Reinforcement Learning.” 2015. Web. 12 Nov 2019.

Vancouver:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2019 Nov 12]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chiang H. Action Segmentation and Learning by Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0906115-151230

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

11. Das, Indrajit. Inverse reinforcement learning of risk-sensitive utility.

Degree: MS, Computer Science, 2016, University of Georgia

 The uncertain and stochastic nature of the real world poses a challenge for autonomous cars in making decisions to ensure appropriate motion, considering the safety… (more)

Subjects/Keywords: Inverse Reinforcement Learning

APA (6th Edition):

Das, I. (2016). Inverse reinforcement learning of risk-sensitive utility. (Masters Thesis). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/das_indrajit_201608_ms

Chicago Manual of Style (16th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2016. Masters Thesis, University of Georgia. Accessed November 12, 2019. http://purl.galileo.usg.edu/uga_etd/das_indrajit_201608_ms.

MLA Handbook (7th Edition):

Das, Indrajit. “Inverse reinforcement learning of risk-sensitive utility.” 2016. Web. 12 Nov 2019.

Vancouver:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Internet] [Masters thesis]. University of Georgia; 2016. [cited 2019 Nov 12]. Available from: http://purl.galileo.usg.edu/uga_etd/das_indrajit_201608_ms.

Council of Science Editors:

Das I. Inverse reinforcement learning of risk-sensitive utility. [Masters Thesis]. University of Georgia; 2016. Available from: http://purl.galileo.usg.edu/uga_etd/das_indrajit_201608_ms

12. Trivedi, Maulesh. Inverse learning of robot behavior for ad-hoc teamwork.

Degree: MS, Artificial Intelligence, 2016, University of Georgia

 Machine Learning and Robotics present a very intriguing combination of research in Artificial Intelligence. Inverse Reinforcement Learning (IRL) algorithms have generated a great deal of… (more)

Subjects/Keywords: Inverse Reinforcement Learning

APA (6th Edition):

Trivedi, M. (2016). Inverse learning of robot behavior for ad-hoc teamwork. (Masters Thesis). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/trivedi_maulesh_201608_ms

Chicago Manual of Style (16th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2016. Masters Thesis, University of Georgia. Accessed November 12, 2019. http://purl.galileo.usg.edu/uga_etd/trivedi_maulesh_201608_ms.

MLA Handbook (7th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2016. Web. 12 Nov 2019.

Vancouver:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Internet] [Masters thesis]. University of Georgia; 2016. [cited 2019 Nov 12]. Available from: http://purl.galileo.usg.edu/uga_etd/trivedi_maulesh_201608_ms.

Council of Science Editors:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Masters Thesis]. University of Georgia; 2016. Available from: http://purl.galileo.usg.edu/uga_etd/trivedi_maulesh_201608_ms


University of Oxford

13. Wulfmeier, Markus. Efficient supervision for robot learning via imitation, simulation, and adaptation.

Degree: PhD, 2018, University of Oxford

 In order to enable more widespread application of robots, we are required to reduce the human effort for the introduction of existing robotic platforms to… (more)

Subjects/Keywords: Machine learning; Robotics; Domain Adaptation; Imitation Learning; Inverse Reinforcement Learning; Mobile Robotics; Transfer Learning; Autonomous Driving

APA (6th Edition):

Wulfmeier, M. (2018). Efficient supervision for robot learning via imitation, simulation, and adaptation. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819

Chicago Manual of Style (16th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Doctoral Dissertation, University of Oxford. Accessed November 12, 2019. http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

MLA Handbook (7th Edition):

Wulfmeier, Markus. “Efficient supervision for robot learning via imitation, simulation, and adaptation.” 2018. Web. 12 Nov 2019.

Vancouver:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Internet] [Doctoral dissertation]. University of Oxford; 2018. [cited 2019 Nov 12]. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819.

Council of Science Editors:

Wulfmeier M. Efficient supervision for robot learning via imitation, simulation, and adaptation. [Doctoral Dissertation]. University of Oxford; 2018. Available from: http://ora.ox.ac.uk/objects/uuid:2b5eeb55-639a-40ae-83b7-bd01fc8fd6cc ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.757819

14. Chandramohan, Senthilkumar. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.

Degree: Docteur es, Informatique, 2012, Avignon

Recent progress in the field of language processing has generated significant interest in the implementation of spoken dialogue systems.… (more)

Subjects/Keywords: Simulation d'utilisateurs; Systèmes de dialogue parlé; Apprentissage par renforcement; Apprentissage par renforcement inverse; Gestion de dialogue; User simulation; Spoken dialogue systems; Reinforcement learning; Inverse reinforcement learning; Dialogue management

APA (6th Edition):

Chandramohan, S. (2012). Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. (Doctoral Dissertation). Avignon. Retrieved from http://www.theses.fr/2012AVIG0185

Chicago Manual of Style (16th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Doctoral Dissertation, Avignon. Accessed November 12, 2019. http://www.theses.fr/2012AVIG0185.

MLA Handbook (7th Edition):

Chandramohan, Senthilkumar. “Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?.” 2012. Web. 12 Nov 2019.

Vancouver:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Internet] [Doctoral dissertation]. Avignon; 2012. [cited 2019 Nov 12]. Available from: http://www.theses.fr/2012AVIG0185.

Council of Science Editors:

Chandramohan S. Revisiting user simulation in dialogue systems : do we still need them ? : will imitation play the role of simulation ? : Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ?. [Doctoral Dissertation]. Avignon; 2012. Available from: http://www.theses.fr/2012AVIG0185


University of Georgia

15. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: MS, Computer Science, 2017, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design of implementing autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting

APA (6th Edition):

Bhat, S. G. (2017). Learning driver preferences for freeway merging using multitask irl. (Masters Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37273

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2017. Masters Thesis, University of Georgia. Accessed November 12, 2019. http://hdl.handle.net/10724/37273.

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2017. Web. 12 Nov 2019.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Masters thesis]. University of Georgia; 2017. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10724/37273.

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Masters Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/37273


University of Georgia

16. Bhat, Sanath Govinda. Learning driver preferences for freeway merging using multitask irl.

Degree: MS, Computer Science, 2017, University of Georgia

 Most automobile manufacturers today have invested heavily in the research and design of implementing autonomy in their cars. One important and challenging problem faced by… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Hierarchical Bayesian Model; Multitask; Highway Merging; NGSIM; Likelihood Weighting

APA (6th Edition):

Bhat, S. G. (2017). Learning driver preferences for freeway merging using multitask irl. (Masters Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/37116

Chicago Manual of Style (16th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2017. Masters Thesis, University of Georgia. Accessed November 12, 2019. http://hdl.handle.net/10724/37116.

MLA Handbook (7th Edition):

Bhat, Sanath Govinda. “Learning driver preferences for freeway merging using multitask irl.” 2017. Web. 12 Nov 2019.

Vancouver:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Internet] [Masters thesis]. University of Georgia; 2017. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10724/37116.

Council of Science Editors:

Bhat SG. Learning driver preferences for freeway merging using multitask irl. [Masters Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/37116


Virginia Tech

17. Shiraev, Dmitry Eric. Inverse Reinforcement Learning and Routing Metric Discovery.

Degree: MS, Computer Science, 2003, Virginia Tech

 Uncovering the metrics and procedures employed by an autonomous networking system is an important problem with applications in instrumentation, traffic engineering, and game-theoretic studies of… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Routing; Network Metrics

APA (6th Edition):

Shiraev, D. E. (2003). Inverse Reinforcement Learning and Routing Metric Discovery. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/34728

Chicago Manual of Style (16th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Masters Thesis, Virginia Tech. Accessed November 12, 2019. http://hdl.handle.net/10919/34728.

MLA Handbook (7th Edition):

Shiraev, Dmitry Eric. “Inverse Reinforcement Learning and Routing Metric Discovery.” 2003. Web. 12 Nov 2019.

Vancouver:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Internet] [Masters thesis]. Virginia Tech; 2003. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10919/34728.

Council of Science Editors:

Shiraev DE. Inverse Reinforcement Learning and Routing Metric Discovery. [Masters Thesis]. Virginia Tech; 2003. Available from: http://hdl.handle.net/10919/34728


Wright State University

18. Nalamothu, Abhishek. Abusive and Hate Speech Tweets Detection with Text Generation.

Degree: MS, Computer Science, 2019, Wright State University

 According to a Pew Research study, 41% of Americans have personally experienced online harassment and two-thirds of Americans have witnessed harassment in 2017. Hence, online… (more)

Subjects/Keywords: Computer Science; Text generation; Generative adversarial network; Inverse Reinforcement Learning; Online Harassment detection

APA (6th Edition):

Nalamothu, A. (2019). Abusive and Hate Speech Tweets Detection with Text Generation. (Masters Thesis). Wright State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305

Chicago Manual of Style (16th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Masters Thesis, Wright State University. Accessed November 12, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

MLA Handbook (7th Edition):

Nalamothu, Abhishek. “Abusive and Hate Speech Tweets Detection with Text Generation.” 2019. Web. 12 Nov 2019.

Vancouver:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Internet] [Masters thesis]. Wright State University; 2019. [cited 2019 Nov 12]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305.

Council of Science Editors:

Nalamothu A. Abusive and Hate Speech Tweets Detection with Text Generation. [Masters Thesis]. Wright State University; 2019. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=wright1567510940365305


University of Illinois – Chicago

19. Monfort, Mathew. Methods in Large Scale Inverse Optimal Control.

Degree: 2016, University of Illinois – Chicago

 As our technology continues to evolve, so does the complexity of the problems that we expect our systems to solve. The challenge is that these… (more)

Subjects/Keywords: machine learning; artificial intelligence; inverse optimal control; graph search; autonomous agents; reinforcement learning; path distributions; robotic control; robotics; robots; activity recognition

APA (6th Edition):

Monfort, M. (2016). Methods in Large Scale Inverse Optimal Control. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21540

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Monfort, Mathew. “Methods in Large Scale Inverse Optimal Control.” 2016. Thesis, University of Illinois – Chicago. Accessed November 12, 2019. http://hdl.handle.net/10027/21540.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Monfort, Mathew. “Methods in Large Scale Inverse Optimal Control.” 2016. Web. 12 Nov 2019.

Vancouver:

Monfort M. Methods in Large Scale Inverse Optimal Control. [Internet] [Thesis]. University of Illinois – Chicago; 2016. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/10027/21540.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Monfort M. Methods in Large Scale Inverse Optimal Control. [Thesis]. University of Illinois – Chicago; 2016. Available from: http://hdl.handle.net/10027/21540

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

20. NGUYEN QUOC PHONG. AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING.

Degree: 2018, National University of Singapore

Subjects/Keywords: active learning; mutual information; inverse reinforcement learning

APA (6th Edition):

PHONG, N. Q. (2018). AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING. (Thesis). National University of Singapore. Retrieved from http://scholarbank.nus.edu.sg/handle/10635/150065

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

PHONG, NGUYEN QUOC. “AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING.” 2018. Thesis, National University of Singapore. Accessed November 12, 2019. http://scholarbank.nus.edu.sg/handle/10635/150065.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

PHONG, NGUYEN QUOC. “AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING.” 2018. Web. 12 Nov 2019.

Vancouver:

PHONG NQ. AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING. [Internet] [Thesis]. National University of Singapore; 2018. [cited 2019 Nov 12]. Available from: http://scholarbank.nus.edu.sg/handle/10635/150065.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

PHONG NQ. AN ALTERNATIVE INFORMATION-THEORETIC CRITERION FOR ACTIVE LEARNING. [Thesis]. National University of Singapore; 2018. Available from: http://scholarbank.nus.edu.sg/handle/10635/150065

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

21. Johnson, Miles. Inverse optimal control for deterministic continuous-time nonlinear systems.

Degree: PhD, 4048, 2014, University of Illinois – Urbana-Champaign

Inverse optimal control is the problem of computing a cost function with respect to which observed state input trajectories are optimal. We present a new… (more)

Subjects/Keywords: optimal control; inverse reinforcement learning; inverse optimal control; apprenticeship learning; Learning from demonstration; iterative learning control

APA (6th Edition):

Johnson, M. (2014). Inverse optimal control for deterministic continuous-time nonlinear systems. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/46747

Chicago Manual of Style (16th Edition):

Johnson, Miles. “Inverse optimal control for deterministic continuous-time nonlinear systems.” 2014. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed November 12, 2019. http://hdl.handle.net/2142/46747.

MLA Handbook (7th Edition):

Johnson, Miles. “Inverse optimal control for deterministic continuous-time nonlinear systems.” 2014. Web. 12 Nov 2019.

Vancouver:

Johnson M. Inverse optimal control for deterministic continuous-time nonlinear systems. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2014. [cited 2019 Nov 12]. Available from: http://hdl.handle.net/2142/46747.

Council of Science Editors:

Johnson M. Inverse optimal control for deterministic continuous-time nonlinear systems. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2014. Available from: http://hdl.handle.net/2142/46747

22. Mangin, Olivier. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.

Degree: Docteur es, Informatique, 2014, Bordeaux

This thesis considers the learning of recurring patterns in multimodal perception. It focuses on developing robotic models of these faculties as observed in children,… (more)

Subjects/Keywords: Apprentissage multimodal; Acquisition du langage; Ancrage de symboles; Apprentissage de concepts; Compréhension de comportement humains; Décomposition du mouvement; Primitive motrice; Décomposition de taches; Factorisation de matrice positive; Apprentissage par renforcement inverse factorisé; Multimodal learning; Language acquisition; Symbol grounding; Concept learning; Human behavior understanding; Motion decomposition; Motion primitive; Task decomposition; Nonnegative matrix factorization; Factorial inverse reinforcement learning; Developmental robotics

APA (6th Edition):

Mangin, O. (2014). Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. (Doctoral Dissertation). Bordeaux. Retrieved from http://www.theses.fr/2014BORD0002

Chicago Manual of Style (16th Edition):

Mangin, Olivier. “Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.” 2014. Doctoral Dissertation, Bordeaux. Accessed November 12, 2019. http://www.theses.fr/2014BORD0002.

MLA Handbook (7th Edition):

Mangin, Olivier. “Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words.” 2014. Web. 12 Nov 2019.

Vancouver:

Mangin O. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. [Internet] [Doctoral dissertation]. Bordeaux; 2014. [cited 2019 Nov 12]. Available from: http://www.theses.fr/2014BORD0002.

Council of Science Editors:

Mangin O. Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques : The Emergence of Multimodal Concepts : From Perceptual Motion Primitives to Grounded Acoustic Words. [Doctoral Dissertation]. Bordeaux; 2014. Available from: http://www.theses.fr/2014BORD0002
