
You searched for subject:(optimal learning). Showing records 1 – 30 of 151 total matches.



Penn State University

1. Ye, Jianbo. Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models.

Degree: 2018, Penn State University

 Quantitative researchers often view our world as a large collection of data generated and organized by the structures and functions of society and technology. Those… (more)

Subjects/Keywords: machine learning; optimal transport; optimization


APA (6th Edition):

Ye, J. (2018). Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models. (Thesis). Penn State University. Retrieved from https://etda.libraries.psu.edu/catalog/15191jxy198

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ye, Jianbo. “Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models.” 2018. Thesis, Penn State University. Accessed July 04, 2020. https://etda.libraries.psu.edu/catalog/15191jxy198.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ye, Jianbo. “Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models.” 2018. Web. 04 Jul 2020.

Vancouver:

Ye J. Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models. [Internet] [Thesis]. Penn State University; 2018. [cited 2020 Jul 04]. Available from: https://etda.libraries.psu.edu/catalog/15191jxy198.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ye J. Computational Modeling of Compositional and Relational Data Using Optimal Transport and Probabilistic Models. [Thesis]. Penn State University; 2018. Available from: https://etda.libraries.psu.edu/catalog/15191jxy198

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Debrecen

2. Gyarmati, András. Young Learners and Second Language Acquisition .

Degree: DE – TEK – Bölcsészettudományi Kar, 2013, University of Debrecen

 In my thesis, I will touch upon the logical problem of foreign language learning, as well as the problems of observation in child language research… (more)

Subjects/Keywords: optimal age; second language learning


APA (6th Edition):

Gyarmati, A. (2013). Young Learners and Second Language Acquisition . (Thesis). University of Debrecen. Retrieved from http://hdl.handle.net/2437/168797

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gyarmati, András. “Young Learners and Second Language Acquisition .” 2013. Thesis, University of Debrecen. Accessed July 04, 2020. http://hdl.handle.net/2437/168797.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gyarmati, András. “Young Learners and Second Language Acquisition .” 2013. Web. 04 Jul 2020.

Vancouver:

Gyarmati A. Young Learners and Second Language Acquisition . [Internet] [Thesis]. University of Debrecen; 2013. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/2437/168797.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gyarmati A. Young Learners and Second Language Acquisition . [Thesis]. University of Debrecen; 2013. Available from: http://hdl.handle.net/2437/168797

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Rice University

3. Daptardar, Saurabh. The Science of Mind Reading: New Inverse Optimal Control Framework.

Degree: MS, Engineering, 2018, Rice University

 Continuous control and planning by the brain remain poorly understood and are a major challenge in the field of neuroscience. To truly say that we… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Inverse Optimal Control; Reinforcement Learning; Optimal Control; Neuroscience


APA (6th Edition):

Daptardar, S. (2018). The Science of Mind Reading: New Inverse Optimal Control Framework. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105893

Chicago Manual of Style (16th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Masters Thesis, Rice University. Accessed July 04, 2020. http://hdl.handle.net/1911/105893.

MLA Handbook (7th Edition):

Daptardar, Saurabh. “The Science of Mind Reading: New Inverse Optimal Control Framework.” 2018. Web. 04 Jul 2020.

Vancouver:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Internet] [Masters thesis]. Rice University; 2018. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/1911/105893.

Council of Science Editors:

Daptardar S. The Science of Mind Reading: New Inverse Optimal Control Framework. [Masters Thesis]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105893

4. Genevay, Aude. Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique.

Degree: Docteur es, Mathématiques, 2019, Paris Sciences et Lettres

Entropy-regularized Optimal Transport (EOT) makes it possible to define the Sinkhorn Divergences (SD), a new class of distances between probability measures based… (more)

Subjects/Keywords: Transport Optimal; Apprentissage Statistique; Optimal Transport; Machine Learning; 006.3


APA (6th Edition):

Genevay, A. (2019). Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique. (Doctoral Dissertation). Paris Sciences et Lettres. Retrieved from http://www.theses.fr/2019PSLED002

Chicago Manual of Style (16th Edition):

Genevay, Aude. “Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique.” 2019. Doctoral Dissertation, Paris Sciences et Lettres. Accessed July 04, 2020. http://www.theses.fr/2019PSLED002.

MLA Handbook (7th Edition):

Genevay, Aude. “Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique.” 2019. Web. 04 Jul 2020.

Vancouver:

Genevay A. Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique. [Internet] [Doctoral dissertation]. Paris Sciences et Lettres; 2019. [cited 2020 Jul 04]. Available from: http://www.theses.fr/2019PSLED002.

Council of Science Editors:

Genevay A. Entropy-regularized Optimal Transport for Machine Learning : Transport Optimal pour l'Apprentissage Automatique. [Doctoral Dissertation]. Paris Sciences et Lettres; 2019. Available from: http://www.theses.fr/2019PSLED002


University of Waterloo

5. Lin, Jonathan Feng-Shun. Temporal Segmentation of Human Motion for Rehabilitation.

Degree: 2017, University of Waterloo

 Current physiotherapy practice relies on visual observation of patient movement for assessment and diagnosis. Automation of motion monitoring has the potential to improve accuracy and… (more)

Subjects/Keywords: Segmentation and Identification; Machine Learning; Optimal Control


APA (6th Edition):

Lin, J. F. (2017). Temporal Segmentation of Human Motion for Rehabilitation. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/11764

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Jonathan Feng-Shun. “Temporal Segmentation of Human Motion for Rehabilitation.” 2017. Thesis, University of Waterloo. Accessed July 04, 2020. http://hdl.handle.net/10012/11764.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Jonathan Feng-Shun. “Temporal Segmentation of Human Motion for Rehabilitation.” 2017. Web. 04 Jul 2020.

Vancouver:

Lin JF. Temporal Segmentation of Human Motion for Rehabilitation. [Internet] [Thesis]. University of Waterloo; 2017. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/10012/11764.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin JF. Temporal Segmentation of Human Motion for Rehabilitation. [Thesis]. University of Waterloo; 2017. Available from: http://hdl.handle.net/10012/11764

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Princeton University

6. Li, Yan. Optimal Learning in High Dimensions .

Degree: PhD, 2016, Princeton University

 Collecting information in the course of sequential decision-making can be extremely challenging in high-dimensional settings, where the measurement budget is much smaller than… (more)

Subjects/Keywords: Bayesian Optimization; High-dimensional Statistics; Optimal Learning


APA (6th Edition):

Li, Y. (2016). Optimal Learning in High Dimensions . (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp014m90dx99b

Chicago Manual of Style (16th Edition):

Li, Yan. “Optimal Learning in High Dimensions .” 2016. Doctoral Dissertation, Princeton University. Accessed July 04, 2020. http://arks.princeton.edu/ark:/88435/dsp014m90dx99b.

MLA Handbook (7th Edition):

Li, Yan. “Optimal Learning in High Dimensions .” 2016. Web. 04 Jul 2020.

Vancouver:

Li Y. Optimal Learning in High Dimensions . [Internet] [Doctoral dissertation]. Princeton University; 2016. [cited 2020 Jul 04]. Available from: http://arks.princeton.edu/ark:/88435/dsp014m90dx99b.

Council of Science Editors:

Li Y. Optimal Learning in High Dimensions . [Doctoral Dissertation]. Princeton University; 2016. Available from: http://arks.princeton.edu/ark:/88435/dsp014m90dx99b


Rice University

7. Losey, Dylan P. Responding to Physical Human-Robot Interaction: Theory and Approximations.

Degree: PhD, Engineering, 2018, Rice University

 This thesis explores how robots should respond to physical human interactions. From surgical devices to assistive arms, robots are becoming an important aspect of our… (more)

Subjects/Keywords: human-robot interaction; machine learning; optimal control


APA (6th Edition):

Losey, D. P. (2018). Responding to Physical Human-Robot Interaction: Theory and Approximations. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/105912

Chicago Manual of Style (16th Edition):

Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Doctoral Dissertation, Rice University. Accessed July 04, 2020. http://hdl.handle.net/1911/105912.

MLA Handbook (7th Edition):

Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Web. 04 Jul 2020.

Vancouver:

Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Internet] [Doctoral dissertation]. Rice University; 2018. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/1911/105912.

Council of Science Editors:

Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Doctoral Dissertation]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105912


University of Oklahoma

8. Huffman, William. APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT.

Degree: PhD, 2016, University of Oklahoma

 Three experiments were conducted in order to further investigate optimal learning procedures within an online statistical learning environment. Experiment 1 exposed learners to retrieval practice… (more)

Subjects/Keywords: optimal learning; retrieval practice; feedback; multiple evaluations


APA (6th Edition):

Huffman, W. (2016). APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT. (Doctoral Dissertation). University of Oklahoma. Retrieved from http://hdl.handle.net/11244/47090

Chicago Manual of Style (16th Edition):

Huffman, William. “APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT.” 2016. Doctoral Dissertation, University of Oklahoma. Accessed July 04, 2020. http://hdl.handle.net/11244/47090.

MLA Handbook (7th Edition):

Huffman, William. “APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT.” 2016. Web. 04 Jul 2020.

Vancouver:

Huffman W. APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT. [Internet] [Doctoral dissertation]. University of Oklahoma; 2016. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/11244/47090.

Council of Science Editors:

Huffman W. APPLICATION OF COGNITIVE PRINCIPLES WITHIN AN ONLINE STATISTICAL LEARNING ENVIRONMENT. [Doctoral Dissertation]. University of Oklahoma; 2016. Available from: http://hdl.handle.net/11244/47090


Delft University of Technology

9. Paramkusam, Deepak (author). Comparison of Optimal Control Techniques for Learning-based RRT.

Degree: 2018, Delft University of Technology

 Kinodynamic motion planning for a robot involves generating a trajectory from a given robot state to goal state while satisfying kinematic and dynamic constraints. Rapidly-exploring… (more)

Subjects/Keywords: RRT; Supervised Learning; Optimal control; Motion Planning


APA (6th Edition):

Paramkusam, D. (2018). Comparison of Optimal Control Techniques for Learning-based RRT. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:742ed24e-0525-4ae2-b6d4-2dc6f69e60e1

Chicago Manual of Style (16th Edition):

Paramkusam, Deepak (author). “Comparison of Optimal Control Techniques for Learning-based RRT.” 2018. Masters Thesis, Delft University of Technology. Accessed July 04, 2020. http://resolver.tudelft.nl/uuid:742ed24e-0525-4ae2-b6d4-2dc6f69e60e1.

MLA Handbook (7th Edition):

Paramkusam, Deepak (author). “Comparison of Optimal Control Techniques for Learning-based RRT.” 2018. Web. 04 Jul 2020.

Vancouver:

Paramkusam D. Comparison of Optimal Control Techniques for Learning-based RRT. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2020 Jul 04]. Available from: http://resolver.tudelft.nl/uuid:742ed24e-0525-4ae2-b6d4-2dc6f69e60e1.

Council of Science Editors:

Paramkusam D. Comparison of Optimal Control Techniques for Learning-based RRT. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:742ed24e-0525-4ae2-b6d4-2dc6f69e60e1


Delft University of Technology

10. Tsutsunava, Nick (author). Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT.

Degree: 2018, Delft University of Technology

 Kinodynamic planning is motion planning in state space and aims to satisfy kinematic and dynamic constraints. To reduce its computational cost, a popular approach is… (more)

Subjects/Keywords: Deep Learning; Path Planning; Optimal Control


APA (6th Edition):

Tsutsunava, N. (2018). Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:7953081a-1cf1-4e4b-8ca4-87908ffcfac5

Chicago Manual of Style (16th Edition):

Tsutsunava, Nick (author). “Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT.” 2018. Masters Thesis, Delft University of Technology. Accessed July 04, 2020. http://resolver.tudelft.nl/uuid:7953081a-1cf1-4e4b-8ca4-87908ffcfac5.

MLA Handbook (7th Edition):

Tsutsunava, Nick (author). “Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT.” 2018. Web. 04 Jul 2020.

Vancouver:

Tsutsunava N. Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2020 Jul 04]. Available from: http://resolver.tudelft.nl/uuid:7953081a-1cf1-4e4b-8ca4-87908ffcfac5.

Council of Science Editors:

Tsutsunava N. Generative CoLearn: steering and cost prediction with generative adversarial nets in kinodynamic RRT. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:7953081a-1cf1-4e4b-8ca4-87908ffcfac5

11. Farhani, Ghazal. Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning.

Degree: 2018, University of Western Ontario

 A new first-principle Optimal Estimation Method (OEM) to retrieve ozone number density profiles in both the troposphere and stratosphere using Differential Absorption Lidar (DIAL) measurements… (more)

Subjects/Keywords: Optimal Estimation Method; Ozone; Machine Learning; UTLS


APA (6th Edition):

Farhani, G. (2018). Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning. (Thesis). University of Western Ontario. Retrieved from https://ir.lib.uwo.ca/etd/6020

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Farhani, Ghazal. “Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning.” 2018. Thesis, University of Western Ontario. Accessed July 04, 2020. https://ir.lib.uwo.ca/etd/6020.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Farhani, Ghazal. “Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning.” 2018. Web. 04 Jul 2020.

Vancouver:

Farhani G. Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning. [Internet] [Thesis]. University of Western Ontario; 2018. [cited 2020 Jul 04]. Available from: https://ir.lib.uwo.ca/etd/6020.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Farhani G. Improved techniques for atmospheric ozone retrievals from lidar measurements using the Optimal Estimation Method and Machine Learning. [Thesis]. University of Western Ontario; 2018. Available from: https://ir.lib.uwo.ca/etd/6020

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Southern California

12. Theodorou, Evangelos A. Iterative path integral stochastic optimal control: theory and applications to motor control.

Degree: PhD, Computer Science, 2011, University of Southern California

 Motivated by the limitations of current optimal control and reinforcement learning methods in terms of their efficiency and scalability, this thesis proposes an iterative stochastic… (more)

Subjects/Keywords: stochastic optimal control; reinforcement learning; robotics


APA (6th Edition):

Theodorou, E. A. (2011). Iterative path integral stochastic optimal control: theory and applications to motor control. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680

Chicago Manual of Style (16th Edition):

Theodorou, Evangelos A. “Iterative path integral stochastic optimal control: theory and applications to motor control.” 2011. Doctoral Dissertation, University of Southern California. Accessed July 04, 2020. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680.

MLA Handbook (7th Edition):

Theodorou, Evangelos A. “Iterative path integral stochastic optimal control: theory and applications to motor control.” 2011. Web. 04 Jul 2020.

Vancouver:

Theodorou EA. Iterative path integral stochastic optimal control: theory and applications to motor control. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2020 Jul 04]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680.

Council of Science Editors:

Theodorou EA. Iterative path integral stochastic optimal control: theory and applications to motor control. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680


Delft University of Technology

13. Munk, J. (author). Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning.

Degree: 2016, Delft University of Technology

 In control, the objective is to find a mapping from states to actions that steer a system to a desired reference. A controller can be… (more)

Subjects/Keywords: Deep Learning; Reinforcement Learning; State Representation Learning; Optimal Control; Artificial Intelligence


APA (6th Edition):

Munk, J. (2016). Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:5685a3bd-c278-4a1b-a372-da28822cf140

Chicago Manual of Style (16th Edition):

Munk, J (author). “Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning.” 2016. Masters Thesis, Delft University of Technology. Accessed July 04, 2020. http://resolver.tudelft.nl/uuid:5685a3bd-c278-4a1b-a372-da28822cf140.

MLA Handbook (7th Edition):

Munk, J (author). “Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning.” 2016. Web. 04 Jul 2020.

Vancouver:

Munk J. Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning. [Internet] [Masters thesis]. Delft University of Technology; 2016. [cited 2020 Jul 04]. Available from: http://resolver.tudelft.nl/uuid:5685a3bd-c278-4a1b-a372-da28822cf140.

Council of Science Editors:

Munk J. Deep Reinforcement Learning - Pretraining actor-critic networks using state representation learning. [Masters Thesis]. Delft University of Technology; 2016. Available from: http://resolver.tudelft.nl/uuid:5685a3bd-c278-4a1b-a372-da28822cf140


University of Illinois – Urbana-Champaign

14. Zaytsev, Andrey. Faster apprenticeship learning through inverse optimal control.

Degree: MS, Computer Science, 2017, University of Illinois – Urbana-Champaign

 One of the fundamental problems of artificial intelligence is learning how to behave optimally. With applications ranging from self-driving cars to medical devices, this task… (more)

Subjects/Keywords: Apprenticeship learning; Inverse reinforcement learning; Inverse optimal control; Deep learning; Reinforcement learning; Machine learning


APA (6th Edition):

Zaytsev, A. (2017). Faster apprenticeship learning through inverse optimal control. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed July 04, 2020. http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zaytsev, Andrey. “Faster apprenticeship learning through inverse optimal control.” 2017. Web. 04 Jul 2020.

Vancouver:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/2142/99228.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zaytsev A. Faster apprenticeship learning through inverse optimal control. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99228

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Chicago

15. Chen, Xiangli. Robust Structured Prediction for Process Data.

Degree: 2017, University of Illinois – Chicago

 Processes involve a series of actions performed to achieve a particular result. Developing prediction models for process data is important for many real problems such… (more)

Subjects/Keywords: Structured Prediction; Optimal Control; Reinforcement Learning; Inverse Reinforcement Learning; Imitation Learning; Regression; Covariate Shift


APA (6th Edition):

Chen, X. (2017). Robust Structured Prediction for Process Data. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Thesis, University of Illinois – Chicago. Accessed July 04, 2020. http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chen, Xiangli. “Robust Structured Prediction for Process Data.” 2017. Web. 04 Jul 2020.

Vancouver:

Chen X. Robust Structured Prediction for Process Data. [Internet] [Thesis]. University of Illinois – Chicago; 2017. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/10027/21987.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chen X. Robust Structured Prediction for Process Data. [Thesis]. University of Illinois – Chicago; 2017. Available from: http://hdl.handle.net/10027/21987

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Alberta

16. Ajallooeian, Mohammad Mahdi. Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions.

Degree: MS, Department of Computing Science, 2013, University of Alberta

 In this thesis we consider problems where a self-interested entity, called the principal, has private access to some data that she wishes to use to… (more)

Subjects/Keywords: competition; machine learning; game theory; mechanism design; optimal


APA (6th Edition):

Ajallooeian, M. M. (2013). Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/xg94hr01s

Chicago Manual of Style (16th Edition):

Ajallooeian, Mohammad Mahdi. “Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions.” 2013. Masters Thesis, University of Alberta. Accessed July 04, 2020. https://era.library.ualberta.ca/files/xg94hr01s.

MLA Handbook (7th Edition):

Ajallooeian, Mohammad Mahdi. “Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions.” 2013. Web. 04 Jul 2020.

Vancouver:

Ajallooeian MM. Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions. [Internet] [Masters thesis]. University of Alberta; 2013. [cited 2020 Jul 04]. Available from: https://era.library.ualberta.ca/files/xg94hr01s.

Council of Science Editors:

Ajallooeian MM. Optimal Mechanisms for Machine Learning: A Game-Theoretic Approach to Designing Machine Learning Competitions. [Masters Thesis]. University of Alberta; 2013. Available from: https://era.library.ualberta.ca/files/xg94hr01s


University of Tasmania

17. Brumby, LE. The metacognitive monitoring and study decisions of incremental theorists.

Degree: 2016, University of Tasmania

 The present study investigated incremental theorists’ metacognitive monitoring and study behaviours during a word-pair learning task. Sixty-five participants (38 female; aged 18-64 years, M =… (more)

Subjects/Keywords: theories of intelligence; judgments of learning; metacognition; study behaviours; optimal study


APA (6th Edition):

Brumby, L. (2016). The metacognitive monitoring and study decisions of incremental theorists. (Thesis). University of Tasmania. Retrieved from https://eprints.utas.edu.au/23527/1/Brumby_whole_thesis.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Brumby, LE. “The metacognitive monitoring and study decisions of incremental theorists.” 2016. Thesis, University of Tasmania. Accessed July 04, 2020. https://eprints.utas.edu.au/23527/1/Brumby_whole_thesis.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Brumby, LE. “The metacognitive monitoring and study decisions of incremental theorists.” 2016. Web. 04 Jul 2020.

Vancouver:

Brumby L. The metacognitive monitoring and study decisions of incremental theorists. [Internet] [Thesis]. University of Tasmania; 2016. [cited 2020 Jul 04]. Available from: https://eprints.utas.edu.au/23527/1/Brumby_whole_thesis.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Brumby L. The metacognitive monitoring and study decisions of incremental theorists. [Thesis]. University of Tasmania; 2016. Available from: https://eprints.utas.edu.au/23527/1/Brumby_whole_thesis.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

18. Lu, Ying. Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -.

Degree: Docteur es, Informatique et Mathématiques, 2017, Lyon

 When learning a classification model for a new target domain with only a small number of training samples, the application of standard machine learning algorithms… (more)

Subjects/Keywords: Transfert d'apprentissage; Inductive Transfer Learning; Sparse Representation; Optimal Transport; Computer Vision


APA (6th Edition):

Lu, Y. (2017). Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -. (Doctoral Dissertation). Lyon. Retrieved from http://www.theses.fr/2017LYSEC045

Chicago Manual of Style (16th Edition):

Lu, Ying. “Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -.” 2017. Doctoral Dissertation, Lyon. Accessed July 04, 2020. http://www.theses.fr/2017LYSEC045.

MLA Handbook (7th Edition):

Lu, Ying. “Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -.” 2017. Web. 04 Jul 2020.

Vancouver:

Lu Y. Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -. [Internet] [Doctoral dissertation]. Lyon; 2017. [cited 2020 Jul 04]. Available from: http://www.theses.fr/2017LYSEC045.

Council of Science Editors:

Lu Y. Transfer Learning for Image Classification : Transfert de connaissances pour la classification des images -. [Doctoral Dissertation]. Lyon; 2017. Available from: http://www.theses.fr/2017LYSEC045


Queens University

19. Moskowitz, Joshua. The Optimality of Decision Making During Motor Learning .

Degree: Psychology, 2016, Queens University

 In our daily lives, we often must predict how well we are going to perform in the future based on an evaluation of our current… (more)

Subjects/Keywords: motor prediction; decision making; motor learning; optimal behaviour


APA (6th Edition):

Moskowitz, J. (2016). The Optimality of Decision Making During Motor Learning . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/14584

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Moskowitz, Joshua. “The Optimality of Decision Making During Motor Learning .” 2016. Thesis, Queens University. Accessed July 04, 2020. http://hdl.handle.net/1974/14584.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Moskowitz, Joshua. “The Optimality of Decision Making During Motor Learning .” 2016. Web. 04 Jul 2020.

Vancouver:

Moskowitz J. The Optimality of Decision Making During Motor Learning . [Internet] [Thesis]. Queens University; 2016. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/1974/14584.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Moskowitz J. The Optimality of Decision Making During Motor Learning . [Thesis]. Queens University; 2016. Available from: http://hdl.handle.net/1974/14584

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Southern California

20. Lee, Jeong-Yoon. Modeling motor memory to enhance multiple task learning.

Degree: PhD, Computer Science, 2011, University of Southern California

 Although recent computational modeling research has advanced our understanding of motor learning, previous studies focused on single-task motor learning and did not account for multiple… (more)

Subjects/Keywords: computational model; motor learning; stroke rehabilitation; optimal schedule


APA (6th Edition):

Lee, J. (2011). Modeling motor memory to enhance multiple task learning. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/462400/rec/4138

Chicago Manual of Style (16th Edition):

Lee, Jeong-Yoon. “Modeling motor memory to enhance multiple task learning.” 2011. Doctoral Dissertation, University of Southern California. Accessed July 04, 2020. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/462400/rec/4138.

MLA Handbook (7th Edition):

Lee, Jeong-Yoon. “Modeling motor memory to enhance multiple task learning.” 2011. Web. 04 Jul 2020.

Vancouver:

Lee J. Modeling motor memory to enhance multiple task learning. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2020 Jul 04]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/462400/rec/4138.

Council of Science Editors:

Lee J. Modeling motor memory to enhance multiple task learning. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/462400/rec/4138


University of Florida

21. Deptula, Patryk. Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems.

Degree: PhD, Mechanical Engineering - Mechanical and Aerospace Engineering, 2019, University of Florida

 The last two decades have witnessed an influx of autonomous systems including: unmanned aerial vehicles (UAVs), autonomous underwater vehicles (AUVs), and autonomous land vehicles. Since… (more)

Subjects/Keywords: adaptive; control; data-based; learning; nonlinear; optimal; reinforcement; uncertain


APA (6th Edition):

Deptula, P. (2019). Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems. (Doctoral Dissertation). University of Florida. Retrieved from https://ufdc.ufl.edu/UFE0054227

Chicago Manual of Style (16th Edition):

Deptula, Patryk. “Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems.” 2019. Doctoral Dissertation, University of Florida. Accessed July 04, 2020. https://ufdc.ufl.edu/UFE0054227.

MLA Handbook (7th Edition):

Deptula, Patryk. “Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems.” 2019. Web. 04 Jul 2020.

Vancouver:

Deptula P. Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems. [Internet] [Doctoral dissertation]. University of Florida; 2019. [cited 2020 Jul 04]. Available from: https://ufdc.ufl.edu/UFE0054227.

Council of Science Editors:

Deptula P. Data-Based Reinforcement Learning Approximate Optimal Control for Uncertain Nonlinear Systems. [Doctoral Dissertation]. University of Florida; 2019. Available from: https://ufdc.ufl.edu/UFE0054227


University of Edinburgh

22. Mitrovic, Djordje. Stochastic optimal control with learned dynamics models.

Degree: PhD, 2011, University of Edinburgh

 The motor control of anthropomorphic robotic systems is a challenging computational task mainly because of the high levels of redundancies such systems exhibit. Optimality principles… (more)

Subjects/Keywords: 629.8; optimal control; learning; robotics; impedance control; series elastic actuator


APA (6th Edition):

Mitrovic, D. (2011). Stochastic optimal control with learned dynamics models. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/4783

Chicago Manual of Style (16th Edition):

Mitrovic, Djordje. “Stochastic optimal control with learned dynamics models.” 2011. Doctoral Dissertation, University of Edinburgh. Accessed July 04, 2020. http://hdl.handle.net/1842/4783.

MLA Handbook (7th Edition):

Mitrovic, Djordje. “Stochastic optimal control with learned dynamics models.” 2011. Web. 04 Jul 2020.

Vancouver:

Mitrovic D. Stochastic optimal control with learned dynamics models. [Internet] [Doctoral dissertation]. University of Edinburgh; 2011. [cited 2020 Jul 04]. Available from: http://hdl.handle.net/1842/4783.

Council of Science Editors:

Mitrovic D. Stochastic optimal control with learned dynamics models. [Doctoral Dissertation]. University of Edinburgh; 2011. Available from: http://hdl.handle.net/1842/4783


Washington University in St. Louis

23. Erez, Tom. Optimal Control for Autonomous Motor Behavior.

Degree: PhD, Computer Science and Engineering, 2011, Washington University in St. Louis

 This dissertation presents algorithms that allow robots to generate optimal behavior from first principles. Instead of hard-coding every desired behavior, we encode the task as… (more)

Subjects/Keywords: Computer Science; artificial intelligence, legged locomotion, machine learning, optimal control, reinforcement learning, robotics


APA (6th Edition):

Erez, T. (2011). Optimal Control for Autonomous Motor Behavior. (Doctoral Dissertation). Washington University in St. Louis. Retrieved from https://openscholarship.wustl.edu/etd/101

Chicago Manual of Style (16th Edition):

Erez, Tom. “Optimal Control for Autonomous Motor Behavior.” 2011. Doctoral Dissertation, Washington University in St. Louis. Accessed July 04, 2020. https://openscholarship.wustl.edu/etd/101.

MLA Handbook (7th Edition):

Erez, Tom. “Optimal Control for Autonomous Motor Behavior.” 2011. Web. 04 Jul 2020.

Vancouver:

Erez T. Optimal Control for Autonomous Motor Behavior. [Internet] [Doctoral dissertation]. Washington University in St. Louis; 2011. [cited 2020 Jul 04]. Available from: https://openscholarship.wustl.edu/etd/101.

Council of Science Editors:

Erez T. Optimal Control for Autonomous Motor Behavior. [Doctoral Dissertation]. Washington University in St. Louis; 2011. Available from: https://openscholarship.wustl.edu/etd/101


University of Southern California

24. Oh, Youngmin. Computational principles in human motor adaptation: sources, memories, and variability.

Degree: PhD, Neuroscience, 2015, University of Southern California

 In this dissertation study, I conducted behavioral experiments and applied computational theories to understand human motor adaptation. Motor adaptation is one kind of motor learning… (more)

Subjects/Keywords: motor adaptation; supervised learning; reinforcement learning; Kalman filter; mixture of experts; optimal feedback control


APA (6th Edition):

Oh, Y. (2015). Computational principles in human motor adaptation: sources, memories, and variability. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/591951/rec/1553

Chicago Manual of Style (16th Edition):

Oh, Youngmin. “Computational principles in human motor adaptation: sources, memories, and variability.” 2015. Doctoral Dissertation, University of Southern California. Accessed July 04, 2020. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/591951/rec/1553.

MLA Handbook (7th Edition):

Oh, Youngmin. “Computational principles in human motor adaptation: sources, memories, and variability.” 2015. Web. 04 Jul 2020.

Vancouver:

Oh Y. Computational principles in human motor adaptation: sources, memories, and variability. [Internet] [Doctoral dissertation]. University of Southern California; 2015. [cited 2020 Jul 04]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/591951/rec/1553.

Council of Science Editors:

Oh Y. Computational principles in human motor adaptation: sources, memories, and variability. [Doctoral Dissertation]. University of Southern California; 2015. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/591951/rec/1553


Lehigh University

25. Nazari, Mohammadreza. Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems.

Degree: PhD, Industrial Engineering, 2019, Lehigh University

  Designing hand-engineered solutions for decision-making in complex environments is a challenging task. This dissertation investigates the possibility of having autonomous decision-makers in several real-world… (more)

Subjects/Keywords: Machine Learning; Optimal Control; Reinforcement Learning; Service Systems; Supply Chain; Industrial Technology


APA (6th Edition):

Nazari, M. (2019). Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems. (Doctoral Dissertation). Lehigh University. Retrieved from https://preserve.lehigh.edu/etd/5727

Chicago Manual of Style (16th Edition):

Nazari, Mohammadreza. “Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems.” 2019. Doctoral Dissertation, Lehigh University. Accessed July 04, 2020. https://preserve.lehigh.edu/etd/5727.

MLA Handbook (7th Edition):

Nazari, Mohammadreza. “Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems.” 2019. Web. 04 Jul 2020.

Vancouver:

Nazari M. Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems. [Internet] [Doctoral dissertation]. Lehigh University; 2019. [cited 2020 Jul 04]. Available from: https://preserve.lehigh.edu/etd/5727.

Council of Science Editors:

Nazari M. Autonomous Decision-Making Schemes for Real-World Applications in Supply Chains and Online Systems. [Doctoral Dissertation]. Lehigh University; 2019. Available from: https://preserve.lehigh.edu/etd/5727


University of Florida

26. Walters, Patrick S. Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach.

Degree: PhD, Mechanical Engineering - Mechanical and Aerospace Engineering, 2015, University of Florida

 Advances in sensing and computational capabilities have enabled autonomous vehicles to become vital assets across multiple disciplines. These improved capabilities have led to increased interest… (more)

Subjects/Keywords: Approximation; Dynamic programming; Estimation methods; Matrices; Navigation; Optimal control; Optimal policy; Simulations; Trajectories; Velocity; control; learning; maritime; optimization; robotics


APA (6th Edition):

Walters, P. S. (2015). Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach. (Doctoral Dissertation). University of Florida. Retrieved from https://ufdc.ufl.edu/UFE0047829

Chicago Manual of Style (16th Edition):

Walters, Patrick S. “Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach.” 2015. Doctoral Dissertation, University of Florida. Accessed July 04, 2020. https://ufdc.ufl.edu/UFE0047829.

MLA Handbook (7th Edition):

Walters, Patrick S. “Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach.” 2015. Web. 04 Jul 2020.

Vancouver:

Walters PS. Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach. [Internet] [Doctoral dissertation]. University of Florida; 2015. [cited 2020 Jul 04]. Available from: https://ufdc.ufl.edu/UFE0047829.

Council of Science Editors:

Walters PS. Guidance and Control of Marine Craft an Adaptive Dynamic Programming Approach. [Doctoral Dissertation]. University of Florida; 2015. Available from: https://ufdc.ufl.edu/UFE0047829

27. Carpentier, Justin. Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe.

Degree: Docteur es, Robotique, 2017, Université Toulouse III – Paul Sabatier

 Anthropomorphic locomotion is a complex process that involves a very large number of degrees of freedom, the human body having more than… (more)

Subjects/Keywords: Robotique humanoïde; Biomécanique; Contrôle optimal; Estimation; Apprentissage automatique; Anthropomorphic locomotion; Humanoid robotics; Biomechanics; Optimal control; Machine learning


APA (6th Edition):

Carpentier, J. (2017). Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe. (Doctoral Dissertation). Université Toulouse III – Paul Sabatier. Retrieved from http://www.theses.fr/2017TOU30376

Chicago Manual of Style (16th Edition):

Carpentier, Justin. “Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe.” 2017. Doctoral Dissertation, Université Toulouse III – Paul Sabatier. Accessed July 04, 2020. http://www.theses.fr/2017TOU30376.

MLA Handbook (7th Edition):

Carpentier, Justin. “Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe.” 2017. Web. 04 Jul 2020.

Vancouver:

Carpentier J. Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe. [Internet] [Doctoral dissertation]. Université Toulouse III – Paul Sabatier; 2017. [cited 2020 Jul 04]. Available from: http://www.theses.fr/2017TOU30376.

Council of Science Editors:

Carpentier J. Computational foundations of anthropomorphic locomotion : Fondements calculatoires de la locomotion anthropomorphe. [Doctoral Dissertation]. Université Toulouse III – Paul Sabatier; 2017. Available from: http://www.theses.fr/2017TOU30376


University of Florida

28. Bhasin,Shubhendu. Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems.

Degree: PhD, Mechanical Engineering - Mechanical and Aerospace Engineering, 2011, University of Florida

 Notions of optimal behavior expressed in natural systems led researchers to develop reinforcement learning (RL) as a computational tool in machine learning to learn actions… (more)

Subjects/Keywords: Approximation; Human error; Identifiers; Learning modalities; Machine learning; Neural networks; Optimal control; Signals; State estimation; Statistical estimation; adaptive; approximate; control; dynamic; learning; neural; nonlinear; optimal; reinforcement


APA (6th Edition):

Bhasin,Shubhendu. (2011). Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems. (Doctoral Dissertation). University of Florida. Retrieved from https://ufdc.ufl.edu/UFE0042825

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

Bhasin,Shubhendu. “Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems.” 2011. Doctoral Dissertation, University of Florida. Accessed July 04, 2020. https://ufdc.ufl.edu/UFE0042825.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

Bhasin,Shubhendu. “Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems.” 2011. Web. 04 Jul 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

Bhasin,Shubhendu. Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems. [Internet] [Doctoral dissertation]. University of Florida; 2011. [cited 2020 Jul 04]. Available from: https://ufdc.ufl.edu/UFE0042825.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

Bhasin,Shubhendu. Reinforcement Learning and Optimal Control Methods for Uncertain Nonlinear Systems. [Doctoral Dissertation]. University of Florida; 2011. Available from: https://ufdc.ufl.edu/UFE0042825

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

29. Geisert, Mathieu. Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens.

Degree: Docteur es, Robotique, 2018, Toulouse, INSA

What do a humanoid robot and a quadrotor have in common? Well, not much… This thesis is therefore dedicated to the development of algorithms… (more)

Subjects/Keywords: Contrôle optimal numérique; Contrôle hiérarchique; Apprentissage automatique; Planification de contacts; Robots humanoïdes; Robots aériens; Numerical optimal control; Machine learning; Machine learning; Contact planning; Humanoid robots; Aerial robots; 629.8


APA (6th Edition):

Geisert, M. (2018). Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens. (Doctoral Dissertation). Toulouse, INSA. Retrieved from http://www.theses.fr/2018ISAT0011

Chicago Manual of Style (16th Edition):

Geisert, Mathieu. “Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens.” 2018. Doctoral Dissertation, Toulouse, INSA. Accessed July 04, 2020. http://www.theses.fr/2018ISAT0011.

MLA Handbook (7th Edition):

Geisert, Mathieu. “Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens.” 2018. Web. 04 Jul 2020.

Vancouver:

Geisert M. Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens. [Internet] [Doctoral dissertation]. Toulouse, INSA; 2018. [cited 2020 Jul 04]. Available from: http://www.theses.fr/2018ISAT0011.

Council of Science Editors:

Geisert M. Optimal control and machine learning for humanoid and aerial robots : Contrôle optimal et apprentissage automatique pour robots humanoïdes et aériens. [Doctoral Dissertation]. Toulouse, INSA; 2018. Available from: http://www.theses.fr/2018ISAT0011


University of California – Berkeley

30. Toth, Boriska. Targeted learning of individual effects and individualized treatments using an instrumental variable.

Degree: Biostatistics, 2016, University of California – Berkeley

 We consider estimation of causal effects when treatment assignment is potentially subject to unmeasured confounding, but a valid instrumental variable is available. Moreover, our models… (more)

Subjects/Keywords: Statistics; Economics; causal inference; instrumental variables; machine learning; optimal treatment regimes; semiparametric estimation; statistics


APA (6th Edition):

Toth, B. (2016). Targeted learning of individual effects and individualized treatments using an instrumental variable. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/99n2g6p4

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Toth, Boriska. “Targeted learning of individual effects and individualized treatments using an instrumental variable.” 2016. Thesis, University of California – Berkeley. Accessed July 04, 2020. http://www.escholarship.org/uc/item/99n2g6p4.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Toth, Boriska. “Targeted learning of individual effects and individualized treatments using an instrumental variable.” 2016. Web. 04 Jul 2020.

Vancouver:

Toth B. Targeted learning of individual effects and individualized treatments using an instrumental variable. [Internet] [Thesis]. University of California – Berkeley; 2016. [cited 2020 Jul 04]. Available from: http://www.escholarship.org/uc/item/99n2g6p4.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Toth B. Targeted learning of individual effects and individualized treatments using an instrumental variable. [Thesis]. University of California – Berkeley; 2016. Available from: http://www.escholarship.org/uc/item/99n2g6p4

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
