You searched for subject:(Multi Armed Bandits). Showing records 1 – 30 of 46 total matches.

University of Illinois – Urbana-Champaign

1. Jiang, Chong. Parametrized Stochastic Multi-armed Bandits with Binary Rewards.

Degree: MS, 2011, University of Illinois – Urbana-Champaign

 In this thesis, we consider the problem of multi-armed bandits with a large number of correlated arms. We assume that the arms have Bernoulli distributed… (more)

Subjects/Keywords: multi-armed bandits; machine learning
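
The abstract above describes arms with Bernoulli-distributed rewards. For orientation only, here is a minimal sketch of the standard UCB1 baseline on independent Bernoulli arms (the thesis develops a parametrized algorithm for correlated arms, which this sketch does not implement; all names below are ours):

```python
# Illustrative only: the standard uncorrelated UCB1 baseline on
# Bernoulli arms, not the thesis's parametrized algorithm.
import math
import random

def ucb1(true_means, horizon, seed=0):
    """Run UCB1 on independent Bernoulli arms; return the total reward."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k    # pulls per arm
    sums = [0.0] * k    # summed rewards per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialize: pull each arm once
        else:
            # UCB index: empirical mean plus an exploration bonus that
            # shrinks as an arm accumulates pulls.
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# Three Bernoulli arms; the policy should concentrate pulls on arm 0.
print(ucb1([0.7, 0.5, 0.3], horizon=10_000))
```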

APA (6th Edition):

Jiang, C. (2011). Parametrized Stochastic Multi-armed Bandits with Binary Rewards. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/18352

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jiang, Chong. “Parametrized Stochastic Multi-armed Bandits with Binary Rewards.” 2011. Thesis, University of Illinois – Urbana-Champaign. Accessed April 14, 2021. http://hdl.handle.net/2142/18352.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jiang, Chong. “Parametrized Stochastic Multi-armed Bandits with Binary Rewards.” 2011. Web. 14 Apr 2021.

Vancouver:

Jiang C. Parametrized Stochastic Multi-armed Bandits with Binary Rewards. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2011. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/2142/18352.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jiang C. Parametrized Stochastic Multi-armed Bandits with Binary Rewards. [Thesis]. University of Illinois – Urbana-Champaign; 2011. Available from: http://hdl.handle.net/2142/18352

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Urbana-Champaign

2. Jiang, Chong. Online advertisements and multi-armed bandits.

Degree: PhD, Electrical & Computer Engr, 2015, University of Illinois – Urbana-Champaign

 We investigate a number of multi-armed bandit problems that model different aspects of online advertising, beginning with a survey of the key techniques that are… (more)

Subjects/Keywords: Multi-armed bandits; online advertisements; reinforcement learning

APA (6th Edition):

Jiang, C. (2015). Online advertisements and multi-armed bandits. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/78369

Chicago Manual of Style (16th Edition):

Jiang, Chong. “Online advertisements and multi-armed bandits.” 2015. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed April 14, 2021. http://hdl.handle.net/2142/78369.

MLA Handbook (7th Edition):

Jiang, Chong. “Online advertisements and multi-armed bandits.” 2015. Web. 14 Apr 2021.

Vancouver:

Jiang C. Online advertisements and multi-armed bandits. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2015. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/2142/78369.

Council of Science Editors:

Jiang C. Online advertisements and multi-armed bandits. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2015. Available from: http://hdl.handle.net/2142/78369

3. Gajane, Pratik. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.

Degree: Docteur es, Informatique, 2017, Lille 3

In this thesis, we study sequential decision-making problems in which, for each of its decisions, the learner receives a piece of information that it uses… (more)

Subjects/Keywords: Multi-Armed Bandit; Partial Feedback; Dueling Bandits; Corrupt Bandits; Ranker Evaluation; Differential Privacy

APA (6th Edition):

Gajane, P. (2017). Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. (Doctoral Dissertation). Lille 3. Retrieved from http://www.theses.fr/2017LIL30045

Chicago Manual of Style (16th Edition):

Gajane, Pratik. “Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.” 2017. Doctoral Dissertation, Lille 3. Accessed April 14, 2021. http://www.theses.fr/2017LIL30045.

MLA Handbook (7th Edition):

Gajane, Pratik. “Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.” 2017. Web. 14 Apr 2021.

Vancouver:

Gajane P. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. [Internet] [Doctoral dissertation]. Lille 3; 2017. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2017LIL30045.

Council of Science Editors:

Gajane P. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. [Doctoral Dissertation]. Lille 3; 2017. Available from: http://www.theses.fr/2017LIL30045

4. -4677-643X. Online experiment design with causal structures.

Degree: PhD, Electrical and Computer Engineering, 2019, University of Texas – Austin

Modern learning systems such as recommendation engines, computational advertising systems, and online parameter-tuning services are inherently online; i.e., these systems need to continually collect data, take… (more)

Subjects/Keywords: Online learning; Multi-armed bandits; Contextual bandits; Hyper-parameter tuning; Tree-search; CI testing

APA (6th Edition):

-4677-643X. (2019). Online experiment design with causal structures. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/2950

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-4677-643X. “Online experiment design with causal structures.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed April 14, 2021. http://dx.doi.org/10.26153/tsw/2950.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-4677-643X. “Online experiment design with causal structures.” 2019. Web. 14 Apr 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-4677-643X. Online experiment design with causal structures. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2021 Apr 14]. Available from: http://dx.doi.org/10.26153/tsw/2950.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-4677-643X. Online experiment design with causal structures. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/2950

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

5. L. Cella. EFFICIENCY AND REALISM IN STOCHASTIC BANDITS.

Degree: 2021, Università degli Studi di Milano

This manuscript analyzes the application of stochastic bandits to the recommender-systems domain. Here a learning agent sequentially recommends one… (more)

Subjects/Keywords: machine learning; multi-armed bandits; stochastic bandits; online learning; Settore INF/01 - Informatica

APA (6th Edition):

Cella, L. (2021). EFFICIENCY AND REALISM IN STOCHASTIC BANDITS. (Thesis). Università degli Studi di Milano. Retrieved from http://hdl.handle.net/2434/807862

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cella, L. “EFFICIENCY AND REALISM IN STOCHASTIC BANDITS.” 2021. Thesis, Università degli Studi di Milano. Accessed April 14, 2021. http://hdl.handle.net/2434/807862.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cella, L. “EFFICIENCY AND REALISM IN STOCHASTIC BANDITS.” 2021. Web. 14 Apr 2021.

Vancouver:

Cella L. EFFICIENCY AND REALISM IN STOCHASTIC BANDITS. [Internet] [Thesis]. Università degli Studi di Milano; 2021. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/2434/807862.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cella L. EFFICIENCY AND REALISM IN STOCHASTIC BANDITS. [Thesis]. Università degli Studi di Milano; 2021. Available from: http://hdl.handle.net/2434/807862

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

6. Besson, Lilian. Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets.

Degree: Docteur es, Télécommunications (STIC), 2019, CentraleSupélec

In this doctoral thesis, we study wireless networks and the reconfigurable devices that can access cognitive-radio networks,… (more)

Subjects/Keywords: Internet of Things (IoT); Cognitive Radio; Sequential Learning; Reinforcement Learning; Multi-Armed Bandits (MAB); Multi-Player Multi-Armed Bandits; Non-Stationary Multi-Armed Bandits; 621.38

APA (6th Edition):

Besson, L. (2019). Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets. (Doctoral Dissertation). CentraleSupélec. Retrieved from http://www.theses.fr/2019CSUP0005

Chicago Manual of Style (16th Edition):

Besson, Lilian. “Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets.” 2019. Doctoral Dissertation, CentraleSupélec. Accessed April 14, 2021. http://www.theses.fr/2019CSUP0005.

MLA Handbook (7th Edition):

Besson, Lilian. “Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets.” 2019. Web. 14 Apr 2021.

Vancouver:

Besson L. Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets. [Internet] [Doctoral dissertation]. CentraleSupélec; 2019. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2019CSUP0005.

Council of Science Editors:

Besson L. Multi-Players Bandit Algorithms for Internet of Things Networks : Algorithmes de Bandits Multi-Joueurs pour les Réseaux de l'Internet des Objets. [Doctoral Dissertation]. CentraleSupélec; 2019. Available from: http://www.theses.fr/2019CSUP0005


University of Alberta

7. Neufeld, James P. Adaptive Monte Carlo Integration.

Degree: PhD, Department of Computing Science, 2016, University of Alberta

 Monte Carlo methods are a simple, effective, and widely deployed way of approximating integrals that prove too challenging for deterministic approaches. This thesis presents a… (more)

Subjects/Keywords: Online Learning; Machine Learning; Monte Carlo; Multi-armed Bandits

APA (6th Edition):

Neufeld, J. P. (2016). Adaptive Monte Carlo Integration. (Doctoral Dissertation). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/chx11xf288

Chicago Manual of Style (16th Edition):

Neufeld, James P. “Adaptive Monte Carlo Integration.” 2016. Doctoral Dissertation, University of Alberta. Accessed April 14, 2021. https://era.library.ualberta.ca/files/chx11xf288.

MLA Handbook (7th Edition):

Neufeld, James P. “Adaptive Monte Carlo Integration.” 2016. Web. 14 Apr 2021.

Vancouver:

Neufeld, James P. Adaptive Monte Carlo Integration. [Internet] [Doctoral dissertation]. University of Alberta; 2016. [cited 2021 Apr 14]. Available from: https://era.library.ualberta.ca/files/chx11xf288.

Council of Science Editors:

Neufeld, James P. Adaptive Monte Carlo Integration. [Doctoral Dissertation]. University of Alberta; 2016. Available from: https://era.library.ualberta.ca/files/chx11xf288


Cornell University

8. Xu, Xiao. Multi-Armed Bandits in Large-Scale Complex Systems.

Degree: PhD, Electrical and Computer Engineering, 2020, Cornell University

This dissertation focuses on the multi-armed bandit (MAB) problem, where the objective is to design a sequential arm-selection policy that maximizes the total reward over time.… (more)

Subjects/Keywords: Large-Scale Complex Systems; Multi-Armed Bandits; No-regret Learning

APA (6th Edition):

Xu, X. (2020). Multi-Armed Bandits in Large-Scale Complex Systems. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/70395

Chicago Manual of Style (16th Edition):

Xu, Xiao. “Multi-Armed Bandits in Large-Scale Complex Systems.” 2020. Doctoral Dissertation, Cornell University. Accessed April 14, 2021. http://hdl.handle.net/1813/70395.

MLA Handbook (7th Edition):

Xu, Xiao. “Multi-Armed Bandits in Large-Scale Complex Systems.” 2020. Web. 14 Apr 2021.

Vancouver:

Xu X. Multi-Armed Bandits in Large-Scale Complex Systems. [Internet] [Doctoral dissertation]. Cornell University; 2020. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/1813/70395.

Council of Science Editors:

Xu X. Multi-Armed Bandits in Large-Scale Complex Systems. [Doctoral Dissertation]. Cornell University; 2020. Available from: http://hdl.handle.net/1813/70395


University of Victoria

9. Huang, Zhiming. Thompson sampling-based online decision making in network routing.

Degree: Department of Computer Science, 2020, University of Victoria

Online decision making is a class of machine learning problems in which decisions are made sequentially so as to accumulate as many rewards… (more)

Subjects/Keywords: Online Decision Making; Multi-armed Bandits; Thompson Sampling; Network Routing
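
The abstract above centers on Thompson sampling. A minimal sketch of its textbook Beta-Bernoulli form (the thesis applies the method to network routing, which this toy loop does not model; names are ours):

```python
# Hedged sketch: Thompson sampling with Beta priors on Bernoulli arms.
import random

def thompson_bernoulli(true_means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k   # Beta(1, 1) uniform priors
    beta = [1] * k
    total = 0
    for _ in range(horizon):
        # Sample a plausible mean per arm from its posterior,
        # then play the arm whose sample is largest.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        arm = max(range(k), key=samples.__getitem__)
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward        # posterior update on success
        beta[arm] += 1 - reward     # posterior update on failure
        total += reward
    return total

print(thompson_bernoulli([0.6, 0.4, 0.5], horizon=10_000))
```

Sampling from the posterior and playing the argmax makes exploration automatic: arms with uncertain means are occasionally sampled high and therefore tried.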

APA (6th Edition):

Huang, Z. (2020). Thompson sampling-based online decision making in network routing. (Masters Thesis). University of Victoria. Retrieved from http://hdl.handle.net/1828/12095

Chicago Manual of Style (16th Edition):

Huang, Zhiming. “Thompson sampling-based online decision making in network routing.” 2020. Masters Thesis, University of Victoria. Accessed April 14, 2021. http://hdl.handle.net/1828/12095.

MLA Handbook (7th Edition):

Huang, Zhiming. “Thompson sampling-based online decision making in network routing.” 2020. Web. 14 Apr 2021.

Vancouver:

Huang Z. Thompson sampling-based online decision making in network routing. [Internet] [Masters thesis]. University of Victoria; 2020. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/1828/12095.

Council of Science Editors:

Huang Z. Thompson sampling-based online decision making in network routing. [Masters Thesis]. University of Victoria; 2020. Available from: http://hdl.handle.net/1828/12095


Princeton University

10. Schneider, Jonathan. Learning Algorithms in Strategic Environments.

Degree: PhD, 2018, Princeton University

Learning algorithms are often analyzed under the assumption that their inputs are drawn from stochastic or adversarial sources. Increasingly, these algorithms are being applied in strategic… (more)

Subjects/Keywords: algorithmic mechanism design; multi-armed bandits; online learning

APA (6th Edition):

Schneider, J. (2018). Learning Algorithms in Strategic Environments. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01w0892d703

Chicago Manual of Style (16th Edition):

Schneider, Jonathan. “Learning Algorithms in Strategic Environments.” 2018. Doctoral Dissertation, Princeton University. Accessed April 14, 2021. http://arks.princeton.edu/ark:/88435/dsp01w0892d703.

MLA Handbook (7th Edition):

Schneider, Jonathan. “Learning Algorithms in Strategic Environments.” 2018. Web. 14 Apr 2021.

Vancouver:

Schneider J. Learning Algorithms in Strategic Environments. [Internet] [Doctoral dissertation]. Princeton University; 2018. [cited 2021 Apr 14]. Available from: http://arks.princeton.edu/ark:/88435/dsp01w0892d703.

Council of Science Editors:

Schneider J. Learning Algorithms in Strategic Environments. [Doctoral Dissertation]. Princeton University; 2018. Available from: http://arks.princeton.edu/ark:/88435/dsp01w0892d703


University of Southern California

11. Kalathil, Dileep Manisseri. Empirical methods in control and optimization.

Degree: PhD, Electrical Engineering, 2014, University of Southern California

This dissertation addresses some problems in the area of learning, optimization, and decision making in stochastic systems using empirical methods. The first part of the… (more)

Subjects/Keywords: online optimization; multi-armed bandits; MDP; approachability; spectrum sharing

APA (6th Edition):

Kalathil, D. M. (2014). Empirical methods in control and optimization. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/489169/rec/2323

Chicago Manual of Style (16th Edition):

Kalathil, Dileep Manisseri. “Empirical methods in control and optimization.” 2014. Doctoral Dissertation, University of Southern California. Accessed April 14, 2021. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/489169/rec/2323.

MLA Handbook (7th Edition):

Kalathil, Dileep Manisseri. “Empirical methods in control and optimization.” 2014. Web. 14 Apr 2021.

Vancouver:

Kalathil DM. Empirical methods in control and optimization. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Apr 14]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/489169/rec/2323.

Council of Science Editors:

Kalathil DM. Empirical methods in control and optimization. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/489169/rec/2323


University of Texas – Austin

12. -4028-5309. Finding good enough coins under symmetric and asymmetric information.

Degree: MS in Engineering, Electrical and Computer Engineering, 2017, University of Texas – Austin

We study the problem of returning m coins with biases above 0.5. These good-enough coins returned by the agent should be acceptable… (more)

Subjects/Keywords: Multi-armed bandits; Sequential hypothesis testing; FWER control; Information asymmetry

APA (6th Edition):

-4028-5309. (2017). Finding good enough coins under symmetric and asymmetric information. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/68220

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-4028-5309. “Finding good enough coins under symmetric and asymmetric information.” 2017. Masters Thesis, University of Texas – Austin. Accessed April 14, 2021. http://hdl.handle.net/2152/68220.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-4028-5309. “Finding good enough coins under symmetric and asymmetric information.” 2017. Web. 14 Apr 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-4028-5309. Finding good enough coins under symmetric and asymmetric information. [Internet] [Masters thesis]. University of Texas – Austin; 2017. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/2152/68220.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-4028-5309. Finding good enough coins under symmetric and asymmetric information. [Masters Thesis]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/68220

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


The Ohio State University

13. Liu, Fang. Efficient Online Learning with Bandit Feedback.

Degree: PhD, Electrical and Computer Engineering, 2020, The Ohio State University

 Online learning has been widely used in modern machine learning systems. In spite of the recent progress in online learning, many challenges remain unsolved when… (more)

Subjects/Keywords: Computer Science; Electrical Engineering; machine learning; online learning; multi-armed bandits

APA (6th Edition):

Liu, F. (2020). Efficient Online Learning with Bandit Feedback. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1587680990430268

Chicago Manual of Style (16th Edition):

Liu, Fang. “Efficient Online Learning with Bandit Feedback.” 2020. Doctoral Dissertation, The Ohio State University. Accessed April 14, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587680990430268.

MLA Handbook (7th Edition):

Liu, Fang. “Efficient Online Learning with Bandit Feedback.” 2020. Web. 14 Apr 2021.

Vancouver:

Liu F. Efficient Online Learning with Bandit Feedback. [Internet] [Doctoral dissertation]. The Ohio State University; 2020. [cited 2021 Apr 14]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1587680990430268.

Council of Science Editors:

Liu F. Efficient Online Learning with Bandit Feedback. [Doctoral Dissertation]. The Ohio State University; 2020. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1587680990430268

14. Lagrée, Paul. Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications.

Degree: Docteur es, Informatique, 2017, Université Paris-Saclay (ComUE)

As users browse the Web, they leave behind numerous traces, which we propose to exploit in order to improve information-access applications. We… (more)

Subjects/Keywords: Adaptive methods; Online social networks; Multi-armed bandits; Recommendation; Influencer marketing

APA (6th Edition):

Lagrée, P. (2017). Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications. (Doctoral Dissertation). Université Paris-Saclay (ComUE). Retrieved from http://www.theses.fr/2017SACLS341

Chicago Manual of Style (16th Edition):

Lagrée, Paul. “Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications.” 2017. Doctoral Dissertation, Université Paris-Saclay (ComUE). Accessed April 14, 2021. http://www.theses.fr/2017SACLS341.

MLA Handbook (7th Edition):

Lagrée, Paul. “Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications.” 2017. Web. 14 Apr 2021.

Vancouver:

Lagrée P. Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications. [Internet] [Doctoral dissertation]. Université Paris-Saclay (ComUE); 2017. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2017SACLS341.

Council of Science Editors:

Lagrée P. Méthodes adaptatives pour les applications d'accès à l'information centrées sur l'utilisateur : Adaptive Methods for User-Centric Information Access Applications. [Doctoral Dissertation]. Université Paris-Saclay (ComUE); 2017. Available from: http://www.theses.fr/2017SACLS341


Cornell University

15. Chen, Bangrui. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.

Degree: PhD, Operations Research, 2017, Cornell University

 In this thesis, we study adaptive preference learning, in which a machine learning system learns users' preferences from feedback while simultaneously using these learned preferences… (more)

Subjects/Keywords: Statistics; Operations research; Computer science; adaptive preference learning; bandit feedback; dueling bandits; incentivizing exploration; information filtering; multi-armed bandits

APA (6th Edition):

Chen, B. (2017). Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/59050

Chicago Manual of Style (16th Edition):

Chen, Bangrui. “Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.” 2017. Doctoral Dissertation, Cornell University. Accessed April 14, 2021. http://hdl.handle.net/1813/59050.

MLA Handbook (7th Edition):

Chen, Bangrui. “Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.” 2017. Web. 14 Apr 2021.

Vancouver:

Chen B. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. [Internet] [Doctoral dissertation]. Cornell University; 2017. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/1813/59050.

Council of Science Editors:

Chen B. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. [Doctoral Dissertation]. Cornell University; 2017. Available from: http://hdl.handle.net/1813/59050

16. Allesiardo, Robin. Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams.

Degree: Docteur es, Informatique, 2016, Université Paris-Saclay (ComUE)

The multi-armed bandit problem is a theoretical framework for studying the trade-off between exploration and exploitation when the observed information is partial. In it, a… (more)

Subjects/Keywords: Online Learning; Multi-Armed Bandits; Non-stationarity; Machine learning
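
The abstract above concerns bandits on non-stationary data streams. One generic device for such settings is to estimate each arm's mean over a sliding window so that observations from before a distribution change are eventually forgotten (a standard tool, as in sliding-window UCB; not necessarily the thesis's own algorithms):

```python
# Sketch of a sliding-window mean estimate for drifting reward
# distributions; class and parameter names are ours.
from collections import deque

class WindowedMean:
    def __init__(self, window=500):
        self.buf = deque(maxlen=window)   # drops the oldest beyond `window`

    def update(self, reward):
        self.buf.append(reward)

    def mean(self, default=float("inf")):
        # Optimistic default forces initial exploration of unseen arms.
        return sum(self.buf) / len(self.buf) if self.buf else default

est = WindowedMean(window=3)
for r in [1, 0, 1, 1, 0]:
    est.update(r)
print(est.mean())   # mean of the last 3 rewards: (1 + 1 + 0) / 3
```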

APA (6th Edition):

Allesiardo, R. (2016). Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams. (Doctoral Dissertation). Université Paris-Saclay (ComUE). Retrieved from http://www.theses.fr/2016SACLS334

Chicago Manual of Style (16th Edition):

Allesiardo, Robin. “Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams.” 2016. Doctoral Dissertation, Université Paris-Saclay (ComUE). Accessed April 14, 2021. http://www.theses.fr/2016SACLS334.

MLA Handbook (7th Edition):

Allesiardo, Robin. “Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams.” 2016. Web. 14 Apr 2021.

Vancouver:

Allesiardo R. Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams. [Internet] [Doctoral dissertation]. Université Paris-Saclay (ComUE); 2016. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2016SACLS334.

Council of Science Editors:

Allesiardo R. Bandits Manchots sur Flux de Données Non Stationnaires : Multi-armed Bandits on non Stationary Data Streams. [Doctoral Dissertation]. Université Paris-Saclay (ComUE); 2016. Available from: http://www.theses.fr/2016SACLS334


Boston College

17. Baisi Hadad, Vitor. Essays in Econometrics and Dynamic Kidney Exchange.

Degree: PhD, Economics, 2018, Boston College

This dissertation is divided into two parts. Part I - Dynamic Kidney Exchange. In recent years, kidney paired donation (KPD) has emerged as an… (more)

Subjects/Keywords: Correlated random coefficients; Dynamic kidney exchange; Kidney Paired Donation; Multi-armed bandits

APA (6th Edition):

Baisi Hadad, V. (2018). Essays in Econometrics and Dynamic Kidney Exchange. (Doctoral Dissertation). Boston College. Retrieved from http://dlib.bc.edu/islandora/object/bc-ir:107962

Chicago Manual of Style (16th Edition):

Baisi Hadad, Vitor. “Essays in Econometrics and Dynamic Kidney Exchange.” 2018. Doctoral Dissertation, Boston College. Accessed April 14, 2021. http://dlib.bc.edu/islandora/object/bc-ir:107962.

MLA Handbook (7th Edition):

Baisi Hadad, Vitor. “Essays in Econometrics and Dynamic Kidney Exchange.” 2018. Web. 14 Apr 2021.

Vancouver:

Baisi Hadad V. Essays in Econometrics and Dynamic Kidney Exchange. [Internet] [Doctoral dissertation]. Boston College; 2018. [cited 2021 Apr 14]. Available from: http://dlib.bc.edu/islandora/object/bc-ir:107962.

Council of Science Editors:

Baisi Hadad V. Essays in Econometrics and Dynamic Kidney Exchange. [Doctoral Dissertation]. Boston College; 2018. Available from: http://dlib.bc.edu/islandora/object/bc-ir:107962


Delft University of Technology

18. Dingjan, Mitchell. Exploring Exploration in Recommender Systems: Where? How Much? For Whom?.

Degree: 2020, Delft University of Technology

Recommender systems focus on automatically surfacing suitable items for users from digital collections that are too large for the user to oversee themselves. A considerable… (more)

Subjects/Keywords: Recommender Systems; Exploration; Personalization; Filtering; User Preferences; Taste Broadening; Multi-Armed Bandits

APA (6th Edition):

Dingjan, M. (2020). Exploring Exploration in Recommender Systems: Where? How Much? For Whom?. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:ec22c976-7c3f-4101-80b4-ab0f3acea0eb

Chicago Manual of Style (16th Edition):

Dingjan, Mitchell. “Exploring Exploration in Recommender Systems: Where? How Much? For Whom?.” 2020. Masters Thesis, Delft University of Technology. Accessed April 14, 2021. http://resolver.tudelft.nl/uuid:ec22c976-7c3f-4101-80b4-ab0f3acea0eb.

MLA Handbook (7th Edition):

Dingjan, Mitchell. “Exploring Exploration in Recommender Systems: Where? How Much? For Whom?.” 2020. Web. 14 Apr 2021.

Vancouver:

Dingjan M. Exploring Exploration in Recommender Systems: Where? How Much? For Whom?. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Apr 14]. Available from: http://resolver.tudelft.nl/uuid:ec22c976-7c3f-4101-80b4-ab0f3acea0eb.

Council of Science Editors:

Dingjan M. Exploring Exploration in Recommender Systems: Where? How Much? For Whom?. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:ec22c976-7c3f-4101-80b4-ab0f3acea0eb


Princeton University

19. Han, Weidong. Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models.

Degree: PhD, 2019, Princeton University

 We consider sequential online learning problems where the response surface is described by a nonlinear parametric model. We adopt a sampled belief model which we… (more)

Subjects/Keywords: Advertisement auctions; Dynamic programming; Multi-armed bandits; Online learning; Optimal learning; Value of information

APA (6th Edition):

Han, W. (2019). Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp019p290d227

Chicago Manual of Style (16th Edition):

Han, Weidong. “Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models.” 2019. Doctoral Dissertation, Princeton University. Accessed April 14, 2021. http://arks.princeton.edu/ark:/88435/dsp019p290d227.

MLA Handbook (7th Edition):

Han, Weidong. “Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models.” 2019. Web. 14 Apr 2021.

Vancouver:

Han W. Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2021 Apr 14]. Available from: http://arks.princeton.edu/ark:/88435/dsp019p290d227.

Council of Science Editors:

Han W. Lookahead Approximations for Online Learning with Nonlinear Parametric Belief Models. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp019p290d227

20. Hadiji, Hédi. On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques.

Degree: Docteur es, Mathématiques appliquées, 2020, université Paris-Saclay

This thesis falls within the field of sequential statistics. The main setting studied is that of stochastic multi-armed bandits, an idealized framework that models… (more)

Subjects/Keywords: Stochastic multi-armed bandits; Adaptive statistics; Upper confidence bound (UCB); Minimax optimality; Asymptotic optimality; Continuum-armed bandits
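
The keywords above name the upper confidence bound (UCB) algorithm together with minimax and asymptotic optimality. As standard background (the classical UCB1 rule of Auer et al., 2002, not the thesis's refined variants), the index and its gap-dependent regret guarantee read:

```latex
% Classical UCB1 index: \hat{\mu}_a and N_a are the empirical mean and
% pull count of arm a, \Delta_a = \mu^\star - \mu_a its suboptimality gap.
A_t \in \arg\max_{a} \left[ \hat{\mu}_a(t-1) + \sqrt{\frac{2 \ln t}{N_a(t-1)}} \right],
\qquad
R_T = T\mu^\star - \mathbb{E}\left[ \sum_{t=1}^{T} \mu_{A_t} \right]
    = O\!\left( \sum_{a \,:\, \Delta_a > 0} \frac{\ln T}{\Delta_a} \right).
```

The worst-case (minimax) regret over K-armed problems scales as \sqrt{KT} up to constants, which is the benchmark that adaptivity results of this kind are measured against.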

APA (6th Edition):

Hadiji, H. (2020). On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques. (Doctoral Dissertation). université Paris-Saclay. Retrieved from http://www.theses.fr/2020UPASM021

Chicago Manual of Style (16th Edition):

Hadiji, Hédi. “On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques.” 2020. Doctoral Dissertation, université Paris-Saclay. Accessed April 14, 2021. http://www.theses.fr/2020UPASM021.

MLA Handbook (7th Edition):

Hadiji, Hédi. “On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques.” 2020. Web. 14 Apr 2021.

Vancouver:

Hadiji H. On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques. [Internet] [Doctoral dissertation]. université Paris-Saclay; 2020. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2020UPASM021.

Council of Science Editors:

Hadiji H. On some adaptivity questions in stochastic multi-armed bandits : Sur quelques questions d'adaptation dans des problèmes de bandits stochastiques. [Doctoral Dissertation]. université Paris-Saclay; 2020. Available from: http://www.theses.fr/2020UPASM021

21. Ménard, Pierre. Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems.

Degree: Docteur es, Mathématiques appliquées, 2018, Université Toulouse III – Paul Sabatier

This thesis falls within the fields of statistical learning and sequential statistics. The main setting is that of stochastic bandit problems with… (more)

Subjects/Keywords: Stochastic multi-armed bandits; Information theory; Non-asymptotic lower bounds; Regret analysis; Asymptotic optimality; Minimax optimality; Upper confidence bound

APA (6th Edition):

Ménard, P. (2018). Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems. (Doctoral Dissertation). Université Toulouse III – Paul Sabatier. Retrieved from http://www.theses.fr/2018TOU30087

Chicago Manual of Style (16th Edition):

Ménard, Pierre. “Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems.” 2018. Doctoral Dissertation, Université Toulouse III – Paul Sabatier. Accessed April 14, 2021. http://www.theses.fr/2018TOU30087.

MLA Handbook (7th Edition):

Ménard, Pierre. “Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems.” 2018. Web. 14 Apr 2021.

Vancouver:

Ménard P. Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems. [Internet] [Doctoral dissertation]. Université Toulouse III – Paul Sabatier; 2018. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2018TOU30087.

Council of Science Editors:

Ménard P. Sur la notion d'optimalité dans les problèmes de bandit stochastique : On the notion of optimality in the stochastic multi-armed bandit problems. [Doctoral Dissertation]. Université Toulouse III – Paul Sabatier; 2018. Available from: http://www.theses.fr/2018TOU30087

22. Jedor, Matthieu. Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation.

Degree: Docteur es, Mathématiques appliquées, 2020, université Paris-Saclay

In this doctoral thesis, we study the optimization of recommender systems with the aim of providing more refined product suggestions for a… (more)

Subjects/Keywords: Reinforcement learning; Multi-armed bandits; Recommender system; E-commerce

APA (6th Edition):

Jedor, M. (2020). Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation. (Doctoral Dissertation). université Paris-Saclay. Retrieved from http://www.theses.fr/2020UPASM027

Chicago Manual of Style (16th Edition):

Jedor, Matthieu. “Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation.” 2020. Doctoral Dissertation, université Paris-Saclay. Accessed April 14, 2021. http://www.theses.fr/2020UPASM027.

MLA Handbook (7th Edition):

Jedor, Matthieu. “Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation.” 2020. Web. 14 Apr 2021.

Vancouver:

Jedor M. Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation. [Internet] [Doctoral dissertation]. université Paris-Saclay; 2020. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2020UPASM027.

Council of Science Editors:

Jedor M. Bandit algorithms for recommender system optimization : Algorithmes de bandit pour l'optimisation des systèmes de recommandation. [Doctoral Dissertation]. université Paris-Saclay; 2020. Available from: http://www.theses.fr/2020UPASM027


Université Paris-Sud – Paris XI

23. Galichet, Nicolas. Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires.

Degree: Docteur es, Informatique, 2015, Université Paris-Sud – Paris XI

This thesis falls within the field of sequential decision making in an unknown environment, and more particularly within the framework of multi-armed bandits… (more)

Subjects/Keywords: Sequential decision making; Machine learning; Multi-armed bandits; Sub-sampling; Risk-aversion; CVaR; Exploration vs Exploitation vs Safety; Linear bandits; Contextual bandits; Regret analysis
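
Among the keywords above, CVaR is the risk measure that distinguishes risk-averse bandits from mean-maximizing ones: the conditional value-at-risk at level alpha is the mean of the worst alpha-fraction of outcomes. A minimal empirical estimator, for illustration only (not the thesis's estimator):

```python
# Hedged illustration of empirical CVaR on the lower tail of rewards.
def empirical_cvar(samples, alpha=0.1):
    """Mean of the lowest alpha-fraction of samples (lower tail)."""
    ordered = sorted(samples)
    tail = ordered[: max(1, int(alpha * len(ordered)))]
    return sum(tail) / len(tail)

rewards = [0.9, 0.8, 0.85, 0.1, 0.95, 0.05, 0.9, 0.8, 0.92, 0.88]
print(empirical_cvar(rewards, alpha=0.2))  # mean of the 2 worst rewards
```

A risk-averse policy would rank arms by such tail estimates rather than by plain empirical means.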

APA (6th Edition):

Galichet, N. (2015). Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires. (Doctoral Dissertation). Université Paris-Sud – Paris XI. Retrieved from http://www.theses.fr/2015PA112242

Chicago Manual of Style (16th Edition):

Galichet, Nicolas. “Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires.” 2015. Doctoral Dissertation, Université Paris-Sud – Paris XI. Accessed April 14, 2021. http://www.theses.fr/2015PA112242.

MLA Handbook (7th Edition):

Galichet, Nicolas. “Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires.” 2015. Web. 14 Apr 2021.

Vancouver:

Galichet N. Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires. [Internet] [Doctoral dissertation]. Université Paris-Sud – Paris XI; 2015. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2015PA112242.

Council of Science Editors:

Galichet N. Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits : Contributions aux bandits manchots : gestion du risque et sous-échantillonnage pour les bandits contextuels linéaires. [Doctoral Dissertation]. Université Paris-Sud – Paris XI; 2015. Available from: http://www.theses.fr/2015PA112242


University of Illinois – Urbana-Champaign

24. Yekkehkhany, Ali. Risk-averse multi-armed bandits and game theory.

Degree: PhD, Electrical & Computer Engr, 2020, University of Illinois – Urbana-Champaign

 The multi-armed bandit (MAB) and game theory literature is mainly focused on the expected cumulative reward and the expected payoffs in a game, respectively. In… (more)

Subjects/Keywords: Online Learning; Multi-Armed Bandits; Exploration-Exploitation; Explore-Then-Commit Bandits; Risk-Aversion; Game Theory; Stochastic Game Theory; Congestion Games; Affinity Scheduling; MapReduce; Data Center
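
The keywords above include Explore-Then-Commit bandits. As a sketch of the risk-neutral textbook pattern (the thesis studies a risk-averse variant; names below are ours): pull every arm m times, then commit to the empirically best arm for the remaining rounds.

```python
# Explore-Then-Commit on Bernoulli arms: uniform exploration, then
# full commitment to the empirical leader. Illustrative only.
import random

def explore_then_commit(true_means, m, horizon, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    pull = lambda a: 1.0 if rng.random() < true_means[a] else 0.0
    means = []
    total = 0.0
    for a in range(k):                 # exploration phase: m pulls per arm
        rewards = [pull(a) for _ in range(m)]
        total += sum(rewards)
        means.append(sum(rewards) / m)
    best = max(range(k), key=means.__getitem__)
    for _ in range(horizon - k * m):   # commitment phase
        total += pull(best)
    return total

print(explore_then_commit([0.7, 0.5], m=100, horizon=10_000))
```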

APA (6th Edition):

Yekkehkhany, A. (2020). Risk-averse multi-armed bandits and game theory. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/108439

Chicago Manual of Style (16th Edition):

Yekkehkhany, Ali. “Risk-averse multi-armed bandits and game theory.” 2020. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed April 14, 2021. http://hdl.handle.net/2142/108439.

MLA Handbook (7th Edition):

Yekkehkhany, Ali. “Risk-averse multi-armed bandits and game theory.” 2020. Web. 14 Apr 2021.

Vancouver:

Yekkehkhany A. Risk-averse multi-armed bandits and game theory. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2020. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/2142/108439.

Council of Science Editors:

Yekkehkhany A. Risk-averse multi-armed bandits and game theory. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2020. Available from: http://hdl.handle.net/2142/108439


Rice University

25. Lan, Shiting. Machine Learning Techniques for Personalized Learning.

Degree: PhD, Engineering, 2016, Rice University

Recent developments in personalized learning, powered by advances in machine learning and big data, have the potential to revamp the “one-size-fits-all” approach in today’s… (more)

Subjects/Keywords: Personalized learning; machine learning; convex optimization; Bayesian nonparametrics; contextual multi-armed bandits; Kalman filtering; educational data mining
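
The keywords above include contextual multi-armed bandits. A minimal sketch in the LinUCB style, with a disjoint linear reward model per arm (our choice of illustration, not necessarily the method developed in the thesis):

```python
# Hedged LinUCB-style sketch: one ridge-regression model per arm,
# selection by upper confidence bound on the predicted reward.
import numpy as np

class LinUCBArm:
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)          # ridge-regularized Gram matrix
        self.b = np.zeros(dim)        # accumulated reward-weighted contexts
        self.alpha = alpha            # width of the confidence bonus

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b        # ridge estimate of reward weights
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# Pick the arm whose upper confidence bound is largest for context x.
arms = [LinUCBArm(dim=3) for _ in range(4)]
x = np.array([1.0, 0.2, -0.5])
choice = max(range(4), key=lambda a: arms[a].ucb(x))
arms[choice].update(x, reward=1.0)
```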

APA (6th Edition):

Lan, S. (2016). Machine Learning Techniques for Personalized Learning. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/96250

Chicago Manual of Style (16th Edition):

Lan, Shiting. “Machine Learning Techniques for Personalized Learning.” 2016. Doctoral Dissertation, Rice University. Accessed April 14, 2021. http://hdl.handle.net/1911/96250.

MLA Handbook (7th Edition):

Lan, Shiting. “Machine Learning Techniques for Personalized Learning.” 2016. Web. 14 Apr 2021.

Vancouver:

Lan S. Machine Learning Techniques for Personalized Learning. [Internet] [Doctoral dissertation]. Rice University; 2016. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/1911/96250.

Council of Science Editors:

Lan S. Machine Learning Techniques for Personalized Learning. [Doctoral Dissertation]. Rice University; 2016. Available from: http://hdl.handle.net/1911/96250


Universitat Pompeu Fabra

26. Wilhelmi Roca, Francesc. Towards spatial reuse in future wireless local area networks: a sequential learning approach.

Degree: Departament de Tecnologies de la Informació i les Comunicacions, 2020, Universitat Pompeu Fabra

Spatial reuse (SR) operation is gaining momentum for the latest family of IEEE 802.11 standards because of the overwhelming requirements posed by the networks… (more)

Subjects/Keywords: Artificial Intelligence; IEEE 802.11ax; Multi-armed bandits; Sequential learning; Spatial reuse; WLAN; 62

APA (6th Edition):

Wilhelmi Roca, F. (2020). Towards spatial reuse in future wireless local area networks: a sequential learning approach. (Thesis). Universitat Pompeu Fabra. Retrieved from http://hdl.handle.net/10803/669970

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wilhelmi Roca, Francesc. “Towards spatial reuse in future wireless local area networks: a sequential learning approach.” 2020. Thesis, Universitat Pompeu Fabra. Accessed April 14, 2021. http://hdl.handle.net/10803/669970.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wilhelmi Roca, Francesc. “Towards spatial reuse in future wireless local area networks: a sequential learning approach.” 2020. Web. 14 Apr 2021.

Vancouver:

Wilhelmi Roca F. Towards spatial reuse in future wireless local area networks: a sequential learning approach. [Internet] [Thesis]. Universitat Pompeu Fabra; 2020. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/10803/669970.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wilhelmi Roca F. Towards spatial reuse in future wireless local area networks: a sequential learning approach. [Thesis]. Universitat Pompeu Fabra; 2020. Available from: http://hdl.handle.net/10803/669970

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

27. Dorff, Rebecca. Modelling Infertility with Markov Chains.

Degree: MS, 2013, Brigham Young University

Infertility affects approximately 15% of couples. Testing and interventions are costly in time, money, and emotional energy. This paper discusses using Markov decision and multi-armed bandit processes to identify a systematic approach to interventions that will lead to the desired baby while minimizing costs.

Subjects/Keywords: Infertility; Medical Diagnosis; Markov Decision Processes; Multi-armed Bandits; Mathematics

APA (6th Edition):

Dorff, R. (2013). Modelling Infertility with Markov Chains. (Masters Thesis). Brigham Young University. Retrieved from https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=5069&context=etd

Chicago Manual of Style (16th Edition):

Dorff, Rebecca. “Modelling Infertility with Markov Chains.” 2013. Masters Thesis, Brigham Young University. Accessed April 14, 2021. https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=5069&context=etd.

MLA Handbook (7th Edition):

Dorff, Rebecca. “Modelling Infertility with Markov Chains.” 2013. Web. 14 Apr 2021.

Vancouver:

Dorff R. Modelling Infertility with Markov Chains. [Internet] [Masters thesis]. Brigham Young University; 2013. [cited 2021 Apr 14]. Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=5069&context=etd.

Council of Science Editors:

Dorff R. Modelling Infertility with Markov Chains. [Masters Thesis]. Brigham Young University; 2013. Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=5069&context=etd


Université de Lorraine

28. Collet, Timothé. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.

Degree: Docteur es, Informatique, 2016, Université de Lorraine

Classification relies on a dataset labeled by an expert. The larger the dataset, the better the performance of… (more)

Subjects/Keywords: Optimism in the Face of Uncertainty; Classification; Active Learning; Multi-armed Bandit; 006.33

APA (6th Edition):

Collet, T. (2016). Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. (Doctoral Dissertation). Université de Lorraine. Retrieved from http://www.theses.fr/2016LORR0084

Chicago Manual of Style (16th Edition):

Collet, Timothé. “Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.” 2016. Doctoral Dissertation, Université de Lorraine. Accessed April 14, 2021. http://www.theses.fr/2016LORR0084.

MLA Handbook (7th Edition):

Collet, Timothé. “Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.” 2016. Web. 14 Apr 2021.

Vancouver:

Collet T. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. [Internet] [Doctoral dissertation]. Université de Lorraine; 2016. [cited 2021 Apr 14]. Available from: http://www.theses.fr/2016LORR0084.

Council of Science Editors:

Collet T. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. [Doctoral Dissertation]. Université de Lorraine; 2016. Available from: http://www.theses.fr/2016LORR0084


National University of Ireland – Galway

29. Hassan, Umair ul. Adaptive task assignment in spatial crowdsourcing.

Degree: 2016, National University of Ireland – Galway

 Spatial crowdsourcing has emerged as a new paradigm for solving difficult problems in the physical world. It engages a large number of human workers in… (more)

Subjects/Keywords: Spatial crowdsourcing; Crowdsourcing; Task assignment; Online algorithms; Multi-armed bandit; Combinatorial bandits; Fractional optimization; Location diversity; Agent-based simulation; Data analytics

APA (6th Edition):

Hassan, U. u. (2016). Adaptive task assignment in spatial crowdsourcing. (Thesis). National University of Ireland – Galway. Retrieved from http://hdl.handle.net/10379/6035

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hassan, Umair ul. “Adaptive task assignment in spatial crowdsourcing.” 2016. Thesis, National University of Ireland – Galway. Accessed April 14, 2021. http://hdl.handle.net/10379/6035.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hassan, Umair ul. “Adaptive task assignment in spatial crowdsourcing.” 2016. Web. 14 Apr 2021.

Vancouver:

Hassan Uu. Adaptive task assignment in spatial crowdsourcing. [Internet] [Thesis]. National University of Ireland – Galway; 2016. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/10379/6035.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hassan Uu. Adaptive task assignment in spatial crowdsourcing. [Thesis]. National University of Ireland – Galway; 2016. Available from: http://hdl.handle.net/10379/6035

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

30. Lattimore, Finnian Rachel. Learning how to act: making good decisions with machine learning.

Degree: 2017, Australian National University

 This thesis is about machine learning and statistical approaches to decision making. How can we learn from data to anticipate the consequence of, and optimally… (more)

Subjects/Keywords: machine learning; causal inference; causality; reinforcement learning; multi-armed bandits

APA (6th Edition):

Lattimore, F. R. (2017). Learning how to act: making good decisions with machine learning. (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/144602

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lattimore, Finnian Rachel. “Learning how to act: making good decisions with machine learning.” 2017. Thesis, Australian National University. Accessed April 14, 2021. http://hdl.handle.net/1885/144602.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lattimore, Finnian Rachel. “Learning how to act: making good decisions with machine learning.” 2017. Web. 14 Apr 2021.

Vancouver:

Lattimore FR. Learning how to act: making good decisions with machine learning. [Internet] [Thesis]. Australian National University; 2017. [cited 2021 Apr 14]. Available from: http://hdl.handle.net/1885/144602.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lattimore FR. Learning how to act: making good decisions with machine learning. [Thesis]. Australian National University; 2017. Available from: http://hdl.handle.net/1885/144602

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
