
You searched for subject:(Multi Armed Bandit Mechanisms). Showing records 1 – 30 of 19384 total matches.



Kansas State University

1. Chatterjee, Ranojoy. Evaluation of performance: multi-armed bandit vs. contextual bandit.

Degree: MS, Department of Computer Science, 2019, Kansas State University

 This work compares two methods, the multi-armed bandit (MAB) and contextual multi-armed bandit (CMAB), for action recommendation in a sequential decision making domain. It empirically… (more)

Subjects/Keywords: Multi-armed bandit


APA (6th Edition):

Chatterjee, R. (2019). Evaluation of performance: multi-armed bandit vs. contextual bandit. (Masters Thesis). Kansas State University. Retrieved from http://hdl.handle.net/2097/40287

Chicago Manual of Style (16th Edition):

Chatterjee, Ranojoy. “Evaluation of performance: multi-armed bandit vs. contextual bandit.” 2019. Masters Thesis, Kansas State University. Accessed October 26, 2020. http://hdl.handle.net/2097/40287.

MLA Handbook (7th Edition):

Chatterjee, Ranojoy. “Evaluation of performance: multi-armed bandit vs. contextual bandit.” 2019. Web. 26 Oct 2020.

Vancouver:

Chatterjee R. Evaluation of performance: multi-armed bandit vs. contextual bandit. [Internet] [Masters thesis]. Kansas State University; 2019. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2097/40287.

Council of Science Editors:

Chatterjee R. Evaluation of performance: multi-armed bandit vs. contextual bandit. [Masters Thesis]. Kansas State University; 2019. Available from: http://hdl.handle.net/2097/40287
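Record 1 above compares the plain multi-armed bandit (MAB) with the contextual multi-armed bandit (CMAB). As a rough sketch of that distinction — not the thesis's own code, and assuming Bernoulli-reward arms — an epsilon-greedy MAB keeps a single value estimate per arm, while a contextual variant keeps a separate estimate table for each observed context:

```python
import random

def epsilon_greedy_mab(true_means, steps=5000, epsilon=0.1, seed=0):
    """Context-free multi-armed bandit: one running value estimate per arm."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    est = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))                    # explore
        else:
            arm = max(range(len(true_means)), key=est.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0     # Bernoulli reward
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]               # incremental mean
    return est

def epsilon_greedy_contextual(means_by_context, steps=5000, epsilon=0.1, seed=0):
    """Contextual variant: a separate estimate table per observed context."""
    rng = random.Random(seed)
    contexts = sorted(means_by_context)
    n_arms = len(next(iter(means_by_context.values())))
    counts = {c: [0] * n_arms for c in contexts}
    est = {c: [0.0] * n_arms for c in contexts}
    for _ in range(steps):
        c = rng.choice(contexts)                                    # observe a context
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=est[c].__getitem__)
        reward = 1.0 if rng.random() < means_by_context[c][arm] else 0.0
        counts[c][arm] += 1
        est[c][arm] += (reward - est[c][arm]) / counts[c][arm]
    return est
```

The contextual learner can recover a different best arm per context (e.g. arm 0 under context "A", arm 1 under context "B"), which the context-free learner cannot represent.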


University of Alberta

2. Joulani, Pooria. Multi-Armed Bandit Problems under Delayed Feedback.

Degree: MS, Department of Computing Science, 2012, University of Alberta

 In this thesis, the multi-armed bandit (MAB) problem in online learning is studied, when the feedback information is not observed immediately but rather after arbitrary,… (more)

Subjects/Keywords: Online Learning; Multi-Armed Bandit; Delayed Feedback


APA (6th Edition):

Joulani, P. (2012). Multi-Armed Bandit Problems under Delayed Feedback. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/d504rm03n

Chicago Manual of Style (16th Edition):

Joulani, Pooria. “Multi-Armed Bandit Problems under Delayed Feedback.” 2012. Masters Thesis, University of Alberta. Accessed October 26, 2020. https://era.library.ualberta.ca/files/d504rm03n.

MLA Handbook (7th Edition):

Joulani, Pooria. “Multi-Armed Bandit Problems under Delayed Feedback.” 2012. Web. 26 Oct 2020.

Vancouver:

Joulani P. Multi-Armed Bandit Problems under Delayed Feedback. [Internet] [Masters thesis]. University of Alberta; 2012. [cited 2020 Oct 26]. Available from: https://era.library.ualberta.ca/files/d504rm03n.

Council of Science Editors:

Joulani P. Multi-Armed Bandit Problems under Delayed Feedback. [Masters Thesis]. University of Alberta; 2012. Available from: https://era.library.ualberta.ca/files/d504rm03n
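Record 2 studies the MAB problem when feedback arrives only after a delay. A minimal illustration of the setting (an assumption-laden sketch, not the thesis's algorithm): pulled arms' rewards enter a queue, and the learner's estimates are updated only when the delayed feedback actually arrives:

```python
import random
from collections import deque

def delayed_epsilon_greedy(true_means, delay=50, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy where each reward is observed only `delay` rounds after
    the arm is pulled; estimates update on arrival, not at play time."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    est = [0.0] * n
    pending = deque()  # (arrival_round, arm, reward), arrival rounds increasing
    for t in range(steps):
        # apply any feedback that has arrived by round t
        while pending and pending[0][0] <= t:
            _, arm, reward = pending.popleft()
            counts[arm] += 1
            est[arm] += (reward - est[arm]) / counts[arm]
        if rng.random() < epsilon:
            arm = rng.randrange(n)                      # explore
        else:
            arm = max(range(n), key=est.__getitem__)    # exploit current estimates
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        pending.append((t + delay, arm, reward))        # reward observed later
    return est
```

During the first `delay` rounds the learner acts on no feedback at all, which is exactly the difficulty the delayed-feedback setting formalizes.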

3. Landgren, Peter. Distributed Multi-agent Multi-armed Bandits.

Degree: PhD, 2019, Princeton University

 Social decision-making is a common feature of both natural and artificial systems. Humans, animals, and machines routinely communicate and observe each other to improve their… (more)

Subjects/Keywords: Decision-making; Distributed control; MAB; Multi-agent Multi-armed Bandit; Multi-armed Bandit; Network analysis and control


APA (6th Edition):

Landgren, P. (2019). Distributed Multi-agent Multi-armed Bandits. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01c534fr72c

Chicago Manual of Style (16th Edition):

Landgren, Peter. “Distributed Multi-agent Multi-armed Bandits.” 2019. Doctoral Dissertation, Princeton University. Accessed October 26, 2020. http://arks.princeton.edu/ark:/88435/dsp01c534fr72c.

MLA Handbook (7th Edition):

Landgren, Peter. “Distributed Multi-agent Multi-armed Bandits.” 2019. Web. 26 Oct 2020.

Vancouver:

Landgren P. Distributed Multi-agent Multi-armed Bandits. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2020 Oct 26]. Available from: http://arks.princeton.edu/ark:/88435/dsp01c534fr72c.

Council of Science Editors:

Landgren P. Distributed Multi-agent Multi-armed Bandits. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp01c534fr72c


Indian Institute of Science

4. Prakash, Gujar Sujit. Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings.

Degree: PhD, Faculty of Engineering, 2012, Indian Institute of Science

 Allocation of objects or resources to competing agents is a ubiquitous problem in the real world. For example, a federal government may wish to allocate… (more)

Subjects/Keywords: Investments (Economics)- Allocation; Mechanism Design Theory; Heterogeneous Objects; Dynamic House Allocation; Multi-Unit Combinatorial Auctions; Optimal Combinatorial Auctions; Dynamic Matching; Multi-Armed Bandit Mechanisms; Two-Sided Markets; Financial Economics


APA (6th Edition):

Prakash, G. S. (2012). Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/1654

Chicago Manual of Style (16th Edition):

Prakash, Gujar Sujit. “Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings.” 2012. Doctoral Dissertation, Indian Institute of Science. Accessed October 26, 2020. http://etd.iisc.ac.in/handle/2005/1654.

MLA Handbook (7th Edition):

Prakash, Gujar Sujit. “Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings.” 2012. Web. 26 Oct 2020.

Vancouver:

Prakash GS. Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2012. [cited 2020 Oct 26]. Available from: http://etd.iisc.ac.in/handle/2005/1654.

Council of Science Editors:

Prakash GS. Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings. [Doctoral Dissertation]. Indian Institute of Science; 2012. Available from: http://etd.iisc.ac.in/handle/2005/1654


NSYSU

5. Chien, Zhi-hua. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.

Degree: Master, Information Management, 2016, NSYSU

 The contextual multi-armed bandit (CMAB) problem is widely used for online recommendation of articles, music, movies, and more. One leading algorithm for contextual bandits is… (more)

Subjects/Keywords: LinUCB; Contextual Bandit Problem; Stock Recommendation; Contextual Multi-Armed Bandit; Personalized Recommendation System


APA (6th Edition):

Chien, Z. (2016). Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chien, Zhi-hua. “Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.” 2016. Thesis, NSYSU. Accessed October 26, 2020. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chien, Zhi-hua. “Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.” 2016. Web. 26 Oct 2020.

Vancouver:

Chien Z. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. [Internet] [Thesis]. NSYSU; 2016. [cited 2020 Oct 26]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chien Z. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. [Thesis]. NSYSU; 2016. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
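Record 5's keywords name LinUCB, a standard linear contextual-bandit algorithm. A minimal disjoint-LinUCB sketch for two arms and 2-dimensional contexts (illustrative only; the helper names `inv2`, `mat_vec`, and `dot` and the toy reward rule are ours, not from the thesis):

```python
import math

def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

class LinUCB:
    """Disjoint LinUCB, d=2: per arm, A = I + sum(x xT), b = sum(r x);
    score(x) = theta.x + alpha * sqrt(x A^-1 x) with theta = A^-1 b."""
    def __init__(self, n_arms, alpha=1.0):
        self.alpha = alpha
        self.A = [[[1.0, 0.0], [0.0, 1.0]] for _ in range(n_arms)]
        self.b = [[0.0, 0.0] for _ in range(n_arms)]

    def choose(self, x):
        best, best_p = 0, -math.inf
        for a, (A, b) in enumerate(zip(self.A, self.b)):
            A_inv = inv2(A)
            theta = mat_vec(A_inv, b)       # ridge-regression estimate
            p = dot(theta, x) + self.alpha * math.sqrt(dot(x, mat_vec(A_inv, x)))
            if p > best_p:
                best, best_p = a, p
        return best

    def update(self, arm, x, reward):
        A, b = self.A[arm], self.b[arm]
        for i in range(2):
            for j in range(2):
                A[i][j] += x[i] * x[j]
            b[i] += reward * x[i]

# toy usage: arm 0 pays under context [1,0], arm 1 under [0,1]
bandit = LinUCB(2)
for t in range(200):
    x = [1.0, 0.0] if t % 2 == 0 else [0.0, 1.0]
    arm = bandit.choose(x)
    bandit.update(arm, x, 1.0 if (arm == 0) == (x[0] == 1.0) else 0.0)
```

After training, the policy picks the context-appropriate arm, which is the core promise of contextual over context-free bandits.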


Indian Institute of Science

6. Chatterjee, Aritra. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.

Degree: MSc Engg, Faculty of Engineering, 2018, Indian Institute of Science

 The multi-armed bandit (MAB) problem provides a convenient abstraction for many online decision problems arising in modern applications including Internet display advertising, crowdsourcing, online procurement,… (more)

Subjects/Keywords: Thompson Sampling; Multi-Armed Bandit Problem; Upper Confidence Bound (UCB); Awake Upper Estimated Reward; Multi-Armed Bandit Algorithms; Sleeping Multi-Armed Bandit Model; TS-SMAB; Sleeping Multi-Armed Bandit (SMAB) Problem; Computer Science


APA (6th Edition):

Chatterjee, A. (2018). A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. (Masters Thesis). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3631

Chicago Manual of Style (16th Edition):

Chatterjee, Aritra. “A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.” 2018. Masters Thesis, Indian Institute of Science. Accessed October 26, 2020. http://etd.iisc.ac.in/handle/2005/3631.

MLA Handbook (7th Edition):

Chatterjee, Aritra. “A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.” 2018. Web. 26 Oct 2020.

Vancouver:

Chatterjee A. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. [Internet] [Masters thesis]. Indian Institute of Science; 2018. [cited 2020 Oct 26]. Available from: http://etd.iisc.ac.in/handle/2005/3631.

Council of Science Editors:

Chatterjee A. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. [Masters Thesis]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3631
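Record 6 applies Thompson sampling to the sleeping-bandit setting, where only a subset of arms is available ("awake") in each round. A minimal Beta-Bernoulli sketch of that idea (our illustration, not the thesis's TS-SMAB algorithm; the availability model is an assumption):

```python
import random

def sleeping_thompson(true_means, steps=3000, p_awake=0.7, seed=0):
    """Beta-Bernoulli Thompson sampling restricted to awake arms: each round,
    draw one posterior sample per available arm and play the highest draw."""
    rng = random.Random(seed)
    n = len(true_means)
    succ = [0] * n
    fail = [0] * n
    for _ in range(steps):
        awake = [a for a in range(n) if rng.random() < p_awake]
        if not awake:                 # guarantee at least one available arm
            awake = list(range(n))
        # posterior draw theta_a ~ Beta(successes+1, failures+1), awake arms only
        draws = {a: rng.betavariate(succ[a] + 1, fail[a] + 1) for a in awake}
        arm = max(draws, key=draws.get)
        if rng.random() < true_means[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    # posterior mean reward per arm
    return [(succ[a] + 1) / (succ[a] + fail[a] + 2) for a in range(n)]
```

Even arms that are dominated overall still get played (and hence estimated) in rounds where the better arms are asleep, which is what distinguishes this setting from the standard MAB.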


University of Houston

7. Le, Thanh Dang, 1984-. Sequential learning for passive monitoring of multi-channel wireless networks.

Degree: MSin Electrical Engineering, Electrical Engineering, 2013, University of Houston

 With the requirement for increasing efficiency of wireless spectrum usage, the cognitive radio technique has been emerging as an important solution. Passive monitoring over wireless… (more)

Subjects/Keywords: Sequential learning; Wireless monitoring; Multi-armed bandit; Electrical engineering


APA (6th Edition):

Le, T. D. (2013). Sequential learning for passive monitoring of multi-channel wireless networks. (Masters Thesis). University of Houston. Retrieved from http://hdl.handle.net/10657/998

Chicago Manual of Style (16th Edition):

Le, Thanh Dang. “Sequential learning for passive monitoring of multi-channel wireless networks.” 2013. Masters Thesis, University of Houston. Accessed October 26, 2020. http://hdl.handle.net/10657/998.

MLA Handbook (7th Edition):

Le, Thanh Dang. “Sequential learning for passive monitoring of multi-channel wireless networks.” 2013. Web. 26 Oct 2020.

Vancouver:

Le TD. Sequential learning for passive monitoring of multi-channel wireless networks. [Internet] [Masters thesis]. University of Houston; 2013. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10657/998.

Council of Science Editors:

Le TD. Sequential learning for passive monitoring of multi-channel wireless networks. [Masters Thesis]. University of Houston; 2013. Available from: http://hdl.handle.net/10657/998


University of Victoria

8. Chen, Mianlong. Minimizing age of information for semi-periodic arrivals of multiple packets.

Degree: Department of Computer Science, 2019, University of Victoria

 Age of information (AoI) captures the freshness of information and has been used broadly for scheduling data transmission in the Internet of Things (IoT). We… (more)

Subjects/Keywords: age of information; restless multi-armed bandit problem


APA (6th Edition):

Chen, M. (2019). Minimizing age of information for semi-periodic arrivals of multiple packets. (Masters Thesis). University of Victoria. Retrieved from http://hdl.handle.net/1828/11350

Chicago Manual of Style (16th Edition):

Chen, Mianlong. “Minimizing age of information for semi-periodic arrivals of multiple packets.” 2019. Masters Thesis, University of Victoria. Accessed October 26, 2020. http://hdl.handle.net/1828/11350.

MLA Handbook (7th Edition):

Chen, Mianlong. “Minimizing age of information for semi-periodic arrivals of multiple packets.” 2019. Web. 26 Oct 2020.

Vancouver:

Chen M. Minimizing age of information for semi-periodic arrivals of multiple packets. [Internet] [Masters thesis]. University of Victoria; 2019. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/1828/11350.

Council of Science Editors:

Chen M. Minimizing age of information for semi-periodic arrivals of multiple packets. [Masters Thesis]. University of Victoria; 2019. Available from: http://hdl.handle.net/1828/11350


University of Illinois – Urbana-Champaign

9. Liao, De. A multi-armed bandit approach for batch mode active learning on information networks.

Degree: MS, Computer Science, 2016, University of Illinois – Urbana-Champaign

 We propose an adaptive batch mode active learning algorithm, MABAL (Multi-Armed Bandit for Active Learning), for classification on heterogeneous information networks. Observing the parallels between… (more)

Subjects/Keywords: Active learning; Heterogeneous information networks; Multi-armed bandit


APA (6th Edition):

Liao, D. (2016). A multi-armed bandit approach for batch mode active learning on information networks. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/90788

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Liao, De. “A multi-armed bandit approach for batch mode active learning on information networks.” 2016. Thesis, University of Illinois – Urbana-Champaign. Accessed October 26, 2020. http://hdl.handle.net/2142/90788.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Liao, De. “A multi-armed bandit approach for batch mode active learning on information networks.” 2016. Web. 26 Oct 2020.

Vancouver:

Liao D. A multi-armed bandit approach for batch mode active learning on information networks. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2016. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/2142/90788.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Liao D. A multi-armed bandit approach for batch mode active learning on information networks. [Thesis]. University of Illinois – Urbana-Champaign; 2016. Available from: http://hdl.handle.net/2142/90788

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Université Paris-Sud – Paris XI

10. Wang, Kehao. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.

Degree: Docteur es, Informatique, 2012, Université Paris-Sud – Paris XI

In this thesis, we address the fundamental problem of opportunistic spectrum access in a multi-channel communication system. More precisely, we consider a system… (more)

Subjects/Keywords: Multi-Channel opportunistic access; Restless Multi-Armed Bandit; Myopic Policy; Stochastic Optimization


APA (6th Edition):

Wang, K. (2012). Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. (Doctoral Dissertation). Université Paris-Sud – Paris XI. Retrieved from http://www.theses.fr/2012PA112103

Chicago Manual of Style (16th Edition):

Wang, Kehao. “Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.” 2012. Doctoral Dissertation, Université Paris-Sud – Paris XI. Accessed October 26, 2020. http://www.theses.fr/2012PA112103.

MLA Handbook (7th Edition):

Wang, Kehao. “Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.” 2012. Web. 26 Oct 2020.

Vancouver:

Wang K. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. [Internet] [Doctoral dissertation]. Université Paris-Sud – Paris XI; 2012. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2012PA112103.

Council of Science Editors:

Wang K. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. [Doctoral Dissertation]. Université Paris-Sud – Paris XI; 2012. Available from: http://www.theses.fr/2012PA112103


Delft University of Technology

11. Liang, Yu. An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms.

Degree: 2017, Delft University of Technology

 News recommendation is a field different from traditional recommendation fields. News articles are created and deleted continuously with a very short life cycle. Users' preference… (more)

Subjects/Keywords: News recommendation; Contextual bandit algorithms; Multi-armed bandit; Context; Recommender systems; Online evaluation; Offline evaluation; Ensemble recommender


APA (6th Edition):

Liang, Y. (2017). An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:45600ff4-3d22-4093-baf9-f8df40dcc211

Chicago Manual of Style (16th Edition):

Liang, Yu. “An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms.” 2017. Masters Thesis, Delft University of Technology. Accessed October 26, 2020. http://resolver.tudelft.nl/uuid:45600ff4-3d22-4093-baf9-f8df40dcc211.

MLA Handbook (7th Edition):

Liang, Yu. “An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms.” 2017. Web. 26 Oct 2020.

Vancouver:

Liang Y. An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms. [Internet] [Masters thesis]. Delft University of Technology; 2017. [cited 2020 Oct 26]. Available from: http://resolver.tudelft.nl/uuid:45600ff4-3d22-4093-baf9-f8df40dcc211.

Council of Science Editors:

Liang Y. An Ensemble Approach for News Recommendation Based on Contextual Bandit Algorithms. [Masters Thesis]. Delft University of Technology; 2017. Available from: http://resolver.tudelft.nl/uuid:45600ff4-3d22-4093-baf9-f8df40dcc211

12. Gutowski, Nicolas. Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities.

Degree: Docteur es, Informatique, 2019, Angers

Multi-armed bandit algorithms for context-aware recommendation systems are currently the subject of numerous studies. To address the challenges of this… (more)

Subjects/Keywords: Multi-Armed Bandit; Reinforcement Learning; Recommendation System; Context; Diversity; Individual Accuracy


APA (6th Edition):

Gutowski, N. (2019). Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities. (Doctoral Dissertation). Angers. Retrieved from http://www.theses.fr/2019ANGE0030

Chicago Manual of Style (16th Edition):

Gutowski, Nicolas. “Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities.” 2019. Doctoral Dissertation, Angers. Accessed October 26, 2020. http://www.theses.fr/2019ANGE0030.

MLA Handbook (7th Edition):

Gutowski, Nicolas. “Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities.” 2019. Web. 26 Oct 2020.

Vancouver:

Gutowski N. Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities. [Internet] [Doctoral dissertation]. Angers; 2019. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2019ANGE0030.

Council of Science Editors:

Gutowski N. Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente : Context-aware recommendation systems for cultural events recommendation in Smart Cities. [Doctoral Dissertation]. Angers; 2019. Available from: http://www.theses.fr/2019ANGE0030


University of California – Santa Cruz

13. Dai, Liang. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.

Degree: Technology and Information Management, 2014, University of California – Santa Cruz

 Online experiments are widely used in online advertising and web development to compare the effects (e.g., click-through rate, conversion rate) of different versions. Among all the… (more)

Subjects/Keywords: Information science; Computer science; A/B testing; Multi-armed Bandit; Online Experiment; Statistical Uncertainty


APA (6th Edition):

Dai, L. (2014). Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. (Thesis). University of California – Santa Cruz. Retrieved from http://www.escholarship.org/uc/item/1hm5t2z6

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dai, Liang. “Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.” 2014. Thesis, University of California – Santa Cruz. Accessed October 26, 2020. http://www.escholarship.org/uc/item/1hm5t2z6.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dai, Liang. “Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.” 2014. Web. 26 Oct 2020.

Vancouver:

Dai L. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. [Internet] [Thesis]. University of California – Santa Cruz; 2014. [cited 2020 Oct 26]. Available from: http://www.escholarship.org/uc/item/1hm5t2z6.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dai L. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. [Thesis]. University of California – Santa Cruz; 2014. Available from: http://www.escholarship.org/uc/item/1hm5t2z6

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of California – Santa Cruz

14. Rahmanian, Holakou. Online Learning of Combinatorial Objects.

Degree: Computer Science, 2018, University of California – Santa Cruz

 This thesis develops algorithms for learning combinatorial objects. A combinatorial object is a structured concept composed of components. Examples are permutations, Huffman trees, binary search… (more)

Subjects/Keywords: Computer science; Artificial intelligence; combinatorial objects; machine learning; multi-armed bandit; online learning; structured concepts


APA (6th Edition):

Rahmanian, H. (2018). Online Learning of Combinatorial Objects. (Thesis). University of California – Santa Cruz. Retrieved from http://www.escholarship.org/uc/item/7kw5d47f

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Rahmanian, Holakou. “Online Learning of Combinatorial Objects.” 2018. Thesis, University of California – Santa Cruz. Accessed October 26, 2020. http://www.escholarship.org/uc/item/7kw5d47f.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Rahmanian, Holakou. “Online Learning of Combinatorial Objects.” 2018. Web. 26 Oct 2020.

Vancouver:

Rahmanian H. Online Learning of Combinatorial Objects. [Internet] [Thesis]. University of California – Santa Cruz; 2018. [cited 2020 Oct 26]. Available from: http://www.escholarship.org/uc/item/7kw5d47f.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Rahmanian H. Online Learning of Combinatorial Objects. [Thesis]. University of California – Santa Cruz; 2018. Available from: http://www.escholarship.org/uc/item/7kw5d47f

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

15. BATIKAN UNAL, AHMET. Online Learning for Energy Efficient Navigation using Contextual Information.

Degree: Chalmers tekniska högskola / Institutionen för data och informationsteknik, 2020, Chalmers University of Technology

 Accurately predicting the energy consumption of road segments is an important topic in electric vehicles that might alleviate the range concerns if it is addressed… (more)

Subjects/Keywords: Contextual combinatorial multi-armed bandit; online learning; electric vehicles; energy consumption prediction; computer science


APA (6th Edition):

BATIKAN UNAL, A. (2020). Online Learning for Energy Efficient Navigation using Contextual Information. (Thesis). Chalmers University of Technology. Retrieved from http://hdl.handle.net/20.500.12380/301396

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

BATIKAN UNAL, AHMET. “Online Learning for Energy Efficient Navigation using Contextual Information.” 2020. Thesis, Chalmers University of Technology. Accessed October 26, 2020. http://hdl.handle.net/20.500.12380/301396.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

BATIKAN UNAL, AHMET. “Online Learning for Energy Efficient Navigation using Contextual Information.” 2020. Web. 26 Oct 2020.

Vancouver:

BATIKAN UNAL A. Online Learning for Energy Efficient Navigation using Contextual Information. [Internet] [Thesis]. Chalmers University of Technology; 2020. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/20.500.12380/301396.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

BATIKAN UNAL A. Online Learning for Energy Efficient Navigation using Contextual Information. [Thesis]. Chalmers University of Technology; 2020. Available from: http://hdl.handle.net/20.500.12380/301396

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Minnesota

16. Arya, Sakshi. Contextual Bandits With Delayed Feedback Using Randomized Allocation.

Degree: PhD, Statistics, 2020, University of Minnesota

 Contextual bandit problems are important for sequential learning in various practical settings that require balancing the exploration-exploitation trade-off to maximize total rewards. Motivated by applications… (more)

Subjects/Keywords: contextual; Multi-armed bandit problem; nonparametric regression; regret; sequential analysis; strong consistency

APA (6th Edition):

Arya, S. (2020). Contextual Bandits With Delayed Feedback Using Randomized Allocation. (Doctoral Dissertation). University of Minnesota. Retrieved from http://hdl.handle.net/11299/215062

Chicago Manual of Style (16th Edition):

Arya, Sakshi. “Contextual Bandits With Delayed Feedback Using Randomized Allocation.” 2020. Doctoral Dissertation, University of Minnesota. Accessed October 26, 2020. http://hdl.handle.net/11299/215062.

MLA Handbook (7th Edition):

Arya, Sakshi. “Contextual Bandits With Delayed Feedback Using Randomized Allocation.” 2020. Web. 26 Oct 2020.

Vancouver:

Arya S. Contextual Bandits With Delayed Feedback Using Randomized Allocation. [Internet] [Doctoral dissertation]. University of Minnesota; 2020. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/11299/215062.

Council of Science Editors:

Arya S. Contextual Bandits With Delayed Feedback Using Randomized Allocation. [Doctoral Dissertation]. University of Minnesota; 2020. Available from: http://hdl.handle.net/11299/215062


Princeton University

17. Reverdy, Paul Benjamin. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems.

Degree: PhD, 2014, Princeton University

 Search is a ubiquitous human activity. It is a rational response to the uncertainty inherent in the tasks we seek to accomplish in our daily… (more)

Subjects/Keywords: Bayesian machine learning; Heuristic algorithms; Human-in-the-loop system; Multi-armed bandit

APA (6th Edition):

Reverdy, P. B. (2014). Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01d504rn558

Chicago Manual of Style (16th Edition):

Reverdy, Paul Benjamin. “Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems.” 2014. Doctoral Dissertation, Princeton University. Accessed October 26, 2020. http://arks.princeton.edu/ark:/88435/dsp01d504rn558.

MLA Handbook (7th Edition):

Reverdy, Paul Benjamin. “Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems.” 2014. Web. 26 Oct 2020.

Vancouver:

Reverdy PB. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems. [Internet] [Doctoral dissertation]. Princeton University; 2014. [cited 2020 Oct 26]. Available from: http://arks.princeton.edu/ark:/88435/dsp01d504rn558.

Council of Science Editors:

Reverdy PB. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems. [Doctoral Dissertation]. Princeton University; 2014. Available from: http://arks.princeton.edu/ark:/88435/dsp01d504rn558

18. Bouneffouf, Djallel. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.

Degree: Docteur es, Informatique, 2013, Evry, Institut national des télécommunications

The immense quantity of information generated and managed every day by information systems and their users inevitably leads to the problem of information overload. In this… (more)

Subjects/Keywords: Apprentissage automatique; Système de recommandation; Système de recommandation sensible au contexte; Apprentissage par renforcement; Bandit manchot; Bandit manchot contextuel; UCB; Système sensible au risque; Machine learning; Recommender system; Context-aware recommender system; Reinforcement learning; Multi-armed bandit; Contextual multi-armed bandit; UCB; Risk awareness

APA (6th Edition):

Bouneffouf, D. (2013). DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. (Doctoral Dissertation). Evry, Institut national des télécommunications. Retrieved from http://www.theses.fr/2013TELE0031

Chicago Manual of Style (16th Edition):

Bouneffouf, Djallel. “DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.” 2013. Doctoral Dissertation, Evry, Institut national des télécommunications. Accessed October 26, 2020. http://www.theses.fr/2013TELE0031.

MLA Handbook (7th Edition):

Bouneffouf, Djallel. “DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.” 2013. Web. 26 Oct 2020.

Vancouver:

Bouneffouf D. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. [Internet] [Doctoral dissertation]. Evry, Institut national des télécommunications; 2013. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2013TELE0031.

Council of Science Editors:

Bouneffouf D. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. [Doctoral Dissertation]. Evry, Institut national des télécommunications; 2013. Available from: http://www.theses.fr/2013TELE0031


Cornell University

19. Chen, Bangrui. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.

Degree: PhD, Operations Research, 2017, Cornell University

 In this thesis, we study adaptive preference learning, in which a machine learning system learns users' preferences from feedback while simultaneously using these learned preferences… (more)

Subjects/Keywords: Statistics; Operations research; Computer science; adaptive preference learning; bandit feedback; dueling bandits; incentivizing exploration; information filtering; multi-armed bandits

APA (6th Edition):

Chen, B. (2017). Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/59050

Chicago Manual of Style (16th Edition):

Chen, Bangrui. “Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.” 2017. Doctoral Dissertation, Cornell University. Accessed October 26, 2020. http://hdl.handle.net/1813/59050.

MLA Handbook (7th Edition):

Chen, Bangrui. “Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration.” 2017. Web. 26 Oct 2020.

Vancouver:

Chen B. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. [Internet] [Doctoral dissertation]. Cornell University; 2017. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/1813/59050.

Council of Science Editors:

Chen B. Adaptive Preference Learning With Bandit Feedback: Information Filtering, Dueling Bandits and Incentivizing Exploration. [Doctoral Dissertation]. Cornell University; 2017. Available from: http://hdl.handle.net/1813/59050


Indian Institute of Science

20. Satyanath Bhat, K. Design of Quality Assuring Mechanisms with Learning for Strategic Crowds.

Degree: PhD, Faculty of Engineering, 2018, Indian Institute of Science

 In this thesis, we address several generic problems concerned with procurement of tasks from a crowd that consists of strategic workers with uncertainty in their… (more)

Subjects/Keywords: Crowd Sourcing; Expert Sourcing; Interdependent Values; Multi-Armed Bandit Auction; Game Theory; Mechanism Design; MAB Mechanism Design Environment; Computer Science

APA (6th Edition):

Satyanath Bhat, K. (2018). Design of Quality Assuring Mechanisms with Learning for Strategic Crowds. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3597

Chicago Manual of Style (16th Edition):

Satyanath Bhat, K. “Design of Quality Assuring Mechanisms with Learning for Strategic Crowds.” 2018. Doctoral Dissertation, Indian Institute of Science. Accessed October 26, 2020. http://etd.iisc.ac.in/handle/2005/3597.

MLA Handbook (7th Edition):

Satyanath Bhat, K. “Design of Quality Assuring Mechanisms with Learning for Strategic Crowds.” 2018. Web. 26 Oct 2020.

Vancouver:

Satyanath Bhat K. Design of Quality Assuring Mechanisms with Learning for Strategic Crowds. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2018. [cited 2020 Oct 26]. Available from: http://etd.iisc.ac.in/handle/2005/3597.

Council of Science Editors:

Satyanath Bhat K. Design of Quality Assuring Mechanisms with Learning for Strategic Crowds. [Doctoral Dissertation]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3597

21. Tourkaman, Mahan. Regret Minimization in the Gain Estimation Problem.

Degree: Electrical Engineering and Computer Science (EECS), 2019, KTH

 A novel approach to the gain estimation problem, using a multi-armed bandit formulation, is studied. The gain estimation problem deals with the problem of estimating… (more)

Subjects/Keywords: Exploration vs exploitation; gain estimation; multi-armed bandit; regret minimization; system identification.; Electrical Engineering, Electronic Engineering, Information Engineering; Elektroteknik och elektronik

APA (6th Edition):

Tourkaman, M. (2019). Regret Minimization in the Gain Estimation Problem. (Thesis). KTH. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254234

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tourkaman, Mahan. “Regret Minimization in the Gain Estimation Problem.” 2019. Thesis, KTH. Accessed October 26, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254234.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tourkaman, Mahan. “Regret Minimization in the Gain Estimation Problem.” 2019. Web. 26 Oct 2020.

Vancouver:

Tourkaman M. Regret Minimization in the Gain Estimation Problem. [Internet] [Thesis]. KTH; 2019. [cited 2020 Oct 26]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254234.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tourkaman M. Regret Minimization in the Gain Estimation Problem. [Thesis]. KTH; 2019. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254234

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Toronto

22. Shahrokhi Tehrani, Shervin. A Heuristic Approach to Explore: Value of Perfect Information.

Degree: PhD, 2018, University of Toronto

 How do consumers choose in a dynamic stochastic environment when they face uncertainty about the return from their choice? The classical solution to this problem… (more)

Subjects/Keywords: Bellman Equation; Exploration-Exploitation Tradeoff; Forward-looking Behavior; Heuristics; Multi-armed bandit; Value of Perfect Information; 0338

APA (6th Edition):

Shahrokhi Tehrani, S. (2018). A Heuristic Approach to Explore: Value of Perfect Information. (Doctoral Dissertation). University of Toronto. Retrieved from http://hdl.handle.net/1807/92040

Chicago Manual of Style (16th Edition):

Shahrokhi Tehrani, Shervin. “A Heuristic Approach to Explore: Value of Perfect Information.” 2018. Doctoral Dissertation, University of Toronto. Accessed October 26, 2020. http://hdl.handle.net/1807/92040.

MLA Handbook (7th Edition):

Shahrokhi Tehrani, Shervin. “A Heuristic Approach to Explore: Value of Perfect Information.” 2018. Web. 26 Oct 2020.

Vancouver:

Shahrokhi Tehrani S. A Heuristic Approach to Explore: Value of Perfect Information. [Internet] [Doctoral dissertation]. University of Toronto; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/1807/92040.

Council of Science Editors:

Shahrokhi Tehrani S. A Heuristic Approach to Explore: Value of Perfect Information. [Doctoral Dissertation]. University of Toronto; 2018. Available from: http://hdl.handle.net/1807/92040


University of Ontario Institute of Technology

23. Zandi, Marjan. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.

Degree: 2014, University of Ontario Institute of Technology

 This thesis is concerned with dynamic spectrum access in cognitive radio networks. The main objective is designing online learning and access policies which maximize the… (more)

Subjects/Keywords: Dynamic spectrum access (DSA); Access policies; Cognitive radio networks; Auction-based formulation; Decentralized multi-armed bandit (DMAB); Online learning

APA (6th Edition):

Zandi, M. (2014). Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. (Thesis). University of Ontario Institute of Technology. Retrieved from http://hdl.handle.net/10155/476

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zandi, Marjan. “Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.” 2014. Thesis, University of Ontario Institute of Technology. Accessed October 26, 2020. http://hdl.handle.net/10155/476.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zandi, Marjan. “Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.” 2014. Web. 26 Oct 2020.

Vancouver:

Zandi M. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. [Internet] [Thesis]. University of Ontario Institute of Technology; 2014. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10155/476.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zandi M. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. [Thesis]. University of Ontario Institute of Technology; 2014. Available from: http://hdl.handle.net/10155/476

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Duke University

24. Modaresi, Sajad. Data-Driven Learning Models with Applications to Retail Operations.

Degree: 2018, Duke University

 Data-driven approaches to decision-making under uncertainty are at the center of many operational problems. These are problems in which there is an element of… (more)

Subjects/Keywords: Business administration; Active Learning; Assortment Personalization; Combinatorial Optimization; Data-Driven Decision-Making; Multi-Armed Bandit; Retail Operations

APA (6th Edition):

Modaresi, S. (2018). Data-Driven Learning Models with Applications to Retail Operations. (Thesis). Duke University. Retrieved from http://hdl.handle.net/10161/17459

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Modaresi, Sajad. “Data-Driven Learning Models with Applications to Retail Operations.” 2018. Thesis, Duke University. Accessed October 26, 2020. http://hdl.handle.net/10161/17459.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Modaresi, Sajad. “Data-Driven Learning Models with Applications to Retail Operations.” 2018. Web. 26 Oct 2020.

Vancouver:

Modaresi S. Data-Driven Learning Models with Applications to Retail Operations. [Internet] [Thesis]. Duke University; 2018. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10161/17459.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Modaresi S. Data-Driven Learning Models with Applications to Retail Operations. [Thesis]. Duke University; 2018. Available from: http://hdl.handle.net/10161/17459

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

25. Divya, Padmanabhan. New Methods for Learning from Heterogeneous and Strategic Agents.

Degree: PhD, Faculty of Engineering, 2018, Indian Institute of Science

 In this doctoral thesis, we address several representative problems that arise in the context of learning from multiple heterogeneous agents. These problems are… (more)

Subjects/Keywords: Crowdsourcing; Heterogeneous Noisy Agents; Bayesian Learning; Linear Regression; Multi-label Classification; Learning Algorithms; Bayesian Linear Regression; Heterogeneous Strategic Agents; Multi-Armed Bandit Mechanism; Multi-armed Bandit Problems; Multiple Noisy Sources; Computer Science

APA (6th Edition):

Divya, P. (2018). New Methods for Learning from Heterogeneous and Strategic Agents. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3562

Chicago Manual of Style (16th Edition):

Divya, Padmanabhan. “New Methods for Learning from Heterogeneous and Strategic Agents.” 2018. Doctoral Dissertation, Indian Institute of Science. Accessed October 26, 2020. http://etd.iisc.ac.in/handle/2005/3562.

MLA Handbook (7th Edition):

Divya, Padmanabhan. “New Methods for Learning from Heterogeneous and Strategic Agents.” 2018. Web. 26 Oct 2020.

Vancouver:

Divya P. New Methods for Learning from Heterogeneous and Strategic Agents. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2018. [cited 2020 Oct 26]. Available from: http://etd.iisc.ac.in/handle/2005/3562.

Council of Science Editors:

Divya P. New Methods for Learning from Heterogeneous and Strategic Agents. [Doctoral Dissertation]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3562

26. Gajane, Pratik. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.

Degree: Docteur es, Informatique, 2017, Lille 3

In this thesis, we study sequential decision-making problems in which, for each of its decisions, the learner receives a piece of information that it uses… (more)

Subjects/Keywords: Bandits Multi-Bras; Retour D’information Partielle; Dueling Bandits; Corrupt Bandits; Évaluation du Ranker; Vie Privée Différentielle; Multi-Armed Bandit; Partial Feedback; Dueling Bandits; Corrupt Bandits; Ranker Evaluation; Differential Privacy

APA (6th Edition):

Gajane, P. (2017). Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. (Doctoral Dissertation). Lille 3. Retrieved from http://www.theses.fr/2017LIL30045

Chicago Manual of Style (16th Edition):

Gajane, Pratik. “Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.” 2017. Doctoral Dissertation, Lille 3. Accessed October 26, 2020. http://www.theses.fr/2017LIL30045.

MLA Handbook (7th Edition):

Gajane, Pratik. “Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle.” 2017. Web. 26 Oct 2020.

Vancouver:

Gajane P. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. [Internet] [Doctoral dissertation]. Lille 3; 2017. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2017LIL30045.

Council of Science Editors:

Gajane P. Multi-armed bandits with unconventional feedback : Bandits multi-armés avec rétroaction partielle. [Doctoral Dissertation]. Lille 3; 2017. Available from: http://www.theses.fr/2017LIL30045

27. Clement, Benjamin. Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique.

Degree: Docteur es, Informatique, 2018, Bordeaux

 Can computers teach? To answer this question, research on Intelligent Tutoring Systems is expanding rapidly within the community working… (more)

Subjects/Keywords: Système Tuteur Intelligent; Enseignement Adaptatif; Théorie du Flow; Motivation Intrinsèque; Algorithme de Bandit Multi Bras; Modèle d’apprenant; Serious Game; Intelligent Tutoring System; Adaptive Teaching; Flow Theory; Intrinsic Motivation; Multi-Armed Bandit; Learner Model; Serious Game

APA (6th Edition):

Clement, B. (2018). Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique. (Doctoral Dissertation). Bordeaux. Retrieved from http://www.theses.fr/2018BORD0373

Chicago Manual of Style (16th Edition):

Clement, Benjamin. “Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique.” 2018. Doctoral Dissertation, Bordeaux. Accessed October 26, 2020. http://www.theses.fr/2018BORD0373.

MLA Handbook (7th Edition):

Clement, Benjamin. “Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique.” 2018. Web. 26 Oct 2020.

Vancouver:

Clement B. Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique. [Internet] [Doctoral dissertation]. Bordeaux; 2018. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2018BORD0373.

Council of Science Editors:

Clement B. Adaptive Personalization of Pedagogical Sequences using Machine Learning : Personalisation Adaptative de Séquences Pédagogique à l'aide d'Apprentissage Automatique. [Doctoral Dissertation]. Bordeaux; 2018. Available from: http://www.theses.fr/2018BORD0373


RMIT University

28. Kazimipour, B. Towards a more efficient use of computational budget in large-scale black-box optimization.

Degree: 2019, RMIT University

 Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models,… (more)

Subjects/Keywords: Fields of Research; Large-scale optimization; Black-box optimization; Reinforcement learning; Dimension reduction; Multi-armed bandit; Evolutionary algorithm; Metaheuristic; Evolutionary computation; Differential evolution algorithm; Population initialization

APA (6th Edition):

Kazimipour, B. (2019). Towards a more efficient use of computational budget in large-scale black-box optimization. (Thesis). RMIT University. Retrieved from http://researchbank.rmit.edu.au/view/rmit:162882

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kazimipour, B. “Towards a more efficient use of computational budget in large-scale black-box optimization.” 2019. Thesis, RMIT University. Accessed October 26, 2020. http://researchbank.rmit.edu.au/view/rmit:162882.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kazimipour, B. “Towards a more efficient use of computational budget in large-scale black-box optimization.” 2019. Web. 26 Oct 2020.

Vancouver:

Kazimipour B. Towards a more efficient use of computational budget in large-scale black-box optimization. [Internet] [Thesis]. RMIT University; 2019. [cited 2020 Oct 26]. Available from: http://researchbank.rmit.edu.au/view/rmit:162882.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kazimipour B. Towards a more efficient use of computational budget in large-scale black-box optimization. [Thesis]. RMIT University; 2019. Available from: http://researchbank.rmit.edu.au/view/rmit:162882

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Université de Lorraine

29. Collet, Timothé. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.

Degree: Docteur es, Informatique, 2016, Université de Lorraine

Classification relies on a dataset labeled by an expert. The larger the dataset, the better the performance of… (more)

Subjects/Keywords: Optimisme face à l'incertitude; Classification; Apprentissage actif; Bandits à bras multiples; Optimism in the Face of Uncertainty; Classification; Active Learning; Multi-armed Bandit; 006.33

APA (6th Edition):

Collet, T. (2016). Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. (Doctoral Dissertation). Université de Lorraine. Retrieved from http://www.theses.fr/2016LORR0084

Chicago Manual of Style (16th Edition):

Collet, Timothé. “Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.” 2016. Doctoral Dissertation, Université de Lorraine. Accessed October 26, 2020. http://www.theses.fr/2016LORR0084.

MLA Handbook (7th Edition):

Collet, Timothé. “Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification.” 2016. Web. 26 Oct 2020.

Vancouver:

Collet T. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. [Internet] [Doctoral dissertation]. Université de Lorraine; 2016. [cited 2020 Oct 26]. Available from: http://www.theses.fr/2016LORR0084.

Council of Science Editors:

Collet T. Méthodes optimistes d’apprentissage actif pour la classification : Optimistic Methods in Active Learning for Classification. [Doctoral Dissertation]. Université de Lorraine; 2016. Available from: http://www.theses.fr/2016LORR0084


National University of Ireland – Galway

30. Hassan, Umair ul. Adaptive task assignment in spatial crowdsourcing.

Degree: 2016, National University of Ireland – Galway

 Spatial crowdsourcing has emerged as a new paradigm for solving difficult problems in the physical world. It engages a large number of human workers in… (more)

Subjects/Keywords: Spatial crowdsourcing; Crowdsourcing; Task assignment; Online algorithms; Multi-armed bandit; Combinatorial bandits; Fractional optimization; Location diversity; Agent-based simulation; Data analytics

APA (6th Edition):

Hassan, U. u. (2016). Adaptive task assignment in spatial crowdsourcing. (Thesis). National University of Ireland – Galway. Retrieved from http://hdl.handle.net/10379/6035

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hassan, Umair ul. “Adaptive task assignment in spatial crowdsourcing.” 2016. Thesis, National University of Ireland – Galway. Accessed October 26, 2020. http://hdl.handle.net/10379/6035.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hassan, Umair ul. “Adaptive task assignment in spatial crowdsourcing.” 2016. Web. 26 Oct 2020.

Vancouver:

Hassan Uu. Adaptive task assignment in spatial crowdsourcing. [Internet] [Thesis]. National University of Ireland – Galway; 2016. [cited 2020 Oct 26]. Available from: http://hdl.handle.net/10379/6035.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hassan Uu. Adaptive task assignment in spatial crowdsourcing. [Thesis]. National University of Ireland – Galway; 2016. Available from: http://hdl.handle.net/10379/6035

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
