You searched for subject:(Bandit). Showing records 1 – 30 of 76 total matches.

University of Alberta

1. Joulani, Pooria. Multi-Armed Bandit Problems under Delayed Feedback.

Degree: MS, Department of Computing Science, 2012, University of Alberta

 In this thesis, the multi-armed bandit (MAB) problem in online learning is studied, when the feedback information is not observed immediately but rather after arbitrary,… (more)

Subjects/Keywords: Online Learning; Multi-Armed Bandit; Delayed Feedback

APA (6th Edition):

Joulani, P. (2012). Multi-Armed Bandit Problems under Delayed Feedback. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/d504rm03n

Chicago Manual of Style (16th Edition):

Joulani, Pooria. “Multi-Armed Bandit Problems under Delayed Feedback.” 2012. Masters Thesis, University of Alberta. Accessed November 19, 2019. https://era.library.ualberta.ca/files/d504rm03n.

MLA Handbook (7th Edition):

Joulani, Pooria. “Multi-Armed Bandit Problems under Delayed Feedback.” 2012. Web. 19 Nov 2019.

Vancouver:

Joulani P. Multi-Armed Bandit Problems under Delayed Feedback. [Internet] [Masters thesis]. University of Alberta; 2012. [cited 2019 Nov 19]. Available from: https://era.library.ualberta.ca/files/d504rm03n.

Council of Science Editors:

Joulani P. Multi-Armed Bandit Problems under Delayed Feedback. [Masters Thesis]. University of Alberta; 2012. Available from: https://era.library.ualberta.ca/files/d504rm03n
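The delayed-feedback setting this thesis studies can be illustrated with a minimal simulation (a sketch under simplifying assumptions, not the algorithm from the thesis): a standard UCB1 learner whose Bernoulli rewards only become observable a fixed number of rounds after each pull. The arm means, the fixed delay, and the function name are all illustrative.

```python
import math
import random
from collections import deque

def delayed_ucb1(means, horizon, delay, seed=0):
    """UCB1 on Bernoulli arms where each reward is observed `delay` rounds late."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # pulls whose reward has been observed, per arm
    sums = [0.0] * k          # observed reward totals, per arm
    pending = deque()         # (arrival_round, arm, reward), in arrival order
    picks = []
    for t in range(horizon):
        # absorb any feedback that has arrived by round t
        while pending and pending[0][0] <= t:
            _, arm, r = pending.popleft()
            counts[arm] += 1
            sums[arm] += r
        if min(counts) == 0:
            # until an arm has observed feedback, keep forcing it
            arm = counts.index(0)
        else:
            n = sum(counts)
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(n) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        pending.append((t + delay, arm, reward))   # reward surfaces later
        picks.append(arm)
    return picks

picks = delayed_ucb1([0.3, 0.7], horizon=2000, delay=5)
```

With a small constant delay the learner simply re-pulls under-observed arms during the warm-up; the thesis's point is to quantify how regret degrades as such delays grow.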


NSYSU

2. Chien, Zhi-hua. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.

Degree: Master, Information Management, 2016, NSYSU

 The contextual multi-armed bandit (CMAB) problem is commonly used for recommendation in online applications such as articles, music, and movies. One leading algorithm for contextual bandits is… (more)

Subjects/Keywords: LinUCB; Contextual Bandit Problem; Stock Recommendation; Contextual Multi-Armed Bandit; Personalized Recommendation System

APA (6th Edition):

Chien, Z. (2016). Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chien, Zhi-hua. “Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.” 2016. Thesis, NSYSU. Accessed November 19, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chien, Zhi-hua. “Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market.” 2016. Web. 19 Nov 2019.

Vancouver:

Chien Z. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. [Internet] [Thesis]. NSYSU; 2016. [cited 2019 Nov 19]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chien Z. Using Contextual Multi-Armed Bandit Algorithms for Recommending Investment in Stock Market. [Thesis]. NSYSU; 2016. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0703116-130605

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
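LinUCB, named in this record's keywords as a leading contextual-bandit algorithm, maintains one ridge regression per arm and adds an exploration bonus proportional to the prediction's uncertainty. A minimal sketch of the standard disjoint-LinUCB update (illustrative only; class and function names are invented, and this is not the thesis's implementation):

```python
import numpy as np

class LinUCBArm:
    """Disjoint LinUCB: one ridge regression per arm, UCB on the predicted payoff."""
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)        # Gram matrix A = I + sum of x x^T over pulls
        self.b = np.zeros(d)      # sum of reward-weighted contexts

    def ucb(self, x):
        theta = np.linalg.solve(self.A, self.b)            # ridge estimate
        width = self.alpha * np.sqrt(x @ np.linalg.solve(self.A, x))
        return x @ theta + width                           # optimism bonus

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def choose(arms, x):
    """Play the arm with the highest upper confidence bound for context x."""
    return max(range(len(arms)), key=lambda i: arms[i].ucb(x))

# toy usage: arm 0 pays off on this context, arm 1 does not
arms = [LinUCBArm(d=2), LinUCBArm(d=2)]
for _ in range(10):
    arms[0].update(np.array([1.0, 0.0]), 1.0)
    arms[1].update(np.array([1.0, 0.0]), 0.0)
best = choose(arms, np.array([1.0, 0.0]))
```

As the Gram matrix grows along directions the arm has seen, the bonus term shrinks there, so exploration concentrates on unfamiliar contexts.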

3. Zhong, Hongliang. Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective.

Degree: Docteur es, Informatique, 2016, Ecole centrale de Marseille

Bandit problems form a sequence of dynamic allocation. On the one hand, the system agent must explore its environment (namely the machine arms)… (more)

Subjects/Keywords: Bandit feedback; Classification; Passive-Aggressive algorithm; Pareto front

APA (6th Edition):

Zhong, H. (2016). Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective. (Doctoral Dissertation). Ecole centrale de Marseille. Retrieved from http://www.theses.fr/2016ECDM0004

Chicago Manual of Style (16th Edition):

Zhong, Hongliang. “Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective.” 2016. Doctoral Dissertation, Ecole centrale de Marseille. Accessed November 19, 2019. http://www.theses.fr/2016ECDM0004.

MLA Handbook (7th Edition):

Zhong, Hongliang. “Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective.” 2016. Web. 19 Nov 2019.

Vancouver:

Zhong H. Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective. [Internet] [Doctoral dissertation]. Ecole centrale de Marseille; 2016. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2016ECDM0004.

Council of Science Editors:

Zhong H. Bandit feedback in Classification and Multi-objective Optimization : La rétroaction de bandit sur classification et optimization multi-objective. [Doctoral Dissertation]. Ecole centrale de Marseille; 2016. Available from: http://www.theses.fr/2016ECDM0004

4. Louëdec, Jonathan. Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems.

Degree: Docteur es, Informatique, 2016, Université Toulouse III – Paul Sabatier

Current recommender systems need to recommend relevant items to users (exploitation), but to do so they must also be able to continuously obtain… (more)

Subjects/Keywords: Bandit strategies; Real-time learning; Recommender systems; Information retrieval

APA (6th Edition):

Louëdec, J. (2016). Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems. (Doctoral Dissertation). Université Toulouse III – Paul Sabatier. Retrieved from http://www.theses.fr/2016TOU30257

Chicago Manual of Style (16th Edition):

Louëdec, Jonathan. “Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems.” 2016. Doctoral Dissertation, Université Toulouse III – Paul Sabatier. Accessed November 19, 2019. http://www.theses.fr/2016TOU30257.

MLA Handbook (7th Edition):

Louëdec, Jonathan. “Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems.” 2016. Web. 19 Nov 2019.

Vancouver:

Louëdec J. Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems. [Internet] [Doctoral dissertation]. Université Toulouse III – Paul Sabatier; 2016. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2016TOU30257.

Council of Science Editors:

Louëdec J. Stratégies de bandit pour les systèmes de recommandation : Bandit strategies for recommender systems. [Doctoral Dissertation]. Université Toulouse III – Paul Sabatier; 2016. Available from: http://www.theses.fr/2016TOU30257

5. Saadane, Sofiane. Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime.

Degree: Docteur es, Mathématiques appliquées, 2016, Université Toulouse III – Paul Sabatier

 In this thesis, we study topics centered on stochastic algorithms, which is why we begin this manuscript with general background on… (more)

Subjects/Keywords: Stochastic algorithms; Stochastic optimisation; Memory algorithm; Bandit; McKean-Vlasov

APA (6th Edition):

Saadane, S. (2016). Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime. (Doctoral Dissertation). Université Toulouse III – Paul Sabatier. Retrieved from http://www.theses.fr/2016TOU30203

Chicago Manual of Style (16th Edition):

Saadane, Sofiane. “Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime.” 2016. Doctoral Dissertation, Université Toulouse III – Paul Sabatier. Accessed November 19, 2019. http://www.theses.fr/2016TOU30203.

MLA Handbook (7th Edition):

Saadane, Sofiane. “Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime.” 2016. Web. 19 Nov 2019.

Vancouver:

Saadane S. Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime. [Internet] [Doctoral dissertation]. Université Toulouse III – Paul Sabatier; 2016. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2016TOU30203.

Council of Science Editors:

Saadane S. Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire : Stochastic algorithms for learning, optimization and approximation of the steady regime. [Doctoral Dissertation]. Université Toulouse III – Paul Sabatier; 2016. Available from: http://www.theses.fr/2016TOU30203

6. Li, Yaqin. "Bandit Suppression" in Manchukuo (1932-1945).

Degree: PhD, 2012, Princeton University

 Manchukuo was a state that was established by the Japanese military in Manchuria (Northeast China) in 1932 and collapsed in 1945 with the defeat of… (more)

Subjects/Keywords: Bandit; Bandit Suppression; Manchukuo; State building

From the table of contents: Chapter Four—The Practice of “Bandit Suppression”; The “Bandit Law”; Epilogue: The Chinese Communist Party’s “Bandit Suppression” in Post-war Northeast China; Table 1: Number of Bandit Raids in the South Manchuria Railway Zone (based on the size of bandit gangs).

APA (6th Edition):

Li, Y. (2012). "Bandit Suppression" in Manchukuo (1932-1945). (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp011v53jx017

Chicago Manual of Style (16th Edition):

Li, Yaqin. “"Bandit Suppression" in Manchukuo (1932-1945).” 2012. Doctoral Dissertation, Princeton University. Accessed November 19, 2019. http://arks.princeton.edu/ark:/88435/dsp011v53jx017.

MLA Handbook (7th Edition):

Li, Yaqin. “"Bandit Suppression" in Manchukuo (1932-1945).” 2012. Web. 19 Nov 2019.

Vancouver:

Li Y. "Bandit Suppression" in Manchukuo (1932-1945). [Internet] [Doctoral dissertation]. Princeton University; 2012. [cited 2019 Nov 19]. Available from: http://arks.princeton.edu/ark:/88435/dsp011v53jx017.

Council of Science Editors:

Li Y. "Bandit Suppression" in Manchukuo (1932-1945). [Doctoral Dissertation]. Princeton University; 2012. Available from: http://arks.princeton.edu/ark:/88435/dsp011v53jx017


Princeton University

7. Landgren, Peter. Distributed Multi-agent Multi-armed Bandits.

Degree: PhD, 2019, Princeton University

 Social decision-making is a common feature of both natural and artificial systems. Humans, animals, and machines routinely communicate and observe each other to improve their… (more)

Subjects/Keywords: Decision-making; Distributed control; MAB; Multi-agent Multi-armed Bandit; Multi-armed Bandit; Network analysis and control

APA (6th Edition):

Landgren, P. (2019). Distributed Multi-agent Multi-armed Bandits. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01c534fr72c

Chicago Manual of Style (16th Edition):

Landgren, Peter. “Distributed Multi-agent Multi-armed Bandits.” 2019. Doctoral Dissertation, Princeton University. Accessed November 19, 2019. http://arks.princeton.edu/ark:/88435/dsp01c534fr72c.

MLA Handbook (7th Edition):

Landgren, Peter. “Distributed Multi-agent Multi-armed Bandits.” 2019. Web. 19 Nov 2019.

Vancouver:

Landgren P. Distributed Multi-agent Multi-armed Bandits. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2019 Nov 19]. Available from: http://arks.princeton.edu/ark:/88435/dsp01c534fr72c.

Council of Science Editors:

Landgren P. Distributed Multi-agent Multi-armed Bandits. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp01c534fr72c


Universiteit Utrecht

8. Puglierin, F. A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems.

Degree: 2012, Universiteit Utrecht

 In this thesis a new metaheuristic for combinatorial optimization is proposed, with focus on the Quadratic Assignment Problem as the hard-problem of choice - a… (more)

Subjects/Keywords: combinatorial optimization; bandit; QAP; Quadratic Assignment Problem; metaheuristic; memetic; hybrid

APA (6th Edition):

Puglierin, F. (2012). A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/255702

Chicago Manual of Style (16th Edition):

Puglierin, F. “A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems.” 2012. Masters Thesis, Universiteit Utrecht. Accessed November 19, 2019. http://dspace.library.uu.nl:8080/handle/1874/255702.

MLA Handbook (7th Edition):

Puglierin, F. “A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems.” 2012. Web. 19 Nov 2019.

Vancouver:

Puglierin F. A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems. [Internet] [Masters thesis]. Universiteit Utrecht; 2012. [cited 2019 Nov 19]. Available from: http://dspace.library.uu.nl:8080/handle/1874/255702.

Council of Science Editors:

Puglierin F. A Bandit-Inspired Memetic Algorithm for Quadratic Assignment Problems. [Masters Thesis]. Universiteit Utrecht; 2012. Available from: http://dspace.library.uu.nl:8080/handle/1874/255702


Indian Institute of Science

9. Chatterjee, Aritra. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.

Degree: 2017, Indian Institute of Science

 The multi-armed bandit (MAB) problem provides a convenient abstraction for many online decision problems arising in modern applications including Internet display advertising, crowdsourcing, online procurement,… (more)

Subjects/Keywords: Thompson Sampling; Multi-Armed Bandit Problem; Upper Confidence Bound (UCB); Awake Upper Estimated Reward; Multi-Armed Bandit Algorithms; Sleeping Multi-Armed Bandit Model; TS-SMAB; Sleeping Multi-Armed Bandit (SMAB) Problem; Computer Science

APA (6th Edition):

Chatterjee, A. (2017). A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. (Thesis). Indian Institute of Science. Retrieved from http://etd.iisc.ernet.in/2005/3631 ; http://etd.iisc.ernet.in/abstracts/4501/G28478-Abs.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chatterjee, Aritra. “A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.” 2017. Thesis, Indian Institute of Science. Accessed November 19, 2019. http://etd.iisc.ernet.in/2005/3631 ; http://etd.iisc.ernet.in/abstracts/4501/G28478-Abs.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chatterjee, Aritra. “A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem.” 2017. Web. 19 Nov 2019.

Vancouver:

Chatterjee A. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. [Internet] [Thesis]. Indian Institute of Science; 2017. [cited 2019 Nov 19]. Available from: http://etd.iisc.ernet.in/2005/3631 ; http://etd.iisc.ernet.in/abstracts/4501/G28478-Abs.pdf.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chatterjee A. A Study of Thompson Sampling Approach for the Sleeping Multi-Armed Bandit Problem. [Thesis]. Indian Institute of Science; 2017. Available from: http://etd.iisc.ernet.in/2005/3631 ; http://etd.iisc.ernet.in/abstracts/4501/G28478-Abs.pdf

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
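Thompson Sampling, the approach this thesis studies, draws one sample from each arm's posterior and plays the argmax; in the sleeping-bandit setting the argmax is restricted to the arms available (awake) that round. A toy Beta-Bernoulli sketch (the random-availability model, parameters, and function name are invented for illustration, not taken from the thesis):

```python
import random

def sleeping_thompson(means, horizon, awake_prob=0.8, seed=0):
    """Beta(1,1)-prior Thompson Sampling for Bernoulli arms that may sleep."""
    rng = random.Random(seed)
    k = len(means)
    alpha = [1] * k   # posterior Beta alpha (successes + 1)
    beta = [1] * k    # posterior Beta beta (failures + 1)
    history = []
    for _ in range(horizon):
        # each arm is independently awake this round; force one if all sleep
        awake = [a for a in range(k) if rng.random() < awake_prob] \
                or [rng.randrange(k)]
        # sample from each awake arm's posterior, play the largest sample
        arm = max(awake, key=lambda a: rng.betavariate(alpha[a], beta[a]))
        if rng.random() < means[arm]:
            alpha[arm] += 1
        else:
            beta[arm] += 1
        history.append(arm)
    return history

history = sleeping_thompson([0.2, 0.8], horizon=3000)
```

Restricting the per-round argmax to the awake set is the natural TS analogue of the awake-upper-estimated-reward idea in the keywords: the posterior sampling itself is unchanged, only the competition each round shrinks.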


University of California – Berkeley

10. Li, Jian. Models of Information Acquisition under Ambiguity.

Degree: Economics, 2012, University of California – Berkeley

 This dissertation studies models of dynamic choices under uncertainty with endogenous information acquisition. In particular we are interested in exploring the interactions between ambiguity attitudes… (more)

Subjects/Keywords: Economics; Economic theory; Applied mathematics; ambiguity aversion; bandit problem; information acquisition

APA (6th Edition):

Li, J. (2012). Models of Information Acquisition under Ambiguity. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/16j0g7nd

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Li, Jian. “Models of Information Acquisition under Ambiguity.” 2012. Thesis, University of California – Berkeley. Accessed November 19, 2019. http://www.escholarship.org/uc/item/16j0g7nd.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Li, Jian. “Models of Information Acquisition under Ambiguity.” 2012. Web. 19 Nov 2019.

Vancouver:

Li J. Models of Information Acquisition under Ambiguity. [Internet] [Thesis]. University of California – Berkeley; 2012. [cited 2019 Nov 19]. Available from: http://www.escholarship.org/uc/item/16j0g7nd.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Li J. Models of Information Acquisition under Ambiguity. [Thesis]. University of California – Berkeley; 2012. Available from: http://www.escholarship.org/uc/item/16j0g7nd

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

11. Sani, Amir. Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty.

Degree: Docteur es, Mathématiques appliquées, 2015, Université Lille I – Sciences et Technologies

Strategic decision-making about valuable resources should take the degree of risk aversion into account. Indeed, many application domains place the… (more)

Subjects/Keywords: Multi-armed bandit (mathematics); Risk aversion; Incremental learning algorithm; 519.6

APA (6th Edition):

Sani, A. (2015). Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty. (Doctoral Dissertation). Université Lille I – Sciences et Technologies. Retrieved from http://www.theses.fr/2015LIL10038

Chicago Manual of Style (16th Edition):

Sani, Amir. “Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty.” 2015. Doctoral Dissertation, Université Lille I – Sciences et Technologies. Accessed November 19, 2019. http://www.theses.fr/2015LIL10038.

MLA Handbook (7th Edition):

Sani, Amir. “Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty.” 2015. Web. 19 Nov 2019.

Vancouver:

Sani A. Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty. [Internet] [Doctoral dissertation]. Université Lille I – Sciences et Technologies; 2015. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2015LIL10038.

Council of Science Editors:

Sani A. Apprentissage automatique pour la prise de décisions : Machine learning for decisions-making under uncertainty. [Doctoral Dissertation]. Université Lille I – Sciences et Technologies; 2015. Available from: http://www.theses.fr/2015LIL10038


University of Houston

12. Le, Thanh Dang, 1984-. Sequential learning for passive monitoring of multi-channel wireless networks.

Degree: Electrical and Computer Engineering, Department of, 2013, University of Houston

 With the requirement for increasing efficiency of wireless spectrum usage, the cognitive radio technique has been emerging as an important solution. Passive monitoring over wireless… (more)

Subjects/Keywords: Sequential learning; Wireless monitoring; Multi-armed bandit; Electrical engineering

APA (6th Edition):

Le, T. D. (2013). Sequential learning for passive monitoring of multi-channel wireless networks. (Thesis). University of Houston. Retrieved from http://hdl.handle.net/10657/998

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Le, Thanh Dang. “Sequential learning for passive monitoring of multi-channel wireless networks.” 2013. Thesis, University of Houston. Accessed November 19, 2019. http://hdl.handle.net/10657/998.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Le, Thanh Dang. “Sequential learning for passive monitoring of multi-channel wireless networks.” 2013. Web. 19 Nov 2019.

Vancouver:

Le TD. Sequential learning for passive monitoring of multi-channel wireless networks. [Internet] [Thesis]. University of Houston; 2013. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/10657/998.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Le TD. Sequential learning for passive monitoring of multi-channel wireless networks. [Thesis]. University of Houston; 2013. Available from: http://hdl.handle.net/10657/998

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Urbana-Champaign

13. Liao, De. A multi-armed bandit approach for batch mode active learning on information networks.

Degree: MS, Computer Science, 2016, University of Illinois – Urbana-Champaign

 We propose an adaptive batch mode active learning algorithm, MABAL (Multi-Armed Bandit for Active Learning), for classification on heterogeneous information networks. Observing the parallels between… (more)

Subjects/Keywords: Active learning; Heterogeneous information networks; Multi-armed bandit

APA (6th Edition):

Liao, D. (2016). A multi-armed bandit approach for batch mode active learning on information networks. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/90788

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Liao, De. “A multi-armed bandit approach for batch mode active learning on information networks.” 2016. Thesis, University of Illinois – Urbana-Champaign. Accessed November 19, 2019. http://hdl.handle.net/2142/90788.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Liao, De. “A multi-armed bandit approach for batch mode active learning on information networks.” 2016. Web. 19 Nov 2019.

Vancouver:

Liao D. A multi-armed bandit approach for batch mode active learning on information networks. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2016. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/2142/90788.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Liao D. A multi-armed bandit approach for batch mode active learning on information networks. [Thesis]. University of Illinois – Urbana-Champaign; 2016. Available from: http://hdl.handle.net/2142/90788

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

14. Bouneffouf, Djallel. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.

Degree: Docteur es, Informatique, 2013, Evry, Institut national des télécommunications

The immense quantity of information generated and managed daily by information systems and their users leads inevitably to the problem of information overload. In this… (more)

Subjects/Keywords: Machine learning; Recommender system; Context-aware recommender system; Reinforcement learning; Multi-armed bandit; Contextual multi-armed bandit; UCB; Risk awareness

APA (6th Edition):

Bouneffouf, D. (2013). DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. (Doctoral Dissertation). Evry, Institut national des télécommunications. Retrieved from http://www.theses.fr/2013TELE0031

Chicago Manual of Style (16th Edition):

Bouneffouf, Djallel. “DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.” 2013. Doctoral Dissertation, Evry, Institut national des télécommunications. Accessed November 19, 2019. http://www.theses.fr/2013TELE0031.

MLA Handbook (7th Edition):

Bouneffouf, Djallel. “DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque.” 2013. Web. 19 Nov 2019.

Vancouver:

Bouneffouf D. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. [Internet] [Doctoral dissertation]. Evry, Institut national des télécommunications; 2013. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2013TELE0031.

Council of Science Editors:

Bouneffouf D. DRARS, a dynamic risk-aware recommender system : DRARS, un système de recommandation dynamique sensible au risque. [Doctoral Dissertation]. Evry, Institut national des télécommunications; 2013. Available from: http://www.theses.fr/2013TELE0031


University of Edinburgh

15. Pang, Kunkun. Learning about the learning process : from active querying to fine-tuning.

Degree: PhD, 2019, University of Edinburgh

 The majority of research on academic machine learning addresses the core model fitting part of the machine learning workflow. However, prior to model fitting, data… (more)

Subjects/Keywords: meta learning; active learning; transfer learning; reinforcement learning; bandit learning

APA (6th Edition):

Pang, K. (2019). Learning about the learning process : from active querying to fine-tuning. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/36031

Chicago Manual of Style (16th Edition):

Pang, Kunkun. “Learning about the learning process : from active querying to fine-tuning.” 2019. Doctoral Dissertation, University of Edinburgh. Accessed November 19, 2019. http://hdl.handle.net/1842/36031.

MLA Handbook (7th Edition):

Pang, Kunkun. “Learning about the learning process : from active querying to fine-tuning.” 2019. Web. 19 Nov 2019.

Vancouver:

Pang K. Learning about the learning process : from active querying to fine-tuning. [Internet] [Doctoral dissertation]. University of Edinburgh; 2019. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/1842/36031.

Council of Science Editors:

Pang K. Learning about the learning process : from active querying to fine-tuning. [Doctoral Dissertation]. University of Edinburgh; 2019. Available from: http://hdl.handle.net/1842/36031


Syracuse University

16. Rahman, Mahmuda. ONLINE LEARNING WITH BANDITS FOR COVERAGE.

Degree: PhD, Electrical Engineering and Computer Science, 2017, Syracuse University

  With the rapid growth in velocity and volume, streaming data compels decision support systems to predict a small number of unique data points in… (more)

Subjects/Keywords: Multi Armed Bandit; Reinforcement Learning; Simulated Annealing; Stackelberg Game; Engineering

APA (6th Edition):

Rahman, M. (2017). ONLINE LEARNING WITH BANDITS FOR COVERAGE. (Doctoral Dissertation). Syracuse University. Retrieved from https://surface.syr.edu/etd/805

Chicago Manual of Style (16th Edition):

Rahman, Mahmuda. “ONLINE LEARNING WITH BANDITS FOR COVERAGE.” 2017. Doctoral Dissertation, Syracuse University. Accessed November 19, 2019. https://surface.syr.edu/etd/805.

MLA Handbook (7th Edition):

Rahman, Mahmuda. “ONLINE LEARNING WITH BANDITS FOR COVERAGE.” 2017. Web. 19 Nov 2019.

Vancouver:

Rahman M. ONLINE LEARNING WITH BANDITS FOR COVERAGE. [Internet] [Doctoral dissertation]. Syracuse University; 2017. [cited 2019 Nov 19]. Available from: https://surface.syr.edu/etd/805.

Council of Science Editors:

Rahman M. ONLINE LEARNING WITH BANDITS FOR COVERAGE. [Doctoral Dissertation]. Syracuse University; 2017. Available from: https://surface.syr.edu/etd/805


Université Paris-Sud – Paris XI

17. Wang, Kehao. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.

Degree: Docteur es, Informatique, 2012, Université Paris-Sud – Paris XI

In this thesis, we address the fundamental problem of opportunistic spectrum access in a multi-channel communication system. More precisely, we consider a system… (more)
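
The myopic policy named in the keywords has a simple form for two-state (Gilbert-Elliott) channels: maintain, for each channel, the conditional probability that it is currently good, sense the channel with the highest belief, and push every belief through the Markov transition. A minimal sketch; the transition probabilities p11 and p01 below are illustrative, not taken from the thesis:

```python
def myopic_select(beliefs):
    """Myopic policy: sense the channel currently most likely to be good."""
    return max(range(len(beliefs)), key=lambda i: beliefs[i])

def update_beliefs(beliefs, sensed, observed_good, p11, p01):
    """One-step belief update for two-state Markov channels.
    p11 = P(good -> good), p01 = P(bad -> good)."""
    new = []
    for i, w in enumerate(beliefs):
        if i == sensed:
            w = 1.0 if observed_good else 0.0  # sensing resolves this channel's state
        new.append(w * p11 + (1 - w) * p01)    # propagate through the Markov chain
    return new

beliefs = [0.5, 0.5, 0.5]
sensed = myopic_select(beliefs)
beliefs = update_beliefs(beliefs, sensed, True, p11=0.8, p01=0.2)
```

Sensing resolves the sensed channel's state exactly, while unobserved channels drift toward the chain's stationary distribution.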

Subjects/Keywords: Multi-canal d'accès opportuniste; Restless Multi-Armed Bandit; Politique myope; Optimisation stochastique; Multi-Channel opportunistic access; Restless Multi-Armed Bandit; Myopic Policy; Stochastic Optimization

APA (6th Edition):

Wang, K. (2012). Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. (Doctoral Dissertation). Université Paris-Sud – Paris XI. Retrieved from http://www.theses.fr/2012PA112103

Chicago Manual of Style (16th Edition):

Wang, Kehao. “Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.” 2012. Doctoral Dissertation, Université Paris-Sud – Paris XI. Accessed November 19, 2019. http://www.theses.fr/2012PA112103.

MLA Handbook (7th Edition):

Wang, Kehao. “Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot.” 2012. Web. 19 Nov 2019.

Vancouver:

Wang K. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. [Internet] [Doctoral dissertation]. Université Paris-Sud – Paris XI; 2012. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2012PA112103.

Council of Science Editors:

Wang K. Multi-channel opportunistic access : a restless multi-armed bandit perspective : Accès opportuniste dans les systèmes de communication multi-canaux : une perspective du problème de bandit-manchot. [Doctoral Dissertation]. Université Paris-Sud – Paris XI; 2012. Available from: http://www.theses.fr/2012PA112103

18. Fröjd, Sebastian. Interaktionen mellan nyfikenhet och yttre motivation.

Degree: Psychology, 2019, Umeå University

Curiosity is the inherent striving to acquire new information. It has long been taken as established that extrinsic motivation inhibits curiosity, but in recent years… (more)
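
The two-armed bandit task named in the keywords is straightforward to simulate: a learner updates a value estimate for each arm with a delta rule and chooses mostly greedily. A sketch with illustrative payoff probabilities and learning parameters, not those of the study:

```python
import random

def simulate_learner(p_arms, trials, alpha=0.2, eps=0.1, seed=0):
    """Simple reinforcement learner on a two-armed bandit task:
    delta-rule value updates, epsilon-greedy choices."""
    rng = random.Random(seed)
    q = [0.0, 0.0]        # value estimates per arm
    choices = [0, 0]      # how often each arm was chosen
    for _ in range(trials):
        if rng.random() < eps:
            a = rng.randrange(2)                 # occasional exploration
        else:
            a = 0 if q[0] >= q[1] else 1         # greedy choice
        r = 1.0 if rng.random() < p_arms[a] else 0.0
        q[a] += alpha * (r - q[a])               # delta-rule update
        choices[a] += 1
    return q, choices

q, choices = simulate_learner([0.2, 0.8], trials=500)
```

With a large payoff gap the learner quickly concentrates its choices on the richer arm.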

Subjects/Keywords: curiosity extrinsic motivation; punishment; reward; two armed bandit; nyfikenhet; yttre motivation; belöning; bestraffning; tvåarmad bandit; Psychology (excluding Applied Psychology); Psykologi (exklusive tillämpad psykologi)

APA (6th Edition):

Fröjd, S. (2019). Interaktionen mellan nyfikenhet och yttre motivation. (Thesis). Umeå University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155501

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Fröjd, Sebastian. “Interaktionen mellan nyfikenhet och yttre motivation.” 2019. Thesis, Umeå University. Accessed November 19, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155501.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Fröjd, Sebastian. “Interaktionen mellan nyfikenhet och yttre motivation.” 2019. Web. 19 Nov 2019.

Vancouver:

Fröjd S. Interaktionen mellan nyfikenhet och yttre motivation. [Internet] [Thesis]. Umeå University; 2019. [cited 2019 Nov 19]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155501.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Fröjd S. Interaktionen mellan nyfikenhet och yttre motivation. [Thesis]. Umeå University; 2019. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155501

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

19. Kaufmann, Emilie. Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation.

Degree: Docteur es, Signal et images, 2014, Paris, ENST

In this thesis, we study sequential resource allocation strategies. The statistical model adopted in this setting is the stochastic multi-armed bandit… (more)

Subjects/Keywords: Statistiques; Bayesien; Modèle de bandit; Apprentissage; Minimisation du regret; Identification des meilleurs bras; Statistics; Bayesian; Bandit models; Learning; Regret minimization; Best arm identification

APA (6th Edition):

Kaufmann, E. (2014). Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation. (Doctoral Dissertation). Paris, ENST. Retrieved from http://www.theses.fr/2014ENST0056

Chicago Manual of Style (16th Edition):

Kaufmann, Emilie. “Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation.” 2014. Doctoral Dissertation, Paris, ENST. Accessed November 19, 2019. http://www.theses.fr/2014ENST0056.

MLA Handbook (7th Edition):

Kaufmann, Emilie. “Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation.” 2014. Web. 19 Nov 2019.

Vancouver:

Kaufmann E. Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation. [Internet] [Doctoral dissertation]. Paris, ENST; 2014. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2014ENST0056.

Council of Science Editors:

Kaufmann E. Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources : Analysis of bayesian and frequentist strategies for sequential resource allocation. [Doctoral Dissertation]. Paris, ENST; 2014. Available from: http://www.theses.fr/2014ENST0056


INP Toulouse

20. Larrañaga, Maialen. Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources.

Degree: Docteur es, Systèmes informatiques, 2015, INP Toulouse

 In this thesis, we study the dynamic control of resource-sharing systems arising in various domains: networks for managing… (more)

Subjects/Keywords: Contrôle optimal; Processus de décision markovien; Restless bandit problems; Abandons; Relaxation lagrangienne; Théorie de files d'attente; Optimal control; Markov decision processes; Restless bandit problems; Abandonments; Lagrangian relaxation; Queueing theory

APA (6th Edition):

Larrañaga, M. (2015). Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources. (Doctoral Dissertation). INP Toulouse. Retrieved from http://www.theses.fr/2015INPT0075

Chicago Manual of Style (16th Edition):

Larrañaga, Maialen. “Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources.” 2015. Doctoral Dissertation, INP Toulouse. Accessed November 19, 2019. http://www.theses.fr/2015INPT0075.

MLA Handbook (7th Edition):

Larrañaga, Maialen. “Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources.” 2015. Web. 19 Nov 2019.

Vancouver:

Larrañaga M. Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources. [Internet] [Doctoral dissertation]. INP Toulouse; 2015. [cited 2019 Nov 19]. Available from: http://www.theses.fr/2015INPT0075.

Council of Science Editors:

Larrañaga M. Dynamic control of stochastic and fluid resource-sharing systems : Contrôle dynamique des systèmes stochastiques et fluides de partage de ressources. [Doctoral Dissertation]. INP Toulouse; 2015. Available from: http://www.theses.fr/2015INPT0075


Cornell University

21. Hu, Weici. Sequential Resource Allocation Under Uncertainty: An Index Policy Approach .

Degree: 2017, Cornell University

 We consider a class of stochastic sequential allocation problems - restless multi-armed bandits (RMAB) with a finite horizon and multiple pulls per period. Leveraging the… (more)
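
A simple baseline for this multiple-pulls-per-period setting, not the index policy developed in the thesis, is to pull in each period the m arms with the highest UCB1 index:

```python
import math
import random

def top_m_ucb(means, horizon, m, seed=0):
    """Each period, pull the m arms with the highest UCB1 index.
    A baseline sketch for multiple pulls per period; `means` are
    hypothetical Bernoulli arm means."""
    rng = random.Random(seed)
    k = len(means)
    n = [0] * k          # pull counts
    s = [0.0] * k        # reward sums
    total = 0.0
    for t in range(1, horizon + 1):
        def index(i):
            if n[i] == 0:
                return float("inf")              # force each arm to be tried once
            return s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i])
        chosen = sorted(range(k), key=index, reverse=True)[:m]
        for i in chosen:
            r = 1.0 if rng.random() < means[i] else 0.0
            n[i] += 1
            s[i] += r
            total += r
    return n, total

n, total = top_m_ucb([0.2, 0.4, 0.6, 0.8], horizon=1000, m=2, seed=0)
```

Over time the two best arms occupy almost all of the m slots per period.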

Subjects/Keywords: Index-based Policy; Restless Bandit; Sequential Resource Allocation; Stochastic Dynamic Program; Operations research

APA (6th Edition):

Hu, W. (2017). Sequential Resource Allocation Under Uncertainty: An Index Policy Approach . (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/56952

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hu, Weici. “Sequential Resource Allocation Under Uncertainty: An Index Policy Approach .” 2017. Thesis, Cornell University. Accessed November 19, 2019. http://hdl.handle.net/1813/56952.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hu, Weici. “Sequential Resource Allocation Under Uncertainty: An Index Policy Approach .” 2017. Web. 19 Nov 2019.

Vancouver:

Hu W. Sequential Resource Allocation Under Uncertainty: An Index Policy Approach . [Internet] [Thesis]. Cornell University; 2017. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/1813/56952.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hu W. Sequential Resource Allocation Under Uncertainty: An Index Policy Approach . [Thesis]. Cornell University; 2017. Available from: http://hdl.handle.net/1813/56952

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of California – Santa Cruz

22. Dai, Liang. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.

Degree: Technology and Information Management, 2014, University of California – Santa Cruz

 Online experiments are widely used in online advertising and web development to compare the effects, e.g. click-through rate and conversion rate, of different versions. Among all the… (more)
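
The trade-off in question can be illustrated by comparing a fixed 50/50 allocation, the classic A/B test, against an epsilon-greedy bandit that shifts traffic toward the apparent winner. The conversion rates below are hypothetical:

```python
import random

def ab_test(rates, horizon, seed=1):
    """Fixed alternating allocation, as in a classic A/B test."""
    rng = random.Random(seed)
    return sum(rng.random() < rates[t % len(rates)] for t in range(horizon))

def eps_greedy(rates, horizon, eps=0.1, seed=1):
    """Epsilon-greedy bandit: explore with probability eps,
    otherwise route traffic to the empirically best variant."""
    rng = random.Random(seed)
    k = len(rates)
    counts, sums, total = [0] * k, [0] * k, 0
    for _ in range(horizon):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(k)                                   # explore
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i])   # exploit
        r = 1 if rng.random() < rates[arm] else 0
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

fixed = ab_test([0.05, 0.15], 5000)
adaptive = eps_greedy([0.05, 0.15], 5000)
```

The fixed split earns roughly the average of the two rates, while the bandit earns close to the better rate, at the cost of fewer samples on the losing variant and hence wider confidence intervals.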

Subjects/Keywords: Information science; Computer science; A/B testing; Multi-armed Bandit; Online Experiment; Statistical Uncertainty

APA (6th Edition):

Dai, L. (2014). Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. (Thesis). University of California – Santa Cruz. Retrieved from http://www.escholarship.org/uc/item/1hm5t2z6

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dai, Liang. “Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.” 2014. Thesis, University of California – Santa Cruz. Accessed November 19, 2019. http://www.escholarship.org/uc/item/1hm5t2z6.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dai, Liang. “Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward.” 2014. Web. 19 Nov 2019.

Vancouver:

Dai L. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. [Internet] [Thesis]. University of California – Santa Cruz; 2014. [cited 2019 Nov 19]. Available from: http://www.escholarship.org/uc/item/1hm5t2z6.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dai L. Online Controlled Experiment Design: Trade-off between Statistical Uncertainty and Cumulative Reward. [Thesis]. University of California – Santa Cruz; 2014. Available from: http://www.escholarship.org/uc/item/1hm5t2z6

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Florida International University

23. Wang, Qing. Intelligent Data Mining Techniques for Automatic Service Management.

Degree: PhD, Computer Science, 2018, Florida International University

  Today, as more and more industries enter the artificial intelligence era, business enterprises constantly explore innovative ways to expand their outreach… (more)

Subjects/Keywords: Artificial Intelligence; Automatic Service Management; Knowledge Base; Multi-armed Bandit Model; Computer Engineering

APA (6th Edition):

Wang, Q. (2018). Intelligent Data Mining Techniques for Automatic Service Management. (Doctoral Dissertation). Florida International University. Retrieved from https://digitalcommons.fiu.edu/etd/3883 ; FIDC007024

Chicago Manual of Style (16th Edition):

Wang, Qing. “Intelligent Data Mining Techniques for Automatic Service Management.” 2018. Doctoral Dissertation, Florida International University. Accessed November 19, 2019. https://digitalcommons.fiu.edu/etd/3883 ; FIDC007024.

MLA Handbook (7th Edition):

Wang, Qing. “Intelligent Data Mining Techniques for Automatic Service Management.” 2018. Web. 19 Nov 2019.

Vancouver:

Wang Q. Intelligent Data Mining Techniques for Automatic Service Management. [Internet] [Doctoral dissertation]. Florida International University; 2018. [cited 2019 Nov 19]. Available from: https://digitalcommons.fiu.edu/etd/3883 ; FIDC007024.

Council of Science Editors:

Wang Q. Intelligent Data Mining Techniques for Automatic Service Management. [Doctoral Dissertation]. Florida International University; 2018. Available from: https://digitalcommons.fiu.edu/etd/3883 ; FIDC007024


University of California – Santa Cruz

24. Rahmanian, Holakou. Online Learning of Combinatorial Objects.

Degree: Computer Science, 2018, University of California – Santa Cruz

 This thesis develops algorithms for learning combinatorial objects. A combinatorial object is a structured concept composed of components. Examples are permutations, Huffman trees, binary search… (more)

Subjects/Keywords: Computer science; Artificial intelligence; combinatorial objects; machine learning; multi-armed bandit; online learning; structured concepts

APA (6th Edition):

Rahmanian, H. (2018). Online Learning of Combinatorial Objects. (Thesis). University of California – Santa Cruz. Retrieved from http://www.escholarship.org/uc/item/7kw5d47f

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Rahmanian, Holakou. “Online Learning of Combinatorial Objects.” 2018. Thesis, University of California – Santa Cruz. Accessed November 19, 2019. http://www.escholarship.org/uc/item/7kw5d47f.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Rahmanian, Holakou. “Online Learning of Combinatorial Objects.” 2018. Web. 19 Nov 2019.

Vancouver:

Rahmanian H. Online Learning of Combinatorial Objects. [Internet] [Thesis]. University of California – Santa Cruz; 2018. [cited 2019 Nov 19]. Available from: http://www.escholarship.org/uc/item/7kw5d47f.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Rahmanian H. Online Learning of Combinatorial Objects. [Thesis]. University of California – Santa Cruz; 2018. Available from: http://www.escholarship.org/uc/item/7kw5d47f

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Utah State University

25. Gordon, Thomas J. Joaquin Murieta: Fact, Fiction and Folklore.

Degree: MA, English, 1983, Utah State University

  This work explores the legendary 19th-century California bandit Joaquin Murieta as he is manifest in the history, literature and folklore of the West. The… (more)

Subjects/Keywords: Joaquin Murieta; folklore; bandit; American Studies

APA (6th Edition):

Gordon, T. J. (1983). Joaquin Murieta: Fact, Fiction and Folklore. (Masters Thesis). Utah State University. Retrieved from https://digitalcommons.usu.edu/etd/2055

Chicago Manual of Style (16th Edition):

Gordon, Thomas J. “Joaquin Murieta: Fact, Fiction and Folklore.” 1983. Masters Thesis, Utah State University. Accessed November 19, 2019. https://digitalcommons.usu.edu/etd/2055.

MLA Handbook (7th Edition):

Gordon, Thomas J. “Joaquin Murieta: Fact, Fiction and Folklore.” 1983. Web. 19 Nov 2019.

Vancouver:

Gordon TJ. Joaquin Murieta: Fact, Fiction and Folklore. [Internet] [Masters thesis]. Utah State University; 1983. [cited 2019 Nov 19]. Available from: https://digitalcommons.usu.edu/etd/2055.

Council of Science Editors:

Gordon TJ. Joaquin Murieta: Fact, Fiction and Folklore. [Masters Thesis]. Utah State University; 1983. Available from: https://digitalcommons.usu.edu/etd/2055


Princeton University

26. Reverdy, Paul Benjamin. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems .

Degree: PhD, 2014, Princeton University

 Search is a ubiquitous human activity. It is a rational response to the uncertainty inherent in the tasks we seek to accomplish in our daily… (more)

Subjects/Keywords: Bayesian machine learning; Heuristic algorithms; Human-in-the-loop system; Multi-armed bandit

APA (6th Edition):

Reverdy, P. B. (2014). Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems . (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01d504rn558

Chicago Manual of Style (16th Edition):

Reverdy, Paul Benjamin. “Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems .” 2014. Doctoral Dissertation, Princeton University. Accessed November 19, 2019. http://arks.princeton.edu/ark:/88435/dsp01d504rn558.

MLA Handbook (7th Edition):

Reverdy, Paul Benjamin. “Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems .” 2014. Web. 19 Nov 2019.

Vancouver:

Reverdy PB. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems . [Internet] [Doctoral dissertation]. Princeton University; 2014. [cited 2019 Nov 19]. Available from: http://arks.princeton.edu/ark:/88435/dsp01d504rn558.

Council of Science Editors:

Reverdy PB. Human-inspired Algorithms for Search: A Framework for Human-machine Multi-armed Bandit Problems . [Doctoral Dissertation]. Princeton University; 2014. Available from: http://arks.princeton.edu/ark:/88435/dsp01d504rn558


Princeton University

27. LIU, CHE-YU. Thompson Sampling for Bandit Problems .

Degree: PhD, 2018, Princeton University

Bandit problems are the most basic examples of sequential decision-making problems with limited feedback and an exploration/exploitation trade-off. In these problems, an agent… (more)
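
Thompson sampling itself is short to state: keep a posterior over each arm's mean reward, draw one sample per arm from its posterior, and play the arm with the highest draw. A Beta-Bernoulli sketch; the arm means are illustrative:

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling: sample each arm's posterior, play the best."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k       # Beta(1, 1) uniform priors
    beta = [1] * k
    pulls = [0] * k
    total = 0
    for _ in range(horizon):
        # One posterior draw per arm; the argmax balances exploration and exploitation.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward          # posterior update: successes
        beta[arm] += 1 - reward       # posterior update: failures
        pulls[arm] += 1
        total += reward
    return pulls, total

pulls, total = thompson_sampling([0.3, 0.5, 0.7], horizon=2000)
```

With enough rounds the posterior for the best arm concentrates and almost all pulls go to it; exploration happens automatically through posterior uncertainty.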

Subjects/Keywords: Bandit Problems; Bayesian Algorithms; Bounded Regrets; Exploration-Exploitation Tradeoff; Pure Exploration; Thompson Sampling

APA (6th Edition):

LIU, C. (2018). Thompson Sampling for Bandit Problems . (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp014j03d233b

Chicago Manual of Style (16th Edition):

LIU, CHE-YU. “Thompson Sampling for Bandit Problems .” 2018. Doctoral Dissertation, Princeton University. Accessed November 19, 2019. http://arks.princeton.edu/ark:/88435/dsp014j03d233b.

MLA Handbook (7th Edition):

LIU, CHE-YU. “Thompson Sampling for Bandit Problems .” 2018. Web. 19 Nov 2019.

Vancouver:

LIU C. Thompson Sampling for Bandit Problems . [Internet] [Doctoral dissertation]. Princeton University; 2018. [cited 2019 Nov 19]. Available from: http://arks.princeton.edu/ark:/88435/dsp014j03d233b.

Council of Science Editors:

LIU C. Thompson Sampling for Bandit Problems . [Doctoral Dissertation]. Princeton University; 2018. Available from: http://arks.princeton.edu/ark:/88435/dsp014j03d233b


University of Ontario Institute of Technology

28. Zandi, Marjan. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.

Degree: 2014, University of Ontario Institute of Technology

 This thesis is concerned with dynamic spectrum access in cognitive radio networks. The main objective is designing online learning and access policies which maximize the… (more)

Subjects/Keywords: Dynamic spectrum access (DSA); Access policies; Cognitive radio networks; Auction-based formulation; Decentralized multi-armed bandit (DMAB); Online learning

APA (6th Edition):

Zandi, M. (2014). Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. (Thesis). University of Ontario Institute of Technology. Retrieved from http://hdl.handle.net/10155/476

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zandi, Marjan. “Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.” 2014. Thesis, University of Ontario Institute of Technology. Accessed November 19, 2019. http://hdl.handle.net/10155/476.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zandi, Marjan. “Learning-based adaptive design for dynamic spectrum access in cognitive radio networks.” 2014. Web. 19 Nov 2019.

Vancouver:

Zandi M. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. [Internet] [Thesis]. University of Ontario Institute of Technology; 2014. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/10155/476.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zandi M. Learning-based adaptive design for dynamic spectrum access in cognitive radio networks. [Thesis]. University of Ontario Institute of Technology; 2014. Available from: http://hdl.handle.net/10155/476

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

29. Marques, Adalton Jose. Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões.

Degree: Mestrado, Antropologia Social, 2010, University of São Paulo

In this anthropological experiment, strongly inspired by the work of Michel Foucault, I present an ethnography built mainly from conversations with prisoners, ex-prisoners, and their… (more)

Subjects/Keywords: Bandit; Conviviality-security (division of the space); Crime; Crime; Divisão espacial - convívio-seguro; Ladrão; Prisioneiros; Prisoners; Proceder; Proceder

APA (6th Edition):

Marques, A. J. (2010). Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões. (Masters Thesis). University of São Paulo. Retrieved from http://www.teses.usp.br/teses/disponiveis/8/8134/tde-15032010-103450/

Chicago Manual of Style (16th Edition):

Marques, Adalton Jose. “Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões.” 2010. Masters Thesis, University of São Paulo. Accessed November 19, 2019. http://www.teses.usp.br/teses/disponiveis/8/8134/tde-15032010-103450/.

MLA Handbook (7th Edition):

Marques, Adalton Jose. “Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões.” 2010. Web. 19 Nov 2019.

Vancouver:

Marques AJ. Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões. [Internet] [Masters thesis]. University of São Paulo; 2010. [cited 2019 Nov 19]. Available from: http://www.teses.usp.br/teses/disponiveis/8/8134/tde-15032010-103450/.

Council of Science Editors:

Marques AJ. Crime, proceder, convívio-seguro: um experimento antropológico a partir de relações entre ladrões. [Masters Thesis]. University of São Paulo; 2010. Available from: http://www.teses.usp.br/teses/disponiveis/8/8134/tde-15032010-103450/


Texas A&M University

30. Mann, Timothy 1984-. Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration.

Degree: 2012, Texas A&M University

 The purpose of this dissertation is to understand how algorithms can efficiently learn to solve new tasks based on previous experience, instead of being explicitly… (more)

Subjects/Keywords: pruning; scaling; multiarmed bandit; Markov decision process; exploration/exploitation dilemma; exploration; machine learning; transfer learning; reinforcement learning

APA (6th Edition):

Mann, T. (2012). Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/148402

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Mann, Timothy 1984-. “Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration.” 2012. Thesis, Texas A&M University. Accessed November 19, 2019. http://hdl.handle.net/1969.1/148402.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Mann, Timothy 1984-. “Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration.” 2012. Web. 19 Nov 2019.

Vancouver:

Mann T. Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration. [Internet] [Thesis]. Texas A&M University; 2012. [cited 2019 Nov 19]. Available from: http://hdl.handle.net/1969.1/148402.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Mann T. Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration. [Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/148402

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
