Dept: Computer Science

You searched for subject:(Visual learning). Showing records 1 – 17 of 17 total matches.

No search limiters apply to these results.


Rutgers University

1. Bakry, Amr M., 1981-. Leveraging image manifolds for visual learning.

Degree: PhD, Computer Science, 2016, Rutgers University

The field of computer vision has recently witnessed remarkable progress, due mainly to visual data availability and machine learning advances. Modeling the visual data is… (more)

Subjects/Keywords: Computer vision; Visual learning


APA (6th Edition):

Bakry, A. M. (2016). Leveraging image manifolds for visual learning. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/51184/

Chicago Manual of Style (16th Edition):

Bakry, Amr M., 1981-. “Leveraging image manifolds for visual learning.” 2016. Doctoral Dissertation, Rutgers University. Accessed October 23, 2019. https://rucore.libraries.rutgers.edu/rutgers-lib/51184/.

MLA Handbook (7th Edition):

Bakry, Amr M., 1981-. “Leveraging image manifolds for visual learning.” 2016. Web. 23 Oct 2019.

Vancouver:

Bakry AM. Leveraging image manifolds for visual learning. [Internet] [Doctoral dissertation]. Rutgers University; 2016. [cited 2019 Oct 23]. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/51184/.

Council of Science Editors:

Bakry AM. Leveraging image manifolds for visual learning. [Doctoral Dissertation]. Rutgers University; 2016. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/51184/


Boston University

2. Bargal, Sarah. Grounding deep models of visual data.

Degree: PhD, Computer Science, 2019, Boston University

 Deep models are state-of-the-art for many computer vision tasks including object classification, action recognition, and captioning. As Artificial Intelligence systems that utilize deep models are… (more)

Subjects/Keywords: Computer science; Deep learning; Grounding; Visual data


APA (6th Edition):

Bargal, S. (2019). Grounding deep models of visual data. (Doctoral Dissertation). Boston University. Retrieved from http://hdl.handle.net/2144/34810

Chicago Manual of Style (16th Edition):

Bargal, Sarah. “Grounding deep models of visual data.” 2019. Doctoral Dissertation, Boston University. Accessed October 23, 2019. http://hdl.handle.net/2144/34810.

MLA Handbook (7th Edition):

Bargal, Sarah. “Grounding deep models of visual data.” 2019. Web. 23 Oct 2019.

Vancouver:

Bargal S. Grounding deep models of visual data. [Internet] [Doctoral dissertation]. Boston University; 2019. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/2144/34810.

Council of Science Editors:

Bargal S. Grounding deep models of visual data. [Doctoral Dissertation]. Boston University; 2019. Available from: http://hdl.handle.net/2144/34810


Georgia Tech

3. Chattopadhyay, Prithvijit. Evaluating visual conversational agents via cooperative human-AI games.

Degree: MS, Computer Science, 2019, Georgia Tech

 As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It… (more)

Subjects/Keywords: Visual conversational agents; Visual dialog; Human-AI teams; Reinforcement learning; Machine learning; Computer vision; Artificial intelligence


APA (6th Edition):

Chattopadhyay, P. (2019). Evaluating visual conversational agents via cooperative human-AI games. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61308

Chicago Manual of Style (16th Edition):

Chattopadhyay, Prithvijit. “Evaluating visual conversational agents via cooperative human-AI games.” 2019. Masters Thesis, Georgia Tech. Accessed October 23, 2019. http://hdl.handle.net/1853/61308.

MLA Handbook (7th Edition):

Chattopadhyay, Prithvijit. “Evaluating visual conversational agents via cooperative human-AI games.” 2019. Web. 23 Oct 2019.

Vancouver:

Chattopadhyay P. Evaluating visual conversational agents via cooperative human-AI games. [Internet] [Masters thesis]. Georgia Tech; 2019. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/1853/61308.

Council of Science Editors:

Chattopadhyay P. Evaluating visual conversational agents via cooperative human-AI games. [Masters Thesis]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61308


University of California – Berkeley

4. Gupta, Saurabh. Representations for Visually Guided Actions.

Degree: Computer Science, 2018, University of California – Berkeley

 In recent times, computer vision has made great leaps towards 2D understanding of sparse visual snapshots of the world. This is insufficient for robots that… (more)

Subjects/Keywords: Computer science; Computer Vision; Cross-modal Learning; Machine Learning; Robotics; Scene Understanding; Visual Navigation


APA (6th Edition):

Gupta, S. (2018). Representations for Visually Guided Actions. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/8c60g8fg

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gupta, Saurabh. “Representations for Visually Guided Actions.” 2018. Thesis, University of California – Berkeley. Accessed October 23, 2019. http://www.escholarship.org/uc/item/8c60g8fg.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gupta, Saurabh. “Representations for Visually Guided Actions.” 2018. Web. 23 Oct 2019.

Vancouver:

Gupta S. Representations for Visually Guided Actions. [Internet] [Thesis]. University of California – Berkeley; 2018. [cited 2019 Oct 23]. Available from: http://www.escholarship.org/uc/item/8c60g8fg.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gupta S. Representations for Visually Guided Actions. [Thesis]. University of California – Berkeley; 2018. Available from: http://www.escholarship.org/uc/item/8c60g8fg

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


New Jersey Institute of Technology

5. Liu, Qingfeng. Investigation of new learning methods for visual recognition.

Degree: PhD, Computer Science, 2017, New Jersey Institute of Technology

Visual recognition is one of the most difficult and prevailing problems in computer vision and pattern recognition due to the challenges in understanding the… (more)

Subjects/Keywords: Visual recognition; Sparse representation; Metric learning; Classification; Kinship verification; Deep learning; Computer Sciences


APA (6th Edition):

Liu, Q. (2017). Investigation of new learning methods for visual recognition. (Doctoral Dissertation). New Jersey Institute of Technology. Retrieved from https://digitalcommons.njit.edu/dissertations/20

Chicago Manual of Style (16th Edition):

Liu, Qingfeng. “Investigation of new learning methods for visual recognition.” 2017. Doctoral Dissertation, New Jersey Institute of Technology. Accessed October 23, 2019. https://digitalcommons.njit.edu/dissertations/20.

MLA Handbook (7th Edition):

Liu, Qingfeng. “Investigation of new learning methods for visual recognition.” 2017. Web. 23 Oct 2019.

Vancouver:

Liu Q. Investigation of new learning methods for visual recognition. [Internet] [Doctoral dissertation]. New Jersey Institute of Technology; 2017. [cited 2019 Oct 23]. Available from: https://digitalcommons.njit.edu/dissertations/20.

Council of Science Editors:

Liu Q. Investigation of new learning methods for visual recognition. [Doctoral Dissertation]. New Jersey Institute of Technology; 2017. Available from: https://digitalcommons.njit.edu/dissertations/20


Virginia Tech

6. Chen, Xin. Be the Data: Embodied Visual Analytics.

Degree: MS, Computer Science, 2016, Virginia Tech

 With the rise of big data, it is becoming increasingly important to educate students about data analytics. In particular, students without a strong mathematical background… (more)

Subjects/Keywords: Visual Analytics; Embodied Interaction; Collaborative Learning; Human-Computer Interaction; Immersive Environment


APA (6th Edition):

Chen, X. (2016). Be the Data: Embodied Visual Analytics. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/72287

Chicago Manual of Style (16th Edition):

Chen, Xin. “Be the Data: Embodied Visual Analytics.” 2016. Masters Thesis, Virginia Tech. Accessed October 23, 2019. http://hdl.handle.net/10919/72287.

MLA Handbook (7th Edition):

Chen, Xin. “Be the Data: Embodied Visual Analytics.” 2016. Web. 23 Oct 2019.

Vancouver:

Chen X. Be the Data: Embodied Visual Analytics. [Internet] [Masters thesis]. Virginia Tech; 2016. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/10919/72287.

Council of Science Editors:

Chen X. Be the Data: Embodied Visual Analytics. [Masters Thesis]. Virginia Tech; 2016. Available from: http://hdl.handle.net/10919/72287


Colorado State University

7. Stern, Ryan. Scalable visual analytics over voluminous spatiotemporal data.

Degree: PhD, Computer Science, 2019, Colorado State University

 Visualization is a critical part of modern data analytics. This is especially true of interactive and exploratory visual analytics, which encourages speedy discovery of trends,… (more)

Subjects/Keywords: Machine Learning; Spatiotemporal; Distributed Systems; Visual Analytics; National-Scale


APA (6th Edition):

Stern, R. (2019). Scalable visual analytics over voluminous spatiotemporal data. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/193130

Chicago Manual of Style (16th Edition):

Stern, Ryan. “Scalable visual analytics over voluminous spatiotemporal data.” 2019. Doctoral Dissertation, Colorado State University. Accessed October 23, 2019. http://hdl.handle.net/10217/193130.

MLA Handbook (7th Edition):

Stern, Ryan. “Scalable visual analytics over voluminous spatiotemporal data.” 2019. Web. 23 Oct 2019.

Vancouver:

Stern R. Scalable visual analytics over voluminous spatiotemporal data. [Internet] [Doctoral dissertation]. Colorado State University; 2019. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/10217/193130.

Council of Science Editors:

Stern R. Scalable visual analytics over voluminous spatiotemporal data. [Doctoral Dissertation]. Colorado State University; 2019. Available from: http://hdl.handle.net/10217/193130


UCLA

8. Joo, Jungseock. Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication.

Degree: Computer Science, 2015, UCLA

 Visuals play a vital role in human communication in the modern media landscape, but there have been little progress on a systematic analysis on massive… (more)

Subjects/Keywords: Computer science; Statistics; Mass communication; Computer Vision; Mass Communication; Recognition; Visual Communication; Visual Persuasion; Weakly-supervised Learning


APA (6th Edition):

Joo, J. (2015). Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/97s189hr

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Joo, Jungseock. “Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication.” 2015. Thesis, UCLA. Accessed October 23, 2019. http://www.escholarship.org/uc/item/97s189hr.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Joo, Jungseock. “Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication.” 2015. Web. 23 Oct 2019.

Vancouver:

Joo J. Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication. [Internet] [Thesis]. UCLA; 2015. [cited 2019 Oct 23]. Available from: http://www.escholarship.org/uc/item/97s189hr.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Joo J. Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication. [Thesis]. UCLA; 2015. Available from: http://www.escholarship.org/uc/item/97s189hr

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


New Jersey Institute of Technology

9. Puthenputhussery, Ajit. Novel image descriptors and learning methods for image classification applications.

Degree: PhD, Computer Science, 2018, New Jersey Institute of Technology

  Image classification is an active and rapidly expanding research area in computer vision and machine learning due to its broad applications. With the advent… (more)

Subjects/Keywords: Enhanced sparse coding; Feature enhancement; Image classification; Metric learning; Visual recognition; Computer Sciences


APA (6th Edition):

Puthenputhussery, A. (2018). Novel image descriptors and learning methods for image classification applications. (Doctoral Dissertation). New Jersey Institute of Technology. Retrieved from https://digitalcommons.njit.edu/dissertations/1383

Chicago Manual of Style (16th Edition):

Puthenputhussery, Ajit. “Novel image descriptors and learning methods for image classification applications.” 2018. Doctoral Dissertation, New Jersey Institute of Technology. Accessed October 23, 2019. https://digitalcommons.njit.edu/dissertations/1383.

MLA Handbook (7th Edition):

Puthenputhussery, Ajit. “Novel image descriptors and learning methods for image classification applications.” 2018. Web. 23 Oct 2019.

Vancouver:

Puthenputhussery A. Novel image descriptors and learning methods for image classification applications. [Internet] [Doctoral dissertation]. New Jersey Institute of Technology; 2018. [cited 2019 Oct 23]. Available from: https://digitalcommons.njit.edu/dissertations/1383.

Council of Science Editors:

Puthenputhussery A. Novel image descriptors and learning methods for image classification applications. [Doctoral Dissertation]. New Jersey Institute of Technology; 2018. Available from: https://digitalcommons.njit.edu/dissertations/1383


University of California – Riverside

10. Hasan, Mahmudul. Online Activity Understanding and Labeling in Natural Videos.

Degree: Computer Science, 2016, University of California – Riverside

 Understanding human activities in unconstrained natural videos is a widely studied problem, yet it remains as one of the most challenging problems in computer vision.… (more)

Subjects/Keywords: Computer science; Abnormal Event Detection; Active Learning; Activity Segmentation; Human Activity Recognition; Incremental Learning; Visual Context


APA (6th Edition):

Hasan, M. (2016). Online Activity Understanding and Labeling in Natural Videos. (Thesis). University of California – Riverside. Retrieved from http://www.escholarship.org/uc/item/7sh7831v

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hasan, Mahmudul. “Online Activity Understanding and Labeling in Natural Videos.” 2016. Thesis, University of California – Riverside. Accessed October 23, 2019. http://www.escholarship.org/uc/item/7sh7831v.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hasan, Mahmudul. “Online Activity Understanding and Labeling in Natural Videos.” 2016. Web. 23 Oct 2019.

Vancouver:

Hasan M. Online Activity Understanding and Labeling in Natural Videos. [Internet] [Thesis]. University of California – Riverside; 2016. [cited 2019 Oct 23]. Available from: http://www.escholarship.org/uc/item/7sh7831v.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hasan M. Online Activity Understanding and Labeling in Natural Videos. [Thesis]. University of California – Riverside; 2016. Available from: http://www.escholarship.org/uc/item/7sh7831v

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Clemson University

11. Matzko, Sarah. τέχνη and Quest-Oriented Learning.

Degree: PhD, Computer Science, 2007, Clemson University

 A new approach for teaching undergraduate computer science courses is presented. A general teaching approach that is its basis is also described. Summaries and guides… (more)

Subjects/Keywords: Undergraduate; curriculum; design; visual learning; active learning; Computer Sciences


APA (6th Edition):

Matzko, S. (2007). τέχνη and Quest-Oriented Learning. (Doctoral Dissertation). Clemson University. Retrieved from https://tigerprints.clemson.edu/all_dissertations/127

Chicago Manual of Style (16th Edition):

Matzko, Sarah. “τέχνη and Quest-Oriented Learning.” 2007. Doctoral Dissertation, Clemson University. Accessed October 23, 2019. https://tigerprints.clemson.edu/all_dissertations/127.

MLA Handbook (7th Edition):

Matzko, Sarah. “τέχνη and Quest-Oriented Learning.” 2007. Web. 23 Oct 2019.

Vancouver:

Matzko S. τέχνη and Quest-Oriented Learning. [Internet] [Doctoral dissertation]. Clemson University; 2007. [cited 2019 Oct 23]. Available from: https://tigerprints.clemson.edu/all_dissertations/127.

Council of Science Editors:

Matzko S. τέχνη and Quest-Oriented Learning. [Doctoral Dissertation]. Clemson University; 2007. Available from: https://tigerprints.clemson.edu/all_dissertations/127


Arizona State University

12. Pandhalkudi Govindarajan, Sesha Kumar. Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams.

Degree: Computer Science, 2016, Arizona State University

 With the advent of Massive Open Online Courses (MOOCs) educators have the opportunity to collect data from students and use it to derive insightful information… (more)

Subjects/Keywords: Educational technology; Educational evaluation; Computer science; Intelligent authoring; Learning Analytics; Orchestration technology; Programming; Semantic Analytics; Visual Analytics


APA (6th Edition):

Pandhalkudi Govindarajan, S. K. (2016). Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams. (Masters Thesis). Arizona State University. Retrieved from http://repository.asu.edu/items/38667

Chicago Manual of Style (16th Edition):

Pandhalkudi Govindarajan, Sesha Kumar. “Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams.” 2016. Masters Thesis, Arizona State University. Accessed October 23, 2019. http://repository.asu.edu/items/38667.

MLA Handbook (7th Edition):

Pandhalkudi Govindarajan, Sesha Kumar. “Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams.” 2016. Web. 23 Oct 2019.

Vancouver:

Pandhalkudi Govindarajan SK. Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams. [Internet] [Masters thesis]. Arizona State University; 2016. [cited 2019 Oct 23]. Available from: http://repository.asu.edu/items/38667.

Council of Science Editors:

Pandhalkudi Govindarajan SK. Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams. [Masters Thesis]. Arizona State University; 2016. Available from: http://repository.asu.edu/items/38667


Arizona State University

13. Chandakkar, Parag Shridhar. Towards Learning Representations in Visual Computing Tasks.

Degree: Computer Science, 2017, Arizona State University

Subjects/Keywords: Artificial intelligence; Computer science; Deep Learning; Feature Engineering; Learning Representations; Machine Learning; Visual Computing


APA (6th Edition):

Chandakkar, P. S. (2017). Towards Learning Representations in Visual Computing Tasks. (Doctoral Dissertation). Arizona State University. Retrieved from http://repository.asu.edu/items/46354

Chicago Manual of Style (16th Edition):

Chandakkar, Parag Shridhar. “Towards Learning Representations in Visual Computing Tasks.” 2017. Doctoral Dissertation, Arizona State University. Accessed October 23, 2019. http://repository.asu.edu/items/46354.

MLA Handbook (7th Edition):

Chandakkar, Parag Shridhar. “Towards Learning Representations in Visual Computing Tasks.” 2017. Web. 23 Oct 2019.

Vancouver:

Chandakkar PS. Towards Learning Representations in Visual Computing Tasks. [Internet] [Doctoral dissertation]. Arizona State University; 2017. [cited 2019 Oct 23]. Available from: http://repository.asu.edu/items/46354.

Council of Science Editors:

Chandakkar PS. Towards Learning Representations in Visual Computing Tasks. [Doctoral Dissertation]. Arizona State University; 2017. Available from: http://repository.asu.edu/items/46354

14. Nagarkar, Chetan. Addition of three animations to OSCAL.

Degree: MS, Computer Science, 2015, California State University – Sacramento

 OSCAL, an acronym for "Operating System Concept Animation Library", is an online library that helps students understand concepts in Operating System with a graphical… (more)

Subjects/Keywords: Multithreading; Operating system; Operating system principles; Java project; Visual learning


APA (6th Edition):

Nagarkar, C. (2015). Addition of three animations to OSCAL. (Masters Thesis). California State University – Sacramento. Retrieved from http://hdl.handle.net/10211.3/139278

Chicago Manual of Style (16th Edition):

Nagarkar, Chetan. “Addition of three animations to OSCAL.” 2015. Masters Thesis, California State University – Sacramento. Accessed October 23, 2019. http://hdl.handle.net/10211.3/139278.

MLA Handbook (7th Edition):

Nagarkar, Chetan. “Addition of three animations to OSCAL.” 2015. Web. 23 Oct 2019.

Vancouver:

Nagarkar C. Addition of three animations to OSCAL. [Internet] [Masters thesis]. California State University – Sacramento; 2015. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/10211.3/139278.

Council of Science Editors:

Nagarkar C. Addition of three animations to OSCAL. [Masters Thesis]. California State University – Sacramento; 2015. Available from: http://hdl.handle.net/10211.3/139278

15. Shih, Kevin Jonathan. Learning visual tasks with selective attention.

Degree: PhD, Computer Science, 2017, University of Illinois – Urbana-Champaign

 Knowing where to look in an image can significantly improve performance in computer vision tasks by eliminating irrelevant information from the rest of the input… (more)

Subjects/Keywords: Computer vision; Visual attention; Visual question answering (VQA); Keypoint localization; Part localization; Image recognition; Fine-grained image recognition; Deep learning; Multi-task learning; Machine learning


APA (6th Edition):

Shih, K. J. (2017). Learning visual tasks with selective attention. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/98359

Chicago Manual of Style (16th Edition):

Shih, Kevin Jonathan. “Learning visual tasks with selective attention.” 2017. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed October 23, 2019. http://hdl.handle.net/2142/98359.

MLA Handbook (7th Edition):

Shih, Kevin Jonathan. “Learning visual tasks with selective attention.” 2017. Web. 23 Oct 2019.

Vancouver:

Shih KJ. Learning visual tasks with selective attention. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2017. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/2142/98359.

Council of Science Editors:

Shih KJ. Learning visual tasks with selective attention. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/98359

16. Farghally, Mohammed Fawzi Seddik. Visualizing Algorithm Analysis Topics.

Degree: PhD, Computer Science, 2016, Virginia Tech

 Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA).… (more)

Subjects/Keywords: Algorithm Analysis Visualizations; Visual Proofs; Student Engagement; Logged Data Analysis; Educational Data Mining; Online Learning Environments; Performance Evaluation; Concept Inventory


APA (6th Edition):

Farghally, M. F. S. (2016). Visualizing Algorithm Analysis Topics. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/73539

Chicago Manual of Style (16th Edition):

Farghally, Mohammed Fawzi Seddik. “Visualizing Algorithm Analysis Topics.” 2016. Doctoral Dissertation, Virginia Tech. Accessed October 23, 2019. http://hdl.handle.net/10919/73539.

MLA Handbook (7th Edition):

Farghally, Mohammed Fawzi Seddik. “Visualizing Algorithm Analysis Topics.” 2016. Web. 23 Oct 2019.

Vancouver:

Farghally MFS. Visualizing Algorithm Analysis Topics. [Internet] [Doctoral dissertation]. Virginia Tech; 2016. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/10919/73539.

Council of Science Editors:

Farghally MFS. Visualizing Algorithm Analysis Topics. [Doctoral Dissertation]. Virginia Tech; 2016. Available from: http://hdl.handle.net/10919/73539

17. Das, Arindam. ACT-R Based Models For Learning Interactive Layouts.

Degree: PhD, Computer Science, 2015, York University

 This dissertation presents research on learning of interactive layouts. I develop two models based on a theory of cognition known as ACT-R (Adaptive Control of… (more)

Subjects/Keywords: Computer science; Cognitive psychology; Education; Cognitive model; Cognitive modeling; Learning; Retention; Memory; Decay; Interference; Proactive interference; Effort; Mental effort; Cognitive effort; Physical-cognitive effort; Human-computer interaction; HCI; Cognitive psychology; Cognitive science; Mathematical psychology; Education; Rational analysis; Mobile phone; Cell phone; Smart phone; Visual search; Phone; Keypad; Keyboard; Text entry; User Interface; GUI; ACT-R; Ego depletion; Computational science; Graphical layout; Spatial memory; Unified theory of cognition; Cognitive architecture; Mathematical modeling; Statistical power analysis; Number of model runs


APA (6th Edition):

Das, A. (2015). ACT-R Based Models For Learning Interactive Layouts. (Doctoral Dissertation). York University. Retrieved from http://hdl.handle.net/10315/28227

Chicago Manual of Style (16th Edition):

Das, Arindam. “ACT-R Based Models For Learning Interactive Layouts.” 2015. Doctoral Dissertation, York University. Accessed October 23, 2019. http://hdl.handle.net/10315/28227.

MLA Handbook (7th Edition):

Das, Arindam. “ACT-R Based Models For Learning Interactive Layouts.” 2015. Web. 23 Oct 2019.

Vancouver:

Das A. ACT-R Based Models For Learning Interactive Layouts. [Internet] [Doctoral dissertation]. York University; 2015. [cited 2019 Oct 23]. Available from: http://hdl.handle.net/10315/28227.

Council of Science Editors:

Das A. ACT-R Based Models For Learning Interactive Layouts. [Doctoral Dissertation]. York University; 2015. Available from: http://hdl.handle.net/10315/28227