You searched for subject:(visual recognition)
Showing records 1 – 30 of 349 total matches.

University of New South Wales
1.
Rollond, Tania Louise.
Between objects and images: drawing diagrams of perception.
Degree: Art, 2011, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/51385 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:10067/SOURCE01?view=true
This research is concerned with the recognition of objects, by which I mean those artefacts made by humans for use in everyday life. To recognize something has several possible meanings: to identify it by sight, to see something of your self in it, or to acknowledge that it has special importance or validity. My MFA project explores the territory between these possibilities, in theory and concept, and through drawings on paper and three-dimensional objects. In this paper, objects are considered as an expression of self. I propose that in choosing and arranging the everyday objects in my life, I try to secure some part of my existence or find some objective confirmation of my identity, a tangible point of order to hold on to in the face of an ever-changing universe. In pursuing this theme as a drawing project, I ask: what does it mean to draw these objects, and what does it mean to draw on objects? The psychology of visual perception demonstrates that what we see is a result of what we expect, remember, understand or feel. Looking is active and subjective. Therefore, I suggest that to draw these objects is to examine my self, simultaneously and intuitively mapping the external and internal, objective and subjective. My research moves in this area between seeing and feeling to create diagrams of perception, or drawings about recognition. During this project, the sub-category of objects which I call "things", those objects that we cannot recognize or name, became central to my work. I argue that our experience of a thing is different, more direct and possibly more poetic, than our experience of other objects, because it is not mediated by culture or language. The thing, as an image, occupies a position between abstraction and representation, allowing many interpretations but refusing certainty. While we look through objects to their meanings as signs, things create a still point of doubt and uncertainty. The drawings of and on objects made during my MFA address those moments in life where recognition prompts a search for answers, but no final truth can be determined.
Advisors/Committee Members: Esson, Michael, College of Fine Arts, UNSW.
Subjects/Keywords: Visual perception; Objects; Recognition

University of Illinois – Urbana-Champaign
2.
Sadeghi, Mohammad Amin.
Recognition using visual phrases.
Degree: MS, 0112, 2012, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/32060
In this thesis I introduce visual phrases, complex visual composites like "a person riding a horse". Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. I introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases.
I show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. I argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. I describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. I show this decoding procedure outperforms the state of the art. Finally, I show that decoding a combination of phrasal and object detectors produces real improvements in detector results.
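The conventional decoding baseline the abstract mentions, greedy non-maximum suppression, can be sketched as follows. This is a generic illustration of standard NMS, not the thesis's novel context-aware decoding procedure; the box format `(x1, y1, x2, y2)` and the 0.5 overlap threshold are assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def greedy_nms(detections, iou_thresh=0.5):
    # detections: list of (score, box). Visit boxes in descending score
    # order and keep each one only if it does not overlap an already-kept
    # box by more than iou_thresh.
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kbox) < iou_thresh for _, kbox in kept):
            kept.append((score, box))
    return kept
```

For example, two heavily overlapping detections of the same object collapse to the higher-scoring one, while a distant detection survives; the thesis argues this purely local suppression can be improved by reasoning about context.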
Advisors/Committee Members: Forsyth, David A. (advisor).
Subjects/Keywords: Visual Phrase; Phrasal Recognition; Visual Composites; Object Recognition

University of Cambridge
3.
Tan, Li Li.
The Phenomenology of Visual Experience.
Degree: PhD, 2019, University of Cambridge
URL: https://www.repository.cam.ac.uk/handle/1810/298816
Our visual experiences of the world around us deliver information about the visible features of the objects we see. What these features are, however, is still up for debate. On the one hand, most philosophers agree that if our visual systems are functioning normally, "low-level" features such as colours, shapes, sizes, and movements undoubtedly figure in the phenomenal character of our visual experiences, or "what it is like" for us to visually experience the world. On the other, many have argued for the expansionist view that various "high-level" features figure in visual experience over and above low-level features. These include kind features (e.g. hibiscus-ness, armchair-ness), biological features (e.g. animacy), and facial expressions (e.g. surprise). The aim of this thesis is to defend the opposing restrictivist view that the visible features of objects that figure in our visual experiences are limited to low-level ones.
The first part of the thesis argues that this debate between expansionists and restrictivists should be resolved by identifying an independently-motivated criterion to determine the features that figure in visual experience. A criterion based on visual discrimination is suggested and developed, which has the result that high-level features do not figure in visual experience over and above low-level features. The second part considers the expansionist strategy of showing that high-level features must be introduced to explain how visual categorisation and recognition work. It is argued that this strategy does not succeed, and that a restrictivist account of visual recognition turns out to be better than the expansionist one. The third part focuses on the expansionist claim that objects visually appear to us to be mind-independent. It is argued that this claim is false, because the best account of mind-independence as a perceptible feature requires us to appeal to our proprioceptive sense in addition to visual perception.
Subjects/Keywords: visual perception; visual experience; phenomenology; high-level perception; visual recognition

Ryerson University
4.
Mahvarsayyad, Fereshteh.
Texture classification using gene expression programming.
Degree: 2009, Ryerson University
URL: https://digital.library.ryerson.ca/islandora/object/RULA%3A1873
In computer vision, segmentation refers to the process of subdividing a digital image into constituent regions with homogeneity in some image characteristics. Image segmentation is considered a pre-processing step for object recognition. The problem of segmentation, one of the most difficult tasks in image processing, becomes more complicated in the presence of random textures in the image. This paper focuses on texture classification, which is defined as supervised texture segmentation with prior knowledge of the textures in the image. We investigate a classification method using Gene Expression Programming (GEP). It is shown that GEP is capable of evolving accurate classifiers using simple arithmetic operations and direct pixel values, without employing complicated feature extraction algorithms. It is also shown that the accuracy of classification is related to the fact that GEP can detect the regularities of texture patterns. As part of this project, we implemented a Photoshop plug-in that uses the evolved classifiers to identify and select target textures in digital images.
Advisors/Committee Members: Ryerson University (Degree grantor).
Subjects/Keywords: Genetic programming (Computer science); Visual texture recognition

University of Alberta
5.
Liu, Yang.
Appearance SLAM in Changing Illumination Environment.
Degree: PhD, Department of Computing Science, 2016, University of Alberta
URL: https://era.library.ualberta.ca/files/cjs956f83g
With the rapid development in visual sensors such as monocular vision, appearance-based robot simultaneous localization and mapping (SLAM) has become an open research topic in robotics. In appearance SLAM, a robot uses the visual appearance of locations (i.e., the images) acquired along its route to build a map of the environment, and localizes itself by recognizing the places it has visited before. In this thesis, we address several issues in current appearance SLAM techniques, with the intention of developing a systematic approach for SLAM under significant illumination change, a typical scenario in long-term mapping. Instead of using the traditional Bag-of-Words (BoW) image descriptor to compare the appearance of locations, we use visual features directly, to overcome the perceptual aliasing that can arise under illumination change and that is caused in part by the vector quantization of feature descriptors during image encoding. Efficient data structures such as k-d trees or random k-d forests are exploited to speed up feature matching with approximate nearest neighbor search, ensuring real-time robot exploration without sacrificing performance at the level of matching locations. To deal with cases in which local features do not work well, for example in environments with significant illumination variance where feature repeatability is not guaranteed, we propose a whole-image descriptor: a low-dimensional, compact representation of image responses to a bank of filters, incorporating the structural information (e.g. the edges) of an image to describe appearance and measure similarities among locations. PCA is employed to transform the high-dimensional gist descriptor to a lower-dimensional form, improving both the computational efficiency and the discriminating power of the descriptor. In addition, we use a particle filter to exploit the correlation among images in a sequence captured by the robot when identifying loop closure candidates, making the algorithm highly scalable thanks to both the compactness of the image descriptor and the simplicity of particle filtering. Building on the above methods, the final component of our SLAM system is a novel feature matching method for multi-view geometry (MVG) based verification of loop closures under illumination change. To develop such a method, which serves as the prerequisite of verification, we exploit the particular camera motion in our application to show that a spatial constraint on matching features (or keypoints), derived from optical flow statistics, can be used as an important basis for finding true matches. In particular, by assuming a weak perspective camera model and planar camera motion, we derive a simple constraint on correctly matched keypoints in terms of the flow vectors between two images. We then use this constraint to prune the putative matches, boosting the inlier ratio significantly and thereby giving the subsequent verification algorithm a chance to succeed.
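The PCA step the abstract describes, compressing a high-dimensional gist-style descriptor to a compact form, can be sketched generically. This is not the thesis's code; the descriptor dimension, the target dimension `k`, and the use of SVD are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, k):
    # X: (n_images, d) matrix of gist-style whole-image descriptors.
    # Centre the data and take the top-k right singular vectors as the
    # principal directions (rows of Vt are orthonormal).
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, components):
    # Compress one d-dimensional descriptor to k dimensions; distances
    # between projected descriptors then serve as a cheap place-similarity.
    return components @ (x - mean)
```

In a loop-closure pipeline of this kind, each incoming image's descriptor would be projected this way before comparison against the map, which is where the scalability claim comes from.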
Subjects/Keywords: Place Recognition; Robot Vision; Visual Navigation; SLAM

Texas Tech University
6.
Lovelace, Mitch TechM.
THE CELLULAR GROWTH ANALYZER: A SIMPLER AND MORE COMPREHENSIVE SCRATCH ASSAY ANALYZING PROGRAM.
Degree: MS in Mechanical Engineering, Mechanical Engineering, 2018, Texas Tech University
URL: http://hdl.handle.net/2346/82085
A scratch, or wound, assay is a low-cost and simple method to measure cell migration; it is also an easy way to measure the growth rate of cancer cells in vitro. It does so by removing a strip of cells from a cell culture dish and then measuring the cell migration and cellular growth back into the "scratch" area. The measurement of this migration and cellular growth has traditionally been done using the program ImageJ. Though this method can work, it is labor intensive and time consuming. We therefore developed a more automated and easier-to-use Java program designed specifically to analyze scratch assays.
By contrasting and separating the pixels that formed the scratch, and then counting them to provide a numerical result for the migration and growth, we significantly improved the speed and ease of the analysis while making the results easily repeatable compared to ImageJ. Further development, refinement, and addition of new features to this software will significantly aid researchers, especially in the area of cancer research and in assessing the effectiveness of various treatments on cell migration.
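The core pixel-counting idea can be illustrated with a minimal sketch. The actual program described is a Java tool; this hypothetical Python fragment assumes grayscale images in which pixels below a contrast threshold mark the cell-free scratch area, which may not match the real imaging setup.

```python
def scratch_area(image, threshold=40):
    # image: 2-D sequence of grayscale pixel values (0-255).
    # Count pixels below the threshold as cell-free "scratch" pixels;
    # the count is a numeric proxy for the remaining wound area.
    return sum(1 for row in image for px in row if px < threshold)

def closure_fraction(before, after, threshold=40):
    # Fraction of the original scratch that cells have grown back into,
    # comparing the assay image at time zero with a later time point.
    a0 = scratch_area(before, threshold)
    a1 = scratch_area(after, threshold)
    return (a0 - a1) / a0 if a0 else 0.0
```

Tracking `closure_fraction` across time points gives the migration/growth readout that would otherwise be measured by hand in ImageJ.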
Advisors/Committee Members: Moussa, Hanna (advisor), Kumar, Golden (committee member), Ramalingam, Latha (committee member).
Subjects/Keywords: Scratch Assay; Medical Equipment; Visual Recognition; Programming

Hong Kong University of Science and Technology
7.
Cao, Nan.
Visual analysis of relational patterns in multidimensional data.
Degree: 2012, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-7733 ; https://doi.org/10.14711/thesis-b1198602 ; http://repository.ust.hk/ir/bitstream/1783.1-7733/1/th_redirect.html
Multidimensional data are commonly used to represent both structured and unstructured information. Understanding the innate relations among different dimensions and data items is one of the most important tasks in multidimensional data analysis. However, relational data patterns such as correlations and co-occurrences, and semantic relations such as causality, topics, and clusters, are usually difficult for users to detect, as the data are heterogeneous in nature, huge in amount, and contain various statistical features. Although many fundamental data analysis techniques such as clustering and correlation analysis have been widely used in various application domains, it is still difficult for users to understand, interpret, compare, and evaluate analysis results given the lack of context information. Information visualization can be of great value for multidimensional data analysis, as it can represent the data in intuitive ways with rich context over multiple dimensions, and it supports explorative visual analysis that keeps humans in the loop. In this thesis, we introduce advanced visual analysis techniques for uncovering relational patterns in complicated multidimensional datasets, including structured multivariate data, unstructured text documents, and heterogeneous datasets such as social media data that contain both structured and unstructured information. Multiple visualizations are designed for these three data types to represent relational patterns within the same or across different information facets. First, for multivariate data, we introduce DICON, an icon-based cluster visualization that embeds statistical information into a multi-attribute display to facilitate cluster interpretation, evaluation, and comparison. Then, for unstructured documents, we design a set of visual analysis systems, Contex-Tour, FacetAtlas, and Solarmap, for topic analysis based on our proposed multifaceted entity relational data model.
These systems respectively represent the multifaceted topic patterns among named entities, the multi-relational patterns within topics inside the same information facet, and the semantic relational patterns within topics across different information facets. Finally, for heterogeneous data such as Twitter datasets, we introduce Whisper for visualizing dynamic relationships between users in the context of the information diffusion process of a given event. These relations capture three key aspects: the temporal trend, the social-spatial extent, and the community response to a topic of interest. To the best of our knowledge, the above techniques are cutting-edge studies in visually analyzing relational patterns in structured, unstructured, and heterogeneous multidimensional datasets. To show the power and usefulness of our study, all the proposed visual analysis systems and corresponding techniques have been applied to real datasets and formally evaluated by domain experts or common users.
Subjects/Keywords: Visual analytics; Information visualization; Pattern recognition systems

University of Illinois – Urbana-Champaign
8.
Wang, Gang.
Datasets, features, learning, and models in visual recognition.
Degree: PhD, 1200, 2011, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/18373
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition.
In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). The resources provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method.
Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples. This text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using the PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small.
With more and more collected training data, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as a kernelized SVM. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). This training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck in processing large-scale datasets. This dissertation applies this approach to train classifiers of Flickr groups with many group training examples. The resulting Flickr group prediction scores can be used to measure image similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features.
Visual models are usually trained to best separate positive and negative training examples.…
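The k-nearest-neighbor tag transfer described above can be sketched roughly as follows. This is a hypothetical illustration, not the dissertation's implementation; the squared-Euclidean distance and the normalized bag-of-tags output are assumptions.

```python
from collections import Counter

def text_feature(query_vec, aux_images, k=5):
    # aux_images: list of (feature_vector, tag_list) pairs from a tagged
    # auxiliary collection downloaded from the Internet. The query image
    # has no tags of its own, so we borrow tags from its k visually
    # nearest neighbours and pool them into a normalized tag histogram.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(aux_images, key=lambda im: sq_dist(query_vec, im[0]))[:k]
    counts = Counter(tag for _, tags in neighbours for tag in tags)
    total = sum(counts.values()) or 1
    return {tag: c / total for tag, c in counts.items()}
```

The resulting histogram can then be fed to a classifier alongside (or instead of) the raw visual features, which is the sense in which the text feature stays stable when the object's appearance changes.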
Advisors/Committee Members: Forsyth, David A. (advisor), Forsyth, David A. (Committee Chair), Hoiem, Derek W. (committee member), Huang, Thomas S. (committee member), Ahuja, Narendra (committee member), Hasegawa-Johnson, Mark A. (committee member).
Subjects/Keywords: Visual Recognition; Datasets; Features; Learning; Models

Brock University
9.
Weissflog, Meghan.
Behavioural and neural correlates of emotion recognition as a function of psychopathic personality traits
.
Degree: Department of Psychology, 2011, Brock University
URL: http://hdl.handle.net/10464/3420
► Psychopathy is associated with well-known characteristics such as a lack of empathy and impulsive behaviour, but it has also been associated with impaired recognition of…
(more)
▼ Psychopathy is associated with well-known characteristics such as a lack of empathy and impulsive behaviour, but it has also been associated with impaired recognition of emotional facial expressions. The use of event-related potentials (ERPs) to examine this phenomenon could shed light on the specific time course and neural activation associated with emotion recognition processes as they relate to psychopathic traits. In the current study we examined the P1, N170, and vertex positive potential (VPP) ERP components and behavioural performance with respect to scores on the Self-Report Psychopathy (SRP-III) questionnaire. Thirty undergraduates completed two tasks, the first of which required the recognition and categorization of affective face stimuli under varying presentation conditions. Happy, angry or fearful faces were presented with attention directed to the mouth, nose or eye region and varied stimulus exposure duration (30, 75, or 150 ms). We found behavioural performance to be unrelated to psychopathic personality traits in all conditions, but there was a trend for the N170 to peak later in response to fearful and happy facial expressions for individuals high in psychopathic traits. However, the amplitude of the VPP was significantly negatively associated with psychopathic traits, but only in response to stimuli presented under a nose-level fixation. Finally, psychopathic traits were found to be associated with longer N170 latencies in response to stimuli presented under the 30 ms exposure duration.
In the second task, participants were required to inhibit processing of irrelevant affective and scrambled face distractors while categorizing unrelated word stimuli as living or nonliving. Psychopathic traits were hypothesized to be positively associated with behavioural performance, as it was proposed that individuals high in psychopathic traits would be less likely to automatically attend to task-irrelevant affective distractors, facilitating word categorization. Thus, decreased interference would be reflected in smaller N170 components, indicating less neural activity associated with processing of distractor faces. We found that overall performance decreased in the presence of angry and fearful distractor faces as psychopathic traits increased. In addition, the amplitude of the N170 decreased and the latency increased in response to affective distractor faces for individuals with higher levels of psychopathic traits.
Although we failed to find the predicted behavioural deficit in emotion recognition in Task 1 and the facilitation effect in Task 2, the findings of increased N170 and VPP latencies in response to emotional faces are consistent with the proposition that abnormal emotion recognition processes may in fact be inherent to psychopathy as a continuous personality trait.
Subjects/Keywords: Personality disorders;
Visual perception;
Recognition (Psychology)

University of New South Wales
10.
Liu, Yingying.
Improving the utilization of training samples in visual recognition.
Degree: Computer Science & Engineering, 2016, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/56113
;
https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40034/SOURCE02?view=true
► Recognition is a fundamental computer vision problem, in which training samples are used to learn models, that then assign labels to test samples. The utilization…
(more)
▼ Recognition is a fundamental computer vision problem, in which training samples are used to learn models that then assign labels to test samples. The utilization of training samples is of vital importance to visual recognition, and can be addressed by increasing the capability of the description methods and the model learning methods. Two visual recognition tasks, namely object detection and action recognition, are considered in this thesis. Active learning utilizes selected subsets of the training dataset as training samples. Active learning methods select the most informative training samples in each iteration, and therefore require fewer training samples to attain performance comparable to passive learning methods. In this thesis, an active learning method for object detection that exploits the distribution of training samples is presented. Experiments show that the proposed method outperforms a passive learning method and a simple-margin active learning method. Weakly supervised learning facilitates learning on training samples with weak labels. In this thesis, a weakly supervised object detection method is proposed to utilize training samples with probabilistic labels. Base detectors are used to create object proposals from training samples with weak labels, and the object proposals are then assigned estimated probabilistic labels. A Generalized Hough Transform based object detector is extended to utilize the object proposals with probabilistic labels as training samples. The proposed method is shown to outperform both a comparison method that assigns strong labels to object proposals and a weakly supervised deformable part-based models method. The proposed method also attains performance comparable to supervised learning methods. Increasing the capability of the description method can improve the utilization of training samples. In this thesis, temporal pyramid histograms are proposed to address the problem of missing temporal information in the classical bag-of-features description method used in action recognition. Experiments show that the proposed description method outperforms the classical bag-of-features method in action recognition.
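The temporal pyramid idea, retaining the coarse temporal order that a plain bag of features discards, can be sketched as follows. The segment counts and level depth here are illustrative assumptions, not the thesis's exact design:

```python
from collections import Counter

def temporal_pyramid_histogram(codewords, vocab_size, levels=3):
    """Toy temporal pyramid over a sequence of per-frame codeword ids.

    Level l splits the sequence into 2**l equal segments; each segment
    gets its own bag-of-features histogram, and all histograms are
    concatenated so coarse-to-fine temporal order is preserved."""
    n = len(codewords)
    feature = []
    for level in range(levels):
        segments = 2 ** level
        for s in range(segments):
            lo = s * n // segments
            hi = (s + 1) * n // segments
            counts = Counter(codewords[lo:hi])   # histogram for this segment
            feature.extend(counts.get(w, 0) for w in range(vocab_size))
    return feature
```

With `levels=1` this reduces exactly to the classical bag of features; deeper pyramids trade descriptor length for temporal resolution.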
Advisors/Committee Members: Sowmya, Arcot, Computer Science & Engineering, Faculty of Engineering, UNSW, Yang, Wang, NICTA, Wei, Wang, Computer Science & Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: Model learning; Visual recognition; Training sample description

University of New South Wales
11.
Xu, Jie.
On-line and unsupervised learning for codebook based visual recognition.
Degree: Computer Science & Engineering, 2011, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/51513
;
https://unsworks.unsw.edu.au/fapi/datastream/unsworks:10200/SOURCE02?view=true
► In this thesis we develop unsupervised and on-line learning algorithmsfor codebook based visual recognition tasks. First, we study the Prob-abilistic Latent Semantic Analysis (PLSA), which…
(more)
▼ In this thesis we develop unsupervised and on-line learning algorithms for codebook based visual recognition tasks. First, we study Probabilistic Latent Semantic Analysis (PLSA), which is one instance of codebook based recognition models. It has been successfully applied to visual recognition tasks such as image categorization, action recognition, etc. However, it has mainly been learned in batch mode, and therefore cannot handle data that arrives sequentially. We propose a novel on-line learning algorithm for learning the parameters of the PLSA under that situation. Our contributions are two-fold: (i) an on-line learning algorithm that learns the parameters of the PLSA model from incoming data; (ii) a codebook adaptation algorithm that can capture the full characteristics of all features during learning. Experimental results demonstrate that the proposed algorithm can handle sequentially arriving data that batch PLSA learning cannot cope with.
We then look at the Implicit Shape Model (ISM) for object detection. ISM is a codebook based model in which object information is retained in codebooks. Existing ISM based methods require manual labeling of training data. We propose an algorithm that can label the training data automatically. We also propose a method for identifying moving edges in video frames so that object hypotheses can be generated only from the moving edges. We compare the proposed algorithm with a background subtraction based moving object detection algorithm. The experimental results demonstrate that the proposed algorithm achieves comparable performance to the background subtraction based counterpart, and even outperforms it in complex situations.
We then extend the aforementioned batch algorithm for on-line learning. We propose an on-line training data collection algorithm and an on-line codebook based object detector. We evaluate the algorithm on three video datasets. The experimental results demonstrate that our algorithm outperforms the state-of-the-art on-line conservative learning algorithm.
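The codebook adaptation step, updating codewords as features arrive rather than fixing them after batch clustering, can be sketched as a sequential k-means update. This is an illustrative stand-in, not the thesis's PLSA-specific adaptation rule:

```python
def update_codebook(codebook, counts, feature, lr=None):
    """Toy online codebook adaptation: assign an incoming feature vector to
    its nearest codeword, then move that codeword toward the feature
    (sequential k-means with a per-codeword decaying step size)."""
    # nearest codeword by squared Euclidean distance
    best = min(range(len(codebook)),
               key=lambda k: sum((c - f) ** 2
                                 for c, f in zip(codebook[k], feature)))
    counts[best] += 1
    step = lr if lr is not None else 1.0 / counts[best]   # decaying step
    codebook[best] = [c + step * (f - c)
                      for c, f in zip(codebook[best], feature)]
    return best   # index of the codeword the feature was assigned to
```

Each codeword thus converges to the running mean of the features assigned to it, which is what lets the codebook track sequentially arriving data.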
Advisors/Committee Members: Wang, Yang, Computer Science & Engineering, Faculty of Engineering, UNSW, Wang, Wei, Computer Science & Engineering, Faculty of Engineering, UNSW, Ye, Getian, Computer Science & Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: Visual recognition; Online learning; Unsupervised learning

University of New South Wales
12.
Xu, Joe.
The Effects of Semantic and Syntactic Factors on Complex Word Processing.
Degree: Psychology, 2015, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/54893
;
https://unsworks.unsw.edu.au/fapi/datastream/unsworks:36097/SOURCE02?view=true
► The studies in this thesis examine the semantic and syntactic factors that influence the recognition of morphologically complex words. The experiments of Chapter 2 explored…
(more)
▼ The studies in this thesis examine the semantic and syntactic factors that influence the recognition of morphologically complex words. The experiments of Chapter 2 explored the competing mechanisms between stem homographs. Unmasked priming experiments showed that the presence of inhibitory priming between complex words that correspond to different meanings of a homograph is dependent on the relative dominance of the prime-target pair. That is, inhibitory priming was found when the prime corresponded to the dominant meaning of the homograph and the target to the subordinate meaning (e.g., fined as a prime for FINEST), but not when the relative dominance was reversed (e.g., longer-LONGING). Within a masked priming paradigm, the incompatible condition with dominant primes and subordinate targets did not produce facilitatory priming, while facilitatory priming was found when relative dominance was reversed. Chapter 3 examined the effects of semantic transparency on unmasked morphological priming and base frequency effects, and found that the size of the morphological priming and base frequency effects interacted with semantic transparency in a continuous manner. Specifically, complex words that were higher on transparency (e.g., thriller) produced larger priming and base frequency effects than partially transparent words that were low on transparency (e.g., archer), while words with no semantic overlap with their stems (e.g., corner) did not produce any significant results. The experiments reported in Chapter 4 looked at nonword processing. The processing of nonwords with derivational suffixes was based on the interpretability of the stem and affix, where the more interpretable combinations (e.g., STRENGTHIFY) were harder to identify as nonwords than the less interpretable ones (e.g., OAKABLE). 
For the nonwords with inflectional suffixes, those with grammatically incompatible stem-affix combinations (e.g., GIANTED) were easier to reject as words compared to the semantically incompatible (e.g., OXYGENS), and interpretable nonwords (e.g., GOOSES), which suggests that the processing of nonwords with inflectional suffixes was based on the type of incompatibility between the stem and affix. The implications of these results for existing models of complex word processing were discussed, including connectionist frameworks, the supralexical model, and pre-lexical decomposition models.
Subjects/Keywords: Morphology; Visual word recognition; Reading; Priming

University of Waikato
13.
Railton, Renee Caron Richards.
Visual discrimination and object/picture recognition in hens
.
Degree: 2011, University of Waikato
URL: http://hdl.handle.net/10289/5199
► Eight experiments were conducted to examine different aspects of hen’s visual behaviour, and to assess whether hens responded to photographs in the same way they…
(more)
▼ Eight experiments were conducted to examine different aspects of hens' visual behaviour, and to assess whether hens responded to photographs in the same way they do to the real objects depicted in the photographs. In Experiment 1, six hens were trained to perform either a conditional discrimination (successive) or forced-choice discrimination (simultaneous) between flickering (25 Hz) and steady lights. A descending method of limits procedure was then used to increase the flicker speed by 5 Hz over blocks of 20 trials until percentages correct decreased below 55%. The critical flicker fusion frequency of hens was found to range between 68.5 and 95.4 Hz (at a luminance of 300 cd/m²). In Experiment 2, hens were trained to discriminate between steady images presented on a TFT screen, and tested for transfer of that discrimination to a CRT monitor at different refresh rates, on which the images were assumed to appear flickering. It was found that hens showed transfer across all refresh rates with coloured stimuli, but that the degree of transfer decreased as refresh rate decreased with stimuli that were discriminable only by shape. In Experiment 3, a similar decrease in accuracy was shown as refresh rate decreased using a range of stimuli. However, hens did not learn to discriminate all stimuli, and thus transfer could not be assessed with some stimuli. In Experiment 4, hens were trained with flickering images and showed relatively high transfer to less flickering, or steady, images. In Experiment 5, a procedure was developed to assess whether hens transferred a discrimination of 3D objects to 2D photographs of those objects, and vice versa. In Experiment 6, hens were trained to discriminate stimuli of different colours, or of different shapes. The hens learned to discriminate, and transferred this discrimination, with the coloured shapes. The hens also learned to discriminate the same-colour (but differently shaped) stimuli; however, further testing showed that extraneous variables had come to control behaviour.
As a result, the equipment was modified for Experiments 7 and 8. In both experiments, only three of the six hens showed discrimination to any degree, and none transferred this discrimination to photographs or objects. It was concluded that hens do not respond to objects depicted in pictures in the same way they do to the real objects. Thus, these experiments show that animals' visual systems need to be taken into account when visual stimuli are used in research, and researchers first need to establish that animals can see the visual stimuli and that the method of stimulus presentation is species-appropriate if images are to be used as representatives of real-world stimuli.
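The Experiment 1 threshold procedure can be expressed as a simple loop; `block_accuracy` is a hypothetical callable returning the proportion correct for a 20-trial block at a given flicker frequency, standing in for the live behavioural measurement:

```python
def critical_flicker_threshold(block_accuracy, start_hz=25.0, step_hz=5.0,
                               criterion=0.55):
    """Sketch of the procedure above: flicker speed rises in 5 Hz steps
    over blocks of trials until percent correct falls below 55%; the last
    frequency still discriminated estimates the critical flicker fusion
    frequency for that hen."""
    hz = start_hz
    while block_accuracy(hz) >= criterion:
        hz += step_hz
    return hz - step_hz   # last frequency passed at or above criterion
```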
Advisors/Committee Members: Temple, William (advisor), Foster, T. Mary (advisor).
Subjects/Keywords: hens;
conditional discrimination;
visual recognition;
critical flicker fusion frequency;
3D recognition

Queen Mary, University of London
14.
Fu, Yanwei.
Attribute learning for image/video understanding.
Degree: PhD, 2015, Queen Mary, University of London
URL: http://qmro.qmul.ac.uk/xmlui/handle/123456789/8920
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.667425
► For the past decade computer vision research has achieved increasing success in visual recognition including object detection and video classification. Nevertheless, these achievements still cannot…
(more)
▼ For the past decade computer vision research has achieved increasing success in visual recognition, including object detection and video classification. Nevertheless, these achievements still cannot meet the urgent needs of image and video understanding. The recent rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. In particular, these types of media data usually contain very complex social activities of a group of people (e.g. a YouTube video of a wedding reception) and are captured by consumer devices with poor visual quality. Thus it is extremely challenging to automatically understand such a high number of complex image and video categories, especially when these categories have never been seen before. One way to understand categories with no or few examples is transfer learning, which transfers knowledge across related domains, tasks, or distributions. In particular, lifelong learning has recently become popular, which aims at transferring information to tasks without any observed data. In computer vision, transfer learning often takes the form of attribute learning. The key underpinning idea of attribute learning is to exploit transfer learning via intermediate-level semantic representations – attributes. Semantic attributes are most commonly used as a semantically meaningful bridge between low-level feature data and higher-level class concepts, since they can be used both descriptively (e.g., 'has legs') and discriminatively (e.g., 'cats have it but dogs do not'). Previous works propose many different attribute learning models for image and video understanding. However, several intrinsic limitations and problems exist in previous attribute learning work.
Such limitations discussed in this thesis include limitations of user-defined attributes, projection domain-shift problems, prototype sparsity problems, the inability to combine multiple semantic representations, and noisy annotations of relative attributes. To tackle these limitations, this thesis explores attribute learning for image and video understanding from the following three aspects. Firstly, to break the limitations of user-defined attributes, a framework for learning latent attributes is presented for automatic classification and annotation of unstructured group social activity in videos, which enables attribute learning for understanding complex multimedia data with sparse and incomplete labels. We investigate the learning of latent attributes for content-based understanding, which aims to model and predict classes and tags relevant to objects, sounds and events – anything likely to be used by humans to describe or search for media. Secondly, we propose a framework of transductive multi-view embedding hypergraph label propagation and solve three inherent limitations of most previous attribute learning work, i.e., the projection domain-shift problem, the prototype sparsity problem, and the inability to combine multiple semantic representations. We…
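The descriptive/discriminative role of attributes described above is easiest to see in a toy direct-attribute-style zero-shot classifier. The class signatures and scoring rule here are illustrative assumptions, not the embedding framework the thesis proposes:

```python
def zero_shot_predict(attribute_scores, class_signatures):
    """Toy attribute-based zero-shot classification: an image's predicted
    attribute scores are matched against per-class binary attribute
    signatures (e.g. cat -> 'has legs'=1), and the class whose signature
    best agrees is returned. Unseen classes need only a signature, no
    training images."""
    def agreement(signature):
        # reward evidence for attributes the class has, penalize evidence
        # for attributes it lacks
        return sum(s if has_attr else -s
                   for has_attr, s in zip(signature, attribute_scores))
    return max(class_signatures, key=lambda c: agreement(class_signatures[c]))
```

This is the "attributes as a bridge" mechanism: the attribute predictors are trained once on seen classes, and novel categories are described purely at the semantic level.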
Subjects/Keywords: 621.39; Electronic Engineering; Video understanding; Visual recognition; Computer vision; Image recognition

University of Missouri – Columbia
15.
Brewster, Eric B.
The histogram of partitioned localized image textures.
Degree: 2017, University of Missouri – Columbia
URL: http://hdl.handle.net/10355/62032
► In the field of machine learning and pattern recognition, texture has been a prominent area of research. Humans are uniquely equipped to distinguish texture; however,…
(more)
▼ In the field of machine learning and pattern recognition, texture has been a prominent area of research. Humans are uniquely equipped to distinguish texture; computers, however, are better suited to automating the process. Computers accomplish this by taking images and extracting meaningful features that describe their texture. Some of these features are the Haralick texture features, the local binary pattern (LBP), and the local directional pattern (LDP). Using the local directional pattern as an example, we propose a new texture feature called the histogram of partitioned localized image textures (HoPLIT). This feature utilizes a set of filters, not necessarily directional, and generates filter response vectors at every pixel location. These response vectors can be thought of as words in a document, which suggests the bag-of-words model. Using the bag-of-words model, a codebook is created by partitioning a subset of response vectors from the entire data set. The partitions are represented by their mean textures, which serve as the words in the codebook. The mean textures now represent the keywords within the document, i.e. the image. A histogram descriptor for an image is the frequency of pixels that belong to each partition. This feature is applied to texture classification and segmentation problems as well as object detection. Within each problem domain, the HoPLIT feature is compared to the Haralick texture features, LBP, and LDP. The HoPLIT feature does very well classifying texture as well as segmenting large texture mosaics. HoPLIT also shows a surprising robustness to noise. Object detection proves to be slightly more difficult than texture classification for HoPLIT. However, it continues to outperform LBP and LDP.
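The HoPLIT pipeline described above (filter responses per pixel, vector quantization against codebook mean textures, then a frequency histogram) can be sketched as follows; the filter bank and the partitioning method that produces the codebook are left abstract here:

```python
def hoplit_histogram(response_vectors, codebook):
    """Sketch of the HoPLIT descriptor idea: each pixel's filter-response
    vector is assigned to the nearest codebook 'mean texture', and the
    image is described by the normalized frequency histogram of those
    assignments (the bag-of-words analogy from the abstract)."""
    hist = [0] * len(codebook)
    for r in response_vectors:
        # nearest mean texture by squared Euclidean distance
        best = min(range(len(codebook)),
                   key=lambda k: sum((c - v) ** 2
                                     for c, v in zip(codebook[k], r)))
        hist[best] += 1
    total = float(len(response_vectors))
    return [h / total for h in hist]   # frequency of pixels per partition
```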
Advisors/Committee Members: Keller, James (advisor).
Subjects/Keywords: Machine learning; Pattern recognition systems; Visual texture recognition
APA (6th Edition):
Brewster, E. B. (2017). The histogram of partitioned localized image textures. (Thesis). University of Missouri – Columbia. Retrieved from http://hdl.handle.net/10355/62032

Lehigh University
16.
Sabia, Matthew T .
Exploring the Long-Term Consequences of False and Correct Recognition.
Degree: MS, Psychology, 2015, Lehigh University
URL: https://preserve.lehigh.edu/etd/2789
▼ Human memory is not a precise picture of the past and thus is prone to error. It is susceptible to the presentation of distorting information (e.g., Loftus, 1977; Loftus, Miller, & Burns, 1978), and people sometimes struggle to distinguish between highly similar alternatives (Guerin et al., 2012). However, certain situations seem to foster excellent memory even when the distinctions to be made are quite difficult (Brady et al., 2008). I conducted three experiments in an attempt to better understand how avoidance of common memory errors influences long-term memory. All three experiments asked participants to study individual objects during encoding. Immediately after, they performed a visual recognition test, with some conditions fostering false recognition (Guerin et al., 2012). Forty-eight hours later, a second recognition test was administered to test the long-term effects of initial correct and false recognitions. Experiment 1 asked whether rejecting a non-target foil of high similarity to a previously encountered target at Test 1 might lead to richer encoding of that foil, leading to better recognition of that foil at Test 2. Experiment 2 asked whether rejecting the foil initially might cause subjects to mistake the representation of the foil for the target representation. I found evidence against these hypotheses. Correct rejection of high-similarity foils at Test 1 led to subsequent failure to recognize those foils at Test 2 (Experiment 1). Moreover, even when the targets were again presented at Test 2 (Experiment 2), initial correct rejections most often led to subsequent failure to recognize targets. Experiment 3 asked whether correct rejections at Test 2 were made on the basis of impaired recognition for conceptual or perceptual details. The results suggest that both conceptual and perceptual details play a role in Test 2 rejections. These findings are discussed with respect to misleading post-event information, reconsolidation, intentional forgetting, and the role of sleep in memory consolidation. Overall, the results of these studies produce a coherent pattern. Rejections on an initial test often lead to subsequent memory failure, while initial false recognitions of a target-related foil often lead to later false recognition of the same foil, but do not necessarily interfere with access to the target representation.
Advisors/Committee Members: Hupbach, Almut.
Subjects/Keywords: false memory; false recognition; gist; recognition memory; visual recognition; Psychology; Social and Behavioral Sciences

Michigan State University
17.
Coggins, James Michael.
A framework for texture analysis based on spatial filtering.
Degree: PhD, Department of Computer Science, 1982, Michigan State University
URL: http://etd.lib.msu.edu/islandora/object/etd:45914
Subjects/Keywords: Visual texture recognition; Visual perception

University of Maryland
18.
Choi, Jonghyun.
Recognizing Visual Categories by Commonality and Diversity.
Degree: Electrical Engineering, 2015, University of Maryland
URL: http://hdl.handle.net/1903/16561
▼ Visual categories refer to categories of objects or scenes in the computer vision literature. Building a well-performing classifier for visual categories is challenging, as it requires a high level of generalization: the categories have large within-class variability. We present several methods to build generalizable classifiers for visual categories by exploiting the commonality and diversity of labeled samples and of the category definitions to improve category classification accuracy.
First, we describe a method to discover and add unlabeled samples from auxiliary sources to categories of interest for building better classifiers. In the literature, given a pool of unlabeled samples, the samples to be added are usually discovered based on low-level visual signatures such as edge statistics, shape, or color by an unsupervised or semi-supervised learning framework. This approach is inexpensive, as it does not require human intervention, but it generally does not provide useful information for accuracy improvement because the selected samples are visually similar to the existing set of samples. The samples added by active learning, on the other hand, provide different visual aspects to categories and contribute to learning a better classifier, but they are expensive as they need human labeling. To obtain high-quality samples at lower annotation cost, we present a method to discover and add samples from unlabeled image pools that are visually diverse but coherent with the category definition by using higher-level visual aspects, captured by a set of learned attributes. The method significantly improves the classification accuracy over the baselines without human intervention.
Second, we describe how to learn an ensemble of classifiers that captures both commonly shared information and diversity among the training samples. To learn such ensemble classifiers, we first discover discriminative sub-categories of the labeled samples for diversity. We then learn an ensemble of discriminative classifiers with a constraint that minimizes the rank of the stacked matrix of classifiers. The resulting set of classifiers both shares the category-wide commonality and preserves the diversity of subcategories. The proposed ensemble classifier improves recognition accuracy significantly over the baselines and state-of-the-art subcategory-based ensemble classifiers, especially for the challenging categories.
Third, we explore the commonality and diversity of semantic relationships among category definitions to improve classification accuracy in an efficient manner. Specifically, our classification model identifies the most helpful relational semantic queries to discriminatively refine the model by a small amount of semantic feedback in interactive iterations. We improve the classification accuracy on challenging categories that have very small numbers of training samples via knowledge transferred from other related categories that have a larger number of training samples, by solving a semantically constrained transfer learning optimization problem.
…
Advisors/Committee Members: Davis, Larry Steven (advisor).
Subjects/Keywords: Computer science; attributes; classification; classifier; computer vision; visual category; visual recognition

Loughborough University
19.
Ahmad, Nasir.
A motion based approach for audio-visual automatic speech recognition.
Degree: PhD, 2011, Loughborough University
URL: http://hdl.handle.net/2134/8564
▼ The research work presented in this thesis introduces novel approaches for both visual region-of-interest extraction and visual feature extraction for use in audio-visual automatic speech recognition. In particular, the speaker's movement that occurs during speech is used to isolate the mouth region in video sequences, and motion-based features obtained from this region are used to provide new visual features for audio-visual automatic speech recognition. The mouth region extraction approach proposed in this work is shown to give superior performance compared with existing colour-based lip segmentation methods. The new features are obtained from three separate representations of motion in the region of interest, namely the difference in luminance between successive images, block-matching-based motion vectors, and optical flow. The new visual features are found to improve visual-only and audio-visual speech recognition performance when compared with the commonly used appearance-feature-based methods. In addition, a novel approach is proposed for visual feature extraction from either the discrete cosine transform or discrete wavelet transform representations of the mouth region of the speaker. In this work, the image transform is explored from a new viewpoint of data discrimination, in contrast to the more conventional data preservation viewpoint. The main findings of this work are that audio-visual automatic speech recognition systems using the new features, extracted from the frequency bands selected according to their discriminatory abilities, generally outperform those using features designed for data preservation. To establish the noise robustness of the new features proposed in this work, their performance has been studied in the presence of a range of different types of noise and at various signal-to-noise ratios. In these experiments, the audio-visual automatic speech recognition systems based on the new approaches were found to give superior performance both to audio-visual systems using appearance-based features and to audio-only speech recognition systems.
Subjects/Keywords: 005.3; Automatic speech recognition (ASR); Audio-visual automatic speech recognition (AVASR); Bi-modal speech recognition; Visual front-end; Features extraction; Visual ROI; Speech dynamics

Florida Atlantic University
20.
Rashford, Stacey.
How the Spatial Organization of Objects Affects Perceptual Processing of a Scene.
Degree: MS, 2015, Florida Atlantic University
URL: http://purl.flvc.org/fau/fd/FA00004537
▼ Summary: How does the spatial organization of objects affect the perceptual processing of a scene? Surprisingly, little research has explored this topic. A few studies have reported that, when simple, homogeneous stimuli (e.g., dots) are presented in a regular formation, they are judged to be more numerous than when presented in a random configuration (Ginsburg, 1976; 1978). However, these results may not apply to real-world objects. In the current study, fewer objects were believed to be on organized desks than on their disorganized equivalents. Objects that are organized may be more likely to become integrated, due to classic Gestalt principles. Consequently, visual search may be more difficult. Such object integration may diminish saliency, making objects less apparent and more difficult to find. This could explain why, in the present study, objects on disorganized desks were found faster.
2015
Degree granted: Thesis (M.S.) – Florida Atlantic University, 2015.
Collection: FAU
Advisors/Committee Members: Barenholtz, Elan (Thesis advisor), Florida Atlantic University (Degree grantor), Charles E. Schmidt College of Science, Department of Psychology.
Subjects/Keywords: Image analysis; Optical pattern recognition; Pattern recognition systems; Phenomenological psychology; Visual perception

Iowa State University
21.
Peasley, Charles Josef.
What causes the dip in object recognition rotation functions?.
Degree: 2019, Iowa State University
URL: https://lib.dr.iastate.edu/etd/17538
▼ Two experiments were conducted to determine why there is a local improvement in recognition times when object images are inverted. Experiment 1 used naturally occurring, everyday objects and measured the effects of picture-plane rotation on their identification times. Performance varied according to spatial configuration type: only side-of objects became easier to recognize upon complete inversion than at neighboring orientations, forming a “dip”. Above-below objects became increasingly difficult to recognize as rotation approached 180 degrees. Experiment 2 employed novel nonsense objects in a sequential matching paradigm. Rotation function shapes displayed an interaction in the same direction as Experiment 1, though no “dip” in response times was observed. In Experiment 2, experimenter-prescribed categorical part relations influenced the shape of rotation functions for recognition independently of other object properties. Rotation functions revealed that obtaining this counter-intuitive local improvement depends upon the presence of side-of relations between an object's parts. Together, these experiments provide evidence for the use of categorically coded structural descriptions in object recognition.
Subjects/Keywords: Object recognition; recognition-by-components; rotation function; structural description; visual perception; Cognitive Psychology

University of Washington
22.
Oganyan, Marina.
The Role of Morphology in Word Recognition of Hebrew as a Templatic Language.
Degree: PhD, 2017, University of Washington
URL: http://hdl.handle.net/1773/40618
▼ Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the roles of the root and template morphemes in recognition. It also explores the role of morphology in word recognition across modality (visual vs. auditory). Finally, it investigates whether the acquisition of visual word recognition processes in Hebrew by speakers of a concatenative (non-templatic) language depends upon age of acquisition or age of arrival. The findings for native speakers in this dissertation suggest that both templatic words and affixed words in Hebrew are decomposed into their constituent morphemes, and that for templatic words this decomposition is the default. In templatic words, the root and template play different roles in recognition. For nouns the role of the root is particularly important, as evidenced by sensitivity to letter position, while for verbs both roots and templates play key roles (Chapter 4). A phonemic restoration paradigm provides evidence of templatic morphology playing a key role in auditory word recognition. As with visual recognition of nouns, roots play an important role in auditory noun recognition, as evidenced by words with root sounds masked being harder to recover than words with template sounds masked (Chapter 5). In Hebrew, as in concatenative languages, inflected words show evidence of decomposition into stem and affix, with a larger-amplitude N400 for inflectionally affixed templatic words than for unaffixed ones. Furthermore, higher processing costs are revealed for concatenative borrowings into the language than for templatic words, with greater-amplitude peaks in the 200-300 ms time window, suggesting that for templatic words decomposition is the default strategy (Chapter 6). Results of the L2 Hebrew study suggest that even proficient readers show transfer effects from a concatenative L1. Unlike native readers, they are letter-position flexible for root letters in nouns, with transposed-letter nouns producing priming, which suggests that a whole-stem representation of templatic words is available. These effects do not correlate with either age of acquisition or age of arrival (Chapter 7).
Advisors/Committee Members: Herschensohn, Julia (advisor), Wright, Richard (advisor).
Subjects/Keywords: Auditory Word Recognition; Morphology; Psycholinguistics; Second Language Acquisition; Visual Word Recognition; Linguistics; Cognitive psychology; Linguistics

University of Maryland
23.
Pillai, Jaishanker K.
Learning Visual Classifiers From Limited Labeled Images.
Degree: Electrical Engineering, 2013, University of Maryland
URL: http://hdl.handle.net/1903/14224
▼ Recognizing humans and their activities from images and video is one of the key goals of computer vision. While supervised learning algorithms like Support Vector Machines and Boosting have offered robust solutions, they require a large amount of labeled data for good performance. It is often difficult to acquire large labeled datasets due to the significant human effort involved in data annotation. However, it is considerably easier to collect unlabeled data due to the availability of inexpensive cameras and large public databases like Flickr and YouTube. In this dissertation, we develop efficient machine learning techniques for visual classification from a small amount of labeled training data by utilizing the structure in the testing data, labeled data in a different domain, and unlabeled data.
This dissertation has three main parts. In the first part, we consider how multiple noisy samples available during testing can be utilized to perform accurate visual classification. Such multiple samples are easily available in video-based recognition problems, which are commonly encountered in visual surveillance. Specifically, we study the problem of unconstrained human recognition from iris images. We develop a Sparse Representation-based selection and recognition scheme, which learns the underlying structure of clean images. This learned structure is utilized to develop a quality measure, and a quality-based fusion scheme is proposed to combine the varying evidence. Furthermore, we extend the method to incorporate privacy, an important requirement in practical biometric applications, without significantly affecting the recognition performance.
In the second part, we analyze the problem of utilizing labeled data in a different domain to aid visual classification. We consider the problem of shifts in acquisition conditions between training and testing, which is very common in iris biometrics. In particular, we study the sensor mismatch problem, where the training samples are acquired using a sensor much older than the one used for testing. We provide one of the first solutions to this problem, a kernel learning framework to adapt iris data collected from one sensor to another. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to considerable improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.
In the last part of the dissertation, we analyze how unlabeled data available during training can assist visual classification applications. Here, we consider still-image-based vision applications involving humans, where explicit motion cues are not available. A human pose often conveys not only the configuration of the body parts, but also implicit predictive information about the ensuing motion. We propose a probabilistic framework to infer this dynamic information associated with a human pose, using…
Advisors/Committee Members: Chellappa, Rama (advisor).
Subjects/Keywords: Electrical engineering; Activity Recognition; Computer Vision; Iris Recognition; Machine Learning; Semi-Supervised Learning; Visual Classification
24.
Miller, Timothy S.
Does Visual Awareness of Object Categories Require Attention?.
Degree: MS, Psychology, 2013, University of Massachusetts
URL: https://scholarworks.umass.edu/theses/1140
▼ A key question in the investigation of awareness is whether it can occur without attention, or vice versa. Most evidence to date suggests that attention is necessary for awareness of visual stimuli, but that attention can sometimes be present without corresponding awareness. However, there has been some evidence that natural scenes in general, and scenes including animals in particular, may not require visual attention for a participant to become aware of their gist. One relatively recent paradigm for providing evidence of animal awareness without attention (Li, VanRullen, Koch, & Perona, 2002) requires participants to perform an attention-demanding primary task while also determining, as a secondary task, whether a photograph displayed briefly in the periphery contains an animal. However, Cohen, Alvarez, and Nakayama (2011) questioned whether the primary task in these experiments used up all the available attentional capacity. Their experiments used a more demanding primary task to be sure attention really was not available for the image-recognition task, and the results indicated that attention was contributing to the animal detection task. However, in addition to changing the primary task, they displayed the stimuli for the two tasks superimposed on each other in the same area of the visual field. The experiment reported here is similar to the one by Cohen et al., but with the stimuli for the two tasks separated spatially. Animal recognition with separated stimuli was impaired by additionally performing the attention-demanding task, leaving no good evidence that it is possible to recognize natural scenes without attention, and in turn removing this support for awareness without attention.
Advisors/Committee Members: Kyle Cave.
Subjects/Keywords: attention; visual attention; awareness; animal recognition; object recognition; focussed attention; Cognitive Psychology

Macquarie University
25.
Li, Yu.
Early neural dynamics of visual word recognition.
Degree: Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders, Faculty of Human Sciences, Macquarie University, Syd, 2017, Macquarie University
URL: http://hdl.handle.net/1959.14/1266161
▼ Thesis by publication.
Bibliography: pages 159-186.
Chapter 1. General introduction – Chapter 2. Early top-down feedback from frontal to ventral occipito-temporal cortex during visual word recognition – Chapter 3. Task modulation of early top-down feedback from frontal to ventral occipito-temporal cortex during visual word recognition – Chapter 4. Task modulation of the time course of visual word recognition – Chapter 5. General discussion – References – Ethics approval.
For a skilled reader, visual word recognition can be completed within a few hundred milliseconds. There is evidence that the left inferior frontal cortex is activated by visual words within the first 300 ms after stimulus onset, and that there is early top-down feedback from this frontal region to ventral occipito-temporal cortex during visual word recognition. Using magnetoencephalography (MEG), this thesis examines the early neural dynamics of visual word recognition through early-stage inter-regional connectivity and the time course of visual word recognition.
In Chapter 1, I review studies of the neural correlates and relevant neural models of visual word recognition; in particular, two models of the ventral occipito-temporal cortex with contrasting views on the role of top-down feedback are examined. I then introduce dynamic causal modelling (DCM), a neuroimaging method for examining the directional influence of one brain area on another. Next, I review neuroimaging studies of the time course of visual word recognition and highlight the importance of examining early brain activity. Finally, I propose the research questions to be addressed in this thesis: What is the nature of early top-down feedback from frontal to ventral occipito-temporal cortex during visual word recognition? How do task goals modulate this early top-down feedback? How do task goals modulate the time course of visual word recognition? These questions are examined in three empirical chapters.
Using a semantic categorisation task, Chapter 2 examines the nature of top-down feedback from the left inferior frontal gyrus (LIFG) to the left ventral occipito-temporal cortex (LvOT) during the first 200 ms of visual word recognition. The results revealed that the LIFG-to-LvOT connection was stronger for real words than for pseudowords, and stronger for false fonts than for consonant strings, in both the 1-150 ms and 1-200 ms time windows, indicating that both lexical-semantic and surface letter information influence early top-down feedback. Furthermore, the LIFG-to-LvOT connection was stronger for pseudowords than for consonant strings in the 1-200 ms time window, indicating that the influence of phonological information occurs later than that of lexical-semantic and surface letter information.
By comparing a non-linguistic visual discrimination task (is it a hash string?) with the semantic categorisation task (is it an animal word?) used in Chapter 2, Chapter 3…
Advisors/Committee Members: Macquarie University. Department of Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders.
Subjects/Keywords: Word recognition – Physiological aspects; Visual perception – Physiological aspects; Brain – Physiology; visual word recognition; dynamic causal modelling; magnetoencephalography

Universidade do Rio Grande do Sul
26.
Malqui, José Luis Sotomayor.
A visual analytics approach for passing strateggies analysis in soccer using geometric features.
Degree: 2017, Universidade do Rio Grande do Sul
URL: http://hdl.handle.net/10183/158188
▼ Passing strategies analysis has always been of interest in soccer research. Since the beginnings of soccer, managers have used scouting, video footage, training drills and data feeds to collect information about tactics and player performance. However, passing strategies are dynamic and complex, which makes what is happening in the game hard to understand. Furthermore, there is a growing demand for pattern detection and passing sequence analysis, popularized by FC Barcelona's tiki-taka. We propose an approach to abstract passing strategies and group them based on the geometry of the ball trajectory. To analyse passing sequences, we introduce an interactive visualization scheme to explore the frequency of usage, spatial location and time of occurrence of the sequences. The frequency stripes visualization provides an overview of passing-group frequency in three pitch regions: defense, middle and attack. A trajectory heatmap coordinated with a passing timeline allows for the exploration of the most recurrent passing shapes in the temporal and spatial domains. Results show eight common ball trajectories for passing sequences of length three, which depend on player positioning and on the angle of the pass. We demonstrate the potential of our approach on data from the Brazilian league in several case studies, and report feedback from a soccer expert.
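As a toy illustration of grouping passing sequences by ball-trajectory geometry, the sketch below derives turning angles from pass endpoints and buckets a three-pass sequence into coarse shape classes. The function names, thresholds and class labels are invented for illustration; they are not the geometric features or the eight trajectory groups reported in the thesis:

```python
import math

def pass_angles(points):
    """Signed turning angles (degrees) along a ball trajectory,
    given consecutive (x, y) pass endpoints."""
    angles = []
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        angles.append(math.degrees(math.atan2(cross, dot)))
    return angles

def shape_label(angles, straight=30.0):
    """Coarse shape bucket for a three-pass sequence (two angles):
    'line', 'zigzag' (direction alternates), or 'L' (single bend).
    Purely illustrative buckets."""
    if all(abs(a) < straight for a in angles):
        return "line"
    if len(angles) == 2 and angles[0] * angles[1] < 0:
        return "zigzag"
    return "L"

build_up = [(0, 0), (10, 0), (20, 0), (30, 1)]   # four touches, three passes
print(shape_label(pass_angles(build_up)))        # → line
```

Grouping by trajectory shape rather than by exact pitch coordinates is what lets recurring strategies be counted across matches and regions.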
Advisors/Committee Members: Comba, Joao Luiz Dihl.
Subjects/Keywords: Computação gráfica; Visual analytics; Reconhecimento : Padroes; Visual knowledge discovery; Futebol : Regras; Sport analytics; Pattern recognition

Temple University
27.
DU, LIANG.
Exploiting Competition Relationship for Robust Visual Recognition.
Degree: PhD, 2015, Temple University
URL: http://digital.library.temple.edu/u?/p245801coll10,335545
▼ Computer and Information Science
Leveraging task relatedness has been proven to be beneficial in many machine learning tasks, and extensive research has been done to exploit task relatedness in various forms. A common assumption is that the tasks are intrinsically similar to each other. Based on this assumption, joint learning algorithms are usually implemented via some form of information sharing, such as shared hidden units of neural networks, a common prior distribution in a hierarchical Bayesian model, shared weak learners of a boosting classifier, shared distance metrics, or a shared low-rank structure across tasks. However, another very common and important task relationship, task competition, has been largely overlooked. Tasks compete with each other when there are conflicts between their goals. Considering that tasks with a competition relationship are universal, this dissertation accommodates this intuition algorithmically and applies the resulting algorithms to various visual recognition problems. Focusing on exploiting task competition relationships in visual recognition, the dissertation presents three types of algorithms and applies them to different visual recognition tasks. First, hypothesis competition is exploited in a boosting framework. The proposed algorithm, CompBoost, jointly models the target and auxiliary tasks with a generalized additive regression model regularized by competition constraints. This model treats feature selection as a weak-learner (i.e., base-function) selection problem, and thus provides a mechanism to improve feature filtering guided by task competition. More specifically, following a stepwise optimization scheme, we iteratively add a new weak learner that balances the gain for the target task against the inhibition on the auxiliary ones.
We call the proposed algorithm CompBoost, since it shares similar structures with the popular AdaBoost algorithm. In this dissertation, we use two test beds to evaluate CompBoost: (1) content-independent writer identification, by exploiting competing tasks of handwriting recognition, and (2) actor-independent facial expression recognition, by exploiting competing tasks of face recognition. In the experiments for both applications, the approach demonstrates promising performance gains from exploiting the between-task competition relationship. Second, feature competition is instantiated through an alternating coordinate gradient algorithm. Sharing the same feature pool, two tasks are modeled together in a joint loss framework, with feature competition encouraged via an orthogonal regularization over feature importance vectors. An alternating greedy coordinate descent (AGCD) learning algorithm is then derived to estimate the model. The algorithm effectively excludes distracting features at a fine-grained level, improving face verification. In other words, the proposed algorithm does not forbid…
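The feature-competition idea, two tasks penalized for claiming the same features, can be sketched with plain gradient descent on synthetic linear tasks. The squared-overlap penalty and all data below are assumptions of the sketch, not the dissertation's AGCD algorithm or experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y1 = X[:, 0] + 0.1 * rng.normal(size=200)   # task 1 truly uses feature 0
y2 = X[:, 1] + 0.1 * rng.normal(size=200)   # task 2 truly uses feature 1

w1, w2 = np.zeros(6), np.zeros(6)
lam, lr = 5.0, 0.01                          # competition strength, step size
for _ in range(3000):
    overlap = w1 @ w2                        # shared "feature importance"
    # joint loss: both mean squared errors + (lam / 2) * overlap ** 2
    g1 = X.T @ (X @ w1 - y1) / len(y1) + lam * overlap * w2
    g2 = X.T @ (X @ w2 - y2) / len(y2) + lam * overlap * w1
    w1 -= lr * g1
    w2 -= lr * g2
```

After training, each task concentrates its weight on its own feature and the two importance vectors stay nearly orthogonal, which is the competition effect the abstract describes.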
Advisors/Committee Members: Ling, Haibin; Latecki, Longin; Shi, Yuan; Zhu, Ying.
Subjects/Keywords: Computer science; Information science; Information technology;

University of Kansas
28.
Indulkar, Shreya Sanjay.
The effect of irrelevant visual experience on visual memory.
Degree: MS, Pharmacology & Toxicology, 2019, University of Kansas
URL: http://hdl.handle.net/1808/29843
▼ Consolidation of memories for long-term storage involves increases in excitatory synaptic strength and connectivity between neurons encoding a novel experience. The increase in neuronal excitability caused by memory consolidation could augment excitability induced by the experience of related stimuli irrelevant to the memory. Therefore, the additional neuronal excitability caused by memory consolidation could perturb neuronal activity homeostasis towards higher neuronal activation levels. Under conditions of neuronal hyperactivity, such as in Alzheimer's disease, an increase in excitation induced by memory consolidation would further destabilize homeostasis. We hypothesize that memory deficiency, which would result in reduced neuronal excitability, is an adaptation to maintain neuronal activity homeostasis. To test this hypothesis, and to identify whether experience-evoked activity contributes to memory impairments, we used a visual recognition memory (VRM) paradigm that involves synaptic plasticity in the primary visual cortex. In this paradigm, mice are repeatedly presented with a visual grating of a specific orientation, and recognition memory is assessed as a decrease in exploration of the same stimulus over time. We tested this orientation-selective behavioral habituation in a mouse model of Alzheimer's disease (the J20 line) and in non-transgenic control siblings (wild type). We found that wild-type mice display VRM for the grating stimulus when tested one day, but not one month, after the training period. In contrast, J20 mice did not display VRM even one day after the training period. To examine whether reducing the neuronal excitability caused by memory-irrelevant visual experience influences long-term retention of the VRM for the grating stimulus, we performed the same task in mice housed in total darkness except during the VRM task. Our preliminary data indicate that dark adaptation rescues the memory deficit in J20 mice, whereas it disrupts memory in control mice, when tested one day after training. These results suggest that competing experiences promote memory storage in control mice but interfere with it in APP mice.
Advisors/Committee Members: Subramanian, Jaichandar (advisor), Smith, Adam (cmtemember), Yan, Shirley ShiDu (cmtemember).
Subjects/Keywords: Pharmacology; Neurosciences; Alzheimer's Disease; Animal behavior; J20 mice; Memory; Visual experience; Visual recognition memory

Florida Atlantic University
29.
Schlangen, Derrick.
Peripheral Object Recognition in Naturalistic Scenes.
Degree: 2016, Florida Atlantic University
URL: http://purl.flvc.org/fau/fd/FA00004669
▼ Summary: Most of the human visual field falls in the periphery, and peripheral processing is important for normal visual functioning. Yet little is known about peripheral object recognition in naturalistic scenes and the factors that modulate this ability. We propose that a critical function of scene and object memory is to facilitate visual object recognition in the periphery. In the first experiment, participants identified objects in scenes across different levels of familiarity and contextual information within the scene. We found that familiarity with a scene resulted in a significant increase in the distance at which objects were recognized. Furthermore, we found that a semantically consistent scene increased the distance at which object recognition was possible, supporting the notion that contextual facilitation is possible in the periphery. In the second experiment, the preview duration of a scene was varied in order to examine how a scene representation is built and how memory of that scene and the objects within it contributes to object recognition in the periphery. We found that the closer participants fixated to the object in the preview, the farther on average they recognized that target object in the periphery. However, only a preview duration of 5000 ms produced significantly farther peripheral object recognition compared to not previewing the scene. Overall, these experiments introduce a novel research paradigm for object recognition in naturalistic scenes and demonstrate multiple factors that have systematic effects on peripheral object recognition.
2016
Degree granted: Dissertation (Ph.D.) – Florida Atlantic University, 2016.
Collection: FAU
Advisors/Committee Members: Barenholtz, Elan (Thesis advisor), Florida Atlantic University (Degree grantor), Charles E. Schmidt College of Science, Department of Psychology.
Subjects/Keywords: Context effects (Psychology); Human information processing; Optical pattern recognition; Pattern recognition systems; Recognition (Psychology); Visual perception

Queensland University of Technology
30.
Pepperell, Edward.
Visual sequence-based place recognition for changing conditions and varied viewpoints.
Degree: 2016, Queensland University of Technology
URL: http://eprints.qut.edu.au/93741/
▼ Correctly identifying previously-visited locations is essential for robotic place recognition and localisation. This thesis presents training-free solutions to vision-based place recognition under changing environmental conditions and camera viewpoints. Using vision as a primary sensor, the proposed approaches combine image segmentation and rescaling techniques over sequences of visual imagery to enable successful place recognition over a range of challenging environments where prior techniques have failed.
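A toy sketch of the sequence-matching idea behind such training-free approaches: score every alignment of a short query sequence against a reference traverse by summing per-frame differences, and take the minimum. Real systems of this kind (e.g. SeqSLAM) add image rescaling, local contrast enhancement and a search over playback velocities; none of that is modelled here, and all data below are synthetic:

```python
import numpy as np

def best_match(ref, query):
    """Index in `ref` where the frame sequence `query` aligns best,
    by minimising the summed mean absolute per-frame difference."""
    n, m = len(ref), len(query)
    scores = [
        sum(np.abs(ref[i + j] - query[j]).mean() for j in range(m))
        for i in range(n - m + 1)
    ]
    return int(np.argmin(scores))

rng = np.random.default_rng(2)
ref = rng.random((50, 8, 8))                       # 50 downsampled reference "frames"
query = ref[20:25] + 0.05 * rng.random((5, 8, 8))  # revisit of frames 20-24 under changed conditions
print(best_match(ref, query))                      # → 20
```

Matching whole sequences rather than single frames is what gives this family of methods its robustness: one noisy frame can resemble the wrong place, but a run of five consecutive frames rarely does.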
Subjects/Keywords: visual place recognition; navigation; appearance-based localisation; SeqSLAM; SMART; ODTA