You searched for +publisher:"University of Southern California" +contributor:("Kuo, C. C. Jay")
Showing records 1 – 30 of 109 total matches.

University of Southern California
1.
Kang, Bong Jin.
High-frequency ultrasound array-based imaging system for
biomedical applications.
Degree: PhD, Biomedical Engineering, 2015, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/562902/rec/3182
High frequency ultrasound imaging, capable of
achieving superior spatial resolution in real-time, has been shown
to be useful for imaging and visualizing blood flow in
ophthalmology, dermatology, and small animal research. The
utilization of a high frequency array-based imaging system can
alleviate the limitations of systems built on single-element
transducers. This dissertation presents an investigation of a high
frequency array-based imaging system and its potential biomedical
applications. The system is capable of B-mode imaging, PW-Doppler,
color Doppler imaging, and RF data acquisition. Three different
types of high frequency (30 MHz 256-element linear, 20 MHz
192-element convex, and 20 MHz 48-element phased) array transducers
were implemented on the array-based imaging system. The system was
also utilized for ophthalmic imaging: the 30 MHz linear array for
anterior segment imaging and the 20 MHz convex array for imaging
both the anterior and posterior segments of the eye. Anatomical
structures, such as the cornea, iris, ciliary body, and lens, and
the retina, choroid, and
sclera layers were identified. The high frequency PW Doppler and
micro-ECG were integrated to assess the ventricular diastolic
function during heart regeneration of the adult zebrafish.
Synchronized PW Doppler with ECG signals confirmed the A-wave in
response to atrial contraction (P wave in ECG), E-wave in response
to ventricular relaxation (T wave in ECG), and ventricular outflow
in response to ventricular contraction (QRS complex in ECG). The
E/A ratio is less than 1 in zebrafish at baseline, reflecting a
higher active filling (A-wave) velocity than passive filling
(E-wave) velocity in the two-chamber heart system. High frequency
dual mode pulsed-wave Doppler imaging, which provides both tissue
Doppler and Doppler flow in the same cardiac cycle, was implemented
on the array-based imaging system for monitoring the functional
regeneration of adult zebrafish hearts. In the in vivo study of
zebrafish, both tissue Doppler and flow Doppler signals were
simultaneously obtained and the synchronized valve motions with the
blood flow were identified. In the longitudinal study on the
zebrafish heart regeneration, the parameters for diagnosing the
diastolic dysfunction were measured, and the type of diastolic
dysfunction caused by the amputation was found to be similar to
restrictive filling. The diastolic function fully recovered
within four weeks post-amputation. High frequency color Doppler
imaging was implemented on the array-based imaging system and
evaluated by the flow phantom and adult zebrafish in vivo studies.
In the flow phantom study, constant velocity with opposite flow
directions was detected utilizing color Doppler imaging. Also,
color Doppler imaging could be used to monitor blood flow inside
the adult zebrafish heart, and flow directions were clearly identified
by the color-coded images.
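
As a quick illustration of the E/A ratio discussed above, here is a minimal Python sketch; the function name and the peak velocities are hypothetical, not values or code from the dissertation.

```python
def ea_ratio(e_peak, a_peak):
    """Ratio of passive (E-wave) to active (A-wave) peak filling velocity."""
    return e_peak / a_peak

# Hypothetical peak velocities (mm/s) read off a PW Doppler trace.
e_peak = 8.0   # early, passive ventricular filling
a_peak = 14.0  # filling driven by atrial contraction (P wave in ECG)

print(f"E/A = {ea_ratio(e_peak, a_peak):.2f}")  # < 1, as in baseline zebrafish
```
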
Advisors/Committee Members: Shung, Kirk Koping (Committee Chair), Yen, Jesse T. (Committee Member), Kuo, C. C. Jay (Committee Member).
Subjects/Keywords: high-frequency ultrasound; high-frequency ultrasound array-based imaging
system; high-frequency ultrasound pulsed-wave Doppler; high-frequency ultrasound color Doppler
APA (6th Edition):
Kang, B. J. (2015). High-frequency ultrasound array-based imaging system for
biomedical applications. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/562902/rec/3182
Chicago Manual of Style (16th Edition):
Kang, Bong Jin. “High-frequency ultrasound array-based imaging system for
biomedical applications.” 2015. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/562902/rec/3182.
MLA Handbook (7th Edition):
Kang, Bong Jin. “High-frequency ultrasound array-based imaging system for
biomedical applications.” 2015. Web. 07 Mar 2021.
Vancouver:
Kang BJ. High-frequency ultrasound array-based imaging system for
biomedical applications. [Internet] [Doctoral dissertation]. University of Southern California; 2015. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/562902/rec/3182.
Council of Science Editors:
Kang BJ. High-frequency ultrasound array-based imaging system for
biomedical applications. [Doctoral Dissertation]. University of Southern California; 2015. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/562902/rec/3182

University of Southern California
2.
Lee, Chi-Chun.
Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/119634/rec/1063
Behavioral Signal Processing (BSP) is an emerging
interdisciplinary research domain, operationally defined as
computational methods that model human behavior signals, with a
goal of enhancing the capabilities of domain experts in
facilitating better decision making in terms of both scientific
discovery in human behavioral sciences and human-centered system
designs. Quantitative understanding of human behavior, both typical
and atypical, and mathematical modeling of interaction dynamics are
core elements in BSP. This thesis focuses on computational
approaches in modeling and quantifying interacting dynamics in
dyadic human interactions. ❧ The study of interaction dynamics has
long been at the center of multiple research disciplines in human
behavioral sciences (e.g., psychology). Exemplary scientific
questions addressed range from studying scenarios of interpersonal
communication (verbal interaction modeling, human affective state
generation, display, and perception mechanisms), modeling
domain-specific interactions (such as assessment of the quality of
theatrical acting or children's reading ability), to analyzing
atypical interactions (for example, models of distressed married
couples' behavior and response to therapeutic interventions,
quantitative diagnostics and treatment tracking of children with
Autism, people with psycho-pathologies such as addiction and
depression). In engineering, a metaphorical analogy and framework
to this notion in behavioral science is based on the idea of
conceptualizing a dyadic interaction as a coupled dynamical system:
an interlocutor is viewed as a dynamical system, whose state
evolution is not only based on its past history but also dependent
on the other interlocutor's state. However, the evolution of these
"coupled states" is hidden by nature; an interlocutor in a
conversation can at best "fully observe" the expressed behaviors of
the other interlocutor. This observation, or partial insight into
the other interlocutor's state, is taken as "input" to the system,
coupling with the evolution of its own state. This, in turn,
"outputs" behaviors to be taken as "input" by the other
interlocutor. These complex dynamics in essence capture the
flow of dyadic interaction quantitatively. The challenge in
modeling human interactions is, therefore, multi-fold: the coupling
dynamic between each interlocutor in an interaction spans multiple
levels, along variable time scales, and differs between interaction
contexts. At the same time, each interlocutor's internal behavioral
dynamic produces a coupling that is multimodal across the verbal
and nonverbal communicative channels. ❧ In this thesis, I will
focus on addressing questions of developing computational methods
for carrying out studies into understanding and modeling
interaction dynamics in dyadic interactions. Specifically, I will
first demonstrate the efficacy of jointly modeling interlocutors'
behaviors for better prediction of interruptions in conversations.
Since turn taking is a highly-coordinated behavioral phenomenon…
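
To make the coupled-dynamical-system view concrete, here is a toy Python sketch with assumed linear dynamics and noisy behavioral observations; the gains and the update rule are illustrative, not the models developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
A_SELF, B_CROSS = 0.8, 0.15  # assumed self- and cross-coupling gains

def observe(state):
    """An interlocutor exposes only a noisy behavioral reading of its state."""
    return state + rng.normal(scale=0.05)

x, y = 1.0, -1.0  # hidden states of interlocutors X and Y
for _ in range(50):
    # Each state evolves from its own past and the *observed* behavior of the partner.
    x, y = (A_SELF * x + B_CROSS * observe(y),
            A_SELF * y + B_CROSS * observe(x))

print(f"states after 50 turns: x={x:.3f}, y={y:.3f}")  # both decay toward a noisy equilibrium
```
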
Advisors/Committee Members: Narayanan, Shrikanth S. (Committee Chair), Kuo, C.-C. Jay (Committee Member), Margolin, Gayla (Committee Member).
Subjects/Keywords: behavioral signal processing; interpersonal interaction; interaction dynamics; dyadic interactions; affective computing; mental health
APA (6th Edition):
Lee, C. (2012). Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/119634/rec/1063
Chicago Manual of Style (16th Edition):
Lee, Chi-Chun. “Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/119634/rec/1063.
MLA Handbook (7th Edition):
Lee, Chi-Chun. “Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions.” 2012. Web. 07 Mar 2021.
Vancouver:
Lee C. Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/119634/rec/1063.
Council of Science Editors:
Lee C. Behavioral signal processing: computational approaches for
modeling and quantifying interaction dynamics in dyadic human
interactions. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/119634/rec/1063

University of Southern California
3.
Cho, Sungje.
Techniques for de novo sequence assembly: algorithms and
experimental results.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/77527/rec/6371
The deep sequencing offered by second generation sequencing
technology has enabled us to study complex biological structures
that contain multiple DNA units simultaneously, such as
transcriptomes and metagenomes. Unlike general genome sequence
assembly, a DNA unit of these biological structures may
simultaneously have multiple copies with small or substantial
structural variations and/or SNPs in an experimental sample.
Therefore, deep sequencing is necessary to resolve such variations
concurrently. ❧ This dissertation focuses on de novo transcriptome
assembly which requires simultaneous assembly of multiple
alternatively spliced gene transcripts. In practice, the de novo
transcriptome assembly is the only option for studying the
transcriptome of organisms that do not have reference genome
sequences, and it can also be applied to identify novel transcripts
and structural variations in the gene regions of model organisms.
We propose WEAV for the de novo transcriptome assembly which
consists of two separate processes: clustering and assembly. ❧ In
the clustering process, WEAV reduces the complexity of an RNA-seq
dataset by partitioning it into clusters: a diverse dataset
containing many genes is simplified into many smaller clustered
read sets, each covering only a few genes. The underlying idea is
straightforward. A sequencer samples reads from random locations,
so reads from one gene overlap others if the sequencing depth is
sufficient. The overlaps are the keys to connecting reads from one
gene. We can
transform a dataset into a graph where each read is a node and two
reads are connected by an edge when they have an overlap. Each
connected component will be a clustered read set. As a result, we
can assume that a cluster has only one or a few genes; therefore,
it will not be mixed. ❧ After this process, WEAV assembles each
clustered read set on a de Bruijn graph backbone, and a novel error
correction process simplifies the backbone with a fast mapping
tool, PerM. Roughly speaking, WEAV tries to solve the classical
Shortest
Common Superstring [61] problem with the graph to identify multiple
alternatively spliced gene transcripts simultaneously and
approaches the problem as a set cover problem. We propose novel
statistical measures to make the NP-hard problem manageable:
explainability, based on the likelihood of sequences, and
correctness, based on bootstrapping. ❧ We compared WEAV with
other assemblers on various simulated reads. We tested the
performance by widely used measures such as specificity,
sensitivity, N50, and the length of the longest sequence. After
this, we tested WEAV on an experimental dataset of 58.58 million
100bp human brain transcriptome reads. WEAV assembled 156,494
contigs longer than 300bp. 96.3% (specificity) of these contigs
mapped onto RefSeq, Gencode, or the human genome sequence (hg19),
and they covered >72% of the sequenced bases annotated in RefSeq
and Gencode. This high sensitivity and specificity showed the
exceptional…
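
The clustering step described above (reads as nodes, overlaps as edges, connected components as clusters) can be sketched with a union-find structure; the naive overlap test and the toy reads are illustrative stand-ins, not WEAV's implementation.

```python
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def overlaps(a, b, k=4):
    """Naive check: does a suffix of one read (length >= k) prefix the other?"""
    for n in range(min(len(a), len(b)), k - 1, -1):
        if a[-n:] == b[:n] or b[-n:] == a[:n]:
            return True
    return False

reads = ["ACGTACGT", "TACGTTTA", "GGGCCCAA", "CCAATTGG"]
parent = list(range(len(reads)))
for i in range(len(reads)):
    for j in range(i + 1, len(reads)):
        if overlaps(reads[i], reads[j]):
            union(parent, i, j)

# Each connected component becomes one clustered read set.
clusters = {}
for i, read in enumerate(reads):
    clusters.setdefault(find(parent, i), []).append(read)
print(list(clusters.values()))  # [['ACGTACGT', 'TACGTTTA'], ['GGGCCCAA', 'CCAATTGG']]
```
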
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Leahy, Richard M. (Committee Member), Chen, Ting (Committee Member).
Subjects/Keywords: computational biology; bioinformatics; sequence assembly
APA (6th Edition):
Cho, S. (2012). Techniques for de novo sequence assembly: algorithms and
experimental results. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/77527/rec/6371
Chicago Manual of Style (16th Edition):
Cho, Sungje. “Techniques for de novo sequence assembly: algorithms and
experimental results.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/77527/rec/6371.
MLA Handbook (7th Edition):
Cho, Sungje. “Techniques for de novo sequence assembly: algorithms and
experimental results.” 2012. Web. 07 Mar 2021.
Vancouver:
Cho S. Techniques for de novo sequence assembly: algorithms and
experimental results. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/77527/rec/6371.
Council of Science Editors:
Cho S. Techniques for de novo sequence assembly: algorithms and
experimental results. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/77527/rec/6371

University of Southern California
4.
Karimi-Ashtiani, Shahryar.
Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/379286/rec/7435
Diffusion MRI (D-MRI) has opened a new front for
uncovering the convoluted structure of the central nervous system
by providing capability for the non-invasive identification of
white matter tract geometries in the brain. One of the major open
issues in fully extending this technology to the clinical domain is
the lack of in vivo validation of the results, which makes it
difficult to have objective comparisons of different D-MRI
techniques. To this end, the application of simulated data from
known ground truths appears to be the second best choice. It is
well understood that this imaging modality is characterized by the
shape of the self-diffusion (SD) profile within the brain fibers.
Previous methods for the quantification of the SD process in
white matter environments suffer from a lack of generality or
solution precision, and often impose excessive computational
complexity, which limits their applicability. The contributions of
this research are twofold: 1)
the development of a new numerical method to compute the
self-diffusion SD profile and 2) the provision of a generic
framework for reconstruction of diffusion MR images under imperfect
imaging conditions. ❧ In the first part of this thesis, a numerical
paradigm based on the finite element methods (FEM) for finding the
solution of SD partial differential equation (PDE) in the white
matter environment is proposed. The standard FEM is incapable of
accommodating the boundary conditions of the PDE on the interfaces
of different white matter materials. Theoretical constraints to
modify the FEM are investigated such that it becomes applicable to
the problem of interest. Our method is not confined to any special
geometry and can handle virtually any microscopic model of white
matter. One of the highlights of the developed method is
addressing the effect of partial permeability of the cell membrane.
Also, a self-validation technique is proposed to guarantee the
accuracy of the solution. In addition, the aggregate SD profile
of the MRI voxel is analytically derived to show the dependency of
the macroscopic diffusion behavior on the microscopic parameters of
the contained tissue. Several simulation results are provided to
showcase the effectiveness of our method. ❧ In the second part of
this thesis, a
generic theoretical framework for reconstruction of MR images,
which factors in most of the imaging artifacts arising from the
true values of the imaging conditions, is developed. It is worth
pointing out that, on the simulation front, previous D-MRI
reconstruction formulations were derived under ideal imaging
conditions. In reality, however, the actual values of the imaging
parameters render the available reconstruction techniques
inapplicable. The severe
impacts of those parameters on the quality of the white matter
mapping from the D-MRI data necessitate the extension of existing
methods to include them. For example, the existing methods cannot
accommodate spatially varying T1 and T2 parameters, a condition…
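
For reference, the self-diffusion process referred to above is commonly modeled by a diffusion equation with a flux condition on partially permeable membranes; the generic textbook form below is our assumption, not necessarily the exact formulation of the dissertation.

```latex
\frac{\partial u(\mathbf{r},t)}{\partial t}
  = \nabla \cdot \bigl( D(\mathbf{r})\,\nabla u(\mathbf{r},t) \bigr),
\qquad
D\,\nabla u \cdot \mathbf{n}\big|_{\Gamma^{+}}
  = D\,\nabla u \cdot \mathbf{n}\big|_{\Gamma^{-}}
  = \kappa\,(u^{+} - u^{-}) \quad \text{on } \Gamma,
```

where u is the spin (magnetization) density, D the position-dependent diffusion coefficient, Γ the cell membrane, and κ its permeability (the partial membrane permeability the abstract highlights).
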
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Leahy, Richard M. (Committee Member), Singh, Manbir (Committee Member).
Subjects/Keywords: brain imaging; white matter; MRI
APA (6th Edition):
Karimi-Ashtiani, S. (2012). Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/379286/rec/7435
Chicago Manual of Style (16th Edition):
Karimi-Ashtiani, Shahryar. “Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/379286/rec/7435.
MLA Handbook (7th Edition):
Karimi-Ashtiani, Shahryar. “Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter.” 2012. Web. 07 Mar 2021.
Vancouver:
Karimi-Ashtiani S. Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/379286/rec/7435.
Council of Science Editors:
Karimi-Ashtiani S. Theory and simulation of diffusion magnetic resonance
imaging on brain's white matter. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/379286/rec/7435

University of Southern California
5.
Tsau, Enshuo.
Advanced features and feature selection methods for
vibration and audio signal classification.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/89597/rec/542
An adequate feature set plays a key role in many
signal classification and recognition applications. This is a
challenging problem due to the nonlinear and nonstationary
characteristics of real-world signals, such as engine
acoustic/vibration data, environmental sounds, speech signals, and
musical instrument sounds. Some traditional features, such as the
Mel Frequency Cepstral Coefficients (MFCC) may not offer good
performance. Other features such as those based on the Matching
Pursuit (MP) decomposition may perform better, yet their complexity
is very high. In this research, we consider a new feature set that
can be easily generated in the model-based signal compression
process, known as the Code Excited Linear Prediction (CELP)
features. The CELP-based coding algorithm and its variants have
been widely used to encode speech and low-bit-rate audio signals.
In this research, we examine two applications based on CELP-based
features. ❧ First, we present a new approach to engine fault
detection and diagnosis based on acoustic and vibration sensor data
with MFCC and CELP features. Through proper algorithmic adaptation
to the specifics of the dataset, the fault conditions of a damaged
blade and a bearing failure can, with high probability, be
autonomously discovered and identified. The conducted experiments
show that CELP features, although generally used in speech
applications, are particularly well suited to this problem, in
terms of both compactness and detection specificity. Furthermore,
the issue of automatic fault detection with different levels of
decision resolution is addressed. The low prediction error coupled
with ease of hardware implementation makes this proposed method an
attractive alternative to manual maintenance. ❧ Next, we propose
the use of CELP-based features to enhance the performance of the
environmental sound recognition (ESR) problem. Traditionally, MFCC
features have been used for the recognition of structured data like
speech and music. However, their performance for the ESR problem is
limited. An audio signal can be well preserved by its highly
compressed CELP bit streams, which motivates us to study the
CELP-based features for the audio scene recognition problem. We
present a way to extract a set of features from the CELP bit
streams and compare the performance of ESR using different feature
sets with the Bayesian network classifier. It is shown by
experimental results that the CELP-based features outperform the
MFCC features in the ESR problem by a significant margin and the
integrated MFCC and CELP-based feature set can even reach a correct
classification rate of 95.2% using the Bayesian network classifier.
❧ CELP-based features may not be suitable for wideband audio
signals such as music signals. To address this problem, we would
like to add other new features. One idea is to perform real-time
fundamental frequency estimation using a modified Hilbert-Huang
transform (HHT), as studied in the last part of this proposal. HHT
is a non-linear transform which is suitable for the analysis…
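
To suggest the flavor of model-based linear-prediction features such as those CELP exposes, here is a generic Levinson-Durbin LPC sketch on a synthetic frame; it is not the dissertation's CELP feature extractor.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin)."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k                  # residual prediction error
    return a[1:], err

# Hypothetical 20 ms frame at 16 kHz; in an ESR pipeline such coefficients
# would feed a classifier (e.g., the Bayesian network mentioned above).
frame = np.random.default_rng(1).normal(size=320)
coeffs, residual = lpc(frame)
print(coeffs.round(3), round(residual, 3))
```
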
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Jenkins, Brian Keith (Committee Member), Chang, Tu-nan (Committee Member).
Subjects/Keywords: CELP; fault diagnosis; feature selection; HHT
APA (6th Edition):
Tsau, E. (2012). Advanced features and feature selection methods for
vibration and audio signal classification. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/89597/rec/542
Chicago Manual of Style (16th Edition):
Tsau, Enshuo. “Advanced features and feature selection methods for
vibration and audio signal classification.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/89597/rec/542.
MLA Handbook (7th Edition):
Tsau, Enshuo. “Advanced features and feature selection methods for
vibration and audio signal classification.” 2012. Web. 07 Mar 2021.
Vancouver:
Tsau E. Advanced features and feature selection methods for
vibration and audio signal classification. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/89597/rec/542.
Council of Science Editors:
Tsau E. Advanced features and feature selection methods for
vibration and audio signal classification. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/89597/rec/542

University of Southern California
6.
Zhang, Jiangyang.
Advanced visual processing techniques for latent fingerprint
detection and video retargeting.
Degree: PhD, Electrical Engineering, 2014, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/382016/rec/552
An important step in an automated fingerprint
identification system (AFIS) is the process of fingerprint
segmentation. While a tremendous amount of effort has been made on
plain and rolled fingerprint segmentation, latent fingerprint
segmentation remains a challenging problem. Traditional
segmentation methods fail to work properly on latent fingerprints
as they are based on many assumptions that are only valid for
rolled/plain fingerprints. We propose a new image decomposition
scheme, called the adaptive directional total variation (ADTV)
model, to achieve effective segmentation and enhancement for latent
fingerprint images in this work. The proposed model is inspired by
the classical total variation models, but it differentiates itself
by integrating two unique features of fingerprints; namely, scale
and orientation. The proposed ADTV model decomposes a latent
fingerprint image into two layers: cartoon and texture. The cartoon
layer contains unwanted components (e.g. structured noise) while
the texture layer mainly consists of the latent fingerprint. This
cartoon‐texture decomposition facilitates the process of
segmentation, as the region of interest can be easily detected from
the texture layer using traditional segmentation methods. The
effectiveness of the proposed scheme is validated through
experimental results on NIST SD27 latent fingerprint database. The
proposed scheme achieves accurate segmentation and enhancement
results, leading to improved feature detection and latent matching
performance. ❧ In the second part, we propose two solutions for
content-aware image/video resizing (also called image/video
retargeting). The first solution addresses the issue of texture
redundancy for image retargeting. We analyze the effect of texture
regularity on the performance of image resizing, and then propose
an efficient texture‐aware resizing algorithm. Our solution
exploits region features, including the scale and the shape
information, to preserve both local and global structures. Texture
redundancy is effectively reduced through texture regularity
analysis and real‐time texture synthesis. The superior performance
of the proposed image resizing technique is demonstrated by
experimental results. Our second solution deals with conducting
video retargeting on compressed-format video data. All existing
video retargeting techniques operate in the spatial pixel domain,
which is difficult for practical usage, as most real-world
digital videos are available mainly in compressed format. We
propose a novel video retargeting system that operates directly on
an intermediate representation in the compressed domain, namely,
the discrete cosine transform (DCT) domain. In this way, we are
able to avoid the computationally expensive process of
decompressing, processing, and recompressing. As the system uses
the DCT coefficients directly for processing, only minimal decoding
of video streams is necessary. Our proposed solution achieves
results comparable to state-of-the-art spatial domain video
retargeting techniques,…
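
A minimal cartoon-texture split in the spirit of the ADTV model can be sketched with plain isotropic TV denoising from scikit-image; the adaptive directional weighting that defines ADTV, and its scale/orientation priors, are not reproduced here.

```python
from skimage import data
from skimage.restoration import denoise_tv_chambolle

image = data.camera() / 255.0                        # stand-in for a latent print scan
cartoon = denoise_tv_chambolle(image, weight=0.15)   # piecewise-smooth layer
texture = image - cartoon                            # oscillatory layer: ridge pattern lives here

# Segmentation would then run on `texture`, where the fingerprint's oriented
# ridges are separated from the structured background kept in `cartoon`.
print(texture.min(), texture.max())
```
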
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Sawchuk, Alexander A. (Sandy) (Committee Member), Nakano, Aiichiro (Committee Member).
Subjects/Keywords: latent fingerprint segmentation; image retargeting; video retargeting
APA (6th Edition):
Zhang, J. (2014). Advanced visual processing techniques for latent fingerprint
detection and video retargeting. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/382016/rec/552
Chicago Manual of Style (16th Edition):
Zhang, Jiangyang. “Advanced visual processing techniques for latent fingerprint
detection and video retargeting.” 2014. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/382016/rec/552.
MLA Handbook (7th Edition):
Zhang, Jiangyang. “Advanced visual processing techniques for latent fingerprint
detection and video retargeting.” 2014. Web. 07 Mar 2021.
Vancouver:
Zhang J. Advanced visual processing techniques for latent fingerprint
detection and video retargeting. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/382016/rec/552.
Council of Science Editors:
Zhang J. Advanced visual processing techniques for latent fingerprint
detection and video retargeting. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/382016/rec/552

University of Southern California
7.
Chang, Yu-Teng.
Network structures: graph theory, statistics, and
neuroscience applications.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/39102/rec/4368
Network modeling and graph theory have been widely
studied and applied in a variety of modern research areas. The
detection of network structures in these contexts is a fundamental
and challenging problem. "Community structures" (where nodes are
clustered into densely connected subnetworks) appear in a wide
spectrum of disciplines. Examples include groups of individuals
sharing common interests in social networks and groups of proteins
carrying out similar biological functions. Detecting network
structure also enables us to perform other kinds of network
analysis, such as hub identification. ❧ Modularity-based graph
partitioning is an approach that is specifically designed to detect
the community structures in a network. Though modularity methods
have achieved some amount of success in detecting modular
structures, many of the related problems have not yet been solved.
❧ Central to modularity-based graph partitioning is the null model:
a statistical representation of a network with no structure. In
this work, I will present a novel approach to design null models.
This new null model approach resolves many of the existing
problems, including dealing with non-negativity, topological
consistency, etc. Null models are presented for binary/weighted
graphs and undirected/directed graphs. I will also present several
new methods to assess the statistical significance of the detected
community structures. Several potential future work directions,
as well as the position of the module detection problem within a
broader network analysis scheme, are given at the end of this
thesis.
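
For context, the classical modularity objective that such null models plug into is the Newman-Girvan form, whose configuration-model null term P_ij is what the thesis's novel null models replace:

```latex
Q = \frac{1}{2m} \sum_{ij} \bigl( A_{ij} - P_{ij} \bigr)\, \delta(c_i, c_j),
\qquad
P_{ij} = \frac{k_i k_j}{2m},
```

where A is the adjacency matrix, k_i the node degrees, m the number of edges, and c_i the community assignments.
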
Advisors/Committee Members: Leahy, Richard M. (Committee Chair), Kuo, C.-C. Jay (Committee Member), Tjan, Bosco S. (Committee Member).
Subjects/Keywords: brain connectome; graph theory; modularity; network structures; random matrix theory; statistical significance
APA (6th Edition):
Chang, Y. (2012). Network structures: graph theory, statistics, and
neuroscience applications. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/39102/rec/4368
Chicago Manual of Style (16th Edition):
Chang, Yu-Teng. “Network structures: graph theory, statistics, and
neuroscience applications.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/39102/rec/4368.
MLA Handbook (7th Edition):
Chang, Yu-Teng. “Network structures: graph theory, statistics, and
neuroscience applications.” 2012. Web. 07 Mar 2021.
Vancouver:
Chang Y. Network structures: graph theory, statistics, and
neuroscience applications. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/39102/rec/4368.
Council of Science Editors:
Chang Y. Network structures: graph theory, statistics, and
neuroscience applications. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/39102/rec/4368

University of Southern California
8.
Liang, Joyce.
Efficient methods for enhancing secure network codes.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/204887/rec/2248
Network coding has been studied extensively in the
last decade, during which it has been linked to applications in
throughput gain, error correction, robustness to non-ergodic link
failures, confidentiality, and security. It can be efficiently
implemented through low-complexity linear operations, over both
wired and wireless networks. Network coding may also be utilized in
both centralized and decentralized designs, where it has been shown
to outperform traditional routing techniques. Despite its
potential, network coding must still abide by the same fundamental
tradeoffs. These tradeoffs are rarely treated in the literature
that studies each of its merits in isolation. We feature a holistic
approach to network coding in the areas of security, efficiency,
and robustness against erasures. Joint examination yields design
considerations that require flexible parameters to analyze
tradeoffs between desired algorithmic features. ❧ For the first
topic, we present a novel algorithm that achieves robust and secure
sharing based on network coding among multiple trusted peers in
wireless erasure networks. In the considered communication model,
an eavesdropper can take advantage of the broadcast medium to tap
messages along a min-cut. The fundamental tradeoff between secrecy,
robustness and efficiency enforced by the wireless environment is
examined. In situations where there is no secure capacity, we give
a convolutional NC scheme that achieves weak secrecy to decrease
the number of symmetric keys required. We further propose methods
to increase communication efficiency using the algorithm at the
cost of robustness or privacy. Finally, we show that network
erasures can actually increase the amount of secrecy in the
proposed scheme but at the cost of decreased efficiency. ❧ For the
second topic, an algorithm is presented that efficiently hides the
global coding kernels as an alternative method to providing privacy
in a single source multicast network that employs network coding.
Unlike the majority of secure network coding algorithms, the
proposed algorithm continues a very recent trend that instead
focuses on preventing the adversary from ascertaining any global or
local mappings. Without the global coding matrix, the adversary is
unable to recover the secret source messages. We address a more
powerful adversary: we not only remove all wiretap restrictions
featured in previous works, but also assume that the adversary
can manipulate source messages. The adversarial goal
is therefore to obtain information about the global coding matrix
by chosen plaintext attacks. Our algorithm is novel in the sense
that the source node codes messages based on the secretly indexed
set of coding vectors, rather than independently selecting each
coefficient. By reducing the size of this finite pool, we may
achieve enormous gains in overhead savings, which increase
utilizable network capacity. The impact of the field size and the
size of the vector pool on both security and efficiency is
discussed. The cost metric used is…
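
As background for the low-complexity linear operations mentioned above, the textbook butterfly-network example of coding over GF(2) fits in a few lines of Python; this generic sketch is unrelated to the specific secrecy scheme proposed in the thesis.

```python
def xor_bytes(p, q):
    """Bitwise XOR of two equal-length packets (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(p, q))

# Two source packets cross the shared bottleneck link as one coded packet.
pkt_a = b"\x10\x20\x30\x40"
pkt_b = b"\x0f\x0e\x0d\x0c"
coded = xor_bytes(pkt_a, pkt_b)          # a XOR b on the bottleneck

# Sink 1 receives (pkt_a, coded) and recovers b; sink 2 recovers a symmetrically.
assert xor_bytes(coded, pkt_a) == pkt_b
assert xor_bytes(coded, pkt_b) == pkt_a
print("both sinks decode successfully")
```
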
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Zhang, Zhen (Committee Member), Golubchik, Leana (Committee Member).
Subjects/Keywords: network coding; secrecy; robustness
APA (6th Edition):
Liang, J. (2012). Efficient methods for enhancing secure network codes. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/204887/rec/2248
Chicago Manual of Style (16th Edition):
Liang, Joyce. “Efficient methods for enhancing secure network codes.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/204887/rec/2248.
MLA Handbook (7th Edition):
Liang, Joyce. “Efficient methods for enhancing secure network codes.” 2012. Web. 07 Mar 2021.
Vancouver:
Liang J. Efficient methods for enhancing secure network codes. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/204887/rec/2248.
Council of Science Editors:
Liang J. Efficient methods for enhancing secure network codes. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/204887/rec/2248

University of Southern California
9.
Wang, Quan.
Automatic image matching for mobile multimedia
applications.
Degree: PhD, Computer Science, 2011, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/435285/rec/999
Image matching is a fundamental task in computer
vision, used to correspond two or more images taken, for example,
at different times, from different viewpoints, or with different
sensors.
Image matching is also the core of many multimedia systems and
applications. Today, the rapid convergence of multimedia,
computation and communication technologies with techniques for
device miniaturization is ushering us into a mobile, pervasively
connected multimedia future, promising many exciting applications,
such as content-based image retrieval (CBIR), mobile augmented
reality (MAR), handheld 3D scene modeling and texturing, and
vision-based personal navigation and localization, etc. ❧ Automatic
image matching, although notable progress has been achieved in
recent years, is still a challenging problem, especially for
applications on mobile platforms. Major technical difficulties
include the algorithms’ robustness to viewpoint and lighting
changes, processing speed and storage efficiency for mobile
devices, and the capability to handle inputs from different sensors
and data sources. ❧ This research focuses on the advanced
technologies and approaches of image matching, particularly
targeting mobile multimedia applications. First, a real-time image
matching approach is developed. The approach uniquely combines
kernel projection technique with feature selection and multi-view
training to produce efficient feature representations for real-time
image matching. To address the computational and storage efficiency
for mobile devices, our produced feature descriptors are highly
compact (20-D) in comparison to the state of the art (e.g., SIFT:
128-D, SURF: 64-D, and PCA-SIFT: 36-D), making them well suited to
mobile platforms. Second, coupled with the matching approaches, a
fast data search technique has also been developed that can rapidly
recover and screen possible matches in a large high-dimensional
database of features and images. Third, in order to enhance the
distinctiveness and efficiency of image features, a feature
augmentation process has been proposed integrating geometry
information into local features and producing semi-global image
descriptors. Next, combining those developed techniques, an
application system called Augmented Museum Exhibitions has been
built to demonstrate their effectiveness. ❧ Finally, the research is
extended to matching images acquired from different sensor
modalities, i.e. corresponding the 2D optical images to 3D range
data from LIDAR sensors. The developed high-level feature based
matching approach is efficient, being able to automatically
register the heterogeneous data with significant
differences.
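
To give a feel for the descriptor-compaction step (128-D SIFT-like vectors down to 20-D), here is a PCA-based sketch with scikit-learn on synthetic data; the thesis combines kernel projection, feature selection, and multi-view training, for which PCA is only a simplified stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
descriptors = rng.random((1000, 128)).astype(np.float32)  # stand-in for SIFT descriptors

pca = PCA(n_components=20)               # target dimensionality from the abstract
compact = pca.fit_transform(descriptors)

print(compact.shape)                         # (1000, 20): 6.4x smaller per descriptor
print(pca.explained_variance_ratio_.sum())   # variance retained by the 20 components
```
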
Advisors/Committee Members: You, Suya (Committee Chair), Neumann, Ulrich (Committee Member), Kuo, C.-C. Jay (Committee Member).
Subjects/Keywords: object recognition and tracking; image correspondences; augmented reality; machine learning; content-based image retrieval; urban modeling; image processing and computer graphics; computer vision and image understanding
APA (6th Edition):
Wang, Q. (2011). Automatic image matching for mobile multimedia
applications. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/435285/rec/999
Chicago Manual of Style (16th Edition):
Wang, Quan. “Automatic image matching for mobile multimedia
applications.” 2011. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/435285/rec/999.
MLA Handbook (7th Edition):
Wang, Quan. “Automatic image matching for mobile multimedia
applications.” 2011. Web. 07 Mar 2021.
Vancouver:
Wang Q. Automatic image matching for mobile multimedia
applications. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/435285/rec/999.
Council of Science Editors:
Wang Q. Automatic image matching for mobile multimedia
applications. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/435285/rec/999

University of Southern California
10.
Guan, Wei.
Hybrid methods for robust image matching and its application
in augmented reality.
Degree: PhD, Computer Science, 2014, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369764/rec/3294
This thesis presents new matching algorithms that work
robustly in challenging situations. Image matching is a fundamental
and challenging problem in the vision community due to varied
sensing techniques and imaging conditions. While it is almost
impossible to
find a general method that is optimized for all uses, we focus on
those matching problems that are related to augmented reality (AR).
Many AR applications have been developed on portable devices, but
most are limited to indoor environments within a small workspace
because their matching algorithms are not robust out of controlled
conditions. ❧ The first part of the thesis describes 2D to 2D image
matching problems. Existing robust features are not suited for AR
applications due to their computational cost. A fast matching
scheme is applied to such features to increase matching speed by up
to 10 times without sacrificing their robustness. Lighting
variations can often cause match failures in outdoor environments.
It is a challenging problem because any change in illumination
causes unpredicted changes in image intensities. Some features have
been specially designed to be lighting invariant. While these
features handle linear or monotonic changes, they are not robust to
more complex changes. This thesis presents a line‐based feature
that is robust to complex and large illumination variations. Both
feature detector and descriptor are described in detail. ❧ The
second part of the thesis describes image sequence matching with 3D
point clouds. Feature‐based matching becomes more challenging due
to different structures between 2D and 3D data. The features
extracted from one type of data are usually not repeatable in the
other. An ICP‐like method that iteratively aligns an image with a
3D point cloud is presented. While this method can be used to
calculate the pose for a single frame, it is not efficient to apply
it for all frames in the sequence. Once the first frame pose is
obtained, the poses for subsequent frames can be tracked from 2D to
3D point correspondences. It is observed that not all LiDAR points
are suitable for tracking. A simple and efficient method is
used to remove unstable LiDAR points and identify features on
frames that are robust in the tracking process. With the above
methods, the poses can be calculated more stably for the whole
sequence. ❧ With the provided solutions to the above challenging
problems, we have applied our methods in an AR system. We describe
each step in building up such a system, from data collection and
preprocessing to pose calculation and tracking. The presented
system is shown to be robust and promising for most AR‐based
applications.
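
The frame-to-frame pose tracking from 2D-3D correspondences described above reduces, at its core, to a perspective-n-point solve; below is a minimal OpenCV sketch with synthetic correspondences (the points and camera intrinsics are made up, not data from the thesis).

```python
import numpy as np
import cv2

# Synthetic 3D points (e.g., stable LiDAR points) and a pinhole camera.
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                          [0.5, 0.5, 1.0]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.2, 0.1, 5.0])

# Project to get the 2D features this frame would observe, then solve back.
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())  # recovers the ground-truth pose
```
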
Advisors/Committee Members: You, Suya (Committee Chair), Neumann, Ulrich (Committee Member), Kuo, C.-C. Jay (Committee Member).
Subjects/Keywords: computer science; computer vision; machine learning; graphical model; augmented reality; image matching/registration; line-based features; grid method; point cloud; segmentation; camera tracking
APA (6th Edition):
Guan, W. (2014). Hybrid methods for robust image matching and its application
in augmented reality. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369764/rec/3294
Chicago Manual of Style (16th Edition):
Guan, Wei. “Hybrid methods for robust image matching and its application
in augmented reality.” 2014. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369764/rec/3294.
MLA Handbook (7th Edition):
Guan, Wei. “Hybrid methods for robust image matching and its application
in augmented reality.” 2014. Web. 07 Mar 2021.
Vancouver:
Guan W. Hybrid methods for robust image matching and its application
in augmented reality. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369764/rec/3294.
Council of Science Editors:
Guan W. Hybrid methods for robust image matching and its application
in augmented reality. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/369764/rec/3294

University of Southern California
11.
Ko, Hyunsuk.
Advanced techniques for stereoscopic image rectification and
quality assessment.
Degree: PhD, Electrical Engineering, 2015, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/538362/rec/551
New frameworks for an objective quality evaluation and
an image rectification of stereoscopic image pairs are presented in
this work. First, quality assessment of stereoscopic image pairs is
more complicated than that for 2D images since it is a
multi-dimensional problem where the quality is affected by
distortion types as well as the relation between the left and right
views, such as different types/levels of distortion in the two
views. In our work, we first introduce a novel formula-based metric
that provides better results than several existing methods.
However, the formula-based metric still has limitations. For
further improvement, we propose a parallel-boosting-based quality
index. That is, we classify distortion types into groups and design
a set of scorers to handle them separately. At stage 1, each scorer
generates its own score for a specific distortion type. At stage 2,
all intermediate scores are fused to predict the final quality
index with nonlinear regression. Experimental results demonstrate
that the proposed quality index outperforms most state-of-the-art
quality assessment methods by a significant margin over
different databases. ❧ Secondly, a novel algorithm for uncalibrated
stereo image-pair rectification under the constraint of geometric
distortion, called USR-CGD, is presented in this work. Although it
is straightforward to define a rectifying transformation (or
homography) given the epipolar geometry, many existing algorithms
have unwanted geometric distortions as a side effect. To obtain
rectified images with reduced geometric distortions while
maintaining a small rectification error, we parameterize the
homography by considering the influence of various kinds of
geometric distortions. Next, we define several geometric measures
and incorporate them into a new cost function for parameter
optimization. Finally, we propose a constrained adaptive
optimization scheme to allow a balanced performance between the
rectification error and the geometric error. Extensive experimental
results are provided to demonstrate the superb performance of the
proposed USR-CGD method, which outperforms existing algorithms by a
significant margin.
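
The two-stage design described above (distortion-specific scorers at stage 1, nonlinear fusion at stage 2) can be sketched schematically; the placeholder scorers, the synthetic opinion scores, and the gradient-boosting fuser are our assumptions, not the components designed in the thesis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = rng.random((200, 16))  # per-pair features of stereoscopic images

# Stage 1: one scorer per distortion group emits an intermediate score.
stage1 = np.column_stack([
    features[:, :4].mean(axis=1),   # placeholder "blur" scorer
    features[:, 4:8].mean(axis=1),  # placeholder "noise" scorer
    features[:, 8:].max(axis=1),    # placeholder "compression" scorer
])

# Stage 2: nonlinear regression fuses the scores into one quality index,
# trained against subjective scores (synthetic here).
mos = stage1 @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.02, len(stage1))
fuser = GradientBoostingRegressor().fit(stage1, mos)
print(fuser.predict(stage1[:3]).round(3))
```
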
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Sawchuk, Alexander A. (Sandy) (Committee Member), Nakano, Aiichiro (Committee Member).
Subjects/Keywords: stereoscopic images; quality assessment; rectification; stereo matching; machine learning
APA (6th Edition):
Ko, H. (2015). Advanced techniques for stereoscopic image rectification and
quality assessment. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/538362/rec/551
Chicago Manual of Style (16th Edition):
Ko, Hyunsuk. “Advanced techniques for stereoscopic image rectification and
quality assessment.” 2015. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/538362/rec/551.
MLA Handbook (7th Edition):
Ko, Hyunsuk. “Advanced techniques for stereoscopic image rectification and
quality assessment.” 2015. Web. 07 Mar 2021.
Vancouver:
Ko H. Advanced techniques for stereoscopic image rectification and
quality assessment. [Internet] [Doctoral dissertation]. University of Southern California; 2015. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/538362/rec/551.
Council of Science Editors:
Ko H. Advanced techniques for stereoscopic image rectification and
quality assessment. [Doctoral Dissertation]. University of Southern California; 2015. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/538362/rec/551

University of Southern California
12.
Sucontphunt, Tanasai.
3D face surface and texture synthesis from 2D landmarks of a
single face sketch.
Degree: PhD, Computer Science, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/122040/rec/18
Synthesizing a 3D human face surface and texture from
a drawing is a challenging problem. The problem involves inferring
a 3D model and its color information from 2D information. In
contrast, human beings have the natural ability to reconstruct 3D
models from a drawing in their mind effortlessly. This skill is
built up from years of experience in mapping the perceived 2D
information to the 3D model in the actual scene. By imitating this
mapping process, this work illustrates an approach to reconstruct a
3D human face from just the 2D facial landmarks gathered from a
sketch image. The 2D facial landmarks of the sketch image contain
enough information for this process because they semantically
represent a facial structure that can be recognizable as a human
face. The approach also exploits the perspective distortion of the sketch image as guidance to infer the depth information from the 2D landmarks. Various artistic styles can also be applied to the generated face, similar to how an artist would apply a personal style to a drawing. The controlled environment evaluations
show that the reconstructed 3D faces are highly similar to the
ground-truth examples. This approach can be used in many face modeling applications, such as 3D avatar creation, artistic face modeling, and police investigation.
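
As a rough sketch of how 2D landmarks can drive a 3D reconstruction, the following fits the coefficients of a generic PCA-based morphable landmark model by least squares under an orthographic projection. The model, basis, and projection are assumptions for illustration, not the dissertation's mapping method:

```python
import numpy as np

def fit_face(mean3d, basis, landmarks2d):
    """Least-squares fit of generic morphable-model coefficients so the
    orthographic x-y projection of the model's 3D landmarks matches the
    2D sketch landmarks; depth then comes from the learned 3D basis.
    mean3d: (N, 3) mean landmarks; basis: (k, N, 3) PCA modes;
    landmarks2d: (N, 2) sketch landmarks (all assumed inputs)."""
    k = basis.shape[0]
    A = basis[:, :, :2].reshape(k, -1).T         # (2N, k) projected modes
    b = (landmarks2d - mean3d[:, :2]).ravel()    # (2N,) 2D residual
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean3d + np.tensordot(coeffs, basis, axes=1)   # (N, 3) fitted shape
```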
Advisors/Committee Members: Neumann, Ulrich (Committee Chair), Nakano, Aiichiro (Committee Member), Kuo, C.-C. Jay (Committee Member).
Subjects/Keywords: 3D face modeling; 3D avatar; face sketch; face portrait
APA (6th Edition):
Sucontphunt, T. (2012). 3D face surface and texture synthesis from 2D landmarks of a
single face sketch. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/122040/rec/18
Chicago Manual of Style (16th Edition):
Sucontphunt, Tanasai. “3D face surface and texture synthesis from 2D landmarks of a
single face sketch.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/122040/rec/18.
MLA Handbook (7th Edition):
Sucontphunt, Tanasai. “3D face surface and texture synthesis from 2D landmarks of a
single face sketch.” 2012. Web. 07 Mar 2021.
Vancouver:
Sucontphunt T. 3D face surface and texture synthesis from 2D landmarks of a
single face sketch. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/122040/rec/18.
Council of Science Editors:
Sucontphunt T. 3D face surface and texture synthesis from 2D landmarks of a
single face sketch. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/122040/rec/18

University of Southern California
13.
Kim, Woo-Shik.
3-D video coding system with enhanced rendered view
quality.
Degree: PhD, Electrical Engineering, 2011, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/636333/rec/17
► The objective of this research is to develop a new 3-D video coding system which can provide better coding efficiency with improved subjective quality as…
(more)
▼ The objective of this research is to develop a new 3-D
video coding system which can provide better coding efficiency with
improved subjective quality as compared to existing 3-D video
systems such as the depth image based rendering (DIBR) system.
Clearly, one would be able to increase overall performance by
focusing on better “generic” coding tools. Instead, here we focus on techniques that are specific to 3-D video. Specifically, we
consider improved representations for depth information as well as
information that can directly contribute to improved intermediate
view interpolation. ❧ As a starting point, we analyze the
distortions that occur in rendered views generated using the DIBR
system, and classify them in order to evaluate their impact on
subjective quality. As a result, we find that the rendered view
distortion due to depth map coding has non-linear characteristics
(i.e., increases in intensity errors in the interpolated view are
not proportional to increases in depth map coding errors) and is
highly localized (i.e., very large errors occur only in a small
subset of pixels in a video frame), which can lead to significant
degradation in perceptual quality. A flickering artifact is also observed due to temporal variation of the depth map sequence. ❧ To
solve these problems, we first propose new coding tools which can
reduce the rendered view distortion by defining a new distortion
metric to derive relationships between distortions in coded depth
map and rendered view. In addition, a new skip mode selection
method is proposed based on local video characteristics. Our
experimental results show the efficiency of the proposed method
with coding gains of up to 1.6 dB in interpolated frame quality as
well as better subjective quality with reduced flickering
artifacts. ❧ We also propose a new transform coding scheme using a graph-based representation of the signal, which we call the graph-based transform. Since a depth map consists of smooth regions with sharp edges along object boundaries, efficient transform coding can
be performed by forming a graph in which the pixels are not
connected across edges. Experimental results reveal that coding
efficiency improvement of 0.4 dB can be achieved by applying the
new transform in a hybrid manner with DCT to compress a depth map.
❧ Secondly, we propose a solution in which depth transition data is
encoded and transmitted to the decoder. Depth transition data for a
given pixel indicates the camera position for which this pixel’s
depth will change. For example in a pixel corresponding to
foreground in the left image, and background in the right image,
this information helps us determine in which intermediate view (as
we move left to right), this pixel will become a background pixel.
The main reason to consider transmitting this information explicitly is that it can be used to improve view interpolation at
many different intermediate camera positions. Simulation results
show that the subjective quality can be significantly improved
using our proposed depth transition data. Maximum PSNR…
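
The graph-based transform idea (a transform basis taken from the Laplacian of a pixel graph whose links are cut across depth edges) can be sketched as follows. The 4-connectivity and the edge predicate cut are illustrative choices, not the dissertation's exact construction:

```python
import numpy as np

def gbt_basis(block_shape, cut):
    """Graph-based transform basis for a small depth block.
    cut(p, q) -> True if the link between neighboring pixels p and q
    crosses a depth discontinuity and should be removed.
    Returns the Laplacian eigenvectors (columns), ordered by frequency."""
    h, w = block_shape
    n = h * w
    A = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):        # 4-connected grid
                yy, xx = y + dy, x + dx
                if yy < h and xx < w and not cut((y, x), (yy, xx)):
                    j = yy * w + xx
                    A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(1)) - A                       # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs

# coefficients = gbt_basis(block.shape, cut).T @ block.ravel()
# With no cuts, the basis reduces to a separable DCT-like grid basis.
```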
Advisors/Committee Members: Ortega, Antonio (Committee Chair), Kuo, C.-C. Jay (Committee Member), Neumann, Ulrich (Committee Member).
Subjects/Keywords: signal processing; multimedia processing; image processing; video processing; 3-D video; image compression; video compression; video coding; view synthesis; view rendering; depth map coding
APA (6th Edition):
Kim, W. (2011). 3-D video coding system with enhanced rendered view
quality. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/636333/rec/17
Chicago Manual of Style (16th Edition):
Kim, Woo-Shik. “3-D video coding system with enhanced rendered view
quality.” 2011. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/636333/rec/17.
MLA Handbook (7th Edition):
Kim, Woo-Shik. “3-D video coding system with enhanced rendered view
quality.” 2011. Web. 07 Mar 2021.
Vancouver:
Kim W. 3-D video coding system with enhanced rendered view
quality. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/636333/rec/17.
Council of Science Editors:
Kim W. 3-D video coding system with enhanced rendered view
quality. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/636333/rec/17

University of Southern California
14.
Lou, Chung-Cheng.
Low complexity and high efficiency prediction techniques for
video coding.
Degree: PhD, Electrical Engineering, 2011, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/657287/rec/3882
► Video compression has been extensively studied in the last two decades. The success of a coding algorithm relies on the effective removal of spatial and…
(more)
▼ Video compression has been extensively studied in the
last two decades. The success of a coding algorithm relies on the
effective removal of spatial and temporal redundancies in input
video sequences. On the other hand, effective spatial and temporal prediction techniques demand high computational complexity, which makes them challenging to implement on resource-limited mobile devices. This research focuses on two topics: 1) complexity
reduction of temporal prediction without significant
rate-distortion (RD) performance degradation; and 2) the
development of a more effective spatial prediction technique to
enhance the RD performance. ❧ For the first topic, complexity
reduction in temporal prediction is achieved by the development of
an adaptive motion search range (SR) selection algorithm. A good
choice of the SR size helps reduce memory access bandwidth while
maintaining the RD coding performance. To begin with, we obtain a motion vector predictor (MVP) for a target block based on the motion vectors (MVs) of its spatially and temporally neighboring blocks, which form an MV prediction set. Then, we relate the variance of the
MV prediction set to the SR. That is, a larger variance implies
lower accuracy of the MVP and a larger SR. Finally, we derive a
probability model for the motion vector prediction difference
(MVPD), the difference between the optimal MV and the MVP, to
quantify the probability for a chosen SR to contain the optimal MV.
The superior performance of the proposed SR selection algorithm is
demonstrated by experimental results. ❧ For the second topic, a
novel multi-order-residual-prediction (MORP) coding approach is
proposed to improve spatial prediction efficiency in video coding.
We observe that the compression ratio of a video coding algorithm
depends on the nature of sequences as indicated by the ratio
between inter and intra blocks in the bit-stream. When the
percentage of intra blocks increases, the prediction efficiency
decreases, thus leading to a poorer coding gain. In other words,
one bottleneck of video coding lies in poor intra prediction
efficiency. To address this issue, we propose an MORP coding scheme
that adopts a second-order prediction scheme after the traditional
first-order prediction. Different prediction techniques are adopted
in different stages to tailor to the nature of the corresponding
residual signals. The proposed MORP scheme outperforms H.264/AVC
for the intra block coding and, thus, improves the overall coding
efficiency. ❧ Finally, we analyze prediction inefficiency of the
proposed MORP scheme and present an enhanced intra prediction
coding called the generalized line-based intra prediction (GLIP) to
improve it. The GLIP allows partial prediction of a coding block by
enabling a subset of the neighboring prediction pixels. The
residual signal after the first order prediction consists of the
local line structure while the GLIP is designed to exploit this
feature. The vector quantization (VQ) technique is used to
approximate and encode the shape of the binarized residual signal.
The…
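
A toy version of variance-driven search-range selection might look like the sketch below. The median predictor and the base/gain/max_sr knobs are illustrative assumptions, not the dissertation's probability model for the MVPD:

```python
import numpy as np

def select_search_range(neighbor_mvs, base=8, gain=2.0, max_sr=64):
    """Predict the MV as the median of spatial/temporal neighbor MVs and
    widen the search range when the neighbors disagree: a high variance
    of the MV prediction set implies a less reliable predictor, hence a
    larger search range."""
    mvs = np.asarray(neighbor_mvs, float)       # (n, 2) array of (dx, dy)
    mvp = np.median(mvs, axis=0)                # motion vector predictor
    spread = mvs.var(axis=0).sum() ** 0.5       # spread of the prediction set
    sr = int(min(max_sr, base + gain * spread)) # adaptive search range
    return mvp, sr
```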
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Ortega, Antonio (Committee Member), Nakano, Aiichiro (Committee Member).
Subjects/Keywords: motion estimation; MV search window prediction; mobile devices; memory-constrained system; video coding; motion search range; motion vector prediction; adaptive search range selection; high efficiency video coding
APA (6th Edition):
Lou, C. (2011). Low complexity and high efficiency prediction techniques for
video coding. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/657287/rec/3882
Chicago Manual of Style (16th Edition):
Lou, Chung-Cheng. “Low complexity and high efficiency prediction techniques for
video coding.” 2011. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/657287/rec/3882.
MLA Handbook (7th Edition):
Lou, Chung-Cheng. “Low complexity and high efficiency prediction techniques for
video coding.” 2011. Web. 07 Mar 2021.
Vancouver:
Lou C. Low complexity and high efficiency prediction techniques for
video coding. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/657287/rec/3882.
Council of Science Editors:
Lou C. Low complexity and high efficiency prediction techniques for
video coding. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/657287/rec/3882

University of Southern California
15.
Sheng, Lingyan.
Novel algorithms for large scale supervised and one class
learning.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/225861/rec/4466
► Supervised learning is the machine learning task of inferring a function from labeled training data. There have been numerous algorithms proposed for supervised learning, such…
(more)
▼ Supervised learning is the machine learning task of
inferring a function from labeled training data. There have been
numerous algorithms proposed for supervised learning, such as linear discriminant analysis (LDA), support vector machines (SVM), and decision trees. However, most of them are not able to handle an increasingly common type of data, high-dimensional data, such as gene expression data, text documents, and MRI images. This difficulty is often called the curse of dimensionality. Our
solution to this problem is an improvement to LDA that imposes a
regularized structure on the covariance matrix, so that it becomes
block diagonal while feature reduction is performed. The improved
method, which we call block diagonal discriminant analysis (BDLDA),
effectively exploits the off-diagonal information in the covariance matrix without huge computation and memory requirements. BDLDA is further improved by using treelets as a preprocessing tool. Treelets, by transforming the original data with successive local PCA, concentrate more energy near the diagonal entries of the covariance matrix and thus achieve even better accuracy than BDLDA. ❧ Supervised learning requires labeled information of all
classes. However, since labeled data is often more difficult to
obtain than unlabeled data, there is an increasing interest in a
special form of learning, namely, one class learning. In one class
learning, the training set only has samples of one class, and the
goal is to distinguish the class from all other samples. We propose
a one class learning algorithm, Graph-One Class Learning (Graph-OCL). Graph-OCL is a two-step strategy: we first identify reliable negative samples, and then classify the samples based on the labeled data and the negative samples identified in the first step. The main novelty is the first step, in which
graph-based ranking by learning with local and global consistency
(LGC) is used. Graph-based ranking is particularly accurate if the
samples and their similarities are well represented by a graph. We
also theoretically prove that there is a simple method to select a constant parameter α for LGC, thus eliminating the necessity of model selection by time-consuming validation. ❧ Graph-based methods
usually scale badly as a function of the sample size. This can be
solved by using the Nyström approximation, which samples a few
columns to represent the affinity matrix. We propose a new method,
BoostNyström, which adaptively samples a subset of columns at each
iterative step and updates the sampling probability in the next
iterative step. This algorithm is based on a novel perspective,
which relates the quality of Nyström approximation with the
subspace spanned by the sampled columns. BoostNyström can be
potentially applied to Graph-OCL to solve the problem of large data
size.
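
The standard Nyström approximation that BoostNyström builds on can be written in a few lines. The RBF kernel and the uniform column sample below are placeholders; BoostNyström's contribution is precisely the adaptive, iterative choice of the sampled columns idx:

```python
import numpy as np

def nystrom_approx(kernel, X, idx):
    """Nystrom approximation of the full affinity matrix K using only the
    columns indexed by idx: K ~ C @ pinv(W) @ C.T, where C = K[:, idx]
    and W is the block of K restricted to the sampled rows/columns."""
    C = kernel(X, X[idx])              # (n, m) sampled columns
    W = kernel(X[idx], X[idx])         # (m, m) intersection block
    return C @ np.linalg.pinv(W) @ C.T

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) affinity between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Example:
# X = np.random.rand(500, 3)
# idx = np.random.choice(500, 40, replace=False)   # uniform sampling
# K_hat = nystrom_approx(rbf, X, idx)
```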
Advisors/Committee Members: Ortega, Antonio K. (Committee Chair), Kuo, C.-C. Jay (Committee Member), Liu, Yan (Committee Member).
Subjects/Keywords: supervised learning; one class learning; linear discriminant analysis; graph; Nyström approximation
APA (6th Edition):
Sheng, L. (2013). Novel algorithms for large scale supervised and one class
learning. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/225861/rec/4466
Chicago Manual of Style (16th Edition):
Sheng, Lingyan. “Novel algorithms for large scale supervised and one class
learning.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/225861/rec/4466.
MLA Handbook (7th Edition):
Sheng, Lingyan. “Novel algorithms for large scale supervised and one class
learning.” 2013. Web. 07 Mar 2021.
Vancouver:
Sheng L. Novel algorithms for large scale supervised and one class
learning. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/225861/rec/4466.
Council of Science Editors:
Sheng L. Novel algorithms for large scale supervised and one class
learning. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/225861/rec/4466

University of Southern California
16.
Rossi, Lorenzo.
Efficient data collection in wireless sensor networks:
modeling and algorithms.
Degree: PhD, Electrical Engineering, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/195260/rec/2245
► This dissertation focuses on data gathering for wireless sensor networks. Data gathering deals with the problem of transmitting measurements of physical phenomena from the sensor…
(more)
▼ This dissertation focuses on data gathering for
wireless sensor networks. Data gathering deals with the problem of
transmitting measurements of physical phenomena from the sensor
nodes to one or more sinks in the most efficient manner. It is
usually the main task performed by a sensor network and therefore
the main cause of energy depletion for the nodes. The research
efforts presented here propose insightful models for the phenomena
sampled by sensor networks with the purpose of designing more
energy efficient data gathering schemes. ❧ We first focus on
phenomena that can be characterized by a diffusive process. We
propose to model the data via discretized diffusion partial
differential equations (PDEs). The rationale is that a few equation coefficients plus initial and boundary conditions have the potential to describe such spatio-temporal phenomena completely and compactly. We propose and study an algorithm for the
in-network identification of the diffusion coefficients. Then, we adopt a spatially non-stationary correlation model and study how it impacts correlation-based data gathering and, in particular, the problem of optimally placing a sink node in a sensor network
region. Finally, we view each round of sensor measurements as a
still image and represent it via intensity histograms. This way, we can adopt image content analysis tools (intensity histogram matching) to analyze the data and determine which rounds of
measurements are of interest to the final users. Therefore energy
can be saved by transmitting only some rounds of measurements to
the base station. We study the above models and the performance of
data collection algorithms via analysis and experiments on
synthetic and real data.
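
As a minimal illustration of identifying a diffusion coefficient from sampled data, the sketch below fits D in u_t = D u_xx by least squares over finite differences on a regular grid. The centralized, one-dimensional setting is a simplification of the in-network problem studied in the dissertation:

```python
import numpy as np

def estimate_diffusion(u, dt, dx):
    """Least-squares fit of the diffusion coefficient D in u_t = D u_xx
    from samples u[t, x] on a regular grid.  u_t and u_xx are formed by
    finite differences; D is the closed-form least-squares slope."""
    ut = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                  # time derivative
    uxx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    a, b = uxx.ravel(), ut.ravel()
    return float(a @ b / (a @ a))                           # D estimate
```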
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Krishnamachari, Bhaskar (Committee Member), Golubchik, Leana (Committee Member).
Subjects/Keywords: digital signal processing; information theory; pattern recognition; wireless sensor networks; partial differential equations; spatially non-stationary correlations
APA (6th Edition):
Rossi, L. (2012). Efficient data collection in wireless sensor networks:
modeling and algorithms. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/195260/rec/2245
Chicago Manual of Style (16th Edition):
Rossi, Lorenzo. “Efficient data collection in wireless sensor networks:
modeling and algorithms.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/195260/rec/2245.
MLA Handbook (7th Edition):
Rossi, Lorenzo. “Efficient data collection in wireless sensor networks:
modeling and algorithms.” 2012. Web. 07 Mar 2021.
Vancouver:
Rossi L. Efficient data collection in wireless sensor networks:
modeling and algorithms. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/195260/rec/2245.
Council of Science Editors:
Rossi L. Efficient data collection in wireless sensor networks:
modeling and algorithms. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/195260/rec/2245

University of Southern California
17.
Liu, Tsung-Jung.
A learning‐based approach to image quality
assessment.
Degree: PhD, Electrical Engineering, 2016, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/446378/rec/226
► Research on visual quality assessment has been active during the last decade. This dissertation consists of six parts centered on this subject. In Chapter 1,…
(more)
▼ Research on visual quality assessment has been active
during the last decade. This dissertation consists of six parts
centered on this subject. In Chapter 1, we highlight the
significance and contributions of our research work. The previous
work in this area is also thoroughly reviewed. ❧ In Chapter 2, we
provide an in‐depth review of recent developments in the field. As
compared with others' work, our survey has several contributions. First, besides image quality databases and metrics, we put emphasis on video quality databases and metrics since this is a less investigated area. Second, we discuss the application of visual quality evaluation to perceptual coding as an example application. Third, we compare the performance of state‐of‐the‐art visual quality metrics with experiments. Finally, we introduce the machine learning methods that can be applied to visual quality assessment. ❧ In Chapter 3, a new methodology for
objective image quality assessment (IQA) with multi‐method fusion
(MMF) is proposed. The research is motivated by the observation
that there is no single method that can give the best performance
in all situations. To achieve MMF, we adopt a regression approach.
The new MMF score is set to be a nonlinear combination of scores
from multiple methods with suitable weights obtained by a training
process. In order to improve the regression results further, we
divide distorted images into three to five groups based on the
distortion types and perform regression within each group, which is
called ""context‐dependent MMF"" (CD‐MMF). One task in CD‐MMF is to
determine the context automatically, which is achieved by a machine
learning approach. To further reduce the complexity of MMF, we apply algorithms to select a small subset of the candidate method set. The result remains very good even when only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression (SVR) is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases. ❧ In
Chapter 4, an ensemble method for full‐reference image quality assessment (IQA) based on the parallel boosting (ParaBoost for short) idea is proposed. We first extract features
from existing image quality metrics and train them to form basic
image quality scorers (BIQSs). Then, we select additional features
to address specific distortion types and train them to construct
auxiliary image quality scorers (AIQSs). Both BIQSs and AIQSs are
trained on small image subsets of certain distortion types and, as
a result, they are weak performers with respect to a wide variety
of distortions. Finally, we adopt the ParaBoost framework to fuse
the scores of BIQSs and AIQSs to evaluate images containing a wide
range of distortion types. This ParaBoost methodology can be easily
extended to images of new distortion types. Extensive experiments are conducted to demonstrate the superior performance of the ParaBoost method, which outperforms existing…
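
A bare-bones version of multi-method fusion by regression might look as follows. The use of scikit-learn and the SVR hyperparameters are illustrative assumptions, not the tuned setup from the dissertation:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_mmf(method_scores, mos):
    """Multi-method fusion as regression: each row of method_scores holds
    the scores several IQA methods assign to one image, and mos is the
    subjective mean opinion score.  A nonlinear SVR learns the fused
    score as a nonlinear combination of the individual methods."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    return model.fit(np.asarray(method_scores), np.asarray(mos))

# fused_scores = train_mmf(train_scores, train_mos).predict(test_scores)
```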
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Georgiou, Panayiotis G. (Committee Member), Nakano, Aiichiro (Committee Member).
Subjects/Keywords: ensemble; fusion; image quality assessment; image quality scorer; machine learning; ParaBoost
APA (6th Edition):
Liu, T. (2016). A learning‐based approach to image quality
assessment. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/446378/rec/226
Chicago Manual of Style (16th Edition):
Liu, Tsung-Jung. “A learning‐based approach to image quality
assessment.” 2016. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/446378/rec/226.
MLA Handbook (7th Edition):
Liu, Tsung-Jung. “A learning‐based approach to image quality
assessment.” 2016. Web. 07 Mar 2021.
Vancouver:
Liu T. A learning‐based approach to image quality
assessment. [Internet] [Doctoral dissertation]. University of Southern California; 2016. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/446378/rec/226.
Council of Science Editors:
Liu T. A learning‐based approach to image quality
assessment. [Doctoral Dissertation]. University of Southern California; 2016. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/446378/rec/226

University of Southern California
18.
Shirani-Mehr, Houtan.
Efficient reachability query evaluation in large
spatiotemporal contact networks.
Degree: PhD, Computer Science, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/307725/rec/2250
► In many application scenarios, an item, such as a message, a piece of sensitive information, contagious virus or a malicious malware, passes between two objects,…
(more)
▼ In many application scenarios, an item, such as a message, a piece of sensitive information, a contagious virus, or a piece of malware, passes between two objects, such as moving
vehicles, individuals or cell phone devices, when the objects are
sufficiently close (i.e., when they are, so-called, in contact),
and some application-specific constraints are satisfied. An example of a "constraint" in the transmission of malware is that the malware takes some time to activate on a cell phone before it can be transmitted to another phone via Bluetooth. As another example of a constraint, a message passes between two vehicles with a probability that depends on various conditions, such as the distance between the vehicles. In such applications, once an item
is initiated, it can penetrate the object population through the
evolving network of contacts among objects, termed a "contact network". A reachability query evaluates whether two objects are "reachable" through the contact network. In this dissertation, we
define and study reachability query in large (i.e., disk resident)
contact datasets which verifies whether two objects are reachable
through the contact network represented by such contact datasets.
The main characteristics of our problem are the large scale of the
contact dataset as well as the dynamism of the network which models
the contact dataset. This underlying network evolves over the time
period during which the contact dataset is constructed as the
objects are moving in the environment and subsequently new contacts
appear and old contacts disappear over time. ❧ In this
dissertation, due to the complexity of the general problem, we
first simplify the problem by focusing on reachability in contact
datasets with no-constraints. With such contact datasets, an item
passes between two objects when they are close enough. We propose
two contact dataset indexes, termed ReachGrid and ReachGraph, for
efficient reachability query processing. With ReachGrid, at the
query time only a small necessary portion of the contact dataset is
constructed and traversed. With ReachGraph, we precompute and
leverage reachability at different scales for efficient query
processing. We optimize the disk placement of both indexes for
efficient query processing. ❧ Afterward, we extend ReachGrid and ReachGraph to contact networks with constraints. To this end, as a case study we focus on a specific type of constraint, i.e., the latency constraint, and adapt ReachGraph and ReachGrid for efficient reachability query processing. Furthermore, we discuss
how to generalize ReachGraph and ReachGrid for contact networks
with general constraints based on the insights we obtain from
focusing on contact networks with latency.
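
In the no-constraint setting, a reachability query reduces to a time-respecting sweep over the contact dataset. The sketch below shows the query semantics only; the ReachGrid and ReachGraph indexes exist precisely to avoid this full scan on large, disk-resident data:

```python
def reachable(contacts, src, dst, t0=0.0):
    """Time-respecting reachability over a contact dataset.  contacts is
    a list of (time, a, b) tuples meaning objects a and b were in
    contact at 'time'.  An item placed on src at time t0 reaches dst if
    a chain of contacts with non-decreasing times connects them (the
    no-constraint setting; latency or probabilistic constraints would
    add conditions here)."""
    infected = {src: t0}                       # object -> earliest receive time
    for t, a, b in sorted(contacts):           # process contacts in time order
        if a in infected and infected[a] <= t:
            infected.setdefault(b, t)
        if b in infected and infected[b] <= t:
            infected.setdefault(a, t)
    return dst in infected

# reachable([(1, 'u', 'v'), (2, 'v', 'w')], 'u', 'w')  ->  True
```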
Advisors/Committee Members: Shahabi, Cyrus (Committee Chair), Kuo, C.-C. Jay (Committee Member), Narayanan, Shrikanth S. (Committee Member).
Subjects/Keywords: contact networks; query processing; reachability query
APA (6th Edition):
Shirani-Mehr, H. (2013). Efficient reachability query evaluation in large
spatiotemporal contact networks. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/307725/rec/2250
Chicago Manual of Style (16th Edition):
Shirani-Mehr, Houtan. “Efficient reachability query evaluation in large
spatiotemporal contact networks.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/307725/rec/2250.
MLA Handbook (7th Edition):
Shirani-Mehr, Houtan. “Efficient reachability query evaluation in large
spatiotemporal contact networks.” 2013. Web. 07 Mar 2021.
Vancouver:
Shirani-Mehr H. Efficient reachability query evaluation in large
spatiotemporal contact networks. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/307725/rec/2250.
Council of Science Editors:
Shirani-Mehr H. Efficient reachability query evaluation in large
spatiotemporal contact networks. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/307725/rec/2250

University of Southern California
19.
Gao, Zhenzhen.
City-scale aerial LiDAR point cloud visualization.
Degree: PhD, Computer Science (Multimedia and Creative
Technologies), 2014, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/426678/rec/1370
► Aerial LiDAR (Light Detection and Ranging) is cost‐effective in acquiring terrain and urban information by mounting a downward‐scanning laser on a low‐flying aircraft. It produces…
(more)
▼ Aerial LiDAR (Light Detection and Ranging) is
cost‐effective in acquiring terrain and urban information by
mounting a downward‐scanning laser on a low‐flying aircraft. It
produces huge volumes of unconnected 3D points. This thesis focuses
on the interactive visualization of aerial LiDAR point clouds of
cities, which is applicable to a number of areas including virtual
tourism, security, land management and urban planning. ❧ A
framework needs to address several challenges in order to deliver
useful visualizations of aerial LiDAR cities. Firstly, the data is
2.5D, in that the sensor is only able to capture dense details of
the surfaces facing it, leaving few samples on vertical building
walls. Secondly, the data often suffers from noise and
under‐sampling. Finally, the large size of the data can easily
exceed the memory capacity of a computer system. ❧ This thesis
first introduces a visually‐complete rendering framework for aerial
LiDAR cities. By inferring classification information, building
walls and occluded ground areas under tree canopies are completed
either through pre‐processing point cloud augmentation or through
online procedural geometry generation. A multi‐resolution
out‐of‐core strategy and GPU‐accelerated rendering enable
interactive visualization of data of virtually unlimited size. Adding only a slight overhead to existing point‐based approaches, the framework provides quality comparable to visualizations based on off‐line pre‐computed 3D polygonal models. ❧ The thesis then
presents a scalable out‐of‐core algorithm for mapping colors from
aerial oblique imagery to city‐scale aerial LiDAR points. Without
intensive processing of points, colors are mapped via a modified
visibility pass of GPU splatting, and a weighting scheme leveraging
image resolution and surface orientation. ❧ To alleviate visual
artifacts caused by noise and under‐sampling, the thesis shows an
off‐line point cloud refinement algorithm. By explicitly
regularizing building boundary points, the algorithm can
effectively remove noise, fill gaps, and preserve and enhance both
normal and position discontinuous features for piece‐wise smoothing
buildings with arbitrary shape and complexity. ❧ Finally, the
thesis introduces a new multi‐resolution rendering framework that
supports real‐time refinement of aerial LiDAR cities. Without
complex computation and without user interference, simply based on
curvature analysis of points of uniform sized spatial partitions,
hierarchical hybrid structures are constructed indicating whether
to represent a partition as point or polygon. With the help of such
structures, both rendering and refinement are dynamically adaptive
to views and curvatures. Compared to visually‐complete rendering,
the new framework is able to deliver comparable visual quality with
less than 8% increase in pre‐processing time and 2-5 times higher
rendering frame‐rates. Experiments on several cities show that the
refinement improves rendering quality for large magnification under
real‐time constraint.
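
The resolution-and-orientation weighting used for color mapping can be caricatured as below. The exact weights in the thesis may differ, and pixels_per_meter stands in for whatever local image-resolution estimate is available:

```python
import numpy as np

def color_weight(point_normal, view_dir, pixels_per_meter):
    """Illustrative weight for mapping oblique-image colors onto LiDAR
    points: favor images that see the surface head-on (large |n . v|)
    and that sample it densely (high resolution at the point)."""
    facing = max(0.0, float(np.dot(point_normal, -np.asarray(view_dir))))
    return facing * pixels_per_meter

def blend(colors, weights):
    """Weighted average of candidate colors from several images."""
    w = np.asarray(weights, float)
    return (np.asarray(colors, float) * w[:, None]).sum(0) / max(w.sum(), 1e-9)
```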
Advisors/Committee Members: Neumann, Ulrich (Committee Chair), Kuo, C.-C. Jay (Committee Member), Nakano, Aiichiro (Committee Member).
Subjects/Keywords: city‐scale; aerial LiDAR; point cloud; rendering; GPU; hybrid; visually‐complete; visualization
APA (6th Edition):
Gao, Z. (2014). City-scale aerial LiDAR point cloud visualization. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/426678/rec/1370
Chicago Manual of Style (16th Edition):
Gao, Zhenzhen. “City-scale aerial LiDAR point cloud visualization.” 2014. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/426678/rec/1370.
MLA Handbook (7th Edition):
Gao, Zhenzhen. “City-scale aerial LiDAR point cloud visualization.” 2014. Web. 07 Mar 2021.
Vancouver:
Gao Z. City-scale aerial LiDAR point cloud visualization. [Internet] [Doctoral dissertation]. University of Southern California; 2014. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/426678/rec/1370.
Council of Science Editors:
Gao Z. City-scale aerial LiDAR point cloud visualization. [Doctoral Dissertation]. University of Southern California; 2014. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/426678/rec/1370

University of Southern California
20.
Wang, Jingwei.
Depth inference and visual saliency detection from 2D
images.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/293350/rec/1846
► With the rapid development of 3D vision technology, it is an active research topic to recover the depth information from 2D images. Current solutions heavily…
(more)
▼ With the rapid development of 3D vision technology, it
is an active research topic to recover the depth information from
2D images. Current solutions heavily depend on the structure
assumption of the 2D image and their applications are limited. It
is now still technically challenging to develop an efficient yet
general solution to generate the depth map from a single image.
Furthermore, psychological studies indicate that human eyes are particularly sensitive to salient object regions within an image. Thus, it is critical to detect salient objects accurately and to segment their boundaries well, as small depth errors in these areas lead to intolerable visual distortion. Briefly speaking, this research falls into two categories: depth map inference system design, and salient object detection and segmentation algorithm development. ❧ For
depth map inference system design, we propose a novel depth
inference system for 2D images and videos. Specifically, we first
adopt the in-focus region detection and salient map computation
techniques to separate the foreground objects from the remaining
background region. After that, a color-based grab-cut algorithm is
used to remove the background from obtained foreground objects by
modeling the background. As a result, the depth map of the
background can be generated by a modified vanishing point detection
method. Then, key frame depth maps can be propagated to the
remaining frames. Finally, to meet the stringent requirements of
VLSI chip implementation such as limited on-chip memory size and
real-time processing, we modify some building modules with
simplified versions of the in-focus region detection and the
mean-shift algorithm. Experimental result shows that the proposed
solution can provide accurate depth maps for 83% of images while
other state-of-the-art methods can only achieve accuracy for 34% of
these test images. This simplified solution, targeting VLSI chip implementation, has been validated for its high accuracy as well as high efficiency on several test video clips. ❧ For salient
object detection, inspired by the success of late fusion in semantic analysis and multi-modal biometrics, we model saliency detection as late fusion at the confidence score level. Specifically, we propose to fuse state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated from these models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., a Support Vector Machine (SVM), Adaptive Boosting (AdaBoost), or Probability Density Estimator (PDE)) to predict the final saliency map. To explore the strength of para-boosting learners, traditional transformation-based fusion strategies such as Sum, Min, and Max are also applied for comparison. In our application scenario, salient object segmentation is the final goal, so we further propose a novel salient object segmentation scheme using a Conditional Random Field (CRF) graph model. In this segmentation model, we…
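
Score-level late fusion of saliency models can be sketched as a per-pixel classifier over model scores. LinearSVC here stands in for the para-boosting learner, and the data layout is an assumption:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_fusion(maps, gt_masks):
    """Score-level late fusion: for every pixel, the feature vector is
    the scores the individual saliency models assign to it, and the
    label is whether the pixel belongs to the salient object.
    maps: list of (k, H, W) arrays (k model outputs per image);
    gt_masks: list of (H, W) binary ground-truth masks."""
    X = np.vstack([m.reshape(m.shape[0], -1).T for m in maps])
    y = np.concatenate([g.ravel() for g in gt_masks]).astype(int)
    return LinearSVC().fit(X, y)

def fuse(model, maps_one_image):
    """Produce a fused saliency map for one image."""
    k, h, w = maps_one_image.shape
    scores = model.decision_function(maps_one_image.reshape(k, -1).T)
    return scores.reshape(h, w)
```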
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Jenkins, Brian Keith (Committee Member), Itti, Laurent (Committee Member).
Subjects/Keywords: 2D; 3D; depth; saliency; image
APA (6th Edition):
Wang, J. (2013). Depth inference and visual saliency detection from 2D
images. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/293350/rec/1846
Chicago Manual of Style (16th Edition):
Wang, Jingwei. “Depth inference and visual saliency detection from 2D
images.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/293350/rec/1846.
MLA Handbook (7th Edition):
Wang, Jingwei. “Depth inference and visual saliency detection from 2D
images.” 2013. Web. 07 Mar 2021.
Vancouver:
Wang J. Depth inference and visual saliency detection from 2D
images. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/293350/rec/1846.
Council of Science Editors:
Wang J. Depth inference and visual saliency detection from 2D
images. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/293350/rec/1846

University of Southern California
21.
Gawecki, Martin.
A signal processing approach to robust jet engine fault
detection and diagnosis.
Degree: PhD, Electrical Engineering, 2015, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/454289/rec/357
► As in any mechanical system, entropy is continually fighting our best efforts to preserve order. Engineers, mechanics, and pilots have all helped in the process…
(more)
▼ As in any mechanical system, entropy is continually
fighting our best efforts to preserve order. Engineers, mechanics,
and pilots have all helped in the process of engine health
management by perceiving and identifying faults in aircraft. The
complexity of these systems has gradually increased, necessitating
the evolution of novel methods to detect engine component problems.
As airlines and manufacturers have begun to develop capabilities for the collection of ever more information in the age of "Big Data," an opportunity for such a method has presented itself to the signal processing community. ❧ This work will address the
development of reliable fault detection and diagnosis algorithms,
built around the collection of various types of engine health data. Engine Health Management (EHM) has so far relied on rudimentary readings, the diligence of maintenance crews, and pilot familiarity with expected equipment behavior. While the majority of EHM
advances are inexorably tied to the field of mechanical and
aerospace engineering, signal processing approaches can make unique
contributions in effectively handling the oncoming deluge of
complicated data. ❧ In this work, two broad approaches are taken to address the challenges of such an undertaking. First, the feasibility of vibration and acoustic
sensors is examined in controlled experimental conditions to
determine if such information is useful. This in turn will be used
to develop modern detection/diagnosis algorithms and examine the
importance of sampling frequency for EHM systems in this context.
Here, this work offers several important contributions, chief among which are: excellent results for "stationary" phases of flight, a consistent fault detection rate for synthetic abrupt changes, fast responses to component failures in high‐frequency data, and well‐defined clustering for nominal samples in lower‐frequency (1 Hz) data. ❧ Second, this work describes an improved Gas Path
Analysis (GPA) approach that utilizes information from traditional
sensors (pressures, temperatures, speeds, etc.) to produce relevant
high‐quality simulated data, develop a correspondence between
simulated and real‐world data, and demonstrate the feasibility of
fault detection in these scenarios. Here, the chief contribution is
the establishment of a close agreement between synthetically
simulated faults and nominal data from real engines. Building on
this, a reliable fault detection and diagnosis system for "stationary" and "transient" flight phases is developed, while adapting high-quality simulated full flight data to low‐frequency (1 Hz) real-world correspondences.
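
The "well-defined clustering for nominal samples" admits a simple anomaly-detection reading: model nominal feature vectors as one Gaussian cluster and flag points far from it. The sketch below is that generic idea, not the dissertation's detector:

```python
import numpy as np

def fit_nominal(features):
    """Fit a Gaussian cluster to nominal engine feature vectors (e.g.,
    per-snapshot summaries of 1 Hz sensor data).  Returns the mean and
    precision matrix for Mahalanobis scoring."""
    mu = features.mean(0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, prec):
    """Distance from the nominal cluster; large values suggest a fault."""
    d = x - mu
    return float(np.sqrt(d @ prec @ d))

# Flag a fault when mahalanobis(x, mu, prec) exceeds a chosen threshold.
```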
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Nakano, Aiichiro (Committee Member), Jenkins, Brian Keith (Committee Member).
Subjects/Keywords: detection and diagnosis; pattern recognition; engine health management; gas path analysis; machine learning; turbofan jet engine; full flight data; CMAPSS; MFCC; CELP; DCT; SVM; fusion; engine transients; change‐point detection; quick access recorder
APA (6th Edition):
Gawecki, M. (2015). A signal processing approach to robust jet engine fault
detection and diagnosis. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/454289/rec/357
Chicago Manual of Style (16th Edition):
Gawecki, Martin. “A signal processing approach to robust jet engine fault
detection and diagnosis.” 2015. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/454289/rec/357.
MLA Handbook (7th Edition):
Gawecki, Martin. “A signal processing approach to robust jet engine fault
detection and diagnosis.” 2015. Web. 07 Mar 2021.
Vancouver:
Gawecki M. A signal processing approach to robust jet engine fault
detection and diagnosis. [Internet] [Doctoral dissertation]. University of Southern California; 2015. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/454289/rec/357.
Council of Science Editors:
Gawecki M. A signal processing approach to robust jet engine fault
detection and diagnosis. [Doctoral Dissertation]. University of Southern California; 2015. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/454289/rec/357

University of Southern California
22.
He, Xingze.
Novel and efficient schemes for security and privacy issues
in smart grids.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/309209/rec/4467
► The past years have witnessed the fast development of smart grids all over the world. The introduction of digital communication technologies into the power system…
(more)
▼ The past years have witnessed the fast development of
smart grids all over the world. The introduction of digital
communication technologies into the power system makes smart grids
more efficient and intelligent. In the meantime, however, wide
security and privacy concerns arise due to the increasing system complexity. Without proper protection, smart grids are extremely vulnerable to various attacks, such as conventional physical damage and emerging cyber attacks. On the other hand, even tiny system faults, if not detected and resolved in a real-time manner, can lead to large-scale power outages with unexpected losses. Besides, customers' privacy is also severely threatened by the provision of fine-grained power consumption data in smart grids.
Motivated by these concerns, three novel schemes from different
technical perspectives are proposed in the dissertation. ❧ For the
first topic, an efficient homomorphic encryption-based system was
proposed for securing data transmission, data sharing and
operations among different parties. In this work, we first proposed
a system framework tailored for homomorphic encryption techniques
which have great potential to secure data, enable
privacy-preserving data sharing and thereby improve the overall
efficiency of smart grids. Based on the proposed system framework,
we then designed a practical system with an extended partially
homomorphic encryption scheme. With homomorphic features, we prove
that the designed system well supports privacy-preserving data
aggregation and power consumption statistical analysis in smart
grids. ❧ For the second topic, a metering scheme was proposed to protect customers' privacy. In this work, a reading distortion
scheme was first designed to distort smart meter data in a way that
only data senders (i.e. customers) are able to access the original
power consumption data. With distorted power consumption data, an
aggregated billing mechanism was then proposed to guarantee
accurate billing service. For power consumption analysis and
prediction, we designed a distribution reconstruction algorithm to
recover the original power consumption distribution from distorted
power consumption data. To show the security, two potential attacks
were investigated theoretically. Experimental results on real world
power consumption data were discussed in the end. ❧ For the third
topic, a power quality monitoring scheme using change-point
detection techniques was investigated. After modeling pre-event and
post-event power signal, we proposed a weighted CUSUM algorithm to
detect common power quality events, i.e. sags, transients, swells
and harmonics. With experimental results, we compared proposed
scheme with conventional power quality monitoring techniques and
concluded with the superiority of the proposed scheme. We also extend the scheme to a distributed version under a multi-sensor scenario. The proposed MVWCUSUM scheme is compared with the recent MBQCUSUM scheme in terms of detection latency and robustness.
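
The plain CUSUM recursion that the proposed weighted variant extends can be written compactly. The Gaussian mean-shift model and the fixed threshold below are textbook assumptions, not the dissertation's exact event models:

```python
def cusum(samples, mean0, mean1, sigma2, threshold):
    """Plain CUSUM for detecting a shift in the mean of a power-signal
    statistic from mean0 (pre-event) to mean1 (post-event), assuming
    Gaussian noise with variance sigma2.  A weighted variant would add
    per-sample weights to this log-likelihood-ratio recursion."""
    g = 0.0
    for k, x in enumerate(samples):
        # one-sample log-likelihood ratio of post-event vs. pre-event
        llr = ((x - mean0) ** 2 - (x - mean1) ** 2) / (2.0 * sigma2)
        g = max(0.0, g + llr)
        if g > threshold:
            return k                   # alarm raised at sample k
    return None                        # no event detected
```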
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Hwang, Kai (Committee Member), Huang, Ming-Deh (Committee Member).
Subjects/Keywords: smart grids; security; privacy; power quality monitoring; change-point detection
APA (6th Edition):
He, X. (2013). Novel and efficient schemes for security and privacy issues
in smart grids. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/309209/rec/4467
Chicago Manual of Style (16th Edition):
He, Xingze. “Novel and efficient schemes for security and privacy issues
in smart grids.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/309209/rec/4467.
MLA Handbook (7th Edition):
He, Xingze. “Novel and efficient schemes for security and privacy issues
in smart grids.” 2013. Web. 07 Mar 2021.
Vancouver:
He X. Novel and efficient schemes for security and privacy issues
in smart grids. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/309209/rec/4467.
Council of Science Editors:
He X. Novel and efficient schemes for security and privacy issues
in smart grids. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/309209/rec/4467

University of Southern California
23.
Yuan, Hang.
Modeling and optimization of energy-efficient and
delay-constrained video sharing servers.
Degree: PhD, Electrical Engineering, 2015, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/370334/rec/4124
► With the continually growing popularity of online video sharing, energy consumption in video sharing servers has become a pivotal issue. Energy saving in large‐scale video…
(more)
▼ With the continually growing popularity of online
video sharing, energy consumption in video sharing servers has
become a pivotal issue. Energy saving in large‐scale video sharing
data centers can be achieved by utilizing low power modes in disks,
yet this could lead to excessive delay and affect the
quality‐of‐service. In this thesis, we present techniques that
jointly optimize energy and delay for video sharing servers.
Specifically, we present a general energy‐delay optimization
framework that can be applied to a variety of issues related to
energy management in video‐sharing services. Furthermore, the
framework is generally applicable to disks with multiple low‐power
modes, including currently available disks and future ones. ❧ This
thesis features a comprehensive survey followed by careful
examination of three major problems in energy management for
video‐sharing services: power mode selection, caching and data
placement. For the first topic, we propose a novel model that
exploits the unique workload characteristics of video‐sharing
services. Based on the model, we formulate the power mode decision
problem as a constrained optimization task. By solving the
optimization problem, the proposed prediction‐based mode decision
(PMD) algorithm selects the optimal power modes for disks with
various delay constraints. ❧ For the second topic, we investigate
the effects of caching on energy efficiency and study how cache can
be better utilized in the context of energy‐delay optimization. We
extend the original framework and propose two new techniques along
this direction to improve energy efficiency. Firstly, we adopt a
energy‐delay‐optimized caching (EDOC) utility for cache
replacement. Then, we propose the prediction‐based energy‐efficient
prefetching (PEEP) algorithm that effectively reduces mode
transition overheads for the video storage server. Experiments show
that our schemes achieve significantly more energy savings under
the same delay level compared to the traditional threshold‐based
energy management scheme. ❧ Finally, we present a learning‐based
optimization scheme for the placement of video data. Optimization
of data placement has been known to be an NP‐hard problem even when
the objective function is explicitly given, and it becomes even more difficult in the context of energy efficiency due to the lack of analytical models that can accurately predict energy consumption and service delays. Instead of resorting to heuristic approaches
like previous work, we approach the mathematical problem by
applying machine learning techniques. The solution we provide can
create data-disk allocations that are energy efficient under a wide
array of conditions, including different levels of service load,
delay requirements, and capacity constraints.
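As an editorial illustration of the energy-delay trade-off described above, the sketch below picks the disk power mode that minimizes expected energy over a predicted idle period, subject to a delay budget. The mode parameters, predictor inputs, and names are invented for illustration and are only loosely inspired by the PMD idea, not the dissertation's actual model.

# Hypothetical sketch of an energy-delay mode decision. All mode
# parameters below are invented; the real PMD uses a workload model.
from dataclasses import dataclass

@dataclass
class PowerMode:
    name: str
    idle_power_w: float   # power drawn while parked in this mode
    wakeup_s: float       # latency to return to active service
    transition_j: float   # energy overhead of entering/leaving the mode

MODES = [
    PowerMode("active",  8.0, 0.0, 0.0),
    PowerMode("idle",    4.0, 0.5, 2.0),
    PowerMode("standby", 1.0, 4.0, 10.0),
]

def choose_mode(predicted_idle_s: float, delay_budget_s: float) -> PowerMode:
    """Lowest expected energy over the predicted idle period, among the
    modes whose wakeup latency fits the delay constraint."""
    feasible = [m for m in MODES if m.wakeup_s <= delay_budget_s]
    return min(feasible,
               key=lambda m: m.idle_power_w * predicted_idle_s + m.transition_j)

print(choose_mode(30.0, 1.0).name)  # 'idle': standby wakes up too slowly
print(choose_mode(30.0, 5.0).name)  # 'standby': deeper sleep now feasible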
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Hwang, Kai (Committee Member), Golubchik, Leana (Committee Member).
Subjects/Keywords: energy efficiency; video servers; parallel storage systems
APA (6th Edition):
Yuan, H. (2015). Modeling and optimization of energy-efficient and
delay-constrained video sharing servers. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/370334/rec/4124
Chicago Manual of Style (16th Edition):
Yuan, Hang. “Modeling and optimization of energy-efficient and
delay-constrained video sharing servers.” 2015. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/370334/rec/4124.
MLA Handbook (7th Edition):
Yuan, Hang. “Modeling and optimization of energy-efficient and
delay-constrained video sharing servers.” 2015. Web. 07 Mar 2021.
Vancouver:
Yuan H. Modeling and optimization of energy-efficient and
delay-constrained video sharing servers. [Internet] [Doctoral dissertation]. University of Southern California; 2015. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/370334/rec/4124.
Council of Science Editors:
Yuan H. Modeling and optimization of energy-efficient and
delay-constrained video sharing servers. [Doctoral Dissertation]. University of Southern California; 2015. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/370334/rec/4124

University of Southern California
24.
Chiang, Pei-Ying.
Feature-preserving simplification and sketch-based creation
of 3D models.
Degree: PhD, Computer Science, 2011, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/616942/rec/2788
A prototype of an innovative 3D thumbnail system for
managing large 3D mesh databases is presented in this research. The
goal is to provide an online 3D model exhibit page where the user
can browse multiple 3D thumbnails interactively and efficiently. ❧
An overall system framework for a large-scale 3D repository is
described. It includes an offline process and an online process.
For the offline process, a 3D mesh is first decomposed into several
significant components. For each decomposed part, its skeleton and
body measurements are extracted and saved as the shape descriptor.
Subsequently, its thumbnail is created according to the shape
descriptor and saved as the thumbnail descriptor. In the online
process, according to the user's preference, the system can either
render the 3D thumbnail directly with its pre-generated thumbnail
descriptor or re-generate the 3D thumbnail descriptor based on a
pre-generated shape descriptor without starting from scratch.
As a result, a thumbnail descriptor is much smaller than its
original mesh and can be downloaded quickly. Rendering a
simplified thumbnail demands fewer hardware resources, and the online
thumbnail viewer can display multiple 3D thumbnails simultaneously
within a few seconds. ❧ ❧ Furthermore, we develop two
feature-preserving thumbnail creation techniques. They are the
surface-based and the voxel-based methods. For the surface-based
technique, a 3D polygonal mesh is decomposed by a visual
salience-guided mesh decomposition approach that identifies and
preserves significant components. For each decomposed part, its
skeleton and body measurements are extracted after the PCA
transformation. Then, a coarse-to-fine primitive approximation
algorithm is used to create the 3D thumbnail. Moreover, a
customized deformable primitive, called the d-cylinder, is designed
for approximating the shape better and refining the appearance of
the resultant thumbnail. We generate the 3D thumbnail with
different numbers of d-cylinders so that the thumbnail can
represent a simplified mesh at different levels of detail. The
processing time of each stage and the file size of the 3D
thumbnail descriptor are given to show the efficiency of the
surface-based approach. ❧ ❧ In the voxel-based approach, a
polygonal model is first rasterized into a volumetric model and a
coarse skeleton is extracted with a thinning operation. The
skeleton derived from the thinning process is further refined to
meet the required accuracy. Subsequently, the skeleton is
classified into significant groups, and the volumetric model is
decomposed into significant parts accordingly. As compared with the
surface-based approach, the voxel-based approach can preserve more
features of the model and decompose the model more precisely. Thus,
the significant components of the original model can be preserved
better in the 3D thumbnails while the model is extremely
simplified. A thorough performance comparison between the
surface-based and the voxel-based techniques is conducted. ❧ ❧
Finally, the…
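The surface-based pipeline above (PCA transformation, then primitive approximation per decomposed part) can be sketched as follows; the synthetic point set and the crude single-cylinder readout are stand-ins for the thesis's coarse-to-fine d-cylinder fitting.

# Illustrative sketch, not the dissertation's code: PCA-align a
# decomposed part's points, then read a rough cylinder off the axis
# extent and mean radius. The point set is synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "decomposed part": noisy points on a cylinder of radius 1.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
z = rng.uniform(-2.0, 2.0, 500)
part = np.column_stack([np.cos(t), np.sin(t), z]) + rng.normal(0, 0.02, (500, 3))

# PCA: the dominant principal direction is the part's main axis.
centered = part - part.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
aligned = centered @ vt.T          # coordinates in the PCA frame

length = np.ptp(aligned[:, 0])     # extent along the main axis
radius = np.linalg.norm(aligned[:, 1:], axis=1).mean()  # mean axis distance
print(f"cylinder approximation: length={length:.2f}, radius={radius:.2f}")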
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Nakano, Aiichiro (Committee Member), Jenkins, B. Keith (Committee Member).
Subjects/Keywords: Voxel-based shape decomposition; volumetric shape representation; primitive approximation; skeleton extraction; skeletonization; sketch-based 3D modeling
APA (6th Edition):
Chiang, P. (2011). Feature-preserving simplification and sketch-based creation
of 3D models. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/616942/rec/2788
Chicago Manual of Style (16th Edition):
Chiang, Pei-Ying. “Feature-preserving simplification and sketch-based creation
of 3D models.” 2011. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/616942/rec/2788.
MLA Handbook (7th Edition):
Chiang, Pei-Ying. “Feature-preserving simplification and sketch-based creation
of 3D models.” 2011. Web. 07 Mar 2021.
Vancouver:
Chiang P. Feature-preserving simplification and sketch-based creation
of 3D models. [Internet] [Doctoral dissertation]. University of Southern California; 2011. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/616942/rec/2788.
Council of Science Editors:
Chiang P. Feature-preserving simplification and sketch-based creation
of 3D models. [Doctoral Dissertation]. University of Southern California; 2011. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/616942/rec/2788

University of Southern California
25.
Lee, Sang Yun.
A notation for rapid specification of information
visualization.
Degree: PhD, Computer Science, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/131364/rec/280
This thesis describes a system of notation for the
rapid specification of data visualization and its applications at a
conceptual level. The system can be used as a theoretical framework
integrating various types of data visualization. The proposed
notation codifies the major characteristics of data/visual
structures in conventional visualizations used in business and
statistics domains. It consists of unary and binary operators that
can be combined to represent a visualization. Each operator is
divided into two major components: data manipulation and conceptual
representation. The data manipulation consists of internal data
operations required to visualize data, and the conceptual
representation part regulates the meaning of the data in a
visualization. ❧ Capturing the structural features of a
visualization, our notation can express data at an abstract level
and be applied to match or compare two visualizations. The
integration of data visualization into a single framework is an
unresolved problem in the data visualization community. The major
contribution of this work lies in formalizing the notation and its
operator rules in a limited context. Our notation does not cover
all types of visualization. Instead, it is limited to visualization
types that have expressible data characteristics in the context of
business and statistics domains. Instead of giving a complete
description of a visualization, the proposed notation is designed
as a high-level abstraction for the rapid specification of a
visualization. Thus, it provides a descriptive, rather than a
generative, notation. ❧ The focus of this thesis is the development
of the notation. First, the design of the major operators is
discussed as we present their underlying concepts and define rules
of operator equivalence and transformation. Second, to evaluate how
expressive the notation is, we explore some commonly-used data
visualizations. Finally, to demonstrate the usefulness of the
notation, we consider two possible applications: similarity
measurement and alternative visualization generation. In the
similarity measurement, two given visualizations are converted into
operator-based notation strings in a full binary tree format and
compared in terms of the Levenshtein Edit Distance. In the
alternative visualization generation, a transformation mechanism is
developed for two given source and target notation expressions, and
alternative visualizations are generated for the source expression.
❧ The benefits of our approach are as follows: First, because the
notation is a high-level abstraction of a visualization, it can
focus on a user's conceptual intention better than a detailed
description of a visualization. Second, the operators define a set
of required capabilities on which a visualization system can be
organized. Thus, the notation can be used to design a system that
interconnects various data visualization tools by sending and
receiving visualization requests between them. Third, it can be
used to compare visualizations or to find/generate similar…
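The similarity-measurement application invites a minimal sketch: serialize each visualization as an operator string and compare the strings with Levenshtein edit distance. The operator symbols below are placeholders, not the thesis's actual notation.

# Minimal sketch of the similarity-measurement idea. The operator
# strings are invented placeholders for the notation expressions.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

bar_chart = "B(x,y)"    # placeholder operator string for one visualization
line_chart = "L(x,y)"   # placeholder operator string for another
print(levenshtein(bar_chart, line_chart))  # 1: a single operator substitution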
Advisors/Committee Members: Neumann, Ulrich (Committee Chair), Szekely, Pedro (Committee Member), Kuo, C.-C. Jay (Committee Member).
Subjects/Keywords: data visualization; information visualization; information visualization notation; visualization model; visualization alternatives
APA (6th Edition):
Lee, S. Y. (2013). A notation for rapid specification of information
visualization. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/131364/rec/280
Chicago Manual of Style (16th Edition):
Lee, Sang Yun. “A notation for rapid specification of information
visualization.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/131364/rec/280.
MLA Handbook (7th Edition):
Lee, Sang Yun. “A notation for rapid specification of information
visualization.” 2013. Web. 07 Mar 2021.
Vancouver:
Lee SY. A notation for rapid specification of information
visualization. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/131364/rec/280.
Council of Science Editors:
Lee SY. A notation for rapid specification of information
visualization. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/131364/rec/280

University of Southern California
26.
Zhou, Qian-Yi.
3D urban modeling from city-scale aerial LiDAR data.
Degree: PhD, Computer Science, 2012, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/70401/rec/19
3D reconstruction from point clouds is a fundamental
problem in both computer vision and computer graphics. As urban
modeling is an important reconstruction problem that has various
significant applications, this thesis investigates the complex
problem of reconstructing 3D urban models from aerial LiDAR (Light
Detection And Ranging) point clouds. ❧ In the first part of this
thesis, an automatic urban modeling system is proposed which
consists of three modules: (1) the classification module classifies
input points into trees and buildings; (2) the segmentation module
splits building points into different roof patches; (3) the
modeling module creates building models, ground, and trees from
point patches respectively. In order to support city-scale data
sets, this pipeline is extended into an out-of-core streaming
framework. By storing data as stream files on hard disks and using
main memory only as temporary storage for ongoing computation, an
efficient out-of-core data management is achieved. City-scale urban
models are successfully created from billions of points with
limited computing resource. ❧ The second part of this thesis
explores the 2.5D nature of building structures. The 2.5D
characteristic of building models is observed and formally defined
as ""building structures are always composed of complex roofs and
vertical walls"". Based on this observation, a 2.5D geometry
representation is developed for the building structures, and used
to extend a classic volumetric modeling approach into a 2.5D
method, named 2.5D dual contouring. This algorithm can generate
building models with arbitrarily shaped roofs while keeping the
verticality of the walls. This research next studies the topology
of 2.5D building structures. 2.5D building topology is formally
defined as a set of roof features, wall features, and point
features, together with the associations between them. Based on
this research, the topology restrictions in 2.5D dual contouring
are relaxed. The resulting model contains far fewer triangles with
similar visual quality. To further capture the global regularities
that intrinsically exist in building models because of human design
and construction, a broad variety of global regularity patterns
between 2.5D building elements are explored. An automatic algorithm
is proposed to discover and enforce global regularities through a
series of alignment steps, resulting in 2.5D building models with
high quality in terms of both geometry and human judgement.
Finally, the 2.5D characteristic of building structures is adopted
to aid 3D reconstruction of residential urban areas: a more
powerful classification algorithm is developed which adopts an
energy minimization scheme based on the 2.5D characteristic of
building structures. ❧ This thesis demonstrates the effectiveness
of all the algorithms on a range of urban area scans from different
cities, with varying sizes, densities, complexity, and
details.
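A toy sketch of the 2.5D characteristic: model a building as a roof height field and emit a strictly vertical wall wherever neighbouring cells differ in roof height. The grid, heights, and quad output are invented; 2.5D dual contouring itself is far more involved.

# Toy 2.5D illustration: roofs as a height field, walls kept vertical.
# The grid and heights are invented for this sketch.
import numpy as np

heights = np.array([
    [0, 0, 0, 0],
    [0, 6, 6, 0],
    [0, 6, 9, 0],   # a taller roof section next to a lower one
    [0, 0, 0, 0],
], dtype=float)

walls = []
rows, cols = heights.shape
for r in range(rows):
    for c in range(cols - 1):
        lo, hi = sorted((heights[r, c], heights[r, c + 1]))
        if hi > lo:  # height jump across the shared edge -> vertical wall
            x = c + 1  # the edge between cell c and cell c+1
            # quad corners (x, y, z), spanning the edge from lo up to hi
            walls.append(((x, r, lo), (x, r + 1, lo), (x, r + 1, hi), (x, r, hi)))
# (the symmetric scan over vertical neighbours is omitted for brevity)
print(f"{len(walls)} vertical wall quads from horizontal neighbours")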
Advisors/Committee Members: Neumann, Ulrich (Committee Chair), Kuo, C.-C. Jay (Committee Member), Barbic, Jernej (Committee Member), You, Suya (Committee Member).
Subjects/Keywords: 2.5D; global regularity; LiDAR; streaming; tree detection; urban modeling
APA (6th Edition):
Zhou, Q. (2012). 3D urban modeling from city-scale aerial LiDAR data. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/70401/rec/19
Chicago Manual of Style (16th Edition):
Zhou, Qian-Yi. “3D urban modeling from city-scale aerial LiDAR data.” 2012. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/70401/rec/19.
MLA Handbook (7th Edition):
Zhou, Qian-Yi. “3D urban modeling from city-scale aerial LiDAR data.” 2012. Web. 07 Mar 2021.
Vancouver:
Zhou Q. 3D urban modeling from city-scale aerial LiDAR data. [Internet] [Doctoral dissertation]. University of Southern California; 2012. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/70401/rec/19.
Council of Science Editors:
Zhou Q. 3D urban modeling from city-scale aerial LiDAR data. [Doctoral Dissertation]. University of Southern California; 2012. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/70401/rec/19

University of Southern California
27.
Li, Ming.
Representation, classification and information fusion for
robust and efficient multimodal human states recognition.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/317515/rec/5540
The goal of this work is to enhance the robustness and
efficiency of the multimodal human states recognition task. Human
states recognition can be considered as a joint term for
identifying/verifying various kinds of human-related states, such as
biometric identity, language spoken, age, gender, emotion,
intoxication level, physical activity, vocal tract patterns, ECG QT
intervals and so on. I performed research on the aforementioned
states recognition problems, and my focus is to increase
performance while reducing the computational cost. ❧ I start by
extending the well known total variability i-vector modeling (a
factor analysis on the concatenated GMM mean supervectors) to the
simplified supervised i-vector modeling to enhance the robustness
and efficiency. First, by concatenating the label vector and the
linear classifier matrix at the end of the mean supervector and the
i-vector factor loading matrix, respectively, the traditional
i-vectors are extended to the label regularized supervised
i-vectors. These supervised i-vectors are optimized not only to
reconstruct the mean supervectors well but also to minimize the mean
square error between the original and the reconstructed label
vectors, which makes the supervised i-vectors more discriminative
in terms of the regularized label information. Second, I perform
the factor analysis (FA) on the pre-normalized GMM first order
statistics supervector to ensure each Gaussian component's
statistics sub-vector is treated equally in the FA, which reduces
the computational cost by a factor of 25. Since there is only one
global total frame number in the equation, I make a global table of
the resulting matrices against its log value. By looking up the
table, the computational cost of each utterance's i-vector
extraction is further reduced by 4 times with small quantization
error. I demonstrate the utility of the simplified supervised
i-vector representation on both the language identification (LID)
and speaker verification (SRE) tasks, achieving comparable or better
performance with significant computational cost reduction. ❧
Inspired by the recent success of sparse representation on face
recognition, I explored the possibility of adopting sparse
representation for both representation and classification in this
multimodal human states recognition problem. For classification
purpose, a sparse representation computed by l1-minimization (to
approximate the l0 minimization) with quadratic constraints was
proposed to replace the SVM on the GMM mean supervectors, and by
fusing the sparse representation based classification (SRC) method
with SVM, the overall system performance was improved. Second, by
adding a redundant identity matrix at the end of the original
over-complete dictionary, the sparse representation is made more
robust to variability and noise. Third, both the l1 norm ratio and
the background normalized (BNorm) l2 residual ratio are used and
shown to outperform the conventional l2 residual ratio in the
verification task. I showed the usage of SRC on GMM mean
supervectors, total…
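A hedged sketch of the SRC idea follows, with scikit-learn's Lasso standing in for the l1-minimization with quadratic constraints, and classification by class-wise l2 reconstruction residuals. The dictionary, labels, and test vector are all synthetic.

# Sketch of sparse-representation classification (SRC) on synthetic
# data; Lasso is an l1-regularized stand-in for constrained l1-min.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
dim, per_class, classes = 64, 10, 3
# Dictionary D: columns are training vectors, grouped by class.
prototypes = rng.normal(size=(classes, dim))
D = np.column_stack([prototypes[k] + 0.1 * rng.normal(size=dim)
                     for k in range(classes) for _ in range(per_class)])
labels = np.repeat(np.arange(classes), per_class)

y = prototypes[2] + 0.1 * rng.normal(size=dim)   # test vector from class 2

# Sparse code of y over the dictionary.
x = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, y).coef_

# Classify by the class whose atoms reconstruct y with the smallest l2
# residual (the thesis also studies l1-norm and BNorm residual ratios).
residuals = [np.linalg.norm(y - D[:, labels == k] @ x[labels == k])
             for k in range(classes)]
print("predicted class:", int(np.argmin(residuals)))  # 2 for this setup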
Advisors/Committee Members: Narayanan, Shrikanth S. (Committee Chair), Kuo, C.-C. Jay (Committee Member), Ortega, Antonio K. (Committee Member), Sha, Fei (Committee Member).
Subjects/Keywords: human state characterization; speaker verification; language identification; multimodal biometrics; emotion recognition; simplified supervised i-vector; sparse representation; physical activity recognition; ECG processing; speech production; articulation; vocal tract morphology
APA (6th Edition):
Li, M. (2013). Representation, classification and information fusion for
robust and efficient multimodal human states recognition. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/317515/rec/5540
Chicago Manual of Style (16th Edition):
Li, Ming. “Representation, classification and information fusion for
robust and efficient multimodal human states recognition.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/317515/rec/5540.
MLA Handbook (7th Edition):
Li, Ming. “Representation, classification and information fusion for
robust and efficient multimodal human states recognition.” 2013. Web. 07 Mar 2021.
Vancouver:
Li M. Representation, classification and information fusion for
robust and efficient multimodal human states recognition. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/317515/rec/5540.
Council of Science Editors:
Li M. Representation, classification and information fusion for
robust and efficient multimodal human states recognition. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/317515/rec/5540

University of Southern California
28.
Ren, Yuzhuo.
Techniques for vanishing point detection.
Degree: MS, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/245264/rec/6377
Automatic vanishing point detection is an important
problem in computer vision since it has many applications such as
road navigation, 3D scene reconstruction and camera calibration.
Accurate detection of the vanishing point location facilitates the
solution of these related problems. For a given image, this
research attempts to answer the following two questions: 1) whether
there is a vanishing point in this image; and 2) if there are
vanishing points, where they are located. To address the first
question, we apply a machine learning approach. First, we construct
a database containing a wide variety of images and use it to train
a model to determine whether there is a vanishing point in a test
image. The two features used in this training and test process are
the angular histogram and the defocus degree. Furthermore, we adopt
the AdaBoost algorithm for incremental learning to increase
classification accuracy. To address the second problem, we
implement and improve one algorithm for vanishing point location
estimation, and compare its performance with another algorithm
based on the J-linkage model. Finally, concluding remarks and
future research directions are discussed.
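The first stage, deciding whether a vanishing point exists, reduces to a small two-feature classifier; the sketch below boosts decision stumps with AdaBoost over synthetic stand-ins for the angular histogram and defocus features.

# Sketch of the presence classifier; feature values are synthetic
# stand-ins, not real edge-orientation or defocus measurements.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(2)
n = 200
# Column 0: peakedness of the angular histogram of edge orientations.
# Column 1: global defocus degree.
has_vp = np.column_stack([rng.normal(0.8, 0.1, n), rng.normal(0.2, 0.1, n)])
no_vp = np.column_stack([rng.normal(0.4, 0.1, n), rng.normal(0.5, 0.1, n)])
X = np.vstack([has_vp, no_vp])
y = np.array([1] * n + [0] * n)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("P(vanishing point):", clf.predict_proba([[0.75, 0.25]])[0, 1])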
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Ortega, Antonio K. (Committee Member), Sawchuk, Alexander A. (Sandy) (Committee Member).
Subjects/Keywords: computer vision; image processing; machine learning; SVM; vanishing point
APA (6th Edition):
Ren, Y. (2013). Techniques for vanishing point detection. (Masters Thesis). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/245264/rec/6377
Chicago Manual of Style (16th Edition):
Ren, Yuzhuo. “Techniques for vanishing point detection.” 2013. Masters Thesis, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/245264/rec/6377.
MLA Handbook (7th Edition):
Ren, Yuzhuo. “Techniques for vanishing point detection.” 2013. Web. 07 Mar 2021.
Vancouver:
Ren Y. Techniques for vanishing point detection. [Internet] [Masters thesis]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/245264/rec/6377.
Council of Science Editors:
Ren Y. Techniques for vanishing point detection. [Masters Thesis]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/245264/rec/6377

University of Southern California
29.
Kang, Dongwoo.
Advanced coronary CT angiography image processing
techniques.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/282774/rec/541
Computer-aided cardiac image analysis obtained by
various modalities plays an important role in the early diagnosis
and treatment of cardiovascular disease. Numerous computerized
methods have been developed to tackle this problem. Recent studies
employ sophisticated techniques using available cues from cardiac
anatomy such as geometry, visual appearance, and prior knowledge.
In particular, visual analysis of three-dimensional (3D) coronary
computed tomography angiography (CCTA) remains challenging due to
the large number of image slices and the tortuous character of the vessels.
In this thesis, we focus on cardiac applications associated with
coronary artery disease and cardiac arrhythmias, and study the
related computer-aided diagnosis problems from computed tomography
angiography (CCTA). First, in Chapter 2, we provide an overview of
cardiac segmentation techniques in all kinds of cardiac image
modalities, with the goal of providing useful advice and references.
In addition, we describe important clinical applications, imaging
modalities, and validation methods used for cardiac segmentation. ❧
In Chapter 3, we propose a robust, automated algorithm for
unsupervised computer detection of coronary artery lesions from
CCTA. Our knowledge-based algorithm consists of centerline
extraction, vessel classification, vessel linearization, lumen
segmentation with scan-specific lumen attenuation ranges, and
lesion location detection. Presence and location of lesions are
identified using a multi-pass algorithm which considers expected or
"normal" vessel tapering and luminal stenosis from the segmented
vessel. Expected luminal diameter is derived from the scan by
automated piecewise least squares line fitting over proximal and
mid segments (67%) of the coronary artery considering the locations
of the small branches attached to the main coronary arteries. We
applied this algorithm to 42 CCTA patient datasets, acquired with
dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%.
The reference standard was provided by visual and quantitative
identification of lesions with any stenosis ≥25% by 3 expert
observers using consensus reading. Our algorithm identified 43
lesions (93%) confirmed by the expert observers. There were 46
additional lesions detected; 23 out of 46 (50%) of these were
less-stenosed lesions. When the artery was divided into 15 coronary
segments according to standard cardiology reporting guidelines,
per-segment sensitivity was 93% and per-segment specificity was
81%. Our algorithm shows promising results in the detection of
obstructive and nonobstructive CCTA lesions. ❧ In Chapter 4, we
propose a novel low-radiation dose CCTA denoising algorithm. Our
aim in this study was to optimize and validate an adaptive
denoising algorithm based on Block-Matching 3D for reducing image
noise and improving left ventricular assessment in low-radiation
dose CCTA. In this study, we describe the denoising algorithm and
its validation, with low-radiation dose coronary CTA datasets from
7 consecutive patients. We validated the…
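The expected-diameter idea in Chapter 3 can be illustrated with a toy fit: a least-squares line through luminal diameter along the centerline approximates normal vessel taper, and stenosis is scored as the fractional drop below that line. The measurements below are invented, and the actual method is piecewise and branch-aware.

# Toy illustration of stenosis scoring against a fitted taper line.
# All measurements are synthetic.
import numpy as np

pos = np.linspace(0, 60, 13)                 # mm along the centerline
diam = 4.0 - 0.03 * pos                      # smoothly tapering lumen (mm)
diam[6] = 2.2                                # a focal narrowing

slope, intercept = np.polyfit(pos, diam, 1)  # least-squares taper line
expected = slope * pos + intercept           # "normal" expected diameter
stenosis_pct = 100 * (expected - diam) / expected

i = int(np.argmax(stenosis_pct))
print(f"lesion at {pos[i]:.0f} mm, stenosis ~{stenosis_pct[i]:.0f}%")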
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Leahy, Richard (Committee Member), Shung, K. Kirk (Committee Member), Nayak, Krishna (Committee Member).
Subjects/Keywords: coronary CT angiography; image processing; computer-aided diagnosis; machine learning; coronary arterial lesion detection; low-radiation dose coronary CT angiography; image denoising
APA (6th Edition):
Kang, D. (2013). Advanced coronary CT angiography image processing
techniques. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/282774/rec/541
Chicago Manual of Style (16th Edition):
Kang, Dongwoo. “Advanced coronary CT angiography image processing
techniques.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/282774/rec/541.
MLA Handbook (7th Edition):
Kang, Dongwoo. “Advanced coronary CT angiography image processing
techniques.” 2013. Web. 07 Mar 2021.
Vancouver:
Kang D. Advanced coronary CT angiography image processing
techniques. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/282774/rec/541.
Council of Science Editors:
Kang D. Advanced coronary CT angiography image processing
techniques. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/282774/rec/541

University of Southern California
30.
Kang, Je-Won.
Efficient coding techniques for high definition
video.
Degree: PhD, Electrical Engineering, 2013, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/45608/rec/2243
High definition (HD) video content has become popular, and
displays of higher resolution such as ultra definition have emerged
in recent years. The conventional video coding standards offer
excellent coding performance at lower bit-rates. However, their
coding performance for HD video content is not as efficient. The
objective of this research is to develop a set of efficient coding
tools or techniques to offer a better coding gain for HD video. The
following three techniques are studied in this work. ❧ First, we
present a joint first-order-residual/second-order-residual
(FOR/SOR) coding technique. The FOR/SOR algorithm, which incorporates
a few advanced coding techniques, is proposed for HD video coding.
For the FOR coder, the block-based prediction is used to exploit
both temporal and spatial correlation in an original frame surface
for coding efficiency. However, there still exists structural noise
in the prediction residuals. We design an efficient SOR coder to
encode the residual image. Block-adaptive bit allocation between
the FOR and the SOR coders is developed to enhance the coding
performance, which corresponds to selecting two different
quantization parameters in the FOR and the SOR coders in different
spatial regions. It is shown by experimental results that the
proposed FOR/SOR coding algorithm outperforms H.264/AVC
significantly in HD video coding with an average bit-rate saving
of 15.6%. ❧ Second, we develop two advanced processing techniques,
which are referred to as two-layered transform with sparse
representation (TTSR) and slant residual shift (SRS), for
prediction residuals so as to improve coding efficiency. Prediction
residuals often show a non-stationary property, and the DCT becomes
sub-optimal and yields undesired artifacts. The proposed TTSR
algorithm makes use of sparse representation and is targeted toward
the state-of-the-art video coding standard, High Efficiency Video
Coding (HEVC), in this work. A dictionary is adaptively trained to
contain featured patterns of residual signals so that a high
portion of the energy in a structured residual can be efficiently
coded with sparse coding. Then the DCT is applied in cascade to
the remaining signal after sparse coding. The use of
multiple representations is justified with an R-D analysis, and the
two transforms successfully complement each other. The SRS
technique aligns dominant prediction residuals of
inter-predicted frames with the horizontal or the vertical
direction via row-wise or column-wise circular shift before the 2-D
DCT. To determine the proper shift of pixels, we classify blocks
into several types, each of which is assigned an index number.
Then, these indices are sent to the decoder as signaling flags,
which can be viewed as the mode information of the SRS technique.
It is demonstrated by experimental results that the proposed
algorithm outperforms the HEVC. ❧ Third, we make a contribution to
the HEVC with several efficient coding tools incorporated into the
two context-adaptive entropy coding schemes, i.e., Context Adaptive…
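The SRS step lends itself to a short sketch: circularly shift each row of a residual block so a slanted edge lines up vertically, which compacts the 2-D DCT energy. The residual block and the shift rule below are invented for illustration.

# Illustration of slant residual shift (SRS): row-wise circular
# shifts align a diagonal residual edge before the 2-D DCT.
import numpy as np
from scipy.fft import dctn

block = np.zeros((8, 8))
for r in range(8):
    block[r, (r + 2) % 8] = 1.0         # a slanted (diagonal) residual edge

aligned = np.empty_like(block)
for r in range(8):
    aligned[r] = np.roll(block[r], -r)  # shift each row by the edge slope

def nonzero_dct_coeffs(b, tol=1e-9):
    """Count DCT coefficients carrying energy above a tolerance."""
    return int(np.sum(np.abs(dctn(b, norm="ortho")) > tol))

print("DCT coeffs before shift:", nonzero_dct_coeffs(block))    # spread out
print("DCT coeffs after shift: ", nonzero_dct_coeffs(aligned))  # compacted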
Advisors/Committee Members: Kuo, C.-C. Jay (Committee Chair), Ortega, Antonio K. (Committee Member), Gabbouj, Moncef (Committee Member), Neumann, Ulrich (Committee Member).
Subjects/Keywords: high definition video coding; data compression; multimedia signal processing; sparse representation; transform; entropy coding; H.264/AVC; HEVC
APA (6th Edition):
Kang, J. (2013). Efficient coding techniques for high definition
video. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/45608/rec/2243
Chicago Manual of Style (16th Edition):
Kang, Je-Won. “Efficient coding techniques for high definition
video.” 2013. Doctoral Dissertation, University of Southern California. Accessed March 07, 2021.
http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/45608/rec/2243.
MLA Handbook (7th Edition):
Kang, Je-Won. “Efficient coding techniques for high definition
video.” 2013. Web. 07 Mar 2021.
Vancouver:
Kang J. Efficient coding techniques for high definition
video. [Internet] [Doctoral dissertation]. University of Southern California; 2013. [cited 2021 Mar 07].
Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/45608/rec/2243.
Council of Science Editors:
Kang J. Efficient coding techniques for high definition
video. [Doctoral Dissertation]. University of Southern California; 2013. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll3/id/45608/rec/2243