You searched for +publisher:"The Ohio State University" +contributor:("Yilmaz, Alper")
Showing records 1 – 26 of 26 total matches.
No search limiters apply to these results.

The Ohio State University
1.
Kim, Rhae Sung.
Spectral Matching using Bitmap Indices of Spectral Derivatives for the Analysis of Hyperspectral Imagery.
Degree: MS, Geodetic Science and Surveying, 2011, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1293667753
Hyperspectral imaging data, such as that recorded by AVIRIS, provides rich information on ground cover materials, but the large number of spectral bands seriously complicates its use for classification. A rapid and feasible spectral matching and mapping algorithm is always desirable, and improving processing speed and reducing memory requirements are practical necessities for analysis. In this thesis, the spectral derivative is used to develop a spectral matching algorithm for the analysis of hyperspectral imaging data, providing surface mineralogical information from the spectral shape. In particular, our spectral matching and mapping algorithm operates on each pixel in an image and uses whole, spectrally contiguous spectra without any loss of spectral bands. Moreover, unsupervised direct spectral matching between unknown observed spectra and reference library spectra is made possible by analyzing the spectral derivative. To develop a fast and efficient spectral matching and mapping algorithm, a new bitmap indexing method is proposed, and a simple bitwise operation is applied to find spectral similarity. Bitmap indexing of spectral derivatives reduces the large volume of hyperspectral imaging data and enables fast library searching, decreasing the time required for overall data processing. AVIRIS data acquired in 1995 over the Cuprite (NV, USA) mining area is used to demonstrate the developed algorithm; compared with two reference mineral maps, it generates a satisfactory and reliable mineral mapping result.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Remote Sensing
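The bitmap-indexing idea in this abstract can be sketched in a few lines: encode the sign of the band-to-band spectral derivative as a bit string, then score shape similarity with a cheap bitwise XOR. This is an illustrative reconstruction, not the thesis code; the function names and spectra are invented.

```python
import numpy as np

def derivative_bitmap(spectrum):
    """Encode each band-to-band derivative as 1 (rising) or 0 (falling)."""
    d = np.diff(np.asarray(spectrum, dtype=float))
    return (d > 0).astype(np.uint8)

def bitmap_similarity(b1, b2):
    """Fraction of matching derivative signs (1.0 = identical spectral shape)."""
    return 1.0 - np.count_nonzero(b1 ^ b2) / b1.size

pixel = [0.10, 0.35, 0.30, 0.50, 0.45]    # observed spectrum (made up)
library = [0.12, 0.40, 0.33, 0.55, 0.50]  # reference library spectrum (made up)
sim = bitmap_similarity(derivative_bitmap(pixel), derivative_bitmap(library))
```

Because the comparison is a single XOR over packed bits, scanning a large spectral library reduces to bit operations rather than floating-point distance computations, which is the speed-up the abstract describes.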

2.
Park, Kyoung Jin.
Generating Thematic Maps from Hyperspectral Imagery Using a Bag-of-Materials Model.
Degree: PhD, Geodetic Science and Surveying, 2013, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1366296426
Obtaining information about Earth’s surfaces and generating land cover maps are essential to remote sensing research. For that purpose, identifying and classifying the characteristics of pixels are fundamental problems. To obtain a significant amount of information about the Earth’s surface, hyperspectral images taken from high altitude are used. However, exploiting a hyperspectral image is challenging because the spectral dimension is high and the spatial resolution is low. To overcome both the high-dimensionality and low-spatial-resolution problems, I introduce a novel method for robust identification and classification of pixels’ properties based on the Latent Dirichlet Allocation model. The proposed method first analyzes mixture spectral signatures to extract material combinations. These combinations are processed to discover the latent theme for each pixel. This process is governed by a hierarchical Bayesian learning model whose distribution and parameters are estimated using Gibbs sampling. As a result of parameter estimation, each pixel is described using a topic distribution, which provides pixel descriptors for identification and classification tasks. Compared to the original spectral information, these descriptors have significantly reduced dimensionality, yet provide efficient clustering to segment the image. Experimental results show that the proposed method effectively handles identification and classification problems for hyperspectral images.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Remote Sensing; Computer Engineering; Computer Science; Geographic Information Science; Hyperspectral image clustering; probabilistic topic modeling; generative model; latent dirichlet allocation
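The "bag-of-materials" step described above can be illustrated as follows: quantize each pixel spectrum against a small material dictionary (nearest centroid) and count the resulting material "words", producing the document-word vectors a topic model such as LDA would consume. The dictionary, spectra, and names here are fabricated for illustration.

```python
import numpy as np

def quantize(spectra, dictionary):
    """Map each spectrum to the index of its nearest dictionary material."""
    # distances has shape (n_pixels, n_materials)
    d = np.linalg.norm(spectra[:, None, :] - dictionary[None, :, :], axis=2)
    return d.argmin(axis=1)

def bag_of_materials(words, n_materials):
    """Count material occurrences to form the document-word count vector."""
    return np.bincount(words, minlength=n_materials)

dictionary = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])  # 3 materials, 2 bands
patch = np.array([[0.12, 0.88], [0.79, 0.25], [0.11, 0.92], [0.52, 0.48]])
words = quantize(patch, dictionary)
counts = bag_of_materials(words, len(dictionary))
```

Feeding such count vectors into LDA (fit, e.g., by Gibbs sampling as the abstract states) yields the low-dimensional per-pixel topic distributions used as descriptors.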

3.
Zhang, Yujia.
A Structured Light Based 3D Reconstruction Using Combined Circular Phase Shifting Patterns.
Degree: PhD, Geodetic Science, 2019, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1546488253821736
Coded structured light is one of the most effective and reliable techniques for surface reconstruction. With a calibrated projector-camera stereo system, a set of illuminating patterns is projected onto the scene by the projector and the image is captured by the camera. The correspondences between projector and camera frames are calculated in the decoding process and used for triangulation and point cloud generation. The continuous phase shifting method is a well-known fringe projection technique for 3D scanning, in which a set of sinusoidal patterns showing variation of intensity or color is projected onto the surface. This dissertation introduces a novel circular phase shifting method that improves decoding accuracy without trading off processing speed. In the proposed approach, three-step circular phase shifting and absolute patterns are used to increase the multiplicity, since the circular patterns are coded in all directions. Stereo triangulation is more stable and accurate than the line-plane intersection used in single-axis structured light methods because it reduces incorrect decoding errors. Robust Gray patterns are applied to resolve the ambiguity caused by reflectance discontinuities in the phase shifting decoding process.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Earth; Geography; Optics; Structured Light; 3D Reconstruction; Circular Phase Shifting; Projector-Camera Stereo System
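The conventional three-step phase-shifting decode that this work builds on (the standard fringe case, not the proposed circular patterns) recovers the wrapped phase per pixel in closed form from three patterns shifted by 120°. A minimal sketch, with simulated intensities:

```python
import math

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three intensities with shifts -120, 0, +120 degrees."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate one pixel under I_k = A + B*cos(phi + delta_k) with true phase phi.
A, B, phi = 0.5, 0.4, 1.2
deltas = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)
i1, i2, i3 = (A + B * math.cos(phi + d) for d in deltas)
est = wrapped_phase(i1, i2, i3)
```

The recovered value is wrapped to (-pi, pi]; an unwrapping or absolute-coding step (such as the absolute patterns mentioned above) is still needed to disambiguate fringe order.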

4.
Khare, Vinod.
Precise Image Registration and Occlusion Detection.
Degree: MS, Civil Engineering, 2011, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1308246730
Image registration and mosaicking is a fundamental problem in computer vision. The many approaches developed to achieve this end can be broadly divided into two categories: direct methods and feature-based methods. Direct methods work by shifting or warping the images relative to each other and measuring how much the pixels agree. Feature-based methods work by estimating a parametric transformation between two images using point correspondences. In this work, we extend the standard feature-based approach to multiple images and adopt the photogrammetric process to improve the accuracy of registration. In particular, we use a multi-head camera mount providing multiple non-overlapping images per time epoch and use multiple epochs, which increases the number of images to be considered during the estimation process. The existence of a dominant scene plane in 3-space, visible in all the images acquired from the multi-head platform and formulated in a bundle block adjustment framework in image space, provides precise registration between the images. We also develop an appearance-based method for detecting potential occluders in the scene. Our method builds upon existing appearance-based approaches and extends them to multiple views.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Image Registration; Occlusion Detection; Multi-head Cameras; Bundle Adjustment
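The feature-based idea can be sketched with the simplest parametric case: fitting a 2D affine transform to point correspondences by least squares. The thesis uses a full bundle block adjustment, which this toy example does not attempt; the correspondences are invented.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine A such that dst ~= A @ [x, y, 1]^T."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # (n, 3) homogeneous points
    B, *_ = np.linalg.lstsq(X, dst, rcond=None)   # solve X @ B = dst, B is (3, 2)
    return B.T                                     # (2, 3) affine matrix

src = [[0, 0], [1, 0], [0, 1], [1, 1]]
dst = [[2, 3], [3, 3], [2, 4], [3, 4]]  # the points shifted by (2, 3)
A = fit_affine(src, dst)
```

With more images and a shared scene plane, the same least-squares machinery generalizes to the joint adjustment the abstract describes.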

5.
Koroglu, Muhammed Taha.
Multiple Hypothesis Testing Approach to Pedestrian Inertial Navigation with Non-recursive Bayesian Map-matching.
Degree: PhD, Electrical and Computer Engineering, 2020, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1577135195323298
Inertial sensors became wearable with the advances in sensing and computing technologies of the last two decades. Captured motion data can be used to build a pedestrian inertial navigation system (INS); however, the time-variant bias and noise characteristics of low-cost sensors cause severe positioning errors. To overcome the quickly growing errors of the so-called dead-reckoning (DR) solution, this research adopts a pedestrian INS based on a Kalman Filter (KF) with zero-velocity update (ZUPT) aiding. Despite accurate traveled-distance estimates, the obtained trajectories diverge from the actual paths because of heading estimation errors. In the absence of external corrections (e.g., GPS, UWB), map information is commonly employed to eliminate position drift; therefore, the INS solution is fed into a higher-level map-matching filter for further corrections. Unlike common Particle Filter (PF) map-matching, map constraints are implicitly modeled by generating rasterized maps that function as a constant spatial prior in the designed filter, which makes the Bayesian estimation cycle non-recursive. Consequently, the proposed map-matching algorithm does not require the computationally expensive Monte Carlo simulation and wall-crossing check steps of the PF. The second major use of the rasterized maps is to provide probabilities for a self-initialization method referred to as Multiple Hypothesis Testing (MHT). Extracted scores update the hypothesis probabilities dynamically, and the hypothesis with the maximum probability gives the correct initial position and heading. Realistic pedestrian walks include room visits, during which map-matching is deactivated (as the rasterized maps do not model the rooms) and excessive positioning drift consequently occurs. Another MHT approach, further exploiting the introduced maps, is designed to re-activate the map filter at the strides where the pedestrian returns to the hallways after room traversals. Subsequently, the trajectories left behind inside the rooms are heuristically adjusted for the sake of consistency in the overall solution. Various experiments with different unknown initial conditions, longer distances, and short/long room visits were conducted, and representative results are shown to validate the performance of the developed methods. The experimental results show feasible trajectories with negligible return-to-start and stride errors.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Engineering; Electrical Engineering; pedestrian tracking; MEMS; inertial navigation system; INS; inertial measurement unit; IMU; building information model; BIM; map-matching; multiple hypothesis testing; Kalman Filter; Particle Filter; Bayesian Filtering; pedestrian INS, zero velocity; ZUPT
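The ZUPT idea behind the pedestrian INS reduces, in its simplest 1-D form, to integrating acceleration into velocity and clamping the velocity to zero whenever the foot is detected as stationary. A real system fuses this as a pseudo-measurement in a Kalman filter; the threshold and data below are invented for illustration.

```python
def integrate_with_zupt(accels, dt, still_threshold=0.05):
    """1-D dead reckoning with zero-velocity resets at detected stance phases."""
    v, velocities = 0.0, []
    for a in accels:
        v += a * dt                     # integrate acceleration to velocity
        if abs(a) < still_threshold:    # crude stance (zero-velocity) detector
            v = 0.0                     # ZUPT: clamp the drifted velocity
        velocities.append(v)
    return velocities

# Two near-zero samples mark a stance phase; the reset removes integration drift.
accels = [1.2, 0.8, 0.2, 0.01, 0.01, 1.0]
vels = integrate_with_zupt(accels, dt=0.01)
```

Without the reset, any accelerometer bias integrates into an unbounded velocity error; the periodic stance phases of walking are what make ZUPT so effective for pedestrians.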

6.
Al-Shahri, Mohammed.
Line Matching in a Wide-Baseline Stereoview.
Degree: PhD, Geodetic Science and Surveying, 2013, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1376951775
Matching is a fundamental problem in photogrammetry and computer vision, and points and lines are the most common matching primitives. While point-feature matching has received a lot of attention during the last decade, line matching still lacks well-established algorithms. Two different algorithms are proposed in this dissertation: a bottom-up approach, which starts the solution locally and verifies it globally, and a top-down approach, where the solution begins globally and is confirmed through local constraints. In this dissertation, we attempt to develop a new, reliable line matching algorithm across multiple images. The bottom-up approach uses the configuration of different sets of lines in both views, referred to here as a mesh, and establishes putative correspondences by evaluating the geometric errors between two different meshes across pairs of views. The proposed top-down algorithm exploits the epipolar geometry and coplanarity constraints. Generally speaking, the epipolar geometry is not directly suitable for the line matching problem due to the inconsistency between the endpoints of corresponding lines. In the proposed algorithm, however, we use it as a global geometric constraint to guide line matching, based on the fact that intersections of coplanar lines are preserved across images. This observation is used to obtain a set of candidate correspondences. The candidate coplanarities are then verified via local homographies derived from neighboring point correspondences. The proposed line-matching methods rely only on geometric relations and do not use appearance during the matching process. While the first approach showed poor matching results, the second showed high matching performance under different viewpoint changes. A successful comparison with state-of-the-art methods showed the effectiveness of the proposed method on different datasets.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Engineering; Computer Science; Civil Engineering; line matching; wide-baseline; epipolar geometry; coplanarity; image features; image registration
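The epipolar screening step in the top-down matcher rests on a point test: the intersection of two coplanar scene lines is a point, so a candidate correspondence can be checked by the distance from the matched point to its epipolar line under a fundamental matrix F. The F below (a rectified stereo pair, where corresponding points share a row) and the points are fabricated purely to show the test.

```python
import numpy as np

def epipolar_distance(F, x, x_prime):
    """Distance from x' to the epipolar line l' = F x (homogeneous points)."""
    l = F @ x
    return abs(l @ x_prime) / np.hypot(l[0], l[1])

F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0.0]])  # rectified stereo pair
x = np.array([100.0, 50.0, 1.0])     # line-intersection point in the left image
good = np.array([120.0, 50.0, 1.0])  # same row -> epipolar-consistent candidate
bad = np.array([120.0, 80.0, 1.0])   # off the epipolar line -> reject
d_good = epipolar_distance(F, x, good)
d_bad = epipolar_distance(F, x, bad)
```

Thresholding this distance prunes geometrically impossible line pairs before the local-homography verification described above.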

7.
Lee, Young Jin.
Real-Time Object Motion and 3D Localization from Geometry.
Degree: PhD, Geodetic Science and Surveying, 2014, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1408443773
Knowing the position of an object in real time has tremendous value. The most widely used and well-known positioning system is GPS (Global Positioning System), which now serves as invisible infrastructure. However, GPS is only available outdoors; GPS signals are not available in most indoor scenarios. Although much research has focused on vision-based indoor positioning, it remains a challenging problem because of limitations in both the vision sensor itself and processing power. This dissertation focuses on real-time 3D positioning of a moving object using multiple static cameras. A real-time multiple-static-camera system for object detection, tracking, and 3D positioning, running on a single laptop computer, was designed and implemented. The system achieves better than ±5 mm real-time 3D positioning accuracy at an update rate of 6 Hz to 10 Hz in a room measuring 8×5×2.5 meters. Implementation and experimental analysis demonstrated that this system can be used for real-time indoor object positioning. In addition, 'collinearity condition equations of motion' were derived that represent the geometric relationship between 2D motions and 3D motion. From these equations, a 'tracking from geometry' method was developed that combines the collinearity condition equations of motion with an existing tracking method to simultaneously estimate 3D motion as well as 2D motions directly from the stereo camera system. A stereo camera system was built to test the proposed methods. Experiments with real-time image sequences showed that the proposed method provides accurate 3D motion results. The calculated 3D positions were compared with the results from an existing 2D tracking method that uses space intersection; the differences between the two methods were less than ±0.01 mm in all of the X, Y, and Z directions. The advantage of the tracking from geometry method is that it calculates 2D motions and 3D motion simultaneously, while other tracking methods need an additional triangulation step to estimate 3D positions.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Geographic Information Science; 3D positioning; 3D tracking; object detection; multiple cameras; motion from geometry; tracking from geometry; real-time 3D positioning; multiple camera tracking
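The triangulation step that conventional multi-camera tracking relies on (and that the 'tracking from geometry' method folds into the estimation) can be sketched with the classic linear (DLT) solve, two rows per camera. The toy projection matrices and point are invented; a real system would use calibrated cameras.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen at pixels x1, x2 = (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],   # u1 * row3 - row1 of camera 1
        x1[1] * P1[2] - P1[1],   # v1 * row3 - row2 of camera 1
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null space of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1-unit baseline along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free observations the null vector of A recovers the 3D point exactly, which makes this a convenient sanity check for a calibrated rig.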

8.
Hosseinyalamdary, Saivash.
Traffic Scene Perception using Multiple Sensors for Vehicular Safety Purposes.
Degree: PhD, Civil Engineering, 2016, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1462803166
Autonomous driving is an emerging technology that can prevent accidents on the road in the future. It faces many challenges, however, because of varying environmental conditions and the limitations of sensors. In this dissertation, we study the integration of multiple sensors to overcome their limitations and reliably perform the missions enabling autonomous driving. The laser scanner point cloud is a rich source of information but suffers from low resolution, especially for distant objects. We generalize 2D super-resolution approaches from image processing to 3D point clouds. Two variants of 3D super-resolution are developed: in the first, the dense point cloud is generated so that it follows the geometry of the original point cloud; in the second, the brightness of the images is utilized to generate the dense point cloud. The results show that our proposed approach successfully improves the density of the point cloud, preserves the edges and corners of objects, and provides a more realistic dense point cloud of objects than existing surface reconstruction approaches. Static and moving objects must be detected on the road, the moving objects must be tracked, and the trajectory of the platform must be designed to avoid accidents. The densified point clouds are integrated with other sources of information, including the GPS/IMU navigation solution and GIS maps, to detect objects on the road and track the moving ones. The results show that static and moving objects are detected, the moving objects are accurately tracked, and their pose is estimated. In addition to obstacle avoidance, autonomous vehicles must detect and obey the traffic lights and signs on the road. Due to the variations in traffic lights, we propose a Bayesian statistical approach to detect them. A spatio-temporal consistency constraint is applied to provide coherent traffic light detection in space and time. In addition, conic section geometry is utilized to estimate the position of the traffic lights with respect to the camera mounted on the platform. The proposed traffic light detection approach is evaluated using the Karlsruhe Institute of Technology (KITTI) and La Route Automatise (LARA) benchmarks. It achieves a 98.7% precision rate and a 94.7% recall rate on the LARA benchmark, outperforming the existing traffic light detection approaches tested on that benchmark. In conclusion, we integrate multiple sensors to overcome their individual shortcomings, such as the low resolution of point clouds, and propose obstacle avoidance and traffic light detection approaches based on the integrated sensors. Our results outperform earlier studies in traffic light detection and provide more realistic surfaces in 3D super-resolution. Further studies may extend the proposed traffic light detection to detect traffic signs.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Multiple sensor integration; moving object tracking; surface reconstruction; super-resolution; traffic light detection; conic section geometry; autonomous vehicle; spatio-temporal consistency constraint; non-holonomic constraint
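The temporal half of the spatio-temporal consistency constraint can be reduced to its simplest form for illustration: smooth per-frame detection flags with a sliding majority vote so that a single-frame dropout or false positive does not flip the detected state. The window size and the detection sequence are invented; the dissertation's actual constraint also enforces spatial coherence.

```python
def smooth_detections(flags, window=3):
    """Majority vote over each frame's neighborhood (odd window assumed)."""
    half = window // 2
    out = []
    for i in range(len(flags)):
        lo, hi = max(0, i - half), min(len(flags), i + half + 1)
        votes = flags[lo:hi]
        out.append(sum(votes) * 2 > len(votes))  # strict majority of the window
    return out

raw = [True, True, False, True, True, False, False]  # one dropout at frame 2
smoothed = smooth_detections(raw)
```

The single-frame miss at frame 2 is repaired by its neighbors, while the sustained absence at the end is preserved, which is the coherence-in-time behavior the abstract describes.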

9.
Kim, Rhae Sung.
Estimating snow depth of alpine snowpack via airborne multifrequency passive microwave radiance observations.
Degree: PhD, Geodetic Science, 2017, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1503071052341111
▼ Snow cover plays a key role in the climate and water
resource systems in mountainous areas; therefore, accurately
monitoring snow properties (e.g., snow water equivalent (SWE) or
snow depth) is critical. Although snow depth can be estimated
in-situ, these measurements are expensive and generally limited in
spatial coverage. Other methods, namely snow hydrologic modeling
and remote sensing, have their intrinsic strengths and limitations;
accurate knowledge and understanding of their highly complementary
relations are required. In this study, we utilized passive
microwave (PM) measurements of the brightness temperature (Tb) to
characterize snowpack properties in mountainous areas. Tb exhibits
reduced sensitivity to depth for deep snow and in forests, limiting
the ability of many existing algorithms for snow mapping. An
alternative approach is to classify snow depth based on its
multifrequency Tb signatures. Here, we first analyzed airborne Tb
measurements of alpine snowpack for five frequencies and two
polarizations, and compared them with an estimate of forest cover
and concurrent measurements of snow depth and snow wetness
collected as part of the NASA Cold Land Processes Field Experiment.
We analyzed a total of 900 independent samples, each representing
one hectare. Samples were classified into classes based on snow
depth, forest fraction, and wetness. We assessed whether the mean
Tb spectrum of each class differed from other classes using the
Hotelling's T-squared test, and assessed the separability of
classes using the Jeffries-Matusita (J-M) distance. Hotelling's
T-squared test revealed that the Tb for each forest cover and snow
depth class differed statistically from each of the others, for dry
snow, notwithstanding that within-class Tb variability tended to be
larger than the between-class differences. The J-M distance
indicated that most classes were somewhat separable based on the Tb
spectra. Consistent with expectations, J-M distance between classes
was lower for forested areas than for un-forested areas,
emphasizing the confounding influence of trees on characterizing
snow using Tb measurements. Based on the results of separability
tests, we explored supervised machine learning approaches using various classifiers; an SVM with an RBF kernel (RBF-SVM) achieved the highest accuracy and was selected. In our
classification system, we utilized both vertical and horizontal
polarizations of Tb in order to provide maximal information to the
classification predictor. Classification accuracy was compared with
the accuracy when using only Tb at vertical polarization.
Classification accuracies tended to decrease with increasing forest
cover density; however, it was encouraging that snow depth could be
somewhat classified even when pixels were forested. Classification
results for all different forest cover conditions showed improved
overall accuracies when using both horizontal and vertical
polarizations instead of using only vertical polarization. Based on
a study of Tb spectra, we proposed a new…
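The Jeffries-Matusita separability measure used above has a closed form for Gaussian class models. The sketch below is an illustrative univariate simplification only (the dissertation works with multifrequency Tb vectors, and the function name and sample values here are hypothetical):

```python
import math

def jeffries_matusita(m1, s1, m2, s2):
    """Jeffries-Matusita distance between two univariate Gaussian
    class models (mean m, standard deviation s).

    JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance;
    JM ranges from 0 (identical classes) to 2 (fully separable)."""
    b = (0.25 * (m1 - m2) ** 2 / (s1 ** 2 + s2 ** 2)
         + 0.5 * math.log((s1 ** 2 + s2 ** 2) / (2.0 * s1 * s2)))
    return 2.0 * (1.0 - math.exp(-b))

# Identical class models are inseparable (JM = 0) ...
print(jeffries_matusita(250.0, 5.0, 250.0, 5.0))  # 0.0
# ... while widely separated Tb means approach the ceiling of 2.
print(jeffries_matusita(200.0, 5.0, 260.0, 5.0) > 1.9)  # True
```

The bounded range is what makes J-M convenient for reporting "somewhat separable" classes, as the abstract does.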
Advisors/Committee Members: Durand, Michael (Advisor), Yilmaz, Alper (Committee Co-Chair).
Subjects/Keywords: Remote Sensing; Earth; Hydrology; Snow Depth; Alpine Snowpack
APA (6th Edition):
Kim, R. S. (2017). Estimating snow depth of alpine snowpack via airborne
multifrequency passive microwave radiance observations. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1503071052341111

The Ohio State University
10.
Lee, Ji Hyun.
Development of a Tool to Assist the Nuclear Power Plant
Operator in Declaring a State of Emergency Based on the Use of
Dynamic Event Trees and Deep Learning Tools.
Degree: PhD, Nuclear Engineering, 2018, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204
Safety is of the utmost importance in nuclear power plant operation. An approach to developing a real-time operator support tool (OST) for declaring a site emergency is proposed in this
study. Temporal behavior of the early stages of a severe accident
can be used to project the likelihood of different levels of
offsite release of radionuclides based on the results of accident
simulations with severe accident codes. Depending on the severity
of the accident and the potential magnitude of the release of
radioactive material to the environment, an offsite emergency
response such as evacuation or sheltering may be warranted. The
approach is based on the simulation of the possible nuclear power
plant (NPP) behavior following an initiating event and projects the
likelihood of different levels of offsite release of radionuclides
from the plant using deep learning (DL) techniques. Two
convolutional neural network (CNN) models are implemented to
classify possible scenarios under two different labels. Training of
the DL process is accomplished using results of a large number of
scenarios generated with the ADAPT/MELCOR/RASCAL computer codes to
simulate the variety of possible consequences following a station
blackout event involving the loss of all AC power for a large
pressurized water reactor. The ability of the model to predict the
likelihood of different levels of consequences is assessed using a
separate test set of MELCOR/RASCAL calculations. The set of data to
be used in training and testing the machine were obtained
previously from the Ph.D. dissertation work performed by Dr.
Douglas Osborn. The OST is illustrated for a station blackout event
in a pressurized water reactor for possible offsite dose outcomes
at: 1) the 2-mile area, 2) the 10-mile area, 3) the 2-mile boundary, and 4) the 10-mile boundary, which are considered key locations for emergency response planning 4 days after the release starts. Also, two
meteorological conditions, historical and standard meteorology, are
considered. Instead of random sampling from the total set, the
scenarios are clustered based on their similarities using mean
shift methodology. The two CNN models label whether a scenario falls into the over-10 rem bin or the 0-10 rem bin. The CNN1 model has an average accuracy of 87.19 percent over the possible offsite dose outcomes considered, with a maximum accuracy of up to 99.96 percent. CNN2 performs better, with its lowest accuracy at 92.79 percent. All case studies for 3-hour simulations after the release starts have over 74.82 percent accuracy.
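The abstract mentions clustering scenarios by their similarities with mean shift before selecting training data. A minimal one-dimensional, flat-kernel sketch of the technique (the function name, bandwidth, and toy data are hypothetical, not the dissertation's setup):

```python
def mean_shift_1d(points, bandwidth=1.0, iters=50):
    """Shift each point toward the mean of its neighbors within
    `bandwidth` (flat kernel); points converge onto cluster modes."""
    modes = list(points)
    for _ in range(iters):
        new = []
        for x in modes:
            neigh = [p for p in points if abs(p - x) <= bandwidth]
            new.append(sum(neigh) / len(neigh))
        modes = new
    # Collapse near-identical converged modes into distinct centers.
    centers = []
    for m in sorted(modes):
        if not centers or m - centers[-1] > bandwidth / 2:
            centers.append(m)
    return centers

# Two well-separated groups of scenario features yield two modes.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
print(mean_shift_1d(data, bandwidth=2.0))  # two modes, near 1.0 and 10.0
```

Unlike k-means, mean shift does not require the number of clusters in advance, which suits an open-ended set of accident scenarios.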
Advisors/Committee Members: Aldemir, Tunc (Advisor), Yilmaz, Alper (Advisor).
Subjects/Keywords: Nuclear Engineering; Computer Engineering; PRA, PSA, Dynamic PRA, Dynamic PSA, Machine learning, Deep Learning, Convolutional Neural Network
APA (6th Edition):
Lee, J. H. (2018). Development of a Tool to Assist the Nuclear Power Plant
Operator in Declaring a State of Emergency Based on the Use of
Dynamic Event Trees and Deep Learning Tools. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204
11.
Li, Ding.
ESA ExoMars Rover PanCam System Geometric Modeling and
Evaluation.
Degree: PhD, Geodetic Science and Surveying, 2015, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1420788556
The ESA ExoMars rover, planned to be launched to the
Martian surface in 2018, will carry a drill and a suite of
instruments dedicated to exobiology and geochemistry research. To
fulfill its scientific role, high-precision rover localization and
topographic mapping will be important for traverse path planning,
safe planetary surface operations and accurate embedding of
scientific observations into a global spatial context. For such
purposes, the ExoMars rover PanCam system will acquire an imagery
network providing vision information for photogrammetric algorithms
to localize the rover and generate 3-D mapping products. Since the
design of the PanCam will influence the localization and mapping
accuracy, quantitative error analysis of the PanCam design will
improve scientists’ awareness of the achievable accuracy, and
enable the PanCam design team to optimize the design for achieving
higher localization and mapping accuracy. In addition, a prototype camera system that meets the formalized PanCam specifications is needed to demonstrate the attainable localization accuracy
of the PanCam system over long-range traverses. Therefore, this
research contains the following two goals. The first goal is to
develop a rigorous mathematical model to estimate localization
accuracy of this PanCam system based on photogrammetric principles
and error propagation theory. The second goal is to assemble a
PanCam prototype according to the system specifications and develop
a complete vision-based rover localization method from camera
calibration and image capture to obtain motion estimation and
localization refinement. The vision-based rover localization method
presented here is split into two stages: the visual odometry
processing, which serves as the initial estimation of the rover’s
movement, and the bundle adjustment technique, which further
improves the localization through posterior refinement. A
theoretical error analysis model for each of the localization
stages has been established accordingly to simulate the rover
localization error with respect to the traverse length.
Additionally, a PanCam prototype was assembled with parameters similar to the latest technical specifications in order to systematically test and evaluate the ExoMars PanCam localization and mapping capabilities. The entire processing pipeline, from system assembly and calibration through feature extraction and matching to rover localization in field experiments, has been carried out in this research.
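The dissertation's error model rests on rigorous photogrammetric error propagation; as a back-of-the-envelope sketch of why localization error grows with traverse length, one can assume independent zero-mean per-step visual odometry errors, so variances add linearly (the function name and numbers below are hypothetical):

```python
import math

def vo_position_std(step_sigma, n_steps):
    """Standard deviation of accumulated position error after n
    visual-odometry steps, assuming independent zero-mean errors of
    standard deviation `step_sigma` per step (variances add)."""
    return step_sigma * math.sqrt(n_steps)

# Error grows with the square root of traverse length: quadrupling
# the number of steps only doubles the expected position error.
print(vo_position_std(0.01, 100))  # 0.1
print(vo_position_std(0.01, 400))  # 0.2
```

A posterior refinement such as bundle adjustment, as in the abstract, tightens this growth further by re-estimating all poses jointly rather than chaining them.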
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Geographic Information Science; Robotics; ExoMars; Localization; Error Propagation; Bundle Adjustment; Visual Odometry
APA (6th Edition):
Li, D. (2015). ESA ExoMars Rover PanCam System Geometric Modeling and
Evaluation. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1420788556
12.
Sukcharoenpong, Anuchit.
Shoreline Mapping with Integrated HSI-DEM using Active
Contour Method.
Degree: PhD, Geodetic Science and Surveying, 2014, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1406147249
Shoreline mapping has been a critical task for federal/state agencies and coastal communities. It supports
important applications such as nautical charting, coastal zone
management, and legal boundary determination. Current attempts to
incorporate data from hyperspectral imagery to increase the
efficiency and efficacy of shoreline mapping have been limited due
to the complexity in processing its data as well as its inferior
spatial resolution when compared to multispectral imagery or to
sensors such as LiDAR. As advancements in remote-sensing
technologies increase sensor capabilities, the ability to exploit
the spectral information carried in hyperspectral images becomes more
imperative. This work employs a new approach to extracting
shorelines from AVIRIS hyperspectral images by combination with a
LiDAR-based DEM using a multiphase active contour segmentation
technique. Several techniques, such as study of object spectra and
knowledge-based segmentation for initial contour generation, have
been employed in order to achieve a sub-pixel level of accuracy and
maintain low computational expenses. Introducing a DEM into
hyperspectral image segmentation proves to be a useful tool to
eliminate misclassifications and improve shoreline positional
accuracy. Experimental results show that mapping shorelines from
hyperspectral imagery and a DEM can be a promising approach as many
further applications can be developed to exploit the rich
information found in hyperspectral imagery.
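The multiphase active-contour segmentation referred to above builds on the Chan-Vese piecewise-constant model. The sketch below shows only its region term (no curvature smoothing, one dimension, two phases); the function name and the toy "water vs. land" profile are hypothetical:

```python
def two_phase_segment(values, iters=20):
    """Piecewise-constant two-phase segmentation (the region term of
    the Chan-Vese active-contour model, without curvature smoothing):
    alternately estimate the mean of each region, then reassign each
    sample to the region whose mean it is closer to."""
    labels = [v > values[0] for v in values]  # crude initialization
    for _ in range(iters):
        inside = [v for v, l in zip(values, labels) if l]
        outside = [v for v, l in zip(values, labels) if not l]
        if not inside or not outside:
            break
        c1 = sum(inside) / len(inside)
        c2 = sum(outside) / len(outside)
        labels = [abs(v - c1) < abs(v - c2) for v in values]
    return labels

# A noisy intensity profile splits at the bright/dark boundary.
profile = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]
print(two_phase_segment(profile))  # [False, False, False, True, True, True]
```

The full model adds a contour-length penalty and, in the dissertation, a DEM constraint to reject misclassifications.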
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Geographic Information Science; Remote Sensing; shoreline mapping; hyperspectral imagery; data integration; HSI; DEM
APA (6th Edition):
Sukcharoenpong, A. (2014). Shoreline Mapping with Integrated HSI-DEM using Active
Contour Method. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1406147249
13.
luo, sai.
Semantic Movie Scene Segmentation Using Bag-of-Words
Representation.
Degree: MS, Civil Engineering, 2017, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1500375283397255
Video segmentation is an important procedure in video
indexing and archiving. Shots and scenes are two levels of video segments. In this thesis, a shot detection method and a scene segmentation method are introduced. The shot detection method is based on both camera motion and histogram differences between frames. Motion within one shot has a certain continuity, so frames that break the pattern are shot transition frames. The motion-based method is sensitive to object motion in frames, which can affect the result, so histogram comparison is introduced. Frames in one shot are very similar in visual content, and that content can be summarized as a histogram. The drawback of this method is that in scenes with heavy camera movement, the visual content may vary greatly. By combining these two methods, the shots can be extracted with higher accuracy. Scene segmentation is a higher level of video summarization. In this process, a BOW model and a color histogram method are used. The BOW model is used to construct feature descriptors that capture background information; the model is constructed by generating clusters from the feature vectors of the training images. Since a scene is a more complex level of video segment, multiple criteria are needed to determine it, so the color histogram is introduced. The color information of frames within one scene is similar; frames that break the pattern mark scene transitions, thus forming the second-level segments.
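The histogram-comparison half of such a shot detector can be sketched as follows; the bin count, threshold, and toy frames are hypothetical choices, not the thesis's parameters:

```python
def histogram(frame, bins=4):
    """Coarse intensity histogram of a frame (pixel values 0..255)."""
    h = [0] * bins
    for p in frame:
        h[min(p * bins // 256, bins - 1)] += 1
    return h

def shot_boundaries(frames, threshold):
    """Indices where the L1 distance between consecutive frame
    histograms exceeds `threshold` (hard-cut candidates)."""
    cuts = []
    for i in range(1, len(frames)):
        d = sum(abs(a - b) for a, b in
                zip(histogram(frames[i - 1]), histogram(frames[i])))
        if d > threshold:
            cuts.append(i)
    return cuts

# Two dark frames, then two bright frames: one cut at index 2.
frames = [[10, 20, 15, 12], [11, 22, 14, 13],
          [240, 250, 245, 248], [242, 251, 246, 247]]
print(shot_boundaries(frames, threshold=4))  # [2]
```

Combining these candidates with the motion-continuity test, as the abstract describes, filters out false cuts caused by fast camera movement.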
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Shot detection, Scene segmentation, Bag-of-Words
APA (6th Edition):
luo, s. (2017). Semantic Movie Scene Segmentation Using Bag-of-Words
Representation. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1500375283397255
14.
Deshpande, Sagar Shriram.
Semi-automated Methods to Create a Hydro-flattened DEM using
Single Photon and Linear Mode LiDAR Points.
Degree: PhD, Geodetic Science, 2017, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1491300120665946
LiDAR pulses are mostly absorbed by water bodies,
thereby creating voids. The LiDAR points available over water surfaces are not reliable due to near-water-surface features such as ripples, waves, or near-surface ground objects. A bare-ground DEM surface created using such points results in an uneven water surface which appears unnatural and cartographically unpleasing. Contours created from such a surface are not consistent with the USGS contours, which are produced using traditional methods. Hence,
the LiDAR point cloud needs to be hydro-flattened to produce a bare
ground surface, consistent with the traditionally produced
DEMs. Hydro-flattening is the process of creating a LiDAR-derived
DEM where the water surfaces appear and behave as they would in a
traditional topographic DEM generated from photogrammetric digital
terrain models (DTMs). Hydro-flattened DEMs, created using LiDAR
data, exclude LiDAR points over water bodies and include
three-dimensional (3D) bank shorelines. In this dissertation, a
methodology for creating hydro-flattened bare ground surfaces using
linear mode (LM) or Single Photon (SP) LiDAR point clouds is
presented. First the properties of both the sensors are compared
and the need of hydro-flattening is discussed. Then, the method is
described in detail for both the sensors. LiDAR point cloud and an
approximate stream centerline are the primary data for this
process. In the first step, a continuous bare ground surface (CBGS)
is created by eliminating non-ground LiDAR points and adding
artificial underwater points. In the second step, the lowest elevation from the LiDAR point cloud within a radius of the river centerline is used to create a virtual water surface
(VWS). This VWS is revised to consider water surface undulations
such as ripples or waves, protruding underwater objects, etc. The
revised VWS is then intersected with the CBGS to locate the
two-dimensional (2D) bank shorelines. The 2D shorelines are
assigned the elevations of the VWS and are used to produce a
hydro-flattened DEM. This methodology is developed for either
classified or unclassified LiDAR point clouds. The proposed method is adapted for both LM and SP LiDAR point clouds. Data at three sites are tested to check the consistency of the proposed methodology. Only an LM LiDAR point cloud is available at the
Michigan site. The results from this site show that the horizontal
accuracies observed between the bank shoreline, extracted using raw
LiDAR points, and the GPS survey of the 2D shoreline are 0.94 m,
0.69 m, and 0.63 m for the three water surfaces, respectively. The
accuracies attained using vendor classified LiDAR points are 0.74
m, 0.67 m, and 0.64 m which are very similar to those using raw
LiDAR points. Both SP and LM LiDAR are processed at the second
site, located in North Carolina. The results at this site are
compared to orthoimages. The results show that the 2D bank
shoreline appears very close to the orthoimages compared to the 2D
shoreline obtained using the LM LiDAR point cloud. This is due to
the…
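The second step described above (lowest elevation within a radius of the centerline) can be sketched as below; the function name, radius, and sample points are hypothetical, and a production version would use a spatial index rather than a brute-force scan:

```python
import math

def virtual_water_surface(points, centerline, radius):
    """Lowest LiDAR elevation within `radius` of any centerline
    vertex; used as the initial virtual water surface (VWS) height.
    `points` are (x, y, z) tuples; `centerline` are (x, y) vertices."""
    z_min = None
    for x, y, z in points:
        near = any(math.hypot(x - cx, y - cy) <= radius
                   for cx, cy in centerline)
        if near and (z_min is None or z < z_min):
            z_min = z
    return z_min

# Ripple returns (201.2, 201.5) sit above the true surface (200.8);
# a point 50 m away from the centerline is ignored entirely.
pts = [(0, 1, 201.2), (1, 2, 201.5), (2, 0, 200.8), (50, 50, 180.0)]
print(virtual_water_surface(pts, centerline=[(0, 0), (5, 0)], radius=10))  # 200.8
```

Taking the lowest nearby return is what suppresses waves and protruding objects before the VWS is revised and intersected with the bare-ground surface.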
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Geographic Information Science; Single Photon, Linear mode LiDAR, Digital Photogrammetry, Feature extraction, GIS and remote sensing, Hydro-flattening
APA (6th Edition):
Deshpande, S. S. (2017). Semi-automated Methods to Create a Hydro-flattened DEM using
Single Photon and Linear Mode LiDAR Points. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1491300120665946
15.
Lai, Yuchen.
Augmented Reality Visualization of Building Information
Model.
Degree: MS, Civil Engineering, 2017, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu149263273982056
Building Information Modeling (BIM) is an effective tool that is widely used in the construction industry. As a result, building information models (BIMs) serve an important role through the project design, delivery, construction, and management stages, bringing many benefits. There are many reliable commercial BIM software packages on the market that use computers as their main platform, but the ways they display and interact with the BIM are also limited by computers. On the other hand, Augmented Reality (AR), as a popular emerging technique, shows great potential to change the way people observe and interact with the world. It provides a seamless way of combining virtual digital content with the real world. In this paper, we discuss the development of BIM and AR techniques, and the possible benefits of combining them. In the last chapter we present an experimental system that is able to visualize BIM in AR. The results are demonstrated, and our system can serve as a general framework for a wider range of AR-BIM system development.
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Building Information Model; Augmented Reality Visualization
APA (6th Edition):
Lai, Y. (2017). Augmented Reality Visualization of Building Information
Model. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu149263273982056
16.
xiao, changlin.
Visual Tracking with an Application to Augmented
Reality.
Degree: PhD, Civil Engineering, 2017, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1500638355208487
In simple terms, visual tracking is the process of visually following a given object. This basic task is one of the fundamental problems in many computer vision applications, such as movement pattern analysis, animal surveillance, and robot navigation. Currently, with the increasing popularity of cameras, large amounts of video data are generated every day, yet the algorithms used to handle this visual information are far from sufficient. In addition, technical progress and the increasing demand for unmanned aerial vehicles (UAVs) with autopilot capability promote the need for visual tracking in many practical applications. Therefore, visual tracking is still one of the most interesting topics in computer vision. However, even though various methods have been proposed for visual tracking, it is still
an open problem due to its complexity. Tracking means following the
target's motion, and the primary challenge in visual tracking is the inconsistency of the target's appearance. The appearance of the target may change with illumination, non-rigid motion, and occlusions. Additionally, a similar background may cause the drift problem, such as switching from the targeted player to an untargeted one in games. In real life there are more problems, such as scale change, fast motion, low resolution, and out-of-plane rotation, which make the tracking task even more challenging. Therefore, visual tracking, after several decades of research, is still an active research topic with many unsolved problems. In this dissertation, three tracking methods are proposed to deal with the tracking problems for different targets and various scenarios. Also, besides tracking in 2D images, this work
further introduces a 3D-space tracking model in an augmented reality application. For a simple tracking scenario, an efficient tracking method with distinctive color and silhouette is proposed. The proposed method uses the colors that mostly exist on the target to represent and track it. It is a dynamic color representation of the target that is updated as the background changes. This appearance model can substantially reduce the distractors in the background, and the color is invariant to shape change, which significantly alleviates the non-rigid deformation problem. Based on the above tracking idea, a unique-feature vote tracking algorithm
is further developed. This work divides the feature space into many
small spaces as storage cells for feature descriptions. And if most
of the descriptions in the cell are from the target, the features
in the cell are treated as unique features. Besides counting how
likely the feature from the target, each feature’s location respect
to the target center is recorded to reproject the center in the new
coming frames. This voting machine makes the tracker focus on the
target against the occlusion and cluster background. Recently, deep
learning and neural network show powerful ability in computer
vision applications. The neural network, especially the
convolutional neural…
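The unique-feature voting scheme described in the abstract can be sketched roughly as follows. This is an illustrative reading, not the dissertation's implementation: the class name, cell quantizer, and purity threshold are all hypothetical stand-ins.

```python
import numpy as np

class VoteTracker:
    """Sketch of a unique-feature voting tracker (illustrative only).
    Descriptors are quantized into cells; cells dominated by target
    features vote for the target center using stored offsets."""

    def __init__(self, n_cells=64, purity=0.8):
        self.n_cells = n_cells      # number of quantization cells
        self.purity = purity        # min fraction of target hits per cell
        self.target_hits = np.zeros(n_cells)
        self.bg_hits = np.zeros(n_cells)
        self.offsets = {c: [] for c in range(n_cells)}  # feature-to-center offsets

    def _cell(self, desc):
        # toy quantizer: hash the rounded descriptor into one of n_cells bins
        return int(abs(hash(tuple(np.round(desc, 2)))) % self.n_cells)

    def train(self, feats, locs, center, on_target):
        for d, loc, t in zip(feats, locs, on_target):
            c = self._cell(d)
            if t:
                self.target_hits[c] += 1
                self.offsets[c].append(np.asarray(center) - np.asarray(loc))
            else:
                self.bg_hits[c] += 1

    def vote(self, feats, locs):
        votes = []
        for d, loc in zip(feats, locs):
            c = self._cell(d)
            total = self.target_hits[c] + self.bg_hits[c]
            if total and self.target_hits[c] / total >= self.purity:
                # unique feature: cast one vote per stored offset
                for off in self.offsets[c]:
                    votes.append(np.asarray(loc) + off)
        # predicted center = mean of votes (a mode or mean-shift would also fit)
        return np.mean(votes, axis=0) if votes else None
```

Because background-dominated cells never vote, distractors with similar appearance cast no votes, which is the abstract's point about robustness to occlusion and clutter.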
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; computer vision, visual tracking, object appearance model,
deep learning, augmented reality, 3D tracking
APA (6th Edition):
Xiao, C. (2017). Visual Tracking with an Application to Augmented
Reality. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1500638355208487
Chicago Manual of Style (16th Edition):
Xiao, Changlin. “Visual Tracking with an Application to Augmented
Reality.” 2017. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1500638355208487.
MLA Handbook (7th Edition):
Xiao, Changlin. “Visual Tracking with an Application to Augmented
Reality.” 2017. Web. 19 Jan 2021.
Vancouver:
Xiao C. Visual Tracking with an Application to Augmented
Reality. [Internet] [Doctoral dissertation]. The Ohio State University; 2017. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1500638355208487.
Council of Science Editors:
Xiao C. Visual Tracking with an Application to Augmented
Reality. [Doctoral Dissertation]. The Ohio State University; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1500638355208487
17.
Barsai, Gabor.
DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION.
Degree: PhD, Geodetic Science and Surveying, 2011, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340
► Creating accurate, current digital maps and 3-D scenes is a high priority in today’s fast changing environment. The nation’s maps are in a constant…
(more)
▼ Creating accurate, current digital maps and
3-D scenes is a high priority in today’s fast changing environment.
The nation’s maps are in a constant
state of revision, with many
alterations or new additions each day. Digital maps have become
quite common; Google Maps, MapQuest and others are examples. These
also have 3-D viewing capability. Many details are now included,
such as the height of low bridges, in the attribute data for the
objects displayed on digital maps and scenes. To expedite the
updating of these datasets, they should be created autonomously,
without human intervention, from data streams. Though systems exist
that attain fast, or even real-time performance mapping and
reconstruction, they are typically restricted to creating sketches
from the data stream, and not accurate maps or scenes. The ever
increasing amount of image data available from private companies,
governments and the internet suggests that the development of an
automated system is of utmost importance. The
proposed framework can create 3-D views autonomously, which extends
the functionality of digital mapping. The first step to creating
3-D views is to reconstruct the scene of the area to be mapped. To
reconstruct a scene from heterogeneous sources, the data has to be
registered: either to each other or, preferably, to a general,
absolute coordinate system. Registering an image is based on the
reconstruction of the geometric relationship of the image to the
coordinate system at the time of imaging. Registration is the
process of determining the geometric transformation parameters of a
dataset in one coordinate system, the source, with respect to the
other coordinate system, the target. The advantages of fusing these
datasets by registration manifest themselves in the complementary
information that different-modality datasets contain. The
complementary characteristics of these systems can be
fully utilized only after successful registration of the
photogrammetric and alternative data relative to a common reference
frame. This research provides a novel approach to finding
registration parameters without the explicit use of conjugate
points, using conjugate features instead. These features are open or
closed free-form linear features; there is no need for a parametric
or any other type of representation of these features. The proposed
method will use different modality datasets of the same area: lidar
data, image data and GIS data. There are two datasets: one from the
Ohio State University and the other from San Bernardino,
California. The reconstruction of scenes from
imagery and range data, using laser and radar data, has been an
active research area in the fields of photogrammetry and computer
vision. Automation, or even just less human intervention, would have a
great impact on alleviating the “bottleneck” that describes the
current state of creating knowledge from data. Pixels or laser
points, the output of the sensor, represent a discretization of the
real world. By themselves, these data points do not contain…
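For contrast with the correspondence-free registration the abstract describes, a classical baseline in which correspondences are induced rather than given explicitly is iterative closest point (ICP). The sketch below is a minimal 2D rigid ICP with hypothetical names, not the dissertation's free-form-feature method.

```python
import numpy as np

def icp_rigid_2d(src, dst, iters=20):
    """Minimal 2D rigid ICP (illustrative baseline). Correspondences are
    induced each iteration by nearest neighbours rather than given."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # best rigid transform for these correspondences (Procrustes/SVD)
        mu_s, mu_d = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:           # guard against reflection
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti          # accumulate total transform
    return R, t
```

Registering free-form linear features, as the thesis does, replaces the point-to-point matching step with feature-level constraints, but the outer alternate-and-refine structure is analogous.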
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Computer Engineering; Geographic Information Science; automatic registration; 3D reconstruction; lidar; GIS; digital image; point free registration
APA (6th Edition):
Barsai, G. (2011). DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340
Chicago Manual of Style (16th Edition):
Barsai, Gabor. “DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION.” 2011. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340.
MLA Handbook (7th Edition):
Barsai, Gabor. “DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION.” 2011. Web. 19 Jan 2021.
Vancouver:
Barsai G. DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION. [Internet] [Doctoral dissertation]. The Ohio State University; 2011. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340.
Council of Science Editors:
Barsai G. DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR
ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION. [Doctoral Dissertation]. The Ohio State University; 2011. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340
18.
Srestasathiern, Panu.
Line Based Estimation of Object Space Geometry and Camera
Motion.
Degree: PhD, Geodetic Science and Surveying, 2012, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748
► In this dissertation, two problems of 3D structure and camera motion recovery are addressed. The first problem is the 3D reconstruction problem using multiple…
(more)
▼ In this dissertation, two problems of 3D
structure and camera motion recovery are addressed. The first
problem is the 3D reconstruction problem using multiple images.
Particularly, in this dissertation, the line estimation using
multiple views is researched. The second addressed problem of 3D
structure and camera motion recovery is the line-based bundle
adjustment. A novel cost function for line based bundle adjustment
is proposed. For the line based 3D structure and
camera motion recovery, the first problem is the 3D line estimation
which provides an initial solution for the bundle adjustment
process. In order to facilitate this, I represent the 3D line by its
Plücker coordinates. A typical requirement of this representation
is the use of the Plücker constraint. I leverage the state of the
art by waiving the Plücker constraint and propose two streamlined
solutions to the 3D line estimation problem. The first proposed 3D line
estimation model is based on the preservation of coincidence in the
dual projective space. The second method is based on the averaging
of a set of 3D lines which are generated by the intersection of the
back-projection planes from multiple images viewing the estimated
3D line. The second component of my proposal is
to develop a new bundle adjustment model. More precisely, a new
line-based cost function that defines a geometric error in the
object space is proposed. The proposed cost function is derived by
using the equivalence between the image plane and the unit Gaussian
sphere with its center positioned at the optical center of the
image plane. Particularly, the geometric error is defined as the
integrated squared distance between the projection plane of a 3D
line estimate and points on the perimeter of the circular sector
equivalent to the image of the 3D line
estimate.
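The Plücker representation and its constraint, central to the abstract above, can be made concrete in a few lines. This is the standard construction; the function names are illustrative, not from the dissertation.

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (d | m) of the 3D line through points p and q:
    d is the direction, m = p x d is the moment about the origin."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p                 # direction vector
    m = np.cross(p, d)        # moment about the origin
    return np.r_[d, m]        # 6-vector (d | m)

def plucker_constraint(L):
    """The Plücker (Klein quadric) constraint: d . m = 0 for a valid line.
    Waiving this constraint is what the abstract's two methods exploit."""
    return float(np.dot(L[:3], L[3:]))

def point_line_distance(L, x):
    """Distance from point x to the line L = (d | m):
    |x x d - m| / |d|, since m = p x d for any point p on the line."""
    d, m = L[:3], L[3:]
    return np.linalg.norm(np.cross(x, d) - m) / np.linalg.norm(d)
```

A general 6-vector estimated without the constraint is only a line when `plucker_constraint` vanishes, which is why unconstrained estimation needs either a correction step or a cost that tolerates the violation.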
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Computer Science; Geographic Information Science; Robotics; photogrammetry; projective geometry; 3D structure and camera motion recovery; line feature
APA (6th Edition):
Srestasathiern, P. (2012). Line Based Estimation of Object Space Geometry and Camera
Motion. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748
Chicago Manual of Style (16th Edition):
Srestasathiern, Panu. “Line Based Estimation of Object Space Geometry and Camera
Motion.” 2012. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748.
MLA Handbook (7th Edition):
Srestasathiern, Panu. “Line Based Estimation of Object Space Geometry and Camera
Motion.” 2012. Web. 19 Jan 2021.
Vancouver:
Srestasathiern P. Line Based Estimation of Object Space Geometry and Camera
Motion. [Internet] [Doctoral dissertation]. The Ohio State University; 2012. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748.
Council of Science Editors:
Srestasathiern P. Line Based Estimation of Object Space Geometry and Camera
Motion. [Doctoral Dissertation]. The Ohio State University; 2012. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748
19.
JIANG, JINWEI.
Collaborative Tracking of Image Features Based on Projective
Invariance.
Degree: PhD, Geodetic Science and Surveying, 2012, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345562896
► In past manned lunar landing missions, such as Apollo 14, spatial disorientation of astronauts substantially compromised the productivities of astronauts, and caused safety and…
(more)
▼ In past manned lunar landing missions, such
as Apollo 14, spatial disorientation of astronauts substantially
compromised the productivities of astronauts, and caused safety and
mission success problems. The non-GPS lunar environment has a
micro-gravity field, and lacks both spatial recognition cues and
reference objects which are familiar to the human biological
sensors related to spatial recognition (e.g. eyes). Such an
environment causes misperceptions of the locations of astronauts
and targets and their spatial relations, as well as misperceptions
of the heading direction and travel distances of astronauts. These
spatial disorientation effects can reduce productivity and cause
life risks in lunar manned missions. A navigation system, which is
capable of locating astronauts and tracking the movements of them
on the lunar surface, is critical for future lunar manned missions
where multiple astronauts will traverse more than 100 km from the
lander or the base station with assistance from a roving vehicle,
and need real-time navigation support for effective collaborations
among them. Our earlier research to solve these
problems dealt with developing techniques to enable a precise,
flexible and reliable Lunar Astronaut Spatial Orientation and
Information System (LASOIS) capable of delivering real-time
navigation information to astronauts on the lunar surface. The
LASOIS hardware was a sensor network composed of orbital, ground
and on-suit sensors: the Lunar Reconnaissance Orbiter Camera
(LROC), radio beacons, the on-suit cameras, and shoe-mounted
Inertial Measurement Unit (IMU). The LASOIS software included
efficient and robust algorithms for estimating trajectory from IMU
signals, generating heading information from imagery acquired from
on-suit cameras, and an Extended Kalman Filter (EKF) based approach
for integrating these spatial information components to generate
the trajectory of an astronaut with meter-level accuracy. Moreover,
LASOIS emphasized multi-mode sensors for improving the flexibility
and robustness of the system. From the
experimental results during three field tests for the LASOIS
system, we observed that most of the errors in the image processing
algorithm are caused by the incorrect feature tracking. This
dissertation addresses the feature tracking problem in image
sequences acquired from cameras. Despite many alternatives to the
feature tracking problem, the iterative least squares solution solving
the optical flow equation has been the most popular approach used
by many in the field. This dissertation attempts to leverage the
former efforts to enhance feature tracking methods by introducing a
view geometric constraint to the tracking problem, which provides
collaboration among features. In contrast to alternative geometry
based methods, the proposed approach provides an online solution to
optical flow estimation in a collaborative fashion by exploiting
Horn and Schunck flow estimation regularized by view geometric
constraints. The proposed collaborative tracker estimates the motion of
a…
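The optical-flow backbone this abstract builds on is the classical Horn and Schunck scheme. A minimal sketch of that base iteration follows, without the dissertation's view-geometric regularizer, and with simplified periodic borders via `np.roll`; it is a sketch of the classical method, not the proposed tracker.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Classical Horn–Schunck optical flow (base scheme only; the
    dissertation adds view-geometric constraints on top of this)."""
    I1, I2 = np.asarray(I1, float), np.asarray(I2, float)
    Ix = np.gradient(I1, axis=1)      # spatial gradients of the first frame
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        # neighbourhood means of the current flow (4-neighbour average,
        # wrapping borders for simplicity)
        ub = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        vb = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
              np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Horn–Schunck update: pull the averaged flow toward the
        # brightness-constancy constraint, damped by alpha
        num = Ix * ub + Iy * vb + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

The smoothness term (the neighbourhood average) is exactly where a geometric prior can be substituted or added, which is the collaboration idea the abstract describes.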
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Computer Engineering; Computer Science; Geographic Information Science; Robotics; features; tracking; appearance similarity; projective geometry; projective invariants
APA (6th Edition):
JIANG, J. (2012). Collaborative Tracking of Image Features Based on Projective
Invariance. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1345562896
Chicago Manual of Style (16th Edition):
JIANG, JINWEI. “Collaborative Tracking of Image Features Based on Projective
Invariance.” 2012. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1345562896.
MLA Handbook (7th Edition):
JIANG, JINWEI. “Collaborative Tracking of Image Features Based on Projective
Invariance.” 2012. Web. 19 Jan 2021.
Vancouver:
JIANG J. Collaborative Tracking of Image Features Based on Projective
Invariance. [Internet] [Doctoral dissertation]. The Ohio State University; 2012. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345562896.
Council of Science Editors:
JIANG J. Collaborative Tracking of Image Features Based on Projective
Invariance. [Doctoral Dissertation]. The Ohio State University; 2012. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1345562896
20.
Lawver, Jordan D.
Robust Feature Tracking in Image Sequences Using View
Geometric Constraints.
Degree: MS, Civil Engineering, 2013, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1365611706
► In computer vision, interest point tracking across an image sequence is a fundamental technique for determining motion characteristics. Once motion is extracted it can be…
(more)
▼ In computer vision, interest point tracking across an
image sequence is a fundamental technique for determining motion
characteristics. Once motion is extracted it can be applied to a
wide array of other practical applications, including but not
limited to automated surveillance, three-dimensional recovery, and
object recognition. Traditionally, feature tracking has been
performed with a variety of appearance-based comparison methods.
The most common methods analyze intensity values of local pixels
and subsequently attempt to match them to the most similar region
in the following frame. This standard, though sometimes effective,
lacks versatility. For example, these methods are easily confused
by shadows, patterns, feature occlusion, and a variety of other
appearance-based anomalies. To counteract the issues presented by a
one-sided approach, a new method has been developed to take
advantage of both appearance and geometric constraints in a
complementary fashion. To do this, a select number of points are
first tracked through a set number of initialization frames such
that their shape can be defined. Beginning at the following frame,
this spatial information is substituted as an additional constraint
into the appearance-based optical flow equation. Through an
iterative least-squares solution, the camera parameters for the
new frame are computed and used to project the derived shape data
to the new feature point image coordinates. The process is repeated
for each new frame until a trajectory is created for the entire
video sequence. With this method, weight can be allocated as desired
between both appearance and geometric constraints. If an issue
arises with one constraint (e.g., occlusion or rapid camera
movement), the other constraint will continue to track the feature
successfully. Preliminary results have shown that this method
provides consistent robustness to tracking challenges, such as
occlusion, shadows, and repeating patterns, while also
outperforming appearance-based methods in tracking quality. With
this improvement, many existing deficiencies in the practical
applications of feature tracking can eventually be
overcome.
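The abstract's idea of allocating weight between appearance and geometric constraints in one least-squares solve can be illustrated abstractly. The linear formulation below is hypothetical, chosen only to show how weighting shifts trust between two constraint sets; it is not the thesis's exact model.

```python
import numpy as np

def weighted_joint_solve(A_app, b_app, A_geo, b_geo, w_app=1.0, w_geo=1.0):
    """Blend an appearance constraint and a geometric constraint in a
    single least-squares solve (illustrative sketch). Each constraint is
    a linear system A x ~ b; the weights shift trust between the two."""
    # scaling rows by sqrt(w) makes lstsq minimize
    # w_app*||A_app x - b_app||^2 + w_geo*||A_geo x - b_geo||^2
    A = np.vstack([np.sqrt(w_app) * A_app, np.sqrt(w_geo) * A_geo])
    b = np.concatenate([np.sqrt(w_app) * b_app, np.sqrt(w_geo) * b_geo])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

When one constraint degrades (occlusion for appearance, rapid motion for geometry), lowering its weight lets the other continue to determine the solution, mirroring the complementary behaviour described above.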
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering; Computer Science; Geographic Information Science
APA (6th Edition):
Lawver, J. D. (2013). Robust Feature Tracking in Image Sequences Using View
Geometric Constraints. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1365611706
Chicago Manual of Style (16th Edition):
Lawver, Jordan D. “Robust Feature Tracking in Image Sequences Using View
Geometric Constraints.” 2013. Masters Thesis, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1365611706.
MLA Handbook (7th Edition):
Lawver, Jordan D. “Robust Feature Tracking in Image Sequences Using View
Geometric Constraints.” 2013. Web. 19 Jan 2021.
Vancouver:
Lawver JD. Robust Feature Tracking in Image Sequences Using View
Geometric Constraints. [Internet] [Masters thesis]. The Ohio State University; 2013. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1365611706.
Council of Science Editors:
Lawver JD. Robust Feature Tracking in Image Sequences Using View
Geometric Constraints. [Masters Thesis]. The Ohio State University; 2013. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1365611706
21.
Mattmuller, Adam.
Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology.
Degree: MS, Nuclear Engineering, 2016, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1461760209
► The recent commercialization of wearable technology presents an opportunity for nuclear power plant maintenance workers to increase their performance by becoming more aware of their…
(more)
▼ The recent commercialization of wearable technology
presents an opportunity for nuclear power plant maintenance workers
to increase their performance by becoming more aware of their
situation and increasing the tools at hand to perform the
maintenance. A wearable technology, such as Google Glass (GG),
gives the wearer a small screen in front of the user’s eye, which
is a potentially useful medium through which to convey information
about radiation fields and maintenance procedures. If used to its
full potential, the technology can be used to monitor worker
progress in real time, provide guidance to avert potential errors, and give
the worker performance feedback, all of which may be invaluable
during time-sensitive activities in the plant. A GG program has
been created which allows a utility worker to see the procedure in
a small screen worn on his or her head. The worker follows the
procedure and enters decisions made and situations encountered
during the procedure. The program tracks these decisions and checks
them against the preprogrammed valid solutions. Should an error
occur, the worker is immediately notified that an invalid set of
solutions has occurred. Such an error may present itself in the
form of a valve out of position, a tag not cleared on an associated
system, or a step inadvertently being marked as “Not Applicable.”
In addition, another GG program was created to record and relay
data from a Bluetooth-enabled radiation dosimetry device and
convey this information to the worker in a small box on the
screen. This alternative form of conveying dosimetry information
presents the information in a more accessible way: on a screen in
the worker’s field of view rather than on a pocket dosimeter, which
requires a button to be pressed to convey the
information.
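The step-checking behaviour described above can be sketched as a simple lookup of each entered decision against preprogrammed valid options. The step names, entries, and structure here are assumptions for illustration, not the thesis's Glass code.

```python
# Sketch of procedure-step validation: each step has a set of valid
# entries, and an invalid entry is flagged immediately (illustrative).

VALID_STEPS = {
    "valve_A": {"open", "closed"},
    "tag_B":   {"cleared"},
    "step_3":  {"complete", "not_applicable"},
}

def check_entry(step, entry, valid=VALID_STEPS):
    """Return None if the entry is valid for this step, else an error
    message that would be pushed to the worker's head-worn display."""
    if step not in valid:
        return f"unknown step: {step}"
    if entry not in valid[step]:
        return f"invalid entry '{entry}' for {step}"
    return None
```

Errors such as a valve out of position or a tag not cleared reduce, in this reading, to an entry outside the step's valid set, so the worker is notified at the moment of entry rather than at review time.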
Advisors/Committee Members: Aldemir, Tunc (Advisor), Yilmaz, Alper (Advisor).
Subjects/Keywords: Energy; Engineering; Mechanical Engineering; Nuclear Engineering; Radiation; Surgery; google glass; wearable technology; glassware; nuclear power plant maintenance; human reliability; human error
APA (6th Edition):
Mattmuller, A. (2016). Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1461760209
Chicago Manual of Style (16th Edition):
Mattmuller, Adam. “Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology.” 2016. Masters Thesis, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1461760209.
MLA Handbook (7th Edition):
Mattmuller, Adam. “Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology.” 2016. Web. 19 Jan 2021.
Vancouver:
Mattmuller A. Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology. [Internet] [Masters thesis]. The Ohio State University; 2016. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1461760209.
Council of Science Editors:
Mattmuller A. Nuclear Power Plant Maintenance Improvement via
Implementation of Wearable Technology. [Masters Thesis]. The Ohio State University; 2016. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1461760209
22.
Ozendi, Mustafa.
Viewpoint Independent Image Classification and
Retrieval.
Degree: MS, Geodetic Science and Surveying, 2010, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830
► Image retrieval has applications in different disciplines. For example, there are applications in digital painting catalogues and in security-related applications. Researchers from both computer…
(more)
▼ Image retrieval has applications in different
disciplines. For example, there are applications in digital
painting catalogues and in security-related applications.
Researchers from both computer vision and photogrammetry fields are
developing robust image retrieval methods that can be used for
archiving, browsing and searching. Various approaches have been
developed by researchers to solve the retrieval problem using
different image features, including color, texture and shape of
objects in the image. Our method is motivated from a geometric
invariance framework, which is based on invariance of conic
sections under the projective image transformation. First, conic
sections, which are fitted to object boundaries, are generated. The
invariance property of these conic sections is used to represent
the shape of the object boundaries. This representation provides an
invariant signature of that image. Once an invariant signature is
obtained for each image, certain classification methods are used to
test whether these signatures present unique characteristics for
each image group. Additionally, a retrieval mechanism is built that
uses invariant signatures of each image to build a relationship
with other images and to retrieve the most related ones. A measure
of the relationship between images is obtained by using two common
metrics: histogram intersection and minimum pair distance
assignment. It is hypothesized in this research that generated
invariant signatures present unique characteristics for each image
group and these signatures can be used for classification and
retrieval of images in a database. This hypothesis is satisfied in
terms of classification, but it is not satisfied for retrieval
problems because of degenerate conics.
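The two similarity metrics named in the abstract can be sketched directly. The histogram-intersection normalization below is one common convention, and the minimum pair distance is given a greedy reading; neither is claimed to be the thesis's exact definition.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima,
    normalized by the smaller histogram's total mass (one common
    convention; 1.0 means identical histograms)."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.minimum(h1, h2).sum() / min(h1.sum(), h2.sum())

def min_pair_distance(sig1, sig2):
    """Greedy minimum pair distance assignment: match each element of
    one signature to its closest element in the other and average."""
    sig1, sig2 = np.asarray(sig1, float), np.asarray(sig2, float)
    d = np.abs(sig1[:, None] - sig2[None, :])
    return d.min(axis=1).mean()
```

Both metrics compare invariant signatures without requiring an ordering of the conic sections, which matters because the boundary conics of two views of the same object need not be extracted in the same order.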
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Computer Science; image retrieval; image classification; projective invariant; viewpoint independent retrieval; viewpoint independent image classification
APA (6th Edition):
Ozendi, M. (2010). Viewpoint Independent Image Classification and
Retrieval. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830
Chicago Manual of Style (16th Edition):
Ozendi, Mustafa. “Viewpoint Independent Image Classification and
Retrieval.” 2010. Masters Thesis, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830.
MLA Handbook (7th Edition):
Ozendi, Mustafa. “Viewpoint Independent Image Classification and
Retrieval.” 2010. Web. 19 Jan 2021.
Vancouver:
Ozendi M. Viewpoint Independent Image Classification and
Retrieval. [Internet] [Masters thesis]. The Ohio State University; 2010. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830.
Council of Science Editors:
Ozendi M. Viewpoint Independent Image Classification and
Retrieval. [Masters Thesis]. The Ohio State University; 2010. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830
23.
Lai, Po-Lun.
Shape Recovery by Exploiting Planar Topology in 3D
Projective Space.
Degree: PhD, Geodetic Science and Surveying, 2010, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1268187247
► In the fields of photogrammetry and computer vision, three-dimensional (3D) shape recovery has remained an active research topic over the past decades. Accompanied by…
(more)
▼ In the fields of photogrammetry and computer
vision, three-dimensional (3D) shape recovery has remained an
active research topic over the past decades. Accompanied by the
boost in the development of sensor technology, considerable efforts
have been expended on image-based shape recovery. However, most of
the approaches still rely on calibrated cameras with known
orientations to establish the transformation between the object
space and image space, keeping them from being practical when the camera
interior and exterior parameters are
unavailable. In this research, a novel approach
is developed for the recovery of 3D object shape using uncalibrated
multiple-view images. The approach is based on the assumption that
the 3D projective space is composed of 2D discrete projective
subspaces. In the designed framework, a 2D subspace corresponds to
a set of hypothetical planes which create cross-sections by
slicing the objects in the scene. The images of such cross-sections
are obtained by planar projective transforms which are estimated
from the points defined on these hypothetical planes. A stack of
these cross-sections provides a projective recovery of the 3D
object shape. The resulting recovery becomes an affine or metric
recovery when the stack of cross-sections is transformed to an affine
or ortho-rectified image respectively, or when absolute ground
information is provided. In this dissertation,
two possible formations of subspaces are proposed, and several
experiments using image sets of different characteristics are
conducted to evaluate the performance of the proposed method.
Generated 3D shapes, when qualitatively examined or quantitatively
compared against ground-truth, show promising recovery
performance.
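The plane-slicing idea above can be illustrated with a crude visual-hull-style test: a candidate point on a hypothetical plane is kept only if every view's planar homography maps it into that view's silhouette. This is an illustrative sketch under that reading, not the dissertation's formulation.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 planar homography (homogeneous divide)."""
    pts = np.asarray(pts, float)
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(H, float).T
    return ph[:, :2] / ph[:, 2:3]

def slice_shape(plane_pts, homographies, masks):
    """One cross-section of the recovered shape: keep plane points whose
    image under every view's homography lands inside that view's
    silhouette mask (boolean HxW arrays). Stacking such slices over many
    hypothetical planes yields the projective shape recovery."""
    keep = np.ones(len(plane_pts), bool)
    for H, mask in zip(homographies, masks):
        uv = np.round(apply_homography(H, plane_pts)).astype(int)
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < mask.shape[1]) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < mask.shape[0]))
        hit = np.zeros(len(plane_pts), bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        keep &= hit
    return plane_pts[keep]
```

Because only homographies between the hypothetical plane and each image are needed, no calibrated camera parameters enter the computation, which is the abstract's key claim.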
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Computer Science; Remote Sensing; shape recovery; projective geometry; homography; silhouette
APA (6th Edition):
Lai, P. (2010). Shape Recovery by Exploiting Planar Topology in 3D
Projective Space. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1268187247
Chicago Manual of Style (16th Edition):
Lai, Po-Lun. “Shape Recovery by Exploiting Planar Topology in 3D
Projective Space.” 2010. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1268187247.
MLA Handbook (7th Edition):
Lai, Po-Lun. “Shape Recovery by Exploiting Planar Topology in 3D
Projective Space.” 2010. Web. 19 Jan 2021.
Vancouver:
Lai P. Shape Recovery by Exploiting Planar Topology in 3D
Projective Space. [Internet] [Doctoral dissertation]. The Ohio State University; 2010. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1268187247.
Council of Science Editors:
Lai P. Shape Recovery by Exploiting Planar Topology in 3D
Projective Space. [Doctoral Dissertation]. The Ohio State University; 2010. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1268187247

The Ohio State University
24.
Srestasathiern, Panu.
View Invariant Planar-Object Recognition.
Degree: MS, Geodetic Science and Surveying, 2008, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1420564069
► In many photogrammetry and computer vision applications, the ultimate goal is to recognize objects of interest. Various frameworks for the object recognition problem have been developed.…
(more)
▼ In many photogrammetry and computer vision
applications, the ultimate goal is to recognize objects of
interest. Various frameworks for the object recognition problem have
been developed. Among them, geometric invariance has
been proven to be an efficient way to recognize objects under
geometric transformations, e.g., the affine transformation. Motivated by
the geometric invariance framework, we propose a new method for
recognizing an isolated planar object under
geometric transformation. Namely, we assume that the object shape is
deformed by a projective transformation. First, we present a new
representation of the object's shape, based on the assumption that the
boundary of the object's shape can be approximately represented by
a set of piecewise conics. Second, a new projective invariant
feature is derived based on the distribution of the projective
relations between the conic pairs, which are estimated from the
object's shape. We hypothesize that two objects of the same type,
viewed from different viewpoints, generate similar
histograms, such that the distance between these two histograms is
smaller than the distance to histograms generated from other object types. The
proposed method has shown promising performance on our shape database,
in which object shapes are deformed by projective
transformations.
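The abstract above derives invariant features from projective relations between conic pairs. As a hedged illustration of why conic pairs admit projective invariants (a generic textbook construction, not necessarily the thesis's exact feature): under a homography H, a conic matrix C maps to H^{-T} C H^{-1}, so the product C1^{-1}C2 transforms by a similarity and its eigenvalue ratios are preserved. A minimal numpy sketch with made-up conics and a made-up homography:

```python
import numpy as np

def conic_pair_invariants(C1, C2):
    """Eigenvalue ratios of C1^{-1} C2. Since both conics transform as
    C -> H^{-T} C H^{-1}, the product transforms by similarity, so its
    eigenvalues survive any homography; normalizing by the largest one
    removes the arbitrary scale of the conic matrices."""
    lam = np.sort(np.linalg.eigvals(np.linalg.inv(C1) @ C2).real)
    return lam / lam[-1]

# Two example conics (symmetric 3x3 matrices) -- invented for illustration
C1 = np.diag([1.0, 2.0, -1.0])
C2 = np.array([[1.0, 0.2, 0.0],
               [0.2, 1.5, 0.1],
               [0.0, 0.1, -2.0]])

# An arbitrary invertible homography (projective transformation)
H = np.array([[1.1, 0.3, 2.0],
              [-0.2, 0.9, 1.0],
              [0.001, 0.002, 1.0]])
Hinv = np.linalg.inv(H)
C1t = Hinv.T @ C1 @ Hinv   # conics after the homography
C2t = Hinv.T @ C2 @ Hinv

same = np.allclose(conic_pair_invariants(C1, C2),
                   conic_pair_invariants(C1t, C2t))
print(same)  # → True: the invariants are unchanged by the homography
```

Histograms of such invariants over many conic pairs could then be compared across views, in the spirit of the abstract.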
Advisors/Committee Members: Yilmaz, Alper (Advisor).
Subjects/Keywords: Civil Engineering
APA (6th Edition):
Srestasathiern, P. (2008). View Invariant Planar-Object Recognition. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1420564069
Chicago Manual of Style (16th Edition):
Srestasathiern, Panu. “View Invariant Planar-Object Recognition.” 2008. Masters Thesis, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1420564069.
MLA Handbook (7th Edition):
Srestasathiern, Panu. “View Invariant Planar-Object Recognition.” 2008. Web. 19 Jan 2021.
Vancouver:
Srestasathiern P. View Invariant Planar-Object Recognition. [Internet] [Masters thesis]. The Ohio State University; 2008. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1420564069.
Council of Science Editors:
Srestasathiern P. View Invariant Planar-Object Recognition. [Masters Thesis]. The Ohio State University; 2008. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1420564069

The Ohio State University
25.
Lee, Won Hee.
Bundle block adjustment using 3D natural cubic
splines.
Degree: PhD, Geodetic Science and Surveying, 2008, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1211476222
► One of the major tasks in digital photogrammetry is to determine the orientation parameters of aerial images correctly and quickly, which involves two primary…
(more)
▼ One of the major tasks in digital
photogrammetry is to determine the orientation parameters of aerial
images correctly and quickly, which involves the two primary steps
of interior orientation and exterior orientation. Interior
orientation defines a transformation to a 3D image coordinate
system with respect to the camera's perspective center, while a
pixel coordinate system is the reference system for a digital
image, using the geometric relationship between the photo
coordinate system and the instrument coordinate system. Since
aerial photography provides the interior orientation parameters,
the problem reduces to determining the exterior orientation with
respect to the object coordinate system. Exterior orientation
establishes the position of the camera projection center in the
ground coordinate system and the three rotation angles of the camera
axis that represent the transformation between the image and the
object coordinate systems. The exterior orientation parameters (EOPs) of
a stereo model consisting of two aerial images can be obtained
using relative and absolute orientation. The EOPs of multiple
overlapping aerial images can be computed using bundle block
adjustment. Bundle block adjustment reduces the cost of field
surveying in difficult areas and verifies the accuracy of field
surveying during the adjustment process. Bundle
block adjustment is a fundamental task in many applications, such
as surface reconstruction, orthophoto generation, image
registration and object recognition. Point-based
methods with experienced human operators work well in
traditional photogrammetric activities, but not in the autonomous
environment of digital photogrammetry. To develop more robust and
accurate techniques, higher-level objects such as straight linear
features, accommodating elements other than points, are adopted
in aerial triangulation. Even though recent
advanced algorithms provide accurate and reliable linear feature
extraction, extracting linear features is more difficult than
extracting a discrete set of points, which can constitute any form
of curve. Control points, which are the initial input data, and
break points, which are the end points of piecewise curves, are easily
obtained with manual digitizing, edge operators or interest
operators. Employing high-level features increases the feasibility of
the geometric information and provides an analytical solution
suited to advanced computer
technology.
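For context on the exterior orientation described above, here is a minimal sketch of the classical collinearity projection that bundle block adjustment repeatedly evaluates. The omega-phi-kappa convention, focal length, and numbers below are generic textbook assumptions, not the thesis's spline-based formulation:

```python
import numpy as np

def rotation_from_opk(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (radians), one common
    convention for the three exterior orientation rotation angles."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(ground_point, camera_center, R, f):
    """Project a ground point to photo coordinates (x, y) via the
    collinearity equations, with focal length f in the same units."""
    u = R.T @ (ground_point - camera_center)  # vector in the camera frame
    return np.array([-f * u[0] / u[2], -f * u[1] / u[2]])

# A nadir-looking camera 1000 m above the origin (invented numbers):
center = np.array([0.0, 0.0, 1000.0])
R = rotation_from_opk(0.0, 0.0, 0.0)
origin_xy = collinearity(np.array([0.0, 0.0, 0.0]), center, R, 0.15)
offset_xy = collinearity(np.array([10.0, 0.0, 0.0]), center, R, 0.15)
```

Bundle block adjustment then solves for the EOPs (and object coordinates) that minimize the residuals of many such projections against observed image measurements.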
Advisors/Committee Members: Schenk, Anton F. (Advisor), Yilmaz, Alper (Committee Co-Chair).
Subjects/Keywords: Civil Engineering; bundle block adjustment; feature-based photogrammetry; line photogrammetry; natural cubic splines
APA (6th Edition):
Lee, W. H. (2008). Bundle block adjustment using 3D natural cubic
splines. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1211476222
Chicago Manual of Style (16th Edition):
Lee, Won Hee. “Bundle block adjustment using 3D natural cubic
splines.” 2008. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1211476222.
MLA Handbook (7th Edition):
Lee, Won Hee. “Bundle block adjustment using 3D natural cubic
splines.” 2008. Web. 19 Jan 2021.
Vancouver:
Lee WH. Bundle block adjustment using 3D natural cubic
splines. [Internet] [Doctoral dissertation]. The Ohio State University; 2008. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1211476222.
Council of Science Editors:
Lee WH. Bundle block adjustment using 3D natural cubic
splines. [Doctoral Dissertation]. The Ohio State University; 2008. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1211476222

The Ohio State University
26.
Ding, Lei.
From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision.
Degree: PhD, Computer Science and Engineering, 2010, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1289845859
► In this dissertation, we study grouping problems in computer vision using graph-based machine learning techniques. Grouping problems abound in computer vision and are typically…
(more)
▼ In this dissertation, we study grouping
problems in computer vision using graph-based machine learning
techniques. Grouping problems abound in computer vision and are
typically challenging when perceptually and
semantically consistent results are required. In the context of this
dissertation, we strive to (1) group image pixels into meaningful
objects and backgrounds; and (2) group interacting people present in a
video into sound social communities. Traditionally, in a graph-based
formulation, the entities (e.g., image pixels) are treated as graph
vertices and their interrelations are encoded in a weighted
adjacency matrix of the graph. In this dissertation, we go beyond
standard graph construction methods by building on probabilistic
image hypergraphs and learned social graphs (or social networks)
for the two parts of the work, respectively. Learning on graphs results
in a labeling of the entities. In our work, graph-based smoothness and
modularity measures are examined and adapted to the problems under
study. Under this general graph-based framework,
the first pursued direction is interactive image segmentation, or
the problem of grouping image pixels into meaningful objects and
their backgrounds, given a limited number of user-supplied seeds.
Our contributions in this direction include the probabilistic
hypergraph image model (PHIM), which addresses higher-order relations
among pixels in segment labels that are commonly ignored by
competing approaches. To further alleviate the dependence of
interactive segmentation on user-supplied seeds, we introduce
diffusion signatures derived from salient boundaries and present a
framework for automatically introducing new seeds at critical image
locations in order to enhance segmentation results. Both proposed
frameworks are extensively tested on a standard image dataset and
achieve excellent quantitative and qualitative
segmentation results. In the second direction, we
contribute an automatic framework to infer relations among actors
from videos. In particular, we propose a principled graph-based
affinity learning method, which synthesizes both co-occurrence
information among actors and local grouping cue estimates at the
scene level in order to make informed decisions. Once the pairwise
affinities between actors are learned from the video content using
visual and auditory features, we perform social network analysis
based on modularity measures to detect communities, which are
groups of actors. Experiments on a dataset of ten movies that we
collected have shown promising results. Moreover, the proposed
framework considerably outperforms baseline methods that do not use
visual or auditory features, suggesting the importance of
audiovisual cues in high-level relational understanding
tasks. In summary, built on a graph-based
learning framework, this dissertation makes contributions to
grouping problems in computer vision. Specifically, we have
proposed effective techniques to solve problems in both low-level
analysis of images (segmentation) and high-level…
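The abstract above mentions detecting communities of actors via modularity measures. A minimal sketch of Newman's modularity on a toy graph illustrates the idea (the graph, partition, and numbers are invented; the thesis's learned audiovisual affinities are not reproduced here):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph, given
    its adjacency matrix A and integer community labels per vertex:
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)                 # vertex degrees
    m2 = A.sum()                      # 2m: every edge counted twice
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / m2) * same).sum() / m2

# Two tight triangles joined by a single bridge edge: a clear
# two-community structure on six vertices.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))  # split at the bridge
bad = modularity(A, np.array([0, 1, 0, 1, 0, 1]))   # arbitrary split
print(good > bad)  # → True: the natural split scores higher
```

Community detection maximizes Q over partitions; here the bridge-respecting split scores about 0.357 while the arbitrary one is negative.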
Advisors/Committee Members: Belkin, Mikhail (Committee Chair), Yilmaz, Alper (Committee Co-Chair).
Subjects/Keywords: Computer Science; Graph-based Machine Learning; Computer Vision
APA (6th Edition):
Ding, L. (2010). From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1289845859
Chicago Manual of Style (16th Edition):
Ding, Lei. “From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision.” 2010. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=osu1289845859.
MLA Handbook (7th Edition):
Ding, Lei. “From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision.” 2010. Web. 19 Jan 2021.
Vancouver:
Ding L. From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision. [Internet] [Doctoral dissertation]. The Ohio State University; 2010. [cited 2021 Jan 19].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1289845859.
Council of Science Editors:
Ding L. From Pixels to People: Graph Based Methods for Grouping
Problems in Computer Vision. [Doctoral Dissertation]. The Ohio State University; 2010. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1289845859