You searched for +publisher:"University of North Carolina" +contributor:("Ji, Dinghuang"). One record found.

University of North Carolina

1. Ji, Dinghuang. Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements.

Degree: Computer Science, 2018, University of North Carolina

Our world is filled with living beings and other dynamic elements. Recording dynamic things and events is important for education, archaeology, and cultural heritage. From ancient to modern times, people have recorded dynamic scene elements in different ways, from sequences of cave paintings to frames of motion pictures. This thesis focuses on two key computer vision techniques that take the representation of dynamic elements beyond video capture: 3D reconstruction and view synthesis. Although previous methods in these two areas have been used to model and represent static scene elements, dynamic scene elements pose unique and difficult challenges for both tasks.

The thesis considers three types of dynamic scene elements: 1) dynamic textures with static shapes, 2) dynamic shapes with static textures, and 3) dynamic illumination of static scenes. Two research directions are explored to represent and visualize them: dynamic 3D reconstruction and dynamic view synthesis. Dynamic 3D reconstruction aims to recover the 3D geometry of dynamic objects and, by modeling the objects’ movements, bring 3D reconstructions to life. Dynamic view synthesis, on the other hand, summarizes or predicts the dynamic appearance change of dynamic objects – for example, the daytime-to-nighttime illumination of a building or the future movements of a rigid body.

We first target the problem of reconstructing dynamic textures of objects that have (approximately) fixed 3D shape but time-varying appearance. Examples of such objects include waterfalls, fountains, and electronic billboards. Since the appearance of dynamic-textured objects can be random and complicated, estimating the 3D geometry of these objects from 2D images/video requires novel tools beyond the appearance-based point-correspondence methods of traditional 3D computer vision.
To perform this 3D reconstruction, we introduce a method that simultaneously 1) segments dynamically textured scene objects in the input images and 2) reconstructs the 3D geometry of the entire scene, assuming a static 3D shape for the dynamically textured objects.

Compared to dynamic textures, the appearance change of dynamic shapes is due to physically defined motions such as rigid-body movements. In these cases, assumptions can be made about the object’s motion constraints in order to identify corresponding points on the object at different time points. For example, two points on a rigid object keep a constant distance between them in 3D space, no matter how the object moves. Based on this assumption of local rigidity, we propose a robust method to correctly identify point correspondences between two images that view the same moving object from different viewpoints and at different times. Dense 3D geometry can then be obtained from the computed point correspondences. We apply this method to unsynchronized video streams and observe that the number of inlier correspondences it finds can be used as an indicator for frame alignment among the different streams. … Advisors/Committee Members: Ji, Dinghuang, Frahm, Jan-Michael, Dunn, Enrique, Berg, Tamara, Niethammer, Marc, Savarese, Silvio.
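The local-rigidity idea in the abstract can be sketched in a few lines: a rigid motion preserves all inter-point distances, so a candidate correspondence whose distances to the other matched points change between the two time instants is an outlier. The following is a minimal illustration only, not the thesis's actual method — the function name and tolerance are invented here, and it operates on already-triangulated 3D points rather than the image-space correspondences the thesis works with:

```python
import numpy as np

def rigidity_inliers(pts_a, pts_b, tol=0.05):
    """Classify candidate 3D point correspondences as inliers under local rigidity.

    pts_a, pts_b: (N, 3) arrays of matched points observed at two times.
    A rigid motion preserves every pairwise distance, so a correspondence
    whose distances to the other matches change is flagged as an outlier.
    """
    # Pairwise distance matrices at each time instant, via broadcasting.
    da = np.linalg.norm(pts_a[:, None, :] - pts_a[None, :, :], axis=-1)
    db = np.linalg.norm(pts_b[:, None, :] - pts_b[None, :, :], axis=-1)
    err = np.abs(da - db)
    np.fill_diagonal(err, 0.0)  # ignore self-distances
    # A point is an inlier if most of its pairwise distances are preserved;
    # the median makes the test robust to a few outlier partners.
    return np.median(err, axis=1) < tol

# Demo: a rigid motion (rotation about z plus translation) with one
# deliberately corrupted correspondence.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
moved = pts @ R.T + np.array([1.0, 2.0, 0.5])
moved[0] += 5.0  # corrupt the first correspondence
mask = rigidity_inliers(pts, moved)  # only the corrupted point is rejected
```

Summing such inlier counts over candidate time offsets between two unsynchronized streams gives the frame-alignment signal the abstract mentions: the offset at which the streams are best aligned is the one maximizing the number of rigidity-consistent correspondences.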

Subjects/Keywords: College of Arts and Sciences; Department of Computer Science



APA (6th Edition):

Ji, D. (2018). Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements. (Thesis). University of North Carolina. Retrieved from https://cdr.lib.unc.edu/record/uuid:eaeceb64-ad74-416d-a0b6-b4ee48512f8d

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ji, Dinghuang. “Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements.” 2018. Thesis, University of North Carolina. Accessed January 16, 2021. https://cdr.lib.unc.edu/record/uuid:eaeceb64-ad74-416d-a0b6-b4ee48512f8d.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ji, Dinghuang. “Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements.” 2018. Web. 16 Jan 2021.

Vancouver:

Ji D. Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements. [Internet] [Thesis]. University of North Carolina; 2018. [cited 2021 Jan 16]. Available from: https://cdr.lib.unc.edu/record/uuid:eaeceb64-ad74-416d-a0b6-b4ee48512f8d.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ji D. Data-driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements. [Thesis]. University of North Carolina; 2018. Available from: https://cdr.lib.unc.edu/record/uuid:eaeceb64-ad74-416d-a0b6-b4ee48512f8d

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
