Title: "3D Mesh Generation by Spherical Panoramic Videogrammetry"


Introduction:

The creation of accurate and detailed 3D urban models has become critically important for modern city management and development. These digital representations of the built environment are indispensable tools across numerous domains, including urban planning and development, disaster management and security, smart governance and e-municipality initiatives, navigation and positioning, simulation and gamification, environmental studies, and cadastral applications. Producing these complex models relies on diverse methodologies from space, aerial, and terrestrial platforms using imaging and LiDAR sensors. Terrestrial photogrammetry complements the aerial and space solutions because it collects side-view data from streets and building facades. One established approach is spherical panoramic videogrammetry, which offers a rapid and cost-effective way to capture comprehensive, high-resolution ground-level imagery, enabling the creation of detailed, textured 3D models of urban features (especially those obscured from aerial view) that can be seamlessly integrated with space/aerial datasets. In this competition, we want to evaluate the capabilities of spherical panoramic videogrammetry for this purpose.

Problem Statement:

1- Extract sequential frames from the video captured of the central building facade of the faculty. Optionally, you may collect your own data of this building with your spherical panoramic sensor according to your own observational plan.
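The actual frame decoding can be done with any existing tool (e.g. OpenCV or ffmpeg). The sketch below only illustrates the sampling decision, a trade-off this step involves: a longer interval between extracted frames widens the stereo baseline but reduces overlap. All parameter values here are illustrative assumptions, not prescribed settings.

```python
# Sketch: choose which video frames to extract for videogrammetry.
# Decoding itself would be delegated to OpenCV (cv2.VideoCapture) or ffmpeg;
# here we only compute the sampling indices.

def frame_indices(fps: float, duration_s: float, step_s: float) -> list[int]:
    """Return frame indices sampled every `step_s` seconds.

    A larger step widens the baseline between consecutive frames, which
    generally helps depth estimation but reduces image overlap.
    """
    step_frames = max(1, round(fps * step_s))   # never skip below 1 frame
    total_frames = int(fps * duration_s)
    return list(range(0, total_frames, step_frames))

# Example: a 30 fps, 10 s pass along the facade, sampled twice per second.
idx = frame_indices(fps=30, duration_s=10, step_s=0.5)
print(len(idx), idx[:4])  # 20 [0, 15, 30, 45]
```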

2- Measure GCPs and scale bars from the existing LiDAR point cloud dataset.
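A scale bar derived from the LiDAR cloud is simply the distance between two identifiable points; that distance then constrains the scale of the photogrammetric bundle adjustment. A minimal sketch (the coordinates are made-up illustration values, not from the dataset):

```python
import math

# Sketch: derive a scale-bar length from two points picked in the LiDAR
# point cloud. The coordinates below are hypothetical examples.

def point_distance(p: tuple, q: tuple) -> float:
    """Euclidean distance between two 3D points, e.g. scale-bar endpoints."""
    return math.dist(p, q)

# Two hypothetical facade corners picked from the LiDAR cloud (metres):
a = (12.40, 3.75, 0.00)
b = (15.40, 7.75, 0.00)
print(point_distance(a, b))  # 5.0 (metres between the picked points)
```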

3- Perform bundle adjustment with self-calibration, point cloud extraction, mesh generation, and texture draping using existing software. DO NOT USE the LiDAR data in the 3D model generation process; doing so disqualifies you from the competition.

4- Enhance the 3D textured mesh by deep learning, using your own algorithm, so that both the appearance and the geometric accuracy of the 3D model improve. Consider geometric details such as lines, curves, edges, and planes. One idea: render the 3D textured mesh from the viewpoints of the existing images and compare the renders with those images; your deep learning algorithm should minimize the differences.
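The render-and-compare idea above amounts to minimizing a photometric loss between a mesh render and the corresponding original frame. The sketch below shows one such loss (mean absolute error) on toy arrays; in a real pipeline the two images would share the same camera pose and resolution, and the loss would drive the refinement network.

```python
import numpy as np

# Sketch: photometric loss between a mesh render and the original frame.
# Arrays here are toy data; real inputs would be full-resolution images
# rendered from the recovered camera poses.

def photometric_l1(rendered: np.ndarray, observed: np.ndarray) -> float:
    """Mean absolute per-pixel difference: the quantity a refinement
    network would be trained to minimize."""
    return float(np.mean(np.abs(rendered.astype(np.float64)
                                - observed.astype(np.float64))))

render = np.array([[0.2, 0.4], [0.6, 0.8]])
photo  = np.array([[0.3, 0.4], [0.6, 0.6]])
print(photometric_l1(render, photo))  # ≈ 0.075
```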

5- Evaluate the 3D model by comparing it with the LiDAR point cloud as ground truth and by its visualization quality. Optionally, develop or use visualization software of your choice.
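One common geometric metric for this comparison is the RMSE of nearest-neighbour distances from points sampled on the reconstructed mesh to the LiDAR cloud. The sketch below uses a brute-force nearest-neighbour search on toy clouds; at real scale a KD-tree (or a dedicated tool such as CloudCompare) would be used instead.

```python
import numpy as np

# Sketch: RMSE of nearest-neighbour distances from mesh-sampled points to
# the LiDAR ground truth. Brute force is fine for a toy example; a KD-tree
# would be needed at real point-cloud sizes.

def cloud_to_cloud_rmse(mesh_pts: np.ndarray, lidar_pts: np.ndarray) -> float:
    # Pairwise squared distances: (n_mesh, n_lidar)
    d2 = ((mesh_pts[:, None, :] - lidar_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = np.sqrt(d2.min(axis=1))        # one distance per mesh point
    return float(np.sqrt(np.mean(nearest ** 2)))

# Toy clouds: every "mesh" point sits 0.1 m off a "LiDAR" point.
lidar = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
mesh = lidar + np.array([0.1, 0., 0.])
print(cloud_to_cloud_rmse(mesh, lidar))  # ≈ 0.1
```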

Output:

    • The final 3D textured mesh, preferably in OBJ format
    • Technical report including a title page, the proposed solution, experimental data, and an evaluation
    • Other related raw and processed data, plus developed software and code, organized in separate folders

How to evaluate your output:

Our specialists will review your solution and output.

    • How logical and detailed is the solution? 0-20 points
    • Visualization quality of 3D model: 0-40 points
    • Geometric quality of 3D model (compared to LiDAR data): 0-40 points


Figure 1: Ricoh spherical camera, a sample video frame of the building, and the LiDAR point cloud ground truth

You can access this Dataset.