Photogrammetry is one of the main 3D modeling methods. It allows the creation of 3D models of various physical objects: buildings, landscapes, machine parts, and more. The target object is shot from different angles, and the received images serve as a basis for the future 3D model.
Photogrammetry is widely used in many different fields of activity, including architecture, manufacturing, engineering, geology, topographic mapping, quality control, and medicine. It allows us to explore space, test car components, create accurate maps, study ancient architecture, and more.
3D models created with the help of photogrammetry are highly realistic. For example, a detailed 3D model of Notre-Dame from the video game Assassin's Creed helped assess the damage and rebuild the cathedral after the 2019 fire.
Here’s a simple and comprehensive guide created by Softarex software solutions for ambitious beginners who have always wanted to learn how to use photogrammetry to create 3D models. It covers the main stages of the 3D model creation process, broken down into steps. If you follow the instructions in this guide, you will learn how to build 3D models from scratch.
To create a high-quality 3D model, you need to get the clearest photos possible. Therefore, it’s important to choose the right camera.
It’s preferable to use high-resolution cameras with lenses that exhibit minimal optical distortion. A lens with a standard focal length, such as 50 mm, is recommended. Try not to change the lens during shooting, as this may affect the result.
You also need to set intrinsic parameters of the camera. They will be used for further calculations. For the shooting scheme described above, you can use the universal intrinsic parameters of a pinhole camera.
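For the simple pinhole model mentioned above, the intrinsic parameters can be collected into a single 3×3 matrix. The sketch below builds it in NumPy; the focal length in pixels and the image size are made-up illustrative values, not calibrated ones.

```python
import numpy as np

def pinhole_intrinsics(focal_px: float, width: int, height: int) -> np.ndarray:
    """Build the 3x3 intrinsic matrix K of an ideal pinhole camera.

    Assumes square pixels, zero skew, and the principal point at the
    image center; these are simplifying defaults, not calibrated values.
    """
    return np.array([
        [focal_px, 0.0,      width / 2.0],   # fx, skew, cx
        [0.0,      focal_px, height / 2.0],  # 0,  fy,   cy
        [0.0,      0.0,      1.0],
    ])

# Example: a 50 mm lens whose focal length corresponds to ~4000 px
# on a 6000 x 4000 sensor (illustrative numbers).
K = pinhole_intrinsics(4000.0, 6000, 4000)
```

Calibration software replaces these defaults with measured values, but the matrix shape stays the same throughout the pipeline.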
After you have chosen the camera, place the object so that all its surfaces are evenly lit. Proper object placement is crucial for building a model of the best quality. Avoid harsh shadows; soft, diffused lighting works best.
Now you can start shooting the object. First, shoot the whole object while moving around it, and then focus on the details, gradually zooming in. Move the camera at a constant speed to avoid blurred frames and motion blur. Change the angle of view to capture points on top of the object, but make sure the angle between adjacent viewpoints is less than 30 degrees; the smaller the angle, the better.
The object should be placed in the center of the frame and occupy most of the frame. The image should be clean and sharp. Every surface of the object must be visible from three or more angles. Three adjacent frames should overlap each other.
An example of a shooting process.
The next stage of building a 3D model is detecting key points. These are points located where the image gradient changes sharply in both the X and Y directions, that is, corner points. Their detection is based on the autocorrelation (structure tensor) matrix and an image pyramid.
Key points in two views.
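The corner-detection principle can be sketched in pure NumPy with the classic Harris response derived from the autocorrelation matrix. This is a single-scale toy version on a synthetic image; a real detector would also run it over an image pyramid for scale invariance.

```python
import numpy as np

def harris_response(img: np.ndarray, win: int = 2, k: float = 0.04) -> np.ndarray:
    """Corner response from the autocorrelation (structure tensor) matrix:
    R = det(M) - k * trace(M)^2.

    R >> 0 at corners, R < 0 on edges, R ~ 0 in flat regions.
    """
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    R = np.zeros_like(img, dtype=float)
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            sxx, syy, sxy = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            R[y, x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return R

# Synthetic test image: a bright square whose corners should fire.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

Evaluating `R` at the square's corner (5, 5) gives a large positive value, at an edge midpoint a negative one, and zero in flat areas, which is exactly the behavior a key-point detector exploits.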
After you have detected all the key points, you should match the points found in each image with those in the adjacent images. To do so, form all possible pairs of images. Within each pair, the key points are matched using the cascade hashing method.
Next, you need to filter the key-point correspondences for every pair of images to discard points that cannot be used for building a 3D model. To do so, for each point select its two closest neighboring candidates and compare their distances: if the best match is not clearly closer than the second best, discard it (Lowe's ratio test).
The result of the filtering process; only 50% of all matches are shown in the image.
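A minimal sketch of this matching-and-filtering step, using brute-force nearest-neighbor search (cascade hashing only accelerates this search; the filtering logic is the same) and the two-closest-neighbors distance comparison:

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Match descriptors between two images, keeping a match only when
    the nearest neighbor is clearly closer than the second nearest
    (Lowe's ratio test).  Brute force is used here for clarity."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Toy 2-D descriptors: rows 0 and 1 have clear matches, row 2 of desc1
# is ambiguous (two near-identical candidates) and should be rejected.
desc1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
desc2 = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.4], [0.5, 0.6]])
matches = match_ratio_test(desc1, desc2)
```

The ambiguous descriptor is dropped because its two best candidates are almost equally distant, which is precisely the kind of unreliable correspondence this filter removes.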
Not all points can be used in the 3D reconstruction. The illustration below shows that, after switching to the neighboring view, the lower point appeared between the 1st and 2nd points at the top, although it should have stayed at the same level as the lowermost point. Such a point can't be used in 3D reconstruction.
To avoid such discrepancies, you need to perform additional filtering.
The additional filter for key-point correspondences is the epipolar constraint. To apply it, use the camera positions and the fundamental matrix estimated between the two cameras. After filtering, there are considerably fewer mismatched points, as seen below.
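A toy check of the epipolar constraint, x2ᵀ F x1 = 0, can be written in a few lines of NumPy. The camera setup below (intrinsics K, pure sideways translation between the views) is an illustrative assumption; in a real pipeline F is estimated robustly from the matches themselves, for example with RANSAC.

```python
import numpy as np

# Synthetic two-view geometry (illustrative values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
t = np.array([1.0, 0.0, 0.0])
t_skew = np.array([[0.0, 0.0,  0.0],          # cross-product matrix [t]_x
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])
K_inv = np.linalg.inv(K)
F = K_inv.T @ t_skew @ K_inv                  # F = K^-T [t]_x R K^-1, R = I

def project(X):
    """Homogeneous pixel coordinates of a 3D point in camera coordinates."""
    x = K @ X
    return x / x[2]

X_a = np.array([0.2, 0.1, 4.0])               # a correctly matched scene point
X_b = np.array([-0.3, 0.4, 5.0])              # a different scene point
good = abs(project(X_a + t) @ F @ project(X_a))   # true pair: residual ~ 0
bad = abs(project(X_b + t) @ F @ project(X_a))    # mismatch: large residual
```

Thresholding this residual (in practice, a distance-to-epipolar-line measure) is what discards the mismatched pairs.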
Next, you should build a point cloud from the filtered point pairs using sequential (incremental) reconstruction. First, select the two views that share the largest number of common point pairs. Then iteratively expand the reconstruction scene by adding new views and 3D points using pose estimation and triangulation.
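The triangulation step can be sketched with the standard linear (DLT) method: each pixel observation contributes two rows to a homogeneous system whose null space is the 3D point. The camera matrices below are synthetic illustrative values; pose estimation (PnP) would supply them for new views in a real pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: find the 3D point whose projections
    through the 3x4 camera matrices P1 and P2 land on pixels x1 and x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# Synthetic two-view setup: shared intrinsics K, identity rotations,
# second camera shifted sideways by one unit (illustrative numbers).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)   # recovers X_true from the two pixels
```

With exact observations the recovery is exact; with noisy matches the SVD returns the least-squares point, which is why outlier filtering beforehand matters so much.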
After that, you should extend the point cloud with additional 3D points. To do so, calculate a depth map for each frame and then, using the obtained depth maps, construct the additional points.
As you can see, a depth map contains a large number of points outside the target object. To remove unnecessary points, you need to calculate the confidence of each point, and then discard the points with lower confidence.
The higher the confidence of a point, the lighter it is.
Depth maps should be made and then filtered for all the images in the dataset. As a result, you will get depth maps of quite a good quality. They will form the basis for a fairly dense cloud of points.
Thanks to this step of the 3D reconstruction, 1,120,607 points were added to the point cloud.
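A heavily simplified 1-D sketch of how a depth map is computed from a rectified pair of views: for a pixel in one view, find the horizontal shift (disparity) that minimizes the sum of absolute differences against the other view, then convert disparity to depth as depth = focal_px × baseline / disparity. The focal length and baseline are made-up illustrative values; real pipelines use plane-sweep or semi-global matching over 2-D patches.

```python
import numpy as np

rng = np.random.default_rng(0)
focal_px, baseline, true_disp = 700.0, 0.2, 3   # illustrative values
left = rng.random(40)                           # one scanline of the left view
right = np.roll(left, -true_disp)               # right view: content shifts left

def best_disparity(x, max_d=8, win=2):
    """SAD block matching along the scanline for the pixel at index x."""
    patch = left[x - win : x + win + 1]
    costs = [np.abs(right[x - d - win : x - d + win + 1] - patch).sum()
             for d in range(max_d + 1)]
    return int(np.argmin(costs))

d = best_disparity(20)
depth = focal_px * baseline / d    # e.g. meters, if the baseline is in meters
```

The recovered disparity equals the known shift, and the same disparity-to-depth conversion, applied per pixel, is what produces the dense depth maps described above.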
The initial mesh may still contain isolated triangles that satisfy the visibility constraint but come from false points in the background of the scene or in the sky. Furthermore, it may capture landscape that is far from the scene and doesn't need to be reconstructed in detail. Refining such distant parts is practically impossible because calibration is inexact for scenes far from the cameras.
The model surface after the refinement algorithm.
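One such cleanup step, removing isolated triangle islands, can be sketched with union-find over shared edges. This assumes the mesh is given as vertex-index triples; the size threshold is an illustrative parameter.

```python
from collections import Counter

def drop_small_components(faces, min_faces):
    """Remove isolated triangle islands from a mesh: triangles sharing an
    edge are grouped with union-find, and groups smaller than `min_faces`
    (e.g. spurious background or sky geometry) are discarded."""
    parent = list(range(len(faces)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    edge_owner = {}
    for fi, (a, b, c) in enumerate(faces):
        for edge in ((a, b), (b, c), (a, c)):
            key = tuple(sorted(edge))
            if key in edge_owner:
                parent[find(fi)] = find(edge_owner[key])   # merge components
            else:
                edge_owner[key] = fi

    sizes = Counter(find(i) for i in range(len(faces)))
    return [f for i, f in enumerate(faces) if sizes[find(i)] >= min_faces]

# A 3-triangle strip plus one floating triangle far from the object.
faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (10, 11, 12)]
kept = drop_small_components(faces, min_faces=2)
```

The floating triangle is discarded while the connected strip survives, mirroring how spurious background geometry is pruned from the reconstruction.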
It is the final stage of 3D modeling. Texturing can be broken down into the following steps:
- First, you need to determine the visibility of model faces in the input images.
- Second, you should select a view for each face. This process follows a modified version of Lempitsky and Ivanov's algorithm: a single view per face is selected by optimizing a Markov random field energy formulation with graph cuts and alpha expansion. A minority of views may see wrong colors, for example because of an occluder, and be much less correlated with the rest. Use a modified mean-shift algorithm to obtain a list of photo-consistent views for each face, and penalize the remaining views to prevent their selection.
- After view selection, the resulting texture patches may show strong color discontinuities due to differences in exposure and illumination, or even different camera response curves. Therefore, you need to photometrically adjust adjacent texture patches to make their seams less noticeable. Calculate globally optimal luminance correction terms, added to the vertex luminances, subject to two intuitive constraints: after the adjustment, the differences at seams and the derivative of the adjustments within a texture patch should both be small.
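The luminance-adjustment idea can be illustrated with a deliberately reduced two-patch case. The full method solves a global least-squares problem over all seams; here, shifting each patch by half the seam difference already makes the shared seam continuous. The patch values are made-up illustrative numbers.

```python
import numpy as np

# Two adjacent texture patches of the same surface taken under different
# exposures, so their shared seam shows a luminance jump.
patch_a = np.full((4, 4), 100.0)   # right column of patch_a is the seam
patch_b = np.full((4, 4), 130.0)   # left column of patch_b is the seam

# Additive luminance correction terms, one per patch: split the seam
# difference between the two patches (a 1-seam special case of the
# global least-squares adjustment).
seam_gap = patch_b[:, 0].mean() - patch_a[:, -1].mean()
patch_a += seam_gap / 2.0
patch_b -= seam_gap / 2.0
```

After the correction both patches meet at the same luminance, so the seam is no longer visible; the global formulation generalizes this to many patches and keeps the corrections smooth inside each one.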
The final model may still not be perfect. It may take you some time to fix all the remaining problems, but the result will be worth the time and effort. Follow the instructions above step by step, and you will learn how to use photogrammetry for 3D modeling. All you need to start is the right camera and a powerful PC. If any questions are left unanswered, or you have suggestions, feel free to contact us.