
Photogrammetry: Practical Guide for Newbies


Photogrammetry is one of the main 3D modeling methods. It allows the creation of 3D models of various physical objects: buildings, landscapes, machine parts, and more. The target object is photographed from different angles, and the resulting images serve as the basis for the future 3D model.

Photogrammetry is widely used in many different fields of activity, including architecture, manufacturing, engineering, geology, topographic mapping, quality control, and medicine. It allows us to explore space, test car components, create accurate maps, study ancient architecture, and more.

3D models created with the help of photogrammetry are super-realistic. For example, a highly detailed 3D model of Notre-Dame from the video game Assassin’s Creed helped assess the damage and rebuild the cathedral after the fire in 2019.

Here’s a simple and comprehensive guide created by Softarex software solutions for ambitious beginners who have always wanted to learn how to use photogrammetry for creating 3D models. It covers the main stages of the 3D model creation process, broken down into steps. If you follow the instructions in the guide, you will learn how to build 3D models from scratch.

Choosing the Camera

To create a high-quality 3D model, you need to get the clearest photos possible. Therefore, it’s important to choose the right camera.

It’s preferable to use high-resolution cameras with lenses that exhibit minimal optical distortion. Lenses with a standard focal length, such as 50 mm, are recommended. Try not to change the lens during shooting, as this may affect the result.

You also need to set the intrinsic parameters of the camera; they will be used in later calculations. For the shooting scheme described above, you can use the universal intrinsic parameters of a pinhole camera.
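As an illustration, the pinhole intrinsics form a 3x3 matrix K built from the focal length (in pixels) and the principal point. The numbers below are assumed values for a hypothetical 4000x3000 px sensor, not measurements from the guide; a real camera would be calibrated to obtain them:

```python
import numpy as np

# Assumed values for a hypothetical 4000x3000 px sensor; a real
# camera is calibrated (e.g. with a checkerboard) to obtain these.
fx = fy = 4200.0            # focal length in pixels
cx, cy = 2000.0, 1500.0     # principal point, roughly the image centre

# Pinhole intrinsic matrix
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project a 3D point given in camera coordinates to pixel coordinates
X = np.array([0.1, -0.05, 2.0])   # metres in front of the camera
u, v, w = K @ X
pixel = (u / w, v / w)            # (2210.0, 1395.0)
```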


After you have chosen the camera, you need to place the object so that all its surfaces are evenly lit. Proper object placement is crucial for building a model of the best quality. Avoid harsh shadows; soft, diffused lighting works best.

Now you can start shooting the object. First, shoot the whole object while moving around it, and then focus on the details, gradually zooming in. Move the camera at a steady speed to avoid blurry frames and motion blur. Change the angle of view to capture points on the top of the object, but keep the change of viewpoint between adjacent shots under 30 degrees: the smaller the angle, the better.

The object should be placed in the center of the frame and occupy most of the frame. The image should be clean and sharp. Every surface of the object must be visible from three or more angles. Three adjacent frames should overlap each other.

An example of a shooting process.

Detecting the Key Points

The next stage of building a 3D model is detecting the key points. These are points located where the image gradient changes sharply in both X and Y, i.e., corner points. Their detection is based on the autocorrelation matrix and an image pyramid.

Key points in two views.

To search for key points, you can use the AKAZE algorithm with the KAZE descriptor type, both available in the OpenCV library.

Matching the Points

After you have detected all the key points, you should match the points found in each image with those in the adjacent images. To do so, you need to form all possible pairs of images. Within each pair, the key points are matched using the cascade hashing method.

Filtering the Matching Points

Next, you need to check the key-point correspondences for every pair of images and discard the points that cannot be used for building a 3D model. To do so, for each point you should select the two closest neighboring points in the other image and compare the distances to them: if the best candidate is not clearly closer than the second best, the match is discarded.

The result of the filtering process. Only 50% of all matches are shown in the image.

Not all points can be used in the 3D reconstruction. The illustration below shows that, after switching to the neighboring view, the lower point appears between the 1st and 2nd points at the top, although it should have stayed at the same level as the lowermost point. Such a point can’t be used in 3D reconstruction.

To avoid such discrepancies, you need to perform additional filtering.

An additional filter for key-point correspondences is the epipolar constraint. To apply it, estimate the fundamental matrix between the two cameras from the matches and discard the matches that do not satisfy the resulting epipolar geometry. After this filtering, there are considerably fewer mismatched points, as seen below.

Point Cloud Reconstruction

Next, you should build a point cloud from the filtered point pairs using the method of sequential (incremental) reconstruction. First, select the two views that share the largest number of common point pairs. Then iteratively expand the reconstruction scene by adding new views and 3D points using pose estimation and triangulation.

Point Cloud Densification

After that, you should extend the point cloud with additional 3D points. To do so, calculate a depth map for each frame, and then use the obtained depth maps to construct the additional points.

As you can see, a depth map contains a large number of points outside the target object. To remove unnecessary points, you need to calculate the confidence of each point, and then discard the points with lower confidence.

The higher the confidence of a point, the lighter it is.

Depth maps should be made and then filtered for all the images in the dataset. As a result, you will get depth maps of quite a good quality. They will form the basis for a fairly dense cloud of points.

Thanks to this step of the 3D reconstruction, 1,120,607 points were added to the point cloud.

3D Meshing

After you have built the dense point cloud, you should reconstruct the surface of the object. For this purpose, you should use the Delaunay triangulation method.
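Real surface reconstruction triangulates in 3D, but the idea can be illustrated in 2D with SciPy's Delaunay triangulation; the random planar points below are a 2.5D sketch, not the guide's actual pipeline:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical points projected onto a plane (a 2.5D height field)
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, (50, 2))

tri = Delaunay(pts)          # Delaunay triangulation of the point set
faces = tri.simplices        # (n_faces, 3) array of vertex indices
```

Each row of `faces` is one triangle of the mesh; lifting the vertices back to their 3D positions turns the triangulation into a surface.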

Reconstructed mesh

Mesh Refinement

The initial mesh may still contain isolated triangles that satisfy the visibility constraint but come from false points in the background of the scene or in the sky. Furthermore, it may capture landscape far from the scene that doesn’t need to be reconstructed in detail; refining such areas is practically impossible because calibration is inexact for geometry situated far from the cameras.

Therefore, you need to remove these triangles, using a threshold on triangle size or on the number of triangles in an isolated piece, and manually cut away the unnecessary distant background.
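Dropping small isolated pieces can be sketched in plain Python: group faces that share an edge into connected components with a union-find, then keep only components above a face-count threshold (the toy mesh and the threshold are assumptions):

```python
import numpy as np
from collections import Counter, defaultdict

# Toy mesh: two triangles sharing an edge plus one isolated triangle
faces = np.array([[0, 1, 2], [1, 2, 3], [4, 5, 6]])

# Faces sharing an edge belong to the same component (union-find)
edge_faces = defaultdict(list)
for f, (a, b, c) in enumerate(faces):
    for e in ((a, b), (b, c), (a, c)):
        edge_faces[tuple(sorted(e))].append(f)

parent = list(range(len(faces)))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x
for fs in edge_faces.values():
    for f in fs[1:]:
        parent[find(f)] = find(fs[0])

# Keep components with at least 2 faces (assumed threshold)
comp = [find(f) for f in range(len(faces))]
sizes = Counter(comp)
mask = np.array([sizes[c] >= 2 for c in comp])
kept = faces[mask]
```

The isolated triangle `[4, 5, 6]` falls below the threshold and is removed, while the connected pair survives.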

The model surface after the refinement algorithm.

Mesh Texturing

It is the final stage of 3D modeling. Texturing can be broken down into the following steps:

  1. First, you need to determine the visibility of model faces in the input images.
  2. Second, select a view for each face. This process follows a modified version of Lempitsky and Ivanov’s algorithm: a single view per face is chosen by optimizing a Markov random field energy formulation with graph cuts and alpha expansion. A minority of views might see wrong colors (for example, because of an occluder) and be much less correlated with the rest; use a modified mean-shift algorithm to obtain a list of photo-consistent views for each face and penalize the remaining views to prevent their selection.
  3. After view selection, the resulting texture patches may show strong color discontinuities due to differences in exposure and illumination, or even different camera response curves. Therefore, you need to photometrically adjust adjacent texture patches to make their seams less noticeable: calculate globally optimal luminance correction terms that are added to the vertex luminances, subject to two intuitive constraints. After the adjustment, the differences at the seams should be small, and the derivative of the adjustment within each texture patch should be small.

The final model may still not be perfect. Fixing all the remaining problems can take time, but the result will be worth the effort. Follow the above instructions step by step, and you will learn how to use photogrammetry for 3D modeling. All you need to start is the right camera and a powerful PC. If any questions are left unanswered, or you have suggestions, feel free to contact us.
