PHOTOGRAMMETRY AND REMOTE SENSING PROJECT
Project Name:

Reduction of DSM to DTM and Quality Assessment

Department:

Institute of Geodesy and Photogrammetry

Focus Area:

Zurich Airport

Product/Process:

Aerial Photogrammetry with RGB and CIR.

Mr. Jedsada Kerdsrilek


Geomatic Engineering and Planning MSc
Swiss Federal Institute of Technology Zurich

TABLE OF CONTENTS
1  INTRODUCTION
2  IMAGE PREPROCESSING
   2.1  Separating Image Bands
   2.2  Filtering the Images
   2.3  Orientation Parameter Preparation
3  GENERATION OF DSM WITH IMAGE MATCHING
4  FILTERING OF DSM FOR DETECTION OF ON-TERRAIN OBJECTS
5  INTERPOLATION OF GROUND POINTS
6  QUALITY ASSESSMENT
7  CONCLUSIONS
8  REFERENCES


1 Introduction
Reducing a digital surface model (DSM) to a digital terrain model (DTM) is an
essential process for producing an accurate DTM. In this project we produce an accurate
DSM with extracted object features over Zurich airport, a product that could support
automatic aircraft landing and take-off. The work is divided into four steps:
(i) image preprocessing, in which two kinds of filters are applied to reduce the
noise in the images and to increase the contrast of object features, so that the
subsequent image matching performs well;
(ii) DSM generation with image matching and bundle adjustment;
(iii) DSM filtering, in which the objects standing on the terrain are detected and
eliminated in order to reduce the DSM towards a DTM;
(iv) interpolation of the eliminated areas to generate the ground surface of the
DTM automatically. Finally, the resulting DTM is evaluated against a DTM derived
from LiDAR data by comparing the two surfaces.
The data used in this project are aerial images and their metadata, as summarized
in Table 1.1.
Film               Color (RGB) and CIR
Camera Type        Frame
Focal Length       303.811 mm
Scale              1:10,150 (RGB), 1:6,000 (CIR)
Endlap             70%
Sidelap            26%
Scan Resolution    20 micron

Table 1.1 Image data description.


Diagram 1.1 gives a brief description of the project workflow. It starts with the
image preprocessing step, in which the band with the strongest contrast is chosen from
the RGB and CIR images. After selecting the suitable band, we extracted it from all 22
images of the project and reduced the noise by image filtering. The filtering results are
checked by comparing the histograms before and after filtering. The camera calibration
parameters and the exterior orientation are delivered in separate text files, so the task
of this part is to combine all parameters into the ORI format (the orientation parameter
format of SatPP) for the image orientation process.
The DSM is generated with the SatPP software. First, seed points distributed over
every part of the images are measured; each image pair receives at least 10 seed points
to set the x-parallax. Then an accurate DSM, including the objects on the terrain, is
generated. The next step is to reduce the DSM to a DTM by extracting the on-terrain
objects and interpolating the surface model with SCOP++. Finally, the quality of the DTM
is checked by comparison with an available DTM derived from LiDAR data.

1. Image Preprocessing
   - choose the most contrasting band from RGB or CIR
   - separate that band from all project images
   - filter the images
   - prepare the orientation parameters

2. DSM Generation with Image Matching
   - seed point measurement
   - bundle adjustment
   - DSM generation

3. Interpolation of DSM to DTM
   - filter parameter assignment
   - DTM generation

4. Quality Assessment
   - assessment against the LiDAR DTM
   - comparison of both DTM surfaces

Diagram 1.1 The working process of DSM to DTM generation.



Figure 1.1 shows examples of the aerial images used in this project, acquired in
color (RGB) and CIR mode. The project area is around Zurich airport, with 44 images and
their orientations available.

RGB Images

CIR Images

Figure 1.1 Aerial images acquired in RGB and CIR mode.


2 Image Preprocessing
Image preprocessing is an important step to reduce the effect of noise and to
improve the image quality for the following processing steps, so that the images are
well suited for the matching process. The process starts with the reduction of
radiometric problems: very bright and very dark regions are treated with filtering tools,
namely a noise filter to smooth radiometrically rough areas and a Wallis filter to
enhance and sharpen the objects. Before filtering, we have to decide which of the
available data channels is the most suitable for DSM generation.
2.1 Separating Image Bands

The image channels were separated with Photoshop in order to decide which band
gives the strongest contrast. For an example image we extracted the red, green and blue
channels of the RGB image and the near-infrared (NIR), red and green channels of the CIR
image. By visual inspection, the NIR and red channels of the CIR image show the strongest
contrast, and we decided to use the red channel of the CIR images to generate the DSM.
Even in this high-contrast band there is still radiometric noise, which is visible in the
histograms in Figure 2.1 and Figure 2.2.
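The report does not include the band-separation script itself, but the idea of splitting
the channels and judging their contrast from the per-band histograms can be sketched as
follows; the file paths, band names and the use of the grey-value standard deviation as a
contrast measure are illustrative assumptions, not part of the original workflow.

```python
import numpy as np
from PIL import Image

def band_contrast(path, band_names=("band1", "band2", "band3")):
    """Split a 3-band image into its channels and return, per band, the
    256-bin histogram and the grey-value standard deviation, used here as a
    simple contrast measure (a wider spread means stronger contrast)."""
    img = np.asarray(Image.open(path))
    result = {}
    for i, name in enumerate(band_names):
        band = img[..., i]
        hist, _ = np.histogram(band, bins=256, range=(0, 256))
        result[name] = {"histogram": hist, "std": float(band.std())}
    return result

# Example with hypothetical file names: compare an RGB and a CIR frame
# rgb_stats = band_contrast("image_rgb.tif", ("red", "green", "blue"))
# cir_stats = band_contrast("image_cir.tif", ("nir", "red", "green"))
```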
Separating Bands of the RGB Image

RGB Band Red

RGB Band Green

RGB Band Blue

Figure 2.1 Separated RGB image bands and the generated histograms



Separating Band CIR

CIR Band Red

CIR Band IR

CIR Band Green

Noise from radiometric resolution


Noise from image

Figure 2.2 Separated CIR image bands and the generated histograms

The histograms show that the grey-value curves are not smooth, because of the
radiometric problems and the black frame around the images mentioned above. The next step
is therefore to cut off the frame, to remove the effect of the no-data area, and to filter
the images with a NOISE filter and a WALLIS filter to reduce the radiometric problems.



2.2 Filtering the Images

First we apply the NOISE filter to reduce the radiometric problems of the images.
The results are shown in Figure 2.3 for the RGB case and in Figure 2.4 for the CIR case.
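The report does not specify how the NOISE filter works internally; a small median filter
is one common way to suppress this kind of isolated radiometric noise, and a minimal
sketch under that assumption is given below.

```python
from scipy.ndimage import median_filter

def noise_filter(band, window=3):
    """Suppress isolated radiometric noise by replacing each pixel with the
    median of its window x window neighbourhood."""
    return median_filter(band, size=window)
```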
Filtering RGB with Noise Filter

Figure 2.3 The result of the NOISE filter for the RGB image



Filtering CIR with Noise Filter

Figure 2.4 The result of the NOISE filter for the CIR image



After the NOISE filter we obtained images with reduced noise. The next step is to
apply the WALLIS filter to enhance the edge contrast of the existing objects. The results
are shown in Figure 2.5 for the RGB case and in Figure 2.6 for the CIR case.
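The Wallis filter adjusts local brightness and contrast so that each neighbourhood
approaches a target mean and standard deviation, which is what sharpens the object edges
here. A minimal sketch of the classic formulation follows; the window size and the target
and weighting parameters are illustrative values, not the ones actually used in the
project.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(band, window=31, target_mean=127.0, target_std=50.0,
                  brightness=0.5, contrast=0.8):
    """Classic Wallis filter: stretch each local window towards the target
    mean and standard deviation, controlled by brightness/contrast factors."""
    img = band.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img * img, window) - local_mean ** 2
    local_std = np.sqrt(np.maximum(local_var, 1e-6))
    gain = contrast * target_std / (contrast * local_std +
                                    (1.0 - contrast) * target_std)
    out = (img - local_mean) * gain + brightness * target_mean \
          + (1.0 - brightness) * local_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```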
Filtering RGB with Wallis Filter

Figure 2.5 The result of the WALLIS filter for the RGB image


Filtering CIR with Wallis Filter

Figure 2.6 The result of the WALLIS filter for the CIR image


After filtering, the results show that the noise filter removes radiometric errors
from the grey-scale data, as visible in the histograms, while the Wallis filter enhances
the edges of the object features. The NIR and red channels show a good separability of
the object features, and we therefore decided to use the red channel of the CIR images
for the DSM generation. Figure 2.7 compares the RGB and CIR images with respect to the
radiometric contrast available to separate object features.

Comparison of small objects at high zoom
RGB

CIR

Figure 2.7 The comparison shows that the red band of the CIR image is the band with the
strongest contrast.


2.3 Orientation Parameter Preparation

Before starting the DSM generation, we need to prepare an orientation file for each
aerial image, containing the camera parameters and the acquisition conditions. Normally
the camera is calibrated before the images are acquired, providing the calibration
parameters and the interior orientation information such as the focal length and the
fiducial mark coordinates. These parameters are given in the calibration data of the
camera, as shown in Figure 2.8, which lists the 8 fiducial marks used for the interior
orientation of each image.

Figure 2.8 Parameters of the 8 fiducial marks used to establish the interior orientation
of the images


After the camera calibration we have the interior orientation parameters of the
camera. The exterior orientation of each image comes from the navigation and positioning
instrument mounted on the aircraft and connected to the camera. All of these parameters
are provided in SUP files, which contain the exterior orientation parameters Xo, Yo, Zo
and the rotation angles ω, φ, κ.
The next step is to generate the rotation matrix from the rotation angles after
converting their unit from radians to gon, using the Rotation Matrix software. The last
parameter set is the affine transformation. For this we used LPS: the interior orientation
was established by measuring the 8 fiducial marks, and tie points were measured in the
overlap areas to compute the exterior orientation by bundle adjustment, which also
provides the affine parameters for each image.
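The radian-to-gon conversion and the construction of the rotation matrix from ω, φ, κ
follow standard photogrammetric formulas; the sketch below illustrates the same
computation but is not the Rotation Matrix tool used in the project.

```python
import numpy as np

def rad_to_gon(angle_rad):
    """Convert an angle from radians to gon (400 gon = one full circle)."""
    return angle_rad * 200.0 / np.pi

def rotation_matrix(omega, phi, kappa):
    """Exterior-orientation rotation matrix R = R_omega * R_phi * R_kappa,
    with the angles given in radians (standard photogrammetric convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_omega @ r_phi @ r_kappa
```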
Finally, the orientation file can be assembled in the ORI format, as shown in
Flowchart 2.1. The ORI file is the orientation parameter format used by the SatPP
software: it contains the interior orientation with the focal length, the affine matrix
for each image, and the ground coordinates Xo, Yo, Zo together with the rotation matrix
of the exterior orientation.

SUP file (SOCET SET):
  - orientation angles ω, φ, κ
  - ground coordinates Xo, Yo, Zo
      → convert angles from radians to gon
      → Rotation Matrix software
      → exterior orientation matrix (ω, φ, κ and Xo, Yo, Zo)

Camera calibration file + LPS software:
  - measurement of fiducial marks
  - measurement of tie points
      → affine transformation matrix

Flowchart 2.1 ORI file generation from the SUP file and the camera calibration file.

3 Generation of DSM with Image Matching

The results of the image preprocessing show that the NIR and red channels of the
CIR images are the most appropriate channels for image matching. We chose the red channel
for the DSM generation in SatPP (Satellite Image Precision Processing). The DSM generation
starts by converting the images from JPG to RAW (the SatPP format) and creating a project
with all the input images covering the airport area. The ORI files (containing the
orientations) and the image data are stored in the same directory so that they are linked
together.
Aerial photos normally provide the image frame for measuring the fiducial marks
that establish the interior orientation, but here the orientation files are already
available from the previous step. A masking tool is therefore applied to the frame of
every image so that the areas without spatial information are recognized by the software.
The next step is to measure the seed points, which establish the height relationship
between the image pairs.
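The seed points fix the approximate x-parallax, which is directly related to the terrain
height. The textbook parallax equation below illustrates that relationship; it is not
SatPP code, and the flying height and base parallax are assumed inputs.

```python
def height_from_parallax(flying_height, base_parallax, d_parallax):
    """Standard stereo parallax equation: height of a point above the
    reference plane, given the flying height above that plane, the x-parallax
    of the reference plane (the photo base) and the point's parallax
    difference d_parallax."""
    return flying_height * d_parallax / (base_parallax + d_parallax)
```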
The diagram below shows the SatPP workflow (Gruen A., Kocaman S., Wolff K., 2007:
High accuracy 3D processing of stereo satellite images in mountainous areas). Image
preprocessing, as described in chapter 2, is completed in the first step, followed by the
triangulation for the exterior orientation. Image matching then generates the point cloud
on the terrain surface, and finally least squares matching is applied to refine the result
statistically.

Figure 3.1 Workflow of SatPP processing (Gruen A., Kocaman S., Wolff K., 2007)



The SatPP matching algorithm combines three main techniques to search for matching
features. First, point matching by correlation checks the correspondence between the
image pair. Then grid matching is carried out, checking the images grid by grid, and
finally edge matching compares the same object edges in both images. All matching steps
in SatPP work on separate image regions in order to reduce the computation time for image
matching.
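The point matching mentioned above is typically based on normalized cross-correlation
between small image patches; a minimal sketch of that measure is given below as an
illustration, not as SatPP's implementation.

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized image patches;
    values near 1 indicate a good match, values near 0 a poor one (as on
    texture-poor surfaces such as roads)."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```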
In the matching results we can see that the forest areas appear as green points,
because they have a distinctive texture and can therefore be matched easily. In the
built-up areas the result is not as good as in the forest: the buildings match well along
their edges, whereas the roads are flat areas with little texture, so they appear as
yellow points. In some parts of the built-up area there are also red points, where objects
were moving or had no texture and therefore correlated poorly.
DSM generation is the last step of this process. The results are shown in Figures
3.2 to 3.5, which present the areas of interest and compare the matched points with the
original images in order to see how the process generated the results. Figure 3.2 shows
all the interesting cases, such as buildings, roads and forest, which are discussed in
more detail below.

Figure 3.2 The result of the DSM generation over Zurich airport.


The DSM of the building areas shows that the airport buildings are reconstructed
well: the relative heights of the structures are captured, and even the aircraft are
reconstructed.

Figure 3.3 DSM generation for a building area



For the roads, the matching is shown in yellow, and the road surfaces are
reconstructed as well as the buildings, with a fairly smooth surface.

Figure 3.4 DSM generation for the road structures.



The DSM of the forest shows that the forest area is reconstructed with different
heights, which correspond to the colors of the points.

Figure 3.5 DSM generation for the forest area.


4 Filtering of DSM for Detection of On-Terrain Objects


The goal of this step is to detect the objects standing on the terrain, such as
buildings, forest and other object features on the ground. In order to produce a DTM from
the DSM, we concentrate on ground classification: all objects on the terrain surface have
to be eliminated from the data.
We use the SCOP++ algorithm to reduce the DSM to a DTM, as shown in Figure 4.1.
The algorithm starts with a grid-based search along the original point cloud and selects
the LOWEST point per cell. Next an approximate surface is generated from these lowest
points (shown as the red line), and a robust adjustment eliminates gross errors, which
gives a more precise terrain surface. A buffer zone is used to limit the ground surface
within the original point cloud, as shown in Figure 4.2. Finally, the precise surface
containing only ground points is generated, as shown in Figure 4.3. More information can
be found in K. Kraus, N. Pfeifer, Advanced DTM Generation from Lidar Data.
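The lowest-point-per-search-cell idea described above can be sketched as follows; this is
a simplified illustration of the thin-out step, not SCOP++ code, and it assumes the point
cloud is given as an N x 3 array of X, Y, Z coordinates.

```python
import numpy as np

def thin_out_lowest(points, cell_size):
    """Keep only the lowest point per grid cell as a first, coarse
    approximation of the ground surface (thin-out with LOWEST selection)."""
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    keys = cells[:, 0] * 10_000_000 + cells[:, 1]   # simple cell hash
    by_height = np.argsort(points[:, 2])            # lowest Z first
    _, first = np.unique(keys[by_height], return_index=True)
    return points[by_height[first]]
```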

Figure 4.1 Search windows for on-terrain points


Figure 4.2 Off-terrain surface detection

Figure 4.3 Ground surface generation



The DSM resulting from the image matching is very large (about 2 gigabytes), and
the SCOP++ software has a limited file size for filtering. We therefore reduced the study
area by cutting the DSM point cloud to the area over Zurich airport for the DSM filtering
process.
The classification of the DSM into a ground surface in SCOP++ starts from the
default LiDAR filtering strategy STRONG; the process is shown in Diagram 4.1. The
filtering begins with three building elimination models in order to classify building
objects and separate them from the ground surface. The building elimination works by
classifying the original data: a cell size for the search window is defined, together
with a minimum slope and a minimum area of the on-terrain buildings that can be detected
by the search window and eliminated from the source data. The model parameters go from
coarse to fine, so that large buildings are eliminated first and then smaller ones,
according to the slope and area of the objects on the ground.
The result of the building elimination models is a DTM with gaps where buildings
were detected and eliminated with the chosen parameters. However, vegetation and other
objects on the ground surface remain and are removed in the next process.
In the second process we use an iterative filtering to eliminate the off-terrain
objects. The first step is THIN OUT, which separates the ground surface from off-terrain
objects by defining a search cell size and keeping the LOWEST point per cell as a
representative of the ground surface, thereby reducing the original data.
The second step is FILTER, a robust iteration with a weight function that creates
an average surface. Points below this surface are treated as ground points (weight = 1).
The result is an approximate surface, bounded by a distance limit on the upper branch to
find off-terrain points, and likewise by a lower branch condition (a sketch of such a
weight function is given after this description).
The third step is surface INTERPOLATION: the points classified in the previous
step are interpolated by linear prediction with a defined grid size to obtain a detailed
terrain surface.
The last step is SORT OUT, which limits the data by an upper and a lower surface
for the DTM generation; off-terrain objects outside the user-defined upper and lower
tolerances are eliminated. The ground surface is thus generated with these four
classification steps, and the sequence from THIN OUT to SORT OUT is iterated from coarse
to fine with decreasing parameters to obtain a better result. The parameters used in each
iteration are given in Diagram 4.1, with the three coarse-to-fine loops shown as three
values per parameter.
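The asymmetric weight function used in such a robust filtering step can be sketched as
follows, following the form published by Kraus and Pfeifer (1998); the parameter values
are illustrative, not the ones used in this project.

```python
def ground_weight(residual, shift=0.0, a=1.0, b=4.0, cutoff=2.5):
    """Asymmetric weight for robust ground filtering: points at or below the
    trend surface keep full weight, points far above it are rejected, and
    points in between are down-weighted smoothly."""
    if residual <= shift:
        return 1.0
    if residual > shift + cutoff:
        return 0.0
    return 1.0 / (1.0 + (a * (residual - shift)) ** b)
```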


(1) Eliminate buildings:  cell size 3, min. slope 1,   min. area 12
(2) Eliminate buildings:  cell size 2, min. slope 0.9, min. area 12
(3) Eliminate buildings:  cell size 2, min. slope 0.8, min. area 12

(4) Thin out:  cell size 6 / 3 / 1, lowest-point selection

(5) Filter (3 iterations, values per iteration from coarse to fine):
      limit lower branch   3.6
      limit upper branch:
        upper half weight    -    1.2   0.075
        upper half weight    0.8  0.3   0.05
        upper slant          -    1.2   0.075
        upper slant          0.8  0.3   0.05
        upper tolerance      -    1.2   0.15
        upper tolerance      2.4  0.9   0.10
        penetration rate     80%  70%   70%

(6) Interpolate:
      points per computing unit to aim at   90 / 25 / 25
      grid width                            6 / 3 / 1

(7) Sort out (eliminate building data):
      upper distance   1.0  0.9  0.075
      lower distance   2.2  1.2  0.075
      slope            2.0   -    -

Diagram 4.1 Classification steps of the on-terrain object filtering, with the parameters
of each of the three coarse-to-fine iterations


The filtering result shows the classified ground surface without objects such as
buildings, vegetation and other objects on the surface.

Figure 4.4 The filtered ground surface obtained from the DSM-to-DTM reduction

Figure 4.5 Zoomed-in view of the filtered ground surface obtained from the DSM-to-DTM
reduction


5 Interpolation of Ground Points

The goal of this step is to fill all gaps left by the classification. The filter is
applied to fill the gaps from the robust iteration; the parameters are shown in
Diagram 5.1. First a surface interpolation is carried out to compute a detailed terrain
surface from the classified points by linear prediction with a defined grid size. The next
step is to sort out the data by an upper and a lower surface limit for the DTM generation.
A fill-void-areas model is inserted at this step: it detects the holes and fills them by
interpolating from the points around each hole. A final interpolation then computes the
DTM from the hole-filling parameters.
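The void filling can be pictured as interpolating the missing grid cells from the
surrounding valid cells; the minimal stand-in below (not the SCOP++ implementation) uses
linear interpolation and assumes the DTM is a raster with NaN in the holes.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_voids(grid_z):
    """Fill NaN cells of a DTM raster by linear interpolation from the
    surrounding valid cells; cells outside the convex hull stay NaN."""
    rows, cols = np.indices(grid_z.shape)
    valid = ~np.isnan(grid_z)
    interpolated = griddata((rows[valid], cols[valid]), grid_z[valid],
                            (rows, cols), method="linear")
    return np.where(valid, grid_z, interpolated)
```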

(6) Interpolate:
      points per computing unit to aim at   12
      grid width                            2

(7) Sort out (eliminate building data):
      upper distance   3.0
      lower distance   3.0
      slope            -

Fill void areas:
      resampling interval   10
      bridging distance     200

(6) Interpolate:
      points per computing unit to aim at   16
      grid width                            2

Diagram 5.1 Parameters for the hole filling in the DTM generation


6 Quality Assessment
Quality assessment is the process of evaluating the generated DTM against the DTM
derived from LiDAR data, by generating a comparison surface that represents the
difference between the two surfaces with different colors.
After the DTM generation we obtained the DTM as XYZ terrain points. The result was
loaded into GEOMAGIC STUDIO to wrap a surface model. The comparison starts by importing
the LiDAR DTM and generating a comparison surface with the generated DTM. The result is
shown in Figure 6.1: the large buildings and some large vegetation are still left in the
surface, visible as blue areas, whereas small buildings and small holes have been
eliminated.
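The surface comparison amounts to cell-wise height differences between the generated DTM
and the LiDAR reference; a simple sketch of the summary statistics one could report is
given below (the report itself only quotes maximum height differences), assuming both
DTMs are rasters on the same grid.

```python
import numpy as np

def dtm_difference_stats(dtm_test, dtm_reference):
    """Cell-wise height differences between a generated DTM and a reference
    DTM on the same grid, with simple summary statistics."""
    diff = dtm_test - dtm_reference
    valid = diff[~np.isnan(diff)]
    return {
        "mean": float(valid.mean()),
        "rmse": float(np.sqrt(np.mean(valid ** 2))),
        "max_abs": float(np.abs(valid).max()),
    }
```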

Figure 6.1 Comparison of the generated DTM and the LiDAR DTM


Because the previous result still shows large differences between the two surfaces,
a manual elimination of the buildings was carried out to reduce the differences, as shown
in Figure 6.2. The process starts by selecting the large buildings, shown in dark brown,
and manually deleting all of their existing points; the result is shown in Figure 6.3.
The ground surface is then interpolated again in order to produce a new DTM.

Figure 6.2 Manual elimination of the large buildings

Figure 6.3 Result of the manual elimination of the large buildings



The result of the manual building elimination, compared to the reference LiDAR DTM,
is shown in Figure 6.4. There are still some differences between the two DTMs; the height
difference is about 10 meters, which means the manual editing reduced the height
difference compared to the fully automatic gap-filled DTM, where it was about 20 meters.
The surfaces still differ because the ground classification cannot eliminate all
the slopes that connect the buildings to the ground surface, so the generated DTM still
contains off-terrain slopes. When this model is used to fill the gap areas, the filled
surface covers the whole hole, including these heights. If the search window is reduced
in order to resample only nearby points for the filling, the large holes cannot be filled
and only the small holes are closed. If, on the other hand, the search window is enlarged,
the holes can be filled, but the existing slope areas are then included in the filled
surface, as shown in Figure 6.4.

Figure 6.4 Comparison of the manually edited DTM with the LiDAR DTM


We can also test the parameters of the hole-filling process by applying them to the
ground-classified LiDAR DTM, in order to compare both results obtained with the same
parameters. The results show that the void-filling parameters work well with the
ground-classified LiDAR DTM, as shown in Figure 6.5.
There is a difference between a LiDAR DSM and a DSM from aerial images in the
building areas. A LiDAR DSM has many points on the roofs of buildings and only a few
points on vertical objects such as walls, so it is easy to detect and classify the
buildings. In contrast, a DSM produced from aerial images shows slopes between the
buildings and the ground surface; as a result, it is difficult to remove all of these
slopes from the model.

Figure 6.5 DTM from LiDAR data, generated with the same parameters as the hole-filled DTM
from aerial images


After producing the LiDAR-based DTM with the same parameters as the aerial-image
DTM, we can check its quality against the reference LiDAR DTM and compare it with the
previous result. Figure 6.6 shows this comparison. There are only a few meters of
difference at the large buildings, which means that the algorithm and the parameters work
well with LiDAR data: they find the whole objects and eliminate their off-ground points,
and the filter parameters can fill the gaps on the terrain without slopes between the
objects and the ground surface. In this way a precise DTM can be generated.

Figure 6.6 The result of the comparison between the processed LiDAR DTM and the reference
LiDAR DTM

7 Conclusions
This project shows, step by step, the processes and results of creating a DSM from
aerial images and filtering it to generate a DTM. The image preprocessing shows how to
reduce the effects of the radiometric resolution and how to enhance the objects in the
images for the matching process. The DSM generated from the aerial images reconstructs
all the object surfaces with their different heights.


The project mainly discusses the DSM-to-DTM reduction, focusing on the step of
generating the DTM from aerial images. The DTM is produced with the SCOP++ algorithm,
which classifies the point cloud and generates the ground surface DTM. Producing an
automatic DTM from aerial images requires a careful choice of the filter parameters and of
the parameters for filling the holes left by the terrain objects. Manual editing is needed
for the large buildings, because the algorithm cannot eliminate the slopes that remain
between the objects and the ground surface. Residential buildings and objects on the
terrain (for example aircraft) can, however, be eliminated well. Large vegetation shows
the same problem as the large buildings, because a DSM generated from aerial images
produces a smooth slope between the objects and the ground terrain.
Nevertheless, the process and the chosen parameters can quickly produce a DTM of a
large area and require the manual elimination of only a few large buildings. The void
filling process shows that a search window can be used to detect the holes and to fill
them by spatial interpolation of the neighboring points around the holes.

8 References
Demir, N., Baltsavias, E., 2007. Object extraction at airport sites using DTMs/DSMs and
multispectral image analysis. International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, Vol. 36, Part 3/W49B (on CD-ROM), Munich,
Germany.
Kraus, K., Briese, C., Attwenger, M., Pfeifer, N. Quality measurements for digital
terrain models.
Kraus, K., Otepka, J., 2005. DTM modelling and visualization - the SCOP approach.
Photogrammetric Week '05, pp. 241-251.
Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with
airborne laser scanner data. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 53.
Kraus, K., Pfeifer, N., 2001. Advanced DTM generation from LIDAR data. International
Archives of Photogrammetry and Remote Sensing, Vol. XXXIV-3/4.
Gruen, A., Kocaman, S., Wolff, K., 2007. High accuracy 3D processing of stereo satellite
images in mountainous areas. Dreilaendertagung 2007, Muttenz-Basel, Switzerland,
19-21 June.
