Case Studies
ISATIS 2012
Published, sold and distributed by GEOVARIANCES
49 bis Av. Franklin Roosevelt, BP 91, 77212 Avon Cedex, France
Web: http://www.geovariances.com
Isatis Release 2012, February 2012
Contributing authors:
Catherine Bleins
Matthieu Bourges
Jacques Deraisme
François Geffroy
Nicolas Jeanne
Ophélie Lemarchand
Sébastien Perseval
Jérôme Poisson
Frédéric Rambert
Didier Renard
Yves Touffait
Laurent Wagner
All Rights Reserved
© 1993-2012 GEOVARIANCES
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.
"... There is no probability in itself. There are only probabilistic models. The
only question that really matters, in each particular case, is whether this or
that probabilistic model, in relation to this or that real phenomenon, has or
has not an objective meaning..."
G. Matheron
Estimating and Choosing - An Essay on Probability in Practice
(Springer Berlin, 1989)
Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
1 About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Mining. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
2 In Situ 3D Resource Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
2.1 Workflow Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
2.2 Presentation of the Dataset & Pre-processing. . . . . . . . . . . . . . . . . .16
2.3 Variographic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
2.4 Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
2.5 Global Estimation With Change of Support . . . . . . . . . . . . . . . . . . .78
2.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88
2.7 Displaying the Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
3 Non Linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
3.1 Introduction and overview of the case study. . . . . . . . . . . . . . . . . . .146
3.2 Preparation of the case study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148
3.3 Global estimation of the recoverable resources . . . . . . . . . . . . . . . .165
3.4 Local estimation of the recoverable resources . . . . . . . . . . . . . . . . .176
3.5 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Oil & Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
4 Property Mapping & Risk Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . .233
4.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .234
4.2 Estimation of the Porosity From Wells Alone. . . . . . . . . . . . . . . . . .236
4.3 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
4.4 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
4.5 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245
4.6 Estimation with External Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
4.7 Cokriging With Isotopic Neighborhood . . . . . . . . . . . . . . . . . . . . . .252
4.8 Collocated Cokriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
4.9 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .264
5 Non Stationary & Volumetrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271
5.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .272
5.2 Creating the Output Grid. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
5.3 Estimation With Wells. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
5.4 Estimation With Wells and Seismic . . . . . . . . . . . . . . . . . . . . . . . . .282
5.5 Assessing the Variability of the Reservoir Top . . . . . . . . . . . . . . . . .293
5.6 Volumetric Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .299
6 Plurigaussian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
6.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.3 Creating the Structural Grid. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
6.4 Creating the Working Grid for the Upper Unit . . . . . . . . . . . . . . . . 332
6.5 Computing the Proportions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.6 Lithotype Rule and Gaussian Functions . . . . . . . . . . . . . . . . . . . . . 357
6.7 Conditional Plurigaussian Simulation . . . . . . . . . . . . . . . . . . . . . . . 370
6.8 Simulating the Lithofacies in the Lower Unit . . . . . . . . . . . . . . . . . 373
6.9 Merging the Upper and Lower Units. . . . . . . . . . . . . . . . . . . . . . . . 385
7 Oil Shale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.2 Exploratory Data Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.3 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.4 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7.5 Displaying Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
8 Multi-layer Depth Conversion With Isatoil. . . . . . . . . . . . . . . . . . . . . 407
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8.2 Field Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8.3 Loading the Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
8.4 Master File Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.5 Building the Reservoir Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.6 Filling the Units With Petrophysics. . . . . . . . . . . . . . . . . . . . . . . . . 446
8.7 Volumetrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
8.8 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
9 Geostatistical Simulations for Reservoir Characterization . . . . . . . . 487
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
9.2 General Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
9.3 Data Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.4 Structural Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
9.5 2D Petrophysical Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.6 Modeling 3D Porosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
9.7 Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
10 Pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
10.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .578
10.2 Univariate Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .581
10.3 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .582
10.4 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .590
10.5 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .594
10.6 Creating the Target Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .601
10.7 Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
10.8 Displaying the Graphical Results . . . . . . . . . . . . . . . . . . . . . . . . . .608
10.9 Multivariate Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .612
10.10 Case of Self-krigeability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
10.11 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
11 Young Fish Survey. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
11.2 Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
11.3 Global Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .650
12 Acoustic Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .655
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .656
12.2 Global Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .661
13 Air quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .669
13.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .670
13.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .675
13.3 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .680
13.4 Fitting a variogram model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .687
13.5 Kriging of NO2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .692
13.6 Displaying the graphical results . . . . . . . . . . . . . . . . . . . . . . . . . . .696
13.7 Multivariate approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .701
13.8 Cross-validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .711
13.9 Gaussian transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .714
13.10 Quantifying a local risk with Conditional Expectation (CE) . . . .719
13.11 NO2 univariate simulations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .721
13.12 NO2 multivariate simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . .725
13.13 Simulation post-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .729
13.14 Estimating population exposure . . . . . . . . . . . . . . . . . . . . . . . . . .734
14 Soil pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .739
14.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
14.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
14.3 Visualization of THC grades using the 3D viewer . . . . . . . . . . . . 749
14.4 Exploratory Data Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
14.5 Fitting a variogram model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
14.6 Selection of the duplicates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
14.7 Kriging of THC grades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
14.8 Intersection of interpolation results with the topography. . . . . . . 770
14.9 3D display of the estimated THC grades. . . . . . . . . . . . . . . . . . . . 784
14.10 THC simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
14.11 Simulation post-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
14.12 Displaying graphical results of risk analysis with the 3D Viewer 803
15 Bathymetry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
15.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
15.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
15.3 Interpolation by kriging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
15.4 Superposition of models and smoothing of frontiers. . . . . . . . . . . 846
15.5 Local GeoStatistics (LGS) application to bathymetry mapping . . 851
Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
16 Image Filtering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
16.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
16.2 Exploratory Data Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
16.3 Filtering by Kriging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
16.4 Other Techniques. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
16.5 Comparing the Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
17 Boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
17.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
17.2 Boolean Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
17.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
Introduction
1 About This Manual
This manual develops a set of case studies. It is mainly designed:
• for new users, to get familiar with the software and to provide some guidelines for carrying a study through;
• for all users, to improve their geostatistical knowledge by following detailed geostatistical workflows.
Basically, each case study describes how to carry out some specific calculations in Isatis as precisely as possible. The data sets are located on your disk in a sub-directory, called Datasets, of the Isatis installation directory.
You may follow the workflow proposed in the manual (all the main parameters are described) and then compare the results and figures given in the manual with the ones you obtain from your own run.
Most case studies are dedicated to a given field (Mining, Oil & Gas, Environment, Methodology) and are therefore grouped together in the appropriate sections. However, new users are advised to run as many case studies as possible, whatever their field of application: each case study describes different functions of the package, which are not necessarily exclusive to one application field and may be useful in others.
Several case studies, namely In Situ 3D Resource Estimation (Mining), Property Mapping (Oil & Gas) and Pollution (Environment), cover almost the entire classic geostatistical workflow: exploratory data analysis, data selection and variography, univariate or multivariate estimation, and simulations.
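Isatis drives these workflow steps through its graphical interface, but the central calculation of the variography step can be usefully illustrated outside the software. The sketch below is an illustrative, simplified omnidirectional experimental semi-variogram, not the Isatis implementation; the function name and the toy unit-slope data are our own.

```python
import numpy as np

def experimental_variogram(coords, values, lags, tol):
    """Omnidirectional experimental semi-variogram: for each lag class,
    half the mean squared grade increment over the sample pairs whose
    separation distance falls in that class."""
    # pairwise separation distances and squared grade increments
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        m = (d >= h - tol) & (d < h + tol)      # pairs in the lag class
        gamma.append(0.5 * sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

# toy check: samples on a line with a unit-slope grade profile
coords = np.arange(10, dtype=float)[:, None]
values = coords[:, 0].copy()
g = experimental_variogram(coords, values, lags=np.array([1.0, 2.0]), tol=0.5)
# g is approximately [0.5, 2.0]
```

A real study would add directional tolerances and anisotropy, which the Exploratory Data Analysis application handles interactively.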
The other case studies are more specific and mainly deal with particular Isatis facilities, as described below:
• Non Linear: anamorphosis (with and without information effect), indicator kriging, disjunctive kriging, uniform conditioning, service variables and simulations.
• Non Stationary & Volumetrics: non stationary modeling, external drift kriging and simulations, volumetric calculations, spill point calculation, variable editor.
• Plurigaussian: an innovative facies simulation technique.
• Oil Shale: fault editor.
• Isatoil: multi-layer depth conversion with the Isatoil advanced module.
• Young Fish Survey, Acoustic Survey: polygon editor, global estimation.
• Image Filtering: image filtering, grid or line smoothing, grid operator.
• Boolean: boolean conditional simulations.
Note - Not all case studies are necessarily updated for each Isatis release. The last update and the corresponding Isatis version are therefore systematically given in the introduction of each case study.
Mining
2 In Situ 3D Resource Estimation
This case study is based on a real 3D data set kindly provided by Vale (Carajás mine, Brazil).
It demonstrates particular features related to the mining industry: domaining, processing of three-dimensional data, variogram modeling and kriging. A brief description of global estimation with change of support and of block simulations is also provided. A simple application of the use of local parameters in kriging and simulations is presented.
Reminder: while using Isatis, the on-line help is accessible at any time by pressing F1; it provides a full description of the active application.
Last update: Isatis version 2012
2.1 Workflow Overview
This case study aims to give a detailed description of the kriging workflow, and a brief introduction to the grade simulation workflow, for iron grades in a producing iron mine. This overview lists the sequence of Isatis applications in the order in which they are used in the case study. The list is nearly complete but not exhaustive.
Next to each application, two links are provided:
- the first link opens the application description in the User's Guide: this gives the user a complete description of the application as it is implemented in the software;
- the second link sends the user to the corresponding practical application example in the case study.
Applications in bold are the most important for achieving kriging and simulation:
• File/Import User's Guide Case Study
Imports the raw drillhole data.
• File/Selection/Macro User's Guide Case Study
Creates a macro-selection variable for each assay of the raw data, based on the lithological code. It is used to define two domains: rich ore and poor ore.
• File/Selection/Geographic User's Guide Case Study
Creates a geographic selection to mask 4 drillholes outside of the orebody.
• Tools/Copy Variable/Header to Line User's Guide Case Study
Copies the selection masking the drillhole headers to all assays of the drillholes.
• Tools/Regularization User's Guide Case Study
The assay compositing tool. A comparison of regularization by length and by domain is made. This step is compulsory to make the data additive for kriging. The composites regularized by domains are kept for the rest of the study.
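Regularization is done in the Isatis dialog; purely as an illustration of what length-weighted compositing means, here is a minimal sketch. The function name, the simplified interval handling (a single hole, whitespace-free intervals) and the toy assays are our own assumptions, not the Isatis algorithm.

```python
import numpy as np

def composite_by_length(tops, bottoms, grades, comp_len):
    """Length-weighted compositing of assay intervals into regular
    composites of comp_len metres down the hole. Each composite grade
    is the average of the overlapping assays, weighted by overlap length."""
    start, end = tops[0], bottoms[-1]
    edges = np.arange(start, end + 1e-9, comp_len)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # overlap length of each assay with the composite interval
        w = np.clip(np.minimum(bottoms, hi) - np.maximum(tops, lo), 0, None)
        if w.sum() > 0:
            comps.append((lo, hi, float((w * grades).sum() / w.sum())))
    return comps

# two 1 m assays (60 % and 62 % Fe) combined into one 2 m composite
c = composite_by_length(np.array([0.0, 1.0]), np.array([1.0, 2.0]),
                        np.array([60.0, 62.0]), comp_len=2.0)
# c[0] is (0.0, 2.0, 61.0)
```

Compositing by domain, as retained in the study, additionally restricts each composite to a single domain so that boundaries are not averaged across.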
• Statistics/Quick Statistics User's Guide Case Study
Different modes for computing statistics are illustrated: numerical statistics by domain, and graphic displays with boxplots or swathplots.
• Statistics/Exploratory Data Analysis User's Guide Case Study
The fundamental Isatis tool for QA/QC, 2D data displays, and statistical and variographic analysis.
• Statistics/Variogram Fitting User's Guide Case Study
The Isatis tool for variogram modeling. Different modes are illustrated:
- manual: the user chooses the basic structures (types, anisotropy, ranges and sills), entering the parameters at the keyboard or, for ranges and sills, interactively in the Fitting Window. This mode is used for modeling the variogram of the indicator of rich ore;
- automatic: the model is entirely defined (ranges, anisotropy and sills) from the types and number of nested structures the user wants to fit. This mode is used for modeling the Fe grade of rich ore.
• Statistics/Domaining/Border Effect User's Guide Case Study
Calculates statistical quantities based on domain indicators and grades, to visualize the behaviour of grades when getting closer to the transition between domains.
• Statistics/Domaining/Contact Analysis User's Guide Case Study
Represents graphically the behaviour of the mean grade as a function of the distance of samples to the contact between two domains.
• Interpolate/Estimation/(Co-)Kriging User's Guide Case Study
The Isatis kriging application. It is applied here to krige (1) the indicator of rich ore and (2) the Fe grade of rich ore on 75 m x 75 m x 15 m blocks. To take the geo-morphology of the deposit into account, kriging with Local Parameters is performed: the main axis of anisotropy and the neighborhood ellipsoid are changed between the northern and southern parts of the deposit.
• Statistics/Gaussian Anamorphosis Modeling User's Guide Case Study
The Isatis tool for normal score transform and for modeling the histogram on the composite support. This step is compulsory for any non linear application, including simulations. It is applied here to Fe in the rich ore domain.
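Isatis models the anamorphosis with Hermite polynomials; the empirical core of the idea, a rank-based normal score transform, can be sketched in a few lines. This is an illustrative stand-in under our own naming, not the Isatis Gaussian Anamorphosis Modeling algorithm, and the Fe values below are invented.

```python
from statistics import NormalDist
import numpy as np

def normal_score(values):
    """Rank-based normal score transform: each raw value is mapped to
    the standard Gaussian quantile of its plotting-position frequency,
    so the transformed data follow a standard normal histogram."""
    n = len(values)
    ranks = np.argsort(np.argsort(values))        # 0 .. n-1
    p = (ranks + 0.5) / n                         # plotting positions in (0, 1)
    nd = NormalDist()                             # standard Gaussian
    return np.array([nd.inv_cdf(pi) for pi in p])

fe = np.array([58.2, 65.9, 66.7, 67.7, 69.4])    # toy composite grades, %
y = normal_score(fe)
# y preserves the ranking of fe; the median sample maps to 0
```

The fitted anamorphosis function is also what later allows the back-transform of simulated gaussian values to raw grades.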
• Statistics/Support Correction User's Guide Case Study
The Isatis tool for modeling grade histograms on the block support. It is useful for global estimation and for non linear techniques (see the Non Linear case study).
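Isatis performs support correction within the Discrete Gaussian Model; as a much simpler stand-in to show what "change of support" does to a histogram, the classical affine correction shrinks point grades toward the mean. This sketch is ours, not the Isatis method, and in practice the variance ratio would be derived from the variogram model, not chosen by hand.

```python
import numpy as np

def affine_correction(point_grades, variance_ratio):
    """Affine change of support: shrink the point-support distribution
    toward its mean so that the block-support variance equals
    variance_ratio times the point-support variance (0 < ratio <= 1).
    The mean is preserved; only the spread is reduced."""
    m = point_grades.mean()
    return m + np.sqrt(variance_ratio) * (point_grades - m)

z = np.array([40.0, 55.0, 70.0, 75.0])            # toy point grades, %
zb = affine_correction(z, variance_ratio=0.5)
# same mean as z, half the variance
```

The reduced block variance is the reason block grade-tonnage curves are less selective than composite curves.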
• Tools/Grade Tonnage Curves User's Guide Case Study
Calculates and represents graphically the grade tonnage curves. Among the different possible modes, we compare the kriged panels and the distribution of grades on blocks obtained after support correction.
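The arithmetic behind a grade-tonnage curve is straightforward and worth seeing once outside the dialog: for each cutoff, sum the tonnage of the blocks above cutoff and take their tonnage-weighted mean grade. The function and the four-block example below are our own illustration, not Isatis output.

```python
import numpy as np

def grade_tonnage(block_grades, block_tonnage, cutoffs):
    """Grade-tonnage curve: for each cutoff grade, return the tonnage
    above cutoff and the mean grade of the selected blocks."""
    curve = []
    for zc in cutoffs:
        sel = block_grades >= zc
        t = float(block_tonnage[sel].sum())
        g = float((block_grades[sel] * block_tonnage[sel]).sum() / t) if t > 0 else float("nan")
        curve.append((zc, t, g))
    return curve

grades = np.array([50.0, 60.0, 65.0, 70.0])   # block Fe grades, %
tons = np.full(4, 1000.0)                     # equal block tonnages, t
curve = grade_tonnage(grades, tons, cutoffs=[0.0, 62.0])
# at a 62 % cutoff: 2000 t selected at a mean grade of 67.5 %
```

Comparing such curves for kriged panels versus support-corrected block distributions is exactly the comparison the application performs.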
• File/Create Grid File User's Guide Case Study
Creates a grid of 25 m x 25 m x 15 m blocks, on which we will simulate the ore type (1 for rich ore, 2 for poor ore) and the Fe, P and SiO2 grades.
• Tools/Migrate Grid to Point User's Guide Case Study
Transfers the selection variable defining the orebody from the 75 m x 75 m x 15 m panels to the 25 m x 25 m x 15 m blocks.
• Interpolate/Conditional Simulations/Sequential Indicator/Standard Neighborhood User's Guide Case Study
Simulation of the indicator of rich ore by the SIS method.
• Statistics/Gaussian Anamorphosis Modeling User's Guide Case Study
This application is run again, for the purpose of a multivariate grade simulation, to transform the Fe, P and SiO2 grades of the composites. The P grade distribution is modelled differently from Fe and SiO2, because of the presence of many values at the detection limit: the zero-effect distribution type is applied. As a result, the gaussian value assigned to P has a truncated gaussian distribution.
• Statistics/Exploratory Data Analysis User's Guide Case Study
The Exploratory Data Analysis is used for calculating the experimental variogram of the gaussian transform of P.
• Statistics/Variogram Fitting User's Guide Case Study
The Variogram Fitting is used, with the Truncation special option, for modeling the experimental variogram of the gaussian transform of P.
• Statistics/Gibbs Sampler User's Guide Case Study
The Gibbs Sampler algorithm is used to generate the final gaussian transform of P, with a true Gaussian distribution instead of a truncated one.
• Statistics/Exploratory Data Analysis User's Guide Case Study
The Exploratory Data Analysis is now used for calculating the experimental variograms of the gaussian transforms of Fe, P and SiO2.
• Statistics/Variogram Fitting User's Guide Case Study
The Variogram Fitting is used for modeling the trivariate experimental variograms of the gaussian transforms of Fe, P and SiO2. The Automatic Sill Fitting mode is used: the sills of all basic structures are automatically calculated using a least squares minimization procedure.
• Statistics/Modeling/Variogram Regularization User's Guide Case Study
The trivariate variogram model of the gaussian grades is regularized on the block support. A new experimental variogram is then obtained.
• Statistics/Variogram Fitting User's Guide Case Study
The Variogram Fitting is used for modeling the trivariate experimental variograms of the gaussian transforms of Fe, P and SiO2 on the block support (25 m x 25 m x 15 m). The Automatic Sill Fitting mode is used.
• Statistics/Modeling/Gaussian Support Correction User's Guide Case Study
Transforms the point anamorphosis and the variogram model so that they refer to the gaussian variables regularized on the block support. The result is a gaussian anamorphosis on block support and a variogram model referring to the block gaussian variables (mean 0, variance 1). These steps are compulsory for carrying out Direct Block Simulations.
• Interpolate/Conditional Simulations/Direct Block Simulations User's Guide Case Study
Simulations using the Turning Bands technique in the Discrete Gaussian Model (DGM) framework.
• Statistics/Variogram on Grid User's Guide Case Study
Calculates, for QC purposes, the experimental variograms of the simulated gaussian block values.
• Statistics/Data Transformation/Raw<->Gaussian Transformation User's Guide Case Study
Transforms the block gaussian simulations back into raw block values.
• Tools/Copy Statistics/Grid->Grid User's Guide Case Study
Calculates rich ore tonnages and metal quantities in the 75 m x 75 m x 15 m panels from the simulated 25 m x 25 m x 15 m blocks.
• File/Calculator User's Guide Case Study
Transforms the previous results into actual ore tonnages and metal quantities.
• Tools/Simulation Post-Processing User's Guide Case Study
Presents examples of post-processing of the simulations.
• 3D Viewer User's Guide Case Study
A brief description of the 3D Viewer module.
2.2 Presentation of the Dataset & Pre-processing
The data set is located in the Isatis installation directory (sub-directory Datasets/Mining) and consists of two ASCII files:
• borehole measurements are stored in the file boreholes.asc;
• a simple 3D geological model resulting from previous geological work (block size: 75 m horizontally and 15 m vertically) is provided in a 3D grid file called block model_75x75x15m.asc.
Firstly, a new study has to be created using the File / Data File Manager facility; then, it is advisable to verify the consistency of the units defined in the Preferences / Study Environment / Units window. In particular, it is suggested to use:
l Input Output Length Options:
Default Unit... = Length (m) Default Format...= Decimal (10,2)
l Graphical Axis Units:
X Coordinate = Length (km)
Y Coordinate = Length (km)
Z Coordinate = Length (m)
2.2.1 Borehole data
2.2.1.1 Data import
The boreholes.asc file begins with a header (commented by #) which describes its contents:
#
# structure=line , x_unit=m , y_unit=m , z_unit=m
#
# header_field=1 , type=alpha , name="drillhole ID"
# header_field=2 , type=xb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=3 , type=yb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=4 , type=zb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=5 , type=numeric , name="depth" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=6 , type=numeric , name="inclination" , ffff=" " ,
bitlength=32 ;
# f_type=Decimal , f_length=8 , f_digits=2 , unit="deg"
# header_field=7 , type=numeric , name="azimuth" , ffff=" " , bitlength=32
;
# f_type=Decimal , f_length=8 , f_digits=2 , unit="deg"
#
# field=1 , type=xe , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# field=2 , type=ye , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# field=3 , type=ze , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
#
# field=4 , type=numeric , name="Sample length" , ffff=" " , bitlength=32
;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="m"
# field=5 , type=numeric , name="Fe" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=6 , type=numeric , name="P" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=7 , type=numeric , name="SiO2" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=8 , type=numeric , name="Al2O3" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=9 , type=numeric , name="Mn" , ffff=" " , bitlength=32 ;
# f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=10 , type=alpha , name="Lithological code ALPHA" , ffff=" "
# field=11 , type=numeric , name="Lithological code INTEGER" , ffff=" "
, bitlength= 8 ;
# f_type=Integer , f_length= 4 , unit=" "
#
# ++++ --------- +++++++++ --------- +++++++++ --------- +++++++++
# ++++++++++ --------- +++++++++ --------- +++++++++ --------- +++++++++ ---
------ +++++++++ --------- ---------
*---- 1 026 1400.00 -195.00 804.21 144.46 90.00 0.00
1 1400.00 -195.00 799.71 4.50 65.90 0.13 0.20
0.90 0.07 6 6
2 1400.00 -195.00 795.32 4.39 66.70 0.12 0.10
0.90 0.08 6 6
3 1400.00 -195.00 791.22 4.10 67.70 0.11 0.20
0.50 0.08 3 3
The samples are organized along lines and the file contains two types of records:
• The header record (for collars), which starts with an asterisk in the first column and introduces a new line (i.e. a new borehole).
• The regular record, which describes one core of a borehole.
The file also contains two delimiter lines which define the offsets for both record types.
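The two-record structure can be illustrated with a minimal reader. This is our own sketch, not the Isatis importer: the real import honours the fixed column offsets defined by the delimiter lines, which we replace here by simple whitespace splitting, and the hypothetical drillhole ID "DH1" stands in for the real collar identifiers.

```python
def parse_drillholes(lines):
    """Minimal reader for the line-organized ASCII format described above:
    a '*'-prefixed record opens a new drillhole (collar); the records that
    follow, until the next '*', are its cores."""
    holes = {}
    current = None
    for line in lines:
        if line.startswith('#') or not line.strip():
            continue                         # skip commented header and blanks
        if line.startswith('*'):
            tokens = line.lstrip('*-').split()
            current = tokens[0]              # simplified: first token as hole ID
            holes[current] = []
        elif current is not None:
            holes[current].append(line.split())
    return holes

sample = [
    "# structure=line",
    "*---- DH1 1400.00 -195.00 804.21",
    "1 1400.00 -195.00 799.71 4.50 65.90",
    "2 1400.00 -195.00 795.32 4.39 66.70",
]
holes = parse_drillholes(sample)
# holes["DH1"] holds the two core records of the collar
```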
The dataset is read using the File / Import / ASCII procedure and stored in two new files of a new
directory called Mining Case Study:
• The file Drillholes Header, which contains the header of each borehole, stored as isolated points.
• The file Drillholes, which contains the cores measured along the boreholes.
(snap. 2.2-1)
You can check in File / Data File Manager (by pressing "s" for statistics on the Drillholes file) that the data set contains 188 boreholes, representing a total of 5954 samples. There are five numeric variables (heterotopic dataset), whose statistics are given in the next table (using Statistics / Quick Statistics):

Variable   Number   Minimum   Maximum   Mean    St. Dev.
Al2O3      3591     0.07      44.70     1.77    4.14
Fe         5069     4.80      69.40     60.51   14.19
Mn         5008     0.        30.70     0.58    1.75
P          5069     0.        1.        0.06    0.08
SiO2       3594     0.05      75.50     1.54    4.32

We will focus mainly on the Fe variable. Also note the presence of an alphanumeric variable called Lithological code ALPHA.
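The "Number" column differs between variables because the data set is heterotopic: not every sample is assayed for every element. As an illustration only (our own sketch, not Quick Statistics), a per-variable summary can be computed by dropping the missing samples before taking statistics; the three Fe values below are invented.

```python
import numpy as np

def quick_stats(values):
    """Per-variable summary in the spirit of the table above; NaN marks
    samples where the variable was not assayed (heterotopic data)."""
    v = values[~np.isnan(values)]            # keep assayed samples only
    return {"number": len(v),
            "minimum": float(v.min()), "maximum": float(v.max()),
            "mean": float(v.mean()), "st_dev": float(v.std(ddof=1))}

fe = np.array([65.9, 66.7, 67.7, np.nan])    # toy grades; one sample unassayed
s = quick_stats(fe)
# s["number"] is 3: the unassayed sample is excluded
```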
2.2.1.2 Borehole data visualization without the 3D viewer
Note - To visualize boreholes with the Isatis 3D viewer module, see the dedicated paragraph at the
end of this case study.
All the 2D Display facilities are explained in detail in the Displaying & Editing Graphics chapter
of the Beginner's Guide.
To visualize the lines without the 3D viewer, perform the following steps:
• Click on Display / New Page.
• In the Contents, for the Representation Type, choose Perspective.
• Double-click on Lines. An Item Contents for: Lines window appears:
- In the Data area, select the file Mining Case Study/Drillholes, without selecting any variable, as we are looking for a display of the borehole geometry only.
- Click on Display, then OK. The lines appear in the graphic window.
• To change the View Point, click on the Camera tab and choose for instance:
- Longitude = -46
- Latitude = 20
• Using the Display Box tab, deselect the Automatic Scales toggle and stretch the vertical dimension Z by a factor of 3.
• Click on Display.
• You should obtain the following display. You can save this template to reproduce it automatically later: just click on Application / Store Page As in the graphic window.
(fig. 2.2-1)
The data set is contained in the following portion of space:

        Minimum    Maximum
X       0.009 km   3.97 km
Y       -0.35 km   3.77 km
Z       -54.9 m    +811.8 m

Most of the boreholes are vertical and horizontally spaced approximately every 150 m. The vertical dimension is oriented upwards.
2.2.1.3 Creation of domains
In order to demonstrate the Isatis capabilities linked to domaining, a simplified approach is presented here. It consists in splitting the assays into two categories:
• the first one, called rich ore, corresponds to the lithological codes 1, 3 and 6;
• the second one, called poor ore, corresponds to the lithological codes 10 and above.
A macro-selection final lithology[xxxxx] is created using File / Selection / Macro. After asking to create a New Macro Selection Variable and defining its name, final lithology, in the Data File, you have to click on New.
(snap. 2.2-2)
For creating the Rich ore, Poor ore and Undefined indices, you should give the name you want
(this has to be repeated three times). Then, in the bottom part of the window, you will define the
rules to apply. For each rule, you have to choose the variable it depends on, here Lithological Code Integer, and the criterion to apply among the list you get by clicking on the button
proposing Equals as default:
m in the case of Rich ore you choose Is Lower or Equals to 9,
m in the case of Poor ore you choose to match 2 rules (see the snapshot on the previous page),
m in the case of Undefined you choose to match any of two rules (see the next snapshot).
(snap. 2.2-3)
2.2.1.4 Drillholes selection
From the display of the drillholes, we can see that 4 are outside the area covered by the other
drillholes. We will mask these drillholes for the rest of the study by using the File / Selection / Geographic menu.
The procedure "File / Selection / Geographic" is used to visualize and to perform a masking operation based on complete boreholes or, more selectively, on composites within a borehole.
We create the selection mask drillholes outside in the Drillholes header file.
(snap. 2.2-4)
When pressing the "Display as Points" button, the following graphic window opens, representing
the headers of all the boreholes in a 2D XOY projection by a green + symbol (according to the
menu Preferences / Miscellaneous).
(snap. 2.2-5)
Pick the 4 boreholes with the left mouse button; their symbols start blinking. They can then be
masked by pressing the menu button of the mouse and clicking on Mask; the 4 masked boreholes are
then represented with a red square (according to the menu Preferences / Miscellaneous).
In the Geographic Selection window, the number of selected samples (i.e. boreholes) appears
(184 out of 188). To store the selection you must click on Run.
(snap. 2.2-6)
This selection is defined on the drillhole collars. In order to apply this selection to all samples of the
drillholes, a possible solution is to use the menu Tools / Copy Variable / Header Point -> Line.
(snap. 2.2-7)
2.2.1.5 Borehole data compositing
The compositing (or regularization) is an essential phase of a study using 3D data, especially in the
mining industry, although the principle is much more general. The idea is that geostatistics will
consider each datum with the same importance (prior to assigning a weight in the kriging process,
for example), as it does not make sense to combine data that do not represent the same amount of
material.
Therefore, if data is measured on different support sizes, a first, essential task is to convert the
information into composites of the same dimension. This dimension is usually a multiple of the size
of the smallest sample, and is related to the height of the benches, which is in this case 15m.
l This operation can be achieved in different ways:
m the boreholes are cut into intervals of the same length from the borehole collar, or into intervals
intersecting the boreholes and a regular system of horizontal benches. This is performed with
the Tools / Regularization by Benches or by Length facility, which consists of creating a replica of
the initial data set where all the variables of interest in the input file are converted into composites.
m the boreholes are cut into intervals of the same length, determined on the basis of the domain
definition. Each time the domain assigned to the assay changes, a new composite is created. The
advantage of this method is to get more homogeneous composites. It is performed with the
Tools / Regularization by Domains facility.
m We will work on the 5 numerical variables Al2O3, Fe, Mn, P and SiO2.
m The regularization by length is performed on the 5 numerical variables Al2O3, Fe, Mn, P and
SiO2 and on the lithological code, in order to keep for each composite the information on the
most abundant lithology and the corresponding proportion. The new files are called:
- Composites 15m by length header for the header information (collars).
- Composites 15m by length for the composite information.
m Regularization mode: By Length measured along the borehole: this is the selected option as
some boreholes are inclined, with a constant length of 15m.
m Minimum Length: 7.5 m. It may happen that the first composite, or the last composite (or
both), does not have the requested dimension. Keeping too many of those incomplete samples
would lead us back to the initial problem of samples of different dimensions being considered
with the same importance: this is why the minimum length is set to 7.5 m (i.e. half of
the composite size).
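The length-based compositing just described can be sketched in a few lines of Python. This is a simplified illustration of the principle (length-weighted averaging with a minimum-length rule), not the Isatis algorithm; the `composite_by_length` function and its (from, to, grade) sample tuples are hypothetical:

```python
# Hedged sketch of regularization by length: cut one borehole into
# fixed-length composites measured along the hole, length-weighting the
# grades, and discard composites shorter than a minimum length.
# Simplified illustration only -- not the Isatis algorithm.

def composite_by_length(samples, length=15.0, min_length=7.5):
    """samples: list of (from_m, to_m, grade) measured along the borehole."""
    if not samples:
        return []
    top = min(frm for frm, _, _ in samples)
    end = max(to for _, to, _ in samples)
    composites = []
    while top < end:
        bottom = min(top + length, end)
        acc = tot = 0.0
        for frm, to, grade in samples:
            overlap = min(to, bottom) - max(frm, top)
            if overlap > 0:
                acc += grade * overlap
                tot += overlap
        # keep only composites at least the minimum length (7.5 m here)
        if tot >= min_length:
            composites.append((top, bottom, acc / tot))
        top = bottom
    return composites
```

For instance, a 25 m hole composited at 15 m yields one full composite and one 10 m residual composite, which is kept since 10 m exceeds the 7.5 m minimum.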
(snap. 2.2-8)
m Three boreholes are not reproduced in the composite file as their total length is too small
(less than 7.5m): boreholes 93, 163 and 171. There are 1282 composites in the new output
file.
l The regularization by domain will calculate composites for two domains rich ore and poor
ore. The macro selection defining the domains in the input file is created with the same indices
in the output composites file. The selection mask drillholes outside is activated to regularize
only the boreholes within the orebody envelope. Only Fe, P, SiO2 are regularized. The new files
are called:
m Composites 15m header for the header information (collars).
m Composites 15m for the composite information.
m The Undefined Domain is assigned to the Undefined index. This means that when a sample is
in the Undefined Domain, the compositing procedure keeps on going (see the on-line Help for
more information).
m The Analysed Length is kept for each grade element.
m The option Merge Residual is chosen, which means that the last composite is merged with
the previous one if its length is less than 50% of the composite length.
(snap. 2.2-9)
There are 1556 composites on the 184 boreholes in the new output file. From now on, all geostatistical processes will be applied to that file of composites regularized by domains.
Using Statistics / Quick Statistics we can obtain different types of statistics, for example:
The statistics on the Fe grades by domain. Note that after compositing there are no more
Undefined composites.
(snap. 2.2-10)
(snap. 2.2-11)
l Graphic representations with Boxplots by slicing according to the main axes of the space.
(snap. 2.2-12)
l Swathplots by slicing according to the main axes of the space.
(snap. 2.2-13)
(snap. 2.2-14)
The swathplot along OY shows, for Fe in rich ore, a decreasing trend from South to North.
2.2.2 Block model
2.2.2.6 Grid import
The block model_75x75x15m.asc file begins with a header (Isatis format, commented by #) which
describes its contents:
#
# structure=grid, x_unit="m", y_unit="m", z_unit="m";
# sorting=+Z +Y +X ;
# x0= 150.00 , y0= -450.00 , z0= 310.00 ;
# dx= 75.00 , dy= 75.00 , dz= 15.00 ;
# nx= 28 , ny= 47 , nz= 31 ;
# theta= 0 , phi= 0 , psi= 0
# field=1, type=numeric, name="geographic domain", bitlength=32;
# ffff="N/A", unit="";
# f_type=Integer, f_length=9, f_digits=0;
# description="Creation Date: Mar 21 2006 15:13:15"
#
#+++++++++
0
0
0
The file contains only one numeric variable named geographic domain which equals 0, 1 or 2:
l 0 means the grid node lies outside the orebody,
l 1 means the grid node lies in the southern part of the orebody,
l 2 means the grid node lies in the northern part of the orebody.
Launch File/Import/ASCII... to import the grid in the Mining Case Study directory and call it 3D
Grid 75x75x15 m.
You have now to create a selection variable, called orebody, for all blocks where the geographic
code is either 1 or 2, by using the menu File / Selection / Intervals.
(snap. 2.2-15)
2.2.2.7 Visualization without the 3D viewer
Note - To visualize with the Isatis 3D viewer module, see the dedicated paragraph at the end of this
case study.
Click on Display / New Page in the Isatis main window. In the Contents window:
l In the Contents list, double click on the Raster item. A new Item contents for: Raster window
appears, in order to let you specify which variable you want to display and with which color
scale:
m Grid File...: select orebody variable from the 3D Grid 75x75x15 m file,
m In the Grid Contents area, enter 16 for the rank of the section XOY to display.
m In the Graphic Parameters area below, the default color scale is Rainbow.
m In the Item contents for: Raster window, click on Display.
m Click on OK.
l Your final graphic window should be similar to the one displayed hereafter.
(fig. 2.2-2)
The orebody lies approximately North-South, with a curve towards the southwestern part. The
northern part thins out towards the north and has a dipping plane striking North with a
western dip of approximately 15°. This particular geometry will be taken into account during the variographic analysis.
2.3 Variographic Analysis
This step describes the structural analysis performed on the 3D data set. In a first stage we consider
only the Fe grade of the rich ore (univariate analysis) on the 15 m composites. The estimation
requires, for each block, an estimate of the proportion of rich ore and of its grade. The analysis has
therefore to be made:
l on the rich ore indicator variable, which is defined on all composites,
l and on the rich ore Fe grade, which is defined on rich ore composites only.
The Exploratory Data Analysis (EDA) will be used in order to perform quality control, check statistical characteristics and establish the experimental variograms. Then variogram models will be
fitted.
2.3.1 Variographic analysis of rich ore indicator
The workflow that has been applied illustrates some important capabilities of Exploratory Data
Analysis; the decisions that are taken would probably require more detailed analysis in a real study.
The main steps of the workflow, detailed in the next pages, are:
l Calculation of the rich ore indicator.
l Variogram map in horizontal slices to confirm the existence of anisotropy.
l Calculation of directional variograms in the horizontal plane. For simplification we keep 2 orthogonal directions, East-West (N90) and North-South (N0).
l Check that the main directions of anisotropy are swapped when looking at northern or southern
boreholes.
l Save the indicator variogram in the northern part (where most of the data are), with the idea
that the variogram in the southern part is the same as in the North with the N0 and N90
directions of the anisotropy inverted. In practice this will be realized at the kriging/simulation stage by
the use of Local Parameters for the variogram structures.
l Variogram Fitting using a combination of Automatic and Manual mode.
2.3.1.1 Calculation of the indicator
Use File / Calculator to assign the macro-selection index corresponding to rich ore to a float variable Indicator rich ore.
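In terms of arithmetic, the indicator is simply a 0/1 recoding of the domain, whose mean is the rich ore proportion. A hedged Python sketch (the function name and domain labels are hypothetical, not the Isatis calculator syntax):

```python
# Hedged sketch of the indicator computed by File / Calculator: 1.0 where
# the composite belongs to the rich ore domain, 0.0 elsewhere; its mean is
# the proportion of rich ore samples.
def rich_ore_indicator(domains):
    return [1.0 if d == "rich" else 0.0 for d in domains]

ind = rich_ore_indicator(["rich", "poor", "rich", "rich"])
proportion = sum(ind) / len(ind)  # mean of the indicator = 0.75
```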
(snap. 2.3-1)
2.3.1.2 Experimental Variogram of the Indicator
Launch Statistics/Exploratory Data Analysis... to start the analysis on the variable Indicator rich
ore:
(snap. 2.3-2)
Highlight the Indicator rich ore variable in the main EDA window and open the Base Map and His-
togram:
(fig. 2.3-1)
The mean value gives the proportion of rich ore samples.
The variogram map allows checking for potential anisotropy. After clicking on the variogram map, with
Define Parameters Before Initial Calculations switched on, you should choose the parameters as
shown in the next figure. You define parameters for horizontal slices, i.e. Ref. Plane UV with No
rotation.
Switch off the button Define the Calculations in the UW Plane and in the VW Plane, using the corresponding tabs.
With 18 directions, each direction makes an angle of 10° with the previous one. By asking a Tolerance on Directions of 2 sectors, the variograms are calculated from pairs in a given direction +/- 25°.
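The sector logic can be sketched as follows. This is an illustration under assumptions (function name hypothetical; Isatis' exact angular conventions may differ):

```python
import math

# Hedged sketch of binning a pair separation vector into one of 18
# directions (10 degrees apart) in the horizontal plane, as done for the
# variogram map calculation.
def direction_sector(dx, dy, n_dirs=18):
    az = math.degrees(math.atan2(dy, dx)) % 180.0  # undirected azimuth
    width = 180.0 / n_dirs                         # 10 degrees per sector
    return int(az // width) % n_dirs
```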
(Base map of the composites and histogram of Indicator rich ore: Nb Samples: 1556, Minimum: 0.000, Maximum: 1.000, Mean: 0.627, Std. Dev.: 0.484)
(snap. 2.3-3)
(snap. 2.3-4)
After pressing OK you get the representation of the Variogram Map. In the Application menu, ask
Invert View Order to have the variogram map and the extracted experimental variograms in a landscape
view.
In the Application menu, ask Graphic Specific Parameters and change the Color Scale to Rainbow Reversed.
In the variogram map representation, drag with the mouse a zone containing all directions. With the
menu button, ask Activate Direction. You will then visualize the experimental variograms in the 18
directions of the horizontal plane. They clearly exhibit an anisotropic behaviour.
(snap. 2.3-5)
We will now calculate the experimental variograms directly from the main EDA window by clicking on the Variogram bitmap at the bottom of the window. In the next figure we can see the parameters used for the calculation of 4 directional variograms in the horizontal plane and the vertical
variogram.
(snap. 2.3-6)
(snap. 2.3-7)
(snap. 2.3-8)
For the sake of simplicity we decide to keep only 2 directions: N0, showing more continuity, and the
perpendicular direction N90.
The procedure to follow is:
l In the List of Options, change from Omnidirectional to Directional.
l In Regular Direction choose Number of Regular Directions 2 and switch on Activate Direction
Normal to the Reference Plane. Click Ok and go back to the Variogram Calculation Parameters
window.
(snap. 2.3-9)
You have then to define the parameters for each direction. Click the parameter table to edit it. To
apply the same parameters to the 2 horizontal directions, you must highlight these directions
in the Directions list of the Directions Definition window.
l For the two regular directions, choose the following parameters:
m Label for direction 1: N90 (default name)
m Label for direction 2: N0
m Tolerance on direction: 45° (in order to consider all samples without overlapping)
m Lag value: 90 m (i.e. approximately the distance between boreholes)
m Number of lags: 15 (so that the variogram will be calculated over a 1350 m distance)
m Tolerance on Distance (proportion of the lag): 0.5
m Slicing Height: 7.55 m (adapted to the height of composites)
m Number of Lags Refined: 1
m Lag Subdivision: 45 m (so that we can have the variogram at short distance from the closely
spaced drillholes).
l For the normal direction, choose the following parameters:
m Label for direction 1: Vertical
m Tolerance on angle: 22.5°
m Lag value: 15 m
m Number of lags: 10
m Tolerance on lags (proportion of the lag): 0.5
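As a hedged illustration of what these lag parameters mean, here is a minimal one-dimensional Python sketch of an experimental semivariogram (distances measured along one direction only; the real calculation also applies the angular tolerance and slicing height defined above, and the function name is hypothetical):

```python
# Hedged 1D sketch of an experimental semivariogram along one direction:
# gamma(h) = average of 0.5*(z_i - z_j)^2 over pairs whose separation
# falls within lag*k +/- tol*lag.
def experimental_variogram(coords, values, lag=90.0, n_lags=15, tol=0.5):
    gammas = []
    for k in range(1, n_lags + 1):
        target = k * lag
        acc, npairs = 0.0, 0
        for i in range(len(coords)):
            for j in range(i + 1, len(coords)):
                if abs(abs(coords[i] - coords[j]) - target) <= tol * lag:
                    acc += 0.5 * (values[i] - values[j]) ** 2
                    npairs += 1
        gammas.append(acc / npairs if npairs else None)  # None: empty lag
    return gammas
```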
l In the Application Menu ask for Graphic Specific Parameters and click on the toggle button
for the display of the Histogram of Pairs.
(snap. 2.3-10)
Because the general shape of the orebody is anisotropic, we will calculate the variogram restricted
to the northern part and to the southern part of the orebody.
To do so, you will use the capabilities of the linked windows of EDA, by masking samples in the Base
Map. The variograms will automatically be recalculated with only the selected samples.
For instance, in the Base Map, drag a box around the data in the southern part (as shown on the figure) and, with the menu button of the mouse, ask Mask. You will then get the variogram calculated
from the northern data.
(snap. 2.3-11)
In the next figure we compare the variograms calculated from the northern and the southern data.
The main directions of anisotropy are swapped between North and South.
(snap. 2.3-12)
(snap. 2.3-13)
We now decide to fit a variogram model on the northern variogram, which is calculated with the
most abundant data. We will then apply the same variogram to the southern data with the
main axes of anisotropy swapped. This will be realized by means of local parameters attached to the
variogram model and to the neighborhood.
In the graphic window containing the experimental variogram in the northern zone, click on Application / Save in Parameter File and save the variogram under the name Indicator rich ore North.
2.3.1.3 Variogram Modeling of the Indicator rich ore
You must now define a Model which fits the experimental variogram calculated previously. In the
Statistics / Variogram Fitting application, define:
l the Parameter File containing the set of experimental variograms: Indicator rich ore North.
l the Parameter File in which you wish to save the resulting model: Indicator rich ore
Click on Show Advanced Parameters.
(snap. 2.3-14)
l Set the toggles Fitting Window and Global Window ON; the program displays automatically
one default spherical model. The Fitting window displays one direction at a time (you may
choose the direction to display through Application/Variable & Direction Selection...), and the
Global window displays every variable (if several) and direction in one graphic.
l To display each direction in separate views, click in the Global Window on Application /
Graphic Specific Parameters and choose the Manual mode. Set Nb of Columns to 3,
then, for each Current Column in turn, click Add and pick in the View Contents
area the First Variable, the Second Variable and the Direction.
(snap. 2.3-15)
l When pressing the Edit button next to the variogram model, the Model Definition sub-window
opens and the user can choose the basic structures. The model must reflect:
m the variability at short distances, with a consistent nugget effect,
m the main directions of anisotropy,
m the general increase of the variogram.
The model is automatically defined with the same rotation definition as the experimental vario-
gram. Three different structures have been defined (in the Model Definition window, use the Add
button to add a structure, and define its characteristics below, for each structure):
l Nugget effect,
l Anisotropic Exponential model with the following respective ranges along U, V and W: 700 m,
550 m and 70 m,
l Anisotropic Exponential model with the following respective ranges along U, V and W: 500
m, 5000 m and nothing (which means that it is a zonal component with no contribution in the
vertical direction).
Do not specify the sill for each structure at this stage, instead:
l click Nugget effect in the main Variogram Fitting window, set the toggle button Lock the Nugget Effect Components During Automatic Sill Fitting ON and enter the value 0.065.
l set the toggle Automatic Sill Fitting ON. The program automatically computes the sills and displays the results in the graphic windows.
l A final adjustment is necessary, particularly to get a total sill of 0.25, which is the maximum
admissible for a stationary indicator variogram (the variance of an indicator with proportion p is
p(1-p), at most 0.25). Set the toggle Automatic Sill Fitting OFF from
the main Variogram Fitting window, then in the Model Definition window set the sill for the
first exponential to 0.14 and the sill for the second exponential to 0.045.
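The nested model can be written down numerically. Below is a hedged Python sketch of its evaluation, assuming the practical-range convention gamma(h) = C*(1 - exp(-3h/a)) for the exponential structures and representing the zonal component by an infinite vertical range; both conventions are assumptions, so check the Isatis documentation for the exact ones used:

```python
import math

# Hedged sketch of the fitted nested model: nugget (0.065) plus two
# anisotropic exponentials (sills 0.14 and 0.045).
def gamma_model(hu, hv, hw):
    def expo(sill, au, av, aw):
        # anisotropic scaled distance (a range of inf gives a zonal term)
        h = math.sqrt((hu / au) ** 2 + (hv / av) ** 2 + (hw / aw) ** 2)
        return sill * (1.0 - math.exp(-3.0 * h))
    nugget = 0.0 if (hu, hv, hw) == (0.0, 0.0, 0.0) else 0.065
    return (nugget
            + expo(0.14, 700.0, 550.0, 70.0)
            + expo(0.045, 500.0, 5000.0, float("inf")))  # zonal in W

# total sill = 0.065 + 0.14 + 0.045 = 0.25, the admissible maximum
```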
The final model is saved in the parameter file by clicking Run in the Variogram Fitting window.
(snap. 2.3-16)
2.3.2 Variographic Analysis of Fe rich ore
2.3.2.4 Experimental Variogram of Fe rich ore
Launch Statistics/Exploratory Data Analysis... to start the analysis on the variable Fe using the
selection for the rich ore composites.
(snap. 2.3-17)
You will calculate the variograms in 2 directions of the dipping plane striking North with a western dip
of 15°. In the Calculation Parameters, choose Directional in the List of Options. Then click Regular Directions; a new window, Directions, pops up, where you will define the Reference Direction
and switch on Activate Direction Normal to the Reference Plane.
(snap. 2.3-18)
Click Reference Direction; in the 3D Direction Definition window, set the convention to User Defined
and define the rotation parameters as shown in the next figure.
(snap. 2.3-19)
The reference direction U (in red) corresponds to the N121 main direction of anisotropy.
The calculation parameters are then chosen as shown in the next figure.
(snap. 2.3-20)
The next figure shows the experimental variograms.
Two points may be noted:
l the anisotropy is not really marked; we will recalculate an isotropic variogram in the horizontal
plane,
l the second point of the variogram for the direction N121, calculated with 42 pairs, shows a peak
that we can explain by using the Exploratory Data Analysis linked windows.
(snap. 2.3-21)
To use the linked windows, the following actions have to be performed:
l ask to display the histogram (accept the default parameters),
l in the Graphic Specific Parameters of the graphic page containing the experimental variogram,
set the toggle button Variogram Cloud (if calculated) OFF, and click on the radio button Pick
from Experimental Variogram,
l in the Calculation Parameters of the graphic page containing the experimental variogram, set
the toggle button Calculate the Variogram Cloud ON,
l in the graphic page, click on the experimental point with 43 pairs and ask Highlight in the mouse
menu. The variogram point is then represented as a blue square, and all data making up its
pairs are represented by the part painted in blue in the histogram.
(snap. 2.3-22)
The high variability due to pairs made of samples with low values is responsible for the peak in
the variogram. This can be proved by clicking in the histogram on the bar of the minimum values and
asking Mask in the mouse menu; the variograms are automatically recalculated and
no longer show the anomalous point, as shown in the next figure.
(snap. 2.3-23)
l We now re-calculate the variograms with 2 directions, omnidirectional in the horizontal plane
and vertical, with the parameters shown hereafter, entered by clicking Regular Directions....
(snap. 2.3-24)
(snap. 2.3-25)
In the graphic containing this last variogram, ask Application / Save in Parameter File to
save the variogram with the name Fe rich ore.
2.3.2.5 Variogram Modeling of Fe rich ore
In the Statistics / Variogram Fitting application, define:
l the Parameter File containing the set of experimental variograms: Fe rich ore
l the Parameter File in which you wish to save the resulting model: Fe rich ore
Open the Model Initialization window in order to fit an automatic model with a nugget effect and
2 spherical structures (short and long range).
(snap. 2.3-26)
In the Global window, you represent the variograms in two columns; the automatic model
looks satisfactory, so you click Run in the Variogram Fitting window to save it.
(fig. 2.3-2)
2.3.3 Analysis of border effects
This chapter may be skipped on a first reading as it does not change anything in the Isatis study. It
helps to decide whether kriging/simulation will be made using a hard or soft boundary.
In order to understand the behaviour of Fe grades when the samples are close to the border between
rich and poor ore, we can use two applications:
l Statistics / Domaining / Border effect calculates bi-point statistics from pairs of samples belonging to different domains. The pairs are chosen in the same way as for experimental variogram
calculations.
l Statistics / Domaining / Contact Analysis calculates the mean values of samples of 2 domains
as a function of the distance to the contact between these domains along the drillholes.
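A minimal sketch of the first statistic, assuming samples listed down a single borehole with one domain flag per sample (the function name and the "R"/"P" domain codes are hypothetical, not the Isatis implementation):

```python
# Hedged sketch of the border-effect statistic Mean[Z(x+h) | Z(x)] along a
# single borehole: for each lag (counted in samples), average the grade at
# x+h over pairs with x in domain_from and x+h in domain_to.
def border_effect_mean(z, dom, domain_from, domain_to, max_lag):
    means = []
    for h in range(1, max_lag + 1):
        vals = [z[i + h] for i in range(len(z) - h)
                if dom[i] == domain_from and dom[i + h] == domain_to]
        means.append(sum(vals) / len(vals) if vals else None)
    return means
```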
2.3.3.6 Statistics on Border effect
Launch Statistics / Domaining / Border effect and choose, in the file Composites 15m, the Macro
Selection Variable final lithology[xxxxx], which contains the definition of all domains, and the variable of interest Fe.
In the list of Domains you may pick only some of them, in this case Rich ore and Poor ore, while
you ask to Mask Samples from Domain, choosing Undefined.
In the Calculation Parameters sub-window, we define the parameters for 3 directions by pressing
the corresponding tabs in turn and switching on the toggle Activate Direction. For the 3 directions
the parameters are:
Switch on the three toggle buttons for the Graphic Parameters and click on Run.
(snap. 2.3-27)
Three graphic pages corresponding to the three statistics are then displayed:
l Transition Probability, which, in the case of only 2 domains, is not very informative.
(snap. 2.3-28)
l Mean [Z(x+h)|Z(x)], which shows that when going from Rich ore to Poor ore there is a border
effect (the grade of the new domain, i.e. Poor ore, is higher than the mean Poor ore grade, which
means it is influenced at short distance by the proximity to Rich ore samples). Conversely, when
going from Poor ore to Rich ore there is no border effect.
(snap. 2.3-29)
(The four views show the mean Fe grade versus distance (0-1500 m) for: Fe entering in Rich ore; Fe(x+h) in Rich ore | x in Poor ore; Fe entering in Poor ore; Fe(x+h) in Poor ore | x in Rich ore.)
l Mean Diff[Z(x+h)-Z(x)], which shows that when going from Rich ore to Poor ore, as well as
from Poor ore to Rich ore, the grade difference is influenced by the proximity of both
domains.
(snap. 2.3-30)
2.3.3.7 Contact Analysis
Launch Statistics / Domaining / Contact Analysis and choose, in the file Composites 15m, the
Macro Selection Variable final lithology[xxxxx], which contains the definition of all domains, and
the variable of interest Fe. Set the variables Direct Distance Variable and Indirect Distance
Variable to None, which means that the contact point is determined where the domain changes
down the boreholes.
In the list of Domains, pick Rich ore for Domain 1 and Poor ore for Domain 2, while leaving
Use Undefined Domain Variable Off.
The statistics are calculated as a function of the distance to the contact along the drillhole; you have
the possibility to select only some of the drillholes according to a specific direction with an angular
tolerance. In this case, as most of the drillholes are vertical, we select all drillholes by choosing a
tolerance of 90° on the vertical direction defined by the three rotation angles Az=0, Ay=90, Ax=0 (Mathematician Convention). The samples are regrouped by Distance Classes of 15 m.
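The distance-class grouping can be sketched for one vertical borehole as follows (a simplified illustration with hypothetical names, locating a single contact where the domain changes and averaging grades by 15 m classes on each side):

```python
# Hedged sketch of contact analysis along one borehole: find the first
# contact (domain change between consecutive samples), then average grades
# by distance class on each side of it.
def contact_means(z, dom, class_size=15.0, sample_len=15.0):
    contact = next(i + 1 for i in range(len(dom) - 1) if dom[i] != dom[i + 1])
    groups = {}
    for i, grade in enumerate(z):
        dist = abs(i + 0.5 - contact) * sample_len  # sample centre to contact
        key = (dom[i], int(dist // class_size))     # (domain, distance class)
        groups.setdefault(key, []).append(grade)
    return {k: sum(v) / len(v) for k, v in groups.items()}
```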
(The four views show the mean grade difference versus distance (0-1500 m) for: Diff Fe (x+h in Rich ore, x NOT); Diff Fe (x+h in Rich ore | x in Poor ore); Diff Fe (x+h in Poor ore, x NOT); Diff Fe (x+h in Poor ore | x in Rich ore).)
(snap. 2.3-31)
Two graphic pages are then displayed:
l Contact Analysis (Oriented) contains two views:
m Direct for statistics calculated in the Reference Direction
m Indirect for statistics calculated in the direction opposite to the Reference Direction
In the Application menu of the graphic pages we ask the Graphical Parameters, as shown
below, to display the Number of Points and the Mean per Domain.
(snap. 2.3-32)
(snap. 2.3-33)
l Contact Analysis (Non-Oriented) displays the average of the two previous ones.
(snap. 2.3-34)
From these graphs it appears that the poor grades are influenced by the proximity of rich grades.
In conclusion, we decide for the kriging and simulation steps to apply a hard boundary when dealing
with rich ore.
2.4 Kriging
We are now going to estimate, on 75 m x 75 m x 15 m blocks, the tonnage and Fe grades of Rich ore.
Therefore, we will perform two steps:
l Kriging of the Indicator of Rich ore to get the estimated proportion of rich ore, from which the
tonnage can be deduced.
l Kriging of the Fe grade of rich ore using only the rich ore samples. Each block is then estimated
as if it were entirely in rich ore; by applying the estimated tonnage, we can then obtain an
estimate of the Fe metal content.
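The two-step combination can be sketched per block as follows (a hedged illustration: the function name and the density value are hypothetical, as the case study does not state a density):

```python
# Hedged sketch of combining the two kriged quantities per block: the
# kriged indicator gives the proportion of rich ore (hence its tonnage),
# and multiplying by the kriged Fe grade (in %) gives the Fe metal content.
def block_rich_ore(proportion, fe_grade_pct, block_volume=75 * 75 * 15,
                   density=3.0):
    tonnage = proportion * block_volume * density  # tonnes of rich ore
    metal = tonnage * fe_grade_pct / 100.0         # tonnes of contained Fe
    return tonnage, metal
```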
2.4.1 Kriging of indicator of rich ore with local parameters
After the variographic analysis, it was found that the variogram model has a horizontal anisotropy
whose orientation differs between the northern and southern parts of the orebody. We will then use
that orientation as a local parameter recovered from the grid file in a variable called RotZ. As a first
attempt, which should be sufficient in this case because of the orebody shape, we will use two values:
90° for blocks in the southern area and 0° for the northern area, both areas being defined by means
of the geographic code variable (respectively 1 and 2). These values are stored in the grid file by
using File / Calculator.
(snap. 2.4-1)
Then you launch Interpolate / Estimation / (Co)Kriging.
(snap. 2.4-2)
You need to specify the type of calculation to Block and the number of variables to 1, then:
l Input File: Indicator rich ore (Composites on 15m with the selection None).
l The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody selec-
tion active:
m Kriging indicator rich ore for the estimation of Indicator rich ore
m Kriging indicator rich ore std dev for the kriging standard deviation
l The variogram model contained in the Parameter File called Indicator rich ore.
l The neighborhood: open the Neighborhood... definition window and specify the name (Indica-
tor rich ore for instance) of the new parameter file which will contain the following parameters,
to be defined from the Edit... button nearby. The neighborhood type is set by default to moving:
(snap. 2.4-3)
m The moving neighborhood is an ellipsoid with No rotation, which means that U,V,W axes
are the original X,Y,Z axes;
m Set the dimensions of the ellipsoid to 800 m, 600 m and 60 m along the vertical direction;
m Switch ON the Use Anisotropic Distances button.
m Minimum number of samples: 4;
m Number of angular sectors: 12
m Optimum Number of Samples per Sector: 5
m Block discretization: as we chose to perform Block kriging, the block discretization has to be
defined. The default settings for discretization are 5 x 5 x 1, meaning each block is subdivided
by 5 in each of the X and Y directions, but is not divided in the Z direction. The Block Discretization sub-window may be used to change these settings, and to check how different discretizations influence the block covariance Cvv. In this case study, the default parameters 5 x 5 x 1
will be kept.
m Press OK for the Neighborhood Definition.
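As an illustration of what the 5 x 5 x 1 discretization means geometrically, here is a hedged Python sketch generating the regular discretization points inside one block (Cvv itself would then be obtained by averaging the model covariance over pairs of these points; the function name is hypothetical):

```python
# Hedged sketch of the 5 x 5 x 1 block discretization: the regular grid of
# points inside one 75 x 75 x 15 m block over which point covariances are
# averaged to obtain the block covariance Cvv.
def discretization_points(dx=75.0, dy=75.0, dz=15.0, nx=5, ny=5, nz=1):
    return [((i + 0.5) * dx / nx, (j + 0.5) * dy / ny, (k + 0.5) * dz / nz)
            for i in range(nx) for j in range(ny) for k in range(nz)]

pts = discretization_points()  # 25 points, one per 15 x 15 x 15 m cell
```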
l The Local Parameters: open the Local Parameters Loading... window and specify the name of
the Local Parameters File (3D Grid 75x75x15m). For the Model All Structures and Neighborhood tabs, switch ON Use Local Rotation (Mathematician convention), then 2D, and define the
variable Rot Z as Rotation/Z.
(snap. 2.4-4)
It is possible to check how both the model and the neighborhood perform when processing a
grid node, and to display the results graphically: this is the purpose of the Test option at the bottom
of the (Co-)Kriging main window. When pressing it, a graphic page opens where:
l The Indicator rich ore variable is represented with proportional symbols,
l The neighborhood ellipsoid is drawn on a 2D section.
By pressing once on the left mouse button, the target grid is shown (in fact an XOY section of
it; you may select different sections through Application/Selection For Display...). The user can
then move the cursor to a target grid node: click once more to initiate kriging. The samples selected
in the neighborhood are highlighted and the weights are displayed. We can see here that the nearest
samples get the highest weights. It is also important to check that the negative weights due to the
screen effect are not too large. The neighborhood can sometimes be changed to avoid this kind of
problem (more sectors and fewer points per sector, for instance).
You can also select the target grid node by giving the indices along X, Y and Z with the Application
menu Target Selection (for instance 6, 11, 16). You can figure out how the local parameters used
for the neighborhood are applied.
(snap. 2.4-5)
(snap. 2.4-6)
Note - From Application/Link to 3D viewer, you may ask for a 3D representation of the search
ellipsoid if the 3D viewer application is already running (see the end of this case study).
Close the Test Window and press RUN.
7814 grid nodes have been estimated. Basic statistics of the variables are displayed below.
(fig. 2.4-1)
The kriging standard deviation is an indicator of the estimation error; it depends only on the
geometrical configuration of the data around the target grid node and on the variogram model.
Basically, the standard deviation decreases as an estimated grid node gets closer to the data.
Some blocks have the kriged indicator above 1. These values will be changed into 1 by means of
File / Calculator.
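The clamping done here with File / Calculator amounts to a simple clip of the estimates; the sketch below is a hypothetical numpy illustration (the values are invented), not the Isatis calculator syntax.

```python
import numpy as np

# Hypothetical kriged indicator values: kriging can slightly overshoot
# the [0, 1] interval of an indicator variable.
kriged_indicator = np.array([0.12, 0.97, 1.03, 0.55, 1.10])

# Clamp the estimates to the valid indicator range.
kriged_indicator = np.clip(kriged_indicator, 0.0, 1.0)
```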
(snap. 2.4-7)
Note - In the main Kriging window, the optional toggle Full set of Output Variables allows you to
store in the Output File other kriging parameters: slope of regression, weight of the mean,
estimated dispersion variance of estimates, etc.
2.4.2 Kriging of Fe rich ore
In the Standard (Co)Kriging menu specify the type of calculation to Block and the number of
variables to 1, then enter the following parameters:
l Input File: Fe (Composites on 15m with the selection final lithology{rich ore}).
l The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody
selection active:
m Kriging Fe rich ore for the estimation of Fe;
m Kriging Fe rich ore std dev for the kriging standard deviation.
l The variogram model contained in the Parameter File called Fe rich ore.
l The neighborhood: open the Neighborhood... definition window and specify the name (Fe rich
ore for instance) of the new parameter file which will contain the following parameters, to be
defined from the Edit... button nearby. The neighborhood type is set by default to moving:
m The moving neighborhood is an ellipsoid with No rotation, which means that U,V,W axes
are the original X,Y,Z axes;
m Set the dimensions of the ellipsoid to 800 m, 300 m and 50 m along the vertical direction;
m Switch ON the Use Anisotropic Distances button.
m Minimum number of samples: 4;
m Number of angular sectors: 12
m Optimum Number of Samples per Sector: 3
m Block discretization: as we chose to perform Block kriging, the block discretization is kept at
the default 5 x 5 x 1.
l Apply Local Parameters, but only for the Neighborhood, where you use the Rot Z variable for
the 2D Rotation/Z.
(snap. 2.4-8)
After Run you can calculate the statistics of the kriged estimate by asking in Statistics / Quick
Statistics to apply as Weight the variable Kriging indicator rich ore. 7561 blocks out of 7814
have been kriged. By using a weight variable you obtain the statistics weighted by the proportion
of the block in rich ore.
(snap. 2.4-9)
(fig. 2.4-2)
The mean grade is close to the average of the composites grade (65.84). Therefore, in the next steps,
when carrying out non-linear methods which require modeling the distribution, we will not apply
any declustering weights.
2.5 Global Estimation With Change of Support
The support is the geometrical volume on which the grade is defined.
Assuming the data sampling is representative of the deposit, it is possible to fit a histogram model
on the experimental histogram of the composites. But at the mining stage, the cut-off will be
applied on blocks, not on composites. Therefore, it is necessary to apply a support correction to the
composite histogram model in order to estimate a histogram model on the block support.
Note - When kriging blocks that are too small, with a high error level, applying a cut-off to the
kriged grades will induce biased tonnage estimates due to the high smoothing effect. It is then
recommended to use non-linear estimation techniques, or simulations (see the Non Linear case
study). For global estimation, another alternative is to use Gaussian anamorphosis modeling,
as described below.
2.5.1 Gaussian anamorphosis modeling
Gaussian anamorphosis is a mathematical technique for modeling histograms, taking the
change of support from composites to blocks into account.
Note - From a support size point of view, composites will be considered as points compared to
blocks.
The technique will not be mathematically detailed here: the reader is referred to the Isatis on-line
help and technical references. Basically, the anamorphosis transforms an experimental dataset into a
gaussian dataset (i.e. one having a gaussian histogram). The anamorphosis is bijective, so it is possible
to back-transform gaussian values to raw values. A gaussian histogram is often a pre-requisite for
using non-linear and simulation techniques. The anamorphosis function may be modelled in two
ways:
l by a discretization with n points between a negative gaussian value of -5 and a positive gaussian
value of +5;
l by using a decomposition into Hermite polynomials up to a degree N. This was the only
possibility until Isatis release V10.0. It is still compulsory for some applications, as will be
explained later on.
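Independently of the discretized or Hermite fitting performed by Isatis, the core idea of the anamorphosis can be sketched as an empirical normal-score transform; the grade values below are invented for illustration.

```python
import numpy as np
from statistics import NormalDist

def normal_scores(raw):
    """Empirical gaussian anamorphosis: map each raw value to the
    gaussian quantile of its rank frequency."""
    n = len(raw)
    ranks = np.argsort(np.argsort(raw))      # 0 .. n-1
    freq = (ranks + 0.5) / n                 # plotting positions in (0, 1)
    return np.array([NormalDist().inv_cdf(p) for p in freq])

# Hypothetical Fe composite grades (%)
fe = np.array([62.1, 65.4, 66.0, 63.2, 67.5, 64.8])
y = normal_scores(fe)
# The mapping preserves ranks, hence is bijective on distinct values,
# so gaussian values can be back-transformed to raw values.
```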
Open the Statistics/Gaussian Anamorphosis Modeling window.
(snap. 2.5-1)
l In Input... choose the Composites 15 m file with the selection final lithology{Rich ore};
choose Fe for the raw variable.
l Do NOT ask for a Gaussian Transform.
l Name the anamorphosis function Fe rich ore.
l In Interactive Fitting... choose the Type Standard and switch ON the toggle button Dispersion
with the Dispersion Law set to Log-Normal Distribution. In this mode the histogram is
modelled by assigning to each datum a dispersion that accounts for some uncertainty,
globally reflected by an error on the mean value. The variability of the dispersion is controlled
by the Variance Increase parameter, related to the estimation variance of the mean. By default
that variance is set to the statistical variance of the data divided by the number of data.
(snap. 2.5-2)
l Click on the Anamorphosis and Histogram bitmaps. You will visualize the anamorphosis
function and how the experimental histogram is modelled (black bars for the experimental
histogram, blue bars for the modelled histogram).
(snap. 2.5-3)
Close the Fitting Parameters window.
l Press RUN in the Gaussian Anamorphosis window: because you have not asked for Hermite
Polynomials, the following error message window is displayed to advise you on the applications
requiring these polynomials.
(snap. 2.5-4)
2.5.2 Block anamorphosis on SMU support
Using the composite histogram and variogram models, we are now going to take the change of
support into account using Statistics/Support Correction...:
(snap. 2.5-5)
The Selective Mining Unit (SMU) size has been fixed to 25 x 25 x 15 m. Therefore, the correction
will be calculated for a block support of 25 x 25 x 15 m. Each block is discretized by default in 3x3
in the X and Y directions (NX = 3 and NY = 3); no discretization is needed in the vertical direction
(NZ = 1) as the composites are regularized according to the bench height (15 m). Changing the
discretization along X and Y allows you to study the sensitivity of the change of support coefficients.
Switch ON the toggle button Normalize Variogram Sill. As the variogram sill is higher than the
variance, the consequence is to reduce the support correction a little (the r coefficient is a bit higher
than without normalization).
Press Calculate at the bottom of the window. The block support correction calculations are
displayed in the message window:
(snap. 2.5-6)
The block variogram value Gamma(v,v) is calculated; it is the basis for calculating the real
block variance and the real block support correction coefficient r. We can see that the support
correction is not very important (r is not very far from 1), because the ranges of the variogram
model are rather large compared to the SMU size. The calculation is made at random, so different
calculations will give similar, but not identical, results. If the differences in the real block variance
are too large, the block discretization should be refined by increasing NX and NY. By pressing
Calculate... several times, we statistically check whether the discretization is fine enough to
represent the variability inside the blocks. Press OK.
Save the Block Anamorphosis under the name Fe rich ore block 25x25x15 and press RUN.
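The principle of this calculation, averaging the variogram between discretization points of a block to get Gamma(v,v) and deducing the block variance, can be sketched as follows. The spherical model and its parameters are hypothetical stand-ins, and a regular (deterministic) discretization is used where Isatis draws random points.

```python
import itertools
import numpy as np

def spherical(h, sill=1.0, rng=300.0):
    """Isotropic spherical variogram; sill and range are hypothetical."""
    h = np.minimum(np.asarray(h) / rng, 1.0)
    return sill * (1.5 * h - 0.5 * h ** 3)

def gamma_bar_vv(block=(25.0, 25.0, 15.0), nd=(3, 3, 1)):
    """Mean variogram value Gamma(v,v) between the nd discretization
    points of a block. A regular discretization is used here; Isatis
    draws the points at random, which is why repeated Calculate runs
    give similar but not identical results."""
    axes = [(np.arange(n) + 0.5) / n * dim for n, dim in zip(nd, block)]
    pts = np.array(list(itertools.product(*axes)))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return spherical(d).mean()

gvv = gamma_bar_vv()             # block variogram value Gamma(v,v)
block_variance = 1.0 - gvv       # real block variance (point sill = 1)
```

With ranges large compared to the SMU size, gvv stays small and the block variance remains close to the point variance, i.e. r is not far from 1, as observed in the case study.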
2.5.3 Grade Tonnage Curves
Launch Tools / Grade Tonnage Curves. You will ask to display two types of curves, calculated
from:
l the Kriged Fe rich ore on the panels 75mx75mx15m;
l the Histogram modelled after support correction on blocks 25mx25mx15m.
(snap. 2.5-7)
For each curve you have to click Edit and fill in the parameters.
For the first curve on kriged panels:
(snap. 2.5-8)
For the second curve, on blocks histogram:
(snap. 2.5-9)
After clicking the bitmaps at the bottom of the Grade Tonnage Curves window (M vs. z, T vs. z, Q
vs. z, Q vs. T, B vs. z) you get the graphics, for instance T(z) and M(z):
(snap. 2.5-10)
(snap. 2.5-11)
These curves show, as expected, that the selectivity is better on true blocks 25x25x15 than on
kriged panels 75x75x15, which have a lower dispersion variance.
The legend is displayed in a Separate Window, as was asked in the Grade Tonnage Curves window.
By clicking Define Axes you switch OFF Automatic Bounds to change the Axis Minimum and
Axis Maximum for Mean Grade to 60 and 70 respectively.
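The grade tonnage quantities behind these curves can be sketched in a few lines; the block grades and the unit block tonnage below are invented for illustration.

```python
import numpy as np

def grade_tonnage(grades, cutoffs, tonnage_per_block=1.0):
    """T(z): tonnage above cutoff z; M(z): mean grade above cutoff;
    Q(z) = T(z) * M(z): metal quantity above cutoff."""
    T, M, Q = [], [], []
    for z in cutoffs:
        sel = grades >= z
        t = sel.sum() * tonnage_per_block
        m = grades[sel].mean() if sel.any() else np.nan
        T.append(t)
        M.append(m)
        Q.append(t * m if sel.any() else 0.0)
    return np.array(T), np.array(M), np.array(Q)

# Hypothetical block Fe grades (%)
g = np.array([58.0, 61.0, 63.5, 66.0, 68.0])
T, M, Q = grade_tonnage(g, cutoffs=[50.0, 60.0, 65.0])
```

The more dispersed the grade distribution, the faster T(z) drops and the higher M(z) rises with the cutoff, which is exactly the better selectivity seen on the 25x25x15 blocks compared to the smoothed 75x75x15 panels.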
(fig.: Total Tonnage vs. Cutoff and Mean Grade vs. Cutoff curves)
(snap. 2.5-12)
(snap. 2.5-13)
2.6 Simulations
This chapter aims at giving a quick example of conditional block simulations in a multivariate case.
Simulations make it possible to reproduce the real variability of the variable.
We will focus on the Fe-P-SiO2 grades of rich ore on blocks 25mx25mx15m. Two steps will be
carried out:
l simulation of the rich ore indicator. The Sequential Indicator method will be applied to generate
a simulated model where each block has a simulated code 1 for rich ore blocks and 2 for poor ore
blocks. A finer grid would be required to be more realistic; for the sake of simplicity we will make
the indicator simulation on the same blocks 25mx25mx15m;
l simulation of the rich ore Fe grade, as if each block were entirely in rich ore. By intersecting
with the indicator simulation, we will get the final picture.
2.6.1 Simulation of the indicator rich ore
You must first create the grid of blocks 25x25x15 with File / Create Grid File.
(snap. 2.6-1)
To create in the grid file the orebody selection we use the migration capability (Tools/Migrate/Grid
to Point...) from the 3D Grid 75x75x15 m file to 3D Grid 25x25x15, with a maximum migration
distance of 55 m.
(snap. 2.6-2)
Open the menu Interpolate / Conditional Simulations / Sequential Indicator / Standard
Neighborhood.
(snap. 2.6-3)
To define the two facies, 1 for rich ore and 2 for the complement, you have to click on
Facies Definition and enter the parameters as shown below.
(snap. 2.6-4)
You may use the same variogram model, the same neighborhood and the same local parameters
as used for the kriging. The only additional parameter is the Optimum Number of Already Simulated
Nodes, which you can set to 30 (the total number being 5 per sector for 12 sectors, i.e. 60). Save the
simulation in SIS indicator rich ore.
Ask for 100 simulations, then press Run.
2.6.2 Block simulations of Fe-P-SiO2 rich ore
The direct block simulation method, based on the discrete gaussian model (DGM), will be used.
The workflow is the following:
m transform the raw data to gaussian values by anamorphosis. For P grades the
anamorphosis will take into account the fact that many samples are at the detection limit,
which produces a histogram with a significant zero effect;
m perform a multivariate variographic analysis on the gaussian data in order to obtain gaussian
variograms;
m model these gaussian variograms with a linear model of coregionalisation;
m regularize these variograms on the block support;
m perform a support correction on the gaussian transforms;
m perform the simulations using the discrete gaussian model framework, which allows block
simulated values to be conditioned to gaussian point data.
2.6.2.1 Gaussian Anamorphosis
We will perform the gaussian anamorphosis on the three grades of the rich ore domain in one go,
and independently. Note that the three anamorphosis functions must be stored together in the same
Parameter file, called Fe-SiO2-P rich ore. Note in this case that we also ask to store the gaussian
transforms in the composites file with the names Gaussian Fe/P/SiO2 rich ore, ...
(snap. 2.6-5)
By clicking on Interactive Fitting, the Fitting Parameters window pops up. You will have to choose
parameters for the three variables in turn, by clicking on the arrow on the side of the area displaying
Parameters for Fe/P/SiO2. For Fe and SiO2 you choose the Standard Type with a Dispersion
using a Log-Normal Distribution and the default Variance Increase (as was done before for Fe
alone).
For P, many samples have values equal to the detection limit of 0.01. The histogram shows a spike
at the origin, which will be modelled by a zero effect. You must choose the type Zero-effect and click
on Advanced Parameters to enter the parameters defining the zero effect. In particular we will put
in the atom all values equal to 0.01 with a precision of 0.01, i.e. all samples between 0 and 0.02.
(snap. 2.6-6)
After Run, the transformed values of Fe and SiO2 have a gaussian distribution, while for P the
gaussian transform has a truncated gaussian distribution. The gaussian values assigned to the
samples concerned by the zero effect are all equal to the same value (the gaussian value
corresponding to the frequency of the zero effect).
2.6.2.2 Gaussian transform of P rich ore
The next step consists of making the gaussian transform of P a true gaussian distribution. This is
achieved by using a Gibbs Sampler algorithm, which generates, for all samples of the zero effect, a
gaussian value consistent with the structure of spatial correlation of all gaussian values. Practically,
3 steps must be carried out:
l calculation of the experimental variogram of the truncated gaussian values;
l variogram modelling of the gaussian transform using the truncation option;
l Gibbs Sampler to generate the gaussian transform with a true distribution honouring the
spatial correlation.
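A toy version of this Gibbs step can be sketched in Python. Everything here is a simplified illustration of the principle rather than the Isatis algorithm: the exponential covariance stands in for the fitted model, the coordinates are invented, and the conditioning uses a plain simple-kriging step.

```python
import numpy as np

gen = np.random.default_rng(0)

def cov(h, rng=500.0):
    # Exponential covariance, a hypothetical stand-in for the model.
    return np.exp(-np.asarray(h) / rng)

def gibbs_below(coords, y, tied, yc, n_iter=20):
    """Resimulate the tied (zero-effect) gaussian values below the
    truncation yc, each conditioned on all other values through a
    simple-kriging step; untied values are kept unchanged."""
    y = y.copy()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = cov(d)
    for _ in range(n_iter):
        for i in np.flatnonzero(tied):
            others = np.arange(len(y)) != i
            w = np.linalg.solve(C[np.ix_(others, others)], C[others, i])
            mean = w @ y[others]
            var = max(1.0 - w @ C[others, i], 1e-9)
            while True:  # rejection-sample a normal deviate <= yc
                v = gen.normal(mean, np.sqrt(var))
                if v <= yc:
                    y[i] = v
                    break
    return y

coords = gen.uniform(0.0, 1000.0, size=(6, 3))   # hypothetical samples
y0 = gen.normal(size=6)
tied = np.array([True, True, True, False, False, False])
yc = -0.393                                      # truncation value
y0[tied] = yc                                    # zero-effect samples
y_gibbs = gibbs_below(coords, y0, tied, yc)
```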
Using the EDA we calculate the histogram and the experimental variogram of the variable Gaussian
P rich ore (activating the selection final lithology{Rich ore}). In the Application menu of the
histogram, ask for the Calculation Parameters and switch off the Automatic mode, setting the
values shown below:
(snap. 2.6-7)
For the variogram, choose the same parameters as used for Fe (omnidirectional in the horizontal
plane, plus vertical): in the Application Menu / Calculation Parameters, in the Variogram
Calculation Parameters window, click Load Parameters from Standard Parameter File and select
the experimental variogram Fe rich ore.
On the graphic display you see the truncated distribution, with about 35% of the samples concerned
by the zero effect; the gaussian truncation value is -0.393. The variance, displayed as the dotted line
on the variograms, is about 0.5. In the Application / Save in Parameter File menu of the graphic
containing the variogram, save it under the name Gaussian P rich ore zero effect.
(snap. 2.6-8)
(snap. 2.6-9)
In the Variogram Fitting window, choose the Experimental Variograms Gaussian P rich ore
zero effect and create a New Variogram Model called Gaussian P rich ore. Note that the
variogram model refers to the gaussian transform (with the true gaussian distribution); it is
transformed by means of the truncation to match the experimental variogram of the truncated
gaussian variable.
(snap. 2.6-10)
Click Edit, in the Model Definition window you must first click Truncation.
(snap. 2.6-11)
In the Truncation window, switch ON Truncation and click Anamorphosis V1 to select the
anamorphosis Fe-SiO2-P[P].
(snap. 2.6-12)
(snap. 2.6-13)
Coming back to the Model Definition window, enter the parameters of the variogram model as
shown below. It is important to choose sill coefficients summing to 1 (the dispersion variance of
the true gaussian), and not 0.5, the dispersion variance of the truncated gaussian.
(snap. 2.6-14)
(fig.: experimental variograms of Gaussian P rich ore, horizontal direction up to 1500 m and vertical direction up to 150 m)
You will now generate gaussian values for the zero effect on P rich ore by using Statistics /
Statistics / Gibbs Sampler. Note that the gaussian values not concerned by the zero effect are kept
unchanged.
l The Input Data are the variogram model you just fitted, Gaussian P rich ore, and the Gaussian
P rich ore variable stored after the Gaussian Anamorphosis Modeling.
l The Output Data are a new variogram model, Gaussian P rich ore no truncation (which is in
fact the same as the input one without the truncation option), and a new variable in the
Composites 15m file, Gaussian P rich ore (Gibbs).
l Ask to perform 1000 iterations.
(snap. 2.6-15)
You can check how the Gibbs Sampler has reproduced the gaussian distribution and the input
variogram: just recalculate the histogram and the variograms on the variable Gaussian P
rich ore (Gibbs). After saving that experimental variogram in the Parameter File, you can
superimpose on it the variogram model with no truncation using the Variogram Fitting menu. For
the first distances the fit is acceptable.
(snap. 2.6-16)
(snap. 2.6-17)
(fig.: experimental variograms of Gaussian P rich ore (Gibbs), horizontal and vertical directions)
2.6.2.3 Multivariate Gaussian variogram modeling
In Statistics / Exploratory Data Analysis, calculate the variograms with the same parameters as
before (one omnidirectional horizontal direction and one vertical direction) on the 3 gaussian
transforms.
In the graphic window, use Application / Save in Parameter File to save these variograms under
the name Gaussian Fe-SiO2-P rich ore.
(snap. 2.6-18)
In Statistics/Variogram Fitting..., choose the experimental variogram you just saved. Create the
new variogram model with the same name, Gaussian Fe-SiO2-P rich ore. Set the Global Window
toggle and ask to display the number of pairs in the graphic window (Application/Graphic
Parameters...).
(snap. 2.6-19)
The model is made using the following method:
l enter the name of the new variogram model Gaussian Fe-SiO2-P rich ore and Edit it.
l in the Model Definition window, click on Load Model and choose the model made for Gaussian
P rich ore no truncation. The following window pops up:
(snap. 2.6-20)
Click on the Clear button, then move the mouse to the second line, Gaussian P rich ore, click on
Link and on OK in the Selector window to put the variogram made on Gaussian P alone for the
same variable in the three-variate variogram. Then click on OK in the Model Loading window.
l in the Variogram Fitting window, click on Automatic Sill Fitting. The Global Window shows
the model that has been fitted. Press Run to save it in the parameter file.
(snap. 2.6-21)
2.6.2.4 Variogram regularization
In order to perform the direct block simulation, you have to model the three-variate variogram on
the support of the blocks 25x25x15.
(fig.: experimental direct and cross variograms and fitted model for Gaussian Fe rich ore, Gaussian P rich ore (Gibbs) and Gaussian SiO2 rich ore, distances up to 1500 m)
l You first have to launch Statistics / Modeling / Variogram Regularization. You will store, in a
new experimental variogram Gaussian Fe-SiO2-P rich ore block 25x25x15, 3 directional
variograms using a discretization of 5x5x1. You will also ask to Normalize the Input Point
Variogram.
(snap. 2.6-22)
l Then you model the regularized variogram using Variogram Fitting and the Automatic Sill
Fitting mode, after having loaded the model made on the point samples Gaussian Fe-SiO2-P
rich ore. Note that the Nugget effect is set to zero; when you save the variogram model, the
Nugget effect is not stored in the Parameter file.
(snap. 2.6-23)
(snap. 2.6-24)
2.6.2.5 Gaussian Support Correction
The point gaussian anamorphosis and the regularized variogram model have to be transformed into
the gaussian anamorphosis and variogram model related to the gaussian block variable Yv (a
zero-mean, unit-variance gaussian variable).
This is achieved by running Statistics / Modeling / Gaussian Support Correction.
(fig.: regularized block variograms, direct and cross, for Gaussian Fe rich ore (Block), Gaussian P rich ore (Gibbs) and Gaussian SiO2 rich ore (Block))
(snap. 2.6-25)
2.6.2.6 Direct Block Simulation
This is achieved by running the menu Interpolate / Conditional Simulations / Direct Block
Simulation. It takes some time to get 100 simulations; depending on the computer, it may take more
than an hour.
l The simulated variables are created with the names Simu block Gaussian Fe rich ore ... in the
3D Grid 25x25x15. We store the gaussian values before transform to allow a check of the
experimental variograms on gaussian simulated values against the input variogram model,
which is defined on the gaussian variables.
l The Block Anamorphosis and the Block Gaussian Model are those obtained from the Gaussian
Support Correction.
l The Neighborhood used for kriging Fe rich ore is modified into a new one, called Fe rich ore
simulation, changing the radius along V to 800m. The reason is simply that the Local
Parameters for the neighborhood are not implemented in the Direct Block Simulation
application.
l Number of simulations: 100, for instance.
l We ask not to Perform a Gaussian Back Transformation, for the reason explained above. The
back transform will be achieved afterwards.
l The turning bands algorithm is used with 1000 Turning Bands.
(snap. 2.6-26)
You can compare the experimental variograms calculated from the 100 simulations, in up to 3
directions, with the input variogram model. The directions are entered by giving the increments (in
number of grid meshes) of the unit directional lag along X, Y, Z. For instance for direction 1, the
increments are respectively 1, 0, 0, which makes the unit lag 25m East-West.
(snap. 2.6-27)
Three graphic pages (one per direction) are then displayed. The average experimental variograms
are displayed with a single line, the variogram model with a double line. On the next figure, the
variograms in direction 3 show a good match up to 100m. For the cross-variogram P-SiO2, where
the correlation is very low, some simulations look anomalous; further analysis could be made to
exclude these simulations from the next post-processing steps.
(snap. 2.6-28)
It is then necessary to transform the simulated gaussian values into raw values, using Statistics /
Data Transformation / Raw Gaussian Transformation. To transform the three grades you will
have to run that menu three times. You should choose as Transformation: Gaussian to Raw
Transformation. The New Raw Variable will be created with the same number of indices, with
names like Simu block Fe rich ore...
The transform is achieved by means of the block anamorphosis Fe-SiO2-P rich ore block
25x25x15; do not forget to choose the right variable on the right side of the Anamorphosis window.
(fig.: experimental direct and cross variograms of the simulated block gaussian values Simu block Gaussian Fe/P/SiO2 rich ore, distances up to 125 m)
(snap. 2.6-29)
We can now combine the simulations of the rich ore indicator and the grade simulations, by
changing to undefined (N/A) the grades where the block is simulated as poor ore (simulated code
2). These transformations have to be applied on the 100 simulations using File / Calculator. It is
compulsory to create beforehand new macro variables, with 100 indices, called Simu block Fe ...,
with Tools / Create Special Variable.
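For one simulation, the File / Calculator operation amounts to a simple mask; a hypothetical numpy illustration (the grades and codes are invented):

```python
import numpy as np

# One grade simulation and the matching indicator simulation
# (code 1 = rich ore, 2 = poor ore) on the same blocks.
simu_fe = np.array([65.2, 63.8, 66.1, 64.0])
simu_code = np.array([1, 2, 1, 2])

# Undefine the grades of the poor-ore blocks (N/A -> NaN here).
simu_fe_rich = np.where(simu_code == 1, simu_fe, np.nan)
```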
(snap. 2.6-30)
(snap. 2.6-31)
If you complete this Case Study by also simulating the grades of poor ore, you will get grade values
for all blocks in the orebody. The displays will be presented in the last chapter.
2.6.3 Simulations post-processing
One main advantage of simulations is the possibility to apply non-linear calculations (for example
applying different cut-off grades simultaneously, or calculating the probability for a grade to be
above a threshold, etc.) for local reserves estimation. The post-processing may be applied on the
simulated blocks, but in the present case it is more interesting to first regroup the simulated blocks
into the blocks 75x75x15 (called panels) and illustrate some basic post-processing on the tonnage
and metals of rich ore within those panels.
2.6.3.7 Regrouping blocks into panels
We will calculate for each panel the mean grade, tonnage and metal quantity of rich ore and the
quantity of rich ore Fe-P-SiO2 by using Tools / Processing / Grade Simulation Post-Processing,
which applies directly to the macro-variables. The Grade Simulation Post-processing is designed to
calculate local grade tonnage curves on the panel grid (Q, T, M variables) from simulated grade
variables on the block grid. The grade variables can be simulated using Turning bands, Sequential
Gaussian Simulation or any kind of simulation that generates continuous variables.
The Block Grid usually corresponds to the S.M.U. (Selective Mining Unit). It has to be consistent
with the Panels; in other words, the Block Grid must make a partition of the Panel Grid. This
application handles multivariable cases with a cutoff on the main variable.
(snap. 2.6-32)
(snap. 2.6-33)
2.6.3.8 Examples of Post Processing
The menu Tools / Simulation Post-processing offers different options, illustrated hereafter on the
Tonnage and Metal variables stored on the 3D Grid 75x75x15m file:
l Statistical Maps to calculate the average of 100 simulated tonnages
(snap. 2.6-34)
(snap. 2.6-35)
The mean tonnage may be compared to the kriged indicator (after multiplication by the panel
tonnage).
l Iso-Frequency Maps to calculate the quantiles at the frequencies of 25%-50%-75% of the
Tonnage of rich ore. In the previous Simulation Post-Processing window, click the toggle
button Iso-Frequency Maps; the following window pops up and you define a New Macro
Variable Quantile Tonnage rich ore[xxxxx].
(snap. 2.6-36)
then click Quantiles and choose for Step Between Frequencies 25%. You get a macro-variable with
3 indices, one per frequency: for each panel, the tonnage such that 25%, 50%, 75% of the
simulations are lower than the corresponding quantile value.
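The quantile computation behind these Iso-Frequency Maps can be sketched as follows, on invented tonnages (100 simulations by 5 panels):

```python
import numpy as np

# Hypothetical tonnages: 100 simulations (rows) x 5 panels (columns).
rng = np.random.default_rng(1)
tonnage = rng.gamma(shape=2.0, scale=50.0, size=(100, 5))

# Per-panel quantiles at the 25%, 50% and 75% frequencies: for each
# panel, the tonnage below which that fraction of simulations falls.
quantiles = np.quantile(tonnage, [0.25, 0.50, 0.75], axis=0)
```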
(snap. 2.6-37)
l Iso-Cutoff Maps to calculate the probability for the Metal P rich ore to be above 0, 50, 100,
150, 200.
(snap. 2.6-38)
In the previous Simulation Post-Processing window, click the toggle button Iso-Cutoff Maps; the
following window pops up and you define a New Macro Variable for Probability to be Above
Cutoff (T), i.e. Proba P rich ore above[xxxxx].
(snap. 2.6-39)
then click Cutoff, click Regular Cutoff Definition and choose the parameters as shown below. You get a macro-variable with 4 indices, one per cutoff: for each panel, the probability to be above 0.02, 0.03, ...
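The Iso-Cutoff Maps reduce to an exceedance frequency per panel and per cutoff; a minimal NumPy sketch with toy values (not Isatis output):

```python
import numpy as np

rng = np.random.default_rng(2)
sim_metal = rng.gamma(2.0, 60.0, size=(100, 6))    # toy metal simulations

cutoffs = [0.0, 50.0, 100.0, 150.0, 200.0]
# prob_above[k, p]: fraction of simulations where panel p exceeds cutoffs[k]
prob_above = np.stack([(sim_metal > c).mean(axis=0) for c in cutoffs])
```

By construction the probabilities decrease as the cutoff increases, which is the pattern the macro-variable indices display.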
(snap. 2.6-40)
l Risk Curves to calculate the distribution of 100 simulations of Fe metal quantities of rich ore
over the orebody.
(snap. 2.6-41)
Click Risk Curves then Edit and fill in the parameters in the Risk Curves & Printing Format window, as shown. Only the Accumulations are of interest here. For a given simulation the accumulation is obtained by multiplying the simulated block value (here the Fe metal in tons) by the volume of the block. It means that the average grade of the block is multiplied twice by the block volume. That is why, in order to get the metal in MTons, we have to apply a scaling factor of 75x75x15 (84375) and multiply it by 10^-6. That scaling is entered in the box just on the left of m3*V_unit in the Accumulations sub-window. By asking for Print Statistics, the 100 accumulations will be output in the Isatis message window. The order of the printout depends on the option Sorts Results by; here we ask for Accumulations.
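The arithmetic behind the risk curve can be sketched as follows (NumPy, toy block values; the sizes and the 84375 m3 block volume mirror the text, everything else is made up): each simulation yields one global accumulation, and sorting the 100 values gives the curve.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim, n_blocks = 100, 500                         # toy sizes
block_volume = 75.0 * 75.0 * 15.0                  # 84375 m3, as in the text
sim_value = rng.gamma(2.0, 0.5, size=(n_sim, n_blocks))  # toy simulated values

# one accumulation per simulation; the 1e-6 factor converts tons to MTons
acc_mt = (sim_value * block_volume).sum(axis=1) * 1e-6
ranks = np.argsort(acc_mt)        # which simulation produces each rank
risk_curve = acc_mt[ranks]        # accumulations in increasing order
```

The `ranks` array plays the role of the Macro column in the printout below: it tells which simulation produced the minimum, the second smallest value, and so on.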
(snap. 2.6-42)
Come back to the Simulation Post-processing window and press Run. The following graphic is then displayed.
(snap. 2.6-43)
With the Application / Graphic Parameters you may Highlight Quantiles with the Simulation Value
on Graphic.
(snap. 2.6-44)
The graphic page is refreshed as shown.
(snap. 2.6-45)
In the message window we get the 100 simulated metal quantities in increasing order. The column Macro gives the index of the simulation for each outcome: for instance, the minimum metal is obtained for simulation #72, the next one for simulation #97, ...
Rank   Macro   Frequency   Accumulation   Volume
   1      72        1.00      1140.90MT   3442162500.00m3
   2      97        2.00      1156.65MT   3442162500.00m3
   3      38        3.00      1171.82MT   3442162500.00m3
   4      15        4.00      1179.91MT   3442162500.00m3
   5      91        5.00      1181.25MT   3442162500.00m3
   6      41        6.00      1185.01MT   3442162500.00m3
   7      30        7.00      1191.53MT   3442162500.00m3
   8      45        8.00      1191.71MT   3442162500.00m3
   9      57        9.00      1194.86MT   3442162500.00m3
  10      59       10.00      1195.80MT   3442162500.00m3
  11      35       11.00      1196.15MT   3442162500.00m3
  12       6       12.00      1196.37MT   3442162500.00m3
  13      48       13.00      1197.58MT   3442162500.00m3
  14      62       14.00      1199.70MT   3442162500.00m3
  15      40       15.00      1201.25MT   3442162500.00m3
  16       1       16.00      1201.90MT   3442162500.00m3
  17      86       17.00      1204.47MT   3442162500.00m3
  18      33       18.00      1206.65MT   3442162500.00m3
  19      93       19.00      1206.83MT   3442162500.00m3
  20      11       20.00      1210.44MT   3442162500.00m3
...
(snap. 2.6-46)
2.7 Displaying the Results
This last chapter consists in visualizing the different results on the 3D grids, first through the 2D Display facility, then through the 3D Viewer.
2.7.1 Using the 2D Display
2.7.1.1 Display of the Kriged block model
We are going to create a new Display template (Display/New Page...), which consists of an overlay of a grid raster and isolines. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together
with a Contents window. You have to specify in this window the contents of your graphic. To
achieve that:
l First, give a name to the template you are creating: Kriging Fe rich ore. This will allow you to easily display this template again later.
l In the Contents list, double click the Raster item. A new window appears, in order to let you
specify which variable you want to display and the color scale:
m Select the Grid file, 3D Grid 75x75x15m with selection orebody active, select the variable
Kriging Fe rich ore
m Specify the title for the Raster part of the legend, for instance Kriging Fe rich ore
m In the Grid Contents area, enter 16 for the rank of the section XOY to display
m In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Fe variable. To create a new color scale: click the Color Scale button, double-click on New Color Scale and enter a name: Fe, then press OK. Click the Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Choose Number of Classes 22,
- Click on the Bounds... button, enter 60 and 71 as the Minimum and Maximum values.
Press OK.
- Switch on the Invert Color Order toggle in order to assign the red colors to the large Fe values.
- Click Undefined Values button and select Transparent.
- In the Legend area, switch off the Display all tick marks button, enter 60 as the reference tickmark and 2 as the step between tickmarks. Then, specify that you do not want your final color scale to exceed 7 cm. Switch off the Automatic Format button, and specify that you want to use integer values of Length 7. Ask to display the Extreme Classes. Click OK.
(snap. 2.7-1)
m In the Item contents for: Raster window, click Display current item to display the result.
m Click OK.
l Double-click on the Isolines item. A new Item contents window appears. In the Data area, select
the Kriging Fe rich ore variable from the 3D Grid file with the same selection. In the Grid
Contents area, select the rank 16 for the XOY section. In the Data Related Parameters area,
switch on the C1 line, enter 60 and 71 as lower and upper bounds and choose a step equal to 2.
Switch off the Visibility button. Click on Display Current Item to check your parameters, then
on Display to see all the previously defined components of your graphic. Click on OK to close
the Item contents window.
l In the Item list, you can select any item and decide whether or not you want to display its legend, by setting the toggle Legend ON. Use the Move Front and Move Back buttons to modify the order of the items in the final Display.
l Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(fig. 2.7-1)
You can also visualize your 3D grid in perspective. Open again the Contents window of the previous graphic display (Application/Contents...). Switch the Representation Type from Projection to Perspective:
l Just click on Display: the previous section is represented within the 3D volume. Because of the extent of the grid, set the vertical axis factor to 3 in the Display Box tab (switch the Automatic Scales toggle OFF). In the Camera tab, modify the Perspective Parameters: longitude=60, latitude=40.
(fig. 2.7-2)
l Representing the whole grid as a solid: this is obtained by setting the 3D Grid contents to 3D Box, both in the Raster and Isolines item contents windows.
l Representing the 3D grid as a solid and penetrating into it by digging out a portion of the grid: for each item contents window (raster and isolines), set the 3D Grid contents to Excavated Box, then define the indices of the excavation corner (for instance: cell = 17, 21, 15).
(fig. 2.7-3)
The Contents window allows you to animate the graphic in several ways (Animate tab of the main Contents window):
l by animating the entire graphic along the longitude or latitude definition,
l by animating one item property at a time, for instance the grid raster section. To interrupt the animation, press the STOP button in the main Isatis window.
2.7.1.2 Display of the simulated block model
l Fe grade
m Create a raster image of the Fe simulated macro variable: choose the first simulation (index 1). Display rank 16 of the 25x25x15 m 3D grid file (so you can compare the simulations with the kriging) and choose the grade Fe color scale. Ask to display the legend.
m Create a Base map of the composite data from the Composites 15 m file, with the selection final lithology{Rich ore} active and no variable, in order to use the same Default Symbol, a full circle of 0.15 cm.
(snap. 2.7-2)
In the Display Box tab from the contents window, set the mode to Containing a set of items and
click the Raster item: set the toggle Box Defined as Slice around Section ON and set the Slice
Thickness to 45 m.
(snap. 2.7-3)
Press Display:
(fig. 2.7-4)
From the Animate tab, select the raster item and choose to animate on the macro index. Set the Delay to 1 s and press Animate. The different simulations appear consecutively: the animation lets you sense the differences between the simulations. Check that the simulations tend to be similar around the boreholes.
l Display of the probability for the Metal P of rich ore in panels to be above cut-off = 50T:
m Create a new page and display the macro variable Proba P rich ore above from the 3D Grid 75x75x15m file: choose the macro index n°2 (i.e. cutoff = 50);
m Legend title: Probability;
m Ask to display rank 16 (horizontal section 16);
m Make a New Color Scale named Proportion as explained before for Fe, but with 20 classes between 0 and 1;
m Press OK.
l Ask for the legend and press Display:
(fig. 2.7-5)
2.7.2 Using the 3D Viewer
Launch the 3D Viewer (Display/3D Viewer...).
2.7.2.3 Borehole visualization
l Display the Fe composites:
m Drag the Fe variable from the Composites 15 m file in the Study Contents and drop it in the
display window;
m Magnify by a factor of 2 the scale along Z by clicking the Z Scale button at the top of the
graphic page.
m Click Toggle the Axes in the menu bar on the left of the graphic area.
m From the Page contents, click right on the 3D Lines object to open the 3D Lines properties
window. In the 3D Lines tab
- select the Tube mode;
- switch on the toggle Selection and choose the final lithology{Rich ore} macro index;
- switch off the toggle Allow Clipping
(snap. 2.7-4)
m In the Color tab, choose the same Fe Isatis color scale;
m In the Radius tab, set the mode to constant with a radius of 20 m
m Press Display and close the 3D Lines Properties window
m In the File menu click Save Page as and give a name (composites rich ore) in order to be
able to recover it later if you wish.
(snap. 2.7-5)
2.7.2.4 Display of the kriged 3D Block model
As an example we will display the kriged indicator of rich ore. In order to make a New Page click
Close Page in the File menu.
l Click Compass in the menu bar on the left of the graphic area.
l Drag the Kriging indicator rich ore variable from the 3D Grid 75 x 75 x 15 m file in the Study
Contents and drop it in the display window;
l Click right on the 3D Grid 75x75x15m file in the Page Contents to open the 3D Grid Properties:
m In the 3D Grid tab, tick the Selection toggle and choose the orebody selection;
m In the Color tab:
- set the color mode to Variable and change the variable to Kriging Indicator rich ore;
- apply the Rainbow reversed Isatis color scale;
- press Display and close the 3D Grid Properties window.
(fig. 2.7-6)
l Investigate inside the kriged block model:
m Open the clipping plane facility from Toggle the Clipping Plane in the menu bar on the left of the graphic area: the clipping plane appears across the block model;
m Go in select mode by pressing the arrow button in the function bar;
m Click the clipping plane rectangle and drag it next to the block model for better visibility;
m Click one of the clipping plane's axes to change its orientation (be careful to target precisely the axis itself in dark grey, not its squared extremity nor the center tube in white);
m Add the drill holes (Fe rich ore) as you did for the previous graphic page
m Open the Line Properties window of the Composites 15 m file: set the Allow Clipping toggle ON;
m Click on the clipping plane's center white tube and drag it in order to translate the clipping plane along its axis: choose a convenient cross-section, approximately in the middle of the block model. You may also use the clipping controls available on the right of the graphic window in order to clip a slice with a fixed width along the main grid axes.
m Click on one block of particular interest: its information is displayed in the top right corner:
(snap. 2.7-6)
You may also click on boreholes to display the composite data.
l Slicing (beforehand, click on Toggle the Clipping Plane)
m Edit the 3D Grid 75x75x15m attributes, go to the Slicing tab and set the properties as follows:
(snap. 2.7-7)
Set the toggle Automatic Apply ON, and move the slices to visualize the slicing interactively.
l Save the graphic as a New Page with the name Composites and kriged indicator rich ore.
2.7.2.5 Display of the search ellipsoid
From the kriging application (the definition parameters of the 3D kriging of Fe should be kept), launch the Test window. From Application/Target Selection, select the grid node (20,19,14) for instance and press Apply. Then, make sure that the 3D Viewer is running and, from the same Application menu of the Test window, ask to Link to 3D Viewer: a 3D representation of the search ellipsoid neighborhood is displayed, and the samples used for the estimation of this particular node are highlighted. A new graphic object neighborhood appears in the Page Contents, from which you may change the graphic properties (color, size of the samples for coding the weights or the Fe values, etc.).
(fig. 2.7-7)
3 Non Linear
This case study, dedicated to advanced users, is based on the Walker Lake data set, which was first introduced and analyzed by Edward H. ISAAKS and Mohan SRIVASTAVA in their book Applied Geostatistics (1989, Oxford University Press).
Geostatistical methods applicable to the global and local estimation of recoverable resources in a mining industry context are described through this case study:
Non linear methods, including four methods used to estimate local recoverable resources: indicator kriging, disjunctive kriging, uniform conditioning and service variables.
Conditional simulations of grades, using the two main applicable methods: turning bands and sequential gaussian.
The efficiency of these methods will be evaluated by comparison to the reality, which can be considered as known in this case because of the origin of the data set.
Reminder: while using Isatis, the on-line help is accessible at any time by pressing F1 and provides a full description of the active application.
Last update: Isatis version 2012
3.1 Introduction and overview of the case study
This case study is dedicated to advanced users who feel comfortable with linear geostatistics and
Isatis.
3.1.1 Why non linear geostatistics?
Non linear geostatistics is used for estimating recoverable resources. Unlike the estimation of in situ resources by conventional kriging (linear geostatistics), the estimation of recoverable resources takes the mining aspects of the question into account. Three effects can be taken into account by non linear geostatistics:
l the support effect, which makes the recovered ore depend on the volume on which the ore/waste decision is made. In this case the size of the selective mining unit (SMU or block) has been fixed to 5m x 5m. When performing the local estimations we will calculate the ore tonnage and grade after cut-off in panels of 20m x 20m. It is important to keep these terms: block for the selective unit and panel for the estimated unit (e.g. the tonnage within the panel of the ore consisting of blocks with a grade above the cut-off). These terms are used systematically in the Isatis interface.
l the information effect, which makes the mis-classification between selected ore and waste depend on the amount of information used when estimating the blocks. At this stage two notions are important. Firstly, the recovered ore is made of the true grades contained in blocks whose estimated grade is above the cut-off. Secondly, the decision between ore and waste will be made with additional information (blast-holes, etc.) at the production stage. The question is then: what can we expect to recover tomorrow, assuming a future blast-hole pattern for instance?
l the constraint effect, which for any technical/economical reason leads to ore dilution or ore left in place. The two previously mentioned effects assume a free selection of blocks within the panels, where only the distribution of block grades is of importance. When their spatial distribution has to be considered (the recovered ore will be different if rich blocks are contiguous or spread throughout the panel), only geostatistical simulations provide an answer.
3.1.2 Organization of the case study
This case study is divided into several parts. The first part, 3.2 Preparation of the case study, rehearses geostatistical concepts and Isatis manipulations already described in the In Situ 3D Resource Estimation case study: declustering, grid manipulations, variography, ordinary kriging with neighborhood creation. These topics will not be detailed here and the user is invited to look at the previous case study for an extensive description. The remainder of the case study describes several different methods for the estimation of recoverable resources; it is also recommended that the user reads 3.3 Global estimation of recoverable resources before starting any method described in 3.4 Local estimation of the recoverable resources or in 3.5 Simulations. The dataset allows comparing the estimations with real measurements: this will be done exhaustively in 3.6 Conclusions.
3.1.2.1 Global Estimation of Recoverable Resources (developed in 3.3)
The global estimation makes use of the raw data histogram (possibly weighted by declustering coefficients): each grade is attached to a frequency, i.e. the global proportion relative to the global tonnage of the deposit assuming a perfect sampling. This is a direct statistical approach. Geostatistics appears as soon as the variogram is used to correct this histogram, i.e. the proportions, to reflect the support effect and/or the information effect. Thus, a histogram model is needed in order to perform these corrections: the modeling and the corrections are done through the Gaussian Anamorphosis Modeling and Support Effect panels in Isatis, widely used throughout the whole case study. Comparison to reality and to kriging will be done through global grade-tonnage curves.
3.1.2.2 Local estimation of recoverable resources
The local estimation of recoverable resources makes use of non linear estimation or simulation techniques, involving the gaussian anamorphosis. The aim is to estimate the proportion of ore blocks within larger panels (assuming free selection of blocks within each panel), and the corresponding metal tonnage and mean grade above cut-off:
l by non linear kriging techniques (developed in 3.4): the main advantage of these methods is their speed, but they give no information on the location of the ore blocks within the panels. Four methods will be described: Indicator Kriging, Disjunctive Kriging, Service Variables and Uniform Conditioning.
l by simulation techniques (developed in 3.5): the main advantages of simulations are the possibility to derive simulated histograms and to estimate the constraint effect, but the method is quite heavy and time consuming for big block models. Two methods will be described: Turning Bands (TB) and Sequential Gaussian Simulation (SGS).
Comparison to reality, through a specific analysis of the 600 ppm cut-off, will be done through graphic displays and cross plots of the ore tonnage and mean grade above cut-off.
Note - If you wish to compare the local estimates with reality, you will first need to calculate the real tonnage variables from the real grades for the specific cut-off 600 (this is done in 3.4.1 Calculation of the true QTM variables based on the panels).
3.2 Preparation of the case study
The dataset is derived from an elevation model of the western United States, the Walker Lake area in Nevada. It has been transformed in order to represent measures of concentration in some elements (economic grades in the deposit we are going to evaluate). From the original data set we will only use the variable V, considered as the grade of an ore mineral measured in ppm: the multivariate aspect of this data set will not be considered, as the non linear estimation methods available in Isatis are currently univariate (unlike simulations). The data set is twofold: the exhaustive data set, containing 78 000 measurement points on a 1m x 1m grid, and the sample set, resulting from successive sampling campaigns and containing 470 data locations. Several methods for the estimation of recoverable resources are proposed in Isatis: this case study aims to describe them all and compare them to the reality derived from the exhaustive set.
3.2.1 Data import and declustering
The data is stored in the Isatis installation directory (sub-directory Datasets/Walker_Lake). Load the data from ASCII files using File / Import / ASCII. The ASCII files are Sample_set.hd for the sample set and Exhaustive_set.hd for the exhaustive data set. The files are imported into two separate directories, Sample set and Exhaustive set respectively, and both files are called Data.
(snap. 3.2-1)
By visualizing the Sample set data (using Display / Basemap/ Proportional), we immediately see
the preferential sampling pattern of high grade zones:
(fig. 3.2-1)
In order to correct the bias due to the preferential sampling of high grade zones, it is necessary to decluster the data. To do so you can use Tools / Declustering: it performs a cell declustering with a moving window centered on each sample. We store the resulting weights in a variable Weight of the sample data set: this variable will be used later to weight the statistics for the variographic analysis in the EDA and for the gaussian anamorphosis modeling. The moving window size for declustering has been fixed here to 20m x 20m, in accordance with the approximate loose sampling mesh outside the clusters.
Note - A possible guide for choosing the moving window dimensions is to compare the value of the resulting declustered mean to the mean of the kriged estimates (kriging has natural declustering capabilities).
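The idea of cell declustering can be sketched in a few lines (NumPy; a fixed-cell variant of the moving-window scheme Isatis uses, with hypothetical sample positions): each sample is weighted by the inverse of its cell's occupancy, so clustered samples count less.

```python
import numpy as np

def cell_decluster_weights(x, y, cell=20.0):
    """Weight ~ 1 / (samples in the cell), normalized to a mean weight of 1."""
    ix = np.floor(x / cell).astype(np.int64)
    iy = np.floor(y / cell).astype(np.int64)
    key = ix * 1_000_003 + iy                   # one integer id per cell
    _, inv, counts = np.unique(key, return_inverse=True, return_counts=True)
    w = 1.0 / counts[inv]
    return w * len(x) / w.sum()

# toy data: five samples clustered in one 20 m cell, five isolated samples
x = np.array([1.0, 2.0, 3.0, 1.5, 2.5, 30.0, 50.0, 70.0, 90.0, 110.0])
y = np.zeros(10)
w = cell_decluster_weights(x, y)
declustered_mean = np.average(x, weights=w)     # here the "grade" is x itself
```

Down-weighting the cluster pulls the weighted mean away from the naive mean, which is the kind of shift reported in the statistics below.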
The statistics before and after declustering are the following:
(snap. 3.2-2)
Mean: 436.35 -> 279.68
Std dev: 299.92 -> 251.44
The next graphics correspond to the histograms of the Sample set, the Exhaustive set and the Declustered sample set; they have been calculated using Statistics / Exploratory Data Analysis (EDA). The histogram of the Declustered sample set has been calculated with the Compute Using the Weight Variable toggle ON, using the Weight variable.
(snap. 3.2-3)
(fig. 3.2-2)
From these three histograms we clearly see that the declustering process allows a better representation of the statistical behavior of the phenomenon.
3.2.2 Variographic analysis of the sample grades
We first focus on possible anisotropies of the sample set data. From the Statistics / Exploratory Data Analysis panel, activate the option Compute Using the Weight Variable: we will calculate a weighted 2D variogram map on the V variable from the sample dataset. By default, the Reference Direction is set to an azimuth equal to the North (Azimuth = N0.00). The parameters related to the directions, lags and tolerances may be tuned for a detailed variographic analysis, but here we will rely directly on common parameters: ask for 18 directions (10° each), and we will define 11 lags of 15 m. Generally, the variogram is calculated with a tolerance on distance set to 50% of the lag, which corresponds to a Tolerance on Lags equal to 0 lag; besides, calculations are often made with an angular tolerance of 45° (in order to consider all samples once with two directions), which corresponds to a Tolerance on Directions equal to 4 sectors (4 sectors of 10° + half a sector of 5° = 45°).
If the focus is on short scale, one may decide to calculate a bi-directional variogram along N70 and
N160, considering that N160 is a direction of maximum continuity.
Note - This short scale anisotropy is not clearly visible on the variogram map below: to better
visualize it, you may re-calculate the variogram map on 5 lags only and create a customized color
scale through Application / Graphic Specific Parameters...
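The quantities being tuned here (lags, lag tolerance, angular tolerance) are those of the classical directional experimental variogram, which can be sketched as follows (NumPy; parameter names are ours, not Isatis'):

```python
import numpy as np

def directional_variogram(x, y, v, azimuth_deg, lag=15.0, nlags=11, atol_deg=45.0):
    """gamma at k*lag: half the mean squared difference over pairs whose
    separation is in the lag class (+-50% lag) and within atol_deg of the
    azimuth (N0 = +Y axis, N90 = +X axis)."""
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    h = np.hypot(dx, dy)
    # angle between the pair vector and the azimuth, insensitive to the sense
    ang = np.arctan2(dy, dx) - np.deg2rad(90.0 - azimuth_deg)
    ang = np.abs((ang + np.pi) % (2.0 * np.pi) - np.pi)
    ang = np.minimum(ang, np.pi - ang)
    ok = (h > 0) & (ang <= np.deg2rad(atol_deg) + 1e-12)
    dv2 = (v[:, None] - v[None, :]) ** 2
    gam = np.full(nlags, np.nan)
    for k in range(nlags):
        m = ok & (np.abs(h - (k + 1) * lag) <= 0.5 * lag)
        if m.any():
            gam[k] = 0.5 * dv2[m].mean()
    return gam

# toy transect along the E-W axis (azimuth N90): 15 m spacing, linear values
x = np.arange(10.0) * 15.0
y = np.zeros(10)
v = x.copy()
gam = directional_variogram(x, y, v, azimuth_deg=90.0)
```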
In the variogram map area you can activate a direction using the mouse buttons: the left one to select a direction, and the right one to select Activate Direction in the menu. Activating both principal axes (perpendicular directions N160 and N70) displays the corresponding experimental variograms below. When selecting the variogram, click right and ask for Modify Label... to change N250 to N70:
(snap. 3.2-4)
The short scale anisotropy is visible on the experimental variogram; it is then saved in a parameter file Raw V from the graphic window (Application / Save in Parameter File...).
We now have to fit a model to these experimental variograms using the Statistics / Variogram Fitting facility. We fit the model from the Manual Fitting tab.
(snap. 3.2-5)
(snap. 3.2-6)
(snap. 3.2-7)
Press Print to check the output variogram, then save the variogram model in the parameter file under the name Raw V. It should be noted that the total sill of the variogram is slightly above the dispersion variance and that a low nugget value has been chosen.
3.2.3 Calculation of the true block and panel values
In this case study, a 5m x 5m block will be the selective mining unit (SMU) during the mine exploitation period. The recoverable resource estimation will be based on this 5m x 5m block support; but first, the in-situ resource estimation will be done on 20m x 20m panels for a more robust estimation. As we have access to an exhaustive data set over the whole area to be mined, we can assume that we know the true values for any size of support, simply by averaging the real values of the exhaustive set over the wanted block or panel support.
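Because the exhaustive set is a regular 1 m grid of 260 x 300 = 78 000 nodes, the true block grades are plain arithmetic averages; a NumPy sketch with random stand-in values (not the Walker Lake data):

```python
import numpy as np

# toy stand-in for the exhaustive V values on the 1 m x 1 m grid:
# 300 rows (Y) x 260 columns (X) = 78 000 points, as in the text
rng = np.random.default_rng(4)
exhaustive = rng.gamma(2.0, 140.0, size=(300, 260))

b = 5                                              # 5 m x 5 m blocks
true_blocks = exhaustive.reshape(300 // b, b, 260 // b, b).mean(axis=(1, 3))
# -> 60 x 52 values, one per block of the Grid 5*5 file, each averaging
#    exactly 25 exhaustive points
```

The same reshape trick with b = 20 would give the true panel values on the 13 x 15 panel grid.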
3.2.3.1 Calculation of the true grade values for 5 m x 5 m SMU blocks
To store this average value on a 5m x 5m block support, we need to create a new grid (a new file called Grid 5*5 in a new directory Grids, using the File / Create Grid File facility) and choose the coordinates of the origin (center of the block in the lower left corner) so as to match the data exactly. The Graphic Check, in Block mode, will help to achieve this task. Enter the following grid parameters:
m X and Y origin: 3m,
m X and Y mesh: 5m,
m 52 nodes along X, 60 nodes along Y.
(snap. 3.2-8)
Using this configuration we have exactly 25 samples from the exhaustive data set for each block of
the new grid. Edit the graphic parameters to display the auxiliary file.
(snap. 3.2-9)
(fig. 3.2-3)
Now we need to average the real values on this Grid 5*5 file, using Tools / Copy Statistics / Points
-> Grid. We will call this new variable True V.
Note - Using a moving window equal to zero for all the axes, we constrain the new Mean variable
to a calculation area of 5m x 5m (1 block).
(snap. 3.2-10)
(fig. 3.2-4)
Display of the true block grade values (5m x 5m blocks)
The above figure is the result of two basic actions of the Display menu: a grid raster display of the true block grades is performed, then isolines are overlaid. Isolines range from 0 to 1500 by steps of 250 ppm; the 1000 ppm isoline is represented with a bold line type. The color scale has been customized to cover grades between 0 and 1000 ppm, even if there are values greater than this upper bound. Each class has a width of 62.5 ppm; the extreme values are represented using the extreme colors.
Note - Keep in mind that the V variable was primarily derived from elevation data: we clearly see on the above map a NW-SE valley, responsible for the anisotropy detected during variography. The Walker Lake itself (consequently with zero values...) lies in this valley. One could raise stationarity issues, as the statistical behavior of elevation data differs from valleys (with a lake) to nearby ranges. This is not the subject of this case study.
3.2.3.2 Calculation of the true grade values for 20 m x 20 m panels
Create a new grid file Grid 20*20 in the Grids directory with the following parameters:
m X and Y origin: 10.5 m,
m X and Y mesh: 20 m,
m 13 nodes along X, 15 nodes along Y
(snap. 3.2-11)
The graphic check with the Grid 5*5 shows that the 5m x 5m blocks form a perfect partition of the 20m x 20m panels. This makes it possible to use the specific Tools / Copy Statistics / Grid to Grid... facility for calculating the true panel values, True V, as the Mean Name:
(snap. 3.2-12)
3.2.4 Ordinary Kriging - In situ resource estimation
The in-situ resource estimation will be done on the 20 m x 20 m panels through Interpolate / Estimation / (Co)-Kriging...:
l Type of calculation: Block
l Input file: Sample Set/Data/V
l Output file: Grids/Grid 20*20 /Kriging V
l Model: Raw V
l Neighborhood: create a moving neighborhood named octants, without any rotation and with a constant radius of 70 m, made of 8 sectors with a minimum of 5 samples and an optimum number of samples per sector set to 2. This neighborhood will be used extensively throughout the case study.
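The octant search itself is simple to sketch (NumPy; an illustrative re-implementation, not Isatis' code): candidates within the radius are binned into 8 angular sectors, the closest ones are kept per sector, and the target is rejected if too few samples survive.

```python
import numpy as np

def octant_search(xs, ys, x0, y0, radius=70.0, nsect=8, per_sect=2, min_total=5):
    """Indices of the samples retained around (x0, y0), octant by octant."""
    dx, dy = xs - x0, ys - y0
    d = np.hypot(dx, dy)
    inside = np.flatnonzero((d <= radius) & (d > 0))
    sector = ((np.arctan2(dy[inside], dx[inside]) + np.pi)
              / (2.0 * np.pi) * nsect).astype(int) % nsect
    keep = []
    for s in range(nsect):
        cand = inside[sector == s]
        keep.extend(cand[np.argsort(d[cand])][:per_sect])  # closest first
    if len(keep) < min_total:
        return np.array([], dtype=int)   # target rejected: not enough samples
    return np.array(sorted(int(i) for i in keep))

# toy configuration: five close samples in distinct octants, one out of range
xs = np.array([10.0, 0.0, -10.0, 0.0, 5.0, 200.0])
ys = np.array([0.0, 10.0, 0.0, -10.0, 5.0, 0.0])
selected = octant_search(xs, ys, 0.0, 0.0)
```

Splitting the search into sectors is what gives kriging its natural declustering behavior: a dense cluster cannot crowd out samples from the other directions.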
(snap. 3.2-13)
(snap. 3.2-14)
For comparison purposes, it is interesting to also perform the same kriging on the small blocks (Grid 5*5), to quantify the smoothing effect of linear kriging.
3.2.5 Preliminary conclusions
Basic statistics may be obtained through different runs of Statistics / Quick Statistics...; the results are summarized below. An interpolation by Inverse Distance (ID2, with a power equal to 2) using the same neighborhood has been done for comparison (through Interpolate / Interpolation / Quick Interpolation...):
VARIABLE                        Count   Minimum   Maximum   Mean    Variance
True V punctual                 78000     0.0      1631.2   278.0   62422
Sampled V punctual (declus.)      470     0.0      1528.1   279.7   63221
True V blocks 5x5                3120     0.0      1378.1   278.0   52287
ID2 V blocks 5x5                 3120     1.6      1279.3   299.1   39031
Kriging V blocks 5x5             3120   -50.9      1361.1   275.4   44013
True V panels 20x20               195     2.2       997.8   278.0   37617
ID2 V panels 20x20                195     0.7       945.0   279.7   53539
Kriging V panels 20x20            195    -4.4      1011.3   275.8   35973
Comparing the true V values for the three different supports (punctual, 5x5 block and 20x20 panel):
l as expected, the mean remains exactly identical;
l the variance decreases with the support size: this is the support effect.
Comparing estimated values vs. true values for a given support:
l punctual: the estimation by declustering is satisfactory because the mean and the variance are comparable. The bias (279.7 compared to 278.0) is negligible;
l 5x5 blocks: ID2 shows an overestimation. For kriging, the bias is negligible and, as expected, the variance of the kriged blocks (44013) is smaller than the real block variance (52287); this is the smoothing effect caused by linear interpolation. Besides, there are some negative estimates; the 5m x 5m blocks are too small for a robust in situ estimation;
l 20x20 panels: the bias of ID2 is less pronounced, but the variance is not realistic; this is because of strong local overestimation of the high grade zones. The variance of the kriged panels is smaller than the real panel variance, but the difference is less pronounced. Moreover, there is only one negative panel estimate.
Note - 72 SMU blocks have negative estimates indicating that the 5 m x 5 m block size is too small
in this case.
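The support effect itself can be reproduced on any synthetic grid: averaging point values into blocks leaves the mean untouched but shrinks the variance. A minimal sketch, with a purely illustrative lognormal field standing in for the V grades (not the case-study dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "punctual" grade field on a 40 x 40 grid (positively skewed).
point = rng.lognormal(mean=5.0, sigma=0.8, size=(40, 40))

def block_average(field, size):
    """Average non-overlapping size x size blocks of a 2-D field."""
    n, m = field.shape
    return field.reshape(n // size, size, m // size, size).mean(axis=(1, 3))

blocks = block_average(point, 4)   # e.g. regrouping points into larger blocks

# The mean is preserved; the variance decreases with the support size.
print(point.var(), blocks.var())
```

Running this shows the block variance is systematically below the point variance, exactly the pattern of the True V rows in the table above.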
3.3 Global estimation of the recoverable resources
3.3.1 Punctual histogram modeling
Using Statistics / Gaussian Anamorphosis Modeling we model the anamorphosis function linking the raw values of V (called Z in Isatis) to their normal score transform (called Y in Isatis), i.e. the associated gaussian values. In order to reproduce the underlying distribution correctly, we have to apply the Weight variable previously calculated by the Declustering tool. The Gaussian variable will be stored under Gaussian V:
(snap. 3.3-1)
(snap. 3.3-2)
The Interactive Fitting... gives access to specific parameters for the anamorphosis (intervals on the raw values to be transformed, intervals on the gaussian values, number of polynomials, etc.): the default parameters will be kept. The distribution function is modeled by specific polynomials called Hermite polynomials; the more polynomials, the more precise the fit. There are also QC graphic windows for checking the fit between the experimental (raw) and model histograms:
(fig. 3.3-1)
Punctual anamorphosis function.
Experimental data is in black, the anamorphosis is in blue.
Save the anamorphosis in a new parameter file called Point and perform the gaussian transform with the default Frequency inversion method. This will write the Gaussian V variable on disk; it will be used for the Disjunctive Kriging, the Service Variable estimations and the simulations.
The Point Anamorphosis is equivalent to a histogram model of the declustered raw values V; it may be used to derive a global estimation as an overall view of the potential of an orebody (Grade-Tonnage curves are available in the Interactive Fitting... parameters), but it takes neither the support effect nor the information effect into account. This is done hereafter.
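The core idea of the gaussian transform can be sketched outside Isatis with an empirical, weighted normal-score transform (Isatis additionally fits Hermite polynomials to obtain an analytical anamorphosis model; the `normal_scores` helper and the lognormal test data below are illustrative assumptions, not the panel's algorithm):

```python
import numpy as np
from statistics import NormalDist

def normal_scores(z, w=None):
    """Empirical gaussian anamorphosis: map raw values z to normal scores y.
    Optional declustering weights w make the transform target the
    declustered histogram, as the Weight variable does in Isatis."""
    z = np.asarray(z, float)
    w = np.ones_like(z) if w is None else np.asarray(w, float)
    order = np.argsort(z)
    cum = np.cumsum(w[order]) / w.sum()
    # Centred plotting positions avoid F = 1 (which would map to +infinity).
    f = cum - 0.5 * w[order] / w.sum()
    nd = NormalDist()
    y = np.empty_like(z)
    y[order] = [nd.inv_cdf(p) for p in f]
    return y

z = np.random.default_rng(1).lognormal(5.0, 0.8, 500)
y = normal_scores(z)
print(round(y.mean(), 3), round(y.std(), 3))   # close to 0 and 1
```

The transform is monotone, so the ranking of the samples is preserved while the histogram becomes standard normal.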
3.3.2 Support effect correction
We are now going to quantify the support effect for 5 m x 5 m blocks; that is, how much the 5 m x 5 m block distribution differs from the punctual grades calculated above. The following is required:
l a model of the distribution, defined by means of a gaussian anamorphosis function
l the block variance, which can be calculated using Krige's relationship giving the dispersion variance as a function of the variogram.
The gaussian discrete model then provides a consistent change of support model.
Use the Statistics / Support Correction... panel with the Point anamorphosis and the Raw V variogram model as input. The 5m x 5m blocks will be discretized into 4 x 4. At this stage no information effect is considered, so the corresponding toggle is not activated.
(snap. 3.3-3)
Press Calculate to compute Gamma(v,v); the corresponding Real Block Variance and Correction are displayed in the message window:
_________________________________________________
|                                      |          |
|                                      |     V    |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis)     | 63167.25 |
| Variogram Sill                       | 66500.00 |
| Gamma(v,v)                           |  8452.79 |
| Real Block Variance                  | 54714.47 |
| Real Block Support Correction (r)    |   0.9370 |
| Kriged Block Support Correction (s)  |   0.9370 |
| Kriged-Real Block Support Correction |   1.0000 |
| Zmin Block                           |    -0.02 |
| Zmax Block                           |  1528.12 |
|______________________________________|__________|
Note - Gamma (v,v) is calculated using random procedures; hence, different results are generated
when pressing the Calculate button. Gamma (v,v) and the resulting Real Block Variance should not
vary too much between different calculations.
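The randomized Gamma(v,v) calculation can be sketched as the mean variogram value between the discretization points of one block, with a small random offset of the points. The variogram below is a hypothetical isotropic exponential model (the fitted Raw V model is anisotropic); only the mechanism is illustrated:

```python
import numpy as np

def gamma_model(h, sill=66500.0, scale=45.0):
    """Hypothetical isotropic exponential variogram; a stand-in for the
    (anisotropic) Raw V model, using its sill of 66500."""
    return sill * (1.0 - np.exp(-h / scale))

def gamma_vv(block=5.0, ndisc=4, seed=0):
    """Mean variogram value between the ndisc x ndisc points discretizing a
    block x block square, with a random offset of the points mimicking the
    randomized procedure mentioned in the note above."""
    rng = np.random.default_rng(seed)
    step = block / ndisc
    centers = (np.arange(ndisc) + 0.5) * step
    xx, yy = np.meshgrid(centers, centers)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    pts += rng.uniform(-step / 2, step / 2, pts.shape)   # randomization
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return float(gamma_model(d).mean())

# Krige's relationship: Real Block Variance = Punctual Variance - Gamma(v,v).
print(gamma_vv(seed=1), gamma_vv(seed=2))  # slightly different at each run
```

Different seeds give slightly different Gamma(v,v) values, which is exactly why the Real Block Variance fluctuates a little between presses of Calculate.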
By clicking on the anamorphosis and on the histogram bitmaps we can check that, after the support
effect correction, the histogram of blocks is smoother (smaller variance) than the punctual histo-
gram model:
(fig. 3.3-2)
Histograms (punctual in blue and block in red): the block histogram model is smoother
Save the anamorphosis function under the name Block 5m * 5m and press RUN.
3.3.3 Support & information effects correction
The grade tonnage curves obtained at this stage assume that the mining selection is based upon the true SMU grade. In reality, the SMU grades will be estimated using the ultimate information from the blast-holes. The consequence is that the grade tonnage curve is deteriorated, as it ignores the uncertainty of the estimation: this is called the information effect. Knowing the future sampling pattern, it is possible to take this information effect into account.
We suppose that, at the mining stage, there will be one blast-hole at the centre of each block. The
blocks will then be estimated from blast-holes spread on a regular grid of 5m x 5m: we will use the
grid nodes of the Grid 5*5 file to simulate this future blast-hole sampling pattern. In order to calcu-
late the grade tonnage curves taking into account the information effect from this blast-hole pattern
(i.e. the selection between ore and waste is made on the future estimated grades, and not on the real
grades), we should calculate 2 coefficients:
l a coefficient that transforms the point anamorphosis into the kriged block one,
l a coefficient that makes it possible to calculate the covariance between true and kriged blocks.
Therefore, the variance of the kriged block and the covariance between real and kriged blocks are
needed: they can be automatically calculated in the same Support Correction panel through the
Information Effect optional calculation sub-panel (... selector next to the toggle):
(snap. 3.3-4)
The final sampling mesh corresponds to the final sampling pattern to be considered: 5 m x 5 m. Press OK and create a new anamorphosis function Block 5m*5m with information effect. Click on the Run button; two extra support correction coefficients are calculated and displayed when pressing RUN from the main panel:
Block Support Correction Calculation:
_________________________________________________
| | |
| | V |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis) | 63167.25 |
| Variogram Sill | 66500.00 |
| Gamma(v,v) | 9431.85 |
| Real Block Variance | 53735.40 |
| Real Block Support Correction (r) | 0.9293 |
| Kriged Block Support Correction (s) | 0.9117 |
| Kriged-Real Block Support Correction | 0.9859 |
|______________________________________|__________|
3.3.4 Analysis of the results for the global estimation
Open Tools / Grade Tonnage Curves... and activate 5 data toggles. This tool allows comparing histograms from different kinds of data (histogram models, grade variables, tonnage variables) and deriving grade-tonnage curves for the following QTM key variables:
Press Edit... for the first one and then ask for a histogram model kind of data. Choose the Point
anamorphosis function and specify 21 cut-offs from 0 to 1000:
(snap. 3.3-5)
(snap. 3.3-6)
(snap. 3.3-7)
Press OK, then repeat the procedure for the other 4 data with the same cut-off definition, specifying different curve parameters to distinguish them:
m curve 2: choose histogram model and the Block 5m * 5m anamorphosis function
m curve 3: choose histogram model and the Block 5m * 5m with information effect anamor-
phosis
m curve 4: choose grade variable and select the True V variable from the Grid 5*5 file
m curve 5: choose grade variable and select the Kriging V variable from the Grid 5*5 file
Once the 5 curves have been edited, click on the graphic bitmaps to display the Total tonnage vs.
cut-off and the Mean grade vs. cut-off curves:
(fig. 3.3-3)
Total tonnage vs. cut-off - the block histograms are close to the true tonnages.
The ordinary kriging curve under-estimates the total tonnage for high cut-offs, showing the danger of applying cut-offs on linear estimates for recoverable resources.
(fig. 3.3-4)
Mean grade vs. Cut-off
Pressing Print from the main Grade Tonnage Curves window prints the numeric values for each cut-off; for the particular cut-off 600 the QTM variables are (the total tonnage T is expressed in %):
                    |    Q   |    T   |    M
True block 5x5      | 77.954 | 10.385 | 750.67
Point model         | 87.738 | 11.351 | 772.934
Block 5*5 (no info) | 76.103 | 10.084 | 754.699
Kriged blocks 5x5   | 61.082 |  8.077 | 756.258
In 3.2.5 we have seen that linear kriging is well adapted to in situ resource estimation on panels. But when mining constraints are involved (i.e. applying the 600 cut-off on small blocks), kriging predicts a tonnage of 8.08% instead of 10.38%: the mine will have to deal with a 29% over-production compared to the prediction.
On the other hand, the global estimation using the point model over-estimates the reality. The global estimation with change of support (block 5*5 no info) gives a prediction of good quality.
Because we know the reality from the exhaustive dataset, it is possible to calculate the true block grades taking the true information effect into account and compare them to the Block 5x5 with information effect anamorphosis. The workflow for calculating the true information effect is not detailed here; only the general idea is presented below:
l Sample one true value at the center of each block from the exhaustive set (representing the
blasthole sampling pattern with real sampled grades V)
l krige the blocks with these samples: this is the ultimate estimated block grades on which the
ultimate selection will be based
l select blocks where ultimate estimates > 600 and derive the tonnage
l calculate the associated QTM variables based on the true grades
We can now compare the Block 5x5 with info to the real QTM variables calculated with the true
information effect (info):
| Q | T | M
True block 5x5 | 77.95 | 10.38 | 750.67
True block 5x5 (info) | 67.92 | 9.01 | 754.11
Block 5*5 with info | 71.83 | 9.66 | 743.40
As expected, the information effect on the true grades deteriorates the real recovered tonnage and metal quantity because the ore/waste mis-classification is taken into account: the real tonnage decreases from 10.38% to 9.01%. The estimation from the Block 5x5 with info anamorphosis (9.66%) is closer to this reality.
3.4 Local Estimation of the Recoverable Resources
We now want to perform the local estimation of the recoverable resources, i.e. the ore and metal tonnages contained in selective 5m x 5m SMU blocks within 20m x 20m panels.
Four main estimation methods will be reviewed: Indicator kriging, Disjunctive kriging, Uniform
conditioning and Services variables. For a set of given cut-offs, these methods will issue the follow-
ing QTM variables:
l the total Tonnage T: the total tonnage is expressed as the percentage or the proportion of SMU blocks that have a grade above the given cut-off in the panel. Each panel is partitioned into 16 SMU blocks, i.e. when T is expressed as a proportion, T = 1 means that all 16 SMU blocks of the panel have an estimated grade above the cut-off.
l the metal Quantity Q (also sometimes referred to as the metal tonnage) is the quantity of metal relative to the tonnage proportion T for a given cut-off (according to the grade unit);
l the Mean grade M is the mean grade above the given cut-off.
In Isatis, QTM variables for local estimations are calculated and stored in macro-variables (1 index
for each cut-off) with a fixed terminology:
l base name_Q[xxxxx] for the metal Quantity variable
l base name_T[xxxxx] for the Tonnage variable
l base name_M[xxxxx] for the Mean grade above cut-off variable
All three variables are linked by the following relation:
Q = T x M
In order to be able to compare the different methods with the reality, we need first to calculate the
real QTM variables on the panel 20 x 20 support; the cut-off is defined at 600 ppm and each
method is locally compared to reality through this particular cut-off. The global grade tonnage
curves of all methods will be displayed and commented later in the final conclusion ( 3.6).
3.4.1 Calculation of the true QTM variables based on the panels
l In Grid 5*5, create a constant 600 ppm variable named Cut-off 600 ppm: this is done through
File / Calculator window:
(snap. 3.4-1)
l Tools / Copy Statistics / Grid -> Grid: in the input area we select the true block grades True V from the Grid 5*5 file and the Cut-off 600 ppm as the Minimum Bound Name, i.e. only cells for which the grade is above 600 will be considered. In the output area we store the true tonnage above 600 under Number Name and the true grade above 600 under Mean Name in the Grid 20*20 file. If inside a specific panel no SMU block has a grade greater than 600, then the true tonnage of this panel will be 0 and its true grade will be undefined:
(snap. 3.4-2)
In order to get the true total tonnage T relevant for future comparisons (i.e the ore proportion above
the cut-off 600), we have to normalize the number of blocks contained in each panel by the total
number of blocks in one panel (16):
(snap. 3.4-3)
l The metal quantity Q is calculated as Q = T x M. When the true grade above 600 is defined, the
metal quantity is equal to M x T otherwise it is null. A specific ifelse syntax is needed to reflect
this:
(snap. 3.4-4)
If this specific ifelse syntax were not used, the metal quantity in the waste would be undefined instead of being null.
We now have, in the Grid 20*20 file, the true tonnage, the true mean and the true metal quantity above 600 ppm on which to base our comparisons.
Note - Beware that the true grade above 600 is not additive as it refers to different tonnages.
Therefore, it is necessary to use the true tonnage above 600 as weights for computing the global
mean of the grade over the whole deposit. Another way to compute the global mean of the grade
above 600 is to divide the global metal quantity by the global tonnage after averaging on the whole
deposit.
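The Q = T x M convention, the null-metal-in-waste convention, and the tonnage-weighting caveat of the note can be sketched as follows (a toy example with made-up SMU grades, not Isatis output):

```python
import numpy as np

def qtm(grades, cutoff):
    """Q, T, M above a cut-off for one panel's SMU grades.
    T: proportion of SMUs above cut-off; M: their mean grade (None if T = 0);
    Q = T * M, forced to 0 in waste panels so that it stays additive."""
    g = np.asarray(grades, float)
    sel = g >= cutoff
    t = sel.mean()
    if t == 0.0:
        return 0.0, 0.0, None
    m = g[sel].mean()
    return t * m, t, m

panels = [np.array([700., 650., 300., 100.]), np.array([200., 100., 50., 400.])]
res = [qtm(p, 600.0) for p in panels]

# The global mean grade above cut-off must be tonnage-weighted, or computed
# as Q_total / T_total -- never as a plain average of the panel means M.
q_tot = sum(q for q, t, m in res)
t_tot = sum(t for q, t, m in res)
print(q_tot / t_tot)   # 675.0: only the two SMUs above 600 contribute
```

Averaging the panel M values directly would silently weight the waste-dominated panel as much as the rich one, which is exactly the non-additivity trap the note warns about.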
3.4.2 Indicator kriging
Indicator kriging is a distribution-free method. It is based on the kriging of indicators defined on a
series of cut-off grades. The different kriged indicators are assumed to provide the possible distribu-
tion of block grades (after a block support correction) within each panel, given the neighboring
samples. Indicator kriging can be applied in two ways:
l Multiple indicator (co-)kriging: performs the kriging of the indicator variables with their own
variograms, independently or not, for the different cut-offs.
l Median indicator kriging: supposes that all the indicator variables have the same variogram; that
is, the variogram of the indicator based on the median value of the grade.
Multiple indicator kriging is preferable because of the de-structuring of the spatial correlation with increasing cut-offs (the assumption of a unique variogram for all cut-offs does not hold for the whole grade spectrum), but problems of consistency must be corrected afterwards. Besides, it has the disadvantage of being quite tedious, because it requires a specific variographic analysis for each cut-off; incidentally, this is the reason why median indicator kriging has been proposed as an alternative. In this case study we will use the median indicator kriging of the panels 20m x 20m; using Statistics / Quick Statistics... with the declustering weights, the median of the declustered histogram is found to be 223.9.
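Why the median cut-off is special can be shown in a few lines: an indicator with probability p has variance p(1 - p), which is maximal at 0.25 when p = 0.5, i.e. at the median. This is also the origin of the 0.25 ceiling on the indicator variogram sill noted further on. A quick illustration on synthetic values (the lognormal sample and uniform weights are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.lognormal(5.0, 0.8, 470)      # stand-in for the declustered V samples
w = np.ones_like(v)                   # declustering weights (uniform here)

# Weighted median: the value where the weighted cdf crosses 0.5.
order = np.argsort(v)
cdf = np.cumsum(w[order]) / w.sum()
median = v[order][np.searchsorted(cdf, 0.5)]

ind = (v >= median).astype(float)     # median indicator variable
# Indicator variance = p(1 - p) <= 0.25, maximal at the median (p = 0.5).
print(ind.mean(), ind.var())
```

With real declustering weights the same cdf-crossing logic applies; only w changes.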
3.4.2.1 Calculation of the median indicator variogram
We first have to generate a Macro Indicator variable Indicator V[xxxxx] in the Sample set data file and in the output grid, by using the Statistics / Indicator Pre Processing panel, with 20 cut-offs from 50 by steps of 50.
(snap. 3.4-5)
We then calculate the experimental variogram of this macro indicator variable Indicator V[xxxxx] with the EDA (make sure that the Weight variable is activated). When selecting the Indicator V[xxxxx] macro variable from the EDA, you will be asked to specify the index corresponding to the median indicator: we have chosen the index 5, corresponding to the cut-off 250, which is close enough to 223.9. If the same calculation parameters as for the Raw V variogram are used, the anisotropy is no longer visible; hence, the experimental variogram will be omnidirectional and calculated with 33 lags of 5 m. It is stored in a parameter file Model Indicator, and used through Statistics / Variogram Fitting... to fit a variogram model with the parameters detailed below the graphic:
(fig. 3.4-1)
Variable: Indicator V{250.000000} (Sample set / Data)
Experimental variogram: 1 direction (omnidirectional)
D1: angular tolerance = 90.00, lag = 5.00 m, count = 33 lags, tolerance = 50.00%
Model: 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 0.035
S2 - Exponential - Scale = 45.00m, Sill = 0.21
It should be noted that the total sill is close to 0.25, which is the maximum authorized value for an indicator variogram. The model is fitted using the Manual Fitting tab. The variogram model is saved in the parameter file under the name Model Indicator.
(snap. 3.4-6)
(snap. 3.4-7)
3.4.2.2 Kriging of the indicators
We now perform the kriging of the indicators keeping the same variogram whatever the cut-off, by
using Interpolate / Estimation / Bundled Indicator Kriging:
(snap. 3.4-8)
l We ask to calculate a Block estimate: we are estimating the proportion of points above the cut-offs within the panel.
l As Indicator Definition we define the same cut-offs as previously. In the Cut-off Definition window, clicking on Calculate proportions gives the experimental probabilities of the grade being above the different cut-offs. These values correspond to the means of the indicators and are used if we perform a simple kriging. In this case, because strict stationarity is unlikely, we prefer to run an ordinary kriging, which is the default option.
l Output panels: Grid 20*20 / Indicator V[xxxxx]
l Model: Model Indicator
l The same moving neighborhood octants will be used.
3.4.2.3 Calculation of the final grade tonnage curves
At the moment we only have 20m * 20m panel estimates of the probabilities for a restricted set of specified cut-offs. Now we need to perform two actions:
l rebuild the cumulative distribution function (cdf) of tonnage, metal and grade above cut-off for each panel,
l apply a volume correction (support effect) to take into account the fact that the recoverable resources will be based on 5m * 5m blocks.
These two actions are done through Statistics / Indicator Post-processing... with the Indicator
V[xxxxx] variable from the panels as input:
(snap. 3.4-9)
l Basename for Q.T.M variables: IK. As the cut-offs used for kriging the indicators and the cut-
offs used here for representing the final grade tonnage relationships may differ (an interpolation
is needed), three different macro-variables will be created:
m IK_T{cut-off} for the ore total Tonnage T above cut-off.
m IK_Q{cut-off} for the metal Quantity Q above cut-off
m IK_M{cut-off} for the Mean grade M above cut-off.
l Cut-off Definition... for the QTM variables: 50 cut-offs from 0 by a step of 25.
l Volume correction: a preliminary calculation of the dispersion variance of the blocks within the deposit is required. A simple way to achieve this consists in using the real block variance calculated by Statistics / Support Correction..., choosing the block size as 5 m x 5 m (cf. 3.3.2). The Volume Variance Reduction Factor of the affine correction is calculated by dividing the Real Block Variance (53842) by the Punctual Variance (63167). But the real block variance is calculated from the variogram sill (66500), which is greater than the punctual variance, the difference being 3333; the real block variance needs to be corrected by this value:
Corrected Real Block Variance = Real Block Variance - 3333 = 53842 - 3333 = 50509
Thus, the Volume Variance Reduction Factor is:
Volume Variance Reduction Factor = 50509 / 63167 = 0.80
Therefore, enter 0.80 for the Volume Variance Reduction Factor.
l Two volume corrections may be applied: affine or indirect lognormal correction. As the original distribution is clearly not lognormal, we prefer to apply the affine correction, which only requires the variance ratio between the 5m * 5m blocks and the points.
l Parameters for Local Histogram Interpolation: we keep the default parameters for interpolating the different parts of the histogram (linear interpolation), including for the upper tail, which is generally the most problematic. A few tests made with other parameters (hyperbolic model with exponent varying from 1 to 3) showed a great impact on the resources. We now need to define the maximum and minimum block values of the local block histograms: the Minimum Value Allowed is 0; the Maximum Value Allowed may be approximated by applying the affine correction by hand to the maximum value of the weighted point histogram, transposing it to the block histogram with the Volume Variance Reduction Factor (0.8) calculated above: the obtained value is 1391.
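The affine correction itself is simply a shrinkage of the values towards the mean, scaled so that the variance is multiplied by the chosen reduction factor while the mean is preserved; applying it by hand to the maximum point value is how the Maximum Value Allowed above was approximated. A minimal sketch (the lognormal test data is an illustrative assumption):

```python
import numpy as np

def affine_correction(z, mean, variance_ratio):
    """Affine support correction: shrink values towards the mean so that the
    variance is multiplied by variance_ratio while the mean is unchanged."""
    return mean + np.sqrt(variance_ratio) * (np.asarray(z, float) - mean)

z = np.random.default_rng(4).lognormal(5.0, 0.8, 10000)
zb = affine_correction(z, z.mean(), 0.802)

# The variance ratio is reproduced exactly; the mean is untouched.
print(round(zb.var() / z.var(), 3))
```

Because the transform is affine, the shape of the histogram is preserved; only its spread changes, which is both the strength and the main limitation of this correction.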
3.4.2.4 Analysis of the results
The Grade-Tonnage curves of the IK estimates will be displayed in 3.6 Conclusions, as for the other methods. Here, we will focus on the cut-off V = 600 ppm only, and compare the results with the true values for this specific cut-off.
Below are displayed the calculated tonnage IK_T{600} compared to the true tonnage:
(fig. 3.4-2)
Tonnage T calculated by IK (SMU proportion) compared to the true tonnage.
The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.
Below are displayed the calculated mean grade compared to the true grade of panels:
(fig. 3.4-3)
Mean grade calculated by IK compared to the true grades.
The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
Hereafter we show the scatter diagrams of the real panel values and IK estimates for the 600 ppm
cut-off:
(fig. 3.4-4)
Scatter diagram of the IK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector; rho = 0.906 for the tonnage, rho = 0.683 for the mean grade)
At this stage of the case study we can consider that, globally, the indicator kriging gives satisfactory results. At the local scale, noticeable differences exist, with a tendency to overestimate the grade, especially in the upper tail of the histogram.
3.4.2.5 Disjunctive kriging
An argument against Indicator Kriging is that it ignores the relationships existing between the different cut-offs. This argument would no longer hold if an indicator co-kriging were performed instead of a univariate kriging; in practice, however, it is difficult to establish a model of coregionalization acceptable for a large number of cut-offs. Disjunctive Kriging solves this problem by transforming the cokriging problem into N krigings performed independently. One model offering this possibility is the gaussian anamorphosis model using the Hermite polynomials, where the change of support is simply expressed by a coefficient (the r coefficient of change of support).
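The role of the r coefficient can be sketched numerically. With an anamorphosis expanded on orthonormal Hermite polynomials, Z = sum of phi_n H_n(Y), the block-support anamorphosis keeps the same phi_n damped by r^n, so the block variance is the sum of phi_n^2 r^(2n); r is chosen so that this matches the block variance given by Krige's relationship. The phi_n below are made-up illustrative coefficients, not the fitted Point model, and the 85% variance target is an arbitrary example:

```python
import numpy as np

def hermites(y, nmax):
    """Orthonormal Hermite polynomials H_0..H_nmax of a standard normal Y,
    with the geostatistical sign convention H_1(y) = -y."""
    y = np.asarray(y, float)
    h = [np.ones_like(y), -y]
    for n in range(1, nmax):
        h.append(-y * h[n] / np.sqrt(n + 1) - np.sqrt(n / (n + 1)) * h[n - 1])
    return np.stack(h)

# Monte-Carlo check of orthonormality: E[H_m(Y) H_n(Y)] ~ delta_mn.
y = np.random.default_rng(0).standard_normal(200_000)
H = hermites(y, 3)
gram = H @ H.T / y.size           # approximately the identity matrix

# Made-up anamorphosis coefficients phi_n (phi_0 is the mean grade).
phi = np.array([278.0, -200.0, 80.0, -30.0, 10.0])
point_var = float(np.sum(phi[1:] ** 2))

def block_var(r):
    """Block variance under the discrete gaussian model: sum phi_n^2 r^(2n)."""
    n = np.arange(1, len(phi))
    return float(np.sum(phi[1:] ** 2 * r ** (2 * n)))

# Bisection for the support correction coefficient r matching a target
# block variance (here 85% of the point variance).
target = 0.85 * point_var
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if block_var(mid) < target else (lo, mid)
r = 0.5 * (lo + hi)
print(round(r, 3))
```

The smaller the target block variance, the smaller r: a strong support effect damps the high-order polynomials the most, which is what smooths the block histogram.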
In order to achieve the Disjunctive Kriging we have to provide:
l the gaussian data values Gaussian V
l the anamorphosis function on the block support Block 5m * 5m.
l the variogram model of the block gaussian variable. To determine this model we first need to calculate an experimental block gaussian variogram, using the Raw V variogram model and the block anamorphosis. For mathematical reasons, the sill of Raw V should not exceed the punctual variance of the anamorphosis, which is unfortunately the case here. Therefore, we first need to compute another block anamorphosis including a sill normalization (cf. 3.3.2 Support
effect correction) using Statistics / Support Correction... and ask for Normalize Variogram
Sill. Store the anamorphosis in a new parameter file Block 5m * 5m (normalized) to avoid
overwriting the existing block anamorphosis Block 5m * 5m.
Open Statistics / Block Gaussian Variogram... to calculate the experimental block gaussian vario-
gram:
(snap. 3.4-10)
m Variogram model: Raw V
m Block anamorphosis: Block 5m * 5m (normalized)
m Number of directions: 2. It is convenient to make these directions coincide with the main directions of anisotropy of the raw variogram (N160E and N70E) by setting a rotation of -70 around the positive z axis
m 20 lags of 5 m for each direction
m New experimental variogram: Block Gaussian V
We fit this variogram in Statistics / Variogram Fitting...; as expected, the nugget effect has disappeared. Two anisotropic structures (cubic + spherical, details below the graphic) combine to a total sill of 1, and we store the resulting model in a parameter file Block Gaussian V:
(fig. 3.4-5)
Model : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Cubic - Range = 42.00m, Sill = 0.4, Directional Scales = (42.00m, 60.00m)
S2 - Spherical - Range = 40.00m, Sill = 0.6, Directional Scales = (100.00m, 40.00m)
We are now ready to perform the Disjunctive Kriging with Interpolate / Estimation / Disjunctive Kriging...:
(snap. 3.4-11)
l Input: Gaussian V
l Block anamorphosis...: Block 5m * 5m (normalized)
l Number of Kriged Polynomials: we use the same number as during the modeling of the anamorphosis function, i.e. 30.
l Cut-off definition...: we choose 21 cut-offs from 0 by steps of 50. It is compulsory to include the zero cut-off, which should give the in situ grade estimate.
l We ask to perform Tonnage Corrections with a minimum tonnage of 0.5%.
l The Auxiliary Polynomial File will contain the experimental values of the different Hermite polynomials for the data points, which will also be put at the center of the closest 5m x 5m block. They are calculated before the RUN, as soon as the output grid is defined (this may take a little time).
l Output Grid File...: in the panels Grid 20*20, store the error DK variable.
l In the Panel Grid file we will also store the Q.T.M. values for each cut-off from the Basename DK.
l We use the neighborhood octants as before.
l We choose for the Block Gaussian Variogram Model the variogram model previously fitted, Block Gaussian V.
Graphic displays of the panels for comparison with reality (proportion of SMU above 600 ppm):
(fig. 3.4-6)
Tonnage T calculated by DK (SMU proportion) compared to the true tonnage.
The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.
Graphic displays of the panels for comparison with reality (grade above 600 ppm):
(fig. 3.4-7)
Mean grade calculated by DK compared to the true grades.
The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
(fig. 3.4-8)
Scatter diagram of the DK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector; rho = 0.925 for the tonnage, rho = 0.753 for the mean grade)
The results on tonnage look very comparable to those obtained with indicator kriging, but the grades show a better correlation between the Disjunctive Kriging estimates and the true values.
3.4.3 Uniform conditioning
This method aims to calculate directly the distribution of the blocks 5m x 5m within each panel, by
using the panel estimate and the anamorphosis functions to take the change of support into account.
To achieve the Uniform Conditioning we have to provide:
l the kriged 20m x 20m panel grades,
l two anamorphosis functions, one for the panel and one for the block support (Block 5m * 5m).
The calculation of the panel anamorphosis requires the value of the kriged panel dispersion vari-
ance. The two anamorphosis models must be consistent, that is, created from the same samples.
3.4.3.1 Kriging of panels
The panels have already been kriged during the in situ resource estimation (cf. 3.2.4), but we need to calculate the local dispersion variance of these estimates. In Interpolate / Estimation / (Co-)Kriging...:
(snap. 3.4-12)
m Set to Block mode and activate the Full set of Output Variables option
m Input: Sample set / Data / V
m Output: in Grids / Grid 20*20. Because we have asked for the Full set of Output Variables,
we are able to store the local estimated dispersion variance Variance of Z* for V under a
new variable Local dispersion Var Z*
m variogram model: Raw V
m Neighborhood: octants
Below are displayed the panel estimates:
(fig. 3.4-9)
Map of the kriged panels 20m x 20m
The Uniform Conditioning recreates a local gaussian histogram of the SMUs in each panel, the mean of this histogram being the gaussian equivalent of the kriged estimate. The panel dispersion variance (Local dispersion var Z*, estimated at the kriging step above) is also needed to reconstruct these histograms.
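Under the discrete gaussian model, that reconstruction boils down to a conditional normal calculation: given the panel's gaussian equivalent y*, the SMU gaussian equivalent is distributed as N(rho * y*, 1 - rho^2), where rho links the block and panel supports. A schematic sketch (the rho value and the gaussian cut-off below are illustrative assumptions; the real workflow goes through the fitted anamorphoses to convert grades to gaussian equivalents and back):

```python
from statistics import NormalDist

def uc_tonnage(y_panel, y_cutoff, rho):
    """Schematic UC tonnage: P(SMU above cut-off | panel estimate), assuming
    the SMU gaussian equivalent is N(rho * y_panel, 1 - rho**2) given the
    panel's gaussian equivalent y_panel."""
    nd = NormalDist()
    s = (1.0 - rho * rho) ** 0.5
    return 1.0 - nd.cdf((y_cutoff - rho * y_panel) / s)

# A rich panel (y* = 1.5) vs a poor one (y* = -1.0) at the same cut-off:
print(uc_tonnage(1.5, 0.5, 0.8), uc_tonnage(-1.0, 0.5, 0.8))
```

Rich panels get tonnages near 1 and poor panels near 0, with the transition sharpness controlled by rho: the closer rho is to 1, the more selective the conditioning.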
3.4.3.2 Uniform Conditioning
We then run Interpolate/Estimation/ Uniform Conditioning as shown below. The Block 5m * 5m
anamorphosis will be chosen for the block anamorphosis and a Tonnage correction of 0.5% will be
performed. The Basename for Output Variables is UC_no info, as the block anamorphosis has no
information effect. The same set of cut-offs as for the disjunctive kriging (21 cut-offs ranging from
0 to 1000) will be defined:
(snap. 3.4-13)
Graphic displays of the panels for comparison with reality:
(fig. 3.4-10)
Tonnage T calculated by UC (SMU proportion) compared to the true tonnage.
The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.
(fig. 3.4-11)
Mean grade calculated by UC compared to the true grades.
The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
(fig. 3.4-12)
Scatter diagram of the UC estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
The quality of the local estimation is satisfactory.
Moreover, UC makes it possible to take the information effect into account by changing the block anamorphosis to Block 5*5 with information effect instead of Block 5*5.
[Map panels: true grade above 600 (ppm) and UC_no info_M{600.000000}]
[Scatter plots: UC_no info_T{600.000000} vs. true tonnage above 600, rho=0.928; UC_no info_M{600.000000} vs. true grade above 600, rho=0.785]
Note - Some grade inconsistencies may appear when taking the information effect into account, because the cut-offs have to be applied on a histogram of kriged values. These inconsistencies affect low grades for small tonnages; they may therefore be corrected by suppressing the lowest tonnage values (as done here with a minimum tonnage fixed at 0.5%).
Do not forget to change the Basename for Output Variables to UC_with info and press RUN:
(snap. 3.4-14)
The statistical results are presented in 3.6.
In conclusion, Disjunctive Kriging and Uniform Conditioning both give good results; in practice, on real datasets, Uniform Conditioning is often preferred because it is less sensitive to the stationarity hypothesis.
3.4.3.3 Localized Uniform Conditioning
A criticism addressed to non linear techniques, including Uniform Conditioning, is that the outputs are probabilities of SMU grades within bigger units. We do not have a representation of the spatial distribution of SMU grades, as we would with simulations, for instance.
One way to get such a representation is to apply the Localized Uniform Conditioning methodology (see Abzalov, M.Z., Localized Uniform Conditioning (LUC): A New Approach to Direct Modelling of Small Blocks, Mathematical Geology 38(4), pp. 393-411).
The principle is the following: the tonnage and metal at the different cut-offs contained in each panel are distributed over the SMUs according to the ranking of the SMU kriged grades. The metal for the highest cut-off is first assigned to the SMUs whose kriged grades are the highest, and so on.
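The ranking principle can be sketched as follows for a single panel. The function, its array inputs and the rounding rule are hypothetical simplifications of Abzalov's method (a real implementation must also reconcile rounding leftovers between grade classes):

```python
def localize_panel(smu_kriged, cutoffs, tonnage, metal):
    """LUC sketch: rank the panel's SMUs by their kriged grade, split the
    ranking according to the panel's tonnage curve, and give each grade
    class its mean grade M_k = (Q_k - Q_{k+1}) / (T_k - T_{k+1}).
    tonnage[k] / metal[k] are the panel's T and Q above cutoffs[k]."""
    n = len(smu_kriged)
    order = sorted(range(n), key=lambda i: smu_kriged[i])  # low to high
    grades = [0.0] * n
    start = 0
    for k in range(len(cutoffs)):
        t_hi = tonnage[k + 1] if k + 1 < len(cutoffs) else 0.0
        q_hi = metal[k + 1] if k + 1 < len(cutoffs) else 0.0
        dt, dq = tonnage[k] - t_hi, metal[k] - q_hi
        count = round(dt * n)              # SMUs in class [c_k, c_{k+1})
        mean_k = dq / dt if dt > 0 else cutoffs[k]
        for i in order[start:start + count]:
            grades[i] = mean_k             # lowest class -> lowest SMUs
        start += count
    return grades
```

With 4 SMUs, cut-offs (0, 600), T = (1.0, 0.25) and Q = (400, 175), the single SMU with the highest kriged grade receives the above-600 class mean (700) and the three others receive 300, so the panel mean of 400 is preserved.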
As there are enough data to get a realistic estimate of the kriged SMUs, we can apply this method to the results of Uniform Conditioning (without information effect, for instance).
As the kriging of the SMUs has already been achieved (see 3.2.4), you just have to run Statistics / Processing / Localized Uniform Conditioning.
Note: the same method can be used in the multivariate case; the metal of the other elements is assigned according to the ranking of the main variable's kriged SMUs.
After pressing Run we get an error message: for the highest cut-off, the tonnage must represent less than the tonnage of one SMU.
The solution consists in re-running Uniform Conditioning with 41 cut-offs from 0 with a step of 50. Running Localized Uniform Conditioning then no longer produces an error message.
The statistics and the displays show that, after Localized Uniform Conditioning, the variability of the true SMU grades is much better reproduced.
With Tools / Grade Tonnage Curve we can also check that the QTM values obtained from Uniform
Conditioning (with Tonnage Variables option) are the same as those obtained from grades estimated
using Localized Uniform Conditioning method.
Variable | Count | Minimum | Maximum | Mean | Std. Dev. | Variance
True V 5x5 | 3120 | 0 | 1378.1 | 278.0 | 228.7 | 52287
Kriging V 5x5 | 3120 | -51.06 | 1361.57 | 275.29 | 210.13 | 44153.00
LUC V 5x5 | 3120 | 0 | 1438.57 | 276.02 | 228.7 | 52374.87
[Maps: Kriging V and LUC V, color scale 0 to 2000]
3.4.4 Service variables
The Service Variables method is based on the transformation of grades into two variables representing the ore and metal tonnage above a given cut-off for a block centered around the data point. This transformation requires a change of support model. Each variable is then kriged by ordinary kriging. We can apply this technique for the cut-off 600 ppm (Tools / Service Variables...):
(snap. 3.4-15)
The scatter diagram between the Ore and the Metal above 600 ppm shows a very strong (non linear)
correlation.
(fig. 3.4-13)
[Scatter plot: Ore Tonnage T above 600 ppm vs. Metal Quantity Q above 600 ppm, rho=0.987]
Consequently, we will perform the kriging of both variables independently. The experimental variograms are omnidirectional and calculated with 16 lags of 10 m (with the declustering weights active). They have been fitted as shown below:
(fig. 3.4-14)
The declustering weights have a great impact on the short scale structure; the variograms at short scale are not satisfactory.
Then, the kriging of Ore and Metal is performed, with the usual octants neighborhood; the variables
Service Var Ore Tonnage T > 600 and Service var Metal Q > 600 are created.
[Experimental variograms and fitted models (global rotation Az=-70.00):
Metal Quantity Q above 600: Model with 2 basic structures - S1 Nugget effect, Sill = 8100; S2 Spherical, Range = 53.00 m, Sill = 2.876e+004.
Ore Tonnage T above 600 ppm: Model with 2 basic structures - S1 Nugget effect, Sill = 0.01; S2 Spherical, Range = 53.00 m, Sill = 0.0462]
(snap. 3.4-16)
(snap. 3.4-17)
Because a linear kriging is performed, some panels have negative or unacceptably low Tonnage T values: for all panels having a tonnage T < 0.02 (i.e. 2%), T and Q are set to 0 (this is done using File / Calculator...). Using the Calculator once more, we derive from the kriged variables Service var Metal Q > 600 and Service Var Ore Tonnage T > 600 the variable Service var grade M > 600 using the relation M = Q / T.
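The two Calculator steps can be sketched as follows; the function name and the handling of zero-tonnage panels are illustrative, not the Isatis Calculator syntax:

```python
def service_grade(T, Q, t_min=0.02):
    """Post-process kriged service variables: panels whose kriged ore
    tonnage T falls below t_min (2% here, as in the case study) get
    T = Q = 0, and the mean grade is recovered as M = Q / T elsewhere.
    Returns (T, Q, M) lists; M is None where T = 0 (grade undefined)."""
    T2, Q2, M2 = [], [], []
    for t, q in zip(T, Q):
        if t < t_min:          # negative or unacceptably low tonnage
            t, q = 0.0, 0.0
        T2.append(t)
        Q2.append(q)
        M2.append(q / t if t > 0 else None)
    return T2, Q2, M2
```

Note that M is left undefined (rather than 0) for empty panels, matching the black "undefined" panels in the grade maps.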
(snap. 3.4-18)
(fig. 3.4-15)
The scatter diagrams show some overestimation of the grades, and a slight underestimation of the high tonnage values.
[Scatter plots: Service Var Ore Tonnage T>600 vs. true tonnage above 600, rho=0.924; Service Var grade M>600 vs. true grade above 600, rho=0.644]
3.5 Simulations
After having reviewed the non linear estimation techniques, we can also perform simulations to answer the same questions on the recoverable resources. Because we are in a 2D framework, we can perform 100 simulations within a reasonable computation time.
Two techniques, both working under the multigaussian hypothesis, will be described: Turning Bands (TB) and Sequential Gaussian (SGS). This multigaussian hypothesis requires that the input variable is gaussian: the Gaussian V variable, calculated previously (3.3.1 Punctual Histogram Modeling), will be used.
Simulations will be performed on the SMU blocks of 5 m x 5 m (Grid 5*5): this will allow comparison of the results with the non linear estimation techniques. Block simulations require a gaussian back transformation and a change of support from point to block: this implies specific remarks discussed hereafter.
3.5.1 Before starting... important comments on block simulations
3.5.1.1 Block discretization optimization
In the standard version of Isatis, only points may be simulated and the change of support from point to block is done by averaging simulated points. In practice, each block is discretized into n sub-cells and each sub-cell is approximated as a point: the number n has to be large enough to validate this approximation. But as n increases, the CPU time increases, since each block requires n simulation processes (basically the CPU time is proportional to n). Thus, the choice of the block discretization is a compromise between performance and precision.
The block discretization is defined through the neighborhood definition panels, and Isatis gives some guidance towards the best compromise by calculating the mean block covariance Cvv. The block covariance depends only on the variogram model and the block geometry. Theoretically, if n were infinite the mean block covariance would tend to its true value.
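A minimal re-implementation of the idea, assuming a spherical model (the function names and the exact randomization of the discretization points inside their cells are hypothetical, intended to mimic the run-to-run variability reported later in this section):

```python
import random

def spherical_cov(h, sill=1.0, rng=50.0):
    """Spherical covariance C(h) = sill * (1 - 1.5 h/a + 0.5 (h/a)^3)."""
    if h >= rng:
        return 0.0
    u = h / rng
    return sill * (1.0 - 1.5 * u + 0.5 * u ** 3)

def mean_block_cov(bx, by, nx, ny, seed=0):
    """Mean block covariance C_vv for a bx x by block discretized into
    nx x ny points, each randomized inside its cell: C_vv is the average
    of C(h) over all pairs of discretization points."""
    r = random.Random(seed)
    pts = [((i + r.random()) * bx / nx, (j + r.random()) * by / ny)
           for i in range(nx) for j in range(ny)]
    tot = 0.0
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            tot += spherical_cov(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    return tot / (len(pts) ** 2)
```

Comparing `mean_block_cov(5, 5, 5, 5)` with `mean_block_cov(5, 5, 3, 3)` reproduces the logic of the test below: if the two values are close, the coarser discretization is an acceptable substitute.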
Note - In Isatis the default block discretization is 5 x 5 and may be optimized, as explained later (
3.5.4.1).
3.5.1.2 Gaussian back transformation
When simulating in Block mode, Isatis automatically performs the following workflow:
- from the input gaussian data, simulate gaussian point grades according to the block discretization parameters discussed above;
- gaussian back transformation (gaussian -> raw) of the point grades using a point anamorphosis;
- block grade = average of the raw point grades.
The averaging is done automatically at the end of the simulation run. Hence the anamorphosis function required to perform the gaussian back transformation is the Point anamorphosis based on the sample (point) support, which has already been calculated in 3.3.1 Punctual Histogram Modeling. The block anamorphosis Block 5m*5m (which includes a change of support correction) should not be used here.
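The three steps can be sketched as follows. This is a hedged illustration of the back-transform-then-average order only: the gaussian draws stand in for the conditional point simulation, and the lognormal anamorphosis used in the usage note is hypothetical.

```python
import math
import random

def simulate_block(point_anam, n_disc=9, seed=42):
    """Block-mode sketch: draw gaussian values at the discretization
    points, back-transform each with the *point* anamorphosis, then
    average the raw values to obtain the block grade.  point_anam is
    any gaussian -> raw function."""
    r = random.Random(seed)
    gaussians = [r.gauss(0.0, 1.0) for _ in range(n_disc)]
    raws = [point_anam(y) for y in gaussians]   # point back-transform
    return sum(raws) / len(raws)                # block averaging
```

The order matters: averaging the gaussian values first and back-transforming the average would amount to using a block anamorphosis, which is exactly what the paragraph above warns against.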
3.5.2 Simulations workflow summary
The aim is to simulate 5 m x 5 m block grades and to calculate the ore Tonnage T, the metal Quantity Q and the mean grade M above 600 ppm for 20 m x 20 m panels. The workflow will consist in:
- Variographic analysis of the gaussian sample grades (the name of the variogram model will be Point Gaussian V)
- Simulate the SMU grades (5 m x 5 m blocks) with the Turning Bands (TB) or Sequential Gaussian (SGS) method with the following parameters:
  - Block mode
  - input data: Sample Set / Data / Gaussian V
  - output macro-variables to be created: Grids / Grid 5*5 / Simu V TB or Simu V SGS
  - Number of simulations: 100
  - Starting index: 1
  - Gaussian back transformation enabled using the Point anamorphosis
  - Model...: Point Gaussian V defined at the previous step
  - Seed for Random Number Generator: leave the default number 423141. This seed is supposed to be a large prime number; using the same seed allows reproducibility of the realizations.
The neighborhood and other parameters specific to each method will be detailed in the relevant paragraph.
- Calculation of the QTM variables for both techniques (described for TB): ore Tonnage T (i.e. SMU proportion within each panel), metal Quantity Q, and mean grade M of blocks above 600 ppm within each 20 m x 20 m panel (M = Q / T). The panel mean grades cannot be averaged directly over the 100 simulations: the mean grade is not additive because it refers to different tonnages (the tonnage may differ between simulations). It therefore has to be weighted by the ore proportion T. One way to do this is to use an accumulation variable for each panel:
  - calculate the ore proportion T and the metal quantity Q (the metal quantity is the accumulation variable: Q = T x M) for each simulation
  - calculate average(T) and average(Q) over the 100 simulations
  - calculate the average mean grade: average(M) = average(Q) / average(T)
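The accumulation logic for one panel can be sketched as (the function name and list inputs are illustrative):

```python
def average_qtm(sim_T, sim_Q):
    """Average the QTM variables over simulations for one panel.  The
    mean grade is NOT averaged directly: it is rebuilt from the averaged
    accumulation Q = T * M, i.e. M = mean(Q) / mean(T).  sim_T and sim_Q
    hold the per-simulation tonnage and metal values."""
    n = len(sim_T)
    t_bar = sum(sim_T) / n
    q_bar = sum(sim_Q) / n
    m_bar = q_bar / t_bar if t_bar > 0 else None
    return t_bar, q_bar, m_bar
```

With two simulations of grades 800 at T=0.5 and 700 at T=0.25, the naive grade average would be 750, while the tonnage-weighted value is about 767 -- the larger-tonnage simulation rightly dominates.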
3.5.3 Variographic analysis of gaussian sample grades
The experimental variograms of gaussian variables often show more visible structures, which makes their interpretation easier; the analysis of anisotropy using the variogram map gives similar information about the main directions of continuity. In Statistics / Exploratory Data Analysis..., the experimental variogram Point Gaussian V is calculated with the same rotation parameters as Raw V. A variogram model using 3 structures has been fitted and saved under the name Point Gaussian V:
(fig. 3.5-1)
3.5.4 Simulation with the Turning Bands method
3.5.4.1 Simulations
We run Interpolate / Conditional Simulations / Turning Bands... with the parameters defined in the
workflow summary ( 3.5.2):
[Experimental variogram of Gaussian V in directions N160 and N250, with the fitted model (global rotation Az=-70.00): S1 Nugget effect, Sill = 0.13; S2 Spherical, Range = 20.00 m, Sill = 0.3, directional scales (20.00 m, 40.00 m); S3 Spherical, Range = 40.00 m, Sill = 0.6, directional scales (86.00 m, 40.00 m)]
(snap. 3.5-1)
- Gaussian back transformation... enabled: select the Point anamorphosis.
- Neighborhood...: create a new neighborhood parameter file named octants for TB. Press Edit... and from the Load... button reload the parameters from the octants neighborhood. We are now going to optimize the block discretization: press the ... button next to Block Discretization: the Discretization Parameters window pops up, where the number of discretization points along the x, y, z directions may be defined. These numbers are set to their default value (5 x 5 x 1). Press Calculate Cvv; the following appears in the message window (values differ at each run due to the randomization process):
Regular discretization: 5 x 5 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variables Gaussian V
Cvv = 0.811792
Cvv = 0.809978
Cvv = 0.812136
Cvv = 0.811752
Cvv = 0.810842
Cvv = 0.812900
Cvv = 0.808768
Cvv = 0.811977
Cvv = 0.810781
Cvv = 0.810921
Cvv = 0.812400
11 mean block covariances have been calculated with 11 different randomizations. The minimum value is 0.808768 and the maximum is 0.812900; the maximum relative variability is approximately 0.5%, which is more than acceptable: the 5 x 5 discretization is a very good approximation of the punctual support and may be reduced for optimization.
Note - For reproducibility purposes, the first value of Cvv will be kept for the simulation calculations.
For optimization, we decrease the number of discretization points to 3x3:
(snap. 3.5-2)
Press Calculate Cvv:
Regular discretization: 3 x 3 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variables Gaussian V
Cvv = 0.809870
Cvv = 0.814197
Cvv = 0.808329
Cvv = 0.812451
Cvv = 0.819093
Cvv = 0.809922
Cvv = 0.814171
Cvv = 0.811332
Cvv = 0.805993
Cvv = 0.806053
Cvv = 0.807459
The minimum value is 0.805993 and the maximum value is 0.819093: the maximum relative variability is approximately 1.6%. As expected, it has increased but remains acceptable: therefore, the 3 x 3 discretization is a good compromise and will be kept for the simulations (i.e. each simulated block value will be the average of 3 x 3 = 9 simulated points). Press Close, then OK in the neighborhood definition window.
- Number of Turning Bands: 300. The more turning bands, the more precise the realizations, but the CPU time increases. Too few turning bands would create visible 1D-line artefacts.
Press RUN: calculations may take a few minutes.
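The role of the number of bands can be illustrated with a toy spectral variant of the method: the field is a sum of 1D cosine processes carried by random directions, so with few bands the 1D lines show through, and with many the field tends to gaussian. This sketch is not the Isatis algorithm (which uses different 1D processes and conditions on the data); all parameters below are illustrative, the default seed merely echoing the manual's 423141.

```python
import math
import random

def turning_bands_2d(x, y, n_bands=300, seed=423141):
    """Toy unconditional turning-bands-style value at (x, y): average of
    n_bands random 1D cosine processes.  The same seed rebuilds the same
    set of bands at every call, so repeated calls sample one field."""
    r = random.Random(seed)
    z = 0.0
    for _ in range(n_bands):
        theta = r.uniform(0.0, math.pi)        # band direction
        ux, uy = math.cos(theta), math.sin(theta)
        freq = r.expovariate(10.0)             # random frequency, mean 0.1
        phase = r.uniform(0.0, 2.0 * math.pi)
        z += math.cos(freq * (x * ux + y * uy) + phase)
    return math.sqrt(2.0 / n_bands) * z        # unit-variance scaling
```

Dropping `n_bands` to a handful makes the banding visible when the field is mapped on a grid, which is the artefact the paragraph above warns about.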
We represent in the next figure five simulations, compared to the true map:
(fig. 3.5-2)
[Maps: True V (ppm, scale 0 to 1000) and simulations Simu V TB[00002], Simu V TB[00020], Simu V TB[00030], Simu V TB[00040], Simu V TB[00050]]
3.5.4.2 Calculation of the QTM variables
From Statistics / Processing / Grade Simulation Post-processing, compute the metal quantity, mean grade and tonnage on the 20*20 grid from the 5*5 grid simulations.
(snap. 3.5-3)
3.5.4.3 Analysis of the results
We can then display the ore Tonnage T and mean grade M above 600 ppm calculated by Turning Bands and compare them to the true values:
(eq. 3.5-1)
Tonnage T calculated by TB (SMU proportion) compared to the true tonnage. The color scale is
a regular 16-class grey palette between 0 and 1: panels for which there is strictly less than 1
block (i.e 0 <= proportion < 0.0625) are white.
(fig. 3.5-3)
Mean grade calculated by TB compared to the true grades.
The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
[Maps: true tonnage above 600 vs. TB_mean ore tonnage above 600; true grade above 600 (ppm) vs. TB_mean (mean grade above 600)]
(fig. 3.5-4)
Scatter diagrams of ore tonnage and mean grade above 600 ppm between
the mean of 100 TB simulations and the true values of panels.
3.5.5 Simulation with the Sequential Gaussian method
Two different algorithms are available for SGS in Isatis, using two different kinds of neighborhood:
- Interpolate / Conditional Simulation / Sequential Gaussian / Standard Neighborhood...: a standard elliptical neighborhood is used, taking the point data and the previously simulated grid nodes into account.
- Interpolate / Conditional Simulation / Sequential Gaussian / Sequential Neighborhood...: the sequential neighborhood first migrates the point data onto the nearest grid nodes; the neighborhood is then defined by a moving window made of x blocks around the target block.
We will use the standard neighborhood option because it is more accurate from a theoretical point of view, and moreover Block simulation is possible (automatic averaging of point values).
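The sequential logic itself can be sketched on a 1D grid. This is a hedged, minimal illustration, not the Isatis implementation: it uses a unique neighborhood (all data plus all already-simulated nodes) where the manual recommends a moving octant neighborhood, and a generic covariance function rather than the fitted Point Gaussian V model.

```python
import math
import random

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the SK system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n + 1):
                M[k][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sgs_1d(grid_x, data, cov, seed=423141):
    """Sequential gaussian sketch: visit the nodes along a random path,
    simple-krige (zero mean) from the data plus the already-simulated
    nodes, and draw from the conditional normal.  data is a list of
    (x, value) pairs; cov(h) is a covariance function."""
    r = random.Random(seed)
    known = list(data)                      # grows as nodes are simulated
    path = list(grid_x)
    r.shuffle(path)
    result = {}
    for x0 in path:
        n = len(known)
        K = [[cov(abs(known[i][0] - known[j][0])) + (1e-9 if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        k = [cov(abs(known[i][0] - x0)) for i in range(n)]
        w = solve(K, k)
        mean = sum(wi * v for wi, (_, v) in zip(w, known))
        var = max(cov(0.0) - sum(wi * ki for wi, ki in zip(w, k)), 0.0)
        val = r.gauss(mean, math.sqrt(var))
        known.append((x0, val))
        result[x0] = val
    return result
```

The conditional variance shrinks to zero at a data location, so the simulation honours the conditioning data exactly, which is the property the standard neighborhood preserves.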
3.5.5.4 Simulations
Open Interpolate / Conditional Simulations / Sequential Gaussian / Standard neighborhood.... and
enter the same parameters described in the workflow summary ( 3.5.2):
[Scatter plots: TB_mean ore tonnage above 600 vs. true tonnage above 600, rho=0.936; TB_mean (mean grade above 600) vs. true grade above 600, rho=0.869]
(snap. 3.5-4)
- The Gaussian Back Transformation is enabled with the Point anamorphosis function.
- Special Model Options...: by default, a Simple Kriging (SK) is performed using a constant mean equal to zero.
- Neighborhood...: create a new neighborhood named octants for SGS with the following parameters (you may load the parameters from the octants for TB parameter file):
(snap. 3.5-5)
  - The search ellipsoid is maintained at 70 m.
  - Minimum number of samples: 5
  - Number of angular sectors: 8
  - Optimum Number of Samples per Sector: 4, which adds up to a maximum of 32 samples. Theoretically, the SGS technique would require a unique neighborhood and use all the previously simulated grid nodes to reproduce the variogram exactly; in practice this is impossible, so it is recommended to increase the Optimum Number with respect to the Optimum Number of Already Simulated Nodes (to be defined below in the main SGS window) and the capacity of the computer.
  - In the Advanced tab, set the Minimum distance between two samples to 2 m; as two different sets of data are used to condition the simulations (i.e. the actual data points combined with the previously simulated grid nodes), this minimum distance criterion avoids fictitious duplicates between original data points and simulated grid nodes. It also helps spread the conditioning data for a better reproduction of the variogram.
  - The same Block Discretization of 3 x 3 will be used.
- Optimum Number of Already Simulated Nodes: 16. This means that the software will load all the real samples and the 16 closest already simulated nodes in memory for the search neighborhood algorithm. The maximum number of samples being 32, there will be 16 real samples used for each node simulation, as for the Turning Bands method. The TEST window allows evaluating the impact of these different parameters on the neighborhood.
- Leave the other parameters at their default values and press RUN.
Note - Isatis offers the possibility to perform the different simulations with independent paths (optional toggle in the main SGS window). By default, this toggle is set OFF, meaning that the same random path is used for all simulations: the independence is no longer guaranteed, but the algorithm is much quicker. If the toggle is set ON, the CPU time will be approximately multiplied by the number of simulations. Here, it has been checked that both options show negligible differences in the final results.
The resulting outcomes are very similar to those of the TB method.
3.5.5.5 Calculation of the QTM variables
From Statistics / Processing / Grade Simulation Post-processing, compute the metal quantity, mean grade and tonnage on the 20*20 grid from the 5*5 grid simulations.
(snap. 3.5-6)
3.5.5.6 Analysis of the results
(fig. 3.5-5)
Tonnage T calculated by SGS (SMU proportion) compared to the true tonnage.
The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.
(fig. 3.5-6)
Mean grade calculated by SGS compared to the true grades.
The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
[Maps: true tonnage above 600 vs. SGS_mean ore tonnage above 600; true grade above 600 (ppm) vs. SGS_mean (mean grade above 600)]
(fig. 3.5-7)
Scatter diagrams of ore tonnage and mean grade above 600 ppm between
the mean of 100 SGS and the true values of panels
We observe that the SGS simulations give very similar results to TB and are also well correlated to reality.
[Scatter plots: SGS_mean ore tonnage above 600 vs. true tonnage above 600, rho=0.938; SGS_mean (mean grade above 600) vs. true grade above 600, rho=0.870]
3.6 Conclusions
The objective of the case study was to illustrate several non linear methods (global and local) for estimating recoverable resources, and to compare them to linear kriging. All methods take the same support effect for 5 m x 5 m blocks into account, but only a few take the information effect into account. Therefore, we will first focus on results without information effect.
3.6.1 Global estimation
3.6.1.1 Without information effect
Grade Tonnage curves
The following methods will be compared to the true values (True): Ordinary Kriging (OK), block anamorphosis (Block 5x5), Indicator Kriging (IK), Disjunctive Kriging (DK) and Uniform Conditioning (UC). The grade-tonnage curves for all these methods will be presented; Service Variables (SV) and simulations (TB and SGS) have been calculated only for one particular cut-off, V = 600 ppm, so we cannot display G-T curves for these methods.
Open Tools / Grade Tonnage Curves... and activate 6 curves. For the IK, DK and UC outcomes, we need to ask for Tonnage Variables. For instance, for the Indicator Kriging (IK): press Edit..., choose the Tonnage Variables option, then IK_Q[xxxxx] for the Metal Quantity and IK_T[xxxxx] for the Total Tonnage:
(snap. 3.6-1)
Repeat the same for DK and UC, and change the curve parameters and labels for optimal visibility. By clicking on the graphic windows below, ask for the following Grade Tonnage curves: Mean grade vs. cut-off, Total tonnage vs. cut-off, Metal tonnage vs. cut-off and Metal tonnage vs. Total tonnage. The graphics are presented below:
(fig. 3.6-1)
Mean Grade vs. Cutoff
(fig. 3.6-2)
Total Tonnage vs. Cutoff
[Curves plotted for True, OK, Block 5*5, IK, DK and UC]
(fig. 3.6-3)
Metal Tonnage vs. Cutoff
(fig. 3.6-4)
Metal Tonnage vs. Total Tonnage
[Curves plotted for True, OK, Block 5*5, IK, DK and UC]
The True curve is black and represented with a bold line type. We clearly see that the OK tonnage curves are shifted compared to the others: linear kriging induces a significant smoothing effect despite a refined sampling and a good coverage of the domain.
All non linear methods provide similar and suitable results; a zoom centered on V = 600 allows a more precise comparison around this particular cut-off:
(fig. 3.6-5)
Grade-Tonnage curves with a zoom on the 600 ppm cutoff of interest (same legend)
Small differences are noticeable: IK overestimates the grades whereas DK overestimates the tonnages.
[Zoomed grade-tonnage curves around the 600 ppm cut-off; legend: True, OK, Block 5*5, IK, DK, UC]
As we had to choose a particular cut-off for comparing these methods with SV and simulations, we
have chosen V = 600 and the global results according to this cut-off are presented hereafter.
Global statistics on cut-off V = 600 ppm
The following tables give the statistics on ore tonnage, metal quantity and grade above 600 for the
different methods on the 195 panels. The true values are compared to the following methods (using
Statistics / Quick Statistics...): Turning Bands (TB), Sequential Gaussian Simulations (SGS), Indi-
cator Kriging (IK), Disjunctive Kriging (DK), Uniform conditioning (UC), Service Variables (SV),
global estimation with support effect (Block 5x5 without information effect, results already shown
in 3.3.4 Analysis of the results for the global estimation p.94) and ordinary kriging (OK):
Statistics on Ore Tonnage above 600 (proportion)

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Variance
True | 195 | 0 | 1 | 0.104 | 0.21 | 0.044
TB | 195 | 0 | 0.994 | 0.104 | 0.185 | 0.034
SGS | 195 | 0 | 0.996 | 0.1 | 0.185 | 0.034
IK | 195 | 0 | 0.99 | 0.101 | 0.2 | 0.04
DK | 195 | 0 | 1 | 0.115 | 0.192 | 0.037
UC | 195 | 0 | 0.987 | 0.096 | 0.189 | 0.036
SV | 195 | 0 | 0.884 | 0.098 | 0.163 | 0.027
OK | 195 | 0 | 1 | 0.081 | 0.205 | 0.042
Block 5x5 | | | | 0.101 | |

Statistics on Metal Quantity above 600

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Variance
True | 195 | 0 | 997.8 | 78.0 | 169.7 | 28796.8
TB | 195 | 0 | 996.5 | 79.4 | 155.4 | 24148.8
SGS | 195 | 0 | 1002.4 | 75.9 | 156.4 | 24453.6
IK | 195 | 0 | 982.3 | 78.2 | 165.9 | 27533.9
DK | 195 | 0 | 1002.5 | 86.0 | 159.7 | 25488.7
UC | 195 | 0 | 1004.6 | 71.7 | 154.6 | 23893.9
SV | 195 | 0 | 804.2 | 74.3 | 131.9 | 17231.0
OK | 195 | 0 | 1005.7 | 61.1 | 165.6 | 27418.7
Block 5x5 | | | | 76.0 | |

As the Mean grade M defined on the panels refers to different tonnages, it is not additive, so the calculation of the mean and the standard deviation needs to be weighted by the tonnages. Therefore, use Statistics / Quick Statistics... 8 times, once on the grade variable of each method, with the relevant tonnage as the Weight variable:

Statistics on Mean Grade above 600

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Variance
True | 66 | 603.0 | 997.8 | 700.0 | 79.0 | 9225.2
TB | 173 | 600.3 | 1009.7 | 689.0 | 52.0 | 2706.7
SGS | 166 | 606.2 | 1015.7 | 684.0 | 53.0 | 2815.5
IK | 91 | 604.5 | 992.2 | 722.0 | 87.0 | 7560.3
DK | 116 | 132.6 | 1002.5 | 659.5 | 118.6 | 14066.0
UC | 115 | 653.4 | 1017.2 | 691.5 | 50.0 | 2496.2
SV | 103 | 607.3 | 951.3 | 735.0 | 63.3 | 4000.5
OK | 44 | 601.0 | 1005.7 | 756.7 | 102.0 | 10411.6
Block 5x5 | | | | 754.5 | |

These statistics are attached to the specific cut-off 600: no global conclusion on the performance of the methods can be drawn here. Besides, the dataset may not be comparable to a realistic exploration campaign.
3.6.1.2 With information effect
Comparisons will be made for the anamorphosis Block 5*5 with information effect and the Uniform Conditioning (UC_with info[xxxxx]). Results for the block anamorphosis have already been discussed (cf. 3.3.4 Analysis of the results for the global estimation p.94). Only global statistics for the cut-off V = 600 ppm have been computed:

Method | Q | T | M
True block 5x5 | 77.95 | 10.38 | 750.67
True block 5x5 (info) | 67.92 | 9.01 | 754.11
Block 5*5 with info | 72.03 | 9.69 | 743.05
UC_with info | 69.20 | 9.17 | 754.60

For the cut-off V = 600 ppm, UC has correctly quantified the information effect.
3.6.2 Local estimation
For each local estimation method, a scatter diagram of the panel estimates against the true values (tonnages and grades), with the correlation coefficients, has already been produced (cf. the relevant paragraphs). Here, the error for each panel has been calculated and reported:
error = estimate - true value
Therefore, positive error values reveal overestimation.
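The tonnage weighting required for the non-additive mean grade statistics can be sketched as follows (the function name is illustrative; in Isatis the weighting is done through the Weight variable of Quick Statistics):

```python
def weighted_stats(values, weights):
    """Tonnage-weighted mean and standard deviation, as needed for the
    mean grade M: values are per-panel grades, weights the per-panel
    ore tonnages T.  Zero-tonnage panels are excluded."""
    pairs = [(v, w) for v, w in zip(values, weights) if w > 0]
    wsum = sum(w for _, w in pairs)
    mean = sum(v * w for v, w in pairs) / wsum
    var = sum(w * (v - mean) ** 2 for v, w in pairs) / wsum
    return mean, var ** 0.5
```

For two panels with grades 800 and 700 and tonnages 0.5 and 0.25, the weighted mean is about 767 rather than the unweighted 750.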
The table below summarizes the main results for the error on tonnages:
Local statistics of error on tonnage estimates and correlation with true tonnage values (for cut-off = 600 ppm)

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Correlation
TB | 195 | -0.397 | 0.248 | -0.001 | 0.075 | 0.94
SGS | 195 | -0.404 | 0.237 | -0.004 | 0.074 | 0.94
IK | 195 | -0.395 | 0.39 | -0.003 | 0.089 | 0.91
DK | 195 | -0.399 | 0.363 | 0.01 | 0.08 | 0.93
UC | 195 | -0.314 | 0.219 | -0.009 | 0.079 | 0.93
SV | 195 | -0.411 | 0.277 | -0.006 | 0.086 | 0.93
OK | 195 | -0.5 | 0.375 | -0.023 | 0.085 | 0.92
ID2 | 195 | -0.563 | 0.188 | -0.024 | 0.08 | 0.93

The true global tonnage is 0.104; the bias for all non linear methods remains acceptable.
The table below summarizes the main results for the error on mean grades above 600:
Local statistics of error on mean grades above 600 and correlation with true values (for cut-off = 600 ppm)

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Correlation
TB | 66 | -88.8 | 98.6 | 18.1 | 39.0 | 0.87
SGS | 66 | -81.9 | 98.3 | 12.7 | 38.9 | 0.87
IK | 57 | -93.4 | 275.3 | 33.8 | 69.2 | 0.68
DK | 65 | -485.6 | 161.2 | 0.8 | 82.9 | 0.75
UC | 65 | -126.4 | 94.0 | 7.9 | 49.3 | 0.79
SV | 65 | -113.1 | 188.2 | 41.6 | 61.2 | 0.67
OK | 40 | -100.5 | 67.9 | -22.6 | 40.6 | 0.89
ID2 | 44 | -130.9 | 134.6 | -21.0 | 50.1 | 0.82

The IK and SV methods show a global overestimation of the grades and a lower correlation with reality.
The table below summarizes the main results for the metal quantity:
Local statistics of error on metal quantity and correlation with true values (for cut-off = 600 ppm)

VARIABLE | Count | Minimum | Maximum | Mean | Std. Dev. | Correlation
TB | 195 | -276.5 | 174.9 | -1.4 | 51.9 | 0.95
SGS | 195 | -281.1 | 166.8 | -2.1 | 51.3 | 0.95
IK | 195 | -266.8 | 260.5 | 0.2 | 59.1 | 0.94
DK | 195 | -277.0 | 253.1 | 8.0 | 55.8 | 0.94
UC | 195 | -213.7 | 153.2 | -6.2 | 54.8 | 0.95
SV | 195 | -279.0 | 192.2 | -3.6 | 62.9 | 0.94
OK | 195 | -350.3 | 242.3 | -16.8 | 57.5 | 0.94
ID2 | 195 | -389.0 | 120.8 | -18.5 | 54.7 | 0.95

All non linear methods give consistent results for the metal quantity.
3.6.3 Final conclusions
The conclusions based on these numerical results only concern this particular dataset and should not be interpreted as a straightforward ranking of the methods.
Despite a refined sampling, linear interpolation methods (linear kriging, inverse distance...) induce a smoothing effect that has a significant impact on recoverable resources. Non linear geostatistics provide practical solutions, and this case study shows that all methods are globally consistent, though some small differences appear at the local scale.
Global estimation techniques, based on anamorphosis functions, gave satisfactory results and are quick to carry out.
Simulation techniques (TB and SGS) gave good results but are time consuming and rather heavy to carry out. Indicator Kriging showed some small differences at the local scale (as did Service Variables), and requires some specific pre/post-processing. Disjunctive Kriging and Uniform Conditioning both make use of anamorphosis functions, but Uniform Conditioning has the advantage of relying on ordinary kriging estimates instead of the global mean used by Disjunctive Kriging, which requires a stronger stationarity hypothesis. Besides, Uniform Conditioning is consistent with the global estimation techniques and makes it possible to take the information effect into account.
Oil & Gas
4 Property Mapping &
Risk Analysis
This case study is based on a real data set kindly provided by AMOCO
for teaching purposes, and that has been used in the AAPG publication
Stochastic Modeling and Geostatistics, edited by Jeffrey M. Yarus and
Richard L. Chambers.
It demonstrates several capabilities offered by Isatis to cope with two
variables whose coverage of the field is different: typically a few
wells on one hand and a complete 3D seismic survey on the other hand.
The study covers the use of estimation and simulations, from Kriging to
Cokriging, External Drift and Collocated Cokriging.
Last update: Isatis version 2012
4.1 Presentation of the Dataset
First, create a new study using the Study / Create facility of the File / Data File Manager window.
(snap. 4.1-1)
Then, set the Preferences / Study Environment / Units:
- default input-output length unit: foot,
- X, Y and Z graphical axis units: foot.
The datasets are located in two separate ASCII files (in the Isatis installation directory, under the
Datasets/Petroleum sub-directory):
- The file petroleum_wells.hd contains the data collected at 55 wells. In addition to the
coordinates, the file contains the target variable (Porosity) and the selection (Sampling) which
flags the 12 initial appraisal wells.
- The file petroleum_seismic.hd contains a regular grid where one seismic attribute has been
measured: the normalized acoustic impedance (Norm AI). The grid is composed of 260 by
130 nodes with a 40ft x 80ft mesh.
Both files are loaded using the File / Import / ASCII facility in the same directory (Petroleum), in
files respectively called Wells and Seismic.
(snap. 4.1-2)
Using the File / Data File Manager, you can check that both files cover the same area of 10400ft by
10400ft. You can also check the basic statistics about the two variables of interest.
At this stage, no correlation coefficient between the two variables can be derived, as they are not
defined at the same locations.
In this case study, the structural analysis will be performed using the whole set of 55 wells, whereas
any estimation or simulation procedure will be based on only the 12 appraisal wells, in order to
produce stronger differences in the results of various techniques.
Variable            Porosity (from Wells)  Norm AI (from Seismic)
Number of samples   55                     33800
Minimum             6.1                    -1
Maximum             11.8                   0
Mean                8.2                    -0.551
Std Deviation       1.4                    0.155
4.2 Estimation of the Porosity From Wells Alone
The first part of this case study is dedicated to the mapping of the porosity from wells alone. In
other words, we simply ignore the seismic information. This step is designed to provide a compari-
son basis, although it would probably be skipped in an industrial study. The spatial correlation of
the Porosity variable is studied through the Statistics / Exploratory Data Analysis procedure. The
following figures are displayed: a base map where the porosity variable is represented with
proportional symbols, a histogram and the omnidirectional variogram calculated for 10 lags of 1000ft. In
the Application / Graphic Specific Parameters of the Variogram window, the Number of Pairs
option is switched ON.
(fig. 4.2-1)
The area of interest is homogeneously covered by the wells. The Report Global Statistics item from
the Menu bar of the variogram graphic window produces the following printout where the
variogram details can be checked. The number of pairs is reasonably stable (above 70) up to
9000ft: this is consistent with the regular sampling of the area by the wells.
Variable : Porosity
Mean of variable = 8.2
Variance of variable = 1.862460

Rank  Number of pairs  Average distance  Value
1     73               1301.15           1.143562
2     94               1911.80           1.460053
3     199              2906.72           1.863894
4     217              4054.00           2.068571
5     194              5092.86           1.987912
6     160              5882.27           1.817500
7     203              6895.25           1.909532
8     142              8014.89           2.118310
9     99               8937.23           2.070556
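The experimental variogram printed above follows a simple rule: for each lag class, average 0.5 * (z_i - z_j)^2 over all pairs whose separation falls in the class. The following is an illustrative Python sketch of that calculation, not Isatis code; it assumes the usual convention of a tolerance of half a lag around each lag multiple:

```python
import math

def omnidirectional_variogram(points, values, lag, nlags):
    """Experimental semi-variogram with a tolerance of half a lag:
    pair (i, j) falls in the class whose center k*lag is nearest to
    their distance. Returns (pairs, mean distance, gamma) per class."""
    sums = [0.0] * nlags
    counts = [0] * nlags
    dists = [0.0] * nlags
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            k = int(d / lag + 0.5)  # nearest multiple of the lag
            if 1 <= k <= nlags:
                sums[k - 1] += 0.5 * (values[i] - values[j]) ** 2
                counts[k - 1] += 1
                dists[k - 1] += d
    return [(counts[k], dists[k] / counts[k], sums[k] / counts[k])
            for k in range(nlags) if counts[k] > 0]
```

Applied to the 55 wells with lag = 1000 ft and nlags = 10, such a routine produces per-class pair counts, mean distances and variogram values of the kind shown in the printout above (the exact figures depend on the tolerance conventions of the software).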
Coming back to the variogram Application / Calculation Parameters, ask to calculate the
variogram cloud. Highlight pairs corresponding to small distances (around 1000ft) and a high
variability on the variogram cloud: these pairs are represented by asterisks on the variogram
cloud; the corresponding data are highlighted on the base map and joined by a segment. No point
in particular can be designated as responsible for these pairs (outlier): as is often the case, they
simply involve the samples corresponding to high porosity values.
(fig. 4.2-2)
(fig. 4.2-3)
To save this experimental variogram in a Parameter File in order to fit a variogram model on it,
click on Application / Save in Parameter File and call it Porosity.
4.3 Fitting a Variogram Model
Within the procedure Statistics / Variogram Fitting, define the Parameter File containing the exper-
imental variogram (Porosity) and the one which will contain the model. The latter may also be
called Porosity; indeed, although these two Parameter Files have the same name, there will be no
confusion as their type is different. Visualize the experimental variogram and the fitted model using
any of the graphic windows; as there is only one variable and one omnidirectional variogram, the
global and the fitting windows are similar. From the Model Initialization frame, select Spherical
and Add Nugget. These are the structures that will be fitted to the experimental variogram.
The model can be fitted using the Automatic Fitting tab by pressing Fit.
(snap. 4.3-1)
Pressing the Print button in this panel produces the following printout where we can check that the
model is the nesting of a short range spherical and a nugget effect.
(snap. 4.3-2)
The corresponding graphic representation is presented in the next figure.
(fig. 4.3-1)
A final Run(Save) saves this model in the Parameter File Porosity.
4.4 Cross-Validation
The cross-validation technique (Statistics/Modeling/Cross-validation) enables you to evaluate the
consistency between your data and the chosen variogram model. It consists in removing in turn one
data point and re-estimating it (by kriging) from its neighbors using the model previously fitted.
An essential parameter of this phase is the neighborhood, which tells the system which data points,
located close enough to the target, will be used during the estimation. In this case study, because of
the small number of points, a Unique neighborhood is used; this choice means that all the data
will systematically be used for the estimation of any target point in the field. Therefore, for the
cross-validation, each data point is estimated from all the other data.
This neighborhood also has to be saved in a Parameter File that will be called Porosity.
(snap. 4.4-1)
When a point is considered, the kriging technique provides the estimated value Z* that can be
compared to the initial known value Z, and the standard deviation of the estimation σ* which
depends on the model and the location of the neighboring information. The experimental error
between the estimated and the true values (Z - Z*) can be scaled by the predicted standard
deviation of the estimation (σ*) to produce the standardized error (Z - Z*) / σ*. This quantity
should behave as a standard normal variable.
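The mechanics of this leave-one-out loop can be sketched in a few lines of Python. This is an illustrative implementation, not Isatis code: it uses ordinary kriging in a unique neighborhood, with hypothetical 1D coordinates and arbitrary spherical + nugget model parameters:

```python
import math

def cov(h, nugget, sill, rng):
    """Covariance for a nugget + spherical model: C(h) = total sill - gamma(h)."""
    if h == 0.0:
        return nugget + sill
    gamma = nugget + (sill if h >= rng
                      else sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3))
    return nugget + sill - gamma

def solve(a, b):
    """Solve a linear system by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def cross_validate(xs, zs, nugget, sill, rng):
    """Leave-one-out ordinary kriging; returns the standardized errors
    (Z - Z*) / sigma* for each data point."""
    out = []
    for i in range(len(xs)):
        idx = [j for j in range(len(xs)) if j != i]
        n = len(idx)
        # Ordinary kriging system: covariances plus the Lagrange row/column.
        a = [[cov(abs(xs[j] - xs[k]), nugget, sill, rng) for k in idx] + [1.0]
             for j in idx]
        a.append([1.0] * n + [0.0])
        b = [cov(abs(xs[j] - xs[i]), nugget, sill, rng) for j in idx] + [1.0]
        w = solve(a, b)
        zstar = sum(w[k] * zs[idx[k]] for k in range(n))
        var = cov(0.0, nugget, sill, rng) - sum(w[k] * b[k] for k in range(n)) - w[n]
        out.append((zs[i] - zstar) / math.sqrt(var))
    return out
```

If the model is consistent with the data, the standardized errors returned by such a loop should have a mean near 0 and a variance near 1, which is exactly what the cross-validation panel reports.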
12 Acoustic Survey
This case study is based on an acoustic survey carried out in the northern
North Sea (western half of ICES division IVa) in July 1993 in order to
evaluate the total biomass, total numbers and numbers at age of the
North Sea herring stock. It has been kindly provided by the Herring
Assessment Group of the International Council for the Exploration of
the Sea (ICES). It has also been used as a case study in the book
Geostatistics for Estimating Fish Abundance by J. Rivoirard, K.G.
Foote, P. Fernandes and N. Bez. This book will serve as a reference for
comparison in this case study.
The case study illustrates the use of Polygons to limit the area on
which a global estimation has to be performed. The aim of this study is
to carry out a global estimation with a large number of data, which
requires the domain to be subdivided into strata (polygons). The main
issue arises in the way the results per stratum have to be combined,
both for the estimation and for the estimation variance.
Last update: Isatis version 11.0
12.1 Introduction
As stated in the reference book, this data set has been taken from the 6-year acoustic survey of the
Scottish North Sea. The 1993 data constitute 938 values of an absolute abundance index, at regular
points along the survey cruise track. This cruise track is oriented along systematic parallel transects
spaced 15 nautical miles (nmil) apart, running east-west and vice versa, progressing in a northerly
direction on the east of the Orkney and Shetland Isles and southward down the west side. The
acoustic index is proportional to the average fish density.
The position of an acoustic index was taken every 2.5 nmil, initially recorded in a longitude and
latitude global positioning system and later converted into nmil using a simple transformation of
longitude based on the cosine of the latitude.
12.1.1 Loading the data
First, we have to set the Study Environment: all units but Z unit in Nautical Miles (Input-Output
Length Options and Graphical Axis Units), and Z unit in meters.
The acoustic survey information is contained in the ASCII file called acoustic_survey.hd. It is
provided with a header which describes the set of variables:
- Longitude and Latitude, expressed in nmil, will serve as coordinates.
- Year, Month, Day, Hour, Minute and Second give the exact date at which the
measurement has been performed. They will not be used in this case study.
- Fish is the variable containing the fish abundance and will be the target variable throughout
this study.
- East and West are two selections which separate the sub-part of the data belonging to the
eastern part of the North Sea from the western part: the boundary corresponds to a broken
line going through the Orkney and Shetland Isles.
The data are loaded in the Directory Survey and in the File called Data.
12.1.2 Statistics
Getting info on the file Data tells us that the data set contains 938 points, extending in a square area
with 200 nmil edge. The next figure represents the acoustic survey where the points located in the
East part (1993 - East selection) are displayed using a dark circle, whereas the points in the West
part (1993 - West selection) are represented with a plus sign.
(fig. 12.1-1)
The differences between East and West areas show up in the basic statistics of the fish abundance:
The following figure shows the histogram of the Fish variable. The data are highly positively
skewed with 50% of zero values.
                   All data  East     West
Count of samples   938       606      332
Minimum            0         0        0
Maximum            533.36    533.36   306.48
Mean               8.27      8.16     8.47
Variance           1078.49   1189.48  875.84
Skewness           9.07      9.93     6.33
CV (sample)        3.97      4.23     3.49
(fig. 12.1-2)
The next figure represents the log of the acoustic index + 1 in proportional display, zero values
being displayed with a plus sign whereas non-zero values are displayed using circles. It is similar
to the 1993 display in figure 4.3.1, page 84, of the reference book.
(fig. 12.1-3)
12.1.3 Variography
Two omnidirectional variograms were calculated separately on data coming from the east and the
west areas. For the sake of simplicity in this case study, the variograms are calculated on the raw
variables, with a lag value of 2.5 nmil, 30 lags and a tolerance on distance of 50%. Each
experimental variogram has then been fitted using the same combination of a nugget effect and an
exponential basic structure: the sill of each component has been fitted automatically. The next
figure shows the two experimental variograms and the corresponding models (West and East).
(fig. 12.1-4)
(fig. 12.1-5)
Note that, as we already knew, the variances of the two subsets are quite different (875 for West and
1189 for East). The fitted models have the following parameters:

Dataset                    West  East
Nugget                     396   842
Exp - Range                27    20
Exp - Sill                 787   469
Total Sill                 1183  1311
Ratio Nugget / Total Sill  33%   64%

There is enough evidence to indicate that there are differences between the east and west regions,
particularly regarding the proportion of nugget; it is therefore advisable to stratify the whole
dataset into east and west regions.
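The two fitted models can be written as simple functions of distance. This sketch assumes the common practical-range convention for the exponential structure (the variogram reaches about 95% of its sill at the range):

```python
import math

def exp_variogram(h, nugget, sill, prange):
    """Nugget + exponential variogram, practical-range convention:
    gamma(h) = nugget + sill * (1 - exp(-3h / prange)) for h > 0."""
    if h == 0.0:
        return 0.0
    return nugget + sill * (1.0 - math.exp(-3.0 * h / prange))

def west(h):  # West model: nugget 396, exponential sill 787, range 27 nmil
    return exp_variogram(h, 396.0, 787.0, 27.0)

def east(h):  # East model: nugget 842, exponential sill 469, range 20 nmil
    return exp_variogram(h, 842.0, 469.0, 20.0)
```

At large distances each model levels off at its total sill (1183 for West, 1311 for East), matching the table above.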
12.2 Global Estimation
We decided to divide the information between the East and the West regions. For the purpose of the
global estimation, the whole field is subdivided into geographical sub-strata of consistent sampling
density. These sub-strata correspond to Polygons.
12.2.1 Small strata
At first, the sub-strata are designed so as to follow the survey track along a single transect in the
East part; in the West part, a sub-stratum may contain a two-way transect.
These polygons also take into account the shape of the Coast of Scotland as well as the Orkney and
Shetland Islands, in order to avoid integrating the target variable (Fish density) over the land.
(fig. 12.2-1)
This first set of polygons corresponding to small strata is read from the separate ASCII Polygon
File called small_strata.hd. The procedure File / Polygons Editor is used to import these polygons
into a new Polygon File Small Strata: some parameters (label contents and position, filling...) are
already stored in the ASCII File. The procedure allows a visualization of these polygons, together
with the survey data used as control information (see the paragraph on Auxiliary Data in the
Polygon section of the Beginner's Guide).
The polygons are named from S1 to S26. Using the File / Selection Intervals menu, we create two
selections on the Sample Number to distinguish the first 13 polygons (from S1 to S13) which are
located in the East region (selection East) from the last 13 polygons (from S14 to S26) which are
located in the West region (selection West).
The polygons constitute a partition of the domain of integration (no polygon overlap) and the total
surface is then obtained as the sum of the surfaces of the polygons: 39192 nmil².
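Summing the surfaces of a non-overlapping set of polygons is straightforward once each polygon area is available; for a polygon given by its vertices, the shoelace formula applies. A minimal sketch (the actual strata coordinates live in small_strata.hd and are not reproduced here):

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple (non self-intersecting) polygon
    given as a list of (x, y) vertices."""
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def total_surface(polygons):
    """Total surface of a partition: sum of the individual polygon areas."""
    return sum(polygon_area(p) for p in polygons)
```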
12.2.2 Global Weighted Estimation Using Kriging
The next step consists in performing the global estimation for each polygon, using the Interpolate /
Estimation / Polygon Kriging window. Nevertheless, care is taken to process the two regions
separately. Therefore, we will interpolate the data over the polygons of the East selection using
the model corresponding to this subset of information, and then perform the same operation for
the West region.
Some polygons overlap the East and West data selections; this is for instance the case for S15.
Therefore, to avoid losing some information within the polygon, the entire dataset is used as input
for the interpolation.
The global estimation by kriging requires each polygon to be discretized. The definition of the
discretization grid is a feature of the Polygon Editor facility, using the Application / Discretize
facility. We simply obtain a discretization by choosing to fit grids with a fixed mesh of 2.5 x 2.5
nmil with no rotation.
The global estimation requires the definition of the Neighborhood criterion (stored in the Standard
Parameter File Polygon): we perform the global estimation of the fish density in each polygon only
using the data points located within this polygon. Nevertheless, we allow the system to grab
information located on the edge of this polygon and possibly falling outside due to roundoff
errors: for this reason, the neighborhood is extended by dilating the polygon with a small disc of
1 nmil radius.
The global estimation is performed and the following results are stored in the polygon file:
Estimation, which contains the (weighted) estimate of the mean (using Kriging), and St. dev.,
which contains the square root of the (weighted) estimation variance of the mean (using Kriging).
We can visualize the result using the Display facility with the Estimation variable defined on the
polygon, using a new color scale.
(snap. 12.2-1)
(fig. 12.2-2)
12.2.3 Global Unweighted Estimation
Another method consists in calculating the mean estimate over each polygon simply as the mean of
the data located within the polygon neighborhood. This can be achieved in two different ways:
- Average of the data within a polygon
We must first create a selection which retains the information located within the polygon: this is
realized using the File / Selection From Polygons feature. When considering the first polygon
for example, we run this procedure, selecting samples located inside the polygon S1, and storing
the result in a new selection variable called S1. Out of the 938 initial data, only 68 are retained
in this selection.
Then it suffices to run the standard Quick Statistics procedure selecting only the data points
within this S1 selection, in order to obtain the mean: 6.358.
(snap. 12.2-2)
- Global estimation with a pure nugget effect
The second solution is to run the Global Estimation procedure again, but using a model where
any spatial dependency between samples is discarded, such as a pure nugget effect (called
Nugget). Then the arithmetic average is the optimal estimate of the mean of the polygon.
Another interesting feature is that all the polygons can be estimated in a single Run of the same
procedure. The estimation is stored in the variable Arithmetic Mean in the Polygon File.
Note - For comparison purposes, the dilation radius of the neighborhood is brought back to 0 for
this example.
When the global estimation has been processed, it suffices to use the traditional Print feature to
dump out the value of the Arithmetic Mean variable for each polygon. We can check the
exactness of the comparison for the first polygon: 6.36.
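The equivalence just checked (arithmetic mean = kriging with a pure nugget model) boils down to averaging the samples retained by each polygon selection. A trivial sketch with made-up values:

```python
def polygon_mean(values, selection):
    """Unweighted mean of the samples flagged by a selection mask; with a
    pure nugget model, ordinary kriging returns exactly this value."""
    kept = [v for v, keep in zip(values, selection) if keep]
    return sum(kept) / len(kept)
```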
12.2.4 Comparison
It is now time to review the results obtained for all polygons by using the Print feature for dumping
the variables:
- Estimation: (weighted) estimate of the mean (using Kriging)
- St. dev.: square root of the (weighted) estimation variance of the mean (using Kriging)
- Arithmetic Mean: unweighted estimate of the mean
The results are presented in the following table where:
- Rk is the rank of the polygon
- N designates the count of points in the polygon
- Surf is the surface of the polygon in nmil²
- Rap is the ratio of the surface of the current polygon with respect to the total surface
- Z_iid is the unweighted average Fish density over the polygon
- Z_geo is the kriged estimate of the mean Fish density over the polygon
- S is the corresponding standard deviation
- A_iid is the unweighted abundance over the polygon
- A_geo is the kriged abundance
Regarding the abundance estimation, we can compare:
- the arithmetic mean fish density raised to the area of the polygon: 284923,
- the arithmetic mean fish density raised to each polygon surface, and cumulated over the 26
polygons: 305670,
- the kriged mean fish density raised to each polygon surface, and cumulated over the 26
polygons: 295182.
For the estimation variance, we can compare:
- the global coefficient of variation which ignores the spatial structure,
CV_iid = s / (z_mean * sqrt(N)), with s the standard deviation of the sample values, z_mean
the sample mean and N the number of samples (expressed in %): 12.97%;
- the CV_geo = sigma_E / z_geo, where sigma_E² is the weighted sum of the estimation
variances of the different strata: sigma_E² = sum over j of (V_j / V)² * sigma_j², with V_j the
surface of polygon j and V the global surface of the domain. This term is equal to 18.37%.
Rk  N   Surf  Rap    Z_iid  Z_geo  S      A_iid  A_geo
1   68  4275  10.91  6.36   6.04   5.11   27179  25817
2   64  2650  6.76   19.34  19.05  4.75   51266  50483
3   57  2584  6.59   13.83  12.84  4.95   35745  33188
4   59  2423  6.18   4.97   4.88   4.98   12045  11814
5   60  2206  5.63   5.02   3.64   4.96   11081  8039
6   42  1938  4.95   11.16  10.12  5.63   21624  19623
7   40  1719  4.39   9.62   9.41   5.93   16548  16182
8   34  1742  4.44   5.98   5.74   6.61   10425  10005
9   28  1523  3.89   4.77   4.38   7.07   7261   6676
10  36  1853  4.73   6.40   5.92   6.53   11867  10965
11  33  1562  3.99   3.44   5.83   6.28   5370   9115
12  33  1585  4.05   1.11   3.67   6.35   1764   5824
13  52  2575  6.57   0.86   0.86   5.80   2215   2219
14  23  781   1.99   0.05   0.05   6.82   39     37
15  19  488   1.25   11.89  5.24   8.44   5809   2557
16  30  713   1.82   12.44  13.33  6.15   8876   9506
17  30  969   2.47   4.65   4.69   5.87   4504   4546
18  47  1280  3.27   6.24   6.30   4.48   7992   8059
19  53  1199  3.06   9.30   7.48   3.93   11147  8970
20  36  1175  3.00   30.10  29.96  5.12   35361  35196
21  35  1293  3.30   2.70   2.11   5.37   3490   2728
22  24  856   2.19   15.06  13.94  5.95   12902  14941
23  11  676   1.73   0.00   0.00   12.39  0      0
24  10  487   1.24   0.00   0.00   11.52  0      0
25  7   405   1.03   1.67   2.59   12.92  678    1051
26  6   234   0.60   2.07   2.73   13.24  482    638
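The first coefficient of variation can be recomputed from the global statistics of the Fish variable given in section 12.1.2 (938 samples, mean 8.27, variance 1078.49); a quick Python check:

```python
import math

n = 938             # number of samples
mean = 8.27         # sample mean of Fish
variance = 1078.49  # sample variance of Fish

# CV of the global mean ignoring the spatial structure: s / (mean * sqrt(N)), in %.
cv_iid = 100.0 * math.sqrt(variance) / (mean * math.sqrt(n))
print(round(cv_iid, 2))  # -> 12.97, matching the value quoted in the text
```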
(snap. 13.9-3)
Air quality 625
(snap. 13.9-4)
(fig. 13.9-4)
13.10 Quantifying a local risk with Conditional
Expectation (CE)
The aim of this part is to calculate the probability for NO2 to exceed a given cutoff at a given
point. The method that we consider is the Conditional Expectation: it uses a normal score
transformation of the variable and its kriging.
You need to krige the gaussian variable NO2 Gauss using the NO2 Gauss variogram model and a
Unique neighborhood, and you have to create two new variables in the Output File: Estimation
for NO2 Gauss (Kriging) and Std for NO2 Gauss (Kriging).
(snap. 13.10-1)
After that, you can proceed with the calculation of the probability. Select the Statistics /
Statistics / Probability from Conditional Expectation menu and click on the Data File button to
open a File Selector. Choose Estimation for NO2 Gauss (Kriging) as Gaussian Kriged Variable,
Std for NO2 Gauss (Kriging) for the second variable and create a new variable Probability
40µg/m3 (CE) for the last variable. This Probability macro variable will store the different
probabilities to be above given cutoffs. Each alphanumerical index of the Macro Variable will
correspond to a different cutoff. In our case, there will be only one cutoff.
Press the Indicator Definition button to define the cutoff in the raw space; we have chosen a
cutoff of 40 µg/m3. Click on Apply, then Close.
Check Perform a Gaussian Back Transformation and click on Anamorphosis to define the
transformation (NO2) which has been used to transform the raw data into the gaussian space
before kriging. To finish, click on Run.
(snap. 13.10-2)
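What the panel computes at each grid node is the probability that the gaussian variable exceeds the gaussian-transformed cutoff, given that its conditional distribution is normal with mean the kriged estimate and standard deviation the kriging standard deviation. A minimal sketch (the gaussian cutoff y_cutoff would come from the NO2 anamorphosis, which is not reproduced here):

```python
import math

def gaussian_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exceedance_probability(y_cutoff, y_kriged, y_std):
    """P[Y > y_cutoff] when Y ~ N(y_kriged, y_std**2): the conditional
    expectation of the cutoff indicator at a grid node."""
    if y_std <= 0.0:
        # No uncertainty left: the estimate alone decides.
        return 1.0 if y_kriged > y_cutoff else 0.0
    return 1.0 - gaussian_cdf((y_cutoff - y_kriged) / y_std)
```

Applying this function at every grid node, with the kriged estimate and standard deviation maps as inputs, yields the probability map displayed below.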
(snap. 13.10-3)
The map corresponding to the probability to exceed the sanitary threshold of 40 µg/m3 is
displayed hereafter. A new color scale called Probability is created with irregular bounds in order
to highlight the points where the probability is low.
(fig. 13.10-1)
13.11 NO2 univariate simulations
Another way to calculate the probability to exceed a threshold is based on simulations, and
particularly on conditional simulations. Simulations are also necessary to compute global
statistics, such as the average exposed population.
A conditional simulation corresponds to a grid of values having a normal distribution and
honoring the model. Moreover, it honors the data points, as it uses a conditioning step based on
kriging which requires the definition of a neighborhood. So the simulations also need the
gaussian transformation and a variogram model based on this normal variable.
To compute these simulations, you are going to use the turning bands method (Interpolate /
Conditional Simulations / Turning Bands). You use the same Unique neighborhood as in the
kriging step. The additional parameters consist in:
- the name of the macro variable: each simulation is stored in this macro variable with an index
attached,
- the number of simulations: 200 in this exercise,
- the starting index for numbering the simulations: 1 in this exercise,
- the Gaussian back transformation, performed using the anamorphosis function NO2. In a first
run, this anamorphosis will be disabled in order to study the gaussian simulations,
- the seed used for the random number generator: 423141 by default. This seed allows you to
perform lots of simulations in several steps: each step will be different from the previous one if
the seed is modified.
The final parameters are specific to the simulation technique. When using the Turning Bands
method, you simply need to specify the number of bands: a rule of thumb is to enter a number
much larger than the count of rows or columns in the grid, and smaller than the total number of
grid nodes; 1000 bands are chosen in our exercise.
You can verify on some simulations in the gaussian space that the histogram is indeed gaussian
and that the experimental variogram respects the structure of the NO2 Gauss model, particularly
at small scale. After this quality control, you can enable the Gaussian back transformation NO2.
(fig. 13.11-1)
(snap. 13.11-1)
The results consist of 200 realizations stored in one Simulations NO2 Macro Variable in the Grid.
The clear differences between several realizations are illustrated in the next graphic.
(fig. 13.11-2)
(fig. 13.11-3)
13.12 NO2 multivariate simulations
As in the kriging step, you can integrate auxiliary variables in the simulations. The gaussian
hypothesis requires a new multi-linear regression of the auxiliary variables Altitude and
ln(Emi_NOx+1) on the NO2 Gauss variable. The new auxiliary variable is stored in NO2 Gauss
regression and the coefficients of this new regression are printed in the Message window:
Multi-linear regression
-----------------------
Equation for the target variable : NO2 Gauss
Coefficient - Variable Name
-2.9183e-003 - Altitude
0.287881 - ln(Emi_NOx+1)
-1.337892 - constant
Statistics calculated on 49 active samples
Raw data Mean = 0.309796
Variance = 1.209877
Residuals Mean = -0.000000
Variance = 0.328754
Calculate the NO2 Gauss regression variable on the Grid in the Calculator panel.
(snap. 13.12-1)
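The Calculator step simply applies the reported regression at every grid node; an illustrative sketch of the formula, with the coefficients copied from the Message window above:

```python
import math

def no2_gauss_regression(altitude, emi_nox):
    """Auxiliary variable: multi-linear regression of Altitude and
    ln(Emi_NOx + 1) on NO2 Gauss, using the printed coefficients."""
    return (-2.9183e-03 * altitude
            + 0.287881 * math.log(emi_nox + 1.0)
            - 1.337892)
```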
Air quality 635
After that, you can compute the three experimental variograms (using the declustering weights
variable). Save them as NO2 Gauss-Altitude+ln(Emi_NOx+1) and fit a model. You choose the
following parameters:
- an exponential structure with a range of 48 km, with:
  - a sill of 1.12 for the NO2 Gauss variogram,
  - a sill of 1.00 for the cross variogram,
  - a sill of 1.10 for the NO2 Gauss regression variogram.
(fig. 13.12-1)
You are now able to perform the collocated co-simulations using the turning bands technique.
The differences with respect to the univariate simulations are that the multivariate case requires
two variables, NO2 Gauss and NO2 Gauss regression (with the Background selection), in the
Input File. Click on the Output File button, create two new variables on the Grid (Alsace
selection activated): Simulations NO2 (multivariate case) and Simulations NO2 Gauss
regression (multivariate case), the latter being irrelevant but required by the algorithm, and
select NO2 Gauss regression as Collocated Variable.
Enter NO2 Gauss-Altitude+ln(Emi_NOx+1) as variogram model and a Unique neighborhood.
Click on the Special Option button and switch on the Collocated Cokriging option (verify that the
collocated variable is the same, NO2 Gauss regression, in Input and Output File). Enable the
Gaussian Back Transformation and define the NO2 Anamorphosis for each variable. Do not change
the other parameters like the number of simulations and the number of turning bands. Finally click
on Run.
(snap. 13.12-2)
(snap. 13.12-3)
(snap. 13.12-4)
13.13 Simulation post-processing
The Tools / Simulation Post Processing panel provides a procedure for the post-processing of a
macro variable. Considering the 200 univariate simulations, you ask the procedure to perform
sequentially the following tasks:
- calculation of the mean of the 200 simulations,
- determination of the cutoff map giving the probability that NO2 exceeds 40 µg/m3.
(snap. 13.13-1)
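Both post-processing tasks reduce, node by node, the stack of realizations: the mean of the simulated values, and the frequency with which the cutoff is exceeded. A minimal sketch in plain Python, with one list of node values per realization:

```python
def post_process(simulations, cutoff):
    """Per-node mean of the realizations, and probability (frequency over
    the realizations) of exceeding the cutoff."""
    n_sim = len(simulations)
    n_nodes = len(simulations[0])
    mean = [sum(sim[i] for sim in simulations) / n_sim
            for i in range(n_nodes)]
    prob = [sum(1 for sim in simulations if sim[i] > cutoff) / n_sim
            for i in range(n_nodes)]
    return mean, prob
```

As the number of realizations grows, the per-node mean converges toward the kriging estimate, which is what the maps below illustrate.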
(snap. 13.13-2)
(snap. 13.13-3)
The map corresponding to the mean of the 200 simulations is displayed with the same color scale
as for each of the estimated maps, together with the associated standard deviation map. The mean
of a large number of simulations converges toward the kriging estimate.
(fig. 13.13-1)
(fig. 13.13-2)
The following graphic represents the probability that the NO2 concentrations exceed a sanitary
threshold of 40 µg/m3, calculated from the simulations. This map is very similar to the probability
map obtained by conditional expectation; with an infinite number of simulations, the two maps
would be identical.
(fig. 13.13-3)
The following graphics represent the mean of the simulations and the probability of exceeding
40 µg/m3 calculated in the multivariate case, i.e. using the Simulations NO2 (multivariate case)
macro variable in the Tools / Simulations Post Processing panel with the same parameters as before.
The simulation mean is very similar to the cokriging map. The probability map shows some
differences from the one obtained by univariate simulations, especially in the South, where the
probability is lower (almost zero) than in the first graphic, and in the East-center, where the main
area at risk of exceeding 40 µg/m3 is more limited and follows a road axis. Integrating auxiliary
variables in the simulations leads to a more realistic probability map.
(map: probability of exceeding 40 µg/m3, univariate case)
(fig. 13.13-4: Simulation NO2 mean (multivariate case), NO2 in µg/m3)
(fig. 13.13-5: Probability of exceeding 40 µg/m3 (multivariate case))
13.14 Estimating population exposure
The first task consists in initializing a new population exposure macro variable. For that, use the
Tools / Create Special Variable panel. Select the Variable Type to be created in the list: a Macro
Variable 32 bits, and click on the Data File button to define the name of this new variable:
Population exposure. Click Variable Unit to select the unit: Float(), and Editing Format to define
the format: Integer(10,0). Finally, specify the Number of Macro Variable Indices, which will be the
same as the number of simulations, i.e. 200. Click on Run.
(snap. 13.14-1)
In the File / Calculator panel, for each simulation you are going to calculate the population
potentially exposed to NO2 concentrations higher than 40 µg/m3. Click on the Data File button to
select Pop99 as v1, and Simulations NO2 (multivariate case) and Population exposure as m1 and
m2 (macro variables).
Enter in the Transformation window the operation to be applied to the variables. For each
simulation and each cell, the simulated NO2 concentration is compared to the threshold of
40 µg/m3. If this value is exceeded, the number of inhabitants stored in the Pop99 variable is kept;
otherwise the number of inhabitants exposed is set to zero. The transformation is therefore:
m2=ifelse(m1>40,v1,0).
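The calculator expression above can be sketched in Python, assuming the simulations are held as a (realizations × cells) array; the names are hypothetical, not the Isatis API:

```python
import numpy as np

def exposed_population(sim_no2, pop, threshold=40.0):
    """Mimic the Isatis calculator expression m2 = ifelse(m1 > 40, v1, 0):
    for each realization (m1), keep the cell population (v1) where the
    simulated NO2 concentration exceeds the threshold, else 0."""
    sim_no2 = np.asarray(sim_no2, dtype=float)   # shape (n_real, n_cells)
    pop = np.asarray(pop, dtype=float)           # shape (n_cells,)
    return np.where(sim_no2 > threshold, pop, 0.0)

pop = [120.0, 300.0, 50.0]                 # inhabitants per cell (Pop99)
sims = [[41.0, 39.0, 60.0],                # realization 1
        [35.0, 45.0, 20.0]]                # realization 2
m2 = exposed_population(sims, pop)
```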
(snap. 13.14-2)
The Tools / Simulation Post-processing panel is finally used to estimate the population exposed to
NO2 concentrations higher than 40 µg/m3 from the Population exposure macro variable. To run
this operation, switch on Risk Curves and click on the Edit button. You are only interested in the
Accumulations. For each realization (each index of the macro variable), the program calculates the
sum of all the values of the variable that are greater than or equal to the cutoff; in our case, it
calculates the total sum of inhabitants (so choose a cutoff of 0, since the selection of the inhabitants
living in an area exposed to more than 40 µg/m3 was made in the preceding step).
This sum is then multiplied by the unit surface of a cell, equal to 1000 m x 1000 m = 1,000,000 m2;
as you are interested in the number of inhabitants (inhab), you need to divide by this same figure.
Switch on Draw Risk Curve on Accumulations to draw the risk curves on accumulations in a
separate graphic, and Print Statistics to print the accumulations of the target variable for each
simulation.
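The accumulation statistics can be sketched as follows: sum the exposed inhabitants over all cells for each realization, then summarize the distribution of those totals. This is an illustration, not the Isatis implementation; the quantile rule here is a simple nearest-rank convention:

```python
def accumulation_risk(exposure_maps):
    """For each realization, sum the exposed inhabitants over all cells
    (cutoff 0: every value contributes), then report the range, the mean
    and an empirical median of the accumulation distribution."""
    totals = sorted(sum(cells) for cells in exposure_maps)
    n = len(totals)
    mean = sum(totals) / n
    def quantile(p):                       # nearest-rank quantile
        idx = min(n - 1, max(0, round(p * (n - 1))))
        return totals[idx]
    return {"min": totals[0], "max": totals[-1], "mean": mean,
            "P50": quantile(0.5)}

# Three toy realizations of the per-cell exposed population
stats = accumulation_risk([[120.0, 0.0, 50.0],
                           [0.0, 300.0, 0.0],
                           [80.0, 10.0, 0.0]])
```

Note that on the printed risk curves Isatis labels quantiles by exceedance frequency, which is why P5 is larger than P95 in the printout below.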
(snap. 13.14-3)
(snap. 13.14-4)
(fig. 13.14-1)
Statistics for Simulation Post Processing
=========================================
Target Variable : Macro variable = Population exposure[xxxxx] [count=200]
Cutoff = 0
Number of outcomes = 200
The 19716 values are processed using 1 buffers of 19716 data each
Cell dimension along X = 1000.00m
Cell dimension along Y = 1000.00m
Rank Macro Frequency Accumulation Surface
1 1 0.50 140173.00inhab 8302.00km2
2 2 1.00 111181.00inhab 8302.00km2
3 3 1.50 114081.00inhab 8302.00km2
.../...
198 198 99.00 93011.00inhab 8302.00km2
199 199 99.50 108996.00inhab 8302.00km2
200 200 100.00 150109.00inhab 8302.00km2
Statistics on Accumulation curve
================================
Smallest = 63214.00inhab
Largest = 201496.00inhab
Mean = 112426.59inhab
St. dev. = 25451.57inhab
Statistics on Surface curve
===========================
Smallest = 8302.00km2
Largest = 8302.00km2
Mean = 8302.00km2
St. dev. = 0.00km2
Quantiles on Accumulation Risk curves
=====================================
P 5 = 157580.00inhab
P50 = 111007.00inhab
P95 = 74521.00inhab
The number of inhabitants exposed to NO2 concentrations higher than 40 µg/m3 is between 63214
and 201496, with a mean of 112427.
(fig.: risk curve of the accumulations (inhab) vs cumulative frequency, with P5 = 157580, P50 = 111007 and P95 = 74521 inhab marked)
Soil pollution 649
14 Soil pollution
This case study is based on a data set kindly provided by TOTAL
Dépôts Passifs.
Coordinates and pollutant grades have been transformed for
confidentiality reasons.
The case study covers rather exhaustively a large panel of Isatis
features. Its main objectives are to:
l estimate the 3D total hydrocarbons (THC) on a contaminated site
using classical geostatistical algorithms,
l interpolate the site topography to exclude from the calculations the
3D grid cells above the soil surface,
l use simulations to perform risk analysis by:
- estimating the local risk of exceeding a threshold of 200 mg/kg,
- quantifying the statistical distribution of the contaminated
volume of soil.
Last update: Isatis version 11.0
14.1 Presentation of the data set
14.1.1 Creation of a new study
First, a new study has to be created using the File / Data File Manager functionality.
(snap. 14.1-1)
It is then advised to verify the consistency of the units defined in the Preferences / Study
Environment / Units panel:
l Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal
with Length = 10 and Digits = 2.
l Graphical Axis Units window: X, Y and Z units in meters.
14.1.2 Import of the data
14.1.2.1 Import of THC grades
The first data set is provided in the Excel file THC.xls (located in the Isatis installation directory). It
contains the values of THC measured on the site.
The File / Import / Excel procedure is used to load the data. First, specify the path of your data
using the Excel File button. To create a new directory and a new file in the current study, the Points
File button is used to enter the names of these two items; click on the New Directory button and
give a name, then do the same with the New File button, for instance:
l New directory = Data
l New file = THC
You have to tick the box First Available Row Contains Field Names and click on the Automatic
button to load the variables contained in the file.
Lastly, you have to define the type of each variable:
l The coordinates easting(X), northing(Y) and elevation(Z) for X, Y and Cote (mNGF),
l The numeric 32 bits variables ZTN (mNGF), Prof (m) and Measure,
l The alphanumeric variable Mesh.
Finally, press Run.
(snap. 14.1-2)
14.1.2.2 Import of the topography
The second data set is provided in the Excel spreadsheet Topo.xls. It contains the topography
values measured on the site, which will make it possible to restrict the interpolation of THC grades
to below the soil surface.
To import this file, you have to do a File / Import / Excel in the target directory Data and the new
file Topography.
(snap. 14.1-3)
Note - Be careful to define this file as a 2D file. In this step, the ZTN (mNGF) variable will be
defined as a numeric 32 bits variable, not as the Z coordinate.
14.1.2.3 Import of polygon
The next essential task for this study is to define the area of interest. This contour is loaded as a 3D
polygon.
The polygon delineating the contour of the site is contained in an ASCII file, called
Site_contour.pol, whose header describes the contents:
l the polygon level which corresponds to the lines starting with the ** symbol,
l the contour level which corresponds to the lines starting with the * symbol.
This polygon is read using the File / Polygons Editor functionality. This application stands as a
graphic window with a large Application Menu. You have first to choose the New Polygon File
option to create a file where the 3D polygon attributes will be stored: the file is called Site contour
in the directory Data.
(snap. 14.1-4)
The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
functionality in the Application Menu.
(snap. 14.1-5)
The polygon now appears in the graphic window.
(snap. 14.1-6)
The final action consists in performing the Save and Run task in order to store the polygon file in
the general data file system of Isatis.
Note - This polygon could also have been digitized inside Isatis, using a background map of the site.
14.2 Pre-processing
14.2.1 Creation of a target grid
All the estimation and simulation results will be stored as different variables of a new grid file
located in the directory Grid. This grid, called 3D grid, is created using the File / Create Grid File
functionality. It is adjusted on the Site contour polygon.
(snap. 14.2-1)
Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points.
(snap. 14.2-2)
14.2.2 Delineation of the interpolation area
You have to create a polygon selection on the grid to delineate the interpolation area, using the
File / Selection / From polygons functionality.
(snap. 14.2-3)
SELECTION/INTERVAL STATISTICS:
-----------------------------
New Selection Name = Site contour
Total Number of Samples = 182160
Masked Samples = 32384
Selected Samples = 149776
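The polygon selection amounts to a point-in-polygon test on every grid node; nodes inside the contour are selected, the rest are masked. A self-contained sketch using even-odd ray casting (this is an illustration, not the Isatis implementation):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray casting: return True if (x, y) falls inside the
    closed polygon given as a list of (px, py) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_from_polygon(xs, ys, poly):
    """Build a grid selection: 1 for nodes inside the polygon, 0 (masked) outside."""
    return [[1 if point_in_polygon(x, y, poly) else 0 for x in xs] for y in ys]

square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
sel = select_from_polygon([5.0, 15.0], [5.0], square)   # one node in, one out
```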
14.3 Visualization of THC grades using the 3D viewer
Launch the 3D Viewer (Display / 3D Viewer).
Display the THC grades:
l Drag the Measure variable from the THC file in the Study Contents and drop it in the display
window;
l From the Page Contents, right-click on the Points object (THC) to open the Points Properties
window. In the Points tab, select the 3D Shape mode (sphere) and choose the Rainbow
Reversed Isatis Color Scale in the Color tab.
(snap. 14.3-1)
Tick the Automatic Apply option to automatically assign the defined properties to the graphic
object. If this option is not selected, modifications are applied only when clicking Display.
Display the site contour:
l Drag the Site contour file in the Study Contents and drop it in the display window.
l From the Page Contents, right-click on the Polygons object (Site contour) to open the Polygons
Properties window. In the Color tab, select Constant and click the next colored button to assign
the color of your choice to the polygon. In the Transparency tab, tick the Active Transparency
option to define a level of transparency for the display, in order to see the samples inside.
Tick Legend to display the color scale in the display window. The legend is attached to the current
representation. Specific graphic objects may be added from the Display menu, such as the graphic
axes and their labels, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.
(fig. 14.3-1)
14.4 Exploratory Data Analysis
In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and
variable of interest, namely Measure. To achieve that, click on the Data File button and select the
variable. By pressing the corresponding icons (eight in total), you can successively display several
statistical representations, using default parameters or choosing appropriate ones.
(snap. 14.4-1)
For example, to calculate the histogram with 26 classes between 0 and 520 mg/kg (20 units
interval), first you have to click on the histogram icon (third from the left); a histogram calculated
with default values is displayed, then enter the previous values in the Application / Calculation
Parameters menu bar of the Histogram page. If you select the option Define Parameters Before
Initial Calculations, you can skip the default histogram display.
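The histogram parameters above (26 classes of 20 mg/kg between 0 and 520) amount to equal-width binning; a minimal sketch of that class definition (an illustration, not the Isatis code):

```python
def histogram(values, vmin, vmax, n_classes):
    """Count values into n_classes equal-width bins on [vmin, vmax];
    values outside the range are ignored. The upper bound is closed on
    the last class so vmax itself is counted."""
    width = (vmax - vmin) / n_classes
    counts = [0] * n_classes
    for v in values:
        if vmin <= v < vmax:
            counts[int((v - vmin) / width)] += 1
        elif v == vmax:
            counts[-1] += 1
    return counts

# 26 classes of 20 mg/kg between 0 and 520, as in the text
counts = histogram([5.0, 15.0, 25.0, 510.0, 600.0], 0.0, 520.0, 26)
```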
Clicking on the base map icon (first from the left) displays the dispersion of THC grades. Each
active sample is represented by a cross proportional to the THC value. A sample is active if its
value for the given variable is defined and not masked.
(fig. 14.4-1)
All graphic windows are dynamically linked together. If you want to locate the particularly high
values, select the highest values on the histogram, right-click and choose the Highlight option. The
highlighted values are now represented by a blue star on the base map.
(fig. 14.4-2)
Selecting another section (YOZ or XOZ) in the Application / Graphical Parameters panel of the
base map window allows you to visualize the dispersion of THC grades in depth.
(snap. 14.4-2)
Then, an experimental variogram can be calculated by clicking on the 7th statistical representation,
with 10 lags of 15 m (consistent with the sampling mesh) and a lag tolerance of 0.5. A histogram
displaying the number of pairs can be previewed by clicking on the Display Pairs button.
(snap. 14.4-3)
(snap. 14.4-4)
The number of pairs may be added to the graphic by switching on the appropriate button in the
Application / Graphic Specific Parameters panel. The variogram cloud is obtained by ticking the
Calculate the Variogram Cloud box in the Variogram Calculation Parameters.
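The experimental variogram being computed here is, per lag class, the average of half the squared differences over all sample pairs in that class. A minimal omnidirectional sketch (an illustration of the formula, not the Isatis code):

```python
import math

def experimental_variogram(points, values, lag, n_lags, tol=0.5):
    """Omnidirectional experimental variogram: gamma(h) is the mean of
    0.5*(z_i - z_j)^2 over pairs whose distance falls in the lag class
    centred on h = k*lag, with tolerance tol*lag (0.5 = contiguous bins)."""
    sums, pairs = [0.0] * n_lags, [0] * n_lags
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            k = round(d / lag)
            if 1 <= k <= n_lags and abs(d - k * lag) <= tol * lag:
                sums[k - 1] += 0.5 * (values[i] - values[j]) ** 2
                pairs[k - 1] += 1
    return [s / p if p else None for s, p in zip(sums, pairs)], pairs

# Four samples 15 m apart on a line, matching the borehole mesh of the text
pts = [(0.0, 0.0), (15.0, 0.0), (30.0, 0.0), (45.0, 0.0)]
gamma, npairs = experimental_variogram(pts, [10.0, 12.0, 11.0, 15.0], 15.0, 3)
```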
(fig. 14.4-3)
The experimental variogram shows an important nugget effect. This variability is due to the fact
that we compare pairs of points located in the XOY plane with pairs of points in depth. The
variability of the THC grades seems to be higher vertically than horizontally. You have to take this
phenomenon into account by calculating two experimental variograms, one for each direction. For
that, choose the Directional option. A Slicing Height of 0.5 m ensures that the two directions are
not mixed.
Set Regular Directions to 1, choose Activate Direction Normal to the Reference Plane and choose
the following parameters in Direction Definition:
l Label for regular direction: N0 (default name)
l Tolerance on angle: 90 (in order to consider all samples without overlapping)
l Lag value: 15 m (i.e. approximately the distance between boreholes)
l Number of lags: 10 (so that the variogram will be calculated over 150 m distance)
l Tolerance on distance (proportion of the lag): 0.5
(snap. 14.4-5)
Then choose the following parameters for the direction normal to the reference plane:
l Label for orthogonal direction: D-90
l Tolerance on angle: 45
l Lag value: 1 m
l Number of lags: 4
l Tolerance on distance (proportion of the lag): 0.5
(fig. 14.4-4)
In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. You will call it THC.
14.5 Fitting a variogram model
You must now define a Model which fits the experimental variogram calculated previously. In the
Statistics / Variogram Fitting application, define:
l The Parameter File containing the set of experimental variograms: THC.
l The Parameter File in which you wish to save the resulting model: THC. As the experimental
and the variogram model are stored in different types of parameter file, you may define the same
name for both.
(snap. 14.5-1)
Check the Fitting Window and Global Window toggles; the program automatically displays one
default spherical model. The Fitting window displays one direction at a time (you may choose the
direction to display through Application / Variable & Direction Selection...), and the Global
window displays every variable (if several) and every direction in one graphic.
You can first use the variogram initialization by clicking on Model Initialization; this will
initialize the model with a combination of spherical and cubic models, with or without a nugget
effect. This procedure automatically fits the range and the sill of the variogram (see the Variogram
Fitting section of the User's Guide). Then, when pressing the Edit button in the manual Fitting tab,
the Model Definition sub-window opens and you can choose other parameters. Each modification
of the model parameters can be validated using the Test button in order to update the graphic. The
model must reflect:
l The specific variability along each direction (anisotropy),
l The general increase of the variogram.
Two different structures have been defined (in the Model Definition window, use the Add button to
add a structure, and define its characteristics below, for each structure):
l an exponential model with a (practical) range of 50m and a sill of 3360,
l an anisotropic Linear model with a sill of 1000 and the following respective ranges along U, V
and W: 115m, 115m and 0.85m.
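The fitted model is the sum of its two structures evaluated at the anisotropy-scaled separation vector. A hedged sketch of evaluating it at a lag (dx, dy, dz): the exponential uses the practical-range convention, and the bounded-linear form chosen for the second structure is an assumption for illustration, not necessarily Isatis's exact Linear model:

```python
import math

def reduced_distance(dx, dy, dz, ranges):
    """Anisotropy-scaled distance: each component divided by its range."""
    return math.sqrt((dx / ranges[0]) ** 2 + (dy / ranges[1]) ** 2
                     + (dz / ranges[2]) ** 2)

def thc_model(dx, dy, dz):
    """Sum of the two structures fitted in the text: an isotropic
    exponential (practical range 50 m, sill 3360) and an anisotropic
    linear structure (sill 1000, ranges 115/115/0.85 m), here modelled
    as bounded linear (an assumption made for this sketch)."""
    h = math.sqrt(dx * dx + dy * dy + dz * dz)
    g_exp = 3360.0 * (1.0 - math.exp(-3.0 * h / 50.0))   # practical range
    hr = reduced_distance(dx, dy, dz, (115.0, 115.0, 0.85))
    g_lin = 1000.0 * min(hr, 1.0)
    return g_exp + g_lin
```

The tiny 0.85 m range along W reproduces the much faster vertical increase seen on the D-90 experimental variogram.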
(snap. 14.5-2)
(fig. 14.5-1)
This model is saved in the Parameter File for future use by clicking on the Run (Save) button.
Note - The Automatic Sill Fitting option allows you to ask the system to derive the optimal sill of
each structure. This calculation tends to minimize the distance between the experimental variogram
and the model, taking into account the number of pairs and the distance for each lag in a way which
depends on the Fitting Weights... rule, accessible by switching ON the Show Advanced Parameters
button.
14.6 Selection of the duplicates
In order to avoid matrix inversion problems during kriging, a new Selection variable is created.
The Tools / Look for Duplicates panel is designed to check for data points that are too close
together and allows you to mask them. Samples separated by a distance smaller than 0.1 m are
declared duplicates. The Mask all Duplicates but First option keeps the first of the duplicates
unmasked (i.e. the duplicate with the smallest X-coordinate).
(snap. 14.6-1)
When pressing Run, an Isatis message informs you that two duplicates have been found and
masked in the Without duplicates selection variable.
Duplicates below a distance of : 0.10m
--------------------------------
Total number of discarded samples = 2
Number of groups = 2
Number of duplicates = 2
Minimum grouping distance = 0.00m
SELECTION/DUPLICATES STATISTICS:
--------------------------------
New Selection Name = Without duplicates
Total Number of Samples = 784
Masked Samples = 2
Selected Samples = 782
Note - The presence of duplicates is generally visible on the variogram cloud by the existence of
pairs of points at zero distance.
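The duplicate search described above can be sketched as a greedy scan in increasing X order, keeping the first sample of each close group and masking the rest (an illustration, not the Isatis code):

```python
import math

def look_for_duplicates(points, min_dist=0.1):
    """Return a selection list: True keeps the sample, False masks it.
    A sample is masked when it lies closer than min_dist to an already
    kept sample ('Mask all Duplicates but First' behaviour); samples are
    scanned in increasing X order so the smallest-X duplicate is kept."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    keep = [True] * len(points)
    kept = []
    for i in order:
        if any(math.dist(points[i], points[j]) < min_dist for j in kept):
            keep[i] = False
        else:
            kept.append(i)
    return keep

pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (5.0, 5.0, 1.0)]
keep = look_for_duplicates(pts)   # second point is 0.05 m from the first
```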
14.7 Kriging of THC grades
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
l the Input information: variable Measure in the THC File (with the selection Without
duplicates),
l the following variables in the Output Grid File, where the results will be stored (with the
selection Site contour):
m the estimation result in THC kriging,
m the standard deviation of the estimation error in THC std kriging,
l the Model: THC,
l the neighborhood: moving 3D.
To define the neighborhood, you have to click on the Neighborhood button and you will be asked to
select or create a new set of parameters; in the New File Name area enter the name moving 3D, then
click on OK or press Enter and you will be able to set the neighborhood parameters by clicking on
the respective Edit button.
l The neighborhood type is a moving neighborhood. It is an ellipsoid with No Rotation;
l Set the dimensions of the ellipsoid to 100 m, 100 m and 2 m along the vertical direction;
l Minimum number of samples: 1;
l Number of angular sectors: 1
l Optimum Number of Samples per Sector: 20.
Press OK for the Neighborhood Definition.
(snap. 14.7-1)
(snap. 14.7-2)
In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). By pressing the left mouse button
once, the target grid is shown (in fact an XOY section of it; you may select different sections
through Application / Selection For Display...). You can then move the cursor to a target grid node:
click once more to initiate kriging. The samples selected in the neighborhood are highlighted and
the weights are displayed. The bottom of the screen recalls the estimated value, its standard
deviation and the sum of the weights. The target grid node may also be entered through the Test
Window / Application / Selection of Target option, for instance the node [37,55,10].
(snap. 14.7-3)
In the Application menu of the Test Graphic Window, click on Print Weights & Results. This
produces a printout of:
l the calculation environment: target location, model and neighborhood,
l the kriging system,
l the list of the neighboring data and the corresponding weights,
l the summary of this kriging test.
Results for : Punctual
- For variable V1
Number of Neighbors = 20
Mean Distance to the target = 23.55m
Total sum of the weights = 1.000000
Sum of positive weights = 1.108689
Lagrange parameters #1 = 8.146024
Estimated value = 23.309070
Estimation variance = 1676.879631
Estimation standard deviation = 40.949721
Signal to Noise ratio (final) = 2.600067
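The weights and Lagrange parameter printed above come from solving an ordinary kriging system. A minimal sketch for a single target point, with an illustrative covariance function (the numbers below are not the case-study data):

```python
import numpy as np

def ordinary_kriging(coords, values, target, cov):
    """Solve the ordinary kriging system [C 1; 1' 0][w; mu] = [c0; 1]:
    the estimate is w'z and the kriging variance is C(0) - w'c0 - mu.
    `cov` is a covariance function of the separation distance."""
    n = len(coords)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(np.linalg.norm(np.subtract(coords[i], coords[j])))
    b = np.ones(n + 1)
    b[:n] = [cov(np.linalg.norm(np.subtract(c, target))) for c in coords]
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    est = float(w @ values)
    var = float(cov(0.0) - w @ b[:n] - mu)
    return est, var, w

# Illustrative exponential covariance (sill 3360, practical range 50 m)
cov = lambda h: 3360.0 * np.exp(-3.0 * h / 50.0)
est, var, w = ordinary_kriging([(0, 0), (30, 0), (0, 30)],
                               [20.0, 30.0, 25.0], (10.0, 10.0), cov)
```

The unbiasedness row forces the weights to sum to one, as recalled at the bottom of the test window.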
You may also ask for a 3D representation of the search ellipsoid: if the 3D Viewer application is
already running, from the Application menu ask to Link to 3D Viewer. A 3D representation of the
search ellipsoid neighborhood is displayed, and the samples used for the estimation of the node are
highlighted. A new graphic object, neighborhood, appears in the Page Contents, from which you
may change the graphic properties (color, size of the samples coding the weights or the THC
values, etc.).
(fig. 14.7-1)
Click on Run to interpolate the data on the entire grid.
14.8 Intersection of interpolation results with the
topography
The aim of this part is to interpolate the site topography so that the 3D grid cells above the soil
surface can be excluded from the simulation results.
The idea is to copy the topography interpolated on a 2D grid to the 3D grid and to select the cells
under the surface by comparing the topography values with the Z-coordinate.
This section is not relevant if the topography of your site can be considered flat.
14.8.1 Creation of a 2D grid
The estimation of the topography is calculated on a 2D grid whose parameters along X and Y are
the same as those of the previous 3D grid (origin and resolution unchanged). This new grid is saved
in a new grid file, 2D grid.
(snap. 14.8-1)
A selection from the Site contour polygon is also applied to the new 2D grid, so that the
topography is not interpolated outside the site area.
(snap. 14.8-2)
SELECTION/INTERVAL STATISTICS:
-----------------------------
New Selection Name = Site contour
Total Number of Samples = 7920
Masked Samples = 1408
Selected Samples = 6512
14.8.2 Exploratory data analysis
The experimental variogram of the topography is computed in the Statistics / Exploratory Data
Analysis panel.
(snap. 14.8-3)
A first experimental variogram is calculated with 10 lags of 15m and a proportion of the lag of 0.5.
(fig. 14.8-1)
This variogram shows an important nugget effect, which does not seem to be due to a single
sample. A variogram map can be computed by clicking on the last statistical representation of the
panel. This specific tool allows you to analyze the spatial continuity of the variable of interest in
all directions of space, and in particular to detect possible anisotropies.
The following parameters are defined:
l 14 directions
l 10 lags of 15 m, as previously
l a tolerance of 0 on the lag, so that the same pair of points is not counted in two consecutive
classes
l a tolerance on directions of 3 sectors, to smooth the map and highlight the principal directions
of anisotropy
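A variogram map tabulates the experimental variogram per direction sector and lag class. A simplified sketch of that bookkeeping, without the lag and sector tolerances (an illustration, not the Isatis code):

```python
import math

def directional_variograms(points, values, lag, n_lags, n_dir):
    """Variogram-map style table: mean of 0.5*(z_i - z_j)^2 per
    (direction sector, lag class). The 180-degree half-plane is split
    into n_dir sectors; each pair is assigned to the nearest lag multiple."""
    sums = [[0.0] * n_lags for _ in range(n_dir)]
    pairs = [[0] * n_lags for _ in range(n_dir)]
    sector = math.pi / n_dir
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            d = math.hypot(dx, dy)
            k = round(d / lag)
            if not 1 <= k <= n_lags:
                continue
            ang = math.atan2(dy, dx) % math.pi        # fold into [0, pi)
            s = int(ang / sector + 1e-9) % n_dir      # guard FP at boundaries
            sums[s][k - 1] += 0.5 * (values[i] - values[j]) ** 2
            pairs[s][k - 1] += 1
    return sums, pairs

# One pair along X (sector 0) and one along Y (sector 7 of 14)
pts = [(0.0, 0.0), (15.0, 0.0), (0.0, 15.0)]
sums, pairs = directional_variograms(pts, [1.0, 3.0, 2.0], 15.0, 2, 14)
```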
(snap. 14.8-4)
Studying the map, you can see that the variability seems to be higher along Y than along X up to a
distance of 80 m. The variograms along these two directions are calculated directly from the
variogram map: pick the N90 direction label, right-click and choose Active Direction (and do the
same for the N0 direction).
(fig. 14.8-2)
The anisotropic variogram is saved in a parameter file under the name Topography anisotropic.
It is fitted with a variogram model composed of:
l an anisotropic spherical model with a sill of 0.14 and the respective ranges along U and V:
135m and 75m
l an exponential model with a range of 20m and a sill of 0.13
(snap. 14.8-5)
(fig. 14.8-3)
14.8.3 Kriging of topography
The kriging of the topography requires the definition of:
l the ZTN (mNGF) variable as Input File,
l two new variables, Topography anisotropic kriging and Topography anisotropic std kriging,
as Output File in the 2D grid file, to store respectively the estimation result and the standard
deviation of the estimation error,
l the Model of variogram Topography anisotropic,
l the new unique Neighborhood.
(snap. 14.8-6)
(snap. 14.8-7)
14.8.4 Displaying the results of the estimation of topography
The kriging results of topography are now visualized using several combinations of the display
capabilities.
You are going to create a new Display template, which consists of an overlay of a grid raster and a
representation of the topography by isolines. All the display facilities are explained in detail in the
Displaying & Editing Graphics chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together
with a Contents window. You have to specify the contents of your graphic in this window. To
achieve that:
l First, give a name to the template you are creating: Topography. This will allow you to easily
display this template again later.
l In the Contents list, double click on the Raster item. A new window appears, in order to let you
specify which variable you want to display and with which color scale:
m In the Data area, in the 2D grid file select the variable Topography anisotropic kriging
with the Site contour selection,
m Specify the title that will be given to the Raster part of the legend, for instance Topo
(mNGF),
m In the Graphic Parameters area, specify the Color Scale you want to use for the raster
displayed. You may use an automatic default color scale, or create a new one specifically
dedicated to the variable of interest. To create a new color scale, click on the Color Scale
button, double-click on New Color Scale and enter a name: Topo, and press OK. Click on
the Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and enter the min and the max bounds (respectively 27 and
30.3).
- Change the Number of Classes (30).
- Switch on the Invert Color Order toggle in order to assign the red colors to the large val-
ues of topography.
- Click on the Undefined Values button and select Transparent.
- In the Legend area, switch off the Display all Tick Marks button, enter 0.4 as the step
between the tick marks. Then, specify that you do not want your final color scale to
exceed 6cm. Switch off the Display Undefined Values button.
- Click on OK.
m In the Item contents for: Raster window, click on Display to display the result.
(snap. 14.8-8)
l Back in the Contents list, double-click on the Isolines item. Click Grid File to open a File
Selector, select the 2D grid file and then the variable to be represented, Topography anisotropic
kriging.
m The Legend Title is not active, as no legend is attached to this type of representation.
m The isolines representation requires the definition of classes. A class is an interval of values
separated by a given step. In the Data Related Parameters area, switch on the C1 line, enter
27 and 30 as lower and upper bounds and choose a step equal to 0.2.
m To avoid overloading the graphic, the Label Flag attached to the class is left inactive.
m Close the current Item Contents and click on Display.
l In the Items list, you can select any item and decide whether or not you want to display its
legend. Use the Up and Down arrows to modify the order of the items in the final display.
l Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(snap. 14.8-9)
The * and [Not saved] symbols respectively indicate that some recent modifications have not been
stored in the Topography graphic template, and that this template has never been saved. Click on
Application / Store Page to save them. You can now close your window.
Create a second template, Topography std kriging, to display the kriging standard deviation using
the Raster item in the Contents list and a new Color Scale. To overlay the ZTN (mNGF) data
locations on the grid raster representing the estimation error:
l Back in the Contents list, double-click on the Basemap item to represent the ZTN (mNGF)
variable with symbols proportional to the variable value. A new Item Contents window appears.
In the Data File area, select the Data / Topography / ZTN (mNGF) variable as the proportional
variable. Enter Topo data as the Legend Title. Leave the other parameters unchanged; by
default, black crosses will be displayed with a size proportional to the values of topography.
Click on Display Current Item to check your parameters, then on Display to see all the
previously defined components of your graphic. Click on OK to close the Item Contents panel.
l To remove the white edge, click on the Display Box tab and select the Containing a set of items
mode. Choose the raster to define the display box correctly.
Finally, click on Display. The result should be similar to the one displayed hereafter.
(fig. 14.8-4)
14.8.5 Selection of the grid cells under the surface of the soil
The first task consists in copying the estimation of the topography from the 2D grid to the 3D grid
using Tools / Migrate / Grid to Point.
(snap. 14.8-10)
A new Selection variable Under Topo is created using the File / Calculator to store the result of the comparison between the estimated topography and the Z-coordinate. The 3D grid cells whose Z-coordinate is higher than the corresponding topography value (i.e. the cells above the surface) are masked; the cells outside the site contour are also masked, because the Site contour selection variable is activated on the input file. You have to apply the following transformation in File / Calculator:
s1=ifelse(v1<v2,1,0)
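Outside Isatis, the effect of this Calculator expression can be sketched with NumPy (a minimal illustration; the assumption that v1 holds the cell Z-coordinate and v2 the topography estimate migrated onto the 3D grid is ours):

```python
import numpy as np

# Hypothetical stand-ins for the Calculator inputs: v1 = cell Z-coordinate,
# v2 = topography estimate migrated onto the 3D grid.
z_cell = np.array([26.0, 27.5, 29.0, 31.0])   # cell center elevations
topo   = np.array([28.0, 28.0, 28.5, 30.0])   # estimated topography above each cell

# s1 = ifelse(v1 < v2, 1, 0): 1 (kept) where the cell lies under the surface,
# 0 (masked) where it sticks out above the topography.
under_topo = np.where(z_cell < topo, 1, 0)
```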
(snap. 14.8-11)
This Under topo selection will be used throughout the rest of the study (it will be activated on the output file and in the graphic representations).
14.9 3D display of the estimated THC grades
You can start from the THC grades page created previously, to avoid redisplaying the data.
Drag and drop the THC kriging variable from the 3D grid file into the display window. In the Page contents, right-click on the 3D grid object to edit its properties:
l in the 3D Grid tab, tick the selection toggle, choose the Under topo selection and activate the Automatic Apply function;
l in the Color tab, make sure that the selected variable is THC kriging. Apply a THC Isatis Color
Scale created in the File / Color Scale functionality (25 classes from 0 to 500 mg/kg);
l in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V is Defined option so that the cells with undefined values (colored in grey by default) are not displayed;
(fig. 14.9-1)
l investigate inside the kriged model:
m open the clipping plane functionality from Display / Clipping Plane: the clipping plane
appears across the block model;
m go in Selecting mode by pressing the arrow button in the function bar;
m click on the clipping plane rectangle and drag it next to the block model for better visibility;
m click on one of the clipping plane's axes to change its orientation (be careful to target precisely the axis itself, in dark grey, not its squared extremity nor the white center tube);
m open the Points Properties window of the THC file: set the Allow Clipping toggle OFF
(ditto for the polygon);
m click on the clipping plane's white center tube and drag it in order to translate the clipping
plane along the axis. You may also benefit from the clipping controls parameters available
on the right of the graphic window in order to clip a slice with a fixed width and along the
main grid axes.
m you can click on one cell of particular interest or on a sample: its information is displayed in the top right corner (take care to deactivate the polygon so as not to select it instead).
(snap. 14.9-1)
14.10 THC simulations
Kriging provides the best estimate of the variable at each grid node. By doing so, however, it does not reproduce the true variability of the phenomenon. Risk analysis usually requires computing quantities that must be derived from a model representing the actual variability; in this case, advanced geostatistical techniques such as simulations have to be used.
This is, for instance, the case here if you want to estimate the probability that THC exceeds a given threshold. Since thresholding is not a linear operator, applying a threshold to the kriged result (which is obtained by a linear operator) can introduce a significant bias. A similar problem arises when estimating the statistical distribution of a contaminated volume of soil. Simulation techniques generally require a multi-gaussian framework: each variable therefore has to be transformed into a normal distribution beforehand, and the simulation results must be back-transformed to the raw distribution afterwards.
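A toy numerical illustration (not Isatis output; all values are hypothetical) of why thresholding an averaged estimate is biased: at a node with a skewed concentration distribution, the mean can sit below the cutoff even though a sizeable share of plausible outcomes exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 200.0

# Hypothetical skewed concentrations at one grid node, as simulations
# would sample them.
values = rng.lognormal(mean=4.5, sigma=1.0, size=10000)

kriged_like = values.mean()                      # a linear average, like kriging
prob_from_mean = float(kriged_like > threshold)  # thresholding the estimate: 0 or 1 only
prob_from_sims = (values > threshold).mean()     # share of outcomes above the cutoff
```

The mean here is well below 200 mg/kg, so thresholding the estimate reports a zero probability, while roughly a fifth of the outcomes actually exceed the cutoff.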
14.10.1 Gaussian transformation
A conditional simulation produces a grid of values that follow a normal distribution and honor the variogram model. Moreover, it honors the data points, as it uses a conditioning step based on kriging, which requires the definition of a neighborhood. The simulations therefore also need the gaussian transformation and a variogram model fitted on this normal variable.
Using the Statistics / Gaussian Anamorphosis Modeling procedure, you can fit and display this
anamorphosis function and transform the raw variable into a new gaussian variable Measure
Gauss.
Select the Measure variable with the Without duplicates selection on Input data.
The Interactive Fitting button overlays the experimental anamorphosis with its model expanded in terms of Hermite polynomials: this step function gives the correspondence between each of the sorted data values (vertical axis) and the corresponding frequency quantile on the gaussian scale (horizontal axis). A good correspondence between the experimental values and the model is obtained by choosing an appropriate number of Hermite polynomials; by default Isatis suggests 30 polynomials, but you can modify this number: choose 50 polynomials here.
Switch on the Gaussian Transform and create a new variable Measure Gauss on the Output data.
Three interpolation options are available; we recommend the Empirical Inversion method in this case. Save the anamorphosis by clicking on the Point Anamorphosis button and name it THC. Finally, click on Run.
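The Isatis anamorphosis fits Hermite polynomials; the sketch below only mimics the empirical, rank-based part of the gaussian transform, so that tied raw values share one gaussian value (the data values are hypothetical):

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(raw):
    """Empirical gaussian transform sketch: map each value to the normal
    quantile of its (tie-averaged) rank, so that equal raw values get
    equal gaussian values."""
    n = len(raw)
    ranks = rankdata(raw, method="average")   # ties share the same rank
    return norm.ppf(ranks / (n + 1))          # avoids ppf(0) and ppf(1)

# Hypothetical concentrations, several of them at the detection limit.
raw = np.array([5.0, 5.0, 5.0, 12.0, 40.0, 150.0, 260.0, 5.0])
gauss = normal_scores(raw)
```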
(snap. 14.10-1)
(fig. 14.10-1)
Using the Statistics / Exploratory Data Analysis on this new variable, you can first compute its basic statistics over the 782 samples: the mean is 0.00 and the variance is 0.96. The distribution of the gaussian variable is not symmetric, with a minimum of -1.2, a maximum of 3.3 and a large proportion of identical low values. This is due to the large share of values equal to the limit of detection, combined with the anamorphosis method used: the gaussian value is calculated from the empirical cumulative distribution, so two points with the same raw value get the same gaussian value. This method is preferred to the frequency inversion method, which would assign different gaussian values to two points with the same raw value. In the context of the study, the asymmetry of the gaussian variable is not very important, because the threshold of 200 mg/kg that we consider is higher than the limit of detection.
(fig. 14.10-2)
The experimental variogram is well structured. The following one is computed using the same calculation parameters as in the non-gaussian case. To load the parameters of an existing variogram, click on Load Parameters from Standard Parameter File... and select the experimental variogram THC.
(fig. 14.10-3)
This variogram is saved in a file called THC gauss.
In Statistics / Variogram Fitting, you fit a model composed of:
l an anisotropic Exponential model with a sill of 0.58 and the following respective ranges along
U, V and W: 43m, 43m and 6m.
l an anisotropic Linear model with a sill of 0.25 and the following respective ranges along U, V
and W: 115m, 115m and 2.4m.
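The fitted model can be evaluated as follows (a sketch only; we assume the ranges are practical ranges for the exponential component, and that the "linear with sill" component grows linearly up to its range and then levels off at the sill):

```python
import numpy as np

def gamma_nested(hu, hv, hw):
    """Nested variogram sketch for the Measure gauss model: an anisotropic
    exponential (sill 0.58, ranges 43/43/6 m) plus an anisotropic linear
    component capped at its sill (0.25, ranges 115/115/2.4 m)."""
    def reduced(hu, hv, hw, au, av, aw):
        # Anisotropic reduced distance along the U, V, W axes.
        return np.sqrt((hu / au) ** 2 + (hv / av) ** 2 + (hw / aw) ** 2)

    h1 = reduced(hu, hv, hw, 43.0, 43.0, 6.0)     # exponential component
    h2 = reduced(hu, hv, hw, 115.0, 115.0, 2.4)   # linear-with-sill component
    return 0.58 * (1.0 - np.exp(-3.0 * h1)) + 0.25 * np.minimum(h2, 1.0)

# Far beyond all ranges, the variogram reaches the total sill 0.58 + 0.25.
sill_total = gamma_nested(1e6, 1e6, 1e6)
```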
(snap. 14.10-2)
(snap. 14.10-3)
(fig. 14.10-4)
14.10.2 Creation of a remediation grid
According to the remediation strategy, a 3D grid of 15 x 15 x 0.5 m is created in order to calculate
the volume of soil to be excavated.
(snap. 14.10-4)
As previously, create the selection variable Under topo on the 3D grid remediation, so that the cells above the surface are not taken into account in the computation of contaminated soil.
(snap. 14.10-5)
*** Variable Statistics ***
Directory Name : Grid
File Name : 3D grid remediation
Variable Name : Under topo
Variable Type : Float (Selection)
Bit Length : 1
Unit :
Last Modification : Jan 08 2009 11:41:48
Size : 733 bytes
Physical Path : \\Bigtwo\etudes\Documentation\80\Soil pollution\GTX\DIRE.2\FILE.3\VARI.10
Printing Format : Integer, Length = 3
Variable Description :
Creation Date: Jan 08 2009 11:41:46
Number of Selected Samples : 3210 / 5060
*** END of Variable Statistics ***
14.10.3 Turning bands simulations
To make these simulations, you are going to use the turning bands method (Interpolate / Condi-
tional Simulations / Turning Bands). You use the same moving 3D neighborhood as in the kriging
step. The additional parameters consist of:
l the name of the Macro Variable: each simulation is stored in this Macro Variable with an index
attached,
l the number of simulations: 200 in this exercise,
l the starting index for numbering the simulations: 1 in this exercise,
l the Gaussian back transformation is performed using the anamorphosis function: THC. In a
first run, this anamorphosis will be disabled in order to study the gaussian simulations,
l the seed used for the random number generator: 423141 by default. This seed allows you to perform many simulations in several steps: each step will be different from the previous one if the seed is modified.
The final parameters are specific to the simulation technique. When using the Turning Bands method, you simply need to specify the number of bands: a rule of thumb is to enter a number much larger than the number of rows or columns of the grid, and smaller than the total number of grid nodes; 500 bands are chosen in this exercise.
You can check, on a few simulations in the gaussian space, that the histogram is indeed gaussian and that the experimental variogram reproduces the structure of the THC Gauss model, particularly at small scale. After this quality control, you can enable the Gaussian back transformation THC and perform block simulations on the 3D grid remediation.
(fig. 14.10-5)
The Type of calculation is set as Block. Block simulations are obtained by averaging simulated
points. Each block is discretized in sub-blocks according to the block discretization parameters and
each sub-block is simulated as a point.
The block discretization is defined in the Neighborhood window: it will be set to 3x3x2 for quicker
calculations.
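The block-averaging step can be sketched as follows (random values stand in for the actual conditional turning bands point simulations, so the numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_points = 200, 3 * 3 * 2   # 200 realizations, 3x3x2 block discretization

# Stand-ins for the point simulations at the discretization nodes of one block
# (in Isatis these come from the conditional turning bands simulation).
point_sims = rng.lognormal(mean=4.0, sigma=1.0, size=(n_sims, n_points))

# One block value per realization: the average of its simulated sub-blocks.
block_sims = point_sims.mean(axis=1)
```

Averaging over the discretization reduces the spread of the block values relative to the point values, which is the expected support effect.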
(snap. 14.10-6)
(snap. 14.10-7)
(snap. 14.10-8)
Clicking on the Calculate Cvv button computes the average covariance of each block from its discretization. This covariance should be practically constant over all the blocks.
Calculation of the Mean Block Covariance :
------------------------------------------
Regular discretization : 3 x 3 x 2
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variables Measure gauss
Cvv = 0.323526
Cvv = 0.318589
Cvv = 0.316975
Cvv = 0.322070
Cvv = 0.324283
Cvv = 0.324562
Cvv = 0.317189
Cvv = 0.327152
Cvv = 0.323802
Cvv = 0.324525
Cvv = 0.322971
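The Cvv computation can be sketched as the average covariance over a randomized discretization of the block (the isotropic exponential covariance and its parameters below are hypothetical stand-ins for the fitted model):

```python
import numpy as np

def cvv(dx, dy, dz, ndisc=(3, 3, 2), sill=0.83, prange=43.0, seed=0):
    """Mean block covariance sketch: average C(xi - xj) over all pairs of
    discretization points of a (dx, dy, dz) block. A small random offset per
    point mimics the randomized discretization mentioned in the printout;
    the exponential covariance used here is a hypothetical stand-in."""
    rng = np.random.default_rng(seed)
    nx, ny, nz = ndisc
    # Regular discretization points (sub-block centers)...
    xs = (np.arange(nx) + 0.5) / nx * dx
    ys = (np.arange(ny) + 0.5) / ny * dy
    zs = (np.arange(nz) + 0.5) / nz * dz
    pts = np.array([(x, y, z) for x in xs for y in ys for z in zs])
    # ...plus a random jitter within each sub-block.
    pts += rng.uniform(-0.5, 0.5, pts.shape) * np.array([dx / nx, dy / ny, dz / nz])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    cov = sill * np.exp(-3.0 * d / prange)   # C(h) = sill * exp(-3h / range)
    return cov.mean()

c = cvv(15.0, 15.0, 0.5)   # one trial for a 15 x 15 x 0.5 m remediation block
```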
Note - Performing the simulations on the 2.5 x 2.5 x 0.5 m grid allows you to test different sizes of remediation grid. A Copy Statistics / Grid -> Grid computes, for each block of the remediation grid, the mean of a given simulation on the 2.5 x 2.5 x 0.5 m grid. This calculation is carried out for each simulation (i.e. for each index of simulation) through a journal file.
%LOOP i = 1 TO 200
#
******* Bulletin Name ******* =B= Copy Grid Statistics to Grid
***** Bulletin Version ****** =N= 600
Input Directory Name =A= Grid
Input File Name =A= 3D grid
Input Selection Name =A= Under topo
Variable Name =A= Simulations THC[$0i]
Minimum Bound Name =A= None
Maximum Bound Name =A= None
Output Directory Name =A= Grid
Output File Name =A= 3D grid remediation
Output Selection Name =A= Under topo
Number Name =A= None
Minimum Name =A= None
Maximum Name =A= None
Mean Name =A= Simulations THC block[$0i]
Std dev Name =A= None
#
%ENDLOOP
14.11 Simulation post-processing
One main advantage of simulations is the possibility of applying non-linear calculations (for example applying several cut-off grades simultaneously, calculating the probability for a grade to exceed a threshold, or calculating a contaminated soil volume).
14.11.1 Statistical and probability maps
The Tools / Simulation Post Processing panel provides a procedure for the post-processing of a macro variable. Considering the 200 simulations, you ask the procedure to perform the following tasks sequentially:
l calculation of the mean and standard deviation of the 200 simulations,
l calculation of the probability of exceeding a cutoff of 200 mg/kg.
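With the simulations stacked in an array, both maps reduce to per-cell statistics across realizations (a sketch on hypothetical values, not the case-study data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_cells = 200, 5060

# Hypothetical back-transformed THC simulations (mg/kg), one row per realization.
sims = rng.lognormal(mean=4.0, sigma=1.2, size=(n_sims, n_cells))

mean_map = sims.mean(axis=0)              # "Simulations THC mean" per cell
std_map = sims.std(axis=0)                # "Simulations THC std" per cell
prob_map = (sims >= 200.0).mean(axis=0)   # probability of exceeding 200 mg/kg
```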
(snap. 14.11-1)
Check the toggle Statistical Maps and press Edit in order to define the output file variables
Simulations THC mean and Simulations THC std.
(snap. 14.11-2)
Check the toggle Iso Cutoff Maps and press Edit in order to define the cutoff of 200 mg/kg.
(snap. 14.11-3)
(snap. 14.11-4)
Close and press Run.
14.11.2 Risk curves on volume
In contrast with the previous statistics, which are calculated over the whole set of realizations but separately for each node of the grid, the program here works realization by realization and computes global statistics. These statistics are expressed as Risk Curves.
Each realization produces two quantities:
l the Accumulations. For each realization (each index of the Macro Variable), the program calculates the sum of all the values of the variable that are greater than or equal to the Cutoff (if the value is smaller than the cutoff, the cell is not taken into account). This sum is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D).
l the Surfaces/Volumes. Instead of summing the values for each realization, the program simply counts the nodes where the Accumulation has been calculated. This count is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D). This curve provides, for each realization of the variable, the surface (in 2D) or the volume (in 3D) of the cells (or blocks) where the variable is greater than or equal to the cutoff.
(snap. 14.11-5)
The cutoff of 200 mg/kg is entered in the main panel. Tick the Risk Curves option and press Edit to define:
l the Unit Name used to display the results in the printout. By default, the volume values are expressed in m³, but in our case they can be expressed in 10³ m³ (equal to 1000 m³) to keep the printed results compact.
l the Global Statistics (on Polygons):
m Draw Risk Curve on Volumes. The volume values of all the realizations are sorted in decreasing order and displayed as an inverse cumulative histogram: for each volume cutoff on the abscissa, the curve gives the probability of getting a result greater than that value. The greater the volume cutoff, the smaller the probability.
m Print Statistics. The accumulation of the target variable and the volume of soil contaminated by THC values higher than 200 mg/kg are printed in the Isatis Message Window for each realization. The order in which these results are printed depends on the specified Sorting Order.
(snap. 14.11-6)
Click Apply to compute and display the risk curves and leave the dialog box open.
(fig. 14.11-1)
The graphic figure containing the risk curves offers an Application Menu with a single item: Graphic Parameters, where you can define quantiles. Tick the Highlight Quantiles option to compute the quantiles of your choice, and click on Show the Simulation Value on Graphic to display the simulation value for each previously selected quantile on the graphic.
(snap. 14.11-7)
Statistics for Simulation Post Processing
=========================================
Target Variable : Macro variable = Simulations THC block[xxxxx] [count=200]
Cutoff = 200.00
Number of outcomes = 200
The 5060 values are processed using 1 buffers of 5060 data each
Cell dimension along X = 15.00m
Cell dimension along Y = 15.00m
Rank Macro Frequency Accumulation Volume
1 1 0.50 1874.04 ×10³ m³ 6.19 ×10³ m³
2 2 1.00 2671.71 ×10³ m³ 8.78 ×10³ m³
3 3 1.50 2097.14 ×10³ m³ 7.20 ×10³ m³
.../...
198 198 99.00 5320.81 ×10³ m³ 16.54 ×10³ m³
199 199 99.50 3430.19 ×10³ m³ 11.03 ×10³ m³
200 200 100.00 1678.61 ×10³ m³ 5.96 ×10³ m³
Statistics on Accumulation curve
================================
Smallest = 1492.94 ×10³ m³
Largest = 6127.95 ×10³ m³
Mean = 3006.70 ×10³ m³
St. dev. = 988.23 ×10³ m³
Statistics on Volume curve
===========================
Smallest = 5.06 ×10³ m³
Largest = 18.45 ×10³ m³
Mean = 9.67 ×10³ m³
St. dev. = 2.80 ×10³ m³
Quantiles on Volume Risk curves
================================
P 5 = 15.19 ×10³ m³
P50 = 9.11 ×10³ m³
P95 = 5.96 ×10³ m³
The volume of soil contaminated by a concentration of THC higher than 200 mg/kg is between 5.06×10³ m³ and 18.45×10³ m³, with a mean of 9.67×10³ m³.
14.12 Displaying graphical results of risk analysis with
the 3D Viewer
Drag and drop the Probability 200 mg/kg variable from the 3D grid remediation file into the display window. In the Page contents, right-click on the 3D grid object to edit its properties:
l in the 3D Grid tab, tick the selection toggle, choose the Under topo selection and activate the Automatic Apply function;
l in the Color tab, make sure that the selected variable is Probability 200 mg/kg. Apply a Proba
Isatis Color Scale created in the File / Color Scale functionality (25 classes from 0 to 1);
l in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V > option to display only the cells with a probability value higher than 0.2, for example;
You can add, as previously, the polygon Site contour to delineate the area, and the THC data to compare the measured values with the probability of exceeding a threshold of 200 mg/kg in each remediation cell.
(snap. 14.12-1)
(fig. 14.12-1)
15 Bathymetry
This case study is based on a data set kindly provided by IFREMER,
the French Research Institute for Exploitation of the Sea, from La
Rochelle (www.ifremer.fr).
The case study illustrates how to set up, from several campaigns, a unified bathymetric model which ensures the consistency of both:
the data processing, merging and modeling procedures,
the bathymetry product delivered for a whole region.
The last paragraph focuses on an innovative methodology using local parameters to achieve a better adequacy between the geostatistical model and the data.
Last update: Isatis version 11.0
15.1 Presentation of the Data set
15.1.1 Creation of a new study
First, before loading the data, create a new study using the File / Data File Manager functionality.
(snap. 15.1-1)
It is then advised to check the consistency of the units defined in the Preferences / Study
Environment / Units panel:
l Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal
with Length = 10 and Digits = 2.
l Graphical Axis Units window: X and Y units in kilometers.
15.1.2 Import of the data
15.1.2.1 Import of bathymetry data sets
The first data set is provided in the Ascii file DDE_Boyard_2000.csv (located in the Isatis installation
directory). It contains the values of bathymetry measured on the Fort Boyard area. The coordinates
are defined in a geographic system in latitude/longitude. As Isatis is designed to work with sample
locations defined in a Cartesian coordinate system, it is necessary to compute the Cartesian
coordinates X and Y from the geographic coordinate system using a projection system. We choose
to work in a Lambert zone II (extended) projection.
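Outside Isatis, the same conversion can be sketched with the pyproj library (an illustration, not part of the Isatis workflow; EPSG:4275 is the NTF geographic system on the Clarke 1880 ellipsoid, and EPSG:27572 is NTF / Lambert zone II extended, with the 2 200 000 m Y origin mentioned below):

```python
from pyproj import Transformer

# Lat/long (NTF geographic) -> Lambert zone II extended, X/Y in meters.
to_lambert2e = Transformer.from_crs("EPSG:4275", "EPSG:27572", always_xy=True)

lon, lat = -1.2, 45.9   # a hypothetical point near the study area
x, y = to_lambert2e.transform(lon, lat)
```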
The procedure File / Import / ASCII is used to load the data. First you have to specify the path of
your data using the ASCII Data File button.
The second step consists in creating an external file referred to as the Header File. This header file
will contain a full description of the contents of the data file to be read (type of organization, details
on the variables, description of the input fields). It can be included at the beginning of the data file
or, as in our case, separated and created from scratch.
You can click on the Preview button to bring up the Data and Header Preview window. It is designed to help you build the header.
(snap. 15.1-2)
As the header file is not contained in the data file, click Build New Header and a new dialog box
pops up. The different tabs have to be filled in as follows:
l Data Organization: this first tab is used to define the file type, dimension and specific
parameters. Select Points for Type of File and 2D for Dimension. The bathymetry will be
considered as a numeric variable and not as a third coordinate.
l Options: this second tab defines how data are arranged in the file.
m In our case, the values are comma-separated: tick the CSV Input (Comma Separated Value) option, choose ',' as Values Separator and '.' as Decimal Symbol. Specify that you want to skip the first line by setting Skip 1 File Lines at the Beginning.
(snap. 15.1-3)
m As the data coordinates are defined in a geographic system, select the Coordinates are in latitude/longitude format option. Choose the -45.6533 / 22.578 format to specify that the input coordinates are given in decimal degrees. You then need to define the projection system. Click
Build/Edit Projection File to create a new projection file. The Projection Parameters dialog
box pops up.
- Click New Projection File to enter a name for the new projection file: lambert2e.proj.
- Select clarke-1880 as reference in the Ellipsoid list.
- Select Lambert as Projection Type. First, choose France / Center (II) as Lambert System. Then switch it to User Defined in order to change the Y Origin from 200000 to 2200000.
- Click Save to store the parameters and close the Projection Parameters dialog box.
(snap. 15.1-4)
l Base Fields: this tab is used to specify how the input data fields will be read and stored as new variables in Isatis. Click Automatic Fields to automatically create as many fields as appear in the data file. The names of the variables will be those given in the first line (the skipped first line is considered as containing the variable names). Finally, you have to define the type of each variable:
m The coordinates 'Easting Degrees' and 'Northing Degrees' for long and lat,
m The bathymetry Z is considered as a 'Numeric 32 bits'.
(snap. 15.1-5)
Click Save As to save the edited header in a file; name it header.txt and Close. This header can be reused for the other files which share the same structure. The header created should look as follows:
#
# structure=free
#
# csv_file=Y, csv_sep=",", csv_dec="."
# nskip=1
#
# proj_file="D:\Profile\Bureau\Case_study_bathy\Data\lambert2e.proj"
# proj_coord_rep=0
#
# field=1 , type=ewd , name="long";
# f_type=Decimal , f_length=10 , f_digits=2, unit="";
# factor=1
# field=2 , type=nsd , name="lat";
# f_type=Decimal , f_length=10 , f_digits=2, unit="";
# factor=1
# field=3 , type=numeric , name="Z" , ffff="" ;
# bitlength=32 , unit="" ;
# f_type=Decimal , f_length=10 , f_digits=2
Once your header is ready, you have to choose where and how your data will be stored in the Isatis
database. Select the mode Create a New File to import the complete data set. Then, create a new
directory and a new file in the current study. The button NEW Points File is used to enter the names
of these two items; click on the New Directory button and give a name, do the same for the New
File button, for instance:
l New directory = Data
l New file = DDE Boyard 2000
Finally, press OK and then Import.
(snap. 15.1-6)
Do the same for the two other files (reusing the previous header instead of building a new one) to import these data sets into two new Isatis files:
l DDE_Maumusson_2001.csv in Data/DDE Maumusson 2001,
l DDE_Marennes_Oleron_2003.csv in Data/DDE Marennes Oleron 2003.
15.1.2.2 Import of the coast line
This other data set is provided in an ArcView format. It contains the geometry of the coast line and
the different islands. These contours are loaded as polygons that allow you to define the area of
interest in the grid file in order not to interpolate outside the sea.
To import this file, you have to go to File / Import / ArcView.
In the Shapefile tab, click File Name to open a file selector and select the Shapefile to be read
Coast.shp.
Choose the option Import as Polygons and click Data File to define the output file in your Isatis
study:
l Directory = Data
l New file = Coast
As for the ASCII Import, tick the Coordinates are in latitude/longitude format option to specify
your data are defined in a geographic system. Click on Projection File Name and select your
projection file lambert2e.proj created previously.
Finally, press Import.
(snap. 15.1-7)
15.2 Pre-processing
15.2.1 Visualization
The data sets are visualized using the display capabilities.
You are going to create a new Display template, which consists of an overlay of several base maps and polygons. All the display facilities are explained in detail in the "Displaying & Editing
Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up,
together with a Contents window. You have to specify in this window the contents of your graphic.
To achieve that:
l Firstly, give a name to the template you are creating: Data. This will allow you to easily display
the same map later on.
l In the Contents list, double click on the Basemap item. A new window appears, in order to let
you specify which file and which variable you want to display.
m In the Data area, click on the Data File button and select the file Data / DDE Boyard 2000.
Three types of representation may be defined (proportional, color or literal variable) but if
these three variables are left undefined, a simple basemap is drawn using only the Default
Symbol. Clicking on this button, you can modify the pattern, the color and the size of the
points.
l Click on Display to display the result and on OK to close the Item Contents panel.
l Back in the Contents list, double-click again on the Basemap item to represent the other points
files DDE Marennes Oleron 2003 and DDE Maumusson 2001. Choose a different color for
each file in order to distinguish them.
l Back in the Contents list again, double-click on the Polygons item to represent the coast line and
select Data/ Coast clicking on Data File. The lowest part of the window is designed to define
the graphic parameters:
m Label Position: Select no symbol so as not to materialize the label position of each polygon.
m Filling: Check Use a Specific Filling and click on the ... button to open the Color Selector
and choose Transparent.
l Click on Display Current Item to check your parameters, then on Display to see all the
previously defined components of your graphic.
(fig. 15.2-1)
15.2.2 Sampling selection
The final resolution of the bathymetric model will be 60 m. However, the resolution of the data sets is finer, metric in some places. It is therefore advised to create a selection variable, both to avoid matrix inversion problems during the interpolation, due to very close points which would be considered as "duplicates", and to reduce calculation time.
The File / Selection / Sampling panel allows you to create a selection variable by sampling data
points on a regular grid basis.
Click on the Data File button to select the file Data / DDE Boyard 2000 you want to re-sample.
You have then to define a New Selection Variable where the result of the sampling will be stored.
Call it Sampling 10 m.
Choose the Center Point option to specify that you want to keep in the selection the sample
nearest to the cell gravity center.
In order to take into account the whole set of samples, select the option Infinite Grid. The grid
system will be extended so that each of the samples is classified in a grid cell.
Finally, you have to specify the grid parameters. As the Infinite Grid option is activated, you just
have to fill in the dimensions of the cells. Type 10 m for DX and DY.
Press Run.
The variable created by the procedure is set to 1 when a sample is kept (just one sample per grid cell), and to 0 otherwise.
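The Center Point rule on an infinite grid can be sketched as follows (hypothetical coordinates; each cell keeps the one sample closest to its center):

```python
import numpy as np

def center_point_sampling(x, y, dx, dy):
    """Keep, in each dx x dy cell of an infinite grid, the sample closest to
    the cell center; return a 0/1 selection like the Isatis Sampling panel."""
    ix = np.floor(x / dx).astype(int)   # infinite grid: any cell index allowed
    iy = np.floor(y / dy).astype(int)
    cx = (ix + 0.5) * dx                # cell gravity (geometric) centers
    cy = (iy + 0.5) * dy
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    keep = np.zeros(len(x), dtype=int)
    best = {}                           # cell -> (squared distance, sample index)
    for i, cell in enumerate(zip(ix, iy)):
        if cell not in best or d2[i] < best[cell][0]:
            best[cell] = (d2[i], i)
    for _, i in best.values():
        keep[i] = 1
    return keep

# Five hypothetical samples, 10 m cells: two cells are occupied.
x = np.array([1.0, 2.0, 6.0, 14.0, 17.0])
y = np.array([1.0, 5.0, 6.0, 1.0, 2.0])
sel = center_point_sampling(x, y, 10.0, 10.0)
```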
(snap. 15.2-1)
Do the same thing for the two other data sets.
Note - This procedure can also be achieved in Tools / Look for Duplicates.
15.2.3 Creation of a target grid
All the estimation results will be stored as different variables inside a new grid file located in the
directory Targets. This grid, called Grid 60x60m, is created using the File / Create Grid File
functionality.
Using the Graphic Check option, the procedure lets you check graphically that the new grid reasonably overlays the different data files, selected by clicking on the Display File (Optional) button.
(snap. 15.2-2)
Note - In Isatis, only regular grids can be created, but it is possible to import irregular grids. For example, if you create a regular grid in latitude/longitude outside Isatis, this file has to be projected in Isatis (with a projection system consistent with your data set). Once projected, the grid is no longer regular, so it is imported as a points file via File / Import / ASCII. This new file is then considered as the target file of the interpolation. During import, you just need to select the Keep Geographical Coordinates option to keep and store the original fields used to compute the latitude/longitude coordinates as float variables in the output Isatis file, in order to export the result of the interpolation on these coordinates.
15.2.4 Delineation of the interpolation area
You have to create a polygon selection on the grid to delineate the interpolation area, using the File / Selection / From Polygons functionality. In order to restrict the interpolation to the sea, the new selection variable, called Bathy area, selects the grid cells located outside all the polygons.
(snap. 15.2-3)
15.2.5 Consistency and concatenation of data sets
15.2.5.1 Marennes Oleron and Maumusson
When zooming in on the Marennes Oleron and Maumusson area, it appears that the two campaigns do not overlap. Consequently, the two data sets can be concatenated in order to interpolate them simultaneously. However, it is important to check the consistency of the bathymetry between the files before merging them.
The two files are merged in Tools / Copy Variable / Merge Samples. Click Input File 1 to open a File selector and select the Z variable of the DDE Marennes Oleron 2003 file with the Sampling 10 m selection. Select the same variable for the DDE Maumusson 2001 file by clicking Input File 2. You then have to define the output variable corresponding to the input variable of both
input files. Click New Output Points File to create the new output Points File MO and
Maumusson where the variable Z will be copied. If you press the Default button, the name(s) of
the input variable(s) will be kept as the name(s) of the corresponding output variable(s) in the
output file. Finally, click Run.
(snap. 15.2-4)
Note - Make sure the input variables are defined with the same format (in our case, Float and not Length), in order to prevent Isatis from performing a conversion.
Then, the consistency of the two data sets at their common border is studied in Statistics / Exploratory
Data Analysis. In our case, we just want to compare the two profiles linking the two campaigns. The
comparison is made via an H-scatter plot. This application allows you to analyze the spatial
continuity of the selected variable.
It is first advised to create a selection containing only the two profiles. Clicking on the base map
icon (first icon from the left), the localization of the bathymetry measures appears. Each active measure
is represented by a cross proportional to the bathymetry value. A sample is active if its value for a
given variable is defined and not masked.
To create the selection variable, right click and Mask all Information on the Basemap window.
Then, zooming, select the two profiles (with the left button of your mouse) and right click, Unmask.
(snap. 15.2-5)
To avoid high computation time, you should save this selection and work only on an extraction of
the bathymetric file:
l To save the selection variable, click on Application / Save in Selection in the Basemap window
and create a new selection variable Two profiles. Save.
l In Tools / Copy Variable / Extract Samples, click Input File and select the Z variable of the MO
and Maumusson file with the selection Two profiles activated (select it on the left part of the
File Selector). Click New Output Points file and create a new output Points File Two profiles
MO and Maumusson and a new variable Z. Run.
(snap. 15.2-6)
Launch again the Statistics / Exploratory Data Analysis on the Z variable of this new file. Tick the
Define Parameters before Initial Calculations option and click on the sixth icon from the left to display
the H-scatter plot. The default parameters are modified:
l the Reference Direction: an angle of 55° from North is taken to compare the pairs of points
located in the principal direction of the trench. This direction can be identified by clicking on
Management / Measure / Angle between two Segments in the graphic window.
l the Minimum and Maximum Distance: respectively equal to 400 and 800 m to include the pairs
of points resulting from the comparison of the two profiles.
l the Tolerance on Angle: 5° so as not to be too strict on the Reference Direction.
(snap. 15.2-7)
It is possible to add the First Bisector Line on the H-scatter plot via Application / Graphic Specific
Parameters.
(fig. 15.2-2: H-scatter plot of Z vs. Z, both axes from 0 to 15 m)
Select a pair of points on the H-scatter plot (i.e. one point), then right click and choose Highlight
to show their localization on the Basemap. No particular bias is visible; consequently, the two
campaigns can be merged without any correction.
15.2.5.2 Marennes Oleron and Boyard
The consistency between Boyard and Marennes Oleron can be studied more finely because the two
campaigns overlap.
The first step consists in migrating the bathymetry values of Marennes Oleron to the points of
Boyard with the Tools / Migrate / Point to Point application. The Maximum Migration Distance is set
to 2 m so as not to compare values that are too far apart.
(snap. 15.2-8)
For clarity, in the DDE Boyard 2000 file, the bathymetric variable is renamed to Z
Boyard.
The difference of bathymetry between both variables Z Boyard and Z MO is calculated via the
File / Calculator panel.
(snap. 15.2-9)
Both Z variables and the difference between them are then selected in Statistics / Exploratory Data
Analysis. On the Scatter Diagram of Z Boyard versus Z MO, which is considered as the reference
bathymetry because it is more recent, you can observe an excellent correlation of 0.999. However,
the error Z Boyard-MO seems to increase with depth (the distance from the first bisector line grows
larger and larger). The mean of these errors is equal to 0.45 m.
(fig. 15.2-3)
After removing some points, you can observe a link between the errors and the bathymetry. This
phenomenon could be due to an evolution of the sediments between the two campaigns made in
2000 and 2003.
(fig. 15.2-4)
The bias between the two bathymetric models resulting from the Boyard and the Marennes Oleron
data sets could be corrected by applying to the Boyard data the following correction (corresponding
to the equation of the regression line):
Z Boyard - MO = 0.021193 * Z Boyard + 0.213207 (eq. 15.2-1)
This relation is observed on the overlapping area of the two campaigns. Its validity should be
confirmed on the remaining area.
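Assuming this linear relation holds, the correction amounts to subtracting the predicted error from each Boyard sounding. A minimal sketch in plain Python (function names are illustrative; in practice the operation is done in the Isatis Calculator):

```python
def correct_boyard(z_boyard):
    """Subtract the bias predicted by the regression of eq. 15.2-1:
    Z Boyard - MO = 0.021193 * Z Boyard + 0.213207."""
    predicted_error = 0.021193 * z_boyard + 0.213207
    return z_boyard - predicted_error

# a 10 m Boyard sounding would be corrected by about 0.43 m
z_corrected = correct_boyard(10.0)
```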
(fig. 15.2-3 data: scatter plot of Z MO vs. Z Boyard, rho = 0.999; histogram of Z Boyard-MO — Nb Samples: 115, Minimum: -0.35, Maximum: 1.00, Mean: 0.45, Std. Dev.: 0.25)
(fig. 15.2-4 data: scatter plot of Z Boyard-MO vs. Z Boyard, rho = 0.548)
15.3 Interpolation by kriging
15.3.1 Exploratory Data Analysis
In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and
variable of interest. To achieve that, click on the Data File button and select the variable Z in the
Data / MO and Maumusson file. By pressing the corresponding icon (eight in total), you can
successively perform several statistical representations, using default parameters or by choosing
appropriate parameters.
(snap. 15.3-1)
For example, to calculate the histogram with 25 classes between -6 and 19 m (1 meter interval),
first you have to click on the histogram icon (third from the left); a histogram calculated with
default parameters is displayed, then enter the previous values in the Application / Calculation
Parameters menu bar of the Histogram page. If you switch on the Define Parameters Before Initial
Calculations option, you can skip the default histogram display.
The different graphic windows are dynamically linked. If you want to locate the negative measures
of bathymetry, select on the histogram the classes corresponding to negative values, right click and
choose the Highlight option. The highlighted values are now represented by a blue star on the base
map previously displayed.
(fig. 15.3-1)
(fig. 15.3-2)
Then, an experimental variogram can be calculated by clicking on the 7th statistical representation,
with 20 lags of 10 m and a proportion of lag of 0.5. The variance of the data may be removed from the
graphic by switching off the appropriate button in Application / Graphic Specific Parameters.
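For reference, the omnidirectional experimental variogram computed here can be sketched as follows. This is a simplified illustration, not the Isatis implementation; in particular, interpreting the proportion of lag as a symmetric tolerance around each lag center is an assumption:

```python
import numpy as np

def experimental_variogram(x, y, z, lag=10.0, nlags=20, tol=0.5):
    """Omnidirectional experimental semivariogram: for each lag class k,
    gamma = mean of 0.5*(z_i - z_j)^2 over pairs whose separation lies
    within k*lag +/- tol*lag."""
    x, y, z = map(np.asarray, (x, y, z))
    # pairwise distances and half squared increments
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(x), k=1)          # each pair counted once
    d, g = d[iu], g[iu]
    gamma = []
    for k in range(1, nlags + 1):
        sel = np.abs(d - k * lag) <= tol * lag
        gamma.append(g[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)
```

For instance, samples every 10 m along a line with a pure linear trend give gamma(h) = h²/2 at each lag.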
(histogram of Z — Nb Samples: 147879, Minimum: -5.48, Maximum: 18.67, Mean: 0.71, Std. Dev.: 4.08)
(snap. 15.3-2)
(fig. 15.3-3)
In order to perform the fitting step, it is now time to store the experimental variogram with the item
Save in Parameter File of the Application menu of the Variogram Page. You will call it Z bathy.
15.3.2 Fitting a variogram model
The procedure Statistics / Variogram Fitting allows you to fit an authorized model on an
experimental variogram.
You must first specify the name of the parameter file which contains the Experimental Variogram Z
bathy created in the previous paragraph.
Then, you need to define another parameter file which will ultimately contain the model: you will
also call it Z bathy. Although they carry the same name, there will be no ambiguity between these
two files as they are of different types.
Common practice is to find, by trial and error, the set of parameters defining the model which fits
the experimental variogram as closely as possible. The quality of the fit is checked graphically on
each of the two windows:
l The global window where all experimental variograms, in all directions and for all variables are
displayed.
l The fitting window where you focus on one given experimental variogram, for one variable and
in one direction.
In our case, as the parameter file refers to a single experimental variogram for the single variable
Z, there is obviously no difference between the two windows.
(snap. 15.3-3)
(snap. 15.3-4)
The principle consists in editing the Model parameters and checking the impact graphically.
The panel used for the Model definition (displayed when pressing the Edit button) offers the
possibility of defining a default model: this model is isotropic and composed of one basic structure
of spherical type with a range calculated as one tenth of the field extension and a sill equal to the
statistical variance of the data.
Each modification of the Model parameters can be validated using the Test button in order to update
the graphic.
Once the set of basic structures has been chosen and given the range (or scale factor) of each
structure, a convenient feature is the possibility of asking the system to derive the optimal sill of
each structure (Automatic Sill Fitting button of the main window). This calculation needs to
minimize the distance between the experimental variogram and the model, taking into account the
number of pairs and the distance for each lag in a way which depends on the Fitting Weights rule,
accessible by switching ON the Show Advanced Parameters button.
Here, two different structures have been defined (in the Model Definition window, use the Add
button to add a structure, and define its characteristics below, for each structure):
l a stable model with a third parameter equal to 1.45, a range of 600 m and a sill of 3.35,
l a nugget effect of 0.0025.
These parameters lead to a better fit of the model to the experimental variogram. This model is
saved in the Parameter File for future use by clicking on the Run (Save) button.
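As a point of reference, the fitted model can be written as a small function. The practical-range scaling used here (factor 3 in the exponent) is a common convention for the stable, i.e. powered-exponential, model and is an assumption — Isatis's internal scale-factor definition may differ:

```python
import math

def gamma_z(h):
    """Fitted variogram of Z: nugget 0.0025 plus a stable (powered
    exponential) structure with sill 3.35, range 600 m and third
    parameter (exponent) 1.45."""
    if h == 0.0:
        return 0.0                       # the variogram is 0 at the origin
    nugget, sill, a, alpha = 0.0025, 3.35, 600.0, 1.45
    return nugget + sill * (1.0 - math.exp(-3.0 * (h / a) ** alpha))
```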
(fig. 15.3-4)
15.3.3 Kriging of bathymetry
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
l the Input information: variable Z in the Data File,
l the following variables in the Output Grid File, where the results will be stored:
m the estimation result in Kriging of bathymetry MO and Maumusson,
m the standard deviation of estimation in Std of bathymetry MO and Maumusson (Kriging),
l the Model of variogram: Z bathy,
l the neighborhood: Moving 300m.
To define the neighborhood, you have to click on the Neighborhood button and you will be asked to
select or create a new set of parameters; in the New File Name area enter the name Moving 300m,
then click on OK or press Enter and you will be able to set the neighborhood parameters by clicking
on the respective Edit button.
The neighborhood type is a moving neighborhood. It is an ellipsoid with No Rotation;
l Set the dimensions of the ellipsoid to 300 m and 300 m. Because of the sampling, the
neighborhood size does not need to be very large;
l Minimum number of samples: 1;
l Number of Angular Sectors: 4, in order to avoid all the data coming from the same profile;
l Optimum Number of Samples per Sector: 4. A total of 4 x 4 = 16 samples seems to be a good
compromise between reliability of the interpolation and calculation time.
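The sector logic can be illustrated with a short sketch — a simplified sector-based search, not the exact Isatis algorithm:

```python
import math

def sector_neighbors(target, samples, radius=300.0, n_sectors=4, per_sector=4):
    """Keep at most per_sector nearest samples in each angular sector
    around the target, within the search radius."""
    tx, ty = target
    sectors = [[] for _ in range(n_sectors)]
    for x, y, z in samples:
        d = math.hypot(x - tx, y - ty)
        if d == 0.0 or d > radius:
            continue                      # outside the search circle
        angle = math.atan2(y - ty, x - tx) % (2.0 * math.pi)
        s = int(angle // (2.0 * math.pi / n_sectors)) % n_sectors
        sectors[s].append((d, x, y, z))
    kept = []
    for sector in sectors:
        sector.sort()                     # nearest samples first
        kept.extend(sector[:per_sector])
    return kept
```

With many samples crowded along one profile (hence in one sector), only the 4 nearest of that sector are retained, which is exactly the behaviour the angular sectors are meant to enforce.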
(snap. 15.3-5)
In order to avoid extrapolation outside the domain, in the Advanced tab, it is possible to interrupt
the neighborhood search when there are too many consecutive empty sectors. Tick the Maximum
Number of Consecutive Empty Sectors option to activate it and enter a value of 2.
(snap. 15.3-6)
Press OK for the Neighborhood Definition.
Note - When kriging huge data sets, it is advised to modify the parameters in the Sorting tab in
order to optimize the computations. With a moving neighborhood, the samples are first sorted into
a coarse grid of cells (the maximum number of cells is limited to 500000). This sorting will improve
the performance of the search algorithm.
The sorting parameters DX and DY should be set such that the product of the domain extension along
X by the domain extension along Y, divided by the product of DX by DY, is smaller than 500000.
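The rule of thumb in the Note can be checked with a couple of lines (the domain extensions below are illustrative numbers, not taken from this dataset):

```python
def sorting_cells_ok(ext_x, ext_y, dx, dy, max_cells=500000):
    """Number of sorting cells = (ext_x / dx) * (ext_y / dy);
    it must stay below the 500000-cell limit."""
    return (ext_x / dx) * (ext_y / dy) < max_cells

# a 20 km x 25 km domain sorted into 60 m cells gives ~139000 cells: fine
ok = sorting_cells_ok(20000.0, 25000.0, 60.0, 60.0)
```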
(snap. 15.3-7)
In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area
displays the target file (the grid). A second click allows the selection of one grid node in particular.
The target grid node may also be entered in the Test Window / Application / Selection of target
option (see the status line at the bottom of the graphic page), for instance [207,262].
The figure shows the data set, the samples chosen in the neighborhood (the 16 closest points inside a
300 m radius circle) and their corresponding weights. The bottom of the screen recalls the
estimation value, its standard deviation and the sum of the weights.
(snap. 15.3-8)
In the Application menu of the Test Graphic Window, click on Print Weights & Results. This
produces a printout of:
l the calculation environment: target location, model and neighborhood,
l the kriging system,
l the list of neighboring data and the corresponding weights,
l the summary of this kriging test.
Results for : Punctual
- For variable V1
Number of Neighbors = 16
Mean Distance to the target = 56.73m
Total sum of the weights = 1.000000
Sum of positive weights = 1.063054
Weight attached to the mean = -0.036649
Lagrange parameters #1 = 0.102892
Estimated value = -0.993803
Estimation variance = 0.172023
Estimation standard deviation = 0.414757
Variance of Z* (Estimated Z) = 2.974693
Covariance between Z and Z* = 3.077585
Correlation between Z and Z* = 0.974551
Slope of the regression Z | Z* = 1.034589
Signal to Noise ratio (final) = 19.474120
Click on Run to interpolate the data on the entire grid.
The same interpolation can be achieved with the Boyard data set (with the selection Sampling 10 m
activated), taking care that the names of the output variables are different. Create two new variables:
l Kriging of bathymetry Boyard to store the estimation result,
l Std kriging of bathymetry Boyard for the standard deviation of estimation.
15.3.4 Displaying the graphical results
15.3.4.1 Display 2D
Click on Display / New Page in the Isatis main window. A new blank graphic page pops up.
l Give a name to the template you are creating: Bathy kriging.
l In the Contents list, double click on the Raster item. A new window appears, in order to let you
specify which variable with which color scale you want to display:
m In the Data area, in the Grid file select the variable Kriging of bathymetry MO and
Maumusson,
m Specify the title that will be given to the Raster part of the legend, for instance Bathy (m),
m In the Graphic Parameters area, specify the Color Scale you want to use for the raster
display. You may use an automatic default color scale, or create a new one specifically
dedicated to the bathymetry. To create a new color scale, click on the Color Scale button,
double-click on New Color Scale and enter a name: Bathy, and press OK. Click on the Edit
button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and enter the min and the max bounds (respectively -5 and
15).
- Do not change the number of classes (32).
- Click on the Undefined Values button and select Transparent.
- In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter
-5 as the reference tick mark and 2 as the step between the tick marks. Then, specify that
you do not want your final color scale to exceed 6 cm. Switch off the Display Undefined
Classes as button.
- Click on OK.
m In the Item contents for: Raster window, click on Display to display the result.
(snap. 15.3-9)
l It is possible to add other items such as Isolines defined on the nodes of a grid. For example,
you can display, on the bathymetry variable, isolines by 1 m classes.
l You can also display the coast line by adding a Polygons item as done for the data visualization.
l In the Items list, you can select any item and decide whether or not you want to display its
legend. Use the Move Back and Move Front buttons to modify the order of the items in the final
display.
l Click on the Display Box tab. Choose Containing a set of items and select the Raster item to
define the size of the graphic by reference to the contents of the grid.
l Finally, click on Display to display the result and on OK to close the Item Contents panel. Your
final graphic window should be similar to the one displayed hereafter:
(snap. 15.3-10)
The * and [Not saved] symbols respectively indicate that some recent modifications have not been
stored in the Bathy kriging graphic template, and that this template has never been saved. Click on
Application / Store Page to save them. You can now close your window.
15.3.4.2 3D Viewer
Launch the 3D Viewer (Display / 3D Viewer).
To display the bathymetry estimation, drag and drop the Kriging of bathymetry MO and
Maumusson variable from the Grid 60x60m file in the display window. In the Page Contents,
right-click on the Surface object to edit its properties:
l in the Color tab, check that the selected variable is Kriging of bathymetry MO and
Maumusson. Apply the Bathy color scale created previously.
l in the Elevation tab, you need to select Variable and to choose Kriging of bathymetry MO
and Maumusson to define for each grid cell the bathymetry as the level Z. Tick Convert into Z
Coordinate to calculate the elevation Z from the bathymetry (in depth) as Z = -1 x V + 0.
(snap. 15.3-11)
Tick the Automatic Apply option to automatically assign the defined properties to the graphic
object. If this option is not selected, modifications are applied only when clicking Display.
Tick Legend to display the color scale in the display window. The legend is attached to the current
representation. Specific graphic objects may be added from the Display menu, such as the graphic
axes and corresponding valuations, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.
(fig. 15.3-5)
15.3.5 Detection of outliers - Filtering
By construction, kriging smoothes the real variability. As a consequence, with a variogram model
including a nugget effect, if the interpolation is done very close to a data point, the estimated value
will be different from the one measured. Indeed, if this nugget effect is due to a measurement
error, it makes sense not to give priority only to the closest point but also to give some
weight to farther data. Moreover, by construction, kriging is an exact interpolator, i.e. if kriging is
done exactly on a measure, the estimated value and the measured value will be exactly the same.
Filtering allows you to produce an estimation of the variable, filtering out the effect of the error of
measurement. This error, considered as independent from the variable, is characterized by its own
scale component on the variogram model: the nugget effect. The objective is to estimate on the data
points the most probable bathymetric value, free of measurement error. Then, an analysis will be
done to compare these values with the measured ones. The validity of the points with the largest
corrections will be called into question.
Filtering is achieved just like kriging. You just need to specify the same Input File and Output File
with two new variables Z filtering for estimation and Z std filtering for the standard deviation.
The Model of variogram and the Neighborhood are the same as for the kriging. Click on the Special
Model Options button and tick Filtering Model Components. Then highlight the nugget effect so
that it will be filtered from the model in the list of Covariance basic structures. Click Apply and
Run.
(snap. 15.3-12)
(snap. 15.3-13)
In File / Calculator, click Data File and select:
l the measures Z as v1,
l the filtered bathymetry Z filtering as v2,
l the associated standard deviation Z std filtering as v3,
l a new variable Z standardized error filtering as v4 which will be equal to the difference
between the "true" value and the value estimated by filtering standardized by the standard
deviation.
Then write the following transformation:
v4=(v1-v2)/v3
Click on Run.
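The Calculator transformation above is simply a standardized residual; in plain terms (the numbers below are illustrative):

```python
def standardized_error(z_measured, z_filtered, std_filtered):
    """v4 = (v1 - v2) / v3: the measured value minus the filtered
    estimate, in units of the filtering standard deviation."""
    return (z_measured - z_filtered) / std_filtered

# a 10.3 m sounding filtered to 10.0 m with a 0.4 m standard deviation
# lies 0.75 standard deviations away from its filtered estimate
err = standardized_error(10.3, 10.0, 0.4)
```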
(snap. 15.3-14)
The Exploratory Data Analysis allows you to locate the highest errors on the base map by
highlighting them on the histogram. Adding the values of bathymetry on the base map (by setting
the Literal Code Variable in Application / Graphical Parameters) makes it possible to study these
points in detail.
It is first advised to modify the symbol of the selected points from crosses to points in order to
improve the legibility of the display. To achieve that, you have to access the study parameters in
Preferences / Study Environment, Miscellaneous tab and change the Selected Point symbol in the
Interactive Picking Windows Convention part.
After masking the outliers (with a right click and Mask), you can save the result of this work in a
selection variable (Application / Save in Selection). Then, you can perform again a kriging (without
filtering) with this selection variable activated in input and, this time, the grid of interpolation in
output. Of course, the classification of the points as outliers should be done carefully.
(fig. 15.3-6: histogram of Z standardized error filtering — Nb Samples: 147878, Minimum: -6.43, Maximum: 7.82, Mean: 0.00, Std. Dev.: 0.37)
15.4 Superposition of models and smoothing of
frontiers
15.4.1 Merge of several Digital Terrain Models (DTM)
The two data sets, Boyard on the one hand and Marennes Oleron/Maumusson on the other hand,
have been interpolated separately but they partly overlap. At this stage, it is necessary to decide
to which of these two models you want to give priority. In this case, it is decided to favour
Marennes Oleron/Maumusson because of its more recent campaign and its larger coverage of the
study area.
A new bathymetric model Z bathy Boyard MO and Maumusson is built thanks to the File /
Calculator application.
The mathematical transformation simply consists in taking for the final model the priority one (Kriging
of bathymetry MO and Maumusson) when it is defined, and otherwise the Kriging of bathymetry
Boyard variable adjusted with the regression equation calculated in paragraph 15.2.5.2.
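A sketch of this priority rule in plain Python, with NaN standing in for an undefined grid value (an illustrative convention; the bias correction reuses eq. 15.2-1):

```python
import math

def merged_bathy(z_mo, z_boyard):
    """Priority to the MO and Maumusson kriging; otherwise fall back to
    the Boyard kriging corrected for the bias of eq. 15.2-1."""
    if not math.isnan(z_mo):
        return z_mo
    if math.isnan(z_boyard):
        return float("nan")              # undefined in both models
    return z_boyard - (0.021193 * z_boyard + 0.213207)
```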
(snap. 15.4-1)
15.4.2 Smoothing of frontiers
Zooming in on a display of the Z bathy Boyard MO and Maumusson variable, you can see that the
frontier between the two concatenated DTMs (Boyard and Marennes Oleron) is still visible. So it
seems necessary to smooth it.
The idea consists in defining a band around Marennes Oleron (which is privileged), then
re-interpolating the values of bathymetry in this band from the interpolated values nearby.
The buffer zone is created in two steps:
l In Interpolate / Interpolation / Grid Operator, you should create a new selection variable Sel Z
bathy MO and Maumusson dilated which takes into account all the grid cells where the
Kriging of bathymetry MO and Maumusson variable is defined, plus a band 120 m wide
(i.e. 2 cells) around them.
(snap. 15.4-2)
l In File / Calculator, three variables are created:
m Sel Z bathy MO and Maumusson buffer: this selection variable defines the buffer zone. It
is equal to Sel Z bathy MO and Maumusson dilated minus the area on which the Kriging
of bathymetry MO and Maumusson variable is defined.
m Z bathy final Boyard MO and Maumusson: this variable contains the concatenation of the
two models previously created by kriging with priority to the MO and Maumusson model as
well as undefined values inside the buffer zone.
m DTM area: this selection variable is created in order not to extrapolate the interpolation
done at the next step.
(snap. 15.4-3)
(fig. 15.4-1)
The buffer area is then filled in with a simple moving average in Interpolate / Interpolation / Grid
Filling. The result of this last interpolation is stored in the same Z bathy final Boyard MO and
Maumusson variable as in input. The variable is overwritten to contain the final bathymetric
model. The DTM area selection variable is activated in order not to extrapolate.
The choice of the algorithm of interpolation has no real importance because of the limited size of
the buffer area.
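A simplified stand-in for this Grid Filling step — averaging the defined values in a small window around each buffer cell (the window size and NaN-for-undefined convention are assumptions, not the exact Isatis algorithm):

```python
import numpy as np

def fill_buffer_moving_average(grid, buffer_mask, radius=2):
    """Fill the cells flagged by buffer_mask with the mean of the defined
    (non-NaN) values in a (2*radius+1)^2 window around each of them."""
    out = grid.copy()
    for i, j in zip(*np.where(buffer_mask)):
        window = grid[max(i - radius, 0):i + radius + 1,
                      max(j - radius, 0):j + radius + 1]
        vals = window[~np.isnan(window)]
        if vals.size:
            out[i, j] = vals.mean()      # simple moving average
    return out
```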
(fig. 15.4-1 data: map of Z bathy final Boyard MO and Maumusson, X from 320 to 335 km, Y from 2095 to 2120 km, color scale Bathy (m) from -5 to 15)
(snap. 15.4-4)
15.5 Local GeoStatistics (LGS) application to
bathymetry mapping
15.5.1 Variogram analysis
The estimation previously obtained is based on a global variogram model. This analysis assumes
the stationarity of the data and its spatial structure over the area of interest.
The LGS methodology described hereafter consists in calculating local variograms, taking into
account potential local particularities such as local anisotropies, spatially varying small-scale
structures or heterogeneity. The expected outcome is an improved prediction, together with a more
consistent assessment of uncertainties.
The first step consists in computing an anisotropic variogram which will be used in the LGS
methodology to find the right local angle of anisotropy. Using this local angle, the local ranges along U
and V will then be computed in the next step.
An anisotropic variogram model is required to test different rotations. In the Statistics / Exploratory
Data Analysis menu, you should select the Z variable in the Data / MO and Maumusson file and click
on the variogram icon. The Variogram Calculation Parameters panel pops up. In the List of
Options, change the type of variogram from Omnidirectional to Directional. Then click on the
Regular Directions button and choose a Number of Regular Directions of 2 (N0 and N90) and
click on OK. In the Variogram Calculation Parameters panel, when clicking any cell of the table,
the Directions Definition box pops up.
Select the direction to be defined in the Directions List on the left side of the interface. You may
select the two directions at the same time to set the same parameter values. Then activate the
parameters you need to modify by checking the corresponding box and choose:
l Tolerance on Direction: 5°
l Lag Value: 10 m
l Number of Lags: 20
Click OK twice to calculate the variogram and get it displayed in a graphic window.
(snap. 15.5-1)
(snap. 15.5-2)
(fig. 15.5-1)
Finally store this experimental variogram with the item Save in Parameter File of the Application
menu of the Variogram Page. You will call it Z bathy anisotropic.
To fit a variogram model, in the Statistics / Variogram Fitting application, define:
l The Parameter File containing the set of experimental variograms: Z bathy anisotropic.
l The Parameter File in which you wish to save the resulting model: Z bathy anisotropic. You
may define the same name for both.
(snap. 15.5-3)
Check the toggles Fitting Window and Global Window; the program displays automatically one
default spherical model. The Fitting Window displays one direction at a time (you may choose the
direction to display through Application / Variable & Direction Selection...), and the Global
Window displays all directions in one graphic.
Click on the Edit button next to the variogram model to open the Model Definition sub-window.
You can first initialize the variogram by pressing the Load Model button and selecting the Z bathy
model, to begin your modeling with the same parameters. But the model must reflect:
l The specific variability along each direction (anisotropy),
l The general increase of the variogram.
You should tick the Anisotropy option for the Stable structure with a third parameter equal to 1.45,
a sill of 3.35 and the following respective ranges along U and V: 800 m and 300 m. The nugget
effect stays equal to 0.0025.
This model is saved in the Parameter File by clicking on the Run (Save) button.
(snap. 15.5-4)
(fig. 15.5-2)
15.5.2 Pre-processing
In order to avoid heavy computation time, the method is only illustrated on a specific part of the
area of interest. After validating the analysis of the LGS parameters on this restricted area, you
could perform the estimation on the entire domain.
In the File / Selection / Geographical Box menu, a new selection variable Restricted area is
created in the MO and Maumusson file by selecting only the samples whose coordinates are
included between:
l 325800 and 333200 m for X,
l 2102200 and 2111200 m for Y.
The same selection is applied to the grid Targets / Grid 60x60m.
(snap. 15.5-5)
The dataset is also reduced to select one point every 25 m with the File / Selection / Sampling menu.
(snap. 15.5-6)
Finally, the selection Sampling 25 m containing 18512 samples is extracted into a new points file
MO and Maumusson LGS thanks to the Tools / Copy Variable / Extract Samples application. You
should press the Default button to keep the name of the input variable Z as the name of the
corresponding output variable in the output file MO and Maumusson LGS. Click Run.
(snap. 15.5-7)
15.5.3 LGS Parameters Modeling
The computation of the Local Parameters is achieved in the Statistics / LGS Parameters Modeling /
Local Cross-validation Score Fitting application.
Click Input Data and select the Z variable in the Data / MO and Maumusson LGS file. Then
choose the Z bathy anisotropic variogram model.
In order to perform the cross-validation used to compute the parameters, it is necessary to specify a
search neighborhood. Click Neighborhood to open the Neighborhood selector. We choose not to
use the previous neighborhood but to create a new one, Moving LGS, which imposes a minimum
distance of 100 m between two samples to counterbalance the organization of the samples along lines.
Then, click Edit to modify the parameters, in the Sectors tab:
l The neighborhood type is a Moving neighborhood. It is an ellipsoid with No Rotation;
l Set the dimensions of the ellipsoid to 1200 m and 1200 m along the U and V directions;
l Minimum number of samples: 1;
l Number of angular sectors: 8;
l Optimum Number of Samples per Sector: 4.
In the Advanced tab:
l Minimum Distance Between two Selected Samples: 100 m;
l Maximum Number of Consecutive Empty Sectors: 2.
Press OK for the Neighborhood Definition.
(snap. 15.5-8)
In the Local Grid tab, you should click on the Local Grid button to define the grid on which the
local parameters will be calculated. Create a new file Grid LGS in the existing Targets directory.
The grid is automatically computed in order to geographically overlay the input samples. You
should tick the Graphic Check option to check the superimposition of the grid on the samples.
The Cross-validation tab allows you to define a block size inside of which samples are considered.
Enter a value of 100 m for X and Y and choose to Perform Cross-validation on 50 % of the data
(to reduce the amount of data and the computation time).
In the last Local Parameters tab, you should select the parameters that you whish to estimate
locally. In this exemple, we first choose to test only the rotation i.e. the directions of anisotropy. The
Output Local Base Name area is designed to define a base name for the local parameters. The
complete name of each parameter is automatically created concatenating this chain of characters,
the name of the structure (for the variogram model) and the parameter you are testing (Rot, Range,
Sill, Third). It appears in the Parameter area. You should call it Z_bathy.
The different basic structures constituting the variogram model defined earlier as well as the
neighborhood item are listed in the Structure area. Click the Stable Structure and select the
Parameter: Rot Z to indicate the local parameter you want to test. In the Min and Max boxes, enter
the values between which the selected parameter should fluctuate: respectively -90 and 90. Choose a Step
of 10 degrees between two consecutive values to be tested.
Finally click Run to launch the calculations.
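Conceptually, this scan tests each candidate angle and keeps the one with the best cross-validation score. The sketch below illustrates the idea in Python; it is a hypothetical stand-in, using a leave-one-out anisotropic inverse-distance estimator in place of the moving-neighborhood kriging that Isatis actually cross-validates:

```python
import numpy as np

def rotate(coords, angle_deg):
    """Rotate 2-D coordinates by angle_deg (mathematician convention)."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return coords @ R.T

def cv_error(coords, values, angle, ranges=(1200.0, 300.0)):
    """Leave-one-out error of an anisotropic inverse-distance estimator.
    Stands in for the kriging cross-validation that Isatis performs."""
    rot = rotate(coords, angle) / np.array(ranges)   # scaling by ranges -> anisotropy
    err = 0.0
    for i in range(len(values)):
        d = np.linalg.norm(rot - rot[i], axis=1)
        d[i] = np.inf                                # exclude the target itself
        w = 1.0 / (d + 1e-12) ** 2
        err += (values[i] - np.sum(w * values) / np.sum(w)) ** 2
    return err / len(values)

# toy data set, elongated along one direction (not the case-study bathymetry)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 5000, size=(80, 2))
u = rotate(coords, -30.0)
values = np.sin(u[:, 0] / 800.0) + 0.05 * rng.normal(size=80)

angles = np.arange(-90, 91, 10)                      # Min -90, Max 90, Step 10
best = min(angles, key=lambda a: cv_error(coords, values, a))
print("best local rotation:", best)
```

The grid search over ranges described below works the same way, scoring every candidate combination and keeping the best one per local grid node.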
(snap. 15.5-9)
You can visualize the result of calculations in the Statistics / Exploratory Data Analysis. Tick the
Legend option in the Application / Graphical Parameters menu of the basemap to display the
legend.
(fig. 15.5-3)
After computing the rotation of the variogram model, a second run is performed to test the ranges,
taking into account the previous calculations. The Input Data, the Model of variogram and the
Neighborhood remain the same.
In the Local Grid tab, tick the Use an Existing Grid option to save the range parameters in the grid
file Targets / Grid LGS previously created.
Do not change anything in the Cross-validation tab.
In the Local Parameters tab, tick the Parameter Already Exists option so as not to erase the variable
containing the calculations of rotation. Then, click Add Parameter to add a second parameter.
Select the Stable structure and the Range U for parameter. Choose a Min of 600, a Max of 1000
and a Step of 100. Add a third parameter for the Range V with a Min of 100, a Max of 500 and a
Step of 100.
Make sure that the Simultaneous estimation mode is ticked in order to test all possible combinations
of the different values for the ranges.
Click Run.
(fig. 15.5-3: basemap of the local rotation parameter Z_bathy_1_Stable_Rot Z; X and Y in km, color legend graduated from -100 to 100 degrees)
(snap. 15.5-10)
15.5.4 LGS kriging
The last step of this analysis consists in performing a kriging taking into account the local parame-
ters previously calculated.
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
l the Input information: variable Z in the Data / MO and Maumusson file with the Restricted
area selection,
l the Output Grid File Targets / Grid 60x60m, where the estimation result will be stored in the
variable Kriging LGS MO and Maumusson restricted area,
l the Model of variogram: Z bathy anisotropic,
l the neighborhood: Moving 300m.
(snap. 15.5-11)
You should click on the LGS Parameters button to pop up the LGS Parameter Loading box and
define the local models. Click on Local Grid and select the grid Targets / Grid LGS where the
local parameters are stored.
In the Model Per Structure tab, tick the Use Local Rotation (Mathematician Convention) option
to make the rotations vary locally. Click Rotation / Z and select the Z_bathy_1_Stable_Rot_Z
variable. In the same way, select Use Local Range and choose Z_bathy_1_Stable_Range_U for
Range / X and Z_bathy_1_Stable_Range_V for Range / Y.
Click OK and Run.
(snap. 15.5-12)
The map displaying the difference between kriging and kriging using LGS points out the areas
with high differences between the maps. The two main conclusions are that the use of LGS
reduces the wavelet artefact visible at the border of the main channel, and that LGS also produces
more continuous secondary channels, which is closer to reality.
(fig. 15.5-4)
(fig. 15.5-5)
Methodology
16 Image Filtering
This case study demonstrates the use of kriging to filter out the compo-
nent of a variable which corresponds to the noise. Applied to regular
grids such as images, this method gives convincing results in an effi-
cient manner.
The result is compared to classical filters which do not suppress the noise but merely reduce it by
dilution.
Last update: Isatis version 12.0
16.1 Presentation of the Dataset
The dataset is contained in the ASCII file called images.hd. It corresponds to a grid of 256 x 256
nodes with a square mesh of 2 microns, containing a single variable: the
phosphorus element (P) measured using an electronic microprobe on a steel sample. Due to the very
low quantities of material (traces), the realization of this picture may take up to several hours of
exposure: hence the large amount of noise induced by the process. The file is read using the Files /
Import / ASCII facility asking for the data to be loaded in the new Directory called Images and the
new grid file called Grid.
(snap. 16.1-1)
We set in Preferences / Study Environment the X and Y units for graphics to mm.
Using the File Manager utility, we can check the basic statistics of the P variable that we have just
loaded: it varies from 11 to 71, with a mean of 35 and a standard deviation of 7.
Use the Display facility to visualize the raster contents of the P variable located on the grid. The
large amount of noise, responsible for the fuzziness of the picture, is clearly visible.
(fig. 16.1-1)
Initial Image
16.2 Exploratory Data Analysis
The next step consists in analyzing the variability of this trace element with the Statistics / Explor-
atory Data Analysis.
Once the names of the Directory (Images), File (Grid) and variable (P) have been defined, ask for
a histogram. Using the Application Menu of the graphic page, modify the Calculation Parameters
as follows: 62 classes lying from 10 to 72. The Automatic button resets the minimum and maximum
by performing the statistics on the active data in the file. The resulting histogram is very close to a
normal distribution with a mode located around 35.
(fig. 16.2-1)
16.2.1 Quantile-quantile plot and χ²-test
Although this will not be used afterwards in the case study, it is possible to check how close this
experimental distribution is to normality, using the Quantile-quantile facility.
It allows the comparison of the experimental quantiles to those calculated on any theoretical distri-
bution (normal in our case). This comparison may be improved by suppressing several points taken
from the head or the tail of the experimental distribution.
(snap. 16.2-1)
(fig. 16.2-2)
In the Report Global Statistics item of the Application Menu, you obtain an exhaustive comparison
between the experimental and the theoretical quantiles, as well as the score of the χ²-test, equal to
9049. This score is much greater than the reference value (for 16 degrees of freedom) obtained in
tables: this indicates that the experimental distribution cannot be considered as normal with a high
degree of confidence.
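The χ² score works by binning the sample, comparing observed counts to those expected under a normal model, and summing the normalized squared deviations. A minimal Python sketch (the class edges and the simulated sample are illustrative, not the case-study data):

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal(mu, sigma) variable."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi2_score(data, edges, mu, sigma):
    """Chi-square goodness-of-fit score against a normal(mu, sigma) model:
    sum over classes of (observed - expected)^2 / expected."""
    n = len(data)
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        observed = sum(lo <= x < hi for x in data)
        expected = n * (normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
        if expected > 0:
            score += (observed - expected) ** 2 / expected
    return score

random.seed(1)
sample = [random.gauss(35.0, 7.0) for _ in range(5000)]   # truly normal toy sample
mu = sum(sample) / len(sample)
sigma = (sum((x - mu) ** 2 for x in sample) / len(sample)) ** 0.5
edges = [mu + sigma * (-4 + 0.5 * k) for k in range(17)]  # 16 classes over +/- 4 sigma
print("chi-square score:", chi2_score(sample, edges, mu, sigma))
```

For a genuinely normal sample the score stays close to the number of degrees of freedom; the 9049 obtained on the P image is orders of magnitude larger, hence the rejection of normality.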
16.2.2 Variographic Analysis
We now wish to estimate the spatial variability of P, by computing its experimental variogram. The
data being organized on a regular grid, the program takes this information into account to calculate
two variograms by default in a more efficient way: the one established by comparing nodes belong-
ing to the same row (X direction) and the one obtained by comparing nodes belonging to the same
column (Y direction). The number of lags is set to 90; be sure to modify the parameter twice (once
for each direction of calculation).
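On a regular grid, the row and column variograms reduce to means of squared differences of shifted copies of the array. A Python sketch on a toy field (not the P image):

```python
import numpy as np

def grid_variogram(z, nlags):
    """Experimental variograms along grid rows (X) and columns (Y):
    gamma(h) = mean of 0.5 * (z(s+h) - z(s))^2 over all pairs at lag h."""
    gx = [0.5 * np.mean((z[:, h:] - z[:, :-h]) ** 2) for h in range(1, nlags + 1)]
    gy = [0.5 * np.mean((z[h:, :] - z[:-h, :]) ** 2) for h in range(1, nlags + 1)]
    return np.array(gx), np.array(gy)

# toy isotropic field: smooth signal plus white noise (a nugget component)
rng = np.random.default_rng(2)
n = 128
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
z = np.sin(xx / 10.0) + np.sin(yy / 10.0) + rng.normal(0, 0.5, (n, n))

gx, gy = grid_variogram(z, 30)
print(gx[:3], gy[:3])
```

On such an isotropic field the two curves overlay, exactly as observed on the P image; the value at lag 1 is dominated by the noise variance, i.e. the nugget effect.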
(snap. 16.2-2)
Note - We could try to calculate the variogram cloud on this image: nevertheless, for one (any)
direction, the smallest distance (once the grid mesh) already corresponds to 256 x 255 pairs, the
second lag to 256 x 254 pairs, and so on. Needless to say, this procedure takes an enormous amount of
time to draw and selectively picking some "abnormal" pairs is almost impossible. Therefore this
option is not recommended.
(snap. 16.2-3)
This figure represents the two directional variograms that overlay almost perfectly: this informs us
that the variable behaves similarly with respect to distance along the two main axes. This is almost
enough to conclude that the variable is isotropic. Actually, two orthogonal directional variograms are
not theoretically sufficient as the anisotropy could happen on the first diagonal and would not be
visible from the two main axes. The study can be completed by calculating the experimental vario-
grams along the main axes and along the two main diagonals: this test confirms in the present case
the isotropy of the variable. The two experimental directional variograms are stored in a new
Parameter File called P.
To fit a model to these experimental curves, we use the Statistics / Variogram Fitting procedure,
naming the Parameter File containing the experimental quantity (P) and the one that will ultimately
contain the model. You can name it P for better convenience, keeping in mind that, although they
have the same name, there is no ambiguity between these two files as their contents belong to two
different types.
(snap. 16.2-4)
By pressing the Edit button of the main window, you can define the model interactively and check
the quality of the fitting using any of the graphic windows available (Fitting or Global). Each mod-
ification must be validated using the Test button in order for the graphic to be updated. The Auto-
matic Sill fitting and the Model Initialization of the main window can be used to help you to
determine the optimal sill and range values for each basic structure constituting the model. A cor-
rect fit is obtained by adding a large nugget effect to a very regular behavior corresponding to a
Cubic variogram with a range equal to 0.17 mm.
(fig. 16.2-3)
The parameters can also be printed using the Print button in the Model Editing panel.
Model : Covariance part
=======================
Number of variables = 1
- Variable 1 : P
Number of basic structures = 2
S1 : Nugget effect
Sill = 40.2576
S2 : Cubic - Range = 0.17mm
Sill = 14.7493
Model : Drift part
==================
Number of drift functions = 1
- Universality condition
Click on Run (Save) to save your latest choice in the model parameter file.
16.3 Filtering by Kriging
This task corresponds to the Interpolate / Estimation / Image Filtering & Deconvoluting procedure.
First, define the names of directory (Images), file (Grid) and variable (P) of interest which contain
the information. There is no possibility of selecting the output file as it corresponds to the input file,
by construction in this procedure. The only choice is to define the name of the variable which will
receive the result of the kriging process: P denoised. The parameter file containing the model is called
P and a new file called Images P is created for the definition and the storage of the neighborhood
parameters.
(snap. 16.3-1)
When pressing the Neighborhood Edit button, you can set the parameters defining this Image
neighborhood. Referring to the target node as the reference, this image neighborhood is character-
ized by the extensions of the rectangle centered on the target node: the extension is specified by its
radius. Hence in 2D, a 0x0 neighborhood corresponds to the target node alone, whereas a 1x1
neighborhood includes the eight nodes adjacent to the target node.
target cell of a 1x1 image neighborhood
For some applications, it may be convenient to reach large distances in the neighboring informa-
tion. However, the number of nodes belonging to the neighborhood also increases rapidly which
may lead to an unreasonable dimension for the kriging system. A solution consists in sampling the
neighborhood rectangle by defining the skipping ratio: a value of 1 takes all information available,
whereas a value of 2 takes one point out of 2 on average. The skipping algorithm manages to keep a
larger density of samples close to the target node and sparser information as the distance increases.
Actually, the sampling density function is inspired by the shape of the variogram function, which
means that this technique also takes anisotropy into account.
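A possible skipping scheme can be sketched as follows. The taper rule here is hypothetical (linear in normalized distance), whereas Isatis derives its sampling density from the variogram shape:

```python
import numpy as np

def skip_sample(radius, skipping_ratio, seed=0):
    """Sample an image neighborhood: a ratio of 1 keeps every node; a ratio r
    keeps about one node out of r on average, denser close to the target node.
    (Hypothetical rule; the actual Isatis scheme is variogram-driven.)"""
    rng = np.random.default_rng(seed)
    dmax = np.hypot(radius, radius)
    kept = []
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            if (i, j) == (0, 0):
                kept.append((i, j))            # always keep the target node
                continue
            d = np.hypot(i, j) / dmax          # normalized distance in [0, 1]
            p = 1.0 if skipping_ratio == 1 else (2.0 / skipping_ratio) * (1.0 - d)
            if rng.random() < p:
                kept.append((i, j))
    return kept

print(len(skip_sample(10, 1)), len(skip_sample(10, 2)))
# radius 10, ratio 1 -> all 441 nodes; ratio 2 -> roughly half, mostly near the centre
```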
(snap. 16.3-2)
Prior to running the process on the whole grid, it may be worth checking its performance on one
grid node in particular. This can be realized by pressing the Test button which produces a graphic
page where the data information is displayed. Because of the amount of data available (256 x 256) the
page shows a solid black square. Using the zooming (or clipping) facility on the graphic area, we
can magnify the picture until a set of limited cells are visible (around 20 by 20).
By clicking on the graphic area, we can select the target node (select the one in the center of the
zoomed area). Then the graphic shows the points selected in the neighborhood, displaying their
kriging weight (as a percentage). The bottom of the graphic page recalls the value of the estimate,
the corresponding standard deviation (square root of the variance) and the value for the sum of
weights. The first trial simply reminds us that kriging is an exact interpolator: as a data point is
located exactly on top of the target node, it receives all the weight (100%) and no other information
carries weight.
In order to perform filtering, we must press the Special Model Options button and ask for the Filter-
ing option. The covariance and drift components are now displayed where you have to select the
item that you wish to filter. The principle is to consider that the measured variable (denoted Z) is
the direct sum of two uncorrelated quantities, the underlying true variable (denoted Y) and the noise
(denoted ε): Z = Y + ε. Due to the absence of correlation, the experimental variogram may be
interpreted as the sum of a continuous component (the Cubic variogram) attributed to Y and the
nugget effect corresponding to the noise ε. Hence filtering the nugget effect is equivalent to sup-
pressing the noise from the input image.
(snap. 16.3-3)
When pressing the Apply button, the filtering procedure is automatically resumed on the graphic
page, using the same target grid node as in the previous test: you can check that the weights are now
shared on all the neighboring information, although they still add up to 100%.
Before starting the filtering on the whole grid, the neighborhood has to be tuned. An efficient qual-
ity index frequently used in image analysis, called the Signal to Noise Ratio, is provided when dis-
playing the Results (in the Application Menu of the graphic page). Roughly speaking, the larger this
quantity, the more accurate the result.
The following table summarizes some trials that you can perform. The average number of data in
the neighborhood is recalled, as it directly conditions the computing time.
The Ratio increases quickly and then seems to converge with a radius equal to 8-9. Trying a neigh-
borhood of 10 and a skipping ratio of 2 does not lead to satisfactory results. It is then decided to use
a radius of 8 for the kriging step.
Radius   Number of nodes   Skipping Ratio   Signal to Noise Ratio
   1             9                1                   3.3
   2            25                1                   9.1
   3            49                1                  17.8
   4            81                1                  29.9
   5           121                1                  41.5
   6           169                1                  53.7
   7           225                1                  63.4
   8           289                1                  69.5
   9           361                1                  72.4
  10           441                1                  73.5
  10           222                2                  49.37

An interesting concern is to estimate a target grid node located in the corner of the grid. In order to
keep the data pattern unchanged for all the target nodes, including those located on the edge of the
field, the field is virtually extended by mirror symmetry. In the following display, the weights
attached to virtual points are added to the ones attached to the actual source data.
(snap. 16.3-4)
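The mirror-symmetry extension can be mimicked with numpy's symmetric padding. This sketch shows how a corner node recovers a full neighborhood on a toy 4 x 4 grid (not the case-study image):

```python
import numpy as np

# A target node near the grid corner keeps a full data pattern because the
# field is virtually extended by mirror symmetry; numpy's "symmetric" padding
# reproduces that extension.
z = np.arange(16, dtype=float).reshape(4, 4)
radius = 2
zp = np.pad(z, radius, mode="symmetric")

# full neighborhood of the corner node (0, 0), read from the padded grid
corner = zp[0:2 * radius + 1, 0:2 * radius + 1]
print(corner)
```

Each virtual (mirrored) point duplicates an actual sample, which is why its kriging weight is added to the weight of the source data point.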
The final task consists in performing the filtering on the whole grid.
Note - The efficiency of this kriging application comes from taking full advantage of the regular
pattern of the information: a single kriging system with 121 neighborhood data serves all 65536
grid nodes.
The resulting variable varies from 29 to 45, to be compared with the initial statistics. It can be dis-
played as the initial image, where the color scale has been adapted.
(fig. 16.3-1)
Kriging Filter
This image shows more regular patterns with larger extension for the patches of low and high P val-
ues. Compared to the initial image, it shows that the noise has clearly been removed.
16.4 Other Techniques
We return to the basic assumption that the measured variable Z is the combination of the underlying
true variable Y and the noise ε:

Z = Y + ε     (eq. 16.4-1)
It is always assumed that the noise is a zero mean quantity, uncorrelated with Y, and whose variance
is responsible for the nugget effect component of the variogram. In order to eliminate the noise, a
good solution is to perform the convolution of several consecutive pixels on the grid: this technique
corresponds to one of the actions offered by the Tools / Grid or Line Smoothing operation.
On a regular grid, the low pass filtering algorithm performs the following very simple operation on
three consecutive grid nodes in one direction:

Z(i) <- (1/4) Z(i-1) + (1/2) Z(i) + (1/4) Z(i+1)     (eq. 16.4-2)
A second pass is also available which enhances the variable and avoids flattening it too much. It
operates as follows:

Z(i) <- -(1/4) Z(i-1) + (3/2) Z(i) - (1/4) Z(i+1)     (eq. 16.4-3)
When performed on a 2D grid and using the two filtering passes, the following sequence is per-
formed on the whole grid:
l filter the initial image along X with the first filtering mode,
l filter the result along X with the second filtering mode,
l filter the result along Y with the first filtering mode,
l filter the result along Y with the second filtering mode.
If several iterations are requested, the whole sequence is resumed, replacing the initial image by the
result of the previous iteration when starting a subsequent iteration. This mechanism can be con-
strained so that the impact of the filtering on each grid node is not stronger than a cutoff variable
(the estimation standard deviation map for instance): this feature is not used here.
We decide empirically to perform 20 iterations of the two-pass filtering on the initial image (P)
and to store the result on the new variable called P smoothed.
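The two passes and the 2-D iteration sequence can be sketched in Python; boundary handling by edge replication is an assumption, as the manual does not specify it:

```python
import numpy as np

def pass1(z):
    """First pass along the last axis: z(i) <- z(i-1)/4 + z(i)/2 + z(i+1)/4."""
    zp = np.pad(z, [(0, 0), (1, 1)], mode="edge")
    return 0.25 * zp[:, :-2] + 0.5 * zp[:, 1:-1] + 0.25 * zp[:, 2:]

def pass2(z):
    """Second, enhancing pass: z(i) <- -z(i-1)/4 + 3*z(i)/2 - z(i+1)/4."""
    zp = np.pad(z, [(0, 0), (1, 1)], mode="edge")
    return -0.25 * zp[:, :-2] + 1.5 * zp[:, 1:-1] - 0.25 * zp[:, 2:]

def smooth_iteration(z):
    """One iteration of the 2-D sequence: both passes along one grid
    direction, then both passes along the other."""
    z = pass2(pass1(z))          # along the first direction
    z = pass2(pass1(z.T)).T      # along the second direction
    return z

# toy noisy image (not the P image)
rng = np.random.default_rng(3)
image = np.sin(np.arange(64)[None, :] / 8.0) + rng.normal(0, 0.5, (64, 64))

result = image
for _ in range(20):              # 20 iterations, as in the case study
    result = smooth_iteration(result)
print(result.std(), image.std())
```

The combined frequency response of the two passes is at most 1, so iterating is stable: high frequencies shrink toward zero while the smooth signal is essentially preserved.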
(snap. 16.4-1)
The result is displayed using the same type of representation as before. Nevertheless, please pay
attention to the difference in color coding. The image also shows much more structured patterns
although this time the initial high-frequency noise has only been diluted (and not suppressed),
which causes the spotted aspect.
(fig. 16.4-1)
Low Pass Filter
Using the same window Tools / Grid or Line Smoothing, we can try another operator such as the
Median Filtering. This algorithm considers a 1D neighborhood of a target grid node and replaces its
value by the median of the neighboring values. In 2D, the whole grid is first processed along X, and
the result is then processed along Y. If several iterations are required, the whole sequence is
resumed. Here, two iterations are performed with a neighborhood radius of 10 pixels (excluding the
target grid node) so that each median is calculated on 21 pixels. The result is stored in the new vari-
able called P median.
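The separable median filter can be sketched the same way (reflection at the edges is an assumption):

```python
import numpy as np

def median_1d(z, radius):
    """Median filter along the last axis: each value is replaced by the median
    of the 2*radius + 1 values centred on it (edges padded by reflection)."""
    zp = np.pad(z, [(0, 0), (radius, radius)], mode="symmetric")
    win = 2 * radius + 1
    stacked = np.stack([zp[:, k:k + z.shape[1]] for k in range(win)], axis=0)
    return np.median(stacked, axis=0)

def median_iteration(z, radius):
    """One iteration in 2-D: the grid is processed along X, then along Y."""
    return median_1d(median_1d(z, radius).T, radius).T

# toy image with impulsive noise (not the P image)
rng = np.random.default_rng(4)
image = np.where(rng.random((64, 64)) < 0.05, 10.0, 0.0)

result = image
for _ in range(2):               # two iterations, radius 10 -> 21-pixel medians
    result = median_iteration(result, 10)
print(result.max())
```

With 21-pixel windows, isolated spikes never reach the median, which is why this filter smooths so aggressively.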
(snap. 16.4-2)
The result is displayed with the same type of representation as before: it is even smoother than the
kriging result, which is not surprising given the length of the neighborhood selected for the median
filter algorithm.
(fig. 16.4-2)
Median Filter
The real drawback of these two methods is the lack of control in the choice of the parameters (num-
ber of iterations, width of the neighborhood) whereas in the case of kriging, the quantity to be fil-
tered is derived from the model which relies on statistics calculated on actual data, and the
neighborhood is simply a trade-off between accuracy and computing time.
16.5 Comparing the Results
16.5.1 Connected Components
The idea is to use the Interpolate / Interpolation / Grid Operator, which offers several functions
linked to Mathematical Morphology to operate on the image.
The window provides an interpreter which sequentially performs all the transformations listed in
the calculation area. The formula involves:
l Variables (which are defined in the upper part of the window) through their aliases v* for 1 bit
variables and w* for real variables.
l Thresholds which correspond to intervals and are called t*.
l Structural elements which define a neighborhood between adjacent cells and are called s*. In
addition to their extension (defined by its radius in the three directions), the user can choose
between the block or the cross element, as described on the next figure:
Cross (X=2; Y=1) Block (X=2; Y=1)
Here the procedure is used to perform two successive tasks:
l Using the input variable P denoised, first apply a threshold considering as grain any pixel
whose value is larger than 40 (inclusive); otherwise the pixel corresponds to pore. The result is
stored in a 1 bit variable called grain (P denoised): in fact this variable is a standard selection
variable that can be used in any other Isatis application, where the pores correspond to the
masked samples.
l Calculate the connected components and sort them by decreasing size. A connected component
is composed of the set of grain pixels which are connected by the structural element. The result,
which is the rank of the connected component, is stored in the real variable called cc (P
denoised).
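The two tasks, thresholding and ranked connected components, can be sketched in Python with a BFS flood fill standing in for the Grid Operator:

```python
import numpy as np
from collections import deque

def connected_components(grain, cross=True):
    """Label grain pixels (True) with a BFS flood fill, then rank the
    connected components by decreasing size (rank 1 = largest)."""
    neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # cross structural element
    if not cross:                                      # block element: add diagonals
        neigh += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = np.zeros(grain.shape, dtype=int)
    sizes = []
    for i, j in zip(*np.nonzero(grain)):
        if labels[i, j]:
            continue                                   # already visited
        lab = len(sizes) + 1
        labels[i, j] = lab
        queue, count = deque([(i, j)]), 0
        while queue:
            a, b = queue.popleft()
            count += 1
            for da, db in neigh:
                x, y = a + da, b + db
                if (0 <= x < grain.shape[0] and 0 <= y < grain.shape[1]
                        and grain[x, y] and not labels[x, y]):
                    labels[x, y] = lab
                    queue.append((x, y))
        sizes.append(count)
    remap = np.zeros(len(sizes) + 1, dtype=int)        # label -> rank by size
    for rank, lab in enumerate(np.argsort(sizes)[::-1], start=1):
        remap[lab + 1] = rank
    return remap[labels], sorted(sizes, reverse=True)

# toy P values (not the case-study image)
p = np.array([[41, 45, 10, 50],
              [12, 44, 11, 49],
              [13, 14, 15, 48]])
grain = p >= 40                     # threshold at 40 (inclusive): grain vs pore
ranked, sizes = connected_components(grain)
print(sizes)                        # -> [3, 3]
```

The boolean `grain` array plays the role of the 1-bit selection variable, and the `ranked` array matches the rank-of-component variable described above.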
(snap. 16.5-1)
The procedure also produces a printout, listing the different connected components by decreasing
size, recalling the cumulative percentage of grain.
The same procedure is also applied on the three resulting images. The following table recalls some
general statistics for the 3 variables:
The different results are produced as images where the pore is painted in black.
(fig. 16.5-1)
Grains for Kriging Filter
Resulting Image                    P denoised   P median   P smoothed
Total amount of grain                  5308        5887        7246
Number of connected components           11           4         122
5 largest components (in pixels)       1718        1882        1733
                                       1008        1864        1650
                                        978        1167        1257
                                        862         974         842
                                        543                     174
(fig. 16.5-2)
Grains for Low Pass Filter
(fig. 16.5-3)
Grains for Median Filter
16.5.2 Cross-sections
The second way to compare the three resulting images consists in representing each variable as the
elevation along one cross-section drawn through the grid.
This is performed using a Section in 2D Grid representation of the Display facility, applied to the 3
variables simultaneously. The parameters of the display are shown below.
(snap. 16.5-2)
Clicking on the Trace... button allows you to specify the trace that will be represented. For instance,
to represent the first diagonal of the image, enter the following vertices:
(snap. 16.5-3)
In the Display Box tab of the Contents window, modify the Z Scaling Factor to 0.0005.
The three profiles are shown in the next figure and confirm the previous impressions (P denoised in
red, P smoothed in blue and P median in black).
(fig. 16.5-4)
(fig. 16.5-5)
17 Boolean
This case study demonstrates some of the large variety of possibilities
offered by the implementation of the Boolean Conditional Simulations.
This simulation technique belongs to the category of Object Based sim-
ulations. It consists in dropping objects with different shapes (defined
by the user) in a 3D volume, fulfilling the conditioning information
defined in terms of pores and grains.
Last update: Isatis version 10.0
17.1 Presentation of the Dataset
This simulation type requires data to be defined on lines in a 3-D space. The file bool_line.hd con-
tains a single vertical line (located at coordinates X=5000m and Y=5000m), constituted of 50 sam-
ples defined at a regular spacing of one meter, from 100m to 149m. You must load it using File /
Import / ASCII, creating a new Directory Boolean, and a new File Lines. Set the input-output
length units in meters, the X and Y graphical axis units in kilometers and the Z axis in meters in
the Preferences / Study Environment / Units.
(snap. 17.1-1)
Note - The dataset has been drastically reduced to allow a quick and good understanding of the
conditioning and reduce the computing time.
The file refers to a Line Structure which corresponds to the format used for defining several sam-
ples gathered along several lines (i.e. boreholes or wells) in the same file. The original file contains
five columns which correspond to:
l the sample number: it is not described in the header and will not be loaded (the software gener-
ates it automatically in any case),
l the coordinate of the sample gravity center along X,
l the coordinate of the sample gravity center along Y,
l the coordinate of the sample gravity center along Z,
l the variable of interest, called facies which only contains 0 and 1 values. This information is
considered as the geometrical input used for conditioning the boolean simulations. One can
think of 0 for shale and 1 for sandstone to illustrate this concept. In this case study, the word
grain is used for 1 values and the word pore for 0 values.
You need to go to Tools / Convert Gravity Lines to Core Lines since the boolean simulation tool
works only with Core Lines. Convert the Lines using the From Isatis <v9 Lines File option.
(snap. 17.1-2)
The boolean conditional simulation is run on a regular grid which has to be created beforehand
using the File / Create Grid File facility. It consists of a regular 3-D grid containing 201 x 201 x 51
nodes, with a mesh of 50 m x 50 m x 1 m and whose origin is located at point (X=0; Y=0;
Z=100m). The user may check in the Data File Manager that the grid extends from 0m to 10000m
both in X and Y, and vertically from 100m to 150m.
(snap. 17.1-3)
17.2 Boolean Environment
The boolean simulation facility is located in the Interpolate / Conditional Simulations / Boolean
menu. This window requires the definition of several items, presented hereafter.
(snap. 17.2-1)
17.2.1 Conditioning Information
This item refers to the Line file that has been imported. The target variable is obviously the facies
variable. As this variable may contain any numerical value, it is compulsory to specify how this
numerical variable has to be converted into a boolean variable (only 0 and 1 values). This is done
by defining the threshold rule in the sub-window that pops up when pressing the button called Set
Definition.... Here the interval is simply set to [1,1].
17.2.2 Output Grid
The variable that will be used to store the result of the Boolean Conditional Simulation has to be
defined: it is called Simulation 1. For each grid node, this variable will contain the resulting indica-
tor value, i.e. a value equal to:
l 1 if the grid node belongs to at least one object,
l 0 if the node does not belong to any object.
An important point to remember is that, during the simulation process, the conditioning data are
assigned to the closest node of the grid. Because of this discretization step, if two samples carrying
two different indicator values are assigned to the same grid node, an error message is sent and the
procedure is interrupted.
The procedure also uses the value "-1" to designate a grid node which coincides with a condi-
tioning grain value. This is the reason why the output variable is not created as a 1-bit variable: the
software uses the default 32-bit format.
17.2.3 Object Family Definition
This is where the user defines the shapes and dimensions of the objects to be simulated. Different
examples will be given in this case study; for the moment, just press the Add button and enter the
following parameters:
(snap. 17.2-2)
17.2.4 Parameters
The Boolean Conditional Simulation parameters are briefly described hereafter. For more informa-
tion, the user should refer to the On-Line documentation.
This Object Based simulation technique consists in dropping objects in a 3D space, so that they
intersect the field (3D grid) to be simulated. Obviously, to have an even spread of objects over the
space, we must take into consideration not only the objects whose center lies within the field to be
simulated, but also those located in its immediate periphery. This periphery is called the Guard
Zone and is defined by its dimensions along the three main axes. Here they have been set to 1800m
along X, 1000m along Y and 2m along Z. This implies that the radius of the objects that we con-
sider should not be larger than these values. Note that no test is performed to ensure this compat-
ibility.
The objects are dropped according to a random process which requires the following parameters:
l the number of objects to be generated before the simulation stops. Actually, the user has to
define either a Poisson intensity or the related average number of objects (dropped in the dilated
domain) that the simulation aims to reach (1000 here).
l the seed used to generate the random values: to generate different outcomes of this boolean con-
ditional simulation technique, it is compulsory to change this seed value before each run. Note
that, if the seed is set to 0, Isatis automatically generates a different seed at each run.
The boolean simulation algorithm relies on a death and birth process which may either create or
delete objects. Therefore, the average number of objects must be considered as a target number
that will be reached only if the simulation is run for a long time. It is common practice, however,
to provide a Maximum Time that will be used to stop the process prematurely (100).
Moreover, this iterative process is performed in two steps: a preliminary step consists in drop-
ping some initial objects at preferential locations simply to fulfill the conditioning data. These
initial objects must disappear during the process.
A Graphic Output enables you, after the run, to check the evolution of the total number of objects
as well as the proportion of initial objects (not visible without zooming in on the lower left corner).
(fig. 17.2-1)
17.2.5 Theta Function
The definition of the Theta Function is available by pressing the Theta Intensity button.
(snap. 17.2-3)
The density of the objects (regarding their centers) does not have to be even over the whole dilated
domain. The Theta function θ(z) describes the object density along the vertical axis. It is
defined (up to its sign) as log P(h), where P(h) is the probability that some pores extend from
z to z+h, without encountering any grain in the meantime. The value h corresponds to the Minimum
Pore Length defined by the user in terms of layers. Finally, the Theta function can be smoothed by
averaging its value over several consecutive layers. For more information, the user should refer to
the On-Line documentation.
This Theta function might also be derived from the conditioning information (Calculate from Data
button) and displayed graphically. The picture corresponds to a minimum pore length of 1 and no
smoothing (Number of layers averaged set to 1).
(snap. 17.2-4)
Simultaneously, Isatis calculates and represents three statistical quantities that may help analyzing
the quality of the conditioning information and understanding the simulation process:
l the grain proportion, which simply tells us, for a given horizontal grid level, what proportion
of the conditioning information corresponds to grain,
l the histogram of the pore lengths,
l the pore survival function, which gives the average residual length of the pores whose length
is larger than a given value, as a function of this value.
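These pore statistics can be sketched from a 0/1 facies column. The theta proxy below (minus the log of the proportion of pores at least h samples long) is an illustration of the Minimum Pore Length idea, not the exact Isatis estimator:

```python
import math

def pore_lengths(facies):
    """Lengths of the maximal runs of pore (0) values along a line of samples."""
    runs, current = [], 0
    for v in facies:
        if v == 0:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def survival(runs, l):
    """Average residual length of the pores whose length is larger than l."""
    longer = [r - l for r in runs if r > l]
    return sum(longer) / len(longer) if longer else 0.0

def theta(runs, h):
    """-log P(h), with P(h) the proportion of pores at least h samples long
    (hypothetical proxy for the Theta intensity)."""
    p = sum(r >= h for r in runs) / len(runs)
    return -math.log(p) if p > 0 else float("inf")

# toy facies line: 0 = pore, 1 = grain (not the case-study data)
facies = [0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0]
runs = pore_lengths(facies)
print(runs, survival(runs, 1), theta(runs, 2))
```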
Only the Theta function varies when the values for the Minimum Pore Length and the Number of
layers averaged are modified. Set the Minimum Pore Length to 3 and check how the graphic is
modified.
You can then smooth out this function by setting the number of layers on which the function is cal-
culated to 4.
Finally, the values of the Theta variable are displayed in a scrolled editable area, where the user can
modify them by hand. Any value lying between 0 and 1 is admissible. Nevertheless one must
remember that a value of 0 at a given horizontal grid level implies that no object may be generated
at this level. This constraint must at least be compatible with the conditioning information.
For the sake of simplicity, the rest of this chapter will be processed setting the Minimum Pore
Length to 1 and suppressing any smoothing.
17.3 Simulations
This paragraph is focused on the description of the Object Law. Each example describes an Object
Family Definition and illustrates the result through the display of a simulation outcome.
17.3.1 Exercise 1
The first trial uses the already described parameters for a single type of parallelepipedic object. All the parallelepipedic objects have the same geometrical characteristics:
- extension along X = 1800m
- extension along Y = 1000m
- extension along Z = 2m
The next figure represents a display of Z level number 10 of the grid using the Display facility. A grid node which does not intersect any object (value 0) is painted in black; if at least one object is intersected (value 1), the color is white. If a conditioning grain coincides with the grid node (value -1), the node is painted in grey. Due to the very fine definition of the grid (the picture corresponds to 200 x 200 grid nodes), the conditioning sample at this level (located at coordinates X=5000m, Y=5000m) is hardly visible.
(fig. 17.3-1: Simulation 1 — X (km) vs Y (km) map)
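An unconditional version of this first trial can be sketched as follows. This is a minimal illustration, not the Isatis implementation: the grid size, the number of objects, and the uniform placement of centers are assumptions, and the conditioning step handled by Isatis is omitted.

```python
import random

def simulate_parallelepipeds(nx, ny, cell, n_obj, dx, dy, rng):
    """Boolean simulation of a 2D slice: value 1 where at least one
    axis-aligned parallelepiped (dx by dy) covers the node, else 0."""
    grid = [[0] * nx for _ in range(ny)]
    for _ in range(n_obj):
        # object center drawn uniformly over the field dilated by half an object
        cx = rng.uniform(-dx / 2, nx * cell + dx / 2)
        cy = rng.uniform(-dy / 2, ny * cell + dy / 2)
        for iy in range(ny):
            for ix in range(nx):
                x, y = ix * cell, iy * cell
                if abs(x - cx) <= dx / 2 and abs(y - cy) <= dy / 2:
                    grid[iy][ix] = 1
    return grid
```

Dilating the placement domain by half an object in each direction avoids the edge effect where objects overlapping the field border would otherwise be under-represented.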
17.3.2 Exercise 2
In this exercise, while keeping the parallelepipedic objects, set their vertical thickness to 3m. This run fails and returns errors specifying that some conditioning grains could not be covered by objects. This simply reveals an incompatibility between the object description and the conditioning data: reading the conditioning data along the line, you can find (several times) the sequence 0,1,1,0, which implies the presence of an object between two conditioning pores that are precisely 3m apart; such an interval cannot accommodate objects whose thickness is constantly equal to 3m.
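The incompatibility can be checked mechanically. The sketch below is an illustration only (the function name and the regular vertical sampling step are assumptions); it scans a conditioning column for grain runs squeezed between pores too close together for a constant-thickness object:

```python
def incompatible_with_constant_thickness(column, dz, thickness):
    """Return the start indices of grain runs that cannot be covered.
    column is coded 0 = pore, 1 = grain, with samples dz metres apart.
    A grain run bounded by pores above and below must host an object
    strictly thinner than the distance between those two pores."""
    bad = []
    i = 0
    while i < len(column):
        if column[i] == 1:
            j = i
            while j < len(column) and column[j] == 1:
                j += 1
            # grain run [i, j) bounded by a pore on each side?
            if i > 0 and j < len(column):
                gap = (j - i + 1) * dz  # distance between the two bounding pores
                if thickness >= gap:
                    bad.append(i)
            i = j
        else:
            i += 1
    return bad
```

On the sequence 0,1,1,0 sampled every metre, the bounding pores are 3m apart, so a constant 3m thickness is flagged while the original 2m thickness is not.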
17.3.3 Exercise 3
In this third trial, the parallelepipeds are replaced by lower half ellipsoids. The object extensions are kept unchanged, except for the vertical extension which is set to 4m.
(fig. 17.3-2)
The next picture presents a vertical section (XOZ) which intersects the 3D grid at coordinate Y=5000m (IY = 101). Do not forget to change the projection definition to XOZ in the Camera tab. The vertical dimension may be exaggerated for better legibility. This view is convenient to check that the conditioning is also honored where the sample density is high.
(fig. 17.3-3)
Note that the vertical extension of the ellipsoids in this exercise (4m), though larger than in the previous exercise (3m high parallelepipeds), does not cause any problem, as the thickness of an ellipsoid is not constant over the whole object.
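The shape can be made concrete with a small membership test. This is a sketch under assumptions: the analytic definition used by Isatis is not documented here, so the half-ellipsoid equation below, with its flat face on top at z = ztop, is an illustration.

```python
def in_lower_half_ellipsoid(x, y, z, cx, cy, ztop, dx, dy, dz):
    """True if (x, y, z) lies inside the lower half of an ellipsoid whose
    flat face is at z = ztop, with horizontal extensions dx, dy and
    vertical extension dz downward, centred at (cx, cy, ztop)."""
    if z > ztop or z < ztop - dz:
        return False
    u = (x - cx) / (dx / 2)
    v = (y - cy) / (dy / 2)
    w = (z - ztop) / dz  # lies in [-1, 0] inside the half object
    return u * u + v * v + w * w <= 1.0
```

Because this thickness tapers to zero toward the edges of the object, a 4m half ellipsoid can slip between closely spaced conditioning pores that a constant 3m slab cannot.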
(figure: Simulation 3 — X (km) vs Y (km) map)
17.3.4 Exercise 4
This exercise simulates lower half sinusoidal objects. This type of object requires six parameters:
- the value for half of the period (extension): 1300m,
- the amplitude of the sine function: 500m,
- the thickness of the sine function: 400m,
- the extension of the object along the sine function in the horizontal plane: 4000m,
- the extension along Z: 4m,
- the rotation angle: 0.
(fig. 17.3-4)
17.3.5 Exercise 5
Several types of objects may be mixed in the same simulation outcome. For instance, combine the three types of objects already presented and set the following proportions for each family of objects:
- 10% of parallelepipedic objects (1800m, 1000m, 2m)
- 60% of lower half ellipsoids (1800m, 1000m, 4m)
- 30% of lower half sinusoids (1300m, 500m, 400m, 4000m, 4m)
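Drawing the family of each new object according to these proportions can be sketched as follows; the family labels are taken from the text, and the use of a single uniform draw is just one simple way to implement the proportions:

```python
import random

def draw_family(rng):
    """Draw an object family according to the 10/60/30 proportions."""
    u = rng.random()
    if u < 0.10:
        return "parallelepiped"  # (1800m, 1000m, 2m)
    if u < 0.70:
        return "half ellipsoid"  # (1800m, 1000m, 4m)
    return "half sinusoid"       # (1300m, 500m, 400m, 4000m, 4m)
```

Over many draws the empirical frequencies approach 10%, 60%, and 30%.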
(figure: Simulation 4 — X (km) vs Y (km) map)
(fig. 17.3-5)
17.3.6 Exercise 6
In this exercise, set the object type back to the lower half ellipsoidal objects, in order to demonstrate non-constant geometrical parameters. Simply modify the definition of the extension along X of the ellipsoids: instead of being constantly equal to m = 1800m, a tolerance s = 1000m is defined so that the extension varies uniformly between m-s and m+s.
(fig. 17.3-6)
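The uniform tolerance amounts to one draw per object; a minimal sketch (the function name is illustrative, not Isatis terminology):

```python
import random

def draw_extension(mean, tol, rng):
    """Draw an object extension uniformly in [mean - tol, mean + tol]."""
    return rng.uniform(mean - tol, mean + tol)
```

With m = 1800m and s = 1000m, every drawn extension lies between 800m and 2800m.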
(figure: Simulation 5 — X (km) vs Y (km) map)
(figure: Simulation 6 — X (km) vs Y (km) map)
17.3.7 Exercise 7
This final example consists in varying the rotation angle. Keeping the initial lower half ellipsoids, allow the rotation angle to vary in an interval centered on 45 degrees with a tolerance of 20 degrees (i.e. from 25 to 65 degrees from the E-W direction).
(fig. 17.3-7)
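The varying rotation can be sketched the same way; the anticlockwise convention and the function names below are assumptions of this illustration:

```python
import math
import random

def draw_angle(center, tol, rng):
    """Rotation angle drawn uniformly in [center - tol, center + tol] degrees."""
    return rng.uniform(center - tol, center + tol)

def rotate(x, y, angle_deg):
    """Rotate (x, y) by angle_deg anticlockwise about the origin."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

Each object receives its own angle in [25, 65] degrees, and its axes are rotated accordingly before the membership test is applied.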
(figure: Simulation 7 — X (km) vs Y (km) map)