October 2005
Contents
0. Introduction
1. Creating matrices
2. Matrix operations
3. GAUSS commands
4. Graphics
5. Conditional branching
6. Random numbers
7. Loops
8. Simulation
9. Procedures
10. Maximum likelihood estimation – MAXLIK
1. Creating matrices
Program:
W = { 1 5 ,
      2 5 ,
      3 5.8 ,
      4 5 };
"W ="; W;
W1 = W[3,2];
"W1 ="; W1;
"W[3,.]"; W[3,.];
"W[2:4,1]"; W[2:4,1];
"W[1 3 4,.]"; W[1 3 4,.];
"(Y~W~Y)|(Z~Z)"; (Y~W~Y)|(Z~Z);
Output:
W =
 1.0000000 5.0000000
 2.0000000 5.0000000
 3.0000000 5.8000000
 4.0000000 5.0000000
W1 =
 5.8000000
W[3,.]
 3.0000000 5.8000000
W[2:4,1]
 2.0000000
 3.0000000
 4.0000000
W[1 3 4,.]
 1.0000000 5.0000000
 3.0000000 5.8000000
 4.0000000 5.0000000
(Y~W~Y)|(Z~Z)
 10.000000 1.0000000 5.0000000 10.000000
 20.000000 2.0000000 5.0000000 20.000000
 30.000000 3.0000000 5.8000000 30.000000
 40.000000 4.0000000 5.0000000 40.000000
 100.00000 0.00000000 100.00000 0.00000000
 0.00000000 100.00000 0.00000000 100.00000
(Y and Z were created earlier in the same example; judging from the output, Y is the column vector (10, 20, 30, 40)' and Z is a 2x2 diagonal matrix with 100 on the diagonal.)
1.2. More examples on matrix creation (ex-02.gss)
Program:
load a[4,3] = c:\gauss\examples\curso\data.txt;
"a="; a;
c = ones(2,4);
"c="; c;
Output:
a=
 1.1000000 1.0000000 3.0000000
 0.10000000 2.0000000 2.0000000
 0.90000000 3.0000000 1.0000000
 2.0000000 4.0000000 0.00000000
c=
 1.0000000 1.0000000 1.0000000 1.0000000
 1.0000000 1.0000000 1.0000000 1.0000000
2. Matrix operations
Commonly used matrix operators:
+ Addition
- Subtraction
* Matrix multiplication
.* Element-by-element multiplication
^ Element-by-element exponentiation
./ Element-by-element division
/ Division or linear equation solution
.*. Kronecker (tensor) product
' Transpose operator
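As a quick sketch of the difference between the matrix and element-by-element operators (this example is not from the original handout; any small matrices would do):

```gauss
A = { 1 2, 3 4 };
B = { 10 20, 30 40 };
"A*B ="; A*B;       /* matrix product: 70 100 / 150 220 */
"A.*B ="; A.*B;     /* element-by-element product: 10 40 / 90 160 */
"A' ="; A';         /* transpose of A */
"A.*.B ="; A.*.B;   /* Kronecker product: a 4x4 matrix */
```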
Examples of addition operations (ex-03.gss):
Program:
let a[5,2] = 10 10
             10 20
             10 30
             10 40
             10 50;
"a="; a;
let b[5,2] = 1 1
             2 1
             3 1
             4 1
             5 1;
"b="; b;
"a+b="; a+b;
c = 100*ones(5,1);
"c="; c;
"a+c="; a+c;
e = 1000;
"e="; e;
Output:
a=
 10.000000 10.000000
 10.000000 20.000000
 10.000000 30.000000
 10.000000 40.000000
 10.000000 50.000000
b=
 1.0000000 1.0000000
 2.0000000 1.0000000
 3.0000000 1.0000000
 4.0000000 1.0000000
 5.0000000 1.0000000
a+b=
 11.000000 11.000000
 12.000000 21.000000
 13.000000 31.000000
 14.000000 41.000000
 15.000000 51.000000
c=
 100.00000
 100.00000
 100.00000
 100.00000
 100.00000
a+c=
 110.00000 110.00000
 110.00000 120.00000
 110.00000 130.00000
 110.00000 140.00000
 110.00000 150.00000
e=
 1000.0000
3. GAUSS commands
3.1. Commonly used GAUSS commands:
Linear equation solution:
INV Matrix inversion.
b/A Linear equation solution Ax = b.
Matrix dimensions:
COLS The number of columns of a matrix.
ROWS The number of rows of a matrix.
Sub-matrix extraction:
DIAG Extracts the diagonal of a matrix to a column vector.
DIAGRV Puts a column vector into the diagonal of a matrix.
SELIF Selects the rows of a matrix that satisfy a logical condition.
TRIMR Eliminates rows at the top and/or bottom of a matrix.
Mathematical functions:
ABS Absolute value.
INT Converts to an integer by truncating the fractional part.
EXP Exponential function.
LN Natural logarithm.
LOG Base-10 logarithm.
SQRT Square-root.
SIN Sine.
Descriptive statistics:
CORRX Correlation matrix of the columns of a matrix.
MEANC Mean of the elements in each column of a matrix.
STDC Standard deviation of each column of a matrix.
VCX Variance-covariance matrix.
There are many other functions that allow, for example, computing eigenvalues and
eigenvectors, performing several matrix decompositions, or handling missing values.
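A small sketch combining a few of the commands above (this example is not from the original handout):

```gauss
X = { 1 10, 2 20, 3 30, 4 40 };
"ROWS(X) =";; ROWS(X);    /* 4 */
"COLS(X) =";; COLS(X);    /* 2 */
"MEANC(X) ="; MEANC(X);   /* column means: 2.5 and 25 */
/* SELIF KEEPS THE ROWS OF X WHOSE FIRST COLUMN EXCEEDS 2 */
"SELIF ="; SELIF(X, X[.,1] .> 2);
```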
3.2. Example: least-squares estimation
Program - ex04.gss
/* READING THE DATA */
LET Y[8,1] = 3.1 7.6 10.4 11.2 9.0 17.9 15.7 9.3;
LET X1[8,1] = 1.2 3.1 2.5 5.2 3.9 7.5 6.7 1.2;
/* T = NUMBER OF OBSERVATIONS */
T = ROWS(Y);
/* REGRESSOR MATRIX (A CONSTANT AND X1); K = NUMBER OF REGRESSORS */
X = ONES(T,1)~X1; "X ="; X; "Y ="; Y;
K = COLS(X);
/* OLS ESTIMATES */
B = INV(X'X)*X'Y; "B ="; B;
/* RESIDUALS */
EHAT = Y - X*B;
"EHAT ="; EHAT;
"SUM OF RESIDUALS =";; SUMC(EHAT);
/* RESIDUAL SUM OF SQUARES AND ERROR VARIANCE */
EHAT2 = EHAT^2; "EHAT2 ="; EHAT2;
RSS = SUMC(EHAT2); "RSS =";; RSS;
S2 = RSS/(T-K); "S2 =";; S2;
/* COVARIANCE MATRIX, VARIANCES AND STANDARD ERRORS OF B */
VCOVB = S2*INV(X'X); "VCOVB ="; VCOVB;
VARB = DIAG(VCOVB); "VARB ="; VARB;
SEB = SQRT(VARB); "SEB ="; SEB;
/* T-STATISTICS */
TSTATB = B ./ SEB;
"TSTATB =";; TSTATB;
/* P-VALUES */
PVALUES = 2*CDFTC(TSTATB,T-K);
"PVALUES =";; PVALUES;
Output:
X =
1.0000000 1.2000000
1.0000000 3.1000000
1.0000000 2.5000000
1.0000000 5.2000000
1.0000000 3.9000000
1.0000000 7.5000000
1.0000000 6.7000000
1.0000000 1.2000000
Y =
3.1000000
7.6000000
10.400000
11.200000
9.0000000
17.900000
15.700000
9.3000000
B =
3.8391681
1.7088388
EHAT =
-2.7897747
-1.5365685
2.2887348
-1.5251300
-1.5036395
1.2445407
0.41161179
3.4102253
SUM OF RESIDUALS = -1.5099033e-014
EHAT2 =
7.7828429
2.3610426
5.2383071
2.3260215
2.2609318
1.5488816
0.16942426
11.629637
RSS = 33.317088
S2 = 5.5528481
VCOVB =
2.8368781 -0.54767337
-0.54767337 0.13998041
VARB =
2.8368781
0.13998041
SEB =
1.6843034
0.37413957
TSTATB =
2.2793803
4.5673833
PVALUES =
0.062853166
0.0038210152
The program above can be modified so that its output is stored in a file:
OUTPUT FILE = file name RESET;
...
(commands)
...
OUTPUT OFF;
END;
4. Graphics
4.1. Commonly used graphics commands (PGRAPH library):
xy X,Y graph
bar Bar graph
hist Histogram
xyz X, Y, Z graph
surface 3D surface graph
contour Contour graph
4.2. Example – A graph of the X1 and Y series from the least squares example.
Program - ex-04b.gss
LET Y[8,1] = 3.1 7.6 10.4 11.2 9.0 17.9 15.7 9.3;
LET X1[8,1] = 1.2 3.1 2.5 5.2 3.9 7.5 6.7 1.2;
LIBRARY PGRAPH;
GRAPHSET;
/* Y graph */
XY(1,Y);
/* X1 graph */
XY(1,X1);
/* X1 and Y graph */
XY(1,X1~Y);
/* X1-Y graph */
/* The following option turns off the option to connect points with
lines: */
_PLCTRL = -1;
XY(X1,Y);
One of the graphs: [figure omitted]
5. Conditional branching with IF
Example:
IF A < 0;
"A IS NEGATIVE";
B = 2*A;
ELSEIF A == 0;
"A EQUALS ZERO";
B = 0;
ELSE;
"A IS POSITIVE";
B = A^2;
ENDIF;
The expression after the IF must evaluate to a scalar. The expression is TRUE if its
value is nonzero and FALSE if it equals zero.
The ELSE command is optional and may appear at most once inside an IF block. The
ELSEIF is also optional and may appear several times.
When matrices are compared, the result only equals 1 (TRUE) if all the element-by-element
comparisons result in 1 (TRUE).
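A minimal sketch of this rule for matrix comparisons (an assumed example, not from the handout):

```gauss
A = { 1 2, 3 4 };
B = { 1 2, 3 5 };
IF A == B;             /* SCALAR RESULT: 1 ONLY IF ALL ELEMENTS ARE EQUAL */
   "A AND B ARE EQUAL";
ELSE;
   "A AND B DIFFER";   /* PRINTED HERE, SINCE A[2,2] /= B[2,2] */
ENDIF;
```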
6. Random numbers
6.2. Examples:
(b) Generate 10000 random numbers N(0,1) and N(µ,σ²) with µ = 10, σ = 5.
Program: ex-05b.gss
T = 10000;
Z = RNDN(T,1);
/* N(MU,SIGMA^2) VALUES VIA THE LOCATION-SCALE TRANSFORMATION X = MU + SIGMA*Z */
MU = 10;
SIGMA = 5;
X = MU + SIGMA*Z;
/* INITIALIZE GRAPHS */
LIBRARY PGRAPH;
GRAPHSET;
/* HISTOGRAMS OF THE Z AND X VALUES WITH 30 CATEGORIES */
CALL HIST(Z,30);
CALL HIST(X,30);
The graphs: [figures omitted]
(c) Generate 500 random numbers with a Cauchy distribution.
Program: ex-05c.gss
/* GENERATE T RANDOM NUMBERS WITH A CAUCHY DISTRIBUTION */
T=500;
/* THE CAUCHY CAN BE OBTAINED AS THE RATIO OF TWO NORMALS */
CAUCHY=RNDN(T,1)./RNDN(T,1);
/* FREQUENCY HISTOGRAM */
LIBRARY PGRAPH;
GRAPHSET;
CALL HISTP(CAUCHY,100);
Graph: [figure omitted]
7. Loops
7.1. DO command
The set of instructions between DO WHILE and ENDO is executed while the scalar
expression is TRUE. See the IF command for a discussion of relational operators.
/* EXAMPLE: PRINT THE VALUES OF I FROM 1 TO 8 */
I = 1;
DO WHILE I <= 8;
"I =";; I;
I = I + 1;
ENDO;
/* ANOTHER EXAMPLE*/
M = ZEROS(13,1);
I = 1;
DO WHILE I <= 13;
M[I,1] = I^2;
I = I + 1;
ENDO;
"M =";
M;
Output:
I = 1.0000000
I = 2.0000000
I = 3.0000000
I = 4.0000000
I = 5.0000000
I = 6.0000000
I = 7.0000000
I = 8.0000000
M =
1.0000000
4.0000000
9.0000000
16.000000
25.000000
36.000000
49.000000
64.000000
81.000000
100.00000
121.00000
144.00000
169.00000
7.2. Alternatives to the DO command
The dotted relational operators return the matrix of results of all element-by-element
comparisons:
.== Equal to
.< Less than
.<= Less than or equal to
.> Greater than
.>= Greater than or equal to
./= Not equal to
Since the result of these operators is not scalar, they cannot be used in IF or DO
commands. However, they may be very useful in other circumstances as shown in the
following example.
Z = RNDN(100000,1);
To count the number of negative values in this vector, the following program could
be used:
NUM = 0;
I = 1;
DO WHILE I <= ROWS(Z);
IF Z[I] < 0;
NUM = NUM + 1;
ENDIF;
I = I + 1;
ENDO;
"Number of negative values =";; NUM;
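The same count can be obtained without a loop: the comparison Z .< 0 produces a vector of zeros and ones, and SUMC adds them up. A sketch of this vectorized alternative:

```gauss
Z = RNDN(100000,1);
/* Z .< 0 IS A 100000x1 VECTOR OF 0/1 VALUES; SUMC SUMS EACH COLUMN */
NUM = SUMC(Z .< 0);
"Number of negative values =";; NUM;
```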
8. Simulation
(a) Computing the sample means of 10000 random samples of size 10 generated from
a N(10,5^2) distribution.
Program – ex-06b.gss
/* SAVE EXECUTION START TIME */
TEMPO = HSEC;
/* SAMPLE SIZE */
T = 10;
/* POPULATION MEAN */
MU = 10;
/* POPULATION STANDARD DEVIATION */
SIGMA = 5;
/* NUMBER OF SAMPLES TO GENERATE */
NSIMUL = 10000;
/* COLUMN VECTOR TO STORE THE SAMPLE MEANS */
MEDIAS = ZEROS(NSIMUL,1);
SIMUL = 1;
DO WHILE SIMUL <= NSIMUL;
/* GENERATE ONE SAMPLE OF SIZE T FROM N(MU,SIGMA^2) AND STORE ITS MEAN */
X = MU + SIGMA*RNDN(T,1);
MEDIAS[SIMUL,1] = MEANC(X);
SIMUL = SIMUL + 1;
ENDO;
/* AT THE END OF THE LOOP THE MATRIX MEDIAS HAS ALL THE NSIMUL MEANS STORED IN IT */
(b) The same computations without loops:
Program – ex-07.gss
TEMPO=HSEC;
T = 10;
MU = 10;
SIGMA = 5;
NSIMUL = 10000;
/* GENERATE ALL NSIMUL SAMPLES AT ONCE, ONE SAMPLE PER COLUMN */
Z = RNDN(T,NSIMUL);
X = MU + SIGMA*Z;
/* MEANC RETURNS THE COLUMN MEANS, I.E. THE NSIMUL SAMPLE MEANS */
MEDIAS = MEANC(X);
TEMPO=(HSEC-TEMPO)/100;
"TOTAL TIME=";;TEMPO;;"SECONDS";
Output:
POPULATION MEAN: MU = 10.000000
MINIMUM SAMPLE MEAN:
4.3308605
MAXIMUM SAMPLE MEAN:
15.875061
AVERAGE SAMPLE MEAN:
9.9888622
STANDARD DEVIATION OF THE SAMPLE MEANS:
1.5977483
THEORETICAL STANDARD DEVIATION OF THE SAMPLE MEANS = SIGMA/SQRT(T) =
1.5811388
TOTAL TIME= 3.3500000 SECONDS
(c) Study of the distribution of the t-statistic when observations follow a normal
distribution.
Program – ex-08.gss
/* SIMULATION OF THE T-STATISTIC FOR RANDOM SAMPLES WITH NORMALLY
DISTRIBUTED OBSERVATIONS */
T = 10;
MU = 10;
SIGMA = 5;
NSIMUL = 10000;
Z = RNDN(T,NSIMUL);
X = MU + SIGMA * Z;
MEDIAS = MEANC(X);
DP = STDC(X);
/* T-STATISTIC */
ESTT = (MEDIAS - MU) ./ (DP ./ SQRT(T) );
/* ONE WAY TO OBTAIN THE EMPIRICAL 2.5% AND 97.5% PERCENTILES: SORT AND INDEX */
ESTT = SORTC(ESTT,1);
"PERCENTILES 2.5% AND 97.5% OF THE SIMULATED VALUES";
ESTT[0.025*NSIMUL]; ESTT[0.975*NSIMUL];
Output:
PERCENTILES 2.5% AND 97.5% OF THE SIMULATED VALUES
-2.2881654
2.2547969
PERCENTILES 2.5% AND 97.5% OF THE STUDENT-T WITH T-1 D.F.
-2.2621572
2.2621572
(d) Distribution of the t-statistic when observations follow a Cauchy distribution.
Program – ex-09.gss
NEW;
/* SIMULATION OF THE T-STATISTIC WHEN OBSERVATIONS FOLLOW A CAUCHY DISTR. */
T = 10;
MU = 0;
SIGMA = 1;
NSIMUL = 5000;
/* SAME COMPUTATIONS AS IN EX-08, NOW WITH CAUCHY OBSERVATIONS
(RECONSTRUCTED ALONG THE LINES OF EX-05C AND EX-08) */
X = MU + SIGMA*( RNDN(T,NSIMUL) ./ RNDN(T,NSIMUL) );
MEDIAS = MEANC(X);
DP = STDC(X);
ESTT = (MEDIAS - MU) ./ (DP ./ SQRT(T) );
Output:
PERCENTILES 2.5% AND 97.5% OF THE SIMULATED VALUES
-1.8825505
1.8735276
PERCENTILES 2.5% AND 97.5% OF THE STUDENT-T WITH T-1 D.F.
-2.2621572
2.2621572
9. Procedures
The commands to create a new procedure are the following:
proc (nret) = name(arg1, ..., argN);
local v1, v2, ...;
(statements)
retp(expr1, ..., exprNret);
endp;
The only mandatory commands are proc and endp. If the command retp is not used,
the procedure returns nothing when called.
/* MAIN PROGRAM */
/* READ DATA */
LET Y[8,1] = 3.1 7.6 10.4 11.2 9.0 17.9 15.7 9.3;
LET X1[8,1] = 1.2 3.1 2.5 5.2 3.9 7.5 6.7 1.2;
T = ROWS(Y);
X = ONES(T,1)~X1;
/* CALL THE PROCEDURE AND PRINT THE RESULTS */
{ BHAT, V } = OLS(Y,X);
"BHAT ="; BHAT;
"V ="; V;
END;
/* PROCEDURE RETURNING THE OLS ESTIMATES AND THEIR COVARIANCE MATRIX.
(THE BODY BELOW IS A RECONSTRUCTION CONSISTENT WITH THE OUTPUT; THE NAME OLS IS ILLUSTRATIVE) */
PROC (2) = OLS(Y,X);
LOCAL B, EHAT, S2, V;
B = INV(X'X)*X'Y;
EHAT = Y - X*B;
S2 = (EHAT'EHAT)/(ROWS(Y)-COLS(X));
V = S2*INV(X'X);
RETP(B,V);
ENDP;
Output:
BHAT =
3.8391681
1.7088388
V =
2.8368781 -0.54767337
-0.54767337 0.13998041
10. Maximum likelihood estimation – MAXLIK
The MAXLIK library in GAUSS contains a set of procedures for maximum likelihood
estimation. More detailed information can be found in the corresponding manual.
PROC LOGL(PARAM, DADOS);
/* LOG-LIKELIHOOD FOR THE NORMAL LINEAR MODEL Y = BETA1 + BETA2*X1 + U, U ~ N(0,S^2).
(THE HEADER AND THE LIKELIHOOD LINE ARE A RECONSTRUCTION; THE NAME LOGL IS ILLUSTRATIVE) */
LOCAL Y, X1, BETA1, BETA2, S, F;
Y = DADOS[.,1];
X1 = DADOS[.,2];
BETA1 = PARAM[1,1];
BETA2 = PARAM[2,1];
S = PARAM[3,1];
/* COLUMN VECTOR WITH THE LOG-LIKELIHOOD OF EACH OBSERVATION */
F = LN( PDFN((Y - BETA1 - BETA2*X1)/S)/S );
RETP(F);
ENDP;
/* READ DATA */
LET Y[8,1] = 3.1 7.6 10.4 11.2 9.0 17.9 15.7 9.3;
LET X1[8,1] = 1.2 3.1 2.5 5.2 3.9 7.5 6.7 1.2;
/* ALL DATA TO BE USED ARE STORED IN ONE MATRIX */
DADOS = Y ~ X1;
/* ESTIMATION (A SKETCH OF THE MAXLIK CALL; STARTING VALUES TAKEN FROM ITERATION 1 OF THE OUTPUT) */
LIBRARY MAXLIK;
#INCLUDE MAXLIK.EXT;
MAXSET;
START = { 0, 0, 1 };
{ PARAMHAT, F, G, COV, RETCODE } = MAXLIK(DADOS,0,&LOGL,START);
Output:
================================================================================
iteration: 1
algorithm: BFGS step method: STEPBT
function: 65.62894 step length: 0.00000 backsteps: 0
--------------------------------------------------------------------------------
param. param. value relative grad.
B1 0.0000 0.1604
B2 0.0000 0.7566
S 1.0000 1.9568
================================================================================
iteration: 2
algorithm: BFGS step method: STEPBT
function: 5.63411 step length: 1.00000 backsteps: 0
--------------------------------------------------------------------------------
param. param. value relative grad.
B1 0.1604 0.1503
B2 0.7566 0.6840
S 2.9568 1.1115
================================================================================
(iterations 3 to 27 omitted)
================================================================================
iteration: 28
algorithm: BFGS step method: STEPBT
function: 2.13225 step length: 1.00000 backsteps: 0
--------------------------------------------------------------------------------
param. param. value relative grad.
B1 3.8404 0.0007
B2 1.7090 0.0014
S 2.0408 0.0000
===============================================================================
MAXLIK Version 4.0.22 9/21/99 12:03 pm
===============================================================================
return code = 0
normal convergence
Number of iterations 28
Minutes to convergence 0.01017