
APPLICATION OF STATISTICAL CONCEPTS IN THE DETERMINATION OF WEIGHT VARIATION IN SAMPLES

1. Discuss the significance of the standard deviation.
The standard deviation tells us how close the data points are to the mean value; thus, the smaller the standard deviation, the more precise the data and the fewer the random errors present [1]. The standard deviation is a helpful statistical parameter in determining the precision of a data set. It is given by the formula:

s = \sqrt{\frac{\sum_{i=1}^{N}(x_i - \bar{x})^2}{N - 1}}

As can be seen from the formula, the sample mean \bar{x} is subtracted from each individual data point x_i. This is done to measure the distance of each data point from the mean value.
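As a minimal illustration, the Python sketch below computes the sample standard deviation directly from the formula above; the coin weights are hypothetical values invented for the example, not results from the experiment.

```python
import math

def sample_std_dev(data):
    """Sample standard deviation: s = sqrt(sum((x - mean)^2) / (N - 1))."""
    n = len(data)
    x_bar = sum(data) / n
    squared_deviations = sum((x - x_bar) ** 2 for x in data)
    return math.sqrt(squared_deviations / (n - 1))

# Hypothetical weights (g) of six coins, for illustration only
weights = [3.6021, 3.6018, 3.6025, 3.6019, 3.6023, 3.6020]
x_bar = sum(weights) / len(weights)
print(f"mean = {x_bar:.4f} g, s = {sample_std_dev(weights):.4f} g")
```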
2. Discuss the significance of the confidence limits.
The confidence limits convey how close the true mean is to the measured mean. The concept of confidence limits is useful in predicting, with m% probability, that a certain data point falls within a given area under the Gaussian bell curve. To get the confidence limits of the mean, the following formula is used:

\mu = \bar{x} \pm \frac{t s}{\sqrt{N}}

The parameter t depends upon the number of degrees of freedom, which is N - 1, and the confidence level required. Tabulated values of t at different confidence levels and degrees of freedom can be found in various statistics literature [2].
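A minimal sketch of the calculation, assuming hypothetical replicate weights and a t value read from a standard t-table (2.571 for N - 1 = 5 degrees of freedom at the 95% confidence level):

```python
import math
from statistics import mean, stdev

# Hypothetical replicate weights (g); t is the tabulated value for
# 5 degrees of freedom at the 95% confidence level.
weights = [3.6021, 3.6018, 3.6025, 3.6019, 3.6023, 3.6020]
n = len(weights)
x_bar = mean(weights)
s = stdev(weights)   # sample standard deviation (N - 1 in the denominator)
t_95 = 2.571

half_width = t_95 * s / math.sqrt(n)
print(f"95% confidence limits: {x_bar:.4f} ± {half_width:.4f} g")
```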
3. Discuss the significance of a Q-test (or Grubbs test).
In measurement, obtaining a data point which is far from the rest of the data (called an outlier) is inevitable. The Q-test and the Grubbs test are statistical tests which help us determine whether a suspected outlier should be kept or rejected. These tests are very important since outliers can significantly affect the precision of the data. To apply the Grubbs test, a suspected outlier is picked from the data set and G_calc is computed using the formula:

G_{calc} = \frac{|x_{suspect} - \bar{x}|}{s}

After getting G_calc, it is compared to G_crit, which can be obtained from a table of G values; if G_calc exceeds G_crit, the suspect point is rejected. Like the t parameter, G_crit also depends on the degrees of freedom and the required confidence level.
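A sketch of how this comparison might be carried out, using invented data and a tabulated two-sided G_crit for seven points at the 95% confidence level; both the data and the critical value are assumptions for illustration, not values from the experiment.

```python
from statistics import mean, stdev

def grubbs_g(data, suspect):
    """G_calc = |suspect - mean| / s, computed over the full data set."""
    return abs(suspect - mean(data)) / stdev(data)

# Hypothetical data with a suspected high outlier; G_crit is a tabulated
# two-sided value for n = 7 at the 95% confidence level (assumed here).
data = [3.6021, 3.6018, 3.6025, 3.6019, 3.6023, 3.6020, 3.6090]
suspect = max(data)
g_calc = grubbs_g(data, suspect)
g_crit = 2.020

if g_calc > g_crit:
    print(f"G_calc = {g_calc:.3f} > G_crit = {g_crit:.3f}: reject the outlier")
else:
    print(f"G_calc = {g_calc:.3f} <= G_crit = {g_crit:.3f}: keep the point")
```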
4. Discuss how the statistical parameters calculated from Data Set 1 compare to those obtained from Data Set 2.
The sample mean for Data Set 2 is bigger than that of Data Set 1, but the range for both data sets is the same, since the highest and lowest values are obtained within the first six trials. However, Data Set 1 has a larger standard deviation and relative standard deviation compared to Data Set 2. This can be attributed to the fact that Data Set 1 has fewer data points; since the spread for both data sets is the same, Data Set 1 is less precise and is bound to have a larger standard deviation than Data Set 2. This is also why the relative range, the ratio of the range to the sample mean, is larger for Data Set 1 than for Data Set 2. Finally, at the 95% confidence level, Data Set 2 has less uncertainty than Data Set 1. The precision of both data sets affected the magnitude of their uncertainty.
5. Discuss the significance of the pooled standard deviation.
To calculate the pooled standard deviation, a number of data sets are collected or pooled. Pooling data sets improves the reliability of the standard deviation: the pooled standard deviation obtained is oftentimes more reliable than the standard deviations of the individual subsets of data. The formula for calculating the pooled standard deviation from three sets of data is:

s_{pooled} = \sqrt{\frac{\sum_{i=1}^{N_1}(x_i - \bar{x}_1)^2 + \sum_{j=1}^{N_2}(x_j - \bar{x}_2)^2 + \sum_{k=1}^{N_3}(x_k - \bar{x}_3)^2}{N_1 + N_2 + N_3 - N_t}}

N_1, N_2, and N_3 refer to the number of data points in each data set, and N_t refers to the total number of data sets pooled [3]. Notice that the formula for the pooled standard deviation is just a modified version of the formula for the sample standard deviation.
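The following sketch applies this formula to three small, hypothetical data sets (the values are invented for the example); the denominator is the total number of points minus the number of data sets pooled.

```python
import math
from statistics import mean

def pooled_std_dev(*data_sets):
    """s_pooled = sqrt(total sum of squared deviations / (total points - number of sets))."""
    sum_sq = 0.0
    total_points = 0
    for ds in data_sets:
        x_bar = mean(ds)
        sum_sq += sum((x - x_bar) ** 2 for x in ds)
        total_points += len(ds)
    return math.sqrt(sum_sq / (total_points - len(data_sets)))

# Hypothetical data sets, as if three analysts weighed the same coins
set1 = [3.6021, 3.6018, 3.6025, 3.6019, 3.6023, 3.6020]
set2 = [3.6022, 3.6017, 3.6024, 3.6021]
set3 = [3.6019, 3.6026, 3.6020, 3.6022, 3.6018]
print(f"s_pooled = {pooled_std_dev(set1, set2, set3):.5f} g")
```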
For instance, a hypothetical global network of analytical chemists from over 100 countries can all conduct an identical experiment using the same methods, instruments, and techniques. Assuming that the errors made by the analytical chemists are purely random, and that they obtain broadly similar results, each of them can compute the standard deviation of their own data. To improve the reliability of their results, they can consolidate their data to get the pooled standard deviation, which is more reliable than each of their individually calculated standard deviations.
6. Discuss the 3 types of experimental error. Give an example of each type.
The three common types of experimental error encountered in the analytical laboratory are systematic or determinate error, random or indeterminate error, and gross error [3].

Systematic or determinate errors affect the accuracy of the results by causing the mean of the data set to deviate from the correct value. Systematic errors are reproducible, but they can be detected and corrected. There are three sources of systematic errors. The first is instrumental error: if the instruments used in carrying out the experiment are uncalibrated or defective, the accuracy of the data gathered will be affected. For example, in an elemental analysis, using a faulty ICP-AES instrument will shift the data away from the correct values. Method errors are another source of systematic error. Method errors are due to the nonideality of the analytical system being studied. For instance, when working with a system of electrolytes, it is important to account for other competing equilibria in the system; failure to do so may give rise to interferences and faulty results. The third source of determinate error is personal error. Personal errors are due to poor judgment and personal bias. For instance, the color at the endpoint of a titration can be judged differently by the person performing the titration.

Random or indeterminate errors are caused by uncontrolled variables. Random errors affect the precision of the data set. This type of error is difficult to trace and is uncorrectable most of the time; statistics, however, can be used to handle these errors. An example of random error can be seen in weighing the same sample repeatedly: there will be times when the weight reading of the sample is not constant.

Gross errors, often caused by human error, lead to outliers. This type of error is seldom encountered since it is the easiest to avoid. Examples of gross error are overtitrating the analyte due to carelessness and loss of sample due to improper handling.

7. Discuss the Gaussian/normal distribution and the requirements for a data set to have a normal distribution.
A Gaussian or normal distribution is the smooth curve obtained when the results of an experiment or a measurement repeated a very large number of times are plotted [1]. Ideally, as the number of trials approaches infinity, the graph of the data tends toward the curve given by:

y = \frac{e^{-(x - \mu)^2 / 2\sigma^2}}{\sigma\sqrt{2\pi}}

where \mu is the population mean and \sigma is the population standard deviation. For a data set to have a normal distribution, it should contain a very large number of data points and the errors committed should be purely random.
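As an illustration of these two requirements, the sketch below simulates a very large number of weighings of a coin whose only error is random (assumed normally distributed with an invented true mass and spread), then evaluates the Gaussian curve defined above at the recovered mean.

```python
import math
import random
from statistics import mean, stdev

def gaussian(x, mu, sigma):
    """y = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))"""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Assumed true mass 3.6000 g and purely random error with sigma = 0.0020 g
random.seed(0)
readings = [random.gauss(3.6000, 0.0020) for _ in range(100_000)]

mu_hat, sigma_hat = mean(readings), stdev(readings)
print(f"estimated mu = {mu_hat:.4f} g, estimated sigma = {sigma_hat:.4f} g")
print(f"height of the Gaussian curve at the mean: {gaussian(mu_hat, mu_hat, sigma_hat):.1f} per gram")
```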

8. Discuss the rationale behind the use of forceps/crucible tongs in handling the coins.
In weighing the identical set of coins, we want them to be as dry as possible, since moisture on the coins may significantly affect the readings on the analytical balance. Our hands carry some amount of moisture, and handling the coins used in the experiment with bare hands may transfer some of this unwanted moisture to them. That is why it is important to use forceps or crucible tongs in handling the coins.
REFERENCES
[1] Harris, D. Quantitative Chemical Analysis, 7th ed.; W. H. Freeman: New York, 2007.
[2] Jeffery, G. H., et al. Vogel's Textbook of Quantitative Chemical Analysis, 5th ed.; Longman Scientific and Technical: UK, 1989.
[3] Skoog, D., et al. Fundamentals of Analytical Chemistry, 9th ed.; Brooks/Cole, Cengage Learning: USA, 2014.
