NERS 315
Lab 3 Counting Statistics
David Sirajuddin
Partners: Nick Krupansky, Yong Ping Qiu
Introduction
This lab introduced various methods of statistical analysis to be used in interpreting
data obtained in lab. It is our aim to measure radiation that results from nuclear decay of a
radionuclide source. Since radioactive decay is a random process, statistical fluctuations are
expected, and statistical analysis helps interpret the data in a more lucid and systematic way.
The instruments used to make measurements are also not ideal.
Measuring instruments such as a detector are prone to errors such as dead time, pileup, and
in some sense, competing forms of decay. Dead time refers to the minimum separation in time
required between two events for a detector to record them as two separate pulses (i.e., processing time).
This technical limitation therefore inhibits knowledge of the true rate of radiation. In
paralyzable and non-paralyzable models, this dead time results in either of two possibilities: a)
the exclusion of detection of multiple pulses received by the detector during the dead time,
or b) an addition of pulse heights, resulting in a larger than expected pulse amplitude. In
fact, any detector that does not instantaneously detect is prone to these errors.
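The two dead-time responses described above can be sketched numerically. The sketch below is illustrative only: the true rates and the dead time τ are hypothetical values, not measurements from this lab. In the non-paralyzable model, a true event rate n with dead time τ yields a recorded rate m = n/(1 + nτ); in the paralyzable model, every true event restarts the dead time, giving m = n·exp(−nτ).

```python
import math

def nonparalyzable(n, tau):
    """Recorded rate when each recorded event blocks the detector for a time tau."""
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    """Recorded rate when every true event (recorded or not) restarts the dead time."""
    return n * math.exp(-n * tau)

tau = 1e-4  # hypothetical dead time, seconds
for n in (100.0, 1000.0, 10000.0):  # hypothetical true rates, counts/s
    print(n, nonparalyzable(n, tau), paralyzable(n, tau))
```

Both models agree at low rates and diverge as nτ approaches 1, which is why dead-time corrections matter most for strong sources.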
Often it is of interest to measure a specific type of decay (e.g. β+ decay); however,
unless the source undergoes the decay type of interest 100 percent of the time, competing
decay reactions will give a false impression of the measured quantities of the decay of
interest. As a general example, x-ray photons emitted during an electronic transition can
avoid direct detection by being absorbed by another atomic electron, causing that electron
to be ejected (i.e., Auger electrons). These intrinsic fluctuations are built into the framework of nuclear decay
and the instruments used to detect it. Some sources of error in measurement are bound to
be derivatives of these inherent problems. Despite an unavoidable uncertainty, these
fluctuations are useful in that they are characterizable. Statistical models can describe
the internal variation of data and be used for two important purposes: 1) to determine
whether nuclear counting equipment is operating normally, and 2) to provide insight into the
uncertainty of a data set given only a single measurement. It is of prime interest to acclimate
oneself with this statistical analysis first hand, as it is integral to measuring radiation.
The objectives of this lab are to practice diagnosing which statistical model describes a
data set, and to use that model to determine quantities that reflect the expected fluctuation,
arithmetic mean, etc. Beginning with a hypothetical datum, a statistical model was
fitted and quantities of interest computed. Then, to verify the validity of this choice of
model, more data were given, and quantities such as those relating to the internal variation
were measured and compared with those found for the single measurement. The result was
a sound agreement between the two. This lab was a proof of counting statistics through
example, rather than theory. Later in the lab, data were taken in experiment and analyzed in a
similar fashion using both the Poisson and Gaussian models. First, a Cs-137 source was used
in tandem with a Geiger-Mueller tube (GM tube) and counter with the aim of consistently
measuring 30 counts per interval. Twenty-five measurements were taken, and from these data
the experimental mean and sample variance were computed. A Poisson model was then fitted
to the data, and by comparing its expected standard deviation with that of the experiment,
the model was shown to be a good fit; a chi-square test and an error-propagation estimate of
the uncertainty were also compared with the expected values. Finally, it was aimed to
measure 5 counts on the counter: one thousand trials were conducted, and to show explicitly
which model (Gaussian or Poisson) fits better, both distributions were plotted on a graph
together with the collected data. The theory was shown to match the data on all accounts.
Procedure
The lab procedure involved two main experimental parts and one hypothetical component
with given data. These parts are labeled by number to be consistent with the lab handout
numbering scheme (attached in the appendix). The hypothetical data listed under part 2 of
the handout are described, despite not being collected in lab, because they are discussed in
the following section, Results and Analysis. The equipment used for all experiments is listed
below:
Equipment/Materials
Tennelec TC 952 High Voltage Supply (set to 801V)
Ortec 572 Amplifier
Hewlett Packard 54610B Oscilloscope, 500 MHz
Ortec 994 Dual Counter/Timer
G-M Tube Lead Shield, Model No. AL144, Serial No. 453
Ortec Timing SCA SS1
Preamp (HV SIG)
Cs-137 source, 5.5 μCi, half-life ≈ 30 years, May 1989, Nucleus Inc.
2) A source was placed in the G-M tube with a two-minute timing interval. One trial was
conducted, and a single count value was read from the counter.
2e) The experiment in part 2 was performed nine more times, yielding nine more data
points.
4) The following setup was used:
Figure 3.1
A Cs-137 source was placed inside the GM tube on the second shelf from the top, and the
timing interval on the timer was set to 0.1 s so as to record approximately 30 counts per
interval. A set of twenty-five of these counts was taken.
8) The same setup as in figure 3.1 was used with the aim of recording 1000 trials of approximately 5
counts per timing interval. The timing interval used was 0.02 s.
Results and Analysis

Statistical Quantities Associated with xi = 10,982

xi ± σxi:       10982 ± 104.79 counts per timing interval
xi ± σxi/xi:    10982 ± 0.9542 %
R ± σR:         5491 ± 52.39 min⁻¹
R ± σR/R:       5491 ± 0.954 %

Table 3.1
The ten measurements were then combined, and the same quantities were tabulated, where t
in the rate equation, R, is now taken to be t = 20 minutes:
Statistical Quantities Associated with Σxi = 108,714

Σxi ± σ:        108714 ± 329.718 counts per timing interval
Σxi ± σ/Σxi:    108714 ± 0.30329 %
R ± σR:         5435.7 ± 16.486 min⁻¹
R ± σR/R:       5435.7 ± 0.30329 %

Table 3.2
By measuring counts for ten times the two-minute interval used for the initial calculations,
precision was increased: the relative uncertainty fell from 0.954% to 0.303%, an absolute
improvement of about 0.65 percentage points, corresponding to a percent difference of roughly 214%.
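This improvement follows directly from Poisson counting statistics: for a total of N recorded counts, σ = √N, so the relative uncertainty σ/N = 1/√N falls as the counting time (and hence N) grows. A quick check against the two totals above:

```python
import math

def rel_uncertainty(counts):
    """Relative (fractional) uncertainty of a Poisson count total: sqrt(N) / N."""
    return math.sqrt(counts) / counts

short_run = 10982    # counts recorded in 2 minutes
long_run = 108714    # counts recorded in 20 minutes
print(100 * rel_uncertainty(short_run))  # ~0.954 %
print(100 * rel_uncertainty(long_run))   # ~0.303 %
```

Ten times the counting time reduces the relative uncertainty by a factor of √10, consistent with the tabulated values.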
To verify whether ~68% of the data collected lay within one standard deviation for the two-minute
counting exercise, a lower bound of [10982 − 104.79 = 10877] counts per timing interval was
found, along with an upper bound of [10982 + 104.79 = 11086] counts per timing interval.
Inspecting the data, six of the ten measurements lay within one standard deviation, implying
that 6 / 10 = 60% fell within one standard deviation, which is approximately the expected
68%. This value differs by 8 percentage points, a statistical discrepancy that may or may not
be rectified by taking more measurements. The problem seems to stem from the initial
measurement being an atypical value relative to the further data collected in part (2e).
Computing the mean of the total set of ten measurements yields Σxi / 10 = 10,871 counts per
timing interval. This value differs from the initial single count of xi = 10,982 counts per
timing interval. Using <x> as the mean, and again assuming a Gaussian distribution to avoid
excess calculation, a standard deviation of 104.264 counts per timing interval was found.
Factoring in this uncertainty, 7 / 10 = 70% of the measurements lay within one standard
deviation of the mean. It can then be concluded that using a single measurement as the mean
of a fitted distribution can be done with reasonable accuracy, but it may incur error
depending on how typical that measurement is in an experiment.
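A check of this kind is easy to script. The sketch below takes either a single measurement or the set's mean as the center of the model, with σ = √center as the Poisson-motivated deviation; the ten count values listed are hypothetical placeholders for illustration, not the values from the lab data sheet.

```python
import math

def fraction_within_one_sigma(data, center):
    """Fraction of data within one standard deviation of center,
    where sigma = sqrt(center) per Poisson statistics."""
    sigma = math.sqrt(center)
    inside = [x for x in data if abs(x - center) <= sigma]
    return len(inside) / len(data)

# Hypothetical counts per two-minute interval (illustrative only)
counts = [10982, 10915, 10871, 10790, 11002, 10830, 10940, 11110, 10755, 10920]

mean = sum(counts) / len(counts)
print(fraction_within_one_sigma(counts, counts[0]))  # centered on a single measurement
print(fraction_within_one_sigma(counts, mean))       # centered on the set's mean
```

Comparing the two printed fractions shows how an atypical single measurement shifts the apparent coverage, as observed in parts (2) and (2e).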
3) The single measurement value xi2 = 10,915 counts per timing interval is now taken to be
the only element of a data set. Assuming a Gaussian distribution, further analysis yields how
many elements of the data set lie within various standard deviations found from this single
datum. Identically to the procedure described above, the mean is taken to be the single
measurement xi2, implying a standard deviation σxi2 of approximately 104.47 counts per
timing interval. The number of measurements, out of a total of ten, lying outside various
multiples of this standard deviation according to this model is tabulated below.
Standard Deviation    Measurements Outside (of 10)
0.67 σxi2
1.0 σxi2
1.6 σxi2
2.0 σxi2              0

Table 3.3
The Gaussian model would predict that 5.028 (about 5) of the measurements would lie within 0.67
standard deviations, 3.174 (about 3) would lie outside one deviation, 1.096 (about 1) would lie
outside 1.6 deviations, and 0.456 (nearly none) would lie outside two standard deviations. The
expected values for the Gaussian distribution were found via a normal curve table
( http://people.hofstra.edu/faculty/Stefan_Waner/RealWorld/normaltable.html ); however, the
underlying technique for finding these values is to integrate the Gaussian distribution function
beyond any chosen multiple Z of the standard deviation, where the Gaussian distribution is defined as:
G(x) = 1/(σ√(2π)) · exp( −(x − μ)² / (2σ²) )

where μ denotes the mean, σ² is the variance, and σ is the standard deviation. It is evident that the
data differ slightly from the predicted Gaussian numbers of deviations, but these differences are
small and show that the Gaussian is a good fit.
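Rather than reading a normal-curve table, these predicted fractions can be computed directly, since the cumulative Gaussian is expressible through the error function: the fraction of a population lying within z standard deviations of the mean is erf(z/√2). For ten measurements:

```python
import math

def fraction_within(z):
    """Fraction of a Gaussian population within z standard deviations of the mean."""
    return math.erf(z / math.sqrt(2))

n = 10  # number of measurements
print(n * fraction_within(0.67))        # expected within 0.67 sigma, ~5.0
print(n * (1 - fraction_within(1.0)))   # expected outside 1.0 sigma, ~3.17
print(n * (1 - fraction_within(1.6)))   # expected outside 1.6 sigma, ~1.10
print(n * (1 - fraction_within(2.0)))   # expected outside 2.0 sigma, ~0.46
```

These reproduce the table-derived predictions quoted above to within rounding.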
5) From the experimental data consisting of 25 trials aiming to record 30 counts, the
experimental mean <x> and sample variance s² were computed to be:

Experimental Mean <x>    31.84
Sample Variance s²       27.47

Table 3.4
All numbers are quoted in counts per timing interval. Asserting a Poisson model for the
data, the expected standard deviation σi obeys σi = √<x> ≈ √31.84 ≈ 5.64 counts per timing
interval. Taking the square root of the sample variance gives the experimental standard
deviation s ≈ 5.24 counts per timing interval, implying that s ≈ σi. The significance of σi in
this application is that it reflects the prediction of a Poisson distribution given only one
value: the mean. The experimental standard deviation s is approximately equal to σi, showing
the utility and accuracy of statistical models. If s is taken to be the true standard deviation,
then the predicted deviation of the Poisson distribution differs by only 7.63%. This small
experiment shows the validity of using
Poisson statistics when only one or a few measurements are known.
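This comparison can be reproduced from the two numbers in Table 3.4, since a Poisson distribution has variance equal to its mean (σ = √μ); the slight difference from the 7.63% quoted above comes from using the rounded table values here.

```python
import math

mean = 31.84        # experimental mean, counts per 0.1 s interval
sample_var = 27.47  # sample variance from the 25 trials

s = math.sqrt(sample_var)      # experimental standard deviation
sigma = math.sqrt(mean)        # Poisson prediction: sigma = sqrt(mean)
pct_diff = 100 * (sigma - s) / s

print(s)         # ~5.24
print(sigma)     # ~5.64
print(pct_diff)  # ~7.7 %
```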
6) The chi-square test was performed using the chi-square formula (postulated by Karl
Pearson in 1900), and the statistic was computed to be 20.708. This test quantitatively
discerns the degree of fluctuation in the data; in particular, it measures how much the ratio
of the sample variance to the modeled variance differs from unity. Using interpolation along
with a chi-square table ( http://www.statsoft.com/textbook/sttable.html#chi ), the probability
that a random sample from a Poisson distribution with the same mean would show a larger
fluctuation was found to be 0.633. Since this probability is reasonably close to 0.5, the data
set shows neither abnormally large nor abnormally small fluctuation, and is consistent with a
Poisson model.
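For data fitted by a Poisson model, Pearson's statistic reduces to χ² = Σ(xᵢ − ⟨x⟩)²/⟨x⟩ = (N − 1)s²/⟨x⟩, since the modeled variance equals the mean. The quoted value follows from the N = 25 trials (the p-value was then read from a χ² table with 24 degrees of freedom):

```python
n_trials = 25
mean = 31.84
sample_var = 27.47

# Chi-squared for a Poisson fit: sum((x_i - mean)^2) / mean = (N - 1) * s^2 / mean
chi_sq = (n_trials - 1) * sample_var / mean
print(chi_sq)  # ~20.7
```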
7) Using elementary error propagation techniques, the uncertainty was found to be
5.89. Comparing this to the standard deviation σi, the two are nearly the same; this
agreement further indicates that the Poisson distribution fits the data.
8) Using a 0.02-second timing interval, 1000 trials were taken with the aim of recording 5
counts on the counter. A frequency plot of Pi, the probability of occurrence of exactly i
counts, versus i is displayed below.
Figure 3.1
Predictably, the counts of highest frequency reside near the center of the data, around
the mean of 4 counts, and the data resemble a bell-shaped curve. Which model fits this
data best is discussed in the following part.
9) After finding the mean, the Poisson and Gaussian distribution functions were graphed on
the same axes as the data distribution to see visually which one fits best.
Figure 3.2
From figure 3.2, it is evident that the Poisson distribution fits the data best. This is
expected, as one of the stipulations for approximating counting data with a Gaussian is that
the mean number of counts be large; here the mean is only ~5, making the Poisson a better
choice than the Gaussian distribution. The theory matches the data with accuracy.
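The comparison in figure 3.2 can be sketched from the distribution functions themselves: at a small mean the Poisson distribution is noticeably asymmetric, while the Gaussian with σ = √μ is symmetric, so the two differ most at low counts. The mean of 5 below follows the lab's target value and is illustrative:

```python
import math

def poisson_pmf(k, mu):
    """Probability of exactly k counts for a Poisson distribution with mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def gaussian_pdf(k, mu):
    """Gaussian approximation with variance equal to the mean (sigma = sqrt(mu))."""
    sigma = math.sqrt(mu)
    return math.exp(-(k - mu)**2 / (2 * mu)) / (sigma * math.sqrt(2 * math.pi))

mu = 5.0  # target counts per 0.02 s timing interval
for k in range(11):
    print(k, round(poisson_pmf(k, mu), 4), round(gaussian_pdf(k, mu), 4))
```

Tabulating the two side by side shows the Gaussian underpredicting the low-count tail and overpredicting symmetry, consistent with the plotted comparison.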
Conclusions
It was shown through both hypothetical and experimental data that fitting statistical
models to data, or even to a single datum, is valid. In all cases, no matter how many
measurements were known, it proved valid to use either a Gaussian distribution when the
mean number of counts is large, or a Poisson model in the opposite case. The approach was
to measure nuclear decay through direct measurement and then to interpret the results
correctly using counting statistics.
In part two, it was found that increasing the measuring time decreased the
error incurred. By having ten times the measuring time, the uncertainty decreased from
0.954% to 0.303%. Experimental quantities were computed and compared to those of
the statistical predictions according to a model, as in the Gaussian in part 3. The number
of trials lying outside the various multiples of the standard deviation agreed more or
less with theory.
In part 4, actual measurements were taken, and the standard deviation found
from the sample variance of the experiment differed from the predicted standard
deviation of the Poisson model by only 0.4. The chi-square test helped to
quantitatively assess the agreement between the sample variance and the predicted
variance, and the propagated uncertainty was shown to be low and, more importantly, in
close agreement with theory (5.64 for the predicted standard deviation versus 5.89 from
error propagation).
Finally, 1000 trials were taken, and it was shown visually that the Poisson distribution
mapped almost identically onto the points of the data's frequency function.
Appendix
Pages are attached.