INSTRUMENTAL METHODS OF ANALYSIS
BACHELOR OF TECHNOLOGY
IV SEMESTER
PREPARED BY
Mr. K. Selvaraj B. Pharm, M. Tech
DEPARTMENT OF BIOTECHNOLOGY
RAJALAKSHMI ENGINEERING COLLEGE
THANDALAM- 602 105
SYLLABUS
INTRODUCTION
Analytical Chemistry deals with methods for determining the chemical composition
of samples of matter. A qualitative method yields information about the identity of atomic or
molecular species or the functional groups in the sample; a quantitative method, in contrast,
provides numerical information as to the relative amount of one or more of these
components.
Analytical methods are often classified as being either classical or instrumental. This
classification is largely historical with classical methods, sometimes called wet-chemical
methods, preceding instrumental methods by a century or more.
Classical Methods
Separation of analytes by precipitation, extraction, or distillation.
Qualitative analysis by reaction of analytes with reagents that yielded products that
could be recognized by their colors, boiling or melting points, solubilities, optical
activities, or refractive indexes.
Quantitative analysis by gravimetric or by titrimetric techniques.
1. Gravimetric Methods – the mass of the analyte or some compound produced from the
analyte was determined.
2. Titrimetric Methods – the volume or mass of a standard reagent required to react
completely with the analyte was measured.
Instrumental Methods
Measurements of physical properties of analytes, such as conductivity, electrode
potential, light absorption, or emission, mass to charge ratio, and fluorescence, began to be
used for the quantitative analysis of a variety of inorganic, organic, and biochemical analytes.
Highly efficient chromatographic and electrophoretic techniques began to replace distillation,
extraction, and precipitation for the separation of components of complex mixtures prior to
their qualitative or quantitative determination. These newer methods for separating and
determining chemical species are known collectively as instrumental methods of analysis.
Instrumentation can be divided into two categories: detection and quantitation.
1. Quantitation
Measurement of physical properties of analytes - such as conductivity, electrode
potential, light absorption or emission, mass-to-charge ratio, and fluorescence-began to be
employed for quantitative analysis of inorganic, organic, and biochemical analytes.
2. Detection
Efficient chromatographic separation techniques are used for the separation of
components of complex mixtures.
Table 1. Classification of instrumental methods based on different analytical signals
Figure 1.Limit of detection (LOD), limit of quantification (LOQ), dynamic range, and limit of linearity (LOL).
Applications
Standard addition is frequently used in atomic absorption spectroscopy and gas
chromatography.
LAWS OF ELECTRICITY
Ohm's Law
For many conductors of electricity, the electric current which will flow through them
is directly proportional to the voltage applied to them. When a microscopic view of Ohm's
law is taken, it is found to depend upon the fact that the drift velocity of charges through the
material is proportional to the electric field in the conductor. The ratio of voltage to current is
called the resistance, and if the ratio is constant over a wide range of voltages, the material is
said to be an "ohmic" material. If the material can be characterized by such a resistance, then
the current can be predicted from the relationship
I = V/R
where V is the applied voltage and R is the resistance.
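As a quick numeric sketch of Ohm's law (the voltage and resistance values are illustrative, not from the text):

```python
# Ohm's law: I = V / R for an ohmic conductor. Values are illustrative.
def current_amps(voltage_v, resistance_ohm):
    """Current in amperes through an ohmic resistor."""
    return voltage_v / resistance_ohm

print(current_amps(12.0, 4.0))   # 3.0 A: 12 V across a 4 ohm resistor
```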
Kirchhoff's Current Law (KCL)
The current entering any junction is equal to the current leaving that junction. i1 + i4 =
i2 + i3.This law is also called Kirchhoff's first law, Kirchhoff's point rule, Kirchhoff's junction
rule (or nodal rule), and Kirchhoff's first rule.
Power law
The electrical power P dissipated in a circuit element is
P = VI
where V is the voltage across the element and I the current through it. Combining this
with Ohm's law gives the equivalent forms P = I²R = V²/R.
Series Circuit
A Series circuit is one in which all components are connected in tandem. The current
at every point of a series circuit stays the same. In series circuits the current remains the same
but the voltage drops may vary.
Parallel Circuit
Parallel circuits are those in which the components are so arranged that the current
divides between them. In parallel circuits the voltage remains the same but the current may
vary. The circuits in your home are wired in parallel.
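The series and parallel rules above can be checked numerically; a minimal sketch with illustrative resistor values:

```python
# Series: resistances add; parallel: reciprocals add. The current is common
# through a series string, while the voltage is common across parallel
# branches. Resistor values are illustrative.
def r_series(resistances):
    return sum(resistances)

def r_parallel(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

print(r_series([100.0, 220.0]))    # 320.0 ohm
print(r_parallel([100.0, 100.0]))  # 50.0 ohm: always below the smallest branch
```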
MEASUREMENT OF DC
Digital voltmeters (DVM)
The phasor diagram below shows us a simple way to calculate the series voltage. The
components are in series, so the current is the same in both. The voltage phasors (brown for
resistor, blue for capacitor in the convention we've been using) add according to vector or
phasor addition, to give the series voltage (the red arrow).
Note the frequency dependence of the series impedance ZRC: at low frequencies, the
impedance is very large, because the capacitive reactance 1/ωC is large (the capacitor is an
open circuit for DC). At high frequencies, the capacitive reactance goes to zero (the capacitor
doesn't have time to charge up) so the series impedance goes to R. At the angular frequency ω
= ωo = 1/RC, the capacitive reactance 1/ωC equals the resistance R. We shall show this
characteristic frequency on all graphs on this page.
Remember how, for two resistors in series, you could just add the resistances: Rseries = R1 + R2
to get the resistance of the series combination. That simple result comes about because the
two voltages are both in phase with the current, so their phasors are parallel. Because the
phasors for reactances are 90° out of phase with the current, the series impedance of a resistor
R and a reactance X are given by Pythagoras' law:
Zseries² = R² + X²
Ohm's law in AC
We can rearrange the equations above to obtain the current flowing in this circuit.
Alternatively we can simply use the Ohm's law analogy and say that I = Vsource/ZRC. Either
way we get
I = V/√(R² + (1/ωC)²)
where the current goes to zero at DC (the capacitor is an open circuit) and to V/R at high
frequencies (no time to charge the capacitor).
From simple trigonometry, the angle by which the current leads the voltage is
tan⁻¹(VC/VR) = tan⁻¹(IXC/IR) = tan⁻¹(1/ωRC) = tan⁻¹(1/2πfRC).
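The RC impedance and phase relations above can be evaluated numerically; a short sketch with illustrative component values:

```python
import math

# Series RC impedance and phase at a given frequency: Z = sqrt(R^2 + Xc^2)
# with Xc = 1/(2*pi*f*C); the current leads the voltage by atan(Xc/R),
# matching tan^-1(1/(2*pi*f*R*C)) above. Component values are illustrative.
def rc_impedance(R, C, f):
    Xc = 1.0 / (2 * math.pi * f * C)
    Z = math.hypot(R, Xc)              # sqrt(R^2 + Xc^2)
    lead = math.atan(Xc / R)           # radians, current leads voltage
    return Z, lead

R, C = 1000.0, 1e-6                    # 1 kOhm, 1 uF
f0 = 1.0 / (2 * math.pi * R * C)       # frequency where Xc = R (~159 Hz)
Z, lead = rc_impedance(R, C, f0)
print(round(Z, 1))                     # 1414.2 ohm = R*sqrt(2)
print(round(math.degrees(lead), 1))    # 45.0 degrees
```

At the characteristic frequency the resistive and capacitive contributions are equal, so the phase angle is 45° and the impedance is R√2, as the text describes.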
However, we shall refer to the angle φ by which the voltage leads the current. The voltage is
behind the current because the capacitor takes time to charge up, so φ is negative, i.e.
φ = -tan⁻¹(1/ωRC).
RL Series combinations
In an RL series circuit, the voltage across the inductor is ahead of the current by 90°,
and the inductive reactance, as we saw before, is XL = ωL. The resulting v(t) plots and phasor
diagram look like this.
where Zseries is the series impedance: the ratio of the voltage to current in an RLC series circuit.
Note that, once again, reactances and resistances add according to Pythagoras' law:
Zseries² = R² + Xtotal²
= R² + (XL - XC)².
Remember that the inductive and capacitive phasors are 180° out of phase, so their reactances
tend to cancel.
Now let's look at the relative phase. The angle by which the voltage leads the current is
φ = tan-1 ((VL - VC)/VR).
Substituting VR = IR, VL = IXL = IωL, and VC = IXC = I/ωC gives
φ = tan⁻¹((ωL - 1/ωC)/R).
The dependence of Zseries and φ on the angular frequency ω is shown in the next figure. The
angular frequency ω is given in terms of a particular value ωo, the resonant frequency
(ωo2 = 1/LC), which we meet below.
The next graph shows us the special case where the frequency is such that VL = VC.
Because vL(t) and vC(t) are 180° out of phase, this means that vL(t) = -vC(t), so the two reactive
voltages cancel out, and the series voltage is just equal to that across the resistor. This case is
called series resonance.
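The series-resonance behavior can be verified numerically; a sketch with illustrative component values:

```python
import math

# Series RLC impedance and phase: Z = sqrt(R^2 + (XL - Xc)^2) with
# XL = w*L and Xc = 1/(w*C). At the resonant frequency w0 = 1/sqrt(L*C)
# the reactances cancel and Z = R. Component values are illustrative.
def rlc_impedance(R, L, C, w):
    XL, Xc = w * L, 1.0 / (w * C)
    Z = math.hypot(R, XL - Xc)
    phi = math.atan2(XL - Xc, R)   # angle by which voltage leads current
    return Z, phi

R, L, C = 50.0, 10e-3, 1e-6        # 50 ohm, 10 mH, 1 uF
w0 = 1.0 / math.sqrt(L * C)        # resonant angular frequency, 1e4 rad/s
Z, phi = rlc_impedance(R, L, C, w0)
print(round(Z, 3))                 # 50.0: purely resistive at resonance
# phi is essentially zero at resonance: the reactive voltages cancel
```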
SIGNAL-TO-NOISE RATIO
The signal is what you are measuring that results from the presence of your analyte.
Noise is extraneous information that can interfere with or alter the signal. It cannot be
completely eliminated, but it can be reduced. True noise is considered random.
Signal-to-noise ratio (often abbreviated SNR or S/N) is an electrical engineering concept,
also used in other fields (such as scientific measurements, biological cell signaling), defined
as the ratio of a signal power to the noise power corrupting the signal.
In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as
music) to the level of background noise. The higher the ratio, the less obtrusive the
background noise is.
In analog and digital communications, signal-to-noise ratio, often written S/N or SNR, is a
measure of signal strength relative to background noise. The ratio is usually measured in
decibels (dB). If the incoming signal strength in microvolts is Vs, and the noise level, also in
microvolts, is Vn, then the signal-to-noise ratio, S/N, in decibels is given by the formula
S/N = 20 log10(Vs/Vn)
If Vs = Vn, then S/N = 0. In this situation, the signal borders on unreadable, because the noise
level severely competes with it. In digital communications, this will probably cause a
reduction in data speed because of frequent errors that require the source (transmitting)
computer or terminal to resend some packets of data.
Ideally, Vs is greater than Vn, so S/N is positive. As an example, suppose that Vs = 10.0
microvolts and Vn = 1.00 microvolt. Then
S/N = 20 log10(10.0) = 20.0 dB
which results in the signal being clearly readable. If the signal is much weaker but still above
the noise -- say 1.30 microvolts -- then
S/N = 20 log10(1.30) = 2.28 dB
which is a marginal situation. There might be some reduction in data speed under these
conditions.
If Vs is less than Vn, then S/N is negative. In this type of situation, reliable communication is
generally not possible unless steps are taken to increase the signal level and/or decrease the
noise level at the destination (receiving) computer or terminal.
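The decibel calculations above can be reproduced directly; a minimal sketch using the same voltage values as the text:

```python
import math

# S/N in decibels from signal and noise voltages, as defined above:
# S/N = 20*log10(Vs/Vn). The example values repeat those in the text.
def snr_db(vs_uv, vn_uv):
    return 20 * math.log10(vs_uv / vn_uv)

print(round(snr_db(10.0, 1.0), 1))   # 20.0 dB: clearly readable
print(round(snr_db(1.30, 1.0), 2))   # 2.28 dB: marginal
print(round(snr_db(1.0, 1.0), 1))    # 0.0 dB: borders on unreadable
```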
Communications engineers always strive to maximize the S/N ratio. Traditionally, this has
been done by using the narrowest possible receiving-system bandwidth consistent with the
data speed desired. However, there are other methods. In some cases, spread spectrum
techniques can improve system performance. The S/N ratio can be increased by providing the
source with a higher level of signal output power if necessary. In some high-level systems
such as radio telescopes, internal noise is minimized by lowering the temperature of the
receiving circuitry to near absolute zero (-273 degrees Celsius or -459 degrees Fahrenheit). In
wireless systems, it is always important to optimize the performance of the transmitting and
receiving antennas.
Types of Noise
Chemical Noise
Chemical reactions
Reaction/technique/instrument specific
Instrumental Noise
Germane to all types of instruments
Can often be controlled physically (e.g. temp) or electronically (software
averaging)
Instrumental Noise
Thermal (Johnson) Noise:
Thermal agitation of electrons affects their “smooth” flow.
Due to different velocities and movement of electrons in electrical
components.
Depends upon both temperature and the range of frequencies (frequency bandwidth)
being utilized.
Can be reduced by reducing temperature of electrical components.
Eliminated at “absolute” zero.
Considered “white noise” because it is independent of frequency (but
dependent on frequency bandwidth or the range of frequencies being measured).
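The dependence of thermal noise on temperature and bandwidth can be made quantitative with the standard Nyquist expression v_rms = √(4kTRΔf); the formula is standard, but the resistor and bandwidth values below are illustrative:

```python
import math

# Johnson (thermal) noise rms voltage over a bandwidth df:
# v_rms = sqrt(4*k*T*R*df). Component values are illustrative.
k = 1.380649e-23   # Boltzmann constant, J/K

def johnson_vrms(T_kelvin, R_ohm, df_hz):
    return math.sqrt(4 * k * T_kelvin * R_ohm * df_hz)

# 1 MOhm resistor, 10 kHz bandwidth
print(johnson_vrms(300.0, 1e6, 1e4))   # ~13 uV at room temperature
print(johnson_vrms(77.0, 1e6, 1e4))    # smaller when cooled to liquid-N2 temp
```

This illustrates the text's points that thermal noise grows with both temperature and bandwidth and is reduced by cooling the components.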
Shot Noise:
Occurs when electrons or charged particles cross junctions (different
materials, vacuums, etc.)
Considered “white noise” because it is independent of frequency.
It is the same at any frequency but also dependent on frequency bandwidth
Due to the statistical variation of the flow of electrons (current) across some
junction
Some of the electrons jump across the junction right away
Some of the electrons take their time jumping across the junction
Flicker Noise
Frequency dependent
Significant at frequencies less than 100 Hz
Magnitude is inversely proportional to frequency
Results in long-term drift in electronic components
Can be controlled by using special wire resistors instead of the less expensive
carbon type.
Environmental Noise
Unlimited possible sources
Can often be eliminated by eliminating the source
Other noise sources can not be eliminated!!!!!!
Methods of eliminating it…
Moving the instrument somewhere else
Isolating /conditioning the instruments power source
Controlling temperature in the room
Control expansion/contraction of components in instrument
Eliminating interferences
Stray light from open windows, panels on instrument
Turning off radios, TV’s, other instruments
SIGNAL-NOISE ENHANCEMENT
HARDWARE METHODS
Lock-in amplifier
A lock-in amplifier (also known as a phase-sensitive detector) is a type of amplifier
that can extract a signal with a known carrier wave from an extremely noisy environment (the
S/N ratio can be as low as -60 dB or even less). It is essentially a homodyne detector with an
extremely low-pass filter (making it very narrow band). Lock-in amplifiers use mixing, through
a frequency mixer, to convert the signal's phase and amplitude to a DC (actually a time-varying
low-frequency) voltage signal.
Basic principles
Operation of a lock-in amplifier relies on the orthogonality of sinusoidal functions.
Specifically, when a sinusoidal function of frequency ν is multiplied by another sinusoidal
function of frequency μ not equal to ν and integrated over a time much longer than the period
of the two functions, the result is zero. In the case when μ is equal to ν, and the two functions
are in phase, the average value is equal to half of the product of the amplitudes.
In essence, a lock-in amplifier takes the input signal, multiplies it by the reference
signal (either provided from the internal oscillator or an external source), and integrates it
over a specified time, usually on the order of milliseconds to a few seconds. The resulting
signal is an essentially DC signal, where the contribution from any signal that is not at the
same frequency as the reference signal is attenuated essentially to zero, as well as the out-of-
phase component of the signal that has the same frequency as the reference signal (because
sine functions are orthogonal to the cosine functions of the same frequency), and this is also
why a lock-in is a phase sensitive detector.
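The multiply-and-integrate principle can be simulated in a few lines; the frequencies, amplitudes, and noise level below are illustrative, not from the text:

```python
import math
import random

# Minimal lock-in sketch: multiply a noisy input by a reference sine at the
# known signal frequency and average over many samples; components not at
# the reference frequency average toward zero. All numbers are illustrative.
random.seed(0)

f_sig, amp = 100.0, 0.05     # weak 100 Hz signal, amplitude 0.05
dt, n = 1e-4, 100_000        # 10 s of samples at 10 kHz

acc = 0.0
for i in range(n):
    t = i * dt
    # input = signal buried in noise 20x its amplitude
    vin = amp * math.sin(2 * math.pi * f_sig * t) + random.gauss(0.0, 1.0)
    ref = math.sin(2 * math.pi * f_sig * t)   # in-phase reference
    acc += vin * ref
mean = acc / n
# The in-phase product averages to amp/2 = 0.025 (half the product of the
# amplitudes, as stated above); the noise contribution averages toward zero.
print(mean)
```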
Chopper amplifiers
One classic use for a chopper circuit and where the term is still in use is in chopper
amplifiers. These are DC amplifiers. Some types of signal that need amplifying can be so
small that an incredibly high gain is required, but very high gain DC amplifiers are much
harder to build with low offset and 1/f noise, and reasonable stability and bandwidth. It's
much easier to build an AC amplifier instead. A chopper circuit is used to break up the input
signal so that it can be processed as if it were an AC signal, then integrated back to a DC
signal at the output. In this way, extremely small DC signals can be amplified. This approach
is often used in electronic instrumentation where stability and accuracy are essential; for
example, it is possible using these techniques to construct pico-voltmeters and Hall sensors.
SOFTWARE METHODS
Signal Averaging
(one way of controlling noise)
Ensemble Averaging
Collect multiple signals over the same domain (time or wavelength, for example)
Easily done with computers
Calculate the mean signal at each point in the domain
Re-plot the averaged signal
Since noise is random (some +, some -), this helps reduce the overall noise by
cancellation.
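The noise reduction from ensemble averaging follows a √n law; a small simulation (all values illustrative) makes this concrete:

```python
import random

# Ensemble-averaging sketch: averaging n repeated scans of the same signal
# reduces random noise by roughly sqrt(n). All numbers are illustrative.
random.seed(1)

TRUE = 5.0
true_signal = [TRUE] * 100            # a flat "spectrum" for simplicity

def noisy_scan():
    return [s + random.gauss(0.0, 1.0) for s in true_signal]

def ensemble_average(scans):
    # mean at each point in the domain, across all scans
    return [sum(col) / len(scans) for col in zip(*scans)]

def rms_error(trace):
    return (sum((x - TRUE) ** 2 for x in trace) / len(trace)) ** 0.5

one = noisy_scan()
avg = ensemble_average([noisy_scan() for _ in range(100)])
print(rms_error(one))   # about 1.0
print(rms_error(avg))   # about 0.1: roughly a 10x (sqrt(100)) improvement
```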
Boxcar Averaging
Radiation sources and the spectral regions they cover:
Source                Spectral region
Ar lamp               vacuum UV
Xe lamp               vacuum UV, UV-VIS
H2 or D2 lamp         UV
Tungsten lamp         UV-near IR
Nernst glower         UV-VIS-near IR-IR
Nichrome wire         near IR-far IR
Globar                near IR-far IR
Hollow cathode lamp   UV-VIS
Lasers                UV-VIS-near IR
Radiation Sources
Sources may be continuous or pulsed in time
Continuum sources
- Continuum sources are preferred for spectroscopy because of their relatively
flat radiance versus wavelength curves
- examples: Nernst glower, W filament, D2 lamp, arc, arc plus reflector
- produce broad, featureless range of wavelengths
- black and gray bodies, high pressure arc lamps
Line sources
- produce relatively narrow bands at specific wavelengths generating structured
emission spectrum
- lasers, low pressure arc lamps, hollow cathode lamps
Line plus continuum sources
- contain lines superimposed on continuum background
- medium pressure arc lamps, D2 lamp
Black body sources
Nernst glowers (ZrO2, Y2O3), Globars (SiC)
1000-1500 K in air - max lies in IR
relatively fragile
low spectral radiance (B ≈ 10⁻⁴ W·cm⁻²·nm⁻¹·sr⁻¹)
Arc sources
Hg, Xe, D2 lamps
AC or DC discharge through gas or metal vapor
- 20-70 V, 10 mA-20 A
Line sources
Generally not much use for molecular spectroscopy
useful for luminescence excitation, photochemistry experiments
where high radiant intensity at one wavelength is required
Arc lamps
Low pressure (<10 Torr) with many different fill vapors
Hg, Cd, Zn, Ga, In, Th and alkali metals
Excellent wavelength calibration sources
Wavelength selector
A. Filters are used to pass a band of wavelengths
Absorption filters
Interference filters
Monochromators - one color - pass a narrow band of wavelengths
B. Prisms
Dispersing prisms
Wavelengths are separated because the refractive index of the prism
glass is different for each wavelength, so each wavelength is refracted
through a different angle.
Dispersion is angular (nonlinear). A single order is obtained. The larger
the focal length, the better the dispersion.
Reflecting prisms
Designed to change direction of propagation of beam, orientation, or
both
Polarizing prisms
Made of birefringent materials
C. Gratings
Can be considered as a set of slits at which diffraction occurs; the resulting
destructive/constructive interference yields a diffraction pattern. Groove
patterns are now generated by machine, ruled onto a piece of glass or metal.
Replica gratings are then produced by laying a polymer film over the master
to copy the groove pattern. Replicas are what are actually used in instruments
due to the great difficulty and cost of producing high-quality master gratings.
Holographic gratings are also used but are not as efficient. Never touch a
grating with your fingers.
D. Types of Mounts
1) Littrow: autocollimating
2) Czerny-Turner: two mirrors used to collimate and focus
3) Fastie-Ebert: single mirror used to collimate and focus
4) Rowland Circle: used in polychromators
5) Echelle: uses prism to sort orders from a grating
E. Performance Characteristics
Resolving power
R = λ/Δλ = nN
where n = diffraction order and N = number of lines of the grating illuminated from the
entrance slit. Resolution therefore depends on
1) Physical size of dispersing element
2) Order being observed
To get better resolution, either
1) Increase N
2) Increase n (the cost is lessened intensity)
If R = 100: poor quality
If R = 10⁶: high quality
Number of orders detectable is proportional to N
Higher orders yield greater resolution but poorer intensity
The quality of the slits is also important.
Some light is also lost in reflection (n = 0, zero order)
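As a numeric illustration of R = nN (the grating parameters and wavelength below are hypothetical):

```python
# Grating resolving power R = lambda/d_lambda = n*N, where n is the
# diffraction order and N the number of illuminated grooves. The grating
# and wavelength values are illustrative.
def resolving_power(order, grooves):
    return order * grooves

N = 1200 * 50                # 1200 grooves/mm grating, 50 mm illuminated
R = resolving_power(1, N)    # first order
print(R)                     # 60000, well above the R = 100 "poor" benchmark
print(500.0 / R)             # smallest resolvable d_lambda at 500 nm, ~0.008 nm
```

Working in second order would double R, at the cost of intensity, as noted above.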
F/Number
A measure of the light-gathering power of the monochromator for the light that
emerges from the entrance slit:
f/number = F/d
where F = focal length of the collimating mirror or lens
and d = diameter of the collimating mirror or lens
The light-gathering power of an optical device increases as the inverse square of the
f/number, therefore an f/2 lens gathers four times more light than an f/4 lens.
The f/numbers of many monochromators lie in the 1 to 10 range
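The inverse-square relation between f/number and light-gathering power can be checked directly; focal length and diameter values are illustrative:

```python
# f/number = F/d; light-gathering power varies as 1/(f/number)^2.
# Focal length and diameter values are illustrative.
def f_number(focal_length_mm, diameter_mm):
    return focal_length_mm / diameter_mm

f_fast = f_number(200.0, 100.0)   # f/2
f_slow = f_number(200.0, 50.0)    # f/4
print(f_fast, f_slow)             # 2.0 4.0
print((f_slow / f_fast) ** 2)     # 4.0: the f/2 lens gathers 4x more light
```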
F. Slits
Slits are used to limit the amount of light impinging on the dispersing element as well
as to limit the light reaching the detector.
There is a trade-off between intensity and resolution.
Spectral lines also have intrinsic widths due to several broadening mechanisms:
1) Natural broadening
2) Doppler broadening
3) Stark broadening
4) Collisional broadening
The use of entrance and exit slits convolutes this broadening with a triangular
function - the slit function.
Spectral bandpass, s, is the width at half-height of the wavelength distribution as
passed by the exit slit
s = Rd W
where Rd is the reciprocal linear dispersion and W is the slit width.
The slit-width-limited resolution Δλ is
Δλ = 2s = 2Rd W
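The bandpass and slit-limited resolution formulas above can be evaluated with illustrative dispersion and slit-width values:

```python
# Spectral bandpass s = Rd * W, and slit-width-limited resolution 2*Rd*W,
# where Rd is the reciprocal linear dispersion and W the slit width.
# The dispersion and slit values are illustrative.
def bandpass(Rd_nm_per_mm, slit_mm):
    return Rd_nm_per_mm * slit_mm

Rd, W = 2.0, 0.1          # 2 nm/mm dispersion, 0.1 mm slit
s = bandpass(Rd, W)
print(s)                  # 0.2 nm bandpass
print(2 * s)              # 0.4 nm slit-width-limited resolution
```

Narrowing the slit improves resolution but passes less light, which is the intensity/resolution trade-off mentioned above.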
SAMPLE CONTAINERS
Required of all spectroscopic methods except emission spectroscopy
Must be made of material that is transparent to the spectral region of interest
Spectral region      Material
UV                   Fused silica
VIS                  Glass, plastic
IR                   NaCl
V. RADIATION TRANSDUCERS
High sensitivity
High S/N
Constant response over range of wavelengths
Fast response
Zero output in absence of illumination
Electrical signal directly proportional to radiant power
Photomultiplier Tubes
ARRAY DETECTORS
Usually 1-3 cm long; contains from a few hundred to a couple of thousand photodiodes
(256-2048) in a linear array
Partitions spectrum into x number of wavelength increments
Each photodiode captures photons simultaneously
Measures total light energy over the time of exposure (whereas PMT measures
instantaneous light intensity)
Process
Each diode in the array is reverse-biased and thus can store charge like a capacitor
Before being exposed to light to be detected, diodes are fully charged via a transistor
switch
Light falling on the PDA will generate charge carriers in the silicon which combine
with stored charges of opposite polarity and neutralize them
The amount of charge lost is proportional to the intensity of light
The amount of current needed to recharge each diode is the quantity measured, which is
proportional to light intensity
Recharging signal is sent to sample-and-hold amplifier and then digitized
Array is however read sequentially over a common output line
Use minicomputer to handle data
Disadvantages
Must have fast data storage system
High dark noise
Must cool PDA to well below room temperature
Diode saturates within a few seconds integration time
Resolution not good, limited by # diodes/linear distance
Stray radiant energy (SRE) is a killer
Used as detectors in Raman, fluorescence, and absorption
CHARGE TRANSFER DEVICES
Two-dimensional arrays of silicon integrated circuits, postage-stamp-size
Typical pixel dimensions are 20 x 20 µm
Both CCDs and CIDs accumulate photogenerated charges in similar ways but differ in the
way accumulated charge is detected
A. Charge Injection Devices (CID)
• Summation is done on the chip rather than in memory after the readout, thus only one
read operation required for all the pixels to be summed, thus lower readout noise per
pixel is achieved
Used in astronomy and low light situations: fluorometry, Raman, CZE, HPLC
Thermal Transducers
• phototransducers not applicable in IR due to low energy
• Thermocouples
• Bolometers
• Pyroelectric Transducers
UNIT III
MOLECULAR SPECTROSCOPY
INTRODUCTION
Historically, spectroscopy referred to a branch of science in which visible light was used for
the theoretical study of the structure of matter and for qualitative and quantitative analyses.
Recently, however, the definition has broadened as new techniques have been developed that
utilise not only visible light, but many other forms of radiation.
Spectroscopy is often used in physical and analytical chemistry for the identification of
substances through the spectrum emitted from or absorbed by them. Spectroscopy is also
heavily used in astronomy and remote sensing. Most large telescopes have spectrometers,
which are used either to measure the chemical composition and physical properties of
astronomical objects or to measure their velocities from the Doppler shift of their spectral
lines.
The type of spectroscopy depends on the physical quantity measured. Normally, the quantity
that is measured is an amount or intensity of something.
Measurement process
Fluorescence spectroscopy
Fluorescence spectroscopy uses higher energy photons to excite a sample, which will then
emit lower energy photons. This technique has become popular for its biochemical and
medical applications, and can be used for confocal microscopy, fluorescence resonance
energy transfer, and fluorescence lifetime imaging.
Flame Spectroscopy
Atomic Emission Spectroscopy - This method uses flame excitation; atoms are excited
from the heat of the flame to emit light. This method commonly uses a total
consumption burner with a round burning outlet. A higher temperature flame than
atomic absorption spectroscopy (AA) is typically used to produce excitation of
analyte atoms. Since analyte atoms are excited by the heat of the flame, no special
elemental lamps to shine into the flame are needed. A high resolution polychromator
can be used to produce an emission intensity vs. wavelength spectrum over a range of
wavelengths showing multiple element excitation lines, meaning multiple elements
can be detected in one run. Alternatively, a monochromator can be set at one
wavelength to concentrate on analysis of a single element at a certain emission line.
Plasma emission spectroscopy is a more modern version of this method. See Flame
emission spectroscopy for more details.
Atomic absorption spectroscopy (often called AA) - This method commonly uses a pre-
burner nebulizer (or nebulizing chamber) to create a sample mist and a slot-shaped
burner which gives a longer pathlength flame. The temperature of the flame is low
enough that the flame itself does not excite sample atoms from their ground state. The
nebulizer and flame are used to desolvate and atomize the sample, but the excitation
of the analyte atoms is done by the use of lamps shining through the flame at various
wavelengths for each type of analyte. In AA, the amount of light absorbed after going
through the flame determines the amount of analyte in the sample. A graphite furnace
for heating the sample to desolvate and atomize is commonly used for greater
sensitivity. The graphite furnace method can also analyze some solid or slurry
samples. Because of its good sensitivity and selectivity, it is still a commonly used
method of analysis for certain trace elements in aqueous (and other liquid) samples.
Atomic Fluorescence Spectroscopy - This method commonly uses a burner with a round
burning outlet. The flame is used to solvate and atomize the sample, but a lamp shines
light at a specific wavelength into the flame to excite the analyte atoms in the flame.
The atoms of certain elements can then fluoresce emitting light in a different
direction. The intensity of this fluorescing light is used for quantifying the amount of
analyte element in the sample. A graphite furnace can also be used for atomic
fluorescence spectroscopy. This method is not as commonly used as atomic
absorption or plasma emission spectroscopy.
Spark or arc (emission) spectroscopy - is used for the analysis of metallic elements in solid
samples. For non-conductive materials, a sample is ground with graphite powder to make it
conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly
ground up and destroyed during analysis. An electric arc or spark is passed through the
sample, heating the sample to a high temperature to excite the atoms in it. The excited analyte
atoms glow emitting light at various wavelengths which could be detected by common
spectroscopic methods. Since the conditions producing the arc emission typically are not
controlled quantitatively, the analysis for the elements is qualitative. Nowadays, spark
sources with controlled discharges under an argon atmosphere allow this method to be
considered eminently quantitative, and its use is widespread in the production-control
laboratories of foundries and steel mills.
Visible spectroscopy
Many atoms emit or absorb visible light. In order to obtain a fine line spectrum, the atoms
must be in a gas phase. This means that the substance has to be vaporised. The spectrum is
studied in absorption or emission. Visible absorption spectroscopy is often combined with
UV absorption spectroscopy in UV/Vis spectroscopy.
Ultraviolet spectroscopy
All atoms absorb in the UV region because these photons are energetic enough to excite outer
electrons. If the frequency is high enough, photoionisation takes place. UV spectroscopy is
also used in quantifying protein and DNA concentration as well as the ratio of protein to
DNA concentration in a solution. Several amino acids usually found in protein, such as
tryptophan, absorb light in the 280 nm range and DNA absorbs light in the 260 nm range. For
this reason, the ratio of 260/280 nm absorbance is a good general indicator of the relative
purity of a solution in terms of these two macromolecules. Reasonable estimates of protein or
DNA concentration can also be made this way using Beer's law.
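A small sketch of these estimates: the 260/280 purity ratio comes from the text, while the extinction value used for double-stranded DNA (about 0.020 A per µg/mL per cm, i.e. A260 = 1.0 for ~50 µg/mL) is a common lab convention used here only as an illustration:

```python
# Beer's law: A = epsilon * b * c, so concentration c = A / (epsilon * b).
# The dsDNA extinction value (0.020 A per ug/mL per cm) is a common lab
# convention, not from the text; readings are hypothetical.
def concentration(A, epsilon, path_cm=1.0):
    return A / (epsilon * path_cm)

A260, A280 = 0.50, 0.27                # hypothetical readings
print(round(A260 / A280, 2))           # 1.85: near the ~1.8 typical of pure DNA
print(concentration(A260, 0.020))      # 25.0 ug/mL of dsDNA
```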
Infrared spectroscopy
Infrared spectroscopy makes it possible to measure different types of interatomic bond
vibrations at different frequencies. Especially in organic chemistry, the analysis of IR
absorption spectra shows what types of bonds are present in the sample.
Raman Spectroscopy
Raman spectroscopy uses the inelastic scattering of light to analyse vibrational and rotational
modes of molecules. The resulting 'fingerprints' are an aid to analysis.
Nuclear magnetic resonance spectroscopy analyzes the magnetic properties of certain atomic
nuclei to determine different electronic local environments of hydrogen, carbon, or other
atoms in an organic compound or other compound. This is used to help determine the
structure of the compound.
Photoemission spectroscopy
Mössbauer spectroscopy
Photoacoustic Spectroscopy measures the sound waves produced upon the absorption
of radiation.
Photothermal Spectroscopy measures heat evolved upon absorption of radiation.
Circular Dichroism spectroscopy
Raman Optical Activity Spectroscopy exploits Raman scattering and optical activity
effects to reveal detailed information on chiral centers in molecules.
Terahertz spectroscopy uses wavelengths above infrared spectroscopy and below
microwave or millimeter wave measurements.
Inelastic neutron scattering works like Raman spectroscopy, with neutrons instead of
photons.
Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic
electron-vibration interaction at specific energies which can also measure optically
forbidden transitions.
Auger Spectroscopy is a method used to study surfaces of materials on a micro-scale.
It is often used in connection with electron microscopy.
Cavity ring down spectroscopy
Fourier transform is an efficient method for processing spectra data obtained using
interferometers. The use of Fourier transform in spectroscopy is called Fourier
transform spectroscopy. Nearly all infrared spectroscopy (FTIR) and Nuclear
Magnetic Resonance (NMR) spectroscopy are performed with Fourier transforms.
Spectroscopy of matter in situations where the properties are changing with time is
called Time-resolved spectroscopy.
Mechanical spectroscopy involves interactions with macroscopic vibrations, such as
phonons. An example is acoustic spectroscopy, involving sound waves.
Time-resolved spectroscopy
Spectroscopy using an AFM-based analytical technique is called Force spectroscopy.
Dielectric spectroscopy
Thermal infrared spectroscopy measures thermal radiation emitted from materials and
surfaces and is used to determine the type of bonds present in a sample as well as their
lattice environment. The techniques are widely used by organic chemists,
mineralogists, and planetary scientists.
Background Subtraction
Background subtraction is a term typically used in spectroscopy when one explains the
process of acquiring a background radiation level (or ambient radiation level) and then makes
an algorithmic adjustment to the data to obtain qualitative information about any deviations
from the background, even when they are an order of magnitude less decipherable than the
background itself.
SPECTROPHOTOMETRY
There are two major classes of spectrophotometers: single beam and double beam. A double
beam spectrophotometer measures the ratio of the light intensity on two different light paths,
and a single beam spectrophotometer measures the absolute light intensity. Although ratio
measurements are easier and generally more stable, single beam instruments have advantages; for
instance, they can have a larger dynamic range, and they can be more compact.
Historically, spectrophotometers have used a monochromator to analyze the spectrum, but there are
also spectrophotometers that use arrays of photosensors. Especially for infrared
spectrophotometers, there are instruments that use a Fourier transform technique to
acquire the spectral information more quickly, in a technique called Fourier Transform
InfraRed (FTIR) spectroscopy.
The spectrophotometer quantitatively measures the fraction of light that passes through a
given solution. In a spectrophotometer, light from the lamp is guided through a
monochromator, which picks light of one particular wavelength out of the continuous
spectrum. This light passes through the sample that is being measured. After the sample, the
intensity of the remaining light is measured with a photodiode or other light sensor, and the
transmittance for this wavelength is then calculated.
UV and IR spectrophotometers
The most common spectrophotometers are used in the UV and visible regions of the
spectrum, and some of these instruments also operate into the near-infrared region.
Visible-region (400-700 nm) spectrophotometry is used extensively in colorimetry. Ink
manufacturers, printing companies, textile vendors, and many others need the data provided
through colorimetry. They usually take readings every 20 nanometers along the visible
region and produce a spectral reflectance curve. These curves can be used to test a new batch
of colorant to check whether it matches specifications.
Samples are usually prepared in cuvettes; depending on the region of interest, they may be
constructed of glass, plastic, or quartz.
IR spectrophotometry
Spectrophotometers designed for the main infrared region are quite different because of the
technical requirements of measurement in that region. One major factor is the type of
photosensors that are available for different spectral regions, but infrared measurement is also
challenging because virtually everything emits IR light as thermal radiation, especially at
wavelengths beyond about 5 μm.
Another complication is that quite a few materials, such as glass and plastic, absorb infrared
light, making them unsuitable as optical media. Ideal optical materials are salts, which do
not absorb strongly. Samples for IR spectrophotometry may be smeared between two discs of
potassium bromide or ground with potassium bromide and pressed into a pellet. Where
aqueous solutions are to be measured, insoluble silver chloride is used to construct the cell.
Spectroradiometers
Spectroradiometers, which operate almost like visible-region spectrophotometers, are
designed to measure the spectral density of illuminants. They are used to evaluate and
categorize lighting for sale by the manufacturer, or so that customers can confirm that the
lamp they have decided to purchase is within specifications.
Components:
1. The light source shines onto or through the sample.
2. The sample transmits or reflects light.
3. The detector detects how much light was reflected from or transmitted through the sample.
4. The detector then converts how much light the sample transmitted or reflected into a number.
ULTRAVIOLET-VISIBLE SPECTROSCOPY
Applications
Solutions of transition metal ions can be coloured (i.e., absorb visible light) because d
electrons within the metal atoms can be excited from one electronic state to another.
The colour of metal ion solutions is strongly affected by the presence of other species,
such as certain anions or ligands. For instance, the colour of a dilute solution of
copper sulphate is a very light blue; adding ammonia intensifies the colour and
changes the wavelength of maximum absorption (λ_max).
Organic compounds, especially those with a high degree of conjugation, also absorb
light in the UV or visible regions of the electromagnetic spectrum. The solvents for
these determinations are often water for water soluble compounds, or ethanol for
organic-soluble compounds. (Organic solvents may have significant UV absorption;
not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly
at most wavelengths.)
While charge transfer complexes also give rise to colours, the colours are often too
intense to be used for quantitative measurement.
The Beer-Lambert law states that the absorbance of a solution is directly proportional to the
solution's concentration. Thus UV/VIS spectroscopy can be used to determine the
concentration of a solution. It is necessary to know how quickly the absorbance changes with
concentration. This can be taken from references (tables of molar extinction coefficients), or
more accurately, determined from a calibration curve.
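Determining a concentration from a calibration curve can be sketched with an ordinary least-squares fit; the standards below are hypothetical values, used only to illustrate the procedure:

```python
# Calibration-curve sketch: fit absorbance vs. known concentrations of
# standards, then invert the fit for an unknown. All values are hypothetical.
concs = [0.0, 0.1, 0.2, 0.3, 0.4]      # standard concentrations (mol/L)
abss  = [0.00, 0.12, 0.25, 0.36, 0.49] # measured absorbances

n = len(concs)
mean_c = sum(concs) / n
mean_a = sum(abss) / n
# ordinary least-squares slope and intercept
slope = sum((c - mean_c) * (a - mean_a) for c, a in zip(concs, abss)) / \
        sum((c - mean_c) ** 2 for c in concs)
intercept = mean_a - slope * mean_c

unknown_abs = 0.30  # absorbance of the unknown sample
unknown_conc = (unknown_abs - intercept) / slope
print(round(unknown_conc, 3))  # ~0.246 mol/L
```

The fitted slope plays the role of the product εL in the Beer-Lambert law.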
Beer-Lambert law
A = log10(I0 / I) = ε L c
where A is the measured absorbance, I0 is the intensity of the incident light at a given
wavelength, I is the transmitted intensity, L the pathlength through the sample, and c the
concentration of the absorbing species. For each species and wavelength, ε is a constant
known as the molar absorptivity or extinction coefficient. This constant is a fundamental
molecular property in a given solvent, at a particular temperature and pressure, and has units
of 1/(M·cm) or often AU/(M·cm).
The absorbance and extinction ε are sometimes defined in terms of the natural logarithm
instead of the base-10 logarithm.
The Beer-Lambert Law is useful for characterizing many compounds but does not hold as a
universal relationship for the concentration and absorption of all substances. A 2nd order
polynomial relationship between absorption and concentration is sometimes encountered for
very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for
example).
A = −log10(T) = 2 − log10(%T)
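The base-10 relation between absorbance and percent transmittance can be checked numerically; this is a minimal sketch of the convention, not tied to any particular instrument:

```python
import math

# Converting percent transmittance to absorbance (base-10 convention):
# A = -log10(T) = 2 - log10(%T), since T = %T / 100.
def absorbance_from_percent_T(percent_T):
    return 2.0 - math.log10(percent_T)

print(absorbance_from_percent_T(100.0))  # 0.0 (no absorption)
print(absorbance_from_percent_T(10.0))   # 1.0 (90% of the light absorbed)
print(absorbance_from_percent_T(1.0))    # 2.0
```

Each factor-of-ten drop in transmittance adds one absorbance unit.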
The basic parts of a spectrophotometer are a light source (often an incandescent bulb for the
visible wavelengths, or a deuterium arc lamp in the ultraviolet), a holder for the sample, a
diffraction grating or monochromator to separate the different wavelengths of light, and a
detector. The detector is typically a photodiode or a CCD. Photodiodes are used with
monochromators, which filter the light so that only light of a single wavelength reaches the
detector. Diffraction gratings are used with CCDs, which collect light of different
wavelengths on different pixels.
A spectrophotometer can be either single beam or double beam. In a single beam instrument
(such as the Spectronic 20), all of the light passes through the sample cell. I0 must be
measured by removing the sample. This was the earliest design, but it is still in common use in
both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample.
One beam is used as the reference; the other beam passes through the sample. Some double-
beam instruments have two detectors (photodiodes), and the sample and reference beam are
measured at the same time. In other instruments, the two beams pass through a beam chopper,
which blocks one beam at a time. The detector alternates between measuring the sample
beam and the reference beam.
Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of
gases and even of solids can also be measured. Samples are typically placed in a transparent
cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an
internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.)
Test tubes can also be used as cuvettes in some instruments. The best cuvettes are made of
high quality quartz, although glass or plastic cuvettes are common. (Glass and most plastics
absorb in the UV, which limits their usefulness to visible wavelengths.)
Ultraviolet-visible spectrum
The Woodward-Fieser rules are a set of empirical observations which can be used to
predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic
compounds such as dienes and ketones.
INFRARED SPECTROSCOPY
Infrared spectroscopy (IR spectroscopy) is the subset of spectroscopy that deals with the
infrared region of the electromagnetic spectrum. It covers a range of techniques, the most
common being a form of absorption spectroscopy. As with all spectroscopic techniques, it
can be used to identify compounds or investigate sample composition. Infrared spectroscopy
correlation tables are tabulated in the literature.
Theory
The infrared portion of the electromagnetic spectrum is divided into three regions: the near-,
mid- and far-infrared, named for their relation to the visible spectrum. The far-infrared,
approximately 400-10 cm-1 (25–1000 μm), lying adjacent to the microwave region, has low
energy and may be used for rotational spectroscopy. The mid-infrared, approximately 4000-
400 cm-1 (2.5–25 μm), may be used to study the fundamental vibrations and associated
rotational-vibrational structure. The higher-energy near-IR, approximately 14000-4000 cm-1
(0.7–2.5 μm), can excite overtone or harmonic vibrations. The names and classifications of
these subregions are merely conventions; they are neither strict divisions nor based on exact
molecular or electromagnetic properties.
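The wavelength figures for these regions follow directly from the wavenumber boundaries, since wavelength in μm equals 10000 divided by the wavenumber in cm-1; a quick sketch:

```python
# Converting an IR wavenumber (cm^-1) to wavelength (micrometres):
# lambda[um] = 10000 / nu[cm^-1]
def wavenumber_to_um(nu_cm):
    return 10000.0 / nu_cm

print(wavenumber_to_um(4000))   # 2.5 um  (mid-IR / near-IR boundary)
print(wavenumber_to_um(400))    # 25.0 um (mid-IR / far-IR boundary)
print(wavenumber_to_um(10))     # 1000.0 um (far-IR / microwave boundary)
```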
Infrared spectroscopy exploits the fact that molecules have specific frequencies at which they
rotate or vibrate, corresponding to discrete energy levels. These resonant frequencies are
determined by the shape of the molecular potential energy surfaces, the masses of the atoms,
and the associated vibronic coupling. In order for a vibrational mode in a molecule to be
IR active, it must be associated with a change in the dipole moment. In particular, in the
Born-Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian
corresponding to the electronic ground state can be approximated by a harmonic oscillator in
the neighborhood of the equilibrium molecular geometry, the resonant frequencies are
determined by the normal modes corresponding to the molecular electronic ground state
potential energy surface. Nevertheless, the resonant frequencies can be in a first approach
related to the strength of the bond, and the mass of the atoms at either end of it. Thus, the
frequency of the vibrations can be associated with a particular bond type.
Simple diatomic molecules have only one bond, which may stretch. More complex molecules
have many bonds, and vibrations can be coupled, leading to infrared absorptions at
characteristic frequencies that may be related to chemical groups. For example, the atoms in a
CH2 group, commonly found in organic compounds, can vibrate in six different ways:
symmetrical and antisymmetrical stretching, scissoring, rocking, wagging and twisting.
The infrared spectra of a sample are collected by passing a beam of infrared light through the
sample. Examination of the transmitted light reveals how much energy was absorbed at each
wavelength. This can be done with a monochromatic beam, which changes in wavelength
over time, or by using a Fourier transform instrument to measure all wavelengths at once.
From this, a transmittance or absorbance spectrum can be produced, showing at which IR
wavelengths the sample absorbs. Analysis of these absorption characteristics reveals details
about the molecular structure of the sample.
This technique works almost exclusively on samples with covalent bonds. Simple spectra are
obtained from samples with few IR active bonds and high levels of purity. More complex
molecular structures lead to more absorption bands and more complex spectra. The technique
has been used for the characterization of very complex mixtures.
Sample preparation
Gaseous samples require little preparation beyond purification, but a sample cell with a long
pathlength (typically 5-10 cm) is normally needed, as gases show relatively weak
absorbances.
Liquid samples can be sandwiched between two plates of a high purity salt (commonly
sodium chloride, or common salt, although a number of other salts such as potassium
bromide or calcium fluoride are also used). The plates are transparent to the infrared light and
will not introduce any lines onto the spectra. Some salt plates are highly soluble in water, so
the sample and washing reagents must be anhydrous (without water).
Solid samples can be prepared in two major ways. The first is to crush the sample with a
mulling agent (usually Nujol) in a marble or agate mortar with a pestle. A thin film of the
mull is applied onto salt plates and measured.
The second method is to grind a quantity of the sample with a specially purified salt (usually
potassium bromide) finely (to remove scattering effects from large crystals). This powder
mixture is then crushed in a mechanical die press to form a translucent pellet through which
the beam of the spectrometer can pass.
It is important to note that spectra obtained from different sample preparation methods will
look slightly different from each other due to differences in the samples' physical states.
The cast film technique is used mainly for polymeric compounds. The sample is first dissolved
in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a
KBr or NaCl cell. The solution is then evaporated to dryness, and the film formed on the cell is
analysed directly. Care is needed to ensure that the film is not too thick, otherwise light
cannot pass through. This technique is suitable for qualitative analysis.
Typical method
Typical apparatus
A beam of infrared light is produced and split into two separate beams. One is passed through
the sample; the other is passed through a reference, which is often the substance in which the
sample is dissolved. The beams are both reflected back towards a detector; however, they first
pass through a splitter, which quickly alternates which of the two beams enters the detector.
The two signals are then compared and a printout is obtained.
This prevents fluctuations in the output of the source affecting the data
This allows the effects of the solvent to be cancelled out (the reference is usually a
pure form of the solvent the sample is in)
Infrared spectroscopy is widely used in both research and industry as a simple and reliable
technique for measurement, quality control and dynamic measurement. The instruments are
now small, and can be transported, even for use in field trials. With advances in
computer filtering and manipulation of results, samples in solution can now be measured
accurately (water produces a broad absorbance across the range of interest, and thus renders
the spectra unreadable without this computer treatment). Some machines will also
automatically identify the substance being measured from a store of thousands of
reference spectra held in storage.
By measuring at a specific frequency over time, changes in the character or quantity of a
particular bond can be measured. This is especially useful in measuring the degree of
polymerization in polymer manufacture. Modern research machines can take infrared
measurements across the whole range of interest as frequently as 32 times a second. This can
be done whilst simultaneous measurements are made using other techniques. This makes the
observations of chemical reactions and processes quicker and more accurate.
Techniques have been developed to assess the quality of tea-leaves using infrared
spectroscopy. This will mean that highly trained experts (also called 'noses') can be used
more sparingly, at a significant cost saving.[1]
Infrared spectroscopy has been highly successful for applications in both organic and
inorganic chemistry. Infrared spectroscopy has also been successfully utilized in the field of
semiconductor microelectronics[2]: for example, infrared spectroscopy can be applied to
semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous
silicon, silicon nitride, etc.
Isotope effects
The different isotopes in a particular species may give fine detail in infrared spectroscopy.
For example, the O-O stretching frequency of oxyhemocyanin is experimentally determined
to be 832 and 788 cm-1 for ν(16O-16O) and ν(18O-18O) respectively.
The reduced masses for 16O-16O and 18O-18O can be approximated as 8 and 9 respectively.
In the harmonic approximation the stretching frequency scales as ν ∝ √(1/μ), so
ν(18O-18O)/ν(16O-16O) = √(8/9) ≈ 0.943, and 832 cm-1 × 0.943 ≈ 784 cm-1, close to the
observed 788 cm-1.
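The harmonic-oscillator scaling ν ∝ 1/√μ behind this isotope shift can be checked numerically with the approximate reduced masses above:

```python
import math

# Harmonic-oscillator estimate of the isotope shift: nu is proportional
# to 1/sqrt(mu), so nu_18 = nu_16 * sqrt(mu_16 / mu_18).
# Reduced masses (approx.): mu(16O-16O) = 8, mu(18O-18O) = 9.
nu_16 = 832.0            # cm^-1, observed for 16O-16O
mu_16, mu_18 = 8.0, 9.0

nu_18_predicted = nu_16 * math.sqrt(mu_16 / mu_18)
print(round(nu_18_predicted, 1))  # ~784 cm^-1, close to the observed 788 cm^-1
```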
RAMAN SPECTROSCOPY
Typically, a sample is illuminated with a laser beam. Light from the illuminated spot is
collected with a lens and sent through a monochromator. Wavelengths close to the laser line,
due to elastic Rayleigh scattering, are filtered out while the rest of the collected light is
dispersed onto a detector.
Spontaneous Raman scattering is typically very weak, and as a result the main difficulty of
Raman spectroscopy is separating the weak inelastically scattered light from the intense
Rayleigh scattered laser light. Raman spectrometers typically use holographic diffraction
gratings and multiple dispersion stages to achieve a high degree of laser rejection. In the past,
photomultiplier tubes (PMTs) were the detectors of choice for dispersive Raman setups, which
resulted in long acquisition times. However, the recent use of CCD detectors has made
dispersive Raman spectral acquisition much more rapid.
Basic theory
Energy level diagram showing the states involved in Raman signal. The line thickness is
roughly proportional to the signal strength from the different transitions.
The Raman effect occurs when light impinges upon a molecule and interacts with the electron
cloud of the bonds of that molecule. The incident photon excites one of the electrons into a
virtual state. For the spontaneous Raman effect, the molecule will be excited from the ground
state to a virtual energy state, and relax into a vibrational excited state, which generates
Stokes Raman scattering. If the molecule was already in an elevated vibrational energy state,
the Raman scattering is then called anti-Stokes Raman scattering.
Although the inelastic scattering of light was predicted by Smekal in 1923, it was not until
1928 that it was observed in practice. The Raman effect was named after one of its
discoverers, the Indian scientist Sir C. V. Raman who observed the effect by means of
sunlight (1928, together with K. S. Krishnan and independently by Grigory Landsberg and
Leonid Mandelstam).[1] Raman won the Nobel Prize in Physics in 1930 for this discovery
accomplished using sunlight, a narrow band photographic filter to create monochromatic light
and a "crossed" filter to block this monochromatic light. He found that light of changed
frequency passed through the "crossed" filter.
Subsequently the mercury arc became the principal light source, first with photographic
detection and then with spectrophotometric detection. Currently lasers are used as light
sources.
Applications
Raman gas analyzers have many practical applications, for instance they are used in medicine
for real-time monitoring of anaesthetic and respiratory gas mixtures during surgery.
In solid state physics, spontaneous Raman spectroscopy is used to, among other things,
characterize materials, measure temperature, and find the crystallographic orientation of a
sample.
As with single molecules, a given solid material has characteristic phonon modes that can
help an experimenter identify it. In addition, Raman spectroscopy can be used to observe
other low frequency excitations of the solid, such as plasmons, magnons, and
superconducting gap excitations.
The spontaneous Raman signal gives information on the population of a given phonon mode
in the ratio between the Stokes (downshifted) intensity and anti-Stokes (upshifted) intensity.
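A hedged sketch of extracting a mode population (expressed here as a temperature) from the anti-Stokes/Stokes ratio, using the Boltzmann factor and ignoring the frequency-prefactor correction; the mode frequency and measured ratio below are illustrative values, not from the text:

```python
import math

# Approximation: I_antiStokes / I_Stokes ~ exp(-h*c*nu / (k*T)), i.e. the
# Boltzmann population of the phonon mode (the omega^4 prefactor
# correction is ignored in this sketch).
h = 6.626e-34   # Planck constant, J s
c = 2.998e10    # speed of light in cm/s, so nu can stay in cm^-1
k = 1.381e-23   # Boltzmann constant, J/K

nu = 520.0      # cm^-1, a silicon-like phonon mode (illustrative)
ratio = 0.08    # hypothetical measured I_antiStokes / I_Stokes

T = h * c * nu / (k * -math.log(ratio))
print(round(T, 1))  # estimated sample temperature in kelvin
```

With these illustrative numbers the estimate comes out near room temperature, which is the kind of consistency check this ratio is used for.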
Raman scattering by an anisotropic crystal gives information on the crystal orientation. The
polarization of the Raman scattered light with respect to the crystal and the polarization of the
laser light can be used to find the orientation of the crystal, if the crystal structure
(specifically, its point group) is known.
Raman active fibers, such as aramid and carbon, have vibrational modes that show a shift in
Raman frequency with applied stress. Polypropylene fibers also exhibit similar shifts.
The radial breathing mode is a commonly used technique to evaluate the diameter of carbon
nanotubes.
Spatially Offset Raman Spectroscopy (SORS), which is less sensitive to surface layers than
conventional Raman, can be used to discover counterfeit drugs without opening their internal
packaging, and for non-invasive monitoring of biological tissue.[2][3]
Raman microspectroscopy
Raman microscopy, and in particular confocal microscopy, has very high spatial resolution.
For example, the lateral and depth resolutions were 250 nm and 1.7 µm, respectively, using a
confocal Raman microspectrometer with the 632.8 nm line from a He-Ne laser with a pinhole
of 100 µm diameter.
Since the objective lenses of microscopes focus the laser beam to several micrometres in
diameter, the resulting photon flux is much higher than achieved in conventional Raman
setups. This has the added benefit of enhanced fluorescence quenching. However, the high
photon flux can also cause sample degradation, and for this reason some setups require a
thermally conducting substrate (which acts as a heat sink) in order to mitigate this process.
Raman microscopy for biological and medical specimens generally uses near-infrared (NIR)
lasers (785 nm diodes and 1064 nm Nd:YAG are especially common). This reduces the risk
of damaging the specimen by applying high power. However, the intensity of NIR Raman is
low (owing to the ω^4 dependence of Raman scattering intensity), and most detectors require
very long collection times. Recently, more sensitive detectors have become available, making
the technique better suited to general use. Raman microscopy of inorganic specimens, such as
rocks, ceramics and polymers, can use a broader range of excitation wavelengths.[5]
Variations
Several variations of Raman spectroscopy have been developed. The usual purpose is to
enhance the sensitivity (e.g., surface-enhanced Raman), to improve the spatial resolution
(Raman microscopy), or to acquire very specific information (resonance Raman).
Hyper Raman - A non-linear effect in which the vibrational modes interact with the
second harmonic of the excitation beam. This requires very high power, but allows
the observation of vibrational modes which are normally "silent". It frequently relies
on SERS-type enhancement to boost the sensitivity.
Stimulated Raman Spectroscopy - A two color pulse transfers the population from
ground to a rovibrationally excited state, if the difference in energy corresponds to an
allowed Raman transition. Two photon UV ionization, applied after the population
transfer but before relaxation, allows the intra-molecular or inter-molecular Raman
spectrum of a gas or molecular cluster (indeed, a given conformation of molecular
cluster) to be collected. This is a useful molecular dynamics technique.
UNIT IV
THERMAL ANALYSIS
THERMOGRAVIMETRIC ANALYSIS
Figure: sketch of a typical TGA instrument (Setaram TG-DTA 92 B type); the cooling water pipe is omitted.
Analyzer
The analyzer usually consists of a high-precision balance with a pan loaded with the sample.
The sample is placed in a small electrically heated oven with a thermocouple to accurately
measure the temperature. The atmosphere may be purged with an inert gas to prevent
oxidation or other undesired reactions. A computer is used to control the instrument.
Analysis is carried out by raising the temperature gradually and plotting weight against
temperature. After the data are obtained, curve smoothing and other operations may be
performed, for example to find the exact points of inflection.
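Finding where the weight loss is fastest amounts to locating the extremum of the derivative thermogram (DTG); a minimal sketch with made-up data:

```python
# Locating a mass-loss step in TGA data: the derivative thermogram (DTG)
# is most negative where weight loss is fastest. Data below are made-up
# (temperature in deg C, mass in mg).
temps = [100, 150, 200, 250, 300, 350, 400]
mass  = [10.0, 10.0, 9.8, 8.0, 6.2, 6.0, 6.0]

# simple finite-difference derivative dm/dT between adjacent points
dtg = [(mass[i+1] - mass[i]) / (temps[i+1] - temps[i])
       for i in range(len(mass) - 1)]

# the steepest loss lies between the temperatures bracketing the minimum
i = dtg.index(min(dtg))
print(temps[i], temps[i+1])  # temperature interval of fastest mass loss
```

A real instrument would smooth the curve first; the finite difference here only illustrates the idea.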
DIFFERENTIAL THERMAL ANALYSIS
Apparatus
Applications
A DTA curve can be used simply as a fingerprint for identification purposes, but usually the
applications of this method are the determination of phase diagrams, heat-change
measurements and studies of decomposition in various atmospheres.
DTA may be used in cement chemistry, mineralogical research and in environmental studies.
DTA curves may also be used to date bone remains or to study archaeological materials.
DIFFERENTIAL SCANNING CALORIMETRY
An alternative technique, which shares much in common with DSC, is differential thermal
analysis (DTA). In this technique it is the heat flow to the sample and reference that remains
the same rather than the temperature. When the sample and reference are heated identically
phase changes and other thermal processes cause a difference in temperature between the
sample and reference. Both DSC and DTA provide similar information; DSC is the more
widely used of the two techniques.[1][2][3]
DSC curves
The result of a DSC experiment is a curve of heat flux versus temperature or versus time.
There are two different conventions: exothermic reactions in the sample may be shown with
either a positive or a negative peak, depending on the kind of technology used in the
instrument. This curve can be used to calculate enthalpies of transitions. This is done by
integrating the peak corresponding to a given transition. It can be shown that the enthalpy of
transition can be expressed using the following equation:
ΔH = KA
where ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under
the curve. The calorimetric constant will vary from instrument to instrument, and can be
determined by analyzing a well-characterized sample with known enthalpies of transition.[2]
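The ΔH = KA relation can be sketched numerically with a trapezoidal integration of a baseline-corrected peak; the peak data and K value below are illustrative, not from any instrument:

```python
# Sketch of delta_H = K * A: integrate a DSC peak by the trapezoidal rule
# to get the area A, then scale by the calorimetric constant K.
# Peak data and K are illustrative values.
temps = [150, 152, 154, 156, 158, 160]   # deg C
flux  = [0.0, 0.5, 1.2, 1.2, 0.5, 0.0]   # baseline-corrected heat flux

area = sum((flux[i] + flux[i+1]) / 2 * (temps[i+1] - temps[i])
           for i in range(len(flux) - 1))
K = 1.0  # calorimetric constant, found by running a standard such as indium
delta_H = K * area
print(delta_H)  # enthalpy of transition in the instrument's units
```

In practice K is fixed by measuring a standard of known enthalpy, then reused for unknowns.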
Applications
Glass transitions may occur as the temperature of an amorphous solid is increased. These
transitions appear as a step in the baseline of the recorded DSC signal. This is due to the
sample undergoing a change in heat capacity; no formal phase change occurs.[1][3]
As the temperature increases, an amorphous solid will become less viscous. At some point
the molecules may obtain enough freedom of motion to spontaneously arrange themselves
into a crystalline form. This is known as the crystallization temperature (Tc). This transition
from amorphous solid to crystalline solid is an exothermic process, and results in a peak in
the DSC signal. As the temperature increases the sample eventually reaches its melting
temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The
ability to determine transition temperatures and enthalpies makes DSC an invaluable tool in
producing phase diagrams for various chemical systems.
DSC may also be used in the study of liquid crystals. As matter transitions between solid and
liquid it often goes through a third state, which displays properties of both phases. This
anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is
possible to observe the small energy changes that occur as matter transitions from a solid to a
liquid crystal and from a liquid crystal to an isotropic liquid.[2]
Using differential scanning calorimetry to study the oxidative stability of samples generally
requires an airtight sample chamber. Usually, such tests are done isothermally (at constant
temperature) by changing the atmosphere of the sample. First, the sample is brought to the
desired test temperature under an inert atmosphere, usually nitrogen. Then, oxygen is added
to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such
analyses can be used to determine the stability and optimum storage conditions for a
compound.[1]
DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist,
DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer
properties. The cross-linking of polymer molecules that occurs in the curing process is
exothermic, resulting in a positive peak in the DSC curve that usually appears soon after the
glass transition.[1][2][3]
In food science research, DSC is used in conjunction with other thermal analytical techniques
to determine water dynamics. Changes in water distribution may be correlated with changes
in texture. Similar to material science studies, the effects of curing on confectionery products
can also be analyzed.
DSC curves may also be used to evaluate drug and polymer purities. This is possible because
the temperature range over which a mixture of compounds melts is dependent on their
relative amounts. This effect is due to a phenomenon known as freezing point depression,
which occurs when a foreign solute is added to a solution. (Freezing point depression is what
allows salt to de-ice sidewalks and antifreeze to keep your car running in the winter.)
Consequently, less pure compounds will exhibit a broadened melting peak that begins at a
lower temperature than that of a pure compound.
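The freezing point depression invoked here follows ΔT = i·Kf·m; a small worked example using the standard textbook constants for NaCl in water (values chosen only for illustration):

```python
# Freezing point depression: delta_T = i * Kf * m, with van 't Hoff
# factor i, cryoscopic constant Kf, and molality m.
# Constants below are the standard textbook values for NaCl in water.
Kf = 1.86   # K kg/mol, cryoscopic constant of water
i = 2       # NaCl dissociates into two ions
m = 1.0     # mol of solute per kg of solvent

delta_T = i * Kf * m
print(delta_T)  # the freezing point is lowered by 3.72 K
```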
In the last few years this technique has also been applied to the study of metallic materials.
The characterization of such materials by DSC is not yet easy because of the limited
literature on the subject. It is known that DSC can be used to find the solidus and liquidus
temperatures of a metal alloy, but its widest application is, for now, the study of
precipitation, Guinier-Preston zones, phase transitions, dislocation movement, grain growth,
etc.
UNIT V
SEPARATION TECHNIQUES
INTRODUCTION
Explanation
An analogy which is sometimes useful is to suppose a mixture of bees and wasps passing
over a flower bed. The bees would be more attracted to the flowers than the wasps, and
would become separated from them. If one were to observe at a point past the flower bed, the
wasps would pass first, followed by the bees. In this analogy, the bees and wasps represent
the analytes to be separated, the flowers represent the stationary phase, and the mobile phase
could be thought of as the air. The key to the separation is the differing affinities among
analyte, stationary phase, and mobile phase. The observer could represent the detector used in
some forms of analytical chromatography. A key point is that the detector need not be
capable of discriminating between the analytes, since they have become separated before
passing the detector.
Chromatography terms
Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for
example obtained by a spectrophotometer, mass spectrometer or a variety of other
detectors) corresponding to the response created by the analytes exiting the system. In
the case of an optimal system the signal is proportional to the concentration of the
specific analyte separated.
Column chromatography
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a
packed bed. This allows omission of initial clearing steps such as centrifugation and
filtration, for culture broths or slurries of broken cells.
Planar Chromatography
Paper Chromatography
Paper chromatography is a technique that involves placing a small dot of sample solution
onto a strip of chromatography paper. The paper is placed in a jar containing a shallow layer
of solvent and sealed. As the solvent rises through the paper it meets the sample mixture
which starts to travel up the paper with the solvent. Different compounds in the sample
mixture travel different distances according to how strongly they interact with the paper. This
paper is made of cellulose, a polar molecule, and the compounds within the mixture travel
farther if they are non-polar. More polar substances bond with the cellulose paper more
quickly, and therefore do not travel as far. This process allows the calculation of an Rf value
and can be compared to standard compounds to aid in the identification of an unknown
substance.
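The Rf (retention factor) calculation mentioned above is a simple ratio of distances measured on the paper; a minimal sketch with illustrative measurements:

```python
# Rf = distance travelled by the compound / distance travelled by the
# solvent front, both measured from the origin spot.
# Distances (cm) below are illustrative measurements.
def rf(spot_distance, solvent_front):
    return spot_distance / solvent_front

print(round(rf(3.2, 8.0), 2))  # 0.4
```

A non-polar compound on cellulose would show a higher Rf than a polar one run in the same solvent.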
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid.
Liquid chromatography can be carried out either in a column or a plane. Present day liquid
chromatography that generally utilizes very small packing particles and a relatively high
pressure is referred to as high performance liquid chromatography (HPLC).
In the HPLC technique, the sample is forced through a column packed with irregularly
or spherically shaped particles or a porous monolithic layer (the stationary phase) by a liquid
(the mobile phase) at high pressure. HPLC is historically divided into two sub-classes
based on the relative polarity of the mobile and stationary phases. A technique in which the stationary
phase is more polar than the mobile phase (e.g. toluene as the mobile phase, silica as the
stationary phase) is called normal-phase liquid chromatography (NPLC), and the opposite
(e.g. a water-methanol mixture as the mobile phase and C18 = octadecylsilyl as the stationary
phase) is called reversed-phase liquid chromatography (RPLC). Despite its name, the "normal phase"
has fewer applications, and RPLC is therefore used considerably more.
Specific techniques that come under this broad heading are listed below. Note that the
following techniques can also be considered fast protein liquid chromatography (FPLC)
when only low pressure is used to drive the mobile phase through the stationary phase. See also
Aqueous Normal Phase Chromatography.
Affinity chromatography
Special techniques
Reversed-phase chromatography
Two-dimensional chromatography
In some cases, the chemistry within a given column is insufficient to separate some
analytes. A series of unresolved peaks can then be directed onto a second column with
different physico-chemical properties. Since the mechanism of
retention on this new solid support differs from that of the first-dimension separation, it may be
possible to separate compounds that are indistinguishable by one-dimensional
chromatography.
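The gain from a second, orthogonal dimension can be illustrated with retention times: two compounds that co-elute on the first column may still differ on the second. A minimal sketch with hypothetical retention times and a hypothetical resolution tolerance:

```python
# Sketch of how a second, orthogonal retention dimension resolves
# peaks that co-elute in the first. All retention times (minutes)
# and the tolerance are hypothetical.

compounds = {
    "A": (5.2, 1.1),  # (retention time on column 1, on column 2)
    "B": (5.2, 3.8),  # co-elutes with A on column 1 only
    "C": (7.9, 1.1),  # co-elutes with A on column 2 only
}

def resolved(rt_x, rt_y, tolerance=0.2):
    """Two peaks are resolved if they differ in at least one dimension."""
    return abs(rt_x[0] - rt_y[0]) > tolerance or abs(rt_x[1] - rt_y[1]) > tolerance

# A and B are indistinguishable on column 1 alone, but resolved in 2-D:
print(resolved(compounds["A"], compounds["B"]))  # True
```

The same logic underlies comprehensive two-dimensional techniques such as GCxGC and LCxLC, where every fraction from the first column is re-separated on the second.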
Countercurrent chromatography
Chiral chromatography
Chiral chromatography involves the separation of stereoisomers. Enantiomers
have no chemical or physical differences apart from being three-dimensional mirror
images of one another, so conventional chromatography and other separation processes are incapable of
separating them. To enable chiral separations to take place, either the mobile phase or the
stationary phase must itself be made chiral, giving the analytes differing affinities.
Chiral HPLC columns (with a chiral stationary phase) in both
normal and reversed phase are commercially available.