This spreadsheet demonstrates the spectral distribution of the
slit function, transmission, and measured light for a simulated
dispersive absorption spectrophotometer with a continuum light
source, adjustable wavelength, mechanical slit width, reciprocal
linear dispersion, spectral
bandpass, absorber spectral half-width, concentration, path
length, and unabsorbed stray light. (Mouse-controlled sliders
allow you to change the values quickly without typing). It
computes the relative signal-to-noise ratio under
photon-noise-limited and detector-noise-limited conditions.
Note: this simulation applies to conventional molecular
absorption spectrophotometry as well as to continuum-source atomic
absorption, but not to line-source atomic absorption, where the
function of slit width is different. Reference: Thomas C. O'Haver,
"Effect of the source/absorber width ratio on the signal-to-noise
ratio of dispersive absorption spectrometry", Analytical
Chemistry, 1991, 63 (2), pp 164–169.
Assumptions: The true monochromatic absorbance follows the Beer-Lambert Law; the absorber has a Gaussian absorption spectrum; the monochromator has a Gaussian slit function; the absorption path length and absorber concentration are both uniform across the light beam; the spectral response of the detector is much wider than the spectral bandpass of the monochromator; a double-beam instrument design measures both sample and reference beams and both beams are subject to random and uncorrelated noise.
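To make these assumptions concrete, here is a minimal Python sketch (not the spreadsheet's own formulas; the variable names and wavelength units are illustrative) that computes the apparent absorbance of a Gaussian absorber viewed through a Gaussian slit function, including unabsorbed stray light. The measured transmission is the slit-function-weighted average of the monochromatic transmission, and the absorbance is computed from that averaged transmission:

    import numpy as np

    K = 1 / (2 * np.sqrt(2 * np.log(2)))   # converts FWHM to Gaussian sigma

    def measured_absorbance(peak_A, sb, width, stray=0.0):
        # peak_A: true monochromatic absorbance at the peak (Beer-Lambert)
        # sb:     spectral bandpass (FWHM of the Gaussian slit function)
        # width:  spectral width (FWHM) of the Gaussian absorption band
        # stray:  unabsorbed stray light as a fraction of the total light
        span = 6 * max(sb, width)
        lam = np.linspace(-span, span, 8001)       # wavelength offsets from the peak
        slit = np.exp(-0.5 * (lam / (sb * K)) ** 2)
        slit /= slit.sum()                         # normalized slit function
        A = peak_A * np.exp(-0.5 * (lam / (width * K)) ** 2)
        T = (np.sum(slit * 10.0 ** (-A)) + stray) / (1.0 + stray)
        return -np.log10(T)                        # apparent (measured) absorbance

    print(measured_absorbance(1.0, 0.5, 1.0, stray=0.001))  # apparent A for SB = W/2

Averaging the transmission (not the absorbance) over the slit function is the source of all the polychromaticity effects discussed below.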
View Equations (.pdf)
Download spreadsheet in Excel format (.xls)
Download spreadsheet in OpenOffice format (.ods)
Other related simulations:
Monochromator
U.V.-Visible Spectrophotometer
Dual Wavelength Spectrophotometer
Signal-to-noise ratio of absorption spectrophotometry
Instrumental Deviations from Beer's Law
Comparison of Calibration Curve Fitting Methods in Absorption Spectroscopy
Multiwavelength Spectrometry
Spectroscopic Simulation of Atomic Absorption
What is slit width? Slit
width is the width (usually expressed in mm) of the entrance and
exit slits of a monochromator.
The slits are rectangular apertures through which light enters
and exits the monochromator. Their purpose is to control the
spectral resolution of the monochromator, that is, its
ability to separate close wavelengths. In the diagram below,
B is the entrance slit and F is the exit slit.
Light is focused onto the entrance slit B, collimated by concave
mirror C, and directed onto the grating D, which disperses the
light according to wavelength. Concave mirror E then focuses the
dispersed light onto the exit slit F, forming a spectrum across
the exit slit. Only the particular wavelength that falls directly
on the exit slit passes through it and is detected. (In the
diagram above, white light enters the monochromator at A, but
only the green wavelengths pass through and are detected at G.)
Adjusting the rotation angle of the grating changes the wavelength
that passes through the exit slit. In a standard monochromator
design, the entrance
and exit slits have equal width. The wider the slit widths, the
larger the range of wavelengths that passes through the
monochromator. Some simple instruments, for example the common Spectronic 20, have fixed
slit widths, but most research-grade instruments have
user-controllable slit widths. In general, smaller (narrower) slit
widths yield greater spectral resolution but cut down the amount
of light that is transmitted through the monochromator.
In an absorption spectrophotometer, a monochromator is used to limit the wavelength range of the light passed through the sample to that which can be absorbed by the sample. In the most common arrangement, the light source is focused onto the entrance slit and the absorbing sample is placed immediately after the exit slit, with the photodetector immediately behind it to detect the intensity of the transmitted light.
What is the optimum slit width for absorption spectroscopy? The answer depends on the purpose of the measurement. If the purpose is to record an accurate absorption spectrum, for example for use as a reference spectrum for future measurements or for identification, then a sufficiently small slit width must be used to avoid the polychromaticity deviation from the Beer-Lambert Law. The requirement is that the spectral bandpass (the spectral width over which the transmission of the sample is measured, given the variable name SB in this spreadsheet) be small compared to the spectral width of the absorber. In a dispersive instrument (using a white light source and a monochromator), the spectral bandpass is given by the product of the mechanical slit width (sw) and the reciprocal linear dispersion (RLD). The slit width is user-variable in many instruments, whereas the RLD is fixed by the design of the monochromator. So, if the slit width is adjustable, setting it to the smallest width will ensure the smallest spectral bandpass and result in the minimum polychromaticity error. However, the signal-to-noise ratio decreases as the slit width is reduced, so it is not always practical to use the smallest slit width possible. If the spectral bandpass is one tenth (1/10th) of the spectral width (full width at half-maximum) of the narrowest band in the spectrum, then the maximum error caused by polychromaticity will be about 0.8% for a Lorentzian absorption band and 0.5% for a Gaussian absorption band, which is a sufficiently small error for many purposes. A smaller slit width, even if it is available on the spectrometer, will not be useful if the error caused by random noise exceeds the error caused by non-linearity.
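As a rough numerical check of those figures, the short Python sketch below (the 0.1 mm slit width and 4 nm/mm dispersion are invented values for illustration) forms the spectral bandpass as SB = sw x RLD and estimates the polychromaticity error for a Gaussian band ten times wider than the bandpass:

    import numpy as np

    K = 1 / (2 * np.sqrt(2 * np.log(2)))     # FWHM -> Gaussian sigma

    sw, rld = 0.1, 4.0                        # hypothetical: 0.1 mm slit, 4 nm/mm RLD
    sb = sw * rld                             # spectral bandpass SB = sw * RLD = 0.4 nm
    W = 10 * sb                               # absorber FWHM ten times the bandpass

    lam = np.linspace(-5 * W, 5 * W, 8001)    # wavelength offsets from the peak, nm
    slit = np.exp(-0.5 * (lam / (sb * K)) ** 2)
    slit /= slit.sum()                        # normalized slit function
    A = 1.0 * np.exp(-0.5 * (lam / (W * K)) ** 2)   # true peak absorbance = 1.0
    A_meas = -np.log10(np.sum(slit * 10.0 ** (-A)))
    print(f"apparent A = {A_meas:.4f}")       # ~0.995, i.e. about the 0.5% error quoted above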
On the other hand, if the purpose of the measurement is quantitative analysis of the concentrations of the absorbing components, then the requirement for good signal-to-noise ratio is more important, especially in trace analysis applications that may operate near the signal-to-noise ratio limit of the instrument. Moreover, in this application, the primary requirement is linearity of the analytical curve (plot of absorbance vs concentration) rather than absolute accuracy of the absorbance. This is because, in the vast majority of practical cases, quantitative analysis procedures are calibrated against standard samples rather than depending on absolute absorbance measurements. For both of those reasons, the restrictions on maximum slit width are considerably relaxed.
When the slit width of the monochromator is increased, two
optical effects are observed:
1) the total slit area increases in proportion to the slit width, which increases the spatial fraction of the light source intensity that enters the monochromator (assuming that the image of the light source formed on the entrance slit by the entrance optics is larger than the width of the slit, which is almost always the case in normal instruments), and
2) the spectral bandpass of the monochromator increases in proportion to the slit width, which increases the spectral fraction of the source intensity that enters the monochromator - in other words, more photons of different colors get through. (This assumes that the light source is a continuum source whose spectral distribution is much wider than the spectral bandpass of the monochromator.)
These two factors operate independently, with the result that the
light level incident on the sample increases with the square of the slit width. The
resulting higher light intensity increases the signal-to-noise
ratio (SNR), in a way that can be predicted by the simulation "Signal-to-noise ratio of absorption
spectrophotometry". Simply put, the effect on SNR depends on
the dominant noise in the system. Photon noise
(caused by the quantum nature of light, and often the limiting
noise in instruments that use photomultiplier detectors), is
proportional to the square root of light intensity, and therefore
the SNR is proportional to the square root of light intensity and
directly proportional to the slit width. Detector
noise (constant noise originating in the detector, and often
the limiting noise in instruments that use solid-state photodiode
detectors) is independent of the light intensity and therefore the
detector SNR is directly proportional to the light intensity and
to the square of the slit width. Flicker noise,
caused by light source instability, vibration, sample cell
positioning errors, light scattering by suspended particles, dust,
bubbles, etc., is directly proportional to the light intensity, so
the flicker SNR is independent of the slit width and is not
improved by opening the slits.
Fortunately, flicker noise can usually be reduced or eliminated by
using double-beam, dual wavelength, derivative, and wavelength
modulation instrument designs.
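The three scaling laws can be summarized in a few lines of Python (a sketch of the proportionalities only; a real instrument mixes all three noise sources, as the full simulation does):

    import numpy as np

    sw = np.array([0.1, 0.2, 0.5, 1.0])   # hypothetical slit widths, mm
    I = (sw / sw[0]) ** 2                  # intensity grows as sw^2 (area x bandpass)

    snr_photon = np.sqrt(I)                # noise ~ sqrt(I) -> SNR ~ sw
    snr_detector = I                       # noise constant  -> SNR ~ sw^2
    snr_flicker = np.ones_like(I)          # noise ~ I       -> SNR independent of sw

    for row in zip(sw, snr_photon, snr_detector, snr_flicker):
        print("sw=%.1f mm  photon x%.0f  detector x%.0f  flicker x%.0f" % row)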
The other effect of increasing slit width is that, because the
spectral bandpass increases in proportion to the slit width, the
analytical curve non-linearity caused by polychromaticity is
increased. The simulation "Instrumental
Deviations from Beer's Law" shows that, if the spectral
bandpass is one-tenth the absorption peak width and the unabsorbed
stray light is 0.1%, the analytical curve is still nearly linear
up to an absorbance of 2, with an R²
of 1.000. When this curve is fit with a straight-line
least-squares fit, the average concentration prediction error is
less than 0.1% of the maximum concentration. Even if the spectral
bandpass is as large as one-half the absorption peak width, the
analytical curve is still nearly linear up to an absorbance of 2,
with an R² of 0.9999.
When this curve is fit with a straight-line least-squares
fit, the average concentration prediction error is less than 1% of
the maximum concentration (0.5% if the unabsorbed stray light is
negligible).
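A hedged sketch of that linearity comparison (Gaussian band, 0.1% unabsorbed stray light, straight-line fit; the exact R² values depend on the concentration range chosen for the standards):

    import numpy as np

    K = 1 / (2 * np.sqrt(2 * np.log(2)))     # FWHM -> Gaussian sigma
    W, stray = 1.0, 0.001                    # absorber FWHM (arb. units), 0.1% stray light

    def apparent_A(conc, sb):
        lam = np.linspace(-5 * W, 5 * W, 8001)
        slit = np.exp(-0.5 * (lam / (sb * K)) ** 2)
        slit /= slit.sum()
        A = conc * np.exp(-0.5 * (lam / (W * K)) ** 2)   # peak absorbance = conc
        return -np.log10((np.sum(slit * 10.0 ** (-A)) + stray) / (1 + stray))

    conc = np.linspace(0.1, 2.0, 10)         # standards spanning A ~ 0.1 to 2
    for sb in (0.1 * W, 0.5 * W):
        A = np.array([apparent_A(c, sb) for c in conc])
        fit = np.polyval(np.polyfit(conc, A, 1), conc)   # straight-line fit
        r2 = 1 - np.sum((A - fit) ** 2) / np.sum((A - A.mean()) ** 2)
        print(f"SB/W = {sb/W:.1f}: straight-line R^2 = {r2:.4f}")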
The present simulation, "Effect of Slit Width on Signal-to-Noise Ratio in Absorption Spectroscopy", shows how these opposing factors combine: decreasing observed absorbance and analytical curve linearity versus increasing light intensity and signal-to-noise ratio as the slit width is increased. For example, the simulation shows that, at an absorbance of 1.0, as the slit width is increased so that the spectral bandpass goes from 0.1 to 0.5 of the absorption peak width, the photon SNR increases by a factor of 5 and the detector SNR increases by a factor of 30. If the slit width is increased so that the spectral bandpass equals the absorption peak width, the photon SNR increases by a factor of 9 and the detector SNR increases by a factor of 140, but in that case the analytical curve is sufficiently non-linear that the average concentration prediction error is as much as 3% of the maximum concentration up to an absorbance of 1.0 and 6% up to an absorbance of 2.0. At low absorbances, however, the linearity is good even in that case.
Although the signal-to-noise ratio continues to increase with
slit width, using very large slit widths is not advisable because
of the increased analytical curve non-linearity and the increased
possibility of spectral interferences from non-analyte absorption
(See #6 in the Student
Handout). In fact, the simulation shows that, if the
reciprocal linear dispersion of the monochromator is varied at
constant slit width (which would require changing the
monochromator design by varying its focal length or the
diffraction grating ruling density), the optimum photon SNR always
occurs when the spectral bandpass equals the absorber width.
(See #7 in the Student Handout).
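That claim can be checked numerically. In the simplified model below (Gaussian band and slit function; photon-limited noise, so sigma_A is proportional to 1/sqrt(I*T); intensity proportional to SB alone at fixed slit width; and the calibration sensitivity dA_meas/dA_true taken as the signal), the optimum falls close to SB/W = 1, consistent with the statement above. This is a sketch under those stated assumptions, not the spreadsheet's exact noise model:

    import numpy as np

    K = 1 / (2 * np.sqrt(2 * np.log(2)))      # FWHM -> Gaussian sigma
    W, A0 = 1.0, 1.0                           # absorber FWHM and true peak absorbance

    def trans(sb, a0):
        # measured transmission for peak absorbance a0 and spectral bandpass sb
        span = 6 * max(sb, W)
        lam = np.linspace(-span, span, 8001)
        slit = np.exp(-0.5 * (lam / (sb * K)) ** 2)
        slit /= slit.sum()
        return np.sum(slit * 10.0 ** (-a0 * np.exp(-0.5 * (lam / (W * K)) ** 2)))

    results = []
    for sb in np.arange(0.2, 3.01, 0.1):       # sweep bandpass at fixed slit width
        T = trans(sb, A0)
        sens = (np.log10(T) - np.log10(trans(sb, A0 + 0.01))) / 0.01  # dA_meas/dA_true
        snr = sens * np.sqrt(sb * T)           # intensity ~ SB; sigma_A ~ 1/sqrt(I*T)
        results.append((snr, sb))
    print(f"photon SNR peaks near SB/W = {max(results)[1]:.1f}")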
This has implications for the design of spectrometers. It explains
why relatively small, low dispersion monochromators are commonly
used in condensed-phase spectrophotometry, whereas high-dispersion
echelle spectrometers are used in continuum-source atomic
absorption spectroscopy, where the absorber widths of the atomic
lines are orders of magnitude smaller.
What about line-source atomic absorption? Line-source atomic absorption is different from other forms of absorption spectroscopy because the primary light source is an atomic line source (e.g. a hollow cathode lamp or an electrodeless discharge lamp) whose spectral width is not a function of the monochromator slit width, but rather is controlled by the temperature and pressure within the lamp and by the hyperfine splitting of that element. The only control that the operator has over the source line width is the operating current of the lamp; increased current causes increased temperature and pressure, both of which lead to increased line width (increased source width), as well as higher lamp intensity. So, to that extent, the effect of increasing the lamp current is a bit like the effect of increasing the slit width in a continuum-source absorption instrument; the intensity goes up (good), but the increased source width and the resulting polychromaticity cause calibration non-linearity (bad). The slit width on a line-source atomic absorption instrument is also adjustable by the user, but it has no effect on the source width, because the spectral bandpass is much greater (by 1000-fold or so) than the actual line width of the atomic line source. (Typical atomic line widths are 0.001 - 0.003 nm, compared to a typical spectral bandpass of 1 nm.) Increasing the slit width does increase the light intensity linearly (because the entrance slit area increases in direct proportion), but operating at too large a slit width runs the risk of increasing the stray light, by allowing in other lines emitted by the atomic line source that are not absorbed by the analyte in the atomizer, such as lines originating from the fill gas, impurities in the lamp, and non-resonance lines of the analyte. As always, stray light leads to non-linearity, especially at high concentrations, and eventually to a plateau in the calibration curve at high absorbances.
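The stray-light plateau is easy to illustrate: with a fraction s of unabsorbed stray light reaching the detector in both beams, the measured absorbance cannot exceed -log10(s/(1+s)), as this sketch (with an invented 1% stray-light level) shows:

    import numpy as np

    s = 0.01                                   # hypothetical 1% unabsorbed stray light
    for A_true in (0.5, 1.0, 2.0, 3.0, 4.0):
        T = 10.0 ** (-A_true)                  # true transmission (Beer-Lambert)
        A_meas = -np.log10((T + s) / (1 + s))  # stray light raises both beams
        print(f"true A = {A_true:.1f}  measured A = {A_meas:.2f}")
    # the measured absorbance levels off near -log10(s/(1+s)) ~ 2.0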
Correcting for calibration curve non-linearity. What can be done about the calibration curve non-linearity in cases where the SNR is the limiting factor and a small slit width cannot be used? The traditional approach is to decrease the fitting error by using a curvilinear regression to fit the calibration curve, rather than a first-order straight line. This possibility is demonstrated by the simulation "Calibration Curve Fitting Methods in Absorption Spectroscopy". However, there are downsides to that approach. First, the analytical curve must be characterized fully by running a series of several standards, which is time-consuming and possibly expensive. Second, the common curvilinear functions (quadratic, cubic) are not a perfect match to the shape of an absorption spectroscopy analytical curve, whose shape is determined by polychromaticity error and unabsorbed stray light. Third, the use of the curve-fit equation to predict the concentration of unknown samples from their absorbance readings requires that the curve-fit equation be solved for concentration as a function of absorbance. This is trivial in the case of a first-order (straight line) fit, not difficult for a quadratic fit, but quite complicated for cubic and higher-order polynomials. (This particular problem can be overcome by taking liberties with statistical rigor and fitting absorbance as the independent variable vs concentration as the dependent variable, as done in "Calibration Curve Fitting Methods in Absorption Spectroscopy" and in "Error propagation in analytical calibration methods/Calibration curve method with non-linear fit".) Finally, the use of higher-order polynomials to improve the fitting error runs the danger of "fitting the noise" when a small number of standards is used, yielding an unstable fit that can be wildly unpredictable outside the range of the standards. These problems ultimately stem from the fact that the polychromaticity and unabsorbed stray light problems in absorption spectroscopy really operate in the spectral domain and are not fundamentally describable in terms of polynomial functions at the analytical curve level. Because of this, it is reasonable that their solution might be better achieved by curve fitting in the spectral domain rather than at the analytical curve level. This is possible with modern absorption spectrophotometers that use array detectors, which have many tiny detector elements that slice the spectrum of the transmitted beam into many small wavelength segments, rather than detecting the sum of all those segments with one big photo-tube detector as older instruments do. This is the approach taken in the Transmission Fitting Method, a spectral-based curve fitting procedure that yields linear calibration curves up to an absorbance of 100, even in the presence of unabsorbed stray light and even when the spectral bandpass is comparable to or greater than the absorption peak width.

© T. C. O'Haver, 2008, 2015. Last updated September, 2014. This page is part of Interactive Computer Models for Analytical Chemistry Instruction, created and maintained by Prof. O'Haver at toh@umd.edu.