
Effect of Slit Width on Signal-to-Noise Ratio in Absorption Spectroscopy


[Background]  [Student Handout]

This spreadsheet demonstrates the spectral distribution of the slit function, transmission, and measured light for a simulated dispersive absorption spectrophotometer with a continuum light source, adjustable wavelength, mechanical slit width, reciprocal linear dispersion, spectral bandpass, absorber spectral half-width, concentration, path length, and unabsorbed stray light. (Mouse-controlled sliders allow you to change the values quickly without typing.) It computes the relative signal-to-noise ratio under photon-noise-limited and detector-noise-limited conditions. Note: this simulation applies to conventional molecular absorption spectrophotometry as well as to continuum-source atomic absorption, but not to line-source atomic absorption, where the function of slit width is different. Reference: Thomas C. O'Haver, "Effect of the source/absorber width ratio on the signal-to-noise ratio of dispersive absorption spectrometry", Analytical Chemistry, 1991, 63 (2), pp 164–169.

Assumptions: The true monochromatic absorbance follows the Beer-Lambert Law; the absorber has a Gaussian absorption spectrum; the monochromator has a Gaussian slit function; the absorption path length and absorber concentration are both uniform across the light beam; the spectral response of the detector is much wider than the spectral bandpass of the monochromator; a double-beam instrument design measures both sample and reference beams and both beams are subject to random and uncorrelated noise.  

View Equations (.pdf)
Download spreadsheet in Excel format (.xls)
Download spreadsheet in OpenOffice format (.ods)

Other related simulations:
Monochromator
U.V.-Visible Spectrophotometer
Dual Wavelength Spectrophotometer
Signal-to-noise ratio of absorption spectrophotometry
Instrumental Deviations from Beer's Law
Comparison of Calibration Curve Fitting Methods in Absorption Spectroscopy
Multiwavelength Spectrometry
Spectroscopic Simulation of Atomic Absorption

[Return to Index of Simulations]

Background

What is slit width?  Slit width is the width (usually expressed in mm) of the entrance and exit slits of a monochromator.  The slits are rectangular apertures through which light enters and exits the monochromator. Their purpose is to control the spectral resolution of the monochromator, that is, its ability to separate close wavelengths.  In the diagram below, B is the entrance slit and F is the exit slit.

Optical diagram of a common (Czerny-Turner) monochromator design; from Wikipedia: http://en.wikipedia.org/wiki/Image:Czerny-turner.png

Light from the source is focused onto the entrance slit B; concave mirror C collimates the light and directs it onto the grating D, which disperses it according to wavelength. Concave mirror E then focuses the dispersed light onto the exit slit F, forming a spectrum across the exit slit. Only the particular wavelength that falls directly on the exit slit passes through it and is detected. (In the diagram above, white light enters the monochromator at A, but only the green wavelengths pass through and are detected at G). Adjusting the rotation angle of the grating changes the wavelength that passes through the exit slit. In a standard monochromator design, the entrance and exit slits have equal width. The wider the slit widths, the larger the range of wavelengths that passes through the monochromator. Some simple instruments, for example the common Spectronic 20, have fixed slit widths, but most research-grade instruments have user-controllable slit widths. In general, smaller (narrower) slit widths yield greater spectral resolution but cut down the amount of light that is transmitted through the monochromator.

In an absorption spectrophotometer, a monochromator is used to limit the wavelength range of the light passed through the sample to that which can be absorbed by the sample. In the most common arrangement, the light source is focused onto the entrance slit and the absorbing sample is placed immediately after the exit slit, with the photodetector immediately behind it to detect the intensity of the transmitted light.

What is the optimum slit width for absorption spectroscopy? The answer depends on the purpose of the measurement. If the purpose is to record an accurate absorption spectrum, for example for use as a reference spectrum for future measurements or for identification, then a sufficiently small slit width must be used to avoid the polychromaticity deviation from the Beer-Lambert Law. The requirement is that the spectral bandpass (the spectral width over which the transmission of the sample is measured, given the variable name SB in this spreadsheet) be small compared to the spectral width of the absorber. In a dispersive instrument (using a white light source and a monochromator), the spectral bandpass is given by the product of the mechanical slit width (sw) and the reciprocal linear dispersion (RLD). The slit width is user-variable in many instruments, whereas the RLD is fixed by the design of the monochromator. So, if the slit width is adjustable, setting it to the smallest width will ensure the smallest spectral bandpass and result in the minimum polychromaticity error. However, the signal-to-noise ratio decreases as the slit width is reduced, so it is not always practical to use the smallest slit width possible. If the spectral bandpass is one tenth (1/10th) of the spectral width (full width at half-maximum) of the narrowest band in the spectrum, then the maximum error caused by polychromaticity will be about 0.8% for a Lorentzian absorption band and 0.5% for a Gaussian absorption band, which is a sufficiently small error for many purposes. A smaller slit width, even if the spectrometer allows it, will not be useful if the resulting random noise error exceeds the error caused by non-linearity.
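The band-averaging effect behind the polychromaticity error can be sketched numerically. The short Python script below is not part of the spreadsheet; it is a minimal sketch assuming a Gaussian absorption band and a Gaussian slit function centered on the band, as in the simulation, and computes the absorbance that would be measured through a finite spectral bandpass:

    import numpy as np

    def measured_absorbance(true_A, sb_fwhm, absorber_fwhm, npts=4001):
        # Absorbance measured through a Gaussian slit function of FWHM sb_fwhm (nm)
        # for a Gaussian band of FWHM absorber_fwhm (nm) with peak absorbance true_A.
        # Spectral bandpass sb_fwhm = slit width (mm) x reciprocal linear dispersion (nm/mm).
        span = 5 * max(sb_fwhm, absorber_fwhm)
        wl = np.linspace(-span, span, npts)                   # wavelengths relative to band center
        sigma_a = absorber_fwhm / 2.3548                      # FWHM -> Gaussian standard deviation
        sigma_s = sb_fwhm / 2.3548
        A_mono = true_A * np.exp(-wl**2 / (2 * sigma_a**2))   # monochromatic absorbance spectrum
        slit = np.exp(-wl**2 / (2 * sigma_s**2))              # monochromator slit function
        T = np.sum(slit * 10**(-A_mono)) / np.sum(slit)       # band-averaged transmission
        return -np.log10(T)

    for ratio in (0.1, 0.5, 1.0):   # spectral bandpass as a fraction of the absorber width
        A = measured_absorbance(true_A=1.0, sb_fwhm=ratio * 100.0, absorber_fwhm=100.0)
        print(f"SB/width = {ratio}: measured A = {A:.4f} (true A = 1.0)")

Under these assumptions, a bandpass of one tenth of the absorber width should give a measured absorbance only about 0.5% low at a true absorbance of 1, consistent with the figure quoted above, while larger bandpasses depress the measured absorbance much more.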

On the other hand, if the purpose of the measurement is quantitative analysis of the concentrations of the absorbing components, then the requirement for good signal-to-noise ratio is more important, especially in trace analysis applications that may operate near the signal-to-noise ratio limit of the instrument. Moreover, in this application, the primary requirement is linearity of the analytical curve (plot of absorbance vs concentration) rather than absolute accuracy of the absorbance. This is because, in the vast majority of practical cases, quantitative analysis procedures are calibrated against standard samples rather than depending on absolute absorbance measurements. For both of those reasons, the restrictions on maximum slit width are considerably relaxed.  

When the slit width of the monochromator is increased, two optical effects are observed.

1) the total slit area increases in proportion to the slit width, which increases the spatial fraction of the light source intensity that enters the monochromator (assuming that the image of the light source formed on the entrance slit by the entrance optics is larger than the width of the slit, which is almost always the case in normal instruments), and

2) the spectral bandpass of the monochromator increases in proportion to the slit width, which increases the spectral fraction of the source intensity that enters the monochromator - in other words, more photons of different colors get through. (This is assuming that the light source is a continuum source whose spectral distribution is much wider than the spectral bandpass of the monochromator).

These two factors operate independently, with the result that the light level incident on the sample increases with the square of the slit width. The resulting higher light intensity increases the signal-to-noise ratio (SNR) in a way that can be predicted by the simulation "Signal-to-noise ratio of absorption spectrophotometry". Simply put, the effect on SNR depends on the dominant noise in the system. Photon noise (caused by the quantum nature of light, and often the limiting noise in instruments that use photomultiplier detectors) is proportional to the square root of the light intensity, and therefore the photon SNR is proportional to the square root of the light intensity and directly proportional to the slit width. Detector noise (constant noise originating in the detector, and often the limiting noise in instruments that use solid-state photodiode detectors) is independent of the light intensity, and therefore the detector SNR is directly proportional to the light intensity and to the square of the slit width. Flicker noise, caused by light source instability, vibration, sample cell positioning errors, light scattering by suspended particles, dust, bubbles, etc., is directly proportional to the light intensity, so the flicker SNR is independent of the slit width. Fortunately, flicker noise can usually be reduced or eliminated by using double-beam, dual wavelength, derivative, and wavelength modulation instrument designs.
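These scaling relationships can be summarized in a few lines of Python. This is a sketch of the proportionalities only, in arbitrary relative units (the intensity of 1 at a 1 mm slit width is a hypothetical reference point, not a value from the spreadsheet):

    import numpy as np

    slit_width = np.array([0.5, 1.0, 2.0, 5.0, 10.0])    # mm
    intensity = slit_width**2      # light reaching the sample scales as the square of the slit width

    snr_photon   = intensity / np.sqrt(intensity)   # photon noise ~ sqrt(I) -> SNR proportional to slit width
    snr_detector = intensity / 1.0                  # constant detector noise -> SNR proportional to slit width squared
    snr_flicker  = intensity / intensity            # flicker noise ~ I -> SNR independent of slit width

    for sw, p, d, f in zip(slit_width, snr_photon, snr_detector, snr_flicker):
        print(f"slit width {sw:4.1f} mm: photon SNR {p:6.2f}, detector SNR {d:7.2f}, flicker SNR {f:4.2f}")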

The other effect of increasing slit width is that, because the spectral bandpass increases in proportion to the slit width, the analytical curve non-linearity caused by polychromaticity is increased. The simulation "Instrumental Deviations from Beer's Law" shows that, if the spectral bandpass is one-tenth the absorption peak width and the unabsorbed stray light is 0.1%, the analytical curve is still nearly linear up to an absorbance of 2, with an R² of 1.000. When this curve is fit with a straight-line least-squares fit, the average concentration prediction error is less than 0.1% of the maximum concentration. Even if the spectral bandpass is as large as one-half the absorption peak width, the analytical curve is still nearly linear up to an absorbance of 2, with an R² of 0.9999. When this curve is fit with a straight-line least-squares fit, the average concentration prediction error is less than 1% of the maximum concentration (0.5% if the unabsorbed stray light is negligible).
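Extending the band-averaging sketch above, the linearity of the analytical curve for a given bandpass-to-width ratio can be checked with a straight-line fit. This is again only an illustration with Gaussian profiles and a hypothetical 0.1% stray-light level; the exact R² and prediction-error figures quoted above come from the "Instrumental Deviations from Beer's Law" simulation, not from this code:

    import numpy as np

    def measured_A(true_A, sb_over_width=0.5, stray=0.001, width=100.0, npts=4001):
        # Band-averaged absorbance for a Gaussian band of FWHM `width` (nm),
        # a Gaussian slit of FWHM sb_over_width*width, and a fraction `stray`
        # of unabsorbed stray light.
        wl = np.linspace(-5 * width, 5 * width, npts)
        s_a = width / 2.3548
        s_s = (sb_over_width * width) / 2.3548
        A_mono = true_A * np.exp(-wl**2 / (2 * s_a**2))
        slit = np.exp(-wl**2 / (2 * s_s**2))
        T = np.sum(slit * 10**(-A_mono)) / np.sum(slit)
        return -np.log10((T + stray) / (1 + stray))

    conc = np.linspace(0.05, 2.0, 20)                    # true absorbances (concentration x path, coefficient = 1)
    A = np.array([measured_A(c) for c in conc])
    slope, intercept = np.polyfit(conc, A, 1)            # first-order (straight-line) fit
    resid = A - (slope * conc + intercept)
    r2 = 1 - np.sum(resid**2) / np.sum((A - A.mean())**2)
    print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, R^2 = {r2:.5f}")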

The present simulation, "Effect of Slit Width on Signal-to-Noise Ratio in Absorption Spectroscopy", shows how these contrary factors combine - decreasing observed absorbance and analytical curve linearity versus increasing light intensity and signal-to-noise ratio as the slit width is increased. For example, the simulation shows that, at an absorbance of 1.0, as the slit width is increased so that the spectral bandpass goes from 0.1 to 0.5 of the absorption peak width, the photon SNR increases by a factor of 5 and the detector SNR increases by a factor of 30. If the slit width is increased so that the spectral bandpass equals the absorption peak width, the photon SNR increases by a factor of 9 and the detector SNR increases by a factor of 140, but in that case, the analytical curve is sufficiently non-linear that the average concentration prediction error is as much as 3% of the maximum concentration up to an absorbance of 1.0 and 6% up to an absorbance of 2.0. But at least at low absorbances the linearity is good even in that case.  

Although the signal-to-noise ratio continues to increase with slit width, using very large slit widths is not advisable because of the increased analytical curve non-linearity and the increased possibility of spectral interferences from non-analyte absorption (See #6 in the Student Handout). In fact, the simulation shows that, if the reciprocal linear dispersion of the monochromator is varied at constant slit width (which would require changing the monochromator design by varying its focal length or the diffraction grating ruling density), the optimum photon SNR always occurs when the spectral bandpass equals the absorber width. (See #7 in the Student Handout). This has implications for the design of spectrometers. It explains why relatively small, low dispersion monochromators are commonly used in condensed-phase spectrophotometry, whereas high-dispersion echelle spectrometers are used in continuum-source atomic absorption spectroscopy, where the absorber widths of the atomic lines are orders of magnitude smaller.  

What about line-source atomic absorption?   Line-source atomic absorption is different from other forms of absorption spectroscopy because the primary light source is an atomic line source (e.g. a hollow cathode lamp or electrodeless discharge lamp) whose spectral width is not a function of the monochromator slit width, but rather is controlled by the temperature and pressure within the lamp and by the hyperfine splitting of that element.  The only control that the operator has over the source line width is the operating current of the lamp; increased current causes increased temperature and pressure, both of which lead to increased line width (increased source width), as well as higher lamp intensity.  So, to that extent, the effect of increasing the lamp current is a bit like the effect of increasing the slit width in a continuum-source absorption instrument: the intensity goes up (good), but the increased source width and the increased polychromatic effect cause calibration non-linearity (bad). The slit width on a line-source atomic absorption instrument is also adjustable by the user, but it has no effect on the source width, because the spectral bandpass is much greater (by 1000-fold or so) than the actual line width of the atomic line source. (Typical atomic line widths are 0.001 - 0.003 nm, compared to a typical spectral bandpass of 1 nm). Increasing the slit width does increase the light intensity linearly (because the entrance slit area increases directly), but operating at too large a slit width runs the risk of increasing the stray light by allowing in other lines emitted by the atomic line source that are not absorbed by the analyte in the atomizer, such as lines originating from the fill gas, impurities in the lamp, and non-resonance lines of the analyte. As always, stray light leads to non-linearity, especially at high concentrations, and eventually to a plateau in the calibration curve at high absorbances.

Correcting for calibration curve non-linearity. What can be done about calibration curve non-linearity in cases where the SNR is the limiting factor and a small slit width cannot be used?  The traditional approach is to decrease the fitting error by using a curvilinear regression to fit the calibration curve, rather than a first-order straight line. This possibility is demonstrated by the simulation "Calibration Curve Fitting Methods in Absorption Spectroscopy". However, there are downsides to that approach. First, the analytical curve must be characterized fully by running a series of several standards, which is time-consuming and possibly expensive. Second, the common curvilinear functions (quadratic, cubic) are not a perfect match to the shape of an absorption spectroscopy analytical curve, whose shape is determined by polychromaticity error and unabsorbed stray light. Third, the use of the curve-fit equation to predict the concentration of unknown samples from their absorbance readings requires that the equation be solved for concentration as a function of absorbance. This is trivial in the case of a first-order (straight line) fit, not difficult for a quadratic fit, but quite complicated for cubic and higher-order polynomials. (This particular problem can be overcome by taking liberties with statistical rigor and fitting absorbance as the independent variable vs concentration as the dependent variable, as done in "Calibration Curve Fitting Methods in Absorption Spectroscopy" and in "Error propagation in analytical calibration methods/Calibration curve method with non-linear fit"). Finally, the use of higher-order polynomials to improve the fitting error runs the danger of "fitting the noise" when a small number of standards is used, yielding an unstable fit that can be wildly unpredictable outside the range of the standards. These problems ultimately stem from the fact that the polychromaticity and unabsorbed stray light problems in absorption spectroscopy really operate in the spectral domain and are not fundamentally describable in terms of polynomial functions at the analytical curve level. Because of this, it is reasonable that their solution might be better achieved by curve fitting in the spectral domain rather than at the analytical curve level. This is possible with modern absorption spectrophotometers that use array detectors, whose many tiny detector elements slice the spectrum of the transmitted beam into many small wavelength segments, rather than detecting the sum of all those segments with one big phototube detector as older instruments do. This is the approach taken in the Transmission Fitting Method, a spectral-based curve fitting procedure that yields linear calibration curves up to an absorbance of 100, even in the presence of unabsorbed stray light and even when the spectral bandpass is comparable to or greater than the absorption peak width.
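The idea of fitting in the spectral domain can be illustrated with a simplified sketch (not the actual Transmission Fitting implementation): given the transmission spectrum recorded by an array detector, and a model of the absorber shape, instrument broadening, and stray light, find the peak absorbance whose modeled spectrum best matches the measurement. All profile widths, the stray-light level, and the noise level below are hypothetical choices for illustration only:

    import numpy as np
    from scipy.optimize import minimize_scalar

    wl = np.linspace(-200, 200, 401)                       # wavelength axis relative to band center, nm
    absorber = np.exp(-wl**2 / (2 * (30 / 2.3548)**2))     # Gaussian absorption profile, FWHM 30 nm
    instr = np.exp(-wl**2 / (2 * (60 / 2.3548)**2))        # instrument (slit) broadening, FWHM 60 nm
    instr /= instr.sum()                                   # normalize the broadening function
    stray = 0.005                                          # 0.5% unabsorbed stray light

    def model_T(peak_A):
        # Transmission spectrum seen by the array detector for a given peak absorbance.
        T_mono = 10**(-peak_A * absorber)
        return (np.convolve(T_mono, instr, mode="same") + stray) / (1 + stray)

    true_A = 5.0                                           # far beyond the linear range of A = -log10(total T)
    rng = np.random.default_rng(0)
    measured = model_T(true_A) + rng.normal(0, 1e-4, wl.size)   # simulated noisy measurement

    fit = minimize_scalar(lambda a: np.sum((model_T(a) - measured)**2),
                          bounds=(0, 20), method="bounded")
    print(f"true peak absorbance = {true_A}, fitted = {fit.x:.3f}")

In this idealized model the fit recovers the true peak absorbance even though the ordinary absorbance computed from the band-averaged transmission would be far lower, which is the essential advantage of fitting in the spectral domain.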

Student handout:

Effect of Slit Width on Signal-to-Noise Ratio in Absorption Spectroscopy

1. Open AbsorptionSlitWidth.ods in OpenOffice Calc or AbsorptionSlitWidth.xls in Excel. This spreadsheet simulates an absorption spectrophotometer and demonstrates the effect that the instrument settings have on the signal-to-noise ratio. (Note: Because this is a simulation, you have control over many variables that you ordinarily wouldn't be able to choose. In a real absorption spectrophotometric measurement, the absorber wavelength and width would be determined by the nature of the analyte, and the dispersion and the stray light would be determined by the design of the monochromator. The only variables that you would ordinarily have direct control over in a real experiment are the analyte concentration, path length, and slit width).

2.  Start with Absorber wavelength = 300 nm, Slit width = 0.1 mm, Dispersion = 5 nm/mm, Absorber width = 200 nm, Concentration = 0.01, Path length = 1 cm, and Unabsorbed stray light = 0%.  In this case the Spectral bandpass is 0.5 nm.  This is a nearly ideal case for adherence to Beer's Law, and in fact the measured absorbance is exactly 0.01 as expected. (In this simulation, the absorption coefficient is always exactly 1.0; therefore, the theoretical absorbance equals the product of the concentration and the path length).  

3. Note that the computed signal-to-noise ratio is 20 for photon noise (ordinarily the dominant noise in spectrophotometers with photomultiplier detectors).  The signal-to-noise ratio determines the precision of measurement of absorbance; it is in fact the reciprocal of the relative standard deviation of the measured absorbance. So a signal-to-noise ratio of 20 means the relative standard deviation would be 5%, which is not very good for precise quantitative measurement and not nearly as good as it is possible to do in absorption spectrophotometry. Now gradually increase the Slit width (use the slider) and watch what happens. The graph shows the slit function (whose width is the spectral bandpass) and the observed intensity increasing in width (and in total intensity, as you can see from the y-axis scale on the right).  This causes the signal-to-noise ratio to increase: the photon noise is proportional to the square root of the total intensity, so as the signal intensity increases, the photon noise increases more slowly than the intensity, with the result that the signal-to-noise ratio increases. This means that the absorbance can be measured more precisely.

4. Use the Slit width slider to increase the Slit width from 1.0 to 10 mm (maximum on the slider) and look at the "Measured incident intensity" (in cell B17); it increases by a factor of 100 as the Slit width increases by a factor of only 10.  Why? Because there are two factors working here.  First, the entrance slit area increases directly with the slit width (the slit height is fixed), so the intensity of white light getting into the monochromator increases. Second, the spectral bandpass (cell C18) is directly proportional to the exit slit width, which increases the spectral range of light detected; that is, more photons of different colors get through to the detector. (The entrance and exit slit widths are always equal in a standard monochromator). The result is that the Measured intensity incident on the sample increases with the square of the slit width, as you have just shown in the simulation. It's this increased intensity that causes the improvement in signal-to-noise ratio.

5. Is there any disadvantage to increasing the slit width?  Yes; if you keep your eye on the Measured absorbance (cell A17), you'll see it drop slightly from 0.01 to 0.0097 as the slit width increases to 10 mm.  This is caused by the fact that some of the photons now included in the wider spectral bandpass fall at the edges of the band, where the strength of absorption is weaker. (We're making the reasonable assumption here that the spectral bandpass is centered on the peak of the absorption band).  But this small drop in absorbance is not enough to counteract the dramatic increase in signal-to-noise ratio.

Now leave the slit width at 10 mm and increase the concentration from 0.01 to 1.  The absorbance reads 0.9681, rather than the expected 1.000.  But this relative decrease is about the same as the decrease at an absorbance of 0.01, meaning that the absorbance is still proportional to concentration to a very close approximation.  In other words, at a slit width of 10 mm the calibration curve will still be linear; its slope will be a little lower (about 3% lower), but the signal-to-noise ratio will be nearly 10 times better! That's a pretty good trade-off.

6. Is it possible for the slit width to be too high?  Yes, absolutely.  To demonstrate that, decrease the Absorber width to 30 nm, and use the sliders to set the Concentration to 0.10, the Slit width to 10 mm, and the Dispersion to 10 nm/mm.  Now the spectral bandpass is 100 nm, which is 3.33 times larger than the absorber width.  The Measured absorbance now reads 0.0278 when it should be 0.10.

Now use the slider to set Concentration ten times higher to 1.0 (maximum on the slider). The absorbance only goes up to 0.1824, which is less than a 10-fold increase. So this means that the absorbance is no longer proportional to concentration and the calibration curve is non-linear. This is a major inconvenience, because it means you would have to make up and measure a series of standard solutions, plot a calibration curve, and use some sort of non-linear curve fitting or interpolation to convert the measured absorbances of unknown solutions into concentration.

But there's another reason why a very large spectral bandpass is bad: the increased possibility of spectral interference caused by other chemical components in the sample that absorb at nearby wavelengths. Look at the graph in this simulation.  The yellow line represents the transmission spectrum of the absorbing sample. Suppose there's another component in the sample that absorbs at 230 nm, or at 380 nm, with the same absorber width.  (Try it by changing the wavelength to 230 or 380 nm and see how much absorbance remains). Ordinarily these should not interfere with the measurement because they are far enough away from the peak wavelength of the analyte, but with the instrument set to a spectral bandpass of 100 nm, as we have now with these settings, the absorption of those components could overlap with the slit function (green line), resulting in an error in the absorbance reading (called an interference). But if you reduce the spectral bandpass (by reducing the slit width or the dispersion), these components are less likely to interfere.  Because of the increased possibility of spectral interference, it's not a good idea to allow the spectral bandpass to be greater than the width of the absorber.

7. Is there an optimum slit width for absorption spectroscopy?  It's hard to pinpoint an exact optimum slit width, but there is an optimum spectral bandpass for the best signal-to-noise ratio. The spectral bandpass (cell C18) is the wavelength range passed by the monochromator. Use the sliders to set the Concentration to 0.1, the Slit width to 10 mm, and the Absorber width to 55 nm. Vary the Dispersion while keeping your eye on the Signal-to-photon-noise ratio (cell D17).  Is there an optimum signal-to-noise ratio? Yes, the highest SNR should be 4721, at a Dispersion of 5.5 nm/mm and a spectral bandpass of 55 nm (cell C18). Note that the spectral bandpass (cell C18) is exactly equal to the absorber width (cell C8). Is that a coincidence? Try slightly higher and lower Absorber widths. What do you observe? The optimum photon SNR always occurs when the spectral bandpass equals the absorber width, when the slit width is held constant. This seems to be a general result.
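This optimum can also be reproduced with a simple noise-propagation sketch in Python. This is a hypothetical photon-noise-limited, double-beam model with Gaussian profiles; the quoted SNR of 4721 comes from the spreadsheet, whereas the SNR values printed here are on an arbitrary relative scale:

    import numpy as np

    slit_width = 10.0        # mm (held constant)
    absorber_fwhm = 55.0     # nm
    peak_A = 0.1             # monochromatic peak absorbance (concentration 0.1 x path length 1 cm)

    def band_averaged_A(sb_fwhm, npts=4001):
        # Absorbance measured through a Gaussian slit of FWHM sb_fwhm for a
        # Gaussian band of FWHM absorber_fwhm with peak absorbance peak_A.
        wl = np.linspace(-500, 500, npts)
        s_a = absorber_fwhm / 2.3548
        s_s = sb_fwhm / 2.3548
        slit = np.exp(-wl**2 / (2 * s_s**2))
        T = np.sum(slit * 10**(-peak_A * np.exp(-wl**2 / (2 * s_a**2)))) / np.sum(slit)
        return -np.log10(T)

    for rld in (1, 2, 4, 5.5, 8, 12, 20):                  # reciprocal linear dispersion, nm/mm
        sb = slit_width * rld                              # spectral bandpass, nm
        A = band_averaged_A(sb)
        I0 = sb                                            # incident intensity grows with bandpass at fixed slit width
        I = I0 * 10**(-A)                                  # transmitted intensity
        sigma_A = np.sqrt(1 / I + 1 / I0) / np.log(10)     # shot noise propagated into absorbance
        print(f"RLD = {rld:5.1f} nm/mm, SB = {sb:6.1f} nm, relative photon SNR = {A / sigma_A:7.2f}")

In this simplified model the relative SNR should peak near a dispersion of 5.5 nm/mm, that is, where the spectral bandpass equals the 55 nm absorber width, in line with the result above.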

But how can this be? The law is the law, right? The Beer-Lambert law is derived under the assumption that monochromatic light be used, or at least that the incident light have a spectral width that is much less than the spectral width of the absorber. But this is not the condition under which the signal-to-noise ratio is optimum. The answer to this quandary is that the Beer-Lambert law really has nothing to do with signal-to-noise ratio; it simply seeks to define the conditions under which the relationship between light absorption, path length, and concentration is mathematically simple. It dates from more than 150 years ago, many years before the age of electronic amplifiers, photomultiplier tubes, digital readouts, and computers. At the time of the formulation of that law, there was little appreciation for the concept of signal-to-noise ratio, and there was no good way to handle a non-linear relationship between concentration and instrument response. The Beer-Lambert law still holds today, of course, but it is no longer the only thing that dictates the optimum instrument settings in quantitative absorption spectroscopy.

8. Is there an optimum absorbance for best signal-to-noise ratio?  Yes, and it's easy to demonstrate. Set the Slit width and the Dispersion both to about 5, the Absorber width to 100 nm, and the Path length to 3. Now use the slider to vary the concentration through its range, keeping your eye on the Signal-to-photon-noise ratio (D17) and the Signal-to-detector-noise ratio (E17). As the absorbance approaches zero, the difference between the incident and transmitted intensities (Izero and I) approaches zero, and so even a small amount of noise in those intensities will cause the signal-to-noise ratio of the computed absorbance to dive towards zero. At very high absorbances, the transmitted intensity (I) is very low, so its signal-to-noise ratio is very poor, which seriously degrades the signal-to-noise ratio of the computed absorbance. But in between, at intermediate absorbances, neither of these two effects dominates. Is there an absorbance where the signal-to-noise ratio is maximum?  For photon noise, the signal-to-noise ratio is maximum at about an absorbance of 1. For detector noise, the signal-to-noise ratio is maximum at about an absorbance of 0.5.  Challenge: Determine the absorbance range over which the signal-to-noise ratio is no worse than one-half its maximum value.
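These optima follow from simple error propagation and can be sketched in a few lines. This is an idealized model assuming a fixed incident intensity, shot noise equal to the square root of each intensity for the photon case, and a constant noise in each intensity for the detector case; the detector noise level of 0.01 is an arbitrary choice that sets the scale of the SNR but not the position of its maximum:

    import numpy as np

    A = np.linspace(0.02, 3.0, 300)           # true absorbance
    I0 = 1.0                                  # relative incident (reference beam) intensity
    I = I0 * 10**(-A)                         # transmitted (sample beam) intensity

    # Noise in the computed absorbance, propagated from noise in both I and Izero
    sigma_A_photon   = np.sqrt(1 / I + 1 / I0) / np.log(10)               # shot noise: sigma = sqrt(intensity)
    sigma_A_detector = 0.01 * np.sqrt(1 / I**2 + 1 / I0**2) / np.log(10)  # constant detector noise

    snr_photon = A / sigma_A_photon
    snr_detector = A / sigma_A_detector
    print(f"photon-noise SNR peaks near A = {A[np.argmax(snr_photon)]:.2f}")
    print(f"detector-noise SNR peaks near A = {A[np.argmax(snr_detector)]:.2f}")

With this model the photon-noise SNR should peak near an absorbance of 1 and the detector-noise SNR near 0.5, in line with the values above; the same curves can be used to answer the challenge question by finding where each SNR falls to half its maximum.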

Conclusion: Although the theoretical requirement for adherence to the Beer-Lambert Law is that the incident light be monochromatic, which implies that the smallest possible slit width and spectral bandpass be used, this simulation shows that, in the presence of stray light and random photon or detector noise, a larger slit width and spectral bandpass will give better signal-to-noise ratio.

© T. C. O'Haver, 2008, 2015. Last updated September, 2014. This page is part of Interactive Computer Models for Analytical Chemistry Instruction, created and maintained by Prof. O'Haver at toh@umd.edu.