
2.6: What data processing considerations are important for obtaining accurate and precise results?


    Data processing describes operations that are performed to improve spectral quality after the data has been acquired and saved to disk.

    Zero-Filling

    Zero-filling is the addition of zeros to the end of the FID to increase the digital resolution of the spectrum. Because the appended points are zeros rather than additional real data points that carry their own overlay of noise, zero-filling can improve digital resolution without decreasing S/N. Another option is to use linear prediction, which extends the FID with data points calculated from the beginning of the FID, where S/N is at its highest.
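    The effect of zero-filling can be sketched numerically. In this illustration (all parameter values are assumptions, not taken from the text), a synthetic FID is extended from N to 2N points before Fourier transformation, halving the frequency spacing between data points without adding any noise:

    ```python
    import numpy as np

    # Illustrative parameters (assumed): 1000 Hz spectral width, 4096 complex points
    sw = 1000.0                       # spectral width in Hz
    n = 4096                          # acquired complex data points
    t = np.arange(n) / sw             # sampling times in s

    # Synthetic FID: a single resonance at 100 Hz decaying with T2* = 0.5 s
    fid = np.exp(2j * np.pi * 100.0 * t) * np.exp(-t / 0.5)

    # Zero-fill to twice the acquired length
    fid_zf = np.concatenate([fid, np.zeros(n, dtype=complex)])

    spec = np.fft.fftshift(np.fft.fft(fid))
    spec_zf = np.fft.fftshift(np.fft.fft(fid_zf))

    print(sw / n)        # digital resolution before zero-filling (Hz per point)
    print(sw / (2 * n))  # after zero-filling: half the point spacing
    ```

    The transformed spectrum now has twice as many points across the same spectral width, so each peak is defined by more points, with no change in the noise content of the data.
    
    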

    Apodization

    Apodization is the multiplication of the FID by a mathematical function, and it can serve several purposes. Spectral resolution can be improved by emphasizing the data points at the end of the FID. Conversely, S/N can be improved by multiplying the FID by a function that emphasizes the beginning of the FID relative to the later data points, where S/N is poorer. For quantitative NMR experiments, the most common apodization function is an exponential decay that matches the decay of the FID (a matched filter) and forces the data to zero intensity at the end of the FID. This function is often referred to as line broadening, since it broadens the signals based on the time constant of the exponential decay. This trade-off between S/N and spectral resolution is not restricted to NMR and is common to many instrumental methods of analysis.
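    A minimal sketch of exponential apodization, using assumed parameter values: a Lorentzian line of full width Δν corresponds to an FID decaying as exp(−πΔνt), so setting the line-broadening constant lb equal to the natural linewidth gives the matched filter described above:

    ```python
    import numpy as np

    # Assumed acquisition parameters
    sw = 1000.0
    n = 4096
    t = np.arange(n) / sw

    # Synthetic FID: resonance at 100 Hz with a 1 Hz natural linewidth, plus noise
    rng = np.random.default_rng(0)
    fid = np.exp(2j * np.pi * 100.0 * t) * np.exp(-np.pi * 1.0 * t)
    fid += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    lb = 1.0                           # line broadening in Hz (matched to 1 Hz linewidth)
    window = np.exp(-np.pi * lb * t)   # exponential apodization function

    # The window is 1 at t = 0 and decays toward 0, de-emphasizing the noisy
    # tail of the FID at the cost of doubling the linewidth (1 Hz -> 2 Hz).
    fid_apod = fid * window
    ```

    The improvement in S/N comes entirely from suppressing the late, signal-poor points; the cost is the added linewidth, which is the trade-off noted above.
    
    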

    Integration Regions

    Because NMR signals are Lorentzians, the resonances have long tails that can carry significant amounts of resonance intensity. This is especially problematic when the sample is complex, containing many closely spaced or overlapping signals, or when the homogeneity of the magnetic field around the sample has not been properly corrected by shimming. For a Lorentzian peak with a width at half-height of 0.5 Hz, integration regions set at 3.2 Hz or 16 Hz on either side of the resonance would include approximately 95% or 99% of the peak area, respectively. Note that this analysis does not include the 13C satellites, which account for an additional 1.1% of the intensity of carbon-bound protons in samples containing 13C at natural abundance. In cases where resonances are highly overlapped, more accurate quantitative analysis can often be achieved by peak fitting rather than by integration.
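    The 95% and 99% figures follow from the cumulative area of a Lorentzian: the fraction of total area within ±a Hz of the line center is (2/π)·arctan(a/hwhm), where hwhm is the half-width at half-height. A quick numerical check for the 0.5 Hz line quoted above:

    ```python
    import math

    def lorentzian_fraction(a_hz, fwhm_hz=0.5):
        """Fraction of a Lorentzian's total area within +/- a_hz of its center."""
        hwhm = fwhm_hz / 2.0
        return (2.0 / math.pi) * math.atan(a_hz / hwhm)

    print(round(lorentzian_fraction(3.2), 3))   # integration limits at +/- 3.2 Hz
    print(round(lorentzian_fraction(16.0), 3))  # integration limits at +/- 16 Hz
    ```

    Evaluating the function at 3.2 Hz and 16 Hz reproduces the approximately 95% and 99% coverages stated in the text, and makes clear how slowly the Lorentzian tails converge: capturing the last few percent of area requires a region many linewidths wide.
    
    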

    An alternative approach utilizes 13C decoupling during the acquisition of the proton spectrum to collapse the 13C satellites so that their intensity is coincident with the primary 1H-12C resonance [2, 3]. This relatively simple approach requires only that the user have access to a probe (for example, a broadband inverse or triple-resonance probe) that permits 13C decoupling.

    Baseline Correction

    NMR integrals are calculated by summing the intensities of the data points within the defined integration region. Therefore, a flat spectral baseline with near-zero intensity is required. This can be achieved in several ways; the most common is to select regions across the spectrum where no signals appear, define these as baseline, and fit them to a polynomial function that is then subtracted from the spectrum.
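    The polynomial-fitting approach can be sketched as follows. The chemical-shift axis, region boundaries, noise level, and polynomial order here are all illustrative assumptions; in practice they are chosen interactively or by the processing software:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 2000)                    # chemical-shift axis (ppm)

    # Synthetic spectrum: a slowly varying baseline distortion plus one
    # Lorentzian signal at 5 ppm and a small amount of noise
    baseline = 0.02 * x**2 - 0.1 * x + 0.5
    peak = 1.0 / (1.0 + ((x - 5.0) / 0.02) ** 2)
    spectrum = baseline + peak + 0.01 * rng.standard_normal(x.size)

    # Define the signal-free points as baseline (here: everything more than
    # 0.5 ppm from the peak), fit them with a low-order polynomial, and
    # subtract the fitted polynomial from the whole spectrum
    signal_free = np.abs(x - 5.0) > 0.5
    coeffs = np.polyfit(x[signal_free], spectrum[signal_free], deg=2)
    corrected = spectrum - np.polyval(coeffs, x)
    ```

    After subtraction, the signal-free regions sit near zero intensity while the peak is preserved, so summing data points within an integration region measures the resonance area rather than the baseline offset.
    
    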


    This page titled 2.6: What data processing considerations are important for obtaining accurate and precise results? is shared under a CC BY-NC-SA 2.5 license and was authored, remixed, and/or curated by Cynthia K. Larive & Albert K. Korir via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.