Algorithms for a Hand-held Miniature X-ray Fluorescence Analytical Instrument

The purpose of this joint program was to provide technical assistance with the development of a Miniature X-ray Fluorescence (XRF) Analytical Instrument. This new XRF instrument is designed to overcome the weaknesses of spectrometers commercially available at the present time. Currently available XRF spectrometers (for a complete list see reference 1) convert spectral information to sample composition using the influence coefficients technique or the fundamental parameters method. They require either a standard sample with composition relatively close to the unknown or a detailed knowledge of the sample matrix. They also require a highly trained operator, and the results often depend on the capabilities of the operator. In addition, almost all existing field-portable, hand-held instruments use radioactive sources for excitation. Regulatory limits on such sources mean that they can provide only relatively weak excitation. This limits all current hand-held XRF instruments to poor detection limits and/or long data collection times, in addition to the licensing requirements and disposal problems for radioactive sources. The new XRF instrument was developed jointly by Quantrad Sensor, Inc., the Naval Research Laboratory (NRL), and the Department of Energy (DOE). This report describes the analysis algorithms developed by NRL for the new instrument and the software which embodies them.


INTRODUCTION
The purpose of this joint program was to provide technical assistance with the development of a Miniature X-ray Fluorescence (XRF) Analytical Instrument. This new XRF instrument is designed to overcome the weaknesses of spectrometers commercially available at the present time.

Currently available XRF spectrometers (for a complete list see reference 1) convert spectral information to sample composition using the influence coefficients technique or the fundamental parameters method. They require either a standard sample with composition relatively close to the unknown or a detailed knowledge of the sample matrix. They also require a highly trained operator, and the results often depend on the capabilities of the operator. In addition, almost all existing field-portable, hand-held instruments use radioactive sources for excitation. Regulatory limits on such sources mean that they can provide only relatively weak excitation. This limits all current hand-held XRF instruments to poor detection limits and/or long data collection times, in addition to the licensing requirements and disposal problems for radioactive sources.

The new XRF instrument was developed jointly by Quantrad Sensor, Inc., the Naval Research Laboratory (NRL), and the Department of Energy (DOE). This report describes the analysis algorithms developed by NRL for the new instrument and the software which embodies them.
The tasks performed during this program in FY97 are listed at the end of this report; the approach to each is described below.

APPROACH:

1. The principal limitation to the fundamental parameters method of XRF analysis has been the accuracy of the atomic parameters used to calculate the x-ray emission. NRL developed one of the first fundamental parameters computer programs in the late 1960s and early 1970s2. The method has remained virtually unchanged since then3. During the intervening years improved values of these parameters have become available.

1a. A complete survey of the literature was undertaken to find the latest available values of the atomic parameters used in the fundamental parameters method. The search included both the traditional, published, archival literature and online resources available over the Internet.

1b. A portion of these newer values were incorporated into the fundamental parameters program from NRL (called NRLXRF)2. A few tests of calculations with the newer values were performed and compared to the results using the older values.

1c. Calculations of the predicted XRF intensities for several alloys, similar to those found in the expected uses of the new instrument, were performed and compared to the measured XRF intensities for the same alloys.

2b. A weighting scheme was developed to select the most appropriate standard from the library. The normalization and weighting of the intensity differences were optimized based on the results produced by the overall analysis.

2c. Fundamental parameters calculations were performed using the algorithms from the NRLXRF computer program2. These calculations were tested for accuracy in task 1c, and the exact codes used in the tests were also used to perform the calculations for the database of coefficients used in the instrument.

2d. The regression model chosen to convert the x-ray intensities to sample composition was the method outlined by DeJongh4. This method takes advantage of both spectral measurements on physical standards and calculation via fundamental parameters in a natural and versatile way. It yields the best possible results when unknowns are analyzed which have compositions near the available standards, but gives reasonable values for almost any unknown composition. The fundamental parameters calculations are used in a differential manner, which dramatically reduces the errors inherent in their use.

ACCOMPLISHMENTS:
The results of the literature survey for updated values of the atomic parameters are given in References 5-20. These include new values for the elemental x-ray absorption coefficients7, fluorescence yields11, cross sections15, Coster-Kronig transition rates12, and a variety of scattering intensities (Rayleigh/elastic, anomalous, and relativistic)16,17,18,19,20, as well as the extension of reliable measurements of x-ray absorption cross sections to lower photon energies8-9. An improved calculation of the spectra for x-ray tubes has also been made (by the same group at NRL which did the original calculation for NRLXRF)13. There is, however, somewhat less new information than might be expected at first glance. In the range of x-ray emissions by most transition metals (1 to 30 keV), the values of the x-ray absorption coefficients are simply the values from the McMaster tables21 of 1969. The extension to lower energies (10-1000 eV) can be expected to improve the fundamental parameters calculation, but only secondarily. Several other important parameter values are changed relatively little in the new references.

The ultimate test of the new values is their effect on the results of the fundamental parameters calculations of interest for use in the new instrument. The new absorption coefficient values were incorporated in the NRLXRF codes, since they were deemed to be the greatest improvement over the old values and should make a definite improvement. It was rapidly discovered that there were discrepancies between the old and new values for some parameters. Updating one set of parameters without providing accurate values for the remainder actually made the results worse. The most glaring discrepancy was tiny differences in the absorption edge energies between the old and new absorption tables. Since the jump in absorption just below and just above the absorption edge in a constituent element is a crucial factor in the x-ray fluorescence yield, this proved disastrous. The program calculates the absorption one eV below and one eV above the edge according to a table of edge energies. The new absorption coefficient tables had their edge jump values at energies different by 1 to 5 eV from the values in the old tables. An attempt was made to extract the actual edge jump energies from the new tables, but no reliable method was found during this project. It thus became impossible to get usable results with the new absorption tables.
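To make the edge-energy mismatch concrete, here is a minimal Python sketch, using hypothetical numbers rather than actual NRLXRF tables, of how sampling the absorption one eV on either side of an assumed edge energy collapses the jump ratio when the table's discontinuity sits a few eV away:

```python
# Illustrative sketch (hypothetical numbers, not NRLXRF data): why a 1-5 eV
# mismatch between the edge-energy table and the absorption table is fatal.

def mu_lookup(table, energy_ev):
    """Piecewise-constant lookup of the mass absorption coefficient.
    `table` is a list of (energy_eV, mu) pairs sorted by energy; the
    absorption edge appears as a discontinuity between adjacent entries."""
    mu = table[0][1]
    for e, m in table:
        if e <= energy_ev:
            mu = m
        else:
            break
    return mu

def edge_jump(table, edge_ev):
    """Jump ratio sampled 1 eV below and 1 eV above the assumed edge
    energy, as the fundamental parameters code does."""
    return mu_lookup(table, edge_ev + 1.0) / mu_lookup(table, edge_ev - 1.0)

# Hypothetical absorption table with its discontinuity at 7112 eV (roughly
# the Fe K edge): mu is small below the edge and jumps above it.
fe_table = [(7000.0, 45.0), (7111.0, 40.0), (7112.0, 320.0), (7500.0, 270.0)]

print(edge_jump(fe_table, 7112.0))  # edge energies agree: large jump ratio
print(edge_jump(fe_table, 7117.0))  # table shifted 5 eV: both samples land
                                    # above the edge, jump ratio collapses
```

With matching edge energies the sketch recovers the full jump ratio; with a 5 eV offset both samples fall on the same side of the discontinuity and the computed ratio is 1, wiping out the fluorescence-yield factor.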
The improvement in the calculation of spectra from x-ray tubes was more fruitful. The values from the new calculation code described in reference 13 were used to calculate the expected yield from the x-ray tube used in the new instrument, with excellent agreement in overall spectral shape. The calculations were also compared to very precise measurements of actual tube output in the literature22, and agreement was found to within a few percent in all cases. Again, however, incorporating the results into the fundamental parameters calculation done by NRLXRF made little noticeable difference.

Table II in the Appendix compares the x-ray intensities calculated via fundamental parameters to the measured x-ray intensities for 72 alloys. The fundamental parameters calculations use only the alloy composition and the experimental measurement conditions (the x-ray tube target and voltage plus the incident and detected beam angles). The measured values were obtained with the new XRF Analytical Instrument. Both intensities are presented as relative x-ray intensities (RXI), which is the ratio of the intensity of the x-ray emission line from a given constituent element of the alloy to the intensity from a pure sample of the same element. This ratio reduces the effect of systematic errors in some of the fundamental parameters and in the measured intensities.
The good news is that the fundamental parameters calculations performed by NRLXRF were more accurate than expected on the basis of the atomic parameters which were available at the time. The fundamental parameters calculations compare very well to the measured values, usually agreeing to within 10%. This is true even for the difficult case of chromium and iron in stainless steels. The calculations are almost universally about 10% higher than the measured values. Thus, if the calculated values are used in a differential mode, where they are corrected to measured values from a physical standard, their usability should be excellent. Work on improving the fundamental parameters calculations, including both improvements to the algorithms as well as updated values of the atomic parameters, will continue at NRL.

A fully automatic spectrum analysis method must accomplish the tasks listed above: convert the spectrum from the hardware into elemental intensities, select a standard from the built-in library, calculate the coefficients necessary to relate the standard intensities and composition to the unknown intensities and composition, and use this information to convert the spectral intensities from the unknown to its elemental composition.
Peak assignment methods in wide use fall into three types: region of interest, peak search, and peak stripping. The region-of-interest method assumes that a fixed region of the spectrum can be integrated, perhaps with subtraction of some background, to yield the integrated intensity for a particular element. This method does not handle overlapped peaks, nor does it account for multiple peaks per element. The peak search method uses a smoothed second difference to locate peaks in the spectrum and select the region of interest for peak integration. Most methods also incorporate a nonlinear least squares fitting procedure to fit overlapping peaks to an assumed shape. This procedure is the most sophisticated and requires the most computing time, but it still suffers from variations in the selection of the region of interest, which skews the fitting results, and does not incorporate multiple peaks per element. Peak stripping involves finding the largest peak in the spectrum, then subtracting this peak from the spectrum. The process then proceeds with the second largest peak (the largest remaining peak) and so forth until all peaks are removed. This is less computationally intensive than peak searching with nonlinear fitting, but suffers badly if the assumed shape of the peaks is even slightly inaccurate.

For this project, NRL developed a new algorithm based on linear least squares fitting of the spectrum with the measured spectra of each constituent element. Using the measured spectra of the individual elements both ensures that the peak shapes are correct and fits all of the peaks from a given element simultaneously. It also works for overlapped peaks and treats measured backgrounds on the same basis as the peaks. It is very fast (the individual element spectra can be processed in advance) and, since it uses only linear least squares techniques, is stable, reproducible, and reliable. The algorithm is described in detail in the Appendix.
An additional feature of the algorithm is the ability to reduce the spectrum into "baskets", each of which is the sum of an arbitrary number and range of spectral channels. Using these baskets dramatically reduces the computational time and power required and makes the algorithm less sensitive to peak shape.

The algorithm has been tested with generated data including noise and easily reproduces the original spectral composition to better than one percent, even with distorted peak widths and considerable noise. The algorithm is sensitive to the energy calibration, so input from the most recent calibration is used to sum the unknown spectrum into the baskets. The output of the algorithm is fractional intensities of each element, taking into account both alpha and beta lines and normalized to pure element intensities. These results are already calibrated and give a good first approximation to the unknown sample composition, uncorrected for matrix effects (absorption or enhancement). The computation time required to decompose an unknown spectrum via Gauss-Jordan elimination, using about 30 baskets, is only a few hundred milliseconds on an 80486 processor. The results of tests on this algorithm are shown in Figure 1 below.
Once the spectrum has been reduced to elemental x-ray intensities, an appropriate standard must be chosen from the library of available standards. Since the actual composition of the unknown is not yet available, the selection must be made on the basis of a match in raw intensities. The comparison is made by calculating a weighting factor for each standard according to

W_j = (1/N_j) * Sum_i [ f * (YS_ij - YU_i) / YM_j ]^2

where
W_j is the inverse weight for standard j,
j is the index of available standards,
i is the index of constituent elements,
f is a factor which multiplies the differences,
YS_ij is the intensity of element i in standard j,
YU_i is the intensity of element i in the unknown,
YM_j is the maximum intensity for standard j, and
N_j is the number of elements in standard j.

The standard with the smallest value of W_j is used. The factor f is chosen to magnify small fractional differences to near unity. Differences smaller than 1/f will be reduced by the squaring operation, while differences larger than 1/f will be amplified. A judicious choice of f together with normalization by the largest elemental intensity in the standard are crucial to the proper operation of this algorithm. A value of 10 for f yields the best performance. The evaluation of this weighting scheme was based on the results produced by the overall process. That is, the scheme and values described above yielded the best agreement between measured and actual values for a list of alloys (see Table III in the Appendix).
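The weighting can be sketched in a few lines of Python. The formula follows the reconstruction above, and the library intensities in the example are invented for illustration:

```python
# Sketch of the standard-selection weighting. Variable names follow the
# text; the library values below are hypothetical.

def inverse_weight(ys_j, yu, ym_j, f=10.0):
    """W_j for one standard: mean of squared, magnified, normalized
    intensity differences. ys_j: intensities of each element in standard j;
    yu: the same elements' intensities in the unknown; ym_j: the largest
    elemental intensity in standard j; f magnifies small differences."""
    n = len(ys_j)
    return sum((f * (ys - yu_i) / ym_j) ** 2
               for ys, yu_i in zip(ys_j, yu)) / n

def select_standard(standards, yu, f=10.0):
    """Return the index of the standard with the smallest W_j."""
    weights = [inverse_weight(ys, yu, max(ys), f) for ys in standards]
    return min(range(len(weights)), key=weights.__getitem__)

# Hypothetical library of two standards and an unknown (relative intensities):
library = [[0.70, 0.18, 0.08], [0.55, 0.30, 0.10]]
unknown = [0.68, 0.20, 0.09]
print(select_standard(library, unknown))  # prints 0: the closer standard
```

Note how each standard is normalized by its own maximum intensity YM_j before squaring, so standards with large absolute intensities are not unfairly penalized.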
[Figure 1: Test of Basket Spectrum Decomposition, simulated spectrum with noise and beta peaks.]

The fundamental parameters calculations used to augment the physical standards were based on the computer program NRLXRF, which is described in reference 2.
The program is in the public domain, but the version used in this project has had improvements incorporated, including changes to run on personal computers and the incorporation of updated atomic parameters as described above. The details of the fundamental parameters method are beyond the scope of this report, but are covered in reference 3 and in reference 23. The information relevant to this project has been covered in the discussion of atomic parameters above.

Conversion of the elemental intensities into the composition of the unknown followed the method outlined by DeJongh4. This method is described in detail in reference 24. The conversion of intensities into composition must take into account the effects of the matrix, such as absorption of the characteristic x-rays for each element and enhancement effects. The latter occur when the x-rays emitted by one element preferentially excite another element. Both of these effects depend on the actual composition, making the problem a set of interacting equations which must be solved via linear algebra. In general the problem is nonlinear, but all current methods assume a linear approximation is adequate. The DeJongh method is one of several "standard compensation" methods which use the differences in intensity between an unknown and a selected standard to determine the composition of the unknown relative to the composition of the standard. These methods do not attempt to determine a general calibration curve over all compositions. DeJongh's method takes advantage of the fundamental parameters method to calculate the coefficients which relate the differences in intensity to differences in composition. The method has two very strong advantages. It does not require a large number of standards to evaluate the coefficients or to generate a calibration curve over a wide range of compositions. Moreover, it uses the fundamental parameters calculations only in a differential mode, greatly reducing its sensitivity to errors in the calculations. This method provides the best of both the accuracy of calibration standards and the wide composition range of fundamental parameters calculations. The results will be best where the unknown is close in composition to one of the available standards, but will provide reasonable results over a very wide range of possible compositions. The actual algorithm and a few refinements included in the new instrument are given in the Appendix.

The results of applying the algorithms developed here to 72 alloys are given in Table III in the Appendix. The name and grade for each alloy are listed along with the measured intensities, the results of the calculated composition based on the measured intensities, and the known composition from the manufacturer for seven major elements. The agreement between the measured and given compositions is almost always within a few percent. The most difficult correction is the effect of iron on chromium (and vice versa) in stainless steels. The x-ray intensity from chromium is as dependent on the amount of iron as it is on the amount of chromium.

The 72 alloys in the table are the same alloys which make up the library of physical standards for the algorithm. Each alloy was excluded from the library when it was being analyzed, forcing the closest remaining alloy (as defined by the weighting scheme described above) to be used as the reference standard.
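The standard-compensation idea can be illustrated with a deliberately simplified sketch. This is not DeJongh's published formulation; it only shows how differential coefficients (given directly here, whereas in the instrument they come from fundamental parameters calculations) convert intensity differences from the chosen standard into composition differences:

```python
# Simplified standard-compensation sketch (not DeJongh's exact algorithm).
# a[i][k] ~ dY_i/dX_k are differential coefficients relating intensity
# changes to composition changes; the 2x2 case is solved by Cramer's rule.

def composition_from_standard(x_std, y_std, y_unk, a):
    """Solve y_unk - y_std = a * (x_unk - x_std) for x_unk (2-element case).
    x_std/y_std: composition and intensities of the chosen standard;
    y_unk: measured intensities of the unknown."""
    dy0 = y_unk[0] - y_std[0]
    dy1 = y_unk[1] - y_std[1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    dx0 = (dy0 * a[1][1] - dy1 * a[0][1]) / det
    dx1 = (a[0][0] * dy1 - a[1][0] * dy0) / det
    return [x_std[0] + dx0, x_std[1] + dx1]

# Hypothetical two-element alloy: standard composition and intensities,
# the unknown's measured intensities, and illustrative coefficients.
x_std = [0.70, 0.30]
y_std = [0.65, 0.33]
a = [[1.0, -0.2],    # dY0/dX0, dY0/dX1 (invented values)
     [-0.1, 1.1]]    # dY1/dX0, dY1/dX1
y_unk = [0.60, 0.38]

print(composition_from_standard(x_std, y_std, y_unk, a))
```

Because only the differences from the standard enter the calculation, a uniform bias in the coefficients (such as the roughly 10% overprediction noted above) largely cancels, which is the point of using the fundamental parameters calculations in a differential mode.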
This algorithm has been incorporated in a hand-held device for alloy analysis. It weighs less than 20 pounds and can be easily operated by a single individual. The unit is about 8 by 8 inches square and 20 inches long. The results presented here were obtained with data taken by this device. A picture of the device is shown in Figure 2 below.

Figure 2. A photo of the hand-held Miniature X-ray Fluorescence Analytical Instrument.

CONCLUSION
A set of algorithms, including a novel linear-least-squares method of spectral decomposition, has been developed for a Miniature X-ray Fluorescence (XRF) Analytical Instrument.
The algorithms have been tested to determine the performance, both as individual parts and collectively. The spectral decomposition algorithm uses minimal computing power and is insensitive to peak shape distortions and to noise. The conversion of x-ray spectral intensities to elemental composition follows the method of DeJongh, which uses differential coefficients to relate the differences in x-ray intensity between the unknown and a physical reference standard to differences in composition. The accuracy of the fundamental parameters method used to calculate the differential coefficients is within 10% for the 72 alloys used as the library of reference standards. A careful choice of the weighting scheme used to select the standard to be used in the DeJongh method was crucial. A weighting scheme based on summing the squares of normalized differences in x-ray intensities was developed and optimized based on performance. The complete algorithm was tested by treating each of the 72 physical standards as unknowns (and removing the respective standard from the library during its own analysis). The results confirm that the algorithms yield results within a few percent in almost all cases and are computationally very efficient. The algorithms have been incorporated into the hand-held alloy analyzer shown in Figure 2.

APPENDIX

The spectrum measured by the instrument is modeled as a superposition of the pure element spectra plus a background with known profile and some amount of noise and other distortion. The elemental intensities in this spectrum can be obtained from the coefficients of superposition of the pure element spectra. Since the instrument spectrum contains noise and distortion, we must apply a least squares technique to the data to extract the elemental intensities. The background is treated the same as the component spectra. If the background can be approximated as a polynomial, then each term of the polynomial can be calculated as a separate component and the algorithm will adjust the coefficients as necessary.
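Representing the background as a polynomial means each power of the channel index simply becomes one more component profile whose coefficient the fit adjusts, exactly as for the element profiles. A minimal sketch:

```python
# Sketch: a polynomial background as extra fit components. Each power of
# the channel number j becomes one more "profile" for the least squares fit.

def polynomial_background_components(n_channels, degree):
    """Return degree + 1 component spectra: 1, j, j^2, ... per channel j."""
    return [[float(j) ** p for j in range(n_channels)]
            for p in range(degree + 1)]

components = polynomial_background_components(5, 2)
print(components[0])  # constant term:  [1.0, 1.0, 1.0, 1.0, 1.0]
print(components[2])  # quadratic term: [0.0, 1.0, 4.0, 9.0, 16.0]
```

These lists are appended to the list of pure element profiles before building the least squares matrices, so the background coefficients come out of the same solve.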
This algorithm assumes that the pure element spectrum profiles are sufficiently linearly independent. That is, for any pair of pure element spectra (or background profiles) M(j) and N(j) consisting of n sampled data points,

Sum( |M(j) - a * N(j)|, j=1,2,...,n ) >> 0 for all values of a.

Let L(i,j) be the spectrum of the i'th pure element (or background), with j the index for the j'th sample as a function of energy. There are m different pure element or background profiles. These spectra are assumed known a priori and are sampled with the same energy grid as the unknown spectrum. Let S(j) be the unknown spectrum. The unknown spectrum is approximately equal to the weighted sum of the L's with coefficients c(i), plus noise or other distortion. We wish to minimize the sum of the squares of the differences,

d(j) = S(j) - Sum( c(i) * L(i,j), i=1,...,m ),

with respect to the coefficients, c(i). To cast the problem as a least squares minimization, we want to minimize

G = Sum( d(j)^2, j=1,...,n )

with respect to each c(i). To do this, we set the partial derivatives of G with respect to the c(i) equal to zero and solve for the c(i). This gives

dG/dc(i) = -2 * Sum( d(j) * L(i,j), j=1,...,n ) = 0.

Writing the partial derivative in terms of the unknown and pure element spectra gives

Sum( S(j) * L(i,j), j=1,...,n ) = Sum( c(k) * Sum( L(i,j) * L(k,j), j=1,...,n ), k=1,...,m ).

This can be reformulated as a matrix equation as follows. Let

T(i,k) = Sum( L(i,j) * L(k,j), j=1,...,n )
U(i) = Sum( S(j) * L(i,j), j=1,...,n )

so that the equations become T c = U. Since the matrix T contains values that can be calculated purely from the already known spectral profiles, it need only be calculated and inverted once, or it can be used repeatedly in a Gauss-Jordan elimination on an augmented matrix.
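The derivation maps directly to code. The following Python sketch builds T and U from the profiles and solves the augmented system [T | U] by Gauss-Jordan elimination, as the text describes; the triangular profiles are invented test data:

```python
# Sketch of the decomposition in the notation above: build T and U from the
# known profiles L(i,j) and the unknown spectrum S(j), then solve T c = U
# by Gauss-Jordan elimination on the augmented matrix [T | U].

def decompose(profiles, spectrum):
    """profiles: list of m pure element/background profiles, each n samples.
    spectrum: the unknown spectrum, n samples. Returns coefficients c(i)."""
    m, n = len(profiles), len(spectrum)
    # Normal equations: T(i,k) = Sum_j L(i,j)*L(k,j), U(i) = Sum_j S(j)*L(i,j)
    aug = [[sum(profiles[i][j] * profiles[k][j] for j in range(n))
            for k in range(m)] +
           [sum(spectrum[j] * profiles[i][j] for j in range(n))]
           for i in range(m)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(m):
        pivot = max(range(col, m), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(m):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[m] for row in aug]

# Two overlapping triangular "profiles" and a spectrum built as 2*A + 3*B:
A = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
B = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
S = [2 * a + 3 * b for a, b in zip(A, B)]
print(decompose([A, B], S))  # recovers approximately [2.0, 3.0]
```

Because T depends only on the known profiles, in the instrument it can be precomputed, leaving only U and the elimination to run per spectrum.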
A modification of this algorithm uses variable sized bins, referred to here as baskets. Each spectrum from the multichannel analyzer is summed over a range of channels to form a basket. In areas where any spectral profile has a major feature, small baskets can be used. In regions where no features are expected, a basket can span many channels. This can be used to reduce the amount of data processing and to increase the insensitivity to distortion of lines. It is particularly important to choose the baskets such that the resolution is high where lines may overlap. The list of basket boundaries is given in Table I below.
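The basket reduction itself is a simple channel summation. A sketch (the boundaries here are illustrative, not the instrument's Table I values):

```python
# Sketch of the basket reduction: sum spectrum channels into variable-width
# bins given a list of basket boundaries (illustrative values only).

def to_baskets(spectrum, boundaries):
    """Channels boundaries[k] .. boundaries[k+1]-1 form basket k."""
    return [sum(spectrum[lo:hi])
            for lo, hi in zip(boundaries[:-1], boundaries[1:])]

spectrum = [1, 2, 3, 4, 5, 6, 7, 8]
# Narrow baskets where peaks may overlap, one wide basket elsewhere:
print(to_baskets(spectrum, [0, 2, 3, 4, 8]))  # prints [3, 3, 4, 26]
```

The same boundaries must be applied to the pure element profiles and to the unknown spectrum, so that the least squares fit operates on consistently reduced data.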
Since the measured pure element spectra contain noise, generated spectra are used in computing the matrices. The generated pure element spectra are comprised of Gaussian lineshapes fit and summed to match the measured pure element spectra. This process was semi-automated, with final comparison and adjustment by hand to ensure that the generated spectra matched the measured pure element spectra. The background profiles were computed by repeatedly smoothing the measured background spectrum.

Table I. Basket definitions used to sum spectra.

Table II. Calculated x-ray fluorescence intensity compared to measured intensity for selected elements in 72 alloys. The alloy name and composition are given, followed by the calculated and measured x-ray intensities. Calculations were performed by the NRLXRF computer code. YC denotes the calculated intensity and YM the measured intensity.

Table III. Results of the algorithms developed in this report applied to the 72 alloys used as the standard library. Each alloy was excluded from the library during its analysis. Only the major constituents are shown. The alloy name and grade are given, followed by the measured intensity, the composition from the algorithm, and the known composition from the manufacturer. YU is the measured intensity, XU the measured composition, and XG the known composition.

Tasks performed during the FY97 program:

1. Improve fundamental parameters calculations
1.a Survey literature for best values of fundamental physical parameters needed to calculate x-ray yields
1.b Incorporate new parameter values in fundamental parameters XRF calculations
1.c Test accuracy of fundamental parameters calculations for a wide range of sample compositions and geometries
2. Develop fully automated analysis method for field XRF data
2.a Peak identification and assignment to elements
2.b Evaluation of available standards from library by comparison to unknown
2.c Calculation of standards from fundamental parameters when necessary
2.d Select regression model for optimal use of available standards and calculations

Figure 1. Results of application of the spectral decomposition algorithm to simulated XRF spectra including noise and peak broadening. The decomposition algorithm was applied to a simulated spectrum in which noise was added and the peaks broadened, and the recovered intensities are compared to the actual intensities which were included in the simulated spectrum.