Identifying sources of variation for reliability analysis of field inspections
ultrasonic imaging technique that was being deployed.
Three groups were identified, based upon the amount of
training that was deemed necessary: expert, intermediate, and novice groups, which would receive 1 day, 1 week, and 2 weeks of training, respectively. By ensuring that these groups were represented in the field
experiments, there would be data on the efficacy of the
various training programs.
3.2.1 Correlating field results with laboratory results
Although the procedural-type variables that are studied in the laboratory will not be controlled in the field experiments, they can still be recorded and compared to the levels used in the laboratory. The field data provide a check on the adequacy of the range of inputs used in the laboratory.
For the C-141 program, the time-base delays and gains used by the inspectors varied over a wider range than was used in the laboratory characterization. The laboratory range
was based on an analysis of what should be expected
from strict adherence to the procedures. Identifying the
reason for the added field variation is a concrete step
toward improving the inspection system.
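The range check described above can be sketched as a simple screening of recorded field settings against the laboratory characterization range. The parameter names, bounds, and values below are hypothetical, not taken from the C-141 data:

```python
# Sketch: flag field-recorded procedural settings that fall outside the
# range characterized in the laboratory.  All names and values here are
# illustrative assumptions, not C-141 program data.

lab_range = {"time_base_delay_us": (4.0, 6.0), "gain_db": (38.0, 46.0)}

field_records = [
    {"time_base_delay_us": 5.1, "gain_db": 44.0},
    {"time_base_delay_us": 7.2, "gain_db": 41.5},  # delay outside lab range
    {"time_base_delay_us": 4.8, "gain_db": 49.0},  # gain outside lab range
]

def out_of_range(record, lab_range):
    """Return the settings in one field record that fall outside the lab range."""
    return {k: v for k, v in record.items()
            if not (lab_range[k][0] <= v <= lab_range[k][1])}

for i, rec in enumerate(field_records):
    excess = out_of_range(rec, lab_range)
    if excess:
        print(f"inspection {i}: settings outside lab range: {excess}")
```

Settings flagged this way point to the concrete follow-up question the text raises: why field practice departed from strict adherence to the procedures.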
3.2.2 Procedure Implementation
The field inspections should include all major
procedural steps that could possibly influence the
reliability of an inspection. This includes calibration
and setup of equipment, as well as the handling of
equipment during an inspection. In the C-141 program
this meant that the inspectors had to attach, with suction
cups, a two-axis scanner to the underside of a wing
surface. Proper scanner attachment and the appropriate
definition of the inspection area in the computer were
essential steps.
The field experiments provide the opportunity to observe discrete events that could impact reliability and that may occur more often than one would expect. Examples in the C-141 program include the reverse mounting of a transducer following calibration, and a scan misalignment that effectively excluded the last fastener in the scan from inspection.
If possible, the field experiments should address both the skill and mechanical aspects of an inspection and the decision process once a signal is obtained.
This separation of the inspection tasks was easily
accomplished in the C-141 program because all signal
images obtained by the inspectors were saved.
However, the program went an additional step to
separate the data acquisition process from the decision
process by asking some of the inspectors to make calls
on a stored image set. The process to access those
images was similar to the process that they were taught
for an actual inspection. The difference was that they
did not have to perform the actual inspection. In the C-141 case there was substantial variation in the calls
made on this common data set.
The variation of calls made on the common data set was
nearly as great as the calls made from the individual
inspections of the test specimens. Clearly, ensuring that inspectors apply a more uniform decision process would increase the reliability of the inspection. This issue could be addressed through improved training and possibly through the development of computer-aided decision tools.
4. POD MODELS TO REFLECT FIELD DATA
In the previous section we discussed design of
experiment philosophies for integrating laboratory and
field data into reliability assessments. Results of such experiments are usually summarized by probability of detection (POD) curves, in which a detection probability is established as a function of a flaw characteristic. For the purposes of
the following discussion we use crack length as the
flaw characteristic, although in practice some other
variable may be more appropriate.
Data from laboratory experiments are more likely to be gathered as continuous (variable) responses, as opposed to binary ones.
Data from field experiments are likely to be hit/miss or
call/no-call data. The usual analysis of either form of
data is addressed in the literature [3]. Further
discussions and software have been made available in
US Air Force sponsored programs [4,5].
The cited references refer to the variable response
analysis as an a-hat versus a analysis. In this context,
a-hat is a single inspection variable that is treated as
being directly related to the crack length, a. We will
not pursue this form of analysis other than to note that
extensions to multidimensional data can be made [6].
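Although the text does not pursue the a-hat versus a analysis, its basic logic can be sketched under the standard assumptions: ln(a-hat) is linear in ln(a) with normally distributed scatter, and a crack is called "detected" when a-hat exceeds a decision threshold. Every parameter value below is hypothetical, not from the cited programs:

```python
# Minimal sketch of how an a-hat versus a model yields a POD curve.
# Assumed model: ln(a-hat) ~ Normal(b0 + b1*ln(a), sigma^2), with a
# detection call when a-hat exceeds the threshold a_th.
import math

b0, b1, sigma = 0.5, 1.2, 0.3   # assumed regression intercept, slope, scatter
a_th = 2.0                      # assumed decision threshold on a-hat

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pod(a):
    """POD(a) = P(a-hat > a_th) under the log-linear scatter model."""
    z = (math.log(a_th) - (b0 + b1 * math.log(a))) / sigma
    return 1.0 - normal_cdf(z)
```

With a positive slope b1, the resulting POD curve rises monotonically from 0 toward 1 with crack length, which connects this variable-response analysis to the hit/miss POD curves discussed next.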
4.1 Probability of Detection Curves for
Hit/Miss Data
The usual probability of detection curves are assumed
to be monotonic and to go from 0 to 1 as a function of
the crack length. Implicit in this modeling is the assumption that a sufficiently small crack has a probability of 0 of being detected and a sufficiently large crack has a probability of 1.
The curves used to model POD are usually two
parameter functions. The two most common are
derived from using log-logistic and lognormal
probability distribution functions. We will not repeat
the mathematical forms here, but note that the two
parameters control the location of the POD curve (that
is, where the 50% detection rate is) and the scale (how
fast the POD changes as a function of the crack length).
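The two common two-parameter forms mentioned above can be written down directly, each as a function of ln(crack length) with a location parameter mu (the 50% detection point on the log scale) and a scale parameter sigma. The parameter values below are illustrative only:

```python
# Sketch of the two common two-parameter POD curve forms: log-logistic
# and lognormal.  mu is the 50% detection point on the log scale; sigma
# controls how fast POD rises with crack length.  Values are illustrative.
import math

mu, sigma = math.log(2.0), 0.5   # hypothetical: 50% POD at crack length a = 2.0

def pod_loglogistic(a):
    """Log-logistic POD: 1 / (1 + exp(-(ln a - mu) / sigma))."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

def pod_lognormal(a):
    """Lognormal POD: Phi((ln a - mu) / sigma), Phi the standard normal CDF."""
    z = (math.log(a) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Both forms are monotonic, equal 0.5 at a = exp(mu), and approach 0 and 1 at the extremes, matching the modeling assumptions stated above; they differ mainly in the heaviness of their tails.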
Probability of detection curves are empirically
estimated from the hits and misses made on a set of test
specimens with a range of crack lengths. If an
Spencer, F. W. Identifying sources of variation for reliability analysis of field inspections, article, April 1, 1998; Albuquerque, New Mexico. University of North Texas Libraries, UNT Digital Library (https://digital.library.unt.edu/ark:/67531/metadc704270/m1/4/); crediting UNT Libraries Government Documents Department.