
AUTOMATIC EVALUATION OF WELDED JOINTS USING IMAGE
PROCESSING ON RADIOGRAPHS
Ch. Schwartz
CEA - Centre de Valduc - 21120 IS SUR TILLE - FRANCE
ABSTRACT. Radiography is frequently used to detect discontinuities in welded joints (porosity, cracks, lack of penetration). Precise knowledge of the geometry of these defects is an essential step in assessing the quality of the weld. For this reason, an effort to improve the interpretation of radiographs by image processing has been undertaken. The principle consists in radiographing the welded joint together with a depth step wedge penetrameter made of the same material. The radiograph is then finely digitized, and automatic processing of the penetrameter image allows a correspondence to be established between grey levels and material thickness. An algorithm based on image processing is used to localize defects in the welded joint and to isolate them from the original image. First, the defects detected by this method are characterized in terms of dimensions and equivalent thickness. Then, from the image of the healthy welded joint (that is to say, without the detected defects), characteristic values of the weld are evaluated (thickness reduction, width).
INTRODUCTION - PRINCIPLE OF THE STUDY
The search for the smallest flaws and for a thorough knowledge of welded joints is an important step in the improvement of non-destructive testing. In some specific applications, radiography is a well-adapted non-destructive technique that allows the quality of welds to be quantified (thickness reduction, flaws...). A method based on indirect thickness measurement has been considered. This method consists in modeling a calibration curve representing the thickness of the material being radiographed as a function of the density of the film [1]. Such a relationship is made possible by the use of a special penetrameter (usually called a depth step wedge) made of the same material as the specimen. The weld of the specimen is placed beside the depth step wedge so that both are radiographed on the same film, which avoids errors from differences in radiation intensity across the field. The stepped wedge must be as wide as possible and, in order to limit the effects of scattered radiation, lead masking is frequently used. It is important to have exactly the same operating conditions (voltage, intensity, time of exposure, film development...) to establish a reliable relationship between the film density and the thickness of material (from the image of the depth step wedge) and then to extrapolate it to the weld.
When appropriate experimental conditions are observed, the film is finely digitized
with a microdensitometer (50 × 50 µm, 256 grey levels). The images are then explored
with specific procedures using image-processing algorithms and automatic evaluation of
the weld is possible.
FIGURE 1. Principle of the calibration curve.
CALIBRATION CURVE MODELING
The first step of the automatic treatment consists in establishing a calibration curve giving the grey levels of the digitized image as a function of the material thickness: Gl = f(Th). The average pixel value of each step in the wedge's image is calculated so that each thickness of the step wedge corresponds to a grey level value
(Figure 1).
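As an illustration of this step, the per-step averaging could be done as in the short sketch below. It assumes each step of the wedge has already been delimited as a rectangular region of interest in the digitized image; the region coordinates and the name step_grey_levels are hypothetical, not taken from the original software.

import numpy as np

def step_grey_levels(image, step_rois):
    """Mean grey level of each step; step_rois maps a thickness to its (row, column) slices."""
    return {thickness: float(image[rows, cols].mean())
            for thickness, (rows, cols) in step_rois.items()}

# Hypothetical regions of interest for a six-step wedge (thickness in mm):
# step_rois = {0.5: (slice(10, 60), slice(20, 80)),
#              1.0: (slice(70, 120), slice(20, 80)), ...}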
In order to predict the thickness corresponding to each grey level value in the image of the weld, data fitting is necessary. To this end, a model based on the attenuation law has been developed:

Gl = F(B, Th) = B0 + B1 · e^(−B2 · Th)    (1)

where:
Th represents the material thickness,
Gl represents the equivalent grey level,
B represents the curve coefficients.
This nonlinear curve-fitting problem is solved in the least-squares sense [2]. That is, given the input data Th = {Th1, ..., Th6} and the observed outputs Gl = {Gl1, ..., Gl6}, the coefficients B = {B0, B1, B2} that best fit the equation F(B, Th) are determined. In practice, this consists in minimizing:

min over B of  Σ_{i=1..6} [ F(B, Thi) − Gli ]²    (2)
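To make the fitting step concrete, the following sketch fits the coefficients with a standard nonlinear least-squares routine (scipy.optimize.curve_fit). It assumes the offset-plus-exponential form written above for Eq. (1) and six step-wedge measurements; the numerical values and the name fit_calibration are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def attenuation_model(th, b0, b1, b2):
    """Calibration model of Eq. (1): grey level as a function of material thickness."""
    return b0 + b1 * np.exp(-b2 * th)

def fit_calibration(step_thicknesses, step_grey_levels):
    """Solve the least-squares problem of Eq. (2) on the step-wedge measurements."""
    # Rough initial guess: offset near the darkest step, decay scaled to the thickness range.
    p0 = (step_grey_levels.min(),
          np.ptp(step_grey_levels),
          1.0 / step_thicknesses.max())
    coeffs, _ = curve_fit(attenuation_model, step_thicknesses, step_grey_levels, p0=p0)
    return coeffs  # (B0, B1, B2)

# Hypothetical step-wedge data: thicknesses in mm, averaged grey levels per step.
th = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
gl = np.array([210.0, 168.0, 136.0, 112.0, 93.0, 79.0])
B = fit_calibration(th, gl)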
The calibration curve is now determined for all the grey level values of the image
of the depth step wedge (Figure 2-a). However, some adjustments are necessary before the
automatic evaluation of the welded joints can be applied. In fact, it appears that the step of the wedge which has the same thickness as the specimen does not have exactly the same density on the film.
FIGURE 2. (a) Modeling of the calibration curve - (b) Histogram of the raw image - (c) Correction of the calibration curve.
The calibration curve must be corrected for this drift, which results from the non-isotropy of the X-ray cone beam and from scattered radiation effects. The average grey level value AGL is extracted from the histogram of the image of the weld (Figure 2-b) and corresponds directly with the thickness e of the specimen. The characteristic point of the weld (e, AGL) is then put on the calibration graph. The curve generated from the result of the stepped wedge is moved to the characteristic point of the weld in order to match exactly with the different grey level values of the weld (Figure 2-c).
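The paper does not state whether the curve is shifted along the grey-level or the thickness axis; the short sketch below assumes the simplest interpretation, a grey-level offset chosen so that the fitted curve passes exactly through the characteristic point (e, AGL) of the weld.

import numpy as np

def shift_calibration(coeffs, e_specimen, a_gl):
    """Return a calibration function Gl(Th) shifted to pass through (e, AGL).

    coeffs = (B0, B1, B2) from the step-wedge fit; the grey-level offset is an assumption.
    """
    b0, b1, b2 = coeffs
    offset = a_gl - (b0 + b1 * np.exp(-b2 * e_specimen))
    return lambda th: b0 + b1 * np.exp(-b2 * th) + offset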
NON-UNIFORM ILLUMINATION CORRECTION
Due to the anisotropy of the X-ray cone beam, a non-uniform illumination of the background of the image is frequently observed. For example, the background of the image can be much brighter in the top part of the image than in the bottom part. This variation must be subtracted from the background of the image in order not to disturb the results of the automatic evaluation algorithm. Several methods have been tested (mathematical morphology, low-pass filtering...). Good results are obtained by polynomial fitting [3]: each column of the raw image is fitted by a polynomial function (Figure 3-a). In order not to change the information of the weld during the correction of the non-uniform illumination, the data-fitting algorithm only takes into account points that are not included in the weld itself, that is, points that are far enough from the center of the weld. The non-uniform illumination of the background is then modeled for all the profiles of the image of the weld (Figure 3-b). This effect is removed by subtracting the modeled background image from the original image of the weld. Then, all the profiles are translated to a realistic grey level value by simply adding the value AGL to all the pixels (Figure 3-c).
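A minimal sketch of this column-wise correction is given below. It assumes the weld pixels have already been masked out, and the polynomial degree is an illustrative choice; the original implementation may differ.

import numpy as np

def correct_illumination(image, weld_mask, a_gl, degree=2):
    """Column-wise polynomial background correction (the idea of [3]).

    image     : 2-D array of grey levels.
    weld_mask : boolean array, True where a pixel belongs to the weld (excluded from the fit).
    a_gl      : average grey level AGL of the sound material, added back after subtraction.
    degree    : assumed degree of the background polynomial.
    """
    rows = np.arange(image.shape[0], dtype=float)
    corrected = np.empty(image.shape, dtype=float)
    for col in range(image.shape[1]):
        keep = ~weld_mask[:, col]                              # fit only points far from the weld
        coeffs = np.polyfit(rows[keep], image[keep, col].astype(float), degree)
        background = np.polyval(coeffs, rows)                  # modeled non-uniform background
        corrected[:, col] = image[:, col] - background + a_gl
    return corrected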
FIGURE 3. (a) Profile in the raw image - (b) Non-uniform illumination of the background - (c) Profile after processing.
FLAW DETECTION AND AUTOMATIC EVALUATION OF THE WELD
A good approach for locating defects in the image of the weld (Figure 4-a) is to detect their edges. Flaws are detected in the image by means of the Canny-Deriche filter [4][5]. The edge detector used is the combination of two one-dimensional filters applied along the two directions of the image: a smoothing filter f(x) and the optimum derivative filter h(x):
f(x) = k (α|x| + 1) e^(−α|x|),   h(x) = c · x · e^(−α|x|)    (3)
The image of the norm of the gradient B of the weld image A is calculated as follows [6]:

Bx = (A * h(x)) * f(y)
By = (A * h(y)) * f(x)
B = √(Bx² + By²)    (4)
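The following sketch illustrates the separable filtering of Eqs. (3) and (4) with sampled, truncated kernels. The truncation radius and the value of α are assumed parameters, and the constants k and c of Eq. (3) are replaced by a simple normalization of the smoothing kernel.

import numpy as np
from scipy.ndimage import convolve1d

def deriche_kernels(alpha, radius=10):
    """Sampled 1-D kernels of Eq. (3); radius is an assumed truncation length."""
    x = np.arange(-radius, radius + 1, dtype=float)
    f = (alpha * np.abs(x) + 1.0) * np.exp(-alpha * np.abs(x))   # smoothing filter f(x)
    h = x * np.exp(-alpha * np.abs(x))                           # derivative filter h(x)
    return f / f.sum(), h

def gradient_norm(image, alpha=1.0):
    """Norm of the gradient B of Eq. (4), from two separable 1-D convolutions."""
    a = np.asarray(image, dtype=float)
    f, h = deriche_kernels(alpha)
    bx = convolve1d(convolve1d(a, h, axis=1), f, axis=0)   # Bx = (A * h(x)) * f(y)
    by = convolve1d(convolve1d(a, h, axis=0), f, axis=1)   # By = (A * h(y)) * f(x)
    return np.hypot(bx, by)                                # B = sqrt(Bx^2 + By^2)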
Edges of the flaws correspond to the local maxima of the norm of the gradient B. They are directly extracted by thresholding. The noise of the raw images generates the detection of artifact edges. In order to limit the detection of these edges, the appearance of the detected edges in the image profiles is analyzed. The idea is that each profile of a flaw such as a porosity is composed of three different areas characterized by specific criteria (a sketch of this check is given after the list below):
1) a "descending area" characterized by a minimum descending slope of the profile,
2) a flat area,
3) a "rising area" characterized by a minimum rising slope of the profile.
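As announced above, a sketch of this three-area check on a single grey-level profile could look like the following; the slope and length thresholds are illustrative assumptions, not values from the original software.

import numpy as np

def looks_like_porosity(profile, min_slope=2.0, min_flat_len=1):
    """Check the descending / flat / rising criteria on a 1-D grey-level profile."""
    slopes = np.diff(np.asarray(profile, dtype=float))
    falling = np.where(slopes <= -min_slope)[0]        # 1) descending area
    rising = np.where(slopes >= min_slope)[0]          # 3) rising area
    if falling.size == 0 or rising.size == 0 or rising.min() <= falling.max():
        return False                                   # the rise must come after the fall
    flat_len = rising.min() - falling.max() - 1        # 2) flat area between the two
    return flat_len >= min_flat_len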
If one of the criteria is not fulfilled, the edges are considered to be edges of an artifact and are eliminated. The resulting image contains only the edges of the flaws. An automatic algorithm based on segmentation isolates them from the image (Figure 4-b). The detected flaws are then characterized in terms of size, number and location in the weld. On the other hand, in order to evaluate the intrinsic weld quality (thickness reduction, width...), a "virtual" image of the weld is created. Each point that is considered to be a pixel of a flaw is identified. For each line of the image where flaw points are detected, an algorithm determines the new grey levels of the flaw pixels by joining the two boundaries of the edges of the flaw with a cubic-spline function. The result of this reconstruction is shown in Figure 4-c.
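A minimal sketch of this line-by-line reconstruction is shown below; for simplicity it fits the spline on all sound pixels of the line rather than only on the pixels immediately bounding the flaw, which is an assumption about the original implementation.

import numpy as np
from scipy.interpolate import CubicSpline

def rebuild_line(line, flaw_mask):
    """Replace flaw pixels of one image line by a cubic-spline bridge over sound pixels."""
    line = np.asarray(line, dtype=float)
    cols = np.arange(line.size)
    spline = CubicSpline(cols[~flaw_mask], line[~flaw_mask])
    out = line.copy()
    out[flaw_mask] = spline(cols[flaw_mask])
    return out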
FIGURE 4. (a) Raw image - (b) Flaws detected by the automatic processing - (c) Final image without
flaws.
Two different treatments thus allow the flaws (size, number) and the weld's characteristics (width, thickness reduction, possible lack of penetration...) to be evaluated separately.
RESULTS, PERFORMANCE EVALUATION AND FUTURE WORK
This method has been tested on several radiographs of welds of thin metal sheets. The results are compared with the interpretation of the expert and, in most cases, all the flaws are accurately detected. The size and location of flaws in the weld are estimated consistently by the expert and by the automatic software. In our application, the lateral resolution is evaluated at 50 µm and has been validated by comparison with the expert interpretation for flaws measuring at least 200 µm in diameter. The contrast sensitivity corresponds to approximately 0.1% of the total thickness of the specimen, and the detection of flaws whose thickness corresponds to 1% of the total thickness of the thin sheets has been validated.
A special penetrameter with artificial flaws (simulating thickness reduction and
porosity) associated with a depth step wedge has been manufactured. Different shapes of
thickness reduction have been considered (flat, triangular and circular bottom) and holes of
different size and depth (determined by the flaw thickness of interest) are drilled to
simulate voids in the weld. The stepped wedge and the specimen are radiographed on the
same film (Figure 5) and the automatic algorithms are applied (curve calibration, flaw
detection and weld evaluation). First results of a study show that all the flaws (lateral size
varying from 0.6 mm to 2 mm in diameter - depth representing voids from 0.1 mm to 1
mm in thin sheets) in the different shapes of weld reduction profiles are correctly detected
and located by the automatic software.
The next step of the validation will consist in evaluating the size of the smallest flaw (in terms of dimensions and thickness) that the automatic software can detect. This study will also use virtual simulated images of flaws in a weld. This work will allow the determination of the limit of detection and the accuracy of the automatic treatments.
CONCLUSION
An automatic image analysis software allowing the detection and the quantification of flaws in a weld of thin sheet has been developed. The principle is based on a precise knowledge of the relation between the grey levels of the digitized image and the thickness of the material. Once this relation is established, image-processing algorithms are applied to the raw image to detect and segment flaws. In addition, other algorithms are used to evaluate the weld in terms of width and thickness reduction and to detect a possible lack of penetration. This method has been tested on several welds of thin metal sheets and has been validated by expert interpretation. Performance will be evaluated on a special penetrameter or on simulated images representing different flaws within different shapes of welds.
FIGURE 5. Digitized image of the standard penetrameter.
REFERENCES
1. "Thickness Measurement Radiography," Nondestructive Testing Handbook, American Society for Nondestructive Testing, pp. 817-821.
2. Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1988.
3. Doering, E. R. and Basart, J. P., "Trend removal in X-ray images," in Review of Progress in Quantitative Nondestructive Evaluation, edited by D. O. Thompson and D. E. Chimenti, Plenum Press, Vol. 7A, 1988, pp. 785-794.
4. Canny, J. F., "A computational approach to edge detection," IEEE Trans. on PAMI, vol. 8, no. 6, pp. 679-698, 1986.
5. Deriche, R., "Using Canny's criteria to derive a recursively implemented optimal edge detector," Int. Journal of Computer Vision, vol. 1, no. 2, pp. 167-187, 1987.
6. Cocquerez, J. P. and Philipp, S., Analyse d'images : filtrage et segmentation, Masson, pp. 120-129.