Applications of Computational Verbs to Digital
Image Processing
Tao Yang
Abstract— A digital image processing technology based on computational verb theory is presented. If images are viewed as dynamic processes along spatial coordinates, then the changes of patterns of gray values can be represented as spatial verbs. The basic principle of verb image processing is to find the relation between an image and a template spatial verb. To reduce the computational burden for real-time applications, a two-dimensional spatial verb can be represented by a composition of a brightness profile function and a shape outline function. A fast way of calculating verb similarities between an image and a template verb is constructed based on either row-wise or column-wise verb compositions. Two applications of verb image processing and one existing commercial product using verb image processing are introduced. Copyright © 2004-2005 Yang’s Scientific Research Institute, LLC. All rights reserved.
Index Terms— Digital image processing, computational verb,
card counter, intelligent traffic control.
I. INTRODUCTION

In the summer of 2001, I began to think about generalizing
computational verb theory into a more general framework
called physical linguistics [18]. During my exploration of the
realm of physical linguistics, I realized that there were two immediate
applications of computational verbs to engineering problems;
namely, (computational) verb controllers and (computational)
verb image processing. I dedicated Chapters 6 and 7 of [18]
to verb controllers and verb image processing, respectively.
After my first attack on both engineering applications, I kept
thinking about how to improve the existing results. For the
applications of computational verbs to control problems, two
papers reporting the latest advances have been published [23],
[24]. For the applications of computational verbs to image
processing, a credit card counting system with a vision sensor,
called YangSky-MAGIC, has been developed [2]. During the
R&D of this product, I realized that verb image processing
is much more powerful than I originally thought, which is
why I want to probe this direction further.
As the first attempt at a paradigm shift toward solving engineering
problems using verbs, computational verb theory and
physical linguistics have undergone rapid growth since the
birth of computational verbs in the Department of Electrical Engineering and Computer Sciences, University of California at
Berkeley, in 1997 [4], [5]. The paradigm of implementing verbs
in machines was coined as computational verb theory [18].
The building blocks of computational verb theory are computational
verbs [13], [8], [6], [14], [20]. The relation between verbs
and adverbs was mathematically defined in [7]. The logic
operations between verb statements were studied in [9]. The
applications of verb logic to verb reasoning were addressed
in [10] and further studied in [18]. A logic paradox was
solved based on verb logic [15]. The mathematical concept
of a set was generalized into verb sets in [12]. Similarly, for
measurable attributes, the number systems can be generalized
into verb numbers [16]. The applications of computational
verbs to predictions were studied in [11]. The applications
of computational verbs to different kinds of control problems
were studied on different occasions [17], [18]. In [21], fuzzy
dynamic systems were used to model a special kind of
computational verb that evolves in a fuzzy space. The relation
between computational verb theory and traditional linguistics
was studied in [18], [22]. Two successful commercial applications of computational verb theory are the YangSky-MAGIC card
counter [2] and an intelligent traffic monitoring and control system
called the TrafficSky Project [1].
Except for the results in Chapter 7 of [18], so far all results
of computational verb theory have focused on temporal verbs
that evolve along the time axis. This is because the majority of
verbs evolve in the time domain. However, many verbs do evolve
in the spatial domain, or in both the time and space domains.
This kind of verb is called a spatial verb. Some examples of
spatial verbs are as follows.
1) The altitude increases from east to west.
2) The gray values change abruptly at an impulsive noise.
3) The gray values decrease smoothly at the right-hand side of the edge.
4) The image becomes darker towards the roof.
Observe that the verbs increases, change, decrease, and
becomes can also evolve in the time domain in different contexts.
Therefore, the contexts of spatial verbs play important roles
in the classification of computational verbs.
By viewing the gray values of an image as dynamically
evolving processes in space, we can use different methods
to chunk this kind of spatial dynamics into spatial verbs, just
as we have done in the time domain. The operations and logic
among computational verbs can then be used to find different
trends and changes of gray values, which in many cases are
very useful results for image processing.
This paper is organized as follows. In Section II the method
of composing brightness profile functions and shape outline
functions into spatial verbs is presented. Some examples are
provided to demonstrate the efficiency of constructing spatial
verbs using row-wise composition. In Section III, the methods
of calculating verb similarity between verbs are presented. A
fast way of finding verb similarity based on the canonical
forms of spatial verbs is provided. In Section IV, the fast algorithm of using verb similarity to process images is given. Some
examples are used to demonstrate the ideas. In Section V, some
concluding remarks are included and the existing commercial
products using verb image processing are introduced.
II. REPRESENTATIONS OF SPATIAL COMPUTATIONAL VERBS

The evolving function of a spatial verb V for image processing purposes is defined as

$$E_V : \Omega_S \to \Omega_B, \tag{1}$$

where ΩS ⊂ Z × Z denotes the support of a two-dimensional image (each element of ΩS is called a pixel for the purpose of digital image processing) and ΩB ⊂ R denotes the range of the brightness, or gray value, of each pixel. For simplicity and without loss of generality, here we assume that ΩB = [0, 1].
A. Constructing Canonical Spatial Computational Verbs
The evolving function of a spatial verb denotes the changes
of gray-values along spatial coordinates. Therefore, two factors
contribute to the forms of the evolving functions of spatial
verbs; namely, the spatial configurations and the changes of
brightness. However, the couplings between the spatial and
the brightness facets of spatial verbs sometimes make the
forms of evolving functions too complicated to deal with.
It is therefore very helpful to construct canonical
forms of spatial verbs for different situations. To construct the
canonical forms of spatial computational verbs, it is convenient
to decouple evolving functions of spatial verbs into two
functions: one handles the brightness information and the other
deals with the spatial configuration.
To construct canonical spatial computational verbs we use a
brightness profile function fp : Z → [0, 1] and a shape outline
function fo : Z × Z → [0, 1]. The evolving function EV can be
expressed by

$$E_V(i,j) = \bigoplus_{k=-\infty}^{\infty}\,\bigoplus_{l=-\infty}^{\infty} f_o(k,l) \otimes f_p(i-k,\, j-l), \tag{2}$$

where i, j, k, l ∈ Z, and ⊕ and ⊗ denote an s-norm and a t-norm, respectively. Since the method in Eq. (2) is a composition of the functions fp and fo, we call it a composition method (composition, for short) for constructing spatial verbs.
To reduce the computational complexity of Eq. (2), in practice we choose either row-wise or column-wise compositions
to construct the canonical forms of a computational verb V.
1) Row-wise Composition.

$$E_V(i,j) = \bigoplus_{l=-\infty}^{\infty} f_o(l) \otimes f_p(i,\, j-l). \tag{3}$$

Comparing Eqs. (2) and (3) one can see that in (2) the
composition of fo and fp is performed along a 2D plane
while in (3) the composition is performed along a 1D
line.
2) Column-wise Composition.

$$E_V(i,j) = \bigoplus_{k=-\infty}^{\infty} f_o(k) \otimes f_p(i-k,\, j). \tag{4}$$
By using either row-wise or column-wise composition, the
computational burden of implementing a verb image processing
task can be reduced dramatically. This is a critical issue for
many real-time applications such as traffic monitoring and
control[1].
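To make the composition concrete, the sketch below is a minimal Python illustration of the max–min form of Eq. (2) over finite supports (the t-norm and s-norm used in the examples of the next subsection). It is my own illustration, not code from the paper; the dictionary representation of fo and fp and the function name compose_eq2 are assumptions.

```python
def compose_eq2(f_o, f_p):
    """Max-min composition of Eq. (2) over finite supports.

    f_o: dict {(k, l): value in [0, 1]} -- shape outline function on its support.
    f_p: dict {(u, v): value in [0, 1]} -- brightness profile function on its support,
         indexed by the offsets u = i - k, v = j - l.
    Returns a dict {(i, j): E_V(i, j)} covering the support of the result;
    any pair (i, j) missing from the result corresponds to E_V(i, j) = 0.
    """
    E = {}
    for (k, l), o in f_o.items():
        for (u, v), p in f_p.items():
            i, j = k + u, l + v                       # so that i - k = u and j - l = v
            val = min(o, p)                           # t-norm taken as min
            E[(i, j)] = max(E.get((i, j), 0.0), val)  # s-norm taken as max
    return E
```

The row-wise and column-wise forms of Eqs. (3) and (4) correspond to restricting the outer loop to a single index, which is what makes them attractive for real-time use.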
B. Examples of Spatial Computational Verbs
Here we present some examples of constructing canonical
spatial verbs by using both brightness profile functions and
shape outline functions. In all examples presented here we
choose the t-norm and the s-norm as min and max, respectively. For the purpose of demonstration and without loss of
generality, we only use the row-wise composition to construct
spatial computational verbs.
1) Smooth Sigmoidal Functions as Brightness Profile Functions: In this example we choose the profile function as

$$f_p(i) = \frac{1}{1 + e^{-\alpha i}}, \quad i \in [-w_p, w_p],\; i \in \mathbb{Z}, \tag{5}$$

where (2wp + 1) is the window size of fp(·) (the window size is known as the life span for temporal verbs) and α > 0 is a parameter. Here, without expressing it explicitly, we let fp(i) = 0 for all i ∉ [−wp, wp]; the set of all elements in [−wp, wp] is also called the support of fp(·). fp denotes that the gray values increase. The result of using a linear shape outline function is shown in Fig. 1 with α = 0.2 and wp = 40.
Figure 1(a) shows the brightness profile function. Figure 1(b) shows the shape outline function, which is a line with a slope of 1. Figure 1(c) shows the evolving function of the canonical form composed from Figs. 1(a) and (b).
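As a concrete rendering of this example, the sketch below builds an evolving function from the sigmoidal profile of Eq. (5) and a slope-1 outline, using the min/max norms. It is my own NumPy illustration: the paper does not spell out how the one-dimensional profile is extended over the plane, so reading the outline as a per-column row offset c(j) is an assumption.

```python
import numpy as np

w_p, alpha = 40, 0.2                      # parameters used for Fig. 1
offsets = np.arange(-w_p, w_p + 1)

# Brightness profile of Eq. (5): a smooth sigmoid, zero outside [-w_p, w_p].
f_p = 1.0 / (1.0 + np.exp(-alpha * offsets))

# Shape outline of Fig. 1(b), read as a row offset c(j) = j for each column j (slope-1 line).
c = offsets.copy()

# Stamp the profile in every column, centred on the outline curve (the max-min composition
# with a one-pixel-wide outline reduces to a shifted copy of the profile per column).
E = np.zeros((offsets.size, offsets.size))
for col, c_j in enumerate(c):
    for row, i in enumerate(offsets):
        u = i - c_j                        # argument of f_p(i - c(j))
        if -w_p <= u <= w_p:
            E[row, col] = f_p[u + w_p]     # f_p vanishes outside its support
```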
The example shown in Fig. 2 uses a different shape outline
function fo (·, ·). Otherwise, all other settings are the same as
those used in Fig. 1. Observe that different kinds of patterns
can be easily composed.
2) Piecewise Linear Functions as Brightness Profile Functions: In this example we choose the brightness profile function as the piecewise linear function shown in Fig. 3(a). The profile function is given by

$$f_p(i) = 0.5 + \frac{i}{2 w_p}, \quad i \in [-w_p, w_p],\; i \in \mathbb{Z}. \tag{6}$$

The process of generating the evolving function of the spatial verb is shown in Fig. 3 with parameter wp = 40.
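The only change for this example is the profile itself; a short sketch of Eq. (6) in the same style as above (again my own illustration):

```python
import numpy as np

w_p = 40
offsets = np.arange(-w_p, w_p + 1)

# Piecewise-linear brightness profile of Eq. (6): ramps from 0 at i = -w_p to 1 at i = w_p.
f_p_linear = 0.5 + offsets / (2.0 * w_p)
# It can be composed with a shape outline exactly as in the sigmoidal sketch above.
```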
C. Composed Spatial Computational Verbs
We can apply different operations to canonical forms
of spatial verbs to obtain second-level spatial verbs. The
most useful operations are logic AND (∧), logic OR (∨), and logic NOT.
Fig. 1. The process of generating a canonical spatial verb by composing the brightness profile function fp (·) and the shape outline function fo (·, ·). (a)
Brightness profile function fp (·). (b) Shape outline function fo (·, ·). (c) Evolving function V(i, j) of the resulting canonical spatial verb.
Fig. 2. The process of generating a canonical spatial verb by composing the brightness profile function fp (·) and the shape outline function fo (·, ·). (a)
Shape outline function fo (·, ·). (b) Evolving function V(i, j) of the resulting spatial verb.
Fig. 3. The process of generating a canonical spatial verb by composing the brightness profile function fp (·) and the shape outline function fo (·, ·). (a)
Brightness profile function fp (·). (b) Shape outline function fo (·, ·). (c) Evolving function V(i, j) of the resulting spatial verb.
We usually use a t-norm and an s-norm to implement logic AND and logic OR, respectively. Let V1(i, j) and V2(i, j) be the evolving functions of two spatial verbs; then the results of logic AND and logic OR, denoted by VAND(i, j) and VOR(i, j), are respectively given by

$$\begin{aligned} V_{AND}(i,j) &= V_1(i,j) \wedge V_2(i,j) = \min\big(V_1(i,j),\, V_2(i,j)\big),\\ V_{OR}(i,j) &= V_1(i,j) \vee V_2(i,j) = \max\big(V_1(i,j),\, V_2(i,j)\big). \end{aligned} \tag{7}$$

For a canonical spatial verb V, the logic NOT operation is given by

$$V_{NOT} \triangleq NOT \circ V, \qquad V_{NOT}(i,j) = 1 - V(i,j). \tag{8}$$
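Because Eqs. (7) and (8) act pointwise on the evolving functions, they are one-liners once the evolving functions are stored as arrays; a minimal sketch, assuming NumPy arrays of equal shape with values in [0, 1]:

```python
import numpy as np

def verb_and(V1, V2):
    """Eq. (7): pointwise logic AND of two evolving functions (min t-norm)."""
    return np.minimum(V1, V2)

def verb_or(V1, V2):
    """Eq. (7): pointwise logic OR of two evolving functions (max s-norm)."""
    return np.maximum(V1, V2)

def verb_not(V):
    """Eq. (8): logic NOT of a canonical spatial verb."""
    return 1.0 - V
```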
Figure 4(a) and (b) show the evolving functions of two spatial computational verbs V1 and V2 , respectively. Figure 4(c)
and (d) show the evolving functions of the logic ANDing and
ORing results of V1 and V2 , respectively.
III. SIMILARITIES AMONG SPATIAL COMPUTATIONAL VERBS

The similarity among spatial computational verbs is the
central concept for applying computational verbs to image processing. However, it is difficult to use a single verb similarity
to cover the similarity relation between computational verbs
that can have many different forms. Therefore, instead of giving a
closed-form definition of verb similarity, boundary
conditions are used to define it as follows [24].
Verb Similarity. Given two computational verbs V1
and V2, the verb similarity S(V1, V2) should satisfy
the following:
1) S(V1, V2) ∈ [0, 1];
2) S(V1, V2) = S(V2, V1);
3) S(V1, V2) = 1 if V1 = V2 almost everywhere,
where V1 = V2 means both computational verbs have the same evolving function.

Fig. 4. Logic operations between two spatial verbs. (a) Evolving function V1(i, j) of the first canonical spatial verb. (b) Evolving function V2(i, j) of the
second canonical spatial verb. (c) Evolving function VAND(i, j) of the logic ANDing result. (d) Evolving function VOR(i, j) of the logic ORing result.
Given two spatial computational verbs V1 and V2, in [18] the following verb similarity was used:

$$S_1(V_1, V_2) \triangleq \begin{cases} 1 - \dfrac{\displaystyle\sum_{(i,j)\in\Omega_s} \big|V_1(i,j) - V_2(i,j)\big|}{\displaystyle\sum_{(i,j)\in\Omega_s} \big(V_1(i,j) + V_2(i,j)\big)}, & \text{if } \displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) + V_2(i,j) \neq 0;\\[2ex] 0, & \text{if } \displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) + V_2(i,j) = 0, \end{cases} \tag{9}$$

where Ωs is the support of the spatial verbs. The second verb similarity can be defined by

$$S_2(V_1, V_2) \triangleq \begin{cases} \dfrac{\displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) \wedge V_2(i,j)}{\displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) \vee V_2(i,j)}, & \text{if } \displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) \vee V_2(i,j) \neq 0;\\[2ex] 0, & \text{if } \displaystyle\sum_{(i,j)\in\Omega_s} V_1(i,j) \vee V_2(i,j) = 0. \end{cases} \tag{10}$$
Observe that whenever V1(i, j) ≡ 0 and/or V2(i, j) ≡ 0, both
verb similarities are zero. The cognition behind this fact is that
the verb similarity between a verb and “be zero” is always
zero. However, some attention may need to be paid to the verb
similarity between two “be zero” verbs in a more general context.
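For evolving functions stored as arrays over the same support, Eqs. (9) and (10) translate directly; the sketch below is my own NumPy rendering, with the zero-denominator branches handled exactly as in the definitions:

```python
import numpy as np

def s1(V1, V2):
    """Verb similarity of Eq. (9): one minus the normalized sum of absolute differences."""
    denom = np.sum(V1 + V2)
    if denom == 0:
        return 0.0
    return 1.0 - np.sum(np.abs(V1 - V2)) / denom

def s2(V1, V2):
    """Verb similarity of Eq. (10): ratio of the min-sum to the max-sum."""
    denom = np.sum(np.maximum(V1, V2))
    if denom == 0:
        return 0.0
    return np.sum(np.minimum(V1, V2)) / denom
```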
A. Verb Similarity for Canonical Forms of Spatial Verbs
When spatial verbs are applied to process images, the verb similarity between an image and the canonical forms of spatial verbs is very useful. Since a canonical form of a spatial verb can be constructed by using the composition in Eq. (2), the calculation of verb similarity can take advantage of the composition. Without loss of generality, let us suppose that a spatial verb is a row-wise (or column-wise) composition of a shape outline function, which is a one-pixel-wide curve. Then the verb similarity can be calculated using the following steps. Assume that the first verb V1(i, j) is constructed by a row-wise composition, of which the profile function fp(·) has a limited support [−wb, wb] and the shape outline function is a curve fo(·,·). There is no constraint on the shape of the second spatial verb V2. Let us assume that the supports of both verbs are the same; then the similarity between both spatial verbs is given by the following steps.
1) First, a one-dimensional function h(j) is calculated in order to set up the comparison standard used in Step 3. Let us first calculate the verb similarity between the profile function fp(·) and each row of the evolving function of V1, with the results stored in a one-dimensional function h(j). There are at least two methods to calculate h(j). The first method is given by

$$h_1(j) = 1 - \frac{\displaystyle\sum_{i=-w_b}^{w_b} \big|V_1(i,j) - f_p(i)\big|}{\displaystyle\sum_{i=-w_b}^{w_b} \big(V_1(i,j) + f_p(i)\big)}, \quad \forall j \in \Omega_J, \tag{11}$$

where we assume Σ_{i=−wb}^{wb} fp(i) ≠ 0 and ΩJ is the set of all column indexes for the support of the verbs. h(j) can also be calculated by using the following verb similarity:

$$h_2(j) = \frac{\displaystyle\sum_{i=-w_b}^{w_b} V_1(i,j) \wedge f_p(i)}{\displaystyle\sum_{i=-w_b}^{w_b} V_1(i,j) \vee f_p(i)}, \quad \forall j \in \Omega_J. \tag{12}$$

2) Calculate the verb similarity between the profile function fp(·) and each row of the evolving function of V2, with the results stored in a one-dimensional function g(j). There are at least two methods to calculate g(j). The first method is given by

$$g_1(j) = 1 - \frac{\displaystyle\sum_{i=-w_b}^{w_b} \big|V_2(i,j) - f_p(i)\big|}{\displaystyle\sum_{i=-w_b}^{w_b} \big(V_2(i,j) + f_p(i)\big)}, \quad \forall j \in \Omega_J, \tag{13}$$

where we assume Σ_{i=−wb}^{wb} fp(i) ≠ 0. g(j) can also be calculated by using the following verb similarity:

$$g_2(j) = \frac{\displaystyle\sum_{i=-w_b}^{w_b} V_2(i,j) \wedge f_p(i)}{\displaystyle\sum_{i=-w_b}^{w_b} V_2(i,j) \vee f_p(i)}, \quad \forall j \in \Omega_J. \tag{14}$$

3) Finally, two kinds of verb similarities between V1 and V2 can be calculated as follows:

$$S_1(V_1, V_2) = 1 - \frac{\displaystyle\sum_{j\in\Omega_J} \big|h_1(j) - g_1(j)\big|}{\displaystyle\sum_{j\in\Omega_J} \big(h_1(j) + g_1(j)\big)}, \qquad S_2(V_1, V_2) = \frac{\displaystyle\sum_{j\in\Omega_J} h_2(j) \wedge g_2(j)}{\displaystyle\sum_{j\in\Omega_J} h_2(j) \vee g_2(j)}. \tag{15}$$

Remarks. When spatial verbs are used in image processing, we usually choose a standard spatial verb, called the template verb, to play the role of the templates used in cellular image processing [19], or the role of the convolution kernels used in digital image processing [3]. Here, the spatial verb V1 plays the role of the template verb. In practical applications, the function h(·) can be calculated off-line and stored as a set of standard parameters. Whenever an image operation needs to use the template verb V1, the corresponding function h(·) does not need to be calculated again. Therefore, we call h(·) the template function.
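The three steps above reduce the 2-D comparison to two 1-D passes. The sketch below renders the first pair of formulas, Eqs. (11), (13) and the S1 part of Eq. (15), for evolving functions stored as arrays whose rows index i ∈ [−wb, wb]; it is my own illustration, and the assumption that the denominators never vanish follows the text.

```python
import numpy as np

def profile_similarity(V, f_p):
    """Eqs. (11)/(13): compare the profile f_p with the evolving function column by column.

    V:   array of shape (2*w_b + 1, n_cols), the evolving function on the profile support.
    f_p: array of shape (2*w_b + 1,), the brightness profile.
    Returns h1 (or g1) as an array indexed by the column index j.
    """
    num = np.sum(np.abs(V - f_p[:, None]), axis=0)
    den = np.sum(V + f_p[:, None], axis=0)        # nonzero because the sum of f_p is nonzero
    return 1.0 - num / den

def fast_s1(V1, V2, f_p):
    """First form of Eq. (15): similarity between the template verb V1 and the verb V2."""
    h1 = profile_similarity(V1, f_p)              # the template function; can be precomputed off-line
    g1 = profile_similarity(V2, f_p)
    return 1.0 - np.sum(np.abs(h1 - g1)) / np.sum(h1 + g1)
```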
IV. VERB IMAGE PROCESSING USING VERB SIMILARITY

Let an image of size M × N be I(i, j), 1 ≤ i ≤ M, 1 ≤ j ≤ N, and let a standard spatial verb of support p × q be V (henceforth we call this verb the template verb); then we can calculate the similarity between every p × q sub-image and the template verb using the following method. First, choose a point (m, n) where V(m, n) ≠ 0 as the anchor point. Let VI be a sampled p × q sub-image of the image I; then the verb similarity between VI and V is given by the following steps.
1) Assume that the coordinates of VI are the same as those of V. If the gray value VI(m, n) ≠ 0, then normalize the gray values of VI by the factor κ = V(m, n)/VI(m, n). If VI(m, n) = 0, then we choose κ = 1.
2) Find S(κVI, V) as the verb similarity between the sampled sub-image and the standard spatial verb. The evolving function of κVI is simply the evolving function of VI multiplied by the factor κ.
As already shown in [18], the resulting verb similarity at each pixel can be viewed as the result of applying a nonlinear filter to the original image I.
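Put together, the two steps above act as a sliding-window nonlinear filter. The sketch below is my own illustration of that loop; the choice of S1 or S2 is passed in as a function, and restricting the output to fully contained windows (no border padding) is an assumption.

```python
import numpy as np

def verb_filter(I, V, anchor, similarity):
    """Slide the p x q template verb V over the image I and record the verb similarity.

    I:          (M, N) gray-value image scaled to [0, 1].
    V:          (p, q) evolving function of the template verb.
    anchor:     (m, n) with V[m, n] != 0, used to normalize each sub-image.
    similarity: a verb similarity such as the s1/s2 sketches of Eqs. (9)-(10).
    """
    M, N = I.shape
    p, q = V.shape
    m, n = anchor
    out = np.zeros((M - p + 1, N - q + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            VI = I[r:r + p, c:c + q]                               # sampled sub-image
            kappa = V[m, n] / VI[m, n] if VI[m, n] != 0 else 1.0   # step 1: normalization factor
            out[r, c] = similarity(kappa * VI, V)                  # step 2: verb similarity
    return out
```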
A. Vertical Texture Enhancement and Segmentation
For developing a video-based card counter, the first step is to discriminate different spatial configurations of cards. For example, when cards are packed with irregular gaps, we need to determine the average gaps in order to estimate the spatial variation of the packing density of the cards. Figure 5(a) shows a snapshot of a video input with a resolution of 640 × 480 pixels. The task is to discriminate regions where the cards were packed tightly from regions where the cards were packed sparsely and irregularly. For the template verb, the following brightness profile function is chosen:

$$f_p(i) = 0/{-2} + 0/{-1} + 1/0 + 1/1 + 0.1/2. \tag{16}$$

Note that the above convention of expression is the same as {fp(−2) = 0, fp(−1) = 0, fp(0) = 1, fp(1) = 1, fp(2) = 0.1}. The shape outline function is a vertical line; therefore the template function h(·) ≡ 1. The verb similarity S1(·,·) is used. The processed result is shown in Fig. 5(b). Observe that the regions where the cards are sparsely packed are brightly highlighted, while the regions where the cards are tightly packed are enhanced but not highlighted.

Fig. 5. Enhance and segment cards under different conditions. (a) A snapshot of a video input. (b) Verb similarity.
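A minimal sketch of how such a similarity map could be produced with the profile of Eq. (16) and an S1-type comparison of Eq. (13); this is my own reading of the processing, not the product code: the scan direction and the per-pixel windowing are assumptions, and with h(·) ≡ 1 the per-window value is used directly as the output.

```python
import numpy as np

def profile_map(I, profile):
    """S1-style comparison of a short 1-D profile with every horizontal window of the image."""
    w = len(profile) // 2
    M, N = I.shape
    out = np.zeros((M, N))
    for r in range(M):
        for c in range(w, N - w):
            window = I[r, c - w:c + w + 1]
            den = np.sum(window + profile)
            out[r, c] = 1.0 - np.sum(np.abs(window - profile)) / den if den > 0 else 0.0
    return out

# Brightness profile of Eq. (16) as an array over the offsets -2..2.
f_p = np.array([0.0, 0.0, 1.0, 1.0, 0.1])
# sim_map = profile_map(I, f_p)   # I: gray image scaled to [0, 1], cf. Fig. 5(b)
```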
B. Robust Card Detection with Virtual Transparency

In many real-time applications, the huge demand for computational power needed by advanced image processing algorithms usually makes it impossible to implement them on compact, power-saving, and light-weight hardware platforms. To overcome this problem we need to develop highly efficient and hardware-friendly image processing algorithms. Verb image processing is a promising way to save computational resources for different visual tasks. By processing an image row-wise and then column-wise, the computational complexity of a verb image processing operator is proportional to the sum of the width and the height of the image. In contrast, the computational complexity of a traditional image operator is proportional to the product of the width and the height of the image. Here we use an example to show how the combination of row-wise and column-wise verb similarities can serve as a promising way to process images in real time with nontrivial image processing abilities. Let us choose a brightness profile function as

$$f_p(i) = 0/{-2} + 1/{-1} + 1/0 + 0/1 + 0/2, \tag{17}$$

which is applied to detect vertical edge features. The result is shown in Fig. 6(b), with the original snapshot of the video signal shown in Fig. 6(a) as the input. We then choose the following template function

$$h(j) = 1/{-5} + 1/{-4} + 0.2/{-3} + 0.4/{-2} + 0.7/{-1} + 1/0 + 0.7/1 + 0.4/2 + 0.2/3 + 0/4 + 0/5, \tag{18}$$

which is applied to the image in Fig. 6(b). The resulting image is shown in Fig. 6(c). Observe from Fig. 6(c) that in this result the camera virtually “sees through” the sticks covering the surface of the cards. This is in fact an important way of restoring discontinuous edge features and gaining strong robustness when a card counter works under uncertain conditions. The verb similarity S1(·,·) is used in this experiment.

Fig. 6. Robust card detection with virtual “see-through” effects. (a) The original snapshot of the video input of size 640 × 480 pixels. (b) Row-wise verb similarity with respect to the brightness profile function. (c) Column-wise verb similarity with respect to the template function.
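The row-then-column scheme described above can be sketched as two one-dimensional passes; the code below is my own illustration (the S1-style window comparison and the border handling are assumptions), with the profile of Eq. (17) applied along rows and the template function of Eq. (18) applied along columns of the intermediate result.

```python
import numpy as np

def line_scan_s1(A, kernel, axis):
    """Compare a 1-D verb template with every (2w+1)-sample window of A along the given axis."""
    w = len(kernel) // 2
    B = np.moveaxis(A, axis, -1)
    out = np.zeros_like(B)
    for r in range(B.shape[0]):
        for c in range(w, B.shape[1] - w):
            window = B[r, c - w:c + w + 1]
            den = np.sum(window + kernel)
            out[r, c] = 1.0 - np.sum(np.abs(window - kernel)) / den if den > 0 else 0.0
    return np.moveaxis(out, -1, axis)

f_p = np.array([0.0, 1.0, 1.0, 0.0, 0.0])                                # Eq. (17)
h = np.array([1.0, 1.0, 0.2, 0.4, 0.7, 1.0, 0.7, 0.4, 0.2, 0.0, 0.0])   # Eq. (18)

# Two-pass verb image processing on a gray image I scaled to [0, 1]:
# edges    = line_scan_s1(I, f_p, axis=1)    # row-wise pass, cf. Fig. 6(b)
# restored = line_scan_s1(edges, h, axis=0)  # column-wise pass, cf. Fig. 6(c)
```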
V. CONCLUDING REMARKS

Accumulating dynamic knowledge is the key to many art and engineering practices where formal methods fail to capture the complexities of the underlying mechanisms. However, just as philosophy fails to provide the implementation details needed for engineering practice, purely symbolized knowledge fails to capture the human intuitions, especially the dynamic ones, behind engineering practice. Computational verbs provide a promising way to capture dynamic intuitions from human experts. Viewing an image as a spatial dynamic process is not popular in the mainstream of image processing because engineers were not trained in that way. Instead, we mainly view an image as a “static picture” where lines and shapes are pure geometric items. However, on second thought, we find that when a human being looks at a 2D image, what is in the mind is in fact a 3D cognition. We use built-in mechanisms to rebuild a 2D image into a 3D representation using our experiences related to trends of shapes and changes of shadows. A comprehensive view of images from physical linguistics is more helpful than a biased view of images.

Since the design methods presented here can easily be interfaced with human languages, the commercial applications of verb image processing have been growing very fast. The first commercial product using verb image processing was developed jointly in May 2004 by Yang’s Scientific Research Institute, LLC., USA and Wuxi Xingcard Technology Ltd., China. This product, which is called the YangSky-MAGIC card counter and is shown in Fig. 7(a), applies different kinds of verb image operations to enhance the features of cards under different situations such as irregular gaps between cards, irregular and damaged edges of cards, bent cards, and cards with cut corners. Some extreme situations are shown in Fig. 7(b).

Fig. 7. The first commercial product that uses verb processing operators. (a) The front view of the YangSky-MAGIC card counter. (b) The software interface for a test setting with different extreme situations (damaged edge, zig-zag edge, corner cut, irregular gaps, bent card).

For real-time applications such as traffic monitoring and control with visual inputs, the efficiency of image processing algorithms is critical. In this kind of application the environment is subject to short-term and long-term changes. For robust online detection of vehicles, specially designed verb image operations can be used to extract the features of vehicles and track moving objects. Some results are shown in Fig. 8. Observe from Fig. 8(a) that whenever a car enters the range of the intersection, a unique number is assigned to it. In order to track a car continuously, this unique number stays attached to the car at all times, as shown in Fig. 8(b). More information and the latest updates on this application can be found in [1].

Fig. 8. Application of verb image processing to traffic monitoring and control. (a) The first frame of tracking all cars entering an intersection. (b) The second frame.

REFERENCES
[1] YangSky Groups. TrafficSky Project. http://www.yangsky.com/traficsky.htm.
[2] YangSky Groups and Wuxi Xingcard Technology Ltd. Visual Card
Counter: YangSky-MAGIC. http://www.yangsky.com/cardsky.htm.
[3] John C. Russ. The image processing handbook. CRC Press, Boca Raton,
FL., 1992.
[4] T. Yang. Verbal paradigms—Part I: Modeling with verbs. Technical
Report Memorandum No. UCB/ERL M97/64, Electronics Research
Laboratory, College of Engineering, University of California, Berkeley,
CA 94720, 9 Sept. 1997. page 1-15.
[5] T. Yang. Verbal paradigms—Part II: Computing with verbs. Technical
Report Memorandum No. UCB/ERL M97/66, Electronics Research
Laboratory, College of Engineering, University of California, Berkeley,
CA 94720, 18 Sept. 1997. page 1-27.
[6] T. Yang. Computational verb systems: Computing with verbs and
applications. International Journal of General Systems, 28(1):1–36,
1999.
[7] T. Yang. Computational verb systems: Adverbs and adverbials as
modifiers of verbs. Information Sciences, 121(1-2):39–60, Dec. 1999.
[8] T. Yang. Computational verb systems: Modeling with verbs and
applications. Information Sciences, 117(3-4):147–175, Aug. 1999.
[9] T. Yang. Computational verb systems: Verb logic. International Journal
of Intelligent Systems, 14(11):1071–1087, Nov. 1999.
[10] T. Yang. Computational verb systems: A new paradigm for artificial
intelligence. Information Sciences—An International Journal, 124(14):103–123, 2000.
[11] T. Yang. Computational verb systems: Verb predictions and their
applications. International Journal of Intelligent Systems, 15(11):1087–
1102, Nov. 2000.
[12] T. Yang. Computational verb systems: Verb sets. International Journal
of General Systems, 20(6):941–964, 2000.
[13] T. Yang. Advances in Computational Verb Systems. Nova Science
Publishers, Inc., Huntington, NY, May 2001. ISBN 1-56072-971-6.
[14] T. Yang. Computational verb systems: Computing with perceptions of
dynamics. Information Sciences, 134(1-4):167–248, Jun. 2001.
[15] T. Yang. Computational verb systems: The paradox of the liar. International Journal of Intelligent Systems, 16(9):1053–1067, Sept. 2001.
[16] T. Yang. Computational verb systems: Verb numbers. International
Journal of Intelligent Systems, 16(5):655–678, May 2001.
[17] T. Yang. Impulsive Control Theory, volume 272 of Lecture Notes in
Control and Information Sciences. Springer-Verlag, Berlin, Aug. 2001.
ISBN 354042296X.
[18] T. Yang. Computational Verb Theory: From Engineering, Dynamic
Systems to Physical Linguistics, volume 2 of YangSky.com Monographs
in Information Sciences. Yang’s Scientific Research Institute, Tucson,
AZ, Oct. 2002. ISBN:0-9721212-1-8.
[19] T. Yang. Handbook of CNN Image Processing: All You Need to Know
about Cellular Neural Networks, volume 1 of YangSky.com Monographs
in Information Sciences. Yang’s Scientific Research Institute, Tucson,
AZ, Sept. 2002. ISBN:0-9721212-0-X.
[20] T. Yang. Computational verb systems: Verbs and dynamic systems.
International Journal of Computational Cognition, 1(3):1–50, Sept.
2003.
[21] T. Yang. Fuzzy Dynamic Systems and Computational Verbs Represented
by Fuzzy Mathematics, volume 3 of YangSky.com Monographs in Information Sciences. Yang’s Scientific Press, Tucson, AZ, Sept. 2003.
ISBN:0-9721212-2-6.
[22] T. Yang. Physical Linguistics: A Measurable Linguistics based on
Computational Verb Theory, Fuzzy Theory and Probability, volume 5
of YangSky.com Monographs in Information Sciences. Yang’s Scientific
Press, Tucson, AZ, Oct. 2004.
[23] T. Yang. Applications of computational verbs to the design of P-controllers. International Journal of Computational Cognition, 3(2):52–60, June 2005 [available online at http://www.YangSky.com/ijcc32.htm].
[24] T. Yang. Architectures of computational verb controllers: Towards a new paradigm of intelligent control. International Journal of Computational Cognition, 3(2):74–100, June 2005 [available online at http://www.YangSky.com/ijcc32.htm].