Mathematics and Computers in Simulation 102 (2014) 153–163

Original Article

Nonlocal infinity Laplacian equation on graphs with applications in image processing and machine learning

Elmoataz Abderrahim a,b, Desquesnes Xavier c,*, Lakhdari Zakaria a,b, Lézoray Olivier a,b

a Université de Caen Basse-Normandie, UMR 6072 GREYC, F-14032 Caen, France
b ENSICAEN, UMR 6072 GREYC, F-14050 Caen, France
c Université d'Orléans, Laboratoire Prisme, Orléans, France

* Corresponding author. Tel.: +33 610910979. E-mail addresses: [email protected] (E. Abderrahim), [email protected] (D. Xavier), [email protected] (L. Zakaria), [email protected] (L. Olivier).

Received 31 October 2011; received in revised form 3 December 2013; accepted 20 January 2014. Available online 12 March 2014. http://dx.doi.org/10.1016/j.matcom.2014.01.007

Abstract

In this paper, an adaptation of the infinity Laplacian equation to weighted graphs is proposed. This adaptation leads to a nonlocal partial difference equation on graphs, which extends the well-known approximations of the infinity Laplacian equation. To do so, we study the limit, as p tends to infinity, of p-harmonic functions on graphs. We also prove the existence and uniqueness of the solution of this equation. Our motivation stems from the extension of the nonlocal infinity Laplacian equation from image processing to machine learning fields, with proposed illustrations for image inpainting and semi-supervised clustering.

© 2014 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Nonlocal infinity Laplacian; Partial difference equations; Tug-of-war game; Weighted graphs; Image processing; Semi-supervised data clustering

1. Introduction

The infinity Laplacian equation currently sits at the interface of a number of different mathematical fields. It is also used in several applications, for instance in optimal transportation, game theory, image processing, computer vision and surface reconstruction. It was first studied by Gunnar Aronsson, motivated by the classical analysis problem of building Lipschitz extensions of a given function. In the last decade, PDE theorists have established existence, uniqueness and regularity results for this equation. Recently, this equation (and also the p-Laplacian) was related to continuous values of tug-of-war games. For more details and applications of these equations, see [1,13,2,12,8,7,17,16] and references therein; see also [18] for numerical approximations. This equation has found concrete applications in image processing. For instance, in computer vision, the infinity Laplacian was introduced for edge enhancement [21,22] and served as a basis of Canny edge detection [3,21]. Later, Caselles et al. investigated absolutely minimizing Lipschitz extensions and the infinity Laplacian for data interpolation and inpainting in images [4]. Cong et al. have also proposed an interpolation scheme for shape morphing [6]. Elion and Vese have proposed an image decomposition model using the total variation and the infinity Laplacian [10]. Finally, Guillot and Le Guyader have recently proposed to extrapolate a vector field over the whole image domain in a variational framework with the infinity Laplacian [15].

Contributions.
In this paper, motivated by the extension of the infinity Laplacian equation from the image processing field to the machine learning field, we propose an adaptation of this equation to weighted graphs of arbitrary topology. This adaptation leads to partial difference equations with data-dependent coefficients, which can be considered as nonlocal versions of the infinity Laplacian equation. We then prove the existence and uniqueness of the solution of such an equation. Finally, we present some applications to image and data interpolation problems.

First, let us recall some previous works on the connection of the infinity Laplacian with p-harmonic functions and the tug-of-war game.

Continuous infinity Laplacian and p-harmonic functions. The infinity Laplacian is related to the infinity Laplacian equation, which can be expressed as

\Delta_\infty f = 0,   (1)

where \Delta_\infty f = |\nabla f|^{-2} \sum_{i,j=1}^{n} (\partial f/\partial x_i)(\partial f/\partial x_j)(\partial^2 f/\partial x_i \partial x_j) denotes the 1-homogeneous version of the infinity Laplacian on Euclidean domains, for smooth functions in some open set \Omega \subseteq \mathbb{R}^n. Aronsson [1] formally interpreted \Delta_\infty f = 0 as the limiting case, as p \to +\infty, of the Euler-Lagrange equation

\Delta_p f = \mathrm{div}(|\nabla f|^{p-2} \nabla f) = 0,   (2)

related to the homogeneous Dirichlet problem, with 1 < p < +\infty,

\inf_{f \in W^{1,p}(\Omega),\ f = b \text{ on } \partial\Omega} \|\nabla f\|_{L^p(\Omega)}.   (3)

Recently, in [5], Chambolle et al. proposed a Hölder infinity Laplacian that can be considered as a nonlocal version of the infinity Laplacian. They consider the minimization of a nonlocal functional of the form

\int_{\Omega \times \Omega} \frac{|f(x) - f(y)|^p}{|x - y|^{\alpha p}}\, dx\, dy, \quad \text{for } \alpha \in [0, 1].   (4)

The Euler-Lagrange equation of this functional is

\int_{\Omega} \frac{|f(x) - f(y)|^{p-1}}{|x - y|^{\alpha(p-1)}}\, \frac{\mathrm{sign}(f(x) - f(y))}{|x - y|^{\alpha}}\, dy = 0.   (5)

As p tends to infinity, this expression formally converges to the nonlinear and nonlocal equation L(f) = 0 on \Omega, with

L(f) = \max_{y \in \Omega,\, y \neq x} \left( \frac{f(y) - f(x)}{|y - x|^{\alpha}} \right) + \min_{y \in \Omega,\, y \neq x} \left( \frac{f(y) - f(x)}{|y - x|^{\alpha}} \right), \quad \text{for } x \in \Omega.   (6)

They called this operator L(f) the Hölder infinity Laplacian. We will show in the sequel that such an operator is equivalent to the nonlocal infinity Laplacian for a particular graph.

Tug-of-war and infinity Laplacian. The tug-of-war games related to the infinity Laplacian studied in [19] can be briefly described as follows. A tug-of-war game is a two-person, zero-sum game, that is, two players are in contest and the total earnings of one are the losses of the other. The rules of the game are the following: consider a bounded domain \Omega \subset \mathbb{R}^n and take a strip around the boundary, \Gamma \subset \mathbb{R}^n \setminus \Omega. Let g : \Gamma \to \mathbb{R} be a Lipschitz continuous function (the final payoff function). At the initial time, a token is placed at a point x_0 \in \Omega. Then, a (fair) coin is tossed and the winner of the toss is allowed to move the game position to any x_1 \in B_\varepsilon(x_0) (where B_\varepsilon(x) = \{y \mid |y - x| \leq \varepsilon\}). At each turn, the coin is tossed again, and the winner chooses a new game state x_k \in B_\varepsilon(x_{k-1}). Once the token has reached some x_\tau \in \Gamma, the game ends and the first player earns g(x_\tau) (while the second player earns -g(x_\tau)). This game has an expected value f_\varepsilon(x_0) (called the value of the game) that verifies the Dynamic Programming Principle (DPP)

f_\varepsilon(x) = \frac{1}{2} \sup_{y \in B_\varepsilon(x)} f_\varepsilon(y) + \frac{1}{2} \inf_{y \in B_\varepsilon(x)} f_\varepsilon(y), \quad \forall x \in \Omega.   (7)

Here it is understood that f_\varepsilon(x) = g(x) for x \in \Gamma. This formula can be intuitively explained from the fact that the first player tries to maximize the expected outcome (and has probability 1/2 of selecting the next state of the game), while the second tries to minimize the expected outcome (and also has probability 1/2 of choosing the next position).
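To make the DPP (7) concrete, the following Python sketch iterates it on a discretised interval until the epsilon-game values stabilise. The one-dimensional domain, the payoff values, the step sizes and all names are illustrative choices of ours, not taken from the paper or from [19].

import numpy as np

# epsilon-tug-of-war on the interval [0, 1], discretised with grid step h.
# Here the boundary strip Gamma is reduced to the two end points, which carry
# the payoff g; interior values are updated with the DPP (7):
#   f_eps(x) = 1/2 * sup_{B_eps(x)} f_eps + 1/2 * inf_{B_eps(x)} f_eps.
h, eps = 0.01, 0.05
x = np.arange(0.0, 1.0 + h, h)
f = np.zeros_like(x)
f[0], f[-1] = 0.0, 1.0                  # payoff g on the boundary (illustrative values)
r = int(round(eps / h))                 # radius of B_eps in grid points

for _ in range(10000):                  # fixed-point iteration of the DPP
    f_new = f.copy()
    for i in range(1, len(x) - 1):
        ball = f[max(0, i - r): i + r + 1]
        f_new[i] = 0.5 * ball.max() + 0.5 * ball.min()
    if np.max(np.abs(f_new - f)) < 1e-9:
        break
    f = f_new
# f now approximates the value of the epsilon-game; as eps -> 0 this value
# converges to the solution of the infinity Laplacian equation (8) below.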
The authors also prove that the \varepsilon-value of the game is a Lipschitz function converging uniformly, as \varepsilon goes to zero, to a function named the continuous value of the game, which solves the infinity Laplacian equation

\Delta_\infty f(x) = 0 \text{ on } \Omega,
f(x) = g(x) \text{ on } \partial\Omega.   (8)

2. Partial difference operators on graphs

Let us consider the general situation where any discrete domain can be viewed as a weighted graph. A weighted graph G_w = (V, E) consists of a finite set V of N vertices and of a finite set E \subseteq V \times V of edges. Let (u, v) be the edge that connects vertices u and v. An undirected graph is weighted if it is associated with a weight function w : E \to \mathbb{R}^+ satisfying w(u, v) = w(v, u) for all (u, v) \in E, and w(u, v) = 0 if (u, v) \notin E. The weight function represents a similarity measure between two vertices of the graph. We use the notation u \sim v to denote two adjacent vertices. The degree of a vertex u is defined as \delta_w(u) = \sum_{v \sim u} w(u, v). The neighborhood of a vertex u (i.e., the set of vertices adjacent to u) is denoted by N(u). In this paper, the considered graphs are connected and undirected, with neither self-loops nor multiple edges.

Let H(V) be the Hilbert space of real-valued functions on the vertices of the graph. Each function f : V \to \mathbb{R} of H(V) assigns a real value f(u) to each vertex u \in V. Similarly, let H(E) be the Hilbert space of real-valued functions defined on the edges of the graph. These two spaces are endowed with the following inner products: \langle f, g \rangle_{H(V)} = \sum_{u \in V} f(u) g(u) with f, g \in H(V), and \langle F, G \rangle_{H(E)} = \sum_{u \in V} \sum_{v \in V} F(u, v) G(u, v) where F, G \in H(E).

Given a function f : V \to \mathbb{R}, the integral of f is defined as

\int_V f = \sum_{u \in V} f(u),

and its L^p norm is given by

\|f\|_p = \left( \sum_{u \in V} |f(u)|^p \right)^{1/p}, \quad 1 \leq p < \infty,
\|f\|_\infty = \max_{u \in V} |f(u)|, \quad p = \infty.

Let A be a set of connected vertices with A \subset V, such that for all u \in A there exists a vertex v \in A with (u, v) \in E. We denote by \partial A the external boundary set of A,

\partial A = \{u \in A^c : \exists v \in A \text{ with } (u, v) \in E\},   (9)

where A^c = V \setminus A is the complement of A.

To manipulate graphs or functions on graphs, based on the ideas of Partial difference Equations (PdEs) methods, we have proposed an adaptation of many PDEs and continuous variational methods to finite weighted graphs [20,9,11]. Conceptually, PdEs mimic PDEs in spatial domains having a graph structure by providing discrete analogues of notions such as integration, derivative, gradient, etc. This framework allows most techniques for the nonlinear continuous p-Laplacian to be easily transposed to graphs.

2.1. Nonlocal difference operators

We recall several definitions of difference operators on weighted graphs, in order to define derivatives and some morphological operators on graphs that will be useful in the sequel. More details on these operators can be found in [11,20].

The nonlocal difference operator of a function f \in H(V), denoted d_w : H(V) \to H(E), is defined on an edge (u, v) \in E by

(d_w f)(u, v) = \gamma(w(u, v)) (f(v) - f(u)),   (10)

where \gamma : \mathbb{R}^+ \to \mathbb{R}^+ depends on the weight function (in the sequel we denote \gamma(w(u, v)) by \gamma_{uv}). This difference operator is linear and antisymmetric.
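Throughout Section 2, a weighted graph can be held in a plain adjacency dictionary. The following Python sketch (the toy graph, the weights and the helper names are ours, purely for illustration) evaluates the difference operator (10) with the common choice gamma_uv = sqrt(w(u, v)) and checks its antisymmetry.

import math

# A small undirected weighted graph: w[u][v] = w(u, v) = w(v, u).
w = {
    0: {1: 0.9, 2: 0.1},
    1: {0: 0.9, 2: 0.5},
    2: {0: 0.1, 1: 0.5},
}
f = {0: 1.0, 1: 0.2, 2: -0.4}           # a function f in H(V)

def gamma(wuv):
    """gamma(w(u, v)); here the usual choice sqrt(w(u, v))."""
    return math.sqrt(wuv)

def dw(f, u, v):
    """Nonlocal difference operator (10): (d_w f)(u, v) = gamma_uv (f(v) - f(u))."""
    return gamma(w[u][v]) * (f[v] - f[u])

# Antisymmetry check: (d_w f)(u, v) = -(d_w f)(v, u).
assert abs(dw(f, 0, 1) + dw(f, 1, 0)) < 1e-12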
The adjoint of the difference operator, denoted d_w^* : H(E) \to H(V), is the linear operator defined by \langle d_w f, G \rangle_{H(E)} = \langle f, d_w^* G \rangle_{H(V)}, for all f \in H(V) and all G \in H(E). Using the definitions of the difference operator and of the inner products in H(V) and H(E), the adjoint operator d_w^* of a function G \in H(E) can be expressed at a vertex u \in V by the following expression:

(d_w^* G)(u) = \sum_{v \sim u} \gamma_{uv} (G(v, u) - G(u, v)).   (11)

The divergence operator, defined by \mathrm{div}_w = -d_w^*, measures the net outflow of a function of H(E) at each vertex of the graph. Each function F \in H(E) has a null divergence over the entire set of vertices. From the previous definitions, it can easily be shown that \sum_{u \in V} \sum_{v \in V} (d_w f)(u, v) = 0 for f \in H(V), and \sum_{u \in V} (\mathrm{div}_w F)(u) = 0 for F \in H(E).

We define the directional derivative of a function f \in H(V), denoted \partial_v f, as

\partial_v f(u) = (d_w f)(u, v) = \gamma_{uv} (f(v) - f(u)).   (12)

The k-th order directional derivative is defined as

\partial_v^k f(u) = (-1)^k \gamma_{uv}^k (f(v) - f(u)).   (13)

We also introduce two morphological directional partial derivative operators; the external and internal partial derivative operators are respectively defined as

(\partial_v^+ f)(u) = \max(0, (d_w f)(u, v)), \quad (\partial_v^- f)(u) = -\min(0, (d_w f)(u, v)).   (14)

From these operators one can obtain a class of discrete nonlocal gradients. These gradients, including two upwind gradients, of a function f \in H(V) at a vertex u \in V are defined as follows:

(\nabla_w f)(u) = [(\partial_v f)(u) : v \sim u]^T,
(\nabla_w^\pm f)(u) = [(\partial_v^\pm f)(u) : v \sim u]^T.

The L^p norms, 1 \leq p < \infty, of these gradients, \|(\nabla_w f)(u)\|_p and \|(\nabla_w^\pm f)(u)\|_p, allow the regularity of a function around a vertex to be quantified. They can be used to construct several regularization functionals on graphs, such as

J_{w,p}(f) = \sum_{u \in V} \|(\nabla_w f)(u)\|_p,
J_{w,p}^\pm(f) = \sum_{u \in V} \|(\nabla_w^\pm f)(u)\|_p,
J_{w,\infty}(f) = \sum_{u \in V} \|(\nabla_w f)(u)\|_\infty.

These functionals correspond to well-known total variation schemes on graphs. One can remark that none of these definitions depends on a particular graph structure. Moreover, by definition such functionals are nonlocal. Indeed, if we consider a complete graph G(V, E, w), where E = V \times V, the previous functional can be rewritten as

J_w(f) = \sum_{u \in V} \|(\nabla_w f)(u)\|_p^p = \sum_{u \in V} \sum_{v \in V} \gamma_{uv}^p |f(v) - f(u)|^p.

2.2. The p-Laplace operator

The p-Laplacian on graphs, which is the analogue of the p-Laplacian on Riemannian manifolds, has been investigated by many researchers and used in many applications in image processing and machine learning. In the context of PdEs on graphs, based on the nonlocal weighted difference and divergence operators, we mimic the classical definition of the p-Laplacian on Euclidean domains to derive a unified form of two expressions: the anisotropic and isotropic p-Laplacians.

The nonlocal anisotropic p-Laplace operator of a function f \in H(V), \Delta^a_{w,p} : H(V) \to H(V), is defined by

(\Delta^a_{w,p} f)(u) = \frac{1}{2} \mathrm{div}_w(|d_w f|^{p-2} d_w f)(u),   (15)

with 1 \leq p < \infty. Using (10) and (11), the anisotropic p-Laplace operator of f \in H(V) at a vertex u \in V can be computed by

(\Delta^a_{w,p} f)(u) = \sum_{v \sim u} \gamma_{uv}^p |f(v) - f(u)|^{p-2} (f(v) - f(u)),   (16)

which is the analogue of the continuous p-Laplacian \Delta_p f = \mathrm{div}(|\nabla f|^{p-2} \nabla f). This operator is nonlinear if p \neq 2. To avoid a zero denominator in (16) when p < 2, |f(v) - f(u)| is replaced by |f(v) - f(u)|_\epsilon = |f(v) - f(u)| + \epsilon, where \epsilon \to 0 is a small fixed constant.

In the case where p = 2, and with \gamma_{uv} = \sqrt{w(u, v)}, we obtain the classical Laplacian, given by

(\Delta^a_{w,2} f)(u) = \sum_{v \sim u} w(u, v) (f(v) - f(u)).   (17)
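As an illustration, the sum in (16) can be transcribed directly; the toy graph and function below are arbitrary choices of ours, and the epsilon-regularisation mentioned above is applied only when p < 2.

import math

w = {0: {1: 0.9, 2: 0.1}, 1: {0: 0.9, 2: 0.5}, 2: {0: 0.1, 1: 0.5}}
f = {0: 1.0, 1: 0.2, 2: -0.4}

def anisotropic_p_laplacian(f, w, u, p, eps=1e-12):
    """(Delta^a_{w,p} f)(u) = sum_{v~u} gamma_uv^p |f(v)-f(u)|^(p-2) (f(v)-f(u)),
    with gamma_uv = sqrt(w(u, v)); |.| is regularised by eps when p < 2."""
    total = 0.0
    for v, wuv in w[u].items():
        diff = f[v] - f[u]
        mag = abs(diff) + (eps if p < 2 else 0.0)
        total += math.sqrt(wuv) ** p * mag ** (p - 2) * diff
    return total

# For p = 2 this reduces to the classical weighted Laplacian (17):
print(anisotropic_p_laplacian(f, w, 0, p=2))
# approximately 0.9*(-0.8) + 0.1*(-1.4) = -0.86 (up to floating point)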
Similarly, using \gamma_{uv} = \sqrt{w(u, v)/\delta_w(u)}, we obtain the normalized Laplacian

(\Delta_{w,2} f)(u) = f(u) - \frac{1}{\delta_w(u)} \sum_{v \sim u} w(u, v) f(v).   (18)

The weighted isotropic p-Laplace operator, denoted \Delta^i_{w,p} : H(V) \to H(V), is defined by

(\Delta^i_{w,p} f)(u) = \frac{1}{2} \mathrm{div}_w(\|\nabla_w f\|^{p-2} d_w f)(u),   (19)

with 1 \leq p < \infty.

3. Nonlocal infinity Laplacian equation on graphs

Eqs. (6) and (7) are two distinct forms of the same partial difference equation L(f)(u) = 0, which can be rewritten as

L(f)(u) = \max_{v \sim u}\left(\sqrt{w(u, v)}\, \max(f(v) - f(u), 0)\right) + \min_{v \sim u}\left(\sqrt{w(u, v)}\, \min(f(v) - f(u), 0)\right) = 0.   (20)

Indeed, considering a graph G(V, E, w) and Eq. (6), if V = \Omega \subset \mathbb{R}^n, E = \{(x, y) \in \Omega \times \Omega : w(x, y) > 0\} and w(x, y) = 1/|x - y|^{2\alpha}, we fall back on (20). Similarly, considering Eq. (7), if we replace sup by max and inf by min, and use a weight function defined as

w(x, y) = 1 if |x - y| < \varepsilon, and 0 otherwise,   (21)

we also fall back on Eq. (20). Using the discrete gradient definitions, we can rewrite Eq. (20) as

L(f)(u) = \|(\nabla_w^+ f)(u)\|_\infty - \|(\nabla_w^- f)(u)\|_\infty = 0,   (22)

which we denote by \Delta_{w,\infty} f(u), the nonlocal infinity Laplacian.

We now provide a new relationship between the anisotropic p-Laplacian and the morphological gradients.

Proposition 1. For 1 \leq p < +\infty, at a vertex u \in V,

(\Delta^a_{w,p+1} f)(u) = \|(\nabla_{w'}^+ f)(u)\|_p^p - \|(\nabla_{w'}^- f)(u)\|_p^p, \quad \text{with } w'(u, v) = w(u, v)^{(p+1)/p}.   (23)

Proof. From (16), we have

(\Delta^a_{w,p} f)(u) = \sum_{v \sim u} w_{uv}^{p/2} |f(v) - f(u)|^{p-2} (f(v) - f(u)).

Since |x| = x^+ + x^- and x = x^+ - x^-, one has, with A = f(v) - f(u),

(\Delta^a_{w,p+1} f)(u) = \sum_{v \sim u} w_{uv}^{(p+1)/2} (A^+ - A^-)(A^+ + A^-)^{p-1}.   (24)

Then, by developing (A^+ + A^-)^{p-1}, since A^+ A^- = 0, it is easy to obtain (23).

If we take some particular values of p, we have the following relations. For p = 1, (23) gives

(\Delta^a_{w,2} f)(u) = \|(\nabla_{w^2}^+ f)(u)\|_1 - \|(\nabla_{w^2}^- f)(u)\|_1.   (25)

The classical combinatorial Laplacian operator can therefore be expressed as the difference of the norms of two morphological gradients, external and internal. This new expression of the p-Laplacian is the basis of our proposed expression of the infinity Laplacian on graphs.
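Before passing to the limit formally in the next subsection, the operator in (22) can already be evaluated numerically from the upwind gradients (14). The following sketch (toy graph, gamma = sqrt(w), and helper names are our illustrative choices) does so at a single vertex.

import math

w = {0: {1: 0.9, 2: 0.1}, 1: {0: 0.9, 2: 0.5}, 2: {0: 0.1, 1: 0.5}}
f = {0: 1.0, 1: 0.2, 2: -0.4}

def upwind_grad_norms(f, w, u):
    """Return (||grad^+_w f(u)||_inf, ||grad^-_w f(u)||_inf), with gamma = sqrt(w)."""
    plus, minus = 0.0, 0.0
    for v, wuv in w[u].items():
        d = math.sqrt(wuv) * (f[v] - f[u])       # (d_w f)(u, v)
        plus = max(plus, max(0.0, d))            # external derivative, Eq. (14)
        minus = max(minus, -min(0.0, d))         # internal derivative, Eq. (14)
    return plus, minus

def infinity_laplacian(f, w, u):
    """Nonlocal infinity Laplacian (22): ||grad^+||_inf - ||grad^-||_inf."""
    plus, minus = upwind_grad_norms(f, w, u)
    return plus - minus

print(infinity_laplacian(f, w, 1))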
3.1. The infinity Laplacian on graphs

To derive a formulation of the infinity Laplacian on graphs, we consider the limit, when p tends to infinity, of p-harmonic functions on graphs. To do so, we formally show that this limit satisfies a nonlocal finite difference equation, which we name the infinity Laplacian equation. Let us first introduce p-harmonic functions on graphs.

Definition 1. Given a weighted graph G = (V, E, w) and a subset A \subset V, a function f having specified values on \partial A is called p-harmonic over A if f is a minimizer of the following p-energy functional:

J_{w,p}(f) = \frac{1}{2p} \sum_{u \in A} \|(\nabla_w f)(u)\|_p^p.   (26)

Proposition 2. The minimization problem (26) has a unique minimizer, and this minimizer satisfies the p-Laplacian equation \Delta^a_{w,p} f(u) = 0, \forall u \in A.

Proof. When p \geq 1, the energy J_{w,p}(f) is a convex functional of functions of H(V) \cong \mathbb{R}^N. Since J_{w,p}(f) tends to infinity as \|f\| tends to infinity (we recall that f is by definition a vector of \mathbb{R}^N, where N is the number of vertices), by standard arguments of convex analysis, there exists a global minimizer of (26), and any local minimizer of J_{w,p}(f) is a global minimizer. Let C(V) be the set of constant functions of H(V). J_{w,p}(f) is a strictly convex functional of functions in H(V) \setminus C(V); in this case, the minimization of J_{w,p}(f) has a unique solution. This is also the case for the functions in C(V): indeed, J_{w,p}(f) is translation invariant, i.e., J_{w,p}(f + c) = J_{w,p}(f), \forall c \in \mathbb{R}. Thus, there exists only one global minimum. To find a solution of (26), we consider the following system of equations:

\frac{\partial J_{w,p}(f)}{\partial f(u)} = -\Delta^a_{w,p} f(u) = 0, \quad \forall u \in A.   (27)

If this system has a solution, then it is the solution of (26).

Now we formally show that the limit of the Euler-Lagrange equation (27), when p tends to infinity, is \Delta_{w,\infty} f. From Proposition 1, we have

(\Delta^a_{w,p} f)(u) = \|(\nabla_{w'}^+ f)(u)\|_{p-1}^{p-1} - \|(\nabla_{w'}^- f)(u)\|_{p-1}^{p-1} = 0,

with w'(u, v) = w(u, v)^{p/(p-1)}, which we can easily simplify into

\|(\nabla_w^+ f)(u)\|_{p-1} - \|(\nabla_w^- f)(u)\|_{p-1} = 0.   (28)

Using the definition of the gradient p-norm and letting p \to \infty in (28), one can prove that

\lim_{p \to \infty} \|(\nabla_w^+ f)(u)\|_{p-1} - \|(\nabla_w^- f)(u)\|_{p-1} = \|(\nabla_w^+ f)(u)\|_\infty - \|(\nabla_w^- f)(u)\|_\infty = 0.   (29)

Based on Proposition 1 and the previous limit, we propose a new definition of the \infty-Laplacian:

(\Delta_{w,\infty} f)(u) = \|(\nabla_w^+ f)(u)\|_\infty - \|(\nabla_w^- f)(u)\|_\infty.   (30)

With an appropriate weight function (w(u, v) = 1 for all (u, v) \in E), the proposed definition recovers well-known finite difference approximations of the infinity Laplacian. Indeed, (30) can be rewritten as (with u \sim u)

(\Delta_{1,\infty} f)(u) = -2 f(u) + \min_{v \sim u} f(v) + \max_{v \sim u} f(v).

For instance, (\Delta_{1,\infty} f)(u) is known in the community of (discrete) mathematical morphology as the morphological Laplacian.

3.2. Existence and uniqueness of solutions

We consider that data are defined on a general domain represented by a graph G_w = (V, E). Let f^0 : V \to \mathbb{R} be a function. Let A \subset V be the subset of vertices with unknown values and \partial A its external boundary. We study the existence and uniqueness of the solution of the nonlocal infinity Laplacian equation defined as

\Delta_{w,\infty} f(u) = \|(\nabla_w^+ f)(u)\|_\infty - \|(\nabla_w^- f)(u)\|_\infty = 0, \quad \forall u \in A,
f(u) = f^0(u), \quad \forall u \in \partial A.   (31)

Theorem 1. Eq. (31) has a unique solution.

Generalizing the arguments of [14], we now show that there exists a unique solution of (31). First, we use the fact that \Delta_{w,\infty} f(u) = 0 implies that f has no regional maximum. By contradiction, suppose v_0 \in V is a regional maximum of f. Then, if F \subseteq V is a connected subset of vertices with v_0 \in F, we have f(v_0) = \max_F f > f(u), \forall u \in \partial F. Let H = \{u \in F,\ f(u) = f(v_0)\}; then, \forall u \in \partial H, \max_{v \sim u}(f(u) - f(v))^+ = 0 < \max_{v \sim u}(f(v) - f(u))^+, i.e., \|(\nabla_w^- f)(u)\|_\infty < \|(\nabla_w^+ f)(u)\|_\infty, which is equivalent to \Delta_{w,\infty} f(u) > 0 and provides a contradiction. Therefore \Delta_{w,\infty} f(u) = 0 implies that f has no regional maximum. Using this result and the comparison principle, we now show the uniqueness.

Proof. Suppose we have f and h with f \leq h on \partial A; we want to show that this implies f \leq h on A. By contradiction, suppose that M = \sup_V (f - h) > 0. Let H = \{u \in V,\ f(u) - h(u) = M\} and F = \{u \in H,\ f(u) = \max_H f\}. If u \in H, then (f - h) reaches its maximum at u. This implies that \|(\nabla_w^- f)(u)\|_\infty \geq \|(\nabla_w^- h)(u)\|_\infty and \|(\nabla_w^+ f)(u)\|_\infty \leq \|(\nabla_w^+ h)(u)\|_\infty. By hypothesis, we have \Delta_{w,\infty} f(u) = \Delta_{w,\infty} h(u), therefore, \forall u \in H:

\|(\nabla_w^- f)(u)\|_\infty = \|(\nabla_w^- h)(u)\|_\infty,
\|(\nabla_w^+ f)(u)\|_\infty = \|(\nabla_w^+ h)(u)\|_\infty.   (32)

Now let us show that the set F is a regional maximum. By contradiction, we can choose v_0 \in \partial F with v_0 \sim u for some u \in F, such that f(v_0) \geq \max_F f. Then, it follows that \|(\nabla_w^+ f)(u)\|_\infty = f(v_0) - f(u). Since v_0 \notin H, we must have f(v_0) - h(v_0) < f(u) - h(u). Thus f(v_0) - f(u) < h(v_0) - h(u), and so \|(\nabla_w^+ h)(u)\|_\infty > \|(\nabla_w^+ f)(u)\|_\infty, contradicting (32). Hence F is a regional maximum of f, which contradicts the fact that f has no regional maximum. Therefore f \leq h on A, and exchanging the roles of f and h yields the uniqueness of the solution.

For the existence, we consider the following nonlocal averaging operator:

\mathrm{NLA}(f)(u) = \frac{f(u) + \|(\nabla_w^+ f)(u)\|_\infty + f(u) - \|(\nabla_w^- f)(u)\|_\infty}{2}.   (33)

First, we recall the Brouwer fixed point theorem: a continuous function from a convex, compact subset of a Euclidean space to itself has a fixed point. Then, we identify H(V) with \mathbb{R}^N and consider the set K = \{f \in H(V) \mid f(u) = g(u)\ \forall u \in \partial A, \text{ and } m \leq f(u) \leq M\ \forall u \in A\}, where g = f^0 denotes the boundary data, m = \min_{u \in \partial A} g(u) and M = \max_{u \in \partial A} g(u). By definition, K is a convex and compact subset of \mathbb{R}^N. It is easy to show that the map f \mapsto \mathrm{NLA}(f) is continuous and maps K into K. So, by the Brouwer fixed point theorem, the map NLA has a fixed point f, i.e., \mathrm{NLA}(f) = f, which is a solution of (31). This completes the proof.
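The averaging operator (33) is also the quantity iterated numerically in the next section. A minimal transcription, under the same illustrative conventions as the previous sketches (toy graph, gamma_uv = sqrt(w(u, v)), our own helper names), could be:

import math

w = {0: {1: 0.9, 2: 0.1}, 1: {0: 0.9, 2: 0.5}, 2: {0: 0.1, 1: 0.5}}
f = {0: 1.0, 1: 0.2, 2: -0.4}

def nla(f, w, u):
    """Nonlocal averaging operator (33):
    NLA(f)(u) = ( f(u) + ||grad^+_w f(u)||_inf + f(u) - ||grad^-_w f(u)||_inf ) / 2."""
    plus, minus = 0.0, 0.0
    for v, wuv in w[u].items():
        d = math.sqrt(wuv) * (f[v] - f[u])
        plus = max(plus, max(0.0, d))
        minus = max(minus, -min(0.0, d))
    return 0.5 * (f[u] + plus + f[u] - minus)

print(nla(f, w, 1))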
4. Applications

Many tasks in image processing and computer vision can be formulated as interpolation problems. Image inpainting and semi-supervised segmentation are typical examples of such interpolation problems. Interpolating data consists in constructing new values for missing data in coherence with a set of known data. Now that we have proposed a new explicit expression of a discrete nonlocal \infty-Laplacian, we are going to use this expression as a framework for nonlocal interpolation of missing data on graphs, which constitutes a natural extension to graphs of the continuous infinity Laplacian equation. Known data (the values on \partial A, the boundary of the set A of unknown vertices) are considered as Dirichlet boundary conditions, and the interpolation is obtained by solving an appropriate boundary value problem. The algorithm can be summarized as follows. Given f^0 : V \to \mathbb{R} and the set A \subset V of vertices whose values are unknown, and using the nonlocal averaging operator (33),

f^0(u) = f^0(u), \quad \forall u \in A,
f^n(u) = f^0(u), \quad \forall u \in \partial A,
f^{n+1}(u) = \mathrm{NLA}(f^n)(u), \quad \forall u \in A.   (34)

The scheme (34) is iterated until convergence is reached. We consider that convergence is reached when the Mean Squared Error between f^{n+1} and f^n is lower than a user-defined threshold.

In this section, we will consider two concrete applications: inpainting, and semi-supervised image and data clustering. For the two examples we will consider both images and data as particular graphs, with specific topologies.
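A compact version of the interpolation scheme (34), with a Jacobi-type update, the NLA operator restated for self-containment, and a mean-squared-error stopping test as described above, might look as follows; the path graph, the tolerance and the helper names are illustrative assumptions, not values from the paper.

import math

def nla(f, w, u):
    """Nonlocal averaging operator (33), rearranged, with gamma = sqrt(w)."""
    plus, minus = 0.0, 0.0
    for v, wuv in w[u].items():
        d = math.sqrt(wuv) * (f[v] - f[u])
        plus, minus = max(plus, max(0.0, d)), max(minus, -min(0.0, d))
    return 0.5 * (2.0 * f[u] + plus - minus)

def interpolate(f0, w, interior, tol=1e-8, max_iter=100000):
    """Iterate (34): keep f = f0 on the boundary, apply NLA on the unknown set A."""
    f = dict(f0)
    for _ in range(max_iter):
        f_new = dict(f)
        for u in interior:
            f_new[u] = nla(f, w, u)
        mse = sum((f_new[u] - f[u]) ** 2 for u in interior) / len(interior)
        f = f_new
        if mse < tol:                      # user-defined convergence threshold
            break
    return f

# Tiny example: a 4-vertex path graph, values known at the two ends only.
w = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0, 3: 1.0}, 3: {2: 1.0}}
f0 = {0: 0.0, 1: 0.0, 2: 0.0, 3: 3.0}      # vertices 1 and 2 form the unknown set A
print(interpolate(f0, w, interior=[1, 2]))  # converges to roughly {1: 1.0, 2: 2.0}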
4.1. Graph construction

There exist several popular methods to transform discrete data f^0 : \mathbb{Z}^n \to \mathbb{R}^m into a weighted graph structure. Considering a set of vertices V such that the data are embedded by functions of H(V), the construction of such a graph consists in modeling the neighborhood relationships between the data through the definition of a set of edges E, using a pairwise distance measure \mu : V \times V \to \mathbb{R}^+. In the particular case of images, constructions based on geometric neighborhoods are particularly well-adapted to represent the geometry of the space, as well as the geometry of the function defined on that space. One can quote:

• Grid graphs, which are the most natural structures to describe an image with a graph. Each pixel is connected by an edge to its adjacent pixels. Classical grid graphs are 4-adjacency and 8-adjacency grid graphs. Larger adjacencies can be used to obtain nonlocal graphs.

• Nonlocal patch-based graphs, which provide a very efficient way to capture texture information in images. The construction is similar to that of a grid graph, but each pixel u is also connected to every pixel v which lies in a square window centered on u. The similarity between two pixels is computed using a Euclidean distance, where each pixel is represented by a patch feature vector F_u^\tau = (F_v)_{v \in W_\tau(u)} (i.e., the set of values F_v where v lies in a square window W_\tau(u) of size (2\tau + 1) \times (2\tau + 1) centered at the pixel u), in order to incorporate nonlocal features.

• k-nearest-neighbor graphs (k-NNG), where each vertex v_i is connected to its k nearest neighbors according to \mu. Such a construction yields a directed graph, as the neighborhood relationship is not symmetric. Nevertheless, an undirected graph can be obtained by adding an edge between two vertices v_i and v_j if v_i is among the k nearest neighbors of v_j, or if v_j is among the k nearest neighbors of v_i.

The similarity between two vertices is computed according to a measure of similarity g : E \to \mathbb{R}^+, which satisfies

w(u, v) = g(u, v) if (u, v) \in E, and 0 otherwise.

Usual similarity functions are as follows:

g_0(u, v) = 1,
g_1(u, v) = (\mu(F(u), F(v)) + \epsilon)^{-1} with \epsilon > 0, \epsilon \to 0,
g_2(u, v) = \exp(-\mu(F(u), F(v))/\sigma^2) with \sigma > 0,

where \sigma depends on the variation of the function \mu and controls the similarity scale.

4.2. Image inpainting

The inpainting process consists in filling in the missing parts of an image or a video with the most appropriate data, in order to obtain harmonious and hardly detectable reconstructed zones. In order to preserve texture and fine details, the graphs used are nonlocal patch-based graphs. In Fig. 1, the security fencing (whose mask was provided by the user) has been removed to free the bears and improve the picture's aesthetics. Another result, on a texture image, is reported in Fig. 2. We can observe that the reconstructed zones are hardly detectable and merge harmoniously with the preserved data in both results.

Fig. 1. Bear inpainting.
Fig. 2. Chalk inpainting.

4.3. Semi-supervised image and data clustering

In this subsection we present the application of our method to semi-supervised segmentation. Semi-supervised segmentation provides a segmentation of the data into two clusters given initial seeds: a user sets initial labels in order to obtain a segmentation of a target zone of his choice. The segmentation is performed by interpolating the label function over the graph. In Fig. 3, the labels are given by white and grey colors, the data are the pixels of the image, and the graph is built as a simple grid graph. In Fig. 4, the labels are given by the red and blue boxes, the data are a set of handwritten digits represented by feature vectors of \mathbb{R}^{28 \times 28}, and the graph is organized as a k-nearest-neighbor graph. These initial labels are then interpolated to similar regions all over the graph, using weights computed from the image (respectively, the data) to be segmented. At convergence of this process, the data are partitioned into two labeled classes. In Fig. 3, the boundary between the segmented regions is marked in blue. In Fig. 4, the two classes are represented by red and blue boxes.

Fig. 3. Semi-supervised image segmentation.
Fig. 4. Semi-supervised data clustering.
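To connect the graph constructions of Section 4.1 with the clustering example of Fig. 4, the sketch below builds a symmetrised k-NN graph over synthetic feature vectors with g_2-type weights (taking \mu as the squared Euclidean distance) and seeds one label per class. The data, k, \sigma and the seed choices are all illustrative assumptions, and the final interpolation step reuses the interpolate helper sketched after Eq. (34).

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),      # class 1 features (synthetic)
               rng.normal(5.0, 1.0, (20, 2))])     # class 2 features (synthetic)
k, sigma = 5, 2.0

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances (mu)
w = {u: {} for u in range(len(X))}
for u in range(len(X)):
    for v in np.argsort(d2[u])[1:k + 1]:              # k nearest neighbours, self excluded
        v = int(v)
        g2 = float(np.exp(-d2[u, v] / sigma ** 2))    # weight function g_2
        w[u][v] = g2                                   # add the edge in both directions
        w[v][u] = g2                                   # to obtain an undirected k-NN graph

seeds = {0: -1.0, len(X) - 1: +1.0}                    # one user seed per class
f0 = {u: seeds.get(u, 0.0) for u in range(len(X))}
interior = [u for u in range(len(X)) if u not in seeds]
# labels = interpolate(f0, w, interior)   # see the sketch after Eq. (34);
# the sign of labels[u] then assigns vertex u to one of the two clusters.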
5. Conclusion

In this paper, we have proposed an adaptation of the infinity Laplacian equation to weighted graphs, as a nonlocal partial difference equation. We also proved the existence and uniqueness of the solution of this equation, and illustrated the behavior and the potential of such an equation on interpolation problems such as image inpainting and semi-supervised data clustering.

References

[1] G. Aronsson, Extension of functions satisfying Lipschitz conditions, Ark. Mat. 6 (1967) 551–561.
[2] G. Aronsson, M.G. Crandall, P. Juutinen, A tour of the theory of absolutely minimizing functions, Bull. Am. Math. Soc. 41 (2004) 439–505.
[3] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 679–698.
[4] V. Caselles, J. Morel, C. Sbert, An axiomatic approach to image interpolation, IEEE Trans. Image Process. 7 (1998) 376–386.
[5] A. Chambolle, E. Lindgren, R. Monneau, A Hölder infinity Laplacian, ESAIM: Control Optim. Calc. Var. 18 (2012) 799–835.
[6] G. Cong, M. Esser, B. Parvin, G. Bebis, Shape metamorphism using p-Laplacian equation, in: Proceedings of ICPR, vol. 4, 2004, pp. 15–18.
[7] M. Crandall, A visit with the infinity Laplace equation, in: Lecture Notes in Mathematics, 2008, pp. 75–122.
[8] M. Crandall, G. Gunnarsson, P. Wang, Uniqueness of ∞-harmonic functions and the Eikonal equation, Commun. Partial Diff. Eqs. 32 (2007) 1587–1615.
[9] X. Desquesnes, A. Elmoataz, O. Lézoray, V.T. Ta, Efficient algorithms for image and high dimensional data processing using Eikonal equation on graphs, in: International Symposium on Visual Computing, vol. 2, 2010, pp. 647–658.
[10] C. Elion, L.A. Vese, An image decomposition model using the total variation and the infinity Laplacian, in: Proceedings of SPIE, vol. 6498W, 2007, pp. 1–10.
[11] A. Elmoataz, O. Lézoray, S. Bougleux, Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing, IEEE Trans. Image Process. 17 (2008) 1047–1060.
[12] P. Juutinen, B. Kawohl, On the evolution governed by the infinity Laplacian, Math. Ann. 335 (2006) 819–851.
[13] P. Juutinen, P. Lindqvist, J. Manfredi, The ∞-eigenvalue problem, Arch. Ration. Mech. Anal. 148 (1999) 89–105.
[14] E. Le Gruyer, On absolutely minimizing Lipschitz extensions and PDE Δ∞(u) = 0, Nonlinear Diff. Eqs. Appl. 14 (2007) 29–55.
[15] C. Le Guyader, L. Guillot, Extrapolation of vector fields using the infinity Laplacian and with applications to image segmentation, Commun. Math. Sci. 7 (2009) 423–452.
[16] J. Manfredi, M. Parviainen, J. Rossi, On the definition and properties of p-harmonious functions, in: Workshop on New Connections Between Differential and Random Turn Games, PDE's and Image Processing, 2009.
[17] J. Manfredi, M. Parviainen, J. Rossi, Dynamic programming principle for tug-of-war games with noise, ESAIM: Control Optim. Calc. Var. 18 (2012) 81–90.
[18] A.M. Oberman, A convergent difference scheme for the infinity Laplacian: construction of absolutely minimizing Lipschitz extensions, Math. Comput. 74 (2005) 1217–1230.
[19] Y. Peres, O. Schramm, S. Sheffield, D.B. Wilson, Tug-of-war and the infinity Laplacian, J. Am. Math. Soc. 22 (2009) 167–210.
[20] V.T. Ta, A. Elmoataz, O. Lézoray, Nonlocal PDEs-based morphology on weighted graphs for image and data processing, IEEE Trans. Image Process. 20 (2011) 1504–1516.
[21] V. Torre, T. Poggio, On edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 147–163.
[22] A.L. Yuille, T.A. Poggio, Scaling theorems for zero crossings, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 15–25.