Special cases of the planar least gradient problem

Wojciech Górny a, Piotr Rybka a,b,∗, Ahmad Sabra a
a Faculty of Mathematics, Informatics and Mechanics, The University of Warsaw, ul. Banacha 2, 02-097 Warsaw, Poland
b Graduate School of Mathematical Sciences, University of Tokyo, Komaba 3-8-1, Tokyo 153-8914, Japan
Article info
Article history: Received 21 May 2016; Accepted 24 November 2016
Communicated by S. Carl

Abstract
We study two special cases of the planar least gradient problem. In the first one,
the boundary conditions are imposed on a part of a strictly convex domain. In the
second case, we impose the Dirichlet data on the boundary of a rectangle, an example
of convex but not strictly convex domain. We show the existence of solutions and
study their properties for particular cases of boundary data.
© 2016 Elsevier Ltd. All rights reserved.
MSC: primary 49J10; secondary 49Q10, 49J20, 35A15
Keywords: Planar least gradient; Dirichlet and natural boundary conditions; Convex but non-strictly convex domains
1. Introduction
We are interested in the two-dimensional case of the least gradient problem
min { ∫Ω |Du| : u ∈ BV(Ω), TΓ u = f },   (1.1)
where Γ is an open subset of ∂Ω, and TΓ u denotes the restriction of the trace T u to Γ.
We establish connections between (1.1) and the following problem appearing in Free Material Design (FMD) models, see [8,19],
inf { ∫Ω |p| : p ∈ L1(Ω; R2), div p = 0, p · ν|Γ = g },   (1.2)
∗ Corresponding author at: Faculty of Mathematics, Informatics and Mechanics, The University of Warsaw, ul. Banacha 2, 02-097
Warsaw, Poland.
E-mail address: [email protected] (P. Rybka).
http://dx.doi.org/10.1016/j.na.2016.11.020
where ν is the outer normal to ∂Ω and p · ν|Γ is a properly defined trace operator. We note at this stage that
a regularity assumption on ∂Ω must be made so that the trace is well-defined, see [6]. Since any solution
u to (1.1) is scalar valued, as opposed to the vector field p in (1.2), we may say that (1.1) represents a
dimensional reduction of (1.2).
Section 2 is devoted to establishing a relationship between the two minimization problems, when Ω is a
convex set in the plane. We show how to reduce (1.2) to (1.1) and explain a relation between the boundary
data f and g. In order to do this, we first clarify the notions of traces for Lp fields and vector valued measures,
see Definition 2.2 and Remark 2.1.
Once this is done, we address the following problems:
(1) The boundary data are specified on a set Γ , which is essentially smaller than ∂Ω . The rest of the
boundary remains free. We study this problem in Section 3 when Ω is strictly convex with C 1 boundary
and Γ is an arc. Our existence result, Theorem 3.1, is shown for continuous functions f having one-sided
limits at the endpoints of Γ . Uniqueness and further properties are studied in Theorem 3.2.
(2) Loads at the boundary in (1.2) may be concentrated at a few points and be zero elsewhere. Such a
situation corresponds to functions f in (1.1) that are piecewise constant. An example of this type is
analyzed in Section 3.4 also for strictly convex C 1 domains Ω .
(3) We relax the strict convexity condition in Section 4, and consider the least gradient problem in the case
when Ω is a rectangle. The data f are assumed to be continuous on ∂Ω and monotone on its sides.
The main result is described in Theorem 4.1. We use it in Section 4.1 to study the corresponding FMD
problem in the case when the load g, applied to the boundary of the rectangle Ω, is discontinuous and
piecewise constant.
The least gradient problem, (1.1), with Γ = ∂Ω, has attracted the attention of many authors [4,18,21,20,23,26,24,11], who addressed (1.1) as well as its anisotropic or non-homogeneous generalizations.
We are particularly interested in the result by Sternberg, Williams and Ziemer, [26], who showed existence
and uniqueness of solutions of (1.1) when Γ = ∂Ω and the boundary data f are continuous. In the two-dimensional case, the condition imposed in [26] on the region Ω boils down to strict convexity of Ω. The authors of [26] also showed that solutions to the least gradient problem are less smooth
than the boundary data. In particular, they proved that if f ∈ C α (∂Ω ), then the solution u ∈ C α/2 (Ω̄ ).
The method of proof in [26] is constructive, and it is based on the fundamental result in [4] showing
that superlevel sets of solutions to (1.1) are minimal surfaces. In Section 3, we extend the method developed
in [26] to find solutions to (1.1) when Γ ⊊ ∂Ω by constructing explicitly their level sets, see Section 3.2. We
apply this approach in Theorem 3.2 to study several examples of boundary data for which we can establish
basic properties of the solutions like uniqueness, continuity and presence of level sets with positive Lebesgue
measure. Solutions to (1.1) with continuous boundary data are not necessarily continuous in Ω̄ , contrary to
the case when Γ = ∂Ω .
As we mentioned above, a proper definition of traces is an issue. Here, we will point to one of the
aspects. It is well-known that the trace space of BV(Ω) equals L1(∂Ω). It is also well-known that the
functional defined in (1.1) is not lower semicontinuous on {u ∈ BV (Ω ) : T u = f }, see [2,15] for its lower
semicontinuous envelope. This implies that the direct methods of the calculus of variations need not be
best suited to solve (1.1). In [21], by a different method, the authors showed that a solution to (1.1) exists,
provided that f ∈ L1 (∂Ω ). However, this solution does not necessarily satisfy the boundary condition in
the trace sense. This problem is highlighted in [24,11], where the authors showed that the space of traces of
solutions to (1.1) is in fact smaller than L1 (∂Ω ). Another observation, related to the construction in [21], is
the lack of uniqueness of solutions, even when f has jump discontinuities at just a few points. These remarks
show that we have to be very careful about the following:
(a) solutions to (1.1) may not exist for the data in the space of traces of BV functions;
(b) uniqueness of solutions might be lost if we relax the continuity of the data.
In Section 4, the strict convexity of Ω is relaxed and solutions to (1.1) are constructed when Ω is a
rectangle. For such sets the general theory in [26] fails. Our method of proof is based on an approximation
of the rectangle Ω by strictly convex sets and on a stability result shown in [23]. A regularity result similar to
the one in [26] is also obtained in this case, see Remark 4.1.
We include in the paper two examples drawn from FMD. They are presented in Sections 3.4 and 4.1. These examples are inspired by existing models found in the engineering literature, see [8,19], and illustrate explicitly what the solutions look like, i.e. how the material should be distributed in the design region according to the given load at the boundary. An ongoing research project, conducted by the second author of this paper and T. Lewiński, is dedicated to further studies of more physical situations than those considered in this paper. An example similar to the one in Section 3.4 can be found in [5, Section 5], where a solution was constructed with the help of mass transport theory.
Finally, we emphasize the fact that the proofs in this paper depend essentially on the special geometry of
the plane. Solutions are found explicitly by constructing their superlevel sets whose boundaries are minimal
surfaces. In R2 , line segments are the only minimal surfaces. However, in higher dimensions, even when
n = 3, minimal surfaces are not unique and have a much more complicated geometry, see [3,10,16,22]. In
this case, further work is needed. The extension of our analysis to R3 will be investigated separately in a future research project.
2. A relationship between two minimization problems
In this section, we show a connection between solutions to the following problems,
min { ∫Ω |Du| : u ∈ BV(Ω), T u = f },   (2.1)
and
inf { ∫Ω |p| : p ∈ L1(Ω; R2), div p = 0, p · ν|∂Ω = g }.   (2.2)
The first one, (2.1), is the well-known Least Gradient problem. We assume in these problems that the
boundary of Ω is Lipschitz continuous and f ∈ L1 (∂Ω ). The restriction T u at the boundary is understood
in the sense of trace of BV functions.
The second problem, (2.2), appears in Free Material Design models, where the goal is to find the optimal
material distribution of a body to support a load applied to its boundary. Since p is a field in L1 (Ω ),
then div p is understood in the distribution sense, see Definition 2.1. For the normal trace p · ν|∂Ω to be
well defined, with ν the outer unit normal to ∂Ω , the boundary of Ω should belong to a special class of
Lipschitz domains called deformable Lipschitz domains, see [7, Definition 3.1]. This class of sets contains
convex regions, [6, Remark 2.2], which are considered in this paper.
We introduce now the definition of the divergence of a field in Lp , 1 ≤ p ≤ ∞, along with the notion
of its normal trace p · ν|∂Ω . These definitions generalize to the case, when the field is a Radon measure, as
described in Remark 2.1.
Definition 2.1. Assume F is a vector field in Lp (Ω ; Rn ), 1 ≤ p ≤ ∞, or F is a vector valued Radon measure
on Ω ⊆ Rn . F is called a divergence measure field if the distributional divergence div F is a Radon measure
and its total variation,
|div F|(Ω) = sup { ∫Ω F · ∇ϕ : ϕ ∈ C01(Ω), ∥ϕ∥L∞(Ω) ≤ 1 },
is finite.
Definition 2.2. If F ∈ Lp(Ω; Rn), 1 ≤ p ≤ ∞, is a divergence measure field, then the normal trace F · ν|∂Ω is a continuous functional on Lip(∂Ω), the space of Lipschitz continuous functions, defined by the following formula
⟨F · ν|∂Ω, ϕ⟩ = ⟨div F, φ⟩ + ∫Ω F(x) · ∇φ(x) dx,   (2.3)
where φ is the Whitney extension of ϕ to Lip(Rn) preserving the Lipschitz constant, see [14, Section 2.10.43]. By [7, Theorem 3.1], the operator F · ν|∂Ω is well defined as a continuous linear functional on the space of Lipschitz continuous functions on ∂Ω and is independent of the choice of the extension φ.
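Let us note, as an elementary illustration, that when F is smooth up to the boundary, formula (2.3) is just the Gauss–Green theorem: integrating by parts gives ⟨div F, φ⟩ + ∫Ω F · ∇φ dx = ∫∂Ω (F · ν) ϕ dH1, so the normal trace coincides with the pointwise one. This observation is used for the mollified fields in the proof of Proposition 2.1 below.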
Remark 2.1. In the case where F is a vector valued measure whose distributional divergence is a measure,
the normal trace, defined by (2.3), is a continuous functional over the space of Lip (γ, ∂Ω ) with γ > 1,
[7, Theorem 3.1]. The space Lip (γ, C) can be understood as the space of Lipschitz functions with bounded
and continuous partial derivatives of order smaller or equal to γ, see [25, Chapter VI, Section 2.3] for a
detailed description of these functions and their extensions.
Since the space L1 is not weakly∗ closed, we do not expect the existence of minimizers of (2.2). However,
as shown in this section, we can still establish a relation between the infimum of (2.2) and the solutions to
(2.1). This is described in the following theorem.
Theorem 2.1. Let us assume Ω ⊆ R2 is convex and u is a solution to (2.1) with T u = f ∈ L1(∂Ω). Then g = ∂f/∂τ is a continuous functional over Lip(∂Ω), and q = R−π/2 Du is a solution to (2.2) in the following sense:
inf { ∫Ω |p| : p ∈ L1(Ω; R2), div p = 0, p · ν|∂Ω = g on ∂Ω } = |q|(Ω) ≡ |Du|(Ω).
Here, Rα denotes the rotation by angle α around the origin, and τ is the tangent such that (ν, τ) is positively oriented. Moreover, q is a divergence measure field, with div q = 0, and q · ν|∂Ω = g, as defined in Remark 2.1.
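Before giving the proof, it may be instructive to verify the statement in the smooth case; the computation below is only a sketch of the general argument. If u ∈ C1(Ω̄) with T u = f, set q = R−π/2 ∇u = (∂u/∂x2, −∂u/∂x1). Then div q = ∂²u/∂x1∂x2 − ∂²u/∂x2∂x1 = 0 and |q| = |∇u| pointwise, while on ∂Ω
q · ν = ∇u · Rπ/2 ν = ∇u · τ = ∂f/∂τ,
since Rπ/2 ν = τ for the positively oriented pair (ν, τ). Thus q is an admissible field for (2.2) with the load g = ∂f/∂τ and with mass ∫Ω |q| dx = ∫Ω |∇u| dx; Proposition 2.1 below makes this rigorous for L1 fields.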
In order to prove this theorem, we need the following result.
Proposition 2.1. Suppose p ∈ L1(Ω; R2), div p = 0 in the sense of distributions and Ω is convex. Then there exists u, an element of W1,1(Ω), such that p = R−π/2 ∇u. More precisely,
∫Ω |p| dx = ∫Ω |∇u| dx.
Moreover, if p · ν|∂Ω = g in the sense of Definition 2.2 and T u = f, then g = ∂f/∂τ.
Proof. We fix x0 ∈ Ω, and define the differential form ω = p1 dx2 − p2 dx1, where p = (p1, p2). We set pϵ = p ∗ ϕϵ := (pϵ1, pϵ2), ϵ > 0, with ϕϵ a sequence of mollifiers. We define a sequence of mollified differential forms ωϵ = pϵ1 dx2 − pϵ2 dx1 and the functions
uϵ(x) = ∫γx ωϵ,
where γx is a line segment joining x0 to x. Since pϵ → p in L1(Ω; R2), we claim that the sequence uϵ converges in W1,1(Ω). In fact,
∫Ω |uϵ(x) − uδ(x)| dx = ∫Ω ( ∫0^{|x−x0|} |(pϵ(γx(t)) − pδ(γx(t))) · η| |γ̇(t)| dt ) dx =: J,
where η is a unit normal to the interval γx . We take σ to be tangent to γx . We switch to another orthogonal
coordinate system, such that the first axis is parallel to σ, while the second one is parallel to η. Then, by
the Fubini theorem, we change the order of integration and obtain
J ≤ ∫R ∫R ∫R |pϵ(γx(t)) − pδ(γx(t))| χΩ(σ x1 + η x2) dt dx2 dx1 ≤ diam(Ω) ∥pϵ − pδ∥L1(Ω).
We conclude that uϵ is a Cauchy sequence in L1 , and hence our claim follows.
We notice that ∇uϵ is also a Cauchy sequence in L1. Indeed, since div p = 0, then div pϵ = 0, and we have ∇uϵ = Rπ/2 pϵ. Therefore,
∥∇uϵ − ∇uδ∥L1(Ω) ≤ ∥Rπ/2 (pϵ − pδ)∥L1(Ω) = ∥pϵ − pδ∥L1(Ω).
We denote the W1,1-limit of uϵ by u. Since pϵ → p in L1, then ∇u = Rπ/2 p and
∫Ω |∇u| dx = ∫Ω |Rπ/2 p| dx = ∫Ω |p| dx.
Finally, we study the traces f and g. If φ ∈ Lip(R2), and ϕ is its restriction to ∂Ω, then by (2.3)
⟨g, ϕ⟩ = ⟨p · ν|∂Ω, ϕ⟩ = ∫Ω p · ∇φ dx = lim_{ϵ→0} ∫Ω pϵ · ∇φ dx = lim_{ϵ→0} ∫Ω R−π/2 ∇uϵ · ∇φ dx = lim_{ϵ→0} ∫∂Ω (R−π/2 ∇uϵ · ν) ϕ dH1.
Due to the smoothness of uϵ, we have R−π/2 ∇uϵ · ν = ∂uϵ/∂τ. Hence,
⟨g, ϕ⟩ = lim_{ϵ→0} ∫∂Ω (R−π/2 ∇uϵ · ν) ϕ dH1 = lim_{ϵ→0} ∫∂Ω (∂uϵ/∂τ) ϕ dH1 = − lim_{ϵ→0} ∫∂Ω T uϵ (∂ϕ/∂τ) dH1.
Since uϵ converges to u in W1,1, then the traces converge in L1(∂Ω), i.e., T uϵ → T u. Thus,
⟨g, ϕ⟩ = − ∫∂Ω T u (∂ϕ/∂τ) dH1 = − ∫∂Ω f (∂ϕ/∂τ) dH1 = ⟨ ∂f/∂τ, ϕ ⟩. □
Remark 2.2. The above proof fails when p is a measure since the sequence of smooth functions uϵ does not,
in general, converge in the BV norm.
Proof of Theorem 2.1. Let us suppose that u is a solution to (2.1) and P is the value of the infimum of (2.2). Our goal is to show that q := R−π/2 Du satisfies ∫Ω |q| = P and q · ν|∂Ω = ∂f/∂τ = g.
Let us suppose that pn ∈ L1(Ω) is a minimizing sequence, i.e.
∫Ω |pn| dx → P,   div pn = 0,   pn · ν|∂Ω = g.
By Proposition 2.1, we can find a sequence vn ∈ W1,1(Ω), such that T vn = f and
∫Ω |pn| dx = ∫Ω |∇vn| dx.
Hence,
P = lim_{n→∞} ∫Ω |pn| = lim_{n→∞} ∫Ω |∇vn| ≥ ∫Ω |Du| = ∫Ω |q|.
Since pn is a minimizing sequence, we infer that P cannot be bigger than ∫Ω |Du|.
Notice that div q = 0. In order to show that q satisfies the desired boundary conditions it suffices to verify identity (2.3). We take a sequence of smooth functions wn in BV(Ω) converging to u in the strict sense, i.e., wn → u in L1(Ω) and ∫Ω |∇wn| dx → ∫Ω |Du|, and we require that T wn = f = T u. Then, there is a subsequence (not relabeled) such that ∇wn converges to Du weakly∗ as measures. Hence, if we set p̂n = R−π/2 ∇wn, then by Proposition 2.1, we have
gn = p̂n · ν|∂Ω = ∂(T wn)/∂τ.
We know that T wn = f, so gn = ∂f/∂τ.
Since ∇wn ⇀∗ Du, then p̂n ⇀∗ q. If φ ∈ Lip(γ, R2) with γ > 1, then φ has continuous partial derivatives of order 1, and therefore
∫Ω ∇φ · dq = lim_{n→∞} ∫Ω ∇φ · p̂n dx.
Subsequently, using (2.3) and the fact that div q = 0, we obtain
⟨q · ν|∂Ω, ϕ⟩ = ∫Ω ∇φ · dq = lim_{n→∞} ∫Ω ∇φ · p̂n dx = lim_{n→∞} ⟨p̂n · ν|∂Ω, ϕ⟩ = ⟨g, ϕ⟩,   (2.4)
where ϕ is the restriction of φ to ∂Ω. Hence, q satisfies the desired boundary conditions. □
2.1. Case with load on Γ ⊊ ∂Ω
In this section, we consider the case when the load is applied only on one part of the boundary. In other
words, we assume that Γ ⊊ ∂Ω is an open set and consider the following two minimizing problems,
min { ∫Ω |Du| : u ∈ BV(Ω), TΓ u = f }   (2.5)
and
inf { ∫Ω |p| : p ∈ L1(Ω; R2), div p = 0, p · ν|Γ = g }.   (2.6)
In order to define the trace p · ν on Γ , we consider the general framework of Definitions 2.1 and 2.2. We
notice that Lip Γ (∂Ω ) = {ϕ ∈ Lip (∂Ω ) : ϕ|∂Ω\Γ = 0} is a closed subspace of Lip (∂Ω ). We then define
p · ν|Γ as a restriction of p · ν|∂Ω to Lip Γ (∂Ω ). Also, TΓ u is the restriction of T u on the set Γ .
Therefore, similarly as in Theorem 2.1, we obtain the following result.
Theorem 2.2. Assume Ω ⊆ R2 is convex, Γ ⊊ ∂Ω is relatively open, and u is a solution to (2.5) with f ∈ L1(∂Ω). Then q = R−π/2 Du satisfies
inf { ∫Ω |p| : p ∈ L1(Ω), div p = 0, p · ν|Γ = g } = ∫Ω |q| = ∫Ω |Du|,
where g = ∂f/∂τ. Moreover, div q = 0 and q · ν|Γ = g as a functional on Lip Γ(∂Ω, γ), with γ > 1.
3. Analysis of the least gradient problem when Γ ⊊ ∂Ω
In this section, we study solutions of the least gradient problem, when the trace is defined only on a part of
the boundary. More precisely, we prove existence of minimizers of (2.5), when Γ is open and f is continuous
in Γ . Before embarking on the analysis of (2.5), we point out that the question of existence of solutions,
when the trace is defined over the whole boundary, was studied in the literature from different perspectives. One could try to apply the direct method of the calculus of variations to the relaxed functional ∫Ω |Du| to establish the existence of minimizers, [20]. However, in this paper, we adapt the method introduced in [26],
since it gives us insight into the structure of the solutions. This method requires a modification, and even
with continuous data, uniqueness and continuity of solutions might fail.
3.1. Sets of finite perimeter
Before showing the existence of minimizers of (2.5), we recall the definitions of sets of finite perimeter
and the generalized definitions of their boundary. These notions will be frequently used in the rest of this
paper.
Definition 3.1. For E ⊆ Ω measurable, we say E is a set of finite perimeter in Ω if and only if χE ∈ BV (Ω ).
In this case, the perimeter P(E, Ω) of the set E is defined as the total variation of the Radon measure DχE on Ω.
Definition 3.2. Suppose E is a set of finite perimeter.
• For x ∈ Rn, the measure theoretic exterior normal ν(x, E) at x is a unit vector ν such that
lim_{r→0} |B(x, r) ∩ {y : (y − x) · ν < 0, y ∉ E}| / r^n = 0   and   lim_{r→0} |B(x, r) ∩ {y : (y − x) · ν > 0, y ∈ E}| / r^n = 0.
(We restrict our attention to the case n = 2.)
• The reduced boundary ∂ ∗ E is the set of points x such that ν(x, E) exists.
• The measure theoretic boundary ∂M E is the set of points x ∈ Rn such that
lim sup_{r→0} |E ∩ B(x, r)| / |B(x, r)| > 0   and   lim inf_{r→0} |E ∩ B(x, r)| / |B(x, r)| < 1.
We have that ∂∗E ⊆ ∂M E ⊆ ∂E, where ∂E is the topological boundary of E. It is a well-known fact, see [1, Section 3.5], that H1(∂M E \ ∂∗E) = 0, and that
P(E, Ω) = H1(Ω ∩ ∂M E) = H1(Ω ∩ ∂∗E).   (3.1)
Since sets of finite perimeter are defined up to measure zero, then in order to avoid ambiguity, we will use the following convention,
x ∈ E ⟺ lim sup_{r→0} |E ∩ B(x, r)| / |B(x, r)| > 0.   (3.2)
3.2. Sternberg–Williams–Ziemer construction
We establish in this section a version of a construction presented in [26]. We adapt it to deal with the case
of the boundary condition on Γ . The justification of the method is based on the observation made in [4],
which is presented in Lemma 3.1. Before we state it, we recall the definition of functions of least gradient.
Definition 3.3. We say u ∈ BV(Ω) is a function of least gradient if
∫Ω |Du| ≤ ∫Ω |D(u + w)|
for every compactly supported function w ∈ BV(Ω) with supp w ⊆ Ω.
Lemma 3.1. If u is a solution to (2.5), then ∂{u ≥ t} is a minimal surface for every t ∈ R.
Proof. For v in BV(Ω) with TΓ v = f, we have
∫Ω |Du| ≤ ∫Ω |Dv|.   (3.3)
Let K be a compact subset of Ω and w ∈ BV(Ω) with supp w = K. Notice that u|Γ = (u + w)|Γ = f; then, applying (3.3) to v = u + w, we get that
∫Ω |Du| ≤ ∫Ω |D(u + w)|.
Thus, we have shown that u is a function of least gradient, hence we can apply the results of [4] to conclude the minimality of ∂{u ≥ t}. □

Now, we are ready to state the main results of this section.
Theorem 3.1. Let us suppose that Ω is a strictly convex region in the plane with Lipschitz continuous boundary. We assume that Γ ⊊ ∂Ω is an open subset of ∂Ω and Υ = ∂Ω \ Γ is a smooth arc with endpoints a and b. We assume that f : Γ → R is a bounded continuous function having one-sided limits at a and b, denoted by f(a) and f(b), respectively. Then, there exists u ∈ C(Ω ∪ Γ), a solution to (2.5), i.e., u ∈ BV(Ω), TΓ u = f and for every v ∈ BV(Ω) with TΓ v = f we have
∫Ω |Du| ≤ ∫Ω |Dv|,
where TΓ is interpreted in the sense of the trace of BV functions.
First, we provide the setup for the existence proof. We take any region Ωo , with Lipschitz boundary,
containing Ω and such that
∂Ωo ∩ ∂Ω = Υ .
Let F ∈ C(Ωo ) ∩ BV (Ωo \ Ω̄ ) be an extension of f to Ωo . We denote I = f (Γ ) and we recall that f is
bounded in Γ so I is a compact interval. For every t ∈ I, we define the set
Lt = {x ∈ Ωo : F(x) ≥ t}.
Since F ∈ BV(Ωo \ Ω̄), then P(Lt, Ωo \ Ω̄) < ∞ for almost every t. We let
T := I ∩ {t : P(Lt, Ωo \ Ω̄) < ∞}.
For every t ∈ T, we consider the minimization problem,
min { P(E, Ωo) : E \ Ω̄ = Lt \ Ω̄ }.   (3.4)
By compactness of the embedding BV (Ωo ) ⊂ L1 (Ωo ) and the lower semicontinuity of the BV norms we get
the existence of minimizers of (3.4).
Among minimizers, we select one which is a solution to
max { |E| : E is a solution to (3.4) }.   (3.5)
The existence of a maximizer follows from the same argument as above. It is easy to see that there is a unique maximizer up to measure 0. Indeed, let us assume E1, E2 are maximizers of (3.5); notice first that (E1 ∪ E2) \ Ω̄ = Lt \ Ω̄, and similarly (E1 ∩ E2) \ Ω̄ = Lt \ Ω̄, so E1 ∪ E2 and E1 ∩ E2 are competitors to E1 and E2 in (3.4). We know that for sets of finite perimeter, A and B, we have
P(A ∪ B, Ωo) + P(A ∩ B, Ωo) ≤ P(A, Ωo) + P(B, Ωo).   (3.6)
Therefore, E1 ∪ E2 and E1 ∩ E2 are minimizers of (3.4). Since E1 and E2 are maximizers of (3.5), then
|E1| ≥ |E1 ∪ E2| = |E1| + |E2 \ E1|,   |E2| ≥ |E1 ∪ E2| = |E2| + |E1 \ E2|.
We deduce that |E1 △ E2| = 0, concluding the proof of the claim.
For every t in T , we denote the maximizer of (3.5) by Et .
The argument in the proof of [26, Lemma 3.3] is local in its nature, so we obtain similarly the following
key property of Et .
Lemma 3.2. If Γ ⊊ ∂Ω is open, f : Γ → R is continuous and Et is given by (3.5), then ∂Et ∩ Γ ⊆ f−1(t) for every t ∈ T.
Checking that the sets Et are ordered, as expected of candidates for the level sets, is an important step in the construction. First, we need the following geometric result, which relies on the fact that ∂Ω \ Γ is smooth and Ω is strictly convex.
Lemma 3.3. If Υ is a smooth arc, γ is a connected component of ∂Et in Ω intersecting Υ ◦ , the interior of
Υ , then γ is orthogonal to Υ .
Proof. Take xt ∈ ∂Et ∩ Υ◦. We know that ∂Et is a minimal surface in R2, so γ must be a segment [xt, yt] ⊆ ∂Et intersecting Υ◦ at xt. We show that [xt, yt] is normal to ∂Ω at xt. In fact, let B(xt, ϵ) be such that B(xt, ϵ) ∩ Γ = ∅, and let pt ∈ Ω be the intersection of ∂B(xt, ϵ) and [xt, yt]. Suppose now that our claim does not hold; then the convexity of B(xt, ϵ) ∩ Ω and the Pythagorean theorem imply that dist(pt, Υ) < dist(xt, pt). This implies that we can make P(Et, Ωo) smaller, contrary to our minimality assumption. □

Lemma 3.4. Let s, t ∈ T be such that s < t. If
∂Et ∩ ∂Γ = ∅   or   ∂Es ∩ ∂Γ = ∅,
then Et ⋐ Es.
Proof. We prove first that Et ⊆ Es. In fact, since Lt ⊆ Ls, then
(Es ∪ Et) \ Ω̄ = Ls \ Ω̄,   (Es ∩ Et) \ Ω̄ = Lt \ Ω̄.
Therefore by (3.4), P(Es ∪ Et, Ωo) ≥ P(Es, Ωo) and P(Es ∩ Et, Ωo) ≥ P(Et, Ωo). Now, (3.6) implies that P(Es ∪ Et, Ωo) = P(Es, Ωo). As a result, (3.5) yields |Et ∪ Es| ≤ |Es|, i.e. |Et \ Es| = 0; hence, due to the convention in (3.2), Et ⊆ Es.
We will show that ∂Es ∩ ∂Et ∩ (∂Ω \ ∂Γ) = ∅. In fact, by Lemma 3.2 we have ∂Es ∩ ∂Et ∩ Γ = ∅, so it is enough to prove that ∂Es ∩ ∂Et ∩ Υ◦ = ∅. Let us assume otherwise and take x0 ∈ ∂Et ∩ ∂Es ∩ Υ◦. By Lemma 3.3, the connected components of ∂Et and ∂Es which intersect Υ must both meet Υ orthogonally. Since Υ is smooth, ∂Et and ∂Es must be contained in the line perpendicular to Υ and passing through x0, hence they must coincide. As a result, Ls = Lt, a contradiction.
It remains to show that ∂Et ∩ ∂Es ∩ Ω = ∅. Indeed, since ∂Et and ∂Es are minimal surfaces in R2, each of their components in Ω is a segment. Hence, if they intersect in Ω, then by [26, Theorem 2.2] they must coincide. Thus, we reached a contradiction. □

We set At := Et ∩ Ω, for every t ∈ T. Following [26], we define our candidate for a solution to (2.5) by the formula
u(x) = max{t ∈ T : x ∈ At}.   (3.7)
We have to make sure that for each x ∈ Ω, there is t ∈ T such that x ∈ At. Indeed, if t = infΓ f − ϵ, ϵ > 0, then we have that P(Lt, Ωo \ Ω̄) = 0 and At ∩ Ω = Ω. Similarly, if t = supΓ f + ϵ, then P(Lt, Ωo \ Ω̄) = 0.
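It may help to see the construction at work in the classical model case Γ = ∂Ω (so that Υ is empty); this example is not taken from the present paper and serves only as an illustration. Let Ω be the unit disk and f(x1, x2) = x1 on ∂Ω. For t ∈ (−1, 1), the exterior datum Lt \ Ω̄ lies on one side of the vertical line {x1 = t}, and the minimizer Et of (3.4) is obtained by cutting Ω along the chord {x1 = t} ∩ Ω, since a chord is the shortest curve in Ω̄ joining the two points of f−1(t). Formula (3.7) then yields u(x1, x2) = x1, whose level lines are exactly these vertical chords; this agrees with the unique solution provided by [26].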
Lemma 3.5. For every t ∈ T, we have
{x ∈ Γ : f(x) > t} ⊆ Et◦ ∩ Γ ⊆ At ∩ Γ ⊆ {x ∈ Γ : f(x) ≥ t}.
Proof. To prove the first inclusion we take x ∈ Γ such that f(x) > t. By the continuity of F, there exists a neighborhood of x in Ωo \ Ω on which F > t; therefore, by the definition of Et, we deduce that x ∈ Ēt. By Lemma 3.2, x ∉ ∂Et, hence x ∈ Et◦.
Now, we prove the second inclusion. Let z ∈ Et◦ ∩ Γ , there exists a neighborhood Vz of z such that
Vz ⊆ Et . Since z ∈ Γ , then there exists a sequence zn ∈ Et ∩ Ω such that zn → z. Thus, z ∈ At .
In order to prove the third inclusion, notice that At ∩ Γ ⊆ Ēt ∩ Γ. Take x ∈ Ēt ∩ Γ = (∂Et ∩ Γ) ∪ (Et◦ ∩ Γ). By Lemma 3.2, it is enough to show that Et◦ ∩ Γ ⊆ {x ∈ Γ : f(x) ≥ t}. Assume x ∈ Et◦ ∩ Γ; then there exists a neighborhood Vx ⊆ Et◦ of x, and by the definition of Et, F(y) ≥ t for all y ∈ Vx ∩ (Ωo \ Ω̄), hence by continuity f(x) ≥ t. □

The following lemma is a direct consequence of Lemma 3.4 and the definition of At.
Lemma 3.6. If ∂Et ∩ ∂Γ = ∅ or ∂Es ∩ ∂Γ = ∅, then At ⋐ As for s < t.
We summarize the main result of the construction.
Proposition 3.1. Let us suppose that u is given by the above version of the SWZ construction, i.e. by formula (3.7). Then u is continuous in Ω ∪ Γ and u = f on Γ.
Proof. We notice first that
{x : u(x) ≥ t} = ⋂_{s∈T, s<t} As   and   {x : u(x) > t} = ⋃_{s∈T, s>t} As.
The first set is obviously closed. We will prove that ⋃_{s∈T, s>t} As ∩ Ω is open. Indeed, let us take x0 ∈ ⋃_{s∈T, s>t} As ∩ Ω, so there is s0 > t such that x0 ∈ As0, i.e. u(x0) ≥ s0 > t. Moreover, dist(x0, ∂Ω) > 0. We take
r = (1/2) min{dist(x0, ∂Ω), dist(x0, ∂At)} > 0.
Notice that B(x0, r) ⊂ Ω. We claim that B(x0, r) ⊂ ⋃_{s∈T, s>t} As. For any x ∈ B(x0, r), we have dist(x, ∂At) > 0, so u(x) > t and the claim follows. Hence, we obtain the continuity of u in Ω.
We shall prove now that for every x0 ∈ Γ, limx→x0, x∈Ω u(x) = f(x0). We define t = f(x0) and take any s such that s < t. We have f(x0) > s, and by Lemma 3.5 x0 ∈ Es◦ ∩ Γ, so there exists a neighborhood Vx0 of x0 such that Vx0 ∩ Ω ⊆ Es ∩ Ω ⊆ As. Notice that for every z ∈ Vx0 ∩ Ω, u(z) ≥ s, hence lim infz→x0 u(z) ≥ s for all s < t. We conclude that lim infz→x0 u(z) ≥ t.
It remains to show that δ := lim supz→x0 u(z) ≤ t. We assume that δ > t. For τ ∈ (t, δ), there exists a sequence zn ∈ Ω converging to x0 such that u(zn) ≥ τ. Notice that zn ∈ Aτ; then, since Aτ is closed, we get that x0 ∈ Aτ ∩ Γ. Hence, by Lemma 3.5, f(x0) ≥ τ > t, a contradiction. □

Proof of Theorem 3.1. It remains to show that u is a function of least gradient. Let v be a competitor of
u, i.e. v ∈ BV(Ω) and TΓ v = f. Let ṽ be the extension of v onto Ωo, such that ṽ = v in Ω and ṽ = F in Ωo \ Ω̄. We define Gt = {ṽ ≥ t} and we shall see that for t ∈ T, ∂∗Gt ∩ Γ ⊆ f−1(t), where the inclusion is up to H1-measure zero. Since F is continuous, we notice that ṽ ∈ BV(Ωo) ∩ C(Ωo \ Ω̄). Since TΓ v = f and v = ṽ on Ω, we have that for H1-almost every x ∈ Γ,
lim_{r→0} ⨍_{B(x,r)∩Ω} |ṽ(y) − f(x)| dy = 0.   (3.8)
Let us take x ∈ ∂∗Gt ∩ Γ such that (3.8) holds. We shall prove that f(x) = t. We assume that f(x) < t, say f(x) = t − ε; then
0 = lim_{r→0} (1/|B(x, r) ∩ Ω|) [ ∫_{B(x,r)∩Ω∩{ṽ<t}} |ṽ(y) − f(x)| dy + ∫_{B(x,r)∩Ω∩{ṽ≥t}} |ṽ(y) − f(x)| dy ]
≥ lim sup_{r→0} (1/|B(x, r) ∩ Ω|) ∫_{B(x,r)∩Ω∩{ṽ≥t}} |ṽ(y) − f(x)| dy
≥ ε lim sup_{r→0} |B(x, r) ∩ Ω ∩ Gt| / |B(x, r) ∩ Ω|.
Hence,
lim_{r→0} |B(x, r) ∩ Ω ∩ Gt| / |B(x, r) ∩ Ω| = 0.
Taking into account that f is the trace of ṽ as a function on Ωo \ Ω̄, by a similar argument, we obtain that
lim_{r→0} |B(x, r) ∩ (Ωo \ Ω) ∩ Gt| / |B(x, r) ∩ Ω| = 0.
Combining both results, we conclude that
lim_{r→0} |B(x, r) ∩ Ωo ∩ Gt| / |B(x, r)| = 0.
Hence, x ∉ ∂M Gt, a contradiction to the fact that ∂∗Gt ⊆ ∂M Gt, see Definition 3.2.
We come to a similar conclusion if we assume f(x) > t; as a result, f(x) = t.
Thus, we have H1 (∂ ∗ Gt ∩ Γ ) ≤ H1 (f −1 (t)) = 0 for almost all t. As a result, we obtain from (3.1)
P(Gt, Ωo) = H1(Ωo ∩ ∂∗Gt) = H1(Ω ∩ ∂∗Gt) + H1(Γ ∩ ∂∗Gt) + H1((Ωo \ Ω̄) ∩ ∂∗Gt) = P(Gt, Ω) + H1(∂∗Lt \ Ω̄).   (3.9)
Similarly, by Lemma 3.2, we have H1 (∂ ∗ Et ∩ Γ ) = 0. Then
P(Et, Ωo) = H1(Ωo ∩ ∂∗Et) = H1(Ω ∩ ∂∗Et) + H1(Γ ∩ ∂∗Et) + H1((Ωo \ Ω̄) ∩ ∂∗Et) = P(Et, Ω) + H1(∂∗Lt \ Ω̄).   (3.10)
Since Gt \ Ω̄ = Lt \ Ω̄ , then by (3.4)
P(Et, Ωo) ≤ P(Gt, Ωo).   (3.11)
Therefore (3.9)–(3.11) yield P(Et, Ω) ≤ P(Gt, Ω), which together with the coarea formula, [1, Theorem 3.40], concludes the proof of the theorem. □

3.3. Uniqueness of solution to (2.5)
In this subsection, we study uniqueness of solutions to (2.5), constructed in Theorem 3.1. Our basic tool is
based on the observation that a solution is fully specified by its level sets and their ‘labeling’, see Lemma 3.7.
We first make the following observation regarding the range of solutions to (2.5). We set M = supΓ f ,
m = inf Γ f . Recall that I = f (Γ ).
Proposition 3.2. Let u be any solution to (2.5), then the range of u is contained in I, where the inclusion is
up to H1 -measure 0.
Proof. If we set v = min{M, max{m, u}}, then v ∈ BV(Ω), and v satisfies
∫Ω |Dv| ≤ ∫Ω |Du|.
Notice that the range of v is contained in I. If |{u > M} ∪ {u < m}| > 0, then the inequality above is strict, which is a contradiction, because TΓ v = TΓ u = f. □

We are interested in detecting fat level sets of solutions to (2.5).
Definition 3.4. Let u be a solution to (2.5), and Et = {u ≥ t}. For t ∈ u(Ω), we say that the t-level set of u is fat if
| ⋂_{s<t} Es \ ⋃_{s>t} Es | > 0.
We would like to gain insight into the solutions constructed in Section 3.2. For this purpose, we denote
by u0 the solution obtained by means of the Sternberg–Williams–Ziemer construction in Theorem 3.1 and
we set Et0 = {x : u0 (x) ≥ t}. Our goal is to give an explicit description of the sets ∂Et0 for all t ∈ u0 (Ω ),
when f is continuous. Moreover, these sets will be determined only in terms of f , Ω and Γ . In other words,
we will show that if f satisfies a monotonicity condition, u is another solution to (2.5) and Et = {u ≥ t}, then
∂Et0 = ∂Et for all t.   (3.12)
Showing that (3.12) implies that u = u0 , see Lemma 3.7, is an important step toward a uniqueness proof.
This plan is carried out in Theorem 3.2 for a few examples of continuous functions f . However, we expect
that our future research will establish uniqueness results for a larger class of functions. Before we state this
theorem, we introduce more notations aiming at capturing the structure of the level sets of the solutions to
(2.5). We will describe the geometric aspects of the SWZ construction solutions in terms of solutions to
d(x, Υ) = d(x, y),   y ∈ Υ,   (3.13)
when x ∈ Γ is given. We notice that since Υ is closed, then there exists a solution to (3.13), but it need not
be unique. We also define
S := {x ∈ Γ : there exists y ∈ Υ◦ such that d(x, y) = d(x, Υ)},   (3.14)
where Υ◦ denotes the relative interior of Υ. We also split S into two sets,
U = {x ∈ S : there is a unique solution to (3.13)}   (3.15)
and
D = {x ∈ S : there are at least two solutions to (3.13)}.   (3.16)
In order to study the structure of the set of solutions to (3.13), we define a map Φ : S → P(Υ) by
Φ(x) = {y ∈ Υ : d(x, y) = d(x, Υ)}.
Proposition 3.3. Let C be a connected component of D, then C is a singleton, and hence D is at most a
countable set.
Proof. We assume otherwise, i.e. there exist p and q, distinct points in C. Since C is a one dimensional
connected set, then the arc pq must be contained in C.
We take x1 , x2 ∈ Φ(p) and y1 , y2 ∈ Φ(q) such that
d(x1, x2) = diam Φ(p),   d(y1, y2) = diam Φ(q).   (3.17)
We claim that the arcs x1 x2 and y1 y2 may not intersect. Let us suppose they do, then we have two
possibilities:
(1) the intersection point is in the relative interior of the arcs x1 x2 and y1 y2 ;
(2) the intersection point is one endpoint, say x2 .
In the first case, one endpoint of x1x2, say x2, is in the interior of △qy1y2, the curvilinear triangle with vertices q, y1 and y2, where y1 and y2 are connected by the circular arc centered at q. Notice that, since d(q, Υ◦) = d(q, y1) = d(q, y2), the disk centered at q passing through y1 and y2 contains △qy1y2. This implies that
implies that
d(q, x2 ) < d(q, y1 ) = d(q, y2 ),
contrary to the definition of y1 and y2 .
In the second case, we consider any point s belonging to the arc pq, different from p and q. By our
assumption s ∈ C, we have that Φ(s) contains at least two distinct points, z1 and z2 , such that
d(s, z1 ) = d(s, z2 ).
The arc z1 z2 must intersect the interior of at least one curvilinear triangle, △px1 x2 or △qy1 y2 . Let us suppose
that z2 is in the interior of △qy1 y2 , then
d(q, z2 ) < d(q, y1 ) = d(q, y2 ).
This again contradicts the definition of y1 and y2 .
Thus, we proved that if p, q ∈ C, then the corresponding arcs x1 x2 and y1 y2 , which have positive length,
do not intersect. Since the number of elements of C is not countable, we infer that the total length of the
corresponding arcs is infinite; thus we reached a contradiction. □

Now, we are ready to state the main result of this section.
Theorem 3.2. Let us suppose that the assumptions of Theorem 3.1 hold and u0 is a solution given by
Theorem 3.1.
(1) If f (a) = f (b) = inf Γ f , f has a strict maximum at xM and f attains each value (except supΓ f ) exactly
twice, then u0 is unique, discontinuous at a and b. There is τ ∈ (inf Γ f, supΓ f ) such that the τ -level set
of u0 is fat.
(2) If f is monotone, then u0 is discontinuous at a and b. There is at least one τ , an element of u(Φ(D)),
such that the τ -level set is fat. Moreover, u0 is a unique solution in the class of solutions continuous in
Ω.
(3) Let us suppose that f (a) = f (b) and there are xm ∈ Γ a unique local minimum, xM ∈ Γ a unique local
maximum, and x0 ∈ D ⊆ Γ such that f (x0 ) = f (a) = f (b), and
d(x0, Υ) = d(x0, a) = d(x0, b).   (3.18)
Then, u0 is continuous, unique and the f (a)-level set is fat.
Remark 3.1. We can change ‘min’ to ‘max’ in Theorem 3.2 to get a corresponding result. We notice that
the function f on an arc may be increasing or decreasing depending on a parametrization of the arc Γ . The
same claim in (3) holds if we require Φ(D) ∩ {a, b} = ∅.
One might expect that in case (1), the least gradient function should take the value infΓ f on Υ, but it turns out that the process of forming discontinuities decreases the energy ∫Ω |Du|.
We want to present here different types of behavior of f near the endpoints of Υ . We obtain quite
detailed information about solutions u0 due to the explicit construction of ∂Et0 , which enables us to study
the superlevel sets Et0 = {u0 (x) ≥ t}.
The main claim is uniqueness of solutions. Detecting fat level sets is a by-product. We proceed in two
stages. Firstly, we analyze the structure of ∂Et0 , which depends only on the geometry of ∂Ω and the properties
of f . Secondly, we show that we can reconstruct u0 and any other solution from this information.
Before we study the special cases in Theorem 3.2, we present a more general uniqueness argument, which
will be applicable also in Section 4.
Lemma 3.7. Let u0 be a continuous solution to problem (2.5) with superlevel sets Et0 = {x : u0(x) ≥ t}. Assume u is another BV solution with superlevel sets Et = {x : u(x) ≥ t}. If ∂Et0 = ∂Et for every t, then u = u0 in L1.
Proof. We make the following key observation. By our construction of the function u0, we know that if ρ1 ≠ ρ2, then ∂E0ρ1 ∩ ∂E0ρ2 = ∅, therefore
∂Eρ1 ∩ ∂Eρ2 = ∅.   (3.19)
Step 1. We begin by proving that u is also continuous in Ω . Let us suppose otherwise. We know by
[17, Proposition 3.7] that the jump set Su of u consists of at most countable number of minimal surfaces
Sk . Moreover, u has constant one-sided traces on each Sk . Let us consider one of those sets, Sj . We suppose
that there exists x0 ∈ Sj such that ap- lim inf x→x0 u(x) = τ1 and ap- lim supx→x0 u(x) = τ2 with τ1 ̸= τ2 .
The notions of the upper and lower approximate limits can be found in [1, Definition 4.28].
Since u has a constant jump along Sj , we deduce that for any ρ ∈ (τ1 , τ2 ), we have that ∂Eρ ∩ Sj ̸= ∅ and
H1 ((∂Eρ ∩ Sj ) △ Sj ) = 0. Thus, for any τ1 < ρ1 < ρ2 < τ2 , we have ∂Eρ1 ∩ Sj = ∂Eρ2 ∩ Sj up to a set of
H1 -measure 0. But this contradicts (3.19).
We conclude from this step that for each ρ, the superlevel set Eρ is closed.
Step 2. Let us assume, contrary to our claim, that u(x) < u0(x) at a point x ∈ Ω and s, t are such that u(x) < s < t < u0(x). We notice that x ∈ Et0 \ (Es)◦, where M◦ denotes the interior of a set M.
We show that ∂(Et0 \ (Es)◦) ∩ Ω = ∅. We notice that Step 1 and the continuity of u0 imply that Et0 \ (Es)◦ is closed. Therefore,
∂(Et0 \ (Es)◦) ∩ Ω = [(Et0 \ (Es)◦) \ (Et0 \ (Es)◦)◦] ∩ Ω
= [(Et0 ∩ ((Es)◦)c) \ (Et0 ∩ ((Es)◦)c)◦] ∩ Ω
⊆ [(Et0 \ (Et0)◦) ∩ ((Es)◦)c ∩ Ω] ∪ [Et0 ∩ (((Es)◦)c \ (((Es)◦)c)◦) ∩ Ω]
= [∂Et0 ∩ ((Es)◦)c ∩ Ω] ∪ [Et0 ∩ ∂(((Es)◦)c) ∩ Ω] =: A ∪ B.
We first show that A = ∅. We have that ∂Et0 = ∂Et and by the continuity of u we get ∂Et ⊆ {u = t}. On the other hand, {u > s} ⊆ (Es)◦, so ((Es)◦)c ⊆ {u ≤ s}. Therefore, since s < t,
A ⊆ {u = t} ∩ {u ≤ s} ∩ Ω = ∅.
It remains to show that B = ∅. In fact, we know that for any set M,
∂((M◦)c) = ∂(M◦) = cl(M◦) \ M◦ ⊆ cl(M) \ M◦ = ∂M,
where cl denotes the closure. Applying this identity to M = Es, we get that
B ⊆ Et0 ∩ ∂Es = Et0 ∩ ∂Es0 ⊆ {u0 ≥ t} ∩ {u0 = s} = ∅.
Step 3. We finish the proof of the lemma. We showed that ∂(Et0 \ (Es)◦) ∩ Ω = ∅; then, since Ω̄ is connected,
Et0 \ (Es)◦ ⊆ ∂Ω   or   Et0 \ (Es)◦ = Ω̄.
However, we are assuming that x ∈ (Et0 \ (Es)◦) ∩ Ω, so Et0 \ (Es)◦ = Ω̄. We conclude that Et0 = ((Es)◦)c = Ω̄. Therefore, for every z ∈ Ω, z ∈ Et0 = {u0 ≥ t} and z ∈ ((Es)◦)c ⊆ {u ≤ s}. But this implies that
infΩ u0 ≥ t > s ≥ supΩ u.
This contradicts the fact that TΓ u = TΓ u0 = f. □
We would like to establish an analogue of Lemma 3.2, with f and Ω , as in the statement of Theorem 3.1,
for solutions which are not necessarily continuous. If u is such a solution to (2.5), then Proposition 3.2
implies that u(x) ∈ [inf Γ f, supΓ f ] for a.e. x.
Let us set Et = {u ≥ t}. We know that ∂Et are minimal surfaces. They must intersect ∂Ω , i.e. ∂Et ∩∂Ω ̸= ∅.
A given ∂Et is a sum of intervals and none of their endpoints may belong to Ω . If any of them did, e.g. xt ∈ ∂Et
is in Ω , then at this point Et would have zero density. But this is impossible due to our convention (3.2).
Lemma 3.8. If u is any solution to (2.5), identified with its exact representative, then
∂Et ∩ Γ ⊆ f −1 (t).
Proof. We identify u with its exact representative, i.e.
lim_{r→0} (1/|B(x, r)|) ∫_{B(x,r)} |u(y) − u(x)| dy = 0.   (3.20)
For other representatives the inclusion is up to a set of H1 -measure zero.
We first make the following observation. Since Ω is strictly convex, no component of ∂Et is tangent to
∂Ω .
Step 1. Let us suppose that x0 is in ∂Et ∩ Γ. Without loss of generality, we may assume that x0 = 0
and in its neighborhood in R2 the boundary of Ω is a graph of a Lipschitz continuous function ϕ. Since
ϕ is Lipschitz continuous, then it has one-sided derivatives at every point. Hence, we may assume that
ϕ′+ (0) = ar , ϕ′− (0) = al .
Since ∂Et consists of a number of intervals, i.e. minimal surfaces, then for sufficiently small ρ the set Et ∩
B(0, ρ) has one of the following forms, for properly chosen real numbers k and K.
A1 = {(x, y) ∈ B(0, ρ) : 0 < kx ≤ y ≤ Kx},
A2 = {(x, y) ∈ B(0, ρ) : x < 0, ϕ(x) ≤ y and kx ≤ y},
A3 = {(x, y) ∈ B(0, ρ) : x > 0, ϕ(x) ≤ y ≤ kx},
A4 = {(x, y) ∈ B(0, ρ) : x < 0, kx ≤ y and ϕ(x) ≤ y},
A5 = {(x, y) ∈ B(0, ρ) : x < 0, ϕ(x) ≤ y ≤ kx}.
For other representatives some of the connected components, which are of the above form, may have zero
measure.
Step 2. Let us first assume that there exists a tangent vector to ∂Ω at 0, i.e. ar = al. Let us denote the outer normal to ∂Ω at this point by ν. We apply the usual blow-up procedure. For any ρ > 0 and a measurable set E the blown-up set is E/ρ. It is well-known that since 0 ∈ ∂∗Ω, then χ(Ω∩B(0,r))/ρ converges locally in L1(R2) to χH(ν) as ρ → 0, where H(ν) = {(x, y) : ν · (x, y) ≤ 0}. If ℓ is any line passing through 0 and ρ > 0 is arbitrary, then ℓ/ρ = ℓ. Thus, it is easy to see that each χ(Ai)/ρ, i = 1, . . . , 5, converges locally in L1(R2) to χA0i, where A0i is an angle strictly contained in H(ν) with positive opening α, strictly smaller than π. Thus,
lim_{ρ→0} |B(0, ρ) ∩ Ai| / |B(0, ρ)| = lim_{ρ→0} (1/|B(0, 1)|) ∫_{B(0,1)} χ(Ai)/ρ dx dy = α ∈ (0, π).
We notice, due to the remarks we made earlier, that these limits do not depend on the representative of u.
We will reach the desired result just for the exact representative.
A similar argument and conclusion apply in the case of any component B of the complement of the set Et, i.e.
lim_{ρ→0} |B(0, ρ) ∩ B| / |B(0, ρ)| = β > 0.
Step 3. In general, Ω need not have any normal vector at 0, i.e. ar ≠ al. We perform an additional construction reducing the general case to the situation when Ω has a tangent at 0. For this purpose we define two functions,
ϕr(x) = ϕ(x) for x > 0, ϕr(x) = ar x for x ≤ 0;   ϕl(x) = al x for x > 0, ϕl(x) = ϕ(x) for x ≤ 0.
Now, for sufficiently small fixed s > 0, we define the regions F l and F r ,
F l = {(x, y) ∈ B(0, s) : ϕl (x) < y},
F r = {(x, y) ∈ B(0, s) : ϕr (x) < y}.
Of course, due to the differentiability of ϕl and ϕr at x = 0 we have 0 ∈ ∂∗F l and 0 ∈ ∂∗F r. Moreover, for each set Ai, i = 1, . . . , 5, we can choose F r or F l such that
Ai ∩ F j = Ai   and   F r ∩ F l = Ω ∩ B(0, s).
Thus, we can consider F j in place of Ω ∩ B(0, ρ).
Step 4. We notice that χΩ/ρ = χ(F r )/ρ ·χ(F l )/ρ . Hence, χΩ/ρ converges locally in L1 to an angle of measure
γ < π. By the argument of Step 2, we deduce that
lim_{ρ→0} |B(0, ρ) ∩ Ai| / |B(0, ρ)| = α0 ∈ (0, γ).
Similarly, if B is a component of the complement of the set Et , then
lim_{ρ→0} |B(0, ρ) ∩ B| / |B(0, ρ)| = β0 ∈ (0, γ).
Step 5. We recall that by the definition of the trace, for H1 -almost every x ∈ Γ , we have
lim_{r→0} ⨍_{B(x,r)∩Ω} |v(y) − f(x)| dy = 0.   (3.21)
We set G = {x ∈ Γ : condition (3.21) holds at x}.
Let us assume that t ∈ T and x ∈ ∂Et ∩ Γ but f(x) < t, say f(x) = t − ε. Then,
0 = lim_{r→0} (1/|B(x, r) ∩ Ω|) [ ∫_{B(x,r)∩Ω∩{v<t}} |v(y) − f(x)| dy + ∫_{B(x,r)∩Ω∩{v≥t}} |v(y) − f(x)| dy ]
≥ lim sup_{r→0} (1/|B(x, r) ∩ Ω|) ∫_{B(x,r)∩Ω∩{v≥t}} |v(y) − f(x)| dy
≥ ε lim sup_{r→0} |B(x, r) ∩ Ω ∩ Et| / |B(x, r) ∩ Ω|.
However, we have shown in the previous steps that lim_{r→0} |B(x, r) ∩ Ω ∩ Et| / |B(x, r) ∩ Ω| > 0.
By a similar token, if f (x) = t + ε, we get that
0 ≥ ε lim sup_{r→0} |(B(x, r) ∩ Ω) \ Et| / |B(x, r) ∩ Ω|.
However, we have shown in the previous steps that lim_{r→0} |(B(x, r) ∩ Ω) \ Et| / |B(x, r) ∩ Ω| > 0. Thus, we have reached a contradiction in case x ∈ G. Hence, f(x) = t.
Step 6. We take any x0 ∈ ∂Et ∩ Γ , but x0 ∈ Γ \ G. The set Γ \ G has H1 -measure zero, but it need not
be empty. We fix x0 ∈ Γ \ G. We may take sequences {xn } and {yn } converging to x0 and such that all xn ,
yn belong to G and xn ∈ Etn , tn ≤ t and yn ∈ Esn , sn ≥ t. This is possible for all points on Γ except for
local minima and maxima, but in those cases Et ∩ B(0, ρ) ∩ Ω = B(0, ρ) ∩ Ω or Et ∩ B(0, ρ) ∩ Ω = ∅.
Of course,
lim_{n→∞} tn = τ ≤ t,   lim_{n→∞} sn = σ ≥ t.
Continuity of f implies that σ = t = τ and f (x0 ) = t. Our claim follows.
Introducing a parametrization of Γ is advantageous for further considerations. We use the convention
that [0, L] ∋ s → x(s) is an arc length parametrization of Γ and x(0) = a, x(L) = b, and we define the
following sets,
Ba = {s ∈ (0, L) : d(x(s), Υ) = d(x(s), a)},   Bb = {s ∈ (0, L) : d(x(s), Υ) = d(x(s), b)}.
We shall write S̃ = x−1 (S), where S is defined in (3.14). Let us set sa = sup Ba , sb = inf Bb . We notice
that sa ∈ Ba , and sb ∈ Bb . Indeed, if sn is a sequence in Ba converging to sa , then after introducing the
shorthand notation xn = x(sn ), we obtain
d(x(sa), a) = lim_{n→∞} d(xn, a) = lim_{n→∞} d(xn, Υ) = d(x(sa), Υ).
It is obvious from the definition that
sa ≤ inf S̃,   sup S̃ ≤ sb.   (3.22)
We prove a more precise statement, namely equalities hold in (3.22).
Corollary 3.1. Let us suppose that S, defined in (3.14), is not empty, then using the notation introduced
above, we have that
sup Ba = max Ba ∈ Ba,   inf Bb = min Bb ∈ Bb,   max Ba = inf S̃,   min Bb = sup S̃.
Proof. It is sufficient to consider one of the inequalities in (3.22), because the argument is the same in both
cases. If sa < s0 < inf S̃, then d(x(s0), Υ) < d(x(s0), a). At the same time, d(x(s0), Υ) may not be attained at any y ∈ Υ◦. As a result, we deduce that d(x(s0), Υ) = d(x(s0), b), but this is impossible if S ≠ ∅. □

Remark. Let us stress the fact that due to a possible non-uniqueness of solutions to (3.13), it may also happen that x(sup Ba) ∈ S.
Proof of Theorem 3.2, Part (1). We assume f (a) = f (b) = inf Γ f , u is any solution to (2.5) and Et = {u ≥
t}. Due to Lemma 3.8, we know that for all t ∈ T , we have ∂Et ∩ Γ ⊂ f −1 (t). We set
TΥ = {t ∈ I : ∂Et ∩ Υ ̸= ∅},
TΓ = {t ∈ I : ∂Et ∩ Υ = ∅}.
Since each value of f, except for infΓ f and maxΓ f, is attained exactly twice, we introduce
f−1(t) = {xt, yt}   for t ∈ (infΓ f, maxΓ f).
In order to prove that TΓ and TΥ are non-empty, we take t such that t − f (a) > 0 is so small that
xt ∈ x(Ba ) and y t ∈ x(Bb ) and
d(xt , y t ) > d(xt , a) + d(b, y t ).
Hence, ∂Et must consist of the segments [xt , a] and [b, y t ], i.e. TΥ ̸= ∅.
Take xM , the strict global maximum of f , then Γ \ {xM } = Γ1 ∪ Γ2 and on each arc Γi , i = 1, 2, the
function f is monotone. Let t be such that f (xM ) − t is small. Thus, xt and y t are close to xM and
d(xt , y t ) < d(xt , a) + d(b, y t ).
Hence, ∂Et must be the segment [xt , y t ], so TΓ ̸= ∅.
We notice that if we take t ∈ TΥ, s ∈ TΓ, then the only possibility is s > t. Indeed, at least one of ∂Es, ∂Et does not intersect ∂Γ. Hence, from the inequality s < t, we would infer that Et ⋐ Es, which in turn would imply that the intersection of ∂Et and ∂Es is not empty, but this is impossible. As a result,
sup TΥ ≤ inf TΓ.
Since the strict inequality above is not possible, we conclude that
sup TΥ = inf TΓ =: τ.   (3.23)
We would like to make sure that ∂Eτ has components that intersect Υ and a component that does not
intersect Υ . In order to show this we recall that xt ∈ Γ1 and y t ∈ Γ2 . Moreover, the functions
t → xt , y t
Fig. 1. Part 1 level sets representation.
are continuous, because of the strict monotonicity of f |Γi , i = 1, 2. Thus, we infer continuity of
d(xt , a) + d(y t , b) − d(xt , y t ) =: h(t).
Since h(f (a)) < 0 and h(f (xM )) > 0, we deduce the existence of a point where h vanishes. This implies our
claim and the uniqueness of τ .
Fig. 1 shows various level sets of the solutions in this case. The blue lines represent one level set ∂Et for
t ∈ TΥ, the green line corresponds to a level set ∂Et′ for t′ ∈ TΓ, and the shaded region represents the fat τ-level set of u.
We claim that τ is the number postulated in the statement of Part (1). In order to check it, let us take any t > τ, where τ is defined in (3.23); then the assumptions of Lemma 3.6 are satisfied, hence Et ⋐ Eτ and ∂Et intersects Γ. If we take a sequence sn > τ converging to τ, then we can deduce that xsn → xτ, ysn → yτ and Esn ⋐ Eτ. Therefore,
⋃_{ϵ>0} Eτ+ϵ = Ω ∩ H+(xτ, yτ),
where H+(xτ, yτ) is the closed half plane whose boundary contains [xτ, yτ] and H+(xτ, yτ) ∩ ∂Γ = ∅.
On the other hand, for t < τ , we have ∂Et ∩ Υ ̸= ∅. Thus, ∂Et ̸= [xt , y t ]. Since t < τ , there are pt , q t ∈ Υ
such that
∂Et = [xt , pt ] ∪ [y t , q t ].
Let us suppose that tn < τ converges to τ . Then,
ptn → pτ,   qtn → qτ,
and pτ, qτ ∈ Υ. As a result, τ ∈ TΥ. At the same time,
⋂_{ϵ>0} Eτ−ϵ = Ω ∩ H+(xτ, pτ) ∩ H+(yτ, qτ),
where the closed half planes H+(xτ, pτ) and H+(yτ, qτ) are chosen so that the intersection of their interiors contains neither a nor b. We see that
{x : u(x) = τ} = ⋂_{ϵ>0} Eτ−ϵ \ ⋃_{ϵ>0} Eτ+ϵ = (Ω ∩ H+(xτ, pτ) ∩ H+(yτ, qτ)) \ (Ω ∩ H+(xτ, yτ)).
This set has a positive Lebesgue measure, i.e. the τ -level set is fat. It is also easy to see that there are no
other fat level sets.
Fig. 2. Part 3 level sets representation.
Now, we establish the uniqueness of solutions. Indeed, if u is any solution and t < τ , then ∂Et = [xt , y t ],
where xt ∈ Ba , y t ∈ Bb . If t is greater than τ , then t ∈ TΓ and again the structure of ∂Et is known. We can see
that for each point x in Ω \ {u = τ} there exists a unique ∂Et such that x ∈ ∂Et. Moreover, τ is uniquely determined; as a result, ∂Et0 = ∂Et for every t. Hence, we can apply Lemma 3.7. This ends the proof of Part (1). □

Proof of Part (3). We proceed as in the proof of Part (1). For any solution u, we set Et = {u ≥ t}. Let
us examine ∂Et for t ∈ (f (a), f (xM )). Due to Lemma 3.8, we know that ∂Et ∩ Γ ⊆ f −1 (t). We have
f −1 (t) = {x1t , x2t }. Actually, we claim that ∂Et ∩ Γ = f −1 (t). If it were otherwise, then ∂Et would have
contained a point y from Υ . We notice that (3.18) implies that y = a or y = b. Otherwise ∂Et would
intersect Ef (a) in Ω , but this is not possible. Thus, ∂Et must consist of [x1t , c] and [x2t , c], where c = a
or c = b. But due to the triangle inequality d(x1t , x2t ) < d(x1t , c) + d(x2t , c), contrary to the minimality of
P (Et , Ωo ). This means that the triangle with vertices x0 , a and b is contained in {u = f (a)}. Hence, the
f (a)-level set is fat. We also deduce continuity of u at points belonging to Υ .
Fig. 2 represents the level sets of the solutions u in this case. The green and the blue lines represent two
distinct level sets ∂Et1, ∂Et2, and the shaded region corresponds to the fat τ-level set, with τ = f(a).
Uniqueness follows by the same argument as in Part (1), thus the proof of Part (3) is complete.
Proof of Part (2). Suppose u0 is given by Theorem 3.1 and Et0 are its superlevel sets. By Lemma 3.1, ∂Et0 are minimal surfaces. We observe that f−1(t) consists of a single point xt, t ∈ (infΓ f, supΓ f). The components of ∂Et0 are line segments ℓt = [xt, y], which must intersect ∂Ω; hence y ∈ Υ and, by Lemma 3.8, ∂Et0 ∩ Γ = {xt} ⊂ Γ.
Obviously, there is x0 ∈ Γ such that d(x0 , a) = d(x0 , b), hence x0 is an element of D. Moreover, by
Proposition 3.3 we know that D is at most countable. Momentarily we show that elements of f (Φ(D)) have
fat level sets.
If x0 ∈ D and Φ(x0) ⊃ {y1, y2}, then the set bounded by [x0, y1], [x0, y2] and the arc y1y2, denoted by △x0, has positive measure and u|△x0 = f(x0). Next, we check that for each x in Ω \ ⋃xi∈D △xi there is Et0 such that x ∈ ∂Et0. If it were otherwise, then we would have two cases to consider:
(1) min_{t∈I} dist(x, ∂Et0) > 0   or   (2) inf_{t∈I} dist(x, ∂Et0) = 0.
The first case implies existence of a ball B(x0 , r) such that no [xt , y t ] intersects it. Then, we would see that
B(x0 , r) is contained in a fat level set, but we are considering Ω with all fat level sets removed, so we have
Fig. 3. Part 2 level sets representation.
reached a contradiction. In the second case, there would be a sequence tn converging to t0 and such that
dist(x, ∂Et0n ) → 0. There is also a sequence xn converging to x such that u(xn ) = tn . Since u0 constructed
in Theorem 3.1 is continuous in Ω , we deduce that u0 (x) = t0 . Since tn ̸= t0 , then we deduce that x ∈ ∂Et00 ,
contrary to the assumption.
Fig. 3 represents different level sets of the solution. Each line in this case corresponds to a distinct level
set, and the shaded regions are fat level sets of u0; notice that in this case they may not be unique.
Since we have discovered the structure of the level sets of continuous solutions in Ω, uniqueness of solutions follows in this class of functions by the argument used in the earlier parts. □

3.4. An explicit example with piecewise constant data
We present an example of a solution to (2.5) and (2.6), when Ω is strictly convex and with a smooth
boundary. We notice that Definition 2.2 permits g to be a distribution, e.g.
g = ∑_{i=1}^{3} ci δai,   (3.24)
where δp is a delta function located at p ∈ ∂Ω. Data given by this formula are admissible as long as ∫∂Ω dg = 0, i.e. ∑_{i=1}^{3} ci = 0. From the point of view of Mechanics, this condition means that the total force acting on ∂Ω is zero, hence Ω is at rest.
We assume that ∂Ω is parametrized by the arclength parameter, [0, L) ∋ s ↦ x(s) ∈ ∂Ω. In this case, g, given by (3.24), may be written as
g = (α1 + α2) δx(s1) − α2 δx(s2) − α1 δx(0).   (3.25)
Since g = ∂f/∂τ, then f is given by the following formula,
f = 0 on [0, s1),   f = α1 + α2 on [s1, s2),   f = α1 on [s2, L).   (3.26)
We may assume that α1 and α2 are positive; in fact, (3.26) is the only possible datum up to the exact values and locations of s1, s2, L. Other admissible functions f are obtained from (3.26) by a shift in s or a
change of the orientation of the curve.
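Let us verify directly that (3.26) is consistent with (3.25); this elementary check is included only for the reader's convenience. Since f is piecewise constant along ∂Ω, its tangential derivative in the sense of distributions is the sum of the jumps of f times delta functions at the jump points (the jump at x(0) compares the values on [s2, L) and [0, s1)):
∂f/∂τ = (α1 + α2 − 0) δx(s1) + (α1 − (α1 + α2)) δx(s2) + (0 − α1) δx(0) = (α1 + α2) δx(s1) − α2 δx(s2) − α1 δx(0) = g.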
If in the problem (2.1) data f are given by (3.26), then [21] guarantees the existence but not the uniqueness
of solutions. We can show existence differently, by approximating f with continuous data and passing to the
limit. To do this, we need the following stability result, proved in [23].
Proposition 3.4. If un is a sequence of least gradient functions converging in L1(Ω) to a function u, then u is
a function of least gradient.
We summarize the result of this Subsection in the following proposition.
Proposition 3.5. Let us suppose that Ω is bounded and strictly convex and f is given by (3.26). Then, there
exists a unique solution to
min { ∫Ω |Du| : u ∈ BV(Ω), T u = f },   (3.27)
and its superlevel sets are given by formula (3.29).
Proof. Let us write xi = x(si ), i = 0, 1, 2, where x(·) is the parametrization used above. We take a sequence
fϵ of continuous functions on ∂Ω such that f = fϵ on {x ∈ ∂Ω : dist(x, xi ) ≥ ϵ, i = 0, 1, 2} and uϵ the
corresponding solutions to
min { ∫Ω |Duϵ| : uϵ ∈ BV(Ω), T uϵ = fϵ }.   (3.28)
We can see that the structure of ∂{uϵ ≥ t} does not change if the intersection ∂{uϵ ≥ t} ∩ ∂Ω is at a distance greater than ϵ from the points x0, x1, x2. Thus,
∥uϵ − uδ∥L1(Ω) ≤ C max{ϵ, δ} (|x0 − x1| + |x0 − x2| + |x2 − x1|).
As a result, uϵ → u in L1. Moreover, we can see that T u = f. By the stability theorem in Proposition 3.4, u is a function of least gradient satisfying our boundary conditions, hence u is a solution to (3.27). □

We now describe explicitly the level sets of the solutions found in Proposition 3.5. We notice that we have
three distinguished points on ∂Ω . They are,
x0 = x(0),
x1 = x(s1 ),
x2 = x(s2 ).
If u is a solution to (3.27), we have three candidates for the level sets ∂{u ≥ t}
[x0 , x1 ],
[x1 , x2 ],
[x0 , x2 ].
However, some choices are excluded.
Corollary 3.2. If u is a solution to (3.27), then ∂{u ≥ t} cannot be a triangle for any t.
Proof. Let us suppose the contrary, i.e. there is t ∈ R such that ∂{u ≥ t} is a triangle △def , then by [21],
we have



∇u
=
div z =
z · ν.
0=
div
|∇u|
△def
∂△def
△def
Here, z is a vector field with ∥z∥∞ ≤ 1 and agreeing with
∇u
|∇u|
on the sides of △def , see [21] for more details.
We notice that on the boundary of △def , vector field z has to be equal to ±ν, where ν is the outer normal
to triangle △def . Thus,

z · ν = ±|de| ± |ef | ± |ef |.
∂△def
The last sum may never vanish due to the triangle inequality.
Therefore, we conclude that
\[
\partial\{u \ge t\} = \begin{cases}
\emptyset & t \le 0 \text{ or } t > \alpha_1 + \alpha_2,\\
[x_0, x_1] & t \in (0, \alpha_1],\\
[x_1, x_2] & t \in (\alpha_1, \alpha_1 + \alpha_2].
\end{cases} \tag{3.29}
\]
We may exploit the argument developed in the course of the proof of Theorem 3.2 to claim uniqueness despite the lack of continuity.
Here, the solution to the FMD problem with three point loads is not obvious at first glance. The material is concentrated along two rods determined by the values of the load, not the points of their application.
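For the reader's orientation we note, as a sketch rather than a statement, how the solution itself can be recovered from (3.29). Writing Ω_1 for the part of Ω cut off by the chord [x_0, x_1] whose boundary contains the arc {x(s) : s ∈ (s_1, L)}, and Ω_2 ⊂ Ω_1 for the part cut off by [x_1, x_2] whose boundary contains the arc {x(s) : s ∈ (s_1, s_2)}, the layer-cake formula gives
\[
u = \int_0^{\infty} \chi_{\{u \ge t\}}\, dt = \alpha_1\, \chi_{\Omega_1} + \alpha_2\, \chi_{\Omega_2} \quad \text{a.e. in } \Omega,
\]
since, by (3.29), {u ≥ t} coincides with Ω for t ≤ 0, so u ≥ 0 a.e.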
4. Least gradient problem on rectangles
In this section, we consider the least gradient problem when Ω is a rectangle. Notice that in this case, Ω does not satisfy the assumptions in [26]. We study the case when the data f are continuous and strictly monotone on two pieces of the boundary and prove existence of continuous least gradient solutions. Next, we show the uniqueness of solutions in the BV setting. Notice that for general boundary data, least gradient solutions might be neither continuous nor unique; see [21, Example 2.7]. We state the main result of this section in the following theorem.
Theorem 4.1. We are given a rectangle Ω = (−L, L) × (−h, h). We denote by v_i = {(−1)^{1+i} L} × (−h, h) and h_i = (−L, L) × {(−1)^{1+i} h}, i = 1, 2, the sides of Ω. We write ∂Ω = Γ_1 ∪ Γ_2, where Γ_1 = h_1 ∪ v_1, Γ_2 = h_2 ∪ v_2. We are given a continuous function f such that each restriction f|_{Γ_i}, i = 1, 2, is strictly increasing. Then, there exists a unique continuous solution to
\[
\min\left\{ \int_\Omega |Du| : u \in BV(\Omega),\ T_{\partial\Omega} u = f \right\}, \tag{4.1}
\]
where the boundary condition is understood in terms of the trace of BV functions.
We obtain the same existence and uniqueness results if f |Γi is decreasing.
Proof of Theorem 4.1. We proceed in a number of steps.
Step 1. A priori, a solution to (4.1) may not exist since Ω is only convex, so we consider the following sequence of auxiliary problems in strictly convex domains. Let Ωn be a region bounded by four properly chosen circular arcs passing through the vertices of Ω and such that the Hausdorff distance between Ωn and Ω is less than 1/n. In other words,
\[
\partial\Omega_n = \partial\Omega + \nu\gamma_n,
\]
where ν is the outer normal to ∂Ω (except for the corners) and γn is a smooth function (away from the corners) with 0 ≤ γn ≤ 1/n. We define the functions fn on ∂Ωn by fn(x + νγn(x)) := f(x), x ∈ ∂Ω. By [26], the following problem,
\[
\min\left\{ \int_{\Omega_n} |Dv| : v \in BV(\Omega_n),\ T_{\partial\Omega_n} v = f_n \right\}, \tag{4.2}
\]
has a unique continuous solution vn. Here, T_{∂Ωn} : BV(Ωn) → L^1(∂Ωn) designates the trace operator.
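A minimal computational sketch of one admissible choice of the arcs (an illustration under our own assumptions, not a construction taken from the paper): over each side of Ω take the circular arc through the two endpoints of that side which bulges outward by a prescribed sagitta δ ≤ 1/n; the radius of the arc is then determined by the chord length.

import math

def arc_radius(chord_length, sagitta):
    """Radius of the circular arc through the endpoints of a chord of the given
    length whose maximal outward bulge (sagitta) equals `sagitta`.
    Elementary geometry: R = s/2 + c^2/(8 s)."""
    c, s = chord_length, sagitta
    return s / 2.0 + c * c / (8.0 * s)

# hypothetical rectangle (-L, L) x (-h, h) and n; bulge each side by 1/(2n) < 1/n
L, h, n = 2.0, 1.0, 10
for side, chord in (("horizontal side", 2 * L), ("vertical side", 2 * h)):
    print(side, "chord =", chord, "radius =", round(arc_radius(chord, 1.0 / (2 * n)), 3))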
Step 2. We set u_n = v_n χ_Ω. Since f_n is continuous on ∂Ω_n, by [9, Theorem 3.33] there exists F_n ∈ W^{1,1}(Ω_n) such that F_n|_{∂Ω_n} = f_n and \(\|DF_n\|_{L^1(\Omega_n)} \le C(\Omega_n)\|f_n\|_{L^1(\partial\Omega_n)}\). In fact, the argument in [9, Theorem 3.33] works not only for smooth domains, but also for Lipschitz ones. We deduce the following uniform bound,
\[
\|DF_n\|_{L^1(\Omega_n)} \le C(\Omega_n)\|f_n\|_{L^1(\partial\Omega_n)} \le C(\Omega_n)\|f_n\|_{L^\infty(\partial\Omega_n)}\,|\partial\Omega_n| \le C_0(\Omega)\|f_n\|_{L^\infty(\partial\Omega_n)} = C_0(\Omega)\|f\|_{L^\infty(\partial\Omega)},
\]
with the constant C_0 independent of n. By the definition of u_n and v_n, we get that
\[
\int_\Omega |Du_n| \le \int_{\Omega_n} |Dv_n| \le \|DF_n\|_{L^1(\Omega_n)} \le C_0(\Omega)\|f\|_{L^\infty(\partial\Omega)} < \infty.
\]
Thus, there exists a subsequence, still denoted by u_n, converging in L^1(Ω) to u and
\[
\liminf_{n\to\infty} \int_\Omega |Du_n| \ge \int_\Omega |Du|.
\]
Step 3. We prove that u is a function of least gradient with a given trace. Let g_n be the trace of u_n on ∂Ω. Since v_n is continuous on Ω̄_n and u_n = v_n χ_Ω, the trace is defined as follows:
\[
g_n(z) = \lim_{y\to z,\ y\in\Omega} u_n(y) = v_n(z).
\]
By Proposition 3.4, it is enough to show that for every n, u_n is a function of least gradient over Ω. For this purpose, we will use the following proposition.
Proposition 4.1. Let Ω_1 ⊂ Ω_2 be open sets with Lipschitz boundary. If u ∈ BV(Ω_2) is of least gradient in Ω_2, then u|_{Ω_1} is of least gradient in Ω_1.
Proof. Suppose that u|_{Ω_1} is not a function of least gradient in Ω_1. Then there exists v ∈ BV(Ω_1) such that T_{∂Ω_1} u = T_{∂Ω_1} v and |Dv|(Ω_1) < |Du|(Ω_1). Let us define ũ by the formula
\[
\tilde u(x) = \begin{cases} v(x) & \text{if } x \in \Omega_1,\\ u(x) & \text{if } x \in \Omega_2 \setminus \Omega_1. \end{cases}
\]
Let ũ^+ and ũ^- be the traces of ũ on ∂Ω_1 as a function in BV(Ω_2 \ Ω_1) and BV(Ω_1), respectively. It follows from [13, Section 5.4] that the total variation of ũ equals
\[
\begin{aligned}
|D\tilde u|(\Omega_2) &= |Dv|(\Omega_1) + |Du|(\Omega_2\setminus\Omega_1) + \int_{\partial\Omega_1} |\tilde u^+ - \tilde u^-|\, d\mathcal{H}^1\\
&= |Dv|(\Omega_1) + |Du|(\Omega_2\setminus\Omega_1) + \int_{\partial\Omega_1} |u^+ - T_{\partial\Omega_1} v|\, d\mathcal{H}^1\\
&= |Dv|(\Omega_1) + |Du|(\Omega_2\setminus\Omega_1) + \int_{\partial\Omega_1} |u^+ - T_{\partial\Omega_1} u|\, d\mathcal{H}^1\\
&< |Du|(\Omega_1) + |Du|(\Omega_2\setminus\Omega_1) + \int_{\partial\Omega_1} |u^+ - u^-|\, d\mathcal{H}^1 = |Du|(\Omega_2).
\end{aligned}
\]
But this contradicts the fact that u is of least gradient in Ω_2.
We apply this proposition to Ω_1 = Ω, Ω_2 = Ω_n, and u equal to v_n. Notice that in this case, due to the continuity of v_n in Ω̄_n, the one-sided traces v_n^+, v_n^- are equal. Since the u_n are the restrictions of v_n to Ω, we conclude that the u_n are minimizers of the following problem
\[
\min\left\{ \int_\Omega |Dv| : v \in BV(\Omega),\ Tv = g_n \right\},
\]
where Tv is the trace on ∂Ω. Hence, the L^1-limit u is a least gradient function.
Step 4. We show now that un converges uniformly to a continuous function w, which will be defined in
(4.3). Due to Step 3, w = u a.e. and w is a least gradient function.
We have that ∂Ω = Γ1 ∪ Γ2 , with Γ1 = h1 ∪ v1 and Γ2 = h2 ∪ v2 . We are given f continuous and strictly
increasing on Γi , i = 1, 2.
For every t ∈ (min f, max f), we denote by ℓ_t the line segment with endpoints x^t, y^t such that x^t ∈ Γ_1, y^t ∈ Γ_2, and f(x^t) = f(y^t) = t. Notice that by the monotonicity of f, the segments ℓ_t are disjoint. The lemma below shows that the line segments ℓ_t fill out the region Ω.
Lemma 4.1. For every z ∈ Ω there exists a unique ℓ_t, t ∈ (min f, max f), such that z ∈ ℓ_t.

Proof. Without loss of generality, we may assume that z is closer to Γ_2, i.e. below the diagonal joining the endpoints of Γ_1. Let s be the arclength parameter of Γ_1, its range being (0, 2(L + h)), and let x(s) be the parametrization of Γ_1 such that
\[
\lim_{s\to 0^+} f(x(s)) = \min_{\partial\Omega} f, \qquad \lim_{s\to (2(L+h))^-} f(x(s)) = \max_{\partial\Omega} f.
\]
For every s, we draw the line ℓ(x(s), z) joining x(s) and z and we call y(s) its intersection with Γ_2. We notice that y(s) is continuous and
\[
\lim_{s\to 0^+} \big(f(x(s)) - f(y(s))\big) = \min_{\partial\Omega} f - \lim_{s\to 0^+} f(y(s)) \le 0, \qquad
\lim_{s\to (2(L+h))^-} \big(f(x(s)) - f(y(s))\big) = \max_{\partial\Omega} f - \lim_{s\to (2(L+h))^-} f(y(s)) \ge 0.
\]
Due to the strict monotonicity of the function s → f(x(s)) − f(y(s)), there exists a unique s_0 and t such that ℓ_t = [x(s_0), y(s_0)], z ∈ ℓ_t and t = f(x(s_0)) = f(y(s_0)).

We may now define the function w : Ω → R. Since for each point x ∈ Ω there exists a unique line ℓ_t, introduced above and passing through x, we set
\[
w(x) = t. \tag{4.3}
\]
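The proof of Lemma 4.1 is constructive, so the value (4.3) can also be computed numerically. The sketch below is only an illustration under hypothetical assumptions (a concrete rectangle and a boundary datum equal to the arclength measured along each Γ_i from the common corner (−L, h)); it locates the unique segment ℓ_t through a given point by bisection in t, using the fact that the segments ℓ_t sweep Ω monotonically.

import numpy as np

L, H = 2.0, 1.0
PER = 2 * (L + H)                          # arclength of each Gamma_i

def x_of(t):
    """Point of Gamma_1 (top side, then right side) at arclength t from (-L, H)."""
    return np.array([-L + t, H]) if t <= 2 * L else np.array([L, H - (t - 2 * L)])

def y_of(t):
    """Point of Gamma_2 (left side, then bottom side) at arclength t from (-L, H)."""
    return np.array([-L, H - t]) if t <= 2 * H else np.array([-L + (t - 2 * H), -H])

def w(z, tol=1e-10):
    """Value (4.3) for the illustrative datum f = arclength: the unique t with
    z on the segment l_t = [x_of(t), y_of(t)], found by bisection."""
    z = np.asarray(z, dtype=float)
    lo, hi = 0.0, PER
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        a, b = x_of(t), y_of(t)
        # the sign of this cross product tells on which side of l_t the point z lies
        side = (b[0] - a[0]) * (z[1] - a[1]) - (b[1] - a[1]) * (z[0] - a[0])
        if side > 0:                       # z lies between l_t and the corner (L, -H)
            lo = t
        else:
            hi = t
    return 0.5 * (lo + hi)

print(w((0.0, 0.0)))   # for this datum the segment through the origin is the diagonal, t = 3.0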
Lemma 4.2. The function w defined by (4.3) is continuous in Ω .
Proof. Let us denote the modulus of continuity of f by ω. Fix x_1, x_2 ∈ Ω. For each i = 1, 2 there exists a unique t_i ∈ (min f, max f) such that x_i ∈ ℓ_{t_i} = [x^{t_i}, y^{t_i}]. There are three cases to consider:
(1) both ℓ_{t_1} and ℓ_{t_2} intersect h_1 and h_2 (or v_1 and v_2);
(2) both ℓ_{t_1} and ℓ_{t_2} intersect h_1 and v_2 (or h_2 and v_1);
(3) ℓ_{t_1} intersects h_1 and h_2 while ℓ_{t_2} intersects v_1 and h_2 (or ℓ_{t_1} intersects v_1 and v_2, while ℓ_{t_2} intersects v_1 and h_2).
In case (1), we notice that
\[
|x_1 - x_2| \ge d(x_1, \ell_{t_2}) \ge \min\big\{ d(x^{t_1}, \ell_{t_2}),\, d(y^{t_1}, \ell_{t_2}) \big\} \ge \min\big\{ |x^{t_1} - x^{t_2}|,\, |y^{t_1} - y^{t_2}| \big\} \sin\alpha,
\]
where α is the smallest angle between the diagonals and the horizontal sides of Ω. Then, we obtain
\[
\min\big\{ |x^{t_1} - x^{t_2}|,\, |y^{t_1} - y^{t_2}| \big\} \le \frac{|x_1 - x_2|}{\sin\alpha}. \tag{4.4}
\]
We may estimate w(x_1) − w(x_2) as follows, assuming that |x^{t_1} − x^{t_2}| = min{|x^{t_1} − x^{t_2}|, |y^{t_1} − y^{t_2}|},
\[
|w(x_1) - w(x_2)| = |t_1 - t_2| = |f(x^{t_1}) - f(x^{t_2})| \le \omega(|x^{t_1} - x^{t_2}|) \le \omega(|x_1 - x_2|/\sin\alpha). \tag{4.5}
\]
In case (2), we may assume without loss of generality that d(x^{t_1}, (L, −h)) ≤ d(x^{t_2}, (L, −h)). This implies, by the monotonicity of f, that d(y^{t_1}, (L, −h)) ≤ d(y^{t_2}, (L, −h)). Then,
\[
|x_1 - x_2| \ge d(x_1, \ell_{t_2}) \ge \min\big\{ d(x^{t_1}, \ell_{t_2}),\, d(y^{t_1}, \ell_{t_2}) \big\} = \min\big\{ |x^{t_1} - x^{t_2}| \sin\beta_1,\, |y^{t_1} - y^{t_2}| \sin\beta_2 \big\}.
\]
Here, β_1 is the angle formed by ℓ_{t_2} with h_1, and β_2 is the angle formed by ℓ_{t_2} with v_2. Notice that
\[
\sin\beta_1 = \frac{d(y^{t_2}, (L, -h))}{|y^{t_2} - x^{t_2}|} \ge \frac{|y^{t_2} - y^{t_1}|}{\operatorname{diam}(\Omega)}
\qquad\text{and}\qquad
\sin\beta_2 = \frac{d(x^{t_2}, (L, -h))}{|y^{t_2} - x^{t_2}|} \ge \frac{|x^{t_2} - x^{t_1}|}{\operatorname{diam}(\Omega)}.
\]
Then,
\[
|y^{t_1} - y^{t_2}|\, |x^{t_1} - x^{t_2}| \le \operatorname{diam}(\Omega)\, |x_2 - x_1|.
\]
Therefore,
\[
\min\big\{ |x^{t_1} - x^{t_2}|,\, |y^{t_1} - y^{t_2}| \big\} \le \sqrt{\operatorname{diam}(\Omega)\, |x_2 - x_1|} \tag{4.6}
\]
and we similarly obtain the following estimate for |w(x_1) − w(x_2)|,
\[
|w(x_1) - w(x_2)| \le \omega\Big( \sqrt{\operatorname{diam}(\Omega)\, |x_1 - x_2|} \Big). \tag{4.7}
\]
In case (3), we know that y^{t_1}, y^{t_2} belong to h_2. Then, y^0 = \(\frac{1}{2}(y^{t_1} + y^{t_2})\) also belongs to h_2. We notice that the line ℓ((L, h), y^0) intersects the line ℓ(x_1, x_2) at a point x_0. Furthermore, |x_1 − x_2| ≥ |x_1 − x_0|, |x_2 − x_0|. Estimating |x_1 − x_0| using case (1), we get
\[
\frac{|x_1 - x_0|}{\sin\alpha} \ge \min\big\{ |x^{t_1} - (L, h)|,\, |y^{t_1} - y^0| \big\}.
\]
Using case (2), we have
\[
\sqrt{\operatorname{diam}(\Omega)\, |x_2 - x_0|} \ge \min\big\{ |(L, h) - x^{t_2}|,\, |y^0 - y^{t_2}| \big\}.
\]
Combining these estimates with (4.5) and (4.7) we obtain
\[
|w(x_1) - w(x_2)| \le |w(x_1) - w(x_0)| + |w(x_0) - w(x_2)| \le \omega(|x_1 - x_2|/\sin\alpha) + \omega\Big( \sqrt{\operatorname{diam}(\Omega)\, |x_1 - x_2|} \Big).
\]
We conclude that w is continuous. In fact, for any x_1, x_2 ∈ Ω,
\[
|w(x_1) - w(x_2)| = |t_1 - t_2| = |f(p_1) - f(p_2)| \le \omega\Big( c_1 |x_1 - x_2| + c_2 \sqrt{|x_1 - x_2|} \Big), \tag{4.8}
\]
where c_1 and c_2 depend only on Ω.
Lemma 4.3. The sequence u_n converges uniformly to w.

Proof. We evaluate |u_l(x) − u_k(x)| for x ∈ Ω. Let us set u_k(x) = t_1, u_l(x) = t_2. We notice that x ∈ ∂{u_k ≥ t_1} =: ℓ_k^{t_1} and x ∈ ∂{u_l ≥ t_2} =: ℓ_l^{t_2}. By construction, the endpoints of ℓ_k^{t_1} and ℓ_{t_1}, and respectively of ℓ_l^{t_2} and ℓ_{t_2}, are at a distance not bigger than 1/n, n = min{k, l}. Hence, we can find x_k ∈ ℓ_{t_1} (resp. x_l ∈ ℓ_{t_2}) such that d(x, x_k) ≤ 1/k (resp. d(x, x_l) ≤ 1/l).
Now, by (4.8),
\[
|u_l(x) - u_k(x)| = |w(x_k) - w(x_l)| \le \omega\Big( c_1 |x_k - x_l| + c_2 \sqrt{|x_k - x_l|} \Big) \le \omega\Big( \frac{2 c_1}{\min\{k, l\}} + \frac{c_2\sqrt{2}}{\sqrt{\min\{k, l\}}} \Big).
\]
So, u_n converges uniformly to w.
Step 5. It remains to show that w has the correct trace f .
Proof. We may assume without loss of generality that z ∈ Γ_1. Take a sequence x_n converging to z. Let ℓ_{t_n} = [x^{t_n}, y^{t_n}] be the corresponding segments such that f(x^{t_n}) = f(y^{t_n}) = t_n and x_n ∈ ℓ_{t_n}, and let ℓ_t = [z, z′] be such that f(z) = f(z′) = t. As in Step 4, we have that
\[
|w(x_n) - f(z)| = |t_n - t| \le \omega\Big( c_1 |x_n - z| + c_2 \sqrt{|x_n - z|} \Big).
\]
Letting n → ∞, we conclude that \(\lim_{\Omega\ni x\to z} w(x) = f(z)\).
Step 6. The uniqueness of solutions follows from the structure of superlevel sets of any solution to (4.1). In fact, one can prove that if v is a solution to (4.1), then ∂{v ≥ t} ∩ ∂Ω ⊆ f^{−1}(t); hence, by the minimality of the superlevel sets, we conclude that ∂{v ≥ t} = ∂{w ≥ t}, and uniqueness follows by the same argument as presented in Lemma 3.7.
This finishes the proof of Theorem 4.1.
Remark 4.1. We make the following observations.
(1) From (4.8), we notice that the constructed solutions are more regular than the boundary data. In fact, we have shown in the proof of Lemma 4.2 that if f ∈ C^α(∂Ω), then the solution w is in C^{α/2}(Ω̄); see the computation sketched after this remark.
(2) By a similar argument, one can show existence of solutions to (4.1) when the function f is strictly monotone on h_i and constant on v_i, i = 1, 2. In this case, the level sets of the solution are constructed as follows: for every x ∈ Ω, as in Lemma 4.1, there exist a unique x^t ∈ h_1 and y^t ∈ h_2 such that f(x^t) = f(y^t) = t and x lies on the line ℓ_t joining x^t and y^t. We define w(x) = t on ℓ_t. The continuity of w is deduced as in the proof of case (1) of Lemma 4.2, and the rest of the proof of Theorem 4.1 for such data f follows.
(3) The considered rectangle case might be generalized to convex sets for which the lines ℓt fill up Ω . In
general, the relation between the monotonicity of the data and the convexity of the set Ω is a very
interesting question and will be the focus of our future research.
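For completeness, here is the short computation behind item (1), offered as a sketch: if ω(r) = C r^α, then (4.8) together with |x_1 − x_2| ≤ diam(Ω) gives, for x_1, x_2 ∈ Ω̄,
\[
|w(x_1) - w(x_2)| \le C\big( c_1 |x_1 - x_2| + c_2 |x_1 - x_2|^{1/2} \big)^{\alpha}
\le C\big( c_1 \operatorname{diam}(\Omega)^{1/2} + c_2 \big)^{\alpha}\, |x_1 - x_2|^{\alpha/2},
\]
which is the claimed C^{α/2} bound.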
4.1. Back to FMD
We will use the method of the proof of Theorem 4.1 for the following FMD problem. We assume that in problem (2.2), Ω = (−L, L) × (−h, h) and g is a positive constant load l_T acting on T = [−t, t] × {h} and a positive constant load l_B acting on B = [−b, b] × {−h}, satisfying \(\int_{\partial\Omega} g \, dx = 0\), i.e.
\[
2b\, l_B - 2t\, l_T = 0. \tag{4.9}
\]
In Mechanics, in this situation we say that the load is self-equilibrated, see [12], because, as described in Section 3.4, the total force acting on the body is zero. We notice that since g = ∂f/∂τ, then
\[
f(x, y) = \begin{cases}
0, & y \in [-h, h],\ x = -L;\\
l_B \min\{(x + b)_+,\, 2b\}, & x \in [-L, L],\ y = -h;\\
l_T \min\{(x + t)_+,\, 2t\}, & x \in [-L, L],\ y = h;\\
2b\, l_B, & y \in [-h, h],\ x = L.
\end{cases}
\]
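A quick numerical illustration (with hypothetical values of L, h, t, b and the loads, chosen only so that (4.9) holds; not a computation from the paper) confirms that the four side formulas match at the corners:

# hypothetical values of L, h, t, b and the loads, chosen so that (4.9) holds
L, h, t, b = 2.0, 1.0, 0.5, 1.0
l_T, l_B = 1.0, 0.5
assert abs(2 * b * l_B - 2 * t * l_T) < 1e-12

# the four side formulas of the boundary datum f
left   = lambda y: 0.0
bottom = lambda x: l_B * min(max(x + b, 0.0), 2 * b)
top    = lambda x: l_T * min(max(x + t, 0.0), 2 * t)
right  = lambda y: 2 * b * l_B

# continuity at the four corners of the rectangle
assert left(-h) == bottom(-L) == 0.0
assert left(h) == top(-L) == 0.0
assert right(-h) == bottom(L) == 2 * b * l_B
assert abs(right(h) - top(L)) < 1e-12    # both equal 2*t*l_T, which is 2*b*l_B by (4.9)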
By (4.9), the function f is continuous. We prove in this section the following result.
Proposition 4.2. Let us suppose that Ω = (−L, L) × (−h, h) and f is given by the formula above. Then, there
exists a unique solution to (4.1).
Proof. Notice that if t = b = L, then the function f is strictly monotone on h1 , h2 and constant on v1 and
v2 , hence by the second part of Remark 4.1 a continuous solution u to (4.1) exists.
Otherwise, notice that f has the property which was the basis of the construction in the proof of Theorem 4.1: each value of f, except for the maximum and the minimum, is attained at exactly two points.

Fig. 4. Solution for the piecewise constant data.

Let us define k_ϵ : ∂Ω → R by the following formula,
\[
k_\epsilon(x, y) = \begin{cases}
-\epsilon, & y \in [-h, h],\ x = -L;\\
(x - b)_+ \dfrac{\epsilon}{L - b} + (x + b)_- \dfrac{\epsilon}{L - b}, & x \in [-L, L],\ y = -h;\\
(x - t)_+ \dfrac{\epsilon}{L - t} + (x + t)_- \dfrac{\epsilon}{L - t}, & x \in [-L, L],\ y = h;\\
\epsilon, & y \in [-h, h],\ x = L.
\end{cases}
\]
Notice that kϵ is continuous on ∂Ω . Moreover, kϵ is constant on v1 , v2 , and kϵ = 0 on the segments [−t, t]×{h}
and [−b, b] × {−h}.
We define the function f_ϵ = f + k_ϵ. Notice that f_ϵ is continuous, strictly monotone on h_i and constant on v_i for i = 1, 2; then, by Remark 4.1, there exists a unique continuous function u_ϵ which solves the least gradient problem on the rectangle Ω with trace f_ϵ. As described in Remark 4.1, the level sets of u_ϵ are constructed explicitly.
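As a small self-contained check (with the same hypothetical parameters as in the sketch above, purely for illustration), on the bottom side h_2 the perturbed datum f_ϵ = f + k_ϵ is indeed strictly increasing for every ϵ > 0:

import numpy as np

# hypothetical parameters with b < L, and some eps > 0
L, b, l_B, eps = 2.0, 1.0, 0.5, 0.1

f_bottom = lambda x: l_B * min(max(x + b, 0.0), 2 * b)
k_bottom = lambda x: (max(x - b, 0.0) + min(x + b, 0.0)) * eps / (L - b)

xs = np.linspace(-L, L, 2001)
vals = np.array([f_bottom(x) + k_bottom(x) for x in xs])
assert np.all(np.diff(vals) > 0)   # f_eps is strictly increasing along the bottom side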
Let us denote by Q_1 the quadrilateral with vertices (−L, h), (−t, h), (−b, −h), (−L, −h), by Q_2 the quadrilateral with vertices (t, h), (L, h), (L, −h), (b, −h), and by Q the quadrilateral with vertices (−t, h), (t, h), (b, −h), (−b, −h); see Fig. 4. Notice that
\[
\begin{aligned}
-\epsilon \le f_\epsilon \le 0 &\quad\text{on } [-L, -t] \times \{h\} \text{ and } [-L, -b] \times \{-h\};\\
0 \le f_\epsilon \le 2t\, l_T &\quad\text{on } [-t, t] \times \{h\} \text{ and } [-b, b] \times \{-h\};\\
2t\, l_T \le f_\epsilon \le 2t\, l_T + \epsilon &\quad\text{on } [t, L] \times \{h\} \text{ and } [b, L] \times \{-h\}.
\end{aligned}
\]
Therefore, by the construction of u_ϵ, we obtain that
\[
-\epsilon \le u_\epsilon \le 0 \ \text{ on } Q_1, \qquad 0 \le u_\epsilon \le 2t\, l_T \ \text{ on } Q, \qquad 2t\, l_T \le u_\epsilon \le 2t\, l_T + \epsilon \ \text{ on } Q_2.
\]
Moreover, uϵ (x, y) does not depend on ϵ for every (x, y) ∈ Q.
We define the function u as follows:
\[
u(x, y) = \begin{cases}
0 & \text{on } Q_1,\\
u_\epsilon & \text{on } Q,\\
2t\, l_T & \text{on } Q_2.
\end{cases}
\]
We see that u is continuous in Ω and T_{∂Ω} u = f. It remains to show that u is a least gradient function. In fact, we notice that
\[
0 \le |u_\epsilon - u| \le \epsilon \ \text{ in } \Omega \setminus Q
\]
and u_ϵ = u in Q. Then, u_ϵ converges to u uniformly, hence our claim follows from Proposition 3.4.
We notice that the structure of a solution to (2.5) is simple: the sets ∂{u ≥ t} are intervals connecting appropriate points on T and B. Moreover, the load g defining f need not be piecewise constant.
This example tells us in a rigorous way how the material should be distributed in a region determined by the position of the loads. Its boundary consists of line segments, i.e. minimal surfaces, see Lemma 3.1. Physically, the construction of the solutions tells us that the material occupies the region bounded by the support of the loads and the minimal surfaces spanned by the boundaries of this support.
Acknowledgments
The work of Piotr Rybka was in part supported by the Research Grant No 2013/11/B/ST8/04436
financed by the National Science Centre, Poland, entitled: Topology optimization of engineering structures.
An approach synthesizing the methods of: free material design, composite design and Michell-like trusses.
The work of Ahmad Sabra was in part supported by the Research Grant No 2015/19/ST1/02618 financed
by the National Science Centre, Poland, entitled: Variational Problems in Optical Engineering and Free
Material Design.
All the authors thank the referee for his/her insightful comments, which helped to improve the paper.
The authors also thank Professor Tomasz Lewiński of the Warsaw University of Technology for stimulating
discussions and constant encouragement.
This project has received funding from the European Union’s Horizon 2020 research and
innovation program under the Marie Sklodowska-Curie grant agreement No 665778.
References
[1] L. Ambrosio, N. Fusco, D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, in: Oxford
Mathematical Monographs, The Clarendon Press, Oxford University Press, New York, 2000.
[2] F. Andreu, C. Ballester, V. Caselles, J.M. Mazón, The Dirichlet problem for the total variation flow, J. Funct. Anal. 180
(2001) 347–403.
[3] L. Anetor, A Survey on Minimal Surfaces Embedded in Euclidean Space, in: Applied Sciences Monographs, vol. 11,
Geometry Balkan Press, Bucharest, 2015.
[4] E. Bombieri, E. De Giorgi, E. Giusti, Minimal cones and the Bernstein problem, Invent. Math. 7 (3) (1969) 243–268.
[5] G. Bouchitté, G. Buttazzo, Characterization of optimal shapes and masses through Monge–Kantorovich equation, J. Eur.
Math. Soc. (JEMS) 3 (2001) 139–168.
[6] G.-Q. Chen, H. Frid, On the theory of divergence-measure fields and its applications, Bol. Soc. Br. Mat.- Bull. Braz. Math.
Soc. 32 (2) (2001) 401–433.
[7] G.-Q. Chen, H. Frid, Extended divergence-measure fields and the Euler equations for gas dynamics, Comm. Math. Phys.
236 (2003) 251–280.
[8] S. Czarnecki, T. Lewiński, A stress-based formulation of the free material design problem with the trace constraint and
multiple load conditions, Struct. Multidiscip. Optim. 49 (2014) 707–731.
[9] F. Demengel, G. Demengel, Functional Spaces for the Theory of Elliptic Partial Differential Equations, Springer, EDP
Sciences, London, Les Ulis, 2012.
[10] U. Dierkes, S. Hildebrandt, A.J. Tromba, Regularity of Minimal Surfaces, Revised and enlarged second ed., in: Grundlehren
der Mathematischen Wissenschaften, vol. 340, Springer, Heidelberg, 2010, With assistance and contributions by A. Küster.
[11] M. Dos Santos, Characteristic functions on the boundary of a planar domain need not be traces of least gradient functions.
preprint, arXiv:1510.05081.
[12] G. Duvaut, J.-L. Lions, Inequalities in mechanics and physics, in: Grundlehren der Mathematischen Wissenschaften, vol.
219, Springer-Verlag, Berlin-New York, 1976.
[13] L.C. Evans, R.F. Gariepy, Measure Theory and Fine Properties of Functions, in: Studies in Advanced Mathematics, CRC
Press, Boca Raton, FL, 1992.
[14] H. Federer, Geometric Measure Theory, in: Die Grundlehren der Mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York, 1969.
[15] M. Giaquinta, G. Modica, J. Soucek, Functionals with linear growth in the calculus of variations I, Comment. Math. Univ.
Carolin. 20 (1) (1979) 143–156.
[16] E. Giusti, Minimal Surfaces and Functions of Bounded Variation, in: Monogr. Math., vol. 80, Birkhäuser, Basel, 1984.
[17] W. Górny, Planar least gradient problem: existence, regularity and anisotropic case, arXiv:1608.02617.
[18] N. Hoell, A. Moradifam, A. Nachman, Current density impedance imaging of an anisotropic conductivity in a known
conformal class, SIAM J. Math. Anal. 46 (2014) 1820–1842.
[19] M. Kočvara, J. Zowe, Free material optimization: An overview, in: Abul Hasan Siddiqi, Michal Kočvara (Eds.), Trends
in Industrial and Applied Mathematics, Proceedings of the 1st International Conference on Industrial and Applied
Mathematics of the Indian Subcontinent, Kluwer Acad. Publ., Dordrecht, 2002.
[20] J.M. Mazón, The Euler–Lagrange equation for the anisotropic least gradient problem, Nonlinear Anal. RWA 31 (2016)
452–472.
[21] J.M. Mazón, J.D. Rossi, S. Segura de León, Functions of least gradient and 1-harmonic functions, Indiana Univ. Math. J.
63 (4) (2014) 1067–1084.
[22] W.H. Meeks, J. Pérez, A Survey on Classical Minimal Surface Theory, in: University Lecture Series, vol. 60, American
Mathematical Society, Providence, RI, 2012.
[23] M. Miranda, Comportamento delle successioni convergenti di frontiere minimali, Rend. Semin. Mat. Univ. Padova 38
(1967) 238–257.
[24] G. Spradlin, A. Tamasan, Not all traces on the circle come from functions of least gradient in the disk, Indiana Univ.
Math. J. 63 (6) (2014) 1819–1837.
[25] E.M. Stein, Singular Integrals and Differentiability Properties of Functions, in: Princeton Mathematical Series, No. 30,
Princeton University Press, Princeton, NJ, 1970.
[26] P. Sternberg, G. Williams, W.P. Ziemer, Existence, uniqueness, and regularity for functions of least gradient, J. Reine
Angew. Math. 430 (1992) 35–60.