Longest minimal length partitions

This article provides numerical evidence that, under a volume constraint, the ball is the set which maximizes the perimeter of the least-perimeter partition into cells with prescribed areas. We introduce a numerical maximization algorithm which performs multiple optimization steps at each iteration to approximate minimal partitions. Using these partitions we compute perturbations of the domain which increase the minimal perimeter. The initialization of the optimal partitioning algorithm uses capacity-constrained Voronoi diagrams. A new algorithm is proposed to identify such diagrams by computing the gradients of the areas and perimeters of the Voronoi cells with respect to the Voronoi points.


Introduction
In [18] the authors answer a question raised by Pólya in [38] and prove that among planar convex sets of given area the disk maximizes the length of the shortest area-bisecting curve. Denote by Ω ⊂ R^d an open, connected region with Lipschitz boundary. Consider c ∈ (0, 1) and denote by |·| the usual Lebesgue measure (area in 2D, volume in 3D). Given Ω and c, define the shortest fence set to be

SF(Ω, c) ∈ argmin{Per_Ω(ω) : ω ⊂ Ω, |ω| = c|Ω|}. (1)

In other words, SF(Ω, c) is one subset ω ⊂ Ω which minimizes the relative perimeter Per_Ω(ω) when the measure |ω| is fixed to c|Ω|. In the following, the relative perimeter of SF(Ω, c) is denoted by

I(Ω, c) = Per_Ω(SF(Ω, c)) = min{Per_Ω(ω) : ω ⊂ Ω, |ω| = c|Ω|}. (2)

In the literature the mapping c → I(Ω, c) is sometimes called the isoperimetric profile of the set Ω. The paper [18] cited above solves the problem of maximizing I(Ω, c) with respect to Ω,

max{I(Ω, c) : |Ω| = v_d}, (3)

in dimension two for c = 1/2. In the following v_d denotes the volume of the unit ball in R^d. The choice of the volume constraint |Ω| = v_d does not reduce the generality of the problem, since changing this constant only rescales the solution via a homothety. Classical details regarding the existence of the sets SF(Ω, c) and the definition of the relative perimeter Per_Ω(·) are recalled in the next section. This work was initiated by the note [29] published on the French CNRS website Images des mathématiques, where it is asked what happens to the solution of (3) when the parameter c varies in (0, 1). This conjecture is attributed to Wichiramala [45] and the article [36] on F.
Morgan's blog presents an extensive discussion regarding the history of the problem. The conjecture was partially solved in dimension two in the following works:
• In [5] the authors prove the conjecture in the plane for small area fractions. The article contains many interesting results related to relative isoperimetric sets.
• In [44] the authors prove the conjecture in the plane for domains symmetric with respect to both coordinate axes and for perturbations of the unit disk.
Therefore the conjecture remains unanswered for large area fractions, except for the case c = 1/2. Moreover, other generalizations of this problem can be investigated. It is possible, for instance, to consider the analogous problem in the case of partitions of shortest total boundary measure. Given Ω ⊂ R^d and n > 1 consider (ω_1, ..., ω_n) to be a partition of Ω, in the sense that the union of the ω_i, i = 1, ..., n, is Ω and ω_i ∩ ω_j = ∅ for i ≠ j. Given a vector c = (c_1, ..., c_n) ∈ R^n with Σ_{i=1}^n c_i = 1, consider the shortest partition of Ω with volume constraints c to be

SP(Ω, c) ∈ argmin{Σ_{i=1}^n Per_Ω(ω_i) : (ω_i) partition of Ω, |ω_i| = c_i|Ω|}. (4)

In the following we define the isoperimetric profile of a partition given by the constraints c by

PI(Ω, c) = min{Σ_{i=1}^n Per_Ω(ω_i) : (ω_i) partition of Ω, |ω_i| = c_i|Ω|}. (5)

In other words PI(Ω, c) is the minimal total relative perimeter of a partition with volume constraints given by c. Now it is possible to formulate the problem

max{PI(Ω, c) : |Ω| = v_d}, (6)

where the total relative perimeter of the shortest partition with constraints c is maximized when Ω has fixed volume. It is obvious that (3) is a particular case of (6) by considering n = 2 and c = (c, 1 − c).
In this paper, problems (3) and (6) are investigated from both numerical and theoretical points of view. In order to approximate solutions of these problems multiple issues need to be addressed:
• Compute reliably a numerical approximation of the shortest partition SP(Ω, c) once the domain Ω and the constraints vector c are given. It is important to avoid local minimizers at this stage, since the objective is to maximize the shortest perimeter. Any local minimizer may give a false candidate for the solutions of (3) or (6). There are many works in the literature which deal with the investigation of minimal length partitions. In [14] Cox and Flikkema use the Surface Evolver software to approximate minimal partitions. In this work the approach presented in [37] is used, where the perimeter is approximated using a Γ-convergence result of Modica and Mortola [34]. This allows us to work with density functions rather than sets of finite perimeter and simplifies the handling of the partition condition. Moreover, working with densities directly allows changes in the topology of the partitions. Given a domain Ω, a mesh is constructed and finite elements are used in FreeFEM [25] in order to approximate SF(Ω, c) or SP(Ω, c). When dealing with partitions, in order to accelerate the convergence, an initialization based on Voronoi diagrams with prescribed areas is used.
• Once the shortest partition SP(Ω, c) is identified, the bounding set Ω needs to be modified in order to increase the objective function PI(Ω, c). In order to find a suitable ascent direction, classical results related to the shape derivative are used [16, 26].
• The family of star-shaped domains (which includes convex shapes) is parametrized using radial functions. Moreover, radial functions are discretized by considering truncations of the associated Fourier series. Using the shape derivative it is possible to compute the gradient of the objective function with respect to the discretization parameters. Once the gradient is
known, an optimization algorithm is used in order to search for solutions of (3) and (6). The choice of the optimization algorithm is also an important factor, since the computation of SP(Ω, c) is highly sensitive to local minima. Moreover, when changing Ω following a perturbation field found using a shape derivative argument, the configuration of the optimal partition might change. The chosen algorithm is a gradient flow with variable step size. Minimal length partitioning algorithms presented in [37] or [6] use random initializations. While this illustrates the flexibility of Modica-Mortola type algorithms and their ability to avoid many local minima, choosing random initializations leads to longer computation times for the optimization algorithm. A classical idea is to use Voronoi diagrams as initializations. However, these Voronoi diagrams should consist of cells which verify the area constraints |ω_i| = c_i|Ω|. In the literature, the notion of capacity-constrained Voronoi diagrams is employed and results in this direction can be found in [4], [3] and [46]. In this work we propose a new way of computing capacity-constrained Voronoi diagrams by explicitly computing the gradients of the areas of the Voronoi cells with respect to variations in the Voronoi points. The gradient of the perimeters of the Voronoi cells is also computed, which allows the search for capacity-constrained Voronoi diagrams with minimal length.
The numerical simulations give rise to the following conjectures:
• The result of the convex isoperimetric conjecture seems to generalize to every volume fraction c ∈ (0, 1) in dimensions two and three.
• The same result seems to hold in the case of partitions. For n > 1 and c = (c_i)_{i=1}^n ∈ R^n with Σ_{i=1}^n c_i = 1 arbitrary, the solution of (6) is the disk in 2D and the ball in 3D. It is surprising that this result seems to hold even when the area constraints of the cells of the partition are not the same.
Outline and summary of results. Section 2 presents classical theoretical results regarding approximations of minimal perimeter partitions by Γ-convergence.
Section 3.1 recalls basic aspects regarding the numerical computation of minimal length partitions. Section 3.3 presents the computation of the gradients of the areas and perimeters of Voronoi cells and shows how to use prescribed-area Voronoi cells in order to construct initializations for our optimization algorithm. Section 3.4 presents the computation of an ascent direction for the shape optimization algorithm using the notion of shape derivative. The choice of the discretization and the optimization algorithm for approximating solutions of problems (3) and (6) are presented in Section 3.5. We underline that the maximization algorithm approximates solutions to a max-min problem, and the optimal partitioning algorithm presented in Section 3.1 is run at every iteration.
Finally, results of the optimization algorithm in dimensions two and three are presented in Section 4. The numerical results suggest that the solution of problems (3) and (6) is the disk in dimension two and the ball in dimension three. A brief discussion of the optimality conditions is presented in Section 5.

Theoretical aspects

2.1 Minimal relative perimeter sets and partitions
The appropriate framework for working with sets of finite relative perimeter in Ω is the space of functions with bounded variation on Ω,

BV(Ω) = {u ∈ L¹(Ω) : TV(u) < +∞}, where TV(u) = sup{∫_Ω u div φ : φ ∈ C_c^∞(Ω, R^d), ‖φ‖_∞ ≤ 1}.

As usual C_c^∞(Ω) represents the space of C^∞ functions defined on Ω with compact support in Ω. Using the divergence theorem it is easy to observe that if u is of class C¹ then TV(u) = ∫_Ω |∇u|. If ω is a subset of Ω its generalized perimeter is defined by Per(ω) = TV(χ_ω), where χ_ω represents the characteristic function of ω. All these aspects are classical and can be found, for example, in [2, 8].
The fact that problems (1) and (4) have solutions is classical and is a consequence of the fact that the generalized perimeter defined above is lower semicontinuous for the L¹ convergence of characteristic functions. For more aspects related to solutions of these problems see [32, Chapter 17]. The book previously referenced also presents aspects related to the regularity of optimal partitions in Part Four. Aspects about optimal partitions in the smooth case are presented in [35], where qualitative properties of minimal partitions in the plane and on surfaces are presented.
Proving existence of solutions for problems (3) and (6) is more difficult, since these are maximum problems and the perimeter is only lower semicontinuous. We recall that problem (3) was solved in [18] in the case d = 2, c = 1/2. In particular, existence was proved exploiting results in [13] which show that in this case the minimal relative perimeter sets are convex. In the following we prove that solutions exist in the class of convex domains for arbitrary area constraints.

Theorem 2.1. Given c ∈ (0, 1), problem (3) admits a solution in the class of convex sets Ω with |Ω| = v_d.

Proof: We divide the proof into steps which allow us to apply classical methods in the calculus of variations.
Step 1: Upper bounds. In the following denote by w(Ω) the minimal H^{d−1} measure of the projection of Ω on a hyperplane (in dimension two this corresponds to the minimal width). For convex bodies K a reverse Loomis-Whitney inequality holds true, where the minimum is taken over all orthonormal bases (e_i) of R^d and K|e_i^⊥ represents the projection of K onto a hyperplane orthogonal to e_i: in [30] it is shown that there exists a dimensional constant c_0 such that

min_{(e_i)} Π_{i=1}^d H^{d−1}(K|e_i^⊥) ≤ c_0^d |K|^{d−1}.

In particular, this shows that the minimal projection verifies w(Ω) ≤ c_0|Ω|^{(d−1)/d}. It is immediate to see that the quantity w(Ω) gives an upper bound for I(Ω, c). To justify this, choose e_0 the direction for which H^{d−1}(Ω|e_0^⊥) is minimal and slice Ω with a hyperplane orthogonal to e_0 which divides Ω into two regions ω and Ω \ ω with volume |ω| = c|Ω|. The relative perimeter of the set ω in Ω is at most equal to w(Ω), the H^{d−1} measure of the projection. Therefore, we may conclude that in the class of convex sets with measure |Ω| = v_d the quantity I(Ω, c) is bounded from above, and the upper bound only depends on d and v_d. This implies the existence of a maximizing sequence (Ω_h)_{h≥1} which verifies I(Ω_h, c) ≤ I(Ω_{h+1}, c) and I(Ω_h, c) → sup_{|Ω|=v_d} I(Ω, c), where the supremum is taken in the class of convex sets.
Step 2: Compactness. When dealing with a sequence of convex sets we may extract a subsequence converging in the Hausdorff distance, provided the sets are uniformly bounded. For classical aspects related to the Hausdorff distance we refer to [26, Chapter 2]. Therefore, in the following we show that the diameters diam(Ω_h) of the convex sets Ω_h forming the maximizing sequence are uniformly bounded.
First, let us note that since (Ω_h) is a maximizing sequence for I(Ω, c) there exists a positive constant p_0 > 0 such that I(Ω_h, c) ≥ p_0. Since w(Ω_h) ≥ I(Ω_h, c) we also have w(Ω_h) ≥ p_0 > 0 for h ≥ 1. The results in [20] show that the minimal perimeter projection, the diameter and the volume of a convex set Ω satisfy

w(Ω) diam(Ω) ≤ d|Ω|.

It is now immediate to see that diam(Ω_h) ≤ d|Ω_h|/w(Ω_h) ≤ d v_d/p_0, and therefore the diameters of (Ω_h) are bounded. Without loss of generality we may assume that the (Ω_h) are contained in a large enough ball. Applying the classical Blaschke selection theorem we find that there exists a maximizing sequence, denoted for simplicity by (Ω_h), such that Ω_h converges, with respect to the Hausdorff distance, to a convex set Ω*. Moreover, the volume is continuous for the Hausdorff distance among bounded convex sets, so Ω* also satisfies the volume constraint |Ω*| = v_d.
Step 3: Continuity. The last step is to prove that I(Ω*, c) is indeed equal to lim_{h→∞} I(Ω_h, c). This is a direct consequence of [40, Theorem 4.1], which states that if (Ω_h) is a sequence of convex bodies in R^d and Ω_h → Ω in the Hausdorff distance then I(Ω_h, c) → I(Ω, c) for every c ∈ [0, 1]. This finishes the proof, as the limit Ω* is indeed a maximizer for (3).

Remark 2.2. Removing the convexity assumption is not straightforward. Nevertheless, using the regularity results regarding solutions of (1), it is possible that this result could be partially extended to the general case. There are multiple difficulties, which follow the structure of the proof above:
• Proving there exists an upper bound in (3).
• Proving that a maximizing sequence is bounded: long tails may not intersect the minimizing set in (1), therefore cutting them may increase I(Ω, c).
• Obtaining compactness results for a maximizing sequence: classically this should be possible when working in the class of sets of finite perimeter.
• Proving that the maximizing sequence converges to an actual maximizer. This would involve obtaining some continuity properties regarding the perimeter of a sequence of sets. This is not straightforward, as the perimeter is only lower semicontinuous for the L¹ convergence of characteristic functions. Nevertheless, using the regularity of minimal relative perimeter sets might help obtain the desired results.
The case of partitions can be handled using a similar strategy in the class of convex sets. The missing ingredient is the convergence of the minimal perimeters of partitions, analogous to the results in [40].

Theorem 2.3. Given n > 1 and c = (c_1, ..., c_n) with Σ_{i=1}^n c_i = 1, problem (6) admits a solution in the class of convex sets Ω with |Ω| = v_d.

Proof: As in the proof of Theorem 2.1 it is straightforward to give upper bounds for PI(Ω, c) in terms of w(Ω) (the minimal H^{d−1} measure of the projection on a hyperplane). A maximizing sequence (Ω_h) would have a positive lower bound 0 < p_0 ≤ w(Ω_h) for the sequence of minimal projections on hyperplanes. Therefore the diameters of (Ω_h) are bounded from above and we may assume that the convex sets Ω_h converge to a convex set Ω (with respect to the Hausdorff distance). The set Ω also verifies the volume constraint |Ω| = v_d.
It only remains to prove the continuity of the minimal partition perimeters PI(Ω_h, c) with respect to the Hausdorff convergence. In order to do this, the same tools as in the proof of Theorem 4.1 in [40] can be used.
1. Lower semicontinuity. Theorem 3.4 in [40] shows that there exist bilipschitz maps f_h : Ω_h → Ω with Lipschitz constants Lip(f_h) converging to 1, the Lipschitz constants of the inverse maps Lip(f_h^{−1}) also converging to 1. The volumes and perimeters of the images of finite perimeter sets E_h ⊂ Ω_h then have upper and lower bounds controlled by these Lipschitz constants. Extracting a diagonal sequence, we may assume that the cells (ω_i^h)_{i=1}^n of minimal partitions of Ω_h converge with respect to the Hausdorff distance to a partition (ω_i)_{i=1}^n of Ω as h → ∞. Using the estimates above and the fact that the perimeter is lower semicontinuous with respect to the convergence of finite perimeter sets we obtain PI(Ω, c) ≤ lim inf_{h→∞} PI(Ω_h, c).

2. Upper semicontinuity. It remains to prove that PI(Ω, c) ≥ lim sup_{h→∞} PI(Ω_h, c). Reasoning by contradiction, suppose that PI(Ω, c) < lim sup_{h→∞} PI(Ω_h, c). Up to a subsequence we may assume that PI(Ω_h, c) converges. Choose (ω_i)_{i=1}^n a minimal partition of Ω with constraints |ω_i| = c_i|Ω|. As in [40], using these sets it is possible to construct better competitors on some Ω_h, for h large, than the corresponding optimal partition. This leads to a contradiction. Indeed, (f_h^{−1}(ω_i))_{i=1}^n forms a partition of Ω_h, which may fail to satisfy the volume constraints. Optimality conditions imply that common boundaries of the sets in the partition are regular hypersurfaces. Therefore, it is possible to perturb these boundaries around regular points in order to attain the desired volume constraints. Moreover, for h large enough this will produce partitions of Ω_h with total relative perimeter strictly smaller than PI(Ω_h, c), contradicting the optimality of PI(Ω_h, c). This concludes the proof of the existence of solutions for the given problem.
Remark 2.4. Existence results obtained in this section may also be generalized to the case of manifolds, in particular when Ω is the boundary of a convex set in R^d. In this setting there exist codimension-one surfaces Ω, boundaries of convex sets in R^d with fixed H^{d−1} measure, which maximize the minimal relative geodesic perimeter of a subset or partition with given H^{d−1} measure constraints.

Relaxation of the perimeter -Gamma convergence
A key point in our approach is to approximate minimal length partitions SP(Ω, c). In order to avoid difficulties related to the treatment of the partition constraint it is convenient to represent each set ω_i in the partition as a density u_i : Ω → [0, 1]. Then the partition constraint can simply be expressed by the algebraic equality Σ_{i=1}^n u_i = 1 on Ω. The next aspect is to approximate the perimeter of a set represented via its density function. A well known technique is to use a Γ-convergence relaxation of the perimeter, inspired by a result of Modica and Mortola [34]. The main idea is to replace the perimeter with a functional whose minimizers converge to minimizers of the perimeter.
Let us briefly recall the concept of Γ-convergence and the property that motivates its use in numerical optimization.

Remark 2.5. Let X be a metric space. For ε > 0 consider the functionals F_ε, F : X → [0, +∞]. We say that F_ε Γ-converges to F, and we denote F_ε →Γ F, if the following two properties hold:
(LI) For every x ∈ X and every (x_ε) ⊂ X with x_ε → x we have F(x) ≤ lim inf_{ε→0} F_ε(x_ε);
(LS) For every x ∈ X there exists (x_ε) ⊂ X such that x_ε → x and lim sup_{ε→0} F_ε(x_ε) ≤ F(x).
An important consequence is the following classical result concerning the convergence of minimizers of a sequence of functionals which Γ-converge.

Proposition 2.6. Suppose that F_ε →Γ F and that x_ε minimizes F_ε on X. Then every limit point of (x_ε) is a minimizer of F on X.
Therefore, in practice, in order to approximate the minimizers of F it is possible to search for minimizers of F_ε, for ε small enough.
Let us now state the two theoretical results used in this work concerning the Γ-convergence relaxation of the perimeter and of the total perimeter of a partition, with integral constraints on the densities. The first result is the classical Modica-Mortola theorem [34]. Various proofs can be found in [1, 8, 11]. In the following Ω is a bounded, Lipschitz open set. Consider a double well potential W : R → [0, ∞) which verifies the following assumptions: W is of class C¹, W(z) = 0 if and only if z ∈ {0, 1}, and W has exactly three critical points. For such a double well potential W, denote γ = 2∫_0^1 √(W(s)) ds. In the following c ∈ [0, 1] represents the fraction used for the volume constraint.

Theorem 2.7. Define F_ε, F : L¹(Ω) → [0, +∞] by

F_ε(u) = ∫_Ω (ε|∇u|² + (1/ε)W(u)) if u ∈ H¹(Ω) and ∫_Ω u = c|Ω|, and F_ε(u) = +∞ otherwise,

F(u) = γ Per_Ω(ω) if u = χ_ω ∈ BV(Ω) and |ω| = c|Ω|, and F(u) = +∞ otherwise.

Then F_ε Γ-converges to F in the L¹(Ω) topology.

In [37] this result was generalized to the case of partitions and was used to compute approximations of SP(Ω, c). For c ∈ R^n with Σ_{i=1}^n c_i = 1, in order to simplify notations, let us denote by X(Ω, c) the space of admissible densities which verify the integral constraints and the algebraic non-overlapping constraint:

X(Ω, c) = {u = (u_i)_{i=1}^n : ∫_Ω u_i = c_i|Ω|, Σ_{i=1}^n u_i = 1 on Ω}.

The Γ-convergence result in the case of partitions is recalled below.

Theorem 2.8. Define G_ε, G on L¹(Ω)^n by

G_ε(u) = Σ_{i=1}^n ∫_Ω (ε|∇u_i|² + (1/ε)W(u_i)) if u ∈ X(Ω, c) ∩ H¹(Ω)^n, and G_ε(u) = +∞ otherwise,

G(u) = γ Σ_{i=1}^n Per_Ω(ω_i) if u = (χ_{ω_i})_{i=1}^n ∈ X(Ω, c) with (ω_i) a partition of Ω, and G(u) = +∞ otherwise.

Then G_ε Γ-converges to G in the L¹(Ω)^n topology.
A proof of this result can be found in [37]. In the numerical simulations the double well potential is chosen to be W(s) = s²(1 − s)², which gives the factor γ = 1/3 in the results shown above.
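For this choice of potential the value of the factor γ = 1/3 quoted above can be checked by a direct computation:

```latex
\gamma = 2\int_0^1 \sqrt{W(s)}\,\mathrm{d}s
       = 2\int_0^1 \sqrt{s^2(1-s)^2}\,\mathrm{d}s
       = 2\int_0^1 s(1-s)\,\mathrm{d}s
       = 2\left(\frac{1}{2}-\frac{1}{3}\right) = \frac{1}{3}.
```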
Remark 2.9. It can be seen that SF(Ω, c) corresponds to a density which minimizes F in Theorem 2.7. Moreover, SP(Ω, c) corresponds to a family of densities which minimizes G in Theorem 2.8. Using the result recalled in Proposition 2.6 it is possible to approximate these minimizers by those of F_ε and G_ε, respectively, for ε small enough. From a numerical point of view, dealing with the minimization of F_ε and G_ε is easier, since the variable densities are H¹ regular.
Remark 2.10. The structure of minimizers of F_ε has been widely studied in the literature, as can be seen in the papers [23], [31], [42]. It can immediately be seen that, assuming W is at least of class C¹, minimizers u of F_ε verify an optimality condition of the form

2ε ∫_Ω ∇u · ∇ϕ + (1/ε) ∫_Ω W'(u)ϕ = µ ∫_Ω ϕ for every ϕ ∈ H¹(Ω), (9)

where µ ∈ R is a Lagrange multiplier for the volume constraint. Classical regularity theory results that can be found in [21] allow us to employ a bootstrap argument and conclude that u is of class C^∞ in the interior of Ω and that u has the regularity of Ω up to the boundary. For example, for smooth domains Ω the optimizer u is also smooth up to the boundary of Ω. Moreover, it can be proved that the minimizer u takes values in [0, 1]. In the case when Ω is convex, results found in [22, Chapter 3] show that solutions of the above problem are in H²(Ω).
The same type of results holds for minimizers of G_ε in the case of partitions, with possible singularities at junction points between three or more phases of the partition. Nevertheless, the contact between the optimal partition and the boundary ∂Ω has the desired regularity.
Remark 2.11. The results in [31] show that the Lagrange multiplier µ for the volume constraint in (9) has a geometric interpretation. Given a volume fraction c ∈ (0, 1), as ε → 0, the Lagrange multiplier µ converges to γ times the mean curvature of the shortest fence set SF(Ω, c), where γ was defined before Theorem 2.7. Recall that this minimal set SF(Ω, c), being optimal for the relative perimeter under a volume constraint, has constant mean curvature inside Ω. Moreover, as shown in [42], taking ϕ = 1 as a test function in (9) gives an explicit formula for the Lagrange multiplier:

µ = (1/(ε|Ω|)) ∫_Ω W'(u).

Since in the numerical section we deal with the minimization of F_ε, G_ε for fixed Ω, we briefly recall existence results related to these problems. In the following we suppose that the double well potential W is Lipschitz continuous on R. This is not restrictive, since minimizers of F_ε, G_ε are densities which take values in [0, 1], which means that values of W far away from this interval do not matter in the analysis.

Theorem 2.12. (i) The problems min F_ε and min G_ε admit solutions for Ω a Lipschitz domain with finite volume. In the following we denote by I_ε(Ω, c) and PI_ε(Ω, c) the optimal values obtained when minimizing F_ε and G_ε, respectively.
(ii) Given c ∈ (0, 1), the problems max{I_ε(Ω, c) : |Ω| = v_d} and max{PI_ε(Ω, c) : |Ω| = v_d} admit solutions in the class of convex sets.
Proof: The proof of (i) is classical. Note that the constraints on the density functions are embedded in the definition of the functionals F_ε, G_ε to be minimized. We give the ideas for G_ε, as F_ε is just a particular case. The existence proof goes as follows:
• The functional G_ε is obviously bounded from below by zero. Moreover, truncating the density functions (u_i) to take values in [0, 1] does not increase the value of G_ε. This allows us from now on to assume that the densities take values in this interval.
• Minimizing sequences exist and they are bounded in H¹(Ω)^n, which allows us to extract a subsequence converging weakly in H¹. The constraints are stable under L² convergence. Moreover, the lower semicontinuity of the H¹ norm and Fatou's lemma allow us to see that any weak H¹ limit point of the minimizing sequence is a minimizer.
The proof of (ii) follows the same lines as the proofs of Theorems 2.1 and 2.3. As in the proof of these theorems, we start by noticing that the minimal H^{d−1} measure w(Ω) of the projection of Ω on a hyperplane is bounded from above. We detail the proof for I_ε, while the proof in the case of partitions follows the same path. In order to emphasize the dependence of F_ε on Ω we write F_ε(u) = F_ε(Ω, u).
Upper bound. Choose e_0 the direction for which H^{d−1}(Ω|e_0^⊥) is minimal and equal to w(Ω). Given a hyperplane ζ orthogonal to e_0, consider the function u_ε = ϕ_ε(d(x)), where d(x) is the signed distance to the hyperplane ζ (choosing an orientation) and ψ_ε, ϕ_ε are the profile functions given by the standard construction used for the limsup part of Γ-convergence proofs of the Modica-Mortola type results in Theorems 2.7, 2.8 (see for example [33]). The coarea formula and the fact that |∇d(x)| = 1 allow us to write

∫_Ω u_ε = ∫_R ϕ_ε(t) H^{d−1}({d(x) = t}) dt.

The definition of ϕ_ε and a continuity argument allow us to deduce that there is a position of the hyperplane for which the constraint ∫_Ω u_ε = c|Ω| is verified.
Using the coarea formula to evaluate F_ε(Ω, u_ε) we obtain

F_ε(Ω, u_ε) = ∫_R (ε ϕ'_ε(t)² + (1/ε)W(ϕ_ε(t))) H^{d−1}({d(x) = t}) dt ≤ w(Ω) ∫_0^{ψ_ε(1)} (ε ϕ'_ε(t)² + (1/ε)W(ϕ_ε(t))) dt.

The last inequality comes from the fact that {d(x) = t} is a slice of Ω orthogonal to e_0 and its H^{d−1} measure is at most w(Ω). Moreover, we can restrict the bounds of the one-dimensional integral to 0 and ψ_ε(1), since for t outside this interval the integrand is zero. A simple computation gives

∫_0^{ψ_ε(1)} (ε ϕ'_ε(t)² + (1/ε)W(ϕ_ε(t))) dt ≤ 2 ∫_0^1 √(W(s)) ds = γ,

where the last equality comes from the change of variables s = ϕ_ε(t). Thus we obtain F_ε(Ω, u_ε) ≤ γ w(Ω). This quantity depends only on W and w(Ω) and is bounded from above independently of Ω.
Compactness. The same argument used in the proof of Theorem 2.1 can be applied in order to conclude that there exists a maximizing sequence (Ω_h) converging in the Hausdorff distance to a convex set Ω with non-empty interior and volume |Ω| = v_d. Moreover, we may assume that there exists a bounded open set D such that (Ω_h)_{h≥1} and Ω are contained in D.
Following the ideas in [26, Chapter 2] we may assume that (Ω_h) and Ω satisfy an ε-cone condition, or equivalently that they are Lipschitz regular with a uniform Lipschitz constant. In this case, the convergence with respect to the Hausdorff distance implies that the characteristic functions χ_{Ω_h} converge to χ_Ω in L¹(D).

Continuity. It now remains to prove that I_ε(Ω_h, c) → I_ε(Ω, c). Since (Ω_h) and Ω have a uniform Lipschitz constant L (as recalled above), using the extension theorems recalled in [10, Theorem 3.4], there exists an extension ũ ∈ W^{1,p}(D) of u which verifies ‖ũ‖_{W^{1,p}(D)} ≤ Const(L)‖u‖_{W^{1,p}(Ω_h)}. Together with the results recalled in Remark 2.10 we find that the extensions of the minimizers are uniformly bounded in W^{1,p}(D). We cannot conclude yet, since ũ may not satisfy the integral constraints on Ω_h.
In order to fix this, let x_0 be a point in the interior of Ω. For h large enough there exists a ball B_δ of radius δ > 0 such that B_δ ⊂ Ω ∩ Ω_h. Denote by d_δ the function which is equal to the distance to ∂B_δ inside B_δ and to zero outside. We use this function to construct corrected densities of the form ũ + x_h d_δ, where the real coefficients x_h are chosen so that the integral constraint on Ω_h is satisfied; since the volumes |Ω_h| converge to |Ω|, we necessarily have x_h → 0. This immediately shows that I_ε(Ω_h, c) → I_ε(Ω, c), which concludes the proof of the existence result. The case of partitions can be handled in a similar manner, with the additional difficulty that the area constraints and the sum constraint need to be handled simultaneously. This can be achieved by modifying the candidate densities in a finite family of balls.

Numerical framework for approximating minimal perimeter partitions
In this section the numerical minimization of F_ε and G_ε is discussed. Since Ω is a general domain in this work, we choose to work with finite element discretizations. Given a triangulation T_h of Ω, denote by (x_j)_{j=1}^N the set of its nodes. Working with P1 Lagrange finite elements, a piecewise affine function u defined on the mesh T_h is written u = Σ_{j=1}^N u_j φ_j. As usual, the φ_j are the piecewise linear functions on each triangle characterized by φ_j(x_k) = δ_{jk}. For a P1 finite element function, the values u_j are given by u(x_j) and we denote u = (u_j) = (u(x_j)) ∈ R^N. With these notations, it is classical to introduce the mass matrix M and the rigidity matrix K defined by

M_{jk} = ∫_Ω φ_j φ_k, K_{jk} = ∫_Ω ∇φ_j · ∇φ_k.

As an immediate consequence of the linearity of the decompositions u = Σ_{j=1}^N u_j φ_j, v = Σ_{j=1}^N v_j φ_j, we have that

∫_Ω uv = u^T M v, ∫_Ω ∇u · ∇v = u^T K v.

This immediately shows that the functionals F_ε and G_ε can be expressed in terms of the mass and rigidity matrices M and K using the expression

F_ε(u) = ε u^T K u + (1/ε) v^T M v, (11)

where v = (u_j(1 − u_j))_{j=1}^N. The gradient of this expression with respect to u can be computed and is given by

∇_u F_ε = 2ε K u + (2/ε)(1 − 2u) ⊙ (M v), (12)

where ⊙ denotes pointwise multiplication of two vectors: u ⊙ v = (u_j v_j)_{j=1}^N. It is obvious that with (11) and (12) it is possible to implement a gradient-based optimization algorithm in order to minimize F_ε and G_ε. The software FreeFEM [25] is used for constructing the finite element framework and the LBFGS algorithm from the package Nlopt [28] is used for the minimization of (11). We address the question of handling the constraints in the next section.
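The discrete energy and its gradient described above can be sketched as follows. The paper assembles M and K with FreeFEM; here, purely for illustration, they are replaced by the P1 mass and stiffness matrices of a uniform 1D mesh, so the helper names `p1_matrices_1d`, `F_eps` and `grad_F_eps` are assumptions of this sketch rather than the paper's implementation.

```python
import numpy as np
from scipy.sparse import diags

def p1_matrices_1d(n, L=1.0):
    # P1 mass and stiffness matrices on a uniform 1D mesh with n nodes
    # (a stand-in for the FreeFEM-assembled matrices on a 2D triangulation).
    h = L / (n - 1)
    main_m = np.full(n, 2 * h / 3); main_m[[0, -1]] = h / 3
    M = diags([np.full(n - 1, h / 6), main_m, np.full(n - 1, h / 6)], [-1, 0, 1]).tocsr()
    main_k = np.full(n, 2 / h); main_k[[0, -1]] = 1 / h
    K = diags([np.full(n - 1, -1 / h), main_k, np.full(n - 1, -1 / h)], [-1, 0, 1]).tocsr()
    return M, K

def F_eps(u, M, K, eps):
    # Discrete Modica-Mortola energy: eps u.Ku + (1/eps) v.Mv with v = u(1-u),
    # since W(u) = u^2 (1-u)^2 = (u(1-u))^2.
    v = u * (1 - u)
    return eps * u @ (K @ u) + (1 / eps) * v @ (M @ v)

def grad_F_eps(u, M, K, eps):
    # Gradient of the discrete energy with respect to the nodal values u.
    v = u * (1 - u)
    return 2 * eps * (K @ u) + (2 / eps) * (M @ v) * (1 - 2 * u)
```

The gradient can be validated against a finite-difference quotient before handing the energy and gradient to a black-box LBFGS routine.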

Area constraints and projections
The area or volume constraint can be expressed with the aid of the vector m = Me, with e = (1, 1, ..., 1) ∈ R^N. Indeed, with this notation, for a finite element function u we have ∫_Ω u = m · u.

Projection for one phase. Let us start with the projection of one function onto the integral constraint. Given a P1 finite element function u and its values u at the nodes, we search a function u_0 with nodal values u_0 = u + αm verifying the constraint m · u_0 = c, which gives α = (c − m · u)/(m · m). An alternative way of handling the constraint during the optimization process is to project the initial vector onto the constraint and then project the gradient onto the hyperplane x · m = 0 at each iteration. This can simply be done by using c = 0 in the relation above. Such a modification of the gradient allows us to use efficient black-box optimization toolboxes, since quasi-Newton algorithms like LBFGS perform updates based on a number of gradients stored in memory. If these gradients verify x · m = 0, the integral constraint is preserved throughout the optimization process.
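In discrete form, the one-phase projection and the gradient projection amount to two one-line formulas; the sketch below (the helper names are ours, not from the paper) uses the closed form α = (c − m · u)/(m · m):

```python
import numpy as np

def project_mass(u, m, c):
    # Shift u along m so that the discrete integral m.u equals c:
    # solving m.(u + alpha*m) = c gives alpha = (c - m.u) / (m.m).
    return u + (c - m @ u) / (m @ m) * m

def project_gradient(g, m):
    # Remove the component of the gradient along m (the c = 0 case above),
    # so quasi-Newton updates stay on the hyperplane x.m = const.
    return g - (m @ g) / (m @ m) * m
```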
Projection for multiple phases. In the case of partitions, projections onto the integral constraints were already proposed in [37] (when using finite differences) and in [6] (when using finite elements). A drawback of using orthogonal projections parallel to the vector m is the fact that the vector u is modified almost everywhere in the domain Ω, including the regions where it is 0 or 1. As observed in [9], this can cause the resulting optimal densities to be non-zero at interfaces between two cells and at triple points. The solution proposed there was to use instead projections parallel to √(2W(u_i)). One may note that, using this method, the functions u_i are mainly modified at the interface of transition between 0 and 1.
Let us now describe the construction of the projection algorithm onto the constraints, with the compatibility condition Σ_{i=1}^n c_i = 1. Consider λ ∈ H¹(Ω) and (µ_i) ∈ R^n and perform the transformation

u_i ↦ u_i + (λ + µ_i)√(2W(u_i))

in order to satisfy the constraints ∫_Ω u_i = c_i|Ω| (13) and Σ_{i=1}^n u_i = 1 (14). It is easy to note that:
• in view of (14), given the µ_i we can find λ, since (14) becomes a pointwise linear equation in λ with coefficient Σ_{i=1}^n √(2W(u_i));
• in view of (13), given λ we can find the µ_i using the relations above.
In the following we introduce the quantities λ̃_i = ∫_Ω λ√(2W(u_i)). Again, in view of (13), if the λ̃_i are known, then the µ_i are known and so is λ. Writing the constraints in terms of the λ̃_i leads to a linear system of the form (I − A)λ̃ = b, for a suitable matrix A ∈ R^{n×n} and right-hand side b ∈ R^n. One may note that this system is singular, since the sum over the columns of A is equal to 1 and therefore (I − A^T)e = 0, where e = (1, ..., 1) ∈ R^n. This is due to the fact that one of the constraints is redundant, in view of the compatibility condition. In practice we simply discard one unknown and set it to zero. As noted previously, the same procedure can be applied to the gradients g_i associated to each u_i, in order to satisfy the linearized versions of the constraints (13) and (14) at every iteration. This allows us to preserve the constraints when using a black-box LBFGS optimizer, provided the initial parameters satisfy the integral and sum constraints.
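A discrete sketch of this projection, under our own notational assumptions: corrections of the form u_ij ← u_ij + (λ_j + µ_i)w_ij with w = √(2W(u)), nodal quantities replacing the integrals, and a mass vector m standing in for ∫_Ω. Eliminating λ leads to a small n×n singular system for µ in which one unknown is set to zero, exactly as described above.

```python
import numpy as np

def project_constraints(U, m, c, w):
    # U: (n, N) phase densities at the nodes; m: (N,) mass vector;
    # c: (n,) target fractions summing to 1; w: (n, N) positive correction
    # directions, e.g. sqrt(2*W(U)) plus a small floor.
    # Corrections U_ij += (lam_j + mu_i) * w_ij enforce
    #   sum_i U_ij = 1 at every node  and  U_i . m = c_i * sum(m) per phase.
    n, N = U.shape
    s = w.sum(axis=0)                    # nodal normalization sum_i w_ij
    r = 1.0 - U.sum(axis=0)              # nodal sum residual
    t = w @ m                            # (n,) integral of each direction
    b = c * m.sum() - U @ m              # integral residuals
    B = (w * (m / s)) @ w.T              # B_ik = sum_j m_j w_ij w_kj / s_j
    A = np.diag(t) - B                   # singular: one constraint redundant
    rhs = b - (w * (m / s)) @ r
    mu = np.zeros(n)
    mu[:-1] = np.linalg.solve(A[:-1, :-1], rhs[:-1])   # set mu_n = 0
    lam = (r - mu @ w) / s
    return U + (lam + mu[:, None]) * w
```

Because the constraints are linear in (λ, µ), the corrected densities satisfy both the nodal sum constraint and the integral constraints exactly, up to floating-point error.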

Initializations for 2D partitions - Voronoi diagrams
The optimization algorithm for approximating I_ε(Ω, c) and P I_ε(Ω, c) is ready to be implemented, following the ideas shown in the previous sections. There remains, however, the choice of the initialization, which is non-trivial and which has an impact on the performance of the optimization algorithm. It was already noted in [37] and [6] that starting from random initializations is possible, but some additional work needs to be done in order to avoid constant phases, which are encountered at some local minimizers. Keeping in mind that the optimal partition problem needs to be solved multiple times during the optimization algorithm, we propose below a different initialization strategy, based on Voronoi diagrams. The use of Voronoi diagrams for generating initializations is a rather natural idea when dealing with partitions and was already mentioned in [14]. In this section Ω is assumed to be a polygon.
Using random Voronoi diagrams is not very helpful, since the area constraints are not verified in general. This leads us to consider Voronoi diagrams which verify the area constraints, called capacity-constrained Voronoi diagrams in the literature. Algorithms for computing such diagrams were proposed in [4] for the discrete case and in [3] for the continuous case. In the continuous case, the method employed in [3] was to optimize the position of one Voronoi point at a time using the gradient-free Nelder-Mead method. In [46] the authors propose efficient ways of generating such diagrams, but for weighted Voronoi diagrams only. In the following we propose an alternative method for constructing capacity-constrained Voronoi diagrams by computing the sensitivity of the areas of the Voronoi cells with respect to the positions of the points generating the respective Voronoi diagram. Since we are also interested in minimizing the perimeter, the computation of the sensitivity of the perimeters of the Voronoi cells is also described.
Terminology related to Voronoi diagrams. Given a set of points p_1, ..., p_n ∈ R^2 (called Voronoi points), the associated Voronoi diagram consists of n Voronoi cells V_1, ..., V_n, where, for i = 1, ..., n, the cell V_i is the set of points closer to p_i than to any other point p_j. Each Voronoi cell V_i is a polygonal region (possibly unbounded). The vertices of V_i are simply called vertices in the following (please observe the difference between the Voronoi points and the Voronoi vertices). The edges of V_i are called ridges, some of which can be unbounded. Each ridge connects two Voronoi vertices (possibly at infinity, for unbounded ridges) called ridge vertices. Moreover, each ridge separates two of the initial points, called ridge points. All the structural information of a Voronoi diagram associated to a set of points can be recovered from the output of freely available software like scipy.spatial.Voronoi. Voronoi diagrams are not restricted to a bounded domain. It is possible, however, to consider restrictions of a Voronoi diagram to a bounded set Ω by simply intersecting the regions V_i with the set Ω. In our implementation the intersection of polygons is handled using the Shapely Python package for computational geometry.
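To illustrate the restriction of a Voronoi cell to a bounded domain (a plain-Python sketch; the actual implementation relies on Shapely), one can clip a polygon against the half-plane of points closer to p_1 than to p_2, since a Voronoi cell is an intersection of such half-planes.

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` (list of vertices) where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        in1 = a * x1 + b * y1 <= c
        in2 = a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:  # the edge crosses the line: add the intersection point
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def voronoi_cell(omega, pi, pj):
    # |x - pi|^2 <= |x - pj|^2  <=>  2(pj - pi).x <= |pj|^2 - |pi|^2
    a, b = 2 * (pj[0] - pi[0]), 2 * (pj[1] - pi[1])
    c = pj[0] ** 2 + pj[1] ** 2 - pi[0] ** 2 - pi[1] ** 2
    return clip_halfplane(omega, a, b, c)

def area(poly):  # shoelace formula
    n = len(poly)
    return 0.5 * abs(sum(poly[k][0] * poly[(k + 1) % n][1]
                         - poly[(k + 1) % n][0] * poly[k][1] for k in range(n)))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
cell = voronoi_cell(square, (0.25, 0.5), (0.75, 0.5))
# The bisector is x = 0.5, so the cell of p1 is the left half of the square.
```

With more than two points, the cell of p_i is obtained by clipping against one half-plane per other point.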
In the following, given the points p_i, i = 1, ..., n, we consider the Voronoi regions restricted to a finite domain Ω, re-defined by V_i = V_i ∩ Ω. Note that in some cases V_i may be empty, for instance if Ω does not contain the associated point p_i. We explain below how to compute the gradients of the areas and perimeters of the V_i with respect to the positions of the points p_i.
Gradient of the areas of the Voronoi cells. The derivative of a functional that can be represented as an integral over the Voronoi cell V_i with respect to the Voronoi points can be computed if the normal displacement of the cell boundary is known. This fact was recalled in [17] and [27] and is classical in shape derivative theory. However, since the functionals considered there were sums over all Voronoi cells V_i, the contributions coming from the variations of the boundaries cancelled each other and only the variation of the integrand mattered. This is no longer the case in our situation. The area of the Voronoi cell V_i is A_i = ∫_{V_i} 1 dx and its directional derivative when perturbing a point p_j in a direction d is given by the boundary integral of the normal variation of V_i: A_i′(d) = ∫_{∂V_i} θ·n, where θ is the infinitesimal displacement of the boundary of V_i when moving p_j in the direction d. More explicitly, if V_i(t) is the Voronoi cell for p_j + td, then θ = lim_{t→0} v(t)/t, where the vector field v(t) is defined by V_i + v(t) = V_i(t) on the boundary of V_i. For a given ridge v_k v_l with associated ridge points p_i, p_j, we perturb the point p_i → p_i + δ and investigate the derivative of the normal perturbation of v_k v_l as δ → 0. The two main perturbations are the following:
• δ is collinear with p_i p_j: in this case the perturbation induced on the ridge is just δ/2 (see Figure 2, left). The associated infinitesimal normal perturbation is constant, equal to 1/2.
• δ is orthogonal to p_i p_j: in this case the infinitesimal perturbation induced on the ridge is a rotation around the intersection m_ij of p_i p_j and v_k v_l. The associated infinitesimal normal perturbation varies along the ridge (the signs vary with the orientation of the orthogonal perturbation); see Figure 2 (middle). In order to prove this it is enough to consider the normal perturbation v(t) of the ridge v_k v_l illustrated in Figure 2 (right) and take the limit v(t)/t as t → 0.
For a general perturbation δ of p_i, denote by n the normal vector to v_k v_l pointing outwards from V_i and by t the unit vector collinear with the vector from v_l to v_k. Furthermore, consider the notations for the normal and tangential contributions, computed as one-dimensional integrals on v_k v_l of the infinitesimal perturbations described above. By symmetry, these contributions are similar, but with changed signs, when perturbing p_j by δ. The contributions to the gradients of the areas of the cells V_i and V_j when perturbing p_i or p_j are summarized in (16). The algorithm for computing the gradient of the areas of the cells simply iterates over all the Voronoi ridges that intersect Ω and for each ridge adds the contributions described in (16).
Algorithm 1 describes the computation of the gradient of the areas of the cells. The coordinates of the n input points are given in the vector x ∈ R^{2n} and the output is the real matrix M of size 2n × n containing as columns the gradients of the areas of the n cells with respect to the 2n coordinates.
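A quick finite-difference sanity check of these formulas (an illustrative toy, not Algorithm 1 itself): for two points on a horizontal line in the unit square, a perturbation of p_1 collinear with p_1 p_2 moves the ridge by half the step, so the derivative of the area should equal L/2, consistent with the constant normal perturbation 1/2 described above (here the ridge length is L = 1).

```python
def left_area(x1, x2):
    """Area of the Voronoi cell of p1 = (x1, 0.5) versus p2 = (x2, 0.5)
    in the unit square: the cell of p1 is [0, m] x [0, 1], m the bisector."""
    m = 0.5 * (x1 + x2)
    return min(max(m, 0.0), 1.0)

x1, x2, h = 0.25, 0.75, 1e-6
fd = (left_area(x1 + h, x2) - left_area(x1 - h, x2)) / (2 * h)
# Predicted derivative: L / 2 = 0.5, since the ridge has length 1.
```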
The explicit formulas for the gradients of the areas allow us to easily find capacity-constrained Voronoi diagrams as the result of an optimization algorithm with given area constraints. The core loop of Algorithm 1 runs over the Voronoi ridges: for each ridge r, find the associated Voronoi points p_i and p_j and the Voronoi vertices v_k, v_l, then perform the updates using (16).

In order to obtain more regular structures it is also possible to minimize the energy (18) under the capacity constraints |V_i| = c_i. The energy (18) is employed for characterizing centroidal Voronoi diagrams, where each Voronoi point p_i coincides with the centroid of the cell V_i. In particular, centroidal Voronoi diagrams are critical points of (18). See [46] for more details regarding this functional. Examples in this sense are shown in Figure 3. The constrained minimization is done using the MMA algorithm [43] from the NLOPT library [28]. Note that all constraints are coded as inequality constraints in this algorithm: |V_i| ≤ c_i. Since the V_i form a partition of Ω, it is immediate to see that if the sets satisfy the inequality constraints then they in fact also satisfy the equality constraints |V_i| = c_i.
Remark 3.1. It is also possible to generalize the gradient formulas when a density is involved, i.e. when dealing with quantities of the type ∫_{V_i} ρ, where ρ ∈ L^1(Ω) is a given density. The shape derivative of ∫_{V_i} ρ is ∫_{∂V_i} ρ θ·n, where θ is the perturbation of the boundary ∂V_i. The boundary perturbations are obviously the same, but the computations in (15) are no longer explicit, and a one-dimensional numerical integration needs to be performed for each Voronoi ridge.

Gradient of the perimeters of the Voronoi cells. We saw that in order to compute the gradient of the areas of the Voronoi cells, the normal displacement of the Voronoi ridges needed to be understood when moving the Voronoi points. On the other hand, the variation of the perimeter of a Voronoi region depends on the tangential perturbation of the Voronoi ridges. In order to understand this perturbation one needs to see how the Voronoi vertices move when perturbing the Voronoi points. Moreover, it can be observed that when two Voronoi vertices merge, i.e. a Voronoi ridge collapses, the total perimeter of the cells is not smooth. This behavior is illustrated by an example shown in Figure 4.
Therefore, we suppose in the following that each Voronoi vertex is in contact with at most three Voronoi ridges. Moreover, the definition of the Voronoi cells allows us to conclude that in this situation each Voronoi vertex is the circumcenter of the triangle determined by the three points associated to the neighboring Voronoi regions. This allows us to transform perturbations of the Voronoi points into perturbations of the Voronoi vertices, using the well known formulas for the circumcenter of a triangle with vertices (A_x, A_y), (B_x, B_y), (C_x, C_y). These formulas are well defined as long as the three points A, B, C are not collinear. Moreover, it is immediate to see that in this case the circumcenter varies smoothly with respect to the coordinates of the vertices of the triangle. The infinitesimal perturbation of the circumcenter when moving (A_x, A_y) can be computed by simply differentiating these formulas with respect to A_x and A_y. Once the derivative of the circumcenter is known, in order to find the gradient of the perimeter it is enough to project this derivative on all the Voronoi ridges going through the respective circumcenter and add the contribution to the gradient of the perimeter of each cell with respect to the corresponding coordinates. See Figure 5 for more details. Variations induced by the Voronoi vertices are enough to compute the gradient of the perimeters of Voronoi cells that do not intersect the boundary of the bounding polygon. For the boundary cells, it is necessary to describe the perturbation of the intersections between Voronoi ridges and the bounding polygon. Fortunately, this can also be described using variations of circumcenters for some particular triangles.
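The smooth dependence of the circumcenter on one vertex can be verified numerically; the sketch below (an illustration, not the paper's code) uses the standard circumcenter formula.

```python
def circumcenter(A, B, C):
    """Circumcenter of the triangle ABC (points assumed non-collinear)."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Right triangle: the circumcenter is the midpoint of the hypotenuse.
O = circumcenter((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))

# Finite-difference derivative of the circumcenter w.r.t. A_x:
h = 1e-6
Op = circumcenter((h, 0.0), (1.0, 0.0), (0.0, 1.0))
Om = circumcenter((-h, 0.0), (1.0, 0.0), (0.0, 1.0))
dO_dAx = ((Op[0] - Om[0]) / (2 * h), (Op[1] - Om[1]) / (2 * h))
```

For this configuration the circumcenter is ((1 + A_x)/2, (1 + A_x)/2), so the derivative with respect to A_x is (1/2, 1/2).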
Indeed, let v_k v_l be a Voronoi ridge intersecting a side of the bounding polygon Ω at the point q and let p_i, p_j be the associated Voronoi points. Consider now p_j′, the reflection of the point p_j with respect to the line supporting that side. Then obviously q is the circumcenter of the triangle p_i p_j p_j′ and the variation of q with respect to perturbations of p_i can be found using the same procedure as above. See Figure 5 for more details. The algorithm for computing the gradients of the perimeters of the Voronoi cells is presented in Algorithm 2, assuming that every Voronoi vertex is the circumcenter of exactly one triangle determined by the Voronoi points.
Using the gradients of the areas and perimeters of the Voronoi cells it is possible to perform a constrained minimization of the perimeter under area constraints, starting from random Voronoi initializations. The optimization is performed with the Nlopt [28] optimization toolbox in Python using the MMA [43] algorithm. Some examples of initializations obtained in this way are shown in Figure 6. The initial Voronoi points are chosen randomly inside the polygon Ω. In order to accelerate the convergence of the optimization algorithm, a few iterations of Lloyd's algorithm are performed before starting the optimization process. Recall that Lloyd's algorithm consists in iteratively replacing the Voronoi points by the centroids of the respective cells (see for example [46] for more details). In order to deal with local minima, multiple optimizations (typically 10) are performed for every polygon Ω and the partition having the least perimeter is retained as a valid initialization. Note that the algorithm gives topologies similar to the best known ones shown in [14] for the case of equal areas and in [24] for the case of cells with two different areas.
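To fix ideas, here is a minimal one-dimensional sketch of Lloyd's algorithm (a toy illustration, not the two-dimensional version used above): on [0, 1] the Voronoi cells are intervals bounded by midpoints, and each point is replaced by the centroid of its cell.

```python
def lloyd_step(pts):
    """One Lloyd iteration for points in [0, 1] (1-D Voronoi cells are
    intervals between consecutive midpoints)."""
    pts = sorted(pts)
    mids = [0.0] + [0.5 * (a + b) for a, b in zip(pts, pts[1:])] + [1.0]
    return [0.5 * (l, r)[0] + 0.5 * (l, r)[1] for l, r in zip(mids, mids[1:])]

pts = [0.1, 0.2, 0.9]
for _ in range(200):
    pts = lloyd_step(pts)
# The points converge to the centroidal configuration 1/6, 1/2, 5/6.
```

In 1-D the fixed point is the equally spaced configuration; in 2-D the analogous limits are the centroidal Voronoi diagrams mentioned above.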
Initialization of a partition. Having at our disposal the gradients of the areas and perimeters of the Voronoi cells, we are now ready to propose initialization algorithms for the optimal partitioning algorithm. In practice we use one of the options below:
1. Compute minimizers of (17) starting from random Voronoi points (p_i). Repeat the procedure a number of times and keep the configuration having the smallest total perimeter. This works well when the areas of the cells are the same.

2. Optimize the total perimeter of the Voronoi cells under capacity constraints, starting from random Voronoi points (p_i). Repeat the procedure a number of times and keep the configuration having the smallest total perimeter. This approach gives good results when the areas of the cells are different; in that case the optimization process is more difficult, since more local minima are present.
3. When n ≤ 4, random initializations work very well.
4. In dimension three, random initializations were used for n ≤ 4 and random Voronoi initializations for n ≥ 5.

Algorithm 2 Compute gradients of perimeters of Voronoi cells
Require: x = (x_1, y_1, ..., x_n, y_n), coordinates of the points p_1, ..., p_n; the bounding polygon Ω
1: Initialize M = 0 (of size 2n × n)
2: Compute the Voronoi diagram associated to the points (p_i)_{i=1}^n and the intersections of the Voronoi cells with the polygon Ω.
For each Voronoi vertex v contained in Ω, let p_i, p_j, p_k be the three Voronoi points associated to the ridges going through v.
6: Compute the derivative d of the circumcenter of p_i p_j p_k when moving p_i in the direction δ = (1, 0). See Figure 5.
7: For all ridges r going through v, project d on r and add the projection to the gradient, with respect to the x coordinate, of the perimeters of the cells {V_1, V_2} neighboring the ridge r (determined by the ridge points associated to r): these are the elements M_{2i−1,V_1}, M_{2i−1,V_2} of the matrix M.
8: Repeat the above with δ = (0, 1) in order to get the gradients with respect to the y coordinates.
For each ridge r cutting the boundary of Ω, denote by p_i, p_j the associated ridge points and by ℓ the edge of the boundary polygon Ω cut by r.
13: Let p_j′ be the reflection of p_j with respect to ℓ.
14: For δ = (1, 0) compute the derivative d of the circumcenter of p_i p_j p_j′ when moving p_i in the direction δ. See Figure 5.
15: Project d on the ridge r and add this projection to the gradient of the cells i and j with respect to the x coordinate: M_{2i−1,i} and M_{2i−1,j} in the matrix M.
16: Project d on ℓ and add this to the gradient of the cells i and j (with the proper sign).
17: Repeat the above with δ = (0, 1) in order to get the gradients with respect to the y coordinates.

(Figure caption: the areas are equal, except in the third case, where two cells have areas three times smaller than the other three.)

Shape derivative
In order to find perturbations of the domain Ω that decrease the value of a functional J(Ω), the concept of shape derivative is used. For a Lipschitz domain Ω, the functional J is said to be shape differentiable at Ω if there exists a linear form θ ↦ J′(Ω)(θ) giving the first order variation of J((I + θ)(Ω)) for every vector field θ, where I denotes the identity mapping. Classical results from [41, Section 2.31] and [26, Section 5.2] give the structure of the shape derivative for functionals defined by a function f_Ω ∈ H^1(Ω) that varies smoothly with respect to perturbations of Ω. A nice overview of the basic notions regarding shape derivatives, together with the associated references, is given in the paper [12]. We are interested in finding the shape derivatives of I_ε(Ω, c) and P I_ε(Ω, c), which are minimal values obtained through constrained optimization. In the following we perform these computations assuming that the corresponding shape derivatives exist. The methods used below are inspired by the following works:
• In [16, Chapter 10, Sections 2.3, 5.4] the shape derivation of a minimum problem and, respectively, of a saddle point is described.
• In [19, Chapter 3] the derivative of the minimal value of a constrained problem, where the objective and the constraints depend on a parameter, is given in the finite dimensional case.
• In [7, Chapter 4] the differentiability of the minimal value given by a constrained parametric problem is considered in Banach spaces.
In our case we are in the framework of the derivation of a saddle point. However, we were not able to apply the first result cited above to prove rigorously that the shape derivative exists. In the following we use a formal approach in order to identify the formulas for the shape derivatives that are of interest to us.
The case of one phase. Consider u_Ω which minimizes F_ε(u) from Theorem 2.7 and suppose that u_Ω is unique. We assume that u_Ω varies smoothly with respect to perturbations of the boundary of Ω and that its shape derivative u_Ω′(θ) exists and belongs to H^1(Ω). Remark 2.10 underlines the fact that u_Ω is C^∞ in the interior of Ω and has the regularity of Ω up to the boundary. In the following we suppose that Ω is convex (or, more generally, at least of class C^2), which implies that u_Ω is indeed in H^2(Ω) and the gradient ∇u_Ω has a well defined trace on ∂Ω. The function I_ε(Ω, c) = F_ε(u_Ω) has the same structure as in (20). Using the classical chain rule for shape derivatives (see for example [12, Lemma 26]) we obtain an expression involving u_Ω′(θ), the shape derivative of u_Ω with respect to θ. Recall (see Remark 2.10) that minimizing F_ε(u) under the constraint ∫_Ω u = c|Ω| implies the existence of a Lagrange multiplier µ ∈ R such that the associated optimality condition holds for every φ ∈ H^1(Ω). Taking φ = u_Ω′(θ) in this optimality condition eliminates the term containing u_Ω′(θ). Recall that u_Ω also verifies the constraint ∫_Ω u_Ω − c|Ω| = 0.
Differentiating this constraint with respect to the shape Ω gives (23). Combining (21), (22) and (23) gives (24). The resulting formula corresponds to the shape derivative of the Lagrangian with respect to the shape when u = u_Ω and ν = µ. This is in accord with similar results in [16, Chapter 10, Section 5.4], [7, Chapter 4] and [19, Chapter 3]. Note that formula (24) gives valuable and reasonable information on the perturbations of the boundary of Ω that increase the length of the set SF(Ω, c) having minimal relative perimeter. Notice that the term ε|∇u_Ω|^2 + (1/ε)W(u_Ω) is non-zero (and strictly positive) only in a neighborhood of the contact points of the minimal relative perimeter set with the boundary ∂Ω. Moving the boundary outwards at these points with a small enough step size will increase the minimal perimeter I_ε(Ω, c). On the other hand, the term containing µ(u_Ω − c) models the movement of the relative isoperimetric set when the boundary of Ω is perturbed away from the contact points. Indeed, recall that |SF(Ω, c)| = c|Ω| and that the Lagrange multiplier µ is proportional (as ε → 0) to the curvature of the isoperimetric set SF(Ω, c), as shown in Remark 2.11. Therefore, when perturbing Ω away from the contact points with SF(Ω, c), the inner boundary of SF(Ω, c) is pulled in one direction or the other (corresponding to the change in volume) and the corresponding variation of the minimal perimeter is given by the mean curvature of the isoperimetric set.
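Based on the two terms appearing in the discussion (the contact term and the multiplier term), formula (24) presumably reads

```latex
I_\varepsilon'(\Omega,c)(\theta)
  = \int_{\partial\Omega} \Big( \varepsilon |\nabla u_\Omega|^2
    + \tfrac{1}{\varepsilon} W(u_\Omega)
    + \mu\,(u_\Omega - c) \Big)\, \theta\cdot n \,\mathrm{d}\sigma ,
```

where n is the outer normal to ∂Ω; this reconstruction is consistent with the normal velocity used in the perturbation argument below rather than a quotation of the original display.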
In the case where the solution u_Ω is not unique, we cannot assume that u_Ω varies smoothly with Ω. Indeed, perturbing the boundary of Ω with the normal velocity given by ε|∇u_Ω|^2 + (1/ε)W(u_Ω) + µ(u_Ω − c) may drastically change the topology of the minimal set. Nevertheless, perturbing the boundary of Ω with this normal velocity will eventually increase the value of I_ε(Ω, c). In Figure 7 the numerical approximation of u_Ω is shown together with the values of ε|∇u_Ω|^2 + (1/ε)W(u_Ω) + µ(u_Ω − c) on the boundary. Perturbing Ω in the normal direction as shown will increase the minimal value, provided the solution u_Ω is unique. It can be noted that the term ε|∇u_Ω|^2 + (1/ε)W(u_Ω) is dominant in the shape derivative, since it is of order 1/ε near the contact points, while the results given in [31] show that the Lagrange multiplier µ has a bounded limit as ε → 0, proportional to the constant mean curvature of the limiting minimal interface.
Remark 3.2. Multiple approaches may exist in order to turn the above computation into a rigorous proof of the shape derivative formula, and we describe a few below. The technical difficulties involved do not allow us to present such a complete proof.
• In [16, Chapter 9, Theorem 5.1] a method that computes the directional derivative of a saddle point is described. However, it is not clear whether this result applies to our case. On the other hand, one may note that this method would give the same result as the formal method described previously.
• The computation shown above depends strongly on the existence of the shape derivative u_Ω′(θ). The existence of this derivative is not obvious, even under the assumption that u_Ω is a unique minimizer.
It may be possible to apply techniques similar to those in [26, Section 5.7] or [41, Section 2.29] in order to deduce that the unique minimizer u_Ω is differentiable with respect to the shape Ω.
Remark 3.3. The hypothesis regarding the uniqueness of u_Ω for the shape derivative to exist is similar to the hypothesis needed when differentiating an eigenvalue of an operator with respect to the shape. When dealing with multiple eigenvalues the shape derivative does not exist, but directional derivatives are available. For more details see [26, Chapter 5]. It is possible that such theoretical results could be obtained in our case, but this goes outside the scope of this article.
The case of partitions. Consider u_Ω = (u_Ω^i)_{i=1}^n which minimizes G_ε in Theorem 2.8. As in the previous paragraphs, we suppose that u_Ω is unique and varies smoothly with respect to perturbations of Ω, and we differentiate with respect to the domain using formula (20). Since the sum constraint ∑_{i=1}^n u_i = 1 implies that one of the area constraints is redundant, we may note that the Lagrange multipliers are not uniquely defined: adding a constant to λ and subtracting the same constant from each µ_i gives another set of valid multipliers. Therefore, it is not restrictive to assume that the multiplier λ verifies ∫_Ω λ = 0.
Replacing in (26) and using (25) we obtain the shape derivative. On the other hand, differentiating the constraints with respect to the shape, we obtain the final formula. The Lagrange multipliers µ_i can be found by using φ_i = δ_ij in (26), which gives µ_i = −1/(ε|Ω|) ∫_Ω W′(u_Ω^i). As discussed above, in the case where u_Ω is not unique, perturbing the boundary of Ω with the corresponding normal velocity will eventually increase the value of P I_ε(Ω, c) and will reduce the multiplicity of the family of optimal partitions u_Ω. In Figure 8 the numerical approximation of u_Ω is shown together with the values of this normal velocity on the boundary. Perturbing Ω in the normal direction as shown will increase the length of the minimal partition, provided the solution u_Ω is unique.
An example of non-differentiability - multiplicity greater than one. It is not difficult to imagine domains Ω for which there are multiple minimizers for I_ε(Ω, c), given c > 0: it is enough to consider c small enough and a symmetric domain like the one in Figure 9 (symmetric domain with two minimizers of I_ε(Ω, c); the optimal densities obtained numerically are represented together with the perturbation fields obtained from (24)). Moreover, there exists a value of c for which the Lagrange multiplier vanishes: µ = −1/(ε|Ω|) ∫_Ω W′(u_Ω) = 0. It is enough to remember that µ is proportional to the curvature of the minimal interface as ε → 0. Let us show that in such a case the functional I_ε(Ω, c) does not admit a shape derivative. Indeed, suppose that Ω is symmetric as in Figure 9. Denote by u_1, u_2 the two solutions and by θ_1, θ_2 two vector fields with normal components θ_i·n as illustrated in Figure 9. Suppose that J(Ω) := I_ε(Ω, c) is differentiable at Ω. Then for {i, j} = {1, 2} it is clear that F_ε((I + tθ_i)(Ω), u_j) = F_ε(Ω, u_j), i.e. the minimal value of F_ε does not change when modifying Ω with only one of the vector fields θ_i. This would imply that J′(Ω)(θ_1) = J′(Ω)(θ_2) = 0 and, by linearity, J′(Ω)(θ_1 + θ_2) = 0. However, this last equality is clearly false, since modifying Ω with the combined vector field θ_1 + θ_2 clearly changes the minimal value at first order. In order to deduce this it is enough to work with one half Ω_i of the domain Ω: this half can be extended to a non-symmetric domain for which u_i is a unique minimizer, and the formula for the shape derivative found previously applies. Therefore we arrive at a contradiction, showing that when multiple minimizers of F_ε(Ω, u) exist, the functional I_ε(Ω, c) is not shape differentiable. The same kind of argument can be applied to P I_ε(Ω, c) in the case where optimal partitions are not unique.

Radial parametrization and optimization algorithm
The results of [18] and the existence results obtained in Section 2.1 are restricted to convex domains Ω. We therefore choose to search for domains maximizing SF(Ω, c) and SP(Ω, c) in the larger class of star-shaped domains, which includes the class of convex sets. These domains can be parametrized using an associated radial function in dimensions two and three. Furthermore, a spectral decomposition of the radial function with a finite number of Fourier coefficients is used in order to work with a finite, but sufficiently large, number of parameters in the computations.
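As a small illustration of such a radial parametrization (a hypothetical sketch, not the FreeFEM implementation), one can sample the radial function defined by the Fourier coefficients into a polygon; with a_0 = 1 and all other coefficients zero, the polygon approximates the unit disk, whose area tends to π.

```python
import math

def radial_polygon(a, b, m=2000):
    """Polygon with vertices on the curve rho(t) = a[0] + sum_k a[k] cos(kt)
    + b[k] sin(kt); a and b are lists of Fourier coefficients (b[0] unused)."""
    pts = []
    for j in range(m):
        t = 2 * math.pi * j / m
        rho = a[0] + sum(a[k] * math.cos(k * t) + b[k] * math.sin(k * t)
                         for k in range(1, len(a)))
        pts.append((rho * math.cos(t), rho * math.sin(t)))
    return pts

def area(poly):  # shoelace formula
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                         - poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))

disk = radial_polygon([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
# area(disk) approximates pi up to the polygonal discretization error.
```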
Planar domains. In dimension two, the radial function ρ : [0, 2π] → R_+ is discretized using 2N + 1 Fourier coefficients. Consider a shape functional J(Ω) whose shape derivative is expressed by J′(Ω)(θ) = ∫_∂Ω G θ·n. Using the discretization above, given v = (a_0, a_1, ..., a_N, b_1, ..., b_N) that defines Ω via the radial function ρ, a finite dimensional function j(v) = J(Ω) is obtained. It is classical to compute the gradient of j using the shape derivative, by choosing the appropriate boundary perturbation for each Fourier coefficient. Using the notation r = x/|x| we have r·n = ρ/√(ρ^2 + (ρ′)^2). Therefore, denoting v_n = ρ/√(ρ^2 + (ρ′)^2), we obtain the partial derivatives of j with respect to each Fourier coefficient.

Domains in R^3. In dimension three we choose to parametrize the unit sphere using (φ, ψ) ∈ [−π, π] × [0, 2π] ↦ (cos ψ cos φ, sin ψ cos φ, sin φ). Next, we are interested in parametrizing radial functions ρ : [−π, π] × [0, 2π] → R_+ which are constant for φ ∈ {−π, π}. This is needed in order to be able to create 3D meshes in FreeFEM [25] by deforming two dimensional meshes. One way of attaining this objective is to use two dimensional Fourier parametrizations which contain only sines in the φ coordinate, together with an affine function in φ in order to allow different values at the extremities φ ∈ {−π, π}. As in dimension two, it is straightforward to infer the gradient of the discretized functional with respect to each one of the parameters; a simple computation yields the corresponding formulas.

Optimization algorithm. Given the discretization and the gradients expressed above, it is straightforward to implement a gradient descent algorithm. The delicate issue is the fact that at each iteration the objective function and its gradient are computed as the result of a minimization algorithm. If a local minimal perimeter set/partition is found instead of the global one, this might give a wrong ascent direction. Therefore, we choose to work with a gradient flow type algorithm, which consists in advancing at each iteration in the direction given by the gradient of the
functional with a prescribed step, regardless of whether the objective function increases or decreases. In this way, even if the optimization problem solved at one of the iterations yields a local minimum, the global optimization algorithm may still correct itself at subsequent iterations. The area constraint is imposed by a projection algorithm: the next iterate is rescaled to have the desired area via a homothety. The precise description is given in Algorithm 3.
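The homothety projection step can be sketched as follows (function and variable names are hypothetical): in dimension two, scaling the radial function, hence all Fourier coefficients, by s multiplies the area by s^2, so s = √(target/current) restores the constraint exactly.

```python
import math

def project_area(coeffs, current_area, target_area):
    """Rescale the Fourier coefficients of the radial function so that the
    (two-dimensional) domain has the prescribed area: area scales as s**2."""
    s = math.sqrt(target_area / current_area)
    return [s * c for c in coeffs]

coeffs = [1.1, 0.05, -0.02]                  # hypothetical Fourier coefficients
projected = project_area(coeffs, current_area=4.0, target_area=math.pi)
# The rescaled domain has area (s**2) * 4.0 = pi.
```

In dimension three the same idea applies with the cube root of the volume ratio.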

Algorithm 3 Global maximization algorithm
Require: initial Fourier coefficients v, area constraints (c for one phase, a vector c for the partitions), the number of iterations Niter, ε, the initial step α, the number of iterations Nmod after which the step is halved
1: for i in {1, 2, ..., Niter} do
2: Construct the mesh of Ω from the Fourier coefficients v: the size of the triangles/tetrahedra should be at most ε/2.
5: Advance in the direction of the gradient in order to increase the value of j(v): v ← v + α∇j(v).
6: Project on the area/volume constraint of Ω using a homothety.
7: If i mod Nmod ≡ 0, decrease the step: α ← α/2.
8: end for
return the final set of Fourier coefficients v

An example of a result obtained with this algorithm for the maximization of I_ε(Ω, c) is shown in Figure 10, together with the graph of the objective function. It can be seen that the objective function increases and stabilizes as the size of the step decreases. The oscillations in the curve describing the cost have two main causes: first, the optimization algorithm at the current iteration might yield a local minimum instead of the global one; second, the size of the step may be too big. An example for the case of partitions is shown in Figure 11. Multiple instances of the gradient flow maximization algorithm are represented in Figure 12 for n = 6 and in Figure 13 for n = 10.
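The gradient-flow strategy with periodic step halving can be illustrated on a toy objective (a hypothetical quadratic, unrelated to I_ε): the iterate advances with a fixed step in the gradient direction, without any line search, and the step is halved every Nmod iterations.

```python
def grad_flow(grad, v, alpha, niter, nmod):
    """Fixed-step ascent in the gradient direction, halving alpha every
    nmod iterations (no line search, as in Algorithm 3)."""
    for i in range(1, niter + 1):
        g = grad(v)
        v = [vj + alpha * gj for vj, gj in zip(v, g)]
        if i % nmod == 0:
            alpha *= 0.5
    return v

# Maximize j(v) = -(v1 - 1)^2 - (v2 + 2)^2, whose maximizer is (1, -2).
grad = lambda v: [-2 * (v[0] - 1), -2 * (v[1] + 2)]
v = grad_flow(grad, [0.0, 0.0], alpha=0.1, niter=200, nmod=50)
```

The fixed step allows temporary decreases of the objective when an inner minimization returns a local minimum, while the halving schedule eventually stabilizes the iterates.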
Numerical aspects. When minimizing F_ε and G_ε it is classical to consider meshes with elements of size smaller than ε. This is due to the fact that the phase transition from 0 to 1 typically takes place in a region of width proportional to ε, and the mesh needs to be fine enough to capture this. In dimension two we consider ε = 0.05, giving rise to meshes having around 23k nodes.
In dimension three, using ε = 0.1 gives meshes of about 25k nodes. When dealing with more cells in dimension three we start with ε = 0.07 and we interpolate and re-optimize the result on a finer mesh corresponding to ε = 0.04. This gives meshes with around 35k nodes. For postprocessing and plotting purposes, the final mesh is further refined using MMG3D [15] such that more tetrahedra are present where the phases change quickly. The final partition is interpolated and re-optimized (with ε = 0.025) on this fine mesh (with around 270k nodes) before plotting.
Code. The finite element software used for the optimization algorithm described in Section 3.1 is FreeFEM [25], which provides an interface to the LBFGS optimizer from Nlopt [28].
The partition initialization via Voronoi diagrams is coded in Python, where optimization algorithms from scipy.optimize and Nlopt are used for unconstrained and constrained optimizations, respectively.
The visualization is done with Python, using Matplotlib in dimension two and Mayavi [39] in dimension three. The graphical representation of partitions is done by extracting surface meshes of an iso-level for each cell in the optimal partition using FreeFEM [25] and MMG3D [15]. These surface meshes are then plotted with Mayavi [39]. Some of the codes used for obtaining the results illustrated in the paper can be found in the GitHub repository: https://github.com/bbogo/LongestShortestPartitions/tree/main/FreeFEMcodes.

Results
In this section we use the algorithm described previously in order to study problems (3) and (6). Results from [18] show that problem (3) is solved by the disk in dimension two for c = 1/2. We perform simulations for various values of c < 1/2 (note that considering c or 1 − c for the constraint gives the same result) and the numerical result is always the disk in dimension two. In dimension three the same phenomenon occurs: for various values of the volume fraction c, the shape which maximizes the relative minimal perimeter of a subset with volume c|Ω| is the ball. Some examples are shown in Figure 14.
Surprisingly, the case of partitions shows similar results. When considering equal area constraints, the set with fixed area maximizing the length of the minimal partition is still the disk (see Figure 15 for some examples). In dimension three, for n ∈ {3, 4, 6, 13}, we obtain similar results: the ball maximizes the total surface area of the partition with the smallest total perimeter. These results are illustrated in Figures 17 and 18. An expanded view of the optimal partition is also illustrated for each case.
Note that simulations made in the case of one phase already show that when partitioning the domain into two regions with two non-equal areas, the maximizer of the minimal length partition is still the disk.This suggests that even in the case where the cells do not have the same prescribed area, the set Ω which maximizes the minimal perimeter of a partition is still the disk in 2D (the ball in 3D).Indeed, when considering more cells with different areas, the numerical result is the same: the disk seems to be the maximizer (see Figure 16 for some examples).As already underlined in [24], the study of partitions in cells with prescribed but different areas is more complex, since in this case there are even more local minima.
The numerical simulations give rise to the following conjecture, which generalizes the results of [18]. Remark 4.2. The same type of result seems to hold when maximizing the minimal geodesic perimeter for closed surfaces ∂Ω in R^3 which are boundaries of convex sets, with a constraint on H^2(∂Ω). The techniques used in this case are those presented in [6], and the theoretical and numerical framework is similar to what was done in dimension three. In this case the sphere seems to be the maximizing set, which is in accord with the conjecture stated above.

Remarks on optimality conditions
As discussed in Section 3.4, the existence of shape derivatives for I_ε and P I_ε depends on the uniqueness of the minimizers of these functionals. Therefore, it is not straightforward to obtain classical optimality conditions. It is possible, however, to obtain some qualitative information about sets maximizing the minimal values of I_ε and P I_ε under a volume constraint. Recall that an optimizer of a shape differentiable functional J under a volume constraint verifies an optimality condition of the form J'(Ω)(θ) = ℓ |Ω|'(θ), (30) where ℓ ∈ R is a Lagrange multiplier associated to the volume constraint. Recall also that the shape derivative of the volume functional is |Ω|'(θ) = ∫_{∂Ω} θ.n.
Non-uniqueness of the minimal relative perimeter set/partition at the optimum. The results in Section 3.4 indicate that the shape derivatives of I_ε and P I_ε exist when they correspond to unique minimizers of the Modica-Mortola type functionals. In this case, the corresponding shape derivatives are boundary integrals of non-constant functions multiplied by the normal perturbation θ.n. Therefore, a relation of the type (30) cannot hold. This allows us to conclude that if Ω* is a maximizer of I_ε(Ω, c), the optimal minimal relative perimeter set is not unique. The same happens in the case of partitions: if Ω* maximizes P I_ε(Ω, c), then the minimal length partition of Ω* with constraints c is not unique.
Suppose now that Ω is a domain with fixed volume |Ω| = v_d such that SF(Ω, c) is unique. Then, for ε > 0 small enough, the minimizer of I_ε(Ω, c) is also unique, and thus I_ε(Ω, c) admits a shape derivative. Moreover, the corresponding optimal density u_Ω is not constant on the boundary, and therefore the optimality relation (30) cannot hold. This shows that such a domain Ω is not a solution of problem (3). The same argument applies to problem (6).
As a conclusion, a domain Ω that solves (3) must have multiple minimal relative perimeter sets (and a domain that solves (6) must have multiple minimal length partitions).
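The argument above can be made concrete by writing out the first-order condition; the display below is a standard reconstruction (the boundary density g and the multiplier ℓ are the assumed notation for condition (30)):

```latex
J'(\Omega)(\theta)
  = \int_{\partial\Omega} g\,(\theta\cdot n)\,\mathrm{d}\sigma
  = \ell\,|\Omega|'(\theta)
  = \ell \int_{\partial\Omega} \theta\cdot n\,\mathrm{d}\sigma
  \qquad \text{for all admissible } \theta .
```

Since this forces g ≡ ℓ on ∂Ω, while the shape gradients of I_ε and P I_ε have non-constant boundary densities whenever the minimizers are unique, no such multiplier ℓ can exist for a domain with unique minimizers.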

Conclusions
The theoretical considerations and numerical simulations presented in this paper suggest that the results of [5], [44], [18] are valid in more general settings: in dimensions two and three, under volume and convexity constraints, the ball is the set Ω which maximizes
• the minimal relative perimeter of a subset ω ⊂ Ω with volume constraint |ω| = c|Ω|, for all c ∈ (0, 1);
• the minimal relative perimeter of a partition of Ω into sets (ω_i)_{i=1}^n with volume constraints |ω_i| = c_i|Ω|, given c_i ∈ (0, 1) with Σ_{i=1}^n c_i = 1. The result seems to hold even when the sets ω_i do not have the same volume constraints.
The numerical maximization algorithm consists in solving, at each iteration, an optimization problem which approximates the least perimeter set or partition under the given constraints. Then a perturbation of the set which does not decrease the minimal perimeter is found and the set is modified accordingly. In all cases, the numerical result was close to the disk/ball.
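The max-min structure of this algorithm can be illustrated on a toy problem (entirely illustrative, not the paper's setup): take area-π ellipses with semi-axes (a, 1/a), replace the inner least-perimeter solve by the shortest central chord, of length 2·min(a, 1/a), and maximize over a; the maximizer is a = 1, i.e. the disk.

```python
from scipy.optimize import minimize_scalar

# Toy stand-in for the inner minimization step: the shortest central
# chord of an area-pi ellipse with semi-axes (a, 1/a).
def inner_min_length(a):
    return 2.0 * min(a, 1.0 / a)

# Outer step: maximize the inner minimum over the family of domains.
res = minimize_scalar(lambda a: -inner_min_length(a),
                      bounds=(0.2, 5.0), method="bounded")
a_star = res.x  # near 1, i.e. the disk
```

In the actual algorithm the inner solve is a full Modica-Mortola minimization in FreeFEM and the outer update is a boundary perturbation, but the alternation has the same max-min shape.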
The initialization phase for the computation of optimal partitions uses Voronoi diagrams with prescribed capacity. We provide a new way of generating such Voronoi diagrams using the gradients of the cell areas with respect to the Voronoi points. The gradient of the perimeter of the Voronoi cells is also computed.
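The capacity-constraint idea can be sketched as follows (a hedged surrogate, not the paper's algorithm: the paper uses exact gradients of the cell areas with respect to the Voronoi points, while here a smooth grid-based soft assignment stands in for the Voronoi areas so that plain L-BFGS-B can move the sites; all names and the smoothing parameter beta are illustrative choices).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, beta = 3, 50.0
targets = np.full(n, 1.0 / n)                  # prescribed area fractions
g = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(g, g)
pix = np.column_stack([X.ravel(), Y.ravel()])  # pixel centers of Omega

def soft_areas(p):
    sites = p.reshape(n, 2)
    d2 = ((pix[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)          # soft nearest-site assignment
    return w.mean(axis=0)                      # area fraction of each cell

def mismatch(p):
    return np.sum((soft_areas(p) - targets) ** 2)

p0 = rng.uniform(0.2, 0.8, size=2 * n)         # random initial sites
res = minimize(mismatch, p0, method="L-BFGS-B")
areas = soft_areas(res.x)                      # close to the targets
```

As beta grows the soft assignment approaches the true Voronoi partition; the exact-gradient formulation of the paper avoids the grid altogether.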

Figure 1: Examples of minimizers of problem (1) for various shapes Ω and various constraints.

Theorem 2.3. Problem (6) has solutions in the class of convex sets, i.e. given c = (c_i)_{i=1}^n ∈ R^n with Σ_{i=1}^n c_i = 1, there exist convex sets Ω* which maximize P I(Ω, c) among convex sets with fixed volume |Ω| = v_d.

Figure 2: Normal perturbation of the Voronoi ridge when moving one of the points in the normal and tangent directions.

Figure 5: (left) Perturbation of the circumcenter when moving one point, and projections on the Voronoi ridges. (right) Computing the perturbation of a boundary point by transforming it into a circumcenter.

3: Set Voronoi vertices as the set of Voronoi ridges that intersect the bounding polygon Ω.
4: for v in Voronoi vertices do
5:

9: Do the same instructions for p_j and p_k.
10: end for
Set Voronoi ridges as the set of Voronoi ridges that intersect the boundary polygon Ω.
11: for r in Voronoi ridges do
12:

Figure 6: Initializations obtained when minimizing the perimeter of the Voronoi cells under area constraints. The areas are equal, except in the third case, where two cells have areas three times smaller than the other three.

Figure 8: (left) Minimization of the minimal partition functional G_ε for three cells with equal areas. (right) Representation of the shape gradient and the normal perturbation producing an ascent direction for SF(Ω, c).

Figure 10: Maximization of I(Ω, 0.3) in dimension two together with the evolution of the cost function.

Figure 14: Maximization of the minimal relative perimeter in 2D and 3D with volume constraints c ∈ {0.25, 0.4, 0.5}. The optimal set Ω (the disk/ball) is shown together with the set obtained numerically when minimizing the relative perimeter for the given volume fraction.

Figure 15: Maximization of the length of the minimal perimeter partition into equal areas for n ∈ {4, 7, 10}.
Figure 16: Maximization of the length of the minimal perimeter partition into cells with different prescribed areas.

Figure 18: Maximization of the length of the minimal perimeter partition into equal areas for n ∈ {6, 13}. An expanded view of the optimal partition is also illustrated for each case.
Algorithm 1 Compute gradients of areas of Voronoi cells
Require: x = (x_1, y_1, ..., x_n, y_n), coordinates of points p_1, ..., p_n, bounding polygon Ω
1: Initialize M = 0 (of size 2n × n)
2: Compute the Voronoi diagram associated to the points p_1, ..., p_n and the intersections of the Voronoi cells with the polygon Ω.
3: Set Voronoi ridges as the set of Voronoi ridges that intersect the bounding polygon Ω.
4: for r in Voronoi ridges do
5:
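As a sanity check on the area gradients that Algorithm 1 assembles, the ridge formula behind it can be verified numerically for two sites: moving p_1 a distance t toward p_2 shifts the bisector by t/2, so the area of cell 1 grows at rate L/2, where L is the length of the ridge inside Ω. The helper names below are illustrative, not the paper's code; Ω is taken to be the unit square.

```python
import numpy as np

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # the bounding polygon Omega

def clip_halfplane(poly, a, b):
    """Sutherland-Hodgman clip of poly to the half-plane a.x <= b."""
    out = []
    for i in range(len(poly)):
        p, q = np.array(poly[i]), np.array(poly[(i + 1) % len(poly)])
        pin, qin = np.dot(a, p) <= b, np.dot(a, q) <= b
        if pin:
            out.append(tuple(p))
        if pin != qin:  # edge crosses the half-plane boundary
            t = (b - np.dot(a, p)) / np.dot(a, q - p)
            out.append(tuple(p + t * (q - p)))
    return out

def polygon_area(poly):
    x, y = np.array(poly).T  # shoelace formula
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def cell1_area(p1, p2):
    a = p2 - p1  # Voronoi cell of p1: the half-plane a.x <= a.(p1+p2)/2
    return polygon_area(clip_halfplane(square, a, np.dot(a, (p1 + p2) / 2)))

p1, p2 = np.array([0.3, 0.5]), np.array([0.8, 0.5])
# the ridge is the segment x = 0.55, of length L = 1 inside the square
t = 1e-6
d = (p2 - p1) / np.linalg.norm(p2 - p1)
fd = (cell1_area(p1 + t * d, p2) - cell1_area(p1, p2)) / t  # should be L/2
```

The finite difference matches L/2 = 0.5, the entry the gradient matrix M would carry for this configuration.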