Preservation of Structural Properties in Optimization with Decisions Truncated by Random Variables and Its Applications

Xin Chen
Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, [email protected]

Xiangyu Gao
Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, [email protected]

Zhan Pang
Department of Management Sciences, College of Business, City University of Hong Kong, Hong Kong, [email protected]

Abstract: A common technical challenge encountered in many operations management models is that decision variables are truncated by random variables and the decisions are made before the values of these random variables are realized, leading to non-convex minimization problems. To address this challenge, we develop a transformation technique that converts such a non-convex minimization problem into an equivalent convex minimization problem. We show that this transformation enables us to prove the preservation of desirable structural properties, such as convexity, submodularity, and L♮-convexity, under optimization operations; these properties are critical for identifying the structures of optimal policies and for developing efficient algorithms. We then demonstrate the applications of our approach to several important models in inventory control and revenue management: dual sourcing with random supply capacity, assemble-to-order systems with random supply capacity, and capacity allocation in network revenue management.

Key words: Dual Sourcing; Assemble-to-Order System; Supply Capacity Uncertainty; Revenue Management; L♮-Convexity.

History: First draft, May 2014; this draft, Jan 2017.

1. Introduction

In the operations management literature, a common technical challenge encountered in many models is that decision variables are truncated by random variables and the decisions are made before the values of these random variables are realized. A notable example is inventory control problems with supply capacity uncertainty, in which the replenishment decision is truncated by the random supply capacity (see, e.g., Ciarallo et al. 1994, Wang and Gerchak 1996, Bollapragada, Rao and Zhang 2004, Hu et al. 2008, Feng 2010 and Feng and Shi 2012). Another example is capacity allocation problems in revenue management, where the booking limit of each demand class is truncated by the random demand (see, e.g., Brumelle and McGill 1993, Robinson 1995 and Chen and Homem-de-Mello 2010). This type of variable truncation often leads to stochastic optimization problems of the following form:

g(x, z) = inf_{u:(x,z,u)∈A} E[f(x, u ∧ (z + Ξ))],    (1)

where f is a function of the decision variables u and the state variables (x, z), A is the constraint set, Ξ is a random vector, and ∧ denotes the componentwise minimum. For these applications, it is natural to ask how to solve problem (1) efficiently and whether the optimization operation preserves desirable structural properties of f such as convexity or submodularity. However, solving and analyzing such a problem can be very difficult. An intrinsic challenge arises from the fact that the truncation by random variables may destroy convexity: the objective function may not be convex in the decision variables even if the function f is convex. Without regularity properties such as convexity, the problem can be both analytically and computationally intractable, particularly when the state and decision variables are multidimensional.
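To make this loss of convexity concrete, consider the one-dimensional example used later in Section 2: f(u) = u² with Ξ Bernoulli distributed with success probability 0.5. The short Python sketch below (illustrative code only; it assumes numpy is available, and the test points 0.5 and 1.5 are chosen arbitrarily) evaluates E[f(u ∧ Ξ)] and exhibits a violation of midpoint convexity.

import numpy as np

# One-dimensional illustration: f(u) = u^2 is convex and Xi takes values 0 and 1
# with probability 1/2 each, yet E[f(u ^ Xi)] is not convex in u because the
# objective flattens out once u exceeds the largest capacity realization.

def f(u):
    return u ** 2

def truncated_objective(u, xi_values=(0.0, 1.0), probs=(0.5, 0.5)):
    """E[f(u ^ Xi)] for a discrete random variable Xi."""
    return sum(p * f(np.minimum(u, xi)) for xi, p in zip(xi_values, probs))

u1, u2 = 0.5, 1.5
midpoint_value = truncated_objective(0.5 * (u1 + u2))
average_value = 0.5 * truncated_objective(u1) + 0.5 * truncated_objective(u2)
print(f"value at midpoint:          {midpoint_value:.4f}")  # 0.5000
print(f"average of endpoint values: {average_value:.4f}")   # 0.3125
# Midpoint convexity fails (0.5 > 0.3125), so E[f(u ^ Xi)] is not convex in u.
assert midpoint_value > average_value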
Our paper aims at addressing this challenge when the random variables are independently distributed by developing a novel transformation technique which converts the non-convex minimization problem (1) to an equivalent convex minimization problem. As we mentioned earlier, the original problem formulation may be non-convex for a convex function f because in the objective function there are terms involving the minimum of decision variables and random variables. The key idea is to relax the original problem by replacing u ∧ (z + ξ) by new variables v(ξ) = (v1 (ξ1 ), v2 (ξ2 ), ..., vn (ξn )) and imposing v(ξ) ≤ z + ξ in the constraints. We prove that the optimal objective values of the original and transformed problems are the same when f is convex and certain regularity conditions are imposed on A. Furthermore, our transformation technique allows us to show that the optimization operation in problem (1) can preserve convexity, submodularity or L\ -convexity, which then enables us to perform comparative statics analysis in multi-dimensional state and decision spaces and characterize the monotone structure of optimal policies. Our approach has a wide range of applications. In this paper, we focus on the applications of the transformation technique to three models with multi-dimensional state spaces. Our first application is an inventory system with two capacitated suppliers, a regular one with a longer leadtime and an expedited one with a shorter leadtime. The two suppliers have independent supply capacity uncertainties. The objective of the firm is to find a dual-sourcing strategy to minimize the total expected cost. The second application is an assemble-to-order inventory system with multiple components and products. The order quantity of each component cannot exceed a random capacity. The firm decides the ordering quantities of all components and then the number of products 3 assembled to minimize the expected cost. The third application is the capacity allocation in network revenue management where fixed capacities of resources are allocated dynamically to different products with random demands. In the airline industry, this corresponds to setting booking limits for each itinerary-fare class combination. The booking limits are truncated by the random demand. The firm aims to maximize the expected total revenue. In all the above applications, we employ the transformation technique to prove that the apparently non-convex minimization problems (or nonconcave maximization problems) can be converted to equivalent convex minimization problems (or concave maximization problems), and under some conditions, the optimal decisions are monotone in terms of the state variables with limited sensitivities. Without the transformation technique, the structural analyses would have been much more complicated, if not impossible, to carry out. Related Literature We next review two streams of related literature: (1) inventory management with special emphasis on supply capacity uncertainty, and (2) capacity allocation in network revenue management, and summarize our contributions. Inventory Management Supply uncertainty of inventory/production systems can be driven by a variety of factors. Most studies in this literature focus on random yield problems where the supply is a random proportion of the order quantity; see Henig and Gerchak (1990), Federgruen and Yang (2008, 2011), Chen, Feng and Seshadri (2013) and the references therein. 
Such an issue usually arises from the quantity uncertainty of items produced in a batch. Another important supply uncertainty is the supply capacity uncertainty due to the unreliability of the supply processes (e.g., partial delivery or cancellation of an order by the supplier). In such an environment, the firm has to place orders before knowing the actual supply capacity. There are relatively few papers addressing the random capacity problems. Ciarallo et al. (1994) consider an inventory control problem, assuming that the replenishment decisions are made before the capacity uncertainty is realized and the replenishment leadtime is zero. They show that the presence of capacity uncertainty does not affect the optimality of a basestock policy. Wang and Gerchak (1996) extend the analysis to systems with both random supply capacity and random yield. Feng (2010) addresses a joint pricing and inventory control problem with supply capacity uncertainty and zero leadtime and shows that the optimal policy is characterized by two critical values: a reorder point and a target safety stock. The common technical challenge of these models is that with random supply capacity, the corresponding dynamic programming recursions, though all involving one-dimensional state spaces, are not convex minimization (concave maximization) problems anymore, and delicate analyses are needed to characterize the structures of optimal policies. 4 Our transformation technique can be readily applied to the aforementioned models to simplify the structural analysis. More importantly, such an approach allows us to address more general inventory models under supply capacity uncertainty with multi-dimensional state spaces using the concept of L\ -convexity. This paper demonstrates two applications in the area of inventory management with supply uncertainty, i.e., the dual sourcing problem and the assemble-to-order problem. In the following, we introduce the literature related to these two applications. There is an extensive literature on the dual sourcing problem. It was first studied by Barankin (1961) in a one-period setting and then extended by Daniel (1963), Fukuda (1964) and Whittmore and Saunders (1977) to various settings with multi-period horizons. Feng and Shi (2012) consider a joint inventory control and pricing problem with multiple suppliers whose replenishment lead times are zero and supply capacities are uncertain. They show that with deterministic capacities a multi-level base-stock list-price policy plus a cost-based supplier selection (i.e., ordering from a cheaper source first) is optimal. However, with general random supply capacities, such a policy is no longer optimal. They show that the optimal policy can be characterized by a near reorder point such that a positive order is placed (almost everywhere) if and only if the inventory level is below this point. They also identify a condition under which a strict reorder-point policy and a costbased supplier-selection criterion become optimal. More recently, Zhou and Chao (2014) address the dual-sourcing problem with price sensitive demand, a regular supplier with one-period leadtime and an expedited supplier with zero leadtime, and characterize the structure of the optimal policy. Gong et al. (2014) further generalize the structural analysis to a dual-sourcing problem with price sensitive demand and Markovian supply interruptions. In both models, there are no capacity limits on the supplies. 
To the best of our knowledge, our paper is the first addressing the dual-sourcing system with arbitrary deterministic leadtime discrepancies and supply capacity uncertainties. The assemble-to-order system is one of the most important production/inventory systems; see Song and Zipkin (2003) for a review of the research literature and applications of assemble-to-order systems up to the early 2000s. Lu and Song (2005) study a continuous-review assemble-to-order system with random demands and lead-times with an order-based approach. Nadar et al. (2014) develop the optimal structural results for a continuous-review assemble-to-order generalized M system with lost sales. Bollapragada, Rao and Zhang (2004) study multi-echelon assembly systems under installation base-stock policies where the component suppliers have various leadtimes and random supply capacities. They propose a decomposition approach and their numerical study shows that their heuristic performs well in comparison with the optimal base-stock policy. In this paper, we show that our approach applies to the assemble-to-order system with random component capacity. Moreover, for the generalized M -system, we show that the cost-to-go functions are L\ convex, which allows us to characterize the monotone structure of the optimal policy. 5 Revenue Management Revenue Management (RM), also known as Yield Management, has been widely adopted in various industries such as airlines, hotels, car rentals and cruise lines. Driven by its prevalence in service industry, the research interest in RM has been growing rapidly over the last two decades; see Talluri and van Ryzin (2005) for a comprehensive introduction to the practice and theoretical developments of RM. The network revenue management problem, which involves managing multiple resources (such as airline seats in different leg-cabin combinations), is notoriously challenging. Indeed, as mentioned by Talluri and van Ryzin (2005), “in the network case, exact optimization is for all practical purposes impossible”, and thus the literature focuses predominantly on various approximations. One approximation is to formulate a stochastic programming problem (see Cooper and Homemde-Mello 2007, Möller et al. 2008, Chen and Homem-de-Mello 2010 and the references therein). For example, one can formulate a two-stage stochastic linear programming problem (SLP) by aggregating the demand over the planning horizon and determining the booking limits at the beginning (see section 3.3.1 of Talluri and van Ryzin 2005). To improve upon the SLP, one can consider a multi-stage stochastic programming (MSSP), in which the policy of booking limits is revised from time to time in order to take into account the information about demand learned so far. The MSSP is challenging, evidenced by Chen and Homem-de-Mello (2010): “even the continuous relaxation of that problem does not have a concave expected recourse function”, as its objective function and constraints involve booking limits truncated by realized demands. As a compromise, they propose an approximation based on re-solving a sequence of two-stage stochastic programs. We consider the MSSP with continuous relaxation. In each time period, the firm decides the booking limits allocated to each demand class before the demand is realized. Interestingly, our transformation technique preserves concavity in the dynamic programming recursions, and hence overcomes the difficulty stated by Chen and Homem-de-Mello (2010). 
Under certain network structures, we further show that L♮-concavity can be preserved and use it to derive some monotonicity properties of the optimal booking limits. Our approach opens the door to the development of effective algorithms that solve the MSSP directly.

Our Contribution. As evidenced by the literature review, structural analyses for many important models (such as the inventory control problem under supply capacity uncertainty and the capacity allocation problem in revenue management) involve solving challenging stochastic optimization problems of a form similar to problem (1). Our transformation technique provides a unified technical tool to facilitate the structural analysis of this type of problem, which is our primary contribution to the literature. The power of this technique is demonstrated by its applications to several important inventory control and revenue management models that generalize the corresponding ones in the literature. The preservation results for structural properties such as convexity, submodularity and L♮-convexity enabled by our transformation technique can also be potentially exploited to develop efficient algorithms.

Recently, Feng and Shanthikumar (2014) use the notion of stochastic linearity in mid-point to develop a different technique showing that a class of nonlinear supply and demand functions (in the almost sure sense) are in fact linear in the stochastic sense. Like us, their approach allows them to convert some non-convex minimization problems, including those in Ciarallo et al. (1994), Wang and Gerchak (1996), Feng (2010) and Feng and Shi (2012), into convex minimization problems. Treating the means of the supply and demand functions as decision variables instead of the original decisions (ordering quantity and price), they show that the supply and demand functions are stochastically linear in mid-point with respect to their means and that the objective functions are concave in these means. Note that they focus on the concavity property but do not touch upon supermodularity or L♮-concavity. Different from their approach, ours works with the original decision variables and transforms the original optimization problem into an equivalent constrained optimization problem, which allows us to readily show the preservation of convexity, submodularity and L♮-convexity. Hence, our approach is more suitable for problems with high-dimensional state spaces such as the applications we present in this paper. In the appendix we provide a detailed comparison between our transformation technique and their approach. In particular, we show that although their approach can also preserve convexity and submodularity, it does not preserve L♮-convexity.

The remainder of this paper is organized as follows. Section 2 develops the transformation technique and the relevant preservation results. Sections 3-5 focus on the applications of our approach to the dual sourcing problem, the assemble-to-order system, and the capacity allocation problem, respectively. The paper is concluded in Section 6.

Throughout this paper, we use decreasing, increasing and monotonicity in the weak sense. We use ℝ and ℝ+ to denote the set of real numbers and the set of nonnegative reals, and Z and Z+ to denote the set of integers and the set of nonnegative integers, respectively. For convenience, let F be either ℝ or Z, and define ℝ̄ = ℝ ∪ {∞}. Let e ∈ F^n denote the vector whose components are all ones and e_j the unit vector whose j-th component is one. For x, y ∈ F^n, x ≤ y if and only if x_i ≤ y_i for every i = 1, ..., n, x+ = max(x, 0), x ∧ y = min(x, y) and x ∨ y = max(x, y) (the componentwise minimum and maximum operations). The indicator function of any set V ⊆ F^n, denoted by δ_V, is defined as δ_V(x) = 0 for x ∈ V and +∞ otherwise. We use the superscript T to denote the transpose of a vector or a matrix. We use uppercase letters (e.g., Ξ) to denote random vectors and lowercase letters (e.g., ξ) for their realizations. Given a random vector Ξ = (Ξ_1, ..., Ξ_n)^T, we use X = Supp(Ξ) to denote its support. In addition, we define ξ̄_j = ess sup{ξ_j | ξ_j ∈ X_j} and ξ̲_j = ess inf{ξ_j | ξ_j ∈ X_j} for j = 1, ..., n, where X_j is the projection of X onto the j-th coordinate. Let ξ̄ = (ξ̄_1, ..., ξ̄_n)^T and ξ̲ = (ξ̲_1, ..., ξ̲_n)^T, and "almost surely" is abbreviated as a.s.

2. Transformation Technique and Preservation Properties

In this section, we first develop the transformation technique for a class of stochastic optimization problems and then show several preservation results that are useful in structural analysis.

2.1. Transformation

Given a function f : F^n → ℝ̄ and a random vector Ξ with Supp(Ξ) = X ⊆ F^n, consider the following optimization problem:

τ* = inf_{u∈F^n} E[f(u ∧ Ξ)].    (2)

In general, the above problem may not be a convex minimization problem even if the function f is convex. For instance, let f(u) = u² and let Ξ be Bernoulli distributed with success probability 0.5. One can easily see that E[f(u ∧ Ξ)] is not convex in u. Interestingly, we show that under certain conditions we can convert it into an equivalent convex minimization problem. For this purpose, note that optimization problem (2) can be rewritten as follows:

inf E[f(v(Ξ))]
s.t. v(ξ) = u ∧ ξ ∀ξ ∈ X, u ∈ F^n, v(·) ∈ M,    (3)

where M is the set of measurable functions. The feasible region of (3) is F^n × (F^n)^X while the feasible region of (2) is F^n. In the following theorem, we show that the equality constraint v(ξ) = u ∧ ξ can be relaxed to the inequality constraint v(ξ) ≤ ξ ∀ξ ∈ X with v(ξ) = (v_1(ξ_1), ..., v_n(ξ_n)) ∈ F^n. For the rest of the paper, we require that v(·) be measurable in all of our formulations and therefore omit v(·) ∈ M for brevity. The following lemma will be useful for the proof of the theorem.

Lemma 1. Suppose that the function f : F → ℝ̄ is quasi-convex. If x* is a minimizer of f(x) over F, then f(x* ∧ b) ≤ f(a) for any a, b ∈ F with a ≤ b.

Proof. The quasi-convexity of f implies that f(x) decreases in x for x ≤ x* and increases in x for x ≥ x*. If a ≥ x*, we have b ≥ a ≥ x*, which implies that a ≥ x* ∧ b = x*. If a ≤ x*, since a ≤ b, we have a ≤ x* ∧ b ≤ x*. In either case, f(x* ∧ b) ≤ f(a). Q.E.D.

Theorem 1 (Equivalent Transformation). Suppose that (a) the function f : F^n → ℝ̄ is lower semi-continuous with f(x) → +∞ as |x| → ∞; (b) f is componentwise convex (componentwise discrete convex if F = Z); and (c) the random vector Ξ has independent components with realizations ξ ∈ X = Supp(Ξ). Then τ* defined in (2) is also the optimal objective value of the following optimization problem:

inf E[f(v(Ξ))]
s.t. v(ξ) ≤ ξ ∀ξ ∈ X, v(ξ) = (v_1(ξ_1), ..., v_n(ξ_n)) ∈ F^n ∀ξ ∈ X.    (4)

Proof. Let π* be the optimal objective value of problem (4). Since for any u ∈ F^n, v(ξ) = u ∧ ξ is feasible for problem (4), π* ≤ τ*. It remains to show that τ* ≤ π*. Clearly, this holds when π* = ∞.
Thus, in the following, we assume that π ∗ < ∞, which together with assumption (a) implies that all optimization problems involved below, as well as problems (2) and (4), admit finite optimal solutions. Given any optimal solution of (4) denoted by v ∗ = (v ∗ (ξ)|ξ ∈ X ), we will show that we can find a solution û ∈ F n such that E[f (û ∧ Ξ)] = E[f (v ∗ (Ξ))]. We first show that it is true for n = 1. Let û = arg minu∈F f (u) (when there are multiple optimal solutions, we choose the smallest one). Consider any feasible solution v = (v(ξ)|ξ ∈ X ) of problem (4). We have f (û ∧ ξ) ≤ f (v(ξ)) for any ξ ∈ X according to Lemma 1. Hence, E[f (û ∧ Ξ)] ≤ π ∗ . Note that û is a feasible solution for problem (2), which implies that τ ∗ = E[f (û ∧ Ξ)] ≤ π ∗ . Combined with the fact that π ∗ ≤ τ ∗ , we have τ ∗ = π ∗ . We now consider the general case with n ≥ 1. Use vi∗ to represent the ith component of v ∗ for i = 1, ..., n. Starting from the first component, define π1 (u1 ) = E[f (u1 , v2∗ (Ξ2 ), . . . , vn∗ (Ξn ))]. The component-wise convexity of f implies that π1 (u1 ) is convex in u1 . Since the components of the vector ξ are independently distributed, EΞ1 [π1 (v1 (Ξ1 ))] = EΞ [f (v1 (Ξ1 ), v2∗ (Ξ2 ), . . . , vn∗ (Ξn ))] for any measurable function v1 (·), and the preceding analysis for n = 1 implies that there exists a û1 such that π ∗ = min{E[π1 (v1 (Ξ1 ))]|v1 (ξ1 ) ≤ ξ1 , v1 (ξ1 ) ∈ F , ∀ξ1 ∈ X1 } = min E[π1 (u1 ∧ Ξ1 )] = E[π1 (û1 ∧ Ξ1 )]. u1 ∈F Next define π2 (u2 ) = E[f (û1 ∧ Ξ1 , u2 , v3∗ (Ξ3 ), . . . , vn∗ (Ξn ))]. Clearly, π2 is convex. Following the preceding analysis, there exists a û2 such that π ∗ = min{E[π2 (v2 (Ξ2 ))]|v2 (ξ2 ) ≤ ξ2 , v2 (ξ2 ) ∈ F , ∀ξ2 ∈ X2 } = min E[π2 (u2 ∧ Ξ2 )] = E[π2 (û2 ∧ Ξ2 )]. u2 ∈F ∗ Continue this process and define πi (ui ) = E[f (û1 ∧ Ξ1 , ..., ûi−1 ∧ Ξi−1 , ui , vi+1 (Ξi+1 ), . . . , vn∗ (Ξn ))]. Applying the same approach, we can find ûi , i = 3, ..., n, such that π ∗ = min{E[πi (vi (Ξi ))]|vi (ξi ) ≤ ξi , vi (ξi ) ∈ F , ∀ξi ∈ Xi } = min E[πi (ui ∧ Ξi )] = E[πi (ûi ∧ Ξi )]. ui ∈F Therefore, π ∗ = E[πn (ûn ∧ Ξn )] = E[f (û1 ∧ Ξ1 , ..., ûn ∧ Ξn )]. Since û is a feasible solution to (2), we have τ ∗ ≤ E[f (û ∧ ξ)] = π ∗ . Combined with the fact that π ∗ ≤ τ ∗ , we have π ∗ = τ ∗ . Q.E.D. 9 Remark 1. In the proof of the above theorem, we illustrate that when n = 1, min E[f (u ∧ Ξ)] = E[f (û ∧ Ξ)], u∈F where û is any minimizer of the function f . In fact, this observation is still valid when f is quasiconvex. However, when n > 1, such a result no longer holds, i.e., minu∈F E[f (u ∧ Ξ)] 6= E[f (û ∧ Ξ)], even if f is jointly convex. We now present an example. Specifically, let n = 2, F = <, f (u1 , u2 ) = (u1 + u2 − 2)2 + (u1 − 1)2 + (u2 − 1)2 , and ξ1 and ξ2 be independent and identically distributed and take values 0 and 2 with equal probabilities. In this case, û = (1, 1). However, one can easily verify that arg minu∈F n E[f (u ∧ Ξ)] = (1.2, 1.2) 6= û. Remark 2. In the above theorem, we require that v(ξ) = (v1 (ξ1 ), . . . , vn (ξn )). This cannot be relaxed to allow v(ξ) = (v1 (ξ), . . . , vn (ξ)). To illustrate this, we use the above example again. Note that for problem (4), the optimal objective value is 2.4 and an optimal solution is given by v1∗ (0) = v2∗ (0) = 0, v1∗ (2) = v2∗ (2) = 1.2. However, if one replaces v(ξ) = (v1 (ξ1 ), . . . , vn (ξn )) by v(ξ) = (v1 (ξ), . . . 
, vn (ξ)) in problem (4), the optimal objective value becomes 2.25 and an optimal solution is given by v ∗ (0, 0) = (0, 0), v ∗ (0, 2) = (0, 1.5), v ∗ (2, 0) = (1.5, 0), v ∗ (2, 2) = (1, 1). Remark 3. It is interesting to observe that u does not appear in problem (4). Our proof implies that given an optimal solution u∗ of problem (2), v ∗ = (v ∗ (ξ) = u∗ ∧ ξ |ξ ∈ X ) is optimal for problem (4). On the other hand, given an optimal solution v ∗ of problem (4), we can directly construct an optimal solution of problem (2) without solving any additional optimization problem. To see this, we start with n = 1 and define S = {ξ |v ∗ (ξ) < ξ, ξ ∈ X } (for simplicity, we drop the subscript 1 in the presentation when n = 1). We consider two cases depending on whether the probability of event S, denoted by P (S), is zero or not. In the first case, P (S) > 0. Randomly pick ξˆ ∈ S according to ˆ It suffices to show that û the probability distribution of Ξ conditional on S and define û = v ∗ (ξ). is optimal for the optimization problem minu∈F f (u) with probability 1. Suppose this is not true ˆ is not optimal for minu∈F f (u). We and P (S 0 ) > 0, where S 0 is the event such that ξˆ ∈ S and v ∗ (ξ) define a new feasible solution of problem (4): v ∗ (ξ), v̂(ξ) = u0 ∧ ξ, if ξ ∈ / S0, if ξ ∈ S 0 , where u0 is an optimal solution of minu∈F f (u). If ξ ∈ / S 0 , then v̂(ξ) = v ∗ (ξ) and f (v̂(ξ)) = f (v ∗ (ξ)). If ξ ∈ S 0 and ξ > u0 , f (v̂(ξ)) = f (u0 ) < f (v ∗ (ξ)). If ξ ∈ S 0 and ξ ≤ u0 , v ∗ (ξ) < ξ ≤ u0 and the convexity 10 of f implies that f (v̂(ξ)) = f (ξ) < f (v ∗ (ξ)). Since P (S 0 ) > 0, we have E[f (v̂(ξ))] < E[f (v ∗ (ξ))], which is a contradiction. Therefore, with probability 1, û is optimal for the optimization problem minu∈F f (u). In the second case, P (S) = 0. Note that f must be decreasing over X , otherwise we can easily construct a feasible solution of problem (4) with a lower cost. Hence, assumption (a) implies that ξ¯ < ∞, and û = ξ¯ is a minimizer of the function f on F . For n > 1, define, for i = 1, . . . , n, event Si = {ξi |v ∗ (ξi ) < ξi }. If the probability of Si is positive, randomly pick ξˆi ∈ Si i according to the probability distribution of Ξ conditional on Si and define ûi = vi∗ (ξˆi ); otherwise, define ûi = ξ¯i (again ξ¯i < ∞). Since the components of the random vector Ξ are independent, we can extend the above analysis to show that, with probability 1, û = (û1 , . . . , ûn ) is an optimal solution of problem (2). We can explicitly incorporate constraints on u in Theorem 1 and consider a more general optimization model. To simplify notations, we define an operator ♦k as u♦k ξ , (u1 ∧ ξ1 , ..., uk ∧ ξk , uk+1 ∨ ξk+1 , ..., un ∨ ξn ). The problem of interest is inf E[f (u♦k Ξ)], u∈U (5) ¯ and U ⊆ F n . Define a set where f : F n → < V = {u♦k ξ |u ∈ U , ξ ∈ X }. (6) We impose the following assumption: Assumption 1. (a) For any u ∈ F n such that u♦k ξ ∈ V , ∀ξ ∈ X , there exists u0 ∈ U such that u0 ♦k ξ = u♦k ξ, ∀ξ ∈ X. (b) The indicator function of the set V is componentwise convex (componentwise discrete convex if F = Z ). Notice that Part (a) implies that if u♦k ξ ∈ V , ∀ξ ∈ X , we do not necessarily need u ∈ U . Instead, we only require that there exists u0 ∈ U such that u0 ♦k ξ = u♦k ξ, ∀ξ ∈ X . As can be seen from the proof of Theorem 2 below, Assumption 1 allows us to convert the constrained optimization problem (5) to an equivalent unconstrained optimization problem so that Theorem 1 can be applied. 
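Before turning to the constrained problem (5), it may be useful to verify Theorem 1 and Remarks 1-2 numerically on the two-dimensional example above. The following Python sketch (our own illustrative code, assuming numpy and scipy are available) solves the original problem (2), the transformed problem (4), and the relaxation discussed in Remark 2, and recovers the values reported in the text: an optimal solution near (1.2, 1.2) with value 2.4 for (2), the same value 2.4 for (4), and the strictly smaller value 2.25 when v(ξ) is allowed to depend on the whole vector ξ.

import numpy as np
from scipy.optimize import minimize

# Example from Remarks 1-2: f(u1,u2) = (u1+u2-2)^2 + (u1-1)^2 + (u2-1)^2,
# Xi_1, Xi_2 independent, each taking values 0 and 2 with probability 1/2.
support = [0.0, 2.0]
scenarios = [(a, b) for a in support for b in support]  # four equally likely realizations

def f(u1, u2):
    return (u1 + u2 - 2) ** 2 + (u1 - 1) ** 2 + (u2 - 1) ** 2

# Problem (2): minimize E[f(u ^ Xi)] over u in R^2 (non-convex objective).
def original(u):
    return np.mean([f(min(u[0], a), min(u[1], b)) for a, b in scenarios])

res2 = minimize(original, x0=[1.0, 1.0], method="Nelder-Mead")
print("problem (2):", res2.x.round(3), round(res2.fun, 3))   # approx [1.2 1.2], 2.4

# Problem (4): decision variables v_j(xi_j), one per component and support point,
# with the componentwise constraints v_j(xi_j) <= xi_j.
def transformed(v):  # v = (v1(0), v1(2), v2(0), v2(2))
    v1 = {0.0: v[0], 2.0: v[1]}
    v2 = {0.0: v[2], 2.0: v[3]}
    return np.mean([f(v1[a], v2[b]) for a, b in scenarios])

res4 = minimize(transformed, x0=np.zeros(4), method="L-BFGS-B",
                bounds=[(None, 0.0), (None, 2.0), (None, 0.0), (None, 2.0)])
print("problem (4):", round(res4.fun, 3))                    # 2.4, matching Theorem 1

# Remark 2: letting v(xi) depend on the whole vector xi (still with v(xi) <= xi)
# is a strict relaxation and yields a lower optimal value.
def joint(v):
    vs = v.reshape(4, 2)  # one pair v(xi) per scenario
    return np.mean([f(vs[k][0], vs[k][1]) for k in range(4)])

res_joint = minimize(joint, x0=np.zeros(8), method="L-BFGS-B",
                     bounds=[(None, s) for sc in scenarios for s in sc])
print("joint relaxation:", round(res_joint.fun, 3))          # 2.25, as in Remark 2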
We provide a nontrivial example under which Assumption 1 holds in Lemma 2. ¯ and the random vector Theorem 2. Consider the optimization problem (5), where f : F n → < Ξ in F n satisfy the assumptions in Theorem 1. Suppose that Assumption 1 is satisfied. Problem (5) and the following optimization problem have the same optimal objective value. inf E[f (v1 (Ξ1 ), . . . , vn (Ξn ))] s.t. vj (ξj ) ≤ ξj vj (ξj ) ≥ ξj ∀ ξj ∈ Xj , j = 1, . . . , k ∀ ξj ∈ Xj , j = k + 1, . . . , n (v1 (ξ1 ), . . . , vn (ξn )) ∈ V ∀ξ ∈ X . (7) 11 Proof. Problem (5) is equivalent to the following unconstrained optimization problem. inf {E[f (u♦k Ξ)] + δU (u)}. u∈F n (8) Define for any v ∈ F n , fˆ(v) = f (v) + δV (v), where V is defined in (6). Then by Assumption 1 the optimal objective value of problem (8) is equivalent to that of the following problem inf E[fˆ(u♦k Ξ)]. u∈F n (9) To see this, note that for any u ∈ U , we have u♦k ξ ∈ V ∀ξ ∈ X , and hence inf u∈F n E[fˆ(u♦k Ξ)] ≤ inf u∈U E[f (u♦k Ξ)]. On the other hand, due to Assumption 1, we have inf u∈F n E[fˆ(u♦k Ξ)] ≥ inf u∈U E[f (u♦k Ξ)]. Define a new random vector Ξ̃ with (Ξ̃1 , . . . , Ξ̃k , Ξ̃k+1 , . . . , Ξ̃n ) = (Ξ1 , . . . , Ξk , −Ξk+1 , . . . , −Ξn ) and ¯ by a new function f˜ : F n → < f˜(u1 , . . . , uk , uk+1 , . . . , un ) = fˆ(u1 , . . . , uk , −uk+1 , . . . , −un ). Then problem (9) is equivalent to the problem inf E[f˜(ũ ∧ Ξ̃)]. ũ∈F n By Theorem 1, it has the same optimal objective value with the following problem. ˜ inf E[f˜(ṽ(ξ))] ˜ ≤ ξ˜ ∀ξ˜ ∈ Supp(Ξ̃), s.t. ṽ(ξ) ˜ = (ṽ1 (ξ˜1 ), . . . , ṽn (ξ˜n )) ∈ F n , ṽ(ξ) which is clearly equivalent to problem (7) from the definition of f˜. Notice that the indicator function of the set V needs to be componentwise convex to ensure that f˜ is componentwise convex. Q.E.D. The following lemma provides an example which satisfies Assumption 1. Lemma 2. Assume that U = {u ∈ F n |Au ≤ b, u1 ≥ u1 , ..., uk ≥ uk , uk+1 ≤ ūk+1 , ..., un ≤ ūn }, where b, u1 , ..., uk , ūk+1 , ..., ūn are given constants, A = (aij ) with entries aij ≥ 0 for any i and j = 1, ..., k, and aij ≤ 0 for any i and j = k + 1, ..., n. In addition Xj is contained in [uj , +∞) for j = 1, ..., k, and Xj is contained in (−∞, uj ] for j = k + 1, ..., n. Then Assumption 1 is satisfied. Proof. For notational convenience, we only prove the case where k = n, i.e., there is only the ∧ operation. This is because we can apply the same technique used in the proof of Theorem 2 to convert a problem with the ∨ operation to a new one only with the ∧ operation. In this case the set U = {u|Au ≤ b, uj ≥ uj , j = 1, ..., n}, where aij ≥ 0 for all i = 1, ..., m, j = 1, ..., n, and ξj ≥ uj ∀ξj ∈ Xj for j = 1, ..., n. 12 Recall that we define ξ¯j = ess sup{ξj |ξ ∈ X }. We first consider the case where ξ¯j < ∞ for all j. Note that V = {u ∧ ξ |u ∈ U , ξ ∈ X } is equivalent to the following set {w|Aw ≤ b, u ≤ wj ≤ ξ¯j , j = j 1, ..., n}, denoted by Vw . For any w = u ∧ ξ ∈ V , we have Aw = A(u ∧ ξ) ≤ b since aij ≥ 0 for all i, j and Au ≤ b; u ≤ uj ∧ ξj = wj ≤ ξ¯j since u ≤ ξj ∀ξj ∈ Xj . For any w ∈ Vw , let u = w, ξj = ξ¯j for all j. j j Then w = u ∧ ξ since wj ≤ ξ¯j for all j, and u ∈ U . Hence, V = Vw . Clearly V is a convex set. Given any u satisfying u ∧ ξ ∈ V ∀ξ ∈ X , we define u0 such that for j = 1, ..., n, uj , if uj ≤ ξ¯j , u0j = ξ¯ , if uj > ξ¯j . j One can easily check that u0 ∧ ξ = u ∧ ξ ∀ξ ∈ X . We only need to show u0 ∈ U . Since ξ¯j ≥ uj and uj ≥ uj , we have u0j ≥ uj for j = 1, ..., n. 
Because A(u ∧ ξ) ≤ b for all ξ ∈ X and Ξ has independent components, we obtain A(u ∧ ξ̄) ≤ b, which is the same as Au′ ≤ b. If ξ̄_j = ∞ for some j, then u′_j = u_j for any such j and, following similar arguments, we can obtain the desired results. Q.E.D.

2.2. Preservation of Structural Properties

One advantage of our transformation technique is that it can be used to establish the preservation of not only convexity and submodularity but also L♮-convexity under optimization operations, which plays a critical role in characterizing the structure of the optimal policies for many dynamic decision making problems and facilitates their efficient computation. To see this, we first provide a brief review of the concept of L♮-convexity and some of its structural properties. L♮-convexity was defined by Murota (1998) as a fundamental concept extending convex analysis from real spaces to spaces of integers (see Murota 2009 for a survey of the recent developments in discrete convex analysis). It was first introduced into the inventory management literature by Lu and Song (2005) and used by Zipkin (2008) to characterize the optimal policy structure of lost-sales inventory models with positive leadtimes. Since then, L♮-convexity has proved powerful enough to establish the structures of optimal policies in various other inventory models: serial inventory systems (Huh and Janakiraman 2010), inventory-pricing models with positive leadtimes (Pang et al. 2012), and perishable inventory models (Chen, Pang and Pan 2014), among others.

In the transformed problem the decisions are v = (v(ξ) | ξ ∈ X) ∈ (F^n)^X. Note that the direct product of lattices is still a lattice under the componentwise partial order (see Example 2.2.3(d) of Topkis 1998). Therefore, if X_α is a lattice for each α ∈ A, where A is an index set, then the direct product of the sets X_α is also a lattice. In the following we present the definition of L♮-convexity with domain Y ≜ (F^n)^X, where the index set may be arbitrary.

Definition 1. A function f : Y → ℝ̄ is L♮-convex if for any x, x′ ∈ Y and λ ∈ F+,

f(x) + f(x′) ≥ f((x + λe) ∧ x′) + f(x ∨ (x′ − λe)),

where e is the all-ones vector in Y. A set V ⊆ Y is said to be L♮-convex if its indicator function δ_V is L♮-convex.

For an L♮-convex function f, its effective domain dom(f) = {x ∈ Y | f(x) < +∞} is an L♮-convex set. We sometimes say a function f is L♮-convex on a set V with the understanding that V is an L♮-convex set and the extension of f to the whole space obtained by setting f(v) = +∞ for v ∉ V is L♮-convex. One can also show that an L♮-convex function restricted to an L♮-convex set is again L♮-convex. Following a proof similar to that in Simchi-Levi et al. (2014), an equivalent definition of L♮-convexity is the following: a function f : Y → ℝ̄ is L♮-convex if and only if g(x, ξ) ≜ f(x − ξe) is submodular in (x, ξ) ∈ Y × S, where S is the intersection of F and any unbounded interval in ℝ, and e is the all-ones vector in Y.

We now list some of the commonly used properties of L♮-convexity. To describe the monotonicity of optimal solution sets, we use the induced set ordering v, which defines X′ v X″ for two nonempty sets X′ and X″ if x′ ∈ X′ and x″ ∈ X″ imply that x′ ∧ x″ ∈ X′ and x′ ∨ x″ ∈ X″ (see Topkis 1998, p. 32). For a nonempty set X_t that depends on a parameter t in a partially ordered set T, we say that X_t is increasing in t on T if {X_t, t ∈ T} is ordered by the induced set ordering v. The proofs of these properties are relegated to the appendix.

Proposition 1 (L♮-Convexity).
(a) Any nonnegative linear combination of L\ -convex ¯ (i = 1, 2, ..., n) are L\ -convex, then for any scalar functions is L\ -convex. That is, if fi : Y → < Pm αi ≥ 0, i=1 αi fi is also L\ -convex. (b) If fk is L\ -convex for k = 1, 2, ... and limk→∞ fk (x) = f (x) for any x ∈ Y , then f is L\ -convex. (c) Assume that a function f (·, ·) is defined on the product space Y × F m . If f (·, y) is L\ -convex for any given y ∈ F m , then for a random vector ζ defined on F m , Eζ [f (x, ζ)] is L\ -convex, provided it is well defined. ¯ is an L\ -convex function, then g : Y × F → < ¯ defined by g(x, λ) = f (x − λe) is (d) If f : Y → < also L\ -convex. ¯ is an L\ -convex (e) Assume that A is an L\ -convex set of F n × Y and f (·, ·) : F n × Y → < function. Then the function g(x) = inf f (x, y) (10) y:(x,y)∈A is L\ -convex over F n if g(x) 6= −∞ for any x ∈ F n . (f ) Let e and ẽ be the all-ones vectors corresponding to the state space of x and the decision space of y respectively in (10). Then arg miny:(x,y)∈A f (x, y) is increasing in x and arg min f (x + ωe, y) v ωẽ + arg min f (x, y). y:(x+ωe,y)∈A y:(x,y)∈A (g) Denote xi a component of x ∈ Y . A set with a representation {x ∈ Y : l ≤ x ≤ u, xi − xj ≤ vij , ∀i 6= j }, is L\ -convex in the space Y , where l, u ∈ Y and vij ∈ F . 14 We now show how our transformation technique can be used to establish preservation properties of convexity, submodularity, and L\ -convexity under optimization operations. Consider the following optimization problem g(x, z) = inf u:(x,z,u)∈A E[f (x, u♦k (z + Ξ))], (11) ¯ , x ∈ F m , z ∈ F n and set A ⊆ F m × F n × F n . where f (·, ·) : F m × F n → < Define a set AΞ = {(x, z, w)|w = u♦k (z + ξ), (x, z, u) ∈ A, ξ ∈ X }. Similar to Assumption 1, we specify the following condition: Assumption 2. (a) For any (x, z, u) such that (x, z, u♦k (z + ξ)) ∈ AΞ ∀ξ ∈ X , there exists (x, z, u0 ) ∈ A such that u0 ♦k (z + ξ) = u♦k (z + ξ) ∀ξ ∈ X . (b) The indicator function of the set AΞ is componentwise convex in w (componentwise discrete convex if F = Z ). Similar to Lemma 2, we provide an example with linear constraints which satisfies Assumption 2. The proof is similar and thus omitted for brevity. Lemma 3. Assume that A = {(x, z, u)|Au ≤ b, u1 ≥ u1 , ..., uk ≥ uk , uk+1 ≤ ūk+1 , ..., un ≤ ūn }, where b, u1 , ..., uk , ūk+1 , ..., ūn are parameters that may depend on x and z, A = (aij ) with entries aij ≥ 0 for any i and j = 1, ..., k, and aij ≤ 0 for any i and j = k + 1, ..., n. In addition Xj is contained in [uj − zj , +∞) for j = 1, ..., k, and Xj is contained in (−∞, uj − zj ] for j = k + 1, ..., n. Then Assumption 2 is satisfied. Now we are ready to present our main result in this section. Theorem 3 (Preservation). Consider the optimization problem (11), where f and Ξ satisfy the assumptions in Theorem 1 given any (x, z). If Assumption 2 is satisfied, then we have the following results: (a) If f and AΞ are convex, then g is also convex. (b) If f is submodular and AΞ is a lattice, then g is also submodular. (c) If f and AΞ are L\ -convex, then g is also L\ -convex. Proof. Theorem 2 implies that problem (11) can be equivalently converted to the following one: inf E[f (x, v1 (Ξ1 ), . . . , vn (Ξn ))] s.t. vj (ξj ) ≤ zj + ξj vj (ξj ) ≥ zj + ξj ∀ξj ∈ Xj , ∀ j = 1, . . . , k ∀ξj ∈ Xj , ∀ j = k + 1, . . . , n (x, z, v1 (ξ1 ), . . . , vn (ξn )) ∈ AΞ ∀ξ ∈ X . (12) 15 To see this, given fixed (x, z), let U (x, z) denote the constraint set {u : (x, z, u) ∈ A}, fx (u) = f (x, u), Ξ̃z = z + Ξ, and X̃z = Supp(Ξ̃z ). 
Then (11) is equivalent to inf u∈U (x,z) E[fx (u♦k Ξ̃z )]. (13) Let V (x, z) = {u♦k ξ˜ : u ∈ U (x, z), ξ˜ ∈ X̃z }. Given any u♦k ξ˜ ∈ V (x, z) ∀ξ˜ ∈ X̃z , we have (x, z, u) satisfying (x, z, u♦k (z + ξ)) ∈ AΞ ∀ξ ∈ X . According to Assumption 2, there exists (x, z, u0 ) ∈ A such that u0 ♦k (z + ξ) = u♦k (z + ξ) ∀ξ ∈ X . Thus we have u0 ∈ U (x, z) and u0 ♦k ξ˜ = u♦k ξ˜ ∀ξ˜ ∈ X̃z . If the indicator function of AΞ is componentwise convex in w, it is clear that the indicator function of V (x, z) is also componentwise convex. Therefore, if Assumption 2 is satisfied, then Assumption 1 is also satisfied. According to Theorem 2, we can transform (13) into: inf E[fx (v1 (Ξ̃z1 ), . . . , vn (Ξ̃zn ))] s.t. vj (ξ˜j ) ≤ ξ˜j ∀ξ˜j ∈ X̃zj , ∀ j = 1, . . . , k vj (ξ˜j ) ≥ ξ˜j ∀ξ˜j ∈ X̃zj , ∀ j = k + 1, . . . , n (v1 (ξ˜1 ), . . . , vn (ξ˜n )) ∈ V (x, z) ∀ξ˜ ∈ X̃z , which is equivalent to (12). It is straightforward to check that the constraint set involving (x, z, (v1 (ξ1 ), . . . , vn (ξn ))ξ∈X is a convex set, a lattice, and an L\ -convex set (Proposition 1 part (g)) on the product set F m × F n × (F n )X for cases (a), (b) and (c) respectively. In the following we show that the objective function E[f (x, v1 (Ξ1 ), ..., vn (Ξn ))] is convex, submodular, L\ −convex in (x, (v1 (ξ1 ), ..., vn (ξn ))ξ∈X ∈ F m × (F n )X for cases (a), (b) and (c) respectively. ¯ such that f˜(x, v, ξ) , f (x, v(ξ)) ∀ξ ∈ X . Clearly, E[f˜(x, v, Ξ)] = Define f˜ : F m × (F n )X × X → < E[f (x, v(Ξ))]. Given any realization ξ, if f (·, ·) is convex/ submodular/ L\ −convex, then one can easily prove by definition that f˜(·, ·, ξ) is also convex/ submodular/ L\ −convex. We show the proof for convexity; the proofs for submodularity and L\ −convex are similar and simply follow their definitions respectively. Given ξ, for any (x, v), (x0 , v 0 ) and λ ∈ [0, 1], we have f˜(λx + (1 − λ)x0 , λv + (1 − λ)v 0 , ξ) = f (λx + (1 − λ)x0 , λv(ξ) + (1 − λ)v 0 (ξ)) ≤ λf (x, v(ξ)) + (1 − λ)f (x0 , v 0 (ξ)) = λf˜(x, v, ξ) + (1 − λ)f˜(x0 , v 0 , ξ). Since f˜(·, ·, ξ) is convex/ submodular/ L\ −convex for any given ξ, we have that the objective function E[f (x, v(Ξ))] = E[f˜(x, v, Ξ)] is also convex/ submodular/ L\ −convex due to Proposition 1 part (c). Part (a) follows immediately from the theorem of convexity preservation under minimization (see Simchi-Levi et al. 2014, Proposition 2.1.15, for the case with finite-dimensional spaces, and 16 Zǎlinescu 2002, Theorem 2.1.3(v), for the case with general vector spaces). Part (b) follows from Theorem 2.7.6 of Topkis (1998). Part (c) follows from Proposition 1 part (e). Q.E.D. The following theorem characterizes the monotonicity properties of the solution set to the optimization problem (11). The proof is relegated to the appendix. Theorem 4. Consider the optimization problem (11), where f and Ξ satisfy the assumptions in Theorem 1 given any (x, z). Let U ∗ (x, z) denote the the optimal solution set of (11). If Assumption 2 is satisfied, A, AΞ are closed, and in addition uj ≤ zj + ξ¯j , j = 1, ..., k, uj ≥ zj + ξ , j = k + 1, ..., n, j then we have the following results: (a) If f is a submodular function, and A, AΞ are lattices, then U ∗ (x, z) is increasing in (x, z). There exist a greatest element and a least element in U ∗ (x, z), which are increasing in (x, z). (b) If f is an L\ -convex function, and A, AΞ are L\ -convex sets, then U ∗ (x, z) is increasing in (x, z) and U ∗ ((x, z) + ωe) v U ∗ (x, z) + ωe for any ω > 0. 
Within U ∗ (x, z), there exist a greatest element and a least element, which have the above monotonicity properties with limited sensitivity. In the following we provide an example to show that the assumption uj ≤ zj + ξ¯j , j = 1, ..., k, uj ≥ zj + ξ j , j = k + 1, ..., n is needed. Example 1. Suppose that f (u) = u2 and the support of Ξ is [−3, −1]. Let U ∗ (z) = arg minu∈U E[f (u ∧ (z + Ξ))], and z = 0, ω = 2. When U = <, we have U ∗ (z) = [−1, ∞), U ∗ (z + ω) = {0}. Notice that U ∗ (z) v U ∗ (z + ω) does not hold. However, when U (z) = {u ∈ < : u ≤ z + ξ¯}, we have U (z) = (−∞, −1], U (z + ω) = (−∞, 1]. Then U ∗ (z) = {−1}, U ∗ (z + ω) = {0}. Clearly U ∗ (z) v U ∗ (z + ω). Notice that if the conditions in Lemma 3 are satisfied, then the assumptions uj ≤ zj + ξ¯j , j = 1, ..., k, uj ≥ zj + ξ j , j = k + 1, ..., n in Theorem 4 are without loss of generality. To see this, given any (x, z, u) which is feasible for problem (11), (x, z, u1 ∧ (z1 + ξ¯1 ), ..., uk ∧ (zk + ξ¯k ), uk+1 ∨ (zk+1 + ξ k+1 ), ..., un ∨ (zn +ξ n )) is also feasible and yields the same objective value. In all of our applications, the constraint set satisfies the conditions in Lemma 3. In the following three sections, we apply the transformation technique and the relevant preservation results to three fundamental models from inventory and revenue management literature and demonstrate how these results facilitate the structural analyses. 3. Dual Sourcing under Supply Capacity Uncertainty Consider a firm managing a T -period periodic-review inventory system in the presence of two capacitated suppliers (or delivery modes): a regular supplier with a longer replenishment leadtime of lR periods and a unit ordering cost cR , and an expedited (emergency) supplier with a shorter replenishment leadtime of lE periods and a unit ordering cost cE , where lR and lE are nonnegative integers and lR > lE . There are no fixed ordering costs. Both suppliers offer limited and uncertain 17 capacities, denoted by KR,t and KE,t , t ∈ {1, ..., T }, for regular and expedited suppliers, respectively. The processes {KR,t }Tt=1 and {KE,t }Tt=1 are both independent over time and independent of each other. Note that the independence assumption on the supply capacity distributions can be justified by the dual sourcing practice with two geographically distant locations, such as China and Mexico in the case study of Van Mieghem (2008), where the production processes are typically independent of each other. Demands of successive periods, denoted by dt for period t, are stochastic, independent overtime, and independent of the supply capacities. For convenience, let d[t,t+l] be the total demand from period t to period t + l, i.e., d[t,t+l] = dt + ... + dt+l . It is notable that a typical assumption in the dual-sourcing literature without capacity limits is that the expedited ordering cost cE is greater than the regular ordering cost cR , because otherwise it is trivial for the firm to procure exclusively from the expedited supplier (see, e.g., Veeraraghavan and Scheller-Wolf 2006, Sheopuri et al. 2010). We do not make this assumption here. In fact, if the expedited capacity is limited, even when the regular ordering cost is higher, it may still be beneficial to order from the regular supplier. The sequence of events is as follows. At the beginning of period t, orders from the regular supplier lR periods ago and the expedited supplier lE periods ago (if lE ≥ 1) are received. 
(Note that if lE = 0, we assume that an order from the expedited supplier is received right away.) The firm then reviews the inventory level and the orders outstanding, and determines how much to order from the two suppliers before observing the suppliers’ capacities KR,t and KE,t . Let qR and qE be the (target) order quantities from the regular and expedited channels, respectively. After the orders are placed, the suppliers’ capacities KR,t and KE,t are realized. We use kR,t and kE,t to denote realizations of KR,t and KE,t respectively. Then the amounts of inventories shipped from the regular and expedited suppliers are qR ∧ kR,t and qE ∧ kE,t , respectively. Note that here we assume that the supply capacity uncertainties are resolved in the same period when the orders are placed (see Federgruen and Yang 2011 for a similar treatment for the random yield problem). This is reasonable when the capacity uncertainties are mainly driven by the unreliability of the production process and the production time is no more than the period length while the shipping time is long. The ordering costs are given by cR (qR ∧ kR,t ) and cE (qE ∧ kE,t ). Here we assume that the ordering cost is proportional to the quantity actually delivered, which is a common assumption in the literature of inventory control with random capacities (see Ciarallo et al. 1994, Wang and Gerchak 1996, Feng 2010, and so on). This assumption is appropriate when the payment is made upon the receipt of the shipments and the firms only pay the actual delivered amount. At the end of this period, the demand is realized and met with on-hand inventory (if any). Unmet demand is fully backlogged with a unit shortage cost h− . Excess inventory is carried over to the next period with a unit holding cost h+ . The objective of the firm is to find a dual-sourcing strategy so as to minimize the total expected discounted cost, including ordering cost, holding cost and backorder cost, over the planning horizon. 18 To present the dynamic programming model for deriving the optimal strategy, one can naturally describe the system state right before the firm places orders by a vector s = (s0 , ..., slR −1 ), where si denotes the amount of on-hand net inventory plus outstanding orders that will arrive within i periods, i = 1, .., lR − 1. However, in a backlogging model, since the orders of each period will have an influence only lE periods later, and the on-hand net inventory level lE periods later solely depends on slE , it suffices to use the now standard accounting technique to discount the future inventory cost to the current period and focus on the pipeline inventory levels slE , ..., slR −1 . Specifically, we can reduce the state space to k = lR − lE dimensions by defining the system state as z = (z1 , ..., zk ), where zi = si+lE −1 , i = 1, ..., k. The state space is given by S = {(z1 , . . . , zk ) : z1 ≤ z2 ≤ . . . ≤ zk }. Given the system state z, the system state of the next period is given by z̃ = (z2 + qE ∧ kE,t − dt , ..., zk + qE ∧ kE,t − dt , y ∧ (zk + kR,t ) + qE ∧ kE,t − dt ), where y = zk + qR is the (target) order-up-to level from the regular channel. For reasons that will become clear later, we denote u = −qE and k̃E,t = −kE,t . The dynamics of the system state can be rewritten as z̃ = [(z2 , ..., zk , y ∧ (zk + kR,t )) − (u ∨ k̃E,t + dt )e], where e is the k-dimensional all-ones vector. We are now ready to present the dynamic program to derive the firm’s optimal strategy. Let α ∈ (0, 1] be the discount factor. 
The optimality equations can be written as follows. For t = 1, ..., T , vt (z) = min y≥zk ,u≤0 n o E[gt (z, y ∧ (zk + KR,t ), u ∨ K̃E,t )] ∀ z ∈ S , (14) where gt (z, y, u) = cR (y − zk ) − cE u + Bt (z1 − u) + αE[vt+1 ((z2 , ..., zk , y) − (dt + u)e)], (15) and Bt (x) = αlE E[h+ (x − d[t,t+lE ] )+ + h− (d[t,t+lE ] − x)+ ]. Note that the expectation of the right hand side of equation (14) is taken over the random capacities. The function gt represents the expected total discounted cost after the capacities are realized but before the demand is realized. The first term of the right hand side of equation (15) is the ordering cost from the regular supplier, the second term is the ordering cost from the expedited supplier, the third term is the expected discounted holding and shortage cost of period t + lE , and the last term is the expected total discounted future costs. For simplicity, we assume the terminal value function vT +1 (z) = 0 for any z, which implies that there is no salvage value for leftover inventory 19 and no backlogging cost for unfilled demand after period T + lE . That is, the firm makes decisions in the first T periods but takes into account the inventory cost up to period T + lE . Our structural results and analysis still hold if vT +1 (z) is assumed to be L\ -convex. Problem (14) admits optimal solutions under rather general and standard conditions. Nevertheless, it is a challenging problem. First, the state space is multi-dimensional. A more severe issue is that the objective function of problem (14) is not convex. Note that for the last period with vT +1 = 0, the objective function has a structure similar to that in (1), which may not be convex. Thus, it is far from being clear whether the cost-to-go functions vt are convex, and even if they are, the objective function of problem (14) is not. However, with the transformation technique developed in Section 2 we can convert the non-convex minimization problem (14) into an equivalent convex minimization problem and show that vt is actually L\ -convex. In the following analysis, we assume that both cE and cR are smaller than h− /(1 − α), which ensures that it is not optimal to never order anything and merely accumulate penalty costs. Let (yt (z), ut (z)) denote the optimal solution for problem (14). When there are multiple optimal solutions, we assume it is the greatest one, which will be shown to be well defined later. Theorem 5. For all t, vt (z) is L\ -convex in z ∈ S . The optimal solution (yt (z), ut (z)) is increasing in z with limited sensitivity. (When there are multiple optimal solutions, we assume it is the greatest one.) That is, for any ω > 0, yt (z) ≤ yt (z + ωe) ≤ yt (z) + ω, Proof. ut (z) ≤ ut (z + ωe) ≤ ut (z) + ω (16) The proof is by induction. Suppose that vt+1 is L\ -convex. By Proposition 1 (d), for any dt , vt+1 [(z2 , ..., zk , y) − (dt + u)e] is L\ -convex in (z, y, u) and so is αvt+1 [(z2 , ..., zk , y) − (dt + u)e]. Clearly all the other terms of gt are L\ -convex in (z, y, u) (That’s why we define u = −qE and k̃e,t = −kE,t ). Thus, gt is L\ -convex in (z, y, u). Let A = {(z, y, u)|y ≥ zk , u ≤ 0} and AΞ = {(z, y ∧ (zk + kR,t ), u ∨ k̃e,t )|y ≥ zk , u ≤ 0, kR,t ∈ Supp(KR,t ), k̃e,t ∈ Supp(K̃E,t )}. Since KR,t ≥ 0 and K̃E,t ≤ 0 almost surely, it is easy to see that u l AΞ = {(z, w1 , w2 )|zk + kR,t ≥ w1 ≥ zk , k̃E,t ≤ w2 ≤ 0}, u l where kR,t = ess sup Supp(KR,t ) and k̃E,t = ess inf Supp(K̃E,t ). The constraint set AΞ forms an L\ -convex set because of Proposition 1 (g). 
It is straightforward to see that the set A = {(z, y, u)|y ≥ zk , u ≤ 0} is of the form in Lemma 3. Applying Theorem 3, we know vt (z) is L\ -convex in z ∈ S . According to Theorem 4, the greatest optimal solution (yt (z), ut (z)) is well defined and has the desired monotonicity property with limited sensitivity. Q.E.D. 20 The monotonicity and limited sensitivity of yt (z) imply that the optimal regular order quantity qR,t (z), which is equal to yt (z) − zk , increases in z1 , ..., zk−1 , but decreases in zk and satisfies −ω ≤ qR,t (z + ωe) − qR,t (z) ≤ 0. To gain more insights, we can transform the state vector to x = (x1 , ..., xk ) where x1 = z1 and xi = zi − zi−1 , i = 2, ..., k. Note that x1 = z1 represents the amount of on-hand net inventory plus outstanding orders that will arrive within lE periods, and xi represents the size of the outstanding order that will arrive lE + i − 1 periods later. Denote the corresponding optimal order quantities by q̂R,t (x) = qR,t (z) and q̂E,t (x) = qE,t (z). The monotonicity and limited sensitivity of yt (z) imply the following inequalities. −ω ≤ q̂R,t (x + ωek ) − q̂R,t (x) ≤ q̂R,t (x + ωek−1 ) − q̂R,t (x) ≤ ... ≤ q̂R,t (x + ωe1 ) − q̂R,t (x) ≤ 0. (17) Compare states z + ωei and z. For i = 1, the former has ω more units of on-hand inventory or outstanding orders that will arrive within lE periods. For i = 2, ..., k, the former has ω more units of outstanding order that will arrive lE + i − 1 periods later. Thus, inequalities (17) imply that the regular order quantity decreases in on-hand inventory level and the sizes of the outstanding orders. The sensitivity decreases in the age of the outstanding order, where the age refers to the number of periods passed since the order was placed. In other words, the regular order quantity is most sensitive to the size of the most recently placed order. Similarly, for the expedited order quantity q̂E,t (x) = −ut (z), we have −ω ≤ q̂E,t (x + ωe1 ) − q̂E,t (x) ≤ q̂E,t (x + ωe2 ) − q̂E,t (x) ≤ ... ≤ q̂E,t (x + ωek ) − q̂E,t (x) ≤ 0. (18) That is, the expedited order quantity decreases in the sizes of outstanding orders in the pipeline, but the sensitivity increases in the age of the outstanding order. In other words, the expedited order quantity is least sensitive to the most recently placed order, which is opposite to the sensitivity of the regular order quantity. Such monotone properties with limited sensitivity are also observed in Hua et al. (2015) who consider an uncapacitated dual sourcing problem, and in the joint inventory-pricing control problems with positive leadtime where the replenishment decision has a decreasing sensitivity in the age of the outstanding order whereas the pricing decision has an increasing sensitivity in the age of the outstanding order (see, e.g., Chen, Pang and Pan 2014). The implication is that the decisions whose immediate impacts are closer to the on-hand stock (e.g., pricing or expedited order) are more sensitive to the on-hand inventory level and older outstanding orders while the decisions whose immediate impacts are further away from the on-hand stock (e.g., regular order) is more sensitive to the younger outstanding orders. Remark 4 (Dual-index policy). For the special case with the leadtime difference being 1, i.e., k = lR − lE = 1, it has been shown in the literature that a dual-index policy is optimal when 21 there are deterministic capacity limits (see, e.g., Fukuda 1964, Zhou and Chao 2014). 
That is, in each period t, there exist two critical inventory positions for the regular and expedited ordering decisions, denoted by SR,t and SE,t respectively, with SE,t ≤ SR,t . When the inventory position is below SE,t , order up to SE,t from the expedited supplier and then place an order to SR,t from the regular supplier; when the inventory position is above SE,t but below SR,t , order up to SR,t from the regular supplier. In the presence of random capacities, such a policy may no longer be optimal. Nevertheless, when the expedited supplier has no capacity limit and the regular supplier has a random capacity limit, it can be readily shown that a dual-index policy is still optimal. 4. Assemble-to-Order Systems with Random Capacity Consider an assemble-to-order (ATO) system over a planning horizon with T periods. The ATO system consists of m components indexed by i ∈ {1, 2, ..., m} and n products indexed by j ∈ {1, 2, ..., n}. At the beginning of each period, the firm observes on-hand inventory levels of the m components x = (x1 , ..., xm )T , and then decides the order-up-to inventory levels of components y = (y1 , ..., ym )T . The delivered quantity of each component i cannot exceed a random capacity, denoted by Ξt,i , which is realized after the order is placed. The capacities are independent of each other and over time. Inventory replenishment leadtime is assumed to be zero. The demand for product j in period t is Dt,j and we assume that they are independent over time and independent of capacities. Let Dt = (Dt,1 , ..., Dt,n )T . The bill of materials is specified by an m × n matrix A, whose component aij denotes the units of component i required to make one unit of product j. Unmet demands are assumed to be lost. Let ci and hi represent the ordering cost and holding cost of component i per unit respectively, and bj denote the per unit shortage cost of product j. We use c, h, b to denote the vectors (c1 , ..., cm )T , (h1 , ..., hm )T , (b1 , ..., bn )T respectively. The one-period discount factor is α ∈ (0, 1]. The objective of the firm is to minimize the total expected discounted cost. Let ft (x) be the cost-to-go function with initial inventory levels x at the beginning of period t. We omit the subscript t for notational brevity when no ambiguity occurs. The optimality equation is ft (x) = min{E[cT (y ∧ (x + Ξ) − x)] + E[gt (y ∧ (x + Ξ)|D)]}, y≥x (19) where gt (z |d) = min {L(z, u|d) + αft+1 (z − Au)}, (20) u:(z,u)∈U (d) and L(z, u|d) = hT (z − Au) + bT (d − u). (21) The boundary condition is assumed to be fT +1 (x) = 0 without loss of generality. The first term in the objective function of (19) is the ordering cost. Similar to the dual sourcing model, we assume that the ordering cost is proportional to the quantity actually delivered. The feasible set in (20) 22 is given by U (d) = {(z, u)|Au ≤ z, 0 ≤ uj ≤ dj , j = 1, 2, ..., n}, where z is the on-hand inventory level after the inventory ordered in the current period arrives, and u is the vector of assembled-product quantities. The inventory holding and shortage costs are given in L(z, u|d). Due to the complexity of general ATO systems, some important special systems are studied in the literature, one of which is a generalized M -system (see Nadar et al. 2014). A generalized M -system has m components and m + 1 products, where each product i requires a single unit of component i for i ≤ m and product m + 1 consumes one unit of each component. This ATO system reduces to an M -system when m = 2. 
The bill of materials matrix has the following form:

    A = [ 1  0  0  ···  0  1
          0  1  0  ···  0  1
          0  0  1  ···  0  1
          ⋮  ⋮  ⋮  ⋱  ⋮  ⋮
          0  0  0  ···  1  1 ].                                                  (22)

We summarize the structural results of this section in the following theorem.

Theorem 6. (a) For a general ATO system, the optimal cost function ft(x) is convex in x for all t.
(b) For a generalized M-system, the optimal cost function ft(x) is L\ -convex in x for all t. The optimal order-up-to level yt(x) is increasing in x with limited sensitivity. That is, for any ω > 0, yt(x) ≤ yt(x + ωe) ≤ yt(x) + ωe. (When there are multiple optimal solutions, we assume it is the greatest one.)

Proof. (a) We prove by induction. Assume that ft+1 is convex. It is easy to see that gt(z|d) in (20) is convex in z for any demand realization d since the objective function is jointly convex in (z, u) and the constraints form a convex set. Define Gt(z) ≜ cT z + E[gt(z|D)], which is convex in z. Then ft(x) = min_{y≥x} E[Gt(y ∧ (x + Ξ))] − cT x. The constraint set is A = {(x, y)|y ≥ x}. By definition, AΞ = {(x, y ∧ (x + ξ))|y ≥ x, ξ ∈ Supp(Ξ)}. This is equivalent to the set {(x, w)|xi ≤ wi ≤ xi + ξ̄i, ∀i = 1, ..., m}, which is convex. In addition, Assumption 2 is satisfied since A is of the form given in Lemma 3. Therefore, following Theorem 3 we know that ft(x) is convex in x.

(b) For j = 1, ..., m, define ûj = zj − uj. Let ûm+1 = um+1 and let

    Â = [ −1   0   0  ···   0   1
           0  −1   0  ···   0   1
           0   0  −1  ···   0   1
           ⋮   ⋮   ⋮  ⋱   ⋮   ⋮
           0   0   0  ···  −1   1 ].                                             (23)

Then gt(z|d) can be written as

    gt(z|d) = min_{û:(z,û)∈Û(d)} {L̂(z, û|d) + αft+1(û1 − ûm+1, ..., ûm − ûm+1)},          (24)

where

    L̂(z, û|d) = Σ_{i=1}^m hi(ûi − ûm+1) + Σ_{j=1}^m bj(dj − zj + ûj) + bm+1(dm+1 − ûm+1),   (25)

and

    Û(d) = {(z, û)|Âû ≤ 0, 0 ≤ zj − ûj ≤ dj, j = 1, 2, ..., m, 0 ≤ ûm+1 ≤ dm+1}.             (26)

We then prove by induction. Clearly fT+1(x) is L\ -convex. If ft+1 is L\ -convex, then the objective function of (24) is also L\ -convex in (z, û) due to Proposition 1 (a) and (d). The constraint set Û(d) forms an L\ -convex set by Proposition 1 (g). Therefore, gt(z|d) is L\ -convex in z for any d according to Proposition 1 (e). Similar to part (a), we have AΞ = {(x, w)|xi ≤ wi ≤ xi + ξ̄i, ∀i = 1, ..., m}, which is L\ -convex following Proposition 1 (g). One can easily check that A is of the form given in Lemma 3. Therefore, applying Theorems 3 and 4, we know that ft(x) is also L\ -convex and the sensitivity results hold. Q.E.D.

Theorem 6 summarizes the sensitivity results for the stage when components are ordered. For a generalized M-system, the order-up-to level yi of any component i increases in its own inventory level xi as well as in the inventory level of any other component j ≠ i. The limited sensitivity implies that for any component i the ordering quantity yi − xi decreases in the inventory level of any component. One can easily check that the following sensitivity results hold during the stage when products are assembled. For a generalized M-system, the quantity of product m + 1 increases in the quantity of each component, while the quantity of product j (≠ m + 1) increases in the quantity of component j but decreases in the quantity of component k (≠ j).

5. Capacity Allocation in Network Revenue Management

We consider a network system consisting of m resources (airline seats in different legs), indexed by i ∈ {1, ..., m}, with initial capacity levels C = (C1, ..., Cm)T, and n products (itinerary-class combinations), indexed by j ∈ {1, ..., n}.
The corresponding prices, denoted by p = (p1 , ..., pn )T , are exogenously given. Each product needs at most one unit of each resource. Let A = (aij ) be the resource coefficient matrix, where aij = 1 if product j uses one unit of resource i and aij = 0 otherwise. Define dt = (dt,1 , ..., dt,n )T where dt,j is demand of product j in period t. Assume that the demands of different products are independent and the demands are independent over time. The objective of the firm is to decide the booking limits for all demand classes dynamically so as to maximize the total expected profit over the planning horizon. As mentioned in Section 1, the model we consider here is MSSP in Chen and Homem-de-Mello (2010) with continuous relaxations. Chen and Homem-de-Mello (2010) point out that the major difficulty of the above model is that it is not a concave maximization problem, since the decisions are truncated by random demands. Therefore, they re-solve a sequence of two-stage stochastic programs for approximation. Interestingly, as we show in this section, our transformation technique can overcome this difficulty and allows us to preserve concavity in the dynamic programming 24 recursions. Under certain network structure, we further demonstrate that L\ -concavity can be preserved and use it to derive monotone properties of the optimal booking limits. Note that the model considered here is different from the one in section 3.2.1 of Talluri and van Ryzin (2005). Their model assumes there is at most one demand request in any period. We do not impose this assumption. Since in our model each time period corresponds to the time when the firm needs to revise its capacity allocation policy, it may not be practical to divide the planning horizon so much so that there is at most one demand in any period due to the increased computational complexity. In the following, we omit the subscript t for notational brevity when no ambiguity occurs. The state variable is denoted by the vector x = (x1 , ..., xm )T in which xi is the capacity level of the resource i in the current period. At the beginning of the planning horizon, we have x = C. In each period, the firm observes the current capacity level x and decides the booking limits for different demand classes. The decision variable is denoted by vector u = (u1 , ..., un )T where uj is the booking limit for class j demand in the current period. The action space can be defined as A = {(x, u)|Au ≤ x, u ≥ 0}. Let ft (x) be the optimal value. The optimality equations can be expressed as ft (x) = max E pT (u ∧ d) + ft+1 (x − A(u ∧ D)) , t = 1, ..., T, (27) u:(x,u)∈A where fT +1 (x) = 0. For ξ ∈ F+m , define the function gt : F+m+1 → < such that gt (x, ξ) = pT ξ + ft+1 (x − Aξ). Then the optimality equation can be expressed as ft (x) = max E [gt (x, u ∧ D)] , t = 1, ..., T. (28) u:(x,u)∈A We also consider a special case where the resource coefficient matrix has the same format as the bill of materials matrix in the assemble-to-order generalized M -system, i.e., the resource coefficient matrix is given by (22). When the number of resources m = 2, one can relate this type of resource coefficient matrix to the following setting. There are two legs in the network: A to B and B to C. There are three types of consumers. Type one consumers travel from A to B, type two consumers travel from B to C, and type three consumers travel from A to C with a transition at B. We summarize the structural results in the following theorem. Theorem 7. 
(a) For the network revenue management problem (27), the optimal value function ft(x) is concave in x for all t.
(b) If, in addition, the resource coefficient matrix is given by (22), then for all t, ft(x) is L\ -concave. The optimal booking limit u∗m+1(x) is increasing in x with limited sensitivity, i.e., for any ω > 0, u∗m+1(x) ≤ u∗m+1(x + ωe) ≤ u∗m+1(x) + ω. For j = 1, ..., m, u∗j(x) is increasing in xj and decreasing in xk, k ≠ j, with limited sensitivity, i.e., u∗j(x) ≤ u∗j(x + ωej) ≤ u∗j(x) + ω and u∗j(x) − ω ≤ u∗j(x + ωek) ≤ u∗j(x) for any ω > 0, k ≠ j. (When there are multiple optimal solutions, we choose the one such that (−u∗1(x), ..., −u∗m(x), u∗m+1(x)) is the greatest.)

Proof. (a) We prove by induction. Assume ft+1 is concave. In the objective function of (28), gt(·, ·) is clearly concave. Since A = {(x, u)|Au ≤ x, u ≥ 0}, AΞ = {(x, u ∧ d)|(x, u) ∈ A, d ∈ Supp(D)}, which is equivalent to the convex set {(x, w) : Aw ≤ x, 0 ≤ wj ≤ d̄j, ∀j = 1, ..., n}. In addition, Assumption 2 is satisfied since A is of the form given in Lemma 3. Then it follows from Theorem 3 that ft(x) is concave.

(b) For j = 1, ..., m, define ûj = xj − uj and ûm+1 = um+1, and let the matrix Â be given in (23). The optimality equations can be rewritten as

    ft(x) = max_{û:(x,û)∈Â} E[ pm+1(ûm+1 ∧ dm+1) + Σ_{j=1}^m pj xj − Σ_{j=1}^m pj(ûj ∨ (xj − dj)) + ft+1(x̃) ],

where Â = {(x, û)|û ≥ 0, Âû ≤ 0, xj − ûj ≥ 0, j = 1, ..., m} and x̃ = [û1 ∨ (x1 − d1), ..., ûm ∨ (xm − dm)] − (ûm+1 ∧ dm+1)e. Define

    ht(x, ξ) = pm+1 ξm+1 + Σ_{j=1}^m pj xj − Σ_{j=1}^m pj ξj + ft+1((ξ1, ..., ξm) − ξm+1 e).

Then for t = 1, ..., T,

    ft(x) = max_{û:(x,û)∈Â} E[ht(x, û1 ∨ (x1 − d1), ..., ûm ∨ (xm − dm), ûm+1 ∧ dm+1)].

Clearly fT+1(x) is L\ -concave. If ft+1(x) is L\ -concave, then ht(x, ξ) is also L\ -concave by Proposition 1 (a) and (d). We have AΞ = {(x, û1 ∨ (x1 − d1), ..., ûm ∨ (xm − dm), ûm+1 ∧ dm+1)|(x, û) ∈ Â, d ∈ Supp(D)}. Notice that AΞ is equivalent to the set {(x, w)|xj − d̄j ≤ wj ≤ xj, wm+1 ≤ wj, wj ≥ 0, j = 1, ..., m, 0 ≤ wm+1 ≤ d̄m+1}, where d̄j = ess sup{dj |d ∈ Supp(D)}. It follows from Proposition 1 (g) that AΞ is L\ -convex. One can easily check that A is of the form in Lemma 3. Therefore, Theorem 3 can be applied to show that the L\ -concavity of ft(x) is preserved. It follows from Theorem 4 that there exists a greatest solution û∗(x) such that û∗j(x) is increasing in x for all j with limited sensitivity, which implies that u∗m+1(x) = û∗m+1(x) is increasing in x with limited sensitivity, while u∗j(x) = xj − û∗j(x) is increasing in xj and decreasing in xk, k ≠ j, with limited sensitivity. Q.E.D.

The sensitivity result from Theorem 7 implies that if the current capacity level of any resource i increases by ω, then the allocated capacities of products i and m + 1 should also increase, but the allocated capacity of product j, j ≠ i, j ≠ m + 1, will decrease. All the above changes are bounded by ω because of the limited sensitivity.

Remark 5. Even though the resource coefficient matrix here is the same as the bill of materials matrix in Section 4, the analyses of the two models have a significant difference. For the ATO model, the decision variable is truncated by random capacity and the bill of materials matrix does not enter the constraints when we apply the transformation technique. However, for the revenue management model the decision variable is truncated by random demand and the resource coefficient matrix affects the constraints when applying the transformation.
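As an illustration of recursion (27), the following sketch computes booking limits by enumeration and simulation on the two-leg network described above (legs A to B and B to C, with two local products and one through product, i.e., the matrix (22) with m = 2). The fares, capacities, horizon, sample size, and demand distribution are hypothetical and chosen only to keep the example small; this illustrates how the truncation u ∧ D enters the recursion and is not the solution approach proposed here.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Resource coefficient matrix (22) with m = 2: two legs, three products.
A = np.array([[1, 0, 1],
              [0, 1, 1]])
p = np.array([100.0, 120.0, 180.0])   # hypothetical fares
C = np.array([3, 3])                  # initial leg capacities
T = 3                                 # number of booking periods

def solve(x, f_next, n_samples=200):
    """One step of (27): maximize E[p^T (u ^ D) + f_{t+1}(x - A(u ^ D))]
    over booking limits u >= 0 with Au <= x."""
    demands = rng.integers(0, 3, size=(n_samples, 3))   # hypothetical product demands
    best_val, best_u = -np.inf, None
    for u in itertools.product(range(C[0] + 1), range(C[1] + 1), range(int(min(C)) + 1)):
        u = np.array(u)
        if np.any(A @ u > x):                 # booking limits must respect remaining capacity
            continue
        sales = np.minimum(u, demands)        # sales are truncated by random demand
        cont = np.array([f_next[tuple(x - A @ s)] for s in sales])
        value = (sales @ p + cont).mean()
        if value > best_val:
            best_val, best_u = value, u
    return best_val, best_u

# Backward induction over the small capacity grid.
states = [(i, j) for i in range(C[0] + 1) for j in range(C[1] + 1)]
f = {T + 1: {s: 0.0 for s in states}}
policy = {}
for t in range(T, 0, -1):
    f[t], policy[t] = {}, {}
    for s in states:
        f[t][s], policy[t][s] = solve(np.array(s), f[t + 1])
print("f_1(C) ~", round(f[1][tuple(C)], 1), " booking limits at C:", policy[1][tuple(C)])
```

On such a small instance one can also tabulate the computed booking limits over the capacity grid and compare their direction of change with the monotonicity and limited sensitivity stated in Theorem 7.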
6. Conclusion

In this paper, we develop a transformation technique for a class of stochastic optimization problems. This transformation technique allows us to convert a non-convex minimization problem to an equivalent convex minimization problem, and to prove the preservation of some desirable structural properties (e.g., convexity, submodularity and L\ -convexity). We apply these results to several important applications: dual sourcing with random supply capacity, ATO systems with random supply capacity, and network revenue management.

Our transformation technique is not limited to the aforementioned applications. For instance, Chen, Gao and Hu (2015) applied our results, together with a preservation property of concavity and supermodularity under a non-lattice constraint structure developed in Chen, Hu and He (2013), to provide a significantly simplified analysis of the two-facility joint inventory and transshipment problem with uncertain capacities analyzed in Hu et al. (2008). Recently, Demirel et al. (2015) analyzed a calibrate-to-order system where a firm produces two products on dedicated production lines that are then calibrated according to the specifications of customer orders on a shared resource. Both the dedicated production lines and the shared resource face random capacities. It can be readily shown that our analysis also applies to their model when the uncertain capacities are independent of each other and over time, and can significantly simplify the analysis. We believe our transformation technique can find many more applications in inventory control, revenue management and beyond.

It is notable that our transformation technique requires the assumption that the random components are independently distributed. It is likely that new approaches are needed to extend the analysis to cases with correlated random components. Another future research direction is to design efficient algorithms for the applications considered here, employing the properties of convexity or L\ -convexity enabled by our transformation technique.

Acknowledgments

The research of the first author is partly supported by National Science Foundation (NSF) Grants CMMI-1030923, CMMI-1363261, CMMI-1538451, CMMI-1635160 and National Science Foundation of China (NSFC) Grants 71520107001 and 71228203. The research of the third author is partly supported by NSFC Grants 71390335 and 71528003. The authors would like to thank George Shanthikumar, Qi Feng and Xiting Gong for insightful discussions, and Chung-Piaw Teo, the AE and three anonymous referees for helpful comments and suggestions.

Appendix A: Proofs

Proof of Proposition 1. Parts (a)-(c) are from Murota (2003). The proofs follow directly from the definition.

(d) We need to show that g[(v, λ) − ξ(e, 1)] is submodular. Notice that g[(v, λ) − ξ(e, 1)] = g(v − ξe, λ − ξ) = f((v − ξe) − (λ − ξ)e) = f(v − λe), which is submodular.

(e) We assume without loss of generality that A = ℜn × Y; otherwise we can focus on the restriction of f on A and let f be infinity outside of A. We know that r(x, y, ξ) = f[(x, y) − ξe] is submodular, and we want to show that g(x − ξe) is submodular in (x, ξ). We have g(x − ξe) = inf_{y∈Y} f(x − ξe, y) = inf_{y∈Y} f[(x, y + ξe) − ξe] = inf_{y∈Y} r(x, y + ξe, ξ) = inf_{z:z−ξe∈Y} r(x, z, ξ). Since {(z, ξ) : z − ξe ∈ Y} is a lattice and r(x, z, ξ) is submodular, it follows from Theorem 2.7.6 of Topkis (1998) that g(x − ξe) is submodular in (x, ξ).

(f) It follows from Theorem 2.8.2 of Topkis (1998) that arg min_{y:(x,y)∈A} f(x, y) is increasing in x.
For any ω > 0 and any x ∈ dom(g), define ωẽ + arg min_{y:(x,y)∈A} f(x, y) as the set {u + ωẽ : u ∈ arg min_{y:(x,y)∈A} f(x, y)}. Pick y′ in arg min_{y:(x,y)∈A} f(x, y) and y″ in arg min_{y:(x+ωe,y)∈A} f(x + ωe, y). Then for any ω > 0 such that (x + ωe, y′ + ωẽ) ∈ A and (x, y″ − ωẽ) ∈ A we have (x + ωe, y″ ∧ (y′ + ωẽ)) = (x + ωe, y″) ∧ (x + ωe, y′ + ωẽ) ∈ A, (x, (y″ − ωẽ) ∨ y′) = (x, y″ − ωẽ) ∨ (x, y′) ∈ A and

    0 ≥ f(x + ωe, y″) − f(x + ωe, y″ ∧ (y′ + ωẽ))
      = f[(x, y″ − ωẽ) + ω(e, ẽ)] − f[(x, (y″ − ωẽ) ∧ y′) + ω(e, ẽ)]
      ≥ f(x, y″ − ωẽ) − f(x, (y″ − ωẽ) ∧ y′)
      ≥ f(x, (y″ − ωẽ) ∨ y′) − f(x, y′) ≥ 0,

where the first and the last inequalities are due to the optimality of y″ and y′ for x + ωe and x respectively, the second inequality is due to the L\ -convexity of f, which implies that f(x − ωe, y − ωẽ) is submodular in (x, y, ω), and the third inequality is due to the submodularity of f(x, y) in y. The first and the last inequalities then imply that equality holds throughout the above chain, and so y″ ∧ (y′ + ωẽ) ∈ arg min_{y:(x+ωe,y)∈A} f(x + ωe, y) and (y″ − ωẽ) ∨ y′ ∈ arg min_{y:(x,y)∈A} f(x, y), which then implies that y″ ∨ (y′ + ωẽ) ∈ ωẽ + arg min_{y:(x,y)∈A} f(x, y). Therefore,

    arg min_{y:(x+ωe,y)∈A} f(x + ωe, y) ⊑ ωẽ + arg min_{y:(x,y)∈A} f(x, y).

(g) Let A = {x ∈ Y : l ≤ x ≤ u, xi − xj ≤ vij, ∀i ≠ j}. For any x, x′ ∈ A and λ ∈ ℜ+, we only need to show that (x + λe) ∧ x′, x ∨ (x′ − λe) ∈ A. First, we have (x + λe) ∧ x′ ≤ x′ ≤ u, and l ≤ x ∧ x′ ≤ (x + λe) ∧ x′. For any i ≠ j, if x′i ≤ xi + λ and x′j ≤ xj + λ, then (xi + λ) ∧ x′i − (xj + λ) ∧ x′j = x′i − x′j ≤ vij. If x′i ≥ xi + λ and x′j ≥ xj + λ, then (xi + λ) ∧ x′i − (xj + λ) ∧ x′j = xi − xj ≤ vij. If x′i ≤ xi + λ and xj + λ ≤ x′j, then (xi + λ) ∧ x′i − (xj + λ) ∧ x′j = x′i − (xj + λ) ≤ xi + λ − (xj + λ) ≤ vij. If x′i ≥ xi + λ and xj + λ ≥ x′j, then (xi + λ) ∧ x′i − (xj + λ) ∧ x′j = (xi + λ) − x′j ≤ x′i − x′j ≤ vij. Thus we have (x + λe) ∧ x′ ∈ A. Similarly, we can show that x ∨ (x′ − λe) ∈ A.

Proof of Theorem 4. In the following, we provide the proof for part (b). Since part (a) can be proved using almost the same arguments (as L\ -convexity includes submodularity), its proof is omitted for brevity.

Let Ṽ(x, z) denote the constraint set of the transformed problem (12). Define the projection of the solution set of the transformed problem, S∗(x, z) = arg min_{(v(ξ),ξ∈X)∈Ṽ(x,z)} E[f(x, v(Ξ))], on the constraint set U(x, z) as

    ΠU S∗(x, z) = {u ∈ U(x, z)|(u♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z)}.

By Proposition 1 we know that S∗(x, z) is increasing in (x, z) and satisfies the monotone sensitivity property with respect to (x, z) as follows: S∗((x, z) + ωe) ⊑ ωe + S∗(x, z).

We argue that ΠU S∗(x, z) is the solution set of the original problem for any given (x, z), i.e., ΠU S∗(x, z) = U∗(x, z). In fact, if u∗ is an optimal solution to the original problem, then (u∗♦k(z + ξ), ξ ∈ X) is a minimizer of the transformed problem, i.e., (u∗♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z). On the other hand, if u ∈ ΠU S∗(x, z), then (v(ξ)|v(ξ) = u♦k(z + ξ), ξ ∈ X) is an optimal solution of the transformed problem due to the definition of ΠU S∗. Since E[f(x, v(Ξ))] = E[f(x, u♦k(z + Ξ))] = τ∗, u is optimal for the original problem. Therefore, our argument is true, which implies that we only need to show that ΠU S∗(x, z) is increasing in (x, z) and ΠU S∗((x, z) + ωe) ⊑ ωe + ΠU S∗(x, z). We first show that

    ΠU S∗((x, z) + ωe) ⊑ ωe + ΠU S∗(x, z).                                        (29)
Pick any u′ in ΠU S∗(x, z) and any u″ in ΠU S∗((x, z) + ωe), respectively. We have (u′♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z) and (u″♦k(z + ωe + ξ), ξ ∈ X) ∈ S∗((x, z) + ωe). It suffices to show that u″ ∧ (u′ + ωe) ∈ ΠU S∗((x, z) + ωe) and (u″ − ωe) ∨ u′ ∈ ΠU S∗(x, z).

Since S∗((x, z) + ωe) ⊑ ωe + S∗(x, z), we have (u″♦k(z + ωe + ξ), ξ ∈ X) ∧ (u′♦k(z + ξ) + ωe, ξ ∈ X) ∈ S∗((x, z) + ωe). Hence,

    (u″♦k(z + ωe + ξ), ξ ∈ X) ∧ (u′♦k(z + ξ) + ωe, ξ ∈ X)
      = ((u″♦k(z + ωe + ξ)) ∧ ((u′ + ωe)♦k(z + ωe + ξ)), ξ ∈ X)
      = ((u″ ∧ (u′ + ωe))♦k(z + ωe + ξ), ξ ∈ X) ∈ S∗((x, z) + ωe).

Since A is an L\ -convex set, (u′, x, z) ∈ A, and (u″, (x, z) + ωe) ∈ A, we have (u″, (x, z) + ωe) ∧ ((u′, x, z) + ωe) = (u″ ∧ (u′ + ωe), (x, z) + ωe) ∈ A. Here we use the following property of an L\ -convex set (page 128 of Murota 2003): if A is an L\ -convex set, then for any p, q ∈ A, we have (p − ωe) ∨ q, p ∧ (q + ωe) ∈ A for all ω ≥ 0. Hence, u″ ∧ (u′ + ωe) ∈ U((x, z) + ωe). Together with ((u″ ∧ (u′ + ωe))♦k(z + ωe + ξ), ξ ∈ X) ∈ S∗((x, z) + ωe), we obtain u″ ∧ (u′ + ωe) ∈ ΠU S∗((x, z) + ωe).

Similarly, since S∗((x, z) + ωe) ⊑ ωe + S∗(x, z), we have ((u″ ∨ (u′ + ωe))♦k(z + ωe + ξ), ξ ∈ X) ∈ S∗(x, z) + ωe. Hence, (((u″ − ωe) ∨ u′)♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z). Since A is an L\ -convex set, (u′, x, z) ∈ A, and (u″, (x, z) + ωe) ∈ A, we have ((u″, (x, z) + ωe) − ωe) ∨ (u′, x, z) = ((u″ − ωe) ∨ u′, x, z) ∈ A. Hence, (u″ − ωe) ∨ u′ ∈ U(x, z). Therefore, (u″ − ωe) ∨ u′ ∈ ΠU S∗(x, z). This completes the proof of inequality (29).

In the following we show that for any i and ω > 0, we have ΠU S∗(x, z) ⊑ ΠU S∗((x, z) + ωei). It suffices to show that u′ ∧ u″ ∈ ΠU S∗(x, z) and u′ ∨ u″ ∈ ΠU S∗((x, z) + ωei) for any u′ ∈ ΠU S∗(x, z) and u″ ∈ ΠU S∗((x, z) + ωei).

If the increment ω is associated with a component of x, then we have S∗(x, z) ⊑ S∗(x + ωei, z). Hence, (u′♦k(z + ξ), ξ ∈ X) ∧ (u″♦k(z + ξ), ξ ∈ X) = ((u′ ∧ u″)♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z), and (u′♦k(z + ξ), ξ ∈ X) ∨ (u″♦k(z + ξ), ξ ∈ X) = ((u′ ∨ u″)♦k(z + ξ), ξ ∈ X) ∈ S∗(x + ωei, z). Since A is a lattice, (u′, x, z) ∈ A, and (u″, x + ωei, z) ∈ A, we have (u′ ∧ u″, x, z) ∈ A and (u′ ∨ u″, x + ωei, z) ∈ A. Hence, u′ ∧ u″ ∈ U(x, z) and u′ ∨ u″ ∈ U(x + ωei, z). Therefore, u′ ∧ u″ ∈ ΠU S∗(x, z) and u′ ∨ u″ ∈ ΠU S∗(x + ωei, z).

If the increment ω is associated with a component of z, we first show u′ ∧ u″ ∈ ΠU S∗(x, z). Applying the previous arguments, we have u′ ∧ u″ ∈ U(x, z). We only need to show that ((u′ ∧ u″)♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z). Since S∗(x, z) ⊑ S∗(x, z + ωei), we have (u′♦k(z + ξ), ξ ∈ X) ∧ (u″♦k(z + ωei + ξ), ξ ∈ X) ∈ S∗(x, z). Notice that if u′i ≤ u″i or i ≤ k (corresponding to the ∧ operation), we have (u′i ◦ (zi + ξi), ξ ∈ X) ∧ (u″i ◦ (zi + ω + ξi), ξ ∈ X) = ((u′i ∧ u″i) ◦ (zi + ξi), ξ ∈ X), where ◦ denotes the ∧ or ∨ operation that ♦k applies to component i. Therefore, it remains to consider the case where u′i > u″i and i > k (corresponding to the ∨ operation). Define ṽ(ξ) ≜ (u′♦k(z + ξ)) ∧ (u″♦k(z + ωei + ξ)). We have

    ṽi(ξi) = (u′i ∨ (zi + ξi)) ∧ (u″i ∨ (zi + ω + ξi))
           = zi + ξi,        if ξi ≥ u′i − zi,
             u′i,            if u′i − zi − ω ≤ ξi < u′i − zi,
             zi + ω + ξi,    if u″i − zi − ω ≤ ξi < u′i − zi − ω,
             u″i,            if ξi < u″i − zi − ω.

We use ṽ−i(ξ−i) to denote (ṽ1(ξ1), ..., ṽi−1(ξi−1), ṽi+1(ξi+1), ..., ṽn(ξn)).
Let π∗ denote the optimal objective value of the transformed problem with parameters (x, z). Similar to the arguments in Theorem 2, let f̂(x, z, v) = f(x, v) + δ_{V(x,z)}(v), where V(x, z) denotes the constraint set {v(ξ)|(x, z, v(ξ)) ∈ AΞ, ξ ∈ X}. Following the proof of Theorem 1, define ĝ(·) = E[f̂(x, z, ṽ−i(Ξ−i), ·)]. Since the function f is componentwise convex, lower semi-continuous with f(u) → +∞ for |u| → ∞, and the constraint set is componentwise convex and closed, we have that ĝ(·) is convex, lower semi-continuous, and ĝ(u) → +∞ for |u| → ∞. Therefore, min_{ui∈ℜ} ĝ(ui) has a greatest and a least minimizer, and given any ũi ∈ arg min_{ui∈ℜ} ĝ(ui), we have

    π∗ = min{E[ĝ(vi(Ξi))]|vi(ξi) ≥ zi + ξi, ∀ξi ∈ Xi} = min_{ui∈ℜ} E[ĝ(ui ∨ (zi + Ξi))] = E[ĝ(ũi ∨ (zi + Ξi))].

We argue that u″i ∈ arg min_{ui∈ℜ} ĝ(ui). Let ūi and u̲i denote the greatest and the least minimizer, respectively. We will show u̲i ≤ u″i ≤ ūi. If, otherwise, ūi < u″i, then when ξi < u′i − zi we have ūi ≤ ūi ∨ (zi + ξi) < ṽi(ξi) and thus ĝ(ūi ∨ (zi + ξi)) < ĝ(ṽi(ξi)). By the assumption we have ξ̲i + zi + ω ≤ u″i < u′i, hence we know Pr(ξi < u′i − zi) > 0. When ξi ≥ u′i − zi, we have ĝ(ūi ∨ (zi + ξi)) = ĝ(zi + ξi) = ĝ(ṽi(ξi)). If u̲i > u″i, then when ξi < u̲i − zi − ω, we have u̲i ∨ (zi + ξi) > ṽi(ξi) and thus ĝ(u̲i ∨ (zi + ξi)) < ĝ(ṽi(ξi)). Since ξ̲i + ω + zi ≤ u″i < u̲i, we have Pr(ξi < u̲i − zi − ω) > 0. In the following we show that when ξi ≥ u̲i − zi − ω, we have ĝ(u̲i ∨ (zi + ξi)) ≤ ĝ(ṽi(ξi)). When u̲i − zi − ω ≤ ξi < u̲i − zi, we have ĝ(u̲i ∨ (zi + ξi)) = ĝ(u̲i) ≤ ĝ(ṽi(ξi)); when ξi ≥ u̲i − zi, we have ĝ(u̲i ∨ (zi + ξi)) = ĝ(zi + ξi) ≤ ĝ((u′i ∨ (zi + ξi)) ∧ (zi + ω + ξi)) = ĝ(ṽi(ξi)). Therefore, for ūi < u″i or u̲i > u″i, we have π∗ < E[ĝ(ṽi(Ξi))], which contradicts the hypothesis that (ṽ(ξ), ξ ∈ X) is an optimal solution to the transformed problem. Hence, π∗ = E[ĝ(u″i ∨ (zi + Ξi))] and ((u′i ∧ u″i) ∨ (zi + ξi), ṽ−i(ξ−i), ξ ∈ X) ∈ S∗(x, z). Since for any j ≠ i, ṽj(ξj) = (u′j ∧ u″j) ◦ (zj + ξj) for all ξ ∈ X, we have ((u′ ∧ u″)♦k(z + ξ), ξ ∈ X) ∈ S∗(x, z). A similar argument shows that u′ ∨ u″ ∈ ΠU S∗(x, z + ωei), i.e., ((u′ ∨ u″)♦k(z + ωei + ξ), ξ ∈ X) ∈ S∗(x, z + ωei). We omit the details for brevity.

In the following we show that the optimal solution set U∗(x, z) is a lattice, and that it has a greatest element and a least element. Given fixed (x, z), let h(u) = E[f(x, u♦k(z + Ξ))]. For any realization of Ξ, denoted by ξ, and any u′ and u″, we have

    f(x, u′♦k(z + ξ)) + f(x, u″♦k(z + ξ))
      = f(x, u′1 ∧ (z1 + ξ1), ..., u′n ∨ (zn + ξn)) + f(x, u″1 ∧ (z1 + ξ1), ..., u″n ∨ (zn + ξn))
      ≥ f(x, (u′1 ∧ (z1 + ξ1)) ∧ (u″1 ∧ (z1 + ξ1)), ..., (u′n ∨ (zn + ξn)) ∧ (u″n ∨ (zn + ξn)))
        + f(x, (u′1 ∧ (z1 + ξ1)) ∨ (u″1 ∧ (z1 + ξ1)), ..., (u′n ∨ (zn + ξn)) ∨ (u″n ∨ (zn + ξn)))
      = f(x, (u′1 ∧ u″1) ∧ (z1 + ξ1), ..., (u′n ∧ u″n) ∨ (zn + ξn)) + f(x, (u′1 ∨ u″1) ∧ (z1 + ξ1), ..., (u′n ∨ u″n) ∨ (zn + ξn)).

The inequality is due to the submodularity of f. Then h(u′) + h(u″) ≥ h(u′ ∧ u″) + h(u′ ∨ u″). Since the objective function of (11) is submodular in u and the constraint set is a sublattice of ℜn, by Theorem 2.7.1 of Topkis (1998) the set of optimal solutions of problem (11) is a sublattice of ℜn.
Since uj ≤ zj + ξ̄j for j = 1, ..., k and uj ≥ zj + ξ̲j for j = k + 1, ..., n, the function f satisfies f(x) → +∞ for |x| → ∞, and the constraint set is closed, it is equivalent to restricting the constraint set to a compact set. It then follows from Corollary 2.3.2 of Topkis (1998) that there exist a greatest element and a least element in the solution set. Q.E.D.

Appendix B: A Comparison to the Stochastic Linearity Approach

Feng and Shanthikumar (2014) (hereafter referred to as FS) also consider optimization problems with objective function E[f(u ∧ Ξ)] and convert them to convex minimization problems using stochastic linearity in mid-point (SL(mp)). Specifically, given a stochastic function Y(u) ≜ ψ(u, Ξ), let µ(u) = E[ψ(u, Ξ)], and let u(µ) be the inverse of µ(u), i.e., u(µ) = inf{u|E[ψ(u, Ξ)] ≥ µ}. Then g(µ) ≜ E[f(Y(u(µ)))] is convex in µ as long as Y(u(µ)) is SL(mp). FS prove that, along with several other supply functions, if ψ(u, Ξ) = u ∧ Ξ, then Y(u(µ)) is SL(mp). This allows them to convert non-convex minimization problems to equivalent convex minimization problems by a variable transformation.

Our transformation technique can preserve convexity as well as submodularity and L\ -convexity. FS do not mention whether their approach can preserve submodularity or L\ -convexity. It turns out that their approach can preserve submodularity of the objective function, but cannot preserve L\ -convexity. These are shown in the following Proposition 2 and Example 2, respectively.

Proposition 2. Suppose that f : ℜn → ℜ̄ is a submodular function, and Ξ is a random vector with support X ⊆ ℜn whose components Ξi are independent of each other. Let µ(u) = E[u ∧ Ξ] and let u(µ) be the inverse of µ(u). Then (a) h(u) = E[f(u ∧ Ξ)] is submodular; (b) g(µ) = E[f(u(µ) ∧ Ξ)] is submodular.

Proof. (a) For any realization ξ, given any u and u′, we have f(u ∧ ξ) + f(u′ ∧ ξ) ≥ f((u ∧ ξ) ∧ (u′ ∧ ξ)) + f((u ∧ ξ) ∨ (u′ ∧ ξ)) = f((u ∧ u′) ∧ ξ) + f((u ∨ u′) ∧ ξ). Taking expectations over ξ yields the submodularity of h. (b) Notice that for any component i = 1, ..., n, ui(µi) is increasing. It follows from Section 9.A.4 of Shaked and Shanthikumar (2006) and part (a) that g(µ) is also submodular. Q.E.D.

Example 2. Consider E[f(u1 ∧ Ξ1, u2 ∧ Ξ2)], where f(u1, u2) = e^{u1−u2} is an L\ -convex function. Suppose that both Ξ1 and Ξ2 follow the exponential distribution with mean 1, and they are independent of each other. Then for i = 1, 2,

    µi(ui) = E[ui ∧ Ξi] = ∫_0^{ui} ξi e^{−ξi} dξi + ∫_{ui}^∞ ui e^{−ξi} dξi = 1 − e^{−ui}.

We have ui(µi) = −ln(1 − µi) and

    E[f(u1 ∧ Ξ1, u2 ∧ Ξ2)]
      = ∫_0^{u2} ∫_0^{u1} e^{ξ1−ξ2} e^{−ξ1} e^{−ξ2} dξ1 dξ2 + ∫_0^{u2} ∫_{u1}^∞ e^{u1−ξ2} e^{−ξ1} e^{−ξ2} dξ1 dξ2
        + ∫_{u2}^∞ ∫_0^{u1} e^{ξ1−u2} e^{−ξ1} e^{−ξ2} dξ1 dξ2 + ∫_{u2}^∞ ∫_{u1}^∞ e^{u1−u2} e^{−ξ1} e^{−ξ2} dξ1 dξ2
      = (1/2)(1 + u1)(1 + e^{−2u2})
      = (1/2)(1 − ln(1 − µ1))(1 + (1 − µ2)^2).

Define g(µ1, µ2) = (1/2)(1 − ln(1 − µ1))(1 + (1 − µ2)^2). Let µ = [0.7, 0.2], µ′ = [0.8, 0.4], and α = 0.1. We have g(µ) + g(µ′) ≈ 3.5817 while g((µ + αe) ∧ µ′) + g(µ ∨ (µ′ − αe)) ≈ 3.5860. Therefore, g(µ) + g(µ′) < g((µ + αe) ∧ µ′) + g(µ ∨ (µ′ − αe)), which means that g(µ1, µ2) is not L\ -convex.

Notice that the approach of FS requires computing the inverse of µ(u), which may not have a closed-form solution. If we consider a constrained optimization problem, even if all the constraints in the original problem are linear, the approach of FS will very likely add non-linear constraints explicitly. However, our transformation technique only adds linear constraints, though potentially an infinite number of them.
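The closed form derived in Example 2 and the reported violation of L\ -convexity can be verified numerically. The following short script is a sketch of such a check; the Monte Carlo sample size and seed are arbitrary.

```python
import numpy as np

def g(mu):
    # closed form of E[f(u1(mu1) ^ Xi1, u2(mu2) ^ Xi2)] derived in Example 2
    mu1, mu2 = mu
    return 0.5 * (1.0 - np.log(1.0 - mu1)) * (1.0 + (1.0 - mu2) ** 2)

def g_simulated(mu, n=1_000_000, seed=0):
    # direct simulation of E[exp((u1 ^ Xi1) - (u2 ^ Xi2))] with Xi_i ~ Exp(1)
    rng = np.random.default_rng(seed)
    u = -np.log(1.0 - np.asarray(mu))          # inverse of mu_i(u_i) = 1 - exp(-u_i)
    xi = rng.exponential(1.0, size=(n, 2))
    return np.exp(np.minimum(u, xi) @ np.array([1.0, -1.0])).mean()

mu, mu_p, alpha = np.array([0.7, 0.2]), np.array([0.8, 0.4]), 0.1
lhs = g(mu) + g(mu_p)
rhs = g(np.minimum(mu + alpha, mu_p)) + g(np.maximum(mu, mu_p - alpha))
print(round(lhs, 4), round(rhs, 4), lhs < rhs)     # 3.5817 < 3.5860, so g is not L-natural-convex
print(round(g(mu), 4), round(g_simulated(mu), 4))  # closed form vs. simulation
```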
More importantly, under the conditions in Lemma 2, the constraint set can also preserve L\ -convexity with our transformation technique, but this may not hold using the approach in FS. We illustrate this in the following example.

Example 3. Consider inf_{u∈U} E[f(u1 ∧ Ξ1, u2 ∨ Ξ2)], where f(·, ·) is an L\ -convex function and U = {(u1, u2)|u1 − u2 ≤ 1/2, 0 ≤ u1 ≤ 1, 0 ≤ u2 ≤ 1}. Suppose Ξ1 and Ξ2 are both uniformly distributed between 0 and 1, and they are independent of each other. Applying our transformation technique, we have

    inf  E[f(v1(Ξ1), v2(Ξ2))]
    s.t. v1(ξ1) ≤ ξ1  ∀ξ1 ∈ [0, 1],
         v2(ξ2) ≥ ξ2  ∀ξ2 ∈ [0, 1],                                              (30)
         (v1(ξ1), v2(ξ2)) ∈ V  ∀ξ ∈ [0, 1] × [0, 1],

where V = {(v1, v2)|v1 − v2 ≤ 1/2, 0 ≤ v1 ≤ 1, 0 ≤ v2 ≤ 1}. All constraints in the transformed problem are linear, and they form an L\ -convex set.

Next we apply the transformation of FS. We have µ1(u1) = E[u1 ∧ Ξ1] = u1 − u1²/2 and µ2(u2) = E[u2 ∨ Ξ2] = (1 + u2²)/2. Then u1(µ1) = 1 − √(1 − 2µ1) and u2(µ2) = √(2µ2 − 1). Hence, the constraint set after the transformation becomes Ũ = {(µ1, µ2)| √(1 − 2µ1) + √(2µ2 − 1) ≥ 1/2, 0 ≤ µ1 ≤ 1/2, 1/2 ≤ µ2 ≤ 1}, which consists of non-linear constraints. One can also check that Ũ is not an L\ -convex set. To see this, notice that µ = [0.3, 0.5] ∈ Ũ and µ′ = [0.41, 0.51] ∈ Ũ, but µ ∨ (µ′ − αe) ∉ Ũ with α = 0.01.

References

Barankin EW (1961) A delivery-lag inventory model with an emergency provision (the one-period case). Naval Res. Logist. Quart. 8(3): 258-311. Bollapragada R, Rao US, Zhang J (2004) Managing inventory and supply performance in assembly systems with random capacity and demand. Management Sci. 50(12): 1729-1743. Brumelle SL, Mcgill JI (1993) Airline seat allocation with multiple nested fare classes. Oper. Res. 41(1): 127-137. Chen L, Homem-de-Mello T (2010) Re-solving stochastic programming models for airline revenue management. Ann. Oper. Res. 177: 91-114. Chen W, Feng Q, Seshadri S (2013) Sourcing from suppliers with random yield for price-dependent demand. Ann. Oper. Res. 208(1): 557–579. Chen X, Gao X, Hu Z (2015) A new approach to two-location joint inventory and transshipment control via L\ -convexity. Oper. Res. Letters 43(1): 65-68. Chen X, Hu P, He S (2013) Technical Note-Preservation of supermodularity in parametric optimization problems with nonlattice structures. Oper. Res. 61(5): 1166-1173. Chen X, Pang Z, Pan L (2014) Coordinating inventory control and pricing strategies for perishable products. Oper. Res. 62(2): 284-300. Ciarallo FW, Akella R, Morton TE (1994) A periodic review, production planning model with uncertain capacity and uncertain demand: optimality of extended myopic policies. Management Sci. 40(3): 320-332. Cooper W, Homem-de-Mello T (2007) Some decomposition methods for revenue management. Transportation Science 41(3): 332-353. Demirel S, Duenyas I, Kapuscinski R (2015) Production and inventory control for a make-to-stock/calibrate-to-stock system with dedicated shared resources. Oper. Res. 63(4): 823-839. Daniel K (1963) A delivery-lag inventory model with emergency. H. Scarf, D. Gilford, M. Shelly, eds. Multistage Inventory Models and Techniques. Stanford University Press, Stanford, CA. pp. 32-46. Federgruen A, Yang N (2008) Selecting a portfolio of suppliers under demand and supply risks. Oper. Res. 56(4): 916–936. Federgruen A, Yang N (2011) Procurement strategies with unreliable suppliers. Oper. Res. 59(4): 1033–1039. Feng Q (2010) Integrating dynamic pricing and replenishment decisions under supply capacity uncertainty.
Management Sci. 56(12): 2154-2172. Feng Q, Shi R (2012) Sourcing from multiple suppliers for price-dependent demands. Production and Operations Management 21(3): 547-563. Feng Q, Sethi S, Yan H, Zhang H (2006) Are base-stock policies optimal in inventory problems with multiple delivery modes. Oper. Res. 54(4): 801–807. Feng Q, Shanthikumar G (2014) Supply and demand functions in inventory models. Working paper, Purdue University. Fukuda Y (1964) Optimal policies for the inventory problem with negotiable leadtime. Manage. Sci. 10(4): 690-708. 33 Gong X, Chao X, Zheng S (2014) Dynamic pricing and inventory management with dual suppliers of different leadtimes and disruption risks. Production and Operations Management 23(12): 2058-2074. Henig M, Gerchak Y (1990) The structure of periodic review policies in the presence of random yield. Oper. Res. 38(4): 634-643. Hu X, Duenyas I, Kapuscinski R (2008) Optimal joint inventory and transshipment control under uncertain capacity. Oper. Res. 56(4): 881–897. Hua, Z, Yu Y, Zhang W, Xu X, (2015) Structural Properties of the optimal policy for dual-sourcing systems with general lead times. IIE Transactions 47: 841-850 Huh W, Janakiraman G (2010) On the optimal policy structure in serial inventory systems with lost sales. Oper. Res. 58(2): 481-491. Lu Y, Song JS (2005) Order-based cost optimization in assemble-to-order systems. Oper. Res. 53(1): 151-169. Möller A, Römisch W, Weber K (2008) Airline network revenue management by multistage stochastic programming. Computational Management Science 54: 355-377. Murota K (1998) Discrete convex analysis. Mathematical Programming, 83: 313-371. Murota K (2003) Discrete Convex Analysis. SIAM Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied Mathematics, Philadelphia, PA. Murota K (2009) Recent developments in discrete convex analysis. in: W. Cook, L. Lovasz and J. Vygen, eds., Research Trends in Combinatorial Optimization, Bonn 2008, Springer-Verlag, Berlin, Chapter 11, 219-260. Nadar E, Akan M, Sheller-Wolf A (2014) Optimal structural results for assemble-to-order generalized M systems. Oper. Res. 62(3): 571-579. Pang Z, Chen F, Feng Y (2012) A note on the structure of joint inventory-pricing control with leadtimes. Oper. Res. 60(3): 581-587. Robinson L (1995) Optimal and approximate control policies for airline booking with sequantial nonmonotomic fare classes. Oper. Res. 43(2): 252–263. Shaked M, Shanthikumar G (2006) Stochastic Orders, Springer-Verlag, New York. Sheopuri A, Janakiraman G, Seshadri S (2010) New policies for the stochastic inventory control problem with two supply sources. Oper. Res. 58(3): 734–745. Simchi-Levi D, Chen X, Bramel J (2014) The Logic of Logistics: Theory, Algorithms, and Applications for Logistics Management, 3rd ed. Springer-Verlag, New York. Song JS. Zipkin P (2003) Supply chain operations: Assemble-to-order systems. In: de Kok, T., S. Graves, (Eds.), Supply Chain Management. Handbooks in Operations Research and Management Science, Vol. 30. North-Holland, Amsterdam, 561–596. Talluri, K. T., van Ryzin, G. J., (2005). The Theory and Practice of Revenue Management. International Series in Operations Research & Management Science Vol. 68. Springer US. 34 Topkis D M (1998) Supermodularity and Complementarity. Princeton University Press, Princeton, NJ. Van Mieghem, J A (2008) Operations Strategy: Principles and Practice. Dynamic Ideas, Belmont, MA, pp. 230–232. 
Veeraraghavan S, Scheller-Wolf AA (2008) Now or later: A simple policy for effective dual sourcing in capacitated systems. Oper. Res. 56(4): 850–864. Wang YZ, Gerchak Y (1996) Periodic review production models with variable capacity, random yield, and uncertain demand. Management Sci. 42(1): 130–137. Whittemore AS, Saunders SC (1977) Optimal inventory under stochastic demand with two supply options. SIAM J. Appl. Math. 32(2): 293-305. Zǎlinescu C (2002) Convex Analysis in General Vector Spaces. World Scientific, Singapore. Zhou X, Chao X (2014) Dynamic pricing and inventory management with regular and expedited supplies. Production and Operations Management 23(1): 65-80. Zipkin P (2008) On the structure of lost-sales inventory models. Oper. Res. 56(4): 937–944.