Developing an Algorithm for LP

Preamble to Section 3 (Simplex Method)

Moving from BFS to BFS

We consider an LP given in standard form and let $x_0$ be a BFS. Let $B_1, B_2, \dots, B_m$ be the columns of $A$ corresponding to the basis $B$ for $x_0$. Then $B = (B_1, \dots, B_m)$ is an $m \times m$ invertible basis matrix. For $A_j \notin B$ ($x_j$ a nonbasic variable) there exists $y_j$ such that

$$A_j = B y_j \qquad (1)$$

or

$$A_j = \sum_{i=1}^{m} B_i y_{ij} \qquad (2)$$

since $B$ spans $\mathbb{R}^m$ and therefore $A_j \in \mathbb{R}^m$ is expressible as a linear combination of columns from $B$. Let $y_0 = (y_{10}, \dots, y_{m0})^T$ be the values of the basic variables ($x_j$ such that $A_j \in B$). Then

$$B y_0 = b \qquad (3)$$

or

$$\sum_{i=1}^{m} B_i y_{i0} = b \qquad (4)$$

with $y_{i0} \ge 0$. Consider (4) $- \ \theta \cdot$ (2) for some scalar $\theta$:

$$\sum_{i=1}^{m} (y_{i0} - \theta y_{ij}) B_i + \theta A_j = b \qquad (5)$$

Suppose $x_0$ is nondegenerate; then all $y_{i0} > 0$. As $\theta$ increases from zero, we move from the BFS $x_0$ to feasible solutions with $m+1$ strictly positive components. $\theta$ can increase until some component of $y_0$ becomes zero. This happens at the value

$$\theta_0 = \min_{i : y_{ij} > 0} \frac{y_{i0}}{y_{ij}} \qquad (6)$$

$$= \frac{y_{p0}}{y_{pj}}, \text{ say.} \qquad (7)$$

Example 2.1 (contd.) The BFS corresponding to the basic variables $\{x_1, x_3, x_6, x_7\}$ is $x_0 = (2, 0, 2, 0, 0, 1, 4)$ and $B = (A_1, A_3, A_6, A_7)$. The nonbasic column $A_5 = (0, 1, 0, 0)^T$ may be written

$$A_5 = A_1 - A_3 + A_6 + A_7$$

i.e. $y_{15} = 1$, $y_{25} = -1$, $y_{35} = 1$, $y_{45} = 1$. Then (5) becomes

$$(2 - \theta) A_1 + (2 + \theta) A_3 + (1 - \theta) A_6 + (4 - \theta) A_7 + \theta A_5 = b$$

The family of feasible points $x = (2 - \theta,\ 0,\ 2 + \theta,\ 0,\ \theta,\ 1 - \theta,\ 4 - \theta)$ moves from the vertex $x_0$ to the new BFS $x_1 = (1, 0, 3, 0, 1, 0, 3)$ as $\theta$ increases from 0 to a maximum value of $\theta_0 = 1$ given by (6). The new set of basic variables is $\{x_1, x_3, x_5, x_7\}$. Thus $x_5$ joins the basis and $x_6$ leaves the basis.

Notes

1. Where there is a tie in the minimization operation (6) the new BFS is degenerate.

2. If $y_{ij} \le 0$ for all $i$, then $\theta$ can be increased indefinitely and the feasible region $F$ is unbounded.

3. If $x_0$ is degenerate (some $y_{i0} = 0$) and the corresponding $y_{ij} > 0$, then $\theta_0 = 0$ and $x_j$ joins the basis at zero level. In this case the new BFS represents the same vertex as $x_0$ in $\mathbb{R}^n$, but corresponds to a different basis.

Theorem 1 (Pivot step) Given a BFS $x_0$ with basic components $y_{i0}$, $i = 1, \dots, m$, and basis $B$, let $j$ be such that $A_j \notin B$, and let

$$\theta_0 = \min_{i : y_{ij} > 0} \frac{y_{i0}}{y_{ij}} = \frac{y_{p0}}{y_{pj}}, \text{ say.}$$

Then the new feasible solution given by

$$y'_{i0} = \begin{cases} y_{i0} - \theta_0 y_{ij} & i \ne p \\ \theta_0 & i = p \end{cases}$$

is a BFS with $B' = B \cup \{A_j\} \setminus \{B_p\}$.

Note: $A_j$ has been substituted in place of $B_p$, and the value $y'_{p0} = \theta_0$ is to be interpreted as the value of the entering variable $x_j$ (see also the calculation of $z'_0$ below).

Proof. We need to show that the columns of $A$ contained in $B'$ are linearly independent, and thus form a basis. Suppose there exist constants $\{d_i\}_{i=1}^m$ such that

$$\sum_{\substack{i=1 \\ i \ne p}}^{m} d_i B_i + d_p A_j = 0 \qquad (8)$$

Substituting $A_j = \sum_{i \ne p} y_{ij} B_i + y_{pj} B_p$ gives

$$\sum_{\substack{i=1 \\ i \ne p}}^{m} (d_p y_{ij} + d_i) B_i + d_p y_{pj} B_p = 0$$

But $B = \{B_i\}_{i=1}^m$ is a basis, therefore linearly independent. Hence all coefficients of the $B_i$ are zero. In particular $d_p y_{pj} = 0$, hence $d_p = 0$ (as $y_{pj} > 0$ by construction). This means that all $\{d_i\}$ are zero in (8). Hence $B'$ is linearly independent.

Choosing a profitable column $A_j$

The cost of a BFS $x_0 = (y_0, 0)$ with basis matrix $B$ is

$$z_0 = c_B^T y_0 \qquad (9)$$

where $c_B^T = (c_{B1}, \dots, c_{Bm})$ are the costs of the basic variables $x_B$. The net change in cost corresponding to a unit increase in the variable $x_j$ is

$$c_j - \sum_{i=1}^{m} y_{ij}\, c_{Bi} = c_j - z_j, \text{ say.}$$

NB. $z_j$ denotes the scalar product of $c_B$ with $y_j$ and is very important in later explanations of the Simplex tableau. The quantity $c_j - z_j = \bar{c}_j$ is known as the relative cost or the "reduced cost" of variable $x_j$ (at this vertex).
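The pivot step of Theorem 1 is easy to state in code. The following is a minimal numpy sketch of my own (the function names, the tolerance and the small numerical example at the end are illustrative, not from the notes): given the basis matrix $B$, an entering column $A_j$ and the current basic values $y_0$, it computes $y_j = B^{-1} A_j$, the step $\theta_0$ from (6), and the updated basic values of Theorem 1.

```python
import numpy as np

def ratio_test(y0, yj, tol=1e-12):
    """Equations (6)-(7): theta_0 = min over {i : y_ij > 0} of y_i0 / y_ij,
    together with the index p attaining the minimum (the leaving row)."""
    candidates = [(y0[i] / yj[i], i) for i in range(len(yj)) if yj[i] > tol]
    if not candidates:
        # Note 2 above: if no y_ij is positive, theta can grow without bound.
        raise ValueError("no positive y_ij: feasible region unbounded in this direction")
    theta0, p = min(candidates)
    return theta0, p

def pivot_step(B, Aj, y0):
    """Theorem 1: replace basic column B_p by A_j and return the new basic values."""
    yj = np.linalg.solve(B, Aj)        # y_j = B^{-1} A_j, equations (1)-(2)
    theta0, p = ratio_test(y0, yj)
    y0_new = y0 - theta0 * yj          # y'_i0 = y_i0 - theta_0 * y_ij for i != p
    y0_new[p] = theta0                 # the entering variable x_j takes the value theta_0
    return y0_new, p

# Illustrative data only (not Example 2.1): with B = I_2, A_j = (1, 2), y_0 = (4, 6),
# theta_0 = min(4/1, 6/2) = 3 and the new basic values are (1, 3), with p = 1.
print(pivot_step(np.eye(2), np.array([1.0, 2.0]), np.array([4.0, 6.0])))
```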
Theorem 2 (Cost improvement) At a BFS $x_0$, a pivot step in which $x_j$ enters the basis at value $\theta_0$ changes the cost by an amount

$$\theta_0 \bar{c}_j = \theta_0 (c_j - z_j) \qquad (10)$$

where the $z_j$ are the components of

$$z^T = c_B^T B^{-1} A \qquad (11)$$

Proof. The previous theorem establishes that the new BFS $x_1$ after pivoting has

$$y'_{i0} = \begin{cases} y_{i0} - \theta_0 y_{ij} & i \ne p \\ \theta_0 & i = p \end{cases}$$

so the new cost is

$$z'_0 = \sum_{\substack{i=1 \\ i \ne p}}^{m} (y_{i0} - \theta_0 y_{ij})\, c_{Bi} + \theta_0 c_j
      = z_0 - y_{p0} c_{Bp} - \theta_0 z_j + \theta_0 y_{pj} c_{Bp} + \theta_0 c_j
      = z_0 + \theta_0 (c_j - z_j)$$

noting that $\theta_0 y_{pj} = y_{p0}$, thus proving (10). Since the $y_{ij}$ are defined through $A_j = B y_j$ we have $y_j = B^{-1} A_j$, and hence

$$z_j = \sum_{i=1}^{m} c_{Bi} y_{ij} = c_B^T y_j = c_B^T B^{-1} A_j$$

for each $j$, thus proving (11).

Theorem 3 (Optimality criterion) If $\bar{c} = c - z \ge 0$ then $x_0$ is optimal.

Proof. Let $y$ be any feasible vector, not necessarily basic, such that $Ay = b$ and $y \ge 0$. Given that $c - z \ge 0$ and $y \ge 0$, their scalar product is also nonnegative:

$$(c - z)^T y = (c^T - z^T)\, y \ge 0$$

Therefore

$$c^T y \ge z^T y = c_B^T B^{-1} A y = c_B^T B^{-1} b = c_B^T y_0 = c^T x_0$$

so $x_0$ is optimal.

3. The Simplex Algorithm

The fundamental theorem of LP assures us that we can find an optimum to an LP in standard form by searching the BFS's of the constraint set

$$Ax = b \qquad (12)$$

which are precisely the vertices (extreme points) of the feasible region $F$. The simplex method proceeds from one BFS to another, ensuring that the objective function decreases monotonically (for minimization) until a minimum is reached.

3.1 Diagonal representation

Given a basic solution to (12) we suppose, for convenience of notation, that the basic variables are $x_B = (x_1, \dots, x_m)^T$ and the nonbasic variables are $x_N = (x_{m+1}, \dots, x_n)^T$. Let $B$ denote the $m \times m$ basis matrix (containing the basic columns of $A$) and $N$ the matrix of nonbasic columns of $A$. Then (12) may be written

$$\begin{pmatrix} B & N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = b, \quad \text{i.e.} \quad B x_B + N x_N = b.$$

Premultiplying this system by $B^{-1}$ gives an equivalent system

$$I_m x_B + B^{-1} N x_N = B^{-1} b \qquad (13)$$

where $I_m$ is the $m \times m$ identity matrix, or simply

$$x_B + Y x_N = y_0 \qquad (14)$$

where $Y = B^{-1} N$ is an $m \times (n - m)$ matrix and $y_0$ gives the values of the basic variables $x_B$ at this BFS. (Setting $x_N = 0$ gives $x_B = y_0$.) A typical column of $Y$ will be $y_j$.

The system of equations (13) or (14) is a representation of the original system (12) diagonalized with respect to the basic variables. Such a diagonalization may be achieved by a sequence of elementary row operations in a process known as Gauss-Jordan pivoting. When $m = n$ such a process gives a unique solution $y_0$ to a system of linear equations, assuming that $A$ is a full rank square matrix.

3.2 Tableau iterations

Suppose that, initially, the variables are labelled so that $\{x_1, x_2, \dots, x_m\}$ are basic and $\{x_{m+1}, x_{m+2}, \dots, x_n\}$ are non-basic. The "tableau representation" of (14) is defined to be the partitioned matrix

$$[\, I_m \mid Y \mid y_0 \,] = \begin{bmatrix}
1 &   &        & y_{1,m+1} & y_{1,m+2} & \cdots & y_{1,n} & y_{10} \\
  & 1 &        & y_{2,m+1} & y_{2,m+2} & \cdots & y_{2,n} & y_{20} \\
  &   & \ddots & \vdots    &           &        & \vdots  & \vdots \\
  &   & 1      & y_{m,m+1} &           & \cdots & y_{m,n} & y_{m0}
\end{bmatrix} \qquad (15)$$

The columns of the identity matrix correspond to a particular choice of basic variables, and different choices of the basic variables lead to alternative BFS's. Note that in a diagonal representation the basic variables correspond to columns of the identity matrix $I_m$.

Suppose some basic variable $x_p$ leaves the basis and some non-basic variable $x_q$ enters the basis. Provided $y_{pq} \ne 0$, the transformed tableau can be obtained by the following row operations on (15), where $R_i$ denotes row $i$ of the tableau:

$$R'_p = \frac{R_p}{y_{pq}}, \qquad R'_i = R_i - \frac{y_{iq}}{y_{pq}} R_p \quad (i \ne p)$$

$y_{pq}$ is known as the pivot element.
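Equation (11) and the optimality criterion of Theorem 3 are also easy to check numerically. The following is a minimal numpy sketch of my own (the function names, the tolerance and the tiny example data are illustrative): it computes $z^T = c_B^T B^{-1} A$, the reduced costs $\bar{c} = c - z$, and whether the current BFS is optimal for a minimization problem.

```python
import numpy as np

def reduced_costs(A, c, basis):
    """Return cbar = c - z, where z^T = c_B^T B^{-1} A as in equation (11).
    `basis` lists the column indices of A that form the basis matrix B."""
    B = A[:, basis]
    cB = c[basis]
    lam = np.linalg.solve(B.T, cB)   # lam solves B^T lam = c_B, so lam^T = c_B^T B^{-1}
    z = A.T @ lam                    # z_j = c_B^T B^{-1} A_j for every column j
    return c - z

def is_optimal_min(cbar, tol=1e-9):
    """Theorem 3: for a minimization problem the BFS is optimal if cbar = c - z >= 0."""
    return bool(np.all(cbar >= -tol))

# Illustrative data only: with the first two columns as the basis, the third
# column has reduced cost -1 < 0, so this BFS is not yet optimal.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
c = np.array([0.0, 0.0, -1.0])
cbar = reduced_costs(A, c, [0, 1])
print(cbar, is_optimal_min(cbar))    # [ 0.  0. -1.] False
```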
Element by element we have

$$y'_{pj} = \frac{y_{pj}}{y_{pq}} \quad (\forall j)$$

$$y'_{ij} = y_{ij} - \frac{y_{iq}\, y_{pj}}{y_{pq}} = \frac{y_{ij}\, y_{pq} - y_{iq}\, y_{pj}}{y_{pq}} \quad (\forall j, \ i \ne p)$$

Example 3.1 (non-Simplex) Consider the system in diagonal form

$$x_1 \phantom{+ x_2 + x_3} + x_4 + x_5 - x_6 = 5$$
$$\phantom{x_1 +{}} x_2 \phantom{+ x_3} + 2x_4 - 3x_5 + x_6 = 3$$
$$\phantom{x_1 + x_2 +{}} x_3 - x_4 + 2x_5 - x_6 = -1$$

in which $x_1, x_2, x_3$ are basic and $x_4, x_5, x_6$ are nonbasic. Find a basic solution with basic variables $x_4, x_5, x_6$. Notice that at this stage we have no objective function, nor do we insist on feasibility.

Tableau T0

           x1    x2    x3    x4    x5    x6 |  y0
    x1      1     0     0     1     1    -1 |   5
    x2      0     1     0     2    -3     1 |   3
    x3      0     0     1    -1     2    -1 |  -1

The first column of the tableau indicates the basic variables. The right-hand-side column contains the current values of the basic variables $y_0$, which are $x_1 = 5$, $x_2 = 3$, $x_3 = -1$. Since $x_3 < 0$, this basic solution would not be feasible for an LP.

Exchange $x_1, x_4$. The pivot element is $y_{14} = 1$, so $R'_1 = R_1$, $R'_2 = R_2 - 2R_1$, $R'_3 = R_3 + R_1$.

Tableau T1

           x1    x2    x3    x4    x5    x6 |  y0
    x4      1     0     0     1     1    -1 |   5
    x2     -2     1     0     0    -5     3 |  -7
    x3      1     0     1     0     3    -2 |   4

The basic variables are now (in row order) $x_4 = 5$, $x_2 = -7$, $x_3 = 4$.

Exchange $x_2, x_5$. The pivot element is $-5$, so $R'_1 = R_1 + \tfrac{1}{5}R_2$, $R'_2 = -\tfrac{1}{5}R_2$, $R'_3 = R_3 + \tfrac{3}{5}R_2$.

Tableau T2

           x1    x2    x3    x4    x5    x6 |  y0
    x4    3/5   1/5     0     1     0  -2/5 | 18/5
    x5    2/5  -1/5     0     0     1  -3/5 |  7/5
    x3   -1/5   3/5     1     0     0  -1/5 | -1/5

The basic variables are $x_4 = 18/5$, $x_5 = 7/5$, $x_3 = -1/5$.

Exchange $x_3, x_6$. The pivot element is $-1/5$, so $R'_1 = R_1 - 2R_3$, $R'_2 = R_2 - 3R_3$, $R'_3 = -5R_3$.

Tableau T3

           x1    x2    x3    x4    x5    x6 |  y0
    x4      1    -1    -2     1     0     0 |   4
    x5      1    -2    -3     0     1     0 |   2
    x6      1    -3    -5     0     0     1 |   1

The basic variables are $x_4 = 4$, $x_5 = 2$, $x_6 = 1$. The columns of $I_3$ visible in the tableau correspond to these basic variables. In place of the identity matrix originally under $x_1, x_2, x_3$ in T0, we now have the columns of $B^{-1}$, where

$$B = \begin{pmatrix} 1 & 1 & -1 \\ 2 & -3 & 1 \\ -1 & 2 & -1 \end{pmatrix}$$

since $B^{-1} N = B^{-1}$ when $N = I_3$.

This example shows how we can obtain a new system in diagonal form with respect to a desired basis by a sequence of pairwise exchanges of a basic and a nonbasic variable. In the simplex algorithm our target basis is not given explicitly but is the one defining the optimal BFS. We need to consider two additional aspects:

1. Keep the right-hand-side vector $y_0$ positive so that each basic solution is feasible (a BFS).

2. Improve the objective function $z$ at each iteration.

3.3 Maintaining feasibility

Given a choice of incoming nonbasic variable $x_q$ (i.e. $j = q$), the outgoing variable $x_p$ is given by the minimum ratio rule:

$$\frac{y_{p0}}{y_{pq}} = \min_{i : y_{iq} > 0} \frac{y_{i0}}{y_{iq}} \qquad (16)$$

3.4 Improving the objective function

We add a new row to the tableau (row 0, the "z-row" or the "bottom row") representing the equation $z = c^T x$ in diagonal form with respect to the current basis. We can show that the z-row equation then contains the coefficients $z_j - c_j$ as defined earlier:

$$z = c^T x = c_B^T x_B + c_N^T x_N = c_B^T (y_0 - Y x_N) + c_N^T x_N \quad \text{from (14)}$$

$$= z_0 - \sum_{j=m+1}^{n} c_B^T y_j\, x_j + \sum_{j=m+1}^{n} c_j x_j = z_0 - \sum_{j=m+1}^{n} (z_j - c_j)\, x_j \qquad (17)$$

or, in a form consistent with equation (14),

$$z + \sum_{j=m+1}^{n} (z_j - c_j)\, x_j = z_0$$

From (10) the per-unit decrease in the OF (objective function) $z$ due to introducing variable $x_j$ is $z_j - c_j$. The criterion

$$z_q - c_q = \max_{j = m+1, \dots, n} \{ z_j - c_j \} \qquad (18)$$

therefore picks the nonbasic variable which gives the largest rate of decrease (Dantzig's Rule). As long as $z_q - c_q > 0$, a pivot will decrease the OF. The optimality criterion (minimization) is

$$z_j - c_j \le 0, \quad j = m+1, \dots, n \qquad (19)$$

The corresponding rule for maximization problems is to replace max by min in (18) (choose the most negative $z_j - c_j$), and the optimality criterion becomes $z_j - c_j \ge 0$ for all nonbasic $j$.
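Before turning to a full simplex example, the pairwise exchanges of Example 3.1 above can be checked mechanically with the pivot row operations of Section 3.2. The script below is a minimal numpy sketch of my own (the helper name gj_pivot and the column ordering are mine); it should finish with right-hand-side values 4, 2, 1 and with $B^{-1}$ sitting under the $x_1, x_2, x_3$ columns, as in Tableau T3.

```python
import numpy as np

def gj_pivot(T, p, q):
    """One Gauss-Jordan pivot on element (p, q):
    R'_p = R_p / y_pq and R'_i = R_i - (y_iq / y_pq) R_p for i != p."""
    T[p, :] = T[p, :] / T[p, q]
    for i in range(T.shape[0]):
        if i != p:
            T[i, :] -= T[i, q] * T[p, :]

# Tableau T0 of Example 3.1: columns x1..x6 followed by the right-hand side.
T = np.array([
    [1., 0., 0.,  1.,  1., -1.,  5.],
    [0., 1., 0.,  2., -3.,  1.,  3.],
    [0., 0., 1., -1.,  2., -1., -1.],
])

gj_pivot(T, 0, 3)   # exchange x1 and x4 (pivot element 1)
gj_pivot(T, 1, 4)   # exchange x2 and x5 (pivot element -5)
gj_pivot(T, 2, 5)   # exchange x3 and x6 (pivot element -1/5)

print(T[:, -1].round(10))   # basic values: [4. 2. 1.]
print(T[:, :3].round(10))   # B^{-1}, now occupying the x1, x2, x3 columns
```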
Example 3.2 Maximize $z(x) = 5x_1 + 4x_2 + 3x_3$ subject to

$$2x_1 + 3x_2 + x_3 \le 5$$
$$4x_1 + x_2 + 2x_3 \le 11$$
$$3x_1 + 4x_2 + 2x_3 \le 8$$
$$x_1, x_2, x_3 \ge 0$$

Introduce slack variables $s_1, s_2, s_3 \ge 0$ and rewrite Constraint 1 as, for example,

$$2x_1 + 3x_2 + x_3 + s_1 = 5$$

In the following sequence of tableaux the pivot element is shown in square brackets. The initial tableau is for the BFS $s_1 = 5$, $s_2 = 11$, $s_3 = 8$, $x_1 = x_2 = x_3 = 0$.

T0 (Max)

           s1    s2    s3    x1    x2    x3 |  y0
    s1      1     0     0   [2]     3     1 |   5
    s2      0     1     0     4     1     2 |  11
    s3      0     0     1     3     4     2 |   8
    z       0     0     0    -5    -4    -3 |   0

The z-row represents the equation $z = 5x_1 + 4x_2 + 3x_3$ (i.e. $z - 5x_1 - 4x_2 - 3x_3 = 0$) with $z_0 = 0$. Alternatively the scalar product formula $z_j = c_B^T y_j$ may be used, giving $z_j - c_j = (0, 0, 0) \cdot (2, 4, 3) - 5 = -5$ in the case of $x_1$. Dantzig's rule modified for the maximization problem gives $z_q - c_q = -5$, which fixes our choice of pivot column, so we add $x_1$ to the basic variables. (Pivoting in any nonbasic variable $x_j$ with a strictly negative value of $z_j - c_j$ would, however, also lead to an increase in $z$.)

The minimum ratio rule gives $\min\{5/2,\ 11/4,\ 8/3\} = 5/2$, therefore $s_1$ leaves the basis. A pivot step represented by the row operations

$$R'_1 = \tfrac{1}{2}R_1, \qquad R'_2 = R_2 - 2R_1, \qquad R'_3 = R_3 - \tfrac{3}{2}R_1, \qquad R'_0 = R_0 + \tfrac{5}{2}R_1$$

leads to the new BFS

T1 (Max)

           s1    s2    s3    x1    x2    x3 |  y0
    x1    1/2     0     0     1   3/2   1/2 |  5/2
    s2     -2     1     0     0    -5     0 |    1
    s3   -3/2     0     1     0  -1/2  [1/2]|  1/2
    z     5/2     0     0     0   7/2  -1/2 | 25/2

This tableau represents the BFS $(5/2, 0, 0, 0, 1, 1/2)$, i.e. $x_1 = 5/2$, $s_2 = 1$, $s_3 = 1/2$, with objective value $25/2 = 12.5$ (monotonic increase is guaranteed). Notice that the change in z-value is $\theta_0 (c_q - z_q) = \tfrac{5}{2} \times 5 = \tfrac{25}{2}$ and $c_B^T y_0 = 5 \times \tfrac{5}{2} = \tfrac{25}{2} = z_0$.

Dantzig's rule now gives $z_q - c_q = -1/2$, showing it is profitable to include $x_3$ in the basis. The minimum ratio rule gives $\min\{(5/2)/(1/2),\ (1/2)/(1/2)\} = \min\{5, 1\} = 1$, so $s_3$ leaves the basis. Pivoting again gives

T2 (Max)

           s1    s2    s3    x1    x2    x3 |  y0
    x1      2     0    -1     1     2     0 |    2
    s2     -2     1     0     0    -5     0 |    1
    x3     -3     0     2     0    -1     1 |    1
    z       1     0     1     0     3     0 |   13

This tableau has all $z_j - c_j \ge 0$ (i.e. $\bar{c} \le 0$), so it satisfies the optimality criterion. The optimal LP solution is $x = (2, 0, 1)$ in the problem's original variables, i.e. $x_1 = 2$, $x_2 = 0$, $x_3 = 1$. The optimal value at this vertex is $z = 13$, and $c_B^T y_0 = 5 \times 2 + 3 \times 1 = 13$. The change in z-value at the last pivot is $\theta_0 (c_q - z_q) = 1 \times \tfrac{1}{2} = \tfrac{1}{2}$.

We may apply the scalar product formula to verify all $z_j - c_j$ values; e.g. for variable $x_2$ we obtain $(5, 0, 3) \cdot (2, -5, -1) - 4 = 3$ (this checks the z-row pivots).

3.5 Reduced tableau iterations

A faster but equivalent representation omits the columns (of $I_m$) corresponding to the basic variables $x_B$. At each pivot we exchange the labels of the pivot column and the pivot row and apply a new rule for transforming the pivot column:

$$y'_{pq} = \frac{1}{y_{pq}}, \qquad y'_{iq} = -\frac{y_{iq}}{y_{pq}} \quad (i \ne p)$$

The remaining tableau elements transform in the same way as in the full tableau:

$$y'_{pj} = \frac{y_{pj}}{y_{pq}} \quad (j \ne q), \qquad y'_{ij} = y_{ij} - \frac{y_{iq}\, y_{pj}}{y_{pq}} \quad (i \ne p, \ j \ne q)$$

For Example 3.2, the reduced tableau iterations are as follows:

T0

           x1    x2    x3 |  y0
    s1    [2]     3     1 |   5
    s2      4     1     2 |  11
    s3      3     4     2 |   8
    z      -5    -4    -3 |   0

The pivot element is replaced by its reciprocal. The rest of the pivot column is divided by the pivot element and changed in sign. Other tableau elements transform as before.

T1

           s1    x2    x3 |  y0
    x1    1/2   3/2   1/2 |  5/2
    s2     -2    -5     0 |    1
    s3   -3/2  -1/2  [1/2]|  1/2
    z     5/2   7/2  -1/2 | 25/2

One more iteration gives the reduced form of the optimal tableau obtained before, omitting the tableau columns corresponding to the basic variables.

T2

           s1    x2    s3 |  y0
    x1      2     2    -1 |    2
    s2     -2    -5     0 |    1
    x3     -3    -1     2 |    1
    z       1     3     1 |   13
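The whole of Example 3.2 can be reproduced by looping the three ingredients used above: Dantzig's rule, the minimum ratio rule, and the pivot row operations. The script below is a minimal numpy sketch of my own, specialised to this example (the column ordering $x_1, x_2, x_3, s_1, s_2, s_3$, the tolerances and the variable names are mine, and unboundedness is not tested for); it should terminate with $x = (2, 0, 1)$ and $z = 13$, matching tableau T2.

```python
import numpy as np

# Constraint rows [A | I | b] of Example 3.2 with slacks s1, s2, s3, plus a
# z-row holding the entries z_j - c_j (maximization), with z_0 in the last place.
T = np.array([
    [ 2.,  3.,  1., 1., 0., 0.,  5.],
    [ 4.,  1.,  2., 0., 1., 0., 11.],
    [ 3.,  4.,  2., 0., 0., 1.,  8.],
    [-5., -4., -3., 0., 0., 0.,  0.],
])
basis = [3, 4, 5]                       # current basic columns: s1, s2, s3

while True:
    zrow = T[-1, :-1]
    q = int(np.argmin(zrow))            # Dantzig's rule (maximization): most negative z_j - c_j
    if zrow[q] >= -1e-9:
        break                           # optimality: all z_j - c_j >= 0
    col = T[:-1, q]
    ratios = [T[i, -1] / col[i] if col[i] > 1e-9 else np.inf for i in range(len(col))]
    p = int(np.argmin(ratios))          # minimum ratio rule picks the leaving row
    T[p, :] /= T[p, q]                  # pivot row operations of Section 3.2
    for i in range(T.shape[0]):
        if i != p:
            T[i, :] -= T[i, q] * T[p, :]
    basis[p] = q

x = np.zeros(6)
x[basis] = T[[0, 1, 2], -1]
print("x =", x[:3], " z =", T[-1, -1])  # expected: x = [2. 0. 1.]  z = 13.0
```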
3.6 Obtaining an initial tableau (artificial variables)

How do we obtain a starting tableau which is diagonalized with respect to the basic variables? If the constraints happen to be of the form $Ax \le b$ with $b \ge 0$ then, as we have seen, an initial basis of slack variables is available. For the general case, suppose the constraints are in standard form $Ax = b$ with $b \ge 0$. We may use a two-phase method to obtain an initial BFS.

To the $i$-th constraint we add a term $+R_i$ representing an artificial variable $R_i$, to which we attach a unit cost. The augmented tableau (without a z-row) is then

$$\begin{bmatrix}
1 &   &        & a_{11}  & a_{12} & \cdots & a_{1n}  & b_1 \\
  & 1 &        & a_{21}  & a_{22} & \cdots & a_{2n}  & b_2 \\
  &   & \ddots & \vdots  &        &        & \vdots  & \vdots \\
  &   & 1      & a_{m,1} &        & \cdots & a_{m,n} & b_m
\end{bmatrix} \qquad (20)$$

which represents the BFS $R = b$. In Phase I we minimize the cost function

$$w = \sum_{i=1}^{m} R_i$$

subject to (20). There are three possible outcomes.

Case 1 We obtain $\min w = 0$ and all $R_i$ are driven out of the basis. We now have a BFS to the original problem.

Case 2 We obtain $\min w > 0$. There is no feasible solution to the original problem. (Otherwise a BFS to the original problem would also be a BFS to (20) with value $w = 0$.)

Case 3 We obtain $\min w = 0$ but some artificial variables remain in the basis at value zero. In this case we may continue non-Simplex pivoting to drive all artificial variables (AVs) out of the basis, i.e. reducing to Case 1.

Assuming Case 1 applies, we delete all artificial variables and proceed in Phase II to solve the original problem. Either we need to have carried a z-row for the original problem through the Phase I pivots, or we recompute the $z_j - c_j$ using the scalar product formula.

Example 3.3

Minimize $z = 4x_1 + x_2$ subject to

$$3x_1 + x_2 = 3$$
$$4x_1 + 3x_2 \ge 6$$
$$x_1 + 2x_2 \le 4$$
$$x_1, x_2 \ge 0$$

Insert surplus and slack variables $s_1, s_2$ in constraints 2 and 3, then add artificial variables $R_1, R_2$ to constraints 1 and 2:

$$3x_1 + x_2 + R_1 = 3$$
$$4x_1 + 3x_2 - s_1 + R_2 = 6$$
$$x_1 + 2x_2 + s_2 = 4$$

An initial basis is given by $R_1 = 3$, $R_2 = 6$, $s_2 = 4$, $x_1 = x_2 = s_1 = 0$. It is unnecessary to include an artificial variable $R_3$ in this example because constraint 3 has the form $a_i^T x \le b_i$, which allows $s_2$ to be the third basic variable. The $c_B$ column contains the unit costs of the $R_i$ ($i = 1, 2$) and zero for $s_2$. The basic values $y_0$ are initially set to $b$. The initial Phase I OF value is $w = 9$.

Phase I

    c_j                0     0     0
    c_B          x1    x2    s1 |  y0
     1    R1      3     1     0 |   3
     1    R2      4     3    -1 |   6
     0    s2      1     2     0 |   4
          w       7     4    -1 |   9

It is easy to verify that the bottom row represents $w + 7x_1 + 4x_2 - s_1 = 9$, the equation giving $w = R_1 + R_2$ in terms of the non-basic variables. Two iterations result in an optimal tableau for Phase I with $\min w = 0$ (Case 1). After the first pivot ($x_1$ enters, $R_1$ leaves):

  (Min)          R1    x2    s1 |  y0
    x1          1/3   1/3    0 |    1
    R2         -4/3   5/3   -1 |    2
    s2         -1/3   5/3    0 |    3
    w          -7/3   5/3   -1 |    2

After the second pivot ($x_2$ enters, $R_2$ leaves):

  (Min)          R1    R2    s1 |  y0
    x1          3/5  -1/5   1/5 |  3/5
    x2         -4/5   3/5  -3/5 |  6/5
    s2           1    -1     1  |   1
    w           -1    -1     0  |   0

Phase II. Delete the columns corresponding to $R_1, R_2$ and recompute the new bottom row from $z = 4x_1 + x_2$:

  (Min)          s1 |  y0
    x1          1/5 |  3/5
    x2         -3/5 |  6/5
    s2           1  |   1
    z           1/5 | 18/5

One further pivot ($s_1$ enters, $s_2$ leaves) gives the optimal tableau:

  (Min)          s2 |  y0
    x1         -1/5 |  2/5
    x2          3/5 |  9/5
    s1           1  |   1
    z          -1/5 | 17/5

Since $z_{s_2} - c_{s_2} = -1/5 \le 0$, the optimality criterion (19) is satisfied: the optimal solution of the original problem is $x_1 = 2/5$, $x_2 = 9/5$ with $z = 17/5$.

3.7 Alternative rules for pivot selection

The most common rule, and the easiest to implement, for selecting the pivot column is (for minimization) by most negative reduced cost $\bar{c}_j < 0$. We may regard $\bar{c}_j$ as the derivative of the cost with respect to distance in the space of nonbasic variables; choosing the most negative $\bar{c}_j$ is a form of "steepest descent" policy. The total change in cost as a result of one pivot is, however, $\theta_0 \bar{c}_j$, where $\theta_0$ is determined by the minimum ratio rule. Another rule for choosing the pivot column is therefore to choose the column giving the largest decrease in cost. This may be termed the "greatest increment" rule.

A unit increase in $x_j$ increments the entire solution vector $x$ by the amount

$$\Delta x_k = \begin{cases} +1 & k = j \\ -y_{ij} & k = B(i), \ i = 1, \dots, m \\ 0 & \text{otherwise} \end{cases}$$

where $k = B(i)$ indicates that $x_k$ is the $i$-th basic variable. The "derivative" (rate of change) of cost with respect to Euclidean distance in the space of all variables is therefore

$$\frac{\bar{c}_j}{\sqrt{1 + \sum_{i=1}^{m} y_{ij}^2}}$$

Use of this criterion leads to a pivot selection rule known as the all-variable gradient selection rule. No selection rule has been conclusively shown to be superior.
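The three column-selection rules just described can be compared side by side. The sketch below is illustrative only (the function names, tolerance, tie-breaking and example data are mine); it assumes a minimization problem and takes the reduced costs $\bar{c}_j$ of the nonbasic columns, the corresponding columns $y_j$ of $Y$, and the current basic values $y_0$.

```python
import numpy as np

def step_length(y0, yj, tol=1e-9):
    """theta_0 from the minimum ratio rule; np.inf if the column has no positive
    entry (an infinite step signals an unbounded direction)."""
    ratios = [y0[i] / yj[i] for i in range(len(yj)) if yj[i] > tol]
    return min(ratios) if ratios else np.inf

def select_pivot_column(cbar, Y, y0, rule="dantzig", tol=1e-9):
    """Compare the column-selection rules of Section 3.7 (minimization).
    cbar[j] = c_j - z_j for the nonbasic columns, Y[:, j] = y_j, y0 = basic values."""
    cand = [j for j in range(len(cbar)) if cbar[j] < -tol]
    if not cand:
        return None                          # optimality criterion already satisfied
    if rule == "dantzig":                    # "steepest descent": most negative cbar_j
        return min(cand, key=lambda j: cbar[j])
    if rule == "greatest_increment":         # largest total decrease theta_0 * cbar_j
        return min(cand, key=lambda j: step_length(y0, Y[:, j]) * cbar[j])
    if rule == "all_variable_gradient":      # cbar_j / sqrt(1 + sum_i y_ij^2)
        return min(cand, key=lambda j: cbar[j] / np.sqrt(1.0 + np.sum(Y[:, j] ** 2)))
    raise ValueError(f"unknown rule: {rule}")

# Illustrative data only (two constraints, three candidate columns): the three
# rules need not agree on the pivot column.
cbar = np.array([-4.0, -3.0, -1.0])
Y = np.array([[4.0, 1.0, 0.5],
              [2.0, 0.5, 0.5]])
y0 = np.array([2.0, 3.0])
for rule in ("dantzig", "greatest_increment", "all_variable_gradient"):
    print(rule, select_pivot_column(cbar, Y, y0, rule))   # columns 0, 1, 1 respectively
```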
3.8 Cycling

Since there are only finitely many BFS's, the simplex algorithm either terminates in a finite number of iterations or it must "cycle", i.e. loop through the same sequence of BFS's of the same value. Cycling is only possible if the problem has degeneracy (otherwise $z$ decreases strictly monotonically). Cycling occurs rarely, but can be prevented by Bland's rule: choose the pivot column by

$$q = \min \{ j : z_j - c_j > 0 \}$$

and the pivot row by

$$p = \min \left\{ i : y_{iq} > 0, \ \frac{y_{i0}}{y_{iq}} \le \frac{y_{k0}}{y_{kq}} \ \ \forall k \text{ s.t. } y_{kq} > 0 \right\}$$

i.e. of all valid pivot columns, choose the one with the lowest index; of all tied valid pivot rows, choose the one with the lowest index. (Proof omitted.)
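Bland's rule is straightforward to implement. The sketch below is illustrative (the function names, tolerance and example data are mine): given the z-row values $z_j - c_j$, the columns of $Y$ and the basic values $y_0$, it returns the entering column $q$ and the leaving row $p$ according to the two lowest-index rules above, or None when the optimality criterion (19) holds.

```python
import numpy as np

def blands_rule(zc, Y, y0, tol=1e-9):
    """Bland's anti-cycling rule (Section 3.8).
    zc[j] = z_j - c_j, Y[:, j] = y_j, y0 = current basic values.
    Returns (q, p): entering column and leaving row, or None if optimal."""
    eligible = [j for j in range(len(zc)) if zc[j] > tol]
    if not eligible:
        return None                       # optimality criterion (19) satisfied
    q = min(eligible)                     # lowest-index profitable column
    rows = [i for i in range(len(y0)) if Y[i, q] > tol]
    if not rows:
        raise ValueError("unbounded: no positive entry in the pivot column")
    theta0 = min(y0[i] / Y[i, q] for i in rows)
    p = min(i for i in rows if abs(y0[i] / Y[i, q] - theta0) <= tol)
    return q, p

# Illustrative data: the lowest-index profitable column is chosen even though
# another column has a larger z_j - c_j, and ties in the ratio test go to the
# lowest row index.
zc = np.array([0.0, 2.0, 5.0])
Y  = np.array([[1.0, 1.0, 2.0],
               [1.0, 2.0, 1.0]])
y0 = np.array([2.0, 2.0])
print(blands_rule(zc, Y, y0))   # (1, 1): column 1 enters, row 1 leaves
```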