Optimal Pivot Point Selection for the Simplex Method

Jason Bradley
Spring 2012

In Partial Fulfillment of Stat 4395 – Senior Project
Department of Computer and Mathematical Sciences

Faculty Advisor: Dr. Timothy Redl
Committee Member: Dr. Vasilis G. Zafiris
Committee Member: Dr. Erin M. Hodgess
Department Chairman: Dr. Dennis Rodriguez

Table of Contents

I. Abstract
II. Introduction
III. History of the Simplex Method
IV. How to Perform the Simplex Method
V. Fixing What Isn't Broken
   a. Identifying the Flaw
   b. Creating a Solution
VI. Results
   a. Minimization?
VII. Restrictions
VIII. Future Studies
IX. Conclusions
X. Bibliography

Abstract

The objective of this project is to take the existing standard optimization model and improve upon its structure by modifying its pivoting algorithm. While there are many different methods for solving optimization problems, the most well-known and most widely used algorithm is the Simplex Method. The original method is some sixty-five years old and has proven itself to be very robust. It has undergone only minor changes over the course of its lifetime and currently relies on a rule modification that was created in the 1970s. That particular piece of the puzzle is the focus of this project. Changing the course of the algorithm is intended to increase the overall efficiency and effectiveness of the Simplex Method. If the hypothesis holds true, it could have far-reaching effects on many different levels of industry. This project is built upon the work of some brilliant mathematicians and is intended to add to previously made discoveries. Innovation thrives both on original ideas and on the refinement of old discoveries. This new algorithm falls into the latter of those two categories.
Introduction

Throughout industry and business, the optimal use of resources is necessary not only for profitability but also for the overall survival of a company. Organizations commit large portions of manpower and funds toward improving the decision making within a company. These choices are usually guided by specific restrictions on manufacturing, manpower or other limitations. This type of optimization often relies on the principles of Linear Programming (LP) to ensure the most efficient use of limited resources. While this may sound like an economics topic, it actually falls into the discipline of Operations Research (OR). It can also be referred to as Industrial Engineering, Managerial Science, Mathematical Programming, Decision Science or a host of other monikers.

In the field of Operations Research, choices are not made haphazardly or based upon guesses. Instead, scenarios are laid out in a linear mathematical model, and the restrictions of that model are then analyzed in order to find an optimal solution to the given scenario. Often, Linear Programming problems arrive at solutions that are not obvious upon initial inspection. With larger problems, more constraints and more variables, Linear Programming shows its true power. Problems that would otherwise be nearly impossible to solve are handled with ease by following a simple, repetitive algorithm. Logistics, industry, business, manufacturing, design, and indeed any situation where resource management is crucial all benefit from this modeling method.

History of the Simplex Method

The first step in approaching this problem is to both study and give credit to the works that it is built upon. The Simplex Method is one of the fundamental tools used in Linear Programming. It was originally based on the input-output model developed during the early 1940s. The new method, created by George Dantzig, was officially published in 1947 after being declassified by the Department of Defense.
The conceptualization of the simplex method was done without relying on the similar work being developed in Russia by Leonid Kantorovich (1939). Dr. Dantzig designed this approach while heading up the Air Force's Statistical Control Combat Analysis Branch. While working at the Pentagon during World War II, Dantzig was tasked with improving the logistics programming for the military. Keep in mind, programming as we know it today did not exist; the closest correlation to computer programming in today's terms was coding punch cards. Programming then was more akin to planning and logistics. After searching for methods to solve his input-output matrices, he decided to create his own algorithm. Dantzig used linear inequalities and a set of rules that determined how to manipulate those inequalities to arrive at an optimal solution to the problems with which he was faced.

The first real-world problem that he tested the new algorithm on was "Stigler's Diet Problem" (Dantzig, 1990, pp. 43-47). It contained 77 variables and only 9 constraints. "At the time, this was considered a very large problem." It took 9 people 3 months to come to a solution using the newly developed Simplex Method. Fortunately, with today's computing power, the Diet Problem is considered relatively small and can be solved very quickly.

There is a great quote from Dr. Dantzig concerning the power of Simplex. He refers to an assignment problem concerning tasking 70 men to 70 different jobs. That's 70! different combinations. Dantzig says, "Now 70! is a big number, greater than 10^100. Suppose we had an IBM 370-168 available at the time of the big bang 15 billion years ago. Would it have been able to look at all the 70! combinations by the year 1981? No! Suppose instead it could examine 1 billion assignments per second? The answer is still no. Even if the earth were filled with such computers all working in parallel, the answer would still be no.
If, however, there were 10^50 earths or 10^44 suns all filled with nano-second speed computers all programmed in parallel from the time of the big bang until the sun grows cold, then perhaps the answer is yes." (Dantzig, 1984, pg. 106)

With the creation of the Simplex Method, this assignment problem could be transformed into a manageable and solvable scenario. This amazing mathematical discovery has not only shown the ability to stand the test of time, it is still the standard for optimization, just as it was 65 years ago.

How to Perform the Simplex Method

Is the problem maximization or minimization? This is the first question that needs to be answered when constructing a Linear Programming problem. Following the identification of the type of model, each constraint and the objective function must be identified and set as a linear equation. In standard form, the constraints are represented by "less than or equal to" inequalities. Each inequality is converted into an equation by adding slack/surplus variables that take up the difference of the inequality. It should also be noted whether or not the variables are restricted by non-negativity constraints. Once these elements have been identified, they are set up in a matrix (tableau) form. This tableau has one row for each constraint, and the objective (function) row is presented as the upper or lower row depending on the preference of the user. The function row is set to zero and the negative values of the original coefficients are inserted into the tableau. This finishes the initial setup of the LP model. Only after these steps are completed can the algorithm begin.

Step 1. State the LP in standard form:

    Maximize Z = c1 x1 + c2 x2 + ... + cn xn
    subject to:
        a11 x1 + a12 x2 + ... + a1n xn <= b1
        a21 x1 + a22 x2 + ... + a2n xn <= b2
        ...
        am1 x1 + am2 x2 + ... + amn xn <= bm
    and x1, x2, ..., xn >= 0

Step 2. Add a slack variable to each constraint to turn the inequalities into equations:

    Maximize Z = c1 x1 + c2 x2 + ... + cn xn
    subject to:
        a11 x1 + a12 x2 + ... + a1n xn + s1 = b1
        a21 x1 + a22 x2 + ... + a2n xn + s2 = b2
        ...
        am1 x1 + am2 x2 + ... + amn xn + sm = bm
    and x1, x2, ..., xn, s1, s2, ..., sm >= 0

Step 3. Build the initial tableau:

             x1    x2   ...   xn    s1   s2   ...  sm |  Z
    z-row   -c1   -c2   ...  -cn     0    0   ...   0 |  0
    s1      a11   a12   ...  a1n     1    0   ...   0 | b1
    s2      a21   a22   ...  a2n     0    1   ...   0 | b2
    ...
    sm      am1   am2   ...  amn     0    0   ...   1 | bm

The first part of the algorithm identifies the "pivot element". The current standard for finding a pivot element is based on the work of Robert Bland (Bland, 1977). This is accomplished in two steps. First, the pivot column is identified by locating the most negative coefficient in the function row. If there is a tie, the upper-left-most (northwest) value is chosen. Once the column is identified, the pivot element has to be determined. This is accomplished by a process called the minimum ratio test. The right-hand side of each constraint is divided by its corresponding element of the pivot column. The row producing the lowest non-negative ratio becomes the pivot row, and its entry in the pivot column becomes the pivot element. The row associated with the pivot element is divided by the value of that element. After scaling down the pivot row, the other elements in that column are eliminated using Gauss-Jordan elimination. The resulting Z-value is the amount of increase in the overall function. This process is repeated until there are no remaining negative values in the function row. The final Z-value represents an optimal solution to the linear problem. There may exist none, one or many optimal solutions to a Linear Programming problem depending on the area defined by the constraints. On a side note, if the only eligible pivot column contains only negative values, it is an indication that the feasible region is unbounded and there is no optimal feasible solution to the problem. Also, for the sake of this project, each problem is set up in standard form.

[Figure: a flowchart reflecting the decision making for the Simplex Method.]

Here is an example of the current pivot method of the Simplex Method:

Iteration 1: Identify the Pivot Column (-5 is the most negative obj.
row coefficient)

             x1    x2    s1 |  Z
    z-row    -3    -5     0 |  0
    s1       .5     1     1 |  3

Conduct the Minimum Ratio Test (MRT) on the selected column (3/1 = 3, the minimum non-negative ratio), then pivot the tableau (Gauss-Jordan: scale the pivot element to 1 and zero out the other rows):

             x1    x2    s1 |  Z
    z-row   -.5     0     5 | 15
    x2       .5     1     1 |  3

Iteration 2: Identify the Pivot Column (-.5 is the most negative obj. row coefficient). Conduct the Minimum Ratio Test on the selected column (3/.5 = 6, the minimum non-negative ratio), then pivot the tableau:

             x1    x2    s1 |  Z
    z-row     0     1     6 | 18
    x1        1     2     2 |  6

Iteration 3: Identify the Pivot Column. All columns now have non-negative coefficients in the z-row, so the maximization is complete.

Solution: objective function Z = 18. Path traveled: (0,0), (0,3), (6,0). Iterations = 3. Distance traveled = 9.71.

As you can see, the algorithm works very well. It traverses the corner points (feasible solutions) of the feasible region until it finds an acceptable solution to the problem. The results tell the story of how the algorithm progressed and what the final outcome was. For this LP, the algorithm traveled from the origin (0,0) to point (0,3), where it arrived at a value of 15. From there it traveled to point (6,0), where it reached its optimum of Z = 18. The total unit distance traveled was 9.71 and it found a solution on the third iteration. While this seems straightforward, please remember that as Linear Programming goes, this problem is about as simple as it gets. It is a 2-dimensional problem with only one constraint. Real-world LPs are far larger in both constraints and variables (dimensions). Accordingly, the complexity of the method increases when these parameters are larger.

Fixing What Isn't Broken

While the Simplex Method is a large part of the foundation of Operations Research, it is not without its issues.
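Before turning to those issues, it may help to see the pivot procedure from the previous section in code. The following is a minimal, illustrative Python sketch (the names and structure are my own, not part of the original method's presentation); it uses exact fractions to avoid round-off:

```python
# A minimal dense-tableau sketch of the standard pivot loop described
# above: most-negative z-row coefficient picks the column, the minimum
# ratio test picks the row, then a Gauss-Jordan pivot is applied.
from fractions import Fraction

def simplex(tableau):
    """tableau: list of rows; row 0 is the z-row, last entry is the RHS."""
    T = [[Fraction(v) for v in row] for row in tableau]
    while True:
        z = T[0][:-1]
        # Pivot column: most negative z-row coefficient (leftmost on ties).
        col = min(range(len(z)), key=lambda j: (z[j], j))
        if z[col] >= 0:
            return T  # no negative coefficients left: optimal
        # Minimum ratio test over rows with a positive pivot-column entry.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(1, len(T)) if T[i][col] > 0]
        if not ratios:
            raise ValueError("unbounded: no eligible pivot row")
        _, row = min(ratios)
        # Gauss-Jordan: scale the pivot row, then eliminate the column.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(len(T)):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [a - f * b for a, b in zip(T[i], T[row])]

# The worked example from the text: maximize 3*x1 + 5*x2
# subject to 0.5*x1 + x2 <= 3 (one constraint, one slack).
result = simplex([[-3, -5, 0, 0],
                  [Fraction(1, 2), 1, 1, 3]])
print(result[0][-1])  # optimal Z = 18
```

Running this on the small example reproduces the Z = 18 optimum found above in the same number of pivots.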
It has been shown that under certain scenarios a problem called cycling occurs. Cycling happens when the values associated with a linear optimization problem fail to converge on an optimal solution; in other words, the objective value does not improve after completing an iteration. This usually occurs when more than one constraint passes through the same basic feasible solution, or corner point. In the most extreme examples of cycling, the algorithm will never find a solution and will get stuck in an infinite loop. In order to address this issue of cycling, Robert Bland crafted new guidelines for determining the process that the simplex method should follow. His changes are specific to the pivot point selection, and those changes are now the standard for the Simplex Method. His changes to the basic algorithm curtail cycling but do not eliminate it. It can be demonstrated that, under certain conditions, cycling still occurs while using his algorithm. Part of the goal of this project is to eliminate cycling, especially infinite cycling. This is one of the fundamental weak points of the algorithm.
Infinite Cycle (starting tableau, basis s1, s2, s3):

             x1      x2      x3      x4     s1     s2    s3 |  Z
    z-row   -3/4    150    -1/50      6      0      0     0 |  0
    s1       1/4    -60    -1/25      9      1      0     0 |  0
    s2       1/2    -90    -1/50      3      0      1     0 |  0
    s3        0       0       1       0      0      0     1 |  1

After pivot 1 (x1 enters, s1 leaves):

    z-row     0     -30    -7/50     33      3      0     0 |  0
    x1        1    -240    -4/25     36      4      0     0 |  0
    s2        0      30     3/50    -15     -2      1     0 |  0
    s3        0       0       1       0      0      0     1 |  1

After pivot 2 (x2 enters, s2 leaves):

    z-row     0       0    -2/25     18      1      1     0 |  0
    x1        1       0     8/25    -84    -12      8     0 |  0
    x2        0       1    1/500   -1/2   -1/15   1/30    0 |  0
    s3        0       0       1       0      0      0     1 |  1

After pivot 3 (x3 enters, x1 leaves):

    z-row    1/4      0       0      -3     -2      3     0 |  0
    x3      25/8      0       1   -525/2  -75/2    25     0 |  0
    x2     -1/160     1       0     1/40  1/120  -1/60    0 |  0
    s3     -25/8      0       0    525/2   75/2   -25     1 |  1

After pivot 4 (x4 enters, x2 leaves):

    z-row   -1/2    120       0       0     -1      1     0 |  0
    x3    -125/2  10500       1       0     50   -150     0 |  0
    x4      -1/4     40       0       1    1/3   -2/3     0 |  0
    s3     125/2 -10500       0       0    -50    150     1 |  1

After pivot 5 (s1 enters, x3 leaves):

    z-row   -7/4    330     1/50      0      0     -2     0 |  0
    s1      -5/4    210     1/50      0      1     -3     0 |  0
    x4       1/6    -30   -1/150      1      0     1/3    0 |  0
    s3        0       0       1       0      0      0     1 |  1

After pivot 6 (s2 enters, x4 leaves):

    z-row   -3/4    150    -1/50      6      0      0     0 |  0
    s1       1/4    -60    -1/25      9      1      0     0 |  0
    s2       1/2    -90    -1/50      3      0      1     0 |  0
    s3        0       0       1       0      0      0     1 |  1

This is the same place we started. We have entered an infinite cycle.

Another issue within the Simplex Method is the efficiency of the path it takes to acquire an optimal solution. Assuming that the cycling dilemma is not an issue, Simplex will absolutely arrive at an optimal solution if one exists. However, the direction it chooses to take in order to arrive at that optimal solution is not always optimal itself. This is particularly true in concave linear models. Simplex does not account for the overall increase in the objective function when choosing a pivot. The algorithm simply picks the column with the most negative coefficient, with complete disregard for the per-unit increase along that vector. Due to that shortcoming, Simplex may actually take a longer route than necessary to arrive at its solution. It has been common practice to measure the efficiency of this method in iterations of the algorithm. This way of measuring can be misleading.
For instance, if we take our previous problem with the triangular feasible region, we could add additional constraints at one of the extreme corners and thereby increase the number of iterations in that area. This creates the illusion that more was done, while in reality the unit distance may not have increased much due to the close proximity of the corner points to one another. To address this, we can use the unit distance of the path from our starting solution to the optimal solution to measure the efficiency of the algorithm. By measuring by distance instead of iterations, we are able to avoid situations where "loading a corner" affects the overall efficiency of the algorithm. For clarification, "loading a corner" refers to the respective number of vertices located in one particular corner of the feasible region. Since the Simplex method iterates based on the number of vertices between its origin and its final destination, loading up a corner with multiple vertices will increase the number of iterations if Simplex decides to travel in that particular direction. If we instead use the unit distance traveled to measure the overall efficiency, we arrive at a measurement based upon the distance of the path traveled to arrive at the optimal solution. This accounts for the shorter distances between tightly packed vertices. This distance measurement is a far better gauge of efficiency because, unlike the iteration count, it cannot be artificially skewed. Thus, you get a more reliable means of determining how optimal the process is.

[Figure: an LP loaded up near the extreme feasible solution on the y-axis.]

Work on these issues is a promising venture due to the proven effectiveness of the Simplex Method. If this already powerful algorithm can be improved in any significant way, it becomes an even more valuable tool. Small changes to existing work can often have far-reaching implications.
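The unit-distance measure used throughout this comparison is simply the total Euclidean length of the path of corner points the algorithm visits. A small Python helper (the function name is mine) illustrates it on the path from the first worked example:

```python
# Total Euclidean length of the corner-point path an algorithm follows.
from math import dist  # Python 3.8+

def path_length(points):
    """Sum of Euclidean distances between consecutive corner points."""
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

# Path from the first example LP: origin -> (0,3) -> (6,0).
print(round(path_length([(0, 0), (0, 3), (6, 0)]), 2))  # 9.71
```

The result, 3 + sqrt(45) = 9.71 units, matches the distance reported for that example.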
The cycling issues and the paths chosen by the Simplex Method present a unique opportunity to reexamine a masterful piece of work. If a more efficient (by distance) method that does a better job of avoiding cycling can be gleaned from tinkering with Dr. Dantzig's masterpiece, I would be proud to be a small part of it.

"Linear programming can be viewed as part of a great revolutionary development which has given mankind the ability to state general goals and to lay out a path of detailed decisions to take in order to 'best' achieve its goals when faced with practical situations of great complexity." (Dantzig, 1991)

Identifying the Flaw

Originally, my desire was to decrease the overall number of iterations of the Simplex Method because it seemed as though iterations were an accurate measure of effectiveness. Only after changing the algorithm and being happy with the results did I discover that I could indeed alter the constraints to prove my algorithm wrong. Disappointed, but not deterred, I sought to understand what changes would actually constitute an improvement in the process. The realization that the number of pivots required to find the optimum does not correlate to the actual distance from the starting point of the feasible region was the inspiration I needed to try to find a better model. So the focus changed from iterations to distance, which gave a more consistent result. The new algorithm would need to take into account the actual distance between the current point and the distance(s) to the other feasible solutions. The thought seemed very close to graph theory, and I found it curious why the concept would not apply to optimization the same way it applies to directed graphs.

Creating a Solution

From there, designing the basic algorithm seemed simple: check the distance between two or more points, scale that distance by the unit value of that vector, and choose the path with the most negative value.
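The scoring idea just described can be sketched in Python as I understand it from this project: for every eligible column (negative z-row coefficient), take its minimum-ratio value and multiply it by that z-row coefficient, and let the most negative product select both the pivot column and the pivot row. The function and variable names here are illustrative, not part of the original presentation:

```python
# Sketch of the modified pivot-selection rule: score each eligible
# column by (z-row coefficient) * (its minimum-ratio value) and take
# the most negative score.
from fractions import Fraction

def choose_pivot(T):
    """T: tableau rows, row 0 = z-row, last entry of each row = RHS.
    Returns (row, col) of the chosen pivot, or None if optimal."""
    best = None
    for col, c in enumerate(T[0][:-1]):
        if c >= 0:
            continue  # only columns with a negative z-row coefficient
        # Minimum ratio test on this eligible column.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(1, len(T)) if T[i][col] > 0]
        if not ratios:
            continue  # column has no eligible pivot row; skip it
        ratio, row = min(ratios)
        score = c * ratio  # objective coefficient scaled by the ratio
        if best is None or score < best[0]:
            best = (score, row, col)
    return None if best is None else (best[1], best[2])

# Trying it on the computer-retailer tableau from the Results section:
T = [[-300, -500, 0, 0, 0, 0],
     [3, 10, 1, 0, 0, 70],
     [3, 3, 0, 1, 0, 60],
     [0, 2, 0, 0, 1, 12]]
T = [[Fraction(v) for v in row] for row in T]
print(choose_pivot(T))  # (2, 0): pivot in the x1 column, 60-hour row
```

Note the contrast with the standard rule: the standard pivot would enter x2 (coefficient -500), while the scoring rule prefers x1, whose score of -300 * 20 = -6000 beats x2's -500 * 6 = -3000.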
Fortunately, the backbone of the Simplex Method could stay intact, and the only changes would be made to the pivot identification portion of the algorithm. After inspection, I decided to keep the first part of the rule concerning column identification. Each column's z-row value still represents the coefficient associated with the original objective equation. Also, the minimum ratio test represents the unit value along each vector. With those two elements already identified in the standard Simplex algorithm, using them for the modified algorithm seemed like a natural fit.

All of the parts needed for the modified algorithm were already there. The next step was to fit those pieces to the idea for the new algorithm. The initialization is the same as in the original method: convert the problem into a system and convert the system into a tableau. Next, where Simplex identifies only the most negative z-row value, the new algorithm requires the identification of all eligible columns; in other words, identify all columns whose z-row coefficient is less than zero. Once those columns are identified, perform the minimum ratio test on each eligible column. Following that test, choose the lowest non-negative value in each eligible column and multiply that value by the corresponding objective-row coefficient. Whichever product is most negative identifies both the pivot column and the pivot element. Once that process is complete, pivot the tableau in the same manner as the original Simplex algorithm. Repeat the algorithm until there are no negatives in the objective row, or until the only eligible columns have all negative values in their constraint rows (which would violate the minimum ratio test requirements). The resulting z-row value will be the optimal value of the linear programming problem.

[Figure: the modified algorithm.]

Results

In order to test the results, I needed a large number of sample linear programming problems.
The set of LPs needed to include unremarkable maximization and minimization problems. The new method was evaluated with 2-variable, 3-variable and n-variable LPs in order to identify the limits of the algorithm. I also tested examples where the Simplex Method showed temporary stalling and infinite cycling. After verifying that the Simplex Method takes a longer path than necessary, or that the algorithm cycles, the same LP needed to be tested in the modified algorithm.

Simulated real-world example: The first example is a maximization problem. In our scenario, a fledgling computer retailer wants to begin building its own personal computers. The retailer offers two models. The first is a base-level PC. It takes 3 hours to install the motherboard and associated hardware and 3 hours to install the operating system; it does not require any updating following the software installation. For the more advanced computer, the build requires 3 hours to install the board and hardware, 10 hours to install the OS and additional software, and 2 hours to update the system. Due to the other requirements placed on the employees, the company can only dedicate as much as 60 man-hours a week to hardware installation, 70 hours to operating systems, and no more than 12 hours to updates. The profit on each computer is $300.00 for the basic model and $500.00 for the advanced model. Due to having lower prices than its competitors, demand is high and the company cannot keep enough computers on the shelves. The owner of the computer company wants to know approximately how many of each machine he should dedicate his employees to build per week in order to maximize his profit. This is a fairly simple maximization word problem. Our first step is to identify the objective function and the constraints.
Objective function:

    (# of Comp 1) x (profit per Comp 1) + (# of Comp 2) x (profit per Comp 2) = Profit

Constraints:

    Hardware:    3 hrs (# Comp 1) +  3 hrs (# Comp 2) <= 60 hours
    Oper System: 3 hrs (# Comp 1) + 10 hrs (# Comp 2) <= 70 hours
    Updates:     0 hrs (# Comp 1) +  2 hrs (# Comp 2) <= 12 hours

So after identifying the objective function and the constraints, it is time to convert the information into a system.

    Maximize Z = 300 x1 + 500 x2
    subject to:
        3 x1 +  3 x2 <= 60
        3 x1 + 10 x2 <= 70
        0 x1 +  2 x2 <= 12
    and x1, x2 >= 0

Next, we convert the system into a tableau and add slack variables.

             x1     x2    s1   s2   s3 |  Z
    z-row  -300   -500     0    0    0 |  0
    s1        3     10     1    0    0 | 70
    s2        3      3     0    1    0 | 60
    s3        0      2     0    0    1 | 12

[Figure: the sequence of Simplex tableaus, with each pivot point highlighted, and the resulting path.]

So our company owner would choose to build roughly 19 of computer 1 and only one of computer 2, resulting in a profit of about $6300.00. If you notice, the path to the optimal point could have taken a shorter route. If it had taken the alternate path, not only would the distance traveled be less, but it would have used fewer iterations as well. So the next step was to test the modified algorithm.

[Figure: the sequence of modified-algorithm tableaus, with each pivot highlighted.]

Upon inspection, it is easy to see that the solutions were the same, yet the new algorithm took a much shorter route and, in this situation, fewer iterations. If you compare the two models, the path to the optimal solution using the Simplex Method traveled a unit distance of 25.24, compared to the modified method, whose path was a unit distance of 22.02. The modified method outperformed the standard Simplex algorithm by a unit value of 3.22. The comparison, while compelling, is not enough to claim an improvement over the traditional model. The next step was to address problems where Simplex cycles. This was the infinite cycle we looked at earlier:

             x1      x2      x3     x4    s1   s2   s3 |  Z
    z-row   -3/4    150    -1/50     6     0    0    0 |  0
    s1       1/4    -60    -1/25     9     1    0    0 |  0
    s2       1/2    -90    -1/50     3     0    1    0 |  0
    s3        0       0       1      0     0    0    1 |  1

In this LP, the Simplex Method cycles forever.
However, if you run the same LP through the modified algorithm, cycling does not occur. Surprisingly, a solution presents itself after only three iterations of the new method. This result by itself demonstrates one of the limitations of the Simplex Method and the potential promise of the new pivot identification rules.

[Figure: the results of the modified algorithm on the cycling LP.]

Minimization?

The modified algorithm also works on minimization problems. However, it does require some work on the front end. Since the minimization standard form is not the same as the maximization standard form, it has to be converted. The system is converted using the theory of duality discovered by John von Neumann. The theory states that the optimal value of a minimization problem (the primal) is the same as the optimal value of the dual of that system. Once the system is converted into the dual and put into standard form, it can be solved using the modified algorithm. The final value will be optimal for both systems. Also, the slack values in the final tableau represent the basic solution to the minimization problem.

[Figure: example of a minimization problem using the modified method to solve the dual. The values of the slack variables correspond to the solution of the original minimization problem.]

Restrictions

While the modified algorithm will work with any maximization system in standard form, there are some restrictions that need to be addressed. The first is temporary degeneracy, or temporary cycling. Like Simplex, it can be shown that the modified algorithm can still be made to cycle temporarily. The other issue is a limitation on the distance claim. The modified algorithm can guarantee the shortest distance only in cases where the constraints have a negative, zero or undefined slope. While both of these issues are substantial, they do not take away from the overall goal of the algorithm. The temporary degeneracy is a particularly frustrating problem.
Stalling, or temporary cycling, as mentioned before, happens when more than one constraint crosses a feasible solution on the path to the optimal solution. I believe that, like the selection of the pivot element, it can be addressed by simply designing an algorithm to affect the order of the constraints. Since the northwest method, or lowest-index method, is the default choice for the next column or row, it too should be evaluated for improvement. However, that is a topic for another paper.

The more serious restriction is the negative-slope issue. Because the new algorithm detects the distance between feasible solutions, it cannot predict the following point until it becomes the focus. This means that if the first constraint were along the y-axis and the second constraint had a steep positive slope that intersected the y-axis, the algorithm must evaluate that intersection point first before moving on. Since, in two-dimensional scenarios, the first two vectors to be evaluated intersect the origin, the method can be misled about the upper limits of each axis. Under negative-slope conditions, this problem is nonexistent due to the accurate representation of the upper bounds of each axis. Both of these issues are significant enough to warrant further examination. The good news is that both of these problems already exist in the standard Simplex Method. The stalling problem is not only prevalent in the Simplex Method; it is easy to create a scenario where a linear problem cycles infinitely instead of just stalling. It is only when you compare the Simplex algorithm with the modified algorithm that you realize the blindness associated with identifying the upper bounds along each base vector. Still, it is a solid claim that, under the negative-sloped constraint conditions, the modified algorithm will reach the optimal solution in a distance less than or equal to that of the Simplex Method.
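As an independent sanity check on the computer-retailer example from the Results section, the quoted optimum can be confirmed without any simplex machinery at all by enumerating the corner points of the feasible region, exactly the vertices both algorithms traverse. This brute-force sketch (stdlib only; helper names are mine) uses exact arithmetic:

```python
# Brute-force check of the computer-retailer LP: maximize
# 300*x1 + 500*x2 subject to 3x1+3x2 <= 60, 3x1+10x2 <= 70,
# 2x2 <= 12, x1, x2 >= 0. Enumerate intersections of constraint
# boundary lines (including the axes), keep feasible ones, take the best.
from fractions import Fraction
from itertools import combinations

# Each line is (a, b, c) for a*x + b*y = c; last two are the axes.
lines = [(3, 3, 60), (3, 10, 70), (0, 2, 12), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    (a, b, c), (d, e, f) = l1, l2
    det = a * e - b * d
    if det == 0:
        return None  # parallel lines: no unique intersection
    return (Fraction(c * e - b * f, det), Fraction(a * f - c * d, det))

def feasible(p):
    x, y = p
    return (x >= 0 and y >= 0 and
            all(a * x + b * y <= c for a, b, c in lines[:3]))

corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None and feasible(p)]
best = max(300 * x + 500 * y for x, y in corners)
print(best, float(best))  # 44000/7, about 6285.71
```

The exact optimum sits at x1 = 130/7 and x2 = 10/7, giving Z = 44000/7, i.e. roughly 19 basic machines, one advanced machine, and about $6300 of profit, consistent with the rounded figures reported earlier.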
Future Studies

There are several unanswered questions concerning the benefits of this modified algorithm. One of the next steps is obviously the mathematical proof for the new method. Also, the problem of stalling is still a nuisance that requires some attention. Before either of those two concerns can be addressed, I believe that more sample LPs need to be evaluated. If the modification survives that further evaluation, then I believe it would be a very worthwhile topic for further work.

Conclusion

My goal at the start of this project was to modify the algorithm behind the Simplex Method so that it would avoid unnecessary iterations. After initially believing that I had somehow stumbled upon a simple way to accomplish this, I found ways to prove that solution wrong. This was frustrating, but I was still convinced that there had to be a way to improve the algorithm. After reexamining the problem and the ways I found to disprove my original attempt, I realized that strictly using iterations as a measure of efficiency was the wrong metric for this project. The distance of the path traveled seemed a far better measurement of success or failure. Once I changed the focus to improving the distance traveled, the algorithm came together.

The next objective was testing. I decided that I needed to design a program that not only computed each tableau but could also display it in a meaningful manner. Through sheer luck or happenstance, I was introduced to Mathematica. This piece of software had not only a high-level programming language but also several powerful graphics functions. After designing code to run the original Simplex Method, I went back and modified that code to run the new algorithm as well. Once I verified that the new algorithm provided reliably correct results, I began testing it with problems that Simplex struggled with or failed to solve altogether. The results were compelling.
The modified algorithm not only had the ability to identify the shortest path, it could also solve LPs where the Simplex Method would cycle indefinitely. The new method does come with some caveats, though. It does not guarantee the shortest path if there are constraints with a positive slope, and it does not solve temporary stalling. Also, there is a trade-off: some additional simple calculations on the front end of the algorithm in exchange for no infinite cycling. The changes made to this algorithm have the potential to affect a wide range of industries, especially economics. I still have a long way to go before being able to prove this concept mathematically, but that is the next logical step for this project. Also, finding a way to minimize or prevent temporary degeneracy (stalling) seems a natural next step.

Often, we do not feel the need to fix something that is not broken. We simply become satisfied that the problem has been solved and we move on to the next one. However, when we do decide to apply new methods and technologies to old solutions, we often end up with tools that are amplified and more effective. Given my relative newness to the field of Operations Research, my intuition says that this must have been tried before. That being said, the results of this project speak for themselves. If it has been done before, I do not know why it is not the standard.

Bibliography

Dantzig, G.B. (1984). "Reminiscences about the Origin of Linear Programming." In R.W. Cottle, M.L. Kelmanson and B. Korte (eds.), Mathematical Programming. Amsterdam: Elsevier (North-Holland), pp. 217-226.

Dantzig, G.B. (1990). "The Diet Problem." Interfaces, Vol. 20, No. 4, The Practice of Mathematical Programming (Jul.-Aug. 1990), pp. 43-47. http://www.jstor.org/stable/25061369. Accessed 26/01/2012.

Dantzig, G.B. (1963). Linear Programming and Extensions. Princeton, NJ: Princeton University Press.

Cottle, R.; Johnson, E.; Wets, R. (2007).
"George B. Dantzig (1914-2005)." Notices of the American Mathematical Society, Vol. 54, No. 3, March 2007.

Taha, H. (2007). Operations Research, 8th ed. Upper Saddle River, NJ: Prentice Hall.

Bland, R.G. (May 1977). "New Finite Pivoting Rules for the Simplex Method." Mathematics of Operations Research, Vol. 2, No. 2, pp. 103-107.

Dantzig, G.B. (1991). "Linear Programming." In J.K. Lenstra, A.H.G. Rinnooy Kan and A. Schrijver (eds.), History of Mathematical Programming: A Collection of Personal Reminiscences. Amsterdam: Elsevier.