Study on the influence of workers heterogeneity in assembly line performance by Discrete Event Simulation Mariana Borges Guerreiro Gaspar Pereira Thesis to obtain the Master of Science Degree in Mechanical Engineering Supervisors: Prof.ª Alexandra Bento Moutinho Prof. Paulo Miguel Nogueira Peças Examination Committee Chairperson: Prof. João Rogério Caldas Pinto Supervisor: Prof.ª Alexandra Bento Moutinho Members of the Committee: Prof.ª Elsa Maria Pires Henriques Prof. Carlos Baptista Cardeira June 2014 “The woods are lovely, dark, and deep, But I have promises to keep, And miles to go before I sleep, And miles to go before I sleep.” Robert Frost “Feeling my way through the darkness Guided by a beating heart I can't tell where the journey will end But I know where to start” Avicii - Wake Me Up Este trabalho reflete as ideias dos seus autores que, eventualmente, poderão não coincidir com as do Instituto Superior Técnico. Abstract Even with the present technological evolution, assembly systems often rely on human elements for different reasons. Due to their human nature, these workers show different behaviours, which lead to variations in their time performance. Several commercial softwares allow simulating these assembly lines, in order to evaluate their stochastic performance. However, in general these solutions only provide average values, which merely allow assessing the stationary behaviour of the system. In order to better understand how the variability of the workers affects the production of the assembly line through time, i.e. to know its transient behaviour, it is necessary to have access to instant time data. It is with this motivation that a Discrete Event Simulation (DES) software was created in MATLAB. With this simulator five different types of workers classified by their performances were tested for a given production scenario (required number of parts, available time to produce, necessary number of operations) considering a straight assembly line with five workstations. How the different combinations of workers performances affect the considered system output is discussed. Keywords: assembly systems, performance, asynchronous line, discrete event simulation vi Resumo Mesmo face à presente evolução tecnológica, as linhas de montagem estão frequentemente sujeitas a mão de obra por variadas razões. Devido à natureza humana, estes trabalhadores apresentam diferentes comportamentos o que leva a diferentes desempenhos nomeadamente no tempo de trabalho efetivo. Existem vários softwares comerciais que permitem a simulação destas linhas de montagem de maneira a avaliar o seu desempenho estocástico. No entanto, geralmente estas soluções apenas geram valores médios, os quais apenas permitem avaliar o comportamento estacionário do sistema. De maneira a compreender melhor como a variabilidade dos trabalhadores afeta a produção duma linha de montagem ao longo do tempo, por exemplo, para conhecer o comportamento transiente, é necessário o acesso a toda a informação gerada ao longo da simulação, não bastando os resultados finais. É com esta motivação que foi criado um software DES criado em MATLAB. Com este simulador, cinco diferentes tipos de trabalhadores, classificados de acordo com o seu desempenho, foram testados para um determinado cenário (número de peças a produzir, tempo disponível para produção, número de operações necessárias) considerando uma linha de montagem com cinco trabalhadores. 
Neste trabalho é discutido como diferentes combinações de desempenho dos trabalhadores afetam a saída do sistema. Palavras chave: sistemas de montagem, desempenho, linha assíncrona, simulação de acontecimentos discretos vii Acknowledgments First of all, I would like to thank Prof. Paulo Peças and Prof.ª Alexandra Moutinho for the availability, concerns, support and perseverance shown throughout this work. I know it was difficult to put up with me, especially after I started working in NOV, and that this thesis could have gone in the wrong way if it wasn't for your support and motivation given. Your help was precious to me. I would like to thank my mother for her eternal belief in me that has been helping me, and still helps me a lot, in every day of my life. She is my role model. I love you mom. I need to thank my boyfriend Miguel for being the most reasonable man I have ever known, he always helps me see the practical side of things. If you don't know where to go, you always know where to start. You are my partner and I love you. In addition, I thank my younger sister Madalena, for sometimes making me feel like a superhero and other times like an idiot, both help me grow. I also would like to thank all of my old and my new friends for worrying about me, giving me tips and constantly asking about my thesis. That kept me on track! Finally I would like to thank my father, he was my inspiration to get into Engineering and Instituto Superior Técnico. viii Contents Contents ........................................................................................................................................ix List of Figures ................................................................................................................................xi List of Tables ............................................................................................................................... xiii Notation ....................................................................................................................................... xiv 1. 2. 3. Introduction............................................................................................................................. 1 1.1 Objective and contributions ........................................................................................... 1 1.2 Dissertation structure..................................................................................................... 2 Assembling Lines and Discrete Event Simulation ................................................................. 3 2.1 Manufacturing systems ................................................................................................. 3 2.2 Automation and manual work in the assembly.............................................................. 6 2.3 Key performance indicators for production lines ........................................................... 8 2.4 Assembly line balancing problems ................................................................................ 9 2.5 Simulation Models ....................................................................................................... 11 2.5.1 Advantages and worries on using DES ................................................................... 13 2.5.2 DES project phases ................................................................................................. 14 2.5.3 Applications in manufacturing ................................................................................. 
15 2.6 Buffers ......................................................................................................................... 16 2.7 Workers performance heterogeneity and variability .................................................... 19 Assembly Line Model ........................................................................................................... 23 3.1 The case study ............................................................................................................ 23 3.2 Methodology ................................................................................................................ 28 3.3 Simulation model ......................................................................................................... 29 3.3.1 Stochastic convergence .......................................................................................... 33 3.3.2 Warm-up .................................................................................................................. 35 3.4 3.4.1 3.5 4. Model Validation .......................................................................................................... 37 Performances validation .......................................................................................... 37 Assembly line model conclusions ................................................................................ 41 Results analysis ................................................................................................................... 42 4.1 Combinations with all equal performances ................................................................. 42 ix 4.2 4.4 5. Combinations with extreme performances .................................................................. 44 4.2.1 Combinations with QI performance ......................................................................... 45 4.2.2 Combinations with QIII performance ....................................................................... 50 Final remarks ................................................................................................................... 55 Conclusions .......................................................................................................................... 58 References .................................................................................................................................. 60 Appendix ...................................................................................................................................... 64 A.1 - Balancing a production line ............................................................................................. 64 x List of Figures Figure 1 - Variability analysis from the majority of commercial DES software compared with the developed simulator ...................................................................................................................... 2 Figure 2 – Part transfer in serial assembly lines .......................................................................... 5 Figure 3 – Types of control in the inflow of the assembly line ...................................................... 6 Figure 4 – Performance characteristics of assembly systems following different assembly principles – adapted from Michalos et al. [13] ............................................................................... 7 Figure 5 – Precedence graph as in Becker & Scholl [7] ............................................................... 
9 Figure 6 – Feasible line balance as suggested in Becker & Scholl [7]....................................... 10 Figure 7 – Ways of studying a system according to [21]. ........................................................... 12 Figure 8 – Steps in a DES Project as in Oakshott [31]............................................................... 15 Figure 9 – Buffer’s functions [36] ................................................................................................ 16 Figure 10 – Buffer’s location in a productive system based on Battini et al. [36] ....................... 17 Figure 11 – Total annual costs and optimal buffer size when inventory costs are not negligible as in Battini et al. [36] .................................................................................................................. 18 Figure 12 - Expected performance representation ..................................................................... 24 Figure 13 - Representation of the four types of performance in terms of individual deviations to the average task time and variability of the workers population based on Folgado [1] .............. 25 Figure 14 - Triangular distributions probability density functions of workers performances ...... 26 Figure 15 - Representation of the assembly line considered in the simulation model ............... 27 Figure 16 - Methodology employed on the study ....................................................................... 28 Figure 17 - Block diagram of the assembly line model .............................................................. 30 Figure 18 - Algorithm steps summary ........................................................................................ 30 Figure 19 - Number of parts and cycle time comparison............................................................ 33 Figure 20 - Obtained triangular distribution with all workers of type Expected .......................... 38 Figure 21 - Expected performance dispersion compared to the histogram obtained ................ 39 Figure 22 - QI performance dispersion compared to the histogram obtained ............................ 40 Figure 23 - Triangular distribution for every performance .......................................................... 41 Figure 24 - Histogram obtained by simulating assembly lines with all workers with the same performance (57.600 units, seed:1) ............................................................................................ 42 Figure 25 - Plot obtained for seeds 1, 5 and 10, with the assembly line operating with all workers with Expected behaviour (57.600 units) ........................................................................ 43 Figure 26 - Blocked, starved and working times percentage for a line with Expected workers (57.600 units, 10 seeds) .............................................................................................................. 44 Figure 27 - All workers Expected performance line compared to a behaviour from a line with four Expected workers and one QI performance worker in the end (57.600 units, seed:10)...... 47 xi Figure 28 - All workers Expected line performance (seed 1) compared to the performance from 10 different lines (all 10 seeds used separately) with four Expected workers and one QI performance worker in the end (57.600 units) ............................................................................ 
48 Figure 29 - Comparison between an assembly line with all workers Expected and another with one worker QIII in the beginning and four others Expected (57.600 units, seed:10) .................. 51 Figure 30 - Comparison between an assembly line with all workers Expected and another with one worker QIII in the end and four others Expected (57.600 units, seed:1) ............................. 52 Figure 31 - All workers Expected line performance (seed 1) compared to the performance from 10 different lines (all 10 seeds used separately) with four Expected workers and one QIII performance worker in the end (57.600 units) ............................................................................ 52 Figure 32 - Parallel assembly line combinations - plots with the percentage of starved, blocked and working times for the workers in these combinations (57.600 units, 10 seeds)................... 55 Figure 33 - QI and QIII triangular distributions ........................................................................... 56 Figure 34 - Comparison between an assembly line with all workers Expected, one with one worker QI in the end and four others Expected and another with one worker QIII in the end and four others Expected (57.600 units, seed:10) ............................................................................. 57 xii List of Tables Table 1 - Manufacturing systems layouts according to equipment grouping and product flow, based on Dilworth [2]. .................................................................................................................... 4 Table 2 - Comparison between synchronous and asynchronous assembly lines based on Witzenburg [9] ............................................................................................................................... 5 Table 3 - Key performance indicators based on Aguiar et al. [15]................................................ 8 Table 4 – Versions of SALBP [7] ................................................................................................ 11 Table 5 – Individual differences in productivity based on Hunter et al. [44] ............................... 20 Table 6 - Average performance deviations in relation to average performance (Expected - E) 26 Table 7 - Cycle time for 57.600 parts with 10 different random numbers sequences ................ 34 Table 8 - Medium cycle time for 10 different sequences of random numbers ............................ 34 Table 9 - Simulations for the warm-up ........................................................................................ 36 Table 10 - Inputs considered for each type of performance of the workers allocated to the system based on Folgado [1] ...................................................................................................... 37 Table 11 - Results from the simulation model with all workers with the same performance (57.600 units, 10 seeds) .............................................................................................................. 43 Table 12 - First case results from the simulation model with one worker QI and all the rest with E performance (57.600 units, 10 seeds) ..................................................................................... 45 Table 13 - Variability of all type of workers ................................................................................. 
46 Table 14 - Blocked and starved times obtained for a line with four workers with Expected performances and one worker in the end with QI performance (57.600 units, 10 seeds) .......... 46 Table 15 - Second case results from the simulation model with two workers QI and all the rest with E performance (57.600 units, 10 seeds) .............................................................................. 48 Table 16 - Third case results from the simulation model with tree workers QI and two with E performance (57.600 units, 10 seeds) ........................................................................................ 49 Table 17 - Fourth case results from the simulation model with four workers QI and one with E performance (57.600 units, 10 seeds) ........................................................................................ 49 Table 18 - First Case results from the simulation model with one worker QIII and all the rest with E performance (57.600 units, 10 seeds) ..................................................................................... 50 Table 19 - Blocked and starved times obtained for a line with four workers with Expected performances and one worker in the end with QIII performance (57.600 units, 10 seeds) ........ 53 Table 20 - Second case results from the simulation model with two workers QIII and all the rest with E performance (57.600 units, 10 seeds) .............................................................................. 53 Table 21 - Third case results from the simulation model with tree workers QI and two with E performance (57.600 units, 10 seeds) ........................................................................................ 54 Table 22 - Fourth case results from the simulation model with four workers QI and one with E performance (57.600 units, 10 seeds) ........................................................................................ 55 xiii Notation The following notation is used throughout this work. Acronyms WIP – Work In Process ALBP – Assembly Line Balancing Problem SALBP-1 – Simple Assembly Line Balancing Problem type 1 SALBP-2 - Simple Assembly Line Balancing Problem type 2 SALBP-E - Simple Assembly Line Balancing Problem Efficiency SALBP-F - Simple Assembly Line Balancing Problem Feasible DES – Discrete Event Simulation OM - Operations management TH - Throughput Time tc - Cycle Time E - Expected performance QI - QI performance QII - QII performance QIII - QIII performance QIV - QIV performance Symbols - Standard deviation xiv 1. Introduction The flexibility of the human factor is still considered an advantage in several productive environments, but the heterogeneity and variability of a worker can be an inconvenience if there is not an understanding of how these properties influence a production system. Manual work is the main work force in assembling processes where a great deal of accumulated value is added to the product and that is why it is important to manage assembly systems wisely. When the assembly system is in operation, workers perform the assembly tasks with some degree of variability, since in one repetition the worker may be quicker and in the next slower. These variations are often disregarded in the early system design stage. On top of that, the workers are different from one another: there may be differences in the speed and consistency while performing assembly tasks, some workers might be slower than others, and some workers might have task times more variable than others (or the other way around). 
Given the nature of manual processing it is expected some degree of heterogeneity in the performance, in terms of speed and consistency. In the daily production, the production management has then to deal with these variations on the workers performance, and make sure that the system output fulfils the customer order. To do so, simulation tools are often used to assess and help on decisions in production systems. Discrete Event Simulation (DES) is a decision support tools that allows designing and analyzing the performance of complex processes and systems. Also, it can be understood as the process of building a representative model of a real system and conducting experiments with this model in order to better understand the real system behaviour and assess the impact of alternative operating strategies. DES makes it possible to study, analyze and evaluate a variety of situations that could not be known otherwise. In an increasingly competitive world, DES has become an essential methodology for solving problems for both engineers and top managers. DES has been used over time as an important helping technique in decision making in various fields of activity, such as telecommunication systems, wind tunnel testing, evaluation of offensive and defensive tactics in war situations, operations maintenance, among others. 1.1 Objective and contributions This work objective is to use DES in an assembly line with five workstations and to make assessments regarding the impact of workers performance heterogeneity and variability on the output of the assembly line proposed. This thesis is based on a previous empirical work by Folgado [1] that was performed in collaboration with a manufacturing company located in Portugal, where workers performances data were gathered and several conclusions on the subject of individual variability among a group of workers were extracted. She classified the worker in five types by their performances in the assembly line but there isn't yet an assessment 1 on how those performances combined, influence the output on this type and size of an assembly line. Commercial DES softwares are often not that flexible when simulating assembly lines, i.e. it's difficult to obtain data about the cycle time evolution for each one of the items that goes through the assembly line - Figure 1, making it difficult to analyse the variability throughout the simulation. In general this variability is obtained by calculating the standard deviation between all the different sets tested. This merely allows assessing the stationary behaviour of the system. In order to better understand how the variability of the workers affects the production of the assembly line through time, i.e. to know its transient behaviour, it is necessary to have access to instant time data. It is with this motivation that a Discrete Event Simulation (DES) software was created in MATLAB. Figure 1 - Variability analysis from the majority of commercial DES software compared with the developed simulator 1.2 Dissertation structure The thesis document is organized in the following way: in the next chapter (Chapter 2), the literature review can be found, contextualizing the research topic, namely on manufacturing systems, assembly line balancing problems, simulation models and workers performance variability and heterogeneity. 
Chapter 3 focuses on the previous empirical work on what this thesis is based on, describing the case study, the methodology used, how the simulation model was designed and which considerations were carried out and how was the model validated. Chapter 4 contains the results from the simulations and the analysis of these results. In Chapter 5 are drawn and some possible improvements on the system model are presented and left for future work. 2 2. Assembling Lines and Discrete Event Simulation This chapter goes through some background studied for this work and shows definitions and terms needed to better understand this document. First, manufacturing systems are explained to comprehend where assembly lines stand. The next section is about how automation and manual work are a struggle through evolution of industries and where does manual work fit on production today. After that, some key performance indicators are described and explained. Later, assembly line balancing problems are introduced and the importance of simulation is discussed. Subsequently simulation is the subject presented and finally the topic of buffers. 2.1 Manufacturing systems In this section the manufacturing systems are described, particularly the classifications regarding the work flow and the equipment layout with the objective of understanding how assembly lines work. Manufacturing systems layouts can vary accordingly to the authors, but four types of organizations appear rather constantly: fixed position layout (also called fixed product layout or project layout), process layout, product layout and cellular layout [2] [3] [4] – Table 1. According to Chase et al. [3] there are three basic types of facility layouts and the cellular layout is referred as a hybrid type. The layouts are described below: • In fixed position layout the product remains still and all the manufacturing equipment and resources move around it to perform the operations needed. This kind of format can be found in ship or aircraft industries, due to the size and weight of the products. • When similar equipment is grouped, being that they perform similar functions, it is called process layout. The product then flows through the different areas (drill area, paint area, etc.) in order to undergo all the operation steps needed. This layout is chosen when there is a wide variety of products. It is used for dies and moulds or in medical care facilities, for example. • On the other hand, for low variety of products but high production volumes, there is the product layout. The equipment is organized according to the sequence of manufacturing operations required to create the product. Automobiles, refrigerators and televisions are a few examples of products from industries that use this design; • In cellular layout the equipment is grouped into cells (or clusters) to process similar parts, since the processing requirements or shapes can be alike. Each cell can work 3 almost like a product or process layout. Electrical appliances, electrical cables sets for automobiles and hydraulic and engine pumps used in aircrafts can be examples of the products that need this kind of layout arrangement. Table 1 - Manufacturing systems layouts according to equipment grouping and product flow, based on Dilworth [2]. 
Fixed position layout: the equipment moves around the product, while the product remains fixed (e.g. ships or aircraft).
Process layout: similar equipment is grouped together and the product flows through the different equipment areas (e.g. medical care facilities).
Product layout: the equipment is organized according to the sequence of manufacturing operations and the product flows in a straight line (e.g. automobiles, refrigerators and televisions).
Cellular layout: the equipment is organized according to the sequence of manufacturing operations and the product flows through the different equipment areas (e.g. hydraulic and engine pumps used in aircraft).
Assembly lines are flow-oriented production systems, so they are a specific case of product layout. An assembly line consists of a series of workstations, where one or more operators perform an attributed set of small tasks on each part as it passes the station. The complete assembly operation is divided into small work elements which are distributed among the stations of the line [5]. The parts visit the workstations in succession as they move along the assembly line, generally by some kind of transportation system, e.g. a conveyor belt [6]. Since the 1910s, when Henry Ford introduced this approach, several developments took place which changed assembly lines from strictly paced and straight single-model lines to more flexible systems [7]. In this line of thought, synchronous and asynchronous transfers in assembly lines are two different ways of transferring the product between workstations - Figure 2. The first approach, synchronous transfer, was used by Ford and consists of a conveyor system that moves all the parts at the same time, slot by slot, so that the workers can perform their given task on all of the parts that pass in front of them. The products can advance to another workstation or to an intermediate position. In the other approach, asynchronous transfer, the worker finishes his task on a given part and then passes that part to the next worker or, if that worker is occupied, puts the part in a buffer that exists between the workstations, joining the work-in-process (WIP) [8]. In the latter case, the rule of First-In First-Out (FIFO) may not apply because the work-in-process may be stored in a container [1].
Figure 2 – Part transfer in serial assembly lines (synchronous transfer: a conveyor system moves all the parts at the same time, transferring them to the next station or to an intermediate position; asynchronous transfer: parts are moved by the workers when the task is complete, and if the next worker is occupied the part joins the WIP).
These transfer modes have to be selected considering the effect they will have on the production line. There are some advantages and disadvantages associated with both transfer types, as indicated in Table 2. With an automatic conveyor, synchronous assembly lines can have higher speed transitions between workstations, but the transfers can only happen when the slowest operation is complete, since there are different tasks occurring at the same time. On the other hand, in asynchronous assembly each workstation is not paced by the slowest operation, since there can be some kind of storage between the workstations for the work-in-process. This justifies why this type of transfer might be more flexible but also causes higher levels of WIP.
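To make the pacing effect concrete, consider a short numerical sketch (the station times below are hypothetical, not data from any source cited here): in a synchronous line every transfer has to wait for the slowest station, so the interval between consecutive finished parts equals the largest station time.

```matlab
% Hypothetical station times for a five-station synchronous line, in minutes.
stationTimes = [4 3 5 2 4];

% All parts move together, so a transfer can only take place once the
% slowest station has finished its work on the current part.
transferInterval = max(stationTimes);        % 5 min between consecutive outputs
partsPerHour     = 60 / transferInterval;    % 12 parts per hour

fprintf('One part leaves the line every %g min (%g parts/h).\n', ...
        transferInterval, partsPerHour);
```

With deterministic times an asynchronous line is limited by the same bottleneck; the benefit of decoupling the stations with buffers only appears when the task times vary, which is the situation studied in this work.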
Table 2 - Comparison between synchronous and asynchronous assembly lines based on Witzenburg [9]
Synchronous: advantages – relatively low cost, high speed; disadvantages – the part movement between workstations is governed by the slowest operation.
Asynchronous: advantages – flexible, improved utilization/uptime, movements not paced by the slowest operation; disadvantages – high WIP, high throughput time.
Furthermore, asynchronous lines can be classified as closed or open [10] – see Figure 3. These definitions differ on how the input of the line is controlled. Closed asynchronous lines are dependent on the amount of WIP, while open asynchronous lines rely on the space available in the first station or on the size of its buffer. The WIP in closed asynchronous lines is kept on track by some kind of identifier or tag (transport pallets or attached cards) that is removed when the finished product leaves the line. This identifier is then used again on a new product, but that product only enters the assembly line when there is one tag free for use. As Bulgak et al. [11] put it, "A fixed amount of pallets perpetually circulate in closed loop".
Figure 3 – Types of control in the inflow of the assembly line (open: if the first workstation, or its buffer, is free, the part can enter the assembly line; closed: the part can enter the assembly line only if there are free identifiers or tags).
2.2 Automation and manual work in the assembly
Industries face dynamic challenges every day, such as fast-changing demands, an increasing number of product variants, decreasing product cycles and precise delivery times. According to Bley et al. [12], these challenges cannot currently be met with strategies of high automation: studies made in Germany concluded that a large number of companies that had invested in high automation recognized that these solutions are not flexible enough and have since reduced their level of automation. Automation is sometimes implemented with wrong assumptions about its economic value and often in an inappropriate manner. Flexibility in automation usually means high costs and, with constant innovation in the market, investing in fully automated facilities may become too expensive and risky for the companies. Michalos et al. [13] stated that human operators are considered major flexibility enablers, since they are able to quickly adapt to changing products and market situations. The workers' ability to move between different workstations and to perform several assembly tasks is also a way of handling the increasing demand for larger product variability. Manual work is also common when the product is fragile or has to be handled with great precision, actions that are easier for human operators and not so much for industrial robots. The extent of customization causes a greater number of variants in the final assembly stage, so the human workforce is mainly used at this stage due to the high flexibility it provides. Generally, four approaches can be relevant in the design of an assembly system: manual assembly, semi-automated assembly, flexible assembly and fixed assembly. These assembly principles and their respective assembly system performances, in terms of production volumes, number of variants, batch sizes and flexibility, are presented in Figure 4.
Figure 4 – Performance characteristics of assembly systems following different assembly principles (production volume, number of variants, batch size and flexibility for manual assembly, semi-automation, flexible automation and fixed special-purpose automation) – adapted from Michalos et al. [13]
So, for manual assembly systems, it can be said that they are advisable in situations where:
• the production volumes are relatively low;
• the number of variants is considerable;
• the batch sizes are small;
• the required flexibility is high.
Overall, since the 1980s, when a strong trend to automate assembly tasks was led by companies like Volkswagen, Fiat or General Motors, most attempts failed to produce the desired results despite the massive investments, and the advancement towards automating the assembly process has been slow, irregular and not all that successful [14].
2.3 Key performance indicators for production lines
Performance indicators are important to understand, design or make significant decisions about the operation of a production line. In this section some relevant key performance indicators – see Table 3 – are described, and an example was created for better understanding and tangibility of the definitions.
Table 3 - Key performance indicators based on Aguiar et al. [15]
Cycle time: the elapsed time between two consecutive work pieces or products at the end of the line (output). It also corresponds to the maximum time available for production at each workstation, and the longest task in the line defines it. If there is a demand, the desired cycle time can be calculated as $t_c = \dfrac{\text{available time}}{\text{demand}}$.
Production rate: the ratio of the available time (working time) to the cycle time, $\text{production rate} = \dfrac{\text{available time}}{t_c}$.
Number of workers: the number of workers or working positions necessary to meet the demand, $\#\text{workers} = \left\lceil \dfrac{\sum_i t_i}{t_c} \right\rceil$, where $t_i$ are the individual task times.
Efficiency: represents how much the equipment and the workers are being useful, $\text{Efficiency}\,(\%) = \dfrac{\sum_i t_i}{\#\text{workers} \times t_c} \times 100$.
Throughput time: the period required for a work piece to pass through the manufacturing process, $TH = WIP \times t_c$.
The cycle time is always conditioned by the slowest task, whether the assembly line is synchronous or asynchronous. By observation of the equations, the production rate is conditioned in the same way. Therefore, an asynchronous system has higher availability in the workstations, but that does not mean a higher cycle time; it means a greater quantity of work in process (WIP). Little's Law states that "The average number of customers in a system (over some interval) is equal to their average arrival rate, multiplied by their average time in the system." [16]. Symbolically, since the line delivers one part every $t_c$ time units, this can be represented as
$WIP = \dfrac{TH}{t_c}$ (1)
It might be tempting to conclude that WIP reduction will always reduce cycle time. However, reducing WIP in a line without making any other changes will also reduce throughput.
2.4 Assembly line balancing problems
In this section the assembly line balancing problem (ALBP) and the existing tools and techniques for dealing with this kind of difficulty are introduced. Balancing an assembly line means adjusting it to the necessities of the demand by assigning the tasks to a planned sequence of stations so as to satisfy the precedence relations, maximizing or minimizing some line features in order to get the best design available at the moment.
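As an illustration of how the indicators of Table 3 interact when a line is sized for a given demand, the following MATLAB sketch works through a hypothetical scenario; all numbers (shift length, demand, task times, WIP level) are invented for the example and are not data from the case study.

```matlab
% Hypothetical sizing example using the indicators of Table 3.
availableTime = 8*60*60;                      % one 8 h shift, in seconds
demand        = 960;                          % parts required in that shift
taskTimes     = [7 5 6 4 8 7 5 9 5 6];        % elementary task times, in seconds

% Desired cycle time: available time divided by the demand.
tc = availableTime / demand;                  % 30 s per part

% Production rate achievable with that cycle time.
productionRate = availableTime / tc;          % 960 parts per shift

% Minimum number of workers (workstations) needed to absorb the work content.
nWorkers = ceil(sum(taskTimes) / tc);         % ceil(62/30) = 3 workers

% Line efficiency for that number of workers.
efficiency = sum(taskTimes) / (nWorkers * tc) * 100;    % about 69 %

% Throughput time via Little's Law, assuming (hypothetically) 4 parts of WIP.
WIP = 4;
throughputTime = WIP * tc;                    % 120 s spent inside the line

fprintf('tc = %g s, %d workers, efficiency = %.1f %%\n', tc, nWorkers, efficiency);
```

Distributing those 62 seconds of work content over the three workstations without violating the precedence relations is precisely the balancing problem addressed below.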
The problem of deciding which features benefit most the final objective of the assembly line is called assembly line balancing problem (ALBP) and the first published paper about this matter is from the 1950’s, where linear programming is suggested for the solution [17]. Manufacturing a product on an assembly line requires dividing the total amount of work into smaller groups of elementary operations called tasks. Additionally for technological and organizational reasons there are precedence constraints between tasks that must be respected. Precedence graphs show tasks in a visual and summarized way – see Figure 5. Each task is represented inside a circle with its task time indicated next to it, and all precedence constraints are represented by arrows. Figure 5 – Precedence graph as in Becker and Scholl [7] Specifically in Figure 5 there are 10 tasks represented, the task times vary between 1 and 10 time units and, for example, task 5 can only start if tasks 1 and 4 (directly) and task 3 (indirectly) are completed. The precedence graph can create a basic ALBP and there are many ways to solve the same problem, even when the demand is the same. For example, for Figure 5, if the decision based on the demand should be that the cycle time needs to be 11 and the line needs to have 5 stations, one possible result could be grouping tasks 1 and 3 for station S1, tasks 2 and 4 for station S2, tasks 5 and 6 for station S3, tasks 7 and 8 for station S4 and finally tasks 9 and 10 for station 5 – see Figure 6. It is noticeable that there is no waiting time for stations S2 and S5 but there is for the other stations [7]. The time that a worker is waiting to pass the part to the next worker, being that the next worker is still occupied, is the blocked time. 9 The starved time is when a worker has finished his/her job in the part and passes its part to the next but does not have another part available to start working again. S4 S1 S5 S2 S3 Figure 6 – Feasible line balance as suggested in Becker and Scholl [7] The rhythm of an assembly line can create two types of ALBP for synchronous lines, paced and unpaced. Paced is where all stations have their station time limited to the cycle time as a maximum value for each part, leading to a fixed production rate. In a more extreme case of pacing, the worker can have his performance rigidly paced by a machine, where the time available to perform the work is equal to time required to complete it. Unpaced corresponds to work situations “in which the speed of working is not determined or influenced by a machine, belt, or other worker” [18]. Here, all stations function at their own speed so work pieces may have to wait before they can enter the next station, and/or stations may get starved when they have to wait for the next work piece. Allocating buffers between stations, asynchronous lines, can partially overcome these difficulties but this solution is accompanied by the added decision problem of positioning and dimensioning buffers [7]. Another factor that can lead to different versions of ALBP is the variability in the task times. If there is small expected variance of the task times, these task times are considered to be deterministic. But when human workers enter the picture there is instability associated with their work rate, skill, motivation, and sensibility to failure on complex processes, causing considerable variations. 
In such cases stochastic task times have to be considered, meaning that the task times are expected to vary randomly according to a specific distribution. There is a field of research that, due to the number of simplifications made in the assumptions underlying the ALBP, is labelled the simple assembly line balancing problem (SALBP) in many accepted reviews. The characteristics of the SALBP are [19] [7] [6]:
• mass production of one homogeneous product;
• all operations are processed in a predetermined mode (no processing alternatives);
• paced line with a fixed cycle time according to a desired output quantity;
• the line is considered to be serial, with no feeder lines or parallel elements;
• the processing sequence of operations is subject to precedence restrictions;
• deterministic (and integral) processing times;
• no assignment restrictions of tasks besides the precedence constraints;
• an operation cannot be split among two or more stations;
• all stations are equally equipped with respect to machines and workers.
Several problem versions arise from varying the objective: SALBP type 1 (SALBP-1) is when the number of stations is minimized for a given cycle time; SALBP-2 is when the cycle time is minimized for a given number of stations; and maximizing the line efficiency while satisfying the precedence constraints of the products is SALBP-E. Furthermore, the problem of finding a feasible balance for a given cycle time and number of stations is known as SALBP-F – see Table 4.
Table 4 – Versions of SALBP [7]
Cycle time given, number of stations given: SALBP-F
Cycle time given, number of stations minimized: SALBP-1
Cycle time minimized, number of stations given: SALBP-2
Cycle time and number of stations both optimized (line efficiency maximized): SALBP-E
There are some guidelines for balancing a production line in the appendix. If all those guidelines have been followed, an alternative is to use parallel workstations to perform elementary time-consuming operations which cannot be subdivided. Two parallel workstations performing the same operation are capable of doubling the production speed for that specific operation, by producing two parts at the same time within the original task time. Nonetheless, the setup of an assembly line usually requires a large investment, so it is important that the line gets the best design possible, with great efficiency and performance. There are two essential ways of predicting the performance of an assembly system: through analytical models and through simulation models [20]. In the following section such models are discussed.
2.5 Simulation Models
This section is intended to convey an overview of Discrete Event Simulation (DES), its different types of models, the advantages and disadvantages it offers, and the terminology and basic concepts used in it. There are several ways to study the behaviour of a system, understood as a collection of entities (workers and machines) which act and interact to achieve a certain logical order – see Figure 7.
Figure 7 – Ways of studying a system according to Law and Kelton [21] (experiment with the actual system vs. experiment with a model of the system; physical model vs. mathematical model; analytical model vs. discrete event simulation).
So, the use of models as representations of the real system has two options: a study based on the physical model, considered as a replica of the system, on a smaller scale; or to conduct the same study using a representative mathematical model of system behaviour. If this model is simple enough, it may be possible to obtain an exact solution by analytical procedures. However, most existing systems that represent the real world are so complex that their mathematical formulation is virtually impossible. In these cases, the system should be studied using discrete event simulation, which allows modelling the behaviour of systems, with any degree of complexity and with a level of detail adjusted to each case. Therefore, as often in analytical models, there is no necessity for simplifying assumptions, as these simplifications can jeopardize the validity of these models, given its inadequacy in relation to reality [21]. An analytic model is a set of equations that characterizes a system. A simulation model is a operating model of a system that mimics the behaviour of that system [20]. DES makes it possible to study, analyze and evaluate a variety of situations that could not be known otherwise. In an increasingly competitive world, DES has become an essential methodology for solving problems for both engineers and top managers [22]. DES has been used over time as an important helping technique in decision making in various fields of activity, such as telecommunication systems, wind tunnel testing, evaluation of offensive and defensive tactics in war situations, operations maintenance, among others. DES, as method to optimize the performance, has been growing in many companies and organizations [23]. Many sectors, such as aerospace and automotive industry, are increasingly using DES at various stages of 12 their production process. Currently, production systems are becoming increasingly complex due to the imposed demands, involving the analysis of many variables whose management will necessarily have a strong impact on its performance. Thus, DES offers the ability to quickly visualize the effect that certain decisions will have in the production process [24]. 2.5.1 Advantages and worries on using DES Although DES is sometimes considered as a method to be used as a last resort, especially when all else fails, recent progress in this area, specifically in DES methodologies, available software, techniques of sensibility analysis and stochastic optimization, contributed to make DES a technique of Operational Research used in multiple sectors [25]. DES is a decision support tools that allows designing and analyzing the performance of complex processes and systems. Also, it can be understood as the process of building a representative model of a real system and conducting experiments with this model in order to better understand the real system behaviour and assess the impact of alternative operating strategies [22] [26] [27]. 
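To make the mechanics behind DES more concrete, the fragment below sketches the core of a next-event simulation for a single workstation: a simulation clock, the times of the pending events, and a loop that repeatedly jumps to the earliest one. It is only a generic illustration of the principle, with assumed names, distributions and parameters; it is not an excerpt of the simulator developed in this work.

```matlab
% Minimal next-event simulation of one workstation receiving parts at random
% and processing them one at a time (illustrative sketch only).
rng(1);                                   % fix the random number seed
horizon     = 480;                        % length of the simulated shift, in minutes
clockTime   = 0;                          % simulation clock
queueLength = 0;                          % parts waiting in front of the station
busy        = false;
nextArrival = -5*log(rand);               % exponential interarrivals, mean 5 min
nextFinish  = inf;                        % no part is being processed yet
produced    = 0;

while clockTime < horizon
    % Advance the clock to the earliest scheduled event.
    [clockTime, eventType] = min([nextArrival, nextFinish]);
    if eventType == 1                     % a new part arrives
        queueLength = queueLength + 1;
        nextArrival = clockTime + (-5*log(rand));
    else                                  % the station finishes the current part
        produced   = produced + 1;
        busy       = false;
        nextFinish = inf;
    end
    if ~busy && queueLength > 0           % start processing the next waiting part
        queueLength = queueLength - 1;
        busy        = true;
        nextFinish  = clockTime + 4 + 2*rand;   % processing takes 4 to 6 min
    end
end

fprintf('Parts produced in %g min: %d\n', horizon, produced);
```

The simulator used in this thesis follows the same event-driven logic, but with five workstations in series and task times drawn from the triangular distributions that characterize each type of worker (Chapter 3).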
There are many advantages of using DES that can be found on the literature; some of them are: • Allows to test new configurations of the production process without compromising any resources, whose costs would be high [23]; • Can be used to explore new resource scheduling policies, operating procedures, decision rules, organizational structures, information flows, without interrupting the normal functioning of the system [22]; • Allows identifying the bottlenecks in the production line, test various options in order to achieve optimal functioning, identifying the causes of delays in the flow of materials, information and other processes [23] [22]; • Allows testing explanatory hypotheses of how or why a particular phenomenon occurs in the system [26]; • Allows studying a system with a large time frame in a compressed time period, or alternatively, studying the functioning in detail on an enlarged scale of time [28]; • Allows getting to know the system and identifying which variables do influence its performance, supporting an informed decision making process based on better understanding of reality [25] [21]; • Allows testing the system behaviour under new and unexpected situations [25] [22], supporting the decision making of investments in new technology and equipment, improve production capacity, management of human and material features [24]. On the other hand, DES as a decision support tool presents some drawbacks, among which stand out: 13 • The development of simulation models requires high level of expertise on simulation language that is used [29]; • Deep knowledge of the system being modelled [22]; • DES does not provide optimal solutions to the problems being studied - allows, however, to evaluate the behaviour of the system under certain scenarios, created for that purpose by the analyst [22] [26]. 2.5.2 DES project phases A DES project necessarily involves, along its entire route, a set of stages that are connected to each other, which, with proper implementation, contribute to the construction of valid and credible models that faithfully represent the reality. All this will allow the user to obtain reliable results, as well as using it as a source for making decisions that aim to improve the performance of the model. Therefore, verifying if the model corresponds to an exact representation of reality should be a concern. Without this assurance, the results of any experiments with the model may be questionable. This guarantee will only be achieved through verification and validation of the model. Centeno and Carrillo [30] refer to verification as the process of analysis used to confirm whether the model was built according to the initially set parameters, whereas validation is a process that ensures the model is a correct representation of reality. For these authors, the model validation can be accomplished by observing the model behaviour, analyzing the results (processing times, waiting times, etc.) and noting if these are within reasonable limits. An identical definition for validation is presented in Law and McComas [31] where the authors refer some general perspectives on the concept: • If a simulation model is valid, it can be used on decision making for similar systems to the one for which the model was developed; • The ease or difficulty of the validation process of a model depends on the complexity of the system being modelled; • A simulation model should always be developed taking into account a specific set of objectives. 
Indeed, a model that is valid for one purpose may not be valid for another. Oakshott [32] considered that the verification process depends on the type of model but essentially involves finding out whether the model does what is expected of it. Validation, in turn, is understood as a process that aims to check whether the results produced by the model are in agreement with what is observed in the real system. The author draws attention to the fact that validation is an essential step in the construction of a simulation process and that, if we are not prepared to validate, the results produced by it should not be taken as reliable. Several references can be found in the literature with different proposals for the steps of a simulation project; one of these references [31] describes the set of seven steps indicated in Figure 8 to successfully conduct a project.
Figure 8 – Steps in a DES project as in Oakshott [31] (formulation of the problem; gathering information and building the conceptual model; checking whether the conceptual model is valid; programming the model; checking whether the programmed model is valid; designing, performing and analysing the experiments; documenting and presenting the simulation results).
Regarding Figure 8, it can be said that one of the most important phases in a DES project is the formulation, which consists of the definition of the problem and of the objectives to achieve.
2.5.3 Applications in manufacturing
The production sector was one of the first in which DES was used. The reason for this is mainly the kind of situations and problems usually found in this sector. The productive process is complex, often involving the use of some kind of material handling device such as conveyors and transporters, so any process failure can lead to high costs. Thus, a DES project undertaken in this area seeks to identify solutions that improve the performance of the production line, reducing manufacturing costs, increasing the utilization of machines and human resources and, at the same time, promoting the quality of the final product. DES can also give the management and the human resources involved in the production process a more accurate picture of reality, since it highlights the most critical areas for the overall system performance. There are many and varied examples of industries in the manufacturing sector that use simulation to improve their manufacturing processes, among which stand out the automotive, electronics and clothing industries [32]. General Motors Corporation, Ford Motor Company and Chrysler Corporation are a few examples of car manufacturers that use discrete event simulation techniques while designing and improving their production lines [33]. From the various successful cases, it can be concluded that DES is an essential methodology for solving problems related to production systems [34]. Beyond the analysis resources it provides, DES makes it possible to extend the knowledge and understanding of an existing, or still in design stage, production system [35].
2.6 Buffers
Production lines (automatic or manual) are often equipped with additional devices apart from the workstations and the basic mechanisms for product transport: these typically include storage areas which collect the intermediate product (work in process) along the line. These areas of intermediate storage are called buffers; they have a number of different functions, sometimes complementary to one another, and their size and position depend strongly on their function.
Figure 9 summarizes the different functions of a buffer in the production line.
Figure 9 – Buffer's functions [36] (quality control activities and detection of defective pieces; materials feeding: machine loading and assembly line loading; compensation: different machine operational times, free manual operation, different working shifts; picking activities on the line; production mix re-sequencing on the line; breakdowns: maintenance activities, micro-breakdown presence, functional downtimes).
Where stations have a very specific function and considerable time variation (different operation times), each station needs to maintain a large degree of independence, so that its efficiency is not affected by fluctuations in the production of the previous station. The lack of such independence can cause a blockage in manufacturing. For this reason, placing buffers between workstations (intermediate buffers) is an optimization problem of great importance that designers of these kinds of production systems face. A given amount of available buffer space needs to be distributed among the intermediate buffers through effective positioning planning: each product enters the system at the first station, moves progressively on to the other stations and intermediate buffer locations, and finally leaves the production line through the last workstation - Figure 10. The product's pathway is as follows: if a station has completed its tasks and the next buffer has space available, the unfinished product (WIP) is directed there. The station then begins processing a new product taken from the previous buffer. If that buffer has no product left, the station remains empty and waiting (starved) until a new product reaches the buffer. By using buffers with the correct capacity and location in an automated production line, it is possible to reduce the losses of the entire system, achieving the required rate without increasing the processing speed of the workers/machines involved.
Figure 10 – Buffer's location in a productive system based on Battini et al. [36] (In, Workstation 1, Buffer, Workstation 2, Out).
According to Battini et al. [36], the buffer size has a distinct impact on two cost types: downtime of the machines (the presence of micro-faults, maintenance times, setup times, etc.) and WIP cost. Thus, the optimal buffer size corresponds to a multi-objective optimization, as indicated by the graph in Figure 11.
Figure 11 – Total annual costs and optimal buffer size when inventory costs are not negligible, as in Battini et al. [36].
The size and location of the buffers in a production line are critical design parameters. Large buffers provide protection against variability, therefore increasing the runtime of production and helping to meet the customer's requirements. However, there is a downside when large buffers are used:
• increasing the runtime of production results in higher operating costs (working longer on the same project means having to pay the workers for that much longer) and makes it more difficult to identify the origin of some defects;
• it increases the time needed to improve or make changes in the product design to respond to market demands;
• it reduces the ability to meet deadlines on time, since the runtime is increased, resulting in non-competitive products with a high price.
Small buffers, on the other hand, can overcome most of the disadvantages of large buffers but offer only limited protection against the existing variability.
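The pathway described in this section (push the finished part to the next buffer if it has space, pull a new part from the previous buffer if it holds one, otherwise stay blocked or starved) can be written down in a few lines. The sketch below uses hypothetical names and a single pair of stations; it only illustrates the rule and is not code from the simulation model of Chapter 3.

```matlab
% Illustrative buffer rules for two consecutive workstations.
bufferCapacity = 2;      % capacity of the intermediate buffer (assumed)
bufferLevel    = 0;      % parts currently stored in it

% When the upstream station finishes a part:
if bufferLevel < bufferCapacity
    bufferLevel     = bufferLevel + 1;   % push the finished part downstream
    upstreamBlocked = false;
else
    upstreamBlocked = true;              % blocked: holds the part and waits
end

% When the downstream station becomes free, it tries to pull a new part:
if bufferLevel > 0
    bufferLevel       = bufferLevel - 1; % pull a part and start working on it
    downstreamStarved = false;
else
    downstreamStarved = true;            % starved: waits for a part to arrive
end
```

Accumulating the time each worker spends in the blocked and starved states is what yields the blocked, starved and working time percentages analysed in Chapter 4.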
In short, buffers are required to decouple operations and protect the production rate from fluctuations and variations. The question that researchers face is how to define the size and location of buffers. For any type of production line the aim is to determine the minimum number of storage spaces required and the location of these spaces, in order to maximize the overall performance of the line. It is not strictly necessary to allocate one buffer between each successive pair of operations, because buffers are only needed to improve the overall performance of a production line [37].
2.7 Workers performance heterogeneity and variability
Recently, showing the importance of human resources has become even more significant as corporations struggle with increasingly competitive markets, globalization and a fluctuating economy. Assessing employees in terms of their contribution to an organization's tactical objectives and metrics is becoming more and more important as companies struggle to allocate scarce resources so as to best promote the organization's long-term competitive position [38].
In Operations Management (OM) models there is a set of assumptions that are normally used to simplify human behaviour [39]:
• People are not a major factor (many models look at machines without people, so the human side is omitted entirely);
• People are deterministic, predictable or even identical. People have perfect availability (no breaks, absenteeism, etc.). Task times are deterministic. Mistakes do not happen, or mistakes occur randomly. Workers are identical (employees work at the same speed, have the same values and respond to the same incentives);
• Workers are independent (not affected by each other, physically or psychologically);
• Workers are "stationary": no learning, tiredness or other patterns exist. Problem solving is not considered;
• Workers are not part of the product or service. Workers support the "product" (e.g., by making it, repairing equipment, etc.) but are not considered part of the customer experience. The impact of system structure on how customers interact with workers is ignored;
• Workers are emotionless and unaffected by factors such as pride, loyalty and embarrassment;
• Work is perfectly observable. Measurement error is ignored. No consideration is given to the possibility that observation changes performance (Hawthorne effect).
Simplification is a necessary part of all modelling, and OM researchers and managers are conscious that their models involve simplified representations of human behaviour. But they may not always be aware of the consequences these simplifications can have on decision making. While assumptions like these can significantly simplify the mathematics, they can skip over important features, sometimes to the point where the resulting models yield results that are not only quantitatively inaccurate but also qualitatively misleading [39].
Juran and Schruben [40] state that differences in workers' mean processing rates can cause blocking and starving in tightly-coupled systems because of worker mismatches. Underestimating variability will cause the models to underestimate congestion and thus be overly optimistic in predicting system performance. Also, Boudreau et al. [41] point out that the usually simplified, univariate approaches to assessing employee performance used in the majority of utility analyses are unrealistic for nearly all organizational settings.
Alternatively, they propose that a broader, more multivariate conceptualization of performance may be more appropriate. However, OM has not usually included evaluations of human performance variability in its research [42]. Adding within-person performance variability to the definition of employee performance may be the way to make more accurate assessments of actual employee performance. On the other hand, if any of these inter-dependent individuals vary their level of individual performance, even a little, the impact of this variation can be multiplied throughout the process by the following stations in the system. Therefore, though overall performance surely influences productivity, variability in individual performance levels also has a major impact. Accordingly, the effective performance of an employee, regardless of how productivity is defined, should be viewed as an interaction between the individual mean level of work performance and the individual work performance variability [43]. Furthermore, Boudreau and Ramstad [38] propose that individual work performance variability in positions usually characterized by low complexity and/or low pay (characteristics relevant to the worker rather than the machine used) may have "pivotal effects" on systems, stressing the importance of implementing human resource practices with strong utility at this organizational level.
Hunter et al. [44] did a retrospective study of the data available from the prior 60 years on individual differences in productivity and determined the standard deviation of individual performance for several different categories of jobs: blue-collar (e.g., packing, machine operator, grocery checker), crafts (e.g., cook, repairman, claim evaluator), professional (e.g., dentist, doctor, attorney) and life insurance sales - Table 5. In order to show the impact of individual differences, Table 5 also shows:
• The ratio of the performance of an individual such that only 5% of people are better to the performance of an individual such that only 5% are worse;
• The ratio of the expected performance of the worst in a group of 6 individuals to the average;
• The ratio of the expected performance of the best in a group of 20 to the average.
For people in life insurance sales, note that, because of the very high standard deviation of performance, the likelihood of the worst performer having zero productivity is high.
Table 5 – Individual differences in productivity based on Hunter et al. [44]
  Occupation             Std dev/mean   Top 5%/Bottom 5%   Worst/mean   Best/mean
  Blue collar            0.20           2.2                0.75         1.37
  Crafts                 0.32           4                  0.59         1.60
  Professional           0.5            27                 0.37         1.93
  Life insurance sales   1.2            ∞                  0            3.03
Overall, the data reveal that the disparity between workers is considerable and increases with the conceptual requirements of the task. In fact, due to transformations in business paradigms, the spotlight on performance has widened from just looking at average performance to an interest in variability. Variability reduction in systems has been the prevalent priority in many manufacturing and service organizations throughout the world for quite some time [45] [46]. Doerr and Arreola-Risa [47] identified three sources of variability in task completion times: the task itself, the worker performing the task, and the environment where the task is performed. They investigated the notion that the worker performing the task is the most significant source of variability in task completion times, even when the tasks vary a great deal.
It was verified that neither the task itself nor the day on which the observations took place explained the variability in the task times, while the worker and the worker-task interaction effects were significant. It should be noted that in their experiment the manual fabrication line is considered to be unpaced; however, according to the description of the workflow policy in the observed system, there is interdependence between workers (and therefore some degree of pacing). Additionally, Baugous [43] affirms that the two individual performance facets, individual mean performance and individual performance variability, explain the vast majority of group productivity, leaving little room for the forms of variability more typically emphasized by OM research (e.g., materials defects, equipment performance problems, etc.) to influence productivity.
Also, Doerr et al. [48] suggest that workflow policies alter the levels of heterogeneity and variability. They propose two workflow policies: one where the assembly tasks are performed by one worker who passes his/her product to the next worker when the work is done, and another where the workers share their tasks if needed, knowing that it could be more efficient. After some laboratory experiments [42] these authors came to the conclusion that the work sharing policy did not improve the system efficiency. On the other hand, when workers were not sharing tasks, the slower ones became faster than usual and the workers' performances, in general, turned out to be more homogeneous. The authors argued that behavioural studies should focus more on the group, not just on the individual.
Schultz et al. [49] studied the interaction between workers and the impact it has on individual performance. They believed that subjects would adjust their own speed to match the speed of their co-workers. In their studies they concluded that there was some correlation between workers' speeds, but could not prove that this was a reaction to the co-workers. They justify this by the large amount of variation between subjects and explain that, since the workers do not always respond in the same way, average models leave a lot of variability unexplained. Moreover, since the workers had large buffers to work from and to, and the workstations were set up in parallel, they could change their own speed without affecting their co-workers.
Another behaviour arises when the subjects' performances have some degree of interdependence; it is commonly called the free rider effect [50], but is also known as social loafing [51] or the sucker effect [52]. The definition is: the reduction of individual efforts due to the presence of others. The effect has been demonstrated by measuring group output in tasks such as rope pulling and has been documented in the field of Social Psychology [53]. Since the subjects share their benefits with their co-workers, they do not enjoy their full benefits. However, the reverse effect can also occur, since peer pressure can lead to increased effort [54]. It is also suggested that the free-riding effect is more common when people find the task unimportant, uninteresting and uninvolving [55]. All these results connect to the study of interdependence among workers' performance [49], since people respond differently to the same stimuli even when working in a group with the same goal. Still, some studies show that setting up rules over productivity can lead to reductions in variability around the mean among the same group members [56].
In cases like that, it is suggested that the slowest subjects can at times speed up, but there is no guarantee that the faster workers will not slow down.
Summing up, there is variability in individual performance and evidence of heterogeneity in workers' performance within a group, and there are workflow policies that seem to have some effect on both individual variability and the variations among the group. Nonetheless, the amount of variation, and how much of this variation is reduced or increased by the different policies, was not clear in the published results until Folgado [1]. Her work, and the extent to which it is used here, is presented in the next chapter.
3. Assembly Line Model
This chapter describes the previous work on which this thesis is based. The case study is presented and explained, the methodology used is described, the simulation model of the system proposed as case study is described and validated, and all the considerations are clarified.
This work is based on a previous empirical work [1] performed in collaboration with a manufacturing company located in Portugal, where workers' performance data was gathered and several conclusions on the subject of individual variability among a group of workers were extracted. The manufacturing company where the observations took place has competences in interior automotive kinematic components, and its main products are components for automotive interiors - namely kinematic components such as air vents, ashtrays, door handles and radio panels, among others. The radio panels are supplied to integrators, which then supply them to the automotive manufacturers, while products like air vents and ashtrays are supplied directly to the automotive manufacturers. The interaction with the selected company allowed the observation of an industrial reality where, besides other manufacturing processes, there was a dedicated area for the assembly processes of the produced kinematic products.
The analyzed assembly system is a flow assembly line connected by a loop conveyor which produces radio panels. Several components (such as buttons and guide bars) have to be assembled to the panel, and the completed product inspected, before being tagged and packaged to be sent to the customer. In the mentioned work, 46 sets of readings were collected from 26 different workers allocated to that assembly system in order to analyze the differences in the workers' task times (some of the workers were observed while working in different workstations). The statistical tests applied to the workers' task times demonstrated that, for this type of assembly work, the task time distributions are significantly different both in terms of average time and in terms of variability. Two measures were considered to visualize and quantify the differences among the workers' performance: the speed, measured by the average task time, and the variability of the task time distribution, measured by the standard deviation, which is a common measure of statistical dispersion expressed in the same units as the data.
3.1 The case study
A worker might be slower or faster than the average and/or have more or less variability than the average, making it possible to propose five types of performance, depending on those combinations.
To do so, in [1] Folgado proposed to measure and compute the performance of each worker in terms of deviation from the group average performance (Expected - E), represented in Figure 12, corresponding to an average speed of 15 seconds with an average variability of 1.95 seconds.
Figure 12 – Expected performance representation (triangular distribution with minimum 10.22 s, mode 15.00 s and maximum 19.78 s)
The workers were classified as (see Figure 13):
• Quadrant I (QI): the worker is slower and his/her task times are more variable than the average;
• Quadrant II (QII): the worker is faster and his/her task times are more variable than the average;
• Quadrant III (QIII): the worker is faster and his/her task times are less variable than the average;
• Quadrant IV (QIV): the worker is slower and his/her task times are less variable than the average.
Figure 13 – Representation of the four types of performance in terms of individual deviations to the average task time and variability of the workers population based on Folgado [1]
The assembly system output is a result of the performances of the several interconnected workers allocated to it. Therefore, if there are large deviations from the average performance of the group, the system output will be hampered (this impact depends on several factors, namely on the system configuration). In the mapping approach proposed in Figure 13, the differences in performance are mapped in terms of deviations from the average task time and average variability of all the workers observed performing the same type of task. In this way, the variations in performance can be analyzed from a group perspective. All individual performances represented were calculated in terms of deviations from the average performance (Expected - E) in each workstation and plotted.
Correlation tests performed by Folgado indicate that the two variables considered are strongly positively correlated [1]. There is a significant tendency for the workers who are slower than the average to also have more variability than the average. Conversely, workers who are faster than the average tend to have lower variability in the task times. Therefore, there are two predominant types of performance: QI and QIII, according to the previously proposed classification. From the results, Folgado reports that the average QI worker takes 16% more time to complete the assembly cycle, with 26% more variability, than the Expected worker. A worker with an average performance in the opposite quadrant (QIII) takes 11% less time, with 21% less variability, when compared with the Expected performance - Table 6.
Table 6 – Average performance deviations in relation to average performance (Expected - E) based in [1]
  Type of Performance   Deviation to average task time   Deviation to average variability
  E                     0%                               0%
  QI                    +15.9%                           +26.4%
  QII                   -4.5%                            +13.3%
  QIII                  -11.3%                           -21.4%
  QIV                   +3.8%                            -9.1%
Based on the average deviations calculated for each quadrant (centroids), the values for each type of performance (QI, QII, QIII, QIV) were assessed and compared with performance E. Folgado mentions that there is no agreement in the literature on which kind of distribution to use; so, in this work, a triangular distribution was considered, which is the preferred distribution in project management problems and is the same distribution Folgado used in [1]. In more detail, it is a centred (symmetric) triangular task time distribution. Using the deviations from the average performance E, Folgado calculated the minimum, average and maximum times for each type of performance.
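As an illustration of how these deviations translate into concrete distribution parameters, the following MATLAB sketch (not code from [1]; it assumes a symmetric, centred triangular distribution, for which the standard deviation equals the half-range divided by √6) applies the Table 6 centroids to the Expected parameters and reproduces, up to rounding, the values adopted later in Table 10.

```matlab
% Sketch: derive the triangular parameters of each performance class from the
% Expected worker (mode 15.00 s, SD 1.95 s) and the deviations in Table 6.
% Assumes a symmetric (centred) triangular distribution, for which
% SD = (max - min) / (2*sqrt(6)).
modeE = 15.00;  sdE = 1.95;
devTime = [0  15.9  -4.5  -11.3   3.8] / 100;   % E, QI, QII, QIII, QIV
devVar  = [0  26.4  13.3  -21.4  -9.1] / 100;

modes = modeE * (1 + devTime);          % class mode (average task time)
sds   = sdE   * (1 + devVar);           % class standard deviation
halfR = sqrt(6) * sds;                  % half-range of the centred triangle
mins  = modes - halfR;
maxs  = modes + halfR;

classes = {'E','QI','QII','QIII','QIV'};
for k = 1:numel(classes)
    fprintf('%-4s min %5.2f  mode %5.2f  max %5.2f  SD %4.2f\n', ...
            classes{k}, mins(k), modes(k), maxs(k), sds(k));
end
```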
In Figure 14 it can be observed that the distribution of the task times changes significantly. Depending on the performance type, the time distribution can shift positively or negatively relative to the distribution for performance E (dashed line), and/or can be wider or narrower, due to the variations in variability.
Figure 14 – Triangular distributions probability density functions of workers performances
This work uses the same classification as Folgado. A serial assembly line with 5 workstations is considered - Figure 15. Each workstation has one dedicated worker, and each workstation is intended to perform one indivisible operation. The part transfer between workstations is done asynchronously. This means that, when the worker finishes the assembly tasks on his workstation, he transfers the part to the next workstation if that workstation is starved (waiting for a part). If the next workstation is not waiting (it is either working or blocked), then the current workstation becomes blocked: the worker has to wait and cannot accept any other part. The first workstation is never starved, given that it has an unlimited supply of parts, and the last station is never blocked, since the storage after the last station is also unlimited. Note that, in a first approach, it is considered that there is no possibility of buffering parts between workstations.
Figure 15 – Representation of the assembly line considered in the simulation model (flow in, workstations 1 to 5, flow out)
3.2 Methodology
In this section the methodology used to perform the study is described - Figure 16. A problem was proposed and some considerations were made. Then the conceptual model was accepted and the development of the simulation model took place. After the validation of the simulation model, the results were analysed and presented and conclusions were reached: comprehension of the problem; assembling information and constructing the theoretical model; checking whether the conceptual model is valid; developing and programming the simulation model; checking whether the simulation model is valid; analysing all the relevant data; documenting and presenting the simulation results; and drawing conclusions about the results.
Figure 16 – Methodology employed on the study
In the next sections the model of the system is presented, with all the considerations and constraints applied to create the algorithm for the study.
3.3 Simulation model
All the values that need to be calculated in this work have some variability associated and, since the available DES software solutions generally provide only average values rather than the instant time data required, it was necessary to develop a new simulation model of the case study described in section 3.1. To study this assembly line, a simulation model was implemented in MATLAB (Matrix Laboratory). This was the tool proposed since it is versatile and the author has programming knowledge in this language. Productive systems are usually designed with analytical task times, but in reality these task times suffer a random variation with a certain probability distribution that comes from the intrinsic characteristics of the workers, which generates variability.
In this model this variability is introduced by the triangular distributions, and each type of worker has one distribution attributed, as mentioned above and represented in Figure 14. For a triangular distribution the cumulative distribution function is

\[
F(x)=\begin{cases}\dfrac{(x-a)^{2}}{(b-a)(c-a)}, & a \le x \le c\\[1ex] 1-\dfrac{(b-x)^{2}}{(b-a)(b-c)}, & c < x \le b\end{cases} \qquad (2)
\]

where \(a\) is the minimum value, \(b\) the maximum and \(c\) the mode. The value \(x\) is the time the worker takes to finish one task, so the expression inserted in MATLAB, obtained by inverting the triangular distribution function (2), is

\[
x=\begin{cases} a+\sqrt{F(x)\,(b-a)(c-a)}, & F(x) \le \dfrac{c-a}{b-a}\\[1ex] b-\sqrt{\bigl(1-F(x)\bigr)(b-a)(b-c)}, & \text{otherwise.}\end{cases} \qquad (3)
\]

Given that workers can have different performances, random numbers were used so that this behaviour could be simulated in the program. For the study to be as realistic as possible, every worker needed to have a random behaviour respecting his/her triangular distribution performance; therefore, there was no control over the time that a worker needed to do his/her job, the only control being over the "average" attributed performance. It is possible to define the sequence of random numbers to be used with the rng function. That gives control over the generation of the random numbers, allowing calculations to be repeated with the same results or, while changing some variables, results that are comparable to be obtained. These random numbers were created by the function rand and then processed through the triangular distribution expression to obtain the task time of every worker. Each sequence of random numbers is defined by a parameter named seed, which is used in the following to identify a given sequence.
The simulator starts by analyzing the raw part that enters the system and then processes the group of random numbers that is going to be used. The model is defined by the number of parts that will enter the system and by the combination of workers on the assembly line. This combination of workers is characterized by the number of workstations proposed and by the performance type of the worker in each workstation. The simulator then creates the matrix of the workers' task times in the assembly line for all the parts that are going to be produced. A simple block diagram of the assembly line simulator model is represented in Figure 17, with the raw parts and the random number sequence as inputs, the number of parts and the workers' performance, quantity and position as parameters, and the cycle times, blocked times, starved times and WIP as outputs.
Figure 17 – Block diagram of the assembly line model
The algorithm that implements the assembly line simulator can be summarized in a few steps, corresponding to the sequence of events represented in Figure 18: define N parts, M workers and the worker types; repeat for all combinations of workers; repeat for seed i = 1 to 10; repeat for the N parts: simulate the assembly line and save the data; perform the statistical analysis.
Figure 18 – Algorithm steps summary
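The sampling step described by equation (3) can be sketched in MATLAB as follows. This is a minimal illustration, not the code actually used in this work; the function name and interface are assumptions, and the random number sequence is fixed beforehand with rng(seed), as described above.

```matlab
function t = sampleTriangular(a, c, b, n)
% Inverse-transform sampling of n task times from a triangular distribution
% with minimum a, mode c and maximum b (equation (3)).
% The random sequence is assumed to be fixed beforehand with rng(seed).

u  = rand(n, 1);                % uniform random numbers in (0,1)
Fc = (c - a) / (b - a);         % CDF value at the mode

t = zeros(n, 1);
left     = u <= Fc;             % left branch of the inverse CDF
t(left)  = a + sqrt(u(left)  * (b - a) * (c - a));
t(~left) = b - sqrt((1 - u(~left)) * (b - a) * (b - c));
end
```

For example, rng(1); t = sampleTriangular(10.22, 15.00, 19.78, 57600); would draw the task times of one Expected-type worker for the whole production run considered later.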
Some guidelines were followed in order to properly implement the case study assembly line. Those guidelines and some simple expressions are described below. The stored values are arrays with one row per part (i is the number of the part) and one column per workstation (j is the number of the workstation):
• The algorithm obtains a randomly created matrix of the workers' processing times - the time each worker takes to do his/her job on every part - that respects the imposed performance of every worker in the line;
• Each worker can have a performance different from the one after or before him/her. This means that the first worker can have an Expected performance while the second one has a QI performance (slow and variable); their processing times are created respecting their triangular distribution parameters;
• Every step in the assembly line is quantified, meaning the algorithm calculates all start and finish times for every part and every worker, taking into account whether the worker has been blocked or starved;
• The start time of each part is when the part enters the system to start the assembly, i.e. the start time of the first station;
• The start time for each workstation is when the part reaches that workstation and the worker starts performing his/her job;
• The first part has a start time of zero, which is when the clock starts counting:
\[ t_{start}(1,1)=0 \qquad (4) \]
• From the second to the N-th part, a worker starts a new part when he/she has finished the previous one and has passed it to the next worker; since it is assumed that there is no buffer between workstations, as explained before, a worker can only start a new part after having passed the previous part on;
• Still, for the first part the start time calculations have no constraints, because the workers are not busy when the parts start to come into the assembly line. So, from the second to the last workstation, for the first part, each worker can start his/her task on the part as soon as the previous worker passes it on. This means that the start time for the second worker is the start time of the first worker plus the processing time the first worker needed for this part:
\[ t_{start}(1,j)=t_{start}(1,j-1)+t_{proc}(1,j-1), \qquad j=2,\dots,5 \qquad (5) \]
• The time a worker waits to pass the part to the next worker, while that next worker is still occupied, is the blocked time;
• The starved time occurs when a worker has finished his/her job on a part and passed it on, but does not have another part available to start working on again;
• From the second to the N-th part, the start times are calculated in the same way as in (5), with the difference that sometimes workers become starved or blocked; these waiting times are added accordingly;
• All the blocked and starved times are also accounted for independently;
• The finish time of every worker on every part is given by the start time for that part and worker plus the processing time that same worker needs to perform his/her job on that part:
\[ t_{finish}(i,j)=t_{start}(i,j)+t_{proc}(i,j), \qquad i=1,\dots,N,\; j=1,\dots,5 \qquad (6) \]
• Similarly to the start times, the finish time of every part is when the part leaves the assembly line, and is represented by the finish time of the last station;
• The throughput time is the time each part stays in the system, i.e. the time the part takes to be assembled. It is calculated as the time the part leaves the system (the finish time of the last workstation) minus the time the part enters the system (the start time of the first workstation):
\[ t_{tp}(i)=t_{finish}(i,5)-t_{start}(i,1), \qquad i=1,\dots,N \qquad (7) \]
• The elapsed time between two workpieces corresponds to how long it takes to produce one specific part, including the waiting or blocked times, so it is calculated as the finish time of the part under consideration minus the finish time of the previous part:
\[ t_{CE}(i)=t_{finish}(i,5)-t_{finish}(i-1,5), \qquad i=2,\dots,N \qquad (8) \]
• The cycle time is the time the system takes to produce one part. The elapsed time, already mentioned, is an instantaneous cycle time between two consecutive parts, and the average of these elapsed times is the average cycle time. This average cycle time can be calculated as the finish time of the last part produced divided by the number of parts:
\[ t_{c}=\frac{t_{finish}(N,5)}{N} \qquad (9) \]
• The variability of all the elapsed times gives the cycle time variability for each simulation:
\[ \sigma_{t_{c}}=\operatorname{std}\bigl(t_{CE}\bigr) \qquad (10) \]
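A minimal sketch of how these start/finish time recurrences can be implemented for the bufferless line is given below. It is an illustration only, not the simulator code itself; the function and variable names are assumptions, and the matrix tp of task times is assumed to have been generated beforehand (for instance with the triangular sampling sketched earlier).

```matlab
function [ts, tf, blocked, starved] = simulateLine(tp)
% Timing recurrence of a bufferless asynchronous line.
% tp(i,j)          - processing time of part i at workstation j (N x M matrix)
% ts, tf           - start and finish times per part and workstation
% blocked, starved - waiting times per part and workstation

[N, M] = size(tp);
ts = zeros(N, M);  tf = zeros(N, M);
blocked = zeros(N, M);  starved = zeros(N, M);

% first part: no blocking or starving yet (equations (4) and (5))
tf(1,1) = tp(1,1);
for j = 2:M
    ts(1,j) = tf(1,j-1);
    tf(1,j) = ts(1,j) + tp(1,j);
end

for i = 2:N
    for j = 1:M
        if j < M
            free = ts(i-1, j+1);        % time station j handed part i-1 downstream
        else
            free = tf(i-1, M);          % last station is never blocked
        end
        if j == 1
            ts(i,1) = free;             % unlimited raw parts: never starved
        else
            arrive = tf(i, j-1);        % part i offered by the upstream station
            ts(i,j) = max(arrive, free);
            starved(i,j) = max(0, arrive - free);    % idle, waiting for a part
            blocked(i,j-1) = ts(i,j) - tf(i,j-1);    % upstream wait to hand over
        end
        tf(i,j) = ts(i,j) + tp(i,j);    % equation (6)
    end
end
end
```

With this convention, the average cycle time of equation (9) follows as tf(end,end)/size(tp,1), and the variability of equation (10) as std(diff(tf(:,end))).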
The next section presents some considerations/constraints applied to the described model in order to guarantee its performance.
3.3.1 Stochastic convergence
Stochastic convergence occurs when a sequence of random or unpredictable events eventually settles into a behaviour that no longer changes once the study gets far enough into the sequence. For the random numbers already mentioned in this work, the problem is that they influence the cycle time when only a small batch of parts is produced: the value of tc then fluctuates too much to allow any conclusion about its variability. This led to a study to determine from which number of produced parts the influence of using different sets of random numbers (different seeds) on the cycle time starts to fade, so that the results do not depend on the sequence of random numbers used. For this analysis, a line with 5 workstations (as proposed) and all workers with Expected performance was chosen. Using 3 different sequences of random numbers (seeds), a graph was obtained comparing the cycle time with the number of parts produced. The batches start at 100 parts and are incremented by 100 parts until 100.000 parts are reached - Figure 19.
Figure 19 – Number of parts and cycle time comparison (seeds 1 to 3)
Figure 19 shows that from around 50.000 parts the system cycle time stabilizes, independently of the chosen random number sequence. So any number of parts higher than 50.000 guarantees that the assembly line production is stable. In order to choose a number of parts that relates to reality, a simple calculation was used:
\[ \frac{6\ \text{weeks} \times 40\ \text{h/week} \times 3600\ \text{s/h}}{15\ \text{s/part}} = 57\,600\ \text{parts} \qquad (11) \]
The cycle time used (15 s/part) corresponds to the ideal cycle time proposed, i.e. the cycle without any variability. Since 6 work weeks correspond to 57.600 parts, this was the number of parts used as reference in this work.
Considering now the 57.600 parts, it was interesting to see the cycle time differences for the different seeds used. Table 7 shows the average cycle time and the respective standard deviation (SD) considering the fixed 57.600 parts and the 10 different random number sequences (each corresponding to one simulation) chosen to be used in this work.
Table 7 – Cycle time for 57.600 parts with 10 different random number sequences
It is clear that the average cycle time changes only in the third decimal place, so it is safe to say that it is stable. The mean values over all the seeds and their amplitudes are shown in Table 8.
Table 8 – Mean cycle time for 10 different sequences of random numbers
               Mean cycle time [s]   Mean standard deviation [s]
  Value        16.9234               3.0708
  Amplitude    0.0005                0.0095
The amplitude of the mean cycle time in Table 8 is the difference between the maximum and minimum cycle time values in Table 7, divided by the mean cycle time itself. For the standard deviation amplitude the same procedure was applied to the standard deviation values.
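As a minimal illustration of this amplitude computation (the function name is an assumption), the relative amplitude over the ten per-seed values can be obtained as follows.

```matlab
function [amp, m] = relativeAmplitude(x)
% Relative amplitude as used in Table 8: (max - min) / mean, applied to the
% vector x of per-seed average cycle times (or per-seed standard deviations).
m   = mean(x);
amp = (max(x) - min(x)) / m;
end
```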
The amplitudes are very small, which means that 10 seeds are enough to perform the study, since the values obtained with each seed do not differ much from one another. Each cycle time and each standard deviation will, from now on, be calculated as an average over 10 seeds. Having set the number of parts to be produced and the number of seeds to use in the analysis, the next analysis regards the influence of the warm-up on the results. This study is presented in the following section.
3.3.2 Warm-up
The term warm-up designates the time an assembly line takes to really start working properly, meaning the line is steady and working without the influence of its empty initial state. A study was performed to understand from which produced unit the modelled line reaches the steady state. The term tCE corresponds to the time span between two consecutive, already assembled, parts. As mentioned before, for the first part there is no blocking time, and the first tCE is the time difference between the moments when the 1st and the 2nd parts leave the system. Therefore, the first tCE is higher than those of the following parts. The same simulations represented in Figure 19, regarding a line composed of 5 workers with Expected performance simulated with different seeds, were considered to evaluate the impact of the warm-up. For each production volume considered, the effect on the average cycle time of eliminating the first 0 to 5 units was registered and is represented in Table 9.
Table 9 – Simulations for the warm-up
For small production volumes, 10 and 100, the considered warm-up dimension has a decisive influence on the average cycle time in the first decimal place. For larger volumes, 10.000 and 20.000, that influence appears only in the third decimal place. Therefore, for larger simulation/production volumes the warm-up dimension does not significantly affect the average cycle time value. Still, to maintain some realism, and since its impact is irrelevant, it was decided to eliminate the first part and not consider it in the following analyses.
3.4 Model Validation
Any model requires validation prior to its acceptance as a system simulator. Different kinds of calculations and verifications were undertaken, the most important of which are presented in this section.
3.4.1 Performances validation
As already mentioned in section 3.1, the performances of five different workers are used in this work, defined by the triangular distributions represented in Figure 14. It was necessary to introduce in the model the triangular distribution parameters for each of the five performance types. The values used are shown in Table 10.
Table 10 – Inputs considered for each type of performance of the workers allocated to the system based on Folgado [1]
  Class of Performance   Min. (sec)   Mode (sec)   Max. (sec)   Standard Deviation (sec)
  E                      10.22        15.00        19.78        1.95
  QI                     11.35        17.39        23.43        2.46
  QII                     8.91        14.33        19.74        2.21
  QIII                    9.55        13.30        17.06        1.53
  QIV                    11.23        15.58        19.92        1.77
The algorithm imposes these performances to simulate the behaviours.
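The kind of check shown in the next figures can be reproduced, for instance, by overlaying the histogram of the sampled task times on the theoretical triangular density. The sketch below assumes the sampleTriangular helper outlined in section 3.3 and the Expected parameters of Table 10; it is an illustration, not the plotting code used to produce the figures.

```matlab
% Compare sampled Expected task times with the theoretical triangular pdf
a = 10.22;  c = 15.00;  b = 19.78;          % Expected parameters (Table 10)
rng(1);                                     % seed 1, as in the figures below
t = sampleTriangular(a, c, b, 57600);       % helper sketched in section 3.3

histogram(t, 40, 'Normalization', 'pdf');   % empirical distribution
hold on
x  = linspace(a, b, 200);
fx = zeros(size(x));
fx(x <= c) = 2*(x(x <= c) - a) / ((b - a)*(c - a));
fx(x >  c) = 2*(b - x(x >  c)) / ((b - a)*(b - c));
plot(x, fx, 'LineWidth', 1.5);              % theoretical triangular density
hold off
xlabel('Task time (s)');  ylabel('Probability density');
legend('Simulated task times', 'Triangular pdf');
```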
Figure 20 shows the time distribution of each worker in an assembly line with five Expected-type workers, illustrating the behaviour of the developed model.
Figure 20 – Obtained triangular distribution with all workers of type Expected (task time histograms for workstations 1 to 5)
From observation of Figure 20, it is noticeable that the model imposes slightly different task time distributions due to the use of the random numbers, but the main characteristic is that all of them follow the imposed triangular distribution parameters. This means the algorithm is creating the performances correctly for every workstation, and the generated inputs are acceptable for use in the study. Figure 21 shows in more detail the values of one worker with Expected performance.
Figure 21 – Expected performance dispersion compared to the histogram obtained (seed 1; sample minimum 10.2893 s, mode 15.0405 s, maximum 19.6967 s)
It can be noticed that the values are within the limits of the intended triangular distribution. The values do not reach the limits since this is just one sample; only with a very large sample would it be possible to observe something close to the ideal performance. Similar results are obtained for the remaining types of workers, as illustrated in Figure 22 for a QI performance worker.
Figure 22 – QI performance dispersion compared to the histogram obtained (seed 1; sample minimum 11.4375 s, mode 17.4412 s, maximum 23.3247 s)
Figure 23 shows the different workers' task time distributions imposed by the model, confirming the good model performance regarding the match between the theoretical and the obtained triangular distribution parameters for every type of worker in every workstation.
Figure 23 – Triangular distribution for every performance (workstation 1: Expected; 2: QI; 3: QII; 4: QIII; 5: QIV)
3.5 Assembly line model conclusions
Given the proposed problem, a computational simulator of the case study assembly line was carefully designed, built and validated. As described in the methodology presented in section 3.2, the next step corresponds to an analysis of the assembly line simulator results, to be carried out in the next chapter.
4. Results analysis
In the last chapter some individual results about the workers were presented to validate the model. In this chapter the analysis is centred on the behaviour of the entire assembly line.
4.1 Combinations with all equal performances
In a first approach, a set of scenarios was considered where all the allocated workers have the same type of performance: either all Expected, or all QI, QII, QIII or QIV.
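A minimal sketch of the experiment loop summarized in Figure 18 is given below, assuming the sampleTriangular and simulateLine helpers sketched earlier; the configuration list shown is only an example and all names are assumptions, not the code actually used.

```matlab
% Sketch of the experiment driver (Figure 18): simulate each line
% configuration with 10 random number sequences and 57600 parts each.
params = [10.22 15.00 19.78;    % E    (min, mode, max) - Table 10
          11.35 17.39 23.43;    % QI
           8.91 14.33 19.74;    % QII
           9.55 13.30 17.06;    % QIII
          11.23 15.58 19.92];   % QIV

configs = {[1 1 1 1 1], [2 1 1 1 1], [1 1 2 1 1]};   % example worker combinations
N = 57600;  nSeeds = 10;
tc = zeros(numel(configs), nSeeds);

for k = 1:numel(configs)
    w = configs{k};                          % performance class per workstation
    for s = 1:nSeeds
        rng(s);                              % fix the random number sequence
        tp = zeros(N, numel(w));
        for j = 1:numel(w)                   % task times of each worker
            p = params(w(j), :);
            tp(:, j) = sampleTriangular(p(1), p(2), p(3), N);
        end
        [~, tf] = simulateLine(tp);
        tc(k, s) = tf(end, end) / N;         % average cycle time (equation (9))
    end
end
```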
Figure 24 represents the distribution of the system cycle times obtained with the simulation runs for assembly lines in which all the workers have the same type of performance. The plot shows 50 values and their respective frequencies.
Figure 24 – Histogram obtained by simulating assembly lines with all workers with the same performance (57.600 units, seed:1)
In these scenarios, the triangular distribution is also visible in the behaviour of the whole line. The plot shows that the highest cycle time values exceed the respective maximum task time of each type of worker. This happens because the overall performance of the line also includes the associated starved and blocked times, so sometimes the parts take longer to finish, increasing the cycle time. The results presented in Table 11 were obtained for the 5 different configurations using 10 different seeds for each configuration. Configurations with slower workers (QI and QIV) cause higher average line cycle times. Also, the configuration with QI type workers, the ones with the highest task times and variability, results in the assembly line with the highest cycle time and variability (SD).
Table 11 – Results from the simulation model with all workers with the same performance (57.600 units, 10 seeds)
As expected, the best scenario is the one where all the workers have QIII performance. In this scenario the line takes 12% less time than the scenario with all workers with an Expected performance. The worst scenario happens when all the workers are QI, where an order would take 17% more time than the Expected. For the QII workers, who are more variable but faster than the Expected, the resulting line performance is 2% faster than the one with type E workers. The QIV workers, being slower but less variable than the Expected, produce an assembly line performance that is better than the one with QI performance, but still not enough to be faster than the Expected, taking 2% more time than the Expected.
In Figure 25 it is shown that for different seeds the results have slight differences but the performance stays the same. Having different seeds increases the variety in the results, which is crucial: several samples are required to obtain reliable results.
Figure 25 – Plot obtained for seeds 1, 5 and 10, with the assembly line operating with all workers with Expected behaviour (57.600 units)
This Expected behaviour will be used as a base for comparison in the next sections. Nevertheless, this type of performance has its own variability associated and does not behave as an ideal case (where the worker would take exactly 15 seconds to assemble the parts every time); it produces starved times and blocked times as well. This can be observed in Figure 26, where the percentages of starved, blocked and working times of an assembly line with five Expected performance workers are represented.
Figure 26 – Blocked, starved and working times percentage for a line with Expected workers (57.600 units, 10 seeds)
It is noticeable that, even when all workers have Expected performance, they cause blocking and starving of the other Expected performance workers, confirming what was already mentioned about this type of performance. In the next section different performances are combined to better understand how they influence an assembly line.
4.2 Combinations with extreme performances
As seen in the previous section, the worst and best performances belong to QI and QIII respectively. These are the two most frequently observed and predominant types of performance, as mentioned in Chapter 3. So, in this section these extreme performances are the focus, being the performances that can create the worst and best possible scenarios. The other types of workers, QII and QIV, will not be mentioned, since no relevant results can be extracted from these types. All result tables presented in this section come from an average of the 10 different seeds that were computed, unless otherwise mentioned.
4.2.1 Combinations with QI performance
In this section the influence of the QI performance is studied. There are four possible cases: one worker with a QI time distribution while the other four have Expected performance; two workers with a QI time distribution while the other three have Expected performance; three workers with a QI time distribution while the other two have Expected performance; and four workers with a QI time distribution while the other one has Expected performance. This way, the effect of having such a type of performance, in the several possible positions in the considered system, can be studied.
The simulation results in Table 12 show that, if there is one worker with the worst type of performance (QI) while the others have Expected (E) performance, the system performance, in terms of time spent assembling the required number of parts, is affected by at least 6%. This value changes slightly according to the workstation in which the QI worker is positioned. The highest cycle time is obtained when the worker is positioned in the middle (E;E;QI;E;E). In this position, this worker is more likely to create blocking (when a worker cannot pass the part on and has to wait) and starving (when a worker does not receive a part to work on and also has to wait) situations. The best cases happen when the QI worker is positioned at the beginning or at the end of the line (QI;E;E;E;E and E;E;E;E;QI). This removes some starved and blocked times because, in the first station, he/she never gets starved and does not cause any blocking (there are no workers positioned before), while in the last station the worker never gets blocked and never causes starving (there are no workers positioned after). This eliminates some added times that this worker would cause if positioned between two other workers.
Table 12 – First case results from the simulation model with one worker QI and all the rest with E performance (57.600 units, 10 seeds)
Another important aspect to analyse in these results is the effect on the variability of a QI performance worker positioned in the last station of the assembly line. When a QI type worker is located in this last station, the line output variability is the lowest among all the possible line configurations with one QI performance worker. Moreover, it is in fact the lowest one even when compared with a line where all the workers have Expected performance. The line output variability obtained with the QI type of worker in the last position is very similar to the performance variability intrinsic to the QI type worker - Table 13. This intrinsic variability is imposed in the simulation model as a worker performance characteristic.
Table 13 – Variability of all types of workers
  Type of performance   Mode [sec]   SD [sec]
  E                     15.00        1.95
  QI                    17.39        2.46
  QII                   14.33        2.21
  QIII                  13.30        1.53
  QIV                   15.58        1.77
The explanation found is based on the particular working characteristics of the QI worker type in the last position of this line configuration, which will be explored further. In Table 14 it can be observed that the workers in workstations 1 to 4 have high values of blocked time, while the QI type worker has no blocked time: since this worker is positioned in the last station, he/she has infinite storage available.
Table 14 – Blocked and starved times obtained for a line with four workers with Expected performances and one worker in the end with QI performance (57.600 units, 10 seeds)
  Workstation   Performance   Blocked time [sec]   Blocked SD [sec]   Starved time [sec]   Starved SD [sec]
  1             E             1.7 x 10^5           913                0                    0
  2             E             1.6 x 10^5           608                1.1 x 10^4           201
  3             E             1.5 x 10^5           805                1.7 x 10^4           262
  4             E             1.4 x 10^5           646                2.3 x 10^4           289
  5             QI            0                    0                  3.2 x 10^4           216
Regarding the starved time, the first workstation does not starve (as it has an unlimited feed of parts), while the other workstations accumulate a considerable amount of starved time. It is also visible that the worker with the highest blocked time is the first worker; for the same reason, this worker is never starved. This happens for all the combinations in which one QI type of worker is allocated to the assembly line. Then again, regarding the QI worker in the last position, it is visible in Figure 27 that the line adopts the minimum and mode values of the QI performance worker (for the values see Table 10) but, due to the blocked times, occasionally longer cycle times are obtained, pushing the maximum values up.
Figure 27 – All workers Expected performance line compared to the behaviour of a line with four Expected workers and one QI performance worker in the end (57.600 units, seed:10; mode 17.7361 s, minimum 11.4837 s, maximum 32.1164 s)
In the same figure, since the minimum tc value starts later than that of the line where all workers have Expected behaviour, and both plots end around the same time, the variability is lower for the line with one QI performance worker: closer extremes produce lower variability. Proof that all the seeds produce the same behaviour is presented in Figure 28, where all the random number sequences for the combination in focus are shown in comparison with the assembly line with all workers with Expected behaviour.
Figure 28 – All workers Expected line performance (seed 1) compared to the performance from 10 different lines (all 10 seeds used separately) with four Expected workers and one QI performance worker in the end (57.600 units)
Regarding the variability in the last workstation for the QI worker performance, the line of thought from the analysis of the first case can also be applied to the second case, where two workers have QI performance and three have Expected performance - Table 15. The lowest values of variability are all obtained when the QI workers are positioned in the first and last stations (QI;E;E;E;QI) and, this time, this is also the best performance combination.
The explanation for this being the best scenario has already been given in the previous case, where the best scenarios happened when the QI performance was positioned at the beginning or at the end of the line; in this case both happen at the same time.
Table 15 – Second case results from the simulation model with two workers QI and all the rest with E performance (57.600 units, 10 seeds)
The simulation results show that the system performance is affected by 8% to 12% and, again, this value changes depending on the workstation combination. As before, the highest cycle time is obtained when the workers with the worst performance are positioned in the middle (E;E;QI;QI;E and E;QI;QI;E;E), which yield exactly the same value. Compared with the first case (one QI and four E), the system is now more affected: before, in the worst scenario, the system was affected by 7%, while now, even in the best scenario, the time spent assembling is at least 8% longer than the Expected. Again, for the next case - Table 16 - with three QI workers and two E, what was already mentioned about the variability still applies.
Table 16 – Third case results from the simulation model with three workers QI and two with E performance (57.600 units, 10 seeds)
As for the times obtained, the effect of having the three QI workers in the middle workstations is also present in these results, with an associated 15% time deviation. The value for the best combination, 11% more time spent assembling, has now gone up and occurs when the QI workers are alternated with the Expected workers (QI;E;QI;E;QI). This combination is interesting because it means the Expected workers compensate for the shortcomings of the QI workers, given that they are quicker. So, to obtain a better output when having three slow workers, the solution is to separate them with faster workers in between. Finally, for the last case - Table 17 - the amount of time spent on assembly varies from 14% to 16% more than the Expected. The analysis of the variability already made in the previous cases is also valid for these results.
Table 17 – Fourth case results from the simulation model with four workers QI and one with E performance (57.600 units, 10 seeds)
Here the best scenario happens when the Expected performance worker is positioned in the middle of the other four QI workers (QI;QI;E;QI;QI). As already seen in the third case, where alternating E workers with QI workers was the best combination, this has the same effect: the Expected worker creates a "soothing" effect by separating the workers with worse performance. The worst scenarios happen when the Expected worker is positioned in the first or last workstations (E;QI;QI;QI;QI and QI;QI;QI;QI;E). The explanation is the one already given, but with the opposite effect of having QI workers allocated to the same positions the E worker occupies in the combinations QI;E;E;E;E and E;E;E;E;QI.
4.2.2 Combinations with QIII performance
As in the previous section, here the influence of the QIII performance is studied. There are also four possible cases: one worker with a QIII time distribution while the other four have Expected performance; two workers with a QIII time distribution while the other three have Expected performance; three workers with a QIII time distribution while the other two have Expected performance; and four workers with a QIII time distribution while the other one has Expected performance.
Thus, the effect of combining this type of performance, in all the possible positions in the proposed system, can be studied. In Table 18, the results show that if there is one worker with the best type of performance (QIII) while the others have Expected (E) performance, the system performance, in terms of time spent assembling the required number of parts, is affected by -1% to -2%. As mentioned, this value changes slightly according to the workstation to which the worker is allocated. In this case, the lowest cycle time is obtained when the worker is positioned in the middle workstation (E;E;QIII;E;E). This outcome is the reverse of what was seen in the case where one QI performance worker is allocated with four Expected workers: while in that case this was the worst scenario, here it is the best scenario, since the performances are opposite extremes.
Table 18 – First case results from the simulation model with one worker QIII and all the rest with E performance (57.600 units, 10 seeds)
For the worst scenario of this case, the reverse of what was seen in the QI cases happens: when the QIII worker is allocated at the beginning or at the end of the line (QIII;E;E;E;E and E;E;E;E;QIII), the cycle time gets higher.
Figure 29 shows that the line with one QIII worker behaves almost like a line in which all workers have Expected behaviour; the influence of the QIII worker is only slightly noticeable. This is why this combination is one of the worst found for one QIII worker allocated in a line with four Expected workers.
Figure 29 – Comparison between an assembly line with all workers Expected and another with one worker QIII in the beginning and four others Expected (57.600 units, seed:10; mode 15.4996 s, minimum 10.4106 s, maximum 32.3153 s)
The effect on the variability of a QIII performance worker positioned in the last station of the assembly line is again the reverse of what happens in the case already seen - Table 12. When this type of worker is in the last station of the line, the value of the line's variability is the highest among all the possible line configurations with one QIII performance worker. This is why this case is also one of the worst scenarios. It happens because the minimum cycle time value gets smaller under the influence of the QIII performance, while the maximum cycle time value stays practically the same compared with the line where all the workers have Expected performance. Since the minimum and the maximum values are further apart, the variability becomes higher. Figure 30 illustrates this, showing that the plot for an assembly line with one QIII worker positioned at the end starts earlier than that of the line with all Expected workers, while both lines finish almost at the same time.
Figure 30 – Comparison between an assembly line with all workers Expected and another with one worker QIII in the end and four others Expected (57.600 units, seed:1; mode 15.0847 s, minimum 9.6957 s, maximum 35.1007 s)
This is also visible when all the seeds for the combination in focus are shown in comparison with the assembly line with all workers with Expected behaviour - Figure 31.
Figure 31 – All workers Expected line performance (seed 1) compared to the performance from 10 different lines (all 10 seeds used separately) with four Expected workers and one QIII performance worker in the end (57.600 units)
On the left, the plot for the Expected behaviour line starts later than all the plots for the line with the QIII performance at the end, and they finish almost at the same time, except for one or two seeds that extend further. For the blocked time, as expected, the lowest value observed belongs to the Expected performance worker assembling immediately before the QIII performance worker. As for the starved time, the highest value belongs to the QIII worker - Table 19.
Table 19 – Blocked and starved times obtained for a line with four workers with Expected performances and one worker in the end with QIII performance (57.600 units, 10 seeds)
  Workstation   Performance   Blocked time [sec]   Blocked SD [sec]   Starved time [sec]   Starved SD [sec]
  1             E             10.3 x 10^4          383                0                    0
  2             E             7.6 x 10^4           401                2.7 x 10^4           207
  3             E             5.1 x 10^4           362                5.2 x 10^4           513
  4             E             1.2 x 10^4           137                9.1 x 10^4           391
  5             QIII          0                    0                  2.0 x 10^5           584
The simulation results in Table 20 show that the system performance is affected by -2% to -4%, depending on the workstation combination. As before, the lowest cycle times are obtained when the workers with the best performance are positioned in the middle workstations (E;QIII;QIII;E;E and E;E;QIII;QIII;E). Compared with the first case (one QIII and four E), where the best scenarios spent -2% time assembling, that amount of time now corresponds to the worst scenario.
Table 20 – Second case results from the simulation model with two workers QIII and all the rest with E performance (57.600 units, 10 seeds)
The lowest cycle time occurs when the two QIII workers are located between the three Expected workers (E;QIII;E;QIII;E). The explanation for this scenario is the same already given for the case where three QI workers were alternated with two Expected (QI;E;QI;E;QI). It is almost the same scenario, since the relation between E and QIII is the same as between QI and E (one is worse or better than the other). The highest cycle time occurs when the QIII performances are allocated to the first and last workstations (QIII;E;E;E;QIII). The explanation for this being the worst scenario has already been given in the previous case, where this happened when the QIII performance was positioned at the beginning or at the end of the line; in this case both happen at the same time. Again, for the highest variability occurring when the QIII is allocated to the last workstation, the line of thought from the analysis of the first case can also be applied to this second case. For the next set of combinations - Table 21 - with three QIII workers and two E, what was already mentioned about the variability still applies.
Table 21 – Third case results from the simulation model with three workers QIII and two with E performance (57.600 units, 10 seeds)
Again, the best-case scenario is when the QIII workers are allocated to the middle of the assembly line (E;QIII;QIII;QIII;E), with -7% time spent assembling compared with the Expected. The worst case, with -4%, occurs when the Expected workers are in the middle of the line (QIII;E;E;QIII;QIII and QIII;QIII;E;E;QIII). Both cases have already been discussed at length.
At last, for the remaining case - Table 22 - the amount of time spent on assembly varies from -7% to -8% relative to the Expected line. Here the variability is lower when the Expected worker is positioned in the last station (QIII;QIII;QIII;QIII;E). This is a parallel case to the one where the QI worker is positioned in the same spot (E;E;E;E;QI), so the same justification applies. Figure 32 shows the percentage of time each worker spends working, starved and blocked in these combinations, and thus confirms that they are in fact parallel: both have a relatively slower worker at the end of the line, so they behave the same way regarding how much time is spent working, starved or blocked.

Figure 32 - Parallel assembly line combinations (E;E;E;E;QI and QIII;QIII;QIII;QIII;E) - plots with the percentage of starved, blocked and working times for the workers in these combinations (57.600 units, 10 seeds)

The lowest cycle times happen when the Expected worker is positioned in the first or last workstation (QIII;QIII;QIII;QIII;E and E;QIII;QIII;QIII;QIII), as seen in QI;E;E;E;E and E;E;E;E;QI, where the worst performance is positioned in the same workstations.

Table 22 - Fourth case results from the simulation model with four workers QIII and one with E performance (57.600 units, 10 seeds)

4.4 Final remarks

From the simulations, it can be observed that the system cycle time is more affected when the QI/QIII workers are positioned at the middle workstations of the assembly line than at any other position. In addition, the time deviation (absolute value) caused by having at least one QI worker allocated to the assembly line is greater than when a QIII type of worker is in the same conditions. Since the workers are performing their task in a serial assembly line, there is an influence of the tight interconnection between the workers' performances, and this outcome in the time deviation can be attributed to the "free-riding" effect. This effect is the reduction of individual effort due to the presence of others, which comes from realizing that a subject's effort does not let them enjoy its full benefits, since these benefits are shared with their co-workers.

On the other hand, the cycle time variability is most affected if the QI/QIII workers are allocated to the last position of the assembly line. With a QI type of worker at the end of the assembly line, the variability of the system decreases compared to the Expected line. In contrast, for the QIII performance, the variability of the system is higher than the Expected. This seems counter-intuitive, since the performance of the QI worker has more variability associated with it than that of the QIII worker. However, by analysing the triangular distributions of both workers - Figure 33 - it can be observed that the QIII worker performance has a lower minimum than the QI worker performance.

Figure 33 - QI and QIII triangular distributions
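A worker's instantaneous task time is drawn from a triangular distribution such as the ones in Figure 33. A possible MATLAB sketch of this sampling, using the inverse-transform method, is shown below; the parameter values are placeholders chosen only to reproduce the qualitative relation described above (the QIII distribution having a lower minimum and less spread than QI), not the actual parameters used in the simulator.

```matlab
% Inverse-transform sampling from a triangular distribution
% a = minimum, c = mode, b = maximum (seconds); u = uniform(0,1) samples
triSample = @(a, c, b, u) ...
    (u <= (c-a)/(b-a)) .* (a + sqrt(u .* (b-a) .* (c-a))) + ...
    (u >  (c-a)/(b-a)) .* (b - sqrt((1-u) .* (b-a) .* (b-c)));

u = rand(1e5, 1);                      % common random numbers for both workers
tQI   = triSample(11, 16, 24, u);      % hypothetical worst-performance (QI) worker
tQIII = triSample( 9, 13, 18, u);      % hypothetical best-performance (QIII) worker

fprintf('QI  : min = %.2f s, mean = %.2f s, std = %.2f s\n', min(tQI),   mean(tQI),   std(tQI));
fprintf('QIII: min = %.2f s, mean = %.2f s, std = %.2f s\n', min(tQIII), mean(tQIII), std(tQIII));
```

With parameters like these, the QIII samples have both a lower minimum and a smaller standard deviation than the QI samples, which is the relation the argument above relies on.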
Then, since the initial state of the system performance is directly related to the performance of the QI/QIII worker type (their triangular distributions), and since the histograms of the assembly line for both combinations - Figure 34 - finish at similar cycle times due to the waiting periods, the system minimum and maximum cycle times are further apart when the QIII performance is positioned at the end than for the QI performance. This creates a bigger gap between the system minimum and maximum cycle time values for the assembly line with the QIII worker performance at the end, originating a higher value for the variability.

Figure 34 - Comparison between an assembly line with all workers Expected, one with one worker QI in the end and four others Expected, and another with one worker QIII in the end and four others Expected (57.600 units, seed: 10). Annotated values: Min = 9.7445, Min = 11.4837, Max = 32.1164, Max = 35.8176.

Some authors [43] state that variance reduction in systems has become a prevalent priority in many manufacturing and service organizations throughout the world, and that the variation in a process has been referred to as the 'root of all evil' in a process. However, from the analysis of the results, for the case where the worker with QIII performance was allocated to the end of the line (where the variability is higher than the Expected), the assembly line still obtains a cycle time better than the Expected. Independently of the variability, this worker took less time in total to assemble all the parts. In contrast, for the case where the worker with QI performance was allocated to the end of the line (where the variability is better than the Expected), the line cycle time is higher than the Expected. This means that, even though changes in variability can sometimes be interpreted as a problem to be solved, this reading of the results offers a different perspective: the effect of the variability does not make that much of a difference from the overall perspective.

5. Conclusions

From analysing the results of the performed simulation studies, it can be concluded that if the workers with a worse/better performance than the others are positioned at the workstations in the middle section of the assembly line, the system cycle time is more affected than at any other position. Also noticeable is that, in absolute terms, the time deviation caused by having at least one worker with the worst/best performance is greater when this worker has the worst (slower and more variable) performance than when it has the best (faster and less variable). In such a situation, where the workers are performing the assembly task in a serial assembly line, there is an influence of the tight interconnection between the workers' performances, and this outcome can be attributed to the "free-riding" effect.

The cycle time variability is most affected if the worker with the worst/best performance is allocated to the last position. With a worst performer in the assembly line, the variability of the system decreases compared to the Expected line. On the other hand, for the best performance worker, the variability of the system is higher than the Expected. This seems counter-intuitive, since the worst performance worker is more variable than the best performance one. But, by analysing the triangular distributions of the workers, it can be observed that the best performance worker has a lower minimum than the worst performer.
If the histograms of the assembly line, for both combinations, finish at almost the same point due to the blocked and starved times (the cycle time is sometimes even greater), while the initial state of the system performance is directly related to this worst/best performer, then, for the best performer, the system minimum and maximum cycle times are further apart than for the worst performer. This creates a bigger gap between the system's minimum and maximum cycle time values for the best worker performance, originating a higher value for the variability.

Some authors [43] state that variance reduction in systems has become a prevalent priority in many manufacturing and service organizations throughout the world, and that the variation in a process has been referred to as the 'root of all evil' in a process. However, analysing the results, for the case where the best performance worker is positioned at the end of the line (where the variability is higher than the Expected), the assembly line still obtains a cycle time better than the Expected; this worker takes less time in total to assemble all the parts. On the other hand, when the worst performance worker is allocated to the end of the assembly line (where the variability is better than the Expected), the line cycle time is higher than the Expected. This means that, even though changes in variability can sometimes be interpreted as a problem to be solved, and variability in a line is usually seen as undesirable, this rationalization of the results can show a different perspective on this observation: the effect of the variability does not make that much of a difference from the overall perspective.

In conclusion, the differences in workers' task times, both average times and dispersion, can have large impacts on the output performance of manually operated systems, and should be taken into account when modelling and managing tightly coupled systems. Having heterogeneity in the workers' performances will inescapably affect the system performance, especially if these workers have extremely different performances. Consequently, when performing simulations of manually operated systems, it is recommended to consider extreme performances, besides the expected performance, in order to have a more realistic output.

For future work, adding buffers to the proposed assembly line would be an interesting development of the study. Determining the best buffer configuration (whether buffers are needed, the maximum quantity of parts each can store, and where they should be positioned) can be achieved using metaheuristics.

References

[1] Folgado, R., Heterogeneity and Variability on Human-Centered Assembly Systems, PhD Thesis, Instituto Superior Técnico, Universidade de Lisboa, 2012.
[2] Dilworth, J., Operation management, McGraw Hill, 1996.
[3] Chase, R., Aquilano, N., Jacobs, F., Production and operations management: Manufacturing and services, McGraw Hill, 1998.
[4] Drira, A., Pierreval, H., Hajri-Gabouj, S., "Facility layout problems: A survey," Annual Reviews in Control, vol. 31, pp. 255-267, 2007.
[5] Das, B., "A computer simulation approach to evaluating bowl versus inverted bowl assembly line arrangement with variable operation times," The International Journal of Advanced Manufacturing Technology, vol. 51, pp. 15-24, 2010.
[6] Boysen, N., Fliedner, M., Scholl, A., "A classification of assembly line balancing problems," European Journal of Operational Research, vol. 183, pp. 674-693, 2007.
[7] Becker, C., Scholl, A., "A survey on problems and methods in generalized assembly line balancing," European Journal of Operational Research, vol. 168, pp. 694-715, 2006.
[8] Altiok, T., Performance analysis of manufacturing systems, New York: Springer, 1997.
[9] Witzenburg, T., "The Next Step – Industrial Automation - The Automation Options," Belcan Engineering Automation Group.
[10] Hopp, W., Spearman, M., Factory physics: foundations of manufacturing management, Irwin/McGraw-Hill, 2001.
[11] Bulgak, A., Diwan, P., Inozu, B., "Buffer size optimization in asynchronous assembly systems using genetic algorithms," Journal Computers and Industrial Engineering, vol. 28, pp. 309-322, April 1995.
[12] Bley, H. et al., "Appropriate Human Involvement in Assembly and Disassembly," CIRP Annals - Manufacturing Technology, vol. 53, pp. 487-509, 2004.
[13] Michalos, G. et al., "Automotive assembly technologies review: challenges and outlook for a flexible and adaptive approach," CIRP Journal of Manufacturing Science and Technology, vol. 2, pp. 81-91, 2010.
[14] MacDuffie, J., Pil, F., "From Fixed to Flexible: Automation and Work Organization Trends from the International Assembly Plant Survey," 1996.
[15] Aguiar, G., Aguiar, B., Wilhelm, V., "Obtenção de Índices de eficiência para a metodologia data envelopment analysis utilizando a planilha eletrônica Microsoft Excel," Revista Da Vinci, vol. 3, pp. 157-169, 2006.
[16] Little, J., "A Proof for the Queuing Formula: L = λW," Operations Research, vol. 9 (3), pp. 383-387, 1961.
[17] Kriengkorakot, N., Pianthong, N., "The Assembly Line Balancing Problem: Review articles," KKU Engineering Journal, vol. 34, pp. 133-140, 2007.
[18] Dudley, N., "The effect of pacing on worker performance," International Journal of Production Research, vol. 1, pp. 60-72, 1961.
[19] Baybars, I., "A survey of exact algorithms for the simple assembly line balancing problem," Management Science, vol. 32, pp. 909-932, 1986.
[20] Hsieh, S., "Hybrid analytic and simulation models for assembly line design and production planning," Simulation Modelling Practice and Theory, vol. 10, pp. 87-108, 2002.
[21] Law, A., Kelton, W., Simulation Modeling & Analysis, Industrial Engineering Series, McGraw-Hill International Editions, 1991.
[22] Shannon, R., "Introduction to the Art and Science of Simulation," 1998.
[23] Banks, J., "Introduction to Simulation," 2000.
[24] Mclean, C., Leong, S., "The Expanding Role of Simulation in Future Manufacturing," 2001.
[25] Rubinstein, R., Melamed, B., Modern Simulation and Modeling, Wiley, 1998.
[26] Ingalls, R., "Introduction to Simulation," 2001.
[27] Andersson, M., Olsson, G., "A Simulation Based Decision Support Approach For Operation Capacity Planning In A Customer Order Driven Assembly Line," in Proceedings of the 1998 Winter Simulation Conference, D.J. Medeiros, E.F. Watson, J.S. Carson and M.S., 1998.
[28] Rodrigues, G., Carvalho, V., CAPS-ECSL, Experiência de modelagem simulação aplicada a um sistema de elevadores, Universidade do Minho: Relatório Técnico, 1984.
[29] Shannon, R., Systems Simulation – the art and science, Prentice-Hall, Inc., 1975.
[30] Centeno, M., Carrillo, M., "Challenges Of Introducing Simulation As A Decision Making Tool," 2001.
[31] Law, A., McComas, M., "How To Build Valid And Credible Simulations Models," 2001.
[32] Oakshott, L., Business Modeling and Simulation, Pitman Publishing, 1997.
[33] Banks, J., Handbook of Simulation – Principles, Methodology, Advances, Applications and Practice, John Wiley & Sons, Inc., 1998.
[34] Ferreira, L., Geração automática de modelos de simulação de uma linha de produção na indústria electrónica, Industrial Engineering MSc Thesis, Universidade do Minho, 2003.
[35] Leite, C., Modelo de simulação para uma linha de montagem de helicópteros, MSc Thesis, Universidade Federal de Itujabá, 2003.
[36] Battini, D., Persona, A., Regattieri, A., "Buffer size design linked to reliability performance: A simulative study," Computers & Industrial Engineering, vol. 56, pp. 1633-1641, 2009.
[37] Lutz, C., Davis, K., Sun, M., "Determining buffer location and size in production lines using tabu search," European Journal of Operational Research, vol. 106, pp. 301-316, 1998.
[38] Boudreau, J., Ramstad, P., "Strategic industrial and organizational psychology and the role of utility analysis models," John Wiley & Sons, Inc., vol. 12, pp. 193-221, 2003.
[39] Boudreau, J. et al., "On the interface between operations and human resources management," Manufacturing & Service Operations Management, vol. 5 (3), pp. 179-202, 2003.
[40] Juran, D., Schruben, L., "Using worker personality and demographic information to improve system performance prediction," Journal of Operations Management, vol. 22, pp. 355-367, 2004.
[41] Boudreau, J., Sturman, M., Judge, T., Utility Analysis: What are the black boxes and do they affect decisions?, New York: Wiley, 1994, pp. 77-96.
[42] Doerr, K. et al., "Work Flow Policy and Within-Worker and Between-Workers Variability in Performance," Journal of Applied Psychology, vol. 89, pp. 911-921, 2004.
[43] Baugous, A., More than a mean: Broadening the definition of employee performance, PhD Thesis, The University of Tennessee, 2007.
[44] Hunter, J., Schmidt, F., Judiesch, M., "Individual differences in output variability as a function of job complexity," Journal of Applied Psychology, vol. 75, pp. 28-42, 1990.
[45] Shunta, J., Achieving World Class Manufacturing through Process Control, New Jersey: Prentice Hall, 1997.
[46] Tan, B., "Agile Manufacturing and Management of Variability," International Transactions in Operational Research, vol. 5, pp. 375-388, 1998.
[47] Doerr, K., Arreola-Risa, A., "A worker-based approach for modeling variability in task completion times," IIE Transactions, vol. 32, pp. 625-636, 2000.
[48] Doerr, K. et al., "Heterogeneity and variability in the context of flow lines," The Academy of Management Review, vol. 27, pp. 594-607, 2002.
[49] Schultz, K., Schoenherr, T., Nembhard, D., Equity Theory Effects on Worker Motivation and Speed on an Assembly Line, Working Paper, SC Johnson Graduate School of Business, 2006.
[50] Kerr, N., Bruun, S., "Dispensability of member effort and group motivation losses: Free-rider effects," Journal of Personality and Social Psychology, vol. 44, pp. 78-94, 1983.
[51] Latane, B., Williams, K., Harkins, B., "Many hands make light the work: The causes and consequences of social loafing," Journal of Personality and Social Psychology, vol. 37, pp. 822-832, 1979.
[52] Kerr, N., "Motivation losses in small groups: A social dilemma analysis," Journal of Personality and Social Psychology, vol. 45, pp. 819-828, 1983.
[53] Williams, K., Harkins, S., Latané, B., "Identifiability as a deterrant to social loafing: Two cheering experiments," Journal of Personality and Social Psychology, vol. 40, pp. 303-311, 1981.
[54] Kandel, E., Lazear, E., "Peer pressure and partnerships," Journal of Political Economy, vol. 100, pp. 801-817, 1992.
[55] Zaccaro, S., "Social Loafing: The Role of Task Attractiveness," Personality and Social Psychology Bulletin, vol. 10, pp. 99-106, 1984.
[56] Schultz, K., Juran, D., Boudreau, J., "The effects of low inventory on the development of productivity norms," Management Science, vol. 45, pp. 1664-1678, 1999.
[57] Aguiar, G., Peinado, J., Graeml, A., "Simulação de arranjos físicos por produto e balanceamento de linha de produção: O estudo de um caso real no ensino para estudantes de engenharia," Cobenge, 2007.

Appendix A.1 - Balancing a production line

There are some guidelines for balancing a production line [57]:
1. Divide operations into small, indivisible work elements (tasks) so they can be performed independently;
2. Make a solid study of the times of every work task;
3. Define the right task sequence;
4. Draw a precedence graph;
5. Calculate the cycle time and the number of workstations;
6. Assign the tasks to the workstations respecting the precedence order. The following rules must be respected in order to determine which tasks can be attributed to which workstation:
   a) All preceding tasks have already been allocated;
   b) The time of the task to be allocated shall not exceed the time remaining for the workstation;
   c) If there is more than one task that may be allocated, give preference to the task that has the longest duration or to the one nearest the beginning of the assembly, that is, the one that has more subsequent tasks;
   d) When there is no task that can be allocated to the workstation, move to the next workstation, until the production line is complete.
7. Check if there is a more appropriate way of balancing, trying to leave the same amount of idle time in each workstation;
8. Calculate the efficiency ratio for the production line (a calculation sketch for steps 5 and 8 is given after this list).
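Steps 5 and 8 of the procedure above reduce to simple calculations. The MATLAB sketch below uses hypothetical task times and demand figures, chosen only to illustrate how the cycle time, the theoretical minimum number of workstations and the efficiency (balance) ratio relate to each other.

```matlab
% Hypothetical data for illustration
taskTimes     = [12 8 15 6 10 9 14];   % duration of each work element [s]
availableTime = 8 * 3600;              % productive time per shift [s]
demand        = 1500;                  % parts required per shift

% Step 5: cycle time and theoretical minimum number of workstations
cycleTime = availableTime / demand;               % maximum time allowed per station [s/part]
nMin      = ceil(sum(taskTimes) / cycleTime);     % lower bound on the number of stations

% Step 8: efficiency (balance) ratio once the tasks are assigned to n stations
n          = nMin;                                % assume the lower bound was achieved
efficiency = sum(taskTimes) / (n * cycleTime);    % fraction of available station time used

fprintf('Cycle time = %.1f s, minimum stations = %d, efficiency = %.1f %%\n', ...
        cycleTime, nMin, 100*efficiency);
```

With these numbers the cycle time is 19.2 s, at least four workstations are needed, and the resulting balance efficiency is about 96%.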