Where innovation starts TU/e Academic Awards 2012 Thursday 7 June 2012 Foreword to the Annual Academic Awards 2012 The Academic Awards 2012 of Eindhoven University of Technology (TU/e) are part of the first Dutch Technology Week. This week is a showcase of the technological innovations that originate in the Brainport Eindhoven region. The Academic Awards put the spotlight on our bright young researchers who will help maintain the outstanding level of technological innovations in this region. The nominees and their projects demonstrate that our education and research contribute to the solutions of major societal problems in areas like energy, health and smart mobility. Many of the research projects have been conducted in collaboration with commercial and non-profit partners. The close links between science, industry and government in the Brainport Eindhoven region proof their value yet again. Each year we display the best graduation work, the best design report and the best doctoral project. This booklet enables you to find out about the work of our nominees. They are clearly the talents of the future. I am convinced that winning an annual academic award is one of the first steps towards a successful career. On behalf of the TU/e and Brainport I would like to congratulate all the nominees for their contributions to our future. Prof.dr.ir. C.J. van Duijn Rector Magnificus Eindhoven University of Technology 1 2 Contents TU/e Final Project Award 2012 Jury report J.H.M. Evers MSc M.J. Beelen MSc ir. T.H. Ellis ir. A. van der Heide J.A.J. Hellings MSc M.C.L.F. Jaspers MSc J.J.M. Kierkels MSc ir. B. Spronck ir. T. Vranken ir. M. van ‘t Westeinde Mathematics and Computer Science --------------------------------------------------------------------------------------------------------------------------------Mechanical Engineering ----------------------------------------------------------------------------------------------------------------------------------------------------------------Applied Physics --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Architecture, Building and Planning ---------------------------------------------------------------------------------------------------------------------------------Mathematics and Computer Science --------------------------------------------------------------------------------------------------------------------------------Industrial Engineering & Innovation Sciences --------------------------------------------------------------------------------------------------------Industrial Design -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Biomedical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------Chemical Engineering and Chemistry ------------------------------------------------------------------------------------------------------------------------------Electrical Engineering ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ TU/e Design Project Award 2012 Jury report K.S. Zych MSc PDEng Y. Cai MSc PDEng R. Kocielnik MSc PDEng H.L. Liang MSc PDEng ir. M. 
Polanco Fernández PDEng Design and Technology of Instrumentation ---------------------------------------------------------------------------------------------------------------Logistics Management Systems --------------------------------------------------------------------------------------------------------------------------------------------User System Interaction ----------------------------------------------------------------------------------------------------------------------------------------------------------------Software Technology -------------------------------------------------------------------------------------------------------------------------------------------------------------------------Process and Product Design ------------------------------------------------------------------------------------------------------------------------------------------------------- TU/e Doctoral Project Award 2012 Jury report dr.ir. J. Beckers dr.ir. M.S. Alfiad dr. D. Cavallo MSc dr.ir. G. Dingemans dr.ir. M.C.F. Donkers dr. B.L.J. Gysen MSc dr.ir. B.J. Hengeveld dr.ir. J. Jovanović dr.ir. M.J.H. Marell dr. M. van den Tooren dr.ir. A. de Vries dr.ir. A.P. Wijnheijmer dr.ir. C.M.E. Willems Applied Physics ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Electrical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------------Electrical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------------Applied Physics ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Mechanical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------Electrical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------------Industrial Design -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Chemical Engineering and Chemistry ------------------------------------------------------------------------------------------------------------------------------Electrical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------------Industrial Engineering & Innovation Sciences --------------------------------------------------------------------------------------------------------Biomedical Engineering -----------------------------------------------------------------------------------------------------------------------------------------------------------------Applied Physics ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Mathematics and Computer Science 
--------------------------------------------------------------------------------------------------------------------------------- page page page page page page page page page page page page 5 6 8 9 10 11 12 13 14 15 16 17 page page page page page page page 19 20 22 23 24 25 26 page page page page page page page page page page page page page page page 29 30 32 33 34 35 36 37 38 39 40 41 42 43 44 3 4 TU/e Final Project Award 2012 For the best final project at TU/e in 2011 The TU/e Final Project Award 2012 consists of a certificate and a sum of € 2.500,- for the best final project in one of the TU/e Master’s programs, completed in 2011. This year 10 final projects from 9 different departments have been nominated. The jury has assessed the final projects based on the following criteria: • Sufficient innovative elements • Theoretically well-founded • Verifiably of adequate scientific quality • Results publishable in a scientific journal • Independence on the part of the student • Well-written report, preferably in English 5 TU/e Final Project Award 2012 Report of the Jury All ten nominations made a very strong impression on the committee, with every final project study representing the best research each faculty has to offer. In the eyes of the committee, the best final projects of 2011 are of extremely high quality indeed. Nevertheless, the committee was unanimous in selecting this year’s prize winner as J.H.M. (Joep) Evers MSc of the Industrial and Applied Mathematics Program for his Master's thesis entitled ‘Modeling Crowd Dynamics: a Multiscale, Measuretheoretical Approach’ supervised by Dr. Adrian Muntean The collective motion of individuals, whether flocks of birds or human crowds, is a fascinating and captivating phenomenon whose aesthetic aspects induced by such an expression of collective behavior are complemented by many practical research aspects studied in various scientific disciplines, from logistics and theoretical biology to statistical physics and mathematics. Mathematical models need to take into account individual driving forces (microscopic scale) and external forces exerted on the group (macroscopic scale). This demands a multiscale modeling technique. The recent introduction of a twoscale modeling approach re-interprets the classical conservation laws in terms of abstract mass measures and subsequently specialized to Dirac measures for the microscopic scale and to Lebesgue measures for the macroscopic scale. In the resulting coupled model the two scales coexist and share information. Joep Evers took up this approach and extended it with a theory of mixtures in fluid mechanics. In his hybrid model, he proves time-discrete well-posedness of the problem, Clausius-Duhem entropy inequalities (related to the second law of thermodynamics) and mass conservation. He not only gives an approximation strategy of the respective measures, but also illustrates model-based simulations of the expected behavior of pedestrians moving in opposite directions in a narrow hallway. 6 Joep Evers excelled in each of the assessment criteria. In terms of publication, Joep Evers can boast a joint paper with his supervisor on the topic of his final project in the online journal NONLINEAR PHENOMENA IN COMPLEX SYSTEMS, published in 2011, with one more to come. Furthermore, the innovation of combining thermo-dynamic theory with the measure theory (including the RadonNikodym Theorem) struck the committee as surprising and certainly innovative. 
The theoretical foundation of the work on the proposed dynamic crowd model gives rigorous proofs where needed, and references to the literature are precise and clearly stated. In addition, the augmentation to this novel two-scale treatment of crowd dynamics certainly convinced the committee that the criterion of verifiably of adequate scientific quality is satisfied. Joep’s advisor refers to him as an independent thinker, and his achievements seem very convincing. The report is very well written, in English, with clarity and succinctness. We are indeed fortunate to have a university with such great talents among its students. Eindhoven, May 22, 2012 Prof.dr.ir. A.C. Brombacher (chair) Prof.dr. A.M. Cohen Prof.dr.ir. J.J.H. Brouwers 7 Mathematics and Computer Science J.H.M. Evers MSc Joep graduated cum laude from the Department of Mathematics and Computer Science. His final project was supervised by dr. Adrian Muntean and prof.dr. Mark Peletier in the Applied Analysis group. Currently he works on a PhD project in the same group, within the Graduate Program of the Institute for Complex Molecular Systems (ICMS). Modeling crowd dynamics A multiscale, measure-theoretical approach Relevance Shortly before the start of the project, the relevance of good crowd management was emphasized in a tragic way. More than 20 people lost their lives at the Love Parade, organized in Duisburg (Germany) in July 2010. They were crushed brutally when a stampede occurred in the overcrowded festival area. It is urgent and challenging to understand how and why crowds move the way they do. Without this knowledge one cannot predict whether situations will get out of hand, and thus security mechanisms are not able to react appropriately. This is precisely the point where mathematics makes the difference. Multiscale mathematical modeling allows us to gain insight into the underlying mechanisms of the collective behaviour in crowds, and hence to improve predictions. Figure 1. Crowd dynamics at Eindhoven railway station: The entrance of the stairs acts as a bottleneck. Two trains have just arrived simultaneously during the rush hour. Method I work with transport models in a measure-theoretical context. A mass measure contains information about the distribution of the crowd. This measure evolves in time (i.e. the crowd moves) due to a velocity field. Each individual has its intrinsic desired velocity, which is perturbed by its neighbours by means of a social velocity. The actual modeling takes place when we define the exact form of the mutual interactions (friendship, hate, leadership, etc.). We only make decisions about what happens at the level of each individual (the microscale), and search for the corresponding macroscale (crowd) response. Typically, interesting collective behaviour (dynamic patterns) naturally arises at the crowd level, often leading to stable self-organized structures (lanes, groups…). Unifying theory I propose a unifying theory for describing crowd dynamics in which I combine ideas from continuum mechanics, mixture theory and thermodynamics. My particular achievement is that I have glued these fields together in the framework of measure theory. Perspective Not only is the model suitable for studying the behaviour of a single crowd, but also for the interplay between ‘special individuals’ (firemen, tourist guides, policemen…) and groups of people. These individuals can be employed to control the crowd. 
The importance of this lies in the fact that never before has ensuring the safety of crowds been as relevant as it is today. Figure 2. Two snapshots of a corridor simulation. Two subpopulations (red and black) have oppositely directed desired velocities. Starting from a disordered initial state (left), the system evolves towards an ordered configuration (right), in which the subpopulations walk in lanes. 8 Winner of the TU/e Final Project Award 2012 Mechanical Engineering M.J. Beelen MSc Maarten received his MSc degree (cum laude) on the topic of haptics for a telesurgical robot, a project from the Control Systems Technology group of prof.dr.ir. Steinbuch, Mechanical Engineering. During his study he worked on a bioreactor for cultivating heart valves, he performed research on the topic of nuclear fusion in the USA. Furthermore, he co-organized a study trip to Korea, he was a tutor for student projects, a teaching-assistant for multiple courses and a recruiter for new students. Currently, he is active in the TU/e spin-off Medical Robotic Technologies, targeting the introduction of surgical robots into clinical practice. Evaluation of haptics for a telesurgical robot Shunt dynamics filtering and time domain passivity control Figure 1. The telesurgical robot with a slave robotic system (foreground) and a haptic master console (background). Figure 2. Experimental validation of the two-layer passivity approach, showing that active behavior in the form of bounces on the surface of the stiff spring is successfully detected and suppressed. Figure 3. Telesurgical robot for vitreoretinal eye surgery. The spin-off Medical Robotic Technologies aims at bringing surgical robots to the market. Evaluation of haptics for a telesurgical robot In contrast to open surgery, minimally invasive surgery (MIS) only requires small incisions, resulting in less patient trauma. Robotically assisted MIS provides more precision, steadiness and dexterity and facilitates tremor filtering and movement or force scaling. Commercially available surgical robots lack the ability of reflecting the interaction forces between the slave robot and the tissue to the surgeon through the master robot. This force feedback, called haptics, is highly desirable: it could reduce unintentional damage to tissue, resulting in fewer complications; it could enable surgeons to localize or diagnose tissue by palpation; and it could reduce the time to complete a surgical procedure. Therefore, a new robotic master-slave system has been realized, restoring the surgeon’s sense of touch, see Fig 1. Maarten has provided the motivation for and evaluation of the control design steps taken to achieve force feedback in this telesurgical robot. The challenge is to achieve force feedback for an industrial telemanipulator for which a common complication is the noncollocation of the force sensors with respect to the point of interaction. This results in shunt dynamics that degrade the interaction force estimation. Other typical complications are force sensor noise, high levels of friction, structural resonance frequencies, gravity influences, non-backdriveability and actuator saturation. Maarten presented a method to model, identify and compensate for the influence of the shunt dynamics, and he evaluated a two-layer approach that enforces passivity in the time domain, for the first time on an industrial telemanipulator. 
Experimental validation Experimental results demonstrate successful interaction with soft environments, using an impedance reflecting controller. Furthermore, a significant reduction of the shunt dynamics influences is achieved. Finally, stable interaction with both soft and hard environments is demonstrated, using the time domain passivity approach, see Figure 2. The first part (0 ≤ t ≤ 10s) shows stable bilateral interaction with a soft spring. During the second part, active behavior is observed upon contact with a stiff spring, which was anticipated as is common for bilateral control systems. During the third part, the passivity layer is enabled and the energy level H is monitored, ensuring passive interaction. During part four, the energy level drops below zero upon the first bounce with the stiff spring; active behavior is detected and a damping force Fpas is successfully imposed on the master until the required amount of energy is dissipated, obtaining a neutral energy balance H=0mJ; stable interaction with a stiff spring has been achieved. 9 Applied Physics ir. T.H. Ellis Tim carried out his Master’s thesis work in the group FNA (Physics of Nanostructures) at TU/e in collaboration with FEI Company. The project was supervised by dr.ir. R. Lavrijsen (FNA), prof.dr.ir. H.J.M. Swagten (FNA), and dr.ir. J.J.L. Mulders (FEI Company). Tim is currently employed by Holst Centre/TNO where he is working on the development of flexible thin-film barrier layers for organic LEDs (OLEDs). Novel deposition of magnetic nanostructures Historically, the seemingly magic nature of magnetism has caused much bemusement. Today though, an ever-increasing understanding of magnetic phenomena has allowed the development of amazingly complex technologies such as the now ubiquitous hard disk drive. In the future, to allow both this understanding and these applications to further progress, access to similarly advanced experimental techniques will be vital. One such novel technique is Electron Beam Induced Deposition, or EBID. Figure 1. (Right) An illustration of the EBID principle and (Left) a SEM image of a free-standing 3-D structure produced with the technique. Figure 2. (Top) A schematic representation of 3 ironbased EBID nanopillars on top of a magnetic nanowire. (Bottom) Magnetically sensitive microscope images of this nanowire at three different applied magnetic fields. As the applied field is increased, the magnetic orientation of the wire reverses (changes from white to black) gradually from left to right. The presence of the nanopillars, specifically their stray fields, is found to influence (delay) this magnetic switching. 10 There are currently many techniques available to an experimentalist for the deposition of material layers down to monolayer precision. However, for the fabrication of more intricate structures, complex multistep lithographic techniques are usually required. EBID, in contrast, is a versatile direct-write material deposition technique. Deposition is achieved by using a focused beam of electrons to selectively decompose a precursor gas adsorbed on a substrate. Not only does this technique permit nanoscale fabrication in three dimensions, but the large number of precursors available also means a broad range of materials can be deposited, including those magnetic in nature. Deposition of magnetic material was achieved using a new iron-based precursor material, Fe2(CO)9. 
Relatively untested in the world of EBID, the abilities of this precursor were of significant interest for the project’s collaborator, FEI Company. Thorough characterisation of the precursor revealed that depositions containing as much as 80 at.% iron could be readily achieved. Crucially, these deposits also exhibited ferromagnetic behaviour. In addition to high purities, the flexibility of the precursor was also demonstrated. By artificially increasing the local water vapour concentration during the EBID process, the iron content, and consequently the structural, magnetic, and electric properties of deposits were shown to be easily tuned. Apart from material characterisation, a possible future application of EBID for a magnetic storage technology was also demonstrated. ‘Racetrack Memory’ is a magnetic memory concept under development which has the potential to replace all present memory technologies. The basic concept of racetrack memory is domain-wall-motion, or the coherent transport of magnetic states (information) along very small magnetic wires. In a proof-of-concept investigation, this transport was found to be influenced by the presence of nanoscale ironcontaining EBID pillars deposited at specific positions on top of such wires. Architecture, Building and Planning ir. A. van der Heide Angela is currently working at Philips Lighting, Eindhoven. She graduated cum laude for the Master’s program in Building Physics and Systems and also obtained a certificate in Sustainable Development; both at TU/e. Her graduation topic was the acoustics of orchestra pits; and graduation was under supervision of Renz van Luxemburg, Constant Hak and Remy Wenmaekers. Acoustics of orchestra pits Figure 1. Schematic representation of sound transmission paths in an opera house. Figure 2. Floor plan and section of orchestra pit Muziektheater Amsterdam. Introduction Orchestra pits were developed in the 17th century to support opera performances, and later found their purpose for ballet, theatre and musical performances as well. The orchestra pit is located between the audience and the stage, and is often partly covered by the latter. The reason to cover part of the pit is twofold: • More seating rows can be placed in the auditorium, thus more tickets can be sold. • The balance between orchestra and singers is enhanced. For a good opera performance, sound transmission from the stage (vocals) and the pit (music) to the audience should be supported. Furthermore communication between stage and pit and within the pit are important to allow ensemble playing. The main sound transmission paths are displayed in Figure 1. Pit improvements Various possibilities have been investigated to improve the acoustic conditions for musicians: 1. Ear plugs 2. Change orchestra arrangement 3. Sound screens 4. Acoustic treatment walls/ceiling 5. Increase opening size Increasing the opening size is actually the most effective measure, but is often undesired by theatres because valuable seats are lost. The other measures all have their advantages and disadvantages, but a straightforward solution is yet to be found. Case study: Muziektheater Amsterdam The orchestra pit of the Muziektheater is the largest in the Netherlands with a floor area of 180 m2. All absorbing surfaces were removed from the pit because the sound levels in the auditorium were too low. However, due to this measure sound levels inside the pit are now a major concern. 
A group of engineers has come up with 7 options to improve the situation, but it is unclear which will be the most effective. Problem statement It is highly challenging to optimize the acoustics of orchestra pits, wich is mainly due to conflicting acoustic requirements posed by the audience and the musicians. One of the major issues is the sound level inside the orchestra pit, which can reach dangerous levels and cause hearing damage to the musicians. Secondly musicians often have trouble hearing other orchestra members, and are generally unable to hear the performers on stage. 11 Mathematics and Computer Science J.A.J. Hellings MSc Jelle is currently a PhD student at Hasselt University, Belgium. He graduated cum laude for the Master’s program in Computer Science and Engineering and also obtained a certificate in Philosophy; both at TU/e. His graduation topic was external memory bisimulation; and graduation was under daily supervision of George Fletcher. Bisimulation partitioning and partition maintenance a d d a d d b b b c c c Running time Figure 1. An example of a graph (left) and its structural description based on bisimulation (right). Bisimilar equivalent nodes in the two graphs are connected by a red, dotted line. 16 14 12 10 8 6 4 2 0 ·10 3 0 Networks - or, in mathematical terms, graphs - are fundamental structures that arise in numerous areas. Road networks or railway networks are obvious examples, but graphs are also used extensively to model relational information. The nodes of the graph are then the objects of interest, and the edges indicate which pairs of objects are related. For example, in social networks the nodes are people and the edges denote friendships between people. Representing data by graphs gives access to a broad range of graph-based analysis technologies. The cost of graph-based analysis technologies heavily depends on the size of the graph: the bigger the graphs are, the longer computations on the graph will take. Because graphs are typically huge, compression techniques are necessary in order to support efficient analysis of the graph data. One popular method to perform compression uses the concept of bisimulation. Intuitively, two nodes are bisimilar if they ‘reflect the same behaviour’. Often one can compress a graph significantly by bisimulation partitioning, that is, by grouping its nodes into clusters of bisimilar nodes and replacing each cluster by a single node. Performing bisimulation partitioning has been very time-consuming when graph data sets are so large that they do not fit in the computer's main memory but must be stored on disk. Existing algorithms spend most of their time transferring data back and forth between main memory and disk, which is extremely costly. Thus the application of bisimulation partitioning was essentially limited to relatively small graphs. 0.2 0.4 0.6 0.8 Nodes 1 ·10 9 Figure 2. The running time of the implementation of our algorithm on very large graphs; showing good scalability for very large inputs. In our research we have discovered the first practical and efficient approach for bisimulation partitioning massive graphs residing on disk. This sets the stage for progress in the wide variety of practical applications of bisimulation. We have specialized our methods to practical XML technologies and we have provided open-source implementations of our algorithms and ran performance tests which confirm the practical efficiency of our algorithms. 
Lastly our results will be presented at SIGMOD 2012; the premier international conference on databases. It is extremely selective - in 2012, only 16% of submissions were accepted - and publishing a paper in the highly cited SIGMOD formal proceedings (published by ACM Press) is very prestigious. 12 Industrial Engineering & Innovation Sciences M.C.L.F. Jaspers MSc Martijn conducted his Master’s thesis project at ThyssenKrupp Elevator AG in Essen (Germany). The project was supervised by dr. Claudia Schmidt-Milkau, Senior Vice-President Product / R&D at ThyssenKrupp Elevator AG, prof.dr. Hans Georg Gemünden, professor in Technology and Innovation Management at the Technical University Berlin, and prof.dr. Ed Nijssen, professor in the department of Industrial Engineering & Innovation Sciences. Knowledge transfer between R&D centers within the global network of a multinational corporation: a social network approach Figure 1. Informal R&D network picture of ThyssenKrupp Elevator AG Multinational corporations gain a company-wide advantage by being globally present. Moreover, they typically manage several R&D centers, located in technology clusters around the world. However, studies show that knowledge transfer is hindered by the geographical dispersion of R&D centers. Even if geographical distance grows marginally, cooperation often drops dramatically. This is particularly true for multinationals with a decentralized R&D structure and historically grown weak central coordination. Literature suggests that this negative effect of geographical distance could be overcome by leveraging the informal relationships between employees. Informal networks are a powerful way for globally sharing available knowledge and can compensate for a lack of formal solutions. Martijn’s study aim was to research the influence of the informal network on R&D knowledge transfer success for a multinational corporation with a decentralized R&D structure. Results and implications Results from Martijn’s study confirm that R&D knowledge is scattered within a decentralized corporation, which may hinder effective use of the available expertise. Martijn found that the available knowledge is also only marginally transferred in the company’s global informal network. Specifically, knowledge exchange is rather limited to transfers between R&D centers within the same operating unit (i.e. in relatively close geographical reach). It is suggested to further stimulate knowledge transfer between centers in different parts of the world. Based on this conclusion, knowledge exchange was further analyzed focusing on the effectiveness of informal networks for the transfer of related knowledge. The latter was defined as knowledge of product component families. Mapping the actual network structure showed that only for some components, ongoing personal relationships outside the own R&D center were present. That is, for specific components strong informal ties were noticed between employees of different R&D centers, leading to a higher degree of global knowledge sharing. Based on this result the suggestion was made to stimulate use of informal context for the other product components. Additional statistical analyses showed that if employees identify themselves and their R&D center with the multinational organization as a whole, knowledge transfer is more successful. So, by fostering identification with the parent company, knowledge transfer success can be increased. As a result, the amount of redundant and ineffective development will decrease. 
13 Industrial Design J.J.M. Kierkels MSc Jeanine did her final Master’s project at Philips Design in cooperation with the Máxima Medisch Centrum Veldhoven. She was supervised by prof.dr.ir. L.M.G. Feijs, and graduated with honors (BSc & MSc). Currently Jeanine is enjoying her job as People research consultant at Philips Design Healthcare, where also her final Master’s project is becoming reality. ‘De Blijde Gebeurtenis’ Design for an enhanced labor & delivery experience Patient consumers are becoming increasingly assertive and knowledgeable which makes it more than ever important for hospitals to improve the quality of care they provide in order to differentiate. As a result of this current competitive care market, the focus of perceived user value will have to include patients’ experiences and comfort next to functionality. In obstetrics, this trend has focused on women’s experiences of childbirth. A delivery is one of the major events in life: a radical experience that evokes strong emotions. A good delivery experience is crucial for the woman’s well-being, having immediate and long-term effects on her health and her relationship with her infant. One out of six Dutch women look back negatively on their birth experience three years postpartum; especially women who gave birth in a hospital context are less satisfied. Figure 1. ‘De Blijde Gebeurtenis’ guides the woman through labor by visualizing progress and providing real-time breathing support. For her final Master’s project Jeanine did research on the labor and delivery experience in the hospital and developed an innovative design concept to enhance this experience. Throughout the project she involved multiple key stakeholders to ensure a close fit with societal and business contexts. Field research In order to gain a deep understanding of the patients’ journey before, during and after giving birth as well as the staff workflow, on-site research at the Máxima Medisch Centrum Veldhoven was conducted. This showed that among patients there is a strong need for not being left alone, more coaching and more information during labor. Design concept Responding to this design opportunity ‘De Blijde Gebeurtenis’ was developed: an experience design concept that guides the woman through labor in a personal and unobtrusive way by visualizing progress and providing real-time breathing support. ‘De Blijde Gebeurtenis’ consists of a smartphone app that prepares the woman for labor, and an interactive light animation in the delivery room. The latter is coupled to physiological data obtained by contraction monitoring, and results in a unique memento after the delivery. Value for patient, partner & staff An experiential prototype was created to perform a qualitative user evaluation in a simulation context. Results show promise for ‘De Blijde Gebeurtenis’ in facilitating a more positive labor & delivery experience. It can help the woman to cope with contractions and functions as a hopeful and personal stimulus to persevere. For staff it supports the active management of labor. 14 Biomedical Engineering ir. B. Spronck Bart conducted his MSc project under supervision of prof. Frans van de Vosse, professor of Cardiovascular Biomechanics at TU/e. During his graduation, Bart successfully acquired a Kootstra Talent Fellowship to investigate the structure-function relationship in the artery wall, a PhD project he is currently working on at Maastricht University. 
Modeling the regulation of blood flow to the brain In order for the human brain to fulfil all its tasks, an adequate supply of nutrients and oxygen is required. With an increase in brain activity, this supply should increase accordingly. Regulation of blood flow to the brain is of key importance for adequate brain function. This is emphasised by evidence that cerebrovascular dysregulation is an early feature of pathologies such as hypertension and Alzheimer’s disease. Figure 1. Squat-to-stand manoeuvres induce a blood pressure drop, initiating an autoregulatory response to maintain a steady cerebral blood flow. Flow velocities are measured using Transcranial Doppler Ultrasonography. Blood pressure is simultaneously measured using a finger cuff. Cerebral Autoregulation The cerebral regulatory system expresses two regulatory properties: cerebral autoregulation, maintaining a constant blood flow with changing blood pressure, and neurovascular coupling, adapting blood flow to brain activity. The aim of this study was to develop a physiologically based mathematical model of cerebral blood flow regulation, combining these properties. Neurovascular Coupling Pressure [mmHg] 80 60 40 Standing up A sudden flow drop is readily compensated. Flow is even increasing while pressure is still decreasing. 20 0 70 Cerebral blood flow regulation is based on a variety of different physiological mechanisms. These mechanisms have in common that smooth muscle cells present in the vessel wall respond to external stimuli by alteration of the vessel diameter. These stimuli can be: pressure-induced wall strain (myogenic regulation), flow-induced wall shear stress (flow mediated regulation), CO2 concentration (metabolic regulation), and neural activity (neurogenic regulation). Opening eyes Flow [ml/min] 60 Eye opening readily causes a flow increase 50 40 30 Model 20 Measurement 10 Total Regulatory component [-] Neurogenic Myogenic 1 Shear stress based Steady state 0.5 0 The newly developed mathematical model (Figure 2, bottom) was used to predict blood flow in the posterior cerebral artery, a vessel supplying the visual cortex with blood. This flow responds to a change in visual cortex activity, e.g. when opening or closing the eyes: a neurovascular coupling response. When performing repetitive squatting and standing large blood pressure fluctuations are evoked, eliciting an autoregulatory response (Figure 1). -0.5 -1 -5 0 5 10 15 20 25 0 10 Time [s] 20 30 40 Time [s] Regulation is performed by altering arteriolar smooth muscle tension Pai Pvo In eight healthy volunteers neurovascular coupling responses and autoregulatory responses were measured (Figure 2, top). Measured flow responses were compared with modeled flow responses. A set of model parameters characterising each subject was determined. Pic Posterior cerebral artery Arteriolar circulation Venouscirculation Figure 2. Top: Measured and modeled blood pressure and flow responses of one of the subjects. Bottom: The developed lumped parameter model used to describe cerebral blood flow regulation. Adapted from: Spronck B, Martens EGHJ, Gommer ED, van de Vosse FN. A lumped parameter model of cerebral blood flow control combining cerebral autoregulation and neurovascular coupling, submitted to American Journal of Physiology - Heart and Circulatory Physiology. Myogenic regulation is found to dominate the autoregulatory response. The neurovascular coupling response is found to be dominated by the interaction of neurogenic and myogenic mechanisms. 
It is concluded that our single, integrated model of cerebral blood flow control can be used to identify the main mechanisms affecting cerebral blood flow regulation in individual subjects. This can eventually open new opportunities in early diagnosis of hypertension and Alzheimer’s disease. 15 Chemical Engineering and Chemistry ir. T. Vranken Thomas conducted his Master’s graduation project in the group Energy Materials and Devices of the Department of Chemical Engineering and Chemistry at TU/e. He was supervised by dr. Bert Hintzen, associate professor. Thomas is currently a PhD student at Hasselt University (Belgium), where he works, in collaboration with TU/e, on (new) nano-structured positive electrode materials for lithium ion battery applications. Thomas is a fellow of the Research Foundation - Flanders (FWO). Nitride-based materials with Eu2SiN3 structure, as spectral conversion materials for LEDs and solar cells Energy-saving LEDs LED lighting devices are becoming more and more popular because of their high energy efficiency and long lifespan. For general lighting applications, e.g. in homes and stores, an accurate colour rendition is needed for all colours. Because of this, white LEDs, consisting of a blue light emitting diode, in combination with only a blue absorbing and yellow emitting phosphor (or spectral conversion material), do not suffice anymore. New compatible phosphors (absorbing blue light and emitting other colours) are urgently needed to give this durable lighting technology a further boost. Sun Spectrum sunlight Spectral conversion layer Solar Cell Blue-red conversion Optimal response Figure 1. Working principle of a spectral conversion material applied onto a solar cell. The efficiency of the solar cell is increased because the spectral conversion layer addjusts the spectrum of the incoming solar light to the spectral response of the solar cell, by converting blue light into red light. Figure2. Graph of selected DFT data, showing the volume of the unit cell of several hypothetical compounds MLnSiN3 versus the ion size of the M2+ ion, to the power of 3. For Eu2SiN3, a comparison is made between the volume available in literature and the calculated one. 16 Solar cells for producing sustainable energy Photovoltaic solar cells are only capable of converting a certain part of the solar spectrum efficiently into electrical power. In a crystalline silicon solar cell, this conversion is for instance more efficient for red light than for blue light. By incorporating these same spectral conversion materials into/onto the solar cell, the spectrum of the incoming light can be adjusted to the spectral response of the cell, thereby allowing a new generation of solar cells, with increased efficiencies. The research objective and results The conducted research concentrated on exploring a novel class of promising nitride-based spectral conversion materials (A2+B3+SiN3). The approach consisted of a combination of theoretical (DFT) calculations and experimental synthetic work. The results from the DFT calculations, for varying A2+ and B3+ ions, provided valuable information about hypothetical compounds, worth studying experimentally. Using this strategy, apart from Eu2SiN3, two compounds never reported before, CaLaSiN3 and EuSmSiN3, were synthesized for the first time. CaLaSiN3 proved to be a suitable host lattice for dopants, opening the door for a whole new class of phosphors. 
Furthermore, the emission band of the synthesized Eu2SiN3 was positioned at an unusually high wavelength, > 800 nm, which is regarded as a record value for Eu2+. Future expectations Coinciding with the emergence of LED lighting devices, a renewed interest in phosphors can be observed. The advancements of the coming years in this domain will undoubtedly have a large influence on the further commercialisation of LED lighting. The fact that these materials do not only contribute to a decrease in electricity consumption (when applied in LEDs), but can also contribute to an increase of the production of green electricity (when applied in solar cells), makes them even more interesting. One can thus be sure that these spectral conversion materials will have a bright and sunny future! Electrical Engineering ir. M. van ‘t Westeinde Maaike conducted her Master’s graduation project in the electromagnetics group in close cooperation with NXP Semiconductors. The project was supervised by prof.dr.ir. A.B. Smolders, ir. A.de Graauw, and dipl.-ing U. Johannsen. Maaike is currently working as application engineer at ASML, supporting the effective use of ASML machines in the customers process. Modeling of radiation patterns for integrated millimeterwave antennas New applications continue to drive the bandwidth demand and the growth of internet traffic, and it is not expected that this growth will slow in the foreseeable future. This phenomenal growth places tremendous pressure on the underlying information infrastructure at every level, from core and metro network up to intra and on chip communication. Around 60GHz a large bandwidth is allocated worldwide for unlicensed wireless communication. The large bandwidth and physical properties of this band makes it an appropriate choice for high data rate and short distance applications. The 60GHz band offers new opportunities to fulfill the wireless communication demands of the future. Figure 1. The realized bond-wire antenna. The bond-wire antenna (BWA) is an attractive solution for the 60GHz band because it has no chip-to-off-chip interconnect, it has high radiation efficiency and it is extremely low cost. The radiation pattern of the BWA is deteriorated through the finiteness of the structure. In this work, an analytical model is presented to predict this deterioration. The analytical model is based on the Uniform Theory of Diffraction (UTD) and image theory. Using this technique, a significant reduction in the computational resources is achieved and physical insight is increased. Meaning that, it is possible to distinguish between the effects of the direct, reflected, and diffracted fields in the radiation pattern. Four prototype BWA have been built, each with a different ground plane size. Figure 1 shows an example of the constructed BWA. It contains a gold bond-wire with a diameter of 25 μm, which is positioned on a substrate with an ENEPIG finishing. The prototypes have dimensions of d1 = 4.5 mm and d2 = 5 mm, 7.5 mm, 10 mm and 12.5 mm respectively. Figure 2. Measured and simulated Eθ component of the BWA for varying ground plane sizes, φ = 0, d1 = 4.5mm. The analytical model was confirmed with measurement results of the four prototype BWA and a full-wave simulation tool HFSS, see Figure 2. The patterns are normalized to their maximum value and show good agreement. The difference is considered to be a consequence of the high fragility of the BWA. If not handled with proper care, the bond wire tilts to one side. 
The asymmetrical measurement results for d2 = 7.5 mm and d2 = 12.5 mm might be the result of this. 17 18 TU/e Design Project Award 2012 For the best design report at TU/e in 2011 The TU/e Design Project Award 2012 consists of a certificate and a sum of € 5.000,- for the best design project in one of the TU/e design programs, completed in 2011. The design can be a product, a process, or a system. This year 5 design projects from 5 different design programs and commissioned by different clients have been nominated. The jury has evaluated five design reports nominated for the Academic Design Award 2012. Each report describes a design made for a company. The designs are judged by the following criteria: • clear description of the practical problem, • compliance with the specifications and wishes defined by the company, • clear description of the problem-solving strategy and process to ultimate design • clear design methodology and application of multidisciplinary know-how, techniques and creativity • clear documentation of the complete design process, reasons for design choices and a description of the results that is also comprehensible to non-specialists • usable end-result that meets the user specifications • clarification of the acceptance of the design by the end-user 19 TU/e Design Project Award 2012 Report of the Jury Apart from Bachelor, Master and PhD programs, the three Dutch universities of technology offer eleven two-year post-MSc programs in the Stan Ackermans Institute of the 3TU.School for Technological Design. Graduates from these programs receive the Professional Doctorate in Engineering degree (PDEng). Each program can nominate one candidate for the TU/e design Award. The jury was impressed by the quality of all the design reports submitted but unanimously selected the design of Krzysztof (K.S.) Zych Msc PDEng from the “Design and Technology of Instrumentation” program as the winner of the Academic Design Award 2012. His design, entitled: ‘RF Density Meter’was commissioned by IHC Merwede, M. van Eeten and C. de Keizer under the supervision of Prof. dr. H.C.W. Beijerinck, dr. J.I.M. Botman and dr. J.J. Koning of TU/e. 20 IHC Merwede is market leader in the development and manufacturing of dredging equipment. Efficient dredging requires productivity to be known. Two crucial parameters for determining this are the slurry density and the flow velocity in the suction pipe. It is common practice in dredging to measure the density by using a radioactive sender/transmitter system but handling such a system causes significant problems, for instance in transportation restrictions and the need for qualified personnel. Krzysztof worked on the design and development of a new slurry density meter that uses radiofrequency (RF) electromagnetic waves to sense the density of the slurry. During the project several functional prototype were built. Tests with first prototypes showed strong measurement sensitivity to the salinity of the mixture; careful theoretical study was needed to reconsider the measurement concepts and the physical phenomena related to it. Subsequent prototypes were built and successfully tested in a variety of operating conditions, revealing very accurate measurements, system robustness and readiness for a larger test on board a real cutter-dredger. Krzysztof's work beautifully combines theory and practice. Applying theoretical concepts in daily practice demands a well-structured approach to handle the many practical pitfalls. 
Krzysztof has shown that he is capable of breaking down complex problems into smaller manageable pieces. The end result is a nice demonstration of this successful approach. Eindhoven, May 22, 2012 Prof.dr. P.A.J. Hilbers (chair) Prof.dr.Ir. A.C.P.M. Backx Dr.ir. S.P.G. Moonen 21 Design and Technology of Instrumentation K.S. Zych MSc PDEng Kris followed the DTI program. His graduation project was the design of a slurry density meter. The project was carried out at IHC Systems in Sliedrecht under supervision of dr.ir. Martijn van Eeten, (IHCS) and prof. Herman Beijerinck (Applied Physics TU/e). Now he continues the full development of that device as an R&D engineer at IHCS and simultaneously as a PhD in Design candidate, tutored by prof. Ward Cottaar (Applied Physics TU/e). RF Density Meter Project an experimental success story Motivation Dredging is crucial to maintain commercial waterways for ships and barges around the world. Nowadays it is done by means of a hydraulic transportation, where sand is sucked in and pumped away through a pipeline. The in-line density of this water-sand mixture is crucial for the efficiency of this process. At IHC an innovative density measurement method was developed, using a high frequency radio wave (RF) propagating across a pipeline. Laboratory measurements showed that the density of the mixture can be calculated from the measured signal phase velocity. Trials onboard a ship revealed however that salt content of estuary water seriously disturbs the measurement, causing the RF prototype to give erratic results. Figure 1. Dedicated pump setup built to investigate the operation of the prototype. The RF prototype is the orange section of the vertical pipeline. Approach The task of the project was to investigate and tackle the problem caused by salty water, including the development of an operational prototype. First a series of carefully designed experiments were performed on the existing prototype. We investigated all aspects of the signal path, including the signal-processing electronics and EM field measurements inside the pipe, followed by physical modeling of the influence of salinity on the signal propagation. The main experimental actions were performed using a dedicated pump setup (Fig. 1.). We were able to create a flow of water-sand mixture with full control over the density and the salinity. We discovered that not only the signal level and phase accuracy but also the frequency spectrum are crucial for correct determination of the density. A vast body of experimental data was obtained, and a measurement algorithm was derived, allowing for real time measurements of density, regardless of the variations of salinity of the mixture (Fig. 2.). Benefits At the end of the graduation project we delivered a working prototype of the RF density meter. The results obtained show that the RF method has a potential to outperform the current density measurement methods. As a result, IHC Systems is ready to built a first device which can be offered to customers and revolutionize the market of inland dredging vessels. Figure 2. Performance of the improved RF density meter. The real density in the pipeline (orange curve, top), the density derived from RF signal with use of the measurement algorithm (green curve, top) are plotted in time domain. Variation in the salinity of the mixture (red curve, bottom), causes only a temporary error. 22 Winner of the TU/e Design Award 2012 Logistics Management Systems Y. 
Cai MSc PDEng Yongjian conducted his PDEng graduation project at Philips Lighting. The project was supervised by ir. Martin Hutten, Master Black Belt at Philips Lighting, and prof.dr.ir. Jan van der Wal, professor in the Department of Mathematics. Yongjian is currently working for Philips Lighting as supply chain analyst and project leader, investigating supply chain improvement opportunities and conducting global improvement projects. From lagging indicators to leading indicators Design of leading indicator dashboard for Philips Lighting Figure 1. Conceptual design of the leading indicator dashboard. Figure 2. Realization of dashboard through various Excel tools. Globalization and outsourcing have made the supply chain more and more complicated. Meanwhile a rapidly changing market requires an agile and reliable supply chain. In practice, KPI’s (Key Performance Indicators) such as customer service level and inventory level are commonly used to measure the supply chain performance. They are however lagging indicators since they only register what happened in the past and measured the final outcomes. Although useful in many ways, these lagging indicators often do not provide enough information to guide future actions and ensure overall supply chain efficiency. Contrast, leading indicators can be used to monitor the supply chain management system and give advance warning of any developing weaknesses before problems appear. The nature of the relationship between lagging and leading indicators dictates that in order to be good at the first one you must initially excel at most of the second ones. To address this problem, Yongjian has developed a leading indicator dashboard for Philips Lighting. This development started with a conceptual design of leading indicator dashboard where the cause-and-effect relationship between leading indicators and lagging indicators is illustrated. The indicators such as lead time reliability, master data quality and demand development were identified as the most critical leading indicators. Then several tools were developed to allow efficient and continuous monitoring of each indicator. The design of these tools was realized in Excel since the company is most familiar with it. Guidelines were also provided for managers and operators to evaluate the supply chain performance on an ongoing basis and, if necessary, take corrective action proactively. The dashboard presents the information in a layered way. Management can have an overview from which they can see in seconds, or minutes at most, which decisions are needed. Planning works at a far more detailed level where they gain a clear forward visibility and up-to-the-minute information to guide them in the short-term operations. Benefits With leading indicators, everyone in the organization can see their performance and understand how their individual performance contributes to the overall performance of the organization. As more organizational leaders become familiar with leading indicators, there will be a growing trend towards using them to enhance transparency in the supply chain and manage the supply chain proactively. 23 User System Interaction R. Kocielnik MSc PDEng Rafal Kocielnik has done his project at Philips Research as an intern of the User-System Interaction program. The supervisors for the project were: Leszek Holendeski, senior scientist at Philips Research and Rene Ahn, assistant professor at Industrial Design, TU/e. 
Rafal is continuing work on this project as a Researcher at Information Systems, TU/e. Stress@Work pilot Project description Work related stress has become a serious problem. In the 2000 European Working Conditions Survey (EWCS), stress was found to be the second most common work-related health problem across the EU. A similar situation can be observed in the USA. The main goal of the project was to create a working demo application for stress coaching at work, based on predictions of coming stress. The stress predictions were based on correlations of the Outlook agenda of a user with physiological stress measurements provided by a prototype device developed at Philips Research. The project consisted of two main parts: gathering physiological measurements data with the Philips Research device and developing a demo application of stress coaching at work. Figure 1. The stress coaching demo application. The meetings from MS Outlook are colored based on predicted stress levels. Stress management advises are shown as icons on top of the events. Part 1: The goal was to set up a user experiment for gathering physiological data, and conduct it for a period of 7 weeks with 5 employees. The analysis of the collected measurements led to identification of various stress patterns. User feedback was also collected and the findings were in two categories: 1) Redesign recommendations for the prototype device 2) Improvements to the study procedure. Part 2: The goal was to design and implement a demo application of stress coaching at work. The design was created in a participatory process and included a psychological model for stress coaching based on a literature review. The final concept was implemented as a working demo that cooperates with MS Outlook and communicates with the underlying stress predictor. The system was evaluated with 10 target users. In general they appreciated the design approach as it gave them control over the coaching advice and the ability to give feedback to the system. The interface was regarded as simple, easy to use and well integrated with their daily routines. The main concerns were related to the accuracy of stress predictions and coaching advice. Based on these results the project is being continued on a larger scale and extended to include other target groups. Figure 2. Overview of stress detection, prediction and coaching approach. 24 The main contribution of the solution is combining objective physiological measurements with a personalized and adaptive stress coaching application, integrated with MS Outlook, functioning in a real work environment. Software Technology H.L. Liang MSc PDEng Lorraine conducted her PDEng graduation project at the Embedded Systems Institute and Vanderlande Industries. The project was supervised by Jacques Verriet and Roelof Hamberg from ESI, Bruno van Wijngaarden from Vanderlande Industries, and Ad Aerts from TU/e. Lorraine is currently working as a software engineer for Sioux Embedded Systems. A graphical specification tool for decentralized warehouse control systems Figure 1. Warehouse specification tool used to describe the Automated Case Picking module. Warehouses play an important role in modern supply chains. These facilities receive goods from various suppliers, provide temporary storage for these goods, and distribute them to customers according to the received orders. The flow of goods in a warehouse is controlled by a warehouse management and control system (WMCS). 
Software Technology H.L. Liang MSc PDEng Lorraine conducted her PDEng graduation project at the Embedded Systems Institute and Vanderlande Industries. The project was supervised by Jacques Verriet and Roelof Hamberg from ESI, Bruno van Wijngaarden from Vanderlande Industries, and Ad Aerts from TU/e. Lorraine is currently working as a software engineer for Sioux Embedded Systems. A graphical specification tool for decentralized warehouse control systems Figure 1. Warehouse specification tool used to describe the Automated Case Picking module. Warehouses play an important role in modern supply chains. These facilities receive goods from various suppliers, provide temporary storage for these goods, and distribute them to customers according to the received orders. The flow of goods in a warehouse is controlled by a warehouse management and control system (WMCS). The large variety of warehouse equipment and the very specific delivery requirements of different customers make the development of an effective WMCS a very challenging task. Moreover, the development of a WMCS involves people from different disciplines, each with their own jargon, which often leads to misunderstandings between stakeholders. Thus, the design process of a WMCS is complex and time-consuming. To address this problem, Lorraine has developed a graphical specification tool. The tool consists of the Warehouse-Control Specification Language and a corresponding graphical specification editor, and was built upon a stable reference architecture for decentralized WMCSs. The specification language allows the system components and their relationships and behaviors to be described. It bridges the communication gap between stakeholders by providing a common understanding of the concepts. Figure 2. Visualization of the execution results of the generated WMCS working in a simulated hardware context. The graphical specification tool allows warehouse designers to describe a WMCS with only as much knowledge of the implementation of the underlying reference architecture as necessary and without detailed knowledge of the underlying specification language. It enables warehouse designers to describe WMCSs at the level that precisely captures the variation needed for constructing customer systems. The tool supports automatic generation of the specified WMCS. Furthermore, it makes it possible to run the generated WMCS as a software-in-the-loop simulation. The output of the simulation is visualized using the integrated Gantt chart tool. Benefits A real-life industrial case was used as a carrier to evaluate the ability of the specification tool to reduce WMCS design and development effort. The evaluation shows that 97 percent of the WMCS code can be generated automatically from the editor; only 3 percent of application-specific code has to be written manually. The graphical specification tool thus illustrates that the efficiency of the WMCS design process can be improved by using a model-based warehouse design approach. Process and Product Design ir. F. Polanco Fernández PDEng Fernando conducted his PDEng graduation project at Huntsman Polyurethanes. The project was supervised by Arend Jan Zeeuw, Global Process Development Manager at Huntsman Polyurethanes, dr.ir. Mathieu Westerweele, Process and Product Design Coordinator, and Paul Deckers, second-year students' project coach. Fernando is currently working for Mobatec BV as a Chemical & Model Engineer, developing dynamic models of industrial processes. Dynamic modeling of a relief system Safety should be considered the most essential concept to apply to every single aspect of our lives. When it comes to industry, the concept acquires an even more relevant dimension. The project conducted by Fernando is, in general terms, concerned with safety in the chemical process industry. Figure 1. Huntsman production site. Figure 2. Process model created with Mobatec. In this field, processes are evolving continuously based on innovative technology and efficient designs. In this context, process engineers face essentially two scenarios: they can either design a new chemical plant from scratch, or they can modify, adapt and debottleneck an existing plant designed in the past to fulfill the requirements of today and the future.
This project focused on an existing facility, more precisely a polyurethane production plant. The process deals with hazardous compounds such as phosgene and hydrogen chloride. It therefore includes a large emergency section whose main function is to absorb these compounds in case of a full emergency scenario. This part of the process is of paramount importance, since it is the last piece of equipment before release to the atmosphere. Fortunately, a full emergency scenario has never occurred. The plant was designed in the 1970s based on a production rate that matched product demand at that time. However, the production rate is planned to be increased in the near future. Figure 3. Loading of the column in a full emergency scenario. Goal The objective of this project was to create a dynamic model of the "emergency" section of the process and to simulate a full emergency scenario, with the aim of gaining better knowledge of what might happen under those emergency circumstances. The results from this model were the basis for recommending the most effective way to allow the chemical plant to run at higher production rates without overloading the system. Benefits Fernando built a dynamic model of the emergency section of the process, and the results were considered relevant for the company in terms of understanding what might happen in a full emergency scenario. It laid the foundations for the company's future plans regarding the emergency section of the polyurethane production process. The adaptation of this model to production facilities in other countries is currently ongoing. TU/e Doctoral Project Award 2012 For the best dissertation or technological design at TU/e in 2011 The TU/e Doctoral Project Award 2012 consists of a certificate and a sum of € 5.000,- for the best TU/e doctoral project completed in 2011. This year 13 doctoral projects from 8 different departments have been nominated. The very large number of commendable dissertations this year was striking: normally two or three dissertations stand out, but this year there were at least six that could all have been worthy winners. The jury has assessed the doctoral projects based on the following criteria:
• Sufficient innovative elements
• Scope of the research or design
• Comprehensibility of the dissertation or research report
• Social relevance
TU/e Doctoral Project Award 2012 Report of the jury The jury was impressed by the quality of the theses nominated by the various departments. As TU/e, we can be proud to have been able to deliver this quality across all departments. The ranking of the top three only depended on subtle variations in the relative weight of the four evaluation criteria: sufficient innovative elements, scope of the research or design, comprehensibility, and social relevance. We can only select one winner, and we eventually decided that this should be the thesis 'Dust particle(s) (as) diagnostics in plasmas' by Dr. Job Beckers. The thesis of Job Beckers focuses on so-called dusty plasmas: complex plasmas containing dust particles. These are present in planetary rings, nebulae and comet tails as well as in plasma processes in the semiconductor and solar cell industries. Here these particles can have either a positive or a negative effect on the structures that are grown. Job has studied dusty plasmas along two research lines: the first concentrating on the formation of dust particles and the second on using these particles to diagnose the plasma itself.
An interesting twist to his work is that his experiments required him to tweak gravity. To create gravitational levels between 1 and 10 g, he mounted a plasma reactor on the gondola of a centrifuge, and to reduce gravity down to zero he performed experiments during parabolic flights with an Airbus A300. Job has developed very elegant models to explain his experimental data, with which he was not only able to unravel properties of plasmas that until now could not be studied directly, but also to study the dynamics of dust particle nucleation. The jury was impressed by the fine combination of innovative experiments with analytical modeling, leading to a thesis that has scientific rigor and empirical evidence gathered in an inspired way, served with clarity and enthusiasm. And the cover looks great too! An additional strong point of the thesis is that Job Beckers has described his research in a way that is also appealing to non-specialists in the field. His results are directly relevant to both the fundamental understanding of plasmas and a broad range of industrial applications. The work has been published in high-impact journals and has been presented at many conferences. Job Beckers' skills in communicating science to society are also evidenced by the popular book 'Plasma's voor iedereen' (Plasma for everyone), of which he is one of the authors. In short, we feel that Job Beckers' thesis shows everything that is needed, and more, to win this year's Doctoral Project Award. Eindhoven, May 23, 2012 Prof.dr. J.W. Niemantsverdriet (chair) Prof.dr.ir. K. Kopinga Prof.dr. C.C.P. Snijders Applied Physics dr.ir. J. Beckers Job conducted his PhD project - funded by the European Space Agency (ESA) - in the Elementary Processes in Gas discharges (EPG) group at the Department of Applied Physics. The project was supervised by prof.dr.ir. Gerrit Kroesen. Job is currently working for Xtreme Technologies GmbH in Aachen, Germany, developing the new generation of extreme UV light sources for lithography applications. Dust particle(s) (as) diagnostics in plasmas Figure 1. Microgravity conditions during the experiments on board Novespace's Airbus A300 during the ESA 54th parabolic flight campaign, Bordeaux, France. Photo: ESA/Le Floch. Figure 2. Profile of the electric field strength determined from hypergravity experiments in a centrifuge (grey triangles) and from microgravity conditions during parabolic flights (red line). Complex plasmas are plasmas with dust particles or liquid droplets suspended in them. Having a broad range of applications in, for example, astrophysics, nanotechnology and the semiconductor and solar cell industries, these types of plasmas attract a great deal of interest. In this research project we use the property of dust particles to become highly negatively charged - once injected into a plasma - to study electric fields at the border of the plasma. The origin of these fields lies in the fact that the mobility of electrons is much higher than the mobility of positive ions, causing an imbalance in charge carriers at the plasma border. This results in a space charge region - called the plasma sheath - in which large electric fields are present. The electric fields in the plasma sheath dramatically influence processes in all applications where the acceleration of ions at the border of the discharge is utilized (e.g. deposition, sputtering, and etching). However, measuring the strength and the profile of these fields is extremely difficult. We developed a novel method - using the force balance on a single dust particle confined in the plasma sheath, where gravitational and electrical forces balance - to measure, spatially resolved and quantitatively, the electric field profile throughout the plasma sheath without disturbing the plasma itself.
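To illustrate the measurement principle (a minimal sketch with generic symbols, not the notation used in the thesis): a dust particle of mass m carrying a charge Q settles at the height z0 in the sheath where the electric force cancels the effective gravitational force, so

    Q E(z0) = m g_eff,  and therefore  E(z0) = m g_eff / Q.

Changing the effective gravity g_eff shifts the equilibrium position z0, so the field profile E(z) can be mapped out point by point.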
For the necessary variation in particle confinement position, we simulated hypergravity conditions in a centrifuge and microgravity conditions during parabolic flights (ESA 54th parabolic flight campaign, Bordeaux, France). Benefits We are the first to measure, spatially resolved, the electric field profile throughout the plasma sheath without disturbing the plasma itself. Our work contributes to a large extent to the fundamental knowledge and understanding of plasma sheath phenomena and dust particle charging processes. Furthermore, since every plasma has borders by definition, almost all industrial plasma applications benefit from the knowledge gained. Winner of the TU/e Doctoral Project Award 2012 Electrical Engineering dr.ir. M.S. Alfiad Mohammad S. Alfiad received the MSc in broadband telecommunication (cum laude) and the PhD in optical telecommunication (cum laude) from Eindhoven University of Technology. His PhD project was carried out under the supervision of dr. Huug de Waardt and prof. Ton Koonen. He was granted the Nokia Siemens Networks quality award, the KIVI-NIRIA telecommunication award and the IEEE Photonics graduate student fellowship. Since May 2012 he has been working at ADVA Optical Networking as a senior system engineer. Optical data transmission with data rates beyond 100 Gb/s per channel We use the internet in almost all aspects of life, and our dependence on it is increasing over time. As a result, our data consumption has been growing at an enormous rate (around 40% per year). In order to avoid a capacity crunch anytime soon, we have to increase the capacity of our optical fiber transmission links, which form the core of the whole telecommunications network. Nokia Siemens Networks, in cooperation with TU/e, has investigated several techniques to drastically increase the capacity of their optical transmission systems. Figure 1. Structure of an optical coherent receiver. Figure 2. Lab implementation of (a) an optical coherent receiver and (b) a 100 Gb/s transmitter. Optical coherent detection Coherent detection is a well-known concept in radio telecommunications. This receiver type has been adopted for optical detection, as depicted in Fig. 1. In comparison with conventional optical detection techniques, coherent detection can reconstruct the complete optical field in the electrical domain. This enables the use of all optical field components for modulating the optical signal, which in turn leads to higher data rate channels with smaller bandwidth. The setup of our coherent receiver is shown in Fig. 2a. Now we can enjoy designing channels with data rates beyond 100 Gb/s. Thanks to the flexibility offered by optical coherent detection, one can be very creative in the design of optical channels. Fig. 2b depicts our lab setup for a 100 Gb/s polarization-multiplexed (POLMUX) QPSK transmitter. The 100 Gb/s channel fits in a total bandwidth of merely 50 GHz. Using this channel type, the total capacity of a single fiber link can be boosted up to 9.6 Tb/s. This bulky setup can be reduced considerably by optical and electrical integration. 100 Gb/s POLMUX-QPSK channels with coherent detection are very robust and have a transmission reach of over 2000 km.
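As a rough back-of-the-envelope check of these numbers (the symbol rate and channel count below are illustrative assumptions, not figures taken from the thesis beyond those quoted above): POLMUX-QPSK carries 2 bits per symbol in each of two polarizations, so at a symbol rate of 25 GBd a channel carries 2 x 2 x 25 = 100 Gb/s, which fits a 50 GHz grid slot (a spectral efficiency of 2 b/s/Hz). Filling roughly 96 such 50 GHz slots across the amplifier band then gives 96 x 100 Gb/s, about 9.6 Tb/s per fiber.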
Figure 3. Experimentally obtained constellation diagrams of a 200-Gb/s POLMUX-16QAM signal. In order to further increase the capacity of the transmission link, POLMUX-16QAM modulation can be utilized. POLMUX-16QAM readily enables the generation of 200 Gb/s channels. Employing such a modulation format further increases the total capacity, up to 19.2 Tb/s. Fig. 3 shows the constellation diagrams of a 200 Gb/s channel that has been generated in our lab. A transmission distance of up to 1500 km has been achieved with this channel. Electrical Engineering dr. D. Cavallo MSc After receiving his MSc degree (summa cum laude) in telecommunication engineering from the University of Sannio, Italy, in 2007, he started a PhD on antennas with the Electromagnetics group of TU/e, under the supervision of prof. Gerini and prof. Neto, and the co-supervision of prof. Tijhuis. He received the PhD degree (cum laude) in 2011. He is currently a postdoctoral researcher at TU Delft. Connected Array Antennas Figure 1. Satellite-to-aircraft communication. Figure 2. "Connected Array" antenna. Figure 3. Best Innovative Paper Award. Rationale In satellite communications, a single wideband antenna can strongly reduce space and weight when supporting many communication channels. An important commercial application that demands an advanced solution for satellite-to-aircraft communication is in-flight entertainment. For such an application, the antenna is required to support two orthogonal polarizations with good isolation between the channels. The beam of the array antenna is required to be electronically steerable and to cover the full hemisphere. This allows the system to maintain good pointing and good signal reception under all possible flight operations, including high-latitude air routes. Moreover, to minimize the impact of the antenna on the aircraft, a single wideband antenna for both transmit and receive should be integrated into the aircraft fuselage. Connected arrays The antenna solutions typically used for wideband wide-scan applications are not able to achieve broad bandwidth and low cross-polarization simultaneously. For this reason, a new approach has arisen in recent years for the design of broadband arrays, aiming at reducing cross-polarization. This antenna solution consists of arrays of long dipoles or slots that are fed periodically, and such arrays are referred to as "connected arrays" of slots or dipoles. This PhD project, on the one hand, developed the theory of connected arrays, leading to simple formulas that can be used for the design and a deep understanding of the radiation mechanism. On the other hand, the study addressed the aspects related to the practical design of such antennas. Results of the research The work described in this thesis has led to more than 30 papers published in peer-reviewed international journals and conference proceedings. Furthermore, the research developed within this work has played an important role at the Netherlands Organization for Applied Scientific Research (TNO Defense, Security and Safety) in the framework of several projects. Within these projects, one international patent has been granted and four prototype antennas have been manufactured. Experimental results have successfully validated the performance of the antenna. Moreover, the theoretical work was awarded the Best Innovative Paper Prize at the 30th ESA Workshop on Antennas for Earth Observation, Science, Telecommunication and Navigation Space Missions in Noordwijk, the Netherlands. Applied Physics
dr.ir. G. Dingemans Gijs Dingemans carried out his research at the Department of Applied Physics under the supervision of prof. Erwin Kessels. The title of his thesis is "Nanolayer Surface Passivation Schemes for Silicon Solar Cells". He has accepted a job at ASM International to support and expand their solar activities. Nanolayer surface passivation schemes for silicon solar cells Solar cells Solar energy plays a key role in the transition from fossil fuels towards renewable energy - one of the biggest shifts the world will see in the coming decades. Research and development focuses on increasing the energy conversion efficiency of solar cells to further curb the cost of solar electricity. Today, over 85% of solar panels are based on crystalline silicon. Gijs' research addressed one of the major losses in such solar cells: charge carrier recombination through defects at the silicon surface. Gijs developed novel "surface passivation schemes" - nanolayers on silicon - that significantly reduce these recombination losses. The solar cell efficiency can increase by up to 2% absolute. The implementation of surface passivation schemes has the highest priority on the technology roadmap of the solar industry. Figure 1. Improvement of the solar cell efficiency by implementation of Al2O3 nanolayers at the rear side of the solar cell. Figure 2. ALD cycle for the deposition of Al2O3 nanolayers. Aluminum oxide nanolayers and atomic layer deposition One of the key materials that Gijs studied was aluminum oxide (Al2O3), which provides an excellent level of surface passivation and appeared to exhibit unique properties. The nanolayers, with typical thicknesses of 5-50 nm, were synthesised by atomic layer deposition (ALD), a deposition method that has only recently been introduced in the field of photovoltaics. Gijs studied various novel ALD processes and materials for application in solar cells. With a number of international partners he also assessed the industrial feasibility of the new technologies. Example: interface engineering using nanolayers ALD is ideally suited for the controlled synthesis of films: by repeating a so-called ALD cycle, the film thickness can be tuned with 0.1 nm resolution. This benefit of ALD was exploited to tailor the properties of nanolayer surface passivation schemes using "interface engineering". A transmission electron micrograph of deposited silicon oxide (SiO2) and Al2O3 nanolayers is shown. Gijs demonstrated that the key properties of these SiO2/Al2O3 stacks - such as the strength of the built-in electric field that can shield charge carriers from interfacial defects - can be tuned very precisely by varying the SiO2 thickness in the range of 1-10 nm. This makes the stacks very useful for application in high-efficiency solar cells, as the properties can be adjusted to the doping type and doping concentration of the Si.
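To put this digital thickness control in perspective (a simple illustration assuming a growth of roughly 0.1 nm per cycle, as quoted above): the deposited thickness is approximately the number of cycles times the growth per cycle, so a 10 nm Al2O3 film takes on the order of 100 ALD cycles, and an interlayer of 1-10 nm grown at a similar rate corresponds to roughly 10-100 cycles.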
Patents Gijs has authored over 12 journal papers, and more than 8 patents have been filed. The main collaboration partner in the project has demonstrated record efficiencies for industrial solar cells. Moreover, high-throughput ALD equipment for Al2O3 thin films is currently being installed at various leading solar cell manufacturers. Figure 3. Interface engineering using nanolayers. Thickness control at the submonolayer level. Mechanical Engineering dr.ir. M.C.F. Donkers Tijs performed his PhD research within the newly established Hybrid and Networked Systems (HNS) group of the Department of Mechanical Engineering, where he was supervised by prof. Maurice Heemels and dr. Nathan van de Wouw. He currently works at the Netherlands Organisation for Applied Scientific Research (TNO) in Helmond, the Netherlands, developing advanced control algorithms for ultra-low-emission diesel engines. Networked and event-triggered control systems Networked control systems (NCSs) are control systems that exploit shared (wired and wireless) communication media. The versatility and flexibility that these NCSs offer are appealing and highly needed in many engineering applications, such as highway traffic control, aiming at reduced fuel consumption and increased traffic throughput, tele-operated systems, with applications in remote and minimally invasive surgery, and large-scale systems, such as smart power-distribution networks. Figure 1. A schematic of a networked control system. Although, from a hardware point of view, the implementation of wireless NCSs is coming within reach, the number of actual deployments has remained strikingly small. The reason for this lack of industrial deployment is that a shared communication infrastructure undermines the basic assumptions on which traditional feedback control systems rely. As a consequence, the creation of reliable, high-performance NCSs requires the development of new theoretical foundations for remote control over unreliable, shared wired and wireless networks. In this PhD research, two main contributions have been made to the field of NCSs:
- The first contribution is the development of a unifying mathematical framework that captures the essence of the design problems for NCSs. This mathematical model integrates all the heterogeneous phenomena that are experienced in real-life applications of NCSs, while still offering an elegant methodology for stability and performance analysis. Figure 2. Some examples of engineering applications heavily benefiting from networked and event-triggered control.
- The second contribution is the proposition of novel event-triggered control algorithms that abandon the conventional paradigm of periodic sampling of measurements and periodic control updates. Instead, event-triggered control algorithms only send measurement data and control commands over the network when needed, resulting in fewer transmissions (a minimal sketch of such a triggering rule is given below). This asynchronous transmission of data alleviates the high requirements that feedback control poses on the computational and communication platforms.
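As an illustration of such an event-triggered rule, the sketch below simulates a simple discrete-time loop in which the sensor transmits only when the state has drifted by more than a relative threshold from the last transmitted value; the model, gain, and threshold are generic textbook choices made for this example, not the specific algorithms developed in the thesis.

import numpy as np

def event_triggered_run(A, B, K, x0, sigma=0.1, steps=100):
    """Simulate x+ = A x + B u, transmitting the state only on events."""
    x = np.array(x0, dtype=float)
    x_sent = x.copy()               # last value transmitted over the network
    transmissions = 0
    for _ in range(steps):
        if np.linalg.norm(x - x_sent) > sigma * np.linalg.norm(x):
            x_sent = x.copy()       # event: send a fresh measurement
            transmissions += 1
        u = -K @ x_sent             # controller holds the last received value
        x = A @ x + B @ u
    return x, transmissions

# Example: a discretized double integrator with an assumed stabilizing gain.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[10.0, 5.0]])
print(event_triggered_run(A, B, K, x0=[1.0, 0.0]))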
In summary, this research has resulted in new modeling, analysis and design paradigms for NCSs that form a key step in bridging the gap between the traditionally separate disciplines of control, computation and communication. Figure 3. Networked and event-triggered control systems bridge the gap between control, computation and communication engineering. Electrical Engineering dr. B.L.J. Gysen MSc The PhD research (cum laude) of Bart was conducted in the Electromechanics and Power Electronics (EPE) group of Electrical Engineering and supervised by prof. Elena Lomonova and dr. Johannes Paulides. The project was funded by the SKF Automotive Development Centre located in Nieuwegein. Bart is currently working at Prodrive B.V. on the development of actuators, combined with a part-time position as an assistant professor within the EPE group. Generalized harmonic modeling technique for 2D electromagnetic problems Driving a car is up to 60% more comfortable with the TU/e EPE-SKF active suspension! Achieving this 60% improvement in comfort required an electromagnetic approach to active suspension: active suspensions are not new, but current systems rely on hydraulics to respond to uneven road conditions. TU/e developed this advanced system in collaboration with the Swedish company SKF, which has patented the technology and is looking into marketing it. The TU/e EPE-SKF suspension system senses road inputs and feeds them to the controller, which takes immediate action, significantly improving either vehicle handling or ride quality. Figure 1. Evolution of the harmonic modeling technique applied to two-dimensional problems. Figure 2. Bart Gysen together with the BMW 530i in which the electromagnetic suspension system is installed at the front wheels. Generalized electromagnetic design framework (fundamental research) Automated electromagnetic design tools can be based on analytical equations, lumped equivalent models, and/or boundary/finite-element methods. However, all these techniques have drawbacks when it comes to truly representing and predicting device behavior. Analytical equations require many correction factors, which are only applicable to a certain solution domain. Two-dimensional semi-analytical solutions could provide a more general solution, although up to this PhD research their use was still limited to particular electrical machine configurations. To design this electromagnetic active suspension, research into an extended, generalized electromagnetic semi-analytical framework was required. This framework refers to analytical models with a limited form of discretization. A modeling technique belonging to this group is Fourier modeling using transfer relations, sometimes referred to as sub-domain modeling or a closed-form analytical solution of a bounded problem. These semi-analytical models, derived from the formal solution of Maxwell's equations, allow a rapid exploration of the virtual prototype solution space in an early stage of the design procedure, before the finite-element refinement of the chosen prototype. In this PhD research project, the prior art has been significantly extended and unified to create a methodology which, besides active suspension, can be applied to almost any electromagnetic problem in Cartesian, polar and axisymmetric coordinate systems (see Figure 1). Active suspension design using electromagnetics (applied research) The direct-drive electromagnetic suspension is composed of a coil spring in parallel with a tubular permanent magnet actuator with integrated eddy current damping (a patented solution). The spring supports the sprung mass, while the tubular actuator, with integrated passive electromagnetic damping, either consumes or regenerates energy to improve comfort or handling while guaranteeing fail-safe operation. The suspension system is installed as a front suspension in a BMW 530i test vehicle (see Figure 2). Both extensive laboratory experiments and on-road tests prove the capability of this direct-drive electromagnetic active suspension system. Industrial Design dr.ir. B.J. Hengeveld Bart Hengeveld conducted his PhD research at TU/e under the supervision of prof.dr. Kees Overbeeke† and prof.dr. Jan de Moor (Radboud University Nijmegen). He currently works as an Assistant Professor at the Department of Industrial Design, in the Designing Quality in Interaction group. Designing LinguaBytes Young children acquire language seemingly effortlessly. This is, however, not the case for children who are non- or hardly speaking.
These children often experience developmental delays, in many cases caused by perinatal brain injury or its consequences. For example, many of these children suffer from motor disabilities, restricting their access to their environment and resulting in an impoverished experiential base for language development. Also, the facial and gestural expressions of these children can be difficult for a caregiver to interpret, leading to fewer communicative reactions than normally developing children receive, as communication with non- or hardly speaking children is highly dependent on non-verbal expressions. Additionally, these children require much physical care, which means that less time and attention remains for caregivers to spend on social interaction and communication. Figure 1. The LinguaBytes system consists of interface modules and accompanying tangible input materials. All five generations of LinguaBytes prototypes were experimentally tested in real-life settings. The final prototypes have been in use for over two years to date. To reduce delays in these children's language development, early intervention programs are available. However, the main weakness these programs share - successful though they may be commercially - is that they are based on how we interact with the PC, instead of on how young children interact with the world. Young children are explorative, multi-sensory and full of initiative, whereas a PC was originally built for solitary office work. To counter the large supply of PC-based early intervention programs, LinguaBytes was developed in collaboration with Radboud University Nijmegen and Kentalis. LinguaBytes was developed using a research-through-design approach. In five consecutive iterations, different prototypes of increasing realism and experienceability were designed, built, and then tested in real-life settings. This led to the final design, of which three prototypes were built and tested longitudinally. The final LinguaBytes design is a tangible, highly adaptable play-and-learning system consisting of more than 300 RFID-tagged, tangible play materials that grant children playful access to interactive stories and linguistic games and exercises. LinguaBytes was funded by Dr. W.M. Phelps-Stichting voor Spastici (main sponsor), Stichting VSB-Fonds, SKAN Fonds, Nederlandse Stichting voor het Gehandicapte Kind, Nationaal Revalidatie Fonds, Stichting Kinderpostzegels Nederland, Johanna Kinderfonds and Stichting Bio-Kinderrevalidatie. Ten months of research were done while working at TU Delft. Currently, Oost NV is researching opportunities to make LinguaBytes commercially available. Figure 2. The LinguaBytes system was designed to facilitate communication between children and their caregivers. It does this by generating spaces where child and caregiver meet and communicate, but also by offering materials that enable communication. Four spaces for (inter)action can be identified: two spaces where either the caregiver or the child is in control, and two other spaces where they meet. Chemical Engineering and Chemistry dr.ir. J. Jovanović After obtaining his MSc in the Oil & Petrochemistry group at the University of Belgrade, he moved to the Netherlands in 2005, where he obtained his MSc and PhD from the Laboratory of Chemical Reactor Engineering at Eindhoven University of Technology. He is currently working as a process development engineer at Royal Dutch Shell. Liquid-liquid microreactors for phase transfer catalysis Figure 1.
Fluidic control of slug sizes, and therefore of the organic phase surface-to-volume ratio, allowed productivity optimization and selective synthesis of the mono-alkylated product of the phase transfer catalyzed alkylation of phenylacetonitrile. Microreactor phase transfer catalysis Over the last decade, microreactors have emerged as an attractive alternative to the conventional batch reactors commonly found in the chemical industry. Their sub-millimeter inner diameter channels allow surface-to-volume ratios above 10,000 m2/m3, resulting in a significant intensification of mass and heat transfer. The liquid-liquid chemical processes that would benefit most from microreactor application are those based on phase transfer catalysis. These employ catalysts that can penetrate the interface between two phases, allowing reactions to take place between otherwise non-reactive components. Consequently, the combination of phase transfer catalysis and microreactors has great potential for application in the fine chemical, polymer and pharmaceutical industries. Research The research carried out within this thesis studied the hydrodynamics, reaction applications and scale-up of liquid-liquid microreactors. The liquid-liquid flow patterns in microchannels were evaluated in terms of stability, surface-to-volume ratio, achieved throughput and extraction efficiency. Furthermore, a slug flow pressure drop model was developed and validated. In the second part of the research, optimal flow patterns were utilized to achieve a degree of reaction control otherwise unachievable in conventional reactors. Finally, the combination of the two technologies was demonstrated on a pilot-scale microreactor. Results The evaluation of flow patterns showed that slug and bubbly flow patterns are the most promising for reaction applications, due to their large surface-to-volume ratios and extraction efficiencies. The fluidic control over the interfaces in a slug flow microreactor was employed to study a complex system: the liquid-liquid phase transfer catalyzed alkylation of phenylacetonitrile. An interdigital mixer and redispersion capillary reactor assembly was developed to achieve bubbly flow with a surface-to-volume ratio above 200,000 m2/m3. The developed process allowed highly selective, ton-per-annum benzyl benzoate production in a single microreactor. Compared with the conventional phase transfer catalyzed esterification, this novel reactor allowed significant increases in yield, while removing an energy-intensive distillation step, eliminating the use of solvents and bases, and improving process safety. Figure 2. The combination of a redispersion capillary and a high-pressure interdigital mixer resulted in a novel microreactor system that could achieve bubbly flow with a surface-to-volume ratio otherwise unachievable in conventional reactors. The system was applied to benzyl benzoate production, achieving a product yield of 98%.
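As a rough geometric illustration of the surface-to-volume figures quoted above (idealized shapes, not the actual channel and droplet geometries): a cylindrical channel of inner diameter d has S/V = 4/d, so a diameter of about 0.4 mm already gives roughly 10,000 m2/m3, while a spherical droplet has S/V = 6/d, so droplets of about 30 micrometers reach the 200,000 m2/m3 regime.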
Electrical Engineering dr.ir. M.J.H. Marell Milan conducted his PhD research in the Photonic Integration group at Eindhoven University of Technology under the supervision of prof.dr. Martin Hill, prof.dr.ir. Meint Smit and dr. Erwin Bente. He is currently working in the Minimally Invasive Healthcare department of Philips Research on optical imaging modalities for image-guided intervention and therapy. Gap plasmon mode distributed feedback lasers Optical circuits are extremely useful for the fast transport and processing of the large amounts of data often encountered in telecommunication. Their field of application is continuously being extended, and they are currently also being considered for use in sensing and computing applications. Most optical circuits still have analog functionality, but in photonics too there is a growing interest in digital circuits, such as logic gates and flip-flops (switches). These circuits allow data processing without an optical-to-electrical conversion, resulting in low power consumption and operation at very high speeds. Figure 1. A scanning electron microscope photo of the semiconductor body of an extremely small distributed feedback (DFB) laser. With a waveguide width 25 times smaller than that of a regular waveguide, this laser is the smallest electrically powered DFB laser in the world. The overlay shows the electric field intensity of the light in the cavity. Figure 2. The emission spectrum (left) and output power versus current (right) of the DFB laser. The laser operates at a single wavelength; other wavelengths are suppressed by more than 21 dB by the grating structure. The threshold current of this laser is smaller than 500 micro-amperes and can be decreased even further, to only a few micro-amperes, by reducing the length. In a typical optical circuit, light is transported through channels. These channels, so-called waveguides, come in many different forms, such as optical fibers. Conventional waveguides rely on refractive index differences to trap the light in their core. For the integration of photonic circuits on a chip, this poses a difficulty. Unlike electrical connections, optical waveguides require a minimum size for the light to stay inside. This is due to the diffraction limit of light, which relates the size of the structure to the wavelength of the light inside. Therefore, until now, optical integrated circuits have always been relatively large compared with their electrical counterparts. Metals can offer a solution to this problem. They owe their characteristic properties to the presence of free electrons. Under certain circumstances these electrons can couple with light traveling parallel to the surface of the metal. So-called surface plasmons are formed, which then 'stick' to and travel along the surface of the metal. Surface plasmons can be used to make extremely small optical waveguides. Electrically powered lasers are a key component of optical circuits. For a long time it was believed that small lasers based on plasmonic waveguides could not be fabricated, due to the high absorption of light inside these waveguides. Milan has shown that it is possible to fabricate not only lasers, but also more complex structures based on these waveguides. The new structures, known as distributed Bragg reflectors, provide control over the wavelength and emissive properties of lasers and allow coupling to other optical components; all requirements for successful further miniaturization and competitive integration of optical circuits. Industrial Engineering & Innovation Sciences dr. M. van den Tooren Marieke conducted her PhD project in the Human Performance Management (HPM) group at the Department of IE&IS. Her project was supervised by prof.dr. Jan de Jonge (HPM group) and prof.dr. Christian Dormann (Ruhr University Bochum). Marieke currently works as an assistant professor of Work & Organizational Psychology at Tilburg University.
Job demands, job resources, and self-regulatory behavior Exploring the issue of match According to the Demand-Induced Strain Compensation (DISC) Model (also known as the Head-Heart-Hand Model), worker health and well-being can be explained by two distinct classes of job characteristics: job demands and job resources. Job demands are work-related tasks that require effort. Job resources are work-related assets that can be employed to deal with job demands. In the long term, high job demands (e.g. lifting patients) will have an adverse impact on worker health and well-being (e.g. back pain) unless workers have sufficient job resources (e.g. a hoist) to deal with their demanding jobs. Three types of job demands and job resources can be distinguished: cognitive ('head'), emotional ('heart'), and physical ('hand') job demands and job resources. Figure 1. The DISC Model's matching principle: to prevent health complaints there should be a balance between corresponding types of job demands and job resources. The aim of this PhD project was to test two key assumptions underlying the DISC Model: (1) to prevent health problems, job demands can best be dealt with through the activation of job resources that correspond to the type of job demands concerned (i.e. matching job resources), and (2) workers who are faced with a specific type of demanding work situation are generally inclined to use matching job resources to deal with these job demands. The first key assumption was tested by a review of 29 studies on the DISC Model. To test the second key assumption, four empirical studies were conducted. Two studies focused on the self-regulation processes involved in the activation of job resources (i.e. alertness to available job resources, evaluation of the relevance of job resources, and decision-making regarding the actual use of job resources). The objective of the two other studies was to investigate whether workers' personal characteristics (i.e. specific active coping styles and regulatory focus) should be included in the DISC Model, assuming that these person variables facilitate or inhibit the use of job resources in demanding situations at work. Figure 2. A physical job demand (lifting a patient) is dealt with by employing a corresponding, physical job resource (a hoist). Based on the findings of this PhD project, it was concluded that the DISC Model as it stands now seems warranted, with regard to both its two key assumptions and the predictor variables included in the model (i.e. job characteristics). Biomedical Engineering dr.ir. A. de Vries Anke performed her PhD at TU/e in collaboration with Philips Research. The project was supervised by prof.dr. Holger Grüll, principal scientist at Philips Research and part-time professor at the Department of Biomedical Engineering. Anke is currently working at the Catharina Hospital as a Medical Physicist Trainee in nuclear medicine. Multimodal nanoparticles for quantitative imaging Computed Tomography (CT) is the dominant clinical imaging modality for the detection and diagnosis of trauma and cardiovascular diseases and for follow-up in cancer treatment. The still growing use of CT is associated with an increased exposure of patients to potentially dangerous ionizing radiation. Hence, hospitals are now seeking novel methods to obtain high image quality at the lowest possible radiation dose. Figure 1.
The transmission spectrum of the emulsion obtained with Spectral CT shows the typical iodine K-edge at 30.2 keV, which corresponds to an iodine concentration of 1.0 molar. Figure 2. Spectral CT allows distinguishing between the x-ray absorption of bone structure and that of the iodine-based contrast agent (visualized in red). Spectral CT is a new imaging technique which, in contrast to conventional CT, is able to quantify concentrations of high-Z elements such as iodine. The technique makes use of an energy-resolved detector, which allows the K-edge absorption of high-Z elements to be quantified and can therefore also distinguish between different elements. The K-edge absorption is directly proportional to the tissue concentration of the respective element (Fig. 1). Many clinical applications may benefit from this technique, as it is now possible to distinguish between sources of high absorption in body tissue such as bones, calcifications in plaque, and intravenously administered contrast agents (Fig. 2). Our research focuses on a multimodal imaging study, in which we developed a radiolabeled iodinated emulsion that was "visible" with both Spectral CT and SPECT. Spectral CT and SPECT imaging showed a high uptake of the emulsion in the liver and spleen, and the Spectral CT images are very comparable to the images obtained with SPECT (Fig. 3). The in vivo biodistribution was quantified per tissue with Spectral CT and compared with biodistribution data obtained with SPECT as well as with the gold-standard ICP-MS. The quantification with SPECT as compared with ICP-MS revealed a linear correlation with a slope of 0.93, whereas iodine quantification with Spectral CT versus ICP-MS showed a correlation of 1.0. This study shows for the first time that Spectral CT is a suitable technique for quantitative CT applications and is able to quantify iodine concentrations above 2 mM. This will affect many clinical applications, such as the detection of tumors (e.g. hepatocellular carcinoma), as fewer scans are required to obtain the same information. Benefits The use of Spectral CT can provide quantitative data allowing better diagnosis, while reducing the number of scans per patient as well as the radiation dose per scan. Figure 3. The biodistribution of the emulsion droplets as visualized with SPECT (left) shows good agreement with the iodine concentration maps obtained with Spectral CT (right), showing for instance both a high uptake of nanoparticles in the liver and a signal void in the gall bladder. Applied Physics dr.ir. A.P. Wijnheijmer Ineke conducted her PhD project at the Applied Physics department of Eindhoven University of Technology, in the group Photonics and Semiconductor Nanophysics. The project was supervised by prof.dr. Paul Koenraad. Ineke is currently working at Océ Technologies, investigating ink-paper interactions. Manipulation and analysis of a single dopant atom in GaAs It is impossible to imagine life today without semiconductors. Almost every piece of electronic equipment contains a computer chip made of semiconducting material. The functionality of the components on computer chips, e.g. transistors, is realized by adding dopant atoms to the semiconductor host in order to introduce free charge carriers. Figure 1. STM image of silicon dopants in GaAs (24 nm x 26 nm). The atomic layers of the GaAs crystal are visible as thin horizontal lines; the silicon dopants appear as bright spots in the centers of their ionization rings.
Over the last few decades, the size of transistors has decreased tremendously, as was predicted by Moore as early as 1965. Where the channel width of a transistor back in the 1980s was more than 1 micrometer, its width in state-of-the-art devices today is only 22 nanometers, as was published by Intel on May 2, 2011. Research devices that are even smaller than commercial devices have reached dimensions where single dopants can dominate the transport properties and where interfaces affect the properties of dopants. Therefore, fundamental research on the atomic scale into the properties of individual dopants and the influence of surfaces is of crucial importance. We use Scanning Tunneling Microscopy (STM), a technique that combines superb spatial resolution with reasonable energy resolution. In STM, an atomically sharp needle scans over a surface to probe the underlying material. The most striking features in STM images of the model system of silicon dopants in gallium arsenide (GaAs) are the ionization rings belonging to individual dopants. The ionization occurs when the needle is sufficiently close to the dopant, due to the nanoscale repulsive electric field induced by the negatively charged needle. This is analogous to the gate of a transistor. The purpose of the gate in a transistor is to change the ionization state of the dopants, such that the channel for charge carriers is opened or closed. The switch obtained in this way should be operational at low gate voltages for efficient devices. The property that governs this switching is the binding energy of the dopant. By investigating the ionization rings, we found that the binding energy is enhanced near the surface. This means that dopants close to the surface contribute less to the functionality of the devices. Thus, when the down-scaling continues and devices become dominated by their surface-induced properties, the transistors might fail to operate properly. Mathematics and Computer Science dr.ir. C.M.E. Willems Niels conducted his PhD research together with prof.dr.ir. Jack van Wijk, dr.ir. Huub van de Wetering, and ir. Roeland Scheepens, investigating ways to visualize attributed trajectories of moving objects. Niels is currently working at SynerScope B.V. as a software architect, developing Big Data visualizations for link analysis. Visualization of vessel traffic Movement is everywhere! It occurs, for example, in transportation and animal movement. The movement of objects is often analyzed by means of their trajectories as they appear in space and time. Vessel traffic has an excellent infrastructure for gathering rich data with many attributes on a large scale, by means of the so-called Automatic Identification System (AIS), resulting in massive data sets. This data needs to be analyzed, for instance by maritime surveillance operators at the Coastguard, but so far only a limited number of applications exist. Most analyses boil down to the question: "What are possible reasons why certain movements have occurred?" To answer this question for these massive movement data sets, Niels has developed density maps. Figure 1. One week of AIS data visualized with density maps. Typical maritime features are highlighted, such as shipping lanes, anchor zones, and slow movements around oil platforms. Density maps are pictures of convolved trajectories, where a kernel moves along a trajectory with the actual speed of the movement. As a result, there is a high contribution to the density field in slow areas and a low contribution in fast areas. These density fields can be varied with a number of parameters, such as kernel size, filters on attributes, or expressions of attributes. By combining several density fields, each with different parameters, we obtain a density map that can be used for analysis. Using graphics hardware, users can tune these parameters interactively.
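A minimal sketch of this construction is given below; it is illustrative only, since the actual implementation runs on graphics hardware and supports attribute filters and more elaborate kernels than the plain Gaussian used here.

import numpy as np

def density_map(positions, times, grid_x, grid_y, kernel_size=0.5):
    """positions: (N, 2) array of trajectory points; times: (N,) timestamps in seconds.
    Each sample contributes a Gaussian kernel weighted by the time spent reaching it,
    so slow movement yields a high contribution and fast movement a low one."""
    X, Y = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(X)
    for i in range(1, len(positions)):
        x, y = positions[i]
        dt = times[i] - times[i - 1]          # time spent on this segment
        r2 = (X - x) ** 2 + (Y - y) ** 2
        density += dt * np.exp(-r2 / (2 * kernel_size ** 2))
    return density

# Example with a toy trajectory: the vessel slows down near (2, 2).
track = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [2.1, 2.1]])
stamps = np.array([0.0, 10.0, 20.0, 60.0])
grid = np.linspace(0.0, 3.0, 61)
print(density_map(track, stamps, grid, grid).max())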
Density maps can be used for many applications, such as temporal aggregation (what happened when?), anomaly detection (is this movement usual?), risk assessment (what are dangerous movements?), extracting routes (where do multiple objects move in the same direction?), or finding potential drifters (objects that follow the air or water flow more than their own direction). In conclusion, density maps are not only pretty pictures, but are also useful in helping users to understand large amounts of movement data. This research points towards next-generation coastal surveillance systems and also allows movements in other domains to be analyzed. Figure 2. A vessel sailing between Amsterdam and Scheveningen is marked as suspicious, since it is sailing in an area where usually no ships sail. This work has been carried out as part of the Poseidon project at Thales Nederland under the responsibility of the Embedded Systems Institute (ESI). This project is partially supported by the Dutch Ministry of Economic Affairs under the BSIK program. Information Academic Ceremonies, La Place 0.35, telephone: +31 40 247 5520, www.tue.nl