The communications technology journal since 1924
90th anniversary issue, 1924-2014

Delivering content with LTE Broadcast 4
Nine decades of innovation 11, 19 & 47
Non-line-of-sight microwave backhaul for small cells 12
Software-defined networking: the service provider perspective 20
HSPA evolution: for future mobile-broadband needs 26
Next generation video compression 33
Next generation OSS/BSS architecture 38
Carrier Wi-Fi: the next generation 48

Editorial
Celebrating 90 years of technology insights

“The object of this magazine is to spread information concerning the work and activities of this and associated enterprises, and to furnish a connecting link between these latter and the head firm.” These lines are taken from the introduction to The L. M. Ericsson Review – Tidskrift för Allmänna Telefonaktiebolaget L. M. Ericsson – when the first issue of the new journal was published in 1924.

In his historical article to celebrate this journal’s 50th anniversary (1974), Sigvard Eklund, former editor of Ericsson Review (1943-1972), wrote the following words: “Apart from an article on ‘the development and present size of the LM Ericsson Group,’ there was a 10-page description of the company’s automatic 500-line selector system, illustrated by a few photographs of the recently opened automatic exchange in Rotterdam, one of the first major exchange equipments to be delivered up to that time.”

In the 40 years since then – and the 90 years since this journal first started promoting technology – the world we live in has been transformed by technology to such a degree that I sometimes find it difficult to recognize the old one. I would like nothing more than to be able to give you a glimpse of what we will be writing about 90 years from now… what will the 22nd century bring? What generation of technology will have been reached by then, what business models will we use, how will we pay for things, and what sort of devices will we connect with?
These are just some of my questions, and I even wonder whether they will be relevant; maybe we won’t use devices at all, as connectivity will simply exist in everything. Even if I don’t have an open window on the next century, the research and development we carry out at Ericsson today is aimed at the next generation, which promises to continue along the current path of evolution: to be data driven, video heavy and influenced by the gaming world.

The articles in this edition address a wide range of telecommunication issues, but they all have one thing in common, and that is performance. Getting data through the network fast and efficiently so that the best user experience can be delivered to subscribers is a recurring theme, no matter what part of the network architecture is being discussed. From future OSS/BSS architecture to integrated Wi-Fi and packets stuffed with data, the message is clear… the faster the network can serve one subscriber, the faster it can move on to the next.

One thing I am convinced about, however, is convergence. And not just in terms of fixed and mobile, but everywhere. Industries and technologies are merging. The lines between TV, the internet and telecommunication will not exist for much longer. Education, work and family life are all coming together, and the key is individualism. With a connection, every individual on the planet has the potential to take control over their life. The traditional models of work and education are being challenged. Connectivity is providing individuals with more choices, greater flexibility and the ability to mix things up in a way that suits them, their budget, their lifestyle and their goals.

We are not there yet, and there are many pieces that need to be in place, but right now we are laying the foundation for the Networked Society. Mobile subscriptions are set to rise to 9.3 billion and mobile data traffic to grow by 45 percent (CAGR) by 2019.
The opportunities are becoming available for more people, and connectivity is becoming a way of life. This edition is a celebration of 90 years of technology innovation. I hope you enjoy it.

The most frequent users interact with their smartphone more than 150 times a day, or an average of every seven minutes during the daytime.*
*Ericsson Mobility Report, November 2013

Ulf Ewaldsson
Chief Technology Officer
Head of Group Function Technology at Ericsson

CONTENTS
90th anniversary 2014 – a collection of articles from 2013

4 Delivering content with LTE Broadcast
The data volume in mobile networks is booming – mostly due to the success of smartphones and tablets. LTE Broadcast is one way of providing new and existing services in areas that can at times be device dense, such as stadiums and crowded city centers. Built on LTE technology, LTE Broadcast extends the LTE/EPC with an efficient point-to-multipoint distribution feature that can serve many devices with the same content at the same time. This article was originally published on February 11, 2013.

To bring you the best of Ericsson’s research world, our employees have been writing articles for Ericsson Review – our communications technology journal – since 1924. Today, Ericsson Review articles have a two-to-five-year perspective, and our objective is to provide you with up-to-date insights on how things are shaping up for the Networked Society.
12 Non-line-of-sight microwave backhaul for small cells
The evolution to denser radio-access networks with small cells in cluttered urban environments has introduced new challenges for microwave backhaul. A direct line of sight does not always exist between nodes, and this creates a need for near- and non-line-of-sight (NLOS) microwave backhaul. This article was originally published on February 22, 2013.

Address: Ericsson, SE-164 83 Stockholm, Sweden. Phone: +46 8 719 00 00
Publishing: Ericsson Review articles and additional material are published on www.ericsson.com/review. Use the RSS feed to stay informed of the latest updates. Articles are also available on the Ericsson Technology Insights app for Android and Apple tablets. The link for your device is on the Ericsson Review website: www.ericsson.com/review. If you are viewing this digitally, you can download the app from Google Play or from the App Store.
Publisher: Ulf Ewaldsson
Editorial board: Håkan Andersson, Hans Antvik, Ulrika Bergström, Joakim Cerwall, Deirdre P. Doyle, Dan Fahrman, Anita Frisell, Jonas Högberg, Ulf Jönsson, Magnus Karlsson, Cenk Kirbas, Sara Kullman, Kristin Lindqvist, Börje Lundwall, Hans Mickelsson, Ulf Olsson, Patrik Regårdh, Patrik Roséen and Gunnar Thrysin
Editor: Deirdre P. Doyle, deirdre.doyle@jgcommunication.se
Chief subeditor: Birgitte van den Muyzenberg
Contributors: John Ambrose, Håkan Andersson, Paul Eade, Ian Nicholson, Gunnar Thrysin and Peter Öhman
Art director and layout: Jessica Wiklund and Carola Pilarz
Illustrations: Claes-Göran Andersson
Printer: Edita Bobergs, Stockholm
ISSN: 0014-0171
Volume: 91, 2014

11, 19 & 47 Nine decades of innovation
Automatic exchanges to smart networks.

20 Software-defined networking: the service provider perspective
An architecture based on software-defined networking (SDN) techniques gives operators greater freedom to balance operational and business parameters, such as network resilience, service performance and QoE against opex and capex.
With its beginnings in data-center technology, SDN has developed to the point where it can offer significant opportunities to service providers. This article was originally published on February 21, 2013.

26 HSPA evolution for future mobile-broadband needs
As HSPA evolution continues to address the needs of changing user behavior, new techniques develop and become standardized. This article covers some of the more interesting techniques and concepts under study that will provide network operators with the flexibility, capacity and coverage needed to carry voice and data into the future, ensuring HSPA evolution and a good user experience. This article was originally published on August 28, 2013.

33 Next generation video compression
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services, in particular ultra-HD television (UHDTV). This article was originally published on April 24, 2013.

38 Next generation OSS/BSS architecture
When two large companies merge, it often takes a while – years in some cases – before processes get redesigned to span all departments and the new organization settles into a lean and profitable machine. The same is true of OSS/BSS. These systems have been designed for two different purposes: to keep the network operational and to keep it profitable. But today’s demanding networks need the functions of both of these systems to work together, and to work across the varying life cycles of products and services. This article was originally published on November 25, 2013.

48 Carrier Wi-Fi: the next generation
Putting the network in control of whether or not a device should switch to and from Wi-Fi, and when it should switch, will make it easier for operators to provide a harmonized mobile broadband experience and optimize resource utilization in heterogeneous networks.
This article was originally published on December 20, 2013.

Delivering content with LTE Broadcast

Ericsson has demonstrated LTE Broadcast with evolved Multimedia Broadcast Multicast Services at a number of international trade shows. These demos have shown the solution’s potential to create new business models for telcos and to ensure consistent QoS, even in very densely populated places like sports venues.

THORSTEN LOHMAR, MICHAEL SLSSINGAR, VERA KENEHAN AND STIG PUUSTINEN

The solution is built on LTE technology, extending the LTE/EPC with an efficient point-to-multipoint distribution feature that can serve many eMBMS-capable LTE devices with the same content at the same time. It can be used to boost capacity for live and on-demand content so that well-liked websites, breaking news or popular on-demand video clips can be broadcast – offloading the network and providing users with a superior experience.

Single-frequency network (SFN) technology is used to distribute broadcast streams into well-defined areas – where all contributing cells send the same data during exactly the same radio time slots. The size of the coverage area of an LTE SFN can vary greatly, from just a few cells serving a stadium to many cells delivering content to an entire country. eMBMS-enabled devices can select the broadcast streams within the SFN that are of interest. In this way, devices download only relevant data – rather than receiving everything within the area only to throw unwanted data away. This ensures that devices work in a battery-efficient way.

Business incentives
The parallel evolution of mobile technologies and devices has made it possible for people to consume video using handheld equipment without compromising their experience. Based on an Ericsson ConsumerLab study1, the most recent Ericsson Mobility Report2 states that video is the biggest contributor to mobile-traffic volumes, accounting for more than 50 percent. And the growth of traffic is expected to continue, increasing 12-fold by 2018.

According to another study, carried out by Mobile Content Venture3, more than half of US consumers would consider viewing programs on their smartphones and tablets – 68 percent of respondents stated they would watch more TV if the content was provided on their mobile device, and 61 percent said they would switch operator to gain access to mobile-TV services. The majority of respondents said the content they would find interesting to watch while on the move includes local news and weather information, movies, national news, sitcoms and sports.

To meet this growing demand for mobile TV, operators are rapidly updating their offerings, continuously adding new services and content to live and on-demand streams – increasing the volume of information transported. Naturally, this causes network utilization to rise, requiring more efficient ways to deliver content, while network dimensioning becomes all the more crucial, and new business models are needed to maintain ARPU. Given the direction in which the industry is clearly moving, Ericsson has developed an end-to-end LTE Broadcast solution. The concept has been built on eMBMS technology and based on a set of use cases that can be divided into two main categories: delivery of live premium content; and unicast off-loading (for example, local device caching).

BOX A – Terms and abbreviations
AL-FEC: Application Layer FEC
API: application program interface
ARPU: average revenue per user
BLER: block error rate
BM-SC: Broadcast Multicast Service Center
CDN: content distribution network
eMBMS: evolved MBMS
eNB: eNodeB
EPC: Evolved Packet Core
EPS: Evolved Packet System
FDD: frequency division duplex
FEC: forward error correction
FIFA: Fédération Internationale de Football Association
FLUTE: file delivery over unidirectional transport
HEVC: High Efficiency Video Coding
IMB: integrated mobile broadcast
ISD: inter-site distance
ISI: inter-symbol interference
M2M: machine-to-machine
MBMS: Multimedia Broadcast Multicast Service
MBMS-GW: MBMS gateway
MBSFN: Multimedia Broadcast over an SFN
MCE: Multicell Coordination Entity
MME: Mobility Management Entity
MPEG: Moving Picture Experts Group
MPEG-DASH: MPEG Dynamic Adaptive Streaming over HTTP
NBC: National Broadcasting Company
OFDM: orthogonal frequency division multiplexing
PGW: packet data network gateway
SDK: software development kit
SFN: single-frequency network
SGW: serving gateway
SNR: signal-to-noise ratio
TDD: time division duplex
UDP: User Datagram Protocol
UE: user equipment

FIGURE 1 – Broadcast versus unicast (illustration of resource usage per viewer)

Table 1: Broadcast versus unicast
Broadcast: one data channel per content item; a limited number of data channels, an unlimited number of users; resource allocation is viewer independent.
Unicast: one data channel per user; unlimited channels, a limited number of users; resources allocated when needed.
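The contrast in Table 1 can be made concrete with a toy resource model. This is a sketch with purely illustrative numbers (the viewer count, the one-unit-per-channel assumption and both function names are not from the article):

```python
# Toy model contrasting unicast and broadcast resource usage.
# Illustrative assumption: each active data channel consumes one unit
# of radio resource, regardless of content.

def unicast_channels(num_viewers: int) -> int:
    """Unicast: one dedicated data channel per viewer."""
    return num_viewers

def broadcast_channels(num_viewers: int, num_content_items: int) -> int:
    """Broadcast: one shared channel per content item,
    no matter how many viewers tune in."""
    return num_content_items if num_viewers > 0 else 0

# A stadium scenario: 20,000 devices all watching the same replay stream.
viewers, streams = 20_000, 1
print(unicast_channels(viewers))             # 20000 channels
print(broadcast_channels(viewers, streams))  # 1 channel
```

The point of the sketch is the viewer-independence noted in Table 1: the broadcast cost is fixed by the number of content items, not by the audience size.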
Premium content
Despite the diversity of available content and an obvious shift by subscribers towards on-demand viewing, watching certain events and programs live continues to appeal to large audiences. London 2012 is a good example of an event that enjoyed widespread live-viewing appeal. Ratings place the NBC coverage of the games as some of the most watched TV in US history; almost half of the online video streams were delivered to tablets or smartphones, and revenue expectations were far surpassed. Some use cases for premium content follow.

Regional and local
This use case covers regional and local interest events, such as concerts, sports fixtures or breaking news.

National
This use case covers events of broad national interest, such as the Super Bowl and FIFA World Cup matches, as well as elections and royal weddings. Given suitable content security and digital-rights handling, this use case can be enhanced to allow users to store and replay the event on demand from their device for a certain period of time.

Venue casting
This use case covers specific locations such as shopping malls, museums, airports, university campuses and amusement parks. In this case, the operating enterprise may wish to broadcast content to users, which can vary from breaking news of national interest to very specific information, such as special offers available at the mall, additional information about the main artist of an art exhibition, or departures and arrivals information at the airport.

For all of these premium-content use cases, operators can deliver services on a nationwide basis as well as locally. The duration of a broadcast and the size of the geographical area where it is available can be managed dynamically, depending on the nature and relevance of the content. By using unicast for blended services, broadcast services can be complemented with interactivity – opening up new ways to generate revenue from content. At a soccer match, for example, these value-added services could include video streams carrying footage from additional camera angles, diverse audio coverage and live results of related matches taking place at the same time in other stadiums.

Unicast off-loading
MBMSs are traditionally associated with the delivery of live, linear TV, although the technology also supports file delivery. Exploiting this and the caching capability available in both mobile and fixed devices creates new possibilities for a range of use cases.

Popular content
Operators can choose to deliver popular TV and video clips to the local cache of a user’s device at their convenience. Based on content popularity and busy-hour-traffic distribution, operators can deliver content when network load is low. Content shared on popular video streaming sites, as well as the content provided by national and cable TV channels, can all be pre-loaded to mobile devices through broadcast – significantly reducing the overall network capacity required to deliver frequently-consumed video streams.

News
Daily clips and subscription content, such as a magazine, can be pre-delivered to the cache of a subscriber’s preferred device for that content.

Software upgrades
Upgrades to application software and operating systems are usually released over the network to large numbers of subscribers at the same time. This traditional way of performing an upgrade can be a burden on the network. By using LTE Broadcast instead, upgrades can be distributed as packages to a multitude of devices at little expense in terms of required resources – an approach that is particularly advantageous if the broadcast can be delivered during off-peak hours.

Broadcast is implemented as an extension to the existing EPS architecture (see Figure 7 and Box B).
Ericsson’s LTE Broadcast system is mainly a software upgrade applied to existing nodes. The concept was designed according to 3GPP MBMS 23.246 for E-UTRAN and to coexist with unicast data and voice services. LTE Broadcast gives operators the flexibility to tailor the way content is delivered to suit their capabilities.

Service dynamics
LTE Broadcast supports live streaming and file-delivery use cases. Different service combinations may be delivered simultaneously over the same bearer.

M2M and B2B
Over the coming decade, machine-to-machine (M2M) data traffic and the internet of things will create more connectivity demands on the network, and create the need for diverse types of eMBMS LTE-enabled devices. LTE Broadcast technology supports efficient one-to-many transfer of machine data in any file format, which can be used for M2M use cases – off-loading the network and providing the essential machine connectivity and control.

Ericsson value proposition
The concept of Ericsson’s LTE Broadcast solution enables unicast and broadcast service blending, aiming to help meet the challenges created by rising mobile usage and the growth of video traffic in LTE networks. The solution covers the entire chain from live encoder, through delivery via point-to-multipoint transport, to devices. Particular focus has been placed on the specification and implementation of the device, starting with the physical chipset as well as transport control middleware – essential enablers for the creation and deployment of eMBMSs. Implementing live streaming with MPEG-DASH4 is a technology choice that supports the common use of a player on devices and a live encoder head-end system for both unicast and broadcast – reducing operating costs and maximizing infrastructure usage.

FIGURE 2 – SFN principles: the maximum usable set of subframes within a radio frame (subframe = 1ms, radio frame = 10ms), and sector-edge multipath gain for neighboring cells C1 and C2.
Time dynamics
LTE Broadcast activation triggers the allocation of radio resources on a needs basis. A session may be active for a short time – say, several minutes – or for longer periods: several days in some cases. When the session is no longer active, the assigned radio and system resources can be reallocated for use by other services.

Location dynamics
LTE Broadcast can be activated for small geographical locations, such as stadiums and city centers, or for large areas, covering say an entire city or region. As long as there is sufficient capacity in the network, multiple broadcast sessions can be active simultaneously.

Resource allocation dynamics
Resources can be allocated freely to LTE Broadcast. Up to 60 percent of the radio resources for FDD, and up to 50 percent for TDD, can be assigned to a broadcast transmission.

As outlined later in this article, extensive simulation, lab testing and field trials have been conducted with the aim of characterizing the spectral efficiency of eMBMSs in deployed networks with mixed traffic profiles. The results show that live video broadcast with commercially acceptable levels of video and audio degradation is achievable. For video broadcasting to smartphones and tablets, compression using the H.2645 standard is feasible, with HEVC6 coming sometime in the near future.

System architecture
Mobile-communication systems such as LTE are traditionally designed for unicast communication, with a separate radio channel serving each device. The resources allocated to the device depend on the data rate required by the service, the radio-channel quality and the overall traffic volume within the cell. With LTE Broadcast, broadcast and unicast radio channels coexist in the same cell and share the available capacity; a subset of the available radio resources can temporarily be assigned to a broadcast radio channel.

Principles of the radio interface
The LTE radio interface is based on OFDM in the downlink, where the frequency-selective wideband channel is subdivided into narrowband channels orthogonal to each other.
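The resource-allocation caps above reduce to simple subframe arithmetic. A minimal sketch, assuming only LTE's 10ms/1ms frame structure and the 60/50 percent limits quoted above (the constant and function names are ours):

```python
# Sketch: how many subframes per LTE radio frame may carry broadcast.
# An LTE radio frame is 10 ms long and holds ten 1 ms subframes.
SUBFRAMES_PER_FRAME = 10

# Caps quoted in the article: up to 60% of radio resources for FDD,
# up to 50% for TDD, can be assigned to a broadcast transmission.
MBSFN_CAP = {"FDD": 0.6, "TDD": 0.5}

def max_mbsfn_subframes(duplex_mode: str) -> int:
    """Maximum whole subframes per frame assignable to broadcast."""
    return int(SUBFRAMES_PER_FRAME * MBSFN_CAP[duplex_mode])

print(max_mbsfn_subframes("FDD"))  # 6 subframes per 10 ms frame
print(max_mbsfn_subframes("TDD"))  # 5 subframes per 10 ms frame
```

The remaining four or five subframes per frame stay available for unicast data and voice, which is what lets broadcast and unicast coexist in the same cell.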
In the time domain, a 10ms radio frame consists of ten subframes of 1ms each, where a subframe is the smallest unit with full frequency span that can be allocated to a broadcast transmission. With eMBMS, all users within the broadcast area – provided they have the right subscription level and an MBMS-capable device – can receive broadcast content. By setting up a single bearer over the radio interface, operators can distribute a data stream to an unlimited number of users.

Although it is possible to deliver broadcasts within a single cell, the concept becomes truly interesting with SFN, the principles of which are illustrated in the lower part of Figure 2. Broadcast data is sent over a synchronized SFN: tightly synchronized, identical transmissions from multiple cells, using the same set of subframes and modulation and coding schemes, appear to the device as a transmission from a single large cell over a time-dispersive channel. This improves received signal quality and spectral efficiency (as shown in Figure 2). For a more detailed description, refer to LTE/LTE-Advanced for Mobile Broadband7. The maximum usable set of subframes is shown in the top left of the diagram, and the nodes are time-synchronized to a high precision.

By using a long data-symbol duration in OFDM, it is possible to mitigate the effect of inter-symbol interference (ISI) caused by delayed signals. For additional protection against propagation delays, LTE/OFDM uses a guard interval – delayed signals arriving during the guard interval do not cause ISI, and so the data rate can be maintained. For SFN, unlike unicast, signals arrive from many geographically separate sources and can incur a large delay spread. Consequently, one of the factors limiting MBMS capacity is self-interference from signals from transmitters with a delay that is greater than the guard interval (low transmitter density).
To overcome this, a long cyclic prefix is added to MBSFN-reserved subframes to allow for the time difference at the receiver; it corresponds to an ISD of approximately 5km.

Architecture
The eMBMS architecture, shown in Figure 3, is designed to handle transmission requirements efficiently.

FIGURE 3 – Architecture, with only the eMBMS components shown: the BM-SC, MBMS-GW, MME and eNBs, with control interfaces (SGmb, Sm, M3/S1-MME) and user-data interfaces (SGi-mb, M1), alongside the unicast path (SGi, S11, S1-U) through the S/PDN-GW.

The Broadcast Multicast Service Center (BM-SC) is a new network element at the heart of the LTE Broadcast distribution tree. Generic files or MPEG-DASH live video streams are carried as content across the BM-SC and made available for broadcast. The BM-SC adds resilience to the broadcast by using AL-FEC – which adds redundancy to the stream so that receivers can recover packet losses – and supports the 3GPP associated delivery procedures. These procedures include unicast-based file repair – allowing receivers to fetch the remaining parts of a file through unicast from the BM-SC – and reception reporting, so operators can collect QoE reports and make session-quality measurements.

Another new network element is the MBMS-GW, which provides the gateway function between the radio and service networks. It forwards streams from the BM-SC to all eNBs participating in the SFN transmission. IP multicast is used on the M1 interface between the gateway and the eNBs, so that the packet replication function of existing routers can be used efficiently. The gateway routes MBMS session control signaling to the MMEs serving the area. The MMEs in turn replicate, filter and forward session control messages to the eNBs participating in the specific broadcast session. The eNBs provide functionality for configuration of SFN areas, as well as broadcasting MBMS user data and MBMS-related control signaling on the radio interface to all devices. Note that the eNB contains the 3GPP Multicell Coordination Entity (MCE) function.
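The ~5km figure can be sanity-checked with speed-of-light arithmetic. The 16.67µs value used below is the extended cyclic prefix LTE defines for MBSFN subframes; the check itself is just distance = speed × time:

```python
# Sanity check: how far apart can SFN transmitters be before their
# relative delay exceeds the cyclic prefix and causes self-interference?
SPEED_OF_LIGHT = 3.0e8   # m/s
EXTENDED_CP = 16.67e-6   # s, extended cyclic prefix of MBSFN subframes

# A signal travelling for the full cyclic-prefix duration covers:
max_delay_distance = SPEED_OF_LIGHT * EXTENDED_CP
print(f"{max_delay_distance:.0f} m")  # ~5000 m, i.e. an ISD of roughly 5 km
```

Any SFN transmitter whose extra path length stays within this distance contributes useful signal energy rather than inter-symbol interference, which is the multipath gain shown in Figure 2.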
eMBMS LTE-enabled devices are an essential part of the ecosystem. LTE capabilities are becoming integrated into more and more types of devices, and may be implemented on devices other than phones and tablets, such as embedded platforms for M2M communications. The UE platform is divided into three main blocks (see Figure 4): the lower block incorporates the LTE radio layers, which are typically implemented in the LTE chipset, supporting unicast as well as broadcast; the middleware block handles the FLUTE protocol8, AL-FEC decoding, unicast file repair and other functions, and includes transport control functions, such as service scheduling, as well as a cache for post-broadcast file processing; and the top platform block exposes APIs to the middleware and connectivity-layer methods.

Application development is enabled through an SDK, which provides the platform APIs. The SDK enables developers to create and test eMBMS-enabled applications without requiring detailed knowledge of the underlying transport, control, or radio-bearer technology.

FIGURE 4 – UE and SDK in the eMBMS ecosystem: downloaded apps built with the SDK sit on the platform APIs, above the eMBMS middleware and the LTE chipset (L1, L2, L3), receiving content delivered from the BM-SC.

Spectral efficiency
According to 3GPP specifications, eMBMSs and unicast services should be provisioned on a shared frequency. Consequently, while a broadcast service is active, radio-interface resources can be borrowed from unicast capacity.

Spectral efficiency can be defined as the possible information rate transmitted over a given bandwidth with a defined loss rate. The information loss rate depends on the modulation and coding scheme used for physical transmissions and the protection offered by AL-FEC. This definition of spectral efficiency includes packet overheads, such as AL-FEC redundancy.
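The AL-FEC idea mentioned above can be illustrated at its very simplest. MBMS actually uses Raptor codes, which can recover many lost packets per source block; the single-XOR-parity sketch below (all names ours) only shows the underlying principle of trading redundancy for loss recovery:

```python
# Minimal application-layer FEC illustration: one XOR parity packet
# lets a receiver recover any ONE lost packet in a source block.
# (MBMS uses Raptor codes, which are far more capable; this is only
# the principle, not the real scheme.)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """XOR of all source packets (equal length assumed)."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """Fill in the single missing packet (marked None) from the parity."""
    missing = received.index(None)
    acc = parity
    for i, p in enumerate(received):
        if i != missing:
            acc = xor_bytes(acc, p)
    repaired = list(received)
    repaired[missing] = acc
    return repaired

block = [b"seg0", b"seg1", b"seg2", b"seg3"]
parity = make_parity(block)
damaged = [b"seg0", None, b"seg2", b"seg3"]  # packet 1 lost over the air
print(recover(damaged, parity)[1])           # b'seg1'
```

Because the redundancy is sent as part of the broadcast, the receiver repairs the loss locally, with no uplink request; only when losses exceed what the code can absorb does the unicast file-repair procedure come into play.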
The simulation results from an evaluation of spectral efficiency are shown in Figure 5.

FIGURE 5 – Evaluating spectral efficiency: spectral efficiency (b/s/Hz) versus ISD (0-9km) for indoor (solid) and in-car (dashed) scenarios, without AL-FEC and with AL-FEC at residual BLER targets of 1e-3 and 1e-5.

BOX B – Standards
The standardization of MBMS started in 3GPP with Rel-6, which supported GERAN and UTRAN access networks. Over time, 3GPP has improved the access-network support by, for example, defining the integrated mobile broadcast (IMB) solution, which uses UTRAN TDD bands to offer up to 512kbps per content channel. Support for E-UTRAN access (LTE) was added in 3GPP Rel-9 as part of the eMBMS standardization activity.

The results associated with a broadcast transmission depend on the ISD in a link-budget – signal-to-noise ratio (SNR) – limited deployment. Two urban environments were simulated: indoor scenarios with 20dB penetration loss, and in-car scenarios with 6dB loss, assuming 95 percent coverage probability in all cases. The failure criterion used was a BLER of 10^-3 (corresponding to a packet loss of four packets per hour), and simulations were run with and without AL-FEC. An ideal Raptor code with FEC covering 2s per source block was used in this evaluation. The payload for each source block consisted of 50 packets, with each IP packet spanning two transport blocks. The MBSFN simulation area included 19 sites, each with three sectors.

The results show that an MBMS spectral efficiency of about 1-3b/s/Hz (indoor/in-car) could be achieved for a cellular ISD of up to 2km. The simulation results and additional testing show that FEC improves video quality and saves capacity. From the graphs in Figure 5, it is possible to conclude that when the ISD is less than 1km, spectral efficiency is greater than 2.5b/s/Hz.
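Given a spectral efficiency figure, the achievable broadcast bitrate follows directly from the carrier bandwidth and the share of subframes assigned to MBMS. The numbers below reproduce the 20MHz/one-subframe example the article quotes (the function name is ours):

```python
# Broadcast data rate = bandwidth * spectral efficiency * share of
# subframes assigned to MBMS transmission.
def broadcast_rate_bps(bandwidth_hz: float,
                       spectral_eff_bps_per_hz: float,
                       mbsfn_share: float) -> float:
    return bandwidth_hz * spectral_eff_bps_per_hz * mbsfn_share

# One subframe in ten (10 percent) of a 20 MHz carrier at 2.5 b/s/Hz:
rate = broadcast_rate_bps(20e6, 2.5, 0.1)
print(f"{rate / 1e6:.1f} Mbps")  # 5.0 Mbps
```

This matches the ~5Mbps figure given in the text, and scales linearly: assigning the full FDD cap of six subframes would yield six times that rate.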
By allocating one subframe for MBMS transmission in a 20MHz spectrum, corresponding to 10 percent of capacity, the achievable data rate is in the range of 5Mbps.

Live video and file delivery
The two main eMBMS use cases are live streaming and on-request file delivery. Live streaming supports services for real-time video and audio broadcasting, and on-request file delivery enables services such as unicast off-load (local device caching), software updates and M2M file loading. In fact, any arbitrary file or sequence of files can be distributed over eMBMSs. The target broadcast area for these use cases may be any desired size – some scenarios require a small broadcast area, such as a venue or a shopping mall, and other cases require much larger areas, even up to nationwide coverage.

Ericsson has selected MPEG-DASH for live streaming delivery over eMBMSs. This solution slices the live stream into a sequence of media segments, which are then delivered through the system as independent files. Typically, HTTP is used to fetch these files. In the eMBMS case, one quality representation is delivered as a sequence of files through eMBMSs using MBMS file delivery. By using MPEG-DASH with eMBMSs, the same live encoder and common clients can be used for unicast and broadcast offerings. This solution also supports using the same system protocol stack for both live streaming and file-delivery implementation.

The IETF FLUTE protocol8 allows distribution of files over unidirectional links using UDP. Most service-layer features can be used for both streaming and file delivery; transmission reliability can be increased using AL-FEC in both cases. File delivery can also make use of the unicast file-repair feature – allowing UEs to fetch any missing file segments. However, this feature is not intended for use with services that have real-time requirements, such as live streaming.
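The slicing step described above can be sketched in a few lines. This only illustrates the MPEG-DASH idea of turning a continuous stream into independently deliverable files; the segment size, the `.m4s` naming and the function name are illustrative choices, not from the article:

```python
# Sketch of the MPEG-DASH idea used with MBMS file delivery: a
# continuous live stream is cut into fixed-size media segments, each
# an independent file that can be broadcast (or fetched over HTTP).

def slice_into_segments(stream: bytes, bytes_per_segment: int):
    """Return (file_name, payload) pairs, one per media segment."""
    segments = []
    for index in range(0, len(stream), bytes_per_segment):
        name = f"media_{index // bytes_per_segment:05d}.m4s"
        segments.append((name, stream[index:index + bytes_per_segment]))
    return segments

live_feed = bytes(range(10)) * 100       # stand-in for encoded video
segments = slice_into_segments(live_feed, 256)
print(len(segments), segments[0][0])     # 4 media_00000.m4s
```

Because every segment is an ordinary file, the same AL-FEC protection, FLUTE transport and unicast file repair apply to live streaming and file delivery alike, which is exactly the shared-protocol-stack benefit the text describes.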
With FLUTE, delivery sessions and eMBMS sessions are used, where the duration of a delivery session may span one or more eMBMS sessions. The broadcast is active for the entire eMBMS session, during which UEs can receive content. The relationship between delivery sessions and eMBMS sessions is shown in Figure 6. Service announcement is used to inform devices about delivery sessions and also about eMBMS sessions, using a schedule description, so UEs do not need to monitor the radio interface for eMBMS sessions continuously. In Figure 6, the schedule description instructs the UE to expect an eMBMS session between t2 and t3 and between t6 and t7. Before the UE expects an eMBMS session, it is already active on the radio interface (t1 < t2). When it comes to file-delivery services, it is preferred that devices search for sessions on the radio prior to the expected transmission time, to ensure that they do not miss the start of a transmission.

The example in Figure 6 could represent a service such as downloading an application that allows users to activate, receive and interact with the broadcast using unicast services from a phone, tablet or television. From the point of view of the user and the UE middleware, the two broadcasts belong to the same MBMS user service, which presents a complete offering including activation and deactivation.

Figure 6: Example of two scheduled broadcasts. A delivery session (FLUTE) spans two eMBMS sessions (t2 to t3 and t6 to t7); service announcements inform the UE about the schedule, and the UE expects to receive data of the FLUTE session during each eMBMS session.

Conclusions
The data volume in mobile networks is booming, mostly due to the success of smartphones and tablets. LTE Broadcast is one way of providing new and existing services in areas that can at times be device dense, such as stadiums and crowded city centers. Single-frequency network technology is used to distribute the content over the air interface. LTE Broadcast provides operators with techniques to deliver consistent service quality, even in highly crowded areas. Such techniques for delivering content efficiently are valuable as they free up capacity, which can be used for other services and voice traffic.

Figure 7: eMBMS architecture, showing the UE, eNB, MME, SGW, PGW, MBMS-GW and BM-SC nodes, the CDN/live encoder and application server, and the interfaces between them (Uu, S1-U, S1-MME, S5/S8, S11, Sm, M3, M1, SGmb, SGi-mb and SGi; file repair and reception reporting (FR/RR) run over HTTP toward the application server).

Capture your audience

References
1. Ericsson, 2012, Ericsson ConsumerLab report, TV and video – changing the game, available at: http://www.ericsson.com/res/docs/2012/consumerlab/consumerlab-tv-video-changing-the-game.pdf
2. Ericsson, November 2012, Ericsson Mobility Report, On the pulse of the Networked Society, available at: http://www.ericsson.com/res/docs/2012/ericsson-mobility-reportnovember-2012.pdf
3. Mobile Content Venture, June 2012, Dyle Mobile TV Data Report, available at: http://www.dyle.tv/assets/Uploads/DyleReport.pdf
4. ISO/IEC 23009-1:2012, Information technology – Dynamic adaptive streaming over HTTP (DASH) – Part 1: Media presentation description and segment formats, available at: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=57623
5. ITU-T H.264, Advanced video coding for generic audiovisual services, available at: http://www.itu.int/rec/T-REC-H.264
6. ITU-T H.265 / ISO/IEC 23008-2 HEVC, available at: http://www.itu.int/ITU-T/aap/AAPRecDetails.aspx?AAPSeqNo=2741
7. Erik Dahlman, Stefan Parkvall, Johan Sköld, 2011, 4G: LTE/LTE-Advanced for Mobile Broadband, available at: http://www.elsevier.com/books/4glte-lte-advanced-for-mobile-broadband/dahlman/978-0-12-385489-6
8. IETF RFC 3926, FLUTE – File delivery over unidirectional transport, T.
Paila, et al., October 2004, available at: http://tools.ietf.org/html/rfc3926

Thorsten Lohmar
joined Ericsson in Germany in 1998 and worked in different Ericsson Research units for several years. He worked on a variety of topics related to mobile-communication systems and led research projects, specifically in the area of multimedia technologies. He is currently working as a senior specialist for end-to-end video delivery, principally in mobile networks. On the development front, he is focusing on the technical coordination of eMBMS with an end-to-end perspective. Lohmar holds a Ph.D. in electrical engineering from RWTH Aachen University, Germany.

Michael Slssingar
is an Ericsson senior specialist in service delivery architectures and holds a post-graduate diploma and master's in computing and software engineering. He has held many senior engineering roles at Ericsson, mainly in the media-delivery field, and has contributed to the Ericsson IPTV and Mobile TV delivery solutions. In the field of MBMS, Slssingar initially specialized in WCDMA MBMS, where he helped develop the Ericsson Content Delivery System. More recently, he has worked with LTE eMBMS broadcast, where he has a strong interest in the service-layer BM-SC node, UE middleware and metadata provisioning areas.

Stig Puustinen
is a senior project manager at System Management within Business Unit Networks, where he is currently running an LTE/EPC systems project involving extensive eMBMS work. He joined Ericsson in 1991, and has since held a variety of project- and program-management roles. He was involved in the early releases of GSM, the first introduction of WCDMA/HSPA and the first release of LTE/EPC.

Vera Kenehan
is a strategic product manager within LTE Radio and has worked with several generations of radio-access technologies, including LTE, WCDMA and PDC. She was largely involved in the initial standardization of LTE, including eMBMS. For the past two years, she has been working on the MBMS product as well as promoting and bringing eMBMS to the market. She holds a master's in telecom engineering from the University of Belgrade, Serbia.

Re:view

Nine decades of innovation

The Roaring Twenties
In 1924, in the very first issue of Ericsson Review, the editor stated that the objective of the magazine was to take up for discussion points of design and construction that had not yet reached final standardization. The cover article featured Ericsson's presence at the Gothenburg Exhibition (1923), where a giant replica of our standard table set telephone housed a complete Ericsson exchange for 500 lines, to which a few telephones were connected. Visitors could make calls and watch the switching process through the plate-glass windows.

Ericsson Review, issues 1 & 2, 1924.

Automatic exchanges to smart networks
The history of our technology is deeply entrenched in that of the telecoms industry. War, international terrorism, and developments in other industries such as railways and the more recent digital revolution have shaped our playing field. Our technical expertise is not just based on theory; it is about how to apply the right technology to create commercially viable products. Standards – and the ability to create and evolve them – have helped us and our industry become global providers of interoperable solutions – a cornerstone of the Networked Society. Over the past nine decades, the world has probably changed more than ever before. Here are a few highlights from Ericsson's history and some of the world events that have played their part in the evolution of telecoms.

In 1924, Ericsson launched its automatic 500-point system. Highlighted in the very first issue of Ericsson Review, this revolutionary switching system was demonstrated at the Gothenburg Exhibition of 1923. In hindsight, it sounds curious that attendees could 'watch' the switching process, which at the time was mechanical.
By the 1930s, Ericsson had delivered about 100 systems with a total of more than 350,000 lines. Sales of the system continued to rise over the coming decades, not declining significantly until the 1970s. By 1974, 4.8 million lines using this system were in operation in public telephone stations. The very first 500-point system was put into service at the Norra Vasa exchange in Stockholm, and was still in operation 60 years later. At the time of installation in 1924, Televerket – the Swedish government agency for telecommunications (1853-1993) – had four different systems to choose from, including a crossbar system also developed by Ericsson. Televerket's choice was a decisive one for the future development of Ericsson.

In the 1920s and 1930s, the opening of a new telephone exchange was a major event for small towns and villages. Official opening ceremonies were often carried out by the local mayor, to the backdrop of a brass band, with refreshments provided for the locals by the operator and the vendor.

In the 1930s, Ericsson introduced a photoelectric announcing machine – a simple device that could deliver short prerecorded messages. This offloaded the work of operators and had the ability to deliver longer announcements, such as the speaking clock and – later in the decade – automatic weather forecasts. The first device for weather forecasts was put into service in Stockholm on June 1, 1936, and was the first of its kind in the world. It wasn't until two decades later that these machines were replaced with newer technologies.

In the 1940s, Ericsson introduced an information management structure called the ABC System. Ericsson still uses this system to classify and manage its information and products.

Depression and modernism
The cover of issue 2, 1934 shows the 'photo-electric talking machine for automatic time indication.' This issue included an account of a carrier system supplied by Ericsson to the Indian Radio and Communications Company.
In the spirit of modernism, this issue included a number of articles relating to the use of clocks and timing mechanisms in industry. An article on party lines discussed the problem of connecting several telephone instruments to one line. This was of particular interest at the time, as exchanges in rural districts were being automated and party lines were in need of some degree of modification. One of the main concerns was how to use party lines without altering the subscriber equipment.

Ericsson Review, issue 2, 1934.

The war years
Ericsson Review was not published in English between 1940 and 1944 due to the ongoing world war. Although the paper shortage necessitated a reduction in both the quantity and quality of printing paper, the journal was issued as usual in its Swedish edition during the war years. In 1945, a collection of some of the articles published in Swedish during the war was printed in a composite English-language edition.

Continued on page 19...

Dispelling the NLOS myths

Non-line-of-sight microwave backhaul for small cells

The evolution to denser radio-access networks with small cells in cluttered urban environments has introduced new challenges for microwave backhaul. A direct line of sight does not always exist between nodes, and this creates a need for near- and non-line-of-sight microwave backhaul.

JONAS HANSRYD, JONAS EDSTAM, BENGT-ERIK OLSSON AND CHRISTINA LARSSON

Using non-line-of-sight (NLOS) propagation is a proven approach when it comes to building RANs. However, deploying high-performance microwave backhaul in places where there is no direct line of sight brings new challenges for network architects. The traditional belief in the telecom industry is that sub-6GHz bands are required to ensure performance in such environments.
This article puts that belief to the test, providing general principles, key system parameters and simple engineering guidelines for deploying microwave backhaul using frequency bands above 20GHz. Trials demonstrate that such high-frequency systems can outperform those using sub-6GHz bands – even in locations with no direct line of sight.

Point-to-point microwave is a cost-efficient technology for flexible and rapid backhaul deployment in most locations. It is the dominant backhaul medium for mobile networks, and is expected to maintain this position as mobile broadband evolves, with microwave technology capable of providing backhaul capacity of the order of several gigabits per second1.

Complementing the macro-cell layer by adding small cells to the RAN introduces new challenges for backhaul. Small-cell outdoor sites tend to be mounted 3-6m above ground level on street fixtures and building facades, with an inter-site distance of 50-300m. As a large number of small cells are necessary to support a superior and uniform user experience across the RAN2, small-cell backhaul solutions need to be more cost-effective, scalable, and easy to install than traditional macro backhaul technologies. Well-known backhaul technologies such as spectrally efficient LOS microwave, fiber and copper are being tailored to meet this need. However, owing to their position below roof height, a substantial number of small cells in urban settings do not have access to a wired backhaul, or clear line of sight to either a macro cell or a remote fiber backhaul point of presence.

The challenges posed by locations without a clear line of sight are not new to microwave-backhaul engineers, who use several established methods to overcome them. In mountainous terrain, for example, passive reflectors and repeaters are sometimes deployed. However, this approach is less desirable for cost-sensitive small-cell backhaul, as it increases the number of sites.
In urban areas, daisy chaining is often used to reach sites in tricky locations – a solution that is also effective for small-cell backhaul (see Figure 1).

Terms and abbreviations
FDD – frequency division duplexing
LOS – line-of-sight
MIMO – multiple-input, multiple-output
NLOS – non-line-of-sight
OFDM – orthogonal frequency division multiplexing
RAN – radio-access network
TDD – time division duplexing

Network architects aim to dimension backhaul networks to support peak cell capacity3 – which today can reach 100Mbps and above. However, in reality there is a trade-off among cost, capacity and coverage, resulting in a backhaul solution that, at a minimum, can support expected busy-hour traffic with enough margin to account for statistical variation and future growth: in practice around 50Mbps, with availability requirements typically relaxed to 99-99.9 percent. Such availability levels require fade margins of the order of just a few decibels for short link distances.

For small-cell backhaul, simplicity and licensing cost are important issues. Light licensing or technology-neutral block licensing are attractive alternatives to other approaches such as link licensing, as they provide flexibility4. Using unlicensed frequency bands can be a tempting option, but may result in unpredictable interference and degraded network performance. The risk associated with unlicensed use of the 57-64GHz band is lower than that associated with the 5.8GHz band, owing to higher atmospheric attenuation, sparse initial deployment, and the possibility of using compact antennas with narrow beams, which effectively reduces interference.

Providing coverage in locations without a clear line of sight is a familiar part of the daily life of mobile-broadband and Wi-Fi networks.
However, maybe because such locations are commonplace, a number of widespread myths and misunderstandings surrounding NLOS microwave backhaul exist – for example, that NLOS microwave backhaul needs sub-6GHz frequencies, wide-beam antennas and OFDM-based radio technologies to meet coverage and capacity requirements. Despite this, a number of studies on NLOS transmission using frequency bands above 6GHz have been carried out – for example, for fixed wireless access5 and for mobile access6. Coldrey et al. showed that it is realistic to reach 90 percent of the sites in a small-cell backhaul deployment with a throughput greater than 100Mbps using a paired 50MHz channel at 24GHz7.

Figure 1: Microwave backhaul scenarios for small-cell deployment – daisy chain, diffraction, reflection, penetration and fiber.

NLOS principles
As illustrated in Figure 1, all NLOS propagation scenarios make use of one or more of the following effects: diffraction, reflection and penetration. All waves change when they encounter an obstacle. When an electromagnetic wave hits the edge of a building, diffraction occurs – a phenomenon often described as the bending of the signal. In reality, the energy of the wave is scattered in the plane perpendicular to the edge of the building. The energy loss – which can be considerable – is proportional to both the sharpness of the bend and the frequency of the wave8.

Reflection, and in particular random multipath reflection, is a phenomenon that is essential for mobile broadband using wide-beam antennas. Single-path reflection using narrow-beam antennas is, however, more difficult to engineer, owing to the need to find an object that can provide the necessary angle of incidence to propagate as desired.

Penetration occurs when radio waves pass through an object that completely or partially blocks the line of sight.
It is a common belief that path loss resulting from penetration is highly dependent on frequency, which in turn would rule out the use of this effect at higher frequencies. However, studies have shown that in reality path loss due to penetration is only slightly dependent on frequency, and that in fact it is the type and thickness of the object itself that determines the impact on throughput9, 10. For example, thin, non-metallic objects – such as sparse foliage (as shown in Figure 1) – add a relatively small path loss, even at high frequencies. Given a correct understanding and application of these three propagation effects, deployment guidelines can be defined, giving network engineers simple rules to estimate performance for any scenario.

System properties
A simplified NLOS link budget can be obtained by adding an NLOS path attenuation term (ΔLNLOS) to the traditional LOS link budget, as shown in Equation 1.

Equation 1: PRX = PTX + GTX + GRX - 92 - 20log(d) - 20log(f) - LF - ΔLNLOS

Here, PRX and PTX are the received and transmitted power (in dBm – the ratio of power in decibels to 1 milliwatt); GTX and GRX are the antenna gains (in decibels isotropic – dBi) of the transmitter and receiver respectively; d is the link distance (in kilometers); f is the frequency (in gigahertz); LF is any fading loss (in decibels); and ΔLNLOS is the additional loss (in decibels) resulting from the deployed NLOS-propagation effect. Not shown in this equation is the theoretical frequency dependency of the antenna gain, which for a fixed antenna size increases as 20log(f); as a consequence, the received signal PRX will actually increase as 20log(f) when the carrier frequency is increased for a fixed antenna size. This relationship indicates the advantage of using higher frequencies for applications where a small antenna form factor is of importance – as is the case for small-cell backhaul.
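Equation 1 is straightforward to evaluate programmatically. The sketch below (an illustrative helper, not from any planning tool) compares the received power of the two trial systems described later in the article – 19dBm output with 17dBi antennas at 5.8GHz versus 38dBi antennas at 28GHz – over a 200m LOS hop:

```python
import math

def p_rx_dbm(p_tx_dbm: float, g_tx_dbi: float, g_rx_dbi: float,
             d_km: float, f_ghz: float,
             l_fading_db: float = 0.0, delta_l_nlos_db: float = 0.0) -> float:
    """Received power according to Equation 1."""
    return (p_tx_dbm + g_tx_dbi + g_rx_dbi
            - 92.0 - 20.0 * math.log10(d_km) - 20.0 * math.log10(f_ghz)
            - l_fading_db - delta_l_nlos_db)

p58 = p_rx_dbm(19, 17, 17, d_km=0.2, f_ghz=5.8)    # flat antenna, 17 dBi
p28 = p_rx_dbm(19, 38, 38, d_km=0.2, f_ghz=28.0)   # parabolic antenna, 38 dBi
# Despite ~13.7 dB higher free-space loss, the 28 GHz link is received
# about 28 dB stronger, thanks to 21 dBi more antenna gain at each end.
print(f"5.8 GHz: {p58:.1f} dBm, 28 GHz: {p28:.1f} dBm")
```

This makes the antenna-gain argument concrete: for a fixed antenna size, the 20log(f) gain growth at each end more than compensates for the single 20log(f) free-space penalty.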
To determine the importance of NLOS-system properties, Ericsson carried out measurements on two commercially available microwave backhaul systems in different frequency bands (described in Table 1). The first system used the unlicensed 5.8GHz band with a typical link configuration for applications in this band. The air interface used up to 64QAM modulation in a 40MHz-wide TDD channel with a 2x2 MIMO (cross-polarized) configuration, providing full duplex peak throughput of 100Mbps (200Mbps aggregate). The second system, a MINI-LINK PT2010, used a typical configuration for the licensed 28GHz band, based on FDD, 56MHz channel spacing and single-carrier technology with up to 512QAM modulation, providing full duplex throughput of 400Mbps (800Mbps aggregate). To adjust the throughput based on the quality of the received signal, both systems used adaptive modulation. Physical antenna sizes were similar, but due to the frequency dependency of the antenna gain and the parabolic type used in the 28GHz system, it offered a gain of 38dBi, while the flat antenna of the 5.8GHz system reached 17dBi.

Table 1: Test system specifications

SYSTEM   TECHNOLOGY                  CHANNEL SPACING   ANTENNA GAIN   OUTPUT POWER   PEAK THROUGHPUT
5.8GHz   TDD/OFDM, 64QAM             40MHz             17dBi          19dBm          100Mbps
28GHz    FDD/single carrier, 512QAM  56MHz             38dBi          19dBm          400Mbps

Link margin versus throughput and hop distance is shown in Figure 2. Here, the margin is defined as the difference between the received power (according to Equation 1) and the receiver threshold for a particular modulation level (throughput) – in line-of-sight conditions without fading (LF = 0). If the ΔLNLOS caused by any NLOS effect can be predicted, the curves in Figure 2 can be used to estimate throughput. The advantages of using higher frequencies are clear: with comparable antenna sizes, the link margin is about 20dB higher at a peak rate of 400Mbps for the 28GHz system compared with the 5.8GHz system at a peak rate of 100Mbps.

Figure 2: Link margin (dB) as a function of throughput and link distance (m) for the 28GHz and 5.8GHz systems.

Measurements

Diffraction
It is commonly believed that the diffraction losses occurring at frequencies above 6GHz are prohibitively high, and consequently that deploying a system using this effect for NLOS propagation at such frequencies is not feasible. However, even if the absolute loss can be relatively high – 40dB and 34dB for the 28GHz and 5.8GHz systems respectively (with a diffraction angle of 30 degrees) – the relative difference is only 6dB8, much less than the difference in gain for comparable antenna sizes, even when taking into account the higher free-space loss of the 28GHz system (see Figure 2).

Figures 3A and 3B show the setup and measured results of a scenario designed to test diffraction. A first radio was positioned on the roof of an office building (marked in Figure 3A with a white circle). A second radio was mounted on a mobile lift, placed 11m behind a 13m-high parking garage. The effect on the signal power received by the second radio was measured by lowering the mobile lift.

Figure 3A: Test site for NLOS backhaul – diffraction (© 2013 BLOM, © 2013 Microsoft Corporation).

Figure 3B shows the measured received signal power versus distance below the line of sight for both test systems, as well as the theoretical received power calculated using the ideal knife-edge model8. Both radios transmitted 19dBm output power, but due to the 21dBi lower antenna gain of the 5.8GHz system, its received signal was 20dB weaker after NLOS propagation than that of the 28GHz system.
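The ideal knife-edge model can be evaluated with the single-edge approximation from ITU-R P.526: J(v) ≈ 6.9 + 20log(√((v−0.1)² + 1) + v − 0.1) dB for v > −0.78, where v is the dimensionless Fresnel diffraction parameter. The geometry below only approximates the parking-garage test; the transmitter-to-edge distance is an assumed value:

```python
import math

def knife_edge_loss_db(h_m: float, d1_m: float, d2_m: float,
                       f_ghz: float) -> float:
    """Single knife-edge diffraction loss (ITU-R P.526 approximation).

    h_m is the height of the edge above the direct TX-RX line (negative
    means the path clears the edge); d1_m and d2_m are the distances
    from transmitter and receiver to the edge.
    """
    wavelength_m = 0.3 / f_ghz  # c / f, in metres
    v = h_m * math.sqrt(2.0 / wavelength_m * (1.0 / d1_m + 1.0 / d2_m))
    if v <= -0.78:
        return 0.0  # essentially unobstructed
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Edge 6 m above the direct path and 11 m from the receiver (roughly a
# 30-degree diffraction angle); 200 m to the transmitter is assumed.
for f in (28.0, 5.8):
    print(f"{f} GHz: {knife_edge_loss_db(6.0, 200.0, 11.0, f):.0f} dB")
```

With these inputs the model gives roughly 41dB at 28GHz and 34dB at 5.8GHz, in line with the 40dB and 34dB figures quoted for the 30-degree case.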
The measured results compare well against those based on the theoretical model, although an offset of a couple of decibels is experienced by the 28GHz system – a small deviation that is expected given the simplicity of the model. To summarize, diffraction losses can be estimated using the knife-edge model8. However, due to the model's simplicity, the losses it calculates are slightly underestimated. This can be compensated for in the planning process by simply adding a few extra decibels to the loss margin.

The 28GHz system can sustain full throughput much deeper into the NLOS region than the 5.8GHz system, which is to be expected as it has a higher link margin. Full throughput – 400Mbps – was achieved at 28GHz up to 6m below the line of sight, equivalent to a 30-degree diffraction angle, while the 5.8GHz system dropped to under 50Mbps at 3m below the line of sight. The link margin is the single most important system parameter for NLOS propagation and, as expected, the 28GHz system performs better in a real diffraction scenario than a 5.8GHz system with a comparable antenna size.

Reflection
The performance characteristics of the 5.8GHz and 28GHz systems were measured in a single-reflection scenario in an area dominated by metal and brick facades – shown in Figure 4A. The first radio was located on the roof of the office building (marked with a white circle), 18m above ground level; the second was on the wall of the same building, 5m above ground, facing the street canyon. The brick facade of the building on the other side of the street from the second radio was used as the reflecting object, resulting in a total path length of about 100m. The reflection loss varies with the angle of incidence, which in this case was approximately 15 degrees, resulting in a ΔLNLOS of 24dB for the 28GHz system and 16dB for the 5.8GHz system – figures that are in line with earlier studies11.
Figure 3B: Throughput and received power – diffraction. Measured and theoretical received power (dBm) and throughput (Mbps) for both systems versus distance below the line of sight (m).

Reflection loss is strongly dependent on the material of the reflecting object; for comparison purposes, ΔLNLOS for a neighboring metal facade was measured to be about 5dB for both systems at a similar angle of incidence. To summarize, it is in principle possible to cover areas that are difficult to reach using multiple reflections. However, taking advantage of more than two reflections is problematic in practice – due to limited link margins and the difficulty of finding suitably aligned reflection surfaces. ΔLNLOS predictions for a single-facade reflection in the measured area can be expected to vary between 5dB and 25dB at 28GHz and between 5dB and 20dB at 5.8GHz.

The throughput for both systems measured over 16 hours is shown in Figure 4B. The 28GHz system shows a stable throughput of 400Mbps, while the throughput for the 5.8GHz system, with a much wider antenna beam, dropped from 100Mbps to below 70Mbps. These variations are to be expected, owing to the fact that the wider beam experiences stronger multipath. OFDM is an effective technology for mitigating such fading and, under severe multipath fading, results in a graceful degradation of throughput – as illustrated. However, the narrow antenna lobe at 28GHz, in combination with the advanced equalizer of the high-performance MINI-LINK radio, effectively suppresses any multipath degradation, enabling the use of single-carrier QAM technology under NLOS conditions – even up to 512QAM and 56MHz channel bandwidths.

Figure 4A: Test site for NLOS backhaul – reflection (© 2012 TerraItaly, © 2013 Microsoft Corporation, Pictometry Bird's Eye © 2012 Pictometry International Corp.).
Figure 4B: Throughput over time – single reflection. Throughput (Mbps) for the 28GHz and 5.8GHz systems over 16 hours.

Penetration
As is the case for NLOS reflection, the path loss resulting from penetration is highly dependent on the material of the object blocking the line of sight. The performance of both test systems was measured in the scenario shown in Figures 5A and 5B. The sending and receiving radios were located 150m apart, with one tall sparse tree and a shorter, denser tree blocking the line of sight. The radio placed on the mobile lift was positioned to measure the radio beam first after penetration of the sparse foliage, and then lowered to measure it after the denser foliage, as shown in Figure 5A. The circle and triangle symbols indicate where the radio beams exit the foliage.

Figure 5A: Test site for NLOS backhaul – penetration (circle: sparse foliage; triangle: dense foliage).

Measurements were carried out under rainy and windy weather conditions, resulting in variations of the NLOS path attenuation, as shown in the received signal spectra for the 28GHz radio link in Figure 5B. Under LOS conditions, the amplitude spectrum envelope reached -50dBm. Consequently, the excess path loss for the single-tree (sparse foliage) scenario varied between 0 and 6dB when measured over 5 minutes. In the double-tree (dense foliage) case, excess path loss varied from 8dB up to more than 28dB. A complementary experiment showing similar excess path loss was carried out at 5.8GHz. To summarize, contrary to popular belief, a 28GHz system can deliver excellent performance using the effect of NLOS penetration through sparse greenery.

Deployment guidelines
So far, this article has covered the key system properties for NLOS propagation – diffraction, reflection and penetration – dispelling the myth that these effects can be used only with sub-6GHz frequencies. The next step is to apply the theory and the test results to an actual deployment scenario for microwave backhaul. By using the measured losses from the diffraction, reflection and penetration tests as a rule of thumb, an indicative throughput for each NLOS scenario has been taken from Figure 2 and summarized in Table 2.

A trial site, shown in Figure 6, was selected to measure the coverage for an NLOS backhaul deployment scenario. Four- to six-story office buildings with a mixture of brick, glass and metal facades dominate the trial area. The hub node was placed 13m above ground on the corner of a parking garage at the south end of the trial area. The colored areas in Figure 6 show the line-of-sight conditions for the trial site: the green areas show where pure LOS exists; the yellow areas indicate the use of single-path reflection; the blue areas indicate diffraction; and the red areas show where double reflection is needed. Areas without color indicate either that no throughput is expected or that they are outside the region defined for measurement. Measurements were made within the region delineated by the dashed white lines. Referring to Table 2, it is expected that the 5.8GHz system will meet small-cell backhaul requirements (>50Mbps throughput) within a 250m radius of the hub, and that the 28GHz system should provide more than 100Mbps full duplex throughput up to 500m from the hub.
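This rule of thumb – subtract the predicted ΔLNLOS from the LOS link margin and keep the highest rate that still has positive margin – can be expressed as a small lookup. All the dB and Mbps values below are illustrative numbers read approximately off Figure 2 for a hop of a few hundred meters; they are not vendor specifications:

```python
# Illustrative LOS link margins (dB) per throughput step (Mbps), read
# roughly off Figure 2; the 28 GHz margins sit about 20 dB above the
# 5.8 GHz ones, as noted in the text. Example values only.
LOS_MARGIN_28GHZ = {400: 42, 280: 48, 185: 54}
LOS_MARGIN_5_8GHZ = {100: 22, 80: 26, 60: 30}

def estimated_rate_mbps(los_margin_db: dict, delta_l_nlos_db: float) -> int:
    """Highest rate whose LOS margin still exceeds the predicted NLOS loss."""
    feasible = [rate for rate, margin in los_margin_db.items()
                if margin - delta_l_nlos_db > 0.0]
    return max(feasible, default=0)

# A single brick-facade reflection measured ~24 dB at 28 GHz and
# ~16 dB at 5.8 GHz in the trials above:
print(estimated_rate_mbps(LOS_MARGIN_28GHZ, 24.0))   # -> 400
print(estimated_rate_mbps(LOS_MARGIN_5_8GHZ, 16.0))  # -> 100
```

The same lookup, fed with the diffraction and penetration losses measured earlier, yields scenario predictions of the kind tabulated in Table 2.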
To test the actual performance, a receiver node was placed 3m above ground, measuring full duplex throughput along the main street canyon and in the neighboring streets. On account of the wide antenna lobe of the 5.8GHz system, realignment of the hub antenna was not needed for measurement purposes. For the 28GHz system, realignment of the narrow antenna beam was needed at each measurement point – a fairly simple procedure even under NLOS conditions. The actual values observed at each measurement point exceeded or matched the predicted performance levels in Table 2. Due to the lack of correctly aligned reflection surfaces, providing backhaul coverage using the double-reflection technique (the red areas of the trial area in Figure 6) was only possible for a limited set of measurements. Multipath propagation, including the reflection effects created by vehicles moving along the street canyon, was significant for the 5.8GHz system, but resulted in only slightly reduced throughput in some of the more difficult scenarios for the 28GHz system.

Summary
In traditional LOS solutions, high system gain is used to support the targeted link distance and mitigate fading caused by rain. For short-distance solutions, this gain may instead be used to compensate for NLOS propagation losses. Sub-6GHz frequency bands are proven for traditional NLOS usage and, as shown in this article, using these bands is a viable solution for small-cell backhaul. However, contrary to common belief, but in line with theory, MINI-LINK microwave backhaul in bands above 20GHz will outperform sub-6GHz systems under most NLOS conditions. The key system parameter enabling the use of high-frequency bands is the much higher antenna gain for the same antenna size. With just a few simple engineering guidelines, it is possible to plan NLOS backhaul deployments that provide high network performance.
And so, with the vast amount of dedicated spectrum available above 20GHz, microwave backhaul is not only capable of providing fiber-like multi-gigabit capacity, but also supports high-performance backhaul for small cells, even in locations where there is no direct line of sight.

[Figure 5B: Channel amplitude response – penetration. Over 29.32-29.42GHz (resolution bandwidth: 50kHz), the received power measured over five minutes fluctuates by roughly 6dB for sparse foliage and by more than 20dB for dense foliage.]

Table 2: Indicative bitrate performance for different NLOS key scenarios

                     LOS      Single      Double      Diffraction*  Penetration***
                              reflection  reflection
 5.8GHz   0-100m     100Mbps  100Mbps     10Mbps**    80Mbps        100Mbps
          100-250m   100Mbps  80Mbps      <10Mbps**   60Mbps        100Mbps
          250-500m   100Mbps  60Mbps      <10Mbps**   10Mbps**      80Mbps
 28GHz    0-100m     400Mbps  400Mbps     280Mbps**   400Mbps       400Mbps
          100-250m   400Mbps  400Mbps     185Mbps**   400Mbps       400Mbps
          250-500m   400Mbps  400Mbps     185Mbps**   280Mbps       400Mbps

 *30-degree diffraction angle; **not recommended for small-cell backhaul; ***sparse foliage or similar

[Figure 6: NLOS backhaul trial area, with 100m, 250m and 500m radii marked. Legend: line of sight; single reflection; diffraction; double reflection. © 2013 BLOM; © 2013 Microsoft Corporation.]

Jonas Edstam joined Ericsson in 1995 and is currently head of technology strategies at Product Line Microwave and Mobile Backhaul. He is an expert in microwave radio-transmission networks, focusing on the strategic evolution of packet-based mobile backhaul and RAN. He holds a Ph.D. in applied solid-state physics from Chalmers University of Technology, Gothenburg, Sweden.

Jonas Hansryd joined Ericsson Research in 2008 and is currently managing the microwave high-speed and electronics group. He holds a Ph.D. in electrical engineering from Chalmers University of Technology, Gothenburg, Sweden, and was a visiting researcher at Cornell University, Ithaca, US, from 2003 to 2004.

Bengt-Erik Olsson joined Ericsson Research in 2007 to work on ultra-high-speed optical communication systems. He has recently switched to wireless-technology research and is currently working on NLOS backhaul applications for microwave links. He holds a Ph.D. in optoelectronics from Chalmers University of Technology, Gothenburg, Sweden.

Christina Larsson joined Ericsson Research in 2010. Her current focus area is microwave backhaul solutions. She holds a Ph.D. in electrical engineering from Chalmers University of Technology, Gothenburg, Sweden, and was a post-doctoral researcher at the University of St. Andrews, St. Andrews, UK, from 2004 to 2006.

Acknowledgements
The authors gratefully acknowledge the colleagues who have contributed to this article: Jan-Erik Berg, Mikael Coldrey, Anders Derneryd, Ulrika Engström, Sorour Falahati, Fredrik Harrysson, Mikael Höök, Björn Johannisson, Lars Manholm and Git Sellin.

Nine decades of innovation

Recovery and prosperity
In 1954, the cover article of Ericsson Review issue 1 looked at the traffic reliability of the crossbar system, which Ericsson delivered to the Helsinki Telephone Corporation in 1950.
The cover illustrates the interior of the Helsinki exchange for PBX subscribers. The final testing of the system was carried out in the latter half of 1953 and gave a fault rate of 0.090 percent based on 20,000 test connections. This result was deemed highly satisfactory, as the exchange was very heavily loaded during peak periods. [Cover: Ericsson Review, issue 1, 1954.]

Automatic exchanges to smart networks
Toward the end of this decade, Ericsson trialed a crossbar system, 30 years after these switches were first put into practical operation. The Second World War had a profound effect on people and business. Widespread misery, rationing and a shortage of raw materials naturally led many companies to diversify their operations. But overall, the slowdown in growth and the reduced level of information exchange among researchers led to a dip in development. World average telephone density in 1930 was 2 per 100 inhabitants; by 1950 this figure had risen to 3, and in the two decades following that, subscriber density more than doubled – reaching 7 per 100 in 1970.

In preparation for the 1952 Summer Olympics, Ericsson installed its first commercial crossbar system in Helsinki, Finland, in 1950. The decision to move to crossbar switching came about during the war, as Ericsson was developing smaller exchanges for rural communities and enterprises. This technology, however, presented new challenges, in traffic engineering and dimensioning in particular. In his thesis on congestion in link systems, lifelong Ericsson employee Christian Jacobæus presented a way to calculate traffic capacity that was subsequently trialed and became a worldwide standard.

Computing was the next wave of technology to be adopted by the telecoms industry. Parts of Ericsson's AKE range of telephone exchanges were computer controlled, the key feature being a Stored Program Controlled (SPC) element, which managed the switches and operated in real time.
The initial commercial deployments in 1968 were the first computer-controlled exchanges outside the US. By the end of the 1960s, however, it had become clear that something different was needed. Something more flexible – a modular system that could be expanded to accommodate new technologies and services without the need for fundamental system changes: something that was future-proof.

The downturn of the global economy following the oil crisis in the 1970s again led to several tough years for industry in general. For Ericsson, a boost came when the Saudi Ministry of Telecommunications chose Ericsson AXE exchanges in what was, at the time, the biggest contract in the history of telecoms. This digital exchange introduced modular design and became one of the world's most successful switching systems.

In 1984, Ericsson Review published a special 'F' edition dedicated to fiber optics. This issue covered every aspect of fiber optics, from cable manufacture to installation, applications and device technology. Ericsson's transmission expertise goes right back to the late 1920s, when it began manufacturing loading coils – an early sign of Ericsson's ethos of providing the telecoms industry with a wider range of products and services. (Continued on page 47...)

Flower power and revolution
The cover of issue 3 in 1968 portrayed printed circuit cards in the transfer unit of Ericsson's Stored Program Controlled (SPC) AKE exchange system. The photograph is from the automatic exchange installed in Tumba (a suburb of Stockholm). SPC exchanges were a milestone in the development of telephony, as they made it easier to trace faults and thus reduce maintenance costs. [Cover: Ericsson Review, issue 3, 1968.] The major concerns at the time were capacity and the cost of memory.

Oil and energy crises
The second issue of 1976 presented the AXE 10 switching system. The range of articles in this issue highlighted the shift in technology focus.
Telecoms was becoming much more than wires, switches, exchanges and transmission. Hardware architecture and design were now being intimately combined with software structure to provide services, efficient traffic handling, management systems and scalability. [Cover: Ericsson Review, issue 2, 1976.]

New network abstraction layers

Software-defined networking: the service provider perspective

An architecture based on SDN techniques gives operators greater freedom to balance operational and business parameters, such as network resilience, service performance and QoE, against opex and capex.

Attila Takacs, Elisa Bellagamba and Joe Wilke

The traditional way of describing network architecture and how a network behaves is through the fixed designs and behaviors of its various elements. The concept of software-defined networking (SDN) describes networks and how they behave in a more flexible way – through software tools that describe network elements in terms of programmable network states. The concept is based on split architecture, which separates forwarding functions from control functions. This decoupling removes some of the complexity from network management, providing operators with greater flexibility to make changes.

Box A: Ericsson's approach to SDN goes beyond the data center, addressing issues in the service-provider environment. In short, Ericsson's approach is Service Provider SDN. The concept aims to extend virtualization and OpenFlow – an emerging protocol for communication between the control and data planes in an SDN architecture – with three additional key enablers: integrated network control; orchestrated network and cloud management; and service exposure.

There is no denying that networks are becoming increasingly complex.
More and more functionality is being integrated into each network element, and more and more network elements are needed to support evolving service requirements – especially rising capacity needs, which are doubling every year1.

Terms and abbreviations
API – application programming interface
ARPU – average revenue per user
CLI – command-line interface
DPI – deep packet inspection
GMPLS – generalized multi-protocol label switching
L2 – Layer 2
L3 – Layer 3
L2-L4 – Layers 2-4
M-MGW – Mobile Media Gateway
MME – Mobility Management Entity
MSC-S – Mobile Switching Center Server
NAT – Network Address Translation
NMS – network management system
ONF – Open Networking Foundation
OSS/BSS – operations and business support systems
PE – provider edge device
PGW – Packet Data Network Gateway
RG – residential gateway
RWA – routing and wavelength assignment
SDN – software-defined networking
SGW – Service Gateway
SLA – Service Level Agreement
VHG – virtual home gateway
VoIP – voice over IP
WAN – wide area network

One of the root causes of network complexity lies in the traditional way technology has developed. The design of network elements, such as routers and switches, has traditionally been closed; they tend to have their own management systems with vertically integrated forwarding and control components, and often use proprietary interfaces and features. The goal of network management is, however, to ensure that the entire network behaves as desired – an objective that is much more important than the capabilities of any single network element. In fact, implementing end-to-end networking is an important mission for most operators, and having to configure individual network elements simply creates unwanted overhead. Network-wide programmability – the capability to change the behavior of the network as a whole – greatly simplifies the management of networks.
And the purpose of SDN is exactly that: to be able to modify the behavior of entire networks in a controlled manner. The tradition of slow innovation in networking needs to be broken if networks are to meet the increased demand for transport and processing capacity. By integrating recent technological advances and introducing network-wide abstractions, SDN does just that. It is an evolutionary step in networking. Telephony has undergone similar architectural transitions in the past. One such evolution took place when a clear separation was introduced between the functions of the data plane (including the SGW, PGW and M-MGW) and the control plane (including the MME and MSC-S). Now SDN has brought the concept of split architecture to networking. As the business case is proven, SDN technology will be deployed in networks worldwide over the next two to five years. At the same time, the need to maintain traditional operational principles and ensure interoperability between SDN and more traditional networking components will remain. In the future, SDN will help operators to manage scale, reduce costs and create additional revenue streams.

Standardization
The goal of the Open Networking Foundation (ONF), established in 2011, is to expedite the standardization of the key SDN interfaces. Today, the work conducted by the ONF focuses on the continued evolution of the OpenFlow protocol. The ONF has recently established the Architecture and Framework Working Group, whose goal is to specify the overall architecture of SDN. The work carried out by this group will guide future standardization efforts based on strategic use cases, requirements for data centers and carrier networks, the main interfaces, and their roles in the architecture. Ericsson is actively driving the work of this group, cooperating with other organizations to promote the evolution of OpenFlow and support an open-source implementation of the most recent specifications.
Other standardization organizations, most notably the IETF, have recently begun to extend their specifications to support SDN principles. In the IETF, the Interface to the Routing System (i2rs) WG and the recent activity of the Path Computation Element (PCE) WG will result in standardized ways to improve flexibility in changing how IP/MPLS networks behave. This is achieved through the introduction of new interfaces to distributed protocols running in the network, and mechanisms to adapt network behavior dynamically to application requirements. In addition to standardization organizations, a multitude of active communities and open-source initiatives, such as OpenStack, are getting involved in the specification of various SDN tools, working on maturing the networking aspect of virtualization.

[Figure 1: Service Provider SDN – components and promise. Service exposure: northbound APIs allow networks to respond dynamically to application/service requirements. Orchestrated network and cloud management: unified legacy and advanced network and cloud management systems and OSS/BSS allow SDN to be implemented as a step-by-step upgrade. Integrated network control: control of the entire network – from radio to edge to core to data center – for superior performance.]

Architectural vision
Split architecture – the decoupling of control functions from the physical devices they govern – is fundamental to the concept of SDN. In split-architecture networks, the process of forwarding in the data plane is separated from the controller that governs forwarding in the control plane. In this way, data-plane and control-plane functions can be designed, dimensioned and optimized separately, allowing capabilities from the underlying hardware to be exposed independently. This ability to separate control and forwarding simplifies the development and deployment of new mechanisms, and network behavior becomes easier to manage, reprogram and extend.
Deploying a split architecture, however, does not remove the need for high-availability software and hardware components, as networks must continue to meet stringent carrier requirements. The decoupling approach to architecture will nevertheless help rationalize the network, making it easier to introduce new functions and capabilities. The ultimate goal of the SDN architecture is to allow services and applications to issue requests to the network dynamically, avoiding or reducing the need for human intervention to create new services. Compared with today's practices, this will reduce the time to market of new services and applications.

[Figure 1 also lists the service-provider needs that Service Provider SDN addresses: accelerated service innovation; advanced public/hybrid enterprise and consumer cloud services; improved QoE; opex reduction through centralized management; and capex control.]

The OpenFlow protocol2 supports the separation of the data and control planes and allows the path of packets through the network to be determined by software. The protocol provides a simple, abstract view of networking equipment to the control layer. Split architecture makes virtualization of networking resources easier, and the control plane can provide virtual views of the network for different applications and tenants.

Ericsson has worked together with service providers to understand their needs for SDN, both in terms of reducing costs and creating new revenue opportunities. Based on these discussions, Ericsson has expanded the industry definition of SDN and customized it to fit the needs of operators. Ericsson's hybrid approach – Service Provider SDN – extends industry definitions, including virtualization and OpenFlow, with three additional key enablers: integrated network control; orchestrated network and cloud management; and service exposure.
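The match-action abstraction that OpenFlow presents to the control layer can be illustrated with a toy flow table. This is a simplified model for intuition only – not the OpenFlow wire protocol, whose match fields and instructions are defined in the ONF specification.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict    # header fields to match, e.g. {"ip_dst": "10.0.0.2"}
    actions: list  # what the switch does on a match, e.g. ["output:2"]

class FlowTable:
    """Highest-priority entry whose match fields all agree wins (simplified)."""
    def __init__(self):
        self.entries = []

    def install(self, entry):
        # The controller programs state; the switch only forwards
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, packet):
        for e in self.entries:
            if all(packet.get(k) == v for k, v in e.match.items()):
                return e.actions
        return ["drop"]  # table-miss behavior, simplified

table = FlowTable()
table.install(FlowEntry(priority=10, match={"ip_dst": "10.0.0.2"},
                        actions=["output:2"]))
table.install(FlowEntry(priority=20, match={"ip_dst": "10.0.0.2", "tcp_dst": 80},
                        actions=["output:3"]))

print(table.lookup({"ip_dst": "10.0.0.2", "tcp_dst": 80}))  # ['output:3']
print(table.lookup({"ip_dst": "10.0.0.2", "tcp_dst": 22}))  # ['output:2']
```

The point of the abstraction is that the controller sees only entries like these, regardless of whether the box underneath is a physical switch or a virtual forwarding element.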
[Figure 2: Service Provider SDN – architectural vision. An Ericsson NMS and cloud management system orchestrates applications and tenants; an SDN controller exposes control-plane APIs (such as OpenFlow) toward the forwarding elements – integrated systems, routers, and physical and virtual forwarding elements.]

Integrated network control
Service providers will use SDN across the network, from access to edge to core and all the way into the data center. With integrated network control, operators can use their network features – including QoS, edge functions and real-time activity indicators – to deliver a superior user experience. By expanding the perspective of SDN to include these three elements, service providers can evolve their existing networks to the new architecture and improve the experience of their customers. Implementing Service Provider SDN should remove the 'dumb pipe' label, giving operators an advantage over competitors that do not own networks.

Orchestrated network and cloud management
Service Provider SDN will integrate and unify legacy network management systems with new control systems as well as with OSS/BSS. The platform for integrated orchestration supports end-to-end network solutions, ranging from access over aggregation to edge functions, as well as the data centers used to deliver telco and enterprise applications and services.

Network virtualization
One of the benefits of Service Provider SDN, especially from a network-spanning perspective, is network virtualization. Through virtualization, logical abstractions of a network can be exposed instead of a direct representation of the physical network. Virtualization allows logical topologies to be created, and provides a way to abstract hardware and software components from the underlying network elements, thereby separating control from forwarding capabilities and supporting the centralization of control.
Unified orchestration platforms support network programming at the highest layer, as programming instructions flow through the control hierarchy – potentially all the way down to granular changes in flow paths at the forwarding-plane level.

Service exposure
Northbound APIs expose the orchestration platform to key network and subscriber applications and services. Together, the APIs and platforms allow application developers to maximize network capabilities without requiring intimate knowledge of network topology or functions. Adding northbound APIs into this unified orchestration layer provides the necessary support for applications and tenants to trigger automatic changes in the network, ensuring optimal QoS and guaranteeing SLAs.

Unified and centralized orchestration platforms greatly simplify the process of configuring, provisioning and managing complex service networks. Instead of having to tweak hundreds of distributed control nodes using fairly complex CLI programming, operations staff can use simple, intuitive programming interfaces to quickly adjust network configurations and create new services. By accelerating the process of service innovation, Service Provider SDN will lead to increased market share and service ARPU, creating significant revenue growth and possibly reducing annual churn rates.

A high-level network architecture that supports the Service Provider SDN vision is illustrated in Figure 2. Service-provider networks will combine distributed control-plane nodes (traditional routers and appliances) and data-plane elements that are governed by centralized elements – SDN controllers – in the control plane. Consequently, to make service-provider networks programmable, distributed and centralized control-plane components must be exposed to a unified orchestration platform.
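To make the northbound-API idea concrete, a request from an application and its translation into network policy might look like the sketch below. The request shape, field names and the `to_network_policy` function are invented for illustration; they do not correspond to any real Ericsson or ONF interface.

```python
import json

# Hypothetical northbound request: the application states its intent,
# not how the network should realize it
request = {
    "tenant": "video-service-A",
    "intent": "guarantee",
    "flows": {"ip_dst": "203.0.113.10", "proto": "udp"},
    "min_bandwidth_mbps": 20,
    "max_latency_ms": 50,
}

def to_network_policy(req):
    """Orchestrator sketch: translate the intent into classifier, queue and
    path settings without exposing topology to the application."""
    return {
        "classifier": req["flows"],
        "queue": {"min_rate_mbps": req["min_bandwidth_mbps"]},
        "path_constraint": {"latency_ms": req["max_latency_ms"]},
        "scope": "end-to-end",
    }

print(json.dumps(to_network_policy(request), indent=2))
```

The orchestration layer would then push the resulting policy down through the control hierarchy, ultimately as flow entries in the forwarding elements.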
In addition, the key elements of the orchestration platform and the control plane need to be exposed to network and subscriber applications/services.

Application examples
The major use cases for SDN in service-provider networks are summarized in Figure 3. Of these applications, the data center was the first to make use of SDN. Ericsson's approach to this is described in several articles published in Ericsson Business Review3 and in Ericsson Review4,5.

SDN can be applied in the aggregation network to support sophisticated virtualization and to simplify the configuration and operation of this network segment. Ericsson has developed a proof-of-concept system in cooperation with tier-1 operators to evaluate the applicability of SDN to aggregation networks. This work has been carried out as part of the European Commission's Seventh Framework Programme6.

Ericsson and Telstra have jointly developed a service-chaining prototype that leverages SDN technologies to enhance the granularity and dynamism of service creation. It also highlights how SDN can simplify network provisioning and improve resource-utilization efficiency.

Packet-optical integration is a popular topic of debate, out of which several different approaches are emerging. Split architecture provides a simple way to coordinate packet and optical networking; and so SDN, enhanced with features such as routing and wavelength assignment (RWA) and optical-impairment management, will be a natural fit for packet-optical integration.

Ericsson has started to develop solutions to virtualize the home gateway. Virtualization reduces the complexity of the home gateway by moving most of its sophisticated functions into the network. As a result, operators can prolong the home-gateway refresh cycle, cut maintenance costs and accelerate time to market for new services.
Virtualization of aggregation networks
The characteristics shared by aggregation and mobile-backhaul networks are a large number of nodes and relatively static tunnels that groom traffic for many flows. These networks are also known for their stringent requirements with regard to reliability and short recovery times. Besides L2 technologies, IP and IP/MPLS are making an entrance as generic backhaul solutions. From an operational point of view, despite the availability of distributed control-plane technology, this network segment is usually configured statically through a centralized management system, with a touch point to every network element. This makes the introduction of a centralized SDN controller straightforward for backhaul solutions.

A control element hosted on a telecom-grade server platform or on an edge router provides the operator with an interface that has the same look and feel as a single traditional router. The difference between operating an aggregation SDN network and a traditional network lies in the number of touch points required to provision and operate the domain. In the case of SDN, only a few points are needed to control the connectivity of the entire network. Consider, for example, an access/aggregation domain with hundreds or even thousands of nodes running distributed IGP routing protocols and the Label Distribution Protocol (LDP) to configure MPLS forwarding. In this case, SDN principles can be applied to simplify and increase the scalability of provisioning and operating such a network by pulling the configuration of the whole network together into just a few control points. The control element treats the underlying forwarding elements as remote line cards of the same system and, more specifically, controls their flow entries through the OpenFlow protocol.
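The single-touch-point model can be sketched as follows: the controller holds the whole topology, computes a path, and emits one flow entry per node along it. The topology, node names and rule format are illustrative; a real controller would push the rules over OpenFlow rather than return a dictionary.

```python
from collections import deque

# Toy aggregation topology held by the controller (names are invented)
topology = {
    "access1": ["agg1"],
    "agg1": ["access1", "agg2", "agg3"],
    "agg2": ["agg1", "edge"],
    "agg3": ["agg1", "edge"],
    "edge": ["agg2", "agg3"],
}

def shortest_path(graph, src, dst):
    """BFS shortest path; stands in for the controller's path computation."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                q.append(nbr)
    return None

def provision(graph, src, dst):
    """One touch point: emit a per-node flow entry for every hop on the path."""
    path = shortest_path(graph, src, dst)
    return {a: {"match": {"dst": dst}, "action": f"forward_to:{b}"}
            for a, b in zip(path, path[1:])}

rules = provision(topology, "access1", "edge")
for node, rule in rules.items():
    print(node, rule)
```

Instead of logging in to each of the hundreds of nodes, the operator programs the domain once and the controller fans the state out to every affected element.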
With this approach, any kind of connectivity model is feasible, regardless of whether the forwarding node is L2 or L3, since from a pure forwarding point of view the same model is used in both cases. At the same time, network resilience at the transport level can be implemented by adding protection mechanisms to the data path. The SDN controller can pre-compute and pre-install backup routes, with protection switching then handled by the network elements for fast failover. Alternatively, the SDN controller can reroute around failures – for instance, when multiple failures occur, or in scenarios with less stringent recovery requirements. From the outside, the entire network segment appears to be one big PE router and, for this reason, neighboring network elements cannot tell the difference (from a protocol point of view) between the SDN-controlled area and a traditional network. The network controller handles the interfacing with legacy systems for connection setup. Additional information on this point can be found in a presentation on the Virtual Network System7.

Dynamic service chaining
For inline services such as DPI, firewalls (FWs) and Network Address Translation (NAT), operators use different middleboxes or appliances to manage subscriber traffic. Inline services can be hosted on dedicated physical hardware or on virtual machines. Service chaining is required to route certain subscriber traffic through more than one such service. There are still no protocols or tools available for operators to perform flexible, dynamic traffic steering. Solutions currently available are either static, or their flexibility is significantly limited by scalability inefficiencies.
Given the rate of traffic growth, continued investment in capacity for inline services needs to be managed carefully.

[Figure 3: Application examples – virtualization of the aggregation network; network support for cloud (cloud/data center, mobile, residential and business); virtual home gateway; policy-based service chaining; and packet and optical integration.]

[Figure 4: Service-chaining principles. Within a Virtual Network System domain, SDN switches steer flows through service appliances such as FW, DPI and NAT, or through service cards on an SSR; the figure contrasts a flow before and after reclassification.]

Dynamic service-chaining can optimize the use of extensive high-touch services by selectively steering traffic through specific services or bypassing them completely, which in turn can result in capex savings owing to the avoidance of over-dimensioning. Greater control over traffic and the use of subscriber-based selection of inline services can lead to the creation of new offerings and new ways to monetize networks. Dynamic service steering enables operators to offer subscribers access to products such as virus scanning, firewalls and content filters through an automatic selection and subscription portal.

This concept of dynamic service chaining is built on SDN principles. Ericsson's proof-of-concept system uses a logically centralized OpenFlow-based controller to manage both switches and middleboxes. As well as on the traditional 5-tuple, service chains can be differentiated on subscriber behavior, application, and the required service. Service paths are unidirectional; that is, different service paths can be specified for upstream and downstream traffic. Traffic steering has two phases. The first classifies incoming packets and assigns a service path to them based on predefined policies. Packets are then forwarded to the next service based on the current position in their assigned service path.
No repeated classification is required; hence, the solution is scalable. The SDN controller sets up and reconfigures service chains flexibly, to an extent that is not possible with today's solutions. The dynamic reconfiguration of service chains needs a mechanism to handle notifications sent from middleboxes to the controller – for example, the DPI engine notifying the controller that it has recognized a video flow. These notifications may be communicated using the extensibility features of the OpenFlow 1.x protocol. Figure 4 summarizes service-chaining principles.

The Virtual Network System (VNS) is a domain of the network where the control plane is centralized, which excludes some of the traditional control agents. A simple API, such as OpenFlow, can be used to control the forwarding functionality of the network, and the VNS can create northbound interfaces and APIs to support the creation of new features, such as service chaining, which allows traffic flows to be steered dynamically through services, or parts thereof, by programming forwarding elements. The services provided by the network may reside on devices located in different parts of the network, as well as within an edge router – for example, on the service cards of Ericsson's Smart Services Router (SSR). Service chains are programmed into the network based on a combination of information elements from the different layers (L2-L4 and possibly higher).

Based on operator policies, various services can be applied to traffic flows in the network. For example, traffic may pass through DPI and FW functions, as illustrated by the red flow in Figure 4. However, once the type of flow has been determined by the DPI function, the operator may decide to modify the services applied to it. For example, if the flow is an internet video stream, it may no longer need to pass through the FW service, reducing the load on it.
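The classify-once, steer-by-position model described above can be sketched as follows. The chain names, policies and the reclassification trigger are invented for illustration; in the proof-of-concept these decisions would be realized as flow entries pushed by the OpenFlow controller, not Python objects.

```python
# Hypothetical service chains: classify once at ingress, then forward by
# position in the assigned chain
CHAINS = {
    "default": ["DPI", "FW"],
    "video": [],  # once DPI detects video, bypass both DPI and FW
}

class Steering:
    def __init__(self):
        self.flow_chain = {}  # flow id -> chain name, set once at ingress

    def classify(self, flow_id, policy="default"):
        self.flow_chain.setdefault(flow_id, policy)

    def next_hop(self, flow_id, position):
        # No repeated classification: only the chain position is consulted
        chain = CHAINS[self.flow_chain[flow_id]]
        return chain[position] if position < len(chain) else "egress"

    def reclassify(self, flow_id, policy):
        # Triggered by a middlebox notification, e.g. DPI reporting video
        self.flow_chain[flow_id] = policy

s = Steering()
s.classify("flow-1")
print(s.next_hop("flow-1", 0), s.next_hop("flow-1", 1))  # DPI FW
s.reclassify("flow-1", "video")   # DPI reports an internet video stream
print(s.next_hop("flow-1", 0))    # egress: subsequent packets skip the chain
```

The key property is that packet forwarding only ever consults the pre-assigned chain and position; the expensive classification and any reclassification happen once, at the controller's direction.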
Furthermore, after the service type has been detected, the subsequent packets of the same flow may no longer need to pass the DPI service either; hence the path of the flow can be updated – as indicated by the blue flow in Figure 4.

Packet-optical integration
The increased programmability that SDN enables creates an opportunity to address the challenges presented by packet-optical networking. SDN can simplify multi-layer coordination and optimize resource allocation at each layer by redirecting traffic (such as VoIP, video and web) based on the specific requirements of the traffic and the best serving layer. Instead of a layered set of separate media coordinated in a static manner, SDN could transform the packet-optical infrastructure into something more fluid, with a unified recovery approach and an allocation scheme based on real-time link utilization and traffic composition. The ONF still has some work to do to adapt OpenFlow to cope with optical constraints. To speed up packet-optical integration, a hybrid architecture can be deployed in which OpenFlow drives the packet domain and the optical domain remains under the control of GMPLS. This approach utilizes the extensive optical capabilities of GMPLS; instead of working to extend OpenFlow with optical capabilities, it allows us to focus on the actual integration of the optical and packet domains and on applications that utilize the flexibility of a unified SDN controller.

Home gateway control
The concept of the virtual home gateway (VHG) introduces a new home-network architecture, primarily driven by the aim of improving service delivery and management. The target architecture emerges by applying SDN capabilities between the residential gateway (RG) and the edge network – moving most of the gateway's functionalities into an embedded execution environment.
Virtualizing the RG significantly reduces its complexity and provides the operator with greater granularity in remote-control management, which can be extended to every home device and appliance. As a result, operators can reduce their investments significantly by prolonging the RG refresh cycle, cutting maintenance costs and accelerating time to market for new services. The VHG concept allows operators to offer seamless and secure remote profile instantiation, stretching the boundaries of a home network without compromising security. The concept provides the tools to configure and reconfigure middleboxes dynamically, so that communication between devices attached to different home networks can be established, and/or to provide specific connectivity for a third-party service provider – between, for example, a utility company and a particular device. By embedding SDN capabilities, Ericsson's concept enables operators to offer personalized applications to subscribers, each with its own specific chain of management policies and/or services. The target architecture places an operator-controlled bridge at the customer's premises instead of a complex router, while the L3-L7 functionalities are migrated to the IP edge or into the operator cloud. Using SDN technology between the IP edge and the switch in this way offers the operator fine-grained control for dynamic configuration of the switch.

Conclusion
With its beginnings in data-center technology, SDN has developed to the point where it can offer significant opportunities to service providers. To maximize the potential benefits and deliver superior user experience, SDN needs to be implemented beyond the sphere of the data center, across the entire network. This can be achieved by enabling network programmability based on open APIs. Service Provider SDN will help operators to scale networks and take advantage of new revenue-generating possibilities.
The broader Service Provider SDN vision goes beyond leveraging split architecture to include several software components that can be combined to create a powerful end-to-end orchestration platform for WANs and distributed cloud data centers. Over time, this comprehensive software-based orchestration platform will be able to treat the overall operator network as a single programmable entity.

References
1. Ericsson Mobility Report: On the pulse of the Networked Society, November 2012, available at: http://www.ericsson.com/ericsson-mobility-report
2. Open Networking Foundation, available at: https://www.opennetworking.org/
3. Ericsson Business Review, November 2012, The premium cloud: how operators can offer more, available at: http://www.ericsson.com/news/121105-ebr-the-premium-cloud-how-operators-can-offer-more_244159017_c
4. Ericsson Review, December 2012, Deploying telecom-grade products in the cloud, available at: http://www.ericsson.com/res/thecompany/docs/publications/ericsson_review/2012/er-telecom-gradecloud.pdf
5. Ericsson Review, December 2012, Enabling the network-embedded cloud, available at: http://www.ericsson.com/res/thecompany/docs/publications/ericsson_review/2012/er-network-enabled-cloud.pdf
6. European Commission, Seventh Framework Programme, Split Architecture Carrier Grade Networks, available at: http://www.fp7-sparc.eu/
7. Elisa Bellagamba, Virtual Network System, MPLS & Ethernet World Congress, Paris, February 2012, available at: http://www.slideshare.net/EricssonSlides/e-bellagamba-mewc12-pa8

Elisa Bellagamba is a portfolio strategy manager in Product Area IP & Broadband, where she has been leading SDN-related activities since their inception. She holds an M.Sc. cum laude in computer science engineering from Pisa University, Italy.

Attila Takacs is a research manager in the Packet Technologies research area of Ericsson Research.
He has been the technical lead of research projects on software-defined networking (SDN), OpenFlow, GMPLS, traffic engineering, PCE, IP/MPLS, Ethernet, and OAM for transport networks. He is also an active contributor to standardization; in particular, he has worked in the ONF, IETF and IEEE. He holds more than 30 international patent applications, granted and pending. He holds an M.Sc. in computer science and a postgraduate degree in banking informatics, both from the Budapest University of Technology and Economics, Hungary, as well as an MBA from the CEU Business School, Budapest.

Joe Wilke is head of Development Unit IP & Broadband Technology Aachen, where he currently leads the SDN execution program. He holds a degree in electrical engineering from the University of Aachen, Germany, and a degree in engineering and business from the University of Hagen, Germany.

Acknowledgements
The authors would like to thank Diego Caviglia, Andreas Fasbender, Howard Green, Wassim Haddad, Alvaro de Jodra, Ignacio Más, Don McCullough and Catherine Truchan for their contributions to this article.

Smarter networks

HSPA evolution: for future mobile-broadband needs
As HSPA continues to evolve, addressing the needs of changing user behavior, new techniques develop and become standardized. These techniques provide network operators with the flexibility, capacity and coverage needed to carry voice and data into the future.

Niklas Johansson, Linda Brus, Erik Larsson, Billy Hogan and Peter von Wrycza

Mobile broadband (MBB), providing high-speed internet access from more or less anywhere, is becoming a reality for an increasing proportion of the world's population. There are several factors fuelling the need for high-performance MBB networks, not least the growing number of mobile internet connections.
As Figure 1 illustrates, global mobile subscriptions (excluding M2M) are predicted to grow to 9.1 billion by the end of 2018. Nearly 80 percent of mobile subscriptions will be MBB ones¹, indicating that MBB will be the primary service for most operators in the coming years.

Impact of affordable smartphones
To a large extent, the rapid growth of MBB can be attributed to the widespread availability of low-cost MBB-capable smartphones, which are replacing voice-centric feature phones. For less than USD 100, consumers can purchase highly capable WCDMA/HSPA-enabled smartphones with dual-core processors and dual-band operation that support data rates of up to 14.4Mbps. This price-to-sophistication ratio has turned the smartphone into an affordable mass-market product, and has accelerated the increase in smartphone subscriptions – estimated to rise from 1.2 billion at the end of 2012 to 4.5 billion by 2018¹.

Ericsson ConsumerLab studied a group of people to assess how they perceived network quality and what issues they encountered when using their smartphones. The study identified two key factors that are essential to the perceived value of a smartphone: a fast and reliable connection to the data network, and good coverage². These findings highlight an important goal for operators: to provide all network users with high-speed data services and good-quality voice services everywhere.
This can be achieved by securing:
capacity – to handle growing smartphone traffic cost-efficiently;
flexibility – to manage the wide range of traffic patterns efficiently; and
coverage – to ensure good voice and app user experience everywhere.

Box A – Terms and abbreviations
CELL_FACH cell forward access channel
CPC Continuous Packet Connectivity
DPCH dedicated physical channel
EUL Enhanced Uplink
HS-DSCH High-Speed Downlink Shared Channel
HSDPA High-Speed Downlink Packet Access
HSPA High-Speed Packet Access
HSUPA High-Speed Uplink Packet Access
LPN low-power node
M2M machine-to-machine
MBB mobile broadband
MIMO multiple-input, multiple-output
ROT rise-over-thermal
SRB Signaling Radio Bearer
UL uplink
URA_PCH UTRAN registration area paging channel
UTRAN Universal Terrestrial Radio Access Network
WCDMA Wideband Code Division Multiple Access

App coverage
For smartphone applications, like social networking and video streaming, to function correctly, access to the data network – and a network that can deliver a defined minimum level of performance – is needed. How well a user perceives the performance of an application is determined by the relationship between the application's performance requirements (in terms of data speed and response time) and the actual performance delivered by the network for that user, at their location, at a given time. The term app coverage denotes the level of network performance needed to provide subscribers with a satisfactory user experience for a given application.

In the past, the task of dimensioning networks was simpler, as calculations were based on delivering target levels of voice coverage and providing a minimum data rate. Today's applications, however, have widely varying performance requirements.
As a result, dimensioning a network has become a more dynamic process, one that needs to take the varying performance requirements of currently popular apps into consideration.

Footprint
As illustrated in Figure 2, at the end of 2012, 55 percent of the world's population was covered by WCDMA/HSPA, a figure that is set to rise to more than 85 percent population coverage by the end of 2018¹. Today, many developed markets are nearing the 100 percent population coverage mark³. This widespread deployment, together with support for the broadest range of devices, makes WCDMA/HSPA the primary radio-access technology to handle the bulk of MBB and smartphone traffic for years to come.

Figure 2: Population coverage by technology, 2012-2018 (source: Ericsson¹) – GSM/EDGE stays above 90 percent, WCDMA/HSPA grows from around 55 percent to more than 85 percent, and LTE from around 10 percent to around 60 percent.

Since its initial release, the 3GPP WCDMA standard has evolved, and continues to develop. Today, WCDMA/HSPA is a best-in-class voice solution with exceptional voice accessibility and retainability. It offers high call retention as well as being an excellent access technology for MBB, as it delivers high data rates and high cell-edge throughput – all of which enable good user experience across the entire network. The continued evolution of WCDMA/HSPA in Releases 11 and 12 includes several key features that aim to increase network flexibility and capacity to meet growing smartphone traffic and secure voice and app coverage.

Evolution of traffic patterns
Applications have varying demands and behaviors when it comes to when and how much data they transmit. Some apps transmit a large amount of data continuously for substantial periods of time, while others transmit small packets at intervals that can range from a few seconds to minutes or even longer. Typically, applications send lots of data in bursts, interspersed with periods of inactivity when they send little or no data at all. Rapid handling of individual user requests, enabled by high instantaneous data rates, improves overall network performance, as control-channel overhead is reduced and capacity for other traffic becomes available sooner. So, if a network can fulfill requests speedily, all users will experience the benefits of reduced latency and faster round-trip times.

Figure 1: Mobile and MBB subscriptions, 2009-2018¹ – total mobile subscriptions and mobile-broadband subscriptions, in millions.

Web browsing on a smartphone is a classic example of a bursty application, for both uplink and downlink communication. When a smartphone requests the components of a web page from the network (in the uplink), they are transferred in bursts (in the downlink), and the device acknowledges receipt of the content (in the uplink). As a result, uplink and downlink performance become tightly connected, and better uplink performance therefore has a positive effect on downlink data rates as well as on overall system throughput.

For web browsing, the instantaneous downlink speed for mobile users needs to be much higher on average than the uplink speed. However, the number of services requiring higher data rates in the uplink, such as video calling and cloud synching of smartphone data, is on the rise. As user behavior changes, traffic-volume patterns also change, and measurements show it is becoming more common for uplink levels to be on par with downlink levels, and in some cases even to outweigh downlink traffic. Consequently, continuing to develop data rates to secure uplink-heavy services is key to improving overall user performance.
Figure 3: Where to improve, densify and add – area traffic density versus environment, from dense urban (improve, densify and add) through urban and suburban to rural (improve only).

Figure 4: Relationship between maximum interference and peak rate – the maximum uplink data rate that can be achieved (y-axis) grows with the maximum interference, or UL ROT, that the network can handle (x-axis).

High-performance networks
The standard approach used to create a high-performance network with wide coverage and high capacity is to first improve the macro layer, then densify it by deploying additional macro base stations, and finally add low-power nodes (LPNs) in strategic places, such as traffic hotspots, that can offload the macro network. Each step addresses specific performance targets and applies to different population densities, from urban to rural – as illustrated in Figure 3. The evolution of WCDMA/HSPA includes a number of features that target macro-layer improvement, as well as enhancements to deployments where LPNs have been added.

Improving the uplink
Recent features in the 3GPP specification have substantially improved uplink capabilities. Features such as uplink multi-carrier, higher-order modulation with MIMO, EUL in CELL_FACH state, and Continuous Packet Connectivity (CPC) have multiplied the peak rate (up to 34Mbps per carrier in Release 11) and increased the number of simultaneous users a network can support almost fivefold. Given the high uplink capabilities already supported by the standard, the next development (Release 12) will enable and extend the use of these capabilities to as many network users as possible.

The maximum allowed uplink interference level in a cell, also known as maximum rise-over-thermal (ROT), is a highly important quantity in WCDMA networks. This is because the maximum allowed interference level has a direct impact on the peak data rates that the cell can deliver.
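As an illustrative aside, the classic WCDMA uplink pole-capacity relation (a standard textbook formula, not something stated in this article) connects noise rise over thermal to the uplink load factor, and shows why each extra dB of allowed ROT is bought at a steeply increasing price as the cell approaches full load:

```python
import math

# Standard WCDMA uplink relation (illustrative, not from the article):
# noise rise over thermal grows with the uplink load factor eta as
#   ROT = 1 / (1 - eta),  i.e.  ROT_dB = -10 * log10(1 - eta)

def rot_db(load):
    """Rise-over-thermal in dB for a given uplink load factor (0 <= load < 1)."""
    return -10 * math.log10(1 - load)

for load in (0.5, 0.8, 0.9, 0.99):
    print(f"load {load:.2f} -> ROT {rot_db(load):5.1f} dB")
```

Under this relation, loads of roughly 0.8, 0.9 and 0.99 correspond to noise-rise levels of about 7dB, 10dB and 20dB, which is consistent with the operating points discussed in the text.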
Typically, macro cells are dimensioned with an average ROT of around 7dB, which enables UL data rates of 5.7Mbps (supported by most commercial smartphones) and secures voice and data coverage for cell-edge users. High data rates, such as 11Mbps (available since Release 7) and 34Mbps (available since Release 11), require ROT levels greater than 10dB and 20dB respectively. Figure 4 illustrates the relationship between ROT and peak data rate. The maximum permissible uplink interference level is determined by a number of factors, including the density of the network, the capability of the network to handle interference (for example, with advanced techniques such as interference suppression), and the capabilities of the devices in the network, including both smartphones and legacy feature phones.

The Lean Carrier solution, introduced in Release 12, is an additional capability that helps operators meet the needs of high-data-rate users. This multi-carrier solution builds on the Release 9 HSUPA dual-carrier solution that is currently being implemented in commercial smartphones. The dual-carrier solution allows two carriers, primary and secondary, to be assigned to a user. By doing this, the traffic generated by the user can be allocated flexibly between the two carriers, while at the same time doubling the maximum achievable peak rate. The Lean Carrier solution optimizes the secondary carrier for fast and flexible handling of multiple high-data-rate users, through more efficient granting and lower cost per bit. The solution is designed to support multiple bursty data users in a cell transmitting at the highest peak rates without causing any uplink interference among themselves or to legacy users. To maximize energy efficiency, the Lean Carrier solution should cost nothing in system or terminal resources on the secondary carrier until the user starts to send data. Lean Carrier can be flexibly deployed according to the needs of the network.
For example, the maximum ROT on a user's secondary (lean) carrier can be configured to support any available uplink peak data rate, while the maximum ROT on a user's primary carrier can be configured to secure cell-edge coverage for signaling, random access and legacy (voice) users.

Rate adaptation is another technology under study; it increases network capacity in some common traffic scenarios, such as areas where subscribers are a mix of high-rate and low-rate users, or areas where there are only high-rate users. High uplink data rates require more power, and maintaining a fixed data rate at the desired quality target in an environment where interference levels vary greatly can result in large fluctuations in received power. To avoid such fluctuations, the concept of rate adaptation can be applied: high-rate users are assigned a fixed received-power budget, and as interference levels change, bit rates are adapted to maintain the desired quality target while not exceeding the allowed power budget. In short, as illustrated in Figure 5, the bit rate is adapted to the received power, not the power to the rate. Limiting fluctuations in received power for high-rate users is good for overall system capacity: high-rate users can transmit more efficiently, and other users in the system, including low-rate ones such as voice users, consume less power when power levels are stable and predictable.

Figure 5: Rate adaptation results in predictable interference levels – baseline (fixed rate, variable received power) versus rate adaptation (fixed received power, variable rate) for data and control channels over time.

Maintaining a device in connected mode for as long as possible is another technique that can be used to improve uplink performance. Smartphone users want to be able to rapidly access the network from a state of inactivity.
Maintaining a device in a connected-mode state, such as CELL_FACH or URA_PCH, for as long as possible is one way of achieving this – access to the network from these states is much faster than from the IDLE state. In recent releases, connected mode has been made more efficient from a battery and resource point of view through the introduction of features such as CPC, fractional DPCH and SRB on HS-DSCH. As a consequence, it is now feasible to maintain inactive devices in these states for longer.

As the number of smartphone users increases, networks need flexible mechanisms to maintain high system throughput, even during periods of extremely heavy load. Allowing the network to control the number of concurrently active users, as well as the number of random accesses, is one such mechanism. Improvements that enable high throughput under heavy load, and that allow users to benefit from lower latency in connected mode while enabling service-differentiated admission decisions and control over the number of simultaneous users, have been proposed for Release 12.

Expanding voice and app coverage
Good coverage is crucial for positive smartphone user experience and customer loyalty², which for operators translates into securing voice coverage and delivering data-service coverage that meets the needs of current and future apps.

Figure 6: Release 11 uplink transmit diversity beamforming.

Box B – The system
The scenario shown in Figure 7 is for bursty traffic. Four LPNs were added to each macro base station in the network, and 50 percent of the users were located in traffic hotspots. The transmission power of the macro base station was 20W, and 1W and 5W LPNs were deployed.

There are several ways to improve coverage for voice and data.
One way is to use lower frequency bands – compared with 2GHz bands, considerable coverage improvement can be achieved by refarming the 900MHz spectrum from GSM, for example. Voice coverage can be significantly extended with lower-rate speech codecs, whereas four-way receiver diversity and advanced antennas can improve coverage for both voice and data.

Figure 7: System-level gains for the scenario described in Box B – average and cell-edge user throughput gains (in percent) for 1W and 5W LPNs. LPNs were deployed randomly and no LPN range expansion was used. Gains are given relative to a macro-only deployment. Offloading – the percentage of traffic served by the LPNs – was 32 percent for 1W LPNs and 41 percent for 5W LPNs.

Uplink transmit diversity was introduced in Release 11. This feature allows terminals with two antennas to increase the reliability and coverage of uplink transmissions and decrease overall interference in the system. It works by allowing the device to use both antennas for transmission in an efficient way, using beamforming. Figure 6 illustrates how the radio transmission becomes focused in a given direction, resulting in a reduction in interference between the device and other nodes, and improving overall system performance. An additional mode within uplink transmit diversity is antenna selection, in which the antenna with the best radio propagation conditions is chosen for transmission. This is useful, for example, when one antenna is obstructed by the user's hand. Uplink transmit diversity increases the coverage of all uplink traffic, for both voice calls and data transmissions.

With Release 11, multi-flow HSDPA transmissions are supported. This allows two separate nodes to transmit to the same terminal, improving performance for users at the cell edge and resulting in better app coverage.
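The benefit of the two-antenna techniques described above can be illustrated with a toy calculation. This is a generic illustration of transmit beamforming versus antenna selection, not the 3GPP algorithm, and the channel values are invented:

```python
import cmath
import math

# Toy illustration: with two transmit antennas and complex channel gains
# h1, h2, co-phasing (maximum-ratio) beamforming weights make the two
# paths add coherently at the receiver, giving a received-power gain of
# |h1|^2 + |h2|^2 relative to a single unit-gain antenna.
# Antenna selection simply transmits on the stronger antenna.

def beamforming_gain(h1, h2):
    return abs(h1) ** 2 + abs(h2) ** 2

def antenna_selection_gain(h1, h2):
    return max(abs(h1) ** 2, abs(h2) ** 2)

h1 = 1.0                                   # direct path
h2 = 0.8 * cmath.exp(1j * math.pi / 3)     # weaker, phase-shifted path

print(f"beamforming gain:       {10 * math.log10(beamforming_gain(h1, h2)):.2f} dB")
print(f"antenna-selection gain: {10 * math.log10(antenna_selection_gain(h1, h2)):.2f} dB")
```

Beamforming never does worse than selection, since it combines the energy of both paths; selection remains useful when one antenna is effectively blocked, as in the hand-obstruction case mentioned above.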
In Release 12, simultaneous app-data and voice-call transmissions will become more efficient, and the time it takes to switch the transmission time interval from 10ms to 2ms will be considerably shorter. These improvements increase both voice and app coverage.

Enhancing small-cell deployments
The addition of small cells through deploying LPNs in a macro network – resulting in a heterogeneous network – is a strategic way to improve capacity, data rates and coverage in urban areas. Typically, the deployment of LPNs is beneficial in hotspots where data usage is heavy, to bridge coverage holes created by complex radio environments, and in some specific deployments such as in-building solutions. Figure 7 shows the performance gains in a heterogeneous-network deployment (described in more detail in Box B). Offloading to small cells has several benefits: it provides increased capacity for handling smartphone traffic, and it results in enhanced app coverage.

To maximize spectrum usage, the traditional macro base stations and LPNs share the same frequency, with either separate or shared cell identities. These deployments, illustrated in Figure 8, are referred to as separate cell and combined cell. It is possible to operate both separate-cell and combined-cell deployments based on functionality already implemented in the 3GPP standard, and such deployments have been shown to provide substantial performance benefits over macro-only deployments. Today, combined cells tend to be deployed in specific scenarios, such as railroad, highway and in-building environments. Separate-cell deployments, on the other hand, are more generic and provide a capacity increase in more common scenarios. In 3GPP Release 12, small-cell range expansion techniques and control-channel improvements are being introduced to enable further offloading of the macro network. Mobility performance enhancements for users moving at high speed through small-cell deployments are also being investigated by 3GPP.
When a macro cell in a combined-cell deployment is complemented with additional LPNs close to users, data rates and network capacity are improved. By allowing the network to reuse the same spreading codes in different parts of the combined cell, the cell's capacity can be increased further – a technique being studied in Release 12. And as there is no fundamental uplink/downlink imbalance in a combined cell, mobility signaling is robust, signaling load is reduced, and network management is simplified. In summary, heterogeneous networks are essential for handling growing smartphone traffic because they support flexible deployment strategies, increase the capacity of a given HSPA network, and extend voice and app coverage. The improvements standardized in Release 12 will further enhance these properties.

Figure 8: LPN deployment scenarios – LPNs deployed under an RNC either as separate cells on the same carrier as the macro, or as part of a combined cell on the same carrier.

Conclusions
WCDMA/HSPA will be the main technology providing MBB for many years to come. Operators want WCDMA/HSPA networks that can guarantee excellent user experience throughout the whole network coverage area for all types of current and future mobile devices. The prerequisites for networks are:
capacity – to handle growing smartphone traffic cost-efficiently;
flexibility – to manage the wide range of traffic patterns efficiently; and
coverage – to ensure good voice and app user experience everywhere.
HSPA evolution, through the capabilities already available in 3GPP and those under study in 3GPP Release 12, aims to fulfill these prerequisites. There are several ways to improve voice and app coverage. Enhancements to the uplink improve the ability to serve bursty traffic quickly and efficiently – improving user experience and increasing smartphone capacity.
Small-cell improvements will increase network capacity for smartphone traffic and further improve voice and app coverage. With all of these enhancements, WCDMA/HSPA – already the dominant MBB technology and best-in-class for voice – has a strong evolution path to meet the future demands presented by the global growth of MBB and highly capable smartphones.

References
1. Ericsson Mobility Report, June 2013, available at: http://www.ericsson.com/res/docs/2013/ericsson-mobility-report-june-2013.pdf
2. Ericsson ConsumerLab report, January 2013, Smartphone usage experience – the importance of network quality and its impact on user satisfaction, available at: http://www.ericsson.com/news/130115-ericsson-consumerlab-reportnetwork-quality-is-central-to-positive-smartphone-user-experiences-andcustomer-loyalty_244129229_c
3. International Communications Market Report 2011, Ofcom, available at: http://stakeholders.ofcom.org.uk/binaries/research/cmr/cmr11/icmr/ICMR2011.pdf

Niklas Johansson is a senior researcher at Ericsson Research. He joined Ericsson after receiving his M.Sc. in engineering physics and B.Sc. in business studies from Uppsala University in 2008. Since joining Ericsson, he has been involved in developing advanced receiver algorithms and multi-antenna transmission concepts. Currently, he is project manager for the Ericsson Research project that is developing concepts and features for 3GPP Release 12.

Peter von Wrycza is a senior researcher at Ericsson Research, where he works with the development and standardization of HSPA. He received an M.Sc. (summa cum laude) in electrical engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 2005, and was an electrical engineering graduate student at Stanford University, Stanford, CA, in 2003-2005. In 2010, he received a Ph.D. in telecommunications from KTH.
Billy Hogan joined Ericsson in 1995 and works in the Technical Management group in the Product Development Unit WCDMA and Multi-Standard RAN. He is a senior specialist in the area of enhanced uplink for HSPA. He works with the system design and performance of EUL features and algorithms in the RAN product, and with the strategic evolution of EUL to meet future needs. He is currently team leader of the EUL Enhancements team for 3GPP Release 12. He holds a B.E. in electronic engineering from the National University of Ireland, Galway, and an M.Eng. in electronic engineering from Dublin City University, Ireland.

Erik Larsson joined Ericsson in 2005. Since then, he has held various positions at Ericsson Research, working with baseband algorithm design and concept development for HSPA. Today, he is a system engineer in the Technical Management group in the Product Development Unit WCDMA and Multi-Standard RAN, working with concept development and standardization of HSPA. He holds an M.Sc. in engineering physics (1999) and a Ph.D. in signal processing (2004), both from Uppsala University, Sweden.

Linda Brus joined Ericsson in 2008. Since then, she has been working with system simulations, performance evaluations, and developing algorithms for WCDMA RAN. Today, she is a system engineer in the Technical Management group in the Product Development Unit WCDMA and Multi-Standard RAN, working with concept development for the RAN product and HSPA evolution. She holds a Ph.D. in electrical engineering, specializing in automatic control (2008), from Uppsala University, Sweden.

Same bandwidth, double the data

Next generation video compression
MPEG and ITU have recently approved a new video-compression standard, known as High Efficiency Video Coding (HEVC), or H.265, that is set to provide double the capacity of today's leading standards¹.
Per Fröjdh, Andrey Norkin and Rickard Sjöberg

Requiring only half the bitrate of its predecessor, the new standard will significantly reduce the need for bandwidth and for expensive, limited spectrum. HEVC will enable new video services to be launched; the first applications to appear are likely to be for mobile devices and OTT services, followed by TV – and in particular ultra-HD television (UHDTV).

State-of-the-art video compression can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. Estimates indicate that compressed real-time video accounts for more than 50 percent of current network traffic², and this figure is set to rise to 90 percent within a few years³. New services, devices and changing viewing patterns are among the factors contributing to this growth, as is increased viewing of traditional TV and video-streaming services, such as Netflix, YouTube and Hulu, on a range of devices – from phones and tablets to PCs and home-entertainment systems. As HD shifts from luxury to commodity, it will soon be challenged by UHD, which offers resolutions up to 16 times greater.

Making standards
Most video viewed by subscribers today has been digitized and reduced in size through the application of a compression standard. The more popular include the H.26x series from ITU and the MPEG-x series from ISO/IEC. First published in 1994, the MPEG-2 standard, also known as H.262, played a crucial role in the launch of digital-TV services, as it enabled the compression of TV streams to fit the spectrum available. This is also the standard used to compress movies onto a DVD. The H.264 standard (also known as MPEG-4 AVC), published in 2003, has provided the best compression efficiency to date, and is currently the most widely used video-compression codec.
It has been successfully incorporated into most mobile devices, and is the best way to reduce the size of video carried over the internet. It is the preferred format for Blu-ray discs, telepresence streams and, most notably, HDTV.

BOX A Terms and abbreviations
AVC: advanced video coding
CABAC: context-adaptive binary arithmetic coder
CTU: coding-tree unit
CU: coding unit
fps: frames per second
HD: high definition; often refers to 1280 x 720 or 1920 x 1080 pixels
HEVC: High Efficiency Video Coding
IEC: International Electrotechnical Commission
ISO: International Organization for Standardization
ITU: International Telecommunication Union
MPEG: Moving Picture Experts Group
OTT: over-the-top
SAO: sample adaptive offset
UHD: ultra high definition; often refers to 3840 x 2160 (4K) or 7680 x 4320 (8K) pixels
WPP: wavefront parallel processing

Now imagine a codec that is twice as efficient as H.264. This was the target set by MPEG and ITU in 2010, when they embarked on a joint standardization effort that three years later delivered HEVC/H.265 [4, 5]. The new codec offers a much more efficient level of compression than its predecessor H.264, and is particularly suited to higher-resolution video streams, where bandwidth savings with HEVC are around 50 percent. In simple terms, HEVC enables a network to deliver twice the number of TV channels. Compared with MPEG-2, HEVC can provide up to four times the capacity on the same network.

Like most standards, the MPEG and ITU video codecs have been developed in a collaborative fashion involving many stakeholders – manufacturers, operators, broadcasters, vendors and academics. Ericsson has been an active participant in video standardization for more than 15 years, and was closely involved in HEVC. Throughout the development of the standard, Ericsson has led several of the core experiments, chaired ad-hoc working groups and contributed significantly to the development of the technology behind the codec.
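The capacity rules of thumb above (H.264 needs roughly twice the bitrate of HEVC, and MPEG-2 roughly four times) can be made concrete with a back-of-the-envelope calculation. The link capacity and per-channel bitrate below are illustrative numbers only, not figures from the article:

```python
# Back-of-the-envelope: how many TV channels fit in a fixed network
# capacity under each codec, using the rule of thumb that H.264 needs
# ~2x the bitrate of HEVC and MPEG-2 ~4x. The 40 Mbps link and the
# 2 Mbps HEVC channel are illustrative assumptions.

link_capacity_mbps = 40.0
hevc_channel_mbps = 2.0

bitrate_factor = {"HEVC": 1.0, "H.264": 2.0, "MPEG-2": 4.0}

channels = {
    codec: int(link_capacity_mbps // (hevc_channel_mbps * factor))
    for codec, factor in bitrate_factor.items()
}
# HEVC carries twice as many channels as H.264, and four times MPEG-2.
```

With these inputs the same link carries 20 HEVC channels, 10 H.264 channels or 5 MPEG-2 channels, matching the "twice the channels" and "four times the capacity" claims.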
Our greatest expertise lies in the areas of the deblocking filter [6] and in reference picture management [7].

Concepts that create efficiency
One of the primary target areas for HEVC compression is high-resolution video, such as HD and UHD.

FIGURE 1 Simplified HEVC encoder diagram.

The statistical characteristics of these types of video streams tend to be different from lower-resolution content: frame sizes are larger, and frame rates and perceived quality are higher – imposing tough requirements on compression efficiency, as well as on the computational complexity of the encoding and decoding processes. As the architectures of smartphones and tablets go multi-core, the ability to take advantage of parallel processing is key when it comes to the efficient compression of high-resolution content. All of these points have been taken into consideration during the development of the new standard.

The hybrid block-based coding used by the new codec is the same as the one used in earlier video-coding standards. To encode content, video frames are divided into blocks that are coded individually by applying prediction – based either on neighboring blocks in the same picture (intra prediction) or on previously coded pictures (motion estimation/compensation). The difference between the predicted result and the original video data is subsequently coded by applying block transforms and quantization. In this way, a block can be represented by just a few non-zero coefficients.

FIGURE 2 Example of the coding-tree unit structure in HEVC.
Quantized transform coefficients, motion vectors, prediction directions, block modes and other types of information are encoded with lossless entropy coding. Hybrid block-based coding is illustrated in Figure 1.

To ensure the highest level of compression efficiency, and support for parallel processing, some parts of HEVC have been significantly modified compared with previous generations of hybrid block-based codecs. For most of the previous MPEG-x and H.26x codecs, the largest entity that could be independently encoded was a macroblock (16 × 16 pixels). For HEVC, the picture is split into coding-tree units (CTUs) with a maximum size of 64 × 64 pixels. Every CTU is the root of a quadtree, which can be further divided into leaf-level coding units (CUs), as illustrated in Figure 2. The CTUs are coded in raster scan order, and each unit can itself contain a quadtree structure. Each CU contains one or more prediction partitions that are predicted independently of each other. A CU is also associated with a transform quadtree that compresses the prediction residual and has a structure similar to that of a CTU – as shown in Figure 2.

Partitions for motion prediction can form square or rectangular shapes, which is also the case with earlier standards. HEVC also supports something called asymmetric motion partitioning, which can split the CU into prediction units of unequal width or height, as illustrated in Figure 3. The size of the prediction blocks in HEVC can therefore vary from 4 × 4 samples up to 64 × 64, while transform sizes vary from 4 × 4 to 32 × 32 samples. Large prediction blocks and transform sizes are the most efficient way to encode large smooth areas, whereas smaller prediction blocks and transforms can be used to achieve precision in areas that contain finer detail.

The HEVC specification covers more intra-prediction modes than H.264, including a planar mode to approximate a surface from neighboring pixels, a flat DC mode and 33 angular prediction modes.
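The recursive CTU-to-CU splitting described above can be sketched as a quadtree. In the toy example below, a simple variance test stands in for the rate-distortion search a real encoder performs; the threshold and the block-variance criterion are illustrative assumptions, not part of the standard:

```python
# Sketch of HEVC-style quadtree partitioning of a 64x64 CTU into CUs.
# Smooth areas are kept as large CUs; detailed areas are split into four
# quadrants, recursively, down to a minimum CU size. The variance test
# is a stand-in for a real encoder's rate-distortion decision.
import random

def split_ctu(pixels, x, y, size, min_cu=8, threshold=100.0):
    """Return a list of (x, y, size) leaf CUs covering the block."""
    block = [row[x:x + size] for row in pixels[y:y + size]]
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    variance = sum((p - mean) ** 2 for p in flat) / len(flat)

    if variance <= threshold or size <= min_cu:
        return [(x, y, size)]          # leaf CU
    half = size // 2
    cus = []
    for dy in (0, half):
        for dx in (0, half):
            cus.extend(split_ctu(pixels, x + dx, y + dy, half, min_cu, threshold))
    return cus

# A 64x64 CTU: flat background with a detailed 16x16 patch in one corner.
random.seed(0)
frame = [[128] * 64 for _ in range(64)]
for y in range(16):
    for x in range(16):
        frame[y][x] = random.randint(0, 255)

cus = split_ctu(frame, 0, 0, 64)
```

The flat regions end up as a few large CUs while the detailed corner is split down to small CUs, mirroring the "large blocks for smooth areas, small blocks for fine detail" behavior described above.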
Motion-compensated prediction for luma transform blocks is performed with up to quarter-pixel precision, whereas motion compensation for color components is performed with one-eighth-of-a-pixel precision. Interpolation for fractional pixel positions uses 8-tap filters for luma blocks and 4-tap filters for color.

In HEVC there is a single entropy coder for low-level data. This is the context-adaptive binary arithmetic coder (CABAC), which is similar to the one used in H.264, but modified to facilitate parallel processing. Higher-level information, such as sequence parameters, is encoded with variable-length or fixed-length encoding.

HEVC defines two in-loop filters: a deblocking filter and a sample adaptive offset (SAO) filter. The latter is applied to the output of the deblocking filter, and increases the quality of reference pictures by applying transmitted offsets to samples that fulfill certain criteria. In-loop filters improve the subjective quality of reconstructed video as well as compression efficiency. Deblocking filtering in HEVC is less complex than that of H.264, as it is constrained to an 8 × 8 block grid. This constraint, together with filtering decisions and operations that are non-overlapping between two boundaries, simplifies multi-core processing.

Parallel processing
To make the most of the increasingly widespread use of multi-core processors, plus the ever-growing number of cores used in consumer-class processors, significant attention was paid to the parallelization characteristics of video encoding and decoding when designing HEVC. As it is computationally more complex than its predecessor, maximizing parallelization has been a key factor in making HEVC an efficient real-time encoding and decoding solution. Several HEVC tools have been designed for easy parallelization. The deblocking filter can be applied to 8 × 8 pixel blocks separately, and transform-coefficient-coding contexts for several coefficient positions can be processed in parallel.
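The idea of SAO "applying transmitted offsets to samples that fulfill certain criteria" can be illustrated with its band-offset mode, where 8-bit samples are classified into 32 intensity bands and offsets are signaled for four consecutive bands. This is a simplified sketch of that classification, not decoder-conformant code:

```python
# Simplified sketch of SAO band-offset filtering: 8-bit samples fall into
# 32 bands of 8 intensity values each; transmitted offsets are added to
# samples whose band lies in a signaled run of 4 consecutive bands.

def sao_band_offset(samples, start_band, offsets):
    """Apply offsets to samples in bands start_band..start_band+3."""
    assert len(offsets) == 4
    out = []
    for s in samples:
        band = s >> 3                  # 256 values / 32 bands = 8 per band
        idx = band - start_band
        if 0 <= idx < 4:
            s = min(255, max(0, s + offsets[idx]))  # clip to 8-bit range
        out.append(s)
    return out

# Samples around mid-gray get nudged by the transmitted offsets;
# samples outside the signaled bands pass through unchanged.
filtered = sao_band_offset([100, 120, 130, 200], start_band=14, offsets=[2, -1, 3, 0])
```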
Tiles and wavefront parallel processing (WPP) are among several HEVC tools that can provide high-level parallelism.

FIGURE 3 Possible motion prediction partitions in HEVC. Asymmetric motion partitions are shown in the bottom row. Only square partitions are allowed for intra prediction.

The concept behind WPP is to re-initialize CABAC at the beginning of each line of CTUs. To facilitate CABAC adaptation to the content of the video frame, the coder is initialized once the statistics from the decoding of the second CTU in the previous row are available. Re-initialization of the coder at the start of each row makes it possible to begin decoding a row before the processing of the preceding row has been completed. Thus, as shown in the example in Figure 4, several rows can be decoded in parallel in several threads with a delay of two CTUs between two consecutive rows.

FIGURE 4 Multi-thread decoding with wavefronts. Gray areas indicate CTUs that have already been decoded.

The Tiles tool can be used for parallel encoding and decoding, and works by dividing a picture into rectangular areas (tiles) – as shown in Figure 5 – where each tile consists of an integer number of CTUs. The CTUs are processed in raster scan order within each tile, and the tiles themselves are processed in the same way. Prediction based on neighboring tiles is disabled, and so the processing of each tile is independent. In-loop filters, however, can operate over tile boundaries. And as deblocking and SAO can be parallelized, filtering can be performed independently inside each tile, and tile boundaries can be processed by in-loop filters in a final pass.

FIGURE 5 Example of the way an image can be divided into tiles.
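The two-CTU wavefront delay can be expressed as a simple dependency rule: a CTU can start once its left neighbor and the CTU above and to the right are done. The toy scheduler below (an illustration, not decoder code) groups CTUs into "waves" that could run in parallel:

```python
# Toy scheduler for wavefront parallel processing (WPP): CTU (row, col)
# can be decoded once its left neighbor and the CTU above-and-to-the-right
# are done, which yields the two-CTU delay between consecutive rows.

def wavefront_schedule(rows, cols):
    """Group CTUs into waves; all CTUs in one wave can run in parallel."""
    done = set()
    waves = []
    while len(done) < rows * cols:
        wave = []
        for r in range(rows):
            for c in range(cols):
                if (r, c) in done:
                    continue
                left_ok = c == 0 or (r, c - 1) in done
                above_ok = r == 0 or (r - 1, min(c + 1, cols - 1)) in done
                if left_ok and above_ok:
                    wave.append((r, c))
        done.update(wave)              # commit the wave only when complete
        waves.append(wave)
    return waves

waves = wavefront_schedule(rows=3, cols=5)
```

For a 3 x 5 CTU grid this yields nine waves, with row r starting two waves after row r-1 and up to three CTUs (one per row) decodable at once, exactly the staircase pattern of Figure 4.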
The HEVC standard therefore enables both high- and low-level parallelization, which can provide significant benefits for multi-thread encoding and decoding of video such as 4K and 8K, which have higher resolutions than HD.

Performance and complexity
The improved coding efficiency of HEVC does however come with a price tag: increased computational complexity. Compared with its predecessor, HEVC is 50-100 percent more complex for decoding and up to 400 percent more complex when it comes to encoding. While these comparisons are based on preliminary tests, they do give an indication of the new codec's computational complexity. Real-time implementations of HEVC demonstrate that decoding of full HD (1080p) at 50 or 60 fps is possible on fast desktop and laptop computers, running on a single core. Performance increases with multiple core implementations (hardware acceleration), so that a modern smartphone is capable of 1080p decoding at 25 or 30 fps [8].

Applications
The new standard is a general one suitable for the compression of all kinds of video. The focus for the first version is consumer applications and for this, three profiles have been defined: Main, Main 10 and Main Still Picture. Main is an all-purpose profile with a depth of 8 bits per pixel, supporting 4:2:0 – the most common uncompressed video format used by consumer devices from mobile phones to HDTVs. Main 10 extends the bit depth to 10 bits per pixel, which is well suited to consumer applications, such as UHDTV, where very high quality is critical. The increased bit depth can compress wide dynamic range video without creating banding artifacts, which sometimes occur with 8 bits. The third profile, Main Still Picture, used for still images, is a subset of Main and carries a single still picture at a depth of 8 bits per pixel. The initial deployments of HEVC, released in 2013, will be for mobile and OTT applications.
Software implementations capable of decoding HEVC without hardware acceleration can easily be downloaded to smartphones, tablets and PCs, enabling mobile TV, streaming and download services on existing devices. To this end, in August 2012, Ericsson announced SVP 5500 [9], the world's first HEVC real-time video encoder for live-TV delivery to mobile devices. However, as it is better to perform encoding on hardware and as HEVC is computationally more demanding than previous standards, it may be some time before video telephony based on this standard enters mobile platforms, whereas encoding on PCs is already feasible. Set-top boxes with new decoders will become available soon, enabling content broadcast via satellite, cable or terrestrially to take advantage of HEVC. The new standard plays a key role in the provision of UHDTV, and as prices drop and displays become affordable, the number of services utilizing such high resolutions is expected to rise within a few years. Flat-panel displays for HDTV have been on the market for almost a decade, so this may be a good time for consumers to start upgrading to UHDTV.

What's coming
The finalized version of HEVC targets most consumer devices and services. However, for more specialized applications, such as 3D, content production or heterogeneous devices and networks, some additions to HEVC may prove useful. With this in mind, MPEG and ITU are working together on a number of ideas, including support for stereo and multi-view (glasses-free) 3D video, an extension that encodes multiple views by rearranging picture buffers and reference picture lists. A first drop is expected in January 2014, with a more advanced version that will support joint encoding of texture and depth information coming in the early part of 2015. Scalability is a key attribute of any codec, as it enables trimming of video streams to suit different network conditions and receiver capabilities; scalable extensions to HEVC are planned for July 2014.
Range extensions, which support several color formats as well as increased bit depths, are another area currently under development. In addition to these extensions, further improvements are expected to take place inside the current HEVC framework, such as more efficient encoding and decoding (both software and hardware). It is likely that the full potential of HEVC will take some time to unfold, as encoding algorithms develop and the challenge posed by the optimization of encoders and decoders in multi-core architectures is overcome.

In short, HEVC or H.265 is twice as efficient as its 10-year-old predecessor, H.264. The improved efficiency that this codec brings will help to ease traffic load in networks and enable the creation of new and advanced video-based services. The codec supports parallel processing and even though it is more complex from a decoding perspective, tests have shown that it is suitable for adoption in mobile services. Compression of mobile video streams and OTT content are the most likely initial candidates for application of the codec, and within a few years it will undoubtedly bring UHDTV into our homes.

Per Fröjdh is director of media standardization at Ericsson and former head of visual technology at Ericsson Research. He holds an M.Sc. in engineering physics and a Ph.D. in theoretical physics from Chalmers University in Gothenburg, Sweden. Part of his Ph.D. work was carried out on scholarship at Imperial College London, UK. Following postdoctoral appointments in the US and Denmark, he held the position of professor of theoretical physics at Stockholm University, Sweden. He joined Ericsson in 2000 as manager of video research and standardization. He has contributed to MPEG and ITU work on H.264 and HEVC, served on the advisory committee for the W3C, and has been the editor of 15 standards on streaming, file formats, and multimedia telephony in MPEG, ITU, 3GPP and IETF.

Andrey Norkin
is a senior researcher at Ericsson Research, Kista, Sweden. He holds an M.Sc. in computer engineering from Ural State Technical University, Yekaterinburg, Russia and a Ph.D. in signal processing from the Tampere University of Technology, in Finland. He has worked at Ericsson Research since 2008, contributing to HEVC standardization through technical proposals and activities, including the coordination of a core experiment on deblocking filtering, chairing break-out groups and subjective quality tests for the Joint Collaborative Team on Video Coding (JCT-VC). He has also been active in the 3D video standardization for JCT-3V. He is currently the project manager of the 3D VISION project at Ericsson Research, working on 3D video systems and algorithms, as well as on parts of the standardization.

Rickard Sjöberg is a senior specialist in video coding in the Multimedia Technologies department at Ericsson Research, Kista, Sweden. With an M.Sc. in computer science from the KTH Royal Institute of Technology, Stockholm, Sweden, he has been working with Ericsson since 1997 and has worked in various areas related to video coding, in both research and product development. In parallel, he has been an active contributor in the video-coding standardization community, with more than 100 proposals relating to the H.264 and HEVC video-coding standards. He is currently working as the technical leader of Ericsson's 2D video-coding research, including HEVC and its scalable extensions. His research interests include video compression and real-time multimedia processing and coding.

References
1. ITU, January 2013, press release, New video codec to ease pressure on global networks, available at: http://www.itu.int/net/pressoffice/press_releases/2013/01.aspx#.UWKhxBnLfGc
2. Ericsson, November 2012, Mobility Report, available at: http://www.ericsson.com/ericsson-mobility-report
3. Fierce Broadband Wireless, 2013, Ericsson CEO: 90% of network traffic will be video, available at: http://www.fiercebroadbandwireless.com/story/ericsson-ceo-90-network-traffic-will-be-video/2013-02-25
4. ITU-T Recommendation H.265 | ISO/IEC 23008-2: High Efficiency Video Coding, available at: http://www.itu.int/ITU-T/recommendations/rec.aspx?rec=11885
5. IEEE, December 2012, Overview of the High Efficiency Video Coding (HEVC) Standard, available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6316136
6. IEEE, December 2012, HEVC Deblocking Filter, available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6324414
7. IEEE, December 2012, Overview of HEVC High-Level Syntax and Reference Picture Management, available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6324417
8. IEEE, DOCOMO Innovations, December 2012, HEVC Complexity and Implementation Analysis, available at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6317152
9. Ericsson, 2012, Ericsson announces world's first HEVC encoder for live TV delivery to mobile devices, available at: http://www.ericsson.com/news/120822_ericsson_announces_worlds_first_hevc_encoder_for_live_tv_delivery_to_mobile_devices_244159018_c

The merger of two giants

Next generation OSS/BSS architecture

Breaking down the silos of operations and business support systems (OSS/BSS) to form an integrated, cross-functional platform that can take a product from conception to execution in a simplified and consistent manner will cut time to market from months and years to weeks.

JAN FRIMAN, LARS ANGELIN, EDVARD DRAKE AND MUNISH AGARWAL

The systems that keep networks running and profitable are in the direct line of fire when it comes to implementing change.
So, as the world moves toward global connectivity, as smartphones cause a shift in user behavior, and as subscribers demand more personalized products and even greater control, the functions of OSS/BSS – such as planning, configuration, fulfillment, charging, billing and analytics – need to be integrated. A consolidated architecture is a typical computer-science approach for bringing together the functions of different systems. By adopting such a consolidated architecture for OSS/BSS, operators will be able to maintain control over costs while implementing network changes effectively.

The challenges of evolution
By exposing the functionality and information held in their networks, operators have the opportunity to create innovative and ever more complex value chains that include developers, OTT players and subscribers. In these new value chains, the flow of information and control shifts from unidirectional to multidirectional, and participants can be consumers of services and information as well as being producers of them. New business models for network evolution are based on providing anything as a service (XaaS) – including IaaS, PaaS, SaaS and NaaS – and when using this model, it is not just value chains that become more complex; the life cycles of products and services also become more diversified. How then, as business models advance, should OSS/BSS requirements evolve to cater for factors such as big data, personalization and virtualization? The simple answer is through configurability. To create a high level of flexibility, the evolution of OSS/BSS needs to be configuration driven, with an architecture based on components.

The impact of big data
Information is a critical resource. Good information is a key asset – one that can be traded, and one that is critical for optimizing operations.
As volumes rise, the rate of creation increases, and a wider variety of data that is both structured and unstructured floods into OSS/BSS, access to storage needs to be effortless. In this way, tasks and optimization processes can maximize the use of existing infrastructure and keep data duplication to a minimum. Data management needs to be secure and controllable, ensuring that the systems accessing information do not jeopardize data integrity and subscribers can feel confident that their information is protected.

BOX A Terms and abbreviations
BO: business object
BPMN: Business Process Model and Notation
BSS: business support systems
CEP: complex event processing
CLI: command-line interface
(E)SB: (enterprise) service bus
ETL: extract, transform, load
eTOM: enhanced Telecom Operations Map
GUI: graphical user interface
IA: information architecture
IaaS: infrastructure as a service
IM: information model
JEE: Java Enterprise Edition
LC: life cycle
LDAP: Lightweight Directory Access Protocol
M2M: machine-to-machine
NaaS: network as a service
NFV: network functions virtualization
OLAP: online analytical processing
OLTP: online transaction processing
OS: operating system
OSGi: OSGi Alliance (formerly Open Services Gateway Initiative)
OSS: operations support systems
OTT: over-the-top
PaaS: platform as a service
PO: purchase order
RAM: random access memory
SaaS: software as a service
SBVR: Semantics of Business Vocabulary and Business Rules
SDN: software-defined networking
SID: shared information/data model
SLA: service level agreement
SQL: Structured Query Language
TCO: total cost of ownership
TMF: TeleManagement Forum
UI: user interface
VM: virtual machine
XaaS: anything as a service
FIGURE 1 Separating planes in SDN architecture. ND = network device.

The impact of M2M
As the number of connected devices gets closer to 50 billion, the need for automated and autonomous behavior in processes such as configuration and provisioning is becoming more significant. Being able to remotely configure, provision and update millions of devices without impacting the network supports scaling while maintaining control over opex.

The impact of subscriber needs
Personalized services and superior user experience are key capabilities for business success and building loyalty. Subscribers want to be in control, and feel that their operator provides them with reasonably priced services that meet their individual needs, over a network that delivers near real-time response times. The ability to create and test services in a flexible way with short time to market will help operators meet changing user demands.

The impact of virtualization
As a result of virtualization, operators, partners and even subscribers (in the future) can create instances of their services and networks on demand. So, as networks continue to move into the cloud, and SDN and NFV technologies become more widespread, the number of entities managed by OSS/BSS will rise by several orders of magnitude. So, to help operators remain competitive, next generation OSS/BSS need to fully address the challenges created by certain aspects of network evolution, including virtualization, big data, M2M and personalization.

Making good use of technology
One way to address these challenges is to make good use of advancing technology, particularly when it comes to OSS/BSS implementation architecture. And it's not just about using technology development in a smart way; it's also about understanding the potential of a given technology. So, when a new concept results in a significant breakthrough, the services and products that can be created as a result should be readily definable. Capitalizing on increased flexibility and agility made possible by new technologies (such as virtualization and SDN) needs to be coordinated through a management function, which puts new demands on OSS/BSS architecture.

The evolution of virtualization
The demands created by increasing virtualization of data centers, not just in terms of computational capacity, but also in terms of storage and networking capabilities, are:
virtualization of the OSS/BSS, and running these systems in the cloud;
management of cloud-based OSS applications such as service assurance; and
management of cloud-based BSS applications such as IaaS and PaaS.

FIGURE 2 Abstraction of a typical OSS/BSS deployment.

FIGURE 3 Extracting hard-coded business logic.

It may, however, not always be beneficial to run certain network elements on generic IaaS resources.
For example, information stored in a database may be better provided in the form of a service to subscribers in an IaaS environment, rather than as virtually deployed tenants. The general rule is that anything provided as a service, which is implemented by a piece of software running in a generic IaaS environment, has a reduced level of control and efficiency. Due to the extra layers created by running software in a generic environment, the drawbacks of this approach must be weighed carefully against the benefits of increased flexibility and better (shared) use of physical resources. For next generation OSS/BSS, the focus should be placed on implementing flexibility in an efficient way together with automation and orchestration of resource allocation.

The hypervisor approach to virtualization, where virtual machines (VMs) share the resources of a single hardware host, is evolving so network infrastructure is becoming more efficient. For example, the failover capabilities of the hypervisor can place agents on the host in a similar way to traditional failover clusters, and can monitor not only VM health, but application and OS health as well. Such features are prerequisites of an efficient virtual environment. However, application architecture may have to take these features into account, as in some cases they cause the responsibility to perform certain tasks (such as data recovery) to shift between the application and the infrastructure.
Service provider SDN
SDN separates the data plane (the forwarding plane or infrastructure layer) from the control plane, which in turn is separated from the business application plane. As shown in Figure 1, various business applications communicate with SDN controllers, providing a virtualized – possibly hierarchical – view of the underlying data plane. Generally speaking, the management requirements for SDN and non-SDN architectures are similar, if not the same. For example, both require inventory, ordering and fault management. However, SDN presents a new set of technical issues related to resource management, which brings into question the current partitioning and structure of OSS/BSS architectures. Specifically, SDN can result in horizontally abstracted virtualized software layers that have limited vertical vision through the hierarchy from the business applications to network devices. So, at the same time as the abstraction offered by SDN makes it easier to expose the capabilities of the network, it creates additional challenges for the OSS/BSS architecture. OSS always needs to have the capability to map the virtual view of the network to the underlying implementation.

Sometimes, the SDN controller handles domain-specific OSS/BSS functionality by hiding parts of the complexity of the control and data planes; and as a result, only a subset of information will be propagated to the business application plane. Sometimes, the underlying layers are not even visible – such as when a third party owns them. In these cases, SLAs can be used to map the virtual view to the underlying implementation. The evolution to SDN architecture and virtualization causes the number of entities managed by OSS/BSS components to rise, which in turn impacts the way they are managed. For example, a more extensive history of each entity is required, because the semi-static environment used to locate a device using its IP address no longer exists.
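The requirement that OSS can always map the virtual view to the underlying implementation can be illustrated with a small sketch. All names and data structures below are invented for illustration; this is not a real SDN controller API:

```python
# Sketch: an OSS keeping a mapping from the virtual view exposed by SDN
# controllers down to physical network devices, so a device fault can be
# traced back up to the virtual links it affects, and a virtual link can
# be resolved down to its physical path. Names are illustrative.

virtual_links = {
    # virtual link id -> ordered list of physical devices implementing it
    "vlink-A": ["nd-1", "nd-4", "nd-7"],
    "vlink-B": ["nd-2", "nd-4", "nd-9"],
}

def affected_virtual_links(device):
    """Map a physical device failure up to the virtual view."""
    return sorted(v for v, path in virtual_links.items() if device in path)

def physical_path(vlink):
    """Map a virtual link down to its physical implementation."""
    return virtual_links[vlink]

impacted = affected_virtual_links("nd-4")   # both links traverse nd-4
```

When the underlying layer is owned by a third party, the `virtual_links` table would hold only SLA-level information rather than device paths, which is exactly the limitation described above.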
With SDN, network topology becomes totally dynamic. History data is essential for analysis and management of the network, as this information puts network events into context.

Hybrid flexibility
Modern database design is evolving toward the use of hybrid architectures. This approach allows a wider range of solutions and applications to be created with a single consistent implementation and one logical data store. Hybrid disk/in-memory databases use in-memory technologies to achieve the performance and low latency levels of an in-memory solution, while still using disk for data persistency. The hybrid approach allows more data to be stored on disk than can fit into memory; as such, the disk is not a mirror of the in-memory content. This approach is similar to caching disk content, while providing the performance that comes from a true in-memory design – which cannot be achieved by caching disk content alone.

Hybrid SQL/NoSQL (sometimes referred to as NewSQL) solutions are SQL-capable databases that are built using a NoSQL implementation to attain the scalability and distribution that these architectures afford, while still providing support for SQL. However, such hybrid solutions can be limited by their lack of support for partition-wise joins and subsequent lack of support for ad hoc queries – although there are exceptions to this.

Hybrid OLTP/OLAP solutions aim to merge the typical characteristics of transactional OLTP workloads and OLAP-based workloads (related to analytics) into a single implementation. To build such a structure typically requires that both database architectures be considered from the outset. Even if such solutions exist, it is difficult to build this type of hybrid from the starting point of an OLTP- or OLAP-optimized architecture.

Big data
Modern data centers are designed so that increasingly large amounts of memory with low levels of latency are being placed ever closer to computational resources.
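The hybrid disk/in-memory idea, where disk holds the full data set and memory holds a bounded hot subset, can be sketched as a toy key-value store. This is an illustration of the principle only, not how any particular database product implements it:

```python
# Toy hybrid disk/in-memory key-value store: every write is persisted in
# the "disk" store (the full data set), while a bounded in-memory layer
# serves hot reads with low latency. Unlike a plain cache, the disk is
# authoritative and larger than memory, not a mirror of it.

from collections import OrderedDict

class HybridStore:
    def __init__(self, memory_capacity):
        self.disk = {}                      # stand-in for persistent storage
        self.memory = OrderedDict()         # hot subset, LRU-evicted
        self.capacity = memory_capacity

    def put(self, key, value):
        self.disk[key] = value              # durable copy always written
        self._touch(key, value)

    def get(self, key):
        if key in self.memory:              # in-memory hit: low latency
            self.memory.move_to_end(key)
            return self.memory[key]
        value = self.disk[key]              # miss: fetch from disk
        self._touch(key, value)
        return value

    def _touch(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)
        while len(self.memory) > self.capacity:
            self.memory.popitem(last=False)  # evict least recently used

store = HybridStore(memory_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
# "a" has been evicted from memory but is still readable from disk.
```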
Placing memory close to computation greatly increases the level of real-time processing that can be achieved, as well as the volumes of data that can be processed. Achieving these processing levels is not simply a matter of the speed at which operations can be carried out; it is also about creating new capabilities. The developments being made in big-data processing have a significant impact on how next generation OSS/BSS architecture can be designed. Fast data – the velocity attribute of big data – is the ability to make real-time decisions from large amounts of data (stored or not) with low latency and fast processing capabilities. Fast data supports the creation of filtering and correlation policies that are based on – and can also be adjusted to – near real-time input. Another big-data concept combines the in-memory/disk hybrid with the OLTP/OLAP (row/column) hybrid to achieve a single solution that can address both OLTP and demanding analytics workloads. Coupled with the huge amounts of memory that modern servers can provide, this approach removes the need for a separate analytics database.

The business logic
When OSS/BSS are deployed, they bring business and technical stakeholders together and allow them to focus on the design and implementation of their unique business. The functionality provided by OSS/BSS must include user-friendly tools to implement and develop business logic. The deeper and more flexible this support is, the more business opportunities can be explored, and the more profitable an enterprise can be. Figure 2 shows an abstraction of a typical OSS/BSS deployment. Current implementations tend to be multivendor, with multiple systems performing similar tasks. Organic growth has led to a lack of coordination; as a result, significant time and effort is spent on integration, time to market is long, and TCO tends to be high. Quite often there is a significant gap between daily business and the systems used to support it.
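The fast-data notion mentioned above – filtering policies that are adjusted from near real-time input – could be sketched along these lines. The threshold values and the smoothing factor are arbitrary illustrations, not a description of any product.

```python
# Sketch of a fast-data filtering policy that adapts to near real-time input:
# the acceptance threshold tracks recent traffic via an exponential moving
# average, so the policy adjusts itself as conditions change.

class AdaptiveFilter:
    """Flags events whose value exceeds a threshold that tracks recent input."""

    def __init__(self, threshold: float, alpha: float = 0.1):
        self.threshold = threshold
        self.alpha = alpha  # how quickly the policy adapts to new input

    def accept(self, value: float) -> bool:
        flagged = value > self.threshold
        # Adjust the policy from the live stream (exponential moving average).
        self.threshold = (1 - self.alpha) * self.threshold + self.alpha * value
        return flagged

f = AdaptiveFilter(threshold=100.0)
f.accept(150.0)   # above the initial policy threshold -> flagged
f.accept(90.0)    # below the (now slightly raised) threshold -> not flagged
```

In a real deployment the "value" would be a correlated event attribute and the adjustment rule a configurable policy; the sketch only shows the feedback loop between stream and policy.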
Transforming a complex deployment like the one shown in Figure 2 into a more business-agile system requires some evolution. As consolidation is fairly straightforward, it tends to be the first step. However, to succeed, it requires systems to be modular, to be able to share data, to use data from other sources, and to have a specific role. Another approach is to reduce the silo nature of OSS/BSS and instead apply business logic in a configurable manner. Either way, the best approach to evolution lies in the design of the system architecture.

FIGURE 4 An order handling process. The notation shown here is conceptual; processes are modeled in BPMN1 2.0 and rules in SBVR2.

Architecture proposal
As visualized in Figure 3, the Ericsson approach to next generation OSS/BSS is to extract hard-coded business logic from the underlying systems, and to structure functionality according to design and life-cycle flow. To achieve this and build an abstract and virtual view of the business successfully, a common, shared and semantically rich information model (IM) and a defined set of relationships are essential.

Building blocks
When the hard-coded business logic is extracted, the following building blocks are created: actors and roles – such as companies, functions, individuals, customers, suppliers and service providers; services and functions – such as sales and contracting; processes – such as TMF eTOM; business objects – such as products, orders, contracts and accounts; and rules – such as pricing, prioritization and product/service termination. These building blocks can then be used to form processes, as Figure 4 illustrates. To achieve the full degree of flexibility, building blocks covering the complete life cycle (from definition to termination) are needed.

Conception to execution
Figure 5 illustrates the design chain functionality – the process relied upon to take a new business idea from conception to execution. Business logic is defined, designed and implemented in what Ericsson refers to as the enterprise business studio. The studio comprises a set of integrated workbenches that have access to all building blocks and information elements, and provide feedback to the process owner as business logic is implemented.

FIGURE 5 The information model is the fundamental component of Ericsson's approach to next generation OSS/BSS: business logic from conception to execution – strategy, design, implementation, deployment and operation.

FIGURE 6 Information models within an enterprise.
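As a toy illustration of composing building blocks into an order-handling flow like the one in Figure 4: the step names loosely follow the figure, but the tiny framework and the single rule are invented for this sketch, not Ericsson's implementation.

```python
# Sketch: reusable building blocks (a rule, process tasks) composed into an
# order-handling process. Step names echo Figure 4; everything else is a toy.

def check_order(order):
    """Rule building block: a customer order must reference a product."""
    return bool(order.get("product"))

STEPS = ["create", "check", "activate_services",
         "activate_resources", "activate_billing", "archive"]

def handle_purchase_order(order):
    """Process building blocks assembled into one flow; returns the trace."""
    trace = []
    for step in STEPS:
        if step == "check" and not check_order(order):
            trace.append("rejected")
            break
        trace.append(step)
    return trace
```

The value of the building-block approach is that `check_order`, the activation tasks and the process skeleton can each be redefined or recombined without touching the others.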
In the Ericsson model, business logic (such as verification, commissioning and decommissioning, supervision, migration and optimization) is transferred to a management function for implementation and application. The management function is also responsible for transferring business logic to the proper execution engines. For example, pricing rules are executed by the rating engine, the contracting engine and the sales portal engine. Given the potentially massive spread of pricing rules, getting them right at this stage of development is key.

Information architecture
Generally speaking, an enterprise defines the information it needs to operate, and the OSS/BSS manage this information, based on a range of business models. The spread of information across any given enterprise is extensive, and can span many different functional areas – from marketing, ordering, strategy and HR, to production and finance. Information models used in OSS/BSS include: the enterprise vocabulary and concepts at the business level – for example, an enterprise might refer to a voice product using its marketing name, such as Family and Friends; the canonical concepts at the application level – which might refer to the Family and Friends product as family-group; and the multivendor concepts at the application level – where the concept of Family and Friends has different names at the business level and the canonical level.

As operators continue to differentiate and offer ever more complex products and services, the requirements on information change. Information is no longer just mission critical; it is also enterprise critical, and it changes constantly as business needs evolve. The shift to next generation OSS/BSS changes the way enterprise-critical information needs to be handled, creating a number of system requirements: information and applications need to be separated; the entire life cycle – from definition to termination – needs to be modeled; information needs to be shared among all enterprise users; master data needs to be determined, and even multiple masters need to be supported to align with different enterprise functions; and information needs to be characterized in terms of size, throughput, quality, reliability and redundancy across the board, for all functions and applications – one instance that can be used by all.

As Figure 6 shows, information held in an enterprise can be generated by many sources – both internal and external – and can be classified according to its properties and type. For example, information can be static, structured, event-driven or transactional. The best way to meet the new requirements on information – driven by the need to differentiate – is to design OSS/BSS in a way that is independent of functional applications, with a centrally managed information architecture (IA) that has a common and shared information model.

FIGURE 7 Information architecture implementation.
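The three naming levels just described can be made concrete with a small lookup sketch, reusing the article's Family and Friends example. The vendor names and mapping tables are invented for illustration.

```python
# Sketch of the three naming levels: business vocabulary, canonical
# application-level concept, and per-vendor names. Tables are illustrative.

BUSINESS_TO_CANONICAL = {"Family and Friends": "family-group"}

CANONICAL_TO_VENDOR = {
    "family-group": {"vendorA": "FAM_GRP", "vendorB": "grp.family"},
}

def vendor_name(business_name: str, vendor: str) -> str:
    """Resolve a marketing name down through the canonical level to one
    vendor's application-level name."""
    canonical = BUSINESS_TO_CANONICAL[business_name]
    return CANONICAL_TO_VENDOR[canonical][vendor]
```

Keeping these mappings in one shared information model, rather than hard-coded in each application, is exactly what the centrally managed IA argued for above provides.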
The key characteristics of this information architecture are: integrated information across functions; information offered as a service – facilitating high-level abstraction and avoiding the need to understand low-level data constructions; and information published in catalogs – formalizing information-as-a-service and enabling use by process owners. Due to the complexity and widespread nature of information models, a modularized information architecture is needed – one that can be configured to meet the varying needs of enterprises, and that can be used in multivendor scenarios with varying information life cycles.

Modular layers
A modular IA can be implemented by categorizing information into a matrix. The first step is to categorize information into (horizontal) layers, where each layer is populated by a number of entities. Subsequently, these entities can be combined (vertically) into a solution.

Storage
As Figure 7 shows, the data vault is placed at the bottom of the architecture hierarchy. The most efficient type of storage can be chosen from a number of components, including disk, RAM or virtual resources. A level up from the data layer is the engine layer, which comprises a set of components that provide access to the information storage. SQL, LDAP, NoSQL and HBase are examples of technologies used in this layer, and for each group of data, the technology that best matches its access requirements is selected. The grid layer, above the engine layer, is where the information model becomes accessible. Here, the IM is divided into a set of responsibility areas, such as an enterprise catalog with product, service and resource specifications, or inventory data. These areas deploy components that expose information as a service and protect the underlying data, ensuring that it is consistent and available for all authorized applications. The applications that consume and produce information sit on top of the architecture.
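A highly simplified sketch of the three layers described above – data vault, engine and grid – with all class names invented for illustration; a real deployment would slot SQL, LDAP or HBase engines into the middle layer.

```python
# Layered sketch of the modular IA: data vault (storage) at the bottom,
# an engine exposing access, and a grid-layer service that publishes
# information as a service while hiding the storage below it.

class DataVault:
    """Storage layer: could be disk, RAM or virtual resources."""
    def __init__(self):
        self._rows = {}
    def read(self, key):
        return self._rows.get(key)
    def write(self, key, row):
        self._rows[key] = row

class KeyValueEngine:
    """Engine layer: the access technology chosen for this group of data."""
    def __init__(self, vault: DataVault):
        self.vault = vault
    def get(self, key):
        return self.vault.read(key)
    def put(self, key, row):
        self.vault.write(key, row)

class CatalogService:
    """Grid layer: exposes product specifications as a service and protects
    the underlying data from direct access by applications."""
    def __init__(self, engine: KeyValueEngine):
        self._engine = engine
    def add_product_spec(self, product_id, spec):
        self._engine.put(product_id, spec)
    def product_spec(self, product_id):
        spec = self._engine.get(product_id)
        if spec is None:
            raise KeyError(product_id)
        return spec

catalog = CatalogService(KeyValueEngine(DataVault()))
catalog.add_product_spec("family-group", {"name": "Family and Friends"})
```

The design choice the layering illustrates: applications talk only to the grid service, so the engine or vault technology can be swapped per responsibility area without touching consumers.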
These applications set the requirements on the grid layer, so that information is made available with the right characteristics and accessibility for each application's needs. Applications can of course use local caching, but the data vault allows information to persist and be made available throughout the system. The right-hand side of Figure 7 shows the set of functions that support the flow of information in and out of the model. Typically, this part of the OSS/BSS interfaces to data owned by legacy or other systems, and allows information to be accessed from the outside. Grid components are responsible for interacting with external data sources and for exposing access to information residing in the model. Typical functions include transformation, protocol adaptation, handling of services or data streams, and identity mapping; these functions are also used by applications in the application layer. Management functions – such as definition, registration, discovery, usage, archiving and decommissioning – are shown on the left of the information architecture. As information is no longer hard-coded in each application, but shared among applications at all stages of the life cycle, the management function is vital for ensuring data consistency.

Deployment stack
To serve next generation OSS/BSS, a state-of-the-art deployment stack is required. A functional view of such a stack is illustrated in Figure 8. By making use of the common services provided by a deployment stack that supports scale-out architecture and meets the needs of big data, application development should become more efficient. The stack should integrate easily with existing enterprise systems – a capability that becomes more significant as OSS/BSS are developed and used in multiple scenarios around the world and deployed in an IaaS manner.
The deployment stack should provide a consistent user experience for all processes – from business configuration and system provisioning, to operations, administration and management. It should support applications deployed on a variety of infrastructures, including cloud, virtualized and bare-metal hardware. The deployment stack should also provide a means of efficient integration among applications, and enable service exposure in a uniform way.

FIGURE 8 A functional view of the deployment stack for next generation OSS/BSS.

Hardware
Using existing hardware infrastructure for OSS/BSS deployment is the best option for operators, as it consolidates the use of hardware and supports rapid reconfigurability and scalability. Linux is an attractive OS, as it is community driven and supports the decoupling of software and hardware elements.

Middleware
A common and well-composed middleware provides: a consistent environment, effective management, ease of integration, greater availability, better scaling, load balancing, simplified installation, upgrade and deployment, and improved backup capabilities. Such an environment can be provided by either an OSGi container or a JEE application server pre-integrated with availability management, software management and backup/restore capabilities.
Operations and management
This function provides common management services for configuration, logging, and fault and performance management.

Application services
This layer provides common functions for OSS/BSS applications, such as: in-service performance – which monitors and reports uptime; licensing – which enables provisioning, monitoring, control and reporting of licenses, and supports the changing license requirements created by business models such as pay-as-you-grow; user management – which provides authentication and authorization; and coordination service – which provides inter-application coordination in a distributed environment.

Proposed architecture
The architecture of next generation OSS/BSS is illustrated in Figure 9. At the business level, the proposal supports service agility in all processes from conception to retirement, and in all relevant phases – including planning, deployment, customer on-boarding and assurance. The proposed architecture comprises a set of application functions, which implement tasks such as enterprise catalog, charging, billing, order management, and experience and assurance.
Application functions are implemented through a set of components that can be configured and assembled to form a complete solution, one that can also be integrated with existing systems. A set of common application functions, including correlation and event handling, supports the specific OSS/BSS application functions. The information architecture separates out the information model so that information is matched to the application functions, supporting modularity and enabling integration into the overall information model. The information model is based on the Shared Information/Data model (SID), with extensions embracing further standard industry information models. To support effective implementation, all components should be prepared for cloud deployment.

FIGURE 9 Next generation OSS/BSS architecture.

Presentation layer
The presentation layer provides support for GUI, CLI and M2M interfaces for OSS/BSS applications. A common GUI framework, together with single sign-on for the entire stack, is key to providing a consistent user experience. By exposing interfaces to other applications in a uniform way, the amount of application-to-application integration required is reduced significantly.

References
1. Business Process Modeling Notation, available at: http://www.bpmn.org
2. Semantics of Business Vocabulary and Rules, available at: http://www.omg.org/spec/SBVR

Jan Friman is an expert in the area of user and service management at Business Unit Support Systems (BUSS). He has held various positions in the area of OSS/BSS at Ericsson for 16 years, including R&D, system management and strategic product management. He is chief architect for information architecture at BUSS and holds an M.Sc. in computer science from Linköping Institute of Technology, Sweden.

Munish Agarwal is a senior specialist in multimedia architecture and chief implementation architect for OSS/BSS. He has been at Ericsson since 2004, working in the OSS/BSS area. He is currently driving the BOAT implementation architecture and is the product owner for the Next Generation Execution Environment. He holds a B.Tech. in material science from the Indian Institute of Technology, Kharagpur, India.

Edvard Drake is an expert in the area of hardware and software platform technologies, and is chief architect for implementation architecture at BUSS. He has 20 years' experience at Ericsson, ranging from AXE-10 exchanges to today's commercial and open source technology innovation. He holds a B.Sc. in software engineering from Umeå University, Sweden.

Lars Angelin is an expert in the technology area of multimedia management at BUSS. Lars has more than 28 years of work experience in the areas of concept development, architecture and strategies within the telco and education industries. He joined Ericsson in 1996 as a research engineer, and in 2003 he moved to a position as concept developer for telco-near applications, initiating and driving activities, most of them related to M2M or the OSS/BSS area. He holds an M.Sc. in engineering physics and a Tech. Licentiate in teletraffic theory from Lund Institute of Technology, Sweden.

Re:view – Nine decades of innovation
Automatic exchanges to smart networks
One of the most significant technical evolutions in the history of telecoms is that of the mobile phone. In 1990, Ericsson received its first order to supply a GSM network, which brought the company's switching and radio expertise together. At the same time, the internet began its worldwide expansion, as did the liberalization of the telecom market. In 1993, Sweden became the first European country to deregulate its telecom market.
Deregulation – already a fact in the US – has led to increased competition and a greater need for innovation to deliver customer benefits. In 1994, Ericsson began the research that led to the Bluetooth technology standard, which allows devices to exchange information wirelessly. In the spirit of open standards, control of the technology was handed over to the Bluetooth Special Interest Group (SIG) – a non-profit organization. That same year, Ericsson Review carried an article about intelligent network architecture for the Japanese 2G digital cellular standard, Personal Digital Cellular (PDC). This enhanced network architecture was based on the principle of strictly separating network-oriented services from mobile subscriber-specific services, allowing for the rapid development of new services in response to demand. At the turn of the 21st century, Ericsson addressed the challenges of VoIP. The eventual surpassing of voice traffic by data – something that did not become reality until the end of 2009 – was already clear, and the primary challenge was how to port voice services to the new packet-based platform while maintaining the same level of quality. The first decade of the new century was dominated by LTE and the desire to evolve network architecture to support technology evolution, improve spectral efficiency, be more flexible and, ultimately, support new services and a superior user experience. In February 2007, Ericsson demonstrated LTE with bit rates of up to 144Mbps, and theoretical peak rates of 1.2Gbps were demonstrated in 2010. In 2009, Ericsson delivered the first commercial LTE network. And last but not least, the dramatic impact on networks created by smartphones, tablets and other mobile devices, the increased demand for mobile broadband, and soaring data traffic have ramped up the opportunities in telecoms. Ericsson today holds a unique position in that we can provide network equipment, solutions, services and management tools that encompass the full spectrum of needs.
Looking forward, one of our visions for the Networked Society is that everything that can benefit from a connection will be connected. Connectivity is the starting point for new ways of innovating, collaborating and socializing.

Ultra modernism
In 1984, an entire issue of Ericsson Review was dedicated to fiber, providing a report on fiber-optic activities at Ericsson. The articles in this issue covered everything from cable design, optical-fiber transmission and splicing equipment, to the use of semiconductor materials more advanced than silicon. One of the articles featured the application of fiber optics in offshore systems – an environment that posed tougher requirements on fiber design in terms of safety, weight and transmission distances. [Ericsson Review, F issue, 1984.]

The age of the internet
In 1998, an entire issue of Ericsson Review was dedicated to the internet and its impact on different aspects of the telecoms industry. Scalability was the name of the game, as were solutions that optimized the use of network resources. The focus of the articles in this issue was access because, at the time, most of the costs were hidden in this part of the network. In the wake of liberalization, the access network was deemed to be the battleground for competing offerings in terms of bandwidth and service levels. [Ericsson Review, special internet issue, 1998.]

A new millennium
In 2007, Ericsson Review carried an article about LTE/SAE (Long Term Evolution/System Architecture Evolution). The simplified and optimized architecture uses a minimum number of nodes in the user plane. In addition, new features were introduced to simplify operation and maintenance. [Ericsson Review, issue 3, 2007.]
Carrier Wi-Fi: the next generation
By controlling whether or not a device should switch to and from Wi-Fi, and when it should switch, cellular operator networks will be able to provide a harmonized mobile broadband experience and optimize resource utilization in heterogeneous networks.

Ruth Guerra

Next generation carrier Wi-Fi will overcome existing coordination issues in multi-RAT environments to become an integrated component of mobile broadband offerings. Guaranteeing the best mobile broadband experience, and ensuring that resources in a heterogeneous network that includes Wi-Fi are utilized in an optimal way, is only possible if subscribers are connected to Wi-Fi when this is the best option for them and for the entire network. While this may sound obvious, the way subscribers currently switch to and from Wi-Fi is not optimal. Today, the decision to connect to Wi-Fi is taken by the device according to one basic principle: if Wi-Fi is available, then use it for data traffic. However, this approach is short-sighted because it does not take into consideration real-time information about all the available resources. In a heterogeneous network, resources can include 2G, 3G, LTE, macro and small cells, different carriers, different protocols (802.11g, 802.11n) and different channels. In addition, devices do not take into account the activity of other UEs, and so each decision to switch to or from a Wi-Fi network is made independently, without any consideration for load balancing. In short, current practice is inherently inefficient.

Background
The smartphone revolution and the near-ubiquitous support for Wi-Fi in modern devices have created new business opportunities and new challenges for telcos. Operators have so far deployed over 2 million access points (APs) in public spaces, and there are currently about 8 million hotspots worldwide. But why is there a need to integrate Wi-Fi into operator networks? The simple answer is that this technology is a good complement to existing solutions and, in certain conditions, is particularly appropriate for handling spikes in data traffic. But to work well, it needs to be integrated. The main factors that next generation Wi-Fi will be able to capitalize on are: the vast amount of unlicensed spectrum that can be used by this technology without the need for any regulatory approval; its ability to offload data traffic, complementing existing indoor solutions such as small cells and DAS; the near-ubiquitous device support for this technology – including UEs that are non-cellular; the evolution of small cells to support both cellular and Wi-Fi technology; and a new level of maturity – exemplified by the development of, and device support for, new standards and products, such as Hotspot 2.0, 802.11ac and EAP authentication, and additional solutions currently being defined, such as S2a mobility and 3GPP/Wi-Fi integration.

BOX A Terms and abbreviations
AES – Advanced Encryption Standard; AKA – Authentication and Key Agreement; ANDSF – Access Network Discovery and Selection Function; AP – access point; BSC – base station controller; BSS – basic service set; CSMA – carrier sense multiple access; DAS – distributed antenna system; DPI – deep packet inspection; EAP – Extensible Authentication Protocol; GTP – GPRS Tunneling Protocol; HLR – home location register; HSS – Home Subscriber Server; LI – lawful interception; MME – Mobility Management Entity; MS-CHAP – Microsoft's Challenge-Handshake Authentication Protocol; MWC – Mobile World Congress; NAS – non-access stratum; PLMN – public land mobile network; RAT – radio-access technology; RF – radio frequency; RNC – radio network controller; RRM – radio resource management; RSRP – reference signal received power; RSSI – received signal strength indicator; SON – self-organizing networks; TCP – Transmission Control Protocol; TLS – Transport Layer Security; TTLS – Tunneled Transport Layer Security; UE – user equipment; USIM – Universal Subscriber Identity Module; WIC – Wi-Fi controller; WLAN – wireless local area network.

The results of a survey1 carried out with 24 service providers are illustrated in Figure 1. These highlight the challenges operators are focusing their attention on when it comes to carrier Wi-Fi deployment. Some steps have already been taken to include Wi-Fi in mobile broadband solutions, such as EAP authentication. Some solutions are already supported by UEs, while others will be available shortly. But much more can be done.
With these challenges in mind, the top three priorities for next generation carrier Wi-Fi are: 3GPP/Wi-Fi traffic steering – to maintain optimal selection of an access network, so that quality of experience can be ensured and data throughput maintained; authentication – to provide radio-access network security for both SIM- and non-SIM-based devices; and integration with the core infrastructure already deployed for 3GPP access – providing DPI, support for unified billing and support for seamless handover. When the varied set of resources in a heterogeneous network can be combined and optimized, networks can provide increased capacity and the performance needed to give subscribers the desired level of user experience. So for Wi-Fi, the objective is not to turn it into a 3GPP technology, but rather to figure out how to add 3GPP intelligence and control over Wi-Fi usage, so that all resources are used in an optimal way while delivering the best user experience. When Wi-Fi becomes just another RAT, the synergies and application of mobile network capabilities, intelligence and infrastructure will remove the burden on Wi-Fi to meet all of the challenges outlined in Figure 1 on its own. With the operator in control, and with Wi-Fi networks that are integrated with mobile radio-access and core networks, subscribers will experience high-performing mobile broadband that operates in a harmonized way. Operators will be able to control, predict and monitor the choice of connectivity, allowing them to optimize both the user experience and resource utilization across the entire network.
FIGURE 1 Mobile and Wi-Fi network integration: the main challenges (n=24). Supporting seamless handover between mobile and Wi-Fi networks: 83%; mobile data offload/traffic steering between mobile and Wi-Fi networks: 79%; supporting QoS: 67%; maintaining data throughput: 67%; radio access security: 63%; supporting seamless handover between Wi-Fi access points: 58%; supporting SIM and non-SIM devices: 50%; integrating carrier Wi-Fi with 3G/4G small cells: 50%; authentication security: 46%; deep packet inspection and policy control: 46%; supporting unified billing across mobile and Wi-Fi networks: 42%; generating revenue from Wi-Fi investment: 38%.

3GPP/Wi-Fi traffic steering
In a heterogeneous network, the type and amount of resources available to provide mobile broadband are quite diverse, as networks are built using: multiple technologies, including GSM, WCDMA, LTE and Wi-Fi; several types of cells, including macro cells, small cells and APs; and varying network capabilities, including carrier aggregation, different carrier bandwidths, 802.11n and 802.11ac. To provide the best user experience across all available resources and to optimize resource utilization, the decision of whether or not to switch to Wi-Fi or back to cellular, and when to switch, needs to be made according to a more complex set of principles than simply: if there is an available Wi-Fi network with adequate signal strength, then switch to it. The decision should take into account all available technologies and carriers, all visible cells, network and UE capabilities, and the radio conditions of each specific UE (which requires decisions to be made in real time, as these conditions fluctuate rapidly), and it should ultimately be based on a calculated performance estimate for each UE, given the aggregation of all terminals in the area. In a heterogeneous network it is often desirable to move users between different technologies, carriers and layers.
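The decision principle described above – a calculated performance estimate per UE rather than a simple signal-strength rule – might be sketched like this. The fair-share throughput model and all numbers are illustrative assumptions, not the algorithm of any product.

```python
# Sketch: pick the access (cell or AP) that maximizes a UE's estimated
# throughput, where the estimate is a fair share of nominal capacity scaled
# by this UE's radio conditions. A toy model, not a real RRM algorithm.

def estimated_throughput(capacity_mbps: float, attached_ues: int,
                         link_quality: float) -> float:
    """Fair share of capacity, scaled by the UE's link quality (0..1)."""
    return link_quality * capacity_mbps / (attached_ues + 1)

def select_access(candidates: dict, quality: dict) -> str:
    """candidates: name -> (capacity_mbps, attached_ues); quality: name -> 0..1."""
    return max(candidates,
               key=lambda n: estimated_throughput(*candidates[n], quality[n]))

cells = {"lte-macro": (150.0, 10), "wifi-ap": (60.0, 3)}
quality = {"lte-macro": 0.9, "wifi-ap": 0.3}  # poor Wi-Fi link budget
# Despite the lightly loaded AP, this UE's weak Wi-Fi link makes LTE better.
best = select_access(cells, quality)
```

Note how the naive rule ("Wi-Fi is available, use it") and this estimate disagree: the estimate accounts for load and per-UE radio conditions, which is exactly what the article argues the device alone cannot see.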
And so, real-time coordination of all resources adds an additional layer of complexity to the Wi-Fi/cellular decision. For example, if we assume that the overall goal is to obtain the best UE throughput (as this has a direct impact on user experience), UEs that have 3GPP carrier-aggregation capabilities should switch to Wi-Fi much later than terminals that don't. In another scenario involving three UEs, the best solution is to assign the first UE to a coordinated macro/small cell to deliver higher throughput, to assign another 3GPP technology or carrier to the second UE, and to allocate a Wi-Fi access point to the third device – as this UE has 802.11ac capabilities and a good radio link budget in Wi-Fi. Good coordination is required to make such a decision, so that the best solution is attained for all three UEs given their individual requirements and those of the other two devices. The ability to assure predictable and consistent switching behavior in heterogeneous network architectures eases network planning, makes optimal use of available resources and ensures good performance for all subscribers, independent of device. This implies that a Wi-Fi/cellular decision-making mechanism needs to be independent of UE type, UE operating system and UE vendor.

FIGURE 2 Concept of a performance-based mobility feature: the UE's request to connect to Wi-Fi is proxied by the WIC for authentication (AAA), and the controller currently hosting the UE – BSC (2G), RNC (3G) or MME (LTE) – accepts or rejects the connection after comparing the UE throughput estimate for Wi-Fi with the UE's estimated throughput in the cellular network.
Crystal ball
As good predictions can prevent UEs from switching too often from one RAT to another, the ability to forecast network states and available capacity is a fundamental tool for the RAN to operate effectively and to help improve the mobile broadband experience. Avoiding the ping-pong effect is beneficial because the less switching there is among RATs, the less signaling there is between the RAN and the core, and the lower the impact on battery consumption in the terminal. The RAN is the best place in the network to measure available resources for specific UE forecasts, as this part of the network has a complete overview of resources, knowledge of the distributed SON features, information about potential mobility decisions, as well as an awareness of all UEs in the area.

For years, operators have deployed mobility features both within and between 3GPP radio-access networks that allow the network to determine in which cell, on which carrier or by which technology a UE should be hosted. These features make good use of all available resources to provide everyone connected to the network with an optimal mobile broadband experience. Decisions are made by the network on the basis of the available resources, the radio conditions of UEs and the current state of the network (an aggregation of information from the network and all the UEs in the area).

A mobility feature
Ericsson's concept of a mobility feature for switching to and from Wi-Fi networks provides operators with the capability to control when UEs connect to Wi-Fi and when data traffic should be switched back to 3GPP. This concept is based on the forecast performance for the specific UE. By deploying a mobility feature that is based on performance, operators will be in a position to add Wi-Fi to their mobile broadband resources, just like any other RAT.
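Forecast-based switching of this kind implicitly needs hysteresis to avoid the ping-pong effect described above: the gain must be both large enough and sustained. A minimal sketch – the margin and time-to-trigger values are invented for illustration, not taken from any Ericsson feature:

```python
def should_switch(current_tput, target_tput, better_for_s,
                  margin_mbps=5.0, time_to_trigger_s=2.0):
    """Switch RAT only if the target's forecast throughput beats the
    current RAT by a margin, sustained for a time-to-trigger period.
    (Illustrative hysteresis; parameter values are invented.)"""
    gain = target_tput - current_tput
    return gain >= margin_mbps and better_for_s >= time_to_trigger_s

# A momentary 3 Mbps gain does not trigger a switch...
print(should_switch(20.0, 23.0, better_for_s=5.0))   # False
# ...but a sustained 10 Mbps gain does.
print(should_switch(20.0, 30.0, better_for_s=5.0))   # True
```

Fewer switches mean less RAN-to-core signaling and lower terminal battery drain, which is the point the article makes about good predictions.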
And as the same entities make the decisions about 3GPP mobility, 3GPP/Wi-Fi mobility and distributed SON features, decisions are coordinated, preventing devices from looping and improving resource utilization as a result of less signaling. To ensure that the benefits of a mobility feature supporting real-time traffic steering can reach the mass market quickly, such a feature needs to be developed to work on legacy UEs without the need to install a client. To achieve this, and to avoid placing additional requirements on UEs that would delay the applicability of the feature, an interface needs to be placed between the controllers – BSC, RNC, Wi-Fi controller (WIC) and MME – of the different technologies. Switching decisions need to be made jointly by these controllers, as illustrated in Figure 2 – a simplified flow diagram for a performance-based mobility feature.

To illustrate how the concept works, consider the case of a UE that is moving closer to an operator AP but is currently hosted in a 3G cellular network: the default UE behavior is to request a connection to the Wi-Fi network on detection of the AP; the WIC proxies the authentication request; on authentication, the WIC provides the RNC currently hosting the UE with an estimated performance for the UE in Wi-Fi; and if the 3G throughput estimate is higher than the estimate for Wi-Fi, the RNC orders the WIC to reject the connection, thus maintaining data transmission over 3G. As long as the UE is in the vicinity of the AP, the connection manager in the UE will keep trying to connect to the Wi-Fi network, allowing the network to decide to switch when and if conditions become more favorable in Wi-Fi. Once a client is accepted into the Wi-Fi access network, the UE throughput in Wi-Fi and the estimated throughput for the UE in the NodeB are monitored continuously. When NodeB throughput surpasses Wi-Fi performance for the UE, the UE is disconnected from Wi-Fi and data communication switches back to 3G.
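In its simplest form, the accept/reject exchange just described – the hosting controller comparing throughput estimates on behalf of the WIC – reduces to the following sketch; this is an illustration of the comparison, not Ericsson's actual algorithm.

```python
def wifi_connection_decision(est_tput_wifi, est_tput_3g):
    """Simplified Figure 2 flow: after the WIC proxies authentication,
    the controller currently hosting the UE (here, an RNC in 3G)
    compares the Wi-Fi throughput estimate with the cellular one and
    orders the WIC to accept or reject the connection request."""
    if est_tput_3g >= est_tput_wifi:
        return "reject"   # keep the UE on 3G; the UE will retry later
    return "accept"       # switch data traffic to Wi-Fi

print(wifi_connection_decision(est_tput_wifi=8.0, est_tput_3g=14.0))   # reject
print(wifi_connection_decision(est_tput_wifi=25.0, est_tput_3g=14.0))  # accept
```

Because the UE's connection manager keeps retrying while near the AP, a "reject" is not final: the same comparison simply runs again when radio conditions change.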
Adding smartness
In addition to guaranteeing the best experience for users and optimizing the use of resources, maintaining control over the Wi-Fi/cellular switching process provides operators with a platform that can be rapidly adapted to include additional parameters. For example, operators will be able to include subscription parameters or service considerations in the switching decision, without having to wait for support to be implemented by all UEs.

The real-time switching decision has a greater impact on user experience for UEs that are in connected mode. Consequently, an extension of existing mobility features for connected mode that includes support for Wi-Fi would optimize and enhance this decision. This could be achieved by including Wi-Fi parameters in UE measurement reports (which currently include only cellular measurements), and so further standardization and extension of mobility mechanisms is foreseen. The way these measurements would be handled by the UE is illustrated in Figure 3, and the suggested traffic-steering process would work as follows:
1. the RAN instructs the UE by setting thresholds or conditions under which the UE should perform measurement reporting. For example, the RAN may provide a condition based on signal strength (RSSI), broadcasted WAN metric and load, or received power (RSRP);
2. if any of the thresholds are met, the UE reports WLAN measurements back to the RAN;
3. on the basis of this information, the current network state and any other information it has available, the 3GPP RAN sends a traffic-steering message to the UE to switch data traffic to or from WLAN.

Traffic steering today – limitations
Today, the decision to move traffic between 3GPP and Wi-Fi is made independently by each UE, based on the UE's implementation of the connection manager. With varying implementations, UEs from different vendors are likely to carry out the switching decision at different times.
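The three-step connected-mode steering process numbered earlier in this section can be sketched as follows; the threshold values and function names are invented for illustration only.

```python
def configure_thresholds():
    """Step 1: the RAN sets reporting conditions (values illustrative)."""
    return {"rssi_min_dbm": -75, "rsrp_max_dbm": -105}

def ue_should_report(wlan_rssi_dbm, lte_rsrp_dbm, th):
    """Step 2: the UE reports WLAN measurements when a threshold is met,
    e.g. the Wi-Fi signal is strong enough or the cellular signal weak enough."""
    return wlan_rssi_dbm >= th["rssi_min_dbm"] or lte_rsrp_dbm <= th["rsrp_max_dbm"]

def steer(report, network_state):
    """Step 3: the RAN combines the report with network state and sends a
    traffic-steering message (sketch; the real logic uses far more inputs)."""
    if report["wlan_est_tput"] > network_state["cell_est_tput"]:
        return "steer-to-WLAN"
    return "stay-on-3GPP"

th = configure_thresholds()
if ue_should_report(wlan_rssi_dbm=-60, lte_rsrp_dbm=-110, th=th):
    print(steer({"wlan_est_tput": 30.0}, {"cell_est_tput": 12.0}))  # steer-to-WLAN
```

The key design point is that the UE only measures and reports; the steering decision itself stays in the RAN, where the full network state is visible.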
This makes the ecosystem unpredictable, increases capex and opex, and limits the operator's ability to guarantee a superior user experience. The algorithm currently used by UEs is oversimplified: the selection of Wi-Fi over cellular is based solely on the availability of Wi-Fi. Once a UE detects Wi-Fi, it automatically shifts data traffic to it. Even if devices become smarter, UEs will still make decisions based on a single cell and a single AP – decisions that are not coordinated with the network (which can see all resources) or with other UEs in the area. Most of the UEs in the coverage area of an AP will therefore try to connect to it.

What is Wi-Fi good at?
Let's consider what Wi-Fi is designed for: to provide high peak rates and low latency for a limited number of users in the vicinity of an AP. When too many users are connected, or when the users have bad link budgets, the performance of the carrier sense multiple access (CSMA) protocol degrades and the total capacity of the AP drops rapidly. To avoid this type of degradation, an AP should not permit a UE to attach unless the following conditions are met: the service level provided to the UE over Wi-Fi is better than the level offered over cellular; and the quality of the Wi-Fi connection is good enough to ensure that the experience of users already connected to the AP will not be overly affected by the addition of another user.

Proof of concept
By putting the network in control, and using all the network information and aggregated information from UEs in the decision-making process, the network can provide the best user experience as well as increasing its overall mobile broadband capacity. Proof-of-concept measurements for such an approach are shown in Figure 4. The left-hand graph shows that by rejecting users with bad link budgets and poor radio conditions, the APs can deliver higher throughput rates with exactly the same equipment. Without the mobility feature, the AP reaches a maximum peak rate of 30Mbps, and delivered throughput for most of the UEs is between 0 and 17.5Mbps. With the feature activated, the same AP reaches a maximum peak rate of 40Mbps, with most users receiving between 20 and 40Mbps, and capacity is increased by up to 100 percent. The right-hand graph shows that without the mobility feature, UE performance drops by up to 20Mbps; 50 percent of users experience a drop of 10Mbps and all of them experience inferior performance when moving to Wi-Fi. With the feature activated, the throughput difference in the worst case is 7.5Mbps and on average the difference is zero – thus the mobile broadband experience is maintained.

FIGURE 3 Connected mode mobility to and from Wi-Fi: the 3GPP RAN (1) configures measurement thresholds in the UE, (2) the UE reports WLAN (BSS/WAN) measurements, and (3) the RAN sends a traffic-steering message.

The 3GPP mobility mechanisms for idle mode will be extended in 3GPP Rel-12, providing coordination between 3GPP and Wi-Fi for UEs in idle mode. While the suggested mobility feature can evolve to include enhancements, it can be included in any vendor solution without requiring any direct interfaces, thus increasing the level of coordination between the various access networks, cell types and carriers from different vendors.

Network probing
Another approach to optimizing connections is to force the UE to probe the network before it switches. However, this approach increases the uncertainty of measurements – the greater the number of UEs probing the network, the more unrealistic the measurements become. Probing can at best provide an indication of the level of performance available at a given moment, and the results cannot take into consideration changes to network parameters created by SON features, the triggering of mobility features or resource utilization by other UEs.
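The two AP attach conditions stated under "What is Wi-Fi good at?" – Wi-Fi must beat the UE's cellular alternative, and existing users must not be overly degraded – can be sketched as an admission check; the degradation-tolerance parameter is an invented illustration, not a measured value.

```python
def admit_ue(wifi_service_level, cellular_service_level,
             predicted_ap_tput_after, current_ap_tput,
             degrade_tolerance=0.8):
    """Admit a UE to the AP only if (1) Wi-Fi beats the cellular
    alternative for this UE and (2) the AP's aggregate capacity would
    not drop below a tolerance (threshold value illustrative)."""
    better_than_cellular = wifi_service_level > cellular_service_level
    others_protected = predicted_ap_tput_after >= degrade_tolerance * current_ap_tput
    return better_than_cellular and others_protected

# A UE with a bad Wi-Fi link budget is rejected: admitting it would
# drag the CSMA cell's total capacity down for everyone.
print(admit_ue(wifi_service_level=5.0, cellular_service_level=12.0,
               predicted_ap_tput_after=18.0, current_ap_tput=30.0))  # False
```

Rejecting such UEs is exactly the mechanism behind the capacity gain reported for the network-controlled AP in Figure 4.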
The lack of coordination among individual UEs and the decisions they make can lead to devices ping-ponging from one access technology to another, causing signaling storms and additional battery consumption – leading to a degradation in overall network performance.

Many implementations
The variety of connection-manager implementations in UEs and the range of radio sensitivities mean that the outcome of today's oversimplified mobility mechanism is unpredictable, which makes it difficult for operators to plan and optimize networks.

FIGURE 4 Measured capacity increase by RRM and coordination of Wi-Fi and 3GPP: CDFs of throughput in Wi-Fi (Mbps) and of the throughput difference after selecting Wi-Fi (Mbps), comparing a legacy AP with a network-controlled AP, together with a throughput trace over time.

Integration tools
Some complementary tools, such as Hotspot 2.0 and ANDSF, have been developed to ease the integration of Wi-Fi. By using HS2.0, broadcasted BSS load and WAN metrics, UE-based switching improves somewhat [2], as these mechanisms can prevent UEs from connecting to overloaded APs or APs with limited backhaul. However, a low load does not necessarily imply that the resource availability of an AP is good. Low load can be the consequence of a bad radio environment caused by interference. To use these mechanisms efficiently, the refresh frequency of the broadcasted load update needs to be fine-tuned to prevent mass toggling between cellular and Wi-Fi accesses as a result of brief load peaks.

Approaches working against each other
Today, 3GPP mobility features are deployed both within a radio-access technology and among technologies. These features are controlled by the RAN, and work in real time to provide the desired mobile broadband experience and optimize resource management. They also take into account network parameter changes, as SON features are triggered to adapt network behavior to constantly changing subscriber activity. If the 3GPP/Wi-Fi decision continues to be made by the UE, it can interfere with deployed 3GPP mobility mechanisms, causing devices to loop among access networks and wasting resources. For example, an active 3GPP mobility feature might be in the process of switching a UE to another technology (from LTE to 3G, for example) when the device decides to switch to Wi-Fi instead. If the Wi-Fi network becomes overloaded, it will switch the UE back to the 3GPP network, which might trigger an additional switch to another cell.

The ANDSF mechanism augments PLMN selection, providing operators with improved control over the decisions made by devices. Through a set of static operator-defined rules, ANDSF guides devices through the decision-making process of where, when and how to choose a non-3GPP network. But the final decision is still made by the UE and is still dependent on the implementation of the device's connection manager. ANDSF is defined to take into account the local environment – in other words, the specific connection-manager implementation in the UE – and so UEs from different vendors behave in different ways. Using ANDSF might therefore lead to an unpredictable outcome. Even if they are enhanced with radio information, NAS-based solutions do not have the real-time information needed to make the optimal decision, because the requirements on information updates would become too demanding for terminals and networks.
The ANDSF mechanism is, however, valuable, as it provides operators with some level of control over the 3GPP/Wi-Fi selection process, and it takes into account static or slowly changing network parameters. However, the mechanism has an impact on battery utilization in UEs, as the UE scans continuously for Wi-Fi, and it does not cater for the rapidly fluctuating RF environment experienced by a mobile device. Neither does the mechanism take into account UE and radio-access capabilities (such as LTE, HSPA, EDGE, carrier aggregation, cell bandwidth, 802.11n and 802.11ac) or network changes (as a result of self-optimizing features in the RAN). And it does not provide real-time coordination with the decisions of other UEs in the area. By deploying an HS2.0- and/or ANDSF-only approach, the same policy will be applied by all UEs configured with that policy, independently of their RF conditions and of cell and network capabilities, in a manner that is not coordinated with other UE decisions or network features. This results in an inability to guarantee the best user experience, as well as creating additional opex and capex for the operator, as resource utilization is not optimized at the network level.

FIGURE 5 Usage of Wi-Fi shown by authentication method: SIM sessions versus web sessions, January 2011 to July 2012.

Scenario: many users on a train
Consider this scenario: a train is entering a station where two APs are deployed. There are 200 subscribers on the train with active mobile broadband sessions. The APs in the station perform best when the number of UEs attached is less than 20. If AP1 has a slightly better signal strength than AP2, it is possible that 180 of the UEs on the train will try to connect to AP1, while the remaining 20 UEs select AP2. This is because each UE is ignorant of the decisions made by all the other UEs. Subsequently, a number of UEs connected to AP1 will be steered to AP2 or back to 3GPP as a result of the high load on AP1.
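The train scenario can be reproduced with a toy simulation; the signal levels, noise model and per-AP sweet spot below are invented for illustration.

```python
import random

def naive_ap_choice(num_ues, seed=1):
    """Each UE independently attaches to whichever AP it measures as
    stronger, ignoring every other UE's choice (train-scenario sketch).
    AP1 is ~3dB stronger on average; measurements carry noise.
    All numbers are illustrative, not from the article."""
    random.seed(seed)
    on_ap1 = 0
    for _ in range(num_ues):
        meas_ap1 = -60 + random.gauss(0, 2)  # dBm
        meas_ap2 = -63 + random.gauss(0, 2)
        if meas_ap1 >= meas_ap2:
            on_ap1 += 1
    return on_ap1

def coordinated_choice(num_ues, ap_sweet_spot=20, num_aps=2):
    """A network-controlled scheme caps each AP near its sweet spot
    and keeps the remaining UEs on cellular (simplified model)."""
    on_wifi = min(num_ues, ap_sweet_spot * num_aps)
    return {"wifi": on_wifi, "cellular": num_ues - on_wifi}

print(naive_ap_choice(200))     # the vast majority of UEs pile onto AP1
print(coordinated_choice(200))  # {'wifi': 40, 'cellular': 160}
```

The naive rule overloads AP1 far beyond its sweet spot and then triggers a wave of corrective re-steering, which is precisely the uncoordinated behavior the article criticizes.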
When two UEs apply the same policy, the one that supports carrier aggregation should move data traffic to Wi-Fi much later – which requires coordination. So while ANDSF can be used as a simple mechanism for offloading, it guarantees neither the best user experience nor optimal resource utilization.

Seamless authentication and increased security
To secure the air link between a UE and a hotspot, Passpoint devices use the WPA2 Enterprise security protocol. This is a four-way handshaking protocol that is based on AES encryption, and it offers a level of security that is comparable to cellular networks. The Hotspot 2.0 specification supports four commonly deployed standard protocols: SIM-based authentication – EAP-SIM for devices with SIM credentials and EAP-AKA for devices with USIM credentials; and non-SIM-based authentication – EAP-TLS for client- and server-side authentication with a trusted root certificate, and EAP-TTLS with MS-CHAPv2 for username-password authentication. The Hotspot 2.0 specification complements WPA2 Enterprise security by incorporating features such as layer-2 traffic inspection and filtering, as well as broadcast/multicast control, which are often used to mitigate common attacks on public Wi-Fi deployments.

Seamless authentication (where users are not required to enter a user name and password) and increased security are key to Wi-Fi usage. This is illustrated in Figure 5, which shows the uptake of Wi-Fi in an airport network enabled with EAP-SIM/AKA authentication. By reusing SIM mechanisms, subscribers only need to be provisioned once in the HSS/HLR to use an operator's mobile broadband.
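The mapping from credential type to the four Hotspot 2.0 authentication methods listed above can be captured in a small lookup; this sketches the selection logic only, not an EAP implementation, and the credential labels are invented names.

```python
def select_eap_method(credential):
    """Map a device's credential type to the Hotspot 2.0 authentication
    method described in the text (simplified lookup; labels invented)."""
    methods = {
        "SIM": "EAP-SIM",                  # SIM credentials
        "USIM": "EAP-AKA",                 # USIM credentials
        "client_certificate": "EAP-TLS",   # client/server certs, trusted root
        "username_password": "EAP-TTLS (MS-CHAPv2)",
    }
    return methods.get(credential, "unsupported")

print(select_eap_method("USIM"))  # EAP-AKA
print(select_eap_method("username_password"))  # EAP-TTLS (MS-CHAPv2)
```

For SIM/USIM devices the method runs without user interaction, which is why Figure 5 shows SIM-based sessions overtaking web-login sessions once EAP-SIM/AKA is enabled.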
Core integration and IP session mobility in 3GPP/Wi-Fi
Connecting Wi-Fi equipment to the mobile core network gives subscribers direct access to operator services, and provides operators with a mechanism to improve visibility, reuse their mobile broadband infrastructure, and integrate with different systems (LI, DPI, session establishment and management, policy decision and enforcement, reuse of wholesale and roaming agreements, access to operator-branded or hosted services, and online and offline charging). Such an architecture provides a harmonized platform for handling cellular and Wi-Fi access, allowing Wi-Fi to become a continuation of the operator's mobile broadband experience. Packet-core network integration helps operators to gain control of non-cellular traffic, and consequently of the user experience.

Seamless handover – uninterrupted IP session mobility between Wi-Fi and 3GPP – is a key technology when it comes to providing good user experience and mobility, and Ericsson has taken a leading role in both its standardization and product development. In 3GPP Rel-11, the GPRS Tunneling Protocol (GTP) – which is widely deployed in mobile networks – was specified for the S2a interface. Use of this interface, for trusted non-3GPP access, makes it possible for UEs to connect to the Wi-Fi network and utilize packet-core network services without mobility or tunneling support in the UE. The S2b interface was also included in the standard for connection to the core through untrusted Wi-Fi. 3GPP Rel-12 standards will enable GTPv2-based IP session mobility between Wi-Fi and 3GPP over the S2a interface. Ericsson, together with Qualcomm, demonstrated how this works at MWC 2013, and IP session mobility is expected to appear in vendor products in 2014.
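What IP session mobility buys the user can be illustrated with a toy model; this is a conceptual sketch of address continuity, not the GTPv2 signaling itself, and all values are invented.

```python
def handover(session, new_access, ip_continuity=True):
    """Toy model: with modem-anchored IP continuity, a Wi-Fi/cellular
    handover keeps the UE's IP address and TCP sessions survive; with
    two independent IP stacks, the address changes and sessions must
    be re-established (sketch; addresses are illustrative)."""
    session = dict(session, access=new_access)  # copy, switch access
    if not ip_continuity:
        session["ip"] = "10.0.1.7"  # new address from the other stack
        session["tcp_reestablishments"] += 1
    return session

s = {"ip": "10.0.0.5", "access": "LTE", "tcp_reestablishments": 0}
print(handover(s, "Wi-Fi", ip_continuity=False))  # address changed, session broken
print(handover(s, "Wi-Fi", ip_continuity=True))   # same address, nothing re-established
```

Keeping the address stable is what makes the handover transparent to applications, including IP-address-dependent ones such as VPN clients.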
The Rel-12 solution will bring the user experience even closer to mobile broadband by overcoming the limitations of the two IP stacks (one for cellular and one for Wi-Fi) used in current UE implementations. The double-stack approach results in IP address reallocation every time a device moves between cellular and Wi-Fi. Some applications have tried to overcome these limitations by deploying tunneling mechanisms and using buffers, so that users do not notice the delay introduced in the session by the IP address reallocation (re-establishment of TCP sessions, socket reallocation, packet loss and rerouting delays). But these solutions are not optimal for real-time applications such as video streaming or voice calls; nor do they support IP-address-dependent security mechanisms such as VPN services, and they are not supported by enough UEs to reach the mass market. By moving the support for IP address continuity to the modem, a device can keep its IP address at handover in a way that is completely transparent to applications. This is what 3GPP Rel-12 offers.

Conclusion
Next generation carrier Wi-Fi addresses the technical challenges relating to mobile broadband Wi-Fi. By enabling operators to add Wi-Fi capacity where 3GPP spectrum is scarce, Wi-Fi-based business models can be included in operator offerings to maintain an optimal mobile broadband experience. With operators in control and Wi-Fi integrated into heterogeneous networks, the mobile broadband experience will become harmonized, providing users with the best possible performance. By taking control, operators will be able to predict and monitor the choice of connectivity, maintaining the user experience and optimizing network resource utilization. Ericsson will develop its 3GPP and Wi-Fi portfolios based on the concepts outlined above.
Ruth Guerra joined Ericsson in 1999 and is currently strategic product manager for Wi-Fi integration at Product Line Wi-Fi and Mobile Enterprise. She is an expert in Wi-Fi and 3GPP networks, focusing on the strategic evolution of Wi-Fi and 3GPP RAN integration. She holds an M.Sc. in Telecommunications from the Technical University of Madrid (Universidad Politécnica de Madrid), Spain.

References
1. Infonetics Research, May 2013, Carrier WiFi Offload and Hotspot Strategies and Vendor Leadership: Global Service Provider Survey, available at: http://www.infonetics.com/pr/2013/WiFi-Offload-and-Hotspot-Strategies-Survey-Highlights.asp
2. Ericsson, December 2012, Ericsson Review, Achieving carrier-grade Wi-Fi in the 3GPP world, available at: http://www.ericsson.com/news/121205-er-seamless-wi-fi-roaming_244159017_c?idx=10&categoryFilter=ericsson_review_1270673222_c

Ericsson, SE-164 83 Stockholm, Sweden. Phone: +46 10 719 0000. ISSN 0014-0171. 297 23-3220 | Uen. Edita Bobergs, Stockholm. © Ericsson AB 2014