Demand Attentive Networks (DAN)

Creating the perception of unlimited bandwidth in an untethered fibre-wireless world.

A paper provided by the Institution of Engineering and Technology
www.theiet.org/factfiles
About This Paper

The Institution of Engineering and Technology acts as a voice for the engineering and technology professions by providing independent, reliable and factual information to the public and policy makers. This Briefing aims to provide an accessible guide to current technologies and scientific facts of interest to the public.

For more Briefings, Position Statements and Factfiles on engineering and technology topics please visit http://www.theiet.org/factfiles.

The Institution of Engineering and Technology

The Institution of Engineering and Technology (IET) is a global organisation, with over 150,000 members representing a vast range of engineering and technology fields. Our primary aims are to provide a global knowledge network promoting the exchange of ideas and enhance the positive role of science, engineering and technology between business, academia, governments and professional bodies; and to address challenges that face society in the future.

As engineering and technology become increasingly interdisciplinary, global and inclusive, the Institution of Engineering and Technology reflects that progression and welcomes involvement from, and communication between, all sectors of science, engineering and technology.

The Institution of Engineering and Technology is a not-for-profit organisation, registered as a charity in the UK. It is registered as a Charity in England & Wales (no 211014) and Scotland (no SC038698).

For more information please visit http://www.theiet.org

Enquiries
[email protected]

© The Institution of Engineering and Technology 2013

Contents

Executive summary ... 3
Demand Attentive Networks ... 4
Introduction ... 5
Supporting Network Infrastructure ... 7
The Access Challenge ... 7
Exchange Buildings ... 10
Energy Costs and the Carbon Tax ... 11
Poles, Ducts or Drains? ... 11
Big Data and Privacy ... 12
Mobile Capacity & Performance ... 13
National Roaming To Drive Up Coverage ... 13
Denser Mobile Networks ... 15
Squeezing More Out Of Usable Radio Spectrum ... 17
Coverage Gap Filling ... 20
Planning for Resilience ... 22
Wireless Device Performance ... 24
Better Mobile Antenna & Receiver Performance ... 24
Multicast Enabled Mobile Handsets ... 25
Best Signal Selection ... 25
EU Regulatory Weakness ... 25
Content Distribution ... 26
Overnight Push to “Trickle Charge” Storage ... 26
Making use of Multicast Techniques ... 27
The Future of Terrestrial Broadcasting ... 28
The End-State: Network Architecture Summary ... 30
Conclusions ... 34
Appendix 1 ... 35
The working assumptions of a new Common Operating Model ... 35
Top 10 Enablers to deliver Demand Attentive Networks ... 35
Delivering the Vision ... 37
References ... 38
Acronyms ... 39
Demand Attentive Networks
A paper provided by The Institution of Engineering and Technology
© The IET 2013
www.theiet.org/factfiles
2
Executive summary
It is generally acknowledged that universal super-fast broadband would benefit the UK’s economy and help close the gap between
urban and rural economies and between northern and southern economies. Ideally the UK would have a universal fibre to the premise
infrastructure with extensive high speed wireless resources at the edges of the network. This would mean that all broadband users would
be able to wirelessly access internet and other resources at speeds of 1 Gbps from their smartphones, tablets, TVs and other appliances.
Service providers and App developers would find new and creative ways to bring novel services to businesses and domestic customers.
But the commercial business cases for investment in universal coverage do not work and the public subsidy required to make it happen is
variously estimated at between £10 billion and £30 billion for the fibre networks alone.
The Institution of Engineering & Technology (IET) studied the underlying issue, and recognising that public investment of this size is
unaffordable in the foreseeable future, took a different approach. Its key underlying principle is that it is not necessary to have such high
speed capacity available everywhere at all times; it is enough if the device or appliance which is using the connectivity finds that it is not
constrained in its demands by the network infrastructure in place. This can be achieved by a combination of technical standards, network
architecture and smart regulation which work together to organise the demand for bandwidth in real time.
The attached paper on “Demand-Attentive Networks” reflects on this point; that future networks should be attentive to the demand for
bandwidth being placed upon them and provide sufficient resources to meet the demand at the time. There are several implications of this
principle, which are explored in the paper. Firstly the networks have to be positioned to supply bandwidth wherever the demand arises.
This can be enabled or prevented by simple public policy decisions. Some of these are described in the attached paper and not all of these
concern the making available of more spectrum, although that will always remain important. Secondly the networks have to be designed
to enable resources to be shared between networks as demand moves. This implies specific regulatory approaches. The integration of
fibre and wireless networks, not all of which would be provided by traditional mobile operators, will be critical. Thirdly user devices and
Apps developers should assume availability of unrestricted bandwidth, but must design their devices to work in this environment and only
demand what they really need at the time. A smartphone mustn’t “cry wolf” and grab the priority from other users if it doesn’t really need it.
In order to focus the debate between network providers, manufacturers, regulators and policy makers, the IET has defined a number of
“Working Assumptions” which are set out in a second paper. This could be seen as a to-do list for anyone wishing to implement Demand-Attentive Networks, but it is also more than that. The IET has tried to determine the top ten or so areas that need attention and invites
comments and suggestions of other priority areas. Ultimately the IET is seeking to drive agreement, amongst all of those with an interest, as
to what the top ten areas for action are.
There have been several Government initiatives intended to increase the reach and speed of the UK’s broadband networks. The Coalition
Government has made large sums of money available to increase the rollout of rural broadband, but taken together with commercial
investments this still will result in a broadband infrastructure which delivers 24 Mbps or less to about half of households. The Demand-Attentive Network approach has the potential to deliver a similar economic impact as a conventional fibre-to-the-premise network coupled
with 5G mobile and small cell/wifi access, but at a fraction of the cost and without the need for large public subsidy. Currently there
is no country in the world which is taking anything like this approach to the delivery of future communications networks. The UK has
the opportunity to seize the initiative, generating prodigious economic value for the UK and at the same time enabling the proliferation of
creative and novel technology companies.
Demand attentive networks
A paper by the IET Communications Policy Panel
To satisfy today’s telecoms users, networks need to provide consistent, fit for purpose outcomes for increasingly demanding applications.
Fibre and radio spectrum are playing an increasing role in evolving networks. However, radio spectrum is finite and it is not economic to
deliver fibre to every premise in the UK. Bandwidth can never be unlimited but is often used simplistically as a proxy for a well-engineered,
robust network. What is actually important is that the network delivers the application outcomes and experiences that users require.
The IET seeks to address the necessary network evolution via the concept of a “Demand-Attentive Network”. A DAN is a network that is
cognisant of and responsive to the demands that users and applications are placing on it. It then seeks to optimise the use of network
resources including smart terminal devices in order to provide the outcomes that are required to satisfy the users and applications needs.
In doing so it seeks to deliver the appropriate Quality of Experience (QoE) for the population of network users such that they perceive the
network performance as “always sufficient” and hence responsive to their needs. A DAN is not a single technology, protocol or network
design concept. It is an architectural, regulatory and policy approach to leveraging emerging technologies in an effective way to deliver
outcomes that meet the demands of users.
This paper summarises a range of communications and information technology developments that have the potential to enhance the
outcomes delivered by networks in a limited bandwidth world. Effective deployment of some of these technologies will necessitate changes
in the policy and regulatory environment to optimise the network architecture.
Introduction
The driving force for our UK communications networks is the unfolding digital economy and digital social space that compels us to shape
a new vision that no longer treats fixed and mobile networks as separate elements. Instead it brings them together into a “network growth
engine”. It is often construed that the network must support ever larger volumes of data... at ever faster speeds... with lower delays... greater
reach... and much stronger resilience. However, we must be wary of conflating the idea of “more” with “better”. The end-state for evolved
networks must provide greater consistency of delivered outcomes. This may require more resources (and will require better managed
resources).
In the popular view, including the view of investors in telecoms and media, the number one constraint on the growth of the digital economy
and the digital social space is bandwidth, both tethered and untethered. It is often perceived that nothing less than unlimited bandwidth
(at strictly limited price) will do. But that cannot be delivered. So engineers and policy makers together, have to find ways of enabling the
benefits of unlimited bandwidth in a limited bandwidth world. In other words, we have to enable the applications running over our limited
bandwidth networks to behave as if the bandwidth available to them is unlimited. Then the creators and users of those applications will
behave as if the bandwidth is unlimited. Then the digital economy and the digital social space will grow as if the bandwidth is unlimited.
However, in reality, policy makers need to get off the overly simplistic “bandwidth wagon” and focus on enabling the creation of outcomes
that fit the customer’s needs and the provider’s pocket1. The fact that increasing bandwidth may be necessary but not sufficient is illustrated
below2,3.
[Figure: chart of web page load time (PLT, ms, axis from 1,000 to 3,500) against bandwidth from 1 to 10 Mbps]
Figure 2.1: Example of web page load time (PLT) versus bandwidth
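The flattening shape such a curve takes can be reproduced with a toy model (entirely illustrative; the page size, round-trip time and round-trip count below are assumptions, not measurements from the figure): once transfer time shrinks, load time is dominated by latency-bound protocol round trips, so extra bandwidth buys little.

```python
# Toy model of page load time (PLT). All parameter values are assumptions
# chosen to illustrate diminishing returns, not data from the figure.
def page_load_time_ms(bandwidth_mbps, page_size_mb=0.3, rtt_ms=100, round_trips=10):
    """Approximate PLT as time-on-the-wire plus latency-bound round trips."""
    transfer_ms = page_size_mb * 8 / bandwidth_mbps * 1000  # transfer time
    latency_ms = rtt_ms * round_trips                       # handshakes, requests
    return transfer_ms + latency_ms

for mbps in (1, 2, 5, 10):
    print(f"{mbps:>2} Mbps -> {page_load_time_ms(mbps):.0f} ms")
# A tenfold increase in bandwidth yields far less than a tenfold reduction
# in load time, because the latency term does not shrink.
```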
The task is to define the vision of what will be needed from this “network growth engine” 10 years out, what the barriers are that may
impede this growth and identify the essential enablers (technology and policy) to allow the industry to make a big success of this vitally
important challenge.
The commercial need to make more cost effective use of shared infrastructure drives the need to manage and control the supply of and
demand on the underlying communications resource in real time. It is a trading space and the composite result of those trades (and others)
is what drives the delivered customer experience4.
As with all statistically shared resources (e.g. transport networks) the experience of their end-to-end use depends on the influence of other
users. The balancing act is how to create a frustration free experience that is dependable yet generate enough revenue to maintain the
operation and development of the system as a whole.
There are several constraining factors, not least hard physical limits such as the speed of light. The rapid development of higher bandwidth
links and more efficient coding techniques for the use of spectrum over the last 25 years has served us well. However, the physical limits,
especially in the access networks that serve the last mile, are fast being approached. The challenge now is to maintain and extend the
frustration-free experience for users while eking out more from the infrastructure. When done properly it will generate dependable income
streams that will underpin the ongoing investment.
Over the next 10-15 years the telecommunications industry will have to change dramatically to accommodate rising traffic demands and
increasing reliance on networks. It will have to move from a supply driven “build it and they will come” to a demand attentive delivery
approach.
Government and regulation, as well as industry, have their part to play in this. There are other resources, ranging from physical space on
poles to operation of wholesale markets where current regulation and practice will become a barrier. This document outlines how the IET
sees the intertwining of technology and policy issues as networks evolve.
A modern telecommunications system is a distributed computer whose primary role is “translocation” of information in a secure manner
in return for payment by the customer. An ideal system would be able to replicate information at a remote location with zero delay and
zero loss5. In practice the laws of physics make this a challenge so the communication system needs to approximate these ideals. Modern
communication networks are now multi-service networks. We have moved away from the “stove-pipe” systems of the past that were
dedicated to one application category (e.g. voice, internet access etc.). The job of the network is the revenue-adjusted equitable “allocation
of disappointment”. The aim is to distribute the degradation of quality (finite delay, loss, jitter) across the users and applications connected
to the network such that their processing requirements are satisfied in an adequate manner. When this can be achieved, users and
applications perceive the network connection as good enough to meet their requirements. Where capacity limits prevent the full satisfaction
of all users, the aim is to minimise the disappointment in a manner that takes into account the QoE level of the user (while still recognising that
all users are important).
“Everything you see or hear or experience in any way at all is specific to you. You create a
universe by perceiving it, so everything in the universe you perceive is specific to you.”
Douglas Adams
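The revenue-adjusted “allocation of disappointment” described earlier can be sketched as a weighted water-filling allocation (an illustrative sketch only, not an algorithm from this paper; the user names, demands and weights are invented): capacity is split by weight, capped at each user's demand, and any spare is redistributed to still-unsatisfied users.

```python
# Illustrative sketch of weighted allocation under scarcity. Weights stand in
# for QoE class or price premium; all specific values below are invented.
def allocate(capacity, demands, weights):
    """Weighted water-filling: split capacity by weight, cap at demand,
    redistribute spare capacity to users who remain unsatisfied."""
    alloc = {u: 0.0 for u in demands}
    active = {u for u, d in demands.items() if d > 0}
    while capacity > 1e-9 and active:
        total_w = sum(weights[u] for u in active)
        used = 0.0
        for u in list(active):
            share = capacity * weights[u] / total_w   # this round's fair share
            grant = min(share, demands[u] - alloc[u])  # never exceed demand
            alloc[u] += grant
            used += grant
            if demands[u] - alloc[u] <= 1e-9:
                active.discard(u)                      # fully satisfied
        capacity -= used
        if used <= 1e-9:
            break
    return alloc

# 10 units of capacity; A pays for double weight, C asks for very little.
print(allocate(10.0, {"A": 8.0, "B": 8.0, "C": 1.0}, {"A": 2.0, "B": 1.0, "C": 1.0}))
```

Note that C is fully satisfied (its demand is small), while the residual shortfall is shared between A and B in proportion to their weights: disappointment is distributed, not concentrated.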
We often describe communications networks in terms of pipes and flows. However, this model only takes us so far. A network serving
users with different QoS levels is really more akin to a trading space6. It has two primary characteristics - capacity (often reduced to
just “bandwidth”) and the ability to differentially schedule the processing of different classes of traffic. These capabilities are finite. The
resources of the network are allocated across the users and applications that present the workload to this distributed processing machine.
Trade-offs are made to allocate more resources to the more demanding users (often at a price premium - this is where policy7 enters the
fray) at the expense of those for which a best effort service will suffice.
It is clear that ever-rising demand for services delivered to untethered, mostly hand-held, devices requires smooth integration of wireless and
wireline superfast broadband networks and a rebalancing of the trade-offs between (local) storage and processing power. New technologies
such as 4G/LTE and more extensive use of WiFi8 are a part of the solution but not enough on their own. This paper discusses a range of
engineering and policy approaches that may be leveraged to give the users and applications connected to the network the illusion of infinite
bandwidth.
This paper refers to an “untethered fibre-wireless world”. This is because the end-state network architecture is comprised of fibre wireline
connectivity plus wireless access. As the network evolves towards its target end-state architecture, the bottlenecks and major challenges
alternate between the spectrum and fixed/fibre challenges as illustrated below:
Phase 1 (Spectrum): Acquiring 4G spectrum.
Phase 2 (Fixed/Fibre): Building 1 Gbit/s connectivity to as many macro cell sites as possible.
Phase 3 (Spectrum): Capacity challenge as 4G takes off; need to quickly deploy small cells (with fixed radio backhaul: sub-6 GHz, microwave etc.) and manage interference with macro-cells (power levels, sector tilting, beam forming, carrier allocation ...).
Phase 4 (Fixed/Fibre): Need to upgrade small cell backhaul to fibre/NGA as initial rapidly deployed radio backhaul capacity exhausts; needs GPON/WDM-PON etc.
Phase 5 (Spectrum): Spectrum challenge for harmonised 700 MHz, 5G etc.
Phase 6 (Fixed/Fibre): Increased use of Cloud-RAN; need to upgrade fibre “fronthaul” between Remote Radio Head (RRH) and C-RAN base-band hotel to 9 Gbit/s to support the CPRI protocol.
Table 2.1: Cycle of alternating spectrum and fibre network challenges
It is clear from the table above that if we were to just focus on the major challenge at a particular point in time we will stall on our journey
towards the end-state architecture. Thus we need to address the spectrum and fixed/fibre network challenges holistically if we are to reach
the vision of the end-state. A balanced approach is required in order to identify and address all of these challenges via the appropriate
technical and policy mechanisms so that the network can develop and evolve unhindered through these various phases. If we look
“through” the generations of technology to the end-state vision, we can deliver sustainably growing benefits to the UK digital economy which
transcend the successive generations of technology.
Supporting network infrastructure
The access challenge
For the majority of UK premises, the most ubiquitous physical bearers to facilitate access to communications networks are twisted copper
pairs (phone lines) and radio spectrum. In just under half the country there is a coaxial cable alternative. In the near term these will serve
us well via technologies such as VDSL2 and 4G (LTE). However, these media have limited capacity. Improvements to modulation techniques
and noise mitigation (such as “vectoring” to cancel crosstalk noise on VDSL29) will give modest improvements. This could for example
enable VDSL2 to reach the magic 100 Mbit/s headline figure. Whilst useful for marketing and regulatory positioning purposes, this will not
be a reality for many users - such peak headline speeds are not attainable on long phone lines (or near the mobile cell edge for mobile
transmission enhancements).
Use of VDSL-style technology on even shorter copper pairs (by deploying fibre to the Distribution Point, FTTdp10) may even enable 500
Mbit/s speeds to be achieved using FTTdp and G.fast copper transmission as illustrated below.
[Figure: three stages of copper access between local exchange and home. VDSL from a street-cabinet DSLAM (100-400 homes/DSLAM): max user speed 60 Mbps at 0.5 km, deployed today, limited by crosstalk. VDSL with a vectoring DSLAM that removes crosstalk (100-400 homes/DSLAM): 100 Mbps at 0.5 km, trialled today. G.fast DSLAM with fibre closer to the home (10-50 homes/DSLAM), modulating more spectrum: 500 Mbps at 0.1 km, next 3+ years]
Figure 3.1: Improvements to VDSL via vectoring and G.Fast technology
Medium term bandwidth enhancements will be achievable by “bonding” multiple physical bearers or “channels” together to deliver
aggregate bandwidth that is approximately the sum of the parts. Deployment of intra-bearer bonding (i.e. multiple DSL lines or multiple radio
channels) is on the rise, and inter-bearer bonding (e.g. ADSL + 3G, VDSL2 + 4G, WiFi + 3G ...) is also feasible and offers benefits in
terms of resilience. Use of multiple diverse bearers for network access will become increasingly popular as UK consumers and businesses
increasingly depend on Cloud-based services and applications - (network) failure is not an option!
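A back-of-envelope sketch of why bonded, diverse bearers help (the rates and availability figures below are invented for illustration): capacities add, while the connection as a whole fails only if every bearer fails at once, assuming independent failures.

```python
# Illustrative bonding arithmetic. The bearer rates and availabilities are
# assumptions, not figures from this paper.
bearers = {"VDSL2": (40.0, 0.995), "4G": (20.0, 0.99)}  # (Mbit/s, availability)

# Bonding: aggregate bandwidth is approximately the sum of the parts.
aggregate_mbps = sum(rate for rate, _ in bearers.values())

# Diversity: the bonded link is down only when every bearer is down at once
# (this assumes the bearers fail independently, e.g. separate physical paths).
outage = 1.0
for _, avail in bearers.values():
    outage *= (1.0 - avail)
availability = 1.0 - outage

print(aggregate_mbps)            # 60.0 Mbit/s combined
print(round(availability, 6))    # 0.99995: far better than either bearer alone
```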
The short to medium term access developments cited above have sufficient momentum in standardisation bodies, vendor roadmaps
and operator deployment plans to come to fruition without any form of “policy assistance”. However, given the inexorable thirst for more
demanding applications11, it is clear that they are not a long-term answer. Users are increasingly stressing the upstream capabilities of
broadband connections via photo and video uploads to social media sites and back-ups to personal cloud storage sites (e.g. Dropbox,
Google Drive etc.). The limited upstream capabilities of FTTC/VDSL2 could pose future limitations. With respect to downstream
requirements, Netflix for example has already announced that it plans to stream 4k-line ultra HD video services within a couple of years.
Each stream could require 30 Mbit/s of capacity. Improved video compression (such as MPEG5) will help. Hence, Fibre To The Premises
(FTTP) is still universally positioned as the ideal end-game.
- ADSL2plus (Asymmetric Digital Subscriber Line): copper from the local exchange to the customer premises; up to 24 Mbps today.
- FTTC-VDSL (Fibre to the Cabinet with Very-high-bitrate Digital Subscriber Line): fibre to the cabinet, copper to the premises; up to 100 Mbps today.
- Cable DOCSIS 3.0 (Data Over Cable Service Interface Specification): fibre to the cabinet, coax to the premises; up to 400 Mbps today.
- FTTH (Fibre to the Home) via GPON or Ethernet (Gigabit Passive Optical Network for point-to-multipoint, Ethernet for point-to-point): fibre end to end; up to 1 Gbps today.
Figure 3.2: Evolution from copper to fibre access technology
It should be noted that for the diagram above, the step increases in speed going from copper to partial fibre (with copper or coax) and finally
to pure fibre are each accompanied by a step increase in cost.
In the UK Local Loop Unbundling (LLU) initiated a growth spurt in Broadband network deployment as competitive market forces came into
play. The result was significant price reductions for Broadband since the LLU operators were new competitors who did not need to worry
about cannibalising existing product margins. LLU also heralded a much needed phase of Broadband innovation. The deployment of new
products and technologies was led by the new LLU operators12. Examples include SDSL, ADSL2plus, double-play (voice+broadband) and
multicast for IPTV. The vibrant network competition (and associated UK consumer benefits) that we witnessed during the “pure copper”
age of Broadband is declining. As we enter the next era of Broadband, the UK is on a trajectory to less active network infrastructure
competition13 and has become more of a duopoly. FTTP products have been announced by BT with headline speeds of 330 Mbit/s to
compete against Virgin Media’s DOCSIS 3.0 headline speeds. However, the limited geographic availability and high connection charges
mean such products are unlikely to pave the way in replacing the legacy copper infrastructure. FTTP is one of the few access technologies14
that can help to de-couple speed from distance to the exchange, cabinet or mast1. It should also be noted that usage-based charging for
the last mile of NGA infrastructure (FTTC and FTTP products) could have implications for user, and application, behaviour (i.e. watching the
“meter” clock up costs and hence inhibiting use).
[Figure: GPON today: OLT in the local exchange, 1:64 split, max 64 homes per fibre; 2.5 Gbit/s downstream shared in time-slots on a single colour (wavelength); max user speed 100 Mbps or 1 Gbps depending on the ONT modem; technology deployed in Vodafone PT, IT, ES and QA. NG-PON2: same 1:64 split but 4 (or 8) colours (wavelengths) giving 40 Gbit/s downstream (4 wavelengths of 10 Gbit/s each), with different homes served on different wavelengths; max user speed 1 Gbit/s or 10 Gbit/s depending on the ONT modem]
Figure 3.3: Evolution of PON FTTP technology
The figure above illustrates how today’s TDM-based GPON technology is evolving to Next Generation PON2 technology by the use of an
additional 4 to 8 optical wavelengths to increase access speeds. WDM-PON technology is a further step beyond NG-PON2 with the potential
to make use of several hundred different wavelengths that can offer 1 Gbit/s on a dedicated wavelength per premises, thus ensuring traffic
isolation and avoiding congestion in the access network. Such technology is technically proven15 and lauded by various policy organisations
(Ofcom, EC, BEREC, ECTA) see16,17,18,19,20. NG-PON2 and WDM-PON are in the process of standardisation activity and could facilitate
“wavelength unbundling”21. Passive Infrastructure Access (PIA) to ducts and poles has been a failure because the barriers to entry and
ancillary costs have been considered too high by potential users.
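The contention arithmetic behind these PON generations can be checked quickly (using the headline rates quoted above; real GPON schedulers allocate capacity dynamically, so the worst-case figure below is a floor, not a typical speed).

```python
# Contention arithmetic for PON generations, using the headline rates from
# the text above. Real dynamic bandwidth allocation does much better than
# this naive worst case, which assumes all homes pull at once.
GPON_DOWNSTREAM_GBPS = 2.5   # single wavelength shared by the whole split
SPLIT = 64                   # max 64 homes per fibre

worst_case_share_mbps = GPON_DOWNSTREAM_GBPS * 1000 / SPLIT
print(round(worst_case_share_mbps, 1))   # ~39.1 Mbit/s each if all 64 homes are active

# NG-PON2: 4 wavelengths of 10 Gbit/s over the same split
ngpon2_total_gbps = 4 * 10
print(ngpon2_total_gbps)                 # 40 Gbit/s shared downstream

# WDM-PON: a dedicated wavelength per premises (e.g. 1 Gbit/s each) removes
# contention in the access segment entirely, isolating each home's traffic.
wdm_pon_per_home_gbps = 1.0
```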
Mobile technology is set to increase access speeds as we move beyond today’s 4G (LTE) technology to LTE Advanced (LTE-A). LTE-A
introduces the concept of Intra-Site and Inter-Site Coordinated Multi-Point transmission/reception (CoMP). CoMP uses multiple cells22
(including small cells) to communicate with a user’s mobile device in order to maximise effective use of radio spectrum and hence increase
bandwidth, particularly for users at the cell-edge.
[Figure: several eNBs (LTE base stations) co-ordinating transmissions to a single user device at successive times t0, t1 and t2]
Figure 3.4: Co-ordinated multi-point processing (CoMP)
CoMP is considered by 3GPP23 as a key tool to improve coverage, cell-edge throughput and system efficiency. Joint Processing (JP) for
instance allows the user device to pool cell site resources. CoMP includes techniques such as co-ordinated scheduling and co-ordinated
beam forming. These techniques require inter-base station co-ordination in real-time and hence necessitate extremely low communications
latency. CoMP in LTE-A may improve throughput per LTE user by ~30%, but this comes at increased backhaul cost, since it requires inter-site
latencies below 5 ms.
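A rough latency budget shows why fibre paths can meet such a co-ordination target (the propagation figure is a standard physical approximation, not from this paper): light in fibre travels at roughly two-thirds of c, about 5 microseconds per kilometre, so propagation over even a 20 km run consumes only a small slice of a 5 ms budget.

```python
# Illustrative fronthaul latency budget. The ~5 us/km figure follows from
# light travelling at roughly 2/3 c in fibre (refractive index ~1.5).
PROP_US_PER_KM = 5.0

def one_way_delay_ms(km):
    """Propagation-only one-way delay over a fibre run of the given length."""
    return km * PROP_US_PER_KM / 1000.0

print(one_way_delay_ms(20))   # 0.1 ms of propagation over a 20 km fronthaul run
# The rest of the budget goes on queueing, serialisation and switching;
# copper/VDSL2 paths (interleaving, retransmission delays) blow the budget,
# while a lightly loaded fibre path leaves ample headroom.
```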
The figure below illustrates the performance loss for CoMP as the latency over the X2 interface (between base-stations in LTE) increases.
[Figure: user throughput loss relative to zero X2 delay (0% to -60%) plotted against X2 delay from 1 ms to 20 ms, shown for the 95%ile peak, 50%ile median and 5%ile cell-edge user; losses deepen as delay grows]
Figure 3.5: Impact of LTE X2 delay on user throughput (vs CoMP ideal)
Source: Cambridge Broadband Networks
VDSL2 is just not able to meet these latency requirements, but FTTP can. In addition, LTE-A requires bandwidths in excess of that which
can be delivered by VDSL2 or today’s manifestation of FTTP (GPON). It requires evolution to the latest FTTP technologies (such as WDM-PON). Hence a competitive fixed-access network infrastructure market to drive deployment of new FTTP technologies could help UK mobile
infrastructure evolution too. The size of the build required to deploy fibre to UK premises is far bigger than previous generations of wireline broadband. For UK national broadband coverage, deployment requires the following infrastructure activities:
- ADSL2plus (up to 24 Mbit/s): requires equipment in ~5k exchanges
- FTTC/VDSL2 (up to 80 Mbit/s): requires equipment in ~88k street cabinets
- FTTP (up to 300 Mbit/s): requires digging to >20M homes
Hence the amount of time and effort to deploy FTTP is much greater. A competitive infrastructure market with multiple companies operating
in parallel offers the prospect of accelerated build.
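Simple build-rate arithmetic illustrates the point (the team counts and per-team rates below are invented; only the >20M homes scale comes from the text above): elapsed build time scales inversely with the number of teams digging in parallel.

```python
# Hypothetical build-rate arithmetic. Only the >20M homes figure comes from
# the text; team counts and per-team pass rates are invented assumptions.
HOMES_TO_PASS = 20_000_000

def years_to_build(teams, homes_per_team_per_week=500, weeks_per_year=48):
    """Elapsed years to pass every premises, with teams working in parallel."""
    return HOMES_TO_PASS / (teams * homes_per_team_per_week * weeks_per_year)

print(round(years_to_build(100), 1))   # ~8.3 years for a single large programme
print(round(years_to_build(300), 1))   # ~2.8 years with several operators building in parallel
```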
The policy issues associated with technology evolution in this area include:
- Policy Issue: How do we avoid a monopoly access infrastructure position so that there is competitive pressure to deploy FTTP technology on a wide scale to future-proof the UK?
- Policy Issue: If we seek to move away from a monopoly situation for Next Generation Access (NGA), how do we avoid the inefficiencies of the patchwork quilt approach witnessed in the early days of UK cable franchises (using different technologies and standards)? How can operators be incentivised or compelled to follow appropriate national and global standards to maximise interoperability across different infrastructures for users?
KEY ENABLER: Fibre support network for an untethered fibre-wireless world
Technical Objective:
To ensure fibre back-haul is available at ever more locations, at the right time, at the right price and with a low latency specification.
Policy Means:
Regulation of fixed broadband infrastructure providers (e.g. Openreach) needs to be brought into line with a future in which fibre-rich
fixed broadband networks essentially gate the future of wireless networks (e.g. technical standards etc.).
Exchange buildings
The evolution of mobile base-station technology now allows smaller and more efficient radio units. These new compact radio heads use
lower power as they are deployed closer to the antennas. The radio head is connected by optical fibre to a “baseband unit”. This use of
optic fibre means that the baseband unit no longer has to be close to the cell site tower. The introduction of techniques such as CoMP
has resulted in interest in the “Cloud-RAN” concept whereby the radio head on the cell tower is connected over up to about 20 km of fibre
(mobile “fronthaul”) to a pool of base-station processing capacity in a “base-station hotel”. This could be in a Data Centre facility but a
better location is the local exchange. This could also house routing equipment to facilitate local traffic routing with low latency.
Space and power in exchange buildings have been used by competing operators to locate their LLU broadband equipment for several years
now. As we move into a more video-rich era, in addition to using techniques such as multicast, operators will seek to locate video caches
closer to the end-user24. This reduces traffic on the aggregation and core networks (including expensive Internet transit) and improves the
user experience through lower latency (faster response to the remote-control button).
Cloud computing has grown rapidly over the last two years. Virtualisation is a technique used to dynamically allocate a “compute” resource
(processing and storage) on-demand. Many Cloud service implementations include the ability to move compute workloads between data
centres to more flexibly allocate capacity or provide resilience. Similar virtualisation techniques are now available on network equipment
such as routers and Software Defined Network (SDN) switches. SDN is an architectural concept that encompasses the programmability
of multiple network layers (including management, network services, control, forwarding and transport planes) to optimize the use of
network resources, increase network agility, extract business intelligence and ultimately enable dynamic, service-driven virtual networks.
NFV (Network Function Virtualization) aims to leverage standard IT virtualisation technology to consolidate many network equipment types
onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes (such as local
exchanges) and in the end user premises. NFV is applicable to any data plane packet processing and control plane function in fixed and
mobile network infrastructures. SDN and NFV are intended to facilitate service innovation and accelerate service time-to-market. They could
Demand Attentive Networks
A paper provided by The Institution of Engineering and Technology
© The IET 2013
www.theiet.org/factfiles
10
also change established industry business models for network equipment vendors, perhaps encouraging new entrants into the market.
A consequence of SDN and NFV is that we are entering an era that avoids the historical dilemma of whether to deploy a centralised or
a distributed architecture for a particular communications capability. Functionality can now be moved during the lifecycle of the service
depending on where the geographic demand and capacity requirements arise. However, latency is once again key. The ability to distribute
functionality closer to users (e.g. to a router in a local exchange) will radically improve network agility to meet the most onerous latency
requirements.
Exchange buildings are useful assets providing a network node site at which to locate distributed video caches and local processing/routing
functionality in order to improve responsiveness for users. Conversely, the development of long-range fibre access transmission systems
means that it is feasible to merge the traditional access and aggregation networks and bypass exchange buildings.
„„ Policy Issue: Exchange buildings offer an opportunity to locate equipment such as video caches, mobile base-band units and routers
in order to develop distributed network architectures. How do we reconcile any exchange closure programme with the risk of stranding
existing assets and foregoing future network optimisation opportunities?
Today there are of the order of 5000 local exchange locations in the UK. There would appear to be a compromise which balances the
aforementioned policy issue: ring-fence the top 1500 to 2000 sites (including all Openreach NGA handover exchanges), thus
enabling significant exchange rationalisation while maintaining the ability to deploy equipment deep into the network infrastructure and
reducing the risk of stranded assets. Access to last-mile connectivity outside of these ring-fenced handover sites could be via the long-line
parent/child exchange25 approach already used for NGA interconnect (anticipated to be ~1250 exchanges i.e. ~25%).
Technology developments now enable Network Function Virtualisation (NFV) and the ability to implement packet processing functionality
on generalised compute platforms. In order to enable flexible use of evolving technology, space and power in the exchange need to be
charged for purely on the basis of kWh and m2 or m3, independent of equipment functionality, market application or external interconnect.
Energy costs and the carbon tax
Another significant rising cost with increasing numbers of base stations is electricity. Electricity prices are set to rise, driven by the carbon
tax. The mobile industry, like any other industry, needs to be driven towards greater energy efficiency. But one way mobile operators
could reduce carbon emissions is to have fewer base stations, which would lead to a considerably less effective national communications
infrastructure. And we know that the alternative to communications is for people to jump in their cars. So is a more sophisticated carbon
tax needed for the mobile industry, one that brings the two national objectives of better mobile communications infrastructure and lower
carbon emissions into harmony rather than conflict?
„„ Policy Issue: As communications improves efficiency of vehicles and allows people to do things remotely, are there public policy
grounds for a more sophisticated carbon tax for the mobile industry to avoid the two national efficiency objectives being in conflict?
Poles, ducts or drains?
Fibre is actually quite cheap at around £0.10 per metre. Digging the hole to lay it in is the expensive part at ~£80-£100 per metre. Fibre
infrastructure has as much to do with civil engineering as with communications engineering. Any public infrastructure project that involves
digging is an opportunity to deploy fibre or at least ducts so that we can organically grow the amount of fibre infrastructure in the UK. This
could range from small scale local projects to large national projects like new rail infrastructure.
The huge economic barrier that civil costs presents to fibre to the premises also applies to the final drop cable. Changes to planning law
could relieve costs over time. For example such an approach was taken in Stockholm several years ago to ensure that street works installed
an appropriate kerb-side duct. With such an approach, connecting fibre to premises becomes progressively cheaper as the proportion of
“fibre ready” infrastructure increasingly permeates larger areas.
Poles are a considerably cheaper means of distributing local wires than trenching but are generally viewed as more unsightly. Access to existing
poles is a perennial policy issue, and where poles do not exist planning authorities are resistant to new ones. Getting access to gas pipes and
foul-water drains is also a challenge. Interestingly, the Caio report concluded that FTTP could be deployed to the majority of UK premises
for approximately £30 - £35bn, a figure largely driven by a lack of interest by some private monopolies in providing third-party access on
economic terms.
„„ Policy Issue: Is this an area where a bold leap of imagination by policy makers could remove a major cost barrier to fibre drop cables
being run to all UK premises?
„„ Policy Issue: How do we create an environment where every public (or indeed private) infrastructure project that involves “digging” is
leveraged as an opportunity to lay fibre and/or duct?
„„ Policy Issue: How do we increase public and government awareness of the value of investing funds in UK fibre infrastructure? For
example, why is transport getting such substantial public funding when UK-wide FTTP is not?
Big data and privacy
UK communications infrastructure has a vital role to play as we evolve towards smart cities. As an example, at the 2013 Mobile World
Congress, NSN and IBM showed a demonstration of a "City in Motion". This used radio data (citizens' locations, directions of travel, radio
conditions) to help identify population movements and flows in order to reconfigure the city's transport network in real time.
„„ Policy Issue: How do we maximise the use of data analytics and big data in our communications networks for wider social benefit
without compromising the privacy of the individual? How can we give appropriate assurances to the public?
Mobile capacity & performance
National roaming to drive up coverage
LTE uses a hybrid of three modulation schemes, 64-QAM, 16-QAM and QPSK. The first is used when a user is close to a base station and
provides very fast data speeds (very high capacity). The third is used when a user is far from the base station; it is very robust to interference
but can consume up to 20 times more cell capacity for a given throughput due to the use of a less spectrally efficient modulation method,
much higher error correction overheads and impact of adjacent cell interference. (16-QAM kicks in at intermediate distances).
In a uniform distribution of customers, many more customers will be draining excessive capacity from the cell due to being on QPSK than
will be on the much more spectrally efficient 64-QAM (applying simple πr²).
[Chart: gross data speed in Mb/s versus distance from the base station (relative to cell radius), falling from a ~150 Mb/s peak near the tower towards zero at the cell edge; the further a user is from the tower, the more cell capacity is consumed for a given speed, with interference from the next cell at the edge.]
Figure 4.1: Typical 4G fall-off in data capacity with distance from cell mast (fully loaded network)
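The πr² point above can be made concrete with a short sketch. The zone radii and capacity-cost factors below are assumed illustrative values, not figures from this paper:

```python
# Assumed zone radii and relative capacity costs (illustrative only).
# With users spread uniformly over a circular cell, the number of users
# in each modulation zone scales with the zone's area (pi * r^2).
cell_radius = 1.0
zones = {                  # zone name: (assumed outer radius, relative cost)
    "64-QAM": (0.3, 1),    # close to the mast: spectrally efficient
    "16-QAM": (0.6, 5),    # intermediate distances (cost factor assumed)
    "QPSK":   (1.0, 20),   # cell edge: up to 20x capacity per unit throughput
}

prev_r = 0.0
for name, (r, cost) in zones.items():
    area_frac = (r**2 - prev_r**2) / cell_radius**2   # fraction of users here
    print(f"{name}: {area_frac:.0%} of users, "
          f"~{area_frac * cost:.1f}x relative capacity load")
    prev_r = r
```

With these assumed zones, roughly two-thirds of uniformly spread users fall in the QPSK annulus, so edge users dominate the cell's capacity budget.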
If the cell capacity resource is allocated uniformly across the cell the impact is to create “access speed craters”. If the cell capacity resource
is allocated to give all customers the same speed, average speeds are dragged down significantly, since if capacity is expended holding up
speeds at the edge there is less capacity to share out among the remaining users in more favourable locations.
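The two allocation policies can be compared with a toy model; the per-user peak rates below are assumed purely for illustration:

```python
# Assumed per-user peak rates in Mbit/s (the rate each user would get
# with the whole cell to themselves), spanning centre to edge users.
users = [400, 400, 150, 150, 20, 20]

n = len(users)
# Policy 1: equal share of airtime -> edge users fall into "speed craters".
equal_airtime = [rate / n for rate in users]
# Policy 2: equal speed for all -> speed is set by the harmonic sum, so
# the two edge users drag everyone down to a single low figure.
equal_speed = 1 / sum(1 / rate for rate in users)

print("equal airtime speeds:", [round(s, 1) for s in equal_airtime])
print("mean of those       :", round(sum(equal_airtime) / n, 1))
print("equal-speed policy  :", round(equal_speed, 1))
```

Under these assumed numbers the equal-speed policy gives everyone less than a third of the mean achieved by equal airtime, illustrating why holding up edge speeds is so expensive.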
A factor of 20 is a big number. There is a way the mobile industry could stem this loss of capacity and that is to exploit the site diversity
between the different mobile infrastructure sharing groups. The regulatory authorities have already imposed a split of the mobile spectrum
between the mobile operators. The concept is for the mobile operators to mutually exchange their capacity draining QPSK customers for
spectrally efficient 64-QAM customers where site diversity provides this opportunity. At the cell extremes a QPSK customer is effectively
being exchanged for 20 64-QAM customers, all other things being equal. Put another way, a customer in what would have been a QPSK
zone on one mobile network sees their access speed go up by a factor of 20 when they are picked up in the 64-QAM zone of the other
mobile network, all other things being equal.
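A minimal sketch of the site-diversity swap, under assumed zone boundaries and the ~20x QPSK capacity-cost figure quoted above:

```python
# Assumed geometry: operator B's mast sits one cell-radius from A's, so a
# user at A's cell edge can be near B's cell centre. Zone boundaries and
# capacity costs are illustrative assumptions.
def modulation(dist_frac):                 # distance as fraction of cell radius
    if dist_frac < 0.3:
        return ("64-QAM", 1)
    if dist_frac < 0.6:
        return ("16-QAM", 5)
    return ("QPSK", 20)

tower_a, tower_b = 0.0, 1.0
user = 0.95                                # user at A's cell edge

served_by_a = modulation(abs(user - tower_a))
served_by_b = modulation(abs(user - tower_b))
print("A would serve with", served_by_a)   # 20x capacity cost
print("B would serve with", served_by_b)   # 1x capacity cost
```

Roaming this user onto B swaps a 20x-cost QPSK connection for a 1x-cost 64-QAM one, which is the whole of the argument in miniature.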
[Chart: gross data speed in Mb/s versus position between Operator A's tower and Operator B's tower; users in the low-speed edge zone of one network are picked up in the high-speed zone of the other (A's edge users go onto B's network and vice versa).]
Figure 4.2: Potential network capacity gain to be won from national roaming for data
The challenge is how to capture the potential gain in capacity in a way that benefits both the mobile industry and their customers. There are
two technical options. Both involve “enabling” national roaming.
Option A is for each operator to treat the mobile customers of another mobile operator as if they were from a foreign country. There would
be no cell-handover between two competing operators. Instead the customer (or more sensibly the handset of the customer) would have to
initiate the connection to the base station offering the best connection at the start of a session. What makes this approach so interesting
(apart from its simplicity) is that for the dominant use-case, where consumers access the Internet, the user is likely to be stationary (nomadic)
rather than continuously on the move (mobile). So cell handover is not of overwhelming importance.
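Option A's handset-driven selection can be sketched as follows; the operators, cell identifiers and signal levels are invented for illustration:

```python
# Hypothetical measurement report: (operator, cell id, signal in dBm).
measured = [
    ("A", "a-17", -103),
    ("A", "a-04", -97),
    ("B", "b-22", -71),
    ("B", "b-09", -88),
]

# At session start the handset simply attaches to the strongest cell,
# whichever national operator it belongs to; there is no mid-session
# handover between competing networks under Option A.
best = max(measured, key=lambda cell: cell[2])
print(f"Attach to operator {best[0]}, cell {best[1]} at {best[2]} dBm")
```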
Option B is for the network itself to move customers in real time to the base station providing the best connection. This picks up the
remaining use-cases as well as capturing trunking gain. But it is complex to implement (huge signalling load implications). It was recently
done in the merging of the Orange and T-Mobile networks in the UK but took the best part of three years to do. So whilst it is technically
feasible there must be question marks over whether, at least in the short term, the extra gain is worth the pain.
There is then a second challenge. What would be the best commercial and regulatory framework for either option? Roaming is a
sensitive regulatory issue so again the objective has to be to find a solution that is of demonstrable benefit to the mobile operators as
well as consumers. It also has to be shown to sustain competition rather than diminish it. These considerations point in the direction of a
“wholesale framework” effectively run by the mobile operators themselves but following pro-competitive principles. In this approach the
roaming charges are netted out between the mobile operators. The balance of payment will flow to the mobile operator providing the best
coverage and capacity...exactly the attributes network competition aims to promote. It would be particularly beneficial to consumers in rural
areas where often users provided with mobiles for their job find coverage of their home from the chosen mobile operator to be poor (and
vice versa). The industry gains in other respects for example the regulatory imposed caps and floors on spectrum above and below 1 GHz
becomes unnecessary as the spectrum is essentially following the customer irrespective of the mobile operator – reflecting well the Demand
Attentive philosophy.
National roaming at the wholesale level offers a step change in wide area wireless performance (capacity and speed) whilst at the same
time actually providing cost savings for mobile operators. Geographical coverage from two networks could minimise or completely remove
the cell-edge capacity problem in capacity-limited environments, depending on the spatial separation of the two base-station sites. In rural
environments, where the network is coverage limited, national roaming helps provide cost-effective coverage.
„„ Policy Issue: How do we move to a version of national roaming that benefits mobile operators and their customers and is pro-competitive?
National Roaming is likely to be controversial but from a pure engineering perspective makes a lot of sense (especially outside of dense
urban areas). The point of the DAN initiative is to stimulate regulatory/policy/commercial enablers to unlock this technical potential or
provoke proposals for substitute solutions with clearly superior performance/cost. For example, if the over-the-top players ran their MVNO
networks on two incumbent mobile networks they would be able to enable national roaming from the handset and secure a huge
competitive advantage for data speeds over the incumbent mobile operators. Would the regulators allow the incumbent operators to
switch on national roaming to be able to compete?
Denser mobile networks
Improvements in mobile technology and standards (such as CoMP mentioned previously) are intended to squeeze the maximum bandwidth
from the radio spectrum. However, numerous studies have shown that the most significant enhancement to mobile capacity will be via the
deployment of small cells (femto, pico, micro, metro) resulting in a heterogeneous mobile network or Het Net.
Macrocells - wide area coverage and mobility
Enterprise picocells
Domestic femtocells
Outdoor microcells
Figure 4.3: Small cell examples in a het net
Source: Small Cell Forum
Wide-scale deployment of small cells has two key dependencies: cost-effective backhaul and the ability to mount small-cell antennas
on infrastructure. The backhaul issue will be partially mitigated by addressing the previous policy issue of creating a competitive wireline
broadband infrastructure regime that incentivises the deployment of FTTP26. Creating the appropriate conditions for access to physical
infrastructure in order to build small cell networks is extremely challenging. This is partly because of the wide-spread ownership (private
and public) of the physical infrastructure, complexity of planning regulations and “greedy landlord” syndrome. The government has
already responded well to the planning issues with its new guidelines exempting certain sizes of small antennas mounted on buildings
from the need for planning permission. It is worth monitoring how this works out in practice, as it critically affects the efficient
"industrialisation" of micro and pico cell deployment.
As an example, access to lamp posts indirectly facilitates improved mobile network bandwidth if they can be economically used for siting
small cells. However, some local authorities have outsourced the provision of lamp posts and the private sector company that subsequently
owns them wants to charge extortionate rents. With high rents, no operator is going to invest in high density wireless networks of the
future27. In a west London borough, one mobile operator has paid a significant fee for access to infrastructure. Unfortunately other councils
are now seeing this precedent as a benchmark and it has skewed the market. This is a potential set-back for UK small cell deployment
since it increases the costs in any business case.
Figure 4.4: Example of Small Cells on Lamp Posts
„„ Policy Issue: How can we create a planning regime in the UK that facilitates cost-effective, widespread access to public and private
infrastructure for operators wishing to deploy small cells to enhance mobile network bandwidth?
It should be noted that there is a limit to the capacity enhancement achievable by dense cell deployment. The limiting factor is inter-cell
interference, which again depends on radio propagation at a given frequency. In addition to the deployment of small cells, the requirement for
universal, untethered, high-speed broadband is also being met by the deployment of WiFi hotspots. There are two fundamental strategies
for operators to leverage small cells and WiFi hotspots. Mobile operators will have an “outside in” approach which leverages their national
mobile network coverage and adds in-building small-cells and Distributed Antenna Systems (DAS) in addition to use of external small cells
in traffic hotspots. Fixed network operators may choose an “inside out” strategy where they leverage their existing broadband and WiFi
hot-spot assets and then add a roaming partner and perhaps some limited mobile spectrum for indoor and targeted external mobile access.
Increasingly small-cell vendors are producing integrated equipment that combines 3G and 4G small-cell functionality with WiFi hot-spot
functionality in a single unit. This gives operators increased flexibility in using the most appropriate technology in order to facilitate data
off-load from mobile frequency bands and also to apply policy-based traffic steering including load-balancing across all available spectrum.
Some mobile operators have even developed their own Over The Top (OTT) applications (including voice and text) that can use unlicensed
WiFi spectrum instead of being constrained to mobile spectrum.
Many of today’s WiFi hotspots are not “public” and so a user would have to subscribe to multiple organisations in order to take advantage
of all the UK-wide WiFi hotspot (and small cell) infrastructure deployments. The situation is not helped by the current lack of a standardised
approach to handover between mobile network small cells and 3rd-party WiFi hotspots. However, the good news is that industry is
addressing these challenges via a plethora of standards initiatives: Passpoint (Hot Spot 2.0) allows the same credentials to authenticate WiFi
and mobile access. ANDSF (Access Network Discovery and Selection Function) informs mobile users of usable WiFi networks and applies
policy to their mobile handset’s bearer selection. The “SaMOG” (S2a Mobility) study item in 3GPP defines the interworking between the
mobile packet core and a trusted WLAN access network. This enables mobile network operators to manipulate and bill for traffic whether it
is over LTE or WiFi. 3GPP Release 11.0 will be a big step forward in terms of facilitating smooth mobility between WiFi and mobile networks.
„„ Policy Issue: UK WiFi hot-spots and small cells are being deployed by multiple operators in a fragmented manner. What standards,
interconnect and roaming regimes are required to maximise the effective population coverage and usability of such infrastructure?
KEY ENABLER: Denser Mobile Networks
Technical Objective:
To see a steady but relentless expansion of small-cells to increase urban network capacity.
Policy Means:
Lamp posts and other public structures have to be made available on fair terms for small-cells. Mobile operators need to have the
regulatory incentives to sustain cell-splitting down to small cells. We need suitable co-ordination structures to resolve the contention
issues for the finite resources: real-estate, local spectrum etc
Squeezing more out of usable radio spectrum
Radio spectrum is a finite and scarce resource that must be used as efficiently as possible. The channel sizes and operating bandwidth of
RF components and systems are generally in proportion to the carrier frequency at which they operate. At higher frequencies there are more
Hz of bandwidth available in proportion to the carrier.
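One way to see this proportionality: if RF systems support a roughly constant fractional bandwidth, the absolute Hz on offer grow linearly with the carrier. The 10% figure below is an assumed illustrative value:

```python
# Assumed: RF systems support a roughly constant fractional bandwidth,
# so the absolute Hz available grow linearly with carrier frequency.
fractional_bw = 0.10                      # 10% is an illustrative value
carriers_ghz = (0.9, 3.5, 28, 70)

usable_mhz = [fractional_bw * f * 1000 for f in carriers_ghz]
for f, bw in zip(carriers_ghz, usable_mhz):
    print(f"{f:5g} GHz carrier -> ~{bw:.0f} MHz usable")
```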
The figure below shows how this is manifest in spectrum allocations in the UK. Wider bandwidths are available at higher carrier frequencies.
The figure also illustrates three broad ranges of carrier frequencies which are used to categorise solutions: sub-6 GHz, microwave (6-60
GHz) and millimetre wave (60-80 GHz). The types of licensing regime available in these different ranges are also shown28.
[Chart: total MHz bandwidth (UL+DL, UK example, logarithmic scale) for each allocation from 800 MHz up to 80 GHz, grouped into sub-6 GHz (1.54 GHz in total, licensed and unlicensed fixed and mobile), microwave (16.4 GHz in total for terrestrial fixed links) and millimetre wave (18.3 GHz in total), with licensing regimes ranging from unlicensed and light licensed to link licensed (PTP) and area licensed (PMP).]
Figure 4.5: Spectrum allocation for terrestrial services in the UK
Source: OFCOM, Nov 2011
New techniques that encourage the more effective use of spectrum such as improved transmission techniques and agile sharing
methodologies are continuously emerging. A few examples of these are shown below:
[Diagram: a microwave hop between Site A and Site B using two antennas per site, separated by distance D, transmitting on the same frequency f with horizontal (H) and vertical (V) polarisation.]
Figure 4.6: Example of MIMO transmission (microwave)
MIMO (Multiple Input, Multiple Output) techniques (and also polarisation) can double spectrum efficiency by transmitting and receiving on
the same frequency on two antennas. MIMO requires an optimal antenna separation (D) which depends on the frequency of operation (f).
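A commonly quoted optimum for a 2x2 line-of-sight MIMO link with equal antenna spacing at both ends is D = sqrt(lambda x R / 2), where R is the hop length. A short sketch under that assumption (the hop lengths and the 18 GHz band are example choices):

```python
import math

# Commonly quoted optimum for a 2x2 line-of-sight MIMO hop with equal
# antenna spacing at both ends: D = sqrt(wavelength * R / 2).
C = 3e8  # speed of light in m/s

def optimal_separation(freq_hz, hop_m):
    wavelength = C / freq_hz
    return math.sqrt(wavelength * hop_m / 2)

for hop_km in (1, 5, 10):                        # example hop lengths
    d = optimal_separation(18e9, hop_km * 1000)  # 18 GHz microwave band
    print(f"{hop_km:2d} km hop at 18 GHz -> D = {d:.2f} m")
```

Separations of a few metres are practical on a tower, which is why line-of-sight MIMO is viable for fixed microwave links at these frequencies.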
[Chart: link capacity versus availability under ACM: 400 Mbit/s @ 256QAM at 99.9% availability; 340 Mbit/s @ 128QAM at 99.99%; 290 Mbit/s @ 64QAM at 99.992%; 100 Mbit/s @ 4QAM at 99.999%; 80 Mbit/s @ 4QAM-strong (56 MHz) at 99.9995%.]
Figure 4.7: Adaptive code modulation (ACM)
Adaptive Code Modulation (ACM) has been used in numerous generations of mobile and wireline transmission systems. It is now being
increasingly used in modern fixed radio systems too (such as microwave and millimetre wave systems). Until recently radios have usually
offered a fixed bandwidth of x Mbit/s at an availability of say 99.99%. This relies on the “fade margin” for that availability which is gated
by the regulatory power limits set by Ofcom. Previously, when the radio faded below this level due to weather it “dropped out”. Now, by
using ACM it can revert to a slower more resilient coding scheme so some of the traffic can still get through thus improving the availability
for a sub-set of the traffic. However, note that there are implications in using such technology as part of a critical national infrastructure.
Correlation with bad weather is not a good property in such systems!
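The fallback behaviour can be sketched as a walk down a profile table. The SNR thresholds below are assumed; the capacities follow the Figure 4.7 example values:

```python
# Assumed SNR thresholds; the rates follow the Figure 4.7 example values.
profiles = [   # (modulation, Mbit/s, minimum SNR in dB) - fastest first
    ("256QAM", 400, 30),
    ("128QAM", 340, 27),
    ("64QAM",  290, 24),
    ("4QAM",   100, 12),
    ("4QAM-strong", 80, 9),
]

def select_profile(snr_db):
    """Step down to the fastest modulation the faded link can sustain."""
    for name, rate, min_snr in profiles:
        if snr_db >= min_snr:
            return name, rate
    return None        # below even 4QAM-strong: the link drops out

print(select_profile(32))   # clear weather: full-rate modulation
print(select_profile(15))   # heavy rain fade: robust fallback
```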
Inter-Cell Interference Co-ordination (ICIC) and its enhanced version (eICIC) have been developed for advanced LTE and HetNet
deployments to manage the interference between macro and small cells in order to make more efficient use of mobile spectrum. These
techniques are illustrated below:
[Diagram: ICIC coordinates interference in the frequency domain (frequency reuse factor 1 versus 3), while eICIC works in the time domain: the macro cell "lends" Almost Blank Subframes (ABS), which carry only pilot references (no traffic or signalling), so a small cell located at the macro cell border or in a blind spot can schedule its edge users with only minor interference.]
Figure 4.8: Illustration of ICIC and eICIC
Source: BeYoung-Jin Choi KT, SWCS June 2013
The aforementioned technical optimisation approaches are of little use if the regulatory regime does not evolve in parallel to enable their
use. This does not just apply to spectrum for mobile services but also to spectrum for "fixed" services, since the latter is used for macro-cell
backhaul and will become increasingly important for small-cell backhaul.
In addition, spectrum is required for M2M communications. Some M2M applications can use low bit-rate mobile data approaches such as
GPRS while some require the higher data rates available with LTE. Potential market entrants are interested in offering low data rate M2M
services and are investigating new radio transmission systems using technologies optimised for low power consumption and hence longer
battery life or resilience. Examples include mesh radio, and some aim to use white space radio frequencies. Global markets for M2M devices
are needed to reduce price points towards a sub-£1.50 target level. Hence international standardisation and, possibly, harmonisation of new
spectrum will be key to expediting the growth of the M2M market.
White space radio27 typically uses TV frequencies that are locally unused for TV transmission (in order to avoid interference to neighbouring
TV transmitter areas). Typically 10-20 unused TV channels are available in most locations. It uses the UHF TV bands (470-790 MHz), which
have excellent radio propagation (cell sizes of typically 4 to 10 km for broadband). This frequency range also penetrates buildings well, and
hence can reach conveniently placed receivers. Consequently, white space radio offers high coverage, including in rural areas. White space
spectrum provides up to 150 MHz available for free. White space radio is flexible and can serve a low number of high bit-rate users (such as
rural broadband) or a high number of low bit-rate users (such as smart meters). More than one white space TV channel can be used to
achieve higher speeds, e.g. use of two can achieve the 30 Mbit/s speeds required in European Commission NGA targets for 2020.
The available white space spectrum can change in less than 1 hour. Hence white space radio transmission systems need to regularly
consult central frequency planning databases and re-tune/disable themselves accordingly.
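The consult-and-retune loop might look like the following sketch. The database interface and its contents here are hypothetical; real deployments consult a qualified geolocation database for cleared channels:

```python
# Hypothetical database interface and contents, for illustration only;
# real devices consult a qualified geolocation database for cleared channels.
def query_database(lat, lon):
    cleared = {(52.2, 0.1): [23, 26, 29, 41]}   # assumed cleared channels
    return cleared.get((lat, lon), [])

def retune(current_channel, lat, lon):
    allowed = query_database(lat, lon)
    if current_channel in allowed:
        return current_channel                  # still cleared: keep going
    return allowed[0] if allowed else None      # retune, or disable (None)

print(retune(29, 52.2, 0.1))   # channel still available: stays on 29
print(retune(35, 52.2, 0.1))   # channel withdrawn: retunes to 23
```

Because the cleared-channel list can change within the hour, a real device would run this check on a timer and disable transmission entirely when no channel is returned.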
[Chart: measured (aligned and cross polarisation) and predicted received power in dBm for each UHF TV channel at a single measurement location (Spittles interchange), showing which channels are occupied by surrounding transmitters, from Sandy Heath (22 km) out to Belmont (112 km), and which are locally free for white space use.]
Figure 4.9: White space spectrum example
Source: Neul
Many rural areas are served by a single mobile operator. Yet in spite of a mobile operator being the only one in an area, the spectrum they
use is limited to what they own. It may only be a quarter of the usable spectrum. The rest sits there inaccessible. There would be a huge
gain of capacity and speeds for rural mobile users if mobile operators were permitted to pool their rural spectrum so the sole operator in a
particular location had access to all the spectrum. For example, if a mobile operator in a particular location only had 10 MHz of spectrum at
800 MHz, the gross data speed that could be provided might be in the region of 50 Mbit/s to be shared between all users. If that operator had
access to the full 30 MHz of spectrum at 800 MHz they could add extra LTE transmitters to provide 150 Mbit/s, i.e. three times the capacity,
with proportionate increases in data speeds. Operators would not lose ownership rights under this reform but would remain free to move into
a rural area, giving reasonable notice to the existing mobile operator to relinquish use of the borrowed spectrum.
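The arithmetic in this example, assuming gross data speed scales linearly with bandwidth (~5 Mbit/s per MHz in this 800 MHz illustration):

```python
# Assumed: gross data speed scales linearly with spectrum held.
speed_per_mhz = 50 / 10            # ~5 Mbit/s per MHz, from the 10 MHz example

own_speed = speed_per_mhz * 10     # operator's own 10 MHz holding
pooled_speed = speed_per_mhz * 30  # full 30 MHz of 800 MHz spectrum pooled

print(f"own: {own_speed:.0f} Mbit/s, pooled: {pooled_speed:.0f} Mbit/s "
      f"({pooled_speed / own_speed:.0f}x capacity)")
```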
KEY ENABLER: Squeezing more out of usable Radio Spectrum
Technical Objective:
Squeeze more out of the usable radio spectrum. Spectrum needs to follow the customer.
Policy Means:
Licence obligations for new spectrum need to be linked to the Demand Attentive Network agenda. Rural spectrum could be pooled
so the sole operator in a particular location has access to all the low frequency spectrum and hence offer more equitable service
compared to urban areas.
Coverage gap filling
Network infrastructure deployment faces a more challenging business case in certain geographies, especially in more rural areas where
population densities are lower. The potential to amortise costs over the aggregated demand from at least a “break-even” number of
customers is much lower. Whilst the percentage growth predictions for 4G service demand is high in rural and urban areas alike, the
incremental absolute numbers are significantly lower in rural areas yet some fixed costs like backhaul are higher. Hence there is little
incentive for MNOs or fixed network operators to invest without public funds being used to reduce the financial exposure.
Other non-rural coverage gaps also exist e.g. along certain sections of railway and road infrastructure. These should also be considered
targets for action.
Demand Attentive Networks
A paper provided by The Institution of Engineering and Technology
© The IET 2013
www.theiet.org/factfiles
The diagram below illustrates NGA build status in the UK, clearly showing that the majority of NGA not-spots are in rural areas.
Figure 4.10: NGA coverage in the UK 2012
Source: OFCOM, 2012
Similar rural area infrastructure challenges are observed when we look at mobile coverage:
Figure 4.11: Average number of 3G operators by geotype (urban, semi-urban and rural)
Source: OFCOM, 2012
KEY ENABLER: Coverage gap filling
Technical Objective:
To fill in all the not-spots and push out national coverage, not just to deliver a minimum signal but to deliver higher data speeds to the end-user.
Policy Means:
All networks entail a degree of cross-subsidy between areas of high demand and areas of lower demand. Regulation needs to shift the investment balance in favour of more coverage and fewer not-spots. Thereafter it becomes a matter of public investment to fulfil political objectives of sustaining rural economies, balancing investment across the nations, security (emergency services usage) etc.
Planning for resilience
Users are increasingly dependent on having Internet connectivity at all times. Internet connectivity is not quite at the top of Maslow’s
hierarchy of human needs. However, it is certainly considered a basic necessity as the following recent survey results show:
1. Internet Connection
2. Television
3. A cuddle
4. A trustworthy friend
5. Daily shower
6. Central heating
7. Cup of tea
8. An ‘I love you’ every now and then
9. A solid marriage
10. Car
11. Spectacles
12. Coffee
13. Chocolate
14. Night in on the sofa
15. Glass of wine
16. A good cry every now and then
17. A full English breakfast
18. A foreign holiday
19. iPhone
20. A pint
Table 4.12: Top 20 bare necessities of life
Source: Disney Survey, Metro June 2013
As users increasingly depend on mobile devices (personal and M2M), the resilience of the underlying network connectivity infrastructure
becomes paramount. The reliability target implications (in terms of outage time) are illustrated below:
Availability   Nines     Downtime per week   Downtime per month   Downtime per year
90%            1 nine    16.8 hrs            72 hrs               36.5 days
99%            2 nines   1.68 hrs            7.20 hrs             3.65 days
99.9%          3 nines   10.1 mins           43.2 mins            8.76 hrs
99.99%         4 nines   1.01 mins           4.32 mins            52.56 mins
99.999%        5 nines   6.05 secs           25.9 secs            5.26 mins
99.9999%       6 nines   0.605 secs          2.59 secs            31.5 secs
Note: months based on 30 days and non-leap year
Figure 4.13: Reliability target implications
Source: Gordon Mansfield, AT&T, SWCS, June 2013
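The downtime figures in the table follow directly from the availability percentage. A minimal sketch, using the table's own conventions (months of 30 days, a non-leap year):

```python
# Derive downtime budgets from an availability percentage, matching the
# reliability table above (months taken as 30 days, years as 365 days).
def downtime_seconds(availability_pct: float, period_seconds: float) -> float:
    return (1 - availability_pct / 100) * period_seconds

WEEK = 7 * 24 * 3600
MONTH = 30 * 24 * 3600
YEAR = 365 * 24 * 3600

for nines, pct in [(2, 99.0), (3, 99.9), (4, 99.99), (5, 99.999)]:
    wk = downtime_seconds(pct, WEEK)
    yr = downtime_seconds(pct, YEAR)
    print(f"{nines} nines: {wk / 60:7.2f} min/week, {yr / 60:9.2f} min/year")
```

Note that at three nines the annual budget works out at 8.76 hours, and at five nines at 5.26 minutes, matching the table.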
The root cause of the sorts of issues that have resulted in UK network outages have been collated by Ofcom for both fixed and mobile
networks and are shown below:
Figure 4.14: Root cause of UK network outages (fixed & mobile) 2012, showing percentages of outages by cause: power outage, hardware failure, software failure (including switching/routing errors), software fault, malicious attack (physical, including theft), other third party failure and other natural disaster.
Source: OFCOM/Operators
Note that hardware failures may be random (Bernoulli distribution) but thefts are not: they can be co-ordinated to defeat any redundancy. In the multi-service converged network world it is now time to consider how to construct networks with differing levels of resilience for different communications services.
The Demand Attentive Network principle is particularly relevant when local disasters occur. More members of the public would get their
communications needs met if messaging took priority over voice when surges in demand driven by a local disaster overwhelmed the
capacity of mobile networks.
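The arithmetic behind messaging-first rationing is stark. The figures below are illustrative assumptions (a nominal cell capacity and a typical AMR voice codec rate), not measured network values, but they show the order-of-magnitude difference:

```python
# Back-of-envelope sketch of why prioritising messaging over voice serves
# more people in a local disaster. All figures are assumptions.
CELL_CAPACITY_BPS = 5_000_000  # assumed usable cell capacity
VOICE_CALL_BPS = 12_200        # assumed AMR voice codec rate, held continuously
SMS_BITS = 140 * 8             # one SMS payload (140 bytes), sent once

concurrent_calls = CELL_CAPACITY_BPS // VOICE_CALL_BPS
messages_per_second = CELL_CAPACITY_BPS // SMS_BITS

print(f"concurrent voice calls supported: {concurrent_calls}")
print(f"messages deliverable per second:  {messages_per_second}")
```

Under these assumptions a cell that saturates at a few hundred simultaneous calls can carry thousands of short messages per second.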
KEY ENABLER: Planning for resilience
Technical Objective:
As more economic activity is carried on broadband networks and society becomes more dependent upon them, more investment has to go into making Demand Attentive Networks more resilient. It may also be necessary to plan for graceful degradation.
Policy Means:
In areas where local disasters occur it must be possible for wireless networks to instantly ration capacity in ways consumers and the
emergency services find most useful. National roaming should be provided for a small percentage of users deemed to be ‘important’,
e.g. emergency service personnel and M2M devices related to key infrastructure. Techniques may be needed to cope better with
power outages.
Wireless device performance
Better mobile antenna & receiver performance
International standards bodies (3GPP in particular) are doing a good job aligning basic 4G technology between networks and handsets. However, tests have shown a huge disparity in RF performance between mobiles, and as more frequency bands are brought into use (as illustrated below), the number of mobile models supporting all bands is falling.
Figure 5.1: Modern mobile handset RF band support across 700-2800 MHz, covering the digital dividend bands, 2G (GSM850/PCS1900, EGSM900/DCS1800), 3G (UMTS1900/2100, AWS1700/2100) and 4G (LTE700/7500/7900/1428/1800/1900/2100/2300/2500).
Source: www.eetimes.com
Handset    Total Isotropic Sensitivity   Total Isotropic Sensitivity
           @ 900 MHz (dBm)               @ 2100 MHz (dBm)
Vendor A   -98.4                         -99.7
Vendor A   -101.6                        -98.6
Vendor A   -98.2                         -97.5
Vendor B   -94.1                         -100.1
Vendor C   -95.1                         -98.8
Vendor C   -94.1                         -99.6
Vendor D   -94.7                         -99.9
Vendor D   -95.3                         -104
Table 5.2: Antenna variation measured across mobile handsets30
This matters because the network has to compensate for less well optimised mobile receiver designs or RF mismatches, and this eats into the network capacity available to everyone else. Individual mobile operators are able to exert less and less control over popular mobile designs as the industry consolidates down to a few global giants and the balance of volumes moves to emerging markets such as China and India.
Multicast enabled mobile handsets
Other sections of this paper set out the case for wireless devices to be multicast enabled. Unless this is implemented in the majority of wireless devices, the huge savings in network capacity when mass events are streamed cannot be realised. Consumers whose handsets are not multicast enabled, and who therefore take the streamed service as individual streams, eat into the total wireless network capacity that would otherwise be available to all the other customers using the Internet for other services and applications. So all handset suppliers have to play by the same rules to realise the full benefits of multicasting.
Best signal selection
A third area where intervention into the wireless device functioning may be essential to secure a big leap forward in wireless network
capacity and speeds is to enable efficient national roaming. Two models for national roaming have been identified. Whichever is selected
needs to be reflected in the way the handset operates to maximise the capacity gains.
EU regulatory weakness
The world has seen several decades in which regulation has tended to drive a clean separation and independence between networks and terminal devices. This made sense in the telephone age, when the network was transparent to whatever was passing across it. However, if there is to be a leap forward in wireless performance in a high-speed mobile broadband age, the handset has to be much more tightly coupled to the network. But how is this to be achieved in a relatively fragmented global industry?
There is no longer a concept of EU type approval for mobile terminals that could have achieved this tighter coupling. What has replaced it is a manufacturer's declaration of conformity of their product to the essential requirements of the EU RTTE Directive. They are free to decide whether this is done by meeting the relevant 3GPP standard or by explaining their conformity in other ways. Receiver sensitivity and antenna performance are not among the things the RTTE Directive currently allows to be an essential requirement. The EU therefore finds itself in a very weak position in addressing the issue of poorly performing smartphones, let alone in framing new requirements that could secure significant advances in network efficiency. This could result in Europe having to spend billions of euros on additional network capacity to replace the daily waste from millions of poorly designed handsets in circulation.
■ Policy Issue: Is there a case for taking some critical RF performance and other requirements into the technical specifications and making them mandatory for handsets, for the common good of minimising network capacity being drained away (wasted) by suboptimal handset design? If so, how can this be achieved when the EU has no type approval? A vital first step would be getting receiver performance (in-band and out-of-band) within the scope of the forthcoming Radio Equipment Directive currently being debated in the EU.
Content distribution
Overnight push to “trickle charge” storage
One of the great storage revolutions is occurring on the mobile phone itself, with high-end smartphones now having 64 to 80 GB of storage. Between the hours of 1am and 6am around 80% of network capacity is unused. In a personalised world there is enormous scope to anticipate what a consumer is likely to demand in the daytime and use this spare capacity to send it overnight to be stored on the consumer's own smartphone. For example, most broadcasting organisations have their video and music files pre-stored on computers, and a play-list extracts the material at the appropriate times to send over the air. One concept, then, is to send this material to subscribing consumers overnight and only send play-list instructions to the smartphone. A typical use-case would be people catching up on last night's TV on their daily commute. A benefit to consumers of playing video and audio from local storage is that their enjoyment is not disrupted when trains go through tunnels and radio signals drop. But there will only be significant benefits if this happens on a mass scale.
It may be preferable for many users to "trickle-charge" the content over their fixed broadband and home WiFi rather than over cellular, since the speed would be faster and the 'radio' capacity greater (due to the massive number of small WiFi "cells"). Mobile operators may also want or need to turn off some of their cells at night to save energy (and in some cases to meet maximum transmitted energy requirements, which are sometimes averaged over 24 hours). Hence this is another area where fixed WiFi/mobile network interworking could prove beneficial.
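The core of such a scheme is a scheduler that releases queued content only inside the off-peak window. A minimal sketch, assuming the 1am to 6am window from the text (the queue contents and field names are illustrative):

```python
# Minimal sketch of an overnight "trickle charge" scheduler: predicted
# content is queued per subscriber and released only during off-peak hours.
from datetime import time

OFF_PEAK_START = time(1, 0)  # 01:00, per the text's 1am-6am window
OFF_PEAK_END = time(6, 0)

def in_off_peak(now: time) -> bool:
    return OFF_PEAK_START <= now < OFF_PEAK_END

def pending_pushes(queue: list, now: time) -> list:
    """Return the queued items that may be pushed at this time of day."""
    if not in_off_peak(now):
        return []
    return [item for item in queue if not item["delivered"]]

queue = [
    {"title": "last night's episode", "delivered": False},
    {"title": "weekend highlights", "delivered": True},
]
print(pending_pushes(queue, time(2, 30)))  # inside window: undelivered items
print(pending_pushes(queue, time(9, 0)))   # daytime: nothing is pushed
```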
The technology already exists to add content cache capability to small cells and mobile handsets. The figure below from31 illustrates the
benefits of content deployment by caching a video on a small cell.
Figure 6.1: Impact of small cell cache on user experience, plotting network traffic (bytes/sec) over a 5-minute window from 23:37 to 23:41.
Source: Intel/Edge Datacoms
The red line shows that without preloading, it took 2.5 minutes to download a 3-minute video. During this time the video stalled twice to allow the application buffers to fill up, which would cause many users to stop watching and so represents a lost revenue opportunity. The blue line shows a content distribution solution in which the same video was pre-loaded into the small cell cache storage during off-peak demand and then sent to the user's handset in just 6 seconds. This results in a much better user experience, significant savings in handset battery consumption and more efficient radio utilisation. Similar approaches have been demonstrated caching directly to storage on the user's mobile smartphone.
The same approach can be used in the fixed network to trickle-charge Set Top Box and PVR storage devices connected to TVs in the home. TV broadcasters can often predict their top 10 most popular programmes 3 months in advance based on historical popularity in TV schedules (EastEnders, Coronation Street, X-Factor etc.). Combine this a-priori knowledge with modern content delivery systems and Smart TVs that learn individual preferences, and it becomes possible to anticipate the best content to push to households and to individuals' equipment such as mobile devices.
It is easy to focus only on downstream content delivery when thinking about using caching techniques and localised storage to ease the burden of video transport over backhaul networks. However, a local video cache (e.g. in a small cell) can also be used as a proxy upload for cloud applications. This allows user-generated content, such as photos and videos, to be uploaded to the cache and then uploaded to
cloud or social media hosting sites when the backhaul network has the available upstream capacity31.
■ Policy Issue: Do there need to be some rules of the game to give content owners confidence in copyright protection, and consumers the confidence to allow third-party access to the storage on their smartphones? Is that something for policy makers, or for the industry to agree collectively?
KEY ENABLER: Overnight Push to Mobile Storage
Technical Objective:
Use all the otherwise wasted network night time capacity to push “predictable” content to the storage on PCs, smartphones and other
customer devices to deliver a de facto capacity increase of (25%) and better perception of speed and resilience of content delivery.
This can be extended to when devices linger close to any WiFi connections.
Policy Means:
Produce the industry technical standards. Produce code of conduct to ensure consumers consent to the content pushed and amount
of storage used.
Making use of multicast techniques
Multicasting is a well-understood technique that has been extensively deployed in fixed broadband networks to deliver video content with far lower network capacity requirements than thousands of concurrent unicast streams. Multicasting reduces the ratio of peak to mean traffic loading on the network, which can have a profound impact on reducing network capacity costs. However, 3G mobile architectures have their foundation in radio bearers and transport channels that are allocated to a single user and hence optimised for unicast services. The 3GPP standards body has defined the Multimedia Broadcast and Multicast Service (MBMS)[5], which allows IP packets to be conveyed from a single source (e.g. a broadcast video head-end) simultaneously to a group of users in a specific area. Because radio resources are scarce, the MBMS requirements emphasise their efficient use. The multicast transmission needs to be receivable over the complete cell coverage area, thus consuming considerably more power than a unicast connection to a user located in the vicinity of the cell site. MBMS may prove effective if there are sufficient (e.g. >20) users in the same cell wanting the same content at the same time. The number of events that cannot be preloaded and that people want to watch (and are authorised to watch) at the same time may be relatively low in terms of minutes per year. However, encouraging the availability of MBMS in terminals could still be helpful. Note, however, that the inefficiencies of the RF system mean that MBMS is not currently commercially viable on 3G mobile networks.
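The break-even intuition above can be sketched numerically. The broadcast cost figure below is an assumption chosen to reflect the ">20 users" threshold quoted in the text, since a broadcast bearer must reach the cell edge while a unicast stream only has to reach one user:

```python
# Sketch of the MBMS break-even trade-off: broadcast costs a fixed amount
# of radio resource (it must cover the whole cell); unicast costs scale
# with the number of users wanting the same content at the same time.
BROADCAST_COST = 20.0  # assumed cell-edge broadcast cost, in unicast-stream units

def cheaper_to_broadcast(users_wanting_same_content: int) -> bool:
    unicast_cost = float(users_wanting_same_content)  # one stream per user
    return BROADCAST_COST < unicast_cost

for n in (5, 20, 50):
    print(f"{n:3d} users -> broadcast cheaper: {cheaper_to_broadcast(n)}")
```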
This has prompted 3GPP to move to a Single Frequency Network (SFN) architecture, whereby multicast data is simultaneously transmitted
from neighbouring base stations on the same radio resources and combined in the terminal for improved coverage32. This requires all of the
cells to have 'time of day' knowledge to within a fraction of a millisecond.
Figure 6.2: Multicast distribution of mobile sessions. A single copy from the multicast source traverses the core network (GGSN and SGSNs) and each RAN (RNC); broadcast for all subscribers, or replication for each subscriber, happens only at the edge.
Source: Mark Grayson, Kevin Shatzkamer & Scott Wainner, Cisco 2009
KEY ENABLER: Multicast-enable the mobile Internet
Technical Objective:
Ensure that when millions (or any large number) of consumers want to download the same content at the same time, one data stream is used and not millions of separate data streams.
Policy Means:
“Multicast” enable mobile networks and mobile devices. This almost certainly would require multicast standards to be encouraged by
EU regulation on mobile devices.
The future of terrestrial broadcasting
The communications industry's philosophical world is divided. Some believe there is no future for sound and TV broadcasting, and that it will be replaced by each person downloading over the Internet only the material that interests them (or is selected by an intelligent agent). Others take the view that there will always be a mass audience, whether cultivated by big brands like the BBC, resulting from the human spirit to join communities (e.g. Manchester United supporters) or simple bone idleness (the TV couch potato).
The difference between these two modes of behaviour in broadband network capacity demand is massive. For example, 1 million people watching the cup final on their smartphones via one multicast iPlayer digital stream will use a tiny fraction of the network capacity required for 1 million separate iPlayer digital streams. So a future vision has to settle whether specialist broadcasting networks such as DTT/DVB and DAB will continue forever, be replaced as specialist broadcasting networks by a better generation of DTT and DAB technology, or simply be subsumed into mobile and fixed broadband networks. If the latter is the vision then mobile networks with multicasting capability are essential33. And it is not just the networks: the handsets will also need the capability. This is unlikely to happen unless it is mandated by regulation, as the global supply industry is too complex and fragmented in its focus to create the mass installed base of receivers (e.g. how many smartphones have DAB receivers built in?). The comparison of traditional broadcast delivery versus broadband network-based delivery should also encompass a consideration of relative energy costs, i.e. the relative carbon footprint of thousands of iPlayer streams on a network versus a traditional broadcast transmitter reaching the same number of people. There may be similar carbon tax issues to consider, as discussed in the section Energy Costs and the Carbon Tax on page 11.
■ Policy Issue: What is the future of sound and TV terrestrial broadcasting after DTT/DVB and DAB? Should multicasting become a mandatory feature in EU smartphone type-approval specifications?
The future of audio broadcasting technology is already confused in the UK. Operating parallel network infrastructure for both FM and DAB
transmission34 (in addition to online delivery over the Internet) raises the cost base for broadcasters, potentially to unsustainable levels.
DAB radio stations have the same frequency at all transmitters across the whole of the UK. This is possible because of the COFDM digital
transmission format, which enables the use of a “single frequency network”. This is an advantage in principle for car radios, since re-tuning
isn’t required.
In addition there is a new international DAB+ standard which has already been deployed in Australia, Italy, Malta, Switzerland and Germany.
DAB+ can improve the quality35 and spectral efficiency of digital audio transmission:
Quality          DAB           DAB+
Better than FM   192k - 256k   56k - 96k
Similar to FM    160k - 192k   40k - 64k
Worse than FM    128k - 160k   24k - 48k
"Annoying"       <128k         <24k
Table 6.1: Relative quality of DAB and DAB+ versus FM
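The spectral efficiency gain translates directly into how many stations fit in one multiplex. A rough sketch, assuming a nominal 1152 kbit/s of usable audio capacity per DAB ensemble (an approximation; actual capacity depends on the error protection level chosen) and the per-tier bitrates from Table 6.1:

```python
# How many stations of a given quality tier fit in one multiplex.
MUX_CAPACITY_KBPS = 1152  # assumed usable audio capacity of one ensemble

def stations_per_mux(bitrate_kbps: int) -> int:
    return MUX_CAPACITY_KBPS // bitrate_kbps

dab = stations_per_mux(160)       # DAB at the "similar to FM" tier
dab_plus = stations_per_mux(64)   # DAB+ at the same quality tier

print(f"DAB:  {dab} stations per multiplex")
print(f"DAB+: {dab_plus} stations per multiplex")
```

Under these assumptions DAB+ more than doubles the station count at comparable quality, which is the efficiency argument driving its deployment.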
Consumer web sites are already giving users potentially confusing and conflicting advice on the implications of this. For example Which36
recommends going ahead with a DAB radio purchase:
“If you’re planning to buy a new digital radio, don’t let the news about DAB+ put you off. UK
government seems committed to a DAB, rather than DAB+ future and as a national digital radio
switchover from FM to DAB is unlikely before 2015 at the earliest, any potential change to
DAB+ looks to be even further off.“
Other sites37 advise consumers to ensure they buy something that is DAB+ capable:
“The current concerns with DAB audio quality are likely to remain until DAB+ is introduced.
This uses a different audio compression method that isn’t compatible with older DAB radios.
Both services will continue for a while, but eventually everything is expected to become DAB+
only. Anyone thinking of getting a DAB radio should make sure it has DAB+ capability.”
A clear strategy and roadmap is required from the government and Ofcom regarding the future roles of FM, DAB and DAB+ audio broadcast
in the UK.
KEY ENABLER: Decision on Future of Terrestrial Broadcasting
Technical Objective:
To ensure network capacity to handle mass audiences for sound and television services is in place to carry the next generation of
broadcasting.
Policy Means:
The long term future of sound and terrestrial broadcasting could be delivered over dedicated networks (the follow-on technologies
to the current DAB and DTT) or the traffic could be carried on untethered-fibre broadband networks. There are things to be said in
favour of both. As the traffic load is enormous the main policy priority is to look 10 years out and simply decide which one.
The end-state: network architecture summary
Mobile networks have typically lagged fixed networks by approximately 5 years in terms of both user-bandwidths and technology adoption.
For example, mobile networks were moving from TDM-based 2G to ATM-based 3G at around the same time that fixed broadband networks
were moving from ATM to Ethernet38. It is only with 4G/LTE that mobile networks have adopted an IP-based architecture leveraging MPLS
and Ethernet technologies. However, increasingly these once-disparate networks are converging on the same architectural approaches
together with integrated policy management to facilitate converged Fixed-Mobile services. This section summarises the end-state network
architecture based on currently known technology evolution trajectories.
The increasing dependence on telecoms network infrastructure for Cloud-based business and consumer services has placed greater
focus on resilience. This will impact network topologies in order to minimise the number of Single Points of Failure (SPoF). Core network
topologies will continue to use highly meshed interconnections between core network nodes. Increasingly the regional metro networks will
push ring and arc topologies closer to the customer (at least to the initial aggregation nodes). Hub and spoke topologies will be reserved
for the final access connections where connection costs are per site/customer and by definition offer less scope for sharing fixed costs.
However, even last mile connections will increasingly become more resilient. Today such access redundancy is the preserve of business
customers (e.g. using DSL to back-up a fibre access connection). In future SMEs and even consumers will place such reliance on their
telecoms (especially broadband) services that they too will require resilient/redundant access connections leveraging different physical
bearers (e.g. FTTP with fail-over to 4G mobile access).
The increasing focus on network latency (not just its bandwidth and availability) will also impact network topology in the end-state
architecture. Local traffic routing (such as to support LTE X2 traffic between base-stations) will be needed in some geographies to avoid
local traffic “tromboning” deeper into the core network and adding delay. Low latency in the fixed network infrastructure will also be pivotal
to support the next phase of mobile network improvements inherent in LTE-A. Mobile RAN techniques such as eICIC (enhanced Inter-Cell
Interference Co-ordination) and CoMP (Co-ordinated Multi-Point) together with connectivity of remote radio units via the CPRI protocol to a
“Cloud RAN” (C-RAN) all place extremely onerous requirements on the fixed network in terms of latency as well as bandwidth. The Cloud
RAN architecture is a new concept in mobile RANs (now gaining traction in Asia). It consists of distributed base stations, Base Band Units
(BBU - central brain) and Remote Radio Units39 (RRU - RF components). C-RAN can provide an enhancement to radio performance in dense areas (capacity greater than 2.5 Gbit/s and latency less than 50 µs, cf. the delay over 10km of fibre). C-RAN is mainly applicable in areas where there is a lot of fibre.
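The 50 µs figure explains why C-RAN reach is fibre-limited: light in silica fibre propagates at roughly c divided by the refractive index, about 5 µs of one-way delay per kilometre. A quick check, assuming a typical refractive index of 1.47:

```python
# Why a ~50 us fronthaul latency budget corresponds to roughly 10 km of
# fibre: propagation delay scales with fibre length and refractive index.
C_VACUUM_KM_PER_US = 0.299792458  # speed of light in vacuum, km per microsecond
FIBRE_INDEX = 1.47                # assumed refractive index of silica fibre

def fibre_delay_us(length_km: float) -> float:
    """One-way propagation delay over the given fibre length."""
    return length_km * FIBRE_INDEX / C_VACUUM_KM_PER_US

print(f"10 km fibre one-way delay: {fibre_delay_us(10):.1f} us")
```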
Figure 7.1: Cloud-RAN (C-RAN) architecture. Virtual base station pools of general-purpose processors (PHY/MAC), interconnected via X2+ with balanced traffic load, drive distributed RRUs over a high-speed switched bearer network.
Figure 7.2: Evolution of base-station infrastructure necessitates 9 Gbit/s fibre transport for the CPRI protocol: from a traditional base station with baseband at the bottom of the tower (NodeB/eNodeB over IP/ETH mobile backhaul), to remote radio heads connected via CPRI to co-located, stacked baseband units, to remote radio heads connected via CPRI transport to a virtualised/pooled baseband unit.
Source: PMC Sierra
The days of IP packet networks only supporting best-effort Internet access are long gone. The modern multi-service nature of packet
networks means that they are increasingly having to have “TDM-like” characteristics with respect to synchronisation and timing, especially
for the more onerous mobile service requirements. Hence the end-state architecture will include support of SyncE (Synchronous Ethernet) and potentially phase synchronisation too.
In terms of technologies, the end-state architecture will be a two-layer architecture comprising a physical layer and a packet layer. The
physical layer will be mainly optical and the packet layer will be mainly MPLS. The final access (“last mile”) connection to end user’s
premises will leverage a variety of technologies but the final tethering to the “content consuming device” and its associated user-interface
will be predominantly wireless.
Figure 7.3: Illustration of the 2-layer network approach. An IP/MPLS layer (seamless MPLS, Ethernet VPWS/VPLS, native and VPN IPv4/IPv6 Internet services, with BNG/EPC, MME and Se-GW functions) runs over a physical layer (direct fibre, PON, xDSL, microwave V-band/E-band, C/DWDM wavelengths and integrated OTN and DWDM) across the access (last mile), regional backhaul and national IP core domains.
Source: Huawei
In this two-layer network there is a degree of vertical multi-layer optimisation. For example a drop in optical quality (e.g. Signal to Noise
Ratio) can be detected at the physical layer and used to re-route traffic at the packet layer before any packet loss actually occurs. Horizontal
integration between domains will be at either the optical layer or via Ethernet interconnection interfaces. Research40 on packet switching and
routing at the photonic layer may even lead to further delayering.
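The vertical optimisation described above amounts to a simple control rule: when the optical layer reports degraded signal quality, the packet layer reroutes before loss begins. A toy sketch, in which the threshold and path names are illustrative assumptions:

```python
# Toy sketch of vertical multi-layer optimisation: the optical layer
# reports per-path signal quality (OSNR), and the packet layer avoids any
# path that has degraded below a warning margin, before packet loss occurs.
OSNR_REROUTE_THRESHOLD_DB = 15.0  # assumed pre-failure warning margin

def choose_path(paths: dict) -> str:
    """Pick the best path, excluding any whose OSNR fell below threshold."""
    healthy = {name: osnr for name, osnr in paths.items()
               if osnr >= OSNR_REROUTE_THRESHOLD_DB}
    candidates = healthy or paths  # fall back if every path is degraded
    return max(candidates, key=candidates.get)

paths = {"primary": 14.2, "protect": 18.5}  # primary degrading, not yet failed
print(choose_path(paths))  # traffic moves to the protect path pre-emptively
```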
The optical layer will push coherent optics beyond the core network to the metro/aggregation networks. Technologies such as OFDM41
will further increase optical transmission capacity. DWDM optical technologies will increasingly be integrated into large packet routers42,43
instead of separate optical transponder equipment as illustrated below:
Figure 7.4: Use of "black links" to remove separate transponders. In the current typical deployment, CFP grey optics on the router connect to a transponder on the DWDM system; a potential future development puts DWDM optics directly on the router, driving an alien wavelength ("Black Link", ITU G.698.2).
Source: Neil Mcrae BT, UKNOF, 2013
Finally, optical transmission will arrive in the access domain for non-business customers. The end-game optical access technologies will use a combination of point-to-point and PON techniques. WDM-PON enables traffic flow isolation at the wavelength level, avoiding the degree of sharing of optical capacity inherent in today's PON technologies. Hence the same access infrastructure can be used by consumers, businesses and infrastructure providers (e.g. for small cell backhaul) without the traffic interacting. This avoids the onerous access network capacity planning and traffic management (to limit congestion) inherent in today's FTTP PON networks44. It also provides traffic isolation (via optical wavelengths) between different service providers, thus facilitating a return to "unbundled competition" at a lower level, which is essential for innovation and competitive service pricing. 10G to 100G enterprise customer and data centre connectivity is likely to remain on point-to-point fibre access technology.
At the packet layer, the end-state architecture will evolve via end-to-end seamless MPLS45, making use of new segment routing protocols46, towards Software Defined Networks (SDN) with agile virtualisation capabilities (Network Function Virtualisation, NFV) running on general-purpose compute technologies. Orchestration across both the Cloud domain (compute resources in the data centre) and the network domain will be a major step forward in making networks "demand-attentive". Such cross-domain orchestration offers the potential for users to self-provide the communications and applications resources they need on demand and in real time. This would represent a huge step
forward in infrastructure responsiveness to help provide businesses and potentially consumers with what they need, when they need it.
Deployment of IPv6 will be largely driven by the rapid growth in M2M applications. This evolution will simplify and cost-reduce both equipment and operations while improving scalability and resilience. Techniques such as multicast and transparent caching will also be leveraged at the packet layer to improve the end-user experience and application performance over the network (especially for video-rich content).
An example of such an architecture is shown below:
[Figure 7.5: Two-layer network architecture showing role of SDN & NFV via compute platforms. The figure spans the access (last mile: direct fibre, PON, xDSL, Ethernet wireline, Ethernet microwave), backhaul/aggregation, national core and data centre domains. Seamless MPLS and SDN-controlled DWDM/OTN fabrics carry packet transport services (IPv4/IPv6 VPNs and Ethernet E-Line), while network functions (e.g. BNG, MME, S-GW, P-GW, HSS, PCRF, CDN caches, firewalls, anti-malware) run as virtualised applications on hypervisor-based compute fabric at each tier.]
The two-layer network described above is already being deployed by some mobile network operators seeking to deliver 1G fibre connectivity
to the majority of macro base-stations. Further mobile network capacity and coverage (including indoors) will be achieved via a significant
increase in the deployment of small cells (micro, pico, femto plus WiFi), resulting in a heterogeneous network (Het Net). Point-to-point fibre
backhaul will not always be viable for small cells, so other fixed access/backhaul technologies (such as NGA, Pt-Pt and Pt-Mpt fixed radio)
will also be necessary. This also includes use of new MIMO microwave and V-Band/E-Band radio technologies for backhaul/trunking at up to
10G speeds.
The vast majority of the converged network infrastructure described above is “fixed”. However, in this vision of an untethered fibre-wireless
future, the final connectivity to the “point of consumption” (e.g. end-user’s device or application) is some form of radio link. This could be
via mobile, WiFi, white-space, mesh or other radio technologies. It will become increasingly vital to use the limited available spectrum as
efficiently as possible, even if that means sharing it. Several existing radio frequency bands host services that need access to spectrum but
do not necessarily use it fully. Cognitive technologies, beam-forming adaptive arrays and intelligent networks can make it possible to more
effectively share spectrum. Radio environment awareness and interference management techniques can enable multiple systems to occupy
the same spectrum as part of a Self-Optimising Network (SON). It may take a combination of exclusive spectrum and shared spectrum
solutions to meet the application and service requirements of the future.
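The listen-before-transmit principle behind such cognitive spectrum sharing can be sketched as a toy energy-detection check. The function, threshold margin and sample scaling below are illustrative assumptions, not any standard's detection algorithm; a real detector would derive its threshold from measured noise statistics and required false-alarm rates.

```python
import math

def channel_free(samples, noise_floor_dbm, margin_db=6.0):
    """Toy energy-detection spectrum sensing: declare the channel free only
    if measured power sits within margin_db of the known noise floor.

    samples: baseband amplitude samples scaled so mean-square power is in mW."""
    power_mw = sum(s * s for s in samples) / len(samples)
    if power_mw <= 0:
        return True  # no measurable energy at all
    power_dbm = 10 * math.log10(power_mw)
    return power_dbm < noise_floor_dbm + margin_db

# A quiet band (around -100 dBm) versus an occupied band (around -60 dBm),
# checked against an assumed -90 dBm noise floor:
quiet = [1e-5] * 64   # mean power 1e-10 mW = -100 dBm
busy = [1e-3] * 64    # mean power 1e-6 mW  = -60 dBm
```

In a Self-Optimising Network this kind of local sensing would be combined with geolocation databases and interference management rather than used alone.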
Conclusions
Early data networks leveraged the infrastructure already in situ for providing voice services (in both the fixed and mobile domains). That
original infrastructure was designed for a single-service application (i.e. voice) where traffic flows were easily modelled. This provided
a degree of determinism to understanding traffic loading and subsequent capacity management in order to ensure an effective grade
of service and hence fit for purpose outcomes (good voice quality and rare occurrence of “engaged” tone). Fixed and mobile networks
have subsequently evolved to become multi-service broadband networks. The fixed and mobile domains are now converging and the link
bandwidths have increased. However, evolution of the basic topology of the networks has taken longer. Upgrading link bandwidths is not
economic everywhere and is not sufficient to deliver the required outcomes and user experience in the modern era. The modern multi-service network is a tree of packet multiplexing functions transporting a vast range of traffic types (voice, data, video, messaging, M2M
…) with traffic flows that follow a range of topologies (client to server/cloud, peer to peer …) using a range of protocols. The multiplexing
points are also where contention can occur (e.g. home gateway router, FTTC/VDSL cabinet, mobile RAN spectrum etc.). Modelling the load
on such a network in a meaningful way is extremely difficult. Moreover, evolution of the network can move the contention points as new
technologies are introduced. Solving a problem or pinch-point in one area can create challenges in another area. It is therefore important to
understand the end to end system characteristics (including the influence of end-user devices and behaviour of application layer protocols)
when operating the network under load (the Demand in “DAN”).
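The difficulty of reasoning about load at a contention point can be illustrated with a small Monte Carlo sketch. The user counts, activity factors and rates below are invented for illustration only: a link amply provisioned for average demand still congests whenever enough bursty sources peak together, and the probability of that is very sensitive to behaviour at the edge.

```python
import random

def overload_probability(n_users, p_active, peak_mbps, link_mbps,
                         trials=20000, seed=1):
    """Estimate the probability that simultaneous peak demand from
    independently bursty users exceeds a shared link's capacity."""
    rng = random.Random(seed)
    overloads = 0
    for _ in range(trials):
        # Count how many users happen to be active in this instant.
        active = sum(1 for _ in range(n_users) if rng.random() < p_active)
        if active * peak_mbps > link_mbps:
            overloads += 1
    return overloads / trials

# 32 users each peaking at 20 Mbps but active only 10% of the time:
# average demand is 64 Mbps, well inside a 160 Mbps link, yet overload
# remains possible -- and grows sharply as user activity rises.
low = overload_probability(32, 0.1, 20, 160)
high = overload_probability(32, 0.3, 20, 160)
```

The point of the sketch is the sensitivity, not the numbers: a change in application behaviour (more active users) moves the contention point from rare to routine without any change to the link itself.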
It is not adequate to simply increase the bandwidth on links where it is economic to do so and hope that the resulting statistical packet
multiplexing occurs in a way that will deliver satisfactory quality of experience most of the time. The future network needs to be responsive
to the demands placed on it by users in a way that makes it almost tactile.
The challenge is to create a network environment where performance capability and stability is such that the user’s applications consistently
achieve the required outcomes. Users will then feel that they can connect with confidence, wherever they are. There are two aspects
to consider: improving connectivity and also improving the quality being delivered (which is constrained within the inherent degrees of
freedom of such systems). Increasing the potential peak speed delivered is often driven not by a desire to improve connectivity but by
a desire to market a “bigger number” than the competition47. It doesn’t address the quality of information translocation, especially as it
creates a co-ordination challenge (of scheduling the increased peak demand). Similarly the use of “average” performance metrics can
be equally dangerous in that it can hide the fact that a proportion of the user population are receiving a terrible service. This is not purely
a technical issue. For example there is no regulatory requirement in the spectrum licences for service - just for signal coverage. The right
optimisation objective is end-user Quality of Experience (QoE). As an industry we need to shift focus from "what the industry can deliver" to
delivering "what is most effective". This includes consideration of how best to improve minimum consistent service levels in challenging
geographies.
Technologies have been developed to address the fact that infinite bandwidth everywhere is not achievable. Such technologies often provide
point solutions to particular problems, but sometimes inadvertently create challenges elsewhere. Some of these technologies require
changes in regulatory or planning policy to be effective. However, more importantly, such technologies should be considered holistically from
both the technical architecture and national policy perspectives. This is necessary in order to deliver an end-state where the network is a
“Demand Attentive Network” (DAN), responding to the varying needs and load placed on it by users and consequently delivering outcomes
that meet their QoE needs. The ambition of the DAN approach is to transform networks from “never enough” to “always sufficient”.
The paper may come across as a rag-bag of suggestions, or even as prescriptive. That would be to miss the point. The Demand Attentive Network
approach requires silos to be broken down and economic and/or technical performance gains seized where they can make a significant
improvement to the perception of network responsiveness. All the proposals given are illustrative working assumptions that can be added to,
improved upon or even replaced by better assumptions.
The IET has produced this paper in order to stimulate discussion on a more holistic approach to network engineering and associated policy
development in order to deliver a DAN infrastructure for the benefit of the UK digital economy. The paper does not seek to consider the
many ramifications of the DAN approach, it is primarily to act as a catalyst for debate.
DAN is not interventionist. The policy and regulatory changes needed are enablers and can only work if they build upon an emerging
industry consensus for change rather than trying to force it.
Appendix 1
The working assumptions of a new common operating model
Introduction
The broadband Internet is a hugely successful engine for generating new services, applications, businesses and jobs and is at the heart of
the unfolding digital economy and digital social space. This growth is generating ever greater volumes of data over the Internet that have to be
moved around at ever greater speed.
The success of this "services growth engine" compels us to shape a matching vision of a "network growth engine" that must not only be
able to handle...ever larger volumes of data...at ever faster speeds...but with lower delays...greater reach...and much stronger resilience.
This growth is likely to go on for decades and solutions have to be sustainable over a similar time-frame.
The ideal technology end destination is a fibre-wireless network providing unlimited bandwidth and storage. The enduring pinch-point
will be the wireless broadband network component. Wireless networks can never deliver unlimited bandwidth but the world could come
considerably closer to this ideal if the ambition were set to give users the ‘perception’ of unlimited bandwidth. Turning this into a practical
proposition requires:
■ Being very attentive to the demands users and applications are placing on a network.
■ Breaking down the silos: fixed & mobile (& perhaps broadcasting); industry & regulators; handsets & networks - and pulling together all
the ideas that can respond to this demand at lower cost.
It leads to more "Demand Attentive Networks"...attentive to the customers' changing needs...attentive to the technical, policy
and regulatory changes required. We term this mix of new technical, policy and regulatory changes the "Common Operating Model". An
innovative new Common Operating Model would deliver an immensely powerful improvement in the performance/cost of broadband
networks.
As with any optimisation it has to begin with a set of starting assumptions (let’s call them working assumptions). They have then to be honed
through public debate into a consensus for change. In the process any working assumption can be improved upon or changed for better
ones.
Set out below is a sub-set of working assumptions for more Demand Attentive Networks taken from a larger list. It is no more than a shop
window to illustrate the breadth of changes needed and to put practical substance behind the conceptual idea.
Top 10 enablers to deliver demand attentive networks
Fibre support network for an untethered fibre-wireless world
i. Technical Objective: To ensure fibre backhaul for wireless cells (of all sizes) is available at ever more locations, at the right time, at the
right price and with a low latency specification.
■ Policy Issue: Regulation of fixed broadband infrastructure providers (e.g. Openreach) needs to be brought into line with a future where
fibre-rich fixed broadband networks essentially gate the future of wireless networks (e.g. technical standards etc).
Mobile capacity & performance
Users connecting to the nearest mast irrespective of mobile operator (wholesale national roaming)
ii. Technical Objective: To have users always connect to the nearest tower (or pico cell) irrespective of the mobile operator, to secure an
"up to" 20-fold increase in network capacity for millions of mobile users.
■ Policy Issue: National roaming is likely to be controversial, but from a pure engineering perspective it makes a lot of sense (especially
outside of dense urban areas). The challenge is to move to a version of national roaming that benefits mobile operators and their customers
and is pro-competitive. The point of the DAN initiative is to stimulate regulatory/policy/commercial enablers to unlock this technical
potential (or provoke proposals for substitute solutions with clearly superior performance/cost). For example, if the over-the-top players ran
their MVNO networks on two incumbent mobile networks they would be able to enable national roaming from the handset and secure a
huge competitive advantage for data speeds over the incumbent mobile operators. Would the regulators allow the incumbent operators
to switch on national roaming to be able to compete?
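A rough feel for the engineering upside of connecting to the nearest mast comes from a first-order link-budget calculation. The path-loss exponent, SNR and bandwidth figures below are illustrative assumptions (the paper's "up to 20-fold" figure also reflects cell-load and reuse effects this sketch ignores): halving the distance to the serving mast improves SNR by around 10 dB, which translates into a substantially higher achievable link rate.

```python
import math

def snr_gain_db(distance_ratio, path_loss_exponent=3.5):
    """SNR improvement from reaching a mast `distance_ratio` times closer,
    under a simple power-law path loss model (exponent assumed)."""
    return 10 * path_loss_exponent * math.log10(distance_ratio)

def shannon_mbps(bandwidth_mhz, snr_db):
    """Shannon capacity bound for the given bandwidth and SNR."""
    return bandwidth_mhz * math.log2(1 + 10 ** (snr_db / 10))

# A user at 10 dB SNR on their own operator's distant mast, versus a
# rival operator's mast at half the distance (roughly +10.5 dB gain):
far_rate = shannon_mbps(20, 10.0)
near_rate = shannon_mbps(20, 10.0 + snr_gain_db(2))
```

The closer mast roughly doubles the achievable rate per Hz in this toy case, before counting the spectrum freed up elsewhere by the shorter link.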
Denser mobile networks
iii. Technical Objective: To see a steady but relentless expansion of small-cells to increase urban network capacity.
■ Policy Issue: Lamp posts and other public structures have to be made available on fair terms for small cells. Mobile operators need
to have the regulatory incentives to sustain cell-splitting down to small cells. We need suitable co-ordination structures to resolve the
contention issues for the finite resources: real estate, local spectrum etc.
Squeezing more out of usable radio spectrum
iv. Technical Objective: Spectrum needs to follow the customer.
■ Policy Issue: Licence obligations for new spectrum need to be linked to the Demand Attentive Network agenda of spectrum following
the customer, i.e. a more agile response to demand. Rural spectrum could be pooled so the sole operator in a particular location has
access to the entire low frequency spectrum and is hence able to offer greater local capacity and speeds.
Coverage gap filling
v. Technical Objective: To fill in all the not-spots and push out national coverage, not just to deliver a minimum signal but to get more data speed
to the end-user.
■ Policy Issue: All networks entail a degree of cross-subsidy between the areas of high demand and areas of lower demand. Regulation
needs to shift the investment balance in favour of more coverage and fewer not-spots. Thereafter it becomes a matter of public
investment to fulfil political objectives of sustaining rural economies, balancing investment across the nations, security (emergency
services usage) etc.
Planning for resilience
vi. Technical Objective: As more economic activity is carried on broadband networks and society becomes more dependent upon them, so more
investment has to go into making Demand Attentive Networks more resilient. It may also be necessary to plan for graceful degradation.
■ Policy Issue: Where local disasters occur it must be possible for wireless networks to instantly ration capacity in ways consumers
and the emergency services find most useful. National roaming should be provided for a small percentage of users deemed to be
'important', e.g. emergency service personnel and M2M devices related to key infrastructure. Techniques may be needed to cope
better with power outages.
Better mobile antenna and receiver performance and other network-related functions
vii. Technical Objective: To bring the worst performing terminals up to the best, to reduce the capacity drain on wide area wireless networks.
■ Policy Issue: There is a case for taking some critical RF performance and other requirements into the technical specifications and
making them mandatory for handsets, for the common good of minimising network capacity being drained away (wasted) by sub-optimal
handset design. But how can this be achieved when the EU has no type approval? A vital first step would be getting receiver
performance (in-band and out-of-band) within the scope of the forthcoming Radio Equipment Directive currently being debated in the
EU.
Overnight push to “trickle charge” storage
viii. Technical Objective: Use all the otherwise wasted network night-time capacity to push "predictable" content to the storage on PCs,
smartphones and other customer devices, delivering a de facto capacity increase of 30% or so and a better perception of speed and
resilience of content delivery. This can be extended to whenever devices linger close to any WiFi connection.
■ Policy Issue: Produce the industry technical standards, cross-industry cooperation and a code of conduct to ensure consumers consent
to the content pushed and the amount of storage used.
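A minimal sketch of the push logic, assuming a fixed 01:00-06:00 off-peak window and a per-item "predicted interest" score. Both are invented placeholders: a real deployment would derive the quiet window from measured network load and would honour the consent and storage-cap code of conduct described above.

```python
from datetime import time

OFF_PEAK_START = time(1, 0)   # assumed quiet-hours window
OFF_PEAK_END = time(6, 0)

def select_push_items(catalogue, free_storage_mb, now):
    """Pick predictable content to trickle onto a device overnight:
    highest predicted interest first, within the user's storage budget,
    and only during the off-peak window."""
    if not (OFF_PEAK_START <= now < OFF_PEAK_END):
        return []
    chosen, used = [], 0
    for item in sorted(catalogue, key=lambda c: c["score"], reverse=True):
        if used + item["size_mb"] <= free_storage_mb:
            chosen.append(item["title"])
            used += item["size_mb"]
    return chosen

# Hypothetical catalogue with predicted-interest scores:
catalogue = [
    {"title": "news-bulletin", "size_mb": 300, "score": 0.9},
    {"title": "drama-ep-4", "size_mb": 1500, "score": 0.7},
    {"title": "niche-doc", "size_mb": 800, "score": 0.2},
]
```

At 02:30 the sketch fills the 2 GB budget with the two highest-scoring items; at midday it pushes nothing, leaving peak-hour capacity untouched.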
Making use of multicast techniques
ix. Technical Objective: Ensure that when millions (or any large number) of consumers want to download the same content at the same time,
one data stream is used and not millions of separate data streams.
■ Policy Issue: "Multicast"-enable mobile networks and mobile devices. This would almost certainly require multicast standards to be
encouraged by EU regulation on mobile devices.
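The bandwidth argument can be made concrete with a toy distribution-tree calculation (topology, viewer counts and stream rate are invented for illustration): under unicast every link carries one copy of the stream per downstream viewer, while under multicast each link carries at most one copy.

```python
def downstream_viewers(tree, viewers_at, node):
    """Viewers at or below `node` in the distribution tree."""
    total = viewers_at.get(node, 0)
    for child in tree.get(node, []):
        total += downstream_viewers(tree, viewers_at, child)
    return total

def aggregate_load_mbps(tree, viewers_at, rate_mbps, multicast):
    """Total traffic across all tree links for one live stream."""
    load = 0.0
    for node, children in tree.items():
        for child in children:
            v = downstream_viewers(tree, viewers_at, child)
            if v > 0:
                # Multicast: one copy per used link. Unicast: one per viewer.
                load += rate_mbps if multicast else v * rate_mbps
    return load

# A core node feeding two aggregation nodes and three cells:
tree = {"core": ["agg1", "agg2"], "agg1": ["cell1", "cell2"], "agg2": ["cell3"]}
viewers = {"cell1": 100, "cell2": 50, "cell3": 25}
unicast = aggregate_load_mbps(tree, viewers, 4.0, multicast=False)
mcast = aggregate_load_mbps(tree, viewers, 4.0, multicast=True)
```

With 175 viewers of a 4 Mbps stream the unicast load across the tree is 1400 Mbps against 20 Mbps for multicast; the gap only widens as audiences grow, which is the engineering case for multicast-enabling devices as well as networks.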
Future of terrestrial broadcasting
x. Technical Objective: To ensure network capacity to handle mass audiences for sound and television services is in place to carry the next
generation of broadcasting.
■ Policy Issue: The long-term future of sound and terrestrial broadcasting could be delivered over dedicated networks (the follow-on
technologies to the current DAB and DTT) or the traffic could be carried on untethered-fibre broadband networks. There are things to
be said in favour of both. As the traffic load is enormous, the main policy priority is to look 10 years out and simply decide which, as
the choice makes some working assumptions (e.g. multicasting) critical.
Delivering the vision
We want an inclusive international approach aimed at drawing in the best ideas and flexible enough to modify and change them as better
ones come along.
Success will be measured by outcomes not inputs, with users of more Demand Attentive Networks behaving increasingly as if their
bandwidth were unlimited - indeed being unaware of and uninterested in bandwidth. Any country that wants to be a world leader in digital
communications in an untethered world has to bring about this change of Common Operating Model. The alternative is not sustainable.
References
1. House of Lords Select Committee on Communications, inquiry on "Will superfast broadband meet the needs of our 'bandwidth hungry' nation?", submission by Simon Pike.
2. http://www.parliament.uk/business/committees/committees-a-z/lords-select/communications-committee/publications/?type=&session=3&sort=false&inquiry=576
3. Downloading a web page uses protocols that involve several interactions between a web server and the network. Hence the connection between the two is traversed several times, so end-to-end latency is important too.
4. https://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/ Jim Getty, July 2013.
5. "The Properties and Mathematics of Data Transport Quality", Neil Davies, Predictable Network Solutions (for Ofcom), 5th February 2009. http://www.pnsol.com/public/PPPNS-2009-02.pdf Zero jitter and 100% availability complete the set of engineering SLA parameters.
6. http://www.slideshare.net/mgeddes/martin-geddes-lean-networking Martin Geddes, October 2012.
7. "Policy" can be in an engineering, economic or regulatory dimension; respective examples include expedited forwarding class, premium pricing and net neutrality approaches.
8. In Q1 2012, 85% of UK broadband homes used WiFi.
9. Vectoring may enable VDSL2 speeds to increase by of the order of 25-40%, exploiting more fully the finite capacity of copper cables and taking us closer to the theoretical limits.
10. DP - telephone pole or footway box outside premises.
11. In this context "more demanding" includes: a) wants more instantaneous capacity, b) more stable translocation quality properties, c) better translocation quality properties. Each of these requirements is really requesting delivery with less sharing of capacity (lower statistical multiplexing gain).
12. This was not unique to the UK. Innovation in broadband in other countries was also driven by the LLU operators (e.g. Fastweb in Italy, Covad in the US, Free in France etc.).
13. E.g. most BDUK funds have been allocated to BT.
14. E.g. in comparison with DSL, FTTC and broadband via a mobile network.
15. "Experiences with WDM PON in SME markets", Ger Bakker, UUNet, FTTx Summit 2012.
16. "Competitive Models in GPON", Analysys Mason (for Ofcom), December 2009.
17. Review of the Wholesale Local Access Market, Ofcom, 7th October 2010.
18. "Competition and Investment in Superfast Broadband", Ed Richards, Ofcom, 8th November 2011.
19. "NGA Implementation Issues & Wholesale Products", BEREC Report, March 2010.
20. ECTA Response to EC Consultation on the Draft Recommendation on Regulated Access to NGA Networks, November 2008.
21. In Australia, the National Broadband Network (NBN) plans to move away from Active Line Access (ALA) to wavelength unbundling as soon as feasible to promote competition at the lowest sustainable infrastructure/technical level.
22. A sectored base station site typically contains 3 cells.
23. 3GPP is the global standards body for mobile technology. Several of these LTE capabilities are developed in 3GPP's specifications, e.g. R11 (completed in March 2013) and R12 (anticipated to complete in December 2014).
24. There are power issues to consider as such functionality is distributed, both in terms of overall consumption (carbon footprint) and who pays, i.e. a server in the data centre/exchange versus the extreme case of a cache allocation on a user's home PVR/STB hard drive.
25. A "parent" exchange has connectivity to several "child" exchanges. A communications provider connecting to only the parent exchange can then gain access to customers who are connected to the child exchanges. This reduces (from one per exchange) the number of interconnect points required by the communications provider in order to gain connectivity to the customers in a given locale.
26. Although for small cell deployment, there are also challenges of deploying wireline backhaul infrastructure to "non-premises", i.e. unmanned sites like lamp posts.
27. Such challenges may not only hinder the evolution of broadband mobile networks. M2M applications like smart utilities (smart grid, smart meters) and smart cities could also require deployment of radio transceivers (such as mesh radio and TV white-space radio) on similar infrastructure.
28. "Backhaul Technologies for Small Cells", 049.01.01, Small Cell Forum, February 2013.
29. "Understanding Weightless: Technology, Equipment and Network Deployment for M2M Communications in White Space", William Webb, Cambridge University Press, 2012.
30. From measured data presented by Aalborg University to the IWPC Workshop on Advanced Smartphone RF Front Ends for Non-Contiguous Band Combinations ("OTA Testing - Performance Assurance for Future RF Front Ends", Professor Gert Frølund Pedersen, Aalborg University, Lund, Sweden, 17-19 June 2013).
31. "Smart Cells Revolutionize Service Delivery", Intel White Paper, 2013.
32. "IP Design for Mobile Networks", Mark Grayson, Kevin Shatzkamer and Scott Wainner, Cisco Press, 2009.
33. E.g. via use of eMBMS - evolved Multimedia Broadcast Multicast Service.
34. The FM frequency range is 88-108 MHz; the DAB frequency range is 217.5-230 MHz.
35. The way in which compression is applied in the content production process also impacts the user's Quality of Experience. Such considerations also apply to video content and will be a key focus area for 4K Ultra HD TV.
36. http://www.which.co.uk/technology/audio/guides/dab-explained/
37. http://www.jimsaerials.co.uk/dab%20&%20fm/radio.htm
38. "Migration to Ethernet-based DSL Aggregation", Broadband Forum TR-101, April 2006.
39. Sometimes referred to as Remote Radio Heads (RRH).
40. Research opportunities exist for a number of the topics discussed in this paper.
41. "Experimental Demonstrations of Electronic Dispersion Compensation for Long-Haul Transmission Using Direct-Detection Optical OFDM", Brendon J. C. Schmidt, Arthur James Lowery and Jean Armstrong, Journal of Lightwave Technology, Vol. 26, Issue 1, pp. 196-203 (2008).
42. "Amplified multichannel DWDM applications with single channel optical interfaces", ITU-T G.698.2.
43. "A framework for Management and Control of optical interfaces supporting G.698.2", IETF draft-kunze-g-698-2-management-control-framework-01, October 2011.
44. It moves the statistical multiplexing up to the aggregation/core network where it may be easier to manage. Hence it just moves the capacity and performance management issues but doesn't fully resolve them end-to-end.
45. "Seamless MPLS Architecture", IETF draft-ietf-mpls-seamless-mpls-02, October 2012.
46. "Segment Routing with IS-IS Routing Protocol", IETF draft-previdi-filsfils-isis-segment-routing-02, 20th March 2013.
47. Including competition between countries to be the first to meet or exceed European Commission targets for broadband.
Acronyms
BNG - Broadband Network Gateway
BSC - Base Station Controller
CPE - Customer Premises Equipment
C-RAN - Cloud Radio Access Network
DSLAM - Digital Subscriber Line Access Multiplexer
DWDM - Dense Wavelength Division Multiplexing
FTTP - Fibre To The Premises
FW - Firewall
GGSN - Gateway GPRS Support Node - a main component of the GPRS network
GPON - Gigabit Passive Optical Network
HSS - Home Subscriber Server
LLU - Local Loop Unbundling
MME - Mobility Management Entity
MGW - Media Gateway
MIMO - Multiple Input Multiple Output - a means of using multiple antennas to improve performance
MPLS - Multiprotocol Label Switching - a means to route specific data traffic over a defined path
Multicast - a means of sending data from a single source to a defined group
NG-PON2 - Next Generation Passive Optical Network 2
ODU - Optical De-multiplexer Unit
OLT - Optical Line Termination
OSS - Operations Support System
OTN - Optical Transport Network
PON - Passive Optical Network
QAM - Quadrature Amplitude Modulation
RAN - Radio Access Network
RBS - Radio Base Station
RNC - Radio Network Controller
RRU - Remote Radio Unit
SGSN - Serving GPRS Support Node
S-GW - Serving Gateway
VPN - Virtual Private Network
WDM - Wavelength Division Multiplexing
The Institution of Engineering & Technology
Michael Faraday House
Six Hills Way
Stevenage
SG1 2AY
01438 765690 - Policy Department
email: [email protected]
http://www.theiet.org/policy
http://www.theiet.org/factfiles
© The IET 2013
This content can contribute towards your Continuing Professional Development (CPD) as part of the IET's CPD Monitoring Scheme.
http://www.theiet.org/cpd
Issue 1.0 - 2013
The Institution of Engineering and Technology is registered as a Charity in England & Wales (no 211014) and Scotland (no SC038698).