A Note on Possible Regulatory Strategies in Sweden to 2015

Martin Cave
Warwick Business School, UK
[email protected]
November 2008
CONTENTS

Introduction
1. NGNs
   A. Potentially competitive NGAs
   B. A monopoly NGA
   C. Public and municipal investment
   D. The impact of separating NGAs
   E. The Commission's draft recommendation on regulating NGAs
   F. Conclusions
2. Interconnection (especially termination)
   A. Introduction
   B. Costing issues
   C. Two-sided markets and callee benefits
   D. The Commission's draft Recommendation on termination and conclusions
3. Spectrum Policy
   A. Spectrum auctions
   B. Secondary trading and liberalisation
   C. Controlling receivers
   D. Public sector spectrum use
   E. Choosing the extent of unlicensed spectrum
   F. Improving Swedish spectrum management
4. Universal Service
   A. Universal voice service
   B. A broadband USO
   C. Implications for spectrum policy
5. Summary and Recommendations
   A. NGNs
   B. Interconnection
   C. Spectrum Policy
   D. Universal service
   E. An overview
Annex A: Some recent economic literature on termination
Annex B: New technologies and their implications for spectrum management
Annex C: Project specification from PTS
Introduction

This note is based on the project specification from PTS (extract attached as Annex C), and on the briefing I received in Stockholm in June 2008. It tries to look ahead seven years. Inevitably, it also reflects the author's own preoccupations with regulation (and deregulation).

The key development in this period is likely to be the emergence of high speed broadband, which I take to be the crucial change in the second decade of the century, with a potential impact equivalent to those of the diffusion of mobile in the 1990s and the spread of current generation broadband in the 2000s. Of course, this forecast, based on projections of past rates of change of broadband speeds sought by households and firms and on foreseeable changes in demand for services (for greater symmetry, for dealing with the effects of more wireless distribution systems connecting several users on the same premises, for HDTV entertainment services, etc.), is by no means certain. But it is also supported by the behaviour of operators in investing in higher speeds, especially in competitive conditions (see below).

From an economic and societal point of view, the key developments will be changes in the organisation (and location) of work and changes in the nature of consumption of communications services. From the standpoint of regulatory design, the nature of these changes is less significant than the assumption that they will be pervasive and unpredictable, and that the underlying productive investments which cope with them should be flexible and, where possible, competitive, in the interests both of eliciting the large scale investments in the first place and of ensuring efficient pricing and use of them. Hence an approach to regulation which focuses on likely key wholesale inputs is legitimate, leaving the demand side to determine how end users exploit them.[1] Sweden's preoccupation with equity issues also makes it important to focus on the universality of the new wholesale services.

With this in mind, what follows focuses on four issues:
1. the development of next generation access networks;
2. exploiting opportunities to profit from IP and other developments to deregulate interconnection and termination markets;
3. (further up the supply chain) ensuring the availability of spectrum as an enabler;
4. universal service issues.
[1] This argument is reinforced by the essentially competitive nature of retailing activities.
These issues (together with separation, which is not considered here) also constitute the core of the current European debate on reform of the regulatory framework. However, the present discussion goes beyond them, because member states have the power to exceed the minimum requirements, and Sweden is well placed to do so in the light of its current situation. Accordingly, the four major sections of the paper discuss the above elements, in the form of a main text supplemented by annexes. The fifth section brings together the conclusions and looks for a general characterisation of changes in regulation.

1. NGNs

My focus will be on next generation access (NGA) networks, since next generation core networks generally seem broadly to be a cost-reducing 'process innovation' which increases competitors' ability to compete and causes few specific regulatory problems. I suggest that there is a hierarchy of questions concerning the future development of the access market, and that they are best considered in relation to separate geographical markets where the conditions of competition differ significantly. The questions can be set out schematically as in figure 1.

Of course, it would be possible to jump over this process and conclude that the outcome will be a fibre monopoly, augmented by a number of wireless options. This possibility has been acknowledged, but certainly not embraced, by PTS.[2]
It is worth emphasising that if such a policy were adopted, or even if such a forecast were made, it is highly likely that the accompanying regulatory framework would make it self-fulfilling. This reflects the fundamental truth that, just as regulation reacts to market structure, so market structure reacts to regulation. Thus opening up a network is likely to deter competitive investment. This point has been examined empirically in a recent paper by Lars-Hendrik Roeller and others,[3] which investigates the relationship between access regulation and investment on the basis of data covering a panel of 180 European firms over seven or so years. It concludes that stronger access regulation had no effect on incumbents' investment but significantly reduced that of entrants.[4]
[2] The NGA Challenge – a regulator's view, Katarina Kampe, ITRE Committee, European Parliament, Brussels, July 2008.
[3] H. Friederiszick, M. Grajek and L-H. Roeller, Analysing the Relationship between Regulation and Investment in the Telecom Sector, March 2008.
[4] I understand that further, yet to be completed, analysis has qualified these initial results, in the direction of suggesting a decline in incumbents' investment in response to tougher access regulation, and a more ambiguous picture in relation to competitors' responses.
[Figure 1. Market outcomes and regulatory options. A decision tree: is there a commercial basis for investment in NGAs by 2015? If no, should public subsidy or public supply be implemented? If yes, is there scope for network competition? Where there is not, the question is how a monopoly NGA should be regulated: construct a cost model and regulate to standard LRIC cost [3], or devise an access regulation approach [4B]. Where there is, the next question is whether wireless technologies are in the market (no: wireline duopoly; yes: multi-operator market), followed by whether conduct is collusive or a dominant firm is present. The outcomes range from forbearance [1], where laxer regulation is preferred, through a suitable access regime where necessary [2], to the devised access regulation approaches [4A] and [4B].]
I do not adopt a pessimistic approach to NGA competition, but instead apply the approach in figure 1 to three different geographic areas, namely areas which are:
- potentially competitive;
- probably monopolistic;
- non-commercial.
These will then be linked to three types of solution:
- mandatory access to a subsidised monopoly NGA;
- mandatory access to a non-subsidised monopoly or dominant NGA;[5]
- forbearance from access regulation.

[5] The dominance could be jointly exercised.

In this section I will discuss the first two geographical areas, leaving the third, non-commercial area to Section 4 below on universal service. In Sweden the weight in NGA investment of publicly owned assets is likely to be greater than in many member states, so a special sub-section (C) is devoted to it.

A. Potentially competitive NGAs

This hinges on the question 'what is an NGA?'. For simplicity, I will define it as an access technology which can support high speed broadband, say at 40-50 Mbps download speed. This would have to be an expected average speed, or one achieved at least, say, 50% of the time. Assimilating mobile broadband into such comparisons is particularly difficult, because the whole network down to the device is shared amongst users, and reported speeds can be very variable (or even purely theoretical). On this footing, the universe of NGAs might include:
- fibre to the home/premises networks (FTTH/FTTP);
- fibre to the cabinet/node networks (FTTC/FTTN);
- upgraded cable networks (for example, using the DOCSIS 3.0 standard);
- fixed wireless networks (using e.g. fixed WiMax);
- mobile wireless networks (3G, LTE, mobile WiMax etc.).

Figure 2, taken from an IDATE publication in 2007, contains a projection of speeds up to 2011. One can infer the following from the projection:
- fixed network speeds are an order of magnitude (10x) greater than mobile ones;
- mobile speeds lag fixed ones by 3-4 years;
- 1 Gbps is quite practicable with a fixed network (e.g. the Singapore high speed broadband network currently being tendered is based on upgrading all premises to 1 Gbps);
- mobile speeds in excess of 100 Mbps are projected from 2009, when, according to some reports, the first LTE networks are planned to come into service.[6]

[Figure 2: Maximum downlink data rates of various technologies. Source: IDATE, Digiworld 2007 Presentation, 2007.]

Our horizon is 2015; hence it seems appropriate to treat wireless networks as potentially part of the high speed broadband world, provided spectrum is available (a matter discussed at Section 3 below). However, more conservatively, I will not assume that wireless technologies will fall in the same market as fixed networks. Perhaps they will, but it is equally likely that they will not, even though they are able to exercise a constraint from below (bearing in mind their more limited and variable capacity) on fixed network pricing.[7] Accordingly, in this section I will principally consider situations where the relevant network market is either a wireline monopoly or a duopoly. If this assumption is mistaken, then the resulting regulatory problems will be much easier to deal with, as Sweden will have five or six networks. It is hard to see how this issue can be investigated further on an empirical basis, although it is apposite to note that so far EU NRAs have not shown a willingness to include mobile broadband retail services in the same market as fixed services.

[6] The Economist, 19 July 2008, pp. 80-81.
[7] If wireless networks were part of the market, it would be a multi-operator one containing operators with significantly different cost structures and hence limited prospects for tacit collusion. This corresponds to the left hand side of Figure 1, where forbearance would be a possible outcome.
In the circumstances, it seems safer to frame the analysis in terms of separate fixed and mobile service markets, although this distinction leaves fixed wireless on something of a limb.

I understand that the current position in Sweden in relation to high speed broadband can be summarised as follows. The country has the seventh highest level of broadband penetration in the OECD, and it already has a high share of households enjoying fixed access to the internet via fibre (18% in 2007). A total of 600,000 subscribers to mobile broadband are projected for 2008, but at speeds of 3.6/7.2 Mbps.[8] Cable (largely consolidated under Com Hem) accounts for 21% of fixed access to broadband, and has a reach of 55% of households. Fibre is being installed not only by Telia Sonera (the infrastructure supporter of DSL-based broadband, which in 2007 accounted for 60% of fixed broadband), but also by municipal electricity companies. As a result, some areas are served by more than one fibre network (as well as by other networks); other areas have a fibre monopoly. Telia Sonera is also deploying VDSL technologies, which fall short in capability of fibre to the home, but which still qualify as NGA networks for the purposes of the present paper.

Two things flow from this. First, in Sweden (unlike, say, the UK) next generation access networks are well on the way to being installed, with a coverage of 30% of households. This means that risks concerning both the demand side (the price points at which end users will buy) and the execution of the project are less significant than almost anywhere else in Europe. Second, the majority of households have exposure to cable-TV and DSL technologies. The former have the potential to upgrade to higher speeds and new standards; Telia Sonera's existing copper network which supports DSL is being replaced by a fibre network; and new fibre entrants are offering service in many locations.

As a rough generalisation from international experience, there is one condition which seems sufficient to promote investment in NGAs, and, in the absence of this, a second condition which seems to be necessary. The first condition is competition, and the second is a degree of regulatory certainty. The reason for this is a key difference in the implementation of net present value methods to appraise investment in monopoly and in competitive environments. In a monopoly, replacement of an existing technology is broadly governed by the condition that the incremental cost of the new technology must be less than the sum of the variable cost of the old technology and the incremental revenues the new technology brings.[9]

[8] By way of comparison, in Australia Telstra's 'Next G' mobile network is capable of speeds of 20 Mbps.
[9] The cost of a fibre network is, of course, substantially reduced where an operator inherits physical assets and customers from a pre-existing copper network.
In relation to the replacement of copper by fibre in the access network, this can be expressed as:

Total costs of fibre < operating cost of copper + scrap value of copper network + incremental revenue from fibre.

In a competitive market, the right hand side of this relationship is augmented by two more terms: the revenue gained from competitors as a result of the investment, and the revenue protected from competitors by the investment (a numerical sketch of this comparison follows below). As an illustration of the competitive effect, KPN in the Netherlands was losing 10% of subscribers annually to cable companies before it became the first incumbent operator in the EU to install an NGA with wide area coverage.

In areas where an investment race involving Telia Sonera, cable and fibre entrants is in progress or could be stimulated, what regulatory approaches are available? The answer to this question is, of course, dependent upon market definition: a national market definition is far more likely to produce a dominant operator than a geographical market confined to competitive areas.[10] In such an area, single firm dominance may or may not be found. If it is not (and if the stringent conditions for a finding of collective dominance are not satisfied), then access regulation will not be permissible. This would result in a US-style model of competition between end-to-end networks. The expected outcome would be a race to sign up new customers, a limited range of choice for end users, and the distinct possibility of a collusive outcome in the future, when the 'land grab' phase of competition was over. Alternatively, in an environment with the option of mandating access to the dominant network, if we project the results of Roeller et al relating to earlier periods into the future, the impact of imposing a tougher access regime on the incumbent is likely to be to discourage competitive investment. Moreover, it is well known that NGAs permit more limited access than current generation networks. The options, which also include duct access, are shown in Figure 3.

[10] Of course, if different operators are dominant in different geographical areas, the number of geographical markets grows.
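Before turning to the access options in Figure 3, the replacement condition above can be illustrated numerically. The sketch below uses invented per-home figures (all names and numbers are mine, for illustration only; nothing here is drawn from Swedish data) to show how the two extra competitive terms can tip an otherwise negative investment decision:

```python
# Illustrative only: the copper-to-fibre replacement condition, with
# invented per-home figures (SEK thousands, in NPV terms).

FIBRE_TOTAL_COST      = 12.0  # build and operate fibre
COPPER_OPERATING_COST =  5.0  # avoidable cost of keeping copper running
COPPER_SCRAP_VALUE    =  1.0  # recovered by retiring the copper network
FIBRE_EXTRA_REVENUE   =  4.0  # incremental revenue from new fibre services

def replace_in_monopoly() -> bool:
    # Monopoly test: fibre's total cost against copper opex avoided,
    # scrap value, and the incremental revenue fibre brings.
    return FIBRE_TOTAL_COST < (COPPER_OPERATING_COST
                               + COPPER_SCRAP_VALUE
                               + FIBRE_EXTRA_REVENUE)

def replace_in_competition(revenue_won: float = 2.0,
                           revenue_defended: float = 3.0) -> bool:
    # Competitive test: two further terms on the right hand side --
    # revenue gained from competitors, and revenue protected from them.
    return FIBRE_TOTAL_COST < (COPPER_OPERATING_COST
                               + COPPER_SCRAP_VALUE
                               + FIBRE_EXTRA_REVENUE
                               + revenue_won
                               + revenue_defended)

print(replace_in_monopoly())     # False: 12 < 10 fails, so a monopolist waits
print(replace_in_competition())  # True: 12 < 15; competition tips the decision
```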
[Figure 3: Points of access to NGA networks. The diagram traces the network from the customer site over copper or fibre to the cabinet, the MDF/metro node and the core network, and locates four access products on it: access to passive components via (1) copper sub-loop unbundling (SLU) at the cabinet, (2) point-to-point fibre and (3) GPON/WDM fibre; and access to active components via (4) bitstream access.]
Subloop unbundling is only applicable to FTTC technologies; it may be practically or technically difficult; and unbundlers face the commercial challenge of recovering their costs from a pool of potential customers served by a cabinet, which is much smaller than that available at a local exchange, where local loops are unbundled. The major alternative, bitstream, can be devised to permit a degree of service differentiation. However, there is no experience of bitstream access in Sweden. This has promoted significant end-to-end competition, and competition based on full or shared access to unbundled loops amounting to 35-40% of DSL subscribers in Sweden as a whole, despite high wholesale prices by EU standards.

In my opinion, when regulating in actually or potentially competitive areas, PTS should consider a range of possible regulatory responses, being mindful of the desirability of promoting end-to-end infrastructure competition. In increasing order of severity, these are:
- no imposition of mandatory access; this would cut off from mandated supply at regulated prices those operators currently unbundling loops in the relevant areas, and would therefore be controversial (and possibly attract claims for compensation for stranded assets);[11]
- restricting mandatory access to fibre to specified wholesale products, for example those capable of speeds currently available from DSL technologies; this would give the installer of fibre exclusive access to higher speeds, unless it agreed commercial terms with access seekers; this is known in the UK as anchor product regulation, and one of its consequences is to protect existing customers of unbundlers from any unnecessary disturbance to their service as fibre replaces copper;
- imposing mandatory access obligations, but at reasonable rather than cost-based prices;[12]
- incorporating a risk-related element in the cost of capital when setting access prices; this approach is apparently favoured by Commissioner Reding;[13]
- application of the normal approach to cost-oriented pricing at the standard cost of capital.

It is obviously a stretch, which Roeller and his colleagues explicitly warn against, to apply their findings covering current generation broadband to NGAs. However, Roeller's results are consistent with the underlying theoretical proposition, arising in most models of network competition, that a tough access regime reduces the NPV of investment made by both the access provider and the access seeker. The former loses revenue in the downstream market as a result of facing more competition, and the latter chooses more readily to 'buy' access services rather than to 'make' them. Another point is that policy makers have no meaningful empirical results relating to NGAs, and hence may have to rely on analogous situations. If we interpret Roeller et al as weakly confirming the above theoretical results, in finding that the incumbent/first mover has incentives to invest which are independent of the access regime but that a generous access regime will diminish a competitor's/second mover's incentives, a basis is created for considering some of the variants at the top of this list, provided that a) the regulator attaches sufficient importance to developing infrastructure competition and b) competitive investment is not being undertaken already, which would imply that revenue 'concessions' to attract it are not necessary.

The argument for regulating more laxly becomes stronger the greater the competitive constraint wireless technologies represent. As noted above, this depends upon whether we focus on projected levels of wireless functionality (which will grow) or on the gap between fibre and wireless, which will be maintained.

To summarise, where competition between NGAs is a possibility, and where regulation is possible, the usual conflicts between long-run and short-run objectives and between service and infrastructure competition come into play. In my opinion, which is probably a minority one, careful consideration should be given to minimising regulatory intervention in such circumstances. But such a conclusion must depend on the circumstances of the particular market.

[11] I do not think that these claims are justified, since it is foreseeable technical developments, not a purely regulatory decision, which have caused the value of the assets to depreciate.
[12] This means use of the remedy in Article 12 of the current Access and Interconnect Directive, instead of Article 13.
[13] Europe's Way to the High Speed Internet: Why Effective Network Competition is the Freeway to the Future, Speech, 25 June 2008. The Commissioner personally favours a risk premium of around 15% (p. 7), though it is not clear to me what this means.
B. A monopoly NGA

It is almost inevitable that some parts of Sweden will be served by a single wireline NGA. Their extent may diminish over time, but the population served will still be significant. At present, DSL coverage in Sweden is 98%.[14] What regulatory regime will promote the replacement of copper with fibre where it is viable?

The problem here is that, absent competitive pressure, operators may choose to delay replacement of copper by fibre even when to do so has a positive net present value. This is because delay has an option value for the potential investor; waiting will reduce uncertainty without threatening pre-emption by a competitor. The operator may have to be bought out of that option by being awarded a higher rate of return for an earlier investment.[15] Ideally this should be conditional on bringing forward the investment. In practice, it may be inevitable that greater returns have to be made available whenever the investment is done. On the other hand, because some concerns about the costs of fibre connections and the increased revenue which they bring have been dispelled in Sweden, this consideration will be less important than elsewhere.

The access regime has two effects on an access provider's profits: a low access price restricts its revenues directly, and it also brings down retail prices generally, as enhanced competition eats away at the retail and other margins. It may thus affect the monopolist's incentives to invest (even though the evidence from the initial Roeller study is that the access regime does not affect the incumbent's investment levels). Placing greater emphasis on the latter conclusion would encourage PTS to continue to adopt a cost-based approach to pricing fibre access. But this policy must be seen in the context of the considerable delays in mandating wholesale broadband access and LLU in Sweden. Such delays have the effect of giving the infrastructure operator a significant 'start' in the race to sign up customers to a new service, and mimic, in exaggerated if involuntary form, the consequences of certain of the variants of LRIC pricing enumerated in the previous section. It is clearly important to take account of possible or probable slippage in the implementation of regulatory remedies.

In apparently monopoly areas, it may seem tempting to try to stimulate investment in NGAs by offering a monopoly concession, i.e. formally giving an operator the right to be the sole seller of the relevant service in a particular area.

[14] Service for many in these areas may however be poor, if they live a long way from the exchange.
[15] See M. Cave, 'Encouraging infrastructure competition via the ladder of investment', Telecommunications Policy, 2006, pp. 223-237. Note that this widens the above-noted gap between investor incentives in competitive and monopolistic environments.
This may not be consistent with the general framework of European telecommunications law, which for over a decade has been based on free entry. But even ignoring this consideration, it is important to be aware of the possible risks of such a policy. Firstly, the concession would not have much value to an operator whose calculations suggested a low possibility of entry. Alternatively, if subsequent entry were feasible, then the harm end users would experience from being denied an alternative source of services would be considerable. In other words, either the concession would be of little value, or it would be positively harmful. This is in addition to issues concerning the precise specification of the monopoly (does it exclude mobile networks, etc.?).

Finally, the construction of an NGA in a monopoly area, as in a more competitive one, can be encouraged by the opportunity for access provider and access seeker to construct a risk-sharing contract. This might take the form of a joint venture (like the proposal by eight operators in Australia to build a national NGA), but such plans may founder on disagreements, for example those among the Australian joint venturers over whether to take the fibre to the cabinet or to the premises. More plausibly, an access seeker might propose a long-term or 'take or pay' contract, thus benefiting from a quantity discount or an adjustment in price to take account of the purchaser's assumption of risk. Unregulated arrangements of this kind are fairly commonplace. But they impose a challenge for a regulator seeking to establish whether they are discriminatory, i.e. to verify that the prices and other terms in the contract market and in the spot market do not advantage one or other class of purchasers. The European Parliament's ITRE Committee is very keen on this possibility, but I have my doubts as to whether the two sides would agree. It would also be virtually impossible for a regulator to impose so complex a reference access offer.

C. Public and municipal investment

In Sweden, Telia Sonera's fibre installations are most nearly rivalled by investments made by public sector organisations, principally municipal electricity companies, which can benefit from a pre-existing infrastructure to install fibre cheaply. Such public financing creates no problem from a State aids standpoint where it satisfies the market economy investor principle (MEIP). Where it does not, the Commission has identified three types of area: white, where a subsidy is required; black, where it is clearly not required; and grey, in the middle.

Municipal investments in Sweden do not appear to have faced restriction on State aid grounds. However, the Commission's 13th Implementation Report notes their failure in many cases to offer wholesale transmission products, and that this means they do not offer competition to Telia Sonera in wholesale products.[16]
As a result, the openness they are supposed to provide is qualified, and their impact falls in the retail market. In short, their behaviour appears to have many of the characteristics of privately-financed competitors, which prefer to sell to retail rather than to wholesale customers. Nevertheless they provide a competitive threat to the incumbent, which benefits end users. They may also be less willing than an investor-owned company to tacitly collude over pricing down the line.[17]

D. The impact of separating NGAs

This issue has attracted heated debate in a number of countries, mainly outside Europe, where separated contexts for the construction of NGAs are being imposed or contemplated in Australia, New Zealand and Singapore. At its simplest, the issue is which cost is the larger: the cost of separation, in creating difficulties in co-ordinating operations and, especially, investment across a line of functional or ownership separation; or the cost of non-separation, represented by the harmful consequences of the non-price discrimination which the vertically integrated NGA operator can practise, which will chill competition and harm end users. I have summarised the underlying theory and available evidence relating to current generation networks elsewhere.[18] To summarise, the conclusion seems likely to be very case-specific, as NRAs differ considerably in the degree to which they are able to control non-price discrimination by behavioural remedies, without resorting to structural interventions.

In relation specifically to NGAs, an NRA should be able to exploit their layered and modular structure in a way which would make discrimination more obvious and easier to control than is the case with current networks, which were originally conceived as monopolies. On the face of it this would weaken the case for separation, but might not defeat it in particular circumstances. Because I believe that the basic problem which separation is intended to solve is that of non-price discrimination, I do not believe its presence or absence would have a major impact on the effectiveness of price regulation.

E. The Commission's draft recommendation on regulating NGAs

[16] 13th Implementation Report, Staff Papers, Volume 1, pp. 303-4.
[17] There is a literature which proposes keeping one member of an oligopoly in public ownership precisely for this reason.
[18] M. Cave, Separation and Investment in Telecommunications Networks: a Review of Recent Practice, February 2008.
The draft Recommendation[19] appears to fall into the error of technological specificity in its account of what an NGA is (see Explanatory note, pp. 4-5). It starts with the premise that NGAs exhibit SMP and require regulation, which, as noted above, may not always be the case, especially if geographic market definitions are employed. The note does, however, refer to the possibility of a newly emerging retail service which would place any specific supporting wholesale service in a newly emerging market too. But with this exception, the Recommendation is concerned with what remedies should be imposed if SMP in markets 4 and 5 has been identified.

The Recommendation focuses primarily on access to passive assets, particularly ducts, which are seen as supporting the maximum level of infrastructure competition. Here there is a surprising level of detail about how ducts should be valued and how access to them should be priced. New ducts, but not old ones, might qualify for a higher risk premium in the cost of capital (which the Commission says NRAs have set at 8-12%, a figure which presumably can be compared to Ms Reding's proposal of a risk-inclusive figure of 15%[20]).

The Recommendation notes that wholesale broadband access is likely to survive as an NGA product in many jurisdictions. It notes that its pricing must be made consistent with that of access to passive assets, and that margin squeezes must be avoided. This seems to create difficulties for advocates of a pricing methodology that yields (relatively) higher prices for access to active services (e.g. bitstream) than to passive ones, such as ducts.[21]

In summary, the Recommendation embodies new thinking (mainly developed by NRAs) on ducts, but a fairly traditional approach to defining the range of access products to be made available and to their pricing.

F. Conclusions

The following linked issues have been considered:
- do NGAs lead to remonopolisation?
- how should they be regulated?
- what approach should be adopted to geographical markets?

On the first issue, in my opinion, while there is a chance of remonopolisation, regulation can reduce it, just as it could, conversely, enhance it. The attempt may fail in the long run, but I consider that an unlikely outcome.

[19] Draft Commission Recommendation on regulated access to Next Generation Access Networks (NGA), September 2008; Explanatory note, September 2008.
[20] See fn 13 above.
[21] This is proposed by Ofcom in Delivering Superfast Broadband in the UK, September 2008.
On the second question, in terms of figure 1, the relevant approaches are those numbered [1] to [4], although each contains a number of variants. They can be summarised as follows:

[1] Absent significant market power in the wholesale network infrastructure access and wholesale broadband access markets, there is no legal basis for regulation. This assessment is highly dependent on geographical market definition (see below).

[2] In the unlikely event that two or more firms are jointly dominant in the market, some form of cost-oriented price control could be introduced, in the knowledge that both had already undertaken the necessary investment.

[3] and [4] In these outcomes, wholesale prices are regulated on a scale of rigour, from a traditional LRIC basis [3] to allowing a higher degree of discretion to the access provider, at least for certain products. The purpose of such departures from the normal approach should be seen in case [4A] as relating to the encouragement of competition, and in case [4B] as relating to the bringing forward of NGA investment.

Finally, on the issue of geographical markets, there seems to be a good logical case, grounded in competition economics, for a different market analysis and (probably) for different remedies in markets characterised by significantly different conditions of competition. However, the decision must also take account of practical matters and will inevitably be to some extent reverse engineered from the preferred regulatory strategy.

2. Interconnection (especially termination)

A. Introduction

The second issue to be considered is the future of interconnection. Under the revised Recommendation on relevant markets, only call origination and termination on fixed and mobile networks survive among the interconnection products. The remainder are access products (giving direct access on the part of the access seeker to the end user's spending power) or, in one case, a retail market.

After a period of quiescence, there has been something of an uprising in the analysis of termination markets, and this has now found oblique expression in the Commission's draft Recommendation on the regulatory treatment of fixed and mobile termination rates in the EU. This section considers rationales for the approach or approaches PTS might take, in the light of the ideas set out in its June presentation entitled 'Interconnection – current market situation and the way forward'.
Annex A contains some more detailed material on the economic analysis of termination. Most of the following discussion is framed in terms of mobile competition, but the analysis applies equally to fixed termination.

The complexity of the current situation arises in part because of the co-existence of a 'switched network' model of interconnection and an 'internet' model, which applies to data but also includes VoIP. However, the trend in both cases is towards a reduction in termination rates, possibly to zero. PTS thus has to work out whether this is in the interests of Swedish end users.

B. Costing issues

The new discussion is a significant departure from the (until recently) canonical model of termination, which started with calling party pays (CPP), or, more literally, calling party's network pays (CPNP), and then applied a standard long-run incremental costing approach to set the charges. This raises numerous cost classification issues, some of which can be illustrated with reference to figure 4, which describes the approach to cost modelling pioneered in the UK (the first member state to regulate mobile termination charges).

The conventional wisdom about network costs, developed during the period of rapid expansion of GSM networks, is that the relationship between network costs and a 'volume' measure of total incoming and outgoing minutes is approximately linear, reflecting the fact that, in any location, the addition of more calls will involve, with a given spectrum assignment, the 'seeding' of more base stations. This leads to the relationship AA in figure 4. Imposing the constraint that investment is lumpy will lead to a step function, linearised as BB.
[Figure 4: The cost structure of a mobile network. Network cost is plotted against total incoming and outgoing minutes: AA is the approximately linear traffic-driven relationship, BB the linearised step function, and CD a horizontal coverage-driven segment up to output level X, after which costs follow DB.]
It was recognised that there was a major 'exception' to the rule that most traffic costs are traffic-sensitive. If an operator chooses, or is obliged through a coverage requirement, to install base stations in areas with persistently insufficient demand to fill them up (at output level X in figure 4), the cost curve in that area will first be horizontal (CD), and thereafter correspond to DB. How widespread this phenomenon was would depend on the level of demand in each location, the scope for facilities sharing, and the number and size distribution of operators. It would be possible to set special per-minute charges for calls in such underpopulated areas, but this is likely to meet with consumer ignorance and resistance. In any case, a higher per-minute charge would fail to capture the fact that the marginal cost of the traffic was approximately zero.

With the development of 3G networks, however, persistent overcapacity became something of a norm. The unexpectedly slow rate of conversion to 3G, combined with its superior technical properties and smaller cell size, led to widespread excess capacity. Spectrum was also available in significant quantities in some jurisdictions, and this reduced the need to economise on its use as traffic grew. As a result of these forces, it is now widely believed that the alternative formulation of costs, in which there is a substantial element of fixed costs and marginal costs are low (as in a fixed network), better represents reality. The consequences of this alternative view are radical: if costs are fixed rather than traffic-sensitive, then they should not be recovered in per-minute charges. (A simple sketch of the two cost views follows.)
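By way of illustration only, the following sketch contrasts the two cost views with invented parameters (not drawn from any PTS cost model). Note that in the coverage-driven view the marginal minute below the fill point X costs nothing, which is precisely why recovering such costs in per-minute charges is hard to defend:

```python
# Invented parameters, for illustration only: the two views of mobile
# network costs discussed above, as functions of total (incoming plus
# outgoing) minutes.

COST_PER_MINUTE = 0.02        # traffic-sensitive cost per minute
COVERAGE_COST = 400_000.0     # fixed cost of the coverage network
FILL_POINT = 20_000_000       # output level X at which coverage sites fill up

def traffic_driven_cost(minutes: float) -> float:
    # Curve AA: extra minutes steadily 'seed' more base stations.
    return COST_PER_MINUTE * minutes

def coverage_driven_cost(minutes: float) -> float:
    # Segment CD then DB: below X the cost is the fixed coverage cost and
    # the marginal minute is free; above X, traffic drives cost again.
    if minutes <= FILL_POINT:
        return COVERAGE_COST
    return COVERAGE_COST + COST_PER_MINUTE * (minutes - FILL_POINT)

for m in (5_000_000, 20_000_000, 40_000_000):
    avg = coverage_driven_cost(m) / m
    marginal = 0.0 if m <= FILL_POINT else COST_PER_MINUTE
    print(f"{m:>11,d} min: traffic view {traffic_driven_cost(m):>9,.0f}, "
          f"coverage view {coverage_driven_cost(m):>9,.0f} "
          f"(average {avg:.3f}, marginal {marginal:.2f} per minute)")
```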
However, most cost models (including those developed for PTS) have adopted essentially the first approach. Moreover, as with many regulated interconnection prices, the pricing principle has been LRIC plus an equiproportionate mark-up (EPMU) of common costs. The share of common costs which can be recovered in termination charges is clearly important. Operators providing termination services have sought to include in it their customer acquisition costs, which include handset subsidies and are very large, but regulators have tended to attribute these exclusively to retail activity, rather than allow them to be recovered in network charges. Operators have also sought to replace the EPMU principle of equal mark-ups with one based on demand-based mark-ups or Ramsey prices, which (they claim) would raise termination rates, given the low price elasticity of demand for termination services.

It is clear from the Commission's draft Recommendation that the alternative 'new' view of cost causation can have an enormous effect on the level of termination charges. But before considering that document in more detail, it is useful to go back to first principles governing socially efficient interconnection charges, and analyse what unregulated agreements operators would tend towards.

According to the traditional view that calls are a service purchased by, paid for by and benefiting solely the caller, efficiency is achieved by having interconnection or termination rates set equal to marginal costs.[22] This is necessary to permit efficient pricing of, and an efficient level of consumption of, calls. A deviation from this principle, for example by substituting average incremental for marginal cost, and/or incorporating a margin to recover some common costs, introduces a departure from the first best. This is one of the factors behind the current desire to replace a fairly generous interpretation of average incremental costs with mark-up by a parsimonious version of the 'traffic sensitive' costs of termination.

C. Two-sided markets, callee benefits and strategic interactions

However, this is a 'one-sided market' approach. According to the two-sided platform approach, the telephone network produces a service which is used simultaneously by two or more groups of customers. Such markets have the characteristic that the volume of transactions can be affected by charging one side more and the other less, as under CPNP, for example. In other words, the price structure matters. Clearly, in telecommunications, the candidates for the two sides are the initiator and receiver of the call, who may be on the same network or on different networks.

[22] See M. Armstrong, 'The theory of access pricing and interconnection', in M. Cave et al (eds), Handbook of Telecommunications Economics, Vol. 1, Elsevier, 2002, pp. 295-385.
In the former case, the interactions are relatively easily internalised by the platform. In the latter case, it is more difficult and market failure is likely. Valletti summarises the effects of two-sided markets as follows:

"2SPs involve inter-group network externalities and are relevant in many industries, including telecommunications. As a result of these externalities, socially-optimal prices in 2SPs typically depend in some intricate way on price elasticities of demand, inter-group network effects and costs.... Another result of externalities is that socially optimal prices in 2SPs, generally, are not cost-based... For example, incremental cost pricing is typically not efficient with 2SPs. The removal of alleged cross-subsidies does not necessarily benefit the side that pays a price above cost. This is because, by increasing the other price, some users may drop off, thus making the product less valuable to the [other] users as well."

The conclusion which seems to flow from this is that a variety of models is needed to design optimal policies for termination charges. These are likely to require a specific formulation of the utility functions of the parties involved. The most significant recent development is the explicit recognition that both caller and callee derive benefit from the call. In a very simple model, suppose that the benefits of the call are divided between caller and callee in fixed proportions. If the marginal costs of origination and termination are divided between an outgoing call price and a reception price in proportion to those benefits, there will be the right incentives to use the network. But if the calling party pays convention operates, there will be too little demand for calls, since the calling party pays the full costs. (A simple formalisation of this point follows below.)

Another possibility which might emerge is the so-called 'connectivity breakdown', when there is a distinction between the price of on-net and off-net calls. This may happen because, if the benefit of receiving a call is relatively small, the receiving network wants to reduce such calls and sets a high off-net reception price. If the benefit of receiving a call is high, and that of originating one is low, then the originating network wants to choke off outgoing off-net calls, which mostly benefit its competitors' clients. This is to be contrasted with the treatment of on-net calls. The network will try to set the price of these as low as possible, subject to their marginal cost (which, as we have noted, may be low), in order to benefit its clients. There will thus be a preference for recovering fixed costs in fixed charges.

This takes us into the territory of strategic interaction among operators. Here there is a clear distinction between interconnection between operators serving the same retail markets and operators serving different markets.
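First, though, a minimal formalisation of the callee-benefit point flagged above (the notation is my own, not taken from the models under discussion). Suppose a call costs $c = c_O + c_T$ (origination plus termination) and yields gross benefit $u$, of which the caller receives a fixed share $\beta$ and the callee $1-\beta$. Splitting cost in proportion to benefits gives

$$p_{\mathrm{call}} = \beta\,(c_O + c_T), \qquad p_{\mathrm{recv}} = (1-\beta)\,(c_O + c_T),$$

so the caller calls whenever $\beta u \geq \beta c$ and the callee accepts whenever $(1-\beta)u \geq (1-\beta)c$; both conditions reduce to $u \geq c$, and exactly the efficient calls are made. Under pure CPP, by contrast, $p_{\mathrm{recv}} = 0$ and $p_{\mathrm{call}} = c$, so calls with $\beta u < c \leq u$ are socially beneficial but never made: demand for calls is too low.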
Recognition of the key role which the setting of mobile to mobile termination rates plays in strategic interaction among operators is, as noted above, probably the main innovation of recent economic models. The operators' conduct has been characterised in several ways.

In the earliest version, roughly symmetric operators collude to set high reciprocal rates per call minute. These support high retail rates and (absent differential on-net and off-net prices) all operators can make excess profits. However, it is now well known that high access prices harm welfare and reduce potential profits, so that if an alternative pricing structure is available, such as non-linear pricing, firms will prefer it. Alternatively, it is in the operators' interests to agree a low termination rate, because such a rate softens competition between them. A low termination rate diminishes the incentive to gain customers, because they bring with them low termination revenues when they receive calls. (It is a waterbed effect operating in reverse.) In the limit this might even lead to agreement on a bill and keep system. Where operators are asymmetrical, the established ones will want to discriminate against the entrants. This is exemplified by experience in a country where two operators had agreed a very low termination rate between them, but wanted to charge entrants a much higher rate.

Strategic goals are also pursued in the choice of the on-net/off-net call price differential. It has been shown that this variable can be used as an anti-competitive weapon.[23] A combination of scale disadvantages and unbalanced traffic makes smaller operators vulnerable to low on-net rates offered by larger operators, and the standard regulatory response of allowing small operators to charge higher termination rates exacerbates the on-net/off-net differential. As a result, smaller operators will argue for regulation, not only to combat the crude termination price discrimination noted above but also to enforce a regime such as bill and keep, which benefits those with a preponderance of outgoing traffic, just as they are harmed by inflated termination rates. If they are successful, the resulting elimination of large on-net/off-net call price differentials, which will result from operators having a general preference for having another operator terminate their outgoing calls at zero cost to themselves, is likely to reduce the current imbalances in traffic.

The tenor of some of the conclusions is that we no longer have the previous certainty about the need for cost-based regulation of termination to avoid excessive prices. It may be necessary, but equally it may not.[24] A common feature of several models is that negotiated prices between mobile operators may be at or even below the efficient level, rather than above it.

[23] S. Hoernig, 'On-net and off-net pricing in asymmetric telecommunications networks', Information Economics and Policy, 2007, pp. 171-188.
[24] See the models discussed in Annex A.
But this does not apply to fixed to mobile termination rates, which are seen as a continuing source of market failure, at least when fixed termination rates are assumed to be regulated.

What can regulators do in these circumstances? The following propositions, in ascending order of contentiousness, emerge from the discussion or (at least one of) the models:
- take care to differentiate the analysis of fixed to mobile and mobile to mobile termination markets;
- clarify which of the two alternative views of mobile costs is correct;
- in the case of mobile to mobile (or fixed to fixed) termination, consider the scope for negotiated solutions subject to less intrusive regulatory instruments, including a requirement for reciprocity and/or a requirement for non-discrimination; this can be accompanied by a safeguard price control, of the kind which is permitted under Article 12 of the Access and Interconnect Directive;
- be alive to the consequences of asymmetry in network size and, more particularly, of unbalanced traffic;
- bring mobile termination rates progressively (or immediately) down to zero, or move directly to bill and keep.

The first two are not controversial, and the implications of the third may have to be qualified in the light of the fourth point. The prospects for the fifth point are considered below.

D. The Commission's draft recommendation on termination and conclusions

I now consider the Commission's draft Recommendation in the light of the new thinking, focussing primarily on its proposals relating to mobile termination. Their effect is essentially to whittle away at the recoverable cost elements in the traditional approach, rather than to rely on the 'call externality' arguments set out above.[25]
Essentially the Recommendation proposes the following:
- exclusion from cost recovery through termination charges of the cost of providing basic coverage; this is equivalent to the creation of a mobile 'access service', on the analogy of access provided by the copper loops in the fixed network;
- this is accompanied by a stipulation about the sequence in which increments are taken when allocating the costs of a mobile network, designed to make the termination of calls the last increment, after retail, core network and origination; this ensures that all common costs are excluded from the costs of termination (a stylised numerical illustration follows after this list);
- hence, in relation to costs, the ones to be included under the recommended approach would be those driven by the need to increase capacity to carry wholesale voice termination services, over and above that necessary to provide retail services to subscribers; the effect of this is to confine the costs recoverable in termination charges to those of selling wholesale termination and a small element of traffic-sensitive incremental costs of termination;
- in many cases the latter would seem wholly to exclude spectrum costs, because spectrum licences are in most member states indivisible, i.e. the operator cannot alienate unwanted MHz available to it within a given licence.

[25] This argument is distinct from the 'network externality' argument which justifies high termination charges as a source of funds to expand network penetration. The Recommendation rejects this on various grounds, including uncertainty about the waterbed effect (Draft Recommendation, explanatory notes, p. 20).
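The force of the sequencing rule can be seen with invented numbers. In the sketch below (all figures hypothetical), a cost shared between origination and termination is attributed entirely to whichever service is costed first, so taking termination last strips its increment down to its service-specific cost:

```python
# Invented numbers: how the order in which increments are taken determines
# the common costs attributed to termination.

SHARED_NETWORK_COST = 60.0                          # needed by both services
SPECIFIC_COST = {"origination": 20.0, "termination": 10.0}

def incremental_costs(order):
    # Each service is costed as the extra cost of adding it to the services
    # already in the stack; the first service therefore absorbs the shared cost.
    costs, shared_assigned = {}, False
    for service in order:
        extra = SPECIFIC_COST[service]
        if not shared_assigned:
            extra += SHARED_NETWORK_COST
            shared_assigned = True
        costs[service] = extra
    return costs

print(incremental_costs(["origination", "termination"]))
# {'origination': 80.0, 'termination': 10.0} -- termination last: bare 10
print(incremental_costs(["termination", "origination"]))
# {'termination': 70.0, 'origination': 20.0} -- termination first: 70
```

Either ordering is internally consistent; the allocation of the shared cost is a choice, not a finding, which is precisely the scissors point made in the next paragraph.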
In my opinion, the case for a mobile 'access service' is intellectually justifiable. However, no argument is given for taking the increment of termination last, with its convenient consequence that common costs are recovered on other services. Origination and termination are both indispensable elements in producing a call, and, to adapt a saying of Alfred Marshall, it makes no more sense to ask which should take priority than to ask whether the upper or lower blade of a pair of scissors is responsible for cutting a piece of paper. This leaves NRAs in the slightly awkward position of dealing with a Recommendation (unless it is amended) which contains proposals which do not follow from the Commission's own arguments, but which are sensible on other grounds.

What are those other grounds, considered so far? In terms of the arguments set out above, reduced levels of mobile termination charges:
- bring them more in line with marginal costs;
- take better (informal) account of the mutually beneficial nature of many calls (the call externality);
- are consistent with what mobile operators would agree to in certain contexts;
- contain the potential for eliminating the termination bottleneck under CPNP.

Unfortunately, however, the set of circumstances in which all operators would voluntarily move to zero (bill and keep) or nearly zero termination rates is very limited. This arises essentially because traffic flows are asymmetrical, as Swedish data demonstrate. Operators with a surplus of incoming minutes want to keep rates high (to augment their revenues and injure their competitors), while those with a deficit want them to be zero. This conflict of objectives makes it virtually certain that agreement will not be found to move to bill and keep, either with respect to M2M termination within the group of mobile operators or with respect to F2F termination within the group of fixed operators. (It is even less likely that all operators would agree on F2M and M2F termination!)
Instead, regulatory intervention, probably covering fixed to fixed as well as mobile to mobile termination, is needed. A reduction to zero would be hard to achieve within the current European regulatory framework, where bill and keep is not a standard remedy. There is thus a disjunction between the Commission's true objectives and the means available to it to realise them, leading to the unconvincing and ad hoc nature of its reasoning.

There is, however, an even larger consideration which will take effect by 2015 and which also points towards low or vanishing termination charges: NGNs will inaugurate IP-based interconnection of voice calls, despite some operators' strong motives to delay this process. IP-based interconnection of data traffic is unregulated, based (largely) on what might be called the 'downloading party pays' principle underlying the relationship between a customer and her ISP. This is accomplished through a combination of bill and keep (peering) and payment (transit). Peering is allowed only between backbone operators sufficiently symmetrical in terms of their traffic flows, while the alternative transit payment regime applies to non-symmetrical relationships. Voice calls over broadband are already part of this system. However, as the Recommendation points out, high termination charges give (especially mobile) operators a motive to delay the switch to IP interconnection. This adds to the argument for lowering them.

How could a regulator seek to manage a transition of this magnitude, which takes termination rates to zero? It would be a mistake to exaggerate its difficulty. NRAs have already regulated down termination rates from something like 30 cents per minute or above to their present levels. Mobile operators' business models have not cracked under that strain. They have also assimilated substantial reductions in intra-EU roaming rates. The trick may be to set out a plan over 2-4 years to accomplish the goal, and then stick to it. I think this would be a good plan for Sweden, and it would be quite feasible if the final version of the Recommendation is similar to the draft and is taken up by other NRAs.

In summary, the revised Recommendation appears to represent an intellectually unsatisfactory but practically useful way forward from the current dysfunctional approach to termination charges, especially on mobile networks. It has the potential to bring mobile rates down from present excessive levels, and ultimately to remove the termination bottleneck, by moving away from CPNP. Unfortunately, conflicts of interest between operators with traffic imbalances in different directions make it unlikely that they will agree to make the change voluntarily. It may also be desirable to have a transitional period to allow operators to amend their charging structures or even their business models. But there seem to be good reasons for believing that the direction of travel in favour of lower rates is a good one, which can also lead seamlessly towards, and bring forward, the advent of IP interconnection, which competition law seems capable of regulating.
seamlessly towards, and bring forward, the advent of IP interconnection, which competition law seems capable of regulating. 3. Spectrum Policy Sweden’s spectrum policy is one of the most market‐oriented in the EU. This has permitted high levels of mobile voice penetration and the increasing use of wireless technologies for broadband. Auctions are typically used for the assignment of spectrum licences, and spectrum policy is constantly under review. This section sets out how these foundations can be strengthened over the next eight years. I consider in turn: - spectrum auctions; - secondary trading in spectrum rights and liberalisation of spectrum use; - controlling receivers; - public sector use of spectrum; - Determining the extent of unlicensed spectrum. Annex B describes new spectrum‐using technologies, which may alleviate some of these problems. The trend is towards increasingly sophisticated ways of packing more uses into given bands. Art present, much of this is visionary or experimental, but the question is not whether but when the new approaches will deliver more wireless capability. A. Spectrum auctions These are now familiar. Recent focus has been on ways of adapting auctions to permit alternative uses of the spectrum. There are two problems in particular: - how to package the spectrum in a way which is neutral across different users; - How to locate bidders proposing different uses within a band in such a way as to minimise interference. One way of tackling both those issues is via a combinational clock auction. In such an auction, bidders can submit combined bids for several lots, as well as bids for individual items. The auctioneer chooses the set of bids which maximise overall receipts. As a result, if a bidder required 5MHz to undertake an activity, and the lots are in units of 1.7 MH there is no risk of the bidder ending up with success in, say, two lots only. In one version of a clock auction, a number of identical lots are identified, and bidders respond with the quantities they wish to purchase as the bidder raises (or lowers) the price. Once equilibrium is found, the auctioneer allocates a particular lot or lots to successful bidders in ways which minimise interference among them. 26
26 Alternatively, the clock process can be replaced by a sealed bid stage at a point where demand has come close to supply.
The combinatorial approach introduces complications into the auction process, but use of the clock mechanism tends to simplify it. The combination is likely to be used in auctions of the UK 'digital dividend' spectrum, where several efficient uses are possible. In other words, auction design can promote service neutrality. 27
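As a rough illustration of the clock stage just described, the sketch below raises a single per‐lot price until aggregate demand for a set of identical lots no longer exceeds supply. The bidders and their demand schedules are invented for the example, and both the package‐bidding (combinatorial) stage and the final interference‐minimising placement of lots are omitted.

```python
def clock_auction(bidders, supply_lots, price=1.0, increment=0.25):
    """One price 'clock' for identical spectrum lots: the auctioneer raises
    a per-lot price and bidders respond with the quantity they still want;
    the clock stops once aggregate demand no longer exceeds supply.
    `bidders` maps a name to an assumed demand function price -> lots."""
    while True:
        demands = {name: demand(price) for name, demand in bidders.items()}
        if sum(demands.values()) <= supply_lots:
            return price, demands          # (near-)market-clearing outcome
        price += increment                 # excess demand: tick the clock up

# Hypothetical bidders whose demand falls as the per-lot price rises:
bidders = {
    "mobile_operator": lambda p: 4 if p < 3.0 else 2,
    "broadcaster":     lambda p: 3 if p < 2.0 else 1,
    "new_entrant":     lambda p: 2 if p < 2.5 else 0,
}
print(clock_auction(bidders, supply_lots=6))
# -> (2.5, {'mobile_operator': 4, 'broadcaster': 1, 'new_entrant': 0})
```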
B. Secondary trading and liberalisation Spectrum trading denotes a market mechanism whereby rights (and any associated obligations) to use spectrum are transferred from one party to another for a certain price. There are a variety of market mechanisms that can be used to trade spectrum. These include bilateral negotiations (the seller and the prospective buyer directly negotiate the terms of the sale), brokerage (buyers and sellers employ a broker to negotiate the contractual terms under which the transfer of usage rights can take place), exchange (a trading platform operating under specific rules, similar to a stock market, is established) and so on. It is also possible to combine more than one of these approaches. Spectrum trading is arguably the most relevant market‐based mechanism available—
and more potent than auctions—as it makes the gravitation of spectrum to its most efficient use a permanent feature of the allocation and assignment system. Yet in practice its impact has been small. Outside the USA, where spectrum trades worth billions of dollars have taken place, the level of trading has been generally low worldwide. For instance, experience in the UK to date has been disappointing, and trades in Australia, while much larger in number than in the UK, have nevertheless been fairly modest. Several possible reasons have been suggested, with each likely to have had some influence. Some of those reasons are closely related to features of spectrum markets:
• Insufficient level of liquidity in spectrum markets, which may take different forms (e.g. low volumes of tradable spectrum, low flexibility of tradable spectrum, licences issued on a national rather than regional basis);
• Non‐tradability of public spectrum holdings;
• High level of transaction costs (e.g. the procedures for authorising trades are complex and lengthy);
• Insufficient information provided about tradable spectrum (e.g. prices, frequencies);
• Inadequate development of private band managers who allocate spectrum licences (in particular, efforts to create band managers might have resulted in too few firms holding the most valuable spectrum);
• Efficient initial allocations of spectrum that market and technology changes have not altered;
27 These issues are discussed in Ofcom, Digital Dividend Review: 550-630 MHz and 790-854 MHz: Consultation on detailed award design, March 2008, Ch 8.
• Large programmes of primary awards: spectrum users may have concerns over the risk of interference in spectrum acquired in secondary markets; hence they may prefer to buy spectrum at primary awards;
• Availability of unused spectrum.
A second set of reasons affecting the level of spectrum trades is more closely related to the regulatory framework:
• Uncertainties due to phased liberalisation of spectrum use (additional concerns may arise when trading and liberalisation are not introduced over the same bands at the same time);
• Likely modifications of spectrum usage rights by regulatory fiat to allow some unlicensed users to operate in spectrum licensed to others (e.g. ultra‐wideband applications);
• Lack of alignment of licence terms and conditions (including technical parameters and tenures with different expiry dates);
• Length of licence tenures, which affects the potential for spectrum to be utilised for services that require substantial (and at least partially sunk) investment. 28
Market power would be a further issue if a firm were able to obtain exclusive rights over sought‐after bands of spectrum which had few or no substitutes. In such a case, the owner of the rights may be able to restrict or distort the supply of spectrum, thereby reducing competition in downstream services, to the detriment of end users. This behaviour is distinct in its motivation from acquiring access to spectrum in anticipation of future need, although in practice discrimination between the two types of behaviour may be difficult. There are, however, reasons to believe that, at least in developed economies, 29 market power in spectrum should not be a significant issue in future:
• Technological advances will see greater substitutability between spectrum bands; hence a policy of 'cornering the market' in spectrum, in order to profit in services markets, is likely to be unsuccessful;
• Existing hoarding is most likely to be related to lack of effective secondary markets;
• Regulatory bodies exist that can investigate and address concerns relating to market power.
28 For instance, a 20 year licence period will result in the licence value falling over time, such that any buyer on the secondary market would have a shorter period within which to recover his or her investment in the spectrum.
29 In developing countries, especially small economies with limited capital markets, there may be a lack of players; there may therefore be a greater danger of hoarding and the emergence of dominance.
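Footnote 28's point is easily illustrated with a present‐value calculation under assumed figures (an annual rent of 10 and an 8% discount rate, both hypothetical): the same licence is worth far less to a secondary‐market buyer with 5 years left to run than with 20.

```python
def remaining_licence_value(annual_rent, years_left, discount_rate=0.08):
    """Present value of a fixed-term licence as its remaining tenure
    shrinks: a late buyer has less time to recover any sunk investment."""
    return sum(annual_rent / (1 + discount_rate) ** t
               for t in range(1, years_left + 1))

print(round(remaining_licence_value(10, 20), 1))  # ~98.2 with 20 years left
print(round(remaining_licence_value(10, 5), 1))   # ~39.9 with only 5 left
```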
To promote spectrum markets, it is useful to provide some information about spectrum use. The availability of databases of licences for spectrum use may play a significant role: databases could provide operators with sufficient information to understand who their neighbours will be, for what purpose they are currently deploying their spectrum, and what interference limits they are subject to. 30
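As an indication of what an entry in such a database might contain, a minimal record could look like the sketch below. The field names and values are illustrative assumptions, not an existing PTS (or Commission) schema.

```python
from dataclasses import dataclass

@dataclass
class SpectrumLicenceRecord:
    """A hypothetical minimal entry in a public register of spectrum licences."""
    licensee: str
    band_mhz: tuple[float, float]    # lower and upper band edges
    geographic_area: str
    current_use: str                 # e.g. 'mobile broadband', 'fixed links'
    max_eirp_dbm: float              # permitted transmit power
    interference_limit_dbm: float    # protection level neighbours must respect
    expiry: str                      # licence end date, ISO format

# A prospective trader or neighbour can see who is next door and on what terms:
print(SpectrumLicenceRecord("Operator A", (791.0, 796.0), "National",
                            "mobile broadband", 61.0, -110.0, "2029-12-31"))
```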
Liberalising moves (such as removing or lowering restrictions on use, and encouraging spectrum sharing) will improve the flexibility of spectrum use; this, in turn, should increase efficiency and confer greater economic benefits on society. But the costs of interference, or of preventing interference, may also rise. As returns to a market tend to increase with its scale (because in a larger market there is more scope for mutually beneficial transactions), the total return to expanding flexibility—measured, for example, by the number of bands over which secondary trades with flexibility of use can be effected—will grow. Assuming that interference costs can be restrained, spectrum policy should promote maximum flexibility (and very limited command and control). At some point, it is possible that the marginal costs of flexibility will exceed the marginal benefits. In this situation, the optimal degree of flexibility lies somewhere between zero and the maximum. The challenge facing spectrum policy makers is to determine how quickly to introduce flexibility and by how much. However, there are no signs yet, from the experience of countries using market methods, that interference costs might, at the margin, outweigh the benefits of flexibility. One particular form of overriding the market mechanism is to have a procedure for the compulsory purchase of spectrum when a change of ownership, and probably of use, is deemed to be in the public interest. This might be favoured where spectrum licences were issued for a long period (even for an infinite period without a hand‐back date). Compulsory acquisition is seen as a way of preventing a 'hold‐out' by the licensee of a small slice of spectrum, who uses it to extract as much as possible of the rents associated with a more general change of use. First, it is worth pointing out that use of a public power forcibly to switch spectrum between two private users is distinct from the more familiar 'public taking' of assets for a public purpose such as a road. Secondly, any uncertainty about the nature of property rights – such as over when the right might be overridden – is normally considered to chill investment in the spectrum licence itself and in associated assets. Thirdly, a hold‐out only makes any money if it agrees to sell in the end. In my opinion, these are good reasons for not automatically assuming that a widely drawn power of expropriation is
30 In Europe, the Commission has recently published a decision to harmonise the availability of information on the use of radio spectrum through a common information point and by the harmonisation of the format and content of such information; see Commission Decision 2007/344/EC of 16 May 2007 on harmonised availability of information regarding spectrum use within the Community.
desirable. But a narrower one, invoked in specific circumstances, may be justified, if a particular problem is anticipated.
C. Controlling receivers 31
It is recognised that relatively low cost (but universal) changes to receiver technology could allow significant reductions in transmission power and economise heavily on spectrum. In other words, there is the basis for a surplus‐generating bargain between spectrum licensees and owners of reception equipment. Does the Coase theorem 32 operate in this case? Sadly, it does not. Firstly, spectrum licensees cannot be sure that they would be able to keep spectrum saved as a result of steps they might take to bribe customers to improve their reception equipment. Secondly, many operators would have to agree to co‐finance such subsidies, and this would lead to attempts at free‐riding. As a result, it looks as if regulatory measures are the only way forward, probably based on mandatory standards for receivers.
D. Public sector spectrum use
Historically, public sector users have been gifted substantial amounts of radio spectrum to provide services in the public interest, such as defence, public safety and emergency services. Therefore, in many jurisdictions, the public sector has vast holdings of valuable frequencies. In the UK, for example, public sector spectrum use accounts for just under half of all spectrum use below 15 GHz. Military use of spectrum, particularly for radar and communications, accounts for most of public sector use. In the presence of international military alliances, such as NATO, military spectrum allocations are often harmonised internationally. Furthermore, the strategic nature of defence applications means that sometimes little is known in detail outside the immediate agencies concerned about how the spectrum is deployed. Commercial and public sector spectrum allocations are managed in a broadly similar way by the same independent agency or government department (a major exception is the United States, where spectrum used by the Federal Government is managed by the NTIA).
31 See M Cave and W Webb, Can Market-based Methods Produce Efficient Reception Standards?, August 2008.
32 According to the Coase theorem, if certain conditions are fulfilled, bargaining over property rights among rational economic agents will lead to an efficient allocation of resources. Thus regulatory intervention is not required, and will probably make things worse.
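To see footnote 32 at work in the receiver case, suppose (purely illustratively) that better receivers would free spectrum worth $V$ to licensees, while upgrading the $N$ receivers concerned costs $c$ each. Then

\[
V > Nc \;\Longrightarrow\; \text{any side payment } P \in (Nc,\,V) \text{ leaves both licensees and receiver owners better off.}
\]

With $k$ licensees sharing the freed spectrum, however, each captures only $V/k$ and prefers that the others fund the upgrade; this is the free‐riding problem, compounded by the appropriability problem, that the text identifies.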
Under the command and control regime, public sector organisations, especially national defence departments, were accorded high priority in spectrum use and they were allocated spectrum for an indefinite period. But as demand for commercial spectrum grew, attention became increasingly focussed on the issue of whether public sector bodies crowded out much more valuable private sector users and uses of spectrum. A few countries have taken bold steps to promote efficient use of spectrum by public sector bodies. In the US, an Executive Memorandum in 2003 required the NTIA and other federal departments to improve the efficiency of their use of spectrum, even though it has so far borne little fruit. 33 In 2004, the UK Government commissioned an independent audit of public sector spectrum holdings to inquire whether there is scope for re‐allocation from public to private sector or within the public sector. 34 Its radical proposals were accepted by the Government, and UK policy is now based on requiring public sector users to acquire additional spectrum in the market place, allowing them to sell or lease their spectrum holdings to the private sector (and keep some of the proceeds), and charging an annual fee, known as an administrative incentive price, for almost all other public sector spectrum holdings. 35
These studies have suggested that public sector use of spectrum is inefficient and that excessive appropriations of spectrum for public services may have occurred in the past. Therefore, in net terms, the public sector is likely to be a supplier of spectrum to commercial users rather than a net demander, although there may be exceptions (for example, additional spectrum may be needed to provide emergency services with wireless broadband communications). In the UK the Ministry of Defence, having completed an audit of several of its holdings, has identified surplus spectrum, which it needs to be able either to sell or lease. 36
Eliminating the boundary between private sector and public sector spectrum markets is a bold, if logical, step, and one that spectrum regulators are as yet generally unwilling to take. For example, the European Commission in its 2006 proposals for spectrum reform advocates a market for much of the commercial spectrum, but makes a broad exception for public service spectrum. However, a report submitted in July 2008 to the Ministry of Enterprise in Sweden makes the important suggestion that military spectrum be subject to an annual charge. 37
33 http://www.ntia.doc.gov/osmhome/spectrumreform/index.html.
34 http://www.spectrumaudit.org.uk/. An equivalent audit of public sector spectrum in Australia was published in 2008.
35 See Ofcom, Spectrum Framework Review for the Public Sector, June 2008.
36 UK Defence Spectrum Management 2008-2012, Consultation Document, Ministry of Defence, May 2008.
37 Effektivare signaler: hela dokumentet, English summary, July 2008.
E. Choosing the extent of unlicensed spectrum
Traditionally, a small number of frequencies sat alongside spectrum licences assigned by administrative methods to provide unlicensed access to users of particular apparatus, or for experimental uses. These frequencies include those used for television remote controls, Bluetooth short range communications etc, as well as spectrum utilised for short‐range broadband access using standards such as IEEE 802.11 or WiFi. In the UK, for example, such licence‐exempt spectrum amounts to 4‐6% of the total. But proponents of unlicensed spectrum now have the support of two companies which agree on little else – Google and Microsoft. While several parties have proposed a major expansion of the commons, others regard it as best suited to short range applications where rivalries between operators for spectrum are more limited. (This does not exclude the development of technologies such as mesh networks, especially to exploit short‐range transmission using unlicensed spectrum – a discussion of such new technologies can be found at Annex B.) However, drawing the line over time between licensed and unlicensed spectrum is highly problematic, and historically has been done by administrative fiat in two dimensions – in the basic decision to assign a frequency for unlicensed use, and in the choice of restrictions imposed on its use. Figure 5 illustrates the allocation decision between licensed spectrum, some of which
may be utilised for ‘private commons’ – an arrangement whereby users authorised by a
licensee (for example, purchasers of the licensee’s apparatus) have direct access to
spectrum – and unlicensed spectrum.
Figure 5: Licensed vs. unlicensed spectrum
[Diagram: spectrum is divided into licensed spectrum, for which obligations and rights are specified (geographic, temporal, interference parameters, noise floor (underlay), overlays), and licence‐exempt spectrum, comprising private commons governed by the licensee's rules and public commons governed by regulation (protocols, power limits, etc).]
In the past, spectrum regulators have made decisions on unlicensed spectrum on administrative grounds. But this is arbitrary and unsatisfactory. In a market
environment, it would be better to introduce some form of market competition between the two modes of frequency management. The difficulty is that of estimating, and making effective, unlicensed spectrum users' derived demand for spectrum, in the same way that, say, mobile operators can express their derived demand. The root cause of the problem is that of establishing the willingness to pay of a large number of non‐rivalrous spectrum users – an illustration of the classic problem of establishing demand for a public good. In principle, individual levels of willingness to pay should be aggregated (vertically) to derive a social valuation. However, this is subject to the well‐known difficulty that respondents have an incentive to falsify their estimates, in order to increase the supply of a resource for which they will not have to pay or reduce the share of the cost which they will have to pay. A number of formal mechanisms have been developed to deal with such problems (e.g. the 'Clarke‐Groves' mechanism). They have the feature that any respondent whose reported valuation tips the collective decision has herself to pay a surcharge equal to the net loss her participation imposes on the other participants. This removes any incentive to report distorted valuations. These mechanisms do, however, encounter problems associated with the fact that they do not yield a balanced budget. Accordingly other techniques less sophisticated in terms of incentive properties, such as conjoint analysis, may be required to establish the aggregate valuation of unlicensed spectrum from willingness to pay for the services it can offer. Since a spectrum commons is typically regulated to produce a range of mutually exclusive or co‐existing services, a range of options may have to be established, in circumstances where consumer understanding of them may not be strong. This will obviously represent a significant challenge. Suppose such an unbiased estimate were available. Then a proxy bidder for spectrum to be licence‐exempt could compete against bidders for licensed spectrum, including those proposing private commons. This would require public financing, but for limited frequencies this might not be too problematic. To summarise the argument so far: a case based on existing spectrum‐using technologies has been made for the use of market methods, and an (imperfect) way based on the market choosing between licensed and unlicensed use has been proposed. It is pertinent to ask, however, whether foreseeable technological developments support or undermine the case for the market. This is the task of the next section.
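A minimal sketch of the pivotal ('Clarke‐Groves') mechanism just described, applied to a binary decision on reserving a band as a commons; the user‐group valuations and the opportunity cost (forgone licence revenue) are invented for illustration.

```python
def pivotal_mechanism(valuations, cost):
    """Reserve the band iff reported valuations cover its opportunity cost.
    An agent pays a tax only if her report flips the decision, and the tax
    equals the net loss her presence imposes on the others -- so truthful
    reporting is a dominant strategy, but the taxes collected cannot be
    rebated without distorting incentives (no balanced budget)."""
    total = sum(valuations)
    decision = total >= cost              # efficient decision on the reports
    taxes = []
    for v in valuations:
        others = total - v
        if (others >= cost) == decision:
            taxes.append(0.0)             # not pivotal: no tax
        elif decision:
            taxes.append(cost - others)   # tipped it into 'reserve'
        else:
            taxes.append(others - cost)   # tipped it into 'do not reserve'
    return decision, taxes

# Three groups value a commons at 40, 25 and 10 against a cost of 60: the
# first two are pivotal and pay 25 and 10; the third pays nothing.
print(pivotal_mechanism([40.0, 25.0, 10.0], 60.0))  # (True, [25.0, 10.0, 0.0])
```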
F. Improving Swedish spectrum management
I assume that the objective is to maximise the value of services provided to end users, whether of public or private goods and whether produced by private firms or public organisations. This apparently simple formulation has a major implication: if taxes or subsidies are to be used to discourage or promote the provision of services, it is always better where possible to apply them to the end‐use services, and not to the inputs used to make them. In other words, gifting spectrum to the producer of a good or service is always inferior to subsidising production of the service itself. 38
Secondly, end users (almost always) benefit from efforts to expand the quantity of spectrum in use. This increase in supply reduces the price of services in the downstream market. On this basis, low auction and trading prices are a matter for congratulation, not regret. What immediate or medium term steps would further these two propositions? I believe that they include the following:
- adapt the design of auctions to make them not only nominally but genuinely service‐neutral. There are inevitable limits to this, but as the example of the combinatorial clock auction shows, progress can be made;
- building on the fact that the opportunity to transact across a wide range of frequencies is valuable, consider synchronising auctions to maximise choice and efficiency;
- a more radical version of this, known as the 'big bang', is to construct a design for a 'two‐sided' auction in which participants can offer their own frequencies for sale as well as bid for others; this may seem rather fanciful, but it overcomes the problem of firms which need to relocate their activities, by allowing them to buy and sell licences at the same time;
- opportunities to increase the liquidity of the secondary market are rather limited, but the position is helped by releasing as much spectrum into the private sector as possible, even if it is held by speculators; this gives firms much quicker access to spectrum than can be achieved by making them go through a regulatory process;
- a properly designed approach to spectrum user rights is required to accommodate change of use;
- a system of sticks and carrots to encourage efficiency in public sector use is also highly desirable; charging public sector bodies an annual fee is a stick;
38 This is known as the 'Diamond-Mirrlees' proposition, after two papers by those authors in the American Economic Review in 1971, pp 8-27 and 261-278. The intuition behind it is simply that subsidising an input will inevitably lead to productive inefficiency - eg wasteful use of spectrum - which can be avoided by subsidising the output.
allowing them to lease, sell or share their spectrum and keep the proceeds is a carrot; there is some evidence that this combination may work.
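The intuition in footnote 38 can be put slightly more formally (the notation here is mine, not the report's). A producer choosing spectrum $s$ and other inputs $e$ to deliver output $q = f(s,e)$ at minimum cost satisfies

\[
\min_{s,e}\; w_s s + w_e e \quad \text{s.t.}\;\; f(s,e) = q
\quad\Longrightarrow\quad \frac{f_s}{f_e} = \frac{w_s}{w_e},
\]

so the input mix reflects relative input prices. Gifting spectrum sets the perceived price $w_s$ to zero, and the firm expands spectrum use until its marginal product is driven to zero – wasteful use of a scarce input – whereas a subsidy on the output $q$ leaves the cost‐minimising condition, and hence productive efficiency, intact.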
4. Universal Service 39
A. Universal voice service
The goal of universal voice telephone service has been accomplished in Sweden to an extremely rigorous standard, with only a handful of fixed locations incapable of receiving service by one means or another (wireline, wireless, mobile, satellite). This is despite legal issues relating to the identification of a universal service operator. Sweden therefore exemplifies the technological neutrality noted in the Commission's last (2005/6) review of universal service. In that review, the Commission noted that:
“Fixed telephone lines remain the main delivery mode of universal service, although operators are free to use any technology that can fulfill the requirements. Nevertheless, the most notable trend in telephony in the recent years has been the fixed‐to‐mobile substitution. Since 1999, the level of fixed telephony in the EU 15 has fallen by 10 percentage points with 82% penetration in early 2004, on a par with mobile telephony at 81%.”
There has been a marked continuation of this trend since 2005. The Commission's 13th Implementation Report notes that mobile penetration in Europe has risen to 112%, and is above 100% in 19 out of 27 member states. This has been accompanied by the following developments:
- fixed subscriptions are falling in most countries;
- mobile voice traffic is catching up with fixed voice traffic (see Figure 2 below);
- mobile prices are continuously declining, while fixed retail prices, including line rental, are rising;
- these developments are encouraging European regulators to consider combining fixed and mobile calls in the same market for the purposes of deciding where to intervene with regulatory remedies.
The crucial issue then becomes how to address the affordability question in a technologically neutral USO world, where there are multiple potential suppliers, the retail prices of some or none of which may be regulated. In its 2005/6 review, the Commission noted that the marginal cost of providing service to a subscriber via a radio access network is low, because the access network is shared among subscribers. 40 This is
39 This section draws extensively on M Cave and K Hatta, 'Universal service obligations and spectrum policy', INFO, forthcoming.
40 COM (2005) 203.
clearly true in remoter areas, where a USO has a greater purchase. In such areas the overall volume of traffic is low, so that the base stations are fixed costs. 41
The Commission is probably on stronger ground in asserting that pre‐pay mobile tariffs exhibit low entry charges and greater levels of controllability, both of which appear to make them attractive relative to fixed wireline services. This led to the conclusion in 2006 that 42:
“.. the cost of using a mobile phone is less than the cost of using a fixed phone, mainly because costs of owning a fixed phone line includes the monthly line rental (the EU average was over €15.30 in 2005). In contrast, pre‐paid mobile services entail a low entry price and the possibility to make and receive calls without paying fixed charges, as well as greater possibility to control telephone expenditure thereby increasing their attractiveness to low income consumers. These cost advantages of mobile phones apply even when compared to the special 'social' tariffs that are in place in many Member States to ensure affordability of the fixed telephone network for low income customers. …”
Although this conclusion is probably confined to 'low users', it is an important one, as it permits the inference that if fixed service prices are affordable, and if mobile prices are below them, then the latter must be affordable as well. The evidence suggests that this trend has advanced further since 2006. This creates a basis for arguing that the yardstick for universally available tariffs should switch from those for fixed to those for particular clients of mobile services. In summary, in developed countries – in the EU in particular – the spread of mobile technologies is displacing the traditional fixed network as the natural or only means of discharging the voice USO. This justifies full technological neutrality in allocating and discharging the USO; it makes it inappropriate to adopt a separate or additional mobile USO; and it makes it necessary to identify 'affordable prices' through the lens of mobile rather than fixed pricing practices.
B. A broadband USO
The previous section has discussed the role of wireless technologies in meeting voice USOs. In brief, the direction of travel will be supplementation or replacement of the standard fixed (usually wireline) obligation with a broader range of alternatives, which will include wireless – and specifically mobile – technologies. Clearly for the majority of the world's inhabitants (including a small minority of inhabitants in the EU), the only choice will be wireless.
41 See the discussion of mobile costs in Section 2 above.
42 SEC (2006) 445.
In this section I consider the implications of a possible USO for data services, on which the Commission has recently opened a debate. 43 Although commentators in some countries have considered such an obligation, and many governments and regulators have spoken in favour of extending access to broadband – for example, by encouraging the fixed incumbent to enable as many exchanges as possible for the provision of DSL services – a formal obligation to provide broadband services universally at a uniform retail price has not been imposed. In the EU context, we note the requirement in Article 4 of the Universal Service Directive that the connection provided by the universal service operator be capable of providing '..Data communications, at data rates that are sufficient to permit functional internet access....', where Recital 8 refers to a data rate of 56 kbit/s. Although the speed quoted in the (non‐binding) Recital seems quaint even five years afterwards, the phrase used in the Directive might be the foundation for a meaningful broadband obligation throughout Europe. It is also open to member states to impose their own obligation. Although they are usually debarred from establishing a broadband universal service fund to finance it, this restriction might be lifted in the current review. Is it likely to come to pass in Europe and elsewhere? 44 We must consider this question in relation to countries (and regions within countries) characterised by such considerable differences in network endowment as those discussed in Section 1 above. Firstly, in a country with a ubiquitous copper network (and possibly a less ubiquitous cable network too), it is relatively simple to provide subscribers with fixed data services, at speeds which vary in accordance with their distance from the exchange. Take‐up rates are also important. It has been asserted, based on previous observations, that it is feasible to impose a universal service obligation when 70% of the population have access to the service, 50% have taken up that opportunity spontaneously, and there is a network externality. 45 The EU countries with higher penetration levels of broadband connections, including Sweden, have reached those levels of penetration. 46
These supposed criteria were inferred from past experience. But as distinct from consumers in the days when fixed voice USOs were introduced and upgraded, consumers today typically have access to a range of broadband suppliers – offering service either on the copper network, sometimes via an access regime, or on cable, or on a completely different platform. As a result, identifying a 'broadband USO operator'
43 See the Commission's 2008 report on the scope of the universal service in telecoms, press release, 25 September 2008.
44 In fact, one European country, Switzerland, already has an obligation imposed on its operator Swisscom to provide any premises with a broadband service at 600 kbps.
45 R. Collins and C. Murroni, New Media, New Policies, 1996.
46 Recall that OECD data on fixed broadband connections per 100 persons must be adjusted to take account of multiple users in a household.
would be more difficult than extending a voice USO in the days of the statutory voice monopoly. As a consequence of the above, the predisposition to control retail broadband prices is quite different from that to control the retail price of monopoly voice services in the previous century. With no system for control of retail broadband prices, it is much harder to impose an obligation to supply a specified service at a uniform retail price; to do so would be both costly and distortive of competition. For this reason, it is unlikely that the traditional USO pricing associated historically with fixed voice will be imposed upon data. Moreover, it is also crucial to take account of wireless delivery of broadband. The capability of mobile broadband is considered below, but its comparative cost advantage in more sparsely populated areas must also be taken into account. Wireless networks are far more replicable than fixed ones as a result of the fact that they are both less capital‐intensive and more readily scalable. As a result, they tend to have a comparative cost advantage in remoter areas and much greater competitive potential. Moreover, in some countries – Austria is an example – wireless broadband accounts for over 40% of broadband connections, so that it is more of a substitute than a complement for wireline broadband. This conclusion is supported by data which show that, across the EU, the annual growth rate of fixed broadband connections is inversely related to the penetration of 3G wireless. The fast growth of mobile broadband in Sweden is another example. Within a USO context, a range of technologies could be used to make broadband service available at a standard price. This might best be achieved by identifying potentially non‐commercial areas for the supply of broadband and imposing a coverage requirement on an operator or operators. How the financing of such an arrangement is linked to spectrum policy is considered below. On the basis of the above discussion, there seems to be a way forward, parallel with the way forward for voice, to utilise wireless technologies in the provision of a broadband USO. The USO would take the form of the imposition of a coverage requirement on one or more operators. The obligation might be to provide either a wholesale or a retail product (or both). Ideally the operator would be chosen through a competitive process, and we shall see in section C below how the auctioning of spectrum licences can be a means of achieving this objective over a group of wireless competitors: because they are bidding for a valuable asset – access to spectrum – an explicitly defined obligation can be superimposed on one licence, and then priced by the competitive process. 47 Creating a level, technologically neutral playing field between wireless and wireline technologies is more problematic. 48
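A minimal sketch of the reverse‐auction idea under assumed names and figures: each operator states the subsidy it would require to meet the specified coverage obligation at the specified price, and the lowest bid wins. A negative bid corresponds to the spectrum‐licence variant discussed above, in which the winner pays for the obligation‐plus‐licence package.

```python
def uso_reverse_auction(bids):
    """Select the broadband-USO provider demanding the smallest subsidy.
    `bids` maps operator -> required subsidy; a negative value means the
    operator would pay for the package (obligation attached to a licence)."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

bids = {"operator_A": 12.0, "operator_B": 9.5, "operator_C": -1.0}
print(uso_reverse_auction(bids))  # ('operator_C', -1.0): it pays 1.0 for the package
```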
C. Implications for spectrum policy
The discussion above has identified possible roles for wireless technologies in the provision of (voice or data) universal services. Where wireline networks exist, wireless technologies can supplement or compete with them in the supply of universal service. Where a wireline network does not exist, several competing providers can contest the role of meeting the USO. The policy advocated above of placing as many frequencies as possible in the hands of potential network operators (or even speculators) will also benefit end users by reducing barriers to entry and speeding up the process of competition. Is it possible or appropriate in either of these cases to use spectrum policy as a direct method of implementing a USO solution? This would mean, for example, gifting spectrum to an operator which assumed a USO. As noted in Section 3 above, there is a powerful and general economic argument which suggests that this is not a first‐best, or even a second‐best, approach. Spectrum is an input in the production of – in this case – voice or data services. What we propose to do via the USO is to finance a downstream service which the market would not provide. The so‐called Diamond‐Mirrlees proposition states that it is always more efficient to apply the subsidy to the service provided to or bought by the end user, in preference to subsidising an input used in its production. 49
In the former of the two cases set out above, where wireline and wireless technologies are in competition to provide the USO, the potentially harmful effects of a spectrum subsidy are even greater, as it might tip the balance in favour of a wireless technology where the wireline alternative would be preferred if the cost of spectrum were properly accounted for. What practical conclusions can be drawn from this? The argument above counts strongly against a policy of meeting USO obligations by granting a variable quantity of free (or below cost) spectrum to a designated operator. In such circumstances the risks of wasteful spectrum use are high.
In the former of the two cases set out above, where wireline and wireless technologies are in competition to provide the USO, the potentially harmful effects of a spectrum subsidy are even greater, as it might tip the balance in favour of a wireless technology where the wireline alternative would be preferred if the cost of spectrum were properly accounted for. What practical conclusions can be drawn from this? The argument above counts strongly against a policy of meeting USO obligations by granting a variable quantity of free (or below cost) spectrum to a designated operator. In such circumstances the risks of wasteful spectrum use are high. They are mitigated if the spectrum regulator could data service, with capacity measured in bps. The tender could take the form of a menu auction, in which
participants could offer alternatives at different prices. It might also be possible, via a carefully constructed
procedure to harness bidders’ knowledge of the best moment to make the ‘upgrade’. Ideally there would be
no restriction on the technology or platform chosen. 48
A summary of such ‘reverse auctions’ can be found in S Wallsten, Reverse Auctions and Universal
Telecommunications Service: Lessons from Global Experience, April 2008..
49 See fn 38 above.
They are mitigated if the spectrum regulator could itself derive the combination of spectrum and other resources which would produce the end user services at full productive efficiency, but this is not plausible. By contrast, attaching an explicit obligation to an auctioned licence, as proposed above, provides a good alignment between spectrum policy and universal service objectives in the circumstances postulated. Essentially, the government is paying for the USO via diminished auction revenues. This is thus a convenient method of competitive tendering for the USO's provision. In the case where competing suppliers are using wireless technologies, it is a direct and tested method of finding an efficient supplier. In other cases – for example where the USO can be provided by both wireless and wireline providers – other funding methods must be employed which are not based on spectrum management. These might entail finance out of general taxation or the creation of a universal service fund, financed by a levy on all operators.
5. Summary and Recommendations
This section brings together some of the previous discussion and seeks to draw some conclusions. My approach has not been to draw on particular scenarios (though I believe this is a useful technique) but to identify the implications of particular regulatory principles for the design of policy to deal with a variety of possible combinations of outcomes. As a result the approach is intended to be capable of dealing with a number of possible futures – or even to be, to some extent at least, 'future‐proof'. The fairly conventional principles to which I have implicitly been appealing are the following:
- ensure that regulation is designed to achieve outcomes which are statically and dynamically efficient – which might loosely be construed as, respectively, generating efficient prices and efficient incentives to invest;
- rely on competition as far as possible to deliver good outcomes for end users, and replace regulation with competitive pressures wherever possible;
- pursue the chosen equity objectives by the most efficient means possible – which will probably entail either competition in or competition for the market.
The sector will inevitably develop in unpredicted ways. Demand will change; new business models will emerge; firms will be born and die. But adherence to the above principles provides a flexible framework within which to pursue end users' long term interests. I now summarise the main conclusions which flow from analysis of the four areas of regulation dealt with above.
A. NGNs
In my opinion it would be a serious strategic error to abandon the goal of access competition in the face of the development of FTTx networks. At the same time, different geographical areas have different scope for competition, and I have argued above that serious thought should be given to adopting geographical market definitions which explicitly reflect this fact. In competitive areas the scope for end‐to‐end competition is considerable. This may be accompanied, in the interests of promoting competitive build out, by restricting or in the limit abandoning access regulation, though the latter would be a major step requiring careful analysis. In non‐competitive areas, other jurisdictions are considering relaxing or amending access regulation to provide incentives to bring forward investment. I understand this is not contemplated in relation to TeliaSonera fibre investments in Sweden. To the extent that this means that there is no trade‐off between the scope and depth of fibre investment and the competitive service pricing which traditional LRIC access pricing can generate, this is a good outcome.
B. Interconnection
Both analytical and technical developments have re‐opened the issue of how to regulate the termination 'bottleneck'. In particular, the 'calling party's network pays' principle has come under critical scrutiny. The same considerations apply to both fixed and mobile networks, but termination on the latter networks is more contentious as a result of the sheer scale of the charges. The focus of debate is on lowering or even eliminating per minute termination charges, implicitly requiring the associated costs to be recovered by other means – increasingly in fixed charges. This would eliminate the need to regulate the termination activity on an ex ante basis, provided that the markets for the relevant outgoing services were effectively competitive. The new approach therefore seems to support the objectives both of efficient pricing and of deregulation. The problem is how to accomplish the elimination or reduction of termination charges. Discrepancies between the objectives of networks, based on such things as asymmetrical traffic flows, make it unlikely that the marketplace will generate the proposed reforms spontaneously. If this is so, it will require a major regulatory shift, which neither the current Directives nor the Commission's reform proposals seem to contemplate. However, implementation of the Commission's draft Recommendation and the natural consequences of IP interconnect developments may achieve the same objective through a staged process.
C. Spectrum Policy
Regulatory change in this area involves limited trade‐offs among objectives, and for this reason may be more readily attainable. As argued above, in my opinion the best policy is to extend the use of market instruments, in the direction of completing a single public/private spectrum market in Sweden. This quite closely conforms with (though also goes beyond) the agenda set out in the July 2008 Parliamentary report, where the ideal model includes transparency, use of market methods, secondary trading, maximising service‐ and technology‐neutrality, and pressure on public sector spectrum users to improve efficiency. 50
D. Universal service
The apparent trend in voice USOs within the EU is towards greater reliance on the use of wireless and particularly mobile technologies. Provision of a universal voice service is almost complete in Sweden, and already relies on this approach. The new frontier is a broadband USO. Spontaneous access and penetration rates in Sweden make this a relatively feasible, as well as desirable, proposition. The two principal sources of funding such a service are government tax revenues and internal cross‐subsidies. More generously than has been the practice elsewhere, Sweden has invested heavily in the availability and spread of broadband services. How might a broadband USO be achieved in an efficient manner in a framework of competition, unregulated retail prices, and technological neutrality in the provision of the service? Restricting ourselves to processes based on subsidising the provider, rather than the consumer, one method is to identify an operator with a supply obligation (at a specified price) in a specified region, characterised by lack of commercial viability of broadband provision. The specified price could be a wholesale access price (thus permitting customers in the area a choice of retailer) or a retail price. (There are choice‐based arguments in favour of the former.) The operator would be chosen either by a technologically neutral reverse auction, or – if a wireless frequency in a specified band were chosen in advance to deliver the service – by attaching USO conditions to a particular spectrum licence (for which negative bids might be permitted). The duration of the contractual or licensing obligation would be a key issue, as would any arrangements for indexing the uniform price. The method of financing can be either tax revenue (as in the case of auctioning a spectrum licence with a specific service obligation), or a universal service fund.
50 Effektivare signaler: hela dokumentet, Summary, pp. 19, 22-23, July 2008.
E. An overview
What do these recommendations add up to overall? Or where might Sweden be if they were successfully applied? First, there would be widespread and possibly universal access to high speed broadband, as the term is understood in 2008. Second, many but not all end users would have a choice of networks on which voice and data services would be supplied. With a liberal spectrum policy, these should include the largest commercially sustainable number of wireless providers. Fixed and mobile voice services would probably have converged into a single market, removing the need to regulate call origination. Termination regulation would also have been superseded by bill and keep and IP interconnect. There might be a residual need to regulate access to NGAs, in areas where there was only one network in the market or where a small number of operators were colluding, but competition law might be adequate to this task. Does this mean that the euthanasia of the regulator is imminent? It is probably too soon to panic, as unexpected problems will inevitably materialise to disrupt this comfortable scenario. However, the application of the same principles should produce a strategy to deal with those problems too.
Annex A: Some recent economic literature on termination 51
Introduction
This annex discusses a number of contributions to the literature on termination rates. Developing practice in the calling party pays (CPP) world has focussed on mandating a cost‐oriented price per minute of termination, which may or may not differ from one operator to another (this issue is not considered here). 52 This regime has led to significant differences in charges across member states, for which it seems that diversity in the treatment of various kinds of costs is at least as responsible as differences in the costs themselves. However, more fundamental issues have surfaced (or resurfaced). One major one concerns the degree to which termination rates should be set in the framework of strategic interrelations among operators, which are likely to be of a different nature depending upon whether the termination relationship involves two operators providing services in the same outgoing market (ie mobile to mobile) or whether termination is provided by a mobile to a fixed operator, or vice versa. Termination of the former type is also closely linked to the question of the differential between on‐net and off‐net prices for mobile calls.
The New Approaches
It is worth emphasising the widely accepted point that these analyses can be applied indifferently to mobile and to fixed operators, even though departures from the optimum in mobile networks seem to be more flagrant. The key distinction in relation to the logic of setting rates is not based on the presence or absence of a wire, but on whether the termination seeker and the termination supplier are competing for the same group of customers making outgoing calls or are operating in separate retail markets for fixed and mobile services. In the case of fixed to mobile rates, and mobile to fixed rates in the absence of regulation, the goal of the termination provider is simply to charge a monopoly price. In the case of fixed to fixed and mobile to mobile termination, the above goal is complicated by the desire to gain a strategic advantage over competitors across the whole range of interactions. This implies that the analytical conclusions which flow from consideration of fixed to fixed and mobile to mobile termination should be the same. Allowance has to be made for differences in cost structure, but the view of mobile network costs as containing substantial fixed costs makes them more similar in this respect to fixed networks than the alternative assumption of constant returns to scale.
51 This annex repeats and elaborates on some ideas set out in Section 2 above.
52 See ERG (2007).
Low marginal costs
Suppose a mobile world of general excess capacity, with very low marginal costs. In such circumstances, efficiency would require rebalancing of the recovery of termination‐
associated costs. This might take several forms. Operators would agree (or regulators could enforce) low marginal prices for termination, possibly via a two‐part tariff system in which fixed costs were recovered by a fixed payment. 53 If the demands each operator placed on the other were symmetrical, and if marginal costs were close enough to zero, this might lead to a bill and keep system. As noted above, 'symmetry' might mean either equal flows of traffic in each direction, or the imposition of equal costs in dealing with incoming traffic. The latter approach might justify the regime on cost causation grounds in cases where a smaller scale/higher unit cost operator experienced a positive balance of outgoing traffic in relation to a larger rival; but the larger rival is unlikely to find this a palatable solution. Lower marginal costs also make lower on‐net call prices particularly attractive to operators, provided other costs can be recovered elsewhere in flat‐rate charges.
Modelling callee benefits
In a very simple model, suppose that the benefits of the call are divided between caller and callee in fixed proportions. If the marginal costs of origination and termination are divided between an outgoing call price and a reception price in proportion to those benefits, there will be the right incentives to use the network. But if the calling party pays convention operates, the calling party bears the full costs and there will be too little demand for calls. Another possibility which might emerge is the so‐called 'connectivity breakdown' (Jeon et al., 2004), when there is a distinction in the price of on‐net and off‐net calls. This may happen because, if the benefit of receiving a call is relatively small, the receiving network wants to reduce incoming off‐net calls and sets a high off‐net reception price. If the benefit of receiving a call is high, and of originating it is low, then the originating network wants to choke off outgoing off‐net calls, which mostly benefit its competitors' clients. This is to be contrasted with the treatment of on‐net calls. The network will try to set the price of these as low as possible, subject to their marginal cost (which, as we have noted, may be low) in order to benefit its clients. There will thus be a preference for recovering fixed costs in fixed charges.
53 The limiting case of this is bill and keep.
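The benefit‐sharing argument can be written out explicitly (the notation is mine, not the literature's standard). Let $c = c_O + c_T$ be the marginal cost of originating and terminating a call, and suppose the caller enjoys a share $\beta$ of the call's benefit and the callee the remaining $1-\beta$. Efficient usage prices then divide the cost in proportion to benefit:

\[
p_{\text{caller}} = \beta\, c, \qquad p_{\text{callee}} = (1-\beta)\, c .
\]

Under a pure calling party pays convention the caller faces $p_{\text{caller}} = c > \beta c$ whenever $\beta < 1$, which is precisely the 'too little demand for calls' noted above.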
Strategic behaviour in relation to termination
Recognition of the key role which the setting of mobile to mobile termination rates plays in strategic interaction among operators is, as noted above, probably the main innovation of recent economic models. The operators' conduct has been characterised in several ways. In the earliest version, roughly symmetric operators collude to set high reciprocal rates per call minute. These support high retail rates and – absent differential on‐net and off‐net prices – all operators can make excess profits. However, it is now well known that high access prices harm welfare and reduce potential profits, so that if an alternative pricing structure is available, such as non‐linear pricing, firms will prefer it. Alternatively, it is in the operators' interests to agree a low termination rate, because such a rate softens competition between them. A low termination rate diminishes the incentive to gain customers, because they bring with them lower termination revenues when they receive calls. (It is a waterbed effect operating in reverse.) In the limit this might even lead to agreement on a bill and keep system. This outcome was noted by Gans and King (2001) and is a feature of several current models – see below. Where operators are asymmetrical, the established ones will want to discriminate against the entrants. This is exemplified by experience in a country where two operators had agreed a very low termination rate between themselves, but wanted to charge entrants a much higher rate. Strategic goals are also pursued in the choice of the on‐net/off‐net call price differential. As noted above, Hoernig (2007) has shown how this variable can be used as an anti‐
competitive weapon. A combination of scale disadvantages and unbalanced traffic makes smaller operators vulnerable to low on‐net rates offered by larger operators, and the standard regulatory response of allowing small operators to charge higher termination rates exacerbates the on‐net/off‐net differential. As a result, smaller operators will argue for regulation, not only to combat the crude termination price discrimination noted above but also to enforce regimes such as bill and keep. However, Hoernig (2008) has shown that regulatory intervention can diminish welfare: the outcome depends on the details of the regulatory objectives and of demand and cost characteristics.
Particular models
It is fair to say that there is no model of termination which synthesises all the above considerations to generate agreed policy conclusions. Accordingly, in this section I summarise the policy conclusions of three prominent recent models and then try to reach a view on the direction in which policy recommendations are travelling. For more details on the models themselves see the survey by Harbord and Pagnozzi (2008).
One particular model which leads to conditional policy results is that of Cambini and Valletti (2008). As in other models, calls confer a benefit on both parties. In this model, however, a call has a propagation effect, eliciting a call in return, which makes off‐net calls more desirable, reducing the connectivity breakdown risk. In their analysis, (symmetric) operators react to termination rates in ways which are predictable but more complex than the simple CPP/RPP distinction. If termination rates are high, reception is encouraged by offering it to customers for nothing – as this encourages them to receive calls, thus earning the operator revenue. If rates are low, particularly below cost, operators start charging for reception. (Note the difference in the direction of causation – from the level of rates to CPP/RPP, rather than vice versa.) As far as negotiation of access charges is concerned, the authors accept the logic of the operators' incentive to set low charges, to diminish competition among them. This is likely to lead to agreed reciprocal rates which are below cost, possibly even to bill and keep. Armstrong and Wright (2007) have developed a model in which a key role is played by the interaction between mobile to mobile and fixed to mobile termination rates. They find the latter to be subject to the market failure of excessive rates; 54 the detriment to consumers from inefficient prices applies even when all excess profits are ploughed back. In the case of mobile‐to‐mobile rates, they predict that operators will set rates low to chill competition. They acknowledge the counter‐intuitive nature of this conclusion but reason as follows:
“Unless firms set a low termination charge, call charges will be such that it is more expensive to call off‐net than on‐net. In such a situation, subscribers will, all else equal, prefer to join a larger network, since they can make a larger fraction of their calls at a cheaper rate. As is well known, in such markets competition is particularly fierce and profits low.”
However, if entry is being attempted, they may set high rates to make life more difficult for the newcomers. Absent this condition, the analysis might suggest that regulators should try to bound mobile to mobile termination rates from below, rather than from above. Armstrong and Wright explain the need to continue to cap termination charges by noting the risk of substitution between mobile to mobile and fixed to mobile calls. Faced with high fixed to mobile retail charges, marked up from high termination rates, customers may switch to using their mobiles to make the call. (Alternatively – a point not considered by the authors – their operators may complete the call via a SIM card, to benefit from the mobile termination rate.) To prevent this occurring, operators bring the
54 It is assumed that fixed termination charges are regulated.
mobile to mobile rates up and the fixed to mobile rates down, to below the monopoly level. Finally, Gabrielsen and Vagstad (2008) examine the conditions required for excessive access prices to be agreed, starting from the proposition that, since mobile tariffs are observably non‐linear, it would be surprising if inefficient 'excessive' access charges were used as a means of extracting surplus from customers. They show that in a world with symmetrical operators, a combination of the following three features is sufficient to generate high access charges:
- tariff‐mediated network externalities, which confer benefits from belonging to a big network;
- consumer switching costs;
- calling clubs, implying that calls are not placed randomly.
The key conclusion which follows is that what causes the problem is not the monopoly of each network's termination (which will not necessarily lead to high access charges in a reciprocal world), but the mark‐up's effect on the balance between on‐net and off‐net prices: 'high termination prices are not a consequence of market power, but an effect of market power' (ibid, p 111). This is demonstrated by the fact that the problem of excessive termination charges can be resolved by the simple expedient of prohibiting the on‐net/off‐net call price differential, which would eliminate the tariff‐mediated network externality.
Bibliography
Armstrong, Mark and Julian Wright (2007) Mobile Call Termination.
Cambini, Carlo and Tommaso Valletti (2008) 'Information exchange and competition in telecommunications networks', Journal of Industrial Economics, forthcoming.
ERG (2007) ERG Public Consultation on a Draft Common Position on Symmetry of Mobile/Fixed Call Termination Rates, ERG (07) 83.
Gabrielsen, Tommy Stahl and Steinar Vagstad (2008) 'Why is on‐net traffic cheaper than off‐net traffic? Access markup as a collusive device', European Economic Review, 52, pp 99‐115.
Gans, Joshua and Stephen King (2001) ‘Using “bill and keep” interconnect agreements to soften network competition’, Economics Letters, 71(3), pp 413-420.

Genakos, Christos and Tommaso Valletti (2007) ‘Testing the waterbed effect in mobile telephony’, in The Economics of Mobile Prices, Vodafone Policy Paper Series No 7.

Harbord, David and Marco Pagnozzi (2008) On-net/Off-net Price Discrimination and ‘Bill-and-Keep’ vs ‘Cost-based’ Regulation of Mobile Termination Rates.

Hoernig, Steffen (2007) ‘On-net and off-net pricing on asymmetric telecommunications networks’, Information Economics and Policy, 19, pp 171-188.

Hoernig, Steffen (2008) Tariff-mediated Network Externalities: Is Regulatory Intervention Any Good?

Jeon, Doh-Shin, Jean-Jacques Laffont and Jean Tirole (2004) ‘On the receiver-pays principle’, RAND Journal of Economics, 35(1), pp 85-110.
Annex B: New Technologies and their Implications for Spectrum Management

New smart (or advanced) antenna technologies have the potential to enable more efficient use of spectrum both directly and indirectly, according to the type of antenna and its deployment in the radio-based system. Smart antennas seek to increase the coverage, capacity and reliability of a radio network by improving the ability to send and receive signals (i.e., by limiting interference). Antennas themselves are “dumb”; they are made “smart” by a digital signal processor that analyses the spectrum environment, making it possible either to determine precisely and combine the sources of an incoming transmission, or to direct energy in a narrow beam towards the user. Intelligence and greater (indirect) spectrum efficiency can therefore be obtained in two fundamental ways along the value chain of spectrum-based services: on the receiver side, by improving the ability to listen (even with high levels of interference, an ability sometimes known as the “cocktail party effect”); and on the transmitter side, by transmitting a response only in the desired direction, where traditional omni-directional antennas transmit in all directions. Hence, smart antennas can save on spectrum requirements and also help reduce harmful interference, including multi-path problems.55

55 Radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. MIMO systems send data over these multiple paths (see below).
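The transmitter-side idea can be made concrete with a small numerical sketch (a hypothetical illustration: the array size, spacing and target direction are invented, and Python/NumPy is used for convenience). A uniform linear array phase-shifts each element so that its main lobe points at the intended user, while little energy goes elsewhere:

    import numpy as np

    # Toy beamforming sketch: an 8-element uniform linear array (half-wavelength
    # spacing) steers its main lobe towards a user at +30 degrees.
    n, d, target = 8, 0.5, np.radians(30.0)

    # Steering weights: conjugate phase of the array response at the target angle
    w = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(target))

    # Normalised radiated power versus direction (the array factor)
    angles = np.radians(np.linspace(-90, 90, 361))
    a = np.exp(2j * np.pi * d * np.arange(n)[:, None] * np.sin(angles))
    pattern = np.abs(w @ a) ** 2 / n**2

    print(f"gain at +30 deg: {pattern[240]:.2f}")   # ~1.0, the main beam
    print(f"gain at   0 deg: {pattern[180]:.2f}")   # near zero, off the beam

Directing energy this way is what allows the same frequency to be reused for other users in other directions.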
However, there are two main practical problems with this type of advanced antenna: they are expensive (particularly when implemented in users’ devices), and they can work properly only if there is a line of sight.

Multiple Input Multiple Output (MIMO) systems use multiple antennas at both the transmitter and the receiver. They are considered extensions of smart antennas, and they promise a cost-effective way to improve spectral efficiency and the operational reliability of network links by performing spatial multiplexing. This technology greatly increases spectral efficiency without using extra bandwidth, through techniques that turn the multi-path problem into an advantage in order to increase capacity. MIMO is expected to become the standard in all wireless networks, and its first generation can double the transmission rate.56

56 Combined with OFDM, it is adopted by IEEE 802.16e (WiMax) and also by the new standard IEEE 802.11n.
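The claim that first-generation MIMO can roughly double throughput can be checked against the standard log-det Shannon capacity formula; the following is a rough numerical sketch (the 10 dB SNR, the 2x2 configuration and the Rayleigh-fading assumption are mine, not taken from the text):

    import numpy as np

    # Ergodic capacity of an n x n Rayleigh-fading MIMO link, at equal total
    # transmit power, versus a single-antenna (SISO) link.
    rng = np.random.default_rng(0)
    snr, trials = 10.0, 2000          # 10 dB SNR, Monte Carlo draws

    def capacity(n: int) -> float:
        """Average capacity in bit/s/Hz: E[log2 det(I + (snr/n) H H*)]."""
        total = 0.0
        for _ in range(trials):
            h = (rng.standard_normal((n, n)) +
                 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            total += np.log2(np.linalg.det(np.eye(n) + (snr / n) * h @ h.conj().T).real)
        return total / trials

    print(f"SISO:     {capacity(1):.2f} bit/s/Hz")
    print(f"2x2 MIMO: {capacity(2):.2f} bit/s/Hz")   # roughly twice the SISO figure

The extra bits per second per hertz come from spatial multiplexing over the multiple propagation paths, not from extra bandwidth.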
Software-defined radios (SDRs) are wireless communication devices that implement their functionality at the physical transmission level almost entirely in software. Microcomputers installed in the transmitter and the receiver handle a wide variety of waveforms57 and their associated settings. Software is used to control high-speed signal processors, thereby performing functions that in traditional radios were carried out in hardware.

57 Waveform is used to refer to the whole software application determining the behaviour of the system.

SDR technologies can have a great impact on electronic communications, although their potential to improve efficiency along the value chain is mainly indirect with regard to spectrum. By implementing as many functionalities as possible in software, SDR systems are able to adapt to a wide range of frequencies, bandwidths and transmission standards without any hardware changes. Hence, SDR can provide more flexibility. SDR is also an important enabler of advanced forms of dynamic spectrum access, i.e. any form of flexible spectrum use obtained by dynamically changing the set of transmission parameters.58

58 Early forms of such mechanisms can be found in modern car radio sets, DECT cordless communication systems and WLAN devices operating in the 5 GHz frequency band. They are all examples of automatic frequency selection mechanisms.

A key feature of SDR technologies is their novel way of carrying out radio functionalities. Indeed, SDRs represent a major technological advance beyond traditional radio devices for a number of reasons: except for the antenna, physical-layer functions are implemented in software rather than in hardware; there is no built-in waveform predetermined by the manufacturer; and radios can be re-programmed on the fly, providing a high degree of flexibility in adjusting frequencies, bandwidth and directionality. A radio thereby becomes a generic device whose functionality is defined by the software it runs at any moment, allowing multiple uses of the same hardware and infrastructure.59

59 E.g., the same SDR could be used as a mobile or cordless phone, a pager, or to provide WLAN connectivity. Also, a network operator could offer its customers a set of waveforms including UMTS and IEEE 802.11.

These features of SDR technologies have generated some interest among carriers in the United States, as SDRs allow carriers to run multiple standards on the same network and promise to reduce operational costs. The FCC has approved Vanu’s system, which commercially deploys SDR technologies in licensed spectrum. Although its commercial jumpstart was in the rural market, Vanu’s system is expected to gain popularity with large carriers, and WiMax could potentially become part of the system.
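A schematic way to see what “no built-in waveform” means in software terms is sketched below (a toy illustration, not a real radio stack; the function and table names are invented):

    import numpy as np

    # In an SDR the "waveform" is just software: the same sampled front-end
    # output can be demodulated as AM today and re-programmed as FM tomorrow.
    def demod_am(iq: np.ndarray) -> np.ndarray:
        return np.abs(iq)                              # envelope detection

    def demod_fm(iq: np.ndarray) -> np.ndarray:
        return np.angle(iq[1:] * np.conj(iq[:-1]))     # instantaneous frequency

    WAVEFORMS = {"AM": demod_am, "FM": demod_fm}       # loadable waveform library

    def receive(iq: np.ndarray, waveform: str) -> np.ndarray:
        return WAVEFORMS[waveform](iq)                 # behaviour chosen at run time

    iq = np.exp(1j * np.cumsum(0.3 * np.sin(np.linspace(0, 20, 1000))))
    audio = receive(iq, "FM")                          # switch to "AM": no new hardware
    print(audio[:5])

Swapping an entry in the waveform table is, in caricature, all that re-programming “on the fly” amounts to; the hardware is untouched.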
A relevant development in dynamic spectrum access technologies based on SDR capabilities is cognitive radio. Cognitive radios are smart devices that can autonomously perform a multitude of tasks enabling more flexible and efficient use of radio frequencies: a high-level cognitive radio is able to “observe, orient, plan, learn, decide, act” (Mitola 1999). Although their cognitive behaviour may take different forms, the focus here is on spectrum usage only. Cognitive radio (CR) technologies can acquire information on their spectrum environment and employ it to decide on their transmission behaviour; they are also able to learn from their own behaviour and experience. Key features include their ability to “learn” user preferences, prevailing spectrum rules and operator charges. In particular, cognitive radios can sense vacant frequencies and transmit over them until another radio tries to use the same portion of the spectrum. One important advantage is therefore their ability to transmit over temporarily unused frequencies, thus increasing the available bandwidth. Moreover, by exploiting their ability to acquire information on their physical environment in the relevant dimensions (including time, location and frequency), CR technologies have the potential to increase spectrum usage beyond their mere ability to sense vacant frequencies. In addition to transmitting in unused spectrum, CR could help in spectrum sharing by enabling two different wireless systems to coexist in the same frequency band through different techniques: dynamic frequency selection, power control and time agility.60 Also, since spectrum sensing enables secondary users to transmit on frequency channels not used by primary users, CR provides smart techniques (like spectrum pooling) which create “virtual unlicensed bands” to make secondary-user communication more reliable and continuous (Čabrić et al. 2004). The US Defence Advanced Research Projects Agency (DARPA) is working on a prototype radio system developed in a military-relevant scenario. However, cognitive radios are still in development and it will probably be several years before they come onto the market. Meanwhile, a number of technological challenges remain: they include wideband sensing, opportunity identification, interference prevention and dynamic coordination. These issues combine in the so-called “hidden terminal” problem: if two devices are out of range of each other (e.g., there is a building between them), each may fail to spot activity on a particular part of the spectrum from its own independent measurements and start transmitting. The receiver (for example, a base station located within range of both transmitters) will then suffer interference, as the spectrum was actually already in use.

Mesh networks are an example of technological development at the network level. In mesh networks, radio nodes receiving information can also pass it along, thereby providing retransmission capabilities. Mesh networks can be divided into two groups: structured and ad-hoc. In structured mesh networks, radio nodes are fixed and substantial planning takes place in advance. In ad-hoc networks, mobile radio nodes are equipped with relay functionality, which provides a potential for great flexibility and an ability to reroute transmissions along different paths by spotting another node within radio range. While fixed wireless access systems are often included in the concept of structured mesh networks, ad-hoc mesh networks are in their early stages of development.
60 Dynamic frequency selection enables radios to choose the band with the least interference; power control allows communications at the least possible transmit power; and time agility enables radios to adapt to each other’s traffic patterns and avoid increasing interference in poor channel conditions.
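The spectrum-sensing idea, and the hidden-terminal failure described above, can both be seen in a toy energy-detection simulation (all powers, thresholds and channel counts are invented for the illustration):

    import numpy as np

    # Energy detection: a cognitive radio calls a channel vacant when the
    # average received energy stays below a threshold. A shadowed device
    # (signal attenuated by a building) wrongly sees busy channels as vacant.
    rng = np.random.default_rng(1)
    n_ch, n_samp, noise = 8, 500, 1.0
    threshold = 1.3 * noise                 # assumed detection threshold
    occupied = {2, 5}                       # channels in use by primary users

    def sensed(attenuation: float) -> np.ndarray:
        x = rng.normal(0.0, np.sqrt(noise), (n_ch, n_samp))
        for ch in occupied:
            x[ch] += rng.normal(0.0, np.sqrt(4.0 * attenuation), n_samp)
        return (x ** 2).mean(axis=1)        # average energy per channel

    for label, att in (("clear view", 1.0), ("shadowed  ", 0.01)):
        energy = sensed(att)
        vacant = [c for c in range(n_ch) if energy[c] < threshold]
        print(label, "sees vacant:", vacant)

    # The shadowed device lists channels 2 and 5 as vacant and would interfere
    # at any receiver that hears both it and the primary transmitter.

Cooperative sensing, pooling measurements from several devices, is one of the coordination mechanisms studied to close this gap.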
Implementation of SDR technologies might facilitate deployment by allowing radio nodes in the network to become instantly flexible, according to the physical environment. High-level cognitive radios are candidates to be deployed as radio nodes in ad-hoc networks, creating their own transmission path in each specific situation through several hops on other radios in the network (perhaps using spread-spectrum technologies such as UWB for high-speed, short-range communications).

What requirements do the new technologies impose on spectrum management? The traditional regime for spectrum management is that of one frequency to one user (bound to provide a particular service using individually licensed apparatus). Developments in the use of market methods have in some jurisdictions already achieved first-generation flexibility, allowing change of ownership and use across discrete spectrum-using services and technologies in any given frequency. The next issue we consider is how to achieve the benefits of flexibility in the context of the more sophisticated technologies described above, which are capable of accommodating spectrum sharing, often dynamically, i.e. in ways which vary over time. It is not suggested that all these technologies will necessarily be applied on a large scale. Indeed, Ofcom’s own analysis of the costs of SDR and cognitive radio casts doubt on their cost-effectiveness, compared with the historical alternative of achieving more efficient spectrum use simply by decreasing cell sizes and re-using spectrum more intensively on a discrete-user basis. It is important, however, to give thought to their potential impact, if the spectrum management regime is to be made future-proof.

A simple dimension of spectrum use is its efficient scale. A technology such as cognitive radio relies upon using agility to pile more use into given frequencies. Much of the benefit flows from pooling intermittent demands to achieve a greater utilisation rate. Other things being equal, this process works better on a larger scale, subject to the increasing cost and technical complexity of ranging over more spectrum. At the least, therefore, we can expect aggregation of demand under cognitive radio, which will capture the benefits of scale, possibly involving intermediation, for example a band manager which sells access to a range of frequencies.
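The scale economy in pooling can be quantified with a small simulation (the numbers are assumed for illustration): suppose each user independently wants a channel 10% of the time, and a band manager provisions enough channels to cover aggregate demand 99% of the time.

    import numpy as np

    # Channels needed per user, at a 99% service level, as the pool grows.
    rng = np.random.default_rng(2)
    p_active, trials = 0.10, 100_000

    for n_users in (1, 10, 100, 1000):
        demand = rng.binomial(n_users, p_active, size=trials)
        needed = int(np.percentile(demand, 99))
        print(f"{n_users:5d} users -> {needed:4d} channels "
              f"({needed / n_users:.2f} per user)")

    # Provisioning falls from 1.00 channel per user towards the 0.10 average
    # as the pool grows: the utilisation gain that rewards aggregation.

This is the statistical-multiplexing logic behind the band-manager intermediary suggested above.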
Mesh networks hold out the prospect of reducing the power required for transmission, by virtue of their use of multiple short hops at low power levels rather than one long hop at a higher power. To that extent, they enhance the scope for commons, provided that increases in equipment costs do not outweigh the savings in spectrum use.
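The power saving can be made precise under a simple assumed path-loss model (my illustration, not a calculation from the text). If the transmit power needed to cover distance $d$ grows as $d^{\alpha}$, with path-loss exponent $\alpha$ typically between 2 and 4, then replacing one hop of length $d$ by $n$ hops of length $d/n$ requires total power

\[
P_{\text{mesh}} \;=\; n\Bigl(\frac{d}{n}\Bigr)^{\alpha} \;=\; \frac{d^{\alpha}}{n^{\alpha-1}} \;=\; \frac{P_{\text{single}}}{n^{\alpha-1}},
\]

so with $\alpha = 3$ and four hops the total transmit power falls by a factor of sixteen. The saving must be set against the extra radio equipment at every relay node, which is the equipment-cost caveat noted above.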
The two remaining, and more fundamental, issues concern underlays and overlays. Underlays are exemplified by UWB, which operates under the noise floor of other services. In principle, UWB could be utilised in at least three ways. First, one or more separate geographical licences could be carved out beneath any existing noise floor and assigned on an exclusive basis. Second, the same space could be carved out and made licence-exempt. Third, an obligation could be imposed on any prospective user of UWB to negotiate an arrangement with all licensees under whose noise floor it proposed to operate. The last option would almost certainly fail because of the transaction costs incurred in negotiating with countless licensees. The choice between the licensed and licence-exempt modes of permitting UWB should hinge upon a calculation of the risks of interference, either among UWB users or from UWB users to other licensees. It is necessary here to conduct a risk-management exercise to establish what the consequences would be if the multiplication of UWB users ultimately led to either of the interference problems noted above. The conclusions reached so far by regulatory agencies (notably the FCC and Ofcom) favour creation of a traditional commons with very strong power limitations.

Finally, there is the question of overlays, or access by users to spectrum licensed to others. In principle, this could be made generally available. Indeed, the European Commission’s recent proposals on spectrum reform61 seem to contemplate such a general right of access, when they say that “a new system for spectrum management is needed that permits different models of spectrum licensing (the traditional administrative, unlicensed and new market-based approaches) to coexist so as to promote economic and technical efficiency in the use of this valuable resource. Based on common EU rules, greater flexibility in spectrum management could be introduced by strengthening the use of general authorisations whenever possible” (p. 7).

61 European Commission’s Communication on the Review of the EU Regulatory Framework for Electronic Communications Networks and Services (SEC (2006) 816-7).

This clearly raises fundamental issues of spectrum management and the design of property rights. Within one given frequency, transaction costs would not preclude bargaining between an original licensee and potential secondary users. There is also a concern that unlicensed entrants, lacking security of access, would not be in a position to make collateral investments, or to offer their customers adequate assurances of continuity of supply and quality of service. Alternatively, they might establish de facto squatters’ rights of a contestable nature, which would prevent the licensee from exploiting its asset to the full.

These questions have been widely debated in recent years. Some have argued in favour of unlicensed non-interfering overlays or easements of this kind, arguing, inter alia, that absent the imposition of such a regime, licensees will not be prepared to supply access to secondary users by entering into appropriate contracts with what might be their rivals in downstream markets. The opposing point of view is based in part upon the analogy of a successful market for licensing intellectual property. It is suggested that the argument favouring market accommodation of such emissions rests on the now familiar position that spectrum rights holders have an incentive to act in ways that result in (approximately) optimal use of spectrum space, and that there is reason to question whether a governmental rule without price and profit incentives will be able to match the performance of a market regime.

Absent any experience of the non-interfering easements regime, it is hard to discriminate on a priori grounds between the two approaches. The one economises on transaction costs, while the other eschews use of the price mechanism to ration access. As often in economics, it is unlikely that there will be a single solution, with the same regime optimal in all frequencies. Ideally, the choice of regime would be determined by a procedure akin to that described above, in which some kind of quasi-market testing of a secondary commons would be undertaken. But the design of such a mechanism is very challenging, and the best way forward may be to undertake some limited testing of the non-interfering easement approach in likely-looking frequencies. Whatever application easements may have in the future, market methods will almost certainly provide the foundation for achieving flexibility in spectrum use of both the first and the second generation varieties.
Annex C: The Scope of the PTS Assignment

The assignment is to make recommendations on how the regulation should be designed to maximize the degree of self-regulation on the market, and thereby reduce the need for regulatory interventions. Alternatively, to recommend under what conditions more intrusive regulatory interventions can be implemented for a short-term period to achieve the same long-term goal. The assignment also includes giving recommendations on measures that a regulatory body (PTS) can take to stimulate self-regulation so the market can move in the direction of more effective competition. It is important that both wired and wireless networks (techniques) are taken into consideration, and issues that can be covered in the assignment are:

• Spectrum management
• Access to electronic communications networks
• Possible increased accessibility (USO)
• Geographical differentiation
• Convergence and NGN

Questions that can be answered are: What is needed in terms of parallel infrastructure in the access network to enforce a withdrawal of regulation? What is needed in order to get a well-functioning spectrum market where e.g. spectrum usage is not restricted?