Saturday, January 26, 2008

Telecommunication reforms in developing countries.

Major innovations have pushed telecommunication costs down and demand up since the mid-1980s. The new segments of the mobile and the internet markets are hence suitable for oligopolistic competition. Reforms of the former public monopoly have been necessary to accommodate the entry of new operators. It is important to disentangle the effect of market liberalization that occurred in response to technological change and demand growth from the effects of privatizations resulting from structural adjustment programs. In line with popular opinion, privatization per se did not benefit consumers much. The biggest improvements for consumers have been driven by competition from mobile telecommunication firms. Governments should concentrate on liberalizing the mobile and internet segments. As for the incumbent telecom operator, its allocative inefficiency, combined with the critical budgetary conditions found in most developing countries, favours public ownership. This is an effective way of combining the regulation of the firm with a maximum level of taxation.

The percentage of countries that allowed private shareholders to own stakes in their incumbent telecommunication operator rose from 2% in 1980 to 56% in 2001 (International Telecommunication Union, ITU 2002). Simultaneously, markets worldwide have opened up to new entrants in the mobile and the internet segments. In the mobile market, 78% of the 201 countries included in the ITU database had adopted some degree of competition by 2001, while the figure was 86% in the internet market. The massive trend towards privatization and liberalization should not mask the fact that almost half of the countries in the world still have a public incumbent operator and that roughly 20%, mainly developing countries, have no private operator in their telecommunication industry at all. Similarly, poor countries have limited their liberalization reforms to the mobile and internet segments. In the fixed telephony market, over 60% of the world's countries have a monopoly.
The differences between telecommunication industrial policies from country to country raise the issue of how optimal reforms have been. Are poor countries lagging inefficiently behind, as is sometimes argued by the advocates of privatization, or is there a rationale for keeping the incumbent telecommunication operator public and monopolistic? The answer to this question is not clear. Assessment of reforms varies widely depending on the assessor. Since they have led to improvements in the financial and operating performances of divested firms, and in many cases also to network expansion, specialists tend to think that the reforms have been successful. This positive appraisal contrasts sharply with the popular view among consumers in developing countries, where there is a widespread perception that the reforms have hurt the poor, notably through increases in prices and unemployment, while benefiting the powerful and wealthy. In a 2001 survey of 17 Latin American countries 63% of participants disagreed or strongly disagreed with the statement: "The privatization of state companies has been beneficial" (The Economist, July 28th-August 3rd 2001, p. 38). Similarly in Africa, reforms have been qualified as "re-colonization" due to the participation of foreign investors in many cases. It seems hard to reconcile consumer dissatisfaction with specialists' contentment. On the other hand, the unpopularity of the reforms cannot be disregarded by those who promote decentralization and democracy. This paper thus aims to clarify this issue. It analyses the advantages and drawbacks of telecommunication privatization and market liberalization in developing countries.

Tuesday, January 15, 2008

The telecommunications server revolution.

Telecom systems today consist mainly of handsets that act as dumb terminals connected to a proprietary switch. Several political and technological forces are driving a move to PC-based telephony solutions. Global telephone deregulation is creating demand for new equipment, advances such as IP telephony are blurring the line between voice and data and the CompactPCI standard provides a framework for hot-swapping and high availability in an open PC platform. The trend toward PCs in the telecom market is well established, and PC systems are now taking market share from traditional proprietary solutions. Forty percent of systems used for adjunct servers are PC-based, and the benefits of replacing proprietary equipment with a universal telecom server quickly become obvious. Universal servers are integrated systems that perform switching and voice and data services in one box. Existing universal solutions use primarily circuit-switched technology, but a shift toward packet switching is underway.
Today's telecommunication solutions consist primarily of dumb terminals (telephone sets) connected to proprietary switches. Any intelligence in the system resides in the public switched telephone network (PSTN), at the private branch exchange (PBX), or in adjunct servers that provide enhanced services such as voice mail. Because these components are primarily closed and proprietary, customization is limited and difficult. In addition, until recently, network service providers faced little competition. All of this is beginning to change.
There are many forces in the market today--both political and technological--that are combining to change the face of telecommunications solutions:
* The deregulation of the telecommunications industry in the U.S., Europe, and Japan is creating global demand for new equipment. This invites new entrants into the market and puts pressure on existing players to create new services and features quickly and inexpensively.
* The lines between voice and data communications are blurring. Corporate network managers are demanding the consolidation of voice, data, and video onto a single, managed network. Internet Protocol (IP) telephony is emerging as the technology to provide this solution.
* CompactPCI is providing the framework for high availability and hot swap capability in an open PC-based platform. These components allow developers to deliver new, innovative, reliable services.
* Operating system providers and groupware providers are looking to telephony applications to increase their market penetration. These vendors provide low-cost development tools and a trained workforce for distributing and supporting PC-based telephony solutions.
These market forces signal a common theme: information technology (IT) managers want a broad technology base that delivers, cost-effectively, features that boost productivity and reduce operating expenses. In other words, they need solutions that let equipment vendors take advantage of PC-based building blocks, reducing time to market and providing easy integration into communications networks around the world.
The PC's penetration of the telecommunications market is a logical progression of a well-established and accelerating trend. Today, the most time-consuming task when deploying new or enhanced telecommunications services is software development. Not only do PCs offer the richest software environment available, but they also have the largest trained workforce in the industry. New hardware technologies are often developed on PCs first because PCs offer a convenient platform on which to work.
In addition to reduced time to market and the prevalence of PCs in the marketplace, there are other trends pointing to the penetration of the telecommunications industry by PC-based open platforms. These include more memory and processing power on the desktop, increased communications capacity of the telecom buses within PCs, the availability of high- and low-density PC-based telephony solutions, and support for hot swap and high availability features.
PC systems have already begun to make inroads into primarily proprietary solutions. About 40% of the systems used for adjunct services today are PC-based. But the obvious advantages of PC-based components are seen when existing telecommunications solutions are replaced by universal telecommunications servers.
Universal telecommunications servers are integrated systems that basically "do it all." Telecommunications servers perform switching as well as enhanced services in a single box. There is no need for a proprietary switch plus one or more separate adjunct servers. All of these functions can now be supported in a single, open system.
These integrated systems are built with components from multiple vendors. Cost-effective, richer unified messaging applications with a software focus are beginning to emerge. Developers do not have to design hardware or middleware--they can choose a PC, open telecommunications hardware, and message handling software, out of the box, from a number of vendors. This allows individual companies to focus on their core competency, resulting in better, faster, and less expensive products.
The advantages of an open universal telecommunications server are just too great for the industry to ignore. Existing, proprietary switches are expensive, hard to modify, and provide basically only network control. An adjunct server has to be added for every new service, for example, voice mail, interactive voice response, and personal number service. Connecting and integrating these adjunct servers with a proprietary switch is often a complicated task.
Contrast this with the universal telecommunications server where each adjunct service is replaced by integrated software running in a single PC. This integrated solution requires software built on standard interfaces such as telephony application programming interface (TAPI). Adding a new service under this open architecture only involves adding more software.
And the universal telecommunications server eliminates most of the hardware required in the traditional enhanced services architecture, reducing equipment purchase, integration, and maintenance costs.
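To make the "adding a service is just adding software" point concrete, here is a minimal sketch, in Python, of a hypothetical service registry. It is not the TAPI interface or any vendor's actual product; it only illustrates how each adjunct function (voice mail, IVR and so on) could become a software module that the server core dispatches calls to.

```python
# Minimal sketch of a "universal server" service registry (hypothetical,
# not TAPI): each enhanced service is just a software module that the
# switching core dispatches call events to.
from typing import Callable, Dict

class UniversalServer:
    def __init__(self) -> None:
        # service name -> handler invoked when a call reaches that service
        self.services: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        """Adding a new service is just adding more software."""
        self.services[name] = handler

    def route_call(self, caller: str, service: str) -> str:
        handler = self.services.get(service)
        if handler is None:
            return f"{caller}: no such service '{service}'"
        return handler(caller)

def voice_mail(caller: str) -> str:
    return f"Recording a message from {caller}"

def ivr(caller: str) -> str:
    return f"Playing menu to {caller}: press 1 for sales, 2 for support"

server = UniversalServer()
server.register("voicemail", voice_mail)   # formerly a separate adjunct box
server.register("ivr", ivr)                # formerly another adjunct box

print(server.route_call("+1-555-0100", "voicemail"))
print(server.route_call("+1-555-0101", "ivr"))
```

In this picture, offering a third enhanced service is another register() call rather than another box in the equipment room, which is the cost argument made above.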
FUTURE TRENDS
The universal telecommunication servers emerging in today's market are primarily built on circuit-switched technology and are already delivering tangible value. Over the next few years, however, there will be a shift toward packet-switched technology with IP as the carrier protocol of choice.
The movement toward IP telephony, or implementing telephony applications on a data network via IP, is already underway. Major vendors have embraced the International Telecommunication Union's H.323--a suite of audio and video conferencing standards for sending multimedia data, including voice telephony, over packet networks. IP telephony leverages long-term technology trends, is accessible and customizable by those in the computer industry, and has at least a 10 times price-performance advantage over traditional telephony.
The current revolution being staged by universal telecommunications servers is conceptually identical to the information technology revolution of the 1980s, when PCs and workstations supplanted expensive, proprietary hardware and software architectures. Expect this revolution to continue and eventually extend to all aspects of telecommunications--from the desktop to the PBX, to the central office switch, and beyond.
Steckbeck is the director of software product management at Natural MicroSystems and holds a B.S. in mathematics from George Mason University.

Monday, January 14, 2008

A new view of scale and scope in the telecommunications industry

Telecommunication economic analysis has largely relied upon a conventional framework rooted in neoclassical analysis that emerged almost a hundred years ago. That framework has helped reshape the direction of economic policy by attacking the premises of the 1996 Telecommunications Act, giving far greater leeway to incumbents and challenging the economic efficiency of new entrants. Common approaches rest upon a large number of simplifying assumptions, including, for instance, the idea that technology is exogenous. Such hypotheses make little sense at a conceptual level. In addition, this idea is largely contradicted by the brief periods during which the sector achieved some level of competition, around 1900 and again around 2000. Not only have economists left many of these hypotheses unexamined, but they have also failed to consider how they might affect their analysis. Evaluating a number of such issues in this paper, we show how conventional economic analysis, uncritically applied to the sector, contributed to the undoing of the 1996 Telecommunications Act and of much of the competition it helped facilitate.
Key words: scale and scope, competition, telecommunications industry structure.
* Introduction: the problem of scale and scope as a source of confused assumptions
The telecommunications industry continues to be dominated by a small number of established players, most of whom are incumbents. Yet, notwithstanding their continued dominant market position, the prospects of these players are troubling to financial analysts (2). These worries are fuelled by declining numbers of access lines (3) and declining prices in the traditionally more lucrative parts of the business in particular (4). The uncertain financial position of dominant firms, operating in dynamic markets (5) with seemingly limitless market opportunity, creates a strange incongruity that applied economic analysis is called upon to explain, but has not been able to so far.
At the same time, the legacy monopoly structure of the industry continues to exert considerable influence on today's market even though there has been a legislated end to most monopoly-endorsing public policies. This influence is rooted in the incumbent's sunk network, which already connects most of the population in its serving territory and which, in most cases, can be easily upgraded to newer technologies at a far lower cost than "evergreen" network builds by new entrants (6). These financial realities have thwarted many new entrants, as the widespread bankruptcies witnessed towards the beginning of this decade attest, even in the face of observations that these failed entrants did many things much more efficiently than incumbents (7). The financial difficulties encountered by entrants have often been assumed to relate to presumed efficient scale and scope economies of incumbent networks. Yet, there is no direct evidence to substantiate such assumptions. On the contrary, it appears that such scale and scope assumptions boil down to a tautology at best. If sunk network technologies exhibit any scale and scope effects on a product level, we observe them as characteristics of technologies that were designed and adopted by the incumbent in response to the specific market structure it operated in, a structure over which, in many if not most cases, it was able to exert conscious choices different from those that market efficiencies might have dictated. The incumbent, as a monopoly, was absolved from any need to respond to market forces and instead could adopt network designs and develop technology that furthered its monopoly service mandate on some other basis. What we are therefore observing is the technology environment of a legacy monopoly, and as such there can be no assumption that this technology is "efficient" in a market-honed economic or social welfare sense. Even if scale and scope economies within the technology of incumbents' "sunk" networks could be demonstrated, they would not have any necessary connection to efficiency. From the standpoint of economic welfare analysis, or even profit maximization in the use of assets, such observed economies would indicate only how to adjust the use of sunk assets, but not whether it would be better to use them differently, or to abandon them altogether.
The technology source of incumbent networks also has consequences for policy debate. A fundamentally changed policy from monopoly to competition requires a fundamentally changed governance structure. Not surprisingly, we observe policy makers and industry members attempting to steer policy changes towards a wide variety of often shifting goals such as the construction of alternative facilities, unbundling at "wholesale" rates, the redefinition of bottlenecks, etc. Much of this debate, however, is characterised by a discussion framework rooted in the legacy of the past, featuring scant economic analysis regarding the forward-looking impact of alternative policies on industry members, the evolution of markets and evolving social equity considerations (8). The failure to pursue such an inquiry has as adverse an impact on would-be entrants, incumbents, consumers and the capital markets as on policy makers themselves. Without a basic understanding of where a changed policy is heading, the policy transition itself becomes a blur and business risk assessments become harder to manage (9). In the worst-case scenario, if business risk rises to the level of utter uncertainty, capital will flee the sector.
Consequently, even though incumbents have mostly survived the maelstrom of this sector's recent financial history, they must still compete for investors with every other sector. If policy uncertainty or a perceived lack of adaptation to market changes prevent their efficient use of capital in a forward-looking economic sense, they will ultimately be punished in their financial results (10). Here we will unravel the problem of scale and scope analysis to show how a misguided approach to the problem has distorted markets and regulation, as well as severely affecting competition. In an accompanying paper, we also show how an alternative approach to modelling will stimulate innovation and provide guidance for the forthcoming restructuring of the industry. We will begin with a new reading of telecommunications history and then move on to discuss analytical approaches (BOURDEAU de FONTENAY & LIEBENAU, 2005). We conclude with a discussion of the effects on innovation and industry restructuring.
The central problem inhibiting a better understanding of the scale and scope of market forces currently at work today may well be the incumbents' perception of themselves--their market position and challenges. Incumbents (and indeed most others in the industry, even policy makers with whom incumbents feel eternally at loggerheads) naturally continue to look at the legacy of vertical (and horizontal) integration as the way to control the environment, including what incumbents perceive as the two primary sources of uncertainty: competition and innovation. In view of the longstanding mindset of monopoly and service obligation, the difficulty of evaluating that environment, and established success in dealing with their historical environment, it is understandable that incumbent firms today continue to allocate substantial resources to protect their legacy (POSNER, 1975) (11).
Nevertheless, established firms now face a highly complex adjustment process that will force them to think and act in terms of fundamentally new models brought about by increasing collateral entry and innovation. In a dynamic market environment, delay in recognizing new and newly possible models increases financial risk and limits profit opportunities.
* Monopoly defines technology, efficiency assumptions about legacies are unwarranted; a new reading of telecommunications history
Economists generally assume that the technology of any industrial sector is state-of-the-art and known to all, and that firms are bounded and formed by the dimensions of that technology (BOWLES, 2004). In other words, technology causes firms to look the way they do. The choice of technology is presumed efficient, reflecting the firm's incentive to maximise profit, and its form is seldom scrutinized by economists. On this basis, we have come to assume (since SRAFFA, 1926) that a sector with few large firms, often a single monopoly, has to reflect the economies of scale inherent in the technology of the sector. Applied to telecommunications, the presence of such economies of scale is taken to explain both the size and staying power of incumbents and why other firms do not provide more local competition.
On the contrary, there is every reason to believe that the assumed relationship between technology and firms has worked the other way around in telecommunications. Here, the monopoly selects and shapes the technology to serve its own interests. If the technology is endogenous, that is, if it becomes a strategic variable managed by the firm in pursuit of its own private objectives, then the technology we continue to observe throughout existing concentration in the sector cannot be presumed to be socially efficient, even if it could be efficient for the firm itself. Indeed, it is generally the case that a monopoly largely determines technology to meet its needs. Once the efficiency of technology assumption erodes, so does the efficiency of any apparent scale and scope economies. With an endogenous technology, observed scale and scope economies reflect a market structure-specific private economic efficiency at best. It may be nothing more than a technology the firm selected with the goal of fostering a technological path that shelters it from the risk of competition. In other words, such path dependence may create an entry barrier that increasingly obstructs competition and undermines the socially efficient allocation of assets.
This situation is clearly illustrated by the history of telephony, where systems and entities were built and organized pursuant to a policy design that worked outside of competitive market forces--the franchised or government controlled monopoly. However, it does not follow that, at least in the case of telecommunications, the structures and technology are, from a market perspective, efficiently produced. Telephone networks were built to fulfil a firm-level planned ubiquitous service policy expectation, and were not built in response to competitive market pressures.
The difference can be easily illustrated by an activity as basic as dialling a telephone number. From the standpoint of a monopoly serving a large metropolitan area, it might be "efficient" administration to adopt a seven, eight or even ten-digit dialling requirement (12). However, people living in neighbourhoods might well find added value in remembering and dialling only three or four digits to reach their neighbours. Indeed, short dialling was one of the "instant" innovations included in competitive PBX equipment. This one example illustrates that, had the market ruled from the beginning, even the simplest technology might have looked different than it does today. Efficient technology for the monopolist may not be welfare enhancing from the standpoint of society or an economic analysis grounded in competition.
The numbering example illustrates the dimensions of the problem of how to transform this formerly monopoly sector into an effectively competitive one that can efficiently innovate, and why this understanding is important to investors, incumbents and policy makers alike. Blauvelt's dialling plan was a significant invention that went far beyond accommodating more subscriber addresses. With his system individual subscribers no longer controlled the routing of their calls. The first three digits represented an address for an AT&T switch. That switch would then take control of the call and route it in the manner most efficient from the standpoint of AT&T's network. In other words, trunking became divorced from the customer's dialling and became governed by the telephone company, not the customer. Understanding this simple invention, how it came about and was implemented, and the underlying assumptions of technology that stemmed from it helps us to recognise the continuing primary role of the incumbent in developing technology throughout the monopoly period and even in today's market.
The telephone business started in 1877 as a monopoly built on a series of Bell patents, but the original monopoly did not survive after their expiration. By the early 1900s, the United States had over 3,000 telephone operators and in 1907 competitors already controlled 51% of the telephone market. The situation alarmed the Morgan banking interests who, by then, financed and controlled AT&T. In 1907 they named Theodore Vail to head a revamped AT&T (13). Vail's approach was an acquisition strategy designed to expand the company's reach across the country, to reverse its market losses, and to protect it from the uncertainties of competition (BORNHOLZ & EVANS, 1983; TEMIN, 1987). Vail leveraged AT&T's control over key long distance patents, hence, over long distance interconnection to get this new foothold (14). In 1908, he coined the motto: "One System, One Policy, Universal Service" in a campaign to placate the government's antitrust concerns (15) and in 1913 successfully negotiated the Kingsbury Commitment with the federal government, essentially gaining government acquiescence to the Bell monopoly.
At that time, economists generally saw a clear dichotomy between utility-type monopolies and other types of monopolies. The idea that some sectors have unique economies of scale dates back to John Stuart Mill (BULLOCK, 1901) at least. Debates tended to deal more with whether the sector was best managed as a public or private utility. Vail's solution was original, elegant and generally accepted as in line with thinking at the time.
As a result of Vail's business strategy, and AT&T's successful sale of its policy to state and federal governments, the Bell System's control of the sector became almost complete. AT&T used its market power both to set standards and practices and, of course, to dictate prices (16). It ended the practice of requiring customers to provide their own inside wire and hook-up points and took over that function. TEMIN (1987), among others, tells us that the integration of the telephone set as a part of the telephone network was not so much "because there were joint costs" but "precisely because it was so easy for anyone to make a telephone set that Bell could never hope to police licenses for their manufacture." Vail's genius was not just to convince the government that telephony was a utility; it was also to define that utility as the complete, end-to-end system, a system that remained the most vertically integrated utility through time. There was no implication that the "one system" Vail was talking about was subject to technological constraints. The idea that technology was a constraint on the market potential of the sector is an "afterthought" that was introduced much later. On the other hand, Vail was certainly, like Ford, among those who best understood Bullock's "law of economy in organization" (1902). Consequently, throughout the decades, AT&T's political disputes were waged with the objective of preserving the entity's integration strategy.
AT&T's integration strategy never had anything to do with the kind of calculus described by COASE (1937, reprinted 1988) and WILLIAMSON (1971). Their work establishes the boundary between what is purchased by a business enterprise on the market and what is integrated within the (telephone) company based on what is most efficient, using a competitive market benchmark. Under normative analysis, a firm that integrates a function that is more efficiently produced within a competitive market will put itself at a cost disadvantage vis-a-vis its competitors (17). AT&T's all-encompassing structure essentially precluded the existence of such a competitive market benchmark and shielded it from market discipline. Vail's "one system" became the foundation on which the Bell System's culture emerged and, through time, created the routines that still permeate the management process, as well as negotiations with the government (GRANT & LIEBENAU, 2000) (18). It also created legacy technologies that still make it difficult for new entrants and also inhibit innovation by incumbents themselves, in ways that only a few are beginning to appreciate.
Most of the early historical experience with competition is centred in North America. However, there are some experiences elsewhere, especially in Scandinavia, that can enlighten us as to the market potential of telecommunications. In Sweden, competition became so intense with the end of the Bell patents that AT&T decided to pull out of the country. At the time, telephone rates were lower in Stockholm than they were in U.S. cities, a trend that continued for decades. This pricing advantage was achieved in spite of the supposed benefits of economies of scale and scope (19).
After the Bell exit, a latecomer in the Swedish telephone sector, the telegraph company Televerket, was able to leverage its technological control over long distance services together with its existing infrastructure of poles, to gain control over a growing number of competitors, generally following the pattern of AT&T. Only recently have regulation and interconnection requirements arrived (20). Yet here, too, there is no evidence that scale and scope played a role in the costs or industry structure of Swedish telecommunications.
In 1923, Clark concluded that "[t]elephone companies ... show no signs of economy with increased size, but rather the opposite" (21). Today, a small minority of economists argue that those economies do not play a particularly significant role (ROSSTON & TEECE, 1997). Nevertheless, economists rarely challenge the assumed existence of those economies (TEECE, 1995). In the absence of proof of scale and scope economies, there is no basis for assuming that incumbents' structures and their current organization of assets are performing efficiently.
* Analytical framework: policy change, innovation and investment
Policy makers, investors and financial analysts have a lot in common with a CEO running a business. The CEO is obliged to make the most efficient use of the assets investors have assigned to the management's care to maximize profits, excluding agency problems. Use of investments and innovation are the tools of this trade, and assessment of their use serves as the benchmark for financial analysts in their evaluation of management performance. Similarly, policy makers engage in economic regulation to foster greater economic efficiency and social welfare. Public investment and innovation are also the tools of that trade, although policy makers frequently do not recognize the extent of the investment required to effect a policy, or understand that innovations in government conduct are required to effect the desired goal's implementation. In a dynamic environment the challenge is to recognize opportunities and problems as they arise and the assumptions we hold, especially about scale and scope, make huge differences to the conclusions we arrive at.
One of the best ways to disaggregate the monopoly structure is to look at the existence, vel non, of wholesale markets and understand their importance and the problems standing in the way of their creation. To begin with, there is no basis for assuming that the traditional monopoly vision of end-to-end service provisioning is a natural and inevitable product of technology. There are literally thousands of functions that take place to build and operate networks and provide different services, and for many of them, improvements to economic efficiency could arise from disaggregation. For immediate discussion purposes, we refer to all these intermediate functions (and potential markets) together under the heading of "wholesale markets".
The idea of wholesale markets in telecommunications emerged only once the sector had been privatized and opened up to free entry/exit. For incumbents, the wholesale concept did not begin as a means of fostering greater profits, but as a forced response to policy changes. If a highly integrated, former monopoly was now to face competition, would-be competitors needed to have the technological means of getting to customers. To a large extent, there was only one practical source of that technological means--the incumbent. Additionally, over the years the incumbent had incorporated into itself all the many intermediate functions of providing telephony, from laying of conduit to network and equipment design, to creating standards for interconnection. So there were few substitutes not only for the incumbent's network, but also for many of the intermediate functions necessarily involved in providing communications to end-users.
The question for incumbent management became: is there greater profit to be gained by cooperation, or is this policy change only a question of sharing "my" existing market with others? If there are considerable legacy scale and scope forces at work, then new entrants add little value by expanding markets, but only compete for a market already served "efficiently" by the incumbent, given the existing technological path. The question for would-be investors in entrants becomes: what is the cost/risk associated with investment in this market? Is the regulator up to the task at hand? The question for policy makers becomes: what is required of us to make an open-entry model feasible? Wholesale access to incumbents' resources appeared obvious, but there was little understanding of what specifically was going to be required to realize this change of policy or what the consequences of it all would be.
At the core of this set of problems lies the question of scale and scope. If large, exogenous economies of scale and scope exist, then a policy that forces competition where it conflicts with technology is a policy that imposes inefficiency, is wrongly conceived, and cannot be implemented without constant intervention. KNIGHT (1925) and SRAFFA (1926) were among the first to clearly analyse such a situation. However, where there are not significant scale and scope economies, then the monopolization of a sector is inefficient and the reluctant incumbent's vertically integrated structure obstructs innovation and market growth. In the latter case, an incumbent's integrated structure, focused on providing a limited set of end-user services, is an albatross for itself, as well as for public welfare. Preserving its monopoly will limit the capital applied to the sector and thus also its market growth and opportunity. Indeed, in a dynamic market setting, the incumbent continually needs to examine its market opportunities and even rethink its structure. Following STIGLER (1951), as well as recent work on innovation and vertical disintegration by experts like CHRISTENSEN (2000), if we think of the incumbent as a collection of assets, in a dynamic setting, these assets look less attractive as a vertically integrated monolith. They are more usefully evaluated as a conglomerate of businesses that it may, or may not make sense to be engaged in and which may, or may not be maximising their profit potential. So, the scale and scope issue is one of the most fundamental of questions for all parties.
* Guideposts on scale and scope from economics
Existing quantitative findings do not support the conclusion that large scale and scope economies exist within the telecoms firm. Yet there is much work built upon assumptions that they exist (STEHMAN, 1925; KAHN, 1971; MITCHELL & VOGELSANG, 1991; BAUMOL & SIDAK, 1994; HARING, 2002; SPILLER, 1999; LAFFONT & TIROLE, 2000; ARMSTRONG, 2002; National Research Council, 2002; SPULBER & YOO, 2003) (22). Most of this work is general and not intended to offer pragmatic guidance for business and policy decision-making. However, if it were applied to practical questions of telecommunications business planning and policymaking, this body of work could mislead in critical ways.
Most definitions of competition in telecommunications are based on models that consider a single entrant essentially trying to duplicate what the incumbent is doing. Such a "monopoly-centric" approach is an easy path of analysis and is consistent with the hypothesis of a technology that is exogenous and known to all. It focuses on incremental changes to the established environment and uses the knowledge base associated with the status quo. Yet the change of policy that drives the need for new modelling is far more than just an incremental change. It is what Schumpeter saw as an innovation, and in such cases it is the official responsible for policy implementation who, in Schumpeter's words, "as a rule initiates economic change, and consumers are educated by him if necessary; they are, as it were, taught to want new things" (23). The official who implements the policy, and the investors and incumbent managers who have to deal with it, must face a world that has no concept of what competition might mean in this case, other than the realisation that it must be something different from what people are used to. Consumers, for their part, are not in a position to imagine what a maturely competitive telecommunications environment may mean for them. They may imagine lower prices, but how much lower? What kinds of new services can they expect?
If no knowledge were available about the kinds of environments competition could bring about in telecommunications, it might be impossible to imagine the competitive environment other than through monopoly-centric biases.
In general it is possible, by combining observations and analogies from other sectors, to gain some insight into what that new environment could be like and into the business strategies that might succeed in harnessing new opportunities. One useful approach is to look at the process of disaggregation that competition brings about when applied to other monopoly sectors. STIGLER (1951) describes the process of vertical disintegration and innovation that accompanies competition, and it is possible to observe some elements of the process he describes in telecommunications, both during the competitive period around 1900 and in the modern competitive period. For instance, in France the regional independent construction companies used by France Telecom, as well as by its competitors, build significant portions of France Telecom's local infrastructure. Those companies evidently have a comparative advantage in construction, facilities management and maintenance over the construction departments integrated into the legacy service companies. It did not take long for new entrants to discover that there was a wide range of rights-of-way and properties available that, with some imagination, could be used more cheaply than the methods incumbents continue to use (24). That implies the potential for a commercial marketplace for rights of way and construction that is broader than telecommunications and can be pursued more efficiently as an independent activity.
Once the monopoly-centric bias of data is understood, then little of the recent analysis that has been applied to explain the plight of competition in the sector makes sense. For example, virtually every existing study of new entrants' investment incentives, including the innovation dimension of those incentives, implicitly uses a monopoly as the benchmark. As a result, they deal in aggregate with the firm and cannot disaggregate the layers of infrastructure that constitute the network. (JORDE, SIDAK & TEECE, 2000; HARING et al., 2002; CRANDALL, 2002; BREYER 2004) (25).
* Conclusion: "scale economies", integration and functional disintegration
Economies of scale and scope and vertical integration are at the heart of STIGLER's (1951) demonstration that the division of labour need not lead to monopolies, and therefore that organizational choices require skilled and informed management. Stigler, as well as SRAFFA (1926), demonstrated that economies of scale and scope cannot be considered independently of the vertical integration of tasks (26).
Higher production levels make it possible to use more efficient production techniques, a cornerstone of the theory of the division of labour and a dimension upon which CHANDLER (1990) built his analysis of the emergence of increasingly large and, eventually, multifunctional and multiproduct firms. TIROLE (1988) tells us that engineers confirm Chandler's results. Sraffa's observation was that most firms could increase their output substantially without increasing their per-unit cost, and potentially while reducing it further. BESANKO et al. (2000) discuss the L-shaped curve as being far more common than Viner's U-shaped curve and remark that, "In reality, large firms rarely seem to be at a substantial cost disadvantage relative to smaller rivals" (p. 73). Those economies are not necessarily abstract assumptions, but may be associated with concrete factors such as Robinson's "economies of mass reserves." However, care must be taken in applying these observations to a sector that is not maturely competitive, such as telecommunications, where the economies engendered by great size are less likely to have been the result of efficient technology (insofar as they exist at all).
Economies of scale and scope are properties that are associated with technology, the organisation of technology and the technology of the organisation. Moreover, as Tirole points out: "Returns to scale have their limits." BESANKO et al. (2000) identify capacity bottlenecks, as well as agency problems as possible contributors to the eventual emergence of diseconomies of scale in a firm. They identify:
"[...] four major sources of economies of scale and scope economies: indivisibilities and the spreading of fixed costs, increased productivity of variable inputs (mainly having to do with specialization), inventories, [and] the cube-square rule" (p.75).
All of those sources imply some limits to those economies of scale and scope at any point in time or over a finite time period. They do not justify a blanket assertion regarding the economies of scale and scope of telecommunication operators. Interestingly, three of those determinants refer not to the firm, but rather to the kind of activities Stigler identifies within a firm that is, by the normal and proper definition, correctly vertically integrated.
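As an illustrative aside that is not drawn from the paper, the cube-square rule cited in the quotation can be made concrete. If the cost of a piece of plant scales with its surface area while its capacity scales with its volume, average cost falls with scale, but only within the physical range over which that geometric relationship actually holds:

```latex
% Illustrative sketch of the cube-square intuition (not from the paper).
% For plant of characteristic length r:
%   capacity:  Q \propto r^3   (volume)
%   cost:      C \propto r^2   (surface area)
% so that
\[
  C \propto Q^{2/3}
  \qquad\Longrightarrow\qquad
  \frac{C}{Q} \propto Q^{-1/3},
\]
% i.e. average cost declines as capacity grows, but only as long as the
% r^2 / r^3 geometry remains physically valid.
```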

Tuesday, January 8, 2008

Telephone

Optical fibre provides cheaper bandwidth for long distance communication
In an analogue telephone network, the caller is connected to the person he wants to talk to by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end, where it is transformed back into sound by a small speaker in that person's handset. There is a separate electrical connection that works in reverse, allowing the users to converse.
The fixed-line telephones in most residential homes are analogue — that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled from end-to-end as analogue signals, increasingly telephone service providers are transparently converting the signals to digital for transmission before converting them back to analogue for reception. The advantage of this is that digitized voice data can travel side-by-side with data from the Internet and can be perfectly reproduced in long distance communication (as opposed to analogue signals that are inevitably impacted by noise).
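A toy numerical sketch (illustrative only, and not a description of any particular carrier's equipment) shows why the digitised path can be reproduced perfectly while the analogue path cannot: the samples are rounded to a fixed set of levels, so noise smaller than half a quantisation step is simply rounded away again at the receiving end.

```python
# Illustrative sketch of pulse-code modulation (PCM): an analogue voltage is
# sampled and rounded to discrete levels, so small transmission noise can be
# removed at the far end, whereas an analogue signal keeps the noise.
import math
import random

SAMPLE_RATE = 8000        # samples per second, typical for telephony
LEVELS = 256              # 8-bit quantisation

def sample_voice(num_samples: int):
    """Stand-in for 'voice': a 440 Hz tone in the range [-1, 1]."""
    return [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(num_samples)]

def quantise(samples):
    """Map each sample to one of LEVELS integer codes."""
    return [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]

def add_noise(values, scale):
    return [v + random.uniform(-scale, scale) for v in values]

analogue = sample_voice(8)
codes = quantise(analogue)

# Digital path: noise smaller than half a quantisation step is rounded away.
received_codes = [round(c) for c in add_noise(codes, 0.4)]
print("codes survive noise exactly:", received_codes == codes)

# Analogue path: the same relative noise stays in the signal forever.
received_analogue = add_noise(analogue, 0.4 * 2 / (LEVELS - 1))
print("residual analogue error:", max(abs(a - b) for a, b in zip(analogue, received_analogue)))
```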
Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totalled 816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2% growth. Increasingly these phones are being serviced by systems where the voice content is transmitted digitally, such as GSM or W-CDMA, with many markets choosing to deprecate analogue systems such as AMPS. There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optic fibres. The benefit of communicating with optic fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time and today's optic fibre cables are able to carry 25 times as many telephone calls as TAT-8. This increase in data capacity is due to several factors: First, optic fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk which means several hundred of them can be easily bundled together in a single cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre.
Assisting communication across many modern optic fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned in the second paragraph. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut-off completely. There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.
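The traffic-contract idea can be sketched as a simple admission check (illustrative only; real ATM connection admission control also considers cell delay variation, burst tolerance and other parameters). The capacity and bit-rate figures below are arbitrary examples.

```python
# Simplified sketch of connection admission against a traffic contract
# (illustrative only; real ATM CAC is considerably more sophisticated).
from dataclasses import dataclass
from typing import List

@dataclass
class TrafficContract:
    label: str
    constant_bit_rate_kbps: int   # bandwidth the client asks the network to guarantee

class Link:
    def __init__(self, capacity_kbps: int) -> None:
        self.capacity_kbps = capacity_kbps
        self.admitted: List[TrafficContract] = []

    def reserved_kbps(self) -> int:
        return sum(c.constant_bit_rate_kbps for c in self.admitted)

    def request(self, contract: TrafficContract) -> bool:
        """Accept the connection only if the contract can still be honoured."""
        if self.reserved_kbps() + contract.constant_bit_rate_kbps > self.capacity_kbps:
            return False          # the network refuses rather than break its promises
        self.admitted.append(contract)
        return True

link = Link(capacity_kbps=256)
calls = [TrafficContract(f"call-{i}", 64) for i in range(5)]  # 64 kbps voice calls
for call in calls:
    print(call.label, "accepted" if link.request(call) else "rejected")
# The fifth call is rejected: admitting it would delay or cut off the others.
```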
Radio and television


Digital television standards and their adoption worldwide.
In a broadcast system, a central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The antenna of the receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).
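A minimal numerical sketch of the modulation step described above (illustrative only; broadcast transmitters and receivers are far more sophisticated): a low-frequency audio tone is impressed on a high-frequency carrier by amplitude modulation, and a crude envelope detector at the receiver recovers it.

```python
# Toy amplitude-modulation example (illustrative only): a low-frequency
# "audio" tone modulates a high-frequency carrier, and an envelope
# detector at the receiver recovers the audio.
import math

CARRIER_HZ = 10_000
AUDIO_HZ = 100
SAMPLE_RATE = 200_000
DURATION_S = 0.02

def transmit(n: int) -> float:
    t = n / SAMPLE_RATE
    audio = math.sin(2 * math.pi * AUDIO_HZ * t)
    carrier = math.cos(2 * math.pi * CARRIER_HZ * t)
    return (1.0 + 0.5 * audio) * carrier      # amplitude modulation, 50% depth

samples = [transmit(n) for n in range(int(SAMPLE_RATE * DURATION_S))]

# Crude envelope detector: rectify, then average over one carrier period.
window = SAMPLE_RATE // CARRIER_HZ
rectified = [abs(s) for s in samples]
envelope = [
    sum(rectified[i:i + window]) / window
    for i in range(0, len(rectified) - window, window)
]
print("recovered envelope (first few values):", [round(e, 2) for e in envelope[:5]])
```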
The broadcast media industry is at a critical turning point in its development, with many countries moving from analogue to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they avoid a number of the problems found with traditional analogue broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analogue transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would still decode to the binary message 1011 — a perfect reproduction of what was sent. From this example, a problem with digital transmissions can also be seen in that if the noise is great enough it can significantly alter the decoded message. Using forward error correction a receiver can correct a handful of bit errors in the resulting message but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.
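The worked example in the paragraph above can be reproduced directly, together with a toy repetition code standing in for forward error correction (real broadcast systems use much stronger codes, such as convolutional or Reed-Solomon codes):

```python
# Reproducing the example above: received amplitudes are thresholded back to
# bits, and a toy repetition code stands in for forward error correction.
def threshold(amplitudes, cutoff=0.5):
    return [1 if a >= cutoff else 0 for a in amplitudes]

sent = [1, 0, 1, 1]
received = [0.9, 0.2, 1.1, 0.9]           # noisy but still decodes correctly
print(threshold(received) == sent)         # True: a perfect reproduction

# Toy FEC: send each bit three times and take a majority vote at the receiver.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(bits):
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

noisy = encode(sent)
noisy[4] ^= 1                              # one bit flipped by noise in transit
print(decode(noisy) == sent)               # True: the error is corrected

# But too much noise overwhelms the code and the message breaks down:
very_noisy = [b ^ 1 for b in encode(sent)]
print(decode(very_noisy) == sent)          # False
```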
In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2. The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analogue transmissions.
However, despite the pending switch to digital, analogue receivers still remain widespread. Analogue television is still transmitted in practically all countries. The United States had hoped to end analogue broadcasts on December 31, 2006; however, this was recently pushed back to February 17, 2009. For analogue television, there are three standards in use (see a map on adoption here). These are known as PAL, NTSC and SECAM. For analogue radio, the switch to digital is made more difficult by the fact that analogue receivers are a fraction of the cost of digital receivers. The choice of modulation for analogue radio is typically between amplitude modulation (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM.

The Internet


The OSI reference model
The Internet is a worldwide network of computers and computer networks that can communicate with each other using the Internet Protocol. Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address, allowing for two-way communication. In this way, the Internet can be seen as an exchange of messages between computers. An estimated 16.9% of the world population has access to the Internet with the highest access rates (measured as a percentage of the population) in North America (69.7%), Oceania/Australia (53.5%) and Europe (38.9%). In terms of broadband access, countries such as Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) lead the world. The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important is that it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model (pictured above), which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.
For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.
At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the world wide web, these “IP addresses” are derived from the human readable form using the Domain Name System (e.g. 72.14.207.99 is derived from www.google.com). At the moment, the most widely used version of the Internet Protocol is version four but a move to version six is imminent.
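The name-to-address derivation can be observed directly with a standard library call (a sketch that needs a working network connection; the address returned today will differ from the 2008-era 72.14.207.99 quoted above):

```python
# Resolving a human-readable name to an IP address via the Domain Name
# System, using the standard library. Requires network access; the address
# returned will vary over time and by location.
import socket

name = "www.google.com"
address = socket.gethostbyname(name)   # performs the DNS lookup
print(f"{name} resolves to {address}")
```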
At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can restrict Internet access by blocking the traffic destined for a particular port.
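The difference between the two transports, and the role of port numbers, is visible directly in the socket API. The sketch below uses Python's standard library; the loopback address and port numbers are arbitrary examples, and both ends run in a single process only to keep the example self-contained.

```python
# Sketch of TCP versus UDP with the standard socket API.
import socket
import threading
import time

HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 50007, 50008

def tcp_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, TCP_PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("TCP server received:", conn.recv(1024))

def udp_server():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, UDP_PORT))
        data, _ = srv.recvfrom(1024)
        print("UDP server received:", data)

threads = [threading.Thread(target=tcp_server), threading.Thread(target=udp_server)]
for t in threads:
    t.start()
time.sleep(0.2)   # give the servers a moment to bind their ports

# TCP: a connection is set up first; lost segments would be retransmitted
# and delivered in order before being handed to the application.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, TCP_PORT))
    c.sendall(b"reliable, ordered delivery")

# UDP: a datagram is simply addressed to a port; nothing is retransmitted if lost.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
    c.sendto(b"best-effort delivery", (HOST, UDP_PORT))

for t in threads:
    t.join()
```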
Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains completely confidential and one or the other is in use when a padlock appears at the bottom of your web browser. Finally, at the application layer, are many of the protocols Internet users would be familiar with such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and OSCAR (instant messaging).
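The session-layer protection described here can be seen with the standard library's ssl module (a sketch that requires network access; example.com is used purely as a well-known public host):

```python
# Wrapping a TCP connection in TLS with the standard library.
import socket
import ssl

context = ssl.create_default_context()    # verifies the server's certificate
with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print("negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```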
Local area networks
Despite the growth of the Internet, the characteristics of local area networks (computer networks that extend at most a few kilometres) remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them.
In the mid-1980s, several protocol suites emerged to fill the gap between the data link and applications layer of the OSI reference model. These were Appletalk, IPX and NetBIOS with the dominant protocol suite during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point but was typically only used by large government and research facilities. As the Internet grew in popularity and a larger percentage of traffic became Internet-related, local area networks gradually moved towards TCP/IP and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP that allowed TCP/IP clients to discover their own network address — a functionality that came standard with the AppleTalk/IPX/NetBIOS protocol suites.
It is at the data link layer though that most modern local area networks diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for local area networks. These protocols differ from the former in that they are simpler (e.g. they omit features such as Quality of Service guarantees) and handle contention for the shared medium directly (collision detection in Ethernet, token passing in Token Ring). Both of these differences allow for more economical set-ups.
Despite the modest popularity of Token Ring in the 80's and 90's, virtually all local area networks now use wired or wireless Ethernet. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used coaxial cables and some recent implementations (especially high-speed ones) use optic fibres. Optic fibres are also likely to feature prominently in the forthcoming 10-gigabit Ethernet implementations. Where optic fibre is used, the distinction must be made between multi-mode fibre and single-mode fibre. Multi-mode fibre can be thought of as optical fibre with a thicker core that is cheaper to manufacture but offers less usable bandwidth and suffers greater attenuation (i.e. poorer long-distance performance).

History of telecommunication

Early telecommunications
Early telecommunications included smoke signals and drums. Drums were used by natives in Africa, New Guinea and South America, and smoke signals in North America and China. Contrary to what one might think, these systems were often used to do more than merely announce the presence of a camp.
In 1792, a French engineer, Claude Chappe, built the first visual telegraphy (or semaphore) system between Lille and Paris. This was followed by a line from Strasbourg to Paris. In 1794, a Swedish engineer, Abraham Edelcrantz, built a quite different system from Stockholm to Drottningholm. Unlike Chappe's system, which involved pulleys rotating beams of wood, Edelcrantz's system relied only upon shutters and was therefore faster. However, semaphore as a communication system suffered from the need for skilled operators and expensive towers, often at intervals of only ten to thirty kilometres (six to nineteen miles). As a result, the last commercial line was abandoned in 1880.
Telegraph and telephone
The first commercial electrical telegraph was constructed in England by Sir Charles Wheatstone and Sir William Fothergill Cooke. It used the deflection of needles to represent messages and started operating over twenty-one kilometres (thirteen miles) of the Great Western Railway on 9 April 1839. Both Wheatstone and Cooke viewed their device as "an improvement to the [existing] electromagnetic telegraph" not as a new device.
On the other side of the Atlantic Ocean, Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. Soon after, he was joined by Alfred Vail, who developed the register — a telegraph terminal that integrated a logging device for recording messages to paper tape. This was demonstrated successfully over three miles (five kilometres) on 6 January 1838 and eventually over forty miles (sixty-four kilometres) between Washington, DC and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres).
The first successful transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. Earlier transatlantic cables installed in 1857 and 1858 only operated for a few days or weeks before they failed.
The conventional telephone was invented by Alexander Graham Bell in 1876. Antonio Meucci had invented a device in 1849 that allowed the electrical transmission of voice over a line, but Meucci's device depended upon the electrophonic effect and was of little practical value because it required users to place the receiver in their mouth to “hear” what was being said.
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. Bell held patents needed for such services in both countries. The technology grew quickly from this point, with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s. Despite this, transatlantic voice communication remained impossible for customers until January 7, 1927, when a connection was established using radio. However, no cable connection existed until TAT-1 was inaugurated on September 25, 1956, providing 36 telephone circuits.


Radio and television
In 1832, James Lindsay gave a classroom demonstration of wireless telegraphy to his students. By 1854 he was able to demonstrate a transmission across the Firth of Tay from Dundee to Woodhaven, a distance of two miles, using water as the transmission medium.
Addressing the Franklin Institute in 1893, Nikola Tesla described and demonstrated in detail the principles of wireless telegraphy. The apparatus that he used contained all the elements that were incorporated into radio systems before the development of the vacuum tube. However, it was not until 1900 that Reginald Fessenden was able to wirelessly transmit a human voice. In December 1901, Guglielmo Marconi established wireless communication between Britain and Newfoundland, Canada, earning him the Nobel Prize in Physics in 1909 (which he shared with Karl Braun).
On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges. In October 1925, Baird was successful in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as mechanical television. They formed the basis of semi-experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929.
However, for most of the twentieth century televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth, and crude silhouette images were demonstrated to his family on September 7, 1927. Farnsworth's device would compete with the concurrent work of Kalman Tihanyi and Vladimir Zworykin. Zworykin's camera, based on Tihanyi's Radioskop and later known as the Iconoscope, had the backing of the influential Radio Corporation of America (RCA). In the United States, court action between Farnsworth and RCA would resolve in Farnsworth's favour. John Logie Baird switched from mechanical television and became a pioneer of colour television using cathode-ray tubes.
Computer networks and the Internet
On September 11, 1940, George Stibitz was able to transmit problems using teletype to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe with remote dumb terminals remained popular throughout the 1950s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that would allow chunks of data to be sent to different computers without first passing through a centralized mainframe. A four-node network emerged on December 5, 1969 between the University of California, Los Angeles, the Stanford Research Institute, the University of Utah and the University of California, Santa Barbara. This network would become ARPANET, which by 1981 would consist of 213 nodes. In June 1973, the first non-US node, belonging to Norway's NORSAR project, was added to the network. This was shortly followed by a node in London. ARPANET's development centred around the Request for Comment process, and on April 7, 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the protocols the Internet relies upon today were specified through this process. In September 1981, RFC 791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol suite that much of the Internet relies upon today. The User Datagram Protocol (UDP), a more relaxed transport protocol that, unlike TCP, did not guarantee the orderly delivery of packets, was submitted on 28 August 1980 as RFC 768. An e-mail protocol, SMTP, was introduced in August 1982 by RFC 821, and HTTP/1.0, a protocol that would make the hyperlinked World Wide Web possible, was introduced in May 1996 by RFC 1945.
However, not all important developments were made through the Request for Comment process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s: a patent for the Token Ring protocol was filed by Olof Soderblom on October 29, 1974, and a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue of Communications of the ACM.

Telecommunication

Telecommunication is the assisted transmission of signals over a distance for the purpose of communication. In earlier times, this may have involved the use of smoke signals, drums, semaphore, flags, or heliograph. In modern times, telecommunication typically involves the use of electronic transmitters such as the telephone, television, radio or computer. Early inventors in the field of telecommunication include Alexander Graham Bell, Guglielmo Marconi and John Logie Baird. Telecommunication is an important part of the world economy, and the telecommunication industry's revenue is placed at just under 3 percent of the gross world product.


Basic elements
A telecommunication system consists of three basic elements:
• a transmitter that takes information and converts it to a signal;
• a transmission medium that carries the signal; and
• a receiver that receives the signal and converts it back into usable information.
For example, in a radio broadcast the broadcast tower is the transmitter, free space is the transmission medium and the radio is the receiver. Often telecommunication systems are two-way, and a single device acts as both a transmitter and receiver or transceiver. For example, a mobile phone is a transceiver. Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous receivers.
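As a toy illustration (not taken from the article), the three elements can be modelled as three small Python functions; the names and the text-to-code encoding are chosen purely for demonstration.

def transmitter(message):
    """Convert information (text) into a signal, here a list of character codes."""
    return [ord(ch) for ch in message]

def medium(signal):
    """Carry the signal; a real medium would attenuate it and add noise."""
    return list(signal)

def receiver(signal):
    """Convert the received signal back into usable information."""
    return "".join(chr(code) for code in signal)

print(receiver(medium(transmitter("hello"))))   # point-to-point: prints "hello"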
Analogue or digital
Signals can be either analogue or digital. In an analogue signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example ones and zeros). During transmission the information contained in analogue signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals will remain intact. This represents a key advantage of digital signals over analogue signals.
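The thresholding idea can be shown in a few lines of Python; the signal levels and noise range below are arbitrary illustrative values, not taken from any real system.

import random

bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Transmit each bit as a level of 0.0 or 1.0 and add mild noise in transit.
received = [bit + random.uniform(-0.3, 0.3) for bit in bits]

# The receiver regenerates the signal by deciding whether each sample is
# closer to 0 or to 1; the small perturbations are thresholded away.
decoded = [1 if sample > 0.5 else 0 for sample in received]

print(decoded == bits)   # True, as long as the noise stays below the threshold

An analogue receiver has no such threshold to fall back on, so whatever noise is picked up in transit stays in the reproduced signal.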
Networks
A collection of transmitters, receivers or transceivers that communicate with each other is known as a network. Digital networks may consist of one or more routers that route information to the correct user. An analogue network may consist of one or more switches that establish a connection between two or more users. For both types of network, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from noise.
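A router's role can be sketched in a few lines of Python; the destinations and port names here are invented solely for illustration.

# Map each destination to the outgoing port that leads towards it.
routing_table = {
    "alice": "port 1",
    "bob": "port 2",
}

def route(packet):
    """Forward the packet on the port recorded for its destination."""
    return routing_table[packet["destination"]]

print(route({"destination": "bob", "payload": "hello"}))   # port 2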
Channels
A channel is a division in a transmission medium so that it can be used to send multiple streams of information. For example, a radio station may broadcast at 96.1 MHz while another radio station may broadcast at 94.5 MHz. In this case, the medium has been divided by frequency and each channel has received a separate frequency to broadcast on. Alternatively, one could allocate each channel a recurring segment of time over which to broadcast — this is known as time-division multiplexing and is sometimes used in digital communication.
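Time-division multiplexing can be illustrated with a short Python sketch; the three "stations" and their data are made-up examples.

streams = {
    "station_A": list("AAAA"),
    "station_B": list("BBBB"),
    "station_C": list("CCCC"),
}

# Multiplexer: each frame carries one symbol from every stream in turn,
# so every stream owns a recurring time slot on the shared medium.
frames = zip(*streams.values())
medium = [symbol for frame in frames for symbol in frame]
print("on the wire:", "".join(medium))                           # ABCABCABCABC

# Demultiplexer: a receiver listens only during its own recurring slot.
slot = list(streams).index("station_B")
print("station_B hears:", "".join(medium[slot::len(streams)]))   # BBBB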
Modulation
The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is known as keying and several keying techniques exist (these include phase-shift keying, frequency-shift keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange information between devices. Modulation can also be used to transmit the information of analogue signals at higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. There are several different modulation schemes available to achieve this (two of the most basic being amplitude modulation and frequency modulation). An example of this process is a DJ's voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel “96 FM”).
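As a simplified sketch of one of the keying techniques named above, the Python snippet below generates a binary frequency-shift keyed waveform; the sample rate, frequencies and bit pattern are arbitrary choices for illustration, not values from any real system.

import math

SAMPLE_RATE = 8000            # samples per second
BIT_DURATION = 0.01           # seconds spent on each bit
FREQ_0, FREQ_1 = 1000, 2000   # carrier frequency (Hz) used for a 0 and a 1

def fsk_modulate(bits):
    """Represent each bit as a burst of sine wave at one of two frequencies."""
    samples = []
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    for bit in bits:
        freq = FREQ_1 if bit else FREQ_0
        for n in range(samples_per_bit):
            t = n / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

waveform = fsk_modulate([1, 0, 1, 1])
print(len(waveform), "samples representing 4 bits")   # 320 samples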
Society and telecommunication
Telecommunication is an important part of modern society. In 2006, estimates placed the telecommunication industry's revenue at $1.2 trillion or just under 3% of the gross world product (official exchange rate). On the microeconomic scale, companies have used telecommunication to help build global empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Wal-Mart has benefited from better telecommunication infrastructure compared to its competitors. In cities throughout the world, home owners use their telephones to organize many home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated villagers use cell phones to speak directly to wholesalers and arrange a better price for their goods. In Cote d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price. On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth. Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal. Due to the economic benefits of good telecommunication infrastructure, there is increasing worry about the digital divide. This is because the world's population does not have equal access to telecommunication systems. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly one-third of countries have less than 1 mobile subscription for every 20 people and one-third of countries have less than 1 fixed line subscription for every 20 people. In terms of Internet access, roughly half of all countries have less than 1 in 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies. Using this measure, countries such as Sweden, Denmark and Iceland received the highest ranking while African countries such as Niger, Burkina Faso and Mali received the lowest.