As mobile data usage escalates, so does the demand for capacity and coverage, particularly as consumer adoption of smartphones, wearables and other mobile-connected devices grows in parallel with 3G and 4G deployments.
With ever-increasing mobile data usage, existing networks are being asked to handle far more traffic than they were designed for, and operators face a level of capacity demand greater than they could ever have imagined.
Mobile data usage explosion
As next-generation networks such as 4G, LTE and LTE-Advanced continue to supersede 3G, with the promise of 100 Mb/s data transmission to the handset, operators worldwide are considering infrastructure developments to ensure their networks will have the capacity to deliver the massive increases in bandwidth demanded, and to cope with changing geographical patterns of data use.
Cisco’s VNI Mobile Data research found that global mobile data traffic will increase nearly 11-fold between 2013 and 2018, growing at a compound annual growth rate (CAGR) of 61 per cent to reach 15.9 exabytes per month by 2018. Whilst operators are keen to realise the content and delivery revenues associated with mobile data growth, they also recognise the challenge of developing networks that can accommodate future consumer demands.
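As a quick sanity check, the two Cisco figures are consistent with each other. The sketch below assumes a five-year compounding window (2013 to 2018); the implied 2013 starting figure is derived, not quoted in the article:

```python
# Sanity-check the quoted Cisco VNI figures.
# Assumption: 61% CAGR compounds over the 5 years from 2013 to 2018.
cagr = 0.61
years = 5

growth = (1 + cagr) ** years
print(f"Overall growth factor: {growth:.1f}x")  # ~10.8x, i.e. "nearly 11-fold"

# Work backwards from the 2018 forecast to the implied 2013 baseline.
traffic_2018_eb = 15.9
traffic_2013_eb = traffic_2018_eb / growth
print(f"Implied 2013 traffic: {traffic_2013_eb:.1f} EB/month")
```

The implied 2013 baseline works out to roughly 1.5 exabytes per month, so the "nearly 11-fold" and "15.9 exabytes" claims hang together.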
Turning 3G into 4G
The wireless industry has worked persistently to get 3G standards, equipment and networks deployed and functioning, but high-bandwidth applications like HD video streaming are generally beyond what those standards, equipment and networks were originally designed to handle.
3G networks are based on macrocells (fairly large antenna base stations). In much of the world, macrocells have been upgraded with fibre optic network connections to handle the bandwidth that 4G networks demand. However, fully achieving 4G speeds requires more than new, faster radios in the 3G network.
A 3G macrocell site could handle a maximum bandwidth (limited by the combination of the speed and spectrum of the radios and the speed of the backhaul connection to the network) of possibly 100 Mb/s with a fibre network connection, and cover an area of perhaps 15-30 square kilometres. This means that all the wireless devices within those 15-30 square kilometres share that 100 Mb/s.
In a downtown city core, that capacity is depleted very quickly, so everyone gets very little bandwidth. If one person in this area is using a device that needs all of that 100 Mb/s, then it is not available to anyone else, meaning that the 3G network architecture will simply not work for high-bandwidth applications.
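The sharing problem above is easy to make concrete. A minimal sketch, using the article's 100 Mb/s cell capacity and assumed (illustrative) active-user counts, with the simplification that capacity divides evenly:

```python
# Illustrative macrocell bandwidth sharing: one cell's capacity is split
# among all active users. User counts here are assumptions for illustration.
cell_capacity_mbps = 100

for active_users in (10, 100, 1000):
    per_user = cell_capacity_mbps / active_users
    print(f"{active_users:>5} active users -> {per_user:g} Mb/s each")
```

At downtown densities of hundreds or thousands of active users, per-user bandwidth collapses well below what HD video streaming needs.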
To fully achieve 4G speeds would generally require 4-10 times the number of 3G macrocell sites. Given that each of these sites can cost anywhere from US $50,000 to US $250,000, depending on the distance that the fibre has to be laid to reach it, this is a very costly option and one that is hard to justify commercially and financially.
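The scale of that cost is worth spelling out. A back-of-envelope sketch, using the article's 4-10x densification factor and US$50,000-$250,000 per-site cost; the 100-site starting deployment is a hypothetical figure for illustration:

```python
# Rough cost range for macrocell densification.
# Assumption: a hypothetical metro area already served by 100 macrocells.
existing_sites = 100
densification = (4, 10)              # 4-10x total site count, from the text
cost_per_site = (50_000, 250_000)    # US$, depending on fibre run length

# New sites needed = (factor - 1) x existing, since existing sites remain.
low = existing_sites * (densification[0] - 1) * cost_per_site[0]
high = existing_sites * (densification[1] - 1) * cost_per_site[1]
print(f"Extra build-out cost: US${low:,} to US${high:,}")
```

Even this modest hypothetical deployment lands somewhere between US$15 million and US$225 million, which is why operators look for a cheaper path.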
The alternative approach is to deploy many smaller sites that are less expensive to install, have a smaller physical footprint and transmit at much lower power, so that each covers a much smaller area. These are known as microcells, femtocells and picocells, or just small cells for short.
The idea is that these small cells can be put almost anywhere to offload the macrocell sites. In the future it is likely that wireless networks will be almost entirely based on small cells, with macrocells being used to ‘fill the gaps’ either between the small cells or in suburban and rural areas where the population density is lower. These small cells would be designed to handle perhaps 20-50 simultaneous users with a backhaul connection speed of 10-100 Mb/s.
So what are the options for small cell backhaul?
As all communication with handheld devices goes via the network, the higher the bandwidth available to a small cell, the more simultaneous devices that small cell can support, and the fewer devices that must fall back on the macrocell network.
Providing backhaul to individual cells is an increasing challenge for mobile operators due to the rising demand for bandwidth. Backhaul was already a demanding issue in a 3G world, and the rise in 4G deployments only serves to amplify the challenge.
Much has been said about the virtues of fibre; however, there is a growing realisation in the industry that copper still has a major role to play in releasing the backhaul bottleneck. After all, copper has been used in telecom infrastructure for many years. Because fewer people use each small cell, the installation has to be economical: the equipment must be low cost and access to electrical power affordable.
While copper in its basic form is unlikely to be practical or commercially sustainable in meeting backhaul demands, two techniques change the picture. The first is the same technology that provides household internet access over telephone wires; the latest version, VDSL2, can deliver up to 100 Mb/s over a single copper wire pair over short distances. The second is copper bonding, which takes multiple pairs and combines their bandwidth so that a single large pipe can be realised. Together, these make it possible to power small cells and provide them with significant amounts of backhaul bandwidth. Technologies like these are evolving, giving operators an affordable option to upgrade their networks to meet growing customer demands.
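The bonding arithmetic is straightforward. A minimal sketch, assuming an illustrative per-pair rate (real VDSL2 rates fall off with loop length, so 60 Mb/s per pair is an assumption, not a measured figure):

```python
# Back-of-envelope copper bonding: N pairs combined into one logical link.
# The per-pair rate below is an assumed mid-range figure; VDSL2 tops out
# around 100 Mb/s per pair only over very short loops.
pair_rate_mbps = 60
bonded_pairs = 4

aggregate_mbps = pair_rate_mbps * bonded_pairs
print(f"{bonded_pairs} bonded pairs at {pair_rate_mbps} Mb/s -> "
      f"{aggregate_mbps} Mb/s of backhaul")
```

Four bonded pairs at that rate would comfortably exceed the 10-100 Mb/s backhaul target for a small cell mentioned earlier.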
The technological landscape is constantly changing and developing, but it is important for operators not to lose sight of what is already in place. Small cell backhaul solutions must be low cost and suited to an environment where most cell locations are not in line-of-sight with each other. There is unlikely to be a ‘one size fits all’ answer to the small cell backhaul challenge; operators will need an array of different technologies to deploy in different cases, but the case for bonded copper is certainly very compelling.
The small cell consumes a relatively fixed amount of power to provide coverage over a given physical area. This means the multiplexing equipment has to be flexible enough to terminate a number of DSL lines from the network, drive those lines with data from both the households and the small cell, and do all the necessary processing of that data, including applying a quality of service (QoS) scheme.
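The article does not specify the QoS scheme, so the following is only one plausible sketch: a strict-priority queue in which small-cell backhaul traffic is always served before household DSL traffic on the shared uplink. The queue names and traffic classes are illustrative assumptions:

```python
# Hypothetical strict-priority QoS for a shared uplink: small-cell backhaul
# traffic is drained before household traffic. This is an illustrative
# scheme, not the one any particular vendor implements.
from collections import deque

uplink_queues = {"small_cell": deque(), "household": deque()}

def enqueue(traffic_class, packet):
    uplink_queues[traffic_class].append(packet)

def dequeue():
    # Serve classes in fixed priority order; return None if all queues empty.
    for traffic_class in ("small_cell", "household"):
        if uplink_queues[traffic_class]:
            return uplink_queues[traffic_class].popleft()
    return None

enqueue("household", "h1")
enqueue("small_cell", "s1")
print(dequeue())  # "s1": the small-cell packet is served first
```

In practice operators would likely use weighted schemes rather than strict priority, so household traffic cannot be starved entirely; strict priority is shown here only because it is the simplest to follow.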
To achieve this, the small cell-based ‘nodes’ need to be very power efficient, yet capable of delivering significant bandwidth gains (typically at least 10x what is available today) to the wired households served by that small cell. The best approach is a rings architecture, with at least two pairs running from the small cell to each house. This is generally the case throughout much of the developed world, and it gives small cell deployments a platform offering significant bandwidth, multiple powering options and proximity to a huge percentage of telco customers.
Small but mighty
By using technologies like VDSL2 vectoring and copper bonding on existing infrastructure, small cells are able to provide the capacity and coverage demanded by today’s media-rich environment. This is an exciting step forward, demonstrating the benefits of copper that make it ‘worth its weight in gold’ to operators. These technologies attempt to move the point at which the network is shared closer to the user, so that every user benefits from being able to use any available bandwidth.