Internet Access in New Zealand

Introduction

The following is a brain-dump on the history, technology and issues associated with Internet access in NZ.

I’d like to start with the history, as there are some instructive lessons on why we do things the way we do and things we’d like not to do again. The next parts will look at the problem spaces and approaches being taken, or that could be taken, to address these. Finally, I’d like to offer some suggestions as to where we might go from here.

Inevitably, there will be some departures from fact into opinion in this piece. Also, I apologise for its length, but I believe there is a lot to cover. I’ve avoided talking about national or international infrastructure, concentrating on last mile and end-user access issues.

My aims in producing this document are:

  • to provide a background on how the Internet access industry works in NZ from a last-mile access perspective, including how we got to where we are, what we learned along the way, and the benefits and flaws of the NZ approach;
  • to look at how the access space might change in the future;
  • to advocate for changes that would benefit those who risk being left out by the rush to an on-line world; and
  • to address a few elephants in the room.

Digital equity

Firstly, we need to be clear about why we’re looking at this. Internet access isn’t a toy any more; it has become a necessity to function fully in modern society. That includes:

  • much social life, through social media, email, messaging and so forth;
  • information access, including news, Government communications, access to official information and civic discourse;
  • access to official and commercial services, as public services and businesses alike move to online sales and support over traditional telephone and in-person approaches; and
  • education, where educational institutions require (and only sometimes provide) access to online resources.

There’s a common argument that since the Internet is used largely for entertainment, that use should not be socialised, i.e. the cost placed on the rest of society. The problem with that view is that the Internet does not know or care what it is used for; a YouTube video might be frivolous, but it might be essential information that someone needs to do a job safely or to get essential care or assistance. The same applies across social media, web sites, and applications.

People without access to Internet connectivity risk being excluded from social circles, civic discourse, official services, and even democratic processes.

As a country we have made huge strides toward getting Internet access to everyone, but there are still things to do. We are past the point of no return in requiring that everybody be able to get on-line with fast and reliable connectivity, and no part of completing that project can be written off as too hard or too expensive.

The easy things are done, and now we need to look for imaginative solutions to fix the hard ones. We have a history of doing just that…

A brief history of broadband access in NZ

To put where we are in the access space into context, I will start with a quick history of how we got to where we are.

Telecom monopoly era

The first Internet connectivity in NZ was built on physical circuits provided by Telecom, now Spark, which in turn had been split from the NZ Post Office in the public sector reforms of the 1980s. Prior to 1 April 1989, Telecom was a regulated monopoly; it was effectively illegal to operate a network that did not use Telecom facilities.

The first international Internet connectivity to NZ was connected at Waikato University about two weeks later.

Telecom continued to own the majority of telecommunications infrastructure; overbuilds by other companies were extremely limited. When ADSL services started being deployed in the late ’90s, only Telecom could do so, since ADSL requires access to the local loop – the physical copper wire pairs connecting exchange equipment locations to homes and business premises. Telecom was pressured to provide wholesale access, which it did on a fairly limited basis; as a vertically integrated provider, Telecom’s approach to wholesale was to define the products it wanted to sell, and then offer the same levels of service to wholesale customers, leaving little room for innovation.

The dismantling of this monopoly came in two stages. First, from 2006, the copper local loop was unbundled and Telecom was operationally separated, with a new business unit, Chorus, formed to own and maintain those assets. This allowed third-party providers to install their own DSL equipment in exchanges and connect it to customers, bypassing Telecom’s own broadband services completely.

Local loop unbundling had very limited impact, for several reasons: the cost of installing DSL equipment in large numbers of exchanges; the need to obtain trunk links to those sites (which Chorus could not provide); and the cabinetisation of the network, where equipment was moved into road-side cabinets, bringing the active components closer to the customer’s equipment – which improved performance and reach, but vastly complicated access to the copper local loop for third-party providers. By the late 2000s, NZ was lagging badly in broadband quality, and it was painfully obvious that the primary reason for this was Telecom’s near-monopoly control of local access.

An early indicator of the potential power of breaking the Telecom local loop monopoly came when Telecom dropped prices to customers in areas where Saturn/Telstra access infrastructure (see “Vodafone cable” below) had been built in competition with Telecom’s local loop.

Open access networks

Open access was the next step, and to characterise what that means, we can look at the history of open access networks in NZ.

In 1995, CityLink was established, and it started providing service in 1996. CityLink was an open access layer 2 network, in that it did just the access layer, shipping packets from endpoint to endpoint without caring too much whether they were IP or where they came from (that’s the “layer 2” part). In particular, CityLink did not provide its own services over its infrastructure. It was up to its customers – mostly Internet service providers – to offer higher-level services, such as Internet access. The other unusual – for the time – characteristic of CityLink was that it was mostly fibre-optic, and used Ethernet switching instead of traditional telecommunications technology.

In 1996, CityLink’s network was primitive; in early incarnations, all connections shared one big common network, with no protection, isolation or traffic management between connections. That improved somewhat over time, but the access model used by CityLink differs somewhat from that used by modern open access networks.

In the following years, several other efforts at such networks were tried, although these had relatively poor take-up, mainly due to cost. Around 2008-2009, Christchurch City Network Ltd – wholly owned by the Christchurch City Council and re-branded in 2009 as Enable Networks – began building an open-access layer 2 Ethernet-based network in the Christchurch area, after finding it difficult to sell a purely dark-fibre network. Unlike the nominally peer-to-peer CityLink model, the Enable model was built from the ground up as a strict hierarchy of service providers connecting customers, with distinct logical circuits from each service provider to each customer; the use of multi-port switches at customer sites meant that a single fibre could carry multiple circuits from multiple service providers to a single customer location. The network design was also ambitious, envisaging eventually connecting residential as well as business customers.

The key difference between open access and wholesale access is that in an open access model, the access provider never treats the end user as its own customer, or more importantly, as its potential customer. Its customers are the upstream providers, and they are the focus of its business. Wholesale access, on the other hand, is where a network operator offers a “full stack”, with an access component that can be bought by wholesale customers. The main issue with this is that in such deployments, the access part of the stack tends to be configured and priced according to the operator’s retail needs, not the wholesale customers’ needs; the latter are treated as a minority interest.

UltraFast Broadband

By the mid-2000s, the failure of the market to get past Telecom’s monopoly and build world-class access networks had reached the political level; unbundling had not been a success, and clearly more needed to be done. In the 2008 general election, both Labour and National proposed significant investments in broadband access as flagship policies, and the incoming National Government got to work on setting up its version, the UltraFast Broadband Initiative, or UFB.

Technically, UFB built heavily on the model developed at Enable, and awarded early deployments to local fibre companies (LFCs): Enable in Christchurch, Northpower in Northland, and UltraFast Fibre in Hamilton and the central North Island. These “shots across the bows” brought Telecom to the table, offering to truly structurally separate its low-level services. Chorus was re-formed as a separate company – Telecom shareholders also became Chorus shareholders – to operate all access networks formerly owned by Telecom: both the new fibre deployments under UFB and the copper network assets, including DSL equipment, which also became fully available to service providers. Telecom retained its customers, but would have to buy access-layer services from Chorus. As Chorus and the other LFCs have only service providers as their customers, their focus is on servicing those providers.

Other access networks

I have concentrated in this section on the evolution of UFB and Chorus ADSL/VDSL, as these connections represent how the vast majority of NZers connect to the Internet. However, there are a number of minority players, including:

  • Regional fibre: Several companies continue to operate fibre access networks independent of Chorus; these include Vital in Wellington (formerly CityLink), Unison Networks in Hawke’s Bay, Network Tasman Fibre (NTF) in the Nelson/Tasman and Marlborough areas, Electricity Ashburton and others. Most of these follow the open access model; some, such as NTF, offer UFB-like service to green-fields and semi-rural developments. Several of these networks were built under pre-UFB broadband initiatives, and with an eye to potentially being chosen as LFCs for the UFB build.
  • Vodafone cable: The Kiwi Cable Company started providing cable TV in the Kapiti area in the early 1990s. In the late ’90s, this operation became Saturn, and cable deployments were expanded into Wellington and parts of Christchurch, and cable modem access was offered, initially at low speeds and fairly high prices. In 1998/99, Paradise Net did a deal with Saturn to start offering domestic Internet service over the cable at speeds and prices significantly better than alternatives. Saturn, Paradise Net, Telstra and Clear Communications went through a fairly rapid sequence of mergers and acquisitions in the 1999-2001 time-frame to form TelstraClear Ltd (TCL) by late 2001, after which further development of the cable network largely stopped. Vodafone acquired TCL in 2012, and continues to offer cable service at up to near-gigabit speeds. TCL did not offer viable wholesale access to the cable network.
  • Wireless ISPs: For small-scale deployments, wireless access is cheap and easy, and a number of providers have used low-cost wireless equipment in both licensed and unlicensed radio spectrum to provide access in areas otherwise poorly (or not at all) served by cabled infrastructure. Wireless services were historically deployed in urban areas first, as far back as 1996; as cabled infrastructure has improved, wireless has found its niche in the rural access sector. While there have been some efforts in open access wireless (notably Araneo in the mid-2000s), wireless ISPs are largely full-service ISPs, and don’t offer open-access or wholesale services.
  • Mobile broadband: Dial-up access via mobile providers became practical in the mid-late 1990s, but cost and poor performance made these connections strictly a last resort. More recently, 3G and 4G services are available through all the main mobile providers, and access using fixed antennas provides adequate performance in many remote areas. Like wireless ISPs, these are largely vertically integrated; while the Rural Broadband Initiative (RBI) provides for some wholesale access, it is under restrictive conditions, not true open access.
  • Satellite: Satellite has been the access of last resort for some time, with providers like FarmSide offering service from geostationary satellites. Low-orbit satellite services, notably Starlink, promise to offer much better service, with higher bandwidths and lower latency.

Funding sources

Finally, it is worth having a quick look at how broadband access in NZ is funded.

  • Commercial contract: obviously, the majority of the money flow – covering operational expenditure, much capital expenditure, and return on investment – comes from customers paying their bills; for open-access networks, this is billed through service providers.
  • Crown Infrastructure Partners (CIP): formerly known as Crown Fibre Holdings (CFH), CIP controls UFB and RBI funding. The UFB model provides seed funding to get fibre infrastructure underway, with the money being repaid and recycled into new builds as completed builds enter revenue service.
  • Telecommunications Development Levy (TDL): The TDL raises approximately $10 million per annum (down from $50 million p.a. to 2019) from network operators (from over $4 billion p.a. in revenue). TDL revenue is administered by CIP for telecommunications programmes, including UFB and RBI.

The capital-heavy nature of telecommunications access infrastructure means that even small projects may be stillborn without externally sourced seed funding; the largest example of external seed funding for infrastructure is UFB, but even small projects often need seed funding to get over the initial hurdle. Other examples of essentially public money being made available to access developments include CityLink (Wellington City Council) and Enable (Christchurch City Council, through Christchurch City Holdings Ltd). Smaller-scale developments have been funded by local bodies, and even InternetNZ.

Broadband access today

With the major deployment of UFB by Chorus and the LFCs complete, most NZ households have access to world-class fibre infrastructure. The environment has gone from one where bandwidth was constrained and expensive to one where, at least at the access layer, bandwidth is plentiful and cheap; the major costs are largely in getting fibre to the customer in the first place.

However there are areas where access is problematic, notably:

  • Low-income households;
  • Rural access;
  • Highly mobile users;
  • Access devices;
  • Accessibility;

and inevitably, there are users for whom several of the above apply. So let’s take a look at these.

Low-cost access

The open access model has unquestionably altered the access landscape for the better. However, open access has one serious flaw: in offering providers and innovators the level playing field it promises, the access component sets a floor on the price at which a service can be offered. For example, the minimum charge for residential GPON access from Chorus is $41.50+GST per month for 30 Mbps down, with 100 Mbps at $51+GST per month. Even if Internet service were offered free, that cost may still be enough to be a barrier to uptake. Most UFB services retail for $80-$100 per month, slightly more for gigabit access. At $80/month including GST, that leaves less than $30/month to provide the “Internet” part of “Internet access”. Even copper access is not a lot cheaper.
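
As a rough illustration of that floor, here is a minimal sketch of the arithmetic. The Chorus access charges are those quoted above; the 15% GST rate and the $80 retail price are taken as given, and everything else is simplified for illustration.

```python
# Minimal sketch: how much of a retail broadband price is left after the
# open-access charge, i.e. what remains to cover the "Internet" part
# (backhaul, transit, support, billing, margin).

GST = 0.15  # NZ GST rate

def isp_margin(retail_incl_gst: float, access_excl_gst: float) -> float:
    """Monthly amount (excl. GST) left to the ISP after paying the access provider."""
    retail_excl_gst = retail_incl_gst / (1 + GST)
    return retail_excl_gst - access_excl_gst

# Chorus residential access charges from the text: $41.50 (30 Mbps), $51 (100 Mbps).
for access in (41.50, 51.00):
    left = isp_margin(80.00, access)  # assuming an $80/month retail price incl. GST
    print(f"access ${access:.2f}/month -> ${left:.2f}/month left for the ISP")
# Both come out under $30/month, as noted above.
```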

For occasional access, mobile broadband is often cost-effective for low-bandwidth use, since the installation cost is mostly the initial cost of the mobile device; the monthly cost can be very low, even though the price per volume downloaded is high and the risk of using up the data allowance – and becoming effectively disconnected as a result – is similarly high. Sadly, such services are often the only viable option given the high monthly cost of cabled service. Being restricted to such access isolates users from higher-bandwidth services. Bottom-end plans cost in the vicinity of $10 per gigabyte.

The cost of bandwidth is not the problem with cabled access. Domestic gigabit fibre services with unlimited volume usage cost ~$120/month, which makes mobile data costs look positively rapacious. The issue comes down to the cost of having such access at all.
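
To put that comparison in concrete terms, here is a small sketch using the prices quoted above; the monthly usage figures are assumptions chosen purely for illustration.

```python
# Effective cost per gigabyte: unlimited fibre vs bottom-end mobile data.
MOBILE_PER_GB = 10.0     # ~$10/GB for bottom-end mobile plans (from the text)
FIBRE_UNLIMITED = 120.0  # ~$120/month for unlimited gigabit fibre (from the text)

for usage_gb in (10, 100, 300):  # assumed monthly usage levels, for illustration
    fibre_per_gb = FIBRE_UNLIMITED / usage_gb
    print(f"{usage_gb:>3} GB/month: fibre ~${fibre_per_gb:.2f}/GB vs mobile ${MOBILE_PER_GB:.2f}/GB")
# At very low usage mobile still wins, which is why it suits occasional access;
# at ordinary household volumes fibre is an order of magnitude cheaper per GB.
```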

An obvious answer to this, especially where there are concentrations of people in this situation, such as in affordable housing developments or social housing, is to bring single connections into blocks of homes and share them via local cabled or wireless infrastructure. Clearly, this approach is limited in scope, but if there were mandates and/or incentives for affordable and social housing to include Internet access, solutions could be found.

A more general solution is to socialise access for low-income households, simply subsidising UFB or similar access where there are no other options, or aggregating access as described above to reduce costs.

Another approach is to modify the open access model to be more nuanced where affordable access is required. Since access providers are commercial entities, that may amount to a subsidy, although more imaginative solutions than simple funding might be possible. After all, providing a low-cost connection may be better financially for the access provider than providing no connection at all over infrastructure that has already been deployed.

Before leaving this topic, I would caution against allowing “monopoly” open-access providers to expand “up the stack” (e.g. to provide Internet access on their own infrastructure), even in the most limited manner. If allowed to do this, an essentially monopoly provider would be building the capability to place itself in a potentially anti-competitive position; the situation prior to UFB should be a stark warning about the dangers of this. I believe the industry is well served by organisations capable of doing the upper layers of low-cost deployments, in partnership with social services, housing and access providers, without building that capability in the access providers themselves.

Rural access

Cabled access and rural fibre

Rural service is a hard problem. In network deployments, we count costs in terms of cost per premises passed (CPPP) – that is, the cost of deploying a network over a given route, divided by the number of potential connections along that route. Urban deployments, where road frontages are typically in the vicinity of 10-30 metres, are obviously a lot cheaper in CPPP terms than rural areas, where the distances between properties are in the order of hundreds or thousands of metres. That said, rural fibre is much cheaper to build than urban fibre when measured in cost per km.
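
A minimal worked example of the CPPP arithmetic; the per-kilometre build costs and frontage figures here are invented round numbers for illustration only, not real quotes.

```python
# Cost per premises passed (CPPP): total build cost over a route divided by
# the number of potential connections along that route.

def cost_per_premises_passed(route_km: float, cost_per_km: float, premises: int) -> float:
    return route_km * cost_per_km / premises

# Urban street: 1 km of build at an assumed $150,000/km, ~20 m frontages -> ~50 premises.
print(round(cost_per_premises_passed(1, 150_000, 50)))   # ~$3,000 per premises passed

# Rural road: 10 km at an assumed $40,000/km (cheaper per km, as noted above),
# but only one property every couple of kilometres -> 5 premises.
print(round(cost_per_premises_passed(10, 40_000, 5)))    # ~$80,000 per premises passed
```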

Copper-based VDSL and ADSL services are used in rural settings, but their relatively short ranges (500 m from an exchange or powered cabinet for VDSL, 2-3 km for ADSL at more than 10 Mbps) limit their usability. ADSL achieves its extra range by sacrificing upstream bandwidth for downstream; in a world that increasingly requires high-speed upstream capability as well as high downstream speeds, e.g. for video conferencing, ADSL can no longer be seen as an appropriate solution for broadband access.

Very little large-scale work has been done on rural fibre, although with the prevalence of lifestyle and semi-rural residential blocks, there is a lot more rural fibre deployed than one might imagine. Fibre will eventually be built everywhere that can be reached with copper, in many cases simply by using the same routes and poles, since fibre can operate over longer distances than even copper voice-only analogue circuits, with GPON running at over 20 km with enhanced optics and point-to-point fibre able to operate at over 100 km distances.

Clearly, pulling national-scale rural fibre out of the too-hard basket is something the country needs to address over the coming years. In the mean time, rural access remains largely the province of wireless ISPs (WISPs), mobile broadband, and satellite operators.

Since cabled (fibre) broadband access will largely require using the same infrastructure – routes, poles, etc. – as existing power, transport, and telephone networks, there does need to be regulatory support for this; a fibre provider needs to be able to use such infrastructure. (Much of this is covered under the existing Telecommunications Act network operator provisions, but this can always be improved, and the use of private assets such as poles may need review.)

I would also note that this may be an area for regional electricity lines companies to become involved in if funding is available; some already operate extensive fibre networks.

Wireless ISPs and mobile broadband

WISPs primarily use unlicensed spectrum, as the 5 GHz and newer bands are less crowded in rural areas than they are in urban ones. Equipment in these bands can operate at up to gigabit speeds on point-to-point links over many kilometres, and multi-point access points can operate at similar speeds aggregated over multiple customer accesses. The equipment is fairly cheap; the difficulties are mainly around finding, building and powering suitable access sites, especially in hilly country (which NZ has a lot of).

Wireless needs line-of-sight between antennas, which means tree-free high spots that can “see” into as many customer sites as possible. Such sites rarely have power (and even if they do, rural power suffers many more outages than urban due to much longer transmission lines), meaning the sites need large batteries and solar panels (and sometimes generators) to stay powered in adverse conditions that include short winter days, extended periods of overcast or wet weather, low temperatures (which affect battery performance) or even snow and ice build-up on solar panels and antennas. Batteries also need periodic maintenance, so sites need to be visited regularly; this often means trips on 4WD routes through difficult country, and sometimes the use of helicopters for access. Particularly lumpy country can mean locating customer access away from homes, requiring power and data cabling out to the antenna location.
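
To give a feel for the scale of the problem, here is a rough, back-of-the-envelope power budget for such an off-grid site. Every figure in it (load, days of autonomy, usable battery fraction, winter sun hours, derating) is an assumption for illustration only.

```python
# Back-of-the-envelope sizing for an off-grid wireless relay site.

LOAD_W = 40             # assumed continuous draw: radios, switch, conversion losses
AUTONOMY_DAYS = 4       # assumed days of bad weather to ride through on batteries alone
USABLE_FRACTION = 0.5   # assumed usable depth-of-discharge of the battery bank
WINTER_SUN_HOURS = 2.0  # assumed equivalent full-sun hours on a short winter day
PANEL_DERATE = 0.7      # assumed losses: cloud, panel angle, charging inefficiency

daily_wh = LOAD_W * 24                                    # energy used per day
battery_wh = daily_wh * AUTONOMY_DAYS / USABLE_FRACTION   # bank size for the autonomy target
panel_w = daily_wh / (WINTER_SUN_HOURS * PANEL_DERATE)    # array size to recharge in winter

print(f"Daily consumption: {daily_wh:.0f} Wh")            # 960 Wh
print(f"Battery bank needed: {battery_wh / 1000:.1f} kWh")  # ~7.7 kWh
print(f"Solar array needed: {panel_w:.0f} W")             # ~686 W of panels
```

Even a modest 40 W load translates into a substantial battery bank and solar array once winter conditions and a few days of autonomy are allowed for, which is why site builds and maintenance visits dominate WISP costs.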

Mobile broadband faces the same physical constraints as WISPs; the differences are in the equipment used and in the kinds of organisations involved: mobile operators tend to be telcos – Spark, Vodafone, 2degrees – while WISPs tend to be much smaller organisations. Technologically, there is increasing convergence in the types of equipment used by WISPs and mobile operators; the basic requirements in terms of bandwidth, range, and power consumption are, after all, very similar. Mobile operators use largely licensed spectrum, while WISPs make heavy use of unlicensed spectrum, but that too is less true now than it once was. (This is part of an overall trend in all areas of telecommunications, away from infrastructure designed primarily for voice service to one where “everything is data”, and voice is just another application.)

As organisations, mobile operators have access to RBI and TDL funding; while there is some RBI funding for WISPs, these organisations aren’t as well set up to obtain and use such funding, and their scale means it tends to go to piecemeal deployments.

Mobile broadband is also used in fixed configurations: placing a “mobile” radio at a fixed location with line-of-sight (or near line-of-sight) to a mobile tower, and using a high-gain directional antenna, can provide significantly more bandwidth and range than typical mobile data. This is the basis of RBI fixed access, which mobile operators wholesale under regulation. This is rather like the wholesale vs open access dichotomy discussed above; the wholesale services have limits on volume, something that became an issue early in the pandemic, as many normally casual RBI users found themselves working from home, exceeding their data caps, and having no backup option. I believe the RBI regulated wholesale model is not fit for purpose and needs review.

There are two basic barriers to wireless access of any kind. The first is the cost of building the infrastructure; each connection is essentially a new radio engineering job – at a minimum, finding a suitable site on the property to place the antenna, and at the other end of the scale there may be no suitable location at all without building new relay sites – which makes cost and availability highly variable.

The second is the continuing use of data caps on wireless broadband services. This has two downstream consequences: first, the data caps mean there is a risk of service becoming unavailable, unusable or expensive; second, it is difficult to use such services to serve high-usage sites such as public access wifi in community spaces, marae and so on. Work needs to be done on traffic management measures to ensure that users maintain an adequate level of service regardless of usage.

Satellite

Finally, there is satellite. Until recently, satellite options typically involved fixed stations connecting to geosynchronous satellites; this works, but has relatively poor bandwidth and awful latency, simply because the distance out to geostationary orbit (GEO) and back means a round-trip time of over half a second, which seriously affects responsiveness. GEO satellite access is very much an access of last resort.

Low-earth orbit (LEO) broadband is becoming a viable option, notably with the introduction of Starlink service, costing $159/month with an initial, “do-it-yourself” set-up cost of $913 for the antenna and router (including shipping). This is not cheap, but it provides around 150 Mbps down (and 30 Mbps up) with no data caps, in areas where there are few if any better options. Since the altitude of the satellites is around 500 km rather than 37,000+ km, the latency is vastly less than for GEO satellite service, and more akin to terrestrial service. The pricing, while reasonable, is currently at a level that discourages use by those with other alternatives available, making the service complementary to terrestrial services. Starlink appears to have massive potential for use in disaster recovery, both in pre-planned DR configurations and in emergency response.
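
A quick sanity check on those latency figures, counting propagation delay alone over four traversals of the space segment (user to satellite to ground station, and back); the altitudes are nominal, and real slant paths, processing and queueing all add to these minimums.

```python
# Minimum round-trip time from orbital altitude alone (propagation delay only).

C_KM_PER_S = 299_792  # speed of light in vacuum

def min_rtt_ms(altitude_km: float) -> float:
    """Four traversals of the space segment: up and down in each direction."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (~35,786 km altitude): {min_rtt_ms(35_786):.0f} ms")  # ~477 ms before any processing
print(f"LEO (~500 km altitude):    {min_rtt_ms(500):.1f} ms")     # ~6.7 ms; terrestrial-like
```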

While promising, Starlink is very new and there are some as-yet unresolved questions. The terms and conditions have many “outs” around potential pricing changes and service restrictions; in particular, the terms of service state that the service is only for “personal, family, household or residential use”, leaving uncertainty over what is outside that use. After all, farms are businesses. In addition, there is a vague policy around managing “excessive use”, and it is not clear what that means in practice. The service is entirely dependent on the long-term viability of an overseas based and owned operation, one that is a long way from being cash-flow positive, let alone profit-making, and the low-orbit satellite communications field has a surprisingly long history of failure and unmet expectations.

Starlink is expected to have competition in future, notably from Amazon’s “Project Kuiper”.

Highly mobile users

One problem area is people who do not stay in one location for long periods of time. These range from people with no fixed abode (homeless or simply wandering) to people working on civil works projects around the country. With most service providers requiring a one-year sign-up, or extra installation fees for shorter terms, this is a barrier to entry. For the most part, where service has already been provided to a location, the cost of enabling or disabling it should be minimal; even if a connection is new, one expects that it will be used again by future customers, even if the one originally ordering it departs early.

People not staying at any location for long, seeking boarding accommodation, or on the road need to have accessible communications.

Practical solutions include requiring service providers to offer reasonable early termination arrangements, encouraging or even requiring Internet access in boarding houses and other short-term accommodation, and encouraging mobile providers to offer plans suitable for users whose primary service is mobile access. Similarly, encouraging public wifi services, especially in community and public spaces, can also help take pressure off users of limited mobile plans.

One of the major justifications for one-year minimum terms is the cost of the router; many ISPs require the cost of the router up front for shorter terms (or they offer a “free” router with a one-year term, which amounts to the same thing).

Experience in the industry is that the level of “churn” (the rate of re-connection to different providers or locations) is relatively low; most people stay with their provider unless they have a good reason to change, which in turn suggests a healthy ISP industry.

Access devices

Obviously the best broadband Internet access is no use without suitable devices. Top-end smart-phones, laptops and tablets retail in the $1,000-5,000 range, with low-end wifi-only tablets costing as little as $250, and 4G mobile broadband models only a little more expensive.

However, as with monthly access fees, a low-income family often cannot scrape together that kind of money, especially in larger families where multiple devices are needed to meet educational, social and civic participation needs; and in many cases lower-end devices can be limited in capability.

The ongoing arms race in device capability and resource requirements of applications means that people are often pressured into replacing devices regularly; the limited lifetimes of batteries also tend to encourage relatively short life-cycles of mobile devices. (Desktop computers, being more capable at the outset and not reliant on batteries, tend to have much longer lifespans.)

The battery problem makes casual passing of pre-used devices without refurbishment less of a solution than would be desirable. For families and communities interested in re-using discarded devices, e.g. as starter devices for younger family members, the cost of refurbishing a device can be prohibitive.

Computer and mobile device recycling has a patchy history in NZ; while recycling for raw materials (mostly metals) has always been an active industry, device re-use has always been much less so. Devices are easily discarded. For device re-use to be practical, there needs to be active encouragement to donate used devices; efficient facilities to refurbish them – replacing batteries, removing user configurations, and updating software; and processes to get refurbished devices back into the community at minimal cost.

The wider issue of e-waste is also relevant to this discussion; while discarded phones are a small (but significant) part of the e-waste picture, obtaining re-usable devices would best be done through wider e-waste collection and recycling infrastructure. (See asides below.)

Accessibility, training and support

This isn’t my area of interest or expertise, but I will nod towards its importance: as well as having devices and connectivity, people need to be willing and able to use the Internet to fully participate. That includes dealing with disabilities (e.g. pushes for accessibility to be included in web sites, applications, etc.), training for people unused to dealing with technology, and ensuring adequate support is available, through service providers or other community groups.

What can InternetNZ do?

A manifesto for Internet participation

I believe NZ needs a set of minimum standards for Internet access. All NZers should be able to expect, at prices they can afford:

  • Access to a personal Internet device;
  • Access with sufficient bandwidth and reliability, available at all times;
  • Support to ensure devices and access can be used, especially for those needing training or assistance.

Clearly some nuance in such standards is required to allow for practicalities, but standards need a level of aspiration: an area that cannot meet a standard needs to be prioritised for development so that it can. A standard that simply describes an undesirable status quo is of no use at all, and standards need to be updated as what is needed and what is achievable changes.

I don’t believe that the solution to this is to fully socialise Internet access, but clearly there is a role for social services to work with infrastructure and equipment providers to ensure that people have what they need to fully participate in an increasingly on-line society. In short, build incrementally on what we have already achieved to address the problems we face.

Community funding

While INZ can never match the kind of funding that CIP or the public sector can provide, we could provide funding for smaller community deployments. In particular, we could fund projects like getting WISP towers deployed, and wifi access points and connectivity for marae and community facilities and so forth.

An approach I would like to see explored more fully is that of underwriting new connection projects, where the up-front cost of establishing connectivity is provided and then “paid forward” into the next project as the facility starts to generate revenue. INZ could work with WISPs and community groups to do this.

Similarly, assisting with getting facilities built into affordable and social housing developments may be an area we can work on.

Other areas such as community training and assistance programmes can also use assistance. I note that these tend to require ongoing funding, so really need more stable income sources than ad-hoc grants.

Advocacy work

Infrastructure

The following are areas we should work with CIP, Government and others to achieve:

  • A major review of the operation of regulated RBI wholesale service, with a view to removing per-user data caps and where possible improving end-user data rates.
  • Investigations into ways to get low-cost or subsidised UFB fibre services into social and affordable housing, engaging housing providers, CIP, access providers and ISPs;
  • Development of plans for long-term national rural fibre deployments, including engaging fibre providers to seek innovative solutions to reducing the cost of such deployments.
  • Advocacy for regulatory reform to ensure access to infrastructure (poles, routes, etc.) for rural fibre deployments.
  • Encouragement of service providers to reduce or eliminate long minimum contract terms.
  • Pushes to provide public wifi in community and public spaces.

I would observe that Government attention is off broadband as infrastructure, with housing, transport and sustainability being higher priorities, and we are unlikely to see the kind of attention that saw the UFB deployed or the changes that resulted in RBI. That means that the thrust of advocacy in the access space needs to be towards social equity programmes rather than infrastructure building (even if building infrastructure is the actual aim).

With that said, there is an interesting Government scheme in the UK for getting rural areas connected: the Gigabit Broadband Voucher Scheme. This basically provides seed funding for locally-driven broadband deployments.

Devices

These will require work with other agencies to achieve:

  • Consider providing devices through educational providers and social services;
  • Work with social housing providers etc. to include Internet access in the properties they manage;
  • Ensure the availability of low-cost but fully serviceable devices;
  • Seek development of large-scale refurbishment and re-use of discarded devices, preferably as part of a national e-waste strategy. (Also, see the asides below.)

These are just food for thought – I’m sure there are many other things we could be looking at.


Some observations and asides

Finally, in researching and considering this article, I made a few observations that don’t belong in the main text:

  • While the bulk of the deployment work has by necessity fallen to substantial companies (or companies funded to become large through UFB), the real innovation in the access space has not come from established organisations, but from small innovators: CityLink, Enable and Araneo are some (but far from all) examples. We need to ensure small innovators are not ignored by large funding exercises.
  • Attention has to be given to the motivations of organisations. Given a choice between an expensive, risky but potentially game-changing investment, and sweating existing assets, larger companies often partly or completely fall back on the latter. The behaviour of Telecom in the pre-UFB period is the worst, but by no means the only, example of this.
  • Most cabled infrastructure is physically built and maintained by large contracting companies (e.g. Downer), rather than by the access providers themselves; these organisations have massive civil works capability, but tend to be fairly expensive and difficult to work with on a smaller scale. There is a potential market niche for smaller-scale fibre installation companies to operate in community network deployments, as occurred in some of the pre-UFB fibre deployments.
  • Batteries are a problem, whether we’re talking telecommunications sites, mobile devices, or battery backup for “must work” telecommunications systems. Quite simply, batteries need maintenance and recycling, and for devices where batteries are integral, this often translates in practice into replacing the whole device. In short, when considering battery power in any context, the maintenance, replacement and disposal of batteries needs to be accounted for in the lifespan cost of the facility being considered.
  • The technology manufacturing industry grossly understates the life-cycle cost of its products; for example, in the EV space, it concentrates on the reduction of fossil fuel emissions of the actual vehicles, while drawing attention away from the energy and environmental costs of extracting the raw materials for batteries, engine parts, etc. and manufacturing the vehicle, and ignoring the maintenance and disposal costs, especially those of the batteries. The electronics industry engages in similar “all upsides” marketing, downplaying the environmental cost of building and then discarding a surprisingly large proportion of its products in very short time-frames. I touched on the role of recycling under re-using devices. But a long-term issue to keep an eye on is that the rate of extraction of “fresh” raw materials for batteries and other electronic parts is unsustainable in the long term, and recycling or re-extraction of materials is going to become important. Landfills increasingly have higher densities of valuable elements than natural ore bodies; the problems around mining such deposits are largely around the efficient separation of these materials from waste, and the environmental safety of doing so. In the shorter term, there are significant opportunities for waste reduction and recycling, both for re-use of working devices and extraction of raw materials. However, these activities will need capital to build capability. I believe the technology industry will, by necessity, end up a lot more sustainable than it is now, but it will need some work to:
    • confront the industry with its externalities, and ensure that it can not just rely on citizens (e.g. through local councils) to pay for disposing of their products;
    • decrease the environmental impact of technology;
    • explore and exploit the needs of the technology industry for sustainably re-using/recycling waste; and
    • ensure that measures such as waste taxes on technology producers and/or vendors actually go to developing means of dealing with waste.
  • When deploying rural fibre, being on an existing fibre route doesn’t help a lot. Long-haul fibre is built differently to urban access fibre; the latter tends to be built using air-blown fibre in ducts designed for rapid deployment from cabinets or pits to new drops near the fibre. Long-haul fibre by contrast tends to be built using direct-buried or aerial cables with occasional pits providing access to the individual fibres, usually installed more for the convenience of those laying and maintaining the fibre, and a new fibre drop requires a new cable from a pit to the destination.
  • A recurring theme in discussions about access for people with low incomes or financial hardship is the impact of lump-sum costs. Too often I hear that, say, $250 for a bottom-end device is “cheap”, when someone on a benefit or in a minimum-wage job, spending all of their take-home pay on living expenses, may never have that much in savings. The same applies to buying a router for a less-than-one-year Internet plan. Expecting people to spend their “rainy day” savings on essentials is unreasonable when “rainy days” are already coming alarmingly often. Lump-sum installation costs are always a hurdle, whether we’re talking about a $250 device or a few thousand dollars to extend infrastructure to a remote location, even if the lifetime cost of what is being bought is very low.