Can wireless replace fixed broadband? Yes … and no.

Over the past week, I've been conducting a little experiment with my household broadband.

As you may recall, I am currently in the process of dumping our Voice over IP-based home telephone in favour of getting a traditional fixed-line PSTN connection switched on. This has meant that iiNet has had to ‘downgrade’ our home broadband connection from naked DSL to a normal ADSL2+ connection, so our broadband has been down for a week.

Because of our lack of a fixed connection, over the past week I have been testing the hypothesis that it’s possible to replace a fixed broadband connection with a wireless one for residential use. Astute observers amongst you will note that this topic has been the theme of a thousand angry discussions between National Broadband Network supporters and opponents over the past few years, fuelled by constant commentary by the Opposition that wireless technologies might eventually make the NBN obsolete.

After hundreds of candid photographs of Malcolm Turnbull carrying his treasured iPad around the well-heeled streets of Wentworth, we’ve got the point. The Coalition – and many other people – think wireless may be the future of Australian broadband.

Now, I haven't just used any wireless network to test this theory. I currently have in my hot little hands one of Telstra's flash new Elite Mobile Wi-Fi devices, which has already been garnering glowing reviews. On paper this little beastie supports speeds of up to 21Mbps. Coupled with Telstra's stellar Next G network, this should deliver one of the fastest commercial 3G mobile connections in the world (look out for our longer review later this week).

In addition, my household is not just any household when it comes to broadband usage.

In our house we have no fewer than four computers between two people – I have a Windows 7 gaming machine, there is an Ubuntu media centre box in the loungeroom, and my wife and I have a MacBook each, as well as iPhones connected to Wi-Fi. We are both what might be classed as high-octane users – not only do we use the internet constantly for browsing, email and social media, but we also watch a great deal of online HD video through services like YouTube and the ABC's iView, as well as pulling down sizable downloads (I just grabbed the latest version of Ubuntu, for example, and my Steam addiction is well documented).

I also play video games nightly – normally StarCraft II is my passion, and I play Terran. Death to the Swarm!

The results over that period have been mixed.

Firstly, it is a simple and obvious fact that a modern Australian household with moderate internet usage needs – web browsing, email, social networking and some multimedia – will easily be able to satisfy all of those needs through a 3G mobile connection. On a day-to-day basis (and note that I'm discussing the user experience here, not the 'speeds and feeds' situation), I noticed very little difference between using my normal iiNet ADSL2+ fixed broadband connection and Telstra's Next G network through the Elite Mobile Wi-Fi modem.

For several days last week, I conducted most of my daily business of researching, writing and uploading stories, communicating with people online and entertaining myself browsing Reddit, all through Telstra's Next G connection. And most of the time I completely forgot I was on wireless rather than fixed broadband. Er … Telstra … this test account may have racked up quite some quota. Sorry!

Now, as you would expect, there were several important exceptions to this rule.

The first one relates to video. I regularly watch HD-quality video of StarCraft II matches broadcast online from GomTV in Korea or via YouTube (or anime on CrunchyRoll), and quite frankly, the Telstra wireless connection was really not up to this task. Sure, you can do standard definition video OK, but when you’re attempting to watch streaming video on a large flatscreen TV or even a 24” monitor, you need at least 720p HD quality — I normally go for 1080p — and Next G was not always up to this.

The connection would stutter; I would spend several minutes buffering a video, play it for a minute, then buffer again; and streaming HD video also slowed down everything else happening over the broadband connection. This didn't always happen – sometimes, especially with iView, the experience was fine – but overall it was inconsistent and unsatisfactory.

The second problem relates to online gaming.

On Next G I experienced problems even trying to get Blizzard’s login system to connect reliably to its Battle.Net server in the US or Singapore – let alone trying to play StarCraft II. Dropped connections, loading screens that took forever … it was a nightmare. It was a similar situation several times when I tried to log in to the Steam platform to play Portal 2. I wasn’t even trying to play multiplayer, but Steam hung several times when it tried to authenticate me online … in the end I gave up and went in to the office to use the fixed broadband connection there to hang with the lovely GLaDOS.

Sure, Steam isn't supposed to need you to log in online to play single-player games … but in this case it seemed to have got stuck midway between online and offline, packets stranded somewhere between my PC and Telstra's mobile phone tower, when it didn't need to be. You can use it offline, but Steam is clearly designed for PCs with fixed broadband.

So what am I trying to say here?

The truth is that if you're a moderate to light internet user, 3G mobile broadband is a pretty good option. You won't notice the difference between Next G (we're not sure about the networks of Optus or Vodafone; your mileage may vary) and a fixed ADSL or HFC cable connection if all you're doing is browsing the web, using email, chatting online and so on.

But the minute – and we mean this, the minute – that you start to enter into heavy internet use: video, video gaming and so on, 3G becomes second class and you will enter what we like to call “the World Wide Wait”.

Right now, I actually have three Telstra Next G connections; the one with the Elite Mobile Wi-Fi unit I am reviewing, my normal Next G USB modem, and my iPhone 4’s tethering feature. All three helped me get through the past week.

However, this afternoon I spoke with a nice gentleman at iiNet, who informed me my ADSL2+ would be back online some time tomorrow morning. For this HD video happy online gamer, stuck in a wireless wasteland for the past week … the return to fixed broadband couldn’t come soon enough.

Image credit: Julia Freeman-Woolpert, royalty free

66 COMMENTS

  1. Your experience of making do with 3G while waiting for ADSL mirrors mine, except for one thing – the cost per GB! Although you allude to having “racked up quite some quota”, you've avoided mentioning that the biggest plan Telstra even sells comes with only 12GB (shaped to 64k after that), costs more than my 150GB/month ADSL2+ (with Internode, who are not a budget ISP) – and you can't avoid a 2-year contract!

    It’s one thing to be stuck waiting for content to download, but quite another to simply not be able to afford it.

  2. I've recently moved back to Sydney after living in Canada for several years. Over there, internet access is pretty much ubiquitous via cable. I now live within sight of the Harbour Bridge (Woolwich), but my only choice is wireless, as we are apparently on an old exchange that doesn't support ADSL.

    I'm a moderate internet user from a data point of view and mostly it is OK, but I often have to reset the connection or unplug the USB device, as it seems to drop out. It is rare that it makes it through 24 hours without an interruption. The small data cap and the cost/GB the previous commenter mentioned are also big downsides. If I had a choice, I doubt I would be using wireless.

  3. Can you say what suburb you live in?

    Location matters a great deal with wireless broadband. It tends to perform poorly in high density areas with lots of short-term renters and/or absence of DSL ports.

    I think the above two scenarios are a catch-22 for wireless broadband providers. The basic revenue math says it's economically justifiable to deploy extra base stations (many customers). The problem is that as a provider you'd be betting that fixed-line broadband will remain a dead option over the long term in that particular area. That's a pretty risky bet in areas where fixed-line broadband is in fact quite economical to deploy.

  4. Can wireless replace fixed broadband?

    Yes, And Absolutely Yes … in 6 months' time when Next G is upgraded to LTE.

    9 years before the NBN is completed.

    And how much more will LTE have been upgraded in that time!

    • I find this amusing. On so many levels. You really think that the improvements to LTE-based technologies are actually that good? You have got to be kidding me.

      Obviously you have no understanding of the technologies involved here, Reality Check. Although LTE will be an order of magnitude better than 3G in terms of reliability and performance, it will still suffer from the same fundamental problems of wireless technology, which mean that fixed line broadband can always get you a better connection, assuming you upgrade them both.

      Yes, LTE will be awesome compared to the “current” technologies available to most Australians, like ADSL2+; however, the fact remains that it is behind next-generation, yet-to-be-deployed fixed line technology, like VDSL (with a short line length) and GPON.

      • Good wireless will be better than great fixed line for delivering collaborative and dynamic apps in the future.

        My NextG is more reliable than any ADSL I’ve had and the speed is equal or better than any ADSL.

        If I had to make the choice I’d use NextG, ADSL is just for extra downloads, that I don’t even need to collect straight away and would be happy picking up once/twice a week from a central fibre point.

        ~500m range wireless will be a cheaper, better solution than fibre and will give us so much more for the future. People with massive bandwidth requirements can pay for a directional wireless or fibre link to the tower that will be 500m away.

        Spectrum shortage averted. Problem solved. Most of $36Bn saved.

        • Good wireless will be better than great fixed line for delivering collaborative and dynamic apps in the future.

          It all depends on the application in question. You cannot generalize like this.

          My NextG is more reliable than any ADSL I’ve had and the speed is equal or better than any ADSL.

          News flash: your experience is not representative of everyone’s. In fact, you are very much an exception to the rule.

          If I had to make the choice I’d use NextG, ADSL is just for extra downloads, that I don’t even need to collect straight away and would be happy picking up once/twice a week from a central fibre point.

          NextG is great, I also use it, and given the choice I would stick with my fixed line connection via WiFi, because I get far better reliability. Personally I would never drop my fixed line connection.

          ~500m range wireless will be a cheaper, better solution than fibre and will give us so much more for the future. People with massive bandwidth requirements can pay for a directional wireless or fibre link to the tower that will be 500m away.

          No it won't. Were you even paying attention? If you have densely packed nodes you need to put them on different spectrum bands to prevent contention; the denser you pack the nodes, the more spectrum you need.

          Spectrum shortage averted. Problem solved. Most of $36Bn saved.

          Wireless at such a high node density will actually throw us headfirst into a spectrum shortage.

          • If we can densely pack home WiFi without problems, and deliver 300Mbit over 802.11n, then why can't we use 4x the spectrum to set up 500m links that deliver ~100Mbps?

            Answer: of course we can. Densely packing SHORTER range stations is the solution. Just as it has enabled high bandwidth home wifi, it will enable decent bandwidth Internet. And if you want more for special purposes, run a fibre link to your node – it'll be close by, so expense won't be ridiculous. Or wait for tech to improve and the bandwidth will increase on its own.

            As to the 300ms ping, that is the worst score to get to a capital city, not to the fibre network. That allows about 150ms to the nearest fibre point, which is fine. Make it <=200ms to all capitals, allowing only 50ms to the nearest fibre point if it makes you happy; that's still doable.

          • If we can densely pack home WiFi without problems, and deliver 300Mbit over 802.11n, then why can't we use 4x the spectrum to set up 500m links that deliver ~100Mbps?

            I could sum up my entire reply with one word.

            The point I was trying to make here is that we can't densely pack home WiFi without problems. Have you ever seen access points so underpowered they cannot negotiate a door and are only usable in the room you put them in (and if you don't have line of sight to the access point, well, you probably didn't want access anyway), or slightly overpowered ones that let the person next door onto your network from their living room? Now, if every household has WiFi, or you're in a place with lots of public WiFi access points, you'll find that there is hardly enough spectrum for everyone.

            Also, you do realize that 300Mbps is shared, right? That all users connected to that access point get a slice of that 300Mbps? You further realize that if a single client has access issues preventing them from getting the full 300Mbps (for example, low signal quality), the network will, in the best case, allocate more of the shared airtime to that client to compensate (i.e. they are getting 20Mbps but need as much airtime as an ideal connection would take to deliver 60Mbps) or, in the worst case, the entire network reverts to the lowest common denominator (which is what WiFi actually does).
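
            To make that concrete, here is a toy model of the airtime effect in Python (the per-client link rates are invented for illustration, not measurements):

              # Toy model of shared WiFi airtime: the AP divides *time*, not bandwidth.
              rates = [300, 300, 60]  # hypothetical Mbps each client would get alone
              # With per-packet fairness every client ends up with equal throughput x,
              # where the airtime fractions add to one: sum(x / r for r in rates) == 1.
              x = 1.0 / sum(1.0 / r for r in rates)
              print(f"each client: {x:.0f} Mbps, aggregate: {x * len(rates):.0f} Mbps")
              # -> each client: ~43 Mbps, aggregate: ~129 Mbps, nowhere near 300

            One slow client drags the whole cell down, which is exactly the lowest common denominator effect just described.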

            And let us not forget that getting 300Mbps to nodes 500m apart (that is what you are proposing, right?) will require almost as much fibre as running fibre to every home in the first place. I'm sorry, but wireless cannot be the “catch-all” technology you want it to be.

            Just as it has enabled high bandwidth home wifi, it will enable decent bandwidth Internet.

            WiFi N considered a high bandwidth local area network? You do realize that for minimal extra expense the home can triple the theoretical maximum bandwidth to every device, as well as get 50x the average actually-achieved bandwidth in most cases (which in my experience is about 20Mbps, give or take 10Mbps). It cost me $50 to run approximately 50 meters of Cat5e, and one of the common applications, copying large files for backup to the local NAS, is 160 times faster to execute.

            And if you want more for special purposes, run a fibre link to your node – it'll be close by, so expense won't be ridiculous. Or wait for tech to improve and the bandwidth will increase on its own.

            I love this argument: “if you want more, run it yourself”. No, it doesn't work like that. Running connections ad hoc isn't actually viable; doing so, even over short lengths, when dealing with publicly owned property like roads, still costs thousands of dollars per connection. It is an order of magnitude cheaper to run them en masse – why do you think Telstra and Optus ran HFC cables to millions of homes even though, at the time, a pittance (read: no one) actually signed up for the service? It's because in the long, and even medium, term this worked out cheaper for them than running the connections if and when the client wanted them. Granted, in this case they overestimated the demand and ended up writing off billions, but it is still common practice industry-wide, and they likely would not have suffered huge write-offs had they not been trying to replicate each other's network street for street.

            As to the 300ms ping, that is the worst score to get to a capital city, not to the fibre network. That allows about 150ms to the nearest fibre point, which is fine. Make it <=200ms to all capitals, allowing only 50ms to the nearest fibre point if it makes you happy; that's still doable.

            The circumference of the earth is on the order of 40 thousand kilometers. In the worst case, to get between any two points on the earth you will need to deal with about twice that in terms of fibre links and “fibre link equivalent”: 80 thousand kilometers. Due to delays in routing and switching, the average speed of a packet can be expected to be approximately 2/3rds the speed of light in a vacuum (a very rough estimate). This puts an upper limit on getting a packet anywhere in the world of 400ms, i.e. a round trip of 800ms. In practice this is what we tend to achieve in most cases; in fact I can get a packet to the UK in approximately half that. Observe:

            Pinging hexus.net [195.78.94.240] with 32 bytes of data:
            Reply from 195.78.94.240: bytes=32 time=381ms TTL=44
            Reply from 195.78.94.240: bytes=32 time=382ms TTL=44
            Reply from 195.78.94.240: bytes=32 time=382ms TTL=44
            Reply from 195.78.94.240: bytes=32 time=385ms TTL=44

            Ping statistics for 195.78.94.240:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
            Approximate round trip times in milli-seconds:
            Minimum = 381ms, Maximum = 385ms, Average = 382ms
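
            For anyone who wants to check that arithmetic, here is a quick back-of-envelope sketch in Python (the 80 thousand kilometre worst-case path and the 2/3-of-c packet speed are the rough assumptions above, nothing more):

              # Back-of-envelope worst-case propagation delay for a global path.
              C = 3.0e8          # speed of light in a vacuum, m/s (rounded)
              speed = C * 2 / 3  # ~2/3 c, routing/switching overhead folded in
              path_m = 80_000e3  # assumed worst-case global fibre path, metres
              one_way_ms = path_m / speed * 1000
              print(f"one way: {one_way_ms:.0f} ms, round trip: {one_way_ms * 2:.0f} ms")
              # -> one way: 400 ms, round trip: 800 ms

            The ~382ms round trip to the UK above sits comfortably inside that ceiling.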

            When latency becomes important, which in many applications it does, the latency delivered by current generation wireless technology is completely and utterly unacceptable, and using it, as you seem to be doing, as a metric for acceptable delivery is unwise and shows a complete lack of understanding of the diverse nature of many of the applications the Internet is used for today.

            In case you were wondering, the word I was referring to at the start of my post was facepalm.

          • Dense packing is fine. 802.11n runs across 12-14 channels giving 4 non-overlapping chunks @ 300Mbps or 2 chunks at 600Mbps:
            http://en.wikipedia.org/wiki/File:NonOverlappingChannels2.4GHzWLAN-en.svg

            That means that with 802.11n a band will be saturated at 1200Mbps. Two such bands exist (2.4GHz and 5GHz), giving 2400Mbps with the technology.

            Australian Housing Density:
            http://en.wikipedia.org/wiki/Medium-density_housing
            8-15 dwellings/hectare for standard suburbia
            25-80 dwellings/hectare for medium density.

            Let's assume we want 1:1 contention (which is business grade and excessive) and, on balance, we will assume that LTE can deliver the same spectrum saturation as 802.11n.

            Taking the bandwidth available for 802.11n only, we can achieve:

            500m radius towers = 7.8 hectares area ~ 1.5 towers per square km.
            standard suburbia = 64-128 dwellings
            = 25-50Mbit uncontested per dwelling

            250m radius towers = 2 hectares = ~5 towers per square km.
            medium density = 45-140 dwellings
            = 20-60Mbit uncontested per dwelling

            This will give us a network that's equivalent to having the best possible ADSL, but without tying you to a fixed point.

            Is that really so bad?

            As for ping: I'm trying to imagine the worst case for rural. My ping on NextG is ~60ms and spikes to 120ms sometimes. That is totally acceptable for everything but twitch gaming and real-time surgical procedures. The first we can cope with by restricting it to LAN cafes, and the second will have fibre, so it's dealt with.

            Do we really want to favour in home twitch gaming over the equivalent of country wide WiFi? That’s what you’re asking and it’s pretty unreasonable.

          • Facepalm

            First off, a few points from my previous post. I have included a summary.

            Summary: The theoretical maximum is in reality nothing like what you actually get, even when you are the uncontested user of the connection.

            WiFi N considered a high bandwidth local area network? You do realize that for minimal extra expense the home can triple the theoretical maximum bandwidth to every device, as well as get 50x the average actually-achieved bandwidth in most cases (which in my experience is about 20Mbps, give or take 10Mbps). It cost me $50 to run approximately 50 meters of Cat5e, and one of the common applications, copying large files for backup to the local NAS, is 160 times faster to execute.

            Summary: In contended conditions, the amount of bandwidth a user gets cannot just be divided out of the theoretical maximum, due in part to the issue stated above and, in the case of WiFi and HSPA+, legacy fallback.

            Also, you do realize that 300Mbps is shared, right? That all users connected to that access point get a slice of that 300Mbps? You further realize that if a single client has access issues preventing them from getting the full 300Mbps (for example, low signal quality), the network will, in the best case, allocate more of the shared airtime to that client to compensate (i.e. they are getting 20Mbps but need as much airtime as an ideal connection would take to deliver 60Mbps) or, in the worst case, the entire network reverts to the lowest common denominator (which is what WiFi actually does).

            So. Yeah. About that. It ain't looking good for wireless, is it? Also, another point you seem to have just ignored:

            Summary: The cost of running fibre at the density needed to service this theoretical wireless network will likely be of the same order as running the NBN in the first place. The only reason wireless has maintained its reputation for being cheaper is that the majority of medium to heavy users have a fixed line connection, and the tower density is relatively low. It is the same reason the proposed FTTN solution from the Liberals is an order of magnitude cheaper.

            And let us not forget that getting 300Mbps to nodes 500m apart (that is what you are proposing, right?) will require almost as much fibre as running fibre to every home in the first place. I'm sorry, but wireless cannot be the “catch-all” technology you want it to be.

            Is that really so bad?

            Yes. Yes it is that bad.

            As for ping: I'm trying to imagine the worst case for rural. My ping on NextG is ~60ms and spikes to 120ms sometimes. That is totally acceptable for everything but twitch gaming and real-time surgical procedures. The first we can cope with by restricting it to LAN cafes, and the second will have fibre, so it's dealt with.

            So you can imagine the worst case for rural and don’t even consider the worst case for the city? I rarely get pings of less than 180ms in the city. Anything above one fifth of a second is noticeable and can get quite annoying depending on the application.

            Try using an SSH server where it takes a quarter of a second for the terminal to update. I type at 80 AWPM. That means I input approximately 7 characters in any given second. Because of that, my input often gets buffered whenever I use the terminal via NextG. Every time I have to correct a typo in something vital like a shell command, I have to wait a second to ensure that the buffer is actually up to date. This breaks workflow, and this is an extremely low bandwidth application.
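
            The arithmetic, if you want it (the typing rate is mine from above; the quarter-second round trip is an assumed figure):

              # How many keystrokes are 'in flight' before their echoes come back.
              chars_per_sec = 80 * 5 / 60  # 80 AWPM at ~5 chars/word: ~6.7 chars/s
              rtt_s = 0.25                 # assumed round trip to the SSH server
              print(f"{chars_per_sec * rtt_s:.1f} characters un-echoed at any instant")
              # -> ~1.7 characters always outstanding while typing at full speed

            Every one of those un-echoed characters is a chance to mistype a correction you can't see yet.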

            And this is to a server that is less than 20 kilometres away from me. I do the same activity via my cable connection and I notice absolutely no buffer lag. I do the same activity to a server in the US via my cable connection and I still notice no buffer lag.

            Do we really want to favour in home twitch gaming over the equivalent of country wide WiFi? That’s what you’re asking and it’s pretty unreasonable.

            Actually it isn’t, because LTE and WiMax reduce latency overheads to the tower by approximately an order of magnitude. If you were actually in the know about the technologies you apparently support, you would know this.

            Wireless isn't a catch-all solution. There are legitimate technological and economic limitations to the technology which mean that no one should advocate a complete wireless substitution (and it greatly concerns me that you do). Even the heavy wireless focus of the US Broadband Plan doesn't advocate this. In fact, one of the requirements of the plan is 100Mbps to 100 million homes – something which is going to be more easily achieved using fixed line technologies, both in problem complexity and cost.

          • Uncontested you get about half the theoretical max. Not a showstopper.

            Sounds like your wireless N base station isn't very good. You shouldn't over-generalise from your one example.

            I clearly considered the shared nature, which is why it was divided. You might lose a bit to poor signal. This can be handled ad hoc, or with additional towers in real problem areas.

            Towers every 500m would involve a significantly cheaper fibre rollout. Kerb to house should be a huge chunk of the last mile cost – I'd be surprised if it was less than a quarter. Skipping ~75% of the roads will probably drop it by a small, but not negligible, amount (microtrenching should make them cheap).

            If LTE handles your latency issues to your satisfaction why are you arguing about it?

            I’m advocating a network that does a great job on 95%+ of home applications and does a good job on the last 5% with the added benefit of awesome country wide high speed wireless.

            Fixed line solutions are last century. If I thought the majority of productive work would be done from the same location every day, I’d agree with you to go fibre. But it won’t. And the benefit of supporting a mobile workforce is immense.

          • Uncontested you get about half the theoretical max. Not a showstopper.

            No. It depends entirely on your signal quality. You can't generalise it to “half the theoretical max”, and even then, explain to me why we can't get the full theoretical maximum? Surely if there is no contention we should be able to get the as-advertised speeds.

            Sounds like your wireless N base station isn't very good. You shouldn't over-generalise from your one example.

            Nope, mine is actually a very good base station. It just turns out the layout of my particular house means that I can't get a nice clean signal in every single corner. You clearly don't understand how much the topology of the environment in which wireless operates affects signal. Also, telling me off for generalising just after you generalise another point? Poor form.

            I clearly considered the shared nature, which is why it was divided. You might lose a bit to poor signal. This can be handled ad hoc, or with additional towers in real problem areas.

            Did I address your math directly? No, I didn't. Your math is fine; your assumptions, on the other hand, are not.

            Towers every 500m would involve a significantly cheaper fibre rollout. Kerb to house should be a huge chunk of the last mile cost – I'd be surprised if it was less than a quarter. Skipping ~75% of the roads will probably drop it by a small, but not negligible, amount (microtrenching should make them cheap).

            Oh, I never denied this. However, you have to remember that towers are very, very complicated pieces of technology, and also require a lot of space. Even if you send the raw signal data to a centralised location to avoid the “shack at the base of the tower” problem, you still need to buy a significant amount of real estate, in areas where real estate isn't available.

            Not to mention that people who do need a fixed line connection, for whatever reason, will end up copping the bill for fibre without benefiting from the savings of doing the rollout en masse, which will be prohibitively expensive – even if we assume that you can run the fibre to the nearest tower, which, if they send raw signal data down the fibre, you won't be able to.

            I haven’t even dived into the national security concerns presented by people being able to jam the network. Have you considered that?

            If LTE handles your latency issues to your satisfaction why are you arguing about it?

            Actually, I was arguing against your completely unrealistic expectations for latency, which are not only completely unacceptable but also not representative of the technologies we would deploy in what you propose. Now, as much as I like to attack your points directly, at some point I do have to declare that you are just completely incompetent and you are wasting my time. It is with this post I make this assertion.

            I’m advocating a network that does a great job on 95%+ of home applications and does a good job on the last 5% with the added benefit of awesome country wide high speed wireless.

            If such a network design were possible, I would also advocate it. But it isn't. It may be at some point in the future, yes, that is undeniable, but I do not like making my decisions based upon hypothetical technology that is yet to exist. Right here, right now, in 2011, fixed line broadband is a vital part of our broadband infrastructure and, even sans NBN, will continue to be for quite some time.

            Fixed line solutions are last century. If I thought the majority of productive work would be done from the same location every day, I’d agree with you to go fibre. But it won’t. And the benefit of supporting a mobile workforce is immense.

            Where is this myth that a mobile worker is somehow magically more productive coming from? Considering the majority of work is actually more productive at a location with easy access to all required facilities – an office, be it a home office, a multi-storey office block, or a cafe with printing facilities – you will find that the majority of people work from static locations. Sure, they may have multiple static locations, that location may be in a moving vehicle (such as on a train), or even, in some cases, temporary, but the fact of the matter is they are not actually all that mobile when they work, are they?

            Now, the only two cases where mobile technology actually gives you a serious benefit are in a moving vehicle or in temporary facilities. And we already tend to use these technologies for those exact situations. A home office can be serviced via WiFi or Ethernet over a cheaper fixed line connection, a large office complex will be serviced by a dedicated fibre link, and a cafe might provide free WiFi to customers.

            Also, WiFi just can't be replaced with LTE and other technologies. WiFi networks provide far more flexibility than mobile. They can be open, they can be closed, and they can even be used to aggregate the connections of multiple devices into one mobile stream in order to reduce contention on the overarching network.

            From a technical point of view, pure wireless doesn’t work. We run into contention problems, signal problems, etc, that mean that we cannot scale the technology.

            From an economic point of view, pure wireless doesn't work. Overcoming the contention problems makes the hardware (signal processing) quite expensive, extra space is required for all the equipment, extra energy needs to be consumed, and the costs of running dedicated fixed infrastructure for those who need it sky-rocket.

            From a network design point of view, pure wireless doesn't work. It doesn't actually meet the access requirements of all users. It doesn't allow for internal local traffic (LAN) the way WiFi and Ethernet do, it is insecure in that you cannot restrict the movement of sensitive data, and it is far too centralised.

            So clearly the solution is some kind of hybrid, and I'm inclined to advocate a dense fixed line network, as, even though it may seem counterintuitive to most, it actually provides far more flexibility.

            The model we have now works quite well, so why do you want to rock the boat? Because I seriously don't think you have actually considered the repercussions of doing that. I've covered a lot of why we can't do what you're proposing, but in all seriousness, should we?

            And no, the NBN, although it may be expensive and doesn't utilise existing resources, is still the same model we have now; it is less disruptive to the market than what you propose.

          • Don't need fancy towers. Just string them off light or high-gauge power poles, with a boost to height when required.

            No real estate problem then either. Similar to the amount of real estate the TransACT network in the ACT required (about as much as HFC or aerial fixed line fibre would).

            The in-home NTP can have its antenna permanently located at the best signal point, making it much better than current wireless.

            The rest we seem to disagree on and only have anecdotes either side so I’ll leave it alone.

            The current NBN rollout is hitting ~50% of the dwellings in the Liberal states. What do you plan to do for the other 50% of people after we've spent all the money? The big losers aren't luddites; they're renters who don't have authority, or people who move to a non-NBN dwelling after the rollout.

          • Don't need fancy towers. Just string them off light or high-gauge power poles, with a boost to height when required.

            Please, go out and have a look at one of these “fancy towers”. Note the little metre-cubed shed at the bottom of most of them. That equipment is the real cost of “installing towers”, not the tower itself. We can, and do, put an antenna anywhere. You have to put that much equipment, about half a cubic metre, at every single tower. Until recently it also had to be within 50 metres of the antenna, which was fun; now the signal can be whacked into fibre and decoded remotely.

            No real estate problem then either. Similar to the amount of real estate the TransACT network in the ACT required (about as much as HFC or aerial fixed line fibre would).

            See above. You need to put half a cubic metre of equipment per tower somewhere. You are proposing a tower every five hundred metres. That is a lot of equipment.

            The in-home NTP can have its antenna permanently located at the best signal point, making it much better than current wireless.

            And still worse than WiFi connected to a fixed line connection. You’re trying to find a solution to a problem you shouldn’t have created in the first place. This is a very bad idea for an engineer. It is like adding a lead weight to train bogies because they’re slightly more stable and can take more load, and then having to redesign a bridge because it can’t handle the increased load, when a far simpler solution of adding another carriage would work just as well.

            The current NBN rollout is hitting ~50% of the dwellings in the Liberal states. What do you plan to do for the other 50% of people after we've spent all the money? The big losers aren't luddites; they're renters who don't have authority, or people who move to a non-NBN dwelling after the rollout.

            What exactly are you implying here? I have in particular avoided the NBN in this debate, as the reason your stance is unworkable has nothing to do with the NBN. I merely use the NBN as an example, like my statement about the NBN being less disruptive to the market than what you propose. I did not deny that the NBN is disruptive and could possibly be a bad idea for the industry if implemented incorrectly, but what you propose is much worse.

  5. @ Reality Check… There will be absolutely no improvements in technology… what we have is plenty good enough [sic]… Well, along with $$$$$$$$, that's the argument the anti-NBNers use, isn't it?

    Anyway enough facetiousness…

    Didn’t you tell us before that the NBN is a monopoly with no competitors and in the very next breath tell us wireless isn’t complementary to fixed, it is a direct competitor (as you again indicate above)…?

    Nothing like a good old-fashioned monopoly… with competitors, eh… LOL!

    • @ RS

      Everyone else can see you are a moron, but I will just assume you are intellectually disadvantaged and so I will explain again that the NBN will absolutely be a fixed wire monopoly. Undeniable and in fact confirmed, acknowledged and admitted to by Conroy himself.

      And in becoming so requires the purchase and then destruction of 3 fixed wire networks,
      1 x PSTN and 2 x HFC.

      e.g.
      If one party had a monopoly over taxis, that's not a monopoly over transportation, but a valuable monopoly nonetheless.
      If one party had a monopoly over public mass transit and taxis, that would be a valuable monopoly over a sector or two, but not a monopoly over transportation overall.

      etc etc etc.

      • But, but, but… Reality Check…

        LOL… coming from someone who says “the NBN will be a monopoly” – “absolutely, wireless will be a competitor (to the monopoly)”… is priceless. Keep up the great work, brainiac…

        • Anyone expecting wireless to replace fixed “yes and absolutely yes” is kidding themselves, both in congestion and spectrum terms:
          http://www.smh.com.au/business/mobile-broadband-traffic-to-soar-20110503-1e6nd.html

          The per-user GB/month figures are rising pretty rapidly too. Yes, LTE will lower the cost per bit, but that is not the be-all and end-all of the equation – as Thrwan rightly points out, “It tends to perform poorly in high density areas with lots of short-term renters and/or absence of DSL ports.” IOW, in a cell with high contention and lots of people trying to draw on the network at once.

          We know Voda has had issues with its network, and Optus too. If you think LTE is going to be the magic wand that will solve those carriers' issues for them, I've a few bridges to sell you. If the “extra 150MHz” doesn't materialise (or gets tied up in a bunfight like I expect it will), then that will make the problem worse.

          Contention, spectrum and congestion issues are – well… non-issues when it comes to the NBN. It simply doesn't suffer those problems, and once built won't need upgrading for decades. Sure, LTE will be done before the NBN build is, but how many decades do you seriously expect to get out of that? How many wireless upgrades will be carried out over, say, the first 30 years of NBN life? You keep comparing chalk and cheese and telling me they taste the same, and it's simply not credible.

          And as for HFC – it doesn't exist in (this) underground power service area.

      • “And in becoming so requires the purchase and then destruction of 3 fixed wire networks,
        1 x PSTN and 2 x HFC.”

        Boo hoo? Really, they won't be missed. The copper is rotted and needs replacing, and as for the “2 x HFC”, none exists where I live… Are you one of those who thinks that anyone who wants faster speeds should move to the city? Yeah, let's all cram together in one place when we have so much room in this country to spread out and be comfortable. Seems some people miss the point of communications infrastructure.

  6. And then there's that moment you switch to an FTTH connection and realise that even at a _guaranteed_ 50Mbit, ADSL is nothing but a drag and a pain in the butt. I experienced this with Virgin Media in the UK, and going back to ADSL is really no longer an option, and hopefully never will be.

    All those people thinking wireless is ‘good enough’ and current Aussie ADSL speeds of just under 6Mbit are ‘good enough’ are only kidding themselves. Good enough for what, exactly?

  7. Luckily this is performance within the limitations of a 3G network. I can't wait to see the massive boost that 4G will provide, including performance enhancements for online gaming (massive latency reductions).

  8. Do people honestly think LTE is going to be competitive with fixed line in regards to data caps? I’d need a plan with 150GB each month for under $70 for it to be comparable with my current ADSL2 connection.

    I don’t see people mention upload speeds much either. As a heavy user of cloud services like DropBox, that’s where I find myself constantly frustrated with my Next G service AND my ADSL2 service.

    The NBN will be by far the most cost effective way for me to get a consistent, low latency 100Mbps+ service with at least 25Mbps uplink. I can't even imagine how awesome that will be after being used to 16Mbps down but only 1Mbps up on ADSL2.

    Telstra's 3G network is excellent (I use it all the time while travelling) and I expect their LTE service to be even more awesome. I imagine some people will find it's all they need, and that's fine. No one will force them to connect to the NBN. However, in the house where I live with 5 data hungry people (and no fewer than 20 separate devices connected to the router) all sharing the one fixed line connection, there is no way in hell wireless could take the place of ADSL2 or the NBN.

    • “I don’t see people mention upload speeds much either. As a heavy user of cloud services like DropBox, that’s where I find myself constantly frustrated with my Next G service AND my ADSL2 service.”

      You won't hear any mention of it from the likes of Abbott & Turnbull, that's for sure; according to them, no one needs fast upload speeds. They could argue that X or Y Mbps upload on a wireless connection is all anyone ever needs, but they'd be wrong. My experience too has proven that wireless just flounders in this area, and less than 1Mbps on ADSL2+ is not very useful. Then of course you have to think about the future (something Abbott and his zoo crew chums are incapable of doing). Current upload speeds on anything are pitiful; this is where FTTH will shine, and wireless technologies, regardless of relentless bleating from Turnbull, simply won't be able to keep up.

  9. BTW, what’s with the ancient brick of a laptop that lovely lady is holding in the picture? It looks like it weighs more than her ;)

  10. I'm with Ian in the first post. I moved to a new estate (western Sydney) about 18 months ago which is on a pair-gain (RIM hell) system, so there's no ADSL for the 400-odd houses except for about 90-odd ADSL1 ports for the lucky first bunch of houses. So I started with a Crazy John's dongle, which for the most part was like dial-up. I ditched that when I needed to use a VPN for work and went with the Telstra MF30, which has been a lot better but still congested at times (as you would expect).

    Drawbacks are that I am paying $150 for 10GB (the 365-day expiry in reality lasts about 2-3 months), which is fairly expensive if I want to do anything other than basic surfing/websites (eg YouTube, PSN, iView, any streaming). What I would give just to get my old 1.5Mb ADSL1 connection back!!!

    The silver lining here at least is that we are about to get hooked up with fibre (promised/discussed at our estate since I've been here) in the next couple of months… I'm GAGGING for it!

  11. No, wireless cannot replace a fixed service. The simple fact is the current 3G networks are hopelessly overloaded. For home internet use I require a consistent, high speed, low latency connection. Yes, I play online games, and yes, I am a heavy downloader.

    My home Internode ADSL connection gets a consistent 7Mbps download and 15ms latency 24/7. I was using Internode NodeMobile but unfortunately this uses the Optus 3G network which quite frankly is crap. I have just got off the phone today with Internode to cancel it.

    The only wireless service I use now is a Telstra 3G micro-SIM with my iPad and that has been good. But I am under no illusions – it’s good for web and email and that is about it.

    • I use Next G extensively — even uploading videos to YouTube on the road — and it’s good for much more than just email and web; to say that is to undervalue what the service really is. However, there is no doubt that it is not a complete substitute for a home fixed broadband connection for anything above light to moderate use.

  12. So this is a comparison between fixed broadband and a USB modem? Really, is that the length you went to? Did you try getting a better connection? If you had made a real attempt at it and got an antenna, you would have seen your wireless go through the roof! It's quite disappointing you didn't do this to get a good understanding of the technology at play. Your Next G modem is actually FASTER if it is given a good connection! I have used it and I can watch FULL HD video streamed from the US, let alone YouTube (which has servers in AUS).

    • “Your Next G modem is actually FASTER if it is given a good connection!”

      For some definition of faster:
      1. Throughput – only if no one else in the cell tower footprint is using mobile broadband at the same time.
      2. Latency – never going to happen.

  13. “Er … Telstra … this test account may have racked up quite some quota. Sorry”

    Come on, tell us how much!
    I bet it will be $1000 at prepaid rates.

    … and I'm sitting here getting 40K/s max on NextG due to living on the ‘wrong’ side of the building compared to the NextG tower just 200m up the road.

  14. The Steam problem is possibly due to your public IP shifting all the time. This is the only thing that has caused me problems on NextG.

    On NextG they use NAT to give you a 10.x.x.x address which they then translate into a public IP. Problem is, unlike NAT on your home router, the public IP keeps shifting.

    This is a limitation of Telstra's IPv4 implementation, not one of wireless. You could solve it via a VPN service if you were desperate, or Telstra could solve it by changing their implementation.
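
    If you wanted to watch the behaviour for yourself, a minimal sketch along these lines would log each change (I'm assuming an external echo service such as api.ipify.org, which returns your public IP as plain text; any equivalent service would do):

      # Poll an external echo service and log whenever the public IP shifts.
      import time
      import urllib.request

      last_ip = None
      while True:
          ip = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()
          if ip != last_ip:
              print(time.strftime("%H:%M:%S"), "public IP now:", ip)
              last_ip = ip
          time.sleep(60)  # check once a minute

    Behind a home router's NAT the printed address would stay put for days; on NextG you would see it wander, which is exactly what upsets Steam's authentication.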

    • Now that IPv4 has run out, it's more likely that DSL/NBN networks in the future will mirror Telstra's NextG implementation than the other way around.

      • They don't have to mirror the shifting exit point from the network, however.

        (Notice: he wasn't complaining about the NATing, just the shifting exit IP address.)

  15. Also, try using an iPhone as a modem on NextG, connected via USB, with Windows internet connection sharing to get to the other PCs. I get 1.5-4.5Mbps down and 0.5-1.5Mbps up depending on location. Speeds are consistent for a given location. Pings are 50ms-120ms higher than DSL and are variable.

    Experiences on other networks are a different story.

  16. Dumping VoIP, eh?

    The one thing you failed to mention was how well VoIP worked over your wireless 3G link!

    That's the real test…

    My bet is that on a good day it's usable; on most days, at busy times, it's somewhere between crappy and unusable. Wireless isn't the answer where you need reliability at internet prices.
    Ray

  17. As is so often the case, the key point in this debate is overlooked by so many. It's not what we can support today but what we'll need in 5, 10, 20, 50 years down the track. We have been damn lucky that the copper network, installed 80-odd years ago, has been able to support 1000+ times more than the 3kHz of bandwidth it supported at the start. Let's put in something now that'll see us through the next 80 years rather than just thinking of the next 5 or 10.

  18. It really raises the question of why the Australian taxpayer needs to fork out $36 billion for people to watch HD video content and play online games. Surely it would be in everyone's interest to get people off their bums and doing some (physical) exercise.

    • I very much agree with you here.

      The most valuable activities I engage in require the least bandwidth.

      There are activities that need more bandwidth that are valuable. So far these are constrained to medical imaging. Recreational video editing could always be higher res and is not critical for anything in society.

      There will be things that need a higher bandwidth in the future – advanced real time collaboration systems. They’re not ready yet.

      We absolutely need a national network. The copper network is falling apart. It needs to be replaced.

      Replacing it with lots of short range wireless base stations would get you 80% of the benefit of fibre for 20% of the price. People who really want fibre can pay the $5-10k to get it drawn the extra distance to their house. For everyone else, 50-100Mbps via <=500m wireless links will be fine. (Adjust distances for CBD and rural).

      People complain about wireless not having the bandwidth. This is ridiculous. 802.11n pumps out 300Mbps for home use and is clearly enough. Everyone could have one in their house off a fibre to the home connection and we'd all have bandwidth to spare.

      So… how about putting them in every second house, with a boosted range?

      How about every 4th house?

      How about just one base station per block?

      This is the kind of wireless that will solve the problem. Avoiding that stupidly costly last few hundred metres and replacing it with a wireless link is the answer. We already do it for the last 20m, and we know doing it for the last 3km would be too slow (not enough spectrum), so how about just the last few hundred metres?

      This is the sensible in-between and would allow us to scrap the decrepit copper network.

      More importantly, it would provide a very high bandwidth wireless solution. And all the awesome productivity benefits of the future (work while travelling, work from anywhere, augmented reality) can only be delivered wirelessly.

      • Of course, given the plethora of wireless technology options, the government really only needs to do three things to make Australian broadband awesome:

        1. Subsidise international transit.

        2. Encourage local loop innovation.
        * Guarantee an area will be Telstra copper free by date X – This allows network operators to guarantee take up.
        * Relax local cabling and base station restrictions
        * Let the market take care of the rest.

        3. Progressively increase subsidies in all areas:
        * until a minimum number (say 3) of wholesale networks are delivering a sufficient speed (say 100Mbps), at a reasonable latency (eg. max 300ms to all Australian capital cities), for a data plan of 100GB @ $100/month.
        * Let the market take care of how that happens.

        Progressive subsidisation allows the bush to become more viable over time. Only meeting 50% of the speed means only getting 50% of the subsidy. Double the latency? => Take half the subsidy.

        Maintenance:
        * Increase the $100/month with inflation
        * Increase the speed and data requirements every year by 50% (compounding); that should keep to Moore's law nicely.
        * Keep latency the same unless we get around the speed of light somehow
        * Increase the subsidy if <3 wholesalers.
        * Maintain current rate of subsidy in areas with 3 wholesalers.

        NB.
        * Showing overall trendlines only. Values should be tweaked on a daily basis, not annually, to avoid gaming the system.
        * Adjust percentages of speed in line with expert advice.
        * Adjust subsidy percentage shifts in line with budget constraints.
        * No, I don't want just $100/month for 100GB. Instead, build a reasonable bell curve, or at least include 5 price points (mean, +1sd, +2sd, -1sd, -2sd) and fit it to that.

        In areas with >3 suppliers, you could even drop the subsidy until it becomes negative, thus forcing rural cross-subsidisation.

        Ta-dah! NBN fixed. And for a fraction of the price, with full market innovation support to keep the Libs happy, lots of economic handles to pull to keep Labor happy, and a subsidy scheme for regional areas to keep the Nationals happy.
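
        For the sceptics, a toy sketch in Python of the pro-rata subsidy rule (the targets and the base figure are just the hypothetical numbers above):

          # Pro-rata subsidy: meet 50% of the speed target, get 50% of the subsidy;
          # double the target latency, take half the subsidy.
          def subsidy(base, speed_mbps, latency_ms,
                      target_speed=100.0, target_latency=300.0):
              speed_factor = min(speed_mbps / target_speed, 1.0)
              latency_factor = min(target_latency / latency_ms, 1.0)
              return base * speed_factor * latency_factor

          print(subsidy(100.0, 50.0, 300.0))   # half the speed -> 50.0
          print(subsidy(100.0, 100.0, 600.0))  # double the latency -> 50.0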

        Renai, can you go pimp my awesome plan and get us our NBN in half the time, at half the price, and with the chance of getting awesome wireless everywhere – something that isn't possible with current copper competition and wouldn't be possible when competing with a heavily subsidised NBN fibre system?

        • Okay, great Ben, so you’ve completely ignored some of the physical limitations that make this implementation impossible.

          First of all, the amount of spectrum required to deliver the bandwidth you are proposing via a purely wireless solution in urban areas is more than we have, so not only will you get contention between users, you will get contention between networks. This is a very bad idea. With a set amount of spectrum there is a critical density beyond which adding nodes gives you diminishing returns.

          As the spectrum that can safely be utilized for broadband communications is very limited and already contended by other technologies (digital television, to name one), it is unlikely we'll be able to improve the state of wireless communications much beyond what has currently been deployed.

          Also, your ideas of a “good connection” amuse me greatly. 300ms latency? Are you insane? That is the kind of latency I would expect to push a packet around the entire planet, not across the continent.

    • @Frank
      Everyone uses the Internet differently, all with equally valid reasons. As an avid online gamer I find your comment superfluous. We live in a digital age and we deserve the best Internet infrastructure to serve the nation.

  19. Wireless is good in the same way that netbooks are good: ONLY if used for their designed purpose… You can't replace your desktop PC with an Atom-powered netbook, and the same applies to wireless internet services.

    I believe what you experienced in your little test is not an accurate representation of the network in rural areas, which is where most people cannot access ADSL. Most networks in rural areas are oversubscribed, inconsistent and produce ‘average’ (read: sub-ADSL) speeds. I have the best modem available through Telstra (which is a joke in itself) running through an externally mounted antenna, and it ain't no ADSL, I can tell you…

    These wireless sharing ideas are just not viable anywhere outside of densely populated areas, and who is going to put up with 300ms?

    My 2c

    • My best speeds on NextG were in Blackheath (Blue Mountains, regional 1 ADSL) via my iPhone:
      4.5Mbps/1.5Mbps, with a 60-120ms ping to Google Sydney.

      Ran a 6-computer network off it and it was great.

      Outside the dense areas, use more powerful towers.

      It’s a shame your network sucks. This can be resolved with subsidies. If you have no DSL and rely on an external antenna for NextG do you really think you’re in an area that will get fibre? Surely you’re a candidate for 3/4G.

      I agree your network needs to be better. My ADSL is 4.5/0.7; it sucks and needs to be better too. I think the best solution for both of us is more reliable, better value wireless. I think better fixed line is the wrong direction.

      People compare current wireless with idealistic fibre and judge the NBN on that basis. What if the same money was pumped into wireless for the last mile? I think $40Bn on fibre with wireless last mile would make the country much better off than $40Bn on fibre to the home.

      • Giving more money to a technology blindly is not going to improve performance beyond a certain point. So you can't just say “imagine if we threw $40b at wireless technology”, because at some point you will get diminishing returns from the money you give to the technology, due to some physical limitation.

        As I have stated before, that limitation is spectrum, and if you start investing in wireless technology the way you are proposing, you will hit that limit very, very quickly. It is wise for us to try to minimize the risk of this, considering the popularity of wireless technology, and the way to do that is to invest in fixed line alternatives.

  21. This debate never ceases to make me laugh every morning i see it….

    Having moved houses recently, i’ve had to survive on an optus usb dongle for a good month while waiting on a naked adsl2+ line to be installed (i have little use for a landline).

    The speeds were very dependent on the time of day, i.e. the number of people on the network. Consequently, it was unusable between 7pm (when i arrive home) and 9pm. I cannot argue with the downstream speed though: at its best i managed to suck out ~5mb/s on steam downloads for durations as long as 2 hours (and yes, it does take a LOOOOONG time to log in if you can be bothered waiting 20-30 minutes).

    I stopped short of trying to play online as my latency to google.com.au (from south-east melb area) was fluctuating between 250ms and 1900ms, often jumping up and down between the 2 extremes every few seconds or so. To be fair, i can chalk some of that jitter up to my dongle switching between HSPA and WCDMA every so often (undeniably poor signal to start with), but the connection simply could not handle more than 2-3 streams at a time without appearing to juggle the traffic like a one-armed tradie.

    I won’t repeat others about data transport tech appropriateness (wireless is good for 1 person doing 1 thing at a time, and not a gamer), so i’ll just say that ADSL is a good step up from wireless, the main improvements being connection stability and consistent (if not too much faster) throughput.

    As to all those blabbering about faster speeds having no value to the average person, a 4-word acronym is just one reason for web-based upload speeds to improve, namely ajax. It may sound silly to point out the bleeding obvious, but i don’t see enough mention of websites/webapps that could be substantially improved in quality of design and usage merely by giving the developers enough bandwidth to get creative with real-time requests back and forth to the server. I have a biased view on this, being one such developer, yet i cannot count the number of times an interesting web-app idea has crossed my mind which i’ve since had to put aside because of this.
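
    To put some rough numbers on that, here’s a toy Python sketch of how the uplink caps the rate of those real-time requests – the 4KB payload and the three uplink speeds are assumptions for illustration only:

        # Rough ceiling on client->server sync frequency, bounded by uplink speed.
        PAYLOAD_BYTES = 4096  # one AJAX state update including headers (assumed)

        def max_updates_per_sec(uplink_mbps):
            return (uplink_mbps * 1e6 / 8) / PAYLOAD_BYTES

        for uplink in (0.25, 1.0, 40.0):  # old ADSL up, ADSL2+ up, fibre-class up (assumed)
            print(f"{uplink:>5} Mbps up -> ~{max_updates_per_sec(uplink):,.0f} updates/sec")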

    Take it from me, there is a golden chest of ingenuity right under our noses which will open on its own once upload speeds become more practical for it.

    G

    • NextG rocks the AJAX:

      2 months with NextG in Blackheath:
      GoogleDocs with multiple users on the one document was awesome. Surpassed all expectations. Indistinguishable from good DSL

      We had ADSL there and while it synced at 20Mbit it dropped every hour. This forced us to use NextG all day for work, as real time collab was only reliable on NextG.

      • That’s pretty spot on from what i’ve been playing with too. My main point was that ajax plus upload speeds better than either adsl or wireless would allow for much more frequent polling operations (google docs would appear more realtime, if such a thing is possible) and larger uploads.

        One example could be a collab editing web-app (a proper program would be better though) that doesn’t need cloud storage – with the right upload conditions on all users’ endpoints, one could construct a web-app that edits files in realtime on other machines instead of a shared copy on a server. well, realistically it would perform closer to how google docs already does on adsl/wireless, and not that this type of collab editing is ideal – it’s simply a different approach that isn’t all that viable yet.
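
        as a rough sketch of why the uplink is the binding constraint there – toy python, with the patch size and edit rate plucked out of the air purely for illustration:

            # Serverless realtime collab: each local edit is pushed to every other peer.
            PATCH_BYTES = 2048   # one edit diff, assumed
            EDITS_PER_SEC = 5    # a burst of typing, assumed

            def uplink_needed_mbps(peers):
                return PATCH_BYTES * 8 * EDITS_PER_SEC * (peers - 1) / 1e6

            for peers in (2, 5, 10):
                print(f"{peers} peers -> ~{uplink_needed_mbps(peers):.2f} Mbps uplink per user")

        against the ~0.25-1Mbps uplinks common on adsl, it doesn’t take many peers before the sync rate has to drop.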

        otherwise yeah, i’ve seen my share of unreliable adsl too…

  22. I am doing the same thing to a small extent. I am with Internode, and there were massive floods in my area in Victoria a few weeks ago. My line went down for about a week, so I purchased an 18GB wireless stick from a competitor of Internode so I would always have a wireless link if necessary.

  23. *The truth is, that if you’re a moderate to light internet user, 3G mobile broadband is a pretty good option. You won’t notice the difference between Next G (we’re not sure about the networks of Optus or Vodafone; your mileage may vary) and a fixed ADSL or HFC cable connection if all you’re doing is browsing the web, using email, chatting online and so on.

    But the minute – and we mean this, the minute – that you start to enter into heavy internet use; video, video gaming and so on, 3G becomes second class and you will enter what we like to call “the World Wide Wait”.*

    i think that’s an accurate description of the present status quo. more importantly, when it comes to broadband infrastructure, the article recognises that not everyone has the same needs or requirements or faces the same budget constraints.

    a lot of people are pushing the idea of “internet access” as a utility. there’s an element of truth in this classification if we’re merely referring to mandating some “minimum standard of access” for all citizens.

    however, notions of “ubiquitous* broadband access” (i.e. “if one group of people has access to fibre, everyone else should also have access to fibre”) are totally misplaced and have little merit.

    to illustrate this, take water and electricity, for example, which are the most common “utilities” (in developed countries at least). i think it’s fair to make a general statement that every household pretty much consumes the “same” amount of water and electricity.

    now you might find this statement objectionable. after all, at the extremes, some people take 3 showers a day, others none. some people have huge, well-tended gardens which require a lot of watering, others merely have a rusty old tin shed and Hills Hoist in the backyard. when it comes to electricity use, some people living in mansions have 5 plasma TV sets running every evening along with a whole host of energy-guzzling appliances, while others on smaller living budgets economise on a single TV set and a microwave.

    so, yes, when it comes to water and electricity consumption, there is variation from one household to the other. however, consumption patterns across households are most probably statistically encapsulated within a Bell curve (i.e. roughly two-thirds of households fall within one standard deviation of the mean, with short tails). on this basis, it’s reasonable to expect and implement “uniform service access” on a universal basis.

    however, when it comes to “broadband access”, we’re talking about a whole different animal. attempts by some people to conflate “internet access” with “electricity”, “water” and other common utilities ignore the fundamental differences that characterise the consumption of “internet access”. for starters, there’s a sizeable group of people out there in our community who choose not to have internet access.** equally important is the well-known fact that, globally, 5% of internet users account for over 40% of network traffic.

    thus, as compared to water and electricity consumption, the statistical distribution of “internet consumption” is most likely highly-skewed with a lop-sided fat tail distribution (where the mean consumption is substantially higher than the median consumption). despite the natural variation across households, it is highly unlikely that 40% of water or electricity consumption is accounted for by 5% of households.
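
    for the statistically-inclined, here’s a toy simulation of that skew (python; the lognormal shape parameter is plucked purely to reproduce the “5% ≈ 40%” pattern under discussion – it is not fitted to any real traffic data):

        import random

        # toy lognormal "monthly GB per household"; sigma=1.4 is an assumption
        # chosen only because it yields the 5%-of-users/40%-of-traffic shape.
        random.seed(1)
        usage = sorted((random.lognormvariate(2.0, 1.4) for _ in range(100000)), reverse=True)
        top5_share = sum(usage[:5000]) / sum(usage)
        print(f"top 5% of households -> ~{100 * top5_share:.0f}% of total traffic")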

    hence, when it comes to “internet access”, the rationale for providing “uniform service access” or “equivalent infrastructure access” simply doesn’t exist. broadband consumption patterns across households are highly “heterogenous” as compared to relatively “homogenous” consumption of common utilities such as water and electricity.

    taking all this into account, the smart and wise government policy is to mandate minimum standards of “internet access” for all Australians. forcing everyone onto the super-expensive fibre platform is just too costly, unnecessary and plain stupid. heterogenous infrastructure needs and affordability call for heterogenous (least-cost, blended-mix) solutions.

    in the modern age, “internet access” may well be a “utility” – but it’s clearly not the garden-variety type of utility that calls for “uniform infrastructure access”.

    * the term “ubiquity” originates from a religious context – Lord knows how it’s been misappropriated and misused to describe the simple concept of “uniformity of service access”.

    ** clearly, some people choose not to have an internet subscription because they simply can’t afford one. others, like Russian billionaire Mikhail Prokhorov (http://en.wikipedia.org/wiki/Mikhail_Prokhorov), choose not to have internet access because “the internet kills the brain”, apparently.

    • Except Tosh, the entire premise of your argument here is based on a “fact” that is very hard to verify. Where is this “5% of users use 40%” figure sourced from? Also, what metric are we measuring? Network traffic at any given time? Network usage over a given period? Is the 5% static (the same users) or dynamic (5% represents the average proportion of users bursting at any given time)?

      Without an accurate understanding of the nature of the statistic it is very difficult for someone to map the bell curve of users and thus verify your analysis. So please, if you could take the time to further explain the metric here, then I could better understand the premise of your argument and, hopefully, agree with your analysis.

      Also, my understanding of network usage was that the demand for increased speed and reliability is independent of network usage, i.e. the desire for higher speed does not always mean you are a heavier user. It all depends on the application. For example, a light user may stream a few minutes of high definition video in their usage patterns (i.e. occasional YouTube browsing), while a heavy user may generate a lot of traffic simply because they are always downloading small files from various servers (i.e. a developer collaborating with other users on a large project with several hundred megabytes of source, updated regularly). The difference is that the first user requires more bandwidth than the second, even though you’ll likely find they use less data overall.
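
      To make that distinction concrete, here is a quick worked example (illustrative Python; every figure in it is an assumption, not data):

          # Peak bandwidth needed vs monthly quota used: two independent axes.
          # Light user: 10 min/day of HD video at an assumed 5Mbps stream rate.
          hd_gb_per_month = 5e6 / 8 * 600 * 30 / 1e9   # ~11GB, but NEEDS 5Mbps sustained

          # Heavy user: pulls an assumed 500MB of source updates twice a day, at any speed.
          sync_gb_per_month = 0.5 * 2 * 30             # 30GB, with no real peak-rate need

          print(f"HD viewer: ~{hd_gb_per_month:.0f} GB/month, needs 5 Mbps sustained")
          print(f"Developer: ~{sync_gb_per_month:.0f} GB/month, indifferent to peak rate")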

      This isn’t to say I don’t understand what it is you are trying to say: that the needs of the few shouldn’t define what the government should do. However, the problem with your argument is actually proving that the problem of broadband is something only a few are interested in. The fact that it is on the agenda of both governments to do something about fixed broadband infrastructure, and that at the election around 70% of the population were in support of the NBN, seems to disagree with this; it seems broadband is important.

      Yes, that in itself doesn’t mean that the NBN is the best solution, but if we take your argument to its conclusion, i.e. a wise government mandating a minimum speed of access (the fact that it requires regulation like this actually makes it a public utility, by the way – in fact, under the definition provided by Wikipedia it already is one), we are left with a problem of enforcement and metrics that is actually not worth the effort of implementing.

      It sounds good in theory – you must deliver at least 12Mbps to all users – except it becomes a problem of enforcement. The only way to enforce this is to fine the provider if they fail to do so, bearing in mind that they will only get reported for this if a user notices and cares enough to report it, which a lot of users simply won’t. And then you get into the problem of: what do you do about the user where it is cheaper for their provider to pay the fines than actually upgrade their infrastructure?

      There is a slightly more subtle problem with this method as well: if you fine providers for not providing the mandated speed, you are depriving them of revenue they could otherwise invest in meeting the mandate.

      • *Where is this “5% of users use 40%” figure sourced from? Also, what metric are we measuring? Network traffic at any given time? Network usage over a given period? Is the 5% static (the same users) or dynamic (5% represents the average proportion of users bursting at any given time)?*

        you’re 100% correct – a simple statement such as “5% of users account for 40% of traffic” can be interpreted in a “static user” sense and a “dynamic” sense. if memory serves me right, i came across this statistic in some article discussing filesharing / P2P activities and the composition of global network traffic. read in that context, the “5% of users” most likely refers to a static base of minority internet users who are heavily-engaged in P2P, etc, causing massive network congestion for the less-bandwidth-hungry majority.

        nonetheless, my argument doesn’t necessarily rest on this one particular statistic. in fact, i could rephrase and reiterate my arguments in a more general form: everyone pretty much has the “same uses” (or apps) for water and electricity. in terms of water, we shower, we wash our laundry, we cook, we water our gardens, etc. in terms of electricity, we run pretty much the same appliances in our households. there’s a strong “commonality” in our consumption of these utilities.

        however, when it comes to internet access, the heterogeneity of “uses” is very stark indeed. as i’ve already pointed out, there’s a significant portion of our community who choose not to have internet access at all (aside from affordability issues, some people have a negative perception of “the internet”, e.g. “full of distraction”, “kills the brain”, etc). also, there’s a sharp divide between the majority, who use the internet merely to email, browse news content and watch a few videos, and hardcore users who fill up a 2TB HDD every month.

        bottomline, broadband usage patterns across households do not exhibit the same “homogeneity” as consumption of common utilities such as “water”, “electricity” or “gas”. hence, arguments for “equivalent infrastructure access” based on false and misleading analogies to “utilities” do not hold up to scrutiny.

        *Also, my understanding of network usage was that the demand for increased speed and reliability is independent of network usage, i.e. the desire for higher speed does not always mean you are a heavier user.*

        you’re 100% correct – there’s a difference between “bandwidth” (flow) and “throughput” (stock).

        however, clearly, the only reason you invest in a bigger pipe is to stuff more data through that pipe. in the current (free, unsubsidised) market for fibre access, there’s a clear correlation between “bandwidth” and “throughput” (i.e. corporations and other large entities install fibre because they need the “throughput”).

        it doesn’t make any sense to invest tens of billions of dollars just to shorten the buffering period of “a few minutes of youtubing”. even NBN Co.’s business plan explicitly argues for a correlation between “higher speed tiers” and “higher data usage”.

        *However, the problem with your argument is actually proving that the problem of broadband is something only a few are interested in.*

        hold your horses, cowboy – nobody’s arguing that. there’s a strong (market) case for substantial investment in our fixed-line infrastructure. only a few years ago, Telstra was prepared to spend $10bln of shareholders’ money (a third of its market capitalisation) on FTTN. we would’ve experienced major improvements in our broadband access by now if it wasn’t for government intervention and regulatory obstructionism.

        *but if we take your argument to its conclusion, i.e. a wise government mandating a minimum speed of access, we are left with a problem of enforcement and metrics that is actually not worth the effort of implementing.*

        no, to clarify, i was arguing that if the Government has to do something, then “mandating minimum service access” is all that is required. shutting down existing fixed-line infrastructure, indiscriminately forcing everyone onto super-expensive fibre and creating a giant, government monopoly is just NUTS.

        however, given the long-existing private market initiative to roll-out FTTN “nationally”, there’s no real need even to enforce a “national mandate”. the Government merely has to implement some subsidy programme to encourage investment in infrastructure that serves the most costly 10 percentile. a wholesale government takeover of 100% of the fixed-line market just to solve the “10% problem” is totally INSANE.

        • you’re 100% correct – a simple statement such as “5% of users account for 40% of traffic” can be interpreted in a “static user” sense and a “dynamic” sense. if memory serves me right, i came across this statistic in some article discussing filesharing / P2P activities and the composition of global network traffic. read in that context, the “5% of users” most likely refers to a static base of minority internet users who are heavily-engaged in P2P, etc, causing massive network congestion for the less-bandwidth-hungry majority.

          Except without you providing access to this material I cannot read it within context. Given the ambiguity of this particular statistic I am inclined to reject it.

          nonetheless, my argument doesn’t necessarily rest on this one particular statistic. in fact, i could rephrase and reiterate my arguments in a more general form: everyone pretty much has the “same uses” (or apps) for water and electricity. in terms of water, we shower, we wash our laundry, we cook, we water our gardens, etc. in terms of electricity, we run pretty much the same appliances in our households. there’s a strong “commonality” in our consumption of these utilities.

          however, when it comes to internet access, the heterogeneity of “uses” is very stark indeed. as i’ve already pointed out, there’s a significant portion of our community who choose not to have internet access at all (aside from affordability issues, some people have a negative perception of “the internet”, e.g. “full of distraction”, “kills the brain”, etc). also, there’s a sharp divide between the majority, who use the internet merely to email, browse news content and watch a few videos, and hardcore users who fill up a 2TB HDD every month.

          I don’t understand what the problem is here. Do you seriously think that telephone or electricity usage was “homogeneous” when the service was first delivered to the public, or in the first decades of usage? I very much doubt this was the case, and yet the governments of the time saw fit to invest in the technology. Broadband is no different. The problem is that you seem to think building to the LCD is a good idea.

          I don’t, and I take the UK power grid as an example of why this is a bad idea. The majority of the network is running over 100% of design capacity, and the government, the only entity capable of fixing the issue, has provided subsidies on power generation rather than grid improvements.

          The same problem is likely to occur with broadband if we don’t consider future demand in any policy and design decisions, because, like it or not, telecommunications is a public utility, requiring considerable regulation in order to function. Take the fact that competition didn’t exist in the broadband space until after ULL.

          bottomline, broadband usage patterns across households do not exhibit the same “homogeneity” as consumption of common utilities such as “water”, “electricity” or “gas”. hence, arguments for “equivalent infrastructure access” based on false and misleading analogies to “utilities” do not hold up to scrutiny.

          I pointed you to the Wikipedia article on Public Utility and if you took the time to read it you would note that Broadband, right now, fits under this banner. The definition I am referring to is:

          Wikipedia: A public utility (usually just utility) is an organization that maintains the infrastructure for a public service (often also providing a service using that infrastructure). Public utilities are subject to forms of public control and regulation ranging from local community-based groups to state-wide government monopolies. Common arguments in favor of regulation include the desire to control market power, facilitate competition, promote investment or system expansion, or stabilize markets. In general, though, regulation occurs when the government believes that the operator, left to his own devices, would behave in a way that is contrary to the government’s objectives.

          however, clearly, the only reason you invest in a bigger pipe is to stuff more data through that pipe. in the current (free, unsubsidised) market for fibre access, there’s a clear correlation between “bandwidth” and “throughput” (i.e. corporations and other large entities install fibre because they need the “throughput”).

          I already gave you an example as to why this isn’t the case. To further illustrate the point: it is often a good idea to design a network such that only a fraction of the maximum potential of the connection is utilised, in order to reduce latency.

          To give you an example: if you have an application that takes 10Mbps of bandwidth on average but fluctuates by minimal amounts, and you use a 10Mbps pipe, you will end up queueing packets at the exit point of your network, and this will result in jitter. However, if you run the same application over a 20Mbps pipe you find that you don’t have this problem, and data is actually sent as fast as it is created. For many business applications, in particular VoIP, over-provisioning is common for this very reason.

          Just because you have 20Mbps doesn’t mean you always use 20Mbps. Networking isn’t that simple.
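
          That intuition lines up with basic queueing theory. A minimal sketch using the textbook M/M/1 delay formula (Python; the 1500-byte packets and the 9.5Mbps offered load are assumptions):

              # M/M/1 mean time in system: W = 1 / (service_rate - arrival_rate).
              PACKET_BITS = 1500 * 8              # assumed packet size
              offered_pps = 9.5e6 / PACKET_BITS   # app averaging ~9.5Mbps (assumed)

              for link_mbps in (10, 20):
                  service_pps = link_mbps * 1e6 / PACKET_BITS
                  wait_ms = 1000 / (service_pps - offered_pps)
                  print(f"{link_mbps} Mbps pipe -> ~{wait_ms:.2f} ms average delay per packet")

          The near-saturated 10Mbps pipe shows roughly twenty times the delay of the 20Mbps one for the very same traffic, which is the jitter argument in miniature.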

          it doesn’t make any sense to invest tens of billions of dollars just to shorten the buffering period of “a few minutes of youtubing”. even NBN Co.’s business plan explicitly argues for a correlation between “higher speed tiers” and “higher data usage”.

          I didn’t say there wasn’t a relation, just that it varies greatly depending on the user or application. In general, yes, if you have a bigger pipe you will be inclined to use more, but that isn’t necessarily the case. In fact, I use half the data I did in the UK, and I have a connection capable of twice the speed (30Mbps cable). So the idea that a user on a 30Mbps connection will use twice as much (or even just “more”) quota than a user on a 15Mbps connection doesn’t hold true.

          It may not make sense to invest in “YouTube”, you are correct, but that isn’t the only application that uses that tier of bandwidth, is it? I can think of things that have a legitimate benefit, and I don’t even need to leave the VoD sphere. Ever spent some time watching TED?

          no, to clarify, i was arguing that if the Government has to do something, then “mandating minimum service access” is all that is required. shutting down existing fixed-line infrastructure, indiscriminately forcing everyone onto super-expensive fibre and creating a giant, government monopoly is just NUTS.

          Okay, and again, if we take that to its conclusion, how do we mandate it? How do we continue to mandate it to account for ever increasing demand? The only way without direct control over the asset is to provide subsidies for investment, which will result in cherry-picking by organisations (investing in the cheaper technology, which won’t be the better one for users, like HFC or mobile broadband), or to provide punishment for providers who fail to meet the mandates, which has even worse problems, which I discussed.

          Some utilities are autonomous when it comes to improvements, and require minimal investment; unfortunately broadband, given our limited population and unique topology, isn’t one of them. This is the same reason why many commentators on broadband complain about FTTN not being a workable solution: the majority of our population lives in low- and medium-density areas.

          however, given the long-existing private market initiative to roll-out FTTN “nationally”, there’s no real need even to enforce a “national mandate”. the Government merely has to implement some subsidy programme to encourage investment in infrastructure that serves the most costly 10 percentile. a wholesale government takeover of 100% of the fixed-line market just to solve the “10% problem” is totally INSANE.

          I am sorry, but what long-existing private market initiative are you talking about? There is none. If there was there would be no justification for the original Broadband Connect plan in the first place. There isn’t a long existing trend if the government needs to subsidise the market in order to get a solution.

          • *Given the ambiguity of this particular statistic I am inclined to reject it.*

            frankly, there’s nothing ambiguous about a simple statement that “5% of users account for 40% of network traffic”. it’s not even surprising. it’s a well-known fact that there’s a small minority of users out there sitting near exchanges with TPG/PIPE backhaul who torrent/usenet like hell and pull down massive downloads every month. it’s only ambiguous if you’re trying to quibble over semantics just for the sake of arguing like lawyers do. furthermore, i’ve already explained the context of the statement.

            *I very much doubt this was the case, and yet the governments of the time saw fit to invest in the technology.*

            why is it that NBN proponents have to trawl back centuries through the history books and refer to some supposed antiquated government practice to justify MASSIVE and UNNECESSARY government intervention in the C21st? why do we have to benchmark “best practices” against ancient history and live in the past? lobotomy used to be a universally-accepted practice in Western medicine a long time ago to treat mental illnesses – would you like to be lobotomised now?

            *The problem is that you seem to think building to the LCD is a good idea.*

            who says we’re building to the LCD? the correct solution is to provide differentiated services to the market according to local needs, requirements and affordability, as opposed to a single, uniform solution. how’s that catering to the LCD?

            *The same problem is likely to occur with broadband if we don’t consider future demand in any policy and design decisions*

            policy? ferrchrissakes – if it wasn’t for political and bureaucratic obstructionism, we’d have had FTTN by now!

            and sorry, regurgitating huge blocks of some Wikipedia discussion of “public utilities”… achieves what?

            *I already gave you an example as to why this isn’t the case. To further illustrate the point: it is often a good idea to design a network such that only a fraction of the maximum potential of the connection is utilised, in order to reduce latency.*

            yes, “abundance” is always GOOD. but, in the real world that we live in, “abundance” costs MONEY. real world examples of corporations and other large entities installing fibre revolve around the need for “throughput” just as much as “bandwidth”.

            *Okay, and again, if we take that to its conclusion, how do we mandate it? How do we continue to mandate it to account for ever increasing demand?*

            you’re telling me the sophisticated telco market full of intelligent, knowledgeable entrepreneurs, engineers and investors needs political ignoramuses and bureaucratic clowns such as Conroy and Samuel to tell them to invest to cater to increasing demand? ROFLMAO. man, your comments are so laughably out of touch with reality. excessive political and regulatory intervention OBSTRUCTS infrastructure investment.

            *The only way without direct control over the asset is to provide subsidies for investment, which will result in cherry-picking by organisations*

            if we’re talking about specific intervention to incentivise investment in infrastructure which serves the most remote 10% of residents, how does “cherry-picking” become an issue?

            *This is the same reason why many commentators on broadband complain about FTTN not being a workable solution: the majority of our population lives in low- and medium-density areas.*

            the lower your population density, the more expensive it is to provide universal access to infrastructure. that’s why it behooves you to select the least capital-intensive form of infrastructure provision, which is FTTN (as opposed to FTTP).

            *I am sorry, but what long-existing private market initiative are you talking about? There is none.*

            a couple of years ago, Telstra took out full-page advertisements in national newspapers declaring their preparedness to begin rolling out FTTN within weeks… if they had the necessary political and regulatory guarantees that their billion-dollar investments wouldn’t get socialised away and given for free to their competitors.

            *If there was there would be no justification for the original Broadband Connect plan in the first place. There isn’t a long existing trend if the government needs to subsidise the market in order to get a solution.*

            Broadband Connect was the regional/rural investment subsidy programme. you’re confusing general investment in fixed-line infrastructure (Telstra’s 90% FTTN plan funded with shareholders’ equity) with a govt rural subsidy programme (targeting 10% of the most remote population).

          • frankly, there’s nothing ambiguous about a simple statement that “5% of users account for 40% of network traffic”. it’s not even surprising. it’s a well-known fact that there’s a small minority of users out there sitting near exchanges with TPG/PIPE backhaul who torrent/usenet like hell and pull down massive downloads every month. it’s only ambiguous if you’re trying to quibble over semantics just for the sake of arguing like lawyers do. furthermore, i’ve already explained the context of the statement.

            Given that I asked you to verify – read: provide an actual source – and you refused, and your context statement was full of “weasel words”, it is ambiguous, and that’s being generous, as I am assuming that it is accurate. If you want me to revoke that statement, provide enough definitive context for me to extract an actual statistic from somewhere, or go one better and actually cite the source.

            If we are actually going to discuss this problem at length and consider the data correctly, using “well known facts” is not a good idea, as they could be based upon anecdotal evidence, or a complete misconception. I am not “quibbling over semantics”, I’m actually questioning the authenticity of your statement, because, if I’m frank, I don’t trust you at all.

            why is it that NBN proponents have to trawl back centuries through the history books and refer to some supposed antiquated government practice to justify MASSIVE and UNNECESSARY government intervention in the C21st? why do we have to benchmark “best practices” against ancient history and live in the past? lobotomy used to be a universally-accepted practice in Western medicine a long time ago to treat mental illnesses – would you like to be lobotomised now?

            So let us see here, this statement hangs on two things: that the implementation of the NBN is massive, which implies that some other sector of the economy could achieve the same goal, and that it is unnecessary, i.e. that the same goal could be achieved by a simpler or cheaper method. I don’t think the specification of the NBN can be delivered better by any other sector of the economy; however, the question of whether we can do it another way is still very much under debate.

            Furthermore, if we can’t benchmark the effectiveness of a particular plan against the past, what do we benchmark it against? If you ignore a set of options that the government has available to it simply because they are “ancient history”, you run into the problem of not actually considering the problem in its entirety.

            I do not agree with the NBN in its current form – we are far from a workable solution I would endorse completely – but there are aspects I support for various reasons. However, I do not endorse the solution you are proposing here at all, simply because I find it unworkable. That doesn’t mean I’m not open to other solutions; I do think FTTN could work under the right regulatory environment. But this isn’t a debate of FTTN vs the NBN – this is a debate, or at least this is what I gather from what you posted, as to why the current NBN is a big fat pile of dog turd. I think we can agree on that in part: the current NBN is a big fat pile of dog turd.

            The difference we seem to have is that I think it can be shaped into something quite less turdish, in fact maybe pleasant-smelling and quite pleasing to look at, whereas you, for reasons that elude me, seem to think the entire policy is unworkable and there is not one single aspect of it worth recovering. Please, correct me if I’m wrong here?

            who says we’re building to the LCD? the correct solution is to provide differentiated services to the market according to local needs, requirements and affordability, as opposed to a single, uniform solution. how’s that catering to the LCD?

            And you think we’re going to do this by providing a mandate of a minimum speed and a few subsidies? Tell me, where is this mandate going to come from? Obviously, if you want to minimise cost and market disruption, you will choose a speed which is easily obtainable and will satisfy the needs of basic users, and then leave it up to the market to account for any further demand beyond this point based upon local needs and requirements.

            On top of this you will be providing subsidies in places where these needs can’t be met easily by the entity providing them. So to work out the minimum speed, you’ll find the set of common applications of all users and decide on a bandwidth that will meet those needs, and then set that as your minimum, in the hope that users who require more than this will find the market in a condition such that they can get their requirements met.

            How is that not catering to the LCD? Where in this plan do you consider changes to the LCD over time? You can’t, unless you update the mandate every few years. Oh yes, I can see that going down well with the industry.

            policy? ferrchrissakes – if it wasn’t for political and bureaucratic obstructionism, we’d have had FTTN by now!

            and sorry, regurgitating huge blocks of some Wikipedia discussion of “public utilities”… achieves what?

            First of all, the repetition of the Wikipedia text was to point out something you missed from my first reply: broadband is already a utility. Utility doesn’t imply it is publicly owned or operated, only that it is of social interest, i.e. the government has seen fit to regulate it carefully.

            Second, you cannot ignore the bad policy decisions of the past. You can’t just free the market up and say “have at it.” That’s what we did in the first place, and that created a mess. What you have to do is ensure that any policy you make guarantees that at the completion of the policy we minimise any further government intervention. That is to say, we can leave it to its own devices for a while.

            This is part of the turd that comes with the NBN: in its current form I’m pretty sure we’ll have to come back every 20 or so years and “tweak” it to make sure it is running smoothly. But the same applies to your plan, so I’m inclined to think that while we consider the NBN a utility under regulation, it may be impossible to write legislation such that the broadband market can run indefinitely.

            Which isn’t an uncommon problem. Let us look at Australia Post: it is dying a slow and painful death at the hands of the popularity of new and emerging technologies. I have no doubt that as it focuses more and more on package delivery it may be in dire need of restructuring beyond what the current legislation it resides under will allow, similar to what has happened to Royal Mail in the UK.

            yes, “abundance” is always GOOD. but, in the real world that we live in, “abundance” costs MONEY. real world examples of corporations and other large entities installing fibre revolve around the need for “throughput” just as much as “bandwidth”

            Okay good, you’re getting it. Now add in the fact that installing this abundance ad hoc costs more than over-provisioning in the first place, because of the nature of how installing fibre works, and you understand why it makes sense to over-provision for future demand (adding abundance that may otherwise seem unnecessary, but which it is not unreasonable to expect to be consumed, or which costs trivially little to provide). You start to notice that this is exactly what the NBN, in terms of using FTTH, is doing.

            It is not unreasonable to assume, based upon usage trends, that there will be a need for 100Mbps/40Mbps services to a large majority of homes in the next 20 years, and the cost of upgrading this to 1Gbps/400Mbps is trivial. This reflects common business practice from those entities you refer to, the ones that are all concerned about money.

            The question therefore becomes: can we ensure that the mapping of the reasonably expected usage trends is a) accurate and b) cannot be provisioned for in another way? So far what I have read seems to indicate it is accurate, and in terms of FTTH vs anything else, we ultimately cannot provision that kind of service with alternative technologies because of their design limitations. However, that isn’t to say that we couldn’t use another technology as a respectable interim.

            A common suggestion to that effect is to extend the operation of the HFC networks and not replace them, as they are very close to provisioning the kinds of services FTTH can provide. Another common suggestion is to roll out FTTN with technology that can easily be upgraded to FTTH. Each of these has its flaws, but if it is ultimately determined that one of them is the best option, then so be it.

            you’re telling me the sophisticated telco market full of intelligent, knowledgeable entrepreneurs, engineers and investors needs political ignoramuses and bureaucratic clowns such as Conroy and Samuel to tell them to invest to cater to increasing demand? ROFLMAO. man, your comments are so laughably out of touch with reality. excessive political and regulatory intervention OBSTRUCTS infrastructure investment.

            Facepalm.

            You give an industry a mandate with no way to enforce it, and they will turn around and say they simply can’t meet it in some cases. What do you do for these cases? Do you fine the provider, in the hope that the cost of being fined outweighs the cost of meeting the mandate? Do you subsidise them to incentivise them to take action on the issue? Or do you simply accept that that is the way it goes?

            I think you’ll find that in this it is you who is out of touch with reality. The concept of mandating a minimum doesn’t really work in practice because it is almost impossible to enforce without government intervention.

            if we’re talking about specific intervention to incentivise investment in infrastructure which serves the most remote 10% of residents, how does “cherry-picking” become an issue?

            The flaw in your argument is right there: you are countering this issue by subsidising solutions for the remote 10% of residents. So what do you do if there are people in the other 90% that don’t get the mandated speed? The problem is so dynamic and vast that ensuring your subsidy portfolio covers all of the affected population is next to impossible.

            the lower your population density, the more expensive it is to provide universal access to infrastructure. that’s why it behooves you to select the least capital-intensive form of infrastructure provision, which is FTTN (as opposed to FTTP)

            However, the problem with that is: the lower your population density, the less effective the cheaper alternative technologies become when it comes to broadband. At some point you need to make a decision: are the social benefits of using a technology better suited to our topology worth the extra expense over a less ideal solution?

            Now, this question isn’t something I have directly addressed because, as I have mentioned about, oh, three times in the course of this post alone, I do not completely support the NBN. So please don’t patronise me by stating something obvious like “a cheaper technology is cheaper”.

            a couple of years ago, Telstra took out full-page advertisements in national newspapers declaring their preparedness to begin rolling out FTTN within weeks… if they had the necessary political and regulatory guarantees that their billion-dollar investments wouldn’t get socialised away and given for free to their competitors.

            So what part of “government intervention is required, and the industry won’t fix the problem on its own initiative (i.e. without government intervention)” does this contradict?

            Broadband Connect was the regional/rural investment subsidy programme. you’re confusing general investment in fixed-line infrastructure (Telstra’s 90% FTTN plan funded with shareholders’ equity) with a govt rural subsidy programme (targeting 10% of the most remote population).

            No I’m not actually. As I have stated earlier in this post, when you consider such a limited scope you risk leaving people out. Broadband Connect was brought about because the industry was incapable of fixing the regional problem by itself under the current regulatory framework, and now we are finding that the problems faced by other users of Broadband are hard to ignore as well.

          • look, my original contention was that the pattern of broadband consumption across households is heterogenous whilst the consumption of other common utilities such as water and electricity is relatively homogenous. hence, arguments in favour of “equivalent infrastructure access” which rely on the “utilities analogy” are totally unfounded.

            i’ve already put forward substantial evidence and arguments in support of this contention. here’s yet another line of attack:

            compare the pricing structures for broadband access with electricity and water consumption. in the case of the latter utilities, the unit cost of consumption increases step-wise as you consume more electricity and water. (there’re obvious reasons for that: e.g. higher cost of gas-fired, peak generation as opposed to coal-fired, base-load generation, etc.) in any event, even if electricity and water were sold with volume discounting (i.e. step-wise reduction in tariffs), it’s unlikely that more electricity or water would be consumed. in other words, i’m not going to take 3 showers a day just because the water consumed during the 3rd shower costs less than the 1st or 2nd shower; or, change/wash my clothes twice a day as opposed to once a day; or, water my garden thrice daily as opposed to once, etc.

            similarly, i’m not going to toast 10 slices of bread every morning instead of 2 slices just because the unit cost of electricity falls as more is consumed; or, watch 2 TV sets as opposed to one; or, dry my hair for 1hr as opposed to 5mins, etc. (this is not to say that volume discounting won’t have a marginal impact on water or electricity consumption. of course, it will. in the case of gas which has a declining tariff schedule, i’m not going to grill 10 pieces of steak with my gas BBQ instead of 2 pieces; or, i’m not going to set my thermostat to 40*C instead of 20*C, etc.) bottomline, household consumption of water and electricity is relatively homogenous because these utilities are “non-discretionary” categories of personal expenditure. we all need to shower, wash our clothes, cook and keep ourselves warm, etc. also, importantly, the “units consumed” are relatively price-inelastic.

            compare this to broadband consumption. internet access plans are sold with volume discounting. the bigger the internet (quota) plan, the cheaper in terms of dollar per GB. in sharp contrast to “staple” utilities such as water, electricity and gas:

            (i) broadband consumption is highly-discretionary in nature:

            (a) a significant section of our population choose not to have an internet connection;

            (b) core activities performed by broadband subscribers vary across a wide spectrum in terms of bandwidth and throughput requirements (e.g. from your pensioner doing the occasional emailing and light browsing to your hardcore filesharer / torrenter).

            (ii) broadband consumption is highly-sensitive to price:

            (a) by definition, all discretionary categories of spending are “price-sensitive”;

            (b) as the price of data fell over the past decade due to ULL competition and investments in local / international backhaul, average throughput has increased;

            (c) in the absence of a series of (non-price-related) “exogenous shocks” which shift the demand curve upwards, NBN Co.’s CVC charges, by making data more expensive relative to the status quo, will arrest or possibly even reverse the historical growth in data throughput.

            now, if you combine the “discretionary” and “price-sensitive” nature of broadband consumption with the fact that households face different budget constraints, you’ll arrive at the irrefutable conclusion that broadband consumption is highly heterogenous across households.

            there – that wasn’t painful, was it? it’s so much easier to argue with commonsense than against commonsense.

            *I think we can agree on that in part: the current NBN is a big fat pile of dog turd.*

            wow. i had to filter through tons of gibberish to get to that piece of gem.

            *So to work out the minimum speed, you’ll find the set of common applications of all users and decide on a bandwidth that will meet those needs, and then set that as your minimum*

            let’s not get pedantic – for average household purposes, FTTN will be MORE than sufficient. (FFS, there’s more important things in life than online gaming latency, uploading your 20MP holiday photos to Flickr and watching 1080p viral videos.) now, if you’re working from home and you need faster than FTTN, buy your own fibre connection. if you can’t afford one and you can’t make do w/o one, your business model is broken – so get another job. the last thing we need is to add “home business welfare” to the federal budget deficit.

            *in the hope that users who require more than this will find the market in a condition such that they can get their requirements met.*

            mate, you need to get out of your bubble of idealism. it’s a fact of life that infrastructure access will always be geographically-asymmetric. do you see subway or tram lines being laid outside of the major capital cities? isn’t that also “cherry-picking”? are we going to start building major international airports in the outback?

            the reason why the capital cities have more infrastructure than anywhere else is because you have large populations of people SHARING the cost of building, servicing and operating these fixed installations. to elaborate, it’s SELFISH for 1000 people in a small town to expect “$Xbln infrastructure” to be built and shared among them, when the same “$Xbln infrastructure” services 10,000 people in a capital city. in the former case, the cost per head of operating that infrastructure is 10 times larger! that’s why the regional/rural areas are relatively poor in terms of infrastructure access – and there’s nothing “inequitable” about it once you understand the underlying economics.

            *You can’t just free the market up and say “have at it.” That’s what we did in the first place, and that created a mess*

            FFS, we’ve never had a “free market” in telecommunications. access pricing for Telstra’s copper network is determined by bureaucratic fiat, and not by Telstra management. it is the underpricing of returns on copper that has discouraged further investment in the fixed network. if you were running a restaurant, would you sink millions into new renovations, fittings, furniture and equipment if the government kept forcing you to lower the prices on your menu? OF COURSE NOT. in a similar vein, if you travel around the world, you’ll notice blocks of totally run-down apartments which haven’t received any work in decades because they’re under strict rent control regulations. why spend a cent restoring your building if your tenants still pay the same rent? (in the case of Telstra, the “rent” they received kept falling.)

            *now add in the fact that installing this abundance ad hoc costs more than over-provisioning in the first place*

            BULLSHIT. you still haven’t understood the concept of opportunity cost. over-investing means you have excess sunk capital sitting fallow / unused and not generating any yield. meanwhile, the massive financing burden at compound rates of interest will sink you faster than you can pronounce “Quigley”.

            *or which costs trivially little to provide. You start to notice that this is exactly what the NBN, in terms of using FTTH, is doing.*

            are you f**king serious? the cost of FTTH is at least twice that of FTTN.

            *It is not unreasonable to assume, based upon usage trends, that there will be a need for 100Mbps/40Mbps services to a large majority of homes in the next 20 years, and the cost of upgrading this to 1Gbps/400Mbps is trivial.*

            BULLSHIT – it’s wild speculation. ever heard of the concept of diminishing marginal utility? the utility derived from any good is not a linear function of units consumed – it’s logarithmic. at 6Mbps, i can already download more shit than i have time to read/watch, etc. the extra amount of money i’m willing to pay for 25Mbps is much less than what i’m prepared to outlay to go from dial-up to 6Mbps. (so far, i’ve been talking about your average household which is pertinent because the NBN pushes fibre to everyone. of course, there’re hardcore maniacs out there who get a boner at just the thought of torrenting at 1Gbps and are willing to spend more money than your average punter.)
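
            to put toy numbers on that logarithmic claim (python; treating utility as log2 of speed is precisely the assumption being argued here, not an established model):

                import math

                # assumed model: utility ~ log2(speed), so each doubling adds equal value.
                steps = [(0.056, 6), (6, 25), (25, 100)]  # dial-up->ADSL2+, ->fast, ->fibre
                for lo, hi in steps:
                    gain = math.log2(hi) - math.log2(lo)
                    print(f"{lo} -> {hi} Mbps: marginal gain = {gain:.1f} doublings")

            under that assumption, the jump from dial-up to 6Mbps is worth more than three times the jump from 6Mbps to 25Mbps – which is the whole willingness-to-pay point.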

          • look, my original contention was that the pattern of broadband consumption across households is heterogenous whilst the consumption of other common utilities such as water and electricity is relatively homogenous. hence, arguments in favour of “equivalent infrastructure access” which rely on the “utilities analogy” are totally unfounded.

            Broadband is a Utility. You ignored it the first two times I said it. Please don’t ignore it in the third time. You can argue that it shouldn’t be a utility, but while the government sees fit to regulate it this heavily, it is.

            i’ve already put forward substantial evidence and arguments in support of this contention. here’s yet another line of attack:

            No you haven’t. Evidence implies facts, citable references, you have only presented hypothetical situations. The hypothetical situations may or may not be true, but it is very hard to verify that without some kind of evidence.

            compare the pricing structures for broadband access with electricity and water consumption. in the case of the latter utilities, the unit cost of consumption increases step-wise as you consume more electricity and water. (there’re obvious reasons for that: e.g. higher cost of gas-fired, peak generation as opposed to coal-fired, base-load generation, etc.) in any event, even if electricity and water were sold with volume discounting (i.e. step-wise reduction in tariffs), it’s unlikely that more electricity or water would be consumed. in other words, i’m not going to take 3 showers a day just because the water consumed during the 3rd shower costs less than the 1st or 2nd shower; or, change/wash my clothes twice a day as opposed to once a day; or, water my garden thrice daily as opposed to once, etc.

            similarly, i’m not going to toast 10 slices of bread every morning instead of 2 slices just because the unit cost of electricity falls as more is consumed; or, watch 2 TV sets as opposed to one; or, dry my hair for 1hr as opposed to 5mins, etc. (this is not to say that volume discounting won’t have a marginal impact on water or electricity consumption. of course, it will. in the case of gas which has a declining tariff schedule, i’m not going to grill 10 pieces of steak with my gas BBQ instead of 2 pieces; or, i’m not going to set my thermostat to 40*C instead of 20*C, etc.) bottomline, household consumption of water and electricity is relatively homogenous because these utilities are “non-discretionary” categories of personal expenditure. we all need to shower, wash our clothes, cook and keep ourselves warm, etc. also, importantly, the “units consumed” are relatively price-inelastic.

            compare this to broadband consumption. internet access plans are sold with volume discounting. the bigger the internet (quota) plan, the cheaper in terms of dollar per GB. in sharp contrast to “staple” utilities such as water, electricity and gas:

            (i) broadband consumption is highly-discretionary in nature:

            Okay, considering the fact it took you two paragraphs to try and make a point and you still haven’t gotten close to making it, because you have just repeated the same analogies, I am starting to question the validity of your argument. But I will continue to read, because I don’t want to ignore your argument just because you are incapable of making a quick and concise one.

            So water and electricity are not discretionary either? I can decide to turn my TV off and read a book just as much as I can decide not to go on Facebook this evening. All utilities are discretionary. The difference is that portions of the “traditional” utilities are essential for survival or to function in modern society. It is becoming essential to have some form of Internet access in order to function in modern society – enough for the government and opposition to consider it a utility instead of a luxury.

            (a) a significant section of our population choose not to have an internet connection;

            An ever decreasing percentage of the population.

            (b) core activities performed by broadband subscribers vary across a wide spectrum in terms of bandwidth and throughput requirements (e.g. from your pensioner doing the occasional emailing and light browsing to your hardcore filesharer / torrenter).

            So? That gives you an excuse to build only to the LCD just because… why? Do you build the electricity or the water grids to the LCD? How would you like an electricity grid that in some cases is unable to handle it if you decide to use a plasma TV set instead of an LCD one? Or a water grid that on certain streets is unable to handle it if you decide to have a long shower?

            This doesn’t mean that FTTH is the best option, but technologies like VDSL2, which are highly sensitive to geographic location, do suffer from this problem. It all comes down to what you consider a sensible capacity. I don’t consider a design of 12Mbps/1Mbps to be adequate, but yes, I do agree that possibly a design of 1000Mbps/400Mbps might be too much. But like I pointed out, the cost of upgrading from 100Mbps/40Mbps FTTH turns out to be trivial.

            (ii) broadband consumption is highly-sensitive to price:

            (a) by definition, all discretionary categories of spending are “price-sensitive”;

            So, the fact that some electricity usage is non-discretionary (this is where you are going, right?) cancels out the fact that a significant percentage of it isn’t, like television and electronics, and broadband being almost completely discretionary at present is why broadband shouldn’t be treated as a utility?

            So governments across the world that continue to encourage investment in electricity, even though the majority of extra consumption is due to discretionary usage, are doing it wrong?

            (b) as the price of data fell over the past decade due to ULL competition and investments in local / international backhaul, average throughput has increased;

            And that’s fine, but the argument here is that we are reaching the limits of the current technology (DSL) and significant investment is required. If this weren’t the case, this wouldn’t be a political issue at all, would it?

            (c) in the absence of a series of (non-price-related) “exogenous shocks” which shift the demand curve upwards, NBN Co.’s CVC charges, by making data more expensive relative to the status quo, will arrest or possibly even reverse the historical growth in data throughput.

            Congratulations – you have noted one of the primary flaws of the NBN, which advocates of the proposal have been trying to get the government to address. Simon Hackett wants to restructure the charges; I personally think that some of the money should be written off and directly sunk, rather than there being an expected return. After all, the current Coalition policy consists of sinking about $6 billion into broadband. How much would the CVC and AVC charges drop if we sunk $6 billion of the supposed $50 billion price tag into the project?
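
            To put rough numbers on that last question – a back-of-envelope sketch only, assuming (purely for illustration) that access charges scale linearly with the return-bearing capital base and a hypothetical 7% target rate of return; this is not NBN Co.’s actual financial model:

```python
# Back-of-envelope sketch only: how much might access charges fall if part
# of the build cost were written off (sunk) rather than earning a return?
# Hypothetical assumptions: charges scale linearly with the return-bearing
# capital base, and the target rate of return is 7%.

capital_base = 50e9   # the "supposed $50 billion price tag"
written_off = 6e9     # capital sunk with no expected return
target_return = 0.07  # hypothetical annual rate of return

revenue_before = capital_base * target_return
revenue_after = (capital_base - written_off) * target_return

drop = 1 - revenue_after / revenue_before
print(f"required annual return before: ${revenue_before / 1e9:.2f}bn")
print(f"required annual return after:  ${revenue_after / 1e9:.2f}bn")
print(f"implied drop in charges: {drop:.0%}")  # 12% under these assumptions
```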

            now, if you combine the “discretionary” and “price-sensitive” nature of broadband consumption with the fact that households face different budget constraints, you’ll arrive at the irrefutable conclusion that broadband consumption is highly heterogeneous across households.

            Finally, the point. Have I refuted this? No. I haven’t disputed that consumption is heterogeneous; I have disputed that the variation is significant enough to justify provisioning only a minimal (12Mbps) amount.
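
            A minimal sketch of the quoted argument, using hypothetical plan tiers (not any real ISP’s price list), shows how volume discounting plus differing budgets produces heterogeneous consumption:

```python
# Sketch of the quoted argument: volume-discounted per-GB pricing plus
# different household budgets imply very different consumption levels.
# The plan tiers below are hypothetical, not any real ISP's price list.

plans = [  # (monthly quota in GB, monthly price in $)
    (5, 30.0),
    (50, 50.0),
    (500, 90.0),
]

for quota_gb, price in plans:
    print(f"{quota_gb:>3}GB at ${price:.0f}/month = ${price / quota_gb:.2f}/GB")

# 5GB -> $6.00/GB, 50GB -> $1.00/GB, 500GB -> $0.18/GB: the bigger the plan,
# the cheaper per GB, so households facing different budget constraints end
# up consuming very different volumes.
```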

            there – that wasn’t so painful, was it? it’s so much easier to argue with common sense than against it.

            Actually, it was completely painful, because you took so long to make the point, and the point wasn’t really anything new. And you still haven’t noticed that I was only asking you to cite the “40% from 5% of users” figure – which you have yet to do.

            wow. i had to filter through tons of gibberish to get to that gem.

            Did you notice the part where I said I think it can be shaped into a far more workable policy? Or the part where I suggested using HFC as an interim alternative? Or basically my entire argument, which goes something like this: I don’t fully support the NBN, but your suggestions for a solution probably won’t actually work, and here is why.

            let’s not get pedantic – for average household purposes, FTTN will be MORE than sufficient. (FFS, there are more important things in life than online gaming latency, uploading your 20MP holiday photos to Flickr and watching 1080p viral videos.) now, if you’re working from home and you need faster than FTTN, buy your own fibre connection. if you can’t afford one and you can’t make do w/o one, your business model is broken – so get another job. the last thing we need is to add “home business welfare” to the federal budget deficit.

            Let’s not get pedantic. In most cases those bolts will hold. Why should we bother considering which bridge design we’re using, to make sure we don’t need a better bolt? It’ll be fine.

            mate, you need to get out of your bubble of idealism. it’s a fact of life that infrastructure access will always be geographically asymmetric. do you see subway or tram lines being laid outside the major capital cities? isn’t that also “cherry-picking”? are we going to start building major international airports in the outback?

            Okay, right, so the assumption here is that FTTH is somehow a technology that isn’t geographically asymmetric? Transport infrastructure is built so that on the fringes of the city the number of routes is minimal, because the number of places to service is low, while in the centre of the city it is busy, because there are a lot of places to service.

            FTTH is still built in a point-and-spur model, with higher-density routes between spurs, getting denser and denser – exactly the same as telephone networks, both mobile and fixed. With telephone technology the result is the same no matter where you are within the footprint: you pick up the phone, you can make a call. In public transport, you wait at a stop or station and a train, tram or bus comes.

            the reason the capital cities have more infrastructure than anywhere else is that you have large populations of people SHARING the cost of building, servicing and operating these fixed installations. to elaborate: it’s SELFISH for 1,000 people in a small town to expect “$Xbln infrastructure” to be built and shared among them, when the same “$Xbln infrastructure” services 10,000 people in a capital city. in the former case, the cost per head of operating that infrastructure is 10 times larger! that’s why the regional/rural areas are relatively poor in terms of infrastructure access – and there’s nothing “inequitable” about it once you understand the underlying economics.

            Which is why the NBN, as an example, puts the final 7% on cheaper alternative technologies. The billions of dollars the NBN costs aren’t due to all the small towns it services; they’re due to the cost of running fibre down every street. Sure, the per-unit cost of servicing a denser population will be lower, but that is exactly why the NBN isn’t trying to deliver FTTH to every single house in Australia.
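
            For what it’s worth, the cost-per-head arithmetic in the quoted comment works out as follows, substituting a hypothetical $1bn for the unspecified “$Xbln”:

```python
# Illustration of the cost-per-head arithmetic quoted above, using a
# hypothetical $1bn build cost in place of the unspecified "$Xbln".

build_cost = 1e9  # hypothetical fixed infrastructure cost ($)

for label, population in [("small town  ", 1_000), ("capital city", 10_000)]:
    print(f"{label}: ${build_cost / population:,.0f} per head")

# small town  : $1,000,000 per head
# capital city: $100,000 per head -> 10x cheaper per head, as the comment argues
```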

            FFS, we’ve never had a “free market” in telecommunications. access pricing for Telstra’s copper network is determined by bureaucratic fiat, not by Telstra management. it is the underpricing of returns on copper that has discouraged further investment in the fixed network. if you were running a restaurant, would you sink millions into new renovations, fittings, furniture and equipment if the government kept forcing you to lower the prices on your menu? OF COURSE NOT. in a similar vein, if you travel around the world, you’ll notice blocks of totally run-down apartments which haven’t received any work in decades because they’re under strict rent-control regulations. why spend a cent restoring your building if your tenants still pay the same rent? (in the case of Telstra, the “rent” it received kept falling.)

            Well, that’s what happens when you have a monopoly. Would you invest in upgrading that apartment building if you were the only landlord in town? No, you wouldn’t, because you’d have absolutely no incentive to upgrade.

            BULLSHIT. you still haven’t understood the concept of opportunity cost. over-investing means you have excess sunk capital sitting fallow / unused and not generating any yield. meanwhile, the massive financing burden at compound rates of interest will sink you faster than you can pronounce “Quigley”.

            No, I understand the concept of opportunity cost perfectly. So do the companies who provision fibre for business purposes. And yet they still end up “over-provisioning”, because provisioning to current requirements and then upgrading ad hoc costs more than building in headroom up front. I’m asserting that this applies to the broadband market as a whole.

            are you f**king serious? the cost of FTTH is at least twice that of FTTN.

            Did you even read my post? The trivial upgrade cost I was referring to was the upgrade from 100Mbps/40Mbps to 1000Mbps/400Mbps. I even stated this just after the statement you quoted.

            BULLSHIT – it’s wild speculation. ever heard of the concept of diminishing marginal utility? the utility derived from any good is not a linear function of units consumed – it’s logarithmic. at 6Mbps, i can already download more shit than i have time to read/watch, etc. the extra amount of money i’m willing to pay for 25Mbps is much less than what i’m prepared to outlay to go from dial-up to 6Mbps. (so far, i’ve been talking about your average household which is pertinent because the NBN pushes fibre to everyone. of course, there’re hardcore maniacs out there who get a boner at just the thought of torrenting at 1Gbps and are willing to spend more money than your average punter.)

            In the same way that it is wild speculation to assume users will only ever need 12Mbps?
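
            The “logarithmic utility” claim above can be made concrete with a toy model – assuming, purely for illustration, utility u(x) = log(x) of connection speed x, not any measured demand curve:

```python
# Toy illustration of the diminishing-marginal-utility claim above, assuming
# (purely for illustration) utility u(x) = log(x) of speed x in Mbps. Under
# this model, equal *ratios* of speed add equal utility, so each successive
# upgrade adds less utility than the last.

from math import log

def utility(mbps: float) -> float:
    return log(mbps)

dialup, adsl, nbn_entry = 0.056, 6.0, 25.0  # 56k dial-up, 6Mbps DSL, 25Mbps

gain_dialup_to_adsl = utility(adsl) - utility(dialup)  # ~4.67
gain_adsl_to_25 = utility(nbn_entry) - utility(adsl)   # ~1.43

print(f"dial-up -> 6Mbps utility gain: {gain_dialup_to_adsl:.2f}")
print(f"6Mbps -> 25Mbps utility gain:  {gain_adsl_to_25:.2f}")
# The second jump adds roughly 3x less utility, matching the commenter's
# claim that willingness to pay for 25Mbps over 6Mbps is far smaller than
# for the original jump from dial-up.
```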

  24. It seems like we’d need quite a few base stations to ensure evenly distributed Wi-Fi coverage – even more than are currently used for mobile phone coverage. This also introduces more RF radiation for everyone in the area. Sure, that is currently ‘believed’ to be a minor issue, but only because mobile usage is intermittent – when you’re at home swimming in it…
    The idea of using existing power poles is fine, except most new estates have underground power delivery, so there’s that idea knocked on the head.
    And wireless is a shared resource: if you want your video on demand, it is competing with all the other traffic on the network.
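
    A crude sketch of that shared-resource point, assuming (hypothetically) that a cell’s 21Mbps peak capacity is split evenly among active users – real radio scheduling is far more complex:

```python
# Crude illustration of the "wireless is a shared resource" point above.
# Assumes (hypothetically) that a cell's 21Mbps peak capacity is split
# evenly among simultaneously active users; real scheduling is more complex.

cell_capacity_mbps = 21.0  # headline speed of the 3G device discussed earlier

for active_users in (1, 5, 20, 50):
    per_user = cell_capacity_mbps / active_users
    print(f"{active_users:>2} active users -> ~{per_user:.1f}Mbps each")

# 50 active users sharing one cell leaves well under 1Mbps each - not enough
# for video on demand, which is the commenter's point.
```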

Comments are closed.