Global sites still slow, says NBN early adopter


You might recall that earlier this week much of the internet exploded with jealousy after PCRange chief executive Raaj Menon revealed he had actually moved suburbs in Adelaide to get National Broadband Network fibre earlier. Well, today Menon has posted a bit of a reality check for those who expect every site and every form of internet access to be dramatically faster under the NBN. He writes:

“Some words of advice for newcomers to fibre. Don’t expect every site to appear instantly. It just isn’t going to happen. The reason is that many sites load up very slowly because they have big databases, especially things like WordPress, where some sites load up a bunch of plugins and it slows their site down heavily.

There are also several other factors like network congestion (this will be one of the big issues for a number of people), your ISP’s international connectivity, how that traffic is routed and so on. Even if you have a 1Gbps connection, these sites aren’t going to load any faster than on an ADSL connection, and this has happened to me as well. However, sites that generally load fast on an ADSL connection are even quicker on this fibre connection.”

Menon’s point is a good one — Australia’s international internet links still aren’t the greatest, and much will depend on the quality of your ISP’s links, as well as the general load of internet traffic to particular sites on the day, their database configuration, how many network hops there are between your PC and their server, and so on. In other words, the NBN fibre won’t be a perfect nirvana, simply because the internet, by its nature, is only as fast as its weakest link. However, as Menon notes, he’s still more than happy with his new fibre connection ;) Especially when connecting to sites in Australia.

Image credit: NASA, Creative Commons

42 COMMENTS

  1. Yeah, but if those international sites were running wireless it would be a completely different story! *sarcasm*

  2. The need for extra international bandwidth is a problem that has existed in Australia since the internet first arrived here in 1990, via the Melbourne University satellite link.

    There’s always been an ever increasing need for that bandwidth – (sound familiar?) – and we’ve always only ever been running just ahead of the wave, particularly for international traffic.

    (Hands up who remembers the undersea earthquake off New Zealand in 1994 that broke the southern-most cable of what became the Southern Cross Cable network, forcing all international traffic back out the UniMelb satellite link for about a month?)

    PPC-1 (PIPE Networks) has reserve capacity, as does Southern Cross, as do a number of other cables such as the Telstra Endeavour Cable. PPC-2 is on the drawing board, and there are a couple of other plans for completely new cables.

    All of this will need to be lit up over time, regardless of whether we have the NBN or not. Bandwidth needs are growing, and show no signs of slowing, despite what some will tell you.

    • We have heaps of international capacity.
      Pacific Fibre is building, Nextgen is building, and SXC has just announced an upgrade to 100Gbps per wavelength, with an extension of the cable’s working life.

    • Point 1 – We have heaps of international capacity. SXC has just announced a successful trial of 100Gbps per wavelength. Pacific Fibre is building, Nextgen is building.
      Any congestion on capacity comes down to how much the ISP has chosen to purchase from its wholesale supplier. The price has plummeted over the last 24 months.

      BUT, more importantly:

      Point 2 – The default TCP receive window means that accessing content over high-latency links (i.e. data from the US) will always be SLOW! If you use the default TCP window size of 64K on a 200ms RTT journey, the best throughput you will get is roughly 2.6Mbps.

      http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
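The arithmetic behind that window-size ceiling is simple enough to sketch in a few lines of Python (an illustration added here, not from the comment itself; the exact figure depends on whether “64K” is read as 64,000 or 65,536 bytes):

```python
# Window-limited TCP throughput: with a receive window of W bytes and a
# round trip of RTT seconds, at most W bytes can be "in flight" per RTT,
# so a single connection is capped at W / RTT no matter how fat the pipe.

def tcp_max_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-connection TCP throughput, in Mbit/s."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# A 64 KiB default window over a 200 ms round trip to the US:
print(round(tcp_max_throughput_mbps(64 * 1024, 0.200), 1))  # ~2.6 Mbps
```

Note that the link speed never appears in the formula: on a 200ms path the window, not the pipe, is the bottleneck.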

      • Point 2 – The default TCP receive window means that accessing content over high-latency links (i.e. data from the US) will always be SLOW! If you use the default TCP window size of 64K on a 200ms RTT journey, the best throughput you will get is roughly 2.6Mbps.

        Maybe some consideration should be given to using RUDP rather than TCP for HTTP?

        • There’s nothing whatsoever wrong with TCP.

          Linux has supported TCP window scaling since 2004; Apple also supports it, and even Microsoft supports it (though it was only turned on by default from Windows Vista, and Vista is slow and crap for all sorts of reasons, so you pretty much need to go up to Windows 7 if you want fast TCP).

          I’ll also point out that the TCP window is only a limitation if you are running a single TCP connection for a download. People long ago discovered that multiple parallel sessions are faster for downloads, and pretty much all p2p applications already do this, as do web browsers (each image loads in a separate stream), etc.

          • There’s nothing whatsoever wrong with TCP.

            Never said there was, but as per one of your other replies above…

            Typical long-haul fibre networks offer a service level of approximately one bit error in 10 billion bits

            Network reliability has long advanced past the point of needing to run X.25 everywhere. My point was that if (under normal, non-fault circumstances) the BER is extremely low, do we need as much reliability at layer 4 as TCP provides?

            If the RUDP standard were ratified, would implementing it as a preferred standard for commonly used internet applications (HTTP, SMTP, POP3, etc.) provide better utilisation of bandwidth? And of course, if errors hit a certain threshold it could always fall back to the more reliable TCP.

            Really, you could think of this as being similar to the NBN, the only difference being that the NBN provides a bandwidth increase at layer 1, while RUDP would provide better throughput at layer 4.

    • It’s not just the international capacity.

      There are so many factors involved it’s hard to count them. Many sites run on average hardware with average connectivity, and sometimes hugely bloated software.
      None of these sites will be faster, including Australian sites connected to the NBN. You may be surprised to learn it’s the majority of sites out there.

      A simple test: download a free bandwidth metering tool (google them, but choose carefully) and then watch the majority of your traffic. If it rarely hits your current bandwidth speed, it never will under the NBN. Personally, I only peak when I download large files from a small percentage of sites.
      It shows a couple of things: many sites load pretty fast because they are bandwidth-optimised already, and many, many sites are slow regardless of the size of YOUR pipe. To get the full benefit you need to download large files from fast sites.

      BUT

      Fast sites become slow when lots of people with big pipes are downloading from them.

      It’s a huge expense just to have to wait for the world to catch up, and then perhaps we may not be able to afford the next step because we are still paying this one off.

      DC

    • Australia had a net connection before the 1990s: a 64Kbit ISDN link that handled all of Australia’s data, which the unis upgraded in the ’80s to 128Kbit ISDN. But as the ’90s approached it couldn’t handle all the newsgroups and email sent around the world (the World Wide Web didn’t exist back then), so it was sold to Telstra, because they could actually upgrade the link properly.

      Once that happened people could really make some use of the net, but even back then many uni students didn’t want Telstra to own the link out to the world.

  3. Our international links are just fine, with plenty of spare capacity that can be lit up as demand requires.

    What really matters is how much of that bandwidth your ISP bothers to purchase.

    On top of that, upgrading the access network to fibre can’t beat physics – you’re still looking at a ~200ms round trip to the States. “It’s the Latency, Stupid” [1] is as true now as it was 15 years ago.

    [1] http://rescomp.stanford.edu/~cheshire/rants/Latency.html
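That latency floor is pure physics, and easy to check with a back-of-envelope calculation (the ~12,000km route length and the two-thirds-of-c speed for light in fibre are assumed figures for illustration, not numbers from the thread):

```python
# Light in glass travels at roughly 2/3 of c, so cable distance alone
# sets a minimum round-trip time that no access-network upgrade removes.

SPEED_OF_LIGHT_KM_S = 299_792
FIBRE_VELOCITY_FACTOR = 0.66  # approximate, for a refractive index of ~1.5

def min_rtt_ms(route_km):
    """Minimum round-trip time in ms over a fibre path of route_km (one way)."""
    one_way_s = route_km / (SPEED_OF_LIGHT_KM_S * FIBRE_VELOCITY_FACTOR)
    return 2 * one_way_s * 1000

# Assuming a ~12,000 km cable route from Australia's east coast to LA:
print(round(min_rtt_ms(12_000)))  # ~121 ms, before any routing or queuing
```

Real-world RTTs come out higher once router hops and queuing are added, which is why ~200ms to the US is the figure quoted throughout this thread.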

    • 200ms is sweet bugger all for websites… (especially for a round trip) it comes down to what the site can do

      200ms for games is a little diff

      • I beg to differ. 200ms makes a significant difference to apparent web browsing speed, and an unbelievable amount of difference to large file transfers.

        The number of assets a modern webpage includes combined with issues like TCP slow start can mean many multiple round trip times are required before a page renders.

        For large transfers (like the ones performed by various speed test tools) a single packet drop on a transfer over 200ms latency will make it very hard to get a decent speed.
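The effect of loss on a long-latency transfer can be approximated with the standard Mathis formula, throughput ≈ (MSS/RTT)·(C/√p). This is a general rule of thumb, not something from the comment; the segment size and loss rate below are illustrative assumptions:

```python
import math

# Mathis et al. approximation for loss-limited TCP throughput: even a
# tiny random loss rate caps a single stream hard when the RTT is large.

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate steady-state TCP throughput (Mbit/s) under random loss."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate)) / 1_000_000

# 1460-byte segments, 200 ms RTT, one packet in 10,000 lost:
print(round(mathis_throughput_mbps(1460, 0.200, 1e-4), 1))  # ~7.1 Mbps
```

Because RTT sits in the denominator, the same loss rate that barely dents a 20ms domestic transfer cripples a 200ms international one.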

        • I was about to regurgitate my uni networking notes before you posted this

          Maybe if we were still using a circuit-switched internet instead of a packet-based internet it would make a difference in viewing websites; however, that is not the case.

          TCP is the protocol used by HTTP, and for TCP to guarantee data transfer over a packet-based network that risks dropping packets, the RTT (round trip time) starts becoming ridiculous over long distances.

        • The round trip doesn’t mean much.

          The problem is that most web servers will not be allowed to send 100 gigabits of data to your connection.

          Some web servers will only send you 5-10Kbit and that’s it, because they are on a 1-megabit connection and limit the speed of data sent to you so someone else can get data as well. So you might own a 100-gigabit connection, but it’ll be nothing more than a dial-up connection, as it’s purely up to what the other end sends you.

          That is why many experts say 100-gigabit connections are a waste of time: you simply won’t get the return speed. Fast connections are really only of use for torrents and really large file transfers, where you can split the file down to increase the number of connections and get the file quicker.

        • http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm

          Packet loss can often trigger TCP to go into congestion avoidance mode. It will do this regardless of latency, but if you select a suitable algorithm you can get good results even with a bit of packet loss. Typical long-haul fibre networks offer a service level of approximately one bit error in 10 billion bits, so packet loss should not be a problem unless something is wrong on your local link.

      • RTT is Round Trip Time – so while it might be 60ms for light to go MEL to LA, it’s 120ms for the round trip there and back.

        • I seem to recall when the Southern Cross Cable was being laid advertising/offering 70ms one way latency from Sydney to LA – so about 140ms return, which seems pretty good!

  4. It’s more that the server infrastructure behind many websites isn’t very efficient, and with so many elements on webpages nowadays, the overhead of sending a request for each one and waiting for a reply means slowness.

    To some extent you can get around this with pipelining, but the slowness at the server end is more due to processing time than bandwidth. Google created SPDY as an HTTP replacement in Chrome (uh, you are using Chrome, aren’t you?) to try to overcome some of the latency issues inherent in HTTP, but it doesn’t seem to have “taken off”.

    Sometimes it’s the bandwidth of the transit that’s congested. That’s a problem too. But overall, it’s better to have a faster link anywhere in the chain for the “chance” that there’s a clear route and a fast server at the other end.

    I think if you download from big companies with CDNs, such as MS, Apple, etc, you will find that they’re capable of saturating an ADSL connection and will have no qualms about going faster on fibre. So things like patches and OS updates will be faster. Big sites should probably put more time into making their pages more efficient: reduce the number of elements, use some other protocol, use a CDN, or a combination of these techniques.

    • A typical web browser runs about 10 simultaneous HTTP sessions for a complex page. That’s enough to saturate a typical link, but not so many that it thrashes system buffer resources.
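A rough sketch of why those parallel sessions help: each window-limited stream is capped at W/RTT, so N streams together scale roughly linearly until the link itself saturates (the 64 KiB window and 200 ms RTT below are the same assumed figures used elsewhere in this thread):

```python
# N parallel TCP streams, each capped by its own receive window, give
# roughly N times the single-stream ceiling on a long-RTT path.

def aggregate_mbps(n_streams, window_bytes, rtt_s):
    """Combined ceiling of n window-limited TCP streams, in Mbit/s."""
    return n_streams * window_bytes * 8 / rtt_s / 1_000_000

# Ten streams (a typical browser's worth) with 64 KiB windows at 200 ms:
print(round(aggregate_mbps(10, 64 * 1024, 0.200), 1))  # ~26.2 Mbps
```

This is the same trick download accelerators and p2p clients use: more streams, more windows in flight per round trip.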

  5. I’ve been thinking it would be good for ISPs using the NBN to offer unlimited downloads (and uploads) within Australia, and just have limits when accessing offshore content.
    This article emphasises that idea for me – the expense for an ISP is overseas links.

    It’d allow users on the NBN to do unlimited working from home, video calls, off-site backup, IPTV, etc, as long as it’s within Australia, and not have to worry about download limits.

    Might encourage more services to be hosted in Aus also.

    Is this a good idea? And is it technically possible?

    • ISPs have a lot of costs beyond international transit.

      Don’t forget that for each customer, NBN Co charges the ISP for two things – a flat access fee and a volume-based fee. That means the more the customer downloads (regardless of where the data is from), the more it costs the ISP.

      Add to that support costs, data centre costs, backhaul cost and staff wages. Unlimited local traffic is incredibly unlikely.

      • What will be really interesting, if the NBN is ever completed to a significant degree (unlikely), is the cost that will be borne by all the ISPs to upgrade their Cisco routers to handle the throughput brought by the high speeds of fibre.

        However, even this is probably unlikely, due to the fact that CVC charges put an artificial constraint on how much bandwidth ISPs are going to expect.

        It’s really not going to be any different from today’s situation, where you have a few people on 100Mbit links and the rest on slower links. The only difference is that today’s situation exists for physical infrastructure reasons; under the NBN it will be for financial reasons (ISPs will bear massive CVC costs if they let their customers go wild with downloading).

        It’s going from one extreme to the other, which is not helpful.

    • The NBN will actually make unlimited plans (or very high data limits) less likely, because the comparative charges for data transit from NBN Co to the RSPs are around 10-20x current levels:
      http://www.businessspectator.com.au/bs.nsf/Article/NBNinternodeSimon-Hackettbroadband-Telstra-ADSL2-pd20110518-GXVWQ

      This is known as the CVC charge, and NBN requires it in order to pay off the massive capital required to construct the network

      Today’s data limits are only possible due to ULL/LSS, which is basically Telstra offering a flat $16 charge for any ISP to use the last-mile copper. There is no charge based on how much the ISPs download, and the only thing ISPs need to install is the DSLAM and related equipment, much of which is initial capital that gets written off in a few years (most DSLAMs deployed by ISPs today have already been written off).

      • This chestnut has been answered many times, but here we go again. Unlimited plans already cost ISPs real money, but they are cross-subsidised by the 90% of users who under-utilise their data allocation.

        CVCs are not per-user data charges, but aggregated. A few months back the figure was that the average ADSL plan included 50 GB of data, and the average data consumed was 7GB. Multiply this by ten and you’ll probably have the situation in a couple of years. The few who leech massive data from RSPs which offer unlimited plans will continue to be cross-subsidised by the low-usage majority. No big deal.

        And from the day the CVC charge was first announced, it was also stated that it was pitched at a price point that would permit it to be reduced over time as upstream data costs come down, as if Moore’s Law hadn’t already taught us to expect this.

        Anyway, it’s great to see what Raaj is reporting, and I thoroughly recommend that all Delimiter readers subscribe to his informative blog.

        • *This chestnut has been answered many times, but here we go again. Unlimited plans already cost ISPs real money, but they are cross-subsidised by the 90% of users who under-utilise their data allocation.*

          Yes, that’s how it works currently, with the current prices that ISPs have to pay for data charges, which are close to nothing.

          *CVCs are not per-user data charges, but aggregated.*
          Your point being?

          *A few months back the figure was that the average ADSL plan included 50 GB of data, and the average data consumed was 7GB. Multiply this by ten and you’ll probably have the situation in a couple of years. The few who leech massive data from RSPs which offer unlimited plans will continue to be cross-subsidised by the low-usage majority. No big deal.*
          Unfortunately, and as shown on Whirlpool where NBNCo unsuccessfully pulled off a massive publicity stunt, those figures are based on current pricing, not on proposed NBN pricing. With proposed NBN pricing, the price of internet plans could easily double or triple (as Scott Bevan attempted to show, but then for some reason got banned).

          The figures you are showing are for current internet plans, which are predominantly ULL/LSS, which don’t carry any charges for data transfer; this is the only reason unlimited is possible from companies like TPG.

          I’m sorry, but when ISPs claim that the NBN will severely limit the prospect of unlimited internet, I will listen to them, and not to you. Simon Hackett knows what he is talking about; you don’t.

          *When a web site is intrinsically fast, it is currently constrained by ADSL, whose median speed across Australia is 2 Mbps, and a third of Aussies don’t even have ADSL. As more sites are moved to virtual servers and “the cloud”, they become faster, and NBN fibre removes the bottleneck that slows the user experience of them.*
          Fibre wouldn’t make any difference in viewing conventional websites; I have no idea what you are getting at here.

          Unsurprisingly, you didn’t address any of the CVC issues, and have conveniently ignored half of the important details to suit your argument.

        • *This chestnut has been answered many times, but here we go again. Unlimited plans already cost ISPs real money, but they are cross-subsidised by the 90% of users who under-utilise their data allocation.*

          internet plans range from low data to high data, with different pricing. the high data plans carry the highest margins. why? for each price point, there are variable and fixed costs involved in providing the service. the high data plans, with higher price points, generate higher margins because of the leverage effect of higher revenue over fixed costs.

          the cross-subsidisation happens across two dimensions. firstly, within each pricing point, subscribers who download below the average capacity provisioned subsidise subscribers who download above it. so, for an “unlimited” plan with an average download of 500GB per user, people who download less than 500GB are subsidising people who download more than 500GB, within that price point. secondly, given that high data plans earn higher margins, subscribers of high data plans, as a group, are subsidising subscribers of low data plans.

          the impact of the CVC charge is to introduce another layer of variable costs to these internet plans. a “cost wedge”, if you like. for each pricing point, to preserve the same margins, ISPs either have to raise prices or lower data allowances. the NBN unambiguously makes internet data more expensive than the present status quo, because there is no analogous CVC charge on copper.

          *And from the day the CVC charge was first announced, it was also stated that it was pitched at a price point that would permit it to be reduced over time as upstream data costs come down, as if Moore’s Law hadn’t already taught us to expect this.*

          the falling CVC charge is often misunderstood. the fall in the CVC charge is conditioned on greater expenditure on data. the fall in the CVC charge from $20/Mbit to $8.75/Mbit is analogous to migrating from a low data plan to a high data plan. the cost in terms of $/GB falls but you’re spending more money to download more. to argue that data is getting cheaper because of the falling CVC charge is like saying internet is getting cheaper because you switch from a cheap low quota plan (high $/GB) to a more expensive high quota plan (low $/GB).

          ……………………………………………………
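The “cost wedge” can be made concrete with a toy calculation. The $20/Mbps CVC rate is the announced figure discussed in this thread; the 50GB quota and the 3x busy-hour provisioning factor are purely illustrative assumptions:

```python
# Toy model: spread a monthly quota flat across the month to get an
# average Mbps, then provision CVC above that average for busy-hour peaks.

SECONDS_PER_MONTH = 30 * 24 * 3600

def cvc_cost_per_user(quota_gb, cvc_dollars_per_mbps, peak_factor=3.0):
    """Monthly CVC cost per user in dollars (peak_factor is an assumption)."""
    avg_mbps = quota_gb * 8000 / SECONDS_PER_MONTH  # GB -> megabits, flat
    return avg_mbps * peak_factor * cvc_dollars_per_mbps

# A 50 GB quota at the announced $20/Mbps CVC charge:
print(round(cvc_cost_per_user(50, 20.0), 2))  # ~$9.26 of CVC per month
```

Under these assumptions the wedge scales linearly with quota, which is exactly why a genuinely unlimited plan exposes the RSP to unbounded variable cost in a way ULL-based plans do not.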

        • correction:

          so, for an “unlimited” plan with average download of 500GB per user, people who download less than 500GB are subsidising people who download more than 500GB, within that price point.

          …………………………………………………………..
          Renai’s script doesn’t like symbols.

  6. Gough, you got that right. Sites like CNN, Apple etc all have their content on CDNs, and with Akamai servers cached locally, when downloading say a big update like OS X Lion I can pretty much max the bandwidth out. I have also put my blog up on a CDN now, and you will see more and more sites on CDNs; I think it will get very popular, especially with the faster speeds.

  7. Raaj’s reality check covered more than this. The NBN is not about one user in a house having magical instant response from every remote web site, viewed one at a time.

    Fibre provides a high bandwidth, low latency connection to the outside world, allowing many simultaneous services to pass through, without other users in the premises even noticing.

    While one user might be enjoying low-latency gaming (I can’t think why you’d bother, but horses for courses), someone else’s laptop might be getting a huge Windows update or a large PDF, while a third resident or employee is uploading a video clip from their smartphone over the Wi-Fi. Each would feel as though they were the only user.

    When a web site is intrinsically fast, it is currently constrained by ADSL, whose median speed across Australia is 2 Mbps, and a third of Aussies don’t even have ADSL. As more sites are moved to virtual servers and “the cloud”, they become faster, and NBN fibre removes the bottleneck that slows the user experience of them.

    Raaj’s many other points include that uploads are REALLY fast, and that his remote operation of a US computer was free of any indication it was not in the same room. Very impressive. Check out his blog, folks, for all the good dirt.

  8. Australia’s distance from much of Europe and the USA means that, because of an inherent feature of the TCP protocol, faster speeds simply will not happen even if international link capacity were greater. It all comes down to the fact that most servers will only allow a limited number of unacknowledged packets to exist. Most servers consider a latency of over 200ms pathological and will treat such connections appallingly. One solution is to implement a TCP stack especially designed for high-latency connections at the server end. But unless you own the server in the USA or Europe, that simply is not going to happen.

    Of course, our wise government knows all about this because they have studied everything very carefully. That is why they expect the entire population to be so very grateful for such a fantastic fast internet connection which reflects the government’s incredible foresight and knowledge of all things technological.

    • And with the limitations you talk about, do you think countries like Singapore, which already have 1Gbps fibre connections, are not wise enough? Fibre is not just about downloading and surfing. You can still get good upload speeds, and that helps in many areas like conferencing. Even downloading is fast as long as you open up multiple connections; I can achieve 9.5 MB/s. So all is not bad, and there is nothing wrong with the Government’s foresight IMO. Maybe for your uses they may not have much foresight.

      • And with the limitations you talk about do you think countries like Singapore that already have 1Gbps fibre connections are not wise enough?
        Physical connections are only one part of the equation

        You can still get good upload speeds and that helps in many areas like conferencing and other uses.
        What’s your point though? I mean, if the government spent money giving everyone a brand new car, I am sure some people would find a use for it.

        Even downloading is fast as long as you open up multiple connections.
        Which you can only do with certain types of content

        So all is not bad and there is nothing wrong with the Government’s foresight IMO.
        If you only focus on one part of it, then yes sure. I mean, you can blow anything out of proportion and call that “foresight” if you wish

        Maybe for your uses they may not have much foresight.
        Making arguments based on anecdotal evidence gets us nowhere. From real-world examples (such as South Korea and Japan), the predominant use for such high speeds has been entertainment, and the only reason those countries have such high speeds is financial (economies of scale together with high-density populations).

        I mean, using your logic, I could just as easily claim that most Australian users don’t have much “foresight”.

          • I mean, at some stage we can’t hide behind a veil and say that 10Mbit on copper is going to be enough 20 years from now just because websites are not feeding enough to people. Do you think 20 years from now all websites will have slow links? Times change, technology changes. We have to be prepared for it. No point saying we will do it when the need arises, when by that time the cost will have blown up 10-fold. Sure, if you look at it now, the uses may not be that much.

            • I mean at some stage though we can’t hide behind a veil and say that 10Mbit on copper is going to be enough 20 years from now because websites are not feeding enough to people.

            It’s a question of magnitude: underprovision and you have problems; overprovision and you also have problems. The NBN is the latter (i.e. giving fibre to everyone).

            Do you think 20 years from now all websites will be having slow links?
            Unless the HTTP protocol changes to something that’s not TCP, this problem will stay for the foreseeable future.

            Times change technology changes. We have to be prepared for it
            This is just rhetoric.

            On another note, remember the .com boom? Yeah…

            Doing stuff because you are banking on some massive change happening is a risky business.

            No point saying we will do it when the need arises and by that time that cost will blow up 10 fold.
            Actually, if anything, the opposite of what you are saying will happen. Right now the costs are enormous because:
            1. We have to pay Telstra 13 billion
            2. We have to pay Optus 900 million
            3. Fibre costs are at a higher price now than they will be in the future
            4. *PON costs are at a higher price now than they will be in the future
            5. Router costs are at a higher price now than they will be in the future

            If anything, in the future doing FTTH will be much, much cheaper than now, especially once other countries start installing FTTH on a large scale and new technology improvements in deployment bring the price of FTTH installations down.

            You do realise, for example, that the 36 billion dollars of capital cost is on government debt, and that NBNCo has to pay for all of it by charging its customers? Maybe you should do some research on the CVC costs that we will all eventually have to pay, because that’s the cost of rushing forward like this.

          • And what you are saying about prices dropping is not rhetoric? Sure, some hardware costs will come down, but guess what mate, you haven’t taken any labour or build costs into consideration, and you think those will drop too? Dream on. Costs will double if not triple 20 years from now. You did correctly point out that hardware costs will drop, but not significantly, and that is insignificant compared to the massive labour costs and other infrastructure costs.

          • you havent taken any labour or build costs into consideration and you think that will drop also?
            Labour costs in Australia are currently at a ridiculous high, due to the mining boom and high demand (the mining sector is pinching all the labour from other sectors).

            Of course that may or may not change (more likely to drop when the mining boom finishes); however, hardware costs will almost certainly decrease.

            Dream on. Costs will double if not triple 20 years from now.
            In relative terms (as a percentage of GDP), or in raw terms?

            If you are arguing the former, then possibly, but as I said earlier it is far more likely they will drop when the boom finishes. As for the latter, using such a figure is uneducated.

            Labour costs naturally fluctuate, and they do not, in percentage terms relative to the country, drop (or rise) as massively as hardware costs have. Countries in general need cheap labour to grow ;)

            You did correctly point that hardware costs will drop but not significantly but that is insignificant compared to the massive labour costs and other infrastructure costs.
            The only infrastructure we are installing is fibre cabling (since we are using Telstra’s ducts anyway), so I’m not sure what you are going on about here.

            Your claim that the NBN will cost triple in the future what it costs now is both uneducated and odd, especially considering that right now the NBN is being built entirely on top of debt with interest, something that would most likely not be the case if it were built in the future.

          • *Costs will double if not triple 20 years from now. You did correctly point that hardware costs will drop but not significantly but that is insignificant compared to the massive labour costs and other infrastructure costs.*

            you’re basically arguing that we should build X now just because it will cost more Y years from now.

            that’s a logical fallacy.

            imagine if you had incredible foresight and you can anticipate all the physical infrastructure and other intangible assets that we will require in the future.

            compile those into a list.

            do you seriously think we can build or invest in them ALL today given the inherent scarcity of capital and labour?

            of course not.

            you have to sort through your list and prioritise what’s important and what’s not – typically, this means prioritising in favour of building infrastructure or making investments that yield their “intended benefits” soonest.

            just because X will cost more Y years from now doesn’t mean we should build it now.

            the relevant metric in investment decision-making is the ratio of the “benefits” to “costs”.

            the observation that the cost of building anything will trend higher over time in a growing economy is trivial and inconsequential.

Comments are closed.