Huston calls for active FTTP NBN



blog There are few Australians whom your writer considers to be genuine, verifiable experts on the current class of broadband technologies being debated as part of the National Broadband Network discussion. However, Geoff Huston is one of them. Huston is the big cheese of network architecture in Australia. He’s currently Chief Scientist at the Asia Pacific Network Information Centre, where he is regarded as one of the world’s foremost authorities on the phenomenon of IPv4 address exhaustion, but he has also held a variety of other important roles in the history of the development of the Internet in Australia. From 1995 to 2005, Huston was the Chief Internet Scientist at Telstra, where he helped develop the big T’s Internet offerings. Before that, he was one of the main driving forces behind the construction of AARNet — you know, the network between Australia’s universities which was one of the first IP-based networks with access to the Internet in Australia.

Yeah. He did all that. Not bad, eh?

In a new post on his site last month (we really recommend you click here to read through the whole thing; it’s worth your time), Huston expresses his surprise that Australia’s political sphere is actively discussing network architecture design on a daily basis, and provides an excellent overview of the current NBN debate. More importantly, though, Huston also expresses his view that an Active Optical Network (AON) in a fibre to the premises framework would be a much better option than the passive network currently being rolled out by NBN Co. The key paragraph:

“If there was an option for an Active Optical Network in a FttH framework then I think I’d prefer to head in that direction. That’s in spite of the considerations of the reliability issues associated with the deployment of active electronics in the node. This approach offers a direct path to increase the capacity of the trunk fibre from the exchange to the node, and a means of increasing the individual capacity from the node to each ONT on a service-by-service basis if need be. In the trunk networking environment in Australia we have already seen the long haul fibre network that was constructed in the mid nineties be upgraded from the original 500Mbps capacity to multi-gigabit capacity through the retro-fitting of DWDM optics, using the original glass.

While the edge cables in a FttH environment might not be the subject of such intense levels of capital investment, the reassuring thought is that the megabit speeds achieved through the FttH network are an artefact of the electronics of the system rather than a physical limitation of the cable plant, and there is an progressive upgrade path that does not involve a complete replacement of the edge cable system.”

To be honest, you have to take what Huston is saying in the context in which he’s writing. Huston hasn’t really explored the economics of the NBN solution he’s advocating here, and he also hasn’t fully explored the current political situation. In fact, as with many deep technical experts, I would say that Huston is probably a little politically naive. However, from a purely technical point of view, there is absolutely no doubt that we should be taking Huston’s view seriously. This is one expert who actually knows what he’s talking about. Without Huston’s work, the Internet that we all enjoy in Australia might look very different right now.

Image credit: APNIC


  1. “There are actually few Australians who your writer considers to be actual, verifiable experts on the current class of broadband technologies being debated as part of the National Broadband Network discussion.”

    Then perhaps the following is worth reading as well:

    “Coalition’s NBN will need ongoing, costly upgrading, experts warn”

    ‘Telstra’s former chief internet scientist Geoff Huston, who now works as chief scientist for APNIC (an organisation that assigns the numbers that underpin the internet IP addresses to providers such as iiNet and Optus) said that Mr Turnbull was trying to convey that what we did on the internet today was what we would do for the next 30 years.’

    [censored due to breach of copyright]

      • “Sorry, I’m not sure what your point is?”

        The point about the Coalition’s NBN that Dr Huston has made (and which you ‘censored’) should be pretty obvious:

        “I would side with the view that this one is indeed a lemon,…And we’ve already learned from the [Remote Integrated Multiplexers] (mini telephone exchanges Telstra deployed that became bottlenecks) years ago that this kind of hybrid solution is extremely difficult to actually upgrade and replace.”

        • Following the stories that have been out about the Coalition’s policy, it sounds like they want to use technology similar to RIMs in their solution when they mention remote nodes. This screams out to me as a bad thing, especially if you have ever been on a RIM.

          • OK, back on topic: I’d have to say that an active NBN really is completely over the top when all the road maps for GPON show it can easily cope with future demands for many decades to come.

            IIRC, NBN Co have also made allowances for leased PtoP fibre links in the NBN design (by having a bunch of spare fibres to each FSAM) for business, so any 10GigE links that government departments or corporations may wish to have are entirely possible with the current design.

          • So if I get this right, the summary is that the passive option covers our needs for, say, 50 years, which is good enough when the active option is only going to tack another 20 years on top of that.

            Yes? No? Have to admit, I’m not up to speed on what the real effect of the differences between passive and active are with this debate.

          • Pretty much right, and as I understand it, AON is orders of magnitude more expensive than GPON for almost no short- to medium-term gain.

          • And within the next 50 years, plenty of developments will occur within fibre communication technologies, surely making any difference between the Active and Passive kinds moot.

      • Hi Renai, I suspect the point is that you’ve castigated many of us for describing the LBN in unflattering terms and yet someone you acknowledge to be an expert in the field of networking has described the LBN policy as “a lemon”.

    • Interesting read, thanks for the link @delphi.

      So it looks like Geoff Huston also thinks the LNP plan is a lemon.
      I also liked the comment about 25Mbps being like dial-up in 10 years time.

      “Too little, too slow, too backwards” pretty much sums it up.

      • Ta. Since we aim here for an evidence-based discussion comparing different ‘solutions’ for the NBN, we should be grateful to Renai for promoting the search for the “truth” by referencing expert advice rather than just political gossip. In that regard, this from Dr Huston is particularly worrying:

        “Effectively what you are doing in this kind of hybrid solution is you are freezing at a certain level of capacity, and then if you try and augment that, you’ve actually got to re-do the entire network – and we know how much that costs in today’s dollars. [It’s] not pleasant.”

  2. The path from GPON to single fibre per household has been raised before.

    Yes it may need to happen. It may not need active electronics in the street to do it though.

    The NBNCo spec allocates more fibres than are being used for GPON.

    Reallocating a connection to a different fibre is not difficult.

    I need to study the spec more to see how possible this is with the current rollout. It was already on my todo list.

    Going single fiber is I believe in the NBNCo corporate plan.

    But I see no reason to put electronics in the street if you can run a fiber right back to the exchange

    • Yeah, there seems to be something odd with what he is saying. You don’t need active FttH for direct fibre, and since the ACCC chose 121 PoIs, the PoI could be considered the powered element, as no house will be more than 20 km from a PoI.

      Just for a reference, that is about the distance from Werribee to just before the West Gate Bridge, or for you Sydney folk, Parramatta to the City. These are approximate, and you would never do this in practice, but the point is that if you want to do direct fibre in the future, there is no reason you can’t just do it with the NBN fibre today; it just costs more.

      • Actually, this:

        “However, an AON FttH network is not one of the options for Australia’s NBN right now. But even with what’s on offer, making a choice between a GPON FttH and a ADSL2+ FttN network does not seem to be all that difficult. I’d rather do the field work now to replace the copper tail loops in one pass and come out of this exercise with an all-optical reticulation network now. The electronics at either end of the fibre can be replaced at any time in the future without the attendant cost of the field work to replace all the physical copper cable infrastructure. I would much rather see an network infrastructure in place that has the potential to address future needs in communications than spend all this money to provide a national communications network infrastructure that is only capable of meeting today’s needs, without any realistic attention being paid to where all this massive investment in silicon is heading. It’s like looking at the national transport infrastructure at the start of the twentieth century and deciding to improve the lock system on the canals without paying any attention to the emerging needs of those annoying horseless carriages!”

      • Actually that’s not entirely true for us south-west people that live near Bunbury, WA. The PoI will be in Bunbury, and then places like the town I live in (Capel) could potentially be further than 20 km from it.

        But then, it all depends on where they place it: if closer to the south end of Bunbury, then closer to 20 km; if, however, closer to where the Telstra exchange is (on Victoria Street), then closer to 30 km.

        • That may be the case where you are, which basically means you will be on an active-style service anyway if you’re getting FttH, but for around 90% of the population it won’t matter.

          • We are on the 3yr rollout plan, but what sucks is that work doesn’t commence till 2015, and if the Coalition gets in, I doubt it will ever happen.

            On the plus side, water is a big issue in the town when it comes to the internet, so the Coalition would likely be screwed with their plan too, so maybe FTTH will come for me regardless.

    • What is going to fix this is not anything to do with the other fibres; it’s the eventual upgrade to WDM technology. Unlike XG-PON (previously known as 10G-PON), which gives 10Gbit of shared bandwidth and can be installed in parallel with GPON for a gradual upgrade of customers (there are two standards: XG-PON1, which is asymmetric, and XG-PON2, which is symmetric 10/10Gbps), WDM-PON is not yet standardised, but it is being developed by some manufacturers and should hopefully be standardised and commercially available in four to six years.

      Basically, while GPON and XG-PON use time division multiplexing, the WDM technology instead puts different customers on different wavelengths (I think time division multiplexing will still be used to give you the up to four data services and two phone services per premises). This means that each premises has a completely dedicated wavelength. This could be 40 or 100Gbit maximum speed, and perhaps more in the future, all using the same fibre.
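      To put rough numbers on the difference (assuming a 1:32 split and the line rates mentioned above; these are illustrative figures, not NBN Co specifications):

```python
# Back-of-the-envelope per-premises bandwidth for TDM PON versus a
# dedicated WDM wavelength. Split ratio and line rates are assumptions
# taken from the discussion above, not official specs.

SPLIT = 32  # premises sharing one GPON fibre

def shared_per_premises(line_rate_gbps: float, subscribers: int = SPLIT) -> float:
    """Worst-case share if every subscriber pulls data at once (TDM PON)."""
    return line_rate_gbps / subscribers

gpon = shared_per_premises(2.5)    # worst-case share on 2.5 Gbps GPON
xgpon = shared_per_premises(10.0)  # worst-case share on 10 Gbps XG-PON
wdm_dedicated = 40.0               # WDM-PON: a whole wavelength per premises

print(f"GPON worst case:   {gpon * 1000:.0f} Mbps per premises")
print(f"XG-PON worst case: {xgpon * 1000:.0f} Mbps per premises")
print(f"WDM-PON:           {wdm_dedicated:g} Gbps dedicated")
```

      Even the worst-case TDM figures assume all 32 premises are saturating the fibre simultaneously, which is the pathological case, not the typical one.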

      It is a bigger upgrade than the upgrade to 10Gbit will be – while XG-PON can still use the same optical splitters, new splitters have to be installed for WDM based PON technologies. So while the first upgrade will be transparent to customers (a little downtime to plug in the new equipment at the access node, and then the customers can upgrade when they want to move to faster speeds), the WDM upgrade will require a longer outage and probably new NTUs for every user on the fibre.

      If I recall correctly, the guy from NBN Co talks about this as a possible upgrade in the NBN Mythbusting lecture –

      Here’s the wikipedia article on the technology:

      Still, I think it’s funny that on the fibre NBN side, we’re worried about 32 premises sharing gigabits of bandwidth, whereas others are saying that ‘up to’ 25Mbit is fine…

      • Lots of good info here.

        Active FTTH is not worth the cost over GPON in a situation where upgrading one end (the exchange end) can yield improvements for one or more customer endpoints without impacting (or needing to replace) any of the other customer endpoints.

    • I guess you could have a user-pays system where, if whatever minimum speed the NBN is at (be it 1, 10 or 100 gigs) isn’t enough, you could pay $5000 to have a dedicated line run to your house.

      /sarcasm off

      It is good to see that the NBN has run extra fibres for future need. IMHO they get the best of both worlds: cheap GPON for the masses, with a potential speed of 100 Tbps shared by each group of 32, and a dedicated active option where future need demands it.

  3. > [As] your neighbours take up the service everyone takes an incremental hit. Upgrading the base speed of a GPON system requires concerted activity. Not only do you need to up the clock speed at the exchange, but you need to up the clock speed of each and every ONT.

    Err…. I quote from section 3.8.2:

    > According to the ITU-T G.987 standard, NG PON 1 will be supported in parallel to current 2.5Gbps GPON, by using different upstream and downstream wavelengths from 2.5Gbps GPON, as illustrated in Figure 3.17 below. This will allow an operator to support both 2.5Gbps users and 10Gbps users on the same underlying PON infrastructure. However, the major drawback associated with such a solution is that existing NTDs will need to be retro-fitted with wavelength blocking filters (WBFs) to filter out the NG PON 1 wavelength. Also, if a combined GPON/NG PON system is implemented (as illustrated in Figure 3.17 below), a wavelength multiplexer will be required in the FAN to multiplex GPON and NG PON wavelengths into the same fibre. This may cause a minor service outage, since the GPON line card will have to be disconnected from the OFDF to install the wavelength multiplexer.

    Yes, it requires concerted activity; yes, you need to up it at the exchange; but needing to up the clock speed of every ONT – wrong.
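    For what it’s worth, the coexistence the quoted section describes can be sketched numerically: GPON and NG PON sit in disjoint wavelength bands on the same glass. The band edges below are nominal figures from the coexistence specs (G.984.5 / G.987) and should be treated as illustrative:

```python
# Sketch of why GPON and XG-PON can share the same fibre: they occupy
# disjoint wavelength bands. Band edges (in nanometres) are nominal
# coexistence figures and should be treated as illustrative.

BANDS_NM = {
    "GPON upstream":     (1290, 1330),
    "GPON downstream":   (1480, 1500),
    "XG-PON upstream":   (1260, 1280),
    "XG-PON downstream": (1575, 1580),
}

def overlaps(a: tuple, b: tuple) -> bool:
    """True if two half-open wavelength intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

names = list(BANDS_NM)
clashes = [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
           if overlaps(BANDS_NM[x], BANDS_NM[y])]
print("clashing bands:", clashes or "none")
```

    No band clashes, which is why the standard can run both systems in parallel and leave the 2.5Gbps ONTs alone.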

    Also, speaking of non-existent but technically possible things, it’s technically quite possible to embed into the existing signal a subsignal that runs at much higher speeds – this kind of thing has been done with DVB-T, for example, as hierarchical modulation. The basic concept of a pilot signal has been around for ages: have one big signal here at this wavelength that can be decoded, and a more complex one with a more sophisticated constellation nearby that only something else will pick up.

    He says about DWDM: “However this form of packing data into a fibre cable is expensive, and in a large scale FttH deployment its impractical to use WDM to share a cable across multiple users.” But I’d like to know why exactly. Picking different wavelengths to send TV signals separately from voice or data services has been commonplace in the past. It’d be nice to get a technical justification as to why upping the clock speed of every ONT is necessary when there are plenty of ways to effectively hide a higher-bandwidth signal that the other ONTs in a PON would simply not see. And it’s not like NBN Co is rolling out a widely divergent range of technologies; it’s the same frequencies and the same ONTs all around.

    What I’m guessing could be the case is that NBN Co is using DWDM further upstream to connect the FDHs on the same fibre… not sure though.

    I can see incremental upgrade paths from GPON upwards being quite realistic. Not at the current time, definitely not, but as other countries face the same upgrade path, more technology and, more importantly, standards will come onto the market. If his point is that you can’t introduce 40GPON or 10GPON into an existing GPON network as-is on the same wavelength without it throwing up all over the proverbial couch, then he is 100% right. But no one said it had to be 10GPON or 40GPON, and I’m not sure about the impossibility of WDM.

    In fact, here’s what Alcatel-Lucent say in one of their NBN related documents:

    > One of the main criticisms put up against GPON is that the shared fibre is not future-proof. The ever-increasing demands for bandwidth are shown as exhausting GPON capacity well before the lifetime of the fibre expires. Alcatel-Lucent believes that GPON represents a secure, futureproof path for residential fibre deployment.

    And it then goes on to mention 10GPON. And, in fact, check out Figure 3.17 of this:

    So, now we have Alcatel-Lucent and Analysys Mason both saying that it’s quite the upgrade path…


    And ADC seems to think DWDM – running on PON, mind you – will keep us going: “Future versions may have as many as 64 10 Gbps wavelengths.” That’s 640 Gbps instead of 2.5 Gbps.

    > The other upgrade option is to keep the data speed constant, but divide the GPON splitter into two splitters, and thereby increase the effective capacity per subscriber by halving the number of subscribers per splitter.

    And this is of course much cheaper, but there are diminishing returns, and diminishing dark fibre, if you keep doing this for decades.
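    The splitter-halving option in the quote above is easy to put numbers on. A rough sketch (assuming a fixed 2.5 Gbps GPON line rate and a 1:32 starting split; purely illustrative):

```python
# Effect of repeatedly halving the GPON split ratio on the worst-case
# per-subscriber share of a fixed 2.5 Gbps line rate. Each halving needs
# another dark fibre back from the splitter, hence the diminishing
# returns mentioned in the comment. Illustrative figures only.

LINE_RATE_MBPS = 2500

def share_after_halvings(initial_split: int, halvings: int) -> float:
    """Worst-case Mbps per subscriber after `halvings` splitter divisions."""
    split = initial_split // (2 ** halvings)
    return LINE_RATE_MBPS / split

for h in range(4):
    split = 32 // (2 ** h)
    print(f"1:{split:2d} split -> {share_after_halvings(32, h):6.1f} Mbps worst case each")
```

    Each halving only doubles the worst-case share, so after a few rounds the splitter cabinet runs out of spare fibres long before the shares get dramatic.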

    > just think of rather clever predictive noise cancelling headphones

    I really don’t like this, it does the magic behind vectoring a huge injustice :)

    But, yes, in a nutshell, the losses in the splitters in the FDH may well be painful a few decades into the future. But I guess if that ever happens, we’ll just have to plug things around for 15 minutes in each FDH and be good to go for another two decades.

    > If there was an option for an Active Optical Network in a FttH framework then I think I’d prefer to head in that direction. That’s in spite of the considerations of the reliability issues associated with the deployment of active electronics in the node. This approach offers a direct path to increase the capacity of the trunk fibre from the exchange to the node, and a means of increasing the individual capacity from the node to each ONT on a service-by-service basis if need be.

    As far as I can figure, we’ll never reach that point, as the PON technologies continue to evolve.

    I mean, to channel Bill Gates a bit, 640 Gbps is good enough for anybody. And the PON standards will continue to evolve, and are never likely to be more than a few multiples behind AONs.

    • “> One of the main criticisms put up against GPON is that the shared fibre is not future-proof. The ever-increasing demands for bandwidth are shown as exhausting GPON capacity well before the lifetime of the fibre expires. Alcatel-Lucent believes that GPON represents a secure, futureproof path for residential fibre deployment.”

      The fibre is where the real cost of the network is (not because of the cost of the fibre itself, but the cost of the labour to install it). The fibre is also the most future-proof part of the system (until they develop some quantum system), so surely the GPON electronics are the easiest/cheapest part to upgrade?

      Is this a “real” problem for the next 30 years or so, or more an academic thing?

  4. “If there was an option for an Active Optical Network in a FttH framework then I think I’d prefer to head in that direction.”

    Bit of a can of worms really.

    Is that option available with the NBN’s GPON? Would it just require new cards (which could be done at a later date) or would it require totally new equipment? Is Dr Huston just trying to gild/gold-plate the lily for something that can already do 1Gbps? Would Malcolm consider using it in the fibre portion of the LBN?

    • No need for any changes. As long as there is enough fibre (they are laying more than they need for these reasons), an exchange is close enough to connect most premises directly to fibre if needed, due to the ACCC 121 PoI ruling.

      Assuming that the ACCC ruling is still in effect, there is no reason you could not do direct fibre using dark fibre to the exchange for Malcolm’s plan.

    • We look at 1 Gbps as some sort of holy grail, but by around 2025 we are going to be expecting that as some sort of base standard, like 12 Mbps is today. Move forward from that, and 2027 means 2 Gbps, 2029 4 Gbps, and so on and so on.

      There will always be a need for more speed; we just don’t know what for yet.

      History has proven that by jumping from dialup to basic ADSL, the music industry transitioned to digital; from ADSL to ADSL2, social networking became a global phenomenon; and ADSL2 to FttN/H will see video go a similar way to music. What happens next, when we increase our speed by a factor of 10?

      What’s inherent in the current rollout that could possibly get in the way of putting the tech in place for 10 Gbps, for example?
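      The doubling rule of thumb in the comment above (1 Gbps as a base standard around 2025, doubling every two years) can be written down directly. This is an assumption in the spirit of Nielsen’s law, not an official forecast:

```python
# Projected "base standard" speed under the commenter's rule of thumb:
# 1 Gbps in 2025, doubling every two years. An assumed growth model,
# not a measured or official figure.

def base_speed_gbps(year: int, start_year: int = 2025, start_gbps: float = 1.0) -> float:
    """Expected base speed in Gbps for a given year under the doubling rule."""
    return start_gbps * 2 ** ((year - start_year) / 2)

for year in (2025, 2027, 2029, 2035):
    print(f"{year}: ~{base_speed_gbps(year):g} Gbps")
```

      By this curve the step from 12 Mbps today to 1 Gbps in 2025 is nothing special; it is just the same exponential most of broadband history has followed.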

      • In answer to your last paragraph, absolutely nothing. The optical splitters may need to be replaced at the same time as the line cards and ONTs, but that’s a worst-case scenario; it’s entirely likely that future evolutions of PON (past 10GPON) can even reuse the current passive optical splitters.

      • I’m not sure there will be any one particular technology that will trigger our “need for speed”; I think it will come from a lot of current technologies being adapted for use “in the cloud”.

        Just as one example, take 3D printing. People are already working on 3D printing buildings (see here for one example) and several other applications. The people designing the things to output won’t be at the printing plant; they’ll be sending plans out to clients via the internet. Add to that all the other cloud services, both current (Dropbox, Adobe Creative Cloud, etc.) and yet to emerge, plus things like smart grids, and that’s a lot of bandwidth you’re using on a day-to-day basis.

        You can’t just point at one technology and say “We need X Ybps to run that” there is no single “killer fibre app”. But we’ll need “X Ybps multiplied by Z” to run the lot of them…

        • Politicians are very bad at technology and science; they always have been, because it’s not a field that attracts innovative thinkers. Most of them are deeply unoriginal thinkers and like to apply others’ intellectual output even where totally inappropriate.

          This leads them to make poor choices, even though their innate sense of entitlement (you don’t put yourself in a position to make decisions for others without this) tells them they are the only people who should.

          I think Douglas Adams had it right when he postulated that anybody who seeks a position of power should be automatically excluded from holding one.

          Fascinating 3D printing article though – there’s so much being done in this space and it’s truly a game changer, from the nano to major construction scale!

        • “The cloud” (note, I hate that term, sounds so weak) is one thing I was thinking as well. But what about multiple devices?

          Looking back, social networking wasn’t the tipping point for needing ADSL2 either, but it sure helped the process. Suddenly, for a reason nobody saw coming, EVERYONE needed more bandwidth, just to keep in touch with the person on the other side of the room. Your phone got ‘smart’, your home computer became 2 or 3, then there was a tablet, then your gaming console… Before you knew it, it wasn’t necessarily about the max speed, but how many objects needed to share that total.

          It’s a yin and yang situation. Some things show clearly what future speed can deliver (music, for example), while there are just as many other things that might come along and take advantage of the same speeds, like Facebook did.

          1 Gbps might not be much use for a single device, but what about when it’s spread amongst 10 of them? Or 20, or 30? In a digital age, everything can be online, from your security system to your toaster. But you’re going to need the bandwidth to share amongst them all.

          • Couldn’t agree more, Gav. When things like TVs, mobile phones, computers and even your VOIP landline are all set up to access your net connection (either directly or via a wireless router), it really adds up, especially with three people all using it at once.

          • Social networking is not a high bandwidth user. That’s why many mobile plans give away free Facebook and Twitter access, because they know it won’t whack their networks too hard.

            If anything these drove the mobile space and pushed people away from ADSL2, because a mobile is genuinely always available so you can really keep in touch. ADSL simply is not always available.

          • That does not invalidate his argument; it merely points out why social media specifically will not drive the need for higher speeds. Nowhere did he suggest it would, only that it was a catalyst for broadband, both mobile and fixed, in Australia.

            The fact that networks offer it for free proves this. “You can get Facebook on your phone? Cool” That’s why people upgraded to a smart phone. Not touch, not games, but Facebook.

          • His argument was “… social networking wasnt the tipping point for needing ADSL2 either, but it sure helped the process…” and yeah, if you look at the way social media works, this is completely wrong.

          • @Tel

            Yeah… because it’s not like you can share videos, music and pictures easier and faster than ever before on social media, which boosts the average downloads and required speeds of those who use it…

            No, that isn’t happening at all….

          • Should you ever care to look at the content of Facebook and Twitter and similar sites, it is not videos and music; it is just people chit-chatting and sending short messages.

            If the guy had said “YouTube is driving ADSL2” I would agree, but that’s not what he said, regardless of how you might want to bend it around.

          • @Tel

            Yes, because Social Media hasn’t increased the SHARING of these videos, music and pictures.

            And you didn’t miss my point at all. Just because you chose to take his point one way, doesn’t mean that was the ONLY way.

          • I honestly don’t know who you have on Facebook, Twitter, and Google+, but if all they do is post short messages to their walls, they must be extremely boring.

            On my feed I get videos posted on interesting topics from YouTube and TED, and people uploading photos from recent events, all of which would significantly degrade my experience if I didn’t have ADSL2+ broadband, considering that a) the “free Facebook” offered by wireless carriers doesn’t extend to YouTube and TED, and b) ADSL2+ is the only thing that has both the speed and quota I require to watch every interesting video posted.

            To say you completely misunderstood the argument presented is hardly giving you credit for how warped your logic is.

            Let me clarify this: social media by definition is the ability to share content, and thus isn’t intrinsically bandwidth-intensive; however, the content shared through it might be.

            Let’s put it this way: someone uploads a video of a funny cat. To be seen, this funny cat must be shared, which is done over Facebook or Twitter… without Facebook or Twitter, that funny cat might as well not even exist.

          • So you’re saying that when there were 500 million Facebook accounts, every single one of them was used solely through a mobile phone. Right, gotcha…

            Microanalyse what I wrote as much as you want, and you still miss the point. User numbers went from relatively small to a daily necessity, for a number of very minor reasons. It was the combination of them all into a single need that drove the need for more bandwidth. At the time, that was ADSL2. As time went on, a good portion of that transitioned to mobile, but for every mobile account there is a PC somewhere accessing that same account, and playing all those little games you get on FB.

            Again, I didn’t say it was THE reason ADSL2 took off. But for access to the internet across a number of uses, it sure as hell came along at a convenient time. And that’s all I’m saying. It was built; something used the supply. As far as I’m concerned, social networking took the most advantage.

          • “‘You can get Facebook on your phone? Cool.’ That’s why people upgraded to a smart phone. Not touch, not games, but Facebook.”

            Not exactly driving people towards ADSL2… except those small number of people choosing to run a long cable behind their smart phone.

  5. @Renai “Hey mate, this is not a Coalition NBN thread, it’s about active versus passive FTTP.”

    The Coalition’s broadband plans do include ‘on demand’ FTTP, which, of course, raises the question:

    Given Dr Huston’s comments on the Coalition’s broadband plans:

    “Effectively what you are doing in this kind of hybrid solution is you are freezing at a certain level of capacity, and then if you try and augment that, you’ve actually got to re-do the entire network – and we know how much that costs in today’s dollars. [It’s] not pleasant.”

    how is an already compromised ‘on demand’ FTTP going to be affected if/when an active solution upgrade is implemented?

    • hey mate,

      FYI I’m putting you on the pre-moderate list, for breaching the following tenet of the Delimiter comments policy:

      Comments which constantly change the subject to off-topic subjects, often in self-promoting areas. Occasional off-topic stuff is completely fine, but if commenters are constantly trying to push an agenda not related to the current topic of discussion, that will become a problem.



      • No problem. Thanks for answering the question on upgrades to active FTTP.


  6. He’s the expert. But I don’t really see the need.

    AON is easier to upgrade on a case-by-case basis, sure… but that’s it. PON is still easy to upgrade; you just have to do a whole cabinet (or rather a line card, plus NTDs and splitters). Considering they only cover 200 premises… it’s not exactly a big deal.

    Sure, it’d probably be slightly better. But it’s a HELL of a lot more important to take PON over FTTN than to have AON.

    And no, before anyone suggests it, I DON’T think he’s advocating going FTTN first because that has an upgrade path to AON. He’s made it clear he thinks fibre is the right choice.

    • Sorry, this is off topic, but I found this bit in Dr Huston’s article quite amusing (not because the issue isn’t important, but for the thought of getting our stick-in-the-mud pollies on board):

      “It prompts me to wonder if somehow we could make the support of a massive exercise of IPv6 deployment across the entire national network into an equally prominent election issue. Now there’s a thought!”.

      Malcolm? Stephen? Would you guys like to champion this cause? :o)

      • Love it! I’ve been using an IPv6 tunnel and am moving to Internode just for the native IPv6. I’ve also chosen not to use certain companies’ cloud hosting because they don’t support IPv6, and I always tell them, just to make sure they know that not supporting IPv6 is losing them some business.

        I don’t really need it (currently), but I think it’s so important to the Internet to move into the future instead of supporting ugly hacks like CG-NAT to get around the v4 exhaustion issues.
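        As an aside on the CG-NAT hacks mentioned above: one practical symptom of being behind carrier-grade NAT is that the WAN address your router receives falls in the shared address space 100.64.0.0/10 reserved by RFC 6598. A quick standard-library check (the sample addresses below are made up for illustration):

```python
# Detect the RFC 6598 shared address space (100.64.0.0/10) that carriers
# use for CG-NAT. Uses only the Python standard library; the example
# addresses are invented for illustration.

import ipaddress

CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def behind_cgnat(wan_address: str) -> bool:
    """True if the given WAN IPv4 address lies in the CG-NAT shared range."""
    return ipaddress.ip_address(wan_address) in CGNAT_RANGE

print(behind_cgnat("100.64.1.23"))   # in the shared range: likely CG-NAT
print(behind_cgnat("203.0.113.5"))   # outside it: publicly routable
```

        It isn’t conclusive (some ISPs use ordinary private ranges instead), but an address in that block is a strong hint you don’t have a public IPv4 address of your own.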

        But, bringing it back to the NBN, it would have been nice if NBN Co had made it a requirement that you support IPv6 natively to be an RSP to help adoption…

    • “Oops” on the other reply; I meant that to be a general reply.

      Yeah, I think he thinks it’s more important to get the fibre in the ground, than what equipment is attached to it. From what I’ve read on it PON still has a lot of room for upgrades.

      I think his bigger worry about it though was more the trunk back from the GPON, rather than the ONT side of things?

  7. OK, not a networking guru, so excuse me if I’m wrong.

    But my impression from the article is that the AON works better than PON because it is managed.

    So the PON basically just sends the info down the pipe, and if lots of other things are using the pipe then it all slows down.
    Whereas AON directs the info, improving performance by using the available capacity more efficiently.

    I remember the days when we had dumb Hubs, and then Switches came in. We gradually replaced the hubs with switches based on performance needs.

    It sounds to me (a layman) like AON is the better option technically.

    In which case it will come down to best bang for buck between the two, IMO.

    How hard is it to upgrade the infrastructure (PON -> AON) later? Is it even possible?

    • AON is better known in the biz world as PtP (point-to-point) fibre: basically every premises gets its own dedicated pair back to an active powered fibre switch (so you were sorta right), which provides dedicated, fully symmetrical bandwidth to each premises.

      It has some of the same drawbacks as FTTN, such as powered cabinets (although a lot fewer of them) which need batteries, cooling and active hardware.

      It’s also incredibly expensive to roll out, as it needs quite large ducts to get the big bundles of fibre to each active cabinet.
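      To put rough numbers on the duct-size point above, here’s a back-of-envelope sketch in Python. The 2,000-premises serving area and the 32-way split ratio are illustrative assumptions, not NBN Co figures:

```python
import math

def feeder_fibres_gpon(premises: int, split: int = 32) -> int:
    """Feeder fibres needed back from the serving area under GPON:
    one per optical splitter, each splitter serving up to `split` premises."""
    return math.ceil(premises / split)

def feeder_fibres_ptp(premises: int) -> int:
    """PtP / AON-style: one dedicated feeder fibre per premises."""
    return premises

# Illustrative serving area of 2,000 premises (assumed figure)
area = 2000
print(f"GPON feeders: {feeder_fibres_gpon(area)}")  # 63 fibres
print(f"PtP feeders:  {feeder_fibres_ptp(area)}")   # 2000 fibres
```

      Neither count includes spares or maintenance fibres; the point is only the roughly 30-fold difference in feeder bundle size, which is what drives the bigger ducts.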

    • It is probably impossible to upgrade from PON to AON the entire way. While they are laying more fibres than they need to the splitters, I would be surprised (in a good way!) if they were laying more than 32 to each optical splitter. (You’d need a couple of spares in case of breakage; you don’t want to allocate them all and then have to lay new fibres when one turns up faulty.)

      • They wouldn’t need to run multiple cables; fibre optic cables have multiple strands in them, so it would come down to how many strands the cable they decided to use has.

        • Yeah; when I say “fibres” I mean the actual fibres, rather than an optic fibre bundle.

  8. So how different/better would active FTTP be compared to Google’s Chattanooga rollout, with individual fibres to each premises?

    • There’s some debate over what fibre tech Google is actually using; it sounds like some sort of AON/PON hybrid from what I’ve read.

    • What does Google have to do with it?

      Chattanooga’s community-owned electric utility EPB is installing a 100% fiber to the premises network. Built to run America’s first true Smart Grid and offer residential high speed Internet, video and telephone services, the network was also built to empower our community in new ways.

      So the electricity company saves money at peak periods by turning off people’s air conditioners on a hot day, and they need their own data network to do that. Fiber makes sense for an electricity company (it is an insulator) and they already have all the poles and right of way and given that the main cost of the roll out is labour, might as well install fiber.

      After they have that done, their data requirements are microscopic so might as well offer the spare bandwidth to consumers.

      The really interesting thing is that this completely disproves the idea of a “natural monopoly” that the ALP have been pushing. The Electric Power Board of Chattanooga is owned by the city (not the state, and not the feds either) and it is a long way from any sort of nation-wide monopoly. This allows it to be responsive to the particular needs of Chattanooga residents, and those residents have some reasonable chance of getting their city to listen to grievances or concerns. Your vote among some hundreds of thousands of people is a lot more important than your vote among some hundreds of millions of people.

      I would argue that managing networks on a local region-by-region basis is much more of a “right size” for such projects than a bone-headed national monopoly. This is especially so for Australia where the lifestyle and geography of bush areas and city areas is completely different, and requires completely different approaches — there isn’t any economy of scale to be had, and there is a huge dis-economy of management.

      • Google has to do with it because they were a catalyst. You think this power utility thought there was a point and demand to compete with the big cable without Google coming along?

        You realise that they could have built a smart network without investing in a huge FTTH network, right? The data required by these systems barely exceeds dial-up speeds. In fact they could have done the smart bit over their pre-existing infrastructure. Why go to the effort of deploying FTTH?

        And your point about natural monopoly? You do realise that a natural monopoly tends towards one provider to prevent overbuilding. This does not mean that a collection of small independents can’t do it where they don’t overbuild each other; that is still a natural monopoly. If anything it makes sense that local power boards would do it, because their utility services are also a natural monopoly!

  9. Nice writeup by Mr Huston. I think he is quite right that an active fibre rollout would be a technically “better” fibre plan. It’s pretty hard to argue against it from a technical standpoint.

    This would be the “Rolls-Royce” rollout, though, I think. It’s quite possibly already warranted in very densely populated areas; I think it could potentially solve the “MDU” problem nicely if you could get a lot of tails to an active “node” in the neighbourhood.

    Wouldn’t like to think about how much it costs though. Plonking powered switching gear all around the place à la FTTN would add up pretty fast. You wouldn’t need anywhere near as many nodes, but still..

    Would be pretty cool to be able to request your fibre be upgraded to, say, 20Gbps though, and have the NBN Co installers come out and just switch the end module over (and do the other end in the “node” or exchange). Then you could seriously bag your neighbour for their slow-ass 1Gbps link ;) Who knows, it might become viable to replace the optical splitters in future; once fibre is run to a house you’ve done a lot of the work anyway.

  10. Ahhh, thanks Renai, for showing me an alternate choice for a broadband network.

    After reading today’s proposition I have to say that the expensive one exceeds what I need for downloading simple pornos and stuff from Pirate Bay.

    We won’t need all those costly features for years and years and by then we will probably have quantum transmission.

    So I’ll vote for the economy version even if it is not the Ferrari that expensive plans promise. The WRX version will do me. It’s called FttP I believe.

    • Actually it’s prolly more like an ’88 Laser with a Jap-import intercooled turbo motor dropped in vs a 2013 VW Passat 4Motion.

      One of them is severely limited by an aging platform and just can’t deliver what its on-paper specs suggest is possible.


      • Oh dear!

        I didn’t think it was the least bit subtle.

        Renai banned comparisons with the libnut stuff, so I didn’t include it; just the one Geoff prefers and the NBN.


  11. AON has a lot of advantages over PON.

    Less vendor lock-in.
    Easier path to real open access.
    Way more adaptable to new developments, etc.

    Now if only there was a transitional technology that pushes powered nodes out into the field……..

    • Yep, totally locked in – to vendors who follow the ITU-T G.984 standard?

      How does it have any relation to open access in any way? How is it more adaptable to new developments?

    • “Easier path to real open access”

      How exactly? Open access? You mean competition where an ISP installs equipment in the exchange? Why would they do that when they can just get wholesale access to the XG-PON that the NBN will be upgraded to? Or the successor to XG-PON…

      “Way more adaptable to new developments etc.”

      Like what? (Seriously, if you haven’t heard of it, it won’t be around for another 20+ years. I have heard of XG-PON, but it isn’t even official yet, and XG-PON is an incremental improvement on something that already exists; if a technology doesn’t exist now, it will take 20+ years to be commercialised.)

  12. As a layman, I found this white paper rather enlightening.

    There is a summary on page 12 that basically supports the thought that it is a better tech, but at a premium.

    It talks about FTTExchange and FTTCurb, which I am assuming are somewhat similar to the FTTN concept?

    Does anyone know what the proposed tech in the FTTN up to the Node point is? Is it AON or PON, or is it something different again. If it is AON, does that make the FTTN more palatable long term? (Which is ultimately my issue with FTTN long term cost vs short/long term gain)

    And please don’t make this an FTTN bash. I am just curious whether this is even applicable to the FTTN element of the discussion.

    • AON is a little misleading in the context of the debate between GPON and FTTN.

      FTTN and GPON networks will lay 1 fibre (actually more than one but for redundancy not capacity) to a distribution point. FTTN and GPON use different distribution technology.

      FTTN plugs the copper wires in on the customer side; and the fibre in on the exchange side.
      There is no passive technology that can handle the conversion between the fibre-based signals; and generate the subsequent copper-based signals, so power is required at this node.

      GPON plugs fibre in on the exchange side into an optical splitter (a passive device that splits the light), and the fibre to each customer carries one of the optically split signals. There is no electricity used here, because effectively the same signal is taken from the exchange to each of the 32 end points; the electronics at each of the 32 homes then filters the signal. This is why the 2.5 gigabits is “shared”: the same 2.5 gigabits is sent to each house, but only some of that signal is addressed to “you”.

      AON, on the other hand, could theoretically do away with this “node”. It would have a node only because it is easier to maintain the fibre this way: if you have a break in a fibre, you can switch to a good one at the node instead of marking the whole length as “faulty”. You get more redundancy with some kind of node in place.

      The problem with claiming FTTN is a stepping stone to AON is one of implementation. If, when deploying your FTTN network, you set out to create a stepping stone to AON, it is 100% feasible. If, however, your plan to deploy FTTN is based on cost, you are unlikely to over-spec your FTTN network in the way that AON would require.

      AON requires one fibre per customer endpoint, plus additional spares (you’d probably lay 2 to 3 fibres to each premises for this purpose). To your node, you don’t need to lay three times as many fibres, but you do need many extras (a statistician could tell you how many). If you are trying to save money in budgeting for your FTTN network, chances are you won’t lay enough fibres to each node to support the level of redundancy an AON network requires.

      So there is nothing precluding an AON upgrade from FTTN. But the Coalition is aiming to save money; it is likely their “future” for the node is a GPON system, and they will likely budget fibre allocation to nodes accordingly.
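      To make the “shared 2.5 gigabits” point above concrete, here’s a small sketch in Python. The 2,488 Mbps figure is the standard GPON downstream line rate; the 32-way split is as described above:

```python
def per_premises_floor(downstream_mbps: float, split: int = 32) -> float:
    """Worst-case guaranteed share if every premises on the splitter
    saturates its link at once. In practice, statistical multiplexing
    lets any one user burst far higher when the others are idle."""
    return downstream_mbps / split

# GPON downstream is 2,488 Mbps, shared across a 32-way splitter
print(per_premises_floor(2488))  # 77.75 Mbps worst-case floor per premises
```

      That floor only bites when all 32 premises are pulling data simultaneously, which is why PON sharing is normally invisible to the user.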

      • “FTTN plugs the copper wires in on the customer side; and the fibre in on the exchange side.
        There is no passive technology that can handle the conversion between the fibre-based signals; and generate the subsequent copper-based signals, so power is required at this node.”

        Yeah, I understand that. But the FTTE/FTTC concepts in the white paper I was looking at seem very similar to the FTTN concept. I am assuming (possibly incorrectly) that they are simply slightly different methods of doing the same thing. The FTTE and FTTC both talked about copper to the premises, etc.

        I believe it is saying that the technology to get to the Exchange/Curb(and my assumption NODE), can be either PON or AON.

        My thinking was, that if AON was used up to the “node” would it be possible to then “upgrade” the last mile from the node to make it full AON throughout.

        I know I am not being as clear as I want to be, but I am working from a lack of understanding of some of the fundamentals.

        Also, I am guessing this isn’t what is occurring, as I believe the cost would make the FTTN plan untenable against the current FTTP plan.

        • FTTx technologies are actually all very similar; the main difference between them comes down to whether each uses a copper tail, and if so how long it is.

          The Wikipedia article on them is pretty good at setting out the “proper” names and differences of them, and covers AON and PTP as well.

          One thing in common with them all though (except AON and PTP/direct fibre) is that fibre is run to a node/cabinet, with either copper or fibre run from the node to the end-point. So “AON up to the node” is kind of what already happens.

          AON is typically used in enterprise and academic networks (like AARNET that Dr Huston helped design) especially connecting site-to-site, and are functionally equivalent to an Ethernet network.

  13. Renai,

    I took the trouble to read his article in full. I might also add that I enjoyed a few chats with Geoff back in my student days, and found him a quite approachable and intelligent guy.

    Geoff has basically branded what the Liberals propose a lemon. There’s no doubt about that.

    As for passive versus active optical networks: a fair reading of his article is that what he says about active optical networks is a mere quibble, compared to his all-but-thrashing of FTTN.

    I have some sympathy with an approach that provides a unique fibre to every home from some point further back in the network. I used to argue about this issue on WP back in the days before NBNco brought out its technical documents.

    When they did, I was pleasantly surprised. For one thing, they’re providing three fibres to each address, so in theory, if you want point-to-point you can get it. There is a limitation in the number of spare fibres on the trunk, but eventually an upgrade to that (pulling through more fibre) is very much less expensive than upgrading street cable.

    Now, the argument Geoff is making (and it’s a bit technical, but I’ll break it down) is that a lot of traffic is high-speed, UDP-based video streams. In such circumstances the statistical sharing of a PON network tends to break down: instead of relying on essentially uncorrelated and bursty behaviour, there is more chance of the users on a PON conflicting by simultaneously requesting long-lasting video streams.

    Well, that’s interesting. But what Geoff doesn’t really go into in much detail is the upgrade path available to GPON, with 10GPON likely to see use by 2018. And of course there is a logical successor to that, with 40Gbps of shared bandwidth.

    Where we go from there is less clear. Eventually I think we’ll end up with WDM-PON in some form or another. Inherently, the physical limitation of a shared system is some Tbps per user; with dedicated fibres, of course, it’s 32 times that.

    I can live with that.
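    The upgrade-path arithmetic in the comment above can be sketched quickly (in Python). The GPON and XG-PON rates are the standard ~2.5 and ~10 Gbps line rates; the 40 Gbps successor figure is taken from the comment, not from any ratified standard:

```python
def shared_floor(line_rate_mbps: float, split: int = 32) -> float:
    """Worst-case per-user share of a PON's downstream capacity
    on a fully loaded splitter."""
    return line_rate_mbps / split

# Downstream line rates per PON generation (Mbps); 40G figure assumed
generations = {
    "GPON": 2488,
    "XG-PON (10GPON)": 9953,
    "40 Gbps successor": 40000,
}

for name, rate in generations.items():
    print(f"{name}: {shared_floor(rate):.1f} Mbps worst case on a 32-way split")
```

    Each generation reuses the same glass and splitters, which is why the upgrade path matters more than the first-day line rate.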

Comments are closed.