Linode Network Backbone

June 27, 2017 10:35 am

As we outlined in our last network update, our network engineering department has been hard at work revamping our entire Internet-facing network by turning up gobs of capacity and directly peering with hundreds of networks all over the globe. Since then, we have extended our network backbone to all datacenters in North America and Europe. This means any communications between Linode datacenters will never leave our ASN 63949. Having direct private connectivity between our sites takes the volatility of the internet out of the equation.

Customers can trust that their multi-site apps hosted with us will be able to communicate over the fastest and most reliable means possible. Our engineers worked closely with our vendors to pick the shortest fiber paths possible, which in turn gives our customers the lowest latency between locations. Across our current longest path (Fremont DC to Frankfurt DC), we were able to cut the RTT down by over 15ms. We have also seen the inevitable occurrences of internet-related jitter and packet loss disappear.
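Customers can verify RTT, jitter, and packet-loss improvements like these themselves with an ordinary ping run between their Linodes. As a minimal sketch (the sample data below is made up for illustration, not a Linode measurement), here is one way to summarize such a run in Python, using the mean absolute difference between consecutive samples as a simple jitter estimate:

```python
from statistics import mean

def rtt_stats(samples):
    """Summarize a series of ping RTT samples in milliseconds.

    A sample of None represents a lost packet. Returns a tuple of
    (average RTT, jitter, packet loss percentage), where jitter is
    the mean absolute difference between consecutive successful
    samples -- one common simplification.
    """
    ok = [s for s in samples if s is not None]
    loss_pct = 100.0 * (len(samples) - len(ok)) / len(samples)
    avg = mean(ok)
    if len(ok) > 1:
        jitter = mean(abs(b - a) for a, b in zip(ok, ok[1:]))
    else:
        jitter = 0.0
    return avg, jitter, loss_pct

# Hypothetical before/after samples for a Fremont-to-Frankfurt path:
before = [162.1, 161.8, None, 163.0, 162.4, 161.9]  # one lost packet
after = [146.2, 146.1, 146.3, 146.2, 146.1, 146.2]  # stable, lower RTT
```

Feeding real `ping` output into a helper like this makes it easy to compare a path before and after a routing change.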

While delivering direct connectivity between Linode’s public interfaces is great, we do not plan to stop there. Our engineers have spent countless hours building relationships and forging peering agreements with hundreds of content and eyeball networks. Now, with an accelerated and fortified network backbone, we can extend the benefits of these relationships, currently contained within a single datacenter, to any datacenter along our backbone. For example, customers in Newark will directly benefit from our robust peering relationships in Europe, and vice versa. The less traffic sent through multiple transit hops, the better the cloud we provide for our customers.

Now that Linode’s North American and European DCs are integrated, we have begun addressing our AsiaPAC network. During a recent trip to Singapore, our network engineers lit dark fiber to one of the most carrier-dense buildings in Asia, giving us the foundation to extend our backbone there, too.

16 Responses

  1. To improve Asian network performance, Linode can consider lighting up dark fibres to Hong Kong solely as a network PoP. That way you can cover more Asian countries like the Philippines, Brunei, Taiwan, Cambodia and Vietnam and provide redundant routes for these countries as their links to Singapore or Tokyo are not as stable as the ones going to Hong Kong.

  2. Good Job!!! Linode always Rocks…

  3. Goooood job guys! So, there will be a global private network with a unique private IP range shared between Datacenters?

  4. Excellent news 🙂 I’m in AsiaPAC exclusively at the moment, so I look forward to the work done there 🙂

  5. Come to StrayaaAAA!

  6. @Diego Thanks! It might be something we can consider, we’re always looking to make our service even better.

  7. awesome Linode Rocks

  8. Yeah yeah thanks Linode!!!

  9. Any plans to increase egress, i.e. currently 1Gbs out?

  10. Awesome job. Speed is important if one is syncing an app between two data centres.

  11. Great News!

    We’d love to have backup service across 2 datacenters! (and private networks 🙂)

    E.

  12. Next POP should be in Australia! Tons of business here!

  13. +Stephen Reese
    If you have a low-cost Linode, then 1Gbps is a good amount of bandwidth for most usage scenarios. You could always set up a cluster of cheap Linodes over multiple datacenters in order to increase your aggregate bandwidth.

    With these network upgrades, does Linode plan on increasing usage quotas on Linodes?

  14. I’m getting 174ms latency between Fremont and Frankfurt. Your diagram says that should be 147ms.

    What gives with that?

  15. @John Our North American ring consists of diverse paths between each DC. The primary path between Fremont and Newark was down due to a fiber cut which caused traffic to take the longer secondary path. It just so happens that we were working with our provider to get the secondary path shorter which happened in the last 48 hours. RTT for both paths should be around 146ms now. Please let us know if you see otherwise!

  16. Wow you guys are serious! Is transfer between DC’s “free” do you know? Just out of curiosity 🙂

Leave a Reply