PayPal Payments

July 26, 2016 11:01 am

We’re happy to announce our newest feature: Payments via PayPal. You can now add credit to your account by transferring funds from a PayPal account.

While any customer can use PayPal to fund their account, new customers will still need to sign up using a credit card. You can use PayPal from then on.

This is in part because we don’t yet have the ability to automatically transfer funds from PayPal. If you intend to pay only via PayPal, you will need to ensure that you have enough credit on your Linode account to cover your next invoice. Otherwise, our system will attempt to collect any remaining balance from the credit card you have on file.

To start using PayPal just log into your Linode Manager and go to the Account -> Make a Payment tab.

Enjoy!

Introducing Fedora 24

June 22, 2016 3:45 pm

Today we are pleased to offer Fedora Server 24 for deployment on your Linode. The Fedora Project aims to provide the latest stable packages and approximately 13 months of support.

What’s New?

Fedora Server 24 installs lighter than previous versions because the Fedora Project removed some packages it felt were unnecessary for its server build. The Fedora 24 ChangeSet outlines the major library and software updates, such as GNU Compiler Collection 6, Node.js 5.10, and Python 3.5.

As always, you can use the stock Linux kernel we provide, or install a kernel of your choice using our kernel installation guide for KVM Linodes. For the full changelogs, see the Fedora 24 release notes.

How do I get Fedora 24?

A fresh disk image of Fedora Server 24 can be deployed directly from the Linode Manager. If you’re using an earlier version of Fedora, you can upgrade to version 24 using the DNF System Upgrade plugin. You’ll find instructions for doing this, along with notes on potential upgrade issues, on Fedora’s Wiki.

Linode’s 13th Birthday – Gifts for All!

June 16, 2016 12:28 pm

It was 13 years ago today that Linode opened its doors and earned its first customers. Now, 13 years later, it’s amazing how much we’ve grown. According to a study by CloudHarmony, Linode is the 4th largest cloud provider to the top 10,000 Alexa websites, following only Amazon, Rackspace, and IBM. Not bad. We have helped over half a million customers, launched nearly 12 million Linode servers, and now have more than 100 employees, all while remaining independent and privately owned.

None of this would have been possible without the dedication and engagement of our employees, who care a great deal about you, our customer, and about the work we do. We’ve attracted some of the most passionate, brilliant, and handsome people I’ve ever met (BTW, we’re hiring). So, a big thanks to everyone here at Linode.

We also want to thank you for your business and support all these years. As a token of our gratitude, we’re announcing free RAM upgrades for both new and existing customers. Here’s the breakdown:

Old Plan -> New Plan | Price
Linode 1 GB -> Linode 2 GB | $10/mo ($0.015/hr)
Linode 2 GB -> Linode 4 GB | $20/mo ($0.03/hr)
Linode 4 GB -> Linode 8 GB | $40/mo ($0.06/hr)
Linode 8 GB -> Linode 12 GB | $80/mo ($0.12/hr)
Linode 16 GB -> Linode 24 GB | $160/mo ($0.24/hr)
Linode 32 GB -> Linode 48 GB | $320/mo ($0.48/hr)
Linode 48 GB -> Linode 64 GB | $480/mo ($0.72/hr)
Linode 64 GB -> Linode 80 GB | $640/mo ($0.96/hr)
Linode 96 GB -> Linode 120 GB | $960/mo ($1.44/hr)

View the full plan list

The free upgrade is available immediately. At the top of your Linode’s dashboard you will see the upgrade banner. For many of you, it should be just the press of a button and a reboot; however, sometimes a migration will be required, which typically takes only a few minutes.

This upgrade is available only for KVM Linodes. Legacy Xen Linodes will have to first upgrade to KVM before being able to take advantage of the RAM upgrade. You can upgrade your Xen Linodes to KVM using the “Upgrade to KVM” link on the lower-right side of your Linode’s dashboard.

Unfortunately, since Tokyo is sold out, the upgrade is not available there. We hope to have our second Tokyo facility online before the end of this year.

In addition to the new facility, our teams are working hard on the new API (in public alpha), the new open source Linode Manager, and significant improvements to our networking infrastructure, including transit, peering, and bandwidth upgrades. We’re also very excited about our future office in Philadelphia: a beautiful neoclassical 110-year-old former bank building right in the heart of N3RD Street. Renovations are underway and we hope to be working out of there in the spring.

We continue to build upon the foundations of a company that is big enough to handle its customers’ needs but small enough to care. We hope we’re accomplishing that. You have our eternal gratitude for your business. Stay tuned for more!

Arch Linux Network Configuration Update

June 13, 2016 5:14 pm

We are happy to announce that we have just pushed a new Arch image, updating it to the 2016.06.01 “release”. This image disables the predictable network interface naming convention using the suggested udev mask, so the default Ethernet interface name returns to eth0.

If you currently rely on predictable interface naming for any of your Linodes running Arch, make sure to update your configuration scripts before deploying from the newest image. If you want to read more about this configuration or static networking, see the documentation on freedesktop.org, or consult our Static IP Configuration guide. Alternatively, you can let Network Helper configure your IP addresses statically for you.

Summer 2016 Events

May 4, 2016 2:19 pm

We’re rounding out our robust conference schedule with even more events than usual this summer! Take a look below and meet up with us.

DrupalCon New Orleans
New Orleans, LA | May 9-13
We are sponsoring a lounge at DrupalCon New Orleans. Stop by our space to relax and recharge – perks of the Big Easy. We’ll also be hosting a happy hour in our lounge on May 10th @ 3 PM. See y’all in the Crescent City.

PyCon
Portland, OR | May 28-June 5
We’re flying to the West Coast for PyCon this year. We love Portland and can’t wait to see everyone at the conference. Stop by booth #521 to chat with our team and get $50 credit on a new Linode account.

Southeast Linuxfest
Charlotte, NC | June 10-12
SELF is being held in Charlotte again for 2016. We are looking forward to seeing some familiar faces and meeting new ones. We’ll have a table loaded with new swag in the expo hall, and we’ll be at the bottle share. Nothing could be finer than to be in Carolina… in June. See you there.

Velocity
Santa Clara, CA | June 20-23
We’re heading back to Santa Clara for Velocity 2016! Stop by our booth (#515) in the expo hall. We’re also sponsoring the evening reception on June 22nd from 8-10 p.m. We look forward to seeing everyone in Cali.

Texas Linuxfest
Austin, TX | July 8-9
This will be our first year sponsoring Texas Linuxfest. Stop by our booth to get the latest, greatest Linode swag and make sure to attend the Linode after-party (location TBD). See everyone in Austin.

FOSSCON
Philadelphia, PA | August 20
We’re thrilled to again be part of FOSSCON – right in our own backyard, Philly. We’ll be hanging out all day talking cloud hosting and all things virtualized that you might be curious about. See you in our backyard this August.

Network Status Updates – April 2016

May 2, 2016 3:09 pm

In his January blog post, Alex Forster outlined our plan to upgrade the network across all of our data centers. We would like to share with everyone what has been done so far, and what remains to be done.

Staffing up

Since December 2015, we’ve added two new members to our Network Operations department with a total of 24 years of experience between them. Our new Network Operations Manager, Dan Spataro, brings with him a wealth of experience in the maintenance of backbone and data center networks. Owen Conway has also come over from LINX, one of the largest peering exchanges in the world. They have been working tirelessly along with our existing Network Operations crew to make substantial improvements to our network infrastructure, and have proven to be invaluable additions to our team.

Infrastructure Improvements

Status: In progress
Expected Completion: Mid May

Phase one of our plan is to install new higher capacity networking hardware in every one of our locations. These new devices will greatly increase our capacity to the internet and allow us to multi-home to many internet peers at once. Our Network Operations team has been rolling out this new hardware as quickly as possible and is currently working on many simultaneous turn-ups with our providers. By mid-May we hope to have these installations complete.

Dark Fiber Turn-ups

Status: In progress
Expected Completion: July

Due to infrastructure variations between geographic locations, we have had to get creative in our mission to turn up large amounts of additional bandwidth capacity at our various data centers in order to reach the tier 1 providers in each region. Our first plan involved the procurement of 200 Gbps of lit services from our data center locations to the well-connected carrier hotels. The provider estimated a late June install date for the first location. That timeline was unacceptable to us, so we decided to lease our own dark fiber and light it ourselves. The time to market is much quicker and the potential capacity is much greater. Instead of a fixed 200 Gbps of capacity per data center, we can turn up many terabits of capacity on demand. We expect to begin rolling out the optical gear in the next few weeks and hope to have the first locations up in mid-May.

Internet Capacity

Status: In progress
Expected Completion: July

Internet capacity is a large part of our DDoS mitigation strategy – the more capacity, the better. We are currently working on turning up terabits of capacity to the Internet and have started to achieve this by peering with hundreds of Internet peers over many high capacity links. We have also reached agreements with three more tier 1 providers for additional connectivity to the Internet across all of our data center locations. We will begin installation of the first batch of connectivity from those agreements next week in Frankfurt.

Our entire Network Operations team has been working really hard to bring everyone these upgrades as quickly as possible. Your patience and understanding over the last few months have been greatly appreciated, and we look forward to sharing the benefits of these upgrades with everyone soon.

KVM Update

March 15, 2016 1:54 pm
How’s KVM going?

KVM became the default hypervisor for new Linode customers six months ago, and it’s been going great. Since KVM’s introduction last summer, customers have created over 500,000 KVM Linodes with over 3 million launches. It’s been very stable and very fast, and our service’s reliability and uptime have improved because it runs on a simpler stack. Overall, the transition to KVM has been a huge success.

Since the launch, we have also introduced Glish, the graphical console that allows you to view and interact with your virtual machine’s graphical output. This allows you to run a self-contained graphical environment within your virtual machine or, using the full-virtualization capabilities of KVM Linodes, to run alternative operating systems like Plan 9 or even Windows.

To give you an idea of our KVM fleet size: just about 50% of all Linodes are now running on KVM, with many of them being pre-existing Xen Linodes that migrated to KVM.

How do I upgrade a Linode from Xen to KVM?

On a Xen Linode’s dashboard, you will see an “Upgrade to KVM” link on the right sidebar. It’s a one-click migration to upgrade your Linode to KVM from there. Essentially, our KVM upgrade means you get a much faster Linode just by clicking a button.

Please note: Our Tokyo facility is currently sold out and does not offer an upgrade to KVM. We are looking into adding more capacity in Tokyo and will notify those customers once upgrades become available.

KVM will be the only option for new Linodes, starting May 1, 2016

If you are an older customer, new Linodes you create may still be placed onto a Xen hypervisor. You can change this default on your Account Settings page in the Linode Manager. However, as of May 1, 2016, the only option will be KVM. In other words, starting on May 1, it will not be possible to create a Xen-based Linode.

What will be the fate of existing Xen Linodes?

Existing Xen-based Linodes will be fine. However, in the near future we will begin to consolidate Xen Linodes onto fewer physical servers, which will mean scheduled migrations with periods of downtime. Don’t worry – if you will be affected, we’ll provide plenty of advance notice when those migrations are planned.

More information:

Linode Spring 2016 Conference Schedule

March 8, 2016 12:57 pm

It’s that time of year again – conference season is kicking off! Check out our schedule below to see where we’ll be through the end of April and make sure to stop by and check us out. We’ll be talking cloud hosting and giving out new swag – you don’t want to miss us!

WordCamp Atlanta
Atlanta, GA | March 18-20
We’re spending the week down South for WordCamp Atlanta and are looking forward to meeting all the WordPress enthusiasts who will be in attendance. Stop by our table to pick up swag and stick around for the afterparty to hang out with our team.

HackPrinceton
Princeton, NJ | April 1-3
We’re road-tripping back up to Mid-Jersey for another edition of HackPrinceton. It seems to only get better every year and our mentors will be available for the entire weekend (seriously, they practically sleep there.) We can’t wait to see what amazing projects students produce this year. We will also be giving out a special prize, so follow us on Twitter to find out what it is!

Drupaldelphia
Philadelphia, PA | April 8
One of the best Drupalcamps around and it happens right in our backyard – how lucky are we? Join us in Philly on April 8th to brag about how you use Drupal on your Linode. Stop by our booth to snag some swag and catch an invite to our impromptu happy hour happening after the event.

Philly Tech Week
Philadelphia, PA | April 29-May 6
Thousands of tech enthusiasts flock to the City of Brotherly Love for Philly Tech Week. We are sponsoring Dev Day on May 4th along with the happy hour from 6-8 p.m. Come by and get a drink on Linode! More details here.

Security Investigation Retrospective

February 19, 2016 3:30 pm

On January 5, 2016, we issued a password reset for all Linode customers during our investigation into the unauthorized access of three customer accounts. We have been working with federal authorities on these matters and their criminal investigations are ongoing. Today we are sharing our findings and those of the third-party security firm we retained to assist us with this investigation.

Before diving in we’d like to reassure you that your account information is safe. We found only three customers affected by this incident and have resolved these issues with them directly.

What Happened

This is a complex retrospective of two separate investigations, one in July and another in December. While the cases share similarities, we have no evidence to support the two being related. Nevertheless, here is a full timeline of the events as we have come to understand them.

On July 9 a customer notified us of unauthorized access into their Linode account. The customer learned that an intruder had obtained access to their account after receiving an email notification confirming a root password reset for one of their Linodes. Our initial investigation showed the unauthorized login was successful on the first attempt and resembled normal activity.

On July 12, in anticipation of law enforcement’s involvement, the customer followed up with a preservation request for a Linode corresponding to an IP address believed to be involved in the unauthorized access. We honored the request and asked the customer to provide us with any additional evidence (e.g., log files) that would support the Linode being the source of malicious activity. Neither the customer nor law enforcement followed up and, because we do not examine customer data without probable cause, we did not analyze the preserved image.

On the same day, the customer reported that the user whose account was accessed had lost a mobile device several weeks earlier containing the 2FA credentials required to access the account, and explained that the owner attempted to remotely wipe the device some time later. In addition, this user employed a weak password. In light of this information, and with no evidence to support that the credentials were obtained from Linode, we did not investigate further.

On December 9 an independent security researcher contacted us. The researcher claimed to be tracking an individual who had stolen credentials from numerous other service providers. The researcher wanted to make us aware that the individual may have made attempts to use these stolen credentials to log in to some of our customers’ accounts.

Our initial investigation concluded that the IPs provided had, in fact, been used to log into three accounts on the first attempt. In other words, the user arrived at the Linode Manager login page with the credentials necessary to log in, just as any regular user would. That same day we contacted the customers and received confirmation from each that the activities were suspicious. We also confirmed that none of these accounts had multi-factor authentication enabled and all had employed weak passwords.

On December 13 we started necessary fleet-wide Xen Security Advisory (XSA) maintenance, rebooting servers in their local nighttime hours around the clock. Although unrelated to the investigation, this continued through December 18 and was a significant resource constraint.

On December 14, although we had discovered no evidence of an intrusion into our infrastructure, we began interviewing third-party security firms and contacted multiple law enforcement agencies. We also dedicated all available internal resources to the effort and began scrutinizing our environment to identify any evidence of abuse or misuse.

On December 17, because of the similarities between this case and the one from July, we reopened the July case and concluded we now had sufficient reason to examine the image retained from that investigation.

Linode uses TOTP to provide two-factor authentication. This is an algorithm that uses a secret key stored on, and shared between, our server and the customer’s two-factor authentication app (such as Google Authenticator). The algorithm generates a time-sensitive code that the user provides during login as an additional authentication component. We encrypt these secret keys when storing them in our database.
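
For readers unfamiliar with the mechanism, here is a minimal Python sketch of TOTP code generation (RFC 6238 with HMAC-SHA1, a 30-second time step, and 6-digit codes). This is an illustration of the algorithm, not our implementation, and the Base32 secret shown is only a placeholder; it demonstrates why possession of a customer’s decrypted secret key is enough to produce valid login codes:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // step              # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authenticator app derive the same code from the shared
# secret, so anyone holding the secret can generate valid codes.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, for illustration only
```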

After examining the image from our July investigation, we discovered software capable of generating TOTP codes if provided a TOTP key. We found software implementing the decryption method we use to secure TOTP keys, along with the secret key we use to encrypt them. We also found commands in the bash history that successfully generated a one-time code. Though the credentials found were unrelated to any of the unauthorized Linode Manager logins made in December, the discovery of this information significantly changed the seriousness of our investigation.

On December 21 our third-party security partner joined the investigation. This team proceeded with a forensic analysis to identify any unauthorized system-level activity that may have permitted an intruder to access customer credentials contained in our database. The team also searched for evidence of web application misuse that would have provided lateral movement from one Linode Manager account to another. Additionally, the team initiated a targeted vulnerability assessment with the objective of identifying a possible attack vector for obtaining access to the Linode database.

On December 25 DDoS attacks against our infrastructure began. Although we do not have any evidence to support these attacks being related to the incidents of unauthorized access, they forced us to pull resources away from our investigation. This, combined with employees being away for the holidays, created additional challenges for our support and operations teams.

On January 5 our security partner concluded its investigation and we issued the password reset. Our internal security team continued to review our infrastructure for the next several weeks and developed a detailed plan for improving our overall security.

Findings

Our security partner’s investigation concluded there was no evidence of abuse or misuse of Linode’s infrastructure that would have resulted in the disclosure of customer credentials. Furthermore, the security partner’s assessment of our infrastructure and applications did not yield a vector that would have provided this level of access.

Linode’s security team did discover a vulnerability in Lish’s SSH gateway that could potentially have been used to obtain the information discovered on December 17, although we have no evidence to support this supposition. We immediately fixed the vulnerability.

Other theories we considered that could explain the unauthorized access include an external compromise, such as the previously mentioned weak passwords being reused with other online services, or phishing attacks against those users.

What We’re Doing About It

We are using what we’ve learned to make comprehensive improvements to our infrastructure, including areas unrelated to the incident. Here are a few of the things we’ve been working toward:

Authentication microservice: Several of our applications (such as the Linode Manager and Lish) perform user authentication. Previously, these applications performed this function by having access to the credential information directly within our database, and then performing comparisons themselves. We’ve developed a new approach that involves a carefully secured and monitored microservice that maintains ownership of all customer credentials. Under this method, when an application requires user authentication, the microservice is able to validate the credentials by returning a simple “yes” or “no.” Applications won’t have access to the credential information. In fact, the databases that power our infrastructure won’t contain credential information at all when the rollout of this microservice is complete. Also, customer passwords, previously stored as salted SHA-256 hashes with thousands of rounds, will be stored using bcrypt and will be upgraded seamlessly on a subsequent login.
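
As a rough sketch of that verify-then-rehash pattern (the field names, round counts, and legacy hashing details below are illustrative assumptions, not our actual schema), the idea looks something like this:

```python
import hashlib
import hmac
import bcrypt  # third-party dependency, assumed available

# Hypothetical credential record shapes, for illustration only:
#   {"scheme": "sha256", "salt": b"...", "rounds": 5000, "hash": b"..."}
#   {"scheme": "bcrypt", "hash": b"..."}

def _legacy_sha256(password, salt, rounds):
    # Stand-in for the old salted, iterated SHA-256 scheme (details assumed).
    digest = salt + password
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

def authenticate(record, password):
    """Return only True or False; callers never see the stored credential."""
    pw = password.encode()
    if record["scheme"] == "bcrypt":
        return bcrypt.checkpw(pw, record["hash"])
    # Legacy path: verify against the old hash...
    ok = hmac.compare_digest(
        _legacy_sha256(pw, record["salt"], record["rounds"]), record["hash"]
    )
    if ok:
        # ...and transparently upgrade the record to bcrypt on a successful login.
        record.update(scheme="bcrypt", hash=bcrypt.hashpw(pw, bcrypt.gensalt()))
        record.pop("salt", None)
        record.pop("rounds", None)
    return ok
```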

Linode Manager notifications: We will be working to enhance the notifications our customers receive about their account activity, including alerts for login attempts from new IP addresses and failed logins.

CC Tokenization: Although our investigation yielded no evidence of credit card information being accessed, we are taking advantage of our payment processor’s tokenization feature to remove the risk associated with storing credit card information.

Policy: We’ve been developing multiple policies derived from the NIST framework on topics ranging from clean-desk to password standards. A significant new policy is the creation of “security zones” for sensitive elements of infrastructure, like our database and authentication servers. The results of those efforts have greatly reduced the number of employees that have access to sensitive systems and data.

Hiring: In addition to the changes outlined above, we are hiring a senior-level security expert to join our company and lead a larger team of engineers focused full-time on security. This team will not only ensure we are following current best practices but will also expand our written policies, formalize our provisioning procedures and fundamentally ensure our policies are supported by process and accountability.

A New Linode API: Our most important long-term strategy is a rewrite of our legacy ColdFusion codebase, which gives us the opportunity for a fresh start and a chance to apply the lessons we have learned over the past 13 years. To do this, we’ve been building a new Linode API that’s stateless, RESTful, and implemented in Python. We’ve been working on this for many months and will announce a public alpha of the new API in the next few weeks.

Open-source Linode Manager: This new API will be the foundation for all things coming in the future, including an open-source Linode Manager which will replace the current manager.

Looking Ahead

We recognize we have room for improvement when it comes to communication and transparency. XSAs and persistent DDoS attacks throughout December notwithstanding, we should have communicated the nature and extent of the DDoS attacks and this security incident to our customers sooner. To say we were resource constrained at the time would be a fair assessment. Still, we could have done better and have since made procedural changes to ensure a team member is appointed to important events like these in the future to facilitate frequent and transparent communication with our customers.

We are incredibly grateful for the customers who have supported us throughout these events. We’ve heard your recommendations and felt the support you’ve provided over the past few months. Know that we continue listening and acting on your feedback.

We’ll conclude by saying how sorry we are if we’ve let you down. We value the trust you’ve placed in us as your hosting provider and are committed to earning that trust every day. We hope the details provided here clear up some misinformation and demonstrate our willingness to address opportunities for improvement, do the right thing, and increase communication and transparency with you, our customers.

The Twelve Days of Crisis – A Retrospective on Linode’s Holiday DDoS Attacks

January 29, 2016 4:30 pm

View a printable version of this post here.

Over the twelve days between December 25th and January 5th, Linode saw more than a hundred denial-of-service attacks against every major part of our infrastructure, some severely disrupting service for hundreds of thousands of Linode customers.

I’d like to follow up on my earlier update by providing some more insight into how we were attacked and what we’re doing to stop it from happening again.

Linode Attack Points

Pictured above is an overview of the different infrastructure points that were attacked. Essentially, the attacker moved up our stack in roughly this order:

  • Layer 7 (“400 Bad Request”) attacks toward our public-facing websites
  • Volumetric attacks toward our websites, authoritative nameservers, and other public services
  • Volumetric attacks toward Linode network infrastructure
  • Volumetric attacks toward our colocation provider’s network infrastructure

Most of the attacks were simple volumetric attacks. A volumetric attack is the most common type of distributed denial-of-service (DDoS) attack, in which a cannon of garbage traffic is directed toward an IP address, wiping the intended victim off the Internet. It’s the virtual equivalent of intentionally causing a traffic jam using a fleet of rental cars, and the pervasiveness of these types of attacks has caused hundreds of billions of dollars in economic loss globally.

Typically, Linode sees several dozen volumetric attacks aimed toward our customers each day. However, these attacks almost never affect the wider Linode network because of a tool we use to protect ourselves called remote-triggered blackholing. When an IP address is “blackholed,” the Internet collectively agrees to drop all traffic destined to that IP address, preventing both good and bad traffic from reaching it. For content networks like Linode, which have hundreds of thousands of IPs, blackholing is a blunt but crucial weapon in our arsenal, giving us the ability to ‘cut off a finger to save the hand’ – that is, to sacrifice the customer who is being attacked in order to keep the others online.

Blackholing fails as an effective mitigator under one obvious but important circumstance: when the IP that’s being targeted – say, some critical piece of infrastructure – can’t go offline without taking others down with it. Examples that usually come to mind are “servers of servers,” like API endpoints or DNS servers, that make up the foundation of other infrastructure. While many of the attacks were against our “servers of servers,” the hardest ones for us to mitigate turned out to be the attacks pointed directly at our own and our colocation providers’ network infrastructure.

Secondary Addresses

The attacks leveled against our network infrastructure were relatively straightforward, but mitigating them was not. As an artifact of history, we segment customers into individual /24 subnets, meaning that our routers must have a “secondary” IP address inside each of these subnets for customers to use as their network gateways. As time has gone by, our routers have amassed hundreds of these secondary addresses, each a potential target for attack.

Of course, this was not the first time that our routers have been attacked directly. Typically, special measures are taken to send blackhole advertisements to our upstreams without blackholing in our core, stopping the attack while allowing customer traffic to pass as usual. However, we were unprepared for the scenario where someone rapidly and unpredictably attacked many dozens of different secondary IPs on our routers. This was for a couple of reasons. First, mitigating attacks on network gear required manual intervention by network engineers, which was slow and error-prone. Second, our upstream providers were only able to accept a limited number of blackhole advertisements in order to limit the potential for damage in case of error.

After several days of playing cat-and-mouse games with the attacker, we were able to work with our colocation providers to either blackhole all of our secondary addresses, or to instead drop the traffic at the edges of their transit providers’ networks where blackholing wasn’t possible.

Cross-Connects

The attacks targeting our colocation providers were just as straightforward, but even harder to mitigate. Once our routers were no longer able to be attacked directly, our colocation partners and their transit providers became the next logical target – specifically, their cross-connects. A cross-connect can generally be thought of as the physical link between any two routers on the Internet. Each side of this physical link needs an IP address so that the two routers can communicate with each other, and it was those IP addresses that were targeted.

As was the case with our own infrastructure, this method of attack was not novel in and of itself. What made this method so effective was the rapidity and unpredictability of the attacks. In many of our datacenters, dozens of different IPs within the upstream networks were attacked, requiring a level of focus and coordination between our colocation partners and their transit providers that was difficult to maintain. Our longest outage by far – over 30 hours in Atlanta – can be directly attributed to frequent breakdowns in communication between Linode staff and people who were sometimes four degrees removed from us.

We were eventually able to completely close this attack vector after some stubborn transit providers finally acknowledged that their infrastructure was under attack and successfully put measures in place to stop the attacks.

Lessons Learned

On a personal level, we’re embarrassed that something like this could have happened, and we’ve learned some hard lessons from the experience.

Lesson one: don’t depend on middlemen

In hindsight, we believe the longer outages could have been avoided if we had not been relying on our colocation partners for IP transit. There are two specific reasons for this:

First, in several instances we were led to believe that our colocation providers simply had more IP transit capacity than they actually did. Several times, the amount of attack traffic directed toward Linode was so large that our colocation providers had no choice but to temporarily de-peer with the Linode network until the attacks ended.

Second, successfully mitigating some of the more nuanced attacks required the direct involvement of senior network engineers from different Tier 1 providers. At 4am on a holiday weekend, our colocation partners became an extra, unnecessary barrier between ourselves and the people who could fix our problems.

Lesson two: absorb larger attacks

Linode’s capacity management strategy for IP transit has been simple: when our peak daily utilization starts approaching 50% of our overall capacity, it’s time to get more links.

This strategy is standard for carrier networks, but we now understand that it is inadequate for content networks like ours. To put some real numbers on this, our smaller datacenter networks have a total IP transit capacity of 40Gbps. This may seem like a lot of capacity to many of you, but in the context of an 80Gbps DDoS that can’t be blackholed, having only 20Gbps worth of headroom leaves us with crippling packet loss for the duration of the attack.
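
Treating the legitimate traffic level as an assumption based on the roughly 50% utilization threshold above, a quick back-of-the-envelope calculation shows why:

```python
# Back-of-the-envelope figures in Gbps; purely illustrative.
capacity = 40    # total IP transit capacity of a smaller datacenter
legit = 20       # peak legitimate traffic, roughly 50% of capacity
attack = 80      # volumetric attack that cannot be blackholed

offered = legit + attack
dropped = max(0.0, 1 - capacity / offered)  # share of packets lost once links saturate
print(f"roughly {dropped:.0%} of traffic dropped")  # -> roughly 60% of traffic dropped
```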

Lesson three: let customers know what’s happening

It’s important that we acknowledge when we fail, and our lack of detailed communication during the early days of the attack was a big failure.

Providing detailed technical updates during a time of crisis can only be done by those with detailed knowledge of the current state of affairs. Usually, those people are also the ones who are firefighting. After things settled down and we reviewed our public communications, we came to the conclusion that our fear of wording something poorly and causing undue panic led us to speak more ambiguously than we should have in our status updates. This was wrong, and going forward, a designated technical point-person will be responsible for communicating in detail during major events like this. Additionally, our status page now allows customers to be alerted about service issues by email and SMS text messaging via the “Subscribe to Updates” link.

Our Future is Brighter Than our Past

With these lessons in mind, we’d like you to know how we are putting them into practice.

First, the easy part: we’ve mitigated the threat of attacks against our public-facing servers by implementing DDoS mitigation. Our nameservers are now protected by Cloudflare, and our websites are now protected by powerful commercial traffic scrubbing appliances. Additionally, we’ve made sure that the emergency mitigation techniques put in place during these holiday attacks have been made permanent.

By themselves, these measures put us in a place where we’re confident that the types of attacks that happened over the holidays can’t happen again. Still, we need to do more. So today I’m excited to announce that Linode will be overhauling our entire datacenter connectivity strategy, backhauling 200 gigabits of transit and peering capacity from major regional points of presence into each of our locations.

Upgraded Newark Diagram
Carriers shown are for example purposes only. All product names and logos are the property of their respective owners.

Here is an overview of forthcoming infrastructure improvements to our Newark datacenter, which will be the first to receive these capacity upgrades. The headliner of this architecture is the optical transport networks that we have already begun building out. These networks will provide fully diverse paths to some of the most important PoPs in the region, giving Linode access to hundreds of different carrier options and thousands of direct peering partners.

Compared to our existing architecture, the benefits of this upgrade are obvious. We will be taking control of our entire infrastructure, right up to the very edge of the Internet. This means that, rather than depending on middlemen for IP transit, we will be in direct partnership with the carriers we depend on for service. Additionally, Linode will quintuple the amount of bandwidth currently available to us, allowing us to absorb extremely large DDoS attacks until they are properly mitigated. As attack sizes grow in the future, this architecture will quickly scale to meet their demands without any major new capital investment.

Final Words

Lastly, sincere apologies are in order. As a company that hosts critical infrastructure for our customers, we are trusted with the responsibility of keeping that infrastructure online. We hope the transparency and forward thinking in this post can help regain some of that trust.

We would also like to thank you for your kind words of understanding and support. Many of us had our holidays ruined by these relentless attacks, and it’s a difficult thing to try and explain to our loved ones. Support from the community has really helped.

We encourage you to post your questions or comments below.