In my past decade-plus of dealing with distributed denial-of-service attacks, I have noticed a few patterns in the way companies handle them. When an unprepared company is attacked for the first time, all hell usually breaks loose, and the lack of preparedness sets off chain reactions that make the situation worse. Addressing the most common mistakes ahead of time can help tremendously.
While the network and your services are exploding and bouncing offline, someone must stay calm enough to make good decisions. I’ve seen managers freak out and threaten everyone with the prospect of the company collapsing. I think they were trying to motivate people to find a solution, but they only created more chaos during an already tough situation. Once I watched employees hastily rip out the network’s firewalls and reconfigure the load balancers; they ended up with a bigger mess than before because they were reacting to an angry, stressed manager. Approach the problem with a sledgehammer and wishful thinking and you will create a disaster. Don’t let anyone make rash changes; follow your company’s change policies. Sit back, analyze the problem, isolate the actual device that is failing in the chain, and make an informed, and usually small, adjustment. If you’re in the tenth hour and things don’t seem to be improving, gather everyone, get away from the office, have a beer, relax for 15 minutes, and talk about something positive. The conversation after that beer might just save you and motivate everyone to do a good job; the solution will come!
This one is sadistically funny. Most companies host their email, VoIP system, IRC, wiki, databases, primary storage, and so on in the same colocation facility, behind the same network connection, that hosts their web sites and services. This is, for lack of a better word, stupid. All of your digital eggs are in one basket, and that basket is also holding a grenade. A DDoS attack ends up crippling the company’s entire infrastructure, leaving it with no phones, email, or communications of any kind. I’ve seen CEOs of massive companies contacting me from a Hotmail account and a cell phone because that was their only way of communicating from their multi-million-dollar offices. If you insist on being an “eggs in one basket” company, at least keep a list of vital email addresses and cell phone numbers on a notepad. That way you can call your IT people when everything is down.
If you are offline due to a DDoS attack, chances are your IT staff cannot log in to the remotely hosted hardware in your datacenters. The easy solution is to physically get them there. They can console into the hardware and actually see what is going wrong. It’s not fun, but it will result in a much faster resolution (make sure they have folding chairs, cash for the vending machines, and serial cables).
If you’re dealing with an attack and yours is like a lot of companies, it may be difficult to set up a traffic-monitoring port on your main routers. If you are on Ethernet, you can at least bridge a hub in-line, connect a laptop to it, and sniff or analyze the traffic. This is key: having eyes into the data stream really helps you figure out how to filter it. Pulling random cables and shutting down random services is not the solution. Make an informed call because you were thoughtful enough to have a hub or a SPAN/mirror port pre-configured.
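Once you have a laptop on that hub or mirror port, the first question is usually “who is sending all this?” As a minimal sketch, the following tallies the top source IPs from tcpdump-style text output; the sample lines and the exact line format are illustrative assumptions, not real capture data.

```python
import re
from collections import Counter

# Illustrative tcpdump-style lines (documentation IPs, not real traffic).
SAMPLE_LINES = [
    "12:00:01.000 IP 203.0.113.5.33012 > 198.51.100.7.80: Flags [S]",
    "12:00:01.001 IP 203.0.113.5.33013 > 198.51.100.7.80: Flags [S]",
    "12:00:01.002 IP 192.0.2.44.51000 > 198.51.100.7.80: Flags [S]",
    "12:00:01.003 IP 203.0.113.5.33014 > 198.51.100.7.80: Flags [S]",
]

# Capture the source IP (the part before the per-packet source port).
SRC_RE = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ >")

def top_talkers(lines, n=5):
    """Return the n most frequent source IPs seen in the capture lines."""
    counts = Counter()
    for line in lines:
        m = SRC_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for ip, hits in top_talkers(SAMPLE_LINES):
        print(f"{ip}\t{hits}")
```

In practice you would pipe live tcpdump output (or a saved pcap) through something like this; the point is that a ranked list of talkers turns “random cable pulling” into a targeted filter rule.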
There’s a reason you are the target of this attack. Obviously there can be many reasons for any given attack, but understanding the attacker’s motivation is key to building a better defense strategy. In the field I have observed a strange phenomenon: the people working at a victim company usually have a gut feeling about why they are being attacked, and so far that instinct has been correct. Some know they are being extorted; some feel a competitor is trying to shut them down. Others have a customer who has pissed someone off, and the attacker takes down the whole company just to silence that one customer. Temporarily taking the attacker’s real target offline may actually save the entire ship. Go with your gut: form a hypothesis and test it.
Your business was just smacked around by some bad guys, but what proof do you have? If you don’t have any, what do you think law enforcement is going to do for you? During the attack, lock down all your logs and assign someone within the company to be the custodian of the records. Save server logs, web logs, email logs, packet captures, network graphs, reports, anything, along with a timeline of events.
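A simple way to strengthen that custody is to hash everything at collection time, so the custodian can later show the records were not altered. This is a minimal sketch under assumptions: the file names, custodian address, and manifest format are placeholders, not a legal standard.

```python
import hashlib
import json
import time

def sha256_of(data: bytes) -> str:
    """Hex digest of the raw bytes of one preserved record."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(files: dict) -> dict:
    """files maps a filename to its raw bytes; returns an evidence manifest."""
    return {
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "custodian": "custodian@example.com",  # assumed placeholder
        "entries": {name: sha256_of(data) for name, data in files.items()},
    }

if __name__ == "__main__":
    # Illustrative evidence, not real log data.
    evidence = {
        "webserver-access.log": b"203.0.113.5 - - [attack traffic sample]\n",
        "firewall.log": b"DROP 203.0.113.5 -> 198.51.100.7:80\n",
    }
    print(json.dumps(build_manifest(evidence), indent=2))
```

Print the manifest, save it alongside the timeline, and have the custodian sign and date a hard copy; the hashes make any later tampering with the saved logs detectable.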
If the attack runs longer than you anticipated and no solution is in sight, you can at least get your site working well enough to communicate with your customers. Some web-hosting companies offer DDoS protection with service-level agreements as part of their business. For a small amount of money you can quickly sign up with several of them, upload a “Sorry we’re down, but contact us here” page, and flip your DNS to the cluster of hosted servers. Your customers will have more confidence in you, and the attackers may get bored because the attack has not completely shut everything down. Even if this plan doesn’t fully work, you have at least diverted some of the attack away from your network.
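The “sorry” page itself is worth preparing before you ever need it, so the flip is just an upload and a DNS change. Here is a minimal sketch; the company name and contact address are placeholders you would fill in ahead of time.

```python
# Minimal static "sorry" page to upload to a DDoS-protected host
# before flipping DNS at it. All details below are placeholders.
SORRY_TEMPLATE = """<!DOCTYPE html>
<html>
  <head><title>{company} - temporary outage</title></head>
  <body>
    <h1>Sorry, we're down.</h1>
    <p>We are working on the problem. Reach us at {contact}.</p>
  </body>
</html>
"""

def render_sorry_page(company: str, contact: str) -> str:
    """Fill in the outage page for one company."""
    return SORRY_TEMPLATE.format(company=company, contact=contact)

if __name__ == "__main__":
    print(render_sorry_page("Example Corp", "status@example.com"))
```

One caveat on the DNS flip: it only takes effect as quickly as your records’ TTLs allow, so lowering TTLs on your key records ahead of time is part of the same preparedness.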
The post-attack period can be a blur; everyone is exhausted and burnt out, and mostly people just want the day-to-day atmosphere to return to the status quo. But if you’ve been attacked and did not learn from it and improve your strategy for dealing with future attacks, you are not doing your job. Start a review the very next day, while everything is fresh, and make sure everyone is prepared. Go over what worked, what did not, and how to improve your systems’ overall technology.