DoS And DDoS Attacks — The Origin Of A Species


Short Bytes: Over the last few months, we’ve seen some of the largest and most disruptive DDoS attacks to date. You probably didn’t know that DoS and DDoS attacks are so effective because they are based on war strategies that have been fine-tuned over centuries. Read on to learn how, despite being worlds apart in technology, these attacks are founded on some of the most ancient of practices.

Strategy in Attacking — War and network security

It might not be readily obvious, but many approaches to information security elegantly parallel those of ancient military strategists. We have Trojan viruses, named for the Trojan Horse used in the siege of Troy; ransomware, which holds your files for ransom; and the topic of this article, denial of service attacks, which limit the resources of the opponent. By limiting an opponent’s resources, you gain a certain amount of control over the opponent’s subsequent actions. This is a practice that has worked extremely well for both war strategists and cyber criminals.

In the case of a war strategist targeting an opponent, we can easily think of the types of resources that could be restricted to limit the opponent’s capability and capacity. Cutting off food, water, and building supplies would quickly burden the opponent. Computers are a little different, though. Network services such as DNS, web serving, email, and storage all have different infrastructural requirements, but a single pillar underpins them all: network availability. Without network availability, there is no way to access the service. Other resources can be starved as well, like memory and CPU, though these are sometimes only applicable to specific types of services.

Knowing which resource to manipulate is only half the maneuver; an efficient way to affect that resource must also be found. War strategists would poison water, burn crops, and set up roadblocks, and there are information technology analogs of each. The obvious analog to poison might be a virus, but a virus won’t necessarily affect the network or the service. Instead, the data sent to the service can be poisoned. By corrupting that data, we can slow the service down and potentially crash it, since corrupted data often takes longer to process, just like a body healing from poison. That leaves the service with one of two options: somehow filter the poison from the good data, or consume the poisoned data and deal with the consequences.
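One way a service can "filter the poison" is to validate incoming data cheaply before doing any expensive processing. The following is a minimal sketch, not taken from any real service, assuming a hypothetical JSON-based handler with an illustrative size cap:

```python
import json

MAX_PAYLOAD_BYTES = 4096  # illustrative cap: reject oversized payloads before parsing


def handle_request(raw: bytes) -> dict:
    """Cheaply reject malformed ("poisoned") data before the costly work."""
    if len(raw) > MAX_PAYLOAD_BYTES:
        # Dropping early costs almost nothing; parsing it would not.
        return {"status": 413, "error": "payload too large"}
    try:
        data = json.loads(raw)  # parsing corrupted data is the expensive step
    except (ValueError, UnicodeDecodeError):
        return {"status": 400, "error": "malformed payload"}
    return {"status": 200, "echo": data}
```

The alternative, consuming the poisoned data and dealing with the consequences, is what happens when the `try` block is missing: one malformed request can tie up or crash the worker.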

Secondly, there’s the burning of crops. The larger a service, the more memory it needs; this, like food, is directly proportional to the size of the opponent. By consuming memory with junk information, the service is left with reduced capacity for legitimate information, and when any computer’s memory fills, it becomes extremely slow. Lastly, a roadblock stops anything from reaching or leaving the opponent, which is an uncanny reflection of limiting the network traffic of a service.
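A common defense against the memory-exhaustion variant is to bound how much the service will ever hold, shedding excess load instead of absorbing it. A minimal sketch, using a hypothetical fixed-capacity inbox (the capacity is an illustrative choice, not a recommendation):

```python
from collections import deque


class BoundedInbox:
    """Fixed-capacity message queue: once full, new messages are dropped
    rather than allowed to consume ever more memory."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0  # count of messages shed under load

    def accept(self, msg) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # shed load instead of exhausting memory
            return False
        self.queue.append(msg)
        return True
```

Under a junk-data flood, some legitimate messages will be dropped too, but the service itself stays responsive, which is usually the better trade.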

The best denial of service attacks, like the best war strategists, will leverage all of these methods wherever possible. But what happens if the opponent is larger and has more resources than a single attacker? Typically, the attacker will use whichever resource they have the most of, and sometimes that means acquiring more before attacking, often by assembling a network of compromised nodes under the attacker’s control, called a botnet. The one thing that scales well with botnets is network output, which makes limiting the opponent’s network availability that much easier. This approach has two benefits: the attack is distributed across many geographic areas and nodes, and because it does not come from a single location, it cannot be traced as easily back to the attacker.

If the combined network connection speeds of the botnet exceed the network connection speed of the opponent, the botnet can saturate the opponent’s connection with traffic, making it extremely difficult for any legitimate traffic to get through. This is our roadblock analogy. There is no need for specialized packets that cause abnormal memory or CPU consumption, though those would certainly help in further reducing the availability of the service.
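The arithmetic behind saturation is simple. With purely hypothetical numbers, a botnet of ordinary home connections against a server with a 10 Gbps uplink, the combined output dwarfs the target:

```python
# Hypothetical figures for illustration only.
BOT_COUNT = 10_000            # compromised nodes in the botnet
BOT_UPLOAD_MBPS = 10          # each node's modest upload speed
TARGET_UPLINK_MBPS = 10_000   # the target's 10 Gbps connection

botnet_output_mbps = BOT_COUNT * BOT_UPLOAD_MBPS        # 100,000 Mbps = 100 Gbps
saturation_ratio = botnet_output_mbps / TARGET_UPLINK_MBPS

print(botnet_output_mbps, saturation_ratio)  # 100000 10.0
```

At ten times the target’s capacity, roughly nine in ten packets, legitimate or not, never make it through the roadblock.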

Strategy in Defending

When there are so many ways your service can be targeted with a denial of service attack, how do you defend? There’s a very simple answer to that. And it, too, has roots that go back even further than those of the war strategists.

You simply watch for anything out of the ordinary. By monitoring traffic before you let it reach your application, you can filter out and drop any traffic that is detected to be malicious. The problem lies in determining which traffic is malicious. This is especially difficult when legitimate traffic is indistinguishable from the malicious, which happens when the malicious traffic is normal traffic used maliciously, as in the DDoS attack of October 21st, 2016. The traffic that hit Dyn’s servers was made up of completely normal DNS requests and, because it was coming from so many different nodes, could not be distinguished from legitimate requests.
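When the packets themselves look normal, one common fallback is to limit each source’s request rate, for example with a token bucket. This is a hedged sketch, not how Dyn mitigated its attack, and it helps little against a very large botnet where every node individually stays under the limit:

```python
class TokenBucketLimiter:
    """Per-source token bucket: a source sending faster than `rate`
    requests/second is dropped once its `burst` allowance is spent."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.state = {}  # source -> (tokens remaining, last timestamp)

    def allow(self, source: str, now: float) -> bool:
        tokens, last = self.state.get(source, (self.burst, now))
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        if allowed:
            tokens -= 1  # spend one token per accepted request
        self.state[source] = (tokens, now)
        return allowed
```

A source that respects the rate never notices the limiter; a source that floods gets its excess silently dropped before it reaches the application.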

What to do when you can’t identify the malicious traffic is a matter of debate. Should you “black hole” the traffic, dropping all the excess? Or should you let it through in the hope that your service can handle it? One thing that rings through all DDoS prevention material is that you should have a plan of action for when a DDoS attack is detected. Incorporating DDoS attacks into your disaster recovery plan is essential. What that plan consists of will vary depending on your service and your users, but it is important to have one.

DDoS attacks are becoming both more frequent and larger in bandwidth, which means the services we use are becoming increasingly susceptible. One of the ways we can help reduce the number of attacks is by ensuring that our computers, and the computers of the people close to us, are clean of all kinds of viruses and malware, including botnet clients.


Devin McElheran

IT professional by day and various hobbies by night.
