In recent years the threat of Distributed Denial of Service (DDoS) attacks on the Internet has increased significantly. The rapidly growing threat is characterized by order-of-magnitude increases in attack bandwidth (from hundreds of millions of bits per second to hundreds of billions of bits per second) and a growing range of targets (from e-commerce sites, to financial institutions, to components of critical infrastructure). The methods of launching massive DDoS attacks are also changing, from the mass use of infected individual PCs to the use of powerful, richly connected hosting facilities and/or mobile applications.
Reflection/amplification attacks represent a particularly problematic form of DDoS. Reflection attacks rely on the ability of an infected or controlled host to spoof the source address of its queries to powerful Internet servers (e.g., DNS servers). By placing the address of the eventual attack target in the source address of its queries, a reflection attack turns the resources of the Internet’s own infrastructure against itself. These attacks are even more damaging when the attacker can use a very small query to generate a much larger response that is relayed toward the eventual target. This scaling of a small input to a much larger response is called “amplification”, and recent events have documented attacks of this type reaching 300+ Gbps.
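The arithmetic behind amplification can be illustrated with a short sketch. The packet sizes below are illustrative assumptions, not measurements from any specific attack:

```python
# Illustrative sketch of DNS amplification arithmetic. A small query elicits
# a much larger response, which the reflecting server sends to the spoofed
# (victim) source address. All byte counts here are assumed example values.

def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Ratio of the reflected response size to the attacker's query size."""
    return response_bytes / query_bytes

# Assume a ~64-byte DNS query that returns a ~3000-byte response
# (e.g., a large response carrying many records).
factor = amplification_factor(64, 3000)
print(f"amplification: {factor:.1f}x")

# At that factor, each 1 Gb/s of spoofed query traffic the attacker sends
# becomes roughly that many Gb/s of response traffic aimed at the victim.
print(f"reflected traffic per 1 Gb/s of queries: ~{factor:.1f} Gb/s")
```

The multiplier, not the attacker's own bandwidth, is what makes these attacks reach hundreds of gigabits per second.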
The servers exploited in such attacks include DNS servers (an estimated ~30 million are vulnerable to exploitation) and Network Time Protocol (NTP) servers (an estimated ~1 million are vulnerable). While we can and should focus on improving the implementation and configuration of these servers and application protocols to avoid their exploitation in DDoS attacks, the scope of that problem is vast, and many of these servers are deployed in equipment and networks that are not actively maintained.
For well over a decade, industry has developed specifications of techniques and deployment guidance for IP-level filtering to block network traffic with spoofed source addresses. These techniques vary greatly in their scope and applicability. Some are primarily focused on ingress filtering at the stub boundaries of the Internet and typically operate at the granularity of Internet Protocol (IP) prefix filtering. These techniques are often referred to as “BCP38” after one of the original Internet Engineering Task Force (IETF) specifications in this area, but they include a range of additional techniques not covered in BCP38 or its follow-on, BCP84.
Deployment of anti-spoofing techniques can be viewed as a cycle of configuration, performance analysis, and finally monitoring and verification of the deployed techniques. The lessons learned from monitoring and verification, along with changes in the network itself, then drive updates to the configuration, and the cycle repeats.
First, an organization must determine how and where to configure anti-spoofing controls. In the case of a small organization with a single Internet Service Provider (ISP), configuration may amount to identifying the organization’s IP address range and checking that only these source addresses are used in packets sent to the ISP. BCP38 is designed primarily for this basic case. The configuration becomes substantially more complex for organizations with multiple address blocks and multiple ISPs. Providing transit services makes this more complex still. BCP38 updates, such as BCP84, address some of these more challenging cases.
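The basic single-homed case can be sketched in a few lines: the ISP accepts a packet from the customer only if its source address falls within the customer's assigned prefixes. The prefixes below are documentation ranges (RFC 5737) used purely as assumptions:

```python
import ipaddress

# Minimal sketch of BCP38-style ingress filtering for the simple
# single-homed case. Real deployments express this as router ACLs or
# unicast reverse-path forwarding checks; this model shows only the logic.
# 198.51.100.0/24 is an assumed (documentation) customer prefix.

CUSTOMER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

def accept_source(src: str) -> bool:
    """Accept a packet only if its source address is in an assigned prefix."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(accept_source("198.51.100.17"))  # legitimate customer source
print(accept_source("203.0.113.5"))    # spoofed source, dropped
```

For multi-homed organizations, the equivalent check must account for every legitimately announced prefix on every link, which is where the complexity addressed by BCP84 arises.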
Once a configuration plan has been identified, one must consider how deployment will impact performance. Measures of performance should encompass both the complexity of the configuration and the delay added to packet processing. Configurations may be static or dynamic. For dynamic configurations, one must consider how frequently updates occur, investigate tolerance for update bursts, and note the lag time before needed updates take effect. For packet processing, a key consideration is whether packets are processed in-line or require additional paths through routers and other devices.
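One simple way to approach the per-packet delay question is to time the filtering check itself over a batch of synthetic traffic. The sketch below is a methodology illustration on assumed data, not a benchmark of any real router implementation; absolute numbers depend entirely on the host:

```python
import ipaddress
import random
import time

# Sketch of measuring per-packet filtering overhead: time a prefix-membership
# check over synthetic source addresses. The /24 is an assumed allowed prefix.

allowed = ipaddress.ip_network("198.51.100.0/24")
random.seed(1)
sources = [ipaddress.ip_address(random.getrandbits(32)) for _ in range(10_000)]

start = time.perf_counter()
dropped = sum(1 for addr in sources if addr not in allowed)
elapsed = time.perf_counter() - start

print(f"checked {len(sources)} packets in {elapsed * 1e3:.1f} ms "
      f"({elapsed / len(sources) * 1e6:.2f} us/packet); dropped {dropped}")
```

The same approach extends to dynamic configurations by re-running the measurement across configuration updates to observe lag and burst tolerance.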
Finally, on-going monitoring and verification are arguably the most important part of any deployment. Network operators must be able to verify that the configuration is not dropping valid traffic and should be able to confirm that invalid traffic is being dropped. Policies for logging and monitoring the dropped traffic are critical. Network operators must also confirm that performance metrics are within the expected range. False positives, false negatives, and performance concerns are expected to provide input for future configuration changes. The net result is a life cycle: configuration decisions are made, the performance impact of those decisions is assessed, and the system is deployed; on-going monitoring and validation then lead back to configuration updates.
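The verification metrics described above can be sketched as a small computation over filter logs. The log records and field names here are hypothetical assumptions; "spoofed" stands in for ground truth established by offline analysis:

```python
# Sketch of verification metrics from hypothetical filter logs. Each record
# notes whether the filter dropped the packet and whether it was actually
# spoofed (assumed ground truth). Field names are illustrative assumptions.

log = [
    {"dropped": True,  "spoofed": True},   # correct drop
    {"dropped": True,  "spoofed": False},  # false positive: valid traffic lost
    {"dropped": False, "spoofed": True},   # false negative: spoofed traffic passed
    {"dropped": False, "spoofed": False},  # correct pass
    {"dropped": True,  "spoofed": True},   # correct drop
]

false_positives = sum(1 for r in log if r["dropped"] and not r["spoofed"])
false_negatives = sum(1 for r in log if not r["dropped"] and r["spoofed"])

print(f"false positives: {false_positives}, false negatives: {false_negatives}")
```

Tracking these two counts over time is what closes the loop: rising false positives argue for loosening the configuration, rising false negatives for tightening it.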
NIST’s goals in this task are to work with the community to document and quantitatively characterize the applicability, effectiveness, and impact of various approaches to filtering spoofed IP traffic streams and then to develop consensus recommendations and deployment guidance that can drive adoption in Federal network environments and throughout the industry.
Technical Evaluation of Source Address Filtering Mechanisms:
NIST will survey the state of the art in source address filtering techniques and develop methods of quantitatively characterizing their scope of applicability, effectiveness, deployment considerations, and potential impact on network performance and reliability. Particular emphasis will be given to identifying how different network structures impact configuration and performance. Configuration and performance distinctions will be analyzed for edge networks (e.g., a bank or agency), small-scale transit networks (a university, larger agency, or regional exchange point), and large-scale transit networks (national and global ISPs).
NIST will develop deployment scenarios and testing infrastructures to empirically measure the scaling, performance and robustness properties of current filtering techniques. Edge networks and small-scale scenarios will be measured on a test bed of current state of the art implementations. Extensions to large-scale transit networks will be investigated if resources permit.
NIST will publish a technical report on the applicability and performance of current source address filtering technologies, and release the software tools and data sets used in the task. In addition, NIST will establish a testbed used to evaluate state-of-the-art source address filtering technologies. Note that NIST will not publish results about specific commercial products as a result of this task; results will address only generalized techniques, not specific implementations.
Deployment Guidance for Source Address Filtering Mechanisms
NIST will develop comprehensive technical guidance and a strategic roadmap for the ubiquitous deployment of source address filtering mechanisms. The envisioned scope of this guidance will focus on data traffic and will address plans for incremental deployment and continued maintenance of the proposed mechanisms. For the federal sector, the strategic roadmap will address the recommended relationship between this deployment plan and the specifications of current programs such as Trusted Internet Connections (TIC) and Networx.
NIST will publish a draft deployment guidance document for public comment and will socialize it with the operator and security communities. After a period of review, NIST will revise and publish a final deployment guidance document.
Attackers use NTP reflection in huge DDoS attack, Computerworld, http://www.computerworld.com/s/article/9246230/Attackers_use_NTP_reflection_in_huge_DDoS_attack