Random Failures Vs. Deliberate Attacks

[Holme-Kim-2002 (doi)]

Petter Holme and Beom Jun Kim,
“Attack vulnerability of complex networks”,
Physical Review E, vol. 65, no. 5, 2002, art. no. 056109

ResiliNets Keywords: Attacks

Abstract: “We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.”

Notes: importance and relevance to ResiliNets
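
As a rough illustration of the paper's setup (not the authors' code), the Python sketch below implements the recalculated-degree attack and the two performance measures named in the abstract: the average inverse geodesic length (networkx's global efficiency) and the relative size of the largest connected subgraph. The Barabási-Albert test graph and the 5% removal fraction are assumptions made for this sketch only.

```python
# Sketch of a recalculated-degree vertex attack, with the two measures used in
# the paper: average inverse geodesic length and largest-component size.
import networkx as nx

def largest_component_fraction(G, n0):
    """Size of the largest connected subgraph relative to the original order n0."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / n0

def recalculated_degree_attack(G, fraction=0.05):
    """Repeatedly remove the currently highest-degree vertex; the degree is
    recalculated after every removal."""
    G = G.copy()
    n0 = G.number_of_nodes()
    for _ in range(int(fraction * n0)):
        v = max(G.degree, key=lambda kv: kv[1])[0]
        G.remove_node(v)
    return G

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(1000, 2, seed=1)   # assumed test topology
    n0 = G.number_of_nodes()
    print("before: efficiency=%.3f  S=%.3f"
          % (nx.global_efficiency(G), largest_component_fraction(G, n0)))
    Ga = recalculated_degree_attack(G)
    print("after : efficiency=%.3f  S=%.3f"
          % (nx.global_efficiency(Ga), largest_component_fraction(Ga, n0)))
```

Swapping the degree lookup for nx.betweenness_centrality(G) gives the recalculated-betweenness variant; computing either quantity once on the initial network gives the other two strategies compared in the paper.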

[Albert-Jeong-Barabasi-2000 (doi)]

Réka Albert, Hawoong Jeong and Albert-László Barabási,
"Error and attack tolerance of complex networks",
Nature, vol. 406, pp. 378-382, 27 July 2000

ResiliNets Keywords: error, attack, exponential network, scale-free network

Abstract: "Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3-5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks."


[Park-Khrabrov-Pennock-Lawrence-Giles-Ungar-2003 (doi)]

Seung-Taek Park, Alexy Khrabrov, David M. Pennock, Steve Lawrence, C. Lee Giles and Lyle H. Ungar,
"Static and Dynamic Analysis of the Internet's Susceptibility to Faults and Attacks",
IEEE INFOCOM 2003, San Francisco, 30 March - 3 April 2003

ResiliNets Keywords: faults, attacks, Internet susceptibility, static and dynamic analysis

Abstract: "We analyze the susceptibility of the Internetto random faults, malicious attacks, and mixtures of faultsand attacks. We analyze actual Internet data, as well assimulated data created with network models. The networkmodels generalize previous research, and allow generationof graphs ranging from uniform to preferential, and fromstatic to dynamic. We introduce new metrics for analyzingthe connectivity and performance of networks which im-prove upon metrics used in earlier research. Previous re-search has shown that preferential networks like the Inter-net are more robust to random failures compared to uni-form networks. We find that preferential networks, includ-ing the Internet, are more robust only when more than 95%of failures are random faults, and robustness is measuredwith the average diameter. The advantage of preferentialnetworks disappears with alternative metrics, and when asmall fraction of faults are attacks. We also identify dy-namic characteristics of the Internet which can be used tocreate improved network models, allowing more accurate analysis for the future Internet, for example facilitating thedesign of network protocols with optimal performance inthe future, or predicting future attack and fault tolerance.We find that the Internet has been becoming more prefer-ential as it evolves. The average diameter has been stable oreven decreasing as the number of nodes has been increas-ing. The Internet has been becoming more robust to ran-dom failures over time, but has also become more vulnera-ble to attacks."


[Magoni-2003 (doi)]

Damien Magoni,
"Tearing Down the Internet",
IEEE Journal on Selected Areas in Communications, vol. 21, no. 6, August 2003, pp. 949-960

ResiliNets Keywords: Attack, connectivity, Internet, topology

Abstract: "Recent advances in scale-free networks have claimed that their topologies are very weak against attacks. The inhomogeneous connectivity distribution of large-scale current communication networks, such as the Internet, could be exploited by evil hackers in order to damage these systems. However, there have not been many studies on the approaches and consequences of such targeted attacks. In this paper, we propose an in-depth study of the Internet topology robustness to attacks at the network layer. Several attacking techniques are presented, as well as their effects on the connectivity of the Internet. We show that although the removal of a small fraction of nodes (less than 10%) can damage the Internet connectivity, such a node removal attack would still require a large amount of work to be carried out. To achieve this, we study in detail the interactions between the intradomain and interdomain levels of the Internet through the use of an overlay."


[Markopoulou-Iannaccone-Bhattacharyya-Chuah-Diot-2003 (doi)]

Athina Markopoulou, Gianluca Iannaccone, Supratik Bhattacharyya, Chen-Nee Chuah, Christophe Diot,
"Characterization of Failures in an IP Backbone",
IEEE INFOCOM 2004, Hong Kong, China, 7-11 March 2004

ResiliNets Keywords: Failure Characterization, IP Backbone

Abstract: "We analyze IS-IS routing updates from Sprint's IP network to characterize failures that affect IP connectivity. Failures are first classified based on probable causes such as maintenance activities, router-related and optical layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures is due to planned maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and can be attributed to router-related and optical equipment-related problems, while 70% affect a single link at a time. Our classification of failures according to different causes reveals the nature and extent of failures in today's IP backbones. Furthermore, our characterization of the different classes can be used to develop a probabilistic failure model, which is important for various traffic engineering problems."

