
4. PREFERENTIAL ATTACHMENT




Presentation Transcript


  1. 4. PREFERENTIAL ATTACHMENT The rich get richer

  2. Empirical evidence • Many large networks are scale-free • The degree distribution has power-law behavior for large k (far from a Poisson distribution) • Random graph theory and the Watts-Strogatz model cannot reproduce this feature

  3. We can construct power-law networks by hand • What is the mechanism that makes scale-free networks emerge as they grow? • Emphasis on network dynamics rather than on constructing a graph with given topological features

  4. Topology is a result of the dynamics • But is growth with purely random attachment enough? • No: in that case the degree distribution is exponential!

  5. Barabási-Albert model (1999) • Two generic mechanisms common to many real networks • Growth (WWW, research literature, ...) • Preferential attachment (same examples): the attractiveness of popularity • Both mechanisms are necessary

  6. Growth • t = 0: m0 nodes • At each time step we add a new node with m (m ≤ m0) edges that link the new node to m different nodes already present in the system

  7. Preferential attachment • When choosing the nodes to which the new node connects, the probability Π that the new node will be connected to node i depends on the degree k_i of node i: Π(k_i) = k_i / Σ_j k_j • Linear attachment (more general models use other kernels) • The sum runs over all existing nodes
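A minimal simulation of this growth rule, as a sketch (plain Python; the parameters m0 and m mirror the slides, everything else is illustrative):

```python
import random

def barabasi_albert(m0=3, m=3, steps=10_000, seed=None):
    """Sketch of the BA rule: each new node attaches m edges to existing
    nodes with probability proportional to their degree (m <= m0)."""
    rng = random.Random(seed)
    # Seed network: a ring of m0 nodes, so no node starts with degree 0
    # (with all degrees zero the attachment probability is undefined).
    degree = {i: 2 for i in range(m0)}
    # One entry per edge endpoint: a uniform draw from this list realizes
    # Pi(k_i) = k_i / sum_j k_j exactly, with no explicit normalization.
    endpoints = [v for i in range(m0) for v in (i, (i + 1) % m0)]

    for new in range(m0, m0 + steps):
        targets = set()
        while len(targets) < m:            # m distinct neighbours
            targets.add(rng.choice(endpoints))
        degree[new] = m
        for t in targets:
            degree[t] += 1
            endpoints.extend((new, t))
    return degree
```

The endpoint list grows by 2m entries per step; drawing uniformly from it is the standard constant-time trick for linear preferential attachment.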

  8. Numerical simulations • Power law P(k) ~ k^(−γ) with γ_SF = 3 • The exponent does not depend on m (the only parameter of the model)
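One can check the claimed m-independence on the sketch above by estimating the exponent from the simulated degrees. The estimator below is the standard continuous maximum-likelihood approximation γ̂ = 1 + n / Σ_i ln(k_i / k_min); it is my addition, not part of the slides:

```python
import math

def estimate_gamma(degrees, k_min=10):
    """Continuous max-likelihood estimate of the power-law exponent,
    fitted on the tail k >= k_min."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# gamma should stay close to 3 while m varies (m0 chosen >= m):
for m in (1, 3, 5):
    deg = barabasi_albert(m0=5, m=m, steps=50_000)
    print(f"m = {m}: gamma ~ {estimate_gamma(deg.values()):.2f}")
```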

  9. =3. different m’s. P(k) changes.  not

  10. Degree distribution • Derivation in handwritten notes (not reproduced in this transcript; a mean-field sketch follows below)
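For reference, the standard mean-field (continuum) version of that derivation goes as follows (my reconstruction, since the original notes are handwritten):

```latex
% Each step adds m edges, so \sum_j k_j = 2mt; node i then grows as
\[
  \frac{\partial k_i}{\partial t} = m\,\Pi(k_i)
    = m\,\frac{k_i}{2mt} = \frac{k_i}{2t}
  \;\;\Longrightarrow\;\;
  k_i(t) = m\Bigl(\frac{t}{t_i}\Bigr)^{1/2},
\]
% where t_i is the arrival time of node i. Arrival times are uniform, so
\[
  P\bigl(k_i(t) < k\bigr) = P\Bigl(t_i > \frac{m^2 t}{k^2}\Bigr)
    = 1 - \frac{m^2}{k^2},
  \qquad
  P(k) = \frac{\partial}{\partial k}\,P\bigl(k_i(t) < k\bigr)
       = \frac{2m^2}{k^3},
\]
% i.e. a power law with gamma_SF = 3, independent of m.
```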

  11. Preferential attachment but no growth • t = 0: N nodes, no links • Power laws at early times • P(k) is not stationary; eventually all nodes get connected • Each new link raises the total degree by 2, so the mean degree grows as k_i(t) ≈ 2t/N and every node approaches it

  12. Average shortest path • Comparison at fixed average degree ⟨k⟩ • For the SF model the curve is just a fit (no analytical estimate, see next slide)

  13. No theoretical estimates so far • Growth introduces nontrivial correlations • whereas random graphs with a power-law degree distribution are uncorrelated

  14. Clustering coefficient • No analytical prediction for the SF model • About 5 times larger than for a comparable random graph • Unlike the small-world (SW) model, where C is independent of N
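A quick numerical check of this comparison, as a sketch (it assumes the networkx library, which is not mentioned in the slides):

```python
import networkx as nx

N, m = 2000, 3
ba = nx.barabasi_albert_graph(N, m, seed=1)
# Random graph with the same number of nodes and edges, for comparison.
er = nx.gnm_random_graph(N, ba.number_of_edges(), seed=1)

print("C (scale-free):", nx.average_clustering(ba))
print("C (random):    ", nx.average_clustering(er))
```

Repeating this for several N also shows the scale-free clustering coefficient decaying with network size, unlike the SW model.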

  15. Scaling relations

  16. Spectrum • Exponential decay of the spectral density around λ = 0 • Power-law decay for large |λ|
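These two regimes can be seen by diagonalizing the adjacency matrix of a modest BA graph (a sketch; numpy and networkx assumed):

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=1)
eigs = np.linalg.eigvalsh(nx.to_numpy_array(G))  # adjacency matrix is symmetric

# Spectral density rho(lambda): an exponential-looking bulk around 0
# and power-law tails at large |lambda| (tied to the largest degrees).
rho, edges = np.histogram(eigs, bins=50, density=True)
for lo, hi, r in zip(edges, edges[1:], rho):
    if r > 0:
        print(f"lambda in [{lo:6.2f}, {hi:6.2f}): rho = {r:.4f}")
```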

  17. Nonlinear preferential attachment • Attachment kernel Π(k) ~ k^α • Sublinear (α < 1): stretched-exponential P(k) • Superlinear (α > 1): winner-takes-all
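Both regimes can be probed with a small variation of the earlier sketch, replacing the linear kernel by Π(k) ∝ k^α (one edge per new node to keep it simple; quadratic running time, so kept small — an illustration, not the slides' method):

```python
import random

def nonlinear_pa(alpha, n=5_000, seed=0):
    """Growth with kernel Pi(k) ~ k**alpha and m = 1. Returns the share
    of total degree held by the best-connected node: small for
    alpha <= 1, dominant in the superlinear regime."""
    rng = random.Random(seed)
    degree = [1, 1]                        # two seed nodes, one edge
    for _ in range(n):
        weights = [k ** alpha for k in degree]
        target = rng.choices(range(len(degree)), weights=weights)[0]
        degree[target] += 1
        degree.append(1)                   # the new node, with one edge
    return max(degree) / sum(degree)

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha = {alpha}: max-degree share = {nonlinear_pa(alpha):.3f}")
```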

  18. Nonlinear growth rates • Empirical observation: the number of links increases faster than the number of nodes • Accelerated growth • Crossover between two power-law regimes

  19. Growth constraints • Power laws followed by exponential cutoffs • Model: a node stops accepting links when it • reaches a certain age (aging), or • holds more than a critical number of links (capacity) • This explains the observed cutoffs

  20. Competition • Nodes compete for links • Result: a power law with a logarithmic correction

  21. The Simon model • H. A. Simon (1955): a class of models to account for empirical distributions that follow a power law (word frequencies, publications, city populations, incomes, firm sizes, ...)

  22. Algorithm • Consider a book that is being written, currently N words long • f_N(i): the number of distinct words that have each occurred exactly i times in the text • Keep adding words: • with probability p the next word is a new one • with probability 1 − p the next word is one that has already appeared • The probability that the (N+1)th word is one that has already appeared i times is proportional to i·f_N(i) [the total number of occurrences contributed by words that occurred i times]
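A direct sketch of this algorithm (plain Python; the variable names are mine). The key observation: copying a uniformly random word from the text so far selects a word that has occurred i times with probability proportional to i·f_N(i), exactly as the slide requires:

```python
import random
from collections import Counter

def simon_model(p=0.1, total_words=100_000, seed=0):
    """Simon's process: with probability p write a brand-new word,
    otherwise repeat a word drawn uniformly from the text so far
    (i.e. with probability proportional to its occurrence count)."""
    rng = random.Random(seed)
    text = [0]                       # the book starts with one word
    next_word = 1
    for _ in range(total_words - 1):
        if rng.random() < p:
            text.append(next_word)   # new word
            next_word += 1
        else:
            text.append(rng.choice(text))  # repeat an existing word
    return Counter(text)             # word -> number of occurrences

counts = simon_model()
f = Counter(counts.values())         # f(i): distinct words occurring i times
# Simon's result: f(i) ~ i**-(1 + 1/(1 - p)) for large i.
```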

  23. Mapping onto a network model • With probability p a new node is added • With probability 1 − p a directed link is added: its starting point is selected at random, and its endpoint is selected such that the probability of choosing one of the N_k nodes with k incoming links is proportional to k·N_k, i.e. k N_k / Σ_k' k' N_k'

  24. This does not imply preferential attachment • The model deals with classes of nodes (grouped by in-degree), not with actual nodes • It carries no topology

  25. Error and attack tolerance • Scale-free networks show a high degree of tolerance against errors • Topological aspects of robustness under node and/or edge removal • Two types of node removal: • randomly selected nodes (errors!) • the most highly connected nodes removed at each step (this is an attack!)
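A sketch of both removal experiments, using the relative size of the giant component as the robustness measure (networkx assumed; the 5% removal fraction is illustrative, not from the slides):

```python
import random
import networkx as nx

def removal_test(G, fraction=0.05, attack=False, seed=0):
    """Remove a fraction of the nodes, either uniformly at random
    ("errors") or in decreasing order of degree ("attack"), and return
    the giant-component size relative to the original network."""
    H = G.copy()
    n0 = H.number_of_nodes()
    k = int(fraction * n0)
    if attack:
        by_degree = sorted(H.degree, key=lambda nd: nd[1], reverse=True)
        H.remove_nodes_from(n for n, _ in by_degree[:k])
    else:
        H.remove_nodes_from(random.Random(seed).sample(list(H.nodes), k))
    return max(len(c) for c in nx.connected_components(H)) / n0

G = nx.barabasi_albert_graph(5000, 2, seed=1)
print("after random errors:", removal_test(G, attack=False))
print("after degree attack:", removal_test(G, attack=True))
```

The scale-free graph barely notices random errors but fragments quickly under the degree-targeted attack, which is the point of the next slide's figure.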

  26. Removal of nodes (figure: squares = random removal, circles = preferential removal of the most connected nodes)
