7. The Dictionary ADT


What is a Dictionary?

  • Given a set of elements {X1, X2, ..., XN}, all distinct from one another, we want to store them in a data structure that allows an efficient implementation of the operations:

  • search(X): given an element X, known as the search key, find it within the set or report that it is not there.

  • insert(X): add a new element X to the set.

  • delete(X): remove the element X from the set.

  • These operations describe the dictionary ADT. In this chapter we will look at several implementations of this ADT and study the efficiency considerations of each of them.


Simple implementations

  • A linked list, with new elements inserted at the front.

    • search: O(n) (sequential search).

    • insert: O(1) (always inserting at the front of the list).

    • delete: O(n) (search + O(1)).

  • Sorted array: insertion and deletion are inefficient, since the elements have to be shifted over.

  • However, the advantage of keeping the elements in order is that a binary search can be used to find the element being sought.


Programming binary search

Invariant

Initially: i = 0 and j = n-1.

In each iteration:

If the set is empty (j-i < 0), that is, if j < i, then the element x is not in the set (unsuccessful search).

Otherwise, m = (i+j)/2. If x = a[m], the element has been found (successful search).

If x < a[m], set j = m-1; otherwise set i = m+1 and keep iterating.


Programming binary search

public int busquedaBinaria(int[] a, int x) {
    int i = 0, j = a.length - 1;
    while (i <= j) {
        int m = (i + j) / 2;
        if (x == a[m])
            return m;
        else if (x < a[m])
            j = m - 1;
        else
            i = m + 1;
    }
    return NO_ENCONTRADO; // NO_ENCONTRADO is defined as -1
}
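A minimal usage sketch of the method above (the wrapper class `Main` is an illustration, not part of the slides; the array must already be sorted):

```java
public class Main {
    static final int NO_ENCONTRADO = -1;

    public static int busquedaBinaria(int[] a, int x) {
        int i = 0, j = a.length - 1;
        while (i <= j) {
            int m = (i + j) / 2;
            if (x == a[m]) return m;       // successful search
            else if (x < a[m]) j = m - 1;  // continue in the left half
            else i = m + 1;                // continue in the right half
        }
        return NO_ENCONTRADO;              // unsuccessful search
    }

    public static void main(String[] args) {
        int[] a = {2, 5, 8, 13, 21};
        System.out.println(busquedaBinaria(a, 13)); // 3
        System.out.println(busquedaBinaria(a, 4));  // -1
    }
}
```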


Efficiency of binary search

  • Every comparison-based search algorithm corresponds to some decision tree.

  • Each node of this tree corresponds to the set of candidate elements in which the sought element may lie, consistent with the comparisons performed so far. The arcs of the tree correspond to the outcomes of the comparisons, which in this case can be "greater than" or "less than" the sought element; that is, it is a binary decision tree.

  • The number of comparisons performed by the search algorithm equals the height of the decision tree (the depth of the deepest leaf).


Efficiency of binary search

Lemma: let D be a binary tree of height h. Then D has at most 2^h leaves.

Proof: by induction.

Lemma: a binary tree with H leaves must have height at least ⌈log2(H)⌉.

Proof: directly from the previous lemma.

If n is the number of elements in the set, the number of possible answers (leaves of the decision tree) is n+1; the previous lemma then implies that the worst-case cost is greater than or equal to the logarithm of the number of possible answers.

Corollary: any comparison-based search algorithm needs at least ⌈log2(n+1)⌉ comparisons in the worst case. Therefore, binary search is optimal.


Self-organizing methods

  • Idea: every time an element Xk is accessed, the list is modified so that future accesses to Xk are more efficient. Some policies for modifying the list are:

  • TR (transpose): Xk is swapped with Xk-1 (whenever k > 1).

  • MTF (move-to-front): the element Xk is moved to the front of the list.

  • It can be shown that Cost_OPT <= Cost_TR <= Cost_MTF <= 2·Cost_OPT.
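The MTF policy can be sketched over `java.util.LinkedList` (the method name `accessMTF` is an illustration, not from the slides):

```java
import java.util.LinkedList;

public class SelfOrganizing {
    // On a successful access, move the element to the front of the list
    // so that future accesses to it are cheaper.
    static <T> boolean accessMTF(LinkedList<T> list, T x) {
        int idx = list.indexOf(x);   // sequential search: O(n)
        if (idx < 0) return false;   // not in the list
        list.remove(idx);
        list.addFirst(x);            // MTF: move the element to the front
        return true;
    }

    public static void main(String[] args) {
        LinkedList<String> l = new LinkedList<>();
        l.add("a"); l.add("b"); l.add("c");
        accessMTF(l, "c");
        System.out.println(l); // [c, a, b]
    }
}
```

The TR policy would instead swap the accessed element with its predecessor; both keep the list adapted to the observed access pattern.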


AVL-Trees (Adelson-Velskii & Landis, 1962)

In normal search trees, the complexity of the find, insert and delete operations is in the worst case Θ(n).

This can be done better! Idea: balanced trees.

Definition: An AVL-tree is a binary search tree such that for each sub-tree T' = <L, x, R>, |h(L) - h(R)| ≤ 1 holds

(balanced sub-trees are the characteristic property of AVL-trees).

The balance factor or the height h(·) is often annotated at each node.



[Figure] This is NOT an AVL tree (node * does not hold the required condition).


Goals

1. How can the AVL-characteristics be kept when inserting and deleting nodes?

2. We will see that for AVL-trees the complexity of the operations is in the worst case

O(height of the AVL-tree) = O(log n).


Preservation of the AVL-characteristics

After inserting or deleting nodes we must ensure that the new tree preserves the characteristics of an AVL-tree:

Re-balancing.

How? With simple and double rotations.
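A sketch of the simple left rotation in Java (the node layout and the convention h(null) = -1 are assumptions consistent with the slides, not code from them):

```java
public class AvlRotation {
    static class Node {
        int key, height;          // height of a freshly created leaf is 0
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static int h(Node n) { return n == null ? -1 : n.height; }

    static void update(Node n) { n.height = 1 + Math.max(h(n.left), h(n.right)); }

    // Simple rotation for the case where the right sub-tree of a grew too high:
    // a's right child b becomes the new root of the sub-tree.
    static Node rotateLeft(Node a) {
        Node b = a.right;
        a.right = b.left;  // b's left sub-tree becomes a's right sub-tree
        b.left = a;
        update(a);         // recompute heights bottom-up
        update(b);
        return b;          // new root of this sub-tree
    }

    public static void main(String[] args) {
        // Unbalanced chain 1 -> 2 -> 3 (all right children)
        Node root = new Node(1);
        root.right = new Node(2);
        root.right.right = new Node(3);
        update(root.right); update(root);
        root = rotateLeft(root);
        System.out.println(root.key); // 2
    }
}
```

The mirror case uses the symmetric right rotation; the double rotation composes two simple ones.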


Only 2 cases (and their mirrors)

  • Let's analyze the case of insertion:

    • The new element is inserted in the right (left) sub-tree of the right (left) child, which was already higher than the left (right) sub-tree by 1.

    • The new element is inserted in the left (right) sub-tree of the right (left) child, which was already higher than the left (right) sub-tree by 1.


Rotation (for the case when the right sub-tree grows too high after an insertion)

[Figure: the unbalanced tree] is transformed into [the rotated, balanced tree].


Double rotation (for the case when the right sub-tree grows too high after an insertion at its left sub-tree)

[Figure: the unbalanced tree] is transformed into [the doubly-rotated, balanced tree].


[Figure: the double rotation step by step. A first rotation involves nodes a and c, then a second rotation brings b to the root; the sub-trees W, x, y, Z and the newly inserted node are redistributed.]


Re-balancing after insertion

After an insertion the tree might still be balanced, or:

Theorem: after an insertion, we need only one rotation or double rotation at the first node that became unbalanced* in order to re-establish the balance properties of the AVL tree.

(*: on the way from the inserted node to the root.)

This is because after a rotation or double rotation the resulting sub-tree has the same height the sub-tree had before the insertion!


The same applies for deleting

  • Only 2 cases (and their mirrors)

    • The element is deleted from the right (left) sub-tree, which was already lower than the left (right) sub-tree by 1.

    • The element is deleted from the left (right) sub-tree of a node whose right (left) sub-tree was already higher by 1.


The cases

[Figure: the deletion cases, showing the deleted node and sibling sub-trees whose heights differ by 1.]


Re-balancing after deleting

After deleting a node the tree might still be balanced, or:

Theorem: after deleting, we can restore the AVL balance properties of the sub-tree having as root the first* node that became unbalanced with just one simple rotation or one double rotation.

(*: on the way from the deleted node to the root.)

However, the height of the resulting sub-tree might be shortened by 1; this means more rotations might be (recursively) necessary at the parent nodes, which can affect nodes all the way up to the root of the entire tree.


About Implementation

  • While searching for an unbalanced sub-tree after an operation, it is only necessary to check the parent's sub-tree when the child's sub-tree has changed its height.

  • In order to make the check for unbalanced sub-trees more efficient, it is recommended to store some extra information at the nodes, for example: the height of the sub-tree, or the balance factor (height(left sub-tree) - height(right sub-tree)). This information must be updated after each operation.

  • It is necessary to have an operation that returns the parent of a given node (for example, by adding a pointer to the parent).


Complexity analysis – worst case

Let h be the height of the AVL-tree.

Search: as in the normal binary search tree, O(h).

Insert: the insertion is the same as in the binary search tree (O(h)), but we must add the cost of one simple or double rotation, which is constant: also O(h).

Delete: delete as in the binary search tree (O(h)), but we must add the cost of (possibly) one rotation at each node on the way from the deleted node to the root, which is at most the height of the tree: also O(h).

All operations are O(h).


Calculating the height of an AVL tree

Principle of construction

Let N(h) be the minimal number of nodes in an AVL-tree of height h.

N(0) = 1, N(1) = 2,

N(h) = 1 + N(h-1) + N(h-2) for h ≥ 2.

N(2) = 4, N(3) = 7.

Remember the Fibonacci numbers:

fibo(0) = 0, fibo(1) = 1,

fibo(n) = fibo(n-1) + fibo(n-2)

fibo(3) = 2, fibo(4) = 3, fibo(5) = 5, fibo(6) = 8, fibo(7) = 13.

By calculation we can state:

N(h) = fibo(h+3) - 1
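The identity N(h) = fibo(h+3) - 1 can be checked directly from the two recurrences (a small sketch, not code from the slides):

```java
public class MinAvlNodes {
    // Minimal number of nodes of an AVL-tree of height h:
    // N(0) = 1, N(1) = 2, N(h) = 1 + N(h-1) + N(h-2)
    static long N(int h) {
        if (h == 0) return 1;
        if (h == 1) return 2;
        return 1 + N(h - 1) + N(h - 2);
    }

    // Standard Fibonacci numbers: fibo(0) = 0, fibo(1) = 1
    static long fibo(int n) { return n <= 1 ? n : fibo(n - 1) + fibo(n - 2); }

    public static void main(String[] args) {
        for (int h = 0; h <= 10; h++)
            System.out.println(N(h) == fibo(h + 3) - 1); // true for every h
    }
}
```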


Let n be the number of nodes of an AVL-tree of height h. Then it holds that:

n ≥ N(h).

Remember: fibo(n) = (1/sqrt(5)) (Ф1^n - Ф2^n)

with Ф1 = (1 + sqrt(5))/2 ≈ 1.618,

Ф2 = (1 - sqrt(5))/2 ≈ -0.618.

We can now write:

n ≥ fibo(h+3) - 1 = (Ф1^(h+3) - Ф2^(h+3)) / sqrt(5) - 1

≥ (Ф1^(h+3) / sqrt(5)) - 3/2,

thus

h + 3 + log_Ф1(1/sqrt(5)) ≤ log_Ф1(n + 3/2),

thus there is a constant c with

h ≤ log_Ф1(n) + c

= log_Ф1(2) · log2(n) + c

= 1.44… · log2(n) + c = O(log n).


B-Trees (External Search)

  • The algorithms we have seen so far work well when all the data is stored in the primary storage device (RAM), whose access is fast.

  • Big data sets are frequently stored in secondary storage devices (hard disk), whose access is slow (about 100-1000 times slower).

    Access is always to a complete block (page) of data (4096 bytes), which is loaded into RAM.

    For efficiency: keep the number of page accesses low!


2-3 Trees

  • Internal nodes may contain up to 2 elements;

  • therefore an internal node may have 2 or 3 children, depending on how many elements it holds.


Property

  • All leaves are at the same depth; that is, 2-3 trees are perfectly balanced trees.

  • The height is bounded by log2(n+1).


Insertion

  • An unsuccessful search is performed, and the element is inserted in the last node visited during that search;

  • this requires handling two distinct cases:


Examples


Deletion

  • Physically, a node must always be deleted from the last level.

  • If the element to be deleted is in an internal node, its value is replaced by the immediately preceding/following value.

  • These values are necessarily at the last level.


Simple case

  • The node containing Z has two elements. In this case Z is deleted and the node is left with a single element.


Complex case 1

  • The node containing Z has only one element. In this case, deleting Z leaves the node without elements (underflow). If the sibling node has two elements, one of them is taken from it and inserted into the node with underflow.


Complex case 2

  • If the sibling node contains only one key, an element is taken from the parent and inserted into the node with underflow.

  • If this operation produces underflow in the parent node, the procedure is repeated one level up. Finally, if the root becomes empty, it is deleted.

  • Worst-case cost of the search, insertion and deletion operations: Θ(log n).


For external search: a variant of search trees:

1 node = 1 page

Multiple way search trees!


Multiple way search trees

Definition: An empty tree is a multiple way search tree with an empty set of keys {}.

Let T0, ..., Tn be multiple way search trees with keys taken from a common key set S, and let k1, ..., kn be a sequence of keys with k1 < ... < kn. Then the sequence

T0 k1 T1 k2 T2 k3 ... kn Tn

is a multiple way search tree only when:

  • for all keys x from T0: x < k1

  • for i = 1, ..., n-1, for all keys x in Ti: ki < x < ki+1

  • for all keys x from Tn: kn < x


B-Tree

Definition

A B-tree of order m is a multiple way search tree with the following characteristics:

  • 1 ≤ #(keys in the root) ≤ 2m, and

    m ≤ #(keys in the node) ≤ 2m

    for all other nodes.

  • All paths from the root to a leaf are equally long.

  • Each internal node (non-leaf) that has s keys has exactly s+1 children.

  • 2-3 trees are the particular case m = 1.



Assessment of B-trees

The minimal possible number of nodes in a B-tree of order m and height h:

  • Number of nodes in each sub-tree:

    1 + (m+1) + (m+1)^2 + ... + (m+1)^(h-1)

    = ((m+1)^h - 1) / m.

    The root of the minimal tree has only one key and two children; all other nodes have m keys.

    Altogether, for the number of keys n in a B-tree of height h:

    n ≥ 2(m+1)^h - 1.

    Thus the following holds for each B-tree of height h with n keys:

    h ≤ log_(m+1)((n+1)/2).


Example

The following holds for each B-tree of height h with n keys:

h ≤ log_(m+1)((n+1)/2).

Example: for

  • page size: 1 KByte and

  • each entry plus pointer: 8 bytes,

    if we choose m = 63, then for an amount of data of

    n = 1 000 000

    we have h ≤ log_64(500000.5) < 4, and with that hmax = 3.
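The bound in the example can be evaluated numerically (a small sketch; the base is converted via natural logarithms):

```java
public class BTreeHeight {
    // Evaluates the bound h <= log_(m+1)((n+1)/2) for a B-tree of order m
    // with n keys, using log_b(x) = ln(x) / ln(b).
    static double heightBound(int m, long n) {
        return Math.log((n + 1) / 2.0) / Math.log(m + 1);
    }

    public static void main(String[] args) {
        double h = heightBound(63, 1_000_000); // log_64(500000.5)
        System.out.println(h);                 // about 3.15, so hmax = 3
    }
}
```

Three page accesses thus suffice to reach any of the one million keys.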


Key searching algorithm in a B-tree

Algorithm search(r, x)
// search for key x in the tree having root node r;
// global variable p = pointer to last node visited

in r, search for the first key y >= x, or until there are no more keys;
if y == x { stop search, p = r, found }
else
    if r is a leaf { stop search, p = r, not found }
    else
        if not past the last key: search(pointer to node before y, x)
        else: search(last pointer, x)
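The pseudocode above can be sketched in Java (the node layout is a hypothetical, simplified one, and this version returns whether the key is present instead of setting a global pointer p):

```java
public class BTreeSearch {
    static class BNode {
        int[] keys;        // sorted keys stored in this node (page)
        BNode[] children;  // for internal nodes: keys.length + 1 children
        boolean leaf;
    }

    static boolean search(BNode r, int x) {
        int i = 0;
        // search inside the node for the first key y >= x
        while (i < r.keys.length && r.keys[i] < x) i++;
        if (i < r.keys.length && r.keys[i] == x) return true; // found
        if (r.leaf) return false;                             // unsuccessful search
        // descend into the sub-tree before y
        // (or the last pointer, if we ran past all keys)
        return search(r.children[i], x);
    }

    public static void main(String[] args) {
        BNode left = new BNode();  left.keys = new int[]{1, 3};  left.leaf = true;
        BNode right = new BNode(); right.keys = new int[]{7, 9}; right.leaf = true;
        BNode root = new BNode();  root.keys = new int[]{5};
        root.children = new BNode[]{left, right};
        System.out.println(search(root, 7)); // true
        System.out.println(search(root, 4)); // false
    }
}
```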


Inserting and deleting of keys

Algorithm insert(r, x)
// insert key x in the tree having root r

search for x in the tree having root r;
if x was not found
{   let p be the leaf where the search stopped;
    insert x in the right position;
    if p now has 2m+1 keys
        { overflow(p) }
}


Algorithm Split (1)

Algorithm: overflow(p) = split(p)

Algorithm split(p)

First case: p has a parent q.

Divide the overflowed node. The key in the middle goes up to the parent.

Remark: the splitting may propagate up to the root, in which case the height of the tree is incremented by one.


Algorithm Split (2)

Algorithm split(p)

Second case: p is the root.

Divide the overflowed node. Open a new level above, containing a new root with the key from the middle (the root then has one key).


Algorithm delete(r, x)
// delete key x from the tree having root r

search for x in the tree with root r;
if x found
{   if x is in an internal node
    {   exchange x with the next bigger key x' in the tree
        // if x is in an internal node then there must
        // be at least one bigger key in the tree;
        // this key is in a leaf!
    }
    let p be the leaf containing x;
    erase x from p;
    if p is not the root r
    {   if p has m-1 keys
            { underflow(p) }
    }
}


Algorithm underflow (p)

if p has a neighboring node p' with s > m keys
    { balance(p, p') }
else
    // because p cannot be the root, p must have a neighbor with m keys
    { let p' be the neighbor with m keys; merge(p, p') }


Algorithm balance (p, p')

// balance node p with its neighbor p' (s > m, r = (m+s)/2 - m)


Algorithm merge (p, p')

// merge node p with its neighbor; perform the following operation:

afterwards:

if (q ≠ root) and (q has m-1 keys): underflow(q)

else if (q = root) and (q is empty) { free q; let the root point to p^ }


Recursion

If, when performing underflow, we have to perform merge, we might have to perform underflow again one level up.

This process might repeat all the way up to the root.


Example: B-Tree of order 2 (m = 2)


Cost

Let m be the order of the B-tree and n the number of keys.

Costs for search, insert and delete:

O(h) = O(log_(m+1)((n+1)/2))

= O(log_(m+1)(n)).


Remark

B-trees can also be used as an internal storage structure:

especially B-trees of order 1

(then only one or two keys per node –

no elaborate search inside the nodes).

Cost of search, insert, delete:

O(log n).


Remark: use of storage memory

Over 50%.

Reason: the condition

(1/2)·k ≤ #(keys in the node) ≤ k

for nodes ≠ root

(k = 2m).