
### Lecture 1: The Greedy Method


Lecturer: 虞台文

Content

- What is it?
- Activity Selection Problem
- Fractional Knapsack Problem
- Minimum Spanning Tree
- Kruskal’s Algorithm
- Prim’s Algorithm

- Shortest Path Problem
- Dijkstra’s Algorithm

- Huffman Codes

What is it?

The Greedy Method

- A greedy algorithm always makes the choice that looks best at the moment.
- For some problems, it always gives a globally optimal solution.
- For others, it may give only a locally optimal one.

Main Components

- Configurations
- different choices, collections, or values to find

- Objective function
- a score assigned to configurations, which we want to either maximize or minimize

Is the solution always optimal?

Example: Making Change

- Problem
- A dollar amount to reach and a collection of coin denominations to use to get there.

- Configuration
- A dollar amount yet to return to a customer plus the coins already returned

- Objective function
- Minimize the number of coins returned.

- Greedy solution
- Always return the largest coin you can
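The greedy rule can be sketched in Python. This is a minimal sketch, not from the slides; the default denominations (25, 10, 5, 1) are an assumed US-style coin system, for which greedy happens to be optimal, and the second call shows a coin system where it is not.

```python
# Greedy change-making: always return the largest coin that still fits.
# Denominations (25, 10, 5, 1) are an assumption; greedy is optimal for
# this system, but NOT for every coin system (see the second call).
def make_change(amount, coins=(25, 10, 5, 1)):
    returned = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            returned.append(c)
            amount -= c
    return returned

print(make_change(63))            # [25, 25, 10, 1, 1, 1]
print(make_change(6, (4, 3, 1)))  # [4, 1, 1] -- but 3 + 3 uses fewer coins
```

The second call previews the point made below: a locally best choice (take the 4) can rule out the globally best answer (two 3s).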

Example: Largest k-out-of-n Sum

- Problem
- Pick k numbers out of n numbers such that the sum of these k numbers is the largest.

- Exhaustive solution
- There are C(n, k) = n! / (k!(n−k)!) choices.
- Choose the one whose subset sum is the largest.

- Greedy Solution

FOR i = 1 TO k
    pick out the largest number and
    delete this number from the input
ENDFOR

Is the greedy solution always optimal?
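The loop above can be sketched in Python; `heapq.nlargest` performs exactly the "pick the largest, k times" selection without mutating the input.

```python
import heapq

# Greedy: repeatedly take the largest remaining number, k times in total.
# heapq.nlargest performs this selection in O(n log k) time.
def largest_k_sum(nums, k):
    return sum(heapq.nlargest(k, nums))

print(largest_k_sum([5, -2, 9, 1, 7, 3], 3))  # 9 + 7 + 5 = 21
```

For this problem the greedy choice is provably optimal: any optimal subset that omitted one of the k largest numbers could be improved by swapping it in.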

Example: Shortest Paths on a Special Graph

- Problem
- Find a shortest path from v0 to v3

- Greedy Solution

Is the solution optimal?

Example: Shortest Paths on a Special Graph

- Problem
- Find a shortest path from v0 to v3

- Greedy Solution

Is the greedy solution optimal?

Example: Shortest Paths on a Multi-stage Graph

- Problem
- Find a shortest path from v0 to v3

Is the greedy solution optimal?

Example: Shortest Paths on a Multi-stage Graph

- Problem
- Find a shortest path from v0 to v3

The optimal path

Is the greedy solution optimal?

Example: Shortest Paths on a Multi-stage Graph

- Problem
- Find a shortest path from v0 to v3

What algorithm can be used to find the optimum?

The optimal path

Advantages and Disadvantages of the Greedy Method

- Advantages
- Simple
- Fast when it works

- Disadvantages
- Does not always work: short-term choices can be disastrous in the long term
- Hard to prove correct

Activity Selection Problem

Activity Selection Problem (Conference Scheduling Problem)

- Input: A set of activities S = {a1, …, an}
- Each activity has a start time and a finish time: ai = [si, fi)
- Two activities are compatible if and only if their intervals do not overlap

- Output: a maximum-size subset of mutually compatible activities

Example:Activity Selection Problem

Assume that the fi's are sorted.

[Figure: the input activities on a time axis running from 1 to 16]

Example: Activity Selection Problem

Is the solution optimal?

[Figure: the greedily selected activities on a time axis running from 1 to 16]

Example: Activity Selection Problem

Is the solution optimal?

[Figure: the selected activities on a time axis running from 1 to 11]

Activity Selection Algorithm

Greedy-Activity-Selector(s, f)
    // Assume that f1 ≤ f2 ≤ ... ≤ fn
    n ← length[s]
    A ← {1}
    j ← 1
    for i ← 2 to n
        if si ≥ fj then
            A ← A ∪ {i}
            j ← i
    return A

Is the algorithm optimal?
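Greedy-Activity-Selector translates directly to Python. The sample instance below is an assumed set of intervals, not from the slides.

```python
# Greedy activity selection: sort by finish time, then keep every activity
# that starts no earlier than the finish of the last one chosen.
def select_activities(activities):
    # activities: list of (start, finish) half-open intervals [s, f)
    activities = sorted(activities, key=lambda a: a[1])  # sort by fi
    chosen = [activities[0]]
    for s, f in activities[1:]:
        if s >= chosen[-1][1]:      # compatible with the last chosen activity
            chosen.append((s, f))
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11), (12, 16)]
```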

Proof of Optimality

- Suppose A ⊆ S is an optimal solution whose first activity is k.
- If k ≠ 1, one can easily show that B = A – {k} ∪ {1} is also optimal. (Why?)
- This reveals that the greedy choice can be applied to the first choice.
- Now the problem is reduced to activity selection on S′ = {2, …, n}, restricted to the activities compatible with 1.
- By the same argument, we can show that, to retain optimality, the greedy choice can also be applied to each subsequent choice.

Fractional Knapsack Problem

The Fractional Knapsack Problem

- Given: A set S of n items, with each item i having
- bi - a positive benefit
- wi - a positive weight

- Goal: Choose items, allowing fractional amounts, to maximize total benefit with weight at most W.

The knapsack capacity is 10 ml.

| Item i | Weight wi | Benefit bi | Value vi ($ per ml) |
|---|---|---|---|
| 1 | 4 ml | $12 | 3 |
| 2 | 8 ml | $32 | 4 |
| 3 | 2 ml | $40 | 20 |
| 4 | 6 ml | $30 | 5 |
| 5 | 1 ml | $50 | 50 |

- Solution:
- 1 ml of item 5
- 2 ml of item 3
- 6 ml of item 4
- 1 ml of item 2

The Fractional Knapsack Algorithm

- Greedy choice: Keep taking the item with the highest value

Algorithm fractionalKnapsack(S, W)
    Input: set S of items with benefit bi and weight wi; max. weight W
    Output: amount xi of each item i to maximize benefit with weight at most W
    for each item i in S
        xi ← 0
        vi ← bi / wi    {value}
    w ← 0    {total weight}
    while w < W
        remove item i with highest vi
        xi ← min{wi, W − w}
        w ← w + min{wi, W − w}

Does the algorithm always give an optimum?
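The algorithm can be sketched in Python, run on the five-item instance above (benefits $12, $32, $40, $30, $50 with weights 4, 8, 2, 6, 1 ml and capacity 10 ml).

```python
# Fractional knapsack: take items in decreasing order of value bi/wi,
# splitting the last item taken if it does not fit entirely.
def fractional_knapsack(items, capacity):
    # items: list of (benefit, weight) pairs
    total = 0.0
    amounts = []                      # (benefit, weight, amount taken)
    for b, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(w, capacity)       # xi = min{wi, W - w}
        total += take * (b / w)
        amounts.append((b, w, take))
        capacity -= take
        if capacity == 0:
            break
    return total, amounts

items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]  # (bi, wi)
best, picks = fractional_knapsack(items, 10)
print(best)  # 50 + 40 + 30 + 4 = 124.0
```

This reproduces the slide's solution: all of items 5, 3 and 4, plus 1 ml of item 2.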

Proof of Optimality

- Suppose there is a better solution.
- Then there must be an item i with xi < wi and a chosen item j with xj > 0 such that vi > vj.
- Replacing some amount of j with the same amount of i yields a better solution.
- How much of i: min{wi − xi, xj}
- This contradicts the assumption, so there is no better solution than the greedy one.

Recall: 0-1 Knapsack Problem

Which boxes should be chosen to maximize the amount of money while still keeping the overall weight under 15 kg ?

Is the fractional knapsack algorithm applicable?

Exercise

- Construct an example showing that the fractional knapsack algorithm does not give an optimal solution when applied to the 0-1 knapsack problem.
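One possible instance of this kind, sketched in Python (the numbers are invented for illustration): with capacity 10, greedy-by-value takes only the densest item, while two lighter items together are worth more.

```python
# Fractional-knapsack greedy applied to a 0-1 instance: take items in
# decreasing order of value bi/wi, but with no splitting allowed.
def greedy_01(items, capacity):
    # items: list of (benefit, weight) pairs
    total = 0
    for b, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= capacity:             # either take the whole item or skip it
            total += b
            capacity -= w
    return total

items = [(30, 6), (20, 5), (20, 5)]   # value ratios 5, 4, 4
print(greedy_01(items, 10))  # 30 -- greedy takes only the ratio-5 item
# Taking the two (20, 5) items instead gives 40, so greedy is not optimal.
```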

What is a Spanning Tree?

- A tree is a connected undirected graph that contains no cycles
- A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G

[Figure: an undirected graph on vertices A–E and two of its spanning trees]

Properties of a Spanning Tree

- A spanning tree of an n-vertex undirected graph has exactly n – 1 edges
- It connects all the vertices in the graph
- A spanning tree has no cycles

Undirected Graph

Some Spanning Trees

What is a Minimum Spanning Tree?

- A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G
- A minimumspanning tree is the one among all the spanning trees with the lowest cost

Applications of MSTs

- Computer Networks
- To find how to connect a set of computers using the minimum amount of wire

- Shipping/Airplane Lines
- To find the fastest way between locations

Two Greedy Algorithms for MST

- Kruskal’s Algorithm
- merges forests into tree by adding small-cost edges repeatedly

- Prim’s Algorithm
- attaches vertices to a partially built tree by adding small-cost edges repeatedly

Kruskal’s Algorithm

G = (V, E) – Graph
w: E → R+ – Weight
T – Tree

MST-Kruskal(G, w)
    T ← Ø
    for each vertex v ∈ V[G]
        Make-Set(v)    // Make separate sets for vertices
    sort the edges by increasing weight w
    for each edge (u, v) ∈ E, in sorted order
        if Find-Set(u) ≠ Find-Set(v)    // If no cycle is formed
            T ← T ∪ {(u, v)}    // Add edge to tree
            Union(u, v)    // Combine sets
    return T

O(|E| log |E|)

Time Complexity

Line by line, MST-Kruskal costs: initializing T is O(1); the Make-Set loop is O(|V|); sorting the edges is O(|E| log |E|); the main loop examines O(|E|) edges; the Find-Set/Union work over the whole run is O(|V|) with a suitable disjoint-set structure; returning T is O(1). The sort dominates, so the total is O(|E| log |E|).
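MST-Kruskal can be sketched in Python with an array-based disjoint-set structure standing in for Make-Set / Find-Set / Union. The five-edge graph at the end is an assumed example, not from the slides.

```python
# Kruskal's algorithm: sort edges by weight, add an edge whenever its
# endpoints lie in different disjoint sets (i.e., it forms no cycle).
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))   # Make-Set for every vertex

    def find(x):                         # Find-Set with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):        # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # no cycle is formed
            tree.append((u, v, w))
            parent[ru] = rv              # Union the two sets
    return tree

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]  # (w, u, v)
mst = kruskal(4, edges)
print(sum(w for _, _, w in mst))  # total MST weight: 6
```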

Prim’s Algorithm

[Figure: Prim's algorithm run step by step on a nine-vertex graph (a–i) with edge weights between 1 and 16, repeatedly attaching the cheapest edge that connects a new vertex to the partially built tree]

Prim’s Algorithm

G = (V, E) – Graph
w: E → R+ – Weight
r – Starting vertex
Q – Priority Queue
Key[v] – Key of vertex v
π[v] – Parent of vertex v
Adj[v] – Adjacency list of v

MST-Prim(G, w, r)
    Q ← V[G]    // Initially Q holds all vertices
    for each u ∈ Q
        Key[u] ← ∞    // Initialize all keys to ∞
    Key[r] ← 0    // r is the first tree node
    π[r] ← Nil
    while Q ≠ Ø
        u ← Extract_min(Q)    // Get the min-key node
        for each v ∈ Adj[u]
            if v ∈ Q and w(u, v) < Key[v]    // If the weight is less than the key
                π[v] ← u
                Key[v] ← w(u, v)

Time Complexity

With a binary-heap priority queue, building Q takes O(|V|); the |V| Extract_min calls cost O(log |V|) each; and the O(|E|) key updates cost O(log |V|) each. The total is O(|E| log |V|).
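MST-Prim can be sketched with Python's heapq. Instead of a decrease-key operation on the queue, this sketch pushes duplicate entries and skips stale ones when popped — a common workaround, not what the pseudocode literally does. The four-vertex graph is an assumed example.

```python
import heapq

# Prim's algorithm: grow a tree from the root, always attaching the
# cheapest edge that reaches a vertex not yet in the tree.
def prim(graph, root):
    # graph: adjacency dict {u: [(v, w), ...]}; returns total MST weight
    in_tree = set()
    heap = [(0, root)]                 # (key, vertex)
    total = 0
    while heap:
        key, u = heapq.heappop(heap)   # Extract_min
        if u in in_tree:
            continue                   # stale entry: u already attached
        in_tree.add(u)
        total += key
        for v, w in graph[u]:
            if v not in in_tree:       # v is still in Q
                heapq.heappush(heap, (w, v))
    return total

graph = {
    'a': [('b', 1), ('c', 4)],
    'b': [('a', 1), ('c', 3), ('d', 2)],
    'c': [('a', 4), ('b', 3), ('d', 5)],
    'd': [('b', 2), ('c', 5)],
}
print(prim(graph, 'a'))  # 1 + 2 + 3 = 6
```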

Optimality

Yes — both algorithms always produce a minimum spanning tree.

- Kruskal’s Algorithm
- merges forests into tree by adding small-cost edges repeatedly

- Prim’s Algorithm
- attaches vertices to a partially built tree by adding small-cost edges repeatedly

Shortest Path Problem

Shortest Path Problem (SPP)

- Single-Source SPP
- Given a graph G = (V, E) and weight w: E → R+, find the shortest path from a source node s ∈ V to every other node v ∈ V.

- All-Pairs SPP
- Given a graph G = (V, E) and weight w: E → R+, find the shortest path between each pair of nodes in G.

Dijkstra's Algorithm

- Dijkstra's algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is an algorithm that solves the single-source shortest path problem for a directed graph with nonnegative edge weights.

Dijkstra's Algorithm

- Start from the source vertex s
- Take the adjacent nodes and update the current shortest distances
- Select the vertex with the shortest distance from among the remaining vertices
- Update the current shortest distances of the adjacent vertices where necessary,
- i.e. when the new distance is less than the existing value

- Stop when all the vertices have been checked

Dijkstra's Algorithm

G = (V, E) – Graph
w: E → R+ – Weight
s – Source
d[v] – Current shortest distance from s to v
S – Set of nodes whose shortest distance is known
Q – Set of nodes whose shortest distance is unknown

Dijkstra(G, w, s)
    for each vertex v ∈ V[G]
        d[v] ← ∞    // Initialize all distances to ∞
        π[v] ← Nil
    d[s] ← 0    // Set distance of source to 0
    S ← Ø
    Q ← V[G]
    while Q ≠ Ø
        u ← Extract_Min(Q)    // Get the min in Q
        S ← S ∪ {u}    // Add it to the set of known nodes
        for each vertex v ∈ Adj[u]
            if d[v] > d[u] + w(u, v)    // If the new distance is shorter
                d[v] ← d[u] + w(u, v)
                π[v] ← u
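Dijkstra's algorithm can be sketched with Python's heapq playing the role of Extract_Min over Q; as in the Prim sketch, stale heap entries are skipped rather than decreased in place. The small digraph is an assumed example.

```python
import heapq

# Dijkstra's single-source shortest paths for nonnegative edge weights.
# graph: adjacency dict {u: [(v, w), ...]}; returns d[v] for every vertex.
def dijkstra(graph, s):
    d = {v: float('inf') for v in graph}   # initialize all distances to inf
    d[s] = 0
    heap = [(0, s)]
    while heap:
        du, u = heapq.heappop(heap)        # Extract_Min
        if du > d[u]:
            continue                       # stale entry: u already finalized
        for v, w in graph[u]:
            if d[v] > du + w:              # the new distance is shorter
                d[v] = du + w
                heapq.heappush(heap, (d[v], v))
    return d

graph = {
    's': [('a', 2), ('b', 5)],
    'a': [('b', 1), ('c', 4)],
    'b': [('c', 1)],
    'c': [],
}
print(dijkstra(graph, 's'))  # {'s': 0, 'a': 2, 'b': 3, 'c': 4}
```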

Huffman Codes

Huffman Codes

- Huffman coding is a technique for compressing data.
- It is a variable-length code.

- Huffman's greedy algorithm looks at the frequency of each character and encodes each character as a binary string in an optimal way.

Example

Suppose we have data consisting of 100,000 characters with the following frequencies:

| Character | a | b | c | d | e | f |
|---|---|---|---|---|---|---|
| Frequency (in thousands) | 45 | 13 | 12 | 16 | 9 | 5 |

Fixed vs. Variable Length Codes

Total bits for the 100,000 characters:

Fixed-Length Code (3 bits per character)

3×45,000 + 3×13,000 + 3×12,000 + 3×16,000 + 3×9,000 + 3×5,000 = 300,000

Variable-Length Code

1×45,000 + 3×13,000 + 3×12,000 + 3×16,000 + 4×9,000 + 4×5,000 = 224,000

[Figure: the Huffman tree — the root splits into a:45 (bit 0) and an internal node 55 (bit 1); 55 splits into 25 (c:12, b:13) and 30, where 30 splits into 14 (f:5, e:9) and d:16. Resulting codewords: a = 0, c = 100, b = 101, f = 1100, e = 1101, d = 111.]

Prefix Codes

A prefix code is one in which no codeword is a prefix of any other codeword.

Encode: aceabfd = 0 100 1101 0 101 1100 111 = 0100110101011100111

Decode: 0100110101011100111 → a c e a b f d

Huffman-Code Algorithm

[Figure sequence: the Huffman tree is built bottom-up by repeatedly merging the two least frequent nodes — f:5 + e:9 → 14; c:12 + b:13 → 25; 14 + d:16 → 30; 25 + 30 → 55; and finally a:45 + 55 → the root. Left edges are labeled 0 and right edges 1.]

Huffman-Code Algorithm

Huffman(C)
    n ← |C|
    Q ← C
    for i ← 1 to n − 1
        z ← Allocate-Node()
        x ← left[z] ← Extract-Min(Q)    // least frequent
        y ← right[z] ← Extract-Min(Q)   // next least frequent
        f[z] ← f[x] + f[y]              // update frequency
        Insert(Q, z)
    return Extract-Min(Q)
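Huffman(C) can be sketched in Python, with heapq as the priority queue Q and the example's frequency table (a:45 … f:5, in thousands) as input. The integer tiebreaker in each heap entry is an implementation detail so tuples never compare subtrees.

```python
import heapq

# Huffman's algorithm: repeatedly merge the two least frequent nodes
# into a new internal node until one tree (the root) remains.
def huffman(freqs):
    # heap entries: (frequency, tiebreak, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        fx, _, x = heapq.heappop(heap)   # least frequent
        fy, _, y = heapq.heappop(heap)   # next least frequent
        count += 1
        heapq.heappush(heap, (fx + fy, count, (x, y)))

    codes = {}
    def walk(tree, prefix):              # label left edges 0, right edges 1
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix
    walk(heap[0][2], '')
    return codes

codes = huffman({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
print(sorted(len(codes[s]) for s in 'abcdef'))  # code lengths [1, 3, 3, 3, 4, 4]
```

The resulting code lengths give exactly the 224,000 total bits computed in the example: 45×1 + 13×3 + 12×3 + 16×3 + 9×4 + 5×4 = 224 (thousand).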

Optimality

Exercise
