
Chapter 20 Cost Minimization



Presentation Transcript


  1. Chapter 20 Cost Minimization • Break profit maximization (π max) into 1) cost minimization for every y, then 2) choosing the optimal y* (π max). • The cost minimization problem is: min_{x1,x2} w1x1+w2x2 s.t. f(x1,x2)=y. • Denote the solution by the cost function c(w1,w2,y). It measures the minimal cost of producing y units of output when the input prices are (w1,w2).
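As a minimal numerical sketch of this problem, the snippet below minimizes w1x1+w2x2 subject to f(x1,x2)=y for a Cobb-Douglas technology by searching over x1 and solving the constraint for x2. The technology, the parameters a, b, and the prices are illustrative assumptions, not part of the chapter.

```python
import math

def min_cost(w1, w2, y, a=0.5, b=0.5):
    """Grid search over x1; x2 = (y / x1**a)**(1/b) keeps f(x1,x2) = y."""
    best = math.inf
    for i in range(1, 100001):
        x1 = i / 1000.0                      # x1 in (0, 100]
        x2 = (y / x1 ** a) ** (1.0 / b)      # constraint f(x1, x2) = y
        best = min(best, w1 * x1 + w2 * x2)
    return best

# Closed-form check for a = b = 1/2 (so a+b = 1): c = 2*sqrt(w1*w2)*y.
print(min_cost(1.0, 4.0, 10.0))              # ≈ 40.0
print(2 * math.sqrt(1.0 * 4.0) * 10.0)       # 40.0
```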

  2. On the x1-x2 plane, we can draw the family of isocost lines. A typical one takes the form w1x1+w2x2=k and has slope -w1/w2. • At the optimum, the usual tangency condition says that MRTS1,2(x1*,x2*)=∆x2/∆x1=-MP1(x1*,x2*)/MP2(x1*,x2*)=-w1/w2. Rearranging gives MP1(x1*,x2*)/w1=MP2(x1*,x2*)/w2: in words, an extra dollar spent on recruiting either factor 1 or factor 2 yields the same extra amount of output.

  3. Fig. 20.1

  4. The optimal choices of inputs are denoted x1(w1,w2,y) and x2(w1,w2,y). These are called the conditional factor demands or derived factor demands. They are hypothetical constructs and usually not observed. • Some examples: • f(x1,x2)=min{x1,x2}: c(w1,w2,y)=(w1+w2)y • f(x1,x2)=x1+x2: c(w1,w2,y)=min{w1,w2}y • f(x1,x2)=x1^a x2^b: c(w1,w2,y)=k w1^(a/(a+b)) w2^(b/(a+b)) y^(1/(a+b)), where k is a constant that depends on a and b.
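The three cost functions above can be written down directly; the snippet below does so, with illustrative prices and output, and uses the standard constant k = (a/b)^(b/(a+b)) + (b/a)^(a/(a+b)) for the Cobb-Douglas case.

```python
def cost_leontief(w1, w2, y):
    """f = min{x1, x2}: use x1 = x2 = y, so c = (w1 + w2) * y."""
    return (w1 + w2) * y

def cost_linear(w1, w2, y):
    """f = x1 + x2: buy only the cheaper input, so c = min{w1, w2} * y."""
    return min(w1, w2) * y

def cost_cobb_douglas(w1, w2, y, a, b):
    """f = x1^a * x2^b, with k = (a/b)^(b/(a+b)) + (b/a)^(a/(a+b))."""
    k = (a / b) ** (b / (a + b)) + (b / a) ** (a / (a + b))
    return k * w1 ** (a / (a + b)) * w2 ** (b / (a + b)) * y ** (1 / (a + b))

print(cost_leontief(1.0, 4.0, 10.0))                  # 50.0
print(cost_linear(1.0, 4.0, 10.0))                    # 10.0
print(cost_cobb_douglas(1.0, 4.0, 10.0, 0.5, 0.5))    # 2*sqrt(4)*10 = 40.0
```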

  5. From the Cobb-Douglas example, we see that if the technology is CRS (a+b=1), costs are linear in y. If IRS (a+b>1), costs increase less than linearly with output. If DRS (a+b<1), costs increase more than linearly with output. This holds not only for Cobb-Douglas. • Let c(w1,w2,1) be the unit cost. If we have a CRS technology, what is the cheapest way to produce y units of output? Just use y times as much of every input.

  6. This is true because, first, we cannot have c(w1,w2,y)>yc(w1,w2,1): scaling the cost-minimizing unit bundle up by y produces y at cost yc(w1,w2,1). Nor can we have c(w1,w2,y)<yc(w1,w2,1): scaling the y-bundle down by 1/y would produce one unit at cost less than c(w1,w2,1), contradicting that c(w1,w2,1) is minimal. Hence c(w1,w2,y)=yc(w1,w2,1). In other words, average cost is just the unit cost: AC(w1,w2,y)=c(w1,w2,y)/y=c(w1,w2,1).
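This flat-AC property can be illustrated with the CRS Cobb-Douglas cost function c = 2*sqrt(w1*w2)*y (the a=b=1/2 case); the prices below are illustrative assumptions.

```python
def c(w1, w2, y):
    """CRS Cobb-Douglas (a = b = 1/2) cost function: linear in y."""
    return 2 * (w1 * w2) ** 0.5 * y

w1, w2 = 1.0, 4.0
unit_cost = c(w1, w2, 1.0)
for y in (1.0, 5.0, 10.0):
    ac = c(w1, w2, y) / y
    print(y, ac)            # AC equals the unit cost 4.0 at every y
```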

  7. What about IRS? If we double inputs, output is more than doubled. So to produce twice as much output as before, we don't need to double inputs; costs less than double, and AC decreases as y goes up. • Similarly for DRS: if we halve inputs, output is more than halved. So to produce half as much output as before, we can decrease inputs by more than half. AC decreases as y goes down.
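A quick way to see both cases at once: with Cobb-Douglas costs c = k·y^(1/(a+b)), average cost is y^(1/(a+b)−1). The snippet below fixes prices (absorbing them into k=1 for illustration) and compares an IRS and a DRS technology; the returns-to-scale values are illustrative assumptions.

```python
def ac(y, returns_to_scale):
    """AC = y**(1/(a+b) - 1) with prices absorbed into a constant k = 1."""
    return y ** (1.0 / returns_to_scale - 1.0)

for y in (1, 2, 4, 8):
    # IRS (a+b = 1.25): AC falls in y.  DRS (a+b = 0.8): AC rises in y.
    print(y, ac(y, 1.25), ac(y, 0.8))
```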

  8. The assumption of cost minimization has implications for how the conditional factor demands change as the input prices change. Suppose the same output y is produced at two price vectors: • t: (w1t,w2t,yt=y) → (x1t,x2t) • s: (w1s,w2s,ys=y) → (x1s,x2s) • Cost minimization implies w1tx1t+w2tx2t≤w1tx1s+w2tx2s and w1sx1s+w2sx2s≤w1sx1t+w2sx2t. Let ∆z=zt-zs. Adding the two inequalities gives ∆w1∆x1+∆w2∆x2≤0. In particular, if only w1 changes, ∆w1∆x1≤0: the conditional factor demand slopes downward.
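The inequality ∆w1∆x1+∆w2∆x2≤0 can be checked with the Cobb-Douglas conditional demands for a=b=1/2 (x1=sqrt(w2/w1)·y, x2=sqrt(w1/w2)·y); the two price vectors below are illustrative assumptions.

```python
def demands(w1, w2, y):
    """Conditional factor demands for f = x1^0.5 * x2^0.5."""
    return (w2 / w1) ** 0.5 * y, (w1 / w2) ** 0.5 * y

y = 10.0
x1t, x2t = demands(1.0, 4.0, y)   # choices at period-t prices
x1s, x2s = demands(2.0, 3.0, y)   # choices at period-s prices

dw1, dw2 = 1.0 - 2.0, 4.0 - 3.0
dx1, dx2 = x1t - x1s, x2t - x2s
print(dw1 * dx1 + dw2 * dx2)      # <= 0, as cost minimization requires
```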

  9. LR vs. SR costs. The SR cost is cs(w1,w2,x2,y)=min_{x1} w1x1+w2x2 s.t. f(x1,x2)=y. Denote by x1s(w1,w2,x2,y) the optimal choice of factor 1 in the short run. Then cs(w1,w2,x2,y)=w1x1s(w1,w2,x2,y)+w2x2. Recall that in the LR, c(w1,w2,y)=w1x1(w1,w2,y)+w2x2(w1,w2,y). What if x2=x2(w1,w2,y)? Then x1s(w1,w2,x2,y)=x1(w1,w2,y), so cs(w1,w2,x2(w1,w2,y),y)=c(w1,w2,y). This can be shorthanded as cs(x2(y),y)=c(y).

  10. Intuitively, if in the SR the fixed factor happens to be at the LR cost-minimizing amount, then we don't face an extra binding constraint in the SR, and the cost-minimizing amount of the variable input in the SR equals that in the LR. • In general, cs(x2,y)≥c(y). Can we have cs(x2(y),y)>c(y)? If so, then w1x1s(x2(y),y)>w1x1(y), i.e. x1s(x2(y),y)>x1(y). But in the SR you can also choose x1(y) to produce y (it is feasible with x2 fixed at x2(y)), so this contradicts SR cost minimization. Hence cs(x2(y),y)=c(y).
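The relation cs(x2,y)≥c(y), with equality at x2=x2(y), can be traced out for f=sqrt(x1·x2): the SR constraint forces x1=y²/x2. The prices below are illustrative assumptions.

```python
def c_sr(x2, y, w1=1.0, w2=4.0):
    """SR cost with x2 fixed: x1 = y**2 / x2 solves sqrt(x1*x2) = y."""
    return w1 * y ** 2 / x2 + w2 * x2

def c_lr(y, w1=1.0, w2=4.0):
    """LR cost for f = sqrt(x1*x2): c = 2*sqrt(w1*w2)*y."""
    return 2 * (w1 * w2) ** 0.5 * y

y = 10.0
x2_star = (1.0 / 4.0) ** 0.5 * y      # LR choice x2(y) = sqrt(w1/w2)*y = 5
print(c_sr(x2_star, y), c_lr(y))      # equal when x2 is at its LR level
for x2 in (2.0, 5.0, 12.0):
    print(x2, c_sr(x2, y) >= c_lr(y)) # True: SR cost never beats LR cost
```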

  11. Sunk costs: expenditures that have already been made and cannot be recovered. They should not affect future decisions.
