
BIOECONOMICS




  1. BIOECONOMICS Oscar Cacho School of Economics University of New England AARES Pre-Conference Workshop Queenstown, New Zealand 13 February 2007

  2. Outline • Definitions • General models • Solution techniques • Incorporating risk • Examples • Extensions • Useful literature

  3. Bioeconomics - Definitions In its original coinage, "bioeconomics" referred to the study of how organisms of all kinds earn their living in "nature's economy," with particular emphasis on co-operative interactions and the progressive elaboration of the division of labor (see Hermann Reinheimer, Evolution by Co-operation: A Study in Bioeconomics, 1913). Today the term is used in various ways, from Georgescu-Roegen's thermodynamic analyses to the work in ecological economics on the problems of fisheries management. Corning (1996) Institute for the Study of Complex Systems Palo Alto, CA

  4. Bioeconomics - Definitions Bioeconomics is what bioeconomists do. Bioeconomics aims at the integration or ‘consilience’ (Wilson 1998) of two disciplines ...for the purpose of enriching both disciplines by substantially enlarging the theoretical and empirical bases which ultimately contribute to building of new hypotheses, theorems, theories and paradigms. Landa (1999) Department of Economics, York University Editor of the Journal of Bioeconomics

  5. Bioeconomics - Definitions (more in line with how AARES members apply the term) The interrelations between the economic forces affecting the fishing industry and the biological factors that determine the production and supply of fish in the sea. Clark (1985) The idea of maximizing net economic yield while maintaining sustainable yield. van der Ploeg et al. (1987) The use of mathematical models to relate the biological performance of a production system to its economic and technical constraints. Allen et al. (1984)

  6. Journal of Bioeconomics A sample of titles (with keywords) • The Bioeconomics of Cooperation (new institutional economics) • The Ecology of Trade (sustainability) • Surrender Value of Capital Assets: The Economics of Strategic Virginity Loss (love) • Making Good Decisions with Minimal Information: Simultaneous and Sequential Choice (ecological rationality) • Altruism and Spite in a Selfish Gene Model of Endogenous Preferences (evolution) • Evolutionary Theory and Economic Policy with Reference to Sustainability (behavioral economics)

  7. Journal of Bioeconomics A sample of titles in resource management • The Bioeconomics of Marine Sanctuaries • The Bioeconomics of the Spatial Distribution of an Endangered Species: The Case of the Swedish Wolf Population • Implementing a Stochastic Bioeconomic Model for the North-East Arctic Cod Fishery • Optimization of Harvesting Return from Age-Structured Population • Selective versus Random Moose Harvesting: Does it Pay to be a Prudent Predator? • Using Genetic Algorithms to Estimate and Validate Bioeconomic Models: The Case of the Ibero-atlantic Sardine Fishery

  8. Bioeconomics of renewable resources • Populations of natural organisms can be viewed as stocks of capital assets which provide potential flows of services. • The critical characteristics of capital are: • Durability: makes it necessary to apply intertemporal planning. • Adjustment costs: force the decision maker to consider the future in order to spread out the cost of altering the capital stock. • Types of decisions: • Timing problem: e.g. when to harvest a stand of trees. • Harvest problem: e.g. how much of a resource to harvest each year. • In both cases the flow of profits per time period depends upon the stock level (biomass) and the control variable (harvest). Wilen (1985)

  9. Bioeconomics of renewable resources • In the simplest case the value derived from natural resources is related to consumptive use by harvesting. • The flows are usually measured in terms of number of organisms or biomass (weight). • In more complex cases the size of the stock may also have intrinsic value (e.g. the number of birds available for birdwatchers). • Models can be extended to include externalities. Wilen, J.E. (1985). Bioeconomics of renewable resource use. In Kneese, A.V. and Sweeney, J.L. (eds), Handbook of Natural Resource and Energy Economics, Vol. 1. North-Holland, Amsterdam, 61-124.

  10. General model in continuous time

  \max_{u(t)} \int_0^T R(x,u,t)\,dt + F(x(T))   (reward + final value)

  subject to:
  \dot{x} = f(x,u,t)   (equation of motion)
  x(0) = x_0   (initial state)

  x(t) = state variable (resource stock)
  u(t) = control variable (harvest rate)

  The Hamiltonian is: H = R(x,u,t) + \lambda(t) f(x,u,t)

  11. FOC in continuous time

  \partial H/\partial u = 0   (maximum condition)
  \dot{\lambda} = -\partial H/\partial x   (adjoint equation)
  \dot{x} = f(x,u,t)   (equation of motion)
  x(0) = x_0   (initial state)
  \lambda(T) = \partial F/\partial x(T)   (transversality condition)

  This system is used to solve for the optimal trajectories u*(t), x*(t), \lambda*(t)

  12. General model in discrete time

  \max_{u_t} \sum_{t=0}^{T-1} R(x_t,u_t) + F(x_T)   (reward + final value)

  subject to:
  x_{t+1} = x_t + f(x_t,u_t)   (state transition)
  x_0 given   (initial state)

  x_t = state variable (resource stock)
  u_t = control variable (harvest rate)

  The Hamiltonian is: H_t = R(x_t,u_t) + \lambda_{t+1} f(x_t,u_t)
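The discrete-time setup can be made concrete with a small simulation. The following sketch is in Python (the code shown later in the slides is Matlab), and the logistic growth function, linear-quadratic returns and every parameter value are illustrative assumptions, not the slides' model:

```python
# Minimal discrete-time bioeconomic model: logistic stock growth with harvest.
# All parameters (r, K, price p, cost c) are illustrative assumptions.

def growth(x, r=0.5, K=100.0):
    """Net natural growth f(x) of the resource stock."""
    return r * x * (1.0 - x / K)

def reward(x, u, p=2.0, c=0.1):
    """Net return R(x, u): harvest revenue minus a quadratic harvest cost."""
    return p * u - c * u * u

def step(x, u):
    """State transition x_{t+1} = x_t + f(x_t) - u_t (harvest depletes stock)."""
    return max(x + growth(x) - u, 0.0)

# Simulate a constant-harvest policy from an initial stock of 50
x, total_return = 50.0, 0.0
for t in range(20):
    u = min(10.0, x)            # cannot harvest more than the stock
    total_return += reward(x, u)
    x = step(x, u)
```

Here the harvest rule is fixed; the optimal-control and DP material that follows is about choosing u_t optimally instead.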

  13. Interpretation

  The Hamiltonian is the total rate of increase in the value of the asset (resource):

  H_t = R(x_t,u_t) + \lambda_{t+1} f(x_t,u_t)

  where R(x_t,u_t) is the value of net returns at time t, and \lambda_{t+1} is the shadow price of the state variable (x) at time t (the user cost)

  14. FOC in discrete time

  \partial H_t/\partial u_t = 0   for t = 0,...,T-1
  \lambda_t - \lambda_{t+1} = \partial H_t/\partial x_t   for t = 1,...,T-1
  x_{t+1} = x_t + f(x_t,u_t)   for t = 0,...,T-1

  together with the initial state x_0 and the transversality condition \lambda_T = \partial F/\partial x_T.

  This system has 3T+1 equations and 3T+1 unknowns:
  u_t for t = 0,...,T-1
  x_t for t = 0,...,T
  \lambda_t for t = 1,...,T

  15. Infinite horizon with discounting

  \max_{u_t} \sum_{t=0}^{\infty} \rho^t R(x_t,u_t)

  subject to:
  x_{t+1} = x_t + f(x_t,u_t)   (state transition)
  x_0 given   (initial state)

  discount factor: \rho = 1/(1+\delta)

  The current-value Hamiltonian is: \tilde{H}_t = R(x_t,u_t) + \rho\lambda_{t+1} f(x_t,u_t)

  16. FOC of infinite horizon problem

  \partial R/\partial u_t + \rho\lambda_{t+1}\,\partial f/\partial u_t = 0   (1)
  \lambda_t = \partial R/\partial x_t + \rho\lambda_{t+1}(1 + \partial f/\partial x_t)   (2)
  x_{t+1} = x_t + f(x_t,u_t)   (3)

  In steady state: u_{t+1} = u_t = u, x_{t+1} = x_t = x, \lambda_{t+1} = \lambda_t = \lambda

  Solving (1) for \lambda in steady state, substituting into (2) and rearranging yields the optimal condition:

  \partial f/\partial x - (\partial f/\partial u)(\partial R/\partial x)/(\partial R/\partial u) = \delta   (4)

  This can be used to solve for (x*,u*) given the steady-state condition from (3): f(x,u) = 0
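A quick numeric check of the steady-state logic, as a Python sketch. It assumes f(x,u) = F(x) - u with logistic F(x) = r x (1 - x/K) and a reward that does not depend on the stock, in which case the steady-state optimal condition reduces to F'(x*) = delta and the steady state of the transition gives u* = F(x*). All parameter values are made up for illustration:

```python
# Steady state of a hypothetical infinite-horizon problem with
# f(x,u) = F(x) - u, F(x) = r*x*(1 - x/K), and a stock-independent reward,
# so the optimal condition reduces to F'(x*) = delta and u* = F(x*).
# Parameter values are illustrative assumptions.
r, K, delta = 0.5, 100.0, 0.05

def Fprime(x):
    """Marginal growth F'(x) = r*(1 - 2x/K)."""
    return r * (1.0 - 2.0 * x / K)

# Bisection on F'(x) - delta over (0, K); F' is decreasing in x
lo, hi = 0.0, K
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Fprime(mid) - delta > 0.0:   # root lies above mid
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)
u_star = r * x_star * (1.0 - x_star / K)   # steady-state harvest u* = F(x*)
```

For these made-up parameters the bisection converges to x* = 45 and u* = 12.375; a higher discount rate delta would push x* down (the stock is run down harder when the future is worth less).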

  17. Summary of general model

  18. Typical problems in bioeconomics

  19. Solution techniques • Numerical solution of optimal control model • Nonlinear programming • Dynamic programming Here I will consider only optimal control and dynamic programming

  20. Numerical optimal control

  21. Dynamic Programming (DP)

  The recursive (Bellman) equation:

  V_t(x_t) = \max_{u_t} \{ R(x_t,u_t) + \rho V_{t+1}(x_{t+1}) \}

  subject to the state transition x_{t+1} = x_t + f(x_t,u_t)

  To solve:
  1. Set terminal value V_T(x)
  2. Solve by backward recursion for a finite set of state values
  3. Obtain the optimal (state-contingent) decision rule
  4. Use this decision rule to derive the optimal path for any initial state x0
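The four solution steps can be sketched as follows. This is a hypothetical Python illustration (the slides' own code is Matlab), with made-up grids, growth function and reward:

```python
# Backward-recursion sketch of finite-horizon DP on a coarse grid.
# State grid, control grid, transition and reward are illustrative assumptions.
X = [float(i) for i in range(0, 101, 10)]     # state grid (stock levels)
U = [0.0, 5.0, 10.0]                          # control grid (harvest levels)
T, rho = 10, 0.95                             # horizon and discount factor

def step(x, u):
    """Transition x + f(x) - u, snapped to the nearest grid point."""
    xn = x + 0.5 * x * (1.0 - x / 100.0) - u
    return min(X, key=lambda g: abs(g - max(xn, 0.0)))

def reward(x, u):
    return 2.0 * min(u, x)                    # cannot harvest more than the stock

V = {x: 0.0 for x in X}                       # 1. terminal value V_T(x) = 0
policy = []
for t in range(T - 1, -1, -1):                # 2. backward recursion
    Vnew, rule = {}, {}
    for x in X:
        best = max(U, key=lambda u: reward(x, u) + rho * V[step(x, u)])
        rule[x] = best                        # 3. state-contingent decision rule
        Vnew[x] = reward(x, best) + rho * V[step(x, best)]
    V = Vnew
    policy.append(rule)
policy.reverse()                              # policy[t][x]: optimal u at time t
```

Step 4 is then just repeated application of `policy[t][x]` and `step` from any initial state x0.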

  22. Dynamic programming algorithm

  23. Alternative DP solution techniques

  Finite-horizon models:
  • Backward recursion

  Infinite-horizon models:
  • Function iteration (essentially the same as backward recursion, but stop when the change in V falls below a tolerance)
  • Policy iteration (the DP is converted into a root-finding problem; practical only for infinite-horizon problems)
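Function iteration on a toy infinite-horizon problem might look like this in Python. The three states, two actions, rewards and deterministic transitions are all invented for illustration:

```python
# Function (value) iteration: repeat the Bellman update until the largest
# change in V falls below a tolerance.  All problem data are made up.
R = {("low", "rest"): 0.0, ("low", "harvest"): 1.0,
     ("mid", "rest"): 0.0, ("mid", "harvest"): 3.0,
     ("high", "rest"): 0.0, ("high", "harvest"): 5.0}
NEXT = {("low", "rest"): "mid", ("low", "harvest"): "low",
        ("mid", "rest"): "high", ("mid", "harvest"): "low",
        ("high", "rest"): "high", ("high", "harvest"): "mid"}
rho, tol = 0.9, 1e-8

V = {s: 0.0 for s in ("low", "mid", "high")}
while True:
    Vnew = {s: max(R[s, a] + rho * V[NEXT[s, a]] for a in ("rest", "harvest"))
            for s in V}
    if max(abs(Vnew[s] - V[s]) for s in V) < tol:   # ||V_new - V|| < tolerance
        V = Vnew
        break
    V = Vnew
```

Because the Bellman update is a contraction (factor rho), the loop is guaranteed to terminate; for this toy problem it converges to the policy "let the stock build, harvest only when high".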

  24. Introducing risk Two general approaches exist for stochastic optimisation of bioeconomic models: • Stochastic differential equations (Itô calculus) for continuous models • Stochastic dynamic programming (SDP) for discrete models Here I will only deal with SDP

  25. SDP Basics

  As before:
  • the state variable x_t can be observed before selecting a value for the control u_t, which results in a known reward R(x_t,u_t)
  • there is a fixed state set X and a fixed control set U, with n and m elements respectively
  • the time horizon may be fixed at T or infinite

  But now future returns are uncertain because the system is subject to stochastic influences:

  x_{t+1} = x_t + f(x_t,u_t,\epsilon_t)

  where \epsilon_t is a random variable with known probability distribution, assumed to be iid; the stochastic process \{\epsilon_t\} is therefore stationary

  26. The transition probability matrix (TPM)

  Let the Markovian probability matrix P(u) denote the n×n state transition probabilities when policy u is followed:

  P_{ij}(u) = \Pr(x_{t+1} = x_j \mid x_t = x_i,\ u_t = u)

  (the probability of jumping from state i to state j, given that action u is taken)

  To solve the problem, first create an array P of dimensions n×n×m (P contains m transition probability matrices P(u))
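Estimating one P(u) by Monte Carlo can be sketched as follows (Python; the state grid, transition function and shock distribution are illustrative assumptions, not the slides' weed model):

```python
# Monte Carlo construction of a transition probability matrix P(u):
# simulate the stochastic transition many times from each state and count
# the state the system lands in.  All model details are assumptions.
import random

X = [10.0, 30.0, 50.0, 70.0, 90.0]            # discretised state set

def transition(x, u, eps):
    """Stochastic transition x + f(x) - u + eps (illustrative form)."""
    return x + 0.5 * x * (1.0 - x / 100.0) - u + eps

def nearest(x):
    """Index of the grid point closest to x."""
    return min(range(len(X)), key=lambda j: abs(X[j] - x))

def tpm(u, k=5000, seed=1):
    """Estimate P(u) from k Monte Carlo draws per starting state."""
    random.seed(seed)
    P = [[0.0] * len(X) for _ in X]
    for i, xi in enumerate(X):
        for _ in range(k):
            eps = random.gauss(0.0, 5.0)      # iid shock (assumed normal here)
            P[i][nearest(transition(xi, u, eps))] += 1.0 / k
    return P

P = tpm(u=5.0)
```

Each row of the estimated matrix is a probability distribution over next-period states; repeating this for every u in U fills the n×n×m array described above.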

  27. SDP Algorithm

  1. Set dimensions (n,m) and initialise state set X and control set U
  2. Run Monte Carlo simulation to create the transition probability array P(n,n,m) and reward matrix R(n,m)
  3. Initialise terminal value vector V_T(x) and set t = T-1
  4. Recursion step; solve, for all x_i \in X:

  V_t(x_i) = \max_u \{ R(x_i,u) + \rho \sum_j P_{ij}(u) V_{t+1}(x_j) \}

  5. Save the optimal decision rule
  6. Decrease the time counter and return to 4 until t = 0 or convergence is achieved

  28. SDP Algorithm (2)

  7. For the infinite-horizon problem, create the optimal transition probability matrix P*(n,n) by selecting the elements of the Markovian probability matrices that satisfy the optimal decision rule for the given state.*
  8. Simulate the optimal state path by performing Monte Carlo simulation for any initial state x0

  * P*_{ij} is the probability of jumping from state i to state j in the following period, given that the optimal policy u*(x_i) is followed
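Steps 7-8 amount to the following; this is a tiny two-state Python sketch in which the transition matrices and the decision rule are made-up examples:

```python
# Build P* from the TPM array by picking, for each state i, the row of P(u)
# that corresponds to the optimal control u*(x_i); then simulate the chain.
# The probabilities and the decision rule below are made-up examples.
import random

P = {  # P[u][i][j]: probability of moving from state i to j under control u
    0: [[0.8, 0.2], [0.3, 0.7]],
    1: [[0.5, 0.5], [0.6, 0.4]],
}
ustar = [1, 0]                                  # optimal decision rule u*(x_i)

Pstar = [P[ustar[i]][i] for i in range(2)]      # optimal TPM P*

def simulate(i0, T, seed=0):
    """Monte Carlo simulation of the optimal state path from state i0."""
    random.seed(seed)
    path, i = [i0], i0
    for _ in range(T):
        r, cum, nxt = random.random(), 0.0, len(Pstar[i]) - 1
        for j, pij in enumerate(Pstar[i]):
            cum += pij
            if r < cum:                         # inverse-CDF draw of next state
                nxt = j
                break
        i = nxt
        path.append(i)
    return path

path = simulate(i0=0, T=20)
```

Running `simulate` many times with different seeds gives the distribution of optimal state paths referred to in step 8.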

  29. Example: Weed Control • A weed can be viewed as a renewable resource with the seed bank representing the stock of this resource (x). • The size of x changes through time due to depletion by weed management and new organisms being created via seed production. • The change in the seed bank from one period to the next is represented by the state transition equation x_{t+1} - x_t = f(x_t, u_t). • The seed bank can be regulated through control u by targeting reproduction and seed mortality. • The objective is to determine the level of control (u) in each season that maximises profit over a period of T years. Jones and Cacho (2000) A dynamic optimisation model of weed control http://www.une.edu.au/economics/publications/gsare/index.php

  30. Weed control model

  The reward is net revenue:

  R(x_t,u_t) = p_y \cdot y - p_u \cdot u_t - c_y

  subject to the simulation model, where:
  x_t = seedbank (seeds/m²)
  p_i = price of i ($/unit)
  u_t = herbicide (l/ha)
  c_y = cropping cost ($/ha)
  y = crop yield (t/ha)

  The simulation model consists of a system of equations that represent the weed population dynamics, the effect of weed density on crop yields and the effect of herbicide on weed survival

  31. Numerical optimal control (NOC)

  The Hamiltonian:

  H_t = R(x_t,u_t) + \lambda_{t+1} f(x_t,u_t)

  is the net profit obtained from the existing levels of x_t and u_t, plus the value of a unit change in x_t valued at price \lambda_{t+1}.

  \lambda_{t+1}, the costate variable, represents the shadow price of a unit of the stock of the seed bank; its value is \leq 0 because the state variable is bad for profits

  32. NOC results: the Hamiltonian and its components H(x,u,\lambda), R(x,u) and \lambda f(x,u) ($/ha), plotted against the control u for x = 50, \lambda = -2

  33. NOC results: optimal paths of the control u_t* and the state x_t* over time t

  34. NOC results: the costate variable \lambda_t* (shadow price of the seedbank)

  35. SDP model

  Use the simulation model to generate the transition probability matrix (P) and reward matrix (R):

  1. Solve the simulation model x_j = x_i + f(x_i, u, \epsilon) for \epsilon = \epsilon_1,…,\epsilon_k, where k is the number of Monte Carlo iterations, with each \epsilon drawn from a lognormal distribution.
  2. Use the results from 1 to estimate P_{ij}(u).
  3. Calculate the reward R_i(u).
  4. Repeat steps 1-3 for x_i = x_1,…,x_n to fill up the rows of P and R.
  5. Repeat step 4 with u = u_1,…,u_m to fill up the columns of R and the 3rd dimension of P.
  6. Perform backward recursion to solve the SDP.

  36. SDP model: the transition probability matrix for u = 1.0 (rows: from state x_t; columns: to state x_{t+1}; entries: probabilities)

  37. TPM array: the three transition probability matrices, for u = 1.0, 2.0 and 3.0

  38. SDP results: the optimal decision rule u* as a function of the state x

  39. SDP results: the optimal state path x* over time t

  40. SDP results: optimal state paths x* over time t (Monte Carlo simulation)

  41. The optimal transition probability matrix P* is created by selecting the elements of the Markovian probability matrices that satisfy the optimal decision rule for the given state. Optimal probability maps for any initial condition can then be generated for any future time period t by applying (P*)^t
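The probability-map idea can be sketched as repeated multiplication of an initial state distribution by P*. Here is a Python illustration with a made-up 2×2 optimal transition matrix:

```python
# Propagating an initial state distribution with (P*)^t: the probability of
# being in each state after t periods is p0 premultiplied by (P*)^t.
# P* here is a small made-up optimal transition matrix.
Pstar = [[0.9, 0.1], [0.4, 0.6]]

def matvec(p, P):
    """Row vector times row-stochastic matrix: p_new[j] = sum_i p[i]*P[i][j]."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

p = [1.0, 0.0]                 # start with certainty in state 0
for _ in range(50):            # apply P* fifty times, i.e. p0 (P*)^50
    p = matvec(p, Pstar)
# p approaches the stationary distribution of P*
```

As t grows, the map converges to the stationary distribution of P* (here 0.8 and 0.2), which is the long-run probability of each state under the optimal policy.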

  42. Example with multiple outputs Optimal control of Scotch broom (Cytisus scoparius) in the Barrington Tops National Park Odom et al (2002). Ecol Econ 44: 119-135

  43. Integrated weed management: control "package" parameters u_i

  44. Optimal state transition: x_{t+1} plotted against x_t, with the 45° line

  45. Optimal paths: sites invaded (%) over time t, with and without a budget constraint (x* paths)

  46. Optimal paths and policy: soil carbon content (t/ha) of an agroforestry system under optimal control, with and without C credits. Wise and Cacho (2005). Dynamic optimisation of land-use systems in the presence of carbon payments: http://www.une.edu.au/carbon/wpapers.php

  47. Extensions • Multiple state variables • Multiple control variables • Multiple outputs • Spatially-explicit models • Fish sanctuary models • Multiple species / interactions • Matrix population models • Metamodelling

  48. Literature and Links Conrad, JM (1999). Resource Economics. Cambridge University Press. Conrad, JM and Clark, CW (1987). Natural Resource Economics: Notes and Problems. Cambridge University Press. Fryer, MJ and Greenman, JV (1987). Optimisation Theory: Applications in OR and Economics. Macmillan. Judd, KL (1998). Numerical Methods in Economics. The MIT Press. Miranda, MJ and Fackler, PL (2002). Applied Computational Economics and Finance. The MIT Press. NEOS Optimization Tree: http://www-fp.mcs.anl.gov/otc/Guide/OptWeb/ Optimization Software Guide: http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/index.html

  49. Matlab optimal control model (program structure)

  WeedOC (main script): set x0
    [Lx0] = TransvCond(x0,tmax,ubound,delta)      % optimisation 1 (lambda(0))
      Lx0 = fminbnd(@ObjFn)
        g = ObjFn(y)
          [xstar,ustar,Lxstar] = SolveWOC(x0,y,tmax,ubound,delta)
    [xstar,ustar,Lxstar] = SolveWOC(x0,Lx0,tmax,ubound,delta)
      uopt = MaxHam(x,Lx,delta,ubound)            % optimisation 2 (u*)
        ustar = fminbnd(@ObjFn)
          g = ObjFn(u)
            [H] = HWeed(x,u,Lx,df)
              [x1,density] = seedbank(x,u)
              profit = gm(density,u);
      dHdx = dHweed(x,uopt,Lx,delta)

  50. Matlab SDP model

  CreateMatrix generates the TPM array (YM) and the reward matrix (R):
    [x1,weeds] = seedbank(x,u)
    profit = gm(density,u);

  SDP backward recursion:

    for t=nt:-1:1;                      % stage loop
      for i=1:nx                       % state loop
        vopt=-inf;
        for k=1:nu                     % control loop
          fval = YM(i,:,k) * v(:,t+1); % expected V(t+1)
          vnow = R(i,k) + delta*fval;  % value function
          if (vnow > vopt)             % keep best control
            vopt=vnow;
            uopt=k;
          end;
        end;
        v(i,t)=vopt;
        ustar(i,t)=uopt;
      end;
    end;
