Performance measurement and quasi-competitive mechanisms for the Public Employment Service


Performance measurement and quasi-competitive mechanisms for the Public Employment Service LSE lunchtime seminar, 27 January 2004

by David Grubb

Directorate for Employment, Labour and Social Affairs, OECD


AUSTRALIA & NETHERLANDS – basic system architecture

  • A public body (Centrelink in Australia, Centres for Work and Income in the Netherlands) handles applications for benefits and evaluates unemployed applicants using a questionnaire.

  • Disadvantaged unemployed are referred to a subcontracted employment service provider (except under JN3, where from 2003 onwards all unemployed are referred).

  • Employment service providers bid for contracts to handle a given number (or share) of jobseeker referrals in a locality. Invitations-to-tender were issued in Australia in 1997, 1999 and 2002 (2-3 year contracts), and in the Netherlands (for UI beneficiaries only) in 2000, 2001 and 2002 (1-year contracts).

  • In Australia, central government (DEWR) is the purchaser (responsible for contract design and management, and conducting tenders). In the Netherlands benefit agencies (UWV and numerous municipalities) are the purchasers.


AUSTRALIA & NETHERLANDS – performance measures and incentives (UWV sector in Netherlands, JN1/JN2 in Australia)

  • Incentives are created both by the fee system and by the criteria for awarding contracts.

  • Providers are paid via (a) a fee per unemployed person “commenced” and (b) a fee per unemployed person placed in an (unsubsidised) job for 6 months (with part-payment at 3 months).

  • Providers, in bidding for contracts, specify their “bid price” (i.e. the scale of fees listed above). But in practice contracts are awarded mainly on the basis of “quality”.

  • In Australia (JN2 and JN3), “star ratings” (regression-adjusted measures of track record in achieving placements) are a prime “quality” indicator.

  • In the Netherlands there are no clear performance ratings, and smaller municipalities award contracts without a formal tender process, so there has been a “lack of transparency” in the market.


AUSTRALIA & NETHERLANDS – some critical system parameters

  • Contract duration. 1 year, 3 years, automatic rollover?

  • Contract size. In the Netherlands, providers typically operate multiple small (c.100 clients) contracts. In Australia, a (for-profit) local employment office can have just one multiyear contract.

  • Ongoing monitoring. Does the purchaser monitor provider behaviour and issue instructions to providers during the contract period?

  • Link with unemployment benefits. Can the provider impose benefit sanctions e.g. in case of a client’s failure to attend a job interview?

  • Activation measures (e.g. job-search monitoring and referrals to other labour market programmes). Do these remain in the public domain?

  • Specialisation. e.g. providers for youths, hearing difficulties, migrants/ language issues. How are clients referred and how is performance assessed?

  • Client choice. Can clients choose their provider?

  • Information systems. Poor availability of information on clients’ characteristics and employment statuses closes off many policy options.


AUSTRALIA & NETHERLANDS – results

  • Clients in Australia’s big cities can choose among 5 or more providers. This allows comparative benchmarking of performance. But in remote areas, only one or two local offices are accessible.

  • Placement fees are “too low”: profits can be made by spending little on services. Contract non-renewal is the main incentive that obliges providers to spend on services. (Australia now also has minimum service standards and fee-for-service payments).

  • Placement performance varies significantly among otherwise comparable providers, so significant efficiency gains can be obtained by eliminating poor performers.

  • Transaction costs are high: costs of contract design and award, monitoring and fee payment systems; providers’ costs in submitting bids.

  • The interface between public authorities and providers is costly (split responsibilities, communication of information, coordination of action).

  • Centralised performance measures play a significant role only in Australia.

  • Client choice across providers exists, but is limited in practice.

  • New types of policy rigidity can arise, since the purchaser cannot modify the contract conditions announced at tender time and providers are a new interest group.


AUSTRALIA’S JOB NETWORK 3 (2003)

  • Innovations in JN3 include:

    • Each client stays long-term with the same provider.

    • A rule that clients must attend interviews with their JN provider every 2 weeks.

    • The fee system for paying providers now includes Service Fees (conditional on delivering prescribed, universal services) and Job Seeker Accounts (earmarked for training and related services to clients).

    • Providers are now responsible for managing referrals to ALMPs.

    • Good performers can be allocated additional business, and the contracts of the better-performing providers will be automatically renewed.

  • These changes may point the way to market consolidation with a number of organisations each functioning more like a traditional PES – and with the government shifting business between these organisations on the basis of measured performance.

  • Since the start of JN3, unemployment in Australia has fallen significantly, though this may be mainly related to the 2nd innovation listed above (fortnightly interviews).


EU – Management by Objectives (MBO)

  • 10 out of 18 EU PES organisations use MBO (Mosley et al., 2001)

  • Often 8 to 10 objectives. These typically include total placements and something to do with long-term unemployment (e.g. placements of LTU), plus other target groups and process measures (e.g. number of “action plans” prepared)

  • Ad hoc setting of targets. Usually the target for each objective is just to “do a little better than last year” (“stretching but feasible” idea).

  • Formal incentives are weak or non-existent. (Good performance may be rewarded by small bonus payments. Central management may react to poor local office performance by tightening surveillance of the office’s procedures).


REST OF WORLD

  • Switzerland in 2000 set up high-quality measures for the performance of its c.150 “regional placement offices”. The final outcome measure is a weighted sum of the mean duration of unemployment spells and the rate of re-registration by individuals previously registered. Regression-adjusted (relative net impact) values were published in 2001-02, with plans (since dropped) to apply a bonus/malus system.

  • The US since 1996 (TANF welfare reform) has allowed states to subcontract employment services for welfare recipients. About 13% of spending is subcontracted (for case management, the share may be lower). Contract models are highly varied: some contracts use pure cost reimbursement, some a hybrid of fixed fees and pay-for-results. Job retention at 30, 90, 180 or 360 days is often the main final outcome measure. Some contracts also use hourly earnings at re-employment. (Most also reward process indicators.)
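The Swiss-style measure described above can be sketched as a simple weighted index. The weights, function name and office figures below are illustrative assumptions, not the official Swiss parameters:

```python
# Illustrative sketch of a Swiss-style final outcome measure: a weighted sum
# of mean unemployment-spell duration and the re-registration rate.
# Weights are hypothetical; lower scores are better.

def outcome_index(mean_duration_months: float,
                  rereg_rate: float,
                  w_duration: float = 1.0,
                  w_rereg: float = 10.0) -> float:
    """Lower is better: short spells and few re-registrations both help,
    so an office cannot score well by churning clients through brief jobs."""
    return w_duration * mean_duration_months + w_rereg * rereg_rate

# Two hypothetical offices: fast placements with high churn vs. the reverse.
fast_churn = outcome_index(4.0, 0.30)   # 1.0*4.0 + 10.0*0.30 = 7.0
slow_stable = outcome_index(6.0, 0.05)  # 1.0*6.0 + 10.0*0.05 = 6.5
```

The re-registration term is what distinguishes this from a pure placement count: it penalises placements that quickly break down.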


BUILDING AN EFFECTIVE MODEL (1)

  • Minimum conditions as regards centralisation: centralised (i.e. consistent) measures of final outcomes, with centralised initial intake and coding of clients (prior to referral to private providers).

  • The purchaser identifies client groups.  Providers may bid/contract to serve specific client groups, but only as identified by the centralised codings.

  • Gross final outcome measures must be “non-gameable” and correspond well (at the margin) to the social welfare value of the outcomes that employment services can influence.

  • Gross final outcome measures (e.g. employment rates) must be measured across all clients referred to a provider. This avoids both incentives to cream and selection biases in measurement.

  • Intermediate outcome (or even process) measures have a role in providing real-time feedback on performance and (under pay-for-results) managing cash flow.


BUILDING AN EFFECTIVE MODEL (2)

  • Several contracting models are viable: (a) 100% cost reimbursement, (b) fixed fee per client, (c) pay-for-results.

    • However (a) alone is non-transparent (the purchaser must operate a trade-off between input costs and outcomes, but providers are not told what it is).

    • Under (b) contracts can be awarded transparently to providers with the best relative net impact on final outcomes as measured by historical gross final outcome data with regression adjustments.

    • Under (c) the results paid for are gross final outcomes and an auction process (award of contracts to the highest bidder) is involved. This avoids the need for the purchaser to estimate relative net impacts based on past performance.

  • Hybrid contracting models (including those mixed with traditional management style, i.e. requirements to follow a centralised procedures manual) are viable. Hybrid models can be optimal because they minimise (average across) different sources of error and provider risk. I recommend use of 75% (a) with 25% (c) (with some central regulation and auditing).

  • Contracts should differ between urban and remote rural areas. Under model (c), in dense urban areas with multiple providers, clients can be allocated in small batches while providers enter and exit the market according to profitability. In remote rural areas, the purchaser needs to invite bids for a single contract covering the whole inflow of clients over a long period (e.g. 5 years).
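As a minimal sketch, the recommended 75%/25% hybrid of models (a) and (c) can be written as a blended payment rule. The function name and fee figures are hypothetical illustrations, not figures from the talk:

```python
# Hedged sketch of the recommended hybrid: 75% of remuneration via cost
# reimbursement (model a) and 25% via pay-for-results (model c).

def hybrid_payment(reported_costs: float,
                   results_fee_earned: float,
                   cost_weight: float = 0.75,
                   results_weight: float = 0.25) -> float:
    """Blend cost reimbursement with pay-for-results. The provider bears
    (1 - cost_weight) of any marginal spending on services, which tempers
    both under-servicing incentives and exposure to outcome risk."""
    return cost_weight * reported_costs + results_weight * results_fee_earned

# A hypothetical provider spending 1,000 on services and earning 800 in
# outcome fees receives 0.75*1000 + 0.25*800 = 950.0
payment = hybrid_payment(1000.0, 800.0)
```

The design point is risk-sharing: pure model (c) loads all exogenous labour-market risk onto the provider, while pure model (a) gives no incentive to economise; the blend averages across both sources of error.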


THE GROSS FINAL OUTCOME MEASURE

  • In the case of unemployment beneficiaries, we assume that the individual will be better off in work (utility = E-T-H where E is earnings, T is tax on earnings, H is disutility of hours worked) than when unemployed (utility = B, benefit). Therefore E-H (gain in social welfare from employment) > T+B (improvement in government revenues from employment). So: employment outcomes for unemployment beneficiaries have a value of at least B+T. (For disability it cannot be assumed that earnings exceed the disutility of work).

  • The excess of E-H over B+T is (related to) the gains from matching, which can be partly modelled in terms of earnings (e.g. the excess of hourly earnings above some benchmark). This gives the formula:

    benefit savings + additional tax revenues + some further % of earnings + some adjustments (relating to client satisfaction and service quality during the unemployment spell itself).

  • Gross final outcomes should be measured in terms of the average benefit cost and earnings of clients, etc., over at least 5 years following initial referral to the provider (1-2 years of data may be an adequate basis for forward projections after return to a stable job). US random assignment experiments show that some but not necessarily all labour market programmes have impacts on outcomes for at least 5 years after participation.
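The welfare argument above can be written compactly. Here ΔB, ΔT and the earnings weight α stand in for the slide's "benefit savings", "additional tax revenues" and unspecified "some further %":

```latex
% Work vs. unemployment: E = earnings, T = tax on earnings,
% H = disutility of hours worked, B = benefit.
\[
  E - T - H \;>\; B
  \quad\Longrightarrow\quad
  \underbrace{E - H}_{\text{social welfare gain}} \;>\; \underbrace{B + T}_{\text{fiscal gain}},
\]
% so a placement is worth at least B + T, and the gross final outcome
% value of a placement can be sketched as
\[
  V \;=\; \Delta B + \Delta T + \alpha E + \text{adjustments},
\]
% where the adjustments reflect client satisfaction and service quality
% during the unemployment spell itself.
```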


WHAT ABOUT EXTERNALITIES?

  • Negative externalities can arise in relation to vacancy hoarding, hiring subsidies, and (e.g. in remote areas) spending to attract new business to the locality. (In Australian experience, providers so far make little or no use of the latter two strategies).

  • International historical experience suggests if anything positive long-term externalities from strategies that increase effective labour supply (e.g. falls in welfare rolls in the US exceed the total that would be expected from estimated impacts on the participants in welfare-to-work programmes).

  • So, arrangements which reward providers 100% for their impact on the unemployment and employment rates of their own clients are generally OK. But the externalities listed above must be kept in mind - and corrected.


USING PLACEMENTS etc. AS AN INTERMEDIATE OUTCOME MEASURE

  • Many things can and do go wrong when placements are used as the final outcome measure. I recommend using placement data only for interim payments of outcome fees (or interim performance ratings). Final payments or ratings should be based on more-solid final outcome measures (see above).

  • Australia now pays providers much more for a job entry by a long-term unemployed client than for a job entry by a short-term unemployed client, so providers now have an incentive to delay clients’ returns to work!

  • “Life expectancy” is a standard statistical indicator for the steady-state outcome that is implied by a point-in-time pattern of exit rates. Similarly, a given pattern of monthly exits from unemployment (+ monthly re-registrations by individuals who formerly exited) implies a particular steady-state level of unemployment. Interim outcome payments (or ratings), based on placements cross-classified by disadvantage group and unemployment duration, should be made in line with this. If this is done accurately (allowing for some nonlinearities, etc.), interim outcome payments will in fact add up to the final outcome payments based on the average of the unemployment rate achieved by clients over a 5-year period.
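The steady-state logic above can be sketched with a two-state flow model (unemployed vs. employed). The monthly rates below are hypothetical:

```python
# Sketch of the "life expectancy" logic: a given pattern of monthly exits
# from unemployment plus monthly re-registrations implies a particular
# steady-state unemployment level, via the flow balance u*exit = (1-u)*rereg.

def steady_state_unemployment(exit_rate: float, rereg_rate: float) -> float:
    """Steady-state unemployed share implied by a monthly exit rate
    (unemployment -> work) and a monthly re-registration rate
    (work -> unemployment)."""
    return rereg_rate / (exit_rate + rereg_rate)

def mean_spell_duration(exit_rate: float) -> float:
    """Expected spell length in months under a constant monthly exit rate."""
    return 1.0 / exit_rate

# A provider that doubles the exit rate but whose placements also break down
# twice as often leaves steady-state unemployment unchanged:
base = steady_state_unemployment(exit_rate=0.10, rereg_rate=0.01)   # ~0.0909
churn = steady_state_unemployment(exit_rate=0.20, rereg_rate=0.02)  # ~0.0909
```

This is why interim payments based on placements alone can mislead: only placements net of subsequent re-registrations map onto the final outcome.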


SETTING THE BENCHMARK

  • Gross final (or intermediate) outcome statistics are of no use per se. The methodology used to calculate benchmark values (net impacts) for each local employment office/employment service provider is critical.

  • The MBO (ad hoc) method for setting “targets” creates perverse long-run incentives.

  • Benchmark values need to be calculated using a regression of gross final outcomes on client characteristics, local-economy and other variables: equation residuals then measure the (relative) net impact of each provider.

  • In a single-provider-per-locality context, equations using current local economy characteristics (or even current client characteristics) as regressors will have a good fit. But this will “endogenize the benchmark”. A key issue for performance measurement is to avoid this while also minimising provider exposure to risk from exogenous factors.


CONCLUDING REMARKS

  • Subcontracting the core “case management” functions of the PES using quasi-market mechanisms (open tenders and pay for results) is entirely possible, though it requires some complex management apparatus and cannot solve all political barriers to effective labour market policy.

  • Vague calls to “increase the role of market forces” in the PES can be viewed with scepticism. Effective performance management requires setting-up sound measures of final outcomes, and giving providers extensive responsibilities so that their impact on final outcomes can be clearly identified.

  • Prevailing ideas about PES performance measurement and contracting remain ad hoc or even dysfunctional. Effective performance management needs more precise modelling, as outlined above (cf. modern airline ticket pricing strategies, which now maximise revenues and fill the seats on most flights, but only emerged as the mathematical models improved).


FURTHER READING

  • Many evaluations of Australia’s Job Network are available – notably by DEWR, the Productivity Commission, and OECD (2001), Innovations in Labour Market Policies: The Australian Way – plus other expert comment.

  • Grubb, D. (2003), “Points of Comparison between Australia’s Job Network and the Dutch Market for Reintegration Services”, Australian Journal of Labour Economics, Vol. 6, No. 2 (plus other articles in this issue).

  • Struyven, L. and G. Steurs (2002), “The Competitive Market for Employment Services in the Netherlands”, OECD Social, Employment and Migration Working Paper No. 13 (www.oecd.org/els/employment/policy).

  • Mosley, H., H. Schütz and N. Breyer (2001), Management by Objectives in European Public Employment Services, WZB Discussion Paper FS I 01-203 (http://www.wz-berlin.de/ars/ab/abstracts/i01-203.en.htm).

  • McConnell, S., A. Burwick, I. Perez-Johnson and P. Winston (2003), “Privatization in Practice: Case Studies of Contracting for TANF Case Management. Final Report”, report submitted by Mathematica Policy Research, Inc. to the Department of Health and Human Services (http://aspe.hhs.gov/hsp/privatization-rpt03/).

