
A Markov Decision Model for Determining Optimal Outpatient Scheduling


Presentation Transcript


  1. A Markov Decision Model for Determining Optimal Outpatient Scheduling Jonathan Patrick Telfer School of Management University of Ottawa

  2. Motivation • The unwarranted skeptic and the uncritical enthusiast • Outpatient clinics in Canada are receiving strong encouragement to switch to open access • Basic operations research would claim that there is a cost to providing same-day access • Does the benefit outweigh the costs?

  3. Trade-off • Any schedule needs to balance system-related benefits/costs (revenue, overtime, idle time, …) against patient-related benefits (access, continuity of care, …) • Available levers include the decision as to how many new requests to serve today and how many requests to book in advance into each future day

  4. Scheduling Decisions [Figure: new demand being allocated across the booking horizon, Day 1 through Day 5]

  5. Literature • Plenty of evidence that overbooking is advantageous in the presence of no-shows (work by Lawley et al. and by Lawrence et al.) • Also evidence that a two-day booking window outperforms open access (work by Liu et al. and by Lawrence and Chen) • The old trade-off between the tractability of the model and its complexity persists

  6. Model Aims • To create a model that • incorporates a show rate that is dependent on the appointment lead time • gives managers the ability to determine • the number of new requests to serve today • the number of requests to book into each future day (called the Advanced Booking Policy, ABP) • allows the policy to depend on the current booking slate and demand

  7. Markov Decision Process Model • Decision Epochs • Decisions are made once a day, after today's demand has arrived but before any appointments are seen • State • Current ABP (w), queue size (x) and demand (y) • Actions • How many of today's demand to serve today (b) • Whether to change the current ABP (a); a minimal sketch of these objects follows below
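As a concrete reading of this state/action description, here is a minimal Python sketch. The class names State and Action, and the convention that action a is the ABP to carry forward (a == w meaning no change), are illustrative assumptions; the talk only names the components w, x, y, b and a.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    w: int  # current Advanced Booking Policy (ABP)
    x: int  # queue size: requests already booked into future days
    y: int  # today's demand, observed before the decision is made

@dataclass(frozen=True)
class Action:
    b: int  # how many of today's y requests to serve today
    a: int  # ABP to use from now on (a == w means keep the current ABP)
```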

  8. Markov Decision Process Model • Transitions • The only stochastic element is new demand • New queue size is equal to the current queue size (x) minus today's slate (at most w appointments, i.e. min(x, w)) plus any new demand not serviced today (y - b) • New demand is represented by the random variable D; a sketch of this transition follows below
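Continuing the sketch above (and reusing State and Action), the one-day transition could look as follows. The min(x, w) slate is a reconstruction, since the slide's notation was garbled in transcription, and the Poisson demand is purely illustrative; the talk only says new demand is a random variable D.

```python
import numpy as np

def transition(s: State, act: Action, mean_demand: float = 11.0) -> State:
    slate_today = min(s.x, s.w)                       # appointments already booked for today
    new_x = s.x - slate_today + (s.y - act.b)         # deferred new demand joins the queue
    new_demand = int(np.random.poisson(mean_demand))  # tomorrow's demand, a draw from D
    return State(w=act.a, x=new_x, y=new_demand)
```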

  9. Markov Decision Process Model • Costs/Rewards • System-related: revenue, overtime, idle time • Patient-related: lead time • A cost for switching the ABP (see the sketch below)
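The talk lists the cost categories but not their functional form. A linear per-day reward consistent with that list might look as follows; every coefficient, the daily capacity, and the linear structure are assumptions for illustration.

```python
def stage_reward(s: State, act: Action,
                 revenue: float = 1.0, c_overtime: float = 1.5,
                 c_idle: float = 0.5, c_lead: float = 0.1,
                 c_switch: float = 2.0, capacity: int = 20) -> float:
    served = min(s.x, s.w) + act.b                 # patients seen today
    r = revenue * served                           # system: revenue
    r -= c_overtime * max(0, served - capacity)    # system: overtime beyond capacity
    r -= c_idle * max(0, capacity - served)        # system: idle capacity
    r -= c_lead * (s.y - act.b)                    # patient: lead time for deferred demand
    r -= c_switch * (1 if act.a != s.w else 0)     # penalty for switching the ABP
    return r
```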

  10. Bellman Equation • Used a discounted, infinite-horizon model (discount rate 0.99) to avoid arbitrary terminal rewards • Can be solved to optimality (see the equation below)
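The equation image on this slide did not survive the transcript. A discounted Bellman equation consistent with the state, transition and reward descriptions above (including the min(x, w) slate assumption) would read:

```latex
V(w, x, y) \;=\; \max_{a,\; 0 \le b \le y}
  \Big\{ r\big((w, x, y), (a, b)\big)
  \;+\; \gamma\, \mathbb{E}_{D}\Big[ V\big(a,\; x - \min(x, w) + (y - b),\; D\big) \Big] \Big\},
  \qquad \gamma = 0.99
```

With finite state and action spaces, standard value iteration or policy iteration converges to the optimal policy, consistent with the slide's claim that the model can be solved to optimality.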

  11. Assumptions/Limitations • Advance bookings are done on a FCFS basis • Today's demand arrives before any booking decisions need to be made • Service times are deterministic • Show rate depends on the size of the queue at the time of service instead of at the time of booking • Immediate changes to the ABP may mean that previous bookings need to be shifted • Does not account for the fact that some bookings have to be booked in advance

  12. Clinic Types Considered

  13. Six Scenarios for each Clinic Type • Base scenario: demand equal to capacity; show rate based on research by Gallucci; all requests can be serviced the same day • Demand > capacity • Demand < capacity • Some requests must be booked in advance • Same-day bookings given a show probability of 1 • Show probability with a steeper decline

  14. Performance Results • Clinics #1,2,3: • OA and the MDP policy result in almost identical profits • Same-day access ranges from 89% to 100% (max lead time 1 day) • Clinics #4,5,6: • MDP slightly outperforms OA (by less than 2%) • Same-day access ranges from 84% to 100% (max lead time 2 days) • Clinics #7,8,9: • MDP vastly outperforms OA in all scenarios (by as much as 70%) • Same-day access ranges from 28% to 98% (max lead time 4 days) • For all clinics, the MDP policy provides a significant reduction in throughput variation and peak workload

  15. Optimal Policy (base scenario, w=11, x=0) [Figure: bookings allocated to Day 1 across demand levels]

  16. Optimal Policy (base scenario, w=11, x=0) [Figure: bookings allocated to Day 1 and Day 2 across demand levels]

  17. Performance Trends • MDP performed best when demand was high (e.g. when demand > capacity and when the same-day show rate was guaranteed) • MDP approaches OA as the lead-time cost increases • The presence of revenue makes OA much more attractive • The maximum booking window in any scenario tested was 4 days • MDP manages to perform as well even when revenue is present by sacrificing some throughput in order to reduce overtime and idle-time costs

  18. Conclusion • The model provides a booking policy that takes into account no-shows and reacts to congestion in the system • Simulation results suggest that it achieves better results (same or higher objective, more predictable throughput) than open access, with minimal cost to the patient in terms of lead times • Enhancements to the model are certainly possible, including stochastic service times, a transition to a continuous-time setting, and the possibility of a multi-doctor clinic • Currently in discussion with a local clinic to build an enhanced model and test it

  19. Thank You!

  20. Optimal Policy (base scenario, w=11, x=0)

  21. Optimal Policy (base scenario, w=11, x=0)
