
Impact of the Granularity of Representation of Resource Availability Scott A. Moses



  1. Impact of the Granularity of Representation of Resource Availability. Scott A. Moses, School of Industrial Engineering, University of Oklahoma, Norman, OK 73019 USA. This material is based upon work supported by the National Science Foundation under Grant No. 0122082.

  2. Overview of this talk
  • Understand the impact of the granularity of a discrete-time representation of resource availability
  • Discrete-time representation of resource availability
  • Scalable data structure … the “TB-tree”
  • Planning methodology
  • Computational results
  • Interpretation of results
    • Effect of granularity on accuracy
    • Effect of order size variability on accuracy
    • Effect of system utilization on accuracy
    • Effect of granularity on computational time
  • Conclusions and implications

  3. Introduction
  • Availability of resources is an important factor in many types of planning decisions … or at least it should be
    • Due date assignment
    • Order acceptance
    • Shop order release
    • Production lot sizing (capacitated)
  • Time-phased availability of a resource can be represented with …
    • Continuous-time representation → for detailed scheduling (Gantt chart)
    • Discrete-time representation → for planning (load profile)

  4. Introduction
  • Definition, granularity: the size of the interval of time over which available capacity is computed in a discrete-time representation (e.g., one week).
  • Premise: granularity specifies the tradeoff between accuracy and computational time.
  • Common sense: accuracy increases and computational time increases as granularity becomes finer.
  • The specific nature of this relationship is unknown, particularly for granularities that are small.
    • Is there a minimum necessary granularity … below which no further gain in accuracy occurs?
    • Is there a maximum effective granularity … above which accuracy rapidly degrades?
    • Is there a diminishing return in accuracy relative to computational time as granularity decreases?

  5. Relevant literature
  • The need for insights into how granularity should be determined has been noted by a few authors.
    • Bergamaschi, D., Cigolini, R., Perona, M., and Portioli, A., 1997, “Order review and release strategies in a job shop environment: a review and a classification,” International Journal of Production Research, 35(2), 399-420.
    • Cigolini, R., Perona, M., and Portioli, A., 1998, “Comparison of order review and release techniques in a dynamic and uncertain job shop environment,” International Journal of Production Research, 36(11), 2931-2951.
    • Horiguchi, K., Raghavan, N., Uzsoy, R., and Venkateswaran, S., 2001, “Finite-capacity production planning algorithms for a semiconductor wafer fabrication facility,” International Journal of Production Research, 39(5), 825-842.
      • Assumed that the maximum possible processing time must be smaller than the minimum size of the bucket used.
      • “Clearly, the shorter the length of the time period, the more accurate results of the model.” → hmmmm…
    • Taal, M. and Wortmann, J.C., 1997, “Integrating MRP and finite capacity planning,” Production Planning and Control, 8(3), 245-254.
      • Note that smaller time buckets result in nervousness, while larger time buckets result in aggregation errors
      • Note that planning too precisely can be detrimental
      • Assume that it is “sufficient to plan in day buckets”
  • In our review of the literature, we found it provides no guidance on setting the interval size, or “granularity”. Typically it is set informally.
  • The only paper known to consider the effect of granularity on performance for any production decision is (Riezebos 2004), which examines the effect of time bucket size on a lot-splitting policy.

  6. Data structure for resource availability: Temporal Bin tree (TB-tree)
  • Complete and balanced binary tree
  • Hierarchical representation of time-phased resource availability
  • Each node represents an interval of time
  • Each node stores a value representing current availability for that interval
  • Size of leaf nodes equals granularity
  • Provides a fast search strategy, supports frequent insertions, and is space efficient
  (figure: a TB-tree over a 24-hour horizon: an 86,400-second root split into two 43,200-second children, then four 21,600-second nodes, down to eight 10,800-second leaves spanning 00:00 through 21:00)
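The structure above can be sketched as follows. This is an illustrative reconstruction (the class and function names are mine, not from the paper), assuming the planning horizon is a power-of-two multiple of the granularity so the tree comes out complete and balanced:

```python
class TBNode:
    """One interval [start, end) of the horizon and its remaining capacity."""
    def __init__(self, start, end, available):
        self.start, self.end = start, end
        self.available = available
        self.left = self.right = None

def build_tb_tree(start, end, granularity, rate):
    """Recursively split the horizon in half until leaves span one granule.
    Each node stores the current availability for its whole interval."""
    node = TBNode(start, end, rate * (end - start))
    if end - start > granularity:
        mid = (start + end) // 2
        node.left = build_tb_tree(start, mid, granularity, rate)
        node.right = build_tb_tree(mid, end, granularity, rate)
    return node

def leaves(node):
    """Left-to-right list of leaf nodes (each one granule wide)."""
    if node.left is None:
        return [node]
    return leaves(node.left) + leaves(node.right)

# A 24-hour horizon (86,400 s) with 10,800 s granularity, as in the figure:
root = build_tb_tree(0, 86400, 10800, rate=1)
```

With one unit of capacity per second, the root holds 86,400 units and the tree has eight leaves, matching the figure on the slide.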

  7. Simple example
  (figure: four leaf nodes ending at times 100, 200, 300, and 400, with 30, 15, 65, and 100 units of capacity available, respectively)
  • Order arrives with task size = 45 units
  • Search node 1 → 30 < 45 → not enough capacity
  • Search node 2 → 15 < 45 → not enough capacity
  • Search node 3 → 65 > 45 → enough capacity; update node 3, available capacity: 65 − 45 = 20
  • Start time = end of node − task size = 300 − 45 = 255
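A minimal sketch of the search on this slide (the list-of-tuples representation and the function name are illustrative assumptions, standing in for a walk over the TB-tree leaves):

```python
# Leaf nodes ending at times 100, 200, 300, 400 with 30, 15, 65, and 100
# units of capacity available, respectively.
nodes = [(100, 30), (200, 15), (300, 65), (400, 100)]

def plan_task(nodes, task_size):
    """Scan leaves left to right for the first one with enough capacity,
    commit the task there, and return start = end of node - task size."""
    for i, (end, available) in enumerate(nodes):
        if available >= task_size:
            nodes[i] = (end, available - task_size)  # commit the task
            return end - task_size
    return None  # no single leaf can hold the task

start = plan_task(nodes, 45)  # node 3 has 65 >= 45, so start = 300 - 45 = 255
```

After the call, node 3 retains 65 - 45 = 20 units, exactly as in the worked example.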

  8. Planning methodology
  • Traditional bucketed planning: the minimum bucket size must be at least as big as the largest task size.
  • Current study: tasks can occupy multiple leaf nodes
    • The number of leaf nodes needed, b = ⌈t / G⌉, is determined prior to beginning the search, where b = number of leaf nodes, t = task size, and G = granularity
    • If b = 1, find the first node with sufficient availability
    • If b > 1, we require that interior nodes are 100% available (prevents excessive fragmentation of the plan for a task)
    • Start time = end time of last node used − task size
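Assuming the formula is b = ⌈t / G⌉ (inferred from the variable definitions on the slide), the leaf count and the start-time rule can be computed directly:

```python
import math

def leaf_nodes_needed(task_size, granularity):
    """b = ceil(t / G): the number of leaves a task must occupy."""
    return math.ceil(task_size / granularity)

def start_time(end_of_last_node, task_size):
    """Start time = end time of last node used - task size."""
    return end_of_last_node - task_size

# A 25,000 s task with 10,800 s granularity spans b = 3 leaves.
b = leaf_nodes_needed(25000, 10800)
```

When b = 1 this reduces to the first-fit search of the simple example; when b > 1 the b - 1 interior leaves must be 100% available before the task is planned.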

  9. Application context: estimate the time-phased availability of a resource to assign order due dates in make-to-order environments.
  Experimental factors (data model):
  • 35 resources
  • 250 unique end items, each processed on 5 of the 35 resources
  • Processing times for tasks ~ UNIFORM[100, 500] seconds; resources process tasks FCFS
  • System utilization: 75% and 85%
  • Order size variability: constant (15), medium (UNIFORM[10, 20]), high (UNIFORM[1, 29])
  • Granularity: several values from 250 to 75,000 seconds. Note that a 250-second granularity is small relative to the minimum size of a task (which, in the constant order size case, for example, is 15 × 100 = 1,500 seconds).

  10. Computational testbed: Java-based, object-oriented, event-driven
  (architecture diagram: a promising service and a planning service interact with customers, suppliers, a data model, and a simulated production system produced by a simulation model generator; flows include requests, promises, new orders, WIP & state, order plans, release dates for unreleased production orders, due date performance, exogenous and endogenous events, and alerts giving proactive notification of late orders)

  11. Computational results
  • Extensive computations were performed to collect results
    • Two utilizations
    • Three order size variabilities
  • The primary accuracy metric used for comparing the different granularities is median absolute lateness (MAL)

  12. Computational results
  • Results are similar regardless of system utilization or order size variability. To obtain a clearer view, we will examine the 85% utilization, medium order size variability case.

  13. Computational results: for small granularities (< 30,000 seconds)

  14. Computational results: for large granularities (> 30,000 seconds)

  15. Generalized results

  16. Seven regions of behavior
  (figure: regions labeled A through G)

  17. Discrete-time representation
  • Definition, hole: a period of availability that is small relative to task sizes, so that it cannot be used or is unlikely to be used.
  • Remember: if b > 1, we require that interior nodes are 100% available (prevents excessive fragmentation of the plan for a task)
  (figure: a load profile with a small unusable gap marked “hole”, one candidate placement marked “no”, and another marked “OK”)
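As a rough illustration (this predicate is mine, not from the paper): under the 100%-interior rule, a leaf that is partially filled and too small for even the smallest task is effectively dead capacity:

```python
def is_hole(leaf_available, granularity, min_task_size):
    """A leaf with leftover availability that (a) cannot host even the
    smallest task on its own and (b) is not 100% free, so it cannot serve
    as an interior leaf of a multi-leaf task, is effectively unusable."""
    fully_free = leaf_available == granularity
    return 0 < leaf_available < min_task_size and not fully_free

# With leaves of 100 units and tasks of at least 150 units, a leaf holding
# 40 leftover units is a hole:
print(is_hole(40, 100, 150))   # True
print(is_hole(100, 100, 150))  # False: fully free, usable as an interior leaf
```

This is why finer granularities can hurt: the smaller the leaves relative to task sizes, the more leftover slivers fail both conditions and go to waste.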

  18. Region F
  • Granularity is large enough to accommodate several tasks in one node, thus reducing the prevalence of holes that occur in regions A-E
  • On the other hand, granularity is small enough that the task start time is estimated accurately, unlike in region G (start time = end of node − task size)
  • Holes cause interesting behavior for very small granularities: a local minimum occurs when granularity is equivalent to the average task size

  19. Effect of order size variability
  • Behavior for regions A-E
    • Performance improves as variability increases
    • The major reason for lateness in regions A-E is holes; higher variability of task sizes allows filling of holes
  • Behavior for regions F-G
    • Performance is unaffected by order size variability
    • The major reason for lateness in region G is the start time rule

  20. Effect of shop utilization
  • Behavior for regions A-E
    • Behavior is generally the same
    • Performance improves as utilization decreases
    • As utilization decreases there is less congestion, and thus the time that a resource will be available to process a task can be estimated more accurately

  21. Effect of shop utilization
  • Behavior for regions F-G
    • Performance improves (lower MAL) as utilization decreases
    • The minimum MAL occurs at smaller values of granularity for 75% than for 85% utilization

  22. Effect of granularity on computational time
  • When larger granularities are used, the TB-tree need not be as deep, so the computational time required to search the tree decreases as granularity increases
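The depth argument can be made concrete. A sketch, assuming a complete binary tree whose leaf count is horizon / granularity (rounded up to the next power of two):

```python
import math

def tb_tree_depth(horizon, granularity):
    """Number of levels below the root when each leaf spans one granule."""
    return math.ceil(math.log2(horizon / granularity))

# Over a one-day horizon (86,400 s), coarsening granularity from 250 s to
# 10,800 s cuts the depth from 9 levels to 3, so each search touches far
# fewer nodes.
```

Since depth grows only logarithmically in the number of leaves, halving the granularity adds just one level to every search.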

  23. Return on computational investment
  • Which values of granularity give the biggest ‘bang for the buck’?
    • 75% utilization: between 18,000 and 36,000 seconds (4x-8x the average processing time)
    • 85% utilization: between 45,000 and 72,000 seconds (10x-16x the average processing time)

  24. Conclusions
  • Definition, granularity: the size of the interval of time over which available capacity is computed in a discrete-time representation (e.g., one week).
  • In practice, planning systems often arbitrarily use one-week buckets, and it seems likely that the resulting behavior resembles the suboptimal behavior of region G.
  • As computing power continues to increase, smaller granularities can be used
    • But poor performance can occur if granularity is set too small
  • Further work is required to develop a model that suggests a good value of granularity (i.e., a value in region F).
    • Our results indicate utilization and mean processing time are two of the most important parameters.
    • At 75% (85%) utilization, the best predictions occurred when granularity was 4x-8x (10x-16x) the average processing time.
    • These are preliminary, limited results.

  25. Implications
  • When the granularity is set appropriately, resource availability can be estimated accurately with a discrete-time representation, at relatively low computational cost.
  • Interpretation: most of the benefit is obtained by planning the task at approximately the time a resource will be available to process it, and not by specifying the exact sequence of tasks.
  (figure: a spectrum from traditional bucketed planning systems to detailed scheduling, with the highest return on computational investment in between)

  26. Impact of the Granularity of Representation of Resource Availability. Scott A. Moses, School of Industrial Engineering, University of Oklahoma, Norman, OK 73019 USA. This material is based upon work supported by the National Science Foundation under Grant No. 0122082.
