
NTU/CHT Collaboration Project: Hardware Resource Allocation Management for Large-Scale Datacenters




  1. NTU/CHT Collaboration Project: Hardware Resource Allocation Management for Large-Scale Datacenters

  2. Motivation
  • A datacenter runs tasks with different characteristics.
  • Tasks differ in importance, computation demand, execution period, etc.
  • Each task should be completed before its time constraint.
  • Deciding how many servers to assign to each task so that its deadline is met is therefore an important issue.

  3. Goal
  • A resource-management component that can dynamically adjust the number of (dedicated) servers for each task.
  • Decisions are made according to task attributes.
  • Multiple decision policies are supported.
  • Target environment: a fixed-size, heterogeneous set of servers.

  4. Models
  • We model a task Tk as a tuple (R, P, D):
    • R: the remaining workload/data to be processed.
    • P: the priority of Tk.
    • D: the deadline of Tk.

  5. Models (Cont.)
  • We model each server Sn as a vector (tn,1, …, tn,M):
    • M: the number of tasks.
    • tn,i: the throughput of the i-th task on this server.
  • tn,i is obtained by pre-processing, e.g., by running some small test cases.
  • Example: a task with high CPU demand running on a server with low processing speed yields a small throughput.
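To make the two models concrete, here is a minimal data-structure sketch in Python. The class and field names (Task, Server, remaining, priority, deadline, throughput) are illustrative choices, not identifiers from the project.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    remaining: float   # R: remaining workload/data to be processed
    priority: int      # P: priority of the task
    deadline: float    # D: deadline of the task

@dataclass
class Server:
    name: str
    # throughput[i] corresponds to tn,i: the measured throughput of the
    # i-th task on this server, obtained by pre-processing (e.g., by
    # running some small test cases).
    throughput: List[float]
```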

  6. Some Solutions
  • Priority-based
    • Task priority is related to its deadline.
    • Assign one server to each task, and allocate the remaining servers to the task with the highest priority.
    • "Earliest Deadline First".
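A minimal sketch of the priority-based policy, assuming the Task/Server structures above, that there are at least as many servers as tasks, and that priority is expressed through the deadline (earliest deadline = highest priority):

```python
def priority_based_allocation(tasks, servers):
    """Earliest Deadline First: give one server to every task, then hand
    all remaining servers to the task with the earliest deadline."""
    allocation = {t.name: [] for t in tasks}
    # One dedicated server per task (assumes len(servers) >= len(tasks)).
    for task, server in zip(tasks, servers):
        allocation[task.name].append(server.name)
    # The rest of the servers go to the most urgent (highest-priority) task.
    urgent = min(tasks, key=lambda t: t.deadline)
    for server in servers[len(tasks):]:
        allocation[urgent.name].append(server.name)
    return allocation
```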

  7. Some Solutions (Cont.)
  • Workload-based
    • For each task, calculate the ratio of its workload to the total workload of all tasks.
    • Distribute servers according to each task's ratio.
    • "Fair distribution".
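A sketch of the workload-based ("fair distribution") policy under the same assumptions; the largest-remainder rounding used to hand out leftover servers is one possible choice, not something specified in the slides.

```python
def workload_based_allocation(tasks, servers):
    """Distribute servers in proportion to each task's share of the total
    remaining workload (assumes the total workload is non-zero)."""
    total = sum(t.remaining for t in tasks)
    shares = {t.name: len(servers) * t.remaining / total for t in tasks}
    counts = {name: int(share) for name, share in shares.items()}
    # Hand out the servers lost to rounding, largest fractional part first.
    leftover = len(servers) - sum(counts.values())
    by_fraction = sorted(shares, key=lambda n: shares[n] - counts[n], reverse=True)
    for name in by_fraction[:leftover]:
        counts[name] += 1
    # Walk through the server list and give each task its computed share.
    allocation, it = {}, iter(servers)
    for t in tasks:
        allocation[t.name] = [next(it).name for _ in range(counts[t.name])]
    return allocation
```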

  8. Throughput-Aware
  • The previous policies make decisions according to task characteristics only.
  • However, in a heterogeneous environment, servers can have different capabilities.
  • We propose a throughput-aware strategy that considers both task and server characteristics.

  9. Throughput-Aware (Cont.)
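The slides do not spell out the exact throughput-aware rule, so the following is only a plausible greedy sketch of how task and server characteristics could be combined: each server is given to the task whose estimated finish time, computed from the per-task throughputs tn,i, is currently the worst.

```python
def throughput_aware_allocation(tasks, servers):
    """Greedy illustration (not necessarily the project's algorithm):
    repeatedly assign the next server to the task that would still have
    the latest estimated finish time even after receiving that server."""
    allocation = {t.name: [] for t in tasks}
    rate = {t.name: 0.0 for t in tasks}  # total allocated throughput per task

    for server in servers:
        def finish_time(item):
            i, task = item
            r = rate[task.name] + server.throughput[i]
            return task.remaining / r if r > 0 else float("inf")

        i, task = max(enumerate(tasks), key=finish_time)
        allocation[task.name].append(server.name)
        rate[task.name] += server.throughput[i]
    return allocation
```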

  10. Another Scenario
  • So far we have only considered the situation where the total workload of each task is fixed.
  • In practice, however, a task's total workload may grow during execution.
    • Example: a sensor-data analysis task.
  • We will modify our throughput-aware strategy to handle such tasks.

  11. Future Plan
  • Oct.–Dec. 2013
    • Modify the throughput-aware strategy for tasks with increasing workloads.
    • Policy for tasks with dependencies.
    • Midterm report.
  • 2014
    • Policy for non-dedicated servers: a server can host two or more tasks.
    • Experiments.
