hard drive - CFQ scheduling algorithm
The CFQ algorithm uses an ordered set of queues based on the I/O priority of the processes that made the requests. That means there is one queue for processes of, let's say, priority 1, one for priority 2, and so on.
I understand that the algorithm takes the first request from each queue, sorts them (to avoid unnecessary head movements) and places them in the dispatch queue to be handled. But since a single request can ask for many blocks to be read (not contiguous ones), how is that sort possible? I mean, if we have:
request1 = [1,2,345,6,423]
and
request2 = [3,4,2344,664]
where [a,b,c] is a list of the blocks a, b and c, how are requests 1 and 2 placed in the dispatch queue? As you can see, they have a non-empty "intersection" (for example, block 6 of request 1 comes after blocks 3 and 4 of request 2).
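To make my doubt concrete, here is a toy Python sketch of my own (not actual CFQ code) showing what a pure block-level sort across the two requests would give; this interleaving of blocks from both requests is exactly what I am not sure the scheduler does:

# Toy illustration only: sort every block of both example requests,
# as if the scheduler worked at the block level rather than per request.
request1 = [1, 2, 345, 6, 423]
request2 = [3, 4, 2344, 664]
print(sorted(request1 + request2))   # [1, 2, 3, 4, 6, 345, 423, 664, 2344]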
The other thing I don't understand is, again, since a request can contain multiple blocks to read, what kind of scheduling is done inside of it? FCFS? Or are the blocks ordered?
For example, let's say a request contains the following list of blocks to read:
[1,23,5,76,3]
How does the algorithm handle this?
By FCFS:
[1,23,5,76,3]
Or by sorting the blocks:
[1,3,5,23,76]
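To be explicit, this is the comparison I have in mind, written as a small Python snippet of my own (not taken from any scheduler code):

blocks = [1, 23, 5, 76, 3]
print(blocks)          # FCFS: issue in arrival order
print(sorted(blocks))  # ordered by block number: [1, 3, 5, 23, 76]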
Maybe I just didn't understand the algorithm; I couldn't find enough documentation about it. If you have a link or a paper with a more detailed explanation, please point me to it.
As I understand it, CFQ does not schedule single track requests, but rather timeslices for a number of requests. To quote the ia64wiki:
The CFQ scheduler aims to distribute disk time fairly amongst the processes competing for access to the disk. In addition, it uses anticipation and deadlines to improve performance and to attempt to bound latency.
Each process issuing I/O gets a time slice during which it has exclusive access for synchronous requests. The time slice is bounded by two parameters: slice_sync, adjusted by the process's I/O priority, gives the length in milliseconds of each slice; and quantum gives the number of requests that can be issued. All processes share a set of 17 queues for asynchronous I/O, one for each effective I/O priority. Each queue gets a time slice based on slice_async (adjusted by priority), and the queues are treated round-robin.
Within each slice, requests are merged (both front and back) and issued according to SCAN; backward seeks of up to back_seek_max are permitted, but are biased by back_seek_penalty compared to forward seeks. CFQ will wait slice_idle ms within a slice for the process to issue more I/O. This allows anticipation and improves fairness when a process issues bursty I/O, but in general it reduces performance.
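To illustrate the seek ordering described in that quote, here is a rough Python sketch of how a scheduler could pick the next sector within a slice: forward seeks are taken at face value, while backward seeks are only considered within back_seek_max and are weighted by back_seek_penalty. The constant values and the greedy loop are my own simplification, not the kernel's implementation.

# Simplified SCAN with bounded, penalised backward seeks.
# back_seek_max / back_seek_penalty mirror the tunables quoted above;
# the values and the greedy selection are illustrative only.
BACK_SEEK_MAX = 16 * 1024        # hypothetical limit, in sectors
BACK_SEEK_PENALTY = 2            # backward distance counts double

def seek_cost(head, sector):
    if sector >= head:                          # forward seek
        return sector - head
    if head - sector > BACK_SEEK_MAX:           # too far behind the head
        return float("inf")
    return (head - sector) * BACK_SEEK_PENALTY  # penalised backward seek

def dispatch_order(head, pending):
    # Greedily pick the cheapest sector relative to the current head position.
    order, pending = [], list(pending)
    while pending:
        nxt = min(pending, key=lambda s: seek_cost(head, s))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

print(dispatch_order(100, [90, 120, 4000, 95, 101]))   # [101, 95, 90, 120, 4000]

This is only meant to illustrate the ordering within one slice; across processes, CFQ arbitrates with time slices rather than by interleaving individual blocks, as the quote describes.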