The following are demonstrations of the pridist.d script.

Here we run pridist.d for a few seconds and then hit Ctrl-C,

# pridist.d
Sampling... Hit Ctrl-C to end.
^C
 CMD: setiathome               PID: 2190

           value  ------------- Distribution ------------- count
              -5 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 6629
               5 |                                         0

 CMD: sshd                     PID: 9172

           value  ------------- Distribution ------------- count
              50 |                                         0
              55 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 10
              60 |                                         0

 CMD: mozilla-bin              PID: 3164

           value  ------------- Distribution ------------- count
              40 |                                         0
              45 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 20
              50 |                                         0

 CMD: perl                     PID: 11544

           value  ------------- Distribution ------------- count
              10 |                                         0
              15 |@@@@@@@@                                 60
              20 |                                         0
              25 |@@@@@@@@@@@@@@@                          120
              30 |                                         0
              35 |@@@@@@@@@@                               80
              40 |                                         0
              45 |@@@@@                                    40
              50 |                                         0
              55 |@@@                                      20
              60 |                                         0

During this sample there was a CPU bound process called "setiathome" running, and a new CPU bound "perl" process was executed. perl, executing an infinite loop, begins with a high priority of 55 to 59, where it is sampled 20 times. pridist.d samples 1000 times per second, so this equates to 20 ms. The perl process was also sampled for 40 ms at priority 45 to 49, for 80 ms at priority 35 to 39, and so on down to 60 ms at priority 15 to 19 - at which point I hit Ctrl-C to end sampling. The output is spectacular as it matches the behaviour of the dispatcher table for the time sharing class perfectly! setiathome is running with the lowest priority, in the 0 to 4 range.

... ok, so when I say 20 samples equates to 20 ms, we know that's only an estimate. It really means that for 20 samples that process was the one on the CPU. In between the samples anything may have occurred (I/O bound processes will context switch off the CPU). DTrace can certainly be used to measure this based on scheduler events rather than samples (eg, cpudist), however DTrace can then sometimes consume a noticeable portion of the CPUs (for example, 2%).

The following is a longer sample.
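The sample-count-to-time conversion used above is simple enough to sketch. The helper below is not part of pridist.d; it just restates the arithmetic: at 1000 samples per second, each sample represents roughly 1 ms on-CPU, and, as noted above, this is only an estimate.

```python
# Hypothetical helper (not part of pridist.d): convert a sample count
# from a 1000 Hz sampling profile into estimated on-CPU milliseconds.
SAMPLE_HZ = 1000  # pridist.d samples 1000 times per second

def samples_to_ms(count, hz=SAMPLE_HZ):
    """Each sample represents 1000/hz milliseconds of estimated CPU time."""
    return count * 1000.0 / hz

# perl was sampled 20 times in the 55 to 59 priority range:
print(samples_to_ms(20))   # 20.0 ms (an estimate only)
```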
Again, I start a new CPU bound perl process,

# pridist.d
Sampling... Hit Ctrl-C to end.
^C
 CMD: setiathome               PID: 2190

           value  ------------- Distribution ------------- count
              -5 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1820
               5 |                                         0

 CMD: mozilla-bin              PID: 3164

           value  ------------- Distribution ------------- count
              40 |                                         0
              45 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 10
              50 |                                         0

 CMD: bash                     PID: 9185

           value  ------------- Distribution ------------- count
              50 |                                         0
              55 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 10
              60 |                                         0

 CMD: perl                     PID: 11547

           value  ------------- Distribution ------------- count
              -5 |                                         0
               0 |@@@@@@@@@@@@@@@                          2020
               5 |@@                                       200
              10 |@@@@@@@                                  960
              15 |@                                        160
              20 |@@@@@                                    720
              25 |@                                        120
              30 |@@@@                                     480
              35 |@                                        80
              40 |@@                                       240
              45 |                                         40
              50 |@@                                       240
              55 |                                         10
              60 |                                         0

Now other behaviour can be observed as the perl process runs. The effect here is due to ts_maxwait triggering a priority boost to avoid CPU starvation; the priority is boosted to the 50 to 54 range, then decreases by 10 until it reaches 0, when another ts_maxwait boost is triggered. The process spends more time at lower priorities, as that is exactly how the TS dispatch table has been configured.

Now we run pridist.d for a considerable time,

# pridist.d
Sampling... Hit Ctrl-C to end.
^C
 CMD: setiathome               PID: 2190

           value  ------------- Distribution ------------- count
              -5 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 3060
               5 |                                         0

 CMD: mozilla-bin              PID: 3164

           value  ------------- Distribution ------------- count
              40 |                                         0
              45 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 20
              50 |                                         0

 CMD: perl                     PID: 11549

           value  ------------- Distribution ------------- count
              -5 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@                      7680
               5 |                                         0
              10 |@@@@@@@                                  3040
              15 |                                         70
              20 |@@@@@@                                   2280
              25 |                                         120
              30 |@@@@                                     1580
              35 |                                         80
              40 |@@                                       800
              45 |                                         40
              50 |@@                                       800
              55 |                                         20
              60 |                                         0

The process has settled into a pattern of 0 priority, a ts_maxwait boost to 50, then a drop back to 0. Run "dispadmin -c TS -g" for a printout of the time sharing dispatcher table.

The following shows running pridist.d on a completely idle system,

# pridist.d
Sampling...
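The boost-then-decay pattern described above can be modelled as a toy simulation. The boost value of 50 and the decay step of 10 are read from this output, not from the real TS dispatch table (run "dispadmin -c TS -g" for the actual values), and priority_walk is a hypothetical name, not anything from pridist.d:

```python
# Toy model of the observed TS behaviour: a CPU bound process's priority
# decays each time its quantum expires, until ts_maxwait starvation
# avoidance boosts it back up. Boost (50) and step (10) values are taken
# from the sampled output above, not from the actual dispatch table.
def priority_walk(boost=50, step=10, quanta=12):
    pri, history = boost, []
    for _ in range(quanta):
        history.append(pri)
        if pri == 0:
            pri = boost               # ts_maxwait expired: priority boost
        else:
            pri = max(0, pri - step)  # quantum expired: priority drops
    return history

print(priority_walk())
# [50, 40, 30, 20, 10, 0, 50, 40, 30, 20, 10, 0]
```

This reproduces the sawtooth the distribution implies: more samples accumulate at the lower priorities because the process keeps passing through them on the way down.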
Hit Ctrl-C to end.
^C
 CMD: sched                    PID: 0

           value  ------------- Distribution ------------- count
             -10 |                                         0
              -5 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1190
               0 |                                         0

Only the kernel "sched" was sampled. It would have been running the idle thread.

The following is an unusual output that is worth mentioning,

# pridist.d
Sampling... Hit Ctrl-C to end.
^C
 CMD: sched                    PID: 0

           value  ------------- Distribution ------------- count
             -10 |                                         0
              -5 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 940
               0 |                                         0
               5 |                                         0
              10 |                                         0
              15 |                                         0
              20 |                                         0
              25 |                                         0
              30 |                                         0
              35 |                                         0
              40 |                                         0
              45 |                                         0
              50 |                                         0
              55 |                                         0
              60 |                                         0
              65 |                                         0
              70 |                                         0
              75 |                                         0
              80 |                                         0
              85 |                                         0
              90 |                                         0
              95 |                                         0
             100 |                                         0
             105 |                                         0
             110 |                                         0
             115 |                                         0
             120 |                                         0
             125 |                                         0
             130 |                                         0
             135 |                                         0
             140 |                                         0
             145 |                                         0
             150 |                                         0
             155 |                                         0
             160 |                                         0
             165 |                                         10
          >= 170 |                                         0

Here we have sampled the kernel running at a priority of 165 to 169. This is the interrupt priority range, and would be an interrupt servicing thread, eg, a network interrupt.
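For reference, the 5-wide buckets seen in all of these distributions mean each priority reports under the low edge of its bucket, which is why a thread in the 165 to 169 interrupt range shows up on the "165" row. A small sketch of that bucketing (bucket is a hypothetical name; the step of 5 is inferred from the output above):

```python
# Map a thread priority to the low edge of its 5-wide bucket, matching
# the rows in the distribution output above. An interrupt thread sampled
# at priority 165 to 169 all lands on the "165" row.
def bucket(pri, step=5):
    return (pri // step) * step

print(bucket(165), bucket(169))   # 165 165
```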