Hi,
I am trying to understand the implications of running a host in a CPU oversubscription state, as seen from vCenter Server. In particular, I would like to find a relation between the cut frequency and the ready time.
I know this is very difficult because it depends on the number of VMs, their configuration, the host configuration and so on, but I am trying it just as an initial line of reasoning.
So, given this configuration:
4 pCPUs at 2 GHz each
3 VMs with 1 vCPU each
1 VM with 2 vCPUs
All of them with the same shares value (1000, for example).
When all VMs demand 100% of their frequency capacity, it happens that:
- the 3 single-vCPU VMs get all of their frequency capacity (1 x 2 GHz), with an index of 2000 MHz / 1000 shares -> 2 MHz per share;
- the 2-vCPU VM sees its frequency cut from 4 GHz to 2 GHz, so that it too ends up at an index of 2 MHz per share.
Please correct me if I am wrong.
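To check this reasoning I put it into a small Python sketch. It is only a toy fair-share model (entitlement = capacity x shares / total shares, capped at demand), not the real ESXi scheduler, and the VM names and variable names are just my own placeholders:

```
# Toy model of share-based CPU allocation under contention.
# Not the real ESXi scheduler (no co-scheduling, reservations or limits),
# but in this example the arithmetic balances exactly.

PCPU_COUNT = 4
PCPU_MHZ = 2000                       # 2 GHz per pCPU
capacity = PCPU_COUNT * PCPU_MHZ      # 8000 MHz total

# (name, vCPUs, shares)
vms = [
    ("vm1", 1, 1000),
    ("vm2", 1, 1000),
    ("vm3", 1, 1000),
    ("vm4", 2, 1000),
]

total_shares = sum(shares for _, _, shares in vms)
mhz_per_share = capacity / total_shares   # 8000 / 4000 = 2 MHz per share

for name, vcpus, shares in vms:
    demand = vcpus * PCPU_MHZ                 # 100% demand on every vCPU
    entitlement = shares * mhz_per_share      # fair share of the host
    granted = min(demand, entitlement)
    print(f"{name}: demand={demand} MHz, "
          f"entitlement={entitlement:.0f} MHz, granted={granted:.0f} MHz")
```

It prints 2000 MHz granted for every VM, so vm4 (the 2-vCPU one) is the only VM whose demand of 4000 MHz gets cut.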
Now, ready time will surely increase on all VMs because the utilization of the system is near 100%, but I expect the 2-vCPU VM to have a significantly higher ready time than the others. How much higher?
F = virtualized frequency (what the VM would consume if it were not contended)
F' = cut frequency (what it actually gets)
So:
F = 4 GHz
F' = 2 GHz
F = F' (1 + %ready)
and
%ready = (F - F') / F' = 100%
Does this relation make any sense?!
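Again only as a toy check, with the numbers from my example above (F_demand and F_cut are just my own names for F and F', and this is my proposed relation, not an official vSphere formula):

```
# Back-of-the-envelope check of the proposed relation F = F' * (1 + %ready)
# for the 2-vCPU VM.

F_demand = 4000   # MHz the VM would consume if uncontended (2 vCPUs x 2 GHz)
F_cut = 2000      # MHz actually granted under contention

ready_fraction = (F_demand - F_cut) / F_cut
print(f"predicted ready time: {ready_fraction:.0%}")   # -> 100%
```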
A real-life example...
If I normally do 4 operations per second, then when I receive 8 operations I finish them in 2 seconds.
If for some reason (maybe I am ill XD!!) I can only do 2 operations per second, then with the same 8 operations, after 2 seconds I have done only 4 of them and I have to say "wait another 2 seconds for the rest". That is 100% more than before.
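The same arithmetic as a tiny sketch, just to show what I mean by "100% more than before":

```
# Halving the service rate doubles the completion time, so the extra
# waiting equals 100% of the original time.

ops = 8

def completion_time(rate_ops_per_sec):
    return ops / rate_ops_per_sec

healthy = completion_time(4)   # 2 s
ill = completion_time(2)       # 4 s
extra_wait = ill - healthy     # 2 s
print(f"extra wait = {extra_wait} s, "
      f"i.e. {extra_wait / healthy:.0%} more than before")
```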
Thank you in advance for any reply!