
CPU for virtualization


Thread replies: 54
Thread images: 1

File: xeon.jpg (46KB, 450x400px)
What determines how many virtualizations a CPU can support? I'm trying to run multiple clients on the same hardware, and I'm not sure how to select the right CPU.
>>
Infinite
It's only limited by processing power
>>
>>55930868
How can you share cores? I thought each VM needed its own core(s) and RAM?
>>
>>55930868
Thanks. Do you need to allocate processing power to each server at setup (like in the case of hardware), or does the entire system essentially share processing power?
>>
>>55930927
Replace "server" with "client" and "hardware" with "storage"
>>
>>55930927
You allocate

From what I remember, running more than 3 VMs per core starts causing CPU scheduling issues.
>>
>>55930884
>>55930927
No. Imagine a VM just like it's a process in your OS. 10 VMs can run on a single core. RAM needs to be allocated statically, though, at least with full virtualization. I think OpenVZ can even do that dynamically.

>>55930960
Bullshit
>>
http://www.linux-kvm.org/page/Processor_support

As for how many VMs, each VM should have (rough check sketched below):
- Enough memory
- Enough IOPS (I/O operations per second)
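A rough version of that check in Python (all the host and VM numbers below are made-up placeholders; plug in your own hardware specs):

# Rough capacity check for a planned set of VMs against one host.
# All numbers here are hypothetical examples, not real measurements.
host = {"ram_gb": 64, "iops": 40000}          # assumed host capacity
vms = [
    {"name": "web01", "ram_gb": 4,  "iops": 2000},
    {"name": "db01",  "ram_gb": 16, "iops": 15000},
    {"name": "mon01", "ram_gb": 2,  "iops": 500},
]

total_ram = sum(vm["ram_gb"] for vm in vms)
total_iops = sum(vm["iops"] for vm in vms)

print(f"RAM:  {total_ram}/{host['ram_gb']} GB "
      f"({'OK' if total_ram <= host['ram_gb'] else 'OVERCOMMITTED'})")
print(f"IOPS: {total_iops}/{host['iops']} "
      f"({'OK' if total_iops <= host['iops'] else 'OVERCOMMITTED'})")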
>>
>>55930960

Anything more than 1 VM will cause scheduling issues unless you are using something like SPARC, which can run 4 simultaneous threads per CPU core. x86-64 will run 1 thread per core (2 with Hyper-Threading), and each VM will be scheduling its own threads while the hypervisor/OS schedules the VM threads.

in short >>55931054
>>
>>55930977
>10 vms can run on a single core.
Lol you've obviously never actually done this. Or if you did it was 10 VMs idling and doing literally nothing.

Good luck getting 10 VMs to schedule appropriately when they're all doing work at the same time. Even if it's low-intensity work, the fact that you have 10 VMs trying to schedule at once will create a major bottleneck.
>>
>>55932025
>Or if you did it was 10 VMs idling and doing literally nothing.
That is what 90% of VMs in the world are doing 90% of the time.
Yes, in some cases this is not true, but it depends on OP's use case, which they haven't shared in much detail.
>>
>>55932195>>55932025

Are there special CPUs designed to handle multiple VMs without scheduling issues?
>>
>>55932393
No don't be retarded

Are your VMs even going to need much CPU time or will they be sitting idle 90% of the time?

If they're just monitoring something or doing basic shit you probably don't need to worry.

You'll need to explain what you actually want to use them for if you want better answers
>>
>>55932445
How is that being retarded? I'd guess Intel has designed a CPU for virtualized servers that solves the most obvious problem. And no, they won't be sitting idle; all of the VMs would be doing very intensive operations simultaneously.
>>
>>55932689
It's retarded because if Intel could just magically make a better scheduler, they'd have done so. There would be no reason to have some offshoot product SPECIFICALLY for that; it's something that would be brought to every CPU in the product stack.
>>
>>55932705
So, do you believe Google has a billion CPUs in its data centers to handle searches from every user? Clearly, they have some kind of algorithm that handles this issue.
>>
>>55932025
If you've fully loaded 10 VMs per core then you're a failure at infrastructure planning. Fully loaded VMs (ones that will hold at 100% processor or close to it) should have dedicated cores. Not in the literal sense but in the sense that you have planned capacity for those cores to always be loaded.
>>
>>55932746
They have custom hardware too, which they've released basically no public information on.

Custom software written specifically for that hardware and that particular function is hardly comparable to consumer products running generic VMs.
>>
>>55930661
There is no limit beyond software, assuming you're ignoring load. A general rule of thumb is 4 vCPUs per core for heavier loads; 10 vCPUs per core is acceptable for light loads. As for hardware: any Xeon after the Core 2 architecture supports virtualization, and many desktop CPUs do as well.
AMD actually goes back further with virtualization; it has been supported since the Athlon 64. AMD has also supported PCIe passthrough far longer than Intel.
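Those ratios translate into VM counts roughly like this (the 4:1 and 10:1 ratios are from the post above; the host size and per-VM vCPU count are assumptions for the example):

# Translate the vCPU-per-core rules of thumb into a rough VM budget.
physical_cores = 16          # e.g. a dual 8-core Xeon (assumed)
vcpus_per_vm = 2             # hypothetical VM size

for label, ratio in [("heavy load", 4), ("light load", 10)]:
    total_vcpus = physical_cores * ratio
    max_vms = total_vcpus // vcpus_per_vm
    print(f"{label}: {total_vcpus} vCPUs -> about {max_vms} VMs of {vcpus_per_vm} vCPUs each")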
>>
>>55932756
You can have 20% load on the core, but if you've got 10 VMs all trying to schedule work at the same time, you're going to hit a bottleneck. The actual CPU use is minimal; the scheduler just can't keep up.
>>
>>55932784
>custom hardware
Custom built servers, not CPU architectures. They use the same Intel/AMD processors that are available to the public.
>>
>>55932784
I'm not talking about consumer products, though; I'm talking about enterprise products for a server room or data center. I read somewhere that an Intel Xeon E7 can support 60 VMs at high performance.
>>
>>55932803
What hypervisor are you running that this is an issue? I have never encountered this problem.
>>
>>55930868
>Infinite
>limited
>>
>>55932746
>do you believe Google has a billion CPUs in its data centers to handle searches from every user?
Yes, they do.
>>
>>55932808
>Custom built servers, not CPU architectures.
http://www.anandtech.com/show/10340/googles-tensor-processing-unit-what-we-know
>>
>>55932854
>>55932808
>They use the same Intel/AMD processors that are available to the public.

They don't, though. Intel customizes chips for each major cloud provider. No one uses AMD.
>>
>>55932890
>Intel customizes chips for each major cloud provider
Proof.
>>
>>55932856
Just for fun here's some math
Blade servers allow 2 CPUs per blade for a 16-blade configuration.
A blade chassis to hold these is 10U. You can fit 4 per rack.

4 × 2 × 16 × 1,000 (a reasonable number of racks for Google) = 128,000
That's CPUs, not cores. If you count cores (assuming ~32 per CPU), it comes to ~4,096,000.
This is actually an older configuration, you now have 3U chassis that support 16 Xeon D blades.
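Spelled out in Python (the 32-cores-per-CPU figure is inferred from the ~4,096,000 number; everything else is from the estimate above):

# The back-of-the-envelope math from the post, spelled out.
chassis_per_rack = 4
cpus_per_blade = 2
blades_per_chassis = 16
racks = 1000                  # the post's guess for Google
cores_per_cpu = 32            # implied by the ~4,096,000 figure

cpus = chassis_per_rack * cpus_per_blade * blades_per_chassis * racks
print(cpus)                   # 128000
print(cpus * cores_per_cpu)   # 4096000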
>>
>>55932896
http://www.pcworld.com/article/2365240/intel-expands-custom-chip-work-for-big-cloud-providers.html
>Until a few years ago, all its customers got basically the same general purpose processors
>The rise of online giants like Google, Facebook, Amazon and eBay has changed that.

http://www.infoworld.com/article/3050369/cloud-computing/intels-new-22-core-server-chip-speeds-up-cloud-services.html
>Intel will customize the new Xeons for larger customers, Lane said.
>>
>>55932908
In 2012 it was estimated Google had a little over 2.3M servers. Not sure what that would boil down to in pure core count, and I'm sure they've only gotten more dense since 2012.
>>
>>55932938
If they were all Intel, multiply by 20 (dual 10-core processors).
>>
>>55932808
http://www.theregister.co.uk/2016/04/06/google_power9/
>>
>>55932919
Some more info on Xeon-FPGA hybrid designs

http://www.nextplatform.com/2016/03/14/intel-marrying-fpga-beefy-broadwell-open-compute-future/
>>
>>55932988
Still available to the public. You can buy IBM Power9 servers.
>>
>>55933046
See >>55932868

You've already been given proof they use custom hardware
>>
>>55932991
Nice. This means non-giants can have custom chip features as well.
>>
>>55933046
My point was that more than x86 Intel exists. We can't assume servers are only Intel.

>>55930661
You might want to look into Linux containers. They are lighter than VMs because they don't have their own separate kernel; it's a bit like virtualizing only the working environment/applications, if you want to think of it that way.
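A quick way to see the shared-kernel point (this sketch assumes Docker is installed as the container runtime and can pull the alpine image; any Linux container tool shows the same thing):

# Start a container and compare its kernel version with the host's.
# Unlike a VM, the container reports the host's kernel.
import subprocess

host_kernel = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True
).stdout.strip()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)   # same kernel, no separate guest kernel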
>>
>>55933085
Yup, though obviously we'll see a shift to on-die FPGAs before we see broader availability. A few years out, though.
>>
>>55933113
Intel bought up Altera a couple of years ago; it'll be interesting to see what comes of it. With any luck we'll have completely redefinable processors in 20 years.
>>
>>55933141
We need a smart FPGA, so it can reprogram itself based on workload.

I can't imagine leaving it up to the average consumer to try and make heads or tails of it.
>>
>>55930868
And memory
>>
>>55930977
Linux guests with the QEMU balloon driver can dynamically allocate RAM
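For example, with libvirt-python you can resize a running guest through the balloon. This is only a sketch: "guest01" is a placeholder domain name, and the guest needs the virtio balloon device for the change to take effect.

# Shrink a running guest to 2 GiB via the balloon driver (values in KiB).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")        # placeholder domain name

# The balloon driver inside the guest hands the reclaimed pages back to the host.
dom.setMemoryFlags(2 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

print(dom.info())   # [state, maxMem, currentMem, vCPUs, cpuTime]
conn.close()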
>>
>>55933196
>We need a smart FPGA, so it can reprogram itself based on workload.
This will inevitably happen. Microsoft and Red Hat will develop kernel drivers to dynamically alter the configuration based on workload.
>>
>>55930661
Depends on how many you plan to have with active tasks at one time rather than how many machines you have in total. Any modern hypervisor will be very efficient at allocating processor time on its own, without any need for configuration. If you need VMs that will have requests consistently, you want more cores/threads over clock speed. If you have VMs that will have heavy workloads but mostly idle time, you want faster clock speed over cores. This is an oversimplified explanation; there are many things you may have to do to fully optimize your host configuration. I would recommend going through the IBM Knowledge Center on virtualization and the documentation for KVM.
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm
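A toy way to express that rule of thumb (the busy-fraction numbers and the 0.5 cutoff are invented purely for illustration, not tuning guidance):

# Decide cores-vs-clock based on how consistently busy the VM fleet is.
def suggest(vm_profiles):
    """vm_profiles: list of dicts with 'busy_fraction' between 0 and 1."""
    avg_busy = sum(v["busy_fraction"] for v in vm_profiles) / len(vm_profiles)
    if avg_busy > 0.5:
        return "favor more cores/threads over clock speed"
    return "favor higher clock speed over core count"

steady_fleet = [{"busy_fraction": 0.8}, {"busy_fraction": 0.7}, {"busy_fraction": 0.9}]
bursty_fleet = [{"busy_fraction": 0.1}, {"busy_fraction": 0.2}, {"busy_fraction": 0.15}]

print(suggest(steady_fleet))   # consistently busy -> more cores
print(suggest(bursty_fleet))   # mostly idle with bursts -> faster clocks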
>>
>>55933248
Yeah, it's going to happen; it's just a matter of hardware maturity and software devs getting their hands on it.
>>
>>55933250
I should add to this: if you are using any Intel Xeon in an MCC or HCC package, it will be much more difficult to configure an optimal solution for your host because of the physical design of the processor itself. Not all cores are equal, in particular on Haswell-EP and later Xeons. Depending on a VM's memory usage, I/O requests, or use of AVX instructions, a VM with a specific workload may vary in performance dramatically across separate cores on the same processor, and even between different states of overall processor utilization, even with dedicated cores per VM. If you really want to find out exactly what you need and which processor will be most effective, you have a lot of research to do.
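As a starting point, you can at least see which logical CPUs share a core or socket before pinning anything (this reads standard sysfs paths on Linux; topology alone won't capture the AVX/turbo effects mentioned above):

# Print socket and physical core for each logical CPU on a Linux host.
import glob

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
    cpu = cpu_dir.rsplit("cpu", 1)[-1]
    try:
        with open(f"{cpu_dir}/topology/physical_package_id") as f:
            socket = f.read().strip()
        with open(f"{cpu_dir}/topology/core_id") as f:
            core = f.read().strip()
    except FileNotFoundError:
        continue   # offline CPUs may lack topology entries
    print(f"cpu{cpu}: socket {socket}, core {core}")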
>>
>>55932025
>Or if you did it was 10 VMs idling and doing literally nothing.
That's one of the main selling points of virtualization.

>>55932393
If you have enough cores, you can set Latency Sensitivity to High on ESXi for workloads that require it, ensuring they get dedicated cores and memory.
>>
>>55932908
No one major is using Xeon-D, though.
>>
>>55932908
>I've never done DCIM
>I don't know what it would take to support power densities like that
>>
>>55932919
Wow, can't believe they're doing that.
>>
>>55930868
I thought almost all CPUs since ~2006 had special virtualization passthrough or something. Direct CPU access within a virtual machine?
>>
>>55934843
I think you mean VT-x?
>>
>>55934972
>VT-x
Yeah, thanks.
https://en.wikipedia.org/wiki/X86_virtualization
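A quick way to check for those extensions on a Linux host: Intel VT-x shows up as the "vmx" flag in /proc/cpuinfo and AMD-V as "svm".

# Check whether the host CPU advertises hardware virtualization extensions.
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

flags = set()
for line in cpuinfo.splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flags found (or disabled in firmware)")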