Xeon Phis are getting cheaper... when can we use them with Cycles? :D

Could clever coding help here? Like, if I have six GPUs, could they handle threads the way CPUs do?

Are there any other advantages in Blender with this CPU monster?

What is actually multithreaded?

Sculpting is multithreaded; softbodies/cloth and fluid particles are multithreaded; the fluid sim and the compositor are multithreaded. Anything else?
Please correct me if I am wrong.

Would these operations benefit from the Xeon Phi?

Edit: what is the situation with other renderers (Octane, LuxRender, etc.)? Same problem as with Cycles?

Are there any benchmarks of the Xeon Phi? I can’t find any.

Intel says “let’s put many x86 cores on one accelerator board”. The Xeon Phi is not a CPU, it’s a coprocessor. You cannot run an ordinary operating system on it: it runs a special version of Linux, and programs have to be compiled (and likely modified) specifically for it. To run fast, these programs also need to be highly parallelizable, otherwise they run much slower than on a normal CPU. In effect, support for the Xeon Phi is practically non-existent in CG applications (so far).
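As a rough illustration of what “compiled (and likely modified) specifically for it” means in practice, here is a minimal sketch, assuming an OpenMP-4.0-capable toolchain of the kind Intel shipped for the Phi (the exact build flags depend on the compiler); the point is that the work has to be expressed as offloaded, massively parallel loops:

```cpp
// Minimal sketch: offload a data-parallel loop to a coprocessor with
// OpenMP 4.0 "target" directives. Without lots of independent work like
// this, the Phi's many slow cores lose to an ordinary desktop CPU.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    float *pa = a.data(), *pb = b.data(), *pc = c.data();

    // Copy inputs to the device, run the loop across its cores in parallel,
    // and copy the result back over PCIe.
    #pragma omp target map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

    std::printf("c[0] = %.1f\n", pc[0]);
    return 0;
}
```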

  1. Practically unlimited memory (i.e., up to 64 GB). GPUs are limited by their VRAM, and this is a serious problem. I downloaded the latest Lighting Challenge from CGSociety and tried rendering it in Cycles; it doesn’t work, because my GPU runs out of memory faster than I can look. For rendering big scenes at home (say 50 million polys plus 16 GB of textures, roughly estimated below), this is great, because the CPU can use all the RAM in the computer. Correct me if I am wrong.
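For scale, a back-of-the-envelope estimate of such a scene (every number below is an assumption for illustration, not a measurement):

```cpp
// Rough footprint estimate for a "big" scene; all figures are assumptions.
#include <cstdio>

int main() {
    const double triangles     = 50e6;   // "50 million polys"
    const double bytes_per_tri = 100.0;  // vertices, normals, UVs, BVH nodes...
    const double textures_gb   = 16.0;   // texture data

    const double geometry_gb = triangles * bytes_per_tri / 1e9;
    std::printf("geometry ~%.0f GB + textures %.0f GB = ~%.0f GB total\n",
                geometry_gb, textures_gb, geometry_gb + textures_gb);
    // ~21 GB: workable for a machine with 32-64 GB of system RAM,
    // hopeless for a typical 4-6 GB graphics card.
    return 0;
}
```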

GPUs actually can access system RAM, but that has to go through the slow PCIe bus, which can be disastrous for performance. The Xeon Phi has the same problem.
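To put a ballpark number on that detour (the bandwidth figures are rough assumptions, not benchmarks):

```cpp
// Compare streaming a working set over PCIe vs. reading it from on-board VRAM.
#include <cstdio>

int main() {
    const double working_set_gb = 20.0;   // scene data that doesn't fit in VRAM
    const double pcie3_x16_gbs  = 12.0;   // practical PCIe 3.0 x16, ~GB/s
    const double vram_gbs       = 250.0;  // typical GDDR5 board, ~GB/s

    std::printf("over PCIe: ~%.1f s per full pass over the data\n",
                working_set_gb / pcie3_x16_gbs);
    std::printf("from VRAM: ~%.2f s per full pass over the data\n",
                working_set_gb / vram_gbs);
    // Roughly a 20x gap, and a path tracer touches the scene many times
    // per frame, which is why out-of-core access hurts so much.
    return 0;
}
```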

That said, this is still in early development. It’s not production-ready yet (my opinion) and it needs some time to reach the market. But it is a good development, really.

The Xeon Phi is production-ready; it’s already in use in several supercomputers. However, it’s not aimed at consumers. Those $300 Xeon Phis are an oddity. Ordinarily, they would (individually) cost thousands of dollars, like the NVIDIA Quadro/Tesla and AMD FirePro cards.

While I like Xeon chips, I recently read that Intel will also bring out i7 chips with 12 cores and, if I’m not mistaken, the ability to place two CPUs on one motherboard.

Well, you should be able to place two Xeons with 14 cores each on a single motherboard, for a grand total of 28 cores, but that would set you back about $6000 for “just” the CPUs :eek:

For me, the greatest thing that could happen to CG rendering would be dedicated ASICs or FPGAs for path tracing algorithms.

An HP ProLiant DL580 G5 is 24 cores @ ~2.4 GHz with 128 GB of RAM for less than $500 off eBay. It’s not the fastest beast out there, but it chews up the heavy loads.

Whatever secret weapons Intel has, they can bring them out now; desktop CPUs have been stalled for quite a few years.

You don’t want a secret weapon, because governments will keep it for exactly that: secret, and a weapon. A quantum computer can brute-force its way through encryption very quickly, which would render all cyber security broken. Nobody wants that. Governments may already have one, but it would be declared a state secret.

In America, even 3D volumetric displays were banned from the public as a security risk. They existed, but you had to be a government contractor, hospital, or university to have one legally. Regular consumers want incremental improvements; a giant leap the government will seize for itself. You don’t want anything too powerful, because it will be declared a security risk and you’ll never get it, and governments will enact rules that might prevent you from having it even when incremental improvements might have gotten you there; it’ll probably still be considered a state secret or government-only property.

Shit, I’ll get the tinfoil hats and someone else get the meds.

Strap a fan to it, lol.

Have you heard of the D-Wave computer? It’s not a full quantum computer, but it makes use of some of its principles (meaning that its performance in some calculations far outstrips that of traditional computing).

It’s also anything but a secret (all of the major tech sites have covered it, and it is being sold on the market).

Yes, I have heard of D-Wave and was excited about it initially. But then:

“In 2007 Umesh Vazirani, a professor at University of California (UC) Berkeley and one of the founders of quantum complexity theory, made the following criticism:[41] Their claimed speedup over classical algorithms appears to be based on a misunderstanding of a paper my colleagues van Dam, Mosca and I wrote on “The power of adiabatic quantum computing.” That speed up unfortunately does not hold in the setting at hand, and therefore D-Wave’s “quantum computer” even if it turns out to be a true quantum computer, and even if it can be scaled to thousands of qubits, would likely not be more powerful than a cell phone.”

and

“A study published in Science in June 2014, described as “likely the most thorough and precise study that has been done on the performance of the D-Wave machine”[54] and “the fairest comparison yet”, found that the D-Wave chip “produced no quantum speedup”.[55] The researchers, led by Matthias Troyer at the Swiss Federal Institute of Technology, found “no quantum evidence” across the entire range of their tests, and only inconclusive results when looking at subsets of the tests. Several possible explanations were suggested. 1) Perhaps quantum annealing (the type of problem for which the D-Wave machine is designed) is not amenable to a speedup. 2) Perhaps the D-Wave 2 cannot realize a quantum speedup. 3) Perhaps the speedup exists but is masked by errors or other problems.[56]”

https://en.wikipedia.org/wiki/D-Wave_Systems

It seems to have been a scam. It was first announced in 2007; the D-Wave Two came out last year, and people still seem to be waiting for it to deliver. From 2007 till now I don’t know of a single breakthrough coming from the use of a D-Wave. Surely some physics sim would have been in the news. Fact is, stuff like SETI and BOINC or whatever it is outperform D-Wave by far. A real quantum computer would destroy something like BOINC.

Volumetric displays have been around for over a decade, but you can’t buy one without a government permit; they’re considered a security risk. And now VR devices such as the Rift have probably surpassed any risk a 3D volumetric display could be, yet you still can’t have them. That’s what I mean about incremental being better for consumers than giant leaps. I think if D-Wave had been real, the government would have seized it. Once the government grabs something as a “secret”, they don’t want to give it up.

I think something like the new Xeons is the way to go: incremental, but at least they’ll work and you can get your hands on them without going through the FBI or CIA.

Dear Ton Roosendaal: if not for rendering, at least make it usable for simulations; they say it is meant for scientific computing. And forget about the low-budget Blenderheads, aim high, and show the world Blender is capable of using high-end hardware.

Dedicated raytracing cards are already in development: http://blog.imgtec.com/powervr-developers/real-time-ray-tracing-on-powervr-gr6500-ces-2016

What you really need to do is a direct benchmark comparison.

$300 isn’t too steep; depending on the performance, you would easily pay $600 to $1000 for a high-end graphics card. So if the performance were there, factor in the cost of a few strap-on fans, or even an external PCI chassis, and you are on to a winner.

This is based on technology from Caustic Graphics that was already on the market in the form of a dedicated raytracing card, before the company was bought out by imgtec. It’s not clear whether there is a market big enough to support such devices.

Those were prices for surplus units; if you click the link in the OP, it’s up to over $500 now. The retail price of a new Xeon Phi is in the thousands.

… every Apple device has a part of it (Caustic) inside.

You mean like strapping a fan to the design? http://www.pcworld.com/article/3005414/computers/intel-plugs-72-core-supercomputing-chip-into-workstation.html#comments Yeah, like that costs $1000 plus. No. The fact that the one you can buy comes passively cooled means it’s not putting out a terrible amount of heat. They’re special Xeon cores running at something like 1 GHz; do you have any idea how little voltage is required to run a core at that speed? If it’s an issue, you can strap a simple fan to it. Your thermals argument is bonkers.
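For what it’s worth, the per-core arithmetic behind that claim: dynamic power scales roughly with capacitance × voltage² × frequency, so a slow, low-voltage core burns only a fraction of what a desktop core does (the voltages and clocks below are illustrative assumptions, and the whole chip is of course that fraction multiplied across 60-odd cores):

```cpp
// Per-core dynamic power scaling, P ~ C * V^2 * f; numbers are illustrative.
#include <cstdio>

int main() {
    const double v_ratio = 0.9 / 1.2;   // assume ~0.9 V instead of ~1.2 V
    const double f_ratio = 1.0 / 3.5;   // ~1 GHz instead of ~3.5 GHz

    const double power_ratio = v_ratio * v_ratio * f_ratio;
    std::printf("per-core dynamic power: ~%.0f%% of a desktop core\n",
                power_ratio * 100.0);   // roughly 16%
    return 0;
}
```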

If these start to become more commonplace in the 3D workflow, then you can bet there will be a way for them to work with Blender.