Xeon Phis are getting cheaper… when can we use them with Cycles? :D

Hm, it seems it can run Windows 10… so it might work, but it might not be as fast as, say, a modern graphics card.
On the other hand, it would be able to do all of the Cycles features, though having every Cycles feature available on the GPU as well as the CPU is something that is planned; it's a stated goal (I think I saw it mentioned at the Blender Conference 2016).

Some poster here said that not all programs benefit from multithreaded code. That's true; e.g. Notepad does not benefit.
However, it is becoming easier these days to write such programs, and if you have a bit of computer knowledge you can still pin different programs to different dedicated CPU cores; that would even work for Notepad.
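To illustrate, here is a minimal sketch (assuming Linux and Python 3; the core numbers are just an example) of pinning the current process to a fixed set of CPU cores, which is the same thing the `taskset` utility does from the shell:

```python
# Minimal sketch: restrict which CPU cores this process may run on.
# Assumes Linux (os.sched_*affinity is Linux-only) and Python 3.
import os

pid = 0  # 0 means "the current process"
print("allowed cores before:", sorted(os.sched_getaffinity(pid)))

# Pin the process to cores 0-3 only (example core numbers).
os.sched_setaffinity(pid, {0, 1, 2, 3})
print("allowed cores after: ", sorted(os.sched_getaffinity(pid)))
```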

Another benefit might be that a PC with such a device is cheaper to expand: adding more memory is no problem, and it's cheaper than upgrading to a newer GPU.

*And I do remember having read in the git logs that Blender already has some optimizations for machines with large numbers of CPUs.

Still, I think the cheapest solution is to rent a render farm for your final animations.

That is a good question. I work @ the Jones Farm campus within DCG in the Enterprise & Govt group, specifically ecosystem enablement for HPC and AI solutions, and I have a hard time believing that an Intel employee would risk their job giving a family member access to unreleased products. Call me a skeptic.

Not all motherboards will support the older (KNC) version of Xeon Phi. So unless you have one of these boards, it's unlikely you will be able to use it.

https://streamcomputing.eu/blog/2015-08-01/xeon-phi-knights-corner-compatible-motherboards/

No, that is exactly what I am saying here. Xeon Phi now comes as a bootable, self-hosting processor based on the x86-64 architecture.

I’ve been up hill and down with high performance GPUs and this is pretty much my conclusion.

I created a patch, https://developer.blender.org/D2396, which was used for the rendering of Agent 327: http://blender.it4i.cz/rendering/agent-327-operation-barbershop/

Dumb question, but could an FPGA be made for Cycles?

Technically, maybe. But given the complexity of the kernel, it doesn't seem like it would be the most effective solution.

FPGAs are not made for any particular purpose, the whole point of an FPGA is to be configurable for a particular purpose. You’re probably thinking of an ASIC, i.e. a circuit made for one particular purpose.

To answer the underlying question: it is possible to configure FPGAs to do raytracing and it is possible to create ASICs to do raytracing. Both have been done; neither has seen success.

FPGAs are very expensive relative to their performance, and ASICs have a high upfront cost to develop. GPUs, on the other hand, are commodity products. Any FPGA or ASIC built to perform raytracing would also need a decent amount of high-bandwidth memory, which GPUs already have. Economically, it just doesn't make that much sense: GPUs win on price/performance due to economies of scale.

The only company offering raytracing ASICs was bought up by ImgTec, who integrated that technology into some of their mobile GPUs. Unfortunately, it ended up effectively unused - raytracing on mobile isn’t very interesting and no desktop board made it to market. ImgTec itself is now up for sale after losing their biggest customer: Apple.

Maybe we’ll see some hardware support for ray traversal in future GPUs, but a specialized chip does not seem economically feasible.

:slight_smile: Thanks BB

So I'll continue the dumb flow of thought:
What about the new mining (crippled) boards and GPUs? Could they be used (efficiently) to farm rays?

Looks really good (to my brute & naive mind) :smiley:

@burnin,

Potentially. I haven't seen any benchmarks yet, but they could be useful, especially second hand, after the market has been flooded when the next gen comes out.

:smiley:
Or when mining loses momentum; then no one else will want to use or buy them second hand.

FPGAs are programmable ASICs… both are designed with an HDL, a hardware description language (I've written Verilog and VHDL). You prototype your design, simulate it, and then either tape out an ASIC or just ship a product with an FPGA inside (some designs are simple enough that, if you get a cheap FPGA, you're good to go). Blackmagic Design's entire product line is Xilinx FPGAs, for example.

An NVIDIA GPU design can be put in an FPGA, or the FPGA can be reprogrammed to be the controller for a USB printer (I had a friend who worked for a printer company prototyping printer controllers on FPGAs to test them before taping out an expensive ASIC). Any design or digital chip can be put in an FPGA; it will just run hotter and slower and cost more.

Long story short, you could design a 'Cycles' chip that ran CUDA or OpenCL code, either as an ASIC or on an FPGA, or better yet make a new API and a hardware design with ray tracing in mind, specifically built to handle the Cycles use case better than a GPU. The unanswered question: would such a design outperform NVIDIA's or AMD's ASICs… maybe…

Here you can find the benchmarks: http://blender.it4i.cz/research/rendering-on-intel-xeon-phi/

I made new tests for Knights Landing: http://blender.it4i.cz/research/rendering-on-intel-xeon-phi

LOL: http://www.guru3d.com/news-story/intel-halts-xeon-phi-accelerator-knights-hill-development.html

Intel Halts Xeon Phi accelerator Knights Hill Development

Intel adjusted its roadmap to focus on high-performance exascale computing. Part of this plan is scrapping the previously announced Knights Hill based Xeon Phi accelerators. The product line has been removed from the latest roadmap.
Intel will replace Xeon Phi with a new platform and architecture, tweakers.net reports today; Intel makes note of this fact (albeit a little hidden) in this article. The exact reasoning behind the cancellation of Xeon Phi is vague, as are the specs of the new architecture.
Intel first announced Knights Hill back in 2014; it would be based on 10 nm and would support the new generation of Omni-Path interconnects. Xeon Phi, if you remember, was a relative of the Larrabee GPU project, so effectively this would be the end of Intel's Larrabee GPU project.

You are right, but Skylake has a lot of features from KNL. I am preparing new tests using AVX-512 vectorization.
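For anyone who wants to check whether their own machine even reports AVX-512 before benchmarking, here is a minimal sketch (assuming Linux; it just parses the flags line of /proc/cpuinfo):

```python
# Minimal sketch: list the AVX-512 feature flags the CPU advertises.
# Assumes Linux, where /proc/cpuinfo exposes a "flags" line per core.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break  # the flags line is the same for every core

avx512 = sorted(flag for flag in flags if flag.startswith("avx512"))
print("AVX-512 flags:", ", ".join(avx512) if avx512 else "none reported")
```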