The future of CPUs

@sztosz yup, correct - lots of movies were rendered using RT long before RT cores were added to desktop GPUs. The only difference here is that on GPUs we now have real-time RT, which wasn’t possible earlier - a typical 1½h Hollywood movie took many hours or even days to render (depending on the render farm). Of course real-time RT is still new and needs a lot more improvement.

My bad for saying new idea - I meant new feature (again, for desktop GPUs as a real-time solution).


I think this might be a feasible view of the future as well.
To put it differently: An actual cloud. :stuck_out_tongue_winking_eye:

I have to disagree with this. :slightly_smiling_face: The difference between baked reflections and shadows and their raytraced counterparts is massive, and I’m looking forward to the time when cards and games use raytracing for everything. While I’m not about to drop hundreds of Pounds/Euros on a new Nvidia card just yet (my old R9 390 is fine for me for the moment), I am pleased that Nvidia and AMD have made, and are making, progress with raytracing and have pushed it into the mainstream.


To each their own, I guess. In most games I play, I tend to tone down, or even turn off, most of the shadows, blurring, camera shake, particle effects, etc. It’s eye candy, but it doesn’t help with the gameplay.

But besides, when you take into account the compute power needed for real-time raytracing at Full HD and 60 FPS, rasterization is still good enough. There is no hardware able to do that, and there won’t be for a long time. What you get now with Nvidia RTX is the ability to render some small parts of the effects with raytracing, but the whole image is still produced the “traditional” way.
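Roughly, the split looks like this - a minimal sketch only, not any real engine’s API, and every function below is a hypothetical placeholder:

```rust
// Minimal sketch of a hybrid frame: the bulk of the image is rasterized,
// only a small set of effects is raytraced, then composited back on top.
// All function bodies are stand-ins, not real rendering code.

type Image = Vec<[f32; 3]>; // flat RGB buffer

// Stub: pretend "traditional" rasterization + shading produced this buffer.
fn rasterize_and_shade(w: usize, h: usize) -> Image {
    vec![[0.2, 0.2, 0.2]; w * h]
}

// Stub: pretend a handful of reflection rays were traced (real hybrid
// pipelines often do this at reduced resolution or sample count).
fn trace_reflection_pass(w: usize, h: usize) -> Image {
    vec![[0.05, 0.05, 0.08]; w * h]
}

fn render_frame(w: usize, h: usize) -> Image {
    let mut color = rasterize_and_shade(w, h);
    let reflections = trace_reflection_pass(w, h);
    // Composite the raytraced pass over the rasterized image.
    for (c, r) in color.iter_mut().zip(reflections.iter()) {
        for i in 0..3 {
            c[i] += r[i];
        }
    }
    color
}

fn main() {
    let frame = render_frame(640, 360);
    println!("rendered {} pixels", frame.len());
}
```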

Those already exist: modern GPUs, add-on vector processing units, or even SSE and similar instruction sets on modern x86 CPUs.

Most coders don’t think it is worth the effort; a language change would help a lot, like actors on the BEAM (though the BEAM would need a JIT boost to become generically useful), Rust’s actix, or the like.
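To make that concrete, here’s a minimal actor-ish sketch using only Rust’s standard library (rather than actix itself); the squared-number workload is just a placeholder for real CPU-bound work:

```rust
use std::sync::mpsc;
use std::thread;

// Each worker owns its own state and reacts to messages from a channel,
// so work spreads across cores without shared mutable state.
enum Msg {
    Work(u64),
    Stop,
}

fn main() {
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let (tx, rx) = mpsc::channel::<Msg>();
            let handle = thread::spawn(move || {
                let mut total = 0u64;
                for msg in rx {
                    match msg {
                        Msg::Work(n) => total += n * n, // placeholder CPU work
                        Msg::Stop => break,
                    }
                }
                (id, total)
            });
            (tx, handle)
        })
        .collect();

    // Round-robin messages to the workers.
    for n in 0..1_000u64 {
        let (tx, _) = &workers[(n % 4) as usize];
        tx.send(Msg::Work(n)).unwrap();
    }

    for (tx, handle) in workers {
        tx.send(Msg::Stop).unwrap();
        let (id, total) = handle.join().unwrap();
        println!("worker {id}: {total}");
    }
}
```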

You’d be surprised how many things use GPUs via OpenCL or Vulkan to perform operations. ^.^

Oh it’s even older than that, much older than that. ^.^

Expensive because they need to hold the entire renderable, coherent world view in memory at once while also casting millions to billions of rays per frame (which is a trivially parallelizable problem, but you still ‘need’ a ton of cores for that, plus the memory to hold the world state).
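To illustrate the “trivially parallelizable” part: every pixel’s ray is independent, so you can simply split image rows across cores. A toy sketch - the per-pixel “shading” below is a stand-in, not a real intersection test:

```rust
use std::thread;

// Stand-in for tracing one ray and shading its hit point.
fn shade(x: usize, y: usize) -> f32 {
    ((x * 31 + y * 17) % 256) as f32 / 255.0
}

fn main() {
    let (width, height, threads) = (1920usize, 1080usize, 8usize);
    let rows_per_thread = height / threads;

    // Split the image into horizontal bands, one thread per band.
    let mut handles = Vec::new();
    for t in 0..threads {
        handles.push(thread::spawn(move || {
            let mut rows = Vec::with_capacity(rows_per_thread * width);
            for y in t * rows_per_thread..(t + 1) * rows_per_thread {
                for x in 0..width {
                    rows.push(shade(x, y)); // one "ray" per pixel
                }
            }
            rows
        }));
    }

    // Stitch the bands back together in order.
    let image: Vec<f32> = handles.into_iter().flat_map(|h| h.join().unwrap()).collect();
    println!("traced {} rays", image.len());
}
```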

POV-Ray is one of the oldest and most capable complete raytracers still around if you want to play with something powerful.

Exactly. RT is all hype; so far it has shown itself to be only marginally better graphically than contemporary techniques, while being less complete (it holds a very restricted world state) and somehow much slower as well, which is not worth the very slight increase in quality. nVidia is just doing what they do best: trying to proprietarize things that don’t need to be.

… you can raytrace on older cards just fine. Heck, I used to raytrace on my old Radeons 18 years ago (with a very restricted world state, of course). nVidia just has more cores optimized for it, along with more memory to store the world state, and it doesn’t actually gain that much in speed: it’s a few-times increase for very restricted RT paths, which still ends up significantly slower overall than traditional techniques that don’t look quite as good but don’t matter for gameplay. To make full RT rendering viable it would have to be a multiple-orders-of-magnitude increase. Something like a Hollywood movie rendered on top-of-the-line nVidia cards would render slower than on their dedicated vector units - a lot slower.

Not really; as long as it is good enough for gameplay, then it is good enough. Full RT will be able to do things that conventional rendering just cannot do, at all, and that is what I’m waiting for, but we’ll need that multiple-orders-of-magnitude increase to do it.

Likewise, but that’s more to keep me from getting sick. ^.^;

Exactly. nVidia’s RTX still can’t even raytrace every pixel in the very few effects it can actually do; instead results are combined ‘over time’ to try, poorly, to make up for it. It is still a classically rendered scene with a handful of very limited pixels being raytraced, which is a far, far cry from raytracing a large world state at HD+ quality at 60+ fps (or 144+ for VR).
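That “combined over time” trick is basically temporal accumulation: blend each frame’s noisy raytraced result into a history buffer and let it converge across frames. A back-of-the-envelope sketch - the blend factor is just an illustrative number:

```rust
// Blend each frame's noisy raytraced samples into a running history buffer
// (an exponential moving average), so the image only converges over several
// frames instead of being fully traced every frame.
fn accumulate(history: &mut [f32], new_samples: &[f32], alpha: f32) {
    for (h, s) in history.iter_mut().zip(new_samples) {
        *h = alpha * s + (1.0 - alpha) * *h;
    }
}

fn main() {
    let mut history = vec![0.0f32; 4];
    // Pretend these are noisy per-pixel RT results from successive frames.
    for frame in 0..10 {
        let noisy = vec![1.0f32; 4]; // "ground truth" is 1.0 everywhere
        accumulate(&mut history, &noisy, 0.2);
        println!("frame {frame}: {:?}", history);
    }
}
```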

I’m usually the same with things like motion blur and anything that compromises performance/visibility in a substantial way, but I view raytracing quite differently as it’s fundamental to the nature of the scene, and is more of a simulation of light than a faking of it, which will always be limited. Of course, I enjoy the gameplay of rasterised games just fine in the meantime. I think some of the immersion comes from graphics, though, hence my interest in realtime raytracing for the future.

I agree that full raytracing is where the excitement lies, and appreciate that’s a little way off yet. Though I did see some videos on YouTube a few months ago that looked promising regarding upcoming raytracing potential. I’ll have another look and see what I can find.


Sure, the same goes for encoding an x265 4K movie - it’s only an optimization (an enhancement, as I already said). That’s why I don’t see a bigger change coming for the classical PC (other than the already-mentioned use of light, of course).

Of course I know that - it’s why they are called dedicated :smiley: The GTX vs Quadro topic is really old and most PC developers know the basics.

It would be a small revolution if one card could work as well as a whole render farm. :077:

My man, the fact that something is theoretically possible does not at all mean it’s utilized in practice… Or shall I remind you how many languages’ compilers don’t utilize SSE / AVX and a heckton of other instructions? Or must I remind you that multi-core CPUs have been a thing for 10-15 years now, and yet most languages throw in the towel at I/O-switched green threads? Or give access to raw OS threads which, despite the thousands of horror stories, still somehow get a free pass as an okay practice?
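On the SSE / AVX point: the intrinsics are sitting right there in, say, Rust’s standard library, but most code never reaches for them (or is written in a way the autovectorizer can’t help with). A minimal, purely illustrative x86_64-only sketch with runtime feature detection:

```rust
// Explicit AVX use via std::arch intrinsics (x86_64 only); most code relies
// on the autovectorizer - or on nothing at all.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn add_avx(a: &[f32; 8], b: &[f32; 8]) -> [f32; 8] {
    use std::arch::x86_64::*;
    let va = _mm256_loadu_ps(a.as_ptr());
    let vb = _mm256_loadu_ps(b.as_ptr());
    let mut out = [0.0f32; 8];
    _mm256_storeu_ps(out.as_mut_ptr(), _mm256_add_ps(va, vb));
    out
}

// The plain scalar version the compiler may or may not vectorize for you.
fn add_scalar(a: &[f32; 8], b: &[f32; 8]) -> [f32; 8] {
    let mut out = [0.0f32; 8];
    for i in 0..8 {
        out[i] = a[i] + b[i];
    }
    out
}

fn main() {
    let (a, b) = ([1.0f32; 8], [2.0f32; 8]);
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            // Safe to call because we just checked the CPU supports AVX.
            println!("avx:    {:?}", unsafe { add_avx(&a, &b) });
        }
    }
    println!("scalar: {:?}", add_scalar(&a, &b));
}
```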

I am aware that the hardware has far outpaced the software. This has sadly been a fact for a while. Even our favorite runtime around here – the BEAM – does not attempt to utilize the actor model on the GPU, for example. A language that I came to like thanks to you – OCaml – still cannot offer better concurrency than Node.JS.

…etc.

Most coders wouldn’t recognize the value of better CPU core saturation (or even optimize their code to use less bloaty single-core frameworks) if it hit them on the nose. So yeah, you are right.

I very much would like to be surprised. It really feels like most software innovation is dead in its tracks and everything is about puking out the yet-another-CRUD-app-that-will-change-the-world these days.

Fault of the languages, not the hardware. ^.^

MC-OCaml will do better, but it’s still a branch repo slowly being absorbed piecemeal to ensure everything works properly - can’t wait for that. For heavy multicore work that is not I/O bound, I’ll be using Rust for speed (likely with actix or one of the other libraries, depending on the kind of multicore work) for my next CPU-bound project. ^.^

For a lot of programs it doesn’t even need to be a consideration. They don’t take into account that their program is running on a system with other programs though, sadly… :frowning:

Sure, never claimed it’s the fault of the hardware.

I definitely am claiming that modern CPUs are way too damned complex nowadays, though. And if that’s not enough (I read an x86_64 paper yesterday saying that, counting all instructions and variants, there are 7700+ of them - jeebus!), they don’t even respect their own machine code and rewrite it in their own microcode!

And let’s not even mention Meltdown and Spectre, the latter still not being 100% mitigated without (sometimes significant) loss of performance.

IMO we need another radical change. Things have stalled pretty badly and nobody has the balls and money to try something new.


x86 isn’t the only option though! Most things I run at home are ARM-based, other than my desktops and servers, and I have other things from PIC to Propeller and beyond too. :slight_smile:

We are going off-topic now, but I am pondering building a home NAS and am still not sure what setup to go for. An i5 CPU with 16GB RAM should be just fine (if I can find a motherboard that supports 10Gbps Ethernet, anyway), but I am open to other options.

What would you recommend?

(Feel free to move that comment to private or another topic.)

If it’s pure data storage, I’d go with setting up an object store on it with a block device exposed; that hardware is fine if you have sufficient storage.

Though I have mine double as recording my home cameras for motion and such too, which is pretty heavyweight. ^.^;