The future of CPUs

cpu
#1

Not sure I learnt anything from this video (lots of talk of ‘chiplets’ but not cores!) but it's interesting enough to include as the starting point for such a discussion - where is CPU design actually heading? And what does it mean for those of us who use Elixir or Erlang? :101:

1 Like
#2

@AstonJ I believe that we need a quantum revolution. Until it happens we will only see enhancements to existing solutions rather than real change. For sure there are still lots of places for further optimization, but new things would give improvements only in specific cases, like new codecs (x264 -> x265) or more and more ray-tracing cores or other dedicated GPU cores. Of course we can think about an x266 standard (if any) and more ray-tracing cores, but this is still not a real change … at least not like switching to a fully quantum PC (by which I mean a PC with all the quantum benefits - i.e. not the machines which are currently in use).

Just think about images of various Linux distributions shared using quantum superposition! :077: For me this would be a true change. If we had fully quantum PCs then basically games could be written in any language, as all data would be delivered extremely fast and new quantum algorithms would be thousands of times faster compared to current ones.

Assuming that we can quantify it easily, let’s say that Elixir code is even 20 times slower than C/C++ code and that quantum algorithms would change Elixir speed by a factor of a thousand. In that case quantum Elixir code would be faster than non-quantum C/C++ code by more than non-quantum C/C++ code is faster than non-quantum Elixir code.

elixir_speed = 1
c_cpp_speed = 20
quantum_change = 1000
quantum_elixir_speed = elixir_speed * quantum_change
compare1 = quantum_elixir_speed / c_cpp_speed # 1000 / 20
compare2 = c_cpp_speed / elixir_speed # 20 / 1
assert compare1 > compare2 # 50 vs 20 times

For sure the numbers should be much higher, and firstly we cannot really compare them as directly as that, but it just shows how big the change could be. The same things which we do now with non-quantum C/C++ code (converting & editing audio and video) could be even faster with (again, fully) quantum Elixir code.

Just imagine that we are writing to DNA-like storage (we probably had a topic about it already) using quantum superposition. Extremely big storage with extremely fast read/write speeds, and all of that the size of a postage stamp.

Lots of current bottlenecks would simply not matter, as there would be no need for special interconnect tricks to speed up data transfer (for example between CPU and GPU). No need for cables, and no worrying about any I/O lag in games.

Next could be nanobots printed by a quantum 3D printer which teleports them directly to the infected parts of our body, without any need for transport through the blood.

Finally, connecting PCs to brains to speed up development + improve gaming, for example by avoiding annoying typos.

In short, everything bad and good from Sword Art Online (and even more) is coming! :smiley:

2 Likes
#3

I thought quantum computers were specialized tools that can solve specialized problems, and that other than that they’re no different from classical computers in terms of solving other classes of problems?

3 Likes
#4

Of course the currently released quantum computers are designed to solve specific things. You can think of them like a standard supercomputer with an almost empty cluster - all the other parts will be delivered a bit later.

But … if we assume that we can use all the quantum benefits, then we would have something which is just a dream for current CPU users. As said, one thing called quantum superposition is enough to make your heart beat faster. For example, current cryptography, even without such data transfers, would fall really quickly, allowing lots of attacks which are not possible on classic PCs. In theory quantum technology could even turn your room into part of a game world, which would change based on your real moves without the need for any cameras or controllers.

For sure it’s not like it will take 1 day, month or year to make it happen. There are still lots of things to standardize, stabilize and test. It’s definitely many years away, but it’s also a really possible and promising future.

1 Like
#5

For example, current cryptography, even without such data transfers, would fall really quickly, allowing lots of attacks which are not possible on classic PCs.

Right, that’s one class of problem that quantum computers specialize in.

In that article, classical computing can efficiently solve a class of problems. I guess what I’m saying is that quantum computers aren’t going to compete on those classes of problems. There are others, such as https://en.wikipedia.org/wiki/Shor%27s_algorithm, that quantum computers are better at, but I’m not entirely sure quantum can completely replace classical computers in the class of problems that classical computing is good at. Maybe on par?
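For context, here is a rough classical sketch (in Python, since the idea is language-neutral) of the reduction Shor’s algorithm exploits: factoring reduces to finding the period of modular exponentiation, and that period-finding step is the only part a quantum computer accelerates. The brute-force `period` loop below is exactly the piece that becomes exponentially cheaper on quantum hardware; everything else runs classically either way.

```python
from math import gcd

def period(a: int, n: int) -> int:
    """Brute-force the multiplicative order of a mod n - the one
    subroutine Shor's algorithm speeds up on a quantum computer."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    """Factor n from the period of a, assuming a 'lucky' base:
    r even and a**(r//2) not congruent to -1 mod n."""
    assert gcd(a, n) == 1
    r = period(a, n)
    assert r % 2 == 0
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))  # → (3, 5)
```

On classical hardware that `period` loop is hopeless for cryptographically sized numbers, which is the whole point: only that one subroutine gets the quantum speedup, not general-purpose computation.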

1 Like
#6

All correct here …

This is easy to answer: NEVER

It depends on which quantum PC we are talking about … Would you mind if I gave you an example based on manga/anime?

For example, in Naruto we have characters like Uchiha Sasuke and Hatake Kakashi. The first is a kid with huge talent and the second is his teacher. Of course at the start of the story the teacher is much stronger than Sasuke, but over the years Sasuke's power grows much faster than Kakashi's. As a result, after some years Kakashi has no chance at all of defeating Sasuke, even though at the start he was much stronger.

The same goes for the classic PC and the quantum PC. As said, the first one can almost only be enhanced, while the second is a relatively new topic with lots of things to improve. Comparing to the Naruto anime/manga, the classic PC is like Hatake Kakashi and the quantum PC is like Uchiha Sasuke, and we are at the start of the Naruto story. :smiley:

Like classic PCs years ago, current quantum PCs are huge and expensive. We still need lots of years to make them cheaper, and even that would not solve the main problems. There are still lots of things for science to achieve. And again, just as the classic PC was previously used only by rich and smart people, the quantum PC is currently used by the same groups.

Years ago I read that the first quantum PC was slower than a classic PC. The problem was that the benchmarks were prepared and optimized only for classic PCs - i.e. the same things were not optimized for quantum approaches. Not sure how they ran such software on a quantum machine … maybe it was some kind of hybrid…

To visualize what quantum technology gives you, please take a look at:


Just think what would happen if people found a way to use those 2 things in the same process. Saving huge amounts of data would be incredibly fast and the storage would be really, really small.

Please also read about the bottlenecks in current PC architectures, and again look at how simply they could be solved if people knew how to use quantum superposition in all cases. Data transfers which could be optimized:

  1. I/O transfers (like wireless keyboards or mice)
  2. Network transfers (basically networks are limited by physics)
  3. Human reaction time (here I mean going further and connecting the brain to the PC to improve data transfer from the mind - i.e. instead of typing, just think about specific words)

Oh, another good example of quantum superposition. Imagine if it could be used in a Quantum BitTorrent (or something like that) and your streams went directly to your machine and those of all the people in the world. :slight_smile:

Finally, think about Elixir using all CPU cores on a quantum machine that uses quantum superposition for all data transfers. Of course it would not happen soon, but it would definitely change the whole of IT (gaming, programming, recording, science, writing).

1 Like
#7

Ah I understand where you are coming from now.

I guess I’m more pessimistic since it’s still experimental.

I personally think in the near future it’ll be a co-processor next to the classical computer, like the GPU.

As for anything further than that, I don’t dare speculate. It would be nice if quantum superposition ended up solving bus speed and could completely dominate the classical CPU.

On a tangent, the next AMD chip is going to extend the bus (https://www.servethehome.com/why-amd-epyc-rome-2p-will-have-128-160-pcie-gen4-lanes-and-a-bonus/).

#8

When I have to describe something which is not an easy topic, I always try to state it very generally, in order to remember it easily, for example in some funny way. For me the AST is generally a “nested” assembler for Elixir, even though I know that it compiles into Erlang etc. etc. …

# call with 2 args
mov al, 24

vs

{:mov, [], [{:al, [], Elixir}, 24]}
# same here - just looks differently
# and of course works differently

For me it’s only an association/wordplay - just to remember it easily.

Here the easiest thing for me is to say that quantum physics is like physics on steroids (again, even though I know it’s a much different topic than classic physics etc. etc.). Look, there are lots of classic physics laws which the classic PC uses every day. Exactly the same goes for the quantum PC, but here we just need to investigate and test solutions in order to build a complete quantum PC - i.e. a machine which tries to be limited by classic physics as little as possible.

Now just think how long the period was between Albert Einstein and the first classic PC that lots of people were able to buy. For sure I don’t believe that quantum physics will need to be investigated for as long before the first fully quantum PC is built. It’s because, unlike Albert Einstein, we know what we want to achieve (in short: a better PC), we ~understand some of the quantum laws, and finally we are just about to test them.

Not sure if you know, but people can already use quantum superposition, though for now (as far as I know) tests have been made only on single atoms (unfortunately not on Elixir's ones :smiley:). About 3 years ago scientists in Warsaw created a hologram of a single photon. Two years ago the Australian RMIT University in Melbourne and the Beijing Institute of Technology created the thinnest holographic display, one which does not require googles!

Here are articles in Polish describing those topics a bit:
https://kopalniawiedzy.pl/foton-hologram-kwantowa-holografia,24913
https://kopalniawiedzy.pl/hologram-wyswietlacz-holografia,26440

We already have so many achievements! Do you still think that they can’t be used in a standard PC once they get cheaper? :077: Of course I’m sure that quantum physics will be used by the military first. I’m even almost sure that at least China is already using some of the laws of quantum physics.

Hope I will be able to have a real quantum PC ASAP. :smiley:

1 Like
#9

Quantum superposition is not some magical FTL transmission method; you can’t “communicate” using it at all. It is nothing but how particles work until they interact with ‘something’ else. The reason it is useful is for things like the above-mentioned Shor’s algorithm, which uses its properties to bind a few adjacent waveforms together and then collapse them in such a way that they can only collapse in one specific way. Quantum computers are not and likely never will be useful for traditionally classical calculations. They are useful in very narrow usage fields; they are not some form of magical FPGAs that can be made to do anything super fast.

DNA storage is compact because it is storing at mere atoms in size; however it is not the only way to encode using atoms, nor is it a ‘fast’ way to write (‘reading’ it is potentially fast, though not currently), so likely some other method of storing at an atomic level will be used instead of slow protein folding to write.

Shared ‘how’? What would you gain by running Linux on a quantum computer other than making it ridiculously slow (quantum systems are not efficient at classical linear computing due to physical limitations)?

Actually encoding data into quantum structures is extremely slow; you make up for that by transforming that state very quickly, if it can even be encoded in such a quantum state structure at all (the great huge majority of things cannot and will not have an equivalent algorithm).

‘How’? Quantum algorithms are exceptionally situation-specific; almost nothing that Elixir does would be useful on a quantum system.

Absolutely not, it is not some magical multiplier; quantum work is a completely different form of computing. You aren’t going to be doing something like making a game in it, or running a webserver on it, ever.

It is absolutely not language work, nor would any ‘traditional’ programming language be able to compile to it. A quantum CPU is nothing short of vector calculus and trying to devise waveforms that collapse in certain ways for very specific problems (again, the most famous of which is Shor’s algorithm, which only replaces one tiny itty-bitty part of integer factorization - which, very luckily, is the slowest part).

That… uh… those are entirely distinct things… Quantum superposition acts on a level entirely different from and smaller than DNA storage, and would not be able to write DNA storage directly; you have to use protein folding for that, to walk over it and make the change while praying it doesn’t screw it up somewhere else (DNA is tiny and packed; it is not reliable).

Current computers are already bound by nearly the speed of light on their northbridge interconnects; such a transfer would not be any faster - you cannot beat the speed of causality. And if you are not using cables to transfer information, then what… the electromagnetic spectrum (light)? Gravity could potentially be slightly faster than light in propagation rate (it always propagates at the speed of causality, which is faster than the speed of light through a medium like air), but there is no real gain there either (plus we have no way of generating gravitational fields right now short of injecting large amounts of energy into a system).

Teleports? How would that work? Everything we know about physics states that no teleportation outside of quantum coherence events can happen, and even those have massive restrictions, such as being strictly slower than the speed of causality, and they can only happen with extremely few if any interactions (each additional waveform interaction collapses it with monumentally higher probability, becoming a certainty at anything even approaching a few atoms' worth of interactions).

Eh, brains are classical machines. We just need to get enough computing power (a matrioshka swarm around the sun?) that simulates everyone's brain and either gives people what they want right before they know they want it, by simulating ahead a few minutes with many variations as tests, or just gets rid of the physical form so we exist only in the system itself at that point (a human brain only needs about 12 watts of power to be simulated with perfect efficiency; more realistically the best we could do in the physical universe is probably 40-50 watts). Though before then we could easily have a computer take over all the inputs and outputs of a brain to control everything it experiences.

Actually they are significantly worse than classical computing for ‘most’ problems. Collapsing many states and setting up the transitions is significantly slower than simple electron movement.

Not quite a dream though, they have significant physicality limitations.

Only certain kinds of modern cryptography - the ones that rely on large prime factorizations - and not all cryptography formats rely on that, so they remain secure in the presence of a high-quantum-computing world.
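To make that concrete, here is a toy sketch of why factoring breaks RSA specifically (tiny textbook numbers, purely illustrative - real keys use 2048+ bit moduli): anyone who can factor the public modulus `n` can recompute the private key. Schemes not built on factoring (or related problems Shor’s algorithm also handles, like discrete logs) are unaffected by this particular attack.

```python
from math import gcd

# Toy RSA with textbook-sized primes (illustration only!)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233 is public; p and q are secret
e = 17                               # public exponent
d = pow(e, -1, phi)                  # private exponent - computing it needs phi, i.e. p and q

msg = 1234
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg      # normal decryption works

# An attacker who can factor n (the job Shor's algorithm does quickly)
# rebuilds the private key from public information alone:
for cand in range(2, n):
    if n % cand == 0:
        p2, q2 = cand, n // cand
        break
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg     # key fully recovered
```

The brute-force factoring loop is the only step a quantum computer replaces; everything else in the attack is ordinary classical arithmetic.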

Huh?! What is that going on about?! I’m not sure what that is referencing at all… o.O

Factorization specifically, not even all cryptography.

Not actually that huge, but they are expensive because, to prevent a quantum decoherence event (destroying the state and operation in progress by introducing anything external to the waveform, thus collapsing it), you have to keep it super-cooled to near absolute zero (with liquid-helium dilution refrigerators, far colder than liquid nitrogen can reach).

No quantum computer is useful by itself; you need classical components. And it’s not that it is slower, but rather that it only had a limited number of atoms taking part in the state waveform (now they have many thousands, and each one adds exponentially), and even then those only work for certain algorithms; it is not useful for classical computing.

Except genetic storage is significantly larger than the scales at which superposition takes place, and the energy values change to change the stored data (i.e. the atoms have to be moved in/out). Even assuming perfect mass/energy equivalency, where you could just alter what is stored directly by changing the atom itself with a burst of energy (such technology would absolutely alter humanity as a whole), it still requires the energy to do it; it is not a pure quantum operation.

Quantum superposition is useful for calculating state changes by collapsing it in controlled ways; it is absolutely not useful at all, in any form, for any of:

That sounds more like what a lot of people think about with Quantum Entanglement, which sadly does not allow information transference in any useful way whatsoever.

That also has absolutely nothing to do with quantum superposition, but again this sounds like what a lot of people who don’t know about the subject think that Quantum Entanglement can do, which it can’t.

(To date no useful effect of Quantum Entanglement has been found, it’s so far just a weird quirk of reality but allows nothing useful to actually be done, theorized or not.)

Quantum superposition would absolutely kill data transfers. In no way would it speed them up; rather, it would absolutely kill them off because of the latency in setting up the state, collapsing it, and reading out the state at that time.

This is exactly how it is currently treated, how it currently acts, and how it will be for the foreseeable unbounded human future.

A larger bus is super nice! It helps to work around the speed of causality by just transferring on more paths, but humanity is hard pressed to generate or consume that much information efficiently as of yet. ^.^;

Classic physics is not really a limitation, it gives a lot of assurances that are useful and fast for general computing. Quantum Computing is a large multidimensional vector matrix collapsing function, useful only in very specific ways just because of the nature of reality.

Other than the fact that quantum computers have been in use for nearing a decade now, or radiation, or all manner of daily physical interactions? ^.^

Not sure what google has to do with it? But the Warsaw experiment had nothing to do with holography at all (they even admitted they used the word in the title as a bit of a tongue-in-cheek relation); it was just something to map out the structure of a photon so we can better examine how its interference patterns work. The Beijing one is not so much a dynamic display (like you could change with a computer), but rather a static image essentially printed/burned/whatever into the substrate using a laser; their method was impressive because their substrate was thinner than a single waveform of the reflectable light, which also makes it cheaper to produce. Their paper stated that their next goal is to create some form of dynamic display while also shrinking the essential ‘pixel size’ of the resultant image, to make something for computer displays.

Currently it is significantly more expensive. Being able to keep something super-cooled cheaply would be a world-changing invention (not just for quantum computing but for whole ranges of applications), but we can’t beat physical thermal costs as it is. Likewise, getting a quantum computer to work at human-level temperatures would also be an astounding invention, but that does not mean it would necessarily be cheaper either (and would likely introduce a huge range of noise into the results at best).

What would you do with it? What significantly sized and scoped intertwined vector calculus are you wanting to solve?

9 Likes
#10

This is the future basically:


Which is why I’m excited to program in a language that allows me to easily take advantage of this. :slight_smile:

6 Likes
#11

Lol! Too true… They might not even be homogeneous cores either (and the BEAM would need to evolve for that). ^.^;

2 Likes
#12

Yeah, it’s a perfect example showing that we have started working on lots of things but there is still lots to do, so this display will have more use cases in the future than it has now. I’m not good at quantum theory, but it should be the same for the rest of the current quantum technologies, right?

Ah right, my mistake - sorry about it - I meant quantum entanglement, not quantum superposition - the name “superposition” is just what my mind associated it with.

Is that so? I think about it in a bit of a different way. We have something which classical physics (with the classical PC) can’t achieve. I believe it’s a matter of time before someone creates a theory which would make such data transfer useful. For sure there is currently no technology to do it (not sure about theories, as I’m not up to date with this topic), but this does not mean that it’s 100% impossible. Look, we still don’t know much about quantum physics.

Anyway, classic physics can’t solve some data transfer problems at all. Sending data to another planet would be really hard or even impossible (assuming that you want to download something really fast). Maybe for now quantum entanglement is not good enough for a PC, but it could be useful for, say, 2 labs on different planets speeding up research. Who said that quantum entanglement is all we can achieve?

For now we only have more cores. Moore's law has not been working for years and that’s not going to change soon. Unlike before, one PC is now enough for years, and apart from new ideas (like RT cores) we don’t lose much. For now the bigger optimizations happen in code rather than in hardware.

I could be wrong, but a single thought was already emulated - but … on a supercomputer, in 40 minutes. I don’t believe that with current technology we could have a PC the size of our brain which is thousands of times faster than a current supercomputer. For sure such changes happened years ago, but again we have the Moore's law problem here.

#13

A dynamic holo-display would be quite nice, I have to admit, if they can work out the clarity issues. ^.^;

Yep, superposition is about its location and some other stats being… fuzzy until the waveform collapses.

You can kind of consider that quantum entanglement carries information from its starting position to two distinct positions, but it does so at less than the speed of light (substantially less) and with less useful data than just sending an electron, while the information is entirely useless (you can’t “set” the data; you just know that each ‘side’ will be opposite, which is useless).

You are always limited by the speed of causality. Regardless of anything else anywhere, there is nothing in physics, and nothing even ‘hinted’ at in physics, suggesting that any form of transmission of anything whatsoever is possible, or ever will be possible, faster than the speed of causality (which is about equal to the speed of light in a vacuum).

So whatever comes up, we will always be up against that limitation, and we have already been up against the speed of causality in modern computers for a long time now in a lot of ways. We just improve technology by, say, increasing the bandwidth (more wires) or so, but you will always have that hard speed limit, that hard latency.

Moore’s law is working just fine. It states that the transistor count in a microchip doubles every two years or so, which has been holding very well: the transistor count is indeed still doubling fairly routinely, just in the form of more cores now. Moore’s law has nothing to do with speed.
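As a back-of-the-envelope check of that point (the figures below are rough illustrative assumptions, not exact chip data):

```python
# Back-of-the-envelope Moore's-law check: transistor COUNT doubling,
# not clock speed. All figures are rough illustrative assumptions.
start_count = 300e6        # a high-end desktop chip, mid-2000s (rough)
years, years_per_doubling = 14, 2
growth = 2 ** (years / years_per_doubling)   # 2^7 = 128x over 14 years
end_count = start_count * growth             # ~4e10 transistors
# That budget shows up mostly as more cores and cache, not faster single cores.
print(f"{growth:.0f}x -> {end_count:.1e} transistors")
```

The exponent keeps compounding even while single-core clock speeds stay flat, which is exactly the "doubling, just in the form of more cores" observation.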

No such problem here.

And we can no doubt make computers that far, far exceed what a human brain can do, but that’s a scaling-‘out’ problem, not a scaling-‘up’ problem. We are already pretty close to the maximum speed a theoretical electromagnetism-based system can support for single transistors (so minor gains with graphite and so forth), and other systems like light-based processors will likely be our next big (and likely last) jump for primary computational systems.

The concept of “a single thought was already emulated, but … on a supercomputer in 40 minutes” is nonsense, to be a bit blunt about it. Human brains are dreadfully, dreadfully slow; a lot of our processing is slow chemical interactions, and it only ‘seems’ fast to us because we are massively parallel - more cores - but each ‘core’ in a human brain is not just a lot slower, but holy-frick-how-does-it-work kind of slow. A single ‘operation’ in a human brain takes not nanoseconds but microseconds ‘at best’ and milliseconds on average, which even old computers blow away in raw speed. However, we need to make our systems significantly more ‘parallel’; our brains are graph-based interconnected processing units, way simple, just a huge number of conceptual ‘cores’.
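The parallel-but-slow point can be put in rough numbers (everything below is an illustrative order of magnitude, not a measurement):

```python
# Per-unit speed vs. parallelism, in rough orders of magnitude.
# All numbers here are illustrative assumptions, not measurements.
neurons = 86e9                 # rough neuron count in a human brain
ops_per_neuron = 1e3           # ~1 ms per "operation", i.e. ~1 kHz at best
brain_throughput = neurons * ops_per_neuron   # ~8.6e13 "ops"/s in aggregate

cores, clock = 8, 3e9          # a modest desktop CPU: few units, very fast
cpu_throughput = cores * clock                # ~2.4e10 ops/s

# The brain "wins" on aggregate throughput purely through parallelism,
# while each individual unit is around a million times slower than a transistor.
print(brain_throughput / cpu_throughput)
```

The ratio only looks impressive because of the enormous unit count; per unit, silicon is many orders of magnitude faster, which is the post's scaling-'out' vs. scaling-'up' distinction.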

#14

I thought that it was different. I started to re-read, found the EPR paradox and Bell’s theorem, and I’m sure about 1 thing: I prefer to focus on Elixir - high-level physics theories are just too much for my stupid brain. :smiley:

Firstly, the point is not to think that it’s not possible with quantum entanglement, but to think about the fact that quantum entanglement exists/works at all. I mean, if such a thing is possible, then how many other (similar?) things are also possible?

Secondly, the quantum theories around entanglement are not yet 100% investigated - there are still debates about them. Lots of people are sure that the speed of light is the fastest, but “a few years” ago we thought that Pluto was a planet just like Earth or Mars.

Finally, NOW we think that we can’t change the data. It’s hard to change something which we do not fully understand. Nobody said that only “some magic power” can change them when we are not observing them. It’s definitely possible to change them “somehow”. The only question is how quickly we will find a way to do it and how hard it will be.

I watched some videos and people more or less seriously started to wonder, if it were possible, whether quantum teleportation would be killing (in terms of law) the original body, and what would happen if we did not destroy the original body. Of course it’s only “what would happen if”, but look at how, throughout history, people have tried to create things which were described/seen somewhere (for example Star Trek was a huge inspiration, if I remember correctly).

It’s more like “with current knowledge it’s not possible” rather than “we are sure that it’s not possible”. That’s the true “magic” of scientific theories. :077:

Heh, probably one of the last theories/things I forgot to mention, and on that part you agree. :smiley:

#15

I think the biggest problem with more cores will be bus size and cache coherency - how the cache works when multiple cores either share caches or have their own caches.

At least this is what I’ve read a while back; I’m not really knowledgeable enough about CPUs to say more.

#16

Even similar things are not useful. You can’t ‘transport’ information, and entanglement only sets the information (a waveform entanglement event) at the site where the particles are first entangled; you can then send the two things out and measure one, and you know that when it is measured the other will have the opposite direction of spin. It has absolutely nothing to do with being able to encode information or transport anything or being able to ‘do’ anything useful at all, thus I’m not sure what similar things you are referencing.

The math is 100% investigated; the physical reality is still being tested, but so far it conforms 100% to the math wherever it has been tested (and in fact if it didn’t match in any way then our entire understanding of quantum physics would essentially have to be relooked at and potentially rewritten or thrown out). There is no debate on its inability to carry information, however; the debate only ranges over ‘how’ it works, not how useful (or useless, rather) it is.

And Pluto’s thing is a classification; it has nothing to do with physical mathematical models.

You can change the data just fine, but the process of interacting with it will collapse the waveform, thus disentangling them, as they then have unique state vectors, which means the other one will no longer relate to it either. You can change it all you want, but that won’t change the other one; that is not how entanglement works nor what it means.

There is no such thing as quantum teleportation in the sci-fi style of teleportation systems (killing? what on earth does that have to do with quantum teleportation?!?). Quantum teleportation is just taking the/some information about a particle and altering another particle to have the same information somewhere else. The act of measuring a particle changes it, so ‘essentially’ the original particle with its original description is gone, and this doesn’t work at scales larger than a particle or few, because you’d have to measure each one simultaneously in the same quantum ‘tick’ of time; otherwise the act of measuring one will also change the waveform of overlapping neighbor particles. This has nothing to do with sci-fi style teleportation, nothing to do with ‘destroying an original body’, or anything of the sort. It is purely talking about how, if you have a particle with identical information to another particle somewhere else, then it is indistinguishable, as the information about a particle fully describes that particle, making it indistinguishable from any other of the same state. It has nothing to do with Star Trek and I’m not even sure how that got pulled into this?! Even the information transference only happens at the maximum speed of causality(/light), and measuring either side still changes the waveform regardless.

Based upon everything we know about reality and physics and the math behind it (which is predictive and shown to be 100% accurate thus far - and if it is not 100% accurate to an experimental result, then either the experiment was wrong or the entire theory is wrong), faster-than-causality communication is impossible.

Think of it this way, in highly simplified terms: for every quantum of time (a single ‘tick’ of time) a particle can either process a tick of its “mass” or it can process a tick of its “energy”. Mass and energy are the same thing, but they are different expressions of the same thing. Ticking the mass means that the particle changes something about its state, and ticking the energy means that it interacts with the spacetime field. Things that are massless, which are things that have only a single state, like a photon, cannot tick their mass, so they can only tick their energy, meaning they can only move - hence why light in a vacuum travels at the speed of causality (which is the rate of temporal ticks in a closed system).

A particle will randomly either tick its mass or tick its energy every quantum tick; things that are moving slower tick their mass more often, and things that are moving faster tick their energy more often. Something that is ticking its energy more often will tick its mass less often, thus changing its waveform state less often, thus effectively moving through time more slowly (hence why the faster something moves, the slower it progresses through time). Something just categorically cannot move faster than 1 quantum of space in 1 quantum of time; that is the very definition of the speed of causality and the speed of light in a vacuum. This is also why you cannot have any form of FTL anything, or physical transport of anything.

Theoretically, if we are a 3D space embedded in a higher-dimensional space, then you’d be able to curve our space to touch another part and skip the intervening space, but you are still not having FTL, and thus far there is no evidence of such higher dimensionality either (as well as the fact that the energy requirements to bend space in such a way would be absolutely mindbogglingly large - like the entire lifetime energy consumption of a large black hole kind of large, just to begin the process).

Hence why we need a message-passing architecture. The Propeller microcontroller design was built with that in mind: 8 cores with a message-passing bus (generally 1 core ends up dedicated to handling the interconnects when you program it). ^.^

2 Likes
#17

Maybe we would do better by focusing not on the future of CPUs, but rather on the past: “The Mess We’re In” by Joe Armstrong.

6 Likes
#18

IMO it’s high time for mainstream CPUs with hundreds (or thousands) of smaller cores with simplified instruction sets. Of course this has to come with a state-of-the-art kernel making perfect use of them – which by itself isn’t a huge endeavor… at least compared to the question of how on earth you are going to write a compiler that transparently parallelizes pure functions and chains the results back together for the next stage of processing.


Still though, in terms of something that we lack today, it’s true saturation of all CPU and GPU cores. It’s quite frustrating having a pretty strong gaming-grade PC and realizing full well that most tasks can’t even saturate 2 of the 4 CPU cores, and that maybe 0.1% of software even attempts to saturate the GPU with real work (apart from games).

Even the MacBook Pro I am working from could likely benefit from partial GPU utilization – I hear there are GPU-based SQL planners which are miles ahead of the classic ones, for example.
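Saturating the cores with a pure function is already the easy part when you opt in explicitly; the hard part described above is a compiler doing it transparently. A minimal explicit sketch (Python's process pool here just for illustration; in Elixir the same shape is `Task.async_stream/3`):

```python
from concurrent.futures import ProcessPoolExecutor

def pure_work(n: int) -> int:
    # A pure function: same input -> same output, no shared state,
    # so a runtime is free to fan calls out across every core.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [200_000] * 32
    # Defaults to one worker process per CPU core; result order is preserved.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(pure_work, inputs))
    print(len(results))  # 32 results, computed in parallel
```

The explicit version works precisely because purity is declared by the programmer; inferring it automatically for arbitrary code is the compiler problem the post is pointing at.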

2 Likes
#19

I believe that, in our lifetime, we are going to witness a major shift from classical computing towards asynchronous and self-evolving neural networks (and the BEAM seems predestined for this): a shift from human-designed keyboard-mouse-screen apps towards new kinds of interaction. Future CPUs will have to cope with that at the instruction-set, core-count and efficiency levels.

#20

Sorry, but hardware ray tracing is nothing new; hardware has been built for it for at least the past 20 years. The theory is well known too. If I was taught to do ray tracing on paper in elementary school art class, then I believe the theory must have been around for at least a few hundred if not a thousand years :wink: And why haven’t we had ray-tracing hardware in every PC out there for the past 20 years? Because ray tracing is very computationally expensive, and rasterized images are good enough. It’s used mostly for the high-quality images produced for advertising or FX. I remember when I rendered ice cubes and such using a ray-tracing plugin in 3ds Max for the first time, about 15 years ago… that really looked photorealistic. But it wasn’t anything new or groundbreaking.

Come back to earth. Try to see past all the hype. RT Cores are really just marketing, because the GPU market has become stagnant and most people don’t see any point in exchanging last year’s GPU for this year’s GPU :wink: