I am 100% certain that Blizzard will not incorporate PhysX into their Diablo 3 engine.
PhysX is a dedicated physics engine, much like Havok's. Blizzard originally used Havok but dropped it in favor of an in-house Blizzard physics engine. So yes, you are correct: D3 will not use Nvidia's PhysX engine. That does not mean, however, that the physics processing can't be done by a dedicated PhysX card.
... No. With games that use Nvidia's proprietary software, you can do this. Otherwise, the other card will not "just know" to process physics calculations.
Multi-core optimization shouldn't be aimed at a fixed number of cores. If you have 2 cores, the game should use both; if you have 8, it should use all 8 as well...
I think games optimized specifically for 2 cores are somewhat poorly programmed, because the developers don't aim for full parallelization, only at splitting a few tasks in half.
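A minimal sketch of the difference, in Python threads for simplicity (a real engine would use native threads). Instead of splitting the work into exactly two halves, which only ever helps on two cores, split it into many small chunks and let a pool sized to the machine work through them. `simulate_chunk` is a made-up stand-in for per-entity game work, not any real engine API.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    # Stand-in for a slice of per-frame work (physics, AI, etc.).
    return sum(x * x for x in chunk)

def simulate_all(entities, chunk_size=64):
    # Many small chunks, not two big halves: the same code keeps
    # every core busy whether there are 2 cores or 8.
    chunks = [entities[i:i + chunk_size]
              for i in range(0, len(entities), chunk_size)]
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return sum(pool.map(simulate_chunk, chunks))

print(simulate_all(list(range(1000))))  # same answer as a serial loop
```

The point is that the chunk count, not the core count, is baked into the code, so nothing has to be rewritten when the hardware changes.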
It really isn't that easy. With code you have multiples upon multiples of variables. Core optimization means the game recognizes that you have four cores, distributes the workload evenly while using each core to its full potential, and makes sure every variable tied to the core count works well.
Example: I have a single-core processor, so variable X in the game's code has "single core" locked into it, and it can use variables A, B, C, and D depending on something else in the system. Another person has a dual-core computer, so their variable X holds "dual core" and now covers A, B, C, D, E, F, G, H, and I. This goes on until all core configurations have been filled out. Then when you go through the secondary variables, just A might have i, ii, iii, and iv; then B gets v, vi, vii, viii, ix, and x; and so on. But ALL of these have to be optimized, otherwise it can actually make the game perform worse, which is exactly what they don't want.
The time it would take to go through the variables for all twelve core configurations would be enormous. Then when patches and expansions hit, you have to make sure even that data is optimized, otherwise you're in even more of a mess.
Thread libraries take care of that; threads are scheduled and run without affinity to any particular CPU core. The native OS handles all of this, and you only need an understanding of how that system works in order to optimize for it.
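To illustrate the point above: the program just submits tasks, and the OS scheduler decides which core runs each thread. Nothing in this sketch pins a thread to a core; the pool size simply follows `os.cpu_count()`. The `task` function is a hypothetical placeholder.

```python
import os
import threading
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # Each task just reports which pool thread ran it; the OS
    # decided, with no affinity set, which core that thread used.
    return (n, threading.current_thread().name)

# Pool sized to the machine, not to a hardcoded core count.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(task, range(8)))

for n, worker in results:
    print(n, worker)
```

The application code never mentions "core 0" or "core 3"; it only expresses the work, and the scheduler does the placement.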
Mutual exclusion and atomic operations are not the main problem in multi-core systems. As the other user said, today's libraries handle them very well. Load balancing is a big problem, but academia offers many algorithms that deal with it well, so the market likely already has plenty of engines using some of them.
The problem is that when threaded programming became popular, people learned to think about threads in terms of a fixed number of cores. Academia has been thinking about scalable systems for some time now, and as the market evolves and is no longer limited to two cores, these scalable solutions are becoming popular as well.
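A toy version of the dynamic load balancing mentioned above: tasks of very different sizes go into one shared queue, and each worker pulls the next task as soon as it is free, so no worker sits idle while another is overloaded. The task sizes are made up for illustration; this is a sketch of the idea, not any particular engine's scheduler.

```python
import queue
import threading

tasks = queue.Queue()
for size in [100, 1, 1, 1, 50, 1, 1, 25]:   # deliberately uneven workloads
    tasks.put(size)

results = []
lock = threading.Lock()

def worker():
    # Pull work until the queue is empty: a fast worker naturally
    # takes more tasks, which is what balances the load.
    while True:
        try:
            size = tasks.get_nowait()
        except queue.Empty:
            return
        total = sum(range(size))            # stand-in for real work
        with lock:
            results.append(total)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results), sum(results))
```

Because workers pull rather than being assigned fixed shares up front, the same code balances itself on 2 workers or 16.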
I have 8 cores and I expect them all to be at 100%.
Just kidding.
Look, if the game runs well on your system, don't worry about it. Blizzard are geniuses in the optimization department. But the bottom line is, in modern gaming the biggest limitation on performance is your video card's GPU/VRAM specs. If you have a Pentium 4, I'd be worried. But if you've upgraded your computer in the last 4 years, you should be fine.
"Ridicule is the only weapon which can be used against unintelligible propositions."
-Thomas Jefferson