Estimating increase in speed when changing NVIDIA GPU model

I'm developing a CUDA application that will be deployed on a GPU much better than mine. Given another GPU model, how can I estimate how much faster my algorithm will run on it?

You're going to have a hard time, for a number of reasons:

Clock rate and memory speed have only a weak relationship to code speed, because there is a lot more going on under the hood (e.g., thread context switching) that gets improved or changed on new hardware.
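To see just how little the raw specs tell you, you can dump them for each installed device with the CUDA runtime API. This is a minimal sketch using `cudaGetDeviceProperties`; the printed numbers are the same headline figures (SM count, core clock, memory clock) that, as noted above, only loosely predict real performance:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // clockRate and memoryClockRate are reported in kHz.
        printf("Device %d: %s\n", dev, prop.name);
        printf("  SMs: %d, core clock: %.0f MHz, memory clock: %.0f MHz\n",
               prop.multiProcessorCount,
               prop.clockRate / 1000.0,
               prop.memoryClockRate / 1000.0);
        printf("  memory bus width: %d bits\n", prop.memoryBusWidth);
    }
    return 0;
}
```

Comparing these numbers between your current and target GPUs gives an upper-bound intuition at best, not an estimate.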

Caches have been added to newer hardware (e.g., Fermi), and unless you model cache hit/miss rates, you'll have a tough time predicting how this will affect speed.

Floating-point performance in general is very dependent on the model (e.g., the Tesla C2050 has better performance than the "top of the line" GTX 480).

Register usage per device can differ between devices, and this too can impact performance; occupancy will be affected in many cases.
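If you have access to a device, you can at least measure how your kernel's resource usage translates into occupancy there, rather than guessing. A sketch using the occupancy API that newer CUDA toolkits provide (`cudaOccupancyMaxActiveBlocksPerMultiprocessor`); `dummyKernel` is a hypothetical stand-in for your real kernel, whose register and shared-memory footprint is what actually drives the result:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical placeholder: substitute your real kernel here.
__global__ void dummyKernel(float *out) {
    out[threadIdx.x] = (float)threadIdx.x;
}

int main() {
    const int blockSize = 256;
    int maxBlocksPerSM = 0;
    // Ask the runtime how many blocks of this kernel fit on one SM,
    // given the kernel's resource usage on the current device.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &maxBlocksPerSM, dummyKernel, blockSize, /*dynamicSMemSize=*/0);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    double occupancy = (double)(maxBlocksPerSM * blockSize)
                     / prop.maxThreadsPerMultiProcessor;
    printf("Blocks per SM: %d, theoretical occupancy: %.0f%%\n",
           maxBlocksPerSM, occupancy * 100.0);
    return 0;
}
```

The same kernel can report different occupancy on different GPUs, because register files, shared memory, and thread limits per SM all vary by architecture.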

performance can improved targeting specific hardware, if algorithm perfect gpu, improve if optimize new hardware.

Now, that said, you can probably make some predictions if you run your app through one of the profilers (such as the NVIDIA Compute Profiler) and look at your occupancy and SM utilization. If your GPU has 2 SMs and the one you will eventually run on has 16 SMs, then you will almost certainly see an improvement, but not specifically because of that.

So, unfortunately, it isn't easy to make the kind of predictions you want. If you're writing something open source, you could post the code and ask others to test it on newer hardware, but that isn't always an option.

cuda gpu-programming time-estimation
