Scaling & Efficiency
Speedup & Efficiency
- Speedup
- The ratio of the time taken to run a program serially (without parallelism) to the time taken to run it in parallel.
- Scalability (Parallel Efficiency)
- The ratio of the achieved speedup to the ideal speedup (the number of cores used).
Given that $t_1$ is the time on one core and $t_p$ is the time on $p$ cores:
\[\text{speedup}(s_p) = \frac{t_1}{t_p}\]Ideal speedup is when $s_p=p$.
Speedup alone is not a reliable measure: the serial and parallel versions may be implemented very differently, and neither may be optimal.
\[\text{efficiency} = \frac{s_p}{p}\]Raw runtimes should also be reported, since efficiency only compares a single implementation against itself.
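As a minimal sketch (assuming the runtimes $t_1$ and $t_p$ have already been measured, e.g. with Python's `time.perf_counter`), the two metrics can be computed as:

```python
def speedup(t1: float, tp: float) -> float:
    """Speedup s_p = t_1 / t_p, given the serial and parallel runtimes in seconds."""
    return t1 / tp

def efficiency(t1: float, tp: float, p: int) -> float:
    """Parallel efficiency = s_p / p; 1.0 means ideal (linear) speedup."""
    return speedup(t1, tp) / p

# Hypothetical timings: 120 s on 1 core, 20 s on 8 cores.
print(speedup(120.0, 20.0))        # 6.0
print(efficiency(120.0, 20.0, 8))  # 0.75
```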
Scaling
Strong Scaling
For strong scaling we:
- Keep the input fixed.
- Change the number of cores.
- Report the runtime.
Ideally, the runtime decreases in proportion to the number of cores (doubling the cores halves the runtime), i.e. the speedup grows linearly with the core count.
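A minimal strong-scaling sketch (the workload `work` and the problem size are hypothetical, chosen only to illustrate the procedure): the input stays fixed while the number of worker processes varies, and the runtime is reported for each.

```python
import time
from multiprocessing import Pool

def work(chunk):
    # Hypothetical fixed workload: sum of squares over a chunk of indices.
    return sum(i * i for i in chunk)

def strong_scaling(n=2_000_000, cores=(1, 2, 4, 8)):
    data = list(range(n))  # the problem size stays fixed
    for p in cores:
        chunks = [data[i::p] for i in range(p)]  # split the same work across p workers
        start = time.perf_counter()
        with Pool(p) as pool:
            pool.map(work, chunks)
        print(f"{p} cores: {time.perf_counter() - start:.3f} s")

if __name__ == "__main__":
    strong_scaling()
```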
Weak Scaling
Weak scaling also changes the problem size:
- Increase the problem size in proportion to the number of cores (e.g. the resolution of an image).
- Report time for solution.
Ideally, the runtime stays identical between runs, since the problem size and the resources grow in proportion.
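A corresponding weak-scaling sketch (again with a hypothetical workload): each worker gets the same amount of work, so the total problem size grows in proportion to the core count, and ideally the reported times stay flat.

```python
import time
from multiprocessing import Pool

def work(n):
    # Hypothetical unit of work: sum of squares over n items.
    return sum(i * i for i in range(n))

def weak_scaling(n_per_core=1_000_000, cores=(1, 2, 4, 8)):
    for p in cores:
        start = time.perf_counter()
        with Pool(p) as pool:
            pool.map(work, [n_per_core] * p)  # total size grows with p
        print(f"{p} cores, total size {p * n_per_core}: {time.perf_counter() - start:.3f} s")

if __name__ == "__main__":
    weak_scaling()
```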
Parallel Laws
Amdahl’s Law
\[s_p=\frac{1}{\alpha+\frac{1-\alpha}{p}},\qquad\lim_{p\rightarrow\infty}s_p=\frac1\alpha\]No matter how many processing elements we have, a program can never run faster than its serial parts (the fraction of the program that does not run in parallel) allow.
where:
- $\alpha$ - the fraction of the original runtime that is inherently serial.
- $s_p$ - speedup.
- $p$ - number of cores.
Therefore, given infinite resources, the maximum speedup will be $\frac1\alpha$.
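As a worked example (assuming, purely for illustration, that 10% of the runtime is serial, i.e. $\alpha = 0.1$), Amdahl's law can be evaluated for increasing core counts to show the speedup approaching $\frac1\alpha = 10$:

```python
def amdahl_speedup(alpha: float, p: int) -> float:
    """Amdahl's law: speedup on p cores when a fraction alpha of the runtime is serial."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.1, p), 2))
# 2 -> 1.82, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91 (approaching 10)
```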
Gustafson’s Law
This law allows the problem size to increase as we add more cores:
\[s_p=\alpha+p\times(1-\alpha)\]This gives a more optimistic estimate of speedup, because the serial fraction becomes a smaller share of the total work as the problem grows.
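Using the same illustrative $\alpha = 0.1$, Gustafson's law predicts a scaled speedup that keeps growing with the core count, since the serial fraction is measured against an ever-larger problem:

```python
def gustafson_speedup(alpha: float, p: int) -> float:
    """Gustafson's law: scaled speedup when the problem size grows with the core count."""
    return alpha + p * (1.0 - alpha)

for p in (2, 8, 64, 1024):
    print(p, round(gustafson_speedup(0.1, p), 1))
# 2 -> 1.9, 8 -> 7.3, 64 -> 57.7, 1024 -> 921.7
```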