Benchmarking is the process of running a known application, with known data and under known conditions, and measuring how long it takes to run. Benchmarking is important as it allows you to answer a number of questions about what you are doing.
A useful command for benchmarking is the time command from Linux, which reports how much real (elapsed) time, user time, and system time your program has used. Generally the elapsed time is the most useful, as it corresponds to the walltime that the scheduler uses.
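A minimal example of the time command, where "sleep 2" stands in for your own program:

```shell
# Time a command; the "real" line is the elapsed (wall-clock) time,
# which is what the scheduler charges against your allocation.
# "sleep 2" is a stand-in here - replace it with your own program.
time sleep 2
```

In bash, time is a shell keyword and prints the real, user and sys lines to standard error, so redirect stderr if you want to capture them in a file.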
Ideally you should automate your benchmarking so you can repeat it, and potentially run benchmarks over multiple changes in program options or code changes, or whatever the changing variable is for your use case.
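One way to automate this is a small driver script that sweeps a program option and logs the elapsed time of each run. A sketch, in which "myprog" and its "--grid" option are hypothetical stand-ins for your own program and varying option:

```shell
#!/bin/bash
# Repeatable benchmark driver: sweep one option, log elapsed time per run.
myprog() { sleep 0; }          # hypothetical stand-in - replace with the real program

TIMEFORMAT='%R'                # bash time builtin: print elapsed seconds only
for grid in 64 128 256; do
    elapsed=$( { time myprog --grid "$grid" > /dev/null; } 2>&1 )
    echo "grid=$grid elapsed=${elapsed}s" >> bench.log
done
```

Keeping the script and its log under version control makes it easy to rerun the identical benchmark after every change.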
If your program can run over more than one core then it may be useful to try it over a number of different core counts and find out how long it takes to run. Typically this is done in powers of two - e.g. 1 core, 2 cores, 4 cores, 8 cores, 16 cores, and so on. On a log-log plot of run time against core count, ideal scaling then appears as a straight line without needing too many data points. Often the plot will not be straight, and will undershoot the ideal line. This shows that the program doesn't scale completely. If the line starts levelling out then using more cores will simply use up your time allocation faster (each job uses up walltime times core count units) without any benefit.
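A scaling study of this kind can be scripted in the same way. The sketch below assumes an OpenMP program whose core count is set via OMP_NUM_THREADS; "myprog" is again a hypothetical stand-in:

```shell
#!/bin/bash
# Scaling study sketch: run the same job over power-of-two core counts
# and record "cores seconds" pairs for plotting on log-log axes.
myprog() { sleep 0; }          # hypothetical stand-in - replace with the real program

for n in 1 2 4 8 16; do
    start=$(date +%s%N)                          # nanoseconds (GNU date)
    OMP_NUM_THREADS=$n myprog
    end=$(date +%s%N)
    printf '%d %d\n' "$n" $(( (end - start) / 1000000 )) >> scaling.dat   # cores, milliseconds
done
```

The resulting scaling.dat can be plotted with e.g. gnuplot to see where the straight line ends.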
You may also calculate a measure of efficiency. If one core takes t hours to run then you would expect N cores to take t/N hours. If instead it took 2t/N hours then the efficiency would be 50 per cent. In general:
E = 100t/(TN), where t is the time for one core and T is the time for N cores.
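The formula above can be wrapped in a small helper, with awk doing the floating-point arithmetic:

```shell
# Parallel efficiency E = 100*t/(T*N): t is the one-core time,
# T the N-core time, N the core count. Times can be in any unit
# as long as both use the same one.
efficiency() {
    awk -v t="$1" -v T="$2" -v N="$3" 'BEGIN { printf "%.1f\n", 100 * t / (T * N) }'
}

efficiency 100 30 4    # 100 h on 1 core vs 30 h on 4 cores -> prints 83.3
```

An efficiency well below 100 per cent at your chosen core count is the signal to run on fewer cores.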
Benchmarking over different program options but otherwise the same model and data allows you to work out the best combination.
There are many compilers, options, and libraries (e.g. MKL and FFTW). Benchmarking allows you to try out these options. For more information see the pages on Intel Compilers, GNU Compilers and Numerical Analysis.
If you write your own code, benchmarking allows you to track how code changes affect performance. You should consider this in conjunction with source code control (see documentation) so you can match your results against an exact revision of your code. It is also useful to use a test harness so that you know your code still produces the right results.
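A test harness can be as simple as comparing the program's output against a stored reference. A sketch, where "myprog" and "reference.out" are hypothetical stand-ins for your program and a reference result kept under version control:

```shell
#!/bin/bash
# Tiny regression check: a timing result is only trustworthy if the
# program still produces the right answer.
myprog() { echo "result: 42"; }     # hypothetical stand-in - replace with the real program

echo "result: 42" > reference.out   # normally a committed reference file
myprog > current.out

if diff -q reference.out current.out > /dev/null; then
    echo "results unchanged"
else
    echo "results differ - fix correctness before comparing timings"
fi
```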
A list of benchmarks is available.
This is comprehensive, if complex, and a simpler alternative, using XSLT to turn simpler XML into JUBE's input format, may be offered.