If your jobs each require a whole node to run (i.e. they are parallel over 12 or 20 cores) and you have many of them, you can use Array Jobs to submit a large, high-throughput workload easily.
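As a minimal sketch of such a submission script, assuming a Slurm-style scheduler (on an SGE-based system the directives differ), each array task claims one whole node and selects its own input via the array index. The job name, array size, time limit, program name, and input file pattern here are illustrative assumptions, not Hydra-specific values:

```shell
#!/bin/bash
# Hypothetical array-job script: partition, program, and file names
# are placeholders, not Hydra-specific values.
#SBATCH --job-name=whole-node-array
#SBATCH --nodes=1              # each array task gets one whole node
#SBATCH --exclusive            # enforce node exclusivity
#SBATCH --array=1-50           # 50 independent whole-node tasks
#SBATCH --time=02:00:00

# Each task picks its own input file via the array index.
srun ./my_parallel_program input_${SLURM_ARRAY_TASK_ID}.dat
```

Submitting this once queues all 50 tasks; the scheduler runs them as nodes become free, which is far easier than submitting 50 separate job scripts.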
Hydra works on the basis of Node Exclusivity: it is inefficient to submit single-core jobs, or any job using less than a whole node's worth of cores, because your allocation is charged (number of cores on a node) x (number of hours) even if you used only a single core. If you need less than a whole node, see Small Jobs.
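To illustrate the charging rule with a worked example (the 20-core node size is one of the node types mentioned above): a single-core job that runs for 3 hours on a 20-core node is charged for the whole node.

```shell
# Charge under node exclusivity: the whole node is billed,
# regardless of how many cores the job actually used.
cores_per_node=20
hours=3
echo $(( cores_per_node * hours ))   # 60 core-hours charged, not 3
```

Packing 20 independent single-core tasks onto that node instead would do 20 times the work for the same 60 core-hour charge.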
Where jobs are of indeterminate length but there are many of them, using the mpi_task_farmer may be preferred.