CUC 2004 / New Frontiers / New Technologies for New Needs
Benchmarking the performance of JMS on computer clusters / D2
Authors: Emir Imamagić, Branimir Radić, University Computing Centre – Srce, Croatia

Abstract

Benchmarks are instruments that evaluate the overall performance of computers. They quantify system performance by measuring standard performance parameters and the time it takes to complete a specific job. Benchmarking clusters of computers demands a somewhat different approach, since clusters are composed of a number of interconnected computer nodes. The emphasis of the benchmarking can be placed on the performance of individual nodes, on the system as a whole, or on a combination of the two.

Numerous factors make up the overall performance of a computer cluster: network throughput, node processing speed, the size of the cluster itself, the quality of the JMS (job management system), the speed of the file system and so on. For most of these factors a more or less appropriate benchmark, or even a set of benchmarks, exists. However, there is no appropriate way to test JMS performance with a single benchmark. The performance of a JMS depends greatly on the jobs that are submitted for completion and the order in which they are submitted.

A JMS can be tested by submitting a job load and measuring the time it takes for the JMS to distribute the jobs onto the cluster, or the time it takes for the jobs to be rescheduled. Alternatively, the overall time it takes for all the jobs to complete can be measured and compared.
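
As an illustration, measuring the overall completion time of a submitted job load could look like the following minimal Python sketch. It assumes a PBS-style command line interface (qsub for submission, qstat for queue queries); the job script names are hypothetical placeholders, not the actual workload used here.

    import subprocess
    import time

    # Hypothetical job scripts making up the submitted job load.
    JOB_SCRIPTS = ["job_cpu.pbs", "job_mem.pbs", "job_net.pbs"]

    def submit_all(scripts):
        """Submit every job script with qsub and collect the job identifiers."""
        job_ids = []
        for script in scripts:
            out = subprocess.run(["qsub", script], capture_output=True,
                                 text=True, check=True)
            job_ids.append(out.stdout.strip())
        return job_ids

    def all_finished(job_ids):
        """Treat a job as finished once qstat no longer reports it."""
        return all(
            subprocess.run(["qstat", job_id], capture_output=True).returncode != 0
            for job_id in job_ids
        )

    start = time.time()
    ids = submit_all(JOB_SCRIPTS)
    while not all_finished(ids):
        time.sleep(10)  # poll the queue every 10 seconds
    print("Overall completion time: %.1f s" % (time.time() - start))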

One can also evaluate the performance of the JMS from the viewpoint of the jobs themselves: the time necessary for a job to start execution, the type of nodes it has been scheduled to, and the time it takes for the job to complete.
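
A minimal sketch of such per-job metrics, assuming the submission, start and end timestamps of each job can be obtained from the JMS accounting records (the concrete numbers below are purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class JobRecord:
        # Timestamps in seconds; how they are collected depends on the JMS.
        submit_time: float
        start_time: float
        end_time: float

        @property
        def wait_time(self):
            """Time the job spent queued before it started executing."""
            return self.start_time - self.submit_time

        @property
        def execution_time(self):
            """Time the job spent running on its assigned nodes."""
            return self.end_time - self.start_time

    job = JobRecord(submit_time=0.0, start_time=42.0, end_time=342.0)
    print(job.wait_time, job.execution_time)  # 42.0 300.0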

For the purpose of evaluating JMS performance, a workload comprised of a number of different benchmarks has been created, and the performance of the JMS during the processing of the jobs has been monitored. The submitted jobs vary in length of execution, memory utilization, the network traffic they generate, the number of nodes necessary for execution, and other characteristics.
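
The exact job mix is not reproduced here, but a workload of this kind could be generated along the following lines; the benchmark names, resource requests and PBS-style directives below are illustrative assumptions only.

    # Illustrative job mix varying in node count, memory and expected runtime.
    JOB_MIX = [
        {"name": "cpu_short", "nodes": 1, "mem": "256mb", "walltime": "00:05:00"},
        {"name": "mem_heavy", "nodes": 1, "mem": "1gb",   "walltime": "00:30:00"},
        {"name": "net_bench", "nodes": 4, "mem": "512mb", "walltime": "00:15:00"},
        {"name": "long_run",  "nodes": 2, "mem": "512mb", "walltime": "02:00:00"},
    ]

    def pbs_script(job):
        """Render a PBS-style submission script for one job specification."""
        return "\n".join([
            "#!/bin/sh",
            "#PBS -N %s" % job["name"],
            "#PBS -l nodes=%d,mem=%s,walltime=%s"
                % (job["nodes"], job["mem"], job["walltime"]),
            "./%s" % job["name"],  # hypothetical benchmark executable
        ]) + "\n"

    for job in JOB_MIX:
        with open(job["name"] + ".pbs", "w") as handle:
            handle.write(pbs_script(job))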

The JMSs evaluated are PBS, Condor and SGE. Their overall performance has been assessed by observing the following parameters: average, maximum and minimum walltime and execution time, and cluster resource utilization over time. A summary of the overall performance of the tested JMSs is given.
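
For example, the averages and extremes of walltime and execution time could be derived from the collected per-job records as in this small sketch (the numbers are again hypothetical):

    from statistics import mean

    def summarize(label, values):
        """Report average, maximum and minimum of one metric in seconds."""
        print("%s: avg=%.1f max=%.1f min=%.1f"
              % (label, mean(values), max(values), min(values)))

    # Hypothetical per-job measurements; in the evaluation these come from
    # the accounting records of the JMS under test.
    execution_times = [300.0, 1800.0, 620.0, 7100.0]
    walltimes = [342.0, 1950.0, 900.0, 7400.0]

    summarize("execution time", execution_times)
    summarize("walltime", walltimes)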

Biography

Emir Imamagić graduated from the Department of Electronics, Microelectronics, Computer and Intelligent Systems, Faculty of Electrical Engineering and Computing, University of Zagreb in May 2004. His research interests are high performance computing, distributed computing, computer clusters and grid systems.

Before graduation, he worked on the AliEn Grid project at CERN, Switzerland, in the summer of 2003 and on the MidArc middleware project at Ericsson Nikola Tesla in the summer of 2002. He is currently working as a researcher on the CRO-GRID Infrastructure project at the University Computing Centre.

 
 