Opened 15 years ago

Closed 11 years ago

#69 closed task (wontfix)

Job information leaking over from one Ganglia cluster to another when clusters are in the same PBS queue

Reported by: renfro@…
Owned by: ramonb
Priority: normal
Milestone:
Component: web
Version: 0.3
Keywords:
Cc:
Estimated Number of Hours:

Description

At one time, I had many Torque queues: one for each group of homogeneous systems. Since I couldn't rely on my users to consistently check qstat, showq, or Ganglia before submitting a job to a queue with free CPUs rather than one with none, I converted my Torque settings to put all cluster systems into one queue and use Maui partitions to keep parallel jobs on a group of homogeneous systems. This has worked out great as far as queue efficiency is concerned.

Now that I'm getting Job Monarch integrated into the setup, I've noticed that active jobs in my batch queue show up in the joblists and overviews of every cluster, even when that particular cluster has no active jobs on its nodes. I'll try to attach screenshots, but if my users still have jobs running when you read this ticket, you can see it for yourself on the live server.

Ganglia knows, for example, that "ChE Compute Nodes" contains 9 systems named ch226-11 through ch226-19. Monarch displays those, but also displays ch226-29 and ch226-31 from the "PNGV Project Compute Nodes" cluster, which had active jobs. Every cluster view where Monarch was enabled showed this effect.

Attachments (2)

not-working.png (75.1 KB) - added by renfro@… 15 years ago.
Cluster where Monarch has added busy nodes from another cluster.
working.png (76.0 KB) - added by renfro@… 15 years ago.
Cluster where Monarch has added no extra nodes from other clusters (since no other clusters had jobs running at the time).


Change History (6)

Changed 15 years ago by renfro@…

Cluster where Monarch has added busy nodes from another cluster.

Changed 15 years ago by renfro@…

Cluster where Monarch has added no extra nodes from other clusters (since no other clusters had jobs running at the time).

comment:1 in reply to: ↑ description Changed 15 years ago by renfro@…

Replying to renfro@:

On further investigation, this isn't entirely the fault of the web frontend, since it appears that each host I have running jobmond.py and monitoring the batch queue is inserting job information into gmetad:

From the ChE cluster:

<HOST NAME="ch226-11.cae.tntech.edu" IP="149.149.254.181" REPORTED="1227713990" TN="11" TMAX="20" DMAX="259200" LOCATION="unspecified" GMOND_STARTED="1222
112892">
<METRIC NAME="machine_type" VAL="x86" TYPE="string" UNITS="" TN="698" TMAX="1200" DMAX="0" SLOPE="zero" SOURCE="gmond"/>
<METRIC NAME="disk_free" VAL="319.866" TYPE="double" UNITS="GB" TN="98" TMAX="180" DMAX="0" SLOPE="both" SOURCE="gmond"/>
<METRIC NAME="MONARCH-JOB-35772-0" VAL="status=R start_timestamp=1227164353 name=plainBrec poll_interval=30 ppn=1 queue=batch reported=1227713970 requeste
d_time=199:00:00 queued_timestamp=1227164352 owner=nananthar21 domain=cae.tntech.edu nodes=ch226-31" TYPE="string" UNITS="" TN="28" TMAX="60" DMAX="60" SL
OPE="both" SOURCE="gmetric"/>

From the PNGV Cluster:

<HOST NAME="ch226-21.cae.tntech.edu" IP="149.149.254.191" REPORTED="1227713990" TN="11" TMAX="20" DMAX="259200" LOCATION="unspecified" GMOND_STARTED="1215
965919">
<METRIC NAME="machine_type" VAL="x86_64" TYPE="string" UNITS="" TN="65" TMAX="1200" DMAX="0" SLOPE="zero" SOURCE="gmond"/>
<METRIC NAME="disk_free" VAL="509.126" TYPE="double" UNITS="GB" TN="155" TMAX="180" DMAX="0" SLOPE="both" SOURCE="gmond"/>
<METRIC NAME="MONARCH-JOB-35772-0" VAL="status=R start_timestamp=1227164353 name=plainBrec poll_interval=30 ppn=1 queue=batch reported=1227713968 requeste
d_time=199:00:00 queued_timestamp=1227164352 owner=nananthar21 domain=cae.tntech.edu nodes=ch226-31" TYPE="string" UNITS="" TN="32" TMAX="60" DMAX="60" SL
OPE="both" SOURCE="gmetric"/>

I'm not sure of the best solution for this. The web frontend could possibly filter out nodes that aren't in the current cluster -- the CPU count on the overview pages apparently does something like that already, since it counts the cluster's CPUs accurately, but the node count doesn't. I don't know if there's any way for jobmond to filter out information from nodes outside its cluster ahead of time, since I don't think jobmond is aware of cluster boundaries at all.
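
As an illustration of that frontend-side idea only (hypothetical names, not actual Job Monarch code), a minimal sketch that keeps just the job nodes Ganglia lists for the viewed cluster:

    def filter_job_nodes(job_nodes, cluster_hosts):
        # Compare on the short hostname so ch226-11 matches ch226-11.cae.tntech.edu.
        members = set(h.split('.')[0] for h in cluster_hosts)
        return [n for n in job_nodes if n.split('.')[0] in members]

    che_hosts = ['ch226-%d.cae.tntech.edu' % i for i in range(11, 20)]
    print(filter_job_nodes(['ch226-11', 'ch226-31'], che_hosts))   # ['ch226-11']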

I'd rather not return to my users having to monitor multiple queues and choose one accordingly. My last attempt at a Torque routing queue with multiple execution queues just ended up filling up the first execution queue specified and never routed over to other queues at all.

comment:2 Changed 15 years ago by ramonb

  • Owner changed from somebody to ramonb
  • Status changed from new to assigned

comment:3 Changed 15 years ago by ramonb

  • Cc renfro@… added

I think I understand your issue. You have 1 Torque cluster divided over 2 Ganglia clusters?

Job Monarch should already filter out nodes that are not in the viewed cluster.

Job Monarch has the following limitations:

  • it can only handle 1 Torque cluster per Ganglia cluster
  • only 1 jobmond should be running per PBS/Ganglia cluster, on at most 1 machine

If you have 2 jobmonds running (1 for each Ganglia cluster), both polling the same Torque server, the same jobs will be broadcast to both clusters.

When Job Monarch's web frontend receives Torque job information for a node that should be in the cluster but that Ganglia doesn't know about, it assumes the node is either (temporarily) down or that its information was missed.

By that rationale, it adds the node to Job Monarch's node overview (the colored boxes) either way. I think that is what is happening on your systems.
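
Roughly, the effect is like this sketch (illustrative pseudologic with made-up names, not the actual frontend code):

    def node_state(node, ganglia_hosts, job_nodes):
        # A node reported by Torque but unknown to Ganglia is still drawn,
        # on the assumption that it is down or its metrics were missed.
        if node in ganglia_hosts:
            return 'up'
        if node in job_nodes:
            return 'down or missed'
        return 'not shown'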

The root of the problem is that your 2 Ganglia clusters are really 1 Torque cluster. Job Monarch was not designed with that in mind.

Perhaps we can take this into account in the future. I will think about how this may be done.
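
One conceivable direction, sketched purely for illustration (made-up names, not planned jobmond code): give each jobmond the host list of its own Ganglia cluster and have it report only jobs that run there.

    def jobs_for_this_cluster(jobs, my_cluster_hosts):
        # 'jobs' maps job id -> attribute dict; 'nodes' is a space-separated
        # node list like the one in the MONARCH-JOB metrics above.
        members = set(h.split('.')[0] for h in my_cluster_hosts)
        keep = {}
        for job_id, attrs in jobs.items():
            nodes = attrs.get('nodes', '').split()
            if any(n.split('.')[0] in members for n in nodes):
                keep[job_id] = attrs
        return keep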

comment:4 Changed 11 years ago by ramonb

  • Cc renfro@… removed
  • Resolution set to wontfix
  • Status changed from assigned to closed
  • Type changed from defect to task

This has been added to the milestone 2.0 roadmap.
