Custom Query (101 matches)
Results (40 - 42 of 101)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#32 | fixed | jobarchived uses a large amount of memory | bastiaans | gastineau@… |
Description |
The jobarchived process seems to consume a large amount of memory on a 64-bit Xeon running Linux (CentOS 4.5): it uses 1 GB after running for 2 days. My /etc/jobarchived.conf:

    [DEFAULT]
    DAEMONIZE               : 1
    DEBUG_LEVEL             : 0
    USE_SYSLOG              : 1
    SYSLOG_LEVEL            : 0
    SYSLOG_FACILITY         : DAEMON
    GMETAD_CONF             : /etc/gmetad.conf
    ARCHIVE_XMLSOURCE       : localhost:8651
    ARCHIVE_DATASOURCES     : "mycluster"
    ARCHIVE_HOURS_PER_RRD   : 12
    ARCHIVE_EXCLUDE_METRICS : ".*Temp.*", ".*RPM.*", ".*Version.*", ".*Tag$", "boottime", "gexec", "os.*", "machine_type"
    ARCHIVE_PATH            : /var/lib/jobarch
    JOB_SQL_DBASE           : localhost/jobarchive
    JOB_TIMEOUT             : 168
    RRDTOOL                 : /usr/bin/rrdtool
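The ARCHIVE_EXCLUDE_METRICS values read as regular expressions matched against metric names. A minimal sketch of how such filtering could work, assuming the patterns are tested from the start of the name with `re.match` (jobarchived's actual matching code may differ, and `is_excluded` is a hypothetical helper):

```python
import re

# Exclude patterns taken from the jobarchived.conf in this ticket.
EXCLUDE_PATTERNS = [".*Temp.*", ".*RPM.*", ".*Version.*", ".*Tag$",
                    "boottime", "gexec", "os.*", "machine_type"]

def is_excluded(metric_name, patterns=EXCLUDE_PATTERNS):
    """Return True if the metric name matches any exclude pattern."""
    return any(re.match(p, metric_name) for p in patterns)

# Example metric names (illustrative only): temperature and fan metrics
# are skipped, load and memory metrics are archived.
metrics = ["Ambient Temp", "Fan1 RPM", "load_one", "boottime", "mem_free"]
archived = [m for m in metrics if not is_excluded(m)]
# archived == ["load_one", "mem_free"]
```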
|||
#62 | worksforme | job archive question | somebody | jsarlo@… |
Description |
A question about setting up the job archive. If I have jobmond.py running on multiple clusters, but the data is only viewable through the web on a central repository web site, do I have to set up the archive database on each of the clusters, or just on the central repository server? Also, where would I run the two .py scripts: on each cluster, or just the central one? Thanks. Jeff |
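One plausible central-repository layout, sketched only with the configuration keys shown in ticket #32: jobmond runs on each cluster, while a single jobarchived on the central server reads the central gmetad's aggregated XML and writes to one archive database. This assumes ARCHIVE_DATASOURCES accepts a comma-separated list, as ARCHIVE_EXCLUDE_METRICS does; hostnames are placeholders, not a confirmed answer.

```ini
# Hypothetical /etc/jobarchived.conf on the central repository server.
[DEFAULT]
# The central gmetad that aggregates XML from all monitored clusters:
ARCHIVE_XMLSOURCE   : localhost:8651
# Archive several clusters from this one daemon (assumed list syntax):
ARCHIVE_DATASOURCES : "clusterA", "clusterB"
# A single archive database on the central server:
JOB_SQL_DBASE       : localhost/jobarchive
```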
|||
#71 | fixed | job array support | somebody | ramonb |
Description |
Thanks to Stijn De Weirdt from University Gent for discovering this: add support for Torque job arrays to Job Monarch. A job array is currently displayed as only one job and causes massive memory usage in jobmond. |
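Supporting job arrays means recognizing the array sub-job IDs and grouping them under their parent job. A minimal sketch, assuming sub-job IDs of the form `123[0].server` (the actual Torque ID format and jobmond's parsing may differ; `group_array_jobs` is a hypothetical helper):

```python
import re

# Assumed Torque array sub-job ID shape: "<parent>[<index>]..." .
ARRAY_ID_RE = re.compile(r'^(?P<parent>\d+)\[(?P<index>\d+)\]')

def group_array_jobs(job_ids):
    """Map each parent job ID to the sorted list of its array indices.

    Plain (non-array) jobs map to an empty list, so the caller can
    display one entry per logical job instead of one per sub-job.
    """
    groups = {}
    for jid in job_ids:
        m = ARRAY_ID_RE.match(jid)
        if m:
            groups.setdefault(m.group('parent'), []).append(int(m.group('index')))
        else:
            groups.setdefault(jid.split('.')[0], [])
    return {parent: sorted(idx) for parent, idx in groups.items()}

# Example: one plain job plus a three-member array job.
jobs = ["100.master", "101[0].master", "101[1].master", "101[2].master"]
# group_array_jobs(jobs) == {"100": [], "101": [0, 1, 2]}
```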