Opened 17 years ago

Closed 16 years ago

Last modified 16 years ago

#32 closed defect (fixed)

jobarchived uses a large amount of memory

Reported by: gastineau@…
Owned by: bastiaans
Priority: normal
Milestone: 0.3
Component: jobarchived
Version: trunk
Keywords:
Cc:
Estimated Number of Hours:

Description

The jobarchived process seems to consume a large amount of memory on a 64-bit Xeon running Linux (CentOS 4.5).

It uses 1 GB of memory after running for 2 days.

My /etc/jobarchived.conf:

[DEFAULT]
DAEMONIZE : 1
DEBUG_LEVEL : 0
USE_SYSLOG : 1
SYSLOG_LEVEL : 0
SYSLOG_FACILITY : DAEMON
GMETAD_CONF : /etc/gmetad.conf
ARCHIVE_XMLSOURCE : localhost:8651
ARCHIVE_DATASOURCES : "mycluster"
ARCHIVE_HOURS_PER_RRD : 12
ARCHIVE_EXCLUDE_METRICS : ".*Temp.*", ".*RPM.*", ".*Version.*", ".*Tag$", "boottime", "gexec", "os.*", "machine_type"
ARCHIVE_PATH : /var/lib/jobarch
JOB_SQL_DBASE : localhost/jobarchive
JOB_TIMEOUT : 168
RRDTOOL : /usr/bin/rrdtool
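
For reference, the ARCHIVE_EXCLUDE_METRICS values above look like ordinary Python regular expressions. A minimal sketch of how such patterns could filter metric names (an illustration only; jobarchived's actual matching logic may differ):

{{{
#!python
import re

# Patterns copied from the ARCHIVE_EXCLUDE_METRICS setting above.
EXCLUDE_PATTERNS = [
    ".*Temp.*", ".*RPM.*", ".*Version.*", ".*Tag$",
    "boottime", "gexec", "os.*", "machine_type",
]

def is_excluded(metric_name):
    # Assumption: each metric name is tested against every exclude pattern.
    return any(re.match(p, metric_name) for p in EXCLUDE_PATTERNS)

print(is_excluded("cpu_user"))   # False -> metric is archived
print(is_excluded("cpu1_Temp"))  # True  -> metric is skipped
}}}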

Change History (4)

comment:1 Changed 17 years ago by bastiaans

  • Cc gastineau@… added
  • Owner changed from somebody to bastiaans
  • Status changed from new to assigned

This is related to the RRD pipes being too slow to write the data to disk, so unwritten updates accumulate in the daemon's memory.

This should be fixed as of changeset r365.

Would you care to test the latest development version in trunk?

However, you do need to have 'py-rrdtool' installed: http://sourceforge.net/projects/py-rrdtool/
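
For context, the difference between piping updates to the rrdtool binary and calling librrd through the py-rrdtool bindings can be sketched as follows (a simplified illustration, not jobarchived's actual code; the RRD path and value are made up):

{{{
#!python
import subprocess
import rrdtool  # provided by the py-rrdtool package

RRD_FILE = '/var/lib/jobarch/mycluster/node01/cpu_user.rrd'  # hypothetical path

# Old approach: write updates through a pipe to the rrdtool binary. If the
# pipe cannot keep up with the incoming metrics, unwritten updates queue up
# in the daemon's memory.
proc = subprocess.Popen(['/usr/bin/rrdtool', 'update', RRD_FILE, 'N:42'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.communicate()

# New approach (r365 and later, per the comment above): call librrd directly
# via the Python bindings, so each update is handed to librrd without an
# external process or pipe buffer in between.
rrdtool.update(RRD_FILE, 'N:42')
}}}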

comment:2 Changed 17 years ago by bastiaans

  • Cc gastineau@… removed

comment:3 Changed 16 years ago by bastiaans

  • Milestone set to 0.3
  • Resolution set to fixed
  • Status changed from assigned to closed
  • Version changed from 0.2 to trunk

No response from the bug reporter.

I'm going to assume that using py-rrdtool fixes the memory consumption issue.
