Gentoo Forums :: Documentation, Tips & Tricks
SNMP & MRTG made in easy

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Sat Mar 25, 2006 6:50 am

Very nice.

The uptime script works nicely as well, although I'd suggest a small inline bash command in the config file instead:

Code:
Target[xacatecas.uptime]:    `sed -e 's,\.[0-9]*,,g' < /proc/uptime | (read C1 C2; echo $(( $C1 / 60 )); echo $(( $C2 / 60 )))`


It was adapted from the PHP script and does exactly the same thing.
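For reference, feeding it the /proc/uptime sample quoted further down (2928616.04 2881494.04), the one-liner strips the fractional parts and prints the uptime and idle time in whole minutes, one value per line:

Code:
48810
48024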

Also, Gentoo allows one to run mrtg in daemon mode if, instead of a bunch of separate config files, you create one big config file. I haven't done this, but the option is available. This also allows one to make use of the global "Forks" option, which gives a small speed improvement when you are doing many remote queries (my mrtg updates already take well over a minute with almost no CPU usage).
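A minimal sketch of the relevant global keywords for that (the values are only examples; see the mrtg reference for details):

Code:
RunAsDaemon: Yes
Interval: 5
Forks: 4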

Instead of the bunch of shell scripts that do this, I've got a single command stored in cron:

Code:
*/5 * * * * for i in /etc/mrtg/*.cfg; do /usr/bin/mrtg $i; done


Also, I'm running mrtg as a non-root user. Just do:

Code:
# useradd mrtg -g daemon -G cron -d /dev/null
# passwd -l mrtg


The passwd -l part is optional; I suspect it's the default anyway. The home directory of /dev/null is just there for some added security. The account does need a valid shell though. Then, to set up the crontab, do:

Code:
# crontab -e -u mrtg


or pass --user to mrtg.
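If you go the --user route instead, the cron entry above can stay in root's crontab; a sketch only (double-check the exact option spelling against mrtg --help on your version):

Code:
*/5 * * * * for i in /etc/mrtg/*.cfg; do /usr/bin/mrtg --user=mrtg --group=daemon $i; done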

Then I also need some help. I'm trying to get hrSystemUptime working so that I can monitor the uptime of remote machines without needing ssh hacks and tunnels.

Code:
xacatecas ~ # snmpwalk -v 1 -c public localhost hrSystemUptime
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (292802531) 33 days, 21:20:25.31
xacatecas ~ #


So that gives the correct value; it is in Timeticks, and the value in brackets does match up with the one from /proc/uptime.

Code:
xacatecas mrtg # cat /proc/uptime && snmpwalk -v 1 -c public localhost hrSystemUptime
2928616.04 2881494.04
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (292861618) 33 days, 21:30:16.18
xacatecas mrtg #


However, whenever I try to use this value I get weird errors:

Code:
xacatecas mrtg # mrtg pug-uptime.cfg.tmp
WARNING: Expected a number but got '3 days, 13:15:46'
WARNING: Expected a number but got '3 days, 13:15:46'
ERROR: Target[pug.uptime][_IN_] ' $target->[0]{$mode} ' did not eval into defined data
ERROR: Target[pug.uptime][_OUT_] ' $target->[0]{$mode} ' did not eval into defined data
xacatecas mrtg #


I've found a thread that makes a comment about this behaviour here. Now the question is where to place the string:

Code:
$BER::pretty_print_timeticks=0


since it definitely doesn't work in the mrtg config file. I'm suspecting /usr/lib/mrtg2/BER.pm, but there it would simply be the value without the leading $BER::. Indeed, changing this on line 67 to just 0 does fix the errors produced by mrtg. However, what other side effects could there be?
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

mephist0
Tux's lil' helper

Joined: 19 Sep 2005
Posts: 94
Location: Germany, near Frankfurt/Main

PostPosted: Sun Apr 16, 2006 7:24 pm

Has anyone got CPU Active Load working on amd64?

All the other charts are working well, except this %$U($ CPU graph.

Here is my config:
http://rafb.net/paste/results/9r0oDb14.html

It always shows 21% usage, but I am compiling right now and it should be displaying 100% usage?!

Any idea what's wrong?

The same config file worked fine on my old system, a P4 3GHz with HT.

Thanks in advance!!
_________________
There is only one God, and his name is Death. And there is only one thing we say to Death: 'Not today!'

Fotoblog

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Sun Apr 16, 2006 7:35 pm

There is a further state the CPU can be in: waiting for I/O. AFAIK this isn't included in any of the three counters you have. Try compiling with -j2, which will push your actual usage up a bit. Also try running a command such as the following for an extended period of time:

Code:
while true; do echo -n ""; done


This will munch 100% CPU without doing any I/O whatsoever.

Also, does that 21% come down when you are not doing anything?

What does top say?

Remember that the values mrtg displays are averages over certain time periods (5 minutes for the daily display).
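If you do want the I/O-wait state reflected in the graph, the UCD MIB also exposes ssCpuRawWait on reasonably recent net-snmp versions; a sketch, following the same pattern as the cpu targets later in this thread, would be to add it as a fourth term:

Code:
Target[cpu]: ssCpuRawUser.0&ssCpuRawUser.0:public@localhost + ssCpuRawSystem.0&ssCpuRawSystem.0:public@localhost + ssCpuRawNice.0&ssCpuRawNice.0:public@localhost + ssCpuRawWait.0&ssCpuRawWait.0:public@localhost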
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

mephist0
Tux's lil' helper

Joined: 19 Sep 2005
Posts: 94
Location: Germany, near Frankfurt/Main

PostPosted: Sun Apr 16, 2006 9:10 pm

I aborted the emerge, waited 30 minutes, resumed it, and I don't know why, but now it works fine :)

thanks anyways ;)
_________________
There is only one God, and his name is Death. And there is only one thing we say to Death: 'Not today!'

Fotoblog

ConiKost
Veteran

Joined: 11 Jan 2005
Posts: 1360

PostPosted: Thu Aug 31, 2006 11:16 pm

Hello!
I've got problems with cpu.sh:

Code:

BlackBox mrtg # /usr/bin/mrtg /etc/mrtg/cpu.cfg
Friday, 1 September 2006 at 1:13: ERROR: Target[localhost.cpu][_IN_] ' $target->[0]{$mode}  + ssCpuRawSystem.0&ssCpuRawSystem.' (warn): Ambiguous use of & resolved as operator & at (eval 13) line 1.
Friday, 1 September 2006 at 1:13: ERROR: Target[localhost.cpu][_OUT_] ' $target->[0]{$mode}  + ssCpuRawSystem.0&ssCpuRawSystem.' (warn): Ambiguous use of & resolved as operator & at (eval 14) line 1.


What am I doing wrong?

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Sep 01, 2006 5:32 am

It would help if you also supplied your config file.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

ConiKost
Veteran

Joined: 11 Jan 2005
Posts: 1360

PostPosted: Fri Sep 01, 2006 10:54 am

jkroon wrote:
It would help if you also supplied your config file.


See the first post! I copied it!

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Sep 01, 2006 1:29 pm

Hmm, I do recall some similar issues. Anyhow, here is my cpu config (well, the original one I did from this example) so you can just do an "eye diff" and see what's wrong:

Code:
WorkDir: /var/www/hackerpages.lan/htdocs/mrtg
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt
Target[pug.cpu]:ssCpuRawUser.0&ssCpuRawUser.0:public@pug.lan + ssCpuRawSystem.0&ssCpuRawSystem.0:public@pug.lan + ssCpuRawNice.0&ssCpuRawNice.0:public@pug.lan
RouterUptime[pug.cpu]: public@pug.lan
MaxBytes[pug.cpu]: 100
Title[pug.cpu]: pug: CPU Load
PageTop[pug.cpu]: <H1>pug: Active CPU Load %</H1>
Unscaled[pug.cpu]: ymwd
ShortLegend[pug.cpu]: %
YLegend[pug.cpu]: CPU Utilization
Legend1[pug.cpu]: Active CPU in % (Load)
Legend2[pug.cpu]:
Legend3[pug.cpu]:
Legend4[pug.cpu]:
LegendI[pug.cpu]:  Active
LegendO[pug.cpu]:
Options[pug.cpu]: growright,nopercent,noinfo,unknaszero

_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

zdreantza
n00b

Joined: 16 Feb 2004
Posts: 2
Location: Bucharest, Romania

PostPosted: Wed Oct 25, 2006 4:36 pm    Post subject: CPU + Swap

I have the following errors when trying to use the cpu.cfg

Quote:
Singularity mrtg # mrtg cpu.cfg
Unknown SNMP var ssCpuRawUser.0
at /usr/bin/mrtg line 2035
Unknown SNMP var ssCpuRawUser.0
at /usr/bin/mrtg line 2035
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got '1 day, 5:40:14'
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got 'Singularity'
Unknown SNMP var ssCpuRawSystem.0
at /usr/bin/mrtg line 2035
Unknown SNMP var ssCpuRawSystem.0
at /usr/bin/mrtg line 2035
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got '1 day, 5:40:14'
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got 'Singularity'
Unknown SNMP var ssCpuRawNice.0
at /usr/bin/mrtg line 2035
Unknown SNMP var ssCpuRawNice.0
at /usr/bin/mrtg line 2035
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got '1 day, 5:40:14'
Wednesday, 25 October 2006 at 19:24: WARNING: Expected a number but got 'Singularity'
Wednesday, 25 October 2006 at 19:24: ERROR: Target[localhost.cpu][_IN_] ' $target->[0]{$mode} + $target->[1]{$mode} + $target->[2]{$mode} ' (warn): Use of uninitialized value in addition (+) at (eval 17) line 1.
Wednesday, 25 October 2006 at 19:24: ERROR: Target[localhost.cpu][_OUT_] ' $target->[0]{$mode} + $target->[1]{$mode} + $target->[2]{$mode} ' (warn): Use of uninitialized value in addition (+) at (eval 18) line 1.


My cpu.cfg file looks like this:
Quote:
WorkDir: /var/www/localhost/htdocs/mrtg
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt
Target[localhost.cpu]:ssCpuRawUser.0&ssCpuRawUser.0:public@127.0.0.1+ ssCpuRawSystem.0&ssCpuRawSystem.0:public@127.0.0.1+ ssCpuRawNice.0&ssCpuRawNice.0:public@127.0.0.1
RouterUptime[localhost.cpu]: public@127.0.0.1
MaxBytes[localhost.cpu]: 100
Title[localhost.cpu]: CPU Load
PageTop[localhost.cpu]: <H1>Active CPU Load %</H1>
Unscaled[localhost.cpu]: ymwd
ShortLegend[localhost.cpu]: %
YLegend[localhost.cpu]: CPU Utilization
Legend1[localhost.cpu]: Active CPU in % (Load)
Legend2[localhost.cpu]:
Legend3[localhost.cpu]:
Legend4[localhost.cpu]:
LegendI[localhost.cpu]: Active
LegendO[localhost.cpu]:
Options[localhost.cpu]: growright,nopercent


swap.cfg gives no errors, yet my graph isn't populated.

Quote:
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt
Target[localhost.swap]: .1.3.6.1.4.1.2021.4.11.0&.1.3.6.1.4.1.2021.4.11.0:public@localhost
PageTop[localhost.swap]: <H1>Swap Memory</H1>
WorkDir: /var/www/localhost/htdocs/mrtg
Options[localhost.swap]: nopercent,growright,gauge,noinfo
Title[localhost.swap]: Free Memory
MaxBytes[localhost.swap]: 1000000
kMG[localhost.swap]: k,M,G,T,P,X
YLegend[localhost.swap]: bytes
ShortLegend[localhost.swap]: bytes
LegendI[localhost.swap]: Free Memory:
LegendO[localhost.swap]:
Legend1[localhost.swap]: Swap memory avail, in bytes


Anyone have any ideas?

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Wed Oct 25, 2006 9:58 pm

Did you generate the indexes? Does the user you're running this as have rw access to the WorkDir? Did you actually leave it for a while (running every 5 minutes) to give it time to collect some data? Those errors do look like the ones I got when I tried to graph uptime though.

The errors before that are rather worrisome, and are probably an indication that the first two lines returned contain what should have been lines 3 and 4, with the first two values being omitted. I'm thinking you're not loading the appropriate resource description; my mrtg.conf file contains the following:

Code:
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt


Please check whether adding that helps.

Then just another tip for something I've seen come up a couple of times here: it's possible to combine all the *.conf files into a single config file, something like this:

Code:
WorkDir: /var/lib/mrtg

LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt
Forks: 4

Extension[_]: php
PageTop[_]: \n<!-- PAGE TOP -->\n
PageFoot[_]: \n<!-- PAGE FOOT -->\n

Options[^]: growright,unknaszero,nopercent
#Suppress[_]: ym
RouterUptime[_]: public@localhost

Target[cpu]: ssCpuRawUser.0&ssCpuRawUser.0:public@localhost + ssCpuRawSystem.0&ssCpuRawSystem.0:public@localhost + ssCpuRawNice.0&ssCpuRawNice.0:public@localhost
MaxBytes[cpu]: 100
Title[cpu]: CPU Utilization
ShortLegend[cpu]: %
YLegend[cpu]: CPU Utilization
Legend1[cpu]: CPU Utilization
LegendI[cpu]: CPU:
LegendO[cpu]:
Options[cpu]: nopercent
Unscaled[cpu]: dwmy

Include: /var/lib/mrtg/netmrtg.conf
Include: /var/lib/mrtg/stormrtg.conf
Include: /var/lib/mrtg/diskiomrtg.conf

Options[mysql]: perminute, integer

Target[mysql]: `${mysql_to_load} -u maildb`
MaxBytes[mysql]: 2000000
Title[mysql]: MySQL Utilization
Shortlegend[mysql]: q/m
YLegend[mysql]: Queries per minute
LegendI[mysql]: Queries
LegendO[mysql]: Slow queries
Legend1[mysql]: Queries per minute
Legend2[mysql]: Slow queries per minute


On every run I recreate these Include: config files based on various system states. For most runs they'll stay the same, but if things like the physical disks change, partitions get altered, the network config changes, etc., I want those files to automatically reflect the changes. Thus I generate them using a small script.

Obviously I've been playing around a bit recently. I've now got some nice-looking graphs, and most of it is auto-detected: CPU usage, MySQL utilization, all storage devices' usage (mounted partitions and RAM), network devices' IO rates, as well as all block devices' IO rates. For those that care, you're welcome to take a look at:

http://mail.kroon.co.za/mrtg.php

Most of those were created using info from this page; the disk IO ones were created using a bit of vmstat -d and some awk magic (a single invocation of vmstat every 5 minutes ... not bad imho). This info is quite possibly also available via snmp, which would be a better option imho as it gets rid of the "local system" requirement and would thus allow one to keep these stats for remote servers as well.
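As a rough illustration of the vmstat -d plus awk idea (field positions can differ between procps versions, so treat this as a sketch rather than the exact script I use), printing sectors read and written for sda in the two-line format mrtg expects from a backtick target:

Code:
vmstat -d | awk '$1 == "sda" { print $4; print $8 }'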

And for those that are wondering: yes, that is a Gentoo-based server. It got rebooted once shortly after it was installed about a month back, and has given me about a hundredth of the grief the previous Linux server (distro shall remain unnamed) gave me.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

PaveQ
Apprentice

Joined: 11 Feb 2005
Posts: 225
Location: Finland

PostPosted: Sat Dec 02, 2006 4:50 am

Hmm, how can I get an hourly graph? :?:
_________________
http://blitzkrieg.homelinux.org/

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Mon Dec 04, 2006 5:55 am

Use rrdtool.

mrtg by default only takes samples every five minutes, which is 12 samples an hour; not quite sufficient for an hourly graph, I would reckon. If, however, you use rrdtool, then you can take samples as often as you like (every minute?) and you'd have 60 data points per hour, which should be usable for an hourly graph.
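A sketch of the mrtg side of that (keywords as in the MRTG docs; the paths are only examples, and you still need the RRDs Perl module plus a separate front-end such as 14all.cgi or routers2 to actually render the graphs):

Code:
LogFormat: rrdtool
PathAdd: /usr/bin
LibAdd: /usr/lib/perl5/vendor_perl
RunAsDaemon: Yes
Interval: 1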
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

a1exus
n00b

Joined: 25 Jan 2007
Posts: 9
Location: Brooklyn, NY

PostPosted: Thu Jan 25, 2007 5:55 pm

Code:
Legend1[CPU-IU]: CPU - USER
Legend2[CPU-IU]: CPU - IDLE
Legend3[CPU-IU]: CPU - NICE
Legend4[CPU-IU]: CPU - SYSTEM
LegendI[CPU-IU]: USER
LegendO[CPU-IU]: NICE
MaxBytes[CPU-IU]: 100
Options[CPU-IU]: growright,nopercent,gauge
PageTop[CPU-IU]: <H1>CPU</H1>
RouterUptime[CPU-IU]: public@127.0.0.1
ShortLegend[CPU-IU]: %
Target[CPU-IU]:ssCpuRawUser.0&ssCpuRawIdle.0:public@127.0.0.1+ssCpuRawNice.0&ssCpuRawSystem.0:public@127.0.0.1
Title[CPU-IU]: CPU (User,Idle,Nice,System)
Unscaled[CPU-IU]: ymwd
YLegend[CPU-IU]: CPU (User,Idle,Nice,System)


A few questions...

1) I'm only seeing User and Idle (my Idle almost never changes, even though I'm pretty sure my server is fairly busy; not all the time, but I get on average 25% User, so I'd think Idle should vary rather than just sit at the top all the time...)
2) As I mentioned, I'm only seeing User and Idle, not the other two (Nice and System).
3) At the bottom of each graph I only see LegendI[CPU-IU]: USER in the legend; how can I put the stuff from the other two there?
4) And at the very bottom of the page I only see GREEN ### CPU - USER, and none of the others, period...

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Thu Jan 25, 2007 7:05 pm

mrtg only renders two graphs. You're adding User and Nice together, and Idle and System. Adding the 4 up should always end up at 100%.

For CPU I usually just do this:

Code:
Target[cpu]: ssCpuRawUser.0&ssCpuRawUser.0:public@localhost + ssCpuRawSystem.0&ssCpuRawSystem.0:public@localhost + ssCpuRawNice.0&ssCpuRawNice.0:public@localhost
MaxBytes[cpu]: 100
Title[cpu]: CPU Utilization
ShortLegend[cpu]: %
YLegend[cpu]: CPU Utilization
Legend1[cpu]: CPU Utilization
LegendI[cpu]: CPU:
LegendO[cpu]:
Options[cpu]: nopercent
Unscaled[cpu]: dwmy


Adjust the MaxBytes according to the number of CPUs you have (100 * nr_cpus/cores/hts).
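For example, a single dual-core CPU with HyperThreading shows up as 4 logical CPUs, so you'd use:

Code:
MaxBytes[cpu]: 400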
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

a1exus
n00b

Joined: 25 Jan 2007
Posts: 9
Location: Brooklyn, NY

PostPosted: Thu Jan 25, 2007 10:15 pm

Um, not true:

http://alexus.org/qmail/mrtg/

I have 4 graphs here.

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Jan 26, 2007 6:06 am

Sorry, wrong wording :). You can render as many graphs as you want, but only two series on each graph.

Btw, how did you do the smtp counter one? I'm guessing you're using qmail, but I'm pretty sure that given an example I'd be able to implement it for exim as well.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

a1exus
n00b

Joined: 25 Jan 2007
Posts: 9
Location: Brooklyn, NY

PostPosted: Fri Jan 26, 2007 3:32 pm

http://www.inter7.com/index.php?page=qmailmrtg7

a1exus
n00b

Joined: 25 Jan 2007
Posts: 9
Location: Brooklyn, NY

PostPosted: Fri Jan 26, 2007 3:33 pm

What does that mean, "two series"?

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Jan 26, 2007 8:20 pm

Two series of numbers (each line on a graph is called a series, short for a series of sequenced numbers). But it seems I'm wrong on that: I looked again at those graphs, and on one or two of the yearly graphs I can actually see 4 separate series. I will need to investigate that.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

LiquidRain
n00b

Joined: 15 Jun 2004
Posts: 29

PostPosted: Fri Mar 23, 2007 6:13 am    Post subject: Re: How to monitor a ppp interface with mrtg

kevinverma wrote:
Hello,

I would be very thankful if someone could please give me a hint for the special case of a non-static ppp interface, so that it can be monitored via mrtg. I suppose this is a non-SNMP interface as well.

Many Thanks for reading,

Have MRTG monitor the ethernet device instead. This is what worked for me.

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Mar 23, 2007 6:23 am

Why does mrtg have issues with ppp interfaces? It simply keeps on saying that it's all zero ... why? No clue, actually. I would also appreciate some advice, because ifconfig correctly shows increasing byte counters.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

LiquidRain
n00b

Joined: 15 Jun 2004
Posts: 29

PostPosted: Fri Mar 23, 2007 7:43 pm

jkroon wrote:
Why does mrtg have issues with ppp interfaces? It simply keeps on saying that it's all zero ... why? No clue, actually. I would also appreciate some advice, because ifconfig correctly shows increasing byte counters.

As I told kevinverma, you can have MRTG monitor the ethernet device if it's PPPoE. However, I am having difficulty myself, as MRTG is not giving the right bandwidth stats for the device; it seems it always reads the same, or nearly the same, values. I'm trying to solve this myself by learning the SNMP tools, though, so hopefully I can get somewhere and let people here know.

I should also throw in that none of the CPU, swap, or memory configurations worked for me as provided in the OP.

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Mar 23, 2007 8:54 pm

That wasn't my question, but yeah, monitoring the connecting interface does give approximate results. Keep in mind that there may well be other traffic flowing over that interface as well, so the figure will always be an over-estimate.
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

LiquidRain
n00b

Joined: 15 Jun 2004
Posts: 29

PostPosted: Fri Mar 23, 2007 9:02 pm

Sorry to say, but I cannot get it working at all with SNMP. I'll write a simple shell script later that just grabs the information from ifconfig and feeds it to MRTG that way. Hopefully that's accurate enough. I also can't get CPU/memory info working at all, and I don't have the patience or will to learn SNMP just for MRTG, so I'm just going to write scripts for those as well.
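For what it's worth, an external target script only has to print four lines (bytes in, bytes out, uptime, target name). A rough sketch for a hypothetical ppp0, reading /proc/net/dev rather than parsing ifconfig:

Code:
#!/bin/sh
# Sketch only: print the four lines mrtg expects from a backtick Target.
IF=ppp0
line=$(grep "$IF:" /proc/net/dev)
set -- ${line#*:}
echo "$1"                      # bytes received
echo "$9"                      # bytes transmitted
cut -d' ' -f1 /proc/uptime     # uptime (third line, may be left empty)
echo "$IF"                     # target name (fourth line)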

jkroon
Tux's lil' helper

Joined: 15 Oct 2003
Posts: 110
Location: South Africa

PostPosted: Fri Mar 23, 2007 9:44 pm

Ok, snmp configuration:

Code:
com2sec local 127.0.0.1/32      public

group MyROGroup v1              local
group MyROGroup v2c             local
group MyROGroup usm             local

view all included .1    80

access MyROGroup "" any noauth exact all  none none

syslocation ULS Offices
syscontact Jaco Kroon <jaco@kroon.co.za>


I'm not sure exactly what all of that does, but it's all read-only as far as I can tell, and only for 127.0.0.1/32 anyway, so that's restrictive enough for me.
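A quick sanity check that snmpd actually answers with that config (plain net-snmp command, nothing specific to this setup):

Code:
snmpwalk -v 2c -c public localhost system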

Then I've got the following mrtg config files on this particular machine (auto-generated using a script):

mrtg.conf wrote:
WorkDir: /var/lib/mrtg/

LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt
Forks: 4

Extension[_]: html
PageTop[_]: \n<!-- PAGE TOP -->\n
PageFoot[_]: \n<!-- PAGE FOOT -->\n

Options[^]: growright,unknaszero,nopercent
#Suppress[_]: ym
RouterUptime[_]: public@localhost

Target[cpu]: ssCpuRawUser.0&ssCpuRawUser.0:public@localhost + ssCpuRawSystem.0&ssCpuRawSystem.0:public@localhost + ssCpuRawNice.0&ssCpuRawNice.0:public@localhost
MaxBytes[cpu]: 100
Title[cpu]: CPU Utilization
ShortLegend[cpu]: %
YLegend[cpu]: CPU Utilization
Legend1[cpu]: CPU Utilization
LegendI[cpu]: CPU:
LegendO[cpu]:
Options[cpu]: nopercent
Unscaled[cpu]: dwmy

Include: /var/lib/mrtg/netmrtg.conf
Include: /var/lib/mrtg/stormrtg.conf
Include: /var/lib/mrtg/diskiomrtg.conf

Options[mysql]: perminute, integer

Target[mysql]: `'/usr/local/bin/mrtg-mysql-load' -u 'mrtg' -p 'as if i am going to hand out passwords' -h 'localhost'`
MaxBytes[mysql]: 2000000
Title[mysql]: MySQL Utilization
Shortlegend[mysql]: q/m
YLegend[mysql]: Queries/Min
LegendI[mysql]: Queries
LegendO[mysql]: Slow queries
Legend1[mysql]: Queries per minute
Legend2[mysql]: Slow queries per minute


netmrtg.conf wrote:
# Config for lo:
Title[lo]: lo
Target[lo]: 1:public@localhost:
SetEnv[lo]: MRTG_INT_DESCR="lo"
MaxBytes[lo]: 1000000000
Options[lo]: bits
# Config for ethadsl:
Title[ethadsl]: ethadsl
Target[ethadsl]: 2:public@localhost:
SetEnv[ethadsl]: MRTG_INT_DESCR="ethadsl"
MaxBytes[ethadsl]: 1000000000
Options[ethadsl]: bits
# Config for ethtt:
Title[ethtt]: ethtt
Target[ethtt]: 3:public@localhost:
SetEnv[ethtt]: MRTG_INT_DESCR="ethtt"
MaxBytes[ethtt]: 1000000000
Options[ethtt]: bits
# Config for ethlan:
Title[ethlan]: ethlan
Target[ethlan]: 4:public@localhost:
SetEnv[ethlan]: MRTG_INT_DESCR="ethlan"
MaxBytes[ethlan]: 1000000000
Options[ethlan]: bits
# Config for ppp0:
Title[ppp0]: ppp0
Target[ppp0]: 5:public@localhost:
SetEnv[ppp0]: MRTG_INT_DESCR="ppp0"
MaxBytes[ppp0]: 1000000000
Options[ppp0]: bits


stormrtg.conf wrote:
# Config for Physical memory (1)
Target[stor_physical_memory]: hrStorageUsed.1&hrStorageSize.1:public@localhost * hrStorageAllocationUnits.1&hrStorageAllocationUnits.1:public@localhost
LegendI[stor_physical_memory]: Used:
LegendO[stor_physical_memory]: Total:
Legend1[stor_physical_memory]: Used Storage
Legend2[stor_physical_memory]: Total Storage
Title[stor_physical_memory]: Physical memory
Kilo[stor_physical_memory]: 1024
MaxBytes[stor_physical_memory]: 1099511627776
ShortLegend[stor_physical_memory]: iB
YLegend[stor_physical_memory]: Bytes
Options[stor_physical_memory]: gauge
# Config for Virtual memory (3)
Target[stor_virtual_memory]: hrStorageUsed.3&hrStorageSize.3:public@localhost * hrStorageAllocationUnits.3&hrStorageAllocationUnits.3:public@localhost
LegendI[stor_virtual_memory]: Used:
LegendO[stor_virtual_memory]: Total:
Legend1[stor_virtual_memory]: Used Storage
Legend2[stor_virtual_memory]: Total Storage
Title[stor_virtual_memory]: Virtual memory
Kilo[stor_virtual_memory]: 1024
MaxBytes[stor_virtual_memory]: 1099511627776
ShortLegend[stor_virtual_memory]: iB
YLegend[stor_virtual_memory]: Bytes
Options[stor_virtual_memory]: gauge
# Config for Memory buffers (6)
Target[stor_memory_buffers]: hrStorageSize.6&hrStorageSize.1:public@localhost * hrStorageAllocationUnits.6&hrStorageAllocationUnits.1:public@localhost
LegendI[stor_memory_buffers]: Size:
LegendO[stor_memory_buffers]: RAM:
Legend1[stor_memory_buffers]: Memory buffers
Legend2[stor_memory_buffers]: Physical RAM Size
Title[stor_memory_buffers]: Memory buffers
Kilo[stor_memory_buffers]: 1024
MaxBytes[stor_memory_buffers]: 1099511627776
ShortLegend[stor_memory_buffers]: iB
YLegend[stor_memory_buffers]: Bytes
Options[stor_memory_buffers]: gauge
# Config for Cached memory (7)
Target[stor_cached_memory]: hrStorageSize.7&hrStorageSize.1:public@localhost * hrStorageAllocationUnits.7&hrStorageAllocationUnits.1:public@localhost
LegendI[stor_cached_memory]: Size:
LegendO[stor_cached_memory]: RAM:
Legend1[stor_cached_memory]: Cached memory
Legend2[stor_cached_memory]: Physical RAM Size
Title[stor_cached_memory]: Cached memory
Kilo[stor_cached_memory]: 1024
MaxBytes[stor_cached_memory]: 1099511627776
ShortLegend[stor_cached_memory]: iB
YLegend[stor_cached_memory]: Bytes
Options[stor_cached_memory]: gauge
# Config for Shared memory (8)
Target[stor_shared_memory]: hrStorageSize.8&hrStorageSize.1:public@localhost * hrStorageAllocationUnits.8&hrStorageAllocationUnits.1:public@localhost
LegendI[stor_shared_memory]: Size:
LegendO[stor_shared_memory]: RAM:
Legend1[stor_shared_memory]: Shared memory
Legend2[stor_shared_memory]: Physical RAM Size
Title[stor_shared_memory]: Shared memory
Kilo[stor_shared_memory]: 1024
MaxBytes[stor_shared_memory]: 1099511627776
ShortLegend[stor_shared_memory]: iB
YLegend[stor_shared_memory]: Bytes
Options[stor_shared_memory]: gauge
# Config for Swap space (10)
Target[stor_swap_space]: hrStorageUsed.10&hrStorageSize.10:public@localhost * hrStorageAllocationUnits.10&hrStorageAllocationUnits.10:public@localhost
LegendI[stor_swap_space]: Used:
LegendO[stor_swap_space]: Total:
Legend1[stor_swap_space]: Used Storage
Legend2[stor_swap_space]: Total Storage
Title[stor_swap_space]: Swap space
Kilo[stor_swap_space]: 1024
MaxBytes[stor_swap_space]: 1099511627776
ShortLegend[stor_swap_space]: iB
YLegend[stor_swap_space]: Bytes
Options[stor_swap_space]: gauge
# Config for Part: / (31)
Target[stor_part:_:]: hrStorageUsed.31&hrStorageSize.31:public@localhost * hrStorageAllocationUnits.31&hrStorageAllocationUnits.31:public@localhost
LegendI[stor_part:_:]: Used:
LegendO[stor_part:_:]: Total:
Legend1[stor_part:_:]: Used Storage
Legend2[stor_part:_:]: Total Storage
Title[stor_part:_:]: Part: /
Kilo[stor_part:_:]: 1024
MaxBytes[stor_part:_:]: 1099511627776
ShortLegend[stor_part:_:]: iB
YLegend[stor_part:_:]: Bytes
Options[stor_part:_:]: gauge
# Config for Part: /boot (33)
Target[stor_part:_:boot]: hrStorageUsed.33&hrStorageSize.33:public@localhost * hrStorageAllocationUnits.33&hrStorageAllocationUnits.33:public@localhost
LegendI[stor_part:_:boot]: Used:
LegendO[stor_part:_:boot]: Total:
Legend1[stor_part:_:boot]: Used Storage
Legend2[stor_part:_:boot]: Total Storage
Title[stor_part:_:boot]: Part: /boot
Kilo[stor_part:_:boot]: 1024
MaxBytes[stor_part:_:boot]: 1099511627776
ShortLegend[stor_part:_:boot]: iB
YLegend[stor_part:_:boot]: Bytes
Options[stor_part:_:boot]: gauge
# Config for Part: /tmp (34)
Target[stor_part:_:tmp]: hrStorageUsed.34&hrStorageSize.34:public@localhost * hrStorageAllocationUnits.34&hrStorageAllocationUnits.34:public@localhost
LegendI[stor_part:_:tmp]: Used:
LegendO[stor_part:_:tmp]: Total:
Legend1[stor_part:_:tmp]: Used Storage
Legend2[stor_part:_:tmp]: Total Storage
Title[stor_part:_:tmp]: Part: /tmp
Kilo[stor_part:_:tmp]: 1024
MaxBytes[stor_part:_:tmp]: 1099511627776
ShortLegend[stor_part:_:tmp]: iB
YLegend[stor_part:_:tmp]: Bytes
Options[stor_part:_:tmp]: gauge
# Config for Part: /var (35)
Target[stor_part:_:var]: hrStorageUsed.35&hrStorageSize.35:public@localhost * hrStorageAllocationUnits.35&hrStorageAllocationUnits.35:public@localhost
LegendI[stor_part:_:var]: Used:
LegendO[stor_part:_:var]: Total:
Legend1[stor_part:_:var]: Used Storage
Legend2[stor_part:_:var]: Total Storage
Title[stor_part:_:var]: Part: /var
Kilo[stor_part:_:var]: 1024
MaxBytes[stor_part:_:var]: 1099511627776
ShortLegend[stor_part:_:var]: iB
YLegend[stor_part:_:var]: Bytes
Options[stor_part:_:var]: gauge
# Config for Part: /var/spool (36)
Target[stor_part:_:var:spool]: hrStorageUsed.36&hrStorageSize.36:public@localhost * hrStorageAllocationUnits.36&hrStorageAllocationUnits.36:public@localhost
LegendI[stor_part:_:var:spool]: Used:
LegendO[stor_part:_:var:spool]: Total:
Legend1[stor_part:_:var:spool]: Used Storage
Legend2[stor_part:_:var:spool]: Total Storage
Title[stor_part:_:var:spool]: Part: /var/spool
Kilo[stor_part:_:var:spool]: 1024
MaxBytes[stor_part:_:var:spool]: 1099511627776
ShortLegend[stor_part:_:var:spool]: iB
YLegend[stor_part:_:var:spool]: Bytes
Options[stor_part:_:var:spool]: gauge
# Config for Part: /home (37)
Target[stor_part:_:home]: hrStorageUsed.37&hrStorageSize.37:public@localhost * hrStorageAllocationUnits.37&hrStorageAllocationUnits.37:public@localhost
LegendI[stor_part:_:home]: Used:
LegendO[stor_part:_:home]: Total:
Legend1[stor_part:_:home]: Used Storage
Legend2[stor_part:_:home]: Total Storage
Title[stor_part:_:home]: Part: /home
Kilo[stor_part:_:home]: 1024
MaxBytes[stor_part:_:home]: 1099511627776
ShortLegend[stor_part:_:home]: iB
YLegend[stor_part:_:home]: Bytes
Options[stor_part:_:home]: gauge
# Config for Part: /opt (38)
Target[stor_part:_:opt]: hrStorageUsed.38&hrStorageSize.38:public@localhost * hrStorageAllocationUnits.38&hrStorageAllocationUnits.38:public@localhost
LegendI[stor_part:_:opt]: Used:
LegendO[stor_part:_:opt]: Total:
Legend1[stor_part:_:opt]: Used Storage
Legend2[stor_part:_:opt]: Total Storage
Title[stor_part:_:opt]: Part: /opt
Kilo[stor_part:_:opt]: 1024
MaxBytes[stor_part:_:opt]: 1099511627776
ShortLegend[stor_part:_:opt]: iB
YLegend[stor_part:_:opt]: Bytes
Options[stor_part:_:opt]: gauge
# Config for Part: /usr (39)
Target[stor_part:_:usr]: hrStorageUsed.39&hrStorageSize.39:public@localhost * hrStorageAllocationUnits.39&hrStorageAllocationUnits.39:public@localhost
LegendI[stor_part:_:usr]: Used:
LegendO[stor_part:_:usr]: Total:
Legend1[stor_part:_:usr]: Used Storage
Legend2[stor_part:_:usr]: Total Storage
Title[stor_part:_:usr]: Part: /usr
Kilo[stor_part:_:usr]: 1024
MaxBytes[stor_part:_:usr]: 1099511627776
ShortLegend[stor_part:_:usr]: iB
YLegend[stor_part:_:usr]: Bytes
Options[stor_part:_:usr]: gauge


diskiomrtg.conf wrote:
# Config for sda
Target[diskio_sda]: `echo -e "2459277\n29377248"`
Title[diskio_sda]: sda
MaxBytes[diskio_sda]: 1000000000
Kilo[diskio_sda]: 1000
ShortLegend[diskio_sda]: sect/s
YLegend[diskio_sda]: Sectors/Second
Legend1[diskio_sda]: Sectors Read
Legend2[diskio_sda]: Sectors Written
Legend3[diskio_sda]: Read
Legend4[diskio_sda]: Written
# Config for lvm-tmp
Target[diskio_dm-0]: `echo -e "64\n1493912"`
Title[diskio_dm-0]: lvm-tmp
MaxBytes[diskio_dm-0]: 1000000000
Kilo[diskio_dm-0]: 1000
ShortLegend[diskio_dm-0]: sect/s
YLegend[diskio_dm-0]: Sectors/Second
Legend1[diskio_dm-0]: Sectors Read
Legend2[diskio_dm-0]: Sectors Written
Legend3[diskio_dm-0]: Read
Legend4[diskio_dm-0]: Written
# Config for lvm-usr
Target[diskio_dm-1]: `echo -e "2076488\n949864"`
Title[diskio_dm-1]: lvm-usr
MaxBytes[diskio_dm-1]: 1000000000
Kilo[diskio_dm-1]: 1000
ShortLegend[diskio_dm-1]: sect/s
YLegend[diskio_dm-1]: Sectors/Second
Legend1[diskio_dm-1]: Sectors Read
Legend2[diskio_dm-1]: Sectors Written
Legend3[diskio_dm-1]: Read
Legend4[diskio_dm-1]: Written
# Config for lvm-opt
Target[diskio_dm-2]: `echo -e "80\n40"`
Title[diskio_dm-2]: lvm-opt
MaxBytes[diskio_dm-2]: 1000000000
Kilo[diskio_dm-2]: 1000
ShortLegend[diskio_dm-2]: sect/s
YLegend[diskio_dm-2]: Sectors/Second
Legend1[diskio_dm-2]: Sectors Read
Legend2[diskio_dm-2]: Sectors Written
Legend3[diskio_dm-2]: Read
Legend4[diskio_dm-2]: Written
# Config for lvm-var
Target[diskio_dm-3]: `echo -e "271760\n20730344"`
Title[diskio_dm-3]: lvm-var
MaxBytes[diskio_dm-3]: 1000000000
Kilo[diskio_dm-3]: 1000
ShortLegend[diskio_dm-3]: sect/s
YLegend[diskio_dm-3]: Sectors/Second
Legend1[diskio_dm-3]: Sectors Read
Legend2[diskio_dm-3]: Sectors Written
Legend3[diskio_dm-3]: Read
Legend4[diskio_dm-3]: Written
# Config for lvm-var_spool
Target[diskio_dm-4]: `echo -e "19624\n6164904"`
Title[diskio_dm-4]: lvm-var_spool
MaxBytes[diskio_dm-4]: 1000000000
Kilo[diskio_dm-4]: 1000
ShortLegend[diskio_dm-4]: sect/s
YLegend[diskio_dm-4]: Sectors/Second
Legend1[diskio_dm-4]: Sectors Read
Legend2[diskio_dm-4]: Sectors Written
Legend3[diskio_dm-4]: Read
Legend4[diskio_dm-4]: Written
# Config for lvm-home
Target[diskio_dm-5]: `echo -e "40\n40"`
Title[diskio_dm-5]: lvm-home
MaxBytes[diskio_dm-5]: 1000000000
Kilo[diskio_dm-5]: 1000
ShortLegend[diskio_dm-5]: sect/s
YLegend[diskio_dm-5]: Sectors/Second
Legend1[diskio_dm-5]: Sectors Read
Legend2[diskio_dm-5]: Sectors Written
Legend3[diskio_dm-5]: Read
Legend4[diskio_dm-5]: Written


Note that since I regenerate these config files every 5 minutes, when I actually run mrtg I can sometimes get away with just doing an echo to obtain the values (which I sometimes gather from commands like vmstat while generating the config files, so it's simpler to obtain the values and the desired target definitions in one go). The script I use automatically decides what configurations to construct. Yeah, it's probably not the most efficient way to do it, but considering that this still takes near zero CPU compared to what exim/courier/spamd/clamd take out of my machines, and they are on average all idling at < 10% with one or two going to 50% during peak (and maybe once a week hitting 80%), I'm not too worried about it.

The script itself uses snmpwalk to determine which network interfaces and storage locations are monitorable, and vmstat for the diskio stuff. The command "snmpwalk -Os -c public -v 2c localhost hrStorageDescr" should give you an idea of what I'm doing to determine what goes where. From there sed and grep are rather handy. ppp* are the only graphs that do not produce the desired results, and upon reboots I sometimes get nasty spikes.
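To make that concrete, here is a stripped-down sketch of such a generator (not my actual script; the names and output path are made up, and it only emits a few of the keywords shown in stormrtg.conf above):

Code:
#!/bin/sh
# Sketch: build MRTG storage stanzas from the hrStorage table.
OUT=/var/lib/mrtg/stormrtg.conf
: > "$OUT"
snmpwalk -Os -c public -v 2c localhost hrStorageDescr |
while read -r line; do
    # e.g. "hrStorageDescr.31 = STRING: /"
    idx=$(echo "$line" | sed 's/^hrStorageDescr\.\([0-9]*\).*/\1/')
    descr=$(echo "$line" | sed 's/.*STRING: //')
    name=$(echo "$descr" | sed 's,[/ ],_,g' | tr 'A-Z' 'a-z')
    cat >> "$OUT" <<EOF
# Config for $descr ($idx)
Target[stor_$name]: hrStorageUsed.$idx&hrStorageSize.$idx:public@localhost * hrStorageAllocationUnits.$idx&hrStorageAllocationUnits.$idx:public@localhost
Title[stor_$name]: $descr
MaxBytes[stor_$name]: 1099511627776
ShortLegend[stor_$name]: iB
YLegend[stor_$name]: Bytes
Options[stor_$name]: gauge
EOF
done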

The results for a different machine are visible at http://mail.kroon.co.za/system/serverstats (that particular machine doesn't have a ppp device though).
_________________
There are 10 kinds of people in the world,
those who understand binary and who don't

Page 3 of 4