Archive for March, 2013

22 March, 2013

Monitoring tools: nmon

by gorthx

Next up in my occasional monitoring tools review series: another readily available oldie-but-goodie, nmon.

What: nmon
What it monitors: system stats
Where to get it: it’s probably pre-installed on your system. If not, get it from SourceForge.
Why you’d want (or not) to use it: Pretty much the same reasons you’d want to use sar, as I discussed previously.

I’ve (casually) used the interactive interface, and until a few weeks ago thought that was all there was to this tool. Not so. There’s an option (-f) you can use to save data polls to a file, in “spreadsheet format”. You can also specify an interval and a number of polls to take:

nmon -f -s 60 -c 60
= poll once a minute for an hour.

nmon will create a file for you, with a default name of [server]-timestamp.nmon, or you can specify your own filename with -F.

To generate graphs, there are two Excel spreadsheets you can download from the wiki. I tried the nmon Analyzer Spreadsheet (the newer of the two). The docs recommend “keep the number of snapshots to around 300”. I agree. The graphs look a lot nicer with fewer data points in them. However, Excel graphs just aren’t as pretty as rrdtool graphs.

There’s an nmon2rrd tool, but it was compiled for AIX so I didn’t try it out.

Of the two, nmon wins if I’m looking for on-the-spot visualization of system performance. For storage and later review of the data, I’d go with sar + sar2rrd.pl over nmon + the Excel spreadsheet; the sar graphs are prettier and easier to read.

15 March, 2013

Monitoring Tools: sar

by gorthx

What: sar
What it monitors: pretty much every system stat you can imagine (and some you haven’t)
Where to get it: it’s probably pre-installed on your system; if not, try the sysstat package (the same one that includes iostat)
Why you’d want to use it:

  • you need an answer fast, but maybe don’t have access to the “enterprise” monitoring (or there isn’t any…[1])
  • you’re doing system testing and want a command-line tool that’s easy to configure and run in discrete timeframes.

Why you wouldn’t want to use it:

  • you want data you can easily throw into a graphing or analysis program; the data produced by sar isn’t readily machine-readable
  • you’re looking for a near-real-time long-term monitoring solution. In that case, just go ahead and set up munin or collectd.

Because it’s lightweight and so readily available, it’s a good tool to have in your toolbox. Plus, it’ll tell you things like fan speed and temperature, and I’m just a sucker for environmental monitoring [2].


8 March, 2013

Updating My Linux Command-Line Toolbox

by gorthx

Over the past few months I’ve been working with some people with many more years of unix-y experience than I have. They’ve been teaching me new stuff, and showing me updated versions of commands I’ve been using in sometimes kludgy ways. Here are 10 examples.

1. join, which feels kind of like a stone-age tool; it’s not that versatile, and I’m sure there are perl one-liners that could do the same thing. But join’s readily available and it’s a fast way to join two files, if they meet the requirements: same field separator, both files sorted on a common “key” field (which is easy to add with nl, #2 below), etc.
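A minimal sketch, using two hypothetical comma-separated files that share a key in field 1:

```shell
# Both files must already be sorted on the join field.
printf '1,alice\n2,bob\n' > names.txt
printf '1,admin\n2,user\n' > roles.txt
join -t, names.txt roles.txt
# 1,alice,admin
# 2,bob,user
```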

2. nl to number the lines in a file.

My most-used switches:
nl -v 800 -w 3 -n rn -s, oldfile > newfile
-v = start with this number
-w = number of characters (in this example, numbers > 999 will be truncated)
-n rn = ‘right justified, no 0 padding’ so you don’t have to go back through another round of text processing to strip them off
-s, = use a comma as a field sep
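Put together, with a made-up three-line input file:

```shell
# Numbering starts at 800, field width 3, right-justified with no
# zero padding, comma separator:
printf 'alpha\nbeta\ngamma\n' > oldfile
nl -v 800 -w 3 -n rn -s, oldfile
# 800,alpha
# 801,beta
# 802,gamma
```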

3. truncate as a desperate move to free up some space. Bonus: do this on a log file to confuse your coworkers.
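For example, emptying a stand-in log file in place (unlike rm, the file and any open file handles on it survive):

```shell
printf 'old noise\n' > app.log   # stand-in log file for this sketch
truncate -s 0 app.log            # file still exists, now zero bytes
ls -l app.log
```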

4. watch [1]
Current favorite:
watch -n 10 "psql -d my_db -c \"SELECT datname, procpid, usename, backend_start, xact_start, query_start, waiting, current_query FROM pg_stat_activity WHERE current_query LIKE 'autovacuum:%'\""

5. lastlog, last, and lastb
most recent login, all logins, and all bad logins

6. df -h instead of df -k
df -k was one of the first unix commands I learned. It used to be easy to read the blocks, Used, and Available columns (they had fewer digits back then). df -h makes it easy again, and shows me the units. The muscle memory on this one has been particularly hard to shed.

7. echo * | wc -w instead of ls | wc -l or ls -1 | wc -l
The first sysadmin I ever worked with taught me ls -1 | wc -l. Turns out you don’t need the -1 to list files one per line if you’re piping the output to another command.

Use echo * to quickly count the files in a directory when there are more than a few thousand of them; this assumes no spaces in the filenames.
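A quick illustration with a throwaway directory (name is made up for the sketch):

```shell
# Count directory entries without spawning ls; miscounts if any
# filename contains whitespace.
mkdir -p demo_dir
touch demo_dir/a demo_dir/b demo_dir/c
( cd demo_dir && echo * | wc -w )
# 3
```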

8. awk can be kind of intimidating due to its sheer power and uninformative error messages. I have a small cheatsheet I keep handy, for tasks I do frequently. These are the most recent additions:

To remove everything owned by me in a directory:
ls -l | awk '/gabrielle/{print $9}' | xargs rm

To find all files > 0 size in a directory:
ls -l | awk '$5>0{print}'
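A quick demo of that size filter with two hypothetical files (the extra /\.dat$/ match just keeps other directory entries out of this example’s output):

```shell
touch empty.dat               # zero bytes, filtered out by $5>0
printf 'data\n' > full.dat    # non-empty, passes the filter
ls -l | awk '$5>0 && /\.dat$/{print $9}'
# full.dat
```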

9. Everything here: https://dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/

10. And sar, which I’ll write a separate post about.

Thanks to Joe, Vibhor, Oscar, and of course MJM.

1 – “How did I not know about ‘watch’?” “You were using Solaris all these years, m’dear.”
