Auditd

From osnews.com

How do you audit your Linux environment? How do you track changes to your files? What kind of processes are running on your system at any given moment? What uses the most resources? Valid questions, all. Special contributor Dedoimedo gives us the straight scoop on “audit.” We’ve monitored system activity with sar. We used iostat, vmstat and dstat to collect statistics on resource usage. We learned about pacct for process accounting. We also worked with top in batch mode and atop. Finally, we worked with some high-end debugging tools like OProfile and strace. Links below. But we did not use any utility for file auditing.

Today, we do that. The name of the game: Linux audit.

read the rest here

Gliffy

From Gliffy and techhamlet.com

Gliffy is an online tool that makes it easy to create, share, and collaborate on a wide range of diagrams. With Gliffy, users can communicate more clearly, boost innovation, improve decisions, and work more effectively. Hundreds of thousands of customers have embraced Gliffy, creating diagrams in a broad range of industries and functions, from individuals to small business owners to leading global companies. Gliffy is built on the Adobe Flash framework, and clicking “Get Started” on the home page takes you to the web app.

Want to create a diagram for your project in a hurry? Or how about creating neat diagrams while you are on-the-go? It’s in times like these you’ll want a nice online diagramming tool like Gliffy. Let’s see how Gliffy could ease our work with its cool features.

On the very first screen it’ll ask you what type of diagram you want to create, featuring types such as Website/Software UI Design, Venn Diagrams, Flowcharts, Organization Charts, SWOT Analysis, UML Diagrams, Network Diagrams, and Business Process; you can even create floor plans!

After selecting the diagram type you’ll be presented with the workspace, where you can design the diagram the way you want it to be. The interface is well designed, and you can tweak the options they have provided to make the process snappy.

Exporting the Diagram
In the left pane, you can add shapes and images (powered by Yahoo! image search) and customize the user interface. Just below the menu bar you’ll see the design toolbar, from which you can add text, basic shapes, connectors and navigation tools. Here are some keyboard shortcuts that will save you a lot of time.

Text Tool – Ctrl+Shift+F2
Ellipse Tool – Ctrl+Shift+9
Connector Tool – Ctrl+Shift+3
Line Tool – Ctrl+Shift+6
Zoom in – Ctrl with +
Zoom out – Ctrl with –
Hand Tool – Space+Left mouse button

The right pane displays the properties of the selected object, where you can edit the values and see the changes. Furthermore, you can open more tabs and work concurrently by clicking on the “plus sign” located just below the design toolbar. When you have finished creating the diagram, just click on “File” and then choose the format you want to export.

But why the name Gliffy?
It comes from the word glyph, a symbol or character that imparts information non-verbally. Gliffy is an online diagramming service that helps users communicate with a combination of shapes, text, and lines. Doesn’t sound so silly now, does it?

From Gliffy and techhamlet.com

Thar’ be softwares ere’

From kmandla.wordpress.com

I am a harsh software critic. I’m usually willing to try something new if there’s the possibility it will do a better job than my current favorite, but I hold grudges against programs — and sometimes even entire desktop environments — if they disappoint me.

In addition, I am a minimalist. I have a clear set of criteria that I use to judge a program.

Do one thing, and only one thing.
Everybody likes a flexible program. But I don’t like software that tries to do too much at once. For example, I resent music management software suites or photo management applications. I manage the photos. I manage the music. The application shows it, or plays it. Period. If you try to be all things at once to me, you will only disappoint.

Do that one thing well.
A program needs focus — that goes without saying. If it achieves that goal and doesn’t muddle the final product, it is a winner. In other words, if you can’t do it right, don’t bother trying at all.

Don’t drag my system down.
If you burden my installation with pointless libraries and dependencies that don’t add anything to your software, you fail. Some of the greatest software ever written has about two dependencies. Some of the worst drags in all of Gnome just to put an icon on the screen. That is inexcusable.

Finally, points are awarded for style.
I can forgive and even adopt an ugly or cumbersome program if it achieves in the first three categories. But if you manage to capture all three and have a clever interface or a smooth look, then I embrace thee through the power of the Internets. The odd consequence of all these points is that I tend to rely on console-based, or at least framebuffer-oriented software over “standard” graphical applications.

And considering that has been the case for quite some time, I think I’m safe in recommending command-line applications over graphical alternatives. The more I use them, the more I realize that terminal-based software can do 99 percent of the work a graphical desktop does, with ten times the speed and a tenth of the resources.

And therefore, in no particular order or arrangement…

the rest of the story here…

MySQL 5.5 upgrade

From ovaistariq.net
Reference MySQL Documentation Here

MySQL 5.5 has created a lot of hype, and it’s not just hype: there are major performance enhancements, not only in the MySQL server itself but in the newer InnoDB plugin shipped with MySQL 5.5. That’s exactly the reason why I have myself upgraded to MySQL 5.5 (the server running this blog runs MySQL 5.5). Since I haven’t come across a guide to help in upgrading to MySQL 5.5, I thought why not make one myself. So here goes nothing!
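For orientation before you read the full guide, here is a minimal sketch of the usual upgrade path (per the MySQL documentation referenced above; package names and init commands vary by distro, so treat these paths as assumptions):

# back everything up first, then swap the server packages and run mysql_upgrade
mysqldump --all-databases > all-databases.sql
/etc/init.d/mysql stop
# ...install the MySQL 5.5 packages for your platform here...
/etc/init.d/mysql start
mysql_upgrade -u root -p    # checks and repairs tables for the new version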

Continue reading “MySQL 5.5 upgrade”

Add custom buttons to WP HTML editor

I ran into a small issue yesterday: after upgrading to 3.0.3, the colorcoder plugin removed the ‘code’ button from the HTML editor. In researching how to remedy this, I came across instructions to modify the quicktags.js file, which controls the functionality of the buttons on the HTML editor. In order to add, remove or modify these buttons:

Browse to the default WP install location and then into the folder /wp-includes/js/

move the compressed default quicktags.js to quicktags.js.bak
mv quicktags.js quicktags.js.bak

then copy the quicktags.dev.js file to quicktags.js (it’s the same file, just not compressed)
cp quicktags.dev.js quicktags.js

edit the quicktags.js file
vim quicktags.js

add buttons using the following format;

function edButton(id, display, tagStart, tagEnd, access, open) {
    this.id = id;           // used to name the toolbar button
    this.display = display; // label on button
    this.tagStart = tagStart; // open tag
    this.tagEnd = tagEnd;   // close tag
    this.access = access;   // access key
    this.open = open;       // set to -1 if tag does not need to be closed
}

div tag

edButtons[edButtons.length] =
new edButton('ed_div'
,'div'
,'<div>'
,'</div>'
,'d'
);

link2 button (original link button has specific code related to it later in the file which defaults to the standard link action)

edButtons[edButtons.length] =
new edButton('ed_link2'
,'link2'
,'<a href="" title="From link title">'
,'</a>'
,'c'
);

email link button

edButtons[edButtons.length] =
new edButton('ed_email'
,'email'
,'<a href="mailto:">'
,'</a>'
,'c'
);

You can modify this file to add or remove buttons which are more or less useful depending on your needs.

CLIcompanion

From okiebuntu.homelinux.com

CLI Companion is a Terminal with an attached ‘command dictionary’. The application comes with a set of commands already added to the dictionary, but CLI Companion lets you add as many commands as you want to the ‘command dictionary’. You may use this tool to store and run Terminal commands from a GUI. People unfamiliar with the Terminal will find CLI Companion a useful way to become acquainted with the Terminal and unlock its potential. Experienced users can use CLI Companion to store their extensive list of commands in a searchable list.

Get the .deb from launchpad.net/clicompanion, or grab it directly with the wget command below.

You can visit the project’s Launchpad page or follow the directions below to install from the Terminal.

I will include instructions to install the deb directly as well as instructions to install from the PPA so you can receive updates. You can install the deb with the following commands in a terminal.
To get the deb:

wget http://launchpad.net/clicompanion/1.0/1.0rc2/+download/clicompanion_1.0-3.1_all.deb

To install the deb:
sudo dpkg -i clicompanion_1.0-3.1_all.deb

To automatically receive updates you can add the clicompanion-nightlies ppa to your Software Sources with the following command:
sudo add-apt-repository ppa:clicompanion-devs/clicompanion-nightlies

Then install the package from the PPA (not necessary if you already followed the wget and dpkg commands above):
sudo apt-get update; sudo apt-get install clicompanion

You can find the text file that holds a specific user’s commands in ~/clicompanion2. You’ll want to save that file if you upgrade or move systems around. The base file that holds all the initial commands is in /etc/clicompanion.d/clicompanion2.config.
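If you want a quick backup of that dictionary before an upgrade or a move, a simple copy will do (the backup name here is just an example):

cp ~/clicompanion2 ~/clicompanion2.bak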

Cmd Service Check Raw Output: Fixed ownership 12/15

After getting this email error last night…

nameserver failed @ Wed Dec 15 01:46:06 2010. A restart was attempted automagically.
Service Check Method: [check command]

Cmd Service Check Raw Output: Fixed ownership on /etc/named.conf
Fixed ownership on /etc/rndc.key
Fixed ownership on /etc/rndc.conf

I checked the yum logs for updates, and I’m seeing:

tac /var/log/yum.log |less

Dec 15 00:16:48 Updated: 30:caching-nameserver-9.3.6-4.P1.el5_5.3.x86_64
Dec 15 00:16:48 Updated: 30:bind-devel-9.3.6-4.P1.el5_5.3.i386
Dec 15 00:16:46 Updated: openssl-devel-0.9.8e-12.el5_5.7.x86_64
Dec 15 00:16:44 Updated: 30:bind-devel-9.3.6-4.P1.el5_5.3.x86_64
Dec 15 00:16:40 Updated: openssl-devel-0.9.8e-12.el5_5.7.i386
Dec 15 00:16:38 Updated: 30:bind-utils-9.3.6-4.P1.el5_5.3.x86_64
Dec 15 00:16:37 Updated: 30:bind-9.3.6-4.P1.el5_5.3.x86_64
Dec 15 00:16:36 Updated: 30:bind-libs-9.3.6-4.P1.el5_5.3.i386
Dec 15 00:16:36 Updated: 30:bind-libs-9.3.6-4.P1.el5_5.3.x86_64
Dec 15 00:16:35 Updated: openssl-0.9.8e-12.el5_5.7.x86_64

So, checking for errors:

tac /var/log/messages |grep named |less

I’m getting this:

Dec 15 10:13:49 host named[24058]: max open files (1024) is smaller than max sockets (4096)
Dec 15 10:13:49 host named[24058]: loading configuration from '/etc/named.conf'
Dec 15 10:13:49 host named[24058]: using up to 4096 sockets

The resolution may be here:
http://forum.nginx.org/read.php?24,158533
http://osdir.com/ml/centos/2010-12/msg00776.html

===============================================

After the latest security update for bind (which came out last night), there’s a new message on syslog (facility: daemon, severity: warning) every time you restart named:

max open files (1024) is smaller than max sockets (4096)

After googling for a while the solution seems to be to add this to
/etc/security/limits.conf:

named soft nofile 4096

…and modify /etc/named.conf in order to add, under the options section:

files 4096;

That seems to work. Of course, you may raise the 4096, but I guess that’s the default in BIND and I was good with that.

I’m not sure why this happened. Maybe before the update bind had a value of 1024 for max sockets and now it was raised to 4096.

==================================================
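Putting the fix together on a stock CentOS/RHEL 5 layout, the change amounts to something like this (a sketch; the 4096 value mirrors the max sockets figure from the log above):

named soft nofile 4096       # appended to /etc/security/limits.conf
files 4096;                  # added inside the options { } section of /etc/named.conf
/etc/init.d/named restart    # restart named so the new limits apply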

Maldet

The home page for LMD is located at:
Linux Malware Detect | R-fx Networks

cpanel.net info on maldet

The current version of LMD is always available at the following URL:
http://www.rfxn.com/downloads/maldetect-current.tar.gz
http://www.rfxn.com/appdocs/README.maldetect
http://www.rfxn.com/appdocs/CHANGELOG.maldetect

What is Linux Malware Detect (LMD)?

Linux Malware Detect (LMD) is a malware scanner for Linux released under the GNU GPLv2 (free, open source) license that is designed around the threats faced in shared hosting environments. It uses threat data from network edge intrusion detection systems to extract malware that is actively being used in attacks and generates signatures for detection. In addition, threat data is also derived from user submissions with the LMD checkout feature, threats found on the TCH network of over 30,000 hosted domains, and from malware community resources.

The result is a malware scanner with over 2,200 signatures and growing that detects a varied assortment of malware, from the infamous Yellsoft Darkmailer CGI Mailer to the more common r57 PHP Command Shell to BASE64-encoded file injection strings.

Installation & Configuration:

There is nothing special to installing LMD: download the package and run the enclosed install.sh script:

wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
2010-05-15 23:34:05 (148 MB/s) - `maldetect-current.tar.gz' saved [268031/268031]

tar xfz maldetect-current.tar.gz
cd maldetect-*
./install.sh
Linux Malware Detect v1.3.4
(C) 1999-2010, R-fx Networks (C) 2010, Ryan MacDonald
inotifywait (C) 2007, Rohan McGovern
This program may be freely redistributed under the terms of the GNU GPL

installation completed to /usr/local/maldetect
config file: /usr/local/maldetect/conf.maldet
exec file: /usr/local/maldetect/maldet
exec link: /usr/local/sbin/maldet
cron.daily: /etc/cron.daily/maldet

maldet(32517): {sigup} performing signature update check...
maldet(32517): {sigup} local signature set is version 2010051510029
maldet(32517): {sigup} latest signature set already installed

Now that LMD is installed, take note of the file locations and we can go ahead with opening the configuration file located at /usr/local/maldetect/conf.maldet for editing (vi or nano -w). The configuration file is fully commented, so you should be able to make out most options, but let’s take a moment to review the more important ones anyway (a sample excerpt follows the list).

email_alert
This is a top level toggle for the e-mail alert system; it must be turned on if you want to receive alerts.

email_addr
This is a comma-separated list of e-mail addresses that should receive alerts.

quar_hits
This tells LMD that it should move malware content into the quarantine path and strip it of all permissions. Files are fully restorable to the original path, owner and permission using the --restore FILE option.

quar_clean
This tells LMD that it should try to clean malware that it has cleaner rules for; at the moment, base64_decode and gzinflate file injection strings can be cleaned. Files that are cleaned are automatically restored to the original path, owner and permission.

quar_susp
Using this option allows LMD to suspend a user account that malware is found residing under. On cPanel systems this will pass the user to /scripts/suspendacct and add a comment with the maldet report command for the report that caused the user’s suspension (e.g: maldet --report SCANID). On non-cPanel systems, the user’s shell will be set to /bin/false.

quar_susp_minuid
This is the minimum user id that will be evaluated for suspension, the default should be fine on most systems.
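For a concrete picture, the relevant lines of /usr/local/maldetect/conf.maldet might end up looking something like this (the values shown are illustrative examples, not recommendations):

email_alert=1
email_addr="you@yourdomain.com"
quar_hits=1
quar_clean=1
quar_susp=0
quar_susp_minuid=500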

The rest of the options in conf.maldet can be left as defaults unless you clearly understand what they do and how they may influence scan results and performance.

Usage & Manual Scans
The usage of LMD is very simple, and there is a detailed --help output that provides common usage examples. I strongly recommend you check the --help output and spend a few minutes reviewing it.

The first thing most users are looking to do when they get LMD installed is to scan a certain path or series of paths. An important note is that LMD uses the ‘?’ character for wildcards instead of the ‘*’ char. In the examples below I will be using the long form flags, but they are interchangeable with the short form flags (i.e: --scan-recent vs. -r).

If we wanted to scan all user public_html paths under /home*/ this can be done with:

maldet --scan-all /home?/?/public_html

If you wanted to scan the same path but scope it to content that has been created/modified in the last 5 days you would run:

maldet --scan-recent /home?/?/public_html 5

If you performed a scan but forgot to turn on the quarantine option, you could quarantine all malware results from a previous scan with:

maldet --quarantine SCANID

Similarly to the above, if you wanted to attempt a clean on all malware results from a previous scan that did not have the feature enabled, you would do so with:

maldet --clean SCANID

If you had a file that was quarantined from a false positive or that you simply want to restore (i.e: you manually cleaned it), you can use the following:

maldet --restore config.php.2384
maldet --restore /usr/local/maldetect/quarantine/config.php.2384

Once again, I encourage you to fully review the --help output for details on all options and the README file for more details on how LMD operates.

Daily Scans
The cronjob installed by LMD is located at /etc/cron.daily/maldet and is used to perform a daily update of signatures, keep the session, temp and quarantine data to no more than 14 days old, and run a daily scan of recent file system changes.

The daily scan supports Ensim virtual roots or standard Linux /home*/user paths, such as cPanel. The default is to just scan the web roots daily, which breaks down as /home*/*/public_html, or on Ensim /home/virtual/*/fst/var/www/html and /home/virtual/*/fst/home/*/public_html.

If you are running monitor mode, the daily scans will be skipped and instead a daily report will be issued for all monitoring events. If you need to scan additional paths, you should review the cronjob and edit it accordingly.
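If you want to reproduce by hand roughly what the daily job does, a scan of the last day’s changes under the standard web roots would look like this (a sketch built from the --scan-recent flag covered above):

maldet --scan-recent /home?/?/public_html 1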

Report Location

The malware detect scan reports are stored in /usr/local/maldetect/sess with file names similar to session.hits.082311-0402.8042, or they can be accessed via the following:

maldet -e, or --report SCANID
View the scan report of the most recent scan or of the provided SCANID
e.g: maldet --report (shows last scan)
e.g: maldet --report 050910-1534.21135 (shows a specific date’s scan)

Release & Signature Updates
Updates to the release version of LMD are not automatically installed but can be installed using the --update-ver option. There are good reasons that this is not done automatically, and I really don’t feel like listing them, so just think about it a bit.

The latest changes in the release version can always be viewed at: http://www.rfxn.com/appdocs/CHANGELOG.maldetect

The LMD signatures are updated typically once per day or more frequently, depending on incoming threat data from the LMD checkout feature, IDS malware extraction and other sources. The updating of signatures in LMD installations is performed daily through the default cron.daily script with the --update option, which can be run manually at any time.

An RSS & XML data source is available for tracking malware threat updates:
RSS: R-fx Networks – Linux Malware Detect Updates
XML: http://www.rfxn.com/api/lmd?id=recent
XML: http://www.rfxn.com/api/lmd?id=all

Real-Time Monitoring
The inotify monitoring feature is designed to monitor users in real-time for file creation/modify/move operations. This option requires a kernel that supports inotify_watch (CONFIG_INOTIFY), which is found in kernels 2.6.13+ and in CentOS/RHEL 5 by default. If you are running CentOS 4 you should consider an in-place upgrade with:
Upgrade CentOS 4.8 to 5.3 | R-fx Networks

There are three modes that the monitor can be executed with, and they relate to what will be monitored: USERS|PATHS|FILES.

e.g: maldet --monitor users
e.g: maldet --monitor /root/monitor_paths
e.g: maldet --monitor /home/mike,/home/ashton

The options break down as follows:
USERS – The users option will take the home directories of all system users that are above inotify_minuid and monitor them. If inotify_webdir is set, then only the user’s webdir, if it exists, will be monitored.
PATHS – A comma spaced list of paths to monitor
FILES – A line spaced file list of paths to monitor

Once you start maldet in monitor mode, it will preprocess the paths based on the option specified, followed by starting the inotify process. The starting of the inotify process can be a time-consuming task, as it needs to set up a monitor hook for every file under the monitored paths. Although the startup process can impact the load temporarily, once the process has started it maintains all of its resources inside kernel memory and has a very small userspace footprint in memory or CPU usage.

The scanner component of the monitor watches for notifications from the inotify process and batches items to be scanned, by default, every 30 seconds. If you need tighter control of the scanning timer, you can edit inotify_stime in conf.maldet.
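For example, to have queued hits scanned every 15 seconds instead, the setting in conf.maldet would look like this (assuming the value is in seconds, per the default of 30 noted above):

inotify_stime=15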

The alerting of file hits under monitor mode is handled through a daily report instead of sending an email on every hit. The cron.daily job installed by LMD will call an --alert-daily flag and send an alert for the last day’s hits. There is also an --alert-weekly option that can be used; simply edit the cron at /etc/cron.daily/maldet and change --alert-daily to --alert-weekly.

Terminating the inotify monitoring is done by passing the ‘-k|--kill-monitor’ option to maldet; it will touch a file handle monitored by maldet, and on the next waking cycle of the monitor service, it will terminate itself and all inotify processes.

Ignore Files:
There are three ignore files available, and they break down as follows (a short usage example follows the list):

/usr/local/maldetect/ignore_paths
A line spaced file for paths that are to be excluded from search results

/usr/local/maldetect/ignore_sigs
A line spaced file for signatures that should be removed from file scanning

/usr/local/maldetect/ignore_inotify
A line spaced file for paths that are to be excluded from inotify monitoring
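As an example, appending a line to the first two files is all it takes; the signature name below is just a placeholder for whichever signature is false-positiving on your content:

echo '/home/user/backups' >> /usr/local/maldetect/ignore_paths
echo 'some.signature.name' >> /usr/local/maldetect/ignore_sigs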

NOTE: These scans are thorough and can take days if your home dir is large, you have 8 million files or 1200 domains on the server…. just saying. This tool is more suited for specific accounts or directories that may be compromised. So, if you run this and do a full homedir scan and it takes 8 days, don’t blame me; you have been warned 😀

Redirect Nginx to www

From go2linux.org

Yesterday I wrote a post about how to redirect traffic from www.domain.com to domain.com using Nginx, using an if statement inside the server section of the nginx.conf file. There is another way to do it: create two server sections and redirect the traffic from one of them to the other:

Strip www from url with nginx redirect

server {
    server_name www.domain.com;
    rewrite ^(.*) http://domain.com$1 permanent;
}

server {
    server_name domain.com;
    #The rest of your configuration goes here#
}

What you need to do is use the same IP in your DNS server for both domain.com and www.domain.com: when the browser asks for www.domain.com, it will be redirected to domain.com and will then ask the same server for that content, in which case it will get the proper answer from your server.

Add the www to the url with nginx redirect

If what you need is the opposite, to redirect from domain.com to www.domain.com, you can use this:

server {
    server_name domain.com;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}

server {
    server_name www.domain.com;
    #The rest of your configuration goes here#
}

As you can imagine, this is just the opposite and works the same way as the first example.

What I like about this is that you can actually use this method to forward other domains you may own to a specific domain. Let’s see.

Imagine you own:

domain.com
domain.net
domain.org

Your site is located at www.domain.com; you may write a configuration file like this:

server {
    server_name domain.org;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}
server {
    server_name www.domain.org;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}

server {
    server_name domain.net;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}
server {
    server_name www.domain.net;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}

server {
    server_name domain.com;
    rewrite ^(.*) http://www.domain.com$1 permanent;
}
server {
    server_name www.domain.com;
    #From here you start your Nginx server normal configuration#
}

Do not forget to manage your DNS server and, in each zone, assign the same IP for all of the following (an example record set follows the list):

domain.com
www.domain.com
domain.org
www.domain.org
domain.net
www.domain.net
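In BIND zone-file terms, that means A records along these lines in each of the three zones (203.0.113.10 is a stand-in for your server’s real IP):

@     IN A 203.0.113.10
www   IN A 203.0.113.10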

From go2linux.org

Poor man’s processes monitor

From andreinc.net

Recently a member of the Romanian Ubuntu Community asked for a script to monitor the running processes on his server. He didn’t request anything fancy, just a small utility that would be able to detect a hanging application (a process that is “eating” more than 80% of the CPU for a long period of time) and then log the results.

I am no sysadmin, but I am sure there are a lot of dedicated open-source solutions for monitoring a server. Still, the functionality he asked for can be easily achieved by combining bash and awk. One of the things I like Linux for is the power of the shell and the full control over your system. You can write a script for every repeating task that arises, as bash is easy to learn but, of course, hard to master.

More as an exercise for myself, I’ve proposed the following solution:


#!/bin/bash

#DATE: Nov 5, 2010
#AUTHOR: nomemory

#Maximum memory for a process (%)
declare -i MEM_LIMIT=1

#Maximum CPU for a process (%)
declare -i CPU_LIMIT=1

#Loop sleep interval (seconds)
declare -i SEC_INT=30

while true; do
    # ps aux fields: $1=user, $2=pid, $3=%cpu, $4=%mem, $11=command
    ps aux | awk -v MEM_LIMIT=${MEM_LIMIT} \
                 -v CPU_LIMIT=${CPU_LIMIT} \
                 -v CDATE="`date`" '{
        if ($3 > CPU_LIMIT) {
            printf "%s [ %10s %d %40s ] CPU LIMIT EXCEED: %2.2f (MAX: %2.2f) \n", \
                CDATE, $1, $2, $11, $3, CPU_LIMIT
        }
        if ($4 > MEM_LIMIT) {
            printf "%s [ %10s %d %40s ] MEM LIMIT EXCEED: %2.2f (MAX: %2.2f) \n", \
                CDATE, $1, $2, $11, $4, MEM_LIMIT
        }
    }'
    sleep ${SEC_INT}
done

If you run this script, the output will probably look similar to this one:

Mon Nov 8 00:01:08 EET 2010 [ andrei 1718 /opt/google/chrome/google-chrome ] MEM LIMIT EXCEED: 2.20 (MAX: 1.00)
Mon Nov 8 00:01:08 EET 2010 [ andrei 1726 pidgin ] MEM LIMIT EXCEED: 1.40 (MAX: 1.00)
Mon Nov 8 00:01:08 EET 2010 [ andrei 1853 /opt/google/chrome/chrome ] CPU LIMIT EXCEED: 5.70 (MAX: 1.00)
Mon Nov 8 00:01:08 EET 2010 [ andrei 1853 /opt/google/chrome/chrome ] MEM LIMIT EXCEED: 2.70 (MAX: 1.00)
Mon Nov 8 00:01:08 EET 2010 [ andrei 2054 gnome-terminal ] CPU LIMIT EXCEED: 1.50 (MAX: 1.00)
Mon Nov 8 00:01:08 EET 2010 [ andrei 2058 bash ] CPU LIMIT EXCEED: 1.70 (MAX: 1.00)

The output can then be redirected to a file (>>) and interpreted as:
DATE [ USER PID COMMAND ] CPU/MEM LIMIT EXCEED: VALUE (MAX: MAXIMUM_LIMIT for CPU/MEM)

In this form the script supports the following variables:

MEM_LIMIT
A float number (integers are accepted) representing the maximum memory percentage a process can use before triggering the alarm.
CPU_LIMIT
A float number (integers are accepted) representing the maximum CPU percentage a process can use before triggering the alarm.
SEC_INT
The pause in the main loop. Every SEC_INT seconds the processes will be scanned.
Those variables are passed as awk variables using the ‘-v’ flag.

The shortcomings of the script are obvious: sometimes a process can have a short spike of CPU consumption, so false positives may appear. Probably the best thing to do would be to write another script to analyze the log and see how many times a certain command is repeated. For example, the log could be grep’ed to find a certain command, then the ‘wc’ utility used to count how many times the process triggered the alarm, as in the sketch below. All in all, the problem was worth a try!
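Something like this would do, assuming the script’s output was redirected to a file named monitor.log (the file name and the ‘chrome’ pattern are just examples):

# count how many times a given command tripped the CPU alarm
grep 'CPU LIMIT EXCEED' monitor.log | grep 'chrome' | wc -l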

Written by: Andrei Ciobanu on November 7, 2010.
Last revised by: Andrei Ciobanu on November 14, 2010.

From andreinc.net