iScanner: malware pwned


iScanner is a free, open source tool that lets you detect and remove malicious code and web page viruses from your Linux/Unix server easily and automatically.

This tool was written by iSecur1ty in the Ruby programming language and is released under the terms of the GNU Affero General Public License 3.0.

iScanner Features:
* Detect malicious code in web pages. This includes hidden iframe tags, JavaScript, VBScript, ActiveX objects and PHP code.
* Extensive log showing the infected files and the malicious code.
* Send email reports.
* Ability to clean the infected web pages automatically.
* Easy backup and restore system for the infected files.
* Simple and editable signature-based database.
* Ability to update the database and the program easily from a dedicated server.
* Very flexible options and easy to use.
* Fast scanner with good performance.


First of all, make sure that Ruby is installed on your server:
# ruby -v

If Ruby isn’t installed, you can simply use the yum package manager:
# yum install ruby

Or apt-get if you are using Ubuntu or any other Debian-based distribution:
# apt-get install ruby

Otherwise you can download Ruby and install it on your machine from:

Download iScanner – HERE

Extract the package to a directory:

tar -zxvf iscanner.tar.gz

iScanner doesn’t require any external libraries, and you don’t need to install the program to be able to use it, but an installer is included that you can use to install and uninstall iScanner on your server with the following command:
# ./installer -i

This command will install iScanner in the default directory ‘/etc/iscanner’ but you can change the installation directory using the ‘-d’ option:
# ./installer -i -d /opt/iscanner

You can easily uninstall iScanner by removing the folder or by using the following command:
# ./installer -u

Using iScanner:


-R Use this option to scan a remote web page / website.
# iscanner -R

-F Use this option to scan a specific file.
# iscanner -F /home/user/file.php

-f Use this option to scan a specific directory.
# iscanner -f /home/user

-e This option allows you to select specific file extensions for scanning. By default iScanner scans [htm, html, php, js]; if you want to scan other or specific extensions only:
# iscanner -f /home/user -e htm:html

-d By default iScanner loads the latest version of the signatures database in the folder; if you want to use an older or modified version of the database:
# iscanner -f /home/user -d database.db

-M This option allows you to specify malware code; iScanner will automatically generate a regex signature for it and then scan the files/website you want (in case you want to scan for specific code or as-yet-undetected malware):
# iscanner -M /home/user/malware_code.txt -f /home/user
# iscanner -M /home/user/malware_code.txt -R

-o This option allows you to choose the name of the infected log file. If this option isn’t used, the log file name will be in the format “infected-[TIME]-[DATE].log”; if you want to select another name:
# iscanner -f /home/user -o user.log

-m With this option you can tell iScanner to send a copy of the infected log to a selected email address:
# iscanner -f /home/user -m

-c This option will clean the infected files by removing the malicious code only, without deleting the infected files. Before using this option, make sure to check the infected log so you know exactly what iScanner will remove from each infected file.
# iscanner -c infected.log

-b With this option iScanner will take a backup of the infected files before cleaning them. You can find the backup of the cleaned files in a folder named “backup-[TIME]-[DATE]”.
# iscanner -b -c infected.log

-r If you used the previous option when cleaning the infected files and something went wrong, use this option to restore the files from the backup directory.
# iscanner -r backup/

-a This option makes iScanner clean every infected file automatically. It can be dangerous if you haven’t scanned the folder first and don’t know the results.
# iscanner -f /home/user -a

-D This option makes iScanner run in debug mode, which is useful if you have problems while running iScanner.
# iscanner -f /home/user -D

-q If you don’t want to see any output from iScanner, you can enable this option to make the program run in quiet mode.
# iscanner -f /home/user -q

-s This option allows you to send a malicious file to the iScanner developers for analysis. This will help them improve the signatures database and keep it updated.
# iscanner -s /home/user/malicious_file.html

-U With this option you can update iScanner and the signatures database to the latest version easily.
# iscanner -U

-u This option will update the signatures database only instead of updating the whole program.
# iscanner -u

-v This option prints iScanner’s version and release date, along with the database version and its release date.
# iscanner -v

-h This option will show the help message.
# iscanner -h

Tips & Tricks:

You can easily modify the signatures database and have your own customized version by adding, removing or changing the regular expressions in the database. Some ideas:

* If several websites on your server have been hacked, you can add a signature to iScanner’s database to make it locate all the hacked index pages on your server.
* You can add iScanner to the cron jobs to make it scan your server every 24 hours and send the infected log to your email address.
* It is also possible to configure your FTP server (PureFTPd, for example) to make iScanner scan all the files uploaded by users and send an email alert if a malicious file is detected.
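As a concrete sketch of the cron idea above, a crontab entry along these lines would scan /home nightly and email the infected log. The executable path assumes the default /etc/iscanner install directory from the installer section, and the schedule and email address are placeholders you would adapt:

```
# /etc/crontab entry: scan /home at 03:00 every day, emailing the infected log
0 3 * * * root /etc/iscanner/iscanner -f /home -m admin@example.com
```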

Linux Server Profiling


Using Open Source Tools For Bottleneck Analysis

This tutorial covers profiling of Linux servers using open-source tools such as “iostat”, “oprofile” and “blktrace”. Both processor-bound and I/O-bound cases are covered, and the emphasis is on tools that provide visual displays of relevant metrics.

Linux Server Profiling: Using Open Source Tools For Bottleneck Analysis

AMD’s 12-core chip may cut software costs


Advanced Micro Devices today released its 12-core chip, doubling the number of cores in the previous-generation chip in its Opteron line. One of the key benefits in taking advantage of the performance gains delivered by a chip with a dozen cores may be in reducing software licensing costs.

The performance of its new Opteron, code-named Magny-Cours, is about double that of its six-core chip, AMD said.

Users will look at the price, performance and energy use of the chip and compare it with Intel’s x86 chip upgrades, but other reasons for moving to a 12-core chip will be the impact on overall data center space needs and software licensing costs. The chip will be sold in an eight-core version.

Read the rest here…

OpenSSL 1.0.0 released

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library. The project is managed by a worldwide community of volunteers that use the Internet to communicate, plan, and develop the OpenSSL toolkit and its related documentation.

OpenSSL is based on the excellent SSLeay library developed by Eric A. Young and Tim J. Hudson. The OpenSSL toolkit is licensed under an Apache-style licence, which basically means that you are free to get and use it for commercial and non-commercial purposes subject to some simple license conditions.
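The toolkit also ships the familiar openssl command-line utility; a quick way to check which version is installed and that it works (the string being hashed here is arbitrary):

```shell
# Confirm the toolkit is installed and working: print the version,
# then hash an arbitrary test string with the command-line tool.
openssl version
printf 'hello' | openssl dgst -sha256
```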

Download it here…

0M “disk used” in whm

Ran into this error again last night; scrambled for the notes but couldn’t find them. Pulled this from cPanel support (those guys ROCK!)

SSH into the server and run:
quotaoff -a
quotacheck -acfmv

Main >> Account Information >> List Accounts should now have the “Disk Used” column reporting correctly again.

Also, if what mount reports:
/dev/sda8 on /home type ext3 (rw)

differs from what cat /etc/fstab shows:
LABEL=/home /home ext3 defaults,usrquota 1 2

(here the usrquota option is missing from the live mount), /home may need to be remounted to pick up the quota option:
mount -o remount /home

ssh keys for multiple server access


How to use multiple SSH keys for password less login

by Vivek Gite

I’ve already written about how to log in from your local system and make passwordless SSH connections using the ssh-keygen command. However, you cannot just follow those instructions over and over again, as you will overwrite the previously generated keys.

It is also possible to upload multiple public keys to your remote server, allowing one or more users to log in without a password from different computers.

Step # 1: Generate first ssh key

Type the following command to generate your first public and private key pair on a local workstation. Next, provide the required input or accept the defaults. Please do not change the filename and directory location.
workstation#1 $ ssh-keygen -t rsa

Finally, copy your public key to your remote server using scp:
workstation#1 $ scp ~/.ssh/
Step # 2: Generate next/multiple ssh key

a) Log in to the 2nd workstation

b) Download the original authorized_keys file from the remote server using scp:
workstation#2 $ scp ~/.ssh

c) Now create the new pub/private key:
workstation#2 $ ssh-keygen -t rsa

d) Now you have a new public key. APPEND this key to the downloaded authorized_keys file using the cat command:
workstation#2 $ cat ~/.ssh/ >> ~/.ssh/authorized_keys

e) Finally, upload the authorized_keys file to the remote server again:
workstation#2 $ scp ~/.ssh/authorized_keys

You can repeat step #2 for each user or workstation that needs access to the remote server.
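The core of step #2 is that authorized_keys is just a text file with one public key per line, so appending is all that is needed. Here is a minimal local simulation of the merge (the paths and key strings are illustrative placeholders, not real keys):

```shell
# Simulate merging public keys from two workstations into one
# authorized_keys file; real keys would come from each ~/.ssh/id_rsa.pub.
mkdir -p /tmp/sshdemo
printf '%s\n' 'ssh-rsa AAAAB3...key1 user@workstation1' >  /tmp/sshdemo/authorized_keys
printf '%s\n' 'ssh-rsa AAAAB3...key2 user@workstation2' >> /tmp/sshdemo/authorized_keys
chmod 600 /tmp/sshdemo/authorized_keys  # sshd is picky about permissions
wc -l < /tmp/sshdemo/authorized_keys    # prints 2: one line per key
```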
Step #3: Test your setup

Now try to log in from workstation #1, #2 and so on to the remote server. You should not be asked for a password:
workstation#1 $ ssh
workstation#2 $ ssh

This worked like a dream. I’ve simplified this process down to a script to use:

# setup ssh keys multiple server instances
scp root@ ~/.ssh
ssh-keygen -t rsa
cat ~/.ssh/ >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys root@

To use this script, simply
touch ~/
insert the above lines for the script, modifying them for the IP address of the host you want to log in to, then save the file.
chmod +x
add pw for server
add pw for server
rm -f

Copy from shell to clipboard


Linux never stops surprising me; I discover a new application or a new tip every day. I have had no time these last months, so I could not write here as much as I would like.

Anyway, let me share this tip with you.

Today’s tool is xclip; first of all, install it.

On Debian or Ubuntu:

sudo aptitude install xclip

On Fedora or CentOS:

sudo yum install xclip

And for my favorite distro

sudo pacman -S xclip

This great tool sends the output of shell commands to the clipboard, which is very useful. If, for example, you want to create a document containing a man page, just enter:

man htop | xclip

Then go to your favorite word processor and use your editor’s paste shortcut or the middle mouse button to paste the contents of the clipboard into your document.

You can copy the output of any command, for instance free:

free | xclip

These are the results:

             total       used       free     shared    buffers     cached
Mem:       2064156    1114088     950068          0      69860     455464
-/+ buffers/cache:     588764    1475392
Swap:      2650684          0    2650684

You can also send the contents of the clipboard to a command, for instance if you want to write some text into a file.

Just select the text from a web page to copy it to the clipboard (xclip uses the X primary selection by default), and then run this in the console:

xclip -o > file.txt

Read the xclip man page for more info



atop has been updated to 1.26. Get it HERE.

Atop is an ASCII full-screen performance monitor that is capable of reporting the activity of all processes (even if processes have finished during the interval), daily logging of system and process activity for long-term analysis, highlighting overloaded system resources by using colors, etc. At regular intervals, it shows system-level activity related to the CPU, memory, swap, disks, and network layers, and for every active process it shows the CPU utilization, the memory growth, priority, username, state, and exit code.

The command atop has some major advantages compared to other performance-monitors:

Resource consumption by all processes
It shows the resource-consumption by all processes that were active during the interval, so also the resource-consumption by those processes that have finished during the interval.

Utilization of all relevant resources
Obviously it shows system-level counters concerning cpu-, memory- and swap-utilization, however it also shows disk I/O and network utilization counters on system-level.

Permanent logging of resource utilization
It is able to store raw counter-data in a file (compressed) for long-term analysis on system- and process-level. By default the daily logfiles are preserved for 28 days.
System activity reports can be generated from a logfile by using the atopsar command.

Highlight critical resources
It is able to highlight resources that have (almost) reached a critical load by using colors for system statistics.

Scalable window width
It is able to add or remove columns dynamically at the moment that you enlarge or shrink the width of your window.

Watch activity only
By default, it only shows system-resources and processes that were really active during the last interval (output related to resources or processes that were completely passive during the interval is by default suppressed).

Watch deviations only
For the active system resources and processes, only the load during the last interval is shown (not the accumulated utilization since boot or process startup).

Accumulated process activity per user
For each interval it is able to accumulate the resource consumption for all processes per user.

Accumulated process activity per program
For each interval it is able to accumulate the resource consumption for all processes with the same name.

Network activity per process
In combination with optional kernel patches it shows process-level counters concerning network activity.

Various system activity reports can be generated by the command atopsar, similar to the UNIX sar command. Colors and (on request) markers are used to highlight that the utilization of a resource is critical (red) or almost critical (cyan).

With the flag -c in the following example a report is generated about current CPU utilization of the system during 5 minutes (five times with an interval of sixty seconds):

$ atopsar -c 60 5



One Line Linux Command to Print Out Directory Tree Listing


My professor sent us this little one-liner, which prints out the current directory tree:

ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'

What’s going on here?

* ls -R: list files and directories recursively
* grep ":$": find lines with : at the end (so only the directories)
* sed -e: evaluate expressions on the lines
* s/:$//: remove the ':' at the end of the line
* s/[^-][^\/]*\//--/g: replace each path component up to and including its / (the parent directories) with --, globally on each line
* s/^/ /: add a space at the beginning of each line
* s/-/|/: replace the first - of the line with |

I reduced this using the following command. The most notable difference is that I use find instead of ls, which results in also viewing .hidden directories. I’m not sure which command is faster.

find ./ -type d | sed -e 's/[^-][^\/]*\//--/g;s/--/ |-/'

Both commands result in a formatted directory listing, demonstrated below:
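For example, running the find-based version against a small sample tree (the directory names below are made up for the demo):

```shell
# Build a tiny sample tree, then render it with the find-based one-liner
mkdir -p /tmp/treedemo/src/lib /tmp/treedemo/docs
cd /tmp/treedemo
find ./ -type d | sed -e 's/[^-][^\/]*\//--/g;s/--/ |-/'
```

Each directory level becomes two dashes, so ./src renders as ` |-src` and ./src/lib as ` |---lib`; the order of entries depends on the filesystem.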


Of course you can also use


Accessing an IMAP email account using telnet

About IMAP.

IMAP is an email protocol for organizing, storing and retrieving email on a remote server. It was developed after POP and is a much more advanced system; one of the main differences is that all the mail is stored on the server, so it remains accessible from many different locations. With POP you have to download the mail to your local computer in order to read it, so you cannot synchronize your mail across many machines. IMAP may be more complex than POP, but there are still only a few core commands we need to know in order to access our mail on an IMAP server.
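While the full walkthrough sits behind the link below, the general shape of a raw session is worth sketching. The hostname and credentials here are placeholders, the tagged commands (LOGIN, SELECT, FETCH, LOGOUT) are standard IMAP4rev1 per RFC 3501, and the server responses shown are illustrative:

```
$ telnet mail.example.com 143
* OK IMAP server ready
a1 LOGIN username password
a1 OK LOGIN completed
a2 SELECT INBOX
a2 OK [READ-WRITE] SELECT completed
a3 FETCH 1 (BODY[HEADER])
a3 OK FETCH completed
a4 LOGOUT
* BYE IMAP server terminating connection
a4 OK LOGOUT completed
```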

Rest Here…