vnStat – a console-based network traffic monitor

If you are searching for an open source network monitoring tool, vnStat is a strong answer: one of the best command line tools for displaying and logging the network traffic transmitted to and from your box. It relies on the interface statistics already provided by the kernel, so vnStat adds no additional load to your system while monitoring and logging network traffic.

Once installed, verify that the kernel is providing all the information vnStat expects.

[root@linuxgenius soj]# vnstat --testkernel
This test will take about 60 seconds.
[==============================] done.

Detected boot time variation during test: 0
Maximum boot time variation set in config: 15

The current kernel doesn’t seem to suffer from boot time variation problems.
Everything is ok.

The ‘--iflist’ option displays the available interfaces that vnStat can monitor.

[root@linuxgenius soj]# vnstat --iflist
Available interfaces: lo eth0 eth1
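
With the vnStat 1.x series used in these examples, a database has to exist for an interface before anything can be reported; it is typically created by running an update against the interface once (this assumes eth0 from the list above is the interface you want to monitor):

vnstat -u -i eth0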

The various options for vnstat are as follows:

[root@linuxgenius soj]# vnstat --longhelp
vnStat 1.10 by Teemu Toivola

Update:
    -u, --update          update database
    -r, --reset           reset interface counters
    --sync                sync interface counters
    --enable              enable interface
    --disable             disable interface
    --nick                set a nickname for interface
    --cleartop            clear the top10
    --rebuildtotal        rebuild total transfers from months
Query:
    -q, --query           query database
    -h, --hours           show hours
    -d, --days            show days
    -m, --months          show months
    -w, --weeks           show weeks
    -t, --top10           show top10
    -s, --short           use short output
    -ru, --rateunit       swap configured rate unit
    --oneline             show simple parseable format
    --dumpdb              show database in parseable format
    --xml                 show database in xml format
Misc:
    -i, --iface           select interface (default: eth0)
    -?, --help            short help
    -D, --debug           show some additional debug information
    -v, --version         show version
    -tr, --traffic        calculate traffic
    -l, --live            show transfer rate in real time
    --style               select output style (0-4)
    --delete              delete database and stop monitoring
    --iflist              show list of available interfaces
    --dbdir               select database directory
    --locale              set locale
    --config              select config file
    --savemerged          save merged database to current directory
    --showconfig          dump config file with current settings
    --testkernel          check if the kernel is broken
    --longhelp            display this help

See also "man vnstat".

A sample output is as follows:

[soj@linuxgenius ~]$ vnstat
Database updated: Sat Dec 31 21:50:01 2011

   eth0 since 11/28/11

          rx:  1.85 GiB      tx:  498.45 MiB      total:  2.34 GiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Nov '11    198.83 MiB |   28.58 MiB |  227.41 MiB |    0.72 kbit/s
       Dec '11      1.65 GiB |  469.87 MiB |    2.11 GiB |    6.64 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated      1.66 GiB |     470 MiB |    2.12 GiB |

   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     yesterday      5.72 MiB |    1.13 MiB |    6.85 MiB |    0.65 kbit/s
         today     23.56 MiB |    4.15 MiB |   27.71 MiB |    2.89 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        25 MiB |       4 MiB |      29 MiB |
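
Using the options from the long help above, you can, for example, narrow the output to hourly figures or watch the transfer rate live (eth0 is the interface from the earlier listing):

vnstat -h -i eth0
vnstat -l -i eth0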


Weed out the top 10 CPU & Memory Consuming Processes

To find the top 10 CPU-consuming processes, issue the following command:

[root@domU-12-31-39-00-B4-26:~] ps aux | sort -n -k 3 | tail -10

root 6543 0.0 0.3 11992 5448 ? Rs Dec19 0:04 /usr/sbin/httpd.worker
root 10309 0.0 0.0 1632 300 ? Ss 04:00 0:00 collectdmon -P /var/run/collectdmon.pid -c /usr/sbin/collectd -- -C /etc/collectd.conf
root 10310 0.0 0.1 47264 2376 ? Sl 04:00 0:04 /usr/sbin/collectd -C /etc/collectd.conf -f
root 15097 0.0 0.1 8020 2368 ? Ss 17:54 0:00 sshd: root@pts/0
root 15099 0.0 0.0 2560 1388 pts/0 Ss 17:54 0:00 -bash
root 15158 0.0 0.0 2160 852 pts/0 R+ 17:57 0:00 ps aux
root 15159 0.0 0.0 27284 548 pts/0 R+ 17:57 0:00 sort -n -k 3
root 15160 0.0 0.0 1668 416 pts/0 S+ 17:57 0:00 tail -10
smmsp 1966 0.0 0.0 7596 656 ? Ss Dec16 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
jboss 13114 6.3 58.0 1563976 1009844 ? Sl 10:41 27:46 /usr/java/default/bin/java -Dprogram.name=run.sh -server -Xms768m -Xmx768m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:PermSize=512m -XX:MaxPermSize=512m -Djava.net.preferIPv4Stack=true -Djava.endorsed.dirs=/usr/local/jboss/lib/endorsed -classpath /usr/local/jboss/bin/run.jar:/usr/java/default/lib/tools.jar org.jboss.Main -c default

To find the top 10 memory-consuming processes, issue the following command:

[root@domU-12-31-39-00-B4-26:~] ps aux | sort -n -k 4 | tail -10

root 15167 0.0 0.0 27288 556 pts/0 R+ 17:59 0:00 sort -n -k 4
root 15168 0.0 0.0 1664 416 pts/0 S+ 17:59 0:00 tail -10
smmsp 1966 0.0 0.0 7596 656 ? Ss Dec16 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
root 10310 0.0 0.1 47264 2376 ? Sl 04:00 0:04 /usr/sbin/collectd -C /etc/collectd.conf -f
root 15097 0.0 0.1 8020 2368 ? Ss 17:54 0:00 sshd: root@pts/0
root 6543 0.0 0.3 11992 5448 ? Ss Dec19 0:04 /usr/sbin/httpd.worker
apache 14555 0.0 0.6 295928 11844 ? Sl 15:40 0:03 /usr/sbin/httpd.worker
apache 14604 0.0 0.6 295336 11192 ? Sl 15:47 0:03 /usr/sbin/httpd.worker
apache 14520 0.0 0.7 298388 13244 ? Sl 15:40 0:03 /usr/sbin/httpd.worker
jboss 13114 6.3 58.0 1563976 1009848 ? Sl 10:41 27:58 /usr/java/default/bin/java -Dprogram.name=run.sh -server -Xms768m -Xmx768m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:PermSize=512m -XX:MaxPermSize=512m -Djava.net.preferIPv4Stack=true -Djava.endorsed.dirs=/usr/local/jboss/lib/endorsed -classpath /usr/local/jboss/bin/run.jar:/usr/java/default/lib/tools.jar org.jboss.Main -c default
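
As a side note, GNU procps ps can also sort the output for you, which avoids the extra sort/tail pipeline; this is a sketch assuming a procps-based Linux system:

ps aux --sort=-%cpu | head -11     # top 10 CPU consumers (plus the header line)
ps aux --sort=-%mem | head -11     # top 10 memory consumers (plus the header line)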

The fsck way => lost+found

In UNIX, a file name can be thought of as a link to an inode. If something corrupts a filesystem and the inode-to-file name mapping is damaged, the file may disappear from normal “ls” output but still be available under its inode number. See the “ln” command for more discussion of how links work.

If you run something like fsck and it finds inodes that aren’t correctly linked to file names, it’ll assign a name to these files and place them in the lost+found directory so you have a chance to recover them. In Windows parlance, think of the effect under DOS when you run a utility like scandisk and recover file chains that are tossed into the \ directory with names like FILE00001 and so forth.

Don’t mess with this directory; it’s created on every filesystem by default, and I’m not sure if removing it would have a negative effect.

The below explanation is from the manual:

Orphaned files and directories (those that cannot be reached) are, if you allow it, reconnected by placing them in the lost+found subdirectory in the root directory of the file system. The name assigned is the i-node number. If you do not allow the fsck command to reattach an orphaned file, it requests permission to destroy the file.
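
If you do end up with entries in lost+found, you can list them along with their inode numbers and, conversely, search a filesystem for any path that still references a given inode; the inode number below is just a placeholder:

ls -li /lost+found          # recovered entries are usually named after their inode number
find / -xdev -inum 12345    # look for any remaining path that points at inode 12345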

Tuning (tune2fs) Linux File System

You can use the tool tune2fs to examine and adjust the tunable parameters of an ext filesystem. Using this tool, you can inspect the filesystem parameters stored in the superblock of a disk partition. The usage is as follows:

root@ubuntuPC:~# tune2fs -l /dev/sda5
tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name:
Last mounted on: /
Filesystem UUID: 62657be6-f9ae-4c8d-8775-016b2d9e9a21
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1892352
Block count: 7568384
Reserved block count: 378419
Free blocks: 5921199
Free inodes: 1709933
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1022
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Dec 3 10:57:06 2011
Last mount time: Fri Dec 9 19:26:45 2011
Last write time: Mon Dec 5 11:48:08 2011
Mount count: 13
Maximum mount count: 22
Last checked: Mon Dec 5 11:48:08 2011
Check interval: 15552000 (6 months)
Next check after: Sat Jun 2 11:48:08 2012
Lifetime writes: 13 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 538795
Default directory hash: half_md4
Directory Hash Seed: e788af22-9f8b-4087-b7e6-a7d0c845ad8f
Journal backup: inode blocks

In order to change the label, you can use the command tune2fs -L <label> <device>.

To change the UUID value, you can use the command tune2fs -U <uuid> <device>.
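
For example (the label here is illustrative; tune2fs also accepts the keyword "random" to generate a fresh UUID):

tune2fs -L rootdisk /dev/sda5
tune2fs -U random /dev/sda5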

Looking at the parameters above, you will notice a field called Errors behavior in the superblock. This indicates what should happen when an error is detected on the filesystem. This filesystem is marked Continue, which means the error will be reported by the kernel via syslog, but the filesystem will be left in service. There are two other possible values for this parameter:

remount-ro – remount the filesystem read-only when an error is detected
panic – cause a kernel panic when an error is detected.

When a filesystem error is detected, the filesystem will be checked at the next reboot.

In order to manually force disk checks at startup, you can run the following commands and then reboot.

# su root
# cd /
# touch /forcefsck
# reboot        (or: shutdown -r now)

Okay, now among the filesystem parameters, the most important ones are the Mount count and the Maximum mount count.

Mount Count (-C) specifies the number of times the file system has been mounted.

Maximum mount count (-c) specifies the maximum number of times the file system is mounted before it will be checked. If you want to force the file system to be checked at the next reboot, you can change the Maximum mount count value to a number less than the Mount count as follows:

root@ubuntuPC:~# tune2fs -c 12 /dev/sda5 #mount count is already 13.
tune2fs 1.41.14 (22-Dec-2010)
Setting maximal mount count to 12

root@ubuntuPC:~# tune2fs -l /dev/sda5 | grep -i mount
Last mounted on: /
Default mount options: (none)
Last mount time: Fri Dec 9 19:26:45 2011
Mount count: 13
Maximum mount count: 12

Now, you can reboot and see that it checks the file system before it proceeds to boot the system.

Mount count is incremented each time the filesystem is mounted. When fsck is run, if the mount count exceeds maximum mount count, then a check is forced. This parameter can be disabled by setting the Maximum mount count to -1 as follows:

root@ubuntuPC:~# tune2fs -c -1 /dev/sda5
tune2fs 1.41.14 (22-Dec-2010)
Setting maximal mount count to -1

root@ubuntuPC:~# tune2fs -l /dev/sda5 | grep -i "mount count"
Mount count: 1
Maximum mount count: -1

Another important parameter is Check interval (-i). This sets the time interval between checks. The suffix d (days), w (weeks) or m (months) may be added. Days is the default. This is the maximum amount of time between two filesystem checks. If check interval is set to 0, time-dependent checking will be disabled. This can be done as follows:

root@ubuntuPC:~# tune2fs -i 0 /dev/sda5
tune2fs 1.41.14 (22-Dec-2010)
Setting interval between checks to 0 seconds

root@ubuntuPC:~# tune2fs -l /dev/sda5 | grep -i check
Last checked: Mon Dec 5 11:48:08 2011
Check interval: 0 ()

The reason to use a journaled filesystem is to avoid the lengthy filesystem-checking procedure after an unclean shutdown. At boot, the system customarily performs a filesystem check, and "clean" filesystems are normally skipped. Since journaled filesystems are always "clean", this meant that a full check was never performed on them, so until recently it was considered wise to force a full check every so often, even on a clean filesystem. When fsck is run, if the current date is more than the check interval past the last-checked date, the filesystem is fully checked even though it is marked "clean". This behaviour can be disabled by setting the check interval to zero (0).

It is strongly recommended that either -c (mount-count-dependent) or -i (time-dependent) checking be enabled to force periodic full e2fsck(8) checking of the filesystem. Failure to do so may lead to filesystem corruption (due to bad disks, cables, memory, or kernel bugs) going unnoticed, ultimately resulting in data loss or corruption.

In order to schedule filesystem checks after a specific number of mounts or a specific period, you can combine the options in a single tune2fs command as follows:

tune2fs -c 50 -i 2m /dev/sda5

This will check the filesystem or partition after 50 mounts or 2 months, whichever comes first.

You can also check inode usage using the ‘df’, ‘stat’ and ‘dumpe2fs’ commands. Note that ‘stat -f’ reports on the filesystem containing the given path, so ‘stat -f /dev/sda5’ actually describes the tmpfs mounted at /dev (as the output below shows); point it at the mount point instead, for example ‘stat -f /’, to examine the ext4 filesystem itself.

root@ubuntuPC:~# df -i
Filesystem     Inodes  IUsed   IFree IUse% Mounted on
/dev/sda5     1892352 182553 1709799   10% /
udev           213972    478  213494    1% /dev
tmpfs          218982    403  218579    1% /run
none           218982      1  218981    1% /run/lock
none           218982      3  218979    1% /run/shm

root@ubuntuPC:~# stat -f /dev/sda5
File: “/dev/sda5”
ID: 0 Namelen: 255 Type: tmpfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 238939 Free: 238938 Available: 238938
Inodes: Total: 213972 Free: 213494

root@ubuntuPC:~# dumpe2fs /dev/sda5 | grep -i ^inode
dumpe2fs 1.41.14 (22-Dec-2010)
Inode count: 1892352
Inodes per group: 8192
Inode blocks per group: 512
Inode size: 256

Vsftpd configuration with SSL/TLS enabled

Vsftpd is one of the best FTP servers and is generally considered secure. By default, however, it transfers credentials and data in clear text, so it should be configured to use SSL/TLS to make it more secure. First check whether vsftpd is installed on your server; if not, use yum (Red Hat/CentOS) or apt-get (Debian) to install it.

Check if you have vsftpd installed

[root@centos soj]# rpm -qa | grep vsftpd
vsftpd-2.2.2-6.el6_0.1.x86_64

Vsftpd Defaults

Default ports: TCP 21 (control) and TCP 20 (data)
The main configuration file: /etc/vsftpd/vsftpd.conf
Users that are not allowed to login via ftp: /etc/vsftpd/ftpusers

Create a self-signed SSL certificate to make the file transfer more secure.

[root@centos vsftpd]# cd /etc/vsftpd
[root@centos vsftpd]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout vsftpd.pem -out vsftpd.pem

Generating a 2048 bit RSA private key
…………………………………………………………+++
.+++
writing new private key to ‘vsftpd.pem’
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Karnataka
Locality Name (eg, city) [Default City]:Bangalore
Organization Name (eg, company) [Default Company Ltd]:Kernel Craft, Inc.
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server’s hostname) []:kernelcraft.com
Email Address []:info@kernelcraft.com

My vsftpd configuration is as follows:

[root@centos soj]# vi /etc/vsftpd/vsftpd.conf

#Disabled Anonymous Login
anonymous_enable=NO

#Allow local users to log in (the FTP accounts created below are local users)
local_enable=YES

#FTP users should be able to write data. In case you don’t want FTP users to upload data, then change it to ‘NO’
write_enable=YES

#Turned off Port 20. Makes vsftpd less privileged
connect_from_port_20=NO

# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot()
chroot_local_user=YES

#Set umask to 022 to make sure all files (644) and folders (755) you upload get the proper permission.
local_umask=022

#Create warning banners for all FTP users:

banner_file=/etc/vsftpd/issue

#Create /etc/vsftpd/issue file with a message compliant with the local site policy or a legal disclaimer:

#NOTICE TO USERS
#Use of this system constitutes consent to security monitoring and testing.
#All activity is logged with your host name and IP address.
#

# Activate logging of uploads/downloads.
xferlog_enable=YES

#Default log file
xferlog_file=/var/log/vsftpd.log

#Turn on SSL. You might have to add this directive to make it a secure FTP connection.
ssl_enable=YES

# Allow anonymous users to use secured SSL connections
allow_anon_ssl=YES

# All non-anonymous logins are forced to use a secure SSL connection in order to
# send and receive data on data connections.
force_local_data_ssl=YES

# All non-anonymous logins are forced to use a secure SSL connection in order to send the password.
force_local_logins_ssl=YES

# Permit TLS v1 protocol connections. TLS v1 connections are preferred
ssl_tlsv1=YES

# Permit SSL v2 protocol connections. TLS v1 connections are preferred
ssl_sslv2=NO

# permit SSL v3 protocol connections. TLS v1 connections are preferred
ssl_sslv3=NO

# Specifies the location of the RSA certificate to use for SSL encrypted connections
rsa_cert_file=/etc/vsftpd/vsftpd.pem
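
After saving the configuration, restart vsftpd so the changes take effect; the commands below assume a SysV-init CentOS system like the one in the prompts above:

service vsftpd restart
chkconfig vsftpd on     # start vsftpd automatically at boot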

For added security, you may restrict FTP access to certain users by adding them to the list of users in the /etc/vsftpd/ftpusers file. The VSFTPD package creates this file with a number of entries for privileged users that normally shouldn’t have FTP access.

Create a user group and a shared directory. In this case, use /home/ftp-docs as the shared directory and ftp-users as the group for the remote users.

[root@centos vsftpd]# groupadd ftp-users
[root@centos vsftpd]# mkdir /home/ftp-docs

Make the directory accessible to the ftp-users group

[root@centos vsftpd]# chmod 750 /home/ftp-docs/
[root@centos vsftpd]# chown -R root:ftp-users /home/ftp-docs

Add users, and make their default directory /home/ftp-docs

[root@centos vsftpd]# useradd -g ftp-users -d /home/ftp-docs sojftp
[root@centos vsftpd]# useradd -g ftp-users -d /home/ftp-docs sojftp1
[root@centos vsftpd]# passwd sojftp
[root@centos vsftpd]# passwd sojftp1

Use WinSCP or FileZilla to log in to the vsftpd server and transfer files. Make sure you connect using explicit FTP over TLS (FTPS) so the session is secured; plain FTP logins will be refused because of the force_local_*_ssl settings above. (Note that SFTP is a different, SSH-based protocol and is not what vsftpd provides here.)
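
You can also verify from the command line that the server really negotiates TLS before logging in; the hostname below is a placeholder for your server:

openssl s_client -connect ftp.example.com:21 -starttls ftp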

That’s it.

Openssl – CSR Creation for obtaining SSL certificate

In order to obtain a certificate from a certifying authority like VeriSign, DigiCert etc., you have to create a CSR (Certificate Signing Request) and its corresponding private key before requesting the certificate. Once the request is submitted, the certifying authority checks whether the information provided in the CSR is legitimate, then processes the request and issues the certificate.

In order to create a CSR, you need to first generate the key file as follows:

openssl genrsa -des3 -out server.key 2048

Once the Key file is generated, you have to create the CSR using the above key as follows:

openssl req -new -key server.key -out server.csr

Now, you can send this CSR file to the certifying authority for them to verify and provide you with the certificate files.

In case you want to create a self-signed SSL certificate using the above key and csr, proceed as follows:

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

If you are on Windows Server and have OpenSSL installed, you can create the CSR as follows:

Go to the Openssl\bin folder and issue the following command:

openssl req -config openssl.cnf -new -newkey rsa:2048 -nodes -keyout server.com.key -out server.com.csr

The above command will create both the CSR and its Key file.

Note: You only have to send the CSR file to the certifying authority for verification purpose.
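
Before sending the CSR off, you can inspect and verify it locally with openssl itself:

openssl req -noout -text -in server.csr      # print the details the CA will see
openssl req -noout -verify -in server.csr    # check the signature on the request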

You can test your CSR using the following URL:

http://www.sslshopper.com/csr-decoder.html
Or
https://www.networking4all.com/en/support/tools/csr+check/

SQUID ACLs

Why SQUID?

There are two main reasons for installing and configuring SQUID on your network:

1. Reduce Internet bandwidth charges

Administrators configure the client web browsers to use the Squid proxy server instead of going to the web directly. The Squid server then checks its web cache for the content requested by the user. It returns any matching content it finds in its cache (a TCP_HIT in Squid terminology); if nothing matches, it fetches the content from the Internet on behalf of the user (TCP_MISS). Once it has the content, Squid populates its cache with the new page and forwards it to the user's web browser. This reduces the amount of data fetched from the Internet when the same website is accessed again.

2. Limit access to the Web to only authorized users

You can configure your firewall (iptables, Cisco PIX etc.) to accept HTTP connections only from the Squid server and from no one else. Squid can then be configured with access control lists (ACLs) to allow only certain departments, subnets or specific hosts to access the Internet. Squid can also block specific websites and allow Internet access only during specific hours of the day.

In one of my previous posts I discussed squidclient and the Squid proxy logs. Now, before starting with the various Squid rules (ACLs, Access Control Lists), remember that Squid listens on TCP port 3128 by default. Most companies' security policies will require you to change this default port to something else. You can change the Squid port to, say, 8080 as follows:

vi /etc/squid/squid.conf
{Go to the section “Network Options” by searching for this string}
http_port 3128 (change to 8080)
:wq!

You will also see https_port. The https_port directive allows Squid to act as an accelerator (reverse proxy), essentially a middle man between the client and the server that provides the HTTPS front end.

Client => Squid => HTTPS Web Server

Here, the client makes its request to Squid, which terminates the SSL session and then connects to the backend server, offloading the SSL work and improving the performance of the SSL connection.

Reload SQUID service so port 8080 is in effect.

/sbin/service squid reload

(Reloading SQUID service won’t disconnect or stop the SQUID sessions, while restarting a SQUID service will stop and start the service thereby interrupting the existing sessions/connections)

You can verify the port Squid is listening on with the netstat command:

netstat -ntlp

Also, make sure you update the proxy port to 8080 on your web-browser so you can access the websites.

Access Control Lists (ACLs):

Squid matches each Web access request it receives by checking the http_access list from top to bottom. If it finds a match, it enforces the allow or deny statement and stops reading further. You have to be careful not to place a deny statement in the list that blocks a similar allow statement below it. The final http_access statement denies everything, so it is best to place new http_access statements above it.

ACL Syntax:

Defining an ACL

acl {acl_name} acl_type (src / dstdomain / srcdomain / time etc.) decision_string

eg: acl our_networks src 192.168.2.0/24 192.168.5.0/24

Apply rules on ACL_names

http_access allow/deny acl_name

eg: http_access allow our_networks

There are pre-defined ACLs that SQUID will use to determine whether or not clients are able to make connections to certain ports outside.

eg: vi /etc/squid/squid.conf
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443

These are safe ports that clients are allowed to reach through Squid. The rule that enforces this is as follows:

http_access deny !Safe_ports

(Note the negation: this rule denies any request whose destination port is not in the Safe_ports list, which effectively allows only the Safe_ports.)

If you use the Unix shell on your network to download and configure various applications, you should direct that traffic through Squid as well, so the content is fetched via the Squid server, saving bandwidth and the time needed to repeatedly pull it from the Internet. In order to do this, set the variable "http_proxy" to point to the Squid server as follows:

export http_proxy=192.168.2.50:3128

(If port is changed to any other, specify that port instead of the default 3128)

Now, if you download via wget or curl, you will see it uses the proxy server 192.168.2.50 to download the contents.
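
For example, assuming the proxy was moved to port 8080 as described earlier (example.com is just a placeholder site):

export http_proxy=http://192.168.2.50:8080
wget http://example.com/index.html
curl -I http://example.com/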

DENY Internet ACCESS to specific hosts:

In SQUID configuration file (/etc/squid/squid.conf), include the following to deny access to two hosts in your network

acl bad_hosts src 192.168.2.10 192.168.2.15
http_access deny bad_hosts

You can verify or confirm this denial via access.log (/var/log/squid/access.log).

ACL Lists:

There are mainly 2 methods you can use to create ACL Lists:

1. By repeating ACL names

acl bad_hosts src 192.168.2.10
acl bad_hosts src 192.168.2.15
http_access deny bad_hosts

2. By creating a file and listing all the host IPs in the file

acl bad_hosts src "/etc/squid/bad_host_file.txt"
http_access deny bad_hosts

In bad_host_file.txt, add one IP address per line.
Note: make sure the file "bad_host_file.txt" is readable by the squid user; it can be owned by any user though.
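
For example, reusing the two hosts from method 1:

vi /etc/squid/bad_host_file.txt
192.168.2.10
192.168.2.15
:wq!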

Run the command “service squid reload” to reload the squid configuration file without breaking the existing connections/sessions.

ACLs Based on TIME:

Syntax: acl {acl_name} time {SMTWHFA} 00:00-00:00

Days of the week are represented as follows:

S – Sunday
M – Monday
T – Tuesday
W – Wednesday
H – Thursday
F – Friday
A – Saturday
D – WeekDays (Monday to Friday)

Hours and Minutes are represented as: hh:mm-hh:mm

eg:
acl break time 12:00-13:00
acl work_hours time D 09:00-17:00

eg 1: Deny Internet access during work hours
acl work_hours time 09:00-17:30
http_access deny work_hours

eg 2: Allow specific admin hosts Internet access at all times (place this allow rule above the deny rules so it matches first)
acl admins src 192.168.2.10 192.168.2.15
http_access allow admins

eg 3: Deny Internet access between 9am and 7pm on Monday, Wednesday, Thursday and Friday
acl work_hours time MWHF 09:00-19:00
http_access deny work_hours

ACLs Based on Specific Destination Domains:

You can deny access to destination domains (dstdomain) or source domains (srcdomain) as follows:

eg 1: By repeating the ACL name:
acl bad_sites dstdomain .facebook.com
acl bad_sites dstdomain .orkut.com
acl bad_sites srcdomain .games.com
http_access deny bad_sites

eg 2: Using a text file:
acl bad_sites dstdomain "/etc/squid/bad_sites_file.txt"
http_access deny bad_sites

vi /etc/squid/bad_sites_file.txt
.facebook.com
.orkut.com
:wq!

Make sure you put a period ‘.’ before the domain names so that all subdomains match. If you don’t put the period before the domain name, then http://www.facebook.com will still be allowed and only facebook.com itself will be blocked by the above rule. So, keep this in mind when constructing rules.

Combining SQUID ACLs:

You can define separate ACLs and then combine them in a single http_access rule. It's done as follows:

acl work_hours time MTWHF 08:00-17:00
acl bad_sites dstdomain "/etc/squid/bad_site_file.txt"
http_access deny work_hours bad_sites

Here we are ANDing (combining) work_hours and bad_sites, denying access to all domains in the file "bad_site_file.txt" from Monday through Friday between 8am and 5pm.

eg: No casual browsing during work hours (weekdays, 8am to 5pm) from subnet 192.168.2.0/24, but permit access to work-related websites like wikipedia.org:

acl work_site dstdomain .wikipedia.org
http_access allow work_site
acl employees src 192.168.2.0/24
acl work_hours time MTWHF 08:00-17:00
http_access deny employees work_hours

eg: Deny browsing of sites with keyword ‘sex’

acl bad_keyword url_regex -i sex
http_access deny bad_keyword

You can also use a file to store more bad keywords, one per line, and block those websites as follows:

acl bad_keyword url_regex -i "/etc/squid/bad_keyword_file.txt"
http_access deny bad_keyword

eg: Deny download of prohibited extensions like .exe, .vbs etc.

acl bad_extensions url_regex .*\.exe$
http_access deny bad_extensions

.* -> matches anything (any number of any characters)
\  -> escapes the ‘.’ that follows, so it matches a literal dot
$  -> anchors the match to the end of the URL (ends with .exe)
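
Following the same file-based pattern used for keywords above, you could block several extensions at once (the file name here is illustrative):

vi /etc/squid/bad_extension_file.txt
\.exe$
\.vbs$
:wq!

acl bad_extensions url_regex -i "/etc/squid/bad_extension_file.txt"
http_access deny bad_extensions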

eg: Block outbound access to certain TLDs like .jp, .cn, .ru etc.

acl bad_tlds dstdom_regex \.cn$
http_access deny bad_tlds

You can block multiple TLDs using a file as follows:

acl bad_tlds dstdom_regex "/etc/squid/bad_tld_file.txt"
http_access deny bad_tlds

vi /etc/squid/bad_tld_file.txt
\.cn$
\.jp$
\.ru$
:wq!

Construct an ACL to configure SQUID as a NON-Caching Proxy Server:

Sometimes you might want to construct specific rules so that requests from a particular subnet or a specific host don't get cached on the Squid server. This can also be done for specific destination domains, so access to those websites won't be cached and all the content will be pulled from the Internet on each request.

NOTE: non-caching rules should be set up before any other rules.

The following rule sets SQUID as a non-caching proxy server:

acl non_caching_hosts src 0.0.0.0/0.0.0.0 ####{this can be set as 0/0 as well}
no_cache deny non_caching_hosts

The following rule disables caching of specific websites

acl block_caching_sites dstdomain .hotmail.com
no_cache deny block_caching_sites

Or, using a file, as follows:

acl block_caching_sites url_regex -i "/etc/squid/no_cache_file.txt"
no_cache deny block_caching_sites

You can also disable caching of Dynamic webpages like .php, .pl, .asp, .jsp etc as follows:

acl no_dynamic_sites url_regex -i "/etc/squid/dynamic.txt"
no_cache deny no_dynamic_sites

vi /etc/squid/dynamic.txt
\.php$
\.pl$
\.asp$
\.jsp$
:wq!

Also, you can construct a rule for not caching specific hosts and cache everyone else as follows:

acl no_cache_admins src 192.168.2.10 192.168.2.15
no_cache deny no_cache_admins

The above rule applies only to the admin computers and prevents caching for them, so the admins always pull content directly from the Internet.

Don’t forget to RELOAD SQUID whenever you apply rules 🙂

That’s it.

Note: I found this URL very helpful: http://wiki.squid-cache.org/SquidFaq/SquidAcl