Squid Web Proxy

The Squid Web Proxy Cache is a fully featured Internet caching server that handles all types of web requests on behalf of a user. When a user requests a web resource (webpage, movie clip, graphic, etc.), the request is sent to the caching server, which forwards it to the real web server on the user's behalf. When the requested resource is returned to the caching server, it stores a copy of the resource in its “cache” and then forwards the resource back to the original user. The next time someone requests a copy of the “cached” resource, it is delivered directly from the local proxy server and not from the distant web server (depending on the age of the resource, etc.).

Using a proxy server can greatly improve web browsing speed if frequently visited sites and resources are stored locally in the cache. There are also financial savings to be gained if you’re a large organisation with many Internet users, or even a small home user with a quota allowance for downloads. There are many ways a proxy can be beneficial to any network.

The Squid proxy has so many features, access controls and other configurable items that it is impossible to cover all of the settings here. This chapter will provide some basic configuration settings (which is all that’s required) to enable the server, and provide access controls to prevent unauthorised users from gaining access to the Internet through your proxy. The configuration file has been documented extremely well by the developers and should provide enough information to assist your set up; however, if you don’t know what a setting does, don’t touch it.

Basic Configuration
Many Linux distributions provide Squid as part of their available packages and are already configured to some degree. The following settings are the more commonly used ones to enable and administer the service.

The configuration file for Squid is quite large, mainly because of the detailed explanations throughout, and it is important that it’s backed up before we make any changes. If it all goes wrong, there’s only one thing that will save your sanity: a restored working config file.

[bash]# cp /etc/squid/squid.conf /etc/squid/squid.conf.original
[bash]# vi /etc/squid/squid.conf

The http_port is the port number on the local server that Squid binds itself to and listens on for incoming requests. Its default is port 3128, but this can be changed if needed (8080 is also a common cache port). Whichever port is used here will need to be set in all the workstations that will attach to and use the proxy. Squid can also be bound so that it listens only on the internal network IP address.

#http_port 3128
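For example, if the gateway’s internal interface were on (a placeholder address; substitute your own), Squid could be told to listen only on that address:

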

The icp_port is used to send Internet Cache Protocol (RFC 2186) queries to neighbouring proxy servers. The default is port 3130; however, unless you are using multiple proxies inside an organisation, it is safe to disable it (set to “0”).

icp_port 0

Some ISPs provide their customers with a proxy server to use; this benefits both the user and the ISP. If your ISP does have a proxy server, you can configure your own proxy to send all requests to the “upstream” server. This may provide a quicker return on your requests if the ISP’s proxy has the requested objects stored in its cache. ICP is not required here, hence the “no-query” attribute.

cache_peer proxy.myisp.com parent 3128 3130 no-query

Squid’s cache is designed to store cached objects from the Internet; however, there may be times when you don’t want to store certain objects. The following settings tell the proxy not to cache anything that was called through a CGI script.

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

The cache_dir tag specifies the location where the cache will reside in the filesystem. ufs identifies the storage format for the cache. The “100” specifies the maximum allowable size of the cache (in MB), and should be adjusted to suit your needs. The 16 and 256 specify the number of directories contained inside the first and second level cache stores.

cache_dir ufs /var/spool/squid 100 16 256
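On a server with plenty of spare disk, the store can be made much larger; the 2000 MB figure below is illustrative only and should be sized for your own environment:

cache_dir ufs /var/spool/squid 2000 16 256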

Caution !!     Squid does not have a cache store (the directories) when it is first installed and won’t run without one. The cache store can be created by typing “squid -z” at the command prompt before starting the service for the first time.

The following tags specify the standard log file locations.

cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log

The log_fqdn tag tells Squid to log the Fully Qualified Domain Name of the remote web server that it is retrieving resources from; this is recorded in the cache_access_log. This may slow the proxy slightly if it needs to do any external DNS queries, but it may be required if the logs are to be analysed.

log_fqdn off

When Squid proxies any FTP requests, this is the password used when logging in with an anonymous FTP account.

ftp_user Squid@example.com

The dns_nameservers tag specifies which DNS servers should be queried for name resolution. Squid will normally use the values located in the /etc/resolv.conf file, but they can be overridden here.
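For example, to direct queries at two specific resolvers (the addresses here are placeholders for your own name servers):
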


If Squid dies, it will attempt to send an email to the cache_mgr. The cache_mgr’s email address is also displayed at the bottom of any error messages the proxy may display to the clients.

cache_mgr admin@example.com

This is the name of the host server that is running the Squid service. It is also displayed at the bottom of any error messages the proxy may display to the clients.

visible_hostname galaxy.example.com

Note !!     The server is now ready to be started; however, no access has been granted at this point. The server is functional but inaccessible except from the localhost. See the access controls below to start using the proxy.

Starting The Server
Starting the proxy server is similar to any other service, set the appropriate runlevels that it should be active at, and then check to see if they are set correctly.

[bash]# chkconfig --level 345 squid on
[bash]# chkconfig --list squid

If this is the first time the Squid service has been started, or you have changed the cache_dir directive in some way, then the cache store needs to be (re)initialised. Squid may not start if this has not been done.

[bash]# squid -z
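It is also worth asking Squid to validate the configuration file before the first start; “squid -k parse” reads the configuration and reports any syntax errors it finds.

[bash]# squid -k parse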

The service can now be started, be sure to check the system log to see if any errors have occurred.

[bash]# /etc/init.d/squid restart
[bash]# grep squid /var/log/messages

Setting Access Controls
The initial access controls for the Squid server are fairly restrictive, and with good reason. Before anyone can use the server, the access controls must be written to allow access. Rules can be written for almost any type of requirement and can be very complex for large organisations; we will concentrate on some smaller, home user type configurations.

The worst thing about configuring a proxy server for a site is when users can change their web browser details and go straight out through the firewall without even using the proxy (naughty users !). With some simple restrictions set on the firewall, we can block any outgoing request to a web server that has not come through the proxy.

The following iptables rule lists all of the Safe_ports (and common ports) that Squid allows, and blocks them if they come directly from any of the internal workstations. The only outgoing requests will therefore be those from the proxy running on the gateway; the computers on the internal network are still allowed to send requests to the gateway proxy. You may need to change this rule depending on your network topology.

# is used here as an example internal network range
iptables -I FORWARD -o ppp0 -s -p tcp -m multiport \
    --dports 21,23,70,80,81,82,210,280,443,488,563,591,777,3128,8080 -j DROP

Caution !!     An iptables multiport rule can only list up to 15 port numbers per rule.
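Should more than 15 ports ever need to be blocked, the list can simply be split across two rules. This sketch assumes the same ppp0 interface and an example internal network of

iptables -I FORWARD -o ppp0 -s -p tcp -m multiport \
    --dports 21,23,70,80,81,82,210,280 -j DROP
iptables -I FORWARD -o ppp0 -s -p tcp -m multiport \
    --dports 443,488,563,591,777,3128,8080 -j DROP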

Now that all direct Internet browsing has been disabled, we need to allow access to the proxy server. You need to locate the following line in your configuration file; it is a placeholder telling you where to put your rules. If you put them anywhere else in the configuration file, they may not work.

[bash]# vi /etc/squid/squid.conf

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
Warning !!     Rules are tested in sequential order as they appear in the configuration file. Always check the order of your “http_access deny/allow” rules to ensure they are being enforced correctly.

To allow our internal network to have access to the proxy server, insert this rule. It defines an ACL called INTERNAL for all the IP addresses in the source range (used here as an example; substitute your own internal range). It then allows the INTERNAL ACL to have access.

This is the minimum rule that you require to allow your users to access the cache. Further rules should be used to tighten the restrictions.

acl INTERNAL src
http_access allow INTERNAL

This rule defines an ACL called BADPC with a single source IP address (, for example). It then denies access to that ACL.

acl BADPC src
http_access deny BADPC

The following is a mixed rule that uses two ACLs to deny access. It denies the KIDsPC host (, for example) during an ACL called CLEANTIME, which is in effect Monday to Friday, 3-6 PM.

acl KIDsPC src
acl CLEANTIME time MTWHF 15:00-18:00
http_access deny KIDsPC CLEANTIME

Note !!     When more than one ACL is used in a deny/allow rule, they are processed with the “LOGICAL AND” function. So both ACLs must be true before the rule is enforced.

The following two rules will block all files that end in the extensions “.mp3” and “.exe” respectively. The “-i” flag makes the match case insensitive, so both upper and lower case are matched.

acl FILE_MP3 urlpath_regex -i \.mp3$
http_access deny FILE_MP3
acl FILE_EXE urlpath_regex -i \.exe$
http_access deny FILE_EXE
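If preferred, both extensions can be matched with a single ACL by using an alternation in the regular expression (the ACL name BLOCKED_FILES is our own choice):

acl BLOCKED_FILES urlpath_regex -i \.(mp3|exe)$
http_access deny BLOCKED_FILES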

Domain Blacklists
Whether you’re a system administrator for a large site or simply a parent running a home network, there may come a time when access to certain domains should be controlled or blocked. This can easily be accomplished by introducing a domain blacklist: a file containing all of the domain names that are considered inappropriate for the internal users to access. Squid is configured to check each request made, to ensure it is not within the blacklist.

There are several sites around the Internet that have updated blacklists available for you to download and use; these lists normally contain thousands of entries. Below are the details on how to create your own blacklist. Each entry in the “bad_domains” file should be listed on a separate line. It is also important that only the root and squid users have access to the list, otherwise users may change its contents.

After the blacklist has been created, populated and secured, ensure that you place the appropriate “BAD_DOMAINS” access control policy in the configuration file.

[bash]# vi /etc/squid/bad_domains

[bash]# chown root.squid /etc/squid/bad_domains
[bash]# chmod 640 /etc/squid/bad_domains

acl BAD_DOMAINS dstdom_regex -i "/etc/squid/bad_domains"
http_access deny BAD_DOMAINS

Caution !!     Using regular expressions to match unwanted domain names may also block legitimate sites, such as an entry of “breast” blocking “www.breastcancer.com”. Always check your entries to see if they may affect other domains, or use “dstdomain” instead of “dstdom_regex”.

Now that the proxy server has been configured to allow or deny access based on the access controls you have specified, it’s time to reload the configuration into Squid and test the controls.

[bash]# /etc/init.d/squid reload

Remember, all the rules are tested in sequential order from the top, so putting the “http_access allow INTERNAL” rule above the others would allow full access and no other rules would be tested. As a general rule, you should put your DENY rules before your ALLOW rules; if things aren’t working exactly as you expected, check the order of your rules.

Authenticating Users
Further security can be placed over your Internet access by authenticating valid users before their access is granted. Squid can be told to check for valid users by looking up their username and password details in a common text file. The password values inside the valid user list are subject to a hashing function, so they cannot be compromised by someone simply reading the file “over your shoulder” (shoulder surfing).

The password file can be created using the following commands.

[bash]# touch /etc/squid/passwd
[bash]# chown root.squid /etc/squid/passwd
[bash]# chmod 640 /etc/squid/passwd

Caution !!     The username and password pairs located in the “passwd” file could be subject to a brute force attack. Ensure that only root and squid users have access to this file (hence the “chmod”).

To add users to the password list, use the htpasswd application; you will then be prompted to enter a password for the username. If you are setting up user access for an organisation, always allow users to type their own password here; this stops a user blaming an administrator for misusing their account if problems arise.

[bash]# htpasswd /etc/squid/passwd arab04
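A new entry can be checked from the command line by piping a “username password” pair into the same NCSA helper that Squid uses (the helper path matches the auth_param settings below; adjust it to suit your distribution). The helper prints OK for a valid pair and ERR otherwise.

[bash]# echo "arab04 password" | /usr/lib/squid/ncsa_auth /etc/squid/passwd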

The configuration file now needs to be adjusted so it checks for valid users. Locate the “INTERNAL” access control statement you used earlier and make the following changes. This set of rules will now only allow users that have been authenticated and are located inside your private network.

acl INTERNAL src
acl AUTHUSERS proxy_auth REQUIRED
http_access allow INTERNAL AUTHUSERS

The final configuration required is to tell Squid how to handle the authentication. These listings are already in the configuration file and need to be adjusted to suit your requirements.

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid - Home Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

It’s time to reload Squid’s configuration and test it; good luck.

[bash]# /etc/init.d/squid reload

Configuring a Transparent Proxy
Now that you have successfully configured your Squid proxy server, you will need to configure all of the workstations on your internal network to use it; this may seem like a lengthy task depending on how big your internal network is. It also means that you will need to manually configure every application that connects to remote web servers for information or data exchange, including all web browsers, virus update applications and other such utilities. Hmm, this could take a while.

One great feature of Squid is that it can be used as an HTTPD accelerator, and when configured in conjunction with an iptables redirect rule, it becomes transparent to your network. Why? Because we no longer need to set up all of the applications on our workstations to use the proxy; instead, we can redirect all HTTP requests as they come through our firewall to the transparent proxy. Easier administration.

An important point before proceeding: transparent proxies CAN NOT be used for HTTPS connections over SSL (port 443). Intercepting these would break the server-to-client SSL connection, on which the security and confidentiality of the protocol depend, and would effectively be a “man in the middle” attack on the captured (proxied) packets.

To continue, make the following changes to your Squid configuration file.

[bash]# vi /etc/squid/squid.conf

httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

The following rule is written for our firewall script, as detailed in Chapter 6. The rule redirects all packets from the internal LAN address to the proxy server’s active “http_port” (default 3128). Once the proxy server has the packet it will be processed and returned to the client as normal; the client won’t even know.

[bash]# vi /root/firewall.sh
# Redirect all WWW (port 80) OUTBOUND packets to the Squid Server on port 3128
iptables -t nat -A PREROUTING -i $INT_DEV -s $INT_NET -p tcp --dport 80 -j REDIRECT --to-port 3128
[bash]# /root/firewall.sh
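To confirm that the redirect rule has been loaded, list the NAT table and look for the REDIRECT target on port 80:

[bash]# iptables -t nat -L PREROUTING -n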

Once the Squid configuration has been adjusted, it needs to be reloaded before it will be available.

[bash]# /etc/init.d/squid reload

To test whether the transparent proxy is functioning correctly, type the following command at a command prompt and watch for clients using the Internet; you should see Squid access requests being logged to the screen.

[bash]# tail -f /var/log/squid/access.log
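Once requests are flowing, the same log can be summarised with standard text tools. In Squid’s default native log format the requested URL is the seventh field; this sketch uses a small fabricated /tmp/access.log so the pipeline can be shown end to end (on a live server, point it at /var/log/squid/access.log instead):

```shell
# Build a tiny, fabricated access log (stand-in for /var/log/squid/access.log).
printf '%s\n' \
  '1 0 TCP_HIT/200 500 GET - NONE/- text/html' \
  '2 0 TCP_MISS/200 900 GET - DIRECT/ text/css' \
  '3 0 TCP_HIT/200 500 GET - NONE/- text/html' \
  > /tmp/access.log

# Count requests per URL (field 7) and list the most popular first.
awk '{ count[$7]++ } END { for (u in count) print count[u], u }' /tmp/access.log | sort -rn
```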

Wael Isa