Slammed


I just got SLAMMED with accesses to httpd from
91.230.121.156

I added the address to my firewall to drop it. FYI

host 91.230.121.156
156.121.230.91.in-addr.arpa domain name pointer no-rdns.offshorededicated.net.
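
A minimal sketch of the firewall drop, assuming plain iptables (on CentOS 7 with firewalld the rich rule below does roughly the same; neither is made persistent here):

# drop everything from the offending address (iptables)
iptables -I INPUT -s 91.230.121.156 -j DROP

# roughly equivalent firewalld rich rule
firewall-cmd --add-rich-rule='rule family="ipv4" source address="91.230.121.156" drop'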

Jerry

12 thoughts on - Slammed

  • Are you running WordPress?

    My company’s WordPress installation was getting hammered by an IP in the same netblock, yesterday…look in your httpd logs for repeated POST
    operations to xmlrpc.php.
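
    A quick check, as a sketch assuming the default combined log at /var/log/httpd/access_log (adjust the path per vhost):

    # count xmlrpc.php POSTs per client IP, worst offenders first
    grep 'POST /xmlrpc.php' /var/log/httpd/access_log | awk '{print $1}' | sort | uniq -c | sort -rn | head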

  • Most people don’t even need xmlrpc.php to be open to the world, so I
    prefer to block all requests to it. I have also successfully used ngrep to capture POSTs on a server hosting many WordPress sites and log them to a file that is watched by fail2ban: after X POSTs, automatically ban the IP, for example.

    The reason I did not just monitor the Apache log files for POSTs is that there were so many sites, each with its own log file. I had to aggregate all the POSTs into a single log file so that when the botnet hit multiple WordPress sites it could be identified more easily. Occasionally they’ll only do a couple of POSTs from each IP/bot in the group, so it would evade detection unless you aggregated it all into one log file.
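
    Roughly, the pieces look like this; only a sketch, with a made-up aggregate log (/var/log/wp-posts.log) and jail name, and assuming Apache 2.4 and eth0:

    # Apache 2.4: refuse xmlrpc.php for everyone
    <Files xmlrpc.php>
        Require all denied
    </Files>

    # capture POSTs on port 80 into one aggregate log (run it under a service)
    ngrep -d eth0 -q -W single '^POST ' 'tcp dst port 80' >> /var/log/wp-posts.log

    # /etc/fail2ban/jail.local -- hypothetical jail watching that file; the matching
    # filter.d entry needs a failregex that captures the client IP from the ngrep
    # output as <HOST>
    [wp-post-flood]
    enabled  = true
    filter   = wp-post-flood
    logpath  = /var/log/wp-posts.log
    maxretry = 5
    findtime = 300
    bantime  = 3600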

  • Chris Pemberton wrote:

    Have a look in /etc/fail2ban/filter.d, at the apache filters, and see if you can just uncomment things, or use them as models for what you need.
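
    For example, the stock apache-auth filter that ships in filter.d can be switched on with a short jail entry; a sketch, using the usual CentOS error log path:

    # /etc/fail2ban/jail.local
    [apache-auth]
    enabled  = true
    port     = http,https
    filter   = apache-auth
    logpath  = /var/log/httpd/error_log
    maxretry = 5
    bantime  = 3600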

    mark

  • Disabling XMLRPC completely via wp-config.php is quite easy. I can send the required info when I’m in front of a computer. You can also use an .htaccess rule for Apache to stop requests completely. I’m sure there are also rules for Nginx, lighttpd, etc. that can be found quite easily via Google. Surprised most people don’t have this disabled/blocked already.


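    For the Apache side, a sketch of such an .htaccess rule (2.2-style directives, as on stock CentOS 6; on Apache 2.4 use "Require all denied" instead). On the WordPress side the usual route is the xmlrpc_enabled filter in a small plugin, which turns off the authenticated XML-RPC methods.

    # .htaccess: refuse all requests to xmlrpc.php (Apache 2.2)
    <Files xmlrpc.php>
        Order Deny,Allow
        Deny from all
    </Files>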

  • I use Fail2Ban, which is available from the EPEL repo, to ban these addresses. It works well for SSH attacks by script kiddies as well. I usually block an address for 8 hours.
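
    As a sketch, assuming EPEL is already enabled (the jail is named [sshd] on fail2ban 0.9+; older 0.8.x packages call it [ssh-iptables]):

    yum install fail2ban

    # /etc/fail2ban/jail.local -- ban offenders for 8 hours
    [sshd]
    enabled = true
    bantime = 28800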

  • +1

    I wrote an Apache rewrite rule (saved in a separate file) that I can include on any WordPress site getting hammered by requests to xmlrpc. wp-login also gets brute-forced from time to time.

    I was kicking back an HTTP 410 (Gone, as opposed to 403 or 404). Of course they’re stupid bots, so they keep hammering away!

    With some ISPs using NAT, I prefer the rewrite rule solution … that way it stops the requests and doesn’t block the IP entirely. Pros and cons of course, but I prefer a conservative approach first rather than a heavy handed one.
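
    A sketch of that kind of rule, kept in its own file and Include'd per vhost; the [G] flag is what returns the 410 Gone (the wp-login.php line is illustrative only, since blocking it outright would also lock out legitimate logins):

    RewriteEngine On
    # send 410 Gone for the endpoints the bots hammer
    RewriteRule ^/?xmlrpc\.php$ - [G,L]
    RewriteRule ^/?wp-login\.php$ - [G,L]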

  • All my web sites are configured as virtual hosts. The ‘empty’ default web site (one on every server) redirects all requests to 127.0.0.1. Sometimes I change this to a Chinese consumer site. Why give the hackers and pests an opportunity to annoy you? Send them away before their requests ever reach your real web sites.

    xx.xx.xx.xx is the web server’s IP address. Some of the configuration relates to the previous system of banning every IP directly accessing the server’s IP address.


    DocumentRoot /data/web/do/default/www
    ServerName xx.xx.xx.xx
    CustomLog /data/web/weblogs/acc.000118 combined
    ErrorLog /data/web/weblogs/err.000118.w
    DirectoryIndex banned.php
    HostnameLookups Off

    RedirectMatch permanent ^/(.*)$ http://127.0.0.1/

    The real web sites have entries beginning with, for example, …
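
    A hypothetical named vhost, with example.org standing in for a real site, would begin along these lines:

    <VirtualHost *:80>
        ServerName www.example.org
        DocumentRoot /data/web/do/example/www
    </VirtualHost>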

  • Alas, it is not only the inevitable crawlers from recognised major search engines, computer start-up companies and curious people, but also the determined hackers attempting to probe and break in.

    If a crawler wants a web site, then the crawler should follow the domain name and not an IP address. Conversely, hackers choose IP addresses primarily and domain names secondarily.

    I’m merely redirecting the IP-curious to 127.0.0.1.

    Hope that helps.

    Regards,

    Paul. England, EU.

    Learning until I die or experience dementia.

  • If you mean by ‘scanbot’ a web crawler, then YES.

    I recently moved a sub-domain web site on 1111.2222.com to 1111.3333.com and redirected traffic, including crawlers, with a simple:-


    ………….
    ……………
    ……….

    RedirectMatch permanent ^/(.*)$ http://1111.3333.com/$1

    and Google followed. So did other crawlers.

    I have previously, and successfully, moved whole web sites to different domains and got the crawlers to follow by using the same method.

    Hope this helps.

    Regards,

    Paul. England, EU.