.htaccess File


Hello,

My home system on a DSL line is getting worn out by badly behaved robots.

A while back, I created a .htaccess file that blocks countries by IP block. It’s 2MB in size.

I have been running Linux since Slackware 1.0 and moved to Red Hat around 2.0. I got started after running a BBS with a doorway for newsgroups. Been hooked ever since.

So, today, I tried following the directions on the apache.org website, https://httpd.apache.org/docs/current/howto/htaccess.html, to move the contents of the .htaccess file to a file in the /var/www/htdocs directory.

I’m just not following or understanding it. The .htaccess file works, but on a slow DSL line I don’t want the hits.

I added the following to my httpd.conf:

AddType text/htdocs ".txt"

And copied my .htaccess to /var/www/htdocs as htaccess.txt.

In the example on the Apache website, I don’t get the: AddType text/example ".exm" — where did they come up with .exm?

TIA

16 thoughts on - .htaccess File

  • What version of CentOS are you using?

    For 7.x, and I think 6.x, there is a much simpler way of doing this, using mod_geoip from the Epel repository.

    It rejects all unwanted HTTP connections with 403 responses. I drive it from a geoip.conf file.
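    A minimal sketch of what such a geoip.conf can look like (the country codes, paths, and Directory block here are placeholders, not the poster's actual file; the GeoIPEnable/GeoIPDBFile directives and the GEOIP_COUNTRY_CODE variable come from mod_geoip itself):

    ```apache
    # Load the GeoIP lookup (EPEL mod_geoip package).
    <IfModule mod_geoip.c>
        GeoIPEnable On
        GeoIPDBFile /usr/share/GeoIP/GeoIP.dat
    </IfModule>

    # Tag requests from unwanted countries (CN/RU are just examples) ...
    SetEnvIf GEOIP_COUNTRY_CODE CN BlockCountry
    SetEnvIf GEOIP_COUNTRY_CODE RU BlockCountry

    # ... and answer tagged requests with 403.
    <Directory "/var/www/html">
        Order allow,deny
        Allow from all
        Deny from env=BlockCountry
    </Directory>
    ```

    The Order/Allow/Deny syntax is Apache 2.2 (CentOS 6); on CentOS 7's Apache 2.4 it works via mod_access_compat.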

  • What exactly is slow when you receive requests from remote clients that you don’t want? Are you actually seeing problems when clients make requests and Apache has to read in your 2MB .htaccess on every request?
    And if so, you might also consider moving your blocking even higher, to iptables rules, so that Apache never even has to deal with them.

    Where did you get the idea that this is how to do global Apache configuration? This won’t actually do anything useful.

    They made it up as an example, to demonstrate how directives work in
    .htaccess files versus global Apache config files. It’s not meant to demonstrate how to add blocking rules to the global config.

    Here’s the main point of that page:

    “Any directive that you can include in a .htaccess file is better set in a Directory block, as it will have the same effect with better performance.”

    So, to achieve what I think you’re hoping, take all the IPs you’re denying in your .htaccess file, put them into a relevant Directory block in a config file under /etc/httpd, reload Apache, and move your
    .htaccess file out of the way. Then httpd will no longer have to read in
    .htaccess for every HTTP request.
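    As a sketch (the file name and addresses are placeholders; again, Order/Deny is 2.2 syntax, handled on 2.4 by mod_access_compat):

    ```apache
    # /etc/httpd/conf.d/blocklist.conf (example name)
    <Directory "/var/www/html">
        Order allow,deny
        Allow from all
        # Paste the Deny lines from the old .htaccess here:
        Deny from 192.0.2.0/24
        Deny from 198.51.100.0/24
    </Directory>
    ```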

    Or, alternatively, block those IPs using iptables instead. However, clients will still be able to make those requests, and that will still use bandwidth on your DSL. The only way to eliminate that altogether is to block those requests on the other side of your link. That’s something you’d have to work out with your ISP, but I don’t think it’s common for ISPs to put up blocking rules solely for this purpose, or to allow home users to configure such blocks themselves.
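    A minimal iptables version of the same block (run as root; the CIDR is a placeholder):

    ```shell
    # Drop packets from a blocked range before Apache ever sees them.
    iptables -I INPUT -s 192.0.2.0/24 -p tcp --dport 80 -j DROP
    # Persist across reboots (CentOS 6 style):
    service iptables save
    ```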

    –keith

    [Thomas E Dukes]
    Thanks,

    I’ll take a look at that as well. I am getting hit on several services but httpd is getting the majority.

    Thanks!!

    [Thomas E Dukes]
    I set up an ipset but quickly ran out of room in the set. I guess I’ll have to set up multiple sets. Right now I’m just trying to take some load off my home server from bad bots, but I am getting hit on other services as well.

    There’s nothing on the webserver except a test site I use. I’m just trying to keep out the ones that ignore robots.txt.

    Thanks!!

  • If it’s just a test server, then I’d be tempted to use HTTP auth at the top level. Most robots will be blocked by that, and you can use iptables to block the ones that try to guess your password, perhaps with fail2ban.
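    A sketch of top-level Basic auth (the password-file path and user name are examples; note that over plain HTTP the password goes across the wire barely protected):

    ```apache
    # Create the password file first, e.g.:
    #   htpasswd -c /etc/httpd/conf/htpasswd testuser
    <Directory "/var/www/html">
        AuthType Basic
        AuthName "Test site"
        AuthUserFile /etc/httpd/conf/htpasswd
        Require valid-user
    </Directory>
    ```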

  • I’m not familiar with ipsets, but from a quick Google search it seems like you can increase the size of an ipset (or make a new larger one and migrate your IPs to it). Multiple sets look like they’d work as well.
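    One way to do that migration, as a sketch: pull the "Deny from" entries out of the existing .htaccess and feed them to `ipset restore`. The set name, maxelem, and sample addresses below are illustrative, not from the thread.

    ```shell
    # Convert "Deny from <addr>" lines into `ipset restore` input.
    htaccess_to_ipset() {
        set_name=$1
        # A larger maxelem than the 65536 default; hash:net also lets one
        # entry cover a whole CIDR block instead of one IP each.
        echo "create $set_name hash:net maxelem 262144"
        awk -v set="$set_name" '
            tolower($1) == "deny" && tolower($2) == "from" && tolower($3) != "all" {
                print "add", set, $3
            }'
    }

    # Example run on a two-line snippet:
    printf 'Deny from 192.0.2.0/24\nAllow from all\n' | htaccess_to_ipset badbots
    # Load it for real (as root):
    #   htaccess_to_ipset badbots < .htaccess | ipset restore
    #   iptables -I INPUT -m set --match-set badbots src -j DROP
    ```
    
    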

    Another possibility for you to look at is sshguard. It can protect against brute force SSH attacks (using iptables rules, which is how I
    use it) but IIRC it can also protect against http attacks (I’ve never used it that way, so I don’t know how difficult this is).

    Can you be more specific about the “load” you’re trying to mitigate? Is it really the load on your home system, or is it that attackers are using your bandwidth, or a combination?

    –keith

  • I use fail2ban; it provides functionality similar to sshguard, plus Apache mod_evasive (for HTTP DoS attacks).

    — Arun Khan

    [Thomas E Dukes]
    I saw that as well but it was a little vague on how to do that.

    Thanks!!

  • Hi,

    Do you control your home server? If so, then .htaccess is the wrong solution: incorporate the blocks into your iptables firewall, and then use your Apache configuration file to restrict any remaining unwanted visitors.

    .htaccess (it’s possible in Apache to rename it) is inefficient, and only suitable as a second-rate solution when you are using a hosted service and lack full control of the server. VPSs are cheap and a better alternative to hosted mail and web.

    On my servers (C5 and C6) I have three sets of blocks in iptables:

    * permanent for all ports
    * only for web (port 80)
    * only for emails (port 25)

    For web and email there is a permanent table plus a monthly one (one for every month). Perpetual pests go in the permanent tables and irritants in the monthly ones – otherwise the lists of banned IPs would get too large.

    A compromised computer trying to send me junk mail, wrongly access a web page, or break into SQL (instantly identified, and its IP instantly blocked, because I impose string-size limits on the ?key=…. values) has its IP added to the monthly list, where it remains until one month after the last access from that address.

    I am unwilling to be a passive victim of junk mail and web hackers.

    All home-made solutions but effective and robust. CentOS made all this possible (sincere thanks to the C-Team; they are all ‘A*’ rated).

    [Thomas E Dukes]

    Yes. I knew .htaccess wasn’t the best method. I didn’t know about ipsets; that makes this so much easier.

    [Thomas E Dukes]
    Same here!!

    [Thomas E Dukes]
    Ditto!!

    Thanks!!

  • There are two easy (though not quantitative) tests you can do.

    First, look at the load on the server. If httpd is using a lot of CPU and pushing your load average over 1, your main issue is probably the load generated by the .htaccess reads.

    If you have another system on your home network, try a speed test. If it performs badly, you probably have attackers eating your bandwidth.

    You and another poster mentioned fail2ban; if you can get that configured to watch and protect both sshd and httpd that will help both problems quite a bit.
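    A sketch of a jail.local that enables both (jail names as shipped with fail2ban 0.9 from EPEL; the ban and retry numbers are examples, not recommendations):

    ```ini
    # /etc/fail2ban/jail.local
    [DEFAULT]
    bantime  = 3600
    findtime = 600
    maxretry = 5

    [sshd]
    enabled = true

    [apache-auth]
    enabled = true

    [apache-badbots]
    enabled = true
    ```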

    –keith

    [Thomas E Dukes]
    It’s not necessarily the load on my server, but the bandwidth on my DSL.

    [Thomas E Dukes]
    I have a Fire Stick on my network that I stream movies with. Getting beat up by bad bots isn’t helping.


    [Thomas E Dukes]
    I have jails set up for all the services I’m running. Not sure it’s working; I’m not getting any emails.

    Thanks!!

  • Check your logs. fail2ban keeps a log of what it’s doing, and you can also check the appropriate fail2ban targets (iptables, /etc/hosts.deny, or the Apache config file) to see whether they are being populated. You should certainly see something; if you don’t, it’s likely a misconfiguration.
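    A few commands that make that check concrete (run as root; the f2b- chain prefix is fail2ban 0.9's naming, older versions used fail2ban-):

    ```shell
    tail -n 50 /var/log/fail2ban.log   # what fail2ban thinks it is doing
    fail2ban-client status             # which jails are active
    fail2ban-client status sshd        # per-jail failure/ban counts
    iptables -L -n | grep -i f2b       # the chains it actually created
    ```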

    –keith

    [Thomas E Dukes]

    I did change the MTA from sendmail to mail, since CentOS uses Postfix.

    I may need to change that back.

    Thanks!!
