Can’t Disable Tcp6 On CentOS 7


hey all,

I tried disabling tcp v6 on a C7 box this way:

[root@puppet:~] #cat /etc/sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an
# /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

Then going:

[root@puppet:~] #sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
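One quick way to confirm whether the setting actually took effect is to read the kernel's live view of the sysctl (a sketch; 1 means IPv6 is disabled on all interfaces):

```shell
# Live kernel view of the sysctl; prints 1 when IPv6 is disabled on all
# interfaces, 0 when it is enabled. The fallback covers a missing ipv6 module.
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "ipv6 module not loaded"
```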

Then I restarted xinetd for good measure:

[root@puppet:~] #systemctl restart xinetd
[root@puppet:~] #

Because I’m trying to hit nrpe on this host.

Yet, xinetd/nrpe still seems to be listening on TCP v6!

[root@puppet:~] #netstat -tulpn | grep -i listen | grep xinetd
tcp6       0      0 :::5666       :::*        LISTEN      2915/xinetd

This is a CentOS 7.1 box:

[root@puppet:~] #cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)

What am I doing wrong? I need to be able to disable tcpv6 completely!

Thanks Tim

12 thoughts on - Can’t Disable Tcp6 On CentOS 7

  • It’s listening on both IPv6 and IPv4. Why, specifically, is that a problem?

    You could add “ipv6.disable=1” to your kernel args.

  • Ultimately you can disable IPv6 completely by disabling the ipv6 kernel module. The FAQ below also includes a reason why you may not want to do that: http://wiki.CentOS.org/FAQ/CentOS7#head-8984faf811faccca74c7bcdd74de7467f2fcd8ee

    Alternatively, on top of what you have already done (disabling IPv6 via
    sysctl), you can set the xinetd service to listen on IPv4 only. There are
    other examples in that FAQ of how to accomplish this with other services.

    flags = IPv4
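    For context, a minimal sketch of what the xinetd service entry for nrpe might look like with that flag; the paths and values here are assumptions, not taken from the poster's system:

```
# /etc/xinetd.d/nrpe (sketch; paths/values assumed)
service nrpe
{
    flags           = IPv4        # bind the listener to IPv4 only
    socket_type     = stream
    port            = 5666
    wait            = no
    user            = nagios
    server          = /usr/local/nagios/bin/nrpe
    server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
    only_from       = 127.0.0.1
    disable         = no
}
```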

    However, if you feel you must disable IPv6 completely and you are not running SELinux or ip6tables, this approach works:

    – Edit /etc/default/grub and append ‘ipv6.disable=1’ to the GRUB_CMDLINE_LINUX configuration variable

    – Regenerate the grub configuration file: grub2-mkconfig -o /boot/grub2/grub.cfg

    – Reboot
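    The three steps above can be sketched in shell. To keep this runnable anywhere, the sed is demonstrated against a scratch copy of /etc/default/grub rather than the real file, and the sample GRUB_CMDLINE_LINUX value is an assumption:

```shell
# Work on a scratch copy; on a real system you would edit /etc/default/grub itself.
grub=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"' > "$grub"

# Append ipv6.disable=1 inside the existing quoted value.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 ipv6.disable=1"/' "$grub"
cat "$grub"
# -> GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet ipv6.disable=1"

# On the real system, follow with (BIOS layout assumed; UEFI paths differ):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
#   reboot
```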

  • The central problem seems to be that the monitoring host can’t hit nrpe on port 5666 UDP.

    [root@monitor1:~] #/usr/local/nagios/libexec/check_nrpe -H puppet.mydomain.com
    CHECK_NRPE: Socket timeout after 10 seconds.

    It is listening on the puppet host on port 5666:

    [root@puppet:~] #lsof -i :5666
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    xinetd 2915 root 5u IPv6 24493 0t0 TCP *:nrpe (LISTEN)

    And the firewall is allowing that port:

    [root@puppet:~] #firewall-cmd --list-ports
    5666/udp

    But if I check the port using nmap

    [root@monitor1:~] #nmap -p 5666 puppet.mydomain.com

    Starting Nmap 6.40 ( http://nmap.org ) at 2015-05-03 22:51 UTC
    Nmap scan report for puppet.jokefire.com (216.120.250.140)
    Host is up (0.012s latency).

    PORT     STATE    SERVICE
    5666/tcp filtered nrpe

    That port is closed despite the port being allowed on the firewall.

    So I thought that the problem was that xinetd was listening to port 5666
    only on tcp v6. And when the monitoring host hits the puppet host using tcp v4 it can’t because only tcp v6 is active on that port.

    You mention that it’s listening on both TCP v4 and v6, but I only see v6
    in that output. How are you determining that?

    It’s a problem because the port does not appear to be open from the monitoring host:

    [root@monitor1:~] #nmap -p 5666 puppet.mydomain.com

    Starting Nmap 6.40 ( http://nmap.org ) at 2015-05-03 22:33 UTC
    Nmap scan report for puppet.jokefire.com (216.120.250.140)
    Host is up (0.011s latency).

    PORT     STATE    SERVICE
    5666/tcp filtered nrpe

    What am I doing wrong? I need to be able to disable tcpv6 completely!

    Worth a shot!

  • Is it working on localhost or not? It could also be an SELinux problem, if the context is not correct.

  • It’s working on localhost:

    [root@puppet:~] #telnet localhost 5666
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.

    I notice that if I stop the firewall on the puppet host (for no more than
    2 seconds) and hit NRPE from the monitoring host, it works:

    [root@monitor1:~] #/usr/local/nagios/libexec/check_nrpe -H puppet.mydomain.com
    NRPE v2.15

    But as soon as the firewall has been enabled on the puppet host (a microsecond later) I get this result:

    [root@monitor1:~] #/usr/local/nagios/libexec/check_nrpe -H puppet.mydomain.com
    connect to address 216.120.xxx.xxx port 5666: No route to host
    connect to host puppet.mydomain.com port 5666: No route to host

    And nmap from the monitoring host tells me that the port is closed:

    [root@monitor1:~] #nmap -p 5666 puppet.mydomain.com

    Starting Nmap 6.40 ( http://nmap.org ) at 2015-05-03 23:20 UTC
    Nmap scan report for puppet.jokefire.com (216.120.250.140)
    Host is up (0.011s latency).

    PORT     STATE    SERVICE
    5666/tcp filtered nrpe

    Back on the puppet host I verify that the port is open for UDP:

    [root@puppet:~] #firewall-cmd --list-ports
    5666/udp

    That should be right AFAIK.

    Can anybody tell me what I’m doing wrong ?

    Thanks Tim

  • This is using TCP

    So why are you opening a UDP port?

  • Tim,

    Where did you install this nrpe package from? Is SELinux running in enforcing mode (check with the getenforce command)? Try disabling it with setenforce 0. And why are you running it under xinetd? The usual way is to run it as the nrpe daemon.

    Test against it with check_nrpe, not with telnet.

  • Eero,

    where did you installed this nrpe package? is selinux running enforcing

    For NRPE I usually do a source install with these flags:

    ./configure
    make all
    make install-plugin
    make install-daemon
    make install-daemon-config
    make install-xinetd

    Rather than a yum install. If I install the nrpe package from yum, I don’t find a check_nrpe script on the system for some reason!

    I demonstrate this on another system than the ones I’ve been working with in this thread:

    [root@monitor1:~] #rpm -qa | grep nrpe | grep -v mcollective
    nrpe-2.15-2.el7.x86_64

    [root@monitor1:~] #find / -name "check_nrpe"
    [root@monitor1:~] #

    So I’m more comfortable with a source install.

    test against with check_nrpe, not using telnet.

    I actually solved the problem by opening the port for TCP instead of UDP on the puppet host:

    firewall-cmd --permanent --add-port=5666/tcp

    Then from the monitoring host:

    [root@monitor1:~] #/usr/local/nagios/libexec/check_nrpe -H puppet.mydomain.com
    NRPE v2.15

    So it’s all good at this point. I’m not sure why the instructions I
    followed said to open the port under UDP... Had I just done this in the
    first place I would have saved a lot of trouble.

    Thanks for the input guys!! I’m glad the problem is solved now.

  • I see that there’s been quite a bit of discussion on this issue, already, but I don’t believe I’ve seen anyone note/mention this:

    The nmap output above does not indicate that the port is closed; it indicates that the port is open but is being filtered by your firewall rules.

    You might also want to check your firewall rules to ensure that port
    5666 is allowing connections from the client system(s) in question.

  • That’s because the ‘check_nrpe’ command isn’t in the nrpe package. It’s in the nagios-plugins-nrpe package. The executable is installed, alongside all the other nagios check commands, as
    /usr/lib64/nagios/plugins/check_nrpe.

  • Got it! Thanks, Johnathan! I’ll make sure I take note of that. I’d rather use packages than source installs as a regular practice.

    Thanks, Tim

  • On Linux, IPv4 is mapped inside the IPv6 address space. An application that listens on the IPv6 wildcard address (::) is listening on both IPv4 and IPv6.
    For example, look at TCP port 22 for SSH.
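    Whether a wildcard IPv6 socket also accepts IPv4 is governed by the system-wide bindv6only setting (and the per-socket IPV6_V6ONLY option). A quick way to inspect the default, where 0 means dual-stack, which is the Linux default:

```shell
# 0 = v6 wildcard sockets also accept IPv4 (dual-stack); 1 = v6-only sockets.
cat /proc/sys/net/ipv6/bindv6only 2>/dev/null || echo "IPv6 not available"
```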