Saturday, February 25, 2017

Centos 7 with two different networks

I have a VM with two NICs. One is on my 10Gb 192.168.70.0 network. This network is not on a switch but is simply a DAC between my PC and the ESXi host, so my PC is 192.168.70.5 and my ESXi host is 192.168.70.90.

The VM's other NIC is on my 1Gb 192.168.10.0 network. This is my main network, with a switch and a route out to the internet (192.168.10.1). So my PC is 192.168.10.5, ESXi is 192.168.10.90, and the router is 192.168.10.1.

The IP addresses of the VM itself are:
192.168.10.22
192.168.70.22


The problem is that when 192.168.70.22 is enabled I am not able to reach the internet. To remedy this I have the following ifcfg files in /etc/sysconfig/network-scripts:

# cat ifcfg-ens192
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=dcc2277d-a060-48a0-8829-350abf95e269
DEVICE=ens192
ONBOOT=yes
DNS1=192.168.10.1
IPADDR=192.168.70.22
PREFIX=24
#GATEWAY=192.168.70.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no


# cat ifcfg-ens224
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens224
UUID=78e2b2a8-d444-4fd2-9751-bb108861392a
DEVICE=ens224
ONBOOT=yes
IPADDR=192.168.10.22
PREFIX=24
GATEWAY=192.168.10.1
DNS1=192.168.10.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no
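With DEFROUTE=yes on ens224 and the GATEWAY line commented out on ens192, the VM should end up with exactly one default route, via 192.168.10.1. A quick offline sketch of that check (the sample table below assumes the interface names and addresses above; on the VM you would simply run `ip route show`):

```shell
# Sample of what "ip route show" should report with these ifcfg files
# (interface names and addresses assumed from this post):
routes='default via 192.168.10.1 dev ens224
192.168.10.0/24 dev ens224 proto kernel scope link src 192.168.10.22
192.168.70.0/24 dev ens192 proto kernel scope link src 192.168.70.22'

# Count default routes -- there must be exactly one, on ens224:
echo "$routes" | awk '/^default/ {n++} END {print n " default route(s)"}'
# → 1 default route(s)
```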


I then set rp_filter to loose mode (value 2), so that replies arriving over the "other" interface are not dropped by reverse-path filtering:

# sysctl net.ipv4.conf.all.rp_filter=2
net.ipv4.conf.all.rp_filter = 2
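The sysctl command above only changes the running kernel; to be safe across reboots the value can also be persisted. A minimal sketch (the file name is my own choice; on the VM it would go in /etc/sysctl.d/, but it is written to the current directory here):

```shell
# Persist loose-mode reverse-path filtering; on the VM this would be
# /etc/sysctl.d/99-rpfilter.conf (the filename is arbitrary)
cat > 99-rpfilter.conf <<'EOF'
net.ipv4.conf.all.rp_filter = 2
EOF

# Load it by hand (as root) with: sysctl -p 99-rpfilter.conf
cat 99-rpfilter.conf
```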

Restart network with:

# systemctl restart network

I can now ping the internet (8.8.8.8), all nodes on 192.168.10.0, and all nodes on 192.168.70.0 (ESXi 192.168.70.90, PC 192.168.70.5, and the VM itself at 192.168.70.22).
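That reachability test can be scripted as a quick sweep (addresses taken from this post; -c 1 -W 2 sends one packet with a two-second timeout):

```shell
# Ping each node once and print ok/FAIL per host
# (host list assumed from this post's addresses)
results="$(for host in 8.8.8.8 192.168.10.1 192.168.10.5 192.168.10.90 \
                       192.168.70.5 192.168.70.90; do
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "$host ok"
    else
        echo "$host FAIL"
    fi
done)"
echo "$results"
```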

A quick reboot and a retest of access, and I am good.

I also want to tune the 10Gb interface, since I want to test SSD disk caching on this VM and need the fastest network I can get. FYI: my PC at 192.168.70.5 has an M.2 512GB 950 Pro. I am using a 2.5" 512GB 850 Pro for cache on the VM at 192.168.70.22; this 850 Pro is in the ESXi server.

So now tune the 10Gb side of the VM:

---turn off NetworkManager

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager-wait-online
systemctl disable NetworkManager-wait-online
systemctl enable network
systemctl start network


reboot and sanity check

systemctl status NetworkManager
systemctl status network

---end turn off NetworkManager

---tune 10Gb CNA

systemctl stop irqbalance
systemctl disable irqbalance
systemctl stop cpuspeed
systemctl disable cpuspeed

(On CentOS 7 the old service/chkconfig commands just forward to systemctl, so they are only listed once here. The cpuspeed service may not exist on CentOS 7; the commands are harmless if it is absent.)

vi /etc/sysconfig/network-scripts/ifcfg-eth???   (the 10Gb interface; ens192 in my setup)
MTU="9000"
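With MTU 9000 set on both ends, jumbo frames can be verified with a don't-fragment ping. The largest unfragmented ICMP payload is the MTU minus the 20-byte IP header and the 8-byte ICMP header:

```shell
# Largest unfragmented ICMP payload for MTU 9000:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes
MTU=9000
PAYLOAD=$((MTU - 28))
echo "payload=$PAYLOAD"   # → payload=8972

# On the VM, against the PC's 10Gb address (address from this post);
# -M do sets the don't-fragment bit so an oversized packet fails loudly:
#   ping -M do -s "$PAYLOAD" 192.168.70.5
```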

vi /etc/sysctl.conf
# -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
# (note: tcp_rmem/tcp_wmem are in bytes, but tcp_mem is in pages)
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000

---end tune 10Gb CNA 
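A reboot (or `sysctl -p` as root) loads the new values, and a small helper can spot-check that they took. This is a sketch; `sysctl_check` is my own hypothetical helper name:

```shell
# Compare a live sysctl value against the value we put in sysctl.conf
sysctl_check() {
    want="$2"
    got="$(sysctl -n "$1" 2>/dev/null || echo unavailable)"
    if [ "$got" = "$want" ]; then
        echo "$1 ok"
    else
        echo "$1: got $got, want $want"
    fi
}

# Spot-check two of the tuned values (run as root after sysctl -p):
sysctl_check net.ipv4.tcp_sack 0
sysctl_check net.core.rmem_max 524287
```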
