
Tuesday, April 26, 2016

CentOS 7 ftp server install and setup with vsftpd

# yum -y install vsftpd ftp


# vi /etc/vsftpd/vsftpd.conf


Disallow anonymous, unidentified users from accessing files via FTP by changing the anonymous_enable setting to NO:
anonymous_enable=NO

Allow local users to log in by changing the local_enable setting to YES:
local_enable=YES

If you want local users to be able to write to a directory, change the write_enable setting to YES:
write_enable=YES

Local users will be ‘chroot jailed’ and they will be denied access to any other part of the server; change the chroot_local_user setting to YES:
chroot_local_user=YES
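The four edits above can also be scripted. A sketch using sed, run against a scratch copy so you can diff the result before replacing the real /etc/vsftpd/vsftpd.conf (the /tmp path and the printf fallback lines are my own scaffolding, not part of the original steps):

```shell
# Work on a scratch copy; if the real file is absent, fake up the stock defaults.
cp /etc/vsftpd/vsftpd.conf /tmp/vsftpd.conf 2>/dev/null || \
    printf 'anonymous_enable=YES\nlocal_enable=YES\nwrite_enable=YES\n' > /tmp/vsftpd.conf

# Flip the three stock settings in place.
sed -i -e 's/^anonymous_enable=.*/anonymous_enable=NO/' \
       -e 's/^#\?local_enable=.*/local_enable=YES/' \
       -e 's/^#\?write_enable=.*/write_enable=YES/' /tmp/vsftpd.conf

# chroot_local_user is usually not in the stock file, so append it if missing.
grep -q '^chroot_local_user=' /tmp/vsftpd.conf || echo 'chroot_local_user=YES' >> /tmp/vsftpd.conf
```

Diff /tmp/vsftpd.conf against the original, then copy it into place.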

# systemctl enable vsftpd 
# systemctl restart vsftpd

# firewall-cmd --permanent --add-port=21/tcp
# firewall-cmd --reload 
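One follow-up worth knowing: with only port 21 open, passive-mode data connections will be blocked by the firewall. A hedged sketch of the usual fix — the port range here is my choice, not from the original post:

```shell
# Append to /etc/vsftpd/vsftpd.conf -- pin the passive data ports to a known range
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40100
```

Then open that range as well (# firewall-cmd --permanent --add-port=40000-40100/tcp, # firewall-cmd --reload) and restart vsftpd.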


 

Sunday, April 17, 2016

CentOS 7 network interfaces come up out of order

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.100  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 2600:8800:2580:eda:225:90ff:fe5d:a401  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::225:90ff:fe5d:a401  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:5d:a4:01  txqueuelen 1000  (Ethernet)
        RX packets 5356  bytes 1745688 (1.6 MiB)
        RX errors 0  dropped 4  overruns 0  frame 0
        TX packets 2480  bytes 396936 (387.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xdf700000-df77ffff

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 192.168.80.100  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::21b:21ff:febb:7ad0  prefixlen 64  scopeid 0x20<link>
        ether 00:1b:21:bb:7a:d0  txqueuelen 1000  (Ethernet)
        RX packets 1047  bytes 122429 (119.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1724  bytes 1041752 (1017.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3296  bytes 1705774 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3296  bytes 1705774 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



I am missing eth2 and eth3 from the ifconfig output above.

----came up wrong
# ls -lsa /sys/class/net/
total 0
0 drwxr-xr-x  2 root root 0 Apr 17 08:45 .
0 drwxr-xr-x 52 root root 0 Apr 17 08:45 ..
0 lrwxrwxrwx  1 root root 0 Apr 17 08:46 eth0 -> ../../devices/pci0000:00/0000:00:1b.0/0000:06:00.0/net/eth0
0 lrwxrwxrwx  1 root root 0 Apr 17 08:46 eth1 -> ../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:09.0/0000:04:00.0/net/eth1
0 lrwxrwxrwx  1 root root 0 Apr 17 08:46 eth2 -> ../../devices/pci0000:00/0000:00:1f.6/net/eth2
0 lrwxrwxrwx  1 root root 0 Apr 17 08:46 eth3 -> ../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:09.0/0000:04:00.1/net/eth3
0 lrwxrwxrwx  1 root root 0 Apr 17 08:45 lo -> ../../devices/virtual/net/lo

----end came up wrong

----came up right
# ls -lsa /sys/class/net/
total 0
0 drwxr-xr-x  2 root root 0 Apr 17 09:51 .
0 drwxr-xr-x 52 root root 0 Apr 17 09:51 ..
0 lrwxrwxrwx  1 root root 0 Apr 17 09:51 eth0 -> ../../devices/pci0000:00/0000:00:1b.0/0000:06:00.0/net/eth0
0 lrwxrwxrwx  1 root root 0 Apr 17 09:51 eth1 -> ../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:09.0/0000:04:00.0/net/eth1
0 lrwxrwxrwx  1 root root 0 Apr 17 09:51 eth2 -> ../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:09.0/0000:04:00.1/net/eth2
0 lrwxrwxrwx  1 root root 0 Apr 17 09:51 eth3 -> ../../devices/pci0000:00/0000:00:1f.6/net/eth3
0 lrwxrwxrwx  1 root root 0 Apr 17 09:51 lo -> ../../devices/virtual/net/lo

----end came up right

As you can see, the interfaces are all present in /sys/class/net/; my issue is that eth2 and eth3 sometimes get swapped during a reboot.
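A quick way to see which PCI device each interface name landed on for the current boot is to read the sysfs symlinks directly; a small sketch (nothing in it is specific to this hardware):

```shell
# Print each interface name alongside the PCI device path it resolved to this boot.
for nic in /sys/class/net/*; do
    printf '%-6s -> %s\n' "$(basename "$nic")" "$(readlink -f "$nic")"
done
```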

The solution is to get udev to ignore these devices and let ifup bring them up in order:

# vi /etc/udev/rules.d/10-local.rules
SUBSYSTEM=="pci", SYSFS{class}=="0x020000", OPTIONS="ignore_device"


Make sure you have HWADDR= in your ifcfg-ethX files to pin each interface to its hardware address.
Example:
HWADDR=00:25:90:5d:a4:00


Now get that hardware address and additional information about your interfaces:

# lspci | grep -i ethernet
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V (rev 31)
04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
06:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)


Match the PCI address from lspci | grep -i ethernet to the PCI address from ls -lsa /sys/class/net/
Example:
Let's look at 00:1f.6 from the lspci | grep -i ethernet output.
We can see it in the ls -lsa /sys/class/net/ output mapped as eth3.
Now get some more info on eth3:

# ethtool -i eth3
driver: e1000e
version: 3.2.5-k
firmware-version: 0.8-4
bus-info: 0000:00:1f.6
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no


And get the permanent hardware address we need for the ifcfg-eth3 file:

# ethtool -P eth3
Permanent address: 00:25:90:5d:a4:00
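Putting the pieces together, here is a minimal sketch of /etc/sysconfig/network-scripts/ifcfg-eth3. Only HWADDR is the point of this post; the address values are illustrative (chosen to match the ifconfig output below), and NM_CONTROLLED=no is my assumption for a box where NetworkManager should leave the NIC alone:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth3 (illustrative addresses)
DEVICE=eth3
HWADDR=00:25:90:5d:a4:00   # permanent address from ethtool -P
BOOTPROTO=static
IPADDR=10.10.10.100
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
```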



Now I can reboot, power cycle, etc., and all my interfaces come up correctly.

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.100  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 2600:8800:2580:eda:225:90ff:fe5d:a401  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::225:90ff:fe5d:a401  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:5d:a4:01  txqueuelen 1000  (Ethernet)
        RX packets 3303  bytes 1144712 (1.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1660  bytes 186249 (181.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xdf700000-df77ffff

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 192.168.80.100  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::21b:21ff:febb:7ad0  prefixlen 64  scopeid 0x20<link>
        ether 00:1b:21:bb:7a:d0  txqueuelen 1000  (Ethernet)
        RX packets 491  bytes 54845 (53.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 997  bytes 599121 (585.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 192.168.90.100  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::21b:21ff:febb:7ad2  prefixlen 64  scopeid 0x20<link>
        ether 00:1b:21:bb:7a:d2  txqueuelen 1000  (Ethernet)
        RX packets 4152  bytes 729960 (712.8 KiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 4663  bytes 1325114 (1.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth3: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.10.100  netmask 255.255.255.0  broadcast 10.10.10.255
        ether 00:25:90:5d:a4:00  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdf800000-df820000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3138  bytes 1365194 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3138  bytes 1365194 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



Enjoy!

Wednesday, April 13, 2016

Asterisk setup for Flowroute SIP trunk

At bottom of /etc/asterisk/sip.conf

[100]
type=friend
callerid="Asterisk 100" <100>
secret=my_password_here
context=internal
host=dynamic
allow=all
dtmfmode=rfc2833
;
;
;
[flowroute] ;keep this lowercase, do not change format
type=friend
secret=my_secret_here

username=my_username_here
host=sip.flowroute.com
dtmfmode=rfc2833
context=inbound ;change to 'ext-did' or 'from-trunk' for asterisk@home
canreinvite=no
allow=ulaw
;allow=g729 ;uncomment this line if you have G.729 licenses installed.
insecure=port,invite
fromdomain=sip.flowroute.com


At bottom of /etc/asterisk/extensions.conf

[internal]
exten => _1NXXXXXXXXX,1,Dial(SIP/${EXTEN}@flowroute)
;Send NANPA (USA) as 11 digit
exten => _011.,1,Dial(SIP/${EXTEN:3}@flowroute)
;dialing format - SIP/{countrycode}{number}@flowroute


;used to pass extension dialed, 100, to registered phone of 100
exten => 100,1,Dial(SIP/100,20)
exten => 100,n,Playback(vm-goodbye)
exten => 100,n,Hangup


Now from the Asterisk console (asterisk -r) do:

core reload
sip reload
dialplan reload

sip show peers
sip show registry
sip show channels

Make calls.

Enjoy!

Wednesday, March 30, 2016

ESXi 5.5 Remove partitions from disk

/dev/disks # ls
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:1
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:2
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:3
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:5
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:6
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:7
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:8
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:9
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212:1
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201:1
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112:1
vml.0100000000202020202057442d574d43334630343632323132574443205744
vml.0100000000202020202057442d574d43334630343632323132574443205744:1
vml.0100000000202020202057442d574d43334630343839323031574443205744
vml.0100000000202020202057442d574d43334630343839323031574443205744:1
vml.0100000000202020202057442d574d43334631343436313132574443205744
vml.0100000000202020202057442d574d43334631343436313132574443205744:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:2
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:3
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:5
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:6
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:7
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:8
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:9
/dev/disks #

/dev/disks # partedUtil delete "t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112" 1


Do this for any other drives you want to remove partitions from (you can inspect a disk's partition table first with partedUtil getptbl <disk>). I did all three WD 500GB drives. When finished:

/dev/disks # ls
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:1
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:2
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:3
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:5
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:6
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:7
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:8
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:9
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
vml.0100000000202020202057442d574d43334630343632323132574443205744
vml.0100000000202020202057442d574d43334630343839323031574443205744
vml.0100000000202020202057442d574d43334631343436313132574443205744
vml.01000000005332314e4e45414732303432363241202020202053616d73756e
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:2
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:3
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:5
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:6
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:7
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:8
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:9
/dev/disks #


As you can see, all the partitions are gone. I can now add these drives under the ESXi server for images, ISO storage, etc.

Sunday, March 27, 2016

CentOS 7 create mdadm raid0 with mount and NFS share

Install mdadm

# yum -y install mdadm*

Get a list of the disks to be used in the array

# fdisk -l

Remove partitions if needed

# fdisk /dev/sdb
p    (print the current partition table)
d    (delete a partition; repeat until none remain)
p    (verify the table is now empty)
w    (write the changes and exit)
#

Repeat for all other drives if needed

Now create the array. I am using RAID 0 with 3 devices:

# mdadm -C /dev/md0 --level=raid0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd 

Format it

# mkfs.ext4 /dev/md0

Inspect your work

# mdadm --detail /dev/md0
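One gotcha the steps here don't cover: without an entry in /etc/mdadm.conf, the array can come back as /dev/md127 after a reboot. A hedged sketch of the fix — the real line comes from your own scan output, the UUID placeholder below is deliberately not filled in:

```shell
# /etc/mdadm.conf -- generate the real line with:
#   mdadm --detail --scan >> /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 UUID=<uuid from mdadm --detail --scan>
```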

Create mount point and mount

# mkdir /raid0
# mount /dev/md0 /raid0

See if mounted and what space we have now

# df -h

Set this for auto mount at boot

# vi /etc/fstab
/dev/md0        /raid0  ext4    defaults        0 0
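The title mentions an NFS share, but the steps stop at the mount. A hedged sketch of the missing piece — the export path matches the mount above, while the client subnet and options are my assumptions for a home lab:

```shell
# /etc/exports -- publish the array to the lab subnet (subnet and options illustrative)
/raid0  192.168.10.0/24(rw,sync,no_root_squash)
```

Then install nfs-utils, enable and start nfs-server, and publish the export:
# yum -y install nfs-utils
# systemctl enable nfs-server
# systemctl start nfs-server
# exportfs -rav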

ESXi 5.5 Create and use raw disk images

I created these RDM disks, then created a CentOS 7 guest on the SSD in the ESXi server. I added the RDM disks to the CentOS 7 guest, built an mdadm RAID 0 on them, exported the RAID 0 volume via NFS, and then mounted the RAID 0 export as a datastore.

I am now using consumer SATA drives for RAID in my ESXi server. Home lab use only :)

datastore1 is on the SSD. I put the pointers to the RDM disks in that datastore so I can get to them when I build my CentOS 7 guest.

I created the directory rdm under /vmfs/volumes/datastore1.

On ESXi server

# ls /dev/disks/ -l
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334630343632323132574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334630343839323031574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334631343436313132574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112


vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212 /vmfs/volumes/datastore1/rdm/rdmdisk1.vmdk

vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201 /vmfs/volumes/datastore1/rdm/rdmdisk2.vmdk

vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112 /vmfs/volumes/datastore1/rdm/rdmdisk3.vmdk
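The three invocations differ only in the drive serial; a small loop can print them for review. Printing rather than executing is deliberate here, so you can eyeball the paths before running anything on the ESXi host:

```shell
# Emit one vmkfstools command per WD drive serial; run them on the host when satisfied.
i=1
for serial in WD2DWMC3F0462212 WD2DWMC3F0489201 WD2DWMC3F1446112; do
    echo "vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________${serial} /vmfs/volumes/datastore1/rdm/rdmdisk${i}.vmdk"
    i=$((i+1))
done
```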


Now in the vSphere client, add drives to the guest image and choose "Use an existing virtual disk". Navigate to the datastore and the directory you created these in.

Saturday, March 26, 2016

CentOS 7 install and setup Chrony

Ripped from

https://www.certdepot.net/rhel7-set-ntp-service/

 

Presentation

NTP (Network Time Protocol) is a protocol to keep server clocks synchronized: one or several master servers provide time to client servers that can themselves provide time to other client servers (the notion of stratum).
This tutorial deals with client-side configuration, even though server configuration is not entirely different.
Two main packages are used in RHEL 7 to set up the client side:
  • ntp: the classic package, already present in RHEL 6, RHEL 5, etc.
  • chrony: a new solution better suited for portable PCs or servers with network connection problems (time synchronization is quicker). chrony is the default package in RHEL 7.

Prerequisites

Before anything else, you need to assign the correct time zone.
To get the current configuration, type:
# timedatectl
      Local time: Sat 2015-11-07 08:17:33 EST
  Universal time: Sat 2015-11-07 13:17:33 UTC
        RTC time: Sat 2015-11-07 13:17:33
        Timezone: America/New_York (EST, -0500)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2015-11-01 01:59:59 EDT
                  Sun 2015-11-01 01:00:00 EST
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2016-03-13 01:59:59 EST
                  Sun 2016-03-13 03:00:00 EDT
To get the list of all the available time zones, type:
# timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
...
America/La_Paz
America/Lima
America/Los_Angeles
...
Asia/Seoul
Asia/Shanghai
Asia/Singapore
...
Pacific/Tongatapu
Pacific/Wake
Pacific/Wallis
Finally, to set a specific time zone (here America/Los_Angeles), type:
# timedatectl set-timezone America/Los_Angeles
Then, to check your new configuration, type:
# timedatectl
      Local time: Sat 2015-11-07 05:32:43 PST
  Universal time: Sat 2015-11-07 13:32:43 UTC
        RTC time: Sat 2015-11-07 13:32:43
        Timezone: America/Los_Angeles (PST, -0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2015-11-01 01:59:59 PDT
                  Sun 2015-11-01 01:00:00 PST
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2016-03-13 01:59:59 PST
                  Sun 2016-03-13 03:00:00 PDT

The NTP Package

Install the NTP package:
# yum install -y ntp
Activate the NTP service at boot:
# systemctl enable ntpd
Start the NTP service:
# systemctl start ntpd
The NTP configuration is in the /etc/ntp.conf file:
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1 
restrict ::1

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys
Note: For basic configuration purposes, only the server directives may need changing, to point to a different set of master time servers than the defaults specified.
To get some information about the time synchronization process, type:
# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*y.ns.gin.ntt.ne 192.93.2.20      2 u   47   64  377   27.136    6.958  11.322
+ns1.univ-montp3 192.93.2.20      2 u   45   64  377   34.836   -0.009  11.463
+merlin.ensma.ne 193.204.114.232  2 u   48   64  377   34.586    4.443  11.370
+obsidian.ad-not 131.188.3.220    2 u   50   64  377   22.548    4.256  12.077
Alternatively, to get a basic report, type:
# ntpstat
synchronised to NTP server (129.250.35.251) at stratum 3
time correct to within 60 ms
polling server every 64 s
To quickly synchronize a server, type:
# systemctl stop ntpd
# ntpdate pool.ntp.org
 5 Jul 10:36:58 ntpdate[2190]: adjust time server 95.81.173.74 offset -0.005354 sec
# systemctl start ntpd

The Chrony Package

Alternatively, you can install the new Chrony service that is quicker to synchronize clocks in mobile and virtual systems.
Install the Chrony service:
# yum install -y chrony
Activate the Chrony service at boot:
# systemctl enable chronyd
Start the Chrony service:
# systemctl start chronyd
The Chrony configuration is in the /etc/chrony.conf file:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

# Ignore stratum in source selection.
stratumweight 0

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Enable kernel RTC synchronization.
rtcsync

# In first three updates step the system clock instead of slew
# if the adjustment is larger than 10 seconds.
makestep 10 3

# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1

keyfile /etc/chrony.keys

# Specify the key used as password for chronyc.
commandkey 1

# Generate command key if missing.
generatecommandkey

# Disable logging of client accesses.
noclientlog

# Send a message to syslog if a clock adjustment is larger than 0.5 seconds.
logchange 0.5

logdir /var/log/chrony
Note: For basic configuration purposes, only the server directives may need changing, to point to a different set of master time servers than the defaults specified.
To get information about the main time reference, type:
# chronyc tracking
Reference ID    : 94.23.44.157 (merzhin.deuza.net)
Stratum         : 3
Ref time (UTC)  : Thu Jul  3 22:26:27 2014
System time     : 0.000265665 seconds fast of NTP time
Last offset     : 0.000599796 seconds
RMS offset      : 3619.895751953 seconds
Frequency       : 0.070 ppm slow
Residual freq   : 0.012 ppm
Skew            : 0.164 ppm
Root delay      : 0.030609 seconds
Root dispersion : 0.005556 seconds
Update interval : 1026.9 seconds
Leap status     : Normal
To get equivalent information to the ntpq command, type:
# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||                                                /   xxxx = adjusted offset,
||         Log2(Polling interval) -.             |    yyyy = measured offset,
||                                  \            |    zzzz = estimated error.
||                                   |           |
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ merlin.ensma.fr               2   6    77    61   +295us[+1028us] +/-   69ms
^* lafkor.de                     2   6    77    61  -1371us[ -638us] +/-   65ms
^+ kimsuflol.iroqwa.org          3   6    77    61   -240us[ -240us] +/-   92ms
^+ merzhin.deuza.net             2   6    77    61    +52us[  +52us] +/-   48ms

# chronyc sourcestats -v
210 Number of sources = 4
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
merlin.ensma.fr             7   5   200      0.106      6.541   +381us   176us
lafkor.de                   7   4   199      0.143     10.145   -916us   290us
kimsuflol.iroqwa.org        7   7   200     -0.298      6.717    +69us   184us
merzhin.deuza.net           7   5   200      0.585     11.293   +675us   314us
To quickly synchronize a server, type:
# ntpdate pool.ntp.org
 5 Jul 10:31:06 ntpdate[2135]: step time server 193.55.167.1 offset 121873.493146 sec
Note: You don’t need to stop the Chrony service to synchronize the server.

Intel 10Gb x520-da2 Performance Tuning for Windows

Ripped from:
http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005811.html

Adapter installation suggestions
  • Install the Intel® Network Adapter in a slot that matches or exceeds the bus width of the adapter.
    • Example 1: if you have a 32-bit PCI adapter put it in a 32-bit or 64-bit PCI or PCI-X* slot.
    • Example 2: if you have a 64-bit PCI-X adapter put it in a 64-bit PCI-X slot.
    • Example 3: if you have an x4 PCIe* adapter put it in an x4, x8, or x16 PCIe* slot.
    Note Some PCIe* slots are physically wired with fewer lanes than the dimensions of the slot would indicate. In that case, a slot with x8 dimensions may have the functionality of an x4, x2, or x1 slot. Check with your system manufacturer.
  • For PCI and PCI-X*, install the Intel Network Adapter in the fastest available slot.
    • Example 1: if you have a 64-bit PCI adapter put it in a 66 MHz 64-bit PCI slot.
    • Example 2: if you have a 64-bit PCI-X adapter put in a 133 MHz (266 or 533 if available) 64-bit PCI-X slot.
    Note The slowest board on a bus dictates the maximum speed of the bus. Example: when a 66MHz and a 133 MHz add-in card are installed in a 133 MHz bus, then all devices on that bus function at 66 MHz.
  • Try to install the adapter in a slot on a bus by itself. If add-in cards share a bus, they compete for bus bandwidth.
Driver configuration suggestions
  • For Intel® Ethernet 10 Gigabit Converged Network Adapters, you can choose a role-based performance profile to automatically adjust driver configuration settings.
  • Reduce Interrupt Moderation Rate to Low, Minimal, or Off
    • Also known as Interrupt Throttle Rate (ITR).
    • The default is "Adaptive" for most roles.
    • The low latency profile sets the rate to off.
    • The storage profiles set the rate to medium.
    Note Decreasing Interrupt Moderation Rate increases CPU utilization.
  • Enable Jumbo Frames to the largest size supported across the network (4KB, 9KB, or 16KB)
    • The default is Disabled.
    Note Enable Jumbo Frames only if devices across the network support them and are configured to use the same frame size.
  • Disable Flow Control.
    • The default is Generate & Respond.
    Note Disabling Flow Control can result in dropped frames.
  • Increase the Transmit Descriptors buffer size.
    • The default is 256. Maximum value is 2048.
    Note Increasing Transmit Descriptors increases system memory usage.
  • Increase the Receive Descriptors buffer size.
    • The default is 256. Maximum value is 2048.
    Note Increasing Receive Descriptors increases system memory usage.
TCP configuration suggestions
  • Tune the TCP window size (Applies to Windows* Server editions before Windows Server 2008*).
    Notes Optimizing your TCP window size can be complex as every network is different. Documents are available on the Internet that explain the considerations and formulas used to set window size.
    Before Windows Server 2008, the network stack used a fixed-size receive-side window. Starting with Windows Server 2008, Windows provides TCP receive window auto-tuning. The registry keywords TcpWindowSize, NumTcbTablePartitions, and MaxHashTableSize, are ignored starting with Windows Server 2008.
Teaming considerations and suggestions
When teaming multiple adapter ports together to maximize bandwidth, the switch needs to be considered. Dynamic or static 802.3ad link aggregation is the preferred teaming mode, but this teaming mode demands multiple contiguous ports on the switch. Give consideration to port groups on the switch. Typically, a switch has multiple ports grouped together that are serviced by one PHY. This one PHY can have a limited shared bandwidth for all the ports it supports. This limited bandwidth for a group may not be enough to support full utilization of all ports in the group.
Performance gain can be limited to the bandwidth shared, when the switch shares bandwidth across contiguous ports. Example: Teaming 4 ports on Intel® Gigabit Network Adapters or LAN on motherboards together in an 802.3ad static or dynamic teaming mode. Using this example, 4 gigabit ports share a total PHY bandwidth of 2 Gbps. The ability to group switch ports is dependent on the switch manufacturer and model, and can vary from switch to switch.
Alternative teaming modes can sometimes mitigate these performance limitations. For instance, using Adaptive Load Balancing (ALB), including Receive Load Balancing. ALB has no demands on the switch and does not need to be connected to contiguous switch ports. If the link partner has port groups, an ALB team can be connected to any port of the switch. Connecting the ALB team this way distributes connections across available port groups on the switch. This action can increase overall network bandwidth.
Performance testing considerations
  • When copying a file from one system to another (1:1) using one TCP session, throughput is significantly lower than with multiple simultaneous TCP sessions. Low throughput on 1:1 transfers is due to the latency inherent in a single TCP/IP session. A few file transfer applications support multiple simultaneous TCP streams. Some examples are: bbFTP*, gFTP*, and FDT*.
    The graph in the original article shows (but does not guarantee) the performance benefit of using multiple TCP streams; those are actual results from an Intel® 10 Gigabit CX4 Dual Port Server Adapter, using default Advanced settings under Windows 2008* x64.
  • Direct testing of your network interface throughput capabilities can be done with tools such as iperf* and Microsoft NTTTCP*. These tools can be configured to use one or more streams.
  • When copying a file from one system to another, the hard drives of each system can be a significant bottleneck. Consider using high-RPM, higher-throughput hard drives, striped RAIDs, or RAM drives in the systems under test.
  • Systems under test should connect through a full-line rate, non-blocking switch.
  • Theoretical Maximum Bus Throughput:
    • PCI Express* (PCIe*) Theoretical Bi-Directional Bus Throughput.
      PCI Express Implementation   Encoded Data Rate   Unencoded Data Rate
      x1                           5 Gb/sec            4 Gb/sec (0.5 GB/sec)
      x4                           20 Gb/sec           16 Gb/sec (2 GB/sec)
      x8                           40 Gb/sec           32 Gb/sec (4 GB/sec)
      x16                          80 Gb/sec           64 Gb/sec (8 GB/sec)
    • PCI and PCI-X Bus Theoretical Bi-Directional Bus Throughput.
      Bus and Frequency   32-Bit Transfer Rate   64-Bit Transfer Rate
      33-MHz PCI          1,064 Mb/sec           2,128 Mb/sec
      66-MHz PCI          2,128 Mb/sec           4,256 Mb/sec
      100-MHz PCI-X       Not applicable         6,400 Mb/sec
      133-MHz PCI-X       Not applicable         8,192 Mb/sec
      Note The PCIe* link width can be checked in Windows* through adapter properties. Select the Link Speed tab and click the Identify Adapter button. Intel® PROSet for Windows* Device Manager must be loaded for this utility to function.
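The PCIe figures above follow from 8b/10b line coding: each lane carries 5 GT/s encoded (Gen2 signaling, as the table assumes), of which 8/10 is usable data. A minimal sketch of the arithmetic:

```shell
# PCIe Gen2: 5 GT/s per lane, 8b/10b encoding (80% efficiency).
# Unencoded throughput in Gb/s for a given lane count.
pcie_gen2_gbps() {
  lanes=$1
  echo $(( lanes * 5 * 8 / 10 ))
}

pcie_gen2_gbps 8    # x8 link → 32
```

Gen1 lanes run at 2.5 GT/s, so the same formula with 2.5 in place of 5 applies there.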

Intel 10Gb x520-da2 Performance Tuning for Linux

Ripped from:
http://dak1n1.com/blog/7-performance-tuning-intel-10gbe/

By default, Linux networking is configured for best reliability, not performance. With a 10GbE adapter, this is especially apparent. The kernel’s send/receive buffers, TCP memory allocations, and packet backlog are much too small for optimal performance. This is where a little testing & tuning can give your NIC a big boost.
There are three performance-tuning changes you can make, as listed in the Intel ixgb driver documentation. Here they are in order of greatest impact:
  1. Enabling jumbo frames on your local host(s) and switch.
  2. Using sysctl to tune kernel settings.
  3. Using setpci to tune PCI settings for the adapter.

Keep in mind that any tuning listed here is only a suggestion. Much of performance tuning is done by changing one setting, then benchmarking and seeing if it worked for you. So your results may vary.
Before starting any benchmarks, you may also want to disable irqbalance and cpuspeed. Doing so will maximize network throughput and allow you to get the best results on your benchmarks.
service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off

Method #1: jumbo frames

In Linux, setting up jumbo frames is as simple as running a single command, or adding a single field to your interface config.
ifconfig eth2 mtu 9000 txqueuelen 1000 up
For a more permanent change, add this new MTU value to your interface config, replacing “eth2” with your interface name.
vim /etc/sysconfig/network-scripts/ifcfg-eth2
MTU="9000"
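Jumbo frames help because each frame amortizes the same fixed per-packet overhead over more payload. A rough sketch, assuming IPv4 + TCP without options (40 bytes of headers) and 38 bytes of Ethernet framing overhead per packet (preamble, header, FCS, inter-frame gap):

```shell
# Rough TCP payload efficiency per Ethernet frame, as a percentage.
efficiency() {
  mtu=$1
  awk -v m="$mtu" 'BEGIN { printf "%.1f\n", (m - 40) * 100 / (m + 38) }'
}

efficiency 1500   # standard frames → 94.9
efficiency 9000   # jumbo frames → 99.1
```

The efficiency gain looks modest, but fewer packets per byte also means fewer interrupts and less per-packet CPU work, which is where most of the benefit comes from.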

Method #2: sysctl settings

There are several important settings that impact network performance in Linux. These were taken from Mark Wagner’s excellent presentation at the Red Hat Summit in 2008.
Core memory settings:
  • net.core.rmem_max –  max size of rx socket buffer
  • net.core.wmem_max – max size of tx socket buffer
  • net.core.rmem_default – default rx size of socket buffer
  • net.core.wmem_default – default tx size of socket buffer
  • net.core.optmem_max – maximum amount of option memory
  • net.core.netdev_max_backlog – how many unprocessed rx packets before kernel starts to drop them
Here is my modified /etc/sysctl.conf. It can be appended onto the default config.
 # -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
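A common way to size TCP buffers is the bandwidth-delay product: the amount of in-flight data needed to keep the link full. The 10,000,000-byte values above correspond to a 10 Gb/s link at roughly 8 ms of RTT. A minimal sketch:

```shell
# Bandwidth-delay product: bytes = (rate in bits/s / 8) * RTT in seconds.
bdp_bytes() {
  gbps=$1; rtt_ms=$2
  echo $(( gbps * 1000000000 / 8 * rtt_ms / 1000 ))
}

bdp_bytes 10 1   # 10 GbE at 1 ms RTT → 1250000
bdp_bytes 10 8   # 10 GbE at 8 ms RTT → 10000000
```

If your RTTs are much smaller (e.g. a single switch hop), smaller buffers may perform just as well while using less memory.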

Method #3: PCI bus tuning

If you want to take your tuning even further, here's an option to adjust the PCI bus that the NIC is plugged into. The first thing you'll need to do is find the PCI address, as shown by lspci:
[chloe@biru ~]$ lspci
 07:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Here 07:00.0 is the PCI bus address. Now we can grep for that in /proc/bus/pci/devices to gather even more information.
[chloe@biru ~]$ grep 0700 /proc/bus/pci/devices
0700  808610fb  28  d590000c  0  ecc1  0  d58f800c  0  0  80000  0  20  0  4000  0  0  ixgbe
Various information about the PCI device will display, as you can see above. But the number we’re interested in is the second field, 808610fb. This is the Vendor ID and Device ID together. Vendor: 8086 Device: 10fb. You can use these values to tune the PCI bus MMRBC, or Maximum Memory Read Byte Count.
This will increase the MMRBC to 4k reads, increasing the transmit burst lengths on the bus.
setpci -v -d 8086:10fb e6.b=2e
About this command:
the -d option selects the device by its Vendor and Device ID (8086:10fb);
e6.b is the address of the PCI-X Command Register, read and written as a single byte;
and 2e is the value to be set.
These are the other possible values for this register (although the one listed above, 2e, is recommended by the Intel ixgbe documentation).
Register value   MMRBC in bytes
22               512 (default)
26               1024
2a               2048
2e               4096
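These values are consistent with the MMRBC field living in bits 3:2 of the PCI-X command register (00→512 up through 11→4096). Assuming that layout, the mapping can be sketched as:

```shell
# Decode MMRBC from a PCI-X command register byte: bits 3:2 select
# 512 << n bytes (00=512, 01=1024, 10=2048, 11=4096).
mmrbc_bytes() {
  echo $(( 512 << ( (0x$1 >> 2) & 3 ) ))
}

mmrbc_bytes 22   # default → 512
mmrbc_bytes 2e   # after tuning → 4096
```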

And finally, testing

Testing is something that should be done in between each configuration change, but for the sake of brevity I’ll just show the before and after results. The benchmarking tools used were ‘iperf’ and ‘netperf’.
Here’s how your 10GbE NIC might perform before tuning…
 [  3]  0.0-100.0 sec   54.7 GBytes  4.70 Gbits/sec

bytes  bytes   bytes    secs.    10^6bits/sec
87380 16384 16384    60.00    5012.24

And after tuning…
 [  3]  0.0-100.0 sec   115 GBytes  9.90 Gbits/sec

bytes  bytes   bytes    secs.    10^6bits/sec
10000000 10000000 10000000    30.01    9908.08
Wow! What a difference a little tuning makes. I’ve seen great results from my Hadoop HDFS cluster after just spending a couple hours getting to know my server’s network hardware. Whatever your application for 10GbE might be, this is sure to be of benefit to you as well.
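A note on the units: iperf reports GBytes in binary units (2^30 bytes) but Gbits/sec in decimal (10^9 bits), so the two figures on each line really do agree. Converting the after-tuning run by hand:

```shell
# 115 GBytes (binary) transferred in 100 seconds, expressed in decimal Gbit/s.
awk 'BEGIN { printf "%.2f\n", 115 * 1073741824 * 8 / 100 / 1e9 }'   # → 9.88
```

The small difference from the reported 9.90 comes from iperf rounding the byte count to 115 before printing; the same arithmetic on the before-tuning line (54.7 GBytes) gives the reported 4.70.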

Saturday, March 19, 2016

Centos 7 new build list of stuff to do after initial install

Centos 7 new build list of stuff to do after initial install

Ignore what is not needed

Disable selinux

vi /etc/sysconfig/selinux
    SELINUX=disabled

Disable and turn off firewalld
  
systemctl disable firewalld
systemctl stop firewalld

reboot

---begin turn off NetworkManager

vi /etc/hostname
    make sure your hostname is in there. I use name.domain.com

vi /etc/hosts
    make sure your hostname is in there. I include both name and name.domain.com
  
vi /etc/resolv.conf
        search yourdomain.com
        nameserver 192.168.10.1 (or whatever you use for DNS)
      
      
---begin if you want to use the old eth0 naming convention      
      
vi /etc/default/grub
            Search for the line “GRUB_CMDLINE_LINUX” and append the following: net.ifnames=0 biosdevname=0

you can also turn off the screensaver for your console by adding consoleblank=0

My line is now:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_nas/swap rd.lvm.lv=centos_nas/root net.ifnames=0 biosdevname=0 consoleblank=0"

grub2-mkconfig -o /boot/grub2/grub.cfg

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg  

mv /etc/sysconfig/network-scripts/ifcfg-enp????? /etc/sysconfig/network-scripts/ifcfg-eth0  

vi /etc/sysconfig/network-scripts/ifcfg-eth0
    NAME=eth0
    DEVICE=eth0

---end     if you want to use the old eth0 naming convention      

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager-wait-online
systemctl disable NetworkManager-wait-online
systemctl enable network
chkconfig network on
systemctl start network


reboot and sanity check

systemctl status NetworkManager
systemctl status network

---end turn off NetworkManager

Create a text file /root/list with the package list below in it.
Do not include the --begin list or --end list lines in the file.

--begin list  
bind-utils
traceroute
net-tools
ntp*
gcc
glibc
glibc-common
gd
gd-devel
make
net-snmp
openssl-devel
xinetd
unzip
libtool*
patch
perl
bison
flex-devel
gcc-c++
ncurses-devel
flex
libtermcap-devel
autoconf*
automake*
libxml2-devel
cmake
sqlite*
wget
lm_sensors
qt-devel
hmaccalc
zlib-devel
binutils-devel
elfutils-libelf-devel
bc
gzip
uuid*
libuuid-devel
jansson*
libxml2*
openssl*
lsof
NetworkManager-tui
mlocate
yum-utils
kernel-devel
nfs-utils
tcpdump
--end list

yum -y install $(cat list)
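If you'd rather keep the marker lines in the file, a small filter can strip them before the list reaches yum. A sketch (pkgs is a hypothetical helper name):

```shell
# Strip the --begin/--end marker lines and blank lines from a package list.
pkgs() { grep -vE '^--|^[[:space:]]*$' "$1"; }

# then: yum -y install $(pkgs /root/list)
```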

yum -y groupinstall "Development Tools"

yum -y update

reboot


---install zfs if needed

cd /root
yum -y localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum -y localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
yum -y install kernel-devel zfs

modprobe zfs
lsmod | grep -i zfs
    zfs                  2179437  3
    zcommon                47120  1 zfs
    znvpair                80252  2 zfs,zcommon
    spl                    89796  3 zfs,zcommon,znvpair
    zavl                    6784  1 zfs
    zunicode              323046  1 zfs

vi /etc/sysconfig/modules/zfs.modules
#!/bin/sh

if [ ! -c /dev/zfs ] ; then
        exec /sbin/modprobe zfs >/dev/null 2>&1
fi

chmod +x /etc/sysconfig/modules/zfs.modules

reboot

lsmod | grep -i zfs
    zfs                  2179437  3
    zcommon                47120  1 zfs
    znvpair                80252  2 zfs,zcommon
    spl                    89796  3 zfs,zcommon,znvpair
    zavl                    6784  1 zfs
    zunicode              323046  1 zfs


create pool called myraid
this is an 8-drive, 4-vdev striped mirror pool

zpool create myraid mirror sdb sdc mirror sdd sde mirror sdf sdg mirror sdh sdi
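Capacity-wise, a striped mirror gives you half the raw space: one drive per mirror pair counts. A sketch of the arithmetic (the 4 TB drive size is just an example, not what this pool necessarily uses):

```shell
# Usable capacity of a striped-mirror pool in TB:
# (drives / mirror width) * per-drive size.
usable_tb() {
  drives=$1; mirror_width=$2; drive_tb=$3
  echo $(( drives / mirror_width * drive_tb ))
}

usable_tb 8 2 4   # 8 drives in 2-way mirrors, 4 TB each → 16
```

The trade-off versus RAID-Z is capacity for IOPS: each mirror vdev adds an independent stripe, which is why this layout suits VM storage.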

zpool status
  
zfs mount myraid
echo "zfs mount myraid" >> /etc/rc.local

zfs set compression=lz4 myraid
zfs set sync=disabled myraid
zfs set checksum=fletcher4 myraid
zfs set primarycache=all myraid
zfs set logbias=latency myraid
zfs set recordsize=128k myraid
zfs set atime=off myraid
zfs set dedup=off myraid



vi /etc/modprobe.d/zfs.conf
# disable prefetch
options zfs zfs_prefetch_disable=1
# set arc max to 48GB. I have 64GB in my server
options zfs zfs_arc_max=51539607552
# vdev cache and read chunk tuning (1310720 bytes = 10 x the 128k recordsize)
options zfs zfs_vdev_cache_size=1310720
options zfs zfs_vdev_cache_max=1310720
options zfs zfs_read_chunk_size=1310720
options zfs zfs_vdev_cache_bshift=17
# Set these to 1 so we get max IO at the cost of bandwidth
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_async_read_min_active=1
options zfs zfs_vdev_async_write_max_active=1
options zfs zfs_vdev_async_write_min_active=1
options zfs zfs_vdev_sync_read_max_active=1
options zfs zfs_vdev_sync_read_min_active=1
options zfs zfs_vdev_sync_write_max_active=1
options zfs zfs_vdev_sync_write_min_active=1
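The zfs_arc_max value above is just 48 GiB expressed in bytes (leaving 16 of the 64 GiB for the OS). A quick sanity check:

```shell
# zfs_arc_max takes bytes; convert a GiB figure.
arc_max_bytes() { echo $(( $1 * 1024 * 1024 * 1024 )); }

arc_max_bytes 48   # → 51539607552, the value used above
```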

I am using my pool via NFS to my ESXi server for guest images, so
I share it on my NAS over both the 1Gb and 10Gb networks.

vi /etc/exports
/myraid/     192.168.10.0/24(rw,async,no_root_squash,no_subtree_check)
/myraid/     192.168.90.0/24(rw,async,no_root_squash,no_subtree_check)

systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server


---end install zfs if needed



--install samba if needed

yum -y install samba

useradd samba -s /sbin/nologin

smbpasswd -a samba
            Supply a password
            Retype the password
  
mkdir /myraid

chown -R samba:root /myraid/

vi /etc/samba/smb.conf

[global]
; use the name of your workgroup here
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = NAS

Add this to the bottom of the /etc/samba/smb.conf file

[NAS]
comment = NAS
path = /myraid
writable = yes
valid users = samba


systemctl start smb
systemctl enable smb
systemctl start nmb
systemctl enable nmb

testparm
  
--end install samba if needed




---install plex if needed


Visit the Plex site and get the RPM for your version of the OS.
Copy it to /root

yum -y localinstall name.rpm

systemctl enable plexmediaserver
systemctl start plexmediaserver

---end install plex if needed

---install LAMP

yum -y install httpd mariadb-server mariadb php php-mysql
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service

Make sure it works with:
http://your_server_IP_address/

systemctl enable mariadb
systemctl start mariadb
systemctl status mariadb
mysql_secure_installation

vi /var/www/html/info.php
<?php phpinfo(); ?>

http://your_server_IP_address/info.php


---End install LAMP

---Extra goodies

yum -y install epel-release
yum -y install stress htop iftop iotop hddtemp smartmontools iperf3 sysstat mlocate

updatedb  **this updates the mlocate db


---End Extra goodies

---tune 10Gb CNA if needed

service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off

vi /etc/sysconfig/network-scripts/ifcfg-eth???
MTU="9000"

vi /etc/sysctl.conf
# -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000

reboot and test speed.

on linux client pointing to server with ip 192.168.90.100

# iperf3 -c 192.168.90.100 -p 5201

on linux server with IP 192.168.90.100

iperf3 -s -p 5201 -B 192.168.90.100

---end tune 10Gb CNA if needed


Centos 7 turn off NetworkManager

Centos 7 turn off NetworkManager

My domain is whittenberg.domain and my machine is nas.whittenberg.domain

Do not attempt this unless you have console access. You can do all of the below from, say, a PuTTY session, but if things go wrong you will need console access.

vi /etc/hostname
    nas.whittenberg.domain

vi /etc/hosts
    127.0.0.1   nas nas.whittenberg.domain localhost localhost.localdomain localhost4 localhost4.localdomain4
     ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

   
vi /etc/resolv.conf
        # Generated by NetworkManager
        search whittenberg.domain
        nameserver 192.168.10.1
        nameserver 2600:8800:2580:eda:4af8:b3ff:fe93:615d


---begin if you want to use the old eth0 naming convention       
       
vi /etc/default/grub
 

Search for the line “GRUB_CMDLINE_LINUX” and append the following: 

net.ifnames=0 biosdevname=0

**Copy/paste from this blog sometimes leaves incorrect " characters in the file, so type the quotes in manually from your keyboard. Also make sure you do not have too many quotes: the line should begin and end with a quote, and if there are any in the middle it will fail.
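A quick way to catch a mangled line is to count its double quotes; anything other than exactly two means copy/paste damage. A sketch (the GRUB line here is a shortened, hypothetical example):

```shell
# Count ASCII double quotes on a line; a valid GRUB_CMDLINE_LINUX has 2.
count_quotes() { printf '%s' "$1" | grep -o '"' | wc -l; }

LINE='GRUB_CMDLINE_LINUX="crashkernel=auto net.ifnames=0 biosdevname=0"'
count_quotes "$LINE"
```

Curly "smart" quotes won't be counted at all, so a result under 2 also flags the paste problem described above.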

grub2-mkconfig -o /boot/grub2/grub.cfg

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg


**above is used if you have EFI boot    
 
mv /etc/sysconfig/network-scripts/ifcfg-enp3s0 /etc/sysconfig/network-scripts/ifcfg-eth0   

vi /etc/sysconfig/network-scripts/ifcfg-eth0


    NAME=eth0
    DEVICE=eth0


---end if you want to use the old eth0 naming convention

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager-wait-online
systemctl disable NetworkManager-wait-online
systemctl enable network
chkconfig network on
systemctl start network


reboot and sanity check

systemctl status NetworkManager
systemctl status network


Friday, March 18, 2016

Centos 7 install stress

Centos 7 install stress

# cd /root


# wget ftp://ftp.pbone.net/mirror/dag.wieers.com/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm


# yum localinstall /root/stress-1.0.2-1.el7.rf.x86_64.rpm


If you have a 4-core CPU, use the following to stress all 4 cores:

# stress -c 4

Centos 7 Setup static IP using NetworkManager

Centos 7 Setup static IP using NetworkManager

# yum install NetworkManager-tui

# nmtui


│ ╤ IPv4 CONFIGURATION
│ │          Addresses 192.168.10.100/24________
│ │                                                           
│ │            Gateway 192.168.10.1_____________                      
│ │        DNS servers 192.168.10.1_____________
│ │              
│ │     Search domains whittenberg.domain_______
│ │                   
│ │                                                                      
│ │            Routing (No custom routes)
│ │ [ ] Never use this network for default route                         
│ │                                                                      
│ │ [X] Require IPv4 addressing for this connection                     
│ └                                                                      
│                                                                        
│ ═ IPv6 CONFIGURATION
│                                                                        
│ [X] Automatically connect                                              
│ [X] Available to all users                                             

# systemctl restart network

Saturday, February 27, 2016

CentOS 7 install Nagios



CentOS 7 install Nagios

This is from https://www.digitalocean.com/community/tutorials/how-to-install-nagios-4-and-monitor-your-servers-on-centos-7

Prerequisites

To follow this tutorial, you must have superuser privileges on the CentOS 7 server that will run Nagios. Ideally, you will be using a non-root user with superuser privileges. If you need help setting that up, follow the steps 1 through 3 in this tutorial: Initial Server Setup with CentOS 7.
A LAMP stack is also required. Follow this tutorial if you need to set that up: How To Install LAMP stack On CentOS 7.

This tutorial assumes that your server has private networking enabled. If it doesn't, just replace all the references to private IP addresses with public IP addresses.
Now that we have the prerequisites sorted out, let's move on to getting Nagios 4 installed.

Install Nagios 4

This section will cover how to install Nagios 4 on your monitoring server. You only need to complete this section once.

Install Build Dependencies

Because we are building Nagios Core from source, we must install a few development libraries that will allow us to complete the build.
First, install the required packages:
sudo yum install gcc glibc glibc-common gd gd-devel make net-snmp openssl-devel xinetd unzip

Create Nagios User and Group

We must create a user and group that will run the Nagios process. Create a "nagios" user and "nagcmd" group, then add the user to the group with these commands:
sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Let's install Nagios now.

Install Nagios Core

Download the source code for the latest stable release of Nagios Core. Go to the Nagios downloads page, and click the Skip to download link below the form. Copy the link address for the latest stable release so you can download it to your Nagios server.
At the time of this writing, the latest stable release is Nagios 4.1.1. Download it to your home directory with curl:
cd ~
curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz
Extract the Nagios archive with this command:
tar xvf nagios-*.tar.gz
Then change to the extracted directory:
cd nagios-*
Before building Nagios, we must configure it with this command:
./configure --with-command-group=nagcmd
Now compile Nagios with this command:
make all
Now we can run these make commands to install Nagios, init scripts, and sample configuration files:
sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo make install-webconf
In order to issue external commands via the web interface to Nagios, we must add the web server user, apache, to the nagcmd group:
sudo usermod -G nagcmd apache
Install Nagios Plugins

Find the latest release of Nagios Plugins here: Nagios Plugins Download. Copy the link address for the latest version so you can download it to your Nagios server.
At the time of this writing, the latest version is Nagios Plugins 2.1.1. Download it to your home directory with curl:
cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
Extract Nagios Plugins archive with this command:
tar xvf nagios-plugins-*.tar.gz
Then change to the extracted directory:
cd nagios-plugins-*
Before building Nagios Plugins, we must configure it. Use this command:
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
Now compile Nagios Plugins with this command:
make
Then install it with this command:
sudo make install

Install NRPE

Find the source code for the latest stable release of NRPE at the NRPE downloads page. Download the latest version to your Nagios server.
At the time of this writing, the latest release is 2.15. Download it to your home directory with curl:
cd ~
curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
Extract the NRPE archive with this command:
tar xvf nrpe-*.tar.gz
Then change to the extracted directory:
cd nrpe-*
Configure NRPE with these commands:
./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu
Now build and install NRPE and its xinetd startup script with these commands:
make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config
Open the xinetd startup script in an editor:
sudo vi /etc/xinetd.d/nrpe
Modify the only_from line by adding the private IP address of your Nagios server to the end (substitute in the actual IP address of your server):
only_from = 127.0.0.1 10.132.224.168
Save and exit. Only the Nagios server will be allowed to communicate with NRPE.
Restart the xinetd service to start NRPE:
sudo service xinetd restart
Now that Nagios 4 is installed, we need to configure it.

Configure Nagios

Now let's perform the initial Nagios configuration. You only need to perform this section once, on your Nagios server.

Organize Nagios Configuration

Open the main Nagios configuration file in your favorite text editor. We'll use vi to edit the file:
sudo vi /usr/local/nagios/etc/nagios.cfg
Now find and uncomment this line by deleting the #:
#cfg_dir=/usr/local/nagios/etc/servers
Save and exit.
Now create the directory that will store the configuration file for each server that you will monitor:
sudo mkdir /usr/local/nagios/etc/servers

Configure Nagios Contacts

Open the Nagios contacts configuration in your favorite text editor. We'll use vi to edit the file:
sudo vi /usr/local/nagios/etc/objects/contacts.cfg
Find the email directive, and replace its value (the highlighted part) with your own email address:
email                           nagios@localhost        ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
Save and exit.

Configure check_nrpe Command

Let's add a new command to our Nagios configuration:
sudo vi /usr/local/nagios/etc/objects/commands.cfg
Add the following to the end of the file:
define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
Save and exit. This allows you to use the check_nrpe command in your Nagios service definitions.

Configure Apache

Use htpasswd to create an admin user, called "nagiosadmin", that can access the Nagios web interface:
sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
Enter a password at the prompt. Remember this login, as you will need it to access the Nagios web interface.
Note: If you create a user that is not named "nagiosadmin", you will need to edit /usr/local/nagios/etc/cgi.cfg and change all the "nagiosadmin" references to the user you created.
Nagios is ready to be started. Let's do that, and restart Apache:
sudo systemctl start nagios.service
sudo systemctl restart httpd.service
To enable Nagios to start on server boot, run this command:
sudo chkconfig nagios on

Optional: Restrict Access by IP Address

If you want to restrict the IP addresses that can access the Nagios web interface, you will want to edit the Apache configuration file:
sudo vi /etc/httpd/conf.d/nagios.conf
Find and comment the following two lines by adding # symbols in front of them:
Order allow,deny
Allow from all
Then uncomment the following lines, by deleting the # symbols, and add the IP addresses or ranges (space delimited) that you want to allow to in the Allow from line:
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
These lines appear twice in the configuration file, so you will need to perform these steps once more.
Save and exit.
Now start Nagios and restart Apache to put the change into effect:
sudo systemctl restart nagios.service
sudo systemctl restart httpd.service
Nagios is now running, so let's try and log in.

Accessing the Nagios Web Interface

Open your favorite web browser, and go to your Nagios server (substitute the IP address or hostname for the highlighted part):
http://nagios_server_public_ip/nagios
Because we configured Apache to use htpasswd, you must enter the login credentials that you created earlier. We used "nagiosadmin" as the username:

After authenticating, you will see the default Nagios home page. Click on the Hosts link, in the left navigation bar, to see which hosts Nagios is monitoring:

As you can see, Nagios is monitoring only "localhost", or itself.
Let's monitor another host with Nagios!

Monitor a CentOS 7 Host with NRPE

In this section, we'll show you how to add a new host to Nagios, so it will be monitored. Repeat this section for each CentOS or RHEL server you wish to monitor.
Note: If you want to monitor an Ubuntu or Debian server, follow the instructions in this link: Monitor an Ubuntu Host with NRPE.
On a server that you want to monitor, install the EPEL repository:
sudo yum install epel-release
Now install Nagios Plugins and NRPE:
sudo yum install nrpe nagios-plugins-all
Now, let's update the NRPE configuration file. Open it in your favorite editor (we're using vi):
sudo vi /etc/nagios/nrpe.cfg
Find the allowed_hosts directive, and add the private IP address of your Nagios server to the comma-delimited list (substitute it in place of the highlighted example):
allowed_hosts=127.0.0.1,10.132.224.168
Save and exit. This configures NRPE to accept requests from your Nagios server, via its private IP address.
Restart NRPE to put the change into effect:
sudo systemctl start nrpe.service
sudo systemctl enable nrpe.service
Once you are done installing and configuring NRPE on the hosts that you want to monitor, you will have to add these hosts to your Nagios server configuration before it will start monitoring them.

Add Host to Nagios Configuration

On your Nagios server, create a new configuration file for each of the remote hosts that you want to monitor in /usr/local/nagios/etc/servers/. Replace the highlighted word, "yourhost", with the name of your host:
sudo vi /usr/local/nagios/etc/servers/yourhost.cfg
Add in the following host definition, replacing the host_name value with your remote hostname ("web-1" in the example), the alias value with a description of the host, and the address value with the private IP address of the remote host:
define host {
        use                             linux-server
        host_name                       yourhost
        alias                           My first Apache server
        address                         10.132.234.52
        max_check_attempts              5
        check_period                    24x7
        notification_interval           30
        notification_period             24x7
}
With the configuration file above, Nagios will only monitor if the host is up or down. If this is sufficient for you, save and exit then restart Nagios. If you want to monitor particular services, read on.
Add any of these service blocks for services you want to monitor. Note that the value of check_command determines what will be monitored, including status threshold values. Here are some examples that you can add to your host's configuration file:
Ping:
define service {
        use                             generic-service
        host_name                       yourhost
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
}
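Nagios splits the check_command value on '!': the first field is the command name and the rest become $ARG1$, $ARG2$, and so on. For check_ping, each argument is a "rta_ms,loss%" threshold (warning, then critical). A sketch of that split:

```shell
# Split a Nagios check_command definition on '!' into its fields.
spec='check_ping!100.0,20%!500.0,60%'
IFS='!' read -r cmd warn crit <<EOF
$spec
EOF
echo "command=$cmd warning=$warn critical=$crit"
```

So the example above warns at 100 ms round-trip average or 20% packet loss, and goes critical at 500 ms or 60% loss.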
SSH (notifications_enabled set to 0 disables notifications for a service):
define service {
        use                             generic-service
        host_name                       yourhost
        service_description             SSH
        check_command                   check_ssh
        notifications_enabled           0
}
If you're not sure what use generic-service means, it is simply inheriting the values of a service template called "generic-service" that is defined by default.
Now save and quit. Reload your Nagios configuration to put any changes into effect: