Why?



Wednesday, March 30, 2016

ESXi 5.5 Remove partitions from disk


/dev/disks # ls
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:1
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:2
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:3
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:5
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:6
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:7
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:8
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:9
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212:1
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201:1
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112:1
vml.0100000000202020202057442d574d43334630343632323132574443205744
vml.0100000000202020202057442d574d43334630343632323132574443205744:1
vml.0100000000202020202057442d574d43334630343839323031574443205744
vml.0100000000202020202057442d574d43334630343839323031574443205744:1
vml.0100000000202020202057442d574d43334631343436313132574443205744
vml.0100000000202020202057442d574d43334631343436313132574443205744:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:2
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:3
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:5
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:6
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:7
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:8
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:9
/dev/disks #

/dev/disks # partedUtil delete "t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112" 1
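If you have several disks to clean up, a rough loop like the one below does the same thing for every WD data disk while skipping the partition entries. This is a sketch only; partedUtil delete is destructive, so double-check the device names against your own ls output first.

cd /dev/disks
for d in t10.*WDC_WD5000*; do
    case "$d" in *:*) continue ;; esac        # skip partition entries like ...:1
    partedUtil getptbl "/dev/disks/$d"        # show the current partition table first
    partedUtil delete "/dev/disks/$d" 1       # then remove partition 1
done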


Do this for the other drives you want to remove partitions from. I did all three WD 500GB drives. When finished:

/dev/disks # ls
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:1
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:2
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:3
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:5
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:6
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:7
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:8
t10.ATA_____Samsung_SSD_850_EVO_250GB_______________S21NNEAG204262A_____:9
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
vml.0100000000202020202057442d574d43334630343632323132574443205744
vml.0100000000202020202057442d574d43334630343839323031574443205744
vml.0100000000202020202057442d574d43334631343436313132574443205744
vml.01000000005332314e4e45414732303432363241202020202053616d73756e
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:1
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:2
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:3
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:5
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:6
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:7
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:8
vml.01000000005332314e4e45414732303432363241202020202053616d73756e:9
/dev/disks #


As you can see, all the partitions are gone. I can now add these drives to the ESXi server for images, ISO storage, etc.

Sunday, March 27, 2016

Centos 7 create mdadm raid0 with mount and nfs share


Install mdadm

# yum -y install mdadm*

Get the list of disks to be used in the array

# fdisk -l

Remove existing partitions if needed (inside fdisk: p prints the table, d deletes a partition, p again to verify, w writes the changes and exits)

# fdisk /dev/sdb
p
d
p
w
#

Repeat for all other drives if needed

Now create the array. I am using RAID0 with 3 devices.

# mdadm -C /dev/md0 --level=raid0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd 

Format it

# mkfs.ext4 /dev/md0

Inspect your work

# mdadm --detail /dev/md0

Create mount point and mount

# mkdir /raid0
# mount /dev/md0 /raid0

Check that it mounted and see how much space we have now

# df -h

Set this to auto-mount at boot

# vi /etc/fstab
/dev/md0        /raid0  ext4    defaults        0 0
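To make sure the array reassembles as /dev/md0 at boot, it is also worth recording it in mdadm.conf. And since the title promises an NFS share, here is a minimal sketch of exporting the new mount (the 192.168.10.0/24 network is an assumption; substitute your own):

# mdadm --detail --scan >> /etc/mdadm.conf

# vi /etc/exports
/raid0  192.168.10.0/24(rw,async,no_root_squash,no_subtree_check)

# systemctl enable rpcbind nfs-server
# systemctl start rpcbind nfs-server
# exportfs -rav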

ESXi 5.5 Create and use raw disk images


I created these RDM disks, then created a CentOS 7 guest on the SSD in the ESXi server. I added the RDM disks to the CentOS 7 guest, built an mdadm RAID0 array on them, and exported the RAID0 volume via NFS. I then mounted the NFS export as a datastore.

I am now using consumer SATA drives for RAID in my ESXi server. Home lab use only :)

datastore1 is on the SSD. I put the pointers to the RDM disks in that datastore so I can get to them when I build my CentOS 7 guest.

I created the directory rdm under /vmfs/volumes/datastore1
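That directory is created on the ESXi host itself:

# mkdir /vmfs/volumes/datastore1/rdm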

On ESXi server

# ls /dev/disks/ -l
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
-rw-------    1 root     root     500107862016 Mar 27 09:10 t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334630343632323132574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334630343839323031574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201
lrwxrwxrwx    1 root     root            74 Mar 27 09:40 vml.0100000000202020202057442d574d43334631343436313132574443205744 -> t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112


vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0462212 /vmfs/volumes/datastore1/rdm/rdmdisk1.vmdk

vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F0489201 /vmfs/volumes/datastore1/rdm/rdmdisk2.vmdk

vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD5000AZLX2D00CL5A0_______________________WD2DWMC3F1446112 /vmfs/volumes/datastore1/rdm/rdmdisk3.vmdk
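To sanity-check that a mapping file points at the raw device you expect, you can query it with vmkfstools; it should report a passthrough raw device mapping and the backing vml ID:

vmkfstools -q /vmfs/volumes/datastore1/rdm/rdmdisk1.vmdk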


Now, in the vSphere Client, add disks to the guest and choose "Use an existing virtual disk." Navigate to the datastore and the directory you created these in.

Saturday, March 26, 2016

Centos 7 install and set up Chrony

Ripped from

https://www.certdepot.net/rhel7-set-ntp-service/

 

Presentation

NTP (Network Time Protocol) is a protocol to keep servers' time synchronized: one or several master servers provide time to client servers, which can themselves provide time to other client servers (the notion of stratum).
This tutorial deals with client side configuration, even though server configuration is not entirely different.
Two main packages are used in RHEL 7 to set up the client side:
  • ntp: this is the classic package, already existing in RHEL 6, RHEL 5, etc.
  • chrony: this is a new solution better suited for portable PC or servers with network connection problems (time synchronization is quicker). chrony is the default package in RHEL 7.

Prerequisites

Before anything else, you need to assign the correct time zone.
To get the current configuration, type:
# timedatectl
      Local time: Sat 2015-11-07 08:17:33 EST
  Universal time: Sat 2015-11-07 13:17:33 UTC
        RTC time: Sat 2015-11-07 13:17:33
        Timezone: America/New_York (EST, -0500)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2015-11-01 01:59:59 EDT
                  Sun 2015-11-01 01:00:00 EST
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2016-03-13 01:59:59 EST
                  Sun 2016-03-13 03:00:00 EDT
To get the list of all the available time zones, type:
# timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
...
America/La_Paz
America/Lima
America/Los_Angeles
...
Asia/Seoul
Asia/Shanghai
Asia/Singapore
...
Pacific/Tongatapu
Pacific/Wake
Pacific/Wallis
Finally, to set a specific time zone (here America/Los_Angeles), type:
# timedatectl set-timezone America/Los_Angeles
Then, to check your new configuration, type:
# timedatectl
      Local time: Sat 2015-11-07 05:32:43 PST
  Universal time: Sat 2015-11-07 13:32:43 UTC
        RTC time: Sat 2015-11-07 13:32:43
        Timezone: America/Los_Angeles (PST, -0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2015-11-01 01:59:59 PDT
                  Sun 2015-11-01 01:00:00 PST
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2016-03-13 01:59:59 PST
                  Sun 2016-03-13 03:00:00 PDT

The NTP Package

Install the NTP package:
# yum install -y ntp
Activate the NTP service at boot:
# systemctl enable ntpd
Start the NTP service:
# systemctl start ntpd
The NTP configuration is in the /etc/ntp.conf file:
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1 
restrict ::1

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys
Note: For basic configuration purposes, only the server directives may need to be changed to point at a different set of time servers than the defaults specified.
To get some information about the time synchronization process, type:
# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*y.ns.gin.ntt.ne 192.93.2.20      2 u   47   64  377   27.136    6.958  11.322
+ns1.univ-montp3 192.93.2.20      2 u   45   64  377   34.836   -0.009  11.463
+merlin.ensma.ne 193.204.114.232  2 u   48   64  377   34.586    4.443  11.370
+obsidian.ad-not 131.188.3.220    2 u   50   64  377   22.548    4.256  12.077
Alternatively, to get a basic report, type:
# ntpstat
synchronised to NTP server (129.250.35.251) at stratum 3
time correct to within 60 ms
polling server every 64 s
To quickly synchronize a server, type:
# systemctl stop ntpd
# ntpdate pool.ntp.org
 5 Jul 10:36:58 ntpdate[2190]: adjust time server 95.81.173.74 offset -0.005354 sec
# systemctl start ntpd

The Chrony Package

Alternatively, you can install the new Chrony service that is quicker to synchronize clocks in mobile and virtual systems.
Install the Chrony service:
# yum install -y chrony
Activate the Chrony service at boot:
# systemctl enable chronyd
Start the Chrony service:
# systemctl start chronyd
The Chrony configuration is in the /etc/chrony.conf file:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

# Ignore stratum in source selection.
stratumweight 0

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Enable kernel RTC synchronization.
rtcsync

# In first three updates step the system clock instead of slew
# if the adjustment is larger than 10 seconds.
makestep 10 3

# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1

keyfile /etc/chrony.keys

# Specify the key used as password for chronyc.
commandkey 1

# Generate command key if missing.
generatecommandkey

# Disable logging of client accesses.
noclientlog

# Send a message to syslog if a clock adjustment is larger than 0.5 seconds.
logchange 0.5

logdir /var/log/chrony
Note: For basic configuration purposes, only the server directives may need to be changed to point at a different set of time servers than the defaults specified.
To get information about the main time reference, type:
# chronyc tracking
Reference ID    : 94.23.44.157 (merzhin.deuza.net)
Stratum         : 3
Ref time (UTC)  : Thu Jul  3 22:26:27 2014
System time     : 0.000265665 seconds fast of NTP time
Last offset     : 0.000599796 seconds
RMS offset      : 3619.895751953 seconds
Frequency       : 0.070 ppm slow
Residual freq   : 0.012 ppm
Skew            : 0.164 ppm
Root delay      : 0.030609 seconds
Root dispersion : 0.005556 seconds
Update interval : 1026.9 seconds
Leap status     : Normal
To get equivalent information to the ntpq command, type:
# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||                                                /   xxxx = adjusted offset,
||         Log2(Polling interval) -.             |    yyyy = measured offset,
||                                  \            |    zzzz = estimated error.
||                                   |           |
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ merlin.ensma.fr               2   6    77    61   +295us[+1028us] +/-   69ms
^* lafkor.de                     2   6    77    61  -1371us[ -638us] +/-   65ms
^+ kimsuflol.iroqwa.org          3   6    77    61   -240us[ -240us] +/-   92ms
^+ merzhin.deuza.net             2   6    77    61    +52us[  +52us] +/-   48ms

# chronyc sourcestats -v
210 Number of sources = 4
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
merlin.ensma.fr             7   5   200      0.106      6.541   +381us   176us
lafkor.de                   7   4   199      0.143     10.145   -916us   290us
kimsuflol.iroqwa.org        7   7   200     -0.298      6.717    +69us   184us
merzhin.deuza.net           7   5   200      0.585     11.293   +675us   314us
To quickly synchronize a server, type:
# ntpdate pool.ntp.org
 5 Jul 10:31:06 ntpdate[2135]: step time server 193.55.167.1 offset 121873.493146 sec
Note: You don’t need to stop the Chrony service to synchronize the server.
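A chrony-native alternative is to ask chronyd itself to step the clock (on older chrony builds you may need chronyc -a so the command key is used for authentication):

# chronyc makestep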

Intel 10Gb x520-da2 Performance Tuning for Windows

Ripped from:
http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005811.html

Adapter installation suggestions
  • Install the Intel® Network Adapter in a slot that matches or exceeds the bus width of the adapter.
    • Example 1: if you have a 32-bit PCI adapter put it in a 32-bit or 64-bit PCI or PCI-X* slot.
    • Example 2: if you have a 64-bit PCI-X adapter put it in a 64-bit PCI-X slot.
    • Example 3: if you have an x4 PCIe* adapter put it in an x4, x8, or x16 PCIe* slot.
    Note Some PCIe* slots are physically wired with fewer channels than the dimensions of the slot would indicate. In that case, a slot with x8 dimensions may only have the functionality of an x4, x2, or x1 slot. Check with your system manufacturer.
  • For PCI and PCI-X*, install the Intel Network Adapter in the fastest available slot.
    • Example 1: if you have a 64-bit PCI adapter put it in a 66 MHz 64-bit PCI slot.
    • Example 2: if you have a 64-bit PCI-X adapter put in a 133 MHz (266 or 533 if available) 64-bit PCI-X slot.
    Note The slowest board on a bus dictates the maximum speed of the bus. Example: when a 66MHz and a 133 MHz add-in card are installed in a 133 MHz bus, then all devices on that bus function at 66 MHz.
  • Try to install the adapter in a slot on a bus by itself. If add-in cards share a bus, they compete for bus bandwidth.
Driver configuration suggestions
  • For Intel® Ethernet 10 Gigabit Converged Network Adapters, you can choose a role-based performance profile to automatically adjust driver configuration settings.
  • Reduce Interrupt Moderation Rate to Low, Minimal, or Off
    • Also known as Interrupt Throttle Rate (ITR).
    • The default is "Adaptive" for most roles.
    • The low latency profile sets the rate to off.
    • The storage profiles set the rate to medium.
    Note Decreasing Interrupt Moderation Rate increases CPU utilization.
  • Enable Jumbo Frames to the largest size supported across the network (4KB, 9KB, or 16KB)
    • The default is Disabled.
    Note Enable Jumbo Frames only if devices across the network support them and are configured to use the same frame size.
  • Disable Flow Control.
    • The default is Generate & Respond.
    Note Disabling Flow Control can result in dropped frames.
  • Increase the Transmit Descriptors buffer size.
    • The default is 256. Maximum value is 2048.
    Note Increasing Transmit Descriptors increases system memory usage.
  • Increase the Receive Descriptors buffer size.
    • The default is 256. Maximum value is 2048.
    Note Increasing Receive Descriptors increases system memory usage.
TCP configuration suggestions
  • Tune the TCP window size (Applies to Windows* Server editions before Windows Server 2008*).
    Notes Optimizing your TCP window size can be complex as every network is different. Documents are available on the Internet that explain the considerations and formulas used to set window size.
    Before Windows Server 2008, the network stack used a fixed-size receive-side window. Starting with Windows Server 2008, Windows provides TCP receive window auto-tuning. The registry keywords TcpWindowSize, NumTcbTablePartitions, and MaxHashTableSize, are ignored starting with Windows Server 2008.
Teaming considerations and suggestions
When teaming multiple adapter ports together to maximize bandwidth, the switch needs to be considered. Dynamic or static 802.3ad link aggregation is the preferred teaming mode, but this teaming mode demands multiple contiguous ports on the switch. Give consideration to port groups on the switch. Typically, a switch has multiple ports grouped together that are serviced by one PHY. This one PHY can have a limited shared bandwidth for all the ports it supports. This limited bandwidth for a group may not be enough to support full utilization of all ports in the group.
Performance gain can be limited to the bandwidth shared, when the switch shares bandwidth across contiguous ports. Example: Teaming 4 ports on Intel® Gigabit Network Adapters or LAN on motherboards together in an 802.3ad static or dynamic teaming mode. Using this example, 4 gigabit ports share a total PHY bandwidth of 2 Gbps. The ability to group switch ports is dependent on the switch manufacturer and model, and can vary from switch to switch.
Alternative teaming modes can sometimes mitigate these performance limitations. For instance, using Adaptive Load Balancing (ALB), including Receive Load Balancing. ALB has no demands on the switch and does not need to be connected to contiguous switch ports. If the link partner has port groups, an ALB team can be connected to any port of the switch. Connecting the ALB team this way distributes connections across available port groups on the switch. This action can increase overall network bandwidth.
Performance testing considerations
  • When copying a file from one system to another (1:1) using one TCP session, throughput is significantly lower than doing multiple simultaneous TCP sessions. Low throughput performance on 1:1 networks is because of latency inherent in a single TCP/IP session. A few file transfer applications support multiple simultaneous TCP streams. Some examples are: bbFTP*, gFTP*, and FDT*.
    This graph is intended to show (not guarantee) the performance benefit of using multiple TCP streams. These are actual results from an Intel® 10 Gigabit CX4 Dual Port Server Adapter, using default Advanced settings under Windows 2008* x64.
  • Direct testing of your network interface throughput capabilities can be done by using tools like: iperf*, and Microsoft NTTTCP*. These tools can be configured to use one or more streams.
  • When copying a file from one system to another, the hard drives of each system can be a significant bottleneck. Consider using high-RPM, higher-throughput hard drives, striped RAIDs, or RAM drives in the systems under test.
  • Systems under test should connect through a full-line rate, non-blocking switch.
  • Theoretical Maximum Bus Throughput:
    • PCI Express* (PCIe*) Theoretical Bi-Directional Bus Throughput.
      PCI Express Implementation    Encoded Data Rate    Unencoded Data Rate
      x1                            5 Gb/sec             4 Gb/sec (0.5 GB/sec)
      x4                            20 Gb/sec            16 Gb/sec (2 GB/sec)
      x8                            40 Gb/sec            32 Gb/sec (4 GB/sec)
      x16                           80 Gb/sec            64 Gb/sec (8 GB/sec)
    • PCI and PCI-X Bus Theoretical Bi-Directional Bus Throughput.
      Bus and Frequency    32-Bit Transfer Rate    64-Bit Transfer Rate
      33-MHz PCI           1,064 Mb/sec            2,128 Mb/sec
      66-MHz PCI           2,128 Mb/sec            4,256 Mb/sec
      100-MHz PCI-X        Not applicable          6,400 Mb/sec
      133-MHz PCI-X        Not applicable          8,192 Mb/sec
      Note The PCIe* link width can be checked in Windows* through adapter properties. Select the Link Speed tab and click the Identify Adapter button. Intel® PROSet for Windows* Device Manager must be loaded for this utility to function.

Intel 10Gb x520-da2 Performance Tuning for Linux

Ripped from:
http://dak1n1.com/blog/7-performance-tuning-intel-10gbe/

By default, Linux networking is configured for best reliability, not performance. With a 10GbE adapter, this is especially apparent. The kernel’s send/receive buffers, TCP memory allocations, and packet backlog are much too small for optimal performance. This is where a little testing & tuning can give your NIC a big boost.
There are three performance-tuning changes you can make, as listed in the Intel ixgb driver documentation. Here they are in order of greatest impact:
  1. Enabling jumbo frames on your local host(s) and switch.
  2. Using sysctl to tune kernel settings.
  3. Using setpci to tune PCI settings for the adapter.

Keep in mind that any tuning listed here is only a suggestion. Much of performance tuning is done by changing one setting, then benchmarking and seeing if it worked for you. So your results may vary.
Before starting any benchmarks, you may also want to disable irqbalance and cpuspeed. Doing so will maximize network throughput and allow you to get the best results on your benchmarks.
service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off

Method #1: jumbo frames

In Linux, setting up jumbo frames is as simple as running a single command, or adding a single field to your interface config.
ifconfig eth2 mtu 9000 txqueuelen 1000 up
For a more permanent change, add this new MTU value to your interface config, replacing “eth2” with your interface name.
vim /etc/sysconfig/network-scripts/ifcfg-eth2
MTU="9000"

Method #2: sysctl settings

There are several important settings that impact network performance in Linux. These were taken from Mark Wagner’s excellent presentation at the Red Hat Summit in 2008.
Core memory settings:
  • net.core.rmem_max –  max size of rx socket buffer
  • net.core.wmem_max – max size of tx socket buffer
  • net.core.rmem_default – default rx size of socket buffer
  • net.core.wmem_default – default tx size of socket buffer
  • net.core.optmem_max – maximum amount of option memory
  • net.core.netdev_max_backlog – how many unprocessed rx packets before kernel starts to drop them
Here is my modified /etc/sysctl.conf. It can be appended onto the default config.
 # -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
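After saving the file, apply the settings without a reboot:

sysctl -p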

Method #3: PCI bus tuning

If you want to take your tuning even further yet, here’s an option to adjust the PCI bus that the NIC is plugged into. The first thing you’ll need to do is find the PCI address, as shown by lspci:
[chloe@biru ~]$ lspci
 07:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Here 07:00.0 is the PCI bus address. Now we can grep for that in /proc/bus/pci/devices to gather even more information.
[chloe@biru ~]$ grep 0700 /proc/bus/pci/devices
0700    808610fb        28              d590000c                       0                    ecc1                       0                d58f800c                       0                       0                   80000                       0                      20                       0                    4000                  0                0        ixgbe
Various information about the PCI device will display, as you can see above. But the number we’re interested in is the second field, 808610fb. This is the Vendor ID and Device ID together. Vendor: 8086 Device: 10fb. You can use these values to tune the PCI bus MMRBC, or Maximum Memory Read Byte Count.
This will increase the MMRBC to 4k reads, increasing the transmit burst lengths on the bus.
setpci -v -d 8086:10fb e6.b=2e
About this command:
The -d option selects the device by its vendor and device ID (the 8086:10fb found above);
e6.b is the address of the PCI-X Command Register,
and 2e is the value to be set.
These are the other possible values for this register (although the one listed above, 2e, is recommended by the Intel ixgbe documentation).
MM    Value in bytes
22    512 (default)
26    1024
2a    2048
2e    4096
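To read the register back and confirm the new value took effect (same vendor and device ID as above):

setpci -d 8086:10fb e6.b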

And finally, testing

Testing is something that should be done in between each configuration change, but for the sake of brevity I’ll just show the before and after results. The benchmarking tools used were ‘iperf’ and ‘netperf’.
Here’s how your 10GbE NIC might perform before tuning…
 [  3]  0.0-100.0 sec   54.7 GBytes  4.70 Gbits/sec

bytes  bytes   bytes    secs.    10^6bits/sec
87380 16384 16384    60.00    5012.24

And after tuning…
 [  3]  0.0-100.0 sec   115 GBytes  9.90 Gbits/sec

bytes  bytes   bytes    secs.    10^6bits/sec
10000000 10000000 10000000    30.01    9908.08
Wow! What a difference a little tuning makes. I’ve seen great results from my Hadoop HDFS cluster after just spending a couple hours getting to know my server’s network hardware. Whatever your application for 10GbE might be, this is sure to be of benefit to you as well.
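For reference, before/after numbers like the ones above come from plain iperf and netperf runs along these lines (the IP address is a placeholder):

# on the server
iperf -s
netserver

# on the client
iperf -c 192.168.1.100 -t 100
netperf -H 192.168.1.100 -l 60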

Saturday, March 19, 2016

Centos 7 new build list of stuff to do after initial install


Ignore what is not needed

Disable selinux

vi /etc/sysconfig/selinux
    SELINUX=disabled
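To put SELinux into permissive mode right away, without waiting for the reboot:

setenforce 0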

Disable and turn off firewalld
  
systemctl disable firewalld
systemctl stop firewalld

reboot

---begin turn off NetworkManager

vi /etc/hostname
    make sure your hostname is in there. I use name.domain.com

vi /etc/hosts
    make sure your hostname is in there. I use both name and name.domain.com
  
vi /etc/resolv.conf
        search yourdomain.com
        nameserver 192.168.10.1   (or whatever you use for DNS)
      
      
---begin if you want to use the old eth0 naming convention      
      
vi /etc/default/grub
            Search for the line “GRUB_CMDLINE_LINUX” and append the following: net.ifnames=0 biosdevname=0

You can also turn off the console screensaver by adding consoleblank=0

My line is now:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_nas/swap rd.lvm.lv=centos_nas/root net.ifnames=0 biosdevname=0 consoleblank=0"

grub2-mkconfig -o /boot/grub2/grub.cfg

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
    **the above is only needed if you boot via EFI

mv /etc/sysconfig/network-scripts/ifcfg-enp????? /etc/sysconfig/network-scripts/ifcfg-eth0  

vi /etc/sysconfig/network-scripts/ifcfg-eth0
    NAME=eth0
    DEVICE=eth0

---end     if you want to use the old eth0 naming convention      

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager-wait-online
systemctl disable NetworkManager-wait-online
systemctl enable network
chkconfig network on
systemctl start network


reboot and sanity check

systemctl status NetworkManager
systemctl status network

---end turn off NetworkManager

Create a text file /root/list with the package list below in it.
Do not include the --begin list or --end list lines in the file.

--begin list  
bind-utils
traceroute
net-tools
ntp*
gcc
glibc
glibc-common
gd
gd-devel
make
net-snmp
openssl-devel
xinetd
unzip
libtool*
make
patch
perl
bison
flex-devel
gcc-c++
ncurses-devel
flex
libtermcap-devel
autoconf*
automake*
autoconf
libxml2-devel
cmake
sqlite*
wget
ntp*
lm_sensors
ncurses-devel
qt-devel
hmaccalc
zlib-devel
binutils-devel
elfutils-libelf-devel
wget
bc
gzip
uuid*
libuuid-devel
jansson*
libxml2*
sqlite*
openssl*
lsof
NetworkManager-tui
mlocate
yum-utils
kernel-devel
nfs-utils
tcpdump
--end list

yum -y install $(cat /root/list)

yum -y groupinstall "Development Tools"

yum -y update

reboot


---install zfs if needed

cd /root
yum -y localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum -y localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
yum -y install kernel-devel zfs

modprobe zfs
lsmod | grep -i zfs
    zfs                  2179437  3
    zcommon                47120  1 zfs
    znvpair                80252  2 zfs,zcommon
    spl                    89796  3 zfs,zcommon,znvpair
    zavl                    6784  1 zfs
    zunicode              323046  1 zfs

vi /etc/sysconfig/modules/zfs.modules
#!/bin/sh

if [ ! -c /dev/zfs ] ; then
        exec /sbin/modprobe zfs >/dev/null 2>&1
fi

chmod +x /etc/sysconfig/modules/zfs.modules

reboot

lsmod | grep -i zfs
    zfs                  2179437  3
    zcommon                47120  1 zfs
    znvpair                80252  2 zfs,zcommon
    spl                    89796  3 zfs,zcommon,znvpair
    zavl                    6784  1 zfs
    zunicode              323046  1 zfs


Create a pool called myraid.
This is an 8-drive, 4-vdev striped mirror pool.

zpool create myraid mirror sdb sdc mirror sdd sde mirror sdf sdg mirror sdh sdi

zpool status
  
zfs mount myraid
echo "zfs mount myraid" >> /etc/rc.local

zfs set compression=lz4 myraid
zfs set sync=disabled myraid
zfs set checksum=fletcher4 myraid
zfs set primarycache=all myraid
zfs set logbias=latency myraid
zfs set recordsize=128k myraid
zfs set atime=off myraid
zfs set dedup=off myraid
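To confirm the properties took effect:

zfs get compression,sync,checksum,atime,recordsize myraid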



vi /etc/modprobe.d/zfs.conf
# disable prefetch
options zfs zfs_prefetch_disable=1
# set arc max to 48GB. I have 64GB in my server
options zfs zfs_arc_max=51539607552
# set size to 128k same as file system block size
options zfs zfs_vdev_cache_size=1310720
options zfs zfs_vdev_cache_max=1310720
options zfs zfs_read_chunk_size=1310720
options zfs zfs_vdev_cache_bshift=17
options zfs zfs_read_chunk_size=1310720
# Set these to 1 so we get max IO at the cost of bandwidth
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_async_read_min_active=1
options zfs zfs_vdev_async_write_max_active=1
options zfs zfs_vdev_async_write_min_active=1
options zfs zfs_vdev_sync_read_max_active=1
options zfs zfs_vdev_sync_read_min_active=1
options zfs zfs_vdev_sync_write_max_active=1
options zfs zfs_vdev_sync_write_min_active=1
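After a reboot you can check that the module actually picked these up by reading the live parameters:

cat /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_prefetch_disable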

I am using my pool via NFS to my ESXi server for guest images, so
I share it from my NAS on both the 1Gb and 10Gb networks

vi /etc/exports
/myraid/     192.168.10.0/24(rw,async,no_root_squash,no_subtree_check)
/myraid/     192.168.90.0/24(rw,async,no_root_squash,no_subtree_check)

systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
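If you change /etc/exports later, re-export without restarting the NFS server:

exportfs -rav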


---end install zfs if needed



---install samba if needed

yum -y install samba

useradd samba -s /sbin/nologin

smbpasswd -a samba
            Supply a password
            Retype the password
  
mkdir /myraid

chown -R samba:root /myraid/

vi /etc/samba/smb.conf

[global]
# use the name of your workgroup here
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = NAS

Add this to the bottom of the /etc/samba/smb.conf file

[NAS]
comment = NAS
path = /myraid
writable = yes
valid users = samba


systemctl start smb
systemctl enable smb
systemctl start nmb
systemctl enable nmb

testparm
  
---end install samba if needed




---install plex if needed


Visit the Plex site and get the RPM for your version of the OS.
Copy it to /root.

yum -y localinstall name.rpm

systemctl enable plexmediaserver
systemctl start plexmediaserver

---end install plex if needed

---install LAMP

yum -y install httpd mariadb-server mariadb php php-mysql
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service

Make sure it works with:
http://your_server_IP_address/

systemctl enable mariadb
systemctl start mariadb
systemctl status mariadb
mysql_secure_installation

vi /var/www/html/info.php
<?php phpinfo(); ?>

http://your_server_IP_address/info.php
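Once PHP is confirmed working, remove the test page:

rm /var/www/html/info.php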


---End install LAMP

---Extra goodies

yum -y install epel-release
yum -y install stress htop iftop iotop hddtemp smartmontools iperf3 sysstat mlocate

updatedb **this is to update mlocate db


---End Extra goodies

---tune 10Gb CNA if needed

service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off

vi /etc/sysconfig/network-scripts/ifcfg-eth???
MTU="9000"

vi /etc/sysctl.conf
# -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000

Reboot and test the speed.

Start the listener on the Linux server with IP 192.168.90.100:

# iperf3 -s -p 5201 -B 192.168.90.100

Then, on the Linux client, point at the server:

# iperf3 -c 192.168.90.100 -p 5201

---end tune 10Gb CNA if needed


Centos 7 turn off NetworkManager


My domain is whittenberg.domain and my machine is nas.whittenberg.domain

Do not attempt this unless you have console access. You can do all of the below from, say, a PuTTY session, but if things go wrong you will need console access.

vi /etc/hostname
    nas.whittenberg.domain

vi /etc/hosts
    127.0.0.1   nas nas.whittenberg.domain localhost localhost.localdomain localhost4 localhost4.localdomain4
     ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

   
vi /etc/resolv.conf
        # Generated by NetworkManager
        search whittenberg.domain
        nameserver 192.168.10.1
        nameserver 2600:8800:2580:eda:4af8:b3ff:fe93:615d


---begin if you want to use the old eth0 naming convention       
       
vi /etc/default/grub
 

Search for the line “GRUB_CMDLINE_LINUX” and append the following: 

"net.ifnames=0 biosdevname=0"

**Copy/paste from this blog sometimes leaves incorrect " characters in the Linux file. Please type those in manually from your keyboard. Also make sure you do not have too many quotes in the file. The line should begin and end in a quote. If you have any in the middle it will fail.

grub2-mkconfig -o /boot/grub2/grub.cfg

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg


**the above is only needed if you boot via EFI
 
mv /etc/sysconfig/network-scripts/ifcfg-enp3s0 /etc/sysconfig/network-scripts/ifcfg-eth0   

vi /etc/sysconfig/network-scripts/ifcfg-eth0


    NAME=eth0
    DEVICE=eth0


---end if you want to use the old eth0 naming convention

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager-wait-online
systemctl disable NetworkManager-wait-online
systemctl enable network
chkconfig network on
systemctl start network


reboot and sanity check

systemctl status NetworkManager
systemctl status network


Friday, March 18, 2016

Centos 7 install stress


# cd /root


# wget ftp://ftp.pbone.net/mirror/dag.wieers.com/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm


# yum localinstall /root/stress-1.0.2-1.el7.rf.x86_64.rpm


If you have a 4-core CPU, use the following to stress all 4 cores:

# stress -c 4
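stress can also exercise memory and I/O at the same time; for example, 4 CPU workers, 2 memory workers of 1024 MB each, and 2 I/O workers, stopping after 60 seconds (adjust to your hardware):

# stress -c 4 -m 2 --vm-bytes 1024M -i 2 -t 60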

Centos 7 Setup static IP using NetworkManager


# yum install NetworkManager-tui

# nmtui


│ ╤ IPv4 CONFIGURATION
│ │          Addresses 192.168.10.100/24________
│ │                                                           
│ │            Gateway 192.168.10.1_____________                      
│ │        DNS servers 192.168.10.1_____________
│ │              
│ │     Search domains whittenberg.domain_______
│ │                   
│ │                                                                      
│ │            Routing (No custom routes)
│ │ [ ] Never use this network for default route                         
│ │                                                                      
│ │ [X] Require IPv4 addressing for this connection                     
│ └                                                                      
│                                                                        
│ ═ IPv6 CONFIGURATION
│                                                                        
│ [X] Automatically connect                                              
│ [X] Available to all users                                             

# systemctl restart network
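Afterwards, verify the new address is up and the gateway answers (addresses taken from the screen above):

# ip addr show
# ping -c 3 192.168.10.1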