
Saturday, June 16, 2018

Galera Cluster on CentOS 7


Resource links

https://linuxadmin.io/galeria-cluster-configuration-centos-7/
https://mariadb.com/kb/en/library/getting-started-with-mariadb-galera-cluster/
https://mariadb.com/kb/en/library/galera-cluster-system-variables/

I will be using three servers

mysql1 = 192.168.10.160
mysql2 = 192.168.10.161
mysql3 = 192.168.10.162

With mysql1 being the master node.


Turn off firewalld

Disable selinux

reboot

Add the MariaDB repository to each of the 3 or more nodes

vi /etc/yum.repos.d/MariaDB.repo

Insert the following repository information and save the file

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Install the packages from yum. Galera is included with MariaDB 10.1 and greater, so no separate Galera package is needed

yum -y install MariaDB-server MariaDB-client MariaDB-common

rsync is the default method used to perform the state transfer between nodes, so install it. Also install lsof (list open files), which we will use to check ports later

yum install -y rsync lsof

Make sure each of the MariaDB instances starts on reboot

systemctl enable mariadb

Galera Master Node Configuration

After installing MariaDB on the master node edit the server.cnf file

vi /etc/my.cnf.d/server.cnf

add the following under the [galera] section

binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.10.160,192.168.10.161,192.168.10.162"
#
## Galera Cluster Configuration
wsrep_cluster_name="cluster1"
## Galera Synchronization Configuration
wsrep_sst_method=rsync
## Galera Node Configuration
#### unique per node
wsrep_node_address="192.168.10.160"
#### unique per node
wsrep_node_name="mysql1"

wsrep_on=ON – Setting this to ON enables replication. In MariaDB 10.1, replication is turned off by default, so this needs to be stated explicitly.
wsrep_cluster_address – This is where we specify the IP addresses of all the nodes, separated by commas. The primary node is always the first IP address, in this case 192.168.10.160.
wsrep_cluster_name – The name of the cluster. You can name this anything you want.
wsrep_node_address – The IP address of the node you are configuring.
wsrep_node_name – The name of the node you are currently configuring. It can be named anything you want; it just needs to be unique per node.
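
Since only wsrep_node_address and wsrep_node_name differ between servers, the per-node section is easy to get wrong when copying it around. As a rough sketch (the galera_section helper is my own illustration, not part of Galera; the node list mirrors the three servers in this post), the section can be generated per node:

```python
# Sketch: generate the [galera] section for each node so that only
# wsrep_node_address and wsrep_node_name differ per server.
NODES = {
    "mysql1": "192.168.10.160",
    "mysql2": "192.168.10.161",
    "mysql3": "192.168.10.162",
}

def galera_section(node_name):
    # Cluster address lists every node; the primary (mysql1) comes first.
    cluster = ",".join(NODES[n] for n in sorted(NODES))
    return "\n".join([
        "binlog_format=ROW",
        "default-storage-engine=innodb",
        "innodb_autoinc_lock_mode=2",
        "bind-address=0.0.0.0",
        "wsrep_on=ON",
        "wsrep_provider=/usr/lib64/galera/libgalera_smm.so",
        'wsrep_cluster_address="gcomm://%s"' % cluster,
        'wsrep_cluster_name="cluster1"',
        "wsrep_sst_method=rsync",
        'wsrep_node_address="%s"' % NODES[node_name],
        'wsrep_node_name="%s"' % node_name,
    ])

print(galera_section("mysql2"))
```

Redirect the output into /etc/my.cnf.d/server.cnf on each node instead of editing by hand.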


Under the [mysqld] section add a log location (if you don’t, it will log to the main syslog)

log_error=/var/log/mariadb.log

Once you have finished editing and saved server.cnf, go ahead and create the error log

touch /var/log/mariadb.log

Give the error log the appropriate permissions:

chown mysql:mysql /var/log/mariadb.log

You can now start the new master node by typing the following

galera_new_cluster

After you have started it, make sure it has bound to the correct ports using lsof

Port 4567 is for replication traffic:

# lsof -i:4567
 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
 mysqld 4121 mysql 11u IPv4 34770 0t0 TCP *:tram (LISTEN)
Port 3306 is for MySQL client connections:

# lsof -i:3306
 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
 mysqld 4121 mysql 26u IPv4 34787 0t0 TCP *:mysql (LISTEN)

mysql

Then check the cluster size

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
 +--------------------+-------+
 | Variable_name      | Value |
 +--------------------+-------+
 | wsrep_cluster_size | 1     |
 +--------------------+-------+
It should say 1 at this point because only the primary node is connected.

Adding Additional Nodes To Galera

After installing MariaDB on the additional nodes, you will want to copy the [galera] section of /etc/my.cnf.d/server.cnf that we created earlier and insert it into the server.cnf on each of the additional nodes. The only lines that will change on each of the additional nodes are the following:

wsrep_node_address="192.168.10.161"
wsrep_node_name="mysql2"

The wsrep_node_address will be the IP address of the node you are configuring and the wsrep_node_name will be the name of that node.

After you have finished each server's configuration file, you can start them normally

systemctl start mariadb

As each node connects to the cluster you should see the wsrep_cluster_size increase:

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
 +--------------------+-------+
 | Variable_name      | Value |
 +--------------------+-------+
 | wsrep_cluster_size | 3     |
 +--------------------+-------+

You will also see nodes join in the log:

WSREP: Member 1.0 (centos7-vm2) synced with group.
The logs will also indicate when a node has left the group:

WSREP: forgetting 96a5eca6 (tcp://192.168.1.101:4567)
WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2

You can view the full configuration of Galera by typing the following:

MariaDB [(none)]> show status like 'wsrep%';
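
If you capture that status with the mysql client in batch mode (mysql -N -e produces tab-separated Variable_name/Value pairs), a few lines of Python can sanity-check the cluster. This is a sketch; the parse_wsrep helper and the sample output are illustrative, not part of Galera:

```python
# Sketch: parse tab-separated output of SHOW STATUS LIKE 'wsrep%'
# (as produced by: mysql -N -e "SHOW STATUS LIKE 'wsrep%'")
# and check the values that matter for cluster health.
def parse_wsrep(output):
    status = {}
    for line in output.strip().splitlines():
        name, _, value = line.partition("\t")
        status[name] = value
    return status

# Hypothetical captured output from a healthy three-node cluster.
sample = "wsrep_cluster_size\t3\nwsrep_cluster_status\tPrimary\nwsrep_ready\tON"

status = parse_wsrep(sample)
assert status["wsrep_cluster_status"] == "Primary"
assert int(status["wsrep_cluster_size"]) == 3
```

A cluster_status other than Primary, or a smaller cluster_size than expected, means a node has dropped out or the cluster has lost quorum.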

Testing Replication On The Galera Cluster

On any node, create a new database with

MariaDB [(none)]> create database testdb1;

You should see the testdb1 database appear on the other nodes as well.

Sunday, June 10, 2018

MySQL allow remote access


Say you want to allow the user root, with the password password, remote access to the test database.


GRANT ALL PRIVILEGES ON test.* TO 'root'@'%' IDENTIFIED BY 'password';

FLUSH PRIVILEGES;
quit;

CentOS 7 install mysql connector for python3.6


Go to website

https://pypi.org/project/mysqlclient/#files

and download the file

mysqlclient-1.3.12.tar.gz

place in /root dir

Now from server console as root user (or su)

cd /root

tar -xzf mysqlclient-1.3.12.tar.gz

cd /root/mysqlclient-1.3.12

yum install -y mariadb-devel

python3.6 setup.py build

python3.6 setup.py install

Now, in a Python script

#!/usr/bin/python3.6

import MySQLdb

# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )

# prepare a cursor object using cursor() method
cursor = db.cursor()

# execute SQL query using execute() method.
cursor.execute("SELECT VERSION()")

# Fetch a single row using fetchone() method.
data = cursor.fetchone()
print ("Database version : %s " % data)

# disconnect from server
db.close()
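
The same query logic can be kept independent of the driver by passing in any DB-API connection object. This fetch_version helper is my own sketch (not part of mysqlclient); it works with the MySQLdb connection above, and the finally block guarantees cleanup even if the query fails:

```python
# Sketch: driver-agnostic version fetch for any DB-API 2.0 connection.
# Pass it the MySQLdb connection from above, or any object exposing the
# same cursor()/close() interface.
def fetch_version(db):
    cursor = db.cursor()
    try:
        cursor.execute("SELECT VERSION()")
        return cursor.fetchone()[0]
    finally:
        cursor.close()
        db.close()
```

Usage with the connection from the script above would be: `print(fetch_version(MySQLdb.connect("localhost","testuser","test123","TESTDB")))`.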

If you use this in a web browser

#!/usr/bin/python3.6
import os
import cgi
import MySQLdb

# Set content type for web browser use
print ("Content-type: text/html\n")

# Open database connection
db = MySQLdb.connect("localhost","root","password","information_schema" )

# prepare a cursor object using cursor() method
cursor = db.cursor()

# execute SQL query using execute() method.
cursor.execute("SELECT VERSION()")

# Fetch a single row using fetchone() method.
data = cursor.fetchone()
print ("Database version : %s " % data)

# disconnect from server
db.close()

Sunday, June 3, 2018

Python3.6 My personal 101 notes


Miscellaneous operating system interfaces
import os
Common Gateway Interface support
import cgi

CGI Env Variables
http://www.cgi101.com/book/ch3/text.html


# Set content type for web browser use
print ("Content-type: text/html\n")

# Begin the web page html tags
print ("<html>")
print ("<body>")

# End the web page html tags
print ("</body>")
print ("</html>" )


# Get Values from a form
# So say form.html submits to /cgi-bin/form.py
form = cgi.FieldStorage()
if "fname" not in form or "lname" not in form:
    print("<H1>Error</H1>")
    print("Please fill in the fname and lname fields.")
    raise SystemExit
print("<p>fname:", form["fname"].value)
print("<p>lname:", form["lname"].value)
# ...further form processing here...

# Get data from fields assign to vars and print them
# using form = cgi.FieldStorage()
first_name = form.getvalue('fname')
last_name  = form.getvalue('lname')
my_age = form.getvalue('age')
print (first_name)
print ("<br>")
print (last_name)
print ("<br>")
print (my_age)
print ("<br>")

# Print form values using form = cgi.FieldStorage()
print ("first name: " + form["fname"].value)
print ("<br>")
print ("last name: " + form["lname"].value)
print ("<br>")
print ("age: " + form["age"].value)
print ("<br>")
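
Form values come from the user, so they should be HTML-escaped before being echoed back into the page; otherwise characters like < > & can break the markup (or worse). A small sketch using the standard library's html.escape (the safe helper name is my own):

```python
# Sketch: escape user-supplied form values before printing them into
# HTML, so < > & " cannot break (or inject into) the page markup.
import html

def safe(value):
    return html.escape(value, quote=True)

print("first name: " + safe("<script>alert(1)</script>"))
```

In the snippets above, this would mean printing `safe(form["fname"].value)` instead of the raw value.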

# Print some os environment vars
# using import os
ra = os.environ['REMOTE_ADDR']
print (ra)
print ("<br>")
ss = os.environ['SERVER_SOFTWARE']
print (ss)
print ("<br>")


# This is a formatted web ui output of some stuff
print ("Content-type: text/html\n")
print ("<html>")
print ("<body>")
print ("<h1>Hello " + first_name + " " + last_name + ". Your age is " + my_age + "</h1>")
print ("The IP address of your client is: " + ra)
print ("<br>")
print ("The software on this Python Server is: " + ss)
print ("</body>")
print ("</html>" )


# Read Write files with Python
http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python

print ("<h1>Writing text file now to testfile.txt</h1>")

# Set file path and name as var of path
path = '/var/www/cgi-bin/testfile.txt'

file = open(path,'w')
file.write("Hello World\n")
file.write("This is our new text file\n")
file.write("and this is another line.\n")
file.write("Why? Because we can.\n")
file.close()

print ("<h1>Reading text file testfile.txt</h1>")
file = open(path,'r')
print (file.read())
file.close()

print ("<h1>Reading first 10 characters</h1>")
file = open(path,'r')
print (file.read(10))
file.close()

print ("<h1>Reading file by line</h1>")
file = open(path,'r')
print (file.readline())
file.close()

print ("<h1>Reading file all lines</h1>")
file = open(path,'r')
print (file.readlines())
file.close()

print ("<h1>Reading file looping over file object</h1>")
file = open(path,'r')
for line in file:
    print (line)
file.close()
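
The open/close pairs above can be written more compactly with a with block, which closes the file automatically even if an exception occurs partway through. A sketch of the same write-then-read steps (using a temporary directory rather than the /var/www/cgi-bin path above):

```python
# Sketch: the same read/write steps using "with", which closes the
# file automatically. Uses a temp path instead of /var/www/cgi-bin.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "testfile.txt")

with open(path, "w") as f:
    f.write("Hello World\n")
    f.write("This is our new text file\n")

with open(path) as f:
    lines = f.readlines()

print(lines[0].strip())  # prints "Hello World"
```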



Saturday, June 2, 2018

Python3.6 Processing user Input

Jacked from http://interactivepython.org/runestone/static/webfundamentals/CGI/forms.html

Processing user Input


Let's start with a basic page with a form.

<html>
  <body>
      <form action='cgi-bin/hello2.py' method='get'>
          <label for="myname">Enter Your Name</label>
          <input id="myname" type="text" name="firstname" value="Nada" />
          <input type="submit">
      </form>
  </body>
</html> 
 
There are two important attributes on the form tag:
  • method: this tells the browser which http method to use when submitting the form back to the server. The options are get or post.
  • action: This tells the browser the URL to use when submitting the form.
The input type submit renders as a button in the form. The purpose of this input type is to cause the form to be submitted back to the web server.

#!/usr/bin/env python3
import os

print("Content-type: text/html\n")

qs = os.environ['QUERY_STRING']
if 'firstname' in qs:
    name = qs.split('=')[1]
else:
    name = 'No Name Provided'

print("<html>")
print("<body>")
print("<h1>Hello %s</h1>" % name)
print("</body>")
print("</html>")
 
The new cgi script must now check to see if the QUERY_STRING environment variable contains the string firstname. Note that the firstname in the query string corresponds to the name attribute of the input element.

When you press the submit button for a form, the web browser iterates over all of the input elements in the form, and collects the name value attributes of each. These are put together into a string that becomes part of the URL. The name value pairs are added after the usual URL information using the form: ?firstname=Sheldon&lastname=Cooper The ? separates the query string information from the URL itself. The & separates each name value pair.
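
Splitting the query string on '=' only works when there is a single field; the standard library's urllib.parse.parse_qs handles the full ?name=value&name=value form described above. A sketch (not part of the original tutorial):

```python
# Sketch: parse a multi-field query string the robust way, instead of
# qs.split('=')[1], which breaks as soon as a second field is present.
from urllib.parse import parse_qs

qs = "firstname=Sheldon&lastname=Cooper"
fields = parse_qs(qs)

# parse_qs returns a list per name, since a name can repeat.
name = fields.get("firstname", ["No Name Provided"])[0]
print(name)  # prints "Sheldon"
```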

The original article includes a figure (cgi-round-trip.svg) giving a good sense for the flow of how our little application works.

Combining into One File

Let's now combine our application into a single file, using the following flow:
  1. If there is no QUERY_STRING simply return the HTML for the form.
  2. If there is a QUERY_STRING then do not display the form, simply display the Hello greeting to the name stored in the QUERY STRING.
Along the way we will clean up our code and refactor it into a couple of functions.

#!/usr/bin/env python3
import os

headers = ["Content-type: text/html"]
qs = os.environ['QUERY_STRING']

def sendHeaders():
    for h in headers:
        print(h)
    print()

def sendForm():
    print('''
    <html>
      <body>
          <form action='cgi-bin/hellobetter.py' method='get'>
              <label for="myname">Enter Your Name</label>
              <input id="myname" type="text" name="firstname" value="Nada" />
              <input type="submit">
          </form>
      </body>
    </html>
    ''')

def sendPage(name):
    print('''
    <html>
      <body>
        <h1>Hello {0}</h1>
      </body>
    </html>
    '''.format(name))

if not qs:
    sendHeaders()
    sendForm()
else:
    if 'firstname' in qs:
        name = qs.split('=')[1]
    else:
        name = 'No Name Provided'
    sendHeaders()
    sendPage(name) 
 
The headers list sets us up with a pattern that will be useful later. Sometimes we don't know right away what headers we may want to send; we'll see that in the next section. So we can defer sending the headers until we have done all of our processing and are ready to send back the results. To add a header to our response we can simply append the string to the list of headers.

The other functions, sendPage and sendForm reduce the number of print statements we need by making use of Python’s triple quoted strings, and string formatting.


CentOS 7 install and load mod_wsgi in Apache for Python


Must have httpd-devel installed

yum install httpd-devel

Source code tar balls can be obtained from:

https://github.com/GrahamDumpleton/mod_wsgi/releases

wget the file you want

uncompress and untar

tar xvfz mod_wsgi-X.Y.tar.gz

cd into the mod_wsgi dir

Now configure, make, and make install

./configure

make

make install

Make sure the module gets loaded in Apache at restart by adding the following to the main Apache “httpd.conf” configuration file

LoadModule wsgi_module modules/mod_wsgi.so

Now restart Apache

systemctl restart httpd

Now check to make sure mod is loaded

apachectl -M


Now to set up a WSGIScriptAlias. This was taken from

https://modwsgi.readthedocs.io/en/master/configuration-directives/WSGIScriptAlias.html

add the following to the httpd.conf file

WSGIScriptAlias /wsgi-scripts/ /var/www/wsgi-scripts/
<Directory /var/www/wsgi-scripts>
Order allow,deny
Allow from all
</Directory>


mkdir /var/www/wsgi-scripts/

vi /var/www/wsgi-scripts/myapp.wsgi

def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
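
Before deploying, the application callable can be exercised outside Apache by calling it directly with a minimal environ and a recording start_response. This is my own sketch (the recording start_response is illustrative, not a mod_wsgi API); it copies the app from above:

```python
# Sketch: call the WSGI application directly with an empty environ and
# a recording start_response, to check it before Apache runs it.
def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]

recorded = {}

def start_response(status, headers):
    # Record what the app reported instead of sending it to a client.
    recorded['status'] = status
    recorded['headers'] = headers

body = b''.join(application({}, start_response))
print(recorded['status'], body)
```

If the status, headers, and body look right here, problems seen in the browser are more likely to be Apache configuration than the app itself.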

Now chmod the file to 705

chmod 705 /var/www/wsgi-scripts/myapp.wsgi

Restart httpd with

systemctl restart httpd

Now see your app in the web browser at

http://your-ip-here/wsgi-scripts/myapp.wsgi

CentOS 7 Enable CGI executing for use of Python script


See the post "CentOS 7 install and setup for python3.6" at

http://glenewhittenberg.blogspot.com/2018/06/centos7-install-and-setup-for-python36_1.html

before continuing

By default, CGI is allowed under the "/var/www/cgi-bin" directory.

[root@python /]# grep "^ *ScriptAlias" /etc/httpd/conf/httpd.conf
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
   
Create test page in /var/www/cgi-bin/ directory

[root@python /]# vi /var/www/cgi-bin/test.py

#!/usr/bin/python3.6
print ("Content-type: text/html\n\n")
print ("<html>\n<body>")
print ("<div style=\"width: 100%; font-size: 40px; font-weight: bold; text-align: center;\">")
print ("Python Script Test Page")
print ("</div>\n</body>\n</html>")
Now chmod so the file is executable to all (-rwx---r-x)

[root@python /]# chmod 705 /var/www/cgi-bin/test.py

Now run in web browser

http://your-ip-here/cgi-bin/test.py

Friday, June 1, 2018

CentOS 7 install and setup for python3.6


cp /etc/sysconfig/selinux /etc/sysconfig/selinux.bak
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux

cp /etc/selinux/config /etc/selinux/config.bak
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld
systemctl stop firewalld

service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off

yum -y install bind-utils traceroute net-tools ntp* gcc glibc glibc-common gd gd-devel make net-snmp openssl* xinetd unzip libtool* patch perl bison flex flex-devel gcc-c++ ncurses-devel libtermcap-devel autoconf* automake* libxml2* cmake sqlite* wget lm_sensors qt-devel hmaccalc zlib-devel binutils-devel elfutils-libelf-devel bc gzip uuid* libuuid-devel jansson* lsof NetworkManager-tui mlocate yum-utils kernel-devel nfs-utils tcpdump git vim gdisk parted

yum -y groupinstall "Development Tools"
yum -y update
yum -y upgrade

cd /root
echo ':color desert' > .vimrc

systemctl disable kdump.service

reboot


yum -y install epel-release
yum -y install stress htop iftop iotop hddtemp smartmontools iperf3 sysstat mlocate
updatedb

---install LAMP

yum -y install httpd mariadb-server mariadb php php-mysql
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service

Make sure it works with:
http://your_server_IP_address/

systemctl enable mariadb
systemctl start mariadb
systemctl status mariadb
mysql_secure_installation

vi /var/www/html/info.php
<?php phpinfo(); ?>

http://your_server_IP_address/info.php


---End install LAMP



Below was jacked from https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-centos-7

Let’s first make sure that yum is up to date by running this command:

    sudo yum -y update

The -y flag is used to alert the system that we are aware that we are making changes, preventing the terminal from prompting us to confirm.

Next, we will install yum-utils, a collection of utilities and plugins that extend and supplement yum:

    sudo yum -y install yum-utils

Finally, we’ll install the CentOS Development Tools, which are used to allow you to build and compile software from source code:

    sudo yum -y groupinstall development

Once everything is installed, our setup is in place and we can go on to install Python 3.
Step 2 — Installing and Setting Up Python 3

CentOS is derived from RHEL (Red Hat Enterprise Linux), which has stability as its primary focus. Because of this, tested and stable versions of applications are what is most commonly found on the system and in downloadable packages, so on CentOS you will only find Python 2.

Since instead we would like to install the most current upstream stable release of Python 3, we will need to install IUS, which stands for Inline with Upstream Stable. A community project, IUS provides Red Hat Package Manager (RPM) packages for some newer versions of select software.

To install IUS, let’s install it through yum:

    sudo yum -y install https://centos7.iuscommunity.org/ius-release.rpm

Once IUS is finished installing, we can install the most recent version of Python:

    sudo yum -y install python36u

When the installation process of Python is complete, we can check to make sure that the installation was successful by checking for its version number with the python3.6 command:

    python3.6 -V

With a version of Python 3.6 successfully installed, we will receive the following output:

Output
Python 3.6.1

We will next install pip, which will manage software packages for Python:

    sudo yum -y install python36u-pip

A tool for use with Python, we will use pip to install and manage programming packages we may want to use in our development projects. You can install Python packages by typing:

    sudo pip3.6 install package_name

Here, package_name can refer to any Python package or library, such as Django for web development or NumPy for scientific computing. So if you would like to install NumPy, you can do so with the command pip3.6 install numpy.

Finally, we will need to install the IUS package python36u-devel, which provides us with libraries and header files we will need for Python 3 development:

    sudo yum -y install python36u-devel

The venv module will be used to set up a virtual environment for our development projects in the next step.
Step 3 — Setting Up a Virtual Environment

Now that we have Python installed and our system set up, we can go on to create our programming environment with venv.

Virtual environments enable you to have an isolated space on your computer for Python projects, ensuring that each of your projects can have its own set of dependencies that won’t disrupt any of your other projects.

Setting up a programming environment provides us with greater control over our Python projects and over how different versions of packages are handled. This is especially important when working with third-party packages.

You can set up as many Python programming environments as you want. Each environment is basically a directory or folder in your computer that has a few scripts in it to make it act as an environment.

Choose which directory you would like to put your Python programming environments in, or create a new directory with mkdir, as in:

    mkdir environments
    cd environments

Once you are in the directory where you would like the environments to live, you can create an environment by running the following command:

    python3.6 -m venv my_env

Essentially, this command creates a new directory (in this case called my_env) that contains a few items that we can see with the ls command:

bin include lib lib64 pyvenv.cfg

Together, these files work to make sure that your projects are isolated from the broader context of your local machine, so that system files and project files don’t mix. This is good practice for version control and to ensure that each of your projects has access to the particular packages that it needs.

To use this environment, you need to activate it, which you can do by typing the following command that calls the activate script in the bin directory:

    source my_env/bin/activate

Your prompt will now be prefixed with the name of your environment, in this case it is called my_env:

(my_env) [user@host environments]$

This prefix lets us know that the environment my_env is currently active, meaning that when we create programs here they will use only this particular environment’s settings and packages.

Note: Within the virtual environment, you can use the command python instead of python3.6, and pip instead of pip3.6 if you would prefer. If you use Python 3 on your machine outside of an environment, you will need to use the python3.6 and pip3.6 commands exclusively.
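
From inside Python you can also check whether a venv is active by comparing sys.prefix (the environment) with sys.base_prefix (the interpreter it was created from). A small sketch; the in_virtualenv name is my own:

```python
# Sketch: detect an active venv. Inside a venv, sys.prefix points at
# the environment directory while sys.base_prefix points at the base
# interpreter; outside a venv the two are equal.
import sys

def in_virtualenv():
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```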

After following these steps, your virtual environment is ready to use.
Step 4 — Creating a Simple Program

Now that we have our virtual environment set up, let’s create a simple “Hello, World!” program. This will make sure that our environment is working and gives us the opportunity to become more familiar with Python if we aren’t already.

To do this, we’ll open up a command-line text editor such as vim and create a new file:

    vi hello.py

Once the text file opens up in our terminal window, we will have to type i to enter insert mode, and then we can write our first program:

print("Hello, World!")

Now press ESC to leave insert mode. Next, type :x then ENTER to save and exit the file.

We are now ready to run our program:

    python hello.py

The hello.py program that you just created should cause the terminal to produce the following output:

Output
Hello, World!

To leave the environment, simply type the command deactivate and you’ll return to your original directory.

Saturday, October 28, 2017

Docker on CentOS 7

This is copy of:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-centos-7

Great to the point article that got the job done for me. Thanks!

How To Install and Use Docker on CentOS 7

Updated November 2, 2016

Introduction

Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.
There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.
In this tutorial, you'll learn how to install and use it on an existing installation of CentOS 7.

Prerequisites

Note: Docker requires a 64-bit version of CentOS 7 as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.
All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo. Initial Setup Guide for CentOS 7 explains how to add users and give them sudo access.

Step 1 — Installing Docker

The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.
But first, let's update the package database:
  • sudo yum check-update
Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:
  • curl -fsSL https://get.docker.com/ | sh
After installation has completed, start the Docker daemon:
  • sudo systemctl start docker
Verify that it's running:
  • sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)
Lastly, make sure it starts at every server reboot:
  • sudo systemctl enable docker
Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Step 2 — Executing Docker Command Without Sudo (Optional)

By default, running the docker command requires root privileges — that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:
Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
  • sudo usermod -aG docker $(whoami)
You will need to log out of the Droplet and back in as the same user to enable this change.
If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:
  • sudo usermod -aG docker username
The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo.

Step 3 — Using the Docker Command

With Docker installed and working, now's the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:
  • docker [option] [command] [arguments]
To view all available subcommands, type:
  • docker
As of Docker 1.11.1, the complete list of available subcommands includes:
Output
attach     Attach to a running container
build      Build an image from a Dockerfile
commit     Create a new image from a container's changes
cp         Copy files/folders between a container and the local filesystem
create     Create a new container
diff       Inspect changes on a container's filesystem
events     Get real time events from the server
exec       Run a command in a running container
export     Export a container's filesystem as a tar archive
history    Show the history of an image
images     List images
import     Import the contents from a tarball to create a filesystem image
info       Display system-wide information
inspect    Return low-level information on a container or image
kill       Kill a running container
load       Load an image from a tar archive or STDIN
login      Log in to a Docker registry
logout     Log out from a Docker registry
logs       Fetch the logs of a container
network    Manage Docker networks
pause      Pause all processes within a container
port       List port mappings or a specific mapping for the CONTAINER
ps         List containers
pull       Pull an image or a repository from a registry
push       Push an image or a repository to a registry
rename     Rename a container
restart    Restart a container
rm         Remove one or more containers
rmi        Remove one or more images
run        Run a command in a new container
save       Save one or more images to a tar archive
search     Search the Docker Hub for images
start      Start one or more stopped containers
stats      Display a live stream of container(s) resource usage statistics
stop       Stop a running container
tag        Tag an image into a repository
top        Display the running processes of a container
unpause    Unpause all processes within a container
update     Update configuration of one or more containers
version    Show the Docker version information
volume     Manage Docker volumes
wait       Block until a container stops, then print its exit code
To view the switches available to a specific command, type:
  • docker docker-subcommand --help
To view system-wide information, use:
  • docker info

Step 4 — Working with Docker Images

Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you'll need to run Docker containers have images that are hosted on Docker Hub.
To check whether you can access and download images from Docker Hub, type:
  • docker run hello-world
The output, which should include the following, should indicate that Docker is working correctly:
Output
Hello from Docker. This message shows that your installation appears to be working correctly. ...
You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the CentOS image, type:
  • docker search centos
The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:
Output
NAME                            DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   2224    [OK]
jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   22                 [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   17                 [OK]
million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   11                 [OK]
nimmis/java-centos              This is docker images of CentOS 7 with dif...   10                 [OK]
torusware/speedus-centos        Always updated official CentOS docker imag...   8                  [OK]
nickistre/centos-lamp           LAMP on centos setup                            3                  [OK]
...
In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand, like so:
  • docker pull centos
After an image has been downloaded, you may then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:
  • docker run centos
To see the images that have been downloaded to your computer, type:
  • docker images
The output should look similar to the following:
Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              778a53015523        5 weeks ago         196.7 MB
hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B
As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits, after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.
As an example, let's run a container using the latest image of CentOS. The combination of the -i and -t switches gives you interactive shell access into the container:
  • docker run -it centos
Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:
Output
[root@59839a1b7de2 /]#
Important: Note the container id in the command prompt. In the above example, it is 59839a1b7de2.
Now you may run any command inside the container. For example, let's install MariaDB server in the running container. No need to prefix any command with sudo, because you're operating inside the container with root privileges:
  • yum install mariadb-server

Step 6 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.
This section shows you how to save the state of a container as a new Docker image.
After installing MariaDB server inside the CentOS container, you now have a container running off an image, but the container is different from the image you used to create it.
To save the state of the container as a new image, first exit from it:
  • exit
Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:
  • docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name
For example:
  • docker commit -m "added mariadb-server" -a "Sunday Ogwu-Chinuwa" 59839a1b7de2 finid/centos-mariadb
Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so that it may be accessed and used by you and others.
After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:
  • docker images
The output should be of this sort:
Output
REPOSITORY             TAG      IMAGE ID       CREATED         SIZE
finid/centos-mariadb   latest   23390430ec73   6 seconds ago   424.6 MB
centos                 latest   778a53015523   5 weeks ago     196.7 MB
hello-world            latest   94df4f0ce8a4   2 weeks ago     967 B
In the above example, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that MariaDB server was installed. So next time you need to run a container using CentOS with MariaDB server pre-installed, you can just use the new image. Images may also be built from what's called a Dockerfile. But that's a very involved process that's well outside the scope of this article. We'll explore that in a future article.

Step 7 — Listing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
  • docker ps
You will see output similar to the following:
Output
CONTAINER ID   IMAGE    COMMAND       CREATED       STATUS       PORTS   NAMES
f7c79cc556dd   centos   "/bin/bash"   3 hours ago   Up 3 hours           silly_spence
To view all containers — active and inactive, pass it the -a switch:
  • docker ps -a
To view the latest container you created, pass it the -l switch:
  • docker ps -l
Stopping a running or active container is as simple as typing:
  • docker stop container-id
The container-id can be found in the output from the docker ps command.
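As a convenience (an assumption on my part, not part of the original tutorial), you can feed the IDs printed by `docker ps -q` straight into `docker stop` to stop every running container at once. The sketch below only prints the command as a dry run so you can inspect it first; run it with `eval` once you are sure.

```shell
# Dry-run sketch (assumption: a convenience, not from the tutorial).
# `docker ps -q` prints only the IDs of running containers; command
# substitution then passes them all to `docker stop` in one go.
stop_all_cmd='docker stop $(docker ps -q)'
echo "$stop_all_cmd"    # dry run: inspect the command first
# eval "$stop_all_cmd"  # uncomment to actually stop the containers
```

The single quotes keep the `$( )` from being expanded until you actually run it.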

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub.
To create an account on Docker Hub, register at Docker Hub. Afterwards, to push your image, first log into Docker Hub. You'll be prompted to authenticate:
  • docker login -u docker-registry-username
If you specified the correct password, authentication should succeed. Then you may push your own image using:
  • docker push docker-registry-username/docker-image-name
It will take some time to complete, and when completed, the output will be of this sort:
Output
The push refers to a repository [docker.io/finid/centos-mariadb]
670194edfaf5: Pushed
5f70bf18a086: Mounted from library/centos
6a6c96337be1: Mounted from library/centos
...
After pushing an image to a registry, it should be listed on your account's dashboard, like the one shown in the image below.
Docker image listing on Docker Hub
If a push attempt results in an error of this sort, then you likely did not log in:
Output
The push refers to a repository [docker.io/finid/centos-mariadb]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
Log in, then repeat the push attempt.

Conclusion

There's a whole lot more to Docker than has been covered in this article, but this should be enough to get you started working with it on CentOS 7. Like most open source projects, Docker is built from a fast-developing codebase, so make a habit of visiting the project's blog page for the latest information.
Also check out the other Docker tutorials in the DO Community.

Sunday, March 12, 2017

Bad luck with WD Red NAS 3TB drives

3/12/2017

I have had 4 of 8 drives go bad in the last year and a half. The oldest drive is 2 years old. The model number for all the drives is WD30EFRX-68EUZN0. I am waiting on an RMA to come back, and another went bad tonight. Luckily I have extra drives and am replacing it tonight with another new WD Red NAS 3TB drive I had on hand for just this purpose.

I have decided not to get any more of these drives, other than the RMA I am waiting on now and the RMA I will do after I replace the current bad drive. I have ordered a single HGST Deskstar NAS 3TB drive to look at. If it fits my needs I will replace all the drives in my NAS with these.

I want to look at the heat and noise, and see if I can turn head parking off on the HGST drives. I will also measure performance, but I am sure the 7200 RPM HGST will be faster than the 5400 RPM WD Red.

Just an FYI to everyone. Be careful with these WD Red NAS 3TB drives.

UPDATE 3/13/2017

So I got the drive out of the NAS. The NAS is home-built on CentOS 7.3, kernel 4.10, btrfs-progs 4.9. It took a bit to get the drive removed from the array (wait for data to be moved across the other 7 drives) and then add the replacement back in (wait for data to be balanced back across all 8 drives again).

I then took the "bad drive" and put it in my Windows workstation. I ran the WD Data Lifeguard extended test and then the extended test from Aomei Partition Assistant Pro. Both of these tests show I have no bad sectors. The drive seems to be good as new. To make a long story short, I think I have a bad fan-out cable from my SAS/SATA controller to the drives. I have two ports on the controller, each controlling 4 SATA drives. I reseated the HBA card and cables, and the power connectors on the drives, and ran some tests and scrubs. All seemed good with the new drive. I then put the side panels back on the case and placed it back in its home. And within an hour I started getting more errors on the same device ID, only now with the brand new drive on that ID. So I pulled the server back out, removed the side panels, and rebooted. Over 24 hours later I don't have another error. It may be that the side panels on the case are moving the cables enough for the bad cable to cause the errors. So I have two new fan-out cables coming and should have them installed tomorrow night. I still have the new HGST drive coming because I really want to check this out. If they are not too noisy and don't run too hot, I may just replace them all anyway. I'd better see some latency and IO improvements before that happens. At $150.00 per drive x8... Well, it's not cheap to me.

Friday, March 3, 2017

BTRFS Drive errors from 2017-02-26

BTRFS Drive errors from 2017-02-26

I was not getting failed scrubs, but I was seeing some errors.

This is 8x 3TB WD Red drives in raid10 on a Supermicro AOC-SAS2LP-MV8 Add-on Card, 8-Channel SAS/SATA Adapter with 600MB/s per Channel in a PCIE x16 slot running at x8 on a Supermicro ATX DDR4 LGA 1151 C7Z170-OCE-O Motherboard with 64GB DDR4 RAM (4x 16GB sticks).

FYI Everything was done online unless otherwise stated.

What file system looks like in btrfs.

[root@nas ~]# btrfs fi show
Label: 'myraid'  uuid: 1ec4f641-74a8-466e-89cc-e687672aaaea
        Total devices 8 FS bytes used 1.16TiB
        devid    1 size 2.73TiB used 301.53GiB path /dev/sdb
        devid    2 size 2.73TiB used 301.53GiB path /dev/sdc
        devid    3 size 2.73TiB used 301.53GiB path /dev/sdd
        devid    4 size 2.73TiB used 301.53GiB path /dev/sde
        devid    6 size 2.73TiB used 301.53GiB path /dev/sdg
        devid    7 size 2.73TiB used 301.53GiB path /dev/sdh
        devid    8 size 2.73TiB used 301.53GiB path /dev/sdi
        devid    9 size 2.73TiB used 301.53GiB path /dev/sdf


Take a look at device stats.

[root@nas ~]# /usr/local/bin/btrfs device stats /myraid/
[/dev/sdb].write_io_errs   0
[/dev/sdb].read_io_errs    0
[/dev/sdb].flush_io_errs   0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs   0
[/dev/sdc].read_io_errs    0
[/dev/sdc].flush_io_errs   0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sdd].write_io_errs   0
[/dev/sdd].read_io_errs    0
[/dev/sdd].flush_io_errs   0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0
[/dev/sde].write_io_errs   0
[/dev/sde].read_io_errs    44
[/dev/sde].flush_io_errs   0
[/dev/sde].corruption_errs 0
[/dev/sde].generation_errs 0
[/dev/sdg].write_io_errs   0
[/dev/sdg].read_io_errs    0
[/dev/sdg].flush_io_errs   0
[/dev/sdg].corruption_errs 0
[/dev/sdg].generation_errs 0
[/dev/sdh].write_io_errs   0
[/dev/sdh].read_io_errs    0
[/dev/sdh].flush_io_errs   0
[/dev/sdh].corruption_errs 0
[/dev/sdh].generation_errs 0
[/dev/sdi].write_io_errs   0
[/dev/sdi].read_io_errs    0
[/dev/sdi].flush_io_errs   0
[/dev/sdi].corruption_errs 0
[/dev/sdi].generation_errs 0
[/dev/sdf].write_io_errs   0
[/dev/sdf].read_io_errs    0
[/dev/sdf].flush_io_errs   0
[/dev/sdf].corruption_errs 0
[/dev/sdf].generation_errs 0
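With eight devices the stats listing gets long, so a quick filter that keeps only nonzero counters makes the bad device jump out. This is a sketch run against a small sample mirroring the listing above; on the live pool you would pipe `btrfs device stats /myraid` into the same grep instead.

```shell
# Sample lines standing in for the live output of
# `btrfs device stats /myraid` (taken from the listing above).
stats='[/dev/sdd].read_io_errs    0
[/dev/sde].read_io_errs    44
[/dev/sde].corruption_errs 0'
# Keep only counters whose value is not 0.
nonzero=$(printf '%s\n' "$stats" | grep -vE ' 0$')
echo "$nonzero"
```

On this sample, only the /dev/sde read_io_errs line survives the filter.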


Run extended smartcl test.

[root@nas ~]# smartctl -t long /dev/sdb
[root@nas ~]# smartctl -t long /dev/sdc
[root@nas ~]# smartctl -t long /dev/sdd
[root@nas ~]# smartctl -t long /dev/sde
[root@nas ~]# smartctl -t long /dev/sdf
[root@nas ~]# smartctl -t long /dev/sdg
[root@nas ~]# smartctl -t long /dev/sdh
[root@nas ~]# smartctl -t long /dev/sdi


I waited an hour then reviewed the results.

[root@nas ~]# smartctl -l selftest /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     12660         -
# 2  Extended offline    Completed without error       00%      8916         -
# 3  Short offline       Completed without error       00%      6097         -
# 4  Extended offline    Completed without error       00%      4288         -
# 5  Short offline       Completed without error       00%      4245         -
# 6  Short offline       Completed without error       00%      4242         -
# 7  Short offline       Interrupted (host reset)      50%      4241         -
# 8  Short offline       Completed without error       00%      4172         -
# 9  Short offline       Completed without error       00%      4109         -

[root@nas ~]# smartctl -l selftest /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     12660         -
# 2  Extended offline    Completed without error       00%      8916         -
# 3  Short offline       Completed without error       00%      6096         -
# 4  Extended offline    Completed without error       00%      4288         -
# 5  Short offline       Completed without error       00%      4109         -

[root@nas ~]# smartctl -l selftest /dev/sdd
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     13003         -
# 2  Extended offline    Completed without error       00%      9260         -
# 3  Short offline       Completed without error       00%      6440         -
# 4  Extended offline    Completed without error       00%      4632         -
# 5  Short offline       Completed without error       00%      4452         -
# 6  Short offline       Completed without error       00%         0         -

[root@nas ~]# smartctl -l selftest /dev/sde
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%      8452         339784376
# 2  Extended offline    Completed without error       00%      4716         -
# 3  Short offline       Completed without error       00%      1896         -
# 4  Extended offline    Completed without error       00%        88         -
# 5  Short offline       Completed without error       00%        15         -
# 6  Short offline       Aborted by host               90%        12         -
# 7  Short offline       Aborted by host               90%        12         -
# 8  Short offline       Aborted by host               90%        12         -
# 9  Short offline       Aborted by host               90%         5         -
#10  Short offline       Aborted by host               90%         5         -
#11  Short offline       Aborted by host               90%         5         -
#12  Short offline       Aborted by host               90%         4         -
#13  Short offline       Aborted by host               90%         4         -
#14  Short offline       Aborted by host               90%         4         -
#15  Short offline       Aborted by host               90%         0         -
#16  Short offline       Aborted by host               90%         0         -

[root@nas ~]# smartctl -l selftest /dev/sdf
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      3118         -
# 2  Short offline       Completed without error       00%        12         -

[root@nas ~]# smartctl -l selftest /dev/sdg
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      6298         -
# 2  Extended offline    Completed without error       00%      2555         -

[root@nas ~]# smartctl -l selftest /dev/sdh
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      6147         -
# 2  Extended offline    Completed without error       00%      2404         -
# 3  Short offline       Completed without error       00%         0         -

[root@nas ~]# smartctl -l selftest /dev/sdi
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.7.0-1.el7.elrepo.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      6147         -
# 2  Extended offline    Completed without error       00%      2404         -
# 3  Short offline       Completed without error       00%         0         -



[root@nas ~]# smartctl -a /dev/sdb | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sdc | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sdd | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sde | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       1258
[root@nas ~]# smartctl -a /dev/sdf | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sdg | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sdh | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
[root@nas ~]# smartctl -a /dev/sdi | grep "Raw_Read_Error_Rate"
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0

 

Yup. /dev/sde seems to have an issue.



Get serial number of all drives for a possible RMA on /dev/sde.

[root@nas ~]# smartctl -a /dev/sdb | grep "Serial Number:"
Serial Number:    WD-WMC4N0J0YT1V
[root@nas ~]# smartctl -a /dev/sdc | grep "Serial Number:"
Serial Number:    WD-WMC4N0J2L138
[root@nas ~]# smartctl -a /dev/sdd | grep "Serial Number:"
Serial Number:    WD-WCC4N2FJRTU9
[root@nas ~]# smartctl -a /dev/sde | grep "Serial Number:"
Serial Number:    WD-WCC4N4SSDRFN
[root@nas ~]# smartctl -a /dev/sdf | grep "Serial Number:"
Serial Number:    WD-WCC4N1VYZH52
[root@nas ~]# smartctl -a /dev/sdg | grep "Serial Number:"
Serial Number:    WD-WMC4N0M57KEY
[root@nas ~]# smartctl -a /dev/sdh | grep "Serial Number:"
Serial Number:    WD-WCC4N5YF2Z2Y
[root@nas ~]# smartctl -a /dev/sdi | grep "Serial Number:"
Serial Number:    WD-WCC4N5CJ6H8U




Get a list of bad blocks on all drives. Run in the background and save each result to a file.

badblocks -v /dev/sdb > /tmp/bad-blocks-b.txt &
badblocks -v /dev/sdc > /tmp/bad-blocks-c.txt &
badblocks -v /dev/sdd > /tmp/bad-blocks-d.txt &
badblocks -v /dev/sde > /tmp/bad-blocks-e.txt &
badblocks -v /dev/sdf > /tmp/bad-blocks-f.txt &
badblocks -v /dev/sdg > /tmp/bad-blocks-g.txt &
badblocks -v /dev/sdh > /tmp/bad-blocks-h.txt &
badblocks -v /dev/sdi > /tmp/bad-blocks-i.txt &

Monitor file size with:

[root@nas ~]# watch ls -lsa /tmp/bad-blocks-*.txt

If you have a really bad drive, it could create a file the size of the drive itself, so monitor it and make sure you do not fill up your /tmp directory.

If you need to kill it, get the PID with:

[root@nas tmp]# ps -ef | grep "badblocks"
UID        PID  PPID  C STIME TTY      TIME     CMD
root     27013 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdb
root     27014 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdc
root     27015 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdd
root     27016 25404  2 10:43 pts/0    00:01:11 badblocks -v /dev/sde
root     27017 25404  3 10:43 pts/0    00:01:13 badblocks -v /dev/sdf
root     27018 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdg
root     27019 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdh
root     27020 25404  3 10:43 pts/0    00:01:12 badblocks -v /dev/sdi
root     31044 26976  0 11:22 pts/1    00:00:00 grep --color=auto badblocks



While the badblocks test is running, I have already gotten an RMA number from WD and a shipping label on my printer. I ordered a new drive from Amazon that will be here on the 28th. I will swap it out then and ship the bad drive back on the 29th.

Run the smartctl long test on all drives

smartctl -t long /dev/sdb
smartctl -t long /dev/sdc
smartctl -t long /dev/sdd
smartctl -t long /dev/sde
smartctl -t long /dev/sdf
smartctl -t long /dev/sdg
smartctl -t long /dev/sdh
smartctl -t long /dev/sdi

Check progress of test

smartctl -a /dev/sdb | grep "Self-test execution status"
smartctl -a /dev/sdb | grep "of test remaining."

smartctl -a /dev/sdc | grep "Self-test execution status"
smartctl -a /dev/sdc | grep "of test remaining."

smartctl -a /dev/sdd | grep "Self-test execution status"
smartctl -a /dev/sdd | grep "of test remaining."

smartctl -a /dev/sde | grep "Self-test execution status"
smartctl -a /dev/sde | grep "of test remaining."

smartctl -a /dev/sdf | grep "Self-test execution status"
smartctl -a /dev/sdf | grep "of test remaining."

smartctl -a /dev/sdg | grep "Self-test execution status"
smartctl -a /dev/sdg | grep "of test remaining."

smartctl -a /dev/sdh | grep "Self-test execution status"
smartctl -a /dev/sdh | grep "of test remaining."

smartctl -a /dev/sdi | grep "Self-test execution status"
smartctl -a /dev/sdi | grep "of test remaining."
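Rather than repeating the check for each device by hand, the same commands can be generated with a loop over the drive letters. Shown as a dry run that only prints each command; drop the echo wrapper to run them for real.

```shell
# Build the list of progress-check commands for the eight data
# drives (sdb..sdi). Dry run: the commands are printed, not executed,
# so this is safe to try anywhere.
out=$(for d in b c d e f g h i; do
  echo "smartctl -a /dev/sd$d | grep 'Self-test execution status'"
done)
echo "$out"
```

The same loop pattern works for kicking off the tests (`smartctl -t long`) or any other per-drive command.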



The new drive is in and I am backing up the NAS. My btrfs pool at /myraid is getting backed up to a PC with raid1.
I am also duplicating the more important files to an SSD. Better safe than sorry.

New drive has had a full surface test (9 hours) and passed with flying colors.

I am not turning this into a drive remove/replace since I want to change the partitions on my boot SSD, so I am just
going to nuke the entire system and rebuild from scratch.

My / (root) partition is getting a backup via tar so I can have access to my old crontab files and maintenance scripts.
I can just use the old samba.conf as well, etc.



tar -zcvpf /myraid/nas.backup.tar.gz --exclude=/myraid --exclude=/usr --exclude=/proc --exclude=/lib --exclude=/lib64 --exclude=/dev /

Now copy the tar.gz file to a few drives off the server as well.

Make a CentOS 7.3 minimal USB installer and do the install :)

Install done. I see all 9 drives and 4 network connections. I set up all the NICs during the install and they all seem to be OK.
Possibly some tweaking on these later.

I installed the OS on my SSD with 1GB /boot and /boot/efi (I am using EFI). The rest to /

My other 8 drives are on my Supermicro AOC-SAS2LP-MV8 JBOD HBA. I will not touch those until I get ready to set up btrfs on them.

So now some base stuff


cp /etc/sysconfig/selinux /etc/sysconfig/selinux.bak
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux

cp /etc/selinux/config /etc/selinux/config.bak
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld
systemctl stop firewalld

service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off

yum -y install bind-utils traceroute net-tools ntp* gcc glibc glibc-common gd gd-devel make net-snmp openssl-devel xinetd unzip libtool* make patch perl bison flex-devel gcc-c++ ncurses-devel flex libtermcap-devel autoconf* automake* autoconf libxml2-devel cmake sqlite* wget ntp* lm_sensors ncurses-devel qt-devel hmaccalc zlib-devel binutils-devel elfutils-libelf-devel wget bc gzip uuid* libuuid-devel jansson* libxml2* sqlite* openssl* lsof NetworkManager-tui mlocate yum-utils kernel-devel nfs-utils tcpdump git vim gdisk parted

yum -y groupinstall "Development Tools"
yum -y update
yum -y upgrade

cd /root
echo ':color desert' > .vimrc

systemctl disable kdump.service

reboot

# cat /etc/default/grub
GRUB_TIMEOUT=60
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl_bcache/root ipv6.disable=1 zswap.enable=1 consoleblank=0"
GRUB_DISABLE_RECOVERY="true"

Make the changes reflected above to
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl_bcache/root ipv6.disable=1 zswap.enable=1 consoleblank=0"
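The same edit can be done non-interactively with sed. The sketch below works on a throwaway sample file standing in for /etc/default/grub (the stock cmdline shown is an assumption), so it is safe to try; point it at the real file, after backing it up, to apply the change for real.

```shell
# Build a sample stand-in for /etc/default/grub with a stock-looking
# cmdline (assumption: your real file will differ).
demo=/tmp/grub.demo
printf 'GRUB_TIMEOUT=60\nGRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$demo"
# Replace the whole GRUB_CMDLINE_LINUX line with the one used above.
sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl_bcache/root ipv6.disable=1 zswap.enable=1 consoleblank=0"|' "$demo"
grep '^GRUB_CMDLINE_LINUX' "$demo"
```

Using `|` as the sed delimiter avoids having to escape the `/` in the LVM path.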

# grub2-mkconfig -o /boot/grub2/grub.cfg
or if using UEFI
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

reboot


Now update the kernel

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install yum-plugin-fastestmirror

yum --enablerepo=elrepo-kernel install kernel-ml

reboot

Manually select the new kernel from the GRUB boot screen.

uname -r
4.10.1-1.el7.elrepo.x86_64

Do any testing you need, and when happy, set this as the default entry.

grub2-set-default 0

reboot

uname -r
4.10.1-1.el7.elrepo.x86_64


Now it's time to physically replace the drive, then set up btrfs.

poweroff

New drive in and server rebooted

parted -l

It shows all drives. The 7 existing WD Red NAS drives still show btrfs partitions. The new drive has nothing.

idle3ctl shows the new drive has head parking on. Let's turn that off.

# ./idle3ctl /dev/sde
Idle3 timer set to 138 (0x8a)

# ./idle3ctl -d /dev/sde
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!

So let's power off, let it sit for a minute, power back on, and recheck.

poweroff

Looks good on all 8 WD Red drives.

[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdb
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdc
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdd
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sde
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdf
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdg
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdh
Idle3 timer is disabled
[root@nas idle3-tools-0.9.1]# ./idle3ctl /dev/sdi
Idle3 timer is disabled


Now let's clean the disks for a new array

I used parted to rm the partitions, then a w and q.

Then

wipefs -a /dev/sdb
wipefs -a /dev/sdc
wipefs -a /dev/sdd
wipefs -a /dev/sde
wipefs -a /dev/sdf
wipefs -a /dev/sdg
wipefs -a /dev/sdh
wipefs -a /dev/sdi

Then

dd if=/dev/zero of=/dev/sdb bs=1024 count=1024
dd if=/dev/zero of=/dev/sdc bs=1024 count=1024
dd if=/dev/zero of=/dev/sdd bs=1024 count=1024
dd if=/dev/zero of=/dev/sde bs=1024 count=1024
dd if=/dev/zero of=/dev/sdf bs=1024 count=1024
dd if=/dev/zero of=/dev/sdg bs=1024 count=1024
dd if=/dev/zero of=/dev/sdh bs=1024 count=1024
dd if=/dev/zero of=/dev/sdi bs=1024 count=1024
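The wipefs and dd passes above can be combined into one loop over the drive letters. The sketch below is a dry run that only prints the commands; remove the echo-based indirection only when you are certain of the device names, since these commands destroy data.

```shell
# DESTRUCTIVE when run for real: wipes filesystem signatures and the
# first 1 MiB of every listed drive. Dry run: commands are printed,
# not executed.
out=$(for d in b c d e f g h i; do
  echo "wipefs -a /dev/sd$d"
  echo "dd if=/dev/zero of=/dev/sd$d bs=1024 count=1024"
done)
echo "$out"
```

Piping the printed commands through `sh` (or replacing the echoes with the bare commands) executes them; double-check device letters first, since they can shift between boots.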

Then to just look at the devs

ls -lsa /dev/sd*
0 brw-rw---- 1 root disk 8,   0 Mar  1 15:02 /dev/sda
0 brw-rw---- 1 root disk 8,   1 Mar  1 15:02 /dev/sda1
0 brw-rw---- 1 root disk 8,   2 Mar  1 15:02 /dev/sda2
0 brw-rw---- 1 root disk 8,   3 Mar  1 15:02 /dev/sda3
0 brw-rw---- 1 root disk 8,  16 Mar  1 15:11 /dev/sdb
0 brw-rw---- 1 root disk 8,  32 Mar  1 15:11 /dev/sdc
0 brw-rw---- 1 root disk 8,  48 Mar  1 15:11 /dev/sdd
0 brw-rw---- 1 root disk 8,  64 Mar  1 15:11 /dev/sde
0 brw-rw---- 1 root disk 8,  80 Mar  1 15:11 /dev/sdf
0 brw-rw---- 1 root disk 8,  96 Mar  1 15:11 /dev/sdg
0 brw-rw---- 1 root disk 8, 112 Mar  1 15:11 /dev/sdh
0 brw-rw---- 1 root disk 8, 128 Mar  1 15:11 /dev/sdi



fdisk -l also shows they look ready


[root@nas idle3-tools-0.9.1]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 128.0 GB, 128035676160 bytes, 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048      2099199      1G  EFI System      EFI System Partition
 2      2099200      4196351      1G  Microsoft basic
 3      4196352    250068991  117.2G  Linux LVM      

Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdh: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdi: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/cl_nas-root: 125.9 GB, 125883645952 bytes, 245866496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes



Check the installed btrfs version first

# btrfs --version
btrfs-progs v4.4.1

I think there is a newer one out. Let's go see.

git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git

cd btrfs-progs

yum -y install libuuid-devel libattr-devel zlib-devel libacl-devel e2fsprogs-devel libblkid-devel lzo* asciidoc xmlto

./autogen.sh
./configure
make

Let's check the version from within the build folder

[root@nas btrfs-progs]# ./btrfs --version
btrfs-progs v4.9.1

Yup, it's newer.

Now check from /

[root@nas btrfs-progs]# cd /
[root@nas /]# btrfs --version
btrfs-progs v4.4.1
[root@nas /]#

So we have two versions.

I copied all executable (+x) files from /root/btrfs-progs to /usr/sbin, overwriting any files that already exist.
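A sketch of that copy, demonstrated here on scratch directories so nothing gets clobbered (the real run used /root/btrfs-progs as the source and /usr/sbin as the destination):

```shell
# Demo on temp dirs; swap in /root/btrfs-progs and /usr/sbin for the real copy.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/btrfs" "$src/README.md"   # stand-ins for the build output
chmod +x "$src/btrfs"                 # only this one gets the exec bit
# copy only regular files with the user-executable bit set, forcing overwrite
find "$src" -maxdepth 1 -type f -perm -u+x -exec cp -f {} "$dst/" \;
ls "$dst"
```

Only the executable lands in the destination; plain files like the README are skipped.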

Now from / I get

[root@nas /]# btrfs --version
btrfs-progs v4.9.1

I hope that's good :)

So let's build an array!!!

First I will use raid0 for some quick testing.

mkfs.btrfs -f -m raid0 -d raid0 -L myraid /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

And here is the equivalent command for raid10

mkfs.btrfs -f -m raid10 -d raid10 -L myraid /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

[root@nas ~]# mkfs.btrfs -f -m raid0 -d raid0 -L myraid /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
btrfs-progs v4.9.1
See http://btrfs.wiki.kernel.org for more information.

Label:              myraid
UUID:               5a5610aa-2615-4ee2-bd4a-076ab2931b70
Node size:          16384
Sector size:        4096
Filesystem size:    21.83TiB
Block group profiles:
  Data:             RAID0             8.00GiB
  Metadata:         RAID0             4.00GiB
  System:           RAID0            16.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  8
Devices:
   ID        SIZE  PATH
    1     2.73TiB  /dev/sdb
    2     2.73TiB  /dev/sdc
    3     2.73TiB  /dev/sdd
    4     2.73TiB  /dev/sde
    5     2.73TiB  /dev/sdf
    6     2.73TiB  /dev/sdg
    7     2.73TiB  /dev/sdh
    8     2.73TiB  /dev/sdi
   
[root@nas ~]# btrfs fi show
Label: 'myraid'  uuid: 5a5610aa-2615-4ee2-bd4a-076ab2931b70
        Total devices 8 FS bytes used 112.00KiB
        devid    1 size 2.73TiB used 1.50GiB path /dev/sdb
        devid    2 size 2.73TiB used 1.50GiB path /dev/sdc
        devid    3 size 2.73TiB used 1.50GiB path /dev/sdd
        devid    4 size 2.73TiB used 1.50GiB path /dev/sde
        devid    5 size 2.73TiB used 1.50GiB path /dev/sdf
        devid    6 size 2.73TiB used 1.50GiB path /dev/sdg
        devid    7 size 2.73TiB used 1.50GiB path /dev/sdh
        devid    8 size 2.73TiB used 1.50GiB path /dev/sdi

Let's mount this thing

mkdir /myraid

Mount it using the UUID from above (5a5610aa-2615-4ee2-bd4a-076ab2931b70):

mount -t btrfs -o defaults,nodatacow,noatime,x-systemd.device-timeout=30 -U 5a5610aa-2615-4ee2-bd4a-076ab2931b70 /myraid

[root@nas ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                  32G     0   32G   0% /dev
tmpfs                     32G     0   32G   0% /dev/shm
tmpfs                     32G  8.9M   32G   1% /run
tmpfs                     32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_nas-root  118G  2.2G  116G   2% /
/dev/sda2               1014M  191M  824M  19% /boot
/dev/sda1               1022M  9.5M 1013M   1% /boot/efi
tmpfs                    6.3G     0  6.3G   0% /run/user/0
/dev/sdb                  22T   20M   22T   1% /myraid

Oh ya!!! 22TB of btrfs array

The fstab line I will put in later is:

UUID=5a5610aa-2615-4ee2-bd4a-076ab2931b70   /myraid   btrfs  defaults,nodatacow,noatime,x-systemd.device-timeout=30  0 0
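One way to test that fstab line before trusting it across a reboot (a hedged sketch; needs root, and /myraid is the mount point from this setup):

```shell
# mount -a mounts everything in fstab not already mounted;
# findmnt then confirms /myraid actually came up from the fstab entry.
verify_fstab_mount() {
    mount -a && findmnt /myraid
}
# usage (as root):
# verify_fstab_mount
```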


This is how to clear the cache when testing transfer speeds, so you are measuring the disks and not RAM.
Do this between each transfer:


sync; echo 2 > /proc/sys/vm/drop_caches
sync; echo 3 > /proc/sys/vm/drop_caches
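The same thing wrapped as a small helper (needs root). Writing 3 to drop_caches frees the pagecache plus dentries and inodes; the sync first flushes dirty pages so they can actually be dropped:

```shell
# Drop the kernel page cache between benchmark runs (run as root).
drop_caches() {
    sync                                # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches   # free pagecache + dentries/inodes
}
# usage between runs:
# drop_caches; time cp /myraid/bigfile /dev/null
```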

---tune 10Gb CNA if needed

CentOS 7 uses systemd, so the systemctl lines are the ones that matter here; the service/chkconfig forms are only for CentOS 6 and earlier:

service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off
systemctl disable irqbalance
systemctl disable cpuspeed
systemctl stop irqbalance
systemctl stop cpuspeed

vi /etc/sysconfig/network-scripts/ifcfg-eth???
MTU="9000"
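A hedged sketch for verifying that jumbo frames actually pass end to end after setting MTU=9000 on both sides. 8972 = 9000 minus the 20-byte IP header and 8-byte ICMP header; -M do forbids fragmentation so an oversize frame fails loudly instead of being silently split:

```shell
# Probe whether a 9000-byte MTU path works to a peer.
probe_mtu() {
    ping -M do -s 8972 -c 3 "$1"   # -M do = don't fragment
}
# usage (IP is the test server from this writeup):
# probe_mtu 192.168.90.100
```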

vi /etc/sysctl.conf
# -- tuning -- #
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow for more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10gbe tuning from Intel ixgb driver README -- #

# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000

Reboot and test the speed.

On the Linux client, pointing at the server with IP 192.168.90.100:

# iperf3 -c 192.168.90.100 -p 5201

On the Linux server with IP 192.168.90.100:

iperf3 -s -p 5201 -B 192.168.90.100

---end tune 10Gb CNA if needed


---setup NFS for ESXi server

vi /etc/exports
/myraid/     192.168.10.0/24(rw,async,no_root_squash,no_subtree_check)
/myraid/     192.168.90.0/24(rw,async,no_root_squash,no_subtree_check)

systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
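If you change /etc/exports later, there is no need for a full restart; a hedged sketch of re-exporting in place and checking what clients will see:

```shell
# Re-read /etc/exports and show the result (run as root on the NFS server).
reload_exports() {
    exportfs -ra            # re-export everything from /etc/exports
    exportfs -v             # list active exports with their options
    showmount -e localhost  # the view an NFS client (e.g. ESXi) gets
}
# usage:
# reload_exports
```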

---end setup NFS for ESXi server




---install samba if needed

yum -y install samba

useradd samba -s /sbin/nologin

smbpasswd -a samba
            Supply a password
            Retype the password
   
mkdir /myraid

chown -R samba:root /myraid/

vi /etc/samba/smb.conf

[global]
; use the name of your workgroup here
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = NAS

Add this to the bottom of the /etc/samba/smb.conf file

[NAS]
comment = NAS
path = /myraid
writable = yes
valid users = samba


systemctl start smb
systemctl enable smb
systemctl start nmb
systemctl enable nmb

testparm
   
---end install samba if needed



---install plex if needed


Visit the Plex site, download the RPM for your version of the OS, and copy it to /root.

yum -y localinstall name.rpm

systemctl enable plexmediaserver
systemctl start plexmediaserver

---end install plex if needed



---install LAMP

yum -y install httpd mariadb-server mariadb php php-mysql
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service

Make sure it works with:
http://your_server_IP_address/

systemctl enable mariadb
systemctl start mariadb
systemctl status mariadb
mysql_secure_installation

vi /var/www/html/info.php
<?php phpinfo(); ?>

http://your_server_IP_address/info.php
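The same check can be done from the CLI instead of a browser; a hedged sketch, with the server IP as a placeholder for your own:

```shell
# Fetch info.php and look for the PHP banner; -f makes curl fail on HTTP errors.
check_php() {
    curl -fsS "http://$1/info.php" | grep -q "PHP Version"
}
# usage:
# check_php 192.168.10.160 && echo "PHP is working"
```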


---End install LAMP


---Extra goodies

yum -y install epel-release
yum -y install stress htop iftop iotop hddtemp smartmontools iperf3 sysstat mlocate
yum -y update


updatedb    # this updates the mlocate database


---End Extra goodies




---Use gmail as relay for sending mail

Replace glen@gmail.com with a real email address in items below
Replace mycentserver.mydomain.domain with real hostname in items below
Replace gmail_password with real password in items below

# yum remove postfix

Now install ssmtp.

# yum -y install ssmtp mailx

Now edit your /etc/ssmtp/ssmtp.conf. I removed everything and just added the following to the file.

#  vi /etc/ssmtp/ssmtp.conf

root=glen@gmail.com
mailhub=smtp.gmail.com:587
rewriteDomain=gmail.com
hostname=mycentserver.mydomain.domain
UseTLS=Yes
UseSTARTTLS=Yes
AuthUser=glen@gmail.com
AuthPass=gmail_password
FromLineOverride=YES

# This fixes "ssmtp: Cannot open smtp.gmail.com:587" when trying to send an email.
# If you enable debugging by uncommenting the DEBUG=Yes line and /var/log/maillog
# shows "SSL not working: certificate verify failed (20)", add the following line,
# but first VERIFY THE FILE EXISTS
TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt

# DEBUG=Yes

Now edit your /etc/ssmtp/revaliases file and add the following.

# vi /etc/ssmtp/revaliases

root:glen@gmail.com:smtp.gmail.com:587

Now run

# alternatives --config mta

And choose the number for sendmail.ssmtp, like below

There is 1 program that provides 'mta'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/sendmail.ssmtp

Enter to keep the current selection[+], or type selection number: 1
#

Now send an email to your gmail account from the CentOS CLI

# mail -s "Test Subject" glen@gmail.com

Type your message text and, on a new line, press Ctrl-D to send.
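A non-interactive version of the same test, handy for scripts: pipe the body into mail instead of typing it and pressing Ctrl-D (the address is the placeholder from this walkthrough):

```shell
# Send a one-shot test mail through the ssmtp relay configured above.
send_test_mail() {
    echo "Test body from $(hostname)" | mail -s "Test Subject" "$1"
}
# usage:
# send_test_mail glen@gmail.com
```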

---End Use gmail as relay for sending mail