
Saturday, May 30, 2015

Centos 7 Plex media server install


I am on CentOS 7 with kernel 3.19.8 and the Desktop install. I am connected to the system via PuTTY/SSH as root.

I am at home behind a firewall and do not use iptables, firewalld, or SELinux on my CentOS 7 system, so I have turned them off.

Turn off firewall and iptables if they are on.

# systemctl disable firewalld.service
# systemctl stop firewalld.service
# systemctl disable iptables.service
# systemctl stop iptables.service


Disable SELinux if it is set to enforcing.

# vi /etc/sysconfig/selinux
    SELINUX=disabled


Reboot if you have changed any of these.

# reboot
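After the reboot, a quick way to confirm SELinux is really off (this should print "Disabled"):

# getenforce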

Locate the latest version of Plex media server for Centos7

Go to

https://plex.tv/downloads


on your PC, then click the download for Computer, not NAS. Choose Linux, then right-click the CentOS 64-bit button and choose Copy link location.

Now PuTTY/SSH into your CentOS 7 system as root. Change to root's home dir.

# cd /root

Type "wget", space bar, then right click mouse. This should paste the link you copied above. Should look like:

# wget https://downloads.plex.tv/plex-media-server/0.9.12.1.1079-b655370/plexmediaserver-0.9.12.1.1079-b655370.x86_64.rpm

After the RPM package downloads, run "yum -y localinstall <Plex Media Server RPM package>". Mine looks like:

# yum -y localinstall plexmediaserver-0.9.12.1.1079-b655370.x86_64.rpm

After the install, enable the service at boot and start it now.

# systemctl enable plexmediaserver.service
# systemctl start plexmediaserver.service
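Before opening a browser, you can confirm the service actually came up; a quick sanity check (Plex listens on TCP 32400):

# systemctl status plexmediaserver.service
# ss -lnt | grep 32400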


Now see if you can reach the Plex web UI from your PC (substitute your system's IP address):

http://192.168.1.100:32400/web/

There is lots of info on the Plex site about finishing the configuration. I won't cover that here; this was just the install.


Hope it helps !



Friday, May 29, 2015

Centos 7 zfs install


I have already installed asterisk on this system (see my post on that if you wish). During that install I did a kernel upgrade to 3.19.8, and that is the kernel I am running.

I am using putty/ssh and root user.

I am at home behind a firewall and do not use iptables, firewalld, or SELinux on my CentOS 7 system, so I have turned them off.

Turn off firewall and iptables if they are on.

# systemctl disable firewalld.service
# systemctl stop firewalld.service
# systemctl disable iptables.service
# systemctl stop iptables.service


Disable SELinux if it is set to enforcing.

# vi /etc/sysconfig/selinux
    SELINUX=disabled


Reboot if you have changed any of these.

# reboot

Now get the packages and install them.

# cd /root
# yum -y localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum -y localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
# yum -y install kernel-devel zfs


You can now load the ZFS module:

# modprobe zfs

Note that modprobe itself prints nothing; verify the ZFS modules actually loaded with lsmod:

# lsmod | grep -i zfs
zfs                  2179437  3
zcommon                47120  1 zfs
znvpair                80252  2 zfs,zcommon
spl                    89796  3 zfs,zcommon,znvpair
zavl                    6784  1 zfs
zunicode              323046  1 zfs


Now make sure the module is loaded on every boot. Create a new file and add a small script to it:

# vi /etc/sysconfig/modules/zfs.modules

Add the following code:

#!/bin/sh

if [ ! -c /dev/zfs ] ; then
        exec /sbin/modprobe zfs >/dev/null 2>&1
fi


Make this file executable:

# chmod +x /etc/sysconfig/modules/zfs.modules
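As an aside, since CentOS 7 is systemd-based, a modules-load.d entry is an equivalent (arguably more native) way to load the module at boot; either method works:

# echo "zfs" > /etc/modules-load.d/zfs.conf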

Now reboot and make sure everything loaded

# reboot

After reboot run lsmod again and make sure modules are loaded

# lsmod | grep -i zfs
zfs                  2179437  3
zcommon                47120  1 zfs
znvpair                80252  2 zfs,zcommon
spl                    89796  3 zfs,zcommon,znvpair
zavl                    6784  1 zfs
zunicode              323046  1 zfs



Create the pool. I have four WD RED 3TB drives at /dev/sdb through /dev/sde. I will create a RAID10-style pool (striped mirrors) with them now.

First I make sure my drives are on the latest firmware. I use the WD tool for that:

# ./wd5741x64
WD5741 Version 1
Update Drive
Copyright (C) 2013 Western Digital Corporation
-Dn   Model String           Serial Number     Firmware
-D0   Samsung SSD 850 PRO 128GB   S1SMNSAG301480T   EXM02B6Q
-D1   WDC WD30EFRX-68EUZN0   WD-WMC4N0J0YT1V   82.00A82
-D2   WDC WD30EFRX-68EUZN0   WD-WMC4N0J2L138   82.00A82
-D3   WDC WD30EFRX-68EUZN0   WD-WCC4N2FJRTU9   82.00A82
-D4   WDC WD30EFRX-68EUZN0   WD-WCC4N7SP4HHF   82.00A82


As you can see, I also have a Samsung SSD 850 PRO 128GB that I use as my /boot and OS drive.

Then I turn off head parking (the Idle3 timer) on my WD RED drives with the idle3ctl tool:

# ./idle3ctl -d /dev/sdb
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!


I do this on all four drives then reboot.

# reboot

I then make sure the setting stuck.

# ./idle3ctl -g /dev/sdc
Idle3 timer is disabled


I check all four drives for the "disabled" value above.

Now I zero out the MBR to remove any legacy info that may have been on them.

# dd if=/dev/zero of=/dev/sdb bs=1M count=1

Repeat for all four drives.
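If you would rather not type that four times, a small loop covers all four drives (double-check the device names first; this dd is destructive):

# for d in sdb sdc sdd sde; do dd if=/dev/zero of=/dev/$d bs=1M count=1; done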

I will now create a raid10 pool, called myraid, for use.

# zpool create -f myraid mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

Make sure it was created

# zpool status
  pool: myraid
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myraid      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0


Check it is mounted

# mount | grep zfs
myraid on /myraid type zfs (rw,xattr)


# df -h | grep myraid
myraid                5.3T  284G  5.0T   6% /myraid


If you don't see it mounted, try

# zfs mount myraid

If you want /myraid mounted automatically at boot, add the mount command to rc.local (on CentOS 7, /etc/rc.d/rc.local must also be executable for this to run: chmod +x /etc/rc.d/rc.local):

# echo "zfs mount myraid" >> /etc/rc.local

To speed things up

# zfs set sync=disabled myraid

Read below before disabling though

sync=standard
  This is the default option. Synchronous file system transactions
  (fsync, O_DSYNC, O_SYNC, etc) are written out (to the intent log)
  and then secondly all devices written are flushed to ensure
  the data is stable (not cached by device controllers).

sync=always
  For the ultra-cautious, every file system transaction is
  written and flushed to stable storage by a system call return.
  This obviously has a big performance penalty.

sync=disabled
  Synchronous requests are disabled.  File system transactions
  only commit to stable storage on the next DMU transaction group
  commit which can be many seconds.  This option gives the
  highest performance.  However, it is very dangerous as ZFS
  is ignoring the synchronous transaction demands of
  applications such as databases or NFS.
  Setting sync=disabled on the currently active root or /var
  file system may result in out-of-spec behavior, application data
  loss and increased vulnerability to replay attacks.
  This option does *NOT* affect ZFS on-disk consistency.
  Administrators should only use this when these risks are understood.

 
 
You can also turn on lz4 compression on your pool, which speeds things up at the cost of some CPU. I have an i5 4590 with 16GB RAM, so I have the resources to do so.
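For example, to enable it on the pool created above and confirm it took:

# zfs set compression=lz4 myraid
# zfs get compression myraid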

Tuning

# cat /sys/module/zfs/parameters/zfs_prefetch_disable
0

# modprobe zfs zfs_prefetch_disable=1


Note that modprobe options only take effect when the module loads, so this will not change a module that is already loaded. To make the setting persistent across reboots, put it in the /etc/modprobe.d/zfs.conf file.

I have an i5-6600k with 64GB DDR4 RAM on a Supermicro C7Z170-OCE motherboard, and 4x WD RED 3TB drives in mirrored stripes on a Supermicro 8-port 600MB/s SAS/SATA HBA AOC. Don't use the parameters below unless you know your hardware :)

Below are my settings for my NAS with ZFS, used for running ESXi VMware guest images over a 10Gb network between ESXi 5.5 U2 and the NAS via NFS. I am getting 340+ MB/s read and write across the wire from a Linux client with an SSD to the NAS ZFS pool using scp. I think I have reached my drives'/zpool's performance limit; time to add more drives. I also tested a Windows 10 guest image on the zpool via 10Gb NFS and get read and write of 280 MB/s between the zpool Windows image and the NAS SSD drive. I think smb/samba adds some overhead there.

Edit zfs.conf to reflect:

# disable prefetch
options zfs zfs_prefetch_disable=1
# set arc max to 48GB. I have 64GB in my server
options zfs zfs_arc_max=51539607552
# set size to 128k same as file system block size
options zfs zfs_vdev_cache_size=1310720
options zfs zfs_vdev_cache_max=1310720
options zfs zfs_read_chunk_size=1310720
options zfs zfs_vdev_cache_bshift=17
# Set these to 1 so we get max IO at cost of bandwidth
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_async_read_min_active=1
options zfs zfs_vdev_async_write_max_active=1
options zfs zfs_vdev_async_write_min_active=1
options zfs zfs_vdev_sync_read_max_active=1
options zfs zfs_vdev_sync_read_min_active=1
options zfs zfs_vdev_sync_write_max_active=1
options zfs zfs_vdev_sync_write_min_active=1


# reboot


Sanity check

# cat /sys/module/zfs/parameters/zfs_prefetch_disable
# cat /sys/module/zfs/parameters/zfs_arc_max
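If the zfs.conf settings above took effect, those two cats should echo back the configured values:

# cat /sys/module/zfs/parameters/zfs_prefetch_disable
1
# cat /sys/module/zfs/parameters/zfs_arc_max
51539607552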


Example commands to see settings

# zfs get all
# zfs get all myraid
# zfs get checksum
# zfs get checksum myraid

*ALWAYS* use Mirror / RAID10 – never, never, ever use RAIDz !

Data compression: LZ4 (yes, on *everything*; make sure you have enough CPU though)
    # zfs set compression=lz4 myraid

Checksum: Fletcher4
    # zfs set checksum=fletcher4 myraid

Use cache for: data & metadata
    # zfs set primarycache=all myraid

Write bias: latency
    # zfs set logbias=latency myraid

Record size / block size: 128k (this is vital, people; we go against the "use record size as in workload" recommendation)
    # zfs set recordsize=128k myraid

Update access time on read: disabled
    # zfs set atime=off myraid

Do not use dedupe.
    # zfs set dedup=off myraid

Enable jumbo frames on your network.

Disable sync (see the warning above).
    # zfs set sync=disabled myraid

Have fun!

Centos 7 Asterisk 13.3.2 install with kernel 3.19.8 update


I am not going to cover the basic OS install here; you should have your OS installed and able to reach the internet. I did the Desktop install as I will be using this system for other purposes. If this is a production Asterisk-only system, you should do a minimal install.

I logged in as the root user via PuTTY/SSH.

Get your system ready.

Install some prerequisites. Note that the install_prereq script ships inside the Asterisk source tree, so this step only works once Asterisk has been downloaded and extracted (covered below); come back and run it from the extracted source directory:

# cd /usr/src/asterisk*
# contrib/scripts/install_prereq install


Turn off firewall and iptables if they are on.

# systemctl disable firewalld.service
# systemctl stop firewalld.service
# systemctl disable iptables.service
# systemctl stop iptables.service


Disable SELinux if it is set to enforcing.

# vi /etc/sysconfig/selinux
    SELINUX=disabled


Reboot if you have changed any of these.

# reboot

Install NTP and turn it on if not already.

# yum -y install ntp*
# systemctl disable chronyd.service
# systemctl enable ntpd.service
# systemctl start ntpd.service
# ntpq -p
# date


Update system and reboot.

# yum -y update
# reboot


I update the kernel because this is a new build and I like to start out on a good, up-to-date kernel. This "could be" optional; I don't know, as I have not tried this on another kernel.

Update kernel to latest 3.x (3.19.8).

Get required packages for the kernel update, and some for asterisk install.

# yum -y groupinstall "Development Tools"
# yum -y install ncurses-devel qt-devel hmaccalc zlib-devel binutils-devel elfutils-libelf-devel wget bc gzip uuid* libuuid-devel jansson* libxml2* sqlite* openssl*
# yum -y update


Now from /root directory download the kernel source.

# wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.8.tar.gz

Unzip and extract the source file.

# gzip -d ./linux-3.19.8.tar.gz
# tar -xvf ./linux-3.19.8.tar -C /usr/src/


Now let's go to our source directory and configure the new kernel. I just left everything alone and saved the file before exiting the menu.

# cd /usr/src/linux-3.19.8/
# make menuconfig


Compile kernel.

# make

Now go get coffee or beer. The compile will take a bit.

Install kernel.

# make modules_install
# make install


When finished, let's reboot and use the new kernel. It will not be the default selection in grub, so after you reboot the machine you need to select it and press Enter when you see the grub menu. You will only have a few seconds, so be ready.

You must be at the console to choose the kernel at boot manually. After we set it as the default kernel in grub, we can return to PuTTY.

# reboot

On the boot screen, select the new kernel. After boot and login, type:

# uname -r
3.19.8


Looks like I am on the new kernel.

Now set the new kernel as the default boot entry. I also leave the console and return to PuTTY here.

Check menu entries in grub.

# grep ^menuentry /boot/grub2/grub.cfg | cut -d "'" -f2
CentOS Linux 7 (Core), with Linux 3.19.8
CentOS Linux 7 (Core), with Linux 3.10.0-229.el7.x86_64
CentOS Linux 7 (Core), with Linux 3.10.0-229.4.2.el7.x86_64
CentOS Linux 7 (Core), with Linux 0-rescue-455229da2acf4d3b941fda6a689c779c


Check current default.

# grub2-editenv list
saved_entry=CentOS Linux (3.10.0-229.4.2.el7.x86_64) 7 (Core)


Set new default.

# grub2-set-default "CentOS Linux 7 (Core), with Linux 3.19.8"

Check new default.

# grub2-editenv list
saved_entry=CentOS Linux (3.19.8) 7 (Core)


Now reboot and make sure the new kernel is the default.

# reboot
# uname -r
3.19.8


Download Asterisk and its supporting packages.

# cd /root

Get libpri.


# wget http://downloads.asterisk.org/pub/telephony/libpri/libpri-1.4-current.tar.gz
# gzip -dfv libpri-1.4-current.tar.gz
# tar -xvf libpri-1.4-current.tar -C /usr/src/


Get DAHDI.

# wget http://downloads.asterisk.org/pub/telephony/dahdi-linux-complete/dahdi-linux-complete-current.tar.gz
# gzip -dfv dahdi-linux-complete-current.tar.gz
# tar -xvf dahdi-linux-complete-current.tar -C /usr/src/


Get asterisk.

# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-13-current.tar.gz
# gzip -dfv asterisk-13-current.tar.gz
# tar -xvf asterisk-13-current.tar -C /usr/src/


DAHDI install.

# cd /usr/src/dahdi-linux-complete*
# make
# make install
# make config


libpri install.

# cd /usr/src/libpri*
# make
# make install


Asterisk install

# cd /usr/src/asterisk*
# ./configure --libdir=/usr/lib64
# make menuselect
# make
# make install
# ldconfig


Optional items.

# cd /usr/src/asterisk*
# make samples
# make progdocs


See if asterisk will run.

# asterisk -vvv &
# ps -ef | grep asterisk
root     15498  2402  0 21:12 pts/0    00:00:00 asterisk -vvv


Now go to asterisk console.

# asterisk -r
    -- Remote UNIX connection
Asterisk 13.3.2, Copyright (C) 1999 - 2014, Digium, Inc. and others.
Created by Mark Spencer <markster@digium.com>
Asterisk comes with ABSOLUTELY NO WARRANTY; type 'core show warranty' for details.
This is free software, with components licensed under the GNU General Public
License version 2 and other licenses; you are welcome to redistribute it under
certain conditions. Type 'core show license' for details.
=========================================================================
Connected to Asterisk 13.3.2 currently running on centos7vm (pid = 15498)
centos7vm*CLI> help


Get asterisk to start up on boot

# cd /usr/src/asterisk*
# make config

# /sbin/chkconfig --add asterisk
# /sbin/chkconfig asterisk on

Enjoy!

Wednesday, May 27, 2015

KVM create Windows guest Centos 6.6 and 7




This is what works best for me. I have a CentOS 6.6 system with a 3.19.8 kernel that I use as my KVM host. I use this at home for testing/play.

This also works for Centos7. I have used the same procedure.

My KVM host is an i5 4590 with 16GB (2x8GB) Corsair DDR3 1600 RAM, 120GB Samsung PRO SSD for boot and OS, with 4x WD RED 3TB in zfs RAID 10.

# zpool create myraid mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

I used the name myraid for the pool.

My KVM host, as I mention above, is headless, and I have Gnome and VNC installed on it. I manage almost everything through the CLI via SSH, but working with a KVM Windblows, I mean Windows, guest is very difficult without a desktop. Of course I have virt-manager, virt-viewer, virtio-win, etc. on it.

I found this procedure works fairly well. Sometimes I see heavy CPU load from the Windows guest I occasionally run on it, but hey, it's Windows.

First I create a 100GB disk in my pool for the image. For this example I use disk1, to keep it simple.

# zfs create -V 100G myraid/disk1

Verify the new zvol is there

# ls -l /dev/zvol/myraid
total 0
lrwxrwxrwx 1 root root 9 May 26 18:54 disk1 -> ../../zd0

# ls -l /dev/zvol/myraid/disk1
lrwxrwxrwx 1 root root 9 May 26 18:54 /dev/zvol/myraid/disk1 -> ../../zd0

We now have a block device in our zfs pool we can manage like any other block device in Linux. The fdisk commands below are:

n = new partition.
p = primary, when asked for primary or extended.
1 = the partition number.
Then hit the Enter key twice to accept the default first and last cylinders.
p = print the new partition table to the screen; verify it looks good.
w = write the changes.

Now let's get a partition on it. (The transcript below was captured on a smaller 10GB test zvol named "test"; for our example the device is /dev/zvol/myraid/disk1.)

# fdisk /dev/zvol/myraid/disk1
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-20805, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-20805, default 20805):
Using default value 20805

Command (m for help): p

Disk /dev/zvol/myraid/test: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk identifier: 0x094e7df9

                Device Boot      Start         End      Blocks   Id  System
/dev/zvol/myraid/test1               1       20805    10485688+  83  Linux
Partition 1 does not start on physical sector boundary.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
#
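If you prefer a non-interactive version of the same partitioning, a sketch using parted (which ships with CentOS; point it at your zvol):

# parted -s /dev/zvol/myraid/disk1 mklabel msdos mkpart primary ext4 1MiB 100%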

Let's put a file system on it. Note that this formats the whole zvol rather than the partition we just created; if mkfs complains about an existing partition table, either format the partition device instead (it should show up with a -part1 suffix) or simply skip the fdisk step, since a zvol does not strictly need a partition table.

# mkfs.ext4 /dev/zvol/myraid/disk1

Make a directory for a mount point. I used /disk1 for simplicity here

# mkdir /disk1

Mount it so we can use it

# mount /dev/zvol/myraid/disk1 /disk1

To make this mount at boot, edit your fstab file and add a line like the following at the bottom:

# vi /etc/fstab
/dev/zvol/myraid/disk1   /disk1   ext4    defaults,_netdev    0 0

I used _netdev above because I was getting a failed-to-mount message in my boot.log, although it did get mounted by the end of boot. The system was trying to mount this before ZFS loaded and therefore could not see the zvol. The _netdev option, I believe, makes the mount wait for services to start before trying.
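You can test the new fstab entry without a full reboot (assuming /disk1 is not mounted at the moment):

# umount /disk1
# mount -a
# df -h /disk1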

I then issue the following command so I don't have any permission issues later down the road. Remember, this is a home play setup, not corporate production.

# chmod 777 /disk1

Now create your KVM guest image on /disk1 using virt-manager in the Gnome desktop, via VNC if you're headless like me. The options I found run best for me are:

Storage format: virtio and raw
Cache mode: none (not default!)
I/O mode: native

I also create a floppy and a CDROM and attach the virtio-win drivers to each: the ISO goes in the CDROM and the floppy image (.vfd) in the floppy. NOTE: you must have these mounted and available at guest boot or the guest will not have a hard drive to use, since the disk is virtio.

I also use the virtio NIC and virtio balloon (memory) drivers found on the virtio-win drivers ISO. The virtio NIC gave me a 10Gb network device and really improved my network speed.

Get virtio drivers for windows.

# wget https://fedorapeople.org/groups/virt/virtio-win/virtio-win.repo

Copy the contents of the virtio-win.repo file you just downloaded and paste them into this new file:

# vi /etc/yum.repos.d/virtio-win.repo

As of 8/24/2015, the contents of virtio-win.repo are:

# virtio-win yum repo
# Details: https://fedoraproject.org/wiki/Windows_Virtio_Drivers

[virtio-win-stable]
name=virtio-win builds roughly matching what was shipped in latest RHEL
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
enabled=1
skip_if_unavailable=1
gpgcheck=0

[virtio-win-latest]
name=Latest virtio-win builds
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/latest
enabled=0
skip_if_unavailable=1
gpgcheck=0

[virtio-win-source]
name=virtio-win source RPMs
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/srpms
enabled=0
skip_if_unavailable=1
gpgcheck=0

Now install the virtio-win drivers.

# yum -y install virtio-win

Drivers are now located at.

# ls -lsah /usr/share/virtio-win/
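If you prefer the CLI over virt-manager for attaching the driver ISO, something like this should work (the guest name "win10" is just an example; hdc is an unused IDE target):

# virsh attach-disk win10 /usr/share/virtio-win/virtio-win.iso hdc --type cdrom --mode readonly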

Have fun!

Tuesday, May 26, 2015

Centos 6.6 prevent console from going blank


I want my console to go blank when I am NOT logged in, but stay on while I am logged in.

In my home directory I edit the .bash_profile file,

# cd $HOME

# vi .bash_profile


and add the following to the end.

setterm -blank 0 -powerdown 0


To turn console blanking off entirely, whether logged in or not:

# vi /etc/default/grub

append consoleblank=0 to your GRUB_CMDLINE_LINUX= line.

Example:

GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
#GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto consoleblank=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_CMDLINE_LINUX_DEFAULT="video=1024x768"
GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX=keep


Then for BIOS machines using grub2

# grub2-mkconfig -o /boot/grub2/grub.cfg

If you have a UEFI-based machine then

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Now reboot.
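After the reboot you can verify the kernel picked up the parameter; 0 means console blanking is disabled:

# cat /sys/module/kernel/parameters/consoleblank
0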

Monday, May 25, 2015

Download website with wget


I use this to back up my blogger site.


wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--domains glenewhittenberg.blogspot.com \
http://glenewhittenberg.blogspot.com

This command downloads the web site http://glenewhittenberg.blogspot.com/.

The options are:

    --recursive: download the entire web site.
    --domains glenewhittenberg.blogspot.com: don't follow links outside this domain.
    --page-requisites: get all the elements that compose the page (images, CSS, and so on).
    --html-extension: save files with the .html extension.
    --convert-links: convert links so that they work locally, offline.
    --no-clobber: don't overwrite any existing files (useful when an interrupted download is resumed).

Two related options, not used above but handy:

    --no-parent: don't ascend to the parent directory when recursing.
    --restrict-file-names=windows: modify filenames so that they also work on Windows.
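If you want the backup to run on a schedule, a minimal sketch of a weekly cron entry (the /root/blog-backup directory is just an example path; create whatever you like):

# crontab -e
0 3 * * 0 cd /root/blog-backup && wget --recursive --no-clobber --page-requisites --html-extension --convert-links --domains glenewhittenberg.blogspot.com http://glenewhittenberg.blogspot.com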

Sunday, May 24, 2015

Centos 6.6 VNC install


I was having so many problems doing a Windows guest install on my KVM host server, which is headless, that I figured I would install Gnome and VNC on it. I have another article on the Gnome install; please see that article if you don't have a desktop on your server.

I am doing this on my KVM host server, logged in as root.

Install the VNC packages:

# yum install tigervnc-server
# yum install xorg-x11-fonts-Type1

Make sure VNC starts on boot:

# chkconfig vncserver on

Now set your VNC password:

# vncpasswd

Now create a connection so VNC clients can connect to your VNC server:

# vi /etc/sysconfig/vncservers

Add the following to the end of the file. **NOTE: Use a regular user account name here, not the root user.

VNCSERVERS="1:your_user_name_here"
VNCSERVERARGS[1]="-geometry 1024x768"

If you need to add another user, add them to the same VNCSERVERS line with a different display number and give that display its own VNCSERVERARGS entry, like:

VNCSERVERS="1:your_user_name_here 2:another_user_name_here"
VNCSERVERARGS[2]="-geometry 1024x768"

If you use iptables then do the following to allow VNC traffic:

# iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp -m multiport --dports 5901:5903,6001:6003 -j ACCEPT
# service iptables save
# service iptables restart

Now actually start the service:

# service vncserver restart

If you get an error like "getpassword error: Inappropriate ioctl for device", log in or ssh to the server as that user and run the "vncpasswd" command from the user's home dir.

Once you can start the service without an error, continue.

# vncserver -kill :1

Now, from the user's home directory, do this to run the Gnome desktop:

# vi .vnc/xstartup

Comment out the last line and add the "exec gnome-session &" line at the bottom:

#twm &
exec gnome-session &

Restart the service again:

# service vncserver restart

Now put a VNC client on your PC or Linux desktop and connect using:

servername:1 or serverIP:1
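If the client cannot connect, first confirm the server is listening on the expected port (display :1 is TCP 5901):

# netstat -tlnp | grep 5901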


Centos 6.6 installing Gnome Desktop


# yum -y groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts" 


Optional GUI packages

# yum -y groupinstall "Graphical Administration Tools"
# yum -y groupinstall "Internet Browser"
# yum -y groupinstall "General Purpose Desktop"
# yum -y groupinstall "Office Suite and Productivity"
# yum -y groupinstall "Graphics Creation Tools"
 


Start the desktop with

# startx

To make the GUI permanent between reboots, you'll need to change your runlevel to 5. Open /etc/inittab in a text editor and change the following line:

# vi /etc/inittab
 
id:3:initdefault:

to

id:5:initdefault:
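After a reboot, you can confirm the active runlevel; the command prints the previous and current runlevel, so you want to see something like "N 5":

# runlevel
N 5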