Saturday 31 March 2012

Monitorix - A server monitoring tool

Monitorix is a free, open-source, lightweight system monitoring tool designed to monitor as many services as possible. It is accessed via a web browser.

Monitorix has been designed to be used on production UNIX/Linux servers, but due to its simplicity and small size you may also use it to monitor embedded devices.

You can install the package via rpm or, more easily, via yum:

 # yum install monitorix

Adjust the main configuration to your needs:

 # vim /etc/monitorix.conf

Then set up the Apache aliases for the web interface:

 # cd /etc/httpd/conf.d/
 # vim monitorix.conf

     Alias /monitorix /usr/share/monitorix
     ScriptAlias /monitorix-cgi /usr/share/monitorix/cgi-bin

<Directory /usr/share/monitorix/cgi-bin/>
        DirectoryIndex monitorix.cgi
        Options ExecCGI
        Order deny,allow
        Deny from all
        # change "allow from 127.0.0.1" to the line below to allow remote access:
        Allow from all
</Directory>
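Note that "Allow from all" opens the statistics page to anyone who can reach the server. A safer sketch (the LAN range below is only an example; adjust it to your own network) keeps access local:

```
<Directory /usr/share/monitorix/cgi-bin/>
        DirectoryIndex monitorix.cgi
        Options ExecCGI
        Order deny,allow
        Deny from all
        # example: localhost plus one LAN subnet
        Allow from 127.0.0.1 192.168.1.0/24
</Directory>
```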

# /etc/init.d/httpd reload

# service monitorix start

# chkconfig monitorix on

This version introduces two new major features. The first is a new MySQL statistics multigraph giving a complete, overall view of the current performance of a MySQL server. The six nested graphs cover the most important and relevant status information, which should help system administrators optimize their MySQL server accordingly.

This graph requires an unprivileged MySQL user (with a password) in order to collect the statistics, so it is strongly recommended NOT to grant this user privileges on any database.

The following two commands create such an unprivileged user:

mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';

mysql> FLUSH PRIVILEGES;

# service mysqld restart   
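A quick way to verify that the user really has no database privileges (the user name and password below are hypothetical examples) is SHOW GRANTS; in MySQL, "USAGE" means "no privileges at all", which is exactly what this graph needs:

```
mysql> CREATE USER 'monitorix'@'localhost' IDENTIFIED BY 'secret';
mysql> FLUSH PRIVILEGES;
mysql> SHOW GRANTS FOR 'monitorix'@'localhost';
-- expect only a line of the form:
-- GRANT USAGE ON *.* TO 'monitorix'@'localhost' ...
```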

vim /etc/monitorix.conf
set the mysql option to y    --- in monitorix.conf, y enables a feature and n disables it

# service monitorix restart

# service mysqld restart

# /etc/init.d/httpd reload

To access the interface, open http://192.168.1.67/monitorix/ in a browser.

Friday 30 March 2012

Network & Bandwidth Monitoring Using Darkstat

Let's check how we can monitor traffic on a machine using Darkstat, a traffic grapher designed to run on a router that collects traffic statistics and exports them as HTML pages.

$ sudo apt-get install darkstat

Next, make some changes to Darkstat's configuration file, /etc/darkstat/init.cfg:

 $ sudo vim /etc/darkstat/init.cfg  ----- change START_DARKSTAT from "no" to "yes"

# Turn this to yes when you have configured the options below.
START_DARKSTAT=yes

# Don't forget to read the man page.

# You must set this option, else darkstat may not listen to
# the interface you want
INTERFACE="-i eth0"
PORT="-p 666"
BINDIP="-b 127.0.0.1"
LOCAL="-l 192.168.1.0/24"
FIP="-f 127.0.0.1"
DNS="-n"

#SPY="--spy eth0"
# /etc/init.d/darkstat start
or
# darkstat -i eth0
 
Access the interface at http://localhost:666/ (whatever port you set with -p above; darkstat's own default is 667).
 
 
 

Monday 26 March 2012

Monitoring network traffic using Speedometer

Speedometer monitors network traffic and the speed/progress of file transfers. It answers questions like:

How long will a 100 MB transfer take to finish?
How quickly is another transfer going?

Speedometer can print the RX and TX rates on a per-interface basis.

$ sudo apt-get install speedometer

# speedometer -rx eth0 -tx eth0
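As a sketch of where such rates come from: tools like speedometer typically sample the cumulative per-interface byte counters in /proc/net/dev and derive rates from the deltas over time. The loopback interface "lo" is used below so the line works on any Linux box:

```shell
# Print the cumulative RX/TX byte counters for the loopback interface.
# In /proc/net/dev, after the "iface:" field, field 2 is RX bytes and
# field 10 is TX bytes; a rate is just (new - old) / seconds.
awk '$1 == "lo:" { print "RX bytes:", $2, "TX bytes:", $10 }' /proc/net/dev
```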

BitMeter OS for bandwidth and data transfer monitor

BitMeter OS keeps track of how much you use your Internet/network connection.

You can monitor your connection usage either via a web browser, or by using the command line tools. The Web Interface displays various graphs and charts that show how your Internet / network connection has been used over time.

Download the package and install it

 bitmeteros_0.7.5-i386.deb
 bitmeteros_0.7.5-amd64.deb

# dpkg -i  bitmeteros_0.7.5-i386.deb

After successful installation you can access the BitMeter OS web interface in a web browser at http://localhost:2605

The following commands stop/start/restart the BitMeter OS web interface on Linux:
sudo /etc/init.d/bitmeterweb stop
sudo /etc/init.d/bitmeterweb start
sudo /etc/init.d/bitmeterweb restart

The screenshots below show the different options available in BitMeter.

Sunday 25 March 2012

vnStat

vnStat is an excellent and simple tool to monitor your server or interface bandwidth, and the results can be displayed both on the server console and on a web interface with the vnStat PHP frontend.

 rpm available at : http://packages.sw.be/vnstat/

# rpm -ivh vnstat-1.7-1.el5.rf.i386.rpm

After you install it, just edit the /etc/vnstat.conf config file and set up the following lines according to your needs.

# default interface
  Interface "eth0"

If you want to store it in a separate database or directory just edit the following lines and change it to your new path/directory.

# location of the database directory
    DatabaseDir "/var/lib/vnstat"

Running vnstat as a daemon

 Go to http://humdi.net/vnstat/init.d/

Download the correct version for your server's OS and copy it into /etc/init.d/. Then enable the daemon:

  # chmod 755 /etc/init.d/vnstat
 # chkconfig vnstat on
 # service vnstat start

Installing the vnStat php Frontend

Download the vnStat PHP frontend tool directly from http://www.sqweek.com/sqweek/files/vnstat_php_frontend-1.5.1.tar.gz. Copy or move the folder to your document root directory (for example /var/www/html).

Edit the following lines in config.php to set the language, the interface list and the titles to finish the setup:

$language = 'en';
$iface_list = array('eth1', 'sixxs');
$iface_title['eth1'] = 'Internal';
$iface_title['sixxs'] = 'SixXS IPv6';

Access the vnStat URL now to display the stats. It is advisable to password-protect the directory to avoid unauthorized access.

Access the gui of vnstat as below
http://192.168.1.67/vnstat

We can get all these information from console itself

Statistics by day:
# vnstat -d

By hour
# vnstat -h

By month
# vnstat -m

Live monitoring of the network interface
# vnstat -l

show top10
# vnstat -t

update database
# vnstat -u

calculate the current transfer rate (samples traffic for a few seconds)
# vnstat -tr

In the output, Rx and Tx are abbreviations for Receive and Transmit: data received by or transmitted from the interface, whether over a cable or through the air.
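The counters behind these reports are raw byte totals; converting them to the MiB figures vnStat displays is simple arithmetic (1 MiB = 1048576 bytes), which a quick awk one-liner can illustrate:

```shell
# Convert a raw byte counter (example value) to MiB, the unit
# vnstat typically displays in its summaries.
bytes=52428800
echo "$bytes" | awk '{ printf "%.2f MiB\n", $1 / 1048576 }'
# prints: 50.00 MiB
```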
 

Vim commandline Tricks

u  -------> undo in vim editor (note: plain u, not Ctrl+u; Ctrl+r redoes)

:w filename -------> save as filename without exiting from vim

vim ctechz +25 -----> open file and go to line 25; any : command can be run using + on the command line

vim -O ctechz1 ctechz2  -----> open ctechz1 and ctechz2 side by side

:ls    -----> list buffers

:bd  ----->  delete buffer and any associated windows

Ctrl+g    ----->  Show file info including your position in the file

vi filename   ----> Opening a file / Creating text

Edit modes: These keys enter editing modes and type in the text of your document. 

i     Insert before current cursor position
I     Insert at beginning of current line
a     Insert (append) after current cursor position
A     Append to end of line
r     Replace 1 character
R     Replace mode
<ESC> Terminate insertion or overwrite mode

Deletion of text:

x       Delete single character
dd     Delete current line and put in buffer
ndd   Delete n lines (n is a number) and put them in buffer
J        Attaches the next line to the end of the current line (deletes carriage
          return).

Undo:

u     Undo last command

Cut and Paste:

yy     Yank current line into buffer
nyy   Yank n lines into buffer
p       Put the contents of the buffer after the current line
P       Put the contents of the buffer before the current line

Cursor Positioning:

^d     Scroll down half a page
^u     Scroll up half a page
:n      Position cursor at line n
:$      Position cursor at end of file
^g     Display current line number
h,j,k,l Left, Down, Up, and Right respectively. Your arrow keys should also work
                                                if your keyboard mappings are anywhere near sane.

String Substitution:

:n1,n2s/string1/string2/[g]      Substitute string2 for string1 on lines
                                                 n1 to n2. If g is included (meaning global),
                                                 all instances of string1 on each line
                                                 are substituted. If g is not included,
                                                 only the first instance per matching line is
                                                 substituted.

   ^  matches start of line
    .  matches any single character
    $  matches end of line

These and other "special characters" (like the forward slash) can be "escaped" with \, e.g. to match the string "/usr/STRIM100/SOFT" write "\/usr\/STRIM100\/SOFT".
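The same escaping applies outside vim, since sed shares the s/// substitution syntax. Both lines below rewrite /usr/STRIM100/SOFT to /opt/SOFT; the second uses | as the delimiter so the slashes need no escaping at all:

```shell
# Escaped-slash form, exactly as you would type it inside vim's :s command.
echo "/usr/STRIM100/SOFT" | sed 's/\/usr\/STRIM100\/SOFT/\/opt\/SOFT/'
# Alternate-delimiter form: | replaces /, so no backslashes are needed.
echo "/usr/STRIM100/SOFT" | sed 's|/usr/STRIM100/SOFT|/opt/SOFT|'
# both print: /opt/SOFT
```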

Saving and Quitting and other "ex" commands:

These commands are all prefixed by pressing colon (:) and then entered in the lower left corner of the window. They are called "ex" commands because they are commands of the ex text editor - the precursor line editor to the screen editor  vi. You cannot enter an "ex" command when you are in an edit mode (typing text onto the screen)
Press <ESC> to exit from an editing mode.

:w                      Write the current file.
:w new.file         Write the file to the name 'new.file'.
:w! existing.file  Overwrite an existing file with the file currently being edited.
:wq                     Write the file and quit.
:q                        Quit.
:q!                       Quit with no changes.

:e filename          Open the file 'filename' for editing.

:set number         Turns on line numbering
:set nonumber      Turns off line numbering


HA setup using HeartBeat and Squid

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer.
 
Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Cluster terminology

Node : One of the systems/computers that participates with other systems to form a cluster.

Heartbeat : This is a pulse-like signal sent from every node at regular intervals (as a UDP packet) so that each system knows the availability status of the other nodes.


It's a kind of door-knocking activity, like pinging a system, so that each node participating in the cluster knows whether the other nodes are available.

Floating IP or Virtual IP : This is the IP assigned to the cluster, through which users access its services. Whenever clients request a service, the request arrives at this IP, and clients never need to know the actual back-end IP addresses of the nodes. The virtual IP is what masks the effect of a node going down.

Master node : This is the node on which the services normally run in a high-availability cluster.

Slave node : This is the node that takes over in a high-availability cluster when the master node is down. It starts serving users when it stops receiving heartbeat pulses from the master, and automatically gives control back when the master is up and running again. The slave learns the master's status through these heartbeat pulses/signals.


Types of Clusters 

Clusters can be divided into two main types

1. High availability : These clusters are configured where there should be no downtime. If one node in the cluster goes down, the second node takes over serving users without interrupting service, with availability of five nines, i.e. 99.999%.

2. Load balancing : These clusters are configured where user load is high. The advantage of load balancing is that users see no delays, because the load on a single system is shared by two or more nodes in the cluster.


HeartBeat Configuration files Details

Three main configuration files :
/etc/ha.d/authkeys
/etc/ha.d/ha.cf
/etc/ha.d/haresources

Some other configuration files/folders to know :

/etc/ha.d/resource.d/ : Files in this directory are very important; they contain the scripts used to start/stop/restart a service managed by the Heartbeat cluster.

Before configuring the Heartbeat cluster, note the points below.

Note1 : The contents of the ha.cf file are the same on all nodes in a cluster, except for the ucast and bcast directives.

Note2 : The authkeys and haresources files are exact replicas on all nodes in the cluster.

Note3 : A cluster is used to provide a service with high availability/high performance; that service may be a web server, a reverse proxy or a database.

Test scenario setup

1. The cluster configuration I am going to show is a two-node cluster with failover capability for a Squid reverse proxy.

2. For Squid reverse proxy configuration please click here.

3. Node details are as follows.

Node1 
IpAddress(eth0):10.77.225.21
Subnetmask(eth0):255.0.0.0
Default Gateway(eth0):10.0.0.1
IpAddress(eth1):192.168.0.1(To send heartbeat signals to other nodes)
Sub net mask (eth1):255.255.255.0
Default Gateway (eth1): None (don't specify anything; leave this interface's default gateway blank).

Node2 
IpAddress(eth0):10.77.225.22
Subnetmask(eth0):255.0.0.0
Default Gateway (eth0):10.0.0.1
IpAddress(eth1):192.168.0.2(To send heartbeat signals to other nodes)
Sub net mask (eth1):255.255.255.0
Default Gateway (eth1): None (don't specify anything; leave this interface's default gateway blank).

4. Floating IP address: 10.77.225.20

Let's start the configuration of the Heartbeat cluster. For clarity, note that every step in this configuration is divided into two parts:

1. (configuration on node1)
2. (configuration on node2)

Step1 : Install the following packages in the same order as shown. If you cannot find the packages online, you can download them from our site; click here to download the packages.

Step1(a) : Install the following packages on node1
#rpm -ivh heartbeat-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-ldirectord-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-pils-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-stonith-2.1.2-2.i386.rpm

Step1(b) : Install the following packages on node2
#rpm -ivh heartbeat-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-ldirectord-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-pils-2.1.2-2.i386.rpm
#rpm -ivh heartbeat-stonith-2.1.2-2.i386.rpm

Step2 : By default the main configuration files (ha.cf, haresources and authkeys) are not present in the /etc/ha.d/ folder; we have to copy these three files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/

Step2(a) : Copy main configuration files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/ on node 1
#cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/

Step2(b) : Copy main configuration files from /usr/share/doc/heartbeat-2.1.2 to /etc/ha.d/ on node 2
#cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
#cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/

Step3 : Edit ha.cf file
# vi /etc/ha.d/ha.cf

Step3(a) : Edit ha.cf file as follows on node1
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 25
warntime 10
initdead 50
udpport 694
bcast eth1
ucast eth1 192.168.0.1
auto_failback on
node rp1.linuxnix.com
node rp2.linuxnix.com

Step3(b) : Edit ha.cf file as follows on node2
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 25
warntime 10
initdead 50
udpport 694
bcast eth1
ucast eth1 192.168.0.2
auto_failback on
node rp1.linuxnix.com
node rp2.linuxnix.com

Let me explain each entry in detail:

Debugfile : This is the file where detailed debug information for your Heartbeat cluster is stored; it is very useful for any kind of troubleshooting.

Logfile : This is the file where general logging of heartbeat cluster takes place.

Logfacility : This directive specifies where to log your heartbeat messages (local means store logs locally, syslog means store them on a remote server, and none disables logging). There are many other options; please explore them yourself.

Keepalive : This directive sets the time interval, in seconds, between the heartbeat packets the nodes send to check each other's availability.

Deadtime : A node is declared dead if the other node receives no update from it within this many seconds.

Warntime : Time in seconds before issuing a “late heartbeat” warning in the logs.

Initdead : With some configurations, the network takes some time to start working after a reboot. This is a separate “deadtime” to handle that case. It should be at least twice the normal deadtime.

Udpport : This is the port heartbeat uses to send heartbeat packets/signals to the other nodes to check availability (this example uses the default port, 694).

Bcast : Used to specify on which device/interface to broadcast the heartbeat packets.

Ucast : Used to specify on which device/interface to uni-cast the heartbeat packets.

auto_failback : This option determines whether a resource automatically fails back to its "primary" node, or remains on whichever node is serving it until that node fails or an administrator intervenes. In my example it is set to on, meaning that if the failed node comes back online, control is given back to it automatically. Put another way: I have two nodes, node1 and node2. Node1 is a high-end machine and node2 serves only as a temporary stand-in when node1 goes down. If node1 goes down, node2 takes control and serves the service, checking periodically whether node1 is back; once it finds node1 up, control is handed back to node1.

Node : This is used to specify the nodes participating in the cluster. In my cluster only two nodes participate (rp1 and rp2), so only those entries are specified. If more nodes participate in your implementation, specify all of them.

Step4 : Edit haresources file
# vi /etc/ha.d/haresources

Step4(a) : Just add the entry below as the last line of this file on node1
rp1.linuxnix.com 10.77.225.20 squid

Step4(b) : Just add the entry below as the last line of this file on node2
rp1.linuxnix.com 10.77.225.20 squid

Explanation of each entry :
rp1.linuxnix.com is the main node in the cluster.
10.77.225.20 is the floating ip address of this cluster.

Squid : This is the service offered by the cluster. Note that this is the script file located in /etc/ha.d/resource.d/.

Note : By default squid script file will not be there in that folder, I created it according to my squid configuration.

What this script file actually contains

It is just a start/stop/restart script for the particular service, so that the Heartbeat cluster can take care of starting/stopping/restarting the service (here, squid).
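As a hedged illustration (not the author's actual file), such a resource script usually just wraps the service's own init script. The sketch below assumes squid's init script lives at /etc/init.d/squid and writes to /tmp so you can experiment safely before placing it in /etc/ha.d/resource.d/:

```shell
# Hypothetical resource script: Heartbeat invokes it with
# start/stop/restart/status and it delegates to the distro init script.
cat > /tmp/squid.example <<'EOF'
#!/bin/sh
case "$1" in
  start|stop|restart|status) exec /etc/init.d/squid "$1" ;;
  *) echo "Usage: $0 {start|stop|restart|status}" >&2; exit 1 ;;
esac
EOF
chmod 755 /tmp/squid.example
```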

Step5 : Edit the authkeys file. The authkeys configuration file contains the information Heartbeat uses to authenticate cluster members. It must not be readable or writable by anyone other than root, so change the permissions of the file to 600 on both nodes.

Two lines are required in the authkeys file:
A line that says which key to use for signing outgoing packets.
One or more lines defining how incoming packets may be signed.

Step5 (a) : Edit authkeys file on node1
#vi /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 HI!
#3 md5 Hello!
Now save and exit the file

Step5 (b) : Edit authkeys file on node2
#vi /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 HI!
#3 md5 Hello!
Now save and exit the file

Step6 : Edit /etc/hosts file to give entries of host-names for the nodes

Step6 (a) : Edit /etc/hosts file on node1 as below

10.77.225.21 rp1.linuxnix.com rp1
10.77.225.22 rp2.linuxnix.com rp2

Step6 (b) : Edit /etc/hosts file on node2 as below

10.77.225.21 rp1.linuxnix.com rp1
10.77.225.22 rp2.linuxnix.com rp2


Step7 : Start Heartbeat cluster

Step7 (a) : Start heartbeat cluster on node1
#service heartbeat start

Step7 (b) : Start heartbeat cluster on node2
#service heartbeat start

Checking your Heartbeat cluster:
If your heartbeat cluster is running fine, a virtual Ethernet interface is created on node1 carrying the floating IP 10.77.225.20. Below is clipped output from my first node.
# ifconfig

Try accessing the floating IP in a browser to check whether Squid is working fine.


Saturday 24 March 2012

DNS Records

A     ## address record. Returns a 32-bit IPv4 address, most commonly used to
            map hostnames to an IP address of the host.
                            eric.ctechz.com. IN A 32.36.7.6

 (address) Maps a host name to an IP address. When a computer has multiple adapter cards or IP addresses, or both, it should have multiple address records.

AAAA   ## IPv6 address record, Returns a 128-bit IPv6 address, most commonly used to map hostnames to an IP address of the host.

CNAME  ## Canonical name record, Alias of one name to another: the DNS
                      lookup will continue by retrying the lookup with the new name.
                     CNAME records simply allow a machine to be known by more than
                     one hostname. There must always be an A record for the machine
                     before aliases can be added. The host name of a machine that is
          stated in an A record is called the canonical name.

                      www.ctechz.com. IN CNAME eric.ctechz.com.

 (canonical name) Sets an alias for a host name. For example, using this record, zeta.microsoft.com can have an alias as www.microsoft.com.

MX     ## mail exchange record, Maps a domain name to a list of message
                   transfer agents for that domain.

(mail exchange) Specifies a mail exchange server for the domain, which allows mail to be delivered to the correct mail servers in the domain.

NS    ## name server record, Delegates a DNS zone to use the given authoritative name servers.
                 ctechz.com. IN NS ravan.ctechz.com.

(name server) Specifies a name server for the domain, which allows DNS lookups within various zones. Each primary and secondary name server should be declared through this record.

PTR   ## pointer record, Pointer to a canonical name. Unlike a CNAME, DNS processing does NOT proceed, just the name is returned. The
most common use is for implementing reverse DNS lookups, but other uses
include such things as DNS-SD.

 (pointer) Creates a pointer that maps an IP address to a host name for reverse lookups.

SOA   ## start of [a zone of] authority record, Specifies authoritative
  information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and
 several timers relating to refreshing the zone.

(start of authority) Declares the host that's the most authoritative for the zone and, as such, is the best source of DNS information for the zone. Each zone file must have an SOA record (which is created automatically when you add a zone).

    ctechz.com. IN SOA dom.ctechz.com.
        hostmaster.ctechz.com. (
           1996111901 ; Serial
           10800 ; Refresh
           3600 ; Retry
           3600000 ; Expire
           86400 ) ; Minimum

TXT   ## Text record, Originally for arbitrary human-readable text in a DNS
           record. 

The @ symbol in your DNS record refers to the record for your domain name without any www or sub-domain name.

The result of this record is that visitors can connect to your domain name at http://your-domain.com. You may also notice the @ symbol in the CNAME section:

ftp        @
www    @

This will create aliases to the @ A Record, which will point www.your-domain.com and ftp.your-domain.com to the same IP address.

The @ symbol may also be used in an MX record. For example:
@        mail        1
This indicates that the primary MX record for email sent to @your-domain.com points to the A record called "mail".
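Putting the pieces together, a hypothetical zone fragment (using documentation IP addresses) shows how @, A, CNAME and MX relate:

```
; hypothetical zone fragment for your-domain.com
@      IN  A      203.0.113.10      ; the bare domain itself
mail   IN  A      203.0.113.20
www    IN  CNAME  @                 ; alias to the bare-domain A record
ftp    IN  CNAME  @
@      IN  MX 1   mail              ; mail for @your-domain.com goes to "mail"
```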

DNS Resolution Process

Let's check how the Domain Name Service works when we enter a name into a client like a browser or a mail client.

1. When a user types a host name (www.ctechz.co.in) into a browser, the application tries to find the IP address associated with that domain name. There are two kinds of lookup involved in DNS:

Finding the IP address associated with a domain name is known as a forward lookup, and finding the domain name associated with an IP address is called a reverse lookup.

There are 13 root name servers on the Internet which provide the necessary name server details.

Each country has a name server and each organization has a name server too. Each NS only has information about machines in its own domain as well as information about other name servers. The root NS only has information on the ip address of the name servers of .com, .edu etc (top level domains).

* .in NS only has information on the IP address of the name servers of .org.in, .ac.in, .co.in etc

* .co.in NS only has information on the name servers of Indian companies or sites hosted in India.

* ctechz.co.in NS only has information on the machines at ctechz, like www.ctechz.co.in etc...

The Name Resolution Process

Here we take the domain www.ctechz.co.in as an example; the following steps resolve this name into an IP address. This procedure is called hostname resolution, and the algorithm that performs it is called the resolver.

2.  The application checks local database on the local machine first. If it can get an answer directly from them it proceeds no further.

3. Otherwise a request is sent to the NS to find the IP address associated with www.ctechz.co.in.

4. The NS determines whether that name has been looked up recently. If so, there is no need to ask further, since the result is stored in a local cache.

5. The NS checks whether the domain is local, i.e. a computer it has direct information about. In this case that would only be true if the NS were www.ctechz.co.in's very own NS.

6. The NS strips out the TLD (top-level domain) .in and queries a root NS, asking which NS is responsible for .in. It returns an answer, say an NS with IP 127.168.2.33. Depending on the answer, the NS then queries that authoritative server for the IP address.

7. NS strips out the next highest domain .co.in and it queries to 127.168.2.33 asking what NS is responsible for .co.in, it will return an answer say a NS of IP 192.168.55.67.

8. NS strips out next highest domain .ctechz.co.in and it queries 192.168.55.67 asking what NS is responsible for ctechz.co.in, it will return an answer say a NS of IP 196.28.120.5

9. NS queries 196.28.120.5 asking for IP address of  www.ctechz.co.in and the answer will be 160.120.170.3

10. NS returns result to the application.

11. The NS stores each of these results in a local cache with an expiration date, to avoid having to look them up a second time.

Configuring local Machine

Some configuration files on the local machine are the following:

  /etc/host.conf
 /etc/hosts
 /etc/resolv.conf

1. The application checks /etc/host.conf, which has the line "order hosts,bind", specifying that it should first check the local database file /etc/hosts and then query the NS specified in /etc/resolv.conf (bind).

The file /etc/hosts contains a plain list of IP addresses and names. If an application can get an answer directly from /etc/hosts, it proceeds no further.

2. The application checks in the file /etc/resolv.conf for a line 
nameserver <nameserver>

3. The application sends the NS a query with the hostname [having checked the local db first] and then proceeds with the hierarchical queries.
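You can watch this lookup order in action with getent, which resolves names the same way applications do (hosts file first, then DNS):

```shell
# localhost is answered straight from /etc/hosts; no name server is needed.
getent hosts localhost
```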

Thursday 22 March 2012

How to setup SSL on Apache

SSL certificates verify your identity with an end user and make it possible to encrypt the communication between two hosts.

The browser checks the web server's certificate to see whether it is valid. If the certificate is valid, the browser and web server negotiate an encryption algorithm they both understand.

Once a negotiation has been reached, they use unique keys or codes (a public key and a private key) for encrypting and decrypting the data on both sides. Finally the browser and web server communicate securely so no one can eavesdrop on their conversation.

Secure Sockets Layer (SSL) is used in e-commerce and other applications where the information being transmitted must be secure and not visible to anyone watching the network traffic.

SSL certificates must be signed by a trusted authority, more commonly known as a Certificate Authority (CA). CAs confirm your identity by adding their signature to your SSL certificate. On the browser side, browsers like Firefox and Internet Explorer ship with a list of CA fingerprints to match against the SSL certificates they come across. If all goes well, your browser accepts the certificate and gives no complaints; however, if the certificate's CA fingerprint is not on file, it complains and typically throws up a window saying the certificate is bad or shouldn't be trusted.

OpenSSL helps in creating self-signed certificates for free. Self-signed certs are the same as signed versions except that a CA doesn't stamp them with its approval; instead you stamp them with yours.

Self-signed certs offer the same amount of encryption protection, but at the cost of the annoying popup alert the browser displays and the possibility of someone forging your identity.

SSL is a layered protocol and consists of four sub-protocols:
  - SSL Handshake Protocol
  - SSL Change Cipher Spec Protocol
  - SSL Alert Protocol
  - SSL Record Layer

@ Get the apache package first  

# cd /ctechz/

# wget http://apache.mirrors.hoobly.com//httpd/httpd-2.2.22.tar.gz

# gunzip httpd-2.2.22.tar.gz

# tar -xvf httpd-2.2.22.tar

# cd httpd-2.2.22

# ./configure --prefix=/opt/apachessl/ --enable-ssl --enable-so

# make

# make install

# cd /opt/apachessl

# /opt/apachessl/bin/apachectl start

Point a browser at http://192.168.1.240  ## it shows the default Apache page if everything is going right. If you need a custom HTML page instead, follow the steps below.

# mkdir /opt/apachessl/htdocs/ctechz.com/   ## this will be its document root

Create an index.html page there.

# vim /opt/apachessl/conf/httpd.conf

<VirtualHost 192.168.1.240:80>
    DocumentRoot /opt/apachessl/htdocs/ctechz.com/
    ServerName ctechz.com
</VirtualHost>

Listen 192.168.1.240:80

@ Now generate a self signed ssl certificate key

# cd /opt/apachessl/conf/

# mkdir ssl

# cd ssl

We will generate a private key file (www.ctechz.com.key), a certificate signing request file (www.ctechz.com.csr) and a web server certificate file (www.ctechz.com.crt) that can be used on an Apache server with mod_ssl.

Generate Private Key on the Server Running Apache + mod_ssl

First, generate a private key on the Linux server that runs the Apache webserver, using the openssl command shown below.

# openssl genrsa -des3 -out www.ctechz.com.key 1024
Generating RSA private key, 1024 bit long modulus

Generate a Certificate Signing Request (CSR)

Using the key generated above, you should generate a certificate request file (.csr) using openssl as shown below.

Once the private key is generated a Certificate Signing Request can be generated. The CSR is then used in one of two ways. Ideally, the CSR will be sent to a Certificate Authority, such as Thawte or Verisign who will verify the identity of the requestor and issue a signed certificate. The second option is to self-sign the CSR, which will be demonstrated in the next section.

# openssl req -new -key www.ctechz.com.key -out www.ctechz.com.csr

@ Remove Passphrase from Key

One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each time the web server is started. Obviously this is not necessarily convenient as someone will not always be around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. With that being said, use the following command to remove the pass-phrase from the key:

Do this only if you entered a pass-phrase when creating the key file:

# cp server.key server.key.org

# openssl rsa -in server.key.org -out server.key
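The round-trip can be sketched end to end. The snippet below follows the same generic server.key names; the throwaway pass-phrase "demo" and the -passout/-passin flags are only there to make the example non-interactive (normally you would be prompted):

```shell
# Create a pass-phrase-protected key non-interactively
# (pass-phrase "demo" is for illustration only)
openssl genrsa -des3 -passout pass:demo -out server.key.org 2048

# Strip the pass-phrase, then restrict the unencrypted key to its owner
openssl rsa -in server.key.org -passin pass:demo -out server.key
chmod 600 server.key
```

The decrypted server.key no longer carries an ENCRYPTED header, which is what lets Apache start without prompting.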

Generate a Self-Signed SSL Certificate

For testing purposes, you can generate a self-signed SSL certificate that is valid for 1 year, using the openssl command as shown below.

# openssl x509 -req -days 365 -in www.ctechz.com.csr \
  -signkey www.ctechz.com.key -out www.ctechz.com.crt
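To sanity-check the result, the whole chain can also be run non-interactively and the certificate inspected afterwards. The -subj flag below is an addition (not part of the original steps) that skips the interactive CSR prompts:

```shell
# Key -> CSR -> self-signed certificate, end to end
openssl genrsa -out www.ctechz.com.key 2048
openssl req -new -key www.ctechz.com.key -subj "/CN=www.ctechz.com" \
  -out www.ctechz.com.csr
openssl x509 -req -days 365 -in www.ctechz.com.csr \
  -signkey www.ctechz.com.key -out www.ctechz.com.crt

# Inspect the subject and validity window of the new certificate
openssl x509 -in www.ctechz.com.crt -noout -subject -dates
```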

After generating the certificate, copy the files to the location your web server expects. Here I created a directory called ssl under /opt/apachessl (/opt/apachessl/ssl) and copied all the files there.

Then edit httpd.conf and point it at the certificate files. For Apache on Red Hat using the default layout, the config file is /etc/httpd/conf/httpd.conf. Note that your httpd.conf file may include separate config files, and you may have an /etc/httpd/conf.d/ssl.conf file; check for this first before you place the following in your httpd.conf file.

# cd /opt/apachessl/conf

# vim httpd.conf

Listen *:80
Listen *:443

<VirtualHost *:80>
#    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /opt/apachessl/htdocs/ctechz.com/
    ServerName ctechz.com
#  ErrorLog logs/dummy-host.example.com-error_log
#  CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>

<VirtualHost *:443>
#    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /opt/apachessl/htdocs/ctechz.com/
    ServerName ctechz.com
# ErrorLog logs/dummy-host.example.com-error_log
# CustomLog logs/dummy-host.example.com-access_log common
SSLEngine on

SSLCertificateFile /opt/apachessl/ssl/www.ctechz.com.crt
SSLCertificateKeyFile /opt/apachessl/ssl/www.ctechz.com.key
</VirtualHost>

Then open a browser and access the link
https://192.168.1.240

MySQL Replication using DRBD and Heartbeat

DRBD (Distributed Replicated Block Device) is a block device designed for building high-availability clusters.

This is done by mirroring a whole block device via a (dedicated) network. DRBD takes over the data, writes it to the local disk and sends it to the other host, which writes it to its own disk. The other components needed are a cluster membership service, here provided by Heartbeat, and some kind of application that works on top of the block device. Each device (DRBD provides more than one of these devices) has a state, which can be 'primary' or 'secondary'. If the primary node fails, Heartbeat switches the secondary device into the primary state and starts the application there. If the failed node comes up again, it becomes a new secondary node and has to synchronise its content with the primary. This, of course, happens in the background without interruption of service.

The Distributed Replicated Block Device (DRBD) is a Linux Kernel module that constitutes a distributed storage system. You can use DRBD to share block devices between Linux servers and, in turn, share file systems and data.

DRBD implements a block device which can be used for storage and which is replicated from a primary server to one or more secondary servers. The distributed block device is handled by the DRBD service. Each DRBD service writes the information from the DRBD block device to a local physical block device (hard disk).

On the primary, data writes are written both to the underlying physical block device and distributed to the secondary DRBD services. On the secondary, the writes received through DRBD are written to the local physical block device. The information is shared between the primary DRBD server and the secondary DRBD server synchronously and at a block level, which means that DRBD can be used in high-availability solutions where you need failover support.
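As a toy illustration of that synchronous contract (plain directories standing in for block devices; this is not how DRBD is actually implemented), a write only "succeeds" once both copies are on disk:

```shell
# Simulate DRBD-style synchronous replication: a write is acknowledged
# only after it has landed in both the "local" and the "remote" store.
mkdir -p local_disk remote_disk
replicated_write() {
    printf '%s\n' "$2" > "local_disk/$1" &&
    printf '%s\n' "$2" > "remote_disk/$1"    # wait for the remote copy too
}
replicated_write blk0 "some data" && echo "write acknowledged"
```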

When used with MySQL, DRBD can be used to ensure availability in the event of a failure. MySQL is configured to store information on the DRBD block device, with one server acting as the primary and a second machine available to operate as an immediate replacement in the event of a failure.

For automatic failover support, you can combine DRBD with the Linux Heartbeat project, which manages the interfaces on the two servers and automatically configures the secondary (passive) server to replace the primary (active) server in the event of a failure. You can also combine DRBD with MySQL Replication to provide both failover and scalability within your MySQL environment.

NOTE:- Because DRBD is a Linux Kernel module, it is currently not supported on platforms other than Linux.

Configuring the DRBD Environment

To set up DRBD, MySQL and Heartbeat, you follow a number of steps that affect the operating system, DRBD and your MySQL installation.

@ DRBD works through two (or more) servers, each called a node.

@ Ensure that your DRBD nodes are as identically configured as possible, so that the secondary machine can act as a direct replacement for the primary machine in the event of system failure.

@ The node that contains the primary data, has read/write access to the data, and in an HA environment is the currently active node is called the primary.

@ The server to which the data is replicated is called the secondary.

@ A collection of nodes that are sharing information is referred to as a DRBD cluster.

@ For DRBD to operate, you must have a block device on which the information can be stored on each DRBD node. The lower level block device can be a physical disk partition, a partition from a volume group or RAID device or any other block device.

@ For the distribution of data to work, DRBD is used to create a logical block device that uses the lower level block device for the actual storage of information. To store information on the distributed device, a file system is created on the DRBD logical block device.

@ When used with MySQL, once the file system has been created, you move the MySQL data directory (including InnoDB data files and binary logs) to the new file system.

@ When you set up the secondary DRBD server, you set up the physical block device and the DRBD logical block device that stores the data. The block device data is then copied from the primary to the secondary server.

Installation and configuration sequence

@ First, set up your operating system and environment. This includes setting the correct host name, updating the system and preparing the available packages and software required by DRBD, and configuring a physical block device to be used with the DRBD block device.

@ Installing DRBD requires installing or compiling the DRBD source code and then configuring the DRBD service to set up the block devices to be shared.

@ After configuring DRBD, alter the configuration and storage location of the MySQL data.

@ Optionally, configure high availability using the Linux Heartbeat service

Setting Up Your Operating System for DRBD

To set your Linux environment for using DRBD, follow these system configuration steps:

@ Make sure that the primary and secondary DRBD servers have the correct host name, and that the host names are unique. You can set the name with the hostname command and verify it with uname -n:

# hostname drbd1   -----> set the hostname for the first node
# hostname drbd2   -----> set the hostname for the second node

@ Each DRBD node must have a unique IP address. Make sure that the IP address information is set correctly within the network configuration and that the host name and IP address have been set correctly within the /etc/hosts file.

# vim /etc/hosts   ## on both nodes, so each node can resolve the other
192.168.1.231  drbd1
192.168.1.237  drbd2

@ Because the block device data is exchanged over the network,everything that is written to the local disk on the DRBD primary is also written to the network for distribution to the DRBD secondary.

@ You devote a spare disk, or a partition on an existing disk, as the physical storage location for the DRBD data that is replicated. If the disk is unpartitioned, partition it using fdisk, cfdisk or another partitioning tool. Do not create a file system on the new partition (i.e., partition the device, but do not format it).

# fdisk /dev/sdb  -----> on the primary node, create a partition first
  n / p(1)
  w
# partprobe
# fdisk -l
/dev/sdb1

# fdisk /dev/hdb  ----------> create a partition on the secondary node also
  n / p(1)
  w
# partprobe
# fdisk -l
/dev/hdb1

Create a new partition, OR, if you are using VMware or VirtualBox and have no spare space for a new partition, add an extra virtual disk to get more space. Do not create a file system on it; the file system is created later, on top of the DRBD device. Use identical sizes for the partitions on each node, primary and secondary.

@ If possible, upgrade your system to the latest available Linux kernel for your distribution. Once the kernel has been installed, you must reboot to make the kernel active. To use DRBD, you must also install the relevant kernel development and header files that are required for building kernel modules.
  
Before you compile or install DRBD, make sure the following tools and files are present.

Update and install the latest kernel and kernel header files:

@ root-shell> up2date kernel-smp-devel kernel-smp

@ root-shell> up2date glib-devel openssl-devel libgcrypt-devel glib2-devel pkgconfig ncurses-devel rpm-build rpm-devel redhat-rpm-config gcc gcc-c++ bison flex gnutls-devel lm_sensors-devel net-snmp-devel python-devel bzip2-devel libselinux-devel perl-DBI

# yum install drbd kmod-drbd
OR, if that hits dependency errors:
# yum install drbd82 kmod-drbd82

[/etc/drbd.conf] is the configuration file

To set up a DRBD primary node, configure the DRBD service, create the first DRBD block device, and then create a file system on the device so that you can store files and data.

@ Set the synchronization rate between the two nodes. This is the rate at which devices are synchronized in the background after a disk failure, device replacement or during the initial setup. Keep this rate below the available bandwidth of your network connection.

@ To set the synchronization rate, edit the rate setting within the syncer block:

Creating your primary node

# vim /etc/drbd.conf

global { usage-count yes; }

common {
syncer {
rate 50M;
verify-alg sha1;
}
handlers { outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";}
}

resource mysqlha {
protocol C;   # Protocol C: a write is considered complete only when the
              # data has reached both the local disk and the remote
              # node's physical disk.
disk {
on-io-error detach;
fencing resource-only;
#disk-barrier no;
#disk-flushes no;
}

@ Set up some basic authentication. DRBD supports a simple password hash exchange mechanism. This helps to ensure that only those hosts with the same shared secret are able to join the DRBD node group.

net {
cram-hmac-alg sha1;
shared-secret "cEnToS";
sndbuf-size 512k;
max-buffers 8000;
max-epoch-size 8000;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
data-integrity-alg sha1;
} 

@ Now you must configure the host information. You must have the node information for the primary and secondary nodes in the drbd.conf file on each host. Configure the following information for each node:

@ device: The path of the logical block device that is created by DRBD.

@ disk: The block device that stores the data.

@ address: The IP address and port number of the host that holds this DRBD device.

@ meta-disk: The location where the metadata about the DRBD device is stored. If you set this to internal, DRBD uses the physical block device to store the information, by recording the metadata within the last sections of the disk.

  The exact size depends on the size of the logical block device you have created, but it may involve up to 128MB.

@ The IP address of each on block must match the IP address of the corresponding host: each node's on block carries that node's own address, not the peer's. Do not swap these values between the primary and secondary entries.

on drbd1 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/sdb1;
address 192.168.1.231:7789;
meta-disk internal;
}

on drbd2 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/hdb1;
address 192.168.1.237:7789;
meta-disk internal;
}

Then do the same on the second machine as on the first.

Setting Up a DRBD Secondary Node  

The configuration process for setting up a secondary node is the same as for the primary node, except that you do not have to create the file system on the secondary node device, as this information is automatically transferred from the primary node.

@ To set up a secondary node:

Copy the /etc/drbd.conf file from your primary node to your secondary node. It should already contain all the information and configuration that you need, since you had to specify the secondary node IP address and other information for the primary node configuration. 

global { usage-count yes; }

common {
syncer {
rate 50M;
verify-alg sha1;
}

handlers { outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";}
}

resource mysqlha {
protocol C;
disk {
on-io-error detach;
fencing resource-only;
#disk-barrier no;
#disk-flushes no;
}

net {
cram-hmac-alg sha1;
shared-secret "cEnToS";
sndbuf-size 512k;
max-buffers 8000;
max-epoch-size 8000;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
data-integrity-alg sha1;
}


on drbd1 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/sdb1;
address 192.168.1.231:7789;
meta-disk internal;
}

on drbd2 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/hdb1;
address 192.168.1.237:7789;
meta-disk internal;
}

@@ On both machines, before starting the primary and secondary nodes, create the metadata for the devices:

# drbdadm create-md mysqlha

@@ On primary/active node,

# /etc/init.d/drbd start  ## DRBD should now start and initialize, creating the DRBD devices that you have configured.

DRBD creates a standard block device - to make it usable, you must create a file system on the block device just as you would with any standard disk partition. Before you can create the file system, you must mark the new device as the primary device (that is, where the data is written and stored), and initialize the device. Because this is a destructive operation, you must specify the command line option to overwrite the raw data.

# drbdadm -- --overwrite-data-of-peer primary mysqlha

@ On secondary/passive node,

# /etc/init.d/drbd start
 
@  On both machines,

# /etc/init.d/drbd status

# cat /proc/drbd      ##  Monitoring a DRBD Device

cs: connection state
st: node state (local/remote)
ld: local data consistency
ds: data consistency
ns: network send
nr: network receive
dw: disk write
dr: disk read
pe: pending (waiting for ack)
ua: unack'd (still need to send ack)
al: access log write count

# watch -n 10 'cat /proc/drbd'
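The fields above can be pulled out with standard text tools. A small parsing sketch (the sample line is modeled on typical DRBD 8.x output, not captured from this setup):

```shell
# Pull the connection state (cs:), node roles (ro:) and disk states (ds:)
# out of a /proc/drbd status line
sample='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----'
cs=$(echo "$sample" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ro=$(echo "$sample" | grep -o 'ro:[^ ]*' | cut -d: -f2)
ds=$(echo "$sample" | grep -o 'ds:[^ ]*' | cut -d: -f2)
echo "connection=$cs roles=$ro disks=$ds"
# prints: connection=Connected roles=Primary/Secondary disks=UpToDate/UpToDate
```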

@ On primary/active node,

# mkfs.ext3 /dev/drbd0

# mkdir /drbd        ## not needed here, because we will mount the device
                     ## at another point, /usr/local/mysql/

# mount /dev/drbd0 /drbd   ## not needed either

Your primary node is now ready to use.

@ On secondary/passive node,

# mkdir /drbd   ## not needed, it will replicate from primary

[[[[[[[[[[ For TESTING the DRBD replication alone, follow the steps above. After the primary node's device is mounted to a mount point, create some files in it. Create the same mount point on both systems.
  # cd /mountpoint

  # dd if=/dev/zero of=check bs=1024 count=1000000

After that in primary
# umount /drbd

# drbdadm secondary mysqlha  ## make the primary node secondary

And in secondary
# drbdadm primary mysqlha  ## make the secondary node primary

# mount /dev/drbd0 /drbd/

# ls /drbd/   ## the data will be replicated into it. ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]

MySQL for DRBD

# [MySQL]  ## install MySQL if it is not there; here it is built from source under /usr/local/mysql
@ On primary/active node,
 

# cd  mysql-5.5.12/

# cmake . -LH

# cmake .

# make

# make install

# cd /usr/local/mysql/

# chown mysql:mysql . -R

# scripts/mysql_install_db --datadir=/usr/local/mysql/data/ --user=mysql 

# scp /etc/my.cnf root@192.168.1.231:/usr/local/mysql/  
                          ## config file copied from another machine

# cd /usr/local/mysql/

# vim my.cnf

    datadir=/usr/local/mysql/data/
    socket=/usr/local/mysql/data/mysql.sock
    log-error=/var/log/mysqld.log
    pid-file=/usr/local/mysql/mysqld.pid
 
./bin/mysqld_safe --defaults-file=/usr/local/mysql/my.cnf &
                ##  start mysql server

OR
# nohup sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/usr/local/mysql/my.cnf &

#./bin/mysqladmin -h localhost -uroot password 'mysql'

# vim /etc/profile
  export PATH=$PATH:/usr/local/mysql/bin

# . /etc/profile

# mysql -uroot -pmysql

# cd /usr/local/mysql/support-files/

# cp mysql.server /etc/init.d/mysqld

# /etc/init.d/mysqld restart

# /etc/init.d/mysqld stop
### /etc/init.d/drbd stop ---- don't stop drbd

# mkdir /tmp/new

# mv /usr/local/mysql/* /tmp/new/ ## Move the MySQL data to a safe location before mounting the DRBD partition at /usr/local/mysql

# umount /drbd  ## the DRBD partition is still mounted at /drbd; unmount it so it can be mounted at /usr/local/mysql, where the MySQL data is stored.

# mount /dev/drbd0 /usr/local/mysql/  ## mount the DRBD device where the MySQL directories and installation files reside.

# cp -r /tmp/new/* .  ## after mounting the DRBD partition at /usr/local/mysql/, copy the MySQL data back from the backup location. Now MySQL lives on the DRBD partition.

[[[[[[[[[[[[[ For TESTING the MySQL replication over DRBD

In Primary node
 
# mysql -uroot -pmysql

mysql> create database DRBD;  ## the data directory lives on the DRBD device, so the entire MySQL instance, including this database, is replicated to the secondary.

# /etc/init.d/mysqld stop    ## stop MySQL on the primary so that the MySQL service and databases can be brought up on the secondary server.

# umount /usr/local/mysql/   ## umount the mount point in primary

# ls /usr/local/mysql/

# drbdadm secondary mysqlha  ## make the primary the secondary node

# /etc/init.d/drbd status

In secondary Node

# drbdadm primary mysqlha

# /etc/init.d/drbd status

# mount /dev/drbd0 /usr/local/mysql/

# ls /usr/local/mysql/

# /usr/local/mysql/bin/mysql -uroot -pmysql  ## we can see the database created on the primary replicated to the secondary.

# /etc/init.d/mysqld start

      ]]]]]]]]]]]]]]]]]]]]]]]

Configuring Heartbeat for DRBD failover (failing over the services attached to DRBD)

1. Assign hostname drbd1 to the primary node, with IP address 192.168.1.231 on eth0
2. Assign hostname drbd2 to the secondary node, with IP address 192.168.1.237

Note: uname -n must return drbd1 on the first node and drbd2 on the second.

We already set the host names while configuring DRBD. Here we use 192.168.1.245 as the virtual IP; clients will connect through that IP.

# yum install heartbeat heartbeat-devel  ##  On both the servers

@ If the sample config files are not under /usr/share/doc/heartbeat, create them:

# cd /etc/ha.d/

# touch authkeys

# touch ha.cf

# touch haresources

# vim authkeys
   auth 2
   2 sha1 test-ha

  ## auth 3
     3 md5 "secret"

# chmod 600 /etc/ha.d/authkeys
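Rather than a fixed string like test-ha, the shared key can be generated randomly. The sketch below writes a sample file (authkeys.sample is a hypothetical name, used here so the real /etc/ha.d/authkeys is not clobbered) in the same two-line format:

```shell
# Generate a random 40-hex-character secret for the sha1 auth method
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')

# Write it out in the authkeys format used above, and lock the file down
printf 'auth 2\n2 sha1 %s\n' "$secret" > authkeys.sample
chmod 600 authkeys.sample
```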

# vim ha.cf

logfacility local0
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 500ms
deadtime 10
warntime 5
initdead 30
mcast eth0 225.0.0.1 694 2 0
ping 192.168.1.22
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
auto_failback off   ## set to on if resources should move back to the primary automatically
node drbd1
node drbd2

# vim haresources ## This file lists the resources we want to make highly available.

drbd1 drbddisk Filesystem::/dev/drbd0::/usr/local/mysql::ext3 mysqld 192.168.1.245

Here 192.168.1.245 is the virtual IP.

# cd /etc/ha.d/resource.d
   
# vim drbddisk
    DEFAULTFILE="/etc/drbd.conf"

@ On PRIMARY Node

# cp /etc/rc.d/init.d/mysqld /etc/ha.d/resource.d/

@ Copy the files from primary node to secondary node

# scp -r ha.d root@192.168.1.237:/etc/ ## copy all files to node two, because Primary node and secondary node contains the same configuration.

 @@ Stop all services in both the nodes

node1$ service mysqld stop
node1$ umount /usr/local/mysql/
node1$ service drbd stop
node1$ service heartbeat stop

node2$ service mysqld stop
node2$ umount /usr/local/mysql/
node2$ service drbd stop
node2$ service heartbeat stop

@@ # Automatic startup,
node1$ chkconfig drbd on
node2$ chkconfig drbd on

node1$ chkconfig mysqld off ## mysql will be handled by heartbeat; its init script was placed in /etc/ha.d/resource.d/
node2$ chkconfig mysqld off
node1$ chkconfig heartbeat on
node2$ chkconfig heartbeat on

# Start drbd and heartbeat on both machines,
node1$ service drbd start
node1$ service heartbeat start

node2$ service drbd start
node2$ service heartbeat start

No need to start MySQL; Heartbeat will start it automatically.

For testing the replication

#/usr/lib/heartbeat/hb_standby ## Run this command on either host; that host stands down and its resources (DRBD, MySQL, the virtual IP) fail over to the other system

@ access the DB from a remote host using the virtual IP

mysql> grant all privileges on *.* to 'root'@'192.168.1.67' identified by 'mysql1';

mysql> flush privileges;

mysql> delete from mysql.user where Host='192.168.1.67';   ## to remove this grant again later

# mysql -uroot -p -h 192.168.1.245

#[Test Failover Services]
node1$ hb_standby
node2$ hb_takeover

#[Sanity Checks]
node1$ service heartbeat stop
node2$ service heartbeat stop
$/usr/lib64/heartbeat/BasicSanityCheck

#[commands]
$/usr/lib64/heartbeat/heartbeat -s