
MPICH Configuration


Here I discuss the installation and configuration of the MPICH software on a set of servers. You need at least two computers: one acts as the master (server) and the rest act as node computers. After configuring this package, the whole system works like a cluster solution: you can run parallel as well as serial programs, no additional packages need to be installed, and you can choose the number of processes for each program. If you want a graphical monitoring tool, you can use an open-source option such as Ganglia.

1) Install the OS on both servers (master & slave)
2) Assign IPs and hostnames on both (master & slave)
3) Disable the firewall and security (SELinux) on both (master & slave); sample commands for steps 2 and 3 are shown after the installation steps below
4) Install MPICH2 on both servers (master & slave)
   a) Download the latest version of the software from the MPICH website.
   b) Unzip it with the following command:
# tar -zxvf mpich2-1.2.1p1.tar.gz
   c) Configure it with the following commands:
# cd mpich2-1.2.1p1
# ./configure --prefix=/opt/mpich2-1.2.1p1 --with-rsh=ssh --with-pm=mpd:gforker --with-device=ch3:ssm
   d) Compile and install it with the following commands:
# make
# make install
On Master node –
5) Repeat the same steps with the prefix /opt/mpich2-intel
# cd mpich2-1.2.1p1
# ./configure --prefix=/opt/mpich2-intel --with-rsh=ssh --with-pm=mpd:gforker --with-device=ch3:ssm
# make
# make install
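For reference, steps 2 and 3 above can be done roughly as follows on a CentOS/RHEL-style system (the IPs 50.1.1.1 for the master and 50.1.1.2 for the slave are the ones assumed throughout this guide; adjust them to your network):
# vi /etc/hosts
50.1.1.1 master
50.1.1.2 node1
# service iptables stop
# chkconfig iptables off
# setenforce 0
(also set SELINUX=disabled in /etc/selinux/config so the change survives a reboot)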

6) NFS server setup (on the master node)
# vi /etc/exports
/home *(rw,no_root_squash,sync)
# exportfs -a
# chkconfig nfslock on
# chkconfig nfs on
# chkconfig portmap on
# service portmap start
# service nfslock start
# service nfs start
On Slave node –
# vi /etc/fstab
50.1.1.1:/home /home nfs defaults 0 0
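The export can then be mounted and checked on the slave (a quick sanity check; /home is shared so that every node sees the same user files and MPI binaries):
# service portmap start
# mount -a
# df -h /home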

7) Create authorized keys on master
# ssh-keygen -t rsa
# cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
8) Copy the authorized key to slave node
# scp /root/.ssh/authorized_keys 50.1.1.2:/root/.ssh/authorized_keys
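Passwordless login from the master to the slave can now be verified; the following should print the slave's hostname without asking for a password (50.1.1.2 is the slave's IP as above):
# ssh 50.1.1.2 hostname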
9) Add the MPICH2 path on the master and slave nodes
# vi /etc/profile
export PATH=/opt/mpich2-1.2.1p1/bin:$PATH
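After logging in again (or sourcing the profile), check that the MPICH2 binaries are found on the PATH:
# source /etc/profile
# which mpicc
# which mpdboot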
10) Configure the NIS client on the slave node
# authconfig-tui
# vi /etc/yp.conf
domain NIS-VPS server 50.1.1.1
# vi /etc/sysconfig/network
NISDOMAIN=NIS-VPS
# vi /etc/nsswitch.conf
passwd: files nis
shadow: files nis
group: files nis
# service portmap start
# service ypbind start
# chkconfig ypbind on
# chkconfig portmap on
11) Test NIS access to the NIS server (run on the slave node, once the NIS server in step 12 below has been configured)
# ypcat passwd
# ypmatch nisuser passwd
# getent passwd nisuser
# ssh -l nisuser 50.1.1.1
# service sshd restart
12) Configure the NIS server on the master node
# vi /etc/sysconfig/network
NETWORKING_IPV6=no
HOSTNAME=master.vps.co.in
NETWORKING=yes
NISDOMAIN=NIS-VPS

# vi /etc/yp.conf
ypserver 127.0.0.1
# service portmap start
# service yppasswdd start
# service ypserv start
# chkconfig portmap on
# chkconfig yppasswdd on
# chkconfig ypserv on
# rpcinfo -p localhost
# /usr/lib/yp/ypinit -m
# service ypbind start
# service ypxfrd start
# chkconfig ypbind on
# chkconfig ypxfrd on
# rpcinfo -p localhost

Adding new NIS users (on the master node)
# useradd -g users nisuser
# passwd nisuser
# cd /var/yp
# make
# ypmatch nisuser passwd
# getent passwd nisuser

Executing the program with MPICH2
Log in as a normal user (not root).
Create the file .mpd.conf in the user's home directory:
$ vi .mpd.conf
MPD_SECRETWORD=vps
Change the permissions of the file:
$ chmod 600 .mpd.conf
Create a file containing the names of the nodes (i.e. mpd.hosts):
$ vi mpd.hosts
master
node1

First log in to the cluster as that user, then type
$ mpdboot -n 2 -r ssh -f ~/mpd.hosts
Here "-n 2" means two machines are started, and mpd.hosts is the file listing the node names.
$ mpdtrace
This will list all the machines in the cluster.

To compile an MPI program, type
$ mpicc -o <exename> <filename.c>
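If you do not already have an MPI program to test with, a minimal hello-world along the following lines will do (just a sketch; the file name hello.c and the message text are arbitrary). Each process prints its rank and the total number of processes:
$ vi hello.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down MPI */
    return 0;
}

$ mpicc -o hello hello.c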

To run the program, type
$ mpiexec -n 10 ./<exename>
Or
$ mpirun -np 10 ./<exename>
Here 10 is the number of processes to start across the cluster.
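With the two-node mpd ring started earlier, running for example "mpiexec -n 4 ./hello" with the sketch program above should print one "Hello from rank ..." line per process, with the processes distributed across master and node1.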
