DB2 DPF with multiple nodes

Introduction

There are two major references on setting up the DB2 partitioning feature:

To install DB2 partitioning across multiple nodes, the master node and the participating nodes should share the DB2 binary folders and instance home folders through NFS. The DB2 administration server user (dasusr1) should not share its home folder. In the following example, we assume there are three nodes: node28 (master), node29, and node30. The /home folder is shared across these nodes through NFS. The folder /dpfhome is on a local disk. The DB2 user names must be the same across the three nodes.
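
As a reference only, the NFS sharing of /home can be set up roughly as follows (the export options and the choice of node28 as the NFS server are assumptions; adjust them to your environment):

# On the NFS server exporting /home (node28 in this example), add to /etc/exports:
/home node29.cci.emory.edu(rw,sync,no_root_squash) node30.cci.emory.edu(rw,sync,no_root_squash)
sudo exportfs -ra

# On node29 and node30, mount the shared /home (or add an equivalent /etc/fstab entry):
sudo mount -t nfs node28.cci.emory.edu:/home /home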

Create DB2 groups and users

We use three users, db2inst1, db2fenc1, and dasusr1, on all three nodes (28, 29, 30). With sudo, run the following commands to create the groups and users on each node.

/usr/sbin/groupadd -g 999 db2iadm1
/usr/sbin/groupadd -g 998 db2fadm1
/usr/sbin/groupadd -g 997 dasadm1
/usr/sbin/useradd -u 1004 -g db2iadm1 -m -d /home/db2inst1 db2inst1
/usr/sbin/useradd -u 1003 -g db2fadm1 -m -d /home/db2fenc1 db2fenc1
/usr/sbin/useradd -u 1002 -g dasadm1 -m -d /dpfhome/dasusr1 dasusr1
/usr/bin/passwd db2inst1
/usr/bin/passwd db2fenc1
/usr/bin/passwd dasusr1
/usr/sbin/usermod -a -G dasadm1 db2inst1

The last command is required for our NFS-shared setup.
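
Because /home is shared through NFS, the numeric UIDs and GIDs of these users must match on every node (the groupadd/useradd commands above use fixed IDs for this reason). A quick sanity check is to run the following on each node and compare the output:

id db2inst1
id db2fenc1
id dasusr1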

Run DB2 setup to create response files

Under the InfoSphere Warehouse installation folder isw/Ese, run sudo ./db2setup to generate two response files: db2ese.rsp (for the master node) and db2ese_addpart.rsp (for the slave nodes). During the installation, select generating response files only, and choose to create a multi-partition instance.
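
For reference only, the generated db2ese.rsp contains keyword/value pairs roughly like the following; the exact keywords and values depend on the DB2 version and the options chosen in db2setup, so treat this as an illustrative sketch rather than a file to copy:

PROD = ENTERPRISE_SERVER_EDITION
FILE = /home/db2/V9.7
LIC_AGREEMENT = ACCEPT
INSTANCE = inst1
inst1.NAME = db2inst1
inst1.GROUP_NAME = db2iadm1
inst1.FENCED_USERNAME = db2fenc1
inst1.PORT_NUMBER = 50001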

Install DB2 using response files

Install DB2 on master node

On the master node (node28), run the following command to set up DB2 and the master instance:

sudo ./db2setup -u /home/fwang/db2ese.rsp

After installation, check the permissions of the folder /home/db2/V9.7/. If it is not publicly readable/executable, change the permissions, and point the instance at the bundled JDK:

sudo chmod a+xr -R /home/db2/V9.7/
db2 update dbm cfg using JDK_PATH  /home/db2/V9.7/java/jdk64

Note that instance creation might fail because of the permission problem above. If it fails, drop the instance and recreate it.

List instance: sudo /home/db2/V9.7/instance/db2ilist

Drop instance: sudo /home/db2/V9.7/instance/db2idrop db2inst1

Create instance: sudo /home/db2/V9.7/instance/db2icrt -a server -s ese -u db2fenc1 -p db2c_db2inst1 db2inst1

If the DB2 administration server (DAS) installation fails, run the following commands to recreate it:

sudo /home/db2/V9.7/instance/daslist
sudo /home/db2/V9.7/instance/dasdrop dasusr1
sudo rm -rf /dpfhome/dasusr1
sudo /home/db2/V9.7/instance/dascrt dasusr1

If a DB2 setup fails, clean the installation as follows:

sudo rm -rf /home/db2inst1/sqllib
sudo rm -rf /home/dasusr1/das
sudo rm -rf /home/db2/V9.7/

Also remove the DB2-related entries at the end of the /etc/services file.
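
These are the entries added during instance creation (db2c_db2inst1 and the DB2_db2inst1* FCM ports shown in the next section). A quick way to review them before editing the file:

grep -i db2 /etc/services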

Install DB2 on slave nodes

On node29 and node30, set up DB2 with the slave installation response file:

sudo ./db2setup -u /home/fwang/db2ese_addpart.rsp

After that, in the /etc/services file, add the following entries (copied from the master node):

db2c_db2inst1 50001/tcp
DB2_db2inst1 60002/tcp
DB2_db2inst1_1 60003/tcp
DB2_db2inst1_2 60004/tcp
DB2_db2inst1_END 60005/tcp

Setup physical partitioning information on master node

Create a file db2nodes.cfg in /home/db2inst1/sqllib, with the following entries:

0 node28.cci.emory.edu 0
1 node29.cci.emory.edu 0
2 node30.cci.emory.edu 0
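
Each line of db2nodes.cfg is: partition number, host name, logical port (0 for the first logical partition on a host). As an illustration only, not part of this setup, adding a second logical partition on node30 would look like:

0 node28.cci.emory.edu 0
1 node29.cci.emory.edu 0
2 node30.cci.emory.edu 0
3 node30.cci.emory.edu 1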

Install DB2 Spatial Extender

Setup DB2 Spatial Extender on master node

Install the DB2 Spatial Extender on the master node by running db2setup from the DB2GSE installation path:

sudo ./db2setup

Choose to generate response files only, and reuse the existing instance.

Two response files are generated: db2gse.rsp and db2gse_slave.rsp.

Run the script on master node (node28):

sudo ./db2setup -u db2gse.rsp

Run the script on the slave nodes (node29 and node30):

sudo ./db2setup -u db2gse_slave.rsp

Apply DB2 fixpack

On the master node (node28), under the DB2 universal fixpack installation folder, run:

sudo ./installFixPack

Choose the DB2 installation path /home/db2/V9.7.

There is no need to run this on the slave nodes, since the DB2 binary files are shared through the /home/db2 folder.

Apply DB2 Spatial Extender fixpack

Install the DB2 GSE fixpack on the master node from the DB2GSE fixpack folder, and generate a response file db2gse_fp3a_slave.rsp for the slave nodes as well:

sudo ./setup

On the slave nodes, run:

./db2setup -u ~fwang/db2gse_fp3a_slave.rsp

Update DB2 instance

On the master node, run the following command to update the DB2 instance db2inst1 to the latest fixpack:

sudo /home/db2/V9.7/instance/db2iupdt db2inst1

Note that the above installations may require DB2 to be shut down if it is running, by running db2stop force as the db2inst1 user.

Try to start DB2 (db2start) as db2inst1 to test whether it works.
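
To confirm that the instance picked up the fixpack, the installed level can be checked as db2inst1:

db2level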

During our installation, we got the following error when running db2start:

“SQL5043N Support for one or more communications protocols failed to start successfully. However, core database manager functionality started successfully”

Checking the service names shows that they are empty:

db2 get dbm cfg | grep SVCE
TCP/IP Service name (SVCENAME) =
SSL service name (SSL_SVCENAME) =

This is due to the empty SVCENAME. Run the following command as db2inst1 to update it:

db2 update dbm cfg using SVCENAME db2c_db2inst1

Also set up DB2 communication over TCP/IP:

db2set DB2COMM=TCPIP
db2set DB2_ENABLE_LDAP=no
db2set -all DB2COMM
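
The SVCENAME and DB2COMM changes only take effect after the instance is restarted as db2inst1:

db2stop force
db2start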

Try to start the DB2 administration server as the dasusr1 user by running db2admin start.

If the admin server can't start, recreate it with the commands below. Note: kill any dead DB2 processes before reinstalling; dead processes may still occupy port 553.

sudo /home/db2/V9.7/instance/dasdrop dasusr1
sudo /home/db2/V9.7/instance/dascrt dasusr1

Node communication setup

Master node setup

On node28, run the following commands:

ssh-keygen -t rsa
cd ~/.ssh
mv id_rsa identity
chmod 600 identity
cat id_rsa.pub >> authorized_keys
chmod 644 authorized_keys
rm id_rsa.pub
ssh-keyscan -t rsa node29 node29.cci.emory.edu,170.140.138.148 >> ~/.ssh/known_hosts
ssh-keyscan -t rsa node30 node30.cci.emory.edu,170.140.138.149 >> ~/.ssh/known_hosts

sudo nano /etc/ssh/sshd_config (change the following entry from no to yes):

HostbasedAuthentication yes

sudo nano /etc/ssh/shosts.equiv (add entries):
node29
node29.cci.emory.edu
node30
node30.cci.emory.edu

sudo ssh-keyscan -t rsa node29 node29.cci.emory.edu,170.140.138.148 >> /tmp/ssh_known_hosts
sudo ssh-keyscan -t rsa node30 node30.cci.emory.edu,170.140.138.149 >> /tmp/ssh_known_hosts
sudo cp /tmp/ssh_known_hosts /etc/ssh/ssh_known_hosts

sudo /sbin/service sshd restart

In the db2inst1 home directory, create a .rhosts file with the following entries:

node28 db2inst1
node29 db2inst1
node30 db2inst1

DB2 setup

On the master node, run the following command as the db2inst1 user:

db2set DB2RSHCMD=/usr/bin/ssh

Slave nodes setup

On node29 and node30, run the following commands:

sudo nano /etc/ssh/ssh_config (update/add the following entries):

HostbasedAuthentication yes
EnableSSHKeysign yes

Testing

ssh to the slave nodes from the master node as db2inst1. Log out and try again; no password should be required.
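
As a final check, run a command on every partition as db2inst1 with db2_all, which is shipped with DB2 and reads the db2nodes.cfg created above; each partition should report the corresponding host name:

db2start
db2_all hostname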