|
|
|
## Setup and install the head node Shark with Ubuntu 10.04.4 LTS and Open Grid Scheduler
|
|
|
|
|
|
|
|
|
|
|
|
## Base Ubuntu server Installation
|
|
|
|
Use the Ubuntu 10.04.4 LTS server release for installation.
|
|
|
|
Use a manual network configuration. Tip: if your interface gets an IP address from a DHCP server and you want a static IP address, press the <Go Back> button after the network configuration and then select <Configure network manually>.
|
|
|
|
|
|
|
|
Configure eth0 for external network access:
|
|
|
|
IP: xx.xx.xx.xx
|
|
|
|
Netmask: 255.255.255.0
|
|
|
|
Gateway: xx.xx.xx.1
|
|
|
|
Nameservers: ip1 ip2
|
|
|
|
Hostname: sharktest
|
|
|
|
Domain: lumcnet.prod.intern
|
|
|
|
|
|
|
|
|
|
|
|
Choose to partition the sda disk manually with the following scheme:
|
|
|
|
|
|
|
|
#1 primary 10.0 GB B f ext4 /
|
|
|
|
#2 primary 265 MB f ext2 /boot
|
|
|
|
#3 primary 8.0 GB f swap swap
|
|
|
|
#5 logical 14.0 GB f ext4 /tmp
|
|
|
|
#6 logical 14.0 GB f ext4 /var
|
|
|
|
#7 logical 14.0 GB f ext4 /usr
|
|
|
|
|
|
|
|
Add the first username and password and choose not to encrypt your home directory.
|
|
|
|
When asked "Choose software to install", select:
|
|
|
|
|
|
|
|
[*] LAMP Server
|
|
|
|
[*] OpenSSH server
|
|
|
|
|
|
|
|
Install the GRUB bootloader to the master boot record and reboot.
|
|
|
|
|
|
|
|
## Configure the base system
|
|
|
|
Log into the new system and run:
|
|
|
|
|
|
|
|
sudo apt-get update ; sudo apt-get -y dist-upgrade ; sudo reboot
|
|
|
|
|
|
|
|
|
|
|
|
Set up the second Ethernet card eth1 for cluster access:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/network/interfaces
|
|
|
|
|
|
|
|
Edit the interfaces file to look like this:
|
|
|
|
|
|
|
|
# The loopback network interface
|
|
|
|
auto lo
|
|
|
|
iface lo inet loopback
|
|
|
|
|
|
|
|
# The primary network interface
|
|
|
|
## eth0 will give access to the LUMC network
|
|
|
|
auto eth0
|
|
|
|
iface eth0 inet static
|
|
|
|
address 10.13.17.10
|
|
|
|
netmask 255.255.255.0
|
|
|
|
network 10.13.17.0
|
|
|
|
broadcast 10.13.17.255
|
|
|
|
gateway 10.13.17.1
|
|
|
|
dns-nameservers 10.11.1.12 10.12.1.9
|
|
|
|
dns-search lumcnet.prod.intern
|
|
|
|
|
|
|
|
# The secondary network interface
|
|
|
|
## eth1 will be the internal Shark cluster interface
|
|
|
|
auto eth1
|
|
|
|
iface eth1 inet static
|
|
|
|
address 192.168.62.7
|
|
|
|
netmask 255.255.255.0
|
|
|
|
network 192.168.62.0
|
|
|
|
broadcast 192.168.62.255
|
|
|
|
|
|
|
|
Restart your network service with:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo /etc/init.d/networking restart
|
|
|
|
|
|
|
|
Check your network configuration by pinging an outside host and your internal cluster interface:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
ping -c 1 www.google.com ; ping -c 1 192.168.62.7
|
|
|
|
|
|
|
|
Your result should look like this:
|
|
|
|
|
|
|
|
PING www.l.google.com (173.194.66.105) 56(84) bytes of data.
|
|
|
|
64 bytes from we-in-f105.1e100.net (173.194.66.105): icmp_seq=1 ttl=46 time=8.69 ms
|
|
|
|
|
|
|
|
--- www.l.google.com ping statistics ---
|
|
|
|
1 packets transmitted, 1 received, 0% packet loss, time 0ms
|
|
|
|
rtt min/avg/max/mdev = 8.690/8.690/8.690/0.000 ms
|
|
|
|
PING 192.168.62.7 (192.168.62.7) 56(84) bytes of data.
|
|
|
|
64 bytes from 192.168.62.7: icmp_seq=1 ttl=64 time=0.036 ms
|
|
|
|
|
|
|
|
--- 192.168.62.7 ping statistics ---
|
|
|
|
1 packets transmitted, 1 received, 0% packet loss, time 0ms
|
|
|
|
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
|
|
|
|
|
|
|
|
Edit your hosts file.
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/hosts
|
|
|
|
|
|
|
|
Your hosts file should look like this:
|
|
|
|
|
|
|
|
127.0.0.1 localhost.localdomain localhost
|
|
|
|
10.160.12.60 shark.lumcnet.prod.intern shark
|
|
|
|
192.168.62.7 nurseshark.cluster.loc nurseshark nurse
|
|
|
|
192.168.62.8 angelshark.cluster.loc angelshark angel
|
|
|
|
192.168.62.9 blacktipshark.cluster.loc blacktipshark black
|
|
|
|
192.168.62.10 caribbeanshark.cluster.loc caribbeanshark carib
|
|
|
|
|
|
|
|
|
|
|
|
# The following lines are desirable for IPv6 capable hosts
|
|
|
|
::1 localhost ip6-localhost ip6-loopback
|
|
|
|
fe00::0 ip6-localnet
|
|
|
|
ff00::0 ip6-mcastprefix
|
|
|
|
ff02::1 ip6-allnodes
|
|
|
|
ff02::2 ip6-allrouters
|
|
|
|
|
|
|
|
|
|
|
|
Shark has a second disk, sdb; we are going to use it as the /opt partition and export it via NFS.
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo fdisk /dev/sdb
|
|
|
|
|
|
|
|
Create one primary partition (it will be formatted as ext4 below):
|
|
|
|
|
|
|
|
Command (m for help): n
|
|
|
|
Command action
|
|
|
|
e extended
|
|
|
|
p primary partition (1-4)
|
|
|
|
p
|
|
|
|
Partition number (1-4): 1
|
|
|
|
First cylinder (1-8924, default 1):
|
|
|
|
Using default value 1
|
|
|
|
Last cylinder, +cylinders or +size{K,M,G} (1-8924, default 8924):
|
|
|
|
Using default value 8924
|
|
|
|
|
|
|
|
Command (m for help): w
|
|
|
|
The partition table has been altered!
|
|
|
|
|
|
|
|
Calling ioctl() to re-read partition table.
|
|
|
|
Syncing disks.
|
|
|
|
|
|
|
|
Now format the new partition with ext4.
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo mkfs.ext4 /dev/sdb1
|
|
|
|
|
|
|
|
Mount /dev/sdb1 on the /opt directory:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo mount /dev/sdb1 /opt/
|
|
|
|
|
|
|
|
To make the /opt partition mount on boot, edit your /etc/fstab:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/fstab
|
|
|
|
|
|
|
|
Add this line:
|
|
|
|
|
|
|
|
/dev/sdb1 /opt ext4 defaults 0 2
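A quick way to confirm the new entry works before rebooting is to remount everything from fstab and check the result; a minimal check:

#!sh
sudo mount -a
df -h /opt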
|
|
|
|
|
|
|
|
Create mount points for the Isilon storage:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir -p /share/isilon/system /data/LGTC /data/MolEpi /data/DIV5/HumGen /bam-export
|
|
|
|
|
|
|
|
To auto-mount the Isilon NFS exports on every reboot, edit your fstab file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/fstab
|
|
|
|
|
|
|
|
Add these lines to your fstab file:
|
|
|
|
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/home /home/ nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/system /share/isilon/system/ nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/UCSC-bam /bam-export nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/data/LGTC /data/LGTC nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/data/MolEpi /data/MolEpi nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/data/DIV5/GoNL /data/DIV5/GoNL nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
research.isilon.lumcnet.prod.intern:/ifs/exports/data/DIV5/HumGen /data/DIV5/HumGen nfs rw,hard,intr,rsize=32768,wsize=32768,tcp,vers=3,noatime 0 2
|
|
|
|
|
|
|
|
To mount everything, type:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo mount -a
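To confirm that the Isilon exports are actually mounted, list the active NFS mounts:

#!sh
mount -t nfs
df -h /home /share/isilon/system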
|
|
|
|
|
|
|
|
Set up the resolv.conf file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
vi /etc/resolv.conf
|
|
|
|
|
|
|
|
Change the file to look like this:
|
|
|
|
|
|
|
|
search cluster.loc
|
|
|
|
nameserver 192.168.62.7
|
|
|
|
nameserver 10.11.1.12
|
|
|
|
nameserver 10.12.1.9
|
|
|
|
|
|
|
|
Enable IP forwarding:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/sysctl.conf
|
|
|
|
|
|
|
|
Uncomment the following line to enable packet forwarding for IPv4:
|
|
|
|
|
|
|
|
net.ipv4.ip_forward=1
|
|
|
|
|
|
|
|
Make packet forwarding for IPv4 active for now:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo sysctl -w net.ipv4.ip_forward=1
|
|
|
|
|
|
|
|
Configure NAT with iptables:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
|
|
|
|
|
|
|
|
Save the iptables rules to a file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo sh -c "iptables-save > /etc/iptables.rules"
|
|
|
|
|
|
|
|
Apply the iptables rule on startup:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/rc.local
|
|
|
|
|
|
|
|
Add the following line before the final `exit 0`:
|
|
|
|
|
|
|
|
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
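To verify that forwarding and the NAT rule are active, check the sysctl value and list the POSTROUTING chain; a quick sanity check:

#!sh
sysctl net.ipv4.ip_forward
sudo iptables -t nat -L POSTROUTING -n -v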
|
|
|
|
|
|
|
|
|
|
|
|
### Configure SSH passwordless login
|
|
|
|
This will be done for the root user; switch to root:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo su -
|
|
|
|
mkdir /root/.ssh
|
|
|
|
ssh-keygen -q -P "" -t rsa -b 2048 -f /root/.ssh/id_rsa
|
|
|
|
cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
|
|
|
|
chmod 700 /root/.ssh
|
|
|
|
chmod 600 /root/.ssh/*
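A quick test that the passwordless setup works — this should print OK without asking for a password (the options auto-accept the host key on first connect):

#!sh
ssh -o BatchMode=yes -o StrictHostKeyChecking=no root@localhost true && echo "passwordless root login OK"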
|
|
|
|
|
|
|
|
|
|
|
|
## Install Services
|
|
|
|
### Install Name Service
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install -y bind9
|
|
|
|
|
|
|
|
Edit the named.conf.options file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/bind/named.conf.options
|
|
|
|
|
|
|
|
Make sure the file looks like this:
|
|
|
|
|
|
|
|
options {
|
|
|
|
directory "/var/cache/bind";
|
|
|
|
|
|
|
|
|
|
|
|
auth-nxdomain no; # conform to RFC1035
|
|
|
|
listen-on-v6 { any; };
|
|
|
|
|
|
|
|
# Added - Vill
|
|
|
|
version none;
|
|
|
|
allow-query { 10.13.17.10; 192.168.62.0/24; };
|
|
|
|
allow-transfer { none; };
|
|
|
|
|
|
|
|
forwarders {
|
|
|
|
# Replace the address below with the address of your provider’s DNS server
|
|
|
|
10.11.1.12;
|
|
|
|
10.12.1.9;
|
|
|
|
};
|
|
|
|
};
|
|
|
|
|
|
|
|
Edit the named.conf.local file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/bind/named.conf.local
|
|
|
|
|
|
|
|
To look like this:
|
|
|
|
|
|
|
|
zone "cluster.loc" {
|
|
|
|
type master;
|
|
|
|
file "/etc/bind/db.cluster.loc";
|
|
|
|
};
|
|
|
|
|
|
|
|
zone "62.168.192.in-addr.arpa" {
|
|
|
|
type master;
|
|
|
|
file "/etc/bind/db.62.168.192";
|
|
|
|
};
|
|
|
|
|
|
|
|
### Configure the Forward DNS Records
|
|
|
|
Create the file db.cluster.loc:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/bind/db.cluster.loc
|
|
|
|
|
|
|
|
Edit the file to look like this:
|
|
|
|
|
|
|
|
$TTL 24h
|
|
|
|
|
|
|
|
cluster.loc. IN SOA nurseshark.cluster.loc. root.cluster.loc. (
|
|
|
|
2007062800 ; serial number
|
|
|
|
3h ; refresh time
|
|
|
|
30m ; retry time
|
|
|
|
7d ; expire time
|
|
|
|
3h ; negative caching ttl
|
|
|
|
)
|
|
|
|
|
|
|
|
; Nameservers
|
|
|
|
cluster.loc. IN NS nurseshark.cluster.loc.
|
|
|
|
|
|
|
|
; Hosts
|
|
|
|
nurseshark.cluster.loc. IN A 192.168.62.7
|
|
|
|
angelshark.cluster.loc. IN A 192.168.62.8
|
|
|
|
blacktipshark.cluster.loc. IN A 192.168.62.9
|
|
|
|
caribbeanshark.cluster.loc. IN A 192.168.62.10
|
|
|
|
dogfishshark.cluster.loc. IN A 192.168.62.11
|
|
|
|
greatwhiteshark.cluster.loc. IN A 192.168.62.12
|
|
|
|
hammerheadshark.cluster.loc. IN A 192.168.62.13
|
|
|
|
lemonshark.cluster.loc. IN A 192.168.62.14
|
|
|
|
megamouthshark.cluster.loc. IN A 192.168.62.15
|
|
|
|
tigershark.cluster.loc. IN A 192.168.62.16
|
|
|
|
whaleshark.cluster.loc. IN A 192.168.62.17
|
|
|
|
baskingshark.cluster.loc. IN A 192.168.62.18
|
|
|
|
makoshark.cluster.loc. IN A 192.168.62.19
|
|
|
|
wobbegongshark.cluster.loc. IN A 192.168.62.24
|
|
|
|
epauletteshark.cluster.loc. IN A 192.168.62.25
|
|
|
|
frilledshark.cluster.loc. IN A 192.168.62.26
|
|
|
|
threshershark.cluster.loc. IN A 192.168.62.27
|
|
|
|
kitefinshark.cluster.loc. IN A 192.168.62.28
|
|
|
|
nightshark.cluster.loc. IN A 192.168.62.29
|
|
|
|
pygmeshark.cluster.loc. IN A 192.168.62.30
|
|
|
|
zebrashark.cluster.loc. IN A 192.168.62.31
|
|
|
|
goblinshark.cluster.loc. IN A 192.168.62.32
|
|
|
|
sawshark.cluster.loc. IN A 192.168.62.33
|
|
|
|
greenlandshark.cluster.loc. IN A 192.168.62.34
|
|
|
|
whorltoothshark.cluster.loc. IN A 192.168.62.35
|
|
|
|
camouflageshark.cluster.loc. IN A 192.168.62.36
|
|
|
|
megalodonshark.cluster.loc. IN A 192.168.62.37
|
|
|
|
|
|
|
|
|
|
|
|
### Configure the Reverse DNS Records
|
|
|
|
Create the file db.62.168.192
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/bind/db.62.168.192
|
|
|
|
|
|
|
|
Edit the file to look like this:
|
|
|
|
|
|
|
|
$TTL 24h
|
|
|
|
|
|
|
|
62.168.192.in-addr.arpa. IN SOA nurseshark.cluster.loc. root.cluster.loc. (
|
|
|
|
2007062800 ; serial number
|
|
|
|
3h ; refresh time
|
|
|
|
30m ; retry time
|
|
|
|
7d ; expire time
|
|
|
|
3h ; negative caching ttl
|
|
|
|
)
|
|
|
|
|
|
|
|
; Nameservers
|
|
|
|
62.168.192.in-addr.arpa. IN NS nurseshark.cluster.loc.
|
|
|
|
|
|
|
|
; Hosts
|
|
|
|
10.17.13.10.in-addr.arpa. IN PTR shark.lumcnet.prod.intern.
|
|
|
|
7.62.168.192.in-addr.arpa. IN PTR nurseshark.cluster.loc.
|
|
|
|
8.62.168.192.in-addr.arpa. IN PTR angelshark.cluster.loc.
|
|
|
|
9.62.168.192.in-addr.arpa. IN PTR blacktipshark.cluster.loc.
|
|
|
|
10.62.168.192.in-addr.arpa. IN PTR caribbeanshark.cluster.loc.
|
|
|
|
11.62.168.192.in-addr.arpa. IN PTR dogfishshark.cluster.loc.
|
|
|
|
12.62.168.192.in-addr.arpa. IN PTR greatwhiteshark.cluster.loc.
|
|
|
|
13.62.168.192.in-addr.arpa. IN PTR hammerheadshark.cluster.loc.
|
|
|
|
14.62.168.192.in-addr.arpa. IN PTR lemonshark.cluster.loc.
|
|
|
|
15.62.168.192.in-addr.arpa. IN PTR megamouthshark.cluster.loc.
|
|
|
|
16.62.168.192.in-addr.arpa. IN PTR tigershark.cluster.loc.
|
|
|
|
17.62.168.192.in-addr.arpa. IN PTR whaleshark.cluster.loc.
|
|
|
|
18.62.168.192.in-addr.arpa. IN PTR baskingshark.cluster.loc.
|
|
|
|
19.62.168.192.in-addr.arpa. IN PTR makoshark.cluster.loc.
|
|
|
|
24.62.168.192.in-addr.arpa. IN PTR wobbegongshark.cluster.loc.
|
|
|
|
25.62.168.192.in-addr.arpa. IN PTR epauletteshark.cluster.loc.
|
|
|
|
26.62.168.192.in-addr.arpa. IN PTR frilledshark.cluster.loc.
|
|
|
|
27.62.168.192.in-addr.arpa. IN PTR threshershark.cluster.loc.
|
|
|
|
28.62.168.192.in-addr.arpa. IN PTR kitefinshark.cluster.loc.
|
|
|
|
29.62.168.192.in-addr.arpa. IN PTR nightshark.cluster.loc.
|
|
|
|
30.62.168.192.in-addr.arpa. IN PTR pygmeshark.cluster.loc.
|
|
|
|
31.62.168.192.in-addr.arpa. IN PTR zebrashark.cluster.loc.
|
|
|
|
32.62.168.192.in-addr.arpa. IN PTR goblinshark.cluster.loc.
|
|
|
|
33.62.168.192.in-addr.arpa. IN PTR sawshark.cluster.loc.
|
|
|
|
34.62.168.192.in-addr.arpa. IN PTR greenlandshark.cluster.loc.
|
|
|
|
35.62.168.192.in-addr.arpa. IN PTR whorltoothshark.cluster.loc.
|
|
|
|
36.62.168.192.in-addr.arpa. IN PTR camouflageshark.cluster.loc.
|
|
|
|
37.62.168.192.in-addr.arpa. IN PTR megalodonshark.cluster.loc.
|
|
|
|
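Before restarting BIND it is worth validating the configuration and both zone files with the checking tools that ship with the bind9 packages:

#!sh
sudo named-checkconf
sudo named-checkzone cluster.loc /etc/bind/db.cluster.loc
sudo named-checkzone 62.168.192.in-addr.arpa /etc/bind/db.62.168.192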
|
|
|
|
|
|
|
|
Restart BIND:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo /etc/init.d/bind9 restart
|
|
|
|
|
|
|
|
If BIND does not restart, troubleshoot with the following command:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo /usr/sbin/named -g
|
|
|
|
|
|
|
|
Read the output and act on it.
|
|
|
|
If everything starts up, it's time to test.
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
host nurseshark
|
|
|
|
|
|
|
|
This should give the following result:
|
|
|
|
|
|
|
|
nurseshark.cluster.loc has address 192.168.62.7
|
|
|
|
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
host 192.168.62.7
|
|
|
|
|
|
|
|
returns:
|
|
|
|
|
|
|
|
7.62.168.192.in-addr.arpa domain name pointer nurseshark.cluster.loc.
|
|
|
|
|
|
|
|
|
|
|
|
### Install DHCP
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install dhcp3-server
|
|
|
|
|
|
|
|
Edit the dhcpd.conf file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/dhcp3/dhcpd.conf
|
|
|
|
|
|
|
|
Make sure the file looks like this. The MAC addresses must match the MAC addresses of the blade servers!

The file below lists the current MAC addresses, but these can change!
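If you need to look up a node's current MAC address, the simplest way is to read it from sysfs on the node itself (run on the node; assumes eth0 is the cluster-facing interface):

#!sh
cat /sys/class/net/eth0/address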
|
|
|
|
|
|
|
|
default-lease-time 900;
|
|
|
|
max-lease-time 900;
|
|
|
|
option subnet-mask 255.255.255.0;
|
|
|
|
option domain-name-servers 192.168.62.7,10.11.1.12,10.12.1.9;
|
|
|
|
option domain-name "cluster.loc";
|
|
|
|
ddns-update-style none;
|
|
|
|
server-name shark;
|
|
|
|
allow booting;
|
|
|
|
allow bootp;
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
subnet 192.168.62.0 netmask 255.255.255.0 {
|
|
|
|
option subnet-mask 255.255.255.0;
|
|
|
|
option routers 192.168.62.7;
|
|
|
|
filename "pxelinux.0";
|
|
|
|
next-server 192.168.62.7;
|
|
|
|
|
|
|
|
host angelshark {
|
|
|
|
hardware ethernet 00:22:19:BC:8B:BF;
|
|
|
|
fixed-address 192.168.62.8;
|
|
|
|
# option host-name "angelshark";
|
|
|
|
}
|
|
|
|
host blacktipshark {
|
|
|
|
hardware ethernet 00:22:19:BC:8B:C4;
|
|
|
|
fixed-address 192.168.62.9;
|
|
|
|
# option host-name "blacktipshark";
|
|
|
|
}
|
|
|
|
host caribbeanshark {
|
|
|
|
hardware ethernet 00:22:19:C8:E0:A9;
|
|
|
|
fixed-address 192.168.62.10;
|
|
|
|
# option host-name "caribbeanshark";
|
|
|
|
}
|
|
|
|
host dogfishshark {
|
|
|
|
hardware ethernet 00:26:b9:fd:1e:a4;
|
|
|
|
fixed-address 192.168.62.11;
|
|
|
|
# option host-name "dogfishshark";
|
|
|
|
}
|
|
|
|
host greatwhiteshark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1E:88;
|
|
|
|
fixed-address 192.168.62.12;
|
|
|
|
# option host-name "greatwhiteshark";
|
|
|
|
}
|
|
|
|
host hammerheadshark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1E:84;
|
|
|
|
fixed-address 192.168.62.13;
|
|
|
|
# option host-name "hammerheadshark";
|
|
|
|
}
|
|
|
|
host lemonshark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1E:D8;
|
|
|
|
fixed-address 192.168.62.14;
|
|
|
|
# option host-name "lemonshark";
|
|
|
|
}
|
|
|
|
host megamouthshark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1D:6C;
|
|
|
|
fixed-address 192.168.62.15;
|
|
|
|
# option host-name "megamouthshark";
|
|
|
|
}
|
|
|
|
host tigershark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1E:74;
|
|
|
|
fixed-address 192.168.62.16;
|
|
|
|
# option host-name "tigershark";
|
|
|
|
}
|
|
|
|
host whaleshark {
|
|
|
|
hardware ethernet 00:26:B9:FD:1E:94;
|
|
|
|
fixed-address 192.168.62.17;
|
|
|
|
# option host-name "whaleshark";
|
|
|
|
}
|
|
|
|
host baskingshark {
|
|
|
|
hardware ethernet B8:AC:6F:11:12:D8;
|
|
|
|
fixed-address 192.168.62.18;
|
|
|
|
# option host-name "baskingshark";
|
|
|
|
}
|
|
|
|
host makoshark {
|
|
|
|
hardware ethernet 18:03:73:0A:79:BD;
|
|
|
|
fixed-address 192.168.62.19;
|
|
|
|
# option host-name "makoshark";
|
|
|
|
}
|
|
|
|
host wobbegongshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:62;
|
|
|
|
fixed-address 192.168.62.24;
|
|
|
|
# option host-name "wobbegongshark";
|
|
|
|
}
|
|
|
|
host epauletteshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:6F;
|
|
|
|
fixed-address 192.168.62.25;
|
|
|
|
# option host-name "epauletteshark";
|
|
|
|
}
|
|
|
|
host frilledshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:7C;
|
|
|
|
fixed-address 192.168.62.26;
|
|
|
|
# option host-name "frilledshark";
|
|
|
|
}
|
|
|
|
host threshershark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:89;
|
|
|
|
fixed-address 192.168.62.27;
|
|
|
|
# option host-name "threshershark";
|
|
|
|
}
|
|
|
|
host kitefinshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:96;
|
|
|
|
fixed-address 192.168.62.28;
|
|
|
|
# option host-name "kitefinshark";
|
|
|
|
}
|
|
|
|
host nightshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:A3;
|
|
|
|
fixed-address 192.168.62.29;
|
|
|
|
# option host-name "nightshark";
|
|
|
|
}
|
|
|
|
|
|
|
|
host pygmeshark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:B0;
|
|
|
|
fixed-address 192.168.62.30;
|
|
|
|
# option host-name "pygmeshark";
|
|
|
|
}
|
|
|
|
host zebrashark {
|
|
|
|
hardware ethernet 00:24:E8:6F:26:BD;
|
|
|
|
fixed-address 192.168.62.31;
|
|
|
|
# option host-name "zebrashark";
|
|
|
|
}
|
|
|
|
host goblinshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:EC:40;
|
|
|
|
fixed-address 192.168.62.32;
|
|
|
|
# option host-name "goblinshark";
|
|
|
|
}
|
|
|
|
host sawshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:F6:70;
|
|
|
|
fixed-address 192.168.62.33;
|
|
|
|
# option host-name "sawshark";
|
|
|
|
}
|
|
|
|
host greenlandshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:F3:10;
|
|
|
|
fixed-address 192.168.62.34;
|
|
|
|
# option host-name "greenlanshark";
|
|
|
|
}
|
|
|
|
host whorltoothshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:F7:60;
|
|
|
|
fixed-address 192.168.62.35;
|
|
|
|
# option host-name "whorltoothshark";
|
|
|
|
}
|
|
|
|
host camouflageshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:E1:18;
|
|
|
|
fixed-address 192.168.62.36;
|
|
|
|
# option host-name "camouflageshark";
|
|
|
|
}
|
|
|
|
host megalodonshark {
|
|
|
|
hardware ethernet 24:B6:FD:F4:DD:A8;
|
|
|
|
fixed-address 192.168.62.37;
|
|
|
|
# option host-name "megalodonshark";
|
|
|
|
}
|
|
|
|
}
|
|
|
|
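Before restarting, the configuration can be syntax-checked; on Ubuntu 10.04 the dhcp3-server package installs the daemon as dhcpd3, which has a test mode:

#!sh
sudo dhcpd3 -t -cf /etc/dhcp3/dhcpd.conf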
|
|
|
|
Restart your DHCP server:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo /etc/init.d/dhcp3-server restart
|
|
|
|
|
|
|
|
|
|
|
|
### Install NFS server
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install -y nfs-common nfs-kernel-server
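The /opt partition created earlier is meant to be exported over NFS to the nodes. A minimal sketch of that export, assuming the 192.168.62.0/24 cluster network (the exact export options here are illustrative, not prescribed by this guide):

#!sh
# append an export line for /opt and activate it
echo "/opt 192.168.62.0/24(rw,no_subtree_check,async)" | sudo tee -a /etc/exports
sudo exportfs -ra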
|
|
|
|
|
|
|
|
|
|
|
|
### Install PXE boot
|
|
|
|
Install the following packages:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install -y xinetd tftpd-hpa
|
|
|
|
|
|
|
|
Edit the file /etc/default/tftpd-hpa:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/default/tftpd-hpa
|
|
|
|
|
|
|
|
To look like this:
|
|
|
|
|
|
|
|
# /etc/default/tftpd-hpa
|
|
|
|
RUN_DAEMON="yes"
|
|
|
|
OPTIONS="-l -s /tftpboot"
|
|
|
|
TFTP_USERNAME="tftp"
|
|
|
|
TFTP_DIRECTORY="/tftpboot"
|
|
|
|
TFTP_ADDRESS="0.0.0.0:69"
|
|
|
|
|
|
|
|
Edit the file /etc/inetd.conf:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/inetd.conf
|
|
|
|
|
|
|
|
add the line :
|
|
|
|
|
|
|
|
tftp dgram udp4 wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /tftpboot
|
|
|
|
|
|
|
|
Now we need to create the tftpboot directory:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo mkdir /tftpboot
|
|
|
|
|
|
|
|
We need to get the Ubuntu 12.04 netboot image from http://cdimage.ubuntu.com/netboot/ and untar it:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo wget http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/netboot.tar.gz -P /tftpboot
|
|
|
|
cd /tftpboot
|
|
|
|
sudo tar xvfz netboot.tar.gz
|
|
|
|
|
|
|
|
Edit the boot menu config file txt.cfg:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg
|
|
|
|
|
|
|
|
Add the following lines to this file:
|
|
|
|
|
|
|
|
LABEL Shark execute Node Precise server auto install
|
|
|
|
menu default
|
|
|
|
kernel ubuntu-installer/amd64/linux
|
|
|
|
append ramdisk_size=14984 locale=en_US console-setup/ask_detect=false keyboard-configuration/layoutcode=us netcfg/wireless_wep= netcfg/choose_interface=eth0 netcfg/get_hostname= preseed/url=http://shark.lumcnet.prod.intern/preseed.cfg vga=normal initrd=ubuntu-installer/amd64/initrd.gz --
|
|
|
|
|
|
|
|
Edit the file /tftpboot/ubuntu-installer/amd64/pxelinux.cfg/default:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /tftpboot/ubuntu-installer/amd64/pxelinux.cfg/default
|
|
|
|
|
|
|
|
Change the timeout to 10:
|
|
|
|
|
|
|
|
# D-I config version 2.0
|
|
|
|
include ubuntu-installer/amd64/boot-screens/menu.cfg
|
|
|
|
default ubuntu-installer/amd64/boot-screens/vesamenu.c32
|
|
|
|
prompt 0
|
|
|
|
timeout 10
|
|
|
|
|
|
|
|
Place the preseed file in your www root directory.
|
|
|
|
Restart apache, DHCP, and tftpd:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo service apache2 restart ; sudo service dhcp3-server restart ; sudo service tftpd-hpa restart
|
|
|
|
|
|
|
|
Create the directory node-installs in your www root and place the postinstallscript.sh file there:
|
|
|
|
|
|
|
|
sudo mkdir /var/www/node-installs
|
|
|
|
|
|
|
|
Get the nodeinstaller.tar.gz file and untar it in your /home directory:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo wget https://humgenprojects.lumc.nl/trac/shark/raw-attachment/wiki/Shark_setup/nodeinstaller.tar.gz -P /home
|
|
|
|
cd /home
|
|
|
|
sudo tar xvfz nodeinstaller.tar.gz
|
|
|
|
|
|
|
|
Create a tgz file from root's .ssh directory:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
cd /root
|
|
|
|
tar cvfz /share/isilon/system/backup/ssh.tgz .ssh/
|
|
|
|
|
|
|
|
Add the tftp service to your xinetd config:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/xinetd.d/tftp
|
|
|
|
|
|
|
|
Add the following:
|
|
|
|
|
|
|
|
# TFTP configuration
|
|
|
|
service tftp
|
|
|
|
{
|
|
|
|
socket_type = dgram
|
|
|
|
protocol = udp
|
|
|
|
port = 69
|
|
|
|
wait = yes
|
|
|
|
user = root
|
|
|
|
server = /usr/sbin/in.tftpd
|
|
|
|
server_args = -s /tftpboot
|
|
|
|
disable = no
|
|
|
|
}
|
|
|
|
|
|
|
|
Restart xinetd service:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo service xinetd restart
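To check that the TFTP server actually serves the boot loader, fetch pxelinux.0 with a TFTP client (this assumes the tftp-hpa client package is installed):

#!sh
cd /tmp
tftp 192.168.62.7 -c get pxelinux.0
ls -l pxelinux.0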
|
|
|
|
|
|
|
|
|
|
|
|
### Install postfix mailer
|
|
|
|
To install the postfix mailer:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install postfix
|
|
|
|
Configuration type: Internet Site
|
|
|
|
System mail name: `hostname`
|
|
|
|
Root and postmaster mail recipient: xxxx
|
|
|
|
Other destinations to accept mail for: shark.lumcnet.prod.intern, localhost.localdomain,localhost
|
|
|
|
Force synchronous on mail queue: Yes
|
|
|
|
Local networks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
|
|
|
|
Mailbox size limit: 0
|
|
|
|
Local address extension character: +
|
|
|
|
Internet protocols to use: all
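To verify that postfix delivers mail, send a test message; this assumes the mailutils package (for the mail command), which is not installed by default:

#!sh
echo "postfix test from shark" | mail -s "postfix test" root
sudo tail -n 20 /var/log/mail.log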
|
|
|
|
|
|
|
|
### Install FTP server
|
|
|
|
To install the FTP server:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get install vsftpd
|
|
|
|
|
|
|
|
Add the FTP user "nodeinstaller" that is referenced in the postinstall.sh script:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
adduser nodeinstaller
|
|
|
|
|
|
|
|
Make sure the home directory is owned by the nodeinstaller user:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
chown -R nodeinstaller:nodeinstaller /home/nodeinstaller/
|
|
|
|
|
|
|
|
|
|
|
|
### Install Open Grid Scheduler
|
|
|
|
Get compile dependencies:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo apt-get build-dep gridengine-common gridengine-client gridengine-exec gridengine-master gridengine-qmon
|
|
|
|
sudo apt-get install -y pvm-dev csh libpam0g-dev libxt-dev libmotif-dev x11proto-fixes-dev x11proto-randr-dev x11proto-xinerama-dev libxft-dev libxp-dev libxp6
|
|
|
|
sudo apt-get install -y xfs xfstt gsfonts gsfonts-x11 texlive-fonts-extra texlive-fonts-recommended xfonts-scalable cm-super cmap-adobe-cns1 cmap-adobe-gb1 dvi2ps-fontdata-three t1-cyrillic tex-gyre ttf-adf-ikarius libfreetype6 ttf-freefont ttf-uralic xfonts-mplus fgfs-base t1-xfree86-nonfree ttf-xfree86-nonfree ttf-xfree86-nonfree-syriac ttf-mscorefonts-installer dbus-x11 gnuplot-x11 gsfonts-x11 libx11-6 libx11-data libx11-dev libx11-protocol-perl x11-common x11-utils x11-xserver-utils x11proto-core-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-print-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xfonts-base xfonts-100dpi xfonts-75dpi xfonts-100dpi-transcoded xfonts-75dpi-transcoded gsfonts-x11 lmodern texlive-fonts-recommended texlive-font-utils defoma fontconfig ttmkfdir cabextract ttmkfdir libmotif-dev libmotif3
|
|
|
|
|
|
|
|
Create the OpenGridScheduler directory and change into it:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo mkdir /usr/local/OpenGridScheduler
|
|
|
|
cd /usr/local/OpenGridScheduler
|
|
|
|
|
|
|
|
Get the Open Grid Scheduler source, extract it, and compile (see http://gridscheduler.sourceforge.net/CompileGridEngineSource.html):
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
wget http://downloads.sourceforge.net/project/gridscheduler/GE2011.11/GE2011.11.tar.gz
|
|
|
|
tar xvfz GE2011.11.tar.gz
|
|
|
|
cd GE2011.11/source
|
|
|
|
./aimk -no-java -no-jni -no-secure -spool-classic -no-dump -only-depend
|
|
|
|
./scripts/zerodepend
|
|
|
|
./aimk -no-java -no-jni -no-secure -spool-classic -no-dump depend
|
|
|
|
./aimk -no-java -no-jni -no-secure -spool-classic -no-dump
|
|
|
|
|
|
|
|
|
|
|
|
Install the grid engine:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir /usr/local/OpenGridScheduler/gridengine-GE2011.11
|
|
|
|
ln -s /usr/local/OpenGridScheduler/gridengine-GE2011.11 /usr/local/OpenGridScheduler/gridengine
|
|
|
|
export SGE_ROOT=/usr/local/OpenGridScheduler/gridengine
|
|
|
|
/usr/local/OpenGridScheduler/GE2011.11/source/scripts/distinst -all -local -noexit
|
|
|
|
cd $SGE_ROOT
|
|
|
|
./install_qmaster
|
|
|
|
hit <RETURN>
|
|
|
|
under an user id other than >root< >> No
|
|
|
|
hit <RETURN>
|
|
|
|
hit <RETURN>
|
|
|
|
(default: 2) >> 2
|
|
|
|
hit <RETURN>
|
|
|
|
(default: 2) >> 2
|
|
|
|
hit <RETURN>
|
|
|
|
Enter cell name [default] >> <RETURN>
|
|
|
|
Hit <RETURN> to continue >>
|
|
|
|
Using cell >default<.
|
|
|
|
Hit <RETURN> to continue >>
|
|
|
|
to use default [p6444] >>
|
|
|
|
Enter a qmaster spool directory [/usr/local/OpenGridScheduler/gridengine/default/spool/qmaster] >>
|
|
|
|
Are you going to install Windows Execution Hosts? (y/n) [n] >> n
|
|
|
|
and set the file permissions of your distribution (enter: y) (y/n) [y] >> y
|
|
|
|
We do not verify file permissions. Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Are all hosts of your cluster in a single DNS domain (y/n) [y] >> y
|
|
|
|
Ignoring domain name when comparing hostnames. Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Do you want to enable the JMX MBean server (y/n) [n] >> n
|
|
|
|
Making directories, Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Setup spooling,Hit <RETURN> to continue >> <RETURN>
|
|
|
|
You can change at any time the group id range in your cluster configuration. Please enter a range [20000-20100] >> <RETURN>
|
|
|
|
Using >20000-20100< as gid range. Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Default: [/usr/local/OpenGridScheduler/gridengine/default/spool] >> <RETURN>
|
|
|
|
Please enter an email address in the form >user@foo.com<. Default: [none] >> <RETURN>
|
|
|
|
Do you want to change the configuration parameters (y/n) [n] >> n
|
|
|
|
Creating local configuration
|
|
|
|
----------------------------
|
|
|
|
Creating >act_qmaster< file
|
|
|
|
Adding default complex attributes
|
|
|
|
Adding default parallel environments (PE)
|
|
|
|
Adding SGE default usersets
|
|
|
|
Adding >sge_aliases< path aliases file
|
|
|
|
Adding >qtask< qtcsh sample default request file
|
|
|
|
Adding >sge_request< default submit options file
|
|
|
|
Creating >sgemaster< script
|
|
|
|
Creating >sgeexecd< script
|
|
|
|
Creating settings files for >.profile/.cshrc<
|
|
|
|
Hit <RETURN> to continue >> <RETURN>
|
|
|
|
We can install the startup script that will start qmaster at machine boot (y/n) [y] >> y
|
|
|
|
Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Starting qmaster daemon. Please wait ... starting sge_qmaster, Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Do you want to use a file which contains the list of hosts (y/n) [n] >> n
|
|
|
|
Adding admin and submit hosts, Host(s): sharktest
|
|
|
|
Host(s): Finished adding hosts. Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Do you want to add your shadow host(s) now? (y/n) [y] >> n
|
|
|
|
Hit <RETURN> to continue >> <RETURN>
|
|
|
|
Configurations , 1) Normal, Default configuration is [1] >> 1
|
|
|
|
We're configuring the scheduler with >Normal< settings!, Do you agree? (y/n) [y] >> y
|
|
|
|
|
|
|
|
You should now enter the command:
|
|
|
|
|
|
|
|
source /usr/local/OpenGridScheduler/gridengine/default/common/settings.csh
|
|
|
|
|
|
|
|
if you are a csh/tcsh user or
|
|
|
|
|
|
|
|
# . /usr/local/OpenGridScheduler/gridengine/default/common/settings.sh
|
|
|
|
|
|
|
|
if you are a sh/ksh user.
|
|
|
|
|
|
|
|
This will set or expand the following environment variables:
|
|
|
|
|
|
|
|
- $SGE_ROOT (always necessary)
|
|
|
|
- $SGE_CELL (if you are using a cell other than >default<)
|
|
|
|
- $SGE_CLUSTER_NAME (always necessary)
|
|
|
|
- $SGE_QMASTER_PORT (if you haven't added the service >sge_qmaster<)
|
|
|
|
- $SGE_EXECD_PORT (if you haven't added the service >sge_execd<)
|
|
|
|
- $PATH/$path (to find the Grid Engine binaries)
|
|
|
|
- $MANPATH (to access the manual pages)
|
|
|
|
|
|
|
|
Hit <RETURN> to see where Grid Engine logs messages >> <RETURN>
|
|
|
|
Do you want to see previous screen about using Grid Engine again (y/n) [n] >> n
|
|
|
|
Please hit <RETURN> >> <RETURN>
|
|
|
|
|
|
|
|
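After sourcing the settings file, a few quick commands confirm that the qmaster is running and that the host lists match what the installer created:

#!sh
. /usr/local/OpenGridScheduler/gridengine/default/common/settings.sh
qhost       # execution hosts known to the qmaster
qconf -sh   # admin hosts
qconf -ss   # submit hosts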
|
|
|
|
Tight Integration of the MPICH2 library into SGE.
|
|
|
|
Remove libopenmpi1.3 openmpi-common:
|
|
|
|
|
|
|
|
for i in `qconf -sel` ; do ssh $i apt-get purge -y libopenmpi1.3 openmpi-common ; done
|
|
|
|
|
|
|
|
Install the MPICH2 library on all execution nodes:
|
|
|
|
|
|
|
|
for i in `qconf -sel` ; do ssh $i apt-get install -y libmpich2-3 mpich2 mpich2-doc ; done
|
|
|
|
|
|
|
|
Add a new parallel environment called mpich2:
|
|
|
|
|
|
|
|
qconf -ap mpich2
|
|
|
|
|
|
|
|
Make sure it looks like this:
|
|
|
|
|
|
|
|
pe_name mpich2
|
|
|
|
slots 122
|
|
|
|
user_lists NONE
|
|
|
|
xuser_lists NONE
|
|
|
|
start_proc_args NONE
|
|
|
|
stop_proc_args NONE
|
|
|
|
allocation_rule $round_robin
|
|
|
|
control_slaves TRUE
|
|
|
|
job_is_first_task FALSE
|
|
|
|
urgency_slots min
|
|
|
|
accounting_summary FALSE
|
|
|
|
|
|
|
|
Add the new parallel environment to the queue you want mpich2 for.
|
|
|
|
|
|
|
|
qconf -mq para.q
|
|
|
|
|
|
|
|
These are the things to change:
|
|
|
|
|
|
|
|
qname para.q
|
|
|
|
pe_list mpich2
|
|
|
|
slots 1,[koala.cluster.loc=8], \
|
|
|
|
[kiribati.cluster.loc=8]
|
|
|
|
shell /bin/bash
|
|
|
|
shell_start_mode unix_behavior
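To check the tight integration, submit a small test job through the new parallel environment; a minimal sketch (the job script name and slot count are just examples):

#!sh
cat > mpich2-test.sh <<'EOF'
#!/bin/bash
# $NSLOTS is set by SGE to the number of granted slots
mpiexec -n $NSLOTS hostname
EOF
qsub -pe mpich2 4 -q para.q -cwd mpich2-test.sh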
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Install Nagios monitoring Software
|
|
|
|
|
|
|
|
### Install Ganglia monitoring Software
|
|
|
|
Create the directories /usr/local/ganglia/ganglia-backup/etc and /usr/local/ganglia/ganglia-backup/init.d:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir -p /usr/local/ganglia/ganglia-backup/etc /usr/local/ganglia/ganglia-backup/init.d
|
|
|
|
|
|
|
|
Get the Ganglia Monitoring Core, version 3.3.7:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
wget http://downloads.sourceforge.net/project/ganglia/ganglia%20monitoring%20core/3.3.7/ganglia-3.3.7.tar.gz -P /usr/local/ganglia/
|
|
|
|
|
|
|
|
Untar the source, get the build dependencies, then configure and compile Ganglia:
|
|
|
|
|
|
|
|
sudo apt-get build-dep ganglia-monitor ganglia-webfrontend gmetad
|
|
|
|
sudo apt-get install libpcre3 libpcre3-dev rrdtool
|
|
|
|
tar xvzf ganglia-3.3.7.tar.gz
|
|
|
|
cd ganglia-3.3.7
|
|
|
|
./configure --enable-gexec --with-gmetad --sysconfdir=/etc/ganglia --prefix=/usr/
|
|
|
|
make
|
|
|
|
make install
|
|
|
|
mkdir /var/lib/ganglia/rrds
|
|
|
|
chown nobody /var/lib/ganglia/rrds
|
|
|
|
|
|
|
|
Create the /etc/init.d/ganglia-monitor start script and make it executable:
|
|
|
|
|
|
|
|
vi /etc/init.d/ganglia-monitor
|
|
|
|
chmod a+x /etc/init.d/ganglia-monitor
|
|
|
|
|
|
|
|
Edit the script to look like this:
|
|
|
|
|
|
|
|
#! /bin/sh
|
|
|
|
### BEGIN INIT INFO
|
|
|
|
# Provides: ganglia-monitor
|
|
|
|
# Required-Start: $network $named $remote_fs $syslog
|
|
|
|
# Required-Stop: $network $named $remote_fs $syslog
|
|
|
|
# Default-Start: 2 3 4 5
|
|
|
|
# Default-Stop: 0 1 6
|
|
|
|
### END INIT INFO
|
|
|
|
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
|
|
|
|
DAEMON=/usr/sbin/gmond
|
|
|
|
NAME=gmond
|
|
|
|
DESC="Ganglia Monitor Daemon"
|
|
|
|
|
|
|
|
test -x $DAEMON || exit 0
|
|
|
|
|
|
|
|
set -e
|
|
|
|
|
|
|
|
case "$1" in
|
|
|
|
start)
|
|
|
|
echo -n "Starting $DESC: "
|
|
|
|
start-stop-daemon --start --quiet -m --pidfile /var/run/$NAME.pid \
|
|
|
|
--exec $DAEMON
|
|
|
|
echo "$NAME."
|
|
|
|
;;
|
|
|
|
stop)
|
|
|
|
echo -n "Stopping $DESC: "
|
|
|
|
start-stop-daemon --stop --quiet --oknodo --name $NAME \
|
|
|
|
> /dev/null 2>&1
|
|
|
|
echo "$NAME."
|
|
|
|
;;
|
|
|
|
reload)
|
|
|
|
;;
|
|
|
|
restart|force-reload)
|
|
|
|
$0 stop
|
|
|
|
$0 start
|
|
|
|
;;
|
|
|
|
*)
|
|
|
|
N=/etc/init.d/$NAME
|
|
|
|
# echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
|
|
|
|
echo "Usage: $N {start|stop|restart|force-reload}" >&2
|
|
|
|
exit 1
|
|
|
|
;;
|
|
|
|
esac
|
|
|
|
|
|
|
|
exit 0
|
|
|
|
|
|
|
|
Create the /etc/init.d/gmetad start script:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/init.d/gmetad
|
|
|
|
|
|
|
|
Make sure it looks like this:
|
|
|
|
|
|
|
|
#! /bin/sh
|
|
|
|
### BEGIN INIT INFO
|
|
|
|
# Provides: gmetad
|
|
|
|
# Required-Start: $network $named $remote_fs $syslog
|
|
|
|
# Required-Stop: $network $named $remote_fs $syslog
|
|
|
|
# Default-Start: 2 3 4 5
|
|
|
|
# Default-Stop: 0 1 6
|
|
|
|
### END INIT INFO
|
|
|
|
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
|
|
|
|
DAEMON=/usr/sbin/gmetad
|
|
|
|
NAME=gmetad
|
|
|
|
DESC="Ganglia Monitor Meta-Daemon"
|
|
|
|
|
|
|
|
test -x $DAEMON || exit 0
|
|
|
|
|
|
|
|
set -e
|
|
|
|
|
|
|
|
case "$1" in
|
|
|
|
start)
|
|
|
|
echo -n "Starting $DESC: "
|
|
|
|
start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
|
|
|
|
--exec $DAEMON
|
|
|
|
echo "$NAME."
|
|
|
|
;;
|
|
|
|
stop)
|
|
|
|
echo -n "Stopping $DESC: "
|
|
|
|
start-stop-daemon --stop --quiet --oknodo \
|
|
|
|
--exec $DAEMON > /dev/null 2>&1
|
|
|
|
echo "$NAME."
|
|
|
|
;;
|
|
|
|
reload)
|
|
|
|
;;
|
|
|
|
restart|force-reload)
|
|
|
|
$0 stop
|
|
|
|
$0 start
|
|
|
|
;;
|
|
|
|
*)
|
|
|
|
N=/etc/init.d/$NAME
|
|
|
|
# echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
|
|
|
|
echo "Usage: $N {start|stop|restart|force-reload}" >&2
|
|
|
|
exit 1
|
|
|
|
;;
|
|
|
|
esac
|
|
|
|
|
|
|
|
exit 0
|
|
|
|
|
|
|
|
Make the script executable:
|
|
|
|
|
|
|
|
chmod a+x /etc/init.d/gmetad
|
|
|
|
|
|
|
|
|
|
|
|
Configure Ganglia:
|
|
|
|
|
|
|
|
cp ~/ganglia-3.3.7/web/debian/gmond.conf /etc/ganglia/
|
|
|
|
|
|
|
|
Edit /etc/ganglia/gmond.conf and make sure the cluster section looks like this:
|
|
|
|
|
|
|
|
|
|
|
|
cluster {
|
|
|
|
name = "Shark Cluster"
|
|
|
|
owner = "LGTC"
|
|
|
|
latlong = "unspecified"
|
|
|
|
url = "unspecified"
|
|
|
|
}
|
|
|
|
|
|
|
|
Add a UDP receive channel to the gmond.conf file:
|
|
|
|
|
|
|
|
|
|
|
|
udp_recv_channel {
|
|
|
|
#mcast_join = shark.lumcnet.prod.intern
|
|
|
|
port = 8649
|
|
|
|
#bind = shark.lumcnet.prod.intern
|
|
|
|
family = inet4
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
Install the ganglia web frontend:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
cd /var/www
|
|
|
|
wget http://downloads.sourceforge.net/project/ganglia/ganglia-web/3.4.2/ganglia-web-3.4.2.tar.gz
|
|
|
|
tar xvfz ganglia-web-3.4.2.tar.gz
|
|
|
|
ln -s /var/www/ganglia-web-3.4.2 /var/www/ganglia
|
|
|
|
cd /var/www/ganglia
|
|
|
|
|
|
|
|
Edit the Makefile found in the tarball: set DESTDIR to where you copied the files (/var/www/ganglia) and APACHE_USER to www-data. When done, type:
|
|
|
|
|
|
|
|
make install
|
|
|
|
chown -R www-data:www-data /var/www/ganglia-web-3.4.2
|
|
|
|
|
|
|
|
|
|
|
|
Make sure ganglia-monitor and gmetad start at boot time:
|
|
|
|
|
|
|
|
update-rc.d ganglia-monitor defaults
|
|
|
|
update-rc.d gmetad defaults
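Start both daemons and check that gmond answers on its XML port (8649 by default); this quick check assumes netcat is available:

#!sh
sudo /etc/init.d/ganglia-monitor start
sudo /etc/init.d/gmetad start
nc 127.0.0.1 8649 | head -n 5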
|
|
|
|
|
|
|
|
Back up the Ganglia config files to /usr/local/ganglia/ganglia-backup:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
cp -rf /etc/ganglia/ /usr/local/ganglia/ganglia-backup/etc/
|
|
|
|
cp /etc/init.d/ganglia-monitor /etc/init.d/gmetad /usr/local/ganglia/ganglia-backup/init.d/
|
|
|
|
|
|
|
|
|
|
|
|
### Configure bash and common env. vars
|
|
|
|
Add the following lines to /etc/bash.bashrc:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /etc/bash.bashrc
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
export HISTSIZE=10000
|
|
|
|
export HISTFILESIZE=''
|
|
|
|
export HISTCONTROL=ignoreboth
|
|
|
|
export HISTTIMEFORMAT='%a, %d %b %Y %l:%M:%S%p %z '
|
|
|
|
export JAVA_HOME=/usr/lib/jvm/java-6-sun
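The common-cluster-env.sh file created below states in its own header that it is loaded from /etc/bash.bashrc, so bash.bashrc also needs a line that sources it; a guarded sketch:

#!sh
# source the cluster-wide environment file (created below) if present
if [ -f /usr/local/COMMON-ENV/common-cluster-env.sh ]; then
    . /usr/local/COMMON-ENV/common-cluster-env.sh
fi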
|
|
|
|
|
|
|
|
|
|
|
|
Make the directory COMMON-ENV
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir /usr/local/COMMON-ENV
|
|
|
|
|
|
|
|
Create a common-cluster-env.sh file:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
sudo vi /usr/local/COMMON-ENV/common-cluster-env.sh
|
|
|
|
|
|
|
|
Make the file look like this:
|
|
|
|
|
|
|
|
#!/bin/bash
|
|
|
|
|
|
|
|
#### add here system-wide variables and PATH entries for all sharks
|
|
|
|
#### this file will be loaded by the /etc/bash.bashrc script on all sharks.
|
|
|
|
|
|
|
|
#####SGE#####
|
|
|
|
#. /usr/local/gridengine/default/common/settings.sh
|
|
|
|
|
|
|
|
####Open Grid Scheduler #####
|
|
|
|
. /usr/local/OpenGridScheduler/gridengine/default/common/settings.sh
|
|
|
|
|
|
|
|
|
|
|
|
####HELICOS#####
|
|
|
|
#. /usr/local/helisphere/helicos.bashrc
|
|
|
|
|
|
|
|
#####PacBio#####
|
|
|
|
#. /usr/local/smrtanalysis/current/etc/setup.sh
|
|
|
|
|
|
|
|
###ENSEMBL-API####
|
|
|
|
PERL5LIB=${PERL5LIB}:/usr/local/ensembl-api-56/ensembl/modules
|
|
|
|
PERL5LIB=${PERL5LIB}:/usr/local/ensembl-api-56/ensembl-compara/modules
|
|
|
|
PERL5LIB=${PERL5LIB}:/usr/local/ensembl-api-56/ensembl-variation/modules
|
|
|
|
PERL5LIB=${PERL5LIB}:/usr/local/ensembl-api-56/ensembl-functgenomics/modules
|
|
|
|
PERL5LIB=${PERL5LIB}:/usr/local/MedStat/privatePerl
|
|
|
|
PERL5LIB=${PERL5LIB}:${PERLLIB}
|
|
|
|
export PERL5LIB
|
|
|
|
RPRIVATE=/usr/local/MedStat/Rprivate
|
|
|
|
export RPRIVATE
|
|
|
|
|
|
|
|
export PATH=$PATH:/usr/local/bin:/usr/local/R/current/bin:/usr/local/smrtanalysis/current/analysis/bin
|
|
|
|
|
|
|
|
|
|
|
|
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64/R/lib
|
|
|
|
export PIPELINEPATHES=/usr/local/MedStat/pipelines
|
|
|
|
export TMOUT=1000
|
|
|
|
|
|
|
|
readonly TMOUT
|
|
|
|
|
|
|
|
unset MANPATH
|
|
|
|
|
|
|
|
Make this file executable:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
chmod a+x /usr/local/COMMON-ENV/common-cluster-env.sh
|
|
|
|
|
|
|
|
Create the directory /usr/local/config-files/:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir /usr/local/config-files/
|
|
|
|
|
|
|
|
Create the file /usr/local/config-files/shark_configuration.conf:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
vi /usr/local/config-files/shark_configuration.conf
|
|
|
|
|
|
|
|
Make the file look like this:
|
|
|
|
|
|
|
|
SGE_ROOT="/usr/local/OpenGridScheduler/gridengine"
|
|
|
|
SGE_QMASTER_PORT="6444"
|
|
|
|
SGE_EXECD_PORT="6445"
|
|
|
|
SGE_ENABLE_SMF="false"
|
|
|
|
CELL_NAME="default"
|
|
|
|
ADMIN_USER=""
|
|
|
|
QMASTER_SPOOL_DIR="/usr/local/OpenGridScheduler/gridengine/default/spool/qmaster"
|
|
|
|
EXECD_SPOOL_DIR="/usr/local/OpenGridScheduler/gridengine/default/spool/execd"
|
|
|
|
GID_RANGE="20000-21000"
|
|
|
|
SPOOLING_METHOD="berkeleydb"
|
|
|
|
DB_SPOOLING_SERVER="none"
|
|
|
|
DB_SPOOLING_DIR="/opt/sge/default/spooldb"
|
|
|
|
ADMIN_HOST_LIST="shark"
|
|
|
|
SUBMIT_HOST_LIST="shark"
|
|
|
|
EXEC_HOST_LIST="angelshark.cluster.loc baskingshark.cluster.loc blacktipshark.cluster.loc caribbeanshark.cluster.loc dogfishshark.cluster.loc greatwhiteshark.cluster.loc hammerheadshark.cluster.loc lemonshark.cluster.loc megamouthshark.cluster.loc tigershark.cluster.loc whaleshark.cluster.loc makoshark.cluster.loc epauletteshark.cluster.loc frilledshark.cluster.loc threshershark.cluster.loc kitefinshark.cluster.loc nightshark.cluster.loc pygmeshark.cluster.loc zebrashark.cluster.loc wobbegongshark.cluster.loc"
|
|
|
|
HOSTNAME_RESOLVING="true"
|
|
|
|
SHELL_NAME="ssh"
|
|
|
|
COPY_COMMAND="scp"
|
|
|
|
DEFAULT_DOMAIN=""
|
|
|
|
ADMIN_MAIL="none"
|
|
|
|
ADD_TO_RC="true"
|
|
|
|
SET_FILE_PERMS="true"
|
|
|
|
RESCHEDULE_JOBS="wait"
|
|
|
|
SCHEDD_CONF="2"
|
|
|
|
# all options below are irrelevant in our setup
|
|
|
|
SHADOW_HOST=""
|
|
|
|
EXEC_HOST_LIST_RM=""
|
|
|
|
REMOVE_RC="false"
|
|
|
|
WINDOWS_SUPPORT="false"
|
|
|
|
WIN_ADMIN_NAME="Administrator"
|
|
|
|
WIN_DOMAIN_ACCESS="false"
|
|
|
|
CSP_RECREATE="true"
|
|
|
|
CSP_COPY_CERTS="false"
|
|
|
|
CSP_COUNTRY_CODE="DE"
|
|
|
|
CSP_STATE="Germany"
|
|
|
|
CSP_LOCATION="Building"
|
|
|
|
CSP_ORGA="Organisation"
|
|
|
|
CSP_ORGA_UNIT="Organisation_unit"
|
|
|
|
CSP_MAIL_ADDRESS="name@yourdomain.com"
|
|
|
|
|
|
|
|
|
|
|
|
### Configure ADS authentication
|
|
|
|
|
|
|
|
## Cluster Software Install
|
|
|
|
### SMRT Analysis v1.3.1
|
|
|
|
Reconfigure dash so that /bin/sh points to bash:
|
|
|
|
|
|
|
|
sudo dpkg-reconfigure dash
|
|
|
|
|
|
|
|
Select "No" when asked: Install dash as /bin/sh?
|
|
|
|
Install the packages:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
apt-get install -y mysql-server-5.1 libxml-parser-perl
|
|
|
|
|
|
|
|
|
|
|
|
Create the install dir:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
mkdir /usr/local/smrtanalysis/
|
|
|
|
|
|
|
|
Add the smrtanalysis user:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
adduser smrtanalysis
|
|
|
|
|
|
|
|
|
|
|
|
Download the SMRT Analysis software:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
wget ftp://SoftwareDL_Read:PBsec82347hd34@ftp.pacificbiosciences.com/SMRT_Analysis/v1.3.1/smrtanalysis-1.3.1-ubuntu.tgz -P /usr/local/smrtanalysis/
|
|
|
|
|
|
|
|
Change to the download directory, unpack, and link:
|
|
|
|
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
cd /usr/local/smrtanalysis/
|
|
|
|
tar xvfz smrtanalysis-1.3.1-ubuntu.tgz
|
|
|
|
ln -s /usr/local/smrtanalysis/smrtanalysis-1.3.1/ /usr/local/smrtanalysis/current
|
|
|
|
|
|
|
|
Export your $SEYMOUR_HOME:
|
|
|
|
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
export SEYMOUR_HOME=/usr/local/smrtanalysis/current
|
|
|
|
|
|
|
|
Change ownership of $SEYMOUR_HOME:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
chown -R smrtanalysis:smrtanalysis /usr/local/smrtanalysis/smrtanalysis-1.3.1/
|
|
|
|
|
|
|
|
Edit the setup.sh script to match your SEYMOUR_HOME:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
vi /usr/local/smrtanalysis/current/etc/setup.sh
|
|
|
|
|
|
|
|
Edit the SEYMOUR_HOME= line like this:
|
|
|
|
|
|
|
|
SEYMOUR_HOME=/usr/local/smrtanalysis/current
|
|
|
|
|
|
|
|
Set up the SMRT Portal:
|
|
|
|
|
|
|
|
#!sh
|
|
|
|
/etc/scripts/postinstall/configure_smrtanalysis.sh
|
|
|
|
|
|
|
|
Edit the following file:
|
|
|
|
|
|
|
|
vi /usr/local/smrtanalysis/current/analysis/etc/cluster/SGE/interactive.tmpl
|
|
|
|
|
|
|
|
Make sure the file looks like this:
|
|
|
|
|
|
|
|
export SGE_ROOT=/usr/local/OpenGridScheduler/gridengine
|
|
|
|
qsub -S /bin/bash -sync y -V -q all.q -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe BWA ${NPROC} ${CMD}
|
|
|
|
|
|
|
|
Edit the following file:
|
|
|
|
|
|
|
|
vi /usr/local/smrtanalysis/current/analysis/etc/cluster/SGE/kill.tmpl
|
|
|
|
|
|
|
|
Make sure the file looks like this:
|
|
|
|
|
|
|
|
export SGE_ROOT=/usr/local/OpenGridScheduler/gridengine
|
|
|
|
qdel ${JOB_ID}
|
|
|
|
|
|
|
|
Edit the following file:
|
|
|
|
|
|
|
|
vi /usr/local/smrtanalysis/current/analysis/etc/cluster/SGE/start.tmpl
|
|
|
|
|
|
|
|
Make sure the file looks like this:
|
|
|
|
|
|
|
|
export SGE_ROOT=/usr/local/OpenGridScheduler/gridengine
|
|
|
|
qsub -pe BWA ${NPROC} -S /bin/bash -V -q all.q -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
|
|
|
|
|
|
|
|
|