[[TracNav(TracNav/TOC)]]
= How to Set Up an OSG & VPB Build Environment on Linux (Step-by-Step Tutorial) =
This tutorial describes how to build OSG on Linux and how to render an OSG !VirtualPlanetBuilder database on a cluster.
Basically, it is possible to render !VirtualPlanetBuilder databases on both MS Windows and Linux. Unfortunately, MS Windows is not stable enough, possibly due to NTFS and the heavy system load (RAM, CPU, HDD).
To render large databases, it is absolutely necessary to use VPB on Linux.
== System installation and basic preparation ==
* Install Kubuntu 9.10 64 bit (64 bit is important to prevent your system from crashing due to resource limits)
* Activate the proprietary video driver: Menu -> System -> Driver --> reboot
* Add a shell shortcut to the desktop
* Install basic packages
aptitude update
aptitude install vim
=== Set up RAID, if necessary ===
* Install package:
aptitude install dmraid
* Read in existing Windows or Linux RAID sets and activate them
dmraid -r
dmraid -ay -v
* Add the RAID volumes and Windows drives to fstab (a quick verification follows below)
mkdir /mnt/disk1
mkdir /mnt/disk2
vim /etc/fstab
/dev/mapper/isw_befejdgeeb_Volume01 /mnt/disk2 auto defaults 0 0
/dev/sda2 /mnt/disk1 auto defaults 0 0
Look up the device name in /dev/mapper after the last dmraid command.
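To check the new fstab entries without rebooting, mount everything listed in fstab and verify the result:
{{{
mount -a                        # mount all filesystems listed in /etc/fstab
df -h /mnt/disk1 /mnt/disk2     # both mount points should show the expected drives and sizes
}}}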
=== [Optional] Using Synergy to control the whole cluster with only one keyboard and mouse ===
* Install synergy
aptitude install synergy quicksynergy
howto: http://www.mattcutts.com/blog/how-to-configure-synergy-in-six-steps/
== Compile and install OSG and its dependencies ==
* Install the OSG dependencies and OSG as described at http://www.openscenegraph.org/projects/osg/wiki/Support/PlatformSpecifics/Debian-Dependencies
Use this updated list, which is adapted for Kubuntu 9.10:
aptitude update
aptitude install cmake subversion g++ libx11-dev nvidia-glx-185-dev libglu-dev
aptitude install libpng-dev libjpeg-dev libtiff-dev libfreetype6-dev libgdal-dev gdal-bin
aptitude install libcurl4-dev dcmtk libdcmtk1-dev libgtk2.0-dev libxul-dev libpoppler-glib-dev
aptitude install libvncserver-dev librsvg2-dev libsdl-dev libxml2-dev
aptitude install xxdiff dos2unix libboost-regex doxygen graphviz subversion-tools
To use the newly introduced resume function, you must use at least OSG 2.9.5 with VPB 0.9.11.
Basically, VPB versions are bound to specific OSG versions (e.g. VPB 0.9.10 is bound to OSG 2.8.2).
* Compile OSG 2.9.5
cd /tmp
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.9.5 OpenSceneGraph
cd OpenSceneGraph
./configure
make -j 8
make install
cd ..
* ... or Compile newest OSG from SVN
cd /tmp
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph/trunk OpenSceneGraph
cd OpenSceneGraph
./configure
make -j 8
make install
cd ..
* Add OSG environment variables so that all OSG binaries can be called from an interactive shell:
vim /home/fsd/.bash_profile
export PATH=$PATH:/home/fsd/OpenSceneGraph/bin
export OSG_FILE_PATH=/home/fsd/sampledata/:/home/fsd/sampledata/Images
vim /etc/bash.bashrc
export LD_LIBRARY_PATH=/home/fsd/OpenSceneGraph/lib/:/home/fsd/VirtualPlanetBuilder/lib/
Reboot to load the changes and to allow compiling VPB; a quick check of the environment follows below.
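* [Optional] Verify that the environment is active after the reboot; osgversion is one of the binaries installed with OSG:
{{{
echo $PATH $LD_LIBRARY_PATH   # should contain the OpenSceneGraph bin/ and lib/ directories set above
osgversion                    # prints the installed OpenSceneGraph version
}}}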
* Compile VPB 0.9.11:
svn checkout http://www.openscenegraph.org/svn/VirtualPlanetBuilder/tags/VirtualPlanetBuilder-0.9.11/ VirtualPlanetBuilder
cd VirtualPlanetBuilder
./configure
make -j 8
make install
* or Compile newest VPB from SVN:
svn checkout http://www.openscenegraph.org/svn/VirtualPlanetBuilder/trunk VirtualPlanetBuilder
cd VirtualPlanetBuilder
./configure
make -j 8
make install
* Download the sample data:
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph-Data/tags/OpenSceneGraph-Data-2.8.0 sampledata
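* [Optional] Smoke test: with the sample data downloaded and OSG_FILE_PATH set, the viewer should find a model by name (cow.osg is part of the sample data; press Esc to quit):
{{{
osgviewer cow.osg
}}}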
* Configure resource limits
sudo vim /etc/security/limits.conf
# End of file
fsd soft nofile 65353
fsd hard nofile 65353
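The new limits only take effect for new logins. After logging in again, verify the open-file limit:
{{{
ulimit -n   # should print 65353
}}}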
== Cluster preparation ==
=== Requirements for successful use of the cluster ===
* Source data must be available on all nodes under the identical path (best solution: local source data on each node, stored at the same path)
* The destination folder must be shared over a network file system and must be available on all nodes under the identical path (easiest solution: sshfs)
* The compile script must be called from the destination directory (all tasks contain the "run path", which must be accessible by the executing node, so the "run path" must be the shared destination folder)
To fulfill these requirements, let's assume this setup:
* One node acts as server node and has the IP 192.168.0.55
* Two nodes act as client nodes and have the IPs 192.168.0.54 and 192.168.0.56
* sshfs is used as network filesystem
* the username on all nodes is fsd
* The destination folder for the compiled database is {{{/geodata}}}
* The local drive with the geo source data is {{{/localSourceData}}}
To use vpbmaster on a cluster, all machines must have access to the server node.
Open a fresh shell and type in as standard user (NOT root!):
ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub fsd@192.168.0.55
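A quick test shows whether the passwordless login works; the IP is the example server node from above:
{{{
ssh fsd@192.168.0.55 hostname   # should print the server's hostname without asking for a password
}}}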
Because remote ssh commands do not load the environment settings of an interactive shell, ssh must be configured to provide the required environment variables for non-interactive shells as well:
vim /etc/ssh/sshd_config
PermitUserEnvironment yes
vim /home/fsd/.ssh/environment
LD_LIBRARY_PATH=/home/fsd/OpenSceneGraph/lib/:/home/fsd/VirtualPlanetBuilder/lib/
OSG_FILE_PATH=/home/fsd/sampledata/:/home/fsd/sampledata/Images
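After editing sshd_config, the SSH daemon has to reload its configuration before PermitUserEnvironment takes effect; afterwards the variables should be visible in a non-interactive session. The IP is the example server node and the init script is the usual location on Kubuntu 9.10:
{{{
sudo /etc/init.d/ssh restart
ssh fsd@192.168.0.55 env | grep -E 'LD_LIBRARY_PATH|OSG_FILE_PATH'   # both variables should show up
}}}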
To share the destination folder (into which the database will be written) over the network, install sshfs:
aptitude install sshfs
Create folders for the local source data and the shared destination data:
mkdir /mnt/disk3 # for source data
mkdir /geodata # for destination data, shared over sshfs
== Use Cluster ==
=== Preparation after boot of all clients ===
* On the '''server node''', mount the destination hard drive into the appropriate folder:
mount /dev/sde1 /geodata
* On '''all client nodes''', mount the destination directory of the server node over sshfs.
In our example setup the server node has the IP 192.168.0.55:
sshfs fsd@192.168.0.55:/geodata /geodata
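To verify that the shared destination folder really is the same on all nodes, create a test file on one node and check it on the others:
{{{
touch /geodata/cluster_test   # run on any node
ls -l /geodata/cluster_test   # run on the other nodes; the file must be visible everywhere
rm /geodata/cluster_test
}}}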
=== Create the compile script ===
To allow the database build command to be re-run easily, create a script:
vim /geodata/compile_BRD_Sued.sh
{{{
#!/bin/sh
vpbmaster --machines machinepool.txt \
    --geocentric \
    --terrain \
    --compressed \
    -d /localSourceData/srtm-V4.2-europa \
    -t /localSourceData/Muenchen_25cm \
    -t /localSourceData/geocontent/Deutschland_1m/Sued \
    -t /localSourceData/bluemarble \
    -o /geodata/BRD1m_MUC0.25m_srtmEU_BM/terrain.ive
}}}
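The script refers to a {{{machinepool.txt}}} in /geodata, which tells vpbmaster which hosts to distribute build tasks to and how many processes to start on each. A minimal sketch for the example cluster above, using the Machine { hostname ... processes ... } entries of VPB's machine pool format; the hostnames and process counts are only illustrative and must match your own nodes:
{{{
Machine {
    hostname 192.168.0.54
    processes 4
}
Machine {
    hostname 192.168.0.55
    processes 4
}
Machine {
    hostname 192.168.0.56
    processes 4
}
}}}
Include the server node itself only if it should also render tiles.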
=== Compile Database ===
If everything is set up correctly, the processing step is very easy. Open a new shell on the server node and run as standard user (NOT root!):
cd /geodata
./compile_BRD_Sued.sh
While the database is being built, it is possible to watch the current progress in another console. Because files are still being written and modified, the viewer application may print warnings or terminate in a few cases. This is caused by the ongoing build, does not affect or modify the database, and can safely be ignored.
After the build has finished, this should of course no longer happen ;)
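For example, to peek at the database while it is being built, open the output file from the compile script above on any node:
{{{
osgviewer /geodata/BRD1m_MUC0.25m_srtmEU_BM/terrain.ive
}}}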
Because the system load during database creation is very high, on some systems the operating system or the build process crashes.
If such crashes happen at the beginning of the rendering process, please check that your "open files" limits are set correctly.
=== Resume compiling database after crash ===
See http://www.openscenegraph.org/projects/VirtualPlanetBuilder/wiki/Resume
To resume the crashed compile process, execute on the server node:
cd /geodata
vpbmaster --machines machinepool.txt --tasks build_master.tasks
Take care '''not''' to call the compile script again; in that case all already compiled tasks will be reset to status "pending".
== Source data acquisition ==
The following source data can be used:
* Digital elevation data
  * Free global elevation data with 3 arc-second resolution: SRTM data (NASA)
  * Free local high-resolution elevation models: DEM data (www.viewfinderpanoramas.org)
* Textures/orthophotos
  * Free global low-resolution texture data: Bluemarble Next Generation (NASA)
  * Free local low/medium-resolution texture data: Landsat (NASA)
  * Commercial global medium/high-resolution texture data: Landsat (atlogis.com, ...)
  * Commercial national high-resolution texture data: e.g. Germany (Geocontent), USA (USGS), ...
=== SRTM-Data ===
SRTM data with 3 arc-second resolution is available for free at
* http://www.csi.cgiar.org/index.asp (american server, very slow)
* ftp://xftp.jrc.it/pub/srtmV4/ (european mirror, very fast)
Local high resolution DEM data (mainly based on SRTM)
* http://www.viewfinderpanoramas.org/dem3.html
Tip:
Because SRTM data is delivered in many small .zip or .tar.gz files, download and unpack it automatically:
wget -r ftp://xftp.jrc.it/pub/srtmV4/tiff/
for zipfile in *.zip;do unzip -o "$zipfile" -d unpacked; done
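If the download also contains .tar.gz archives, an analogous loop unpacks them into the same folder:
{{{
mkdir -p unpacked
for tarfile in *.tar.gz; do tar xzf "$tarfile" -C unpacked; done
}}}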
=== US texture data ===
Local high-resolution and global texture and DEM data
* http://edcsns17.cr.usgs.gov/EarthExplorer/
* http://glovis.usgs.com
To use LANDSAT aerial images, read https://zulu.ssc.nasa.gov/mesid/tutorial/LandsatTutorial-V1.html as an introduction. LANDSAT datasets are delivered with up to seven images, each representing a different sensor with a different wavelength. Three of these files (the sensors for RGB) must be combined into the raw "natural" image.
The image merging is possible with gdal_merge.py (available in FWTools); the -separate option places each input file into its own band:
gdal_merge.py -separate -o outfile.tif R_sensor.tif G_sensor.tif B_sensor.tif
=== National high resolution data ===
National high resolution data is available from many companies. Germany: !GeoContent
=== Compress data ===
To shift system load from the HDD to the CPU, compress all textures losslessly with LZW. This usually speeds up the build considerably, because the HDD, not the CPU, is typically the bottleneck.
{{{
# placeholder names; replace input.tif / input_lzw.tif with your own texture files
gdal_translate -co "COMPRESS=LZW" input.tif input_lzw.tif
}}}
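To compress all GeoTIFF textures in a folder in one go, a simple loop is enough; the compressed/ subfolder and the *.tif pattern are just examples:
{{{
mkdir -p compressed
for tif in *.tif; do
    gdal_translate -co "COMPRESS=LZW" "$tif" "compressed/$tif"
done
}}}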
=== Moon Data ===
To animate the earth rising above the moon's horizon, it can be useful to model the moon as well.
http://lunar.arc.nasa.gov/dataviz/datamaps/index.html
== Troubleshooting ==
If compiling fails at the beginning:
* reboot
* Please check if you use a 64 bit operating system
* Check your limits: ulimit -a
If compiling fails after many hours or days:
* Maybe your OS ran out of some resource; reboot and resume (see above)
* Check your hardware if resuming does not help.
If only local threads are executed, but no remote ones:
* Check your ssh setup: make sure you can log in without a password from the server node to the client nodes
* Check your ssh setup: make sure the environment variables are available on the client nodes when you log in to them over ssh