Changes between Version 3 and Version 4 of OsgVpbBuildEnvLinux
Timestamp: Aug 3, 2010, 9:24:32 PM
OsgVpbBuildEnvLinux
[[TracNav(TracNav/TOC)]]

= How to Setup OSG & VPB Clustered Build Environment in Linux as Step-By-Step Tutorial =

Basically it is possible to render VirtualPlanetBuilder databases on MS Windows and on Linux. Unfortunately MS Windows is not stable enough, maybe caused by NTFS and the heavy system load (RAM, CPU, HDD).
To render large databases, it is absolutely necessary to use VPB on Linux. This HowTo is a guide to setting up a proper clustered build environment.


== System installation and basic preparation ==

 * Install Kubuntu 9.10 64 bit (64 bit is important to prevent your system from crashing due to some resource limits)
 * Activate the proprietary video driver: Menu -> System -> Driver, then reboot
 * Link a shell to the desktop
 * Install basic packages
{{{
#!sh
aptitude update
aptitude install vim
}}}

=== Setup RAID, if necessary ===
 * Install the package:
{{{
#!sh
aptitude install dmraid
}}}
 * Read in existing Windows or Linux RAIDs and activate them
{{{
#!sh
dmraid -r
dmraid -ay -v
}}}
 * Add RAIDs and Windows drives to fstab
{{{
#!sh
mkdir /mnt/disk1
mkdir /mnt/disk2
...
/dev/mapper/isw_befejdgeeb_Volume01 /mnt/disk2 auto defaults 0 0
/dev/sda2 /mnt/disk1 auto defaults 0 0
}}}
Look up the device name in /dev/mapper after the last dmraid command.

...

=== [Optional] Using Synergy to control the whole cluster with only one keyboard and mouse ===
 * Install Synergy
{{{
#!sh
aptitude install synergy quicksynergy
}}}

Howto: http://www.mattcutts.com/blog/how-to-configure-synergy-in-six-steps/

...

== Compile and install OSG and its dependencies ==

 * Install the OSG dependencies and OSG as described at http://www.openscenegraph.org/projects/osg/wiki/Support/PlatformSpecifics/Debian-Dependencies
Use this updated list, which is adapted for Kubuntu 9.10:
{{{
#!sh
aptitude update
aptitude install cmake subversion g++ libx11-dev nvidia-glx-185-dev libglu-dev
...
aptitude install xxdiff dos2unix libboost-regex doxygen graphviz subversion-tools
}}}
To use the newly introduced resume function, you must use at least OSG 2.9.9 with VPB 0.9.11.
Basically, VPB versions are bound to specific OSG versions (e.g. VPB 0.9.10 is bound to OSG 2.8.2).
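Because the OSG and VPB versions have to match, it can help to check which tagged releases actually exist before checking one out. The listing commands below are only a convenience and use the same repository URLs as the checkouts that follow:
{{{
#!sh
# List the available tagged releases of OSG and VPB
svn ls http://www.openscenegraph.org/svn/osg/OpenSceneGraph/tags/
svn ls http://www.openscenegraph.org/svn/VirtualPlanetBuilder/tags/
}}}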
 * Compile OSG 2.9.9
{{{
#!sh
cd /tmp
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.9.9 OpenSceneGraph
cd OpenSceneGraph
./configure
...
make install
cd ..
}}}
To build OSG in debug mode, edit the configure file and change the build type to Debug, then repeat the last instructions.

 * ... or compile the newest OSG from SVN
{{{
#!sh
cd /tmp
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph/trunk OpenSceneGraph
...
make install
cd ..
}}}


 * Add OSG environment variables to make all OSG binaries callable from an interactive shell:
{{{
#!sh
vim /home/fsd/.bash_profile
export PATH=$PATH:/home/fsd/OpenSceneGraph/bin
...
vim /etc/bash.bashrc
export LD_LIBRARY_PATH=/home/fsd/OpenSceneGraph/lib/:/home/fsd/VirtualPlanetBuilder/lib/
}}}
Reboot to load the changes and to allow compiling VPB.


 * Compile VPB 0.9.11:
{{{
#!sh
svn checkout http://www.openscenegraph.org/svn/VirtualPlanetBuilder/tags/VirtualPlanetBuilder-0.9.11/ VirtualPlanetBuilder
cd VirtualPlanetBuilder
./configure
make -j 8
make install
}}}

 * ... or compile the newest VPB from SVN:
{{{
#!sh
svn checkout http://www.openscenegraph.org/svn/VirtualPlanetBuilder/trunk VirtualPlanetBuilder
cd VirtualPlanetBuilder
./configure
make -j 8
make install
}}}

 * Download sample data:
{{{
#!sh
svn co http://www.openscenegraph.org/svn/osg/OpenSceneGraph-Data/tags/OpenSceneGraph-Data-2.8.0 sampledata
}}}

 * Configure resource limits
{{{
#!sh
sudo vim /etc/security/limits.conf
...
fsd soft nofile 65353
fsd hard nofile 65353
}}}
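After logging in again (or after a reboot), you can verify that the higher open-file limit is really active for the fsd user:
{{{
#!sh
# Show the open-file limit of the current shell; it should report 65353
ulimit -n
}}}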
...

=== Requirements for successful use of the cluster ===

 * Source data must be available on all nodes under the identical path (best solution: local source data on each node, located at the same path)
 * The destination folder must be shared over some network file system and must be available on all nodes under the identical path (easiest solution: sshfs)
 * The compile script must be called from the destination directory (all tasks contain the "run-path", which must be accessible by the executing node -> the "run-path" must be the shared destination folder).

To fulfill these requirements, let's assume this setup:
 * One node acts as the server node and has the IP 192.168.0.55
 * Two nodes act as client nodes and have the IPs 192.168.0.54 and 192.168.0.56
 * sshfs is used as the network filesystem
 * The username on all nodes is fsd
 * The destination folder for the compiled database is {{{/geodata}}}
 * The local drive with geo source data is {{{/localSourceData}}}


To use vpbmaster clustered, all machines must have access to the server node.
Open a fresh shell and type in as standard user (NOT root!):
{{{
#!sh
ssh-keygen -t rsa
# if asked for a passphrase: press Enter for no passphrase - otherwise the login would not be passwordless :)

ssh-copy-id -i .ssh/id_rsa.pub fsd@192.168.0.55
}}}

Because remote ssh commands do not invoke the environment settings of an interactive shell, ssh must be prepared to provide the required environment variables for non-interactive shells as well:
{{{
#!sh
vim /etc/ssh/sshd_config

PermitUserEnvironment yes
...
LD_LIBRARY_PATH=/home/fsd/OpenSceneGraph/lib/:/home/fsd/VirtualPlanetBuilder/lib/
OSG_FILE_PATH=/home/fsd/sampledata/:/home/fsd/sampledata/Images
}}}

To share the destination folder (which the database is written into) over the network, install sshfs:
{{{
#!sh
aptitude install sshfs
}}}

Create folders for the local source data and the shared destination data:
{{{
#!sh
mkdir /mnt/disk3 # for source data
mkdir /geodata # for destination data, shared over sshfs
}}}
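Assuming the two variables were placed in the user's ~/.ssh/environment file (the file that PermitUserEnvironment enables; the exact file name is not shown above), you can check from any other node that a non-interactive ssh shell really provides them:
{{{
#!sh
# The variables must be visible in a non-interactive ssh session,
# because the remote build tasks are started this way
ssh fsd@192.168.0.55 'echo $LD_LIBRARY_PATH'
ssh fsd@192.168.0.55 'echo $OSG_FILE_PATH'
}}}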
...

=== Preparation after boot of all clients ===
 * Mount, on the '''server node''', the destination hard drive into the appropriate folder:
{{{
#!sh
mount /dev/sde1 /geodata
}}}
 * Mount, on '''all client nodes''', the destination directory of the server node over sshfs.
The server node has the IP 192.168.0.55 in the setup assumed above:
{{{
#!sh
sshfs fsd@192.168.0.55:/geodata /geodata
# enter the password of the server node when prompted
}}}
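To double-check that every node really sees the same shared destination folder under the identical path, you can create a marker file on the server node and look for it on the clients (the file name here is just an example):
{{{
#!sh
# On the server node:
touch /geodata/sshfs_mount_test
# On each client node - the file must be visible under the identical path:
ls -l /geodata/sshfs_mount_test
}}}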