== Installing pydusa for EMAN2/SPARX MPI Support ==

Note: Before 4/19/2013, EMAN2 and SPARX required different libraries to enable MPI support. After this date, we switched to a common system based on the pydusa project. These instructions cover installing pydusa. Actually using MPI still differs somewhat between the packages, so once you have pydusa installed, please see the EMAN2- and SPARX-specific usage pages.

Unfortunately, given the wide variation in MPI libraries used on different clusters and other computers, it is impractical to distribute precompiled MPI libraries with the EMAN2/SPARX binaries. Luckily, the pydusa project went to considerable effort to make installation from source easy on the vast majority of computers. Note that we do not directly support running with MPI on Windows. This system is primarily targeted at Linux clusters, though presumably it could also be used on individual Linux/Mac workstations. In EMAN2, however, there are alternative strategies (http://blake.bcm.edu/emanwiki/EMAN2/Parallel) which are more efficient in that situation.

== Pydusa Download ==

The modified version of Pydusa is available for download from the main EMAN2/SPARX download pages, here: http://ncmi.bcm.edu/ncmi/software/counter_222/software_121

While we don't expect this package itself to require frequent updates, you will need to rerun the installation script whenever you update your version of EMAN2/SPARX. For that reason, we suggest keeping the pydusa installation directory after installing, so you can rerun the installer in the future with minimum hassle.
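
If you prefer to fetch the archive from the command line, a sketch like the following may work; the link above goes through a download page rather than directly to the tarball, so the placeholder URL here is hypothetical and must be replaced with the actual link:
{{{
# <direct_tarball_url> is a placeholder -- copy the real link from the download page above
wget -O pydusa-1.15es.tgz <direct_tarball_url>
}}}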

== Installing Pydusa ==

The package is provided as a compressed TAR archive. If you aren't familiar with these, it is easily unpacked with the single command shown below. EMAN2/SPARX must already be installed in your home directory for this to work properly. On most systems you can simply run:
{{{
tar -xvzf pydusa-1.15es.tgz
cd pydusa-1.15es
./install_mpi.py
}}}

Assuming your cluster has MPI installed, this will generally be able to find and use it automatically.
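
Before running the installer, it may be worth checking which MPI toolchain is currently visible on your PATH. These are standard commands, shown here purely as a convenience:
{{{
which mpicc mpirun   # both should resolve to the same MPI installation
mpirun --version     # reports the MPI implementation and version
}}}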

Notes:

 * If the script cannot find a local MPI installation, it will download and install a version of MPI for you. On most clusters, this is '''not''' what you want to happen.
 * Generally you can find out which MPI version your account is set up to use by typing {{{which mpicc}}} (as in the quick check above).
 * If the script can't find your system MPI, or if you wish to use a specific version of MPI when more than one is available on the cluster, you can configure your PATH and LD_LIBRARY_PATH for the distribution you want before running this script (see the sketch after the .bashrc walkthrough below).
 * If the installation fails, you may need to get support from your cluster administrator.
 * You can use {{{install_mpi.py --force}}} if you want to force the script to download and install an MPI library for you. Again, this is usually not desirable.
 * Some specific versions of OpenMPI have a bug which can prevent pydusa (or any other Python/MPI wrapper) from working properly. If you get errors about 'undefined symbols', ask your local cluster support staff to:
  * Make sure that OpenMPI is compiled with the '--disable-dlopen' option (see the configure sketch below).
  * Set the environment variable LD_PRELOAD to point at the correct libmpi.so, as follows:

Add a line similar to the one below to your ''.bashrc'' file (a hidden file in your home directory) by following these steps:

1) Open the hidden ''.bashrc'' file by typing at the command line:
{{{
vi .bashrc
}}}
(''vi'' is an obnoxious text editor, but you can look up how to use it on Google.)

2) Add the following line anywhere in the ''.bashrc'' file, making sure to replace <mpi_directory> with the directory of your MPI installation:
{{{
export LD_PRELOAD=<mpi_directory>/lib/libmpi.so
}}}

In my case, the directory of my MPI installation is ''/raid/home/jgalaz/openmpi-1.4.3''. Thus, the line I added to my .bashrc file was:
{{{
export LD_PRELOAD=/raid/home/jgalaz/openmpi-1.4.3/lib/libmpi.so
}}}

To find which MPI you're using (that is, the installation directory of your local version of MPI), type at the command line:
{{{
which mpirun
}}}

I get:
{{{
/raid/home/jgalaz/openmpi-1.4.3/bin/mpirun
}}}

Therefore the path before '/bin' corresponds to <mpi_directory> (''/raid/home/jgalaz/openmpi-1.4.3'').
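
To illustrate the PATH and LD_LIBRARY_PATH note above: a sketch like the following, added to your .bashrc before running the installer, selects a specific MPI distribution. The directory shown is just the example installation from this page; substitute your own:
{{{
# Example paths only -- point these at the MPI you actually want to use
export PATH=/raid/home/jgalaz/openmpi-1.4.3/bin:$PATH
export LD_LIBRARY_PATH=/raid/home/jgalaz/openmpi-1.4.3/lib:$LD_LIBRARY_PATH
which mpicc   # should now report the MPI you just selected
}}}

Likewise, for the '--disable-dlopen' note above, rebuilding OpenMPI without dlopen support looks roughly like this (a generic source build; the install prefix is an arbitrary example):
{{{
# Run from an unpacked OpenMPI source tree
./configure --prefix=$HOME/openmpi-nodlopen --disable-dlopen
make
make install
}}}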

When you have the library installed, remember to refer to the EMAN2- and SPARX-specific pages mentioned above for usage information.
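
As a final check that pydusa is working, a minimal smoke test along these lines may help. This assumes pydusa installed a Python module named ''mpi'' providing an {{{mpi_init}}} wrapper (the SPARX-style interface); verify the module name against your own installation:
{{{
# Hedged smoke test: if LD_PRELOAD/dlopen problems exist, the import will fail
mpirun -np 2 python -c "from mpi import mpi_init; print('pydusa MPI import OK')"
}}}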
