Parallel Processing in EMAN2

EMAN2 uses a modular strategy for running commands in parallel. That is, you can choose different ways to run EMAN2 programs in parallel, depending on your environment. We now support 3 distinct methods for parallelism, and each has its own page of documentation.

Which option is best? If you are running on a single machine/node, then Threaded is by far the most efficient option, and the easiest to use as well. If you are running on a few nodes of a single cluster, I would suggest MPI as probably the easiest option, and the one that will cause your sysadmin the fewest headaches, though this may not be true on all clusters. DC is most appropriate when you are trying to use multiple independent computers, or to combine the resources of multiple clusters. In a sense it is the most flexible, as nodes can be added and removed at any time during a job, and DC will make efficient use of whatever is available at any given moment. However, it takes more work to set up, is somewhat complicated to use, and the network policies on some clusters will not permit its use.

Please follow the appropriate link:

  • Threaded - This is for use on a single computer with multiple processors (cores). For example, the Core2Duo processors of a few years ago had 2 cores. As of 2010, individual computers often have single or dual processors with 2, 4 or 6 cores each, for a total of up to 12 cores. EMAN2 can make very efficient use of all of these cores, but this mode will ONLY work if you want to run on a single computer.

  • MPI - This is the standard parallelism method used on virtually all large clusters nowadays. It will require a small amount of custom installation for your specific cluster, even if you are using a binary distribution of EMAN2. Follow this link for more details.

  • Distributed - This was the original parallelism method developed for EMAN2. It can be used on anything from sets of workstations to multiple clusters, and can dynamically change how many processors it's using during a single run, allowing you, for example, to make use of idle cycles at night on lab workstations, but reduce the load during the day for normal use. It is very flexible, but requires a bit of effort, and a knowledgeable user to configure and use.
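Whichever method you choose, parallel-capable EMAN2 programs select it through the --parallel command-line option. The general form and the DC example below come from an earlier revision of this page; the thread count and host/port values are placeholders, and the exact options for each method are described on its linked page:

```
--parallel=<type>:<option>=<value>:<option>=<value>:...

# for example, threaded mode with 4 threads:
--parallel=thread:4

# or the distributed (DC) model, with a server on localhost port 9990:
--parallel=dc:localhost:9990
```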

Note: All 3 parallelism options have been fully supported and stable since early 2011. Both MPI and DC have been tested on jobs using at least 256 cores, running for multiple days, and are in routine use on large refinement jobs at multiple sites. That said, DC and MPI can both take a little effort to set up on a new system, particularly if you have no past experience with cluster computing. We are happy to help if you have difficulties.
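Since Threaded mode is limited to the cores of a single machine, it helps to know how many cores are available before choosing a thread count. A quick check that does not involve EMAN2 at all (nproc is standard on Linux; the sysctl fallback covers macOS):

```shell
# Print the number of available CPU cores
nproc 2>/dev/null || sysctl -n hw.ncpu
```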

EMAN2/Parallel (last edited 2023-04-15 02:03:25 by SteveLudtke)