Parallel Processing in EMAN2

EMAN2 uses a modular strategy for parallel processing: you can choose among several ways of running EMAN2 programs in parallel, depending on your computing environment.

Which option is best? If you are running on a single machine/node, then Threaded is by far the most efficient option, and also the easiest to use. If you are running on a few nodes of a single cluster, use MPI. In many cases a single cluster node has enough cores that using Threaded parallelism on one cluster node at a time is a reasonable choice: MPI setup can be painful for people not familiar with clusters, while Threaded can be used without any extra configuration.

Please follow the appropriate link:

  • Threaded - This is for use on a single computer with multiple processors (cores), or on a single node of a cluster. EMAN2 can make very efficient use of all of the available cores, but this mode will ONLY work if you are running on a single computer.

  • MPI - This is the standard parallelism method used on virtually all large clusters nowadays. It will require a small amount of custom installation for your specific cluster, even if you are using a binary distribution of EMAN2. Follow this link for more details.

  • Distributed - This was the original parallelism method developed for EMAN2. However, it has not been actively developed or used for at least 10 years, with MPI now preferred for clusters and Threaded preferred for individual computers. It may no longer work at all.

  • --threads option - In addition to --parallel, some commands have a --threads option. A few commands cannot use the generic multi-computer parallelism provided by --parallel, but can still take advantage of multiple cores on a single machine. --threads specifies the number of processors (cores) available on a single computer, and should be specified in addition to --parallel when a command supports both (see the examples after this list).
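
For concreteness, here is a sketch of how these options are typically given on the command line. The program name (e2refine_easy.py), the core counts, the scratch path, and the host/port are illustrative placeholders; see the linked pages for the exact specification string your system requires.

  # Threaded: use 12 cores on the local machine (or one cluster node)
  e2refine_easy.py <other options> --parallel=thread:12 --threads=12

  # MPI: 64 MPI processes; the final field is a scratch directory on each node
  e2refine_easy.py <other options> --parallel=mpi:64:/scratch/username

  # Distributed (DC): connect to a DC server running on localhost, port 9990
  e2refine_easy.py <other options> --parallel=dc:localhost:9990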

Note: All three parallelism options were fully supported and stable as of early 2011. Both MPI and DC (Distributed) have been tested on jobs using at least 256 cores for multiple days, and have been in routine use on large refinement jobs at multiple sites. That said, DC and MPI can both take a little effort to set up on a new system, particularly if you have no past experience with cluster computing. We are happy to help if you have difficulties.
