Question: I heard that someone did a refinement in (insert favorite program) and got a higher resolution structure than they did in EMAN. Is this true?
Answer: This is a very complicated issue, and the answer depends a great deal on your relative skill with the different packages, as well as your understanding of some subtle concepts. The first question is what your friend meant by 'better resolution'. People in cryo-EM often confuse resolvability with resolution. These are two VERY different terms. In single particle processing, resolution typically refers to the result of a test in which the data are split into even and odd halves, a reconstruction is computed from each half, and the two are compared using a Fourier shell correlation (FSC) curve. Fundamentally, this measures the signal-to-noise ratio in your model; that is, it measures the resolution at which your model begins to look noisy. Resolvability is a measure of how close together two blobs can be in your model while still being discerned as two distinct objects. The trick is that you can apply an arbitrary low-pass filter (blurring) to your structure, making the resolvability much worse, while having absolutely no impact on the measured resolution of the structure.
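The even/odd FSC test described above can be sketched in a few lines of NumPy. This is purely illustrative, not EMAN code; the function name `fsc` and the simple linear shell binning are my own choices:

```python
import numpy as np

def fsc(map1, map2, n_shells=8):
    """Fourier shell correlation between two half-maps (illustrative).

    Returns one correlation value per spherical shell in Fourier space;
    a value near 1 means the two half-maps agree (high SNR) in that
    frequency band, a value near 0 means the band is dominated by noise.
    """
    f1 = np.fft.fftshift(np.fft.fftn(map1))
    f2 = np.fft.fftshift(np.fft.fftn(map2))
    # Radial distance of every voxel from the Fourier-space origin
    grid = np.indices(map1.shape)
    center = np.array([s // 2 for s in map1.shape]).reshape(-1, 1, 1, 1)
    r = np.sqrt(((grid - center) ** 2).sum(axis=0))
    max_r = map1.shape[0] // 2
    shells = np.linspace(0, max_r, n_shells + 1)
    curve = []
    for lo, hi in zip(shells[:-1], shells[1:]):
        mask = (r >= lo) & (r < hi)
        num = (f1[mask] * np.conj(f2[mask])).sum()
        den = np.sqrt((np.abs(f1[mask]) ** 2).sum() *
                      (np.abs(f2[mask]) ** 2).sum())
        curve.append(float(np.real(num) / den) if den > 0 else 0.0)
    return np.array(curve)
```

Note that the FSC of a map against itself is 1.0 in every shell, which is exactly why the even/odd split must be done carefully: any shared bias between the halves (a common reference model, for example) pushes the curve toward 1 and flatters the resolution.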
Sometimes someone looks at their final structure, sees all of these beautiful 'high resolution' features, then looks at the structure from another reconstruction package, sees some of those features blurred out, and concludes that one package did a better job reconstructing the model than the other. This is simply not true. Different packages handle CTF amplitude correction and B-factor correction in very different ways. It may well be that if you take the 'blurrier' structure and apply a small inverse B-factor correction, the structure that originally looked blurrier is actually better.
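To make the B-factor point concrete, here is a minimal sketch of inverse B-factor (sharpening) correction in Fourier space, assuming a cubic map; the function name `sharpen` is mine and does not correspond to any particular package's API:

```python
import numpy as np

def sharpen(volume, bfactor, apix=1.0):
    """Scale Fourier amplitudes by exp(-B * s^2 / 4), where s is spatial
    frequency in 1/Angstrom. A negative B boosts high frequencies
    (sharpening); a positive B damps them (blurring). apix is the
    sampling in Angstrom per voxel; the map is assumed cubic."""
    n = volume.shape[0]
    f = np.fft.fftn(volume)
    freq = np.fft.fftfreq(n, d=apix)            # 1/Angstrom along each axis
    sx, sy, sz = np.meshgrid(freq, freq, freq, indexing="ij")
    s2 = sx**2 + sy**2 + sz**2
    f *= np.exp(-bfactor * s2 / 4.0)
    return np.real(np.fft.ifftn(f))
```

Applying `sharpen(map, -100.0)` to the 'blurrier' map changes only the amplitude falloff, not the phases, so it restores visible detail without altering the underlying information content, exactly the reason visual blurriness is a poor basis for comparing packages.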
The next issue is initial model/noise bias. Different reconstruction strategies are susceptible to this problem to different degrees. EMAN incorporates an iterative class-averaging procedure designed specifically to make it almost completely insensitive to this problem; this behavior is controlled by the classiter= option in refine. If you run your final high-resolution refinement with classiter=8, your reconstruction will end up significantly blurrier than it could be. On the other hand, if you refine with classiter=0, which is equivalent to what most other single particle reconstruction programs do, you can end up with a substantially exaggerated resolution (and incorrect features in your model that you may be tempted to interpret). In general, in EMAN, we suggest starting the early refinement rounds with classiter= in the 5-8 range to eliminate model bias and improve the convergence rate. Then, for your final refinement, drop this value to 3 (the smallest permissible value other than the special case 0, which disables the routine). While this will not produce the absolute highest possible resolution, it will be very close, and it will largely prevent any model bias from creeping in. It may also be safe at some stage to run a few iterations with classiter=0 to produce the highest possible resolution structure, but if you do so, you must run a range of additional tests to demonstrate the reliability of your model.
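As a rough illustration of that schedule, the following sketch generates a classiter= value per round, stepping down from the bias-suppressing range to the recommended final value of 3. It is not an EMAN utility; the function name and the linear step-down are my own, and the values would be passed to refine by hand:

```python
def classiter_schedule(n_rounds, early=8, final=3):
    """Suggested classiter= value for each refinement round: start high
    (5-8) to suppress initial-model bias, end at 3 for the final round.
    Purely illustrative of the advice in the text."""
    if n_rounds < 2:
        return [final] * n_rounds
    step = (early - final) / (n_rounds - 1)
    return [max(final, round(early - i * step)) for i in range(n_rounds)]
```

For a six-round refinement this yields 8, 7, 6, 5, 4, 3: heavy bias suppression while the model is still converging, minimal blurring at the end.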
Another issue relates to angular sampling. In EMAN, and in other programs that classify particles against reference projections of your 3-D model, resolution can be strongly affected by the angular sampling. Perversely, using a coarser angular sampling (a larger angular step), which rotationally 'blurs' your structure, will actually produce a better measured resolution by standard methods. Why? When you coarsen the angular sampling, you average more particles together in each orientation, reducing the noise level in that orientation. Ideally we would use very fine angular sampling in conjunction with maximum likelihood methods to optimally weight the data contributions at different angles. Lacking that, however, it is possible to exaggerate your resolution (or underestimate it, in the opposite case) through bad choices of angular spacing.
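The trade-off can be sketched numerically. The solid-angle estimate below is a standard back-of-the-envelope approximation for a C1 (asymmetric) particle, and the function names are hypothetical:

```python
import math

def projection_count(ang_step_deg):
    """Rough number of reference projections needed to cover the
    asymmetric unit of a C1 object (a hemisphere, 2*pi steradians)
    at a given angular step, using the approximation
    count ~ solid angle / step^2."""
    step = math.radians(ang_step_deg)
    return int(round(2 * math.pi / step ** 2))

def particles_per_class(n_particles, ang_step_deg):
    """Average particles averaged into each class at this sampling."""
    return n_particles / projection_count(ang_step_deg)
```

Tripling the angular step cuts the number of classes by roughly a factor of nine, so each class average contains about nine times as many particles: less noise per class (hence a 'better' FSC), but more rotational blurring of the real structure.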
One final issue is how resolution is measured. Ideally, people would split their raw data into two halves, then run a full refinement on each set, testing both for noise levels and for model bias. However, this is not what most people do. Typically, people go to the final refinement step (3-D reconstruction) and perform two reconstructions, one from the even-numbered particles and one from the odd-numbered particles. In this case, though, all of the Euler angles are fixed, having been generated by the refinement performed on the full data set. The EMAN eotest program does something between these two extremes. While it does make use of the particle classification (the two determined Euler angles) from the final refinement iteration, it reruns the iterative 2-D class averaging process to reduce model-bias problems in the resolution estimate. Again, if classiter is set high, a worse resolution estimate will result. Even if classiter is set to 0 in eotest, a slightly worse resolution estimate may be produced if it was not also 0 in the final iteration of refine.
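Whichever even/odd scheme produces the FSC curve, the quoted resolution number is then read off at a threshold. A minimal sketch follows; the 0.5 criterion shown is one common convention (0.143 is another), and the function is hypothetical:

```python
def resolution_at_threshold(fsc_curve, apix, box_size, threshold=0.5):
    """Read a resolution estimate (in Angstrom) from an FSC curve.

    fsc_curve: one correlation value per Fourier shell, with shell i
    taken at spatial frequency (i + 1) / (box_size * apix).
    Returns the resolution at the first crossing below the threshold."""
    for i, corr in enumerate(fsc_curve):
        if corr < threshold:
            freq = (i + 1) / (box_size * apix)   # 1/Angstrom
            return 1.0 / freq
    # Curve never dropped below threshold: limited by Nyquist
    return 2.0 * apix
```

Because the whole estimate reduces to one threshold crossing, anything that artificially inflates the curve (shared Euler angles, model bias, coarse sampling) translates directly into an artificially good-looking number.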
So, the final answer is: it depends on how you use each software package, and you need to understand the terms you are using. In the end, the real question is 'how detailed are the features I can reliably interpret in my map?' That may give you a different answer than comparing 'resolutions'.