Recommended procedures

From Relion

The following is what we typically do for each new data set for which we have a decent initial model. (If you don't have an initial model: perform RCT, tomography+sub-tomogram averaging, or (if you really need to) common-lines procedures in a different program).


Getting organised

First of all, make sure you have access to a computing cluster. RELION may yield excellent results, but it does take some serious computing power. Smaller data sets may still be analysed on a single multi-core (e.g. a 16/32-core) Linux machine, but most data sets will benefit from access to at least 64 reasonably up-to-date Linux cores. (RELION does compile on a Mac (after minor tweaking), but there are not many OSX clusters out there...).

Save all your micrographs in one or more subdirectories of the project directory (from where you'll launch the RELION GUI). We like to call these directories "Micrographs/" if all micrographs are in one directory, or "Micrographs_15jan13/" and "Micrographs_23jan13/" if they are in different directories (e.g. because they were collected on different dates). If for some reason you do not want to place your micrographs inside the RELION project directory, then inside the project directory you can also make a symbolic link to the directory where your micrographs are stored.
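The symbolic-link setup above can be sketched as follows (the paths are placeholders; substitute your own project and data locations):

```shell
# Placeholder paths: replace with your own project and micrograph directories.
cd /path/to/relion_project
# Create a link inside the project directory that points at the real data.
ln -s /data/micrographs/15jan13 Micrographs_15jan13
```

RELION will then read the micrographs through "Micrographs_15jan13/" as if they lived inside the project directory.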

Particle selection & preprocessing

Our favourite programs for particle picking are Ximdisp and e2boxer.py. Be careful at this stage: you are probably better at getting rid of bad/junk particles than any of the classification procedures below! So spend a decent amount of time on selecting good particles, be it manually or (semi-)automatically. BTW: if you cannot see your particles this probably means they are not there. In that case: don't bother to use RELION, or any other single-particle reconstruction program. You'll be better off spending your time on improving your sample.

After picking the particles, we use the Preprocessing run-type on the GUI to estimate CTFs with CTFFIND3, and to extract, normalize and invert the contrast of the particles (if necessary, to get white particles). The radius around the particles for normalization should be chosen slightly larger than the actual particle radius, to account for suboptimal centering at this stage.

If you experience any type of problem with RELION when using particles that were extracted (and/or preprocessed) by another program, then BEFORE REPORTING PROBLEMS TO US, PLEASE first try using the Preprocessing procedures through the RELION GUI. Re-doing your preprocessing inside RELION is very fast (it is fully parallelized), and it is the most convenient way to prepare the correct STAR input files.

2D class averaging

We like to Calculate 2D class averages to get rid of bad/junk particles in the data set. Apart from choosing a suitable particle diameter (make sure you don't cut off any real signal, but also try to minimise the noise around your particle), the most important parameters are the number of classes (K) and the regularization parameter T. For cryo-EM we typically aim for at least 150-250 particles per class, so with 3,000 particles we would not use more than K=20 classes. Also, to limit computational costs, we rarely use more than, say, 150 classes even for large data sets. For negative stain, one can use fewer particles per class, say at least 50-100. For cryo-EM we typically use T=2, while for negative stain we use values of 1-2. We typically do not touch the default sampling parameters.
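The particles-per-class rule of thumb above is a simple division; a minimal sketch in shell arithmetic (the particle count here is just the example from the text):

```shell
# With ~150 particles per class as a cryo-EM lower bound,
# 3,000 particles support at most K = 3000 / 150 = 20 classes.
NPARTICLES=3000
MIN_PER_CLASS=150
K=$(( NPARTICLES / MIN_PER_CLASS ))
echo "K=$K"   # prints K=20
```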

Most 2D class averaging runs yield some classes that are highly populated (look for the data_model_classes table in the model.star files for class occupancies), and these classes typically show nice, relatively high-resolution views of your complex in different orientations. Besides these good classes, there are often also many bad classes: these typically contain bad/junk particles. Because junk particles do not average well together, there are often few particles in each bad class, and the resolution of the corresponding class average is thus very low. These classes will look very ugly! We then use awk (see the [[FAQs#How_can_I_select_images_from_a_STAR_file.3F|FAQs page]]) to make a smaller STAR file from which all the bad classes are excluded. The reasoning behind this is that if particles do not average well with the others in 2D class averaging, they will also cause trouble in 3D refinement.
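As a sketch of that selection step (assuming, hypothetically, that rlnClassNumber is column 12 of this particular data.star file — check the loop_ header of your own file, as the column order differs between runs), one could keep the header plus all particles assigned to, say, class 3:

```shell
# Keep header/loop_ lines (fewer than 3 fields) and data lines whose
# assumed class-number column (here: $12) equals 3.
# The file name is an example of RELION's iteration output naming.
awk '{ if (NF < 3 || $12 == 3) print }' run1_it025_data.star > class3.star
```

To exclude several bad classes instead, invert the test (e.g. `$12 != 3 && $12 != 7`).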

Depending on how clean our data is, we sometimes repeat this process 2 or 3 times. Be patient, as 2D class averaging is remarkably slow in RELION... However, having a clean data set is an important factor in getting good 3D classification results.

3D classification

Once we're happy with our data cleaning in 2D, we almost always Classify 3D structural heterogeneity. Remember: ALL data sets are heterogeneous! It is therefore always worth checking to what extent this is the case for your data set. At this stage we use our initial model for the first time. Remember, if it was not reconstructed from the same data set in RELION or XMIPP, it is probably NOT on the correct grey scale. Also, if it was neither reconstructed with CTF correction in RELION nor made from a PDB file, then one should probably also set "Has reference been CTF corrected?" to No. We prefer to start from relatively harsh initial low-pass filters (often 40-60 Angstroms), and typically perform 25 iterations with a regularization factor T=4 for cryo-EM, and T=2-4 for negative stain. (But remember: classifying stain is often a pain due to variations in staining.) For cryo-EM, we prefer to have (on average) at least 5,000-10,000 particles per class. For negative stain, fewer particles per class may be used. We typically do not touch the default sampling parameters, except perhaps for icosahedral viruses, where we use finer angular samplings.
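The per-class particle count above gives a quick sanity check on a planned number of 3D classes; a sketch (the data set size and K are arbitrary examples, not from the text):

```shell
# Check that an average 3D class would still hold >= 5,000 particles (cryo-EM).
NPARTICLES=60000   # example data set size
K=8                # example number of classes
PER_CLASS=$(( NPARTICLES / K ))
if [ "$PER_CLASS" -ge 5000 ]; then
    echo "K=$K is fine: ~${PER_CLASS} particles per class"
else
    echo "K=$K is too high for this data set"
fi
```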

After classification, we use the same awk approach as above to generate separate STAR files for each structural state of interest. Similarly-looking classes may be considered as one structural state at this point. Difference maps (after alignment of the maps in, for example, Chimera) are a useful tool to decide whether two maps are similar or not. In some cases, most often with large data sets, one may choose to classify separate classes further in an additional classification run.

3D refinement

The 3D classes of interest are each refined separately using the 3D-auto-refine procedure. We often use the refined map of the corresponding class as the initial model (or sometimes the original initial model) and we start refinement again from a rather harsh initial low-pass filter, often 40-60 Angstroms. We typically do not touch the default sampling parameters, except for icosahedral viruses where we may start from 3.7 degrees angular sampling and we perform local searches from 0.9 degrees onwards. After 3D refinement, we sharpen the map based on the unfiltered maps that are written out from release 1.2 onwards, as explained on the Analyse results page.

Sometimes, in our hands often with ribosomes, we actually first refine all images against a single model in the 3D auto-refine procedure, and after that perform 3D classification with a fine angular sampling (e.g. 1.8 degrees) and local angular searches (e.g. with a search range of 5 degrees). This only works if the orientations of particles assigned based on a single "consensus" reference are not very different from the orientations assigned based on the properly classified maps (which is usually the case for ribosomes). After this classification, we then again perform a 3D auto-refine on each of the relevant classes.

Movie refinement

We now collect all our data as movies on our fast direct-electron camera, and boost the resolution of our final map by processing movies.

Afterwards

If this is useful for you, please cite RELION (either Scheres (2012) JMB or Scheres (2012) JSB). The relevance of the 0.143 criterion for the gold-standard FSCs used in RELION is described in Scheres & Chen (2012) Nat. Meth. Movie processing is described in Bai et al. (2013) eLife.