Iterative Reconstruction

Parallel MRI techniques utilize the inherent encoding effect of receiver coil sensitivity to complement gradient-driven Fourier encoding. As a consequence of this hybrid encoding approach, parallel techniques require advanced reconstruction algorithms beyond the mere Fourier transform. Generally, taking coil sensitivity into account renders the reconstruction problem more complex and numerically challenging. In the special case of Cartesian k-space sampling, reconstruction can still be accomplished quite efficiently by direct unfolding in the image domain (1, 2). However, non-Cartesian sampling and other complications, such as B0 inhomogeneity, prevent efficient reconstruction with direct methods. In these cases iterative algorithms offer an efficient and effective alternative. The joint action of gradient and sensitivity encoding creates a general linear mapping of the object's signal density. Consequently, image reconstruction may likewise be viewed as a linear mapping of the sample values, yielding the final image. Let this mapping be represented by the reconstruction matrix F. It has one row for each voxel to be resolved and one column for each sample value acquired. Thus its size is N²×(nC nK) for an N×N image matrix, where nC, nK denote the number of receiver coils used and the number of sampling positions in k-space, respectively. The net encoding effect is conversely described by the (nC nK)×N² encoding matrix E, given by

    E_(γ,κ),ρ = s_γ(r_ρ) exp(i k_κ · r_ρ),    [1]

where r_ρ denotes the position of the ρ-th voxel, k_κ the κ-th sampling position in k-space, and s_γ the complex spatial sensitivity of the γ-th coil. Ideally, all signal should be reconstructed at its true origin. Formally, this means that F should be chosen such that its concatenation with the encoding matrix E approaches identity,

    F E ≈ Id,    [2]

or, equivalently, that for each pixel ρ the squared norm of the deviation from the ideal spatial response should approach zero:

    ‖(F E)_ρ − Id_ρ‖² → 0.
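The structure of the encoding matrix can be made concrete with a small numerical sketch. The following is our illustration, not part of the original text: all sizes, the random sensitivities, and the random trajectory are invented, and a 2π factor is used in the exponent so that positions can live on a unit grid. The sketch builds E of Eq. [1] explicitly for a tiny 2D problem and confirms its (nC nK)×N² shape:

```python
import numpy as np

N = 8                                  # image matrix is N x N
n_c, n_k = 4, 48                       # number of coils and k-space samples

rng = np.random.default_rng(0)
# voxel positions r_rho on a unit grid, flattened to N^2 entries
ys, xs = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
r = np.stack([xs.ravel(), ys.ravel()], axis=1) / N         # (N^2, 2)
k = rng.uniform(-N / 2, N / 2, size=(n_k, 2))              # sampling positions k_kappa
# random complex coil sensitivities s_gamma(r_rho), one row per coil
s = rng.standard_normal((n_c, N * N)) + 1j * rng.standard_normal((n_c, N * N))

# E_(gamma,kappa),rho = s_gamma(r_rho) * exp(i 2*pi k_kappa . r_rho)
fourier = np.exp(2j * np.pi * (k @ r.T))                   # (n_k, N^2)
E = (s[:, None, :] * fourier[None, :, :]).reshape(n_c * n_k, N * N)

print(E.shape)                         # (n_c * n_k, N^2), as stated in the text
```

Row (γ, κ) of E is simply the κ-th Fourier row weighted by coil γ's sensitivity, which is exactly the hybrid encoding described above.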
At the same time, the noise variance of each pixel value should be minimized:

    X_ρ = (F Ψ F^H)_ρ,ρ → min,    [3]

where Ψ denotes the noise covariance of the input data. Eqs. [2] and [3] form two competing goals, requiring a careful trade-off between signal fidelity and noise behavior. There are many possible ways of formalizing this trade-off. The most practical one is minimizing the weighted sum of the two terms,

    ‖(F E)_ρ − Id_ρ‖² + α (F Ψ F^H)_ρ,ρ → min,    [4]

where α is the relative weight of noise in the joint minimization.
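The competing nature of the two goals can be demonstrated numerically. In the sketch below (our illustration: the random E, unit noise covariance, and problem sizes are all assumed), the closed-form minimizer F = (E^H Ψ⁻¹ E + α Id)⁻¹ E^H Ψ⁻¹, which follows from standard regularized least-squares algebra, is evaluated for increasing α; fidelity error grows while noise variance shrinks:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_voxels = 48, 16
E = rng.standard_normal((n_samples, n_voxels)) + 1j * rng.standard_normal((n_samples, n_voxels))
Psi = np.eye(n_samples)                # white noise for simplicity
Psi_inv = np.linalg.inv(Psi)

for alpha in (0.0, 1.0, 10.0):
    # closed-form minimizer of the weighted sum (our addition, not from the text)
    F = np.linalg.inv(E.conj().T @ Psi_inv @ E + alpha * np.eye(n_voxels)) @ E.conj().T @ Psi_inv
    fidelity_err = np.linalg.norm(F @ E - np.eye(n_voxels)) ** 2    # deviation from Eq. [2]
    noise_var = np.real(np.trace(F @ Psi @ F.conj().T))             # total of Eq. [3]
    print(f"alpha={alpha:5.1f}  fidelity error={fidelity_err:10.6f}  noise variance={noise_var:10.6f}")
```

At α = 0 the spatial response is exact but noise is largest; raising α dampens noise at the cost of spatial-response fidelity, which is precisely the trade-off of Eqs. [2] and [3].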
Applying this reconstruction matrix to a vector of input data, d, yields the image

    I = F d.

Equivalently, I is the solution of the linear system

    (E^H Ψ⁻¹ E + α Id) I = E^H Ψ⁻¹ d.

For solving such a system, numerical mathematics offers many iterative algorithms. An iterative algorithm calculates a progression of images that converges towards the exact reconstruction. A typical implementation, based on the conjugate gradient (CG) method, is sketched in Fig. 1 (assuming α = 0, Ψ = Id for simplicity) (3). In each iteration loop it combines coil-wise multiplication with the coil sensitivities (S_i), fast Fourier transform (FFT), and gridding operations in k-space. It is important to note that switching back and forth between the image domain and k-space is crucial for making the iteration loop efficient.
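A minimal dense-matrix sketch of this iteration follows. It is our illustration, not the article's implementation: the small random E stands in for the FFT-and-gridding forward mapping of Fig. 1, and α = 0, Ψ = Id as assumed there, so CG is applied to E^H E I = E^H d:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_voxels = 64, 16
E = rng.standard_normal((n_samples, n_voxels)) + 1j * rng.standard_normal((n_samples, n_voxels))
true_img = rng.standard_normal(n_voxels)
d = E @ true_img                        # noiseless synthetic data

def cg_sense(E, d, n_iter=50, tol=1e-10):
    """Conjugate gradients on the normal equations E^H E x = E^H d."""
    b = E.conj().T @ d                  # right-hand side E^H d
    x = np.zeros_like(b)
    r = b.copy()                        # residual
    p = r.copy()                        # search direction
    rs = np.real(np.vdot(r, r))
    for _ in range(n_iter):
        Ap = E.conj().T @ (E @ p)       # forward mapping, then its adjoint
        alpha = rs / np.real(np.vdot(p, Ap))
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.real(np.vdot(r, r))
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

img = cg_sense(E, d)
print(np.allclose(img, true_img, atol=1e-5))   # True
```

The key point survives even in this toy version: each loop only ever applies the forward mapping E and its adjoint E^H, never an explicit inverse; in the real algorithm those two products are exactly the sensitivity-weighted FFT and gridding steps of Fig. 1.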
Multiplication by coil sensitivity is computationally cheap only in the image domain, while gridding is cheap only in k-space. In total, the operation count of one loop is approximately twice that of a conventional gridding reconstruction from array data without k-space undersampling. An example of iterative reconstruction is shown in Fig. 2.

Several measures have been proposed for further numerical optimization of iterative procedures like the one shown above. One is the use of equalizing filters both in k-space and the image domain, derived from the general numerical concept of preconditioning (3). In this fashion the convergence speed of the iteration can be enhanced. The number of iterations required can also be reduced by calculating an approximate solution with a direct method and starting the iteration from there (4). Another option is replacing the pairs of gridding operations in Fig. 1 by a single equivalent k-space filter, which together with the FFT modules forms a fast convolution (5). The downside of this elegant approach is that the filter needs to be applied at twice the k-space density, which eats up much of the computation time savings (6).
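The effect of preconditioning can be illustrated in miniature. The sketch below is our illustration, not the equalizing filters of ref. (3): a simple diagonal (Jacobi) preconditioner M = diag(A) is applied to a synthetic, badly scaled system A x = b, which stands in for the un-equalized reconstruction problem:

```python
import numpy as np

rng = np.random.default_rng(4)
E = rng.standard_normal((64, 16)) * np.logspace(0, 3, 16)  # badly scaled columns
A = E.T @ E                                                # ill-conditioned normal matrix
b = A @ rng.standard_normal(16)

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned CG; returns the solution and the iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv * r                      # apply the (diagonal) preconditioner
    p = z.copy()
    rz = r @ z
    for i in range(1, max_iter + 1):
        Ap = A @ p
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, i
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

_, plain = pcg(A, b, np.ones(16))          # no preconditioning
_, jacobi = pcg(A, b, 1.0 / np.diag(A))    # Jacobi preconditioning
print(plain, jacobi)
```

Rescaling by the diagonal flattens the spectrum of A, so the preconditioned run needs fewer iterations for the same residual tolerance; the equalizing filters of the text play the analogous role for the actual encoding operator.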
The strength of iterative methods is that they translate inverse problems into the much simpler task of performing forward mappings. As a consequence, iterative algorithms remain quite efficient even when the encoding mechanism grows more sophisticated. For instance, the effects of B0 inhomogeneity can readily be included in Eq. [1] and incorporated into the forward mappings (7, 8). Finally, iterative approaches are also a powerful means of incorporating prior knowledge, e.g. in the form of phase constraints (9, 10).
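As a final sketch of this flexibility (our illustration; the 1D geometry, off-resonance map, and timing are all invented), B0 inhomogeneity can be folded into the forward mapping by giving each sample κ, acquired at time t_κ, an extra phase ω(r_ρ) t_κ on top of the Fourier term of Eq. [1]:

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_k = 16, 32
r = np.linspace(0.0, 1.0, n_voxels)                    # 1D voxel positions
k = rng.uniform(-8.0, 8.0, n_k)                        # k-space trajectory
t = np.linspace(0.0, 5e-3, n_k)                        # acquisition time of each sample (s)
omega = rng.uniform(-100, 100, n_voxels) * 2 * np.pi   # off-resonance map (rad/s)

def forward(img, with_b0=True):
    """Single-coil forward mapping d_kappa = sum_rho img_rho exp(i k_kappa r_rho [+ i omega(r_rho) t_kappa])."""
    phase = np.outer(k, r)
    if with_b0:
        phase = phase + np.outer(t, omega)             # B0-induced phase evolution
    return np.exp(1j * phase) @ img

img = rng.standard_normal(n_voxels)
print(np.allclose(forward(img, True), forward(img, False)))   # False: B0 changes the data
```

Because the iteration only ever calls this forward mapping and its adjoint, accounting for B0 costs no change to the solver itself, which is exactly the point made above.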