The GSoC Experience and Project Summary

Model Fitting using Microstructure Imaging of Crossing (MIX): DIPY

Abstract

Diffusion MRI measures water diffusion in biological tissue, which can be used to probe its microstructure. The most common model for water diffusion in tissue is the diffusion tensor (DT), which assumes a Gaussian distribution. This assumption of Gaussian diffusion oversimplifies the diffusive behavior of water in complex media and is known experimentally to break down for relatively large b-values. DT derived indices, such as mean diffusivity or fractional anisotropy, can correlate with major tissue damage, but lack sensitivity and specificity to subtle pathological changes.

Microstructure Imaging of Crossing (MIX) is versatile and thus suitable for a broad range of generic multicompartment models, in particular for brain areas where axonal pathways cross.

Multicompartment models, which assess the variability of diffusion in sub-voxel regions, enable the estimation of more specific indices, such as axon diameter, density, orientation, and permeability, and so potentially give much greater insight into tissue architecture and sensitivity to pathology.

The goal of model fitting: Identify which model compartments are essential to explain the data and which parameters are potentially estimable from a particular experiment.

As a part of GSoC, I worked on model fitting with the Neurite Orientation Dispersion and Density Imaging (NODDI) model, implemented using the MIX framework.

Achievements and Benchmarks

The MIX framework for microstructure imaging is a novel and advanced technique, but it requires a long fitting time. As reported in the MIX paper, the MATLAB implementation took 191.8 seconds to fit, whereas the DIPY implementation that I worked on is significantly faster, taking only 10-14 seconds.

This means that the current implementation is roughly 14-19x faster than the state of the art.

Pull Requests with Detailed Descriptions:

https://github.com/nipy/dipy/pull/1600 -> Branch: https://github.com/ShreyasFadnavis/dipy/tree/noddix_speed

https://github.com/nipy/dipy/pull/1614 -> Branch: https://github.com/ShreyasFadnavis/dipy/tree/noddix_gsoc

The above-mentioned PRs contain:
  • Main code for the NODDIx model (contained in: dipy/dipy/reconst)
  • Simulation and fitting of the simulated signal (contained in: dipy/dipy/sims)
  • Tests for the model (contained in: dipy/dipy/reconst/tests)
  • An example for NODDIx using HCP data (contained in: dipy/docs/examples [only present in the noddix_speed branch])

Link to submission via the PSF GSoC blog:
https://blogs.python-gsoc.org/shreyas-fadnavis/2018/08/10/the-gsoc-experience-and-project-summary/

Challenging aspects of the work

My project was particularly challenging, as it was geared towards research in mathematical optimization and model fitting, which I had not worked on before.

I feel that working with DIPY under PSF was one of the most amazing experiences, as I really learnt how to overcome the following challenges and contribute quality code to DIPY:

  1. Optimizing scientific computing code using approximations without losing model accuracy.
  2. Better coding practices and software design for new models.
  3. Fitting the model parameters in less time so that the model is practical for analysis.
  4. Understanding and implementing Cython modules to remove Python bottlenecks.
  5. Line-by-line profiling to find and remove bottlenecks in the code.

As a part of this project, I have worked on the following [links to the blogs written at each step]:

Implementing the NODDI model for fitting 2 Fiber Crossings using MIX

The NODDI model combines normalized signals from intracellular, extracellular, and isotropic compartments.

The estimated dMRI signal Ŝ comprises the normalized signals from the following three compartments:

Ŝ = (1 − v_iso) (v_ic S_ic(OD, θ, φ) + (1 − v_ic) S_ec(d, θ, φ)) + v_iso S_iso

where S_ic and S_ec are the normalized signals from intracellular and extracellular compartments respectively.

Parameters to be estimated: six (v_ic, v_iso, OD, d, θ, φ)

Noise: Rician noise is added to the signal for each substrate, for different noise realizations.
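For orientation, here is a minimal sketch of how the compartment signals combine according to the equation above; the pre-computed compartment signals and volume fractions below are made-up toy values, not output of the actual DIPY compartment functions:

    import numpy as np

    def noddi_signal(v_ic, v_iso, s_ic, s_ec, s_iso):
        # Combine normalized compartment signals exactly as in the equation above:
        # S_hat = (1 - v_iso) * (v_ic * S_ic + (1 - v_ic) * S_ec) + v_iso * S_iso
        return (1.0 - v_iso) * (v_ic * s_ic + (1.0 - v_ic) * s_ec) + v_iso * s_iso

    # Toy usage with made-up, pre-computed compartment signals for 10 measurements:
    s_ic = np.full(10, 0.8)
    s_ec = np.full(10, 0.5)
    s_iso = np.full(10, 0.2)
    signal = noddi_signal(v_ic=0.6, v_iso=0.1, s_ic=s_ic, s_ec=s_ec, s_iso=s_iso)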

To visualize each of the compartments mentioned in the formulation above, please take a look at the following figure:

Ref: https://users.fmrib.ox.ac.uk/~michielk/phdthesis.html

The above diagram is representative of a single fiber.

To further expand the above model to 2 fiber orientation crossings, the following formulation has been implemented:

Visual Representation of almost 0 ODs:

Parameters to be estimated: 13, but for the implementation we use only 11, since some of the parameters are reused.

The parameters of the model can be visualized as:

The signal was estimated with the model above and then fitted with SHORE (an analytical basis). We could then  visualize the fiber orientations using SHORE’s Orientation Distribution Functions (ODFs) as follows:

[Note: the steps to implement the above simulations and visualize the signals have been explained in detail below]

Implementation of NODDIx using MIX framework of Optimization

The Microstructure Imaging of Crossings (MIX) framework is a novel and robust method that uses a 3-step optimization process. It enables fitting existing biophysical models with improved accuracy by utilizing the Variable Separation Method (VSM) to distinguish the parameters that enter the model linearly from those that enter non-linearly. The estimation of the non-linear parameters is a non-convex problem and is handled first, using Differential Evolution, since it is effective in approximating exponential time-series models.

Estimating the linear parameters amounts to a convex problem and can be solved using standard least-squares techniques. These parameter estimates then provide a starting point for a Trust Region method in the search for a refined solution.

4 Steps involved in Implementing MIX:

Step 1 – Variable Separation: The objective function has a separable structure which can be exploited to separate the variables using the Variable Separation (VarPro) method. We can rewrite our objective function as a projection using the Moore-Penrose inverse (pseudoinverse) and obtain the variable projection functional.

This is a mathematically well-formed method that uses variable projections to transform the coupled computations between the variables of the NODDIx model into a form where they can be fitted separately.

Taking advantage of this special structure of the model, the method of variable projections eliminates the linear variables, yielding a somewhat more complicated function that involves only the non-linear parameters.

This procedure not only reduces the dimension of the parameter space but also results in a better-conditioned problem. The same optimization method applied to the original and reduced problems will always converge faster for the latter.
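To make the idea concrete, here is a minimal sketch of a variable projection functional; the mono-exponential design matrix is only a toy stand-in for the actual NODDIx compartment signal functions:

    import numpy as np

    def compartment_matrix(x, bvals):
        # Toy design matrix: one column of predicted signal per compartment.
        # Each compartment is a simple mono-exponential decay whose rate comes
        # from the non-linear parameter vector x (a stand-in for the full
        # NODDIx compartment signals).
        return np.exp(-np.outer(bvals, x))      # shape (n_measurements, n_compartments)

    def varpro_cost(x, signal, bvals):
        # Variable projection functional: for fixed non-linear parameters x,
        # the optimal linear volume fractions are given in closed form via the
        # Moore-Penrose pseudoinverse, so only x remains to be searched over.
        phi = compartment_matrix(x, bvals)
        f_hat = np.linalg.pinv(phi) @ signal    # least-squares linear weights
        residual = signal - phi @ f_hat
        return np.sum(residual ** 2), f_hat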

Further literature for this method can be found here:

Step 2 – Stochastic search for non-linear parameters ‘x’: The objective function is non-convex, and in particular of non-linear least-squares form. Any gradient-based method employed to estimate the parameters depends critically on a good starting point, which is unknown. An alternative approach is a regular grid search, which is time consuming and adds computational burden. This type of problem therefore points towards stochastic search methods like Differential Evolution (DE). For time-series analysis, DE can be used efficiently for sums of exponential functions. The DE parameters can be varied for each selected biophysical model, and the time complexity may change with each choice.

I have written a separate blog post on the implementation of DE, with a detailed explanation of how it works and how it behaves for NODDIx.

Link: https://blogs.python-gsoc.org/shreyas-fadnavis/2018/07/23/differential-evolution-for-noddix
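As an illustration (not the exact DIPY code), a DE search over a toy variable-projection cost could look as follows; the two-compartment model, the bounds, and the simulated signal are all illustrative assumptions:

    import numpy as np
    from scipy.optimize import differential_evolution

    bvals = np.linspace(0.0, 3.0, 30)           # toy acquisition

    def cost(x, signal):
        # Variable-projection cost for a toy two-compartment model: the linear
        # weights are eliminated in closed form, so DE searches only over x.
        phi = np.exp(-np.outer(bvals, x))
        f_hat = np.linalg.pinv(phi) @ signal
        return np.sum((signal - phi @ f_hat) ** 2)

    # Simulate a noisy toy signal with known parameters.
    rng = np.random.default_rng(0)
    signal = np.exp(-np.outer(bvals, [0.5, 2.0])) @ [0.7, 0.3] \
        + 1e-3 * rng.standard_normal(bvals.size)

    # Stochastic global search over the non-linear parameters only; these bounds
    # are illustrative, not the ones used in the DIPY NODDIx implementation.
    result = differential_evolution(cost, bounds=[(0.1, 5.0)] * 2,
                                    args=(signal,), seed=0, polish=False)
    print(result.x)                             # close to [0.5, 2.0] (possibly swapped)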

Step 3 – Constrained search for linear parameters ‘f’: After estimating the parameters ‘x’, estimating the linear parameters ‘f’ is a constrained linear least-squares problem. I have made use of the cvxpy optimizer to perform this constrained search for the f’s of the model compartments.
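A minimal cvxpy sketch of such a constrained fit is shown below; the non-negativity and sum-to-one constraints on the volume fractions are assumptions for illustration, and the exact constraints in the DIPY code may differ:

    import numpy as np
    import cvxpy as cp

    # Toy design matrix, standing in for the compartment signals evaluated at
    # the non-linear estimates 'x' from Step 2.
    rng = np.random.default_rng(0)
    phi = rng.random((30, 3))
    signal = phi @ np.array([0.5, 0.3, 0.2]) + 1e-3 * rng.standard_normal(30)

    f = cp.Variable(3)
    objective = cp.Minimize(cp.sum_squares(phi @ f - signal))
    constraints = [f >= 0, cp.sum(f) == 1]      # assumed volume-fraction constraints
    problem = cp.Problem(objective, constraints)
    problem.solve()

    print(f.value)                              # approximately [0.5, 0.3, 0.2]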

Step 4 – Non-Linear Least Squares (NLLS) estimation using a Trust Region method: Steps 2 and 3 give a reliable initial guess for both ‘x’ and ‘f’, which serves as the initial value for the Trust Region method. This has been implemented using the Levenberg-Marquardt method from SciPy’s optimize module to perform the NLLS fitting.
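A sketch of this refinement on the same toy model, using scipy.optimize.least_squares (here with the bounded trust-region-reflective solver; the solver choice and bounds are illustrative):

    import numpy as np
    from scipy.optimize import least_squares

    bvals = np.linspace(0.0, 3.0, 30)
    rng = np.random.default_rng(0)
    signal = np.exp(-np.outer(bvals, [0.5, 2.0])) @ [0.7, 0.3] \
        + 1e-3 * rng.standard_normal(bvals.size)

    def residuals(params, signal, bvals):
        # Residuals of the toy model; params = [x1, x2, f1, f2], i.e. the
        # non-linear decay rates followed by the linear volume fractions.
        x, f = params[:2], params[2:]
        return np.exp(-np.outer(bvals, x)) @ f - signal

    # Initial guess assembled from the Step 2 ('x') and Step 3 ('f') estimates.
    p0 = np.array([0.6, 1.8, 0.65, 0.35])
    fit = least_squares(residuals, p0, args=(signal, bvals),
                        bounds=(0.0, 5.0), method='trf')
    print(fit.x)                                # refined estimates of x and f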

Testing and Running my code for NODDIx model in DIPY

Here is the link to the code for NODDIx implemented in DIPY: https://github.com/ShreyasFadnavis/dipy/tree/noddix_speed [1]

Steps to install DIPY:

  1. Install Anaconda: https://anaconda.org. It's free: you just need to create an account and you should be good to go!
  2. In the Windows search, look for: Anaconda Prompt.
  • This should open up a terminal where you will need to operate from. (Try to avoid the normal Windows command prompt, as Anaconda uses its own virtual environment.)
  3. Install DIPY's dependencies:
  •   conda install vtk
  •   You may need to install cvxpy (a convex optimization library) using: conda install -c cvxgrp cvxpy (on Windows)

Following are the steps you need to take to run the code:

I am not sure how well versed you are with Git, but all you need to do is:

  • Click on the above link in (1).
  • Clone the repository:
    • A green button will appear in the top right corner of the GitHub web page, which will give you a link. Copy that link.
    • Open the command prompt and run: git clone <link copied from the above step>
  • This should create a folder in your local file system, which we need to compile and install with the following commands:
    • cd path/to/dipy
    • Once inside the main dipy folder, run: pip install --user -e .

Dabbling with the NODDIx code:

  • Anaconda will have already installed an IDE (editor) called Spyder. You can access it from the default Windows search.
  • Once inside Spyder, click on open file and navigate to: ./dipy/dipy/reconst
  • Inside reconst is the file which contains the NODDIx code, namely:
    • NODDIx.py (has some code in Cython: noddi_speed.pyx)

The sim_noddix.py file contains the simulation and the fitting of the simulated signal using the model written in NODDIx.py.

All you need to do is hit the RUN button in the navbar of the Spyder IDE and it will return:

  • Actual Parameters (11)
  • Fitted Parameters (11)
  • List of Errors in the Estimation (for each param) (11)
  • Sum of all errors

To run the simulations and visualize the fiber orientation crossings, please navigate to: ./dipy/dipy/sims

Within sims, you should find a file named sim_noddix.py, which will simulate the signal and visualize it with SHORE. The code contains the error functions and the functions to visualize the data. They are as follows:

  • show_with_shore(gtab, reconst_signal) -> to visualize the simulated signal
  • sim_voxel_fit(reconst_signal) -> to fit the simulation of 1 voxel

Fitting the Real Data With NODDIx Model

Now it’s time to see how the model works on real data. To do so, I am working on an example using a subject from the HCP (Human Connectome Project) dataset, which is publicly available online.

The whole brain takes a long time to fit, so I have used a mask to fit only the region of the brain containing the corpus callosum to see how the model works.

Here is the link you can use to test the model with your own data by just replacing the file paths: https://github.com/ShreyasFadnavis/dipy/blob/noddix_speed/doc/examples/example_noddi_hcp.py

Next Steps:

Approximately 3000 lines of code have been written so far for the NODDIx model. Next steps include:

  • Creating an example for the DIPY gallery to help new users use the MIX module.
  • Extending the 2-crossing model to 1 and 3 crossings.

Simulating dMRI Data using CAMINO


Data simulation forms a crucial component of any data-driven experiment that deals with model fitting, since the simulated data provide the ground truth to compare against. In my project, I will be working with the following two tools for data simulation:

  • UCL Camino Diffusion MRI Toolkit
  • DIPY Simulations (… Obviously!)

This post will cover Camino first; I aim to get into DIPY in the next post!

The most confusing part of the Camino documentation is understanding what the ‘scheme’ file is really made up of, because it needs to be passed as a parameter to the ‘datasynth’ command, which we will look at for data simulation.

Scheme files accompany DWI data and describe the imaging parameters used in image processing. For most users, this means the gradient directions and the b-value of each measurement in the sequence.

Once you have this information, you can use the CAMINO commands described below to generate scheme files.

  • Comments are allowed; comment lines must start with ‘#’.
  • The first non-comment line must be a header stating “VERSION: <version>”. In our case:
VERSION: BVECTOR
  • After removing comments and the header, measurements are described in order, one per line. The order must correspond to the order of the DWI data.
  • Entries on each line are separated by spaces or tabs.

The BVECTOR is the most common scheme format. Each line consists of four values: the (x, y, z) components of the gradient direction followed by the b-value. For example:

   # Standard 6 DTI gradient directions, [b] = s / mm^2
  VERSION: BVECTOR
   0.000000   0.000000   0.000000   0.0
   0.707107   0.000000   0.707107   1.000E03
  -0.707107   0.000000   0.707107   1.000E03
   0.000000   0.707107   0.707107   1.000E03
   0.000000   0.707107  -0.707107   1.000E03
   0.707107   0.707107   0.000000   1.000E03
  -0.707107   0.707107   0.000000   1.000E03

If the measurement is unweighted, its gradient direction should be zero. Otherwise, the gradient directions should be unit vectors, followed by a scalar b-value. The b-value can be in any units; units are defined implicitly, and in the above example we have used s / mm^2. The choice of units affects the scale of the output tensors: if we used this scheme file, we would get tensors in units of mm^2 / s. We could change the units of b to s / m^2 by scaling the b-values by 1E6; our reconstructed tensors would then be specified in units of m^2 / s.
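As a small Python sketch of this bookkeeping (assuming a BVECTOR scheme file like the one above has been saved as 59.scheme), one could read the scheme and rescale the b-values from s / mm^2 to s / m^2 like this:

    import numpy as np

    rows = []
    with open("59.scheme") as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments and the VERSION header.
            if not line or line.startswith("#") or line.startswith("VERSION"):
                continue
            rows.append([float(value) for value in line.split()])

    scheme = np.array(rows)         # columns: gx, gy, gz, b
    scheme[:, 3] *= 1e6             # rescale b-values: s/mm^2 -> s/m^2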

Finding the information for the scheme file

The best way to find the information for your scheme file is to talk to the person who programmed your MRI sequence. There is software that can help you recover the b-values and gradient vectors from DICOM or other scanner-specific data formats; for example, the dcm2nii program will attempt to recover b-values and vectors in FSL format.

Converting to Camino format

If you have a list of gradient directions, you can convert them to Camino format by hand or by using pointset2scheme. If you have FSL style bval and bvec files, you can use fsl2scheme. See the man pages for more information.

 

Simulating the Data

Finally! Now that we know what the scheme files are, let’s look at how to simulate the voxels…

I will be making use of the two utilities which I feel are relevant to my project, and will test the simulation functionality using the 59.scheme file from the Camino website tutorial.

1. Synthesis Using Analytic Models

 

This uses Camino to synthesize diffusion-weighted MRI data with the white matter analytic models.

The method is explained in detail in (Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081).

The following example synthesizes data using the three-compartment model “ZeppelinCylinderDot”, which has an intra-axonal compartment of a single radius, a cylindrically symmetric tensor for the extra-axonal space, and a stationary third compartment.

Example:

datasynth -synthmodel compartment 3 CYLINDERGPD 0.6 1.7E-9 0.0 0.0 4E-6 zeppelin 0.1 1.7E-9 0.0 0.0 2E-10 Dot -schemefile 59.scheme -voxels 1 -outputfile ZCD.Bfloat

2. Crossing cylinders using the Monte Carlo Diffusion Simulator

This simulator allows the simulation of diffusion in environments ranging from simple to extremely complex, called “substrates”. We will be looking at the crossing-fibres substrates for now.

A substrate is envisaged to sit inside a single voxel, with spins diffusing across it. The boundaries of the voxel are usually periodic so that the substrate defines an environment made up of an infinite, 3D array of whatever you specify. The measurement model in the simulation does not capture the trade-off between voxel size and SNR and hence simulation "voxels" can be quite a bit smaller than those in actual scans. This simulation is, and has always been, intended as a tool to simulate signals due to sub-voxel structure, rather than large spatially-extended structures. [- UCL Camino Docs]
Crossing Cylinders

A situation that is often of interest in diffusion MR research is where we have more than one principal fibre direction. The simulation is able to model crossing fibres with a specified crossing angle. This substrate contains two populations of fibres in interleaved planes. One population is parallel to the z-axis and the other is rotated about the y-axis by a given angle with respect to the first.

Cylinders on this substrate are arranged in parallel layers one cylinder thick in the xz-plane: a plane of cylinders parallel to the z-axis, then a plane rotated with respect to the first, then another parallel to the z-axis, and so on. The cylinders all have a constant radius.

An example command to use here is:
datasynth -walkers 100000 -tmax 1000 -voxels 1 -p 0.0 -schemefile 59.scheme -initial uniform -substrate crossing -crossangle 0.7854 -cylinderrad 1E-6 -cylindersep 2.1E-6 > crossingcyls45.bfloat


Here we’ve specified a crossing substrate. The crossing angle is given in radians (NOT degrees) using the -crossangle option; here it is 0.7854, which is approximately pi/4, or 45 degrees. The crossing angle can take any value, just make sure you use radians!

 

REFERENCES:

[1] http://camino.cs.ucl.ac.uk/index.php

[2] Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081

Microstructure Imaging of Crossings: Diffusion Imaging in Python (Computational Neuroanatomy)

I am really proud to have the opportunity to work with the DIPY team under the Python Software Foundation as a Google Summer of Code candidate.

Things I will be working on and writing about in the upcoming weeks:

  • Non-Linear Optimization
  • Model Fitting
  • Stochastic Methods and Machine Learning
  • Neuroscience using Python


DIPY is a free and open source software project for computational neuroanatomy, focusing mainly on diffusion magnetic resonance imaging (dMRI) analysis. It implements a broad range of algorithms for denoising, registration, reconstruction, tracking, clustering, visualization, and statistical analysis of MRI data.

Magnetic resonance imaging (MRI)… in 5 Lines:

MRI uses the body’s natural magnetic properties for imaging purposes. It makes use of the hydrogen nucleus (a single proton, H+) because of its abundance in water and fat. When the body is placed in the strong magnetic field of the MRI scanner, the protons’ axes all line up. This uniform alignment creates a magnetic vector oriented along the axis of the MRI scanner.

What does the lining up of protons mean? (Figure courtesy: http://www.schoolphysics.co.uk/age16-19/Atomic%20physics/Atomic%20structure%20and%20ions/text/MRI/index.html)

I feel that neuroscience, being closely tied to and having formed the foundations of the Hebbian and Boltzmann paradigms of statistical learning, is an extremely important component of AI research from a variety of standpoints, a crucial one being connectivity. MRI has facilitated the understanding of brain mechanisms by acquiring and analyzing this ‘brain data’, and I will be working on one such technique, called ‘Microstructure Imaging of Crossings’.

Diffusion MRI measures water diffusion in biological tissue, which can be used to probe its microstructure. The most common model for water diffusion in tissue is the diffusion tensor (DT), which assumes a Gaussian distribution. This assumption of Gaussian diffusion oversimplifies the diffusive behavior of water in complex media, and is known experimentally to break down for relatively large b-values. DT derived indices, such as mean diffusivity or fractional anisotropy, can correlate with major tissue damage, but lack sensitivity and specificity to subtle pathological changes.

Microstructure Imaging of Crossing (MIX) is versatile and thus suitable for a broad range of generic multicompartment models, in particular for brain areas where axonal pathways cross.

These ‘multicompartment models’ assess the variability of sub-voxel regions by enabling the estimation of more specific indices, such as axon diameter, density, orientation, and permeability, and so potentially give much greater insight into tissue architecture and sensitivity to pathology.

Goal of Model Fitting:

We want to identify which model compartments are essential to explain the data and which parameters are potentially estimable from a particular experiment, and then compare the models to each other using the Bayesian Information Criterion (BIC) or any other model selection criterion (TIC, Cp, etc.), ranking them in order of how well they explain the acquired data.
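As a reminder of what such a criterion looks like in practice, here is a minimal sketch of BIC for a least-squares fit under the assumption of i.i.d. Gaussian residuals (not code from DIPY):

    import numpy as np

    def bic_from_residuals(residuals, n_params):
        # BIC = n * ln(RSS / n) + k * ln(n); lower is better, and the k * ln(n)
        # term penalizes models with more compartments/parameters.
        n = residuals.size
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + n_params * np.log(n)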

This requires a novel regression method which is robust and versatile. It enables fitting existing biophysical models with improved accuracy by utilizing the Variable Separation Method (VSM) to distinguish the parameters that enter the model linearly from those that enter non-linearly. The estimation of the non-linear parameters is a non-convex problem and is handled first. This is done by a stochastic search that utilizes Genetic Algorithms (GA), since GAs are effective in approximating exponential time-series models. Estimating the linear parameters amounts to a convex problem and can be solved using standard least-squares techniques. These parameter estimates provide a starting point for a Trust Region method in the search for a refined solution.

4 Steps involved in Implementing MIX:

Step 1 – Variable Separation: The objective function has a separable structure which can be exploited to separate the variables using the variable separation method. We can rewrite our objective function as a projection using the Moore-Penrose inverse (pseudoinverse) and obtain the variable projection functional.

Step 2 – Stochastic search for non-linear parameters ‘x’: The objective function is non-convex, and in particular of non-linear least-squares form. Any gradient-based method employed to estimate the parameters depends critically on a good starting point, which is unknown. An alternative approach is a regular grid search, which is time consuming and adds computational burden. This type of problem therefore points towards stochastic search methods like GA. For time-series analysis, GA can be used efficiently for sums of exponential functions. The GA parameters can be varied for each selected biophysical model, and the time complexity may change with each choice. (GA method: elitism-based.)

Step 3 – Constrained search for linear parameters ‘f’: After estimating the parameters ‘x’, estimating the linear parameters ‘f’ is a constrained linear least-squares problem.

Step 4 – Non-Linear Least Squares estimation using a Trust Region method: Steps 2 and 3 give a reliable initial guess for both ‘x’ and ‘f’, which is then refined by the Trust Region method. This is essentially an unconstrained optimization method over a region around the current search point, where the quadratic model for local minimization is “trusted” to be correct and steps are chosen to stay within this region. The size of the region is modified during the search, based on how well the quadratic model agrees with actual function evaluations; the estimates from the GA step provide the starting point for this refinement.

References:

[1] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., Georgiou, T., & Lenglet, C. (2016). Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI. Scientific Reports, 6(September), 1–9. https://doi.org/10.1038/srep38927

[2] Ferizi, U., Schneider, T., Panagiotaki, E., Nedjati-Gilani, G., Zhang, H., Wheeler-Kingshott, C. A. M., & Alexander, D. C. (2014). A ranking of diffusion MRI compartment models with in vivo human brain data. Magnetic Resonance in Medicine, 72(6), 1785–1792. https://doi.org/10.1002/mrm.25080

[3] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., & Lenglet, C. (n.d.). Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI. Supplementary Note 1: Tissue Compartment Model Functions, 1–18.

[4] Manuscript, A., & Magnitude, S. (2013). NIH Public Access, 31(9), 1713–1723. https://doi.org/10.1109/TMI.2012.2196707