Researchers are developing new machine learning (ML) methods and models to reduce noise in X-ray data.

A graphical representation of the machine learning model: a series of XPCS images (top left) is fed into the machine learning model (bottom), yielding the denoised data (top right) used for further analysis. Credit: Brookhaven National Laboratory.
This article is based on the research paper 'Noise reduction in X-ray photon correlation spectroscopy with convolutional neural networks encoder–decoder models'. All credit goes to the researchers of this paper.


Noise and inconsistent information are a recurring problem in the datasets produced by synchrotron X-ray experiments. Researchers from the National Synchrotron Light Source II (NSLS-II) and the Computational Science Initiative (CSI) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have developed a method to address this problem, giving scientists cleaner data to work with.

A speckled pattern is created when an X-ray beam is scattered off a sample. The technique that analyzes the intensity of sequential frames of these speckle patterns and draws conclusions about the sample's structure and dynamics is known as X-ray photon correlation spectroscopy (XPCS). These experiments rely on a calculated matrix called the two-time intensity-intensity correlation function (2TCF), which captures how the speckle intensity correlates between any two points in time. However, as noise levels in these images rise, extracting that information becomes increasingly difficult. Several efforts have been made to mitigate the impact of instability and reduce the noise associated with photon detection. Nevertheless, despite these recent developments in experimental setups, achieving a high signal-to-noise ratio in many XPCS investigations remains a real problem.
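To make the 2TCF concrete, here is a minimal numpy sketch of its standard definition: the pixel-averaged product of intensities at two times, normalized by the mean intensities at each time. This is an illustrative implementation, not the beamline's production code; the function name and array layout are assumptions.

```python
import numpy as np

def two_time_cf(frames):
    """Two-time intensity-intensity correlation function (2TCF).

    `frames` has shape (T, N): photon intensities of N detector pixels
    (within a single q-bin) for T sequential exposures. Returns the T x T
    matrix C(t1, t2) = <I(t1) I(t2)> / (<I(t1)> <I(t2)>), with averages
    taken over pixels.
    """
    mean_i = frames.mean(axis=1)                 # <I(t)> per frame
    cross = frames @ frames.T / frames.shape[1]  # <I(t1) I(t2)> per time pair
    return cross / np.outer(mean_i, mean_i)
```

For a sample in equilibrium, C depends only on the time difference |t1 - t2|, so the familiar one-time correlation function g2 can be recovered by averaging along the diagonals of this matrix.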

The researchers aim to solve the problem on a larger scale by developing models that can be used in a wide range of XPCS studies. They proposed an AI-based solution by deploying a convolutional neural network-based encoder-decoder (CNN-ED) model for noise suppression and signal restoration. The real-world experimental data used to train the ML model was collected at the NSLS-II Coherent Hard X-ray Scattering (CHX) beamline. The core of the approach is the "autoencoder": an unsupervised artificial neural network that learns to efficiently compress and encode data and then reconstruct it as close to the original input as possible. By design, an autoencoder reduces the dimensionality of the data and thereby learns to ignore the noise in it. The model also makes efficient use of storage and computational resources, making it easy to tune for local experiments. The best results were obtained with an architecture consisting of two convolutional layers of ten channels each.
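The shape of such an encoder-decoder can be sketched in a few lines of numpy: one convolution expands the single-channel 2TCF image into ten feature channels, and a second convolution maps those ten channels back to one denoised image. This is a forward-pass sketch under assumed details (3x3 kernels, ReLU activation, 'same' padding); it is not the authors' implementation, and `cnn_ed_forward` is an illustrative name.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2-D convolution.

    x: (C_in, H, W) input, w: (C_out, C_in, k, k) kernels, b: (C_out,) biases.
    """
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    h, wd = x.shape[1:]
    out = np.empty((c_out, h, wd))
    for o in range(c_out):
        acc = np.zeros((h, wd))
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    acc += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
        out[o] = acc + b[o]
    return out

def cnn_ed_forward(img, params):
    """Two-layer encoder-decoder: 1 channel -> 10 channels -> 1 channel."""
    w1, b1, w2, b2 = params
    hidden = np.maximum(conv2d(img[None], w1, b1), 0.0)  # encoder + ReLU
    return conv2d(hidden, w2, b2)[0]                     # decoder
```

Because both convolutions preserve the spatial dimensions, the output has the same shape as the input 2TCF, so the denoised matrix can be dropped directly into the existing analysis.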

The model’s accuracy in extracting meaningful data from a series of images was determined using various testing approaches. A striking observation was that the model can obtain enough information about equilibrium system dynamics from less data, and it can also be extended to non-equilibrium systems whose dynamical parameters change over time. The CNN-ED model outperforms other existing algorithms, and its accuracy can be further improved by using larger training datasets.

The CNN-ED approach significantly improved signal quality in noise suppression for XPCS. These models are fast to train and do not require large amounts of data, and their accuracy is relatively insensitive to the choice of hyperparameters. They also require fewer computational resources than other methods to achieve the same signal-to-noise ratio. However, some limitations persist: tests conducted by the research group revealed that the model may not reliably remove noise from extremely noisy data. Looking ahead, the team is enhancing the model’s capabilities and integrating it into CHX’s XPCS analysis pipeline. They are also investigating other ways to exploit the model, such as detecting instrument instabilities during measurements and identifying heterogeneities or other anomalous dynamics inherent in the sample.



Sherry J. Basler