Machine learning generates a 3D model from 2D images – The Source

Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images taken with the same standard microscopy tools found in many laboratories today.

Their findings were published Sept. 16 in the journal Nature Machine Intelligence.

“We train the model on the set of digital images to get a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and computer science and engineering. “Now I can show it however I want. I can zoom in smoothly and there is no pixelation.”

Key to this work was the use of a neural field network, a special type of machine learning system that learns a mapping from spatial coordinates to corresponding physical quantities. When the training is complete, researchers can point to any coordinate and the model can provide the image value at that location.
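The coordinate-to-value mapping described above can be sketched in code. This is a minimal, hypothetical illustration with numpy: a tiny multilayer perceptron over a sinusoidal encoding of a 3D coordinate, returning a scalar image value. The layer sizes, the encoding, and the random weights are illustrative assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(coord, n_freq=4):
    """Sinusoidal positional encoding, common in neural field models."""
    coord = np.asarray(coord, dtype=float)
    freqs = 2.0 ** np.arange(n_freq)          # frequencies 1, 2, 4, 8
    grid = np.outer(freqs, coord)             # shape (n_freq, 3)
    return np.concatenate([np.sin(grid).ravel(), np.cos(grid).ravel()])

# Randomly initialized weights for a one-hidden-layer network
# (in practice these would be learned from the 2D images).
W1 = rng.normal(size=(64, 24)) * 0.1
b1 = np.zeros(64)
W2 = rng.normal(size=(1, 64)) * 0.1
b2 = np.zeros(1)

def field(coord):
    """Query the field at any continuous coordinate -> image value."""
    h = np.maximum(W1 @ encode(coord) + b1, 0.0)  # ReLU hidden layer
    return (W2 @ h + b2).item()

# No pixel grid: any point in space can be queried directly.
value = field([0.25, -0.1, 0.7])
```

Because the network is a function of continuous coordinates rather than a grid of pixels, zooming in simply means querying points that are closer together.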

A particular strength of neural field networks is that they do not need to be trained on large amounts of similar data. Instead, as long as there are enough 2D images of the sample, the network can represent it in its entirety, inside and out.

The images used to train the network are like any other microscopy images. Essentially, a cell is lit from below; light passes through it and is captured on the other side, creating an image.

From a handful of such pixelated images, the model can fill in the missing pieces and produce a continuous 3D representation.

“Because I have a few views of the cell, I can use those images to train the model,” Kamilov said. This is done by providing the model with information about a point in the sample where the image has captured part of the internal structure of the cell.

Then the network does its best to recreate this structure. If the output is erroneous, the network is modified. If correct, that path is strengthened. Once the predictions match the real-world measurements, the network is ready to fill in the parts of the cell that were not captured by the original 2D images.
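The predict-compare-adjust loop described above can be sketched as follows. This is a simplified stand-in, assuming synthetic measurements and a linear model over sinusoidal features in place of the full network: predict the value at each measured coordinate, compare it with the real measurement, and nudge the weights to shrink the error.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(coord, n_freq=4):
    """Sinusoidal features of a 3D coordinate."""
    freqs = 2.0 ** np.arange(n_freq)
    grid = np.outer(freqs, coord)
    return np.concatenate([np.sin(grid).ravel(), np.cos(grid).ravel()])

# Synthetic "measurements": coordinates where the 2D images sampled
# the cell, paired with the intensity recorded there (stand-in data).
coords = rng.uniform(-1, 1, size=(200, 3))
target = np.array([np.sin(3 * c[0]) * np.cos(2 * c[1]) for c in coords])

X = np.stack([encode(c) for c in coords])  # (200, 24) feature matrix
w = np.zeros(X.shape[1])

for step in range(500):
    pred = X @ w                 # network's best guess at each point
    err = pred - target          # erroneous output -> large error
    w -= 0.01 * (X.T @ err) / len(coords)  # adjust weights (MSE gradient)

mse = float(np.mean((X @ w - target) ** 2))
```

Once the loss between predictions and real measurements is small, the fitted model can be queried at coordinates the original images never sampled.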

The model now contains information about a complete and continuous representation of the cell – there is no need to save a data-rich image file as it can always be recreated by the neural field network.

And, Kamilov said, not only is the model a faithful, easy-to-store representation of the cell, but also, in many ways, it’s more useful than the real thing.

“I can give it any coordinate and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look; use the model to perform other numerical tasks; or even feed it into another algorithm.

This work was supported by the National Science Foundation, awards CCF-1813910, CCF-2043134, CCF-1813848, and EPMD-1846784.

The McKelvey School of Engineering at Washington University in St. Louis promotes independent research and education with an emphasis on scientific excellence, innovation, and collaboration without boundaries. McKelvey Engineering offers top-ranked research and graduate programs across all departments, particularly in biomedical engineering, environmental engineering, and computer science, and offers one of the most selective undergraduate programs in the nation. With 140 full-time faculty, 1,387 undergraduate students, 1,448 graduate students, and 21,000 living alumni, we work to solve some of society’s greatest challenges; to prepare students to become leaders and to innovate throughout their careers; and to be a catalyst for economic development in the St. Louis region and beyond.

Sherry J. Basler