Machine learning generates a 3D model from 2D images

Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images, taken with the same standard microscopy tools found in many laboratories today.

Their findings were published September 16 in the journal Nature Machine Intelligence.

“We train the model on the set of digital images to get a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and of computer science and engineering. “Now I can render it however I want. I can zoom in smoothly and there’s no pixelation.”

Key to this work was the use of a neural field network, a special type of machine learning system that learns a mapping from spatial coordinates to corresponding physical quantities. When the training is complete, researchers can point to any coordinate and the model can provide the image value at that location.
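In spirit, such a neural field is just a small network that maps a continuous coordinate to an intensity value. The sketch below illustrates the idea with a tiny randomly initialized multilayer perceptron; the layer sizes and names are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny multilayer perceptron: (x, y, z) coordinate -> image intensity.
# The weights here are random placeholders; in the real method they are
# fit so the field reproduces the measured 2D images.
W1 = rng.normal(size=(3, 64))
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 1))
b2 = np.zeros(1)

def neural_field(coord):
    """Map a continuous 3D coordinate to a scalar intensity value."""
    h = np.tanh(coord @ W1 + b1)   # hidden features of the coordinate
    return float(h @ W2 + b2)      # predicted intensity at that point

# The field is continuous: any coordinate can be queried,
# not just pixel centers on a fixed grid.
print(neural_field(np.array([0.1, 0.2, 0.3])))
print(neural_field(np.array([0.1001, 0.2, 0.3])))  # arbitrarily fine zoom
```

Because the representation is a function rather than a grid, "zooming in" simply means evaluating it at more closely spaced coordinates.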

A particular strength of neural field networks is that they do not need to be trained on large amounts of similar data. Instead, as long as there are enough 2D images of the sample, the network can represent it in its entirety, inside and out.

The images used to train the network are like any other microscopy images. Essentially, a cell is lit from below; light passes through it and is captured on the other side, creating an image.

“Because I have a few views of the cell, I can use those images to train the model,” Kamilov said. This is done by providing the model with information about a point in the sample where the image has captured part of the internal structure of the cell.

Then the network does its best to recreate that structure. If the output is wrong, the network is adjusted; if it is right, that pathway is reinforced. Once the predictions match the real-world measurements, the network is ready to fill in the parts of the cell that were not captured by the original 2D images.
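The predict-compare-adjust loop described above is ordinary gradient-based training. A minimal sketch, assuming synthetic stand-in measurements and a tiny hand-written network (none of these names or sizes come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 3D coordinates sampled from where the 2D
# images measured the cell, paired with a stand-in "measured" intensity.
coords = rng.uniform(-1, 1, size=(200, 3))
target = np.sin(coords.sum(axis=1, keepdims=True))  # placeholder for real data

# Tiny coordinate network (sizes are illustrative).
W1 = rng.normal(scale=0.5, size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
losses = []
for step in range(500):
    # Forward pass: predict the intensity at each sampled coordinate.
    h = np.tanh(coords @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - target
    losses.append(float((err ** 2).mean()))

    # Backward pass: nudge the weights wherever the prediction was wrong,
    # which is the "modified if erroneous, strengthened if correct" step.
    n = len(coords)
    g_pred = 2 * err / n
    gW2 = h.T @ g_pred;      gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = coords.T @ g_h;    gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

After training, querying the network at coordinates the images never measured is what fills in the missing parts of the cell.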

The model now holds a complete, continuous representation of the cell; there is no need to save a data-rich image file, since the image can always be recreated by the neural field network.

And, Kamilov said, not only is the model a faithful, easy-to-store representation of the cell, but also, in many ways, it’s more useful than the real thing.

“I can plug in any coordinate and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look, use it to perform other numerical tasks, or even feed it into another algorithm.

Story source:

Material provided by Washington University in St. Louis. Note: Content may be edited for style and length.

Sherry J. Basler