The heart is an especially tricky organ to work with. Even normal human hearts are complex, with multiple chambers and valves and a great deal of fine detail. Hearts with congenital defects differ not only from normal hearts but also from one another; even hearts diagnosed with the same condition often differ among themselves, Powell said.
This poses a problem for surgeons deciding how best to operate, and how the operation might affect the patient.
It is even tougher for computers, which are generally not as good at image recognition as the human brain is. They have a hard time recognizing the boundaries between different objects in a picture—such as the border between one chamber in the heart and another.
This means that a person has to sit down at a computer and trace the outlines of different areas, called "segments," on an image—in this case on one of the hundred or more slices that make up an MRI scan or CT image. Manually "segmenting" a single MRI scan can take several hours at least.
An algorithm can do this much faster, but far less accurately.
Danielle Pace and her adviser, Polina Golland at MIT, have been working on that piece of the process. They developed an algorithm that works with a user: a clinician sits at the computer and segments a fraction of the slices in an MRI scan, and the algorithm then looks at what the user has done and uses that information to segment the rest.
Pace's algorithm can produce an image that is 90 percent accurate using only 14 user-annotated slices out of 100 or 200 total slices. Even manually segmenting three slices produced a model that was 80 percent accurate.
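The article does not describe how Pace's algorithm works internally, but the overall workflow—a user annotates a handful of slices, and the system infers the rest of the volume—can be illustrated with a deliberately naive sketch. Here, each unlabeled slice simply copies the mask of the nearest user-annotated slice, and accuracy is measured with the Dice overlap score; the synthetic "chamber," the nearest-slice rule, and all names below are illustrative assumptions, not the actual method.

```python
import numpy as np

def propagate_labels(num_slices, annotated):
    # annotated: {slice_index: 2-D binary mask} drawn by the user.
    # Each unlabeled slice copies the mask of the nearest annotated
    # slice -- a crude stand-in for the learning step in Pace's
    # algorithm, whose details are not given in the article.
    idx = sorted(annotated)
    first = annotated[idx[0]]
    out = np.zeros((num_slices,) + first.shape, dtype=np.uint8)
    for z in range(num_slices):
        nearest = min(idx, key=lambda i: abs(i - z))
        out[z] = annotated[nearest]
    return out

def dice(a, b):
    # Dice overlap between two binary volumes (1.0 = identical).
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Synthetic "heart chamber": a disk whose radius shrinks smoothly
# across 100 slices, standing in for real MRI ground truth.
yy, xx = np.mgrid[:32, :32]
truth = np.stack([((yy - 16) ** 2 + (xx - 16) ** 2
                   <= (12 - 8 * z / 99) ** 2).astype(np.uint8)
                  for z in range(100)])

# The user annotates 14 of the 100 slices; the rest are inferred.
user_slices = {int(z): truth[int(z)]
               for z in np.linspace(0, 99, 14, dtype=int)}
pred = propagate_labels(100, user_slices)
print(f"Dice from 14 annotated slices: {dice(pred, truth):.3f}")
```

Because the synthetic shape varies smoothly between slices, even this nearest-slice heuristic scores well; real hearts vary far less predictably, which is why a learned model is needed to reach 90 percent accuracy from so few annotations.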
Pace said they have cut the time to convert the images from MRI scan to 3-D blueprint down to about two hours—roughly one hour for the user to manually segment the slices, and another for the algorithm to do the rest. The goal is to reduce those two hours to 30 minutes.