Commit a14348d

Merge pull request #82 from arcadelab/dev

2 parents 74a2649 + 445f980

37 files changed: +9362 −3629 lines

README.md

Lines changed: 8 additions & 8 deletions

````diff
@@ -40,9 +40,9 @@ DeepDRR requires an NVIDIA GPU, preferably with >11 GB of memory.
    conda install -c conda-forge pycuda
    ```

-   to install it in your environment.
+   to install it in your environment.

-4. You may also wish to [install PyTorch](https://pytorch.org/get-started/locally/) separately, depending on your setup.
+4. You may also wish to [install PyTorch](https://pytorch.org/get-started/locally/) separately, depending on your setup.

 5. Install from `PyPI`

    ```bash
@@ -116,7 +116,7 @@ DeepDRR combines machine learning models for material decomposition and scatter

 ![DeepDRR Pipeline](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/deepdrr_workflow.png)

-Further details can be found in our MICCAI 2018 paper "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures" and the subsequent Invited Journal Article in the IJCARS Special Issue of MICCAI "Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation". The conference preprint can be accessed on arXiv here: https://arxiv.org/abs/1803.08606.
+Further details can be found in our MICCAI 2018 paper "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures" and the subsequent Invited Journal Article in the IJCARS Special Issue of MICCAI "Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation". The conference preprint can be accessed on arXiv here: <https://arxiv.org/abs/1803.08606>.

 ### Representative Results

@@ -126,13 +126,13 @@ The figure below shows representative radiographs generated using DeepDRR from C

 ### Applications - Pelvis Landmark Detection

-We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery", also early-accepted at MICCAI'18: https://arxiv.org/abs/1803.08608 and now with quantitative evaluation in the IJCARS Special Issue on MICCAI'18: https://link.springer.com/article/10.1007/s11548-019-01975-5. The ConvNet for prediction was trained on DeepDRRs of 18 CT scans of the NIH Cancer Imaging Archive and then applied to ex vivo data acquired with a Siemens Cios Fusion C-arm machine equipped with a flat panel detector (Siemens Healthineers, Forchheim, Germany). Some representative results on the ex vivo data are shown below.
+We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery", also early-accepted at MICCAI'18: <https://arxiv.org/abs/1803.08608> and now with quantitative evaluation in the IJCARS Special Issue on MICCAI'18: <https://link.springer.com/article/10.1007/s11548-019-01975-5>. The ConvNet for prediction was trained on DeepDRRs of 18 CT scans of the NIH Cancer Imaging Archive and then applied to ex vivo data acquired with a Siemens Cios Fusion C-arm machine equipped with a flat panel detector (Siemens Healthineers, Forchheim, Germany). Some representative results on the ex vivo data are shown below.

 ![Prediction Performance](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/landmark_performance_real_data.PNG)

 ### Applications - Metal Tool Insertion

-DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulaters in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which has been accepted at IPCAI'19: https://arxiv.org/abs/1901.06672. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image and detection results are shown below.
+DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulaters in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which has been accepted at IPCAI'19: <https://arxiv.org/abs/1901.06672>. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image and detection results are shown below.

 This capability has not been tested in version 1.0. For tool insertion, we recommend working with [Version 0.1](https://github.com/arcadelab/deepdrr/releases/tag/0.1) for the time being.

@@ -223,18 +223,18 @@ For the original DeepDRR, released alongside our 2018 paper, please see the [Ver
 ## Acknowledgments

 CUDA Cubic B-Spline Interpolation (CI) used in the projector:
-https://github.com/DannyRuijters/CubicInterpolationCUDA
+<https://github.com/DannyRuijters/CubicInterpolationCUDA>
 D. Ruijters, B. M. ter Haar Romeny, and P. Suetens. Efficient GPU-Based Texture Interpolation using Uniform B-Splines. Journal of Graphics Tools, vol. 13, no. 4, pp. 61-69, 2008.

 The projector is a heavily modified and ported version of the implementation in CONRAD:
-https://github.com/akmaier/CONRAD
+<https://github.com/akmaier/CONRAD>
 A. Maier, H. G. Hofmann, M. Berger, P. Fischer, C. Schwemmer, H. Wu, K. Müller, J. Hornegger, J. H. Choi, C. Riess, A. Keil, and R. Fahrig. CONRAD—A software framework for cone-beam imaging in radiology. Medical Physics 40(11):111914-1-8. 2013.

 Spectra are taken from MCGPU:
 A. Badal, A. Badano, Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med Phys. 2009 Nov;36(11): 4878–80.

 The segmentation pipeline is based on the Vnet architecture:
-https://github.com/mattmacy/vnet.pytorch
+<https://github.com/mattmacy/vnet.pytorch>
 F. Milletari, N. Navab, S-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:160604797. 2016.

 We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.
````

deepdrr/annotations/__init__.py

Lines changed: 2 additions & 1 deletion

```diff
@@ -1,3 +1,4 @@
 from .line_annotation import LineAnnotation
+from .fiducials import FiducialList, Fiducial

-__all__ = ['LineAnnotation']
+__all__ = ["LineAnnotation", "FiducialList", "Fiducial"]
```

deepdrr/annotations/fiducials.py

Lines changed: 148 additions & 0 deletions (new file)

```python
from __future__ import annotations

import logging
from typing import List, Literal, Optional
from pathlib import Path
import numpy as np
import json
import pyvista as pv
import pandas as pd

from .. import geo, utils
from ..vol import Volume

log = logging.getLogger(__name__)


class FiducialList:
    # Can be treated like a list of Point3Ds
    def __init__(
        self,
        points: List[geo.Point3D],
        world_from_anatomical: Optional[geo.FrameTransform] = None,
        anatomical_coordinate_system: Literal["RAS", "LPS"] = "RAS",
    ):
        self.points = points
        self.world_from_anatomical = world_from_anatomical
        self.anatomical_coordinate_system = anatomical_coordinate_system

    def __getitem__(self, index):
        return self.points[index]

    def __len__(self):
        return len(self.points)

    def __iter__(self):
        return iter(self.points)

    def __repr__(self):
        return f"FiducialList({self.points})"

    def __str__(self):
        return str(self.points)

    def to_RAS(self) -> FiducialList:
        if self.anatomical_coordinate_system == "RAS":
            return self
        else:
            return FiducialList(
                [geo.RAS_from_LPS @ p for p in self.points],
                self.world_from_anatomical,
                "RAS",
            )

    def to_LPS(self) -> FiducialList:
        if self.anatomical_coordinate_system == "LPS":
            return self
        else:
            return FiducialList(
                [geo.LPS_from_RAS @ p for p in self.points],
                self.world_from_anatomical,
                "LPS",
            )

    @classmethod
    def from_fcsv(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ) -> FiducialList:
        """Load a FCSV file from Slicer3D

        Args:
            path (Path): Path to the FCSV file

        Returns:
            np.ndarray: Array of 3D points
        """
        with open(path, "r") as f:
            lines = f.readlines()
        points = []
        coordinate_system = None
        for line in lines:
            if line.startswith("# CoordinateSystem"):
                coordinate_system = line.split("=")[1].strip()
            elif line.startswith("#"):
                continue
            else:
                x, y, z = line.split(",")[1:4]
                points.append(geo.point(float(x), float(y), float(z)))

        if coordinate_system is None:
            log.warning("No coordinate system specified in FCSV file. Assuming LPS.")
            coordinate_system = "LPS"
        assert coordinate_system in ["RAS", "LPS"], "Unknown coordinate system"

        return cls(
            points,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=coordinate_system,
        )

    @classmethod
    def from_json(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ):
        # TODO: add support for associated IDs of the fiducials. Should really be a list/dict.
        data = pd.read_json(path)
        control_points_table = pd.DataFrame.from_dict(
            data["markups"][0]["controlPoints"]
        )
        coordinate_system = data["markups"][0]["coordinateSystem"]
        # TODO: not sure if this works.
        points = [
            geo.point(*row[["x", "y", "z"]].values)
            for _, row in control_points_table.iterrows()
        ]

        return cls(
            points,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=coordinate_system,
        )

    def save(self, path: Path):
        raise NotImplementedError()


class Fiducial(geo.Point3D):
    @classmethod
    def from_fcsv(
        cls,
        path: Path,
        world_from_anatomical: Optional[geo.FrameTransform] = None,
    ):
        fiducial_list = FiducialList.from_fcsv(path)
        assert len(fiducial_list) == 1, "Expected a single fiducial"
        return cls(
            fiducial_list[0].data,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=fiducial_list.anatomical_coordinate_system,
        )

    @classmethod
    def from_json(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ):
        raise NotImplementedError

    def save(self, path: Path):
        raise NotImplementedError
```
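As a sanity check on the `from_fcsv` parsing loop in the new file, here is a self-contained sketch of the same logic, with plain `(x, y, z)` tuples standing in for `geo.point` and an added blank-line skip for robustness; the sample FCSV header below is illustrative, not copied from a real Slicer export:

```python
from typing import List, Optional, Tuple


def parse_fcsv(text: str) -> Tuple[List[Tuple[float, float, float]], str]:
    """Parse Slicer3D FCSV text into points and a coordinate-system label."""
    points = []
    coordinate_system: Optional[str] = None
    for line in text.splitlines():
        if line.startswith("# CoordinateSystem"):
            coordinate_system = line.split("=")[1].strip()
        elif line.startswith("#") or not line.strip():
            # Comment lines; blank-line skip added here for robustness.
            continue
        else:
            # Data columns are: id, x, y, z, ... so take fields 1..3.
            x, y, z = line.split(",")[1:4]
            points.append((float(x), float(y), float(z)))
    if coordinate_system is None:
        coordinate_system = "LPS"  # same fallback as from_fcsv
    return points, coordinate_system


fcsv = """# Markups fiducial file version = 4.11
# CoordinateSystem = RAS
# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID
F-1,10.0,-4.5,22.0,0,0,0,1,1,1,0,F-1,,
"""
points, cs = parse_fcsv(fcsv)
print(points, cs)  # [(10.0, -4.5, 22.0)] RAS
```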

deepdrr/annotations/line_annotation.py

Lines changed: 28 additions & 2 deletions

```diff
@@ -13,6 +13,10 @@
 log = logging.getLogger(__name__)


+# TODO: make this totally independent of the Volume it corresponds to, and make a super-class for
+# all annotations.
+
+
 class LineAnnotation(object):
     """Really a "segment annotation", but Slicer calls it a line.

@@ -77,7 +81,7 @@ def world_from_anatomical(self) -> geo.FrameTransform:
         return self.volume.world_from_anatomical

     @classmethod
-    def from_markup(
+    def from_json(
         cls,
         path: str,
         volume: Optional[Volume] = None,
@@ -132,6 +136,10 @@ def from_markup(
             anatomical_coordinate_system=anatomical_coordinate_system,
         )

+    @classmethod
+    def from_markup(cls, *args, **kwargs):
+        return cls.from_json(*args, **kwargs)
+
     def save(
         self,
         path: str,
@@ -223,7 +231,7 @@ def to_lps(x):
             "display": {
                 "visibility": True,
                 "opacity": 1.0,
-                "color": [0.5, 0.5, 0.5],
+                "color": color,
                 "selectedColor": color,
                 "activeColor": [0.4, 1.0, 0.0],
                 "propertiesLabelVisibility": False,
@@ -271,6 +279,24 @@ def endpoint_in_world(self) -> geo.Point3D:
     def midpoint_in_world(self) -> geo.Point3D:
         return self.world_from_anatomical @ self.startpoint.lerp(self.endpoint, 0.5)

+    @property
+    def trajectory_in_world(self) -> geo.Vector3D:
+        return self.endpoint_in_world - self.startpoint_in_world
+
+    @property
+    def direction_in_world(self) -> geo.Vector3D:
+        return self.trajectory_in_world.normalized()
+
+    def get_mesh(self):
+        """Get the mesh in anatomical coordinates."""
+        u = self.startpoint
+        v = self.endpoint
+
+        mesh = pv.Line(u, v)
+        mesh += pv.Sphere(2.5, u)
+        mesh += pv.Sphere(2.5, v)
+        return mesh
+
     def get_mesh_in_world(
         self, full: bool = True, use_cached: bool = False
     ) -> pv.PolyData:
```
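The new `trajectory_in_world` and `direction_in_world` properties are plain vector subtraction followed by normalization; a standalone sketch with `(x, y, z)` tuples standing in for `geo.Point3D`/`geo.Vector3D`:

```python
import math

# Endpoints of a line annotation in world coordinates (illustrative values).
startpoint_in_world = (0.0, 0.0, 0.0)
endpoint_in_world = (0.0, 3.0, 4.0)

# trajectory_in_world: endpoint minus startpoint.
trajectory = tuple(e - s for e, s in zip(endpoint_in_world, startpoint_in_world))

# direction_in_world: the trajectory normalized to unit length.
length = math.sqrt(sum(c * c for c in trajectory))
direction = tuple(c / length for c in trajectory)

print(direction)  # (0.0, 0.6, 0.8)
```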

deepdrr/device/__init__.py

Lines changed: 2 additions & 1 deletion

```diff
@@ -1,6 +1,7 @@
 from .device import Device
 from .carm import CArm
 from .mobile_carm import MobileCArm
+from .simple_device import SimpleDevice


-__all__ = ["Device", "CArm", "MobileCArm"]
+__all__ = ["Device", "CArm", "MobileCArm", "SimpleDevice"]
```

deepdrr/device/device.py

Lines changed: 42 additions & 2 deletions

```diff
@@ -8,6 +8,10 @@
 class Device(ABC):
     """A parent class representing X-ray device interfaces in DeepDRR.

+    To implement a sub class, the following methods/attributes must be implemented:
+    - device_from_camera3d
+
+
     Attributes:
         sensor_height (int): the height of the sensor in pixels.
         sensor_width (int): the width of the sensor in pixels.
@@ -23,6 +27,16 @@ class Device(ABC):
     source_to_detector_distance: float
     world_from_device: geo.FrameTransform

+    @property
+    def detector_height(self) -> float:
+        """Height of the detector in mm."""
+        return self.sensor_height * self.pixel_size
+
+    @property
+    def detector_width(self) -> float:
+        """Width of the detector in mm."""
+        return self.sensor_width * self.pixel_size
+
     @property
     def device_from_world(self) -> geo.FrameTransform:
         """Get the FrameTransform for the device's local frame.
@@ -75,6 +89,21 @@ def camera3d_from_world(self) -> geo.FrameTransform:
         """
         return self.camera3d_from_device @ self.device_from_world

+    @property
+    def index_from_camera3d(self) -> geo.CameraProjection:
+        """Get the CameraIntrinsicTransform for the device's camera3d_from_index frame (in the current pose).
+
+        Returns:
+            CameraIntrinsicTransform: the "index_from_camera3d" frame transformation for the device.
+        """
+        return geo.CameraProjection(
+            self.camera_intrinsics, geo.FrameTransform.identity()
+        )
+
+    @property
+    def camera3d_from_index(self) -> geo.Transform:
+        return self.index_from_camera3d.inv
+
     def get_camera_projection(self) -> geo.CameraProjection:
         """Get the camera projection for the device in the current pose.

@@ -93,18 +122,29 @@ def index_from_world(self) -> geo.CameraProjection:
         return self.get_camera_projection()

     @property
-    @abstractmethod
+    def world_from_index(self) -> geo.Transform:
+        """Get the world_from_index transform for the device in the current pose.
+
+        Returns:
+            Transform: the "world_from_index" transform for the device.
+        """
+        return self.index_from_world.inv
+
+    @property
     def principle_ray(self) -> geo.Vector3D:
         """Get the principle ray for the device in the current pose in the device frame.

         The principle ray is the direction of the ray that passes through the center of the
         image. It points from the source toward the detector.

+        By default, this is just the z axis, but this can be overridden by sub classes.
+
         Returns:
             Vector3D: the principle ray for the device as a unit vector.

         """
-        pass
+        principle_ray_in_camera3d = geo.v(0, 0, 1)
+        return self.device_from_camera3d @ principle_ray_in_camera3d

     @property
     def principle_ray_in_world(self) -> geo.Vector3D:
```
deepdrr/device/mobile_carm.py

Lines changed: 6 additions & 0 deletions

```diff
@@ -36,6 +36,12 @@ def pose_vector_angles(pose: geo.Vector3D) -> Tuple[float, float]:


 class MobileCArm(Device):
+    """A C-arm imaging device with orbital movement (alpha, beta) and isocenter movement (x, y, z).
+
+    Default parameters are based on the Siemens CIOS Spin.
+
+    """
+
     # basic parameters which can be safely set by user, but move_by() and reposition() are recommended.
     isocenter: geo.Point3D  # the isocenter point in the device frame
     alpha: float  # alpha angle in radians
```
