gms | German Medical Science

GMS Current Topics in Computer and Robot Assisted Surgery

Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC)

ISSN 1863-3153

Methods and tools for navigated radiotherapy

Research Article

  • corresponding author Otto A. Sauer - Julius Maximilians University Würzburg, Klinik und Poliklinik für Strahlentherapie, Würzburg, Germany
  • Lucia Vences - Julius Maximilians University Würzburg, Klinik und Poliklinik für Strahlentherapie, Würzburg, Germany
  • Kajetan Berlinger - Technische Universität München, Lehrstuhl für Informatik IX, München, Germany
  • Manfred Dötter - Technische Universität München, Lehrstuhl für Informatik IX, München, Germany
  • Michael Roth - Technische Universität München, Lehrstuhl für Informatik IX, München, Germany
  • Achim Schweikard - University Lübeck, Institut für Robotik und kognitive Systeme, Lübeck, Germany

GMS CURAC 2006;1:Doc15

The electronic version of this article is the complete one and can be found online at: http://www.egms.de/en/journals/curac/2006-1/curac000015.shtml

Published: November 6, 2006

© 2006 Sauer et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en). You are free: to Share – to copy, distribute and transmit the work, provided the original author and source are credited.


Abstract

The purpose of this work was to develop tools for navigated radiation therapy in order to improve the conformity of the dose distribution to the target volume for moving targets. To obtain a breathing-status-dependent 3D patient model, the breathing status was monitored with an infrared tracking system during CT scans. With different morphing concepts a complete 4D model could be accomplished. By means of local registration, the target position known at one breathing status could be tracked through a breathing cycle. At treatment delivery, a fast 2D/3D matching algorithm was utilized to synchronize the target position with that of an external marker. The external marker position will be used to control a treatment robot. The developed tools were identified as necessary for applying navigated radiotherapy in the clinic.


Objective

To control a tumor with radiotherapy without severe side effects, it is necessary to deliver a sufficiently high dose of ionizing radiation to the target volume while simultaneously keeping the dose to surrounding vulnerable tissue below certain threshold levels. These aims are achieved by adequate treatment planning. The treatment plan is delivered in a number of fractions and has to take into account interfractional and intrafractional variability of the treatment setup and the patient geometry. Therefore a sufficiently large margin is added to the so-called clinical target volume (CTV) [1]. Within the trunk, internal movement may significantly reduce the probability of success of a radiotherapy treatment. In order to improve the therapy, different methods for adaptive and image-guided radiotherapy have been developed and tested. Some authors have tried to control or reduce intrafractional movement by controlling the patient’s respiration [2], [3]. Devices to hold the breath and to trigger the radiation beam were utilized. Radiotherapy techniques controlling, gating, or tracking respiratory motion were recently reviewed by Shirato et al. [4]. In this work we observe how respiration displaces a tumor in order to make appropriate adjustments. Besides interfractional dislocations, periodic movement of the target due to the breathing cycle occurs, especially for targets close to the diaphragm.

The main objective of this work was to determine the trajectory of a target offline by image-processing tools and in real time by relating external marker positions to the breathing status and thus the tumor position. These steps are necessary to obtain the control parameters for a treatment robot performing navigated radiotherapy. In detail, we developed methods to build a 4D patient model. Different attempts at creating 4D models of internal organs have been made: Baudet et al. used a mass-spring system and finite-element methods [5], while Boldea et al. and Rohlfing et al. relied on methods based on non-rigid registration [6], [7]. Our approach makes use of particular morphing techniques. By means of image registration we find the position and orientation of an anatomical pattern with respect to an external coordinate system. This transformation is correlated with the tumor position.


Materials and methods

4D Patient model

Prerequisite for all adaptive and navigated radiotherapy applications is the generation of a 4D patient model, where time is considered the 4th dimension. While for modern multislice CT scanners it is rather simple to produce a 3D movie of a sufficiently large portion of a patient, we had to achieve this with a rather slow single-slice spiral CT scanner. Our concept was to calculate artificial CT data for an arbitrary breathing status from a pair of inhale and exhale data sets by means of a physical model. In order to test different models, real data for intermediate states were compared with calculated data. The acquisition and analysis of these data are described below.

Acquiring real data with CT and external markers

For lung tumor patients a cohort study was performed. With a spiral CT scanner (Philips Tomoscan AV), a volume scan of the thorax was acquired while the patient was breathing normally. This data set was also used for our normal treatment planning procedure. In order to record the breathing cycle, the positions of two infrared emitters placed on the patient’s abdomen were tracked with a 3D IR camera (Flashpoint 5000). With a third emitter the longitudinal position of the patient relative to the CT gantry was measured (Figure 1 [Fig. 1]). The position of the third emitter is correlated to the Z-coordinate of a CT slice and, by means of a time stamp, to the coordinates of emitters 1 and 2, i.e. the breathing status. Therefore a breathing state could be assigned to every CT slice. Additionally, volume scans were acquired at the inhalation and exhalation states.
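The per-slice assignment described above can be sketched as a nearest-time-stamp lookup. The sampling rates and the sinusoidal breathing signal below are invented for illustration only; the function name is ours:

```python
import numpy as np

def assign_breathing_state(slice_times, marker_times, marker_y):
    """Assign a breathing-signal value to each CT slice by matching its
    acquisition time stamp to the nearest IR-marker sample in time."""
    idx = np.abs(marker_times[None, :] - np.asarray(slice_times)[:, None]).argmin(axis=1)
    return marker_y[idx]

# Hypothetical data: marker sampled at 60 Hz, one slice every 1.5 s,
# breathing modelled as a 4 s sinusoidal cycle
marker_times = np.arange(0.0, 30.0, 1.0 / 60.0)
marker_y = 5.0 * np.sin(2.0 * np.pi * marker_times / 4.0)
slice_times = np.arange(0.0, 30.0, 1.5)
states = assign_breathing_state(slice_times, marker_times, marker_y)
```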

Acquiring artificial data

We use a computed 4D patient model to determine the respiratory state and thereby the position of the target region. For real-time tracking this is done in a 2D/4D registration step (see the later section), matching 2D radiographs with all the computed synthetic 3D scans. The best match identifies the actual respiratory state. Brock et al. used finite element analysis to create a four-dimensional model of the liver [8]. We considered two methods for the deformation of the lung or the liver, one based on tetrahedrization and the other on thin-plate splines. Both methods need as input:

1.
Two initial CT scans, taken during inhalation respectively exhalation
2.
The number of desired interpolation steps (number of image stacks of the resulting 3D motion picture)
3.
An arbitrary amount of pairs of corresponding control points
Deformation by barycentric coordinates

First the two volumes are tessellated into tetrahedra by Delaunay tetrahedrization. The control points and the 8 corners of the volume’s bounding box are used as vertices for this tetrahedrization. During deformation, the vertices are linearly interpolated over time. In each interpolation step, the barycentric coordinates of all points of the current stack are calculated with respect to their enclosing tetrahedron.

Using these weights the positions of the corresponding pixels of the two source images are computed. The value of the new pixel is linearly interpolated between these pixels.
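The barycentric lookup described above can be sketched as follows; the helper names and the unit-tetrahedron example are ours:

```python
import numpy as np

def barycentric(p, v):
    """Barycentric coordinates of point p with respect to the tetrahedron
    given as a 4x3 vertex array v (rows v0..v3)."""
    # Solve the 3x3 system spanned by the edge vectors from v0
    T = np.column_stack((v[1] - v[0], v[2] - v[0], v[3] - v[0]))
    lam = np.linalg.solve(T, p - v[0])
    return np.concatenate(([1.0 - lam.sum()], lam))

def warp_point(p, v_t, v_inhale, v_exhale):
    """Map a point of the interpolated tetrahedron v_t back into the two
    source volumes using its barycentric weights."""
    b = barycentric(p, v_t)
    return b @ v_inhale, b @ v_exhale  # corresponding source positions

# Unit tetrahedron: its centroid has weights (1/4, 1/4, 1/4, 1/4)
v = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
b = barycentric(v.mean(axis=0), v)
```

The pixel value at the new position would then be linearly interpolated between the values found at the two mapped source positions.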

Deformation by thin-plate splines

The second approach is based on thin-plate splines, which were introduced by Duchon [9]; in the context of the analysis of plane mappings they were first considered by Bookstein [10]. A thin-plate spline interpolates a surface that is constrained not to move at given control points. This surface represents a thin metal plate, which deforms into the shape that is most economic with respect to its bending energy. The following quantity represents the bending energy of the thin-plate function in 3D space:

Equation 1:
J(f) = \iiint_{\mathbb{R}^3} \big( f_{xx}^2 + f_{yy}^2 + f_{zz}^2 + 2 f_{xy}^2 + 2 f_{xz}^2 + 2 f_{yz}^2 \big) \, dx \, dy \, dz

The thin-plate spline function is:

Equation 2:
f(x, y, z) = a_1 + a_x x + a_y y + a_z z + \sum_{i=1}^{n} w_i \, U\big( \lvert P_i - (x, y, z) \rvert \big)

The function consists of an affine part and a sum of functions U(r), where in 3D space U_TPS(r) = |r| is used. In our approach we linearly interpolate along the lines connecting corresponding control points and, in each interpolation step, determine the coefficients of the thin-plate spline function. The number of interpolation steps depends on the number of desired target scans. This method deforms the inhalation scan into the exhalation scan step by step, yielding the 3D motion picture needed to infer the respiratory state.
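A minimal sketch of fitting and evaluating such a spline with U(r) = |r| in 3D, assuming the standard linear system with side conditions that remove the affine part from the weights (the function names are ours, not from the paper):

```python
import numpy as np

def tps_fit(ctrl, vals):
    """Fit 3D thin-plate-spline coefficients with U(r) = |r|, interpolating
    vals at the control points ctrl (n x 3)."""
    n = ctrl.shape[0]
    K = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)  # U(r) = r
    P = np.hstack((np.ones((n, 1)), ctrl))                            # affine part [1, x, y, z]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T          # side conditions: weights orthogonal to affine terms
    rhs = np.concatenate((vals, np.zeros(4)))
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]  # weights w, affine coefficients a

def tps_eval(ctrl, w, a, pts):
    """Evaluate the fitted spline at query points pts (m x 3)."""
    U = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    P = np.hstack((np.ones((pts.shape[0], 1)), pts))
    return U @ w + P @ a
```

By construction the spline reproduces the prescribed values exactly at the control points, which is the interpolation property used for the control-point lines.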

Localization of targets by image registration

Due to the vast amount of data, it is generally not feasible for an expert to determine the position of the target manually at every time point of a 4D patient model. Depending on the image modalities, different software tools were developed to determine the position of the target from the known coordinates in an initial study. These are a “local registration” tool to find a similar 3D pattern in different CT volume data of a patient, a “time study” tool to find a 2D pattern in CT volume data of a patient, and a “2D/4D registration” tool to match a projection radiograph with CT volume data at the corresponding breathing status.

Local registration tool

Before a therapy is executed, a treatment CT study is carried out. The Radiotherapy Department at the Universitätsklinikum Würzburg has a Philips Tomoscan in the therapy room, as shown in Figure 1 [Fig. 1]. To control the patient’s position for therapy, a new dataset, the treatment study, is taken with this scanner. This new dataset is compared with the planning study in order to adjust the therapy to the actual position of the patient and the tumour. Using the xrayortho++ library from the Medical Applications Research Group at TU Munich, we developed an application to identify the target’s motion. The program first performs a global registration between the planning and therapy studies. The result of the global registration is used to get a coarse approximation of the position of the region of interest. Then a local registration is performed to improve accuracy in the vicinity of this region. The program flow is described in detail in the following paragraphs.

Global registration

The first step in our application is a global registration. As input the program receives two datasets to be registered, the planning volume and the treatment volume. At initialization time both volumes are globally registered using a rigid body transformation model.

There are several advantages to using a rigid body registration. Rigid body registration has been used extensively in recent decades because of its low computational cost. As it consists of rotations and translations, the problem of finding the best transformation has only six degrees of freedom and can be solved quickly on a modern PC. Another advantage is that it can be performed fully automatically. Similarity measures between two images drive the search for spatial correspondence. Measures such as mutual information or correlation coefficients rely on image information alone [11]. Therefore, to obtain a geometric transformation between two images, there is no need for additional information such as user-defined landmarks or segmentation.
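As an illustration, mutual information can be estimated from a joint intensity histogram. This is a generic textbook implementation, not the one used by the xrayortho++ library:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information of two equally shaped images, estimated from
    their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()                     # joint probability estimate
    px = p.sum(axis=1, keepdims=True)         # marginal of img_a
    py = p.sum(axis=0, keepdims=True)         # marginal of img_b
    nz = p > 0                                # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

Aligned (statistically dependent) images yield a high value, while independent image pairs yield a value near zero, which is what drives the correspondence search.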

Despite these advantages, there are other aspects to take into account, especially for medical applications: a global rigid transformation cannot always provide an accurate solution for tumor localization. Because the transformation is applied to the entire volume, the distance between any two points in the floating volume is assumed to be the same after being mapped into the reference volume. Although this is true for rigid structures like bones, it is not always the case in body regions such as the thorax or pelvis, where local displacements may take place because of internal organ movements. These local displacements may also affect the tumor’s position. Therefore, the global registration needs to be refined.

Local registration

To improve the accuracy of the tumor position we apply a local registration. The result of the global registration, i.e. a rigid transformation aligning the planning volume with the treatment volume, is saved in a file to be used later by the local registration. The planning study may have the target volume marked as a region of interest (ROI); in this case the application reads it. If not, the user can mark a ROI directly on the planning volume using a graphical user interface (GUI). Given the ROI, the local registration proceeds in three steps. Firstly, the centre of the ROI is mapped to the treatment volume using the global transformation. Secondly, this point is used as the centre of a search region with twice the width, twice the height and twice the depth of the ROI. Thirdly, these two sub-volumes, i.e. the ROI in the planning image and the search region in the treatment image, are locally registered, also with a rigid transformation. With this last transformation we calculate the tumor’s position.
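The second step, constructing a search region with twice the ROI extent around the mapped centre, can be sketched as a simple crop; the array indexing conventions and function name are our assumptions:

```python
import numpy as np

def search_region(volume, center, roi_shape):
    """Crop a search region around `center` with twice the ROI extent in
    each dimension, clipped to the volume bounds. Returns the sub-volume
    and its lower corner index."""
    half = np.array(roi_shape)  # half of 2*roi_shape, i.e. roi_shape itself
    lo = np.maximum(np.round(center).astype(int) - half, 0)
    hi = np.minimum(np.round(center).astype(int) + half, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]], lo

vol = np.zeros((50, 50, 50))
region, lo = search_region(vol, (25, 25, 25), (5, 6, 7))
```

The ROI sub-volume and this search region would then be passed to the rigid registration routine.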

Knowing the tumor’s position in the volume, the program provides its coordinates to the user. The patient lying on the treatment couch may have skin markers, which are easy to recognize in the CT images. These markers establish a coordinate system that is used as the reference for the treatment. Thus, the program can also calculate the tumor’s position and displacement from the origin of this skin-marker coordinate system.

A local rigid registration provides a refinement of the target’s position. Because both datasets are from the same patient and are usually acquired only a few days apart, large changes in the target’s shape and position are not expected. Thus it is reasonable to look for the tumor within the search region. In this way, the local registration detects small local movements.

Time study tool

The above tool is suited mainly for the detection of interfractional motion. Our primary goal was to analyze the tumor’s movement due to respiration. We identify an appropriate cranial-caudal position from the results of the 3D-3D rigid registration. At this position, several transversal CT slices are acquired over about 15 seconds to observe the free respiratory cycle. We reconstruct the tumor’s motion from the slices of this time study. The slices are sorted chronologically according to their time stamps. Each slice is compared with the 3D therapy CT and the organ displacement is calculated. These displacements are also obtained using a rigid registration between a slice and the CT volume. This process is completed in a few seconds on a PC. The displacements are then applied to the ROI to reconstruct its motion.

2D/3D registration

Before treatment, x-ray images are taken and compared to the 4D model of the patient (these x-ray images will be called live images). The tumor generally cannot be seen in these live images, but the target region is determined in the CT scans. After registration of a live image with the individual stacks of the 4D model, we know the best-matching 3D scan in the time series and therefore the respiratory state and tumor position. Thus we obtain the correlation between respiratory state and tumor position. For the task of registering the live kV image of the patient to the different stacks of our 4D model, we use an intensity-based 2D/3D registration approach [12]. Intensity-based fluoroscopy-to-CT fusion in the case under consideration involves the following three crucial steps:

1.
Calculation of a digitally reconstructed radiograph (DRR) for the viewpoint of the live image
2.
Comparison of the DRR with the real fluoroscopic kV images using a specific similarity measure
3.
Adaptation of the pose parameters (rigid transformation) so that the value of the similarity measure increases

These steps are repeated until no further improvement is possible. Many different techniques have been proposed for each of these steps during the last few years [13]. Because in our context several successive registrations have to be performed, it is essential to have a very fast yet accurate and robust method available. The most time-consuming part of the registration is step 1, the generation of DRRs for the volume data to be registered, because such an image has to be generated for each iteration of the optimization algorithm.

Using a hardware-accelerated volume rendering technique, DRRs for a 128³ volume can be created up to 80 times per second on today’s graphics hardware. Our implementation is based on 2D textures, which involves resampling the given volume into three stacks of slices with mutually perpendicular main axes. During rendering, the slices of the stack whose axis is most closely aligned with the viewing direction are drawn transparently in back-to-front order to emulate kV images [12]. A further way to decrease the time needed for a single registration is the choice of an optimization algorithm (step 3) that needs only a small number of evaluations of the similarity measure and therefore of DRRs. We found the Hooke-Jeeves pattern search to be effective with respect to both speed and robustness [14], [15]. We have already observed very promising accuracy results in the past using our own clinical navigation system designed for specific orthopaedic surgery procedures such as intertrochanteric osteotomy.
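A minimal sketch of Hooke-Jeeves pattern search, here minimizing a cost such as a negated similarity measure (step size, shrink factor and tolerance are illustrative defaults, not the values used in the registration system):

```python
import numpy as np

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-4, max_iter=1000):
    """Minimize f by Hooke-Jeeves pattern search: coordinate-wise
    exploratory moves followed by a pattern move, shrinking the step
    size whenever no improvement is found."""
    def explore(base, s):
        x = base.copy()
        for i in range(len(x)):
            for d in (s, -s):           # try +s, then -s along axis i
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if step < tol:
            break
        new = explore(x, step)
        if f(new) < f(x):
            # pattern move: keep going in the successful direction
            pattern = explore(new + (new - x), step)
            x = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink
    return x
```

The method needs no gradients, only cost evaluations, which is why it keeps the number of DRR generations small.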

Nevertheless, additional experiments using freely available gold-standard data [16] revealed that both accuracy and convergence behaviour may still be improved; they are therefore subject to further investigation.

In our application of tracking tumors, the registration time for a new live image is about 10 seconds per stack. This yields only intermittent information about the target location. By the time the comparison of the current live shot with all scans in the time series is completed, the target may already have moved. To solve this problem we use a sensor that reports the current state of respiration in real time. This sensor is an infrared tracking system with emitters attached to significant positions on the patient's chest and abdomen (Figure 1 [Fig. 1]).

Kubo and Hill [17] compared various external sensors, such as a breath temperature sensor, strain gauge or spirometer, with regard to their applicability for respiratory gating. We use the infrared tracking method within our application because of its high accuracy and speed as well as its stability under radiation exposure. The information from the sensor is correlated with the target location computed by comparing the live shot with the 4D model. For this purpose the live shot carries a time stamp, so we can determine which reading of the real-time sensor corresponds to that point in time. By repeating this time stamp synchronization, a complete correlation model can be obtained, which correlates target motion to the readings of the real-time sensor (internal to external motion). The correlation model only provides the position of the target volume; the therapeutic beam, however, has to reach the moving target volume early enough. Both synchronization (gating) and active tracking have latencies due to physical limitations of the particular technique. Therefore, predictive tracking uses the periodicity of the particular anatomical signal (respiration or pulsation) in order to compensate for this latency [18].
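As a toy illustration of such a correlation model, a least-squares linear map from the external marker signal to one internal target coordinate can be fitted from the time-stamp-synchronized sample pairs. The linear form and the numbers are assumptions for this sketch; the actual clinical model may be more complex:

```python
import numpy as np

def fit_correlation(external, internal):
    """Least-squares linear model mapping the external marker signal to
    one internal target coordinate, built from time-stamp-synchronized
    sample pairs."""
    A = np.column_stack((external, np.ones_like(external)))
    coef, *_ = np.linalg.lstsq(A, internal, rcond=None)
    return coef  # (slope, intercept)

def predict(coef, external_now):
    """Predict the internal coordinate from a new external reading."""
    return coef[0] * external_now + coef[1]

# Hypothetical synchronized samples: internal motion = 2.5 * external + 10 mm
ext = np.linspace(-1.0, 1.0, 20)
intl = 2.5 * ext + 10.0
coef = fit_correlation(ext, intl)
```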

In Figure 2 [Fig. 2] the setup of the system described above is shown.


Results

Acquisition of 4D image data

For an emitter on the abdomen, Figure 3 [Fig. 3] shows the tracking signal during the whole course of a CT examination. Between approximately 400 s and 600 s, scans with normal breathing of the patient were performed. At about 900 s the patient was asked to inhale and hold breath. The inhalation is clearly detected in the y signal of the emitter. At about 1300 s a volume scan was performed with the patient in the exhale state. Figure 4 [Fig. 4] shows the correlation between CT slices and respiratory state by means of the corresponding tracking signals.

Warping algorithms for 4D models

To evaluate the warping algorithms, MR scans of the liver and lung were used [19]. Both methods were tested with 91 control point pairs selected manually by an expert and the two image stacks, one inhalation and one exhalation, as input. Nine intermediate scans were calculated for later registration. Calculation times were 64 minutes for the thin-plate spline based method and 69 minutes for the tetrahedrization based method, respectively. Both computations were performed on a 2 GHz standard personal computer. To evaluate the results we compared them with a real intermediate scan located midway between the two initial scans. First the liver was segmented in the two input scans and in the intermediate scan. Then the calculations were performed on these images with the same control point pairs as before. After that, the reconstructed surfaces of the synthetic intermediate scans were compared to the surface of the real intermediate scan in order to obtain the average deviation of the surfaces in mm. The surfaces were reconstructed using the marching cubes algorithm [20]. For each vertex of the surface triangles of the real intermediate state, the distance to the nearest point on the nearest triangle of the computed intermediate state was calculated. The average deviation was taken to be the mean value of these distances. As expected, the intermediate data set matches the artificial 4D model best at the middle of the time period of the respiratory cycle (Figure 5 [Fig. 5]). The tetrahedrization based method led to an average deviation of 4.9 mm and the thin-plate spline based method to an average deviation of 3.8 mm. Additionally, we generated difference images to visualize the results. Figure 6 [Fig. 6] shows an example with the original image data, and Figure 7 [Fig. 7] with the segmented data. Similar results were obtained with CT data of lung patients. These data are required for the DRR-based tracking procedure.
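The deviation measure can be approximated on vertex sets as a mean nearest-neighbour distance. Note that the paper uses the distance to the nearest point on the nearest triangle; this vertex-only version is a simplification for illustration:

```python
import numpy as np

def mean_surface_deviation(real_v, synth_v):
    """Average distance from each vertex of the real intermediate surface
    (real_v, n x 3) to the nearest vertex of the computed surface
    (synth_v, m x 3); a vertex-based approximation of the
    point-to-triangle distance."""
    d = np.linalg.norm(real_v[:, None, :] - synth_v[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Tiny example: two surfaces offset by 0.5 mm along x
real = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
synth = real + np.array([0.5, 0.0, 0.0])
dev = mean_surface_deviation(real, synth)
```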

Determination of the target position

We performed 180 tests of our local registration application with eleven pairs of CT volumes. These volumes correspond to thorax, neck, head and pelvis. In Figure 8 [Fig. 8] the display of an exemplary result is shown. By visual inspection, in most cases the local registration delivers an improvement over the global registration alone. A quantitative comparison of both methods was also performed. As a quantitative performance measure, we examined the differences between the region of interest in the planning volume and the two resulting images determined in the therapy volume. In this way, we compared how well a resulting image covered the original region of interest. In our tests, we found that in 75% of the cases the local registration covered the region of interest better than (62%) or at least as well as (13%) the global registration. The other 25% correspond to body parts with higher flexibility, like the spine. In these cases both local and global registration performed poorly, because such movements cannot be captured by a rigid body model. For body parts where local deformations may take place, non-rigid registration may deliver better results [21], [22], but at a higher computational cost.

Figure 9 [Fig. 9] shows an example produced with the time study tool. We tested this application with data from 4 different lung patients, a total of 8 studies, and compared our results with those of an expert. In the 3D-3D registration we had 3D volumes with 5 mm spacing between the slices (cranial-caudal direction). When comparing the results, one has to keep in mind that a human has to choose the “best slice” and round his coordinates, whereas the application can reconstruct and interpolate the volume between the slices. We observed differences on the order of +/- 5 mm in the right-left and cranial-caudal directions. In the dorsal-ventral direction we observed larger differences of about 10 mm. Taking the respiratory movement into account, we reduced the discrepancy between our results and the expert’s coordinates to +/- 3 mm. As there is no gold standard for this problem, it is difficult to establish absolute errors.

2D/4D registration

To verify our 4D registration method, we generated DRRs from real and from synthetic scans of a whole respiration period and matched them with our 4D model, generated by the different interpolation techniques. As expected, the best matches were located at the respiratory states from which we had generated the DRRs. Furthermore, the next-best matches were the adjoining scans. The charts in Figure 10 [Fig. 10] show the registration results for the different respiratory states. As the registration metric we used mutual information in this case. In Figure 11 [Fig. 11] the result of a single 2D/3D registration step is shown.


Discussion

Navigated control of a radiation beam is sought to further increase the benefit of radiotherapy for moving targets. Many different activities have to be coordinated in order to reach this goal, and the necessary software tools were developed and tested. We are able to measure a sparse 4D CT patient model, and the possibility of calculating the missing data with morphing techniques was demonstrated. Beyond that, we are able to find the target position with the aid of anatomical structures or landmarks in 3D and in 2D images at any time or at an arbitrary breathing status. Our next goal is the control of a robotic therapy couch in order to keep the target covered by the radiation field at all times during irradiation.


Acknowledgements

The help of Drs Jörn Wulf and Dirk Vordermark with gathering appropriate patient data is acknowledged.

Supported by: Deutsche Forschungsgemeinschaft (DFG SPP1124)


References

1.
ICRU. Prescribing, Recording and Reporting Photon Beam Therapy. International Commission on Radiation Units and Measurements; 1993. Report No. 50.
2.
Cheung PC, Sixel KE, Tirona R, Ung YC. Reproducibility of lung tumor position and reduction of lung mass within the planning target volume using active breathing control (ABC). Int J Radiat Oncol Biol Phys. 2003;57(5):1437-42.
3.
Remouchamps VM, Letts N, Yan D, Vicini FA, Moreau M, Zielinski JA, Liang J, Kestin LL, Martinez AA, Wong JW. Three-dimensional evaluation of intra- and interfraction immobilization of lung and chest wall using active breathing control: a reproducibility study with breast cancer patients. Int J Radiat Oncol Biol Phys. 2003;57(4):968-78.
4.
Shirato H, Seppenwoolde Y, Kitamura K, Onimura R, Shimizu S. Intrafractional tumor motion: lung and liver. Semin Radiat Oncol. 2004;14(1):10-8.
5.
Baudet V, Villard PF, Jaillet F, Beuve M, Shariat B. Towards Accurate Tumour Tracking in Lungs. In: International Conference on Information Visualization. London; 2003. p 338-43.
6.
Boldea V, Sarrut D, Clippe S. Lung Deformation Estimation with Non-Rigid Registration for Radiotherapy Treatment. In: MICCAI Lecture Notes in Computer Science. Springer Verlag; 2003. p. 770-7.
7.
Rohlfing T, Maurer CR, O'Dell WG, Zhong J. Modeling Liver Motion and Deformation During the Respiratory Cycle Using Intensity-Based Free-Form Registration of Gated MR Images. In: Medical Imaging: Visualization, Display, and Image-Guided Procedures. Bellingham, WA: SPIE; 2001. p. 337-48.
8.
Brock KK, Hollister SJ, Dawson LA, Balter JM. Technical note: Creating a four-dimensional model of the liver using finite element analysis. Med Phys. 2002;29(7):1403-5.
9.
Duchon J. Interpolation des fonctions de deux variables suivant le principe de la flexion des plaques minces. RAIRO Analyse Numérique. 1976.
10.
Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11:567-85.
11.
Hajnal JV, Hawkes DJ, Hill DLG, editors. Medical Image Registration. Boca Raton: CRC Press; 2001.
12.
Roth M, Dötter M, Burgkart R, Schweikard A. Fast intensity-based fluoroscopy-to-CT registration using pattern search optimization. Elsevier; 2004. p. 165.
13.
Penney GP, Weese J, Little JA, Desmedt P, Hill DLG, Hawkes DJ. A Comparison of Similarity Measures for Use in 2D-3D Medical Image Registration. In: Wells WM, Colchester A, Delp S, editors. Lecture Notes in Computer Science. Cambridge, MA, USA; 1998. p. 1153.
14.
Bell M, Pike MC. Remark on algorithm 178 [E4] direct search. Commun ACM. 1966;9(9):684-5.
15.
Tomlin FK, Smith LB. Remark on algorithm 178 [E4]: direct search. Commun ACM. 1969;12(11):637-8.
16.
Tomazevic D, Likar B, Pernus F. "Gold Standard" 2D/3D Registration of X-Ray to CT and MR Images. In: Dohi T, Kikinis R, editors. Lecture Notes in Computer Science. Tokyo, Japan; 2002. p. 461-8.
17.
Kubo HD, Hill BC. Respiration gated radiotherapy treatment: a technical study. Phys Med Biol. 1996;41(1):83-91.
18.
Schweikard A, Glosser G, Bodduluri M, Murphy MJ, Adler JR. Robotic motion compensation for respiratory movement during radiosurgery. Comput Aided Surg. 2000;5(4):263-77.
19.
Berlinger K, Roth M, Fisseler J, Sauer O, Schweikard A, Vences L. Volumetric deformation model for motion compensation in radiotherapy. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI. Saint Malo, France; 2004. p. 925-32.
20.
Lorensen WE, Cline HE. Marching cubes: A high resolution 3D surface construction algorithm. In: International Conference on Computer Graphics and Interactive Techniques. 1987. p. 163-9.
21.
Lester H, Arridge SR, Jansons KM, Lemieux L, Hajnal JV, Oatridge A. Non-linear registration with the variable viscosity fluid algorithm. In: Kuba A, editor. IPMI. Heidelberg: Springer-Verlag; 1999. p. 238-51.
22.
Little JA, Hill DLG, Hawkes DJ. Deformations incorporating rigid structures. Comp Vision and Image Understanding. 1997;66(2):223-32.