gms | German Medical Science

GMS Current Topics in Computer and Robot Assisted Surgery

Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC)

ISSN 1863-3153

Intuitive volume classification in medical augmented reality (AR)

Research Article

GMS CURAC 2006;1:Doc08

The electronic version of this article is complete and available online at: http://www.egms.de/de/journals/curac/2006-1/curac000008.shtml

Published: September 20, 2006

© 2006 del Río et al.
This is an Open Access article distributed under the terms of the Creative Commons License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be copied, distributed, and made publicly available, provided that the author and source are credited.


Abstract

A proper classification is a key factor for the successful visualization of 3D medical data. In direct volume rendering applications, the data is classified by means of a transfer function that maps each voxel’s characteristics to optical properties such as color and opacity, thus highlighting certain regions over others. In a medical scenario, the structures of interest that must be highlighted may vary strongly depending on the application. Therefore, easy and intuitive user interaction is crucial to achieve a meaningful classification for a given patient’s scan. In this paper, we focus on a novel approach that combines direct visualization and interaction with 3D data in a medical augmented reality (AR) environment. The proposed method takes into account regions of interest directly defined by the physician in the patient’s anatomy and employs this information to automatically generate an adequate transfer function. Illustrative results demonstrate the utility of this interactive visualization paradigm for medical applications.

Keywords: medical visualization, augmented reality, volume rendering, volume classification, machine learning


1. Introduction

The rapid development over the last decades of better and more accurate scanning devices, such as computed tomography (CT) or magnetic resonance tomography (MRT) scanners, has challenged medical visualization to provide efficient techniques for displaying the acquired data. Direct volume rendering provides a powerful tool for displaying scanned volume data in a three-dimensional environment. Since all the data is directly processed and incorporated into the scene, it generates a valuable global overview of the whole dataset at once. However, despite the importance of such a general insight as a first approach to a patient’s scan, commonly only certain limited regions are especially relevant for a meaningful visualization and must be highlighted with respect to the rest of the anatomy.

In direct volume rendering, the classification step consists of assigning renderable optical properties to different regions within the dataset. Therefore, in order to highlight certain regions of interest over others, a proper classification is mandatory. Classification is commonly performed by means of a transfer function, which maps internal parameters of the data (e.g., voxel intensity) to the color and opacity values to be used during rendering. Since in most medical scenarios the information sought after is highly dependent on the application, user interaction is vital to allow the physician to guide the classification of the data. By translating the interaction between the user and the analyzed data into an augmented reality environment, a better and more direct manipulation of the volume is enabled. Moreover, given the inherent three-dimensional nature of the data, the definition of a key factor for its visualization, such as the transfer function, clearly benefits from direct real-time interaction in 3D.
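
As a simple illustration of such a mapping (a minimal sketch, not the implementation used in this work), a one-dimensional transfer function can be realized as an RGBA lookup table indexed by voxel intensity; the intensity ranges and colors below are purely illustrative:

    import numpy as np

    # Hypothetical 1D transfer function: a 256-entry RGBA lookup table
    # indexed by 8-bit voxel intensity (values are illustrative only).
    tf = np.zeros((256, 4), dtype=np.float32)
    tf[60:120] = (0.8, 0.6, 0.5, 0.05)   # e.g., soft tissue: skin-like color, nearly transparent
    tf[120:256] = (1.0, 1.0, 0.9, 0.4)   # e.g., bone: bright color, more opaque

    def classify(volume):
        """Map each voxel intensity to RGBA optical properties."""
        return tf[volume]                # shape (X, Y, Z) -> (X, Y, Z, 4)

    volume = np.random.randint(0, 256, size=(64, 64, 64), dtype=np.uint8)
    rgba = classify(volume)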

In this work, we propose a semi-automatic strategy for informative volume classification based on user interaction in a medical AR environment. The remainder of this paper is structured as follows: in the following section, we first provide a brief overview of related work. In Section 2.2, we introduce ARGUS, our medical AR system. Subsequently, in Section 2.3, the classification process of a patient’s scan is described in detail. Results obtained with real clinical data are then presented and discussed in Section 3. Finally, we summarize the most relevant aspects of our work in Section 4.


2. Methods

2.1 Related work

Despite the crucial relevance of classification as a tool to identify and visualize relevant features in a volume dataset with direct volume rendering, only in recent years has considerable research effort been put into the problem of finding a proper transfer function. The use of multi-dimensional transfer functions can drastically benefit the success of volume visualization [1]. Therefore, much effort has been devoted to optimizing the definition of such transfer functions. Kindlmann and Durkin [2] use first and second order derivative information in the transfer function design in order to semi-automatically isolate structures within the volumetric dataset that correlate with a material boundary model. An interactive approach is presented by Kniss et al. in [3], where the gradient magnitude is also computed, together with the Hessian matrix. This information is then incorporated into the transfer function design process by a set of manipulation widgets. However, even with this interactive system, considerable expertise is required of the user to achieve a meaningful visualization in a reasonable amount of time. The concept of interacting directly with the rendered result instead of with a representation of the transfer function domain has been taken one step further by Tzeng et al. [4]. In this case, the transfer function space is kept completely hidden from the user, who only interacts with the volume itself by painting on sample slices. The classification itself is performed by one multilayer perceptron (MLP) neural network for each predefined material class. In our interface proposal, we borrow the idea of using a multi-dimensional transfer function, while limiting the user interaction to the spatial domain. However, our approach clearly differs by employing an augmented reality paradigm in which real 3D interaction with the volume is guaranteed, in contrast to a 2D slice-based solution.
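
For reference, the gradient magnitude that spans the second axis of such (intensity, gradient magnitude) transfer function domains can be computed directly from the volume. The following sketch (illustrative only, not code from the cited systems) derives it with central differences and builds the corresponding 2D histogram:

    import numpy as np

    def gradient_magnitude(volume, spacing=(1.0, 1.0, 1.0)):
        """Central-difference gradient magnitude of a scalar volume."""
        gx, gy, gz = np.gradient(volume.astype(np.float32), *spacing)
        return np.sqrt(gx * gx + gy * gy + gz * gz)

    def histogram_2d(volume, grad_mag, bins=256):
        """2D histogram over (intensity, gradient magnitude),
        the domain of a two-dimensional transfer function."""
        hist, _, _ = np.histogram2d(volume.ravel(), grad_mag.ravel(), bins=bins)
        return hist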

In this work, we propose the use of intuitive interaction in a medical augmented reality environment as the interface for semi-automatic volume classification. Augmented reality denotes techniques which combine images of the real environment with three-dimensional computer-generated graphics. An overview of augmented reality is given by Azuma [5]. The implementation of our proposed method was realized using our medical augmented reality framework ARGUS [6]. Medical diagnosis and treatment have traditionally been among the main applications of augmented reality. State et al. [7] proposed an early system for supporting ultrasound-guided needle biopsies. As another example, a high performance video see-through augmented reality system for medical applications was presented by Vogt et al. [8].

The combination of augmented reality and volume classification has also recently been addressed by a system that utilizes an augmented reality user interface as a tool for manually defining transfer functions [9]. This work, however, is devoted to the generation of a traditional one-dimensional transfer function, where the user manually combines a set of predefined functions in a trial-and-error fashion.

2.2 Medical AR

The ARGUS system for medical augmented reality, which is the basis for the approach presented here, was previously described in [6]. ARGUS is a framework for medical AR which is based on existing, commercially available surgical equipment (ARGUS stands for "Augmented Reality based on Image GUided Surgery"). In the AR setup, a VectorVision® intraoperative navigation device equipped with a highly accurate infrared tracking system is used (see Figure 1b [Fig. 1]). The tracking information delivered by the infrared cameras is utilized both for the pose estimation of the digital video camera used in the AR system and for realizing the novel user interaction in the presented semi-automatic volume classification approach.

In the ARGUS system, a specialized calibration step is utilized for computing the transformation between the tracked surgical instrument clamp and the camera coordinate system. This is a one-time calibration step, which does not need to be repeated as long as the physical relationship between the instrument clamp and the digital camera remains unchanged. In this calibration step, optical marker tracking is used for establishing a common coordinate system. This is why a black-and-white optical marker is partially visible in Figure 1a [Fig. 1] as well as in other figures. During the actual operation of the augmented reality framework, however, only the infrared tracking provided by the intraoperative navigation device is used. Figure 2 [Fig. 2] shows an overview of the one-time calibration step.
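
Conceptually, this calibration amounts to composing rigid transformations between the coordinate systems involved: the camera pose measured via optical marker tracking and the clamp pose measured via infrared tracking yield a fixed clamp-to-camera offset, which is later combined with the live clamp pose. The following sketch (Python with 4x4 homogeneous matrices and hypothetical pose names, not the actual ARGUS code) illustrates the idea:

    import numpy as np

    def invert_rigid(T):
        """Invert a 4x4 rigid-body transformation (rotation + translation)."""
        R, t = T[:3, :3], T[:3, 3]
        Tinv = np.eye(4)
        Tinv[:3, :3] = R.T
        Tinv[:3, 3] = -R.T @ t
        return Tinv

    # One-time calibration: the camera pose is obtained via optical marker tracking,
    # the clamp pose via the infrared tracking of the navigation system
    # (both hypothetical 4x4 poses in a common world coordinate system).
    def calibrate(T_world_cam, T_world_clamp):
        return invert_rigid(T_world_clamp) @ T_world_cam   # fixed clamp-to-camera offset

    # During operation, only the infrared tracking of the clamp is needed:
    def camera_pose(T_world_clamp_now, T_clamp_cam):
        return T_world_clamp_now @ T_clamp_cam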

Several objects can be tracked simultaneously by the intraoperative navigation system. Infrared marker clamps consisting of a configuration of three reflective spheres are attached to the surgical or interaction tools which are to be tracked. Moreover, a pre-configured pointer tool is supplied by the manufacturer of the IGS device. A user interaction library that uses this tracking information was designed and implemented on the basis of the ARGUS framework [10]. Different pen-like and pointer-like tools are used as untethered interaction devices. The user interface system is capable of detecting different click gestures for the definition of points in 3D. Moreover, a full-fledged menu system with freely placeable menu items is provided. An example application of this user interaction system is shown in Figure 1a [Fig. 1].
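
The concrete click gestures are described in [10]; as one plausible variant (purely an assumption for illustration, not necessarily the gesture actually implemented), a "click" could be detected when the tracked pointer tip dwells within a small radius for a short time:

    import numpy as np

    DWELL_RADIUS_MM = 2.0    # assumed tolerance for a stationary tip
    DWELL_TIME_S = 1.0       # assumed dwell duration

    def detect_dwell_click(samples):
        """samples: list of (timestamp_s, tip_position as np.array of shape (3,)).
        Returns the dwell position if the tip stayed within DWELL_RADIUS_MM
        for at least DWELL_TIME_S, otherwise None."""
        if not samples:
            return None
        t_last, p_last = samples[-1]
        for t, p in reversed(samples):
            if np.linalg.norm(p - p_last) > DWELL_RADIUS_MM:
                return None
            if t_last - t >= DWELL_TIME_S:
                return p_last
        return None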

2.3 Classification of patient’s scan

The novel paradigm for volume classification we propose constructs a multi-dimensional transfer function in a semi-automatic manner. The integration of interaction and direct visualization in a medical AR environment liberates the physician from the internal complexity of the tedious design process. The key elements of the algorithm are summarized in the functional pipeline depicted in Figure 3 [Fig. 3].

Initially, the patient’s scan is rendered in the medical AR environment, so that both the camera and the registered patient are within the tracked working area. The rendered volume is overlaid on top of the actual patient’s anatomy. The dataset is rendered with a standard linear ramp transfer function for all color and opacity channels, producing a gray-scale representation. This rendered volume can be directly examined in the medical AR environment using a clipping plane widget (see Figure 4 [Fig. 4]).

The next step consists of identifying materials of interest within the patient’s anatomy. The physician points at a location directly in the volume using a clipping plane and a pointer tool (Figure 4 [Fig. 4]). The clipping plane allows the user to browse through the rendered volume, thus providing a view into the interior of the dataset. Once a representative location has been found for one material of interest, it can be picked by the user by pointing at it with the pointer tool. Note that this does not mean that the tip of the pointer tool has to be on the clipping plane (which would generally mean inside the patient’s anatomy). Instead, a ray is projected from the tip of the pointer tool, as illustrated in the upper part of Figure 4 [Fig. 4]. The intersection between the clipping plane and this ray is used as the material definition point. This way, a set of points identifying different materials of the patient’s anatomy can be defined. These points are then processed and used for the automatic generation of a transfer function that highlights the marked regions. The transfer function is produced with the help of machine learning algorithms working on the basis of a 2D histogram of the dataset (i.e., voxel intensity vs. gradient magnitude). In order to provide enough data to the automatic classifier, samples in a small region surrounding each point are selected on the 2D histogram. This also helps to reduce the interaction time necessary to define the regions of interest. For the automatic transfer function generation, we have tested two different approaches: the first is based on an artificial neural network, more specifically a Multi-Layer Perceptron (MLP), while the second employs a k-Nearest Neighbors (kNN) classifier. Finally, the obtained classification is applied to the volume and the result is rendered again in the medical AR environment on top of the registered patient.
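
As an illustration of these steps, the following Python sketch outlines one possible realization: intersecting the pointer ray with the clipping plane to obtain the material definition point, collecting (intensity, gradient magnitude) samples in a small neighborhood around the picked voxel, and training a kNN classifier whose predictions over the histogram domain yield a 2D RGBA transfer function. All function names are hypothetical, and the kNN step uses scikit-learn as a stand-in; this is not the implementation used in ARGUS.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
        """Material definition point: intersection of the pointer ray with the clipping plane."""
        denom = np.dot(plane_normal, ray_dir)
        if abs(denom) < 1e-8:
            return None                                   # ray parallel to the clipping plane
        t = np.dot(plane_normal, plane_point - ray_origin) / denom
        return ray_origin + t * ray_dir if t >= 0 else None

    def collect_samples(volume, grad_mag, voxel, radius=2):
        """Gather (intensity, gradient magnitude) pairs in a small neighborhood around a picked voxel."""
        x0, y0, z0 = (max(0, c - radius) for c in voxel)
        x1, y1, z1 = (c + radius + 1 for c in voxel)
        block = np.s_[x0:x1, y0:y1, z0:z1]
        return np.stack([volume[block].ravel(), grad_mag[block].ravel()], axis=1)

    def generate_2d_transfer_function(samples, labels, material_rgba, bins=256, max_grad=None):
        """Train a kNN classifier on the user samples and evaluate it over the
        (intensity, gradient magnitude) domain to obtain a 2D RGBA transfer function.
        labels are integer class indices into material_rgba (shape (n_classes, 4))."""
        knn = KNeighborsClassifier(n_neighbors=5).fit(samples, labels)
        if max_grad is None:
            max_grad = samples[:, 1].max()
        ii, gg = np.meshgrid(np.linspace(0, 255, bins),
                             np.linspace(0, max_grad, bins), indexing="ij")
        grid = np.stack([ii.ravel(), gg.ravel()], axis=1)
        classes = knn.predict(grid).reshape(bins, bins)
        return material_rgba[classes]                     # (bins, bins, 4) lookup table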


3. Results & discussion

We have tested the viability of the proposed method by applying the above-described techniques to real clinical data. Due to space restrictions, only two representative examples are presented in this paper. The first example dataset is an MRI scan of a human head with 512x512x160 voxels and 8 bits/voxel. In order to simulate the typical scenario for a medical AR application, the patient’s scan has been registered using a semi-transparent plastic skull phantom. This way, the patient data can be efficiently overlaid on top of the phantom, as presented in Figure 5 [Fig. 5]. However, since the patient’s scan and the plastic skull do not have exactly the same size and shape, a minor but visible registration mismatch occurs.

As can be seen in the images (see Figure 5 [Fig. 5]), the initial rendering does not provide sufficient insight into the structures present in the MRI scan of the patient’s head, and a new classification must be performed in order to reveal the information it contains. Using the direct interaction tools (clipping plane and pointer tool), we define five different materials as described in Table 1 [Tab. 1].

Each material is represented by a color and opacity value, as well as a label. The selected sample points are reproduced in Figure 6c [Fig. 6] on the 2D histogram of the dataset. A preliminary manual classification may already be performed with these user-defined data. The result of such an intermediate classification is shown in Figure 6a and b [Fig. 6]. Even though this provides a more appealing visual result than the initial standard transfer function, the result is still not satisfactory for most applications. Of course, the user can further refine this intermediate manual classification by defining more and more points for each material. This, although compatible with our algorithm, would notably increase the time required for the classification of the volume, thus limiting the benefits introduced by the method. Therefore, an automatic classifier is used instead. This way, the user defines only a small set of points that is sufficient to create a much more complex and meaningful visualization of the patient’s anatomy.

In this example, an MLP neural network is used as the automatic classifier. A threshold of 70% was set as the minimum probability for considering a voxel as belonging to one of the defined material classes. Voxels with a probability below 70% were rendered as transparent. Figure 7 [Fig. 7] shows the result obtained after applying the transfer function generated by the MLP neural network. The transfer function itself is represented in Figure 7c [Fig. 7] on the 2D histogram of the dataset. Each point on this 2D histogram corresponds to one voxel in the volume, while the colors indicate the distribution of the classes (i.e., materials) produced by the automatic classification process.
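
The rejection step can be expressed compactly: for each voxel (or histogram bin), the class probabilities predicted by the network are inspected, and any sample whose maximum probability falls below 70% is mapped to a fully transparent entry. A minimal sketch with assumed array shapes (not the actual implementation):

    import numpy as np

    def apply_probability_threshold(probabilities, material_rgba, threshold=0.7):
        """probabilities: (N, n_classes) class probabilities from the classifier.
        Returns (N, 4) RGBA values; samples below the threshold become transparent."""
        best_class = probabilities.argmax(axis=1)
        rgba = material_rgba[best_class].copy()
        rgba[probabilities.max(axis=1) < threshold] = (0.0, 0.0, 0.0, 0.0)
        return rgba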

As can be seen in Figure 7a and b [Fig. 7], the inner structure of the volume has been made visible, while the materials marked by the user during the direct manipulation stage have been successfully highlighted.

Figure 8 [Fig. 8] shows our second example, a more realistic case that illustrates the utilization of our method in a pseudo-clinical scenario with a non-clinical test subject. Here again, the dataset is an MRI scan of a human head, with a spatial resolution of 512x512x74 voxels and 16 bits/voxel. The MRI scan is first rendered on top of the image of the registered test subject, with the help of the infrared tracking provided by the IGS system (see Figure 8a and b [Fig. 8]). (Note that in some of the images shown in Figure 8 [Fig. 8], an offset between the graphical overlay and the skull of the test subject is visible. This inaccuracy is caused by a non-optimal registration based on fiducials fixed to a pair of goggles worn by the subject. Moreover, an incorrect scaling of the volume dataset was erroneously applied in the test case illustrated in this figure.) A set of points is then directly defined in the medical AR environment, as shown in Figure 8c and d [Fig. 8]. Once these points are processed and passed to the automatic classifier, the corresponding 2D transfer function is obtained and applied to the dataset (see Figure 8e and f [Fig. 8]). The transfer function is displayed on the 2D histogram of the dataset in Figure 8g [Fig. 8].

This second example not only illustrates the interaction in a real scenario, but also demonstrates how our proposed method allows us to produce rather complex volume classifications that are able to discern between different tissues and materials within a patient’s anatomy (see Figure 8e [Fig. 8]).


4. Conclusions

Volume classification provides a powerful tool for the visualization of 3D anatomical data with direct volume rendering. In a medical scenario, the aim of the visualization is highly dependent on the application. Therefore, the involvement of the user in the visualization process is of utmost importance for a successful utilization of visualization techniques. This makes user interaction a first-order priority for any useful volume classification method with medical purposes. Unfortunately, most transfer function definition algorithms are rather complex, so that the assistance of a visualization expert is usually required to obtain meaningful images. In this paper, we have presented an approach for semi-automatic transfer function design that is based on direct interaction between the user and the rendered volume in a medical augmented reality (AR) environment. By working in an AR environment, intuitive manipulation and interaction tools can be directly utilized to guide the classification process. Based on a set of sample points defined by the user directly in the patient’s anatomy within the AR environment, an automatic volume classification is carried out using machine learning techniques. Specifically, a multi-layer perceptron neural network and a k-nearest neighbors classifier have been selected and tested. The obtained results indicate that this method permits the generation of complex multi-dimensional transfer functions, producing meaningful images and providing good insight into the data. Moreover, thanks to the combination of an intuitive, easy-to-use user interface and the application of automatic classifiers, the complexity of the volume classification process remains transparent to the user, thus facilitating the integration of these techniques into clinical practice.


References

1. Pfister H, Lorensen B, Bajaj C, Kindlmann G, Schroeder W, Avila LS, Martin K, Machiraju R, Lee J. The Transfer Function Bake-Off. IEEE Computer Graphics and Applications. 2001;21(3):16-22.
2. Kindlmann G, Durkin JW. Semi-Automatic Generation of Transfer Functions for Direct Volume Rendering. In: Proc. of IEEE Symposium on Volume Visualization. 1998. pp. 79-86.
3. Kniss J, Kindlmann G, Hansen C. Interactive Volume Rendering Using Multi-Dimensional Transfer Functions and Direct Manipulation Widgets. In: Proc. of IEEE Visualization. 2001. pp. 255-62.
4. Tzeng FY, Lum EB, Ma KL. A Novel Interface for Higher-Dimensional Classification of Volume Data. In: Proc. of IEEE Visualization. 2003. pp. 505-12.
5. Azuma R. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments. 1997;6(4):355-85.
6. Fischer J, Neff M, Freudenstein D, Bartz D. Medical Augmented Reality based on Commercial Image Guided Surgery. In: Proc. of Eurographics Symposium on Virtual Environments. June 2004. pp. 83-6.
7. State A, Livingston M, Hirota G, Garrett W, Whitton M, Fuchs H, Pisano E. Technologies for Augmented-Reality Systems: Realizing Ultrasound-Guided Needle Biopsies. In: Proc. of ACM SIGGRAPH. August 1996. pp. 439-46.
8. Vogt S, Khamene A, Sauer F, Keil A, Niemann H. A High Performance AR System for Medical Applications. In: Proc. of IEEE International Symposium on Mixed and Augmented Reality. October 2003. pp. 270-1.
9. Reitinger B, Zach C, Bornik A, Beichel R. User-Centric Transfer Function Specification in Augmented Reality. In: Proc. of WSCG. Plzen, Czech Republic. February 2004; vol. 12.
10. Fischer J, Bartz D, Straßer W. Intuitive and Lightweight User Interaction for Medical Augmented Reality. In: Vision, Modeling, and Visualization. Erlangen. 2005. pp. 375-82.