gms | German Medical Science

GMS Current Topics in Computer and Robot Assisted Surgery

Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC)

ISSN 1863-3153

Expressive anatomical illustrations based on scanned patient data

Research Article

  • corresponding author Zein Salah - WSI/GRIS-VCM, University of Tübingen, Tübingen, Germany
  • Dirk Bartz - WSI/GRIS-VCM, University of Tübingen, Tübingen, Germany
  • Wolfgang Straßer - WSI/GRIS-VCM, University of Tübingen, Tübingen, Germany
  • Marcos Tatagiba - Department of Neurosurgery, University Hospital of Tübingen, Tübingen, Germany

GMS CURAC 2006;1:Doc09

The electronic version of this article is the complete one and can be found online at: http://www.egms.de/en/journals/curac/2006-1/curac000009.shtml

Published: September 20, 2006

© 2006 Salah et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en). You are free: to Share – to copy, distribute and transmit the work, provided the original author and source are credited.


Abstract

The art and profession of medical illustration depend not only on the talent and skills of an illustrator, but also on a complementary knowledge of human anatomy. Consequently, anatomical illustrations have traditionally been based on comprehensive dissections and observations during surgery. Recently, illustrative visualization techniques have been utilized to depict the features and shapes of anatomical objects. Thereby, illustrative visualization provides representations that highlight relevant features, while de-emphasizing irrelevant details.

However, anatomical illustrations usually include multiple neighboring structures, which increases the complexity of the illustration problem, as the relationship between multiple organs remains difficult to convey. In this paper, we present an approach for the combined illustrative visualization of multiple anatomical structures based on scanned patient data. To enhance the expressiveness, we incorporate multiple stylized rendering and shading techniques, and propose strategies for tuning object attributes (such as color, opacity, and silhouette), which allows the spatial relationships between objects to be perceived more clearly and draws the focus to targeted structures.

Keywords: anatomical illustration, illustrative visualization, medical visualization


1. Introduction

Anatomical illustrations play a vital role in medical education, where they serve as an efficient tool for conveying shape- and function-related features of organs and organ systems. Unfortunately, illustrations remain dependent not only on the artistic skills of the illustrator, but also on a complementary knowledge of human anatomy. Indeed, most remarkable illustration contributions have been based on comprehensive dissections and observations during surgery. Fortunately, with the introduction of 3D medical scanners (e.g. CT, MRI) and the visible human datasets, it became possible to segment organ systems and to generate geometric and visual representations of them [1]. While conventional visualization approaches such as surface and volume rendering generate accurate visual representations of the scanned data, these representations can be quite complex and do not convey the essential anatomical information very well, in particular when neighboring objects must be differentiated.

On the other hand, illustrative (non-photorealistic) visualization focuses on the major visual features of an object. This is particularly important for medical illustration, for which organ models must be segmented from volumetric data. However, rendering multiple proximate organs still represents a challenge, since they need to be clearly distinguished. In this paper, we present an approach for the combined illustrative visualization of multiple neighboring anatomical structures. In essence, we integrate standard polygonal-based silhouette rendering with a point-based illustrative rendering algorithm, whose point models are directly extracted from volumetric segmented data. To enhance the expressiveness, we also adopt multiple shading techniques such as halftoning, watercolor, and discrete shading. The appropriate choice of rendering and shading styles allows for better differentiation and perception of the spatial relationship between objects. By additionally adjusting object attributes such as color, transparency, and silhouette thickness, the focus can be drawn to targeted structures.

The remainder of this paper is structured as follows: the next subsection provides an overview of related work. In Section 2, the whole method is described. Illustration and shading styles are introduced in Sections 2.1 and 2.2, respectively, and the combination process is outlined in Section 2.3. Results are demonstrated in Section 3. The presented method and results are discussed in Section 4. Finally, Section 5 concludes the paper.

Related work

In particular for medical visualization, illustrative (or non-photorealistic) rendering and scientific visualization share the same goal of efficiently communicating information. Hence, several approaches were introduced that adopted illustrative styles to enhance the illustration power of traditional rendering techniques. Black-and-white illustrations frequently rely on hatching to express shape properties of objects. Some methods estimate hatching directions by computing the surface principal curvatures [2], [3], while others rely on material properties (e.g. the direction of muscle fibers [4]). Other common styles that have a long tradition in illustration are halftoning [5] and stippling [6]. Regarding colored renderings, Lake et al. [7] introduced a technique for emulating cartoon shading. Gooch et al. [8] presented a stylized lighting model for technical illustration. In particular, the model introduced the concept of color-temperature-based shading.

Some methods employ stylized primitives for volume rendering. For example, Lu et al. [9] presented a framework for volume stippling and Dong et al. introduced volumetric hatching [4]. Other methods try to enhance the illustration of surfaces within a volume by means of illustrative styles such as pen-and-ink [10], strokes [11], or silhouettes and hatching [12]. A texture-based volume rendering approach that incorporates silhouette enhancement and tone shading was introduced in [13]. Rheingans and Ebert [14] introduced volume illustration, which combines the physics-based volume rendering model with stylized techniques to enhance important features. Csébfalvi et al. [15] presented an illustrative technique for volumetric data, where only the silhouette lines are rendered in order to give a first 3D impression about the content of the volume. Kindlmann et al. [16] utilized curvature-based transfer functions to incorporate illustrative effects in a volume renderer.

Hauser et al. [17] introduced two-level volume rendering, where different structures within the dataset are rendered using different rendering techniques (e.g. DVR, surface rendering, NPR, etc.). In [18], Hadwiger et al. introduced two-level volume rendering for segmented datasets. Viola et al. [19] introduced importance-driven volume rendering, where features within the volumetric data are classified according to object importance. Bruckner et al. [20] presented context-preserving volume rendering, where the opacity of a sample is modulated by a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity. Bruckner and Gröller [21] introduced an environment for illustrative volume rendering, with support for annotations. In [22], [23] a system for medical volume illustration is presented, in which a hierarchy of transfer functions is employed to differentiate objects, materials, and regions within the illustrated data. Recently, an illustrative rendering algorithm for segmented anatomical data based on surface point sets was introduced in [24]. Fischer et al. [25] used silhouette layers to produce illustrative renderings of hidden isosurfaces. Tietjen et al. [26] presented hybrid renderings, where combined surface and volume renderings are emphasized with silhouette lines. Moreover, they presented a questionnaire-based evaluation study on the quality and usefulness of the generated renderings for medical education and surgery planning. The role of illustrative rendering in focussing and emphasizing features in medical visualization has been discussed in [27].


2. Methods

Our anatomical illustration process, shown in Figure 1 [Fig. 1], integrates multiple styles for the illustrative rendering of isosurfaces as well as segmented data, and adopts multiple shading techniques such as halftoning, watercolor, and discrete shading. The process begins by constructing the input models of the organs of interest. These models can be polygonal meshes or surface point sets based on segmentations. Subsequently, a suitable rendering algorithm is selected for each object. At the same time, one of several available shading styles is selected. Overall, a large number of possible combinations is available. Hence, the careful selection of rendering and shading styles is essential for an effective illustration. Finally, object attributes (opacity, color, silhouette, etc.) are specified and transparency is handled to produce the proper final effect.

2.1 Illustration styles

Illustration of segmented data

Segmented data is illustrated by adapting the point-based algorithm presented in [24], which belongs to the class of silhouette algorithms. However, in this contribution, we enrich the algorithm with more shading styles and provide support for transparency within and between objects. In this section, we limit ourselves to an abstract description of the algorithm. For more details, please refer to [24].

First, an initial set of surface points is extracted from binary segmentations of the targeted organs. To maintain an acceptable continuity and solidity of the generated silhouettes, the surface points should be dense enough and should uniformly cover the object surface, especially in regions of sharp edges. The subset of surface points that defines the silhouette of the object is determined by examining point normals. Points whose normals are almost perpendicular to the viewing direction are labelled as silhouette points. Each silhouette point is rendered with a disk of a chosen silhouette color, oriented in 3D so that the point normal is perpendicular to the disk surface. At silhouette points, the rendered disks appear as thin ellipses, since their normals are almost perpendicular to the view vector; hence they produce the effect of a pen-stroke curve where they overlap. The disk radii are chosen such that the disks of two neighboring silhouette points overlap sufficiently. Rendering disks, scaled with a proper global scaling factor, at non-silhouette points occludes hidden silhouettes. The overall rendering of these disks may still permit a few very small pieces of the hidden silhouette to appear; since the goal is an illustration, this possible effect can be ignored and the resulting rendering is acceptable.
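
The silhouette test itself reduces to thresholding the angle between a point normal and the viewing direction. The following C++ sketch is a minimal illustration of this classification step, not the implementation of [24]; the types, function names, and threshold value are our own assumptions (for perspective projections, a per-point view vector from the eye to the point would be used instead of a global one).

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct SurfacePoint {
    Vec3 position;
    Vec3 normal;   // assumed normalized
};

// A point belongs to the silhouette if its normal is almost
// perpendicular to the (normalized) viewing direction, i.e. the
// dot product is close to zero. 'eps' steers the silhouette width.
bool isSilhouettePoint(const SurfacePoint& p, const Vec3& viewDir,
                       float eps = 0.1f) {
    return std::fabs(dot(p.normal, viewDir)) < eps;
}

// Classify the full point set; silhouette points are later rendered
// as stroke disks, non-silhouette points as occluding disks.
std::vector<bool> classify(const std::vector<SurfacePoint>& pts,
                           const Vec3& viewDir) {
    std::vector<bool> silhouette(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i)
        silhouette[i] = isSilhouettePoint(pts[i], viewDir);
    return silhouette;
}
```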

Illustration of polygonal isosurfaces

Several methods have been proposed to outline the silhouettes of polygonal models [28]. In our approach, we adopt an image-based method (presented in [29]) that highlights silhouette edges, i.e. edges that connect front-facing to back-facing polygons. This two-pass procedure utilizes z-buffering to depict model outlines as follows: in the first pass, front-facing polygons are rendered and the required shading style is applied. In the second pass, the depth test is set to LEQUAL and back-facing polygons are rendered in wireframe mode. Since the z-buffer has already been filled by the front-facing polygons, only the silhouette edges pass the depth test.
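
A minimal OpenGL sketch of this two-pass procedure, under the assumption of an existing rendering context, might look as follows; drawModel() is a placeholder for the application's routine that issues the model geometry.

```cpp
#include <GL/gl.h>

// Image-based two-pass silhouette rendering in the spirit of [29].
// A valid OpenGL context is assumed.
void renderWithSilhouette(void (*drawModel)(), float lineWidth) {
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // Pass 1: shaded front-facing polygons fill the z-buffer.
    glDepthFunc(GL_LESS);
    glCullFace(GL_BACK);                      // keep front faces
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    drawModel();

    // Pass 2: back-facing polygons as thick wireframe. With GL_LEQUAL,
    // only fragments at the front/back depth transitions -- the
    // silhouette edges -- survive the depth test.
    glDepthFunc(GL_LEQUAL);
    glCullFace(GL_FRONT);                     // keep back faces
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glLineWidth(lineWidth);
    glDisable(GL_LIGHTING);
    glColor3f(0.0f, 0.0f, 0.0f);              // black silhouette lines
    drawModel();

    // Restore a sane default state.
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glCullFace(GL_BACK);
    glDepthFunc(GL_LESS);
}
```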

2.2 Shading styles

We implement shading by considering a single light source, located at the upper-left corner behind the viewer. For simplicity, we use only the diffuse reflection term of Phong's illumination model, computing the light intensity at a point $x_i$ with normal $\mathbf{n}_i$ as:

(1) $L_i = \max\left(0,\ \mathbf{n}_i \cdot \frac{LP - x_i}{\lVert LP - x_i \rVert}\right)$

where $LP$ is the light position. In addition to the standard shading, we provide different illustrative shading options (a code sketch of some of these styles follows the list below):

1. Halftone Shading: Gradual lighting is implemented by drawing shading patterns of different sizes. In essence, we shade with disks whose radii are inversely proportional to the illumination intensity of the corresponding area. For point models, we scale each shading disk with a scaling factor $S_{shd}$ defined as:

(2) $S_{shd} = S_{all} \cdot I_{shd} \cdot \left(1 - A_{shd} \cdot L_i\right)^n$

where $L_i$ is the illumination defined in Equation 1, and $S_{all}$ is a global normalization factor. The parameter $I_{shd}$ controls the overall intensity (darkness) of the shading, and $A_{shd}$ controls its stretching. The transition within the shade is controlled by the parameter $n$. In the case of polygonal models, we render scaled shading disks at triangle midpoints, orthogonal to the viewing direction.

2. Cool-to-Warm Shading: In this shading style, we adopt the layer-based approach presented in [30], which mainly utilizes the cool-to-warm shading model introduced by Gooch et al. [8]. In essence, we combine two shading components (scales): a cool-to-warm and a toned-color component. In practice, a shading scale is defined as a vector of n color entries. The cool-to-warm scale represents a transition along color temperature, i.e. between cool and warm colors (e.g. green to red). Analogously, the toned-color component represents a transition between the object color and black (or white). To avoid performing two-pass shading, the two scales are finally linearly combined (blended) into one shading scale. Cool-to-warm shading is shown in Figure 2 [Fig. 2].

3. Noise Modulation: Optionally, a stronger impression of stylization can be achieved by carefully modulating the final rendering with Perlin noise [31]. For this purpose, we generate a turbulent 3D Perlin noise texture and map the model to it. At shading time, the final color of a disk is modulated with the value at the nearest location in the noise texture. Similarly, halftone renderings can be modulated with Perlin noise by perturbing the sizes of the shading disks.

4. Discrete Shading: To give a stronger impression of stylization, discrete shading can be used. Here, we define a shading scale of a limited number of discretely distributed intensities (the case in which only two intensities are used is commonly known as toon shading). The result is modulated with the object's surface color to get the final effect. The same concept can be generalized to produce discrete cool-to-warm shading.
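
To make the shading styles concrete, the following C++ sketch gives one possible per-point formulation of the diffuse term of Equation 1, a blended cool-to-warm scale in the spirit of Gooch et al. [8], and discrete shading. It is our own minimal reading of the descriptions above, not the paper's implementation; the color constants, the blend weight, and all names are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Color mix(const Color& a, const Color& b, float t) {
    return { a.r + t * (b.r - a.r),
             a.g + t * (b.g - a.g),
             a.b + t * (b.b - a.b) };
}

// Diffuse term of Equation 1; n is the point normal and l the
// normalized direction from the point towards the light position LP.
float diffuse(const Vec3& n, const Vec3& l) {
    return std::max(0.0f, dot(n, l));
}

// Cool-to-warm shading: a color-temperature scale and a toned-color
// scale are blended linearly into a single scale, so that only one
// shading pass is needed.
Color coolToWarm(float L, const Color& objectColor) {
    const Color cool  = {0.0f, 0.0f, 0.55f};  // illustrative constants
    const Color warm  = {0.4f, 0.4f, 0.0f};
    const float blend = 0.5f;                 // weight of the toned scale
    Color temperature = mix(cool, warm, L);                // cool-to-warm
    Color toned = mix(Color{0, 0, 0}, objectColor, L);     // black-to-color
    return mix(temperature, toned, blend);                 // combined scale
}

// Discrete shading: quantize the intensity to 'levels' >= 2 steps
// before modulating the object color (two levels give toon shading).
Color discreteShade(float L, const Color& objectColor, int levels) {
    float q = std::floor(L * levels) / float(levels - 1);
    q = std::min(1.0f, q);
    return mix(Color{0, 0, 0}, objectColor, q);
}
```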

2.3 Multiple-object rendering

Anatomical illustrations usually include multiple neighboring structures, which increases the complexity of the illustration problem. In our approach, two important factors control the quality of the final illustration: transparency must be handled consistently, within individual objects as well as among neighboring ones, and a proper combination of illustration styles for the different objects must be selected.

Handling transparency within individual objects

When the point-based rendering algorithm is applied to translucent objects, undesirable effects arise. In particular, hidden parts of the silhouette shine through the transparent occluding disks. To avoid this problem, we use the following procedure: we enable alpha blending and render the occluding transparent disks before the silhouettes. At the same time, the desired shading effect is applied. We then set the depth test to LEQUAL and render the silhouette disks. Since the z-buffer has been filled by the occluding disks, the depth test draws only those parts of the silhouette disks that do not lie behind occluding disks. This mechanism enables an efficient handling of transparency within and between objects. A similar concept can be used for rendering translucent polygonal models.
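
In OpenGL terms, this procedure amounts to a two-pass rendering with alpha blending and a relaxed depth test. The sketch below assumes an existing rendering context; the two callbacks are placeholders for the application's disk-rendering routines.

```cpp
#include <GL/gl.h>

// Two-pass rendering of one translucent point-based object, so that
// hidden parts of the silhouette do not shine through the transparent
// occluding disks.
void renderTranslucentObject(void (*drawOccludingDisks)(),
                             void (*drawSilhouetteDisks)()) {
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Pass 1: the shaded, transparent occluding disks fill the z-buffer.
    glDepthFunc(GL_LESS);
    drawOccludingDisks();

    // Pass 2: with GL_LEQUAL, only those parts of the silhouette disks
    // that do not lie behind an occluding disk are drawn.
    glDepthFunc(GL_LEQUAL);
    drawSilhouetteDisks();

    glDepthFunc(GL_LESS);   // restore the default depth test
}
```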

Handling transparency among objects

When rendering multiple objects, not only must the transparency of the objects be considered; the rendering order is also important. We generalize the previous idea to multiple objects with the following procedure: first, all opaque objects are rendered in any order. Afterwards, translucent objects are rendered in reverse order of the desired transparency, following an in-to-out order for interleaved objects. Figure 3 [Fig. 3] clarifies the concept and shows how an incorrect rendering order produces undesirable results. In this example, the skin, colon, and skeleton models are extracted from a CT dataset. If we aim at a rendering in which the bones and skin are translucent and the colon is opaque, the semitransparent rendering should follow the order: colon, bones, skin (Figure 3b [Fig. 3]). If the skin were rendered before the bones, this would produce the effect shown in Figure 3a [Fig. 3]. Note that even though the skin is translucent, the bones cannot be seen through it. However, the colon can still be seen through the bones, since it is rendered first.
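
This ordering rule can be expressed as a simple sort, sketched below in C++; the SceneObject fields and the nesting-level tie-break are our own illustrative encoding of the in-to-out rule, not code from the paper.

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct SceneObject {
    std::string name;
    float opacity;       // 1.0 = fully opaque
    int   nestingLevel;  // 0 = innermost, higher = closer to the skin
};

// Rendering order: opaque objects first (any order), then translucent
// objects with the least transparent first; ties between interleaved
// objects are broken in-to-out.
void sortForRendering(std::vector<SceneObject>& objs) {
    std::stable_sort(objs.begin(), objs.end(),
        [](const SceneObject& a, const SceneObject& b) {
            bool aOpaque = a.opacity >= 1.0f;
            bool bOpaque = b.opacity >= 1.0f;
            if (aOpaque != bOpaque) return aOpaque;        // opaque first
            if (a.opacity != b.opacity) return a.opacity > b.opacity;
            return a.nestingLevel < b.nestingLevel;        // in-to-out
        });
}
```

For the example of Figure 3, an opaque colon (level 0), translucent bones (level 1), and a very translucent skin (level 2) sort to the order colon, bones, skin, as required.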

Style selection and attribute tuning

Overall, our approach provides a wide range of possible combinations of rendering and shading styles. To facilitate the proper selection among such combinations, we classify the objects in a combined illustration into focus, context, and container objects. Focus objects are those of highest interest. Context objects are other surrounding or neighboring objects in the scene. A container object encloses a subset of focus and context objects (in our current work, we define only one container object, which corresponds to the skin). For an expressive combined illustration, we define for each object a rendering and a shading style, and also tune its color, degree of transparency, and silhouette. The following strategy has been found useful (it is condensed into a code sketch after the list):

Rendering style: If a segmentation is provided, the point-based algorithm can be applied, thus avoiding a triangulation step. Moreover, the point-based algorithm is preferable for objects of moderate and large sizes, and for focus objects that do not exhibit extreme curvature variations, like most internal organs (e.g. kidney, heart, colon). Polygonal-based rendering is better suited for container objects (since it produces smoother surfaces), for objects with sharp edges, and for relatively thin structures.

Shading style: Standard (or discrete) shading can be used for container objects. On the other hand, context objects, cool-to-warm shaded, appear quite clear and distinctive. It is not advisable to apply the same style to both container and context objects, as they become hard to distinguish. Halftoning is a good choice for shading focus objects, especially large, innermost organs.

Degree of transparency: The container object must be very translucent. For the other objects, we apply the degree-of-interest (DOI) concept [32]. In essence, objects that lie within the same nesting level from the viewer are given transparencies inversely proportional to their degree of interest. Focus objects are rendered almost (or totally) opaque.

Color: Rendering the container object with light, skin-like colors produces acceptable visual effects, even if the container object is not skin. Focus objects are emphasized with fully saturated, preferably warm, colors (e.g. red or orange). In general, context objects are given cooler colors than focus objects.

Silhouette: For illustration purposes, rendering with silhouettes yields better expressiveness and shape perception. Hence, we always render focus and context objects with outlined silhouettes. Moreover, the thickness of the silhouette reflects the degree of object importance; therefore, we render focus objects with thicker silhouettes. Containers might be rendered without silhouettes (see Figure 7 [Fig. 7]).
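
As a summary, this strategy can be encoded as per-role default attributes. The following C++ sketch is purely illustrative; the enumerators and the concrete numbers are our own plausible defaults, to be tuned per dataset and illustration goal.

```cpp
enum class Role        { Focus, Context, Container };
enum class RenderStyle { PointBased, Polygonal };
enum class ShadeStyle  { Halftone, CoolToWarm, Standard };

struct ObjectStyle {
    RenderStyle render;
    ShadeStyle  shading;
    float       opacity;          // 1.0 = opaque
    float       silhouetteWidth;  // 0.0 = no silhouette
};

// Illustrative defaults condensing the strategy above.
ObjectStyle defaultStyle(Role role) {
    switch (role) {
        case Role::Focus:      // opaque, halftoned, thick silhouette
            return { RenderStyle::PointBased, ShadeStyle::Halftone,
                     1.0f, 3.0f };
        case Role::Context:    // cool-to-warm shaded, thinner silhouette
            return { RenderStyle::Polygonal, ShadeStyle::CoolToWarm,
                     0.6f, 1.5f };
        case Role::Container:  // very translucent, no silhouette
        default:
            return { RenderStyle::Polygonal, ShadeStyle::Standard,
                     0.25f, 0.0f };
    }
}
```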


3. Results

The overall illustration system integrates the algorithms for the illustrative rendering of point and polygonal models together with the different shading styles. The system provides flexible viewing capabilities and allows for the adjustment of parameters for tuning the quality of the final rendering. We have tested our approach with a variety of models extracted from segmented patient datasets (CT and MRI scans).

In Figure 4 [Fig. 4], we show how different rendering and shading styles are combined to produce an expressive rendering of a thorax. Three object models were extracted from a CT dataset: the lungs model (a surface point model) was extracted from a segmentation, while the bone (ribs and parts of the shoulders and spine) and skin models were constructed using the Marching Cubes algorithm. The lungs model (focus object) is rendered with the point-based illustrative rendering algorithm, with halftoning as the shading style. To draw more attention to the lungs, they are rendered opaque with a warm color. The bones (context) are rendered with the polygonal-based algorithm and shaded with a cool-to-warm style. The skin (container) is rendered based on a polygonal representation and shaded using Gouraud shading.

Similarly, Figure 5 [Fig. 5] shows renderings of an abdominal CT dataset. The colon is rendered based on a surface point set and shaded with halftoning. In these renderings, the colon is the focus object, the bones are the context, and the skin is the container. In Figure 6 [Fig. 6], a CT-based illustration of the skull is demonstrated, where the paranasal sinuses (focus) are rendered in red. To show their spatial relationship with the surroundings, a translucent cut of the skull (context) is rendered with a cool-to-warm shader. Note that the paranasal cavity is located within the skull bones. Another MRI-based illustration from the head region is demonstrated in Figure 7 [Fig. 7], where the eyeballs, eye muscles, and the cerebral ventricular system are shown. In this figure, all selected objects within the head are focus objects. In Figures 3 [Fig. 3], 4 [Fig. 4], 5 [Fig. 5], and 6 [Fig. 6], the stronger reddish color on some parts of the lungs, intestine, and paranasal sinuses appears at locations where there is no skin between the viewpoint and these parts. These cut-outs result from the scanning process in the abdomen and thorax datasets, and are intentional in the case of the paranasal sinuses.


4. Discussion

As demonstrated in the previous section, the expressiveness of an anatomical illustration is highly dependent on a good choice of rendering and shading styles, as well as on the careful tuning of the attributes of the individual structures that constitute the illustration. For example, choosing different shading styles for cascading semitransparent container and context objects facilitates the differentiation between them. In addition, a specific combination of colors draws the focus to certain objects. In particular for medical illustration, colors may to some extent support the perception of spatial information. This is illustrated by the renderings of the abdomen (Figure 5 [Fig. 5]) and the skull (Figure 6 [Fig. 6]), where the inner side of the skin is modulated with a rosy color to further enhance the inside-outside impression.

As depicted in the different illustrations, we always render focus and context objects with silhouettes to simplify differentiating them from the surroundings. Indeed, silhouettes serve as supportive cues for figure-to-ground segregation [33]. This also conforms to the concluding remarks of the evaluation study presented in [26]. Another idea is to render colored focus objects within a grey-scale context, or to apply the semantic depth of field (SDOF) technique, which relies on rendering important objects sharply while the surroundings are blurred [32].

Finally, it should be noted that our approach relies on segmented patient data. However, in contrast to other medical applications (diagnosis, planning, etc.), our intended application (educational illustration) does not require a highly accurate segmentation. In some cases, the segmentation of an organ might even be blurred to generate a higher level of abstraction. Overall, and excluding the segmentation time, combined renderings are generated at near-interactive rates.


5. Conclusion

In this paper, we have presented an approach for the combined illustrative visualization of multiple anatomical structures. The major goal of our work is to obtain a clearly perceivable display of the spatial relationships of the various neighboring organs in an illustration. For this purpose, we combined different illustrative rendering techniques with stylized shading methods. Furthermore, object attributes (transparency, color, and silhouettes) can be tuned to draw the focus of a user to or away from specific objects. In particular, we showed how conventional stylized techniques can be integrated to produce high-quality, expressive illustrations of proximate objects when appropriate combination strategies are applied.

Our future work will focus on investigating the effectiveness of other illustrative styles and on the illustrative visualization of complex organ system relationships. Although related work [27] already indicated that illustrative techniques can successfully shift the user's focus to specific organs, more task-oriented user studies need to be conducted. These will help to determine the suitable styles and combination strategies for different applications. In addition, more work should be directed toward the automation of the transparency handling.


References

1.
Höhne K, Pflesser B, Pommert A, Priesmeyer K, Riemer M, Schiemann T, Schubert R, Tiede U, Frederking H, Gehrmann S, Noster S, Schumacher U. VOXEL-MAN 3D Navigator: Inner Organs. Regional, Systemic and Radiological Anatomy [CD-ROM]. Springer-Verlag Electronic Media. Heidelberg, Germany; 2000.
2.
Girshick A, Interrante V, Haker S, Lemoine T. Line Direction Matters: An Argument for the Use of Principal Directions in 3D Line Drawings. In: Proc. of the International Symposium on Non-Photorealistic Animation and Rendering. 2000. p. 43-52.
3.
Hertzmann A, Zorin D. Illustrating Smooth Surfaces. In: Proc. of ACM SIGGRAPH. 2000. p. 433-8.
4.
Dong F, Clapworthy G, Lin H, Krokos M. Nonphotorealistic Rendering of Medical Volume Data. IEEE Computer Graphics and Applications. 2003;23(4):44-52.
5.
Freudenberg B, Masuch M, Strothotte T. Real-Time Halftoning: A Primitive for Non-Photorealistic Shading. In: Proc. of Eurographics Workshop on Rendering. 2002. p. 227-31.
6.
Deussen O, Hiller S, Van Overveld C, Strothotte T. Floating Points: A Method for Computing Stipple Drawings. Computer Graphics Forum. 2000;19(3):40-51.
7.
Lake A, Marshall C, Harris M, Blackstein M. Stylized rendering techniques for scalable real-time 3D animation. In: Proc. of the International Symposium on Non-Photorealistic Animation and Rendering. 2000. p. 13-20.
8.
Gooch A, Gooch B, Shirley P, Cohen E. A Non-Photorealistic Lighting Model For Automatic Technical Illustration. In: Proc. of ACM SIGGRAPH. 1998. p. 447-52.
9.
Lu A, Morris C, Ebert D. Non-Photorealistic Volume Rendering Using Stippling Techniques. In: Proc. of IEEE Visualization. 2002. p. 211-8.
10.
Treavett S, Chen M. Pen-and-Ink Rendering in Volume Visualization. In: Proc. of IEEE Visualization. 2000. p. 203-10.
11.
Interrante V. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution. In: Proc. of ACM SIGGRAPH. 1997. p. 109-16.
12.
Yuan X, Chen B. Illustrating Surfaces in Volume. In: Proc. of the Joint Eurographics/IEEE Symposium on Visualization. 2004. p. 9-16.
13.
Lum E, Ma KL. Hardware-Accelerated Parallel Non-Photorealistic Volume Rendering. In: Proc. of the International Symposium on Non-Photorealistic Animation and Rendering. 2002. p. 67-74.
14.
Rheingans P, Ebert D. Volume Illustration: Nonphotorealistic Rendering of Volume Models. IEEE Transactions on Visualization and Computer Graphics. 2001;7(3):253-64.
15.
Csébfalvi B, Mroz L, Hauser H, König A, Gröller M. Fast Visualization of Object Contours by Non-Photorealistic Volume Rendering. Computer Graphics Forum. 2001;20(3):452-60.
16.
Kindlmann G, Whitaker R, Tasdizen T, Möller T. Curvature-Based Transfer Functions for Direct Volume Rendering: Methods and Applications. In: Proc. of IEEE Visualization. 2003. p. 513-20.
17.
Hauser H, Mroz L, Bischi G, Gröller ME. Two-Level Volume Rendering. IEEE Transactions on Visualization and Computer Graphics. 2001;7(3):242-52.
18.
Hadwiger M, Berger C, Hauser H. High-Quality Two-Level Volume Rendering of Segmented Data Sets on Consumer Graphics Hardware. In: Proc. of IEEE Visualization. 2003. p. 301-8.
19.
Viola I, Kanitsar A, Gröller ME. Importance-Driven Volume Rendering. In: Proc. of IEEE Visualization. 2004. p. 139-45.
20.
Bruckner S, Grimm S, Kanitsar A, Gröller ME. Illustrative Context-Preserving Volume Rendering. In: Proc. of the Joint Eurographics/IEEE Symposium on Visualization. 2005. p. 69-76.
21.
Bruckner S, Gröller ME. VolumeShop: An Interactive System for Direct Volume Illustration. In: Proc. of IEEE Visualization. 2005. p. 671-8.
22.
Svakhine N, Ebert D, Stredney D. Illustration Motifs for Effective Medical Volume Illustration. IEEE Computer Graphics and Applications. 2005;25(3):31-9.
23.
Sousa M, Ebert D, Stredney D, Svakhine N. Illustrative Visualization for Medical Training. In: Proc. of Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging. 2005. p. 201-9.
24.
Salah Z, Bartz D, Straßer W. Illustrative Rendering of Segmented Anatomical Data. In: Proc. of Simulation and Visualization. 2005. p. 175-84.
25.
Fischer J, Bartz D, Straßer W. Illustrative Display of Hidden Iso-Surface Structures. In: Proc. of IEEE Visualization. 2005. p. 663-70.
26.
Tietjen C, Isenberg T, Preim B. Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning. In: Proc. of the Joint Eurographics/IEEE Symposium on Visualization. 2005. p. 303-10.
27.
Preim B, Tietjen C, Dörge C. NPR, Focussing and Emphasis in Medical Visualizations. In: Proc. of Simulation and Visualization. 2005. p. 139-52.
28.
Isenberg T, Freudenberg B, Halper N, Schlechtweg S, Strothotte T. A Developer's Guide to Silhouette Algorithms for Polygonal Models. IEEE Computer Graphics and Applications. 2003;23(4):28-37.
29.
Raskar R, Cohen M. Image Precision Silhouette Edges. In: Proc. of ACM Symposium on Interactive 3D Graphics. 1999. p. 135-40.
30.
Lum E, Ma KL. Non-Photorealistic Rendering Using Watercolor Inspired Textures and Illumination. In: Proc. of Pacific Conference on Computer Graphics and Applications. 2001. p. 322-30.
31.
Perlin K. An Image Synthesizer. In: Proc. of ACM SIGGRAPH. 1985. p. 287-96.
32.
Hauser H. Generalizing Focus+Context Visualization. In: Proc. of Dagstuhl 2003, Seminar on Scientific Visualization. VRVis Technical Report TR-VRVis-2003-037. 2003.
33.
Halper N, Mellin M, Herrmann C, Linneweber V, Strothotte T. Towards an Understanding of the Psychology of Non-Photorealistic Rendering. In: Proc. of Workshop Computational Visualistics, Media Informatics and Virtual Communities. 2003. p. 67-78.