Medical Simulation/Taxonomy

Motivation

To our knowledge, there is no taxonomy for VR-based medical simulators. However, a taxonomy would offer many benefits:

  • Provides standardized terminology and classification;
  • Eases communication between engineers, medical experts, educators, and other involved disciplines;
  • Can be used after task analysis to prioritize components;
  • Facilitates analysis and validation.

In summary, a taxonomy supports both communication and development.

The taxonomy on this page is based on [1]. The intention is to make the taxonomy more accessible and open to community-based extensions and changes.

Related Work

In medical simulation there is an overwhelming number of papers describing simulators and algorithms. To create a taxonomy, we identified and analyzed numerous position papers [2][3] and surveys of existing simulators [4][5], to name a few.

Satava postulated five generations of simulators: geometric anatomy, physical dynamics modeling, physiologic characteristics, microscopic anatomy, and biochemical systems [2]. Furthermore, he defined the following requirements for realism in medical simulators: visual fidelity, interactivity between objects, object physical properties, object physiologic properties, and sensory input.

Liu et al. [3] distinguish between technical components (deformable models, collision detection, visual and haptic displays, and tissue modeling and characterization) and cognitive components (performance and training).

Delingette and Ayache [4] divided simulator components into input devices, the surgery simulator itself (collision detection and processing, geometric modeling, physical modeling, haptic rendering, and visual rendering), and output devices.

In a recent overview, John [5] defined three areas: input data, processor, and interaction. Interaction is further subdivided into haptics, display technologies, other hardware components, and algorithms and software.

Taxonomy

Merging the definitions and reports from the related work, we propose a taxonomy (see the outline below) with three main classes: Datasets, Hardware, and Software. In the following sections we provide a brief definition of each class and give some examples that will be discussed in more detail in later chapters of this book; a small code sketch of the hierarchy follows the outline.


  1. Datasets
    • Synthetic
      ◦ Computed
      ◦ Modeled
    • Subject-specific
      ◦ In Vivo
      ◦ Ex Vivo
  2. Hardware
    • Interaction devices
      ◦ Sensor-based
      ◦ Props
    • Processing Unit
      ◦ Stationary
      ◦ Mobile
    • Output
      ◦ Visual
      ◦ Haptic
      ◦ Acoustic
  3. Software
    • Model
      ◦ Technical
      ◦ Content
    • Interaction
      ◦ Tasks
      ◦ Metaphors
      ◦ Technical
    • Simulation
      ◦ Static
      ◦ Dynamic
      ◦ Physiological
    • Rendering
      ◦ Visual
      ◦ Haptic
      ◦ Acoustic
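
To make the hierarchy machine-readable, the outline can be captured in a simple data structure. The following is a minimal Python sketch; the nested-dictionary layout and the leaves helper are our own illustrative choices, not part of [1]:

    # The taxonomy outline above as a nested dictionary:
    # top-level class -> subclass -> list of leaves.
    TAXONOMY = {
        "Datasets": {
            "Synthetic": ["Computed", "Modeled"],
            "Subject-specific": ["In Vivo", "Ex Vivo"],
        },
        "Hardware": {
            "Interaction devices": ["Sensor-based", "Props"],
            "Processing Unit": ["Stationary", "Mobile"],
            "Output": ["Visual", "Haptic", "Acoustic"],
        },
        "Software": {
            "Model": ["Technical", "Content"],
            "Interaction": ["Tasks", "Metaphors", "Technical"],
            "Simulation": ["Static", "Dynamic", "Physiological"],
            "Rendering": ["Visual", "Haptic", "Acoustic"],
        },
    }

    def leaves(taxonomy):
        """Yield every class/subclass/leaf path, e.g. for tagging simulators."""
        for top, subclasses in taxonomy.items():
            for sub, items in subclasses.items():
                for leaf in items:
                    yield f"{top} / {sub} / {leaf}"

    # Example: list all classification paths a simulator could be tagged with.
    for path in leaves(TAXONOMY):
        print(path)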

Datasets

Synthetic datasets can be Computed (e.g., based on statistical models or heuristics) or Modeled (e.g., produced by digital artists with 3D modeling tools, or sourced from CAD designs of instruments). Usually, these are well-meshed surface geometries with highly detailed textures. Another approach is Subject-specific datasets. Several medical imaging modalities (e.g., sonography, MRI, CT) allow the reconstruction of volume data that can either be used directly or segmented for further processing, as sketched below. Furthermore, physiological parameters, tissue properties, and other characteristics can be measured either In Vivo or Ex Vivo.
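
To illustrate the two uses of volume data, here is a minimal sketch, assuming a CT volume is already available as a NumPy array in Hounsfield units; a random array stands in for real data (which would typically be loaded from medical image files), and the ~300 HU bone threshold is a rough, illustrative value:

    import numpy as np

    # Stand-in for a reconstructed CT volume in Hounsfield units (HU);
    # a real simulator would load this from medical image data.
    volume = np.random.randint(-1000, 2000, size=(64, 64, 64)).astype(np.int16)

    # Direct use: hand the raw intensities to a volume renderer.
    raw_for_rendering = volume

    # Segmentation for further processing: a crude bone mask via thresholding.
    bone_mask = volume > 300  # ~300 HU is a rough cutoff for bone
    print("bone voxels:", int(bone_mask.sum()))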

Hardware

Interaction devices can be either Sensor-based or Props. Sensor-based devices can be commercial off-the-shelf products, self-constructed prototypes, or hybrids; examples range from game console controllers to haptic devices and optical tracking systems. Props replicate body parts or instruments and are either augmented, tracked, or simply passive parts of the overall setup. The Processing Unit relates to the kind of computing system used for the simulator. This can be Stationary (e.g., single- or multi-core systems, clusters, or servers) or Mobile (e.g., handheld devices, or streaming clients). Furthermore, GPUs can be used for parallelization. Finally, the Output can be realized on several modalities, with Visual, Haptic, and Acoustic being the three most common. The visual component can be further divided into display types: HMD, screen, or projection screen, each with or without stereoscopic rendering. Likewise, haptics is divided into tactile and kinesthetic feedback.
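
As a sketch of how the interaction-device branch could map onto software, the following hypothetical interface abstracts a sensor-based device and a tracked prop behind one polling API; all class and method names are illustrative, not taken from any particular SDK:

    from abc import ABC, abstractmethod

    class InteractionDevice(ABC):
        """Hypothetical common interface for sensor-based devices and props."""
        @abstractmethod
        def poll(self):
            """Return the current pose as (position xyz, orientation quaternion)."""

    class HapticStylus(InteractionDevice):
        """Sensor-based device, e.g. the handle of a haptic display."""
        def poll(self):
            return (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)  # stub pose

    class TrackedScalpel(InteractionDevice):
        """Prop: a passive instrument replica followed by optical tracking."""
        def poll(self):
            return (0.1, 0.2, 0.0), (1.0, 0.0, 0.0, 0.0)  # stub pose

    # The simulator loop can treat both device categories uniformly.
    for device in (HapticStylus(), TrackedScalpel()):
        print(type(device).__name__, device.poll())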

Software

The Model is the link between the datasets and the algorithms. It can be regarded from two points of view: Technical (e.g., data structures, LODs, mappings [6]) and Content (e.g., patient, instruments, and environment [7]). One crucial element for the acceptance of a medical simulator is the Interaction, for which numerous solutions exist in HCI and 3DUI research. Here, we distinguish between Tasks (navigation, selection, manipulation [8], session management, assessment, etc.), Metaphors (direct “natural” interaction, gestures, etc. [9]), and Technical aspects (e.g., GUI elements, OSDs, or annotations). Simulation is divided into different levels: Static (e.g., fixed structural anatomy, environment), Dynamic (e.g., physics-based with collision detection and handling [10], rigid body dynamics, or continuum mechanics applied to soft tissue), and Physiological (e.g., functional anatomy [11], or the Physiome Project). The Rendering is tightly coupled to the results of the simulation and can be divided into Visual, Haptic, and Acoustic algorithms.
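
As one concrete instance of the Dynamic level, the following minimal sketch advances a mass-spring system by one explicit Euler step. It is a deliberately simplified stand-in: production simulators use richer meshes, implicit integration, or continuum-mechanics formulations such as FEM.

    import numpy as np

    def mass_spring_step(pos, vel, springs, rest_len, k, mass, damping, dt):
        """One explicit Euler step for point masses connected by linear springs."""
        forces = np.zeros_like(pos)
        for (i, j), l0 in zip(springs, rest_len):
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            if length > 1e-9:
                f = k * (length - l0) * (d / length)  # Hooke's law along the spring
                forces[i] += f
                forces[j] -= f
        forces -= damping * vel  # simple velocity damping
        vel = vel + dt * forces / mass
        pos = pos + dt * vel
        return pos, vel

    # Example: one spring stretched past its rest length relaxes back.
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    for _ in range(200):
        pos, vel = mass_spring_step(pos, vel, springs=np.array([[0, 1]]),
                                    rest_len=np.array([1.0]), k=50.0,
                                    mass=1.0, damping=0.5, dt=0.01)
    print(np.linalg.norm(pos[1] - pos[0]))  # approaches rest length 1.0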

References

  1. Ullrich S, Knott T, Kuhlen T. Dissecting in Silico: Towards a Taxonomy for Medical Simulators. Studies in Health Technology and Informatics. 2011 Feb;163:677–679. IOS Press; Newport Beach, CA, USA.
  2. Satava RM. Medical virtual reality. The current status of the future. Studies in Health Technology and Informatics. 1996;29:100–106.
  3. Liu A, Tendick F, Cleary K, Kaufmann C. A survey of surgical simulation: applications, technology, and education. Presence: Teleoper Virtual Environ. 2003;12(6):599–614.
  4. Delingette H, Ayache N. Soft Tissue Modeling for Surgery Simulation. In: Computational Models for the Human Body. Elsevier; 2004. p. 453–550.
  5. John NW. Design and implementation of medical training simulators. Virtual Reality. 2008;12(4):269–279.
  6. Rosse C, Mejino JLV. A reference ontology for biomedical informatics: the Foundational Model of Anatomy. J Biomed Inform. 2003 Dec;36(6):478–500.
  7. Harders M. Surgical Scene Generation for Virtual Reality-Based Training in Medicine. Springer-Verlag; 2008.
  8. Heinrichs WL, Srivastava S, Montgomery K, Dev P. The Fundamental Manipulations of Surgery: A Structured Vocabulary for Designing Surgical Curricula and Simulators. The Journal of the American Association of Gynecologic Laparoscopists. 2004;11(4):450–456.
  9. Bowman DA, Kruijff E, LaViola JJ, Poupyrev I. 3D User Interfaces: Theory and Practice. Addison-Wesley Professional; 2004.
  10. Teschner M, Kimmerle S, Heidelberger B, Zachmann G, Raghupathi L, Fuhrmann A, Cani MP, Faure F, Magnenat-Thalmann N, Strasser W, Volino P. Collision Detection for Deformable Objects. Computer Graphics Forum. 2005 March;24(1):61–81.
  11. Ullrich S, Valvoda JT, Prescher A, Kuhlen T. Comprehensive architecture for simulation of the human body based on functional anatomy. In: Proceedings Bildverarbeitung für die Medizin 2007. Springer-Verlag; 2007. p. 328–332.