We human organisms regard each other’s bodies through their visible surfaces. The interior, when considered at all, is typically a matter of medical concern for oneself; we rarely envision that of others. Within the digital realm we find the expected representations of the body as a surface manifold, extending even to the limited degree to which interior organs are made visible. Radiological tools have dramatically improved our capacity for non-invasive representation, but their use is often confined to the domain of personal health, for reasons that include exposure both to potentially damaging radiation and of private medical information. Yet through their abstracted volumetric output we can also uncover possibilities for representing our bodily forms in a greater totality, eschewing the accustomed surface boundaries that give rise to so many biases and conflicts. Here, I present techniques that re-appropriate artificial-intelligence-based frame-blending tools in a novel workflow to produce synthetic radiology: imagery made not by a generative algorithm, but as combined volumes of chimeric human forms. These volumes are then used to explore the metaphor of imperceptible identities through expanded techniques for animating visual datasets that are traditionally static in diagnostic use, excepting changes of camera orientation or layer visibility.
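The idea of combining volumetric scans into a chimeric whole can be sketched minimally as a cross-fade between two co-registered volumes. This is only an illustrative stand-in, not the workflow described above: the names `blend_volumes`, the array shapes, and the linear interpolation are all assumptions, whereas the actual technique re-appropriates learned frame-blending tools rather than simple arithmetic.

```python
import numpy as np

def blend_volumes(vol_a, vol_b, alpha):
    """Linearly blend two co-registered volumes.

    Hypothetical stand-in for the AI frame-blending step: a weighted
    sum of voxel intensities, where alpha=0 yields vol_a and alpha=1
    yields vol_b. Assumes both volumes share one shape and intensity
    scale.
    """
    assert vol_a.shape == vol_b.shape, "volumes must be co-registered"
    return (1.0 - alpha) * vol_a + alpha * vol_b

# Toy 3D arrays standing in for two scan volumes (depth, height, width).
a = np.zeros((4, 8, 8), dtype=np.float32)
b = np.ones((4, 8, 8), dtype=np.float32)

chimera = blend_volumes(a, b, 0.25)
print(float(chimera.mean()))  # → 0.25
```

Animating such a dataset, in the spirit of the expanded techniques mentioned above, could then amount to rendering `blend_volumes(a, b, t)` as `t` sweeps from 0 to 1 across frames.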