The concept “multiformity” resonates with the concept “multimodality,” not only in the way it looks and sounds, but also in the broad social and cultural contexts from which it develops and to which it is addressed. Both concepts allow us to explore forms of communication that mix what we apprehend as distinct channels of transmitting information: written and spoken words, still and moving visual images, sound, gestures, and so on. Both concepts ponder the great influence new technologies have on human interaction, and their implications for the arts and education, especially in the digital era. It is therefore important to clarify the main difference between the “multiformity” that I offer and Gunther Kress’s “multimodality” (2010).
The difference between the two concepts, or approaches, lies in the weight each ascribes to social versus biological factors in human communication, and consequently in their use of the concepts “mode” versus “form.” “Mode,” according to Kress, is a semiotic resource for making meaning; its potential lies in the affordances of the material it is made of (i.e. sound in speech or graphic marks in writing), while its realization is the practical use of these material potentials by members of a given society. Further, modes differ in their underlying “semiotic logic”: the organizing principle under which they are conveyed. Kress claims, for example, that the logic of words is to follow each other in temporal sequence, while the logic of images is to display their elements simultaneously in space (2010: 81–82).
Noa Yaari, Multimodality, 2014.
Kress’s approach to multimodality is social: it sees the sign-makers’ use of signs, rather than grammar (which can be thought of as an abstract and fixed system of rules originating in the brain), as the mechanism that generates potential semantic meanings. According to the social approach, the process that refashions linguistic resources and practices is humans’ motivation to frame meanings in social context; this enables linguistic interactions to reflect the present, with its instability: its social and technological transformations (Kress 1996, 2010).
Noa Yaari, Multiformity, 2018.
“Form,” on the other hand, relates to the most fundamental level that one can explore in relation to verbal, visual, or any other type of communication, while the level itself is defined according to the question at hand. For example, while exploring written language, we may focus on forms such as a graphic element within a letter, a letter, a sentence, the shape of a paragraph, etc. Forms in spoken language can be a phoneme, a word, a whole speech, and even a sound in the background of the speech. Forms in visual (non-verbal) communication can be a brushstroke, an artwork, a garden or a building, but also a graphic element within a letter, or a speaker’s gesture during their speech.
Thus, the use of the concepts “form” and “multiform” attempts to open a theoretical framework to immediate and direct semiotic phenomena of any type and scale, so that they can shape and convey meanings synergistically, before any established logic interferes between them. It implies that “forms” function at high velocity; they do not have time to distinguish between temporal sequence, on the one hand, and simultaneous display in space, on the other, nor between “time” and “space.” They are fast because they function on both the social and the biological levels. The body’s reaction to stimuli of any kind, and its processing of them into meanings, given probable social conditioning and contexts, cannot rule out survival mechanisms.
Kress, Gunther. Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge, 2010.
Kress, Gunther, and Theo van Leeuwen. Reading Images: The Grammar of Visual Design. New York: Routledge, 1996.