What are Multimodal Projects?

The opening chapter of the Writer/Designer book introduced me to a new means of analyzing the content regularly communicated to me. From simple texts to political ads to even Michigan Daily articles, I have consciously received messages and knowledge from texts while my subconscious has received a plethora of signals that manipulate the way in which I receive those very messages. I think it’s safe to say every form of communication in the year 2016 is multimodal — that is, it uses more than one of the New London Group’s categories of modes: linguistic, visual, spatial, aural, and gestural.

Although the mental information we digest from an ad we watch or an article we read may seem exclusively based on the textual message, we regularly fall victim to what I describe as context. Take, for example, a recent political ad put forth by the Clinton campaign.

The quotes alone from Trump carry negative connotations regardless of political views. The implied message is clear: do you really want a man who speaks so “unprofessionally” to be the next leader of America’s future? But the campaign takes the textual evidence to the next level to play on the emotions of its audience. Visually, the video juxtaposes a future under Trump with a future under Clinton through its use of darkness and shadowing. In the Trump portion, the visual mode of communication subconsciously predisposes the audience to anxiety and fear, which rapidly dissipate upon the entry of Clinton’s portion, with its full light and comforting aesthetic. The aural messaging also creates a clear divide, as the somber, gloomy music playing behind Trump’s voice only deepens the fear provoked by his quotes.

The multimodal display is crucial. Clinton could merely tweet his quotes or discuss them in an interview or debate, but her campaign’s choice to release an ad maximizes its ability to exploit Trump’s sound bites and shortcomings through a perfectly blended multimodal composition. Yes, we, The People, know the quotes are bad for any American to say (let alone a presidential candidate), but they need to be taken beyond face value. The other modes help us delve deeper into the severity of his quotes. We wouldn’t normally consider vulnerable children listening to his message, or how it could shape our families’ futures, but the message hits home because of its multimodal aspects.

I’ll be honest. I used to think pictures and videos were a blogger’s way of being lazy. I thought pictures could be nice, but that the same work could be done better in words. I was wrong. What better way is there to convey a message than to provide an audience with as much material as possible?

Take, for example, Buzzfeed’s article 12 Millennials Who Actually Give a Shit.


The stories and actions in the text are strong, but as a writer I can see how the images push the message so much further. Every reader (and writer!) has personal bias. We can’t help but grow up in a world where everything is relative to our own experience. It’s not because we’re bad humans; it’s because we are human. By blending in the visual mode, Buzzfeed spares the reader unnecessary effort. The simple pictures of all 12 millennials make the stories more compelling — more human. They transform the textual intent of the piece from simple anecdotes into inspiring, tangible people. The pictures practically humanize the literal words; without them, we are constrained by our own bias in a way that stalls the knowledge gained through reading.

The multimodal approach to communication can also cure a lot of uncertainty. We have all experienced situations where we wished the sarcasm font were real, or regretted a heat-of-the-moment message. Maybe we sent a perfectly fine Facebook message or text that unpredictably spiraled out of control. When it comes to text alone, there are two voices. The message is composed in the voice of the writer, yet it is read in the voice of the reader. We assume that a joke or grave statement is clear, but in reality even a basic conversation can be analyzed like an AP Lit essay. Multimodal communication leaves less gray area and ambiguity. Take, for example, the following text:

“I didn’t like it”

What is the intent? What is the tone? Is it rude? Aggressive? You obviously don’t know the context of the situation, but who wants to receive the text “I didn’t like it”?

A multimodal approach, however, might incorporate a visual and/or gestural aid — via emoji! Depending on the emoji chosen, those four words can land anywhere from sarcastic to playful to the beginning of the end. Why leave the answer up for interpretation when you can simply add more to your text? To me, the five modes blend to create clearer messages that get closer to hitting the writer’s target.

What if we take away the words? Can a message still be conveyed with no text?


The moving image (GIF) above has no words, yet in context it can perfectly convey a message. The expression and gestures of the monkey combine with the lighting and mood to show disapproval or sadness. The visual says more than “I’m sad.” It gives more of the sender’s true feelings to the experience.

Ultimately, an effective message is a clear message. With improved technology, writers are able to do more of the work for readers, clearing away ambiguity and unintended readings. Combining the modes only adds more depth and flavor to the words. Fully multimodal pieces are all relatively new, and I think that’s because the means of conveying messages are changing. The possibilities for presentation seem limitless compared to the antiquated black-and-white words on a page.
