When I first heard the word “multimodal,” I thought it was going to refer to a very specific style of writing. I was expecting guidelines. I was surprised to find out that multimodal projects exist everywhere. I see them when I am browsing Facebook, when I am researching for a paper, when an ad comes on Pandora – literally everywhere. A multimodal project is what it sounds like: a piece of work that uses multiple modes. From the reading, I learned that there are five specific modes of communication in multimodal projects: linguistic, visual, aural, gestural, and spatial. Today I will be exploring a few of my favorite multimodal projects – some new and some old. I am excited to reverse engineer each project, as each has resonated differently with me.
The first multimodal project I want to draw your attention to is a piece the LA Times published last month. I could spend the entire blog post writing about this piece – there is so much to say and reverse engineer – but I will refrain from getting into too much detail here for the sake of variety. You can find the entire story here. In short, the piece is a six-part mystery about a PTA mom who was framed. The online article is interactive and includes actual 911 calls, videos, pictures, and testimonials from the case.
What I thoroughly enjoyed about this piece is that not only was it super interesting to read, but it layered every mode of communication: spatial, visual, linguistic, aural, and gestural. When the writing referenced the 911 call, I was about to Google it to see if I could find the audio online. Then, as I scrolled further to read the transcripts, the audio started playing on its own. It was effortless and yet so effective. Without the audio from the 911 call, the transcripts highlighting as they are read aloud, the videos of actual testimony at trial, and the various other forms of incorporated media, I do not think this story would be nearly as powerful or as interesting. I highly recommend that everyone go check it out.
The second multimodal project that I want to draw attention to is YouTube. I do not know how many of you are active YouTube viewers, but I watch my subscriptions every day, and it has become part of my daily routine. I actually go home looking forward to curling up in bed with No Thai and watching YouTube videos for hours. The YouTube community has taken off in the past five years, with many YouTubers now able to make their living solely off the ads on their videos. I am not entirely sure what all factors into how much a YouTuber makes, but I do know that views play an important role. That being said, many YouTubers have been accused of using ‘clickbait’ to gain viewers – for example, titling a video “ARE WE BREAKING UP?” when the video is not at all about their relationship actually ending. Viewers have grown more and more annoyed by this craze, especially after finding out how much YouTubers make from views. Trisha Paytas, a YouTuber whom I have watched for years, is famous for doing this – so much so that her audience has learned to take nothing seriously.
The photos above are the thumbnails of the videos before you click to watch them. Sometimes a thumbnail may not even be from the video at all. Rather, YouTubers will create a thumbnail to draw more attention. These are multimodal projects because they combine the linguistic (the title of the video) with the spatial and visual (how the thumbnail is organized). One could also examine the aural aspect of YouTube videos as well as the gestural. The gestures being made in the above thumbnails are very distinct from each other and suggest different things. For example, Trisha uses emojis to edit her thumbnails. If she had used a laughing face rather than a scared face for her “WE WERE ALMOST MURDERED!” thumbnail, we would know that something funny happened in the video (or perhaps not, because clickbait, but you know what I mean). We are able to read the texts with different emotions even when the titles and thumbnails are clickbait.
Lastly, I would like to look at a multimodal project that has taken comedy to a new level. GIFs have taken over the internet, and I am not complaining. One of my favorite things to do is tweet something and then use a GIF as my reaction. For those who don’t know what a GIF is – let me change your life. Imagine adding your favorite line from a TV show or movie to explain how you feel – but rather than just playing the scene aloud, you can watch it happen over and over again without the need for audio. Oftentimes, a GIF will include subtitles if they are needed. Twitter’s multimodal project of incorporating GIFs has changed the Twitter platform for good. They have even made it easy to insert a GIF by creating a dropdown menu sorted by reactions for all your GIF needs.
I love the use of the linguistic, visual, and gestural modes here. GIFs give users the ability to make “moving” images that play in an infinite loop. Because the gestures have movement rather than being frozen in a static image, they are easier to interpret. What is interesting about GIFs is that the same GIF could be applied to thousands of situations. For example, the GIF below is associated with 2016 as a whole, but I could also use it to explain how my weekend went. There is so much hidden meaning in using GIFs, and the comedic aspect has made them popular among internet users.
Before reading “What Are Multimodal Projects?” I was unaware of all the texts that I use in everyday life. I never thought of audio recordings or YouTube videos or GIFs as anything more than forms of media in their truest form: audio or visual. There is so much more that factors into how we perceive certain texts, and so much more that goes into crafting those perceptions. It is fascinating just how many different modes of communication can be found in one multimodal project.