Daily Learnings from SXSW 2018 - From Face Down to Chin Up
Building on our first SXSW post about 2018 being the beginning of the end of mobile, we are diving into how our digital ecosystem is becoming screenless. That’s right, no screens – finally a generation with some decent posture.
With Google Home, Amazon’s Alexa, Apple’s Siri and the lesser-known but very powerful SoundHound gaining significant popularity, every strategist needs to look closely at how brands will connect with consumers when the screen is no longer the standard starting point of a digital journey. This is a tough concept for some to wrap their heads around, because until now the screen has always been the gateway to digital content and information.
With the proliferation of digital voice assistants and advancements in voice recognition accuracy, our mobile-first, screen-first world is being usurped. Don’t believe it? It’s happening faster than most people realize. At this year’s SXSW, this is a recurring theme, and it isn’t taking on any of the fad-like characteristics we sometimes see here. Artificial intelligence is driving this advancement, allowing for significantly more robust, voice-led opportunities. If it hasn’t already, it will impact your experience in your car, your home and your life. It’s freeing up your eyes and simplifying the journey you take to access content.
These screenless advancements go well beyond word recognition: gesture recognition, mood (emotional) recognition, tone detection and background-noise recognition, along with broader auditory tech advancements, are all making incredibly segmented and contextually relevant content possible.
For us marketers, this will have an enormous impact on how we reach, engage and measure consumers and their behaviour. For instance, we need to start thinking about how we design for a screenless world. How can we prepare brands for consumer-level voice inquiries that use a long tail of more natural language and keywords? How can brands start designing their own voice and tone as part of their brand personality? How can we completely rethink how to convey content when a screen isn’t in the scenario? In that case, an image or video won’t do. Then there are further segmentation and modeling opportunities when AI-powered voice recognition lets us understand a consumer’s mood, gestures and intent – the possibilities are endless. And we need to start understanding how to truly analyze sequential inquiry data; today it sits in Google’s walled garden, but new technology is also opening up new data sources.
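To make the “long tail of more natural language” point concrete, here is a minimal Python sketch of how a brand might map rambling, conversational voice queries onto a handful of intents. The intents, trigger phrases and sample queries are all hypothetical illustrations, not any particular platform’s API – real voice assistants use far more sophisticated natural-language models.

```python
from typing import Optional

# Hypothetical brand intents, each with a few long-tail trigger phrases.
VOICE_INTENTS = {
    "store_hours": ["what time", "open", "close", "hours"],
    "product_info": ["tell me about", "what is", "ingredients"],
    "reorder": ["order again", "buy more", "same as last time"],
}

def match_intent(query: str) -> Optional[str]:
    """Return the intent whose trigger phrases best cover the query,
    or None when nothing matches."""
    q = query.lower()
    scores = {
        intent: sum(phrase in q for phrase in phrases)
        for intent, phrases in VOICE_INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Even this toy version shows why voice demands different design thinking: there is no menu or screen to constrain the user, so the brand has to anticipate the many natural ways a question can be phrased.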
FUSE is actively exploring these new impacts, but we are also thrilled about the idea of people making eye contact with each other and avoiding lamp-post collisions on their walks home. This face-down-to-chin-up trend has all of us pretty motivated, and dreaming about a world with better posture.