Mark Beccue - Principal Analyst, Tractica
Voice Interaction as Past and Future
Unlocking the Power of Place: the Opportunities for Place-Aware Apps & Contextual UX
Embracing the change Machine Learning Brings to Our Business Model
The Coca-Cola Company
Day One Keynote Interface Innovation – How AI & Voice is Driving the New UI Paradigm
Not too long ago, important information was memorized. We knew phone numbers by heart; now the numbers of our closest contacts could be lost in a crisis. We rely heavily on technology for things we couldn't have imagined just a few years ago: navigating directions, connecting with colleagues, sharing files, finding love, to name just a few.
The proliferation of devices and apps has changed our lives, making us more efficient yet more distracted, more functional but less emotional. And as quickly as our dependency on apps as we know them emerged, another paradigm is starting to take hold: a new user experience driven by voice, motion and emotion.
Color and visuals won't be the only means to elicit emotion. Rather, language, tone, and context will be the catalysts for creating trust and adoption between customers and brands, grounded in life events and dynamic micro-moments.
Voice-driven virtual assistants like Siri and Alexa, powered by artificial intelligence, are leading this new evolution, changing consumer preferences for how we work and how we buy. We're architecting an invisible experience. According to market research firm IHS Markit, more than 4 billion consumer devices will make use of digital assistants by the end of 2017.
The way we market, design and distribute products will change as apps disappear and interfaces become invisible. For a new UI experience to achieve initial adoption, we have to gain customers’ trust. We’re designing now for the more powerful and pervasive technologies of the future. Together let’s explore the anatomy of an invisible user experience.
Those of us who can talk have known how for so long that we have no idea how we actually learned the skill. This is what makes voice interaction with machines so attractive: we get to speak naturally and have a technological object respond appropriately, which feels magical. Yet we are all too familiar with speaking naturally and being ignored, misunderstood, or just not quite "gotten." It's frustrating!
The allure of voice interaction with machines is this inherent humanity. It is at once its greatest strength and challenge both as a modality and a design task. The ease of describing the exchange of a few words is offset by the seemingly infinite subtlety and range of meaning. It’s an opportunity to design like none other.
There's a good way and a bad way, and it's easy not to know which one you are delivering. Despite this conundrum, come learn why you already know how to design for voice, why you don't, and what to do about this past and future of UX design.
AI is in the devices that talk to us at home, in the car and all around us. AI is the brain behind the software we interact with on the web and mobile: search, filters in social apps, recommendation engines, chatbots, and photo recognition apps. As AI enters more realms of normal human interaction, the engagement between human and machine blurs, and we seek more voice-driven, conversational, context-aware interactions. If the AI behind the interaction is machine-learning from each of our interactions, how can we design for a consistent engagement that feels authentic to the user while delivering the desired value from the interaction? Can the interface evolve into a learning interface between human and machine? In voice conversations, how can we design for elements of tone, inflection, and modulation to represent the desired engagement with the user? Can voice capture the social nuances of communication between humans and machines and keep the desired equilibrium for continued engagement?
CxO Roadmap - AI, Voice, machine vision, haptics: Blending inputs into a new UI
In the mobile age, location should be part of every developer’s toolbox: whether it’s segmenting users based on where they typically go in the real world or re-engaging with them based on where they are at that very moment. Foursquare’s Pilgrim SDK unlocks location intelligence for developers and brands in groundbreaking ways to drive DAU, session time, and transactions. Learn how Capital One, TouchTunes, SnipSnap and Retale create DAUs with Pilgrim SDK.
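The core technique behind this kind of moment-of-arrival re-engagement is geofencing: comparing a user's current coordinates against a place of interest and firing an event when they are close enough. The sketch below is a minimal, framework-agnostic illustration of that idea; it does not use Foursquare's actual Pilgrim SDK API, and the coordinates and radius are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_reengage(user_loc, venue_loc, radius_m=100.0):
    """Return True when the user is inside the venue's geofence radius."""
    return haversine_m(*user_loc, *venue_loc) <= radius_m

# Hypothetical venue: a user at the venue triggers the fence; one ~30 km north does not.
venue = (40.7411, -73.9897)
print(should_reengage((40.7411, -73.9897), venue))  # inside the fence
print(should_reengage((41.0000, -73.9897), venue))  # well outside the fence
```

A production SDK handles the hard parts this sketch skips: battery-efficient location sampling, visit detection versus pass-bys, and place snapping from raw coordinates.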
Design & Development - Workshops
Nan and Max will discuss how design thinking combined with data centricity can power the creation of holistic experiences that span hardware, software and customer interactions.
At Google, we believe the future is AI first, and we’re investing heavily in the fields of machine learning, speech recognition and language understanding.
These technologies come together in the Google Assistant, which allows you to have a conversation with Google that helps you get things done. Developers can build apps for the Google Assistant using Actions on Google.
This talk is a primer on the Actions On Google platform. You’ll learn how to:
• Get in front of your audience, and monetize your applications.
• Design experiences that delight your users with fantastic conversational UX.
We will also give you the option to learn by coding, so please bring your laptop.
Looking forward to seeing you!
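At the heart of any conversational platform, including Actions on Google, is fulfillment: a webhook receives a matched intent plus its extracted parameters and returns the next prompt. The sketch below is a deliberately framework-agnostic toy, not the Actions on Google API itself; the intent names and response shape are hypothetical.

```python
def handle_intent(intent: str, params: dict) -> dict:
    """Route a matched intent to a spoken response (toy fulfillment logic)."""
    if intent == "welcome":
        return {"speech": "Hi! Ask me for a fun fact.", "expect_input": True}
    if intent == "fun_fact":
        # Parameters extracted by the NLU layer arrive as a dict.
        topic = params.get("topic", "cats")
        return {"speech": f"Here's a fun fact about {topic}!", "expect_input": True}
    # Fallback: reprompt rather than ending the conversation.
    return {"speech": "Sorry, I didn't get that. Try asking for a fun fact.",
            "expect_input": True}

print(handle_intent("fun_fact", {"topic": "dogs"})["speech"])
```

In the real platform, the intent matching is done for you and your webhook only implements handlers like these; the conversational UX work is deciding what each prompt says and when to hand back the microphone.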
Voice Interfaces In Action
The RBC Conversational Customer Care Virtual Assistant was designed to boost automation, minimize advisor-to-advisor transfers, improve customer satisfaction with conversational dialogues, and use voice biometrics for user authentication. Taking a holistic approach to conversational customer care, everything is deployed on a single platform with a single knowledgebase, works across all available channels, and connects to real human advisors when necessary, all in service of the goal of resolving 100% of customer requests on the first interaction.
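The architectural idea in this abstract, one shared knowledgebase serving every channel, with a single handoff to a human when the assistant can't answer, can be sketched in a few lines. This is a hypothetical toy, not RBC's actual implementation; the turn fields and knowledgebase entries are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    channel: str                 # "voice", "chat", "mobile", ...
    utterance: str
    authenticated: bool = False  # e.g., via voice biometrics on the voice channel

# Single knowledgebase shared by every channel (toy stand-in).
KNOWLEDGEBASE = {
    "reset password": "You can reset your password under Settings > Security.",
    "branch hours": "Most branches are open 9am to 5pm on weekdays.",
}

def serve(turn: Turn) -> dict:
    """Answer from the shared knowledgebase; escalate to a human advisor otherwise."""
    answer = KNOWLEDGEBASE.get(turn.utterance.lower())
    if answer:
        return {"handled_by": "virtual_assistant", "reply": answer}
    # One direct transfer to a human advisor, carrying the context along
    # so the customer never repeats themselves (no advisor-to-advisor hops).
    return {"handled_by": "human_advisor",
            "context": {"channel": turn.channel, "utterance": turn.utterance}}

print(serve(Turn("chat", "reset password"))["handled_by"])
```

The design choice worth noting: because every channel routes through the same `serve` entry point and knowledgebase, an answer improved once is improved everywhere.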
Context & Personalization: Enabling focused & natural interactions
We're entering an era of Ambient Computing, driven by mass amounts of data flowing through artificially intelligent, digital ecosystems. The days of being glued to a screen are nearing their end, and instead we'll be interacting with these systems conversationally.
However, creating these systems requires us to reverse engineer some of the most fundamental aspects of life, which can be difficult and overwhelming — especially if you don't know where to start.
Designing Intelligence will:
• Help you understand what's happening in the artificial intelligence space now, where it's been, and where it's going
• Explain why you'll be making one of these systems sooner than you think and teach you how to create products that will thrive in the age of automation
• Give you language to help you sell your ideas to your team and enable you to bring them to life