Sudha Jamthe - CEO, IoT Disruptions.com
AI is in the devices that talk to us at home, in the car, and all around us. It is the brain behind the software we interact with on the web and mobile, from search and filters in social apps to recommendation engines, chatbots, and photo-recognition apps. As AI enters more realms of everyday human interaction, the boundary between human and machine engagement blurs, and we seek more voice-driven, conversational, context-aware interactions. If the AI behind the interaction is learning from each of our interactions, how can we design for a consistent engagement that feels authentic to the user while delivering the desired value from the interaction? Can the interface evolve into a learning interface between human and machine? In voice conversations, how can we design for elements of tone, inflection, and modulation to represent the desired engagement with the user? Can voice capture the social nuances of communication between humans and machines and maintain the equilibrium needed for continued engagement?