There’s no doubt that a revolution is coming. As mixed-mode customer contact through Amazon Alexa and Google Assistant becomes ubiquitous and enterprises create their own voice apps and chatbots, how will they ensure brand identity and differentiation from their direct competitors?
Enterprises are coming to understand that these voice- and text-enabled services can be valuable new channels and a direct connection with the consumer. But will brand identity be diluted by these MetaBots? How do brands avoid disintermediation?
There’s an opportunity and a process for delivering extraordinary, branded, A.I.-driven customer experiences through these channels. In this keynote, we’ll discuss how.
With the total number of Alexa skills surpassing 50,000 worldwide, companies now face the challenge of how to make themselves heard in the voice universe. In this session we take a closer look at how companies can combine the strengths of existing brands with a user-centric approach to conversational design in order to create voice experiences that stand out, no matter the size of the competition.
This talk explores the genres and types that attract the attention of professional voice developers, as well as the business models that have been tried, established, and brought to fruition. These include games built around Amazon’s developer rewards program, highly engaging games with premium content, voice apps built for branding and marketing purposes, and assistants for board or video games.
One notable example we’ll be investigating is Sensible Object’s ‘When In Rome’. This is a board game that comes with its own Skill, in which Alexa introduces the rules, keeps track of scores, and moderates the game’s trivia questions. We’ll be looking at how well voice is integrated into this traditional medium, and whether it has the potential to change customers’ expectations of tabletop games in general.
After this fine example of a voice-enabled game, we will see how much value Alexa & Co. can provide in voice-assisted games, where voice serves as an optional modality. One such example from contemporary computer games is Destiny 2, in which Alexa assumes the role of an in-game character and manages parts of the player’s inventory and clan communications.
In the past year, machine learning has made another great step forward and is about to become pervasive in enterprise software. Learn more about a wide spectrum of new capabilities intended to provide intelligent solutions for an even broader variety of business challenges.
As user behaviour continues to shift to voice-based activities, brand managers are wondering “Is voice relevant to my business?” To help answer that question, we’ll explore whether yours is a voice-first company or, rather, if a “voice as a channel” strategy is better suited to reaching your customers.
In this session, we’ll share the evaluation companies need to undertake to decide the best path forward in voice. We’ll examine common business types that should be thinking about a voice-first strategy. For those, we will look at factors that help determine whether to develop an owned-and-operated (O&O) customer-facing Voice Interface Application, factors to consider for multimodal devices, and whether to develop for a single platform or multiple platforms.
For those better suited to a “voice as a channel” strategy, we’ll review how to optimize your digital experience for voice search, inclusion in the Google marketplace, and how to leverage existing data to inform future voice decisions. We will share “voice as a channel” strategy tactics to connect with consumers who are engaging with Voice Interface Applications including content development, partnership evaluation, distribution and audience engagement.
Making smart chatbots that really understand what the user means can be quite time-consuming. A smart bot needs to be trained with an extensive set of expressions, and coming up with fifty or a hundred ways to express the same meaning can be hard, especially for people who are not used to it. To enhance the user experience for clients of our chatbot platform, we are currently implementing text generation: based on a few expressions provided by the user, we generate a number of similar expressions using the most innovative text generation techniques. Additionally, our text generation system learns on the fly! Its fine-tuning capabilities allow us to tailor the expression generation functionality in near-real time.
We compare approaches based on character- and word-level embeddings and explain their advantages and disadvantages. Finally, we discuss the importance of post-processing for filtering candidate expressions, using part-of-speech taggers and other NLP tools from popular toolkits and libraries such as spaCy, Gensim and NLTK.
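As a simplified illustration of that post-processing stage, the sketch below filters generated paraphrase candidates against the seed expression. A plain content-word overlap check stands in for the POS-based filtering a real system would do with spaCy or NLTK; the utterances, stop-word list, and threshold are invented for the example.

```python
from __future__ import annotations
import re

# Minimal stop-word list; a real pipeline would use spaCy's or NLTK's.
STOP_WORDS = {"i", "a", "an", "the", "to", "my", "me", "please", "can", "you"}

def content_words(utterance: str) -> set[str]:
    """Lowercase word tokens minus stop words."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return {t for t in tokens if t not in STOP_WORDS}

def filter_candidates(seed: str, candidates: list[str],
                      min_overlap: float = 0.5) -> list[str]:
    """Keep candidates that share enough content words with the seed expression."""
    seed_words = content_words(seed)
    kept = []
    for cand in candidates:
        overlap = len(seed_words & content_words(cand)) / max(len(seed_words), 1)
        if overlap >= min_overlap:
            kept.append(cand)
    return kept

candidates = [
    "please check my account balance",    # good paraphrase, kept
    "what is the balance of my account",  # good paraphrase, kept
    "play some music",                    # off-topic, dropped
]
print(filter_candidates("check my balance", candidates))
```

Swapping the overlap check for a POS-pattern comparison over tagged tokens is a straightforward extension once a tagger is in the loop.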
Unlike other voice assistants, Alexa already has a shopping function for the Amazon store. However, the Amazon store (and, for that matter, the rest of the internet) is not prepared to support this function.
Content and search options must be transformed into "natural language" to fit users' needs and to make a great voice user interface possible.
Robert C. Mendez from Internet of Voice (Cologne) offers some Dos and Don'ts, as well as some hints as to what vendors can do on Amazon to make their content findable with Alexa.
Recent viral articles revealed that the number of purchases made through smart speakers is lower than expected and that customers tend to avoid shopping via voice.
Does this mean that voice shopping is just a utopia? A promise that cannot be fulfilled?
During the talk we’ll find out the truth behind voice shopping. We’ll take a look at some success stories as well as harsh failures to uncover common misconceptions and the secret of voice commerce. Attendees will better understand how to design and develop voice applications that really make sense for e-commerce, from strategy to user experience. We'll look past the utopia and discover the business sense in building for voice purchasing.
Echo Buttons are the first gadgets of their kind, with a lot of potential for developers. They enable additional contextual user input for your skills, both physical and visual.
When developing for the Buttons, there are more things to consider on the technical and conceptual side than when developing for an Echo device alone.
Mario Johansson will show you his best practices, techniques, and a few tips to consider when building for the Echo Buttons in this interactive session. Learn how to use the Gadget Controller and Game Engine APIs to build great skills for this revolutionary input device.
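To give a feel for those two interfaces, the sketch below builds the pair of directives an Echo Buttons skill typically returns: a Game Engine directive that listens for a button press, and a Gadget Controller directive that lights the pressed button. Field names follow the Alexa Gadgets documentation at the time of writing; treat the payloads as an illustration, not a verified contract.

```python
def start_input_handler(timeout_ms: int = 10000) -> dict:
    """GameEngine directive: listen for a button-down event until the timeout."""
    return {
        "type": "GameEngine.StartInputHandler",
        "timeout": timeout_ms,
        "recognizers": {
            "button_down": {
                "type": "match",
                "fuzzy": False,
                "anchor": "end",
                "pattern": [{"action": "down"}],
            }
        },
        "events": {
            "button_pressed": {
                "meets": ["button_down"],
                "reports": "matches",
                "shouldEndInputHandler": True,
            }
        },
    }

def light_button_blue() -> dict:
    """GadgetController directive: flash the pressed button blue for one second."""
    return {
        "type": "GadgetController.SetLight",
        "version": 1,
        "targetGadgets": [],  # empty list targets all paired buttons
        "parameters": {
            "triggerEvent": "buttonDown",
            "triggerEventTimeMs": 0,
            "animations": [{
                "repeat": 1,
                "targetLights": ["1"],
                "sequence": [{"durationMs": 1000, "color": "0000FF", "blend": False}],
            }],
        },
    }

# A skill response would carry both directives alongside its output speech.
directives = [start_input_handler(), light_button_blue()]
print([d["type"] for d in directives])
```

The key design point is that the skill never polls the buttons: it declares recognizers and events up front, and Alexa calls the skill back only when a declared event fires.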
Voice Games are one of the fastest growing categories in the Amazon Alexa Skill Store, and casual gaming is currently experiencing a revolution with voice assistants. This year even a board game was released that interacts with Amazon Alexa. Why “voice” and “games” are a perfect match, and how you can transform a game concept from the mobile screen into a voice-first experience, will be revealed in this talk.
Tim Kahle, one of the co-founders of 169 Labs and one of the three German Alexa Champions, is on the jury for the current Alexa Skills Kit Challenge, with prizes worth a total of EUR 50,000. He and Dominik Meissner will talk about the agency’s latest voice game project (to be published in October 2018): they have brought one of the most famous quiz games (one with its own TV show) to Amazon Alexa.
In this talk Matthias will present an overview of the best banking voice applications for Alexa and Google Assistant. He will also look at the banking skill of Sparkasse Bremen in more detail and will pay special attention to topics like optimization for Echo Show and gamification.
Building a voice application for Amazon Alexa requires a voice-first approach. But with the growing family of devices with displays, such as the Echo Spot, the Echo Show, or the Fire TV, you are able to support your voice experience with photos, illustrations, or videos. This session concentrates on how to build a multimodal application with Amazon Alexa. We will take a closer look at best practices as well as some tools and techniques to help you create richer voice applications.
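As a rough sketch of what "voice-first, screen-supported" means in practice: the response below always carries speech, and attaches an Alexa Presentation Language (APL) document only when the device reports a screen. The APL structure follows the Alexa Skills Kit documentation at the time of writing; the document content and token are invented for the example.

```python
def build_response(speech: str, title: str, supports_apl: bool) -> dict:
    """Alexa skill response: speech everywhere, APL only on screened devices."""
    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
    if supports_apl:  # in a real skill, derived from the request's supportedInterfaces
        response["response"]["directives"] = [{
            "type": "Alexa.Presentation.APL.RenderDocument",
            "token": "welcome",  # illustrative token
            "document": {
                "type": "APL",
                "version": "1.4",
                "mainTemplate": {
                    "items": [{"type": "Text", "text": title, "fontSize": "40dp"}]
                },
            },
        }]
    return response

# Same skill logic, two renderings: an Echo Dot gets speech only,
# an Echo Show additionally renders the document.
print(build_response("Welcome back!", "Welcome", supports_apl=True)["response"].keys())
```

The discipline this enforces, that the interaction must make complete sense with the directive stripped away, is exactly the voice-first principle the session describes.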
The most difficult aspect of creating a compelling voice app is the design. Because of the limited fidelity, the classic palette of UX tools goes out the window and new ones have to be invented. With three years of experience creating the richest and most engaging content on Alexa and Assistant, TsaTsaTzu will present audio techniques to deal with the problems of discoverability, complexity, and support that this platform presents. You will leave with actionable design tools that can be used to craft applications which, like ours, will engage users for hundreds of hours.
Machine learning enables customized conversations between man and machine that can result in buying decisions. Marketing experts should include artificial intelligence purposefully in their strategic thinking. It is important to build a bridge for the customer from the source of inspiration to your own content. Kathleen Jaedtke and Tina Nord explain how this can be achieved through the use of dialogue-oriented technologies.
Jan König is one of the founders of Jovo (http://www.jovo.tech), the first open source framework that enables developers to build voice apps for both Amazon Alexa and Google Assistant. In this session, Jan will walk through the essentials of building for Alexa and Google, talk about important differences between the two platforms, and show practical examples of successful voice apps.
As a consumer, commodities are great. Companies will compete for the lowest possible price because every product is almost the same. For the companies, on the other hand, this is a cutthroat business. Margins are low and differentiating yourself is nearly impossible. Building materials, vegetables, cars and even smartphones have become (near) commodities. But will AI ever become a commodity? And what are the hurdles we need to overcome to (not) get there?
Make your devices smarter by embedding the Google Assistant into your own device using the Assistant SDK. You could build your own voice-driven interactions without requiring your users to also have their own voice assistant device (like Google Home). It is available for you to tinker with on Raspberry Pi devices and it's easy to get started!
A hands-on approach to developing a Google Action with Dialogflow and Google protocol buffers. After a quick introduction to the tools, we are going to do a hands-on coding session to create a Google Action with a Python webhook. In this session you will learn how to create your own Google Action while keeping access to all the powerful machine learning tools Python has to offer.
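A minimal sketch of such a Python webhook, using only the standard library: Dialogflow sends the matched intent and parameters in the `queryResult` field of its webhook request, and the fulfillment only needs to return JSON with a `fulfillmentText`. The `get_greeting` intent and `given-name` parameter here are invented for the example; real intent and parameter names come from your Dialogflow agent.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_fulfillment(request: dict) -> dict:
    """Map a Dialogflow v2 webhook request to a fulfillment response."""
    intent = request["queryResult"]["intent"]["displayName"]
    params = request["queryResult"].get("parameters", {})
    if intent == "get_greeting":  # hypothetical intent from the agent
        name = params.get("given-name", "there")
        return {"fulfillmentText": f"Hello, {name}!"}
    return {"fulfillmentText": "Sorry, I did not get that."}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps(make_fulfillment(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve locally (Dialogflow needs a public HTTPS URL, e.g. via a tunnel):
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Because the fulfillment logic is a plain Python function, anything from the scientific Python stack can sit behind it, which is the point the session makes.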
Learn how to create your own voice interfaces using the Google Actions platform. We'll look at the technologies involved, how to plan for a conversation, and then build a voice interaction together. With the rise of voice assistants, voice is becoming another surface area for users to interact with your product or service. We can now start to blend this new technology with our existing offerings to improve user experience, engagement, and satisfaction.
In this workshop, we'll learn about the Google Actions platform and how it works to provide you with all the tools you need to build your own conversational interfaces. Throughout the workshop, you'll also build your own Action and see how to extend it for deeper integration with your application. We'll also spend time looking at how to design a conversation interface, including thinking through the various phases of dialog and sketching out expected flows. Finally, we'll look at how to review and improve your Action by using the analytics and AI training tools available from Google.
Learn the technical fundamentals of building voice actions quickly, as well as the social and human considerations for their design.