--- [Strategic Thinking] Course Assignment: "The Future of My Industry"
In the Hollywood movie "Blade Runner 2049", set in 2049, there are no smartphones; the hero interacts with machines exactly the way we humans communicate with each other, through voice.
You may say it is just a science fiction film and still far away. But nobody doubts that the way people interact with machines will be reshaped in the future. There is no need to wait until 2049: within our lifetimes we will witness another big revolution, and the most significant sign of this revolution will be the large-scale application of new forms of human-computer interaction, especially voice interaction.
Industry Trend
Go back ten years: Apple launched the first iPhone in 2007 and sold 6.12 million units, marking the beginning of the smartphone era. At that time, few people foresaw that, ten years later, the mass application of the mobile internet (Didi Chuxing, Meituan, WeChat Pay, etc.) would bring such dramatic changes to people's lives.
In 2014, Amazon launched the first smart speaker, Echo. In 2016, 6.5 million Echo units were sold, roughly equivalent to the shipments of the first-generation iPhone, indicating that quite a number of early adopters were gradually entering the era of voice interaction. In 2017, total shipments of smart speakers reached 33 million and were expected to exceed 56 million in 2018. It is very likely that one day global shipments of such smart devices will reach the billions, as smartphones have in recent years.
What will the smartphone look like in 2023? The development of consumer electronics has always been accompanied by progress in human-computer interface technology. If the launch of the iPhone in 2007 was the "singularity" of touch-screen interaction, then the launch of Echo in 2014 was the "singularity" of voice interaction.
If we imagine the ultimate way for people to interact with machines, the brain-machine interface should be the natural end point. It is a direct link between the human brain and an external device, which means you could communicate with your phone or computer and control your environment by thought alone, the most natural and efficient manner possible. However, the brain-machine interface will probably not become reality any time soon. Without breakthroughs in neuroscience, it remains a spectacular science-fiction portrayal.
With the advancement of artificial intelligence, voice interaction has gradually entered people's lives. It offers many consumer benefits: it is fast, easy, context-driven, and keyboard-free. More and more people search for information by talking directly to their devices rather than typing what they want to know on a screen. From US industry giants such as Google, Microsoft, and Apple to Chinese internet leaders such as Tencent and Alibaba, companies have all launched smart speakers to capture the market. It is a definite trend that devices with voice interaction will become a personal necessity in the future, just as smartphones are today.
Echo is not just a speaker; it is more like an AI assistant that interacts with people by voice. You can ask Echo to write down to-do lists, book a ticket, or order a meal, all by talking to it. At the same time, as the IoT infrastructure matures, smart speakers will serve as smart-home control terminals, helping people control all their household appliances. Since voice interaction and AI assistants are so powerful, will the smartphone be simplified into a tiny box with a microphone?
At present, the performance of smart speakers is not perfect, and their functions are relatively limited. The accuracy of semantic comprehension is not 100%, so your smart speaker may sometimes fail to understand what you are saying. Efficiency can also suffer in some scenarios. For example, when ordering a meal, is it more convenient to choose dishes from pictures or merely by listening to their names? Or when you ask your smart speaker for directions from the CBD to downtown, it might take you five minutes to understand the driving route by listening, but only 30 seconds by looking at a map.
Speech will not completely replace the screen and keyboard. Integrating voice with the smartphone's traditional interaction style, the touch screen, would capture the benefits of each mode and improve the user experience.
- Voice as the input channel: it can issue commands to the system quickly, frees people's hands, and allows multitasking.
- Screen as the output channel: it can display large amounts of complex information at once, and it allows people to scan quickly through long sequential information.
Combining these two interaction styles in one device is not the same as embedding a voice assistant in a touch-screen phone, or upgrading an existing product such as the iPhone or Google Nexus. It requires deep insight into customer needs and a transformation of product technology.
Customer Needs
1) Environmental awareness. The device should adapt to the user, automatically identify the user's status and scenario, and provide corresponding services based on the conversation.
2) Data sharing and seamless connection with other smart devices.
3) Larger display area.
4) Smaller size, less weight, easier to carry around.
Product Technology
1) 5G. The transmission speed of 5G is hundreds of times higher than that of 4G; a 3D movie can be downloaded in a few seconds. With the advent of 5G, all applications can run in the cloud, including resource-hungry and complex programs, and a remote desktop can be streamed to the smartphone. In such circumstances, the smartphone will serve mainly as a display and interaction device, like a monitor, with no need to carry large storage or a high-speed processor.
2) AI will be more intelligent: it will automatically sense the environment, adapt to the user's behavior, and predict needs rather than wait for a command. For example, in an overtime-work (OT) scenario, the user says, "I'm tired, I wanna relax for a while." The AI assistant will analyze the conversation context and provide corresponding services: lower the seat so the user can rest, automatically activate a streaming app and queue up soft music according to the user's preferences, and have the takeaway platform proactively ask the user whether he or she would like some snacks (see the sketch after this list).
3) Virtual touch screen. The technology that augments reality with virtual objects, either through a projector or an optical display, and uses sensors to track a person's interaction with those objects, is becoming commercially available and affordable. With such technology, any smart device, whether a watch, glasses, or even a headset, could display content for the user to watch, and the user could control the device by touching the virtual screen just as if there were a physical one.
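To make point 2 above more concrete, here is a minimal sketch of how an AI assistant might map one spoken utterance to a bundle of coordinated services. It is purely illustrative: the device interfaces (SmartSeat, MusicApp, TakeawayApp), the keyword-based intent matching, and the stored user preference are all assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: map a voice utterance to coordinated device actions.
from dataclasses import dataclass


@dataclass
class UserProfile:
    name: str
    preferred_genre: str = "soft"  # assumed stored user preference


class SmartSeat:
    def recline(self) -> str:
        return "seat reclined so the user can rest"


class MusicApp:
    def play(self, genre: str) -> str:
        return f"streaming {genre} music"


class TakeawayApp:
    def suggest_snacks(self) -> str:
        return "asked the user whether they would like some snacks"


def infer_intent(utterance: str) -> str:
    """Crude keyword matching standing in for real NLU intent detection."""
    text = utterance.lower()
    if "tired" in text or "relax" in text:
        return "relax"
    return "unknown"


def handle_utterance(utterance: str, user: UserProfile) -> list:
    """Translate the inferred intent into a bundle of coordinated actions."""
    seat, music, takeaway = SmartSeat(), MusicApp(), TakeawayApp()
    if infer_intent(utterance) == "relax":
        return [
            seat.recline(),                    # lower the seat for a rest
            music.play(user.preferred_genre),  # queue music by preference
            takeaway.suggest_snacks(),         # proactive snack prompt
        ]
    return ["sorry, I did not understand that"]


if __name__ == "__main__":
    profile = UserProfile(name="demo user")
    for action in handle_utterance("I'm tired, I wanna relax for a while", profile):
        print(action)
```

In a real product, the keyword matching would be replaced by a proper language-understanding model and the stub classes by actual device and service integrations; the point of the sketch is only the orchestration pattern, where one utterance triggers several services at once.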
In a word, the "smartphone" of 2023 might not look like a smartphone at all. It will be a small, portable device with a powerful AI assistant that projects a virtual touch screen when needed, and people will control it by voice and touch.
Competition
Besides mobile phone manufacturers, technology companies whose core business has been internet services will be the strongest competitors in the future. The development of future smartphones will depend on R&D capability, cloud computing, and machine learning, all of which are the competitive advantages of technology companies. Traditional smartphone manufacturers are good at hardware and manufacturing, which will no longer be vital for the new smartphones. At the same time, technology companies already run many internet services that provide plenty of data for iterating their AI assistants; traditional smartphone manufacturers do not have such data.
Conclusion
With the advancement of voice interaction and artificial intelligence, smartphones will certainly bring intelligence to everyone in the future. The companies that can leverage these new technologies and translate them into a good user experience will win the market.