Sajjad Khan, vice president digital vehicle & mobility at Daimler, tells automotiveIT how in-vehicle UX, operating systems and AI will develop and converge.
Daimler’s Khan says that, with the help of cloud-based technologies, his team can develop and roll out new digital functions in two-to-three months. (Photo: Claus Dick)
Artificial intelligence will play a key role in the operation of future car generations, says Sajjad Khan. The Daimler IT executive explains the degree of innovation that is redefining HMI systems. He also predicts that the number of apps in the car will soon decline.
Digitalization is now a core part of Daimler’s corporate strategy. Can you bring innovative ideas into the car quickly enough, or would you still prefer to move into a much higher gear?
To paraphrase a line from Bob Dylan’s song ‘The times they are a changin’: If you’re satisfied and stop swimming, you’ll sink like a stone. That’s why our motto is: We can always improve. Take idea-to-product, for example. We have drastically shortened the time between an idea and the realization of a product. In the car we continue to see five- to six-year lifecycles. But in the software area, we can now bring innovations within two or three months. To make these fast software updates possible, we have not only adapted our production and development processes, but have also made changes to cars already on the road. A concrete example is our MBUX infotainment system, which we introduced earlier this year. It perfectly implements our over-the-air strategy.
Does it really take just two or three months to turn ideas into products?
The speed depends, of course, on the complexity of an idea and the desired functions. When we are working on defined interfaces that have already been implemented on our platforms, we manage to do releases in six to eight weeks. And I’m talking about live features and not merely about test products. For MBUX we develop and test many features in the cloud. We are by now using the cloud as the universal control unit for the entire vehicle.
Which roadblocks do you have to overcome to keep the pace high and, maybe even, step it up a bit more?
Our workforce and their know-how are always central to our strategy. Our goal is not just to continue expanding this expertise, but also to orchestrate the exchanges within an international network. That’s how we build up speed and innovation power. In Sindelfingen, Germany, we have many people with an excellent software background. They optimally complement the expertise we have in our digital hubs in Berlin, Seattle, Beijing and Tel Aviv. We also have a development unit in Silicon Valley.
Which technologies will define tomorrow’s digital vehicle?
From a technology point of view, there are several elements of interaction that will be needed; these include gestures, vision, speech and touch. This multitude is anchored in human nature and it is in line with the five human senses. With the MBUX interface in the new Mercedes cars we have made a quantum leap in natural voice control. The cloud plays an important role here, both as a push and a pull mechanism. Ideally, the driver doesn’t even notice which processes take place in the cloud and which happen within the on-board systems.
It’s still early days in the development of next-generation – 5G – connectivity. How do you deal with this?
When the cloud is not available we rely on edge computing with an approach we call C-and-C, which stands for cloud and car. Of course we also assure functionality in the car when there is no internet connection. The high bandwidth that will come with 5G doesn’t help us when it is not available. That’s why we focus on network coverage. In pre-development we are looking at whether satellites, for example, could improve the availability of mobile internet. A lot of companies are working on this issue and a rollout in two to three years seems possible.
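The cloud-and-car idea described here can be sketched as a simple fallback pattern: prefer the cloud result, but degrade gracefully to an on-board computation when connectivity drops. This is a minimal illustration only; the function names and the string results are hypothetical, not Daimler APIs.

```python
def query_cloud(request: str, connected: bool) -> str:
    """Simulated cloud call; raises when there is no mobile internet."""
    if not connected:
        raise ConnectionError("no mobile internet")
    return f"cloud-result:{request}"

def query_onboard(request: str) -> str:
    """Simulated on-board (edge) fallback with reduced functionality."""
    return f"onboard-result:{request}"

def handle(request: str, connected: bool) -> str:
    """Prefer the cloud; fall back to the car's own compute on failure."""
    try:
        return query_cloud(request, connected)
    except ConnectionError:
        return query_onboard(request)
```

Ideally, as Khan notes, the driver never notices which path produced the answer; only the quality of the result may differ.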
Buttons on the steering wheel, touch screens, natural speech input – the industry is counting on redundancy when it comes to operating a vehicle. Will the future see ever more HMI systems in Mercedes vehicles?
I wouldn’t quite put it that way. The decision will always depend on the vehicle and on the wishes of our customers. Essentially, we apply the highest standards and only bring new technologies into the vehicle when they are mature, function reliably and can be operated in an optimal way. We are courageous enough to forego a new feature when we see it doesn’t yet meet our high requirements for improving the user experience. Touch control, for example, assures a minimum degree of distraction. But the question arises whether the advantages of the system will take a back seat as cars increasingly drive autonomously. Development will then move toward voice commands as well as gesture and gaze control processed by in-car artificial intelligence.
Are there any operating elements that will gradually disappear from the cockpit? For example, the traditional rotary pushbuttons?
There continue to be elements in a car that need to be updated. In our new A-Class, for example, we see that the combination of touchpad and voice control functions perfectly. Rotary push controls are less necessary for intuitive operation. Our strategy is to align as closely as possible with natural human senses and interaction mechanisms, so that the technologies are as intelligent and intuitive as possible.
Which technologies and HMI elements will define the car 10 years from now?
Operations will surely come closer to the way people naturally interact. In the new A-Class you can see the first steps on this journey. Artificial intelligence will clearly play a much more important role than individual operating elements or sensors. Speech recognition is the first step, but even more important is correct categorization in a particular context. For example, instead of merely recognizing the three words “I am cold,” the vehicle will in future be able to recognize the speaker’s intention and turn up the heating.
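The distinction Khan draws, between recognizing words and recognizing intention, can be illustrated with a toy intent classifier: the utterance is mapped to an intent, and the intent to an action, so “I am cold” triggers the heating even though the word “heating” was never spoken. The intent names, phrase lists and action strings below are invented for illustration; production systems would use trained language models rather than phrase matching.

```python
# Hypothetical intent table: each intent is triggered by several phrasings.
INTENT_PHRASES = {
    "increase_temperature": ["i am cold", "it's freezing", "i'm chilly"],
    "decrease_temperature": ["i am hot", "it's too warm"],
}

# Hypothetical mapping from recognized intent to vehicle action.
ACTIONS = {
    "increase_temperature": "turning heating up",
    "decrease_temperature": "turning air conditioning on",
}

def recognize_intent(utterance: str) -> str:
    """Map a spoken utterance to an intent, not to literal keywords."""
    text = utterance.lower().strip()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

def act(intent: str) -> str:
    """Translate an intent into a (simulated) vehicle action."""
    return ACTIONS.get(intent, "no action")
```

The point of the sketch is the indirection: the system reasons about what the speaker wants, so new phrasings can be added without changing the action logic.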
Digital displays are pushing out classical analog instruments. Is that just an optical improvement or do you see clear advantages?
It’s a mix of both. Of course, we are seeing new design opportunities in the vehicle. My team is working closely with chief designer Gorden Wagener. Because of the additional freedom that comes with using the technology, we can design more emotional and intelligent cars. In addition, we can provide more personalization for the driver, who can decide what his display will look like. Here, too, AI will play an important role in the future, showing various forms of information depending on the particular situation.
MBUX was developed across your global digital network. What are your expectations for this decentralized development approach?
There was a very pragmatic reason why we did this: We couldn’t recruit enough specialists in the individual centers. Seattle is a mecca for cloud experts. Tel Aviv has an extremely innovative startup culture, while China is clearly ahead of Germany in digital ecosystems. Worldwide, we are looking for the best talents and we bring them together in a high-performance network that is managed from our headquarters in Stuttgart.
What role do partnerships and common platforms play in the development of new technologies? Can car manufacturers still solve the technological challenges on their own?
In the technology field we operate in, there is no doubt that a company cannot move fast enough on its own. Entering into partnerships is the only right strategy. The flip side, of course, is that you’re bringing competitors on board. That’s why we have to set clear boundaries: How far does cooperation with competitors go, and at what point do we start to compete with other companies? We support, for example, Android Auto and Apple’s CarPlay in our vehicles, but we offer the better MBUX navigation system. We want to offer our customers as many options as possible, while providing the best solution ourselves.
On the technology side of MBUX, we use the computing power of Nvidia’s platform, a very close cooperation. But that’s not a new phenomenon; look at the decades-long cooperations we have had with partners such as Robert Bosch and Continental. But we will never give up control of areas that are a USP of our products and services. I’m thinking specifically of the customer interface and the technology integration.
Will the Mercedes me ecosystem at some point open up for third-party developers and products?
We’re already open to products from third-party suppliers. And it’s, of course, conceivable that we bring third-party developers onto the platform, as long as their apps are tested and approved by Mercedes. A completely different aspect of this discussion is the question of whether apps haven’t already passed the zenith of their hype cycle. Instead of counting on the manual opening and operating of an app, I’m betting on intuitive operation through artificial intelligence, which controls the software for me. In my view, the relevance of third-party apps will gradually decrease. What’s exciting is the question of how quickly this process will run its course.
You have opened a new tech center in Tel Aviv that focuses on digital vehicle and mobility services. What can we expect from this?
There is not one service that will define the mobility of the future. In Europe, for example, we have built up a very good mobility service provider with Car2Go. But we also saw that this service alone is not sufficient to meet all customer requirements. That’s why we established additional services such as Mytaxi and Moovel. Those three products, in combination with your own vehicle and Mercedes me, form an ecosystem that customers appreciate and value. Tel Aviv will not produce another specific service. Rather, we’re working there on basic technologies such as image recognition, sensor systems, security and data services. Those are important building blocks to secure and optimize existing services.
(Editor’s note: An abbreviated version of this interview was published on our website in early September)