In only about five years, AI-driven smart assistants have become commonplace for many of us. We find them on websites, where they catch visitors' attention or engage potential customers; we find them integrated into connected speakers, mostly from Google, Amazon and Apple; and of course they are also present on many iOS and Android smartphones.
These assistants can be very useful, whether through a voice or a text interface, and as they become more sophisticated and efficient, our children will probably consider them a very natural way of interacting with digital services, possibly alongside other new haptic interfaces.
The Privacy Concern
There is an issue with this technology though: personal data protection. Use a digital assistant, and you can be sure that you are maximizing the amount of your personal data being processed, stored and analyzed somewhere in the cloud, most probably for the benefit of one of the Net Giants.
And with voice assistants it's even worse: who has never felt uncomfortable, while visiting a friend or attending a dinner, upon realizing that there is a connected speaker in the room that can potentially listen to everyone's conversation?
Add a camera to this and you really have “1984” at home 24/7!
Some connected speaker and assistant projects, such as MyCroft, have tried to offer an alternative with better privacy, and several initiatives for open source digital assistants have emerged over the past few years: Mozilla has started a great project called “Common Voice” that aims to build state-of-the-art Automatic Speech Recognition (and you can easily contribute to improving it by giving a few minutes of your voice), and we have seen several assistant frameworks appear, like Getleon.ai and Olivia.ai.
Towards an /e/ assistant?
Since the beginning of /e/, I have said that we would possibly try to do something in this field, because our users deserve a great assistant in /e/OS, and, why not, some day a connected speaker.
Last year I had the opportunity to meet a group of students (1) who were interested in working on this subject. They started by reviewing all the existing open source assistant technologies. Unfortunately, nothing was really production-ready yet, but we decided to keep working on this and build a very simple prototype. We had very simple goals: find the best assistant framework for our usage, and support 2 or 3 skills like “open an app”, “show me the weather in…” and “send an SMS to…”.
We also wanted something where new skills could easily be developed and added. It could be interesting to have a big repository of skills from which users could pick the skills they need and build their own custom assistant. And we wanted an Android build of the client software that could run on /e/OS.
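As a sketch of what such a pluggable skill system could look like, here is a minimal example in Go (the language of the backend). To be clear, the `Skill` interface, the skill types and the `Dispatch` function below are hypothetical illustrations of the idea, not Elivia's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// Skill is a hypothetical plug-in contract: each skill declares
// which commands it recognizes and how it responds to them.
type Skill interface {
	Matches(input string) bool
	Respond(input string) string
}

// WeatherSkill handles "show me the weather in ..." requests.
type WeatherSkill struct{}

func (WeatherSkill) Matches(input string) bool {
	return strings.Contains(strings.ToLower(input), "weather in")
}

func (WeatherSkill) Respond(input string) string {
	// Locate the city in the original input (case preserved).
	lower := strings.ToLower(input)
	idx := strings.Index(lower, "weather in") + len("weather in")
	return "Fetching the weather for " + strings.TrimSpace(input[idx:])
}

// OpenAppSkill handles "open ..." requests.
type OpenAppSkill struct{}

func (OpenAppSkill) Matches(input string) bool {
	return strings.HasPrefix(strings.ToLower(input), "open ")
}

func (OpenAppSkill) Respond(input string) string {
	return "Opening " + strings.TrimSpace(input[len("open "):])
}

// Dispatch walks the registered skills and lets the first match answer,
// so adding a new skill only means appending it to the slice.
func Dispatch(skills []Skill, input string) string {
	for _, s := range skills {
		if s.Matches(input) {
			return s.Respond(input)
		}
	}
	return "Sorry, I don't know how to help with that yet."
}

func main() {
	skills := []Skill{WeatherSkill{}, OpenAppSkill{}}
	fmt.Println(Dispatch(skills, "Show me the weather in Paris"))
	fmt.Println(Dispatch(skills, "Open the camera"))
}
```

With such an interface-based design, a community skill repository would simply be a collection of types implementing `Skill` that users compile or load into their own assistant.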
For this purpose, the team forked Olivia.ai, since it didn't fit all our expectations as-is, and finally they released “/e/livia”! Well, we didn't purchase the domain name, because it surely is expensive, and we're not sure about the final name yet!
Of course, this is still a first, limited proof of concept, and it lacks some of the components that are mandatory to create a state-of-the-art voice assistant, such as elaborate NLP (Natural Language Processing), good ASR (Automatic Speech Recognition), and probably some neural network processing for deep learning…
However, you can already have a look at Elivia: download the source code, build the software, test it…
Everything (backend in Go and frontend in Kotlin) is at: https://gitlab.e.foundation/e/elivia/
Feel free to suggest and comment on this thread at our community website.
And never forget: Your data is YOUR data!
(1) They are Théo, Luca, Paul, Loïc and Tom from the PoC R&D Center