AI in eLearning – Why Virtual Tutors with a Webcam Are the Better Coach

2017 has been called the year of chatbots: futurists tell us how chatbots will revolutionise Learning & Development, and your grandma has probably started using Alexa in her living room. Experts predict that AI will become the leading driver of education by the 2020s. We know FAQ chatbots from websites, and we are talking to bots on our phones more than to people. But how can we use those virtual assistants, and even AI, in our next online course, micro-training session or MOOC?

Enter Intelligent Tutoring Systems. Finally, the tech is maturing beyond simple chatbots.

We are not talking about a pre-produced talking avatar guiding you through your average online course. Nor are we talking about a voice assistant like Siri, Alexa or Cortana that searches Google on your command for knowledge from Wikipedia (and stores our data who-knows-where). We are talking about a virtual assistant, a chatbot, that is your interface to artificial intelligence, has a live animated face and body when needed (in Skype or in virtual worlds) and is the best coach, trainer or teacher you could wish for. Yes, I am talking about an ideal world, but we are getting there!

Intelligent Tutoring Systems are the CUI for eLearning & Training

Conversational User Interfaces (CUI) are already starting to replace our good old graphical UI (buttons, touch and click). We have known this for decades – Captain Kirk and his crew always talked to the Enterprise, and that was in the nineteen sixties! These sophisticated chatbots are certainly an interesting addition to the toolbox of online learning designers. The tech is advancing fast: Botanic Technologies, for example, has already developed multimodal avatars (text, voice and video) that act as advisors to medical personnel or as your personal job interview coach on Skype. One digital coach, an animated character named Andi, actually performs sentiment and emotion analysis: during a Skype video chat, it analyses your facial expressions via webcam, your tone of voice and your choice of words in order to prepare you for a successful job interview. Here’s an example:

Watch Andi, the job interview coach, in more detail on Vimeo.
Or rather: Let Andi watch and analyse how you are doing …
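
To give a rough sense of what the text half of such an analysis involves, here is a minimal sketch using the open-source VADER sentiment analyser. This is of course not Botanic’s actual pipeline; the facial score is just a placeholder for whatever a webcam-based expression model would return.

```python
# Minimal sketch of multimodal interview feedback: text sentiment plus a
# placeholder "facial" score. Assumes `pip install vaderSentiment`; in reality
# facial_score would come from a webcam-based expression model.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer


def interview_feedback(answer: str, facial_score: float) -> str:
    """Combine text sentiment (-1..1) with a facial-expression score (-1..1)."""
    text_score = SentimentIntensityAnalyzer().polarity_scores(answer)["compound"]
    combined = 0.6 * text_score + 0.4 * facial_score  # arbitrary example weighting
    if combined > 0.3:
        return "Confident and positive - keep that up."
    if combined < -0.3:
        return "You sound (and look) tense - try rephrasing more positively."
    return "Neutral impression - add a concrete example to liven it up."


print(interview_feedback("I really enjoyed leading that project.", facial_score=0.5))
```

The point is simply that each modality is scored separately and then blended into one piece of coaching feedback; the hard part is producing reliable scores in the first place.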

Your Personal Virtual Tutor, 24/7:
Never Tired, Never Out of Office

That’s why the virtual, all-knowing assistant is a better tutor. Think “personal mobile virtual coach”: your personal learning tutor who lives in the net and assists your learning wherever you are – as a chatbot or voice assistant on your phone, or as a 3D character in your favourite virtual learning environment. Best of all, these tutors don’t have office hours, and thousands of students could benefit from one well-designed conversational bot. MOOC providers know what I am talking about – they’d give anything to clone good tutors for a 24/7 service.

But who can actually develop a chatbot for eLearning environments?

How does that even work? Most of the non-coding chatbot creation platforms (Chatfuel, Botsify) offer marketing, FAQ and customer service templates. I created my first virtual teachers, with tiny avatars and synthetic voices, more than ten years ago (with Pikkubot in Second Life, SitePal and presenters from Media Semantics), and unfortunately I don’t see a lot of advancement in the educational area since then. To integrate a virtual tutor like the example in the video above, you’d need to hire a company to build your sophisticated coach – which is out of the question in most L&D projects. Or you dig deeper and try to create your own dialogue flow in Flow.ai or with IBM Watson. But you already have a profession, and who has time to build virtual tutors from scratch?
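
To make “dialogue flow” a bit more concrete, here is a toy sketch of the kind of intent matching such a flow boils down to – plain keyword rules, not the actual Flow.ai or Watson Assistant APIs, which add trained intent classifiers and visual editors on top of this idea:

```python
# A toy tutor dialogue flow: match the learner's message to an intent by
# keywords and reply accordingly. Real platforms do this with trained intent
# classifiers and graphical flow editors rather than hand-written rules.
INTENTS = {
    "greeting":   (["hello", "hi", "hey"],            "Hi! Ready for today's micro-lesson?"),
    "definition": (["what is", "define", "explain"],  "Good question - here is a short definition ..."),
    "quiz":       (["quiz", "test me", "practice"],   "Okay, first question: what does CUI stand for?"),
    "goodbye":    (["bye", "see you", "thanks"],      "Well done today. See you tomorrow!"),
}


def tutor_reply(message: str) -> str:
    text = message.lower()
    for keywords, reply in INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return reply
    return "Sorry, I did not get that. Try asking for a definition or a quiz."


if __name__ == "__main__":
    for msg in ["Hello!", "Can you test me?", "Thanks, bye"]:
        print(f"Learner: {msg}\nTutor:   {tutor_reply(msg)}")
```

Even this crude version shows why good tutors are hard to clone: the dialogue design, not the plumbing, is where all the pedagogical work sits.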

What to do? Well, I have chosen to take it one step at a time and have joined forces with an interesting company called SEED Vault. They want to make the world a better place by building a massive open-source bot economy based on the blockchain. Independent developers and learning designers need access to AI-powered virtual assistants with a variety of ready-to-go templates, as well as to open, shared standards we can all use. It doesn’t make sense for a few companies to own the entire AI & CUI market (like Facebook, Amazon, Microsoft, Google, Apple …). Also, educators need to be very aware of data privacy – which you don’t get from these big corporations. They store every word your students utter in a black box in exchange for their services. Bots and AI need to be transparent and verified – and that is where blockchain technology comes in.

SEED: A Garden of Eden for Bot Builders, Designers, Deployers and Educators on the Blockchain

SEED actually emanated from Botanic (the creators of Andi, the interview coach), and other global bot communities are joining them as we speak. I worked with Mark Stephen Meadows, the founder of Botanic, years ago and have been following his work ever since. Now I’m excited to work with SEED to democratise AI, keeping my focus on education, eLearning and virtual training. I am responsible for managing bot community initiatives – if you are a bot developer, dialogue designer or author, please let me know. The SEED community needs you to speed up the evolution of conversational user interfaces so we can finally have ourselves a couple of wise-cracking, charming virtual tutors in our next online course 🙂

SEED Token on Telegram and on Twitter

Contact me on LinkedIn

XING

 


Face and head tracking for Second Life avatars – Massively

We’re seeing some really nice developments that are important for the use of virtual worlds as a place of collaboration and other social functions: non-verbal communication such as facial expressions (a smile, surprise, scorn) or body language like a nod of your head has been missing so far. Now sl.vr-wear.com offers a beta viewer for Second Life that uses a webcam to track your head and facial expressions and acts them out with your avatar.

To date, people have adopted various new forms of social behaviour in immersive 3D worlds like Second Life, but the point is that body language and sudden emotions on your face are unconscious behaviour. While typing “lol” is second nature to most of us by now, it’s still different when you’re suddenly appalled or delighted.

sl.vr-wear.com supposedly shows these kinds of emotions immediately – and therefore genuinely – on your avatar’s face as well. All you need is a webcam and their special SL viewer, available for Windows and Mac OS X.
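
For the technically curious: the basic idea of turning webcam input into avatar signals can be prototyped today with open-source tools. A minimal sketch, assuming opencv-python and mediapipe are installed – this is not VR-Wear’s viewer code, just the same principle of mapping face landmarks to an avatar action:

```python
# Sketch: track a face via webcam and print a rough "nod" signal that an
# avatar could act out. Uses MediaPipe Face Mesh landmarks; illustrative only.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # default webcam

baseline_y = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        nose_y = results.multi_face_landmarks[0].landmark[1].y  # nose tip (normalised)
        if baseline_y is None:
            baseline_y = nose_y
        if nose_y - baseline_y > 0.03:          # head dipped noticeably
            print("avatar: nod")                # here you would trigger an animation
        baseline_y = 0.9 * baseline_y + 0.1 * nose_y  # slowly adapt the baseline
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

A real viewer does far more (expression classification, smoothing, retargeting onto the avatar rig), but the pipeline – webcam frame, landmarks, avatar command – is the same.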

Found via: http://www.massively.com/2008/12/01/face-and-head-tracking-for-second-life-avatars/

UPDATE: I couldn’t get it to work on my MacBook Pro, and I would like to know if anybody else has had more luck on a Mac. But I found this Seesmic video showing how simply it will work (once it works):

VRW – SL Viewer mod – 0.99 beta 1 public release

Another way to talk to machines: Jacking into the brain

Photo: Bill Diodato, Scientific American

While people still have to gesture wildly in front of a giant computer display when they use a human-machine interface à la Minority Report (see previous article), futurist Ray Kurzweil’s dream has rather been to jack directly into the brain (in order to upload it to the net and live forever, but that’s another story, called the Singularity).

Scientific American recently ran an article about current research in neurotechnology regarding brain-machine interfaces. The first page drags a little, recounting the crazy ideas of science-fiction authors from the past 30 years, but the rest of the article covers current research programmes such as using the brain as an interface for prosthetics, steering through virtual worlds by thought alone or improving our memory with an artificial hippocampus. If you don’t have time, skip to page 4 of the article. Full article on Scientific American: Jacking into the Brain – Is the Brain the Ultimate Computer Interface?

Although we apparently still don’t know jack about jacking into the brain and doing something really useful with it, there is the next best thing, which you can order now (shipped to US addresses only, by the end of the year, for 299 US$): the Emotiv EPOC headset taps your neurons from the outside and translates your intentions, facial expressions and emotions into commands for 3D games and virtual worlds. Their technology also lets you steer a wheelchair using mind control alone (video). Spooky, huh? And damn useful if it works. Here’s a video showing how it works with games.

 

No more typing lol (laugh out loud)

Mac support is planned but scheduled for later – “the market conditions dictate that Windows comes first” is what Jonathan Geraci from the Emotiv team told me in July. But they offer an open API set for developers, so the range of supported games and virtual 3D platforms should become impressive.
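
I haven’t tried the developer API myself yet, so the following is purely hypothetical: a sketch of how detected expressions or trained mental commands might be mapped to in-world actions. HeadsetEvent and the fake event stream are invented for illustration and are not the real Emotiv API.

```python
# Hypothetical sketch: map events from a neuro-headset to avatar/game commands.
# HeadsetEvent and the fake stream are invented stand-ins, not the Emotiv API.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class HeadsetEvent:
    kind: str         # e.g. "smile", "blink", "push" (a trained mental command)
    intensity: float  # 0.0 .. 1.0


COMMANDS = {"smile": "avatar_smile", "blink": "avatar_blink", "push": "move_forward"}


def dispatch(events: Iterable[HeadsetEvent], threshold: float = 0.5) -> list[str]:
    """Turn sufficiently strong headset events into game commands."""
    return [COMMANDS[e.kind] for e in events
            if e.kind in COMMANDS and e.intensity >= threshold]


fake_stream = [HeadsetEvent("smile", 0.8), HeadsetEvent("push", 0.3),
               HeadsetEvent("push", 0.9)]
print(dispatch(fake_stream))  # ['avatar_smile', 'move_forward']
```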

Read more about the sensor-laden headset, or order it now if you are living in Obama land, at the Emotiv website. (No chance for the rest of the world without a US address yet.)

Minority Report For Real – Human-Machine Interface in action


Demonstration of Oblong’s g-speak spatial operating environment.

Some of the core ideas are already familiar from the film Minority Report, where Tom Cruise’s character performed forensic analysis using massive, gesturally driven displays. The similarity is no coincidence: one of Oblong’s founders served as science advisor to Minority Report and based the design of those scenes directly on his earlier work at MIT. More about the technology: oblong industries, inc.

Augmented Reality in education: How kids learn Mandarin with a book, a cell and a panda

A book with special embedded pictures/codes and a cell phone with a camera is all you need to learn Mandarin from a 3D panda

I have done a little research on Augmented Reality (AR) lately, and this is one of the few really useful examples of AR solutions for books that I have found (unfortunately I couldn’t find a video of it).

Here is how it works: the book designers have embedded cues (a graphic or a code) into the illustrations, and the software on your cell phone reacts to them. In this example, the children’s book shows several Chinese characters, and if you point your phone’s camera at the page, a small 3D cartoon panda comes to life on your mobile’s display and says the character in English and then in Mandarin. This way the child doesn’t need a computer to use learning software but still gets the advantages of modern media – animation, interaction, sound – when (and where) needed.
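
Marker-based AR of this kind is surprisingly accessible nowadays. Here is a minimal sketch of the recognition step using OpenCV’s ArUco markers – my own illustration of the general technique, not the software described in the article (assumes opencv-contrib-python is installed):

```python
# Sketch: detect a printed ArUco marker in a camera frame and report where the
# 3D content (e.g. the talking panda) should be anchored. Illustrative only.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            x, y = quad[0].mean(axis=0)  # centre of the marker in the image
            # In a real app you would now render the panda at this position
            # and play the English/Mandarin audio for this character.
            print(f"Marker {marker_id}: anchor 3D panda at ({x:.0f}, {y:.0f})")
    else:
        print("No marker found - point the camera at the book page.")
```

Commercial “magic book” systems typically hide the marker inside the artwork or use natural-image recognition instead of visible codes, but the detect-then-anchor loop is the same.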

These kinds of “magic books” will become available by the end of this year.

Learn more about the current commercial development in the SCIAM article here: Augmented Reality Makes Commercial Headway: Scientific American

Or have a look at my new AR YouTube Playlist with many different examples of applied Augmented Reality.

Ray-Ban offers Augmented Reality service to sell glasses online

Ray-Ban Virtual Mirror

Augmented Reality steps into our living room with this new web service from Ray-Ban: all you need is a webcam and their software (PC only). Then you pick some glasses from the online catalogue and see yourself wearing them in a virtual mirror. It works especially well with sunglasses, since for the first time you see yourself just as others do. This is what I call a successful implementation of innovative technology for business purposes.
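
The basic trick behind such a virtual mirror can be reproduced with a webcam, face detection and a transparent PNG of some glasses. A rough sketch, not Ray-Ban’s software – glasses.png is a placeholder image with an alpha channel:

```python
# Sketch of a "virtual mirror": detect the face in a webcam frame and paste a
# transparent glasses image over the eye region. Illustrative only.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
glasses = cv2.imread("glasses.png", cv2.IMREAD_UNCHANGED)  # RGBA placeholder
assert glasses is not None, "provide a glasses.png with an alpha channel"

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        gw, gh = w, int(glasses.shape[0] * w / glasses.shape[1])
        resized = cv2.resize(glasses, (gw, gh))
        top = y + h // 4                        # roughly the eye line
        if top + gh > frame.shape[0] or x + gw > frame.shape[1]:
            continue                            # skip faces too close to the border
        roi = frame[top:top + gh, x:x + gw]
        alpha = resized[:, :, 3:] / 255.0       # use the PNG's transparency
        roi[:] = (1 - alpha) * roi + alpha * resized[:, :, :3]
    cv2.imwrite("virtual_mirror.jpg", frame)    # your "mirror" snapshot
```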

via Ray-Ban Official Website (click on “Virtual Mirror” on the banner in the middle)

Finally: Built-in Lip Sync in Second Life

When it comes to communication in virtual 3D worlds, especially when collaborating with others, the lack of the non-verbal communication we are used to is still a major shortcoming. Second Life has long had (commercial) add-ons for this, which, for example, generate mouth movements to match typed text chat or voice chat (plus ready-made gesture and facial-expression sequences for all kinds of situations, and even animated sculpty faces), but especially for newcomers, so-called noobs, it is tedious to track down such add-ons (quite apart from the extra cost). Via SLtalk I have now learned that lip sync is already built into the current Second Life test viewer (version 1.20.14). You have to enable it first (show the Advanced menu with Shift-Alt-Cmd-D, then select Character/Enable lip sync Beta), but I assume it will be on by default in the future. And this is what it looks like:


Automatic lip movements during voice chat in Second Life (unfortunately not really in sync).

If you combine this with facial-expression and eye-tracking solutions, which are gradually becoming more affordable, using virtual worlds to support international working groups, meetings and other forms of collaboration becomes more and more realistic for the participants.

Here are a few examples of developments heading in that direction:
http://www.ioct.dmu.ac.uk/projects/eyegaze.html
http://de.youtube.com/watch?v=UUeqrYEzNi4
http://gazeinteraction.blogspot.com/2008/07/eye-tracking-using-webcamera.html

Full-body input is also already being worked on in German research: young researchers at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken have already surfed through Google Earth and Second Life using the Nintendo Wii Balance Board and documented it on YouTube.