The iRobis Brainstorm® technology facilitates and accelerates the development of humanoid self-learning robots – like this little fella. HR-2 walks, talks, imitates and "evolves". He is already 3 years old by now (he was a prototype developed within 3 months with the help of an early Brainstorm® system), but the video about HR-2's abilities is still worth watching. Want! [picture: Almir Heralic]
From the press release: "The Institute of Robotics in Scandinavia (iRobis) has announced that the world's first 'complete cognitive software system for robotics' is ready for application. The system turns robots into self-developing, adaptive, problem-solving, 'thinking' machines. Brainstorm® automatically adapts to on-board sensors and actuators, immediately builds a model of any robot on which it is installed, and automatically writes control programs for the robot's movements. It can then explore and model its environment. Through simulated interaction using these models, it solves problems and develops new behavior using 'imagination.' Once it has 'learned' to do something, it can use its imagination to adapt its behavior to a wide range of circumstances. A methodology known as genetic programming (GP) is 'the trick' that makes it all possible. GP is an automated programming methodology inspired by natural evolution that is used to evolve computer programs."
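To give a feel for what genetic programming means in practice – this is a toy illustration only, not iRobis code; every function name and parameter below is my own invention – here is a minimal GP sketch in Python. A population of arithmetic expression trees is evolved by selection, crossover and mutation toward a hidden target function (classic symbolic regression):

```python
import random
import operator

random.seed(42)

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth=3):
    # A leaf is the variable 'x' or a small constant; an internal
    # node is a tuple (operator, left subtree, right subtree).
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(0, 4)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of squared errors against the hidden target x^2 + x
    # over a few sample points (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def nodes(tree, path=()):
    # Yield the path (sequence of child indices) to every node.
    yield path
    if isinstance(tree, tuple):
        for i, sub in enumerate(tree[1:], 1):
            yield from nodes(sub, path + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, new):
    if not path:
        return new
    parts = list(tree)
    parts[path[0]] = replace(parts[path[0]], path[1:], new)
    return tuple(parts)

def crossover(a, b):
    # Replace a random subtree of a with a random subtree of b.
    pa = random.choice(list(nodes(a)))
    pb = random.choice(list(nodes(b)))
    return replace(a, pa, get(b, pb))

def mutate(tree):
    # Replace a random subtree with a fresh random one.
    path = random.choice(list(nodes(tree)))
    return replace(tree, path, random_tree(2))

def evolve(pop_size=60, generations=30):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 3]  # keep the fittest third
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = crossover(a, b)
            if random.random() < 0.2:
                child = mutate(child)
            if sum(1 for _ in nodes(child)) > 40:  # cap bloat
                child = random_tree()
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A real system like the one described in the press release evolves robot control programs against simulated sensor and actuator models rather than toy arithmetic, but the loop – evaluate, select, recombine, mutate – is the same idea.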
You can’t download their system right away – instead, iRobis is looking for high-potential partners, companies and researchers with whom they can develop prototypes such as toys or household robots. Please hurry – I really want some kind of little robot fella on my desk, or maybe a slightly larger one who can get me a coffee or pizza from the kitchen.
Here’s the full iRobis press release
Digg deeper: More information about the technology and history behind iRobis Brainstorm® by Roger F. Gay
"A friendly collaboration between humans and robots is not always easy. Either robots work efficiently and far from humans in controlled environments, or they're equipped with lots of sensors so they can work alongside humans without harming them. Now an EU-funded project, Phriends – short for 'Physical Human-Robot Interaction: DepENDability and Safety' – has started to force robots to respect Asimov's laws. In a nutshell, these laws say that robots cannot cause harm to humans and that they have to obey us. This 3-year project will end next year and has received €2.16 million from the EU. (…) The robots developed by Phriends will be intrinsically safe, since their safety is guaranteed by their very physical structure, and not by external sensors or algorithms that can fail. (…)"
Read the full article here: Can robots become our 'phriends'? >>
Watch this. The future is now. 'Emily' will set a new precedent for photo-realistic characters in video games and films, says her creator, Image Metrics. Now imagine that realism backed by natural-language, speech-enabled software robots… and we've got ourselves a useful android. Article: Lifelike animation heralds new era for computer games – Times Online.
Plymouth University researchers will build two robots using hardware and software that allow them to interact with humans and with each other, exchanging learned information the way humans do. The robots are equipped with cameras, speakers, microphones and tracking devices in order to learn about nonverbal communication (gestures, pointing) and the meaning of words, just as children do. The goal of the project is to teach concepts to the robots, including the meaning of words, and later enable them to teach each other. The robots will then use the Internet as a medium to interact, no longer limited by the slow real world for "show and tell" teaching. Nice!
Teaching language to robots – let them learn like kids do and then teach each other >>
Article (old but still interesting) on KurzweilAI.net: Artificial Intelligence Meets the Metaverse: Teachable AI Agents Living in Virtual Worlds by Ben Goertzel, Oct. 2007. I would like to know what happened to the ESC project and its virtual intelligent pets.
Virtual Worlds as the Catalyst for an AI Renaissance
How Intelligent Have Second Life AIs Become? [New World Notes]
The answer is: not very. Hamlet Au describes his conversation with a "robotar", a computer-controlled avatar and chatbot. He was mostly impressed by the nonverbal movements the helper bot made – she turned in his direction, for example – but there are bot applications and services available today that already allow these things (Pikkubot, for example). I should tell him about it, but right now I've got to catch a movie 🙂
Justin Gibbs wrote an interesting article about NPCs and their role in our future digitized life and I couldn’t agree more with his thoughts.