
It seems that if you fill a classroom with robots that make mistakes, the kids get smarter. According to this New Scientist report, a Nao robot was operated by humans in the next room during an English class at a Japanese school. Yes, that is telepresence rather than true robotics, but the kids didn't know that. They played a learning game in which the English name for a shape was given, and the robot and kids would draw that shape. The kids appeared to learn faster when the robot made mistakes, because the children would then have to teach it to draw the correct shape for that word. That is scientific backing for the old adage that the best way to learn is to teach. Not only that, but the kids then wanted to continue learning with the robot, and would carry on studying longer and learning better as they did so. The results will be presented this year at Ro-Man, the 21st IEEE International Symposium on Robot and Human Interactive Communication, which runs from tomorrow, September 9th, through the 13th in Paris, France. The event is all about real-world results of humans and robots working and communicating together, and every year it contributes tremendously to the further development of robotics on both the software and hardware fronts. Take a look at the scheduled presentations to get an idea of the scope of this event. If you are thinking of building your own robot, this is a great place to absorb a real understanding of what is possible today and what is coming tomorrow.

The Mars Descent Imager camera, MARDI for short, took a series of 1600×1200 pixel resolution pictures during the descent and landing. As usual with things that happen so far away, the bandwidth of the downlink back to Earth was the choke point in retrieving the sequence, but now we have it. The original capture rate was 5 images per second, but this playback runs at 15 frames per second, so it takes noticeably less than the original seven minutes of terror (watch the second video for that one) to play back. Use the link to watch the video on YouTube if you want to see it in full 1080p resolution. Thanks to PetaPixel for the heads up on this one.

Here is a slightly different project created with the Robot Operating System at the core of its programming. While I think he might have wanted to spend a bit more time training his voice recognition interface before making this video, he did a wonderful job with this project. If you are curious about exactly how this works, he has lots more videos on his YouTube channel, plus he has uploaded his source code and hardware interfacing instructions to SourceForge. I am sure it will be no surprise to anyone that this project involves Astromech, the R2 Builders Club.

One of the modern holy grails of physics is the Higgs boson, AKA the God Particle, and several weeks ago CERN announced that they believed they had finally proven it exists. As with all such advances, some will only go so far as to admit that a Higgs-like particle has been identified, but even if it is not the elusive Higgs itself, a major step has been taken along the road to understanding how the universe works. As complex as both the question and the means of answering it are, there is a simple explanation, put forward by Assistant Professor Daniel Whiteson of the University of California, that anyone can understand. And just to make it easier to follow, this excellent animation was assembled; thanks to Open Culture for the heads up on this one.

The Higgs Boson Explained from PHD Comics on Vimeo.