I am officially the wiz at I/O for the Aperios Operating System. I've just completed most of the internal work to make various OPEN-R objects (our modules: Vision, Motion, Communication) talk to each other in exactly the way I want them to, which is good! The motion system now tells the Vision module (and our Brain) its current state and the current joint values every frame. This is a basic building block for better and better behaviors.
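A per-frame motion-state broadcast like the one described could be sketched roughly like this. All the names here (`MotionStateMessage`, `NUM_JOINTS`, the joint ordering) are my own illustration, not the actual OPEN-R message types:

```cpp
#include <cstddef>

// Hypothetical sketch of the per-frame message Motion sends to Vision and
// the Brain. Every name here is illustrative, not a real OPEN-R type.
const std::size_t NUM_JOINTS = 20;  // assumed joint count, for illustration

enum MotionState { MOTION_IDLE, MOTION_WALKING, MOTION_KICKING };

struct MotionStateMessage {
    unsigned long frame;              // frame counter
    MotionState   state;              // what the motion system is doing
    double        joints[NUM_JOINTS]; // current joint values (degrees)
};

// A receiver (Vision or the Brain) would read fields out of the message
// each frame; here, assume index 0 is the head pan joint.
double headPan(const MotionStateMessage& msg) {
    return msg.joints[0];
}
```

The point of the shared struct is just that every module sees the same snapshot of the motion system each frame, rather than querying joints piecemeal.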
Hey, I just commented out an errant AssertReady I found in the TCP protocol code that deals with AiboConnect, and suddenly the framerate, even when you run our full vision processing, is significantly faster. This is quite cool.
It will be over a little after 4:30pm. A few demonstrations today; we'll then switch over and talk exclusively about the brain.
I’ve been working with professor Majercik to implement an Adaptive Resource Allocating Vector Quantizer (ARAVQ). Over the summer we’ve gotten that working, and now this semester I’m experimenting with how it could possibly work with robocup.
The ARAVQ was proposed by a cool guy named Fredrik Linaker in his PhD thesis. Basically, you take a whole bunch of 'noisy' world vectors (such as a robot's vision, combined with the other data we gather) and generate a small set of model vectors. Depending on parameters, the size of the model set can be 0 < N < (size of the world vector). This has been used to solve a couple of interesting learning problems, like the T-junction task (where, at the beginning of a junction, a robot is shown a light on the left or right, and later on needs to turn based on where the light was). We're trying to see if this can be of any use to the robots in communicating essential state quickly: since a dictionary of states is built, you can send an int that corresponds to a dictionary entry.
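The core idea can be sketched as a simple novelty-based quantizer. Note this is a deliberately simplified version of my own (Linaker's actual ARAVQ uses a moving-average input buffer and a stability criterion before adding a model vector); the class and parameter names are invented for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified sketch of the ARAVQ idea: map each noisy input vector to the
// index of its nearest model vector, adding a new model vector (a new
// dictionary entry) whenever no existing one is close enough.
class SimpleQuantizer {
public:
    explicit SimpleQuantizer(double noveltyThreshold)
        : threshold(noveltyThreshold) {}

    // Returns the dictionary index for this input; may grow the dictionary.
    std::size_t quantize(const std::vector<double>& input) {
        std::size_t best = 0;
        double bestDist = 1e300;
        for (std::size_t i = 0; i < models.size(); ++i) {
            double d = distance(models[i], input);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        if (models.empty() || bestDist > threshold) {
            models.push_back(input);  // novel input: new model vector
            return models.size() - 1;
        }
        return best;  // familiar input: reuse existing dictionary entry
    }

    std::size_t dictionarySize() const { return models.size(); }

private:
    static double distance(const std::vector<double>& a,
                           const std::vector<double>& b) {
        double sum = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return std::sqrt(sum);
    }

    double threshold;
    std::vector<std::vector<double> > models;
};
```

The communication payoff is exactly what the paragraph above describes: once the dictionary is built, a robot only has to transmit the small integer index, not the whole world vector.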
So far, it seems that the ARAVQ-defined states can work decently well in basic practice situations, but I'm still working on a few more complicated test runs.
More to come.
And here's an essay about the beauty of programming by Linus Torvalds. It can be good to send to friends who don't understand why we like it (from Bryn Mawr).
Sad day! The home of the OPEN-R SDK is gone, and the Aibo itself will soon be departing as well.
I'm in here late on a Saturday, cursing the OPEN-R SDK and its inter-objectivity, and trying to make life for our team a lot simpler: one object. One object to rule them all. It's a risk: things could run slower, compilation could take longer, and I might not even be able to make it work. I've gotten one module into the one-module framework. Now it's time for the second.
UPDATE::Brought in the Chlaos, our motion module. Boots up, and even stands up. AiboConnect works, but not for motions yet. Promising…
UPDATEx2::Motions are wickedly choppy. Bleak.
UPDATEx3::Motions work fine if I don't initialize sensors/image. Time to work it out. —Now works when initializing Sensors.
UPDATEx4::Seems like a memory-management issue. Turning off some of the initializing mallocs (the big image arrays) speeds things up considerably.
I have written a protected page describing how to use AiboConnect and what all of the different joint angles are for the head and legs. In addition, I describe the head literal and the motion engines. Soon I will put up diagrams to go along with the instructions, which should simplify the descriptions considerably.
I’ve fulfilled my life-long dream to make a really smart message parser for AiboConnect’s server side. W00t!! Also, fixed the ‘timed’ head motions so that they, you know, work. Anyone know how timer functions work on Aperios?
Want to know what all the * and & do in all our C code? Read this really good introduction (or refresher) to C and C++ pointers.
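As a tiny companion example (my own, not from the linked introduction), here is roughly what * and & buy you:

```cpp
// & on a variable takes its address; * on a pointer dereferences it,
// so changes made through the pointer hit the original variable.
void doubleInPlace(int* p) {
    *p = *p * 2;   // read and write the int that p points at
}

// In C++, & in a declaration instead makes a reference: an alias for
// the caller's variable, used without any explicit dereferencing.
void tripleInPlace(int& r) {
    r = r * 3;
}
```

Same effect either way; the pointer version makes the indirection explicit at the call site (`&x`), while the reference version hides it.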
Upgraded, finally, all of the machines to the OPEN_R_SDK-1.1.5-r5 pack.