I’ve been working with Professor Majercik to implement an Adaptive Resource Allocating Vector Quantizer (ARAVQ). Over the summer we got it working, and now this semester I’m experimenting with how it might be applied to RoboCup.
The ARAVQ was proposed by a cool guy named Fredrik Linaker in his PhD thesis. The basic idea: you take a whole bunch of ‘noisy’ world vectors (such as a robot’s vision, combined with the other sensor data we gather) and generate a small set of model vectors that summarize them. Depending on the parameters, the model set can be anywhere in 0 < N < (number of world vectors seen). This has been used to solve a couple of interesting learning problems, like the T-junction task, where a robot is shown a light on the left or right at the start of a corridor and later has to turn based on where the light was. We’re trying to see if this can be of any use to our robots for communicating essential state information quickly: since a dictionary of states is built up, a robot can just send an int that corresponds to a dictionary entry.
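To make the idea concrete, here’s a rough Python sketch of the core ARAVQ loop. The parameter names and the exact form of the novelty test are my paraphrase of Linaker’s algorithm, not code from our project:

```python
import numpy as np

class ARAVQ:
    """Sketch of an Adaptive Resource Allocating Vector Quantizer
    (after Linaker). Parameter names are illustrative guesses."""

    def __init__(self, buffer_size=5, novelty=0.2, learning_rate=0.0):
        self.n = buffer_size        # size of the moving input window
        self.delta = novelty        # novelty threshold for allocating a model vector
        self.alpha = learning_rate  # optional drift of the winner toward the input
        self.buffer = []            # the last n world vectors
        self.models = []            # the growing dictionary of model vectors

    def _avg_dist(self, v):
        # mean Euclidean distance from v to everything in the buffer
        return np.mean([np.linalg.norm(v - x) for x in self.buffer])

    def step(self, world_vector):
        """Feed one noisy world vector; return the index of the winning
        model vector (the int you'd send to a teammate)."""
        x = np.asarray(world_vector, dtype=float)
        self.buffer.append(x)
        if len(self.buffer) > self.n:
            self.buffer.pop(0)

        x_bar = np.mean(self.buffer, axis=0)  # smoothed recent input

        # Allocate a new model vector if the smoothed input fits the
        # buffer better than every existing model does, by at least delta.
        if not self.models:
            self.models.append(x_bar)
        else:
            best = min(self._avg_dist(m) for m in self.models)
            if self._avg_dist(x_bar) + self.delta <= best:
                self.models.append(x_bar.copy())

        # The winner is the model vector closest to the raw input.
        winner = min(range(len(self.models)),
                     key=lambda i: np.linalg.norm(self.models[i] - x))
        if self.alpha > 0:
            self.models[winner] += self.alpha * (x - self.models[winner])
        return winner
```

The point of the moving window is the noise tolerance: a single weird sensor reading won’t spawn a new dictionary entry, only a sustained shift in the input will.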
So far, the ARAVQ-defined states seem to work decently well in basic practice situations, but I’m still working on a few more complicated test runs.
More to come.
And here’s an essay about the beauty of programming by Linus Torvalds; it can be good to send to friends who don’t understand why we like it (from Bryn Mawr).