Category Archives: Behaviors

Early Grabbing Behaviors

Though a far cry from being useful in an actual game, I am still to this day impressed with these slow, ineffective behaviors. I believe this was done in early March 2006.

Considering the vision system was running with the worst camera settings (everything was really blurry) and at about four frames a second (instead of the 30 it runs at now), I think these simple, slow grabs are actually pretty impressive. Moreover, all the behavior code was written in C++, meaning that we literally had to restart the robot every time we wanted to make a change. Ah, how crappy the process was back then.

Role switching on its way

I’ve set up the foundations. It only works sporadically as of now, but it has great promise. It is certainly necessary to perfect this, since in the full game test we ran today all three dogs converged on the ball, got stuck against each other, and couldn’t continue playing until we pulled them apart. In a real game they would have all been penalized for 30 seconds… Not a good thing!

Before a dog goes into the approach state, it considers whether another dog is closer to the ball. If there is one, it stays put. This is all based on localization data communicated by the other dogs on the field. Eventually, we could give the idle bystanders some other work to do while a single robot approaches the ball.
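Sketched in the C++ of our behavior code, the check might look something like this (TeammateInfo, shouldApproachBall, and the coordinate plumbing are made up for illustration; the real data would come from the communicated localization packets):

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-in for a teammate's communicated localization info.
struct TeammateInfo {
    float x, y;   // teammate's estimated position on the field
    bool active;  // false if the packet is stale or the dog is penalized
};

static float distanceTo(float x1, float y1, float x2, float y2) {
    const float dx = x2 - x1;
    const float dy = y2 - y1;
    return std::sqrt(dx * dx + dy * dy);
}

// Returns true if this dog should enter the approach state, i.e. no
// active teammate is closer to the ball than we are.
bool shouldApproachBall(float myX, float myY,
                        float ballX, float ballY,
                        const std::vector<TeammateInfo>& teammates) {
    const float myDist = distanceTo(myX, myY, ballX, ballY);
    for (const TeammateInfo& mate : teammates) {
        if (mate.active &&
            distanceTo(mate.x, mate.y, ballX, ballY) < myDist) {
            return false;  // someone else is closer -- stay put
        }
    }
    return true;
}
```

A real version would also need a tie-breaker (player number, say), so two dogs with nearly equal distance estimates don’t both hold back or both charge the ball.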

Inter-Objectivity

I am officially the wiz at inter-object communication for the Aperios operating system. I’ve just completed most of the internal work to make our various OPEN-R objects (our modules: Vision, Motion, Communication) talk to each other in exactly the way I want them to, which is good! The Motion system now tells the Vision module (and our Brain) the current state of the motion system and the current joint values every frame. This is a basic building block for better and better behaviors.
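From memory of the OPEN-R SDK samples, the subject/observer plumbing looks roughly like this. Treat it as a sketch, not our actual code: JointStateData, sbjJointState, and the handler names are all hypothetical; only the SetData/NotifyObservers/AssertReady calls are the stock OPEN-R interface.

```cpp
#include <OPENR/OObject.h>
#include <OPENR/OSubject.h>
#include <OPENR/OObserver.h>

// Hypothetical payload: what Motion broadcasts each frame.
struct JointStateData {
    int   motionState;      // current state of the motion system
    float jointValues[18];  // latest value for each joint
};

// Sending side, inside the Motion object. (subject[] is the array
// the OPEN-R stub generator declares for each module.)
void Motion::BroadcastJointState(const JointStateData& data)
{
    subject[sbjJointState]->SetData(&data, sizeof(data));
    subject[sbjJointState]->NotifyObservers();
}

// Receiving side, inside Vision (or the Brain). The handler is wired
// up to the sender via the module's stub.cfg.
void Vision::NotifyJointState(const ONotifyEvent& event)
{
    const JointStateData* data =
        static_cast<const JointStateData*>(event.Data(0));
    // ...update our world model from data->motionState and
    //    data->jointValues here...

    // Tell Motion we are ready for the next frame's message.
    observer[event.ObsIndex()]->AssertReady(event.SenderID());
}
```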

News from the other Side

I’ve been working with Professor Majercik to implement an Adaptive Resource Allocating Vector Quantizer (ARAVQ). Over the summer we got it working, and this semester I’m experimenting with how it could be applied to RoboCup.

The ARAVQ was proposed by a cool guy named Fredrik Linaker in his PhD thesis. Basically, you take a whole bunch of noisy world vectors (such as a robot’s vision output, combined with the other data we gather) and generate a small set of model vectors from them; depending on the parameters, the number of model vectors can range from a handful up to nearly the number of distinct world vectors. This has been used to solve a couple of interesting learning problems, like the T-junction task (where, at the beginning of a junction, a robot is shown a light on the left or right, and later on needs to turn based on where the light was). We’re trying to see if this can be of any use to the robots for communicating essential state quickly: since a dictionary of states is built, you can send a single int that corresponds to a dictionary entry.
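For the curious, here is a heavily simplified sketch of the ARAVQ update, following Linaker’s description: a sliding buffer of recent inputs, a novelty threshold delta, and a learning rate epsilon. The class and all names are mine, assuming Euclidean distance; this is an illustration, not our actual implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <deque>
#include <limits>
#include <vector>

class ARAVQ {
public:
    ARAVQ(std::size_t bufferSize, double delta, double epsilon)
        : bufferSize_(bufferSize), delta_(delta), epsilon_(epsilon) {}

    // Feed one noisy world vector. Returns the index of the winning model
    // vector (the small int we could send instead of the whole vector),
    // or -1 while the input buffer is still filling.
    int process(const std::vector<double>& input) {
        buffer_.push_back(input);
        if (buffer_.size() > bufferSize_) buffer_.pop_front();
        if (buffer_.size() < bufferSize_) return -1;

        // Buffer deviation from its own mean vs. mismatch against models.
        std::vector<double> mean = bufferMean();
        double stability = avgDistToVector(mean);
        double mismatch  = models_.empty()
            ? std::numeric_limits<double>::infinity()
            : avgDistToModels();

        // Novelty criterion: the buffer is internally stable but badly
        // matched by every existing model -> allocate a new model vector.
        if (stability + delta_ <= mismatch) models_.push_back(mean);

        // Classify the input and nudge the winner toward it.
        int winner = nearest(input);
        for (std::size_t i = 0; i < input.size(); ++i)
            models_[winner][i] += epsilon_ * (input[i] - models_[winner][i]);
        return winner;
    }

private:
    static double dist(const std::vector<double>& a,
                       const std::vector<double>& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            s += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(s);
    }

    std::vector<double> bufferMean() const {
        std::vector<double> m(buffer_.front().size(), 0.0);
        for (const std::vector<double>& x : buffer_)
            for (std::size_t i = 0; i < m.size(); ++i) m[i] += x[i];
        for (double& v : m) v /= buffer_.size();
        return m;
    }

    // Average distance from the buffered inputs to one vector.
    double avgDistToVector(const std::vector<double>& v) const {
        double s = 0.0;
        for (const std::vector<double>& x : buffer_) s += dist(x, v);
        return s / buffer_.size();
    }

    // Average distance from the buffered inputs to their nearest model.
    double avgDistToModels() const {
        double s = 0.0;
        for (const std::vector<double>& x : buffer_)
            s += dist(x, models_[nearest(x)]);
        return s / buffer_.size();
    }

    int nearest(const std::vector<double>& x) const {
        int best = 0;
        double bestD = std::numeric_limits<double>::infinity();
        for (std::size_t j = 0; j < models_.size(); ++j) {
            double d = dist(x, models_[j]);
            if (d < bestD) { bestD = d; best = static_cast<int>(j); }
        }
        return best;
    }

    std::size_t bufferSize_;
    double delta_, epsilon_;
    std::deque<std::vector<double>> buffer_;
    std::vector<std::vector<double>> models_;
};
```

The int returned by process() is exactly the dictionary index we could send over the wireless in place of the full world vector.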

So far, it seems that the ARAVQ-defined states can work decently well in basic practice situations, but I’m still working on a few more complicated test runs.

More to come.

And here’s an essay about the beauty of programming by Linus Torvalds; it can be good to send to friends who don’t understand why we like it (from Bryn Mawr).

Hello, World!

flickr: northern_bites / photo: hello_world.jpg.

After many frustrating days and numerous debugging strategies, I’ve finally gotten Python to run on Aperios!

I’ve often read that good programmers are not amazingly proficient coders; rather, they excel at debugging their own code. Boy, do I need to get better at this. The technique that finally got things going was taking the entirety of the 2005 rUNSWift code and whittling it down to the Python layer. This let me step incrementally while making sure things stayed working. It was a long process, but I learned heaps about Aperios, Python, and C++ in general.
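For anyone wondering what “the Python layer” boils down to: at its core it is the standard CPython embedding API called from C++. A minimal sketch, where the brain module and its processFrame function are hypothetical names (the Py_* calls are the real embedding API):

```cpp
#include <Python.h>

// Run one frame of a hypothetical Python "brain" from C++.
int runBrainFrame() {
    Py_Initialize();

    // import brain
    PyObject* module = PyImport_ImportModule("brain");
    if (module == NULL) { PyErr_Print(); return -1; }

    // look up brain.processFrame
    PyObject* func = PyObject_GetAttrString(module, "processFrame");
    if (func == NULL || !PyCallable_Check(func)) { PyErr_Print(); return -1; }

    // Call brain.processFrame(). On the robot this would happen once per
    // vision frame, with the C++ side handing over the current world state.
    PyObject* result = PyObject_CallObject(func, NULL);
    if (result == NULL) { PyErr_Print(); return -1; }

    Py_DECREF(result);
    Py_DECREF(func);
    Py_DECREF(module);
    Py_Finalize();
    return 0;
}
```

On the robot you would initialize and finalize the interpreter once, not per frame; it is written this way here only to keep the sketch self-contained.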

Onward!