Category Archives: Programming

Preparing a source code release

Our source code has been publicly available on GitHub since we started a new repository last summer. However, we didn't have a stable master worthy of a public release until RoboCup in Graz.

Now we've decided to do a real 'source release' with the code we played in the Finals, which has relatively stable versions of all the major modules necessary for soccer play. In addition, we've added pages of documentation for the motion and vision systems, and we're revamping the online documentation on our wiki. Once we finish some more documentation and get a good draft of our Team Report done, we'll add a tag to our GitHub account and make an official announcement here.

Our hope is that by setting the example of sharing our code, we can convince other teams to share their code as well and help out teams who don't want to develop all their modules on their own. If you're interested in providing feedback on our documentation in advance of the code release, please take a look at our GettingStarted page and reply to this post, or email me directly (jstrom bowdoin edu).

Below is a graph from github.com of the git commits leading up to RoboCup 2009.

NBites Dev History


Learning Unix

Here's my kind of computer article, which I found by typing the words “Learning Unix” into Google.

It doesn't contain any specific information about the Unix environment; it just offers comforting words to those, like me, who are trying to get more out of it.

Nice Quote:

Unix has acquired a reputation as something that can only be used by people who learned 14 programming languages by the age of ten and built all their home machines from scratch because it's fun. This is not true. If it were, my artsy-fartsy background would have prevented me from ever touching the keyboard. There are two basic premises that would've saved me a lot of angst in learning Unix. First, it does not make sense. Period. None.

Early Grabbing Behaviors

Though far from useful in an actual game, I am to this day still impressed with these slow, ineffective behaviors. I believe this was done in early March 2006.

Considering the vision system was running with the worst camera settings (everything was really blurry) and at only about four frames a second (instead of the 30 it runs at now), I think these simple, slow grabs are actually pretty impressive. Moreover, all the behavior code was written in C++, which meant we literally had to reboot the robot every time we wanted to make a change. Ah, how crappy the process was back then.

Role switching on its way

I've set up the foundations. It only works sporadically for now, but it has great promise. Getting this right is certainly necessary: in the full game test we ran today, all three dogs got stuck on each other and couldn't continue playing until they were pulled apart. In a real game they would all have been penalized for 30 seconds… Not a good thing!

Before the dog goes into the approach state, it checks whether another dog is closer to the ball. If one is, it stays put. This is all based on localization information communicated by the other dogs on the field. Eventually, we could give the idle bystanders some other work while a single robot approaches the ball.
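Roughly, the check looks like the sketch below. The names here (TeammateInfo, shouldApproachBall) are hypothetical rather than the actual team code; it just shows the idea of comparing our own ball distance against the distances the other dogs report.

    #include <vector>

    // Hypothetical packet of localization info broadcast by a teammate.
    struct TeammateInfo {
        int   playerNumber;
        float distanceToBall;   // the teammate's own estimate, in cm
    };

    // Return true if this dog should enter the approach state, or false if
    // it should stay put because some teammate reports being closer.
    bool shouldApproachBall(float myDistanceToBall,
                            const std::vector<TeammateInfo>& teammates)
    {
        for (size_t i = 0; i < teammates.size(); ++i) {
            if (teammates[i].distanceToBall < myDistanceToBall)
                return false;
        }
        return true;
    }

In practice a tie-breaker (say, the lowest player number wins when two estimates are nearly equal) would keep two dogs with similar distances from both charging the ball.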

dogSight

I just added a pre-compiled binary of AiboConnect to the root /dog/ directory. It's a quickly accessible AiboConnect for seeing the thresholded vision image while the dog is executing its own brain; useful for debugging, I've found. Use it exactly like any other AiboConnect: ./dogSight

More Architecture Changes

Relativity.cc -> VerticalScan.cc and SubVision.cc, split up for the two different vision systems: the old vertical-scanning style and the newer relativity/subvision technique. Theoretically, the subvision still uses vertical scanning, but whatever. Both files still define methods of the Vision class rather than classes of their own.
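In practice that just means one class whose member functions are spread across several .cc files, roughly like the illustrative layout below (the method names are placeholders, not the real ones):

    // Vision.h -- single class declaration shared by both files
    class Vision {
    public:
        void verticalScan();   // defined in VerticalScan.cc
        void subVision();      // defined in SubVision.cc
    };

    // VerticalScan.cc
    #include "Vision.h"
    void Vision::verticalScan() { /* old vertical-scanning pipeline */ }

    // SubVision.cc
    #include "Vision.h"
    void Vision::subVision() { /* newer relativity / subvision pipeline */ }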

#ifdef Switches, Sensors

Yo, I moved the NotifyImage method (and with it all the calls to the vision processing and brain functioning) and the NotifySensors method into Vision.cc for simplicity. It just got annoying going into Interobject.cc all the time.

Also, because we keep switching between the two vision systems, between color-table thresholding and our old vision.cfg thresholding, between using localization and not, using Python and not, using the brain and not, etc., I've implemented a bunch of #ifdef and #ifndef preprocessor switches to make this toggling easier. Basically: no more commenting out methods or large swathes of code in NotifyImage; just comment or uncomment the #define switches at the top of the file. This should make things a lot easier.
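The pattern looks roughly like this. The macro names and the member functions being called are placeholders chosen to show the style, not the actual identifiers in Vision.cc:

    // Toggle features by commenting/uncommenting these switches.
    #define USE_COLOR_TABLE      // otherwise fall back to vision.cfg thresholding
    #define USE_LOCALIZATION
    //#define USE_PYTHON_BRAIN   // leave off for pure C++ runs

    void Vision::NotifyImage(/* image event args omitted */)
    {
    #ifdef USE_COLOR_TABLE
        thresholdWithColorTable();
    #else
        thresholdWithVisionCfg();
    #endif

    #ifdef USE_LOCALIZATION
        updateLocalization();
    #endif

    #ifdef USE_PYTHON_BRAIN
        runPythonBrain();
    #endif
    }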

Lastly, I've begun to make the sensors more useful. Initially it's just the touch sensors, but I'm going to give some people the job of putting the distance sensors and acceleration estimates to good use. Hopefully some dead reckoning and some extra distance estimation will work out.
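For the dead-reckoning part, the basic idea would be something like the sketch below: integrate accelerometer samples to keep a position estimate between vision updates. This is only an illustration of the idea, not code from our sensors module, and double integration drifts fast enough that it would need regular correction from localization.

    struct PoseEstimate {
        float x, y;      // position estimate, in cm
        float vx, vy;    // velocity estimate, in cm/s
    };

    // One dead-reckoning step: fold in an accelerometer sample (ax, ay in
    // cm/s^2) taken dt seconds after the previous one.
    void deadReckonStep(PoseEstimate& pose, float ax, float ay, float dt)
    {
        pose.vx += ax * dt;
        pose.vy += ay * dt;
        pose.x  += pose.vx * dt;
        pose.y  += pose.vy * dt;
    }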