Category Archives: Asides

Work yesterday (1/8/07) included…

lots more Vision profiling. For my parents’ information, that means I found out which specific parts of the vision system slow the Aibo down. For the Aibo, ‘slow’ means that it can process less information per second; put another way, its reaction time degrades. Ideally, the Aibo will make decisions about 30 times a second. Currently, with a bunch of stuff we’ve been adding and some bottlenecks we’ve only just discovered, it’s down to about 23-25 frames per second (fps).

Moving on to more complicated things for fellow nBiters: over 50% of the average vision frame is taken up by chromatic distortion filtering. I know, it seems pretty ridiculous to me as well. About 25% of a vision frame is just thresholding, 7% is line recognition, and the rest is basically Python processing, including the EKF. Check the Wiki for more details.
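
For anyone wondering what ‘thresholding’ actually means here, this is the general idea: classify every pixel by looking its color up in a precomputed table, once per pixel, every frame. This is just a toy sketch, not our actual code; the ‘ball orange’ ranges and the 8-step quantization are made up for illustration:

```python
# Toy sketch of per-pixel color thresholding with a lookup table (LUT).
# A vision system does a lookup like this for every pixel of every frame,
# which is why thresholding alone can eat a big slice of the frame time.

def build_lut():
    """Map quantized (y, u, v) -> color class. Toy 'ball orange' region only."""
    lut = {}
    for y in range(0, 256, 8):          # coarse 8-step quantization keeps the table small
        for u in range(0, 256, 8):
            for v in range(0, 256, 8):
                if v > 160 and u < 120:  # made-up 'orange' region in UV space
                    lut[(y // 8, u // 8, v // 8)] = 1   # 1 = ball color
                else:
                    lut[(y // 8, u // 8, v // 8)] = 0   # 0 = background
    return lut

def threshold(frame, lut):
    """Classify each (y, u, v) pixel via a table lookup -- no math per pixel."""
    return [lut[(y // 8, u // 8, v // 8)] for (y, u, v) in frame]

lut = build_lut()
frame = [(100, 100, 200), (50, 140, 100)]  # one 'orange' pixel, one background
print(threshold(frame, lut))               # -> [1, 0]
```

Note that the quantization is also one knob for shrinking the LUT, which is the kind of memory trade-off mentioned in the optimization list below.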

Anyways, here are the areas for optimization:
-Chromatic Distortion (duh). We may be seriously screwing something up.
-Thresholding (duh). There may be more we can do here, either by reducing the size of the LUT or through other memory optimizations.
-Python Overhead: see the tests on Trac, but I believe we’re losing about 3-4 fps just on creating Python objects from C objects, a project ripe for Jeremy’s attention.
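
To illustrate the kind of overhead I mean by that last point, here’s a toy timing sketch. It has nothing to do with our real classes (the `Blob` class and all the numbers are made up); it just shows the general cost pattern of allocating fresh Python objects every frame versus overwriting a preallocated buffer:

```python
import timeit

class Blob:
    """Toy stand-in for a per-frame object built from C-side data."""
    __slots__ = ("x", "y", "w", "h")
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

raw = [(i, i, 4, 4) for i in range(200)]  # pretend this came from the C side

def rebuild_every_frame():
    # Allocate 200 fresh Python objects per 'frame'
    return [Blob(*t) for t in raw]

_buf = [Blob(0, 0, 0, 0) for _ in range(200)]

def reuse_buffer():
    # Overwrite fields on preallocated objects instead of allocating new ones
    for b, (x, y, w, h) in zip(_buf, raw):
        b.x, b.y, b.w, b.h = x, y, w, h
    return _buf

t_new = timeit.timeit(rebuild_every_frame, number=1000)
t_reuse = timeit.timeit(reuse_buffer, number=1000)
print(f"fresh objects: {t_new:.3f}s   reused buffer: {t_reuse:.3f}s")
```

The exact numbers will vary by machine; the point is that per-frame object creation is pure overhead when the data could be written into existing objects.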

In other news, I found another huge bug in our body transforms just a few minutes ago: it turns out I was doing body rotations in the wrong order (apparently matrix multiplication order matters, who knew?), and it took re-reading and re-reading the German, Ozzie, and Texan papers to figure out the proper order. The focal point estimates look a lot better now in cortex, so I’ll be testing distance estimates tomorrow.
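
For the curious, here’s a tiny demonstration of why the order matters. This is a generic pure-Python sketch, not our transform code: composing a roll (x-axis) and a yaw (z-axis) rotation in the two possible orders sends the same vector to two different places, because matrix multiplication doesn’t commute:

```python
import math

def rot_x(a):  # rotation about the x axis (roll)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):  # rotation about the z axis (yaw)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

roll, yaw = math.radians(30), math.radians(90)
v = [1.0, 0.0, 0.0]

a = apply(matmul(rot_z(yaw), rot_x(roll)), v)  # roll applied first, then yaw
b = apply(matmul(rot_x(roll), rot_z(yaw)), v)  # yaw applied first, then roll

print([round(x, 3) for x in a])  # -> [0.0, 1.0, 0.0]
print([round(x, 3) for x in b])  # -> [0.0, 0.866, 0.5]
```

Same two rotations, same input vector, different results; swap the order in a chain of joint transforms and every downstream estimate goes subtly wrong.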

Next up: finally figuring out the pose-estimated horizon line, swiftly followed by blob rotation fun. Fun.
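
As a teaser of what the horizon-line computation involves, here’s a toy pinhole-camera sketch. The function, the sign conventions, and the numbers are all illustrative assumptions, not the code I’ll actually write: given the camera’s pitch and roll from pose estimation, it returns the image row of the horizon at the left and right edges of the frame.

```python
import math

def horizon_line(pitch, roll, focal_px, width, height):
    """
    Toy pinhole-camera horizon. Convention here (an assumption for the sketch):
    positive pitch = camera tilted down, which pushes the horizon toward the
    top of the image (smaller row index); roll tilts the line across the frame.
    """
    cx, cy = width / 2.0, height / 2.0
    dy = focal_px * math.tan(pitch)   # vertical offset of horizon from image center
    slope = math.tan(roll)            # roll tilts the horizon line
    y_left = cy - dy - slope * (0 - cx)
    y_right = cy - dy - slope * (width - cx)
    return y_left, y_right

print(horizon_line(0.0, 0.0, 200, 208, 160))   # level camera -> (80.0, 80.0)
print(horizon_line(math.radians(10), 0.0, 200, 208, 160))  # pitched down: horizon rises
```

Once the horizon is known, everything above it can be ignored and blobs can be rotated relative to it, which is where the ‘blob rotation fun’ comes in.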

Work Today (1/07/07) included…

Lots of integration code. In fact, I think I wrote over 500 lines of code today. A lot of it was mindless, except for decisions on how to handle our increasingly complex and growing code base. It’s tough handling a project this big when every step toward cleaning up the code base feels like slowing down in the face of encroaching deadlines.

I’m thinking now that our sights should be set firmly on the video presentation date: February 15th. Showing what we’ve been doing since Germany is a hard feat at this point, since so much of it is on the low-level side. All the milestones so far this year have been substantial: odometry calibration, an extended Kalman filter, line landmark recognition, pose estimation, and the whole slew of development tools that have sped up progress. However, little of this progress is show-worthy.

Localization is the 800-pound gorilla pounding on our door and without taming it we will have to be happy with unsophisticated behaviors.

Work Today (1/06/07) included…

Testing the distance estimates that the new matrix transformations of the Aibo’s joints and camera have produced so far. The effort is promising, and I think it is nearly 95% done, but I’m still getting consistently overestimated distances from objects to the center of the body.

Figuring that most of the work there is done, I’ve moved on to porting all that line recognition code I’ve written in our offline ‘cortex’ environment to its own class in the Vision module. Tomorrow I hope to do some corner recognition, plus actually estimating distances to that point. Fun stuff. We’re inching closer toward making that real localization system for our team that we’ve always been talking about; just a few more weeks, I believe.

Jeremy’s Matrix Majesty

I just wanted to give high praise to Jeremy’s long efforts to bring a C-based Python matrix library to the Sony Aibo. Thanks to a lot of hard work and banging-head-against-the-table nights, he’s committed it and we’ve been able to test it. This is essential to getting our Extended Kalman Filter, written by Mark, fast enough that we can keep the code written in Python. Oh, and, nBiters: you can see the results of a speed test I did today here.

Late Night Adventures with ‘make’ and ‘gprof’

Well, I’ve taken my first steps toward understanding the GNU ‘make’ utility, thanks to Andrew Oram and Steve Talbott. I’ve hacked together Makefiles before (we use quite a lot of them in our code base), but tonight I said ‘No more hackery! Makefiles deserve some attention!’ So I read the first few chapters and successfully fixed my vision debugger’s Makefile to handle profiling with gprof, GNU’s profiling utility. I’ll post our (offline) vision system’s profile results when I get them.
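
For posterity, the change boils down to something like the following. This is a hypothetical Makefile fragment, not our actual one; the target and file names are made up, but the mechanics are standard gprof usage:

```make
# Hypothetical Makefile fragment for gprof profiling (names illustrative).
CXX      = g++
CXXFLAGS = -O2 -pg          # -pg instruments the binary for gprof
LDFLAGS  = -pg              # -pg must be passed at link time as well

cortex: vision.o debugger.o
	$(CXX) $(LDFLAGS) -o $@ $^

profile: cortex
	./cortex                # running the binary writes gmon.out
	gprof cortex gmon.out > profile.txt

.PHONY: profile
```

The gotchas that bit me: `-pg` has to appear at both compile and link time, and gprof reads the `gmon.out` file that the instrumented binary writes when it exits, so you have to actually run the program before asking for the report.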