Category Archives: Localization

Work yesterday (1/8/07) included…

lots more Vision profiling. For my parents’ information, that means I found out which specific parts of the vision system slow the Aibo down. For the Aibo, ‘slow’ means it can process less information per second, or, put another way, that its reaction time degrades. Ideally, the Aibo will make decisions about 30 times a second. Currently, with a bunch of stuff we’ve been adding and some bottlenecks we’ve only just discovered, it’s down to about 23-25 frames per second (fps).

Moving on to more complicated things for fellow nBiters: over 50% of the average vision frame is taken up by chromatic distortion filtering. I know, it seems pretty ridiculous to me as well. About 25% of a vision frame is just thresholding, 7% goes to line recognition, and the rest is basically Python processing, including the EKF. Check the Wiki for more details.
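
For the curious, here’s a minimal sketch of the kind of per-stage timing that produces percentages like these. The stage functions are hypothetical stand-ins, not our actual vision pipeline.

```python
import time

def profile_frame(stages, frame):
    """Time each vision stage on one frame and report its share of the total.

    `stages` is a list of (name, function) pairs; the functions are
    hypothetical stand-ins for the real pipeline (chromatic distortion
    filtering, thresholding, line recognition, Python/EKF processing).
    """
    timings = []
    for name, stage in stages:
        start = time.monotonic()
        stage(frame)
        timings.append((name, time.monotonic() - start))

    total = max(sum(t for _, t in timings), 1e-9)
    for name, t in timings:
        print(f"{name:>20}: {t * 1000:6.2f} ms  ({100 * t / total:4.1f}% of frame)")
    print(f"{'whole frame':>20}: {total * 1000:6.2f} ms  (~{1 / total:.1f} fps)")
```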

Anyways, here are the areas for optimization:
-Chromatic Distortion (duh). We may be seriously screwing something up.
-Thresholding (duh). There may be more we can do here, either by reducing the size of the LUT or by improving how it sits in memory (see the sketch after this list).
-Python Overhead. See the tests on Trac, but I believe we’re losing about 3-4 fps just on creating Python objects from C objects, a project ripe for Jeremy’s attention.
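
Since thresholding keeps coming up, here’s a rough sketch of the table-lookup idea and why shrinking the LUT matters. This is not our actual vision code: the color classes and bit depths are made up for illustration, and it assumes an 8-bit-per-channel YUV image.

```python
import numpy as np

# Hypothetical color classes -- not our real definitions, just for illustration.
UNDEFINED, GREEN, WHITE, ORANGE = 0, 1, 2, 3

def build_lut(bits_per_channel=7):
    """Build a lookup table mapping quantized YUV triples to color classes.

    In practice the table would be filled in by offline calibration. Shrinking
    bits_per_channel is the 'reduce the size of the LUT' idea: 7 bits per
    channel is a 128x128x128 table (2 MB at one byte per entry), while 6 bits
    drops it to 256 KB, which is much friendlier to the cache.
    """
    size = 1 << bits_per_channel
    return np.full((size, size, size), UNDEFINED, dtype=np.uint8)

def threshold(yuv_image, lut, bits_per_channel=7):
    """Classify every pixel with one table lookup per pixel.

    yuv_image is an (H, W, 3) uint8 array; the shift quantizes each channel
    down to the table's resolution.
    """
    shift = 8 - bits_per_channel
    y = yuv_image[..., 0] >> shift
    u = yuv_image[..., 1] >> shift
    v = yuv_image[..., 2] >> shift
    return lut[y, u, v]  # (H, W) array of color classes
```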

In other news, I found another huge bug in our body transforms just a few minutes ago: turns out I was doing body rotations in the wrong order (apparently matrix multiplication order matters, who knew?) and it took me re-reading and re-reading the German, Ozzie, and Texan papers to figure the proper order out. The focal point estimates look a lot better now in cortex and so I’ll be testing distance estimates tomorrow.
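
For anyone who hasn’t been burned by this yet, here’s a tiny numpy demonstration of the problem (not our transform code): composing the same two rotations in different orders gives genuinely different results.

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Roll-then-pitch is not the same as pitch-then-roll:
roll, pitch = np.radians(20), np.radians(35)
a = rot_y(pitch) @ rot_x(roll)
b = rot_x(roll) @ rot_y(pitch)
print(np.allclose(a, b))  # False -- the two orders move points differently
```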

Next up: finally figuring out the pose-estimated horizon line swiftly followed by blob rotation fun. Fun.

Work Today (1/07/07) included…

Lots of integration code. In fact, I think I wrote over 500 lines of code today. A lot of it was mindless, except for decisions on how to handle our increasingly complex and growing code base. It’s tough handling a project this big when every step towards cleaning up the code base feels like it’s costing you time against encroaching deadlines.

I’m thinking now that our sights should be set firmly on the video presentation date: February 15th. Showing what we’ve been doing since Germany is a hard feat at this point, since so much of it is on the low-level side. All the milestones so far this year have been substantial: odometry calibration, an extended Kalman filter, line landmark recognition, pose estimation, and the whole slew of development tools that have sped up progress. However, little of this progress is show-worthy.

Localization is the 800-pound gorilla pounding on our door and without taming it we will have to be happy with unsophisticated behaviors.

Work Today (1/06/07) included…

Testing the distance estimates that the new matrix transformations of the Aibo’s joints and camera have produced so far. The effort is promising, and I think it is nearly 95% done, but I’m still getting consistently overestimated distances from objects to the center of the body.

Figuring that most of the work is done there, I’ve moved on to porting all that line recognition code I’ve written in our offline ‘cortex’ environment to its own class in the Vision module. Tomorrow I hope to do some corner recognition PLUS actually estimating distances to that point. Fun stuff. We’re inching closer towards making that real localization system for our team that we’ve always been talking about; just a few more weeks, I believe.

Line Work

So the real focus of our team right now is localization: teaching the dog to know where it is on the field. A critical part of localization is a decent Vision system: having the dog correctly identify landmarks. We’re pretty good at identifying big things like the posts and goals (though the new goals are giving us issues), but we’re pretty lousy at detecting line intersections on the field.

Here are the various issues plaguing our line recognition:
* Thresholding. The way we threshold (identifying RoboCup colors among the millions of colors that show up on the Aibo’s camera) relies heavily on segmentation. I don’t have time to explain segmentation, but let’s just say it’s great for every kind of colored object except the really small white sections that make up the lines.
* Landmark Detection. We can identify lines as individual segments with some success, but we can’t identify when they intersect. That is to say, we can recognize two lines on the screen but can’t recognize that they form a corner of the field. This should be one of the easier tasks (see the sketch after this list).
* The lines and center circle in the lab. The physical lines on our lab’s field are pretty yellowish (made from masking tape), and the center circle isn’t actually a circle; it’s just a circle-ish grouping of line segments that you’d have to stand way back to mistake for a circle.
* The camera settings. We’re forced to use the blurriest settings, which make the field brighter, because the lighting in our lab is so dim. Our lighting upgrade may still be months away.
* We don’t have a PS3 or even a Wii.
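
On the landmark detection point, the geometry itself is simple. Here’s a sketch of intersecting two detected lines, assuming (hypothetically) that the segment detector hands us each line as a pair of image points; the hard part in practice is deciding which segments are real field lines in the first place.

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the (infinite) lines through p1-p2 and p3-p4.

    Each point is an (x, y) tuple in image coordinates. Returns None if the
    lines are (nearly) parallel. A real corner detector would also check
    that the intersection lies on or near the observed segments.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two roughly perpendicular segments that should register as a field corner:
print(intersect((10, 100), (200, 110), (150, 20), (155, 180)))
```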

Role switching on its way

I’ve set up the foundations. It’s working only sporadically as of now, but it has great promise. It is certainly necessary to perfect this: in the full game test we ran today, all 3 dogs got stuck on each other and couldn’t continue playing until they were pulled apart. In a real game they would all have been penalized for 30 seconds… not a good thing!

Before the dog goes into the approach state, it considers whether another dog is closer to the ball. If there is, it stays put. This is all based on localization information communicated by the other dogs on the field. Eventually, we could give the idle bystanders some other work while a single robot approaches the ball.
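
The decision itself boils down to something like the sketch below. None of these names come from our actual behavior code; it just assumes each dog broadcasts an estimate of its own distance to the ball.

```python
import math

def should_approach(my_id, my_pose, ball_global, teammate_reports):
    """Decide whether this dog should chase the ball.

    my_pose and ball_global are (x, y) field coordinates; teammate_reports
    maps player numbers to their broadcast distance-to-ball estimates.
    Ties go to the lower player number so two dogs never both hold back.
    """
    my_dist = math.dist(my_pose, ball_global)
    for player, their_dist in teammate_reports.items():
        if their_dist < my_dist or (their_dist == my_dist and player < my_id):
            return False  # someone else is closer -- stay put
    return True  # this dog is the closest: go into the approach state
```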

GlobalBall

I just added GlobalBall to the vision framework. This is a filter that keeps track of the absolute position of the ball on the field. It still needs testing, and appropriate parameters need to be set. Of course, I’m fully confident that will take no time at all.
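
The core of the idea is a change of coordinates: take the ball’s distance and bearing relative to the robot and, using the robot’s own localization estimate, place it on the field. A minimal sketch (hypothetical names, and leaving out the actual filtering) might look like this:

```python
import math

def ball_to_global(robot_x, robot_y, robot_heading, ball_dist, ball_bearing):
    """Convert a relative ball observation to absolute field coordinates.

    robot_heading and ball_bearing are in radians; ball_bearing is measured
    from the robot's forward direction. A global-ball filter would then
    smooth a stream of these estimates (and teammates' estimates) over time.
    """
    angle = robot_heading + ball_bearing
    return (robot_x + ball_dist * math.cos(angle),
            robot_y + ball_dist * math.sin(angle))
```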

OpenGL testing

We’re nearly there on getting a near-real-time OpenGL app running via AiboConnect that streams the robot’s localization information to a client computer. When this thing starts to work well, it will be so rad. In the meantime, we really have to smooth out our distance readings.

Current State of Localization and GL

My dream of having an OpenGL representation of the current worldMap state has been realized (mostly). This is cool because we can actually see what is going on inside the huge matrix of probabilities without resorting to really ugly text output. I’ll spend some more time refining the look of the window, but I think it’s pretty good for having taught myself OpenGL as I went along. If you’d like to see it, go into the tools branch, then into LocalizationGLUT; run the makefile and then ./testerGLUT. Next up for this visualization tool is to incorporate Jesse’s new heading class.
The tool is set up much like the vision in aiboConnect, so hopefully we will be able to incorporate it easily. This would let us watch what the dog is thinking about its localization as it runs around; pretty sweet.
Once Henry finishes off the application, he will help us adapt the WorldMap and Heading classes so that we can test it on the dog.
Stay tuned, Monte Carlo Localization fans.

Localization Goin’ Well

Pat and Jesse are making great strides with the Localization system. We’re basically imitating the Monte Carlo localization that all the serious RoboCup teams employ, but we hope to have a few cool, innovative add-ons as the months go on. The great thing about all this is that we are just using a C++ compiler to check over all the math and programming without using the Aibo itself. This saves time on compilation, making memory sticks, and having to turn the Aibo on and off.
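
For the uninitiated, Monte Carlo localization keeps a cloud of weighted guesses (‘particles’) about where the robot might be, moves them with odometry, reweights them against landmark observations, and resamples. Here’s a bare-bones sketch of that loop; the field dimensions, noise levels, and the single distance-to-landmark observation are made up for illustration and aren’t taken from our system.

```python
import math
import random

FIELD_LENGTH, FIELD_WIDTH = 540.0, 360.0  # cm; rough, made-up field dimensions

def init_particles(n=100):
    """Scatter n [x, y, heading, weight] hypotheses uniformly over the field."""
    return [[random.uniform(0, FIELD_LENGTH), random.uniform(0, FIELD_WIDTH),
             random.uniform(-math.pi, math.pi), 1.0 / n] for _ in range(n)]

def motion_update(particles, dx, dy, dtheta):
    """Shift every particle by the odometry estimate, plus a little noise.

    (A real version would rotate the odometry into each particle's own frame.)
    """
    for p in particles:
        p[0] += dx + random.gauss(0, 2.0)
        p[1] += dy + random.gauss(0, 2.0)
        p[2] += dtheta + random.gauss(0, 0.05)

def observation_update(particles, landmark_xy, measured_dist, sigma=20.0):
    """Reweight particles by how well they explain a distance to a known landmark."""
    for p in particles:
        expected = math.dist((p[0], p[1]), landmark_xy)
        error = measured_dist - expected
        p[3] *= math.exp(-error * error / (2 * sigma * sigma))
    total = sum(p[3] for p in particles) or 1.0
    for p in particles:
        p[3] /= total

def resample(particles):
    """Draw a fresh particle set in proportion to the weights."""
    chosen = random.choices(particles, weights=[p[3] for p in particles],
                            k=len(particles))
    return [[p[0], p[1], p[2], 1.0 / len(particles)] for p in chosen]
```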