So the real focus of our team right now is localization: teaching the dog to know where it is on the field. A critical part of localization is a decent vision system: having the dog correctly identify landmarks. We're pretty good at identifying big things like the posts and goals (though the new goals are giving us issues), but we're pretty lousy at detecting line intersections on the field.
Here are the various issues plaguing our line recognition:
* Thresholding. The way we threshold (identifying RoboCup colors among the millions of colors that show up on the Aibo's camera) relies heavily on segmentation. I don't have time to explain segmentation, but let's just say it's great for every kind of colored object except the really small white sections that make up the lines.
* Landmark Detection. We can identify lines as individual segments with some success, but we can't identify when they intersect. That is to say, we can recognize two lines on the screen, but we can't recognize that they form a corner of the field. This should be one of the easier tasks.
* The lines and center circle in the lab. The physical lines on our lab's field are pretty yellowish (they're made from masking tape), and the center circle isn't actually a circle; it's just a circle-ish grouping of line segments. You have to stand way back to mistake it for one.
* The camera settings. Because the lighting in our lab is so dim, we're forced to use the blurriest camera settings, which make the field look brighter. Our lighting upgrade may still be months away.
* We don’t have a PS3 or even a Wii.
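To give a feel for the thresholding problem above, here's a minimal sketch of lookup-table color classification, the general idea behind segmenting camera colors into RoboCup classes. The class names and the 4-bit quantization are my illustrative assumptions, not our actual vision code; the point is that tiny white line patches contribute very few pixels to a table like this, so their bins end up sparse.

```python
# Illustrative sketch only: map quantized YUV pixels to color classes
# via a lookup table built from hand-labeled sample pixels.
UNKNOWN, ORANGE_BALL, FIELD_GREEN, LINE_WHITE = range(4)

def bin_key(y, u, v):
    """Quantize each 8-bit YUV channel to 16 bins (drop the low 4 bits)."""
    return (y >> 4, u >> 4, v >> 4)

def build_threshold_table(samples):
    """samples: iterable of ((y, u, v), color_class) hand-labeled pixels."""
    return {bin_key(*yuv): cls for yuv, cls in samples}

def classify(table, y, u, v):
    """Look up a pixel's bin; unlabeled bins fall back to UNKNOWN."""
    return table.get(bin_key(y, u, v), UNKNOWN)

# A pixel near a labeled sample lands in the same bin and gets its class:
table = build_threshold_table([((200, 120, 130), LINE_WHITE)])
print(classify(table, 205, 125, 128) == LINE_WHITE)  # True
```

With plenty of labeled orange or green pixels the table fills in nicely; with only a handful of white line pixels, most white-ish bins stay UNKNOWN.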
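And to back up the claim that corner detection "should be one of the easier tasks": once vision hands you two clean line segments, the geometry is straightforward. Here's a sketch (in Python, not our actual robot code; the tolerance value is a made-up assumption) of checking whether two segments intersect and meet at roughly a right angle.

```python
# Illustrative sketch: do two detected line segments form a field corner?
import math

def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(denom) < 1e-9:
        return None  # parallel or collinear
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def looks_like_corner(p1, p2, p3, p4, tol_deg=20):
    """True if the segments meet and their angle is within tol of 90 degrees."""
    if segment_intersection(p1, p2, p3, p4) is None:
        return False
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(p4[1] - p3[1], p4[0] - p3[0])
    diff = abs(a1 - a2) % math.pi
    angle = math.degrees(min(diff, math.pi - diff))
    return abs(angle - 90) <= tol_deg

# Two perpendicular segments crossing at (1, 0):
print(looks_like_corner((0, 0), (2, 0), (1, -1), (1, 1)))  # True
```

The real difficulty, of course, is upstream: if thresholding doesn't give us clean white segments in the first place, there's nothing to intersect.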