Category Archives: Vision

Work Today (1/13/07) included…

just a few short hours on my improvements to the vision scanning system that scans parallel and perpendicular to the horizon.



horizon vision testing, originally uploaded by northern_bites.

The photo shows a first attempt at a post-scanning algorithm that starts at the horizon line and then moves upward and downward with decreasing frequency, scanning parallel to the horizon.
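
Roughly, the idea is something like the sketch below (the names and the 208x160 image size are assumptions for illustration, not our actual code): scan lines run parallel to the horizon, and the spacing between them grows the further they get from it.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { int x, y; };

    // Collect the pixels along one line parallel to the horizon, offset from it
    // by 'dist' pixels in the perpendicular direction (positive = towards the ground).
    std::vector<Point> scanLineParallelToHorizon(double horizonAngle,  // radians
                                                 Point horizonOrigin,  // a point on the horizon
                                                 int dist,             // signed perpendicular offset
                                                 int width, int height) {
        std::vector<Point> pixels;
        const double dx = std::cos(horizonAngle), dy = std::sin(horizonAngle);
        const double px = -dy, py = dx;   // unit perpendicular, pointing down in the image
        for (int t = -width; t <= width; ++t) {
            const int x = static_cast<int>(horizonOrigin.x + t * dx + dist * px);
            const int y = static_cast<int>(horizonOrigin.y + t * dy + dist * py);
            if (x >= 0 && x < width && y >= 0 && y < height) {
                Point p = {x, y};
                pixels.push_back(p);
            }
        }
        return pixels;
    }

    int main() {
        // Scan densely near the horizon and more sparsely away from it:
        // the offsets go 0, 1, 3, 6, 10, ... so the gaps keep growing.
        const Point origin = {104, 80};   // assumed 208x160 Aibo image
        const double angle = 0.1;         // horizon tilt from the pose estimate
        for (int dist = 0, step = 1; dist < 80; dist += step, ++step) {
            std::vector<Point> below = scanLineParallelToHorizon(angle, origin,  dist, 208, 160);
            std::vector<Point> above = scanLineParallelToHorizon(angle, origin, -dist, 208, 160);
            std::printf("offset %3d: %zu pixels below, %zu above\n",
                        dist, below.size(), above.size());
        }
        return 0;
    }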

Next up is to give another go at a run structure that stores the important colors.

The added bonuses of this system are:

1) it fixes the distortions that show up when you move the head around, since the scan lines stay perpendicular to the ground
2) it scans a lot fewer pixels, making the whole vision system much faster

I’m going to see if I can’t get more out of this system this week as I’ll have little to do other than work on vision while the Lab is under construction.

Work Today (1/12/07) included…

Re-read and cleaned up a lot of our vision code. My basic plan over the next few days is to try to make some substantial vision improvements in two particular areas: speed optimization and horizon-line utilization.

All night I’ve been working on a scanning routine that only scans parallel and perpendicular to the horizon line. The math is a bit complicated, but now that I’ve worked most of it out it seems doable, if time consuming. I’m writing most of the code out to see whether this approach has advantages over a blob-rotation method. We’ll find out.

Work Today (1/11/07) included…

Some New Goal work and some Horizon work.

First, I tried to debug and integrate Joho’s work on recognizing the new goals. I’ve narrowed down the buggy code, but couldn’t fix it. So hopefully more luck with that when Joho can take a closer look at it.

Second, I actually put the horizon-line calculation to some good use. The purpose of the horizon line is basically twofold: it gives you an idea of where to look for things, and it gives you an idea of where not to look. This seems simple, but the GermanTeam’s report helped me clarify it a bit.

The most important objects appear most frequently around the horizon: goals, posts, far-away balls, etc. Close objects include really close goals, close balls, dogs, and lines. The former are by nature further away from you and therefore take up less room in the image. The horizon line gives you a good idea of where to scan intently. Beneath the horizon line you can scan sparsely. Above the horizon line–heavens to Betsy–you don’t really have to scan at all.

This last bit was clear to me from the beginning: scan less above the horizon line and you’ll cut down on false positives and speed up the vision system, simply because there are fewer pixels to process. But using the horizon to get a better idea of where to scan more fervently, now that’s an idea that deserves some good coding.
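
For the curious, the above/below test boils down to a signed distance from the horizon line, something like this toy version (hypothetical names, and it assumes image y grows downward as it does on the Aibo camera):

    #include <cmath>

    // Signed distance of an image pixel from the horizon line: positive means
    // below the horizon (towards the ground), negative means above it.
    struct HorizonLine {
        double angle;    // tilt of the horizon in radians, from the pose estimate
        double x0, y0;   // any point known to lie on the horizon
    };

    double signedDistanceFromHorizon(const HorizonLine& h, double x, double y) {
        const double nx = -std::sin(h.angle);   // unit normal pointing down in image space
        const double ny =  std::cos(h.angle);
        return (x - h.x0) * nx + (y - h.y0) * ny;
    }

    // Example policy in the spirit of the post: scan densely near the horizon,
    // sparsely further below it, and skip (almost) everything above it.
    int scanStrideAt(const HorizonLine& h, double x, double y) {
        const double d = signedDistanceFromHorizon(h, x, y);
        if (d < -10.0) return 0;   // well above the horizon: don't scan at all
        if (d < 20.0)  return 1;   // near the horizon: scan densely
        return 4;                  // well below it: scan sparsely
    }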

Aibo Likes Distance Estimation

With a new fix to the pose transforms last night, here are some new pose-estimated distances from the ball to the center of the body, using nothing but the position of the body and the position of the ball on the screen (plus the known height of the ball):

Actual: 30 Reported: 32
Actual: 50 Reported: 49
Actual: 70 Reported: 69
Actual: 90 Reported: 92
Actual: 110 Reported: 113

As you can see, I am a genius. A few notes, though: the estimates get noisier as the ball gets further away. This is to be expected: the angle from the focal point (the camera point) down to the ball in xyz space gets narrower as the ball moves away, so the same variation in that angle has a greater effect on the distance estimate. Now to check the distances at various head angles.
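
For anyone who wants the gist of the geometry, here is a very simplified 2-D sketch of it (made-up camera numbers, and it skips the full joint-chain transform that the real pose code does):

    #include <cmath>
    #include <cstdio>

    // Estimate the horizontal distance to the ball from the camera's height and
    // downward pitch, the ball's row in the image, and the known height of the
    // ball's center above the ground.
    double estimateBallDistance(double camHeight,      // camera focal point above the ground
                                double camPitch,       // downward tilt of the optical axis (radians)
                                double ballPixelY,     // ball-center row in the image
                                double imageHeight,    // e.g. 160 on the Aibo
                                double focalLengthPx,  // assumed intrinsic, in pixels
                                double ballCenterHeight) {
        // Angle of the ray through the ball pixel, measured down from horizontal.
        const double pixelOffset = ballPixelY - imageHeight / 2.0;   // positive = below image center
        const double rayAngle = camPitch + std::atan2(pixelOffset, focalLengthPx);
        if (rayAngle <= 0.0) return -1.0;                // ray points at or above the horizon
        // Intersect the ray with the horizontal plane at the ball's center height.
        const double drop = camHeight - ballCenterHeight;
        return drop / std::tan(rayAngle);
    }

    int main() {
        const double kPi = 3.14159265358979;
        // Camera ~17 units up, tilted 20 degrees down, ball a bit below the middle
        // of a 208x160 image; all numbers here are purely illustrative.
        const double d = estimateBallDistance(17.0, 20.0 * kPi / 180.0, 100.0, 160.0, 200.0, 3.0);
        std::printf("estimated ground distance: %.1f\n", d);
        return 0;
    }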

Work yesterday (1/8/07) included…

lots more Vision profiling. For my parents’ information, that means that I found out which specific parts of the vision system slow the Aibo down. For the Aibo, ‘slow’ means that it can process less information in a second, or, to put it another way, its reaction time degrades. Ideally, the Aibo will make decisions about 30 times a second. Currently, with a bunch of stuff we’ve been adding and some bottlenecks we’ve only just discovered, it’s down to about 23-25 frames per second (fps).

Moving on to more complicated things for fellow nBiters: over 50% of the average vision frame is taken up by chromatic-distortion filtering. I know, it seems pretty ridiculous to me as well. About 25% of a vision frame is just thresholding, 7% is line recognition, and the rest is basically Python processing, including the EKF. Check the Wiki for more details.

Anyways, here are the areas for optimization:
-Chromatic Distortion (duh). We may be seriously screwing something up.
-Thresholding (duh). There may be more we can do here, either by reducing the size of the LUT or by otherwise improving its memory usage (see the sketch after this list).
-Python Overhead — see the tests on Trac, but I believe we’re losing about 3-4 fps just on creating Python objects from C objects, a project ripe for Jeremy’s attention.
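
For context, the thresholding step is basically one table lookup per pixel, so the LUT question is really a memory question. Here is a toy sketch of the idea (made-up table layout, not our actual LUT):

    #include <vector>

    enum ColorClass { UNDEFINED, GREEN, WHITE, ORANGE, YELLOW, BLUE };

    // Every (Y, U, V) pixel maps to a color class through one lookup in a
    // precomputed table. Keeping fewer bits per channel shrinks the table.
    class ColorTable {
    public:
        // Keep only the top 'bits' bits of each channel; fewer bits = smaller table.
        explicit ColorTable(int bits) : bits_(bits), size_(1 << bits),
                                        table_(size_ * size_ * size_, UNDEFINED) {}

        ColorClass classify(unsigned char y, unsigned char u, unsigned char v) const {
            const int shift = 8 - bits_;
            const int yi = y >> shift, ui = u >> shift, vi = v >> shift;
            return static_cast<ColorClass>(table_[(yi * size_ + ui) * size_ + vi]);
        }

    private:
        int bits_;
        int size_;
        std::vector<unsigned char> table_;  // one byte per (Y, U, V) cell
    };

    // A 6-bit table is 64*64*64 = 256 KB; a 5-bit table is only 32 KB.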

In other news, I found another huge bug in our body transforms just a few minutes ago: it turns out I was doing the body rotations in the wrong order (apparently matrix multiplication order matters, who knew?), and it took re-reading and re-reading the German, Ozzie, and Texan papers to figure the proper order out. The focal-point estimates look a lot better now in cortex, so I’ll be testing distance estimates tomorrow.
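
A minimal illustration of the order problem, with toy rotations that have nothing to do with the actual transform code:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 rotateZ(const Vec3& v, double a) {
        Vec3 r = { v.x * std::cos(a) - v.y * std::sin(a),
                   v.x * std::sin(a) + v.y * std::cos(a),
                   v.z };
        return r;
    }

    Vec3 rotateX(const Vec3& v, double a) {
        Vec3 r = { v.x,
                   v.y * std::cos(a) - v.z * std::sin(a),
                   v.y * std::sin(a) + v.z * std::cos(a) };
        return r;
    }

    int main() {
        const double kPi = 3.14159265358979;
        const Vec3 p = {1.0, 0.0, 0.0};
        // Same two 90-degree rotations, applied in opposite orders.
        const Vec3 a = rotateX(rotateZ(p, kPi / 2), kPi / 2);  // Z first, then X
        const Vec3 b = rotateZ(rotateX(p, kPi / 2), kPi / 2);  // X first, then Z
        std::printf("Z then X: (%.1f, %.1f, %.1f)\n", a.x, a.y, a.z);
        std::printf("X then Z: (%.1f, %.1f, %.1f)\n", b.x, b.y, b.z);
        // Prints (0.0, 0.0, 1.0) versus (0.0, 1.0, 0.0): order changes the answer,
        // which is why the joint rotations have to be chained in the right order.
        return 0;
    }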

Next up: finally figuring out the pose-estimated horizon line swiftly followed by blob rotation fun. Fun.

Work Today (1/06/07) included…

Testing the distance estimates that the new matrix transformations of the Aibo’s joints and camera have produced so far. The effort is promising, and I think it is nearly 95% done, but I’m still getting consistently overestimated distances from objects to the center of the body.

Figuring that most of the work there is done, I’ve moved on to porting all the line-recognition code I’ve written in our offline ‘cortex’ environment to its own class in the Vision module. Tomorrow I hope to do some corner recognition PLUS actually estimate distances to that point. Fun stuff. We’re inching closer towards making that real localization system for our team that we’ve always been talking about–just a few more weeks, I believe.

Line Work

So the real focus of our team right now is localization: teaching the dog to know where it is on the field. A critical part of localization is a decent Vision system: having the dog correctly identify landmarks. We’re pretty good at identifying big things like the posts and goals (though the new goals are giving us issues), but we’re pretty lousy at detecting line intersections on the field.

Here are the various issues plaguing our line recognition:
* Thresholding. The way we threshold–identifying RoboCup colors from the millions of colors that show up on the Aibo’s camera–relies heavily on segmentation. I don’t have time to explain segmentation, but let’s just say that it’s great for every kind of colored object except for the really small white sections that make up the lines.
* Landmark Detection. We can identify lines as individual segments with some success, but we can’t identify when they intersect. That is to say, we can recognize two lines on the screen but can’t recognize that they form a corner of the field. This should be one of the easier tasks (see the sketch after this list).
* The lines and center circle in the lab. The physical lines on our Lab’s field are pretty yellowish (made from masking tape), and the center circle isn’t actually a circle. You have to stand way back to mistake it for one; it’s really just a circle-ish grouping of line segments.
* The camera settings. We are forced to use the most blurry settings — which make the field brighter — because the lighting in our lab is so dim. Our lighting upgrade may still be months away.
* We don’t have a PS3 or even a Wii.
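
As promised above, here is roughly what the missing intersection piece amounts to (the struct and thresholds are made up for illustration, not taken from our Vision code):

    #include <cmath>

    struct Seg { double x1, y1, x2, y2; };

    // Find where the infinite lines through two segments cross.
    // Returns false if the lines are (nearly) parallel.
    bool lineIntersection(const Seg& a, const Seg& b, double& ix, double& iy) {
        const double d1x = a.x2 - a.x1, d1y = a.y2 - a.y1;
        const double d2x = b.x2 - b.x1, d2y = b.y2 - b.y1;
        const double denom = d1x * d2y - d1y * d2x;
        if (std::fabs(denom) < 1e-6) return false;
        const double t = ((b.x1 - a.x1) * d2y - (b.y1 - a.y1) * d2x) / denom;
        ix = a.x1 + t * d1x;
        iy = a.y1 + t * d1y;
        return true;
    }

    // Crude corner test: the segments' lines intersect and are roughly
    // perpendicular. A real version would also check that the crossing point
    // lies on or near both segments and classify which corner type it is.
    bool looksLikeCorner(const Seg& a, const Seg& b) {
        double ix, iy;
        if (!lineIntersection(a, b, ix, iy)) return false;
        const double kPi = 3.14159265358979;
        double diff = std::fabs(std::atan2(a.y2 - a.y1, a.x2 - a.x1) -
                                std::atan2(b.y2 - b.y1, b.x2 - b.x1));
        diff = std::fmod(diff, kPi);              // direction difference, folded into [0, pi)
        return std::fabs(diff - kPi / 2) < 0.35;  // within ~20 degrees of a right angle
    }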

Recognizing and making use of lines

For the past week, I’ve been trying to implement line recognition.  The purpose of this is two-fold: at some point, we might be able to use a better version to help with localization and, more importantly, we can use it to try to keep our robots from crossing lines that they should not cross.

I’m using the green-white transition rules that I devised before spring break, and am hoping to be able to have the dogs discern which of the six line objects they might be looking at (a corner, a midline intersection, a goalie box intersection, a goalie box corner, the center circle or any other boundary line).
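
A stripped-down version of that kind of transition check on a single classified scan column might look like this (the enum and the width cutoff are illustrative, not the actual rules):

    #include <vector>

    enum Color { GREEN, WHITE, OTHER };

    // Return the row indices where a plausible field-line crossing occurs:
    // green, then a short run of white, then green again.
    std::vector<int> findLineCrossings(const std::vector<Color>& column,
                                       int maxLineWidth) {
        std::vector<int> crossings;
        for (size_t i = 1; i + 1 < column.size(); ++i) {
            if (column[i] != WHITE || column[i - 1] != GREEN)
                continue;
            // Measure the white run starting at row i.
            size_t j = i;
            while (j < column.size() && column[j] == WHITE)
                ++j;
            const int width = static_cast<int>(j - i);
            if (j < column.size() && column[j] == GREEN && width <= maxLineWidth)
                crossings.push_back(static_cast<int>(i) + width / 2);  // center of the line
            i = j;  // skip past this white run
        }
        return crossings;
    }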

#ifdef Switches, Sensors

Yo–I moved the NotifyImage method (and thus all the calls to vision processing and brain functioning) and the NotifySensors method into Vision.cc for simplicity. It just got annoying going into Interobject.cc all the time.

Also, because we keep switching between the two vision systems, between color-table thresholding and our old vision.cfg thresholding, between using localization and not, using Python and not, using the brain and not, etc., I’ve implemented a bunch of #ifdef and #ifndef preprocessor switches to make this toggling easier. Basically: no more commenting out methods or large swathes of code in NotifyImage–just comment or uncomment the #define switches at the top of the file. This should make things a lot easier.
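
For the record, the pattern is just the standard preprocessor toggle, along these lines (the switch names here are made up, not necessarily the ones actually defined at the top of Vision.cc):

    // Illustrative toggles: comment a #define in or out to flip a subsystem.
    #define USE_COLOR_TABLE_THRESHOLDING   // comment out to fall back to vision.cfg thresholding
    #define USE_LOCALIZATION
    // #define USE_PYTHON_BRAIN            // left off: run without the Python brain

    void NotifyImage() {
    #ifdef USE_COLOR_TABLE_THRESHOLDING
        // thresholdWithColorTable();
    #else
        // thresholdWithVisionCfg();
    #endif

    #ifdef USE_LOCALIZATION
        // updateLocalization();
    #endif

    #ifdef USE_PYTHON_BRAIN
        // runPythonBrain();
    #endif
    }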

Lastly, I’ve begun to make the sensors more useful. Initially it’s just the touch sensors, but I’m going to give some people the job of putting the distance sensors and acceleration estimators to some kind of use. Hopefully some dead reckoning and some extra distance estimation will work out.