Yo. I moved the NotifyImage and NotifySensors methods (and with them all the calls into the vision processing and the brain) into Vision.cc for simplicity. It just got annoying jumping into Interobject.cc all the time.
Also, because we keep switching between the two vision systems (color-table thresholding versus our old vision.cfg thresholding), between using localization and not, using Python and not, using the brain and not, etc., I've implemented a bunch of #ifdef and #ifndef preprocessor switches to make this toggling easier. Basically: no more commenting out methods or large swathes of code in NotifyImage. Just comment or uncomment the #define switches at the top of the file. This should make things a lot easier.
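Here's a rough sketch of the pattern, just so it's clear what I mean. The switch and function names below are placeholders for illustration, not necessarily the actual ones in Vision.cc:

    // ---- toggles: comment/uncomment these instead of commenting out code in NotifyImage ----
    #define USE_COLOR_TABLE     // color-table thresholding; comment out to fall back to vision.cfg thresholds
    #define USE_LOCALIZATION    // run the localization update every frame
    //#define USE_BRAIN         // pass results on to the (Python) brain

    // stub processing steps, just to keep this sketch self-contained
    static void thresholdWithColorTable() {}
    static void thresholdWithVisionCfg()  {}
    static void updateLocalization()      {}
    static void runBrain()                {}

    // what NotifyImage roughly reduces to once the switches are in place
    void NotifyImage()
    {
    #ifdef USE_COLOR_TABLE
        thresholdWithColorTable();
    #else
        thresholdWithVisionCfg();
    #endif

    #ifdef USE_LOCALIZATION
        updateLocalization();
    #endif

    #ifdef USE_BRAIN
        runBrain();
    #endif
    }

So switching vision systems (or turning the brain off for a vision-only test) is a one-line change at the top of the file instead of a pile of commented-out code.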
Lastly, I've begun to make the sensors more useful. For now it's just the touch sensors, but I'm going to give some people the job of putting the distance sensors and acceleration estimates to some kind of advantage. Hopefully some dead reckoning and some extra distance estimation will come out of it.
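For whoever picks that up, the dead-reckoning part is basically just integrating the acceleration estimates each sensor frame to keep a rough running position. A minimal sketch of the math (none of these names exist in our code yet, and the real thing will drift and need correcting against the distance sensors):

    // Minimal dead-reckoning sketch: integrate acceleration into velocity,
    // and velocity into position, once per sensor frame. Illustrative only.
    struct DeadReckoner {
        double vx = 0.0, vy = 0.0;   // estimated velocity
        double x  = 0.0, y  = 0.0;   // estimated position

        // ax, ay: acceleration estimates; dt: time since the last sensor frame (seconds)
        void update(double ax, double ay, double dt) {
            vx += ax * dt;
            vy += ay * dt;
            x  += vx * dt;
            y  += vy * dt;
        }
    };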