OK - I’ve been combing through the litany of threads on localization techniques for a solid week now. I’ve learned a lot, but still have a bunch of questions I’m hoping to get some help with.
First – the requirements:
–Outdoor localization of 4 vehicles, all operating within a 0.5-mile radius
–Update rate of at least 30 Hz (an accurate position update 30 times per second)
–Accuracy to within 1’
–Area is mostly an open field, but has some lightly wooded areas
–Terrain is mostly flat with some broad slopes
–Budget under $2500
The bad options, and why they don’t work (correct me if I’m wrong!)
–GPS: too slow; not accurate enough
–DGPS: not available where I am (plus it’s crazy expensive)
–Sonar: doesn’t have the necessary range
–RFID: Too much space to bury that many tags
–Buried Line: Again, too much space. That’d be good for a perimeter, but not much else.
–Dead-reckoning: not accurate enough
The slightly-less-bad options:
–OpenCV. Mount cameras around the field, network them, and process all the images to track unique markers mounted on the vehicles. Problem is, that’s a lot of area to cover with cameras, and a lot of processing power to analyze the streams continuously. And even with all that in place, sustaining 30 FPS of detection in OpenCV may not be feasible within the budget.
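To put a rough number on the processing concern, here’s the raw pixel throughput such a setup would have to chew through. The camera count, resolution, and frame rate below are assumptions for illustration, not a real coverage plan:

```python
# Back-of-the-envelope pixel throughput for a fixed-camera OpenCV setup.
# All numbers are assumptions for illustration, not measurements.

num_cameras = 8          # assumed camera count (likely low for a 0.5-mile radius)
width, height = 640, 480 # assumed per-camera resolution
fps = 30                 # target update rate
bytes_per_pixel = 3      # 8-bit BGR

pixels_per_second = num_cameras * width * height * fps
bandwidth_mb_s = pixels_per_second * bytes_per_pixel / 1e6

print(f"{pixels_per_second / 1e6:.1f} Mpix/s, {bandwidth_mb_s:.0f} MB/s raw")
```

Even these modest assumptions mean analyzing ~74 megapixels and moving ~220 MB of raw frames every second, before any detection work happens, which is why the budget limit bites here.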
–Pixy on motors (servo or stepper). High in the center of the arena, set up a Pixy on a pan/tilt mount, swapping the stock lens for a zoom lens. Each Pixy follows its assigned vehicle, and position is computed from the Pixy’s feedback combined with the motor angles. Speed isn’t a problem – the Pixy runs at 50 fps – but sunlight is a concern. As the vehicles move between sunlight and shadow, it really messes up the Pixy’s hue-based recognition system. I’m hoping new lenses could address that.
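The angles-to-position part of this idea is straightforward trigonometry. Here’s a minimal sketch, assuming a camera on a mast of known height and flat terrain (the function name and mast height are hypothetical; the stated broad slopes would add error):

```python
import math

def pan_tilt_to_ground(pan_deg, tilt_deg, mast_height_ft):
    """Project a pan/tilt pointing direction onto flat ground.

    pan_deg:        rotation about the vertical axis (0 = +x direction)
    tilt_deg:       angle below horizontal (must be > 0 to hit the ground)
    mast_height_ft: camera height above the field

    Returns (x, y) in feet relative to the mast base.
    Assumes flat terrain; slopes will shift the true intersection point.
    """
    tilt = math.radians(tilt_deg)
    if tilt <= 0:
        raise ValueError("camera must look downward to intersect the ground")
    ground_range = mast_height_ft / math.tan(tilt)
    pan = math.radians(pan_deg)
    return (ground_range * math.cos(pan), ground_range * math.sin(pan))

# A target straight ahead, 45 degrees below horizontal, from a 20 ft mast:
x, y = pan_tilt_to_ground(0.0, 45.0, 20.0)  # -> (20.0, 0.0)
```

Note the geometry gets unforgiving at shallow tilt angles: near the edge of a 0.5-mile radius, a small tilt error sweeps the ground intersection a long way, so encoder resolution on the tilt axis matters more than on pan.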
–Homemade DGPS. Set up a GPS receiver at a fixed, surveyed position in the arena, broadcast its measured error to the vehicles, and use some sort of IMU to bridge the gaps between GPS updates. Has anyone done this? Does it really work? I imagine it would greatly improve accuracy, but maybe still not become accurate enough for my needs.
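The correction idea itself is simple to express in code. A sketch of the core arithmetic (function names are hypothetical; positions here are in a local x/y frame in feet):

```python
def base_station_correction(surveyed_pos, measured_pos):
    """Error the base station sees: measured fix minus known true position."""
    return (measured_pos[0] - surveyed_pos[0],
            measured_pos[1] - surveyed_pos[1])

def apply_correction(rover_measured, correction):
    """Subtract the broadcast error from the rover's raw fix.

    This only removes error sources shared by both receivers
    (atmospheric delay, satellite ephemeris error); receiver noise
    and multipath are uncorrelated between receivers and remain.
    """
    return (rover_measured[0] - correction[0],
            rover_measured[1] - correction[1])

# Base at a surveyed point (0, 0) reads (1.2, -0.8) ft of shared error;
# a rover applies the same offset to its own raw fix:
corr = base_station_correction((0.0, 0.0), (1.2, -0.8))
fixed = apply_correction((101.2, 49.2), corr)  # -> approximately (100.0, 50.0)
```

That caveat in the comment is the catch: the shared-error part corrects well, but the rover’s own receiver noise is untouched, which is why consumer-grade homemade DGPS tends to land in the few-feet range rather than sub-foot.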
–Trilateration via RSSI. Signals are too unreliable.
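To put a number on “unreliable”: under the standard log-distance path-loss model (parameters below are assumed, not measured), a few dB of fading moves the distance estimate a long way:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m_dbm=-40.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model.

    d = 10 ** ((RSSI_at_1m - RSSI) / (10 * n))
    Reference power and path-loss exponent n are assumed values;
    real outdoor values vary with antennas, ground, and foliage.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))

d_nominal = rssi_to_distance(-70.0)   # ~15.8 m
d_faded = rssi_to_distance(-76.0)     # same spot after a 6 dB fade: ~27.5 m
```

A 6 dB fade, which vegetation or a passing vehicle body can easily cause, inflates the range estimate by roughly 75% here, so RSSI trilateration can’t approach 1-foot accuracy.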
–Radar/lidar. Of all the methods, this is the one I know least about. I imagine the wooded areas would be a problem. Also, can detected objects be uniquely identified so I know which car is which, or do I just get generalized blobs? And can I get 30 Hz updates with this approach?
The utopian option:
- Beaglebone + RF transmitters + Time Difference of Arrival. Set up 3 transmitters at known locations, connected via identical-length wire, set to broadcast at specific intervals. From my hobbyist/amateur/probably-totally-off-base calculations: the Beaglebone runs at 1 GHz, so 1 billion cycles per second. Light travels at about 983 million feet per second, which means light travels just under 1 foot per Beaglebone cycle. So does that mean this trilateration grid could be accurate to about 1’? And since positioning is based on arrival time, and the signals only broadcast at assigned intervals (and therefore won’t collide), there shouldn’t be any reflections creating false signals (right?).
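The feet-per-cycle arithmetic does check out, as a best case. The sketch below verifies that figure and shows the solving side: recovering a 2D position from simulated range differences with a toy coarse-to-fine search. The anchor positions, search window, and the simulated vehicle position are all assumptions for illustration:

```python
import math

# 1) The timing arithmetic: light travel per 1 GHz clock cycle.
C_FT_PER_S = 983_571_056          # speed of light, feet per second
CLOCK_HZ = 1_000_000_000          # Beaglebone's nominal clock rate
ft_per_cycle = C_FT_PER_S / CLOCK_HZ   # ~0.98 ft: best-case resolution

# 2) A toy 2D TDOA solve by coarse-to-fine grid search.
# Anchor (transmitter) positions are assumed, in feet.
ANCHORS = [(0.0, 0.0), (2000.0, 0.0), (1000.0, 1800.0)]

def tdoa_residual(p, range_diffs):
    """Squared error between measured and predicted range differences
    (each anchor vs. anchor 0). range_diffs are in feet, i.e. the
    measured time differences already multiplied by C_FT_PER_S."""
    d = [math.hypot(p[0] - ax, p[1] - ay) for ax, ay in ANCHORS]
    return sum((d[i] - d[0] - range_diffs[i - 1]) ** 2
               for i in range(1, len(d)))

def locate(range_diffs, center=(1000.0, 900.0), span=2640.0):
    """Minimize the residual, halving the grid step down to ~0.1 ft."""
    best, step = center, span / 10
    while step > 0.1:
        cx, cy = best
        candidates = [(cx + i * step, cy + j * step)
                      for i in range(-10, 11) for j in range(-10, 11)]
        best = min(candidates, key=lambda p: tdoa_residual(p, range_diffs))
        step /= 2
    return best

# Simulate a vehicle at (700, 500) ft with perfect measurements:
true_pos = (700.0, 500.0)
d = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in ANCHORS]
est = locate([d[1] - d[0], d[2] - d[0]])
```

Two caveats worth flagging. First, with only three anchors, TDOA can admit a second geometric solution; a fourth transmitter removes the ambiguity. Second, the ~1 ft/cycle figure is the ceiling set by the clock alone: in practice, interrupt latency, radio front-end jitter, and multipath will dominate the timing error, which is why dedicated timestamping hardware (e.g. the BeagleBone’s PRUs rather than Linux userspace) would likely be needed.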
So, the Beaglebone TDOA method seems like a great option, yet I can’t find anyone who’s done it. Common sense suggests I’m missing something (maybe the Beaglebone can’t REALLY handle timing that fast, or something).
Am I trying to do the impossible with this project? Any thoughts are welcome.