My math is very rusty, so I figure someone has to have solved this problem already, since it's a common feature of automotive data acquisition systems (which I'm building one of). The issue is detecting when a race car has crossed a start/finish line so I can calculate lap times. The user will set the start/finish line by pressing a button (or it can be retrieved from a saved dataset for a given track). When the button gets pressed, I have a GPS point and a heading (the direction the vehicle is moving), which gives me a vector. I can easily calculate a new vector perpendicular to the initial vector, establishing a line. My problem comes in checking whether the vehicle has crossed that line. From my reading, I think I really need to create a rectangular polygon, which should be simple: take the initial point and the perpendicular vector, calculate another point on that line (say 400 feet away), then create an additional line parallel to the initial line (say 25 ft away). This gives me 4 points to define my polygon. Once the polygon has been determined, I then have to take a given point and determine whether it's inside the polygon or outside of it.
a) Is my "theory" here correct?
b) Does anyone have sample code for an Arduino Mega that already does this, or something similar that I can modify?
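For reference, the point-in-polygon check described in the post can be sketched with the standard ray-casting test. This is a minimal, generic sketch (not from any particular library); it only tells you the car is currently inside the box, not the exact moment of crossing:

```cpp
// Ray-casting point-in-polygon test. xs/ys hold the n polygon corners in
// order (e.g. the 4 corners of the rectangle built from the start/finish
// vector). Returns true if (px, py) lies inside the polygon.
bool pointInPolygon(const double *xs, const double *ys, int n,
                    double px, double py) {
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        // Count edges that a horizontal ray from (px, py) crosses:
        if (((ys[i] > py) != (ys[j] > py)) &&
            (px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]))
            inside = !inside;
    }
    return inside;
}
```

As the replies below point out, a segment-intersection test against the finish line itself is simpler and also gives you the exact crossing moment.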
I made exactly what you are looking for. I should be able to find the code on my old PC.
I used 2 GPS points for the finish line. These points can easily be defined in Google Earth, placing them about 10 meters apart, one near each track border.
Then you use the current GPS point and the previous one, which define another segment. If the two segments cross, you calculate the distance between your last position and the crossing point, and the crossing time is then proportional to the distances you have (linear interpolation between the two fixes).
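The two-segment test described above can be sketched as follows (a minimal sketch in plain C++, assuming coordinates already in a local planar frame; names are mine, not from the original project):

```cpp
#include <math.h>

// P1->P2 is the finish line; Q1->Q2 is the segment from the previous GPS
// fix to the current one. On a crossing, u is the fraction of the travel
// segment covered when the line is cut, so the crossing time interpolates:
//   tCross = tPrev + u * (tCurr - tPrev)
struct Point { double x, y; };

bool finishLineCrossed(Point p1, Point p2, Point q1, Point q2, double &u) {
    double rx = p2.x - p1.x, ry = p2.y - p1.y;   // finish-line direction
    double sx = q2.x - q1.x, sy = q2.y - q1.y;   // travel direction
    double denom = rx * sy - ry * sx;            // 2-D cross product
    if (fabs(denom) < 1e-12) return false;       // parallel: no crossing
    double qx = q1.x - p1.x, qy = q1.y - p1.y;
    double t = (qx * sy - qy * sx) / denom;      // fraction along finish line
    u        = (qx * ry - qy * rx) / denom;      // fraction along travel seg
    return t >= 0.0 && t <= 1.0 && u >= 0.0 && u <= 1.0;
}
```

For example, with fixes at tPrev = 100.0 s and tCurr = 101.0 s and u = 0.5, the crossing time interpolates to 100.5 s.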
This works very well, and you should get good timings for club track days, say a precision better than 0.1 s.
If you want to be very exact, you should account for acceleration during the line crossing. But don't forget you are relying on GPS, which is only accurate down to about 2.5 meters. The only drawback is that you have to determine the finish-line points before going to the track and program them into your device.
I also made an algorithm based on a single-point finish line. You need a second parameter for the finish line: the maximum distance from this point to the crossing point. You also need your current and previous positions. You then construct a finish line that crosses your travel line at an angle of 90°, extending no more than x meters from the stored point.
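A hypothetical sketch of that single-point variant (my naming, coordinates assumed already converted to a local meter-based frame): build the finish line through the stored point F, perpendicular to the travel segment from the previous fix to the current one.

```cpp
#include <math.h>

// Given the stored finish point F and the travel segment prev -> curr,
// return a finish line through F perpendicular to the direction of
// travel, extending `half` meters to each side (endpoints e1, e2).
struct P { double x, y; };

bool perpFinishLine(P F, P prev, P curr, double half, P &e1, P &e2) {
    double dx = curr.x - prev.x, dy = curr.y - prev.y;
    double len = sqrt(dx * dx + dy * dy);
    if (len < 1e-9) return false;            // not moving: heading unknown
    double px = -dy / len, py = dx / len;    // unit perpendicular to travel
    e1.x = F.x + px * half;  e1.y = F.y + py * half;
    e2.x = F.x - px * half;  e2.y = F.y - py * half;
    return true;
}
```

The resulting segment can then be fed into the same crossing test as the two-point finish line.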
I'm not even going to ask what the translation is… If you can share the Excel file and code, that would be awesome (I really don't relish dusting off my math skills).
That is exactly the algorithm I used, which I worked out on my own. I could not find the project on the PC; it is probably in a backup. But the algorithm corresponds to what is described at the link provided.
I didn't convert the point coordinates to meters. I assumed that using degrees instead of meters would not make a significant difference over such small distances. And as a "mid-level motorbike rider", I don't need 0.01 s precision. I already know I am too slow in the corners :mrgreen:
I forgot to mention: you must use double instead of float for the GPS coordinates.
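For context on why float isn't enough, here is a quick sketch (generic C, not from the original project) showing the smallest latitude step each type can represent near 45° N, converted to meters:

```cpp
#include <math.h>

// Finest representable latitude step, in meters, near 45 deg N.
// One degree of latitude is roughly 111320 m.
double floatResolutionMeters() {
    // Gap between 45.0f and the next representable 32-bit float:
    return ((double)nextafterf(45.0f, 90.0f) - 45.0) * 111320.0;
}

double doubleResolutionMeters() {
    // Same gap for a 64-bit double:
    return (nextafter(45.0, 90.0) - 45.0) * 111320.0;
}
```

A 32-bit float quantizes latitude to steps of roughly 0.4 m at these latitudes, already a big share of the GPS error budget, while a 64-bit double resolves far below a millimeter. On AVR boards, where `double` is also 32 bits, a common workaround is fixed-point storage, e.g. keeping coordinates as `long` integers in millionths of a degree.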
And when calculating the time, acceleration can change the result. Either account for it, or use a finish line where acceleration is low. I used lines at the end of a straight, just before braking.
Hmm… double instead of float… on an AVR Arduino they are the same (both 32-bit). I'm not really that concerned about precision; there is only so much you can get from a GPS. If I can get down to 0.1 s I'd be extremely happy. I think I can take this info and make something work. Polux, you suggest pre-determining the start/finish with Google Earth. It seems to me that should be unnecessary, even if the user selects the finish line in the middle of the track (I'd assume this would be the case). If I make my line perpendicular to the direction of travel and compute 2 new points, say 10 meters away in each direction, I can make this work fine.
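That on-the-fly construction can be sketched as follows (a flat-earth approximation, fine over tens of meters; names and the 10 m half-width are illustrative):

```cpp
#include <math.h>

// From the button-press fix (lat, lon in degrees) and the GPS heading
// (degrees, 0 = north, clockwise), compute the two finish-line endpoints
// `half` meters to each side, perpendicular to the direction of travel.
void finishLineFromHeading(double lat, double lon, double headingDeg,
                           double half,
                           double &lat1, double &lon1,
                           double &lat2, double &lon2) {
    const double PI = 3.14159265358979323846;
    const double M_PER_DEG_LAT = 111320.0;
    // Meters per degree of longitude shrink with latitude:
    double mPerDegLon = M_PER_DEG_LAT * cos(lat * PI / 180.0);
    double h = headingDeg * PI / 180.0;
    // Heading unit vector is (east, north) = (sin h, cos h); rotating it
    // 90 degrees clockwise gives the perpendicular (cos h, -sin h).
    double pe = cos(h), pn = -sin(h);
    lat1 = lat + pn * half / M_PER_DEG_LAT;
    lon1 = lon + pe * half / mPerDegLon;
    lat2 = lat - pn * half / M_PER_DEG_LAT;
    lon2 = lon - pe * half / mPerDegLon;
}
```

The two endpoints can then be stored and used as the finish-line segment on every subsequent lap.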
Good idea. I should implement this in the next revision of my design. Thanks.
About using double or float: in a special version I made using a 10 Hz GPS, I only displayed the time of the current position when the line was crossed. The max error was 0.1 s. It was enough for my friend, as he was even slower than me :mrgreen:
Another way, which only requires 8th-grade algebra, is to take the line of the finish and the line of travel and treat them as a set of simultaneous equations. This gives you the exact point of crossing as well as the exact time of crossing. It's no harder than what is done above. The other advantage is that you can fit a line to the last N data points and get much better accuracy than just using two points.
This method also works if you have not yet crossed the line; it will give a time in the future, assuming you are close enough that a line is a good approximation. You can use any set of points that describes a line that will cross, or did cross, the finish.
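The simultaneous-equations idea above can be sketched like this (a generic sketch, my naming): write the finish line as a·x + b·y = c and the fitted motion as x(t) = x0 + vx·t, y(t) = y0 + vy·t; substituting gives one linear equation in t, so the crossing time falls out directly, past or future.

```cpp
#include <math.h>

// Solve a*(x0 + vx*t) + b*(y0 + vy*t) = c for t, the crossing time.
bool crossingTime(double a, double b, double c,
                  double x0, double y0, double vx, double vy,
                  double &t) {
    double denom = a * vx + b * vy;
    if (fabs(denom) < 1e-12) return false;   // moving parallel to the line
    t = (c - a * x0 - b * y0) / denom;
    return true;
}

// Least-squares fit p(t) = p0 + v*t through n samples (one axis at a
// time), to smooth GPS noise over the last N fixes.
void fitLinear(const double *t, const double *p, int n,
               double &p0, double &v) {
    double st = 0, sp = 0, stt = 0, stp = 0;
    for (int i = 0; i < n; i++) {
        st += t[i]; sp += p[i]; stt += t[i] * t[i]; stp += t[i] * p[i];
    }
    v  = (n * stp - st * sp) / (n * stt - st * st);
    p0 = (sp - v * st) / n;
}
```

A negative or future t simply means the fitted track crosses the line before or after the current fix, which is exactly the property the sailing example below relies on.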
I'm thinking this would be a VERY good thing to compute for sailboat racing. In sail racing, the race starts with the boats going as fast as they can across the start line. They are given a countdown to the starting horn. It's a bit tricky because you don't want to be late crossing, but if you are early you have to make a U-turn, go back, and cross again, which puts you out of the race. So the sailor wants to know, at the one-minute horn, whether he is one minute from the starting line. The simultaneous-equations method would give him that.
bullethole:
The issue is when a race car has crossed a start/finish line to calculate lap times. The user will initiate the start finish line by pressing a button (or it can be retrieved from a saved dataset for a given track). when the button gets pressed, I have a GPS point, and a heading (direction the vehicle is moving), this gives me a vector, I can easily calculate a new vector perpendicular to the initial vector establishing a line.