SiRF Binary Longitude Offset

I have an EM-406A module. In NMEA mode the latitude and longitude it outputs are correct, but in SiRF binary mode the latitude is correct while the longitude comes out about 56 degrees west of where I actually am.

I am positive it’s not a fault in my software: I’ve viewed the payload through a serial terminal on my computer, it matches what I have stored in my software’s buffer, and the module really is sending the wrong longitude while the latitude is correct. The time and date data are also correct.

I’ve also scanned the payload for any four consecutive bytes that combine into a signed 32-bit integer giving a longitude near my actual position, with no success, so I’m sure I’m not reading the wrong bytes in the payload. I am also reading the bytes as big endian, as per the reference manual.

I clear the 32nd bit (the sign bit) of the 4-byte longitude field so I get an unsigned 32-bit integer without knowing whether it’s positive or negative, but before doing that I read the 32nd bit so I know whether the longitude is east or west. If the sign bit is not the 32nd bit (which it should be according to the SiRF binary reference manual, since it clearly states that a positive 4-byte integer means east), then which bit is it?

If I am reading the right packet and the longitude really is being offset by the SiRF chipset, then by how much? I’d like an accurate, documented number I can subtract from the wrong data to get the correct location. Since successive readings only differ by about 1/100000, I worked out the offset empirically to be about +557961409 (using averages of longitudes while staying still): add 557961409 to the signed 32-bit integer to get a new integer, determine east and west, and voilà, sort-of-accurate data. But I’m still looking for a cause, a fix, and a documented offset rather than something I measured myself.

A factory reset command does not help.

I couldn’t find an exact definition of that field, but if it’s a signed number it’s almost certainly in two’s-complement form - in which case you can’t simply ignore the sign bit! You have to actually negate the number to turn a negative value into a positive one, which you can do by inverting all the bits and then adding 1.

nuts, I read something about sign bits once, but I always thought a signed int only used the highest bit as the sign… :oops: I feel really, really stupid

thanks, that fixed it