Get an LPC2148 dev board made by Olimex. It has an SD card slot, and there are several open-source file systems to choose from.
After that you need to be creative in how you use the file system. I’ve used a file system in the past as nothing more than a doubly linked list, using integers converted to ASCII as file names.
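A minimal sketch of that doubly-linked-list-of-files idea, assuming each record is a small file whose name is just its integer number rendered as ASCII. The names `node_t`, `node_path`, `node_write` and `node_read` are mine, not from the original:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    long prev;          /* file number of the previous node, 0 = none */
    long next;          /* file number of the next node, 0 = none */
    char payload[64];   /* application data */
} node_t;

/* Render an integer file number as an ASCII file name, e.g. 42 -> "42". */
static void node_path(long num, char *buf, size_t len)
{
    snprintf(buf, len, "%ld", num);
}

/* Write one node out to its file; returns 0 on success. */
static int node_write(long num, const node_t *n)
{
    char path[32];
    node_path(num, path, sizeof path);
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t ok = fwrite(n, sizeof *n, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

/* Read one node back; returns 0 on success. */
static int node_read(long num, node_t *n)
{
    char path[32];
    node_path(num, path, sizeof path);
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t ok = fread(n, sizeof *n, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}
```

Walking the list is then just reading a node and following its `next` number to the next file name.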
I’ve also used date strings for file names, like 20091231_43. You can do an ls on these and sort if necessary.
I suspect you need a lat/lon scheme for file names, one that lets you look up proximity data by file name alone. Of course, most embedded file systems out there are DOS/FAT based, with a hard limit on the number of files in a directory, so you may have to use lat/lon as a key or offset into a single file instead.
If you can envision a scheme that works out of RAM, you can “virtualize” that using a file, and with the proper functions that can be mostly masked from the read/write operations.
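Here is a hypothetical sketch of that "virtualize RAM in a file" idea combined with the lat/lon key: quantize the position to a grid cell, use the cell index as a record offset, and hide the fseek behind get/put calls so the rest of the code never sees it. The names (`cell_index`, `grid_put`, `grid_get`) and the 0.1-degree cell size are illustrative assumptions:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CELLS_PER_DEG 10                    /* 0.1-degree grid */
#define LON_CELLS     (360 * CELLS_PER_DEG)

typedef struct {
    double lat, lon;
    char   note[32];
} fix_t;

/* Map lat (-90..90) and lon (-180..180) to a unique record index. */
static long cell_index(double lat, double lon)
{
    long row = (long)((lat +  90.0) * CELLS_PER_DEG);
    long col = (long)((lon + 180.0) * CELLS_PER_DEG);
    return row * LON_CELLS + col;
}

/* Store a record in its cell's slot; the caller never sees the seek. */
static int grid_put(FILE *f, const fix_t *fx)
{
    long idx = cell_index(fx->lat, fx->lon);
    if (fseek(f, idx * (long)sizeof *fx, SEEK_SET) != 0) return -1;
    return fwrite(fx, sizeof *fx, 1, f) == 1 ? 0 : -1;
}

/* Fetch whatever record lives in the cell containing lat/lon. */
static int grid_get(FILE *f, double lat, double lon, fix_t *out)
{
    long idx = cell_index(lat, lon);
    if (fseek(f, idx * (long)sizeof *out, SEEK_SET) != 0) return -1;
    return fread(out, sizeof *out, 1, f) == 1 ? 0 : -1;
}
```

Any two positions inside the same 0.1-degree cell land on the same record, which is exactly the proximity-by-key behavior described above.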
What you lose with any file system approach is processor cycles. But with a one second update interval you have cycles to burn. My preemptive multi-task scheduler works with a timer tick of 65,535 Hz.
Priority evaluation is done at least this often, but also done anytime a task calls for a system service.
My next advice is this: write your application on a Linux box and thoroughly test it there before moving it onto a target. I just finished writing a file system strictly for data logging using soldered-in Atmel data flash chips. I used the Linux file system to stand in for the data flash chips, using fseek, fwrite and fread. When it was working and tested I moved it onto the target, where I already had SPI functions that did the same job. Total debugging time on the target: zero.
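A sketch of what that host-side stand-in can look like, under my own assumptions: the same `dataflash_read()`/`dataflash_write()` page API the target implements over SPI is implemented here with fseek/fread/fwrite on an ordinary Linux file, so the logging layer above it can be debugged entirely on the desktop. The function names and the 256-byte page size are illustrative, not the original author's:

```c
#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 256

static FILE *flash_image;   /* host file standing in for the chip */

int dataflash_init(const char *image_path)
{
    flash_image = fopen(image_path, "w+b");
    return flash_image ? 0 : -1;
}

/* On the target, these two would drive the SPI bus instead;
 * the callers don't change at all. */
int dataflash_write(unsigned page, const unsigned char buf[PAGE_SIZE])
{
    if (fseek(flash_image, (long)page * PAGE_SIZE, SEEK_SET) != 0) return -1;
    return fwrite(buf, PAGE_SIZE, 1, flash_image) == 1 ? 0 : -1;
}

int dataflash_read(unsigned page, unsigned char buf[PAGE_SIZE])
{
    if (fseek(flash_image, (long)page * PAGE_SIZE, SEEK_SET) != 0) return -1;
    return fread(buf, PAGE_SIZE, 1, flash_image) == 1 ? 0 : -1;
}
```

Everything above this layer compiles unchanged for both builds; only these three functions are swapped when you move to the target.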
Using Linux as your development host for gcc and other tools, a huge amount of code that will eventually run on the target can be written and debugged. When you move it to the target, the only thing changing is the fact that the compiler is producing ARM code instead of x86 code.
Functions like fprintf can be used on the Linux machine, and all you need on the target is a wrapper function behind that to use the UART. In fact I have target library functions for printf and scanf, and guess what - they were written and debugged on my Linux machine before they went into my target libraries. The same goes for a lot of my target library code - malloc/free, strlib, stdio, linked lists, queues, mathlib, and so on.
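One hedged sketch of that wrapper idea: format with vsnprintf, then hand the bytes to whatever "UART" is underneath. On the host the sink below is a capture buffer (it could just as well be stdout); on the target, `uart_puts()` would push the same bytes out the serial port. All names here are illustrative assumptions:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static char uart_capture[256];          /* host stand-in for the UART */

/* The target build replaces only this function with one that feeds
 * the UART transmit register. */
static void uart_puts(const char *s)
{
    strncat(uart_capture, s, sizeof uart_capture - strlen(uart_capture) - 1);
}

/* The application calls this everywhere; nothing above uart_puts()
 * changes between the Linux build and the target build. */
int uart_printf(const char *fmt, ...)
{
    char buf[128];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    uart_puts(buf);
    return n;
}
```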
Take a look at the code necessary. How much of it has to use I/O resources of the target? How much of that can be simulated using resources on your development machine? What’s the time difference between a compile-link-program-the-target cycle and just compiling and linking on your desktop?
Using serial ports on Linux is very easy, so acquiring GPS data is a snap. Taking the development process up to a higher level is also possible. You could write your application in Python much faster than in C. This would give you an idea of its complexity. And after you’ve done this, you’d find that writing functions which mimic Python behavior makes translation to C easier.
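To make the GPS point concrete: on Linux the receiver is just a serial device (open `/dev/ttyUSB0`, set the baud rate with termios, fgets() NMEA lines), so the parsing can be developed against canned sentences long before any hardware arrives. This is a hypothetical sketch with my own function names (`nmea_field`, `gga_latitude`), not the original author's code:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy comma-separated field n (0-based) of an NMEA sentence into out. */
static int nmea_field(const char *s, int n, char *out, size_t len)
{
    while (n-- > 0) {
        s = strchr(s, ',');
        if (!s) return -1;
        s++;
    }
    size_t i = 0;
    while (*s && *s != ',' && *s != '*' && i + 1 < len)
        out[i++] = *s++;
    out[i] = '\0';
    return 0;
}

/* Decode the ddmm.mmmm latitude of a $GPGGA sentence to decimal degrees. */
static double gga_latitude(const char *sentence)
{
    char fld[16], hemi[4];
    if (nmea_field(sentence, 2, fld, sizeof fld) != 0) return 0.0;
    if (nmea_field(sentence, 3, hemi, sizeof hemi) != 0) return 0.0;
    double raw = atof(fld);                 /* ddmm.mmmm */
    double deg = (int)(raw / 100);
    double lat = deg + (raw - deg * 100.0) / 60.0;
    return hemi[0] == 'S' ? -lat : lat;     /* south latitudes negative */
}
```

On the target, the same parser sits behind the UART receive routine instead of fgets(); the logic never has to be debugged twice.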
So now I’m going to revise the advice I gave at the top. Don’t buy a development board yet. Put that decision off until your application is running on your desktop.
One last note: Don’t write C code unless you have to. Write a Python program to generate C code whenever possible. The whole user interface program in my present project is generated by a Python program. It’s the biggest object file on the system because it generates all the user programmable parameter forms, the user display panels, user menus, etc.
All the processor setup code is generated by another Python program, including the I/O pins, UARTs, interrupts, USB, ADC, and more. These are all functions I won’t ever have to write again, because I only have to plug some values into Python dictionaries. And I don’t have to keep looking at pin selection tables in the user manual to make sure the right function is selected, or double-check that my defines to select it are correct.
Most of all, have fun!