Bad product support and details

Arduinos simply aren’t up to the task of acquiring, storing, and processing image data in a reasonable time frame. Microcontrollers in general excel at responding to discrete events but are not designed or intended for large amounts of streaming data. More powerful microcontrollers like the ARM line may have fast enough execution times to get basic functionality from a 640x480 camera, but even then you are consuming a huge portion of the available bandwidth with little room to do anything else. Higher resolutions and frame rates without a JPEG output option are not feasible with standard microcontrollers.

Non-traditional microcontrollers like the XMOS and Propeller might be more up your alley for applications like this. Both controllers have multiple cores, and the XMOS is capable of running at relatively high frequencies. You may be able to dedicate a core to acquiring the image while other cores process the data as it becomes available. You would have to do the due diligence of researching their capabilities to see whether they can meet your requirements.

The ideal solution for camera interfacing is to use programmable logic like CPLDs and FPGAs. These devices excel at repetitive tasks involving large amounts of high-speed data, making them suitable for any camera device you throw at them. They can simultaneously acquire an image, store it in external memory, compare the differences between the new and old frame, and make the data available through an SPI, I2C, or other interface of your choice. The problem is the steep learning curve: you have to understand how these devices work and how they are programmed, and the traditional programming concepts taught to C programmers don’t apply here.

thebecwar has already done a first round of go/no-go timing checks for the Arduino and says you are pushing it even running the camera at its slowest while using highly optimized assembly code. Given your lack of experience in this subject matter, I would consider finding another way to tackle your project.

-Bill

This part really isn’t intended to capture single frames. I’m not sure it can even be done without another processor or FPGA to grab frames and store them for you. If you can’t clock out a whole frame before the next one starts to capture, I can see three possibilities:

  • You might confuse the device and it crashes.
  • The device might reset the readout pointer to the beginning of the buffer (partial image transfer).
  • The image transferred would be composed of data from successive frames. Any sort of motion would create a fragmented-looking image.

It might be possible if the frame buffer can be read while the camera is in sleep mode. Completely untested armchair design: Issue wakeup command via I2C, wait for a rising edge on VD, Issue sleep command via I2C, wait for 3 frames (200ms @ 15fps), clock out the data.

    Running the numbers again, I don’t see any way that you can clock the camera from your processor running at 16 MHz. Typically for a clock signal you’d use a timer interrupt, but even the most optimistic estimate places the interrupt overhead at 9 clock cycles, which is more than 180 degrees of Dclk. Even at 20 MHz, you’d only have one clock cycle in between interrupts. You could still code it manually in ASM, using some very careful math and a close reading of the processor’s datasheet, but it would be an uphill struggle at best. (Note this only applies to a 1 MHz Dclk. The 6 MHz clock required to start the camera’s operation would be impossible as an interrupt-based clock.)

    Running at 16 MHz, your fastest interrupt-driven clock would be about 1.77 MHz, and that assumes you leave no spare clock cycles to do anything else, like reading the data off the bus or averaging it. 8 Mbps is a LOT of data for an 8-bit processor to handle. At 20 MHz, your maximum clock increases to about 2.22 MHz and spares you, on average, 1 clock cycle between interrupts to do anything.

    Manually coding in a 1 MHz clock without interrupts is possible, but requires you to toggle the clock pin every 8 CPU cycles. That requires that you know with 100% accuracy how many clock cycles each of your instructions takes, and it would take a lot of forethought and planning to keep your clock consistent. (Example: you have a clock update 3 cycles from now, but your next instruction takes 4 cycles to complete. You’d need 3 NOPs to fill the space.) You’d also be dedicating 12.5-25% of your computing power to the simple task of running a clock (2 or 4 cycles out of every 16).

    Because of all the above, I wouldn’t recommend using this camera with a processor running at anything less than 50 MHz. Could it be done? As an academic exercise I’d say it’s possible, but that’s with no looping, no branching, limited conditionals, and almost no data processing. Any 16-bit arithmetic, floating point, or subroutines would make life a living hell.

    propjohn:
    It might be possible if the frame buffer can be read while the camera is in sleep mode. Completely untested armchair design: Issue wakeup command via I2C, wait for a rising edge on VD, Issue sleep command via I2C, wait for 3 frames (200ms @ 15fps), clock out the data.

    Looking at the camera’s datasheet, it looks like it will finish sending an entire frame before it goes into a powered-down state. Also, sleep mode doesn’t allow you to clock out the data.

    The only way I see this working is to find/create an IC/CPLD/FPGA/DSP or an ARM7/9 based processor that can buffer the frame data for you.

    –tB–

    @Phalanx - Thanks for sanity checking my math. I’ve been running magnetic flux equations all day, and I wasn’t sure if my numbers were in the right ballpark.

    I just can’t understand how it is 8 Mbps? I won’t be encoding the image, just handling pixels on the fly. Doing 492 × 600 × 2 (the screen dimensions, which are the pixel counts, × 2 bytes per pixel, since I will be outputting in RGB) it becomes 590,400, or 590 kbps. I could possibly do 0.5 fps if you still consider that taking snapshots each second is too much. Also, I don’t know if you included this in your calculations, but I won’t be outputting the data anywhere so far. I will just be calculating the 2D coordinates of the brightest pixel on the screen (using math).

    Also, I am sorry to ask, but if my Arduino’s clock speed is 16 MHz, why would it be hard to use a camera which runs at a much lower frequency?

    I won’t be doing any other operations…

    d4n1s:
    Also, I don’t know if you included this in your calculations, but I won’t be outputting the data anywhere so far. I will just be calculating the 2D coordinates of the brightest pixel on the screen (using math).

    Also, I am sorry to ask, but if my Arduino’s clock speed is 16 MHz, why would it be hard to use a camera which runs at a much lower frequency?

    The large posts by thebecwar are trying to show you the difficulties in simply getting the raw data off the camera using an AVR controller. At the slowest you can realistically clock the camera, you are pushing the physical limits of what the AVR can muster just to produce the clocking signals for the data. This leaves you no time to move the data, let alone perform compare operations on it. The AVR, and every other 8-bit MCU I can think of, is simply not designed to operate in this type of application. You can use more powerful controllers like an ARM7 or ARM9, but those only solve the problem by being able to execute more instructions per second. The general inefficiency is still there, and you will hinder the ARM’s ability to perform other tasks.

    FPGAs can work on this kind of data without breaking a sweat. Their architecture makes them especially adept at processing repetitive loops with tight timing constraints. If you want to move forward with this camera, that would be your best option.

    You could also try to find a camera that outputs JPEG images, which would reduce the hardware requirements of your controller but would increase the complexity of your code, since you would have to handle decoding the image.

    -Bill

    ok… the 8 Mbps comes from 1 MHz (1,000,000 Hz) × 8 bits per transfer = 8,000,000 bits per second = 8 Mbps. Remember little ‘b’, so we’re talking bits. For comparison with ethernet/wifi numbers, it works out to about 1 MB/s (big B = Bytes). This is far more than serial, even at 115200 baud. (69.4 times more, to be exact.)

    The frame rate is set by your clock speed, not the other way around. At 1 MHz, according to my references, you’ve got about 1.5 FPS. Much below 1 MHz your image won’t clock out correctly and will be too overexposed to do you any good.

    The clocking situation requires an understanding of how accurate clocks are usually generated in software. (Yes, some chips do have clock dividers built in to provide this, but we’re talking about a software solution.) Usually when you want to generate a clock, you set up a timer that throws an interrupt every time it overflows or hits a specific value. Knowing your processor’s clock speed, you can calculate with reasonable precision the number of cycles required to get the clock rate you need. You also have to realize that the timer must fire at twice the frequency of the desired output clock, since each interrupt produces only one edge. (We’ll ignore actually reading out the data for now.)

    The timer fires an interrupt every time it overflows or reaches its set value. On the 8-bit ‘mega’ AVR chips, calling an interrupt vector requires 4 instruction cycles. (The current instruction pointer needs to be pushed to the stack, the offset of the vector needs to be read out of memory, and the processor needs to jump to that location.) Returning from an interrupt also requires 4 clock cycles. (The processor needs to unwind its execution back to the point it was at before the interrupt fired.) That’s 8 processor cycles, and all you’ve done is move from one spot in memory to another. You haven’t actually done anything yet.

    Inside the interrupt vector we need to toggle the clock pin. You could read the port, XOR the clock bit, and write it back, but that takes more clock cycles. We’ll use a 2-instruction method instead: on the ATmega48/88/168/328-class AVRs, writing a 1 to a bit of the PINx register toggles the corresponding output pin.

    LDI R16, 0x01   ;We're using pin 0 of port B for this example
    OUT PINB, R16   ;Writing 1 to a PINx bit toggles that pin (1 cycle)
    

    Adding the 2 cycles we need to toggle the clock to the 8 we need to enter and return from the interrupt, we have a total of 10 processor cycles per clock transition, or 20 for every full clock cycle. Therefore, the fastest clock we can generate is about 1/20th of our processor’s clock. (I say “about” because it’s after midnight and my math skills decline after 11.) 1/20th of 16 MHz is 800 kHz.

    Add in the instructions to fetch the data off the parallel bus when the clock goes low, and you can see how it’s mathematically improbable that you have enough processor speed to clock the camera.

    Hopefully this is clear enough. Parallel data buses are easy to interface, but at higher throughput they can be quite a pain to work with, especially on a limited platform like an MCU.

    OK, I think I know what you mean… is there any chance you could help me import this breakout board design into my Eagle?

    I’ve been trying for hours now… http://kreature.org/ee/avr/cmos_cam/TCM … ut_1.1.rar

    I found it here viewtopic.php?f=5&t=10314&start=105 from user Kreature

    The .sch and .brd files are missing!

    Should I put resistors on the output pins or only the input pins of the camera to do the logic level conversion? Which ones need them?

    On the topic of reading datasheets, there’s now an introductory tutorial on the subject available on the SparkFun site: Bite-size Datasheet Tutorial - News - SparkFun Electronics

    –Philip;

    I just saw it… amazing work!!!