TCM8240MD connector and example

I’ve finally had some success with the '30:

[Robot2 Camera+LCD: Working at last! (Flickr)](http://farm4.static.flickr.com/3236/3088364166_c985bb0b25.jpg)

I’m using 15fps, RGB, 128x96 (the same resolution as the LCD SparkFun sells), no sync codes, and everything else at default.

My microcontroller is an STM32F103VBT6, a Cortex-M3 ARM. I’d recommend the VET6 over it, though: it has significantly more memory, so you’d be able to store a whole frame (at 128x96) in RAM and write it out at your leisure. With mine, I’m writing out each line after reading it, using 18MHz SPI with DMA.

The image on the screen updates fast enough that you can’t see a scan line; motion is a bit blurred, but it’s pretty similar to a mobile phone. Colour reproduction on the LCD is excellent and colours look lifelike.

The camera is clocked at 6MHz initially and then dropped to 4MHz when I start collecting image data (at this resolution DCLK=0.5 EXTCLK so 2MHz data input).

I’m reading the image data with some simple assembler that runs on a normal interrupt firing on each HD rise, using another interrupt that fires on VD rise to enable the HD ones.
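
Roughly, the structure looks like this in C; treat it as a minimal sketch, since hd_interrupt_enable(), dclk_is_high(), camera_read_port(), lcd_begin_frame() and lcd_spi_dma_send() are just placeholder names for whatever your GPIO/EXTI/DMA setup provides, and the real inner loop on my board is hand-written assembler:

#include <stdint.h>

#define LINE_PIXELS 128
#define LINE_BYTES  (LINE_PIXELS * 2)           /* RGB565: 2 bytes per pixel */

/* Placeholders for board-specific helpers (not a real API): */
extern void    hd_interrupt_enable(void);
extern void    lcd_begin_frame(void);
extern int     dclk_is_high(void);
extern uint8_t camera_read_port(void);          /* read the 8 camera data pins */
extern void    lcd_spi_dma_send(volatile uint8_t *buf, uint32_t len);

static volatile uint8_t line_buf[LINE_BYTES];

/* VD rising edge: a new frame is starting, so arm the HD interrupts
   and tell the LCD to start a new image. */
void vd_rising_edge_isr(void)
{
    hd_interrupt_enable();
    lcd_begin_frame();
}

/* HD rising edge: read one 256-byte line, sampling a byte each time
   DCLK drops low, then hand the line to DMA to push out over SPI. */
void hd_rising_edge_isr(void)
{
    for (uint32_t i = 0; i < LINE_BYTES; i++) {
        while (dclk_is_high())  { }             /* wait for DCLK to drop */
        line_buf[i] = camera_read_port();       /* grab the data byte */
        while (!dclk_is_high()) { }             /* wait for DCLK to rise again */
    }
    lcd_spi_dma_send(line_buf, LINE_BYTES);     /* DMA shunts it out over SPI */
}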

Hello,

I have put together a summary of the current status of the project by asking each of the significant developers what equipment/hardware approach they are using. Note that there are developers working on both the TCM8240 (1300x1040) and TCM8230 (640x480) SparkFun cameras. The '40 appears to have JPEG encoding on board, but so far I think the developers have had most success using the camera in RGB mode.

(1) KreAture ('40):

  • Developed a breakout board and published it on the forum

  • Published the underside and side pinouts of the '40

  • Published header structs for BMP output from RGB data. Adding this struct as a header to the raw RGB data from the camera lets most Windows (and other?) software read the file as a standard bitfield BMP without any further processing (see the sketch just after this list).

  • Experimented with JPG output, but is currently using RGB. His platform has too little memory to handle all the data from an image in one go, so it captures 2-4 lines of data at a time and skips on to the next image until a complete frame is captured. Data is continuously sent to the host via serial. If the image is stationary this works OK, but it takes a long time per image.

  • Used an ATmega64 for most tests, but wants to run an AT32 AP7000 on the NGW100 board later. Has access to, and uses, an STK500, STK600 and JTAG ICE mkII as well as 40Msps, 200Msps and 1Gsps digital scopes.
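
For reference, here is a sketch of the kind of header KreAture describes: prepend it to raw 16-bit RGB565 data and most viewers should open the result as a bitfield BMP. The layout is the standard BITMAPFILEHEADER/BITMAPINFOHEADER pair plus the three RGB565 channel masks; the field values here come from the BMP format itself, not from KreAture’s actual struct.

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    /* BITMAPFILEHEADER (14 bytes) */
    uint16_t bfType;            /* 'BM' = 0x4D42 */
    uint32_t bfSize;            /* total file size: 66 + width * height * 2 */
    uint16_t bfReserved1;
    uint16_t bfReserved2;
    uint32_t bfOffBits;         /* offset to pixel data: 14 + 40 + 12 = 66 */
    /* BITMAPINFOHEADER (40 bytes) */
    uint32_t biSize;            /* 40 */
    int32_t  biWidth;
    int32_t  biHeight;          /* negative = top-down rows, as the camera sends them */
    uint16_t biPlanes;          /* 1 */
    uint16_t biBitCount;        /* 16 */
    uint32_t biCompression;     /* 3 = BI_BITFIELDS */
    uint32_t biSizeImage;       /* width * height * 2 */
    int32_t  biXPelsPerMeter;
    int32_t  biYPelsPerMeter;
    uint32_t biClrUsed;
    uint32_t biClrImportant;
    /* RGB565 channel masks required by BI_BITFIELDS (12 bytes) */
    uint32_t redMask;           /* 0xF800 */
    uint32_t greenMask;         /* 0x07E0 */
    uint32_t blueMask;          /* 0x001F */
} bmp565_header_t;
#pragma pack(pop)

One caveat: BMP rows must be padded to a multiple of four bytes, which already holds for the 16-bit 128-, 160-, 320- and 352-pixel-wide modes discussed in this thread.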

(2) ma4826 ('40):

  • Published a set of I2C register values and matching 352x288 images

  • Using a CPLD at 50MHz (XC95288XL, 30% used) and a 256Kx16 SRAM (12ns) connected to a PC for the tests.

  • Has tried these clock values, with and without PLL:
  1. 6.25MHz (50/8)
  2. 8.33MHz (50/6)
  3. 12.5MHz (50/4)

  • Has tried these sizes:
  1. 352x288
  2. 160x120
  3. 320x240
  4. 1280x1024 (only 1280x200 fits in the SRAM)

  • Without JPEG everything is OK, but with JPEG he obtains defective JPEG images.

(3) buffercam ('40):

  • Has shown pictures of his breakout board

  • Has shown I2C register values

  • Has shown noisy 352x288 images, but the images are looking much better now. The confetti-like noise in the first image was due to the data acquisition device (an Agilent mixed-signal scope).

  • Using ma4826’s I2C register values for the most part

  • Can now stream 352x288px images at a rate of 2.8 FPS onto a microSD card continuously. (The 2GB card can hold almost an hour of pictures at this rate.)

  • Hardware is a PIC32 USB Starter Kit running at 80MHz. The data from the camera feeds into an AL440B-12 512KB FIFO buffer. The data is clocked out using the Parallel Master Port on the PIC32, and the FatFs file system is used to write the data to a microSD card in SPI mode (a minimal sketch of that write path follows this list).

  • This is for a senior design project, so there will be MUCH more info and the full code posted by the end of the week.

  • As far as buffercam can tell, there is nothing stopping the setup from capturing JPEG data except figuring out the correct register settings. The data captured in JPEG mode seems to have a good JPEG header but no valid data in the body of the image. (It’s just a string of bytes that repeats over and over.)
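
To make the microSD leg concrete, here is a minimal FatFs sketch of that write path (using the FatFs R0.10-style f_mount signature). read_fifo_chunk() stands in for the PMP reads from the AL440B, and the file name is made up; this is an outline under assumptions, not buffercam’s actual code.

#include <stdint.h>
#include "ff.h"                              /* FatFs */

#define FRAME_BYTES (352u * 288u * 2u)       /* one RGB565 frame = 202,752 bytes */
#define CHUNK_BYTES 512u                     /* 202,752 / 512 = 396 chunks per frame */

/* Placeholder: clock CHUNK_BYTES out of the AL440B FIFO via the PMP. */
extern void read_fifo_chunk(uint8_t *dst, uint32_t len);

static uint8_t chunk[CHUNK_BYTES];

static int store_one_frame(FIL *fil)
{
    UINT written;
    for (uint32_t done = 0; done < FRAME_BYTES; done += CHUNK_BYTES) {
        read_fifo_chunk(chunk, CHUNK_BYTES);
        if (f_write(fil, chunk, CHUNK_BYTES, &written) != FR_OK || written != CHUNK_BYTES)
            return -1;                       /* card full or I/O error */
    }
    f_sync(fil);                             /* flush after every frame */
    return 0;
}

void capture_loop(void)
{
    FATFS fs;
    FIL   fil;

    if (f_mount(&fs, "", 1) != FR_OK)        /* mount the SPI-mode card */
        return;
    if (f_open(&fil, "frames.raw", FA_WRITE | FA_CREATE_ALWAYS) != FR_OK)
        return;
    while (store_one_frame(&fil) == 0)
        ;                                    /* roughly 2.8 frames per second */
    f_close(&fil);
}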

(4) Twingy ('30):

  • Using AT91SAM7S64, has shown 128x96 images

  • Will publish a summary paper soon

(5) Random ('30):

  • Using an ARM STM32F103VBT6, has shown 128x96 images

  • Random uses the ARM chip to clock in data using some short assembler code that reads the data clock, waits for it to drop low and then copies the byte of data into RAM. It reads in one line (128 pixels, or 256 bytes) and then triggers a DMA channel to send this data out over SPI to the attached LCD, which takes RGB data in the same format. The DMA channel shunts the data over SPI automatically. This is repeated for each line, and then again for each image (with the LCD told to redraw at each image).

  • The camera is configured in RGB mode, 128x96, no sync codes. More specifically, register 0x03 is set to 0x22 and register 0x1E to 0x48 (a configuration sketch follows this list).

  • The camera is interfaced using a commercially made PCB which connects the camera to the ARM directly; there is no supporting logic. The camera’s data pins are connected to 8 sequential pins on a port on the ARM, and the sync lines are connected to random I/Os.

  • Is not using any debugging, but is using the “Logic” logic analyser to look at the data the camera is sending and to figure out timing. Is programming the board with a USB-TINY-ISP from Olimex. The camera is the '30, the smaller version without JPG and with a max image size of 640x480 (not that anyone’s got this yet).

  • Uses a hardware interrupt triggered on each VD and HD sync. The VD interrupt routine enables the HD interrupts, and each of those runs the assembler that reads in the image data. The interrupts are normal interrupts fired by an event connected to an interrupt signal.

  • Was originally only storing the first 32 pixels of the first 24 lines of the image and sending them serially to an OLED screen, but is now using the SparkFun LCD, which can be sent data considerably faster (it takes less time to send it the data than it does to receive it from the camera!).

  • The camera is clocked at 6MHz to configure, then at 4MHz when receiving data. It only initialises correctly every now and again, though more often than not; this seems to be pretty much random, but might be a consequence of the slow clock. When initialisation fails he gets random colour noise or blocks of solid colour, with no consistent failure.
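
Pulling those register values into the I2CWrite() style used later in this thread gives a sketch like the following. Only the 0x03 and 0x1E values come from Random’s post; the wake-up write to register 0x02 and the short delay are assumptions borrowed from buffercam’s init sequence further down, so check them against the datasheet.

/* Assumed helpers, matching the snippet later in the thread: */
extern void I2CWrite(unsigned char reg, unsigned char value);
extern void usDelay(unsigned int us);

void tcm8230_configure(void)
{
    I2CWrite(0x02, 0x00);   /* assumed: take the camera out of standby ("Set Camera Active") */
    usDelay(2);
    I2CWrite(0x03, 0x22);   /* value from Random's post (RGB mode, 128x96) */
    I2CWrite(0x1E, 0x48);   /* value from Random's post (no sync codes) */
}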

I hope this is helpful to others (like me) who have come late to this project.

Regards, Hugh

Interesting summary by hughanderson, and I caught something I hadn’t noticed: the corrupt/illegal data in JPEG mode might be an error code…

I can’t remember where in the docs I read this, but there are some limits on resolution and minimum PLL speeds that must be met for JPEG output to work. It might be worth looking into.

I’m heading down that road as soon as I have time to finish my code for the AVR32 platform. My plan is to use its image sensor interface, as it should be fully compatible with the CMOS cam.

Well done Random, that looks like a fantastic project! :slight_smile:

Here is a technical report I compiled for those interested in using the TCM8230 and TCM8240 cameras.

http://www.js.cx/~justin/documents

Twingy:
http://www.js.cx/~justin/documents

It seems the link is not working....

Agreed. 404 not found.

Domain renewal was today, as luck would have it; give it a few more hours.

Twingy:
Here is a technical report I compiled for those interested in using the TCM8230 and TCM8240 cameras.

http://www.js.cx/~justin/documents

So… what’s the final word on this?

Can the TCM8240 be used in 1.3-megapixel mode? Can the JPEG feature be used? If not, what are the roadblocks?

Thanks

What I described in this report was a general technique for capturing data from 8-bit I2C image sensors (cameras). You can apply this technique to capturing data off the TCM8240MD and other cameras as well. I don’t see any issues with using this technique to capture data at 1.3MP provided your processor has enough internal or external memory to store the data.

I’ve only just joined this thread, as I’m about to place an order for one of these cameras and have been going over and over the datasheet trying to get my head around things. I may be way off here, but I have a couple of thoughts.

buffercam:
Also, I noticed that if I set the PLL mode to 0x1 instead of 0xF, both error flags become set (“ENC_ERRN” in addition to “FULL_ERRN.”)

I assume the FULL_ERRN is because you’re not capturing data quickly enough and it’s filling up the internal FIFO, but I’m not sure why you’re getting the ENC_ERRN… do you reckon you’re reading it fast enough?

buffercam:
Here are some other things that I’ve noticed:

In the DQT structure, I received a “length of field” of 0x84. This matches the length of the DQT data that I received, but does not match the length mentioned in the datasheet (pg. 21 says it should be 197bytes=0xC5).

The other structures match what the datasheet says as far as length of field goes (SOF, DHT, and SOS).

(As long as you realize that the datasheet has a typo in the SOF length of field - it should be 0x11 instead of 0xC5 - someone forgot to change it after copying and pasting.)

Looking at your pic, I can see the Y and U quantisation tables, but there is no V table (you can see the Y table starting with 0x00 and the U table with 0x01, but there is no 0x02 for a V table), so its associated 64-byte table is missing. This adds up: 132 (0x84) + 65 (1 + 64 = 0x41) = 197 (0xC5).
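
If it helps anyone check this on their own captures, here is a small standalone sketch that walks the DQT (0xFFDB) segments in a JPEG buffer and prints which quantisation table IDs are present, so you can see directly whether an id-2 (V) table is there:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Scan a JPEG buffer for DQT (0xFF 0xDB) segments and list the table IDs. */
void list_dqt_tables(const uint8_t *jpg, size_t len)
{
    for (size_t i = 0; i + 3 < len; i++) {
        if (jpg[i] != 0xFF || jpg[i + 1] != 0xDB)
            continue;                                            /* not a DQT marker */

        size_t seg_len = ((size_t)jpg[i + 2] << 8) | jpg[i + 3]; /* includes the 2 length bytes */
        printf("DQT at offset %u, length 0x%02X (%u bytes)\n",
               (unsigned)i, (unsigned)seg_len, (unsigned)seg_len);

        size_t p   = i + 4;                                      /* first table header byte */
        size_t end = i + 2 + seg_len;
        while (p < end && p < len) {
            unsigned precision = jpg[p] >> 4;                    /* 0 = 8-bit entries */
            unsigned table_id  = jpg[p] & 0x0F;                  /* 0 = Y, 1 = U/Cb, 2 = V/Cr */
            printf("  table id %u (precision %u)\n", table_id, precision);
            p += 1 + (precision ? 128u : 64u);                   /* id byte + table data */
        }
        i = end - 1;                                             /* resume after this segment */
    }
}

(For what it’s worth, plenty of encoders emit only two quantisation tables, with Cb and Cr sharing the second one, so seeing just two table IDs is not automatically a fault; it simply explains the 0x84 length.)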

buffercam:
Any ideas as to why the image data repeats and why I’m getting the error code?

I have an idea, but I have very little experience with JPEG compression, so feel free to shoot it down…

Going back to the YUV quantisation tables (DQT data), the datasheet says on page 20:

The host can adjust the picture quality mode (namely compression ratio) by sending a specific quantisation table or by sending Q table gain via IIC bus

Having a look at the registers, there is obviously no room for three 64-byte tables, so I can only assume we only have access to the gains? The closest I can see (guess) are registers 0xE9 & 0xEA, called **DYQTG** & **DUVQTG**. As a total guess, could they be **D**efine **Y/UV** **Q**uantisation **T**able **G**ains?

Can you verify the default values for these two registers? Can you try adjusting them and see what happens? I wonder if one of them is actually zero and causing the weird repeated data?

Just a few thoughts to maybe spark discussion here again (until I get my camera modules and can try it myself :slight_smile: )

Great job on the summary, Hugh.

Nice insight on the Quantization Table Gains, MattyZee. That’s something that might help.

Here is all of our documentation on our project from last semester:

http://www.buffercam.com

(The latest stuff is in “Documents” and then the “Final Report” section.)

Here is the latest video (352x288 at 2.8 FPS):

http://www.prism.gatech.edu/~gth681s/TCM8240_video.wmv (~5MB)

I’d really like to get JPEG working. Our hardware setup should be able to handle it - we just need the correct register settings.

I’m quite busy this week, but I should be able to pick things up again next week. I’ll try changing the gain values and see if that helps.

Regards,

David

Hi,

I am trying to interface the camera with an FPGA. I am still stuck on interfacing through the IIC bus: what are the slave address and subaddress? :frowning: It may be a foolish question for those who have already interfaced it, but I am a beginner, so please help me out with the subaddress.

bhaskarapte:
Hi,

I am trying to interface the camera with an FPGA. I am still stuck on interfacing through the IIC bus: what are the slave address and subaddress? :frowning: It may be a foolish question for those who have already interfaced it, but I am a beginner, so please help me out with the subaddress.

I haven’t got my camera yet, so I haven’t actually tried this, but page six of the datasheet shows the slave address as 0b0111101 (0x3D in hex). The subaddresses are then the addresses of the registers listed on pages 7-10.
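
On the wire, a single register write would then look like the sketch below, assuming that 0x3D slave address; i2c_start(), i2c_write_byte() and i2c_stop() are placeholders for whatever primitives your I2C core or peripheral provides:

#include <stdint.h>

#define TCM_SLAVE_ADDR 0x3D     /* 7-bit address from page 6 of the datasheet */

/* Placeholders for the bus primitives provided by your I2C core/peripheral: */
extern void i2c_start(void);
extern void i2c_write_byte(uint8_t b);
extern void i2c_stop(void);

/* Write one byte to a camera register (subaddress). */
void tcm_write_reg(uint8_t subaddr, uint8_t value)
{
    i2c_start();                                   /* START condition */
    i2c_write_byte((TCM_SLAVE_ADDR << 1) | 0);     /* 0x7A: address + write bit */
    i2c_write_byte(subaddr);                       /* register (sub)address */
    i2c_write_byte(value);                         /* data byte */
    i2c_stop();                                    /* STOP condition */
}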

It’s been a while since I worked with FPGAs, but the last time I did, I used the I2C core from OpenCores. Have a look at the projects here: http://www.opencores.org/browse.cgi/by_category

I think I used this one:

http://www.opencores.org/projects.cgi/web/i2c/overview

So if you don’t have your I2C controller coded up yet, that might give you a head start…

Hope that helps

Anybody using this with a Gumstix or other ARM9 board? ARM9 SoCs have a built-in camera interface that seems to be compatible… it should be far less work than bit-banging it or building your own FPGA/CPLD glue logic.

Hi,

I am getting DCLK, but at the same frequency as EXTCLK, and HBLK and VBLK are always low. D3 and D7 are high; every other pin is low. Please can anybody tell me how to activate HBLK and VBLK?

See [this post](http://forum.sparkfun.com/viewtopic.php?t=10314&postdays=0&postorder=asc&start=191).

That initialization was based on ma4826’s original settings.

In short: write 0x00 to register 0x02 (Set Camera Active).

As a side note, we toggled the reset during initialization, but I don’t think it was necessary.

It might be something to try if it’s giving you trouble.

I2CWrite(0x02,0x00); //Set Camera Active
usDelay(2);
I2CWrite(0x02,0x40); //Set Camera Reset
usDelay(2);
I2CWrite(0x02,0x00); //Set Camera Active


Have any of you been able to capture JPEG yet? I am interested in hiring a contractor for advice on how to get this module to capture a JPEG and to provide programming examples.

Please let me know - Regards, Jim

P.S. I am a newbie so if this post is inappropriate for this board I apologize in advance, and please let me know.

jwg,

No one here has figured out how to get JPEG working.

The camera is not well documented, so it is difficult to debug.

MattyZee has suggested changing the quantization table gains (DYQTG & DUVQTG), but I have not yet found time to try his suggestion. I’ll post an update when I do.

-David

EDIT: My adviser put it like this: “I’m sure this camera is used in a gazillion Toshiba cellphones - the problem is that the rest of the documentation for the camera likely consists of internal Toshiba documents written in Japanese.”

So if you can find a Japanese-speaking contractor with ties to Toshiba - you’ll be set!

Hi to everyone in this thread!

I found this TCM8240 camera a long time ago and I think it’s great, but for various reasons I never bought it or started to play with it.

Over the last two weeks I have read its datasheet completely more than once, and I have also carefully read about 80% of this thread.

I have also read about buffercam’s work (I read almost all the documents on his web page). A really amazing project!

Now I think I am ready to buy it, but first I need someone’s advice.

(I am from Argentina, and I will spend at least USD 30 on ship*ing.)

Do you think that it will be possible to read JPEG data from the camera?

I think that RGB data will also be great, but what made me love this camera was the JPEG compression.

I work with PIC MCUs. I think my problem is the minimum 6MHz frequency at EXTCLK.

But as there is no MCU with 300KB of RAM… I think that with any processor it will be necessary to add external components, such as RAM, for buffering.

I plan to use a 512Kx8 RAM and to address and control it with pure logic (counters and other gates). The PIC will only send commands and check when the data for a picture is complete. The buffering will be automatic.

I also thought about buying the C328R camera module, but it is five times more expensive than the TCM and doesn’t have 1.3 Mpx.

Conclusion: for simple RGB output I will need external logic and memory.

It is not worth it for VGA resolution.

But it will be worth it if I can take JPEG 1.3 Mpx data from it.

Let me know what you think…

PS: I had to copy/paste sentence by sentence because I used a “forbidden word”. My language is not English (it’s Spanish), but what is wrong with “ship*ing”?