RAM limitations of LPC2148

Maybe this is a dumb question. I’m thinking of using the SF OLED display with an LPC2148. In “pseudo-SPI mode”, the display controller doesn’t provide for reading back the current display buffer, which means I’d probably want to keep some or all of it in memory on the uC.

The display is 128x128x24bit, so that would require 49152 bytes (48k) of memory. This happens to be the exact amount of RAM that the LPC2148 has.

So I guess my question is: what are the ramifications of absolutely FILLING the RAM with one or more variables? Is this even possible? The 2148 doesn’t have an external bus, so my external RAM options are nonexistent without switching controllers.

I guess a better solution would be to use the display in Parallel mode, but the underlying question still stands: Does 48k of RAM mean no variables larger than 48k?

I guess it depends on the program. If it uses only variables that fit in registers, you can take up the full RAM with a single variable. I haven’t tried it, but I don’t see why not.
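For example, a single static array can claim the whole framebuffer’s worth of RAM (just a sketch; whether it actually links depends on what your toolchain still needs for the stack, heap, and any other statics):

```c
#include <stdint.h>

/* One variable covering the whole 128x128x24-bit buffer (49152 bytes).
   The linker will refuse it if the stack, heap, and other statics
   no longer fit in the remaining on-chip RAM. */
static uint8_t framebuffer[128u * 128u * 3u];

uint8_t *get_framebuffer(void)
{
    return framebuffer;
}
```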

As far as the framebuffer goes, do you really need the full 24 bits? If 16-bit color is enough, don’t store the remaining bits at all, and just pad them back out on each write to the display.
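Roughly like this, assuming the controller wants one red, green, and blue byte per pixel (oled_write_byte() is just a stand-in for whatever the actual SPI/parallel write routine turns out to be):

```c
#include <stdint.h>

extern void oled_write_byte(uint8_t b);   /* placeholder driver call */

/* Store the frame as 16-bit RGB565 (32 KB instead of 48 KB) and expand
   each pixel to 24-bit RGB888 only as it is pushed to the display. */
static uint16_t frame[128 * 128];

static void push_pixel(uint16_t p)
{
    uint8_t r = (p >> 11) & 0x1F;
    uint8_t g = (p >> 5)  & 0x3F;
    uint8_t b =  p        & 0x1F;

    /* Replicate the top bits into the padding so full-scale stays full-scale. */
    oled_write_byte((uint8_t)((r << 3) | (r >> 2)));
    oled_write_byte((uint8_t)((g << 2) | (g >> 4)));
    oled_write_byte((uint8_t)((b << 3) | (b >> 2)));
}

void refresh_display(void)
{
    for (int i = 0; i < 128 * 128; i++)
        push_pixel(frame[i]);
}
```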

I’ve been trying to write an even more compressed framebuffer technique, where the data in the buffer would say “blank up to such-and-such an address” and the refresh routine would use the LCD controls to just move the cursor there instead of clocking out every blank pixel.

LCD is the biggest reason I’m waiting for the LPC2888 board from Olimex

It’s hard to imagine writing any useful software that doesn’t use ANY memory other than the screen buffer, so it’s probably not a good idea to use your entire RAM for that purpose…

Consider storing your buffer with only 8, 12, or 16 bits per pixel, which are then padded out to the full 24 bits when updating the OLED. You’ll lose some color resolution, but then there’s not much real need for 16 million simultaneous colors on a display that only has 16 thousand pixels!
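For a 128x128 panel (16384 pixels) that works out to 16 KB at 8 bpp, 24 KB at 12 bpp, or 32 KB at 16 bpp, versus 48 KB at the full 24 bpp, so even the 16-bit option hands you back a third of the buffer.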

jasonharper:
It’s hard to imagine writing any useful software that doesn’t use ANY memory other than the screen buffer, so it’s probably not a good idea to use your entire RAM for that purpose…

You have hit precisely upon the reason for me asking the question in the first place :slight_smile:

I’m not really ready to get into frame buffering strategies just yet (though I’m sure I will eventually, and thank you both for your suggestions!).

Unfortunately, it’s all too easy to imagine writing useful software that might require more than 48k of RAM (maybe an audio or video codec?), so the underlying question still stands, I guess: is there a way to “cache” variables to flash (bad idea, I know) in the event I need more than 48k of RAM, or must I change controllers to something with an external bus, or more RAM?

Or is this a dumb question?

Remember that with the LPC2148 chip, you have 8k of DMA RAM for the USB controller. If you’re not using USB or can spare some of this, you could use it for variable storage. Although I will second the sentiment of others that keeping the frame buffer on the 2148 is not such a good idea. That project would probably see you moving up to a 2200 series or higher with the external bus.
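If I’m reading the LPC214x user manual right, that USB RAM sits at 0x7FD00000, so if USB is idle you can just point a buffer at it. A sketch (double-check the address, and whether the USB block’s clock needs to be enabled first, against your copy of the manual):

```c
#include <stdint.h>

/* 8 KB USB DMA RAM on the LPC2148, per the LPC214x user manual.
   Verify the base address (and any clock-enable requirement in PCONP)
   before relying on this. */
#define USB_RAM_BASE  ((uint8_t *)0x7FD00000UL)
#define USB_RAM_SIZE  (8u * 1024u)

static uint8_t *const scratch = USB_RAM_BASE;

void clear_scratch(void)
{
    for (uint32_t i = 0; i < USB_RAM_SIZE; i++)
        scratch[i] = 0;
}
```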

There have got to be several ways to encode your data to ‘pad out’ before display. The “use X bits and pad out” approach is a great idea. Unless you have great eyesight or really high quality requirements, you can often chop the lower bits off the image data to save space.

There is also Run Length Encoding, and other ways to reduce the stored data vs. the output data. I’m not sure what you are drawing (movies? Simple graphics?) but if you can shrink it down to circles, rectangles, or other easy to calculate shapes, you could even do simple “vector graphics” for it.
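As a crude illustration of the vector-graphics idea (rectangles only, and oled_write_pixel() is a hypothetical stand-in for the real driver call), each frame gets regenerated from a tiny display list instead of being stored as pixels:

```c
#include <stdint.h>

extern void oled_write_pixel(uint8_t x, uint8_t y, uint32_t rgb);  /* hypothetical */

typedef struct {
    uint8_t  x, y, w, h;
    uint32_t rgb;           /* 0x00RRGGBB */
} rect_t;

/* A handful of rectangles costs a few dozen bytes instead of 48 KB. */
static const rect_t scene[] = {
    {  0,  0, 128, 128, 0x000000 },   /* background */
    { 10, 10,  40,  20, 0xFF0000 },
    { 60, 80,  30,  30, 0x00FF00 },
};

void redraw(void)
{
    for (unsigned i = 0; i < sizeof scene / sizeof scene[0]; i++) {
        const rect_t *r = &scene[i];
        for (uint8_t y = r->y; y < r->y + r->h; y++)
            for (uint8_t x = r->x; x < r->x + r->w; x++)
                oled_write_pixel(x, y, r->rgb);
    }
}
```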

RLE isn’t really going to help you because you’ve still got to decompress the image to send it to the OLED (unless someone knows of an OLED module that can decode RLE on the fly… yeah, I didn’t think so). In this case it would even hurt you, as you’d have to keep the RLE’d copy somewhere while you decompressed it, and you’d run out of memory halfway through the cycle.

In this case, your only (real) choice is to use a lower resolution and pad out. You could attach an EEPROM or SRAM to a serial bus and buffer the screen there, but that seems like a very nasty solution to what’s otherwise a simple problem. You might want something like this anyway if you’re going to be drawing fonts, since you’ve got to store your font table somewhere as well. You could buy an SPI-driven flash part (Atmel sells their “DataFlash” series, which would be pretty nice for a situation like this). Or you could hack together an external memory controller based on a parallel SRAM, some glue logic (an 8-to-1 shift register), and a small microcontroller (an ATtiny, maybe). Ugly, ugly solution, but it might be faster to read/write than flash.
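For what it’s worth, paging a row out to a serial SRAM would look something like this. spi_transfer() and the chip-select calls stand in for whatever SPI driver you end up writing, and the 0x02 WRITE opcode and address width are only what most serial SRAMs/EEPROMs use; check the part you actually buy:

```c
#include <stdint.h>

extern uint8_t spi_transfer(uint8_t out);   /* placeholder SPI driver */
extern void    cs_low(void);
extern void    cs_high(void);

#define ROW_BYTES  (128u * 3u)   /* one 128-pixel row at 24 bpp */

/* Write one display row into an external serial SRAM. */
void ext_ram_write_row(uint16_t row, const uint8_t *pixels)
{
    uint32_t addr = (uint32_t)row * ROW_BYTES;

    cs_low();
    spi_transfer(0x02);                 /* WRITE opcode on most parts */
    spi_transfer((uint8_t)(addr >> 16));
    spi_transfer((uint8_t)(addr >> 8));
    spi_transfer((uint8_t)addr);
    for (uint32_t i = 0; i < ROW_BYTES; i++)
        spi_transfer(pixels[i]);
    cs_high();
}
```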

With either kind of external memory, pick one big enough to hold a couple of frames along with your font and you should be good to go (a big one will let you double-buffer the screen if you want to). But even with this solution, you’re still going to want a lower color depth, since filling the entire internal RAM would leave no way for the program to continue executing. And I really don’t think you’d miss 4-8 bits of color information on a display that size anyway. Do yourself a favor and go with 16-bit images.

Or just pick a more spacious microcontroller.

… In this case it would even hurt you, as you’d have to keep the RLE’d copy somewhere while you decompressed it, and you’d run out of memory halfway through the cycle.

Correct me if I’m wrong here, but that only holds if you have to BitBlt the whole image from one part to the other all at once. I don’t know the OLED part or the application roach is writing this for, but it’s reasonably likely he could decompress an encoding (RLE, JPEG, etc.) one section at a time into the device’s buffer. AFAIK there isn’t enough info about the situation or the part to write off spatial compression.
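For instance, if each run gets pushed straight out to the panel as it’s decoded, the working set is only a few bytes. Rough sketch, with oled_write_byte() standing in for whatever the raw pixel-write routine is, and assuming a trivial (run length, R, G, B) encoding:

```c
#include <stdint.h>

extern void oled_write_byte(uint8_t b);   /* hypothetical raw write */

/* Stream an RLE image (run length followed by three colour bytes)
   straight to the OLED. Nothing is expanded in RAM, so the only
   storage needed is the compressed image itself, which could even
   live in flash as a const array. */
void draw_rle(const uint8_t *rle, uint32_t len)
{
    uint32_t i = 0;
    while (i + 3 < len) {
        uint8_t run = rle[i++];
        uint8_t r   = rle[i++];
        uint8_t g   = rle[i++];
        uint8_t b   = rle[i++];
        while (run--) {
            oled_write_byte(r);
            oled_write_byte(g);
            oled_write_byte(b);
        }
    }
}
```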

roach, do you have a datasheet for the part or something you can share?

d’oh… I was thinking of compression in general (where most compression schemes require the entire data segment to be decoded at once).

I guess RLE might be more useful, but you’ve got to think about how well your image is actually going to compress with it. I still can’t imagine you getting around having to run your program at the same time as you’re blitting the image to the screen. But I’m here, prove me wrong. I guess it depends a great deal on how the images are generated (static vs. dynamic) and what they’re displaying (per-pixel data vs. text).