Why did you choose ARM?

Hi!

I'm new to the world of microcontrollers and need some help choosing which platform I should go for.

I guess a lot of you started out with PIC or AVR and moved on to ARM, am I right? What made you move to another platform?

Do I need an operating system with the ARM for it to be useful to me?

Is it as easy to run motors with this chip as with the PIC/AVR?

Is ARM for me if my aim is video processing/streaming? Or will PIC and AVR do the trick?

I'm familiar with Intel's x86 processors. Would I be better off with an x86 board instead of a board with an ARM on it? Why/why not?

Best regards

Peter

I started in hobby micros many years ago, with 8051’s, then PIC. I graduated from that awful PIC architecture to the Atmel AVRs. Still use those.

Of late, using the ARM7. Cost is comparable to high-end 8-bit micros. But AVR and PIC both use the Harvard architecture - dual memory spaces, RAM and Flash. While that allows an 8-bit part to be fast, it is a major PITA for software, since most compilers were intended for the more common architecture: a single von Neumann address space.

The ARM7, by contrast, has both flash and RAM in the same address space. It is very fast because of the architecture and its many registers, and flash is accessed in very wide words that are then consumed in narrower chunks. This keeps the flash memory cycles per second below the MIPS, if you will.

Programming the ARM7, for me, has been a bigger step up than graduating from the PIC to the AVR. After a learning curve, development goes faster on the ARM7 for many reasons. Also, with ARM there are 10+ vendors of ARM chips which are all quite similar. If one crumps, it's not so painful to change vendors, or to reuse your ARM knowledge on each new project built with different ARM chip family members, e.g., the LPC2103 at the low end, then the 2106 and on up. Into the ARM9 and beyond you are getting into memory-managed CPUs for Linux.

With a JTAG, debugging is just like it is on a PC - breakpoints anywhere, single step, etc. I chose IAR's compiler and J-Link JTAG. IAR has a generous free version for students and hobby users. There is the free GCC WinARM which, like WinAVR, can be good with the right IDE, but it's no match for professional compilers such as IAR or Keil or, to a lesser extent, ImageCraft.

The latest greatest is the ARM Cortex - lower power, etc. A bit more expensive now (early in products). ARM7 chips are $4 and up, whereas Cortex are like $12 and up (1ea).

Lots of ARM board/module vendors on the web and several here at SFE.

Cortex-M3's are cheap: $2 to start and $4 to $6 mid-class. When getting to the 128k and 256k parts they seem to be about the same price as an ARM7, though. They're a relatively easy migration into 32-bit ARM from 8-bit uCs. The Luminary Micro parts have 8-bit ports, which confuses me no end.

I have a couple here which I just started breadboarding. Not certain if I’ll give up ARM7 for them, though.

I have no experience with video/audio, but I think you might want to look into DSP’s for that.

I use pic right now and am moving to cortex-m3 for speed: adc, display, etc.

to me, arm offers much higher levels of performance at a slightly higher price point. the consistency from chip to chip helps programming.

the downside is the lack of pdip chips: lqfp means you have to solder lots of legs.

if you do lots of video, I guess you may look into VHDL.

Depends on what you mean by “video”. If you mean shovel bits off a camera onto a network, an ARM SoC like the LPC2468 will do that just fine.

If you actually want to process the video you might want to look at e.g. Blackfin.

GCC tools that work on Linux;

low cost processors with lots of functions.

We quit using AVR because of the Z register bottleneck and all the C code it takes to work with it.

We quit using MSP430 because the development tools for Linux are not very good.

Hello,

  • Multiple vendors, so you are never totally locked in.

  • Learn one architecture that scales from very low-cost micros to very powerful CPUs.

  • The GCC ARM port is very well supported.

  • Standard JTAG and debug functionality.

  • OpenOCD.

  • Totally standard C programming, with no non-standard extensions required (no weird address spaces or special I/O access).

  • Cortex-M3 is very efficient and has good code density, and the Cortex-M0 looks promising.

  • Very good for learning: it lets you learn embedded programming without having to waste time on architecture-specific weirdness or limitations (see the PIC-AVR syndrome :)).

Giovanni

gdisirio:

  • Multiple vendors so you are never totally locked in.

I don't know how much value there is in that. each vendor has a vastly different implementation, and even between its own families, so the code will be rewritten substantially anyway.

  • Cortex-M3 is very efficient and has good code density, the Cortex-M0 looks promising.

I find it horrifying to read the errata :).

  • Very good for learning: it lets you learn embedded programming without having to waste time on architecture-specific weirdness or limitations (see the PIC-AVR syndrome :)).

each chip has its own peculiarities / weirdness / limitations so I count that as a draw.

millwood:
I don’t know how much value there is in that. each vendor has vastly different implementation, and even between its own families so the code will be rewritten substantially anyway.

It depends on how the code is implemented. Most details can be hidden behind abstraction layers: I/O ports, timers, serial channels, network interfaces, interrupt handlers, for example.

Of course you have to let an OS/IO subsystem handle the details and simply focus on the application code.

each chip has its own peculiarities / weirdness / limitations so I count that as a draw.

As long as an architecture allows for plain, standard C/C++, I consider it "sane"; there are architectures that do not allow that (PIC and AVR, for example, for different reasons; MSP430 is decent). This kind of architectural weirdness cannot be abstracted away; you have to face it everywhere in your code.

I agree. As said earlier, vanilla C code ports easily from ARM to other 16 and 32 bit single-address space micros.

The dual-address-space scheme in the AVR, and the same plus the horrid small memory pages in the PIC, make all code, not just I/O, too chip-dependent. The AVR, though, can minimize this.

But, to simplify: the ARM7, at a cost comparable to high-end 8-bitters, makes a vast difference in software simplicity.

programming-wise, the difference between pic/mcs48 and arm isn't that big at all. if anything, the arm chips I have looked at (LPC and Luminary) are more complex to set up than the PICs: it is absolutely horrifying, for example, to read the Luminary errata.

price-wise, some of the low-end arm chips are cheaper than pics: the lm3s801 for less than a dollar, for example. but I am not sure the cost of the chips makes that big of a difference for the hobbyist. it may for a consumer of large quantities of chips.

NXP ARM7 - I’ve been using them for some time now, having used AVRs a great deal and a brief time with the horrific PICs’ bank switching.

Using the numerous code examples from IAR and others, getting the NXP running was easy. My current app is 6,000 lines of code with several drivers, including dual serial, clock, and WizNet 812MJ Ethernet.

I started with PICs then AVRs and now I’m venturing into ARM. I chose NXP LPC ARM for the CAN ports. The CAN filter capabilities are amazing!

stevech:
NXP ARM7 - I’ve been using them for some time now, having used AVRs a great deal and a brief time with the horrific PICs’ bank switching.

I can understand bank switching being a problem in a large application written with assembly.

with a high-level language like C, it is just transparent so I am not sure how that can be a big problem.

millwood:

stevech:
NXP ARM7 - I’ve been using them for some time now, having used AVRs a great deal and a brief time with the horrific PICs’ bank switching.

I can understand bank switching being a problem in a large application written with assembly.

with a high-level language like C, it is just transparent so I am not sure how that can be a big problem.

PICs with 256-byte banks in RAM are just silly. Lots of bytes of Flash and CPU cycles are spent switching RAM banks, and with interrupts, it really gets hairy. C or ASM, the code has to take up space and CPU cycles. Bank switching in small segments was how micros worked in the '80s.

Stacks are an even greater problem in both PICs and AVRs.

stevech:
PICs with 256 byte banks in RAM is just silly.

PICs are simple micros designed years ago to address simple tasks. so it would have been silly, from a business and engineering point of view, to put a huge bank or stack into those guys.

Bank switching in small segments was how micros worked in the 80’s.

don’t disagree on that. But we are in a different millennium now, I think.

Many PICs don’t even HAVE a stack pointer per se.

AVR’s have a generic stack pointer as do real CPUs.

ARM7’s have many stack pointers - per CPU mode, per interrupt type, etc.

stevech:
Many PICs don’t even HAVE a stack pointer per se.

so what? I am sure years from now, people will say similar things about the computers we use today, for their lack of certain features that will be taken for granted in the future.

a good computer / mcu is one that gets the job done, not one that has all the features in the world.

All the 8-bit PICs have hardware stacks, apart from the very small ones. The 16-bit and 32-bit PICs have conventional stack pointers.

Microchip is the market leader in 8-bit MCUs, so they must be doing something right. They have sold over 7 billion devices since 1990.

Leon