Subsentient writes:
"I'm a C programmer and Linux enthusiast. For some time, I've had it on my agenda to build the new version of my i586/Pentium 1 compatible distro, since I have a lot of machines that aren't i686 that are still pretty useful.
Let me tell you, since I started working on this, I've been in hell these last few days! The Pentium Pro was the first chip to support CMOV (Conditional move), and although that was many years ago, lots of chips were still manufactured that didn't support this (or had it broken), including many semi-modern VIA chips, and the old AMD K6.
Just about every package that has to deal with multimedia has lots of inline assembler, and most of it contains CMOV. Most packages let you disable it, either with a switch like ./configure --disable-asm or by tricking the build into thinking your chip doesn't support it, but some of them (like MPlayer and libvpx/vp9) do NOT. This means that although my machines are otherwise full-blown, good, honest x86-32 chips, I cannot use that software at all, because it always builds in bad instructions, thanks to these huge amounts of inline assembly!
Of course, then there's the fact that these packages, which could otherwise build and work on all types of chips, are now limited to the usual ARM/PPC/x86 triumvirate (sorry, no SPARC Linux!), plus the small issue that inline assembly is not actually part of the C standard.
Is assembly worth it for the handicaps and trouble that it brings? Personally I am a language lawyer/standard Nazi, so inline ASM doesn't sit well with me for additional reasons."
(Score: 3, Insightful) by neagix on Saturday March 08 2014, @05:47AM
Sure, but is it maintainable to extend C with the CPU-specific trove of shiny optimized instructions for very specific tasks?
But yeah, I get that it's a flame-bait discussion..
(Score: 2) by mojo chan on Saturday March 08 2014, @11:07AM
It's easy to maintain such a collection, just add them to the compiler's optimizer. Your compiler is open source, right? ;-)
const int one = 65536; (Silvermoon, Texture.cs)
(Score: 1, Funny) by Anonymous Coward on Saturday March 08 2014, @02:30PM
> Sure, but is it maintainable to extend C with the CPU-specific trove of shiny optimized instructions for very specific tasks?
Yes, we call those "libraries." :)
(Score: 2) by TheRaven on Sunday March 09 2014, @06:14AM
sudo mod me up
(Score: 2) by neagix on Sunday March 09 2014, @08:27AM
Although it's a nice anecdotal example, since my OP I have been referring to all the cases where you can't ignore assembly if you want extra juice or features, but would still prefer a graceful fallback to the #ifdef jungle.
Please don't make me open the "bestiary"; we would end up fighting over each case with anecdotes. I am talking about emulation (including dynamic recompilation), GPU/CPU integration quirks, codecs, and architecture-specific gotchas on embedded devices; more generally, as I said, cases where you want the best available implementation OR a graceful fallback, without having to write boilerplate.
I know libraries are the most obvious answer, but they're not a perfect solution (higher cost, and less elegant given the very tiny functional payload). Another way could be CMake sorcery, but CMake sits too far up the food chain to preserve that close contact with the machine. In a perfectly standardized world [xkcd.com] we wouldn't have these problems in the first place (but discussing that is OT).
So I was wondering if there could be a "framework-like" approach/solution to the problem (maybe not). The question to settle first is whether there is enough redundancy across these cases to find common patterns/development shortcuts.
I know the riddle, "let's build a factory to make factories of hammers," and so on, but I would only consider such an idea iff (if and only if) it doesn't clash with C. It's a fairly theoretical discussion, but there is nothing new about wanting to put a portability patch over the mess vendors make of the hardware we buy (just look at the hard work in the Linux kernel over the years).