
posted by Cactus on Saturday March 08 2014, @03:30AM
from the don't-tell-me-upgrade-PCs dept.

Subsentient writes:

"I'm a C programmer and Linux enthusiast. For some time, I've had it on my agenda to build the new version of my i586/Pentium 1 compatible distro, since I have a lot of machines that aren't i686 that are still pretty useful.

Let me tell you, since I started working on this, I've been in hell these last few days! The Pentium Pro was the first chip to support CMOV (conditional move), and although that was many years ago, plenty of chips without it (or with it broken) were manufactured long afterwards, including many semi-modern VIA chips and the old AMD K6.

Just about every package that deals with multimedia has lots of inline assembler, and most of it contains CMOV. Most packages let you disable it, either with a switch like ./configure --disable-asm or by tricking the build into thinking your chip doesn't support it, but some (like MPlayer and libvpx/vp9) do NOT. This means that although my machines are otherwise full-blown, good, honest x86-32 chips, I cannot use that software at all, because the build always includes the bad instructions, thanks to these huge amounts of inline assembly!

Of course, there's also the fact that these packages, which could otherwise build and work on all types of chips, are now limited to what's usually the ARM/PPC/x86 triumvirate (sorry, no SPARC Linux!), plus the small issue that inline assembly is not actually part of the C standard.
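To make the portability point concrete, here's a minimal example of my own (not taken from any of those packages). Both functions compute the same maximum; the first is standard C that the compiler may turn into CMOV when -march allows it, while the second hard-codes the instruction via GCC inline asm and will SIGILL on a K6 or Pentium 1 no matter what flags you pass:

    #include <stdio.h>

    /* Portable: plain standard C. GCC is free to emit CMOV here on
       -march=i686 and above, and a branch on i586. */
    static int max_portable(int a, int b)
    {
      return a > b ? a : b;
    }

    /* GCC-specific inline asm: CMOV is baked in, so this faults on
       any CPU that lacks the instruction, regardless of -march. */
    static int max_cmov(int a, int b)
    {
      __asm__("cmpl %1, %0\n\t"
              "cmovl %1, %0"
              : "+r"(a)
              : "r"(b)
              : "cc");
      return a;
    }

    int main(void)
    {
      printf("%d %d\n", max_portable(3, 7), max_cmov(3, 7));
      return 0;
    }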

Is assembly worth it for the handicaps and trouble it brings? Personally, I'm a language lawyer/standards Nazi, so inline ASM doesn't sit well with me for additional reasons."

 
  • (Score: 3, Informative) by mojo chan (266) on Saturday March 08 2014, @01:58PM (#13248)

    The problem with GCC not vectorizing your code is that you haven't told it all the assumptions you made when you wrote it. GCC will only vectorize when it knows it is absolutely safe to do so, and you need to communicate that safety. When you wrote your own assembler version, you relied on those same assumptions implicitly.

    In the specific example you cite, have a look at the FFT code in ffdshow, specifically the ARM assembler that uses NEON. To get good performance there is a hell of a lot of duplicated code, since it processes data in power-of-2 block sizes.

    Specifying -O3 is the signal for the compiler to go nuts and generate massive amounts of unrolled code like that. Even then it might not be worth it in all cases: if the array is only, say, 5 elements long, you might spend more time setting up the vector hardware than it saves. So what you need to do is break the array down into fixed-size units that can be heavily optimized, just as the ffdshow assembler does. The compiler isn't psychic; unless you tell it, it can't know what kind of data your code will be processing or how big variable-length arrays are likely to be at run time.
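    As a sketch of what I mean (my own example, not the ffdshow code): put the hot work in a fixed-size helper whose trip count and aliasing the compiler can see at a glance, and let a driver loop feed it whole blocks:

    /* Fixed trip count plus restrict means no runtime checks: the
       compiler can fully unroll and vectorize this straight-line. */
    static void scale_block8(float *restrict dst, const float *restrict src,
                             float c)
    {
      for (int k = 0; k < 8; ++k)
        dst[k] = c*dst[k] + src[k];
    }

    void scale(float *dst, const float *src, float c, int n)
    {
      int k = 0;
      for (; k + 8 <= n; k += 8)   /* fast path: whole blocks */
        scale_block8(dst + k, src + k, c);
      for (; k < n; ++k)           /* scalar tail */
        dst[k] = c*dst[k] + src[k];
    }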

    --
    const int one = 65536; (Silvermoon, Texture.cs)
  • (Score: 3, Interesting) by hankwang (100) on Saturday March 08 2014, @02:38PM (#13262) Homepage

    The problem with GCC not vectorizing your code is that you haven't told it all the assumptions you made when you wrote it. GCC will only vectorize when it knows it is absolutely safe to do so

    For the record, this is the full test code:

    /* Scales the first veclen*4 elements of x by c and adds the next
       veclen*4 elements, repeated 10000 times; the inner loop is the
       vectorization candidate. */
    void calc(float x[], float c, int veclen)
    {
      for (int i = 0; i < 10000; ++i) {
        for (int k = 0; k < veclen*4; ++k)
          x[k] = c*x[k] + x[k+veclen*4];
      }
    }

    The compiler should know that there cannot be any aliasing issues within the array 'x' (the loads come from a region the stores never touch), so it *is* safe. But I wasn't aware that -O2 and -O3 make such a big difference; with -O3 I do indeed get vector instructions. From now on, my number-crunching code gets -O3...
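    For comparison, here is the same loop with the no-aliasing assumption stated explicitly (my sketch, names invented): with restrict the compiler no longer has to prove the two halves of x are disjoint, and in my limited testing -O2 plus -ftree-vectorize then seems to be enough:

    /* Same computation, but the disjointness of the two halves of x is
       now a stated contract instead of something the compiler must infer. */
    void calc_restrict(float *restrict dst, const float *restrict src,
                       float c, int veclen)
    {
      for (int i = 0; i < 10000; ++i) {
        for (int k = 0; k < veclen*4; ++k)
          dst[k] = c*dst[k] + src[k];
      }
    }

    /* called as: calc_restrict(x, x + veclen*4, c, veclen); */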

    • (Score: 3, Informative) by mojo chan (266) on Saturday March 08 2014, @08:14PM (#13370)

      -O2 doesn't make the compiler check whether x is safe from aliasing and so forth, because that analysis is expensive and the resulting code can be problematic to debug on some architectures. Moving to -O3 does check, so the compiler uses vector instructions. C can be somewhat expensive to optimize because there is a lot of stuff you can legally do that has to be checked for, and often that involves analysing entire modules.
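      To make that concrete, here is roughly the loop versioning the compiler emits at -O3 when it cannot statically prove two pointers are disjoint. This is a hand-written sketch of mine, not GCC's actual output; the pointer comparison mirrors what the generated machine code does, even though portable C only allows it within one object:

      #include <stddef.h>

      static void axpy_versioned(float *dst, const float *src, float c, size_t n)
      {
        if (dst + n <= src || src + n <= dst) {
          /* Proven disjoint at run time: this copy may be vectorized. */
          float *restrict d = dst;
          const float *restrict s = src;
          for (size_t k = 0; k < n; ++k)
            d[k] = c*d[k] + s[k];
        } else {
          /* Possible overlap: conservative scalar fallback. */
          for (size_t k = 0; k < n; ++k)
            dst[k] = c*dst[k] + src[k];
        }
      }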

      --
      const int one = 65536; (Silvermoon, Texture.cs)