
posted by LaminatorX on Saturday March 15 2014, @11:28PM
from the premature-optimization-is-the-root-of-all-evil dept.

Subsentient writes:

"I've been writing C for quite some time, but I never followed good conventions I'm afraid, and I never payed much attention to the optimization tricks of the higher C programmers. Sure, I use const when I can, I use the pointer methods for manual string copying, I even use register for all the good that does with modern compilers, but now, I'm trying to write a C-string handling library for personal use, but I need speed, and I really don't want to use inline ASM. So, I am wondering, what would other Soylenters do to write efficient, pure, standards-compliant C?"

 
  • (Score: 4, Informative) by frojack on Sunday March 16 2014, @04:13PM

    by frojack (1554) on Sunday March 16 2014, @04:13PM (#17254)

    This is so true.

    For many years we practiced code optimization with a ruler. Sometimes we needed a yardstick. (Usually for string-handling code, oddly enough.)

    We would literally print out the assembly (an option offered by our compiler, which included the source code in comments), spread it out, and measure how many INCHES of assembly each source line generated.

    Almost always, a series of individual hand-coded operations resulted in shorter segments of assembly than the complex single statement did. We would compile it both ways and simply count the total number of assembly statements in each.
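
    For anyone who wants to repeat the experiment today, a minimal sketch (the flags are GCC-specific assumptions; clang accepts -S -fverbose-asm as well): most compilers will still emit an annotated assembly listing you can inspect, or print out and measure.

    ```c
    /* strcopy.c -- a toy routine to inspect in listing form.
     *
     * To see the generated assembly with the source interleaved as
     * comments (the modern equivalent of the printed listings
     * described above), try:
     *
     *   gcc -O2 -S -fverbose-asm strcopy.c          # writes strcopy.s
     *   gcc -O2 -g -Wa,-adhln -c strcopy.c > strcopy.lst
     */
    char *copy(char *dst, const char *src)
    {
        char *ret = dst;
        while ((*dst++ = *src++) != '\0')
            ;               /* classic pointer-walk string copy */
        return ret;
    }
    ```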

    We learned to tell two-foot language constructs from two-inch ones.

    After a while we learned to avoid the constructs that would generate a mountain of code, and to write simpler structures to do the same work. We might use a small amount of code in hand-coded loops to process an array, and avoid the complex code generated by the language's array operations.

    We wrote another program to scan the assembly, assign how many clocks each assembly operation took, and sum them up, on the theory that one long sequence of assembly might be faster than a short sequence executed many times. (We were prepared to sacrifice a little speed to fit the code into memory.)
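
    A hedged reconstruction of that clock-summing tool, not the original program: it reads an assembly listing on stdin, looks up each mnemonic in a small cycle table (the counts below are illustrative 8086-era figures, not authoritative), and totals them.

    ```c
    /* Sum estimated clock counts for an assembly listing on stdin. */
    #include <stdio.h>
    #include <string.h>

    struct cost { const char *mnemonic; int clocks; };

    static const struct cost table[] = {   /* illustrative values only */
        { "mov", 2 }, { "add", 3 }, { "shl", 2 },
        { "cmp", 3 }, { "jnz", 4 }, { "mul", 70 },
    };

    int main(void)
    {
        char line[256], op[16];
        long total = 0;

        while (fgets(line, sizeof line, stdin)) {
            if (sscanf(line, " %15s", op) != 1)
                continue;               /* skip blank lines */
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (strcmp(op, table[i].mnemonic) == 0) {
                    total += table[i].clocks;
                    break;
                }
        }
        printf("estimated clocks: %ld\n", total);
        return 0;
    }
    ```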

    But it turned out that we always gained speed, and a lot of it. Simple code seldom generates a complex, high-clock instruction sequence.

    Sadly, hardly anybody does this analysis anymore. They just write the shortest high-level code (smugly congratulating themselves on its obtuseness), force the compiler to generate whatever mess it might spew, turn on the compiler's optimization option, and assume the result is the best.

    It's frequently not even close to the best.

    --
    Discussion should abhor vacuity, as space does a vacuum.
  • (Score: 4, Insightful) by maxwell demon on Sunday March 16 2014, @04:45PM

    by maxwell demon (1608) on Sunday March 16 2014, @04:45PM (#17269)

    However, today neither counting instructions nor adding up cycles will give you a good estimate of running time (unless it turns out, say, that a loop is slightly too large to fit into the instruction cache), because processors do a great deal behind the scenes: register renaming, branch prediction, out-of-order execution, speculative execution, and so on. Far more important are issues like cache locality, which alone can buy you quite a bit of speedup and can be analyzed entirely at the C level. And of course no amount of micro-optimization can save you from a badly chosen algorithm.
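
    A minimal sketch of the cache-locality point: both functions below sum the same 1024x1024 matrix, but the first walks memory sequentially while the second strides a full row between accesses, touching a new cache line almost every time. On typical hardware the row-major version is several times faster, even though the arithmetic is identical.

    ```c
    /* Row-major vs. column-major traversal: same work, very
       different cache behaviour. */
    #include <stdio.h>

    #define N 1024
    static double m[N][N];

    static double sum_row_major(void)   /* sequential, cache-friendly */
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    static double sum_col_major(void)   /* strides N*8 bytes per access */
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }

    int main(void)
    {
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }
    ```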

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 1) by jackb_guppy on Sunday March 16 2014, @05:22PM

    by jackb_guppy (3560) on Sunday March 16 2014, @05:22PM (#17286)

    I so agree. You have to understand what the compiler and underlying hardware will do.

    One day back in the XT and AT days, we had two young programmers trying to write ASM for simple screen processing. They tried to save as many instructions as possible, figuring fewer instructions meant faster code, so one of their instructions was a multiply by 80. It looked good. We showed them that a store, two shifts, an add, and four more shifts was way faster, even though it took eight instructions instead of one.
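
    For the curious, a sketch of the trick in C (a reconstruction of the instruction sequence described, not the original code): 80 = 5 * 16, so x * 80 can be computed as ((x << 2) + x) << 4, which on the original 8086, where shifts moved one bit at a time, is exactly a store, two shifts, an add, and four more shifts.

    ```c
    /* Strength reduction of multiply-by-80 into shifts and adds.
       Modern compilers perform this transformation automatically;
       shown here purely for illustration. */
    #include <stdio.h>

    static unsigned mul80(unsigned x)
    {
        unsigned saved = x;  /* store */
        x <<= 2;             /* two single-bit shifts on 8086: x * 4 */
        x += saved;          /* add: x * 5 */
        x <<= 4;             /* four more shifts: x * 80 */
        return x;
    }

    int main(void)
    {
        printf("%u\n", mul80(3));  /* prints 240 */
        return 0;
    }
    ```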

  • (Score: 1) by rev0lt on Sunday March 16 2014, @05:33PM

    by rev0lt (3125) on Sunday March 16 2014, @05:33PM (#17288)

    For many years we practiced code optimization with a ruler

    Assuming x86, after the Pentium Pro you'd need a big ruler. As an example, hand-optimizing for the P4 is a *f****** nightmare*.

    After a while we learned to avoid those constructs that that would generate a mountain of code, and write simpler structures to do the same work. We might use a small amount of code in hand coded loop(s) to process an array, and avoid the complex code generated by the language's array operations.

    The problem is that reducing the number of instructions isn't necessarily good. Take your example: short loops were discouraged on long-pipeline CPUs such as the Prescott line, yet were actually faster on the Centrino/Pentium M line. Also, you'd gain a huge amount of speed by respecting 32-byte boundaries on cache lines, so spending 15 instructions to avoid two odd out-of-boundary memory operations could beat a simple, compact loop that causes a cache miss. And don't even get me started on parallelization. Reducing the number of instructions doesn't necessarily mean the code will be faster.
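
    A small sketch of the boundary point, assuming C11: aligned_alloc lets you pin a hot buffer to the cache-line size so a tight loop never straddles two lines. 64 bytes is the common line size today; the 32-byte figure above matches the Pentium-era parts being discussed.

    ```c
    /* Cache-line-aligned allocation (C11). aligned_alloc requires
       the size to be a multiple of the alignment. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define LINE 64                 /* assumed cache-line size */

    int main(void)
    {
        size_t bytes = 1024 * LINE; /* multiple of LINE, as required */
        double *buf = aligned_alloc(LINE, bytes);
        if (buf == NULL)
            return 1;
        memset(buf, 0, bytes);
        /* ... hot loop over buf: stride-1 walks touch each cache
           line exactly once, and no access straddles two lines ... */
        printf("buffer at %p\n", (void *)buf);
        free(buf);
        return 0;
    }
    ```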

    Its frequently not even close to the best.

    It's not the best. But unless you're an asm wizard, the result will be fast enough regardless of the CPU. For most purposes, handwritten assembly or hand-optimized listings are a waste of time. However, I do agree that knowing what the compiler generates will make you a better programmer and help you avoid some generic pitfalls.

    • (Score: 2) by frojack on Sunday March 16 2014, @06:12PM

      by frojack (1554) on Sunday March 16 2014, @06:12PM (#17302)

      You haven't a clue about how to quote, do you?

      You also assume way more than I've said.

      You also assume that changes in pipelining make all efforts at optimization useless and unnecessary. Nothing could be further from the truth. The techniques one might adopt with knowledge of current processors may differ from what you would have used before, but there are many more things you can do in your code today than you could do before.

      The "fast enough" mentality is exactly part of the problem.

      --
      Discussion should abhor vacuity, as space does a vacuum.
      • (Score: 0) by Anonymous Coward on Sunday March 16 2014, @08:10PM

        by Anonymous Coward on Sunday March 16 2014, @08:10PM (#17325)
        To be fair to him, it's easy to assume the quoting mechanism is still <quote> instead of <blockquote>.
        • (Score: 1, Troll) by frojack on Sunday March 16 2014, @08:18PM

          by frojack (1554) on Sunday March 16 2014, @08:18PM (#17326)

          When the screen you post from clearly shows the supported syntax?

          --
          Discussion should abhor vacuity, as space does a vacuum.
        • (Score: 2) by maxwell demon on Monday March 17 2014, @01:26PM

          by maxwell demon (1608) on Monday March 17 2014, @01:26PM (#17707)

          Actually, <blockquote> is the older one. It worked even before Slashdot introduced <quote> with its slightly different spacing behaviour, and it never stopped working.

          --
          The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 1) by rev0lt on Monday March 17 2014, @01:11AM

        by rev0lt (3125) on Monday March 17 2014, @01:11AM (#17408)

        You haven't a clue about how to quote, do you?

        No, not really (Tnx AC). Is that relevant to the topic?

        You also assume that changes in pipelining make all efforts at optimization useless and unnecessary.

        Well, you assume I said that. I didn't. What I said was that producing a blend of optimized code for all common CPUs at a given time is complex; one of the most obvious examples was when both the Prescott and the Pentium M were on the market. Totally different CPUs in terms of optimization.

        Nothing could be further from the truth. The techniques one might adopt with knowledge of current processors may differ from what you would have used before

        Well, I've worked extensively with handwritten and hand-optimized assembly for most (all?) Intel x86 CPUs up to the Pentium 4. Just because you optimized it doesn't necessarily mean it's faster (as an old-fart example, think of all those integer-only Bresenham line algorithms vs. having a div per pixel). And even when it is generically faster, it is usually model-specific. And it is very easy to make it run slower (e.g. through direct and indirect stalls, cache misses, branch-prediction misses, etc.). The Intel Optimization Manual is more than 600 pages (http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-optimization-manual.html); if you can generically beat a good compiler, good for you. Or you can stop wasting time and use a profiling tool like http://software.intel.com/en-us/intel-vtune-amplifier-xe [intel.com] to get a concrete idea of what and when to optimize, instead of having to know every little detail yourself.
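
        For readers who haven't met the Bresenham example, a minimal sketch (the standard integer-only form, with a stub plot() standing in for a real pixel writer): the whole line is drawn with adds, subtracts, and compares, where the naive parametric approach needs a divide or float multiply per pixel.

        ```c
        /* Integer-only Bresenham line: no division in the loop. */
        #include <stdio.h>
        #include <stdlib.h>

        static void plot(int x, int y)      /* stub pixel writer */
        {
            printf("(%d,%d)\n", x, y);
        }

        void bresenham_line(int x0, int y0, int x1, int y1)
        {
            int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
            int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
            int err = dx + dy;              /* incremental error term */

            for (;;) {
                plot(x0, y0);
                if (x0 == x1 && y0 == y1)
                    break;
                int e2 = 2 * err;
                if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
                if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
            }
        }

        int main(void)
        {
            bresenham_line(0, 0, 7, 3);
            return 0;
        }
        ```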