How to increase speed of large for loops
-
And I was just thinkin' we are having a nice easygoing conversation ... 'cause when I see this:
template< int exponent, typename T > T power( T base ) { // ... }
I cringe so badly my face is contorted for a week.
-
@kshegunov
No idea what's foul about it, or the bit you've quoted, so you'd better explain? Unless you mean the whole idea of using templates, which of course I never used: C didn't need them, C++ added them as an obfuscation layer, so I'm quite happy without ;-)
Mind you, I looked at @JohanSolo's code above. His definition is a recursive one (return power< exponent / 2 >( base * base ) * base;). I'm surprised. This would be all very well in my old Prolog, but I don't think the C++ compiler is going to recognise & remove tail recursion in the definition. So I don't know what he means by "trivially replaced"; why would one want to use such a definition?
-
I never thought my little post could produce so much noise... First, the snippet is not mine, as I already stated; I took it from a lecture I followed at CERN in 2009. The lecturer was Dr Walter Brown, who was presented as: "Dr. Brown has worked for Fermilab since 1996. He is now part of the Computing Division's Future Programs and Experiments Quadrant, specializing in C++ consulting and programming. He participates in the international C++ standardization process and is responsible for several aspects of the forthcoming updated C++ Standard. In addition, he is the Project Editor for the forthcoming C++ Standard on Mathematical Special Functions."
About the recursive template: the compiler expands it at compile time, therefore leading to power< 4 >( x ) being replaced by x*x * x*x, which is apparently (or at least was) way faster than calling std::pow. Therefore, I expect power< 2 >( something ) to be faster than std::pow( something, 2 ).
-
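For concreteness, the recursive template being discussed might look like the following. This is a sketch, not the lecturer's exact code (only one line of it is quoted in the thread), and it uses C++17's if constexpr for the base case, where the 2009 original would likely have used a specialization:

```cpp
#include <cassert>

// Sketch of the recursive power template under discussion (not the
// lecturer's exact code). The exponent is a compile-time constant, so the
// whole recursion unfolds during compilation.
template<int exponent, typename T>
T power(T base)
{
    if constexpr (exponent == 0)
        return T(1);
    else if constexpr (exponent % 2 == 0)
        return power<exponent / 2>(base * base);
    else
        return power<exponent / 2>(base * base) * base;
}
```

With this definition, power<4>(x) unfolds at compile time into repeated multiplications, which is the behaviour described above.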
@JonB said in How to increase speed of large for loops:
No idea what's foul about it, or the bit you've quoted, so you'd better explain? Unless you mean the whole idea of using templates, which of course I never used: C didn't need them, C++ added them as an obfuscation layer, so I'm quite happy without ;-)
Recurrently instantiating a function for no apparent reason, basically invoking the sophisticated copy-paste machinery that is the compiler's template engine to produce x * x, especially when the latter would suffice.
Mind you, I looked at @JohanSolo's code above. His definition is a recursive one (return power< exponent / 2 >( base * base ) * base;). I'm surprised. This would be all very well in my old Prolog, but I don't think the C++ compiler is going to recognise & remove tail recursion in the definition. So I don't know what he means by "trivially replaced"; why would one want to use such a definition?
Code inlining is kind of a religion. Surely it has its values in the proper places, and most certainly templates make some things easier, then again ... it's very much like chocolate: when you don't eat it, you want it, when you eat it, you want more of it, but in the ultimate scheme of things it makes you fat ...
The ugliest thing about templates, however, is that everything has to be defined for instantiation to take place, which is of course expected. So you can't have abstractions manifested without spilling the guts of the implementations. And of course there exists no such thing as binary compatibility, as everything is recompiled every time ... such a wonderful idea.
@JohanSolo said in How to increase speed of large for loops:
I never thought my little post could produce so much noise...
Well yeah, I'm from Eastern Europe - all simmering under the hood.
First the snippet is not mine, as I already stated, I took it from a lecture I followed at CERN in 2009.
Yes, I glanced at the slides. FYI even boost's math module doesn't do that kind of nonsense, because fast exponentiation algorithms for integral powers have been known for 50+ years. And if the compiler actually inlines all the (unnecessary) instantiations, depending on the optimizations it applies, you could end up in the same x * x * x * ... * x case. The point is computers are rather stupid: they do what we tell them to do, and ultimately everything you write is going to be compiled to binary, not to a cool concept from a book (or lecture, or w/e).
The lecturer was Dr Walter Brown, who was presented as: "Dr. Brown has worked for Fermilab since 1996. He is now part of the Computing Division's Future Programs and Experiments Quadrant, specializing in C++ consulting and programming. He participates in the international C++ standardization process and is responsible for several aspects of the forthcoming updated C++ Standard. In addition, he is the Project Editor for the forthcoming C++ Standard on Mathematical Special Functions."
Good for him. I don't know him, nor do I hold people in esteem for their titles. He might be a contemporary Einstein for all I know, but I give merit only where I judge there to be reason for it. In this case, I have not found one. The lecture, with all the proof of it boiling down to a synthetic test, is not nearly enough for me.
Just as a disclaimer, I've seen enough "scientific code" to be cynical to the point of not believing academia can (or should) write programs.
About the recursive template: the compiler expands it at compile time, therefore leading to power< 4 >( x ) being replaced by x*x * x*x
No, it leads to power<4>(x) being replaced by power<2>(x) * power<2>(x), where power<2> is a distinct function. This may lead to x * x * x * x in assembly, which of course would have the same performance as multiplying the argument manually, or it may be evaluated as (x * x), which is then multiplied by itself, in which case you save a multiplication. The point is your template can't tell the compiler how to produce the efficient binary code.
Therefore, I expect power< 2 >( something ) to be faster than std::pow( something, 2 ).
I expect them to be exactly the same up to a couple of push/pops and a single call.
I did find it rather surprising that pow and sqrt were implicated here. I'd like to top off this missive with a quotation that I love from a fictional character:
You wake up in the morning, your paint's peeling, your curtains are gone, and the water is boiling. Which problem do you deal with first?
...
None of them! The building's on fire!
-
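For reference, the "fast exponentiation" mentioned above needs no templates at all: the exponent can be an ordinary runtime argument. A minimal sketch (illustrative code, not from the thread), assuming a non-negative integer exponent:

```cpp
#include <cassert>

// Exponentiation by squaring: O(log n) multiplications with the exponent
// as a plain runtime argument. Illustrative sketch, not thread code.
double ipow(double base, unsigned exponent)
{
    double result = 1.0;
    while (exponent != 0) {
        if (exponent & 1u)     // odd exponent: fold one factor into result
            result *= base;
        base *= base;          // square the base
        exponent >>= 1;        // halve the exponent
    }
    return result;
}
```

This is essentially the same recursion scheme as the template version, written as a loop, which is what a hand-rolled replacement for std::pow with integral powers usually looks like.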
I never thought my little post could produce so much noise...
It's OK, this is all a friendly debate, not a mud-slinging contest!
@JohanSolo , @kshegunov
I don't know what you are going on about with this power() stuff and in-line expansion. Just maybe the compiler is clever enough to in-line expand to avoid recursion if your code goes power<4>(x), where the 4 is a compile-time constant. However, that definition of power<> takes the exponent as a variable/parameter. So if your code calls power<n>(x) where n is a variable, I don't see how any amount of in-lining or optimizations can do anything at all, and you are left with code which will compile to a ridiculously inefficient (time & space) tail-recursive implementation, which you would be mad to use. If you're going to do in-lining, it seems to me it should be done iteratively rather than recursively in C++, no? That is what I was commenting on....
-
@JonB said in How to increase speed of large for loops:
@JohanSolo
However, that definition of power<> takes the exponent as a variable/parameter. So if your code calls power<n>(x) where n is a variable
In the power< n >( x ) expression, n must be known at compile time: it's a template parameter. If it is a variable it won't compile (I've just checked to be 1000% sure).
-
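That compile-time requirement can be shown in a small sketch (a hypothetical, minimal power template, since the thread only quotes one line of the original):

```cpp
#include <cassert>

// Minimal power template, in the spirit of the one discussed (a sketch,
// not the thread's code).
template<int exponent, typename T>
T power(T base)
{
    if constexpr (exponent <= 0)
        return T(1);
    else
        return base * power<exponent - 1>(base);
}

// The template argument must be a constant expression:
constexpr int n = 4;                 // fine: known at compile time
double ok = power<n>(2.0);           // instantiates power<4, double>

// int m = 4;                        // a runtime variable...
// double bad = power<m>(2.0);       // ...would not compile
```

So the recursion JonB worries about never happens at runtime; it is unrolled entirely during compilation.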
@JohanSolo
Ohhh, I had no idea templates worked like that...! I get it now.
I hope the compiler-generated code copies your (first) parameter into a temporary variable/register when it expands that code in-line, else it could actually be slower....
In any case, to belabour the perhaps-obvious: the squaring won't take much time, it's the square-rooting which will be slow....
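Since the thread is about loop speed: when the square root is only used for a comparison, it can often be skipped entirely by comparing squared values instead. A sketch (the thread's actual loop is not shown, so the function and its parameters are assumptions):

```cpp
#include <cassert>

// Avoiding sqrt in a hot loop: comparing squared distances gives the same
// answer as comparing the distances themselves, since both sides are
// non-negative and squaring is monotonic there.
bool within_radius(double dx, double dy, double radius)
{
    return dx * dx + dy * dy <= radius * radius;   // no std::sqrt call
}
```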
-
@JonB said in How to increase speed of large for loops:
Ohhh, I had no idea templates worked like that...!
Your childish naïveté really made me chuckle. :)
A template is (hence the name) a template for a function or class. It's nothing by itself; it does not produce binary on its own. The magic happens when instantiation takes place, that is, when you supply all the dependent types (the stuff in the angle brackets) to it. Then the compiler knows how to generate actual code out of it and does so. That's all my babbling about inlining too - since the compiler has all the code for a fully specialized (i.e. everything templatey provided) template, it can inline whatever it decides into the binary, which it often does. The thing is, however, that each instantiation is a distinct function/class. So power<3> is a function, which is different from power<2>, which is different from power<1> and so on ...
-
@kshegunov said in How to increase speed of large for loops:
Your childish naïveté
Maybe a touch harsh :) [Though brownie points for typing in those two accents.]
I thought when I saw templates they were to do with providing type-"independent" generic functions, aka "generics" e.g. in C#. Nothing to do with in-lining....
-
@JonB said in How to increase speed of large for loops:
Maybe a touch harsh :)
Oh, you know I wrote that with loving condescension, as I usually do! ;)
I thought when I saw templates they were to do with providing type-"independent" generic functions, aka "generics", e.g. in C#.
Well, yes, templates are for that - providing type independent code, or rather as you put it generic, because there may be limitations put on the types involved (i.e. the type may be required to be floating point, or integral). This is fine and very useful for many purposes. Consider writing an algorithm that operates on a matrix, the matrix may contain rational numbers, or floating point, or complex, or quaternions. The point is the algorithm is the same regardless of the type it operates on (with some sane limitations).
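As a toy illustration of that point (hypothetical code, not from the thread): one generic definition serves integers, floating point, and complex numbers alike:

```cpp
#include <cassert>
#include <complex>

// One generic definition, many element types -- the idea behind the
// matrix example above (toy sketch). Works for any T with * and +.
template<typename T>
T dot2(T a1, T a2, T b1, T b2)
{
    return a1 * b1 + a2 * b2;   // same algorithm regardless of T
}
```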
C# isn't a good example as it doesn't compile to native assembly. It has, much like its mother Java, an interpreter for opcode. That is, source is compiled to an intermediary opcode (which is similar to assembly), which is then interpreted by a virtual machine.
Nothing to do with in-lining....
Well, not nothing. Templates' instantiations are known fully, including all dependent types and the whole source. While you can have them hidden in a source file and prevent inlining, that's an extremely rare case. Usually the point of storing them in the headers, rather than exposing only the instantiated symbols, is to allow the compiler to freely inline everything it wants to. So they're also used to hint that to the compiler. In this case that's the idea, otherwise you'd just write the simple fast exponentiation which takes 2 arguments (i.e. basically a rewrite of std::pow) instead of giving the compiler enough rope to hang itself. As I said, every instantiation is different, so the compiler is going to generate one function for each template argument: calling power<8> leads to your compiler generating code for power<8>, power<4>, power<2>, power<1> and so on. These are separate functions, mind you. Then it may (or may not) decide to inline some (or all) of them into the others.
-
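The "each instantiation is a distinct function" point can be checked directly. A sketch, again assuming a minimal power template like the one discussed (and assuming the linker doesn't fold identical functions; here the bodies differ anyway):

```cpp
#include <cassert>

// Sketch of a minimal power template (not thread code).
template<int exponent, typename T>
T power(T base)
{
    if constexpr (exponent <= 0)
        return T(1);
    else
        return base * power<exponent - 1>(base);
}

// power<2, double> and power<3, double> are two separate functions,
// each with its own address in the binary.
double (*p2)(double) = power<2, double>;
double (*p3)(double) = power<3, double>;
```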
I'm surprised no one bit on pointer access of the array elements instead of using array indexing throughout. Pointer access is usually faster than array indexing, but that is an implementation-specific detail. I'd also replace pow() with pure multiplication of the terms as (x*x),
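That last suggestion might look like the sketch below (the thread's actual loop isn't shown, so the function name and shape are assumptions). Worth noting: modern optimizers usually generate identical code for indexing and pointer walks, so this should be measured rather than assumed:

```cpp
#include <cassert>
#include <cstddef>

// Pointer walk instead of indexing, and x * x instead of std::pow(x, 2),
// as suggested above. Illustrative sketch only.
double sum_of_squares(const double *data, std::size_t n)
{
    double sum = 0.0;
    for (const double *p = data, *end = data + n; p != end; ++p)
        sum += *p * *p;   // pure multiplication, no pow() call
    return sum;
}
```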