Tuesday, November 23, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

Richard Damon <Richard@Damon-Family.org>: Nov 22 06:25PM -0500

On 11/22/21 5:19 PM, Lynn McGuire wrote:
> We show how to use our results to provide software engineers support to
> decide which language to use when energy efficiency is a concern."
 
> Lynn
 
Well, since one of the goals of the C language was to enable programmers
to write fast code, it can make sense for C to be 'Green', at least by
some measures.
 
Fast code will be more energy efficient as processors basically use
power based on the number of instructions executed (and memory accessed).
 
What also needs to be considered is the time/energy needed to initially
WRITE the code and get it working, and then factor in how many times the
program will be used compared to the effort to write it.
Juha Nieminen <nospam@thanks.invalid>: Nov 23 07:10AM

> Interesting. However, I can write something in C that uses all CPU's
> 100%.
 
I don't think that's the point.
Juha Nieminen <nospam@thanks.invalid>: Nov 23 07:21AM

> some measures.
 
> Fast code will be more energy efficient as processors basically use
> power based on the number of instructions executed (and memory accessed).
 
I think that for as long as programmable computers have existed, there has
been a direct correlation between the "levelness" ("higher-level" vs.
"low-level") of a language and disregard towards how much resources the
language uses. As computers have become faster and faster, and the amount
of resources (primarily RAM) has increased, this indifference towards
resource consumption in higher-level languages has only likewise
increased.
 
The farther away the design of the language has been from the details of
the underlying hardware, the more the question of "how efficient is
the language, and how much RAM does it consume?" has been (implicitly
or explicitly) answered with, essentially, "it doesn't matter" and
"who cares?"
 
Take pretty much any scripting language, or any other interpreted
language (like the original BASIC and most of its subsequent variants).
Pretty much nobody cared how fast they were, or how much RAM they
consumed. When someone is, let's say, implementing something in PHP,
they seldom stop to think about how much memory it consumes or how fast
it is. When someone is implementing something in shell script, they most
definitely do not think about speed or memory consumption.
 
Most object-oriented languages don't really care about memory
consumption in particular. Garbage-collected OO languages especially
don't give a flying F about memory consumption. So what if an object
needs something like 16 or 32 bytes of bookkeeping data? Who cares?
That's nothing! Trivial and inconsequential! Modern computers have
gigabytes of RAM! Why should you care if an object takes 32 bytes in
addition to whatever is inside it? What a silly notion!
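
For what it's worth, a rough illustration of where that per-object
overhead shows up; the numbers are implementation-dependent and the
little classes here are made up:

#include <cstdio>
#include <memory>

struct PlainPoint { double x, y; };   // no hidden bookkeeping
struct VirtualPoint { virtual ~VirtualPoint() = default; double x, y; };  // + vtable pointer

int main()
{
    // The vtable pointer alone typically costs 8 bytes per object on a 64-bit target.
    std::printf( "PlainPoint:   %zu bytes\n", sizeof(PlainPoint) );
    std::printf( "VirtualPoint: %zu bytes\n", sizeof(VirtualPoint) );

    // Putting the object on the heap adds allocator bookkeeping per block on
    // top, and a shared_ptr adds a separately allocated control block as well.
    auto p = std::make_shared<VirtualPoint>();
    std::printf( "shared_ptr handle: %zu bytes (control block not counted)\n",
                 sizeof(p) );
}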
 
Ironically, the programming world has in the last couple of decades awakened
to the harsh realization that this complete disregard of memory usage
actually causes inefficiencies in modern CPU architectures (because
cache misses are very expensive).
 
Oh well. As long as it works, who cares? Buy a faster computer.
om@iki.fi (Otto J. Makela): Nov 23 02:17PM +0200


> "C Is The Greenest Programming Language" by: Chris Lott
> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
 
Perhaps one contributing factor is that not so much development is any
longer done using C or C++, and the programs that are run (and still
being developed) were originally created in the era when CPU power was
much lower than these days, so code optimization was more important.
--
/* * * Otto J. Makela <om@iki.fi> * * * * * * * * * */
/* Phone: +358 40 765 5772, ICBM: N 60 10' E 24 55' */
/* Mail: Mechelininkatu 26 B 27, FI-00100 Helsinki */
/* * * Computers Rule 01001111 01001011 * * * * * * */
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 02:39PM +0100

Am 22.11.2021 um 23:19 schrieb Lynn McGuire:
> less/more energy, and how memory usage influences energy consumption.
> We show how to use our results to provide software engineers support to
> decide which language to use when energy efficiency is a concern."
 
Developing in C is a magnitude more effort than in C++, and if you've
the right programming style you get the same code speed in C++ as in C.
And sometimes you're even faster in C++, where in C you'd shoot yourself
in the head rather than implement the complexity you can handle in C++
in minutes.
Manfred <noname@add.invalid>: Nov 23 04:03PM +0100

On 11/22/2021 11:19 PM, Lynn McGuire wrote:
> Our results show interesting findings, such as, slower/faster
> languages consuming less/more energy, and how memory usage influences
> energy consumption
 
I find the claimed correlation of slower/faster languages with less/more
energy quite confusing. In fact I believe it is the opposite.
 
An interpreted language (á la Java :O) may require orders of magnitude
more CPU instructions than C to perform the same task. This makes the
former slower and more energy consuming than the latter.
 
Slower or faster /hardware/ is a totally different thing, of course.
gazelle@shell.xmission.com (Kenny McCormack): Nov 23 03:50PM

>> energy consump- tion
 
>I find the claimed correlation of slower/faster languages with less/more
>energy quite confusing. In fact I believe it is the opposite.
 
Yes. I think this was misstated/sloppily-written in the original text.
 
It depends, of course, on what exactly you mean by a "slower" language.
It is true that if you run the CPU at a slower speed (and that would make
for a slower processing model), then you will use less energy.
 
--
https://en.wikipedia.org/wiki/Mansplaining
 
It describes comp.lang.c to a T!
Bart <bc@freeuk.com>: Nov 23 04:14PM

On 23/11/2021 15:03, Manfred wrote:
> energy quite confusing. In fact I believe it is the opposite.
 
> An interpreted language (á la Java :O) may require orders of magnitude
> more CPU instructions than C to perform the same task.
 
Java is probably not a good example.
 
Properly interpreted code may need 1-2 orders of magnitude more
instructions, as it has to perform the task indirectly (this is with
dynamic typing).

But this is only relevant if the processor is executing 100% indirect
code versus 100% direct code.

In practice it will be a mix, which, if done properly, means the
overheads of interpretation are not significant.
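
As a rough sketch of where that indirection comes from (a toy stack
machine with dynamic typing, not modelled on any particular
interpreter), each 'add' pays for opcode dispatch and a runtime type
check on top of the actual addition:

#include <cstdio>
#include <variant>
#include <vector>

// A toy dynamically-typed value plus a tiny dispatch loop.
using Value = std::variant<long, double>;

enum class Op { PushConst, Add, Halt };
struct Instr { Op op; Value imm; };

Value run( const std::vector<Instr> &code )
{
    std::vector<Value> stack;
    for( std::size_t pc = 0; ; ++pc )
        switch( code[pc].op )                  // indirect: dispatch on the opcode
        {
        case Op::PushConst:
            stack.push_back( code[pc].imm );
            break;
        case Op::Add:
        {
            Value b = stack.back(); stack.pop_back();
            Value a = stack.back(); stack.pop_back();
            // indirect: inspect the runtime type before the actual addition
            // (mixed long/double coercion omitted to keep the sketch short)
            if( std::holds_alternative<long>( a ) )
                stack.push_back( std::get<long>( a ) + std::get<long>( b ) );
            else
                stack.push_back( std::get<double>( a ) + std::get<double>( b ) );
            break;
        }
        case Op::Halt:
            return stack.back();
        }
}

int main()
{
    // 2 + 3, spelled as bytecode
    Value r = run( { { Op::PushConst, Value( 2L ) },
                     { Op::PushConst, Value( 3L ) },
                     { Op::Add,       Value( 0L ) },
                     { Op::Halt,      Value( 0L ) } } );
    std::printf( "%ld\n", std::get<long>( r ) );
}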
 
More significant is overall design: you can write slow, bloated,
inefficient programs in C too!
 
Also many interpreted languages are now JIT-accelerated, to close the gap.
 
 
David Brown <david.brown@hesbynett.no>: Nov 23 05:51PM +0100

On 23/11/2021 16:50, Kenny McCormack wrote:
 
> It depends, of course, on what exactly you mean by a "slower" language.
> It is true that if you run the CPU at a slower speed (and that would make
> for a slower processing model), then you will use less energy.
 
It is /not/ true that running the CPU at a slower speed uses less energy
- at least, it is often not true. It is complicated.
 
There are many aspects that affect how much energy is taken for a given
calculation.
 
Regarding programming languages, it is fairly obvious that a compiled
language that does a task in fewer optimised assembly instructions is
going to use less energy than a language that has less optimisation and
more instructions, or that does some kind of interpretation. Thus C (and
other optimised compiled languages like C++, Rust or Ada) are going to
come out on top.
 
It is less obvious how the details matter. Optimisation flags have an
effect, as do choices of instruction (since functional blocks such as
SIMD units or floating point units may be dynamically enabled). For some
target processors, unrolling a loop to avoid branches will reduce energy
consumption - on others, rolled loops that avoid cache misses will be
better. Some compilers targeting embedded systems (where power usage is
often more important) have "optimise for power" as a third option
alongside the traditional "optimise for speed" and "optimise for size".
 
The power consumption for a processor is the sum of the static power and
the dynamic power. Dynamic power is proportional to the frequency and
the square of the voltage. And energy usage is power times time.
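
As a back-of-the-envelope illustration of those relationships (every
constant below is invented for illustration, nothing is measured from a
real part):

#include <cstdio>

int main()
{
    // E = (P_static + P_dynamic) * t, with P_dynamic ~ C_eff * f * V^2.
    const double cycles   = 1e9;    // amount of work, in clock cycles (made up)
    const double c_eff    = 1e-9;   // effective switched capacitance, farads (made up)
    const double p_static = 0.5;    // leakage power while the core is on, watts (made up)

    struct { const char *name; double f_hz, volts; } plans[] = {
        { "fast, race to sleep", 2.0e9, 1.1 },
        { "slow clock, same V ", 0.5e9, 1.1 },
        { "slow clock, low V  ", 0.5e9, 0.8 },  // DVFS: lower f allows lower V
    };

    for( const auto &p : plans )
    {
        double t      = cycles / p.f_hz;                     // active time, seconds
        double p_dyn  = c_eff * p.f_hz * p.volts * p.volts;  // dynamic power, watts
        double energy = ( p_static + p_dyn ) * t;            // joules while active
        std::printf( "%s  t = %.2f s  E = %.2f J\n", p.name, t, energy );
    }
    // Merely slowing the clock doesn't reduce the dynamic energy per cycle and
    // keeps the leakage burning for longer; dropping the voltage helps, but
    // with this much static power the fast run still wins - the race-to-sleep
    // rule of thumb.
}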
 
A processor that is designed to run at high frequency is likely to have
high-leakage transistors, and therefore high static power - when the
circuits are enabled. But the faster you get the work done, the higher
a proportion of the time you can have in low-power modes with minimal
static power. On the other hand, higher frequencies may need higher
voltages.
 
As a rule of thumb, it is better to run your cpu at its highest
frequency - or at the highest it can do without raising the voltage -
and get the calculation done fast. Then you can spend more time in
low-power sleep modes. However, entering and exiting sleep modes takes
time and energy, so you don't want to do it too often - hence the
"big-little" processor combinations where you have a slower core that
can be switched on and off more efficiently.
DozingDog@thekennel.co: Nov 23 04:53PM

On Tue, 23 Nov 2021 14:17:50 +0200
>longer done using C or C++, and the programs that are run (and still
>being developed) were originally created in the era when CPU power was
>much lower than these days, so code optimization was more important.
 
It's good to see that the attitude of just throwing more CPU at something
instead of optimising it is still around. These days, when server farms are
taking up a significant percentage of the planet's electrical output, it's
incumbent on programmers to make their code as efficient as is reasonable.
DozingDog@thekennel.co: Nov 23 04:55PM

On Tue, 23 Nov 2021 14:39:08 +0100
>Developing in C is a magnitude more effort than in C++, and if you've
 
That depends on the problem. If you're writing code that needs to store a
lot of structured data then C wouldn't be your first choice of language. But
if you're writing something that simply interfaces with system calls then
there's probably not much if any extra effort in using C over C++.
Guillaume <message@bottle.org>: Nov 23 05:56PM +0100

Le 23/11/2021 à 08:21, Juha Nieminen a écrit :
> Oh well. As long as it works, who cares? Buy a faster computer.
 
This sentence shows that you quite obviously got what "green" means.
DozingDog@thekennel.co: Nov 23 05:00PM

On Tue, 23 Nov 2021 17:51:09 +0100
>time and energy, so you don't want to do it too often - hence the
>"big-little" processor combinations where you have a slower core that
>can be switched on and off more efficiently.
 
Why do many battery-powered systems throttle the CPU to save the battery when
it's getting low, then? And why does undertaking CPU-intensive tasks deplete the
battery faster? You seem to be claiming that you can get something for nothing.
Richard Damon <Richard@Damon-Family.org>: Nov 23 12:28PM -0500

> lot of structured data then C wouldn't be your first choice of language. But
> if you're writing something that simply interfaces with system calls then
> there's probably not much if any extra effort in using C over C++.
 
I would disagree. With a decent compiler, C code can get close to
assembly-level optimization for most problems. (Maybe it doesn't have
good support for defining SIMD (Single Instruction, Multiple Data)
sequences, but a GOOD compiler may be able to detect and generate these.)
 
ANYTHING in terms of data-structures that another language can generate,
you can generate in C.
 
The big disadvantage is that YOU as the programmer need to deal with a
lot of the issues rather than the compiler doing things for you, but that
is exactly why you can do things possibly more efficiently than the
compiler. You could have always generated the same algorithm that the
compiler did.
 
The one point where the compiler can do better is things like
instruction sequencing to optimize performance, but that is where the C
language gives the implementation the freedom to adjust things to allow
for it.
 
The C language, with the common extensions, gives you the power to do as
well as any other language.
 
Now, if you want to talk about the efficiency of WRITING the code (as
opposed to executing it), all this power it gives you is a negative,
which seems to be what you are talking about.
 
If we want to talk 'Greenness', we need to define the development/usage
life cycle of the code.
Richard Damon <Richard@Damon-Family.org>: Nov 23 12:36PM -0500


> Why do many battery-powered systems throttle the CPU to save the battery when
> it's getting low, then? And why does undertaking CPU-intensive tasks deplete the
> battery faster? You seem to be claiming that you can get something for nothing.
 
CPUs, at a given operating voltage, will consume an approximately fixed
amount of energy per instruction.

One effect of slowing down the processor is that if it was running at
25% utilization, that means that 75% of the instructions executed did no
'useful' work, so slowing down the processor to make instructions take
longer means you do fewer of these wasteful cycles.

Some processors have the ability, when they get to those 'wasteful'
instructions, to automatically 'stop' and drop their power consumption
until something happens that needs running again.
 
The other effect is that in many cases, if you slow down the processor
speed, you can slightly drop the voltage you are running the processor
at, and the power consumed turns out to go largely as the square of the
voltage (as the dynamic power is consumed in charging and discharging
tiny capacitances throughout the system).


So, through various tricks in the system, you can sometimes save some
power when the processor is 'idle' or running 'slower'. Battery-powered
systems especially try to implement these sorts of capabilities.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 07:05PM +0100

> ANYTHING in terms of data-structures that another language can generate,
> you can generate in C.
 
With a lot of effort compared to C++.
 
> is exactly why you can do things possibly more efficiently than the
> compiler. You could have always generated the same algorithm that the
> compiler did.
 
Why should one use something different than std::vector<>, std::string,
std::unordered_map<> ...? There are no opportunities to make the
same more efficient.
David Brown <david.brown@hesbynett.no>: Nov 23 07:24PM +0100


> Why do many battery-powered systems throttle the CPU to save the battery when
> it's getting low, then? And why does undertaking CPU-intensive tasks deplete the
> battery faster? You seem to be claiming that you can get something for nothing.
 
If you have to do a certain calculation or set of calculations, it is
/usually/ more efficient to do them quickly and then let the processor
sleep deeper and longer.
 
Modern cpus generally do /not/ throttle the cpu speed to save battery
power. It only makes sense to slow down the processor if you have large
leakage currents that you can't turn off, in which case a slow clock can
mean lower energy overall. With modern devices, clock gating lets you
turn off all or parts of the core much more effectively.
Richard Damon <Richard@Damon-Family.org>: Nov 23 01:41PM -0500

On 11/23/21 1:05 PM, Bonita Montero wrote:
 
> Why should one use something different than std::vector<>, std::string,
> std::unordered_map<> ...? There are no opportunities to make the
> same more efficient.
 
Except where there are.
 
For instance, I regularly use a variant of std::string whose char
const* constructor checks if the input is from 'read only' memory, and
if it is, reuses that data instead of making a copy, at least until it
wants to change it. This saves me a LOT of memory in embedded systems
where most of my 'string' data is constant, but some specific cases need
to dynamically compute the string.
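
A minimal sketch of that idea; the class name is invented and
is_in_rodata() stands in for whatever read-only-memory test the target
provides (here, comparing against assumed linker-section symbols):

#include <cstdint>
#include <cstring>

// Assumed linker-provided bounds of the read-only data section; the actual
// symbol names vary by toolchain, so treat these as placeholders.
extern char const __rodata_start[], __rodata_end[];

inline bool is_in_rodata( char const *p )
{
    auto a = reinterpret_cast<std::uintptr_t>( p );
    return a >= reinterpret_cast<std::uintptr_t>( __rodata_start )
        && a <  reinterpret_cast<std::uintptr_t>( __rodata_end );
}

class const_aware_string
{
    char const *data_;
    bool        owned_;
public:
    explicit const_aware_string( char const *s )
    {
        if( is_in_rodata( s ) )
        {
            data_  = s;                         // constant data: just reference it
            owned_ = false;
        }
        else
        {
            std::size_t n = std::strlen( s ) + 1;
            char *copy = new char[n];           // dynamic data: take our own copy
            std::memcpy( copy, s, n );
            data_  = copy;
            owned_ = true;
        }
    }
    ~const_aware_string() { if( owned_ ) delete[] data_; }
    // Copying and mutation are left out of this sketch; a real version would
    // materialise a private copy before the first modification.
    const_aware_string( const_aware_string const & ) = delete;
    const_aware_string &operator=( const_aware_string const & ) = delete;
    char const *c_str() const { return data_; }
};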
 
The C++ standard library is very good code for the general case. There
can be cases where specific application requirements make an alternative
better.
 
As I said, the fundamental issue is the trade off of final execution
efficiency for efficiency in writing the code.
 
C++ keeps a lot of the efficiencies of C, and adds some significant
'power' to the expressiveness. But sometimes implementing the C++
features directly in C, while needing more coding by the programmer, can
make some things more efficient.
 
For example, rather than letting the C++ class system implicitly handle
the 'vtable' for a class, there are tricks you can do to make some
operations more efficient in C with an explicit vtable (at the expense of
adding all the explicit code yourself). Things like changing the 'type'
of a structure to that of another compatible type with just a change of
the vtable pointer.
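
A sketch of that explicit-vtable trick in C-style code (all the names
are invented): the object's 'type' changes just by repointing one field,
which the implicit C++ vtable never lets you do.

#include <stdio.h>

/* The vtable is an ordinary struct of function pointers that the programmer
   manages explicitly instead of letting the compiler hide it. */
struct shape;
struct shape_vtbl { double (*area)( struct shape const * ); };

struct shape
{
    struct shape_vtbl const *vt;   /* explicit vtable pointer */
    double a, b;
};

static double rect_area( struct shape const *s ) { return s->a * s->b; }
static double tri_area ( struct shape const *s ) { return s->a * s->b / 2.0; }

static struct shape_vtbl const rect_vtbl = { rect_area };
static struct shape_vtbl const tri_vtbl  = { tri_area  };

int main( void )
{
    struct shape s = { &rect_vtbl, 3.0, 4.0 };
    printf( "as rectangle: %f\n", s.vt->area( &s ) );

    s.vt = &tri_vtbl;              /* retype the same object in place */
    printf( "as triangle:  %f\n", s.vt->area( &s ) );
    return 0;
}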
Richard Damon <Richard@Damon-Family.org>: Nov 23 01:50PM -0500

On 11/23/21 1:24 PM, David Brown wrote:
> leakage currents that you can't turn off, in which case a slow clock can
> mean lower energy overall. With modern devices, clock gating lets you
> turn off all or parts of the core much more effectively.
 
Not sure if this holds for desktop/laptop caliber processors, but in the
embedded world, many processors can have their core voltage dropped down
when running at slower speeds, and since switching energy is
proportional to V^2, slowing the processor and waiting less CAN save power.
 
I believe this applies to processors with a 'Turbo' or 'Overclocked'
mode in the desktop/laptop space: to run their fastest, they need to
boost their power supplies at the cost of increased power consumption,
but faster speeds become available.

Then there is the fact that the 'simple' idle modes don't stop
everything, so parts of the circuit are still burning the higher
frequency power even when idle, and switching to a deeper power-saving
mode can take a bit of time and actually cost power (as you power down
and then back up sections of the die). This means that running slower,
at higher utilization, can use less power.

The problem is that this also limits your PEAK operational speed, so you
need to balance those needs against your power consumption.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 07:53PM +0100

Am 23.11.2021 um 19:41 schrieb Richard Damon:
 
> const* constructor checks if the input is from 'read only' memory, and
> if it is, reuses that data instead of making a copy, at least until it
> wants to change it. ...
 
Then use string_view.
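
For the purely constant case that would look something like the sketch
below, although it does not cover the 'at least until it wants to
change it' part:

#include <cstdio>
#include <string_view>

int main()
{
    // A string_view never owns or copies; it just points at the literal,
    // which lives once in read-only memory.
    std::string_view greeting = "stored once in read-only memory";
    std::printf( "%zu characters, no allocation, no copy\n", greeting.size() );
    // Unlike the modified std::string described above, it cannot later take
    // ownership and change the data; you'd switch to a std::string for that.
}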
David Brown <david.brown@hesbynett.no>: Nov 23 08:02PM +0100

On 23/11/2021 19:50, Richard Damon wrote:
> embedded world, many processors can have their core voltage dropped down
> when running at slower speeds, and since switching energy is
> proportional to V^2, slowing the processor and waiting less CAN save power.
 
That depends on the class of embedded system. If you are talking about
large embedded processors - running embedded Linux, for instance - then
that's true. But even there you usually aim for high speed and lots of
sleep if you can. However, as I mentioned, entering and exiting sleep
modes takes some time - if you are doing it too frequently, that
overhead becomes dominant and it is better to run the whole thing at a
low clock rate.
 
Smaller embedded systems rarely change the voltage to the core.
 
> less power.
 
> The problem is that this also limits you PEAK operational speed, so you
> need to balence those needs with your power concumption.
 
It is all a complicated balance, and a subject of continuous development
and improvement - no one choice fits everything. (And the ideal power
management decisions need to know what a process is going to do before
it does it.)
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 03:59PM +0100

I just tried this:
 
pair<cpu_it, cpu_it> foundApicId = equal_range( apicIds.begin(), apicIds.end(), apicId,
    []( cpu_apic_id const &idRange, unsigned apicId ) { return idRange.apicId == apicId; } );
 
It should be possible for the key of the range to have a different type
than the elements in the range, so that I can compare against just a
part of the range objects, as in the above code.
 
Instead I'd have to write:
 
pair<cpu_it, cpu_it> foundApicId = equal_range( apicIds.begin(), apicIds.end(),
    cpu_apic_id( -1, apicId ),
    []( cpu_apic_id const &idRange, cpu_apic_id const &id )
        { return idRange.apicId == id.apicId; } );
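
For what it's worth, the C++20 ranges algorithms take a projection,
which covers exactly this case: the key stays an unsigned and the
comparison is done on the projected member. A sketch, assuming a
cpu_apic_id layout like the one implied above and a vector sorted by
apicId:

#include <algorithm>
#include <vector>

struct cpu_apic_id { int core; unsigned apicId; };  // assumed layout; only apicId matters here

// apicIds must be sorted by apicId for equal_range to be valid.
auto find_apic_range( std::vector<cpu_apic_id> const &apicIds, unsigned apicId )
{
    return std::ranges::equal_range( apicIds, apicId, {}, &cpu_apic_id::apicId );
}

The projection takes each element down to its apicId member, so there is
no dummy cpu_apic_id object and no need for a predicate that works with
its arguments in both orders.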
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 04:04PM +0100

Am 23.11.2021 um 15:59 schrieb Bonita Montero:
> apicIds.end(),
>         cpu_apic_id( -1, apicId ), []( cpu_apic_id const &idRange,
> cpu_apic_id const &id ) { return idRange.apicId == id.apicId; } );
 
And even more: I've to use const-references in my lambda.
Who thinks up such nonsense ?
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 05:13PM +0100

Am 23.11.2021 um 16:04 schrieb Bonita Montero:
>> cpu_apic_id const &id ) { return idRange.apicId == id.apicId; } );
 
> And even more: I've to use const-references in my lambda.
> Who thinks up such nonsense ?
 
I've got it:
 
equal_range internally uses something like lower_bound and upper_bound,
which need < and > comparisons. This could be realized by swapping the parameters
on the predicate, therefore the predicate must be symmetrical.
A solution would be a predicate that returns a strong_ordering
object, so that no swapping would be necessary.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 23 07:53PM +0100

Am 23.11.2021 um 17:13 schrieb Bonita Montero:
> on the predicate, therefore the predicate must be symmetrical.
> A solution would be a predicate that returns a strong_ordering
> object, so that no swapping would be necessary.
 
I think it should look like the following then:
 
#pragma once
#include <cstddef>
#include <utility>
#include <iterator>
#include <concepts>
#include <compare>

// equal_range variant: the key may have a different type than the range
// elements, and the predicate returns a std::strong_ordering (element vs.
// key), so its arguments never need to be swapped.
template<typename RandomIt, typename T, typename Pred>
std::pair<RandomIt, RandomIt> xequal_range( RandomIt first, RandomIt end,
    T const &key, Pred pred )
requires std::random_access_iterator<RandomIt>
    && requires( Pred pred, typename
                 std::iterator_traits<RandomIt>::value_type &elem, T const &key )
       {
           { pred( elem, key ) } -> std::convertible_to<std::strong_ordering>;
       }
{
    using namespace std;
    // lower bound: first element that does not order before the key
    RandomIt lower = first;
    for( size_t n = end - first; n; )
        if( pred( lower[n / 2], key ) < 0 )
        {
            lower += n / 2 + 1;
            n     -= n / 2 + 1;
        }
        else
            n /= 2;
    // no element compares equal: report an empty range at the end
    if( lower == end || pred( *lower, key ) != 0 )
        return pair<RandomIt, RandomIt>( end, end );
    // upper bound: first element after lower that orders after the key
    RandomIt upper = lower;
    for( size_t n = end - upper; n; )
        if( pred( upper[n / 2], key ) > 0 )
            n /= 2;
        else
        {
            upper += n / 2 + 1;
            n     -= n / 2 + 1;
        }
    return pair<RandomIt, RandomIt>( lower, upper );
}
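
A possible call site for the original example, assuming a vector of
cpu_apic_id sorted by apicId as in the earlier post:

auto found = xequal_range( apicIds.begin(), apicIds.end(), apicId,
    []( cpu_apic_id const &elem, unsigned key ) { return elem.apicId <=> key; } );

for( auto it = found.first; it != found.second; ++it )
    ;   // every *it in this range has the requested apicId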
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
