Tuesday, July 3, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

Bart <bc@freeuk.com>: Jul 02 09:44PM +0100

On 02/07/2018 21:06, David Brown wrote:
> at re-compiling a "max" macro in a few microseconds.  Yes, over time
> this re-compilation waste will build up - perhaps to the tune of several
> whole seconds of wasted time every single year!
 
So, when people say that C++ is slow to compile (compared to equivalent
code in other languages), then what is the reason?
 
--
bart
Ben Bacarisse <ben.usenet@bsb.me.uk>: Jul 03 01:21AM +0100

> they would fit with a _Generic like this - but size_t at least should
> be there.
 
> Any others that I am missing?
 
I'd expect max to work for pointers too. For any two pointers into an
array of type T, p1 > p2 ? p1 : p2 is well-defined. Equally for
pointers into struct and union objects.
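For example (both pointers derived from the same object):

  double a[10];
  double *p = &a[2], *q = &a[7];
  double *hi = p > q ? p : q;   /* well-defined: both point into the same array */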
 
<snip>
> that problem would be nested _Generic's - perhaps with a utility
> _Generic macro for giving a common type for arithmetic operations on
> two different types. However it is done, it would be ugly.
 
You can cover a lot of cases by using an expression that converts to a
common type. For example, rather than (a) as the control clause in the
_Generic expression, you can use 1?(a):(b).
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Jul 03 07:34AM +0200

On 02/07/18 22:44, Bart wrote:
>> several whole seconds of wasted time every single year!
 
> So, when people say that C++ is slow to compile (compared to equivalent
> code in other languages), then what is the reason?
 
It is partly the complexity of the language (it is a lot harder to parse
than C), but mainly the large header files that need to be re-processed
for every compilation. The problem is not in the couple of lines of
"max" macro definition that gets re-processed - it is in the million
lines of standard library headers, boost, and other template libraries
that get re-processed.
 
Thus improving this system - by separately built "modules" - is an
important area for the future of C++.
 
Making "max" into an operator, on the other hand, is utterly irrelevant
from an efficiency viewpoint (either compiler efficiency, or generated
code efficiency).
 
It's okay to want a "max" operator because you think code would be
neater if it were an operator. General opinion seems to be that it is
not worth the effort, but certainly some people would like it. Some
people would like ways to define their own operators, either as
punctuation combinations (like /\) or as identifiers (like "max"). In
C++, you'd be looking at user definable functions here, rather than
built-in operators.
 
Arguing that making "max" an operator would improve compile times or the
quality of generated code, however, is not going to impress anyone.
David Brown <david.brown@hesbynett.no>: Jul 03 07:42AM +0200

On 03/07/18 02:21, Ben Bacarisse wrote:
 
> I'd expect max to work for pointers too. For any two pointers into an
> array of type T, p1 > p2 ? p1 : p2 is well-defined. Equally for
> pointers into struct and union objects.
 
Yes, I thought about that. But I really can't see any type-safe way to
do this with C _Generics. (C++ templates, and gcc extensions allow it.)
Although pointer max is well defined (for appropriate pointers), I
can't see it being a big use-case.
 
 
> You can cover a lot of cases by using an expression that converts to a
> common type. For example, rather than (a) as the control clause in the
> _Generic expression, you can use 1?(a):(b).
 
Good idea - I hadn't thought of that. Yes, that should work. Using
"(a) + (b)" would be an alternative.
Wouter Verhelst <w@uter.be>: Jul 03 10:14AM +0200

On 02-07-18 20:51, Bart wrote:
 
>> Yeah, that's true, but who cares? If the compiler is slow during compile
>> time so that it can produce a fast program, I couldn't care less.
 
> Suppose it's unnecessarily slow?
 
What does it matter? Not like those extra few cycles will kill me.
 
> wasted hours.
 
> But some of these overheads can impinge on build speeds even with
> optimisation turned off.
 
For debugging, you obviously compile with -O0, so that single-stepping
through your program doesn't make your debugger go all over the place.
 
Typically, you also use a build system that supports incremental builds
-- like "make" has done since the dawn of time, but you can use other
options too -- and then it doesn't matter anymore how much overhead a
compiler needs, because whether compiling the single file I changed
takes a second or just a tenth of one, I probably haven't finished
scanning the compiler output for issues yet by the time compilation has
finished -- if I've even finished my "move finger up from the <enter>
key" action.
aph@littlepinkcloud.invalid: Jul 03 04:15AM -0500

>> and memory-mapped.
 
> Numbered, yes (R0..R7, where R7 is the PC (Program Counter)), but not
> memory-mapped.
 
IIRC on earlier PDP-11s they were mapped at 17777700-17777717. You
needed this in order to be able to initialize registers from the front
panel. I think they gave up doing this in some later VLSI versions.
 
Andrew.
 
 
bitsavers.trailing-edge.com/pdf/dec/pdp11/handbooks/PDP11_Handbook1979.pdf
David Brown <david.brown@hesbynett.no>: Jul 03 11:23AM +0200

On 03/07/18 10:14, Wouter Verhelst wrote:
>> optimisation turned off.
 
> For debugging, you obviously compile with -O0, so that single-stepping
> through your program doesn't make your debugger go all over the place.
 
I usually compile with -O1 (with gcc) for debugging - it doesn't do
nearly as much code re-arrangement as -O2 or -Os, but it also avoids the
huge wastage of using the stack for all data and thus gives assembly
code that is a lot easier to read. Of course, this will vary according
to your needs, your target processors, your compiler, and the kind of
debugging you do (I like to see the assembly, and sometimes single-step
it). You also need at least -O1 to get good static warnings.
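For a typical gcc invocation that means something like
"g++ -O1 -g -Wall -Wextra -c file.cpp" (file name illustrative).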
 
Sure, many people prefer -O0 for debugging - but it is not "obviously"
the case :-)
 
> scanning the compiler output for issues yet by the time compilation has
> finished -- if I've even finished my "move finger up from the <enter>
> key" action.
 
For some projects, I've had "automatically build on save" enabled in my
IDE - rebuilds are so quick that I don't even need to consider them as a
separate action. For bigger projects, I find that it is often linking
that takes the noticeable time, rather than compiling (unless I change a
commonly used header, forcing many files to be re-compiled).
Ian Collins <ian-news@hotmail.com>: Jul 03 10:20PM +1200

On 03/07/18 21:23, David Brown wrote:
> separate action. For bigger projects, I find that it is often linking
> that takes the noticeable time, rather than compiling (unless I change a
> commonly used header, forcing many files to be re-compiled).
 
If you are using a platform where it is supported, the gold linker takes
most of the pain out of that step for large projects.
 
--
Ian.
Bart <bc@freeuk.com>: Jul 03 12:00PM +0100

On 03/07/2018 06:34, David Brown wrote:
 
> Making "max" into an operator, on the other hand, is utterly irrelevant
> from an efficiency viewpoint (either compiler efficiency, or generated
> code efficiency).
 
To use min and max the C++ way requires that these are defined as part
of those libraries and thus their implementation code needs to be
reprocessed for each compile whether it is used or not. And if it is
used, it requires that expansion (instantiation or whatever you call it).
 
It is this principle, applied to just about everything in the language
and not just max, that might be the reason for those huge headers.
 
(FWIW I tried compiling 100,000 lines of 'a=MAX(a,b);', where MAX is the
safe macro that someone posted, as C code with gcc -O0, and it crashed.
But by extrapolation it would have taken some 18 seconds. -O3 didn't crash
and was much faster, but still much slower than compiling 'a=a+b;'.
 
Elsewhere, I can compile 100K lines of 'a max:=b' in 0.25 seconds.)
 
> It's okay to want a "max" operator because you think code would be
> neater if it were an operator.
 
That too. Slicker compilation is a bonus.
 
I can see also the attraction of implementing such things outside the
core language, so that they can be applied to new user types more
easily, but that seems to have a cost. However that doesn't stop binary
+ and - being built-in (I assume, in the case of C++) but still being
applied to user types.
 
> Arguing that making "max" an operator would improve compile times
 
Not just that by itself, no. Not unless the program consists of nothing
but max's like my example above. But a program /could/ consist largely
of things that do need to be expanded through things in those large
headers, things that /might/ be built-in in another language.
 
--
bart
Bart <bc@freeuk.com>: Jul 03 01:04PM +0100

On 03/07/2018 09:14, Wouter Verhelst wrote:
>>> time so that it can produce a fast program, I couldn't care less.
 
>> Suppose it's unnecessarily slow?
 
> What does it matter? Not like those extra few cycles will kill me.
 
 
but you can use other
> options too -- and then it doesn't matter anymore how much overhead a
> compiler needs, because whether compiling the single file I changed
> takes a second or just a tenth of one,
 
What is your tolerance threshold? Mine is about half a second beyond
which any delay is annoying and a distraction if I'm in the middle of
development.
 
My current projects each build from scratch in about 0.2 seconds or
under when generating native code.
 
All my five main language projects together (three compilers, an
interpreter and assembler/linker all written in static code, about
100Kloc total) can be built from source to new production exes in about
0.7 seconds. One of those five projects is a C compiler.
 
None of the programs use optimised code; it's all quite poor.
Yet the programs are still fast. None of them are written in C.
 
I reckon I could double that speed because there are some
inefficiencies, but for the time being it's adequate, since I can build
a project from scratch faster than I can press the Enter key.
 
(An older compiler in dynamic code could be built from source to
byte-code in some 30msec. It was arranged so that every time I ran it,
the whole thing was recompiled, but you couldn't tell. Now /that/ was
scary.)
 
> What does it matter? Not like those extra few cycles will kill me.
 
Why am I saying all this? Because extra cycles do matter to some of us
who like small tools that work more or less instantly. Compilation,
considered as a task that converts a few hundred KB of input to a few
hundred KB of output, shouldn't really take that long.
 
--
bart
David Brown <david.brown@hesbynett.no>: Jul 03 03:37PM +0200

On 03/07/18 13:00, Bart wrote:
> of those libraries and thus their implementation code needs to be
> reprocessed for each compile whether it is used or not. And if it is
> used, it requires that expansion (instantiation or whatever you call it).
 
Yes and no. For the std::min and std::max functions, you need to
#include <algorithm>. If you don't want anything from that header, you
don't include it and there is no cost. And if you do include that
header, then the cost for min and max is negligible - they are very
simple templates.
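Conceptually, std::max is little more than

template <class T>
const T& max(const T& a, const T& b)
{
    return (a < b) ? b : a;
}

give or take constexpr and the overload taking a comparator.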
 
A reasonably complex C++ file often includes a good number of standard
library files. It is not uncommon for the compiler to be chewing
through several hundred thousand lines before getting to the user's
code. This /is/ an issue with C++ compilation - it is a known issue,
and work is being done to improve the matter with C++ modules.
(Pre-compiled headers can help, but have a lot of limitations.)
 
The three or four lines for "max" and "min" templates in this lot do
not matter. It is /irrelevant/ for compilation time. It is
/irrelevant/ for generating efficient code. It is a /good/ thing from
the point of view of language maintenance and development - the more of
the language that is in the library rather than the core language, the
easier it is to add new features, improve existing features, deprecate
bad features, and generally develop the language.
 
 
> It is this principle, applied to just about everything in the language
> and not just max, that might be the reason for those huge headers.
 
There are many thousands of types, functions and templates in the C++
standard library. Do you think these should all be built-in parts of
the language?
 
> But by extrapolation it would have taken some 18 seconds. -O3 didn't crash
> and was much faster, but still much slower than compiling a=a+b;
 
> Elsewhere, I can compile 100K lines of 'a max:=b' in 0.25 seconds.)
 
If you find a use for a file consisting of 100,000 lines of "a = MAX(a,
b)", then please let us know.
 
>> It's okay to want a "max" operator because you think code would be
>> neater if it were an operator.
 
> That too. Slicker compilation is a bonus.
 
No, the slicker compilation is irrelevant.
 
 
> I can see also the attraction of implementing such things outside the
> core language, so that they can be applied to new user types more
> easily, but that seems to have a cost.
 
The attraction is real - the cost is not.
 
> However that doesn't stop binary
> + and - being built-in (I assume, in the case of C++) but still being
> applied to user types.
 
You have to have /something/ built in to the language. You need /some/
functions, operators, types as your fundamentals on which to build. You
pick these based on having the most necessary, common and convenient
features in the language - with other parts in the library. Thus
although multiplication could, in theory, be defined in terms of looped
addition and a smart optimiser, it makes sense to include such a
common operation in the core language. But "max" is rarely used, easily
defined in a library, and easily optimised by the compiler - there are
no advantages in putting it in the core.
 
If and when some of the more advanced proposals for C++ make it to the
language - reflection, modules, metaclasses - then a number of parts of
what is currently in the core language could be done in libraries.
"class", "struct", "union", "enum" could all be part of a library rather
than being embedded in the language, leading to greater flexibility
(plus the downside - risk of greater complication and confusion). With
modules, this would not result in greater compile time.
 
> but max's like my example above. But a program /could/ consist largely
> of things that do need to be expanded through things in those large
> headers, things that /might/ be built-in in another language.
 
I am sure there are things that /could/ be made more efficient by having
them in the core, rather than in the library - maybe even things that
would have a noticeable effect on efficiency (either of compilation, or
of the generated code). Compiler implementations certainly do this sort
of thing, with intrinsic functions or builtin functions. "max" is not a
serious candidate for such a thing.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jul 03 10:18AM -0400

On 7/3/2018 8:04 AM, Bart wrote:
> and assembler/linker all written in static code, about 100Kloc total), can be
> built from source to new production exes in about 0.7 seconds. One of those
> five projects is a C compiler.
 
Impressive. Using Visual Studio and a C++ compiler to compile my
mostly C-like code, even my modest programs take a couple seconds.
My Visual FreePro, Jr. app is about 60K lines of code and it takes
about 5-7 seconds to compile in debug mode, and about 10-12 seconds
in release mode (optimized code).
 
> None of the programs use optimised code; it's all quite poor. Yet the
> programs are still fast. None of them are written in C.
 
One of my tests for developing CAlive required creating a little parser
that converted C code to x86 assembly. Compared to Microsoft's Debug
mode code generation, my parser used about 50% more temporary variables,
most of which ultimately went unused, but I had to have a simple model
for the test parser, and it also used a fixed assembly model (no
optimization). It generated about 2x the number of asm instructions as
the Debug mode code by MS's Visual C/C++ compiler. But it too was still
very fast, and the entire parser was around 2,500 lines and could work
with any valid C expression, except I didn't have struct, pointer, or
address-of support.
 
My goals have never been for optimization as a primary focus. Today,
computers are fast enough that even the poor code can run fast enough
for nearly all tasks. My goals are in making developers more productive
and getting more code generated and debugged in less time.
 
With CAlive, I'm looking to get everything syntactically correct in my
first release (and a subsequent maintenance release to fix all the bugs
I missed in my testing), and then to turn my attention to optimization
after everything's working. I've asked Supercat to help me, but have
not heard a definitive yes or no yet. I figure that will be in the 2021-
2022 timeframe, with CAlive being released in the late 2019 timeframe.
 
> I reckon I could double that speed because there are some inefficiencies, but
> for the time being it's adequate, since I can build a project from scratch
> faster than I can press the Enter key.
 
My little 2,500-line parser's issues were due to hard assumptions I had
to make in the generated code. I had assumed all variables are 32-bit
signed integers, for example, so I had no real types. And I generated
my asm code like this:
 
    ; a = b + c;
    mov   eax,b
    mov   ebx,c
    add   eax,ebx
    mov   t0,eax     ; a = t0

    mov   eax,t0
    mov   a,eax
    mov   t1,eax     ; t1
 
So there are obvious inefficiencies there that a simple optimizer would
remove:
 
    ; a = b + c
    mov   eax,b
    add   eax,c
    mov   a,eax
 
It's doable, it would just require a more complex code generation
algorithm. The one I had was literal printf() statements. :-)
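A sketch of such a pass in C++, with made-up instruction strings; it only
handles the store-then-reload pattern shown above, and ignores labels,
flags and aliasing:

#include <string>
#include <vector>

// Drop a "mov B,A" that immediately follows a "mov A,B": the value is
// already where it needs to be.
std::vector<std::string> peephole(const std::vector<std::string>& code)
{
    std::vector<std::string> out;
    for (const std::string& ins : code)
    {
        if (!out.empty() && out.back().rfind("mov ", 0) == 0
                         && ins.rfind("mov ", 0) == 0)
        {
            std::string prev = out.back().substr(4);  // e.g. "t0,eax"
            std::string cur  = ins.substr(4);         // e.g. "eax,t0"
            std::size_t p = prev.find(','), c = cur.find(',');
            if (p != std::string::npos && c != std::string::npos
                && prev.substr(0, p) == cur.substr(c + 1)
                && prev.substr(p + 1) == cur.substr(0, c))
                continue;   // redundant reload, drop it
        }
        out.push_back(ins);
    }
    return out;
}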
 
> like small tools that work more or less instantly. Compilation considered as
> a task that converts a few hundred KB of input to a few hundred KB of output,
> shouldn't really take that long.
 
I don't worry too much about how long optimized code generation
takes. If it takes five minutes, but generates faster code, to me
that's not an issue because you can do all of your development in
debug mode and it should be fast enough. And where it's not, take
the handful of algorithms slowing you down and compile them separately
in optimized mode and link them in that way to the rest of your debug
code.
 
But, I do think the debugging phase should be nearly instantaneous,
and a primary focus of CAlive is to have a continuous compiler. I
want the code being typed to be compiled continuously, updating the
running ABI in memory continuously, so that you are working on a live
system, using the LiveCode ABI that CAlive generates for. And remember
that CAlive introduces the concept of an inquiry, which is a state
where the code branches to a suspend unit or to the debugger when it
encounters code that produced errors during compilation. It won't
crash. It will suspend so the developer
can bring up the code, fix it, and keep going.
 
-----
Things are moving forward positively toward CAlive. The 2019-2020
release date has a glimmer of hope. My target is to release it on
Christmas 2019 (my gift to Jesus Christ, and to all mankind).
 
We'll see though. Lots left to do, and I could get hit by a bus
or have a heart attack or stroke before then. You never know what
tomorrow holds ... which is why you must get right with Jesus today.
It's important.
 
--
Rick C. Hodgin
scott@slp53.sl.home (Scott Lurndal): Jul 03 02:59PM

>> whole seconds of wasted time every single year!
 
>So, when people say that C++ is slow to compile (compared to equivalent
>code in other languages), then what is the reason?
 
Who says that?
Keith Thompson <kst-u@mib.org>: Jul 03 09:02AM -0700

> In comp.lang.c Keith Thompson <kst-u@mib.org> wrote:
[...]
 
> IIRC on earlier PDP-11s they were mapped at 17777700-17777717. You
> needed this in order to be able to initialize registers from the front
> panel. I think they gave up doing this in some later VLSI versions.
 
I should have mentioned that there are/were different versions of the
PDP-11. The version I worked with did not have memory-mapped
registers as far as I know. Others could have.
 
[...]
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Bart <bc@freeuk.com>: Jul 03 05:33PM +0100

On 03/07/2018 14:37, David Brown wrote:
> On 03/07/18 13:00, Bart wrote:
 
> If you find a use for a file consisting of 100,000 lines of "a = MAX(a,
> b)", then please let us know.
 
Yes, you can find out how crap or otherwise a compiler is at handling
lots of such expansions. The fact that it crashed (not due to memory)
suggests there is a bug.
 
 
> than being embedded in the language, leading to greater flexibility
> (plus the downside - risk of greater complication and confusion). With
> modules, this would not result in greater compile time.
 
I used to be excited about stuff like that, but that was very long ago,
before I grew out of it. For example, there is a language called Seed7 which
defines its own syntax. So you don't need to hard-code syntax into a
language implementation any more (although it raises questions of what
syntax examples in the language would use).
 
This kind of flexibility can cause problems - look at Python which is so
dynamic you never really know what the f() line here might do until you
run it, and then it might be different each time:
 
exec(s) # s is a string containing arbitrary code
f()
 
But it'll be interesting to see what magic C++'s modules will perform;
here's my take on the problem:
 
Assume these system headers take up 100Kloc and that nothing can be done
about that. Then compiling a 20-line program means processing 100,000
lines of prologue, then the 20 lines.
 
What is the internal state of the compiler just after those 100,000
lines? Devise a way to remember that state, or to restore that state if
compiling multiple small modules with the same invocation of the compiler.
 
Or simply keep the compiler in memory, together with that state, ready
to be invoked whenever a small program has to be compiled. But the
preprocessing of those headers /has/ to be independent of the program
being compiled.
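(gcc's pre-compiled headers are already a crude version of the first idea:
compile the big header once with something like "g++ -x c++-header
allheaders.h", and later compilations that #include "allheaders.h" pick up
the saved allheaders.h.gch instead of re-parsing the text. Header name
illustrative.)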
 
> of the generated code). Compiler implementations certainly do this sort
> of thing, with intrinsic functions or builtin functions. "max" is not a
> serious candidate for such a thing.
 
Is that why it seems to be an extension in gcc for C++? There it uses <?
and >? operators rather than min and max.
 
The accompanying notes call min and max fundamental arithmetic
operations, and go on to say:
 
"Since <? and >? are built into the compiler, they properly handle
expressions with side-effects; int min = i++ <? j++; works correctly."
 
https://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/Min-and-Max.html
 
 
--
bart
scott@slp53.sl.home (Scott Lurndal): Jul 03 04:47PM


>I should have mentioned that there are/were different versions of the
>PDP-11. The version I worked with did not have memory-mapped
>registers as far as I know. Others could have.
 
I just looked through my PDP-11 handbook (04/34/45/55/60) and it
doesn't show those in the memory map (which is 16 bits wide, unless
the virtual memory option is installed on the /34, which makes for
18-bit physical/16-bit virtual addresses). The highest addressable
byte would have been 777777 (octal) on the unibus; the top 4K words of
address space are automatically sign-extended to 18 bits, so accessing
177777 from the CPU would translate to 777777 on the unibus.
 
The paging hardware (registers at 772300-360, 777600-660) was mapped
in that space.
Ian Collins <ian-news@hotmail.com>: Jul 04 08:42AM +1200

On 04/07/18 01:37, David Brown wrote:
> code. This /is/ an issue with C++ compilation - it is a known issue,
> and work is being done to improve the matter with C++ modules.
> (Pre-compiled headers can help, but have a lot of limitations.)
 
If you are looking to boost compilation speed, look at zapcc
(https://www.zapcc.com/), which uses an in-memory cache for parsed headers.
 
--
Ian.
Bart <bc@freeuk.com>: Jul 04 12:20AM +0100

On 03/07/2018 15:18, Rick C. Hodgin wrote:
> On 7/3/2018 8:04 AM, Bart wrote:
 
> One of my tests for developing CAlive required creating a little parser
> that converted C code to x86 assembly.
 
That sort of sounds like a compiler...
 
> Compared to Microsoft's Debug
> very fast, and the entire parser was around 2,500 lines and could work
> with any valid C expression, except I didn't have struct, pointer, or
> address-of support.
 
My code generation is so bad that I avoid looking at it as it's so
embarrassing. But so long as it works. And actually for somewhat
sprawling applications like compilers (where execution jumps around a
lot) it doesn't make a huge difference. Highly optimised code might be
up to 50% faster.
 
With parsing of C (that is, tokenising, preprocessing, parsing, name
resolution, type checking, constant expression reduction, with AST
output) I can generally do that at something over one million lines per
second. Using that embarrassingly poor code.
 
>     mov    t1,eax       ; t1
 
> So there are obvious inefficiencies there that a simple optimizer would
> remove:
 
This looks like 'three-address code' with lots of temporaries. Easy to
generate but I found it hard to optimise. It was generally half the
speed of the code I described as 'poor' above. (I'm shortly having one
final go at an easy-to-generate intermediate code which is not so hard
to turn into reasonable-running native code.)
 
> the handful of algorithms slowing you down and compile them separately
> in optimized mode and link them in that way to the rest of your debug
> code.
 
Well, that describes the methods I used during the 80s and 90s. Except I
used in-line assembly code in bottlenecks to help the otherwise
indifferent code generation.
 
 
> Things are moving forward positively toward CAlive.  The 2019-2020
> release date has a glimmer of hope.  My target is to release it on
> Christmas 2019 (my gift to Jesus Christ, and to all mankind).
 
I decided to start my C compiler project on 25-Dec-16. It was usable by
Easter '17. (However that is now just an interesting, quirky project as
it plays no part in my current code development, except as a test program.)
 
--
bart
Vir Campestris <vir.campestris@invalid.invalid>: Jul 03 09:45PM +0100

On 01/07/2018 22:09, Alf P. Steinbach wrote:
> asd
fgh?
 
ES was down. Alt.test is over there >>>
Ian Collins <ian-news@hotmail.com>: Jul 03 08:39AM +1200

On 03/07/18 07:29, jacobnavia wrote:
> Le 01/07/2018 à 23:40, Vir Campestris a écrit :
>> Use the old syntax. Then you can type i +=2 to skip the alternate ones.
 
> Why is the index not included as info with the new syntax?
 
Not everything that can be iterated over has an index; sets and maps, for
example.
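With a std::map, say, the range-for just hands you the elements:

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> m{ {"ann", 3}, {"bob", 5} };
    for (const auto& kv : m)               // no index anywhere
        std::cout << kv.first << ": " << kv.second << '\n';
}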
 
--
Ian.
Manfred <noname@invalid.add>: Jul 03 03:36PM +0200

On 7/2/2018 9:29 PM, jacobnavia wrote:
 
> Why is the index not included as info with the new syntax?
 
> Yes, you can figure it out, as several people pointed me to a simple
> solution, so its not a big deal.
 
Since you are calling the range-for the "new syntax", maybe it is worth
recalling that the "range-based for" statement (as the standard defines
it) is an /alternative/ formulation of the for loop, in addition to the
for(init; condition; update){ } one, and not meant to be a replacement
for or to obsolete the traditional syntax.
 
From this perspective, it is /intentional/ that the loop variable is
not present in the range-for expression: the intent is that given a
collection, some block of instructions is executed on each element of
the collection - hence the syntax only involving the collection and a
declaration of the element to be acted upon during each iteration.
 
Moreover, the syntax itself only allows for iteration over /all/
elements in the collection, and not even the ordering of the traversal
is explicit in the expression - it is written in the standard that it
runs from begin() to end(), not in the coding.
 
So, as others have said, if the loop variable is actually instrumental to
processing in the loop, then probably the traditional syntax is a better
choice.
 
Out of curiosity, below are a few alternatives, the last one being the
suggestion from Alf, with the addition of an index filter:
 
 
=================================
 
#include <iostream>
#include <vector>
#include <boost/range/adaptor/indexed.hpp>
#include <boost/range/adaptor/filtered.hpp>
 
template< typename Range >
auto enumerate( const Range& r ) { return boost::adaptors::index( r ); }

template< typename Range >
bool is_even( const typename boost::range_value<Range>::type& x )
{ return x.index()%2 == 0; }

template< class Type >
auto filter_even( const Type& c )
{ return boost::adaptors::filter( c, is_even<Type> ); }


int main()
{
    std::vector<wchar_t> values{ L'a', L'b', L'c', L'd', L'e' };

    // your original
    for (int i=0; i<values.size(); ++i)
    {
        if (i&1) continue;
        std::wcout << values[i] << std::endl;
    }

    std::wcout << std::endl;

    // minimal loop, avoiding anything like __begin, __end ...
    for (const auto& c: values)
    {
        if ((&c - &(values[0])) & 1 ) continue;
        std::wcout << c << std::endl;
    }

    std::wcout << std::endl;

    {
        // explicit iterator, with a flag
        bool skip = false;
        for (auto it = values.cbegin(); it != values.cend(); ++it)
        {
            if (!(skip = !skip)) continue;
            std::wcout << (*it) << std::endl;
        }
    }

    std::wcout << std::endl;

    // the example from Alf, with a filtered index
    for( const auto& item : filter_even( enumerate( values ) ) )
    {
        std::wcout << item.index() << ": " << item.value() << std::endl;
    }
}
 
=================================
 
 
> Of course you can do i += 2, but I was thinking in terms of a filter
> going through all records according to some criteria. An easy criterion
> was using the index, but that was a very bad example.
See above for a boost filter.
 
Sam <sam@email-scan.com>: Jul 02 07:40PM -0400

Soviet_Mario writes:
 
 
> When I embedded the same code and declarations from MY.CPP
> either in Main.CPP or in MainWindow.CPP, the build went
> plain without errors.
 
So far, you've provided no evidence that compilation order matters.
 
> I'm less than newbie in this IDE).
> There might be other aspect in declarations I miss / forgot
> (like extern directives ?)
 
Declarations have nothing to do with compilation order.
 
My current project consists of 422 .C files. They can be compiled in any
order, and when it comes to link time, no matter what order they get built
in, everything links correctly. Actually, every time I build it things
always get built in slightly different order. I use a parallel build, so the
next file gets compiled as soon as one of the current modules finishes
compiling; so the compilation order varies slightly, every time.
 
> global scope (with static they were restricted to file
> scope). As far as class declaration I've no memory of their
> scope and if/how to alter it.
 
If you are seeking assistance with your compilation troubles, it will be
necessary to provide something more substantive than vague memories and
recollections.
 
 
> > Trying random things, without a clear understanding of what
> > the problem is, is very unlikely to be very productive.
 
> I'm asking in order to understand
 
All C++ books provide detailed information about scoping, and how header
files work. If there's
 
> a consequence of the former :)
 
> I tried to explain with a generic example ... I dunno if it
> will be useful
 
No, it's not. "Generic examples" are rarely useful. What is useful are
concrete, specific examples. Vaguely describing files by name, like
"my.cpp", with generic description of what they look like is like calling an
auto mechanic, providing a generic description like "my car doesn't move
when I press the gas pedal", and expect to achieve productive results with
that.
Paavo Helde <myfirstname@osa.pri.ee>: Jul 03 03:42PM +0300

On 2.07.2018 18:13, Soviet_Mario wrote:
> try
> {
> int Posiz = -1;
 
...
> /home/gattovizzato/B/SourceCode/QT5/Prova_QT_00/mainwindow.cpp:28:
> error: undefined reference to
> `clsDiskMap::FzScanPathFoldersRecursively(QString const&)'
 
'undefined reference' means a linker error.
 
Apparently, you have defined class clsDiskMap in two places, once
containing member function declarations and the second time definitions.
This is a violation of the ODR (one definition rule) and will cause UB.
 
Instead, the member function definitions should go into the cpp file
without an enclosing class:
 
// clsDiskMap.cpp:
#include "clsDiskMap.h"
 
bool clsDiskMap::FzScanPathFoldersRecursively (const QString & StartPath)
{
try
{
...
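
The header itself then keeps only the declaration, along these lines
(only the one member shown; the rest of your class goes in as usual):

// clsDiskMap.h:
#ifndef CLSDISKMAP_H
#define CLSDISKMAP_H

#include <QString>

class clsDiskMap
{
public:
    bool FzScanPathFoldersRecursively (const QString & StartPath);
};

#endif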
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jul 03 02:29PM +0200

On 01.07.2018 01:51, Tim Rentsch wrote:
> Later( Fixer f0 ) : f( f0 ) {}
> ~Later(){ f(); }
> };
 
I think this is an interesting approach, very C++11-like in its use of
functions rather than classes.
 
This is commonly called a "scope guard" class, AFAIK first invented
roughly in the year 2000 by Petru Marginean. His ¹DDJ article,
co-authored with Andrei Alexandrescu (author of "Modern C++ Design"), is
a classic. But most of it addresses C++03 concerns, at the time C++98
concerns, that are not particularly relevant in C++11 and later.
 
For example, as you show, `std::function` + lambdas provides one very
easy C++11 way to implement a scope guard, and the `auto` mechanism plus
type of cleanup function as template parameter is a
not-complex-but-more-verbose way to implement it, with marginal
improvement of efficiency. Marginean's main contribution was a trick to
do this in C++03, using lifetime extension of a temporary bound to a
reference to const. They also used a neat scheme for generating unique
names for temporaries, which however failed in Visual C++ when a certain
option was used because then Visual C++ bungled up the result of
standard __LINE__, so that one had to use the Microsoft-specific
__COUNTER__...
 
One nice feature of the original scope guard class was that a scope
guard could be /dismissed/. Microsoft's minimal implementation for the
²C++ guidelines, called ³gsl::final_action, misses that sometimes
critical functionality. But then, as I see it, one should really have
two distinct scope guard classes, one dismissable and one not, so that
what can happen, the possible effects on the code, is more clear.
 
The simple C++11 implementations, like yours and Microsoft's, implicitly
make an important design decision opposite to that of Marginean and
Alexandrescu.
 
Namely, how to deal with exceptions in the cleanup code. Their code
suppressed such exceptions via a `catch( ... )`. I remember discussing
that with Andrei in a short e-mail exchange, because it bit me, but I
didn't get a satisfactory answer as to their rationale or goals.
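A minimal C++11 sketch of the dismissable variant, just to make both
points concrete (names invented here, neither Marginean's nor the GSL's):

#include <functional>
#include <utility>

class ScopeGuard
{
public:
    explicit ScopeGuard( std::function<void()> f )
        : cleanup_( std::move( f ) ), active_( true ) {}

    ~ScopeGuard()
    {
        if( active_ )
        {
            try { cleanup_(); } catch( ... ) {}   // swallow, as they did
        }
    }

    void dismiss() { active_ = false; }           // the sometimes critical bit

    ScopeGuard( const ScopeGuard& ) = delete;
    ScopeGuard& operator=( const ScopeGuard& ) = delete;

private:
    std::function<void()> cleanup_;
    bool active_;
};

The catch( ... ) is exactly the policy decision in question; without it a
throwing cleanup escapes from a destructor, which in C++11 normally means
std::terminate.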
 
 
> int
> main( int argc, char *argv[] ){
> Fixer supersede_with( const char * ), use_stdin();
 
Style nitpick: hiding function declarations inside a function hinders
at-a-glance code comprehension.
 
 
> bool faux = argc > 1 && string( argv[1] ) == "--faux-text";
 
Ditto style nitpick: when I see a variable that's not `const` I start
looking for the place where it's modified, how it affects the code.
 
 
> the C++ standard says could not be called entirely successful.
> Running the program produced the expected output in both cases.
> (Also thanks due to Manfred, from whose program I cribbed a bit.)
 
Thanks for this approach, I didn't think of it – stuck in Old Ways™. :)
 
Sorry for responding so late.
 
First I deferred responding and just starred the posting because I knew
it would take more than a minute; then the Eternal September clc++ feed
malfed for a day or so (I reported it); then there was a ~night.
 
 
Cheers!,
 
- Alf
 
 
 
Notes:
¹ <url:
http://www.drdobbs.com/cpp/generic-change-the-way-you-write-excepti/184403758>
² <url:
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md>
³ <url: https://github.com/Microsoft/GSL/blob/master/include/gsl/gsl_util>
Soviet_Mario <SovietMario@CCCP.MIR>: Jul 03 01:38AM +0200

On 02/07/2018 08:44, Öö Tiib wrote:
> C++ that the framework preprocesses into actual C++. It is done by utility
> called qmake. You need to indicate to qmake what files are in your project
> with a .pro file. I take random one from my hard drive:
 
In another strictly related thread I pasted the .pro file
 
uh I forgot the PROJECT file (generated by QT Creator)
 
#-------------------------------------------------
#
# Project created by QtCreator 2018-06-21T16:21:34
#
#-------------------------------------------------
 
QT += core gui
 
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
 
TARGET = Prova_QT_00
TEMPLATE = app
 
 
SOURCES +=\
main.cpp \
mainwindow.cpp \
disk_scanner_07.cpp
 
HEADERS += mainwindow.h \
disk_scanner_05.h
 
FORMS += mainwindow.ui
 
it seems to contain everything, nevertheless the errors remain :\
 
 
 
> HEADERS += \
> Peg.h \
> Pattern.h
 
This .pro file has some more options.
I did not write mine manually; it was autogenerated.
 
 
> The point is to illustrate how one defines to qmake what .cpp and .h
> files are in a little project. So if you want to #include .cpp
> files then you should perhaps list those as HEADERS.
 
Well, actually I'm including them just as headers.

Maybe I'm making mistakes writing the header itself.
 
 
> Better answers you will get in some Qt-related forums. Qt is compiling
> something that is not actually standard C++ and so is not strictly
> topical here.
 
well I'm not comfortable with forums out there :\
 
TY
 
 
--
1) Resist, resist, resist.
2) If everyone pays taxes, taxes are paid by everyone
Soviet_Mario - (aka Gatto_Vizzato)
