Wednesday, January 31, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 9 topics

Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 31 06:30AM

On Tue, 2018-01-30, Robert Wessel wrote:
 
>>> Thanks for reminding me that the earliest UIs were flat! I'd forgotten.
 
>>The earliest UIs had a major design constraint: they had to look good on
>>monochrome displays.
 
Motif looked kind of ugly on Sun's (excellent) monochrome displays.
 
> Low resolution and 16-color displays pretty much also required a
> flat-ish approach.
 
I think I disagree: here's what people tended to do with four colors:
 
http://scacom.bplaced.net/Collection/600/amiga202.png
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Juha Nieminen <nospam@thanks.invalid>: Jan 31 08:18AM

> Any screenshots?
 
Consider, for example, the window decorations between Windows 7 vs.
Windows 10:
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFGIdq5vztlOGLYOa5xp_lETJ2vTfgCWr4PC1DA0TJp0h9GoPOYb3lvtgA06u3N0eUEhQnCBYFoGKfz2impqoGFZE4Rbw1Ckb2HaLw1BGrZvV5xl0V7OFFmBVFrlVD8JiTWj1LtKtg7Vk/s1600/Windows7_vs_Windows10.png
 
In Windows 7 buttons looked clearly like buttons, and the menu bar was
clearly distinct from the title bar. (For example, it's quite clear where
you have to click to drag the window by its title bar, and where the menu
area is instead. Likewise the clickable area of buttons is very clear and
delineated.) The outer edge of the window is much more visible and shaded.
 
In Windows 10, however, suddenly buttons have no borders at all (why?!?)
and there is no edge delineating the header bar from the menu bar.
The button icons have become nothing more than one-pixel-wide lines.
Likewise the outer edge of the window has become one-pixel-wide, and
harder to distinguish from other elements.
 
That last thing, for example, makes it visually very confusing if you
have several inactive windows on top of each others. It can be hard
to visually see where a certain window is, and misclicks happen
often (they happen to me annoyingly often). In Windows 7 this was
an almost inexistent problem because window edges were much better
visually defined and visible, and header bars actually looked like
header bars, quite clearly distinct from the other window elements.
"Öö Tiib" <ootiib@hot.ee>: Jan 31 01:13AM -0800

> >monochrome displays.
 
> Low resolution and 16-color displays pretty much also required a
> flat-ish approach.
 
The problem is not the flat look itself but its ambiguity. When there
is no room and no colors, there is less artistic freedom.
That does not mean that the user interface has to become vague
and unintuitive.
 
In the current era, when everything runs on hand-held devices, users
are typically confused: did what they touched already react and is it
now waiting for some server, or was it not supposed to react to touch
at all, or maybe their finger was too dry, or what? So the
lame lists of text and unintuitive icons plus poor interaction
feel like they were made by someone stupid, unfriendly or both.
 
That cannot be blamed on the situation. In Borland's text-only GUI
(Turbo Vision) all GUI elements stood out as distinct and had a
clear extent. There was little surprise about whether and what happens
when you click, touch or drag at any spot of the screen. Even the
keyboard accelerators were hinted:
https://thomasjensen.com/software/buchfink/buchfink.gif
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 31 12:28PM

On Wed, 2018-01-31, Juha Nieminen wrote:
> Jorgen Grahn <grahn+nntp@snipabacken.se> wrote:
>> Any screenshots?
 
I should explain that that didn't mean "I don't believe you" --
I was just wondering if it was the same kind of thing annoying me.
 
And it seems it was:
 
> an almost inexistent problem because window edges were much better
> visually defined and visible, and header bars actually looked like
> header bars, quite clearly distinct from the other window elements.
 
Ah, all that. Yes, the borders between things -- even between windows
-- seem to become more and more vague in Windows. I don't understand
why anyone would want that.
 
OTOH, I don't use Windows much, and I tend to force it to use a
"Classic" theme.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Paavo Helde <myfirstname@osa.pri.ee>: Jan 31 06:12PM +0200

On 31.01.2018 14:28, Jorgen Grahn wrote:
> Ah, all that. Yes, the borders between things -- even between windows
> -- seem to become more and more vague in Windows. I don't understand
> why anyone would want that.
 
You are forgetting those lovely effects from semi-transparent windows.
You cannot even tell the window contents apart, not to speak about borders.
 
I especially like the semi-transparent system tray notification windows
which slowly fade away, but are clickable up to some undefined time
point during their fading process.
legalize+jeeves@mail.xmission.com (Richard): Jan 31 04:52PM

[Please do not mail me a copy of your followup]
 
Jorgen Grahn <grahn+nntp@snipabacken.se> spake the secret code
 
>Ah, all that. Yes, the borders between things -- even between windows
>-- seem to become more and more vague in Windows. I don't understand
>why anyone would want that.
 
Not only that but it appears that the width of the resize area on the
borders of the windows has decreased (or perhaps it's a consequence of
high DPI displays). I find when I want to resize something I have to
really carefully position the cursor in order to get a grip on the
sizer.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
woodbrian77@gmail.com: Jan 31 09:16AM -0800

On Wednesday, January 31, 2018 at 2:19:07 AM UTC-6, Juha Nieminen wrote:
> The button icons have become nothing more than one-pixel-wide lines.
> Likewise the outer edge of the window has become one-pixel-wide, and
> harder to distinguish from other elements.
 
I've been looking for a new laptop and was leaning toward
Windows 10. What about Windows 8? Is it also a mess?
 
 
Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jan 31 12:25PM -0500

> I've been looking for a new laptop and was leaning toward
> Windows 10. What about Windows 8? Is it also a mess?
 
It's worse than Windows 10. Windows 10 restored some of the original
desktop features, but not all. Its graphics are a little better than
Win8, more user-friendly. But, it's still bland. I don't know what
they were thinking.
 
My personal goals are to use a Win7-like OS. Even Win7 itself would
work, but they aren't going to support it forever. They will force
us to upgrade if we want to keep using Windows.
 
Personally, on my own private machines, I will never switch to anything
beyond Windows 7. Overt and invasive spyware is not my thing.
 
--
Thank you! | Indianapolis, Indiana | God is love -- 1 John 4:7-9
Rick C. Hodgin | http://www.libsf.org/ | http://tinyurl.com/yaogvqhj
-------------------------------------------------------------------------
Software: LSA, LSC, Debi, RDC/CAlive, ES/1, ES/2, VJr, VFrP, Logician
Hardware: Arxoda Desktop CPU, Arxita Embedded CPU, Arlina Compute FPGA
Manfred <noname@invalid.add>: Jan 31 02:35PM +0100

On 1/30/2018 11:33 PM, Richard wrote:
 
>> (you are probably referring to a different version of the standard than
>> n4618)
 
> I thought n4659 was the last published draft before C++17 was accepted.
Probably so, my note was only about the different section numbering.
The content is not different on this matter, though. (and I believe the
committee is not likely to change such basic features).
 
Manfred <noname@invalid.add>: Jan 31 03:09PM +0100

On 1/30/2018 10:42 PM, James R. Kuyper wrote:
>> lookup has succeeded"
 
> That's 6.4p1 in n4659.pdf, and the cross-reference is to 16.3 rather
> than 13.3. The words are the same, however.
I am referring to n4659 too in this follow-up.
 
> declaration of f() with a return type of 'int', so overload resolution
> still ends up with no role to play in this code. However, this code
> contains no definition for any function that matches that declaration.
I would correct this last part, in that the code /does/ contain a
definition of a function that matches the function /signature/ of f() -
the key issue being that the function signature does not contain the
function return type.
 
> prohibited by 16.1p2, regardless of hiding or overload resolution.
> Therefore, it is not permitted for there to be any actual function named
> f() which is compatible with the block-scope declaration of f().
This is not how I would understand it, since hidden names would not
participate in overload resolution (because of 6.4 p1).
 
 
> and "... the types specified by all declarations referring to a given
> variable or function shall be identical... A violation of this rule on
> type identity does not require a diagnostic." (6.5p10)
I believe you are right on this, which is in fact a requirement about
linkage.
p9 states that both declarations of f() "shall denote the same function"
and p9.1, 9.2 and 9.3 define this identity based on linkage, namespace
and parameter list identity (i.e. function /signature/)
The following p10 states that the /types/ shall be identical (which is
not the same as signatures, since the function return type is part of
the function type, but not part of the function signature), but no
diagnostic is required from the compiler in case of violation.
 
To my understanding, this sums up to the following:
- The program is ill-formed (because it violates 6.5 p10)
- But the compiler is still compliant since "a diagnostic is not required".
 
This apparent inconsistency is explained by the linking rules (which are
based on signature instead of type), and is part of the inheritance from C.
 
<snip>
legalize+jeeves@mail.xmission.com (Richard): Jan 31 04:44PM

[Please do not mail me a copy of your followup]
 
Manfred <noname@invalid.add> spake the secret code
>Probably so, my note was only about the different section numbering.
>The content is not different on this matter, though. (and I believe the
>committee is not likely to change such basic features).
 
If it helps, I've put links to what I believe are the last published
draft standards for C++11, C++14 and C++17 in the Reference sidebar
on my user group website:
 
Utah C++ Programmers <http://utahcpp.wordpress.com>
 
I got sick of trying to find which draft was the last published before
acceptance of the standard :).
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
mcheung63@gmail.com: Jan 30 09:24PM -0800

Rick C. Hodgin於 2018年1月31日星期三 UTC+9上午2時48分13秒寫道:
> will be there.
 
> --
> Rick C. Hodgin
 
fuck off asshole, you mother is killed by god
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jan 31 05:31AM -0800

> .. off .., you mother is killed by god
 
You are correct that God holds the keys to life and death. But there
is a part of that equation you are missing: Namely that God holds the
keys to life and death, AND He has prescribed the manner for which we
are able to attain life. He gives that part out freely to all who will
receive it.
 
My mother was a Christian, Peter, and she was saved. She has left
this Earth, but her existence did not end. She is alive now in
Heaven with God, the promise God has for all who are redeemed.
 
The opposite of that end is to die here on the Earth not being saved.
Such a person goes on to judgment, and then is found guilty and cast
into the eternal lake of fire, which is called "the second death."
That second death is unyielding, while our first death is only
temporary, as we "go to sleep" until judgment day.
 
-----
So, while God does hold the keys to life and death, He has made a
way out of that fiery second death end for all who will receive it.
 
That's the part of the equation you're missing. You're also missing
the framework for which that part of the equation exists: love.
God has a love for you which makes it possible for Him to forgive
you, despite your guilt.
 
--
Rick C. Hodgin
Tim Rentsch <txr@alumni.caltech.edu>: Jan 30 11:32PM -0800

>> to dividing the function appropriately than to making the long
>> function easier to live with.
 
> Yes, but refactoring a function is more work and more risk.
 
I am taking this to mean you are talking about decomposing a
function in two or more smaller ones, rather than shortening or
simplifying a function body "in place". (To be fair my word
"dividing" was overspecific - I could have used a more neutral
word like "revising". But no matter.)
 
In any case, yes, decomposing a function into several is more
work, but the payoff is a lot higher. Probably the ROI is higher
for decomposing than it is for adding 'const' in several places.
Adding const does have the advantage of being more incremental.
The benefit just doesn't go as far, that's all I'm saying.
 
I'm not convinced that de-/re-composing a function carries more
risk necessarily. That depends on how much effort is put into
the revising, and what testing methodology is being used. I
suppose in some absolute sense the risk is smaller, since adding
a 'const' somewhere is very unlikely to cause a problem (assuming
the code still compiles, of course). But the risk/payoff ratio
may very well be higher. The question is more complicated in
C++, where IIANM the presence of 'const' can change things in
unobvious ways due to things like function overloading. It
probably won't, but it might.
 
> function is to add const, so I can be sure 'foo' set near the
> beginning of the function has the same value one screenful later.
> Then when I understand it better, it's easier to refactor.
 
As it turns out I have been doing a fair amount of refactoring
work recently. It's almost always good to start small, looking
for ways to get some local improvements, before moving up to
the scale of decomposing big functions. Adding 'const' can be
one way to help with that, depending on what the starting point
is.
 
>> of the reader. Overusing const is just as much of a negative as
>> underusing it.
 
> Different mindsets, then.
 
I think the word "mindsets" is not really apropos here. What I
think you're talking about is what 'const' conveys to you, or
what it does or should convey to others. (Or similarly its
absence, I don't mean to distinguish those two cases.) What I am
talking about is (empirically) how I use 'const', not about what
it conveys or is meant to convey. As a rule I try not to assume
when reading other people's code that they think the same way I
do, and conversely.
Tim Rentsch <txr@alumni.caltech.edu>: Jan 30 11:37PM -0800

>>> to dividing the function appropriately than to making the long
>>> function easier to live with.
 
> I would say it is about function complexity more than size.
 
I didn't mean "large" to be just about length. Length is one
aspect, but certainly not the only aspect.
Tim Rentsch <txr@alumni.caltech.edu>: Jan 31 12:51AM -0800

>> things a lot less than having to read code backwards.
 
> I'm not sure I grok what you mean here. Surely you don't read the
> code sequentially?
 
Have you ever noticed how newspaper articles are written? They
start with a single concise paragraph that briefly states the
most essential details, at a high level. As the article goes on,
more and more detail is added, in a more or less breadth-first
order, so that the picture is gradually filled in everywhere,
without focusing on small details in any one area too early. The
result is you can stop reading kind of at any point, depending on
how much detail you want - sort of like how the graphic technique
of progressive refinement works.
 
Conversely, have you ever looked at the "live blogs" written by,
eg, sports writers covering sports events in real time? The
distinguishing feature of the articles I'm talking about is that
they are in reverse chronological order - as new bits and pieces
are added, they are added at the top rather than the bottom.
Thus if you want to read about the game from the beginning you
need to start near the bottom, read a paragraph or two while
scrolling towards the bottom of the page, then scroll the other
way to get to the next entry - back and forth in kind of a
zig-zag pattern. It's quite...hmmm...unsatisfying.
 
I find it very helpful when code is laid out in a mostly temporal
order, with calling functions written before called functions,
and child functions called earlier by a parent written before
child functions called later by the same parent. Of course there
are more elaborate connection structures possible, but to the
extent the call graph is a tree this corresponds to a prefix
traversal of the tree.
 
How I read code is probably too complicated to describe (if
indeed I even know all the various detours that I take). But for
sure I don't read code sequentially line-by-line. For one thing
my unit of reading is mostly one function body at a time. What
may be more important is that I don't necessarily read everything
but will skip some pieces here and there. When code is laid out
in a mostly temporal order, there is a very good chance that what
I want to read next is just further down the page. The general
direction of page-turning (or scrolling, if you will) is largely
the same - starting at the top, headed generally towards the
bottom, skipping pages now and then when there are details not
important at the moment, plus the occasional sideways jog when I
need to look back at something that happened earlier, or to skip
ahead briefly to look at a detail in a shared leaf node (and so
likely to be out of temporal order).
 
I believe there are other reasons that code laid out in a mostly
temporal order is easier to read and comprehend. In any case
what I was talking about is a caller-before-callee ordering for
the functions of a program or program sub-component.
Tim Rentsch <txr@alumni.caltech.edu>: Jan 31 01:03AM -0800

> I should. I can't be bothered to maintain three things: the
> declaration of helper(), the call to helper(), and the implementation
> of helper(), somewhere far down in the file.
 
What's sad is that C++ compilers have the machinery they need,
or at least most of it, so that forward function declarations are
not necessary. If you will excuse a rather dumb example:
 
#include <stdio.h>

unsigned a, b, c;

struct Program {

    int
    main( int argc, char **argv ){
        foo();
        bas();
        bar();
        printf( "a, b, c: %u %u %u\n", a, b, c );
        return 0;
    }

    void
    foo(){
        a = 4;
    }

    void
    bas(){
        b = 49;
        for( unsigned i = 0; i < a; i++ ) b += i*i;
    }

    void
    bar(){
        c = 121;
        for( unsigned i = 0; i < b; i++ ) c -= i;
    }

} program;

int
main( int argc, char **argv ){
    return program.main( argc, argv );
}
 
It would be nice to push the function-calling capability for
methods inside a struct/class up to the level of outer functions
in each entire translation unit.
mcheung63@gmail.com: Jan 30 09:23PM -0800

Rick C. Hodgin於 2018年1月30日星期二 UTC+9上午4時32分13秒寫道:
 
> It's why men and women like me teach as we do.
 
> --
> Rick C. Hodgin
 
fuck off asshole
Juha Nieminen <nospam@thanks.invalid>: Jan 31 08:20AM

> I acknowledged those were your terms, and that I would not be able
> to have back-and-forth exchanges with you based on them. It is to
> my loss, but it's where you and I are.
 
Why don't you just fuck off, you fucking spammer? Go to hell.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 31 06:50AM

On Tue, 2018-01-30, Scott Lurndal wrote:
> Borneq <borneq@antyspam.hidden.pl> writes:
>>If GnuC have macro that put to code info about current git commit hash?
 
> gcc -DGIT_HASH=$(git rev-parse HEAD) source-file.c
 
One has to be careful, though, not to build source-file.o, then
perform changes so that the hash changes (e.g. move to a different
branch), build with the old object file, and end up with a misleading
GIT_HASH in the executable.
 
I don't have a solution to that; nowadays I tend to think trying to do
what you want is a mistake. And it's also offtopic, since it's mostly
about the version control and build systems.
 
Also check out the gcc (or linker) BuildID feature (which I know very
little about).
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
David Brown <david.brown@hesbynett.no>: Jan 31 09:12AM +0100

On 30/01/18 22:30, Borneq wrote:
> If GnuC have macro that put to code info about current git commit hash?
 
I would recommend avoiding such things. When I check out code from a
repository and re-build, I want bit-for-bit identical builds. That way
I know everything is correct, everything works, there is no need for any
new testing or qualifications.
 
Different people and different types of programming have different
needs, of course - but you should consider carefully if this is really a
feature you want. It is easy to get things wrong, and impossible to
check that you always get it right.
 
<https://wiki.debian.org/ReproducibleBuilds>
Andrey Karpov <karpov2007@gmail.com>: Jan 30 11:20PM -0800

We'd like to present a series of articles with recommendations on writing high-quality code, using examples of errors found in the Chromium project. This is the fifth part, which deals with the use of unchecked or incorrectly checked data. A very large number of vulnerabilities exist precisely because of the use of unchecked data, which makes this topic exciting and relevant.
 
Continue read: https://www.viva64.com/en/b/0557/
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 31 06:40AM

On Tue, 2018-01-30, Chris M. Thomasson wrote:
 
>> Isn't that a PIC-32 instruction?
 
> A PowerPC instruction eieio?
 
> https://en.wikipedia.org/wiki/Enforce_In-order_Execution_of_I/O
 
Ah, that's right; I confused them.
 
There used to be a humorous list of fake errno codes floating around;
/that/ was what EOWNERDEAD reminded me of.
 
I thought I had a copy, but cannot find it right now. Unless it was
this one from USENIX 86 -- but these aren't as funny as I remembered
them.
 
https://www.gnu.org/fun/jokes/errno.2
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Tim Rentsch <txr@alumni.caltech.edu>: Jan 30 05:51PM -0800


> In an ideal language one has to remember the minimum possible
> about the language (so an easy language),
> because one has to think about the algorithm and see the loop
 
Do you mean programming languages should be designed so there
isn't very much to remember?
 
The Smalltalk language has just three levels of precedence, and
binary operators (the middle precedence level) are grouped
strictly left-to-right (unless parentheses are used, of course).
(Smalltalk also has some related constructs that might be
considered another level of precedence or two.) The whole
language has just three keywords. Also, control structures (eg,
if/then/else, for, while) are expressed using the same syntax
as expressions, so there is less to remember there.
 
APL has no levels of precedence to remember: all expressions
are grouped right-to-left, again with parentheses available
to pick a different grouping.
 
Languages like Forth and PostScript have /no/ precedence and /no/
parentheses - evaluation is strictly left-to-right. PostScript
uses braces ({}) to enclose unevaluated code fragments, which
are then used with special operators to provide if/then/else,
for loops, etc.
 
Each of these cases has some nice properties and also some other
properties that are not as nice. Personally I think a happy
medium is a better path: trading a little bit of expression
complexity (and expressiveness) for having to remember a somewhat
bigger syntax has a positive ROI, up to a point. I don't know
what that point is, but it definitely seems higher than the
"syntactically minimalist" languages like those mentioned
above.
Tim Rentsch <txr@alumni.caltech.edu>: Jan 30 06:04PM -0800


> Absolutely.
 
> To me the language is a way to get things done, not a means in itself.
 
> Tim & I will have to disagree over that,
 
I haven't said anything about it. Why do you think we
would disagree?
 
> just as we disagree on minor
> points of English vs American.
 
I don't "disagree" about differences between British English and
American English, any more than the difference between French and
German. They are different languages, that's all (or different
dialects of the same language, some would say). For that matter
American English itself has a fair number of regional dialects.
For an example see the movie "Airplane!".
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
