Friday, April 30, 2021

Digest for comp.lang.c++@googlegroups.com - 8 updates in 4 topics

TDH1978 <thedeerhunter1978@movie.uni>: Apr 30 05:08PM -0400

On 2021-04-27 11:02:01 +0000, Paavo Helde said:
 
> os << proxy.get_value();
> return os;
> }
 
Thank you; I'm sure that would work, but I would have expected
operator<<() to prefer the 'const':
 
X operator[](const Y& y) const
 
over the 'non-const':
 
MyDBProxy_& operator[](const Y& y)
 
 
Do you know why the compiler is choosing one over the other?
 
What if, instead of operator<<(), I had a function:
 
void foo(const X& x);
 
and I pass it:
 
foo(mydb[y]);
 
Would the compiler complain because it thinks mydb[y] returns the proxy
and not x?
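 
For reference, here is a minimal sketch of the situation being asked about
(the type and member names are stand-ins reconstructed from the quoted
declarations, not the real code):
 
#include <iostream>
 
struct Y {};
struct X { int value; };
 
struct MyDBProxy_ { int get_value() const { return 42; } };
 
struct MyDB
{
    X operator[](const Y&) const { return X{1}; }       // the 'const' overload
    MyDBProxy_& operator[](const Y&) { return proxy; }  // the 'non-const' overload
    MyDBProxy_ proxy;
};
 
int main()
{
    MyDB mydb;   // not const
    Y y;
    // Overload resolution also matches the implicit object argument (mydb).
    // Because mydb is non-const, the non-const operator[] is the better match
    // (the const-ness of the Y argument plays no role), so the proxy is returned.
    std::cout << mydb[y].get_value() << '\n';
}
 
If mydb were const (or accessed through a const reference), the const
overload returning X would be chosen instead; that is also why foo(mydb[y])
would be handed the proxy rather than an X, unless the proxy is convertible
to X.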
TDH1978 <thedeerhunter1978@movie.uni>: Apr 30 04:53PM -0400

On 2020-12-31 22:24:46 +0000, Keith Thompson said:
 
> problem (what you posted won't compile because uint8_t is not defined),
> along with the exact command line you used to compile it and the output
> of "gcc --version"?
 
I have upgraded to Fedora 34 and I can confirm that this warning does
not appear with gcc/g++ 11.0.1. This confirms my suspicion that this
was probably a compiler bug.
 
cmd> gcc --version
gcc (GCC) 11.0.1 20210324 (Red Hat 11.0.1-0)
olcott <polcott2@gmail.com>: Apr 29 07:08PM -0500

On 4/29/2021 5:24 PM, Ben Bacarisse wrote:
> important. But you refused to address this fundamental point. I asked:
 
> "First, do you really disagree with my simple statement that a
> computation that halts, for whatever reason, is a halting computation?
 
Yes you and I both disagree with this:
 
On 11/27/2020 9:02 PM, Ben Bacarisse wrote:
> A computation that would not halt if its simulation were not
> halted is indeed a non-halting computation. But a computation that
> would not halt and one that is halted are different computations.
 
 
void Infinite_Loop()
{
HERE: goto HERE;
}
 
The first sentence corresponds to a halt decider that simulates its
input and examines the execution trace to determine that a finite string
specifies infinite execution such as the above infinite loop. After the
halt decider determines that a finite string does specify infinite
execution the halt decider stops simulating this input and reports not
halting.
 
The second sentence is false because it would decide that the above
infinite loop is a halting computation on the basis that the halt
decider stopped simulating it.
 
 
--
Copyright 2021 Pete Olcott "Great spirits have always encountered
violent opposition from mediocre minds." Einstein
olcott <NoOne@NoWhere.com>: Apr 30 09:10AM -0500

On 4/30/2021 8:10 AM, Ben Bacarisse wrote:
 
>> I created those words, yours are merely a paraphrase of my words.
 
> Well that's good news. So you now agree with me that the computation
> that "would not halt" and the one that "is halted" are different?
 
I noticed my error after I posted.
These are the words that are only a paraphrase of my own words:
>>>> On 11/27/2020 9:02 PM, Ben Bacarisse wrote:
>>>>> A computation that would not halt if its simulation were not
>>>>> halted is indeed a non-halting computation.
 
> You
> that would not halt) as the answer for the one that is halted. But if
> you know they are different computations, you must surely reject your
> own trick.
 
Every computation that never stops unless its simulator stops simulating
it <is> a non-halting computation.
 
void Infinite_Loop()
{
HERE: goto HERE;
}
 
void Infinite_Recursion(u32 N)
{
Infinite_Recursion(N);
}
 
void H_Hat2(u32 P)
{
u32 Input_Would_Halt = Simulate(P, P);
}
 
void H_Hat(u32 P)
{
u32 Input_Halts = Halts(P, P);
}
 
int main()
{
u32 Input_Would_Halt = Halts((u32)Infinite_Loop, (u32)Infinite_Loop);
u32 Input_Would_Halt = Halts((u32)Infinite_Recursion, 1);
u32 Input_Would_Halt = Halts((u32)H_Hat2, (u32)H_Hat2);
u32 Input_Would_Halt = Halts((u32)H_Hat, (u32)H_Hat);
}
 
All of the above functions are computations that never stop unless their
simulator stops simulating them and you know it.
 
 
 
> gone off the rails elsewhere. Unless you can agree that every
> computation that halts is halting computation, discussion of the halting
> problem is utterly pointless.
 
It depends on how you are defining your terms and the scope of what is
considered the computation.
 
When we have an infinite loop that halts because its simulator stopped
simulating it, then we have an infinite loop that did halt, yet its finite
string retains the property of infinite execution.
 
This counter-example correctly refutes your claim:
On 4/30/2021 8:10 AM, Ben Bacarisse wrote:
 
> I suppose we should add it to your other Great Equivocation. You need
> to accept, once and for all, that
 
> (A) Every instance of the halting problem has a correct yes/no answer.
 
I would not word it that way. Every complete Turing machine description
either halts on its input or fails to halt on its input when simulated
by a UTM.
 
> (B) Every computation that halts, for whatever reason, is a halting
> computation.
 
The equivocation on this statement is the last refuge for people who
have disagreement and rebuttal as a much higher priority than the mutual
understanding that can be achieved through an honest dialogue.
 
We can say that an infinite loop that was forced to halt by its
simulator on the basis that this simulator discerned that its input
finite string has the property of infinite execution is a halting
computation because it was forced to halt.
 
If we look at it this way then building a universal halt decider is
trivial: we simply say that all inputs halt. It does not matter whether they
halt on their own or are forced to halt because their simulation was
stopped; every input falls into one of these two categories.
 
// Proving that your reasoning is incorrect
bool Universal_Halt_Decider(u32 P, u32 I)
{
return true;
}
 
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
wij <wyniijj@gmail.com>: Apr 30 10:27AM -0700

On Friday, 30 April 2021 at 08:08:44 UTC+8, olcott wrote:
 
> --
> Copyright 2021 Pete Olcott "Great spirits have always encountered
> violent opposition from mediocre minds." Einstein
 
Let X(), Y() be the functions we had discussed before. (X the halt decider, Y the input program)
Y() is written AFTER X(). Such a Y() can do anything opposing what X() expects.
Otherwise, X() is probably not a deterministic function.
 
X() implies "P=NP", at least a one million dollar answer, maybe more.
Anyone who has the answer you expect will win the prize, NOT YOU, and there is no need to tell you.
olcott <NoOne@NoWhere.com>: Apr 30 01:04PM -0500

On 4/30/2021 12:27 PM, wij wrote:
> Otherwise, X() is probably not a deterministic function.
 
> X() implies "P=NP", at least a one million dollar answer, maybe more.
> Anyone has the answer you expect will win the prize, NOT YOU and no need to tell you.
 
void H_Hat(u32 P)
{
u32 Input_Halts = Halts(P, P);
if (Input_Halts)
HERE: goto HERE;
}
 
int main()
{
u32 Input_Would_Halt = Halts((u32)H_Hat, (u32)H_Hat);
Output("Input_Would_Halt = ", Input_Would_Halt);
}
 
Although the "P=NP" stuff is totally out of the scope of my
investigation I have discovered that all of the conventional halting
problem undecidability proof counter-examples can be decided as not
halting on the basis that the specify infinite recursion to any halt
decider that bases its halting deciding decision on examining the
execution trace of its input.
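 
As a rough illustration only (none of these names come from the posted x86utm
code), a trace-based check of the kind described above could look for the same
function being re-entered with the same argument:
 
#include <cstddef>
#include <set>
#include <utility>
#include <vector>
 
typedef unsigned int u32;
typedef std::pair<u32, u32> Call;   // (function address, argument)
 
// Reports "repeated call seen" when the simulated execution trace contains
// the same (function, argument) pair twice; a real decider would have to
// check much more than this (e.g. intervening conditional branches).
bool Trace_Shows_Repeated_Call(const std::vector<Call>& trace)
{
    std::set<Call> seen;
    for (std::size_t i = 0; i < trace.size(); ++i)
        if (!seen.insert(trace[i]).second)
            return true;
    return false;
}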
 
I have the above fully operational in the x86utm operating system that I
wrote, which is based on an x86 emulator.
 
http://www.liarparadox.org/Halting_problem_undecidability_and_infinite_recursion.pdf
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
Juha Nieminen <nospam@thanks.invalid>: Apr 30 10:10AM

> Because that bottleneck does not exist. All that time wasted and 1%
> of performance gained. Meanwhile possible opportunities to improve
> performance several times lost.
 
If an action that could take a hundredth of a second is instead
taking a second, there certainly is a bottleneck.
 
The problem is that if the bottleneck is distributed among
thousands of sloppily written lines, each with an attitude of
"this doesn't need to be efficient", you aren't going to fix
that bottleneck any time soon.
 
 
> I don't think someone deliberately wrote c.size()==0 when it was less
> efficient than c.empty() ... or it++ when ++it was more efficient. Their
> focus was simply on things that mattered.
 
When the attitude is "this doesn't need to be efficient", and
"it doesn't matter if this takes 10 times longer because the
time is still in the microseconds", then you *are* deliberately
and willingly writing inefficient code.
Bo Persson <bo@bo-persson.se>: Apr 30 03:33PM +0200

On 2021-04-30 at 12:10, Juha Nieminen wrote:
>> performance several times lost.
 
> If an action that could take a hundredth of a second is instead
> taking a second, there certainly is a bottleneck.
 
It's not a bottleneck if that program runs 3 times a day, at non-peak hours.
 
> "it doesn't matter if this takes 10 times longer because the
> time is still in the microseconds", then you *are* deliberately
> and willingly writing inefficient code.
 
It is often a more efficient use of the developer's time to focus on the
applications that have the most runtime on a specific system - perhaps
using several CPU-hours per day - not on trying to get everything else down
from 5 to 4 microseconds.

Thursday, April 29, 2021

Digest for comp.lang.c++@googlegroups.com - 8 updates in 3 topics

Juha Nieminen <nospam@thanks.invalid>: Apr 29 04:45AM

> I don't see any problem either way as both seemed as error prone and used
> same magic number 5 ... I only thought wtf is that 5.
 
Are you always this nitpicky about examples (that have been simplified from
the real-life counterparts that they are based on)?
 
> Have you ever used profiler on some large product?
 
If you have slight unnecessary inefficiencies at a thousand places in a
huge program, the profiler isn't going to tell you where and which.
The only thing it's going to tell you is that "this action is taking
1 second" but it won't be able to pinpoint the exact bottleneck...
because the bottleneck will be about evenly distributed among a
thousand instances of "this doesn't need to be efficient".
 
It's, arguably, the worst possible bottleneck that could exist,
because you can't pinpoint it. And fixing it is extraordinarily
laborious, it will probably never be done, and the program will be
doomed to being inefficient because nobody will fix it. All because
of thousands of instances of the "this doesn't need to be efficient"
attitude.
 
> Nonsense, I don't wonder, I have decades of evidence that the attitude
> is correct, and considering optimal when focus should be to robustness
> and correctness is worthless and counterproductive.
 
Yeah, because string::substr() is so much more robust and correct than
string::compare().
 
I think that *deliberately* writing inefficient code is highly
counterproductive.
"Öö Tiib" <ootiib@hot.ee>: Apr 29 03:16PM -0700

On Thursday, 29 April 2021 at 07:45:17 UTC+3, Juha Nieminen wrote:
> > same magic number 5 ... I only thought wtf is that 5.
> Are you always this nitpicky about examples (that have been simplified from
> the real-life counterparts that they are based on)?
 
Sorry, but those removed parts can actually show why it was done like that.
 
> 1 second" but it won't be able to pinpoint the exact bottleneck...
> because the bottleneck will be about evenly distributed among a
> thousand instances of "this doesn't need to be efficient".
 
No, have you not used a profiler? It always shows that only a tiny part of
the whole program's code is run over 90% of the time. The rest of the code is run
more and more rarely. Half of most C++ code can be rewritten in Python and
the overall perceived performance of the product does not change at all.
 
> doomed to being inefficient because nobody will fix it. All because
> of thousands of instances of the "this doesn't need to be efficient"
> attitude.
 
Because that bottleneck does not exist. All that time would be wasted and 1%
of performance gained. Meanwhile, possible opportunities to improve
performance severalfold are lost.
 
> > and correctness is worthless and counterproductive.
> Yeah, because string::substr() is so much more robust and correct than
> string::compare().
 
For example, size() wasn't required to be O(1) for all containers in C++98, but
programmers used it to check whether a container was empty. It had to be
made O(1) because that was somehow easier for programmers to reason about.
The same may be true of your code: whatever you simplified out may have
made that substr() easier to reason about than compare().
 
> I think that *deliberately* writing inefficient code is highly
> counterproductive.
 
I don't think someone deliberately wrote c.size()==0 when it was less
efficient than c.empty() ... or it++ when ++it was more efficient. Their
focus was simply on things that mattered.
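 
For reference, the two pairs of idioms mentioned above, side by side (a sketch
only; the C++98 point is that std::list<T>::size() was allowed to be linear,
which is why empty() was the safer spelling):
 
#include <list>
 
bool is_empty_1(const std::list<int>& c) { return c.empty(); }      // always O(1)
bool is_empty_2(const std::list<int>& c) { return c.size() == 0; }  // could be O(n) before C++11
 
long count_items(const std::list<int>& c)
{
    long n = 0;
    for (std::list<int>::const_iterator it = c.begin(); it != c.end(); ++it)
        ++n;   // ++it does not have to produce the copy that it++ must return
    return n;
}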
Juha Nieminen <nospam@thanks.invalid>: Apr 29 04:47AM

> Agreed that the TIOBE index should be taken with a grain of salt
 
Isn't it mainly based on how much people ask questions in forums about
the language in question?
 
If that's so, then maybe the index shouldn't be interpreted as how
popular the language is, but how many questions it raises when people
try to use it.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 29 11:42AM +0300

29.04.2021 07:47 Juha Nieminen kirjutas:
 
> If that's so, then maybe the index shouldn't be interpreted as how
> popular the language is, but how many questions it raises when people
> try to use it.
 
Some of my recent programming related google searches:
 
man recv
man send
man setsockopt
man wcsncmp
man fileno
man ftello64
man fabs
man scanf
std::deque
c++ inline variable
 
So I can easily see how someone might mistakenly consider me a C programmer.
 
One of my frequent questions about C functions is what the correct
include header is; these are sometimes quite chaotic, but one must get
them right when e.g. moving around some existing code.
wij <wyniijj@gmail.com>: Apr 29 10:56AM -0700

On Thursday, 29 April 2021 at 00:57:11 UTC+8, Öö Tiib wrote:
> themselves (and their kids) being vegetarians. It is because everybody want
> talk about themselves and to find and to cooperate with others who share
> their attitude.
 
Many of the 'larger' companies where I live are manufacturers.
If C++ is listed in a job requirement, it is often associated with databases,
web, HMI, AI, motors, vehicles, ..., or with hardware verification/simulation/analysis.
Companies like these would rarely intentionally announce changes to their tools or
internal affairs.
 
E.g. Garmin (because the job requirement is in English and is in a typical form)
wants C/C++ people.
 
1. Good understanding in C/C++ programming.
2. Fitness GPS product design – Running Wearables, cycling products and wellness products.
3. Good English communication ability to communicate with foreign engineers.
 
Will be a plus if have experience about:
1. Computer architecture and ARM Compiler.
2. Have experience about embedded system or RTOS.
3. Knowledge about how to use logic analyzer and Jtag ICE.
4. Other OS related knowledge.
----------------------------
In this case, a normal C++ applicant would struggle with the ARM compiler, RTOS, ICE,
and hardware things. And "C/C++" is a general term.
I cannot think of any reason why any company like this would say anything too
specific about their production tools.
Christian Gollwitzer <auriocus@gmx.de>: Apr 29 11:04PM +0200

Am 29.04.21 um 10:42 schrieb Paavo Helde:
> c++ inline variable
 
> So I can easily see how someone might mistakenly consider me a C
> programmer.
 
This is not how TIOBE works. They run searches on Google and other
search engines with the name of the programming language and count the hits:
https://www.tiobe.com/tiobe-index/programming-languages-definition/
 
If you run a google search yourself for scanf, that doesn't influence
TIOBE at all, not even if you post a question about scanf to
stackoverflow, unless you (or someone else) also mentions "C" or "C++"
on the same page.
 
So yes, it is true that the more questions a language raises, the more
pages you'll find referencing it, but also the more people use it, the
more questions there will be.... it is definitely not a tool for running a
competition between second and third place - that is rather
arbitrary. But you can be sure that a language in 25th place
(Prolog) is used less than one in 3rd place (Python) - though in
this realm, you'll find languages like Bash in 42nd place. Bash is
definitely a language without which the world would be a different
place, yet people don't write that much LOC in it and therefore pose
fewer questions - hence, TIOBE place doesn't equal importance.
 
Concerning C and C++, the situation is especially bad, because many
search engines cannot really distinguish between the two, let alone
can users be relied on always to post the correct term - some refer to C++ simply as C
(as opposed to Python, e.g.). Therefore, TIOBE tells us practically nothing
about whether C or C++ is more commonly used.
 
A similar ranking with maybe more sound methodology is this:
https://madnight.github.io/githut/#/pull_requests/2021/1
 
It's counting the number of pull requests on Github. I'm inclined to
believe this more than TIOBE overall. But that is only my gut feeling.
 
Christian
Ralf Goertz <me@myprovider.invalid>: Apr 29 09:50AM +0200

Am Wed, 28 Apr 2021 23:10:03 +0100
> > from the newsgroup name.
 
> If you post directly on to C++ link, does it work?
 
> <https://groups.google.com/g/comp.lang.c++>
 
Written as such, it is no surprise that you land in the C group and not
the C++ one. My newsreader (maybe erroneously) also drops the two pluses
when clicking on it to open the link in a browser, since they are not
percent-encoded. With <https://groups.google.com/g/comp.lang.c%2B%2B>
you should be on the safe side. I must admit, though, that "+" being a
reserved character should in some circumstances also be usable "as is",
if I understand <https://en.wikipedia.org/wiki/Percent-encoding>
correctly:
 
"Reserved characters that have no reserved purpose in a particular
context may also be percent-encoded but are not semantically different
from those that are not."
 
But I am not inclined to figure out when that is the case.
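 
As an illustration only (not tied to any particular URL library), producing
the encoding in question is trivial:
 
#include <string>
 
// Percent-encode only '+', as discussed above:
// "comp.lang.c++" becomes "comp.lang.c%2B%2B".
std::string encode_plus(const std::string& in)
{
    std::string out;
    for (std::string::size_type i = 0; i < in.size(); ++i) {
        if (in[i] == '+')
            out += "%2B";
        else
            out += in[i];
    }
    return out;
}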
 
> I would have thought that if people can find this link on google
> portal then it should work.  Google Groups are a mess and you can't
> find direct links to the posting site any more.
 
I haven't tried to post using that link, though.
Richard Harnden <richard.nospam@gmail.com>: Apr 29 01:58PM +0100

On 29/04/2021 08:50, Ralf Goertz wrote:
>> portal then it should work.  Google Groups are a mess and you can't
>> find direct links to the posting site any more.
 
> I haven't tried to post using that link, though.
 
"New conversation" on both the ++ and %2b%2b versions open a "New
conversation in comp.lang.c" window.
 
So GG is useless. Everybody knows this.
 
 

Digest for comp.programming.threads@googlegroups.com - 2 updates in 2 topics

Amine Moulay Ramdane <aminer68@gmail.com>: Apr 28 05:23PM -0700

Hello,
 
 
More philosophy about Canada and its education system..
 
I am a white Arab, and I think I am smart, since I have also
invented many scalable algorithms and other algorithms.
 
I have been living in Canada since 1989, and I think Canada is the best country in the world; here is the logical proof of it: read the following about Canada to see it:
 
Canada is the No. 1 Country in the World, According to the 2021 Best Countries Report
 
https://www.usnews.com/info/blogs/press-room/articles/2021-04-13/canada-is-the-no-1-country-in-the-world-according-to-2021-best-countries
 
And the education system of Canada is one of the best in the world, read the following to notice it:
 
How Canada became an education superpower
 
https://www.bbc.com/news/business-40708421
 
And to know more about USA, read my following writing:
 
More philosophy about USA and its education system..
 
I invite you to look carefully at this 9-minute video of Michio Kaku,
an American theoretical physicist, in which he explains something really
important to know about the USA:
 
Michio Kaku: US has the worst educational system known to science
 
https://www.youtube.com/watch?v=-fphPeRvhjQ
 
Here is more proof, read my following writing about USA:
 
Let's look for example at the USA: read the following from Jonathan Wai, who has a Ph.D.; it says:
 
"Heiner Rindermann and James Thompson uncovered that the "smart fraction" of a country is quite influential in impacting the performance of that country, for example, its GDP."
 
And it also says the following:
 
""According to recent population estimates, there are about eight Chinese and Indians for every American in the top 1 percent in brains." But consider that the U.S. benefits from the smart fractions of every other country in the world because it continues to serve as a magnet for brainpower, something that is not even factored into these rankings.
 
What these rankings clearly show is America is likely still in the lead in terms of brainpower. And this is despite the fact federal funding for educating our smart fraction is currently zero. Everyone seems worried Americans are falling behind, but this is because everyone is focusing on average and below average people. Maybe it's time we started taking a closer look at the smartest people of our own country."
 
Read more here:
 
https://www.psychologytoday.com/us/blog/finding-the-next-einstein/201312/whats-the-smartest-country-in-the-world
 
So as you can see, it's immigrants (and there are about eight Chinese and Indians for every American in the top 1 percent in brains) that are making the USA a rich country.
 
And read also the following to understand more:
 
Why Silicon Valley Wouldn't Work Without Immigrants
 
There are many theories for why immigrants find so much success in tech. Many American-born tech workers point out that there is no shortage of American-born employees to fill the roles at many tech companies. Researchers have found that more than enough students graduate from American colleges to fill available tech jobs. Critics of the industry's friendliness toward immigrants say it comes down to money — that technology companies take advantage of visa programs, like the H-1B system, to get foreign workers at lower prices than they would pay American-born ones.
 
But if that criticism rings true in some parts of the tech industry, it misses the picture among Silicon Valley's top companies. One common misperception of Silicon Valley is that it operates like a factory; in that view, tech companies can hire just about anyone from anywhere in the world to fill a particular role.
 
But today's most ambitious tech companies are not like factories. They're more like athletic teams. They're looking for the LeBrons and Bradys — the best people in the world to come up with some brand-new, never-before-seen widget, to completely reimagine what widgets should do in the first place.
 
"It's not about adding tens or hundreds of thousands of people into manufacturing plants," said Aaron Levie, the co-founder and chief executive of the cloud-storage company Box. "It's about the couple ideas that are going to be invented that are going to change everything."
 
Why do tech honchos believe that immigrants are better at coming up with those inventions? It's partly a numbers thing. As the tech venture capitalist Paul Graham has pointed out, the United States has only 5 percent of the world's population; it stands to reason that most of the world's best new ideas will be thought up by people who weren't born here.
 
If you look at some of the most consequential ideas in tech, you find an unusual number that were developed by immigrants. For instance, Google's entire advertising business — that is, the basis for the vast majority of its revenues and profits, the engine that allows it to hire thousands of people in the United States — was created by three immigrants: Salar Kamangar and Omid Kordestani, who came to the United States from Iran, and Eric Veach, from Canada.
 
But it's not just a numbers thing. Another reason immigrants do so well in tech is that people from outside bring new perspectives that lead to new ideas.
 
Read more here:
 
https://www.nytimes.com/2017/02/08/technology/personaltech/why-silicon-valley-wouldnt-work-without-immigrants.html
 
 
Thank you,
Amine Moulay Ramdane.
Amine Moulay Ramdane <aminer68@gmail.com>: Apr 28 04:27PM -0700

Hello,
 
 
 
More philosophy about USA and its education system..
 
I invite you to look carefully at this 9-minute video of Michio Kaku,
an American theoretical physicist, in which he explains something really
important to know about the USA:
 
Michio Kaku: US has the worst educational system known to science
 
https://www.youtube.com/watch?v=-fphPeRvhjQ
 
Here is more proof, read my following writing about USA:
 
Let's look for example at the USA: read the following from Jonathan Wai, who has a Ph.D.; it says:
 
"Heiner Rindermann and James Thompson uncovered that the "smart fraction" of a country is quite influential in impacting the performance of that country, for example, its GDP."
 
And it also says the following:
 
""According to recent population estimates, there are about eight Chinese and Indians for every American in the top 1 percent in brains." But consider that the U.S. benefits from the smart fractions of every other country in the world because it continues to serve as a magnet for brainpower, something that is not even factored into these rankings.
 
What these rankings clearly show is America is likely still in the lead in terms of brainpower. And this is despite the fact federal funding for educating our smart fraction is currently zero. Everyone seems worried Americans are falling behind, but this is because everyone is focusing on average and below average people. Maybe it's time we started taking a closer look at the smartest people of our own country."
 
Read more here:
 
https://www.psychologytoday.com/us/blog/finding-the-next-einstein/201312/whats-the-smartest-country-in-the-world
 
So as you can see, it's immigrants (and there are about eight Chinese and Indians for every American in the top 1 percent in brains) that are making the USA a rich country.
 
And read also the following to understand more:
 
Why Silicon Valley Wouldn't Work Without Immigrants
 
There are many theories for why immigrants find so much success in tech. Many American-born tech workers point out that there is no shortage of American-born employees to fill the roles at many tech companies. Researchers have found that more than enough students graduate from American colleges to fill available tech jobs. Critics of the industry's friendliness toward immigrants say it comes down to money — that technology companies take advantage of visa programs, like the H-1B system, to get foreign workers at lower prices than they would pay American-born ones.
 
But if that criticism rings true in some parts of the tech industry, it misses the picture among Silicon Valley's top companies. One common misperception of Silicon Valley is that it operates like a factory; in that view, tech companies can hire just about anyone from anywhere in the world to fill a particular role.
 
But today's most ambitious tech companies are not like factories. They're more like athletic teams. They're looking for the LeBrons and Bradys — the best people in the world to come up with some brand-new, never-before-seen widget, to completely reimagine what widgets should do in the first place.
 
"It's not about adding tens or hundreds of thousands of people into manufacturing plants," said Aaron Levie, the co-founder and chief executive of the cloud-storage company Box. "It's about the couple ideas that are going to be invented that are going to change everything."
 
Why do tech honchos believe that immigrants are better at coming up with those inventions? It's partly a numbers thing. As the tech venture capitalist Paul Graham has pointed out, the United States has only 5 percent of the world's population; it stands to reason that most of the world's best new ideas will be thought up by people who weren't born here.
 
If you look at some of the most consequential ideas in tech, you find an unusual number that were developed by immigrants. For instance, Google's entire advertising business — that is, the basis for the vast majority of its revenues and profits, the engine that allows it to hire thousands of people in the United States — was created by three immigrants: Salar Kamangar and Omid Kordestani, who came to the United States from Iran, and Eric Veach, from Canada.
 
But it's not just a numbers thing. Another reason immigrants do so well in tech is that people from outside bring new perspectives that lead to new ideas.
 
Read more here:
 
https://www.nytimes.com/2017/02/08/technology/personaltech/why-silicon-valley-wouldnt-work-without-immigrants.html
 
 
Thank you,
Amine Moulay Ramdane.

Wednesday, April 28, 2021

Digest for comp.lang.c++@googlegroups.com - 19 updates in 4 topics

Keith Thompson <Keith.S.Thompson+u@gmail.com>: Apr 28 10:28AM -0700

You posted in comp.lang.c. You want comp.lang.c++ -- probably.
 
Google Groups has a rather horrid bug that quietly drops the "++" from
the newsgroup name.
 
I've cross-posted this to comp.lang.c++ and redirected follows there.
If you reply to this message (and not to any others in this thread), you
should be able to continue the discussion in the correct newsgroup.
 
(I see you're using a lot of C library functions, which is valid, but
perhaps not a good idea if you want to program in C++.)
 
This syntax:
MyForm^ NewUI = gcnew MyForm();
suggests that you're using some non-standard dialect. Someone in
comp.lang.c++ who recognizes it might suggest a better place to post.
 
Previous message follows (I normally wouldn't top-post, but I made an
exception in this case):
 
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Cetin Aslantepe <cetin.aslantepe1988@gmail.com>: Apr 28 02:49PM -0700

Dear Keith,
 
Thank you for posting in the right group.
Cetin Aslantepe <cetin.aslantepe1988@gmail.com>: Apr 28 03:09PM -0700

Dear Supporter,
 
I have been trying to control a stepper motor via a GUI (WinForms) with my little programming knowledge.
 
With the device's example programs, I can get commands (movements) performed.
All the functions are already provided in a DLL.
 
I was told that running the two main programs (the device code and WinForms) together is only possible by applying threads. Are threads necessary?
 
If yes, how should I best design the program with threads?
Are there other solutions?
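 
A minimal sketch of the kind of thread use being asked about (plain standard
C++ rather than WinForms-specific code; MoveMotor here is a hypothetical
stand-in for the blocking call into the vendor DLL):
 
#include <chrono>
#include <thread>
 
// Hypothetical stand-in for the blocking DLL call that moves the motor.
void MoveMotor(int steps)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(10 * steps));
}
 
// Called from the GUI event handler: run the blocking work on a worker
// thread so the GUI thread stays free to repaint and handle input.
void on_move_button_clicked(int steps)
{
    std::thread worker([steps] { MoveMotor(steps); });
    worker.detach();   // real code would join or report completion instead
}
 
(In WinForms specifically, controls may only be touched from the GUI thread,
so any result from the worker has to be marshalled back, e.g. via
Control.Invoke or a BackgroundWorker, rather than updated directly.)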
 
First of all, I would like to use the GUI (WinForms) to make the first simple movements.
How should I proceed?
 
First of all, I want my GUI to communicate with the stepper motor.
How would I do that?
 
I would be very happy about any feedback. Thanks in advance.
 
I appreciate your help.
 
With best regards
Cetin Aslantepe
Real Troll <real.troll@trolls.com>: Apr 28 11:10PM +0100

On 28/04/2021 18:28, Keith Thompson wrote:
 
> Google Groups has a rather horrid bug that quietly drops the "++" from
> the newsgroup name.
 
If you post directly on to C++ link, does it work?
 
<https://groups.google.com/g/comp.lang.c++>
 
I would have thought that if people can find this link on google portal
then it should work.  Google Groups are a mess and you can't find direct
links to the posting site any more.
Cetin Aslantepe <cetin.aslantepe1988@gmail.com>: Apr 28 03:18PM -0700

Dear Real Troll,
 
I'm in the right group now "comp.lang.c++"
 
The message has been posted in the right group.
This is also being discussed in the C group.
 
I would be grateful for any help.
 
best regards
cetin
Real Troll <real.troll@trolls.com>: Apr 28 11:29PM +0100

On 28/04/2021 23:18, Cetin Aslantepe wrote:
 
> About any help I would be grateful.
 
Sure. I would like to help but it is beyond my pay grade.  Sorry about this.
 
I can create DLLs and all that in C and C++, but binding the functions in the GUI environment is not something I do in C or C++. I am mainly a C# programmer, where the GUI side is much better IMO. Also, there are many videos about C# on YouTube dealing with the visual aspects of programs.
Christian Gollwitzer <auriocus@gmx.de>: Apr 28 08:03AM +0200

Am 26.04.21 um 21:07 schrieb wij:
> Then, "my problem" is the concept of multi-paradigm and C++ standard practiced
> by usual programmers contradict each other if C++ standard library dictates its
> usage only a bit more. [...]
 
I think you simply do not like C++ and like C more - and that is OK, it
is your opinion. De gustibus non est disputandum.
 
Most people in this group simply have another opinion.
 
Christian
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Apr 28 12:02AM -0700

On 4/17/2021 10:12 AM, Manfred wrote:
 
> As second, I agree with Jacob that a text editor is a simple tool, but
> it is not easy to make expecially for a beginner.
 
> [snip]
 
Wrt using GC as the everlasting crutch:
 
https://youtu.be/ao-Sahfy7Hg
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Apr 28 12:32AM -0700

On 4/23/2021 4:10 AM, wij wrote:
 
>> Technically sound, I am convinced now...
 
>> ... that you have nothing to say
 
> Let it burn. I like to play fire, safely.
 
;^)
 
The dragon in the clip I posted is pretty damn pissed off! ;^)
 
wij <wyniijj@gmail.com>: Apr 28 11:04AM -0700

On Wednesday, 28 April 2021 at 15:33:05 UTC+8, Chris M. Thomasson wrote:
 
> > Let it burn. I like to play fire, safely.
 
> ;^)
 
> The dragon in the clip I posted is pretty damn pissed off! ;^)
 
Yop, that's what the director wants the audience to see.
 
This song (lyrics, at least) is from Book of Songs (very old)
https://www.youtube.com/watch?v=9QVTcv6geHQ
 
"Book of Songs"==Classic_of Poetry==Lyrics
https://en.wikipedia.org/wiki/Classic_of_Poetry
 
David Brown <david.brown@hesbynett.no>: Apr 28 09:01AM +0200

On 27/04/2021 18:31, jacobnavia wrote:
 
> https://www.tiobe.com/tiobe-index/
 
> SOME companies must be using it, don't you think so?
 
> C++ comes 4th.
 
The TIOBE index is well-known for being no more than a vague indication
of the popularity of a language - it is certainly not the ranking some
people would believe.
 
But let us assume that C is significantly more used than C++ - it is not
an unreasonable assumption, regardless of TIOBE.
 
The question asked was "what companies want to reduce C++ by using more
C?" It was not "what companies use C?".
 
In my experience, companies and developers sometimes move from C to C++.
They rarely move back. But if you know differently, tell us.
Manfred <noname@add.invalid>: Apr 28 02:54PM +0200

On 4/28/2021 9:01 AM, David Brown wrote:
> C?" It was not "what companies use C?".
 
> In my experience, companies and developers sometimes move from C to C++.
> They rarely move back. But if you know differently, tell us.
 
Agreed that the TIOBE index should be taken with a grain of salt
(possibly even with a few pounds of it)
However, the /trend/ shows that C has been rising recently, unlike C++ -
while in the long run C++ shows a more significant decrease, vs C being
pretty much stable.
If this were a /usage/ index it would say something about your point -
but the fact still is that TIOBE is a popularity index: it measures how much
people /talk/ about some programming language, which is hardly coupled
with any actual professional usage, IMHO (I'm no expert in
marketing/social research).
"Öö Tiib" <ootiib@hot.ee>: Apr 28 09:57AM -0700

> > wants to migrate/translate/switch from C++ product/code base to any of those
> > now and why as I know outright totally none cases.
> Who? Who can say that specifically except themselves?
 
I specifically asked for a quote or citation from a concrete "themselves", as technology
companies say such kinds of things (like their trend in choice of technology stacks)
about themselves quite often. If it were something noteworthy like moving to C
or Fortran, then I would expect a similar amount of noise to what people make about
themselves (and their kids) being vegetarians. It is because everybody wants to
talk about themselves and to find and to cooperate with others who share
their attitude.
"Öö Tiib" <ootiib@hot.ee>: Apr 27 06:14PM -0700

On Tuesday, 27 April 2021 at 17:57:17 UTC+3, Paavo Helde wrote:
> makes e.g. a million of such comparisons, then with large substrings it
> would slow down from 0.004 s to 0.04 s. I bet many people won't be
> overly concerned about this.
 
The focus has long since moved away from considering, each time we solve some
problem, whether the result is the most efficient. In most cases what matters is
whether it works correctly for all input and whether it is easy to understand what is
going on. Efficiency matters only for about 5% of the code base.
For example, if we need to sort a very small array of fixed size, then in 95%
of cases it is not worth digging out code that implements one of the
mathematically proven optimal sorts from Donald Knuth's "The Art of
Computer Programming" volume 3 (written in the sixties); it is better to just use std::sort.
Andrey Tarasevich <andreytarasevich@hotmail.com>: Apr 27 07:57PM -0700

On 4/27/2021 5:07 AM, Juha Nieminen wrote:
> complicated to write or use, or would require several lines of code,
> or something.
> ...
 
In modern C++ the original comparison is better done through a
`std::string_view`
 
if (std::string_view(str1).substr(n, 5) == str2)
 
This is also an in-place comparison with zero actual run-time overhead,
even though conceptually it looks like a creation of an extra
[lightweight] temporary.
 
Tri-state comparisons like `std::string::compare` have their own
important role (which is why we now have built-in tri-state comparisons
through `<=>`), but the version you are proposing doesn't exactly look
too elegant when invoked for the purpose of a plain equality comparison.
Might be one of the reasons people would consciously avoid it in favor
of the original `.substr` version, despite the considerable extra
run-time cost.
 
Before the `std::string_view` era I would also opt to use
`std::string::compare` in this context for the same reasons you
mentioned. But, again, some people might stick to an expensive `.substr`
version in non-critical code just for its stylistic elegance.
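 
For comparison, the three forms discussed in this subthread side by side (a
sketch only; the names mirror the earlier example and C++17 is assumed for
`std::string_view`):
 
#include <cstddef>
#include <string>
#include <string_view>
 
bool eq_substr(const std::string& s1, const std::string& s2, std::size_t n)
{ return s1.substr(n, 5) == s2; }                    // allocates a temporary string
 
bool eq_compare(const std::string& s1, const std::string& s2, std::size_t n)
{ return s1.compare(n, 5, s2) == 0; }                // tri-state result, no allocation
 
bool eq_view(const std::string& s1, const std::string& s2, std::size_t n)
{ return std::string_view(s1).substr(n, 5) == s2; }  // no allocation, plain equality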
 
--
Best regards,
Andrey Tarasevich
wij <wyniijj@gmail.com>: Apr 27 09:15PM -0700

On Tuesday, 27 April 2021 at 20:07:40 UTC+8, Juha Nieminen wrote:
 
> For example, I am currently dealing with C++ code that does a lot of
> things like:
 
> if(str1.substr(n, 5) == str2)
 
With my library, it would be: if(str1.cseg(n,5)==str2.cseg())
With QString, there may be more such examples.
In quite a few cases, using some kind of XX::vect<char> is efficient.
 
 
> std::string is an awesome tool that makes one's life a thousand times
> easier, but any competent C++ programmer should learn how to use it
> *efficiently*, not just lazily. Know your tools. Use them efficiently.
 
There are lots of examples where using a C string is more efficient and convenient
than wrapping it in std::string, which is a sort of OO twist.
The basic tool is still C. Yes, know your tools. Use them efficiently.
Juha Nieminen <nospam@thanks.invalid>: Apr 28 05:21AM

> problem if the result is most efficient. On most cases it matters
> if it works correctly for all input and if it is easy to understand what is
> going on. Efficiency matters only for about 5% of code base.
 
I don't see a problem in choosing the more efficient solution, especially
if both solutions are approximately of the same complexity.
 
The problem with thinking like "this is just like a millisecond slower,
it doesn't really matter" is that when an entire huge program is full of
such compromises, they stack up, and may add up to a significant slowdown.
Multiply that millisecond by a thousand instances of doing the same thing,
and suddenly your program takes a second to do something that it could
be doing in a hundredth of a second. It will start feeling sluggish instead
of responding immediately. It may be slow to react to things. If the
program is being constantly run, doing that same thing over and over,
suddenly it might take a minute to do something that it could be doing
in one second.
 
Then you wonder why programs seem to be getting slower and slower,
even though hardware is getting faster and faster. It's exactly because of
this kind of attitude by programmers. It's precisely because of the
"efficiency only matters only for about 5% of code base". Because of the
"it's just like a millisecond slower, it doesn't matter."
Paavo Helde <myfirstname@osa.pri.ee>: Apr 28 11:13AM +0300

28.04.2021 07:15 wij kirjutas:
 
> There are lot examples using C-string is more efficient and convenient
> than wrapped in std::string, sort of OO twisted.
> The basic tool is still C. Yes, Know your tools. Use them efficiently.
 
There are lots of examples where they are equally efficient and equally
(in-)convenient, like here with strcmp() and std::string::compare().
 
And there are lots of examples where using C strings is less efficient
(constantly recalculating the string length, as it is not stored
anywhere) and/or less convenient. So indeed, know your tools and use
them efficiently.
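 
A small illustration of the length point (a sketch only):
 
#include <cstring>
#include <string>
 
bool same_length(const char* cstr, const std::string& str)
{
    return std::strlen(cstr) == str.size();   // strlen(): walks the bytes, O(length)
}                                             // size():   returns a stored count, O(1)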
"Öö Tiib" <ootiib@hot.ee>: Apr 28 09:09AM -0700

On Wednesday, 28 April 2021 at 08:21:47 UTC+3, Juha Nieminen wrote:
> > going on. Efficiency matters only for about 5% of code base.
> I don't see a problem in choosing the more efficient solution, especially
> if both solutions are approximately of the same complexity.
 
I don't see any problem either way, as both seemed equally error-prone and used the
same magic number 5 ... I only thought, wtf is that 5?
 
> Multiply that millisecond by a thousand instances of doing the same thing,
> and suddenly your program takes a second to do something that it could
> be doing in a hundredth of a second.
 
Nope. Paavo said that the difference was 4 nanoseconds.
Multiplying 4 nanoseconds by a thousand results in 4 microseconds.
You need to multiply by 250 000 to even reach that millisecond.
There are only rare entity types in programs whose counts ever reach
such numbers, and even with those you have won only that millisecond.

> program is being constantly run, doing that same thing over and over,
> suddenly it might take a minute to do something that it could be doing
> in one second.
 
Have you ever used a profiler on a large product? It never runs all of
its code base over and over. Most code is run next to never and so must
be left exactly as inefficient as it was written. Changing it can just
cause more defects in rarely verified places and so waste time on
doing something meaningless and money on funding something
counterproductive. Meanwhile a lot (perhaps the majority) of programs can
be made orders of magnitude faster overall by concentrating the effort
of choosing optimal algorithms on those few places where it matters,
because those places are run over and over.

> this kind of attitude by programmers. It's precisely because of the
> "efficiency only matters only for about 5% of code base". Because of the
> "it's just like a millisecond slower, it doesn't matter."
 
Nonsense, I don't wonder; I have decades of evidence that the attitude
is correct, and that aiming for optimality when the focus should be on robustness
and correctness is worthless and counterproductive. The projects do
not lag behind, get nowhere or fail, precisely because we do not tinker with
things that will never matter. And so there will be time to profile it and
to optimize.

Tuesday, April 27, 2021

Digest for comp.lang.c++@googlegroups.com - 4 updates in 1 topic

Kaz Kylheku <563-365-8930@kylheku.com>: Apr 27 05:13PM


> There were no typedefs back then (at least I don't remember them that
> far back, and they don't appear in either Ritchie's reference manual nor
> Kernighan's tutorial).
 
The preprocessor was used for type definitions; that's why FILE is capitalized.
 
#define FILE struct _iobuf
 
or something like that. The Indian Hill naming (typedefs are all caps)
that spilled into MS Windows is probably inspired by that.
 
 
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Lew Pitcher <lew.pitcher@digitalfreehold.ca>: Apr 27 05:28PM

On Tue, 27 Apr 2021 10:55:37 -0600, Joe Pfeiffer wrote:
 
 
> There were no typedefs back then (at least I don't remember them that
> far back, and they don't appear in either Ritchie's reference manual nor
> Kernighan's tutorial).
 
FWIW, my copy of K&R (Copyright 1978) has a whole section (Chapter 6.9) on typedefs.
The "C Programming Language - Reference Manual" (by Dennis M. Ritchie), published as
Appendix A of that edition of K&R, also includes a section (8.8 Typedef) on typedef,
along with the BNF for it in section 18.2 ("Declarations").
 
 
--
Lew Pitcher
"In Skills, We Trust"
scott@slp53.sl.home (Scott Lurndal): Apr 27 07:27PM


>There were no typedefs back then (at least I don't remember them that
>far back, and they don't appear in either Ritchie's reference manual nor
>Kernighan's tutorial).
 
At the time of the SVR4 conversion in 1989, typedef was part of the
language, and pid_t, gid_t, and uid_t were used subsequently for
those purposes respectively. In addition, all API's were modified
to use those abstract types both in SVR4 and via the SVID into
POSIX.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Apr 27 01:33PM -0700

> The "C Programming Language - Reference Manual" (by Dennis M. Richie) published as
> Appendix A of that edition of K&R also includes a section (8.8 Typedef) on typedef,
> along with the BNF for it in section 18.2 ("Declarations").
 
https://www.bell-labs.com/usr/dmr/www/cman.pdf (1975) doesn't mention typedef.
K&R1 (1978) does. That's as far as I've been able to narrow it down.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

Keith Thompson <Keith.S.Thompson+u@gmail.com>: Apr 26 04:57PM -0700

> Eli the Bearded <*@eli.users.panix.com> writes:
[...]
 
> *Lord* that one bit me hard early on. Having all struct members
> use a single namespace is a decision I still don't understand (I don't
> care how limited the machines running the compiler were).
 
As of the 1975 C manual, a "prefix.identifier" or "prefix->identifier"
expression *assumed* that the LHS was of the correct type. For "->",
it wasn't even required to be a pointer; it could be a pointer,
character, or integer.
 
K&R1 (1978) imposed the requirement for the LHS to be of the correct
struct or union type for "." or a pointer to the correct struct or union
type for "->".
 
If you ran into that problem, it must have been some very early code
and/or a very old compiler.
 
I've used a compiler (VAXC) that knew about the modern "+=" compound
assignment operators, but also accepted the older "=+" forms -- and
preferred them in ambiguous cases. That was in the late 1990s. That's
also a change that was made between 1975 and K&R1 in 1978. Fortunately,
though VAXC was available, we mostly used the more modern DECC.
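 
The classic ambiguity, as an illustrative fragment (not taken from any real
code base):
 
int x = 5;
x=-1;   /* pre-K&R1 compilers could parse this as "x =- 1", i.e. x = x - 1,  */
        /* leaving x == 4; modern C and C++ parse it as "x = -1".            */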
 
[...]
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Ben Bacarisse <ben.usenet@bsb.me.uk>: Apr 27 01:34AM +0100


>>The posted code can't be 50 years old. ed is 50 years old, but the
>>original was not written in C (there was no C in 1971).
 
> ed first appeared in V2. ed2.s and ed3.s
 
Oh. I found a man page dated 1971 in V1 Unix.
 
> https://minnie.tuhs.org/cgi-bin/utree.pl?file=V2/cmd/ed2.s
 
> ed.c first appeared in V6 (1975).
 
Yup. ed.c can't predate C!
 
 
--
Ben.
Kaz Kylheku <563-365-8930@kylheku.com>: Apr 27 12:46AM


> 1. It predated printf being called that. I think it was just using
> print(). And that used "-lS" in the Makefile.
 
> 2. Prototypes? Who needs em? ("#include <stdio.h>"? What, why?)
 
Version 7 Unix in 1979 had <stdio.h>, and fprintf and printf functions,
wrappers around an assembly language routine taking the stream as a
parameter. In version 5 (1974), printf was still a dedicated assembly
routine. It had the "f" in the name.
 
 
> struct {
> int p_xy; /* used to set/transfer entire plot */
> };
 
That was a feature of early C. Basically, no type checking. If you
wrote
 
ptr->member
 
then it would look up member in a global dictionary of *all* structure
members that have been declared in the translation unit, and not in the
*ptr type! And then it would just generate the machine code to access
the pointer relative to that offset.
 
This is part of the reason why Unix structure members have prefixes,
just like the convention you see in the code you found above.
 
E.g. "struct stat" has "st_mtime", "st_size" and so on.
 
That situation persists until today.
 
Another funny fact is that some early compilers used a static buffer for
returning structs instead of the stack. Oops, not safe w.r.t. signals or
threading.
 
> 5. Implicit in above: seems like there is the expectation that
> sizeof(int) == 2.
 
Newly written code today has assumptions like this. Tons of code written
for 16-bit systems (PCs with MS-DOS, for instance) was riddled with
sizeof(int) == 2 == 16 bits assumptions.
 
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Apr 26 09:42PM -0600

> type for "->".
 
> If you ran into that problem, it must have been some very early code
> and/or a very old compiler.
 
It wasn't an old compiler... at the time... it was roughly 1977 or so.
Juha Nieminen <nospam@thanks.invalid>: Apr 27 04:51AM

> 5. Implicit in above: seems like there is the expectation that
> sizeof(int) == 2.
 
That's actually a good point, which I didn't myself think of earlier.
 
Several arguments have been made in this thread that C is very "portable",
and that C code written in the 1970's still compiles and works just fine
(well, at least if you tell gcc/clang to use the C89 standard, hopefully).
 
However, the undetermined size of basic types (especially before
standardization) makes it more likely for C programs, especially ones
written back then, to be non-portable. Sure, even back when K&R first
"soft-standardized" the C language you shouldn't have assumed a certain
size for any basic type.
 
(I don't know if sizeof(char) was guaranteed to be 1 even since K&R,
but even then, and to this day, you can't really trust that it's actually
one 8-bit byte, only that the sizes of all other types are multiples of it.)
 
There were no fancy uint32_t and other such type aliases back then
(and not even in C89), so there wasn't really a sure way to have a basic
type of a particular size. (You can check if a basic type is of a given
size with #if, and try several of them to see if one of them is of
the desired size, and produce an #error if none of them are, but that's
as far as you could go. In fact, even today that's technically as far
as you can go, even with the possibly existing standard typedefs.)
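 
A minimal sketch of the <limits.h> approach mentioned in the parenthesis above
(the typedef name u32 is purely illustrative):
 
#include <limits.h>
 
#if UINT_MAX == 0xFFFFFFFF
typedef unsigned int u32;
#elif ULONG_MAX == 0xFFFFFFFF
typedef unsigned long u32;
#else
#error "no 32-bit unsigned type found"
#endif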
 
For most code it's enough for basic types to have a minimum size, but this
isn't always the case, and it's easy to write code that assumes a particular
size for such a type and breaks if it actually isn't of that size.
 
I suppose the conclusion is that maybe C code written in the 70's does
compile and work today... but only if it was properly written. If it
wasn't properly written, it's perfectly possible it won't work today,
in a different architecture than it was originally written for
(for example because it assumes the wrong size for 'int'.)
David Brown <david.brown@hesbynett.no>: Apr 27 10:59AM +0200

On 26/04/2021 22:49, jacobnavia wrote:
>> to invent unrealistic reasons.
 
> Sure sure, I am desperate. But you ignored the problem with templates...
> and many others I mentioned
 
I didn't ignore them, I just didn't mention them. There /are/ backwards
incompatible changes between C++03 and C++11 (and later versions). Some
of these will be in language or library features that might well occur
in real code. It would be perfectly reasonable to point out these
differences - especially if you can give examples or references to real
cases.
 
What is /unreasonable/ and shows desperation (perhaps people moving to
C++ is bad for your business) is listing every little point you can
think of, regardless of realism. It makes a mockery of your /real/ points.
scott@slp53.sl.home (Scott Lurndal): Apr 27 02:28PM


>Things I found tricky about the code:
 
>1. It predated printf being called that. I think it was just using
> print(). And that used "-lS" in the Makefile.
 
printf was printf from day one. Sounds like 'print' was provided
by an implementation library that wasn't included with the tar file you resurrected.
 
 
>2. Prototypes? Who needs em? ("#include <stdio.h>"? What, why?)
 
Prototypes were not used at the time, argument types were
'flexible', the programmer was expected to do the right thing.
 
> char p_x;
> char p_y;
> } p_plot[LIM_PLOTS];
 
Now this isn't a union. p_x and p_y occupy unique memory
locations.
 
But member names (MoS - Member of Structure) were top-level
symbol table names, so any member from any struct could be
used with any other struct, so:
 
> struct {
> int p_xy; /* used to set/transfer entire plot */
> };
 
p_xy has offset zero from the start of the structure; when software
uses p_xy (e.g. structpointer->p_xy), that zero offset allows
16-bit accesses to the two 8-bit fields in the p_plot struct.
 
 
>5. Implicit in above: seems like there is the expectation that
> sizeof(int) == 2.
 
Yes, for that particular application, it was assumed that
sizeof(int) == 2 * sizeof(char). Which was the case on the
PDP-11.
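 
In modern terms, the overlay the old code relied on is roughly what a union
expresses (a sketch only; note that reading the union member other than the one
last written is this kind of type punning, which C permits but C++ formally
leaves undefined):
 
struct plot {
    union {
        struct { char p_x; char p_y; } c;   /* the two 8-bit fields           */
        short p_xy;                         /* both fields as one 16-bit unit */
    } u;
};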
Juha Nieminen <nospam@thanks.invalid>: Apr 27 03:11PM

>>2. Prototypes? Who needs em? ("#include <stdio.h>"? What, why?)
 
> Prototypes were not used at the time, argument types were
> 'flexible', the programmer was expected to do the right thing.
 
I love the example given in the original 1978 The C Programming
Language, which demonstrates how non-int-returning standard
library functions ought to be used:
 
FILE *fopen(), *in;
in = fopen("name", "r");
 
Apparently back in those days <stdio.h> couldn't be expected
to declare the fopen function.
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Apr 27 10:03AM -0600

> written back then, to be non-portable. Sure, even back when K&R first
> "soft-standardized" the C language you shouldn't have assumed a certain
> size for any basic type.
 
Everybody "knew" longs were 32 bits, shorts and ints were both 16 bits,
chars were 8 bits and we coded under that assumption. Lots and lots of
code broke when we moved from 16 bit to 32 bit machines and the size of
an int changed as a result.
 
<snip>
 
> wasn't properly written, it's perfectly possible it won't work today,
> in a different architecture than it was originally written for
> (for example because it assumes the wrong size for 'int'.)
 
Correct (as those of us who are old enough learned in the 80s!).
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Apr 27 09:20AM -0700

> chars were 8 bits and we coded under that assumption. Lots and lots of
> code broke when we moved from 16 bit to 32 bit machines and the size of
> an int changed as a result.
 
Everybody who programmed on PDP-11s knew that. There were other
configurations at least as early as K&R1, 1978. But most programmers at
the time probably wouldn't have worked on more than one system.
 
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
jacobnavia <jacob@jacob.remcomp.fr>: Apr 27 06:31PM +0200

On 25/04/2021 15:19, Öö Tiib wrote:
> I do not see that point ... What companies want to reduce C++ by using more COBOL,
> FORTRAN, Ada, D or C? Can anyone point at such companies? Any cite?
 
Look. C is the most popular programming language, according to the TIOBE
index
 
https://www.tiobe.com/tiobe-index/
 
SOME companies must be using it, don't you think so?
 
C++ comes 4th.
scott@slp53.sl.home (Scott Lurndal): Apr 27 04:34PM

>chars were 8 bits and we coded under that assumption. Lots and lots of
>code broke when we moved from 16 bit to 32 bit machines and the size of
>an int changed as a result.
 
And, unfortunately, the types often were not abstracted behind
a typedef. The process identifier, group identifier and user
identifiers were all 16-bit as well. This made for a painful
conversion to 32-bit PID/GID/UID in SVR4 (accompanied by abstract
types pid_t, gid_t, uid_t which, when used correctly, mean that
applications only needed a simple recompile to handle a change in width).
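The pattern, in a minimal sketch (the names and widths here are made
up, not the actual SVR4 headers):

/* hypothetical system header, 16-bit era; for the 32-bit era you
   change this one typedef to 'int' and recompile: */
typedef short my_pid_t;

/* application code never spells out the width */
struct job { my_pid_t pid; };

int same_process(my_pid_t a, my_pid_t b) { return a == b; }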
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Apr 27 10:55AM -0600

> conversion to 32-bit PID/GID/UID in SVR4 (accompanied by abstract
> types pid_t, gid_t, uid_t which, when used correctly, mean that
> applications only needed a simple recompile to handle a change in width).
 
There were no typedefs back then (at least I don't remember them that
far back, and they don't appear in either Ritchie's reference manual or
Kernighan's tutorial).
Kaz Kylheku <563-365-8930@kylheku.com>: Apr 27 05:11PM

>>an int changed as a result.
 
> And, unfortunately, the types often were not abstracted behind
> a typedef.
 
Equally unfortunately, types were often abstracted behind bad typedefs.
 
:)
 
 
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Juha Nieminen <nospam@thanks.invalid>: Apr 27 12:07PM

Recently I posted a controversial thread where I recounted my recent
experience trying to tutor a beginner programmer who wanted to create
a small program that would read a file and allow the user to edit it,
like a simple text editor, and how frustrating it was trying to do
this in C due to the complexities of its memory management.
 
Because such accusations about C are heresy, I thought I would redeem
myself and cleanse my sins by commenting a bit on the flipside, based
on actual examples out there that I have had to deal with.
 
While trying to tutor that beginner, I was constantly thinking how
much easier the task would have been in C++, using std::string
(and std::vector, etc).
 
Indeed, std::string makes such things so much easier and simpler.
On the flipside, perhaps it makes things a bit *too* easy, as it
induces people to create inefficient code with it, perhaps due to
ignorance or laziness.
 
For example, I am currently dealing with C++ code that does a lot of
things like:
 
if(str1.substr(n, 5) == str2)
 
Mind you, this is in commercial production code done by a relatively
experienced C++ programmer, not some noob.
 
I'm constantly thinking like "sheesh, there's already a function that
does that exact comparison in-place, without creating any temporary
copies, avoiding a useless allocation, copying of data and deallocation."
Namely:
 
if(str1.compare(n, 5, str2) == 0)
 
In this particular case it's not like the more efficient version is more
complicated to write or use, or would require several lines of code,
or something.
 
There are lots and lots of other examples, related to std::string
(and std::wstring), such as using it for fixed-sized strings in situations
where a small inbuilt char array would suffice, always taking a std::string
as parameter when a const char* would suffice (or could well be
provided as a more efficient alternative) and so on and so forth.
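To make the parameter-passing point concrete (function names made up;
std::string_view is C++17, older code would pass a const char* plus a
length instead):

#include <string>
#include <string_view>

// Forces a std::string temporary (often a heap allocation) whenever
// it is called with a string literal or a char buffer:
bool starts_with_heavy(const std::string& s, const std::string& prefix)
{
    return s.compare(0, prefix.size(), prefix) == 0;
}

// Same job with no temporaries: string_view just refers to the
// caller's characters.
bool starts_with_light(std::string_view s, std::string_view prefix)
{
    return s.compare(0, prefix.size(), prefix) == 0;
}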
 
std::string is an awesome tool that makes one's life a thousand times
easier, but any competent C++ programmer should learn how to use it
*efficiently*, not just lazily. Know your tools. Use them efficiently.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 27 05:57PM +0300

On 27.04.2021 15:07, Juha Nieminen wrote:
> copies, avoiding a useless allocation, copying of data and deallocation."
> Namely:
 
> if(str1.compare(n, 5, str2) == 0)
 
I have wondered about this exact thing myself. So I made a little
test to check whether, and by how much, substr() actually slows things down.
 
It looks like with small strings, where SSO kicks in, comparing via
substr() is ca 2x slower than compare(), and with large strings
(needing a dynamic allocation) it is ca 10x slower.
 
OTOH, a single compare() is only something like 4 ns, so if a program
makes e.g. a million such comparisons, then with large substrings it
would slow down from 0.004 s to 0.04 s. I bet many people won't be
overly concerned about this.
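For reference, the test was of roughly this shape (a sketch after the
fact, not the exact code behind the numbers above):

#include <chrono>
#include <cstdio>
#include <string>

int main()
{
    const std::string s1(100, 'x');          // long enough to defeat SSO
    const std::string s2 = s1.substr(10, 50);
    const int N = 1000000;
    long hits = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        hits += (s1.substr(10, 50) == s2);      // temporary string each time
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        hits += (s1.compare(10, 50, s2) == 0);  // compares in place
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::printf("substr : %lld us\ncompare: %lld us\n(checksum %ld)\n",
                (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<us>(t2 - t1).count(),
                hits);
}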
James Lothian <jameslothian1@gmail.com>: Apr 27 12:28PM +0100

Andrey Tarasevich wrote:
>   {
>     typedef Unit<L1 + R1, L + R...> Result;
>   };
 
Thanks for this -- it's less horrible than the workaround I've been using.
 
James
TDH1978 <thedeerhunter1978@movie.uni>: Apr 26 09:10PM -0400

I have a database-related class (MyDB) where I would like to overload
the [] operator, so that the class looks and behaves like a map.
 
MyDB mydb; // key-value pairs, like a map
 
//
// read from database
//
X x = mydb[y];
 
//
// write to database (using proxy object)
//
mydb[y] = x;
 
 
Does anyone know how I can do this?
 
I tried using a proxy class for the 'write' operation, but when it came
time to print a database entry, the compiler would complain that it
does not know how to print the proxy object, when in fact it should be
printing 'x' from the 'read' operation:
 
cout << mydb[y] << endl; // compiler error; tries to print proxy
object and not 'x'.
 
 
I was hoping that the last 'cout' statement would be the same as:
 
cout << x << endl;
 
But the compiler had other ideas.
 
=================================
 
Below is some pseudo-code (does not compile due to incompleteness).
 
//
// Proxy for the MyDB class
//
class MyDBProxy_
{
public:
 
MyDBProxy_(MyDB* owner) : mydb_(owner) {}
 
void set_key(const Y& y)
{
y_ = y;
}
 
void operator=(const X& x)
{
mydb_->store_value_(y_, x); // internal database operation
}
 
private:
MyDB* mydb_;
Y y_;
};
 
 
//
// The actual database class
//
class MyDB
{
friend class MyDBProxy_;
 
public:
 
MyDB() : proxy_(this)
{
// internal database construction
}
 
//
// read operation: X x = mydb[y];
//
X operator[](const Y& y) const
{
return fetch_value_(y); // internal database operation
}
 
//
// write operation: mydb[y] = x;
//
MyDBProxy_& operator[](const Y& y)
{
proxy_.set_key(y);
 
return proxy_;
}
 
// definition of fetch_value_ here...
// definition of store_value_ here...
 
private:
 
MyDBProxy_ proxy_; // proxy object
};
Paavo Helde <myfirstname@osa.pri.ee>: Apr 27 02:02PM +0300

On 27.04.2021 04:10, TDH1978 wrote:
> printing 'x' from the 'read' operation:
 
>   cout << mydb[y] << endl;  // compiler error; tries to print proxy
> object and not 'x'.
 
You need to add a streaming operator for your proxy. Something like:
 
std::ostream& operator<<(std::ostream& os, const Proxy& proxy) {
os << proxy.get_value();
return os;
}
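Here get_value() is not in your original sketch; the assumption is that
the proxy can also forward the read back to MyDB, e.g. with a member
along the lines of:

X get_value() const { return mydb_->fetch_value_(y_); }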
Cholo Lennon <chololennon@hotmail.com>: Apr 26 09:35PM -0300


> I'm confused why you think the 2nd example is clearer or neater, requiring
> as it does a 2nd class instead of handling everything in 1. But everyone has a
> prefered style I suppose, its just a matter of taste.
 
I am just showing different use cases, I am not saying that the 1st
example is better than the 2nd one (or vice versa), but the latter is
very common, for example, in GUI applications... your Window/Dialog
class handles the events from the contained widgets. You can use free
functions as event handlers, but member functions have the context of the
parent class (of course, std::bind is very powerful so you can add the
context to your free functions if you want), e.g.:
 
 
class MyWindows: public Window {
Button okCancel;
EditBox name;
EditBox address;
 
SomeWindowContext context;
 
void onClick(Button& button) {
if (button.isCancel()) {
...
}
else {
...
}
}
 
void onFocus(EditBox& editBox) {
// Use window's context here
}
 
void onLostFocus(EditBox& editBox) {
// Use window's context here
}
 
void initWidgets() {
...
okCancel.onClick = std::bind(&MyWindows::onClick, this, _1);
 
// For simplicity, callbacks are reused for similar widgets
name.onFocus = std::bind(&MyWindows::onFocus, this, _1);
name.onLostFocus = std::bind(&MyWindows::onLostFocus, this, _1);
 
address.onFocus = std::bind(&MyWindows::onFocus, this, _1);
address.onLostFocus = std::bind(&MyWindows::onLostFocus, this, _1);
...
}
 
public:
MyWindows(...): ... {
initWidgets();
}
 
void show() {
...
}
 
};
 
 
C# uses the same scheme via delegates (the .Net std::function
counterpart). Java used to have a similar approach in Swing, but
anonymous classes were used to redirect a widget event to the parent
class callback. In modern C++ we can use a lambda to code the event or
to redirect it to the parent class. There are a lot of approaches, all
of them with pros and cons.
 
Usually, the initWidgets member function (initializeComponent/jbInit in
C#/Java) is managed by the GUI editor/IDE, so the event connection (as
well as the widget configuration) is automatic. In the old days of MFC,
a message map built with macros was the initWidgets equivalent. The map
was managed by the Class Wizard tool.
 
I use the past tense to talk about MFC because my last contact with it
was 16 years ago :-O but I think Class Wizard (and the message map)
still exist.
 
 
--
Cholo Lennon
Bs.As.
ARG
Bonita Montero <Bonita.Montero@gmail.com>: Apr 27 05:32AM +0200

>        name.onLostFocus = std::bind(&MyWindows::onLostFocus, this, _1);
>        address.onFocus = std::bind(&MyWindows::onFocus, this, _1);
>        address.onLostFocus = std::bind(&MyWindows::onLostFocus, this, _1);
 
You would never use bind()-objects themselves as callback types because
their types vary depending on the parameters. Usually you would
encapsulate them in a function object, which gives you the advantage
that you can have any types of parameters with them, e.g.:
name.onFocus = function<void(param_type)>( bind(
&MyWindows::onFocus, this, _1 ) );
Cholo Lennon <chololennon@hotmail.com>: Apr 27 12:46AM -0300

On 4/27/21 12:32 AM, Bonita Montero wrote:
> that you can have any types of paramters with them, f.e.:
>          name.onFocus = function<void(param_type)>( bind(
> &MyWindows::onFocus, this, _1 ) );
 
But from my previous post it should be clear that onFocus/onLostFocus/etc.
are declared as std::function. For the sake of simplicity I didn't add
the declaration/definition of these properties (because they belong to
the Button and EditBox classes, whose code I didn't show).
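A stripped-down, compilable version of the shape I have in mind (toy
widget types, not any real toolkit):

#include <functional>
#include <iostream>

struct EditBox {
    std::function<void(EditBox&)> onFocus;   // the "event" slot
    void focus() { if (onFocus) onFocus(*this); }
};

struct MyWindows {
    EditBox name;
    MyWindows() {
        // a lambda works just as well as std::bind here
        name.onFocus = [this](EditBox& e) { handleFocus(e); };
    }
    void handleFocus(EditBox&) { std::cout << "name got focus\n"; }
};

int main() {
    MyWindows w;
    w.name.focus();   // prints "name got focus"
}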
 
--
Cholo Lennon
Bs.As.
ARG
MrSpook_1xajf_a1kl@f5mda.co.uk: Apr 27 07:37AM

On Mon, 26 Apr 2021 21:35:13 -0300
>functions as event handlers, but member functions has the context of the
>parent class (of course, std::bind is very powerful so you can add the
>context to your free functions if you want), i.e:
 
Perhaps I'm just biased, but IMO if the answer is using std::bind() to link
an object method to a function pointer then you're asking the wrong
question. In simple code I'm sure it wouldn't be an issue, but in a complex
project that may be hacked about by dozens of people over the years, the
chances that the object gets deleted, leaving a dangling function pointer
elsewhere in the code, grow much higher.
 
>well as the widget configuration) is automatic. In the old days of MFC,
>a message map built with macros was the initWidget equivalence. The map
>was managed by the Class Wizard (*) tool.
 
A much safer and simpler method on an event is simply to call a callback
with the widget id. Event driven GUI code isn't speed critical, so if
safety and simplicity cost a few extra cpu cycles then it's a trade
worth making.
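Something along these lines (the ids and names are invented for the
sake of the example):

#include <cstdio>

enum WidgetId { ID_OK = 1, ID_CANCEL = 2 };

// one dumb entry point; no member-function pointers around to dangle
void on_click(WidgetId id)
{
    switch (id) {
    case ID_OK:     std::printf("ok\n");     break;
    case ID_CANCEL: std::printf("cancel\n"); break;
    default:        std::printf("unhandled widget %d\n", (int)id); break;
    }
}

int main() { on_click(ID_OK); }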
Juha Nieminen <nospam@thanks.invalid>: Apr 27 04:37AM

> refactoring anyway. Are you speaking of a real-life example? Dozens of
> different struct types that all include one another and none of which
> (originally) have dynamically allocated members?
 
But you yourself said that you don't automatically add manual constructors
and destructors to every struct you write, just in case.
 
I don't even understand what you mean by "well-designed code" then.
Is "well-designed" C code one where every struct has a constructor and
destructor function (that you always call for every single instance)?
Or is it something else?
Robert Latest <boblatest@yahoo.com>: Apr 27 07:22AM

["Followup-To:" header set to comp.lang.c.]
Juha Nieminen wrote:
>> (originally) have dynamically allocated members?
 
> But you yourself said that you don't automatically add manual constructors
> and destructors to every struct you write, just in case.
 
Of course not. When you design a struct type you should already know what it
is supposed to represent and whether the represented things lend themselves
to dynamic allocation. If so, better write construction / destruction
functions. It hardly adds a significant amount of code.
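A sketch of what I mean (C-style, the names are made up):

#include <stdlib.h>

/* the struct owns dynamically allocated memory, so it gets a matched
   pair of construction / destruction functions */
struct buffer {
    char *data;
    unsigned long len;
};

int buffer_init(struct buffer *b, unsigned long len)
{
    b->data = (char *)malloc(len);
    b->len  = b->data ? len : 0;
    return b->data != 0;
}

void buffer_destroy(struct buffer *b)
{
    free(b->data);
    b->data = 0;
    b->len  = 0;
}

int main(void)
{
    struct buffer b;
    if (buffer_init(&b, 128))
        buffer_destroy(&b);
    return 0;
}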
 
> "well-designed" C code one where every struct has a constructor and
> destructor function (that you always call for every single instance)? Or is
> it something else?
 
A certain amount of foresight is part of good planning. If you jump straight
into coding without paying attention to possible issues of extending and re-use
you may face some refactoring effort later. This of course goes for anything,
not just changing from static to dynamic allocation in C.
 
I suspect you once had to refactor some code with a few structs because you
forgot this planning and have now worked yourself into a fret about some
contrived example with "lots and lots" of struct types that need to be adapted.
I don't believe it's a real world case, but if it is, it's poorly planned.
 
--
robert
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.