Tuesday, June 25, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 7 topics

Real Troll <real.troll@trolls.com>: Jun 24 09:40PM -0400

On 24/06/2019 22:30, Queequeg wrote:
 
>> If you are going to reply to my posts, strip the attributions.
>> Otherwise I consider you as rude!!!!!!!!!!!!!1111
> Why is that, Bonita?
 
Because she doesn't want the evidence of her being a nincompoop
to remain available to readers, even after she is no longer alive. The bad
thing for her is that they can access it quickly and accurately because of
that thing called the Google Search Engine.
 
Does this answer your question?
Bonita Montero <Bonita.Montero@gmail.com>: Jun 25 08:08AM +0200


>> If you are going to reply to my posts, strip the attributions.
>> Otherwise I consider you as rude!!!!!!!!!!!!!1111
 
> *plonk*
 
LOL.
Bo Persson <bo@bo-persson.se>: Jun 25 10:43AM +0200

On 2019-06-24 at 21:25, Bonita Montero wrote:
>> but the standard says almost nothing about such issues.
 
> C and C++ allow recursions.
> And recursions aren't possible without a stack.
 
But it doesn't have to be a *hardware* stack.
 
C and C++ are used on systems with no dedicated stack pointer and where
a subroutine call doesn't store the return address in memory.
 
IBM mainframes are one important example.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 25 01:19PM +0200


>> C and C++ allow recursions.
>> And recursions aren't possible without a stack.
 
> But it doesn't have to be a *hardware* stack.
 
There's no such thing as a "hardware stack"; it's just memory, plus
special instructions that make it easier to push onto and pop from
the stack.
 
> C and C++ are used on systems with no dedicated stack pointer and
> where a subroutine call doesn't store the return address in memory.
 
At least on the next call level the instruction-pointer stored in
a register in the meantime will be pushed on something like a stack.
So there's really no way around a stack with full recursion capabilities.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 25 04:56AM -0700

On Tuesday, June 25, 2019 at 7:19:46 AM UTC-4, Bonita Montero wrote:
 
> There's no such thing as a "hardware stack"; it's just memory, plus
> special instructions that make it easier to push onto and pop from
> the stack.
 
That's what "hardware stack" means: memory that can be accessed by
special instructions for pushing and popping onto the stack. And it
isn't necessary for recursion.
 
> a register in the meantime will be pushed on something like a stack.
> So there's really no way around a stack with full recursion capabilities.
 
On some real life machines, each function call uses a different
dynamically allocated block of memory called activation records. Those
records are, as is typical of dynamically allocated memory, in no
guaranteed order, and certainly not required to be adjacent to each
other. Certainly the return address for each function call is stored
somewhere, but unlike a hardware stack, there's no particular guaranteed
relationship between the location where the return address of a called
function is stored, and where the return address for its calling
function is stored.
 
About the only thing these activation records share with a hardware
stack is Last In First Out (LIFO) semantics, as is implied by the
requirements of the C standard: since the lifetime of objects local to
a particular instance of a called function is a subset of the lifetime
of objects local to the calling function, the memory for the called
function must (modulo the as-if rule) be allocated after, and
deallocated before, the memory for the calling function. A hardware
stack is one way to implement LIFO semantics, but it's not the only
way. Those LIFO semantics mean that activation records could be
described as a logical stack, but they certainly do not qualify as
forming a hardware stack.
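As an illustration of the distinction drawn above (a sketch, not code from
any implementation mentioned in the thread), recursion can be driven by
dynamically allocated activation records that are merely linked to their
callers; only the LIFO lifetimes matter, not where the records sit in memory:

// Sketch: factorial computed with heap-allocated activation records
// linked to their callers instead of a contiguous hardware stack.
// The records may land anywhere the allocator chooses; the caller
// links alone impose the LIFO ordering.
#include <iostream>

struct Frame {
    unsigned long n;      // "parameter" of this activation
    Frame* caller;        // link back to the calling activation
};

unsigned long factorial(unsigned long n) {
    Frame* top = new Frame{n, nullptr};       // outermost activation
    // "Call" phase: one record per recursive activation.
    while (top->n > 1)
        top = new Frame{top->n - 1, top};     // callee record, wherever new() puts it
    // "Return" phase: records die in LIFO order, callee before caller.
    unsigned long result = 1;
    while (top) {
        result *= top->n;
        Frame* caller = top->caller;
        delete top;
        top = caller;
    }
    return result;
}

int main() {
    std::cout << factorial(10) << '\n';       // 3628800
}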
Bart <bc@freeuk.com>: Jun 25 01:00PM +0100

On 25/06/2019 12:19, Bonita Montero wrote:
 
> There's no such thing as a "hardware stack"; it's just memory, plus
> special instructions that make it easier to push onto and pop from
> the stack.
 
But a hardware stack (even if you define it as one using hardware
support) is very commonly used. It has to be, to avoid inefficiency.
 
And hardware stacks (like the ones on x86 and ARM) have certain
characteristics, one of which is not dealing gracefully with overflows.
 
Bonita Montero <Bonita.Montero@gmail.com>: Jun 25 02:30PM +0200

> On some real life machines, each function call uses a different
> dynamically allocated block of memory called activation records.
 
Whether a stack is a stack doesn't depend on the type of physical layout.
 
> relationship between the location where the return address of a called
> function is stored, and where the return address for its calling
> function is stored.
 
You can also call this a stack.
 
> way. Those LIFO semantics mean that activation records could be
> described as a logical stack, but they certainly do not qualify as
> forming a hardware stack.
 
You were the one to define a hardware stack as something that could
be only LIFO contiguous memory. And I wasn't discussing hardware stacks
but only stacks in general. And without some type of stack a C/C++
language implementation is impossible.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 25 02:34PM +0200

>> the stack.
 
> But a hardware stack (even if you define it as one using hardware
> support) is very commonly used. It has to be, to avoid inefficiency.
 
The notion of a "hardware stack" is just a phantasm, as any architecture
could support stacks made of physically contiguous memory. The only thing
that makes some architectures different is the added convenience of
call/ret and push/pop instructions.
 
> And hardware stacks (like the ones on x86 and ARM) have certain
> characteristics, one of which is not dealing gracefully with overflows.
 
That's not part of the stack support but of the support for virtual
memory (non-readable/non-writable pages).
scott@slp53.sl.home (Scott Lurndal): Jun 25 01:54PM

Bart wrote, and Bonita foolishly elided the attribution:
 
>> But a hardware stack (even if you define it as one using hardware
>> support) is very commonly used. It has to be, to avoid inefficiency.
 
>The notion of a "hardware stack" is just a phantasm, as any architecture
 
I refer you to the Burroughs B6500 and successors (still active as
Unisys Clearpath) and the Hewlett Packard HP-3000 for
examples of true hardware stacks.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 25 07:18AM -0700

On Tuesday, June 25, 2019 at 8:30:32 AM UTC-4, Bonita Montero wrote:
[Missing attribution line inserted, and deliberately falsely attributed to the person who should have inserted it in the first place:]
> On Tuesday, June 25, 2019 at 7:56:30 AM UTC-4, James Kuyper wrote:
> > On some real life machines, each function call uses a different
> > dynamically allocated block of memory called activation records.
...
> You can also call this a stack.
 
Oddly enough, I did.
 
> be only LIFO contiguous memory. And I wasn't discussing hardware stacks
> but only stacks in general. And without some type of stack a C/C++
> language implementation is impossible.
 
If you understood the distinction between logical stacks and hardware
stacks, why did my original comment:
 
> > Nothing. Almost all implementations of C++ use a stack structure to
> > store local variables, and many of them do so using a hardware stack,
> > but the standard says almost nothing about such issues.
 
provoke the following response from you:
> C and C++ allow recursions.
> And recursions aren't possible without a stack.
 
The stack structures I was referring to fully allow recursion. The only
justification for responding in that fashion is if you falsely believed
otherwise. Of course, there's no need to assume that your response was
justified - quite the contrary, based on prior evidence.
Juha Nieminen <nospam@thanks.invalid>: Jun 25 07:00AM

> they unearth artifacts from that time period with the names and
> events which all correlate to indicate specific things that didn't
> have physical evidence previously now do.
 
Your brain seems physically incapable of understanding why
"thing X described in the Bible really happened, therefore everything
the Bible says is true" is fallacious.
 
Alternatively, even if your brain *is* capable of understanding why
it's flawed thinking, you are incapable of acknowledging it. You seem
only capable of spouting apologetics like a brainwashed robot. You
are completely incapable of having an actual discussion.
gazelle@shell.xmission.com (Kenny McCormack): Jun 25 07:26AM

In article <a4858035-8cd2-4fb0-8e9f-d4882efb6095@googlegroups.com>,
 
>Could I ask if you have a university degree, and if so in what subject? The
>reason that I'm interested is that you write as if you were uneducated, and
>had no facility for critical thinking.
 
It's the Trump phenomenon: When you don't know anything about anything, you
might as well believe that you know everything about everything. You have
nothing to calibrate on.
 
This describes well both our whacky president and our whacky religious
nutcase Rick.
 
--
I love the poorly educated.
rick.c.hodgin@gmail.com: Jun 25 04:15AM -0700

On Tuesday, June 25, 2019 at 3:00:25 AM UTC-4, Juha Nieminen wrote:
 
> Your brain seems physically incapable of understanding why
> "thing X described in the Bible really happened, therefore everything
> the Bible says is true" is fallacious.
 
I never said that. I said, "More Biblical proof."
 
A new thing was found by archaeologists. It provides additional
evidence on the Biblical account. That physical evidence didn't
exist before. Now it does.
 
> ... You seem
> only capable of spouting apologetics like a brainwashed robot. You
> are completely incapable of having an actual discussion.
 
I have lots of discussions. They sometimes involve the topic of
God, of Jesus, of sin, of salvation by Jesus' atoning work at the
cross.
 
One could argue that you are incapable of having a conversation
on subjects related to Biblical teachings, choosing instead to disparage
the people who do speak about the most influential book in human
history. It's almost as if you're hiding from its true teachings,
choosing to keep true knowledge of its teachings outside of your
grasp by purposefully mocking and attacking others who bring up
real facts about Biblical teaching, correlated to real-world observed
events.
 
You seem quite hostile to the things of God, Juha. They seem to
get under your skin and drive you from within. Why is that I wonder?
 
--
Rick C. Hodgin
rick.c.hodgin@gmail.com: Jun 25 04:17AM -0700

On Tuesday, June 25, 2019 at 3:26:20 AM UTC-4, Kenny McCormack wrote:
> nothing to calibrate on.
 
> This describes well both our whacky president and our whacky religious
> nutcase Rick.
 
 
K-k-k-kenny.
 
https://www.youtube.com/watch?v=tzY7qQFij_M
 
--
Rick C. Hodgin
Daniel <danielaparker@gmail.com>: Jun 25 05:17AM -0700


> I have lots of discussions.
 
Factually, I think that's incorrect. You believe one, that you are right and
everybody else is wrong, and two, that you have to make them see that they
are wrong. Moreover, you have to make them see that they are wrong using two
arguments, one, it's in the bible so it must be true, and two, it was
revealed to you personally so it must be true. This isn't how educated
people have discussions, including educated people that have religious
convictions.

Be well,
Daniel
rick.c.hodgin@gmail.com: Jun 25 05:22AM -0700

On Tuesday, June 25, 2019 at 8:17:55 AM UTC-4, Daniel wrote:
 
> > I have lots of discussions.
 
> Factually, I think that's incorrect. You believe one, that you are right and
> everybody else is wrong,
 
 
I tell people in my posts not to believe me, not to trust me, and
to not believe or trust any person, but to go to the Bible for
themselves and see with their own eyes if what I post is true.
 
People can be deceived, misguided, believe something that's
incorrect, but we are each personally accountable unto God based on
what HE HAS REVEALED to us.
 
It's personal, Daniel. One-on-one with God. It doesn't involve
a third party or middle-man. I simply point you to Him, because
it is in HIM you have salvation, and in HIM you will learn the
truth of all things.
 
--
Rick C. Hodgin
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 25 02:13PM +0100

> a third party or middle-man. I simply point you to Him, because
> it is in HIM you have salvation, and in HIM you will learn the
> truth of all things.
 
obtuse, adj.
Pronunciation: Brit. /əbˈtjuːs/, /ɒbˈtjuːs/, /əbˈtʃuːs/, /ɒbˈtʃuːs/,
U.S. /əbˈt(j)us/, /ɑbˈt(j)us/
Origin: A borrowing from Latin. Etymon: Latin obtūsus.
Etymology: < classical Latin obtūsus blunt, dull, stupid, (of an angle)
greater than 90 degrees and less than 180 degrees, use as adjective of
past participle of obtundere obtund v. Compare Middle French, French obtus
, obtuse blunt (c1370 in Chauliac; compare quot. ?a1425 at sense 1a),
dull, stupid (1532), (of an angle) greater than 90 degrees and less than
180 degrees (1542), indistinctly perceived (c1550 in Paré).
With obtuse-lobate (see Special uses 2) compare earlier obtusilobous adj.
at obtusi- comb. form .
1.
 
a. Chiefly Botany and Zoology. Not sharp or pointed, blunt.
?a1425 tr. Guy de Chauliac Grande Chirurgie (N.Y. Acad. Med.) f. 78v
Þo [instruments]..he calleþ ciryngathoma, bycopez, curue, suple, & obtuse,
i. blont [?c1425 Paris dulle; L. obtusos], byhynde & at þe ende & not sharp.
1589 G. Puttenham Arte Eng. Poesie ii. xi. 84 Such shape as might not
be sharpe..to passe as an angle, nor so large or obtuse as might not essay
some issue out with one part moe then other as the rounde.
1657 S. Purchas Theatre Flying-insects 6 Their tails are somewhat
sharp (the Drones more obtuse).
1660 R. Boyle New Exper. Physico-mechanicall xxxix. 322 An Oval (1858)
Glass..with a short Neck at the obtuser end.
1753 Chambers's Cycl. Suppl. at Leaf Obtuse Leaf, one terminated by
the segment of a circle.
1767 B. Gooch Pract. Treat. Wounds I. 237 A blow with an obtuse weapon.
1806 Philos. Trans. (Royal Soc.) 96 427 This socket..supports the
whole weight of the moveable part of the instrument, which revolves on an
obtuse point at the bottom.
?1877 F. E. Hulme Familiar Wild Flowers I. Summary p. viii Spur stout,
obtuse.
1961 J. Stubblefield Davies's Introd. Palaeontol. (ed. 3) v. 130 The
marginal border of the cephalon is drawn out into an obtuse point in front.
1997 Jrnl. Ecol. 85 531/1 Leaves..oblong-ovate to lanceolate, cordate
at base, crenate-serrate, apex acute or obtuse.
 
 
 
b. Geometry. Of a plane angle: greater than 90 degrees and less than 180
degrees. Frequently in obtuse angle.
1570 H. Billingsley tr. Euclid Elements Geom. i. f. 2v An obtuse angle
is that which is greater then a right angle.
1633 P. Fletcher Purple Island iii. xxi. 34 Into two obtuser angles
bended.
1701 N. Grew Cosmol. Sacra ii. v. §18 All Salts are Angular; with
Obtuse, Right, or Acute Angles.
1790 Nat. Hist. in J. White Jrnl. Voy. New S. Wales App. 283 Their
base is a triangle of the scalenus kind, or having one angle obtuse and
two acute.
1879 E. P. Wright Animal Life 6 This bone forms an obtuse angle with
the pelvis.
1972 M. Kline Math. Thought xii. 239 The negative values of the cosine
and tangent functions for obtuse angles.
1991 Choice Mar. 77/3 Most restaurants now have low chairs..with the
backs at an acute angle, compressing the stomach, whereas they should be
at right angles and preferably an obtuse angle.
 
 
2. figurative.
 
a. Annoyingly unperceptive or slow to understand; stupid; insensitive.
Also, of a remark, action, etc.: exhibiting dullness, stupidity or
insensitivity; clumsy, unsubtle. Formerly also: †rough, unpolished; =
blunt adj. 4 (obsolete rare).
1509 S. Hawes Pastime of Pleasure (1845) xiii. 113 I am but yonge, it
is to me obtuse Of these maters to presume to endyte.
a1586 Sir P. Sidney Lady of May in Arcadia (1598) sig. Bbb5v Thus must
I vniforme my speech to your obtuse conceptions.
1602 J. Marston Antonios Reuenge i. iii. sig. B2 I scorne to retort
the obtuse ieast of a foole.
1606 W. Warner Continuance Albions Eng. xvi. civ. 408 Obtuse in phrase.
1667 Milton Paradise Lost xi. 541 Thy Senses then Obtuse, all taste of
pleasure must forgoe.
1792 M. Wollstonecraft Vindic. Rights Woman iii. 107 If the faculties
are not sharpened by necessity, they must remain obtuse.
1829 Scott Anne of Geierstein I. ii. 41 Obtuse in his understanding,
but kind and faithful in his disposition.
1885 M. Blind Tarantella I. xi. 121 We were too obtuse to understand
their peculiar way of manifesting it.
1915 W. S. Maugham Of Human Bondage cxi. 589 He remembered with what a
callous selfishness his uncle had treated her, how obtuse he had been to
her humble, devoted love.
1952 H. E. Bates Love for Lydia ii. iii. 121 Perhaps the sisters were
not, after all, as obtuse as they sometimes seemed.
1992 Daily Tel. (BNC) 5 Apr. 13 Kohl..will have to live with a
politically obtuse gesture that is being compared to his appearance with
American President Ronald Reagan [etc.].
1999 SL (Cape Town) June 144 (advt.) I love being obtuse. Obtuse is
my middle name.
 
 
 
†b. Not acutely affecting the senses; indistinctly felt or perceived;
dull. Obsolete.
1620 T. Venner Via Recta ii. 31 The wine..carrieth the same, which
otherwise is of an obtuse operation, vnto all the parts [of the body].
1733 Swift Epist. to Lady 12 Bastings heavy, dry, obtuse.
1781 W. Cowper Hope 22 Pleasure is labour too, and tires as much,..By
repetition palled, by age obtuse.
1791 Philos. Trans. 1790 (Royal Soc.) 80 426 I..felt an obtuse
pain..in my stomach.
1897 T. C. Allbutt et al. Syst. Med. IV. 126 Pain, sharp or obtuse.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
David Brown <david.brown@hesbynett.no>: Jun 25 02:27PM +0200

>>> any more articles posted by David Brown, either in this or
>>> any other newsgroup.
 
>> Please explain why any of us should care about that?
 
I'll leave that one for Tim, if he wants to answer it.
 
> regular readers will know that the person David responded to didn't
> even read the post, and that David's replies may be wrong, yet not
> corrected, due to the outright shunning.
 
More than enough people read my posts - when I get something
significantly wrong, there is usually someone to correct it. I am glad
Usenet works that way.
 
> It also lets David know the influence he's having on people in case
> he cared about how negatively he affects other people, and wants to
> seek to change.
 
I do care what people think and how my posts affect them. This makes
Tim's post here particularly useless - he hasn't given any indication
/why/ he has decided to killfile me. Thus his comment says far more
about him than me.
 
Posts like Tim's are a strange thing - I don't really understand them.
I could understand Tim telling /me/ he won't read my posts, with a
justification of the decision. That would cause me to stop and think,
and question whether there was something I wanted to change in what I
write or how I write it. But he addressed the post to everyone else,
not me - and why should others care? It makes him sound like a
schoolyard social bully, trying to make others in his class ostracise me.
 
> I also stopped reading David's replies some time ago.
 
Yes, we all know that.
Horizon68 <horizon@horizon.com>: Jun 24 05:37PM -0700

Hello..
 
 
My scalable Adder is here..
 
As you have noticed, I have just posted my modified versions
of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in
parallel programs.
 
But I have just read the following about how to avoid race conditions
in parallel programming in most cases:
 
Here it is:
 
https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/
 
This is why I have invented my following powerful scalable Adder to help
you do the same as the above. Please take a look at its source code to
understand more; here it is:
 
https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
 
Other than that, about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
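As a minimal C++ sketch of the second approach (the names IntSet,
LockedUser and UnlockedUser are made up for illustration): the component
holds only per-instance state and takes no locks of its own, so each
consumer decides how to serialize access to its own instance:

// Sketch: a component with no global state and no internal locking.
#include <mutex>
#include <set>

class IntSet {
public:
    void insert(int v) { values_.insert(v); }
    bool contains(int v) const { return values_.count(v) != 0; }
private:
    std::set<int> values_;   // per-instance state only
};

// One consumer serializes its instance with its own mutex...
struct LockedUser {
    std::mutex m;
    IntSet set;
    void add(int v) { std::scoped_lock lock(m); set.insert(v); }
};

// ...another, single-threaded consumer skips locking entirely.
// The two never interfere, because the component shares nothing.
struct UnlockedUser {
    IntSet set;
    void add(int v) { set.insert(v); }
};

int main() {
    LockedUser a;
    UnlockedUser b;
    a.add(1);
    b.add(2);
}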
 
And about Message Passing Process Communication Model and Shared Memory
Process Communication Model:
 
An advantage of the shared memory model is that memory communication is
faster compared to the message passing model on the same machine.
 
However, the shared memory model may create problems such as synchronization
and memory protection that need to be addressed.
 
Message passing's major flaw is the inversion of control: it is the moral
equivalent of gotos in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI
application is significantly higher than that required to write a
shared-memory version of it.
 
Thank you,
Amine Moulay Ramdane.
Juha Nieminen <nospam@thanks.invalid>: Jun 25 07:02AM

Will you please stop posting the same thing over and over?
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 25 08:00AM -0400

On 6/25/19 3:02 AM, Juha Nieminen wrote:
> Will you please stop posting the same thing over and over?
 
You're far from being the first person to ask that - he doesn't care.
Ralf Goertz <me@myprovider.invalid>: Jun 25 10:05AM +0200

Hi,
 
I wrote a small program to create all distinct circular arrangements of
n_i objects from category i. If you have, say two A's and two B's, there
are two distinct ways to put them in a circular order:
 
AABB ABAB
 
The other four (linear) arrangements ABBA(1), BAAB(1), BABA(2), BBAA(1)
are only circular permutations of the two above (with the number in
parenthesis indicating of which). It is surprisingly difficult to come
up with a formula¹ for the number of such arrangements, at least in the
case where the greatest common divisor of all the n_i is greater than 1.
My idea for the program was to go over all linear arrangements (with the
first element fixed) and store those which were not circular permutations
of any of the previously stored arrangements. To test that I created a
string called tester which is basically the current linear permutation
twice. Let the test permutation be BAAB. Then tester would be BAABBAAB.
Now I can try to find all previously stored permutations within tester:
 
tester.find(i,1)
 
which is not string::npos if i=AABB. (I need to start only at position 1
because there can't be a match at 0. Likewise, I don't need the last
character of tester.) The complete program goes like this:
 
 
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <string>
#include <string_view>
 
using namespace std;
 
int main(int argc, char *argv[]) {
    string permuter;
    char c='A';
    for (auto i=1;i<argc;++i) {
        auto n=stoi(argv[i]);
        permuter.insert(permuter.end(),n,c++);
    }
    if (!permuter.size()) {
        cout<<0<<endl;
        return 0;
    }
    vector<string> result;
    size_t bin=1;
    do {
        string const tester=permuter+permuter.substr(0,permuter.size()-1); //*
        bool found(false);
        for (auto &i:result) {
            if (tester.find(i,1)!=string::npos) {
                found=true;
                break;
            }
        }
        if (!found) {
            result.push_back(permuter);
            if (result.size()==bin) {
                cout<<bin<<endl;
                bin<<=1;
            }
        }
    } while (next_permutation(permuter.begin()+1,permuter.end()));
    if (result.size()<1000) copy(result.begin(),result.end(),ostream_iterator<string>(cout,"\n"));
    cout<<result.size()<<endl;
}
 
In an attempt to optimize I tried (for the first time) to use
string_view instead of string on the line marked with the asterisk. I
tried it with the arguments "6 6" and was impressed by the fact that the
program now needed only 1.5 seconds instead of the 2.5 it needed with
string. But then I realised that the size of the result vector was 6709
and not the correct 2704 it was with string. I figured out that the
problem probably was the fact that the string_view was initialised from
a temporary. I tried to compile with "g++ -pedantic -Wall" but there was
no diagnostic message at all. Why? If that is not the problem then what
is?
 
As an aside: What find algorithm does basic_string::find() use? Does it
do something more sophisticated than the naïve way?
 
¹Clarke, L. E. and Singer, James: On circular permutations, The American
Mathematical Monthly (65), 1958, 609--610.
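The miscount with string_view is consistent with the view dangling: the
concatenation on the marked line produces a temporary std::string, the
string_view binds to that temporary, and the temporary is destroyed at the
end of the full expression, so the later find() reads freed storage. No
diagnostic is required for this, and g++ -pedantic -Wall does not normally
flag it. A self-contained sketch of the suspected problem:

// Sketch of the suspected bug: a std::string_view bound to a
// temporary std::string dangles once the full expression ends.
#include <iostream>
#include <string>
#include <string_view>

int main() {
    std::string permuter = "AABB";

    // Owning copy: well-defined.
    std::string const tester_ok =
        permuter + permuter.substr(0, permuter.size() - 1);

    // View of a temporary: the std::string produced by operator+ is
    // destroyed at the ';', so the view points at released storage.
    std::string_view tester_bad =
        permuter + permuter.substr(0, permuter.size() - 1);

    std::cout << tester_ok << '\n';    // prints AABBAAB
    std::cout << tester_bad << '\n';   // undefined behaviour (dangling view)
}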
Ralf Goertz <me@myprovider.invalid>: Jun 25 10:25AM +0200

Am Tue, 25 Jun 2019 10:05:10 +0200
 
> tried it with the arguments "6 6" and was impressed by the fact that
 
Oops, make that "9 9"
Horizon68 <horizon@horizon.com>: Jun 24 05:40PM -0700

Hello..
 
 
DelphiConcurrent and FreepascalConcurrent version 0.6
 
DelphiConcurrent and FreepascalConcurrent by Moualek Adlene are a new way
to build Delphi and Freepascal applications which involve parallel
executed code based on threads, such as application servers.
 
DelphiConcurrent and FreepascalConcurrent provide programmers with the
internal mechanisms to write safer multi-threaded code while taking
special care of performance and genericity.
 
In concurrent applications a DEADLOCK may occur when two or more threads
try to lock two or more consecutive shared resources but in a
different order. With DelphiConcurrent and FreepascalConcurrent, a
DEADLOCK is detected and automatically skipped - before it occurs - and
the programmer gets an explicit exception describing the multi-thread
problem instead of a blocking DEADLOCK which freezes the application with
no output log (and perhaps also the linked client sessions if we are
talking about an application server).
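In C++ terms, a minimal sketch (not DelphiConcurrent itself) of the
lock-ordering deadlock described above, and the standard-library way to
avoid it by acquiring both locks as a single operation:

// Sketch: two threads taking the same two locks in opposite order
// can deadlock; std::scoped_lock/std::lock acquire both at once.
#include <mutex>
#include <thread>

std::mutex a;
std::mutex b;

void risky_one() {
    std::lock_guard<std::mutex> la(a);
    std::lock_guard<std::mutex> lb(b);
}
void risky_two() {
    std::lock_guard<std::mutex> lb(b);   // opposite order:
    std::lock_guard<std::mutex> la(a);   // may deadlock against risky_one
}

void safe_one() { std::scoped_lock lk(a, b); }
void safe_two() { std::scoped_lock lk(b, a); }   // argument order doesn't matter

int main() {
    std::thread t1(safe_one), t2(safe_two);   // the risky pair may hang if run together
    t1.join();
    t2.join();
}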
 
Amine Moulay Ramdane has extended them with support for his scalable
RWLocks for Windows and Linux, which are starvation-free, with support
for his scalable lock called MLock for Windows and Linux, and he
has also added support for a Mutex for Windows and Linux; please
look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files
to understand more.
 
And please read the HTML file inside to learn more about how to use it.
 
Language: FPC Pascal v2.2.0+ / Delphi 5+: http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi XE-XE7 and Tokyo switch: -$H+ -DXE
 
You can configure it as follows from inside defines.inc file:
 
{$DEFINE CPU32} and {$DEFINE Windows32} for 32 bit systems
 
{$DEFINE CPU64} and {$DEFINE Windows64} for 64 bit systems
 
- Platform: Windows and Linux (x86)
 
 
You can download them from:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
 
Thank you,
Amine Moulay Ramdane.