
Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

Paavo Helde <myfirstname@osa.pri.ee>: Oct 16 07:57AM +0300

On 16.10.2019 0:59, Soviet_Mario wrote:
> One thing I really can't stand is the way C used to share a single
> variable (errno) to store status, that may be overwritten unpredictably
 
It's not a single variable, it's a single variable per thread.
 
Yes, it may be overwritten by something seemingly innocent like a
std::string construction, but this would be just "unexpectedly" (for
some of us), not "unpredictably".
 
Errno may only be overwritten "unpredictably" if there is a signal
handler which does not take care to store and restore it properly. But
this would be a bug in the signal handler.
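A minimal sketch of that save/restore discipline (POSIX-flavoured; the
handler body and the signal used are just placeholders):

#include <cerrno>
#include <csignal>
#include <unistd.h>

extern "C" void handler(int)
{
    int saved = errno;                 // save the interrupted code's errno
    write(STDERR_FILENO, "sig\n", 4);  // async-signal-safe work, may set errno
    errno = saved;                     // restore it before returning
}

int main()
{
    std::signal(SIGINT, handler);
    pause();                           // wait for a signal
}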
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 16 06:20AM

On Tue, 2019-10-15, Melzzzzz wrote:
> On 2019-10-15, Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk> wrote:
...
 
>> default, call abort()). Just calling abort() on its own without using
>> exceptions is a bit pants as there is no opportunity to log.
 
> What's stopping you from logging and then calling abort?
 
I think he means that if he throws, upper layers of code can decide
whether they want to log or not. On the other hand, if an upper layer
/does/ catch the exception, stack unwinding will happen and there will
be no interesting core dump even if it re-throws.
 
I think I would log and then throw, and then in higher layers not
catch that exception.
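A minimal sketch of that approach, with std::cerr standing in for a real
logger:

#include <iostream>
#include <stdexcept>

void lowLevel()
{
    std::cerr << "lowLevel failed, throwing\n";   // log at the throw site
    throw std::runtime_error("lowLevel failed");
}

int main()
{
    lowLevel();   // no handler anywhere: std::terminate() calls abort(),
                  // and most implementations skip unwinding, so the core
                  // dump still shows the throw site
}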
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
"Öö Tiib" <ootiib@hot.ee>: Oct 16 12:01AM -0700

On Wednesday, 16 October 2019 00:44:04 UTC+3, Mr Flibble wrote:
> > not been followed, byte sequence is illegal or arguments are
> > incorrect.
 
> Bad input data isn't a logic error.
 
std::logic_error means that the error is caused by a possibly
preventable logical inconsistency in the input. But how can
std::stoi() know whether the out-of-range or illegal input
comes from corrupt input data? Also, to prevent the
inconsistency, its caller would have to do most of what
stoi does itself.
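For reference, a small sketch of what std::stoi actually reports; both
exceptions derive from std::logic_error, whatever the source of the bad
text (parse() is just an illustrative wrapper):

#include <iostream>
#include <stdexcept>
#include <string>

int parse(const std::string& s)
{
    try {
        return std::stoi(s);
    } catch (const std::invalid_argument&) {    // no digits at all
        std::cerr << "no number in \"" << s << "\"\n";
    } catch (const std::out_of_range&) {        // doesn't fit an int
        std::cerr << "out of int range: \"" << s << "\"\n";
    }
    return 0;
}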
David Brown <david.brown@hesbynett.no>: Oct 16 10:16AM +0200

On 15/10/2019 23:59, Soviet_Mario wrote:
 
> mine is no more than a hobbyist tinkerer's opinion, but ...
> even if I surely won't object to performance issues, another (often
> contrasting) point of view would be readability and cross-usability of code
 
Readability is of course very important - and code correctness even more
so. However, I am not convinced that exceptions are a great correctness
feature. It is hard to write fully exception-safe code. And, worse, it
is easy to write code that /appears/ to be exception safe, but is not.
 
The two key problems, as I see it, are first that exceptions mean all
sorts of code (function calls and expressions) can fail unexpectedly at
any time, leading to early returns from your function. You have to be
thinking all the time about what will happen if there is an exception
when the code runs. This can mean making more RAII classes, or more
indirect references (via unique_ptr, for example), simply to ensure
cleanup in case functions throw an exception. That will make the code
safe and correct - but can carry a significant readability cost.
 
The other problem is that much of the information about which C++
exceptions a function can throw is basically a lie. You can /say/ that a
function only throws specific exception types, which would be very
useful, but nothing enforces it. At least C++11's "noexcept" is
stronger, and dynamic "throw" specifications are no longer part of C++17
at all. But there is currently no way to say which exceptions a function
really could throw, no way to check such claims at compile time, no
integration with the type system; and the default is for functions to be
specified as "this function could throw anything, or pass on any
exception" rather than the more sensible "this function won't throw - it
will do what it says it will do".
 
(Static exceptions will, hopefully, improve on this a lot.)
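A small sketch of the one part that /is/ checkable today: "noexcept" can
be queried at compile time, but nothing describes which exceptions a
throwing function may emit:

void safe() noexcept {}   // if it throws anyway, std::terminate is called
void unsafe() {}          // the default: may throw anything

static_assert(noexcept(safe()), "queryable at compile time");
static_assert(!noexcept(unsafe()), "but no way to list what it may throw");

int main() {}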
 
> I mean, often I feel the need of "consistent" (= UNIFORM) error
> management.
 
Consistency is good - but not if it is artificial consistency just for
consistency's sake.
 
> return types, and maybe it's more error prone.
 
> One thing I really can't stand is the way C used to share a single
> variable (errno) to store status, that may be overwritten unpredictably
 
Yes, errno is a very questionable idea. It made sense when it was
introduced, but it leads to many limitations. It does allow some useful
practices, however, such as doing a string of calculations and then
checking errno once at the end rather than after each calculation.
 
(It's not a single global any more - there is one errno per thread.)
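A small sketch of that check-once practice (whether the math functions
set errno at all depends on the implementation's math_errhandling):

#include <cerrno>
#include <cmath>
#include <cstdio>

double chain(double x)
{
    errno = 0;                                  // clear once
    double r = std::sqrt(x) + std::log(x) + std::exp(x);
    if (errno != 0)                             // check once at the end
        std::perror("chain");
    return r;
}

int main()
{
    chain(-1.0);   // sqrt/log of a negative value report EDOM
}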
queequeg@trust.no1 (Queequeg): Oct 16 12:52PM


>> Do you use out_of_range when there's a buffer overrun? It seems to be the closest
 
> I decided not to use exceptions in new programs...
 
What is the reason?
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
"Öö Tiib" <ootiib@hot.ee>: Oct 16 05:54AM -0700

On Wednesday, 16 October 2019 11:17:13 UTC+3, David Brown wrote:
> indirect references (via unique_ptr, for example), simply to ensure
> cleanup in case functions throw an exception. That will make the code
> safe and correct - but can be of significant readability cost.
 
Basically, to live with exceptions, all classes have to be RAII as a
bare minimum. Yes, unique_ptr is one of the handy components for
building (or in simpler cases typedef'ing) such a class, but I don't
understand what readability cost there is. Can you give an example of
what you meant?
 
> as "this function could throw anything, or pass on any exception" rather
> than the more sensible "this function won't throw - it will do what it
> says it will do".
 
The exception specifications are often a lie anyway ... or can be hacked
around. What do we see in a lot of bad code in languages where basically
everything may throw? Something like this (Java code, say):

public void foo() throws Exception {/* ... */} // WTF?

So there have to be tools and policies in place for using exceptions,
regardless of language. However, it is in no way better with checking
the error codes of function "success" returns. The latter just adds a
lot of required boilerplate code. Isn't that a major readability cost?
 
> management.
 
> Consistency is good - but not if it is artificial consistency just for
> consistency's sake.
 
Every good project has to establish a consistent project-wide
error-handling policy. It can probably be made company-wide when the
company works on closely related products in a narrow problem domain.
However, I cannot imagine programming-language-wide (or even
industry-wide) error-handling policies. Maybe with some different
programming language (like Rust).
 
> practices, however, such as doing a string of calculations and then
> checking for errno at the end rather than checking after each calculation.
 
> (It's not a single global any more - there is one errno per thread.)
 
errno is awful ... OTOH it seems to be the only semi-portable way in
C++ to figure out what went wrong with a lot of things (like with that
std::basic_fstream::open() and the like) when it matters.
queequeg@trust.no1 (Queequeg): Oct 16 01:00PM

> buffer overrun and the server detects it in time, then there is no
> reason to abort the web server. It could throw an exception to trigger
> the error handling which could e.g., close the connection with the client.
 
Exception doesn't seem to be a good choice here, at least for me.
Exceptions are meant to handle exceptional situations, not an invalid user
input, even if this input, if not validated, would cause a buffer overrun
or other critical error.
 
Your code can / should safely continue after an invalid user input, and
handle it appropriately (by signalling an error, terminating the
connection, logging it -- but it should be defined).
 
In my code, I use exceptions only to signal failed assertions and
situations that should never happen if the code were correct and the
machine had enough memory. The only way to recover from such an
exception is to log debugging information and gracefully die, not doing
any more harm.
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
Soviet_Mario <SovietMario@CCCP.MIR>: Oct 16 03:04PM +0200

On 16/10/2019 00:13, Scott Lurndal wrote:
>> overwritten unpredictably
 
> Can you elaborate? It was well understood and documented when errno could
> be modified (on any system or library call).
 
I must have expressed myself badly. "Unpredictably" was meant to say
that this variable can be modified asynchronously anywhere and at any
time (in a multithreaded context, or even in event-driven programs,
where a routine can be called while another call is in progress: the
frozen call, once revived, might find errno modified even right after
having tested it in the last instruction of its "local" flow).

And in that case, even knowing in advance that this may happen is
practically useless.
Sure: often this modification is not that relevant locally (an error
that occurred elsewhere will more likely not affect the resumption of
the code frozen before), but not always.
Consider an error generated on a shared resource, like a file/stream
and so on.
Routine A tests a file (probing errno after some operation) before
starting to work, and prepares to work on it.

Then an async event triggers routine B (with A being put on wait),
which generates an error on the same file, setting errno.
When resumed, routine A will wrongly assume that the file is not
corrupted, when actually it is.

Actually, even testing return values does not seem (to me) to protect
much against this.
Exceptions are more resilient, at the cost of speed, as they catch the
error on the fly, ex post, rather than test-and-act.
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Soviet_Mario <SovietMario@CCCP.MIR>: Oct 16 03:05PM +0200

On 16/10/2019 06:57, Paavo Helde wrote:
>> variable (errno) to store status, that may be overwritten
>> unpredictably
 
> It's not a single variable, it's a single variable per thread.
 
oh ! Sorry I was wrong then :\
 
 
> Errno may only be overwritten "unpredictably" if there is a
> signal handler which does not take care to store and restore
> it properly. But this would be a bug in the signal handler.
 
I have been in error :\
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
David Brown <david.brown@hesbynett.no>: Oct 16 04:12PM +0200

On 16/10/2019 14:54, Öö Tiib wrote:
> build (or on simpler case to typedef) such class but I don't
> understand what readability cost there is. Can you bring example
> what you meant?
 
I can try and give an outline of an example. Suppose you have code like
this, using a bool "success" return instead of exceptions:
 
bool success = tryThis();
doThisAnyway();
if (!success) return false;
continueProcessing();
return true;
 
How do you make that into an exception-based handling, with "tryThis"
throwing an exception on failure?
 
You can use a try block and re-throw:
 
try {
    tryThis();
} catch (...) {
    doThisAnyway();
    throw;   // re-throw the current exception
}
doThisAnyway();
continueProcessing();
 
You've duplicated code as well as reduced the clarity.
 
Or you can make a class:
 
struct DoThisAnyway_later {
    DoThisAnyway_later() {}
    ~DoThisAnyway_later() { doThisAnyway(); }
};

{
    DoThisAnyway_later doThisAnywayLater;   // note: with "()" this would
                                            // declare a function instead
    tryThis();
    // doThisAnyway called by destructor
}
continueProcessing();
 
Now you have re-arranged the order of the code, losing the correlation
between the textual order and the run-time order. (A general "scope
guard" template would reduce the code here, but not change the re-ordering.)
 
 
With explicit, manual control of the errors or unusual circumstances, it
can be easier to see the flow of the code.
 
And note that if you want to be sure that "doThisAnyway()" is done, and
"tryThis()" does not have a "noexcept" specifier, you have to have
something like these exception-safe codings just in case, because it
/might/ throw something. With a return-based solution, you know if it
is giving success feedback because it is in the return type (perhaps
something more sophisticated than a simple bool, such as a
std::optional, or a variant, or an "expected") - and you can mark it
with [[nodiscard]] to make sure the code checks the result.
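A small sketch of that return-based style, with a toy tryParse standing
in for a real operation:

#include <optional>

[[nodiscard]] std::optional<int> tryParse(const char* s)
{
    if (!s || *s < '0' || *s > '9')
        return std::nullopt;          // failure is part of the type
    return *s - '0';
}

int main()
{
    if (auto v = tryParse("7"))
        return *v;                    // success path
    // tryParse("x");                 // warning: [[nodiscard]] value ignored
    return -1;
}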
 
 
I am not saying that exceptions /always/ make code harder to read - far
from it. I am merely saying that /sometimes/ they do.
 
(To be clear here, my kind of C++ programming is usually done with
exception support disabled in the compiler - partly for worst-case
performance reasons, partly for clarity, partly due to limited support
for some targets, partly due to a mix of C and C++ code. This limits my
experience of C++ exceptions.)
 
> regardless of language. However it is not in any way better with checking
> error codes of function "success" returns. Latter just adds lot of
> required boilerplate code. Isn't that major readability cost?
 
Somewhere there is /always/ going to be a cost!
 
I think a lot of the readability cost of using function return values
for errors comes from the days of older C and older C compilers, where
you often had a mess of goto's in order to deal with errors underway
without having deeply nested "if" statements. Often you can get a lot
clearer code by simply dividing the code into several smaller functions,
with a style of "if (!success) return false;" as an exit. Before C99
"inline", and before compilers inlined single-use static functions
automatically, multiple small functions could quickly be costly in
efficiency.
 
And of course with C++ you have all the power of RAII, and tools like
std::unique_ptr - you don't need exceptions to use them. They apply
equally well to exiting functions with an early return.
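A small sketch of that: a unique_ptr with a custom deleter closing a C
file handle on every early return, no exceptions involved ("data.txt" is
just an example path):

#include <cstdio>
#include <memory>

bool process(const char* path)
{
    std::unique_ptr<std::FILE, int (*)(std::FILE*)>
        f(std::fopen(path, "r"), &std::fclose);
    if (!f)
        return false;                 // early exit, nothing to clean up
    char buf[256];
    if (!std::fgets(buf, sizeof buf, f.get()))
        return false;                 // early exit, file still closed
    // ... process buf ...
    return true;                      // normal exit, file still closed
}

int main()
{
    return process("data.txt") ? 0 : 1;
}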
 
> programming-language-wide (or even industry-wide) error
> handling policies. May be with some different programming
> language (like Rust).
 
Certainly there is no one "ideal for every case" solution here.
 
 
> The errno is awful ... OTOH it seems to be the only semi-portable way
> in C++ to figure out what went wrong with lot of things (like with that
> std::basic_fstream::open() and the like) when it matters.
 
I am not an errno fan myself. But I can understand why it was made, and
that it could be useful for some kinds of coding.
melzzzzz <mel@melzzzzz.com>: Oct 16 06:29PM +0200

> melzzzzz <mel@melzzzzz.com> wrote:
>>> Do you use out_of_range when there's a buffer overrun? It seems to
>>> be the closest
>> I decided not to use exceptions in new programs...
> What is the reason?
 
I started to use discriminated unions, as they are a better way of
handling errors...
there was recent discussion about that...
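A minimal sketch of this style, assuming std::variant as the
discriminated union (parsePort is just a made-up example):

#include <charconv>
#include <iostream>
#include <string>
#include <variant>

using ParseResult = std::variant<int, std::string>;  // value or error text

ParseResult parsePort(const std::string& s)
{
    int p = 0;
    auto [end, ec] = std::from_chars(s.data(), s.data() + s.size(), p);
    if (ec != std::errc() || end != s.data() + s.size())
        return "not a number: " + s;
    if (p < 1 || p > 65535)
        return "port out of range: " + s;
    return p;
}

int main()
{
    ParseResult r = parsePort("8080");
    if (auto* p = std::get_if<int>(&r))
        std::cout << "port " << *p << '\n';   // branch instead of catch
    else
        std::cout << std::get<std::string>(r) << '\n';
}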
--
Press any key to continue or any other to quit....
Soviet_Mario <SovietMario@CCCP.MIR>: Oct 16 08:45PM +0200

On 16/10/2019 16:12, David Brown wrote:
> doThisAnyway();
> continueProcessing();
 
> You've duplicated code as well as reducing the clarity.
 
the example is formally ingenious :)
But I wonder what could actually be inside.
I mean: a function like doThisAnyway should be either independent of
the result of tryThis or, if it is only PARTIALLY dependent, then
tryThis seems too coarse-grained: internally it has a part that
doThisAnyway depends upon and another that it doesn't, and it could be
logically split into two functions.
The former actually has to be put BEFORE doThisAnyway, and the latter,
possibly generating the error (on which doThisAnyway does not depend),
can be placed AFTER it,

getting something like
 
tryThisPart_1();
doThisAnyway();
try {
    tryThisPart_2();
} catch (...) {
    // etc
}
 
Part_1 is essential to doThisAnyway.
Part_2 is independent.

The two parts seem to me to be only weakly correlated logically
... but it's just sort of an intuition, I'm not really sure.
 
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
aminer68@gmail.com: Oct 16 10:34AM -0700

Hello,
 
 
About the LRU scalable algorithm..
 
On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
> in locked mode in very rare cases. And as I said inserting and
> flushing is conventional locked access.
> So the quest is for you: Can you guess what I did?
 
 
 
I think I am also smart, so I have quickly found a solution that is
scalable and that is not your solution: it needs my hashtable that
scales very well and my fully scalable FIFO queue that I have invented.
But I will not patent it, because I will let you patent yours, since I
have seen you speaking about it on the C++ newsgroup.
 
 
But my solution is not lock-free; it uses locks, and it is scalable.
 
 
Thank you,
Amine Moulay Ramdane.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 16 03:40PM +0200

> https://groups.google.com/d/msg/comp.arch/8Y0C8zGjtqI/bwg-hBLRAQAJ
 
A lock-free stack is the simplest kind of lock-free algorithm.
 
My LRU-algorithm has the following properties: it has a hashtable as
well as an LRU-list. The hashtable points to the nodes in the LRU-list.
For a cache-hit in the hashtable, pushing the LRU-entry to the head of
the LRU-list is kernel-contention-free in almost any case (how often
is configurable). As cache-hits have a high frequency, parallel updates
are very essential. Inserts into the hashtable and the LRU-list as well
as flushes from the LRU-list occur conventionally locked. But this does
not really hurt, because it happens far less often, only at times when
blocks are fetched from disk / SSD; and that has a lower frequency by
nature.
Here's a graph of the performance of my algorithm:
https://abload.de/image.php?img=performancekkklx.png
The groups are the graphs according to the number of threads running.
On the vertical axis you can see the number of updates which occurred
per second. Within the groups the bars have a growing parameter which
controls how often kernel-contention is necessary. For the rightmost
groups this parameter hasn't been pushed to a reasonable maximum, as
the performance hasn't yet logarithmically hit the top throughput. With
this benchmark I randomly issue inserts into the hashtable / LRU-list
on average every 150 iterations to have a realistic disk slowdown.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 16 04:25PM +0200

> the performance hasn't yet logarithmically hit the top throughput. With
> this benchmark I randomly issue inserts into the hashtable / LRU-list
> on average every 150 iterations to have a realistic disk slowdown.
 
BTW: The drop from four to five threads is because of the stupid CCX
concept of my old Ryzen 7 1800X. On this CPU the cores are grouped in
clusters of four cores. Within such a cluster communication is faster
than between the clusters. And the rest of the slowdown with increasing
number of threads simply comes from the cache-traffic.
The leftmost bar in each group is the bar with parameter zero, where
my algorithm has almost the same locking-behaviour as having a single
global lock for everything. As you can see, there's a significant drop
from group 1 / bar 1 to group 2 / bar 1. But the drop isn't as dramatic
as it might look, because I simulate a typical behaviour of filesystems
and databases: multiple fetches from the hash-table / LRU-list occur
at once because of prefetching. So my class has a call for getting
several entries in a row where the locks are taken only once.
Without prefetching you get this performance:
https://abload.de/image.php?img=performance2b2j3l.png
One interesting thing here is that the simple one-mutex-for-all
algorithm has a significant advantage over mine when run
single-threaded. When I raise the parameter which helps my algorithm to
handle mt-workloads, the performance even drops single-threaded! The
other bars in this group are also single-threaded but have an
increasing parameter to handle multithreaded access. And now, as we
don't have any prefetching here, look at the first bars in each group;
those are the bars of the one-mutex-for-all case. The drop is even
higher than 1 : 50 because of the synch-overhead.
For larger numbers of threads the role of the optimization-parameter
becomes increasingly important. The group shifts more and more to being
the leftmost part of a logarithmic curve.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 13 01:51PM -0700

On 10/13/2019 9:26 AM, Manfred wrote:
 
>> ;^)
 
> Funny, however I wonder how reliable it can be - there is no guarantee
> of an actual race condition.
 
Yes there is. I am implementing an atomic fetch-and-add operation in a
very standard yet racy way; well, let's call it the full-blown wrong way
wrt the fetch-and-add...
_______________________________
#include <atomic>

std::atomic<unsigned int> g_racer{0};   // assumed declaration, not shown
                                        // in the original post

void racer(unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i)
    {
        // Race-infested fetch-and-add op: the load and the store are
        // each atomic, but the increment between them is not
        unsigned int r = g_racer.load(std::memory_order_relaxed);
        r = r + 1;
        g_racer.store(r, std::memory_order_relaxed);
    }
}
_______________________________
 
This can be influenced by simple moves of the mouse. It's funny to me.
There is no data race wrt the standard; however, wrt the rules of
fetch-and-add, it's totally foobar!
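For contrast, a race-free version of the same loop would use fetch_add,
which makes the whole read-modify-write a single atomic operation:

void fixed(unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i)
        g_racer.fetch_add(1, std::memory_order_relaxed);
}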
 
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 13 11:27PM -0700

On 10/13/2019 1:51 PM, Chris M. Thomasson wrote:
 
>>> ;^)
 
>> Funny, however I wonder how reliable it can be - there is no guarantee
>> of an actual race condition.
 
Well, if the simulation was using a single thread, then there is no
race condition wrt the foobar'd impl of fetch-and-add.
 
 
> Yes there is. I am implementing an atomic fetch-and-add operation in a
> very standard yet racy way, well, lets call it the full blown wrong way
> wrt the fetch-and-add...
[...]
Bonita Montero <Bonita.Montero@gmail.com>: Oct 13 06:39PM +0200


> Really? When I constantly read from random_device I get 100% load
> on the core, but almost only kernel-time. I.e. random_device must
> be fed significantly by the kernel.
 
When I compile and run this ...
 
#include <random>

int main()
{
    std::random_device rd;
    for( unsigned u = 10'000'000; u; --u )
        rd();
}
 
... this is the result when being compiled with g++ and run with
"Windows Subsystem for Linux 2.0" ...
 
real 0m25.452s
user 0m0.563s
sys 0m24.875s
 
And this is the result when being compiled with MSVC and run under
Windows 10 ...
 
real 531.25ms
user 531.25ms
sys 0.00ms
 
Maybe random_device doesn't make a kernel call for each ()-call, but
the kernel calls might be a thousand times slower than when the app is
fed from userland state.

I think it's a big mistake to build random_device on top of /dev/random
or /dev/urandom. Standard-library random-number generators simply don't
need to give high-quality randomness.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 12 09:40PM -0700

On 10/12/2019 2:14 PM, Manfred wrote:
 
> Bottom line is that the properties random_device are implementation
> dependent, so the authoritative source is the implementation, rather
> than the standard.
 
Agreed. I think a per-thread PRNG seeded with a std::random_device is
fine. The thread would take the seed from a single instance of
std::random_device at startup, before passing control to the user. It
can even mix in its thread id for the seed as an experiment. Therefore,
after this, a thread can use its own per-thread context to generate a
pseudo random number without using any sync.
 
basic pseudo-code, abstracting over certain things:
 
 
struct rnd_dev
{
    a_true_random_ng rdev;   // assuming a non-thread-safe TRNG
    std::mutex mtx;

    seed_type generate()
    {
        std::lock_guard<std::mutex> lock(mtx);   // serialize access
        return rdev.generate();
    }
};

static rnd_dev g_rnd_dev;

struct per_thread
{
    prng rseq;   // thread-local PRNG

    void startup()
    {
        // seed with the main TRNG
        rseq.seed(g_rnd_dev.generate());
    }

    // generate a thread-local pseudo-random number
    rnd_type generate()
    {
        return rseq.next();
    }
};
 
 
Just for fun: actually, each thread can use a different PRNG algorithm.
I remember doing an experiment where there was a global static table of
function pointers to different PRNGs and a thread could choose between
them.
 
 
Btw, can this be an implementation of a random device wrt its output:
 
https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
 
;^)
 
 
 
Manfred <noname@invalid.add>: Oct 13 06:36PM +0200

On 10/13/19 6:28 PM, Bonita Montero wrote:
 
> Really? When I constantly read from random_device I get 100% load
> on the core, but almost only kernel-time. I.e. random_device must
> be fed significantly by the kernel.
 
In my case disassembly shows that it does - of course it can be that on
other systems it doesn't.
Manfred <noname@invalid.add>: Oct 13 06:26PM +0200

On 10/13/19 6:40 AM, Chris M. Thomasson wrote:
 
> Btw, can this be an implementation of a random device wrt its output:
 
> https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
 
> ;^)
 
Funny, however I wonder how reliable it can be - there is no guarantee
of an actual race condition.
 
Anyway, for a few years now Intel has been shipping CPUs with rdrand,
and GCC's random_device does use it, as far as I can see. Still I get
entropy() = 0, though.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 13 06:28PM +0200

> Anyway, for a few years now Intel has been shipping CPUs with rdrand,
> and GCC's random_device does use it, ...
 
Really? When I constantly read from random_device I get 100% load
on the core, but almost only kernel-time. I.e. random_device must
be fed significantly by the kernel.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 13 04:11PM +0100

On Sun, 13 Oct 2019 09:33:56 -0500
> for(const auto &i : a)
> cout << i << endl;
> }
 
You have to prevent pointer decay so as to enable the array to retain
its size. One way to do that is to take the array by reference so that
the size remains part of its type:
 
void func(int (&a)[5]) {
    for(const auto &i : a)
        cout << i << endl;
}
 
Another way is to use std::array, which has its size as its second
template parameter.
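A small sketch of that alternative; one function template accepts any
size without decay because the size is part of the type:

#include <array>
#include <cstddef>
#include <iostream>

template <std::size_t N>
void func(const std::array<int, N>& a)
{
    for (const auto& i : a)
        std::cout << i << '\n';
}

int main()
{
    std::array<int, 5> a{1, 2, 3, 4, 5};
    func(a);
}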
Joseph Hesse <joeh@gmail.com>: Oct 13 09:33AM -0500

On 10/3/19 9:03 AM, Joseph Hesse wrote:
> that it has to stop when it sees a 0?
 
> Thank you,
> Joe
 
This is similar to my last post.
How can one put a "range-based for loop" for a built-in array type
inside a function? The following program will not compile to an object
file.
 
#include <iostream>
using namespace std;

void func(int a[])
{
    for(const auto &i : a)   // error: 'a' has decayed to int*, so the
                             // range-for has no way to find its bounds
        cout << i << endl;
}
Bonita Montero <Bonita.Montero@gmail.com>: Oct 16 06:14AM +0200

> doesn't declare any functions. I don't actually need the name or
> declaration for the real function corresponding to f, merely it's
> relationship to the classes you've already mentioned.
 
I'm not interested anymore in solving the problem. I'm using the
cast to the type of FREE_NODE and everything is fine. Maybe a newer
version of g++ will fix this.
