Thursday, August 25, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 24 11:41PM +0100

On 24/08/2016 22:58, Jerry Stuckle wrote:
 
> The cost of unwinding a stack is ALWAYS a concern - at least to a
> competent programmer.
 
No, it isn't. Donald Knuth once said:
 
"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil. Yet we should not pass
up our opportunities in that critical 3%."
 
So we only need to worry about the overhead of throwing exceptions in
that critical 3%, not the other 97%.
 
Maybe you should take a course in computer programming?
 
/Flibble
Jerry Stuckle <jstucklex@attglobal.net>: Aug 24 10:57PM -0400

On 8/24/2016 6:41 PM, Mr Flibble wrote:
> that 3% not that 97%.
 
> Maybe you should take a course in computer programming?
 
> /Flibble
 
Right. This from a demonstrated troll who has proven he knows nothing
about programming.
 
No wonder you can't get a job with a decent company.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Gareth Owen <gwowen@gmail.com>: Aug 25 06:15AM +0100


> I read that as a function that takes an integer
> "x", foo's it, and returns an integer result. It is clear and simple,
> and ready to use.
 
That assumes that the returned int must always be a valid value
(i.e. there are no domain errors, foo(x) can never fail due to resource
exhaustion etc). Otherwise, some return value(s) must be set aside as
error codes, which you must now check. There's no clue in the function
signature as to whether this is true. Furthermore, if you neglect to check
for these error codes, your program appears to work, but is probably
doing so incorrectly. If I do check, I have to propagate that error up
the call chain, which is probably going to require me to find out the
error-return and return that.
 
(Boy, I hope the callers all the way up the stack remember to check &
propagate, and clean up their resources correctly).
 
Conversely, if you know that all return values of int foo(int x) are not
error codes (i.e. foo() can't fail), then you also know that foo is not
going to throw. If you are unaware of this failure case, your program
fails, but it fails early with a (possibly uncaught) exception.
 
The only question that matters is "Can this function fail and what
happens if it does?"
 
int foo(int x);
 
doesn't tell you that.
Gareth Owen <gwowen@gmail.com>: Aug 25 06:27AM +0100


> So we only need to worry about the overhead of throwing exceptions in
> that 3% not that 97%.
 
> Maybe you should take a course in computer programming?
 
And additionally, it's simply wrong to suggest that if I use an
error-return code to signal failure from the point-of-failure to
point-of-recovery then I am somehow absolved from having to unwind the
stack.
 
On the contrary, I now have to do it by hand, requiring error-handling
code at multiple places in the call graph, and requiring an API that has
error returns for every function.
David Brown <david.brown@hesbynett.no>: Aug 25 09:58AM +0200

On 25/08/16 07:15, Gareth Owen wrote:
 
> That assumes that the returned int must always be a valid value
> (i.e. there are no domain errors, foo(x) can never fail due to resource
> exhaustion etc).
 
Exactly. It's that simple.
 
If there are domain restrictions on x, then they should be documented as
part of the type of x, the name of the function, or a comment. If code
calls foo with an invalid x, then the calling code is broken. It has a
bug. When implementing foo(), it is helpful to spot invalid x values to
help during debugging, and it is helpful to avoid propagating errors,
but it is /not/ helpful to try to aid recovery in the calling code.
 
Functions should not fail when they are used correctly. If a function might do
something other than its primary function, then that is a secondary
function and should be part of the function's specifications. You can't
have a function that claims to always do "foo", but sometimes does
something else entirely just because of lack of memory. Then it is no
longer the function "foo" - it is the function
"try_to_foo_or_do_bar_instead". And that is absolutely fine - but it is
a different specification.
 
> Otherwise, some return value(s) must be set aside as
> error codes, which you must now check.
 
If foo can do something other than its expected job, and the caller
of foo cares about it, then yes - some sort of returned indication is
needed, and some action is needed by the caller. In many cases, the
caller is /not/ interested - how often do people check the return from
printf?
 
> There's no clue in the function
> signature as to whether this is true.
 
If the function needs to return a "success/failure" indication as well
as a return value, then it has to return two bits of information. This
needs to be in the function's signature and/or documentation - just like
any other function that returns one or more pieces of information.
Sometimes for efficiency, the error states are indicated by particular
values in the return type - the documentation and commenting needs to be
/really/ clear in such cases.
 
But with exceptions, you really have no clue as to what is going on -
the specifications are totally missing from the function signature.
Given "int foo(int x);", you don't have a function that returns an int -
you have a function that can return absolutely anything.
 
> doing so incorrectly. If I do check, I have to propagate that error up
> the call chain, which is probably going to require me to find out the
> error-return and return that.
 
Exactly the same applies to exceptions.
 
Exceptions can make it a little easier to pretend that this doesn't
happen, or that it doesn't matter. But they also mean that you have to
code /everything/ as though an error was propagating through it, because
you never know when that might happen.
 
And the real meat of the problem - what should code do with the error -
is pretty much the same. Generally, there are four sorts of error that
could be caught:

1. Coding bugs. These should (at least during testing/debugging) not be
propagated, but lead as quickly as possible to a helpful error message
so that they can be fixed.

2. Recoverable local errors (such as a packet error that triggers a
retry). These should be spotted locally, and handled locally. Thus
propagation is not an issue.

3. Recoverable long-range errors (such as a connection timeout). The
function that can fail in this way is a "best effort" function, and
"failure" is part of its expected normal operation - there is no error.

4. Unexpected disasters, such as running out of memory. Recovery is not
an option, and propagation is seldom useful or successful (good luck
propagating exceptions when your heap has failed).
 
> happens if it does?"
 
> int foo(int x);
 
> doesn't tell you that.
 
In my view, it /does/ tell you - functions don't fail if you use them
correctly. But function specifications are often lax about telling you
/exactly/ what they will do in various circumstances (either dependent
on the parameters, or dependent on other resources).
 
If you write a function like this:
 
int squareRoot(int x);
 
described as "returns the integer square root of x", then the problem
lies not with the function, but with the specification of the function.
 
It should be specified as:
 
"returns the integer square root of x if x >= 0, and has undefined
behaviour otherwise"
 
or
 
"returns the integer square root of x if x >= 0, or -1 otherwise"
 
or
 
"returns the integer square root of x if x >= 0, or throws
std::domain_error otherwise"
 
Code can use the function incorrectly, but the function itself does not
"fail". "Failure modes" are part of the specification of a function,
and thus known and documented behaviour.
scott@slp53.sl.home (Scott Lurndal): Aug 25 01:03PM


>Again your ignorance is showing. In a decent implementation the
>overhead for exception handling is zero for the case of the exception
>not being thrown.
 
Actually, there is a non-zero cost for that case; the instruction
footprint of the code gets larger due to the added code to handle
stack unwinding when the exception is thrown. This will have a
rather small impact on performance due to Icache resource scarcity,
which matters in some cases (embedded code, operating systems,
hypervisors, simulators).
 
$ objdump -d bin/a > /tmp/a.dis
$ grep Unwind_Resume /tmp/a.dis |wc -l
941
 
I once compared using sigsetjmp/siglongjmp with using Exceptions
in the same codebase. Exceptions were significantly slower. It's
much more difficult to use sigsetjmp/siglongjmp correctly, however,
particularly in multithreaded code. That codebase still uses
sigsetjmp/siglongjmp for performance reasons (and because it was
developed before efficient C++ exceptions were available, at a time when
a 180 MHz Intel P6 was brand new and 4 MB was a lot of memory).
scott@slp53.sl.home (Scott Lurndal): Aug 25 01:05PM


>That assumes that the returned int must always be a valid value
>(i.e. there are no domain errors, foo(x) can never fail due to resource
>exhaustion etc). Otherwise, some return value(s) must be set aside as
 
Or you can use 'bool foo (int x, int& rval)' instead.
Jerry Stuckle <jstucklex@attglobal.net>: Aug 25 09:30AM -0400

On 8/25/2016 1:27 AM, Gareth Owen wrote:
 
> On the contrary, I know have to do it by hand, requiring error handling
> code at multiple places in the call graph, and requires API that has
> error returns for every function.
 
Unwinding the stack via exception handling is much more CPU intensive
than returning from functions.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Aug 25 04:14PM +0200

On 25/08/16 15:05, Scott Lurndal wrote:
>> (i.e. there are no domain errors, foo(x) can never fail due to resource
>> exhaustion etc). Otherwise, some return value(s) must be set aside as
 
> Or you can use 'bool foo (int x, int& rval)' instead.
 
Yes. Or "int foo(int x, bool& success)", or "tuple<int, bool> foo(int
x)", or various other arrangements. You can even use a global "errno"
type value (though it might make some people scream...). Or you can use
exceptions.
 
The point is that this should all be part of what "foo" does - not a
hidden mechanism.
"Öö Tiib" <ootiib@hot.ee>: Aug 25 08:01AM -0700

On Thursday, 25 August 2016 02:17:36 UTC+3, David Brown wrote:
> > simple and foolproof tool.
 
> It is neither simple nor foolproof, whether or not you use exceptions.
> Exceptions are just one tool in the C++ toolbox.
 
So my solution with that so far has been to give the tool to people
who can handle it or at least express excitement to learn to. Existence
of people who can't handle it and/or dislike it does not matter.
 
> specifications did not guarantee anything (other than giving the
> compiler new ways to make things go wrong for your program if you don't
> follow your specifications).
 
Yes. In C++98 the exceptions were somewhat worse. It is generally considered
good news that things evolve.
 
 
> Sure, it can be done - but the effort involved is not negligible. If
> the end result is better, safer code, then great. If not, the effort is
> wasted, and it's another thing for people to get wrong.
 
Yes. Exceptions, like everything else, are not a silver bullet. There is a
cost when we code and a run-time cost. Misuse may result in inefficiency
*and* more work. So don't misuse, use. ;-)
 
For example, a hard drive can become full. For what operations in our
program is that a showstopper issue? Where do we discover it? Where
can we do anything about it? How likely is it to happen? The answers
to those questions likely show that it is a perfect candidate to handle
with an exception, but it is sure not as "cheap" as ignoring the whole
possibility and assuming that the hard drive is *never* full. I remember
Windows NT 3.5 was such that it just died when the system drive was
full.
 
> > when it returns an error code. So it is special *functionality* and is
> > expected to be documented about a function.
 
> Having a strong guarantee on a function can definitely be a useful thing.
 
My point was that there has to be such a requirement and that it is orthogonal
to exceptions. Example: in the middle of overwriting a file we may discover that
we lack something to complete the operation. Now we want the file back how
it was before we started to write it. Rolling that file back makes sense
regardless of whether the programming language supports exceptions or not,
and the programming effort and run-time cost of it is not somehow magically
bigger for languages that have exceptions.
 
> and the complications for the strong guarantee can make code much
> slower. And all in the name of avoiding something that will never
> happen, and minimising the consequences unnecessarily.
 
I do not see what you mean. The strong roll-back guarantee is needed
only in very special cases IMHO. The basic exception guarantee is plenty
in the general case.
 
 
> Again, I am not saying it is wrong to use exceptions, or wrong to
> provide strong guarantees - just that it comes with a cost that is often
> not worth paying.
 
Again I can't see your point. We can't redefine the universe, which is full of
things that can break down or fill up or reach limits in various ways.
"Hard drive full" is unacceptable? "Sensor short-circuited" is not acceptable?
Reality has to be acceptable; problems have to be acceptable. A solution of
quitting work when a sensor is short-circuited may be unacceptable. That
is again a requirement orthogonal to whether the discovery of the
short-circuited sensor was propagated with exceptions or error
codes. In any case we have to behave differently with a broken sensor, and
there goes the real development effort.
 
 
... snip

> > the issue.
 
> Lack of support for exceptions, or inefficient implementations, does not
> mean low quality tools.
 
Ok, the former is better described as a "non-conforming to specification" tool
and the latter as an "inefficient" tool. Both have "quality of implementation"
issues, IOW are "low quality" tools. The quality of a tool is not uniform, so
some other feature may be done brilliantly there, but we got a fly in the
ointment, so "low quality".
 
> have exceptions, you have to be aware that just about anything can fail
> unexpectedly with an exception. Without exceptions, and using explicit
> error handling, you know where things can go wrong.
 
What do you mean by that "trial-and-error code"? Doing something may be the
cheapest, and sometimes the only available, proof that it is doable. We may
check that the hard drive has plenty of room beforehand, but a millisecond
later some other process uses it up and our write fails. Real code is actually
full of unhandled exceptional circumstances, with or without exceptions. Let's
take the famous first "trial-and-error" C program:
 
/* Hello World program */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("Hello World!\n");
    return EXIT_SUCCESS;
}
 
The return value of 'printf' is unhandled, and therefore 'EXIT_SUCCESS'
is a lie in the exceptional case when it fails. If 'printf' threw on
failure then there would at least be a chance that some static analysis
tool would report a potentially unhandled exception. Otherwise it is
all the same old crap.
 
 
> completely unexpected object of an unknown type that may not even be
> known to the user of foo, which will lead to a jump to somewhere else in
> code that the user of foo doesn't know about.
 
Exceptions must be documented like any other potential outcome. If the
exceptions potentially come from injected stuff then that must also be
documented. Also, some of it can be constrained and handled at compile
time: 'noexcept' can be 'enable_if'd on, and so constrained or
specialized for.
 
The 'throws' clutter in a Java function's signature does not work well;
programmers weasel out of it by throwing covariants, and it gets
close to 'public void foo(String myString) throws Throwable' plus the
actual exceptions described in documentation. So it is only good that
C++ deprecated that worthless nonsense bloat.
 
 
> If foo is a function that could fail, I prefer to know about it:
 
> tuple<int, bool> foo(int x);
 
I prefer 'boost::optional<int>' there when lack of a return value is normal,
the reason for it is obvious, and the show must go on anyway. The
'pair<int,bool>' or 'tuple<int,bool>' are less convenient.

However, if 'false' is exceptional (say 1/50000 likelihood) and it
breaks the whole operation during which that 'foo' was called, then
handling it in close vicinity just pointlessly clutters up the core
logic.
 
With exceptions we can deal with it farther away, maybe even up the stack
in a 'catch' block, and there we may also get the phone number of the
grandmother whose fault it was that the hard drive was full and no
'int' emerged from 'foo', so we could not complete the whole
operation.
 
 
> As an aside, I /really/ hope that structured binding makes it into
> C++17, so that we can write:
 
> auto [result, success] = foo(123);
 
That is another issue: how to express what the 'in', 'in-out' and 'out'
parameters are. It is a lot simpler since it is a syntax issue. Exceptions
are not any of those (and the list may be rather long) so it is a good idea
to handle them separately.
Gareth Owen <gwowen@gmail.com>: Aug 25 06:13PM +0100

>>(i.e. there are no domain errors, foo(x) can never fail due to resource
>>exhaustion etc). Otherwise, some return value(s) must be set aside as
 
> Or you can use 'bool foo (int x, int& rval)' instead.
 
Of course you can. But if every failable function looks like this, you
end up with a horrible nested mess of error checks and partial cleanup
whenever you try and do something complicated.
 
Consider a Matrix algebra package that can go wrong in multiple ways
(singular matrices, numeric instability, incompatible dimensions etc).
 
You can write code that looks like:
 
try {
    Matrix A = B + C + D;
    Vector Z;
    C = transpose(B) * (inverse(A) * Z);
    return C*C + transpose(A*Z);
} catch(const MatrixError&) {
    // whatever
}
or you can write
 
 
if(Add(tmp,B,C)) goto fail;
if(Add(A,tmp,D)) goto fail;
if(Transpose(tmp1,B)) goto fail;
if(Inverse(tmp2,A)) goto fail;
if(Mult(tmp3,tmp2,Z)) goto fail;
// Well you get the idea... there are, what, ten-fifteen more lines here that
// I can't be bothered to write!
// .
// .
// .

return tmp;
 
fail:
return -EMATRIX_ERROR;
 
No thanks.
Gareth Owen <gwowen@gmail.com>: Aug 25 06:20PM +0100


> Actually, there is a non-zero cost for that case; the instruction
> footprint of the code gets larger due to the added code to handle
> stack unwinding when the exception is thrown.
 
Is this more or less than the size of the error checking/handling code
that you would have to put if you choose not to use exceptions?
 
And it's easier for the compiler to put the exception-unwind code in a
relatively remote segment and jump to it when an exception is thrown,
rather than interleaving the error handling in the hot path. This can
actually improve cache behaviour. And since the compiler knows what the
"normal" path through the code is, that can improve the performance of
branch prediction and speculative execution, as well as cache locality.
SG <s.gesemann@gmail.com>: Aug 25 05:32AM -0700

On Wednesday, August 24, 2016 at 7:24:00 PM UTC+2, Bo Persson wrote:
 
> On the other hand, this doesn't forbid storing a Type in a std::vector,
> even if Type's move constructor throws on odd Thursdays in october on
> leap-years.
 
Right. And I'm really glad that noexcept/move_if_noexcept along with
type traits to test "noexceptability" made it into C++11. It was an
important add-on to C++'s move semantics story.
 
> Flexibility before simplicity. :-)
 
I don't think of possibly throwing move constructors as a feature but
more as an accidental consequence of language design decisions that
made things more complicated than they had to be. All of the C++11/14
object types I can think of that have a dedicated move constructor but
a possibly throwing one are due to composition with legacy types:
Think std::pair<std::string,Foo> where Foo is a legacy type that is
not move-optimized but std::string offers a nice move constructor we'd
like to make use of if such a pair is moved. This pair's move
constructor cannot be marked noexcept in case Foo has a throwing copy
constructor. So, basically, we have throwing move constructors because
we have throwing copy constructors and making types movable requires
a class author's interaction in case the author already wrote a custom
destructor.
 
In a language like Rust this is not something anybody needs to worry
about because every value of every data type automatically moves
efficiently without any failures. The flexibility you lose is the
ability to define a custom action that is taken when a value of a
user-defined type moves. While this may sound inflexible I struggle to
come up with compelling examples where this would be an issue. The
only example I can come up with is Boost's linked_ptr. It's a kind of
shared ownership smart pointer which keeps track of all owners not by
allocating a shared reference counter but by linking itself with other
owners in a circular list. So, it needs to be aware of when it moves
to where. But this kind of smart pointer vanished some time ago for
some reason (performance/threading/lack of a weak_linked_ptr/...?).
And in other cases that I can't come up with, an additional layer of
indirection is probably part of the solution. Given the potential
rarity of such scenarios, I think it's fair to say that this
restriction is insignificant.
 
Cheers!
SG
me <crisdunbar@gmail.com>: Aug 25 08:59AM -0700

On Thursday, August 11, 2016 at 8:09:06 PM UTC-7, Richard Damon wrote:
> resource out of the object. It is perfectly valid for a move constructor
> to be identical in function to a copy constructor, so the destructor
> does need to be run to make sure the resources are freed.
 
Ack. But I think it's really more fundamental than that. The simple fact
is: the moved-from object _still_exists_, so it must be destructed. Even
after the move, there are still two objects. Typically, the destructor
will only have to do trivial things, as pointers to data should have been
nulled during the move, other resources have been shifted over to the new
object, etc, but the object exists until destructed.
thomas.grund.1975@gmail.com: Aug 25 01:41AM -0700

Hello,
 
Why exactly does the following code compile?
I would expect that it is not possible to cast (?) 0 to some const reference.
What is the C++ language rule behind this?
 
Thanks a lot,
Thomas
 
 
#include <string>

void f(const std::string &) {}

int main() {
    f(0);
}
Ian Collins <ian-news@hotmail.com>: Aug 25 09:37PM +1200

> int main() {
> f(0);
> }
 
It will construct a std::string object with the constructor that takes a
const char* parameter (0 is being interpreted as NULL).
 
It may compile, but it would crash if you ran it.
 
--
Ian
Ben Bacarisse <ben.usenet@bsb.me.uk>: Aug 25 10:56AM +0100


> Why exactly does the following code compile?
> I would expect that it is not possible to cast (?) 0 to some const reference.
 
I agree with the "?": I would not use the word cast here. A cast is an
operator used to perform explicit conversions, and there is no cast
operator in the code.
 
What's more, the part you are asking about is not really a type
conversion at all; instead, a temporary std::string object is being
created.
 
> What is the C++ language rule behind this?
 
One of the constructors for std::string has a single const char *
parameter, and the literal 0 is a valid argument for that constructor.
There are lots of language rules that are needed to explain the gory
details but this might be enough explanation for now.
 
<snip>
> int main() {
> f(0);
> }
 
Calling this std::string constructor with a null pointer as the argument
is undefined behaviour, so the code might compile but it's not valid.
 
--
Ben.
thomas.grund.1975@gmail.com: Aug 25 02:57AM -0700

Am Donnerstag, 25. August 2016 11:37:41 UTC+2 schrieb Ian Collins:
 
> It may compile, but it would crash if you ran it.
 
> --
> Ian
 
Thank you, that helps!
 
Thomas
thomas.grund.1975@gmail.com: Aug 25 02:59AM -0700

Am Donnerstag, 25. August 2016 11:56:36 UTC+2 schrieb Ben Bacarisse:
> is undefined behaviour, so the code might compile but it's not valid.
 
> --
> Ben.
 
Thanks, that helps!
 
Thomas
bitrex <bitrex@de.lete.earthlink.net>: Aug 25 11:38AM -0400

On 08/25/2016 05:37 AM, Ian Collins wrote:
 
> It will construct a std::string object with the constructor that takes a
> const char* parameter (0 is being interpreted as NULL).
 
> It may compile, but it would crash if you ran it.
 
Under the latest GCC with -std=c++11, it causes a runtime error:
std::logic_error, essentially complaining that you're trying to
construct a string from a null pointer.
Bo Persson <bop@gmb.dk>: Aug 25 01:10PM +0200

On 2016-08-24 20:46, Cholo Lennon wrote:
> including it? (at least in g++ and VC++ this behavior is present)
 
> The standard seems not guarantee that ("file is searched for in an
> implementation-defined manner"):
 
The language standard does not limit itself to file systems with a
directory structure. z/OS used on IBM mainframes is an important example.
 
 
So it might depend on exactly HOW portable you want to be.
 
 
Bo Persson
bitrex <bitrex@de.lete.earthlink.net>: Aug 24 07:36PM -0400

On 08/24/2016 05:15 PM, Victor Bazarov wrote:
> Splitting responsibilities of 'auto_ptr' between 'shared_ptr' and
> 'unique_ptr' solved some serious problems, IIRC...
 
> V
 
Ah okay. The simple example above did not include the rest of the
code...the purpose is because unlike the example, there are actually
class methods which are returning heap-allocated vector objects and
assigning them to that "unique_ptr."
bitrex <bitrex@de.lete.earthlink.net>: Aug 24 09:18PM -0400

On 08/24/2016 04:35 PM, Alf P. Steinbach wrote:
> [/code]
 
> Cheers & hth.,
 
> - Alf
 
Okay, it seems my point of confusion was where std::vector allocates the
container items. If it's always on the heap by default, then there's
indeed no point in doing what I'm doing. Too used to the "C-style" arrays...
bitrex <bitrex@de.lete.earthlink.net>: Aug 25 05:32AM -0400

On 08/24/2016 03:28 PM, bitrex wrote:
> Having a bit of trouble with code like this (still getting acquainted
> with 21st century C++):
 
Here's a working example of what I was trying to do that's better, I think.
 
Foo is a class with a regular std::vector member variable containing a
bunch of some integer type T.
 
Bar is a class containing a std::vector of some number of unique_ptrs
to class Foo<T> which are pushed back onto the vector when an object of
type Bar is instantiated. Class Bar can then sort the vector of Foo
objects by pointer instead of by value, and pick the "best" one.
 
If my understanding is correct, because the vector in Bar contains
unique_ptrs to Foo, when Bar goes out of scope all the Foo objects and
their vectors of type T will be released from memory automatically.
 
http://pastebin.com/68NqSZg8
Ian Collins <ian-news@hotmail.com>: Aug 25 09:41PM +1200

On 08/25/16 09:32 PM, bitrex wrote:
 
> If my understanding is correct, because the vector in Bar contains
> unique_ptrs to Foo, when Bar goes out of scope all the Foo objects and
> their vectors of type T will be released from memory automatically.
 
Your understanding is correct. I haven't looked at the code, but in many
cases std::vector<std::unique_ptr<T>> can be used as a standard
replacement for boost::ptr_vector<T>.
 
--
Ian
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
