Thursday, July 26, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

legalize+jeeves@mail.xmission.com (Richard): Jul 25 08:57PM

[Please do not mail me a copy of your followup]
 
Jorgen Grahn <grahn+nntp@snipabacken.se> spake the secret code
 
>I guess my message is: save the documentation for when it's needed.
 
This is why "Comments" are one of the code smells listed in
"Refactoring" by Martin Fowler.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
gof@somewhere.invalid (Adam Wysocki): Jul 26 09:16AM

> *
> * \retval Whether dividend is divisible by divisor
> */
 
Could be (if we replace \retval with \return). It's Doxygen syntax (more
precisely: JavaDoc syntax, but used with Doxygen).
 
After running it through Doxygen it looks like this:
 
http://www.chmurka.net/r/usenet/fizzbuzz.png
 
The code is not visible here (and should not be), only this documentation
and prototypes.
Bo Persson <bop@gmb.dk>: Jul 26 02:35PM +0200

On 2018-07-26 11:16, Adam Wysocki wrote:
 
> http://www.chmurka.net/r/usenet/fizzbuzz.png
 
> The code is not visible here (and should not be), only this documentation
> and prototypes.
 
Nice pictures, but it only confirms Jörgen's argument about the redundancy.
 
If the function is
 
bool is_X()
 
it pretty much tells us what the function does.
 
The description "Checks if X is true" and "returns true if X is true",
"returns false if X is false" just makes us read more text to verify
that it doesn't say something else. It doesn't help in any way.
 
 
Bo Persson
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jul 26 05:53AM -0700

On Thursday, July 26, 2018 at 8:35:41 AM UTC-4, Bo Persson wrote:
 
> The description "Checks if X is true" and "returns true if X is true",
> "returns false if X is false" just makes us read more text to verify
> that it doesn't say something else. It doesn't help in any way.
 
In this case it's redundant, but in other cases it may need to do
more to test if X is true. X may only be true in cases where it
is true AND we're not currently in some maintenance override mode,
or if the application's thread isn't suspended to save battery life,
etc. There may be more factors than a raw read and report. The
documentation here would give that information, up or down.
 
And, to be consistent with other documentation in the project, by
having such documentation here, it makes it an easy task to see
what's what quickly.
 
I like Doxygen. I think it's a little obtuse in some cases, but
what it provides is great. I have used such documentation rather
heavily on Java-based apps where it was built-in. I've used it
rarely in other software.
 
--
Rick C. Hodgin
David Brown <david.brown@hesbynett.no>: Jul 26 03:29PM +0200

On 26/07/18 14:53, Rick C. Hodgin wrote:
> or if the application's thread isn't suspended to save battery life,
> etc. There may be more factors than a raw read and report. The
> documentation here would give that information, up or down.
 
Yes, of course. If you can give a good enough description of the
function, its actions, its parameters and its return value by picking a
good name, then do so. If more information is needed, then document it.
I don't think anyone is arguing against useful comments - merely
against redundant ones.
 
It is a good general rule that if something can be described in the
language or the names of identifiers, then that is what you use. Only
resort to comments when necessary (but don't skimp on them if they /are/
necessary). Extra comments have a tendency to lose synchronisation when
the source code changes - information held in the source itself does not.
 
> what it provides is great. I have used such documentation rather
> heavily on Java-based apps where it was built-in. I've used it
> rarely in other software.
 
I like doxygen too. As well as for commenting my own code, I have found
it useful for analysing existing code - set it up to document
/everything/ regardless of comments, and include caller, callee and
include graphs. It makes it easy to navigate around the code, see
cross-references, find messy parts, etc., using just a browser.
 
But what I don't like is when there is a documentation standard that
says you need big sections of doxygen commenting (or any commenting, for
that matter) for every little function or bit of code, so that you can't
find the real code in all the commentary.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jul 26 10:29AM -0400

On 7/26/2018 9:29 AM, David Brown wrote:
> resort to comments when necessary (but don't skimp on them if they /are/
> necessary). Extra comments have a tendency to lose synchronisation when
> the source code changes - information held in the source itself does not.
 
That's one viewpoint. It's not the only one. And it's good practice
to err on the side of over-documenting if the documentation is also
clear and even obvious to some, because not everybody will be thinking
the way you were when you wrote it. The little bit of
extra prompting might kick their brain cells into gear and get them
from where they are to where they need to be.
 
> says you need big sections of doxygen commenting (or any commenting, for
> that matter) for every little function or bit of code, so that you can't
> find the real code in all the commentary.
 
Yes. Yours is one viewpoint. I've also worked with/for people who
like to see documentation over code, because then even managers can
follow along with what it's doing.
 
Chocolate, vanilla and strawberry flavored ice cream. They all exist
for a reason. Not everybody likes your chocolate, David. A lot of
people do. And probably most people like it from time to time. But,
many people prefer vanilla, strawberry, or other, or if they're like
me and Captain Janeway, coffee-flavored ice cream (she's also from
Indiana you know, from a town about an hour south of where I live).
 
--
Rick C. Hodgin
legalize+jeeves@mail.xmission.com (Richard): Jul 25 08:56PM

[Please do not mail me a copy of your followup]
 
BGB <cr88192@hotmail.com> spake the secret code
 
>I have my own custom C compiler (*) which can compile Quake and Doom in
>about 5 seconds.
 
Are your customizations around making the compilation process faster?
If so, what exactly are you doing to get the speedup? Is this
something that could be contributed to clang?
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
BGB <cr88192@hotmail.com>: Jul 26 12:17AM -0500

On 7/25/2018 3:56 PM, Richard wrote:
 
> Are your customizations around making the compilation process faster?
> If so, what exactly are you doing to get the speedup? Is this
> something that could be contributed to clang?
 
Nothing that seems particularly notable (beyond just its
potentially "unusual" architecture).
 
The original form of this compiler was written about 10 years ago (based
on a fork of an early form of my BGBScript language, which was at the
time a lazy JavaScript mockup with "pretty much awful" performance;
which I had hacked into being able to parse C), and is stand-alone /
ground-up, and written in C. But, at the time, the original idea was
partly inspired by the scripting system used by Quake 3 Arena.
 
It has been used/maintained on and off for much of this time. It spent many
years ignored and/or used as a "parse C headers for metadata" tool,
before fairly recently being "reborn" as a full-fledged C
compiler for some of my recent projects.
 
 
I don't know Clang well enough to comment how much is applicable to
LLVM/Clang, and its speed doesn't really seem to be due to anything in
particular (beyond maybe being fairly small/simple).
 
Speeds otherwise generally aren't that drastically different from what I
see from MSVC (whereas compiling Quake with GCC in WSL is closer to
around 35s on my PC; though it seems a little faster if compiled via
Linux inside VirtualBox).
 
 
Vs a normal C compiler (such as GCC):
  The compilation process is essentially monolithic.
    A single binary does everything, rather than spawning new processes.
    There is currently a cache where headers are kept once read in.
    Otherwise, not much seems particularly notable on this front.
  Some minor lexer/parser tricks exist, but nothing major.
  AST nodes are essentially small key/value B-Trees.
    It is faster and uses less memory than CONS lists or XML DOM.
    But, many other compilers use tagged unions (probably faster).
  All the translation units use a shared context in the middle/backend.
    Internal renaming is used to isolate TU-local features.
      Ex: 'static' variables are renamed to hide them from other TUs.
    As-is, this abstraction has some leaks which can break stuff (*1).
    Only during parsing/frontend stages are TUs independent.
  Code generation uses a few major stages.
    There is no separate assembler or linker stage.
    Codegen sends commands directly to an "instruction emitter".
      Resembles the inner part of an assembler, w/o an ASM parser.
      I did later add an ASM parser, but mostly to support ASM.
    It just relocates sections and spits out a binary as its output.
 
 
( *1: In particular, mismatched declarations between translation units
can break compilation. As can quietly having global variables and
functions with overlapping names (though MSVC also rejects these, but
GCC seems to accept them in certain cases).
 
Doom stumbled on a lot of these edge cases, so I ended up modifying the
code so that either the declarations matched or, in other cases, things
were manually renamed such that there was no longer a name clash. GCC
and MSVC just sort of quietly merged these cases union-style (unless
they were initialized with a value, in which case the linker would
typically complain).
 
Granted, a better solution could be making this work, probably along
with a few other cases which aren't currently handled, like initializer
expressions performing arithmetic or other calculations on the addresses
of other variables (which can't be handled in the front-end).
 
An overall better solution (to the naming issue) may be to have a
two-level scheme, with a partially separate per-translation-unit
namespace, and a second-level "extern" space which bindings may be
promoted to if referenced from within another translation unit, but
which may be shadowed by bindings local within each TU. Uninitialized
variables may be merged across TU's if their names match and their types
can be reconciled (otherwise, "do something sane").
 
These sorts of edge cases weren't really much of an issue for Quake or
for most of my own code though.
)
 
 
 
Beyond the frontend stage, there is no real separation between
translation units in the middle-end or backends.
 
Things like the C library are handled by compiling to an intermediate
stack-based bytecode (vaguely similar to CLR bytecode, produced with
only partial type-information), which is reloaded by the middle-end and
converted into a 3AC/SSA form used by the backend. No bytecode is used
at runtime; it exists purely as an IR stage (between the front-end and
middle-end). In the conversion to 3AC, stack locations get remapped to
temporaries.
 
The current stack bytecode is called "RIL3", which consists of blobs of
stack-based bytecode within a collection of begin/end tag structuring.
The format can also contain LZ compressed data blobs (using an LZ4 like
format), which are used for various purposes (such as blobs of ASM code).
 
 
The backend basically works with everything in 3AC form, and using an
SSA-like scheme, but it differs from the one used in LLVM in that,
rather than giving each instance of a variable its own "name", it is
given a name with a revision number (and implicitly a "phi" just sort of
merges everything which corresponds to the same base variable).
 
Though, in general the backend is pretty naive (fancy optimizations
aren't really its strong area).
 
 
Eliminating a lot of the "infrastructural busywork" compared with
something like GCC does help with keeping the code smaller and simpler.
 
Namely, there is a whole lot of stuff in GCC which just doesn't need to
exist in this design.
 
 
As-is, compiler code size:
  Current total = ~157 kLOC:
    27 kLOC, BSR1 ISA backend;
    34 kLOC, BJX2 ISA backend;
    41 kLOC, SuperH/BJX1 ISA backend;
    10 kLOC, C parser;
    23 kLOC, front/middle-end (AST -> RIL -> 3AC);
    15 kLOC, MM/AST/util code;
    ~7 kLOC, misc stuff.
 
The compiler binary is currently approx 2.5 MB.
The compiler can also recompile itself within a few seconds.
 
 
I have debated a possible feature to allow front-ends and back-ends
to exist as plug-in DLLs or SOs, but have been on the fence, as this
would add complexity to the compiler.
 
 
ISA Support Overview:
 
BJX1 was a prior ISA of mine, developed as a superset of the
Hitachi SH-4 ISA. A later incarnation was a 64-bit ISA with 32 GPRs in
which only a superficially similar instruction set remained (code was not
binary compatible between the 32- and 64-bit ISAs). Much hair developed
trying to deal with this much ISA variation in a single backend.
 
BSR1 was similar to SH, but somewhat simplified (left out features that
unnecessarily created pain when trying to implement the ISA in Verilog,
or "in-general"). It was intended to be a small/simple ISA. Like SH, it
is a 32-bit ISA with fixed-width 16-bit instructions and 16 GPRs, and
was intended for smaller/cheaper FPGAs (ex: lower cost 9kLUT devices).
 
 
BJX2 was essentially a redesign of the BJX1 ISA based on BSR1. In many
regards it is a cleaner and simpler design, but expands to support
16/32/48 bit instruction-forms (the larger 48-bit forms being able to
support larger immediate values and displacements). Note that it is
otherwise a 64-bit RISC-style ISA with 32 GPRs.
 
 
I have also had a little more success getting a working Verilog
implementation, mostly due to design simplifications in many areas.
 
The latter two backends were mostly copy/paste/edit versions of the
original SH / BJX1 backend.
 
Note that it is a fairly minimal ISA if compared with something like
x86, as many things which are done natively in hardware in x86 need to
be faked in software in these ISAs (such as integer division). Granted,
it is intended to work on a "hobbyist grade" FPGA dev-board (currently
targeting an Arty S7 with a 50 kLUT Spartan-7).
 
 
There is not currently an x86/x86-64 compiler backend, mostly because I
typically use MSVC or GCC for these (and, for the most part, C is C).
Though, x86-64 support is still a possibility.
 
 
Canonically, currently used output formats are either raw ROM images, or
PE/COFF (the BJX2 Boot-ROM expects to load a PE/COFF image). For BJX2,
the PE/COFF format is based on the PE32+ format, but lacks anything in
the MZ stub.
 
For both boot and program launching, currently PE/COFF is the format
used, but program launching (unlike launch via the Boot-ROM) will
support base-relocatable binaries and DLLs.
 
 
Or such...
Juha Nieminen <nospam@thanks.invalid>: Jul 26 06:06AM

> would have expected that all the emacs afficionados would have
> automated refactoring implemented in elisp, but nope. They have
> essentially the same environment I had in 1988.
 
In my experience there is no perfect C++ source code editor. Every
single editor is lacking something.
 
Typically their auto-indentation feature sucks (except (mostly) in emacs),
or they lack code completion (most non-IDE editors, including emacs),
they lack or have lacking programmable keyboard shortcuts, and so on and
so forth.
Paavo Helde <myfirstname@osa.pri.ee>: Jul 26 12:10PM +0300

On 26.07.2018 9:06, Juha Nieminen wrote:
> single editor is lacking something.
 
> Typically their auto-indentation feature sucks (except (mostly) in emacs),
> or they lack code completion (most non-IDE editors, including emacs),
 
I have heard one can have code completion in emacs, but this probably
requires special setup and I do not use emacs often enough to bother
about that. When I do not recall the exact symbol name in Emacs I will
switch over to Visual Studio window and do a lookup or search there.
Ian Collins <ian-news@hotmail.com>: Jul 26 09:55PM +1200

On 26/07/18 18:06, Juha Nieminen wrote:
> or they lack code completion (most non-IDE editors, including emacs),
> they lack or have lacking programmable keyboard shortcuts, and so on and
> so forth.
 
Those features, or the lack of them, shape our choice of editors. I for one
don't like Visual Studio's indentation, but it has excellent code
completion which I don't really use. NetBeans has near-infinite code
layout configuration, but sluggish code completion.
 
--
Ian.
Paavo Helde <myfirstname@osa.pri.ee>: Jul 26 01:25PM +0300

On 26.07.2018 12:55, Ian Collins wrote:
> Those features or lack of then shape our choice of editors. I for one
> don't like Visual Studio's indentation, but it has excellent code
> completion which I don't really use.
 
Visual Studio's indentation is highly customizable (I count 16 related
options); I am just curious which style it does not support.
Ian Collins <ian-news@hotmail.com>: Jul 26 10:34PM +1200

On 26/07/18 22:25, Paavo Helde wrote:
>> completion which I don't really use.
 
> Visual Studio's indentation is highly customizable (I count 16 related
> options), I am just curious what style it does not support?
 
NetBeans has 81 just for C++!
 
The one that bugs me most (not to say it can't be fixed, I just haven't
found how) is brace placement on initialisers; it appears to confuse
them with inline function bodies.
 
--
Ian
bart4858@gmail.com: Jul 26 03:46AM -0700

On Thursday, 26 July 2018 06:17:11 UTC+1, BGB wrote:
> ~7 kLOC, misc stuff
 
> Compiler binary is currently approx 2.5 MB.
> Compiler can also be itself recompiled within a few seconds.
 
That sounds like quite a substantial product. My 'C' compiler is about 20Kloc (of original source, as it's not written in C; expressed in C it's only a bit longer).
 
That targets only x64, and outputs x64 asm source. (My own format and run through my own assembler, which assembles/links multiple .asm files to .exe at some 2-3Mlps. Perhaps about 9Kloc for that.)
 
Compilation speed on a PC, .c to .asm, is typically 0.6Mlps. (Plus .asm processing, but that stage could probably be incorporated into the main compiler without affecting that 0.6Mlps much.)
 
However, it only processes one C file per compiler invocation, so multi-module projects are slow. Also, it has so many other problems that I now just call it a C dialect compiler (suitable for the intermediate C I sometimes generate). While it can compile and run certain open-source programs (eg. Lua), making it work for arbitrary C code involves going down a rabbit-hole I'm no longer interested in exploring.
 
I'm concentrating at the minute on an alternative low-level language, visually different from C, with a module system and designed for whole-program compilation. This needs more passes to compile, and I'm only getting up to 0.4Mlps, but there are no C-like headers to repeatedly process so all of that throughput is usable.
 
Neither product optimises code, and the output looks pretty terrible when you examine it. But on my apps, the difference between this code and gcc-O3 is fairly narrow; gcc might be 20-50% faster.
 
(Which is still worthwhile enough that I am reinstating a C target for this compiler - which will always be a single whole-project C source file - just so I can pass it through gcc-O3 and step up those lps figures a couple of notches.
 
Not that it needs them on the PC: all my 20-25Kloc multi-module programs each build from source to .exe, including asm processing, in the time it takes me to press Enter. But it'll be useful on a slower machine.)
 
 
> PE/COFF (the BJX2 Boot-ROM expects to load a PE/COFF image). For BJX2,
> the PE/COFF format is based on the PE32+ format, but lacks anything in
> the MZ stub.
 
PE/COFF formats are horrible (trust MS to overcomplicate matters). I used to generate COFF for object files, but since I generate PE directly, I hardly ever bother. (My PEs don't support DLLs, so I might still need COFF files, which have to be passed through a conventional linker.) However, I find my PEs attract unwelcome attention from AV software.
 
Anyway, in response to the query, it is possible to streamline compilation of a C-like language so that it can be done quickly if you don't care about code-speed.
 
C itself doesn't make things easy with its macro language and its textual include files, which must be reprocessed not only in different TUs but within the same one, unless you use guards or things like #pragma once; all very messy.
 
But Tiny C shows it can be done.
 
As for C++ though, I can only say it's making a rod for its own back.
 
--
bart
boltar@cylonHQ.com: Jul 26 10:55AM

On Thu, 26 Jul 2018 22:34:50 +1200
 
>The one that bugs me most (not say it can't be fixed, just haven't found
>how) is brace placement on initialisers; it appears to confuse them with
>inline function bodies.
 
If someone is so lazy they need an editor to do something as basic and quick
as indentation then perhaps development isn't the right job for them.
Rosario19 <Ros@invalid.invalid>: Jul 26 01:40PM +0200

>>inline function bodies.
 
>If someone is so lazy they need an editor to do something as basic and quick
>as indentation then perhaps development isn't the right job for them.
 
Agreed. The appearance and indentation, on paper or on the screen, is
almost everything (quasi tutto); at least, that is what someone who
sees programming as an art has to think.
Juha Nieminen <nospam@thanks.invalid>: Jul 26 12:00PM

> Visual Studio's indentation is highly customizable (I count 16 related
> options), I am just curious what style it does not support?
 
I really like the indentation mechanism in emacs, where you simply press
tab (regardless of where the cursor is at the moment) and it will
auto-indent the current line based on the previous line(s). It will move
it to the left or the right so that it becomes indented according to the
indentation rules. Even if the indentation of a block of code is completely
messed up, you can quickly indent it by simply pressing tab on each line.
 
(Of course you can also select the block of code and run the indent-region
command on it, which will do the same even easier. Obviously if you find
yourself running this command a lot, you can bind it to a keyboard
shortcut. But when we are talking about a few lines of code, pressing tab
on each is also viable and handy.)
 
There may be other editors which have similar or even the exact same
functionality (at least as an option), but I have not encountered one.
(To be fair, though, it's not like I have tried an enormous amount of
them.)
 
The major drawback of emacs is that configuring the indentation is
complicated, as there are no easy menus with mouse-clickable options.
It also doesn't always get the indentation of certain types of code
as I like, and it's difficult to configure it to do so. (It's possible,
but usually requires you to understand elisp.)
Paavo Helde <myfirstname@osa.pri.ee>: Jul 26 04:02PM +0300

On 26.07.2018 15:00, Juha Nieminen wrote:
> it to the left or the right so that it becomes indented according to the
> indentation rules. Even if the indentation of a block of code is completely
> messed up, you can quickly indent it by simply pressing tab on each line.
 
Agreed.
 
> It also doesn't always get the indentation of certain types of code
> as I like, and it's difficult to configure it to do so. (It's possible,
> but usually requires you to understand elisp.)
 
Agreed. For my modest indentation preferences (TABs displayed as 4
spaces) I need to edit 2 configuration files and add some cryptic lines
(the simpler one also has a mouse-clickable option somewhere, to be honest).
Thiago Adams <thiago.adams@gmail.com>: Jul 26 06:10AM -0700


> But Tiny C shows it can be done.
 
> As for C++ though, I can only say it's making a rod for its own back.
 
> --
 
Bart, BGB and others.
I would like to have a group to talk about compilers
and language design and implementation. What do you think?
Why not create a google group for that?
Juha Nieminen <nospam@thanks.invalid>: Jul 26 05:58AM

> I don't want to be rude, but you are taking a lecturing tone
> with people whose understandings are deeper than your own.
 
"I don't want to be rude, but I'll nevertheless make a
condescending remark that questions your knowledge and experience."
 
Fuck off. You have no idea what kind of programming experience
and knowledge I have.
boltar@cylonHQ.com: Jul 26 08:31AM

On Thu, 26 Jul 2018 05:58:52 -0000 (UTC)
>> with people whose understandings are deeper than your own.
 
>"I don't want to be rude, but I'll nevertheless make a
>condescending remark that questions your knowledge and experience."
 
That's par for the course in this group.
David Brown <david.brown@hesbynett.no>: Jul 26 11:21AM +0200


>> "I don't want to be rude, but I'll nevertheless make a
>> condescending remark that questions your knowledge and experience."
 
> Thats par for the course in this group.
 
No, it is not. There are some people who are consistently rude, some
people who are occasionally rude, some people who are consistently rude
to particular other people, and some people who consistently inspire
rudeness in others. And sometimes rudeness is inferred when it was not
intended.
 
On the whole, however, people in this group (and its sister group,
c.l.c) converse amicably.
Juha Nieminen <nospam@thanks.invalid>: Jul 26 05:55AM

> least potentially) different than the static type. Passing an
> object by pointer/reference is necessary for dynamic dispatch
> of virtual functions actually to be dynamic rather than static.
 
In the example I gave there's a virtual function, and the base
class implementation calls either its own implementation or a
derived class implementation depending on the situation (ie.
depending on which kind of object it is). Yet no passing
by reference or by pointer was performed. No "throwing
away dynamic type information" happened.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jul 26 09:23AM +0200

On 26.07.2018 07:55, Juha Nieminen wrote:
> depending on which kind of object it is). Yet no passing
> by reference or by pointer was performed. No "throwing
> away dynamic type information" happened.
 
Tim's got a point because slicing, which can happen with pass by value,
throws away information about the original type.
 
Juha's got a point because the virtual function mechanism continues to
work in the copied objects.
 
I believe you both are aware of the other's technical possibility, but
maybe don't realize that that's what the other one is talking about.
 
 
Cheers!,
 
- Alf
Juha Nieminen <nospam@thanks.invalid>: Jul 26 06:13AM

> the "landing pads" and tables state->landing pads.
> Also it has to change some state (let's say change one pointer,
> I don't known) while the code is completing ctors.
 
I did write "shouldn't suffer a speed penalty".
 
> The "zero-overhead" is also something unrealistic in my option.
> Error propagation has a cost.
 
I said "when exceptions are not thrown".
 
Of course exception-throwing is a very heavy operation. The point is that
support for it doesn't slow down the code when exceptions are *not* thrown.
 
Support for throwing exceptions could easily slow down code even when
they aren't thrown, if exceptions are implemented naively in the
compiler.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
