Thursday, December 22, 2022

Digest for comp.lang.c++@googlegroups.com - 24 updates in 4 topics

Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Dec 22 11:41AM -0800

I don't know if I've re-invented the wheel here, but I can't remember having seen something like this in the C++ standard library, Boost, or wxWidgets.
 
Let's say we have a multi-threaded program: it has a main GUI thread and five worker threads, giving a total of six threads.
 
The program at all times has a status string, which is a global 'std::string' object. All six threads read and write the global status string.
 
I've tried to code a solution that greatly simplifies letting all six threads read and write the global object. Let's start off with a basic function that gives access to the global string:
 
string &GetStatusString(void)
{
    static string str;
    return str;
}
 
So then from another function we can call 'GetStatusString' as follows:
 
void Func(void)
{
    // This function accesses the global status string

    auto &str = GetStatusString();

    str += "monkey";
}
 
Of course this isn't thread-safe. I want to make it thread-safe, firstly by making the following two changes to 'Func':
 
void Func(void)
{
    // This function accesses the global status string

    auto str = GetStatusString();  // Change 1: no longer auto&

    str.obj += "monkey";           // Change 2: 'str.obj' instead of 'str'
}
 
Next I change the function 'GetStatusString' as follows:
 
ObjectReserver<string> GetStatusString(void)
{
    static string str;
    static recursive_mutex mtx;

    return ObjectReserver(mtx, str);
}
 
The last thing to do now is write the code for 'ObjectReserver'. Here's what I've got so far:
 
#include <mutex> // recursive_mutex, unique_lock

template<typename T>
class ObjectReserver final {

    std::unique_lock<std::recursive_mutex> m_lock;  // held for the reserver's lifetime

public:

    T &obj;  // the protected object, accessible while the lock is held

    ObjectReserver(std::recursive_mutex &argM, T &argO)
      : m_lock(argM), obj(argO) {}

    ObjectReserver(void) = delete;
    ObjectReserver(ObjectReserver const &) = delete;
    ObjectReserver(ObjectReserver &&) = delete;
    ObjectReserver &operator=(ObjectReserver const &) = delete;
    ObjectReserver &operator=(ObjectReserver &&) = delete;
    ObjectReserver *operator&(void) = delete;
    ObjectReserver const *operator&(void) const = delete;
};
 
I'm considering using this 'ObjectReserver' template class in a complex multithreaded program I'm currently writing. I thought I'd share it here first, though, just in case there's already something similar in Boost or whatever.
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Dec 22 11:51AM -0800

On Thursday, December 22, 2022 at 7:41:35 PM UTC, Frederick Virchanza Gotham wrote:
 
> template<typename T>
> class ObjectReserver final {
 
> std::unique_lock<std::recursive_mutex> m_lock;
 
I could have used "lock_guard" here; it's a more basic lock with all the functionality I need.
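For reference, a minimal sketch of what that variant could look like (this assumes C++17 or later, since guaranteed copy elision is what lets the factory function return an object holding a non-movable lock_guard by value):

#include <mutex> // recursive_mutex, lock_guard

template<typename T>
class ObjectReserver final {

    std::lock_guard<std::recursive_mutex> m_lock;  // locks in the ctor, unlocks in the dtor

public:

    T &obj;

    ObjectReserver(std::recursive_mutex &argM, T &argO)
      : m_lock(argM), obj(argO) {}

    // copying and moving are deleted just as in the unique_lock version
    ObjectReserver(ObjectReserver const &) = delete;
    ObjectReserver &operator=(ObjectReserver const &) = delete;
};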
scott@slp53.sl.home (Scott Lurndal): Dec 22 08:11PM


>I don't know if I've re-invented the wheel here but I can't remember having seen something like this in the C++ standard library nor Boost nor wxWidgets.
 
>Let's say we have a multi-threaded program, it has a main GUI thread and five worker threads, giving a total of six threads.
 
>The program at all times has a status string, which is a global 'std::string' object. All six threads read and write the global status string.
 
Rather than contending for single-string updates, which can never scale,
keep a string private to each thread, where each thread updates its
string as required without locking; when you need the full status,
append the six strings into a single string. All you need to synchronize
is fetching the string from each thread, and if you do it right you'll
likely not even need to synchronize access from the main thread to
the strings (e.g. each thread has a pointer to its current status string,
and when the worker thread updates the status it uses an atomic exchange
to replace the pointer to the former status string with a pointer to the
new status - when the main thread fetches the status it will get either
the old or the new depending on how it races with the update). The
main thread can append the individual strings (or use/display each
status independently).
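A rough sketch of that scheme (the names here are illustrative, and it leans on C++20's std::atomic<std::shared_ptr> so the main thread can keep reading an old status string while a worker swaps in a new one - the description above leaves that reclamation detail open):

#include <atomic>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// One slot per worker; each slot is written only by its own thread.
struct StatusSlot {
    std::atomic<std::shared_ptr<const std::string>> current{
        std::make_shared<const std::string>()};
};

std::vector<StatusSlot> g_slots(5);          // five worker threads

void PublishStatus(std::size_t worker, std::string text)
{
    // Build the new status privately, then swap the pointer in one atomic step.
    g_slots[worker].current.store(
        std::make_shared<const std::string>(std::move(text)));
}

std::string CollectStatus()                  // called from the main/GUI thread
{
    std::string all;
    for (auto &slot : g_slots)
        all += *slot.current.load() + '\n';  // sees either the old or the new string
    return all;
}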
 
The recursive mutex solution seems fragile and not scalable to large
numbers of worker threads.
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Dec 22 12:25PM -0800

On Thursday, December 22, 2022 at 8:11:42 PM UTC, Scott Lurndal wrote:
 
> status independently).
 
> The recursive mutex solution seems fragile and not scalable to large
> numbers of worker threads.
 
 
I wanted to keep my original post simple and so that's why I used 'std::string' as an example.
 
Really though, I'm coding a man-in-the-middle program that receives, modifies and forwards RS232 traffic. More than one thread has to access the various data structures that get created and edited in response to the receipt of RS232 traffic.
 
My program is already coded, and it actually works properly, but there are data races all over the camp. I coded it quickly to get it working to help me debug a microcontroller, but now I want to clean it up and so I think I'll use ObjectReserver for this.
David Brown <david.brown@hesbynett.no>: Dec 22 09:41PM +0100

On 22/12/2022 20:41, Frederick Virchanza Gotham wrote:
 
> The program at all times has a status string, which is a global
> 'std::string' object. All six threads read and write the global
> status string.
 
I'd say that's your problem right there.
 
Only allow /one/ thing to change a given variable. Problem solved.
(Well, /almost/ solved - you might still need something to ensure
changes are seen atomically by readers, depending on how your strings
work and get updated.)
 
 
I try to think of programming like electronics. Devices have inputs and
outputs. It's okay to drive multiple inputs from one output, but if you
try to drive one line from multiple outputs, things are going to go
wrong (such as letting out the magic grey smoke that runs all electronics).
 
When you have six devices that all have an error indicator output, you
don't just join them together on one red LED. You make six error LEDs,
one for each device. Or you use a multiplexer, such as an OR gate, to
combine the signals safely.
 
Do the same in your programming.
 
 
So you have six separate status strings, and a "multiplexer" that
combines them in some way. Maybe it serialises them to a log file, or
joins them together for display, or prioritises them to show error
messages at higher priority than "okay" messages. Maybe this is done by
a single "update" function (which may need locking if called from
different threads), or it is in a separate thread connected by queues to
the original threads, or it is in a timer function called periodically
in the GUI thread. There are many options.
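Purely as an illustration of one such "multiplexer" (the names and the priority rule below are invented for the example, and synchronising the reads is still the reader's problem, as noted above):

#include <array>
#include <string>

// One status per device/thread; each entry is written only by its owner.
struct Status {
    int severity = 0;        // 0 = okay, higher = worse
    std::string text;
};

std::array<Status, 6> g_status;

// The "multiplexer": pick the most severe message for display.
std::string CombineStatus()
{
    const Status *worst = &g_status[0];
    for (const Status &s : g_status)
        if (s.severity > worst->severity)
            worst = &s;
    return worst->text;
}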
 
But the worst idea is to have one thread set the status to "There's a
meltdown on its way" only to have it immediately overwritten by another
thread saying "Everything's hunky-dory over here".
Paavo Helde <eesnimi@osa.pri.ee>: Dec 22 11:01PM +0200

22.12.2022 21:41 Frederick Virchanza Gotham wrote:
 
> I don't know if I've re-invented the wheel here but I can't remember having seen something like this in the C++ standard library nor Boost nor wxWidgets.
 
> Let's say we have a multi-threaded program, it has a main GUI thread and five worker threads, giving a total of six threads.
 
> The program at all times has a status string, which is a global 'std::string' object. All six threads read and write the global status string.
 
[snipped implementation with a proxy containing a mutex lock]
 
I have done such things in the past, but in retrospect this was not the
best idea, mainly because it hides the thread-locking step and makes the
code harder to follow and verify for correctness.
 
Nowadays I would just make a dedicated member function of the
StatusManager class which would just append to the string under a mutex
lock. If it appears this is becoming a bottleneck, one can redesign the
member function to e.g. move the appended string pieces to some kind of
fast inter-thread queue. With a locked proxy like in your design it
would be harder to rewrite the functionality.
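A minimal sketch of that shape (the class and member names are just illustrative):

#include <mutex>
#include <string>

class StatusManager {
    std::string m_status;
    std::mutex m_mutex;

public:
    void Append(const std::string &piece)
    {
        std::lock_guard<std::mutex> lock(m_mutex);  // the locking step is visible here
        m_status += piece;
    }

    std::string Get()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_status;                            // hand out a copy; no lock escapes
    }
};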
Paavo Helde <eesnimi@osa.pri.ee>: Dec 22 11:05PM +0200

22.12.2022 22:41 David Brown wrote:
 
> But the worst idea is to have one thread set the status to "There's a
> meltdown on it's way" only to have it immediately overwritten by another
> thread saying "Everything's hunky-dory over here".
 
If you look at his code, he is appending to the status string, not
overwriting it. Of course, for a status string this is probably not the
best approach either, but I gather this was just meant as an
illustrative example.
David Brown <david.brown@hesbynett.no>: Dec 23 12:11AM +0100

On 22/12/2022 22:05, Paavo Helde wrote:
> overwriting it. Of course, for a status string this is probably not the
> best approach either, but I gather this was just meant as an
> illustrative example.
 
For my comments, it doesn't matter much how you change the common
variable - the fact that you are changing it from many places is the
core problem.
Juha Nieminen <nospam@thanks.invalid>: Dec 22 12:20PM


> This is obviously impenetrable, but I'd say that's because of every
> single word /other/ than the auto -- it's possible that the auto is the
> /only/ sensible part of such a line!
 
The problem is that even when both the variable and the function are named
in a descriptive manner, the names only extremely rarely determine exactly
what the return value type of the function is. As in, the actual basic type,
type alias, struct or class (or even lambda), by name.
 
If you are just trying to get a cursory picture of what the code is doing,
as in, like, trying to understand at a higher level what the action is
that it's performing, or what algorithm it's implementing, then perhaps
knowing the actual return type, by name, isn't often extraordinarily
important.
 
However, when you *do* need to know exactly what the concrete type is,
by name, not just what "kind of type" it is (in a more abstract generic
manner), then it becomes an obstacle because the name has been hidden.
 
Quite often the exact type doesn't become clear even in the subsequent
code that follows the above. The subsequent code might, for example,
call some member function of that object... but that doesn't really
say much. Perhaps, if well named, it gives you a generic idea of what
is being done, but it doesn't tell you what the actual type is, or
where to find it.
 
It's just yet another obstacle in trying to understand the code in
detail. And yes, I have a lot of experience in trying to read and
understand that kind of code, and becoming frustrated when such types
are being hidden behind 'auto'. In many of these situations it's not
enough for me to get a cursory idea of the abstract behavior of that
type. I need to know the exact type, by name.
 
> wrong so the real question -- the only interesting question -- is what,
> if any, uses would not count as excessive, and what, if any, uses might
> actually help?
 
I don't understand that question at all.
 
"Excessive use of X is bad" does not mean "every single case of excessive
use of something is bad". Why are you clinging to that word, "excessive"?
Juha Nieminen <nospam@thanks.invalid>: Dec 22 12:32PM

> unsigned char *ap;
 
> ...
 
> ap = u_write_buffer = (unsigned char *)malloc(bufsize);
 
I think this is quite a good example of where using acronyms and
abbreviations is detrimental to readability and understandability.
 
It's not clear at all what "d_mp" even stands for, much less what
it means. "get_ioaddr" is relatively clear, but I don't really see
a reason to abbreviate it (but it's by far not the worst offender
in this code). Same could be said of "bufaddr" and "bufsize".
 
(If "bufaddr" and "bufsize" are not just completely generic
variables/constants to be used in any sort of buffer allocation
and management, then they could be named generically like that
(although I would still prefer if they were not abbreviated).
However, if their intent is to determine the characteristics
of a *particular type* of buffer, or even a *particular buffer*,
it would be clearer if they said that, so that they couldn't be
confused with some other buffers that might be used in the program.)
 
The name "bp" is unnecessarily obscure. Is it "buffer pointer" perhaps?
I see no reason why it couldn't just say that outright, if that's what
it means. Why have the reader guess? There's no reason.
 
"remaining" is a full English word, yay! But yet, it doesn't really
say remaining what, exactly. IMO it could say what it's counting,
not just that it's counting a remaining amount of *something*.
 
It's not clear at all what "ap" means. Something pointer, I suppose,
but what? Why couldn't it just say it? It would be so much easier to
understand, without having to guess.
 
> anyone
> who doesn't know that shouldn't be mucking about in the
> code in the first place.
 
Is this some kind of elitism in programming?
scott@slp53.sl.home (Scott Lurndal): Dec 22 04:20PM

>> who doesn't know that shouldn't be mucking about in the
>> code in the first place.
 
>Is this some kind of elitism in programming?
 
No, it means that one should know the problem space before
modifying a program in that space.
 
This program is a full-system emulator for a mainframe
computer system, which has a memory subsystem (mp),
so the 'mp' abbreviation, like the 'rd' (result descriptor)
abbreviation, is understandable to someone familiar
with the problem space.
 
I'm not going to type memory_pointer instead
of mp. (The d_ prefix has meaning[*] within the context of the
application; granted, it's not clear from a 15-line snippet.)
 
[*] in this case, it visually identifies the identifier as
a class member for the base (c_dlp) class.
 
 
(And someone familiar with the problem space would instantly
recognize DLP as an abbreviation for Data Link Processor,
which is a term of art in that problem space, along with
IOCB for I/O control block.)
 
 
/**
 * Base class defining a Data Link Processor (DLP).
 */
class c_dlp: public c_thread {
    ...
    c_logger *d_logger;
    c_memory *d_mp;
    c_processor *d_processor;
    pthread_cond_t d_wait;
    c_dlist d_iocbs; // IOCB's pending execution

    bool d_busy; // pre-Revision B busy flag
    volatile bool d_exit; // Thread exit flag
    volatile bool d_exited; // Thread exit done flag

    ulong d_testid; // DLP Type code
    ulong d_channel; // channel for this DLP instance
 
 
...
 
/**
* Uniline DLP.
*
* A Uniline DLP is a three-card DLP containing a microprocessor and
* a universal synchronous/asynchronous receiver/transmitter. Firmware
* is downloaded to the DLP to provide line discipline code for managing
* point-to-point and multidrop RS-232C and Burroughs Two-Wire Direct (TDI)
* Interface block-mode display devices.
*
* Two firmware versions are available for the Uniline dlp:
* USP3BV - Operator Control Station (OCS) firmware. Supports
* a single contention mode station.
* UST3BH - Supports a multidrop configuration of one or more
* stations using the Burroughs Poll-Select
* protocol.
*
* When OCS firmware (USP3BV) is loaded, the dlp mode is set to
* U_OCS. A subsequent 'control cc/u netport' command should
* be issued to establish a network listener for the operator control
* station.
*
* When STC (Standard Terminal Control) firmware (UST3BH) is loaded,
* the dlp mode will be set to U_STC and a network listener will provide
* access to a remote terminal or remote job entry client.
*/
class c_uniline_dlp : public c_dlp,
c_timer_callable,
c_port_transport,
c_port_listener {
Juha Nieminen <nospam@thanks.invalid>: Dec 22 05:04PM

> I'm not going to type memory_pointer instead of mp
 
The question is: Why not?
 
I'm asking seriously. This is not just arguing for the sake of
arguing. What exactly is the problem in writing the full version
instead of the acronym?
 
I don't think lines of code become "too long" because of that
(and if they do, perhaps you should consider if you are
cramming too much into one single line of code...)
 
It does not decrease readability. (Seriously, it does not,
completely objectively speaking. I know certain individual(s)
here claim otherwise, but that's just not an objective fact.)
 
So why not?
 
Writing it out helps the person reading the code remind himself
what it means, and makes it much clearer in every instance
where it's used.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Dec 22 08:43PM

> in a descriptive manner, neither extremely rarely determine exactly what
> the return value type of the function is. As in, the actual basic type,
> type alias, struct or class (or even lambda), by name.
 
Yes, of course. In order to move the discussion along, you need to
assume that the people you are talking to are not fools. I know that
using proper names won't determine the type.
 
 
> However, when you *do* need to know exactly what the concrete type is,
> by name, not just what "kind of type" it is (in a more abstract generic
> manner), then it becomes an obstacle because the name has been hidden.
 
That's begging the question. Please assume that everyone here will
agree that "when you *do* need to know exactly what the concrete type
is" then auto might be a hindrance. The question for you is: are there
any cases when seeing the exact type is a hindrance to reading the
code? I believe there are, and that raises the question of how we decide
when using auto is appropriate.
 
> are being hidden behind 'auto'. In many of these situations it's not
> enough for me to get a cursory idea of the abstract behavior of that
> type. I need to know the exact type, by name.
 
What do you mean "by name"? Many types are not explicitly named. (In
C, a "type name" includes things like "const int (*)[4]" so this is a
genuine question.)
 
Are there any times when you think that seeing the exact type hinders
readability? For example, a while ago I was processing a collection
of sets of numbers indexed by strings (not in C++ as it happens). In
C++ processing a
 
std::map<std::string, std::vector<double>> string_frequencies;
 
with a loop should, to my mind, be done using something like
 
for (const auto &pair : string_frequencies) ...
 
I don't think the reader is helped by having the full type:
 
for (const std::pair<const std::string, std::vector<double>> &pair : string_frequencies) ...
 
 
> I don't understand that question at all.
 
> "Excessive use of X is bad" does not mean "every single case of excessive
> use of something is bad".
 
Yes of course. In fact it means almost nothing at all. It's not
helpful to say that excessive use of X is bad because that's what
excessive means -- too much to be good.
 
> Why are you clinging to that word, "excessive"?
 
Clinging? I'm not clinging to it. I commented on /your/ use of the
word. It makes your remark unhelpful. The whole point of my comment
was to suggest that /you/ should not use the term as it begs the
question: what counts as excessive use?
 
--
Ben.
Joseph Hesse <joeh@gmail.com>: Dec 22 12:05AM -0600

On 12/21/22 12:32, Keith Thompson wrote:
 
> In `auto fp = [] (int x[])`, x is an array parameter, which is treated
> as a pointer parameter. That equivalence applies only to function
> parameters. In your example, you just have an array object.
 
In the above program the x in the for loop is treated as
an int *. The fact that the program works means that the
for loop knows how far to increment the pointer to calculate
the sum. This is what surprises me; I thought the only information
a built-in array type contains is a pointer to the first element.
 
Thank you,
Joe
"Öö Tiib" <ootiib@hot.ee>: Dec 22 12:00AM -0800

On Thursday, 22 December 2022 at 08:05:59 UTC+2, Joseph Hesse wrote:
> for loop knows how far to increment the pointer to calculate
> the sum. This is what surprises me, I thought the only information
> a built in array type contains is a pointer to the first element.
 
That is not true. The x in that program is an int array of 4 elements,
not a pointer. The array decays to a pointer in a lot of contexts, but
range-based for is not one of those. It treats x as an array.
<https://en.cppreference.com/w/cpp/language/range-for>
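For what it's worth, a hypothetical reconstruction of the kind of program under discussion (the original isn't quoted in full here):

#include <iostream>

int main()
{
    int x[4] = {1, 2, 3, 4};   // x is an int[4], not an int*

    int sum = 0;
    for (int e : x)            // range-based for uses the array type directly;
        sum += e;              // it would not even compile if x were a plain pointer

    std::cout << "sum = " << sum << '\n';
}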
Ben Bacarisse <ben.usenet@bsb.me.uk>: Dec 22 12:42PM

> not pointer. The array decays to pointer in lot of contexts but
> range based for is not one of those. It treats x as an array.
> <https://en.cppreference.com/w/cpp/language/range-for>
 
You are right (of course!) but it's by no means trivial to follow the
details. The cppreference page gives an example, but its explanation
is very hard to plough through for someone learning C++. You'd have to
understand how
 
auto&& __range = x;
 
works and what __range and __range + 4 mean following a declaration like
that.
 
Clearly (at least I think so) __range will be an rvalue reference to an
array of 4 ints, but when I try to ditch the auto with
 
int (&&__range)[4] = x;
 
g++ complains that it "cannot bind rvalue reference of type 'int
(&&)[4]' to lvalue of type 'int [4]'". What is the type that is being
deduced for __range in
 
auto&& __range = x;
 
? (Using an lvalue reference works but that's not what the standard says
is going on with this range-based for statement.)
 
--
Ben.
Bonita Montero <Bonita.Montero@gmail.com>: Dec 22 07:55PM +0100

On 20.12.2022 at 18:00 Joseph Hesse wrote:
 
>   cout << "sum = " << fp(v) << '\n';
 
cout << "sum = " << accumulate( v.cbegin(), v.cend() ) << endl;
"Öö Tiib" <ootiib@hot.ee>: Dec 21 06:03PM -0800

On Thursday, 22 December 2022 at 00:18:46 UTC+2, Lynn McGuire wrote:
> https://www.educba.com/3d-arrays-in-c-plus-plus/
> and
 
> https://stackoverflow.com/questions/8767166/passing-a-2d-array-to-a-c-function
 
If xyz is a raw fixed-dimension array then passing it by reference is best,
as it does not lose any dimension information.
void amethod(double (&xyz)[5][2][14], ...)
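A minimal sketch of that form (the function name is just a stand-in):

void amethod(double (&xyz)[5][2][14])
{
    // All three extents are part of the parameter's type.
    static_assert(sizeof(xyz) == 5 * 2 * 14 * sizeof(double), "no decay here");
    xyz[4][1][13] = 42.0;
}

int main()
{
    double xyz[5][2][14] = {};
    amethod(xyz);              // passed by reference; the full array type is kept
}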
 
About "multiple dimension variable" it depends on properties and usage.
What dimensions you want to be dynamic if any and with what algorithms
you want to process it. With little object of 140 doubles the usage does not
matter but if it is far bigger than 500 doubles then cache-friendliness of data
layout for processing can show noticeable differences.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Dec 22 03:02AM

>>     double xyz [5] [2] [14];
>>     ...
>>     amethod (xyz, ...);
 
Best is always going to depend on lots of as-yet-unspecified factors.
Are the sizes (other than the "top-level" one) always known at compile
time? Are they always the same?
 
> I am looking at
> https://www.educba.com/3d-arrays-in-c-plus-plus/
 
That hardly scratches the surface!
 
> and
 
> https://stackoverflow.com/questions/8767166/passing-a-2d-array-to-a-c-function
 
C has variably modified types which means you can pass run-time array
sizes to a C function. C++ does not have this feature, but then that
page includes some C++ and does not use variably modified types (as far
as I could see).
 
You probably need to say more about your constraints to get better advice.
 
--
Ben.
Paavo Helde <eesnimi@osa.pri.ee>: Dec 22 09:07AM +0200

21.12.2022 22:37 Lynn McGuire wrote:
 
>     double xyz [5] [2] [14];
>     ...
>     amethod (xyz, ...);
 
This C approach only really works as long as the dimensions are known at
compile time and the arrays are small (larger arrays won't fit on the stack).
 
In C++, I would suggest a custom class containing a plain std::vector for
data storage plus separate dimension info, providing some kind of 3D
interface on top of that.
 
Details depend on whether you prefer simple nice interfaces over speed,
what are your typical dimensions, and what are your typical operations.
 
If you are interested in nice interfaces, one can provide operator[]
overloads with proxies, so that you can access elements of your array via
arr[x][y][z], or via the technically simpler arr.Elem(x, y, z).
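A rough sketch of such a class (the names are illustrative, not taken from any particular library), using the technically simpler Elem(x, y, z) access:

#include <cstddef>
#include <vector>

class Array3D {
    std::size_t m_nx, m_ny, m_nz;
    std::vector<double> m_data;              // one contiguous block
public:
    Array3D(std::size_t nx, std::size_t ny, std::size_t nz)
        : m_nx(nx), m_ny(ny), m_nz(nz), m_data(nx * ny * nz) {}

    std::size_t SizeX() const { return m_nx; }
    std::size_t SizeY() const { return m_ny; }
    std::size_t SizeZ() const { return m_nz; }

    double &Elem(std::size_t x, std::size_t y, std::size_t z)
    {
        return m_data[(x * m_ny + y) * m_nz + z];   // row-major layout
    }

    const double *Data() const { return m_data.data(); }  // for tight inner loops
};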
 
If you are interested in performance, you need to avoid arithmetic
or multiple indirections when accessing single elements, and you need
to take care about memory locality. In general this means that the 3D
abstraction becomes leaky; the algorithm working on the data would need
to take out pointers to the whole linear array, or to the most tightly
packed linear stretches, and iterate over them itself. A single-element
access function like arr[x][y][z] or arr.Elem(x, y, z) would be pretty
useless.
 
No doubt there are already a myriad of multidimensional array libraries
out there, but as the right solution depends on your needs, data, and
usage, finding a suitable library might be more complicated than writing
your own class suited to your needs.
Lynn McGuire <lynnmcguire5@gmail.com>: Dec 22 02:01AM -0600

On 12/21/2022 9:02 PM, Ben Bacarisse wrote:
> page includes some C++ and does not use variably modified types (as far
> as I could see).
 
> You probably need to say more about your constraints to get better advice.
 
I don't know if the array sizes are always known at compile time. I
suspect that they are but I am far from sure.
 
I am moving a 750,000 line F77 / C calculation engine to C++ using a
modified version of F2C. F2C converts all multiple dimension arrays to
single dimension arrays. Sometimes I convert them back. Still a few
unknowns to work out.
 
Thanks,
Lynn
"Öö Tiib" <ootiib@hot.ee>: Dec 22 12:25AM -0800

On Thursday, 22 December 2022 at 10:02:02 UTC+2, Lynn McGuire wrote:
> modified version of F2C. F2C converts all multiple dimension arrays to
> single dimension arrays. Sometimes I convert them back. Still a few
> unknowns to work out.
 
In cases like this, where we are transpiling and rearranging the handling of
arrays, it is worth writing some tests to verify the correctness of the result
and also profiling its efficiency.
 
The issue is that Fortran always keeps arrays in column-major order while C
always keeps them in row-major order. Code in either language can be optimised
to take that into account.
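Purely as an illustration, the same flat buffer indexed both ways (ni and nj are assumed extents):

#include <cstddef>

// Fortran-style (column-major): the leftmost index varies fastest.
inline std::size_t idx_colmajor(std::size_t i, std::size_t j, std::size_t ni)
{
    return i + j * ni;
}

// C-style (row-major): the rightmost index varies fastest.
inline std::size_t idx_rowmajor(std::size_t i, std::size_t j, std::size_t nj)
{
    return i * nj + j;
}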
 
In C++ it is possible to design classes that are agnostic about that ordering.
We can see that in several linear algebra libraries: a matrix chooses its
ordering based on what was most efficient for the algorithm that produced it,
algorithms are then chosen based on which ordering is most efficient to process,
and all of that is hidden behind the scenes so the user should not care.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Dec 22 12:13PM

> modified version of F2C. F2C converts all multiple dimension arrays
> to single dimension arrays. Sometimes I convert them back. Still a
> few unknowns to work out.
 
You may be better off just passing a pointer to the correct amount of space
and doing the indexing "by hand".
 
Do you ever need slices like column or row arrays? That might be
another reason to do the index arithmetic yourself.
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Dec 22 03:03PM +0100

On 22/12/2022 03:03, Öö Tiib wrote:
 
> If the xyz is raw fixed dimension array then passing by reference is best as it
> is not losing any dimension information.
> void amethod(double (&xyz)[5][2][14], ...)
 
I would suggest that any time you have fixed size arrays, you are better
off using std::array<>. Then the size is clearly fixed in the type, and
there are no circumstances in which it "disappears" or the array decays
into a pointer. (As Ben pointed out in another thread, it's not always
easy to tell when this happens.) And you have container-style methods
for the array if you want them, while all the time keeping the
efficiency of a plain C array.
 
It can be a little ugly to use std::arrays of std::arrays, so a
wrapping class might help. Or a template using-alias:
 
#include <array>   // std::array
#include <cstddef> // std::size_t

template <class T, std::size_t x, std::size_t y, std::size_t z>
using array3d = std::array<std::array<std::array<T, z>, y>, x>;
 
array3d<double, 5, 2, 14> xyz;
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
