Thursday, July 25, 2019

Digest for comp.lang.c++@googlegroups.com - 16 updates in 2 topics

woodbrian77@gmail.com: Jul 24 07:10PM -0700

B"H
 
So far the only thing I've found in C++20 that seems
helpful to me is std::span. I posted a question about coroutines:
https://www.reddit.com/r/cpp_questions/comments/c6mbls/coroutines/
 
and that didn't seem like a fruitful avenue. What am I
missing? Thanks in advance.
 
 
Brian
Ebenezer Enterprises - Enjoying programming again.
https://github.com/Ebenezer-group/onwards
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jul 25 04:45AM +0200

> https://www.reddit.com/r/cpp_questions/comments/c6mbls/coroutines/
 
> and that didn't seem like a fruitful avenue. What am I
> missing? Thanks in advance.
 
There's the math constants, finally an official standard library pi! :)
 
And there's the format library, <url:
http://www.zverovich.net/2019/07/23/std-format-cpp20.html>.
 
Designated initializers for structs.
 
The spaceship operator (more full-fledged support for functionality like
`memcmp` and `std::basic_string::compare`).
 
Calendar functionality in <chrono>.
 
Ranges (but while most everybody's happy with that I think that library
gives verbose and impenetrable source code and adds inefficiency, and
constitutes yet another large sub-language - I much prefer my own simple
`up_to(n)` to the Ranges library's x times renamed unnatural something).
 
Coroutines.
 
Modules!
 
But while I very much want modules I'm not so happy to get them in
C++20. I think the modules thing is very much premature. Until they've
got Boost expressed as modules there's no relevant real-world
experience, and the fact that they haven't indicates it's not good enough.
 
Cheers!,
 
- Alf
David Brown <david.brown@hesbynett.no>: Jul 25 10:05AM +0200

> https://www.reddit.com/r/cpp_questions/comments/c6mbls/coroutines/
 
> and that didn't seem like a fruitful avenue. What am I
> missing? Thanks in advance.
 
I think you are missing an understanding of how coroutines work, and
what they can do for your code structure.
 
One rough analogy is that coroutines are to threads what cooperative
multitasking is to pre-emptive multitasking. Clearly, pre-emptive
multitasking and threads have their advantages for many uses - they can
be prioritised automatically, they can run in parallel on different
cores, and they stop bad processes from blocking the system. But
cooperative multitasking has its advantages too. Coroutines are much
easier for synchronisation and access to shared data - /you/ decide
where tasks block and other tasks can take over, eliminating all sorts
of locking and atomicity issues. They are lightweight, flexible, easy
to understand, and easy (or easier) to debug.
 
In comparison to asynchronous processes with callbacks, they are easier
to write and clearer to read, while maintaining the lightweight
features. A clear indication is what you see when you write the code.
With asynchronous processes with callbacks, you often see a code
structure of a function call with a lambda, which contains a function
call with a lambda, and so on - a new level of indentation for every
step. Your code is one statement - a single function call - that spans
perhaps dozens of lines of highly indented source code. The equivalent
written using coroutines will look like normal, sequential code, and can
contain whatever mix of statements, declarations, comments, etc., that
you like.
 
So I am looking forward to coroutines. I think it will take a while for
support to mature and be efficient in compilers, but it will give us a
new way to structure code.
 
 
 
 
Other than that, in C++20 I look forward to concepts and modules. The
spaceship operator will simplify some kinds of classes. Then there are
lots of small but nice features - more constexpr, consteval, etc.
Ranges are another big feature - I don't know quite how well that will
work out, but it is certainly interesting.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jul 25 10:14AM +0200

On 25.07.2019 10:05, David Brown wrote:
> [snip]
> Other than that, in C++20 I look forward to concepts and modules.
 
No concepts yet, sorry.
 
> lots of small but nice features - more constexpr, consteval, etc.
> Ranges are another big feature - I don't know quite how well that will
> work out, but it is certainly interesting.
 
Uhm, details and over-engineering, IMO. But I think the format library
will contribute to making C++ programming easier. And the calendar stuff,
and a specification of what the epoch is, have been sorely missing.
 
On the strongly negative side, it looks as though the change of return
type for `std::filesystem::path::u8string` is there for keeps. That
means correct code for Windows needs to be modified to still compile
with C++20, and it means that on Windows one can't obtain a UTF-8-encoded
std::string from a path without an extra, inefficient round of copying.
 
They're not stupid.
 
From Reddit discussion with one of them, judging by the very
intelligent side-steps, generalizations, attempts to mislead etc., I'm
fairly certain that they do understand what they do, that it's politics
again.
 
 
Cheers!,
 
- Alf
Bonita Montero <Bonita.Montero@gmail.com>: Jul 25 10:25AM +0200

> There's the math constants, finally an official standard library pi! :)
 
Hopefully it will be declared as a binary / hex floating point value
and not decimal in the headers.
Juha Nieminen <nospam@thanks.invalid>: Jul 25 08:40AM

> Designated initializers for structs.
 
It's very disappointing to me that in C++20 designated initializers
need to be listed in the same order as they appear in the struct
declaration.
 
Having programmed quite a lot in (Apple's) Objective-C++, I just love
how it beautifully and superbly mixes C99 designated initializers with
C++ struct member default values. In other words, things like this are
possible:
 
struct S { int a = 1, b = 2, c = 3; };
 
const S s = { .c = 10, .a = 20 };
 
After this, s.a == 20, s.b == 2, and s.c == 10.
 
I have found this feature extremely useful because it allows creating
eg. (const-type) "settings" listings that are easy to read and modify,
of the type like:
 
const Settings kSettings =
{
    .amountOfThings = 20,
    .anotherAmount = 35,
    .aListOfValues = { 2, 4, 6, 10 },
    .someName = "hello there",
    .aBlockOfSubValues =
    {
        .firstSubValue = 5,
        .secondSubValue = 10
    }
};
 
Not all members need to be specifically included in this initialization,
as they can have default values, and the order in which the elements
are listed is free.
 
The above is impossible both in C and C++. It's only possible in (Apple's)
Objective-C++. With C++20 it will become possible... except for the
"wrong" order of initialization, as the above will be required to instead
be:
 
const S s = { .a = 20, .c = 10 };
 
This means that you can't list the elements in any order you want. This
can be a bit of a hindrance. The members of the struct itself might be
in a less-than-logical order for example for optimization reasons (the
size of a struct can be affected by the order in which the member
variables are listed), but in an initialization like above they can be
listed in a more logical order, grouping related settings together.
 
I understand why C++20 will require the initializers to be listed in the
same order as the declaration, but I still don't like it.
David Brown <david.brown@hesbynett.no>: Jul 25 10:48AM +0200

On 25/07/2019 10:14, Alf P. Steinbach wrote:
>> [snip]
>> Other than that, in C++20 I look forward to concepts and modules.
 
> No concepts yet, sorry.
 
Did they not make it to C++20 in the end? Why not? They are a /really/
nice idea, IMHO.
 
>> Ranges are another big feature - I don't know quite how well that will
>> work out, but it is certainly interesting.
 
> Uhm, details and over-engineering, IMO.
 
For my programming, constexpr and now consteval are definitely good. It
is not uncommon for me to need tables of various sorts, and filling them
at compile time in C++ is a good deal neater than using external code
generators as part of the build process.
 
Or did you mean that Ranges are over-engineering? I haven't looked at
them enough to judge.
 
> But I think the format library
> will contribute to making C++ programming easier.
 
Yes, that looks nice. It combines the advantages of printf with the
advantages of a C++ type-safe solution.
 
> And calendar stuff and
> specification of what the epoch is, has been sorely missing.
 
If you say so - I haven't needed that.
 
> means correct code for Windows needs to be modified to still compile
> with C++20, and it means in Windows one can't obtain an UTF8-encoded
> std::string from a path without an extra round of copying, inefficiency.
 
These are not features I have looked at. But when I see things that
were introduced in C++17 and changed in C++20, it looks bad.
 
 
> From Reddit discussion with one of them, judging by the very intelligent
> side-steps, generalizations, attempts to mislead etc., I'm fairly
> certain that they do understand what they do, that it's politics again.
 
Strings and encodings are always a bit of a mess. I doubt that there is
any way to get a result that everyone would be happy with, especially
with so much legacy in UTF-16 (Windows, NTFS, Qt, etc.).
Bo Persson <bo@bo-persson.se>: Jul 25 11:22AM +0200

On 2019-07-25 at 10:48, David Brown wrote:
 
>> No concepts yet, sorry.
 
> Did they not make it to C++20 in the end? Why not? They are a /really/
> nice idea, IMHO.
 
They did make it, actually. It's contracts that have been delayed.
 
 
Kind of confusing that everything seems to start with "co".
 
 
Bo Persson
David Brown <david.brown@hesbynett.no>: Jul 25 12:28PM +0200

On 25/07/2019 11:22, Bo Persson wrote:
 
>> Did they not make it to C++20 in the end?  Why not?  They are a /really/
>> nice idea, IMHO.
 
> They did make it, actually. It's contracts that has been delayed.
 
Oh, good. I had thought they were included, but assumed Alf had read
more up-to-date details on C++20 than I had.
 
Contracts will also be nice, but I don't think they have had much of a
trial period yet. (Concepts have been in gcc since version 7.)
 
 
> Kind of confusing that everything seems to start with "co".
 
Actually, what is confusing is that we are running out of "metanames".
Any word for describing groups or features seems to turn into a term in
C++. "C++20 introduces a range of new concepts...". I expect that for
C++23 there will be a new feature called "features" :-)
lightness1024@gmail.com: Jul 25 03:57AM -0700

Le jeudi 25 juillet 2019 19:28:23 UTC+9, David Brown a écrit :
> Any word for describing groups or features seems to turn into a term in
> C++. "C++20 introduces a range of new concepts...". I expect that for
> C++23 there will be a new feature called "features" :-)
 
Ranges and concepts are the most massive new features C++ has ever got from a standard.
Think about it: the biggest revision so far was C++11.
What paradigm shift did we get from it?
-> lambdas
Thanks to that, we are now halfway to (possibly) functional programming.
 
With C++20, the loop is closed, because we now have as much generic type expressiveness as Haskell does.
 
Before we had:
templates -> universal types
now we add:
concepts -> existential types
The two quantifiers are now reunited. w00t.
This is huge.
 
C++ is now joining the club with the likes of D, Nim, and Haskell, and is even ahead of Swift, which plans to get existentials. Even Rust is not yet there: https://varkor.github.io/blog/2018/07/03/existential-types-in-rust.html.
 
The consequences of that are mostly visible through the range-v3 library on Eric Niebler's site.
Basically we'll now be able to program as if we had C#'s LINQ, but in C++ and with no overhead.
All the Boost iterator-adaptor goodness, accessible with minimum fuss, right there natively.
 
Hooray for filtered iteration a la jinja:
for (auto stuff : collection | only_interesting_dudes_btw)
{}
 
The pipe filter will not cost a thing; it will be evaluated as the underlying iterator advances. This should remind you of Python's generators, and you'd be right, because (I suppose) it's possible thanks to coroutines.
 
The direct availability of views, filters and adaptors will change how we write code. We'll have super cool pure functional queries instead of ugly loop counters and stateful code with 50 lines and copies or moves into intermediate values and stuff.
 
Like so:
map<int,string> stuff;
function_that_only_takes_string_collections(stuff | second_adaptor); // BOOM
 
Good luck without C++20 for this one.
You'd be forced to copy your strings into a temporary collection.
Or to use a custom-written iterator, if you're lucky enough that the API designer gave you template iterators for begin/end parameters.
Or to integrate Boost, which most projects hate to do because of how heavy it is.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jul 25 02:16PM

On Thu, 2019-07-25, David Brown wrote:
> Any word for describing groups or features seems to turn into a term in
> C++. "C++20 introduces a range of new concepts...". I expect that for
> C++23 there will be a new feature called "features" :-)
 
:-)
 
Reminds me of Perl, where "thingie" has a well-defined meaning.
Although that's more Larry Wall's sense of humor than running
out of names.
 
Personally, what I'd like to see in C++ is what Stroustrup asked for
the other year: slower and more reliable progress. I'm still on C++11.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
woodbrian77@gmail.com: Jul 25 07:49AM -0700

On Wednesday, July 24, 2019 at 9:45:37 PM UTC-5, Alf P. Steinbach wrote:
> C++20. I think the modules thing is very much premature. Until they've
> got Boost expressed as modules there's no relevant real world
> experience, and the fact that they don't indicates it's not good enough.
 
I'm not sure these things will help me much. I mentioned Coroutines
already. Maybe serialization support for chrono types?
 
 
Brian
https://github.com/Ebenezer-group/onwards
David Brown <david.brown@hesbynett.no>: Jul 25 07:13PM +0200

>> experience, and the fact that they don't indicates it's not good enough.
 
> I'm not sure these things will help me much. I mentioned Coroutines
> already. Maybe serialization support for chrono types?
 
We can only guess what might be helpful to /you/ on one single project.
But that would be a rather specific and personal thread anyway. I
think it is helpful to hear what others think might be features of
interest to them, or of likely wider interest.
 
I tried to answer your specific question of "what am I missing?"
regarding coroutines. Hopefully that was helpful to you.
David Brown <david.brown@hesbynett.no>: Jul 25 07:19PM +0200

On 25/07/2019 16:16, Jorgen Grahn wrote:
 
> Personally, what I'd like to see in C++ is what Stroustrup asked for
> the other year: slower and more reliable progress. I'm still on C++11.
 
> /Jorgen
 
I think many of the changes can lead to simpler programs, and in
particular, make it simpler to work with better choices of type. That
in turn means safer code as it is more likely for errors to be caught at
compile time.
 
In particular, "concepts" give you a middle ground between "any type"
(i.e., "auto", or template parameter "T") and fixed, concrete types. They will
mean you can write functions that deal with a "Number" - it could be any
kind of a number, but it would not be a string or a pointer. You will
be able to write template classes and functions that work with
particular groups of types, without error-prone, ugly and barely
comprehensible "enable_if" clauses. And when something goes wrong, the
error messages should be a lot clearer.
Manfred <noname@add.invalid>: Jul 25 08:53PM +0200

On 7/25/2019 12:28 PM, David Brown wrote:
> Any word for describing groups or features seems to turn into a term in
> C++. "C++20 introduces a range of new concepts...". I expect that for
> C++23 there will be a new feature called "features" :-)
 
To this point I think that, even if the feature itself is really good,
the name "concept" is just a bad choice.
They are in fact requirements on template arguments, which Alex Stepanov
named "concepts" with what looks to me like a sort of philosophical jump:
apparently he assumes that a set of requirements on "something" is
what defines such "something", and is therefore identical to the
concept of such "something".
 
Now, while this may be an excellent argument for discussion, IMHO it is
just confusing when applied to a programming language (and possibly
one of the reasons for the tormented gestation of the feature).
Personally I would have preferred a name more directly coupled to
"requirement", as in fact that is what they are (we already have typedef;
typereq might have been something clearer).
 
But Stroustrup holds Stepanov in very high regard, so that's the
name we get.
aminer68@gmail.com: Jul 24 04:32PM -0700

Hello,
 
 
Understanding the spirit of Rust..
 
I think the spirit of Rust is like the spirit of Ada:
they are especially designed for very high standards of safety, like
those of Ada. "But" I don't think we have to fear the race conditions
that Rust solves, because I think that race conditions are not so
difficult to avoid when you are a decent, knowledgeable programmer
in parallel programming, so you have to understand what I mean.
Now we have to talk about the rest of the safety guarantees of
Rust. There remains the problem of deadlock, and I think
that Rust is not solving this problem, but I have
provided you with the DelphiConcurrent library for Delphi
and Freepascal that detects deadlocks, and there are also
the memory safety guarantees of Rust, here they are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
But notice that I have solved numbers 1 and 2 by inventing my
scalable reference counting with efficient support for weak references
for Delphi and Freepascal (read below to see it), and for number 3
read my following thoughts to understand:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I understand this video, but I have to explain my work:
 
I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations; I am also inventing effective abstractions. I give you an example:
 
Read the following from the senior research scientist called Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you will notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better; I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Good fairness
3- It keeps efficiently and very low the cache coherence traffic
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a decent preemption tolerance.
 
 
This is how I am an "inventor". I have also invented other scalable
algorithms, such as scalable reference counting with efficient support
for weak references; a fully scalable threadpool; a fully scalable FIFO
queue; and other scalable algorithms and their implementations. And
I think I will sell some of them to Microsoft or Google or Embarcadero
or other such software companies.
 
 
Read my following writing to know me more:
 
More about computing and parallel computing..
 
The important guaranties of Memory Safety in Rust are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
I think I have solved null pointer dereferences, dangling pointers, and memory leaks for Delphi and Freepascal by inventing my "scalable" reference counting with efficient support for weak references, and I have implemented it in Delphi and Freepascal; reference counting in Rust and C++ is "not" scalable.
 
About number 3 above, buffer overruns, read here about Delphi
and Freepascal:
 
What's a buffer overflow and how to avoid it in Delphi?
 
http://delphi.cjcsoft.net/viewthread.php?tid=49495
 
 
About Deadlock and Race conditions in Delphi and Freepascal:
 
I have ported DelphiConcurrent to Freepascal, and I have
also extended them with support for my scalable RWLocks for Windows and Linux, with support for my scalable lock called MLock for Windows and Linux, and I have also added support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files inside the zip file to understand more.
 
You can download DelphiConcurrent and FreepascalConcurrent for Delphi and Freepascal from:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
DelphiConcurrent and FreepascalConcurrent by Moualek Adlene are a new way to build Delphi applications which involve parallel executed code based on threads, like application servers. DelphiConcurrent provides programmers with the internal mechanisms to write safer multi-threaded code while taking special care of performance and genericity.
 
In concurrent applications, a DEADLOCK may occur when two or more threads try to lock two or more consecutive shared resources in a different order. With DelphiConcurrent and FreepascalConcurrent, a DEADLOCK is detected and automatically skipped - before it occurs - and the programmer gets an explicit exception describing the multi-thread problem, instead of a blocking DEADLOCK which freezes the application with no output log (and perhaps also the linked client sessions, if we talk about an application server).
 
Amine Moulay Ramdane has extended them with support for his scalable RWLocks for Windows and Linux, and with support for his scalable lock called MLock for Windows and Linux, and he has also added support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files to understand more.
 
And please read the html file inside to learn more how to use it.
 
 
About race conditions now:
 
My scalable Adder is here..
 
As you have noticed, I have previously posted my modified versions of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in parallel programs.
 
But I have just read the following about how to avoid race conditions in parallel programming in most cases.
 
Here it is:
 
https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/
 
This is why I have invented my following powerful scalable Adder, to help you do the same as the above. Please take a look at its source code to understand more; here it is:
 
https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
 
Other than that, about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.
 
There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
And about the message-passing and shared-memory process communication models:
 
An advantage of the shared-memory model is that memory communication is faster than in the message-passing model on the same machine.
 
However, the shared-memory model may create problems, such as synchronization and memory protection, that need to be addressed.
 
Message passing's major flaw is the inversion of control–it is a moral equivalent of gotos in un-structured programming (it's about time somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI application is significantly higher than that required to write a shared-memory version of it.
 
And more about my scalable reference counting with efficient support
for weak references:
 
My invention that is my scalable reference counting with efficient support for weak references version 1.35 is here..
 
Here I am again: I have just updated my scalable reference counting with efficient support for weak references to version 1.35. I have just added a TAMInterfacedPersistent that is a scalable reference-counted version, and now I think I have made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But I don't agree with the writing of the guy of the above web page, because I think you have to understand the "spirit" of Delphi; here is why:
 
A component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means, in English, in "most" cases, and this is the most important thing to understand). In that scenario, reference counting is not used.
 
If you pass a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also, I have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, which is the scalable reference-counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, which is the scalable reference-counted version. You will find both my TAMInterfacedObject and my TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To know how to use weak references, please take a look at the demo that I have included, called example.dpr, and look inside my zip file at the tutorial about weak references. To know how to use delegation, take a look at the demo that I have included, called test_delegation.pas, and at the tutorial inside my zip file that teaches you how to use delegation.
 
I think my scalable reference counting with efficient support for weak references is stable and fast. It works on both Windows and Linux, and it scales on multicore and NUMA systems. You will not find it in C++ or Rust, and I don't think you will find it anywhere; this invention of mine solves the problem of dangling pointers, solves the problem of memory leaks, and is "scalable".
 
And please read the readme file inside the zip file, which I have just extended, to make you understand more.
 
You can download my new scalable reference counting with efficient support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
