Monday, May 31, 2021

Re: A pity our children don't know Chinese (可惜我們的孩子不會中文)

Dear colleagues, fellow townsfolk, and friends:

    Surely, like me, you have all been moved by the marvel of Chinese, and feel all the prouder to be Chinese because of it. But I also agree with Anna's lament that our next generation (especially those born and raised abroad) "don't know Chinese." Alas!

    That being so, let me take this opportunity to explain, as best I know, the great contribution of Chinese characters; to call them unique in the world is no exaggeration. People everywhere want to know why Chinese civilization has continued unbroken for 5,000 years (add another 2,000 if you count the Hemudu site). Among other factors, the emergence of Chinese characters played a decisive role in uniting and assimilating different regions and peoples, yet most of us overlook it.

    What sets Chinese characters apart from every other written language, to put it in phonetic terms, is that they are not a phonetic script. So even though different regions and peoples speak differently pronounced languages, they can all use the same character symbols to keep records and exchange ideas.

By my research, Chinese characters originated in the era of Fuxi (2852 BC). They then gradually connected different regions and communities, until all were unified under one great Chinese civilization of "common script, common stock."

This point only truly shows its importance when compared with other civilizations and cultures. The rise of the Roman Empire, for instance, corresponds to China's Qin and Han periods, yet the Roman Empire lasted only until the 5th century AD. In one of my books I argued this was because it lacked the Chinese empire's two "treasures": first, a foundation of cultural unity (Emperor Wu of Han made Confucianism the "state religion," and on that foundation built the civil-service examination system, the origin of civil services worldwide); second, it lacked a non-phonetic symbolic script like the Chinese characters. (Otherwise, the Roman Empire might, like China, have endured and spread to this day, and Europe would long since have been a single "all under heaven" with no need for an EU.)

     Back to the point. Besides the two features above, a third feature of Chinese characters is that each character is monosyllabic. The possible permutations and combinations of characters therefore exceed those of any phonetic script, which is why so many character combinations of different lengths exist. Free of the constraints of phonetic spelling, the flexibility of arranging characters increases; in computing terms, this is the configuration between characters.

Conclusion: the singular marvel and beauty of Chinese (characters) is, then, not inexplicable.
jch
James C. Hsiung  (熊  玠), Ph.D.
Professor of Politics & Int'l Law
New York University
19 West 4th St.
New York, N.Y.
(212) 998-8523


On Mon, May 31, 2021 at 2:53 PM Anna <ampanwu@gmail.com> wrote:

From a Chinese TV station: the "walking little poetry library," Wang Hengyi (王恆忔), has been showing off on TV since he was three; it is hard not to love him. Now seven, he and his grandmother compete together in adult contests. Absolutely brilliant. He can recite more than 400 Tang and Song poems, and Li Changyu (李昌鈺) adores him.

From: tina Soong [mailto:tsoongtotherim@aol.com]
Sent: Monday, May 31, 2021 12:57 PM
To: Tsu Teh Soong; TSL; Anna; Shu Gong; Christa Meadows; JamesHsiung
; Vivian Fung 孫均德; vickie Tu; Pearl Burke; marian chen; betty@lapack.com; Dolores Kuo; Jensie Tou; Lian Kwan; Sharon Kahn; Amy Tsai; amyhuang@yahoo.com; Selina Tsang; lileeusa@gmail.com; kykao53@hotmail.com; jessica sheu; Meilee Woo; Melissa Huang
Subject: Fwd: A pity our children don't know Chinese (可惜我們的孩子不會中文)

 

   

 

From: YungChien Lew <yungchien.lew@gmail.com>
Date: May 31, 2021 at 4:54:30 AM CDT
To: Anita Kao <sansank99@yahoo.com>, Hsiao-Hung Kao <k6hsi@aol.com>, Norman Soong <nsoong21@gmail.com>, Tina Soong <Tsoongtotherim@aol.com>
Subject: A pity our children don't know Chinese (可惜我們的孩子不會中文)

Digest for comp.lang.c++@googlegroups.com - 24 updates in 5 topics

Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: May 27 07:00PM +0100

On Thu, 27 May 2021 19:32:58 +0200
> design very complex if you really want to make it interruptible at
> every time while processing a call. That's not necessary if you'd
> design it not that stupid.
 
Interrupts were necessary with single-threaded programs. They are not
usually necessary with multi-threaded programs and POSIX provides
everything you need in consequence. You block the signals of interest
in all your threads and then have a single handler thread with blocks
on sigwait() and deals with the signals synchronously (synchronously
for the handler thread that is). Job done.
 
One other thing POSIX provides and C++11 doesn't is thread cancellation.
Some programmers who don't have relevant experience think thread
cancellation is a bad idea because it allows arbitrary cancellation
which cannot be adequately controlled (the "combing your hair with a
fork" jibe). Whilst that is true with windows it absolutely isn't with
POSIX. What you do is use deferred cancellation (the default in POSIX)
and block cancellation as normal policy, and only enable cancellation
at defined points in the code which are able to deal with it (normally
where a wait is to occur). Done properly, thread cancellation is far
easier to use than exceptions, which can jump out of anywhere if you
include std::bad_alloc in the mix. C++11 threads don't deal with
cancellation and will never be able to do so because they require OS
support - in this case POSIX support.
Paavo Helde <myfirstname@osa.pri.ee>: May 26 05:35PM +0300

26.05.2021 14:51 Juha Nieminen kirjutas:
 
> the program will still crash, because std::thread does not like being
> destroyed without being joined or detached, so it just terminates if that's
> the case.
 
std::thread is based on boost::thread which automatically detaches in
the destructor. Alas, this would create a rampant thread which cannot be
joined any more.
 
I guess when it got standardized, they felt that such automatic detach
is no good, but did not dare to enforce automatic join either, by some
reason. So they chose to std::terminate() which is the worst of them all
IMO.
 
See https://isocpp.org/files/papers/p0206r0.html (Discussion about
std::thread and RAII).
Juha Nieminen <nospam@thanks.invalid>: May 25 05:16AM

> while ((num = fread (buffer, sizeof (char), sizeof (buffer) - 1,
> pOutputFile)) > 0)
 
The standard mandates that sizeof(char) is 1. It cannot have any other
value.
 
> {
> // nothing to catch since any error causes this code to bypass
> }
 
If you are catching a memory allocation, why not catch it explicitly and
return an error code or print an informative error message or something
that indicates what happened, rather than silently failing and doing
nothing?
 
try
{
// your code here
}
catch(const std::bad_alloc& e)
{
// Could do, for example:
std::cout << "Memory allocation failed: " << e.what() << "\n";
}
catch(...)
{
std::cout << "Unknown exception thrown while trying to read file\n";
}
Lynn McGuire <lynnmcguire5@gmail.com>: May 27 02:08PM -0500

On 5/27/2021 12:50 AM, Christian Gollwitzer wrote:
> size_t for unsigned or ptrdiff_t for signed. That will correspond to a
> 32bit integer in 32 bit and a 64 bit integer in 64 bit.
 
>     Christian
 
Done. With checking against SIZE_MAX before casting the variable to size_t.
 
Yeah, if 1 GB is having trouble in Win32, 2+ GB will be much worse. The
code is now ok to fail without crashing the program. Porting to Win64
is needed in the near future. So many things to do, so little time. I
will be 61 in a couple of weeks, kinda hoping to retire before 75.
 
Thanks,
Lynn
Lynn McGuire <lynnmcguire5@gmail.com>: May 25 12:46PM -0500

On 5/25/2021 12:04 PM, Paavo Helde wrote:
 
> OP was a bit unclear in this point. But there would be no point to have
> a 32-bit Windows 7 on a machine with 16 GB RAM, so I hope he has got a
> 64-bit OS after all.
 
Yes, I have Windows 7 x64 Pro. I cannot convert to win64 at this time.
When we do convert, it will be a steep hill as we started this code in
Win16 in 1987. The Win32 port was a very steep hill in 2000.
 
Lynn
Lynn McGuire <lynnmcguire5@gmail.com>: May 25 12:49PM -0500

On 5/25/2021 12:16 AM, Juha Nieminen wrote:
> {
> std::cout << "Unknown exception thrown while trying to read file\n";
> }
 
Thanks, I did not know the code for catching the bad_alloc explicitly.
 
Lynn
MrSpook_Ann1u@d0v_5eh1bqgd.com: May 25 04:10PM

On Tue, 25 May 2021 18:42:54 +0300
>> as real memory or as configured backing store (i.e. swap space).
 
>For fitting a 900 MB std::string into a memory one needs 900 MB of
>contiguous address space and Bonita is right in pointing out this might
 
Why? Only the virtual memory address space needs to be contiguous, the
real memory pages storing the string could be all over the place.
Bonita Montero <Bonita.Montero@gmail.com>: May 26 06:16AM +0200

Maybe it would be an idea to process your file in pieces ?
Christian Gollwitzer <auriocus@gmx.de>: May 26 08:36AM +0200

Am 25.05.21 um 19:20 schrieb Lynn McGuire:
> not funny.  My calculation engine is 850,000 lines of F77 and about
> 20,000 lines of C++ but it is still portable to the Unix boxen, probably
> mainframes too if any engineers ran them anymore.
 
Have you actually tried to recompile in 64bit? The Win32-API is the same
AFAIUI. Are you casting pointers to ints/longs? I haven't ported
million-LOC programs to 64bit, but in smaller projects there was
surprisingly little to do to make it work. I have no idea what goes
wrong when you link to F77, though.
 
Christian
Bonita Montero <Bonita.Montero@gmail.com>: May 26 09:22AM +0200

> Things would have to be pretty badly FUBARed for the VM to run out of virtual
> memory address space on a 64 bit system given the 16 exabyte max size!
 
Actually AMD64 supports "only" 48 bit page-tables where the lower 47
bit fall into user-space. Intel 64 has a recent change to 56 bit page
-tables, but I think that's rather for systems with large file-mappings.
Bonita Montero <Bonita.Montero@gmail.com>: May 26 08:17AM +0200

I'm too lazy to read your code.
 
> #define CT_L2_ALIGNMENT 128
 
But this is a false assumption. Except for the Intel Pentium 4 and
the IBM POWER's L4-caches (embedded DRAM) there are AFAIK no caches
that have a 128-byte cacheline size. 128-byte cachelines have proven
to be inefficient below a certain cache-size because the cachelines
are used too partially, as the working set has too little reuse.
Better use this:
https://en.cppreference.com/w/cpp/thread/hardware_destructive_interference_size
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 07:03PM -0700

Fwiw, here is some crude Windows code I just coded up for a single
linked list using a futex. It only supports push and flush operations
for now. flush will wait if the stack is empty. I was too lazy to
implement an ABA counter. However, it shows an interesting way to use a
futex for a lock-free algorithm.
 
Can you get it to run? Thanks.
__________________________________________________
 
 
// Futex Single Linked List by Chris M. Thomasson
//___________________________________________________
 
 
 
#include <iostream>
#include <thread>
#include <vector>
#include <functional>
#include <cassert>
 
 
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
 
 
#define CT_L2_ALIGNMENT 128
#define CT_ITERS 6666666
#define CT_NODES 42
 
 
#define CT_WAITBIT 0x1UL
 
 
static LONG g_memory_allocations = 0;
static LONG g_memory_deallocations = 0;
static LONG g_futex_signals = 0;
static LONG g_futex_waits = 0;
 
struct ct_node
{
ct_node* m_next;
 
ct_node() : m_next(NULL)
{
InterlockedAdd(&g_memory_allocations, 1);
}
 
~ct_node()
{
InterlockedAdd(&g_memory_deallocations, 1);
}
};
 
 
#define CT_NODE_SET_WAITBIT(mp_ptr) ((ct_node*)(((ULONG_PTR)(mp_ptr)) | CT_WAITBIT))
#define CT_NODE_CHECK_WAITBIT(mp_ptr) (((ULONG_PTR)(mp_ptr)) & CT_WAITBIT)
#define CT_NODE_CLEAR_WAITBIT(mp_ptr) ((ct_node*)(((ULONG_PTR)(mp_ptr)) & ~CT_WAITBIT))
 
 
void ct_node_flush(ct_node* node)
{
while (node)
{
ct_node* next = node->m_next;
delete node;
node = next;
}
}
 
 
struct ct_futex_slist
{
ct_node* alignas(CT_L2_ALIGNMENT) m_head;
 
 
ct_futex_slist() : m_head(nullptr)
{
 
}
 
 
void push(ct_node* node)
{
ct_node* head = m_head;
 
for (;;)
{
ct_node* xchg = CT_NODE_CLEAR_WAITBIT(head);
node->m_next = xchg;
 
ct_node* ret =
(ct_node*)InterlockedCompareExchangePointer((PVOID*)&m_head, node, head);
 
if (ret == head)
{
if (CT_NODE_CHECK_WAITBIT(ret))
{
InterlockedAdd(&g_futex_signals, 1);
WakeByAddressSingle(&m_head);
}
 
return;
}
 
head = ret;
}
}
 
 
ct_node* flush()
{
ct_node* head_raw =
(ct_node*)InterlockedExchangePointer((PVOID*)&m_head, NULL);
ct_node* head = CT_NODE_CLEAR_WAITBIT(head_raw);
 
if (! head)
{
for (;;)
{
head_raw =
(ct_node*)InterlockedExchangePointer((PVOID*)&m_head, (ct_node*)CT_WAITBIT);
head = CT_NODE_CLEAR_WAITBIT(head_raw);
 
if (head)
{
break;
}
 
InterlockedAdd(&g_futex_waits, 1);
ct_node* waitbit = (ct_node*)CT_WAITBIT;
WaitOnAddress(&m_head, &waitbit, sizeof(PVOID), INFINITE);
}
}
 
return head;
}
};
 
 
struct ct_shared
{
ct_futex_slist m_slist;
 
 
~ct_shared()
{
ct_node* head_raw = m_slist.m_head;
 
if (CT_NODE_CHECK_WAITBIT(head_raw))
{
std::cout << "\n\nWAITBIT LEAK!\n";
}
 
ct_node_flush(head_raw);
 
if (g_memory_allocations != g_memory_deallocations)
{
std::cout << "\n\nMEMORY LEAK!\n";
}
 
std::cout << "\ng_memory_allocations = " <<
g_memory_allocations << "\n";
std::cout << "g_memory_deallocations = " <<
g_memory_deallocations << "\n";
std::cout << "g_futex_waits = " << g_futex_waits << "\n";
std::cout << "g_futex_signals = " << g_futex_signals << "\n";
}
};
 
 
void ct_thread(ct_shared& shared)
{
for (unsigned long i = 0; i < CT_ITERS; ++i)
{
for (unsigned long n = 0; n < CT_NODES; ++n)
{
shared.m_slist.push(new ct_node());
}
 
ct_node* node = shared.m_slist.flush();
ct_node_flush(node);
}
 
shared.m_slist.push(new ct_node());
}
 
 
int main()
{
unsigned int threads_n = std::thread::hardware_concurrency();
 
std::vector<std::thread> threads(threads_n);
 
std::cout << "Futex Single Linked List by Chris M. Thomasson\n\n";
 
std::cout << "Launching " << threads_n << " threads...\n";
std::cout.flush();
 
{
ct_shared shared;
 
for (unsigned long i = 0; i < threads_n; ++i)
{
threads[i] = std::thread(ct_thread, std::ref(shared));
}
 
std::cout << "Processing...\n";
std::cout.flush();
 
for (unsigned long i = 0; i < threads_n; ++i)
{
threads[i].join();
}
}
 
std::cout << "\nCompleted!\n";
 
return 0;
}
__________________________________________________
 
 
 
Here is my output:
__________________________________________________
Futex Single Linked List by Chris M. Thomasson
 
Launching 4 threads...
Processing...
 
g_memory_allocations = 1119999892
g_memory_deallocations = 1119999892
g_futex_waits = 21965
g_futex_signals = 55630
 
Completed!
__________________________________________________
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 11:21PM -0700

On 5/25/2021 11:17 PM, Bonita Montero wrote:
> haviour of the working-set.
> Better use this:
> https://en.cppreference.com/w/cpp/thread/hardware_destructive_interference_size
 
Yes, you are correct. std::hardware_destructive_interference_size is
the way to go. Thanks for taking a look at my code to begin with Bonita!
 
:^)
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:32AM +0200

> I can give you a generic waiting thing called an, oh wait. I want
> to see if you can do it for yourself. I can encrypt my answer....
 
You're stupid. We're talking about whether a lock-free queue
is possible without polling. And it obviously is not; if it
didn't poll, it wouldn't be lock-free.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:30AM +0200

>> an item in the queue the only opportunity that is left is polling.
 
> You are missing things again. Think! Remember back when you said that
> mutexes must use CAS?
 
I never said that, because I implemented a mutex with LOCK XADD a long
time ago. And this has nothing to do with the fact that you don't
understand that lock-free programming isn't possible without polling.
Describe to me a lock-free algorithm that works without polling !
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 24 11:33PM -0700

On 5/24/2021 11:32 PM, Bonita Montero wrote:
>> to see  if you can do it for yourself. I can encrypt my answer....
 
> You're stupid. We're talking about whether a lock-free queue
> is possible without polling. And it is obviously not
 
Why poll, when we can check the predicate then wait?
 
 
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:36AM +0200

>> that lock-free programming isn't possible without polling.
>> Describe me a lock-free algorithm that works without polling !
 
> http://fractallife247.com/test/hmac_cipher/ver_0_0_0_1?ct_hmac_cipher=27becd4634e98df52a89d06b2906ce8a7fdf24968475da63b3d57719e7eb01d4ab5f0e28d0a76c73883661af6abd0514412e774530b3af251b118a66f38c521f1bb4c5652c303411c9de7606f4690777a997ad6d65ac625c456673216ae073a067fd34e5f37010811231569824a22be1f5ab1ee525a30626b87986672f
 
Have you ever been checked by a mental health doctor ?
That has nothing to do with what we're talking about.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:07AM -0700

On 5/25/2021 12:05 AM, Bonita Montero wrote:
>>> Otherwise they woudln't be lock-free.
 
>> Oh argh! Did you read where I mentioned predicates?
 
> Yes, these predicates are polled with all lock-free algorithms.
 
The predicate is polled in a condvar as well, yawn. Getting tired. Just
try to think about it some more. I will get back to you tomorrow. I have
some other work to do. Will but out Relacy. Its fun to use.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 12:53PM +0200

>> I've shown that fork()ing is much more slower than to create a thread.
>> And even more a lot slower when pages are CoWd.
 
> For the use cases of fork the time of creation is irrelevant.
 
E.g. if you have a webserver that fork()s for every request,
it's as relevant there as in most other cases.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 09:10AM +0200

> The predicate is polled in a condvar as well, yawn. ...
 
A condvar isn't a lock-free structure.
MrSpook_m0gtwlu6_g@j_cyo9l9.edu: May 25 09:05AM

On Mon, 24 May 2021 18:51:58 +0200
>means that plugins, extensions, tabs, etc., can crash without bringing
>down the whole browser. It costs - especially in ram - but you get
>benefits from it.
 
Yup. Also some browsers - eg firefox - started out using separate processes
for separate tabs but switched over to multithreading - presumably to make
porting to Windows easier - with the consequent drop in reliability. Now
some have gone back to multi-process, which also has the benefit of making
some kinds of javascript hacks difficult if not impossible if each tab is a
separate process.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 12:54PM +0200

>> thread( func, params ... ).detach() and forget the thread.
 
> What a pity I can't find some ascii art of someone with their head in their
> hands.
 
The above is a trivial statement. And you won't have to have any
shared memory; you simply pass the parameters to the thread and
they're destructed when the thread ends. That's magnitudes more
convenient than fork()ing and having shared memory.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 02:19PM +0200

> creating a thread. Creating a thread on my Linux Ryzen 7 1800X is about
> 17.000 clock cycles and I bet fork()ing is magnitudes higher if you
> include the costs of making copy-on-writes of every written page.
 
I just wrote a little program:
 
#include <unistd.h>
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <cctype>
#include <cstring>
#include <cstdint>
 
using namespace std;
using namespace chrono;
 
int main( int argc, char **argv )
{
using sc_tp = time_point<steady_clock>;
if( argc < 2 )
return EXIT_FAILURE;
uint64_t k = 1'000 * (unsigned)atoi( argv[1] );
sc_tp start = steady_clock::now();
char buf[0x10000];
for( uint64_t n = 0; fork() == 0; ++n )
{
memset( buf, 0, sizeof buf );
if( n == k )
{
double ns = (double)(int64_t)duration_cast<nanoseconds>(
steady_clock::now() - start ).count() / (int64_t)k;
cout << ns << endl;
break;
}
}
}
 
On my Ryzen 7 1800X Linux PC it gives me a fork-cost of about
180,000 to 222,000 clock cycles per fork. Any questions? I'm
doing a minimal write in a 64kB block (hopefully this isn't
optimized away) to simulate some copy-on-write. As you can see,
forking is magnitudes slower than creating a thread (17,000
cycles on the same machine).
Bonita Montero <Bonita.Montero@gmail.com>: May 31 09:33AM +0200

> If here really was the wrong place then I would not have been
> able to get Kaz to come back to comp.theory by posting here.
 
No matter what you say - your halting-problem-idea are always
off-topic here.

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:08AM -0700

On 5/25/2021 12:07 AM, Chris M. Thomasson wrote:
 
> The predicate is polled in a condvar as well, yawn. Getting tired. Just
> try to think about it some more. I will get back to you tomorrow. I have
> some other work to do. Will but out Relacy. Its fun to use.
 
Bust out Relacy! Damn Typos!
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:09AM -0700

On 5/25/2021 12:04 AM, Bonita Montero wrote:
 
> The discussion was about whether mutexes could be realized only
> as counting mutexes and not whether they could be realized
> with compare-and-swap only. You have a weak memory !
 
You said they must use a semaphore, or CAS or something. Nope, you are
wrong. And we, Kaz and I had to correct you. I remember it. Anyway, I
have to take a look at some other code right now. Will get back to you
in a new thread sometime tomorrow, deal? Okay.
MrSpook_43zpj5@zrdy.biz: May 25 09:02AM

On Mon, 24 May 2021 18:15:31 +0200
>> English comprehension problem?
 
>I've shown that fork()ing is much more slower than to create a thread.
>And even more a lot slower when pages are CoWd.
 
For the use cases of fork the time of creation is irrelevant.
MrSpook_wBhaki@w1dhf2q0pnagn57.eu: May 24 08:26AM

On Fri, 21 May 2021 16:25:34 GMT
>>Your knowledge is very superficial when it comes to parallelization !
 
>You are both arguing past each other.
 
>Both forking and multithreading are useful in the right situation.
 
That's all I was saying. But people who've only ever developed on Windows
know nothing but multithreading and see it as the answer to everything.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 03:37PM +0200

> Copy-on-write takes cycles, of course - but you only need to do the copy
> on pages that are written. ...
 
Unfortunately you inherit the original fragmented heap of the parent
process, so that there might be a lot of CoWs.
 
> in threading), and I expect forking to be more efficient if you need
> separation for reliability or security (since separation is the default
> for forking).
 
Why should forking ever be faster? There's no reason for this. If
you have decoupled threads which don't synchronize or simply
read-share memory, they could perform slightly faster since a
context-switch doesn't always include a TLB-flush.
 
> You come from a Windows world, where forking is not supported.
> In the *nix world, it's a different matter.
 
My statement was a general statement - threaded applications are easier
to write and fork()ing is at best equally performant, but usually less.
 
> efficient than multi-processing, or that it is more common - I am merely
> disagreeing with your blanket generalisations about multi-threading
> /always/ being better and /always/ being used.
 
It's better almost every time.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 06:40AM +0200

>> Why ?
 
> About the "monitor-objects are the fastest" comment... ;^)
 
I'm just ignoring lock-free queues because they're
impracticable: you have to poll.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 24 10:07PM -0700

On 5/24/2021 9:40 PM, Bonita Montero wrote:
 
>> About the "monitor-objects are the fastest" comment... ;^)
 
> I'm just ignoring lock-free queues because they're
> impracticable: you have to poll.
 
Why do you have to poll?
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:19AM +0200

>> queue is full (but boost shows that the producer-side is also pos-
>> sible with blocking).
 
> Why?
 
Because lock-free is without waiting in the kernel.
MrSpook_ry@939_6htz773e0qeya.eu: May 27 08:18AM

On Wed, 26 May 2021 19:04:56 +0200
 
>We don't discuss physical threading but how the language presents
>threading; and C++11-threading is by far more convenient than pure
>pthreads.
 
A Big Mac is convenient; that doesn't make it the best meal. The pthreads
library is extremely powerful. Perhaps the boilerplate setup code can be a
bit long-winded, but it's not hard to use.
MrSpook_b28s@jxgz6zklebr1.tv: May 27 10:24AM

On Thu, 27 May 2021 11:02:11 +0200
>> A Big Mac is convenient, doesn't make it the best meal. ...
 
>Programming pthreads directly has no advantages
 
Presumably you've never had to use 3 level locking or fine grain threading
control. Also the lack of proper interoperability with signals makes C++
threading on unix a bit of a toy frankly.
 
>and makes a lot of workd more.
 
A bit, not a lot.
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:33PM +0200

> Presumably you've never had to use 3 level locking or fine grain threading
> control. Also the lack of proper interoperability with signals makes C++
> threading on unix a bit of a toy frankly.
 
Signals are a plague. You can't write a library whose signal
handling is coordinated independently of the code it is later
embedded into; both have to be made of a single piece.
Signals are even worse since there can't be different handlers
for synchronous signals for different threads.
And signal code always has a reentrancy problem, which makes them
an even bigger plague. And the ABI has to be designed around them
(red zone). That's not clean coding.
Therefore: outsource asynchronous signals to different threads.
Windows has a more powerful mechanism for something like synchronous
signals, Structured Exception Handling. And for the few asynchronous
signals Windows knows, Windows spawns a different thread if a signal
happens.
 
And which threading control is needed beyond what C++ provides?
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:40PM +0200

>> control. Also the lack of proper interoperability with signals makes C++
>> threading on unix a bit of a toy frankly.
 
> Signals are a plague....
And even more: if I use C++ threading, the places where signals could
occur and where I can't get the EAGAIN are only where I have locking
and/or waiting on a condition_variable. But at the places where I
lock a mutex or wait on a CV with pthreads, POSIX mandates that you
re-lock the mutex or re-wait on the CV - that's exactly what C++11
synchronization does as well - so there's no difference here. So what
do you complain about here?
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:55PM +0200

> re-lock the mutex or re-wait for the CV - that's exactly what C++11
> -synhroni-zation does also - so there's no difference here. So what
> do you complain here ?
 
Oh, I'm partially wrong here: pthread_mutex_lock behaves as described
_but_ pthread_cond_wait handles the signal handler internally and
continues waiting afterwards.
So there's still nothing different than with C++11 threads !
MrSpook_zdNaq@bltmyc.gov.uk: May 27 03:39PM

On Thu, 27 May 2021 12:24:36 +0000 (UTC)
>>>dicussion. You are just being an asshole.
 
>> Good morning Mr Happy, things going well in Finland today?
 
>Fuck off, asshole.
 
Oh dear, another bad day? Have a lie down and cuddle the therapy teddy.
MrSpook_b_x@ukpge.org: May 27 03:54PM

On Thu, 27 May 2021 13:14:36 +0200
>>> Signals are a plague. You can't write a libary which does have a
 
>> Says the windows programmer.
 
>Signals are simply a bad concept. No one would invent them today.
 
Behold! Our mighty sage has spoken - let it be known that interrupts are a bad
idea!
 
Whatever you say sweetie.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 23 06:11PM -0700

Google Groups is broken. If you post to comp.lang.c++, it quietly drops
the "++" and posts to comp.lang.c. (You *might* be able to write the
newsgroup name as "comp.lang.c%2b%2b"; can someone confirm that?)
 
I've cross-posted this reply to both newsgroups, with followups to
comp.lang.c++. Any followups to this should go to comp.lang.c++.
 
(Consider using a Usenet server such as news.eternal-september.org with
a newsreader such as Thunderbird or Gnus (the latter runs under Emacs).)
 
(I've posted new text at the top of the quoted text to be sure it isn't
missed. The usual convention here is for new text to go *below* quoted
text.)
 
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 27 03:33PM -0700


> Why? size_t is guaranteed to hold the size of any object, which implies that
> it must be large enough to accomodate an object the size of the virtual address
> space. Generally it's minimum size in bits is the same as long.
 
That's likely to be true, but it's not absolutely guaranteed.
 
size_t is intended to hold the size of any single object, but it may
not be able to hold the sum of sizes of all objects or the size of
the virtual address space. An implementation might restrict the
size of any single object to something smaller than the size of
the entire virtual address space. (Think segments.)
 
Also, I haven't found anything in the standard that says you
can't at least try to create an object bigger than SIZE_MAX bytes.
calloc(SIZE_MAX, 2) attempts to allocate such an object, and I don't
see a requirement that it must fail. If an implementation lets you
define a named object bigger than SIZE_MAX bytes, then presumably
applying sizeof to it would result in an overflow, and therefore
undefined behavior.
 
Any reasonable implementation will simply make size_t big enough
to hold the size of any object it can create, but I don't see a
requirement for it.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Bonita Montero <Bonita.Montero@gmail.com>: May 25 12:56PM +0200

You've got a 2GB address-space, not a contiguous piece of
memory which fits your 900MB.
MrSpook_ddZgr4t4@okc9_pd48oig5.info: May 26 07:15AM

On Tue, 25 May 2021 19:57:47 +0300
>> real memory pages storing the string could be all over the place.
 
>Exactly. And if the memory allocator cannot find a free range of
>contiguous 900M addresses, guess what happens.
 
Things would have to be pretty badly FUBARed for the VM to run out of virtual
memory address space on a 64 bit system given the 16 exabyte max size!
MrSpook_dw5lvA4g@kak0_42x.edu: May 26 08:29AM

On Wed, 26 May 2021 10:54:27 +0300
 
>> memory address space on a 64 bit system given the 16 exabyte max size!
 
>It looks like you have overlooked the small fact that the OP is having a
>32-bit program and does not want to upgrade to 64-bit at this moment.
 
Fair enough. In which case running out of address space would be pretty
easy given modern application sizes.
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: May 26 06:33PM +0200

On 2021-05-25 03:46, Lynn McGuire wrote:
> some slop
>     fseek (pOutputFile, 0, SEEK_SET);
>     outputFileBuffer.reserve (outputFileLength);
 
[snip]
 
In the above code `ftell` will fail in Windows if the file is 2GB or
more, because in Windows, even in 64-bit Windows, the `ftell` return
type `long` is just 32 bits.
 
However, the C++ level iostreams can report the file size correctly:
 
 
----------------------------------------------------------------------------
#include <stdio.h> // fopen, fseek, ftell, fclose
#include <stdlib.h> // EXIT_...
 
#include <iostream>
#include <fstream>
#include <stdexcept> // runtime_error
using namespace std;
 
auto hopefully( const bool e ) -> bool { return e; }
auto fail( const char* s ) -> bool { throw runtime_error( s ); }
 
struct Is_zero {};
auto operator>>( int x, Is_zero ) -> bool { return x == 0; }
 
const auto& filename = "large_file";
 
void c_level_check()
{
struct C_file
{
FILE* handle;
~C_file() { if( handle != 0 ) { fclose( handle ); } }
};
 
auto const f = C_file{ fopen( ::filename, "rb" ) };
hopefully( !!f.handle )
or fail( "fopen failed" );
fseek( f.handle, 0, SEEK_END )
>> Is_zero()
or fail( "fseek failed, probably rather biggus filus" );
const long pos = ftell( f.handle );
hopefully( pos >= 0 )
or fail( "ftell failed" );
cout << "`ftell` says the file is " << pos << " byte(s)." << endl;
}
 
void cpp_level_check()
{
auto f = ifstream( ::filename, ios::in | ios::binary );
f.seekg( 0, ios::end );
const ifstream::pos_type pos = f.tellg();
hopefully( pos != -1 )
or fail( "ifstream::tellg failed" );
cout << "`ifstream::tellg` says the file is " << pos << " bytes."
<< endl;
}
 
void cpp_main()
{
try {
c_level_check();
} catch( const exception& x ) {
cerr << "!" << x.what() << endl;
cpp_level_check();
}
}
 
auto main() -> int
{
try {
cpp_main();
return EXIT_SUCCESS;
} catch( const exception& x ) {
cerr << "!" << x.what() << endl;
}
return EXIT_FAILURE;
}
-------------------------------------------------------------------------------
 
When I tested this with `large_file` as a copy of the roughly 4GB
"Bad.Boys.for.Life.2020.1080p.WEB-DL.DD5.1.H264-FGT.mkv", I got
 
 
[c:\root\dev\explore\filesize]
> b
!ftell failed
`ifstream::tellg` says the file is 4542682554 bytes.
 
 
- Alf
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 31 02:43PM -0700

> "int64_t", but someone would first have to check if it affected any real
> implementations before making such a change. But yes, that might be a
> way out and a way forward.
 
[...]
 
That would allow intmax_t to be 128 bits on implementations with
128-bit long long (are there any?), which seems like a good idea.
 
I think the point of both these proposals is purely for backward
compatibility, avoiding breaking code that already uses [u]intmax_t.
Both of them destroy the point of intmax_t, providing a type that's
guaranteed to be the longest integer type. Should intmax_t be
deprecated?
 
Perhaps some future version of C might have enough capabilities to
allow defining a longest integer type without causing ABI issues
the way intmax_t did.
 
And since, as far as I've been able to tell, no implementation
supports extended integer types, I wonder if they should be
reconsidered.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 31 02:44PM -0700

Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
[...]
> Perhaps some future version of C might have enough capabilities to
> allow defining a longest integer type without causing ABI issues
> the way intmax_t did.
 
And I did it again. s/C/C++/, or s/comp.lang.c++/comp.lang.c/.
 
[...]
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"daniel...@gmail.com" <danielaparker@gmail.com>: May 31 03:20PM -0700

On Monday, May 31, 2021 at 5:43:23 PM UTC-4, Keith Thompson wrote:
> Both of them destroy the point of intmax_t, providing a type that's
> guaranteed to be the longest integer type. Should intmax_t be
> deprecated?
 
Yes. "Give me the biggest integer type there is" is not a reasonable
thing to ask for, in any code that is intended to be portable across platforms
or over time on the same platform. You may as well have intwhatever_t.
 
Daniel
Lynn McGuire <lynnmcguire5@gmail.com>: May 24 08:46PM -0500

I am getting std::bad_alloc from the following code when I try to
reserve a std::string of size 937,180,144:
 
std::string filename = getFormsMainOwner () -> getOutputFileName ();
FILE * pOutputFile = nullptr;
errno_t err = fopen_s_UTF8 ( & pOutputFile, filename.c_str (), "rt");
if (err == 0)
{
std::string outputFileBuffer;
// need to preallocate the space in case the output file is a gigabyte or more, PMR 6408
fseek (pOutputFile, 0, SEEK_END);
size_t outputFileLength = ftell (pOutputFile) + 42; // give it some slop
fseek (pOutputFile, 0, SEEK_SET);
outputFileBuffer.reserve (outputFileLength);
 
Any thoughts here on how to handle the std::bad_alloc in std::string
reserve ?
 
Thanks,
Lynn
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 24 11:20PM -0700

On 5/24/2021 11:19 PM, Bonita Montero wrote:
>>> sible with blocking).
 
>> Why?
 
> Because lock-free is without waiting in the kernel.
 
Why, again? You are missing key things here... Think...
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:35AM +0200

> Really? Should I bring up the older thread?
 
Yes, you've got false memories.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:55AM +0200

> God you are being dense right now.
 
Lock-free structures never wait inside the kernel; that's why
they're non-locking. But a condvar mostly waits inside the kernel.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 24 11:58PM -0700

On 5/24/2021 11:55 PM, Bonita Montero wrote:
>> God you are being dense right now.
 
> Lock-free structures never wait inside the kernel; that's why
> they're non-locking. But a condvar mostly waits inside the kernel.
 
Huh? You are missing the fact that there is an algorithm that can turn a
lock-free stack into something that can wait in the kernel when it needs
to for the push/pop functions. It can be a futex, but there is another
one out there...
Bonita Montero <Bonita.Montero@gmail.com>: May 25 09:00AM +0200

Lock-free algorithms never wait inside the kernel.
Otherwise they wouldn't be lock-free.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:03AM -0700

On 5/24/2021 11:59 PM, Bonita Montero wrote:
>> You are the one who made Kaz correct you. I corrected you as well.
 
> But you are wrong about which topic.
> Sorry, but that's very weak, losing your memories so shortly.
 
No, I just have to end up teaching you again. Oh well. The new thread
will be beneficial to you and, perhaps, others.
Bonita Montero <Bonita.Montero@gmail.com>: May 25 09:05AM +0200

>> Lock-free algorithms never wait inside the kernel.
>> Otherwise they wouldn't be lock-free.
 
> Oh argh! Did you read where I mentioned predicates?
 
Yes, these predicates are polled with all lock-free algorithms.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:02AM -0700

On 5/25/2021 12:00 AM, Bonita Montero wrote:
> Lock-free algorithms never wait inside the kernel.
> Otherwise they wouldn't be lock-free.
 
Oh argh! Did you read where I mentioned predicates?
Bonita Montero <Bonita.Montero@gmail.com>: May 25 09:04AM +0200

> No, I just have to end up teaching you again. Oh well.
> The new thread will be beneficial to you and, perhaps, others.
 
The discussion was about whether mutexes could be realized only as
counting mutexes, not whether they could be realized with
compare-and-swap only. You have a weak memory!
David Brown <david.brown@hesbynett.no>: May 31 08:33AM +0200


> One proposal is to make intmax_t mean int64_t, and leave it at that.
> Have no requirement that integer types can't be larger. No more ABI
> problem.
 
It might make more sense to tie it to "long long int" rather than
"int64_t", but someone would first have to check if it affected any real
implementations before making such a change. But yes, that might be a
way out and a way forward.
 
>> backwards compatibility.
 
> Hardly "most purposes", far from it. Without compiling with "-std=gnu++11",
> you don't even have std::numeric_limits<__int128>.
 
Well, it /is/ a gcc extension - choosing to enable it on the command
line makes sense to me. But I was thinking of the core language, rather
than the library, which can be somewhat independent of the compiler itself.
 
> languages such as rust with better type support see rapid growth
> of open source libraries that cover all manner of data interchange
> standards, C++ is comparatively stagnant.
 
Those relatively few programs that have need of int128_t can simply do a
typedef. It won't magically allow literals of the type, but it will
cover most cases.
"daniel...@gmail.com" <danielaparker@gmail.com>: May 31 08:21AM -0700

On Monday, May 31, 2021 at 2:33:52 AM UTC-4, David Brown wrote:
 
> Those relatively few programs that have need of int128_t can simply do a
> typedef. It won't magically allow literals of the type, but it will
> cover most cases.
 
A typedef? You've lost me.
 
Daniel
David Brown <david.brown@hesbynett.no>: May 31 06:13PM +0200

>> cover most cases.
 
> A typedef? You've lost me.
 
> Daniel
 
typedef signed __int128 int128_t;
typedef unsigned __int128 uint128_t;
 
You can wrap them in #ifdef's to check for gcc and for support of the
128-bit almost-integer types (and perhaps to check that your target
doesn't already have a standard int128_t type, as it will if "long
long" is 128 bits).
Paavo Helde <myfirstname@osa.pri.ee>: May 25 07:47AM +0300

25.05.2021 04:52 Lynn McGuire kirjutas:
 
>> Thanks,
>> Lynn
 
> And I am building a Win32 program.  Not a Win64 program.
 
The usable memory space in a Windows 32-bit program is limited to 2GB.
It can be increased to 3GB, but this does not buy you much.
 
Catching std::bad_alloc as you have done in other responses is trivial,
but now what? Your program still does not work as expected.
 
If you are dealing with strings in GB range then you really should start
thinking of switching over to x64 compilation (this will involve some
64-bit bugfixing if this is your first time). Either that, or you need
to redesign your code to read and write files in smaller pieces, which
might be a lot of work.
 
Also, ftell() returns a signed 32-bit value in Windows, so what you have
written here will cease to work when your files grow larger than 2 GB.
I suggest always using 64-bit alternatives for handling file sizes and
positions, even in 32-bit programs. Unfortunately these alternatives are
not portable, and one needs to take some extra care to support
different OSes.
 
It's also strange to base the program logic on the size of an *output*
file. But there are probably reasons.
Juha Nieminen <nospam@thanks.invalid>: May 27 12:24PM


>>Why don't you just fuck off, asshole? You aren't contributing to the
>>dicussion. You are just being an asshole.
 
> Good morning Mr Happy, things going well in Finland today?
 
Fuck off, asshole.
Bonita Montero <Bonita.Montero@gmail.com>: May 31 06:16AM +0200

You're so ultimately stupid.
Marcel Mueller <news.5.maazl@spamgourmet.org>: May 31 06:33PM +0200

Am 30.05.21 um 13:18 schrieb Bonita Montero:
> }
 
> Can anyone tell me whether the lambda has three pointers (24 bytes
> on 64 bit systems) instead of just one pointer inside the stack-frame,
 
A generic function pointer typically takes 3 machine-size words to
support all cases of polymorphism. Note that a lambda with captures is
in fact equivalent to a class with a (virtual) interface function with
the lambda signature. So we are talking about a /member function
pointer/ in general. Due to multiple inheritance and implicit upcasts
this is not simply a pointer to code.
 
> which could be an easy optimization ?
 
There is nothing you can do. The compiler on the other side is not
required to allocate the storage unless it is really needed.
 
 
Marcel
Bonita Montero <Bonita.Montero@gmail.com>: May 31 06:50PM +0200

> the lambda signature. So we are talking about a /member function
> pointer/ in general. Due to multiple inheritance and implicit upcasts
> this is not simply a pointer to code.
 
The three pointers have a fixed relationship to each other so that the
compiler could store a single pointer somewhere inside the stackframe
and address the three variables as [reg+a], [reg+b] and [reg+c].
Marcel Mueller <news.5.maazl@spamgourmet.org>: May 31 08:27PM +0200

Am 31.05.21 um 18:50 schrieb Bonita Montero:
[Lambda pointer]
> The three pointers have a fixed relationship to each other so that the
> compiler could store a single pointer somewhere inside the stackframe
> and address the three variables as [reg+a], [reg+b] and [reg+c].
 
No they have not. You may assign any other function pointer to the
function type f later.
 
 
Marcel
red floyd <no.spam.here@its.invalid>: May 31 12:41PM -0700

On 5/30/2021 9:16 PM, Bonita Montero wrote:
> You're so ultimately stupid.
 
Hint: an ad hominem reply means you have lost the debate
Bonita Montero <Bonita.Montero@gmail.com>: May 31 06:18AM +0200

> error. I am very very happy that you suggested that I look into that. I
> was able to completely abolish the contradiction.
> https://www.researchgate.net/publication/351947980_Refutation_of_Halting_Problem_Diagonalization_Argument
 
Your halting-problem issues interest no one here.
They apply to any language, so they're off-topic here.
olcott <NoOne@NoWhere.com>: May 31 12:12AM -0500

On 5/30/2021 11:18 PM, Bonita Montero wrote:
>> https://www.researchgate.net/publication/351947980_Refutation_of_Halting_Problem_Diagonalization_Argument
 
> Your halting-problem issues interest no one here.
> They apply to any language, so they're off-topic here.
 
Kaz was my best reviewer.
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
Bonita Montero <Bonita.Montero@gmail.com>: May 31 07:24AM +0200


>> Your halting-problem issues interest no one here.
>> They apply to any language, so they're off-topic here.
 
> Kaz was my best reviewer.
 
Then Kaz made the same mistake as you.
olcott <NoOne@NoWhere.com>: May 31 12:35AM -0500

On 5/31/2021 12:24 AM, Bonita Montero wrote:
>>> They're generic to any language, so they're off-topic here.
 
>> Kaz was my best reviewer.
 
> Then Kaz made the same mistake as you.
 
Kaz suggested that I study diagonalization and now I can easily refute
this much simpler proof. My refutation is on page 1 and Sipser's whole
proof is on page 2. So far the only critiques have been about punctuation.
 
https://www.researchgate.net/publication/351947980_Refutation_of_Halting_Problem_Diagonalization_Argument
 
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
Bonita Montero <Bonita.Montero@gmail.com>: May 31 08:14AM +0200

> this much simpler proof. My refutation is on page 1 and Sipser's  whole
> proof is on page 2. So far the only critiques have been about punctuation.
> https://www.researchgate.net/publication/351947980_Refutation_of_Halting_Problem_Diagonalization_Argument
 
Are you stupid? It doesn't matter what Kaz said.
THIS IS THE WRONG PLACE!
olcott <NoOne@NoWhere.com>: May 31 01:34AM -0500

On 5/31/2021 1:14 AM, Bonita Montero wrote:
>> https://www.researchgate.net/publication/351947980_Refutation_of_Halting_Problem_Diagonalization_Argument
 
> Are you stupid? It doesn't matter what Kaz said.
> THIS IS THE WRONG PLACE!
 
If here really was the wrong place then I would not have been able to
get Kaz to come back to comp.theory by posting here.
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein