- (comp.lang.c) More musings on the spam problem... - 4 Updates
- A Java- / .NET-like monitor - 11 Updates
- Lambda passed through a std::sort recursion - 1 Update
| gazelle@shell.xmission.com (Kenny McCormack): Nov 19 11:23AM The good news: The spam problem (both the so-called "Thai spam" and the "mushroom spam") is gone from clc and clc++, because Google has (for reasons of its own) banned both groups. What's funny about this is that normally people on these groups would be p*ssed off at Google for banning them, but in this instance, it is a happy coincidence that it stops the spam. So, we're good with it. The bad news is that it is still alive and well in many of the other groups. Right now, comp.editors is getting slammed - about 1 spam per minute, 24/7. So, the question becomes, is there any way we can get Google to ban all newsgroups, not just these 2? -- The randomly chosen signature file that would have appeared here is more than 4 lines long. As such, it violates one or more Usenet RFCs. In order to remain in compliance with said RFCs, the actual sig can be found at the following URL: http://user.xmission.com/~gazelle/Sigs/FreeCollege |
| Mike Terry <news.dead.person.stones@darjeeling.plus.com>: Nov 19 05:18PM On 19/11/2023 11:23, Kenny McCormack wrote: > minute, 24/7. > So, the question becomes, is there any way we can get Google to ban all > newsgroups, not just these 2? I don't see disconnection from GG as a Good Thing in the long term. Many groups have the core of their followers using GG, and how many of them will be persuaded to migrate to Usenet? But the real problem as I see it is that for long-term survival groups need a stream of NEW users, and I'd guess close to 100% of those come via GG, hopefully to be persuaded later by regulars of the advantages of Usenet. When I started using the internet my ISP provided the connection, obviously, and a document explaining how to configure my computer to connect to their email and Usenet servers. So email, Usenet, and WWW were the 3 motivations for "getting the internet", and newsgroups were the place you went for general discussion. Those days are long gone, and whilst my current ISP still has a Usenet service (subcontracted to Giganews) there's absolutely no mention of it in their advertising, legal contracts, etc., and you have to hunt hard, knowing what you're looking for, to find any help pages for it. So no new internet users are going to think "Right, now how do I get a Usenet client, and where's my Usenet server?!" If they eventually find Usenet it will likely be via GG. Disconnection from GG will cut off the supply of new users - a kind of "kiss of death" for the long-term health of the group. Groups with an essentially static membership could obviously continue as they are for years, dying slowly as their members age and finally depart the group. It's like these groups are slowly dying anyway, so another nail in the coffin for long-term Usenet health is hardly a problem - they want the SPAM gone NOW... 
It would be better long term if Google could apply some better SPAM filtering technology, perhaps leveraging all their clever AI technology, to block the spam from entering the system in the first place. It's at the point of initial entry that SPAM can be handled with minimum hassle; once it's circulating around the system it's an order of magnitude more effort to deal with. [Yeah, I get that Usenet SPAM is not Google's priority!] Mike. |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 19 11:51AM -0800 On 11/19/2023 9:18 AM, Mike Terry wrote: > It would be better long term if Google could apply some better SPAM > filtering technology, perhaps leveraging all their clever AI > technology?, to block the spam entering the system in the first place. [...] I wonder if somebody writes this method A simply trumps method B. The AI says ahhh shit, trump, and blocks the message? lol. |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 19 12:25PM -0800 On 11/19/2023 3:23 AM, Kenny McCormack wrote: > The good news: The spam problem (both the so-called "Thai spam" and the > "mushroom spam") is gone from clc and clc++, because Google has (for > reasons of its own) banned both groups. Afaict, they made them read only. In the past, they actually tried to ban them wrt reads and writes. Damn it. |
| Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 18 08:09PM -0500 Scott Lurndal wrote: >> Again, what a system call that waits for many conditions to be met at >> once can be useful for? > Rendezvous. Is it even useful? I first got familiar with it in the late 80s while learning Ada but have never needed it since, even as a concept. > e.g. pthread_barrier_wait. It seems to work well as it is; my feeling is that implementing it in user space on top of some "wait_for_all_xxxs" (which is in itself too high-level for my taste) would not be efficient. After giving it some thought, I think I could write a reasonable implementation with 1 mutex, 1 condvar and 2 ints (unless some "predictable scheduling policy" is required -- but the implementation with "wait_for_all_xxxs" would have issues with it as well). |
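A minimal sketch of the barrier Pavel describes - one mutex, one condvar, and two ints (the arrival count and a generation number). The class and its names are mine, not from the post; the generation number is what makes the barrier reusable and robust against spurious wakeups:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch, not POSIX's pthread_barrier_t.
class Barrier {
public:
    explicit Barrier(int n) : threshold_(n), count_(n), generation_(0) {}

    void arrive_and_wait() {
        std::unique_lock<std::mutex> lk(m_);
        int gen = generation_;
        if (--count_ == 0) {   // last thread to arrive
            ++generation_;     // start a new cycle
            count_ = threshold_;
            cv_.notify_all();  // release everyone
        } else {
            // Wait until the cycle we arrived in has been completed.
            cv_.wait(lk, [&] { return gen != generation_; });
        }
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int threshold_, count_, generation_;
};

// Tiny demo: n threads meet at the barrier, then each bumps a counter.
inline int barrier_demo(int n) {
    Barrier b(n);
    std::mutex m;
    int after = 0;
    std::vector<std::thread> ts;
    for (int i = 0; i < n; ++i)
        ts.emplace_back([&] {
            b.arrive_and_wait();
            std::lock_guard<std::mutex> lk(m);
            ++after;
        });
    for (auto &t : ts) t.join();
    return after;  // equals n once every thread has passed the barrier
}
```

This matches Kaz's later point: since each "thread has arrived" event is a one-shot, a single counter is enough state to represent all N of them.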
| Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 18 08:18PM -0500 Kaz Kylheku wrote: > Also this: > while (!this_condition() && !that_condition() && !other_condition()) > pthread_cond_wait(&cond, &mutex); I had to solve similar tasks once in a while; I would usually keep a count of conditions left, or a mask of the wanted conditions as bits, and the thread satisfying a condition would only signal once the count reaches zero or the mask covers all conditions, respectively, to - avoid unnecessary wake-ups; - let waiters check that count or mask instead of doing multiple checks |
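The mask variant of the pattern Pavel describes can be sketched roughly like this (the class and its names are hypothetical, not from the post): each condition is one bit, setters OR their bit in, and the signal happens only when the last bit lands, so waiters do a single check and partial progress causes no wakeups:

```cpp
#include <condition_variable>
#include <mutex>

// Sketch: wait for ALL of several conditions, tracked as a bitmask.
class AllConditions {
public:
    explicit AllConditions(unsigned wanted) : wanted_(wanted), got_(0) {}

    // Called by the thread that satisfies the condition `bit` stands for.
    void satisfy(unsigned bit) {
        std::lock_guard<std::mutex> lk(m_);
        got_ |= bit;
        if (got_ == wanted_)     // signal once, when the mask is complete
            cv_.notify_all();
    }

    // Block until every wanted condition has been satisfied.
    void wait_all() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return got_ == wanted_; });
    }

    bool complete() {
        std::lock_guard<std::mutex> lk(m_);
        return got_ == wanted_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    unsigned wanted_, got_;
};
```

The count variant is the same idea with `--remaining_ == 0` in place of the mask comparison; the mask is needed when, as Kaz notes later, the conditions are not one-shots and you must know *which* ones hold.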
| Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 18 08:20PM -0500 Chris M. Thomasson wrote: >> I took Pavel to mean 'wait for all conditions to be true' rather than >> 'wait for one or more conditions to be true'. > Ahhh. I missed that. Sorry everybody! ;^o NP, Scott is correct, this was about "wait for many conditions to be met at once" that Bonita missed on Linux for some reason I still cannot get or relate to. |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 19 05:58AM +0100 Am 18.11.2023 um 20:24 schrieb Scott Lurndal: > e.g. pthread_barrier_wait. pthread_barrier_wait is for multiple threads joining on one event, not one thread waiting for multiple events as discussed. |
| Kaz Kylheku <864-117-4973@kylheku.com>: Nov 19 05:02AM >> e.g. pthread_barrier_wait. > pthread_barrier_wait is for multiple threads joining on one event, > not one thread waiting for multiple events as discussed. That is untrue; there are effectively N events in a barrier: - thread 1 has arrived at the barrier - thread 2 has arrived at the barrier ... - thread N has arrived at the barrier When all these events have occurred, then all threads at the barrier are released (and an indication is given to one of them that it is the special "serial thread"). -- TXR Programming Language: http://nongnu.org/txr Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal Mastodon: @Kazinator@mstdn.ca NOTE: If you use Google Groups, I don't see you, unless you're whitelisted. |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 19 07:00AM +0100 Am 19.11.2023 um 06:02 schrieb Kaz Kylheku: >> pthread_barrier_wait is for multiple threads joining on one event, >> not one thread waiting for multiple events as discussed. > That is untrue; there are effectively N events in a barrier: No, a barrier internally consists of an atomic which each thread decrements and if it wasn't the last thread decrementing the atomic it waits for a semaphore, a.k.a. event, which is signalled by the last thread which decremented the atomic. |
| Kaz Kylheku <864-117-4973@kylheku.com>: Nov 19 06:08AM >> That is untrue; there are effectively N events in a barrier: > No, a barrier internally consists of an atomic which each thread > decrements That's an implementation detail. Because the events are one-shots we can just use a counter to represent the state that we need for the barrier to be able to conclude that all events have occurred. Each "thread P is waiting for barrier" event is a one shot, because once a thread is waiting, that fact stays latched. If the waiting threads could get nervous and leave the barrier before it activates, then a counter could not be used; there would have to be a set representation like a bitmask. Rest unread. -- TXR Programming Language: http://nongnu.org/txr Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal Mastodon: @Kazinator@mstdn.ca NOTE: If you use Google Groups, I don't see you, unless you're whitelisted. |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 19 12:05PM +0100 Am 19.11.2023 um 07:08 schrieb Kaz Kylheku: > That's an implementation detail. ... That's how it works. So from the scheduling / kernel perspective there's only one event. |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 19 12:08PM -0800 On 11/19/2023 3:05 AM, Bonita Montero wrote: >> That's an implementation detail. ... > That's how it works. > So from the scheduling / kernel perspective there's only one event. I suspect that you are unfamiliar with how WaitForMultipleObjects works... I remember a test for 50,000 concurrent connections, that was mentioned by Microsoft. It was testing events vs IOCP. Did you know that they recommended altering the indexes of the events waited for by WaitForMultipleObjects? I remember it, is was a long time ago, 2001 ish, iirc. Iirc, it was recommended to basically randomize, or shift the indexes. God damn, I need to find that old post on msdn. Again, iirc, if we did not do this, some events could starve in the server experiment. The funny part is that the event based model performed pretty good, not as scalable as IOCP, however, IOCP did not completely blow it out of the water wrt performance and throughput. |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 19 12:11PM -0800 On 11/19/2023 12:08 PM, Chris M. Thomasson wrote: > The funny part is that the event based model performed pretty good, not > as scalable as IOCP, however, IOCP did not completely blow it out of the > water wrt performance and throughput. After I read that msdn post, I basically created a proxy server. One using events, and another using IOCP. IOCP beats it, but did not slaughter it at that time. I found that to be interesting. |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 19 12:17PM -0800 On 11/19/2023 12:11 PM, Chris M. Thomasson wrote: > After I read that msdn post, I basically created a proxy server. One > using events, and another using IOCP. IOCP beats it, but did not > slaughter it at that time. I found that to be interesting. That's way back when I remember using AIO for the HTTP proxy server over on Linux and compared it to IOCP. Time flies! |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 19 12:04PM +0100 I was interested in whether my C++ implementations pass the lambda you supply to std::sort through the recursion of sort by copying. libstdc++ (g++) and libc++ (clang++) copy the lambda at each recursion level. MSVC stores the lambda only once.

#include <iostream>
#include <vector>
#include <random>
#include <unordered_map>
#include <climits>
#include <algorithm>

using namespace std;

int main( int argc, char **argv )
{
    mt19937_64 mt;
    uniform_int_distribution<int> uid( INT_MIN, INT_MAX );
    vector<int> vi;
    vi.reserve( 1'000 );
    for( size_t i = vi.capacity(); i--; )
        vi.emplace_back( uid( mt ) );
    // count comparator invocations per distinct lambda copy, keyed by
    // the address of the by-value capture inside each copy
    unordered_map<void const *, size_t> addresses;
    bool invert = argc > 1;
    sort( vi.begin(), vi.end(),
        [&, invert = invert]( int lhs, int rhs )
        {
            ++addresses[&invert];
            if( !invert )
                return lhs < rhs;
            else
                return rhs < lhs;
        } );
    for( auto &pCount : addresses )
        cout << pCount.first << ": " << pCount.second << endl;
}

I think for std::sort it would be best to store the complete state the lambda refers to in a static or thread_local variable, so that the lambda itself doesn't need any state to be copied and there's actually no parameter passed through the recursion. |
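Another way to avoid the per-level copies (a suggestion of mine, not from the post) is to pass the comparator through std::ref: std::sort then copies only a pointer-sized std::reference_wrapper down its recursion instead of the lambda's captured state.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sketch: a stateful comparator wrapped in std::reference_wrapper.
// std::sort may copy the comparator internally, but here each copy is
// just the wrapper; the lambda itself exists exactly once.
inline std::vector<int> sort_with_ref(std::vector<int> v, bool invert) {
    auto cmp = [invert](int lhs, int rhs) {
        return invert ? rhs < lhs : lhs < rhs;
    };
    std::sort(v.begin(), v.end(), std::ref(cmp));  // copies the wrapper only
    return v;
}
```

std::reference_wrapper is callable whenever its target is, so this works with any C++11 implementation; the trade-off is one extra indirection per comparison.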
| You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |