- Anyone have any experience with __int128 for g++ or clang? - 4 Updates
- int8_t and char - 4 Updates
- Leigh Johnston Impersonation Posts - 4 Updates
- BlockingCollection: C++11 thread safe collection class - 1 Update
- Defining priority_queue objects - 2 Updates
| Daniel <danielaparker@gmail.com>: Oct 09 01:04PM -0700 Compiling on Travis using g++ 4.8 and above (up to 8), and clang 3.8 and above (up to 6), on x64 and Ubuntu, I see std::numeric_limits<__int128>::is_specialized evaluate to false (unexpected). On the other hand, using clang xcode 6.4 and above, I see std::numeric_limits<__int128>::is_specialized evaluate to true (as expected). I'm fairly sure all of the tests should evaluate to true. Any suggestions as to why I'm getting these results are welcome. (Or a referral to a more appropriate group.) Daniel |
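A minimal sketch for reproducing the observation (it assumes a g++/clang target that provides the __int128 extension, such as x86-64 Linux, and that the relevant difference is -std=c++NN versus -std=gnu++NN):

    #include <iostream>
    #include <limits>

    int main()
    {
        // Reported false with g++/clang under -std=c++NN on Linux,
        // true under -std=gnu++NN and with Apple's Xcode clang.
        bool spec = std::numeric_limits<__int128>::is_specialized;
        std::cout << std::boolalpha << spec << '\n';
    }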
| "Öö Tiib" <ootiib@hot.ee>: Oct 09 01:53PM -0700 On Tuesday, 9 October 2018 23:05:08 UTC+3, Daniel wrote: > Compiling on travis using g++ 4.8 and above (up to 8), and clang 3.8 and above (up to 6), on x64 and Ubuntu, I see std::numeric_limits<__int128>::is_specialized evaluate to false (unexpected.) > On the other hand, using clang xcode 6.4 and above, I see std::numeric_limits<__int128>::is_specialized evaluate to true (as expected.) > I'm fairly sure all of the tests should evaluate to true. Any suggestions why I'm getting these results welcome. (Or referral to a more appropriate group.) I remember there were some discussions about it few years ago. Perhaps in some of gcc mailing lists. My understanding of it was such that when the __int128 was introduced it was not extended integer type at first. Reason was that the compiler that had (and used) it could not immediately follow all requirements that standards imposed upon extended integer types. Once the support was implemented to comply with requirements it was made into extended integer type too. I'm not saying that you should take it at face value ... perhaps you can search the archives. https://gcc.gnu.org/lists.html |
| David Brown <david.brown@hesbynett.no>: Oct 09 11:12PM +0200 On 09/10/18 22:53, Öö Tiib wrote: > integer type too. > I'm not saying that you should take it at face value ... perhaps you > can search the archives. https://gcc.gnu.org/lists.html IIRC there were a few things that made them reluctant to classify __int128 as an extended integer type - no integer constant suffix and no printf/scanf support. I don't think these are required by the standard, but they would be something users would want. But if gcc declared __int128 to be an extended integer type, then intmax_t would have to be changed to 128-bit, and that could easily break existing code and ABIs. That does not stop you using the type; it merely means that it is a "compiler extension" and not an "extended integer type" as described in the standards. |
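As an aside on the missing printf/scanf support and the missing literal suffix mentioned above: a common workaround is a small hand-rolled conversion routine. The helper below is only an illustrative sketch, not part of any library:

    #include <cstdio>
    #include <string>

    // Convert an unsigned __int128 to decimal text by peeling off digits.
    static std::string to_string_u128(unsigned __int128 v)
    {
        if (v == 0) return "0";
        std::string s;
        while (v != 0) {
            s.insert(s.begin(), static_cast<char>('0' + static_cast<int>(v % 10)));
            v /= 10;
        }
        return s;
    }

    int main()
    {
        // No 128-bit literal suffix exists, so large constants are built with shifts.
        unsigned __int128 big = (unsigned __int128)1 << 100;
        std::printf("%s\n", to_string_u128(big).c_str()); // 1267650600228229401496703205376
    }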
| Daniel <danielaparker@gmail.com>: Oct 09 04:05PM -0700 On Tuesday, October 9, 2018 at 4:53:53 PM UTC-4, Öö Tiib wrote: > that standards imposed upon extended integer types. Once the support > was implemented to comply with requirements it was made into an extended > integer type too. Thanks for the pointer. Following up on that, it seems that the g++ std::numeric_limits specializations for __int128 are not defined when __STRICT_ANSI__ is defined, which is the case when code is compiled with -std=c++NN rather than -std=gnu++NN. Given the existence of a 128-bit integer type, all I need is min() and max(). So a second question: would it be safe to forgo std::numeric_limits and rely on something like

    template <typename T>
    struct integer_traits
    {
        static const T __min = (((T)(-1) < 0)
            ? (T)1 << (sizeof(T) * 8 - ((T)(-1) < 0))
            : (T)0);
        static const T __max = (((T)(-1) < 0)
            ? (((((T)1 << ((sizeof(T) * 8 - ((T)(-1) < 0)) - 1)) - 1) << 1) + 1)
            : ~(T)0);
    };

Thanks, Daniel |
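A hedged sanity check of that idea (it assumes the integer_traits template above is in scope and a g++/clang target that provides __int128); the static_asserts compare the hand-rolled max against std::numeric_limits for types that are always specialized:

    #include <limits>

    static_assert(integer_traits<int>::__max ==
                  std::numeric_limits<int>::max(),
                  "int max mismatch");
    static_assert(integer_traits<unsigned long long>::__max ==
                  std::numeric_limits<unsigned long long>::max(),
                  "unsigned long long max mismatch");

    int main()
    {
        // numeric_limits<__int128> may not be specialized under -std=c++NN,
        // but the hand-rolled traits still provide usable bounds.
        unsigned __int128 umax = integer_traits<unsigned __int128>::__max;
        return umax == ~(unsigned __int128)0 ? 0 : 1;   // expects all bits set
    }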
| David Brown <david.brown@hesbynett.no>: Oct 09 10:12AM +0200 On 08/10/18 17:08, Tim Rentsch wrote: > The problem is that to other people it says something > different, and none of your arguments or biases is > ever going to change that. That is an important point, yes. I think few people will see it as significantly different, but I have to accept that /some/ people will. I can't give anything more than my own experience as a justification for thinking that "int8_t means 8-bit signed integer" is going to be the most common interpretation. Perhaps this is biased by my field of programming - in my line, the size-specific types are very heavily used, as are "home made" equivalents from before C99. You didn't answer my question - what names would you personally prefer or recommend for "small signed integer" and "8-bit -128 to +127 signed integer"? You have said you feel "int8_t" is a poor name, as it "says both too much and too little". Have you alternative suggestions? |
| jameskuyper@alumni.caltech.edu: Oct 09 09:36AM -0700 On Monday, October 8, 2018 at 12:11:09 PM UTC-4, Chris Vine wrote: > I don't think this one works much better than your last. > The words "there may also be implementation-defined extended signed > integer types" do not tell you what extended signed integer types are I left out a couple of other relevant clauses: "... The standard and extended signed integer types are collectively called signed integer types." (6.7.1p2) "The standard and extended unsigned integer types are collectively called unsigned integer types. ..." (6.7.1p3) This is what tells you everything you need to know about what extended integer types are, in order to cope with the possibility that your code might use them indirectly, such as through a typedef or template parameter. Everything that the standard says about signed integer types, or unsigned integer types, or integer types in general, or about arithmetic types - all of it applies to the extended integer types the same way it applies to the standard integer types. > or how they differ from the standard signed integer types (if they did > there would be no argument), just that they may exist. Here are some more relevant quotes that I left out: "The rank of any standard integer type shall be greater than the rank of any extended integer type with the same size. ... The rank of any extended signed integer type relative to another extended signed integer type with the same size is implementation-defined, but still subject to the other rules for determining the integer conversion rank." (6.7.4p1) The definitions of the extended integer types explain the main way in which they differ from the standard integer types: they are implementation-defined rather than defined by the standard. This is the other main thing you need to know in order to use them - where to go to find out their names: the implementation's documentation. The two clauses cited above are the only other way that they differ from standard types. In one sense you're correct - if an extended integer type was otherwise implemented exactly the same way as a standard integer type, it would still have to have a lower integer conversion rank, which would probably have some testable consequences. But it's entirely permissible for that to be the only difference between them, just as it is permissible for that to be the only difference between "short" and "int" (as used to be commonplace), or between "int" and "long" (as is currently commonplace) or between "short" and "long" (which, as far as I know, was unique to Cray implementations), or between "long" and "long long" (which has also occurred in many real-world implementations). |
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 09 07:25PM +0100 On Tue, 9 Oct 2018 09:36:43 -0700 (PDT) > integer types are, in order to cope with the possibility that your code > might use them indirectly, such as through a typedef or template > parameter. ... [snip] Of course it doesn't. Nor does the part of your posting concerning the fact that integer types have conversion ranks (which I have snipped for clarity's sake) lend anything at all to the point. The standard actually says very little about what integer types are, other than by specifying minimum ranges for them and requiring unsigned integers to implement a 2^n binary representation for overflow purposes, leaving the natural (implicit) meaning of "integer" and "integral" to carry the weight. It also says nothing about how, say, an "extended" signed integer differs from a "standard" signed integer, saying only that the extended signed integer type is implementation defined, and that it must have an unsigned analogue. The fact that it is implementation defined does not mean that the implementation can define anything it wants to as an "extended signed integer type". Each word must be examined for its meaning. The word "integer" carries the necessary implication that what is implementation defined must be capable of holding whole numbers in an exact form (it cannot be outwardly represented in floating point form). The word "signed" means that that form must be capable of representing negative numbers. Consideration of what implication (if any) the word "extended" carries, which is an issue on which two reasonable people could differ, has been followed by your dogmatic insistence, by reference to portions of the standard that say nothing about the issue, that the standard demands that "extended" must have no meaning at all. I disagree on that. |
| jameskuyper@alumni.caltech.edu: Oct 09 12:32PM -0700 On Tuesday, October 9, 2018 at 2:25:29 PM UTC-4, Chris Vine wrote: > On Tue, 9 Oct 2018 09:36:43 -0700 (PDT) > jameskuyper@alumni.caltech.edu wrote: .... > Of course it doesn't. Nor does the part of your posting concerning the > fact that integer types have conversion ranks (which I have snipped for > clarity's sake) lend anything at all to the point. You said that the definitions didn't explain the differences between extended and standard integer types. I started out writing that they fully explained the only difference between the two categories. Then I remembered that "only" was incorrect. I added the bit about conversion ranks so as to properly qualify my assertion that the definitions cover the main difference between those two categories. If I had only said "main difference" without explaining what the other differences were, it would only have led to questions. > carry the weight. It also says nothing about how, say, an "extended" > signed integer differs from a "standard" signed integer, saying only > that the extended signed integer type is implementation defined, Every statement the standard makes about signed integer types, integer types, or arithmetic types, is a statement that constrains the implementation of extended signed integer types - and there are a lot of statements of that kind scattered through the standard, especially section 7. For example, 7.6.9p2 implies that relational operators must be supported for extended integer types. You've touched on only a small fraction of all the things it says about such types. The standard doesn't impose any other constraints on the implementation of signed integer types. > ... and > that it must have an unsigned analogue. Which is NOT a difference from the standard signed integer types. > of the standard that say nothing about the issue, that the standard > demands that "extended" must have no meaning at all. I disagree on > that. No, the word "extended" in "extended integer types" refers very specifically to the fact that the types are defined by the implementation as an extension to C++. A type with the same arithmetic properties and representation as signed char, but with a different conversion rank and without any of the corresponding standard library overloads that treat it as a character type, would be a perfectly reasonable extension to C++ - I don't see how the concept of an extension could be interpreted as prohibiting such a type. |
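A small sketch of the "indirect use" point from earlier in the thread: generic code written against any signed integer type, via <type_traits> and std::numeric_limits, keeps working unchanged if a typedef such as std::int_fast32_t happens to name an extended integer type on some implementation, because those traits apply to extended types exactly as they do to standard ones:

    #include <cstdint>
    #include <limits>
    #include <type_traits>

    // True if a + b would overflow the signed integer type I.
    template <typename I>
    constexpr bool add_would_overflow(I a, I b)
    {
        static_assert(std::is_integral<I>::value && std::is_signed<I>::value,
                      "signed integer types only");
        return (b > 0 && a > std::numeric_limits<I>::max() - b) ||
               (b < 0 && a < std::numeric_limits<I>::min() - b);
    }

    static_assert(add_would_overflow<std::int_fast32_t>(
                      std::numeric_limits<std::int_fast32_t>::max(), 1),
                  "max + 1 overflows");
    static_assert(!add_would_overflow(40, 2), "40 + 2 does not");

    int main() { return add_would_overflow(1, 2) ? 1 : 0; }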
| David Brown <david.brown@hesbynett.no>: Oct 09 10:53AM +0200 On 08/10/18 17:55, Rick C. Hodgin wrote: >> insult our intelligence. > My concern is for posterity and Google searches unrelated to > the regulars on this group. That is a fair point. I think, however, that the damage to your reputation will be very minimal. No one searches C++ group archives for religious posts - those would be quickly skipped. No one searches for religious posts in a C++ group. And I think anyone who /did/ come across and read these posts would quickly establish that there are two different people posting with the same name. I fully appreciate the principle here, but don't expect it to be an issue in practice. |
| "Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Oct 09 02:25AM -0700 On Tuesday, October 9, 2018 at 4:53:14 AM UTC-4, David Brown wrote: > religious posts in a C++ group. And I think anyone who /did/ come > across and read these posts would quickly establish that there are two > different people posting with the same name. I do not think it's so obvious. In addition, I expect people to be searching for my name in general, unrelated to C++, unrelated to this group, but finding search results here. My clarification posts make it unambiguous for them. -- Rick C. Hodgin |
| gazelle@shell.xmission.com (Kenny McCormack): Oct 09 01:17PM In article <pphq9g$q3v$1@dont-email.me>, >> the regulars on this group. >That is a fair point. I think, however, that the damage to your >reputation will be very minimal. No one searches C++ group archives for It's not possible to damage Ricky's rep any further than he already has. As I've mentioned many times, Leigh is doing the best he can to rehabilitate Ricky's image, but it is obviously a hard, uphill struggle. Every good thing that Leigh does, Rick quickly undoes. It's one step forward, one step back - at best. -- People sleep peaceably in their beds at night only because rough men stand ready to do violence on their behalf. George Orwell |
| Thiago Adams <thiago.adams@gmail.com>: Oct 09 10:10AM -0700 > > abuse of this group. > If you killfile Rick you won't see my parody posts either; unfortunately this form of parody is the only weapon I have to stop the likes of Rick spamming this technical newsgroup with his religious garbage. The quality of the parody is quite high IMO: last night's bombing run was simply quotes of the great atheist Christopher Hitchens (modulo the final "Meta" post). > /Leigh I suggest everyone stop replying to Rick's on-topic posts as well. |
| gm127 <admin@codemachina.io>: Oct 09 05:04AM -0700 BlockingCollection is a C++11 thread-safe collection class that provides the following features:
- Implementation of the classic Producer/Consumer pattern (i.e. condition variable, mutex).
- Concurrent adding and taking of items from multiple threads.
- Optional maximum capacity.
- Insertion and removal operations that block when the collection is empty or full.
- Insertion and removal "try" operations that do not block, or that block up to a specified period of time.
- Insertion and removal "bulk" operations that allow more than one element to be added or taken at once.
- Priority-based insertion and removal operations.
- Encapsulates any collection type that satisfies the ProducerConsumerCollection requirement.
- Minimizes sleeps, wake-ups and lock contention by managing an active subset of producer and consumer threads.
- Pluggable condition variable and lock types.
- Range-based loop support.
https://github.com/CodeExMachina/BlockingCollection |
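For readers wondering what the "classic Producer/Consumer pattern (condition variable, mutex)" bullet amounts to, here is a minimal bounded-queue sketch using only the standard library. It is not the library's own API (see the linked repository for that), just the underlying idiom the class packages up with extra features:

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>
    #include <utility>

    template <typename T>
    class bounded_queue
    {
    public:
        explicit bounded_queue(std::size_t capacity) : capacity_(capacity) {}

        void add(T item)                       // blocks while the queue is full
        {
            std::unique_lock<std::mutex> lock(mutex_);
            not_full_.wait(lock, [&] { return queue_.size() < capacity_; });
            queue_.push(std::move(item));
            not_empty_.notify_one();
        }

        T take()                               // blocks while the queue is empty
        {
            std::unique_lock<std::mutex> lock(mutex_);
            not_empty_.wait(lock, [&] { return !queue_.empty(); });
            T item = std::move(queue_.front());
            queue_.pop();
            not_full_.notify_one();
            return item;
        }

    private:
        std::mutex mutex_;
        std::condition_variable not_full_;
        std::condition_variable not_empty_;
        std::queue<T> queue_;
        std::size_t capacity_;
    };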
| Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Oct 08 11:41PM -0400 Öö Tiib wrote: > Default-constructed std::greater<int> does compare two ints so that > constructor argument may be omitted. Default-constructed lambda with > two int parameters that returns bool however does not make sense, Why? A lambda is by and large syntactic sugar for an old-fashioned function object, which could perfectly well have a default constructor regardless of the operator()'s parameter and return types. > so there you must pass actual lambda of that type to priority > queue's constructor. I think it's not the standard library's restriction but the language's. I hear it is going to be relaxed in C++20 so that std::priority_queue<int, std::vector<int>, decltype(cmp)> q3; will become legal. -Pavel |
| "Öö Tiib" <ootiib@hot.ee>: Oct 09 12:17AM -0700 On Tuesday, 9 October 2018 06:41:38 UTC+3, Pavel wrote: > Why? Lambda is by and large syntactic sugar for an old-fashion function object > that can perfectly default constructor regardless of the operator()'s parameter > and return types. On general case we can make run-time values and references to be part of object not its type. Lambdas may capture those. So lambda is of unspecified type that is not required to be default-constructive. > going to be relaxed in C++ 20 so that > std::priority_queue<int, std::vector<int>, decltype(cmp)> q3; > will become legal. There you may be correct of course. The paragraphs and diagnostics about lambda and its capture can be likely never confusing enough for franchised tastes. |
| You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |