- Latest compiler and/or Boost.Variant issue with forward declarations - 5 Updates
- Memory model of C++ considered limited - 5 Updates
- How much different is it to insist that "C is not a high level assembler" vs. insisting the God shit (Was: if cargo blocks) - 4 Updates
- Learning modern C++, where to start? - 2 Updates
Michael Powell <mwpowellhtx@gmail.com>: Jan 10 11:16AM -0800 On Thursday, January 10, 2019 at 1:44:00 PM UTC-5, Mr Flibble wrote: > you writing your own? Even I have written one which is implemented in > terms of std::variant: > https://github.com/i42output/neolib/blob/master/include/neolib/json.hpp That's an interesting approach. A lot of type definitions going on there, including the self_type, basically the recursive "value type" use case. I did consider std::variant at one point on another project, however boost::variant is a requirement of sorts since I will be turning that around in a Boost Spirit Qi grammar. Appreciate the feedback, though. |
jameskuyper@alumni.caltech.edu: Jan 10 01:59PM -0800 On Thursday, January 10, 2019 at 1:39:52 PM UTC-5, Michael Powell wrote: > In terms of JSON AST modeling, the rub appears to be in a couple of places: defining the JSON AST Array, and making the JSON AST Object available to Value. Re: semantic terminology, Value is interchangeable with Element, in view of the JSON grammar. > Here's my attempt at a flattened single source example: > https://wandbox.org/permlink/83c3VXZ4W1DHoEBc Your example contains no instances of the identifier "kingdom", so it cannot be the actual source code that causes that error message shown above. Can you provide code, as short and as simple as possible, that does generate that message when compiled? It would be much easier to diagnose your problem if you could provide a proper example of it. The bottom of that web page contains error messages generated from your example code. Taking the first message, it points out that on line 100, your code refers to other.sign, when the relevant type has no member named "sign". Could you explain why you thought that it should have a member named "sign"? |
Michael Powell <mwpowellhtx@gmail.com>: Jan 10 02:39PM -0800 > above. Can you provide code, as short and as simple as possible, that > does generate that message when compiled? It would be much easier to > diagnose your problem if you could provide a proper example of it. That was the error I received under a local build. The Wandbox example is an attempt at an MRE. > your code refers to other.sign, when the relevant type has no member > named "sign". Could you explain why you thought that it should have a > member named "sign"? Typo on my part. Should be base_type and not sign_type. With corrections. https://wandbox.org/permlink/EZ4DtRXFlOm20Ojs I've worked through some of this in my local project, sort of, and am now landing in Boost Spirit Qi territory; however, what I am finding is that the forward declarations are a problem when it comes to leveraging Boost.Variant. The best I could come up with was to use loosely joined Tuples, Vectors of Tuples, etc., when it comes to modeling the JSON Object and so forth. It seems a bit clunky to me, but it will probably work. Concerning Sign itself, yes, I am planning to support an enhanced parsing of the JSON specification that includes, potentially, floating point, NaN, [+/-]Infinity, and so on. That is, unless there is something about natural C++ language support for Double Infinity that I'm unaware of. Cheers. |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 10 06:43PM On 10/01/2019 18:39, Michael Powell wrote: > I'm not sure how better to sort this out. At minimum, at least Value needs a forward declaration depending on how I arrange my headers, includes, etc. But then I still run into Value forward declaration undefined type issues. > I'm not sure it's as much a language issue as much as it is possible a Boost.Variant issue, per se. > Suggestions? There are plenty of pre-existing C++ JSON libraries out there so why are you writing your own? Even I have written one which is implemented in terms of std::variant: https://github.com/i42output/neolib/blob/master/include/neolib/json.hpp /Flibble -- "You won't burn in hell. But be nice anyway." – Ricky Gervais "I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais "Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Bryne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?" "I'd say, bone cancer in children? What's that about?" Fry replied. "How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil." "Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say." |
Michael Powell <mwpowellhtx@gmail.com>: Jan 10 03:06PM -0800 On Thursday, January 10, 2019 at 5:39:55 PM UTC-5, Michael Powell wrote: > I've worked through some of this in my local project, sort of, and am now landing in Boost Spirit Qi territory; however, what I am finding is that the forward declarations are a problem when it comes to leveraging Boost.Variant. The best I could come up with was to use loosely joined Tuples, Vectors of Tuples, etc., when it comes to modeling the JSON Object and so forth. It seems a bit clunky to me, but it will probably work. > Concerning Sign itself, yes, I am planning to support an enhanced parsing of the JSON specification that includes, potentially, floating point, NaN, [+/-]Infinity, and so on. That is, unless there is something about natural C++ language support for Double Infinity that I'm unaware of. > Cheers. The best I could come up with was something like this; it builds and runs, anyway. https://wandbox.org/permlink/P9Xel2Oiiy0O8Ygr Which I confirmed in a local build. The issues I am visiting now have to do with Boost Spirit Qi and "incompatible Skippers". However, it seems to me that forward declarations are a problem, but I don't have a ready answer as to why. |
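For readers hitting the same forward-declaration wall, one common way to build a self-referential JSON value type on boost::variant is the boost::make_recursive_variant metafunction, where boost::recursive_variant_ stands in for the not-yet-complete variant type. The sketch below is illustrative only (the type names are invented, not Michael's actual code) and has not been exercised against a Spirit Qi grammar:

    #include <boost/variant/recursive_variant.hpp>
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    // Illustrative JSON AST: recursive_variant_ is replaced by the finished
    // variant type, so the array and object alternatives can mention the value
    // type before it is complete, with no hand-written forward declarations.
    using Value = boost::make_recursive_variant<
        std::nullptr_t,                                   // null
        bool,                                             // true / false
        double,                                           // number
        std::string,                                      // string
        std::vector<boost::recursive_variant_>,           // array of Values
        std::map<std::string, boost::recursive_variant_>  // object: name -> Value
    >::type;

    using Array  = std::vector<Value>;
    using Object = std::map<std::string, Value>;

    // Usage, e.g.:
    //   Value v = Object{ { "answer", Value(42.0) },
    //                     { "tags",   Value(Array{ Value(std::string("json")) }) } };

Whether this plays nicely with a particular Qi grammar is a separate question; Spirit's attribute propagation has its own requirements.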
michael.podolsky.rrr@gmail.com: Jan 10 11:31AM -0800 On Thursday, January 10, 2019 at 1:46:26 PM UTC-5, Chris Vine wrote: > std::atomic<int> with relaxed memory ordering, save that the first is > not standard conforming (but works) and the second is standard > conforming and also works. The code emitted will be identical. Well, my point was to discuss the C++ Standard, not what can be done if we deviate into non-standard, platform-dependent territory. If programming outside the standard, I can try to use "volatiles", rely on the atomicity of the "int" type, do whatever else is possible, and then claim that my code is 100% safe after analyzing the disassembly output. Or I can use std::atomic<int> [which may still be a bit awkward and too low-level if I actually have other types of data in my buffer, like doubles] and see that it does not produce any additional burden if accessed with relaxed operations on my platform. If I program based on the C++ standard (and not based on the particular platform features), I cannot rely on that, and my point is that the C++ standard does not cover well the needs of the Seqlock algorithm. |
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 10 09:17PM On Thu, 10 Jan 2019 11:31:57 -0800 (PST) > If I program based on the C++ standard (and not based on the > particular platform features), I cannot rely on that, and my point > is that the C++ standard does not cover well the needs of the Seqlock algorithm. Well, I think the C++ memory model deals with the issue as well as it can apart from the point (with which I agree) that perhaps more could be said in the standard about the requirement for relaxed memory ordering not to synchronize where the hardware is atomic on the data type in question. The Boehm paper you posted was interesting but it came up with two solutions within the C++ standard, the first of which is the one which would occur to me (the second was too subtle). The point about the overconstraining nature of acquire/release atomics comes up in other contexts. Take this example of the humble use of an atomic flag to spin on:

    std::string A;
    std::atomic<bool> Ready(false);

    Thread 1:
        A = "42";
        Ready.store(true, std::memory_order_release);

    Thread 2:
        while (!Ready.load(std::memory_order_acquire));
        std::string B = A;  // B is guaranteed to hold "42"

This is correct and looks OK but on every iteration in the while loop a fence instruction may be emitted, depending on the processor architecture. You can eliminate that with relaxed ordering and a separate fence. Here an acquire synchronization is only executed once:

    std::string A;
    std::atomic<bool> Ready(false);

    Thread 1:
        A = "42";
        std::atomic_thread_fence(std::memory_order_release);
        Ready.store(true, std::memory_order_relaxed);

    Thread 2:
        while (!Ready.load(std::memory_order_relaxed));
        std::atomic_thread_fence(std::memory_order_acquire);
        std::string B = A;  // B is guaranteed to hold "42"

This is basically the same approach as you would take in the days of being stuck with volatile ints for flags. |
hjkhboehm@gmail.com: Jan 10 02:02PM -0800 [Michael P. pointed me at this thread. ] A few comments: This kind of issue has been discussed repeatedly by WG21/SG1, the concurrency subgroup of the C++ committee. See e.g. wg21.link/P0690 . Although it's surprising to most people, there are good reasons to require the seqlock data accesses to be atomic:

1) If the reader code does more than just copy data, it usually does care about accesses being indivisible. If I read a pointer and its target in the reader critical section, my code is likely to trap if I read the concatenation of two half-pointers rather than a real pointer. That failure isn't likely on most mainstream implementations, but it's otherwise allowed by the standard. We can sometimes live with byte-level atomicity, but the code has to be written very defensively. If you're accessing individual scalar or pointer fields, memory_order_relaxed accesses allow you to express your atomicity requirements, and I think they're commonly the right tool.

2) If the accesses are not marked atomic, you're lying to the compiler about what's going on. The consequences of that may be unfortunate. If I write the following in the reader critical section, where unsigned_int_field and ten_element_array are part of the shared data protected by the seqlock, and tmp is local:

    tmp = unsigned_int_field;
    if (tmp < 10) {
        ...
        ... = ten_element_array[tmp];
    }

the compiler may discover that it doesn't have room to keep tmp in a register. It can legitimately (assuming no intervening synchronization) decide that there is no point in saving the old value of tmp on the stack, when it can just be reloaded from unsigned_int_field again. That can result in an out-of-bounds array access and a segmentation fault. You would have discarded the result at the end of the seq_lock reader critical section, but you may not get that far.

Re: replacing acquire loads with explicit fences. There are cases when that makes sense. But aside from seq_locks they seem to be getting less common. Acquire loads impose less of an ordering constraint than a fence, and thus may be cheaper to implement in hardware. The most recent ARM processors seem to be starting to realize that potential. And on x86, the code is likely to be the same either way. |
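To make the shape of the algorithm under discussion concrete, here is a minimal, illustrative sketch of a fence-based seqlock along the lines of the solutions mentioned above. It assumes a single writer; the names and payload are invented, and this is a sketch rather than a vetted implementation:

    #include <atomic>

    std::atomic<unsigned> seq{0};         // even = stable, odd = write in progress
    std::atomic<int> data1{0}, data2{0};  // seqlock-protected data, accessed relaxed

    void write(int d1, int d2) {          // assumes a single writer
        unsigned s = seq.load(std::memory_order_relaxed);
        seq.store(s + 1, std::memory_order_relaxed);        // mark write in progress
        std::atomic_thread_fence(std::memory_order_release);
        data1.store(d1, std::memory_order_relaxed);
        data2.store(d2, std::memory_order_relaxed);
        seq.store(s + 2, std::memory_order_release);        // publish new version
    }

    bool try_read(int& d1, int& d2) {     // caller retries until this returns true
        unsigned s1 = seq.load(std::memory_order_acquire);
        if (s1 & 1) return false;                           // writer is active
        d1 = data1.load(std::memory_order_relaxed);
        d2 = data2.load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire);
        unsigned s2 = seq.load(std::memory_order_relaxed);
        return s1 == s2;                                    // unchanged version => coherent copy
    }

The data accesses are relaxed atomics, which is exactly the point being debated: they express the indivisibility requirement and keep the compiler honest, while the version-number check (described in the next post) is what guarantees the reader a coherent snapshot.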
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Jan 10 02:16PM -0800 >> conforming and also works. The code emitted will be identical. > Well, my point was to discuss the C++ Standard, not what can be done if we deviate to non-standard and platform-dependent things. If programming out of the standard, I can try to use "volatiles", rely on atomicity of "int" type, do whatever else possible and then claim that my code is 100% safe after analyzing the disassembly output. Or I can use std::atomic<int> [which may be still a bit awkward and too low-level if I actually have other types of data in my buffer, like doubles] and see it does not produce any additional burden if accessed with relaxed operations on my platform. > If I program based on the C++ standard (and not based on the particular platform features), I cannot rely on that and my point is that C++ standard does not cover well the needs of Seqlock algorithm. Wrt to strictly following the C++ standard, a seqlock does need its state to be comprised of atomics using relaxed memory order. The readers in such an algorithm simply do not care if there any concurrent writes. This is because of the way the seqlock algorithm handles its version numbers. When a reader notices that the version numbers read are valid wrt the rules of the algorithm itself, then it knows what it read is 100% coherent... Readers do not give a damn about any concurrent writes. Readers use the version number scheme to determine if it read a 100% coherent view of the state. I agree with Michael. |
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 10 06:46PM On Thu, 10 Jan 2019 10:11:30 -0800 (PST) > to "protect" an access to non-atomic data with the standard > synchronization primitives and achieves the correctness by the very > different means. I was covering my bases. Your "Now, of course we do not need our buffer to be an array of atomic memory cells, our protocol looks to care well about all the possible conflicts and simultaneous access to the buffer memory" seemed to involve some synchronization "protocol" which obviated the need for atomics. > Well, I don't see if this is related to my post either. I don't care > in my algorithm about spinlocks and if I cared, I would not have a > problem to make a spinlock around C++11 atomic data. See above. > But > 1. the point is the standard does not give me any guarantee that such an access > has the same effectiveness as an access to non-atomic memory. The standard provides no guarantee about volatile either as regards threads, which is your only alternative if you have no mutual exclusion and/or semaphores and/or spinlocks and don't trust std::atomic. However, for built-in types which on the particular platform in question are atomic at the hardware level, relaxed memory ordering will involve no synchronization operations in practice. Would it be nice to have something requiring that in the standard instead of being left as a quality of implementation issue? Yes, I think it would. > 2. Also, I cannot use strcpy() or sprintf() to write into my buffer. True. > then access to the "ddd" may demand either mutex/spinlock or "bus locked" > operation even for memory_order_relaxed model (surprise?) just to > guarantee the atomicity. Yes it might, but what alternative is there? > non-effective, causes massive memory copying and synchronization > (mutex/spinlock) and cannot be considered as a resolution candidate at > all. Memory copying is something completely orthogonal to your original posting so I don't really understand what your point here is. (Copying an array would almost certainly require some explicit locking anyway). In any event you could use plain arrays and rely on pointer decay to avoid copying if you want to pass by pointer. Possibly also an array of std::atomic<int> would suit you better than an atomic array of int. But as I say I don't understand your point here. However, the overarching issue is that an array of volatile ints will have the same advantages and disadvantages as an array of std::atomic<int> with relaxed memory ordering, save that the first is not standard conforming (but works) and the second is standard conforming and also works. The code emitted will be identical. |
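As a footnote to the array question raised above, the two layouts behave quite differently; this is a small illustrative comparison (not from the thread), assuming a platform where int is lock-free:

    #include <array>
    #include <atomic>
    #include <cstdio>

    int main() {
        // Element-wise atomicity: each int can be read/written with relaxed ordering.
        std::array<std::atomic<int>, 8> per_element{};
        per_element[3].store(42, std::memory_order_relaxed);

        // Whole-object atomicity: the array must be loaded/stored as one unit,
        // and the implementation will typically fall back to an internal lock.
        std::atomic<std::array<int, 8>> whole_array{};
        std::printf("whole-array lock-free? %d\n", (int)whole_array.is_lock_free());
    }

For a seqlock-protected buffer, the element-wise form is the one that lets each field be accessed with memory_order_relaxed as discussed earlier in the thread.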
gazelle@shell.xmission.com (Kenny McCormack): Jan 10 05:38PM In article <q17uqh$ej4$2@dont-email.me>, David Brown <david.brown@hesbynett.no> wrote: ... >Roughly speaking, "The end justifies the means" is not good ethics? >That seems reasonable. In Rick's case, the "means" are unethical, and >the "ends" a complete failure, so he fails on both counts. Yes, I get it. And I like what you posted immediately previous to this - to the effect that all Rick is doing is annoying people and making them all the more not want to end up like him. But... Why do you insist (in another thread on these forums) that C is not a high level assembler, making sure to be as insulting as possible to those who think of it that way? Surely, you understand that when people say that C is a high level assembler, they mean it in a certain way and that that certain way makes sense, both to them and to others. What's your agenda in publicly insisting, over and over ad nauseam, to the contrary? Is your agenda similar in nature to Rick's (although your chosen content and his are quite different)? Note, BTW, that the term "high level assembler" is obviously, taken literally, an oxymoron, much like "jumbo shrimp". So, obviously, when someone uses that term, they are not intending it literally. -- > No, I haven't, that's why I'm asking questions. If you won't help me, > why don't you just go find your lost manhood elsewhere. CLC in a nutshell. |
David Brown <david.brown@hesbynett.no>: Jan 10 06:16PM +0100 On 08/01/19 22:13, Rick C. Hodgin wrote: > I have turned my back on David because of his treatment of me over > years. I still pray for him, as I do for you, Leigh. Your name is > often heard in my prayers, along with many others from these forums. And you have been told many times that I do not want your "prayers". It is this kind of personal abuse that annoys people - and makes sure that nobody ever wants to be the kind of fanatic that you are. |
David Brown <david.brown@hesbynett.no>: Jan 10 06:19PM +0100 On 09/01/19 20:29, Neil Cerutti wrote: >> making things worse for yourself and everyone else? > The actual outcome of an action is inadmissible evidence in any > Christian ethical system (cf. Bertrand Russell). Roughly speaking, "The end justifies the means" is not good ethics? That seems reasonable. In Rick's case, the "means" are unethical, and the "ends" a complete failure, so he fails on both counts. |
Neil Cerutti <neilc@norwich.edu>: Jan 10 09:19PM > ethics? That seems reasonable. In Rick's case, the "means" are > unethical, and the "ends" a complete failure, so he fails on > both counts. It's a defect of Christian ethics, according to Russell. You cannot justify an action by the practical effects it has. Bringing it back to C++: in Christian C++, certification that you were following the letter of the standard would be critical, while testing would be viewed with suspicion. -- Neil Cerutti |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 10 05:12PM On Tue, 2019-01-08, Manfred wrote: > On 1/7/2019 11:44 PM, Jorgen Grahn wrote: >> On Sat, 2019-01-05, Unto Sten wrote: ... >> and (years later) people started using them properly. > This sounds a bit simplistic, I think. If C++98 was a major > breakthrough, C++11 was somewhat of a major revision too. It's all rather subjective, but to me C++11 isn't /that/ big of a deal since it didn't fundamentally change my designs. And regarding the OP's "incorrect C++", my pre-C++11 code isn't suddenly incorrect, or even archaic. > So, if one wishes to catch up today, I would recommend the 4th edition > to start with. I agree. But at the same time, people should not avoid C++ because it seems to be undergoing drastic changes. ... >> I do use C++11 features (auto, ranged-for, uniform initializers, >> override, some of the library additions ...) I should have mentioned lambdas here, too. They are fairly easy to understand, and make <algorithm> more useful in practice, among other things. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
Manfred <noname@add.invalid>: Jan 10 08:31PM +0100 On 1/10/2019 6:12 PM, Jorgen Grahn wrote: > since it didn't fundamentally change my designs. And regarding the > OP's "incorrect C++", my pre-C++11 code isn't suddenly incorrect, or > even archaic. One of the greatest features of C++ is its reliability, in the sense of its backward compatibility guarantee, which keeps C++ code valid for a long time - which makes it suitable for serious applications. Still today, or even after C++20, you can write C-like or "C with classes" C++ code (I mean it is possible, not that you actually do) and rely on it producing correct executables. But even though C++11 did not render the billions of lines of existing C++ code obsolete, it still introduced changes that I (and many others) consider relevant enough to put the language into a new perspective. Bjarne himself said it can almost be seen as an entirely new language. From this point of view, the fact that "C with classes" code is still valid C++ code is even more impressive, but does not change the fact. Some examples: constexpr, traits and type support (enable_if), function objects (including lambdas), move semantics, variadic templates, standardized smart pointers and so forth do place the language in a new perspective, which is best learnt from the start rather than caught up on afterwards. >> to start with. > I agree. But at the same time, people should not avoid C++ because > it seems to be undergoing drastic changes. Not at all. As I said, one of the greatest values of C++ is that your code will keep running for a very long time. I, like many others, have quite some code from the '90s which is still recompiled and run today. By the way, Bjarne is aware of the risk of too much growth too - look for "remember the Wasa" > I should have mentioned lambdas here, too. They are fairly easy to understand, and make <algorithm> more useful in practice, among other things. Good point that you mention "easy to understand" features. I think the most effective improvements in C++ are those that, even when opening wide new perspectives, are still essential and thus easy to grasp. This adds to their effectiveness. |
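To ground a couple of the features listed above, here is a tiny illustrative C++11 snippet (mine, not from the thread) showing auto, a lambda used with <algorithm>, ranged-for and a standardized smart pointer:

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <vector>

    int main() {
        std::vector<int> v{5, 2, 8, 1};

        // A lambda replaces the hand-written comparator functor of C++98.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });

        for (auto n : v)              // ranged-for plus auto
            std::cout << n << ' ';
        std::cout << '\n';

        // unique_ptr: ownership without a manual delete.
        auto p = std::unique_ptr<int>(new int(42));
        std::cout << *p << '\n';
    }

None of this invalidates pre-C++11 code, which is rather the point made above; it just makes the newer style noticeably terser.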
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |