- Visual Studio 2019 Edit-and-continue - 9 Updates
- Strange code-generation - 2 Updates
- "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett - 1 Update
- Conversion operator to pointer-to-array - 3 Updates
- Why does this work on Xcode ??? - 1 Update
- Initialization of std::vector<std::string> throws - 1 Update
- Problem solved: forward variable number of arguments precisely to a template method - 1 Update
- Provably unprovable eliminates incompleteness [ All infinite sets have the same cardinality ] - 4 Updates
- Is there something like MSVC's __assume with gcc? - 3 Updates
"Öö Tiib" <ootiib@hot.ee>: Sep 18 12:43AM -0700 On Tuesday, 17 September 2019 23:02:03 UTC+3, Rick C. Hodgin wrote: > [If anyone knows a better place to post this question, please let me > know and I'll take it there.] I don't think you will get a good (or even much different) answer to that question from anywhere. It is generally always the same. Q: Why does a software product not allow a certain usage scenario? A: Because support for that scenario hasn't been implemented. Q: Why hasn't it been implemented? A: Because the implementers put their effort into other features. Q: Why? A: Because the stakeholders prioritized other features as being of higher importance. Q: Why? A: Because they wanted to. |
David Brown <david.brown@hesbynett.no>: Sep 18 10:18AM +0200 On 18/09/2019 09:43, Öö Tiib wrote: > of higher importance. > Q: Why? > A: Because they wanted to. (Disclaimer - I don't use MSVC, don't use edit-and-continue, and don't do C or C++ coding on Windows. That may make me ignorant.) Let's give the developers the benefit of the doubt here - sometimes features are not implemented because doing so would be impossible, or at least ridiculously hard. And sometimes it is because there are few people who want the feature or care about it - it might seem a marvellous idea to one person but it is not going to be implemented unless there are many people who would find it useful. And occasionally it is because they think the feature is a misfeature, directly counter-productive, or will limit other features or future efforts. As far as I understand the way Windows works (and I could be wrong here), this is likely to be /very/ difficult to implement. I don't see it as being a very important feature - if you are debugging a running program and make changes to its code, leaving your debugger attached seems reasonable. (Perhaps MSVS only lets you use one debugger at a time, in which case it would be more of a hardship.) And it is easy to see a world of abuse possibilities if there were tools to let you attach to a running program, change the code, and disappear without a trace. |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 18 08:04AM -0400 On 9/18/2019 3:43 AM, Öö Tiib wrote: > of higher importance. > Q: Why? > A: Because they wanted to. If that's all it is I understand. If there's something I don't know about why it needs to remain attached, that's the thing I'd like to learn. From writing my own OS, debugger, and tools, I don't see a reason other than just a decision to keep the in-memory-app aligned with the on-disk-app, so they aren't out of sync. Seems to be a policy decision rather than a technical one. And that's fine. I'd just like to learn if it's otherwise (if it's otherwise). -- Rick C. Hodgin |
"Öö Tiib" <ootiib@hot.ee>: Sep 18 07:54AM -0700 On Wednesday, 18 September 2019 11:18:51 UTC+3, David Brown wrote: > unless there are many people who would find it useful. And occasionally > it is because they think the feature is a misfeature, directly > counter-productive, or will limit other features or future efforts. It is basically the same as what my Q/A tried to say. I did not imply that the developers or stakeholders are somehow bad. It makes sense to give expensive and rarely used features low priority, since the bang per buck is low. > time, in which case it would be more of a hardship.) And it is easy to > see a world of abuse possibilities if there were tools to let you attach > to a running program, change the code, and disappear without a trace. I also suspect that edit-and-continue results in quite a hack (not a normally linked binary) that relies on debugger features to stay alive, and so may be rather tricky to detach from. |
"Öö Tiib" <ootiib@hot.ee>: Sep 18 08:24AM -0700 On Wednesday, 18 September 2019 15:04:36 UTC+3, Rick C. Hodgin wrote: > with the on-disk-app, so they aren't out of sync. Seems to be a > policy decision rather than a technical one. And that's fine. I'd > just like to learn if it's otherwise (if it's otherwise). Edit and continue is quite a rare feature in compiled languages. Is that because of policy decisions or technical issues? I see it as a technically challenging feature. |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 18 11:30AM -0400 On 9/18/2019 10:54 AM, Öö Tiib wrote: > I also suspect that edit-and-continue results with quite a hack (not > normal linked binary) that uses features of debugger for to stay > alive and so may be rather tricky to detach from. It's not a quick hack. It is not a traditional binary. If you step through the edit-and-continue code, you'll see function calls going to a branching off place, and then going to their destination. This allows functions to be swapped out as they are changed. It's a very logical process. It uses some fixed formats like that, such as an array of effective vtable-like addresses, to know where to go for each function. Memory addresses for constants can be changed. Local variable counts and locations on the stack can be changed if it's not too radical. And code already on the call stack will still navigate back through the stale versions, only new calls will go to the new ones. Edit-and-continue is, by far, the greatest developer asset I have. In languages I code where I don't have edit-and-continue, it is sorely missed. -- Rick C. Hodgin |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 18 11:30AM -0400 On 9/18/2019 11:24 AM, Öö Tiib wrote: > The whole edit and continue is quite rare feature in compiled > languages. Apple added it to GCC back in the mid-2000s. They called it fix-and-continue, and it never went upstream, but they used it internally for a while, then stopped. -- Rick C. Hodgin |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 18 11:33AM -0400 On 9/18/2019 11:30 AM, Rick C. Hodgin wrote: > Apple added it to GCC back in the mid-2000s. They called it > fix-and-continue, and it never went upstream, but they used it > internally for a while, then stopped. An old reference: http://www.sourceware.org/ml/gdb/2003-06/msg00500.html -- Rick C. Hodgin |
Ian Collins <ian-news@hotmail.com>: Sep 19 08:17AM +1200 On 19/09/2019 03:24, Öö Tiib wrote: > The whole edit and continue is quite rare feature in compiled > languages. Is it because of policy decisions or technical issues? > I see it as technically challenging feature. Indeed. I can also see it as a good way to make malicious changes to a binary, so having to leave the debugger attached could be seen as a security feature. -- Ian. |
Tim Rentsch <tr.17687@z991.linuxsc.com>: Sep 18 08:47AM -0700 >> I think it would be even stupid to include that into the standard. > In standard C++ one cannot legally construct a "null reference", so > the standard cannot mandate any behavior regarding them. [...] Sure it could. The Standard doesn't say that they can't exist, and even if they couldn't exist the Standard could still specify a behavior for how they must be treated. And it might even be useful to give such a specification, for implementations that choose to provide null references by means of an extension. |
Paavo Helde <myfirstname@osa.pri.ee>: Sep 18 09:06PM +0300 On 18.09.2019 18:47, Tim Rentsch wrote: > a behavior for how they must be treated. And it might even be > useful to give such a specification, for implementations that > choose to provide null references by means of an extension. By that logic the C++ standard could also prescribe how the SQL language or the invisible unicorn in my garage must behave, in case someone happens to incorporate them into their C++ implementation. More to the point, the standard contains the following verbiage: "A reference shall be initialized to refer to a valid object or function. [ Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the "object" obtained by indirection through a null pointer, which causes undefined behavior. ]" Without contradicting itself, the standard cannot say in one place "A reference shall be initialized to refer to a valid object" and "behavior is undefined" and then go on and discuss in another place what happens if a program violates this "shall", thus defining the behavior. |
legalize+jeeves@mail.xmission.com (Richard): Sep 18 04:55PM [Please do not mail me a copy of your followup] Daniel <danielaparker@gmail.com> spake the secret code >> example. >Still, I think it would be uncontroversial to say that C is still the #1 >choice over C++ for firmware/embedded system development? Yes, but mostly for sociological/historical reasons more than anything else. >When every byte counts? When every byte counts, people use assembly. As Stroustrup has stated, a design goal for C++ is that there should be no room for a language between C++ and assembly. C and C++ compilers have the same optimizations and in many cases with inline/templated code C++ compilers can outperform the combination of C compilers and libraries. (The canonical example is std::sort vs. qsort.) -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
Juha Nieminen <nospam@thanks.invalid>: Sep 18 07:37AM Is there a way to express this without using the type alias? This is one instance where my C++-fu isn't strong enough. struct S { using DArray4 = double(*)[4]; operator DArray4() { return nullptr; } }; |
Tim Rentsch <tr.17687@z991.linuxsc.com>: Sep 18 08:58AM -0700 > using DArray4 = double(*)[4]; > operator DArray4() { return nullptr; } > }; There is. |
"Öö Tiib" <ootiib@hot.ee>: Sep 18 09:32AM -0700 On Wednesday, 18 September 2019 10:37:23 UTC+3, Juha Nieminen wrote: > using DArray4 = double(*)[4]; > operator DArray4() { return nullptr; } > }; You may not use () or [] in the type identifier of a conversion operator. It is still possible without a type alias, but the solutions that I can produce look far worse than what you had. Example: #include <type_traits> struct S { operator std::common_type_t<double(*)[4]>() { return nullptr; } }; |
Tim Rentsch <tr.17687@z991.linuxsc.com>: Sep 18 08:50AM -0700 > Tim is a far better writer than I am [...] I'm not sure I deserve this compliment but I do very much appreciate it. |
Tim Rentsch <tr.17687@z991.linuxsc.com>: Sep 18 08:37AM -0700 > undefined or implementation-specific what default copy constructor does > on initializer_list because the data members of it are not defined by > the standard. This seems wrong to me. Please note I am not saying it IS wrong, only that it seems to me to be wrong. More specifically I don't think undefined behavior plays a role. I tried reading the Standard to verify that but found it impenetrable in this area. Can someone help find the relevant passages (and an accompanying explanation would also be appreciated)? > need something like = { 1, 2}; (note =) for {1,2} to be interpreted as > initializer_list; for vector<string> you should be able to initialize > with {"One", "Two"} directly. I'm pretty sure what you say about vector<int> isn't right. A declaration like vector<int> foo { 10, 1 }; gives a vector of length 2, not a vector of length 10. If the braces were parentheses vector<int> bas ( 10, 1 ); we would get a vector of length 10, using the ( count, value ) constructor. Disclaimer: the comments above are based solely on experimental results, not on understanding what the Standard says about it. |
Tim Rentsch <tr.17687@z991.linuxsc.com>: Sep 18 08:22AM -0700 > (o.*m)(args...); > } > }; Interesting. It took me a while to figure out what's going on and why this works. At first it looked exactly backwards, but in retrospect it makes sense. Kudos to your colleague. |
Juha Nieminen <nospam@thanks.invalid>: Sep 18 07:35AM >> theory, will you? > Keith Thompson proves that he fully understands the Infinitesimal > number system: Are you capable of understanding the "proof by contradiction" method in mathematics? Incidentally, the fact that there's no "smallest real number larger than 0" is one of the *classical* simple examples of applying proof by contradiction. "Infinitesimals", no matter how you define them, do not help here. You repeating your claims a million times does not change that fact. |
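The classical proof by contradiction Juha mentions takes only a few lines:

```latex
Suppose $r$ were the smallest real number larger than $0$, so $r > 0$.
Then $0 < r/2 < r$, since halving a positive real keeps it positive and
strictly decreases it. So $r/2$ is a real number larger than $0$ yet
smaller than $r$, contradicting the minimality of $r$. Hence no smallest
real number larger than $0$ exists. \qed
```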
"Öö Tiib" <ootiib@hot.ee>: Sep 18 04:22AM -0700 On Wednesday, 18 September 2019 10:35:47 UTC+3, Juha Nieminen wrote: > contradiction. > "Infinitesimals", no matter how you define them, do not help here. You > repeating your claims a million times does not change that fact. Replying to Peter Olcott seems not helping him to understand anything. BTW, is that about him?: https://www.youtube.com/watch?v=wfPPJBYc2B0 |
Mike Terry <news.dead.person.stones@darjeeling.plus.com>: Sep 18 03:41PM +0100 On 18/09/2019 12:22, Öö Tiib wrote: >> repeating your claims a million times does not change that fact. > Replying to Peter Olcott seems not helping him to understand anything. > BTW, is that about him?: https://www.youtube.com/watch?v=wfPPJBYc2B0 Yes. |
peteolcott <Here@Home>: Sep 18 10:02AM -0500 On 9/18/2019 2:35 AM, Juha Nieminen wrote: > contradiction. > "Infinitesimals", no matter how you define them, do not help here. You > repeating your claims a million times does not change that fact. YES OF COURSE NOTHING HELPS WHEN YOU DON'T BOTHER TO READ WHAT IS SAID THERE IS A POINT ON THE NUMBER LINE IMMEDIATELY ADJACENT TO 0 On 9/16/2019 11:38 PM, Keith Thompson wrote: -- Copyright 2019 Pete Olcott All rights reserved "Great spirits have always encountered violent opposition from mediocre minds." Albert Einstein |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 18 10:24AM +0200 I'm currently using the following macro: #pragma once #include <cassert> #if !defined(xassert) #if !defined(NDEBUG) #define xassert(e) (assert(e)) #else #if defined(_MSC_VER) #define xassert(e) (__assume(e)) #else #define xassert(e) ((void)0) #endif #endif #endif |
|