- Operators for min/max, or min/max-assignment operators? - 4 Updates
- Question on app safety - 13 Updates
- Open letter to Mr Stroustrup - 5 Updates
- Data structure in C++ or C - 1 Update
- "What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++" - 1 Update
- Writing a wrapper class for a local object to pass to a function - 1 Update
Juha Nieminen <nospam@thanks.invalid>: Jun 21 07:26AM > auto max(auto a, auto b) { > return (a >= b) ? a : b; > } Wouldn't that be passing the variables by value, and returning the result by value? Copying may be a very heavy operation in some cases. (Also, in the case of C++, copying might be disabled completely.) |
Juha Nieminen <nospam@thanks.invalid>: Jun 21 07:28AM > Which is why it's traditional to write macro names in all-caps (MIN() > and MAX()) so the reader is reminded that they're macros and can have > odd interactions with side effects. Good thing that the C standard followed that convention with names like assert and FILE. |
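For readers who have not been bitten by the pitfall being alluded to: a function-like macro substitutes its arguments textually, so an argument with side effects may be evaluated more than once. A minimal sketch (this MAX definition is the classic textbook one, not taken from any particular codebase):

    #include <iostream>

    #define MAX(a, b) (((a) > (b)) ? (a) : (b))   // each argument may be evaluated twice

    int main()
    {
        int i = 1, j = 2;
        int m = MAX(i++, j++);   // expands to (((i++) > (j++)) ? (i++) : (j++))
        // j was incremented twice: once in the comparison, once when selected as the result.
        std::cout << "m=" << m << " i=" << i << " j=" << j << '\n';   // prints m=3 i=2 j=4
    }

The all-caps name is the conventional warning sign that this kind of double evaluation can happen.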
David Brown <david.brown@hesbynett.no>: Jun 21 10:06AM +0200 On 21/06/18 09:26, Juha Nieminen wrote: > Wouldn't that be passign the variables by value, and returning the result > by value? Copying may be a very heavy operation in some cases. (Also, in > the case of C++, copying might be disabled completely.) Yes, it is by value - but compilers will omit trivial copying when possible (such a function is likely to be inline). In C++, copy elision is allowed to skip copy constructors (but you can't make copies of uncopyable objects). A more complete implementation - such as for the standard library - would have const and constexpr, references, efficiency considerations (like trying to move rather than copy). |
Juha Nieminen <nospam@thanks.invalid>: Jun 21 12:33PM > possible (such a function is likely to be inline). In C++, copy elision > is allowed to skip copy constructors (but you can't make copies of > uncopyable objects). But suppose I wanted a reference to one of two large data containers, depending on which one compares "larger than": std::vector<T> hugeVector1, hugeVector2; ... std::vector<T>& bigger = max(hugeVector1, hugeVector2); I would prefer if max() took references and returned a reference, so that bigger would be a reference to either the original hugeVector1 or hugeVector2. For two reasons: Modifying 'bigger' would modify the original (rather than a temporary copy) and, of course, to elide needless copying (even if we aren't modifying anything). |
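A rough sketch of the reference-based max() being asked for here, simplified to a single type parameter (max_ref is a made-up name for illustration):

    #include <vector>

    template <typename T>
    T& max_ref(T& a, T& b)            // takes and returns references; no copies are made
    {
        return (a >= b) ? a : b;
    }

    int main()
    {
        std::vector<int> hugeVector1{1, 2, 3};
        std::vector<int> hugeVector2{4, 5};

        // 'bigger' aliases one of the original vectors.
        std::vector<int>& bigger = max_ref(hugeVector1, hugeVector2);
        bigger.push_back(99);         // modifies the original object, not a temporary copy
    }

Note that std::max already works this way with const references; a non-const variant like the sketch above is only needed when the result is to be modified through the reference, as in the example.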
Ben Bacarisse <ben.usenet@bsb.me.uk>: Jun 21 01:11AM +0100 >> } > I think the goal is to not get past the error condition without > crashing, but rather to diagnose it when/if it occurs. What error condition are you talking about? > In addition, your logic does not match my logic. In my case, pass > will be 4 when it exits the loop (provided there is no error). That was deliberate. In fact pass is not even in scope outside the loop. >> no need to add error cases that don't otherwise naturally arise. > I disagree. There should never be a case where a non 1..3 value is > encountered, but if there is it should be flagged. Why? Do you write this: for (int i = 0; i < 10; i++) if (i < 0 || i >= 10) { // Help! help! something's wrong! } in every loop just to be sure? What sort of peculiar behaviour are you trying to guard against? >> to use functions. You get all sorts of benefits from adding functions, >> not least of which is having more and simpler testable units.) > I appreciate your model. I disagree with it. You disagree with using functions here? Or you disagree with this code and you prefer to duplicate the number of passes as you do in your example? Maybe both? -- Ben. |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 20 08:55PM -0400 On 6/20/2018 8:11 PM, Ben Bacarisse wrote: >> I think the goal is to not get past the error condition without >> crashing, but rather to diagnose it when/if it occurs. > What error condition you are talking about? The error condition of pass not being 1..3. In the case it's some- thing else that's unexpected. The program should capture that infor- mation in some special way and report on it. >> will be 4 when it exits the loop (provided there is no error). > That was deliberate. In fact pass is not even in scope outside the > loop. In mine it is / was. > } > in every loop just to be sure? What sort of peculiar behaviour are you > trying to guard against? The concern is over cases where an "else condition" is not handled. In the case of the for loop above, it is set at 0, and it tests itself every iteration. It will leave the loop if it's >= 10. There's no reason to test in the middle, unless the code is acting differently on different values of i, and not just something trivial like "k=5*i;". > You disagree with using functions here? Or you disagree with this code > and you prefer to duplicate the number of passes as you do in your > example? Maybe both? I disagree with using functions in some cases of the switch-in-loop, because the documentation component is more important to me, as was indicated previously. There are other cases where the pre-for-loop prologue and post-for-loop epilogue code are needed as well, and these contain things updated by the local code. If I use a function to update those values I'm either passing a lot of parameters or parameter pointers, or I'm creating a custom context for the operation ... and both of those solutions are more clumsy to me than using a switch-in-loop. CAlive allows for special context objects, which are like an ad hoc struct created on-the-fly with only local scope, inherited by the functions that are called directly. The contexts allow local variable use within the context directly, and then the entire context is passed by a single pointer, slowing down remote access, but not significantly, as the compiler can sometimes resolve the direct address relative to the base/stack pointer. I think you are a brilliant developer, Ben, and very knowledgeable in C and general development. Please do not consider my replies to you as disparaging. I simply place higher value on other things, so my choices are sometimes (often??) different than yours. -- Rick C. Hodgin |
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Jun 20 10:26PM -0700 On 6/19/2018 10:15 PM, Christian Gollwitzer wrote: > Strangely, he has responded to Flibble's posts afterwards, so it seems > he hasn't fulfilled has promise to plonk everybody who posts WTF. > WTF WTF WTF! Holy smoke! ;) Oh, sorry for missing that. Gosh darn it! Fwiw, I can see him getting mad at some Q-Bert ciphers as well: https://youtu.be/0yrhee8W7II Just try to be very polite, and all should be well? |
David Brown <david.brown@hesbynett.no>: Jun 21 09:56AM +0200 On 20/06/18 23:05, Rick C. Hodgin wrote: >> Dawkins. If it so easy for RCH to lie about Professor Dawkins, >> well, that says much about RCH. > I have not lied. (Note - this is a discussion, not an attack. I'd be happy to see a considered response to the points I make. I would not be happy to be palmed off with a generic "you are inspired by the devil" or "I teach the truth" response. I would not be surprised by such a response, but it would make a mockery of your beliefs.) Lying implies that you have intentionally and knowingly distorted the facts. You know very little about the facts of biology and evolution - that much is apparent in your writings. You /think/ you do, but you get your information from sources that are either mistaken, or lying. You are a victim of confirmation bias on a massive scale. Having decided on a particular viewpoint, you read or watch information that matches that viewpoint and take it to be evidence in its favour - and you take everything else as evidence of a conspiracy theory against it. It is a vicious cycle, and hard to break - it is like an addiction. Does that mean you are lying? I don't think so - that would imply different motives. Is it "false witness vis-a-vis Professor Dawkins"? Yes, it most certainly is - you are attributing opinions, attitudes and doubts to him that are totally at odds to the man and what he writes and says. I have nothing against your believing that the universe and life was made "as is" some 6000 years ago by a creator god. That's fine - religious freedom is a human right. (Freedom from religion is also a human right - one that many in this newsgroup would like respected. But it's too late for that now in this thread!) What I disagree with is distorting science and evidence. If you want to say that biology and evolution show the greatness of your god, with amazing forward planning skills, then that's fine. If you want to say that they show the work of the devil, that's okay too. If you want to say they exist so that we need to find god on faith, not evidence, then that is another consistent viewpoint. What is /not/ okay is to say the scientific evidence is against evolution, or that there is scientific evidence for a biblical viewpoint. That is not okay, because it is simply /wrong/ - it would be like saying gravity sometimes makes bricks fall upwards. It is okay to say we don't know everything about evolution, or how it works, and it is certainly okay to say that we have only some ideas of how everything started. But that does not mean you can fill in the gaps with ideas from thin air and say the evidence supports them! And what is even less okay is to make claims about what scientists in general, and particular scientists, say or believe. The overwhelming majority of scientists consider the theory of evolution to be of a similar level to the theory of gravity, and the overwhelming majority of them do not believe there is any /evidence/ for "intelligent design" any more than there is evidence for "intelligent falling". That /includes/ those who believe in a creator god. Be very clear here - Dawkins does not believe in intelligent design (by divine beings) or any sort of god. He does not believe there is any evidence for such things. Of course he believes that there could be alien design in the process (after all, the earth is unlikely to be the sole example of intelligence in the universe), but that we have found no evidence to suggest it. 
Distorting his words and taking things out of context does not change what he believes and says. > but includes discussion on how intelligent design might have come > about: > https://www.youtube.com/watch?v=BoncJBrrdQ8 Ben Stein is a lawyer and comedian, and is as qualified to talk about biological science as you and I are to talk about law and comedy. And he is a man with a very specific agenda of trying to force his point about intelligent design - he, and the film, are massively biased. I haven't seen the film myself (I have far better things to do with my time), but here are a couple of newspaper review quotations: "a conspiracy-theory rant masquerading as investigative inquiry" "an unprincipled propaganda piece that insults believers and nonbelievers alike." "Full of patronizing, poorly structured arguments, Expelled is a cynical political stunt in the guise of a documentary." > In The Blind Watchmaker, he says that what we see has the appearance > of intelligent design, but holds to the idea that it is of random > Darwinian-like processes. Yes, exactly. Re-read what you wrote. Evolution is a natural process with random variations and feedback, resulting in things that are so complex they /appear/ to have been designed. But they were /not/ designed - they merely /appear/ to be. It is an emergent system. Have you ever seen one of these "sand picture" frames, where you have a mixture of different types of sand, coloured water, and some air bubbles? You turn it over and it slowly produces a picture that looks like a lovely landscape. That is another fine example of an emergent system. The pictures produced /look/ like they were designed, or drawn by people, or inspired by real landscape, but they are the result of random processes and feedback. > intelligent design, stating that life appears to be created by > intelligent design, but what it does incorporate from Dawkins' is > that it could not have come from God. If you read what he said - what you quoted - you can see that he /has/ dismissed intelligent design. He says biology gives the /appearance/ of design - i.e., it was /not/ designed, it just looks that way. > He states that aliens could've > seeded the planet, and that it's an intriguing possibility to him. That is an idea with no evidence either for or against - but it is certainly possible that evidence could be found. One day I hope mankind will discover life on other planets (or moons), and it will be fascinating to see how it compares to our own biology. > of God. He doesn't want to be held accountable to anyone. But, he > is now getting quite old (77 years) and his time is almost up. He > will soon learn the truth about the nature of our existence. His primary driving force is to find out the truth, and spread that information. You might disagree with the answers he gets, but don't misunderstand his motivations. Like most people, he believes people should be free to /choose/ their religious beliefs - but not be forced into them, or to force other people. And he believes that since religion and science work in completely different ways to attempt to answer very different kinds of question, they should not be mixed up. > clearly each originated on their own, as per the intelligent design > model where each one was crafted for itself, and they did not have a > common ancestor): I haven't seen the video, and won't be watching it - I know you like youtube as a source of information, but I prefer text. 
> Venter appears to go along with it as he does not refute it, but just > lets Dawkins speak. This "uncut" version published by the "Richard > Dawkins Foundation for Reason & Science" YouTube channel: To my knowledge, Craig Venter is a geneticist and scientist. Of course he goes along with Darwinian evolution. The "bush of life" idea is not an alternative to Darwinian evolution, it is simply part of the more accurate and refined models that we have learned over time. In Darwin's work and the early days of DNA science, the "tree of life" was thought to be exactly that - a tree, with neat branches. Over time, we have collected more evidence, done more experiments, formed new theories, and we see the details are a lot more complex. Some genetic material passes across species, not just parent to child (bacteria are experts at this, but it happens across diverse types of species). Species intermix and interbreed. It is more like a "web of life" - or at least, the "tree" gets a bit fuzzy when you look closely and has paths crossing between branches. What you say about the "bush of life" sounds like examples of parallel evolution, which can be fascinating. But /none/ of this is evidence against evolution - all of it is evidence /for/ evolution. It is also evidence that we don't know all about biology or life on earth as yet, but that is hardly news to scientists. > control mode, he was not. Venter said the tree of life is a holdover > from an early viewpoint that isn't really holding up to modern > scientific scrutiny. Think of it like Newton's theory of gravity replacing Galileo's theories, or Einstein's theory of relativity replacing Newton's theories. It is a refinement of the theories, with better models. It does not mean the "tree of life" is wrong, merely that it is an approximate model and we now have better ones. And Venter's theories are about evolution - they are not /against/ it. You seem to imagine Dawkins and Venter are on opposite sides of some battle - they are colleagues, working in different niches of the same field, and with a little scientific rivalry going on. I am sure it was fun for one of them to embarrass the other with newer information. Like Dawkins, Venter is an atheist and has no belief in any god or "intelligent design". > origin of life. I've wondered why that is because it seems to have > been triggered from this panel interview, the subsequent damage > control video, and thereafter. He is currently working on synthetic genomics - tailoring bacteria to specific tasks. His interest appears to be more in genetics and lifeforms /now/ rather than in evolution as such. (And biogenesis - the origin of life - is a different field.) > DNA through twelve separate machines. To make a single change to > that DNA would result in having to make a change that aligns with all > twelve machines simultaneously, and that's just not possible. It is not at all clear if you are making a quotation here, and if so I can't see the source or where the quotation ends. Some bits of DNA are used in several different ways. Most is not. There may be 12 different mechanisms that use DNA - that does not mean that any particular piece of DNA is used in 12 ways. We know that some parts of DNA are particularly critical to life, and even small changes render the organism non-viable. We also know there are parts where there can be many changes without having serious effects.
And we know there are parts that could have certain changes but the risk is high of causing failure - perhaps other changes are needed at the same time. All in all, it is a game of chance - and big changes to important parts of the DNA happen only rarely. But they do happen. > In addition, we now know the cause of many genetic diseases and some > of our abnormalities. Single bit errors in our entire strand of DNA > can result if fatal or severely debilitating diseases. Yes. As I said, some bits are critical. Other bits are not. Differences in hair colour involve a lot more than one bit change - but are hardly debilitating. > bodies, and that's only what they can rigorously prove. The > remaining 20% is believed to also be used, but they cannot prove it > yet. Lots of DNA is left over from the past. Some of it is still used, especially in embryo development (that's why humans go through stages of having gills and a tail). Some of it is not, but can be re-activated artificially (such as triggering old DNA in a chicken embryo so that it grows a dinosaur mouth and teeth). In the early days, it was thought that the only important part of DNA was the bits that coded for proteins. The rest, around 80% in humans (the value differs greatly for different species), was unhelpfully called "junk". The preferred term now is "non-coding DNA". And at least some of it has other known uses, such as controlling the activation of different coding parts. I would expect that most of our DNA is useful at some point or in some circumstances, while a small proportion is leftovers from the past. This is normal in biology. Small leftovers, mistakes, or inefficiencies hang around because there is not enough evolutionary pressure to make a change in the general population. Big wastage (as it would be if 80% of the DNA was actually junk) get tidied up eventually, by evolution. This is just normal science - it progresses as we learn more. There is no crisis going on here, no revolution. > how you get from A to B to C in the few million years that are > stated, there is not one viable model. They have not found any > transition fossils. Transition fossils have been found for a great many cases. You do your case no help by making up easily exposed nonsense. <https://en.wikipedia.org/wiki/List_of_transitional_fossils> The nature of evolution and development of species is to have long periods of relative stability, punctuated by short periods of speedy change due to environmental disruption. Transitional species are mostly short-lived in geological and evolutionary terms - they leave behind fewer fossils than long-lived species. The fact that transitional fossils are found, but only rarely, is evidence for Darwinian evolution. > Thy have found incredible explosions of life > which seem to have just appeared out of nothing. And they do find > evidence of Noah's Flood, There is /no/ evidence of a flood anything remotely resembling the description in the Bible - and /massive/ evidence that it could not possibly have happened. There is plenty of evidence for big floods, some of which could have been very dramatic and re-arranged cultures and social balances in their areas - certainly enough to inspire legends like the flood story in the Epic of Gilgamesh, and later copies in the Torah. > and the creation of different kinds of > animals in the laboratory today. Those things can be proven with > scientific methods which lend credence to the Biblical account. We can mess around with genetics, cross-breeding, etc. 
Humans have done so for 15,000 years at least, though we are far more efficient now. That gives absolutely no credence to the bible. > fact, the belief in evolution is even more of a matter of faith than > creation is because we see point after point after point which backs > up the Biblical account of creation. Evidence for evolution is so overwhelming and persuasive that there is no doubt about it. It is not a belief, any more than gravity is a belief. > Even the root Chinese alphabet contains aspects of the Biblical > account: > https://answersingenesis.org/genesis/chinese-characters-and-genesis/ Surely you are joking? I had a look at that page - it is absolutely ludicrous, even by the standards for that website (which are /not/ high). > The symbols used for some of the words are comprised of symbols > which talk about Noah and his family on the ark, for example, and It says that one possible etymological derivation of the Chinese symbol for "boat" is "vessel eight people". And you take that as evidence for Noah and his ark? Really? |
David Brown <david.brown@hesbynett.no>: Jun 21 10:21AM +0200 On 20/06/18 23:23, Rick C. Hodgin wrote: > It is a test in this case. If pass were ever 4 (which, as Ben pointed > out, it could (should) be changed to default:), it would set the value > to -1 and loop, causing it to then exit. You are right, I missed that bit. To be honest, it seems a really strange way to write the code - not much better than your own structuring. I just can't see any point in allowing pass to be an invalid value, and therefore no point in checking for it. Perhaps this would help: for (int _pass = 1; _pass <= 3; _pass++) { const int pass = _pass; switch (pass) { case 1: printf("specific code for pass %d\n", pass); break; case 2: printf("specific code for pass %d\n", pass); break; case 3: printf("specific code for pass %d\n", pass); break; } printf("common code for pass %d\n", pass); } Then the compiler will not let you change "pass" without complaint, and you are not going to change "_pass" accidentally. (If you want a feature for CAlive that I would like, make it illegal to change the "_pass" variable inside the for loop here. In other words, variables declared in a for clause should be considered "const" within the scope of the loop.) |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 21 10:43AM +0200 On 21.06.2018 09:56, David Brown wrote: > them do not believe there is any /evidence/ for "intelligent design" any > more than there is evidence for "intelligent falling". That /includes/ > those who believe in a creator god. Well, at the risk of maybe giving the Christian readers the wrong impression, there is extremely convincing evidence of intelligent design at a very high level, namely of the cosmos. The probability of the various forces of nature being of the exact relative strengths to support star creation (in particular the fusion process), is 1 over a number with over a hundred zeroes. Just extremely unlikely. Ditto for the energy density of vacuum. It's off by a similar super-extreme factor, some hundred+ zeroes, compared to what theory says it should be. Some Nobel prize winning physicists favor the Anthropic Principle as explanation, the explains-nothing notion that if the strengths of the forces were any different, or if the local energy density of vacuum was much higher than it actually is, we just wouldn't be here to ponder the question. At least one Nobel prize winner instead favors the US Christian's silly intelligent design of humans and Earth idea, proving that also Nobel prize winners can be utter idiots. But as I see it the evidence really points to intelligent design of the cosmos, its physics. So whose design is it then? I believe it's ours. It's that simple: when theory is at odds with reality, where the factors involved have hundreds of digits, then one can conclude with /certainty/ that theory is wrong. So, given that cosmology and/or physics has been proved wrong, and that's what the evidence says, complete 100% certain proof, where should one look for the incorrect assumption or assumptions? My very personal speculation is that creation never ended. That we live in a universe with creation still going on, just about everywhere, opposing destruction, that also is going on, just about everywhere. That may sound far fetched, e.g. we don't see much possible creation except in galaxy centers and quasars and such, not anything locally!, but at least it removes the self-contradiction of the BB hypothesis. Cheers!, - Alf (very off-topic mode) |
Ben Bacarisse <ben.usenet@bsb.me.uk>: Jun 21 11:17AM +0100 > strange way to write the code - not much better than your own > structuring. I just can't see any point in allowing pass to be an > invalid value, and therefore no point in checking for it. My intention was simply to avoid the duplication in the number of passes so that what I thought was the original issue -- that the loop bound and the number of passes could get out of sync during development -- could not arise. A C for loop almost always finishes when the control variable becomes invalid. Zero is no more invalid than 4 is in the original. Yes, the loop is terminated by an explicit assignment, and I agree that that is not very pretty. And I agree that it's not much better. I was trying to remove only one wrinkle. Hence the prefix "if you must do this"! > Perhaps this would help: > for (int _pass = 1; _pass <= 3; _pass++) { > const int pass = _pass; If assignments to pass are the real problem, only using functions can really help. You get a very short loop, maybe only for (int pass = 1; pass <= 3; pass++) { code_for_pass[pass](); common_code(); } so it's easy to check that pass is not assigned. <snip> -- Ben. |
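For completeness, a compilable sketch of the table-driven loop Ben outlines (the pass functions and their bodies are placeholders, not anyone's real code). Deriving the loop bound from the table also removes the duplicated pass count that started this sub-thread:

    #include <cstdio>
    #include <iterator>   // std::size (C++17)

    static void pass1() { std::puts("specific code for pass 1"); }
    static void pass2() { std::puts("specific code for pass 2"); }
    static void pass3() { std::puts("specific code for pass 3"); }
    static void common_code(int pass) { std::printf("common code for pass %d\n", pass); }

    // Element 0 is unused so that code_for_pass[pass] lines up with pass numbers 1..3.
    static void (*const code_for_pass[])() = { nullptr, pass1, pass2, pass3 };

    int main()
    {
        // The bound comes from the table, so loop and pass count cannot drift apart.
        for (std::size_t pass = 1; pass < std::size(code_for_pass); pass++) {
            code_for_pass[pass]();
            common_code(static_cast<int>(pass));
        }
    }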
David Brown <david.brown@hesbynett.no>: Jun 21 12:39PM +0200 On 21/06/18 10:43, Alf P. Steinbach wrote: >> various forces of nature being of the exact relative strengths to >> support star creation (in particular the fusion process), is 1 over a >> number with over a hundred zeroes. Just extremely unlikely. No, that is not evidence for intelligent design - not remotely so. It is evidence that our theories about how the cosmos is put together are incomplete. Perhaps there are factors which force particular matches of the various cosmological constants involved - that they are not independent. This is a reasonable supposition, and would follow the patterns we have already seen in modern (say, the last 100 years) science where connections between different constants have been found. >> Ditto for >> the energy density of vacuum. It's off by a similar super-extreme >> factor, some hundred+ zeroes, compared to what theory says it should be. No. There are two different theories as to what the vacuum energy should be, looking at it from different viewpoints. The values predicted by these two theories differ by over 100 orders of magnitude. Either one of these theories is correct (in the sense of being a good approximation to reality), or neither of them is - they can't both be. Again, this is evidence that science does not yet have all the answers and a full picture (no news there) - it is not evidence for any kind of designer or intelligence being involved. >> forces were any different, or if the local energy density of vacuum was >> much higher than it actually is, we just wouldn't be here to ponder the >> question. Yes, that's the Anthropic Principle - the universe is "human friendly" because if it were not, we would not be here to observe it. A variation is to suggest that there are vast numbers of alternate universes with all sorts of different variations, and we are in one of them. As you say, it is an "explains nothing" notion. Perhaps we will never find any better explanation. But that does not mean we should stop looking, and it certainly does not mean we should make one up. >> Christian's silly intelligent design of humans and Earth idea, proving >> that also Nobel prize winners can be utter idiots. But as I see it the >> evidence really points to intelligent design of the cosmos, its physics. Certainly there is (currently) no evidence to point /against/ an intelligent designer who concocted the big bang as a high-school science project. But there is no evidence for it, either. There is massive evidence against intelligent design for humans and life on earth in general - both in terms of direct evidence of natural development of life, and in terms of the mistakes in our "design". What sort of so-called "intelligent" designer could come up with something as impressive as the brain, yet get the eyeballs backwards? Or design male humans with a brain capable of rational thinking and a fine set of reproductive organs - but not enough blood to use them both at the same time? >> So whose design is it then? I believe it's ours. It's that simple: when >> theory is at odds with reality, where the factors involved have hundreds >> of digits, then one can conclude with /certainty/ that theory is wrong. Sorry, you have confused me a little. I agree that some parts of our theories of the cosmos are badly wrong (we know this already), but the universe is not /our/ design. Our model of the universe, or our theories of it, are our design, if that's what you meant.
> So, given that cosmology and/or physics has been proved wrong, and > that's what the evidence says, complete 100% certain proof, where should > one look for the incorrect assumption or assumptions? More research, experiments, thoughts and theories :-) > may sound far fetched, e.g. we don't see much possible creation except > in galaxy centers and quasars and such, not anything locally!, but at > least it removes the self-contradiction of the BB hypothesis. That is pretty much as good as any other hypothesis at the moment. There is a good deal about cosmology that is still very speculative. |
David Brown <david.brown@hesbynett.no>: Jun 21 12:42PM +0200 On 21/06/18 12:17, Ben Bacarisse wrote: > common_code(); > } > so it's easy to check that pass is not assigned. Agreed - separate functions would probably be a nicer way to handle this. Even copy-and-paste might be better, depending on the size of the "code for pass" and "common code" sections: // code for pass 1 // common code // code for pass 2 // common code // code for pass 3 // common code No switches, loops or function calls needed. You wouldn't want to do this for common code of more than a few lines, of course. |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 21 08:03AM -0400 On 6/21/2018 4:21 AM, David Brown wrote: > } > Then the compiler will not let you change "pass" without complaint, and > you are not going to change "_pass" accidentally. I think the issue isn't our code affecting pass, as it will not. But it would be an overwrite of data on the stack or something else going haywire due to unexpected things. As you know perfectly well and could teach a class on, a const value can still be updated externally by something that doesn't honor its const state (provided it's in read/write memory). > change the "_pass" variable inside the for loop here. In other words, > variables declared in a for clause should be considered "const" within > the scope of the loop.) That's an excellent idea. I will implement it. This is the second idea you've suggested that I've implemented. The first was namespaces. -- Rick C. Hodgin |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 21 08:07AM -0400 On 6/21/2018 6:17 AM, Ben Bacarisse wrote: > common_conde(); > } > so it's easy to check the pass is not assigned. CAlive has several features which allow something like this. One is the flow {..} blocks, which let you organize code outside of a top- down orientation. You can move code into a block and call it by name, allowing for the type of code that people initially suggested. The other is an ad hoc, allowing for essentially a local function to be created which can be called which also assumes the full local state of the environment visible to that which called it. In that way, the code can be organized and documented as being related to the process directly, and be called as the original responders indicated. And there are a few other solutions. I actually spent time last evening considering a way to do what you suggest here, to create code accessibly by array and run it. I'm going to continue to give that line of thinking some attention and see what I can come up with. -- Rick C. Hodgin |
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 21 08:15AM -0400 On 6/21/2018 8:07 AM, Rick C. Hodgin wrote: > evening considering a way to do what you suggest here, to create > code accessibly by array and run it. I'm going to continue to give > that line of thinking some attention and see what I can come up with. I think a code array would work: array name | // common parameters here | // common return parameters here { // Code for element 0 here } next { // Code for element 1 here } next { // Code for element 2 here } next { // Code for element 3 here } next { // Code for element 4 here } // At this point countof(name) would indicate the max indices, // and sizeof(name) would allow the code as compiled to be known, // and even passed around as an array*. The only issue is you won't know the number, which you may not need to know. When you need to know the number, maybe: array name | // common parameters here | // common return parameters here code 0 { // Code for element 0 here } code 1 { // Code for element 1 here } code 2 { // Code for element 2 here } code 3 { // Code for element 3 here } code 5 { // Code for element 5 here } // In this one we skip 4, which essentially creates an empty // function. And each block is explicitly identified. In // addition to using numbers, a named token could be given, // allowing them to be called things like setup, pass1, pass2, // pass3, finalize, etc. I like it. -- Rick C. Hodgin |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 21 02:30PM +0200 On 21.06.2018 12:39, David Brown wrote: >> support star creation (in particular the fusion process), is 1 over a >> number with over a hundred zeroes. Just extremely unlikely. > No, that is not evidence for intelligent design - not remotely so. A factor with over a hundred digits for improbability of coincidence, or mismatch of theory with reality, as as strong evidence as you get. You can't get it stronger. >> factor, some hundred+ zeroes, compared to what theory says it should be. > No. There are two different theories as to what the vacuum energy > should be, looking at it from different viewpoints. Not as far as I know. There is reality, estimated from upper limit of the cosmological constant. Not that I know the details of that estimate, but according to Wikipedia's article it puts the energy at 10^-9 joules per m^3. Theory, the standard model of physics (QED) plus the Planck constant, according to the same article, predicts 10^113 joules per m^3. That's 122 orders of magnitude difference between reality and theory. > magnitude. Either one of these theories is correct (in the sense of > being a good approximation to reality), or neither of them is - they > can't both be. You can be certain that reality is correct, that our estimate of it is nearly correct (we have measures of various effects to compare it to), and that it's the theory that's wrong by some 120+ order of magnitude. > Again, this is evidence that science does not yet have > all the answers and a full picture (no news there) - it is not evidence > for any kind of designer or intelligence being involved. You need intelligence to make that large an error. > Perhaps we will never find any better explanation. But that does not > mean we should stop looking, and it certainly does not mean we should > make one up. There I completely agree. > Certainly there is (currently) no evidence to point /against/ an > intelligent designer who concocted the big bang as a high-school science > project. But there is no evidence for it, either. I get tired of the arguments of "no evidence against" the FSM, say. For any given such hypothesis one can construct an opposite hypothesis, with the same zero weight of evidence. They cancel and there's nothing. For example, under the hypothesis that if I don't go to Oslo tomorrow I will die from an old badly maintained F-16 fighter plane crashing in my head, I /could/ do a cost-benefit analysis. Case 1, hypothesis is true. Cost of going to Oslo on short notice, 4-5000 NKr. Cost of dying, infinite. Case 2, hypothesis is false. No cost by staying home, same cost of going to Oslo. Since there's no evidence against the hypothesis it has some small but definite probability. Multiply the costs in case 1 by that probability and there's still infinite cost by staying home. Clearly I should go to Oslo, I should order those tickets Right Now™! Except that the opposite hypothesis is just as likely. The cost of staying home could by infinite under that opposite hypothesis. This is not completely irrelevant to religion. The F-16 scenario corresponds to Pascal's wager. Pascal, after whom the Pascal language is named, as well as Pascal's triangle, a math prodigy, was so blinded by religious ideas that he failed to consider the opposite hypothesis idea, that if a Christian god can be assumed, so can a god that's incompatible and offers the opposite costs and rewards for his chosen action. 
> humans with a brain capable of rational thinking and a fine set of > reproductive organs - but not enough blood to use them both at the same > time? Yes. The shark's nearly perfect eye design (blood vessels on back of retina) versus the human botched eye design (vessels on front of retina) is a case in point. What retarded designer did the human eyes? > theories of the cosmos are badly wrong (we know this already), but the > universe is not /our/ design. Our model of the universe, or our > theories of it, are our design, if that's what you meant. Yes. Our models are what we believe the universe to be. And that construct is clearly artificial. If we had done it right it would have been natural. Some surprises here and there but no 10^122. >> least it removes the self-contradiction of the BB hypothesis. > That is pretty much as good as any other hypothesis at the moment. > There is a good deal about cosmology that is still very speculative. :-) Cheers!, - Alf (super-off-topic mode) |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 21 02:36AM +0200 On 21.06.2018 00:50, jacobnavia wrote: > Dear Sir: Bjarne participated in this group some ten to fifteen years ago. He continued to participate in clc++m a while longer. But today he just isn't here, as far as I know, and clc++m is dead. > ship that sank after a few minutes sailing. > True, but the C++ ship sailed a long time ago. It is this endless > complexifying of simple things that makes it so indigest for the mind. The range-based `for` loop is not added complexity. On the contrary, it's a loop with fewer degrees of freedom and simpler syntax. Unfortunately it's not supported by ranges, like in Python `for range(...):`, and while the ranges proposal has a range view that can be used to loop over integers, that has a technical, verbose, non-intuitive name -- that to boot has changed during the library's evolution. So to gain the FULL benefits one has to rely on DIY support or third party libraries. Still, the range based `for` loop is without a doubt the loop that a beginner should start with. It's simplest in both notation and semantics. As long as you don't delve into the behind-the-scene things. > that in C++... > The only reason you put that MAX symbol there, is to make it ugly. Or > you aren't aware of that function? To me it's obvious that Bjarne intentionally wrote valid C code, so as to avoid having to present different examples for C and C++. For valid C code having the array bound as a constant or variable, such as the `MAX` here, is the common situation. Arguing by dissecting imagined nefarious motives is fallacious. I imagine, if this group were moderated, then one rejection reason could very well be "goes to imagined motive". It's in my experience an indication of trolling. > Of course you are but you need justifying "YAST": Yet Another Syntax > Trash that you have to remember. Arguing by calling names is also fallacious. I think it's valid sometimes to call a person, say, "you scumbag", or more common in Norway, "you bag of shit", as the then Norwegian minister of the environment Thorbjørn Berntsen called his British counterpart, to his face. That's not an argument. It's an expression of lack of regard and lack of respect, stating openly and clearly, helpfully, the conditions of the social room that the other must now maneuver in. But using such terms about a technical thing, much less meaningful. > Avoiding top heavy languages means cutting unnecessary, redundant > features such as this one. You will have a hard time cutting features from any language. Python did it, in the transition from 2.7 to 3.x, but it was a drawn out and painful process. Anyway, the range based `for` loop is not unnecessary and redundant; it's my impression that it's the main workhorse of loops nowadays. Cheers & hth., - Alf |
"Öö Tiib" <ootiib@hot.ee>: Jun 20 11:38PM -0700 On Thursday, 21 June 2018 01:50:38 UTC+3, jacobnavia wrote: > All standard containers provide a size() method that returns the number > of elements they actually contain. > <end quote> But not all provide operator[]. > for (int i=0; i<v.size(); i++) ++v[i]; > // increment each element of the array v > What is wrong with this way of writing software? It does not work with raw array. That works (since C++17) ...: for (size_t i=0; i<std::size(v); i++) ++v[i]; ... but neither works with std::set because there are no operator[]. In general I agree with your concern of complicating language but range-based for is really generic way of linear iteration over everything iteratable. |
Bo Persson <bop@gmb.dk>: Jun 21 11:19AM +0200 On 2018-06-21 00:50, jacobnavia wrote: > // increment each element of the array v > What is wrong with this way of writing software? > 1) It is very clear It is definitely not very clear. You have to mention i four times, each time inviting you to write j by mistake. Same for v, used twice. And, of course, the typical newbie error is to write <= instead of < in the comparison. In that view for (int& element : container) ++element; looks a lot clearer to me. And also works for things other than arrays. Having been around for 7 years it is hardly "a new construct". It is actually two language revisions old. Bo Persson |
Manfred <noname@invalid.add>: Jun 21 12:25PM +0200 On 6/21/2018 8:38 AM, Öö Tiib wrote: > It does not work with a raw array. That works (since C++17) ...: > for (size_t i=0; i<std::size(v); i++) ++v[i]; > ... but neither works with std::set, because there is no operator[]. In fact it seems to me that the example from the article is not appropriate. The comparison should not be between range for and a C (or C-like) loop; instead, it should be with respect to a C++98 loop: for (V::iterator it = v.begin(); v.end() != it; ++it) ++(*it); (V being the type of the container, and avoiding 'auto' just to stick with C++98) > In general I agree with your concern about complicating the language, > but range-based for is a really generic way of linear iteration over > everything iterable. The above achieves all of that with no new construct. The only difference is that range for is more compact. The question is if this compactness is worth the cost of having to learn a new construct. Probably it is so, but still I have one objection: The range for syntax hides /how/ the sequence is traversed - while the traditional for loop makes it explicit that it is traversed from begin() to end() - which is the kind of detail that may be wanted to keep evident. [BTW this is why incidentally it allows with no overhead to specify a reversed traversal: for (V::reverse_iterator it = v.rbegin(); v.rend() != it; ++it) ++(*it); with containers that support reverse iteration] So, together with improved compactness, IMHO some piece of clarity is also lost. I'm not saying it is not worth it, just that it doesn't come for free. |
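Putting the last few posts together, a small sketch of the genericity argument: the same range-based loop works unchanged for a std::vector, a raw array and a std::set, whereas the indexed form needs operator[] and the C++98 form needs the iterator boilerplate spelled out. The function name is arbitrary:

    #include <iostream>
    #include <set>
    #include <vector>

    template <typename Container>
    void increment_and_print(Container& c)
    {
        for (auto& element : c)          // works for anything with begin()/end()
            std::cout << ++element << ' ';
        std::cout << '\n';
    }

    int main()
    {
        std::vector<int> v{1, 2, 3};
        int raw[] = {4, 5, 6};
        increment_and_print(v);
        increment_and_print(raw);

        std::set<int> s{7, 8, 9};        // no operator[], so an indexed loop cannot be used
        for (const auto& element : s)    // set elements are const; read-only traversal
            std::cout << element << ' ';
        std::cout << '\n';
    }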
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jun 21 10:59AM On Wed, 2018-06-20, jacobnavia wrote: ... > In your article you complain that C++ should remember a top heavy > danish ship that sank after a few minutes sailing. The ship was Swedish, actually. It was going to be used /against/ the Danes (this was before they started nice programming languages). /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
Juha Nieminen <nospam@thanks.invalid>: Jun 21 07:35AM > i think could be ok binary tree, i wrote something for that > and seems ok; but binary tree admit the insert of duplicate elemnts? > it seems yes but possible here i think wrong... std::set or std::map for unique elements, std::multiset or std::multimap for repeated elements. |
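To make the distinction concrete, a short illustration (the variable names are arbitrary):

    #include <iostream>
    #include <set>

    int main()
    {
        std::set<int> unique_elems;        // rejects duplicates
        std::multiset<int> all_elems;      // keeps duplicates

        for (int x : {3, 1, 3, 2, 3}) {
            unique_elems.insert(x);
            all_elems.insert(x);
        }

        std::cout << unique_elems.size() << '\n';   // 3 (one copy of each value)
        std::cout << all_elems.size() << '\n';      // 5 (every insertion kept)
    }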
Juha Nieminen <nospam@thanks.invalid>: Jun 21 07:32AM > plans for his C++" > https://www.theregister.co.uk/2018/06/18/bjarne_stroustrup_c_plus_plus/ > "Language creator calls proposals 'insanity'" Well, he's completely free to create his own "better C++"... to be piled among the three dozen or so other "better C++'s" out there already. Name it Z++ or something. Have it fade to obscurity before it's even completed. |
"Öö Tiib" <ootiib@hot.ee>: Jun 20 11:10PM -0700 On Wednesday, 20 June 2018 21:31:33 UTC+3, bitrex wrote: > appropriate to store a reference to the template type in the wrapper > class vs pointer to express that its lifetime is dependent on the > lifetime of the local object it wraps? Can you post some code that "works with the underlying Foo-type transparently"? For me virtual functions (in EventDataBase) that you have to work with in that "non-template function" have to make that underlying Foo rather opaque since EventDataBase has to work well when there are no Foo (like it is EventData<Moo>). |