Monday, January 30, 2023

Digest for comp.lang.c++@googlegroups.com - 17 updates in 6 topics

Kunal Goswami <20cs06014@iitbbs.ac.in>: Jan 30 01:04PM -0800

Hi,
How can we ensure that only one object is created for a class, and that a second one is never created?
 
Note: don't use a counter.
 
Someone asked me this in an interview.
 
scott@slp53.sl.home (Scott Lurndal): Jan 30 09:28PM

>Hii ,
> How can we assure that only one object is created for a class. And when second do not happen.
 
>Note - don't use counter .
 
A static class member seems like a good starting point.
 
Is it required to be thread-safe and/or re-entrant?
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Jan 30 01:39PM -0800

On Monday, January 30, 2023 at 9:04:19 PM UTC, 20cs wrote:
> How can we assure that only one object is created for a class. And when second do not happen.
 
> Note - don't use counter .
 
> Someone asked me this in an interview.
 
I'd probably have a global object "std::optional<SomeClass> g_object", and I'd protect it with an "std::recursive_mutex".
 
Any thread can 'emplace' or 'reset' it as they like so long as they lock the mutex first.
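A rough sketch of the approach described above, assuming illustrative names (SomeClass, create_object, destroy_object are not from the post):

```cpp
// Sketch of the std::optional + std::recursive_mutex approach.
// SomeClass and the helper function names are illustrative assumptions.
#include <mutex>
#include <optional>
#include <string>

struct SomeClass {
    std::string name;
};

std::optional<SomeClass> g_object;
std::recursive_mutex g_mutex;

// Creates the single object; a second call while one exists is a no-op.
void create_object(const std::string& name) {
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    if (!g_object)
        g_object.emplace(SomeClass{name});
}

// Destroys the object so it can later be re-created.
void destroy_object() {
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    g_object.reset();
}
```

Unlike a classic singleton, this lets the object be destroyed and re-created, at the cost of a global lock around every access.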
red floyd <no.spam.here@its.invalid>: Jan 30 01:51PM -0800

On 1/30/2023 1:28 PM, Scott Lurndal wrote:
 
>> Note - don't use counter .
 
> A static class member seems like a good starting point.
 
> Is it required to be thread-safe and/or re-entrant?
 
What Scott said. This is a well-known design pattern called "Singleton".
 
class Singleton {
    static Singleton the_singleton;
public:
    static Singleton& get_object() { return the_singleton; }
};
 
Singleton Singleton::the_singleton{};
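Regarding the thread-safety question Scott raised: a common variant is the function-local static ("Meyers singleton"), whose initialization C++11 guarantees to happen exactly once even under concurrency. This is a sketch of that variant, not a drop-in for the code above:

```cpp
// Meyers singleton: the local static is initialized exactly once, and
// C++11 guarantees that initialization is thread-safe.
class Singleton {
public:
    static Singleton& get_object() {
        static Singleton the_singleton;  // constructed on first call
        return the_singleton;
    }
    Singleton(const Singleton&) = delete;             // no second object via copy
    Singleton& operator=(const Singleton&) = delete;
private:
    Singleton() = default;               // only get_object() can construct
};
```

Every call returns a reference to the same object, and deleting the copy operations closes off the remaining way to make a second one.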
legalize+jeeves@mail.xmission.com (Richard): Jan 30 04:24PM

[Please do not mail me a copy of your followup]
 
Lynn McGuire <lynnmcguire5@gmail.com> spake the secret code
 
>https://www.zdnet.com/article/c-programming-language-and-safety-heres-where-it-goes-next/
 
>"There's been a shift towards 'memory safe' languages. So, can updates
>to C++ help it catch up in the eyes of developers?"
 
That article doesn't mention cpp2 specifically, but the ISO C++ paper[*]
linked by the article does mention cpp2. I find cpp2 to be an
appealing approach that preserves existing code bases and maintains
100% compatibility with ISO C++.
 
Herb Sutter's video on cpp2:
<https://www.youtube.com/watch?v=ELeZAKCN4tY>
 
[*] <https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2759r0.pdf>
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
wij <wyniijj5@gmail.com>: Jan 30 10:08AM -0800

On Tuesday, January 24, 2023 at 10:21:39 AM UTC+8, Lynn McGuire wrote:
 
> "There's been a shift towards 'memory safe' languages. So, can updates
> to C++ help it catch up in the eyes of developers?"
 
> Lynn
 
It is all about correctness and efficiency. The development of C++ has been deviating
from the basics for many years, in search of (blind) expressiveness.
David Brown <david.brown@hesbynett.no>: Jan 30 08:18AM +0100

On 29/01/2023 16:57, Frederick Virchanza Gotham wrote:
> solution is an abomination. If you don't have any qualms
> about editing object files, then I've given a few reasons
> why my solution is superior to editing source files.
 
That is a completely meaningless thing to write. Yes, if I think that
editing compiled files is a terrible idea, then I will think your idea
of editing compiled files is terrible - and if I don't have anything
against editing compiled files, then I won't object to it.
 
I /do/ have something against it - and it is not indoctrination.
Frankly, I haven't heard anyone suggest it before now, much less argue
either for or against it.
 
The norm, however, is that programmers write or edit source code, and it
is compiled and then linked. If you do something that is wildly
breaking that norm, you are going to cause chaos to anyone maintaining
the code or working with it later - so it is not something to consider
without a /huge/ benefit. And you don't have a huge benefit - you've
got nothing more than the laziness of not wanting to edit a few files.
(I don't think combining these programs like this in the first place is
a great idea, but that's a different matter.)
 
> definition and then cleaning up the resultant compiler errors,
> but that's work that might introduce bugs. And you've to
> re-do it every time the library is upgraded.
 
You've got to re-do your Frankenstein program for every code change
anyway. This is one reason why it is a bad idea to mix the different
programs (especially when they are security-related programs). You are
far better off using the programs as separate programs, or using libraries.
 
Once you have done the renaming once, you have a patchset and a git
branch. For many small updates to the original programs, you merely
need to re-apply the patches.
 
> -- with the binary editors on one side, and the binary intacters
> on the other. Cultural clash. Within one lifetime it's unlikely
> either of us will defect.
 
It is not about reliability of tools - it's about traceability and
making a clear, maintainable build that takes source and results in a
binary.
 
"Binary editing" can have its place. I use it all the time in my own
work in embedded systems - it is standard practice that after my
programs are built, debugged and tested as ELF files, I use objcopy to
generate binary images to be flashed into an embedded system. It is
quite common that the binary file is augmented after the objcopy, with
checksums, programming information, and the like.
 
But that is done in a /controlled/ manner, an /expected/ manner - it is
a natural part of the build process. It is not some weird hack done in
code you don't really understand, in a way you don't fully comprehend,
as a lazy alternative to better methods.
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Jan 30 12:25AM -0800

On Monday, January 30, 2023 at 7:19:14 AM UTC, David Brown wrote:
 
> The norm, however, is that programmers write or edit source code, and it
> is compiled and then linked.
 
 
Generally speaking, when a person encounters something that they've never encountered before, they might react with suspicion, or they might react with intrigue. This is very much down to the person's frame of mind and inner stillness. In some communities it is pervasive to react with suspicion, and in some communities it is pervasive to react with intrigue.
 
There are words we use to describe things we haven't seen before; we call them 'new', or 'strange', or 'abnormal', or 'odd', or 'out of the ordinary', or 'out of the norm'.
 
This dichotomy between "what is old" and "what is new" is an important part of some people's decision-making processes. Some people, when they come up with an idea, ask themselves, "Is this what everyone else is doing?", and if the answer is no then they might discard their idea. Personally I look at my own ideas and try to judge them by their own merits, rather than considering how frequently other people are doing it.
 
The only argument you've been able to pose here is that what I'm doing is not frequently done. You literally have no other argument. And the reason why you have no other argument is that my solution is superior for multiple reasons. In fact, you are arguing that instead of adding three lines to a Makefile (in fact it's only two lines, because the 2nd can be piped into the 3rd), it's preferable to individually seek out every use of the symbol in all of the source and header files, and to alter them one by one. And then you would do this every time the library is upgraded. The only way that a sane and rational person could side with you here is if they were to agree that editing object files is taboo.
 
 
> breaking that norm, you are going to cause chaos to anyone maintaining
> the code or working with it later - so it is not something to consider
> without a /huge/ benefit.
 
 
Breaking the norm is fine when the consequences are either minimal or non-existent.
 
 
> got nothing more than the laziness of not wanting to edit a few files.
> (I don't think combining these programs like this in the first place is
> a great idea, but that's a different matter.)
 
 
Seriously. There is a possibility for introducing bugs any time you edit any part of a program. Editing source and header files by yourself (i.e. by a human) will always be more error-prone and susceptible to bug introduction than using an automated tool. The 'objcopy' program doesn't make typos, and it doesn't forget what it was supposed to be doing because the phone rang.
If the possibility of introducing new bugs can be eradicated (or at least greatly diminished) by using an automated tool, then a sane and rational person would use such a tool.
 
 
> You've got to re-do your Frankenstein program for every code change
> anyway.
 
 
I just copy the new source and header files in. If the library has a new source file that it didn't have before, I don't even have to change the Makefile, because it uses a wildcard: find -iname "*.c"
 
 
> This is one reason why it is a bad idea to mix the different
> programs (especially when they are security-related programs). You are
> far better off using the programs as separate programs, or using libraries.
 
 
Before I embarked on this endeavour to combine the programs, I considered what the complications might be. For example, one of the programs might make a process-wide change that could adversely affect the other programs (for example if they used 'setrlimit' with 'RLIMIT_NOFILE' to limit the number of open files), or if one of them overloaded 'operator new'. Of course the simplest problem would be a name collision at compile time, for example if they both had a function called "save_settings", but I've found an automated solution to that problem using 'objcopy'.
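The objcopy-based rename being discussed can be demonstrated on a toy object file; the file and symbol names here are illustrative, not taken from any of the actual programs:

```shell
# Toy demonstration of renaming a symbol in an object file with objcopy.
# File and symbol names are illustrative assumptions.
cat > settings.c <<'EOF'
int save_settings(void) { return 42; }
EOF
cc -c settings.c -o settings.o

# Rename the colliding symbol without touching the source.
objcopy --redefine-sym save_settings=busybox_save_settings settings.o

# The symbol table now shows the new name.
nm settings.o
```

For many symbols at once, objcopy also accepts --redefine-syms with a file of old=new pairs.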
 
 
> Once you have done the renaming once, you have a patchset and a git
> branch. For many small updates to the original programs, you merely
> need to re-apply the patches.
 
 
In my last job I wrote firmware for embedded Linux cameras. When building the firmware, we built a few dozen open-source libraries and we had a directory containing patch files for the libraries. The Makefile applied the patches before building the libraries. When we upgraded any library, we were almost never able to build it straight away; 9 times out of 10 there would be a patch failure. So I would have to get the old library, examine the effect of the patch on the old source file, and then try to create a new patch file compatible with the newer version of the file. All the while I might be introducing new bugs.
 
 
> It is not about reliability of tools - it's about traceability and
> making a clear, maintainable build that takes source and results in a
> binary.
 
 
Two lines in a Makefile.
 
 
> a natural part of the build process. It is not some weird hack done in
> code you don't really understand, in a way you don't fully comprehend,
> as a lazy alternative to better methods.
 
 
In the list of words I gave above to describe something not frequently seen, I forgot 'weird'.
 
My idea of editing object files isn't just a little bit superior -- it's vastly superior. Also, I account for 'weak symbols', which you really need to be careful with.
Paavo Helde <eesnimi@osa.pri.ee>: Jan 30 10:52AM +0200

30.01.2023 10:25 Frederick Virchanza Gotham kirjutas:
 
> In my last job I wrote firmware for embedded Linux cameras. When building the firmware, we built a few dozen opensource libraries and we had a directory containing patch files for the libraries. The Makefile applied the patches before building the libraries. When we upgraded any library, we almost never were able to build it straight away, 9 times out of 10 there would be a patch failure. So I would have to get the old library, examine the effect of the patch on the old source file, and then try to create a new patch file compatible with the newer version of the file. All the while I might be introducing new bugs.
 
Let me guess - you attempted to apply diff patches blindly, rather than
using the 3-way merges provided by a proper version control system.
 
BTW, a proper way to use an open-source program as a library is to split
it up into a library and a small executable part, and convince the
project managers to accept those changes in the original repo. No
merging problems anymore.
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Jan 30 12:56AM -0800

On Monday, January 30, 2023 at 8:52:46 AM UTC, Paavo Helde wrote:
 
> Let me guess - you attempted to apply diff patches in blind, not the
> 3-way merges provided by a proper version control system.
 
 
Sometimes the new version of the library had deleted a function whose body was patched, and replaced it with another function by another name.
 
 
> it up into a library and a small executable part, and convince the
> project managers to accept those changes in the original repo. No
> merging problems anymore.
 
 
Yeah I could have built busybox and badvpn both as dynamic shared libraries. That's still an option.
I want to be able to provide 'ssh' as a statically linked executable though, and also as a fat binary to run on nearly any system.
David Brown <david.brown@hesbynett.no>: Jan 30 10:39AM +0100

On 30/01/2023 09:25, Frederick Virchanza Gotham wrote:
 
> There are words we use to describe things we haven't seen before; we
> call them 'new', or 'strange', or 'abnormal', or 'odd', or 'out of
> the ordinary', or 'out of the norm'.
 
Of course people should be sceptical of something new and unusual in a
software build. I don't know how many C and C++ programmers there have
been for the last few decades, but it is a /lot/. When someone comes up
with a radical divergence from established practices, the /correct/
reaction is scepticism. That does not mean it should immediately be
dismissed as a bad idea - it means you should have good justifications
for going outside the norm.
 
You don't have good reasons, other than a quick-fix hack. Sometimes a
hack is all you need. Sometimes an unusual solution is useful in rare
and niche circumstances. But if you want your program to be taken
seriously and used by other people, you need to have something that
other people are comfortable with. A hack solution is not that. An
unmaintainable cyborg program is not that. Amateur messing around with
security-critical code is not that.
 
Your solution here might work as a quick hack. But you are missing the
big picture. If you have any ambition for the program to be used
outside of your own testing on your own computer, you have to play with
others - you have to have a solution that others are comfortable working
with and using.
 
You've been given a variety of good advice and suggestions for
alternative ways to handle the situation. You can take your pick, or
combine them, or find other ways. Or you can try to persuade the rest
of the world to change.
 
 
Just to be clear here, the idea of being able to conveniently set up a
proxy via an ssh link is quite nice - I can see it appealing to some
people. But no one will use it when it is made the way you are
suggesting. If you put together a bash or Python script that started
the relevant programs with the right parameters, it could easily be a
useful little tool that could make its way into common Linux
distributions. But building it with glued bits of different projects,
with post-compilation muddling of object files, it will /never/ be
acceptable to anyone who takes security, reliability or maintainability
seriously.
gazelle@shell.xmission.com (Kenny McCormack): Jan 30 10:58AM

In article <tr7r0v$36k7e$2@dont-email.me>,
David Brown <david.brown@hesbynett.no> wrote:
...
>I /do/ have something against it - and it is not indoctrination.
 
I'm sure the Jonestown folks would state in all sincerity that they were
not "indoctrinated". No one ever thinks it could happen to them.
 
--
The motto of the GOP "base": You can't *be* a billionaire, but at least you
can vote like one.
Daniel <danielaparker@gmail.com>: Jan 29 04:23PM -0800

On Sunday, January 29, 2023 at 7:01:17 AM UTC-5, Kenny McCormack wrote:
 
> that other thread - the one about modifying object modules using
> established working tools. You guys (both of you) need to look up what
> "IMHO" means - and start liberally using it in your posts.
 
I've always thought that acronym was superfluous; if not the author's,
whose opinion could it possibly be?
 
Daniel
Ben Bacarisse <ben.usenet@bsb.me.uk>: Jan 30 01:14AM

>> "IMHO" means - and start liberally using it in your posts.
 
> I've always thought that acronym was superfluous, if not the author's,
> whose other opinion could it possibly be?
 
It's not used to indicate /whose/ opinion it is; it's used to emphasise
that what is being presented is now firmly in the domain of opinion and
not fact.
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Jan 30 08:22AM +0100

On 29/01/2023 18:23, Adam H. Kerman wrote:
 
> Oh for god's sake. Surely you must have noticed by now that "Pluted Pup"
> is just a sockpuppet who has been trolling you. Would you PLEASE spit
> out the damn hook already and stop troll feeding?
 
I have no idea who is or is not a troll in news.admin.net-abuse.usenet.
His posts do not stand out as being any worse than most of those from
that group (yours included).
Andrey Tarasevich <andreytarasevich@hotmail.com>: Jan 29 06:33PM -0800

On 01/27/23 12:27 PM, Pawel Por wrote:
 
> Thank you for an answer. Please note that the MyList<T>::push_front(T &&v) argument is still passed by lvalue reference. Only inside the body of its function the move semantics is in use.
 
What are you talking about? Where did you get the idea that the "argument
is still passed by lvalue reference"? What behavior made you draw
that conclusion?
 
> ml.push_front
> T x = std::move( v ); (void) x;
> Dog::Dog(Dog&&)
 
Aaaaand...? What are we expected to see in this output?
 
> I think it is related to the concept of "universal references" coined by Scott Meyers
 
No. In MyList<T>, T is fixed by the class, so the T&& parameter of push_front is a plain rvalue reference, not a "universal" (forwarding) reference.
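A minimal sketch of the point in question, using a stand-in Dog type and counters (illustrative, not the thread's actual code): inside push_front(T&& v) the named parameter v is itself an lvalue expression, so the move construction happens only where std::move(v) is written.

```cpp
#include <iostream>
#include <utility>

struct Dog {
    static inline int copies = 0;
    static inline int moves = 0;
    Dog() = default;
    Dog(const Dog&) { ++copies; std::cout << "Dog::Dog(const Dog&)\n"; }
    Dog(Dog&&)      { ++moves;  std::cout << "Dog::Dog(Dog&&)\n"; }
};

template <typename T>
struct MyList {
    // T is fixed by the class, so T&& here is a plain rvalue reference,
    // not a forwarding reference: it binds only to rvalues of type T.
    void push_front(T&& v) {
        T x = std::move(v);  // v is a named (lvalue) expression; std::move
        (void)x;             // is what selects the move constructor
    }
};
```

Calling ml.push_front(Dog{}) on a MyList<Dog> invokes the move constructor exactly once and the copy constructor not at all; removing std::move would make `T x = v;` copy instead.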
 
--
Best regards,
Andrey
Mr Flibble <flibble@reddwarf.jmc.corp>: Jan 30 12:49AM

Hi!
 
neolib::format implemented in terms of std::vformat to provide span
details of formatted parameters in result text.
 
auto result1 = neolib::format("{0}:{1}", "xyzzy", 42);
auto result2 = neolib::format("{{0}}:{1}", "xyzzy", 42);
auto result3 = neolib::format("{0}:{{1}}", "xyzzy", 42);
auto result4 = neolib::format("{0}:{0}:{1}", "xyzzy", 42);
 
assert((result1.text == "xyzzy:42"));
assert((result1.args == neolib::format_result::arg_list{ { 0, 0, 5 }, { 1, 6, 8 } }));
assert((result1.arg_span(0) == "xyzzy"));
assert((result1.arg_span(1) == "42"));
assert((result2.text == "{{0}}:42"));
assert((result2.args == neolib::format_result::arg_list{ { 1, 6, 8 } }));
assert((result2.arg_span(1) == "42"));
assert((result3.text == "xyzzy:{{1}}"));
assert((result3.args == neolib::format_result::arg_list{ { 0, 0, 5 } }));
assert((result3.arg_span(0) == "xyzzy"));
assert((result4.text == "xyzzy:xyzzy:42"));
assert((result4.args == neolib::format_result::arg_list{ { 0, 0, 5 }, { 0, 6, 11 }, { 1, 12, 14 } }));
assert((result4.arg_span(0) == "xyzzy"));
assert((result4.arg_span(1) == "42"));
 
/Flibble