- "GotW #97: Assertions (Difficulty: 4/10)" by Herb Sutter - 2 Updates
- atomic<>-costs - 2 Updates
scott@slp53.sl.home (Scott Lurndal): Jan 06 09:35PM

>unrecoverable errors, so a controlled exit with a nice stack dump and
>core is the only safe course of action.

>If asserts are disabled, how does the code know it is safe to proceed?

Asserts are horribly user-unfriendly. Detect the issue, sure, but rather
than the unfriendly and deadly assertion, one can

1) Correct the error and continue.
2) Report the error and reset state (e.g. an interactive command interpreter)
3) In library code, report the error to the caller through an error code
   (e.g. errno).

We're quite careful to ensure that our customers never see an assertion
trigger.

static_assert is quite useful, however, in C++ code. |
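[Editor's note: a minimal sketch of option 3 above, reporting a precondition violation through errno instead of asserting. The function name and the specific precondition are made up for illustration; the pattern is the point.]

```cpp
#include <cassert>
#include <cerrno>

// Hypothetical library routine: instead of assert(size > 0), which
// would kill the caller's process (or silently vanish with NDEBUG),
// detect the bad input, set errno, and return a failure code so the
// caller decides how to recover.
int buffer_resize(int size) {
    if (size <= 0) {      // the condition an assert would have checked
        errno = EINVAL;   // report through an error code (option 3)
        return -1;
    }
    // ... perform the actual resize ...
    return 0;
}
```

The caller can then correct the error and continue (option 1) or report it and reset state (option 2), neither of which an assertion allows.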
Ian Collins <ian-news@hotmail.com>: Jan 07 11:06AM +1300

On 07/01/2021 10:35, Scott Lurndal wrote:
>> core is the only safe course of action.
>> If asserts are disabled, how does the code know it is safe to proceed?

> Asserts are horribly user-unfriendly.

I agree.

> Detect the issue, sure, but rather than the unfriendly and deadly
> assertion, one can
> 1) Correct the error and continue.

Don't use asserts where this is possible. Asserts are like exceptions
on steroids. Use exceptions for exceptional conditions, asserts for
impossible conditions...

> 2) Report the error and reset state (e.g. an interactive command interpreter)

That's effectively what we do (ours is an embedded controller).

> 3) In library code, report the error to the caller through an error code (e.g. errno).

Yes, asserts in library code are a bad idea.

> We're quite careful to ensure that our customers never see an assertion trigger.

Ours will see a brief reset.

> static_assert is quite useful, however, in C++ code.

Very useful when writing cross-platform code.

--
Ian. |
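[Editor's note: a small sketch of the cross-platform use of static_assert both posters endorse. The specific checks are illustrative assumptions about what a port might verify, not taken from either poster's code; static_assert fires at compile time, so no customer can ever see it trigger.]

```cpp
#include <climits>
#include <cstdint>

// Compile-time sanity checks for porting assumptions. If a new target
// violates one, the build fails with the message below instead of the
// code misbehaving at runtime.
static_assert(CHAR_BIT == 8,
              "this code assumes 8-bit bytes");
static_assert(sizeof(void*) <= sizeof(std::uintptr_t),
              "pointers must round-trip through uintptr_t");
static_assert(sizeof(long long) >= 8,
              "need at least a 64-bit integer type");
```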
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 06 12:31PM -0800

On 1/6/2021 6:09 AM, Bonita Montero wrote:
> I wrote a little benchmark that measures the cost of atomic-operations
> when a number of hw-threads are constantly hammering the same cacheline.
> Here it is:
[...]
> So show me your data !

I get:

xchg:
1 threads: 36.4439ns, avg fails: 0.999999
2 threads: 131.794ns, avg fails: 1.93928
3 threads: 169.609ns, avg fails: 2.3317
4 threads: 573.478ns, avg fails: 3.73222

xadd:
1 threads: 15.1847ns
2 threads: 92.2467ns
3 threads: 187.475ns
4 threads: 360.952ns

One little nitpick, XCHG should be renamed to CMPXCHG. :^)

XADD usually beats a CMPXCHG loop because it cannot fail. |
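[Editor's note: a sketch of the two operations being benchmarked, assuming the "xchg" column measures a compare_exchange (CMPXCHG) retry loop and the "xadd" column measures fetch_add (LOCK XADD). The function names are invented for illustration; the contrast is that the CMPXCHG loop can fail and retry under contention, while fetch_add always succeeds in one instruction.]

```cpp
#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> counter{0};

// Increment via a CMPXCHG loop. Each failed compare_exchange is one of
// the "avg fails" counted in the numbers above; on failure 'expected'
// is reloaded with the current value and the loop retries.
std::uint64_t add_via_cmpxchg(std::uint64_t n) {
    std::uint64_t expected = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(expected, expected + n,
                                          std::memory_order_relaxed)) {
        // retry; 'expected' now holds the value another thread wrote
    }
    return expected;  // value before our update
}

// Increment via XADD. fetch_add cannot fail, so there is no retry loop,
// which is why it wins under heavy contention.
std::uint64_t add_via_xadd(std::uint64_t n) {
    return counter.fetch_add(n, std::memory_order_relaxed);
}
```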
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 06 12:56PM -0800

On 1/6/2021 6:09 AM, Bonita Montero wrote:
> I wrote a little benchmark that measures the cost of atomic-operations
> when a number of hw-threads are constantly hammering the same cacheline.
> Here it is:
[...]
> XCHG. There are about 4 swap-failures until a fifth succeeds.
> And a successful XADD is about 240ns, i.e. 700 clock cycles.
> So show me your data !

One other nitpick... Make sure that your:

atomic<uint64_t> atomicValue;

is isolated: aligned and padded on a cache-line boundary. Afaict, you
did not do that! It's going to alter the test. |
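[Editor's note: a minimal sketch of the isolation being asked for, assuming a 64-byte cache line (typical on x86). alignas both aligns the counter to a line boundary and pads the struct out to a whole line, so no neighbouring variable shares the line and perturbs the benchmark via false sharing. The struct name is invented for illustration.]

```cpp
#include <atomic>
#include <cstdint>

// Assumed line size; C++17's std::hardware_destructive_interference_size
// is the portable alternative where available.
constexpr std::size_t kCacheLine = 64;

// alignas aligns the object to a line boundary AND rounds sizeof up to
// a multiple of the alignment, so the counter owns its line outright.
struct alignas(kCacheLine) IsolatedCounter {
    std::atomic<std::uint64_t> value{0};
};

static_assert(alignof(IsolatedCounter) == kCacheLine,
              "counter starts on a cache-line boundary");
static_assert(sizeof(IsolatedCounter) % kCacheLine == 0,
              "counter is padded to whole cache lines");
```

Without this, two benchmark counters placed adjacently could land on the same line, and the measured contention would include false sharing rather than just the atomic operation itself.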