Wednesday, November 4, 2020

Digest for comp.lang.c++@googlegroups.com - 12 updates in 3 topics

Frederick Gotham <cauldwell.thomas@gmail.com>: Nov 04 01:54PM -0800

My current task is to ensure that an embedded Linux device fulfills all the criteria for a particular security standard.
 
One of the requirements in the standard is that a brute-force attack must take a minimum amount of time, specified as 2 days (i.e. 48 solid hours of trying keys).
 
For a 20-bit cryptographic key, there are approximately 1 million (2^20 = 1,048,576) possible unique keys. To perform a full brute-force attack on 1 million keys in 2 days, an attacker can spend no more than about 173 milliseconds per key (172,800,000 ms / 1,000,000 keys). And so I must write code that takes at least 173 milliseconds to use a key to decode data. An easy way of doing it would be:
 
void DecodeString(char *const p, uint_fast32_t const key)
{
    Sleep_For_Milliseconds(173u);
    Decrypt(p,key);
}
 
This is of course a little wasteful if the "Decrypt" function takes 140 milliseconds, as the entire operation will then take 313 milliseconds overall.
 
To be less wasteful, I could code it something like the following:
 
void DecodeString(char *const p, uint_fast32_t const key)
{
    uint_fast64_t const before = GetTickCount();
 
    Decrypt(p,key);
 
    uint_fast64_t const duration = GetTickCount() - before;
 
    if ( duration < 173u )
        Sleep_For_Milliseconds( 173u - duration );
}
 
This technique relies on checking the time twice: once before the cryptographic function and once afterwards.
 
A function such as "GetTickCount" on Microsoft Windows, or "get_uptime" on Linux, would be preferable to using 'ctime', just in case the system time somehow changes.
 
Anyone got any ideas on how best to code this?
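
One portable way to sketch this in standard C++ (my sketch, not the poster's actual code; the Decrypt stub here is a hypothetical stand-in for whatever real cipher is used) is std::chrono::steady_clock, which is monotonic by definition and therefore immune to system-time changes:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <thread>

// Hypothetical stand-in for the poster's Decrypt(): a trivial XOR,
// purely so the example is self-contained and runnable.
void Decrypt(char *p, std::uint_fast32_t key)
{
    for (; *p; ++p)
        *p ^= static_cast<char>(key & 0x7F);
}

// Pad the total running time out to at least 173 ms, measured with a
// monotonic clock so changing the wall clock cannot shorten the delay.
void DecodeString(char *const p, std::uint_fast32_t const key)
{
    using namespace std::chrono;
    constexpr auto minimum = milliseconds(173);

    auto const before = steady_clock::now();
    Decrypt(p, key);
    auto const elapsed = steady_clock::now() - before;

    if (elapsed < minimum)
        std::this_thread::sleep_for(minimum - elapsed);
}
```

std::this_thread::sleep_for is specified to sleep for *at least* the requested duration, which is the right direction of error for a minimum-time requirement.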
"Öö Tiib" <ootiib@hot.ee>: Nov 04 03:11PM -0800

On Wednesday, 4 November 2020 23:54:23 UTC+2, Frederick Gotham wrote:
 
> This technique relies on checking the time twice: Once before the cryptographic function and once afterwards.
 
> A function such as "GetTickCount" on Microsoft, or "get_uptime" on Linux would be preferable over using 'ctime' just in case the system time somehow changes.
 
> Anyone got any ideas on how best to code this?
 
A 20-bit key? That is like 6 decimal digits, which might be some kind
of PIN code on a door (with an alarm after 3 sequential failed tries),
not a cryptographic key.
"Öö Tiib" <ootiib@hot.ee>: Nov 04 02:59PM -0800

On Monday, 2 November 2020 22:31:01 UTC+2, Mr Flibble wrote:
> ^
> prog.cpp:4:7: note: declared protected here
> void f1() {}
 
It is not about the this pointer but about base-class objects of the
same type. This works:
 
class foo
{
protected:
    void f1() {}
};

class bar : public foo
{
public:
    void f2(bar& o)
    {
        o.f1();
    }
};

int main()
{
    bar o1;
    bar o2;
    o2.f2(o1);
}
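
For contrast, a sketch of my own (not from the thread) showing the variant the compiler rejects, which is the rule the quoted error message was about:

```cpp
class foo
{
protected:
    int f1() { return 7; }
};

class bar : public foo
{
public:
    // OK: o has type bar, so the compiler knows the foo subobject
    // belongs to a bar - the accessing class or something derived from it.
    int f2(bar& o) { return o.f1(); }

    // Ill-formed if uncommented: through a plain foo&, the object could
    // be a base subobject of some unrelated sibling class, so protected
    // access is denied.
    // int f3(foo& o) { return o.f1(); }
};
```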
David Brown <david.brown@hesbynett.no>: Nov 04 09:11AM +0100

>> why I mentioned them.
 
> Right, you felt it necessary to make an ad hominem attack on Bonita. But why
> the ad hominem attack?
 
I was attacking Bonita's attitude in this matter, not her directly. And
a suggestion that general Windows programming is not done with the
development quality and care that is used in medical devices, automotive
industries, etc., is not an attack - it is a simple fact. I would hope
most people understand that. If not, apply a little thought and I'm
sure it will be obvious.
 
There are far more types of programming and software development than
most people realise - even programmers who have worked professionally in
a range of fields for decades will be unaware of how things are done in
the majority of different branches. So when people make assumptions
about "no one needs to do that" or "no one /should/ do that", regardless
of the type of development, they are usually wrong. Simple ignorance is
easy to cure - you give the person a little more information. Wilful
ignorance - the kind that ridicules the idea of having a use for "#undef
bool" - is harder to handle.
"daniel...@gmail.com" <danielaparker@gmail.com>: Nov 04 05:12AM -0800

On Wednesday, November 4, 2020 at 3:11:44 AM UTC-5, David Brown wrote:
 
> I was attacking Bonita's attitude in this matter, not her directly.
 
A distinction without a difference; an ad hominem attack is an ad hominem
attack. Stroustrup once posted that a plea for civility can never be entirely
off topic; it is in that spirit that I offer this one.
 
Daniel
Juha Nieminen <nospam@thanks.invalid>: Nov 04 01:32PM

> about 400 days (IIRC). It was cheaper and lower risk to add a "restart
> the engine controller every 120 days" requirement to the maintenance
> manuals than to fix the bug - despite the bug fix being trivial in the code.
 
I find it a bit hard to believe that such a "fix" would be approved by
the aviation authorities.
 
It's essentially saying "we know that our aircraft can suffer a
catastrophic failure, therefore the software must be restarted by
maintenance every 120 days". Instead of fixing the bug, it just
relies on the maintenance personnel, human beings, always remembering
to follow that particular step, everywhere, for hundreds of airplanes,
for years and years to come.
 
One day that step will fall through the cracks due to human error.
The maintenance crew may accidentally skip it for any of a myriad
of possible reasons. Then 400 people might die in a horrific
accident. Can the aircraft manufacturer then use the excuse "yeah, we
knew about the bug, and we knew how to fix it, but we just put an
instruction in the maintenance manual to kludge around the bug. It's
not our fault, it's the fault of the maintenance personnel. They should
have followed the kludge instruction"?
 
If there's one thing about the aviation industry, it's that they don't
really like to leave anything to chance. If something can be made
safer, they will demand it to be made safer. There's a reason why
the validation process for any changes to aircraft is so long and
arduous.
Juha Nieminen <nospam@thanks.invalid>: Nov 04 01:41PM

> I've seen considerable numbers of bugs in apps across Android and
> whatever passes for an OS in smart TVs. And in my car (causing it to not
> restart after an auto engine cuttoff, or make the satnav go crazy).
 
Embedded systems are often riddled with bugs. One of the main reasons
is that the embedded industry has a general aversion to developing its
own software and instead tends to use existing third-party code, with a
generally misplaced trust in the validity of that code. If a library or
program already exists for a particular task, there's often a strong
tendency to use it rather than develop it from scratch.
 
Yes, even if it's just some random code from some random nobody on github.
I kid you not.
 
But even if it's a relatively popular library, there's still no guarantee
that it is free of serious bugs. Yet most of the industry just blindly
trusts that it is.
 
(It's almost scary when the code is so buggy that even the *compiler*
itself warns about the bugs. How many bugs might there be that the
compiler doesn't see?)
Richard Damon <Richard@Damon-Family.org>: Nov 04 08:59AM -0500

On 11/4/20 8:32 AM, Juha Nieminen wrote:
> safer, they will demand it to be made safer. There's a reason why
> the validation process for any changes to aircraft is so long and
> arduous.
 
You may find it hard to believe, but I believe this references a
factual case that HAS happened. (The time period was about 248 days,
or 2**31 * 10 ms.)
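
Checking that figure (my arithmetic, not from the original report): a signed 32-bit counter of 10 ms ticks overflows after 2^31 ticks.

```cpp
#include <cstdint>

// A signed 32-bit counter of 10 ms ticks overflows after 2^31 ticks.
constexpr std::int64_t kTicksToOverflow = std::int64_t(1) << 31;  // 2,147,483,648
constexpr std::int64_t kMsToOverflow    = kTicksToOverflow * 10;  // 21,474,836,480 ms
constexpr double kDaysToOverflow =
    kMsToOverflow / (1000.0 * 60 * 60 * 24);                      // roughly 248.55 days
```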
 
One key point to remember is that normal procedures would have that
controller shut down periodically for various types of maintenance long
before that point, so there isn't much need for a 'special procedure'
to do this. The actual protocol adopted was to force a restart at about
half that time (120 days), so the step would have to have been missed
for months before the counter could overflow.
 
It also should be noted that they ARE working on a software fix for
this, probably tied in with a lot of other changes they want to make,
but it WILL take a while for the fix to make it out (and it may
actually have been made by this time; this was a number of years ago).
The issue is that it takes a long time to re-certify the software as
air-worthy.
red floyd <no.spam.here@its.invalid>: Nov 04 07:12AM -0800

On 11/3/2020 9:15 AM, Bonita Montero wrote:
 
>> You have no clue what's involved in real software development, it
>> appears.
 
> Nothing what you say is related to what I said.
 
You have made the common mistake of assuming that your use case
is the only one that is needed.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 04 04:33PM +0100


>> Nothing what you say is related to what I said.
 
> You have made the common mistake of assuming that your use case
> is the only one that is needed.
 
It is. There's no need to read a source without changing it.
David Brown <david.brown@hesbynett.no>: Nov 04 04:44PM +0100

On 04/11/2020 14:32, Juha Nieminen wrote:
>> manuals than to fix the bug - despite the bug fix being trivial in the code.
 
> I find it a bit hard to believe that such a "fix" would be approved by
> the aviation authorities.
 
Of course the long-term fix is to correct the software.
 
> relies on the maintenance personnel, human beings, always remembering
> to follow that particular step, everywhere, for hundreds of airplanes,
> for years and years to come.
 
Aircraft maintenance is done very rigorously, with lots of checklists.
These things don't get forgotten.
 
> safer, they will demand it to be made safer. There's a reason why
> the validation process for any changes to aircraft is so long and
> arduous.
 
Sure. But it is precisely because of such an attitude that you can't
simply fix the bug.
 
The development people will of course fix this bug. Then they will put
the new version on boards for long-term testing. They will not ship the
new software until it has undergone perhaps months of automated testing.
In the meantime, any new engines made will get the known bad software.
 
Known bugs like this are not a big risk - because the workaround is
clear and reliable. It is unknown bugs that are the risk.
Sjouke Burry <burrynulnulfour@ppllaanneett.nnll>: Nov 04 05:11PM +0100

On 04.11.20 14:59, Richard Damon wrote:
> actually been made by this time, this was a number of years ago). The
> issue was that it takes a long time to re-certify the software to be
> air-worthy.
 
Why not reboot when the plane shuts down, even if that happens daily?
Which safety guy decided that it is "a good idea" to
leave things running for weeks or months?
I would accuse myself of stupidity if I used my computers that way.