Sunday, May 31, 2020

Digest for comp.lang.c++@googlegroups.com - 4 updates in 3 topics

Juha Nieminen <nospam@thanks.invalid>: May 31 04:13PM

It's still unclear to me what the difference is between the function type
syntax of the form "ReturnType(*)(ParameterTypes)" and
"ReturnType(ParameterTypes)".
 
Sometimes they seem to be interchangeable, sometimes not.
 
For example, this is valid:
 
//-------------------------------------------
using F1 = int(*)(int);
using F2 = int(int);
 
void foo1(F1 funcPtr); // ok
void foo2(F2 funcPtr); // ok
//-------------------------------------------
 
However, this is not valid:
 
//-------------------------------------------
F1 funcPtr1 = someFunc; // ok
F2 funcPtr2 = someFunc; // error
//-------------------------------------------
 
Likewise:
 
//-------------------------------------------
std::function<int(int)> funcObj1; // ok
std::function<int(*)(int)> funcObj2; // error
 
std::set<int, bool(*)(int, int)> set1; // ok
std::set<int, bool(int, int)> set2; // error
//-------------------------------------------
 
So they are like the reverse of each other.
 
F2 above can be used for a function declaration, which might give some
insight into what it means. In other words:
 
//-------------------------------------------
F2 someFunction; // ok
 
int main()
{
int value = someFunction(5); // ok.
}
 
int someFunction(int v) { return v+1; }
//-------------------------------------------
Ben Bacarisse <ben.usenet@bsb.me.uk>: May 31 05:49PM +0100


> It's still unclear to me what the difference is between the function type
> syntax of the form "ReturnType(*)(ParameterTypes)" and
> "ReturnType(ParameterTypes)".
 
The first is a pointer type and the second a function type.
 
> using F2 = int(int);
 
> void foo1(F1 funcPtr); // ok
> void foo2(F2 funcPtr); // ok
 
Because functions can't be passed as parameters, function types are
converted to pointer-to-function types in this context. This is
annoying, but it's a hang-over from C.
 
It's analogous to (raw) array types in parameter lists -- they also get
converted to pointer types.
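A small illustration (C++17 for std::is_same_v; the adjustment is part of
the function's own type, so both spellings name the same type):

//-------------------------------------------
#include <type_traits>

using F2 = int(int);

void foo(F2 f);          // declared with a function type...
void foo(int (*f)(int)); // ...redeclares the same function

static_assert(std::is_same_v<void(int(int)), void(int(*)(int))>, "");

void bar(int a[10]);     // likewise, the array parameter...
void bar(int* a);        // ...is adjusted to a pointer
//-------------------------------------------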
 
> F1 funcPtr1 = someFunc; // ok
> F2 funcPtr2 = someFunc; // error
> //-------------------------------------------
 
And you can't assign to a function.
 
 
> //-------------------------------------------
> std::function<int(int)> funcObj1; // ok
> std::function<int(*)(int)> funcObj2; // error
 
You need a function type, not a pointer type, for std::function.
 
> std::set<int, bool(int, int)> set2; // error
> //-------------------------------------------
 
> So they are like the reverse of each other.
 
A set holds data objects, and pointers are data objects, so a set of
pointers makes sense. Functions are not considered to be data objects
in either C or C++, so you can't put them into collections.
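The pointer version works because the pointer is itself a data object; it
just has to be supplied when the set is constructed. A minimal sketch:

//-------------------------------------------
#include <set>

bool greater_than(int a, int b) { return a > b; }

std::set<int, bool(*)(int, int)> set1(&greater_than); // pass the pointer in
//-------------------------------------------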
 
> insight into what it means. In other words:
 
> //-------------------------------------------
> F2 someFunction; // ok
 
Yes, F2 is a function type and can be used to declare (but not define)
functions.
 
 
--
Ben.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 27 12:26PM +0200

On 27.05.2020 01:52, Chris M. Thomasson wrote:
 
> Basically. However, I want to extend this to handle higher bases. Right
> how its only a 2-ary tree. It can be extended to n-ary. The code is down
> right crude and very basic for now.
 
Oh look, an efficient integral power function:
 
 
namespace impl
{
    // `is_odd` is not shown in the snippet; presumably something like
    // this in the library:
    constexpr inline auto is_odd( const int x )
        -> bool
    { return x % 2 != 0; }

    constexpr inline auto intpow( const double base, const int exponent )
        -> double
    {
        double result = 1;
        double weight = base;
        for( int n = exponent; n != 0; weight *= weight ) {
            if( is_odd( n ) ) {
                result *= weight;
            }
            n /= 2;
        }
        return result;
    }
}  // namespace impl
/// @endcond
 
/// \brief Efficient *x* to the *n*'th power, when *n* is an integer.
///
/// \param base The *x* in "*x* to the *n*'th".
/// \param exponent The *n* in "*x* to the *n*'th".
///
/// Essentially this is Horner's rule adapted to calculating a power, so that
/// the number of floating point multiplications is at worst O(log2(n)).
constexpr inline auto intpow( const double base, const int exponent )
    -> double
{
    return (0?0
        : exponent == 0?    1.0
        : exponent < 0?     1.0/impl::intpow( base, -exponent )
        :                   impl::intpow( base, exponent )
        );
}
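A quick compile-time sanity check of the above (assuming the `is_odd`
helper shown; both results are exact in binary floating point):

static_assert( intpow( 2.0, 10 ) == 1024.0, "" );
static_assert( intpow( 2.0, -2 ) == 0.25, "" );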
 
 
<url: https://github.com/alf-p-steinbach/cppx-core-language/blob/master/source/cppx-core-language/calc/floating-point-operations.hpp>
 
 
[snip]
 
 
- Alf
Juha Nieminen <nospam@thanks.invalid>: May 27 06:35AM

> [Jesus Loves You] Messages of hope and a future
 
You may get a sense of martyrdom and righteous victimhood when you keep
spamming this newsgroup with provocative off-topic content and getting
negative reactions. However, even if your god were real and exactly
like you believe him to be, even he wouldn't approve of you behaving
like this. I think that even he would rebuke you for this kind of
disruptive behavior. He would not be pleased with you. I don't think
even he would want you to spread his word in this manner.
 
The fact that you don't accept that makes you an obsessed bigot.

Saturday, May 30, 2020

Digest for comp.lang.c++@googlegroups.com - 1 update in 1 topic

Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: May 30 10:00PM +0100

... beg the question.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Digest for comp.programming.threads@googlegroups.com - 6 updates in 6 topics

aminer68@gmail.com: May 29 02:10PM -0700

Hello,
 
 
I am a white Arab who is an inventor of many scalable algorithms and their implementations, and now I will talk about
"How to beat Moore's Law?" and more about "Energy efficiency".
 
How to beat Moore's Law?
 
I think that with the following discovery graphene can finally be used in CPUs, and it is a scale-out method; read about the discovery and you will notice it:
 
New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs
 
Read more here:
 
https://www.extremetech.com/computing/267695-new-graphene-discovery-could-finally-punch-the-gas-pedal-drive-faster-cpus
 
The scale-out method above with graphene is very interesting, and here is the other, scale-up, method with multicores and parallelism:
 
Beating Moore's Law: Scaling Performance for Another Half-Century
 
Read more here:
 
https://www.infoworld.com/article/3287025/beating-moore-s-law-scaling-performance-for-another-half-century.html
 
 
More about Energy efficiency..
 
You have to be aware that parallelization of the software
can lower power consumption, and here is the formula
that permits you to calculate the power consumption of
"parallel" software programs:
 
Power consumption of the total cores = (number of cores) * (1/(parallel speedup))^3 * (power consumption of a single core).
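Here is that formula transcribed directly into C++ (a sketch of the formula
exactly as stated above; the function names are mine, not from any library):

#include <cmath>
#include <cstdio>

double total_power( int cores, double parallel_speedup, double single_core_power )
{
    // (number of cores) * (1/(parallel speedup))^3 * (single-core power)
    return cores * std::pow( 1.0 / parallel_speedup, 3 ) * single_core_power;
}

int main()
{
    // Example: 4 cores, a parallel speedup of 2, and 10 W per core at
    // full frequency gives 4 * (1/2)^3 * 10 = 5 W.
    std::printf( "%g W\n", total_power( 4, 2.0, 10.0 ) );
}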
 
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++, Rust
and Ada ranked as some of the most energy-efficient languages out there,
and Java and FreePascal also rank well for energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still useful for mobile
devices, because Delphi and FreePascal are good if you are considering
time and memory, or energy and memory. The Pascal entry in the following
benchmark was compiled with FreePascal, and the benchmark shows that C,
Go and Pascal do rather better if you're ranking languages on time and
memory, or energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 29 01:08PM -0700

Hello,
 
 
In this post I have just added the web link about "You cannot scale creativity", so read my important post again:
 
I am a white Arab who is smarter, and I invite you to read the following thoughts about "You cannot scale creativity". They are related to my following powerful product, which I have designed and implemented, because, as you will read below, "The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers." Here is my powerful product (it can also be applied to organizations):
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
Please read the following about Applying the Universal Scalability Law to organisations:
 
https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/
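For reference, here is the standard formulation of the Universal
Scalability Law (Gunther's model; this sketch is generic and not taken
from the product above). It gives the relative capacity C(N) at N workers:

// C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1))
// alpha: contention (serialization); beta: coherency (crosstalk).
double usl_capacity( double n, double alpha, double beta )
{
    return n / (1.0 + alpha * (n - 1.0) + beta * n * (n - 1.0));
}

When beta > 0, capacity peaks near N = sqrt((1 - alpha)/beta) and then
falls, which is the "negative returns from coordination" that the
organizational reading relies on.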
 
 
So read the following to understand:
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
You cannot scale creativity
 
As a teenager, I was genuinely impressed by communism. The way I saw it, the West could never compete. The USSR offered a centralized and efficient system that could eliminate waste and ensure optimal efficiency. If a scientific problem appeared, the USSR could throw 10, 100 or 1000 scientists at it without having to cajole anyone.
 
I could not quite understand why the communist countries always appeared to be technologically so backward. Weren't their coordinated engineers and scientists out-innovating our scientists and engineers?
 
I was making a reasoning error. I had misunderstood the concept of economy of scale best exemplified by Ford. To me, communism was more or less a massive application of the Fordian approach. It ought to make everything better and cheaper!
 
The industrial revolution was made possible by economies of scale: it costs far less per car to produce 10,000 cars than to make just one. Bill Gates became the richest man in the world because software offers an optimal economy of scale: it costs the same to produce one copy of Windows or 100 million copies.
 
Trade and employment can also scale: the transaction costs go down if you sell 10,000 objects a day, or hire 10,000 people a year. Accordingly, people living in cities are typically better off and more productive.
 
This has led to the belief that if you regroup more people and you organize them, you get better productivity. I want to stress how different this statement is from the previous observations. We can scale products, services, trade and interaction. Scaling comes from the fact that we need to reproduce many copies of essentially the same object or service. But merely regrouping people only involves scaling in accounting and human resources: if these are the costs holding you back, you are probably not doing anything important. To get ten people together to have much more than ten times the output is only possible if you are producing a uniform product or service.
 
Yet, somehow, people conclude that regrouping people and getting them to work on a common goal will, by itself, improve productivity. Fred Brooks put a dent in this theory with his Brooks's law:
 
Adding manpower to a late software project makes it later.
 
While it is true that almost all my work is collaborative, I consistently found it counterproductive to work in large groups. Of course, as an introvert, this goes against all my instincts. But I also fail to see the productivity gains in practice whereas I do notice the more frequent meetings.
 
Abramo et al. (2012) looked seriously at this issue and found that you get no more than linear scaling. That is, a group of two researchers will produce twice as much as one researcher. Period. There is no economy of scale when coordinating human brains. Their finding contradicts decades of science policy where we have tried to organize people into larger and better coordinated groups (a concept eerily reminiscent of communism).
 
We can make an analogy with computers. Your quad-core processor will not run Microsoft Word four times as fast. It probably won't even run it twice as fast. In fact, poorly written software may even run slower when there is more than one core. Coordination is expensive.
 
The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers.
 
Read more here:
 
https://lemire.me/blog/2012/10/15/you-cannot-scale-creativity/
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 29 12:07PM -0700

Hello,
 
 
Read again; I have corrected my post about: Dematerialising the future: what role can technology and consumers play?
 
I have just posted before about Dematerialization Through Services,
read it here:
 
https://groups.google.com/forum/#!topic/soc.culture.usa/rVZUcghUe5E
 
 
The above makes it clear that the evidence indicates that
'dematerialization through services' is not a valid policy for
reducing carbon emissions.
 
But Dematerialising is still important, read the following to notice it:
 
Dematerialising the future: what role can technology and consumers play?
 
https://www.theguardian.com/sustainable-business/dematerialising-future-technology-consumers
 
Also read my following thoughts to notice:
 
About capitalism and the positive correlation between economic growth and environmental performance..
 
As an economy expands, resource usage becomes increasingly efficient and economies tend to move away from ecologically harmful behavior, while raising the standard of living of its participants. In fact, the 2018 Yale Environmental Performance Index shows a clear positive correlation between economic growth and environmental performance, read about it here:
 
https://epi.envirocenter.yale.edu/2018/report/category/hlt
 
So I think that we are on the right path, and as you can see we have to dematerialize much more in order to avoid environmental problems. But what will our much more dematerialized near future look like? Watch the following video to see how one of our fellow tech leads and software developers is dematerializing his life much more, and is happy doing it:
 
My minimalist apartment (as a millionaire)
 
https://www.youtube.com/watch?v=EUeqHhbQWFc
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 29 11:56AM -0700

Hello,
 
 
I have just posted before about Dematerialization Through Services,
read it here:
 
https://groups.google.com/forum/#!topic/soc.culture.usa/rVZUcghUe5E
 
 
The above makes it clear that the evidence indicates that
'dematerialization through services' is not a valid policy for
reducing carbon emissions.
 
But Dematerialising is still important, read the following to notice it:
 
Dematerialising the future: what role can technology and consumers play?
 
https://www.theguardian.com/sustainable-business/dematerialising-future-technology-consumers
 
Also read my following thoughts to notice:
 
About capitalism and the positive correlation between economic growth and environmental performance..
 
As an economy expands, resource usage becomes increasingly efficient and economies tend to move away from ecologically harmful behavior, while raising the standard of living of its participants. In fact, the 2018 Yale Environmental Performance Index shows a clear positive correlation between economic growth and environmental performance, read about it here:
 
https://epi.envirocenter.yale.edu/2018/report/category/hlt
 
So I think we are on the right path, and I think that as consumption switches from goods to services, economic growth can be decoupled from the use of material resources, and this will help us very much to avoid environmental problems.
 
So, as you can see, we have to dematerialize much more in order to avoid environmental problems. But what will our much more dematerialized near future look like? Watch the following video to see how one of our fellow tech leads and software developers is dematerializing his life much more, and is happy doing it:
 
My minimalist apartment (as a millionaire)
 
https://www.youtube.com/watch?v=EUeqHhbQWFc
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 29 10:24AM -0700

Hello,
 
 
2 drugs for Gaucher's disease also fight COVID-19, Israeli defense lab finds
 
The Defense Ministry-run Institute for Biological Research has found that two drugs used to treat a genetic disorder known as Gaucher's disease are also effective against the coronavirus, and potentially other viruses as well, the laboratory announced Tuesday.
 
As one of these drugs — Cerdelga — has already been approved for use by the US Food and Drug Administration and the second — Venglustat — has almost completed the approval process, they may be fast-tracked for use with COVID-19 patients, the Defense Ministry said.
 
Read more here:
 
https://www.timesofisrael.com/gauchers-disease-drugs-also-fight-covid-19-israeli-defense-lab-finds/
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 29 09:55AM -0700

Hello,
 
 
Gaucher's disease drugs effective against coronavirus: Israeli research
 
Read more here:
 
https://www.globaltimes.cn/content/1189623.shtml
 
 
Thank you,
Amine Moulay Ramdane.

Friday, May 29, 2020

Digest for comp.lang.c++@googlegroups.com - 6 updates in 2 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 29 01:43PM -0700

On 5/27/2020 3:26 AM, Alf P. Steinbach wrote:
>> Right how its only a 2-ary tree. It can be extended to n-ary. The code
>> is down right crude and very basic for now.
 
> Oh look, an efficient integral power function:
[...]
 
:^)
 
Fwiw, here is an expensive root finding algorithm for complex numbers:
__________________________
ct_complex
ct_root(
    ct_complex const& z,
    unsigned int r,
    unsigned int b
) {
    double angle_base = std::arg(z) / b;
    double angle_step = CT_PI2 / b;

    double length = std::abs(z);

    double angle = angle_base + angle_step * r;
    double radius = std::pow(length, 1.0 / b);

    ct_complex root(
        std::cos(angle) * radius,
        std::sin(angle) * radius
    );

    return root;
}
__________________________
 
r is the index of the root to find, and b is the base (the degree of the root).
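A usage sketch (assuming ct_complex is std::complex<double> and CT_PI2 is
2*pi): the b roots of z are ct_root(z, 0, b) through ct_root(z, b-1, b).
__________________________
#include <complex>
#include <iostream>

int main()
{
    ct_complex z(8.0, 0.0);
    for (unsigned int r = 0; r < 3; ++r) {
        // the three cube roots of 8: 2, -1+i*sqrt(3), -1-i*sqrt(3)
        std::cout << ct_root(z, r, 3) << "\n";
    }
}
__________________________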
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 30 12:02AM +0200

On 29.05.2020 22:43, Chris M. Thomasson wrote:
> }
> __________________________
 
> r is the root to find in the base b.
 
You can replace the expensive `pow` call with a fastish integral power
as shown (the code you snipped).
 
You can reduce the number of expensive calls to `cos` and `sin`
 
* by recognizing special bases like 2, 3 and 4 (possibly you're only
using a very few bases, in which case this covers all of it!),
* for the more general use cases, by returning a root generator object
rather than directly a complex, where the generator object internally
retains the first non-1 b'th root of 1, and calculates the various angle
roots as powers of that, times root of norm.
 
For the last point the idea is that where code now calls ct_root
repeatedly to obtain all the various angle roots, with b calls to `pow`,
`cos` and `sin`, it can instead call ct_root_gen with just 1 call to
`pow`, `cos` and `sin`, to get a generator which it then can call
repeatedly to more fastishly get the b roots.
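A minimal sketch of that generator idea (assuming ct_complex is
std::complex<double>; one `pow`/`cos`/`sin` round up front, then one
complex multiplication per root):

#include <cmath>
#include <complex>

using ct_complex = std::complex<double>;

class ct_root_gen {
    ct_complex current_;    // the r'th root, starting at r = 0
    ct_complex step_;       // first non-1 b'th root of unity
public:
    ct_root_gen( ct_complex const& z, unsigned int b )
        : current_( std::polar( std::pow( std::abs( z ), 1.0/b ), std::arg( z )/b ) )
        , step_( std::polar( 1.0, 8*std::atan( 1.0 )/b ) )    // 2*pi/b
    {}

    ct_complex next()
    {
        ct_complex result = current_;
        current_ *= step_;              // rotate by 2*pi/b to the next root
        return result;
    }
};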
 
 
- Alf
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 30 12:07AM +0200

On 30.05.2020 00:02, Alf P. Steinbach wrote:
 
>> r is the root to find in the base b.
 
> You can replace the expensive `pow` call with a fastish integral power
> as shown (the code you snipped).
 
Oh wait, that's bollocks. What was I thinking. But possibly/likely there
is a corresponding way to calculate real roots fastishly.
 
- - -
 
This however holds:
 
> `cos` and `sin`, it can instead call ct_root_gen with just 1 call to
> `pow`, `cos` and `sin`, to get a generator which it then can call
> repeatedly to more fastishly get the b roots.
 
- Alf
"Öö Tiib" <ootiib@hot.ee>: May 29 12:20AM -0700

On Friday, 29 May 2020 02:02:07 UTC+3, Cholo Lennon wrote:
> WTL, I left MFC behind... well, due to MS winding ideas about GUI
> programming (hello new WinUI, OMG another framework again!) I left
> Windows GUI programming, but this is another story.
 
The core issue is that the human operator is slow. The most time-consuming
part is when the user has to navigate back and forth in a forest of
inconveniently designed GUI to get some common use case covered.
 
A GUI is often designed inconveniently because the actual nuances of
end-user needs often get lost or are initially misunderstood by the
programmer. What is self-evident to one is unclear to another.
 
In C++ code the GUI is a tree of data members. The data members
of classes can be tricky, time-consuming and error-prone to rearrange.
It really does not matter whether such a data member is made using
inheritance or CRTP of its base class. In a script-based GUI that
is far simpler, and so it wins.
 
Therefore the winner seems to be entirely text-parsing-based
run-time binding, like HTML. That is far less efficient than either the
CRTP of WTL or the run-time polymorphic base classes of MFC, whose
differences do not matter at all.
Paavo Helde <eesnimi@osa.pri.ee>: May 29 10:43AM +0300

29.05.2020 01:25 Mr Flibble wrote:
 
>> You are right in that MFC is not C++. It is a C++ library.
 
> I am sure it wouldn't be too hard to find some C++ non-conformance in
> the MFC "bag of shite" given how crap the M$ C++ compiler is.
 
I'm sure it is not too hard to find some C++ non-conformance in any
medium or large C++ project.
 
In MFC there are some functions like CWnd::GetSafeHwnd() whose
functionality depends on whether this==nullptr. Any application relying
on this is non-conforming. I don't know if MFC itself relies on such
things internally or not.
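The pattern in question looks roughly like this (a sketch of the documented
behaviour, not MFC's actual source; HWND stands in for the real handle type):

using HWND = void*;

class CWndLike {
    HWND m_hWnd;
public:
    HWND GetSafeHwnd() const
    {
        // Calling a member function through a null pointer is already
        // undefined behaviour, so a conforming compiler may assume
        // this != nullptr and drop the check entirely.
        return this == nullptr ? nullptr : m_hWnd;
    }
};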
 
BTW, MSVC was crap 20 years ago. Nowadays it has become pretty decent.
"Miguel Giménez" <me@privacy.net>: May 29 11:17AM +0200

On 28/05/2020 at 6:41, Heitber Andres Montilla Ramirez wrote:
> ld.exe____cannot find -lwxzlibd
 
> i tried to add those in the linker settings but don't work.
> how i may solve this problem i'm using codeblocks 17.12 and wxwidgets 3.0.4
 
Read this link:
 
http://wiki.codeblocks.org/index.php?title=FAQ-Compiling_%28general%29#Q:_How_do_I_report_a_compilation_problem_on_the_forums.3F
 
and then post the results in forums.codeblocks.org
 
My bet is you are trying to use debug libraries while you only have
release ones.
 
--
Regards
Miguel Giménez

Digest for comp.programming.threads@googlegroups.com - 10 updates in 10 topics

aminer68@gmail.com: May 28 04:38PM -0700

Hello,
 
 
Creating a thread may cost thousands of CPU cycles. If you have a cheap function that requires only hundreds of cycles, it is almost surely wasteful to create a thread to execute it. The overhead alone is going to set you back.
 
Read more here:
 
https://lemire.me/blog/2020/01/30/cost-of-a-thread-in-c-under-linux/
 
 
This is why I have invented a powerful Threadpool that scales very well;
read my following thoughts about it:
 
https://community.idera.com/developer-tools/general-development/f/getit-and-third-party/72018/about-the-threadpool
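For readers who want the idea without the library: a minimal generic
sketch of the reuse pattern (this is not Amine's Threadpool; pay the
thread-creation cost once, then feed cheap tasks through a queue):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                        if (done_ && tasks_.empty()) return;
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();  // run the task outside the lock
                }
            });
    }
    void submit(std::function<void()> f) {
        { std::lock_guard<std::mutex> lock(m_); tasks_.push(std::move(f)); }
        cv_.notify_one();
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
};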
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 02:25PM -0700

Hello,
 
 
Read again; I have improved the formatting of my following webpage about my powerful product:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
And read the following because it is so important:
 
More precision about: You cannot scale creativity
 
Please read the following about Applying the Universal Scalability Law to organisations:
 
https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/
 
I am a white Arab who is smarter, and I invite you to read the
following thoughts. They are related to my following powerful product, which I have designed and implemented, because, as you will read below, "The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers." Here is my powerful product (it can also be applied to organizations):
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
So read the following to understand:
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
You cannot scale creativity
 
As a teenager, I was genuinely impressed by communism. The way I saw it, the West could never compete. The USSR offered a centralized and efficient system that could eliminate waste and ensure optimal efficiency. If a scientific problem appeared, the USSR could throw 10, 100 or 1000 scientists at it without having to cajole anyone.
 
I could not quite understand why the communist countries always appeared to be technologically so backward. Weren't their coordinated engineers and scientists out-innovating our scientists and engineers?
 
I was making a reasoning error. I had misunderstood the concept of economy of scale best exemplified by Ford. To me, communism was more or less a massive application of the Fordian approach. It ought to make everything better and cheaper!
 
The industrial revolution was made possible by economies of scale: it costs far less per car to produce 10,000 cars than to make just one. Bill Gates became the richest man in the world because software offers an optimal economy of scale: it costs the same to produce one copy of Windows or 100 million copies.
 
Trade and employment can also scale: the transaction costs go down if you sell 10,000 objects a day, or hire 10,000 people a year. Accordingly, people living in cities are typically better off and more productive.
 
This has led to the belief that if you regroup more people and you organize them, you get better productivity. I want to stress how different this statement is from the previous observations. We can scale products, services, trade and interaction. Scaling comes from the fact that we need to reproduce many copies of essentially the same object or service. But merely regrouping people only involves scaling in accounting and human resources: if these are the costs holding you back, you are probably not doing anything important. To get ten people together to have much more than ten times the output is only possible if you are producing a uniform product or service.
 
Yet, somehow, people conclude that regrouping people and getting them to work on a common goal will, by itself, improve productivity. Fred Brooks put a dent in this theory with his Brooks's law:
 
Adding manpower to a late software project makes it later.
 
While it is true that almost all my work is collaborative, I consistently found it counterproductive to work in large groups. Of course, as an introvert, this goes against all my instincts. But I also fail to see the productivity gains in practice whereas I do notice the more frequent meetings.
 
Abramo et al. (2012) looked seriously at this issue and found that you get no more than linear scaling. That is, a group of two researchers will produce twice as much as one researcher. Period. There is no economy of scale when coordinating human brains. Their finding contradicts decades of science policy where we have tried to organize people into larger and better coordinated groups (a concept eerily reminiscent of communism).
 
We can make an analogy with computers. Your quad-core processor will not run Microsoft Word four times as fast. It probably won't even run it twice as fast. In fact, poorly written software may even run slower when there is more than one core. Coordination is expensive.
 
The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 02:00PM -0700

Hello,
 
 
More precision about: You cannot scale creativity
 
Please read the following about Applying the Universal Scalability Law to organisations:
 
https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/
 
I am a white Arab who is smarter, and I invite you to read the
following thoughts. They are related to my following powerful product, which I have designed and implemented, because, as you will read below, "The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers." Here is my powerful product (it can also be applied to organizations):
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
So read the following to understand:
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
You cannot scale creativity
 
As a teenager, I was genuinely impressed by communism. The way I saw it, the West could never compete. The USSR offered a centralized and efficient system that could eliminate waste and ensure optimal efficiency. If a scientific problem appeared, the USSR could throw 10, 100 or 1000 scientists at it without having to cajole anyone.
 
I could not quite understand why the communist countries always appeared to be technologically so backward. Weren't their coordinated engineers and scientists out-innovating our scientists and engineers?
 
I was making a reasoning error. I had misunderstood the concept of economy of scale best exemplified by Ford. To me, communism was more or less a massive application of the Fordian approach. It ought to make everything better and cheaper!
 
The industrial revolution was made possible by economies of scale: it costs far less per car to produce 10,000 cars than to make just one. Bill Gates became the richest man in the world because software offers an optimal economy of scale: it costs the same to produce one copy of Windows or 100 million copies.
 
Trade and employment can also scale: the transaction costs go down if you sell 10,000 objects a day, or hire 10,000 people a year. Accordingly, people living in cities are typically better off and more productive.
 
This has led to the belief that if you regroup more people and you organize them, you get better productivity. I want to stress how different this statement is from the previous observations. We can scale products, services, trade and interaction. Scaling comes from the fact that we need to reproduce many copies of essentially the same object or service. But merely regrouping people only involves scaling in accounting and human resources: if these are the costs holding you back, you are probably not doing anything important. To get ten people together to have much more than ten times the output is only possible if you are producing a uniform product or service.
 
Yet, somehow, people conclude that regrouping people and getting them to work on a common goal will, by itself, improve productivity. Fred Brooks put a dent in this theory with his Brooks's law:
 
Adding manpower to a late software project makes it later.
 
While it is true that almost all my work is collaborative, I consistently found it counterproductive to work in large groups. Of course, as an introvert, this goes against all my instincts. But I also fail to see the productivity gains in practice whereas I do notice the more frequent meetings.
 
Abramo et al. (2012) looked seriously at this issue and found that you get no more than linear scaling. That is, a group of two researchers will produce twice as much as one researcher. Period. There is no economy of scale when coordinating human brains. Their finding contradicts decades of science policy where we have tried to organize people into larger and better coordinated groups (a concept eerily reminiscent of communism).
 
We can make an analogy with computers. Your quad-core processor will not run Microsoft Word four times as fast. It probably won't even run it twice as fast. In fact, poorly written software may even run slower when there is more than one core. Coordination is expensive.
 
The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 01:30PM -0700

Hello,
 
 
I am a white Arab who is smarter, and I invite you to read
the following webpage:
 
Sorting already sorted arrays is much faster?
 
https://lemire.me/blog/2016/09/28/sorting-already-sorted-arrays-is-much-faster/
 
 
This is why i have also designed and implemented my new Parallel Sort Library that is powerful, read about it and download it from here:
 
https://sites.google.com/site/scalable68/parallel-sort-library-that-is-more-efficient
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 12:37PM -0700

Hello,
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
We are getting smarter as a matter of survival
 
A journalism student got very depressed after reading my post on genetically engineered intelligence (read it here:
https://lemire.me/blog/2013/03/17/is-genetically-engineering-intelligence-worth-it/). His feeling can be summarized by this question: if at some point in the near future, human beings or machines become orders of magnitude smarter than we are, why bother making an effort now? Won't the great novel you are writing look quaint? Won't your mathematical theory appear childish? Why bother learning calculus if IBM is about to come up with a computer able to solve all college calculus problems perfectly in seconds?
 
I want to make two important points as an answer to this question:
 
1. Each successive generation has been getting smarter
 
It shouldn't be shocking to think that our children will be smarter. We are smarter than our parents. A thousand years ago, if you knew how to read or write, you were a scholar (by definition). A few centuries ago, anyone who could read silently (without having to vocalize the words) was regarded with awe. Fifty years ago, people who could use computers for their daily tasks were wizards.
 
A common mistake is to think that "intelligence" is made of the piece of meat in your brain. Your intelligence is actually an aggregate of your brain with your environment and the tools and ideas around you. Tools extend our intelligence… with computers and robots being obvious examples. Physical tools are not the only things making us smarter however. If you study the work of Newton, the way he presented it himself, it may well be impossible to understand. Newton had to work with relatively weak intellectual tools: he had to do everything through geometry because that's how people did mathematics at the time. We can now do mathematics much more effectively. That is why there are millions of people who master calculus whereas it was once considered leading edge knowledge, only accessible to the great minds.
 
In some sense, true mathematics is about constructing mental tools so we can be smarter. So mathematicians have been busy making humanity smarter for centuries.
 
Many college students, if transported back a century or two in the past, would be phenomenal geniuses. Some might object to that statement. After all, the brains of these teenagers are nothing extraordinary. But they are! Our brains are wired in ways that are vastly different. To learn is to rewire your brain. How would you differentiate a genius from someone who has visited the future long enough to steal the best ideas and train in their understanding? You simply couldn't! As Alan Kay put it: A change of perspective is worth 80 IQ points (read it here:
http://michaelnielsen.org/blog/867/).
 
Of course, we are going to hack directly into our brains in the near future. I am still waiting for a chip that will give me access to the web at the speed of the thought. But this will not be a radical departure from what we have been doing for thousands of years: getting smarter faster and faster.
 
2. We absolutely need to get smarter at an accelerating pace.
 
Unfortunately, it is not a given that we are going to get much smarter: we could also go extinct or our civilization could collapse. It has happened before. To maintain a sophisticated civilization, we have to keep out-innovating our problems. You may have heard that our civilization is not sustainable. We burn too much fossil oil, we pollute too much, there are too many of us, and so on. This is all true. If we are going to keep on surviving, let alone get better, we need to keep on getting smarter at a rate that exceeds our growing problems. We are not just getting smarter for fun, we are getting smarter as a matter of survival.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 11:45AM -0700

Hello,
 
 
I am a white Arab who is smarter, and I invite you to read the
following thoughts. They are related to my following product, which I have
designed and implemented, because, as you will read below, "The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers."
Here is my product:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
So read the following to understand:
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
You cannot scale creativity
 
As a teenager, I was genuinely impressed by communism. The way I saw it, the West could never compete. The USSR offered a centralized and efficient system that could eliminate waste and ensure optimal efficiency. If a scientific problem appeared, the USSR could throw 10, 100 or 1000 scientists at it without having to cajole anyone.
 
I could not quite understand why the communist countries always appeared to be technologically so backward. Weren't their coordinated engineers and scientists out-innovating our scientists and engineers?
 
I was making a reasoning error. I had misunderstood the concept of economy of scale best exemplified by Ford. To me, communism was more or less a massive application of the Fordian approach. It ought to make everything better and cheaper!
 
The industrial revolution was made possible by economies of scale: it costs far less per car to produce 10,000 cars than to make just one. Bill Gates became the richest man in the world because software offers an optimal economy of scale: it costs the same to produce one copy of Windows or 100 million copies.
 
Trade and employment can also scale: the transaction costs go down if you sell 10,000 objects a day, or hire 10,000 people a year. Accordingly, people living in cities are typically better off and more productive.
 
This has led to the belief that if you regroup more people and you organize them, you get better productivity. I want to stress how different this statement is from the previous observations. We can scale products, services, trade and interaction. Scaling comes from the fact that we need to reproduce many copies of essentially the same object or service. But merely regrouping people only involves scaling in accounting and human resources: if these are the costs holding you back, you are probably not doing anything important. To get ten people together to have much more than ten times the output is only possible if you are producing a uniform product or service.
 
Yet, somehow, people conclude that regrouping people and getting them to work on a common goal will, by itself, improve productivity. Fred Brooks put a dent in this theory with his Brooks's law:
 
Adding manpower to a late software project makes it later.
 
While it is true that almost all my work is collaborative, I consistently found it counterproductive to work in large groups. Of course, as an introvert, this goes against all my instincts. But I also fail to see the productivity gains in practice whereas I do notice the more frequent meetings.
 
Abramo et al. (2012) looked seriously at this issue and found that you get no more than linear scaling. That is, a group of two researchers will produce twice as much as one researcher. Period. There is no economy of scale when coordinating human brains. Their finding contradicts decades of science policy where we have tried to organize people into larger and better coordinated groups (a concept eerily reminiscent of communism).
 
We can make an analogy with computers. Your quad-core processor will not run Microsoft Word four times as fast. It probably won't even run it twice as fast. In fact, poorly written software may even run slower when there is more than one core. Coordination is expensive.
 
The solution is to lessen the need for coordination: have different people work on different things, use smaller teams, and employ fewer managers.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 11:14AM -0700

Hello,
 
Read this:
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
Where does innovation come from?
 
I just finished Rational Optimist by Matt Ridley. Because I am an overly pessimistic individual, I expected to hate the book.
 
I loved the book.
 
I should point out where I read the book, because context is important in this case. I was in Berlin. My hotel room was about 50 meters away from Checkpoint Charlie, the central point of the Cold War. I was within 2 minutes of the remains of a train station where thousands of Jews were sent to their death. I was near the remains of the Berlin Wall, built to prevent people from escaping the communists. Berlin could easily be the mecca of pessimists.
 
Ridley is a very specific optimist: he believes that innovation is an almost unstoppable force. Food and energy shortages? We will invent new ways to produce more food and energy than we need. Effectively, human beings have become better at almost everything: producing goods and food, taking care of each other, learning, sharing and so on.
 
But he is also a pessimist: he believes that if we stop innovation, we suffer. We must constantly out-innovate our problems. We will soon run out of food, energy and breathable air if we keep doing the same thing at a greater scale. Only by inventing drastically better technologies and organizations can we hope to prosper. Innovation is required for our survival. Civilizations eventually collapse, when they become unable to innovate around their problems.
 
But where does innovation come from? Ridley believes it comes from trade, taken in the broadest sense of the term. Traders are people who carry ideas from people to people. They are like bees in that they allow ideas to have sex… Traders allow people to specialize and to focus on perfecting ideas. Without trade, we would all need to be self-sufficient. Condemned to self-sufficiency, we would not have time to improve our methods nor share our ideas. Interdependency makes human beings better.
 
How do you get more innovation? Do you have your governments entice researchers like myself to pursue "strategic" research? Absolutely not. Governments cannot create innovation. Instead, they should limit the wealth they extract from the economy by remaining small. Other institutions like banks should also be kept in check. In effect, central planning, wherever it comes from, should be avoided as it stops innovation in its tracks.
 
Hence, civilization comes as a result of trade, because it can siphon the newly generated wealth. It wasn't the Jewish traders in the 1930s who drained the wealth out of Germany. With their various enterprises, they were the source of much of the wealth that the state was extracting. They were not the parasites.
 
Ridley does not have much faith in science as a source of innovation. Most innovation comes through tinkering and trading ideas. Science and law come after the fact to codify what was learned. In effect, science may support innovations and inventions, but it is not the causal agent. What you want is trade and the freedom it brings. I share his vision. After all, Russians had top-notch scientists, but they were still unable to innovate in most practical enterprises.
 
He sees a cycle, where innovation creates value which is then captured and killed by bureaucrats or obsolete corporations. But innovation always reappears elsewhere. He believes that the best place to be right now is on the Web. One day, governments and corporations will kill Web-based innovations, but by then, a new frontier will have opened.
 
Ridley predicts the fall of corporations and the rise of bottom-up economics where individuals freely assemble to create value. Apple, Google and Facebook will soon collapse, faster than comparable companies a century ago.
 
This book also explains why Germany is at least marginally richer than the United Kingdom even though the United Kingdom won the two last great wars and Germany lost. Winning is overrated. Wealth cannot be put into boxes and piled up. Had you confiscated all the computers from the 1970s, you would hold a collection hardly more valuable than a single iPad.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 11:00AM -0700

Hello,
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
The myth of the unavoidable specialization
 
In a recent essay, Malone et al. claimed that we were entering the age of hyperspecialization. Their core assumption: human beings are more efficient when doing specialized tasks. Thus, they claim, we are moving toward a future where software will distribute hyperspecialized tasks to expert individuals. They believe that we will progressively work on narrower and narrower problems.
 
Among intellectuals, specialization is often seen as a good omen. It is the safe thing to do: stick with a narrow topic (e.g., how polar bears raise their offspring, or the chemistry of sugar). The usual argument is that with the growth of knowledge, we have no choice but to become narrow specialists. Conversely, people with a wide range of interests are pursuing a high-risk strategy. Whenever you attempt to contribute to a new problem, you risk ridicule: maybe everyone who has worked ten years on this topic knows that you are pursuing a dead-end.
 
So, yes, humanity knows more about every single subject than ever before. Conversely, our brains are biologically identical to what they were 2000 years ago. Thus, we ought to be increasingly mentally challenged. But this logic is flawed because it equates the mind with our brains. We are expanding our minds exponentially! Indeed, our minds are increasingly externalized. First, we started telling stories, using other brains to support our own cognitive abilities. Then we invented writing. Then we invented the Web. At each step, human beings become smarter and smarter in every respect. One might object that it is not I who becomes smarter when I am connected to the Web. That somehow, saying so, is cheating. But this is pure semantics. The fact is, with access to the Web, I could run circles around Sir Isaac Newton, even if he were allowed to have an entire library at his disposal.
 
We could still conclude that as we expand knowledge, the specialists have a greater and greater edge: it becomes riskier and riskier to be anything but a specialist. But I believe the opposite is happening. Far from moving toward hyperspecialization, we are in fact moving toward hypergeneralization. Millions of freelance workers worldwide fill out their taxes electronically, bypassing the specialists (accountants). Whereas researchers absolutely needed expert librarians to avoid wasting days in libraries, Google Scholar has made reference checking accessible to all, at no cost. I learned how to prepare pineapple like a chef in minutes using a simple YouTube query. Soon augmented reality glasses will allow you to walk in any park and know instantly the characteristics of any flower you encounter.
 
But wasn't the XXth century about specialization? Of course not! The XXth century was about people like Einstein who invented a new type of fridge and also a little something called relativity. The specialists are most often the poor people. You want to rise up in a company like Google or Facebook? Then be someone who can expand his mind as needed, not a silly Java specialist who can be replaced easily. Leaders like Henry Ford like specialization, for others, never for themselves.
 
Your future wealth is determined by how much you can expand your mind beyond the capacity of your biological brain, not by your current skills.
 
Take a chance and go work on a new problem, today.
 
Further reading: Lack of steady trajectories and failure and How information technology is really built. See also Serial Mastery.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 10:09AM -0700

Hello,
 
 
Read the following from this PhD computer scientist:
 
https://lemire.me/blog/about-me/
 
 
The future of innovation is in software
 
I keep reading about how the future will be shaped by new cheaper fuel or amazing new medications. I believe that we are misreading the trends. Yes, we will have better medications and cheaper fuel in the future. However, I believe we are clearly in the midst of an information revolution. The future will be shaped by software, defined broadly.
 
Specifically, I believe that:
 
Tele-work, tele-play, tele-learning will soon represent 80% of our lives.
 
There is much more room for innovation in software than in hardware.
 
There are few ways to build a house, but many more ways to build a virtual house.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: May 28 09:06AM -0700

Hello,
 
 
We are only visiting the shore of mathematics
 
I like Doron Zeilberger's 66th Opinion:
 
"all what human mathematics does is apply implicit exponential-time algorithms, called "heuristics" to find some trivial pebbles on the shore of the (even decidable part!) of the mathematical ocean."
 
In short, a mathematician solves trivial problems, a mathematician with a computer solves semi-trivial problems, but we are only visiting the shore of mathematics.
 
It is very insightful. One could look at the current state of higher mathematics, observe that progress is slowing and conclude that we have pretty much covered the realm of useful mathematics. In truth, we have maybe covered the realm of mathematics we could handle with a human brain. And current computers probably can't help us too much.
 
Read more here:
 
https://sites.math.rutgers.edu/~zeilberg/Opinion66.html
 
 
Thank you,
Amine Moulay Ramdane.