comp.lang.c++@googlegroups.com | Google Groups |
- First to learn C if learning C++? - 11 Updates
- sleep and sleep_for on Linux - 1 Update
- Has anyone seen this? - 3 Updates
- Framework Qt with c++ - 4 Updates
"Öö Tiib" <ootiib@hot.ee>: Oct 19 07:37AM -0700 On Sunday, 19 October 2014 06:55:04 UTC+3, J. Clarke wrote: > compiler I am aware of that is commonly available, current, and accepts > C code is the Visual Studio C++ compiler which accepts that syntax just > fine unless you expelicitly set it to accept C rather than C++ code. The Microsoft C/C++ Compiler Driver has been called "cl.exe". All the Visual C compiler front-ends have been something like 'c1.dll'. By default that "driver" feeds all files that end in .c to "C compiler". Visual C++ compiler front-end has usually been 'c1xx.dll'. There have even been some side-by side versions of those dlls for different processor architectures. I can explicitly tell to the driver to use C++ compiler but that does not help with most C code. C code is far more likely C89 compliant than C++ compliant. > > up. > Huh? If that was an issue then all that would be needed was an > appropriate command line option. What golden software architecture advice is that? Just add more command line options and everything else happens by magic? |
David Brown <david.brown@hesbynett.no>: Oct 19 06:11PM +0200 On 12/10/14 18:22, JiiPee wrote: > see what happens in the loop. So it's more readable. Plus the assembly > code is the same in both, so they are equally fast > Or would you say the C code here is better? In what way? Actually, the assembly code is likely to be different (I'm guessing that inside the loop you are accessing a C array or C++ vector) - the first loop will be faster (whether compiled as C or C++). Whether or not the difference is significant is another issue, but you can't claim that they will be the same. In fact, your lack of knowledge here gives ammo to people who say that a disadvantage of C++ is that it has "hidden costs". When you use a std::vector, there is a layer of indirection - the "vec" object does not contain the data itself. An array in C, or a std::array in C++11, holds the data rather than a pointer to a heap-allocated memory block. This means one fewer level of indirection, better cache locality, and therefore slightly faster code. In C code, the "size" might be known at compile time, leading to more speed-ups. Does this matter? Probably not, in most code. It is certainly secondary to the issues of readability and maintainability. But you can't claim they are equivalent if they are not. (This is exactly why std::array was added to C++11.) |
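A minimal sketch of the layout difference being described (assuming, as guessed above, that "vec" is a std::vector; the names and sizes here are illustrative only):

#include <array>
#include <vector>

constexpr int size = 8;

std::vector<int> vec(size);   // elements live in a separate heap allocation;
                              // the vec object itself typically holds three pointers
std::array<int, size> arr{};  // elements are stored inside arr itself
int carr[size];               // likewise for a plain C array

int sum_vec() {
    int s = 0;
    for (int i = 0; i < size; i++)
        s += vec[i];          // every access goes through the heap pointer stored in vec
    return s;
}

int sum_carr() {
    int s = 0;
    for (int i = 0; i < size; i++)
        s += carr[i];         // the address of carr is known statically; one hop fewer
    return s;
}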
Paavo Helde <myfirstname@osa.pri.ee>: Oct 19 12:05PM -0500 David Brown <david.brown@hesbynett.no> wrote in > Actually, the assembly code is likely to be different (I'm guessing > that inside the loop you are accessing a C array or C++ vector) - the > first loop will be faster (whether compiled as C or C++). Citation needed. Or at least a clarification. What makes the second loop inherently slower than the first? And why do you assume the second loop accesses a std::vector and not std::array for example? > std::array in C++11, holds the data rather than a pointer to a > heap-allocated memory block. This means one fewer level of indirection, > better cache locality, and therefore slightly faster code. This does not matter inside a loop. The extra indirection needs to be done only once, before the loop (at least when the loop body is inline and the compiler can see that the location of the vector buffer does not change; and if the loop body is not inline the extra indirection does not matter). Note that for accessing stack data there is also an indirection involved, via the frame or stack pointer (e.g. EBP). The difference is that the compiler has much better control over when the stack pointer changes. > In C > code, the "size" might be known at compile time, leading to more > speed-ups. Or it might not, with the variable length arrays in C. > secondary to the issues of readability and maintainability. But you > can't claim they are equivalent if they are not. > (This is exactly why std::array was added to C++11.) Nope, the main optimization with std::array is to avoid a dynamic memory allocation. This is one thing which can really matter in some circumstances. Cache proximity may also help a bit. Cheers Paavo |
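A sketch of the hoisting point (an illustrative example, assuming the loop body is visible to the compiler): the indirection through the vector object can be done once before the loop, leaving plain pointer arithmetic inside it.

#include <cstddef>
#include <vector>

void fill(std::vector<int> &vec)
{
    // What the optimizer is expected to do anyway: read the data pointer
    // and the size once, before the loop, instead of on every iteration.
    int *p = vec.data();
    const std::size_t n = vec.size();
    for (std::size_t i = 0; i < n; ++i)
        p[i] = static_cast<int>(i);
}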
"Öö Tiib" <ootiib@hot.ee>: Oct 19 10:21AM -0700 On Sunday, 19 October 2014 19:11:47 UTC+3, David Brown wrote: > Actually, the assembly code is likely to be different (I'm guessing that > inside the loop you are accessing a C array or C++ vector) - the first > loop will be faster (whether compiled as C or C++). You suggest that the 'size' in first 'for' is compile-time constant? Does not look like that, C programmers usually tend to #define those as macros and to capitalize such macros (as 'SIZE'). So whatever it iterates over is likely dynamically allocated. > difference is significant is another issue, but you can't claim that > they will be the same. In fact, your lack of knowledge here gives ammo > to people who say that a disadvantage of C++ is that it has "hidden costs". You are correct that we can't tell what compiler produces (if it produces anything) from the above two 'for' cycles and if it is equal or not. The hidden costs are not that hidden. C programmer would have to write most of the code in standard library explicitly anyway for There are next to no overhead in C++ library, that is why it has so lot of documented cases of "undefined behavior" in it. Note that the first 'for' cycle modifies 'int' and compares it with other 'int' while second 'for' cycle likely advances thru container by internally incrementing some pointer to elements. First is less efficient if to take it literally but hopefully compiler optimizes that to do same what second 'for' does. > secondary to the issues of readability and maintainability. But you > can't claim they are equivalent if they are not. > (This is exactly why std::array was added to C++11.) Here you again go on with that ungrounded hypothesis of yours about unrevealed meta-information that the 'vec' there is 'std::vector' that was somehow picked without reason. |
"J. Clarke" <jclarkeusenet@cox.net>: Oct 19 02:13PM -0400 In article <df0700ba-36ee-4c00-9f37-e680de6a928d@googlegroups.com>, ootiib@hot.ee says... > Visual C++ compiler front-end has usually been 'c1xx.dll'. There have > even been some side-by side versions of those dlls for different > processor architectures. In other words there's no specific "Microsoft C compiler". > I can explicitly tell to the driver to use C++ compiler but that > does not help with most C code. C code is far more likely C89 compliant > than C++ compliant. Your point being? Your complaint seems to be that Microsoft has not chosen to embrace a newer C standard, probably because they don't see any purpose to be served in it other than making certain chronic complainers find something else to complain about. > > appropriate command line option. > What golden software architecture advice is that? Just add more > command line options and everything else happens by magic? Hey, if you don't like the idea that a compiler be designed to support different standards by setting a command line switch, go whine at Richard Stallman. |
"Öö Tiib" <ootiib@hot.ee>: Oct 19 12:08PM -0700 On Sunday, 19 October 2014 21:15:02 UTC+3, J. Clarke wrote: > > does not help with most C code. C code is far more likely C89 compliant > > than C++ compliant. > Your point being? My point was that it is better idea to write C that is compatible with C89 because Microsoft C compiler takes it and also there is decent amount of existing code that is compatible with it and so declaring loop variable inside 'for' just breaks that compatibility pointlessly without buying you anything. > chosen to embrace a newer C standard, probably because they don't see > any purpose to be served in it other than making certain chronic > complainers find something else to complain about. I did say nothing like that nor did I complain. 1989 was fairly recently and Microsoft cant be expected to invest something into being attractive to developers. It just won't, ever. So I suggested that it is reasonable to stick to existing practices. Year or two later some Windows 9 comes out and gcc won't work on it and everybody must keep source code in cloud of Microsoft. There will be nowhere to stick your -std=c99 or what it was. I don't complain now and won't complain then. > Hey, if you don't like the idea that a compiler be designed to support > different standards by setting a command line switch, go whine at > Richard Stallman. I did not say that it is bad idea to have command line switches on command line programs. It likely takes 5 minutes to add -std=c99, -std=c11, -std=c++11, -std=c++14, -std=Fortran2003, -std=c++17 and so on to whatever command line program. I was just indicating that it does not make that program to compile any programming languages. Is it surprising to you? Do you think Richard Stallman disagrees with me? |
Ian Collins <ian-news@hotmail.com>: Oct 20 08:09AM +1300 Paavo Helde wrote: > Citation needed. Or at least a clarification. What makes the second loop > inherently slower than the first? And why do you assume the second loop > accesses a std::vector and not std::array for example? I would go further and say the code inside the loop would be pretty much identical with std::vector or std::array. -- Ian Collins |
David Brown <david.brown@hesbynett.no>: Oct 19 09:20PM +0200 On 19/10/14 02:10, Öö Tiib wrote: > and 'X++' feels like for special, more complex cases. > We pick 'X++' because it was that way in for loops of the books written > by wise guys that we did read as novices ... there are no other reasons. ++x and x++ do the same thing to x, but return different values. When you want to use the value of the expression, you pick the correct one whether you are writing C or C++. Otherwise, when you are writing C, you write "x++" since that's the way it has always been done. When you write C++, you write "++x" because that's more efficient if "x" is a class with a bit of complexity, so you are encouraged to think that "++x" is the /right/ way to write it. So mostly it makes no difference - consistency (or habit) is the only decider. |
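A short illustration of that last point (a generic sketch, not code from the thread): for a class type, the postfix form has to return the old value, which normally means an extra copy.

struct Counter {
    int value = 0;
    Counter& operator++()           // prefix: modify and return *this
    { ++value; return *this; }
    Counter operator++(int)         // postfix: copy, modify, return the copy
    { Counter old = *this; ++value; return old; }
};

void count_to(int n)
{
    Counter c;
    while (c.value < n)
        ++c;    // prefix avoids the copy; for a plain int it makes no difference
}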
David Brown <david.brown@hesbynett.no>: Oct 19 10:03PM +0200 On 19/10/14 19:05, Paavo Helde wrote: > Citation needed. Or at least a clarification. What makes the second loop > inherently slower than the first? And why do you assume the second loop > accesses a std::vector and not std::array for example? The variable is called "vec", which means it is a fair guess that inside the loop we have a vector, not an array. And I explained below why I think the second loop (with a vector) is inherently slower. I don't think the use of the range for should make much of a difference - it will depend on whether "size" is a known compile-time constant or must be loaded. As for a citation, I've put a short test program along with the generated code at the end of the post. This was generated with gcc 4.9 on 64-bit Linux, and should be reasonably representative. Of course the code is just an example, and makes assumptions, extrapolating from the very incomplete original lines. Of course, as I said earlier, it is unlikely that the differences will matter - maximal speed and code density are seldom the most important factors in choosing how to write your code. And it is only in code with very tight loops that an instruction or two extra could be noticed at all - most code will have more complex loops, making the details here irrelevant. My point was merely that the claim of "the assembly code is the same in both [cases]" is patently false, and the C-style code is faster. > compiler can see that the location of the vector buffer does not change; > and if the loop body is not inline the extra indirection does not > matter). It matters little - but it is not the same. And if the extra indirection means a cache miss, that can be hundreds of wasted cycles. >> code, the "size" might be known at compile time, leading to more >> speed-ups. > Or it might not, with the variable length arrays in C. Of course. I can only work from the partial example code. > Nope, the main optimization with std::array is to avoid a dynamic memory > allocation. This is one thing which can really matter in some > circumstances. Cache proximity may also help a bit. Actually, std::array gives several improvements over std::vector (assuming you don't need std::vector's flexibilities). Avoiding dynamic memory allocation skips a large overhead when it is created or destroyed, but also gives slightly faster code when it is accessed.
Test code: #include <vector> #include <array> const int size = 8; std::vector<int> vec; std::array<int, size> arr; int carr[size]; void testVec(void) { for (auto i : vec) { vec[i] = i; } } void testVec2(void) { for (int i = 0; i < size; i++) { vec[i] = i; } } void testArr(void) { for (auto i : arr) { arr[i] = i; } } void testArr2(void) { for (int i = 0; i < size; i++) { arr[i] = i; } } void testCarr(void) { for (int i = 0; i < size; i++) { carr[i] = i; } } Command line: /usr/local/gcc4.9/bin/gcc4.9-gcc -c --std=c++11 a.cpp -Wall -Wextra -fno-toplevel-reorder -Wa,-ahdsl -O2 > a.lst Generated code: GAS LISTING /tmp/cchLoTpE.s page 1 1 .file "a.cpp" 2 .section .rodata 5 _ZStL19piecewise_construct: 6 0000 00 .zero 1 7 .globl vec 8 .bss 9 .align 16 12 vec: 13 0000 00000000 .zero 24 13 00000000 13 00000000 13 00000000 13 00000000 14 .globl arr 15 0018 00000000 .align 32 15 00000000 18 arr: 19 0020 00000000 .zero 32 19 00000000 19 00000000 19 00000000 19 00000000 20 .globl carr 21 .align 32 24 carr: 25 0040 00000000 .zero 32 25 00000000 25 00000000 25 00000000 25 00000000 26 .section .text.unlikely,"ax",@progbits 27 .LCOLDB0: 28 .text 29 .LHOTB0: 30 .p2align 4,,15 31 .globl _Z7testVecv 33 _Z7testVecv: 34 .LFB1253: 35 .cfi_startproc 36 0000 488B3500 movq vec(%rip), %rsi 36 000000 37 0007 488B3D00 movq vec+8(%rip), %rdi 37 000000 38 000e 4839F7 cmpq %rsi, %rdi 39 0011 4889F0 movq %rsi, %rax 40 0014 7419 je .L1 41 0016 662E0F1F .p2align 4,,10 41 84000000 41 0000 42 .p2align 3 43 .L5: 44 0020 486308 movslq (%rax), %rcx 45 0023 4883C004 addq $4, %rax 46 0027 4839C7 cmpq %rax, %rdi 47 002a 890C8E movl %ecx, (%rsi,%rcx,4) 48 002d 75F1 jne .L5 49 .L1: GAS LISTING /tmp/cchLoTpE.s page 2 50 002f F3C3 rep ret 51 .cfi_endproc 52 .LFE1253: 54 .section .text.unlikely 55 .LCOLDE0: 56 .text 57 .LHOTE0: 58 .section .text.unlikely 59 .LCOLDB1: 60 .text 61 .LHOTB1: 62 0031 66666666 .p2align 4,,15 62 66662E0F 62 1F840000 62 000000 63 .globl _Z8testVec2v 65 _Z8testVec2v: 66 .LFB1254: 67 .cfi_startproc 68 0040 488B1500 movq vec(%rip), %rdx 68 000000 69 0047 31C0 xorl %eax, %eax 70 0049 0F1F8000 .p2align 4,,10 70 000000 71 .p2align 3 72 .L9: 73 0050 890482 movl %eax, (%rdx,%rax,4) 74 0053 4883C001 addq $1, %rax 75 0057 4883F808 cmpq $8, %rax 76 005b 75F3 jne .L9 77 005d F3C3 rep ret 78 .cfi_endproc 79 .LFE1254: 81 .section .text.unlikely 82 .LCOLDE1: 83 .text 84 .LHOTE1: 85 .section .text.unlikely 86 .LCOLDB2: 87 .text 88 .LHOTB2: 89 005f 90 .p2align 4,,15 90 .globl _Z7testArrv 92 _Z7testArrv: 93 .LFB1255: 94 .cfi_startproc 95 0060 B8000000 movl $arr, %eax 95 00 96 .p2align 4,,10 97 0065 0F1F00 .p2align 3 98 .L12: 99 0068 486308 movslq (%rax), %rcx 100 006b 4883C004 addq $4, %rax 101 006f 483D0000 cmpq $arr+32, %rax 101 0000 102 0075 890C8D00 movl %ecx, arr(,%rcx,4) 102 000000 GAS LISTING /tmp/cchLoTpE.s page 3 103 007c 75EA jne .L12 104 007e F3C3 rep ret 105 .cfi_endproc 106 .LFE1255: 108 .section .text.unlikely 109 .LCOLDE2: 110 .text 111 .LHOTE2: 112 .section .text.unlikely 113 .LCOLDB3: 114 .text 115 .LHOTB3: 116 .p2align 4,,15 117 .globl _Z8testArr2v 119 _Z8testArr2v: 120 .LFB1256: 121 .cfi_startproc 122 0080 31C0 xorl %eax, %eax 123 .p2align 4,,10 124 0082 660F1F44 .p2align 3 124 0000 125 .L15: 126 0088 89048500 movl %eax, arr(,%rax,4) 126 000000 127 008f 4883C001 addq $1, %rax 128 0093 4883F808 cmpq $8, %rax 129 0097 75EF jne .L15 130 0099 F3C3 rep ret 131 .cfi_endproc 132 .LFE1256: 134 .section .text.unlikely 135 .LCOLDE3: 136 .text 137 .LHOTE3: 138 .section .text.unlikely 139 .LCOLDB4: 140 .text 141 .LHOTB4: 
142 009b 0F1F4400 .p2align 4,,15 142 00 143 .globl _Z8testCarrv 145 _Z8testCarrv: 146 .LFB1257: 147 .cfi_startproc 148 00a0 31C0 xorl %eax, %eax 149 .p2align 4,,10 150 00a2 660F1F44 .p2align 3 150 0000 151 .L18: 152 00a8 89048500 movl %eax, carr(,%rax,4) 152 000000 153 00af 4883C001 addq $1, %rax 154 00b3 4883F808 cmpq $8, %rax 155 00b7 75EF jne .L18 156 00b9 F3C3 rep ret 157 .cfi_endproc 158 .LFE1257: GAS LISTING /tmp/cchLoTpE.s page 4 160 .section .text.unlikely 161 .LCOLDE4: 162 .text 163 .LHOTE4: 164 .section .rodata 165 0001 000000 .align 4 168 _ZL4size: 169 0004 08000000 .long 8 170 .section .text.unlikely._ZNSt6vectorIiSaIiEED2Ev,"axG",@progbits,_ZNSt6vectorIiSaIiEED5Ev,comdat 171 .align 2 172 .LCOLDB5: 173 .section .text._ZNSt6vectorIiSaIiEED2Ev,"axG",@progbits,_ZNSt6vectorIiSaIiEED5Ev,comdat 174 .LHOTB5: 175 .align 2 176 .p2align 4,,15 177 .weak _ZNSt6vectorIiSaIiEED2Ev 179 _ZNSt6vectorIiSaIiEED2Ev: 180 .LFB1456: 181 .cfi_startproc 182 0000 488B3F movq (%rdi), %rdi 183 0003 4885FF testq %rdi, %rdi 184 0006 7408 je .L20 185 0008 E9000000 jmp _ZdlPv 185 00 186 000d 0F1F00 .p2align 4,,10 187 .p2align 3 188 .L20: 189 0010 F3C3 rep ret 190 .cfi_endproc 191 .LFE1456: 193 .section .text.unlikely._ZNSt6vectorIiSaIiEED2Ev,"axG",@progbits,_ZNSt6vectorIiSaIiEED5Ev,comdat 194 .LCOLDE5: 195 .section .text._ZNSt6vectorIiSaIiEED2Ev,"axG",@progbits,_ZNSt6vectorIiSaIiEED5Ev,comdat 196 .LHOTE5: 197 .weak _ZNSt6vectorIiSaIiEED1Ev 198 .set _ZNSt6vectorIiSaIiEED1Ev,_ZNSt6vectorIiSaIiEED2Ev 199 .section .text.unlikely 200 .LCOLDB6: 201 .section .text.startup,"ax",@progbits 202 .LHOTB6: 203 .p2align 4,,15 205 _GLOBAL__sub_I_vec: 206 .LFB1462: 207 .cfi_startproc 208 0000 BA000000 movl $__dso_handle, %edx 208 00 209 0005 BE000000 movl $vec, %esi 209 00 210 000a BF000000 movl $_ZNSt6vectorIiSaIiEED1Ev, %edi 210 00 211 000f 48C70500 movq $0, vec(%rip) 211 00000000 211 000000 212 001a 48C70500 movq $0, vec+8(%rip) 212 00000000 212 000000 213 0025 48C70500 movq $0, vec+16(%rip) GAS LISTING /tmp/cchLoTpE.s page 5 213 00000000 213 000000 214 0030 E9000000 jmp __cxa_atexit 214 00 215 .cfi_endproc 216 .LFE1462: 218 .section .text.unlikely 219 .LCOLDE6: 220 .section .text.startup 221 .LHOTE6: 222 .section .init_array,"aw" 223 .align 8 224 0000 00000000 .quad _GLOBAL__sub_I_vec 224 00000000 225 .hidden __dso_handle 226 .ident "GCC: (GNU) 4.9.1" 227 .section .note.GNU-stack,"",@progbits GAS LISTING /tmp/cchLoTpE.s page 6 DEFINED SYMBOLS *ABS*:0000000000000000 a.cpp /tmp/cchLoTpE.s:5 .rodata:0000000000000000 _ZStL19piecewise_construct /tmp/cchLoTpE.s:12 .bss:0000000000000000 vec /tmp/cchLoTpE.s:18 .bss:0000000000000020 arr /tmp/cchLoTpE.s:24 .bss:0000000000000040 carr /tmp/cchLoTpE.s:33 .text:0000000000000000 _Z7testVecv /tmp/cchLoTpE.s:65 .text:0000000000000040 _Z8testVec2v /tmp/cchLoTpE.s:92 .text:0000000000000060 _Z7testArrv /tmp/cchLoTpE.s:119 .text:0000000000000080 _Z8testArr2v /tmp/cchLoTpE.s:145 .text:00000000000000a0 _Z8testCarrv /tmp/cchLoTpE.s:168 .rodata:0000000000000004 _ZL4size /tmp/cchLoTpE.s:179 .text._ZNSt6vectorIiSaIiEED2Ev:0000000000000000 _ZNSt6vectorIiSaIiEED2Ev /tmp/cchLoTpE.s:179 .text._ZNSt6vectorIiSaIiEED2Ev:0000000000000000 _ZNSt6vectorIiSaIiEED1Ev /tmp/cchLoTpE.s:205 .text.startup:0000000000000000 _GLOBAL__sub_I_vec .group:0000000000000000 _ZNSt6vectorIiSaIiEED5Ev UNDEFINED SYMBOLS _ZdlPv __dso_handle __cxa_atexit |
David Brown <david.brown@hesbynett.no>: Oct 19 10:22PM +0200 On 19/10/14 19:21, Öö Tiib wrote: > Does not look like that, C programmers usually tend to #define those > as macros and to capitalize such macros (as 'SIZE'). So whatever it > iterates over is likely dynamically allocated. We can only guess at the original intention from the partial examples. A non-constant "size" will only cause a couple of extra instructions at the start of the loop, but could of course mean a cache miss. > to write most of what the standard library provides explicitly anyway. > There is next to no overhead in the C++ library; that is why it has so > many documented cases of "undefined behavior" in it. I agree - that is why I talked about "people who /say/ that a disadvantage of C++ is that it has hidden costs". I don't think the costs are hidden at all - though they are obviously not apparent to all C++ programmers. And I fully agree that if you need the functionality a std::vector provides, you'd have a hard job duplicating the features more efficiently in C. C++ aims for "zero overhead" - you should not pay for features you don't use. It does not quite reach that, but comes very close (typically within a percent or two, depending of course on the program). Certainly the costs are normally not worth bothering about. However, if you ask the compiler for a std::vector when a C array or std::array would do the job, then you are /asking/ for extra features - and you will pay for them, even if you don't use them. It is not a high price by any means - but it is a price. > Here you again go on with that ungrounded hypothesis of yours about > unrevealed meta-information that the 'vec' there is 'std::vector' > that was somehow picked without reason. What do /you/ think "vec" is? Yes, I have assumed it to be a std::vector, since that is a natural fit for the name "vec" and the style. But it could be something else, such as a std::array. I would expect then that the two loops would be the same, since the compiler has the same information in each case (assuming a constant "size" as needed by std::array). Unfortunately, my tests (see my other post) give slightly different results. On an x86, it is very difficult to figure out which is faster, but it is clear which is smaller. I hope that is merely an optimisation opportunity that gcc has missed, and will improve in the future. |
"Öö Tiib" <ootiib@hot.ee>: Oct 19 03:52PM -0700 On Sunday, 19 October 2014 23:22:51 UTC+3, David Brown wrote: > What do /you/ think "vec" is? Yes, I have assumed it to be a > std::vector, since that is a natural fit for the name "vec" and the > style. I also think that it is 'std::vector<something>' in second loop and 'something*' in first loop. I even imagine definitions. something* vec = malloc(sizeof(*vec) * size); // first std::vector<something> vec(size); // second However ... that is unrevealed meta-information produced with our imagination. > out which is faster, but it is clear which is smaller. I hope that is > merely an optimisation opportunity that gcc has missed, and will improve > in the future. I did agree with you before your tests, it is rarely case that different code produces exactly same binary. Your tests did lack 'malloc'ed version of C array, but even that is irrelevant. Modern time we have fine compilers and processors so the outcome is usually sufficiently fast (between vector or array) with insignificant differences of few percent. When our software is slow then it is usually because of non-scalable algorithms. Better algorithms may perform orders of magnitude faster but often involve indexing, heapifying or sorting of our containers. That is lot easier to do with C++ standard lib than with C. With C++ there are next to no differences if we feed C array, 'std::array', 'std::vector' or 'std::deque' to standard algorithms. C++ "hides the costs". :D |
Emanuel Berg <embe8573@student.uu.se>: Oct 19 10:01PM +0200 > ad hoc stuff like your polling. > (Someone else suggested ways to cut down on the > influence.) OK, first, thank you, and thank you to J. Clarke as well. You are absolutely right, the problem is that there is RT and BE and they influence each other. The challenge is to isolate them except for what cannot be isolated, which is the shared DRAM. But the RT part doesn't have to be isolated from the BE part, as long as the influence can be bounded. If it can be bounded, it can be computed, and then formalized, i.e., the BE influence will be incorporated and accounted for in the RT part. At this moment, the RT part knows how many accesses the BE part can be allowed to make, and this is communicated to the Linux kernel with a syscall. The BE processes are told to run exclusively on the BE core (in userspace) with the tool taskset ('taskset -c CORE') and with GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=0" in /etc/default/grub - it works, they only run there, and this can be confirmed with 'top' or 'ps'. Do you happen to know how the scheduler can be modified so that, whenever it is told to (whether by an interrupt or by polling), it stops executing those tasks, i.e. freezes the BE core? Or is that better set up on a per-process basis, as in a field in the PCB or something? (I would think that's where ps and top get their information.) -- underground experts united |
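Regarding freezing the BE tasks: one userspace approach is the cgroup freezer (a sketch under assumptions - it presumes a cgroup-v1 freezer hierarchy mounted at /sys/fs/cgroup/freezer and a group named "be" that the BE processes have been added to; it does not answer the question about modifying the scheduler itself).

#include <fstream>
#include <sys/types.h>

// Add a best-effort process to the (hypothetical) "be" freezer group.
void add_be_task(pid_t pid)
{
    std::ofstream tasks("/sys/fs/cgroup/freezer/be/tasks");
    tasks << pid << "\n";
}

// Stop or resume every task in the group; frozen tasks get no CPU time
// until thawed, which effectively idles the BE core.
void set_be_frozen(bool freeze)
{
    std::ofstream state("/sys/fs/cgroup/freezer/be/freezer.state");
    state << (freeze ? "FROZEN" : "THAWED") << "\n";
}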
Robert Hutchings <rm.hutchings@gmail.com>: Oct 19 10:25AM -0500 https://www.phy.duke.edu/~rgb/Beowulf/c++_interview/c++_interview.html I guess it's a fake interview with Stroustrup, but I think it illustrates the point.... |
"Öö Tiib" <ootiib@hot.ee>: Oct 19 09:03AM -0700 On Sunday, 19 October 2014 18:25:46 UTC+3, Robert Hutchings wrote: > https://www.phy.duke.edu/~rgb/Beowulf/c++_interview/c++_interview.html > I guess it's a fake interview with Stroustrup, but I think it > illustrates the point.... Everybody have seen those jokes. Yes, C++ programmers are hard to find but not because C++ is that hard. C++ is OK. Reason is that those losers who can't code themselves out of paper bag in any language all around just blog, joke and whine how bad C++ is. Why? Take Java, take Python, take PHP, take C or take C# and go and write something useful. Nah. Easier is to whine about C++. Is there some major force on Earth that pushes you towards C++? *NONE*. Microsoft pushes you to C#, Apple pushes you to Objective-C, Google pushes you to Go, Oracle pushes you to Java. So why you whine about C++? No one on Earth does want you to use it. Expect the actual software users, who like how it runs smoothly. For us it is OK, that is improving our salary of C++ programmers and life is good. ;) |
Robert Hutchings <rm.hutchings@gmail.com>: Oct 19 11:25AM -0500 On 10/19/2014 11:03 AM, Öö Tiib wrote: > Expect the actual software users, who like how it runs smoothly. > For us it is OK, that is improving our salary of C++ programmers > and life is good. ;) I tend to agree with you, and Bjarne never REALLY said any of that. C++ is the most powerful language ever created IMHO, and, yes, it can be difficult, but that is not necessarily a bad thing :) |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 19 03:45PM +0100 On 19/10/2014 14:22, Andrea wrote: > the database, file system etc... Now I'm testing class for tcp socket > and it's not complicated. For now, my vote is positive, but I wanted to > know an opinion from people with more experience than me :) Qt is great for GUI but I wouldn't use Qt for anything else; for sockets I would use boost.asio. I wouldn't touch wxWidgets with a barge pole as it looks too much like that shite Microsoft effort called "MFC". /Flibble |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 19 03:08PM On Sun, 2014-10-19, Andrea wrote: > Thank for the reply. > I am follow this group for a few months, and I never saw discussion > about Qt. I thought that is not used or that it's a bad product... I'm not a user because I don't do GUIs, but my impression is: - Qt /is/ discussed here now and then - but not very in-depth - and usually it's "your GUI toolkit sucks more than mine!" type discussions, or licensing wars - there are connections to the infected Linux desktop situation, too ... GNOME and so on > and I > wanted to know an opinion from /skilled/ people in this group before > lose my time with this framework :) Skilled with such toolkits, you mean. Yes, it's a valid question. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
"Öö Tiib" <ootiib@hot.ee>: Oct 19 08:13AM -0700 On Sunday, 19 October 2014 16:31:32 UTC+3, Andrea wrote: > about Qt. I thought that is not used or that it's a bad product... and I > wanted to know an opinion from /skilled/ people in this group before > lose my time with this framework :) Qt is quite decent product. It is fairly widely portable and simplifies lot of aspects of application-programming. You won't lose anything if you try it out, more likely you will get some good ideas. You can likely find some Qt-related advice here too but it is better to discuss it in Qt-specific forums. The reasons are that ... * Qt code it is not exactly standard C++ (that we discuss here). Qt has introduced some special keywords. The code is preprocessed before C++ compiler with utilities of Qt. The utilities do not handle C++ preprocessor macros or templates too well and that may give surprising results sometimes. It is better to follow certain Qt-specific idioms of coding with it. * Qt has introduced script language qml for user interface definition. That is processed run-time. It is not too well thought thru, HTML with its .css, .js and .htm files feels better GUI definition system. Anyway it is important part of Qt that has nothing to do with C++. |
Norbert_Paul <norbertpauls_spambin@yahoo.com>: Oct 19 05:47PM +0200 Andrea wrote: > I saw this framework and it seems very interesting (I have done some > tests with the Qt Creator editor and are positively surprised). > Does anyone use it? Opinions? I have to use it in my company. My predecessor in my job had /everything/ entangled with Qt -- even low-level networking stuff, where, for example, he used the quint8 data type as bytes. I am somewhat ambivalent about Qt: it has some nice features, but I'd rather not use it. Some criticism in detail: (1) Qt "extends" C++ syntax by providing features like signals, slots, properties, etc. This leads to a huge amount of automatically generated intermediate C++ files (moc_<classname>.cpp) that implement these language extensions and are costly to compile. I often spend quite a lot of time waiting for the compiler to finish after having made minor changes. (Actually, I suspect that this having to wait so much is also a design issue of our software.) (2) Qt defines elements that compete with the STL, like QString, QMap, QList, etc. Being a great fan of the STL I see no point in Qt re-inventing the wheel. There also exist QThread, QFile, QDirectory, QApplication, qMax<T>, ... (3) When you use the STL -- or templates in general -- (at least with the Qt Creator that came along with Qt 4.8) you'll find out that the IDE is not aware of templates. So when you have, say,

std::vector<Foo*> foos;
...
for ( std::vector<Foo*>::iterator i = foos.begin(), n = foos.end() ; i != n ; ++i ) {
    (*i)// Typing continues here...
}

then the IDE does not recognise the type of *i for auto-completion (but it compiles well). As I find auto-completion very helpful, in particular because it helps to avoid typing errors, I often end up in ugly idioms like

for ( ... /*same as above*/ ) {
    Foo * pFoo = *i;
    pFoo// Typing continues here...
}

If I could decide on my own I'd only use Qt for the GUI and keep the application logic itself completely Qt-free. Such a Qt-free model also gives you the freedom to port the software to other windowing toolkits. Norbert |