- Highest paying! - 7 Updates
- "Linus Torvalds Was (Sorta) Wrong About C++" - 9 Updates
- Incomplete example from Accelerated c++ - 3 Updates
- Cgi and c++ - 4 Updates
- "Linus Torvalds Was (Sorta) Wrong About C++" - 1 Update
- "inline-assembly/C++ template" issue - 1 Update
woodbrian77@gmail.com: Mar 15 09:41PM -0700
On Friday, March 13, 2015 at 6:15:15 PM UTC-5, Melzzzzz wrote:
> http://tech.co/highest-paying-programming-languages-2015-03

I had no idea what AWS was until I used https://duckduckgo.com to find it. Turns out it's an Amazon Web Services thing. I'm not sure why Python is ranked so high, but it's encouraging news for me from an advertising perspective. I think programmers make an above-average amount, and C++ programmers are some of the best-paid programmers.

Brian
Ebenezer Enterprises - So far G-d has helped us.
http://webEbenezer.net |
scott@slp53.sl.home (Scott Lurndal): Mar 16 02:23PM >http://tech.co/highest-paying-programming-languages-2015-03 I suppose there are jobs where one simply programs in one language. I don't think I've ever seen one; for example, at my CPOE, programmers are expected to be proficient in assembler (AArch64, MIPS), C, C++, python, perl and shell scripting. This has been true wherever I worked starting in 1976 (when the languages were PAL-D, BASIC, FOCAL, FORTRAN, COBOL, SPL/3000 and later PASCAL, C, MACRO-32, BLISS-32, BPL, SPRITE and SPRASM followed in the late 80's by C++). Generally we hire software engineers, not "C++ Programmers", "C programmers" or "Python programmers". |
Christopher Pisz <nospam@notanaddress.com>: Mar 16 10:13AM -0500 On 3/16/2015 9:23 AM, Scott Lurndal wrote: > 80's by C++). > Generally we hire software engineers, not "C++ Programmers", "C programmers" > or "Python programmers". I bet you work on a linux platform... All my jobs have been single language for myself and my peers. In a Windows environment. Usually, C++ for backend and C# for front. No scripting languages needed, unless you count powershell for the build engineer. -- I have chosen to troll filter/ignore all subthreads containing the words: "Rick C. Hodgins", "Flibble", and "Islam" So, I won't be able to see or respond to any such messages --- |
Melzzzzz <mel@zzzzz.com>: Mar 16 04:28PM +0100 On 3/16/15 3:23 PM, Scott Lurndal wrote: > were PAL-D, BASIC, FOCAL, FORTRAN, COBOL, SPL/3000 and later PASCAL, C, > MACRO-32, BLISS-32, BPL, SPRITE and SPRASM followed in the late > 80's by C++). Late 80's C++? What compiler? > Generally we hire software engineers, not "C++ Programmers", "C programmers" > or "Python programmers". You pay 200$ per hour? |
woodbrian77@gmail.com: Mar 16 09:58AM -0700
On Monday, March 16, 2015 at 10:13:49 AM UTC-5, Christopher Pisz wrote:
> > Generally we hire software engineers, not "C++ Programmers", "C programmers"
> > or "Python programmers".
> I bet you work on a linux platform...

It could be FreeBSD. I think Niall Douglas likes FreeBSD more than Linux, and I agree. For over 20 years my work has been at least 80% C++. I've also used Perl, Python and C. When I have some say in the matter, I'm leery of scripting languages. If I couldn't use C++, programming wouldn't be as much fun.

Brian
Ebenezer Enterprises |
scott@slp53.sl.home (Scott Lurndal): Mar 16 06:14PM >> Generally we hire software engineers, not "C++ Programmers", "C programmers" >> or "Python programmers". >I bet you work on a linux platform... I've programmed for platforms starting from the PDP-8, PDP-11, Burroughs, IBM and Univac mainframes, very large Unix systems (including one we sold to Dr. Hawking in 1998), and very large linux systems (> 200 cores). 90% of this was writing and/or maintaining operating systems and hypervisors (yes, both hypervisors were C++ as was one of the operating systems) or simulators. I've been fortunate to never have been required to write applications on a Windows box (although I have worked with and modified the NT 4.0 sources - which became Win2K) and I've done windows device drivers in the NT 3.51 timeframe. That experience more than anything soured me on using Windows for development. I'd rather punch cards. |
scott@slp53.sl.home (Scott Lurndal): Mar 16 06:19PM
>> MACRO-32, BLISS-32, BPL, SPRITE and SPRASM followed in the late
>> 80's by C++).
>Late 80's C++? What compiler?

Starting with Cfront 2.1 (before templates, before exceptions and before the STL). Much of this was done on a Motorola 88100 using PCC as the back-end for cfront.

>> Generally we hire software engineers, not "C++ Programmers", "C programmers"
>> or "Python programmers".
>You pay 200$ per hour?

Nice non sequitur. That works out to $400,000/yr. That's high even for here in the valley. But it's not completely out of line for experienced senior engineers once options are factored in. |
BGB <cr88192@hotmail.com>: Mar 16 12:00AM -0500
On 3/15/2015 5:33 PM, JiiPee wrote:
>> cases to justify its inclusion.
> yes definitely leave goto, its cool to have it there... good old basic
> language feature. rarely needed but sometimes needed.

yeah. I figure more people might notice/care if goto is gone at this point, than they will notice or care if:

foo(x, y)
int x, y;
{
    ...
}

doesn't work... could maybe add it if really needed, but it is somewhere well behind getting multidimensional array support fully implemented, or fully implementing support for bitfields and similar.

note that one area it does differ from normal C is that the compiler requires visible prototypes of function declarations by default (unlike in normal C compilers where they are optional).

other differences (from normal/standard C): the VM formally nails down most type-sizes (except for pointers and "native long" and similar, *1); the same bytecode may be used for either 32-bit or 64-bit targets without recompilation (structure layout is flexible, and attempts to match the behavior of the native C ABI on the target); some type-promotion rules differ (*2); compiler may sometimes warn about implicit loss of precision (*3); ...

*1: the core semantic type-model is loosely JVM-like: Int/Long/Float/Double/Pointer, with each type having logical subtypes:

Int (logical 32-bit):
    char (implicit sign), signed char (explicit sign), unsigned char
    short, unsigned short, wchar_t
    int, unsigned int
    enum types
Long (logical 64-bit):
    long (Native Long), unsigned long (Unsigned Native Long)
        semantically 64-bit, may be stored as either 32 or 64 bits.
    long long, unsigned long long
Float:
    float
Double:
    double, long double
Pointer:
    primitive pointer types
    struct pointers
    array types
    intptr_t
value types:
    struct (by value)
    ...
in many cases, the VM does not distinguish between subtypes of a given base type, and where relevant the VM uses different operators to distinguish between signed and unsigned variants (or to implement specific subtypes).

*2: '(signed int) op (unsigned int)' in many cases promotes to '(signed long long)' rather than '(unsigned int)', with a few exceptions:
* if the compiler can determine that the values will be in range;
* if the destination is '(unsigned int)' (*2);
operations between integer and floating types implicitly promote to double; ...

*3: in the general case, loss of precision without a cast will generate a warning, however this warning will be suppressed in some cases:
* if the compiler statically knows the value is in-range of the type
** ex: literal values will only generate a warning if out-of-range.
* if the wider type was the result of an implicit promotion from types which are within the range constraints of the destination.

this allows having some benefit from detecting implicit narrowing, without making narrower types nearly unusable due to excessive and pedantic use of casts and suffixes (a big annoyance with Java and C#). likewise, warnings may also be generated for cases where enums implicitly decay into integers.

many cases of implicit conversion between pointer types are raised to an error (note, however, that '(void *)' follows GNU C semantics, essentially meaning it is an implicitly convertible version of "(char *)" which may not be dereferenced). this is because nearly all cases of implicit conversion IME are in error (such as messing up function arguments).

currently, some static bounds-checking will be done, but runtime bounds-checking is not performed (it has been considered, and would be optimized away when bounds can be statically determined, *4). implicit conversion between pointers and integers is similarly disallowed. ...
*4: actually, a run-time checked "non-trusted" mode has been considered, which would implicitly do a few things:
* pointers are implicitly boxed (conceptually similar to fat pointers, but implemented differently);
* memory accesses are checked (both bounds-checked and rights-checked);
* "pointer spoofing" is disallowed via run-time checks (pointers loaded from memory would be validated if the memory was not known-safe, and the VM would reject the use of any pointer for which the code lacks access rights);
* access rights would be checked for function calls;
...

here, the code would look and behave like normal native C, but the underlying machinery would differ. this is, however, not supported by the current VM (and is somewhat more complex to support, more so for a VM intended for a simple implementation to remain usable on low-spec hardware). the idea would be that while you could technically have a pointer to any arbitrary address, any attempt to dereference it would trap.

my past script VM actually operated under a similar model (implicitly, pointers were objects at the VM level, with internal methods for applying operations on them), and had also essentially used a variant of the Unix security model to validate access to memory and to classes and object instances (and to specific fields and methods). use was hindered some in that it was a bit of a hassle to work with (using it involved needing to declare access rights to things).

one way it differed from the traditional model was that execution threads held a "keyring" of active permissions (as UID/GID pairs), rather than being based around per-resource ACL checking (each resource having a list of possible UGID pairs and the access rights applied to each separately).

note that for the most part the script VM was able to do most of this as semi-static validation, so for the most part significant runtime costs could be avoided. |
Martijn Lievaart <m@rtij.nl.invlalid>: Mar 16 10:33AM +0100 On Sun, 15 Mar 2015 13:26:37 -0500, BGB wrote: > then there is wackiness like "Duff's device", for which I am not > entirely certain whether or not it will work correctly in my compiler, > but it is obscure enough that I don't really care either way... If that is your mindset and the quality of your compiler, remind me not to use it. M4 |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 16 01:30PM On Fri, 2015-03-13, Martijn van Buul wrote: > software on hardware hardware that exceeds most desktop PCs(Broadwell Core i5 > or i7, with about 16GB of memory) - yet it is still considered an embedded > system My embedded systems are bigger than yours -- they have 48 GB of RAM and 20 cores ;-) ... > the state machine from whatever it is controlling, (hopefully) leading to > better code quality. Whether that results in an unacceptable performance > penalty is largely the result of implementation details. Or another angle: the high-level tools C++ offers are excellent for building state machines. Don't know if they're considered OO, though. (I'm thinking, among other things, about template meta-programming for efficient and effective state machines.) /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
BGB <cr88192@hotmail.com>: Mar 16 08:51AM -0500 On 3/16/2015 4:33 AM, Martijn Lievaart wrote: >> but it is obscure enough that I don't really care either way... > If that is your mindset and the quality of your compiler, remind me not > to use it. it is mostly for personal use anyways... but, yeah, "basically works" is the level of quality for a lot of this, like it runs the code I want to run through it, rather than necessarily being able to handle any bit of code that already exists. the language also makes enough "fundamental" changes in enough areas (and partly ignores the standards in other cases) that it is debatable if it is still C, but it is at least a C variant. but, not everything needs to be "perfect" in most things, usually just good enough for the tasks at hand, and if something is broke, one can fix it and move along. likewise, a person can use test-cases to verify that everything basically works. the main thing that Duff's device does that is uncertain is that it uses case-statements across multiple levels of blocks, which is an unverified feature in this compiler, and I may actually consider formally disallowing it in this C variant if it comes up as an issue (basically, requiring all 'case' statements to be directly within the body of the switch). |
scott@slp53.sl.home (Scott Lurndal): Mar 16 02:09PM
>whenever I want. But I was just saying I /personally/, with the kind
>of problems I solve, rarely (not never) find a reason to use
>(= implement) inheritance.

I find that I use inheritance in one of two ways:

- To implement java-like "Interface" using pure abstract base classes.
- To implement base-object semantics that can be applied to multiple
  classes (e.g. a Thread class which is inherited by any class which
  desires a pthread).

class c_thread: public c_dlist {
    /**
     * Thread identifier for this thread.
     */
    pthread_t t_thread;
    c_logger *t_logger;
    std::string t_thread_name;
    volatile bool t_running;

    void insert(void);

    static c_dlist t_thread_list;
    static pthread_mutex_t t_lock;
    static void lock(void) { pthread_mutex_lock(&t_lock); }
    static void unlock(void) { pthread_mutex_unlock(&t_lock); }
    static void *run(void *);

    bool t_thread_initialized;

protected:
    pthread_cond_t t_ready;
    pthread_mutex_t t_threadlock;

    void ignore_signals(void);
    void lock_thread(void) { pthread_mutex_lock(&t_threadlock); }
    void set_threadname(const char *);
    void thread_initialized(void);
    void unlock_thread(void) { pthread_mutex_unlock(&t_threadlock); }

public:
    c_thread(const char *name, c_logger *);
    virtual ~c_thread(void);

    virtual void run(void) = 0;
    virtual void terminate(void);

    bool set_thread_affinity(cpu_set_t *, size_t);
    bool is_running(void) { return t_running; }
    bool in_context(void) { return t_thread == pthread_self(); }
    void *join(void) { void *p; pthread_join(t_thread, &p); return p; }
    const char *get_thread_name(void) { return t_thread_name.c_str(); }

    static void dump_threadlist(c_logger *);
    static void initializer(void) __attribute__((constructor));
}; |
BGB <cr88192@hotmail.com>: Mar 16 11:34AM -0500
On 3/16/2015 8:30 AM, Jorgen Grahn wrote:
>> system
> My embedded systems are bigger than yours -- they have 48 GB of RAM
> and 20 cores ;-)

yes, but I suspect these sorts of things may not necessarily be what one normally thinks of as "embedded".

usually, "embedded" implies something found in some level of consumer electronics or other hardware (such as cars), usually specifically for running that hardware, but generally excludes things for which the computer-like aspects are a dominant feature (such as phones or tablets or modern game consoles). likewise, I would probably exclude things like server racks or similar from this definition.

in most consumer electronics, about the cheapest processor they can get away with for the task is used, generally in an attempt to keep per-unit costs down (fewer people will buy it if it costs more, and it will cost more if it uses significantly more computing power than the task needs).

in my case, for personal hobbyist robotics type stuff, hardware is used which is sufficient but still basically affordable. for example, one doesn't want to put hundreds of $ worth of computing hardware at risk on a robot, when spending $20 to $40 for the controller is sufficient (such as using a Raspberry Pi hooked up mostly to custom wire-wrap boards, *).

but, it is tempting to use slightly more powerful boards; for example, I may want to do things like computer-vision tasks or speech recognition, which are a little steeper in terms of CPU cost. but, at the same time, I'm not going to spend something like $250 on a controller board with a Tegra X1 or similar for this. however, there are now $40 boards with multi-core ARMv7 / Cortex-A family processors, which I may consider going for (when I get around to ordering another one). 
*: wire-wrap has an advantage over breadboards in that it allows a lot more components, and that one can run a lot more power (for example, I once wasn't careful enough and ended up running several amps through a breadboard (running a small DC motor), and had issues with wires starting to melt down and some slight melting of the breadboard's plastic).

typically, for a lot of wire-wrap and similar, I tend to use mostly 24AWG solid wire for everything, which can easily handle around 4A but can be pushed up to around 8-10 amps (though with notable voltage drops and heat). heavier wire is typically used for power wiring (sometimes a lot of 18 and 16 AWG for point-to-point power wiring).

though, for a smaller robot, a lot of the power wiring was 20AWG, with 16AWG for the main power input, because its power use is fairly modest (generally under 10 amps). for another modest-sized robot, I will likely need to build the power board to handle around 60 amps or so (so, probably some heavier wiring and a lot of solder gobs, as well as using some of my more expensive transistors...).

for testing, I had recently built an analog driver for running motors (using a pot to control them), but it was built using a lump of (slightly) cheaper transistors (8A TO-220 NPNs, many wired in parallel) with the wiring encased in silicone so it can be water-cooled (via submersion). while the collector on the TO-220 packages is linked to the metal back-plate (and exposed to the water), I can mostly ignore this as all the collectors are in common (it is basically a big makeshift Darlington transistor). 
this is mostly because of the relatively limited power output of a lab power supply (in my tests, somewhere around 35W), whereas for some of my tests I need a few hundred watts, but I also needed an ability to test things without necessarily throwing the full power of a lead-acid battery at the problem (don't necessarily need to throw several hundred amps or similar at it).

what would be nice, but alas, is an affordable lab power supply that could output up to maybe a few kilowatts or so (say, up to 20V 100A or so). but, something like this would be operating at the bare limits of normal residential circuit breakers, so lead-acid batteries basically work.

> building state machines. Don't know if they're considered OO, though.
> (I'm thinking, among other things, about template meta-programming for
> efficient and effective state machines.)

yeah. the main issue is that a lot of conventional OO style is based around lots of objects interacting via deeply nested call graphs (and doing damn-near everything with method calls).

in contrast, for a lot of this stuff (incremental state machines and event-driven code), you can't really afford deeply nested calls, or code that does a whole lot of work at any given time (as this may put the integrity of the event scheduler at risk). so, a lot of the code will do a little bit of work, and then set it up so that more work can be done in the next event handler.

also, while on a PC you can do a fair bit of work in a microsecond (spin in a loop a few hundred times or maybe call a bunch of functions), you can't do nearly so much on hardware operating in the MHz range (maybe a call to a largish function eats a usec all by itself). 
unlike with PC code, it may also not be affordable to rely on loops, so often event scheduling takes over for the use of a loop, by scheduling lots of events with a 0us delay, so that any other waiting events will execute before the next scheduled event. for example, the scheduler may be FIFO based, where any live events are executed immediately, and any events which can't be run yet are bumped to the end of the list (until they can be executed).

doing a "loop" using the scheduler may be slower than looping directly, but generally in these cases, avoiding causing a delay is more important than how efficiently the loop can spin (*). and you don't really want to eat cycles in too many unnecessary calls.

likewise, this means partly bypassing the supplied GPIO interface and going more directly to the HW registers, say, because the normally provided GPIO interface is absurdly slow (doing who knows what internally). ...

*: for a VM, you can try to organize it so that interpreted code (executed as call-threaded code in the current VM) executes in pieces of at most 1us (determined by the VM using crude heuristics). like everything else, the execution of the VM is tied to the event scheduler, so normally VM code will not cause delays (even with loops, and despite otherwise being dead-slow).

by constraining the execution time of call-threaded traces (mostly by constraining their maximum length and similar), but otherwise running them as fast as possible, this at least allows moderately higher interpreter performance than if every bytecode operation required a trip through the main scheduler. the main exception here is that calls into native land may result in stalls if the native code stalls.

or such... |
scott@slp53.sl.home (Scott Lurndal): Mar 16 06:04PM >running that hardware, but generally excludes things for which its >computer-like aspects are a dominant feature (such as phones or tablets >or modern game consoles). One of the largest users of big "embedded" SoC's (like the Cavium 48 core MIPS parts) are network appliances (switches, routers, intrusion detection, deep packet inspection, etc). |
scott@slp53.sl.home (Scott Lurndal): Mar 16 06:06PM >> any class which desires a pthread). > Could this also be accomplished by using composition with a thread > object (Sometimes called: »delegating to a thread object«)? Anything can be done. I see no advantage to delegation in this case. |
Paul <pepstein5@gmail.com>: Mar 16 08:28AM -0700

Accelerated C++ contains the code below, which is discussed for the case n = 0. If n = 0, I can see that v is an empty vector but I don't see what effect the delete statement has. The delete statement deletes the iterator one past the end of an empty vector. This seems similar to deleting a null pointer but is not quite the same. I would think that the delete statement does nothing in the same way that deleting a null pointer does nothing. However, I can't find anything that explicitly says so. Thank you.

Paul

T* p = new T[n];
vector<T> v(p, p + n);
delete[] p; |
Luca Risolia <luca.risolia@linux-projects.org>: Mar 16 05:22PM +0100
On 16/03/2015 16:28, Paul wrote:
> deleting a null pointer does nothing. However, I can't find anything
> that explicitly says so. Thank you.
> T* p = new T[n]; delete[] p;

If I remember correctly, if n = 0, by the standard p cannot be nullptr, but delete[] should be safe. You should have a look at the standard for the exact wording. |
Paavo Helde <myfirstname@osa.pri.ee>: Mar 16 12:00PM -0500
Paul <pepstein5@gmail.com> wrote in
> T* p = new T[n];
> vector<T> v(p, p + n);
> delete[] p;

p is a pointer to an object allocated by new[], so of course it is safe to delete[] it. And delete[] works on pointers, not iterators, so whether p can be interpreted in another context as an iterator or not is beside the point. Comment out the unneeded second line if this is confusing you. |
Tomas Garijo <tgarijo@gmail.com>: Mar 16 08:04AM -0700
On Wednesday, March 11, 2015 at 17:16:45 (UTC+1), Tomas Garijo wrote:
> Hello, I'm a noob in C++. My problem is the following. A Linux system is sending my Apache server an XML stream by HTTP POST. I need to capture this XML stream via CGI and parse it, but I don't know how to do this. The major problem is capturing the XML stream. I don't know if I need a CGI API or a socket API. Please can you help me?
> Regards.

Hello,

I don't know what happened with my code, but I can't receive XML data by HTTP POST. This is my code:

int main(int argc, char *argv[]) {
    cout << "Content-type:text/html\r\n\r\n";
    cout << "<html>\n";
    cout << "<head>\n";
    cout << "<title>CGI Envrionment Variables</title>\n";
    cout << "</head>\n";
    cout << "<body>\n";
    //cout << "<p>" << *argv[1] << "</p>" << endl;
    //cout << "argc = " << argc << endl;
    //cout << "argc =0 " << argv[0] << endl;

    char *inputlenstr;
    char *tmp;
    int inputlen;
    int status;

    cout << "Tmp= " << *tmp << endl;
    inputlenstr = getenv("CONTENT_LENGTH");
    inputlen = atoi(inputlenstr);
    status = fread(tmp, 1, inputlen, stdin);
    cout << "Status= " << status << endl;
    cout << "Tmp= " << *tmp << endl;

    cout << "</body>\n";
    cout << "</html>\n";
}

and this is a Python program that sends the XML file (sorry for using Python):

import requests
url = 'http://192.168.3.32/cgi-bin/xmlengine'
data = open('.//file.xml', 'r').read()
r = requests.post(url, data=data)
print(r.text)

This is the result:

<html>
<head>
<title>CGI Envrionment Variables</title>
</head>
<body>
argc = 1
argc =0 /var/www/.netbeans/remote/192.168.3.32/tgarijow7-Windows-x86_64/P/xmlengine/dist/Debug/GNU-Linux-x86/xmlengine
Tmp= |
Status= 0
Tmp= |
</body>
</html>

PS: I used a Java program to send the XML file by HTTP POST and I had the same result.

Regards |
drew@furrfu.invalid (Drew Lawson): Mar 16 03:27PM In article <3635c923-06ce-494d-96e8-d1efd9c68297@googlegroups.com> >This is my code [snip] > status = fread(tmp, 1, inputlen, stdin); > cout << "Status= " << status << endl; > cout << "Tmp= " << *tmp << endl; In all of this, tmp is an uninitialized pointer. -- Drew Lawson ". . . And I never give a reason" -- God, as channeled by Seven Nations |
Victor Bazarov <v.bazarov@comcast.invalid>: Mar 16 11:32AM -0400
On 3/16/2015 11:04 AM, Tomas Garijo wrote:
> I don't know what happend with my code, but I can't receive xml data by http post.
> This is my code
> [..]

If your program compiles and runs, it's probably fine as far as the C++ language is concerned. Please post your CGI questions to an HTTP or CGI newsgroup. I recommend news:comp.infosystems.www.authoring.cgi . Sending and receiving files using HTTP is not part of the C++ language.

V
-- I do not respond to top-posted replies, please don't ask |
Tomas Garijo <tgarijo@gmail.com>: Mar 16 09:03AM -0700
On Wednesday, March 11, 2015 at 17:16:45 (UTC+1), Tomas Garijo wrote:
> Hello, I'm a noob in C++. My problem is the following. A Linux system is sending my Apache server an XML stream by HTTP POST. I need to capture this XML stream via CGI and parse it, but I don't know how to do this. The major problem is capturing the XML stream. I don't know if I need a CGI API or a socket API. Please can you help me?
> Regards.

I have the solution, with the library Cgicc.

Code:

Cgicc cgi;
const CgiEnvironment& env = cgi.getEnvironment();
cout << env.getPostData();

Thanks to all.
Regards |
ram@zedat.fu-berlin.de (Stefan Ram): Mar 16 03:07PM > - To implement base-object semantics that can be applied to > multiple classes (e.g. a Thread class which is inherited by > any class which desires a pthread). Could this also be accomplished by using composition with a thread object (Sometimes called: »delegating to a thread object«)? |
fefe <fefe.wyx@gmail.com>: Mar 15 06:39PM -0700
On Sunday, March 15, 2015 at 7:39:36 PM UTC+8, Bo Persson wrote:
> asm keyword doesn't support constexpr members of struct templates.
> I very much doubt that this limitation is explicitly documented.
> Bo Persson

This is documented in the C++ specification. The grammar for an asm-definition is:

asm-definition:
    asm ( string-literal ) ;

which means only a string literal can appear inside asm(). |