| Juha Nieminen <nospam@thanks.invalid>: Jun 30 07:57AM

Character encoding was a problem in the 1960s, and it's still a problem today, no matter how much computers advance. Sheesh.

Problem is, how to reliably write wide char string literals that contain non-ASCII characters? Suppose you write, for example, this:

    const wchar_t* str = L"???";

In the *source code* that string literal may be, e.g., UTF-8 encoded. However, the compiler needs to convert it to wide chars. Problem is, how does the compiler know which encoding is being used in that 8-bit string literal in the source code, in order for it to convert it properly to wide chars?

Some compilers may assume it's UTF-8 encoded source code. Others may assume it's ISO-Latin-1 encoded (I'm looking at you, Visual Studio). Obviously the end result will be garbage if the wrong assumption is made.

In most compilers (such as Visual Studio) you can specify which encoding to assume for source files, but this has to be done at the project settings level. I don't think there's any way to specify the encoding in the source code itself.

What does the C++ standard say? Does it say that source code files are always UTF-8 encoded, or is it up to the implementation? I assume that if it's the latter, the standard doesn't provide any mechanism to specify which encoding is being used. Or does it? |
| Kli-Kla-Klawitter <kliklaklawitter69@gmail.com>: Jun 30 10:05AM +0200 On 30.06.2021 at 09:57, Juha Nieminen wrote: > always UTF-8 encoded, or is it up to the implementation? I assume that if > it's the latter, the standard doesn't provide any mechanism to specify > which encoding is being used. Or does it? Use UTF-16 source files. |
| Ralf Goertz <me@myprovider.invalid>: Jun 30 10:08AM +0200 On Wed, 30 Jun 2021 07:57:14 +0000 (UTC) > encoding to assume for source files, but this has to be done at the > project settings level. I don't think there's any way to specify the > encoding in the source code itself. When using a UTF encoding there is always a BOM you could use. It doesn't help much with ISO encodings, though. And I also just found out that gcc doesn't notice that a source file with an appropriate byte order mark is encoded in UTF-32 BE. That's a bit disappointing. |
| Ralf Goertz <me@myprovider.invalid>: Jun 30 10:30AM +0200 On Wed, 30 Jun 2021 10:05:40 +0200 > > assume that if it's the latter, the standard doesn't provide any > > mechanism to specify which encoding is being used. Or does it? > Use UTF-16 source files. That doesn't help with gcc. Even if you specify the encoding on the command line with -finput-charset=utf16be you run into trouble, since gcc then assumes that the include files (even those included implicitly) are UTF-16 BE as well. |
| MrSpud_r5j@ywn9entw2s.org: Jun 30 08:39AM On Wed, 30 Jun 2021 07:57:14 +0000 (UTC) >always UTF-8 encoded, or is it up to the implementation? I assume that if >it's the latter, the standard doesn't provide any mechanism to specify >which encoding is being used. Or does it? Why should it care? To C and C++, strings are just a sequence of bytes; the encoding is irrelevant unless you're using functions specific to a particular encoding, e.g. utf8_strlen() or similar. |
| "Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Jun 30 10:55AM +0200 On 30 Jun 2021 09:57, Juha Nieminen wrote: > Problem is, how does the compiler know which encoding is being used in > that 8-bit string literal in the source code, in order for it to convert > it properly to wide chars? The compiler necessarily assumes some source encoding. g++ and Visual C++ use different schemes for determining the source code encoding assumption. g++ uses a single encoding assumption that you can change via options, while Visual C++ by default determines the encoding for each individual file, which is a much more flexible scheme. However, in modern programming work you don't want to use that flexible Visual C++ scheme because the base assumption, when no other indication is present, is that a file is Windows ANSI encoded, while in modern programming work it's most likely UTF-8 encoded. So it's now a good idea to use the Visual C++ UTF-8 option, plus some others, e.g. /nologo /utf-8 /EHsc /GR /permissive- /FI"iso646.h" /std:c++17 /Zc:__cplusplus /Zc:externC- /W4 /wd4459 /D _CRT_SECURE_NO_WARNINGS=1 /D _STL_SECURE_NO_WARNINGS=1 > Some compilers may assume it's UTF-8 encoded source code. Others may > assume it's ISO-Latin-1 encoded (I'm looking at you, Visual Studio). > Obviously the end result will be garbage if the wrong assumption is made. Yes. You can to some extent prevent Visual C++ mis-interpretation by using the UTF-8 BOM as an encoding indicator, and I recommend that. However, there are costs, in particular that mindless Linux fanbois (all fanbois are mindless, even C++ fanbois) hung up on supporting archaic Linux tools that can't handle the BOM, can then brand you as this and that; and that's not hypothetical, it's direct experience. Also, even though using a BOM is a very strong convention in Windows the Cmd `type` command can't handle it, so that one is nudged in the direction of Powershell, which is a monstrosity that I really hate. > In most compilers (such as Visual Studio) you can specify which encoding > to assume for source files, but this has to be done at the project > settings level. Uhm, no, you can specify compiler options per file if you want, in each file's properties. Visual Studio 2019 screenshot: (https://ibb.co/tJ5jNJC) > I don't think there's any way to specify the encoding > in the source code itself. Not in standard C++. For Visual C++ there is an undocumented (or used to be undocumented) `#pragma` used e.g. in automatically generated resource scripts, .rc files. I don't recall the name. Also, there is the UTF-8 BOM. An UTF-8 BOM is a pretty surefire way to force UTF-8 assumption. > What does the C++ standard say? Does it say that source code files are > always UTF-8 encoded, or is it up to the implementation? It's totally up to the implementation. That wouldn't be so bad if the standard had addressed the issue of a collection of source files, in particular headers, with different encodings, e.g. if the standard had /required/ all source files in a translation unit to have the same encoding. That's the assumption of g++, but not of Visual C++. > I assume that if > it's the latter, the standard doesn't provide any mechanism to specify > which encoding is being used. Or does it? Right. It's a mess. :-o :-) But, practical solutions: • Use UTF-8 BOM and Just Ignore™ whining from Linux fanbois. • For good measure also use `/utf-8` option with Visual C++. • Where it matters you can /statically assert/ UTF-8 encoding. 
A `static_assert` depends both on the compiler's source file encoding assumption being correct, whatever it is, and on the basic execution character set (the encoding of literals in the executable) being UTF-8. These are separate encoding choices and can be specified separately with both g++ and Visual C++. But assuming they both hold,

    constexpr inline auto utf8_is_the_execution_character_set()
        -> bool
    {
        constexpr auto& slashed_o = "ø";
        return (sizeof( slashed_o ) == 3
            and slashed_o[0] == '\xC3'
            and slashed_o[1] == '\xB8');
    }

When a `static_assert(utf8_is_the_execution_character_set())` holds you can be pretty sure that the source encoding assumption is correct.

- Alf |
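A minimal sketch of how such a check might be wired in, repeating Alf's function so the fragment compiles on its own; the assertion message and the option names it mentions (/utf-8 for Visual C++, -fexec-charset=UTF-8 for g++) are illustrative, not prescribed by the thread:

    // Repeated from Alf's post above so that this fragment is self-contained.
    constexpr inline auto utf8_is_the_execution_character_set()
        -> bool
    {
        constexpr auto& slashed_o = "ø";
        return sizeof( slashed_o ) == 3
            and slashed_o[0] == '\xC3'
            and slashed_o[1] == '\xB8';
    }

    // Fails the build early when literals are not UTF-8 encoded in the executable,
    // e.g. when /utf-8 (Visual C++) or -fexec-charset=UTF-8 (g++) was forgotten.
    static_assert( utf8_is_the_execution_character_set(),
                   "Please build with an UTF-8 execution character set." );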
| David Brown <david.brown@hesbynett.no>: Jun 30 11:23AM +0200

On 30/06/2021 10:55, Alf P. Steinbach wrote:
> Right. It's a mess. :-o :-)
> But, practical solutions:
> • Use a UTF-8 BOM and Just Ignore™ whining from Linux fanbois.

Should you also ignore the recommendations from Unicode people? I say you should ignore Windows Notepad fanbois, and drop the BOM.

Some programs can't handle a UTF-8 BOM. Some programs can't handle (or at least, can't automatically recognise) UTF-8 encoding without a BOM. Some programs add a UTF-8 BOM automatically, some remove it automatically, some don't care whether it is there or not. Like it or not, your success with or without a UTF-8 BOM is going to depend on the programs you use. If you use a lot of programs that can't work properly without it (such as Windows Notepad), use a BOM. If you are able to live without it, perhaps by telling your editor to assume UTF-8 or adding a compiler switch to your build system, do so.

When you have the option, /always/ choose UTF-8 encoding /without/ a BOM. It is far and away the most popular format for text, and it is the format you are already using. It is the format used by all the include files you have for all your libraries (including the standard library) on your system, and on every other system. That is because plain ASCII is also valid UTF-8 with no BOM. It is the only Unicode encoding that is fully compatible with the files you have - it is therefore your only option if your preference is to have a single encoding. The sooner encodings other than BOM-less UTF-8 die out, the better. That is the only way out of the mess.

> }
> When a `static_assert(utf8_is_the_execution_character_set())` holds you
> can be pretty sure that the source encoding assumption is correct.

Static assertions are always a good idea. So are pragmas forcing options, for compilers that support that. |
| Kli-Kla-Klawitter <kliklaklawitter69@gmail.com>: Jun 30 12:37PM +0200 On 30.06.2021 at 10:30, Ralf Goertz wrote: > command line with -finput-charset=utf16be you run into trouble, since > gcc then assumes that the include files (even those included implicitly) > are UTF-16 BE as well. UTF-16 files usually start with a byte order mark (BOM), which helps the compiler distinguish ASCII files from UTF-16 files. |
| Richard Damon <Richard@Damon-Family.org>: Jun 30 06:51AM -0400 On 6/30/21 3:57 AM, Juha Nieminen wrote: > always UTF-8 encoded, or is it up to the implementation? I assume that if > it's the latter, the standard doesn't provide any mechanism to specify > which encoding is being used. Or does it? You have this backwards: by the Standard, you don't tell the implementation what encoding the source files are; the implementation tells you what encoding it expects you to use. The implementation is allowed to give you a way to tell it what to tell you, but these are all implementation details. There is a fundamental issue with trying to define an in-source way to specify this, as we can't even assume that ASCII is part of the encoding, since it could be EBCDIC. Yes, if we got to throw out everything and start fresh, we might do things differently. As to the question of how to put the characters in a string, that is what escape codes like \u and \U are for. |
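For illustration (not part of Richard's post), a minimal sketch of that approach: the non-ASCII characters are spelled as universal-character-names, so the source file stays plain ASCII and the result no longer depends on what the compiler guesses about the file's encoding.

    int main()
    {
        // \u takes a 4-hex-digit code point, \U an 8-digit one (needed beyond
        // the Basic Multilingual Plane). The source file itself stays ASCII.
        const wchar_t* txt = L"Copyright \u00A9 \u00C5ngstr\u00F6m \U0001F600";

        // On platforms with 16-bit wchar_t the last character typically becomes
        // a surrogate pair; with 32-bit wchar_t it is a single unit.
        return txt[0] == L'C' ? 0 : 1;
    }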
| Juha Nieminen <nospam@thanks.invalid>: Jun 30 10:52AM > Why should it care? To C and C++ strings are just a sequence of bytes, the > encoding is irrelevant unless you're using functions specific to a particular > encoding, eg: utf8_strlen() or similar. That would be correct if this were a char string literal. But it's not. It's a wide char string literal. L"something". This means that in the source file the stuff between the quotes is, for example, UTF-8 encoded, but the compiler needs to produce a wide char string into the compiled binary, so the compiler needs to perform at compile time a string encoding conversion from 8-bit UTF-8 to whatever a wchar_t* may be (most usually either UTF-16 or UTF-32). |
| Paavo Helde <myfirstname@osa.pri.ee>: Jun 30 02:03PM +0300

30.06.2021 10:57, Juha Nieminen wrote:
> non-ASCII characters?
> Suppose you write, for example, this:
> const wchar_t* str = L"???";

I want my code to work anywhere with any compiler/framework conventions and settings, and all my internal strings are in UTF-8 anyway, so I can use strict ASCII source files with hardcoded UTF-8 byte sequences, e.g.:

    std::string s = "Copyright \xC2\xA9 2001-2020";

One can find the UTF-8 codes for such symbols quite easily from pages like https://www.fileformat.info/info/unicode/char/a9/index.htm

For converting my strings to wide strings for Windows SDK functions, I have small utility functions like Utf2Win():

    ::MessageBoxW(nullptr, Utf2Win(s).c_str(), L"About", MB_OK);

This setup means I do not have to worry about source code codepage conventions *at all*. Fortunately I do not have many such texts. YMMV. |
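For context, a helper along the lines Paavo describes is commonly built on the Windows API function MultiByteToWideChar. The sketch below only illustrates the idea; it is not his actual Utf2Win() implementation, and the error handling is deliberately minimal:

    #include <Windows.h>
    #include <string>

    // Hypothetical sketch of a UTF-8 -> UTF-16 helper such as Utf2Win().
    std::wstring Utf2Win(const std::string& utf8)
    {
        if (utf8.empty()) { return std::wstring(); }

        // First call: ask how many UTF-16 code units the result needs.
        const int n = ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                            static_cast<int>(utf8.size()),
                                            nullptr, 0);
        std::wstring wide(n, L'\0');

        // Second call: perform the actual conversion into the buffer.
        ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                              static_cast<int>(utf8.size()),
                              &wide[0], n);
        return wide;
    }

With something like this in place, the conversion happens only at the API boundary, as in Paavo's ::MessageBoxW(nullptr, Utf2Win(s).c_str(), L"About", MB_OK) call.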
| Juha Nieminen <nospam@thanks.invalid>: Jun 30 11:09AM > As to the question of how to put the characters in a string, that is > what escape codes like \u and \U are for. While perhaps not ideal in terms of code readability (or writability, as one needs to look up the Unicode code points of each non-ASCII character), I suppose this is the best solution for portable code. |
| Juha Nieminen <nospam@thanks.invalid>: Jun 30 11:13AM > and settings, and all my internal strings are in UTF-8 anyway, so I can > use strict ASCII source files with hardcoded UTF-8 characters, e.g.: > std::string s = "Copyright \xC2\xA9 2001-2020"; Does that work for wide string literals? Because I don't think it does. In other words: std::wstring s = L"Copyright \xC2\xA9 2001-2020"; However, as suggested in another reply, using "\uXXXX" instead ought to work just fine (regardless of whether it's a narrow or wide char literal). As long as you don't need the readability, of course. |
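To make the distinction concrete, here is an illustrative sketch (not from the thread): a \x byte escape in a wide literal produces one wchar_t per escape, so a UTF-8 byte pair does not turn into the intended character, whereas \u00A9 names the code point and works in narrow and wide literals alike.

    #include <string>

    int main()
    {
        std::wstring wrong = L"Copyright \xC2\xA9 2001-2020"; // wchar_t 0x00C2 then 0x00A9, not the © character
        std::wstring right = L"Copyright \u00A9 2001-2020";   // a single wchar_t 0x00A9, i.e. ©
        std::string  utf8  =  "Copyright \u00A9 2001-2020";   // the bytes 0xC2 0xA9, if the execution
                                                              // character set is UTF-8

        return wrong.size() == right.size() ? 1 : 0;          // the two wide strings differ in length by one
    }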
| MrSpud_3u59h8@0c9tv3nddl090w2ynhm.gov: Jun 30 11:28AM On Wed, 30 Jun 2021 10:52:50 +0000 (UTC) >char string into the compiled binary, so the compiler needs to perform >at compile time a string encoding conversion from 8-bit UTF-8 to >whatever a wchar_t* may be (most usually either UTF-16 or UTF-32). I suspect that's something only Windows programmers have to worry about. UTF-8 has been the de facto standard on *nix for years. |
| Richard Damon <Richard@Damon-Family.org>: Jun 30 07:35AM -0400 On 6/30/21 7:09 AM, Juha Nieminen wrote: > While perhaps not ideal in terms of code readability (or writability, as > one needs to look up the Unicode code points of each non-ASCII character), > I suppose this is the best solution for portable code. And, as is somewhat common in the standard, this sort of thing was designed so that, if you wanted to, you could write the file in some encoding that the compiler doesn't know, and then run it through a pre-processor that performs this transform to the 'ugly' codes. Thus the original source file (the one to edit to make changes) would be very readable, and the ugliness is hidden in a temporary intermediary file. For example, the file might be a .cpp16 file that is UTF-16 encoded. The makefile would have a recipe for converting .cpp16 to .cpp, switching to the compiler's 'natural' character set. |
| Richard Damon <Richard@Damon-Family.org>: Jun 30 07:39AM -0400 On 6/30/21 7:13 AM, Juha Nieminen wrote: > However, as suggested in another reply, using "\uXXXX" instead ought > to work just fine (regardless of whether it's a narrow or wide char > literal). As long as you don't need the readability, of course. \x works in wide string literal too, and puts in a character with that value. The difference is that if the wide string type isn't unicode encoded then it might get the wrong character in the string. |
| "Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Jun 30 01:50PM +0200 On 30 Jun 2021 13:39, Richard Damon wrote: > \x works in wide string literal too, and puts in a character with that > value. The difference is that if the wide string type isn't unicode > encoded then it might get the wrong character in the string. It gets the wrong characters in the wide string literal, period. - Alf |
| David Brown <david.brown@hesbynett.no>: Jun 30 02:01PM +0200 >> whatever a wchar_t* may be (most usually either UTF-16 or UTF-32). > I suspect that's something only Windows programmers have to worry about. UTF-8 > has been the de facto standard on *nix for years. UTF-8 has been the standard for most purposes for many years (with UTF-32 used internally sometimes). However, there are a few very important exceptions that use UTF-16, because they started using Unicode in the early days when it looked like UTF-16 (or in fact just UCS-2) would be sufficient. That includes Windows, Java, Qt and JavaScript. Moving these to UTF-8 takes time. |
| Manfred <noname@add.invalid>: Jun 30 04:54PM +0200

On 6/30/2021 1:13 PM, Juha Nieminen wrote:
> However, as suggested in another reply, using "\uXXXX" instead ought
> to work just fine (regardless of whether it's a narrow or wide char
> literal).

Yes, it is mandated by the standard. Ref. "universal-character-name"

> As long as you don't need the readability, of course.

But you have no readability with "\xHH" either, do you? |
| Paavo Helde <myfirstname@osa.pri.ee>: Jun 30 05:56PM +0300

30.06.2021 14:13, Juha Nieminen wrote:
> However, as suggested in another reply, using "\uXXXX" instead ought
> to work just fine (regardless of whether it's a narrow or wide char
> literal). As long as you don't need the readability, of course.

Sure, you can use \x also in wide strings. However, as a wide string is not an 8-bit encoding, one should not use UTF-8 encoding, but either UTF-16 or UTF-32, depending on sizeof(wchar_t). A working example for Windows/MSVC++ with UTF-16 and sizeof(wchar_t)==2:

    #include <Windows.h>
    #include <string>

    int main()
    {
        std::wstring s = L"Copyright \xA9 2001-2020\r\n"
                         L"The smallest infinite cardinal number is \x2080\x5d0\r\n"
                         L"A single hieroglyph not fitting in 16 bits: \xD840\xDC0F";
        ::MessageBoxW(nullptr, s.c_str(), L"Test", MB_OK);
    }

This would not be portable to platforms where sizeof(wchar_t)==4. As far as I gather, the \u and \U escapes ought to be more portable. |
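A hedged sketch (not from the thread) of the same strings written with \u/\U universal-character-names instead of hand-assembled UTF-16 code units. The string literal itself then no longer depends on sizeof(wchar_t), since the compiler emits surrogate pairs or single UTF-32 units as appropriate; \U0002000F is the code point that Paavo's surrogate pair \xD840\xDC0F encodes. The MessageBoxW call of course remains Windows-specific.

    #include <Windows.h>
    #include <string>

    int main()
    {
        // Code points are named; the compiler picks the right wchar_t
        // representation (UTF-16 surrogate pairs or UTF-32 units).
        std::wstring s = L"Copyright \u00A9 2001-2020\r\n"
                         L"The smallest infinite cardinal number is \u2080\u05D0\r\n"
                         L"A single hieroglyph not fitting in 16 bits: \U0002000F";
        ::MessageBoxW(nullptr, s.c_str(), L"Test", MB_OK);
    }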
| "Öö Tiib" <ootiib@hot.ee>: Jun 30 09:22AM -0700 On Wednesday, 30 June 2021 at 10:57:30 UTC+3, Juha Nieminen wrote: > Problem is, how does the compiler know which encoding is being used in > that 8-bit string literal in the source code, in order for it to convert > it properly to wide chars? By telling it to compiler. Or to IDE that deals with compiler. > Some compilers may assume it's UTF-8 encoded source code. Others may > assume it's ISO-Latin-1 encoded (I'm looking at you, Visual Studio). > Obviously the end result will be garbage if the wrong assumption is made. It was already in VS2008 something like ...open the file in VS, File->Advanced Save Options then "Encoding" combo let to select UTF-8. > to assume for source files, but this has to be done at the project > settings level. I don't think there's any way to specify the encoding > in the source code itself. Maybe some compilers examine BOM but I have no knowledge there. All of source files I keep in UTF-8. That does not need BOM. If I get some UTF-16 or UTF-32 file then I turn it into UTF-8 anyway first before committing anywhere. > always UTF-8 encoded, or is it up to the implementation? I assume that if > it's the latter, the standard doesn't provide any mechanism to specify > which encoding is being used. Or does it? Compiler's command line is not standardized yet. |
| James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 30 02:19PM -0400 On 6/30/21 7:50 AM, Alf P. Steinbach wrote: > On 30 Jun 2021 13:39, Richard Damon wrote: ... >> value. The difference is that if the wide string type isn't unicode >> encoded then it might get the wrong character in the string. > It gets the wrong characters in the wide string literal, period. "The escape \ooo consists of the backslash followed by one, two, or three octal digits that are taken to specify the value of the desired character. ... The value of a character-literal is implementation-defined if it falls outside of the implementation-defined range defined for ... wchar_t (for character-literals prefixed by L)." (5.13.3p7) The value of a wide character is determined by the current encoding. For wide character literals using the u or U prefixes, that encoding is UTF-16 and UTF-32, respectively, making octal escapes redundant with and less convenient than the use of UCNs. But as he said, they do work for such strings. |