Monday, March 27, 2017

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

fir <profesor.fir@gmail.com>: Mar 27 08:26AM -0700

I get a little bit tired when coding my roguelike, and maybe I could rest by coding a side problem.
 
In my roguelike I currently use a 24bpp bitmap to store the tile map;
it is 2000x2000 pixels. As a pixel is something like a 'big meter'
in the game world (something like 1.5 m x 1.5 m), this corresponds to roughly a 3 km x 3 km square of land.
 
You can check it, packed, at:
 
minddetonator.htw.pl/platform12.zip
minddetonator.htw.pl/platform14.zip
 
It is a 12 MB .bmp file, and loading it causes a 1 second delay at game startup (on my HDD; on a newer drive it could be faster).
 
But obviously it could be packed, as I use only a subset of colors on this map (at most a few thousand colors/tile types; right now I use fewer than 30), so it should compress well. It should also be simple to pack, as I only need
to pack a raw rectangle of pixels, not some other form of data;
the bitmap header I can skip entirely.
 
Even compressing each pixel into index form (reducing 24 bits of color to something like 6-12 bits of index) would shrink it to half or a quarter of the original size. More elaborate ways of compressing (exploiting the fact that there are regular patterns of colors, not chaotic pixels) could also be used (zip packs this 12 MB to 80 kB; rar packs it to 60 kB).
 
The question is: what algorithm should I use to pack it? It should be reasonably simple to write, but maybe also more effective than just turning colors into palette entries.
 
 
what do you think?
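The palette-entry idea above can be sketched quickly: collect the unique colors into a palette and store one small index per pixel instead of 24 bits. A hypothetical sketch (the names are mine, indices are stored one per byte here; packing them down to 6-12 bits is left out):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct IndexedImage {
    std::vector<std::uint32_t> palette;  // unique colors; index = position in palette
    std::vector<std::uint8_t>  indices;  // one palette index per pixel
};

// Convert raw 24-bit pixels (stored in uint32_t) into palette + indices.
IndexedImage palettize(const std::vector<std::uint32_t>& pixels)
{
    IndexedImage img;
    std::unordered_map<std::uint32_t, std::uint8_t> lookup;

    for (std::uint32_t c : pixels)
    {
        auto it = lookup.find(c);
        if (it == lookup.end())
        {
            // first time we see this color: assign it the next palette slot
            it = lookup.insert({c, (std::uint8_t)img.palette.size()}).first;
            img.palette.push_back(c);
        }
        img.indices.push_back(it->second);
    }
    return img;
}
```

With fewer than 30 colors, 5-6 bits per index already suffice, which is where the 1/4 to 1/2 size estimate comes from.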
fir <profesor.fir@gmail.com>: Mar 27 09:35AM -0700

On Monday, March 27, 2017 at 5:26:50 PM UTC+2, fir wrote:
 
> even compressing each pixel into index form (reducing 24 bits of color to something like 6-12 bits of index) would shrink it to half or a quarter of the original size.. yet more elaborate ways of compressing (exploiting the fact that there are regular patterns of colors, not chaotic pixels) could be used (zip packs this 12 MB to 80 kB, rar packs it to 60 kB)
 
> the question is - what algorithm to use to pack it - it should be reasonably simple to write, but maybe also more effective than just turning colors into palette entries?
 
> what do you think?
 
PS: Maybe you know how to name the container I would probably need to use here:
 
It stores some entries (which could be various things; here they will just be 32-bit color values), and it also stores the number of occurrences of each color value in the bitmap.
 
Technically it would be an array of (color, number_of_occurrences) pairs where the key is just the array index,
like in a normal array; but in some abstract way I think it is beginning to be some other container, more like a dictionary maybe (though I'm not sure what the key would be here: the normal array index, or the color value?).
 
(Keys will be unique and need to be inserted; if a key is found again, its number of occurrences is increased.)
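The container described above is usually called a frequency map (or counter); in C++ the key would naturally be the color value itself, with the count as the mapped value. A minimal sketch with std::unordered_map (the function name is mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Count how many times each 32-bit color value occurs.
std::unordered_map<std::uint32_t, unsigned>
CountColors(const std::uint32_t* pixels, std::size_t n)
{
    std::unordered_map<std::uint32_t, unsigned> counts;
    for (std::size_t i = 0; i < n; i++)
        ++counts[pixels[i]];   // operator[] inserts a zero count on first sight
    return counts;
}
```

Because operator[] value-initializes the count to 0 on first access, insert-if-absent and increment collapse into one line, and lookup is near-constant time rather than a linear scan.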
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 27 09:44AM -0700

On Monday, March 27, 2017 at 11:26:50 AM UTC-4, fir wrote:
> ...i use now 24bpp bitmap to store a tile map, it is 2000x2000 pixels...
> ...is 12 MB...
 
If you want to keep BMP, install or use GIMP to convert it to indexed
palette format: 8 bits per pixel instead of 24. That will get you
down to a little over 4 MB, but it will increase image load time, as you
will have to restore it to 24bpp format for use, unless you're using
OpenGL, which I believe has the ability to read indexed-color
images.
 
If you are able to move away from BMP, try using PNG. It's lossless, and
stores its data in a compressed format.
 
I typically use uncompressed 24-bpp or 32-bpp bitmaps for all of my
graphics work that winds up being used in software. Memory and hard
disk storage are cheap these days. It makes the algorithms easier,
and there is no delay on use after load ... and everybody expects
a small delay for loading anyway, I would think.
 
Thank you,
Rick C. Hodgin
fir <profesor.fir@gmail.com>: Mar 27 10:01AM -0700

On Monday, March 27, 2017 at 6:44:18 PM UTC+2, Rick C. Hodgin wrote:
> disk storage are cheap these days. It makes the algorithms easier,
> and there is no delay on use after load ... which, everybody expects
> a small delay for load I would think.
 
I shouldn't answer you, Rick, as you're abusing this group and destroying minds.. but I will ;c since there's not much other answer to talk with.
 
I have no code to read PNG, and I use only the system WinAPI
DLLs or my own code (I don't want any dependencies); code for reading PNG may well be more complex than compressing it myself into my own format.
 
As for 8-bit BMP, it is funny, but I don't know of a 16-bit, or better yet 'variable index length', bitmap format
- and that is what would be needed: 256 entries is too rigorous a limit, 16 bits would suffice (it would also conform much better to the game mechanics, as I could identify a tile by array id, not by a 24-bit color, which is not an array id but only some key (!)
(I wanted to write about this once, as it is interesting IMO)). But there is no such format, and it also would not offer much compression.
 
And this compression and this delay really matter: now I use a 12 MB map, but in theory I would want larger maps, and then the delay here really matters (for me, who needs to write handsome games ;c ).
 
In later versions I would probably use something like a big number of 1000x1000-pixel maps (which would yield 3 MB uncompressed each - at least I vaguely plan something like that). If so, they would be quicker to read, only taking HDD space after zip unpacking - but as I said, I can't be sure I won't want larger maps, and CPU unpacking is fast, much quicker than HDD loading.
fir <profesor.fir@gmail.com>: Mar 27 10:36AM -0700

On Monday, March 27, 2017 at 6:35:53 PM UTC+2, fir wrote:
 
> technically it would be an array of (color, number_of_occurrences) pairs where the key is just the array index,
> like in a normal array - but in some abstract way I think it is beginning to be some other container, more like a dictionary maybe (though I'm not sure what the key would be here: the normal array index, or the color value?)
 
> (keys will be unique and need to be inserted; if a key is found again, its number of occurrences is increased)
 
PS:
I wrote some simple code (sorry for the overly long names; I have a problem with how to name these things):
 
struct ColorAndCountPair {
    unsigned color;
    unsigned count;
};

const int dictionary_of_color_and_count_max = 1000*1000;

ColorAndCountPair dictionary_of_color_and_count[dictionary_of_color_and_count_max];
int dictionary_of_color_and_count_top = 0;

int FindColorEntry(unsigned color)
{
    for (int i = 0; i < dictionary_of_color_and_count_top; i++)
        if (dictionary_of_color_and_count[i].color == color)
            return i;

    return -1;
}

void AddColorToColorCountDictionary(unsigned color)
{
    int id = FindColorEntry(color);

    if (id >= 0)
    {
        dictionary_of_color_and_count[id].count++;
    }
    else
    {
        dictionary_of_color_and_count[dictionary_of_color_and_count_top].color = color;
        dictionary_of_color_and_count[dictionary_of_color_and_count_top].count = 1;

        dictionary_of_color_and_count_top++;

        if (dictionary_of_color_and_count_top >= dictionary_of_color_and_count_max)
            ERROR_("Overflow of dictionary_of_color_and_count");
    }
}

void AnalyzeLoadedBitmap()
{
    for (int y = 0; y < map_y; y++)
        for (int x = 0; x < map_x; x++)
            AddColorToColorCountDictionary(map[y][x].color);

    int pixels_added = 0;

    for (int i = 0; i < dictionary_of_color_and_count_top; i++)
        pixels_added += dictionary_of_color_and_count[i].count;

    ERROR_("pixels added %d unique colors %d", pixels_added, dictionary_of_color_and_count_top);
}
 
 
It just adds colors to the "dictionary" and counts them; it counted 4,000,000 pixels added and 28 unique colors, so it seems to work. The most basic approach would be to encode the map as a stream of indices: 4 M * 6 bits = 4 MB * 6/8 = 3 MB (out of the initial 12 MB). Better, but I would still like to do a bit better; maybe I should dictionarize blocks of 2, 4, 6, or
8 pixels? After a bit of rest I will try it.
fir <profesor.fir@gmail.com>: Mar 27 11:57AM -0700

On Monday, March 27, 2017 at 7:37:03 PM UTC+2, fir wrote:
> }
 
> it just adds colors to the "dictionary" and counts them; it counted 4,000,000 pixels added and 28 unique colors, so it seems to work. The most basic approach would be to encode the map as a stream of indices: 4 M * 6 bits = 4 MB * 6/8 = 3 MB (out of the initial 12 MB). Better, but I would still like to do a bit better; maybe I should dictionarize blocks of 2, 4, 6, or
> 8 pixels? After a bit of rest I will try it.
 
I rewrote it now to count/collect blocks of 4 pixels:
 
const int color_block_length = 4;

struct ColorBlockCount {
    unsigned color[color_block_length];
    unsigned count;
};

const int dictionary_of_color_block_count_max = 1000*1000;

ColorBlockCount dictionary_of_color_block_count[dictionary_of_color_block_count_max];
int dictionary_of_color_block_count_top = 0;

int FindColorBlockEntry(unsigned color[color_block_length])
{
    for (int i = 0; i < dictionary_of_color_block_count_top; i++)
    {
        int eq = 1;

        for (int k = 0; k < color_block_length; k++)
            if (dictionary_of_color_block_count[i].color[k] != color[k])
            {
                eq = 0;
                break;
            }

        if (eq) return i;
    }

    return -1;
}

void AddColorBlockToColorBlockCountDictionary(unsigned color[color_block_length])
{
    int id = FindColorBlockEntry(color);

    if (id >= 0)
    {
        dictionary_of_color_block_count[id].count++;
    }
    else
    {
        for (int k = 0; k < color_block_length; k++)
            dictionary_of_color_block_count[dictionary_of_color_block_count_top].color[k] = color[k];

        dictionary_of_color_block_count[dictionary_of_color_block_count_top].count = 1;

        dictionary_of_color_block_count_top++;

        if (dictionary_of_color_block_count_top >= dictionary_of_color_block_count_max)
            ERROR_("Overflow in dictionary_of_color_block_count");
    }
}

void AnalyzeLoadedBitmap2()
{
    for (int y = 0; y < map_y; y++)
        for (int x = 0; x < map_x; x += color_block_length)
        {
            unsigned color[color_block_length];

            for (int k = 0; k < color_block_length; k++)
                color[k] = map[y][x+k].color;

            AddColorBlockToColorBlockCountDictionary(color);
        }

    int pixels_added = 0;

    for (int i = 0; i < dictionary_of_color_block_count_top; i++)
        pixels_added += dictionary_of_color_block_count[i].count * color_block_length;

    ERROR_("pixels added %d unique blocks of colors %d", pixels_added, dictionary_of_color_block_count_top);
}
 
Maybe I should also rewrite it to count two-dimensional blocks.
 
When I ran it, it yielded 12,011 unique 4-pixel blocks (I'm not sure though whether it was done correctly; to check that I would need to fully build the compressed data and decompress it, and I'm too weary/lazy to do that at the moment).
 
Assuming this is good, let's see what compression it would mean:
 
12,011 entries means a 14-bit index key for each 4 pixels;
that means 1 M * 14 bits = (let's take 16 bits) 1 M * 16 bits = 2 MB.
 
Better (than the 3 MB of per-pixel indexing) but not critically better (and I forgot I must add the palette size,
which would be 12k * 12 bytes = 144 KB).
 
Maybe I will also try 2-pixel and 8-pixel blocks.
fir <profesor.fir@gmail.com>: Mar 27 12:08PM -0700

On Monday, March 27, 2017 at 8:57:48 PM UTC+2, fir wrote:
 
> better (than the 3 MB of per-pixel indexing) but not critically better (and I forgot I must add the palette size,
> which would be 12k * 12 bytes = 144 KB)
 
> maybe I will also try 2-pixel and 8-pixel blocks
 
A 2-pixel block yielded 333 entries = 9 bits * 2M = a bit over 2 MB of storage.
8-pixel I cannot check (it seems to hang), as 2000 does not divide by 8; I will try 10-pixel.
...10-pixel also seems to hang; I must look into the reason ;c
fir <profesor.fir@gmail.com>: Mar 27 12:15PM -0700

On Monday, March 27, 2017 at 9:08:42 PM UTC+2, fir wrote:
 
> a 2-pixel block yielded 333 entries = 9 bits * 2M = a bit over 2 MB of storage,
> 8-pixel I cannot check (it seems to hang), as 2000 does not divide by 8; I will try 10-pixel
> ...10-pixel also seems to hang; I must look into the reason ;c
 
5-pixel indexing gives 32,106 entries; that is 15 bits x 4/5 M = a bit less than 2 MB of storage.
It seems that 2 MB is some kind of natural limit of the indexing method here.. well, maybe I will need to try longer pixel blocks; I hope they will not overflow the palette.
fir <profesor.fir@gmail.com>: Mar 27 12:26PM -0700

On Monday, March 27, 2017 at 9:15:41 PM UTC+2, fir wrote:
> > ...10-pixel also seems to hang; I must look into the reason ;c
 
> 5-pixel indexing gives 32,106 entries; that is 15 bits x 4/5 M = a bit less than 2 MB of storage.
> It seems that 2 MB is some kind of natural limit of the indexing method here.. well, maybe I will need to try longer pixel blocks; I hope they will not overflow the palette.
 
10-pixel indexing yielded 108,463 entries.. it turned out it was not a hang error, it just processes so damn long (maybe about 30 seconds; I don't know why.. well, the palette gets longer and inserting into it probably grows quadratically).
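The quadratic behavior suspected here comes from the linear scan over the dictionary on every insert. Keying a map on the whole pixel block makes each lookup logarithmic instead; a sketch under assumed names (not the code from the thread):

```cpp
#include <array>
#include <cstdint>
#include <map>

constexpr int block_len = 10;
using Block = std::array<std::uint32_t, block_len>;

// block -> palette entry id; std::map gives O(log n) lookup/insert,
// versus the O(n) linear scan over an array dictionary
std::map<Block, int> block_index;

int GetOrAddBlockEntry(const Block& b)
{
    // try_emplace inserts (b, next free id) only if b is not present yet
    auto [it, inserted] = block_index.try_emplace(b, (int)block_index.size());
    return it->second;   // existing or freshly assigned index
}
```

The same idea with std::unordered_map would need a hash for Block; std::map works out of the box because std::array compares lexicographically.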
 
108,463 entries means 17 bits * 4M/10 = 1 MB; now that is progress (though the palette must still be added, and that is 108k * 30 bytes = 3 MB of palette; now that is a disaster...
The palette could be compressed too, I think.. but I won't go into it.. so it seems 2 MB is the limit.. maybe I should find another method.. that RLE, run-length encoding, I heard something of..
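Run-length encoding, as mentioned above, stores (count, value) pairs instead of repeating pixels, so the large uniform regions of a tile map collapse to a few pairs. A minimal sketch (hypothetical helpers, not code from the thread):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Encode a sequence of pixels as (run length, color) pairs.
std::vector<std::pair<std::uint32_t, std::uint32_t>>
rle_encode(const std::vector<std::uint32_t>& pixels)
{
    std::vector<std::pair<std::uint32_t, std::uint32_t>> runs;
    for (std::uint32_t c : pixels)
    {
        if (!runs.empty() && runs.back().second == c)
            ++runs.back().first;          // extend the current run
        else
            runs.push_back({1, c});       // start a new run
    }
    return runs;
}

std::vector<std::uint32_t>
rle_decode(const std::vector<std::pair<std::uint32_t, std::uint32_t>>& runs)
{
    std::vector<std::uint32_t> pixels;
    for (auto& [count, color] : runs)
        pixels.insert(pixels.end(), count, color);  // expand each run
    return pixels;
}
```

Round-tripping through rle_encode/rle_decode reproduces the input exactly, and combined with palette indices the pairs can be packed into very few bits per run.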
fir <profesor.fir@gmail.com>: Mar 27 12:40PM -0700

On Monday, March 27, 2017 at 9:27:11 PM UTC+2, fir wrote:
 
> 10-pixel indexing yielded 108,463 entries.. it turned out it was not a hang error, it just processes so damn long (maybe about 30 seconds; I don't know why.. well, the palette gets longer and inserting into it probably grows quadratically)
 
> 108,463 entries means 17 bits * 4M/10 = 1 MB; now that is progress (though the palette must still be added, and that is 108k * 30 bytes = 3 MB of palette; now that is a disaster...
> The palette could be compressed too, I think.. but I won't go into it.. so it seems 2 MB is the limit.. maybe I should find another method.. that RLE, run-length encoding, I heard something of..
 
PS: note that all the tests mentioned are for this ugly bitmap
 
minddetonator.htw.pl/avanor.zip
 
not for the ones contained in the previous links. This one
is not so well compressed by zip and rar
(they give 825k and 765k), so if my 4-pixel indexing gives 2 MB it seems not so bad (still 1/6, though still worse than the 1/15 of zip/rar :C ).
 
This position puts me before a non-obvious decision:
use it, don't use it, or try to use zip/png.
 
I could still try RLE if it is easy enough.
bitrex <bitrex@de.lete.earthlink.net>: Mar 27 03:54PM -0400

On 03/27/2017 11:26 AM, fir wrote:
 
> it is 12 MB bmp file and loading it makes 1 second delay at game startup (on my hdd, on some more new hdd could be faster)
 
Why are you concerned about a 1 second delay in loading what I assume is
an x86 PC game? Have you played any recent games for the platform?
 
If that 1 second is the bulk of the load time (and it must be since
you're focusing on it so intently) then it's probably already the
fastest loading PC game ever made.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 27 01:04PM -0700

On Monday, March 27, 2017 at 1:02:11 PM UTC-4, fir wrote:
 
> I shouldn't answer you, Rick, as you're abusing this group and destroying minds.. but I will ;c since there's not much other answer to talk with
 
> I have no code to read PNG, and I use only the system WinAPI
> DLLs or my own code (I don't want any dependencies); code for reading PNG may well be more complex than compressing it myself into my own format
 
Windows has the ability to read PNG files. If you use the Gdiplus::Bitmap
object:
 
https://msdn.microsoft.com/en-us/library/windows/desktop/ms534420(v=vs.85).aspx
 
// using namespace Gdiplus
Gdiplus::Bitmap* image = new Gdiplus::Bitmap(bmpFilename); // bmpFilename is a WCHAR*
 
It will handle a wide array of formats. To access the rows:
 
Gdiplus::Rect lrc(0, 0, image->GetWidth(), image->GetHeight());
Gdiplus::Status status;
Gdiplus::BitmapData imageBits;
HBITMAP hbmp;
 
status = image->LockBits(&lrc, Gdiplus::ImageLockModeRead,
PixelFormat24bppRGB, &imageBits);
 
// Grab a handle to the bitmap
image->GetHBITMAP(Gdiplus::Color(255, 255, 255), &hbmp);
 
// Load into a device context
SelectObject(hdc, hbmp);
 
// Now you can BitBlt() into it, extract data out of it to
// other sources, whatever
 
// When finished
image->UnlockBits(&imageBits);
delete image;
 
Put this at the start and end of your program:
 
// In global space
ULONG_PTR token;
 
// At the start
GdiplusStartupInput startupInput;
GdiplusStartup(&token, &startupInput, NULL);
 
// At the end
GdiplusShutdown(token);
 
> As for 8-bit BMP, it is funny, but I don't know of a 16-bit, or better yet
> 'variable index length', bitmap format
 
Use GIMP. You can look at the image in various forms and find out which
one you like better. Ctrl+Z will undo. 256 separate colors will usually
give you enough information for a scene, remembering that textures will
be rendered with color blending automatically.
 
> And this compression and this delay really matter: now I use a 12 MB map, but in theory I would want larger maps, and then the delay here really matters (for me, who needs to write handsome games ;c )
 
PNG files are about as small as you can get while remaining lossless.
JPGs go smaller, but they are lossy.
 
I typically use 6000 x 8000 pixel images for posters I create. The
PNG files for those are 512 KB to ~3 MB depending on content. Here
are two examples:
 
512 KB:
http://www.libsf.org:8990/projects/LIB/repos/libsf/raw/arxoda/core/cache_l1/cache1__4read_4write_poster.png
 
3.1 MB:
http://www.libsf.org:8990/projects/LIB/repos/libsf/raw/king/keyboard_designs_poster.png
 
Thank you,
Rick C. Hodgin
fir <profesor.fir@gmail.com>: Mar 27 01:06PM -0700

On Monday, March 27, 2017 at 9:40:12 PM UTC+2, fir wrote:
 
> This position puts me before a non-obvious decision:
> use it, don't use it, or try to use zip/png.
 
> I could still try RLE if it is easy enough.
 
Sorry for adding so many postscripts to the post, but as there is no edit-post function, this is the only way.
 
Out of curiosity I tried to run this indexing method on
the previous bitmap from platform12 (which is more easily compressible).
For 10-pixel indexing it yielded only 2,880 entries; this means 12 bits * 4M/10 = 600 KB (a nicer size, but still 10 times worse than rar here).
I also tried 20-pixel: it yielded 3,181 entries (isn't that an error?); if not, it means 12 bits * 4M/20 = 300 KB, now that is quite nice.. (+ a palette of 3181 * 60 = 180 KB must be added).
40-pixel gives 4,451 = 13 bits * 4M/40 = about 160 KB (+ something like 500 KB for the palette).
 
But my bitmaps will be more like the heavier one.
Jeff-Relf.Me @.: Mar 27 11:10AM -0700

woodbrian77@gmail.com: Mar 27 08:01AM -0700

On Wednesday, March 1, 2017 at 9:46:13 PM UTC-6, Jorgen Grahn wrote:
 
> Doesn't direct.cc #include anything related to libhome?
 
> Just for reference, here's how I tend to do it:
 
> https://github.com/kjgrahn/tcp/blob/master/Makefile
 
Your use of the Os optimization flag inspired me to try that
flag again and I'm glad I have.
 
I've tried this approach now, but there are a couple of things
about it that I'm not happy with. One is invoking make
from a subdirectory. First I tried making a symbolic
link to the makefile in the subdirectory. That didn't
work very well. What I've come up with is:
 
cd ..; make; cd -
 
That works, but I was wondering if there's a better way.
 
The other thing is I have this line:
 
TIERS := tiers/cmwAmbassador tiers/genz
 
That line helps me with building those two executables
and with installing/copying them to an install directory.
It doesn't help though with uninstalling those files as
the subdirectory (tiers) is not part of where the files
were installed.
 
I think I'll go with this change as it helps to get
rid of a makefile and some duplication, but it's not
all easy street.
 
Also I switched back to bitbucket:
https://bitbucket.org/webenezer/onwards/overview
 
 
 
Brian
Ebenezer Enterprises - "You are the light of the world.
A city on a hill cannot be hidden." Matthew 5:14
 
http://webEbenezer.net
scott@slp53.sl.home (Scott Lurndal): Mar 27 03:53PM

>> > makefiles. I figured out a way where I could pass those
>> > from the higher level makefile to the lower level --
>> > something like:
 
Just put all your definitions in the top-level file Makefile.defs,
and all the added rules in a top-level file Makefile.rules.
 
Then, every makefile in the directory hierarchy should start with:
 
TOP=..
include $(TOP)/Makefile.defs
 
 
... stuff
 
include $(TOP)/Makefile.rules
 
If you're two levels down, "TOP=../.." and so forth.
 
Then set up a subdirs rule to descend into the subdirectories
from the top-level makefile:
 
===== [Begin top level Makefile] =====
TOP=.
include $(TOP)/Makefile.defs
 
TARGET=xxx
 
SUBDIRS = common
SUBDIRS += io/dlps
SUBDIRS += processor
 
$(TARGET): $(OBJECTS) $(SUBDIRS) $(LIBRARIES)
	$(CXX) $(DEBUGFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LIBRARIES) $(HOST_LIBS)
 
include $(TOP)/Makefile.rules
===== [End Top Level Makefile] =====
 
In Makefile.rules, add (the subdirectory targets need to be .PHONY, since the directories themselves always exist):
 
.PHONY: subdirs $(SUBDIRS)
 
subdirs: $(SUBDIRS)
 
$(SUBDIRS):
	@$(MAKE) -C $@ $(MAKECMDGOALS) $(MAKEARGS)
Christopher Pisz <christopherpisz@gmail.com>: Mar 27 07:17AM -0700

On Friday, March 24, 2017 at 5:39:32 PM UTC-5, Daniel wrote:
> On Friday, March 24, 2017 at 4:53:31 PM UTC-4, Christopher Pisz wrote:
 
> > I compared &(errorCode.category()) and &boost::asio::error::misc_category and they are indeed different even though errorCode.category().name() outputs "asio.misc", so I don't know wtf is going on.
 
> Did you try comparing errorCode.category() and boost::asio::error::get_misc_category() (using the function call)
 
Yes. It failed.
Evidently, boost::system::error_category has all kinds of problems.
 
Ongoing discussion about it here on the boost mailing list
http://boost.2283326.n4.nabble.com/Compare-boost-system-error-category-td4692861.html#a4692910
 
I really don't follow what they are talking about. Comparing addresses seems silly to me, period.
Daniel <danielaparker@gmail.com>: Mar 27 07:56AM -0700

On Monday, March 27, 2017 at 10:17:42 AM UTC-4, Christopher Pisz wrote:
 
> Comparing addresses seems silly to me, period.
 
I agree. Comparisons should be with names.
 
Unfortunately that approach made it into C++11 with std::error_code.
 
Daniel
OpenMP ARB <openmpinfo@gmail.com>: Mar 27 06:39AM -0700

IWOMP 2017 - 13th International Workshop on OpenMP
 
September 21-22, 2017
Stony Brook, NY, USA
https://you.stonybrook.edu/iwomp2017/
 
Background
 
For many years, OpenMP has provided a very rich and flexible
programming model for shared memory architectures. OpenMP 4.0 is a
major advance that adds new forms of parallelism: device constructs
for accelerators, SIMD constructs for vector units, and several
significant extensions for work-sharing and task-based
parallelism. OpenMP 4.5 further extends OpenMP for use with today's
complex heterogeneous architectures.
 
The International Workshop on OpenMP (IWOMP) is an annual workshop
dedicated to the promotion and advancement of all aspects of parallel
programming with OpenMP. It is the premier forum to present and
discuss issues, trends, recent research ideas, and results related to
OpenMP. We solicit submissions of unpublished technical papers
detailing innovative, original research and development related to
OpenMP.
 
Topics
 
All topics related to OpenMP are of interest, including OpenMP
performance analysis and modeling, OpenMP performance and correctness
tools, proposed OpenMP extensions, and OpenMP applications in any
domain (e.g., scientific and numerical computation, video games,
computer graphics, multimedia, information retrieval, optimization,
text processing, data mining, finance, signal and image processing,
and machine learning).
 
Advances in technologies, such as new kinds of memory, deep memory
hierarchies, multi-core processors, and support in OpenMP for devices
(accelerators such as GPGPUs, DSPs and FPGAs), Multiprocessor Systems
on a Chip (MPSoCs), present new opportunities and challenges for
software and hardware developers. Recent advances in the C, C++ and
Fortran base languages also offer interesting opportunities and
challenges to the OpenMP programming model. IWOMP 2017 particularly
solicits submissions in these areas as well as ones that discuss how
to apply OpenMP to additional models of parallelism such as event
loops.
 
Paper Submission and Registration
 
Submitted papers for review should be limited to 12 pages and follow
LNCS guidelines.
 
Submission deadline is April 28, 2017 (AOE).
 
Authors of accepted papers will be asked to prepare a final paper of
up to 15 pages. As in previous years, IWOMP 2017 will publish formal
proceedings of the accepted papers in Springer Verlag's LNCS series.
 
Important Dates
 
Paper Submission Deadline: April 28, 2017
Notification of acceptance: May 25, 2017
Deadline for final version: June 8, 2017
 
Organizers
 
General Chair:
- Abid Malik, Brookhaven National Laboratory, USA
 
Program Committee Co-chairs:
- Bronis R. de Supinski, Lawrence Livermore National Laboratory, USA
- Stephen Olivier, Sandia National Laboratories, USA
Bo Persson <bop@gmb.dk>: Mar 27 01:36AM +0200

On 2017-03-26 22:37, Stefan Ram wrote:
 
> OTOH, now I don't know whether
 
> ::std::cout << u'A';
 
> will appear as an »A« on the console everywhere.
 
Probably not. On EBCDIC-based machines 'A' is still 193.
Bo Persson <bop@gmb.dk>: Mar 27 01:40AM +0200

On 2017-03-26 22:30, Stefan Ram wrote:
> called »C++19/20« by some now).
 
> Of course with regard to a graphical output or a GUI, there does
> not even seem to be a TS in sight.
 
Well, there might be:
 
"A Proposal to Add 2D Graphics Rendering and Display to C++"
 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0267r4.pdf
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 27 07:24AM +0200

On 26-Mar-17 10:30 PM, Stefan Ram wrote:
 
> Some do not like the naming because it adds yet another
> style for fundamental types (after keywords and the names
> ending in »_t«).
 
A standard name for the byte type would just have been a non-issue,
because it's defined by every other library and can be freely redefined
with that same common definition (namely `unsigned char`). Naming is
good because it helps communication from code writer to code reader. I
wouldn't have reacted negatively to a standard name.
 
But what we have here is an incompatible definition, as enum type.
 
Bummer.
 
Since it's not simply `unsigned char` it must differ in some way, but in
what way? I'm not going to try to find out what restrictions it has on
operations. If there are no operational restrictions compared to the
real byte then the type is useless as a distinct type, and if there are
restrictions, then the type is less useful than the real byte.
 
Bummer.
 
Then, it's designed as a library type, presumably to avoid having to
change the core language parts of the standard, but if the core language
parts of the standard don't provide special support for it then it
doesn't enjoy the special status that the real byte type `unsigned char`
has, such as any object being convertible to bytes.
 
Again, bummer.
 
I can't think of anything positive about this, as opposed to a name for
the real byte type, which would be useful for clarity, and which, as
mentioned, every other 3rd party library provides for that reason. This
C++17 std::byte type isn't even useful for clarity, but the opposite.
The name implies an unrestricted `unsigned char`, but the type can't be,
so the name misdirects instead of clarifying.
 
This is like the human appendix. A useless part of the anatomy that can
sometimes cause real trouble, and for that reason is sometimes removed
(also before the trouble manifests). But C++17 std::byte is not a
holdover from earlier evolution: it's like one had a human body, with
all its problems but without an appendix, and /added/ an appendix.
 
It should never have even entered the standardization process, it is
certainly not existing practice in any way, and Herb holding it up as an
achievement, that's really sad – it must be a political achievement.
 
 
Cheers!,
 
- Alf
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 27 12:13PM +0100

On Mon, 27 Mar 2017 07:24:01 +0200
"Alf P. Steinbach"
[snip]
> language parts of the standard don't provide special support for it
> then it doesn't enjoy the special status that the real byte type
> `unsigned char` has, such as any object being convertible to bytes.
 
On a quick look through the draft standard, any object in memory is
convertible to an array of unsigned char or of std::byte
interchangeably, and can be accessed through either the unsigned char
type or the std::byte type.
 
I do not know why std::byte is implemented as an enum class with
unsigned char as its underlying type rather than as a typedef to
unsigned char. Presumably that is explained in p0298r3.pdf (the paper
referenced in the article in question), which I have not read and
probably will not do so. It might have been chosen wrongly (last
minute changes can be), but I doubt it was chosen whimsically.
Probably views about type safety came into it.
Bo Persson <bop@gmb.dk>: Mar 27 01:45PM +0200

On 2017-03-27 13:13, Chris Vine wrote:
> probably will not do so. It might have been chosen wrongly (last
> minute changes can be), but I doubt it was chosen whimsically.
> Probably views about type safety came into it.
 
Making it a distinct type affects overloading and implicit conversions.
 
std::byte should not be mistaken for std::uint8_t or inadvertently be
displayed as a character using ostream operator<<.
 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0298r3.pdf
 
 
 
Bo Persson
David Brown <david.brown@hesbynett.no>: Mar 27 02:15PM +0200

On 27/03/17 13:45, Bo Persson wrote:
 
> std::byte should not be mistaken for std::uint8_t or inadvertently be
> displayed as a character using ostream operator<<.
 
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0298r3.pdf
 
There are, however, overloaded operators for bit operations and shifts,
which seems strange to me. I would have preferred to see std::byte be
more opaque, but with overloaded versions of std::memcpy and other
functions aimed specifically at moving and copying raw memory.
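The distinct-type behavior under discussion shows up in a small C++17 sketch: std::byte gets the bitwise and shift operators, but no arithmetic and no stream output, so conversion back to a number must be explicit (the function name is mine):

```cpp
#include <cstddef>   // std::byte, std::to_integer (C++17)

unsigned demo_byte()
{
    std::byte b{0x0F};

    b <<= 2;                 // shifts are overloaded for std::byte
    b |= std::byte{0x01};    // so are the bitwise operators

    // b + 1 would not compile: std::byte has no arithmetic,
    // and streaming it with operator<< is not provided either

    return std::to_integer<unsigned>(b);   // explicit conversion back to a number
}
```

Here 0x0F shifted left twice is 0x3C, and OR with 0x01 gives 0x3D, so demo_byte() returns 0x3D.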
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
