TinyScreen 8bit color issues

zet23t · 5 · 5933

zet23t

I have noticed in a graphics rendering test that color values are "missing": while I send distinct byte values to the screen, some of them produce colors that are indistinguishable from those of other values (I should be able to recognize the difference in those cases; I am quite sensitive to that).

So I started experimenting with the drawRect function. This is not entirely representative of my case, since the code where I noticed the missing colors only uses the writeBuffer function.
However, if you try out the demo here: https://codebender.cc/sketch:105460, you should see that something is wrong. The screen displays two colored areas. The lower one uses the full r-g-b variant of the API (16bit colors) to draw a rect with a red-green color gradient. The result is super smooth - as expected.
The upper area of the screen displays the "color palette" that results from the 256 possible values of the 8bit color format. Each color entry is drawn as a 4x4 rectangle, with a black dot marking that the rectangle is solidly filled. Looking at those 256 colors, it's quite obvious that there are only 3 distinct red levels (there should be 4) and 5 green levels instead of 8. More or less, the palette entries seem to come in groups of 2 that look exactly the same.
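
For reference, drawing such a palette boils down to a loop along these lines (an illustrative sketch only, not the exact code from the linked demo; the drawRect(x, y, w, h, fill, color8) overload and the display object name are my assumptions here):

// illustrative sketch - one 4x4 swatch per 8bit color value
for (int c = 0; c < 256; c++) {
   int x = (c % 24) * 4; // 24 swatches of 4x4 pixels fit across the 96 pixel wide screen
   int y = (c / 24) * 4;
   display.drawRect(x, y, 4, 4, 1, (uint8_t)c); // assumed signature: x, y, w, h, fill, 8bit color
}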

I am not sure whether there's a bug in the tinyscreen.h library or something else, but at the very least something isn't working as expected.

On a side note: why does the format actually use rgb 2-3-3 and not rgb 3-3-2? Usually the red and green channels get a higher resolution than the blue channel, because the human eye is less sensitive to blue (as shown here: http://en.wikipedia.org/wiki/Color_vision#/media/File:Eyesensitivity.svg).
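
To illustrate the difference between the two layouts (plain bit packing for comparison, not code from the library):

// current layout, rgb 2-3-3: red in bits 0-1, green in bits 2-4, blue in bits 5-7
uint8_t pack233(uint8_t r2, uint8_t g3, uint8_t b3) { return r2 | (g3 << 2) | (b3 << 5); }
// what I would have expected, rgb 3-3-2: red in bits 0-2, green in bits 3-5, blue in bits 6-7
uint8_t pack332(uint8_t r3, uint8_t g3, uint8_t b2) { return r3 | (g3 << 3) | (b2 << 6); }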


zet23t

Taking the code from the tinyscreen.cpp file and translating it to JavaScript (which behaves analogously):

function rgb(color)
{
   var r=(color&0x03)<<4;//two bits
   var g=(color&0x1C)<<1;//three bits
   var b=(color&0xE0)>>2;//three bits
   if(r&0x10)r|=0x0F;//carry lsb
   if(g&0x08)g|=0x07;//carry lsb
   if(b&0x08)b|=0x07;//carry lsb
   console.log(r,g,b);
}


Running it in JavaScript reveals the following mapping:

c = 0 => r = 0, g = 0, b = 0
c = 1 => r = 31, g = 0, b = 0
c = 2 => r = 32, g = 0, b = 0
c = 3 => r = 63, g = 0, b = 0

This is a bit unfortunate, since the difference between 31 and 32 is practically unnoticeable.

A better approach (which I think is also a bit cheaper for the processor) is this:

function rgb(color)
{
   var r=(color&0x03);//two bits
   var g=(color>>2)&7;//three bits
   var b= color>>5;//three bits
   r|=r<<2|r<<4;//repeat the 2 bits until all 6 bits are filled
   g|=g<<3;//repeat the 3 bits
   b|=b<<3;
   console.log(r,g,b);
}


Basically, the bits just get repeated until all 6 bits are filled. This results in the following mapping:

1 => 21 0 0
2 => 42 0 0
3 => 63 0 0
4 => 0 9 0
5 => 21 9 0
6 => 42 9 0
7 => 63 9 0
8 => 0 18 0
9 => 21 18 0
10 => 42 18 0
11 => 63 18 0
12 => 0 27 0
13 => 21 27 0
14 => 42 27 0
15 => 63 27 0
16 => 0 36 0
17 => 21 36 0
18 => 42 36 0
19 => 63 36 0
20 => 0 45 0
21 => 21 45 0
22 => 42 45 0
23 => 63 45 0
24 => 0 54 0
25 => 21 54 0
26 => 42 54 0
27 => 63 54 0
28 => 0 63 0
29 => 21 63 0
30 => 42 63 0
31 => 63 63 0

So the gaps are now constant. Next I want to see how the writeBuffer function works and whether it can be tweaked too.
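
In C++, the same expansion would look roughly like this (a sketch of what I'd change in the library's 8bit color handling; I haven't yet checked how it plugs into the rest of TinyScreen.cpp):

uint8_t r = (color & 0x03);      // two red bits
uint8_t g = (color >> 2) & 0x07; // three green bits
uint8_t b =  color >> 5;         // three blue bits
r |= r << 2 | r << 4;            // repeat to 6 bits: 0, 21, 42, 63
g |= g << 3;                     // repeat to 6 bits: 0, 9, 18, ..., 63
b |= b << 3;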


zet23t

I did some more research and experiments.
Here's what I've found:
  • The drawRect method with the 8bit color value is indeed flawed - I took some photos comparing the original and the modified version; in the attached picture you can see that the modified version produces more color tones.
  • I made a (slow) implementation that supports writing a 16bit buffer, with the 16bit values calculated from the 8bit values. Interestingly, this 16bit version, using the same conversion algorithm as my modified drawRect, yields pretty much exactly the same result as the 8bit writeBuffer method. There seem to be some slight differences, but I am not sure that isn't just psychological ;). I still can't distinguish the colors that I can tell apart on my notebook display - I assume they simply end up too similar on the TinyScreen. So the 8bit writeBuffer color conversion works as it should, and there seems to be no problem in the TinyScreen receiver function. I have also attached a photo comparing the 8bit and the 16bit version.

The modified writeBuffer method for 16bit support looks like this:

if (_bitDepth) {
      for(int j=0;j<count;j++) {
        uint8_t color = buffer[j];
        uint8_t r=(color&0x03);//two bits
        uint8_t g=(color>>2)&7;//three bits
        uint8_t b= color>>5;//three bits
        r = r<<1 | r<<3;//expand 2 bits to 5
        g|= g<<3;//expand 3 bits to 6
        b = b<<2 | b>>1;//expand 3 bits to 5
        // r: 5bits, g: 6bits, b: 5bits
        uint16_t temp = r|g<<5|b<<11;
        SPDR=temp>>8;//send high byte
        while (!(SPSR & _BV(SPIF)));//wait for the SPI transfer to finish
        SPDR=temp&0xff;//send low byte
        while (!(SPSR & _BV(SPIF)));
      }
    } else {
      //unchanged path: stream the buffer bytes as-is
      uint8_t temp;
      SPDR = buffer[0];
      for(int j=1;j<count;j++){
        temp=buffer[j];
        while (!(SPSR & _BV(SPIF)));
        SPDR=temp;
      }
      while (!(SPSR & _BV(SPIF)));
    }


The 16bit color format is 5-6-5, as I've learned by playing around. So I could build a 3-3-2 8bit format for comparison... however, I'd need to make some more changes to the existing code to try that out. As for using the 16bit writeBuffer function: I guess it could be implemented a bit more efficiently, but even then it would simply be too demanding for games that need 20fps+. The current implementation runs at about 6fps... maybe 10 could be reached with some optimizations, but I don't think I'll go that way.
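
If I do try the 3-3-2 variant, the conversion step in the loop above would change to something like this (an untested sketch, keeping red in the low bits and the same 16bit layout as above):

uint8_t r3 =  color & 0x07;        // three red bits
uint8_t g3 = (color >> 3) & 0x07;  // three green bits
uint8_t b2 =  color >> 6;          // two blue bits
uint8_t r5 = (r3 << 2) | (r3 >> 1);             // 3 -> 5 bits
uint8_t g6 = (g3 << 3) | g3;                    // 3 -> 6 bits
uint8_t b5 = (b2 << 3) | (b2 << 1) | (b2 >> 1); // 2 -> 5 bits: 0, 10, 21, 31
uint16_t temp = r5 | (g6 << 5) | (b5 << 11);    // same bit layout as before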


Ben Rose

  • Administrator
Hi again - thank you very much for catching the 8 bit conversion bug. I'll try to get the fix into our GitHub as soon as possible, although codebender won't update until next week.

I picked RGB 2-3-3 in error as well - I thought sensitivity was greater for blue than for red. I think we're stuck with this for now, but I'll keep it in mind.

Your 16 bit writeBuffer implementation is definitely giving you about the same results as 8 bit: you're sending padded 8 bit data to the screen, which loads the screen's 16 bit graphics RAM the same way the onboard screen controller would load 8 bit data. If you want 16 bit color, you need to send the full 5-6-5 color data. The only example we have is the BMP display demo at https://codebender.cc/sketch:86070. Hopefully the difference is clear there: the 24 bit BMP is converted to 16 or 8 bit, and those bytes are then written with writeBuffer.
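
The core of that conversion is just shrinking each 24 bit pixel down to 5-6-5 and writing two bytes per pixel, along these lines (a rough outline rather than the exact demo code; the variable names are placeholders, and the bit layout and byte order mirror your snippet above):

// r8, g8, b8 are the 8 bit channels read from the BMP
uint8_t r5 = r8 >> 3;   // 8 -> 5 bits
uint8_t g6 = g8 >> 2;   // 8 -> 6 bits
uint8_t b5 = b8 >> 3;   // 8 -> 5 bits
uint16_t pixel = r5 | (g6 << 5) | (b5 << 11);
lineBuffer[x * 2]     = pixel >> 8;   // high byte first
lineBuffer[x * 2 + 1] = pixel & 0xFF; // then push the line to the screen with writeBuffer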

Thanks,
Ben


zet23t

Thank you for your response.

Yes, the 16bit framebuffer implementation was my experiment to see whether there are differences compared to the 8bit encoding, hence the conversion from 8 bits.

I am currently not sure whether I want to support true 16bit images in the framework I am working on. The memory consumption and overhead seem too costly.
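
To put rough numbers on it (assuming full-screen image data on the 96x64 display):

// rough size comparison for one full-screen 96x64 image
const uint16_t size8  = 96 * 64;      // 6144 bytes at 8 bits per pixel
const uint16_t size16 = 96 * 64 * 2;  // 12288 bytes at 16 bits per pixel
// 16 bit doubles both the storage needed for image data and the bytes pushed over SPI per frame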



 
