In Java, int values are always 32 bits wide. ARGB encoding uses a single int value to store four components: red, green, blue and a transparency value (alpha). Each of these four components is eight bits wide, allowing for 2⁸ (256) different values from 0 to 255; together the four components fill exactly the 32 bits of an int. The intensity of red, green and blue lies on a linear scale from not used (0) to fully used (255). The alpha value ranges from 0 for fully transparent (the pixel itself cannot be seen, only what is behind it) to 255 for fully opaque (nothing from behind the pixel shines through).
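As a quick illustration of these ranges (this example is my own addition, not part of the original text), java.awt.Color takes the same four 0-255 components, and its getRGB() method returns exactly the ARGB-encoded int described next:

import java.awt.Color;

public class AlphaExample {
    public static void main(String[] args) {
        // A half-transparent pure red: red = 255, green = 0, blue = 0, alpha = 128.
        Color halfRed = new Color(255, 0, 0, 128);
        // getRGB() returns the ARGB-encoded int described below.
        System.out.printf("%08X%n", halfRed.getRGB()); // prints 80FF0000
    }
}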
Now for the actual encoding. The least significant eight bits are used by blue, the next eight bits by green, then eight bits for red, and the most significant eight bits are used by the transparency value. Here is some sample code that demonstrates how to decode and encode red, green, blue and alpha for such an int value:
// (1) decoding
int argb = ...; // comes from PixelGrabber, BufferedImage.getRGB etc.
int red = (argb >> 16) & 0xff;
int green = (argb >> 8) & 0xff;
int blue = argb & 0xff;
int alpha = (argb >> 24) & 0xff;
// (2) now modify red, green, blue and alpha as you like;
// make sure that each of the four values stays in the
// interval 0 to 255
...
// (3) and encode back to an int, e.g. to give it to MemoryImageSource or
// BufferedImage.setRGB
argb = (alpha << 24) | (red << 16) | (green << 8) | blue;
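To see the three steps working together, here is a minimal, self-contained sketch (the class and method names are my own invention for illustration) that decodes every pixel of a java.awt.image.BufferedImage, inverts the three color channels while leaving alpha untouched, and encodes the result back via setRGB:

import java.awt.image.BufferedImage;

public class ArgbDemo {

    // Inverts red, green and blue of every pixel; alpha stays untouched.
    // Assumes an image type that keeps alpha, e.g. BufferedImage.TYPE_INT_ARGB.
    static void invertColors(BufferedImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int argb = image.getRGB(x, y);                // (1) decode
                int alpha = (argb >> 24) & 0xff;
                int red   = (argb >> 16) & 0xff;
                int green = (argb >> 8) & 0xff;
                int blue  = argb & 0xff;
                red   = 255 - red;                            // (2) modify; the
                green = 255 - green;                          //     results stay
                blue  = 255 - blue;                           //     in 0 to 255
                argb = (alpha << 24) | (red << 16) | (green << 8) | blue; // (3) encode
                image.setRGB(x, y, argb);
            }
        }
    }

    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        image.setRGB(0, 0, 0x80FF0000); // half-transparent pure red
        invertColors(image);
        System.out.printf("%08X%n", image.getRGB(0, 0)); // prints 8000FFFF
    }
}

Inverting can never leave the interval 0 to 255; for modifications that can (brightening, say), clamp each component with Math.min(255, Math.max(0, value)) before encoding.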