In fact, 24- and 32-bit color are actually the same as far as the color itself goes. Both use the &HFFFFFF portion of the value for the color channels proper; 24-bit simply ignores the &HFF000000 component, while 32-bit uses it to encode the alpha-transparency of pixels. Since most everyday pictures have no transparent areas, storing 3-byte color pixels instead of 4-byte ones yields a big saving in size, especially in lossy formats like JPEG. OTOH, low-color formats produce far smaller files still and draw much, much faster than 24- or 32-bit color, but at the cost of color quality (variety).
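To make that relationship concrete, here is a minimal sketch in C (C is used only for illustration here), assuming an &HAARRGGBB channel layout, which is just one of several possible arrangements: the alpha byte sits in the &HFF000000 position, and masking it off leaves exactly the 24-bit color value.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 32-bit pixel, assumed here to be laid out as &HAARRGGBB
           (the exact channel order varies between libraries).        */
        uint32_t argb = 0x80FF8040u;

        unsigned a = (argb >> 24) & 0xFFu;  /* alpha sits in the &HFF000000 byte */
        unsigned r = (argb >> 16) & 0xFFu;
        unsigned g = (argb >>  8) & 0xFFu;
        unsigned b =  argb        & 0xFFu;

        /* "24-bit" color is the same value with the alpha byte masked off:
           the three color channels themselves are untouched.               */
        uint32_t rgb24 = argb & 0x00FFFFFFu;

        printf("A=%u R=%u G=%u B=%u, 24-bit value = 0x%06X\n",
               a, r, g, b, (unsigned)rgb24);
        return 0;
    }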
You'd be surprised how many more color formats there are in the Gdi+, OpenGL and DirectX libraries. There are dozens of them, and each one has its merits and drawbacks.
I was talking about another aspect. Windows stores the B component at &HFF0000, G at &HFF00, and R at &HFF. The corresponding Linux palette might instead have R at &HFF0000, G at &HFF00, and B at &HFF. That's why we seem to be getting inverted colors in our respective renders: the line color is given not by an RGB() function call, which would correct the channel order (the color "endianness") on each platform, but rather by a raw numeric value computed as "c = d ^ 2.1", which isn't compensated for channel order in any way.
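Here is a small illustrative C sketch of that effect, assuming the two layouts described above. The pack_bgr/pack_rgb helpers are invented for the example and are not the real RGB() of any particular runtime: composing a color channel by channel gives the same red on both layouts, while a raw &HFF0000 value means blue under one layout and red under the other.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical packing helpers for the two layouts described above.
       The names and layouts are assumptions for illustration only.       */
    static uint32_t pack_bgr(unsigned r, unsigned g, unsigned b)  /* B at &HFF0000, R at &HFF */
    {
        return ((uint32_t)b << 16) | ((uint32_t)g << 8) | (uint32_t)r;
    }

    static uint32_t pack_rgb(unsigned r, unsigned g, unsigned b)  /* R at &HFF0000, B at &HFF */
    {
        return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
    }

    int main(void)
    {
        /* Going through a packing call: pure red comes out right on both
           layouts even though the raw numbers differ.                     */
        printf("pure red packed: 0x%06X (B-G-R layout) vs 0x%06X (R-G-B layout)\n",
               (unsigned)pack_bgr(255, 0, 0), (unsigned)pack_rgb(255, 0, 0));

        /* A raw value skips that step: 0xFF0000 is blue under the first
           layout and red under the second -- the inverted-colors effect.  */
        uint32_t literal = 0xFF0000u;
        printf("raw value 0x%06X: layout 1 reads the top byte as B, layout 2 as R\n",
               (unsigned)literal);
        return 0;
    }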
I'm not implying that Linux or Windows is good or bad. I'm saying they may simply be different in this regard. The good thing is that both would store alpha-transparency at &HFF000000.