Well, just thinking out loud here, since I obviously don’t know the details of the issue you’re dealing with, but allow me to go on at some length. This is all going to be highly simplified compared to what you’re dealing with, I’m sure, but if there’s anything in here that helps even in the slightest, you’re welcome to it.
Say you wanted to be able to see the stars in the background from Earth’s orbit while looking at the Sun. The Sun has a magnitude of -26.75, and Vega, one of the brightest stars (and the reference star for the current magnitude scale), has a magnitude of 0. Because of the wonky reverse-logarithmic scale used for astronomical magnitudes (a difference of Δm magnitudes is a factor of 10^(Δm/2.5) in brightness), this means that the Sun appears about 5e10 (50 billion) times brighter than Vega.
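In code, that conversion is a one-liner; here’s a quick Python sketch (the function name is just mine):

```python
# Brightness ratio between two objects given their apparent magnitudes:
# each step of 1 magnitude is a factor of 100**(1/5) ~= 2.512.
def brightness_ratio(m_faint, m_bright):
    return 10 ** ((m_faint - m_bright) / 2.5)

print(brightness_ratio(0.0, -26.75))  # Sun vs Vega: ~5e10
```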
From Earth’s orbit, the Sun delivers ~100,000 lux (lumens per square metre; equivalent to ~150 W/m^2) in the visible band, which makes up ~11% of the Sun’s total output (the bolometric value, i.e. integrated over all wavelengths, is ~1360 W/m^2; there’s no meaningful lux figure for that, since lux only counts light the eye responds to). Dividing by the 5e10 ratio, Vega has a visible brightness of approximately 2e-6 lux (3e-9 W/m^2).
On a base 10 logarithmic scale, the lux values for these two stars end up looking like:
Sun: log10(100,000) = 5
Vega: log10(2e-6) = -5.7
That’s a pretty big range to deal with, especially given that Vega is one of the brightest stars we can see in our night skies. We have a natural upper boundary, where we can set the Sun’s (monochromatic) pixels equal to 255, but there’s no natural lower cutoff, since we want Vega, and stars fainter than it, to be visible. I don’t know how low a pixel brightness would be needed for a star to be considered “visible” on a 0-255 scale, but let’s pull a value out of thin air to act as our lower boundary: 10. We then need to decide how faint something can be and still have a pixel brightness of 10 or more.
According to Wikipedia, there are ~500 stars in the sky brighter than magnitude 4, which is also the typical visibility cutoff in smaller urban centres. Let’s make that our floor, then. This will produce the absolutely surreal experience of seeing the typical urban night sky and the Sun on screen at the same time.
Stars at m = 4 are 2 trillion times fainter than the Sun, and so have a value of ~5e-8 lux.
log10(5e-8) = -7.3
This maps 5 -> 255 and -7.3 -> 10. Using a linear mapping, we find a slope of ~20 and an intercept of ~155. This gives Vega a pixel brightness of ~40. Only four stars (other than the Sun) will appear brighter than that, with Sirius peaking somewhere in the neighbourhood of 55.
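For concreteness, here’s that fit as a little Python sketch (the constants are just the numbers worked out above):

```python
import math

LUX_SUN = 1e5     # ~100,000 lux at Earth's orbit
LUX_FLOOR = 5e-8  # a magnitude-4 star, our dimmest visible object
PIX_MAX, PIX_MIN = 255.0, 10.0

# Fit pixel = slope * log10(lux) + intercept through the two endpoints.
slope = (PIX_MAX - PIX_MIN) / (math.log10(LUX_SUN) - math.log10(LUX_FLOOR))
intercept = PIX_MAX - slope * math.log10(LUX_SUN)

def to_pixel(lux):
    return min(PIX_MAX, max(0.0, slope * math.log10(lux) + intercept))

print(slope, intercept)  # ~19.9 and ~155
print(to_pixel(2e-6))    # Vega: ~42
print(to_pixel(7.7e-6))  # Sirius (m = -1.46): ~54
```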
This seems reasonable, but what does this do for nearby solar system objects? What if, say, we had a planet at Earth’s orbit with rings comparable to Saturn’s?
The particles in Saturn’s rings have an albedo of (very roughly averaged) 0.45, so of the 100,000 lux falling on them, they would reflect 45% of it. For the sake of simplicity, let’s model the rings as a solid disk, and account for all of the empty space in them by assuming they’re ~90% empty, giving a total albedo of 0.045. This means they’re reflecting back 4500 lux. Since these are nearby, we have to worry about the inverse square law now, which means we need to pick a distance. Let’s say the rings are as large as Saturn’s, with an inner radius of ~65,000 km and an outer radius of ~135,000 km, which gives them an area of ~4.4e10 km^2. If we want the ringspan to fit within, say, 60 degrees, we would need to be ~235,000 km away from the planet.
The resulting brightness of those rings will be ~8e-14 lux arriving from each square metre of them. We now need to know how many square metres there are per pixel. Let’s assume the FoV is 60 degrees, so that the rings span the entire monitor (this now gets into the unrealistic, except maybe in multiple star systems, situation where we have front-illuminated rings with the Sun in the background, but what the hell; let’s have fun). Let’s also assume standard full HD resolution with square pixels so we can measure the pixel sizes, and that we’re looking at the ring system face-on, for simplicity. The FoV is 60 degrees horizontal, so we have 1920 pixels spanning 270,000 km. This means each pixel is 140 km * 140 km, or 1.96e10 square metres.
This means each pixel is giving off ~0.001568 lux. log10(0.001568) = -2.8.
Each pixel in the ring, then, should have a monochromatic brightness of ~100. That seems reasonable.
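The whole ring calculation condenses to a few lines; a sketch, reusing the ~19.9/~155 fit from above and the same rough inverse-square treatment:

```python
import math

SLOPE, INTERCEPT = 19.9, 155.4  # from the linear fit above
D = 2.35e8                      # distance to the planet, metres

# Lux reflected per square metre of ring, diluted over the distance.
lux_per_m2 = 0.045 * 1e5 / D**2            # ~8e-14 lux

# Pixel footprint: 1920 px across a 270,000 km ringspan.
px_side_m = 2.7e8 / 1920                   # ~140 km per pixel
lux_per_px = lux_per_m2 * px_side_m**2     # ~1.6e-3 lux

print(SLOPE * math.log10(lux_per_px) + INTERCEPT)  # ~100
```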
What about the planet itself? Let’s assume the planet is actually Neptune (because blue is pretty), with an albedo of 0.29. We’ll only consider the lit pixel values, since the geometry of this fabricated scenario is starting to become slightly Lovecraftian.
Each illuminated square metre of Neptune-at-Earth’s-orbit will reflect back 29,000 lumens (lux*m^2). At our distance of 235,000 km, each square metre will deliver unto us ~5.25e-13 lux. This means each pixel will produce ~0.01 lux. log10(0.01) = -2.
This spits out a pixel brightness of 115. This is a nearly negligible difference from the brightness of the rings, even though each square metre of the planet is reflecting back well over 6 times as much light as each square metre of the rings. This is a bit wonky, but if you look at pictures of Saturn, the rings don’t appear to be that much dimmer than the planet itself.
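Same sketch as for the rings, with only the albedo swapped:

```python
import math

SLOPE, INTERCEPT = 19.9, 155.4   # from the linear fit above
D = 2.35e8                       # metres
PX_AREA = (2.7e8 / 1920) ** 2    # ~1.96e10 m^2 per pixel, as above

# Neptune-ish albedo of 0.29: each lit square metre reflects 29,000 lumens.
lux_per_px = 0.29 * 1e5 / D**2 * PX_AREA           # ~0.01 lux
print(SLOPE * math.log10(lux_per_px) + INTERCEPT)  # ~116
```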
The contrast between the planet and the rings can also be increased by taking brighter (lower-magnitude) stars as your floor, but the logarithmic scale is inherently insensitive to these changes in the mid ranges. For instance, raising the floor from magnitude 4 to magnitude 3 only nudges the pixel-brightness difference between the planet and the rings from ~16 to ~16.5. Setting the floor at magnitude -1 (so that only Sirius is visible in the background) still only increases the difference to ~19.
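It’s quick to loop over candidate floors and watch how little the gap moves (same hard-coded numbers as before):

```python
LOG_SUN = 5.0                       # log10 of the Sun's ~100,000 lux
LOG_PLANET, LOG_RINGS = -2.0, -2.8  # per-pixel log10 lux, from above

for m_floor in (4, 3, -1):
    # log10 lux of a star m_floor + 26.75 magnitudes fainter than the Sun.
    log_floor = LOG_SUN - (m_floor + 26.75) / 2.5
    slope = (255 - 10) / (LOG_SUN - log_floor)
    print(m_floor, slope * (LOG_PLANET - LOG_RINGS))  # ~15.9, ~16.5, ~19.0
```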
A tempting knob to turn here is the base of the logarithm, but it turns out that does nothing on its own: since log_b(x) = log10(x) / log10(b), changing the base just multiplies every log value by a constant, and re-fitting the line through the same two endpoints (Sun -> 255, floor -> 10) absorbs that constant completely. Redo the arithmetic in log20 or log30 and the planet, rings, and Vega land on exactly the same pixel values as before.
So, to genuinely stretch those insensitive middle regions, you need a nonlinear remap of the normalized log value, e.g. an S-curve centred on the brightness range you care about, which buys contrast there at the cost of compressing the extremes. I don’t know how computationally expensive that would be in your renderer, though.
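Here’s a minimal sketch of that kind of remap (plain Python; the centre and steepness values are pulled out of thin air the same way the floor of 10 was, so tune both to taste):

```python
import math

LOG_SUN, LOG_FLOOR = 5.0, -7.3  # log10 lux endpoints from above
CENTRE = 0.4     # where, in normalized log space, to concentrate contrast
STEEPNESS = 6.0  # higher = more mid-range contrast, flatter extremes

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-STEEPNESS * (x - CENTRE)))

def to_pixel(lux):
    # Normalize log-brightness to 0..1, exactly as in the linear mapping.
    t = (math.log10(lux) - LOG_FLOOR) / (LOG_SUN - LOG_FLOOR)
    t = min(max(t, 0.0), 1.0)
    # Logistic S-curve, rescaled so 0 and 1 still map to 0 and 1.
    t = (sigmoid(t) - sigmoid(0.0)) / (sigmoid(1.0) - sigmoid(0.0))
    return 10 + 245 * t

print(to_pixel(1.0e-2))  # planet: ~137 (was ~116)
print(to_pixel(1.6e-3))  # rings:  ~111 (was ~100)
print(to_pixel(2.0e-6))  # Vega:   ~33  (was ~42)
```

That pulls the planet-rings gap from ~16 up to ~27 while the endpoints stay pinned; the cost is that faint background stars like Vega get squeezed a little, so the centre needs picking with the starfield in mind.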