Radiative equilibrium is one of the foundation stones of radiative forcing theory. Yet it is not a law of physics, only a rather archaic and untested supposition, found in climatology textbooks alone.
It’s best to regard radiant energy simply as a finite power source; indeed, radiant flux is expressed in watts per square meter. An object is said to “cool” by radiating, which would seem to imply that restricting its radiation will make it grow hotter and hotter. That is the very premise of greenhouse theory, of course: that by disturbing outgoing radiance, any magnitude of temperature gain is possible. But this is easy to test.
Confine a lightbulb inside an infrared barrier (such as a spherical mirror) and feed it one watt of electrical power. After a while, will it be generating the heat of a thousand-watt bulb? No.
When its temperature is consistent with the input, further heating stops.
It’s like water seeking its own level. Lacking any means to radiate to its surroundings, the lightbulb merely gets as hot as one watt of power can make it, which is not much hotter than it would be in the open. Otherwise, we’d be able to generate incredible temperatures very cheaply: just confine, wait, and release.
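The thought experiment can be sketched numerically with a simple energy-balance model: input power minus radiated power drives the temperature, and heating stops when the two balance. All parameters below (emissivity, bulb radius, heat capacity, ambient temperature) are illustrative assumptions of mine, not measurements of any real bulb.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_run(p_in=1.0, t_env=293.0, radius=0.03,
                    emissivity=0.9, heat_capacity=50.0,
                    dt=1.0, steps=50_000):
    """Integrate dT/dt = (P_in - P_out) / C until the bulb settles.

    P_out is net radiation to the surroundings at t_env, so heating
    stops once the temperature is consistent with the input power.
    """
    area = 4 * math.pi * radius ** 2  # surface area of the bulb, m^2
    temp = t_env
    for _ in range(steps):
        p_out = emissivity * SIGMA * area * (temp ** 4 - t_env ** 4)
        temp += (p_in - p_out) / heat_capacity * dt
    return temp

def equilibrium_analytic(p_in=1.0, t_env=293.0, radius=0.03,
                         emissivity=0.9):
    """Closed-form balance: P_in = eps * sigma * A * (T^4 - T_env^4)."""
    area = 4 * math.pi * radius ** 2
    return (t_env ** 4 + p_in / (emissivity * SIGMA * area)) ** 0.25

print(equilibrium_run())       # settles near the analytic value
print(equilibrium_analytic())  # roughly 310 K for these assumed numbers
```

With these assumed parameters the bulb levels off around 310 K, only some 17 K above the 293 K surroundings: one watt in, one watt out, and no runaway.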
Conservation of energy: it’s not just a phrase. The theory of radiative equilibrium arose early in the 19th century, before the laws of thermodynamics were understood.
From The Analytical Theory of Heat:
The radiation of the sun in which the planet is incessantly plunged, penetrates the air, the earth, and the waters; its elements are divided, change direction in every way, and, penetrating the mass of the globe, would raise its temperature more and more, if the heat acquired were not exactly balanced by that which escapes in rays from all points of the surface and expands through the sky. — Joseph Fourier (1768-1830)