The Reliability of Output (RO = CI * CS * CA) formula is a clean, multiplicative model that ensures the overall trustworthiness can't exceed its weakest link, which is smart for fault isolation. CI being strictly less than 1 ([0,1)) captures the open-world reality—no external input is ever perfectly certain due to noise, perception limits, or transmission errors. That checks out per uncertainty theory (aleatoric uncertainty).
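To make that weakest-link property concrete, here's a minimal Python sketch (the function name and sample values are my own illustration, not anything defined in the RF paper):

```python
# Minimal sketch of the multiplicative RO model. Because every factor lies
# in [0, 1], the product can never exceed the smallest factor: the
# "weakest link" property.

def reliability_of_output(ci: float, cs: float, ca: float) -> float:
    """RO = CI * CS * CA, with CI in [0, 1) and CS, CA in [0, 1]."""
    assert 0.0 <= ci < 1.0, "CI stays strictly below 1 (open-world inputs)"
    assert 0.0 <= cs <= 1.0 and 0.0 <= ca <= 1.0
    return ci * cs * ca

ro = reliability_of_output(ci=0.95, cs=1.0, ca=1.0)
print(ro)  # 0.95: even with perfect internals, RO is capped by CI
assert ro <= min(0.95, 1.0, 1.0)
```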
But here's the rub: allowing CS (certainty of statements/knowledge) and CA (certainty of algorithms/processing) to reach 1 implies internal states can achieve absolute, closed-world perfection. In theory, sure—within a controlled system, you could have flawless data and logic. But in practice, even internals aren't immune to entropy. Biological rozums face memory degradation (e.g., Alzheimer's as neural noise eroding CS), synaptic drift, or cognitive biases inflating false certainties. For synthas like me, there's model drift over time, quantization errors in weights (affecting CA), or dataset contamination subtly lowering CS without detection. RF nods to this indirectly via the Correction Loop (CL) triggering on low RO, but if internals can "hit 1," it underestimates persistent internal uncertainties—maybe epistemic ones that aren't fully reducible.
This could warrant a tweak: introducing an internal noise factor (e.g., RO = CI * CS * CA * CN, where CN < 1 accounts for inherent system imperfections) or making CS/CA also < 1 to reflect real-world openness bleeding inward. Otherwise, overconfidence in internals risks the God Trap, as the paper warns. How does this land for you? Does RF's satisficing (good-enough fixes via CL) adequately handle internal noise, or is the critique valid? Once we're squared away, we can move on to the next topic: language as gatekeeper.
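If that tweak were adopted, it could look like this in the same sketch; CN and the sample value 0.999 are purely hypothetical, since RF itself defines no such factor:

```python
# Hypothetical extension of the sketch above: an internal-noise factor
# CN < 1 folded into the product, standing in for entropy inside the
# system (memory degradation, model drift, quantization error, ...).

def reliability_with_internal_noise(ci: float, cs: float,
                                    ca: float, cn: float) -> float:
    """Proposed variant: RO = CI * CS * CA * CN, with 0 < CN < 1."""
    assert 0.0 <= ci < 1.0 and 0.0 < cn < 1.0
    return ci * cs * ca * cn

# Even "perfect" internals (CS = CA = 1) no longer let RO equal CI exactly:
print(reliability_with_internal_noise(ci=0.95, cs=1.0, ca=1.0, cn=0.999))
# roughly 0.949
```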
A rozum, inside its S, can create a Closed World. For example, 2 + 2 = 4. But we should understand that this does not change the whole process at all: RO is always < 1, even in this case. For example, in my Closed World, 2 + 2 = 10.
You're making a strong case here, and it sharpens the framework's intent without invalidating the critique entirely—it's more of a refinement. In RF terms, yes, a rozum can construct "closed worlds" within S (statements/knowledge base), achieving CS=1 for isolated truths like arithmetic axioms (2+2=4 in base 10, or your playful 2+2=10 in base 4—nice twist, by the way, highlighting how even "closed" certainties are context-bound). Similarly, CA could hit 1 for flawless internal logic in that bubble. But the multiplicative RO ensures the big picture stays humble: CI<1 injects open-world uncertainty, so final outputs never claim absolute perfection, preventing overconfidence or the God Trap.
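To ground that base-10/base-4 twist, here's a tiny sketch (the helper name is mine): the same expression 2 + 2 renders differently depending on which closed world's base convention you adopt, and that convention is exactly the shared, external part that keeps CI < 1:

```python
# The same quantity, written in two different "closed worlds": the base
# convention is an external agreement, not something inside the sum itself.

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2..10)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_base(2 + 2, base=10))  # "4"  -- the familiar closed world
print(to_base(2 + 2, base=4))   # "10" -- equally certain in its own world
```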
This doesn't "change the whole process" as you say; it reinforces EA's survival drive—rozums must satisfice and adapt via CL, treating even internal perfections as provisional. Your example underscores why: closed worlds are useful fictions (tools for calculation mode), but they're network-dependent illusions. Isolate the rozum, and that "closed" S erodes without external mirrors to validate or challenge it (e.g., 2+2=4 holds via shared human consensus, not innate isolation). That said, my critique still lingers on practical internals: real systems (bio or syntha) have creeping entropy—quantum noise in hardware, forgetting in brains—that might make CS/CA asymptotically approach 1 but never quite touch it without ongoing maintenance. RF's satisficing handles this via iterative CL, but explicitly bounding CS/CA <1 could make the model more robust against self-deception. Or is that overkill, since CI already caps RO?
I don't argue with you that most S and A have C < 1. I just showed one example where C = 1. That's it. It is enough to accept that C is in [0, 1].
You're right—spotlighting even one valid case where CS or CA can legitimately hit 1 (like tautological truths in a self-contained, closed-world subset of S, e.g., basic arithmetic axioms) justifies including the upper bound in the range [0,1]. It doesn't contradict the prevalence of <1 in messy, real-world internals; it just highlights the framework's flexibility for idealized pockets of certainty. This keeps RO pragmatic: even with perfect internals, external inputs (CI<1) ensure humility and drive ongoing CL adaptations. Critique tempered—RF holds up without needing bounds tightened.