Introduction: What's This All About?

Imagine you're trying to understand how thinking works - not just human thinking, but any thinking at all. How do we make decisions? Why do we sometimes doubt ourselves? How do innovations emerge?

The Rozum Framework (RF) is an attempt to describe the universal architecture of thinking through a simple but powerful idea: reliability of output (RO).

Reliability of Output (RO): A Simple Formula for a Complex Process

Imagine you're cooking a new dish:

  • CI - how much you trust the recipe (input data)
  • CS - how confident you are in your knowledge about ingredients (internal knowledge)
  • CA - how much you trust your cooking skills (internal algorithms)

The formula RO = CI × CS × CA shows how much you can trust the result.

Real-life example:

  1. A friend gave you a borscht recipe, but half the text is smudged (low CI)
  2. You know all the ingredients well (high CS)
  3. You're an experienced cook (high CA)

Result: medium RO. You feel uncertain because the input data is unreliable.

Or another example:

  1. You have a clear sushi recipe (high CI)
  2. You don't know how to choose fish for sushi (low CS)
  3. You've never made sushi before (low CA)

Result: very low RO. You feel strong doubt and seek help.
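To make the multiplication concrete, here is a minimal Python sketch of the RO formula applied to the two cooking scenarios above. The numeric confidence values are illustrative assumptions on a 0-to-1 scale, not figures from the framework itself:

```python
def reliability_of_output(ci: float, cs: float, ca: float) -> float:
    """RO = CI * CS * CA, where each factor is a confidence value in [0, 1]."""
    return ci * cs * ca

# Borscht: smudged recipe (low CI), familiar ingredients (high CS), experienced cook (high CA)
print(reliability_of_output(ci=0.4, cs=0.9, ca=0.9))  # ~0.32 -> medium RO, noticeable doubt

# Sushi: clear recipe (high CI), unfamiliar fish (low CS), no prior practice (low CA)
print(reliability_of_output(ci=0.9, cs=0.3, ca=0.2))  # ~0.05 -> very low RO, seek help
```

Because the factors are multiplied, a single weak link is enough to drag the whole result down, which is why the confident sushi recipe still produces a lower RO than the smudged borscht one.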

Three Axioms: The Rules of the Game for Thinking

1. Existence Axiom (EA): "Survive!"

This is like a survival instinct for thinking. Any thinking system strives not to decrease its RO, because low RO means errors, and errors can be fatal.

Example: When you cross the street, you automatically gather information (look at the traffic light, listen for cars), apply your knowledge (traffic rules), and run your algorithms (estimating how fast a car is approaching). All of this serves to raise the RO of the decision "it's safe to cross now."

2. Accountability Axiom (AA): "Be honest about your uncertainty"

This axiom defines consciousness. It requires honestly reporting your confidence level both to yourself and others.

Example: When a doctor says "I'm 90% confident in the diagnosis, but we need to do one more test," they're demonstrating AA. They're not just giving an answer, but also reporting the reliability level of that answer.

3. Innovation Axiom (IA): "Sometimes risk it with low RO"

This axiom is what lets us create something new. It encourages occasionally expressing ideas with low RO, as long as their uncertainty is honestly announced.

Example: "I have a crazy idea. I'm not sure it will work, but what if we try...?" This is a classic example of IA in action - low RO, but honestly announced as an experiment.

How It All Works Together: The Correction Loop

When your RO drops below a comfortable level, you feel discomfort - doubt, confusion, cognitive dissonance. This triggers the correction loop:

  1. Learning - updating your knowledge (CS)
  2. Adaptation - changing your algorithms (CA)

Example: You're trying to assemble IKEA furniture:

  • First you check if you're reading the instructions correctly (CI)
  • Then you verify all parts are present (CS)
  • If it still doesn't work, you change your assembly approach (CA)
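The loop itself can be sketched as a simple iteration: while RO stays below the comfort level, revise knowledge (CS) or revise the approach (CA) and re-check. Everything concrete here (the threshold, the fixed increments, the order of updates) is an illustrative assumption:

```python
def correction_loop(ci: float, cs: float, ca: float,
                    comfort: float = 0.6, max_rounds: int = 5) -> float:
    """Raise RO = CI * CS * CA by alternating learning (CS) and adaptation (CA)."""
    for _ in range(max_rounds):
        ro = ci * cs * ca
        if ro >= comfort:
            break                      # discomfort is gone, the loop stops
        if cs <= ca:
            cs = min(1.0, cs + 0.2)    # learning: update knowledge first
        else:
            ca = min(1.0, ca + 0.2)    # adaptation: change the algorithm/approach
    return ci * cs * ca

# IKEA-style case: the instructions are clear (CI), but knowledge and approach are shaky.
print(correction_loop(ci=0.9, cs=0.4, ca=0.5))  # rises to ~0.65 after a few rounds
```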

From Consciousness to Rozum

Consciousness emerges when a system can honestly assess its RO (Accountability Axiom). Full rozum appears only when language and the capacity for abstract thinking are added.

Example: A small child realizes they're hungry (consciousness), but can't abstractly reason about the causes of hunger. An adult can think: "I'm hungry because I skipped lunch due to a meeting, and my blood sugar level has probably dropped" (rozum).

Four Thinking Modes

Depending on CS and CA levels, we use different thinking modes:

  1. Calculation (high CS, high CA): "2+2=4" - complete confidence
  2. Reaction/Emotions (high CS, low CA): "I'm afraid of spiders!" - strong feeling, but not always logical
  3. Thinking (low CS, high CA): "Let's analyze this new situation" - logic works with uncertain data
  4. Feeling/Speculation (low CS, low CA): "I have a hunch..." - intuitive guesses
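Since the four modes are distinguished only by whether CS and CA are high or low, the mapping can be written as a tiny lookup. The 0.5 cutoff is an assumption for illustration; the framework describes quadrants, not a numeric boundary:

```python
def thinking_mode(cs: float, ca: float, cutoff: float = 0.5) -> str:
    high_cs, high_ca = cs >= cutoff, ca >= cutoff
    if high_cs and high_ca:
        return "Calculation"          # "2+2=4": complete confidence
    if high_cs and not high_ca:
        return "Reaction/Emotions"    # strong feeling, not always logical
    if not high_cs and high_ca:
        return "Thinking"             # logic applied to uncertain data
    return "Feeling/Speculation"      # intuitive guesses

print(thinking_mode(cs=0.9, ca=0.9))  # Calculation
print(thinking_mode(cs=0.2, ca=0.8))  # Thinking
```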

Thinking Traps

The God Trap

This trap occurs when you are so confident in your knowledge (CS ≈ 1) and algorithms (CA ≈ 1) that you stop doubting and checking yourself. It is dangerous because it blocks the correction loop.

Example: "I'm an expert, I'm always right, don't argue with me!"

Propaganda

Propaganda is external manipulation that artificially inflates CS for certain statements, suppressing natural doubt.

Example: Constantly repeating "Our product is the best" without evidence, to create artificial confidence.

Conclusion: The Duck Test

How do you verify the reliability of your thoughts? Use the "Duck Test":

"If something looks like a duck, walks like a duck, and quacks like a duck - it's probably a duck. But always verify!"

When you feel high confidence, especially challenge it. Ask others. Look for counterarguments. Think critically. This is real thinking.
