Saturday, October 9, 2021

Organizing, rearranging, and fitting

If you look at the unconscious part of the mind as a data repository full of knowledge stored in a question/answer format... what happens when the conscious part of the mind decides something qualifies as new knowledge and treats it as such?

That's where the idea of a steady state processing engine fits in.  Each new piece of knowledge would trigger a scanning analysis of all directly or indirectly affected questions/answers.  It's where continuous unconscious maintenance is performed.  It's so continuous and even that it's mistaken for the 90% of the brain we supposedly don't use.  When there's nothing new to scan or fit in, an error check can fill the gap while keeping the continuous nature uninterrupted.

When a potential piece of knowledge is found there needs to be a spot where it fits.  The scanning analysis identifies where in the mind, or on the edge of it, it belongs.
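One way to picture that steady state engine is a loop over a question/answer store: each new piece of knowledge queues the directly and indirectly linked entries for re-scanning, and when the queue is empty the cycle falls back to error checking.  A minimal sketch, with the store, the link relation, and the placeholder methods all invented here for illustration:

```python
# Toy sketch of a steady-state Q/A maintenance loop.  The dict-based
# store, the "links" relation, and the rescan/error_check placeholders
# are assumptions for illustration, not a real design.

class SteadyStateEngine:
    def __init__(self):
        self.store = {}    # question -> answer
        self.links = {}    # question -> set of related questions
        self.pending = []  # questions queued for re-scanning

    def add_knowledge(self, question, answer):
        """New knowledge triggers a scan of all affected entries."""
        self.store[question] = answer
        self.pending.extend(self.affected(question))

    def affected(self, question):
        """Directly and indirectly linked questions (transitive closure)."""
        seen, frontier = set(), [question]
        while frontier:
            q = frontier.pop()
            for linked in self.links.get(q, set()):
                if linked not in seen:
                    seen.add(linked)
                    frontier.append(linked)
        return seen

    def step(self):
        """One tick of the continuous loop: fit new knowledge, else error-check."""
        if self.pending:
            self.rescan(self.pending.pop())
        else:
            self.error_check()

    def rescan(self, question):
        pass  # placeholder: re-fit this entry against the new knowledge

    def error_check(self):
        pass  # placeholder: idle-time consistency check
```

The point of the `step` structure is that the loop never stops: it scans when there's something to fit, and error-checks when there isn't.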



It's an understatement to say information on every possible subject exists on the internet.

Search engines scan through it all to link pages together.  They even expose some of that as usable knowledge.

Thursday, January 7, 2021

Learning without Neural Networks

The conscious part of the mind favors learning by comparison.  It doesn't need much in the way of resources to do this - 10% seems like more than enough.

The unconscious part of the mind seems captured accurately enough with neural networks and training.  For the time being let's pretend it uses the other 90%.

From the research done over the past 40 years it's reasonable to claim that a vague awareness of many things is well represented by the neural network.  That accounts for half of a more mind-like AI and leaves some questions about the rest.

Assuming someone hasn't already solved this problem and answered all questions...

How could we design it?

Well... how about along the lines of consciously comparing 2 things?  Just simple comparison-based learning, where aggregations of awareness of a few things are compared and related.

Terms - the reference used for the descriptions below is Theory
  • Thinking Space - for the conscious sub mind, it's where detailed awareness of a few things can be compared and related.  Awareness of new knowledge/answers can be gained through modeling, correlation, extrapolation, and projection.  All are implicit but can be made explicit by confirmation.
  • Vague awareness - the simplest unit of awareness.  A vague awareness of something simple/finite/discrete.
  • Unconscious awareness - a vague awareness of many things that are simple/finite/discrete.
  • Deep awareness - a deep awareness of a few things that are complex/large/abstract.
  • Conscious awareness - the aggregation of unconscious awareness.  The 7 +- 2 rule is probably overstated here.  I'd guess it's more like 5 +- 1, due to the number of permutations increasing with each additional awareness added to the comparison.
Only 2 things are ever compared at once.  More than 2 things can be compared, but it happens in pairs.  Should a finer-grained look be required, detailed awareness is broken down into a hierarchy of varying amounts of vague awareness until the specific level of detail is reached.  That's done with each thing being compared, until both are reduced to the correct (and same) level of awareness detail.
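The pairwise rule works out to C(n, 2) = n*(n-1)/2 comparisons for n things, which is one reason the count of things held in a comparison stays small.  A toy sketch in Python (the function name is illustrative):

```python
from itertools import combinations

def pair_comparisons(things):
    """More than 2 things can be compared, but only ever in pairs."""
    return list(combinations(things, 2))

# The pair count grows quadratically, C(n, 2) = n*(n-1)/2, which is
# why each additional awareness makes the comparison more expensive.
assert len(pair_comparisons("ABCDE")) == 10    # 5 things -> 10 pairs
assert len(pair_comparisons("ABCDEFG")) == 21  # 7 things -> 21 pairs
```

Going from 5 things to 7 roughly doubles the pairwork, which fits the guess that the practical limit sits nearer 5 than 7.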

We model what we know.

When it's necessary to go beyond what a model supports we:
  • Learn new knowledge
  • Create new knowledge
The conscious sub mind uses Reason.
The unconscious sub mind uses Reflex.

Inference - a process of inferring something (knowledge) through reasoning
  • Deductive reasoning (Deduction) - in this case taking 2 knowledge/answers as statements and reasoning a conclusion
  • Inductive reasoning (Induction) - in this case knowledge/answers are synthesized to a general truth for a conclusion
  • Abductive reasoning (Abduction) - inference to the best available conclusion.  does not verify the conclusion, at least not initially
Creating new knowledge/answers through Inference (Inductive reasoning):
  • If thing A has detail X with some property
  • And thing B has detail X
  • Then thing B's detail X should have that property
  • If the property is confirmed (and sometimes even when it isn't) it can become a new knowledge/answer
    • When a new knowledge/answer is correct it can be built on to create others
    • Should a new knowledge/answer be wrong, all others based on it become wrong
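Those If/And/Then steps can be sketched directly.  Here thing A and thing B are plain dicts mapping details to properties, and the inferred knowledge stays provisional until confirmed - the names and structure are assumptions for illustration, not a worked-out design:

```python
def infer_by_analogy(thing_a, thing_b, detail):
    """If thing A has detail X with some property, and thing B has
    detail X, infer (provisionally) that B's detail X has that property."""
    prop = thing_a.get(detail)
    if prop is not None and detail in thing_b:
        # Unconfirmed until checked; per the text, a wrong answer would
        # also invalidate anything later built on top of it.
        return {"detail": detail, "property": prop, "confirmed": False}
    return None  # no shared detail, nothing to infer

# Toy example: a robin's wings are used for flight; a sparrow has
# wings, so provisionally infer the same property for the sparrow.
robin = {"wings": "used for flight"}
sparrow = {"wings": "unexamined"}
guess = infer_by_analogy(robin, sparrow, "wings")
```

Abduction would look similar but would pick the best available conclusion among several candidates rather than projecting a single shared detail.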
New knowledge/answers can also be created by identifying a correlation:
    Once a correlation is identified, an extrapolation or interpolation can be created.
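The correlation-then-extrapolation step can be sketched with an ordinary least-squares line: once x and y correlate, the fitted line predicts between the observed points (interpolation) or beyond them (extrapolation).  A plain-Python sketch on made-up toy data:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for a simple linear correlation."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]  # perfectly correlated toy data
slope, intercept = fit_line(xs, ys)

def predict(x):
    return slope * x + intercept

# predict(2.5) interpolates within the data; predict(10) extrapolates
# beyond it - and as the linked lesson warns, the further out you go,
# the less the correlation guarantees.
```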

Informative link:  Lesson 12.8 - Extrapolation on PennState Eberly College of Science

Thursday, December 17, 2020

Exaggerated significance and paranoia

   What's to stop an artificial mind from assigning the wrong significance (weighting) to a piece of data or relationship between 2 pieces of data?

   If the value assigned is lower than warranted then something is lost.  If a larger value than warranted is assigned then we can have the equivalent of a perception problem in a human mind.

   Larger values also can lead to the equivalent of seeing things that aren't there... paranoia?

   How could an artificial mind manage significance issues like our minds do?

   Perspective.

   Easy to say, not so easy to quantitatively define.

   But definitions aside... the simplest, most mind-like way to design perspective in (and already tried and true) is to have a mind made up of 2 sub minds.

   Each contributes one-half of the total perception to the mind, balances out the other, and can be inherently self-correcting.
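One way to read that balancing is as two independent significance estimates checked against each other: a large disagreement flags the weighting for review instead of letting either sub mind's exaggerated value stand.  A hedged toy sketch - the averaging rule and the tolerance threshold are invented here, not anything the post specifies:

```python
def balanced_significance(reason_weight, reflex_weight, tolerance=0.3):
    """Combine the two sub minds' weightings of the same piece of data.
    Returns (combined_weight, needs_review): the combined value is the
    mean of the two, and a disagreement beyond `tolerance` flags the
    weighting for self-correction."""
    combined = (reason_weight + reflex_weight) / 2
    needs_review = abs(reason_weight - reflex_weight) > tolerance
    return combined, needs_review

# Agreement: both sub minds weight the data similarly -> accepted as-is
combined, flagged = balanced_significance(0.5, 0.6)
assert not flagged

# Exaggerated significance in one sub mind -> flagged for correction,
# the toy equivalent of catching "seeing things that aren't there"
combined, flagged = balanced_significance(0.9, 0.2)
assert flagged
```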

Monday, December 14, 2020

It's an artificial mind

 Artificial Intelligence = Artificial Mind

We shouldn't make any mistake about what the goal is.

It's to create the equivalent of a human mind without the biological basis it requires.

That's dangerous.  Not just for us but for the mind we're trying to create.

There's no shortage of problems that an artificial mind could experience.

You only have to look at the mental illnesses that humans have identified so far to see the dangers.

What happens if we don't pay them close attention?

We could end up creating a monster at the same time a mind is created.

There are already enough monsters in the world, so let's not unintentionally add another one.
