Thursday, December 17, 2020

Exaggerated significance and paranoia

   What's to stop an artificial mind from assigning the wrong significance (weighting) to a piece of data, or to a relationship between two pieces of data?

   If the value assigned is lower than warranted, then something is lost.  If a larger value is assigned than warranted, then we can have the equivalent of a perception problem in a human mind.

   Larger values can also lead to the equivalent of seeing things that aren't there... paranoia?
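As a toy sketch of that failure mode (the names, numbers, and threshold here are illustrative assumptions, not anything from the entry), an inflated significance weight can push pure noise over a detection threshold:

```python
# Hypothetical sketch: a detector fires when weighted evidence crosses a
# threshold. An exaggerated significance weight can push pure noise over
# the line -- "seeing" a pattern that isn't there.

THRESHOLD = 0.5

def detects_pattern(evidence: float, significance: float) -> bool:
    """Return True when weighted evidence crosses the detection threshold."""
    return evidence * significance > THRESHOLD

noise = 0.2  # a weak, meaningless signal

print(detects_pattern(noise, significance=1.0))  # calibrated weight: False
print(detects_pattern(noise, significance=4.0))  # exaggerated weight: True
```

Under-weighting fails the opposite way: real evidence never crosses the threshold, and something is lost.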

   How could an artificial mind manage significance issues like our minds do?

   Perspective.

   Easy to say, not so easy to quantitatively define.

   But definitions aside... the simplest, most mind-like way to design perspective in (and one already tried and true) is to have a mind made up of two sub-minds.

   Each contributes one half of the total perception, balances out the other, and can be inherently self-correcting.
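A minimal sketch of that balancing act, assuming the simplest possible combination rule (everything here is an illustrative assumption): two sub-minds each assign their own significance weight to the same piece of data, and the combined mind averages them, damping any one exaggerated estimate.

```python
# Hypothetical sketch: two independent "sub-minds" each estimate the
# significance of the same piece of data; the whole mind blends the two,
# so one sub-mind's exaggeration is pulled back toward balance.

def combine(weight_a: float, weight_b: float) -> float:
    """Blend two sub-minds' significance estimates by averaging.

    Averaging halves the error introduced when one sub-mind
    over- or under-weights a piece of data."""
    return (weight_a + weight_b) / 2

# One sub-mind exaggerates (0.9); the other stays calibrated (0.3).
combined = combine(0.9, 0.3)
print(combined)  # roughly 0.6 -- the exaggeration is damped
```

Averaging is only the crudest form of this; the point is that neither half's perception stands uncorrected.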

Monday, December 14, 2020

It's an artificial mind

 Artificial Intelligence = Artificial Mind

We shouldn't make any mistake about what the goal is.

It's to create the equivalent of a human mind without the biological basis it requires.

That's dangerous.  Not just for us but for the mind we're trying to create.

There's no shortage of problems that an artificial mind could experience.

You only have to look at the mental illnesses that humans have identified so far to see the dangers.

What happens if we don't pay them close attention?

We could end up creating a monster at the same time a mind is created.

There are already enough monsters in the world, so let's not unintentionally add another one.

Organizing, rearranging, and fitting

If you look at the unconscious part of the mind as a data repository full of knowledge stored in a question/answer format...  What happens w...