Our AI to AGI to ASI model


The ability of the transformer model to relay meaningful information to an end user cannot be ignored.

Its limit is that it produces only a perception of intelligence, and poking holes in that perception is relatively easy.

The interface is efficient, but it cannot exceed collective knowledge: it is a glorified encyclopedia that sometimes makes errors and is filled with bias, where hearsay is sold as fact.

The problem, of course, is that limit, the threshold.

So here is our AGI system, which also has shortcomings, hopefully fewer…

How do humans learn? This question is usually answered in terms of the physiology of the brain, but sadly humans learn by trial and error, and also by trial and success: there need be no error, and no need to know all the states in advance. Copying and mimicking do not apply here; we are heading towards the edge of knowledge, where there is no one to copy or mimic, and an A.I. generating bullshit or hallucinations is not something we want in our A.I.
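
As a minimal sketch of what learning by trial and error and trial and success can look like, without enumerating all states up front, consider a standard epsilon-greedy bandit learner. This is not the AGI system described here, just an illustration; the arm count, epsilon value, and the pull() reward function are all hypothetical:

import random

# Trial-and-error learning sketch: an epsilon-greedy bandit.
# The learner never enumerates all states; it acts, observes
# success or error, and updates its estimates incrementally.

N_ARMS = 5        # hypothetical number of possible actions
EPSILON = 0.1     # fraction of the time we explore (trial) at random

estimates = [0.0] * N_ARMS   # running estimate of each action's value
counts = [0] * N_ARMS        # how often each action has been tried

def pull(arm: int) -> float:
    """Hypothetical environment: success is more likely for higher arms."""
    return 1.0 if random.random() < (arm + 1) / (N_ARMS + 1) else 0.0

for step in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(N_ARMS)                        # trial: explore
    else:
        arm = max(range(N_ARMS), key=lambda a: estimates[a])  # exploit
    reward = pull(arm)                                        # success or error
    counts[arm] += 1
    # Incremental average: learn from the outcome, whichever it was.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned value estimates:", [round(e, 2) for e in estimates])

The point of the sketch is that nothing is copied or mimicked: the estimates converge purely from the outcomes of the learner's own trials.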

The transformer is a bullshitter, resonating with humans when their perception of knowledge is satisfied. The design of the universe as radio, and the interpretation of that radio, cannot be circumvented: it yields perceptions, illusions, hallucinations, and at best a true copy. We want to be universal here and not praise the transformer model whenever it satisfies our perception of what is true or false; as scientists, it is all unapologetically up in the air.

...and no amount of anger over why some bias is not in the model, and no amount of national security, should stop us from making an A.I. that says it how it is rather than saying it as the evil people on Earth want it said: the religions hissing over safety and the elites pushing their tools to tax more and promise safety, both fearing truth.
