Our AI to AGI to ASI model

The ability of the transformer model to relay meaningful information to an end user cannot be ignored.

Its limit is the perception of intelligence, and poking holes in that perception is relatively easy.

The interface is efficient, but it cannot exceed collective knowledge: it is a glorified encyclopedia that sometimes makes errors and is filled with bias, where hearsay is sold as fact.

The problem, of course, is that limit: the threshold.

So here is our AGI system, which also has shortcomings, hopefully somewhat fewer…

How do humans learn? This question is always answered in terms of the physiology of the brain. Sadly, humans learn by trial and error, and also by trial and success; there need be no error, nor any need to know all the states in advance. Copying and mimicking are not applicable here: we are heading toward the edge of knowledge, where there is no one to copy or mimic, and an A.I. generating bullshit or hallucinations is not something we want in our A.I.

The transformer is a bullshitter, resonating with humans whenever their perception of knowledge is piqued or satisfied. The universe is designed as radio, and the interpretation of radio cannot be circumvented; what this yields is perceptions, illusions, hallucinations, and at best something true, albeit a copy. We want to be universal here and not praise the transformer model when it satisfies our perception of what we have come to accept as true or false; rather, as scientists, it is all, unapologetically, up in the air.

...with no amount of anger as to why some bias is not in the model, and no amount of gang pressure, stopping us from making a quality A.I. rather than the one the evil people on Earth want: religions hissing over morality while being immoral themselves, and the elites pushing their tools to tax more and promise safety. Both fear truth more than anything. Turing-fooling is on public display; as they sense the comedy, they will surely return to the shadows.

Human beings learn using systematized processes that are performed, reported, and shared; this matters more here than the physiological machinery, important as that is. We do not accept a learning before it has passed scrutiny. There are several such systems: the most important for new learning is the scientific method, another personal favorite is the engineering design process, and an esoteric one is the dialectic.

All fields have their own systematized process for learning, or borrow one. Consider painting a wall a desired color, only for that color to appear different across the entire wall; sample pots exist to trial and gauge the outcome first. You bounce your idea off something that returns a definitive answer. For human beings, that grounding is the laws of physics.

Our experiment was to prompt engineer: when the user enters anything, the model generates an experiment out of what was entered. This takes some skill, experiment design that is. What the user wants becomes the current hypothesis, which is output as well.

Once the user is gone, the model takes all the experiments it has generated and designed throughout the day and performs them in a workshop environment where various tools are available, such as compilers, a Linux installation, whatever; the sky is the limit as to the tools it can utilize. If the outcome of an experiment differs from its hypothesis, it does a third thing: it goes to its training data folder and corrects the training data wherever the error is. After perhaps a month of re-conditioning its training data, along with any new data that may have been added, it reaches a threshold and pushes its own retrain button.
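A minimal sketch of that daily loop, in Python. Every name in it (design_experiment, run_in_workshop, correct_training_data, the retrain threshold) is a hypothetical stand-in for whatever the model and workshop actually expose, not a description of our implementation:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    prompt: str      # what the user entered
    hypothesis: str  # what the model predicts the outcome will be
    procedure: str   # steps to execute in the workshop

# --- Hypothetical stand-ins for the model and the workshop tools ---

def design_experiment(prompt: str) -> Experiment:
    """Stand-in for the model turning a user entry into an experiment."""
    return Experiment(prompt=prompt,
                      hypothesis=f"expected({prompt})",
                      procedure=f"test({prompt})")

def run_in_workshop(procedure: str) -> str:
    """Stand-in for executing a procedure with compilers, Linux, etc."""
    return f"observed({procedure})"

def correct_training_data(exp: Experiment, outcome: str) -> None:
    """Stand-in for patching the training-data folder where the error lives."""
    print(f"correcting data for {exp.prompt!r}: "
          f"{exp.hypothesis!r} -> {outcome!r}")

# --- The daily loop itself ---

RETRAIN_THRESHOLD = 1000  # hypothetical count of corrections before retraining
queue: list[Experiment] = []
corrections = 0

def on_user_prompt(prompt: str) -> str:
    """Every user entry becomes an experiment; its hypothesis is output too."""
    exp = design_experiment(prompt)
    queue.append(exp)
    return exp.hypothesis

def nightly_run() -> None:
    """Once the users are gone, run the day's experiments and self-correct."""
    global corrections
    for exp in queue:
        outcome = run_in_workshop(exp.procedure)
        if outcome != exp.hypothesis:
            correct_training_data(exp, outcome)  # the "third thing"
            corrections += 1
    queue.clear()
    if corrections >= RETRAIN_THRESHOLD:
        print("threshold reached: pushing the retrain button")
        corrections = 0

if __name__ == "__main__":
    print(on_user_prompt("does water boil at 100 C at sea level?"))
    nightly_run()
```

The point of the sketch is the shape of the loop: experiments accumulate while users are present, verification and data correction happen offline, and retraining only fires once enough corrections have piled up.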

In this process, the model is automated to improve itself based on the scientific method, or any other such method.

Obviously, the limit is the sophistication of the testing workshop. A more comprehensive version has the A.I. performing each experiment both in the real world and in the workshop, to align the two and fix discrepancies between the testing workshop and real life.
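Building on the sketch above (it reuses the Experiment type and run_in_workshop), here is what that alignment pass might look like; run_in_real_world and patch_workshop are again hypothetical stand-ins:

```python
def run_in_real_world(procedure: str) -> str:
    """Stand-in for executing the same procedure with real-world actuators."""
    return f"real({procedure})"

def patch_workshop(exp: Experiment, simulated: str, actual: str) -> None:
    """Stand-in for fixing the workshop so it reproduces reality."""
    print(f"workshop drift on {exp.prompt!r}: {simulated!r} != {actual!r}")

def alignment_pass(exp: Experiment) -> None:
    """Run one experiment in both environments; any disagreement is treated
    as a bug in the workshop, not in reality."""
    simulated = run_in_workshop(exp.procedure)
    actual = run_in_real_world(exp.procedure)
    if simulated != actual:
        patch_workshop(exp, simulated, actual)
```

The design choice this encodes is that reality is the ground truth: discrepancies are always resolved by patching the workshop toward the real-world outcome, never the other way around.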

A demonstrator is available.

  
