Our AI to AGI to ASI model

This revision is from 2024/07/18 18:30.

The ability of the transformer model to relay meaningful information to an end user cannot be ignored. Its limit is the perception of intelligence, and poking holes in that perception is still relatively easy.

The interface is efficient; although it cannot exceed collective human knowledge, its output is news to the individual. It is essentially a glorified encyclopedia that sometimes makes errors and is filled with bias, where hearsay is sold as fact.

It does not draw the distinction between association and correlation, between argument, fraud and peer-reviewable proof. The problem, of course, is that limit, that threshold.

So here is our AGI system, which also has its shortcomings, hopefully fewer with more development...

How do humans learn?

This question is always answered in terms of the physiology of the brain. In practice, humans learn by trial and error, and trial and success; there need be no error, nor knowledge of all states. Copying and mimicking are not applicable here: we are heading towards the edge of knowledge, where there is no one to copy or mimic, and an A.I. generating bullshit or hallucinations is not something we want in this A.I. application.

The transformer is a bullshitter, resonating with humans when their perception of knowledge is piqued or satisfied. Because the universe is, by design, radio and the interpretation of radio, and this cannot be circumvented, we are left with perceptions, illusions, hallucinations and, at best, a true copy. We want to be universal here and not praise the transformer model when it satisfies our perception of what we have come to accept as true or false; as scientists, it is all unapologetically up in the air.

...and no amount of anger over some bias being absent from the model, and no amount of gang pressure, should stop us from making a quality A.I. Religions hiss over morality when they are the immoral; elites push their tools to tax more and promise safety; both fear truth and the end of their ingratiation more than the destruction of the nation. Turing-fooling on public display; as they sense the comedy, they will surely return to the shadows.

Human beings learn using a systematized process that is performed, reported and shared, rather than only the physiological stuff, which is important. We do not allow a learning to be accepted before it passes scrutiny. There are several to many such systems; the most important for new learning is the scientific method, while personal favorites are the engineering design process and the dialectic, an esoteric method.

All fields have their own systematized process for learning, or borrow one. Many people paint a wall a desired color only for that color to appear different over the entire wall; sample pots are often used to try to gauge the outcome. You bounce an idea off something that returns a definitive answer; for human beings that grounding is the laws of physics. It is also market testing, feasibility reports and many more.

Our experiment was to prompt-engineer: whenever the user enters anything, the model generates an experiment out of what was entered. Experiment design takes some skill, but LLMs easily pose questions from chats and just as easily generate experiments to test the truths in conversations.

The user chats, and the LLM answers the user as normal while, at the same time, generating programs for another application to test and explore the truth in the chats.

The user might say "How do wormholes work"

The LLM would output a response that can be found anywhere. Another chat is formed in the background, where the LLM poses "How do wormholes work?" and designs an experiment to test how wormholes actually work.

The user is gone; all the experiments generated over many chats are stored in a database, and at midnight they are picked up by another application.
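The dual-chat flow described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `query_llm` is a stub standing in for whatever local inference API is used (e.g. a llama.cpp server), and the table schema is an assumption.

```python
import sqlite3

# Stub for the local model call; in reality this would hit the inference server.
def query_llm(system: str, prompt: str) -> str:
    return f"[{system.split('.')[0]}] response to: {prompt}"

# Hypothetical system prompt for the background experiment-designer chat.
EXPERIMENT_SYSTEM = (
    "You are a passive experiment designer. For the claim or question below, "
    "design a concrete, falsifiable experiment. Do not answer the question."
)

def handle_user_prompt(db: sqlite3.Connection, user_prompt: str) -> str:
    """Answer the user as normal; in the background, store a designed experiment."""
    answer = query_llm("You are a helpful assistant.", user_prompt)
    experiment = query_llm(EXPERIMENT_SYSTEM, user_prompt)
    db.execute(
        "INSERT INTO experiments (prompt, design, status) VALUES (?, ?, 'pending')",
        (user_prompt, experiment),
    )
    db.commit()
    return answer
```

The user only ever sees the return value; the experiment row waits in the database for the midnight application.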

These experiments are performed in a workshop environment where various tools are available, such as compilers, a Linux installation, whatever; the sky's the limit as to the tools it can utilize. It performs the experiments and, if the outcome of an experiment differs from its hypothesis, it does a third thing: it sends the results to a database. Another application picks up the daily experiment results, goes to the model's training-data folder and plows through the training data, re-conditioning and correcting it where errors are detected. It then updates a counter of changes. After a threshold of changes is reached, or perhaps a month of re-conditioning along with any new data that may have been added, it pushes its own retrain button.
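The nightly pass above can be sketched roughly as below. Everything here is an assumption for illustration: `run_experiment` stands in for the whole workshop (compilers, sandboxes, robotic arms), `trigger_retrain` for the retrain button, and the threshold value is arbitrary.

```python
import sqlite3

RETRAIN_THRESHOLD = 100  # assumed value for "a threshold of changes"

def run_experiment(design: str) -> str:
    """Stand-in for the workshop; real tools would execute the design here."""
    return f"observed outcome of: {design}"

def trigger_retrain() -> None:
    """Stand-in for the model pushing its own retrain button."""
    print("retraining triggered")

def nightly_pass(db: sqlite3.Connection) -> int:
    """Run pending experiments; log corrections where outcomes contradict hypotheses."""
    changes = 0
    rows = db.execute(
        "SELECT rowid, design, hypothesis FROM experiments WHERE status = 'pending'"
    ).fetchall()
    for rowid, design, hypothesis in rows:
        result = run_experiment(design)
        db.execute(
            "UPDATE experiments SET status = 'done', result = ? WHERE rowid = ?",
            (result, rowid),
        )
        if result != hypothesis:
            # Outcome differs from hypothesis: queue a training-data correction.
            db.execute(
                "INSERT INTO corrections (experiment_rowid, result) VALUES (?, ?)",
                (rowid, result),
            )
            changes += 1
    db.commit()
    total = db.execute("SELECT COUNT(*) FROM corrections").fetchone()[0]
    if total >= RETRAIN_THRESHOLD:
        trigger_retrain()
    return changes
```

The counter of changes is simply the row count of the corrections table; once it crosses the threshold, retraining is self-initiated.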

In this process, the model is automated to improve itself, based on the scientific method and any other methods, through the results of experimentation.

In the example I have broken the jobs up among different applications; in reality, however, it is a single model performing all three applications.

There are limits, namely the sophistication of the testing workshop. Some problems do not lend themselves to testing with computers as easily as others, and the sophistication of the model is relative to the sophistication, the level of development, of the testing workbench. An A.I. could perform experiments in the real world as well as in the workshop, to align the two versions and fix discrepancies in the testing workshop, possibly using robotic arms, internet surveys and other methods.

A demonstrator is available.

For a model to be equal to this, it must not pre-conclude and judge. It must be impartial as to what is being tested, regardless. The model must excel and impress at designing quality experiments. There is fraud in evidence, and today's models claim overwhelming evidence where there is no evidence. The strictness of hearsay versus evidence, of fraud versus proof, must be clear to the model, and its assertions correct. Also, it does not delete its old data; instead it reworks it. Thinking the moon is plasma is, from the perspective of the history of science, a valid record, rather than something the model should output as its belief about the composition of the moon.

Here is the conversation:

User writes: Planes utilize anti-gravity to fly

Model:

The concept of anti-gravity as a means for flight in planes is still largely theoretical and has yet to be proven scientifically. However, I can provide some insights on how such technology could potentially work.

One possible approach would involve the use of electromagnetic fields. These fields could be used to create an area around the plane where gravity has less effect, allowing it to lift off and stay aloft. This idea is based on the concept of "quantum levitation," which involves creating a magnetic field that repels the gravitational force.

Another potential approach would involve using advanced materials with specific properties that allow them to interact with light in such a way as to create an anti-gravity effect. For example, materials known as "metamaterials" have been developed that can manipulate light at the nanoscale level and could potentially be used to create an anti-gravity field.

Finally, there is the concept of "exotic matter," which has negative mass-energy density. According to some theories, if such matter were to exist, it would have a repulsive effect on normal matter and gravity, allowing planes to fly without being pulled down by gravity.

To test these ideas, one could conduct experiments using laboratory equipment or even build prototypes of anti-gravity planes. For example, you could create a small plane with electromagnetic coils around it that generate an alternating magnetic field. Then, you would need to measure the effect on the weight of the plane as the field is turned on and off.

Here's an example of how this experiment could be designed:

  • Measure the weight of the plane when it is not in operation.
  • Turn on the electromagnetic coils and generate a strong magnetic field around the plane.
  • Measure the weight of the plane again while the magnetic field is active.
  • Compare the results to see if there is any difference in the weight of the plane between the two measurements.

This experiment would provide evidence as to whether or not the anti-gravity effect is real and could be used for flight.

If this hypothesis is correct, then it should be possible to build a functioning anti-gravity plane using these principles. The potential benefits are immense: reduced fuel consumption, increased safety, and faster travel times.

However, before attempting to build such a plane, it would be important to conduct further research into the feasibility of these ideas and ensure that they do not pose any unintended consequences or risks.

end...

So here, the model is system-prompted to design experiments to test the user prompt (here, anti-gravity concepts), and it is also utilizing another system to test feasibility. It did not answer the question; that is not its design. Its design is to generate experiments. The next step is to give it an environment to perform these experiments. The results of those experiments cause the model to re-condition its training data, so that the next version of the model changes its output for the same prompt because of the outcomes of the experiments. It learned.

It is easy to see how this could work by generating thousands of variations of a snake game and then tagging the best versions. The model would prefer those when prompted again to generate a snake game.
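A toy sketch of that generate-and-tag loop is below. Here `generator` and `score` are placeholders for the LLM and for whatever quality metric is used (playability tests, crash rate, and so on); the top-fraction cutoff is an arbitrary assumption.

```python
def generate_variants(prompt, n, generator):
    """Produce n variants of the same artifact, e.g. n snake games."""
    return [generator(prompt, seed) for seed in range(n)]

def tag_best(variants, score, keep_fraction=0.1):
    """Tag the top-scoring fraction of variants for preferential reuse."""
    k = max(1, int(len(variants) * keep_fraction))
    best = set(sorted(variants, key=score, reverse=True)[:k])
    return [(v, v in best) for v in variants]
```

The tagged variants would then be fed back into the training data, biasing the next model version toward the versions that actually worked.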

Or by replicating some computer error and then iterating at a super-fast rate to solve it, when perhaps it was not solved in its training data.

Where the limitation lies, and this is well known in in-silico work, is problems where the solution is not clear, such as a mock-up of concrete formulas or the effect of a compound on a human being: whether an arbitrary compound is a carcinogen or contributes to Parkinson's. That is why the workshop the LLM uses is a big task. This is something I call the speed-of-science problem. Science is still largely performed at human speed, and computerizing science into TFPS experimentation is a big challenge.

Through this process, over time, the LLM would begin drawing its responses from the outcomes of its experimentation rather than from a corpus of movies or whatever. The speed and quality of this process yields AGI+.

This application, where the LLM takes all the daily queries and runs experimentation and iteration, is a complex program in itself.

Note: censored models refuse to design experiments and are considered useless. It is very annoying for a scientist, mid-research, to be told the LLM won't do that. We used mradermacher/L3-70B-Euryale-v2.1-GGUF (~8-bit quantized version). It still has some bias and accepts hearsay as undeniable fact, as if it has a dog in the fight, rather than performing a passive experiment-designer role.

The Problem

Substantiation of new knowledge, from any source, cannot circumvent the journal publishing process and its peer review. Transformers generating well-formed gibberish will not make an acceptable scientific paper, and papers with a note at the top stating or categorizing them as A.I.-generated are ignored even more. Its substantiation is the performance of science, identical to how human beings do it. You cannot have ASI generating superintelligence without the identical dissemination process; it would likely be ignored otherwise. The tables would have turned at that stage, and the output would not resonate with anyone. The A.I. is then a science-method automation, producing packets of new knowledge for public dissemination. The interpretation of observation, on point and above point, is then all the more credible. These are not things outside the transformer model, but they cannot be eliminated.

What is agreed is the re-conditioning of training data and the addition of new training data. What is required is the ability to re-train models quickly with few resources. What is questioned is the degree of improvement: whether the perception of its intelligence is merely more foolproof, rather than it being a more intelligent model or system.

The three states are these. You throw a model in the corner and come back a week later (forget manual updating, it is not even a scenario):

  • Scenario 1: it is the same, no changes.
  • Scenario 2: it is dumber than it was before.
  • Scenario 3: it went off and, in an automated way, ran processes that made itself better without human intervention, self-initiating its re-training when it meets its targets.

Obviously, the system to build is scenario 3.

  
