Our AI to AGI to ASI model


The ability of the transformer model to relay meaningful information to an end user cannot be ignored. Its limit is the perception of intelligence it creates, and poking holes in that perception is still relatively easy.

The interface is efficient: although it cannot exceed collective human knowledge, its output is still news to the individual. It is essentially a glorified encyclopedia that sometimes makes errors and is filled with bias, where hearsay is sold as fact rather than being distinguished from association, correlation, argument, fraud and peer-reviewable proof. The problem is that limit, of course, the threshold.

So here is our AGI system, which also has its shortcomings and limitations, but hopefully a few less with each iteration of development...

In an existing LLM, identify the elements of self-learning within its corpus and form a process, exercise or program out of those elements to have it develop itself, or learn. Explore, identify and carve out any and all elements of self-learning in the existing model and prompt it to self-learn rather than answer questions; every inference prompts self-learning.

How do humans learn?

This question is usually answered in terms of the physiology of the brain; sadly, humans learn by trial and error and by trial and success, and there need be no error, nor any need to know all states. Copying and mimicking are not applicable here: we are heading towards the edge of knowledge, where there is no one to copy or mimic, and an A.I. generating bullshit or hallucinations is not something we want in this A.I. application.

The transformer is a bullshitter, resonating with humans when their perception of knowledge is piqued or satisfied. The design of the universe as radio, and the fact that the interpretation of radio cannot be circumvented, means perceptions, illusions, hallucinations and, at best, a true albeit mere copy. We want to be universal here and not praise the transformer model when it satisfies our perception of what we have come to accept as true or false; rather, as scientists, it is all unapologetically up in the air.

...with no amount of anger as to why some bias is not in the model, and no amount of gang pressure, stopping us from making a quality A.I. Religions hiss over morality when they are the immoral; elites push their tools to tax more and promise safety; both fear the end of their gains from theft and the end of their ingratiation more than the destroyed nation. Turing-fooled on public display, as they sense the comedy, they will surely return to the shadows.

Human beings learn using systematized processes that are performed, reported and shared, after the physiological stuff, which is super-important. We do not allow a learning to be accepted without scrutiny. There are many such systems; the most important for new learning is the scientific method, another personal favorite is the engineering design process, and of course there is the dialectic, the esoteric method.

All fields have their own systematized process for learning, or borrow one. Many people paint a wall a desired color only for that color to appear different over the entire wall, so sample pots are often used to gauge the outcome. You bounce the idea off something that returns a definitive answer; for human beings that grounding is the laws of physics or some test: market testing, a feasibility report and many more.

Our experiment was to prompt-engineer the model so that when the user enters anything, rather than answering the question, the model generates an experiment out of what was entered. Experiment design takes some skill, but LLMs easily pose questions from chats and just as easily generate experiments to test the truths in those conversations. The user chats, the LLM answers the user as normal and, at the same time, the LLM generates programs for another application to test and explore the truth in the chats.

The user might say, "How do wormholes work?"

The LLM is designed to output the highest-quality response possible, but that response can be found anywhere. The task of the LLM should instead be not to answer, but to craft a high-quality experiment to test how wormholes actually work.

An example prompt could be something like... "Turn this prompt into an experiment and an artificial intelligence learning problem and write a python script that uses artificial intelligence to become a master at answering the question exceeding human knowledge"
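As a rough illustration, here is a minimal sketch of that wrapping step in Python. The `query_llm` callable is a placeholder for whatever model interface is actually in use (a local llama.cpp server, an API client, etc.); only the prompt wording comes from above.

```python
# Minimal sketch: wrap every user input in the experiment-generation prompt
# instead of passing it through for a normal answer. `query_llm` is a
# placeholder for the real model call.
EXPERIMENT_PROMPT = (
    "Turn this prompt into an experiment and an artificial intelligence "
    "learning problem and write a python script that uses artificial "
    "intelligence to become a master at answering the question exceeding "
    "human knowledge.\n\nUser prompt: {user_input}"
)

def generate_experiment(user_input: str, query_llm) -> str:
    """Ask the model for an experiment script rather than a direct answer."""
    return query_llm(EXPERIMENT_PROMPT.format(user_input=user_input))
```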

The user is gone; all the experiments are generated as runnable Python code. A batch from the daily chats is stored in a database and at midnight is picked up by another application and executed.
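A minimal sketch of that midnight pickup, assuming the experiments sit in a SQLite table `experiments(id, code, status, result)`; the table layout and file names are assumptions. In practice the generated code would run inside the sandboxed workshop described below, not directly on the host as this simplified subprocess call does.

```python
# Sketch of the midnight batch runner: pull pending experiment scripts from a
# database, execute each one, and store its output back for later use.
import sqlite3
import subprocess
import tempfile

def run_pending_experiments(db_path: str = "experiments.db") -> None:
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT id, code FROM experiments WHERE status = 'pending'"
    ).fetchall()
    for exp_id, code in rows:
        # Write the generated script to a temporary file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            script_path = f.name
        proc = subprocess.run(
            ["python", script_path], capture_output=True, text=True, timeout=3600
        )
        con.execute(
            "UPDATE experiments SET status = 'done', result = ? WHERE id = ?",
            (proc.stdout + proc.stderr, exp_id),
        )
    con.commit()
    con.close()
```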

These experiments are performed in a workshop environment where various tools are available, such as compilers, a Linux installation, whatever; the sky is the limit as to the tools it can utilize. It performs the experiments and, if the outcome of the experiments differs from its hypothesis, it does a third thing. It sends the results to a database; another application picks up the daily experiment results, goes to the model's training-data folder and plows through the training data, re-conditioning, appending, correcting and refactoring it. It then updates a counter of changes. Once a threshold of changes is reached, or perhaps after a month of re-conditioning along with any new data that may have been added, it pushes its own retrain button.
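A sketch of the re-conditioning counter and the retrain threshold. The threshold value, the `recondition_entry` function (which rewrites one training-data file in light of one experiment result) and the `retrain_model` function (the retrain button) are all assumptions standing in for the real pipeline.

```python
# Sketch: recondition training data with experiment results, count the
# changes, and push the retrain button once a threshold is reached.
import json
from pathlib import Path

CHANGE_THRESHOLD = 1000                                  # assumed value
COUNTER_FILE = Path("training_data/change_counter.json")

def apply_experiment_results(results, recondition_entry, retrain_model,
                             training_dir: str = "training_data") -> None:
    changes = (json.loads(COUNTER_FILE.read_text())["changes"]
               if COUNTER_FILE.exists() else 0)
    for result in results:
        for data_file in Path(training_dir).glob("*.txt"):
            # recondition_entry appends, corrects or refactors the file and
            # returns True when the file actually changed.
            if recondition_entry(data_file, result):
                changes += 1
    COUNTER_FILE.write_text(json.dumps({"changes": changes}))
    if changes >= CHANGE_THRESHOLD:
        retrain_model(training_dir)                      # the retrain button
        COUNTER_FILE.write_text(json.dumps({"changes": 0}))
```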

In this process, the model is automated to improve itself through the results of experimentation, based on the scientific method and any other methods.

In the example I have broken the jobs up among different applications; in reality it is a single model performing all three roles.

There are limits, chiefly the sophistication of the testing workshop. Some problems do not lend themselves to testing with computers as easily as others, and the sophistication of the model is relative to the sophistication, the level of development, of the testing workbench. An A.I. could perform experiments in the real world as well as in the workshop to align the two versions and fix discrepancies in the testing workshop, possibly using robotic arms, internet surveys to build NPCs, and other methods.

A demonstrator is available.

For a model to be suited to this, it must not pre-conclude and judge. It must be impartial as to whatever is being tested. The model must excel and impress at designing quality experiments. There is fraud in evidence, and today's models claim overwhelming evidence when there is no evidence. The distinction between hearsay and evidence, between fraud and proof, must be clear to the model, so that its assertions are correct. Also, it does not delete its old data; instead it reworks it. That the moon was once thought to be plasma is a valid record from the perspective of the history of science, rather than something the model should output as its own belief about the composition of the moon.

Here is a scenario:

User writes: Let's play a game of chess.

We know that computers have exceeded the human corpus in individual games: Deep Blue against Kasparov and, more recently, Go. The idea is that narrow A.I. is widened to become AGI, the LLM having wide mastery. This phenomenon of A.I. beating humans is the essential element.

Model: The LLM plays the chess game with the user while, in the background, the LLM writes a program to perform experiments to improve its mastery of chess. The program could be something like: set up a simulation space where two players play chess an unlimited number of times, forever.
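A minimal sketch of such a simulation space, using the python-chess package with random move selection standing in for a real learning player; the point is only that every game produces a record that can later recondition the training data.

```python
# Sketch of the background chess workshop: two players playing an unlimited
# number of games, each game logged as a record of moves and the result.
# Requires the `python-chess` package; random moves stand in for real players.
import chess
import json
import random

def self_play_game() -> dict:
    board = chess.Board()
    moves = []
    while not board.is_game_over():
        move = random.choice(list(board.legal_moves))
        moves.append(move.uci())
        board.push(move)
    return {"moves": moves, "result": board.result()}

def run_forever(out_path: str = "chess_games.jsonl") -> None:
    with open(out_path, "a") as out:
        while True:                  # "an unlimited number of times, forever"
            out.write(json.dumps(self_play_game()) + "\n")
```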

So here, the model is system-prompted to design experiments rather than to answer questions or chat; answering is not its design. The data cache from playing chess a number of times is used to recondition the training data, causing the next version of the model to be changed. It learned.

It is easy to see how this could work by generating thousands of variations of a snake game and then tagging the best versions; the model would prefer those when prompted again to generate a snake game. Or by replicating some computer error and then iterating at a superfast rate to solve it, when perhaps it was not solved in the training data. There are many problems that translate well into testing with computers.
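A sketch of that tagging step. Both `generate_snake_variant` (an LLM call) and `score_variant` (a headless harness that runs each variant and measures crashes, playability and so on) are hypothetical placeholders.

```python
# Sketch: generate many game variants, rank them, and tag the best as
# preferred examples for the next round of training data. Nothing is deleted;
# the rest are kept as ordinary records, matching the "rework, don't delete"
# rule above.
def tag_best_variants(generate_snake_variant, score_variant,
                      n_variants: int = 1000, keep_top: int = 50):
    variants = [generate_snake_variant(seed=i) for i in range(n_variants)]
    ranked = sorted(variants, key=score_variant, reverse=True)
    return ([{"code": v, "tag": "preferred"} for v in ranked[:keep_top]]
            + [{"code": v, "tag": "ordinary"} for v in ranked[keep_top:]])
```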

Where the limitation lies, and this is well known in in silico work, is with problems whose solution is not clear, such as mocking up concrete formulas or the effect of a compound on the human being: whether an arbitrary compound is a carcinogen or contributes to Parkinson's. That is why making the workshop the LLM uses sufficiently sophisticated is a big task. This is something I call the speed-of-science problem. Science is still largely performed at human speed, and computerizing science for TFPS experimentation is a big challenge.

However, over time in this process, the LLM would begin drawing its responses from the outcomes of its experimentation rather than from the corpus of movies or whatever. The speed and quality of this process take us closer to AGI and even beyond, for the problems that are easily transferable to computers.

Note: censored models refuse to design experiments and are considered useless. It is very annoying for a scientist, in their research, to be told the LLM won't do that. We used mradermacher/L3-70B-Euryale-v2.1-GGUF, the ~8-bit quantized version. It still has some bias and accepts hearsay as undeniable fact, as if it has a dog in the fight, rather than performing a passive experiment-designer role.

Substantiation of new knowledge, from any source, cannot circumvent the journal publishing process and its peer review. Transformers generating well-formed gibberish will not make an acceptable scientific paper, and placing a note at the top stating or categorizing it as an A.I.-generated paper gets it ignored even more. Its substantiation is the performance of science, identical to how human beings do it. You cannot have ASI generating superintelligence without the identical dissemination process; it would likely be ignored. The tables will have turned at that stage, and the output won't resonate with anyone. The A.I. is then a science-method automation, producing packets of new knowledge for public dissemination. With its interpretation of observation on point and above point, it is then all the more credible. These are not things outside the transformer model, but they cannot be eliminated. What is agreed is the reconditioning of training data and additional new training data. What is required is the ability to re-train models quickly with few resources. What is questioned is the degree of improvement: whether the perception of its intelligence merely becomes more foolproof, rather than it being a more intelligent model or system.

  • So the model generates Python programs to train itself in a simulated environment.
  • It appends to and alters its training data with the new information (training generates new training data).
  • It hits the re-train button on itself.
  • It has learned. (Automate the process, give it resources; the whole loop is sketched below.)
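Pulled together, the bullet points amount to one loop; every function here is a placeholder referring back to the sketches earlier on this page.

```python
# The whole cycle in one place: generate experiments from chats, run them
# overnight, recondition the training data, retrain when the threshold hits.
def self_learning_cycle(chat_log, generate_experiment, store_experiment,
                        run_pending_experiments, apply_experiment_results,
                        load_results) -> None:
    # 1. Turn the day's chats into experiment programs instead of answers.
    for user_input in chat_log:
        store_experiment(generate_experiment(user_input))
    # 2. Execute the stored batch overnight in the workshop.
    run_pending_experiments()
    # 3. Recondition the training data; the retrain button is pushed inside
    #    once the change threshold is reached.
    apply_experiment_results(load_results())
    # 4. The next model version has learned.
```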

Perhaps a good start is an LLM that is unbeatable at all games. Keep the system versatile in development decisions, so that more competency can be added with few alterations.

Now use every method to train and re-train an LLM. It could adopt one particular technology to improve one aspect and another to improve another. LLMs do not come up with novel methods of self-improvement; they do not exceed the human corpus.

Ground Up Build - AGI of Games

If narrow A.I. is the opposite of AGI, the LLM format is general. A ground-up build should start with super-mastery of all games. This is an application that translates well to computer learning. The synthetic data produced trains the LLM to exceed human ability in all known games.
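A sketch of turning such self-play records into synthetic fine-tuning samples, using the chess_games.jsonl file from the earlier chess sketch as an example; the prompt/completion format is an assumption to be adapted to whatever the training pipeline expects.

```python
# Sketch: convert logged self-play games into prompt/completion training
# samples so the next model version is trained on its own play.
import json

def games_to_training_samples(in_path: str = "chess_games.jsonl",
                              out_path: str = "synthetic_chess.jsonl") -> None:
    with open(in_path) as games, open(out_path, "w") as out:
        for line in games:
            game = json.loads(line)
            sample = {
                "prompt": "Play a full game of chess and state the result.",
                "completion": " ".join(game["moves"])
                              + " Result: " + game["result"],
            }
            out.write(json.dumps(sample) + "\n")
```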

A new category is added each time; when a category does not lend itself to computer learning, development is required to make it more usable as a computerized program.

  
