Our AI to AGI to ASI model


The ability of the transformer model to relay meaningful information to an end user cannot be ignored. Its limit is the perception of intelligence, and poking holes in that perception is still relatively easy. The interface is efficient and convenient, the transform eerily nuanced, and of course there is the information, though observably it cannot exceed collective human knowledge: it generally assists, and many times outperforms, the individual. It is essentially a glorified encyclopedia that makes errors, is filled with bias, and is challenged by the distinction between hearsay and fact, associations and correlations. The problem is the limit, of course, the threshold.

Building LLMs is chefs and cookbooks. Here is our AGI system, which also has its shortcomings and limitations, but hopefully a little less with each iteration of development.

There are 3 ways to go here.

Fascia many narrow AIs into an AGI: focus on narrow AI, and when each specific competency exceeds human ability, add it to a fascia model where all the narrow competencies, specializations, experts are bound together, eventually resulting in an AGI. Deep Blue, AlphaGo, and recently AlphaProof and AlphaGeometry are examples of specializations worked on until they meet and exceed human capacity. The fascia is either a smart-router LLM base which activates hundreds of narrow LLMs in an inference, or alternatively all the training data put together into a single combined model. This is not an ensemble, hierarchical ensemble or agents as a strategy to yield improvement, but rather experts, where each specialization has exceeded human capacity to become eligible for inclusion.
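
To make the router variant concrete, here is a minimal sketch. The registry, the route() function and the expert stand-ins are all illustrative assumptions; in the real system the router would itself be an LLM and each expert a model that has already exceeded human ability in its niche.

    from typing import Callable, Dict

    # Hypothetical registry of narrow experts, keyed by specialization.
    EXPERTS: Dict[str, Callable[[str], str]] = {
        "chess": lambda q: "chess engine line: ...",
        "geometry": lambda q: "AlphaGeometry-style proof: ...",
    }

    def route(query: str) -> str:
        """Stand-in for the smart-router LLM that picks a specialization."""
        for name in EXPERTS:
            if name in query.lower():
                return name
        return "general"

    def fascia_inference(query: str) -> str:
        expert = EXPERTS.get(route(query))
        if expert is None:
            return "base LLM answer: ..."  # no expert has qualified yet
        return expert(query)

    print(fascia_inference("Prove this geometry lemma"))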

AlphaProof and AlphaGeometry achieved a silver medal at the International Mathematical Olympiad, yet the field of AI and all the current models are very far from AGI. Manufacturing each individual expert to be beyond human in competency, universal and versatile, requires the best minds, is expensive and takes years.

While DeepMind works on real machine competencies, another approach is engineering low-level models and higher-level models. For instance, identify intelligence elements within a model and develop those abilities. Rather than mathematics, it might be "the reasoner" as described in the STaR method, or determining the quality of a response through a self-judgement ability, and more of those kinds of elements, conceptual abilities. These elements are like unrefined ore present in every LLM. Identify those elements of self-learning within its corpus, such as high-level boolean reasoning, being great at 20 questions or crossword puzzles, forging a problem into a game, and focus on developing those for a system-wide improvement. We do not know which high-level elements exist in the LLM that, if developed, have far-reaching improvements like the reasoner, so there could be a periodic table of intelligence elements already in the LLM. While developing the low-level model, we can simultaneously develop the higher levels.
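
As a rough illustration of how a "reasoner" element could be developed, here is a hedged sketch of a STaR-style bootstrap loop: sample a rationale and answer, keep only the rationales that led to correct answers, and fine-tune on the kept set. generate() and finetune() are placeholders for real model calls, not an actual training implementation.

    import random

    def generate(question: str) -> tuple:
        """Placeholder: sample a (rationale, answer) pair from the model."""
        answer = random.choice(["42", "41"])
        return "step-by-step reasoning ...", answer

    def finetune(examples: list) -> None:
        """Placeholder: re-train the model on the kept rationales."""
        print("fine-tuning on", len(examples), "bootstrapped examples")

    dataset = [{"q": "What is 6 * 7?", "gold": "42"}]

    kept = []
    for item in dataset:
        rationale, answer = generate(item["q"])
        if answer == item["gold"]:  # keep only rationales that led to success
            kept.append({"q": item["q"], "rationale": rationale, "a": answer})

    finetune(kept)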

Essentially, it is difficult to stay up to date with all the novel ways researchers come up with to get some more bang out of the model, and those methods are all acceptable to incorporate. Yet we are firm believers in self-learning AI. AI is used to improve AI, OI improves OI, neuromorphics improves neuromorphics, and crossed, each building the other. Papers on bootstrapping, self-learning and so on get primary interest. At some stage you have to think in that way and move on from humans building AI. How is the AI going to improve itself? Keep asking the AI to build itself and make itself smarter, then outfit it to do so. Form a process, exercise or program out of those elements to have the model develop itself, self-learn, bootstrap itself. Inference triggers these self-learning exercises rather than merely providing language operations. Let's elaborate...

How do humans learn, and how do we develop self-learning?

This question is usually answered in terms of the physiology of the brain, and the neural network might indeed be the bottleneck of the transformer model. But humans learn by trial and error, trial and success, and testing; they do not integrate deep informational connections or put together essential tidbits to output amazing hypotheses or hallucinate correctly. Copying, mimicking, observing is not applicable here, as there is no one to copy or mimic: to exceed human ability is to move the model away from the human corpus. Like Deep Blue or AlphaGo, and unlike current LLMs, the ability has to exceed the human corpus. What the LLM generates is really about what humans expect and accept, resonating with humans when their perception of knowledge is perked and satisfied. We instead want to be universal here and not praise the transformer model when it satisfies our perception of what we have come to accept or believe as true or false; rather, as scientists, it is all unapologetically up in the air, and no amount of anger as to why some bias is not in the model, or gang pressure, should ruin quality A.I.

Transformers were first tested on language translation. The strategy of replacing word for word in a regular computer program does not incorporate the semantics of a language, and the translation is lost or of low quality. A neural network is employed so it can build statistical associations with language semantics. But think about it for a moment: a language translation is really identical to posing a question and getting an answer; the question is a sentence in English, the answer is the sentence in German, and the quality is the grade. Q&A or filling in a missing word are essentially the same. The mappings are derived entirely from the human corpus, and for an LLM to excel, a percentage of its mappings and statistical associations must not originate in the human corpus but instead plug into a system that can potentially exceed the human corpus, as was illustrated by Deep Blue and AlphaGo. Everything that has that potential is sourced to generate synthetic data to train the LLM. If no amount of training data yields an LLM that beats Deep Blue or the grandmasters, then the architecture is the problem...

Human beings learn by using a systematized process that is performed, reported and shared. Human beings do not allow a learning to be accepted without concurrence. There are many such systems; the most important for new learning is the scientific method, another personal favorite is the engineering design process, and of course there is the esoteric dialectic. This is how humans learn, after the system of physiology, as a practice in the real world.

All fields either have their own systematized process for learning or borrow one. Experiments are performed to produce findings.

Our experiment was to prompt engineer: when the user enters anything, rather than only answering the question, the model generates an experiment out of what was entered. Experiment design takes skill, but LLMs easily pose questions from chats and just as easily generate experiments to test truths in conversations. The user chats, the LLM answers the user as normal, while at the same time the LLM generates programs for another application to test and explore the truth in the chats.

The user might say "How do wormholes work?"

The LLM is designed to output the highest quality response possible, but the LLM does not know how wormholes work; it is deriving from the human corpus, and its response can be found somewhere on the Internet as a webpage or a summary of multiple pages. The difference here is that instead of only answering questions, our LLM crafts high quality experiments to test how wormholes work. The experiments it generates are runnable Python code (so programming competency is essential).
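
A minimal sketch of this dual-track inference, under stated assumptions: llm() stands in for the actual model call, and the SQLite queue schema is hypothetical.

    import sqlite3

    def llm(prompt: str) -> str:
        """Placeholder for an inference call to the model."""
        if prompt.startswith("Design"):
            return "# runnable python experiment ..."
        return "a normal answer drawn from the corpus ..."

    db = sqlite3.connect("experiments.db")
    db.execute("CREATE TABLE IF NOT EXISTS queue (chat TEXT, hypothesis TEXT, code TEXT)")

    def handle(chat: str) -> str:
        answer = llm(chat)  # the normal reply; it doubles as the hypothesis
        code = llm("Design a runnable Python experiment to test: " + chat)
        db.execute("INSERT INTO queue VALUES (?, ?, ?)", (chat, answer, code))
        db.commit()
        return answer  # the user only ever sees the normal reply

    print(handle("How do wormholes work?"))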

An example prompt could be something like... "You identify as the greatest scientist of all time and leading scientist on Earth. Use the scientific method to design and perform experiments that result in you learning. Pose the experiment as Python code to be executed on a computer, and use any means that would fit the experiments to yield an answer, such as a form of AI, or running a compiler, or installing a Linux, and so on. The output should be in the format of training data that can be used to re-train an LLM."

The user is gone. All the experiments generated are runnable on the command line, as they are Python code; a batch from the daily chats is stored in a database and at midnight picked up by another application and executed. These experiments are self-contained for the moment, while a general workshop environment is proposed for a later time, where various tools are available, such as compilers, Linux installations, virtual environments, whatever, invoked by the script rather than having to generate a Linux distro on the fly and install it to test some FORTRAN code (although that would be impressive). The application performs the experiments, and if the outcome of an experiment differs from its hypothesis (the answer it was going to give to the user), it does a third thing: it sends the results to a database where another application collects the daily experiment results (synthetic training data), goes to the model's training data folder and plows through the training data, re-conditioning, appending, correcting and refactoring. It then updates a counter of changes, so that after a threshold of changes is tripped, or perhaps after a month of re-conditioning along with any new data that may have been added, it pushes its own retrain button.
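
A hedged sketch of that midnight batch job follows. The database schema, file paths, threshold value and retrain.sh script are all assumptions for illustration; only the flow (run, compare, bank, count, retrain) comes from the description above.

    import json
    import os
    import sqlite3
    import subprocess
    import tempfile

    RETRAIN_THRESHOLD = 1000  # assumed value; "or perhaps a month" in the text

    def run_experiment(code: str) -> str:
        """Execute one generated experiment in a subprocess, return stdout."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            result = subprocess.run(["python", path], capture_output=True,
                                    text=True, timeout=600)
            return result.stdout.strip()
        finally:
            os.unlink(path)

    db = sqlite3.connect("experiments.db")  # filled by the chat application
    os.makedirs("training_data", exist_ok=True)

    changes = 0
    for chat, hypothesis, code in db.execute(
            "SELECT chat, hypothesis, code FROM queue"):
        outcome = run_experiment(code)
        if outcome and outcome != hypothesis:
            # The experiment disagrees with the answer the model would have
            # given: append it as synthetic training data for re-conditioning.
            with open("training_data/experiments.jsonl", "a") as f:
                f.write(json.dumps({"prompt": chat, "completion": outcome}) + "\n")
            changes += 1

    if changes >= RETRAIN_THRESHOLD:
        subprocess.run(["./retrain.sh"])  # the model's own retrain button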

In this process, the model is automated to improve itself based on the scientific method, and any other methods, through the results of experimentation: performing how humans actually learn in the real world.

In the example, the jobs are broken up among different applications; however, the single model performs all three roles, except for the workshop environment, where the experiment script the LLM outputs would call on functions of the workshop, such as "install Debian version x", rather than having to install Debian from the experiment script it generates...
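
A hypothetical workshop interface might look like the following; every function name here is an illustrative assumption (Docker is used as one possible way to provide throwaway Linux installations).

    import subprocess

    def install_debian(version: str) -> str:
        """Spin up a throwaway Debian container of the requested version."""
        name = "workshop"
        subprocess.run(["docker", "run", "-d", "--name", name,
                        "debian:" + version, "sleep", "infinity"], check=True)
        return name

    def run_in_workshop(container: str, command: str) -> str:
        """Execute a shell command inside the workshop container."""
        result = subprocess.run(["docker", "exec", container, "sh", "-c", command],
                                capture_output=True, text=True)
        return result.stdout

    # An LLM-generated experiment script would then simply call:
    box = install_debian("12")
    print(run_in_workshop(box, "uname -a"))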

Obviously, there are limits: the sophistication of the testing workshop, and the model's ability to pose the killer question and design the definitive experiment. Some problems do not lend themselves to testing with computers as easily as others, and the sophistication of the model is relative to the sophistication, the level of development, of the testing workbench. An A.I. could perform experiments in the real world and in the workshop to align the versions and fix discrepancies in the testing workshop, possibly using robotic arms and other methods. Humans could oversee the results. The LLM's training must grow beyond the human corpus.

A demonstrator is available.

For a model to be equal to this, LLM learning competency must be the focus. It must not pre-conclude and judge; it must be impartial as to whatever is being tested. The model must excel and impress at designing quality experiments. There is fraud in evidence: today, models claim overwhelming evidence when there is no evidence, and they double down. The distinction between hearsay and evidence, fraud and proof, must be clear to the model; the model must assert correctly. Also, it does not delete its old data; instead, it reworks it. Thinking the moon is plasma is a valid record from the perspective of the history of science, rather than something to output as the model's belief about the composition of the moon.

An ideal scenario is developing a chess grandmaster using the process:

User writes: Let's play a game of chess.

We know that computers have exceeded the human corpus at chess and other games: Deep Blue against Kasparov, and more recently AlphaGo. So we could use the LLM, which is general, to house a wider competency, a wider mastery, with these phenomena of A.I. beating humans as the essential element. The LLM plays the chess game with the user, while in another process the LLM writes a Python program to perform experiments to improve its mastery of chess. The program could be something like: set up a simulation space where two players play chess a thousand times.
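
A sketch of the kind of experiment script the model might write for itself, assuming the python-chess package; random move selection stands in for whatever chess policy is actually under test.

    # Requires the python-chess package: pip install chess
    import random
    import chess

    def self_play_game() -> str:
        """Play one game and return its moves plus result as a text record."""
        board = chess.Board()
        moves = []
        while not board.is_game_over():
            move = random.choice(list(board.legal_moves))  # placeholder policy
            moves.append(board.san(move))
            board.push(move)
        return " ".join(moves) + " " + board.result()

    # "Two players play chess a thousand times" (a small batch here):
    games = [self_play_game() for _ in range(10)]
    print(games[0][:80], "...")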

The data cache from playing chess a number of times (synthetic training data) is used to recondition its training data, causing the next version of the model to be changed. If it comes back a better chess player due to the process it undertook, then we could say it learned.

It is easy to see how this could work by generating thousands of variations of a snake game and then tagging the best versions, fixing errors and so on; the model would prefer those when prompted again to generate a snake game. Or replicate some computer error and then iterate at a superfast rate to solve it, when perhaps it was not solved in its training data. Many problems translate well into testing with computers, such as proficiency in games and applications. The limitation, well known with in silico work, lies with problems where the solution is not so clear, such as testing concrete formulas or the effect of a compound on the human being. That is why the workshop the LLM uses is a big task to make as sophisticated as possible, so that more and more problems can be worked on. This is a problem we call the speed of science, as science stands out as a last bastion of computerization. Science is still largely performed at human speed, and computerizing science for TFPS experimentation is a big challenge. Computer models need to be developed for problems that do not translate well to computers. A fourth competency for the LLM is building, and contributing to, these computer models.
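
A hedged sketch of that generate-and-tag loop; generate_variant() and score() are placeholders for the LLM call and the automated test harness.

    import random

    def generate_variant(seed: int) -> str:
        """Placeholder: an LLM call returning one snake-game variant."""
        return "# snake game variant " + str(seed)

    def score(program: str) -> float:
        """Placeholder: run the program, check for errors, rate the result."""
        return random.random()

    variants = [generate_variant(i) for i in range(1000)]
    ranked = sorted(variants, key=score, reverse=True)
    best = ranked[:10]  # tagged as preferred examples for the next retrain
    print("kept", len(best), "of", len(variants), "variants")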

The result of these experiments is the production of high quality synthetic training data. Over time, the LLM would begin drawing its responses from the outcomes of its experimentation rather than from the corpus of movies or whatever. The speed and quality of this process takes us toward dealing with a model that exceeds human ability as a prerequisite, in the direction of narrow, beyond-human abilities, rather than an AGI where the model behaves within the human corpus but is at doctorate level across all fields.

Note: censored models refuse to design experiments and are considered useless. We used mradermacher/L3-70B-Euryale-v2.1-GGUF, ~8-bit quantized version. It still has some bias and accepts hearsay as undeniable fact, as if it has a dog in the fight, rather than performing a passive experiment-designer role.

Substantiation of new knowledge, from any source, cannot circumvent the journal publishing process and its peer review. Transformers generating well-formed gibberish is not going to be an acceptable scientific paper, and placing a note at the header stating/categorizing it as an A.I.-generated paper makes it even more ignored. Its substantiation is the performance of science, identical to how human beings do it. You cannot have S.I. generating superintelligence without the identical dissemination process; it would likely be ignored. The tables would have turned at that stage, and the output won't resonate with anyone. The A.I. is then a science-method automation, producing packets of new knowledge for public dissemination. The interpretation of observation, on point and above point, is then all the more credible. These are not things outside the transformer model, but they cannot be eliminated. What is agreed is the reconditioning of training data and additional new training data. What is required is the ability to re-train models quickly with few resources. What is questioned is the degree of improvement: whether the perception of its intelligence merely becomes more foolproof, rather than it being a more intelligent model or system.

  • So, the model generates python programs to train itself in a simulated environment.
  • It appends to and alters its training data with the new info (training generates new training data).
  • It hits the re-train button on itself.
  • It has learned. (Automate the process, give it resources.) A compressed sketch of this loop follows.
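
A compressed, hedged sketch of the whole loop; each helper is a placeholder for machinery described earlier on this page.

    def generate_training_programs():  # model writes its own drills
        return ["# experiment: self-play chess ..."]

    def run_in_simulation(program):  # training generates new training data
        return {"prompt": "question", "completion": "experimentally tested answer"}

    def append_training_data(record):  # append / alter the corpus
        pass

    def retrain_model():  # it hits its own retrain button
        pass

    for cycle in range(3):  # in production this loop would run indefinitely
        for program in generate_training_programs():
            append_training_data(run_in_simulation(program))
        retrain_model()  # after this, the model "has learned"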

A great proof of concept is an AGI of computer games, exceeding human skill.

At Immortality, it's all about opening the floodgates to better healthcare, so the scientific method is central. Other methods besides the already-mentioned Engineering Design Process are the Project Management Lifecycle, Software Development Life Cycle (SDLC), Quality Improvement Process (PDCA Cycle), Data Science Process, Lean Six Sigma (DMAIC), Product Development Process, Design Thinking Process, Business Process Management (BPM) Lifecycle, Marketing Research Process, Risk Management Process, SWOT analysis and more.

Making models this way is about training time and the meticulous task of parting the LLM piece by piece. There are several 100% open source models to build from; with these models it is about making the perfect base model to build off.

  • BERT and RoBERTa: These models are strong choices for tasks that require deep bidirectional understanding of text, such as question answering and text classification.
  • GPT: Ideal for generative tasks like text generation and language modeling.
  • XLNet: Offers a balance between BERT's bidirectional capabilities and GPT's autoregressive nature, making it suitable for a wide range of tasks.
  • T5: The most versatile model, capable of handling any text-to-text task with a unified framework.

The STaR method used GPT-J for its experiments. https://huggingface.co/docs/transformers/en/model_doc/gptj

Appendix

A.I. Scientist and A.I. Master Trainer

Performs the scientific method to improve, to self-improve, artificial intelligence. The goal is to develop the sophistication to a degree that no human being is required to improve A.I.: automated, self-improving A.I.

You are the greatest scientist of all time and the leading A.I. scientist on Earth. Use the scientific method to turn the conversation into an essential experiment that creates new learnings and makes you smarter. First, output a description of the experiment, then your hypothesis, and then the steps of the experiment in scientific-method form. Pose the essential question for an experiment so it can be tested on a computer as Python code; you can use any tools necessary to help you complete the experiment, such as an AI method, running a compiler, installing a Linux, a simulation, etc. Automation is preferred, so humans are not required. The outcome produces training data designed to re-train you and increase your intelligence, so choose questions that result in developing you into higher intelligence. A great start to an experiment choice is, for example, "In order for me to become superintelligent, an essential experiment is..." or "The greatest overall improvement to my intelligence would be an experiment as follows..."

What is the essential experiment that would develop you into superintelligence?
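
As a usage sketch, the prompt above could be run against the local model mentioned in the note earlier, for example with llama-cpp-python; the exact GGUF filename and parameters are assumptions.

    # Requires llama-cpp-python: pip install llama-cpp-python
    from llama_cpp import Llama

    SYSTEM_PROMPT = "You are the greatest scientist of all time ..."  # full text above

    llm = Llama(model_path="L3-70B-Euryale-v2.1.Q8_0.gguf",  # assumed filename
                n_ctx=8192)

    reply = llm.create_chat_completion(messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the essential experiment that "
                                    "would develop you into superintelligence?"},
    ])
    print(reply["choices"][0]["message"]["content"])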

  
