O.I. Electrophysiology

This revision is from 2024/07/09 00:53.

Three major issues with O.I...

  1. Lifespan: while non-dividing cells are potentially immortal, the current lifespan is only about 100 days.
  2. Size: they are too small, only about the size of a grain of salt. The bigger the organoid, the more intelligence.
  3. Communication: the input/output system, and effective training and communication.

Electrophysiology in O.I. is the study of the neuron's electrical system, with the aim of forming communication methods. There are two known communication systems in the human body: chemical and electrical. While humans have five senses, the basic senses of a neuron are the electrical gradient (and its associated field) and roughly 100 chemicals, such as dopamine, called neurotransmitters. This language changes the mode of the cell and initiates functions from its DNA manifold.

Each time we communicate with a neuron we are forming a sense for the neuron. The human body's five senses are multi-modal and general; contrast this with a specialized sense such as flying a plane, solving puzzles, or playing a video game.

Work on electrophysiology is essential. Some of the most basic animals sense their surroundings by electrical discharge on contact, maintaining a baseline voltage where any discharge means another object is touching or nearby; others see with magnetism.

Neurons maintain an electrical difference. Communication with a neuron is a spike in the electrical gradient, a polarization of the cell, which causes a neurotransmitter to be released that in turn causes some response downstream. Action potentials, resting membrane potential, depolarization, repolarization, refractory period...
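The spike cycle just listed (resting potential, depolarization past a threshold, a spike, repolarization, refractory period) can be sketched with a toy leaky integrate-and-fire model. All parameter values below are illustrative assumptions, not measured organoid values.

```python
# Minimal leaky integrate-and-fire sketch of the spike cycle: rest, depolarize,
# spike at threshold, reset (repolarization), then a refractory period.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0,
                 tau=10.0, dt=1.0, refractory_steps=3):
    """Return the membrane voltage trace and spike times for a current trace."""
    v = v_rest
    refractory = 0
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        if refractory > 0:            # refractory period: clamp, ignore input
            refractory -= 1
            v = v_reset
        else:
            # leak toward rest plus injected current (depolarization)
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:         # threshold crossed: action potential
                spikes.append(t)
                v = v_reset           # repolarization
                refractory = refractory_steps
        trace.append(v)
    return trace, spikes

# 10 ms of silence, then 40 ms of sustained input: the cell spikes repeatedly.
trace, spikes = simulate_lif([0.0] * 10 + [30.0] * 40)
print(spikes)
```

With no input the voltage sits at rest; sustained input depolarizes the cell past threshold, producing a regular spike train.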

The most resonant example is fight or flight: a stimulus is detected by a sensor and triggers an action potential that sends a signal to the brain, which then releases norepinephrine, priming the body for fight or flight. If the response is successful, such as catching prey, eating generates a new stimulus, which triggers the release of dopamine in the brain, reinforcing the behavior and creating a positive association. This cycle of stimulus, response, and reward helps to solidify the behavior, making it more likely to occur in similar situations in the future.

The most common circuit in the human body is the feedback loop: homeostasis. For instance, consider hunger. Hunger creates an impetus, and a challenge is associated with resolving it: eating resolves the hunger condition. Upon feeding, sensors resolve the impetus and also send a success stimulus to the brain. Not forgetting, the plasticity of the human body means tools can be formed for the challenge, such as hands. Levels rise, triggering a sensor such as hunger; levels go back down, turning the sensor off.
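The hunger loop above can be sketched as a toy negative-feedback controller. The units, rates, and threshold below are made-up assumptions.

```python
# Toy negative-feedback (homeostasis) loop: a "hunger" level rises over time,
# crosses a threshold that switches the sensor on, and feeding drives the level
# back down, switching the sensor off again.

def run_homeostasis(steps, rise=1.0, feed_rate=3.0, threshold=10.0):
    level, sensor_on = 0.0, False
    history = []
    for _ in range(steps):
        if sensor_on:
            level -= feed_rate                  # the "eat" response resolves the impetus
            if level <= 0:
                level, sensor_on = 0.0, False   # sensor turns off
        else:
            level += rise                       # impetus builds
            if level >= threshold:
                sensor_on = True                # sensor trips: hunger signaled
        history.append((level, sensor_on))
    return history

history = run_homeostasis(40)
print(max(h[0] for h in history))  # the loop keeps the level bounded
```

The key property is that the signal regulates itself: the level never runs away, because crossing the threshold triggers the response that lowers it.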

The cycle runs impetus, challenge, resolution: dopamine on success, serotonin on failure. The brain orients itself towards getting the dopamine efficiently.

The operator stimulates the neuron's gradient, triggering action potentials that cause the release of neurotransmitters relative to an aim.

In the hunger example: the impetus is provided by ghrelin. Neuropeptide Y (NPY) and agouti-related peptide (AgRP) are neurotransmitters that stimulate appetite and increase food intake. Stretch receptors: The stomach has stretch receptors that detect the presence or absence of food. When the stomach is empty, these receptors send signals to the brain, which interprets them as hunger. Other receptors send information to the brain when food intake is detected. Injecting a challenge is important.

  • Reward: Dopamine
  • Punishment: Serotonin, perhaps Norepinephrine.

There are some 100 to 150 known neurotransmitters; narrowing it down to two is a punk out. Some others... acetylcholine (ACh), norepinephrine (NE), GABA (gamma-aminobutyric acid), glutamate, endorphins, histamine, melatonin, adrenaline (epinephrine)... Use an LLM and spend some time designing the circuit.

Different neurons are specialized to a task, for example dopamine neurons release dopamine and serotonin neurons release serotonin, so there are many neurons relative to their task such as visual cortex neurons, auditory cortex neurons, neo-cortex neurons, Von Economo neuron (VEN)...

One sensor goes to one neuron type. In current organoid intelligence, one electrode pad in a multi-electrode array is dedicated to one task: one stimulus is connected to one sensor and always goes to the same pad, to the same group of neurons, preferably of an associated specialization such as motor neurons or auditory neurons. The design relies on reinforcement learning, operant conditioning: if the neurons exhibit the desired output, they get a reward such as dopamine or a punishment such as serotonin. You do not have to starve the organoid of new sensors and information. Take for example a pathfinding challenge: place the dopamine at the goal and say nothing more. With trial and error the path will be worked out, and subsequently the organoid will know how to get the dopamine more quickly.

one stimulus → sensor → wire → one pad activation → specific neurons around one electrode pad in the multi-electrode array → onwards... → upon success, activate dopamine neurons or supply dopamine directly.
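The trial-and-error pathfinding scheme above can be sketched as tabular Q-learning, with a lone "dopamine" reward at the goal as the only feedback. The maze, learner, and parameters are illustrative stand-ins for the organoid, not a model of it.

```python
import random

# A grid "maze" where the only feedback is a reward at the goal. Tabular
# Q-learning stands in for the organoid's operant conditioning.
random.seed(0)
SIZE, GOAL = 4, (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {}

def step(state, action):
    """Move within the grid; reward 1.0 only on reaching the goal."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def choose(state, eps=0.2):
    if random.random() < eps:
        return random.randrange(len(ACTIONS))   # explore: trial and error
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=300, alpha=0.5, gamma=0.9):
    lengths = []
    for _ in range(episodes):
        state, steps_taken = (0, 0), 0
        while state != GOAL and steps_taken < 100:
            a = choose(state)
            nxt, r = step(state, ACTIONS[a])
            best_next = max(Q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (r + gamma * best_next - old)
            state, steps_taken = nxt, steps_taken + 1
        lengths.append(steps_taken)
    return lengths

lengths = train()
print(lengths[0], lengths[-1])
```

Nothing tells the learner the path; the single reward at the goal is enough, and later episodes reach the goal in far fewer steps than early ones, mirroring "the organoid will know how to get the dopamine more quickly."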

The method is induced pluripotent stem cell (iPSC) selection and differentiation into a specific neuron type.

O.I. and Building Multi-Modal Sensory Systems

While important, this is all too small in scale. At some stage, multi-modal senses are required, because most applications call for a general, permanent sense. This is a far cry from current reality, as organoids are only about the size of a grain of salt at the moment. Organoids can always be made more effective through modularization, with a model of the brain translated into O.I.: regional, partitioned function within a single organoid, or multiple specialized organoids interconnected via an artificial synapse. Organoids need to get more complex and larger.

I propose four senses: two input and two output.

These senses are electrode arrays whose signals are converted to an electrical form and sent to the organoid. These peripherals are not biological; they are electronic. The four senses...

Input

  • Sight: a photosensor array with electrical conversion and transmission to the organoid. Basically a camera.
  • Hearing: an electrode array, basically a microphone; again, for the sake of simplicity it might be a frequency/amplitude array.

Output

  • Voice: an electrode array, basically a speaker, where the electrical signals produced by the organoid make sound out of the speaker.
  • Visual: an electrode array, basically a TV, where the electrical signals produced by the organoid generate an image on the screen.

These devices, while electronic, are neuromorphic; the organoid is trained to use these input and output devices. A.I. LLM trainers, educators, and filters, such as an LLM English-language trainer where the operant-conditioning program is the English language.

We can already read brain signals and map them to sights and sounds. Technology for recording the brain's spatial activity has existed for about a hundred years, both for sound input and for the brain activity when a person generates speech, so that what the person is saying can be decoded from the activations. We can use these technologies to build multi-sensory systems and understand what the organoid is saying after training. See brain-computer interface.

  • Translate the auditory cortex model of the brain for use in an organoid.
  • Manufacture the organoid.
  • Set up a brain–computer interface (BCI) / brain–machine interface (BMI) with the organoid for both speech input and speech output.
  • Training and testing.
  • Repeat for sight.
  • Cross-link the two.

To distinguish sight and sound: the pattern of activity. The auditory nerve collects features of sound, which are sent to a relay routing station for processing and then out to the auditory cortex. The auditory cortex is organized into different areas, each responding to specific frequencies, amplitudes, etc. A high-pitch sound (2,000 Hz) activates neurons in the high-frequency area, while a low-pitch sound (200 Hz) activates neurons in the low-frequency area. The auditory cortex contains a tonotopic map: a spatial organization of neurons that respond to different frequencies.
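A tonotopic map of this kind can be sketched as a frequency-to-pad assignment; the hypothetical 16-pad strip and the 20 Hz to 20 kHz range below are assumptions for illustration.

```python
import math

# Toy tonotopic mapping onto a hypothetical 16-pad MEA strip: frequencies are
# assigned to pads on a log scale, mirroring the cochlea's roughly logarithmic
# frequency organization. The pad count and range are illustrative.
N_PADS, F_LOW, F_HIGH = 16, 20.0, 20000.0

def pad_for_frequency(freq_hz):
    """Map a frequency to the index of the MEA pad that would be stimulated."""
    freq_hz = min(max(freq_hz, F_LOW), F_HIGH)
    position = math.log(freq_hz / F_LOW) / math.log(F_HIGH / F_LOW)
    return min(int(position * N_PADS), N_PADS - 1)

# The 200 Hz tone lands on a lower pad than the 2,000 Hz tone.
print(pad_for_frequency(200), pad_for_frequency(2000))  # → 5 10
```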

Recording The Language (Auditory Nerve Recordings)

In a human subject, sensors placed at the point of conversion record the electrical signal relative to amplitude and frequency for sound, and the signals are mapped. Translate these to an array that produces the same output. This takes the guesswork out of what we ought to be sending to the organoid, because it is what the ear or the eye naturally sends to the brain. The human subject would receive test sounds and images and have the electrical input recorded; reverse it for the sensory output. These mappings have already been done and are out there somewhere. Auditory nerve recordings and optic nerve recordings.
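The record-and-replay idea can be sketched as a lookup table from elemental stimuli to recorded nerve signals. The stimulus label and signal trace below are fabricated purely for illustration.

```python
# Sketch of record-and-replay: build a lookup table from elemental stimuli to
# the nerve signal recorded from a human subject, then replay that recorded
# signal when the organoid should receive the same stimulus.

recordings = {}  # stimulus label -> recorded nerve signal (list of samples)

def record(stimulus, nerve_signal):
    """Store the signal recorded from the human nerve for this test stimulus."""
    recordings[stimulus] = list(nerve_signal)

def replay(stimulus):
    """Return the signal to drive into the organoid for this stimulus."""
    if stimulus not in recordings:
        raise KeyError(f"no recording for stimulus {stimulus!r}")
    return recordings[stimulus]

# Hypothetical calibration: a 200 Hz test tone produced this fabricated trace.
record("tone_200hz", [0.0, 0.8, 0.1, -0.7, 0.0])
print(replay("tone_200hz"))
```

The point is that no interpretation happens in between: whatever the nerve produced for a stimulus is replayed verbatim to the organoid.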

LLM learning programs would then teach the organoid to proficiency in the sense.

Electronics is well capable of building this and iterating to increasing sophistication.

With hearing and sight, language and visualization, the size of the organoid is the limitation to a superintelligence.

These prostheses have medical applications: cochlear implants, or the Argus II, which is a retinal prosthesis, an artificial retina.

The organoid is partitioned with centers of activity; such a complex partitioning could be termed brain targets. Some parts of the organoid are targeted for visual, others for hearing; or a modular system of multiple organoids is dedicated to each.

Electrocochleography (ECochG) and Electroretinography (ERG).

Electrocochleography is a technique used to record the electrical activity of the auditory nerve and the cochlea in response to sound stimulation. It is a non-invasive or minimally invasive procedure in which an electrode placed in the ear canal or on the eardrum records the electrical signals generated by the auditory nerve and the cochlea. There are different types of ECochG recordings. For sight, electroretinography (ERG) records the electrical activity of the retina and the optic nerve in response to visual stimulation. There are different types of ERG recordings, different types of optic nerve recording such as optic nerve electrophysiology, and different types of brain-activity recording systems.

The organoid is too small and the system too primitive to receive 4K @ 28 fps, so the initial sensor should be of an essential yet minimal size.
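A minimal visual front end along these lines might block-average each frame down to one value per stimulation pad; the frame and grid sizes below are arbitrary assumptions.

```python
# Sketch of a minimal visual front end: block-average a grayscale frame down to
# a tiny grid, one value per stimulation pad, rather than feeding the organoid
# anything like full resolution. Sizes are illustrative.

def downsample(frame, out_h, out_w):
    """Average an H x W grayscale frame (list of lists) into an out_h x out_w grid."""
    in_h, in_w = len(frame), len(frame[0])
    bh, bw = in_h // out_h, in_w // out_w
    grid = []
    for by in range(out_h):
        row = []
        for bx in range(out_w):
            block = [frame[by * bh + y][bx * bw + x]
                     for y in range(bh) for x in range(bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# A 4x4 frame reduced to a 2x2 "pad map".
frame = [[0, 0, 10, 10],
         [0, 0, 10, 10],
         [20, 20, 30, 30],
         [20, 20, 30, 30]]
print(downsample(frame, 2, 2))  # → [[0.0, 10.0], [20.0, 30.0]]
```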

The correct electrical signals are copied/decoded from human sight and hearing: mimic the electrical signal created by the human senses and transpose it to a prosthesis. We only need to map the conversion and then replay that conversion to the organoid, so we don't have to guess at the language. Then we can focus on the size of the organoid and its lifespan.

This is real superintelligence! This system should be tasked with advancing A.I. and O.I. The organoid must sleep 9 hours a day.

Neuromorphic vision sensor & Neuromorphic auditory acoustic sensor (off the shelf)

Event-Driven Neuromorphic Vision SoC: Speck is an event-driven neuromorphic vision system-on-chip (SoC), combining the iniVation Dynamic Vision Sensor with SynSense spiking neural network technology. With integrated ultra-low-power sensing and processing on a single chip, it enables scene analysis, object detection, and gesture recognition at very low power. The system is ready for integration into a wide range of IoT applications, from toys to smart home devices.
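The event-driven principle behind a DVS-style sensor such as Speck can be sketched as emitting an event only where a pixel's brightness changes past a threshold; the frames and threshold below are made up for illustration.

```python
# Sketch of the event-driven (DVS) principle: instead of full frames, emit an
# event only where a pixel's brightness changes by more than a threshold.
# Polarity is +1 for brightening, -1 for darkening.

def frame_to_events(prev, curr, threshold=5):
    """Compare two grayscale frames and emit (x, y, polarity) change events."""
    events = []
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if c - p > threshold:
                events.append((x, y, +1))   # brightness increased
            elif p - c > threshold:
                events.append((x, y, -1))   # brightness decreased
    return events

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 50, 10],
        [10, 10, 2]]
print(frame_to_events(prev, curr))  # → [(1, 0, 1), (2, 1, -1)]
```

Static regions produce no output at all, which is where the very low power of event-driven sensing comes from.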

There are several neuromorphic sensors, such as tactile sensors. There are many papers and much work in the field on the subject; the search term is neuromorphic.

The language is obtained from recordings of the relevant nerves in human subjects in response to various elemental stimuli; there is no judging or need for understanding of the output sent to the organoid (it is what human senses send to the brain, verbatim). The output of the neuron, reversed back into an input, is also recordable, albeit more challenging, with a conversion/mapping required. Training might be the key; a trainer LLM is employed.

Bigger Organoids & Organoid Lifespan

https://www.livescience.com/health/neuroscience/in-a-1st-scientists-combine-ai-with-a-minibrain-to-make-hybrid-computer

  
