O.I. Electrophysiology


Three major issues with O.I...

  1. Lifespan: while non-dividing cells are potentially immortal, the current lifespan is only about 100 days.
  2. Size: they are too small, only about the size of a grain of salt. The bigger the organoid, the more intelligence.
  3. Communication: the input/output system and effective training and communication.

Electrophysiology in O.I. is the study of the neuron's electrical system so that communication methods can be formed. There are two known communication systems in the human body: chemical and electrical. While humans have five senses, the basic sense of a neuron is the electrical gradient (and its associated field) and roughly 100 chemicals, such as dopamine, called neurotransmitters. This language changes the mode of the cell and initiates functions from its DNA manifold.

Each time we communicate with a neuron we are forming a sense for the neuron. The human body's five senses are multi-modal and general; in contrast, a specialized sense would be something like flying a plane, solving puzzles, or playing a video game.

This work on electrophysiology is essential. Some of the most basic animals sense their surroundings by electrical discharge on contact, maintaining a basic voltage where any discharge means another object has touched or is near; another example is seeing with magnetism. Neurons maintain an electrical difference that, when fluctuated, causes the neuron to act. Communication with a neuron is a change in the electrical gradient: polarization of the cell causes a neurotransmitter to be released, which onwards causes some response. Action potentials, resting membrane potential, depolarization, repolarization, refractory period...
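Those terms can be made concrete with a toy simulation. Below is a minimal sketch of a leaky integrate-and-fire neuron: injected current depolarizes the membrane from rest, a spike fires at threshold, then the potential resets and holds through a refractory period. The constants are illustrative textbook-style values, not measurements from any organoid.

    # Minimal leaky integrate-and-fire neuron: illustrates resting potential,
    # depolarization, threshold, spike ("go" signal), and refractory period.
    dt = 0.1            # time step (ms)
    t_max = 100.0       # total simulated time (ms)
    v_rest = -70.0      # resting membrane potential (mV)
    v_thresh = -55.0    # spike threshold (mV)
    v_reset = -75.0     # post-spike reset potential (mV)
    tau_m = 10.0        # membrane time constant (ms)
    r_m = 10.0          # membrane resistance (MOhm)
    t_refrac = 2.0      # refractory period (ms)

    v = v_rest
    refrac_left = 0.0
    spike_times = []

    for i in range(int(t_max / dt)):
        t = i * dt
        i_inj = 2.0 if 20.0 <= t <= 80.0 else 0.0   # injected current (nA)
        if refrac_left > 0:
            refrac_left -= dt      # membrane is clamped while refractory
            v = v_reset
            continue
        # Leaky integration: decay toward rest, input current depolarizes.
        v += ((-(v - v_rest) + r_m * i_inj) / tau_m) * dt
        if v >= v_thresh:          # depolarization reached threshold
            spike_times.append(t)  # action potential fired
            v = v_reset            # repolarization / reset
            refrac_left = t_refrac

    print(f"{len(spike_times)} spikes at (ms): {[round(s, 1) for s in spike_times]}")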

The operator stimulates the neuron's gradient, inducing action potentials to cause the release of neurotransmitters relative to an aim.

action potential → neurotransmitter → a serotonergic neuron will release serotonin, while a dopaminergic neuron will release dopamine. The action potential provides the "go" signal. The different specializations of neurons communicate an outcome. For instance, if a dopamine neuron is triggered, it means success, while if a serotonin neuron is fired it means failure.

  • Reward: Dopamine
  • Punishment: Serotonin, perhaps Norepinephrine.

There are 100–150 known neurotransmitters; narrowing it down to two is a punk out. Some others: acetylcholine (ACh), norepinephrine (NE), GABA (gamma-aminobutyric acid), glutamate, endorphins, histamine, melatonin, adrenaline (epinephrine)...

There are many neuron types relative to their task, such as visual cortex neurons, auditory cortex neurons, neocortex neurons, the Von Economo neuron (VEN)...

One sensor goes to one neuron type...

one stimulus → sensor → wire → specific neuron → onwards to other specializations (prep, prime, and act).

Take fight or flight: a stimulus is sensed, and an action potential is sent down the wire specifically to neurons that release norepinephrine. Onwards, the communication propagates through the brain and the system is primed for fight or flight. If the animal takes down its prey and feeds, a new sense, an unassociated process, will trigger dopamine in the brain and the behavior is reinforced.

Take a feedback loop: levels rise, a sensor is activated and triggers hunger, teaching the action of feeding, which brings levels back down, turning off the sensor and providing dopamine.
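A toy version of that loop, with made-up level names and thresholds (everything here is an illustrative assumption, not a physiological model):

    # Toy homeostatic feedback loop: a level drifts up, a sensor trips and
    # triggers hunger, feeding lowers the level, and success is rewarded.
    level = 0.0
    SENSOR_THRESHOLD = 5.0   # assumed trip point
    RISE_PER_TICK = 1.0      # assumed drift rate
    DROP_PER_FEED = 4.0      # assumed effect of feeding

    for tick in range(20):
        level += RISE_PER_TICK                  # monitored level drifts upward
        if level >= SENSOR_THRESHOLD:           # sensor fires -> hunger signal
            level -= DROP_PER_FEED              # feeding brings the level down
            print(f"t={tick}: fed, level={level:.1f}, reward=dopamine")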

The simplest circuit is a sensor dedicated to a single type of stimulus that triggers a specific type of neuron, establishing an input, processing, and output.

1 stimulus is connected to 1 sensor and always goes to the same group of neurons. The design relies on reinforcement learning, operant conditioning. If the neurons exhibit the desired output, they get a reward such as dopamine; otherwise a punishment such as serotonin. You do not have to starve the organoid of new sensors and information.
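A minimal sketch of that conditioning loop. The helpers `read_output` and `deliver` are hypothetical stand-ins for the real recording and chemical-delivery hardware, which this page does not specify:

    import random

    # Operant-conditioning sketch: one stimulus, one sensor, one neuron group.
    def read_output():
        """Stub: pretend to read the neuron group's response (True = desired)."""
        return random.random() < 0.5

    def deliver(substance):
        """Stub: stand-in for the reward/punishment delivery mechanism."""
        print(f"delivering {substance}")

    for trial in range(10):
        if read_output():
            deliver("dopamine")     # reward, per the scheme above
        else:
            deliver("serotonin")    # punishment, per the scheme above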

Induced pluripotent stem cells (iPSCs) can be selectively differentiated into a specific neuron type.

O.I. and Building Multi-Modal Sensory Systems

At some stage, multi-modal senses are required because most applications could fall into a general, permanent sense; a repeatable organoid construction could facilitate most applications. This is a far cry from the present, as organoids are only about the size of a grain of salt at the moment and only capable of operant conditioning. Organoids can always be made more effective using modularization, with a model of the brain translated into O.I.: regional, partitioned function within a single organoid, or multiple specialized organoids interconnected via an artificial synapse. Organoids need to get more complex and larger.
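One way to picture the "artificial synapse" between specialized organoids is a relay that reads spikes from one and remaps them to stimulation channels on the other. A sketch under invented names; `read_spikes`, `stimulate`, and the channel map are all assumptions, not a real MEA driver API:

    # Sketch of an "artificial synapse": spikes recorded on organoid A's
    # output electrodes are remapped to stimulation electrodes on organoid B.
    CHANNEL_MAP = {0: 7, 1: 3, 2: 5}   # A's electrode -> B's electrode (assumed)

    def read_spikes(organoid):
        """Stub: return electrode indices on `organoid` that spiked this frame."""
        return [0, 2]

    def stimulate(organoid, electrode):
        """Stub: stand-in for the stimulation hardware."""
        print(f"stimulating {organoid} electrode {electrode}")

    def artificial_synapse_step():
        for src in read_spikes("organoid_A"):
            dst = CHANNEL_MAP.get(src)
            if dst is not None:
                stimulate("organoid_B", dst)

    artificial_synapse_step()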

Four senses are proposed, two input and two output.

These senses are electrode arrays whose signals are converted to an electrical form and sent to the organoid. These peripherals are not biological; they are electronic. The 4 senses (a minimal conversion sketch follows the lists below)...

Input

  • Sight: photosensor array with electric conversion and transmission to the organoid. Basically a camera.
  • Hearing: electrode array, basically a microphone; for the sake of simplicity it might be a frequency/amplitude array.

Output

  • Voice: electrode array, basically a speaker, where the electrical signals produced by the organoid make sound out of the speaker.
  • Visual: electrode array, basically a T.V., where the electrical signals produced by the organoid can generate an image on the T.V.
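As promised above, here is a minimal conversion sketch for the hearing input: turn a microphone buffer into the frequency/amplitude array mentioned in the input list. The sample rate, buffer length, and band count are assumptions:

    import numpy as np

    # Hearing input sketch: microphone buffer -> coarse frequency/amplitude
    # array, one value per stimulation electrode.
    fs = 16_000                                  # sample rate (Hz), assumed
    t = np.arange(0, 0.1, 1 / fs)                # 100 ms audio buffer
    buffer = np.sin(2 * np.pi * 2000 * t)        # stand-in mic signal: 2 kHz tone

    spectrum = np.abs(np.fft.rfft(buffer))       # amplitude per frequency bin
    N_BANDS = 16                                 # assumed electrode count
    bands = np.array_split(spectrum, N_BANDS)    # coarse frequency bands
    amplitudes = np.array([b.mean() for b in bands])
    print("loudest band:", int(np.argmax(amplitudes)))  # band containing 2 kHz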

The organoid is trained to use these input and output devices by A.I.: LLM trainers, educators, and filters, such as an LLM English-language trainer where the operant conditioning program is the English language.

We can already read brain signals and map them to sights and sounds. Technology has been available for about one hundred years that records the spatial activity of the brain, both in response to a sound input and while a person generates speech, letting us know what the person is saying. We can use these technologies to build multi-sensory systems and understand what the organoid is saying after training. See brain–computer interface.
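On the decoding side (understanding what the organoid "says"), a nearest-centroid classifier over spike counts is about the simplest possible BCI decoder. The sketch below runs on synthetic data; a real decoder would be fit to actual recordings:

    import numpy as np

    # Simplest-possible decoder sketch: classify which stimulus was "heard"
    # from spike counts on a few recording channels. Data is synthetic.
    rng = np.random.default_rng(0)
    n_channels = 16

    # Two stimulus classes with different mean activity per channel.
    class_means = {"tone_A": rng.uniform(0, 5, n_channels),
                   "tone_B": rng.uniform(0, 5, n_channels)}

    def record_trial(label):
        """Stub: spike counts per channel for one presentation of `label`."""
        return rng.poisson(class_means[label])

    # Training: average 50 trials per class to get a centroid.
    centroids = {label: np.mean([record_trial(label) for _ in range(50)], axis=0)
                 for label in class_means}

    def decode(spike_counts):
        """Return the label whose centroid is nearest the observed counts."""
        return min(centroids,
                   key=lambda lb: np.linalg.norm(spike_counts - centroids[lb]))

    print(decode(record_trial("tone_A")))   # usually "tone_A"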

  • Translate the auditory cortex model of the brain for use in an organoid.
  • Manufacture the organoid.
  • Set up a brain–computer interface (BCI) / brain–machine interface (BMI) with the organoid for both speech input and speech output.
  • Training and testing.
  • Repeat for sight.
  • Cross-link the two.

Sight and sound are distinguished by the pattern of activity. The auditory nerve collects features of sound, which are sent to a relay routing station for processing and then out to the auditory cortex. The auditory cortex is organized into different areas, each responding to specific frequencies, amplitudes, etc. A high-pitch sound (2,000 Hz) activates neurons in the high-frequency area, while a low-pitch sound (200 Hz) activates neurons in the low-frequency area. The auditory cortex contains a tonotopic map, a spatial organization of neurons that respond to different frequencies.
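The tonotopic idea translates directly into a stimulation map: assign each electrode a preferred frequency band on a log scale, so a 2,000 Hz tone lands on a different electrode than a 200 Hz tone. The 16-electrode count and the 200–20,000 Hz band edges are assumptions:

    import numpy as np

    # Tonotopic map sketch: electrodes carry log-spaced frequency bands.
    N_ELECTRODES = 16
    F_LOW, F_HIGH = 200.0, 20_000.0   # assumed band edges (Hz)
    band_edges = np.logspace(np.log10(F_LOW), np.log10(F_HIGH), N_ELECTRODES + 1)

    def electrode_for(freq_hz):
        """Index of the electrode whose band contains `freq_hz`."""
        idx = int(np.searchsorted(band_edges, freq_hz, side="right")) - 1
        return min(max(idx, 0), N_ELECTRODES - 1)

    print(electrode_for(200.0))    # low end of the map
    print(electrode_for(2000.0))   # a higher electrode index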

Recording The Language (Auditory Nerve Recordings)

In a human subject, sensors placed at the point of conversion record the electrical signal relative to amplitude and frequency for sound, and the result is mapped. Translate this to an array that produces the same output. This takes the guesswork out of what we ought to be sending to the organoid, because it is what the ear or the eye naturally sends to the brain. The human subject would receive test sounds and images and the electrical input would be recorded; reverse it for the sensory output. These mappings have already been done and are out there somewhere. Auditory nerve recordings and optic nerve recordings.
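The record-and-replay idea reduces to a lookup table keyed by stimulus, with the recorded nerve waveform as the value. In this sketch, `record_nerve` and `replay_to_organoid` are hypothetical stubs for the human-subject recording rig and the organoid interface:

    # Record-and-replay sketch: build a stimulus -> nerve-waveform table from
    # a human subject, then replay the stored waveform to the organoid.
    recorded_language = {}

    def record_nerve(stimulus):
        """Stub: return the nerve waveform evoked by `stimulus` (samples)."""
        return [hash((stimulus, i)) % 100 / 100.0 for i in range(8)]

    def replay_to_organoid(waveform):
        """Stub: stand-in for the organoid stimulation hardware."""
        print("replaying:", [round(s, 2) for s in waveform])

    # Calibration pass: present test stimuli to the subject, store waveforms.
    for stimulus in ["200Hz_tone", "2000Hz_tone", "click"]:
        recorded_language[stimulus] = record_nerve(stimulus)

    # Later, send the organoid exactly what the ear would have sent the brain.
    replay_to_organoid(recorded_language["2000Hz_tone"])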

LLM learning programs would then teach the organoid to proficiency in each sense.

Electronics is well capable of building and iterating toward increasing sophistication.

With hearing and sight, language and visualization, the size of the organoid becomes the limitation to a superintelligence.

These prostheses have medical applications: cochlear implants, or the Argus II, which is a retinal prosthesis, an artificial retina.

The organoid is partitioned into centers of activity; such a complex partitioning could be termed brain targets. Some parts of the organoid are targeted for vision, others for hearing, or a modular system of multiple organoids is dedicated to each.

Electrocochleography (ECochG) and Electroretinography (ERG).

Electrocochleography is a technique used to record the electrical activity of the auditory nerve and the cochlea in response to sound stimulation. It is a non-invasive or minimally invasive procedure that involves placing an electrode in the ear canal or on the eardrum to record the electrical signals generated by the auditory nerve and the cochlea. There are different types of ECochG recordings and, for sight, Electroretinography (ERG). Electroretinography is a technique used to record the electrical activity of the retina and the optic nerve in response to visual stimulation. There are different types of ERG recordings, different types of optic nerve recording such as optic nerve electrophysiology, and different types of brain activity recording systems.
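Standard ERG analysis extracts the a-wave (the early negative trough) and the b-wave (the following positive peak, measured trough-to-peak). A sketch over a synthetic trace; the sample rate and waveform shape are assumptions for illustration:

    import numpy as np

    # ERG waveform sketch: locate the a-wave trough and b-wave peak in a
    # synthetic retinal response to a flash.
    fs = 1000                                  # sample rate (Hz), assumed
    t = np.arange(0, 0.25, 1 / fs)             # 250 ms after the flash
    # Synthetic trace: negative a-wave near 20 ms, positive b-wave near 50 ms.
    trace = (-120 * np.exp(-((t - 0.020) ** 2) / (2 * 0.005 ** 2))
             + 250 * np.exp(-((t - 0.050) ** 2) / (2 * 0.010 ** 2)))  # uV

    a_idx = int(np.argmin(trace[: int(0.035 * fs)]))   # trough in first 35 ms
    b_idx = a_idx + int(np.argmax(trace[a_idx:]))      # peak after the trough
    print(f"a-wave: {trace[a_idx]:.0f} uV at {t[a_idx]*1000:.0f} ms")
    print(f"b-wave amplitude: {trace[b_idx] - trace[a_idx]:.0f} uV")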

The organoid is too small and the system too primitive to receive 4K @ 28 fps, so the initial sensor is of an essential yet minimal size.
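Reducing a camera frame to that minimal size is straightforward average pooling. The 8x8 electrode grid and the test frame size are assumptions:

    import numpy as np

    # Sight input sketch: pool a camera frame down to a small electrode grid.
    def frame_to_electrodes(frame, grid=(8, 8)):
        """Average-pool a grayscale frame to `grid`, scaling brightness
        to a stimulation intensity in [0, 1]."""
        h, w = frame.shape
        gh, gw = grid
        frame = frame[: h - h % gh, : w - w % gw]      # crop to multiples
        pooled = frame.reshape(gh, frame.shape[0] // gh,
                               gw, frame.shape[1] // gw).mean(axis=(1, 3))
        return pooled / 255.0

    fake_frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
    print(frame_to_electrodes(fake_frame).shape)       # (8, 8) pattern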

The correct electrical signals are copied / decoded from human sight and hearing: mimic the electrical signal created by the human senses and transpose it to a prosthesis. We only need to map the conversion and then replay that conversion to the organoid, so we don't have to guess at the language. Then we can focus on the size of the organoid and its lifespan.

This is real super intelligence! This system should be tasked with advancing A.I. and O.I.

The organoid must sleep 9 hours a day.

Neuromorphic vision sensor & Neuromorphic auditory acoustic sensor (off the shelf)

Event-Driven Neuromorphic Vision SoC: Speck is an event-driven neuromorphic vision system-on-chip (SoC), combining the iniVation Dynamic Vision Sensor with SynSense spiking neural network technology. With integrated ultra-low-power sensing and processing on a single chip, it enables scene analysis, object detection, and gesture recognition at very low power. The system is ready for integration into a wide range of IoT applications from toys to smart home devices.

There are several neuromorphic sensors, such as tactile sensors. There are many papers and ongoing work in the field; the search term is neuromorphic.
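The core of an event-driven sensor is that only brightness changes generate output. A toy emulation of that idea (the threshold and frame shapes are assumptions, and this is not the Speck API):

    import numpy as np

    # Toy DVS-style event sensor: compare successive frames and emit an
    # event only where log-brightness changes exceed a threshold.
    THRESHOLD = 0.2                            # log-intensity change (assumed)

    def events_between(prev_frame, next_frame):
        """Return (row, col, polarity) events where brightness changed enough."""
        diff = np.log1p(next_frame) - np.log1p(prev_frame)
        rows, cols = np.nonzero(np.abs(diff) > THRESHOLD)
        return [(r, c, 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

    rng = np.random.default_rng(1)
    f0 = rng.uniform(0, 255, size=(4, 4))
    f1 = f0.copy()
    f1[1, 2] *= 2.0                            # one pixel brightens
    print(events_between(f0, f1))              # expected: [(1, 2, 1)]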

The language is obtained from recordings of a human subject's relevant nerves in response to various elemental stimuli; there is no judging or need for understanding of the output sent to the organoid (it is what human senses send to the brain, verbatim). The output of the neurons can also be recorded and reversed into an input, albeit with more challenge, since a conversion/mapping is required. Training might be the key; a trainer LLM is employed.

Bigger Organoids & Organoid Lifespan

https://www.livescience.com/health/neuroscience/in-a-1st-scientists-combine-ai-with-a-minibrain-to-make-hybrid-computer
