Challenges in producing computers specific to building A.I. systems: the tasks are extremely computationally intensive. The GPU has essentially taken over the role of the traditional computer while remaining confined to a card in a slot, with the rest of the machine pointlessly duplicating the same components. The graphics card may become the motherboard at some stage, and the CPU as we know it may disappear. Speed-of-light computing: a computer cannot relay information faster than the speed of light permits. Parallel processing across many cores, as in GPUs and T(ensor)PUs.
GPU: most important. Cards like the RTX 3060 or RX 6600 significantly improve training speed; the Nvidia GeForce RTX 4090 offers 16,384 CUDA cores and 24GB of GDDR6X VRAM. Multiple cards can be interconnected with NVLink and InfiniBand to increase capacity even more, up to full clusters. Nvidia is kicking ass in this field.
CPU: not as important. Aim for a high core count and good single-core performance, e.g. AMD Ryzen and Intel Core i9 CPUs. CPU price scales with core count; 2024 parts range from 8 to 96 cores.
RAM: not as important, unless utilizing RAM disks to load everything into memory, in which case aim for something like 256GB.
Storage: not as important. A SATA SSD will be sufficient for most tasks, with NVMe SSDs for faster data access speeds.
Training LLMs is another story.
Power consumption: consider solar or wind, and site the hardware where renewable generation is maximal.
Distributed computing software and clustering: MPI (Message Passing Interface); frameworks such as TensorFlow Distributed, Spark, or Petals; cluster management software like Slurm or Torque. Horovod is a distributed training framework for libraries like TensorFlow, Keras, PyTorch, and Apache MXNet.
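As a minimal taste of what that looks like, here is a Horovod + PyTorch training sketch; the model, data, and learning rate are placeholders, and it assumes Horovod is installed and the script is started with horovodrun.

```python
# Minimal Horovod + PyTorch sketch; launch with e.g.:
#   horovodrun -np 4 python train.py
# The model and random data are placeholders.
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                                        # one process per GPU/core
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())

model = nn.Linear(512, 10)
if torch.cuda.is_available():
    model = model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3 * hvd.size())

# Wrap the optimizer so gradients are averaged across all workers,
# and make sure every worker starts from the same weights.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(10):
    x = torch.randn(32, 512)
    y = torch.randint(0, 10, (32,))
    if torch.cuda.is_available():
        x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()                               # allreduce happens here
    optimizer.step()
```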
RAM (server board spec examples): 24x 240-pin DDR3 DIMM sockets, up to 768GB DDR3 ECC Registered memory (RDIMM) in 24 DIMM sockets; or up to 1TB 3DS ECC RDIMM DDR4-2400MHz / up to 2TB 3DS ECC LRDIMM in 16 DIMM slots.
16x 128GB 3DS LRDIMM modules give a total of 2TB RAM, with the modules operating at 2400MHz.
PSU: Corsair AX1600i (1600W, sufficient for multiple high-power GPUs)
Graphics Cards: 3 x NVIDIA RTX 3090 (24GB VRAM each)
ASUS ROG Strix TRX40-E Gaming: 8x DIMM (max 256GB DDR4), 3x PCIe x16; Ryzen Threadripper up to 64 cores/128 threads.
Cooling: Custom liquid cooling loop to maintain optimal temperatures
Gigabyte MZ73-LM0: 16x DDR5 DIMM slots, PCIe 4.0, ~$2000.
8x PCIe slots at x16, populated with Tesla P40 cards.
ASRock WRX90 WS EVO: up to 2TB RAM, 7x PCIe x16 + 1x PCIe x8.
ASUS Pro WS WRX80E-SAGE SE WIFI II: up to 2TB RAM, 8x PCIe x16.
For a high-end multi-GPU setup, consider the following:
Motherboard: A motherboard with more than 4 PCIe x16 slots and NVLink support, or one supporting PCIe 5.0 for higher bandwidth.
Power Supply Unit (PSU): A robust PSU with enough power and connectors for multiple GPUs.
Cooling Solutions: Adequate cooling (both air and liquid cooling options) to manage the heat output of multiple GPUs.
Hook the machines up in a conventional network, then install a chosen distributed computing framework on each computer and configure it to recognize the other machines as part of the cluster (a minimal sketch follows below). Allocate one machine as a NAS, using a motherboard with the most onboard SATA ports plus PCIe SATA expansion cards. The other computers are about CPU cores, GPU cores, and maximum RAM.
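A minimal sketch of that configuration step, assuming torch.distributed is the chosen framework; the master address and port here are hypothetical, and each machine runs the same script with its own rank.

```python
# Hypothetical sketch: joining machines on a conventional network into one
# torch.distributed process group. The address/port are placeholders.
import torch
import torch.distributed as dist

def join_cluster(rank: int, world_size: int,
                 master_addr: str = "192.168.1.10", port: int = 29500) -> None:
    # Every node runs this with its own rank; rank 0 hosts the rendezvous.
    dist.init_process_group(
        backend="gloo",                    # use "nccl" if all nodes have GPUs
        init_method=f"tcp://{master_addr}:{port}",
        rank=rank,
        world_size=world_size,
    )
    # Sanity check: sum the ranks across the whole cluster.
    t = torch.tensor([float(rank)])
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"node {rank}: rank sum across cluster = {t.item()}")
```

The NAS node can then export the training data over the network so every worker sees the same files.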
When selecting graphics cards for running large language models (LLMs) locally, especially with the intention of using multiple GPUs, there are several important features and specifications to consider:
Key Features to Look For:
High VRAM: Aim for graphics cards with as much VRAM as possible. Since you're looking to run LLMs, more VRAM allows you to handle larger models and batch sizes.
CUDA Cores / Tensor Cores: More CUDA cores generally mean better parallel processing capabilities. Tensor cores (found in NVIDIA’s RTX and Tesla series) are specifically designed for deep learning tasks and can significantly speed up model training and inference.
NVLink Support: NVLink allows for high-bandwidth communication between GPUs, enabling efficient multi-GPU setups. This is crucial for model parallelism and reducing inter-GPU communication overhead.
Multi-GPU Scalability: Ensure the graphics card and your system support multi-GPU configurations (e.g., via SLI, NVLink, or PCIe slots).
FP16 / Mixed Precision Support: Cards that support FP16 or mixed-precision calculations can provide significant performance boosts for deep learning tasks by using less memory and speeding up computations (see the sketch after this list).
Cooling System: Efficient cooling is essential to maintain performance and prevent thermal throttling, especially in multi-GPU setups.
Driver and Software Support: Ensure the card is compatible with the deep learning frameworks you plan to use (e.g., PyTorch, TensorFlow) and that it has robust driver support.
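A minimal sketch of mixed-precision training in PyTorch via torch.cuda.amp; the tiny model and random data are placeholders:

```python
# Minimal FP16 mixed-precision sketch using torch.cuda.amp.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(32, 512, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)     # forward pass runs in FP16 where safe
    scaler.scale(loss).backward()       # scale loss to avoid FP16 underflow
    scaler.step(optimizer)              # unscales grads, skips step on inf/NaN
    scaler.update()
```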
Recommended GPU Models
NVIDIA RTX 30 Series (e.g., RTX 3090, RTX 3080):
High VRAM (e.g., 24GB on RTX 3090)
Tensor cores for deep learning
NVLink support (for RTX 3090)
NVIDIA A100:
Up to 80GB VRAM (in the PCIe version)
Advanced tensor cores
NVLink support
Designed specifically for AI workloads
NVIDIA Tesla V100:
Up to 32GB VRAM
Tensor cores
NVLink support
NVIDIA Quadro RTX 8000:
48GB VRAM
Tensor cores
NVLink support
NVLink 1.0 dates to 2014. In non-SXM2 form factors, specifically PCIe form factor NVIDIA GPUs, the card has a typical PCIe edge connector that slides into the PCIe slot on the motherboard (the connector is the edge of the PCB that slides into the motherboard slot). For a PCIe form factor GPU to support NVLink, there is an additional edge connector towards the top of the card, used to attach an NVLink bridge that allows high-speed communication between multiple GPUs in a system. Many other boards support 4-way SLI, meaning four x16 PCIe slots; beyond that, cluster multiple machines.
GeForce RTX 3090 ~ 24GB
NVIDIA Tesla K80 (no NVLink)
NVIDIA Tesla M40 (no NVLink)
NVIDIA Tesla P40 (no NVLink)
NVIDIA Tesla M10 32GB (no NVLink)
NVIDIA Quadro M6000 (2015) - 24 GB GDDR5 (no NVLink)
BIOS Settings: Ensure the BIOS is configured to support multi-GPU setups.
Driver Installation: Install the latest NVIDIA drivers that support multi-GPU configurations.
Framework Configuration: In your deep learning framework, configure the settings to utilize multiple GPUs (e.g., using torch.nn.DataParallel or torch.distributed in PyTorch).
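For example, a minimal PyTorch sketch of that last step using torch.nn.DataParallel (the simplest option; torch.distributed scales better across machines):

```python
# Minimal sketch: spread batches across all visible GPUs with DataParallel.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 10)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # each forward pass splits the batch
model = model.to(device)

x = torch.randn(64, 512, device=device)
logits = model(x)                    # outputs gathered onto the default GPU
print(logits.shape)
```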
Summary
By focusing on high VRAM, CUDA/Tensor cores, NVLink support, and efficient cooling, you can build a powerful multi-GPU setup capable of running large language models locally. Using high-end GPUs like the NVIDIA RTX 3090 or the A100 will provide the performance needed for demanding AI tasks.
Software
Software has become secondary to hardware, and software for A.I. would probably require grid computing in exchange for unrestricted model access. Each node would have to satisfy minimum requirements to be accepted into the grid. While the models are accessible to the grid, the secret sauce stays with the author. The grid acts as a workshop, holding the petabytes of training data, and as an A.I. training supercomputer. The result is dropped into the distributed leaderboard folder, where all the trained models go; all the models are restricted to the OS, and all the models are graded. A general user would go to the leaderboard folder and run the latest models. The incentive is to beat the best model. In the modern day, it is all about writing the white paper and presenting it to key people for support and funding. In the past, anyone could release and gain public support organically.
O.I. Architecture - organoid on chip support
Not big enough
Environments and systems where movement, synapses and vascularization occur
Not an ideal home environment, housing
Questions over lifespan
Module version
Interconnects
I/O Card, hardware interface
Software interface
nb: Organoids are real lifeforms.
Automating organoid maintenance
A pump, a reservoir, and an input/output attachment on the container housing the organoid. The pump (the heart) moves media from a reservoir into the input of the organoid's housing, and at the output the media goes back to the pump, so the media keeps circulating. Spent media is moved to a storage container where it is measured, filtered, and conditioned, then re-introduced. So four main objects are required: a slow pump; pipes and fittings from the pump to the organoid housing; a reservoir holding fresh media; and a container holding spent media. In the spent-media container, add media-grading detection, filtration, and media conditioning to recycle the media. Like a filter in a fish tank, this slow-moving pump keeps the water clean and oxygenated, removing impurities (a control-loop sketch follows below).
A peristaltic pump, also commonly known as a roller pump, is a type of positive displacement pump used for pumping a variety of fluids. The fluid is contained in a flexible tube fitted inside a circular pump casing. Most peristaltic pumps work through rotary motion, though linear peristaltic pumps have also been made.
Culture media filtration: sterilization as the media passes through the filter (the immune system), plus a measure of its viability, supplementing and cleaning the media of impurities.
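A hypothetical control-loop sketch of the circulation just described; every class, threshold, and sensor here is an illustrative stand-in, not a real device driver.

```python
# Hypothetical sketch of the perfusion loop described above. The pump and
# sensor classes are illustrative stand-ins; real hardware drivers differ.
import time

FLOW_ML_PER_MIN = 0.5          # slow circulation rate (assumed)
PH_OK = (7.0, 7.6)             # acceptable media pH window (assumed)

class PeristalticPump:
    def set_flow(self, ml_per_min: float) -> None:
        print(f"pump flow set to {ml_per_min} ml/min")

class MediaSensor:
    def read_ph(self) -> float:
        return 7.3             # stub reading

def recondition_media() -> None:
    # Stand-in for filtration, supplementing, and conditioning of spent media.
    print("reconditioning spent media")

def perfusion_loop(pump: PeristalticPump, sensor: MediaSensor) -> None:
    pump.set_flow(FLOW_ML_PER_MIN)         # the "heart": keep media moving
    while True:
        ph = sensor.read_ph()              # grade the spent media
        if not (PH_OK[0] <= ph <= PH_OK[1]):
            recondition_media()            # filter/condition before re-use
        time.sleep(60)                     # re-check once a minute

perfusion_loop(PeristalticPump(), MediaSensor())
```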
Ideally, we want to award these functions their organ names, and use and develop human-compatible artificial organs and machines to offshoot into the medical device industry. For example, a dialysis machine could be supplied to hospitals and work in that setting. Every time we do something, we must think of its application in general medicine and move in that direction, even if it poses extra challenges: for example, anastomosis methods and materials, and the degree of identical behavior. Bioreactors are commonly used for cell culture applications.
What an A.I Operating System (OS) might look like
Grid by default. The amount of data and processing required to train models and tinker with A.I. could utilize grid computing. Minimum requirements must be met to join the grid, and trained models are the reward. The grid would hold the petabytes of training data and the CPU cycles for distributed training. The club would probably need a minimum of 100TB storage, 32GB RAM, and 16 cores to join the grid (a sketch of such an admission check follows below). The models are tied to the OS and cannot be moved out. The grid maintainers would keep models at or exceeding current capability, and their use to generate video, images, and so on would be unrestricted.
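A sketch of that admission check; the 100TB / 32GB / 16-core thresholds come from the text above, and the probing code uses only the Python standard library (the RAM probe is Linux/Unix-only).

```python
# Sketch of a node admission check against the grid minimums stated above.
import os
import shutil

MIN_STORAGE_TB = 100
MIN_RAM_GB = 32
MIN_CORES = 16

def meets_grid_minimum(data_path: str = "/") -> bool:
    storage_tb = shutil.disk_usage(data_path).total / 1e12
    # Total physical RAM via sysconf (Linux/Unix only).
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    cores = os.cpu_count() or 0
    return (storage_tb >= MIN_STORAGE_TB
            and ram_gb >= MIN_RAM_GB
            and cores >= MIN_CORES)

print("node accepted" if meets_grid_minimum() else "node rejected")
```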
Various applications/software to leverage A.I. and O.I.
Simulation environments for A.I. training.
To store training data: distributed file systems, to spread and store the petabytes of training data across the grid.
To train the A.I.: utilize the many grid computing operations already in existence, and add a system-level one as well.