Re: my previous item on LLMs being RAM-hungry while iPhones ship with relatively little RAM: this certainly isn’t news to Apple. Back in December, a team of eight researchers from Apple published this paper, which states in its abstract:
This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, “windowing” strategically reduces data transfer by reusing previously activated neurons, and second, “row-column bundling”, tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5× and 20-25× increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
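To make the “windowing” idea concrete, here is a minimal sketch in Python, and emphatically not Apple’s implementation: keep in DRAM only the weight rows for neurons activated within the last few tokens, fetch newly activated rows from flash on demand, and evict rows that fall out of the window. The class name, the window size, and the memory-mapped file standing in for flash are all illustrative assumptions; the paper’s gains come from real LLM activations being sparse and highly overlapping from one token to the next, which random activations (used here only to keep the demo self-contained) understate.

import numpy as np

class FlashWeightCache:
    """Illustrative sketch of window-based weight caching (not Apple's code)."""

    def __init__(self, weights_on_flash, window_size=5):
        # weights_on_flash: (num_neurons, dim) matrix that stays on disk/flash.
        self.flash = weights_on_flash
        self.window_size = window_size
        self.dram = {}       # neuron index -> weight row currently cached in DRAM
        self.history = []    # active-neuron sets for the most recent tokens
        self.bytes_read = 0  # total flash -> DRAM traffic, for comparison

    def rows_for_token(self, active):
        # Transfer from flash only the rows not already cached by recent tokens.
        for i in sorted(active - self.dram.keys()):
            row = np.array(self.flash[i])
            self.dram[i] = row
            self.bytes_read += row.nbytes
        # Slide the window and evict rows no token in the window still uses.
        self.history.append(set(active))
        if len(self.history) > self.window_size:
            self.history.pop(0)
        live = set().union(*self.history)
        for i in list(self.dram):
            if i not in live:
                del self.dram[i]
        return {i: self.dram[i] for i in active}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_neurons, dim, per_token = 4096, 1024, 400
    # A memory-mapped file stands in for model weights living in flash storage.
    flash = np.memmap("ffn_up.dat", dtype=np.float32, mode="w+",
                      shape=(num_neurons, dim))
    flash[:] = rng.standard_normal((num_neurons, dim), dtype=np.float32)
    cache = FlashWeightCache(flash, window_size=5)
    naive_bytes = 0
    for _ in range(50):  # 50 tokens with (randomly) overlapping sparse activations
        active = set(int(i) for i in rng.choice(num_neurons, per_token, replace=False))
        cache.rows_for_token(active)
        naive_bytes += per_token * dim * 4  # naive: reload every active row each token
    print(f"windowed: {cache.bytes_read / 1e6:.1f} MB read, "
          f"naive: {naive_bytes / 1e6:.1f} MB read")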
Title: ‘LLM in a Flash: Efficient Large Language Model Inference With Limited Memory’ (PDF)
URL: https://arxiv.org/pdf/2312.11514.pdf
Source: Daring Fireball
Source URL: https://daringfireball.net/
Date: April 22, 2024 at 10:02PM