
‘LLM in a Flash: Efficient Large Language Model Inference With Limited Memory’ (PDF)

Re: my previous item on LLMs being RAM-hungry while iPhones are relatively low on RAM, this certainly isn’t news to Apple. Back in December, a team of eight researchers from Apple published this paper, which states in its abstract:

This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, “windowing” strategically reduces data transfer by reusing previously activated neurons, and second, “row-column bundling”, tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5× and 20-25× increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
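
To make the two techniques concrete, here is a minimal, hypothetical sketch of how “windowing” and “row-column bundling” might look, assuming a ReLU-sparse FFN layer, a numpy memmap standing in for flash, and invented names (ffn_bundles.bin, predict_active_neurons, the toy sizes and top-k count). It illustrates the ideas in the abstract; it is not the paper’s implementation.

```python
# Illustrative sketch only (not Apple's implementation). Flash is simulated with
# a numpy memmap; each "neuron" bundle stores the FFN up-projection row together
# with the matching down-projection column so one contiguous read fetches both.
import numpy as np

D_MODEL, D_FFN, WINDOW = 64, 256, 5          # toy sizes; WINDOW = tokens of reuse

# Simulated flash: one bundle per neuron = [up_row (D_MODEL) | down_col (D_MODEL)]
flash = np.memmap("ffn_bundles.bin", dtype=np.float32, mode="w+",
                  shape=(D_FFN, 2 * D_MODEL))
flash[:] = np.random.randn(D_FFN, 2 * D_MODEL).astype(np.float32)

dram_cache = {}        # neuron id -> bundle currently resident in DRAM
recent_active = []     # active-neuron sets for the last WINDOW tokens

def predict_active_neurons(x):
    """Stand-in for a sparsity predictor: guess which FFN neurons will fire."""
    scores = flash[:, :D_MODEL] @ x          # illustration only; a real predictor avoids this read
    return set(np.argsort(scores)[-32:])     # top-k treated as the "active" set

def ffn_forward(x):
    active = predict_active_neurons(x)

    # Windowing: only neurons not already resident from the last WINDOW tokens
    # are read from flash; everything else is reused from DRAM.
    to_load = active - dram_cache.keys()
    for i in sorted(to_load):
        dram_cache[i] = np.array(flash[i])   # row-column bundle: one contiguous read

    # Evict neurons that have not been active within the sliding window.
    recent_active.append(active)
    if len(recent_active) > WINDOW:
        recent_active.pop(0)
    keep = set().union(*recent_active)
    for i in list(dram_cache):
        if i not in keep:
            del dram_cache[i]

    # Sparse FFN: only the loaded neurons contribute to the output.
    y = np.zeros(D_MODEL, dtype=np.float32)
    for i in active:
        up_row, down_col = dram_cache[i][:D_MODEL], dram_cache[i][D_MODEL:]
        y += max(up_row @ x, 0.0) * down_col  # ReLU activation times down-projection column
    return y

print(ffn_forward(np.random.randn(D_MODEL).astype(np.float32))[:4])
```

The design point the abstract highlights shows up in the bundle layout: because a neuron’s up-projection row and down-projection column are always needed together, storing them contiguously turns two small flash reads into one larger sequential read, which is exactly what flash is good at.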

 ★ 

