Conversation


SS-JIA (Contributor) commented Aug 18, 2025

Stack from ghstack (oldest at bottom):

Summary:

  • Allocate memory for weight tensors right before the prepacking shader is dispatched, rather than while building the graph
  • Move allocation of shared objects (i.e. memory for intermediate tensors) to occur after prepacking

Motivation

Prevent screen blackout (Llama 3.2 1B) / device crash (Llama 3.2 3B) when running Llama 3.2 models on a Samsung Galaxy S24. This behaviour is caused by high peak memory usage while loading the model.

Full Context

During model loading, the Vulkan delegate needs to store 3 copies of constant data in memory at various points:

  • source data obtained from loading the model
  • staging buffer
  • GPU texture/buffer

The general rationale of this change is to allocate memory for each copy only when necessary to minimize the "overlap" when all 3 exist at once.

Current order of operations

Legend:

  • W represents total weight nbytes
  • w represents weight nbytes for one tensor
  • A represents total activations nbytes
  • M represents approximation of total memory footprint

First, the model file is loaded.

Then, while building the compute graph, for each weight tensor:

  1. Weight data is loaded from NamedDataMap (M = W)
  2. GPU texture/buffer for weight is initialized + memory allocated (M = 2W)
  3. After building the graph, graph->prepare() is called, which currently also allocates memory for the activation tensors (M = 2W + A)

Then, during the prepacking stage, each weight tensor is copied individually:

  1. Staging buffer initialized (M = 2W + A + w)
  2. Copy CPU weight data to staging + CPU weight data is freed (M = 2W + A)
  3. Compute shader dispatch to copy staging to GPU texture/buffer + free staging buffer (M = 2W + A - w)

The peak usage in mainline is therefore M = 2W + A + w. A small standalone sketch that replays this schedule is included below.
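
To make the bookkeeping above concrete, here is a minimal standalone sketch (not part of the delegate code) that replays the mainline schedule and tracks the running footprint M; the tensor count and byte sizes are illustrative placeholders, not real model values.

```
// Minimal standalone sketch: replay the mainline allocation schedule described
// above and track the peak footprint M. All sizes are illustrative placeholders.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  const uint64_t w = 256;                    // nbytes of one weight tensor (placeholder)
  const std::vector<uint64_t> weights(4, w); // four equally sized weight tensors
  const uint64_t A = 512;                    // total activation nbytes (placeholder)

  uint64_t W = 0;
  for (uint64_t n : weights) W += n;         // total weight nbytes

  uint64_t M = W;                            // source data resident after model load
  uint64_t peak = M;
  auto track = [&](int64_t delta) {
    M = static_cast<uint64_t>(static_cast<int64_t>(M) + delta);
    peak = std::max(peak, M);
  };

  track(W);  // graph building: GPU memory for every weight allocated up front -> 2W
  track(A);  // graph->prepare(): activation memory allocated                  -> 2W + A

  for (uint64_t n : weights) {               // prepacking, one weight at a time
    track(n);                                // staging buffer initialized
    track(-static_cast<int64_t>(n));         // CPU copy of this weight freed
    track(-static_cast<int64_t>(n));         // shader dispatched, staging buffer freed
  }

  std::cout << "peak M = " << peak << ", formula 2W + A + w = " << (2 * W + A + w) << "\n";
}
```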

Revised order of operations

This change revises the order of operations:

  1. Weight data is loaded from NamedDataMap (M = W)
  2. GPU texture/buffer for weight is initialized, but memory is not allocated (M = W)

Then, during the prepacking stage, each weight tensor is copied individually:

  1. Staging buffer initialized (M = W + w)
  2. Memory allocated for GPU texture/buffer (M = W + 2w)
  3. Copy CPU weight data to staging + CPU weight data is freed (M = W + w)
  4. Compute shader dispatch to copy staging to GPU texture/buffer + free staging buffer (M = W)

Only after all prepacking operations complete is activation memory allocated (M = W + A).

Under this scheme, peak memory is reduced to M = W + A (or M = W + 2w if 2w > A), which is at, or at least very close to, the theoretical minimum. A counterpart sketch for the revised schedule is included below.
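
For comparison, here is the counterpart sketch for the revised schedule, with per-tensor GPU allocation moved inside the prepack loop and activation memory allocated only at the end; sizes are again illustrative placeholders.

```
// Counterpart sketch for the revised schedule: GPU memory is allocated per tensor
// inside the prepack loop, and activation memory only after all prepacking completes.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  const uint64_t w = 256;                    // nbytes of one weight tensor (placeholder)
  const std::vector<uint64_t> weights(4, w); // four equally sized weight tensors
  const uint64_t A = 512;                    // total activation nbytes (placeholder)

  uint64_t W = 0;
  for (uint64_t n : weights) W += n;

  uint64_t M = W;  // source data resident; GPU textures/buffers exist but are not yet backed
  uint64_t peak = M;
  auto track = [&](int64_t delta) {
    M = static_cast<uint64_t>(static_cast<int64_t>(M) + delta);
    peak = std::max(peak, M);
  };

  for (uint64_t n : weights) {               // prepacking, one weight at a time
    track(n);                                // staging buffer initialized           -> W + w
    track(n);                                // GPU memory for this weight allocated -> W + 2w
    track(-static_cast<int64_t>(n));         // CPU copy of this weight freed        -> W + w
    track(-static_cast<int64_t>(n));         // shader dispatched, staging freed     -> W
  }
  track(A);  // activation memory allocated only after prepacking -> W + A

  std::cout << "peak M = " << peak << ", formula max(W + 2w, W + A) = "
            << std::max(W + 2 * w, W + A) << "\n";
}
```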

Test Plan:

Logging Memory Usage

Using

```
// Note: requires #include <cstdint>, <fstream>, and <string>.
uint64_t getVmRssInKB() {
  // Parse the VmRSS (resident set size) line from /proc/self/status.
  std::ifstream statusFile("/proc/self/status");
  std::string l, num;
  while (std::getline(statusFile, l)) {
    if (l.substr(0, 5) == "VmRSS") {
      size_t pos = l.find_first_of("0123456789");
      num = l.substr(pos);
      break;
    }
  }
  uint64_t vmRssInKB = std::stoull(num);
  return vmRssInKB;
}

uint64_t getVmaStatsInKB() {
  // Total bytes in memory blocks currently allocated by VMA, converted to KB.
  auto stats =
      vkcompute::api::context()->adapter_ptr()->vma().get_memory_statistics();
  uint64_t vmaBlockInKB = stats.total.statistics.blockBytes >> 10;
  return vmaBlockInKB;
}
```

to log the memory footprint at various points while running the llama_runner binary with Llama 3.2 1B, we can compare memory usage with and without these changes. A sketch of how these helpers might be wired into log statements follows.
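
For reference, the checkpoints in the traces below could be produced by a small wrapper like the following; this is a sketch of the instrumentation, not the actual runner code, and assumes the two helpers above plus <iostream> and <string> are available.

```
// Hypothetical wrapper combining the two helpers above into one log line.
// Checkpoint labels mirror the traces below, e.g. "before model compilation".
void logMemoryUsage(const std::string& checkpoint) {
  std::cout << "Memory usage " << checkpoint << ": " << getVmRssInKB()
            << " KB (VmRSS), " << getVmaStatsInKB() << " KB (VMA)" << std::endl;
}

// Example call sites:
//   logMemoryUsage("before model compilation");
//   logMemoryUsage("after prepack operations");
```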

With changes: P1908051860 (Meta only)

```
Memory usage before model compilation: 1115760 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924832 KB (VmRSS), 17920 KB (VMA)
Memory usage after graph preparation: 1935312 KB (VmRSS), 17920 KB (VMA)
Memory usage prepack start: 1935312 KB, VMA Block: 17920 KB
Memory usage after prepack operations: 1372376 KB (VmRSS), 2330528 KB (VMA)

Memory usage before execute: 1372804 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1376916 KB (VmRSS), 2330528 KB (VMA)
```

Without changes: P1908054759 (Meta only)

```
Memory usage before model compilation: 1114784 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924432 KB (VmRSS), 962464 KB (VMA)
Memory usage after graph preparation: 1922916 KB (VmRSS), 2326432 KB (VMA)
Memory usage prepack start: 1922916 KB, VMA Block: 2326432 KB
Memory usage after prepack operations: 1359180 KB (VmRSS), 2330528 KB (VMA)

Memory usage before execute: 1359492 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1363636 KB (VmRSS), 2330528 KB (VMA)
```

These traces show how peak memory is reduced: with these changes, the VMA footprint increases gradually while the model loads and VmRSS gradually decreases, whereas without them the VMA footprint already reaches its peak right after the graph is initialized.

Visually, it can also be verified that the Samsung Galaxy S24's screen no longer blacks out while loading the model.

Differential Revision: D80460033


pytorch-bot bot commented Aug 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13474

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 961f9a3 with merge base 8ef9595:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

  • pull / test-binary-size-linux-gcc / linux-job (gh) (trunk failure)
    /pytorch/executorch/kernels/portable/cpu/op_stack.cpp:129:26: error: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘long unsigned int’} and ‘ssize_t’ {aka ‘long int’} [-Werror=sign-compare]

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA added a commit that referenced this pull request Aug 18, 2025
meta-cla bot added the CLA Signed label Aug 18, 2025

SS-JIA commented Aug 18, 2025

@SS-JIA has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

SS-JIA pushed a commit that referenced this pull request Aug 18, 2025
@facebook-github-bot commented:
This pull request was exported from Phabricator. Differential Revision: D80460033

SS-JIA pushed a commit that referenced this pull request Aug 18, 2025
@facebook-github-bot commented:
This pull request was exported from Phabricator. Differential Revision: D80460033

trivedivivek added the release notes: vulkan label Aug 18, 2025
SS-JIA pushed a commit that referenced this pull request Aug 18, 2025
@facebook-github-bot commented:
This pull request was exported from Phabricator. Differential Revision: D80460033

facebook-github-bot merged commit ac46761 into gh/SS-JIA/292/base Aug 19, 2025 (103 of 106 checks passed)
facebook-github-bot deleted the gh/SS-JIA/292/head branch August 19, 2025 02:24
SS-JIA added a commit that referenced this pull request Aug 19, 2025
Summary:
It seems #13474 was not merged correctly via the cherry pick bot. This PR manually syncs internal and fbcode.

[ghstack-poisoned]
SS-JIA added a commit that referenced this pull request Aug 19, 2025
ghstack-source-id: fe9c81a
Pull Request resolved: #13512
SS-JIA added a commit that referenced this pull request Aug 19, 2025
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #13512

agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request Aug 26, 2025
Labels

CLA Signed, fb-exported, release notes: vulkan

3 participants