
Commit 85e5b6e

[ET-VK] Allocate memory for weight and activation tensors lazily (#13501)
Summary:

* Allocate memory for weight tensors right before the prepacking shader is dispatched, rather than while building the graph
* Move allocation of shared objects (i.e. memory for intermediate tensors) to occur after prepacking

## Motivation

Prevent screen blackout (Llama 3.2 1B) / device crash (Llama 3.2 3B) when running Llama 3.2 models on a Samsung Galaxy S24. This behaviour is caused by high peak memory usage while loading the model.

## Full Context

During model loading, the Vulkan delegate needs to store 3 copies of the constant data in memory at various points:

* source data obtained from loading the model
* staging buffer
* GPU texture/buffer

The general rationale of this change is to allocate memory for each copy only when necessary, to minimize the "overlap" during which all 3 copies exist at once.

### Current order of operations

Legend:

* `W` represents total weight nbytes
* `w` represents weight nbytes for one tensor
* `A` represents total activations nbytes
* `M` represents an approximation of the total memory footprint

First, the model file is loaded. Then, when building the compute graph, for each weight tensor:

1. Weight data is loaded from the NamedDataMap (`M = W`)
2. The GPU texture/buffer for the weight is initialized and its memory allocated (`M = 2W`)
3. After building the graph, `graph->prepare()` is called, which currently allocates memory for the activation tensors as well (`M = 2W + A`)

Then, during the prepacking stage, each weight tensor is copied individually:

1. A staging buffer is initialized (`M = 2W + A + w`)
2. The CPU weight data is copied to staging and then freed (`M = 2W + A`)
3. A compute shader is dispatched to copy staging to the GPU texture/buffer, and the staging buffer is freed (`M = 2W + A - w`)

The peak usage in mainline is therefore `M = 2W + A + w`.

### Revised order of operations

This change revises the order of operations. When building the compute graph, for each weight tensor:

1. Weight data is loaded from the NamedDataMap (`M = W`)
2. The GPU texture/buffer for the weight is initialized, but **memory is not allocated** (`M = W`)

Then, during the prepacking stage, each weight tensor is copied individually:

1. A staging buffer is initialized (`M = W + w`)
2. **Memory is allocated for the GPU texture/buffer** (`M = W + 2w`)
3. The CPU weight data is copied to staging and then freed (`M = W + w`)
4. A compute shader is dispatched to copy staging to the GPU texture/buffer, and the staging buffer is freed (`M = W`)

**Only after all prepacking operations complete is activation memory allocated** (`M = W + A`).

Under this scheme, peak memory is reduced to `M = W + A` (or `M = W + 2w` if `2w > A`), which is the theoretical minimum, or at least very close to it.

Test Plan:

## Logging Memory Usage

Using

```
#include <cstdint>
#include <fstream>
#include <string>

uint64_t getVmRssInKB() {
  std::ifstream statusFile("/proc/self/status");
  std::string l, num;
  while (std::getline(statusFile, l)) {
    if (l.substr(0, 5) == "VmRSS") {
      size_t pos = l.find_first_of("0123456789");
      num = l.substr(pos);
      break;
    }
  }
  uint64_t vmRssInKB = std::stoi(num);
  return vmRssInKB;
}

uint64_t getVmaStatsInKB() {
  auto stats =
      vkcompute::api::context()->adapter_ptr()->vma().get_memory_statistics();
  uint64_t vmaBlockInKB = stats.total.statistics.blockBytes >> 10;
  return vmaBlockInKB;
}
```

to log the memory footprint at various points of inference when running the llama_runner binary with Llama 3.2 1B, we can compare the memory footprint with and without these changes.

With changes: P1908051860 (Meta only)

```
Memory usage before model compilation: 1115760 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924832 KB (VmRSS), 17920 KB (VMA)
Memory usage after graph preparation: 1935312 KB (VmRSS), 17920 KB (VMA)
Memory usage prepack start: 1935312 KB, VMA Block: 17920 KB
Memory usage after prepack operations: 1372376 KB (VmRSS), 2330528 KB (VMA)
Memory usage before execute: 1372804 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1376916 KB (VmRSS), 2330528 KB (VMA)
```

Without changes: P1908054759 (Meta only)

```
Memory usage before model compilation: 1114784 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924432 KB (VmRSS), 962464 KB (VMA)
Memory usage after graph preparation: 1922916 KB (VmRSS), 2326432 KB (VMA)
Memory usage prepack start: 1922916 KB, VMA Block: 2326432 KB
Memory usage after prepack operations: 1359180 KB (VmRSS), 2330528 KB (VMA)
Memory usage before execute: 1359492 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1363636 KB (VmRSS), 2330528 KB (VMA)
```

The logs show how these changes reduce peak memory: the VMA footprint now grows gradually while the model loads, as VmRSS gradually shrinks, so the two never peak at the same time. Without these changes, the VMA footprint reaches its peak right after the graph is initialized. Visually, it can also be verified that the Samsung Galaxy S24's screen no longer blacks out while loading the model.

Differential Revision: [D80460033](https://our.internmc.facebook.com/intern/diff/D80460033)

[ghstack-poisoned]
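To make the peak-memory accounting concrete, here is a small standalone sketch; the `W`, `w`, and `A` values below are made up for illustration, not measurements from the logs above:

```
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical sizes in MB (illustrative only, not measured values).
  const uint64_t W = 1000; // total weight nbytes
  const uint64_t w = 50;   // nbytes of a single weight tensor
  const uint64_t A = 1300; // total activation nbytes

  // Mainline peak: both weight copies, activations, and one staging buffer.
  const uint64_t old_peak = 2 * W + A + w;
  // Revised peak: weights plus activations, or weights plus two transient
  // copies of the largest single tensor, whichever is larger.
  const uint64_t new_peak = std::max(W + A, W + 2 * w);

  std::printf("mainline peak: %llu MB, revised peak: %llu MB\n",
              (unsigned long long)old_peak, (unsigned long long)new_peak);
  return 0;
}
```

With these example numbers, the mainline peak is 3350 MB versus 2300 MB for the revised scheme.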
1 parent 29a8612 commit 85e5b6e

File tree

10 files changed (+220, -39 lines)


backends/vulkan/runtime/api/containers/Tensor.cpp

Lines changed: 22 additions & 0 deletions
```
@@ -897,6 +897,16 @@ VkMemoryRequirements vTensor::get_memory_requirements() const {
   return {};
 }
 
+bool vTensor::memory_is_bound() const {
+  switch (storage_type()) {
+    case utils::kBuffer:
+      return storage_->buffer_.has_memory();
+    case utils::kTexture2D:
+    case utils::kTexture3D:
+      return storage_->image_.has_memory();
+  }
+}
+
 void vTensor::bind_allocation(const vkapi::Allocation& allocation) {
   switch (storage_type()) {
     case utils::kBuffer:
@@ -909,6 +919,18 @@ void vTensor::bind_allocation(const vkapi::Allocation& allocation) {
   }
 }
 
+void vTensor::acquire_allocation(vkapi::Allocation&& allocation) {
+  switch (storage_type()) {
+    case utils::kBuffer:
+      storage_->buffer_.acquire_allocation(std::move(allocation));
+      break;
+    case utils::kTexture2D:
+    case utils::kTexture3D:
+      storage_->image_.acquire_allocation(std::move(allocation));
+      break;
+  }
+}
+
 void vTensor::update_metadata() {
   numel_ = utils::multiply_integers(sizes_);
   strides_ = calculate_strides(sizes_, dim_order_);
```

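For orientation, the two new methods are designed to compose into a lazy-binding pattern. Below is a minimal sketch, assuming the surrounding ExecuTorch Vulkan context (`tensor` and `vma` are placeholders for an `api::vTensor` and the adapter's VMA wrapper); it mirrors `ComputeGraph::create_dedicated_allocation_for` in the ComputeGraph.cpp diff further down:

```
// Sketch only: lazily give a tensor a dedicated allocation if its underlying
// VkBuffer/VkImage was created without memory.
if (!tensor.memory_is_bound()) {
  VmaAllocationCreateInfo create_info = vma.gpuonly_resource_create_info();
  // create_allocation() returns an rvalue Allocation; acquire_allocation()
  // binds it to the resource and takes ownership, so the memory is released
  // when the tensor is destroyed.
  tensor.acquire_allocation(
      vma.create_allocation(tensor.get_memory_requirements(), create_info));
}
```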
backends/vulkan/runtime/api/containers/Tensor.h

Lines changed: 11 additions & 0 deletions
```
@@ -560,6 +560,12 @@ class vTensor final {
    */
   VmaAllocationCreateInfo get_allocation_create_info() const;
 
+  /*
+   * Checks if the tensor's underlying buffer or image resource is bound to a
+   * memory allocation.
+   */
+  bool memory_is_bound() const;
+
   /*
    * Return the VkMemoryRequirements of the underlying resource
    */
@@ -570,6 +576,11 @@ class vTensor final {
    */
   void bind_allocation(const vkapi::Allocation& allocation);
 
+  /*
+   * Binds and acquires an rvalue memory allocation.
+   */
+  void acquire_allocation(vkapi::Allocation&& allocation);
+
  private:
   /*
    * Assuming sizes, dim order, or axis mapping was modified, recompute all
```

backends/vulkan/runtime/graph/ComputeGraph.cpp

Lines changed: 39 additions & 4 deletions
```
@@ -356,8 +356,6 @@ ValueRef ComputeGraph::add_tensor(
     const utils::GPUMemoryLayout memory_layout,
     const int64_t shared_object_idx,
     const utils::AxisMapLayout axis_map_layout) {
-  bool allocate_memory = shared_object_idx < 0;
-
   ValueRef idx(static_cast<int>(values_.size()));
   check_no_active_value_ptrs();
   values_.emplace_back(api::vTensor(
@@ -366,10 +364,10 @@ ValueRef ComputeGraph::add_tensor(
       dtype,
       storage_type,
       memory_layout,
-      allocate_memory,
+      false,
       axis_map_layout));
 
-  if (!allocate_memory) {
+  if (shared_object_idx >= 0) {
     get_shared_object(shared_object_idx).add_user(this, idx);
   }
   return idx;
@@ -626,6 +624,17 @@ SharedObject& ComputeGraph::get_shared_object(const int64_t idx) {
   return shared_objects_.at(idx);
 }
 
+void ComputeGraph::create_dedicated_allocation_for(const ValueRef idx) {
+  vTensorPtr tensor = get_tensor(idx);
+  if (!tensor->memory_is_bound()) {
+    VmaAllocationCreateInfo alloc_create_info =
+        context()->adapter_ptr()->vma().gpuonly_resource_create_info();
+    tensor->acquire_allocation(
+        context()->adapter_ptr()->vma().create_allocation(
+            tensor->get_memory_requirements(), alloc_create_info));
+  }
+}
+
 void ComputeGraph::update_descriptor_counts(
     const vkapi::ShaderInfo& shader_info,
     bool execute) {
@@ -873,6 +882,20 @@ void ComputeGraph::prepare_pipelines() {
       vkapi::ComputePipelineCache::Hasher>();
 }
 
+void ComputeGraph::prepare_pipelines() {
+  for (std::unique_ptr<PrepackNode>& node : prepack_nodes_) {
+    node->prepare_pipelines(this);
+  }
+  for (std::unique_ptr<ExecuteNode>& node : execute_nodes_) {
+    node->prepare_pipelines(this);
+  }
+  context_->pipeline_cache().create_pipelines(pipeline_descriptors_);
+
+  pipeline_descriptors_ = std::unordered_set<
+      vkapi::ComputePipelineCache::Key,
+      vkapi::ComputePipelineCache::Hasher>();
+}
+
 void ComputeGraph::submit_current_cmd(const bool final_use) {
   context_->submit_cmd_to_gpu(VK_NULL_HANDLE, final_use);
 }
@@ -952,6 +975,18 @@ void ComputeGraph::prepack() {
   submit_current_cmd_and_wait(/*final_use=*/true);
   context_->flush();
   staging_nbytes_in_cmd_ = 0;
+
+  // Initialize allocations for intermediate tensors
+  for (SharedObject& shared_object : shared_objects_) {
+    shared_object.allocate(this);
+    shared_object.bind_users(this);
+  }
+  // Make sure all remaining tensors have allocations
+  for (int i = 0; i < values_.size(); i++) {
+    if (values_.at(i).isTensor()) {
+      create_dedicated_allocation_for(i);
+    }
+  }
 }
 
 void ComputeGraph::execute() {
```

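Taken together, the changes above reorder model loading as sketched below. This is a condensed restatement of the diff and commit message, not verbatim code; the comments track the `M` accounting from the summary:

```
// Condensed ordering sketch (not verbatim code from this commit):
//
// 1. Graph building: add_tensor() now always constructs vTensors with
//    allocate_memory = false, so only source weight data is resident (M = W).
// 2. Prepacking: each PrepackNode::encode() allocates the packed tensor's
//    GPU memory just-in-time, so on top of W only one staging buffer and one
//    packed tensor are ever live at once (peak M = W + 2w).
// 3. End of prepack(): shared objects (activation memory) are allocated and
//    bound, and any still-unbound tensor gets a dedicated allocation
//    (M = W + A).
graph->prepack();
```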
backends/vulkan/runtime/graph/ComputeGraph.h

Lines changed: 7 additions & 0 deletions
```
@@ -827,6 +827,13 @@ class ComputeGraph final {
 
   SharedObject& get_shared_object(const int64_t idx);
 
+  /*
+   * Creates a dedicated memory allocation for a vTensor value, and has the
+   * tensor acquire the allocation object. If the tensor is already bound to a
+   * memory allocation, this function will be a no-op.
+   */
+  void create_dedicated_allocation_for(const ValueRef idx);
+
   //
   // Graph Preparation
   //
```

backends/vulkan/runtime/graph/ops/PrepackNode.cpp

Lines changed: 4 additions & 0 deletions
```
@@ -97,6 +97,10 @@ void PrepackNode::encode(ComputeGraph* graph) {
   }
 
   {
+    // If the vTensor is not yet bound to a memory allocation, create a new
+    // one and acquire it.
+    graph->create_dedicated_allocation_for(packed_);
+
     vkapi::PipelineBarrier pipeline_barrier{};
     vkapi::DescriptorSet descriptor_set = context->get_descriptor_set(
         shader_, local_workgroup_size_, spec_vars_, push_constants_offset);
```

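With this hook in place, the per-tensor accounting during prepacking follows the commit message's legend (`w` is the current tensor's nbytes):

```
M = W        packed tensor exists but owns no memory yet
M = W + w    staging buffer initialized
M = W + 2w   create_dedicated_allocation_for(packed_) binds GPU memory
M = W + w    CPU copy of this weight freed after the copy into staging
M = W        staging buffer freed once the copy shader is dispatched
```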
backends/vulkan/runtime/vk_api/memory/Buffer.cpp

Lines changed: 38 additions & 5 deletions
```
@@ -20,6 +20,7 @@ VulkanBuffer::VulkanBuffer()
       allocator_(VK_NULL_HANDLE),
       memory_{},
       owns_memory_(false),
+      memory_bundled_(false),
       is_copy_(false),
       handle_(VK_NULL_HANDLE) {}
 
@@ -33,6 +34,7 @@ VulkanBuffer::VulkanBuffer(
       allocator_(vma_allocator),
       memory_{},
       owns_memory_(allocate_memory),
+      memory_bundled_(allocate_memory),
       is_copy_(false),
       handle_(VK_NULL_HANDLE) {
   // If the buffer size is 0, allocate a buffer with a size of 1 byte. This is
@@ -77,6 +79,7 @@ VulkanBuffer::VulkanBuffer(
       allocator_(other.allocator_),
       memory_(other.memory_),
       owns_memory_(false),
+      memory_bundled_(false),
       is_copy_(true),
       handle_(other.handle_) {
   // TODO: set the offset and range appropriately
@@ -91,6 +94,7 @@ VulkanBuffer::VulkanBuffer(VulkanBuffer&& other) noexcept
       allocator_(other.allocator_),
       memory_(std::move(other.memory_)),
       owns_memory_(other.owns_memory_),
+      memory_bundled_(other.memory_bundled_),
       is_copy_(other.is_copy_),
       handle_(other.handle_) {
   other.handle_ = VK_NULL_HANDLE;
@@ -99,16 +103,19 @@ VulkanBuffer::VulkanBuffer(VulkanBuffer&& other) noexcept
 VulkanBuffer& VulkanBuffer::operator=(VulkanBuffer&& other) noexcept {
   VkBuffer tmp_buffer = handle_;
   bool tmp_owns_memory = owns_memory_;
+  bool tmp_memory_bundled = memory_bundled_;
 
   buffer_properties_ = other.buffer_properties_;
   allocator_ = other.allocator_;
   memory_ = std::move(other.memory_);
   owns_memory_ = other.owns_memory_;
+  memory_bundled_ = other.memory_bundled_;
   is_copy_ = other.is_copy_;
   handle_ = other.handle_;
 
   other.handle_ = tmp_buffer;
   other.owns_memory_ = tmp_owns_memory;
+  other.memory_bundled_ = tmp_memory_bundled;
 
   return *this;
 }
@@ -119,14 +126,22 @@ VulkanBuffer::~VulkanBuffer() {
   // ownership of the underlying resource.
   if (handle_ != VK_NULL_HANDLE && !is_copy_) {
     if (owns_memory_) {
-      vmaDestroyBuffer(allocator_, handle_, memory_.allocation);
+      if (memory_bundled_) {
+        vmaDestroyBuffer(allocator_, handle_, memory_.allocation);
+        // Prevent the underlying memory allocation from being freed; it was
+        // freed by vmaDestroyBuffer
+        memory_.allocation = VK_NULL_HANDLE;
+      } else {
+        vkDestroyBuffer(this->device(), handle_, nullptr);
+        // Allow the underlying memory allocation to be freed by the
+        // destructor of the Allocation class
+      }
     } else {
       vkDestroyBuffer(this->device(), handle_, nullptr);
+      // Prevent the underlying memory allocation from being freed since this
+      // object doesn't own it
+      memory_.allocation = VK_NULL_HANDLE;
     }
-    // Prevent the underlying memory allocation from being freed; it was either
-    // freed by vmaDestroyBuffer, or this resource does not own the underlying
-    // memory
-    memory_.allocation = VK_NULL_HANDLE;
   }
 }
 
@@ -136,6 +151,24 @@ VmaAllocationInfo VulkanBuffer::allocation_info() const {
   return info;
 }
 
+void VulkanBuffer::bind_allocation_impl(const Allocation& memory) {
+  VK_CHECK_COND(!memory_, "Cannot bind an already bound allocation!");
+  if (!is_copy_) {
+    VK_CHECK(vmaBindBufferMemory(allocator_, memory.allocation, handle_));
+  }
+}
+
+void VulkanBuffer::bind_allocation(const Allocation& memory) {
+  bind_allocation_impl(memory);
+  memory_.allocation = memory.allocation;
+}
+
+void VulkanBuffer::acquire_allocation(Allocation&& memory) {
+  bind_allocation_impl(memory);
+  memory_ = std::move(memory);
+  owns_memory_ = true;
+}
+
 VkMemoryRequirements VulkanBuffer::get_memory_requirements() const {
   VkMemoryRequirements memory_requirements;
   vkGetBufferMemoryRequirements(this->device(), handle_, &memory_requirements);
```

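The destructor now distinguishes three cases. As a reading aid, here is a hedged restatement of that decision logic as a standalone function (the names are invented for illustration; this is not code from the commit):

```
#include <cassert>

// Illustrative only: the three destruction paths of VulkanBuffer, keyed on
// the two ownership flags. (owns_memory == false && memory_bundled == true
// cannot occur.)
enum class DestroyPath {
  kVmaDestroyBuffer,   // bundled: vmaDestroyBuffer frees buffer and memory
  kBufferOnlyOwned,    // acquired: free VkBuffer; Allocation dtor frees memory
  kBufferOnlyBorrowed, // bound: free VkBuffer; external owner frees memory
};

DestroyPath pick_destroy_path(bool owns_memory, bool memory_bundled) {
  if (owns_memory && memory_bundled) {
    return DestroyPath::kVmaDestroyBuffer;
  }
  if (owns_memory) {
    return DestroyPath::kBufferOnlyOwned;
  }
  return DestroyPath::kBufferOnlyBorrowed;
}

int main() {
  assert(pick_destroy_path(true, true) == DestroyPath::kVmaDestroyBuffer);
  assert(pick_destroy_path(true, false) == DestroyPath::kBufferOnlyOwned);
  assert(pick_destroy_path(false, false) == DestroyPath::kBufferOnlyBorrowed);
  return 0;
}
```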
backends/vulkan/runtime/vk_api/memory/Buffer.h

Lines changed: 19 additions & 7 deletions
```
@@ -100,6 +100,10 @@ class VulkanBuffer final {
   Allocation memory_;
   // Indicates whether the underlying memory is owned by this resource
   bool owns_memory_;
+  // Indicates whether the allocation for the buffer was created with the
+  // buffer via vmaCreateBuffer; if this is false, the memory is owned but was
+  // bound separately via vmaBindBufferMemory
+  bool memory_bundled_;
   // Indicates whether this VulkanBuffer was copied from another VulkanBuffer,
   // thus it does not have ownership of the underlying VkBuffer
   bool is_copy_;
@@ -162,13 +166,21 @@ class VulkanBuffer final {
     return (handle_ == other.handle_) && is_copy_;
   }
 
-  inline void bind_allocation(const Allocation& memory) {
-    VK_CHECK_COND(!memory_, "Cannot bind an already bound allocation!");
-    if (!is_copy_) {
-      VK_CHECK(vmaBindBufferMemory(allocator_, memory.allocation, handle_));
-    }
-    memory_.allocation = memory.allocation;
-  }
+ private:
+  void bind_allocation_impl(const Allocation& memory);
+
+ public:
+  /*
+   * Given a memory allocation, bind it to the underlying VkBuffer. The
+   * lifetime of the memory allocation is assumed to be managed externally.
+   */
+  void bind_allocation(const Allocation& memory);
+
+  /*
+   * Given an rvalue memory allocation, bind it to the underlying VkBuffer and
+   * also acquire ownership of the memory allocation.
+   */
+  void acquire_allocation(Allocation&& memory);
 
   VkMemoryRequirements get_memory_requirements() const;
```

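To summarize the ownership contract of the two public entry points, a short hedged usage sketch (`vma`, `info`, `buf`, and `other_buf` are placeholders, not identifiers from this commit):

```
// bind_allocation: the buffer borrows the memory. The Allocation must outlive
// the buffer, e.g. when several tensors alias one SharedObject allocation.
Allocation shared = vma.create_allocation(buf.get_memory_requirements(), info);
buf.bind_allocation(shared);

// acquire_allocation: the buffer takes ownership of the rvalue Allocation
// (owns_memory_ = true, memory_bundled_ = false) and frees it on destruction.
other_buf.acquire_allocation(
    vma.create_allocation(other_buf.get_memory_requirements(), info));
```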