Description
Hi, following up on #311. I tried the new code on the master branch and was still having issues. Playing around with the test script from #312, I think the issue is that a memory leak occurs when sim.run_steps() is called many times in a row. By "memory leak" I mean the resident memory of the process as reported by top. In the code below, when the for loop runs many iterations, each .run_steps() call takes up a chunk of memory proportional to the number of time steps just executed. This accumulates throughout the run and can add up to a lot of memory. If .run_steps() is only called a few times, the memory is released when gc.collect() is called. If .run_steps() is called many times, the memory is not released correctly and stays reserved (Case 1 vs Case 2 below).
So I think the failure to release the memory is likely a bug, but is it also possible to avoid reserving that memory at all? In my use case I am running a reservoir for a long time, and even if the memory were correctly released at the end of the simulation, it currently takes up ~20 GB due to the length of the simulation, so I cannot scale the network any further. I have no probes in my simulation and do not need any past information about the reservoir, yet the memory requirement is proportional to the number of time steps taken, which seems like unwanted behavior.
Thanks!
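For anyone reproducing this: rather than eyeballing top, resident memory can be logged from inside the process. This helper is not part of the original script (the `rss_mb` name is mine), and it is Linux-only since it reads /proc:

```python
import os


def rss_mb():
    """Return the current resident set size of this process in MiB.

    Linux-only: reads /proc/self/statm, whose second field is the
    number of resident pages.
    """
    with open("/proc/self/statm") as f:
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE") / (1024 ** 2)


if __name__ == "__main__":
    print(f"RSS before allocation: {rss_mb():.1f} MiB")
    buf = bytearray(200 * 1024 * 1024)  # allocate (and zero-fill) 200 MiB
    print(f"RSS after allocation:  {rss_mb():.1f} MiB")
```

Dropping a `print(rss_mb())` after each sim.run_steps() in the script below makes the per-call jumps in cases 1-3 easy to see.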
import gc

import nengo
import numpy as np

import nengo_loihi


class NewClass:
    def __init__(self):
        self.input_size = 10
        self.n_neurons = 1024
        self.initialize_nengo()

    def initialize_nengo(self):
        network = nengo.Network()
        with network:

            def input_func(t):
                return np.ones(self.input_size)

            def output_func(t, x):
                self.output = x

            input_layer = nengo.Node(
                output=input_func, size_in=0, size_out=self.input_size
            )
            ensemble = nengo.Ensemble(
                n_neurons=self.n_neurons,
                dimensions=1,
            )
            output_layer = nengo.Node(
                output=output_func, size_in=self.n_neurons, size_out=0
            )
            conn_in = nengo.Connection(
                input_layer,
                ensemble.neurons,
                transform=np.ones((self.n_neurons, self.input_size)),
            )
            conn_out = nengo.Connection(ensemble.neurons, output_layer, synapse=None)
        self.network = network

    def run(self, case, num_resets):
        for i in range(num_resets):
            with nengo_loihi.Simulator(self.network, precompute=True) as sim:
                if case == 1:
                    """Memory increases throughout the run, and at the end the
                    total memory reserved up to that point is kept for future
                    simulations rather than released. The footprint does not
                    grow if future simulations use the same number of
                    run_steps; if they use more, memory starts growing again
                    once the extra run_steps have been called."""
                    # for _ in range(10000 * (i + 1)):  # to demonstrate increasing run_steps
                    for _ in range(10000):
                        sim.run_steps(10)
                elif case == 2:
                    """Memory increases as in case 1, but at the end of the
                    simulation it is correctly released and starts accumulating
                    again from zero."""
                    for _ in range(10):
                        sim.run_steps(10000)
                elif case == 3:
                    """Demonstrates the memory increasing in jumps after each
                    run_steps call."""
                    for _ in range(2):
                        sim.run_steps(50000)
                        input('run_steps done, press Enter')
                elif case == 4:
                    """Baseline case. Memory usage stays consistently low except
                    for a brief spike at the very end of run_steps, after which
                    it drops back down because gc.collect() is called and
                    correctly releases the memory."""
                    sim.run_steps(100000)
                    input('run_steps done, press Enter')
            print('finished iter', i + 1)
            # RAM accumulates until gc.collect() is called manually; it has to
            # be called outside the `with` block to release the memory.
            gc.collect()


num_resets = 3
nengo_class = NewClass()
case = 4
nengo_class.run(case, num_resets)