CUDA Out Of Memory #12722
Unanswered
akul-goyal asked this question in Help: Other Questions
Replies: 1 comment
One way I can think of optimizing your code is to modify the DataFrame in place:

```python
import pandas as pd

d = {"text": ["first text", "second text"], "foo": [1, 2]}
df = pd.DataFrame.from_dict(d)

for row in range(len(df)):
    df.at[row, "text"] = f"here {row}"
```

This overwrites each row's text with the new value (compare the original data frame with the one after the modification). In your case I think you can do:

```python
for row in range(len(df)):
    text = df.iloc[row]["text"]
    doc = nlp(text)
    df.at[row, "text"] = doc.text
```

I think that this should work.
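Putting the two snippets above together, here is a minimal runnable sketch of the in-place pattern. `clean_text` is a hypothetical stand-in for the spaCy call `nlp(text).text`; any function that maps a string to a string fits the same slot:

```python
import pandas as pd

# Hypothetical stand-in for the spaCy pipeline call `nlp(text).text`.
def clean_text(text: str) -> str:
    return text.strip().lower()

d = {"text": ["First Text ", " Second Text"], "foo": [1, 2]}
df = pd.DataFrame.from_dict(d)

# Overwrite each row's text in place instead of keeping both the raw
# and the processed versions around at the same time.
for row in range(len(df)):
    df.at[row, "text"] = clean_text(df.iloc[row]["text"])

print(df["text"].tolist())  # → ['first text', 'second text']
```

Because each processed string replaces the original cell, only one copy of the text is held per row at any point.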
Hi, I am trying to run some text preprocessing on my data, and I want to use the GPU. My code is as follows:

Even with the "doc_cleaner", GPU memory usage keeps increasing. At certain indexes within X, I get an out-of-memory error. However, if I run preprocessing on those indexes in isolation (without running all the preprocessing before those indexes), I do not get an out-of-memory error. Is there a way to move the output of the preprocessor to the CPU and free up the memory on the GPU?
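One pattern that can keep memory bounded, assuming the end goal is plain strings on the CPU: process the texts in fixed-size batches, extract `doc.text` immediately, and let every `Doc` reference go out of scope before the next batch. Below is a sketch of that pattern only; `fake_pipe` is a hypothetical stand-in for `nlp.pipe`, and the real pipeline, batch size, and GPU setup are assumptions not shown here:

```python
import gc
from types import SimpleNamespace

def fake_pipe(texts):
    # Hypothetical stand-in for nlp.pipe(texts): yields objects
    # exposing a .text attribute, like spaCy Doc objects do.
    for t in texts:
        yield SimpleNamespace(text=t.upper())

def preprocess(texts, batch_size=2):
    results = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        for doc in fake_pipe(batch):
            # Keep only the CPU-side string, not the Doc itself.
            results.append(doc.text)
        # No Doc references survive the batch, so a collection pass
        # can reclaim their backing memory (GPU tensors, in the real
        # pipeline) before the next batch is processed.
        gc.collect()
    return results

print(preprocess(["a", "b", "c"]))  # → ['A', 'B', 'C']
```

The key point is that `results` only ever holds strings, so nothing accumulated across batches keeps GPU-backed objects alive.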