The issue was memory usage. I fixed this line by reading the file in chunks:
```python
chunksize = 10000
chunk_red = pd.read_csv(self.red_path, sep=',', chunksize=chunksize, names=colnames, header=None)
red_df = pd.concat(chunk_red, ignore_index=True)
```
I now have the same problem at another line, though, where I match points to their nearest neighbours using a KD tree; this uses even more memory and can't really be chunked up in the same way.
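One option, if the memory spike comes from querying all points at once rather than from building the tree itself, is to build the tree once and run the nearest-neighbour queries in fixed-size batches, so only one batch's temporaries live in memory at a time. This is a minimal sketch using SciPy's `cKDTree`; the function name and batch size are illustrative, not from the original code:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbours_batched(tree_points, query_points, batch_size=100_000):
    """Find each query point's nearest neighbour among tree_points.

    The KD tree is built once; queries are issued in batches so the
    per-query result arrays are allocated a slice at a time instead of
    all at once. batch_size is an illustrative default - tune it to
    the memory you have available.
    """
    tree = cKDTree(tree_points)
    n = len(query_points)
    dists = np.empty(n)
    idxs = np.empty(n, dtype=np.intp)
    for start in range(0, n, batch_size):
        stop = start + batch_size
        dists[start:stop], idxs[start:stop] = tree.query(query_points[start:stop], k=1)
    return dists, idxs
```

Unlike a brute-force approach that materialises an all-pairs distance matrix, the tree query never allocates anything proportional to `len(tree_points) * len(query_points)`, which is usually where the memory blow-up comes from.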