I strongly suspect that your data, while on a grid, is not ordered so as to allow a simple reshape of the values. You have two solutions available, both involving reordering the data in different ways.
Since you’re already using `np.unique` to extract the grid, you can get the correct ordering of `vs` using the `return_inverse` parameter:

```python
px, ix = np.unique(xs, return_inverse=True)
py, iy = np.unique(ys, return_inverse=True)
pz, iz = np.unique(zs, return_inverse=True)

points = (px, py, pz)
values = np.empty_like(vs, shape=(px.size, py.size, pz.size))
values[ix, iy, iz] = vs
```
`return_inverse` is sort of magical, largely because it’s so counterintuitive. In this case, for each element of the input array, it tells you which unique, sorted grid location it corresponds to.
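Here is a runnable sketch of the approach on made-up data (the grid values and the shuffling are purely illustrative, not from your dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a complete 2x3x4 grid, then shuffle the rows to simulate
# scattered, unordered input.
gx, gy, gz = np.meshgrid([0., 1.], [0., 1., 2.], [5., 6., 7., 8.],
                         indexing='ij')
order = rng.permutation(gx.size)
xs, ys, zs = gx.ravel()[order], gy.ravel()[order], gz.ravel()[order]
vs = xs + 10 * ys + 100 * zs  # value determined by grid location

# The return_inverse trick: ix/iy/iz are the grid indices of each row.
px, ix = np.unique(xs, return_inverse=True)
py, iy = np.unique(ys, return_inverse=True)
pz, iz = np.unique(zs, return_inverse=True)
values = np.empty_like(vs, shape=(px.size, py.size, pz.size))
values[ix, iy, iz] = vs

# Each cell now holds the value for its grid location:
assert values[1, 2, 3] == 1 + 10 * 2 + 100 * 8.  # x=1, y=2, z=8
```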
By the way, if you are missing grid elements, you may want to replace `np.empty_like(vs, shape=(px.size, py.size, pz.size))` with either `np.zeros_like(vs, shape=(px.size, py.size, pz.size))` or `np.full_like(vs, np.nan, shape=(px.size, py.size, pz.size))`. In the latter case, you could interpolate the NaNs in the grid first.
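As a quick sketch of the NaN-fill variant, here is a toy 2×2×2 grid with one point deliberately missing (the data is invented for illustration):

```python
import numpy as np

# Seven of eight grid points; (x=1, y=1, z=1) is absent.
xs = np.array([0., 0., 0., 0., 1., 1., 1.])
ys = np.array([0., 0., 1., 1., 0., 0., 1.])
zs = np.array([0., 1., 0., 1., 0., 1., 0.])
vs = np.arange(7.)

px, ix = np.unique(xs, return_inverse=True)
py, iy = np.unique(ys, return_inverse=True)
pz, iz = np.unique(zs, return_inverse=True)

# full_like pre-fills every cell with NaN, so gaps stay detectable.
values = np.full_like(vs, np.nan, shape=(px.size, py.size, pz.size))
values[ix, iy, iz] = vs

assert np.isnan(values[1, 1, 1])  # the missing grid element
```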
The more obvious solution is to rearrange the indices so that you can reshape `vs` as you originally tried. That only works if you are sure there are no missing grid elements. The easiest way is to sort the whole dataframe, since the pandas methods are less annoying than doing the reordering by hand in NumPy:

```python
df.sort_values(['x', 'y', 'z'], inplace=True, ignore_index=True)
```
When you extract, do it efficiently:
```python
xs, ys, zs, vs = df.to_numpy().T
```
Since everything is sorted, you no longer need `np.unique` to identify the grid. The number of unique `x` values is:

```python
nx = np.count_nonzero(np.diff(xs)) + 1
```
And the unique values themselves are:

```python
bx = xs.size // nx
ux = xs[::bx]
```
The `y` values go through a full cycle every `bx` elements, so:

```python
ny = np.count_nonzero(np.diff(ys[:bx])) + 1
by = bx // ny
uy = ys[:bx:by]
```
The same applies to `z` (keeping in mind that `z` varies fastest, so `bz == 1`):

```python
nz = by
uz = zs[:nz]
```
Now you can construct your original arrays:

```python
points = (ux, uy, uz)
values = vs.reshape(nx, ny, nz)
```
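Putting the sorting approach together, here is an end-to-end sketch on an invented, complete 2×3×4 grid (the dataframe contents and the shuffle are illustrative only):

```python
import numpy as np
import pandas as pd

# Illustrative complete grid, presented in shuffled row order.
gx, gy, gz = np.meshgrid([0., 1.], [0., 1., 2.], [5., 6., 7., 8.],
                         indexing='ij')
df = pd.DataFrame({'x': gx.ravel(), 'y': gy.ravel(), 'z': gz.ravel()})
df['v'] = df.x + 10 * df.y + 100 * df.z
df = df.sample(frac=1, random_state=0).reset_index(drop=True)  # shuffle

# Sort lexicographically so that z varies fastest, then x slowest.
df.sort_values(['x', 'y', 'z'], inplace=True, ignore_index=True)
xs, ys, zs, vs = df.to_numpy().T

nx = np.count_nonzero(np.diff(xs)) + 1   # unique x count
bx = xs.size // nx                       # block length per x value
ux = xs[::bx]

ny = np.count_nonzero(np.diff(ys[:bx])) + 1
by = bx // ny
uy = ys[:bx:by]

nz = by                                  # z varies fastest: bz == 1
uz = zs[:nz]

points = (ux, uy, uz)
values = vs.reshape(nx, ny, nz)
assert values[1, 2, 3] == 1 + 10 * 2 + 100 * 8.  # x=1, y=2, z=8
```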