cf.Data.count

Data.count(axis=None, keepdims=True, split_every=None)[source]

Count the non-masked elements of the data.

See also

count_masked

Parameters:
axis: (sequence of) int, optional

Axis or axes along which the count is performed. The default (None) performs the count over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis.

keepdims: bool, optional

By default, the axes which are collapsed are left in the result as dimensions with size one, so that the result will broadcast correctly against the input array. If set to False then collapsed axes are removed from the data.

split_every: int or dict, optional

Determines the depth of the dask recursive aggregation. If set to the number of input Dask chunks or more, the aggregation is performed in two steps: one partial collapse per input chunk and a single aggregation at the end. If set to less than that, an intermediate aggregation step is used, so that no intermediate or final aggregation step operates on more than split_every inputs. The depth of the aggregation graph is then the logarithm to the base split_every of N, the number of input chunks along the reduced axes. Setting this to a low value can reduce cache size and network transfers, at the cost of more CPU time and a larger dask graph. See dask.array.reduction for details.

By default, dask heuristically decides on a good value. A default can also be set globally with the split_every key in dask.config.
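The tree aggregation that split_every controls can be sketched in pure Python. This is only an illustration of the scheduling idea, not cf or dask code; tree_reduce is a hypothetical helper, and the per-chunk partial counts are made-up inputs.

```python
def tree_reduce(chunks, split_every):
    """Aggregate per-chunk partial counts in rounds, combining at
    most `split_every` inputs per step (mimics dask's strategy).
    Hypothetical helper for illustration only."""
    depth = 0
    while len(chunks) > 1:
        # One aggregation round: group inputs into batches of
        # size split_every and collapse each batch.
        chunks = [sum(chunks[i:i + split_every])
                  for i in range(0, len(chunks), split_every)]
        depth += 1
    return chunks[0], depth

# 16 input chunks, each contributing a partial count of 4
partials = [4] * 16

# Low split_every: narrow graph, log2(16) = 4 aggregation rounds
print(tree_reduce(partials, split_every=2))    # (64, 4)

# split_every >= number of chunks: a single final aggregation step
print(tree_reduce(partials, split_every=16))   # (64, 1)
```

Either setting yields the same count; only the shape of the aggregation graph differs, which is the trade-off between graph depth and per-step fan-in described above.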

Returns:
Data

The count of non-masked elements.

Examples

>>> d = cf.Data(numpy.arange(12).reshape(3, 4))
>>> print(d.array)
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
>>> d.count()
<CF Data(1, 1): [[12]]>
>>> d[0, :] = cf.masked
>>> print(d.array)
[[-- -- -- --]
 [ 4  5  6  7]
 [ 8  9 10 11]]
>>> d.count()
<CF Data(1, 1): [[8]]>
>>> print(d.count(0).array)
[[2 2 2 2]]
>>> print(d.count(1).array)
[[0]
 [4]
 [4]]
>>> print(d.count([0, 1], keepdims=False).array)
8
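The same counts can be reproduced with a plain numpy masked array, which may help when checking behaviour without cf installed. This is a sketch assuming only numpy, using numpy.ma.MaskedArray.count (note that it defaults to keepdims=False, unlike cf.Data.count):

```python
import numpy as np

# Build the same 3 x 4 array as in the examples above and mask its first row
a = np.ma.masked_array(np.arange(12).reshape(3, 4))
a[0, :] = np.ma.masked

print(a.count())                        # 8 non-masked elements in total
print(a.count(axis=0, keepdims=True))   # [[2 2 2 2]]
print(a.count(axis=1, keepdims=True))   # [[0] [4] [4]]
```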