cf.read

cf.read(files, external=None, verbose=None, warnings=False, ignore_read_error=False, aggregate=True, nfields=None, squeeze=False, unsqueeze=False, fmt=None, select=None, extra=None, recursive=False, followlinks=False, um=None, chunk=True, field=None, height_at_top_of_model=None, select_options=None, follow_symlinks=False, mask=True, warn_valid=False)

Read field constructs from netCDF, CDL, PP or UM fields files.
NetCDF files may be on disk or on an OPeNDAP server.
Any number of files, of any combination of file types, may be read.
NetCDF unlimited dimensions

Domain axis constructs that correspond to netCDF unlimited dimensions may be accessed with the nc_is_unlimited and nc_set_unlimited methods of a domain axis construct.
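For example, a minimal sketch of inspecting and updating the unlimited status of a domain axis construct (the file name, the 'time' axis identity and the output shown are illustrative):

>>> f = cf.read('file.nc')[0]
>>> axis = f.domain_axis('time')   # hypothetical time axis
>>> axis.nc_is_unlimited()         # was the corresponding netCDF dimension unlimited?
True
>>> axis.nc_set_unlimited(False)   # treat it as fixed-size in subsequent writes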
CF-compliance

If the dataset is partially CF-compliant to the extent that it is not possible to unambiguously map an element of the netCDF dataset to an element of the CF data model, then a field construct is still returned, but may be incomplete. This is so that datasets which are partially conformant may nonetheless be modified in memory and written to new datasets.

Such “structural” non-compliance would occur, for example, if the “coordinates” attribute of a CF-netCDF data variable refers to another variable that does not exist, or refers to a variable that spans a netCDF dimension that does not apply to the data variable. Other types of non-compliance are not checked, such as whether or not controlled vocabularies have been adhered to. The structural compliance of the dataset may be checked with the dataset_compliance method of the field construct, as well as optionally displayed when the dataset is read by setting the warnings parameter.
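For example, a sketch of inspecting the compliance report of a returned field construct (the file name is hypothetical; an empty report is assumed to mean that no structural problems were recorded):

>>> f = cf.read('file.nc', warnings=True)[0]
>>> f.dataset_compliance()
{}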
CDL files

A file is considered to be a CDL representation of a netCDF dataset if it is a text file whose first non-comment line starts with the seven characters “netcdf ” (six letters followed by a space). A comment line is identified as one which starts with any amount of white space (including none) followed by “//” (two slashes). It is converted to a temporary netCDF4 file using the external ncgen command, and the temporary file persists until the end of the Python session, at which time it is automatically deleted. The CDL file may omit data array values (as would be the case, for example, if the file was created with the -h or -c option to ncdump), in which case the relevant constructs in memory will be created with data with all missing values.
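For example, a header-only CDL dump can be read back directly (a sketch; the file names are hypothetical, and the resulting data will consist entirely of missing values):

>>> # 'file.cdl' might have been created with, e.g., "ncdump -h file.nc > file.cdl"
>>> f = cf.read('file.cdl')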
PP and UM fields files

32-bit and 64-bit PP and UM fields files of any endianness can be read. In nearly all cases the file format is auto-detected from the first 64 bits in the file, but for the few occasions when this is not possible, the um keyword allows the format to be specified, as well as the UM version (if the latter is not inferrable from the PP or lookup header information).
2-d “slices” within a single file are always combined, where possible, into field constructs with 3-d, 4-d or 5-d data. This is done prior to the field construct aggregation carried out by the cf.read function.

When reading PP and UM fields files, the relaxed_units aggregate option is set to True by default, because units are not always available to field constructs derived from UM fields files or PP files.
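For example, a sketch of overriding the relaxed_units default when reading PP data (the file name is hypothetical):

>>> f = cf.read('data.pp', aggregate={'relaxed_units': False})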
Performance

Descriptive properties are always read into memory, but lazy loading is employed for all data arrays, which means that no data is read into memory until the data is required for inspection or to modify the array contents. This maximises the number of field constructs that may be read within a session, and makes the read operation fast.
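A sketch of the effect of lazy loading (the file name is hypothetical):

>>> fl = cf.read('file.nc')   # metadata are read, but the data arrays stay on disk
>>> fl[0].max()               # the data are only read when a computation needs them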
See also

cf.aggregate, cf.load_stash2standard_name, cf.write, cf.Field.convert, cf.Field.dataset_compliance
- Parameters
- files: (arbitrarily nested sequence of) str
  A string or arbitrarily nested sequence of strings giving the file names, directory names, or OPeNDAP URLs from which to read field constructs. Various types of expansion are applied to the names:
  - Tilde: An initial component of ~ or ~user is replaced by that user’s home directory.
  - Environment variable: Substrings of the form $name or ${name} are replaced by the value of the environment variable name.
  - Pathname: A string containing UNIX file name metacharacters, as understood by the Python glob module, is replaced by the list of matching file names. This type of expansion is ignored for OPeNDAP URLs.

  Where more than one type of expansion is used in the same string, they are applied in the order given above.
  - Parameter example:
    The file file.nc in the user’s home directory could be described by any of the following: '$HOME/file.nc', '${HOME}/file.nc', '~/file.nc', '~/tmp/../file.nc'.
When a directory is specified, all files in that directory are read. Sub-directories are not read unless the recursive parameter is True. If any directories contain files that are not valid datasets then an exception will be raised, unless the ignore_read_error parameter is True.
- external: (sequence of) str, optional
  Read external variables (i.e. variables which are named by, but are not present in, the parent file given by the filename parameter) from the given external files. Ignored if the parent file does not contain a global “external_variables” attribute. Multiple external files may be provided, which are searched in arbitrary order for the required external variables.

  If an external variable is not found in any external files, or is found in multiple external files, then the relevant metadata construct is still created, but without any metadata or data. In this case the construct’s is_external method will return True.

  - Parameter example:
    external='cell_measure.nc'
  - Parameter example:
    external=['cell_measure.nc']
  - Parameter example:
    external=('cell_measure_A.nc', 'cell_measure_O.nc')
- extra: (sequence of) str, optional
  Create extra, independent field constructs from netCDF variables that correspond to particular types of metadata constructs. The extra parameter may be one, or a sequence, of:

  - 'field_ancillary': Field ancillary constructs
  - 'domain_ancillary': Domain ancillary constructs
  - 'dimension_coordinate': Dimension coordinate constructs
  - 'auxiliary_coordinate': Auxiliary coordinate constructs
  - 'cell_measure': Cell measure constructs
This parameter replaces the deprecated field parameter.
  - Parameter example:
    To create field constructs from auxiliary coordinate constructs: extra='auxiliary_coordinate' or extra=['auxiliary_coordinate'].
  - Parameter example:
    To create field constructs from domain ancillary and cell measure constructs: extra=['domain_ancillary', 'cell_measure'].
  An extra field construct created via the extra parameter will have a domain limited to that which can be inferred from the corresponding netCDF variable, but without the connections that are defined by the parent netCDF data variable. It is possible to create independent fields from metadata constructs that do incorporate as much of the parent field construct’s domain as possible by using the convert method of a returned field construct, instead of setting the extra parameter (see the Examples below).
- verbose: int or None, optional
  If an integer from 0 to 3, corresponding to increasing verbosity (else -1 as a special case of maximal and extreme verbosity), set for the duration of the method call (only) as the minimum severity level cut-off of displayed log messages, regardless of the globally configured cf.LOG_LEVEL.

  Else, if None (the default value), log messages will be filtered out, or otherwise, according to the value of the cf.LOG_LEVEL setting.

  Overall, the higher a non-negative integer that is set (up to a maximum of 3), the more description is printed to convey how the contents of the netCDF file were parsed and mapped to CF data model constructs.
- warnings: bool, optional
  If True then print warnings when an output field construct is incomplete due to structural non-compliance of the dataset. By default such warnings are not displayed.
- ignore_read_error: bool, optional
  If True then ignore any file which raises an IOError whilst being read, as would be the case for an empty file, unknown file format, etc. By default the IOError is raised.
- fmt: str, optional
  Only read files of the given format, ignoring all other files. Valid formats are 'NETCDF' for CF-netCDF files, 'CFA' for CFA-netCDF files, 'UM' for PP or UM fields files, and 'CDL' for CDL text files. By default files of any of these formats are read.
- aggregate: bool or dict, optional
  If True (the default) or a dictionary (possibly empty) then aggregate the field constructs read in from all input files into as few field constructs as possible, by passing all of the field constructs found in the input files to the cf.aggregate function and returning the output of this function call.

  If aggregate is a dictionary then it is used to configure the aggregation process, passing its contents as keyword arguments to the cf.aggregate function.

  If aggregate is False then the field constructs are not aggregated.
- squeeze: bool, optional
  If True then remove size 1 axes from each field construct’s data array.
- unsqueeze: bool, optional
  If True then insert size 1 axes from each field construct’s domain into its data array.
- select: (sequence of) str, optional
  Only return field constructs which have the given identities, i.e. those that satisfy f.match_by_identity(*select). See cf.Field.match_by_identity for details.
- recursive: bool, optional
  If True then recursively read sub-directories of any directories specified with the files parameter.
- followlinks: bool, optional
  If True, and recursive is True, then also search for files in sub-directories which resolve to symbolic links. By default directories which resolve to symbolic links are ignored. Ignored if recursive is False. Files which are symbolic links are always followed.

  Note that setting recursive=True, followlinks=True can lead to infinite recursion if a symbolic link points to a parent directory of itself.

  This parameter replaces the deprecated follow_symlinks parameter.
- mask: bool, optional
  If False then do not mask by convention when reading the data of field or metadata constructs from disk. By default data is masked by convention.

  The masking by convention of a netCDF array depends on the values of any of the netCDF variable attributes _FillValue, missing_value, valid_min, valid_max and valid_range.

  The masking by convention of a PP or UM array depends on the value of BMDI in the lookup header. A value other than -1.0e30 indicates the data value to be masked.

  See https://ncas-cms.github.io/cf-python/tutorial.html#data-mask for details.
New in version 3.4.0.
- warn_valid: bool, optional
  If True then print a warning for the presence of valid_min, valid_max or valid_range properties on field constructs and metadata constructs that have data. By default no such warning is issued.

  “Out-of-range” data values in the file, as defined by any of these properties, are automatically masked by default, which may not be as intended. See the mask parameter for turning off all automatic masking.
See https://ncas-cms.github.io/cf-python/tutorial.html#data-mask for details.
New in version 3.4.0.
- um: dict, optional
  For Met Office (UK) PP files and Met Office (UK) fields files only, provide extra decoding instructions. This option is ignored for input files which are not PP or fields files. In most cases, how to decode a file is inferrable from the file’s contents, but if not then each key/value pair in the dictionary sets a decoding option as follows:

  - 'fmt': The file format ('PP' or 'FF').
  - 'word_size': The word size in bytes (4 or 8).
  - 'endian': The byte order ('big' or 'little').
  - 'version': The UM version to be used when decoding the header. Valid versions are, for example, 4.2, '6.6.3' and '8.2'. The default version is 4.5. In general, a given version is ignored if it can be inferred from the header (which is usually the case for files created by the UM at versions 5.3 and later). The exception to this is when the given version has a third element (such as the 3 in 6.6.3), in which case any version in the header is ignored.
  - 'height_at_top_of_model': The height (in metres) of the upper bound of the top model level. By default the height at top of model is taken from the top level’s upper bound defined by BRSVD1 in the lookup header. If the height at top of model cannot be determined from the header and is not provided then no “atmosphere_hybrid_height_coordinate” dimension coordinate construct will be created.

  If the format is specified as 'PP' then the word size and byte order default to 4 and 'big' respectively.

  This parameter replaces the deprecated umversion and height_at_top_of_model parameters.
- Parameter example:
To specify that the input files are 32-bit, big-endian PP files:
um={'fmt': 'PP'}
- Parameter example:
To specify that the input files are 32-bit, little-endian PP files from version 5.1 of the UM:
um={'fmt': 'PP', 'endian': 'little', 'version': 5.1}
New in version 1.5.
- umversion: deprecated at version 3.0.0
Use the um parameter instead.
- height_at_top_of_model: deprecated at version 3.0.0
Use the um parameter instead.
- field: deprecated at version 3.0.0
Use the extra parameter instead.
- follow_symlinks: deprecated at version 3.0.0
Use the followlinks parameter instead.
- select_options: deprecated at version 3.0.0
Use methods on the returned FieldList instead.
- Returns
FieldList
The field constructs found in the input file(s). The list may be empty.
Examples:
>>> x = cf.read('file.nc')
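Read every dataset in a directory and its sub-directories, skipping any files that cannot be read (a sketch; the directory name is hypothetical):

>>> f = cf.read('~/data', recursive=True, ignore_read_error=True)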
Read a file and create field constructs from CF-netCDF data variables as well as from the netCDF variables that correspond to particular types of metadata constructs:
>>> f = cf.read('file.nc', extra='domain_ancillary')
>>> g = cf.read('file.nc', extra=['dimension_coordinate',
...                               'auxiliary_coordinate'])
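Alternatively, an independent field construct can be made from a metadata construct of a field that has already been read, by using its convert method (a sketch; the construct identity 'measure:area' is hypothetical):

>>> f = cf.read('file.nc')[0]
>>> area = f.convert('measure:area')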
Read a file that contains external variables:
>>> h = cf.read('parent.nc')
>>> i = cf.read('parent.nc', external='external.nc')
>>> j = cf.read('parent.nc', external=['external1.nc', 'external2.nc'])
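Control or disable the aggregation of the fields as they are read (a sketch; the keyword passed through to cf.aggregate is illustrative):

>>> f = cf.read('file*.nc', aggregate=False)
>>> g = cf.read('file*.nc', aggregate={'relaxed_identities': True})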
>>> f = cf.read('file*.nc')
>>> f
[<CF Field: pmsl(30, 24)>,
 <CF Field: z-squared(17, 30, 24)>,
 <CF Field: temperature(17, 30, 24)>,
 <CF Field: temperature_wind(17, 29, 24)>]

>>> cf.read('file*.nc')[0:2]
[<CF Field: pmsl(30, 24)>,
 <CF Field: z-squared(17, 30, 24)>]

>>> cf.read('file*.nc')[-1]
<CF Field: temperature_wind(17, 29, 24)>

>>> cf.read('file*.nc', select='units=K')
[<CF Field: temperature(17, 30, 24)>,
 <CF Field: temperature_wind(17, 29, 24)>]

>>> cf.read('file*.nc', select='ncvar%ta')
<CF Field: temperature(17, 30, 24)>
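Provide decoding hints for PP input, or switch off masking by convention (a sketch; the file names and the decoding values shown are illustrative):

>>> p = cf.read('data.pp', um={'fmt': 'PP', 'version': 4.5})
>>> raw = cf.read('file.nc', mask=False)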