cf.read

cf.read(files, external=None, verbose=False, warnings=False, ignore_read_error=False, aggregate=True, nfields=None, squeeze=False, unsqueeze=False, fmt=None, select=None, extra=None, recursive=False, followlinks=False, um=None, chunk=True, field=None, height_at_top_of_model=None, select_options=None, follow_symlinks=False)[source]

Read field constructs from netCDF, CDL, PP or UM fields files.

NetCDF files may be on disk or on an OPeNDAP server.

Any number of files of any combination of file types may be read.

NetCDF unlimited dimensions

Domain axis constructs that correspond to NetCDF unlimited dimensions may be accessed with the nc_is_unlimited and nc_set_unlimited methods of a domain axis construct.
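
For example, the unlimited axes of a returned field construct could be found as follows (a sketch, assuming a file 'file.nc' whose time dimension is unlimited):

>>> import cf
>>> f = cf.read('file.nc')[0]
>>> for key, axis in f.domain_axes().items():
...     if axis.nc_is_unlimited():
...         print(key, repr(axis))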

CF-compliance

If the dataset is partially CF-compliant to the extent that it is not possible to unambiguously map an element of the netCDF dataset to an element of the CF data model, then a field construct is still returned, but may be incomplete. This is so that datasets which are partially conformant may nonetheless be modified in memory and written to new datasets.

Such “structural” non-compliance would occur, for example, if the “coordinates” attribute of a CF-netCDF data variable refers to another variable that does not exist, or refers to a variable that spans a netCDF dimension that does not apply to the data variable. Other types of non-compliance are not checked, such as whether or not controlled vocabularies have been adhered to. The structural compliance of the dataset may be checked with the dataset_compliance method of the field construct, as well as optionally displayed when the dataset is read by setting the warnings parameter.
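
The compliance report of a returned field construct can be inspected after the read (a sketch, assuming a file 'file.nc'):

>>> import cf
>>> f = cf.read('file.nc', warnings=True)[0]  # print any compliance warnings
>>> report = f.dataset_compliance()           # details of any structural non-compliance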

CDL files

A file is considered to be a CDL representation of a netCDF dataset (https://www.unidata.ucar.edu/software/netcdf/netcdf/CDL-Syntax.html) if it is a text file that starts with the seven characters “netcdf ” (six letters followed by a space). It is converted to a temporary netCDF4 file using the external ncgen command, and the temporary file persists until the end of the Python session, at which time it is automatically deleted. The CDL file may omit data array values (as would be the case, for example, if the file was created with the -h or -c option to ncdump), in which case the relevant constructs in memory will be created with data containing missing values.
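
For instance, a header-only CDL dump could be read directly (a sketch; 'data.cdl' is a hypothetical file created with, for example, ncdump -h):

>>> import cf
>>> f = cf.read('data.cdl')  # data arrays will contain missing values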

PP and UM fields files

32-bit and 64-bit PP and UM fields files of any endian-ness can be read. In nearly all cases the file format is auto-detected from the first 64 bits in the file, but for the few occasions when this is not possible, the um keyword allows the format to be specified, as well as the UM version (if the latter is not inferrable from the PP or lookup header information).

2-d “slices” within a single file are always combined, where possible, into field constructs with 3-d, 4-d or 5-d data. This is done prior to the field construct aggregation carried out by the cf.read function.

When reading PP and UM fields files, the relaxed_units aggregate option is set to True by default, because units are not always available to field constructs derived from UM fields files or PP files.
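
If this default is not wanted, it can be overridden through the aggregate parameter (a sketch; 'model.pp' is a hypothetical PP file):

>>> import cf
>>> f = cf.read('model.pp', aggregate={'relaxed_units': False})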

Performance

Descriptive properties are always read into memory, but lazy loading is employed for all data arrays, which means that no data is read into memory until the data is required for inspection or to modify the array contents. This maximises the number of field constructs that may be read within a session, and makes the read operation fast.
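
A sketch of this behaviour, assuming a file 'file.nc': the call to cf.read reads only the descriptive properties, and the data values are not read from disk until the array is explicitly accessed:

>>> import cf
>>> f = cf.read('file.nc')[0]  # fast: no data values are read here
>>> a = f.array                # the data values are read from disk at this point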

Parameters:
files: (arbitrarily nested sequence of) str

A string or arbitrarily nested sequence of strings giving the file names, directory names, or OPeNDAP URLs from which to read field constructs. Various types of expansion are applied to the names:

Expansion             Description
Tilde                 An initial component of ~ or ~user is replaced by that user’s home directory.
Environment variable  Substrings of the form $name or ${name} are replaced by the value of the environment variable name.
Pathname              A string containing UNIX file name metacharacters as understood by the Python glob module is replaced by the list of matching file names. This type of expansion is ignored for OPeNDAP URLs.

Where more than one type of expansion is used in the same string, they are applied in the order given in the above table.

Parameter example:

The file file.nc in the user’s home directory could be described by any of the following: '$HOME/file.nc', '${HOME}/file.nc', '~/file.nc', '~/tmp/../file.nc'.

When a directory is specified, all files in that directory are read. Sub-directories are not read unless the recursive parameter is True. If any directories contain files that are not valid datasets then an exception will be raised, unless the ignore_read_error parameter is True.
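
The expansions described above may be combined freely, for example (a sketch with hypothetical paths):

>>> import cf
>>> f = cf.read('~/data/*.nc')                     # tilde and pathname expansion
>>> g = cf.read(['$HOME/file.nc', '/data/run1/'])  # environment variable and directory
>>> h = cf.read('/data/', recursive=True, ignore_read_error=True)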

external: (sequence of) str, optional

Read external variables (i.e. variables which are named by, but are not present in, the parent file given by the files parameter) from the given external files. Ignored if the parent file does not contain a global “external_variables” attribute. Multiple external files may be provided, which are searched in random order for the required external variables.

If an external variable is not found in any external files, or is found in multiple external files, then the relevant metadata construct is still created, but without any metadata or data. In this case the construct’s is_external method will return True.

Parameter example:

external='cell_measure.nc'

Parameter example:

external=['cell_measure.nc']

Parameter example:

external=('cell_measure_A.nc', 'cell_measure_O.nc')

extra: (sequence of) str, optional

Create extra, independent field constructs from netCDF variables that correspond to particular types of metadata constructs. The extra parameter may be one, or a sequence, of:

extra                     Metadata constructs
'field_ancillary'         Field ancillary constructs
'domain_ancillary'        Domain ancillary constructs
'dimension_coordinate'    Dimension coordinate constructs
'auxiliary_coordinate'    Auxiliary coordinate constructs
'cell_measure'            Cell measure constructs

Parameter example:

To create field constructs from auxiliary coordinate constructs: extra='auxiliary_coordinate' or extra=['auxiliary_coordinate'].

Parameter example:

To create field constructs from domain ancillary and cell measure constructs: extra=['domain_ancillary', 'cell_measure'].

An extra field construct created via the extra parameter will have a domain limited to that which can be inferred from the corresponding netCDF variable, but without the connections that are defined by the parent netCDF data variable. It is possible to create independent fields from metadata constructs that do incorporate as much of the parent field construct’s domain as possible by using the convert method of a returned field construct, instead of setting the extra parameter.
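
The difference between the two approaches might look like this (a sketch; 'file.nc' and the 'latitude' identity are hypothetical):

>>> import cf
>>> extra_fields = cf.read('file.nc', extra='auxiliary_coordinate')
>>> f = cf.read('file.nc')[0]
>>> lat = f.convert('latitude')  # retains as much of f's domain as possible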

verbose: bool, optional

If True then print a description of how the contents of the netCDF file were parsed and mapped to CF data model constructs.

warnings: bool, optional

If True then print warnings when an output field construct is incomplete due to structural non-compliance of the dataset. By default such warnings are not displayed.

ignore_read_error: bool, optional

If True then ignore any file which raises an IOError whilst being read, as would be the case for an empty file, unknown file format, etc. By default the IOError is raised.

fmt: str, optional

Only read files of the given format, ignoring all other files. Valid formats are 'NETCDF' for CF-netCDF files, 'CFA' for CFA-netCDF files, 'UM' for PP or UM fields files, and 'CDL' for CDL text files. By default files of any of these formats are read.
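
For example, only the PP or UM fields files in a mixed directory could be read like this (a sketch; '/data/mixed/' is a hypothetical directory):

>>> import cf
>>> f = cf.read('/data/mixed/', fmt='UM')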

aggregate: bool or dict, optional

If True (the default) or a dictionary (possibly empty) then aggregate the field constructs read in from all input files into as few field constructs as possible by passing all of the field constructs found in the input files to the cf.aggregate function, and returning the output of that function call.

If aggregate is a dictionary then it is used to configure the aggregation process by passing its contents as keyword arguments to the cf.aggregate function.

If aggregate is False then the field constructs are not aggregated.
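
The three forms of the parameter might be used as follows (a sketch with hypothetical files):

>>> import cf
>>> f = cf.read('file*.nc')                                     # default aggregation
>>> g = cf.read('file*.nc', aggregate=False)                    # no aggregation
>>> h = cf.read('file*.nc', aggregate={'relaxed_units': True})  # configured aggregation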

squeeze: bool, optional

If True then remove size 1 axes from each field construct’s data array.

unsqueeze: bool, optional

If True then insert size 1 axes from each field construct’s domain into its data array.
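
For example (a sketch, assuming a file 'file.nc'):

>>> import cf
>>> f = cf.read('file.nc', squeeze=True)    # remove size 1 axes from the data
>>> g = cf.read('file.nc', unsqueeze=True)  # insert all size 1 domain axes into the data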

select, select_options: optional

Only return field constructs which satisfy the given conditions on their property values. Only field constructs which, prior to any aggregation, satisfy f.match(description=select, **select_options) == True are returned. See cf.Field.match for details.
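
For instance, to keep only the field constructs whose units are kelvin (a sketch; see also the Examples section below):

>>> import cf
>>> f = cf.read('file*.nc', select='units=K')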

recursive: bool, optional

If True then recursively read sub-directories of any directories specified with the files parameter.

followlinks: bool, optional

If True, and recursive is True, then also search for files in sub-directories which resolve to symbolic links. By default directories which resolve to symbolic links are ignored. Ignored if recursive is False. Files which are symbolic links are always followed.

Note that setting recursive=True, followlinks=True can lead to infinite recursion if a symbolic link points to a parent directory of itself.
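
For example, a whole directory tree, including directories reached via symbolic links, could be read like this (a sketch; '/data/archive' is a hypothetical directory):

>>> import cf
>>> f = cf.read('/data/archive', recursive=True, followlinks=True)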

um: dict, optional

For Met Office (UK) PP files and Met Office (UK) fields files only, provide extra decoding instructions. This option is ignored for input files which are not PP or fields files. In most cases, how to decode a file is inferrable from the file’s contents, but if not then each key/value pair in the dictionary sets a decoding option as follows:

Key                         Value
'fmt'                       The file format ('PP' or 'FF')
'word_size'                 The word size in bytes (4 or 8)
'endian'                    The byte order ('big' or 'little')
'version'                   The UM version to be used when decoding the header. Valid versions are, for example, 4.2, '6.6.3' and '8.2'. The default version is 4.5. In general, a given version is ignored if it can be inferred from the header (which is usually the case for files created by the UM at versions 5.3 and later). The exception to this is when the given version has a third element (such as the 3 in 6.6.3), in which case any version in the header is ignored.
'height_at_top_of_model'    The height (in metres) of the upper bound of the top model level. By default the height at top model is taken from the top level’s upper bound defined by BRSVD1 in the lookup header. If the height at top model can not be determined from the header and is not provided then no “atmosphere_hybrid_height_coordinate” dimension coordinate construct will be created.

If 'fmt' is specified as 'PP' then the word size and byte order default to 4 and 'big' respectively.

Parameter example:

To specify that the input files are 32-bit, big-endian PP files: um={'fmt': 'PP'}

Parameter example:

To specify that the input files are 32-bit, little-endian PP files from version 5.1 of the UM: um={'fmt': 'PP', 'endian': 'little', 'version': 5.1}

New in version 1.5.

umversion: deprecated at version 3.0.0

Use the um parameter instead.

height_at_top_of_model: deprecated at version 3.0.0

Use the um parameter instead.

field: deprecated at version 3.0.0

Use the extra parameter instead.

follow_symlinks: deprecated at version 3.0.0

Use the followlinks parameter instead.

select_options: deprecated at version 3.0.0

Returns:
FieldList

The field constructs found in the input file(s). The list may be empty.

Examples:

>>> x = cf.read('file.nc')

Read a file and create field constructs from CF-netCDF data variables as well as from the netCDF variables that correspond to particular types of metadata constructs:

>>> f = cf.read('file.nc', extra='domain_ancillary')
>>> g = cf.read('file.nc', extra=['dimension_coordinate',
...                               'auxiliary_coordinate'])

Read a file that contains external variables:

>>> h = cf.read('parent.nc')
>>> i = cf.read('parent.nc', external='external.nc')
>>> j = cf.read('parent.nc', external=['external1.nc', 'external2.nc'])

>>> f = cf.read('file*.nc')
>>> f
[<CF Field: pmsl(30, 24)>,
 <CF Field: z-squared(17, 30, 24)>,
 <CF Field: temperature(17, 30, 24)>,
 <CF Field: temperature_wind(17, 29, 24)>]
>>> cf.read('file*.nc')[0:2]
[<CF Field: pmsl(30, 24)>,
 <CF Field: z-squared(17, 30, 24)>]
>>> cf.read('file*.nc')[-1]
<CF Field: temperature_wind(17, 29, 24)>
>>> cf.read('file*.nc', select='units=K')
[<CF Field: temperature(17, 30, 24)>,
 <CF Field: temperature_wind(17, 29, 24)>]
>>> cf.read('file*.nc', select='ncvar%ta')
[<CF Field: temperature(17, 30, 24)>]