Access to HDF5 files.
#include <vigra/hdf5impex.hxx>
Public Types

enum OpenMode
    Set how a file is opened. OpenMode::New creates a new file; if the file already exists, it is overwritten.

Public Member Functions

void cd (std::string groupName)
    Change the current group. If the first character is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

void cd_mk (std::string groupName)
    Change the current group; create it if necessary. If the first character is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

bool cd_up ()
    Change the current group to its parent group. Returns true if successful, false otherwise.

template<unsigned int N, class T>
void createDataset (std::string datasetName, typename MultiArrayShape< N >::type shape, T init, int iChunkSize=0, int compressionParameter=0)
    Create a new dataset. This function can be used to create a dataset filled with a default value, for example before writing data into it using writeBlock(). Attention: only atomic datatypes are supported. For spectral data, add a dimension (e.g. for RGB, add one dimension of size 3).

std::string filename ()
    Get the name of the associated file.

void flushToDisk ()
    Immediately write all data to disk.

std::string getAttribute (std::string datasetName, std::string attributeName)
    Get a string attribute of an object.

hssize_t getDatasetDimensions (std::string datasetName)
    Get the number of dimensions of a given dataset. If the first character is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

ArrayVector< hsize_t > getDatasetShape (std::string datasetName)
    Get the size of each dimension of a given dataset. Normally, this function is called after determining the dimensionality of the dataset using getDatasetDimensions(). If the first character is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

HDF5File (std::string filename, OpenMode mode)
    Create an HDF5File object.

std::vector< std::string > ls ()
    List the contents of the current group. The function returns a vector of strings holding the entries of the current group. Only datasets and groups are listed; other objects (e.g. datatypes) are ignored. Group names always end with a "/".

void mkdir (std::string groupName)
    Create a new group. If the first character is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

std::string pwd ()
    Get the path of the current group.

template<unsigned int N, class T>
void read (std::string datasetName, MultiArrayView< N, T, UnstridedArrayTag > array)
    Read data into a multi array. If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.

template<class T>
void readAtomic (std::string datasetName, T &data)
    Read a single value. This function allows reading a single datum of atomic datatype (int, long, double) from the HDF5 file, so it is not necessary to create a MultiArray of size 1 to read a single number.

template<unsigned int N, class T>
void readBlock (std::string datasetName, typename MultiArrayShape< N >::type blockOffset, typename MultiArrayShape< N >::type blockShape, MultiArrayView< N, T, UnstridedArrayTag > array)
    Read a block of data into a multi array. This function allows reading a small block out of a larger volume stored in an HDF5 dataset.

void root ()
    Change the current group to "/".

void setAttribute (std::string datasetName, std::string attributeName, std::string text)
    Attach a string attribute to an existing object. The attribute can be attached to datasets and groups. The string may have arbitrary length.

template<unsigned int N, class T>
void write (std::string datasetName, const MultiArrayView< N, T, UnstridedArrayTag > &array, int iChunkSize=0, int compression=0)
    Write multi arrays. Chunking can be activated by setting iChunkSize > 0.

template<unsigned int N, class T>
void write (std::string datasetName, const MultiArrayView< N, T, UnstridedArrayTag > &array, typename MultiArrayShape< N >::type chunkSize, int compression=0)
    Write multi arrays. Chunking can be activated by providing a MultiArrayShape as chunkSize. chunkSize must have the same dimension as array.

template<class T>
void writeAtomic (std::string datasetName, const T data)
    Write a single value as a dataset. This function allows writing data of atomic datatypes (int, long, double) as a dataset in the HDF5 file, so it is not necessary to create a MultiArray of size 1 to write a single number.

template<unsigned int N, class T>
void writeBlock (std::string datasetName, typename MultiArrayShape< N >::type blockOffset, const MultiArrayView< N, T, UnstridedArrayTag > &array)
    Write a multi array into a larger volume. blockOffset determines the position where array is written.

~HDF5File ()
    Destructor; makes sure that all data is flushed before closing the file.
Access to HDF5 files.
HDF5File provides a convenient way of accessing data in HDF5 files. vigra::MultiArray structures of any dimension can be stored to and loaded from HDF5 files. Typical HDF5 features like subvolume access, chunks, and data compression are available; string attributes can be attached to any dataset or group. Group and dataset handles are encapsulated in the class and managed automatically. The internal file-system-like structure can be accessed with functions like cd() or mkdir().
Example: Write the MultiArray out_multi_array to a file, change the current group to "/group", and read the same data back into in_multi_array.
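A minimal sketch of this workflow (file and dataset names are made up, and the enumerator is assumed to be accessible through the class scope as HDF5File::New; the original example may differ in detail):

    #include <vigra/multi_array.hxx>
    #include <vigra/hdf5impex.hxx>

    int main()
    {
        typedef vigra::MultiArrayShape<2>::type Shape;

        // array filled with a constant value, to be written out
        vigra::MultiArray<2, double> out_multi_array(Shape(10, 10), 42.0);

        vigra::HDF5File file("example.h5", vigra::HDF5File::New);
        file.cd_mk("/group");                       // create "/group" and make it the current group
        file.write("my_dataset", out_multi_array);  // stored as "/group/my_dataset"

        // read the data back; the target array must already have the dataset's shape
        vigra::MultiArray<2, double> in_multi_array(Shape(10, 10));
        file.cd("/group");
        file.read("my_dataset", in_multi_array);

        return 0;
    }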
#include <vigra/hdf5impex.hxx>
Namespace: vigra
enum OpenMode
Set how a file is opened. OpenMode::New creates a new file; if the file already exists, it is overwritten.
OpenMode::Open opens a file for reading and writing. The file will be created if necessary.
Create an HDF5File object.
Creates or opens the HDF5 file given by filename. The current group is set to "/".
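A hedged sketch of the two open modes (file names are made up; the enumerators are assumed to be accessible as HDF5File::New and HDF5File::Open):

    #include <vigra/hdf5impex.hxx>

    void openExamples()
    {
        vigra::HDF5File fresh("new.h5", vigra::HDF5File::New);          // overwrites an existing file
        vigra::HDF5File existing("data.h5", vigra::HDF5File::Open);     // created only if it does not exist
        // both objects start with the current group set to "/"
    }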
void write (std::string datasetName, const MultiArrayView< N, T, UnstridedArrayTag > &array, int iChunkSize = 0, int compression = 0)
Write multi arrays. Chunking can be activated by setting iChunkSize > 0; the chunks will be hypercubes with edge length iChunkSize.
Compression can be activated by setting compression to a value between 1 and 9, where 0 stands for no compression and 9 for maximum compression.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
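A hedged sketch, assuming a 3D float volume and made-up file and dataset names, that writes with cubic chunks of edge length 64 and a moderate compression level:

    #include <vigra/multi_array.hxx>
    #include <vigra/hdf5impex.hxx>

    void writeChunked()
    {
        vigra::MultiArray<3, float> volume(vigra::MultiArrayShape<3>::type(256, 256, 128), 0.0f);

        vigra::HDF5File file("volume.h5", vigra::HDF5File::New);
        file.write("volume", volume, 64, 4);  // iChunkSize = 64, compression = 4 (0 = none, 9 = max)
    }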
void write (std::string datasetName, const MultiArrayView< N, T, UnstridedArrayTag > &array, typename MultiArrayShape< N >::type chunkSize, int compression = 0)
Write multi arrays. Chunking can be activated by providing a MultiArrayShape as chunkSize; chunkSize must have the same dimension as array.
Compression can be activated by setting compression to a value between 1 and 9, where 0 stands for no compression and 9 for maximum compression.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
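A hedged sketch of this overload (names are made up): anisotropic chunks are given as a shape with the same dimension as the array.

    #include <vigra/multi_array.hxx>
    #include <vigra/hdf5impex.hxx>

    void writeWithChunkShape()
    {
        typedef vigra::MultiArrayShape<3>::type Shape;

        vigra::MultiArray<3, float> volume(Shape(256, 256, 128), 0.0f);

        vigra::HDF5File file("volume.h5", vigra::HDF5File::New);
        file.write("volume", volume, Shape(64, 64, 32), 6);  // chunk shape (64, 64, 32), compression level 6
    }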
void writeBlock (std::string datasetName, typename MultiArrayShape< N >::type blockOffset, const MultiArrayView< N, T, UnstridedArrayTag > &array)
Write a multi array into a larger volume. blockOffset determines the position where array is written. The target dataset must already exist and be large enough to contain the block; it can, for example, be created beforehand with createDataset(), which is also where chunking and compression are configured.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
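A hedged sketch (made-up names): a large chunked dataset is created first with createDataset(), then a smaller block is written into it at a given offset.

    #include <vigra/multi_array.hxx>
    #include <vigra/hdf5impex.hxx>

    void writeBlockExample()
    {
        typedef vigra::MultiArrayShape<3>::type Shape;

        vigra::HDF5File file("volume.h5", vigra::HDF5File::New);

        // 512^3 float dataset, default value 0, chunk edge length 64, compression level 3
        file.createDataset<3, float>("big_volume", Shape(512, 512, 512), 0.0f, 64, 3);

        vigra::MultiArray<3, float> block(Shape(128, 128, 128), 1.0f);
        file.writeBlock("big_volume", Shape(100, 100, 100), block);  // block starts at (100, 100, 100)
    }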
void writeAtomic (std::string datasetName, const T data)
Write a single value as a dataset. This function allows writing data of atomic datatypes (int, long, double) as a dataset in the HDF5 file, so it is not necessary to create a MultiArray of size 1 to write a single number.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
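A hedged sketch (file, group, and dataset names are made up): store two scalars without wrapping them in MultiArrays.

    #include <vigra/hdf5impex.hxx>

    void writeScalars()
    {
        vigra::HDF5File file("results.h5", vigra::HDF5File::Open);

        file.cd_mk("/results");                   // make sure the group exists and is current
        file.writeAtomic("iteration_count", 42);  // written relative to "/results"
        file.writeAtomic("final_error", 1.5e-6);
    }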
void readBlock (std::string datasetName, typename MultiArrayShape< N >::type blockOffset, typename MultiArrayShape< N >::type blockShape, MultiArrayView< N, T, UnstridedArrayTag > array)
Read a block of data into a multi array. This function allows reading a small block out of a larger volume stored in an HDF5 dataset.
blockOffset determines the position of the block; blockShape determines its size in each dimension. The shape of array must match blockShape.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
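A hedged sketch (names assumed from the writeBlock() example above): read a 64^3 sub-block starting at offset (10, 20, 30) from a larger 3D dataset.

    #include <vigra/multi_array.hxx>
    #include <vigra/hdf5impex.hxx>

    void readBlockExample()
    {
        typedef vigra::MultiArrayShape<3>::type Shape;

        vigra::HDF5File file("volume.h5", vigra::HDF5File::Open);

        vigra::MultiArray<3, float> block(Shape(64, 64, 64));  // shape must match blockShape
        file.readBlock("big_volume", Shape(10, 20, 30), Shape(64, 64, 64), block);
    }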
void readAtomic (std::string datasetName, T &data)
Read a single value. This function allows reading a single datum of atomic datatype (int, long, double) from the HDF5 file, so it is not necessary to create a MultiArray of size 1 to read a single number.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
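A hedged sketch that reads back the scalars from the writeAtomic() example above (names are made up):

    #include <vigra/hdf5impex.hxx>

    void readScalars()
    {
        vigra::HDF5File file("results.h5", vigra::HDF5File::Open);
        file.cd("/results");

        int iterations = 0;
        double error = 0.0;
        file.readAtomic("iteration_count", iterations);  // read relative to "/results"
        file.readAtomic("final_error", error);
    }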
void createDataset (std::string datasetName, typename MultiArrayShape< N >::type shape, T init, int iChunkSize = 0, int compressionParameter = 0)
Create a new dataset. This function can be used to create a dataset filled with a default value, for example before writing data into it using writeBlock(). Attention: only atomic datatypes are supported. For spectral data, add a dimension (e.g. for RGB, add one dimension of size 3).
shape determines the dimensionality and the size of the dataset.
Chunking can be activated by setting iChunkSize > 0; the chunks will be hypercubes with edge length iChunkSize.
Compression can be activated by setting compressionParameter to a value between 1 and 9, where 0 stands for no compression and 9 for maximum compression.
If the first character of datasetName is a "/", the path is interpreted as an absolute path, otherwise as a path relative to the current group.
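A hedged sketch (file and dataset names are made up): reserve a chunked, compressed 2D dataset filled with a default value, to be filled later block by block via writeBlock().

    #include <vigra/hdf5impex.hxx>

    void createDatasetExample()
    {
        typedef vigra::MultiArrayShape<2>::type Shape;

        vigra::HDF5File file("tiles.h5", vigra::HDF5File::New);

        // 4096 x 4096 dataset of unsigned char, default value 0,
        // chunk edge length 256, compression level 2
        file.createDataset<2, unsigned char>("mosaic", Shape(4096, 4096), (unsigned char)0, 256, 2);
    }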