The NREL CSV extension provides convenience functions for the import and export of CSV data. The extension is not designed to import and export Haystack-compliant CSV data, but rather to provide a flexible interface for exchanging arbitrary, general-purpose CSV data with other software.

The NREL CSV extension requires SkySpark 3.1 or later and nrelUtilityExt 2.0 or later. Most functions are overridable. See the nrelCsvExt GitHub repository for important release notes and a change log; please report issues there.
csvImportRecs and related functions provide flexible import and parsing of records from CSV data.
csvImportRecs accepts a column specification dictionary, or `spec`, that governs the interpretation of CSV data columns. In the `spec`, key names match column names in the CSV file (after applying toTagName) and key values are strings specifying the column data type. The supported data types are:

- `auto`: guesses data type automatically using parseAuto
- `bool`: parsed as a Boolean
- `coord`: parsed as a coord
- `date`: parsed as a date
- `datetime`: parsed as a dateTime
- `dict`: parsed as a dict (must follow valid Zinc syntax)
- `float`: parsed as a floating point number
- `ignore`: ignored (skipped)
- `integer`: parsed as an integer
- `list`: parsed as a list (must follow valid Zinc syntax)
- `marker`: any non-empty value is interpreted as setting a marker flag
- `number`: parsed as a number
- `ref`: parsed as a ref; handles an optional trailing dis
- `string`: parsed as a string
- `tags`: parsed as a set of tags provided as name-value pairs (must follow valid Zinc syntax for each tag)
- `time`: parsed as a time
- `uri`: parsed as a uri

Any other value in `spec` will cause an error when the corresponding column is parsed. Any column without an entry in `spec` is parsed as a string. The `tags` and `coord` data types use an alternate separator; see the `sep` option below.
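As a sketch, a column specification might look like the following. The file path and column names are hypothetical, and csvImportRecs is assumed to take the CSV handle followed by the `spec`; consult the function documentation for the exact signature.

```axon
// Hypothetical column spec; CSV column names are matched after toTagName()
spec: {
  dis:       "string",   // display name, parsed as a string
  area:      "number",   // e.g. "1500ft²" parsed as a number with unit
  installed: "date",
  active:    "bool",
  equipRef:  "ref",      // ref with optional trailing dis
  notes:     "ignore"    // skip this column entirely
}

// Import records from a project file (path is illustrative)
recs: csvImportRecs(`io/equipment.csv`, spec)
```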
Options for csvImportRecs are passed as name-value pairs within the opts argument. The following options are supported:

- `delimiter`: Column delimiter character; passed through to ioStreamCsv
- `noHeader`: Passed through to ioStreamCsv
- `sep`: Tag and value separator used to interpret `tags` and `coord` data types (must not be the same as `delimiter`; default = `" "`)
- `template`: Optional dict of tags to apply to all imported records
- `checked`: Boolean; if `true` then throw an error on parsing failures; if `false` return null for CSV data that fails to parse (default = `true`)
- `commit`: Boolean; if `true` commit records to the Folio as well as returning them (default = `false`)
- `log`: Boolean; if `true` create a log entry for each committed record (default = `false`)
- `warn`: Boolean; if `true` log warnings; if `false` suppress them (default = `true`)
- `datePattern`: Pattern for parsing dates (for default, see parseDate)
- `dateTimePattern`: Pattern for parsing dateTimes (for default, see parseDateTime)
- `timePattern`: Pattern for parsing times (for default, see parseTime)
- `tz`: Time zone for parsing dateTimes (default = `now().tz`)

The `checked`, `commit`, `log`, and `warn` options may also be passed as markers, with `x` equivalent to `x:true` and `-x` equivalent to `x:false`. For date, time, and dateTime parsing patterns see sys::Time.toLocale.
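For instance, marker-style options can shorten a call. This is a sketch; the handle and `spec` are hypothetical, and the argument order is assumed:

```axon
// Commit imported records and suppress warnings; "commit" and "-warn" are
// marker shorthand for commit:true and warn:false
recs: csvImportRecs(`io/equipment.csv`, spec, {commit, -warn, tz: "Denver"})
```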
You may import Haystack `weatherStation`, `site`, `space`, `equip`, and `point` entities using the tailored functions csvImportWeatherStations, csvImportSites, csvImportSpaces, csvImportEquips, and csvImportPoints.
These functions use the same column specification format, data types, and options as csvImportRecs. However, they provide two additional features:

1. A default column specification tailored to the entity type (see below)
2. Automatic assignment of relationship refs (`siteRef`, `equipRef`, etc.)

Relationships for records are assigned using the following sources (in priority order):

1. The `template` option
2. A `*Ref` column of data type `ref`, if present
3. A `*Name` column of data type `string`, using the value to map the record to an existing entity by name/description

If multiple sources exist, the higher-priority source is used with a warning (suppressible by setting the `warn` option to false).
Mapping refs by name requires matching each imported record to an existing target entity either by description (as returned by dis(); default) or by the value of a user-specified tag on the target entity (e.g. `navName`). To override the default behavior, use the option:

- `matchRefsBy`: Str; tag name to use when matching and assigning refs by name

As an example, consider assigning an `equipRef` tag to a set of imported points based on CSV data:

- If an `equipRef` column is available, parse it as a ref
- If `matchRefsBy:"navName"` and an `equipName` column is available, use its value to match an `equip` record by the value of its `navName` tag
- Otherwise, if an `equipName` column is available, use its value to match an `equip` record by its description, per dis()

The `matchRefsBy` option applies to any name-based record matching. Different behavior for different types of refs (e.g. sites vs. spaces) is not supported. Instead, use post-processing or the `template` option to assign the correct refs.
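A sketch of name-based matching follows. The column names are hypothetical, and csvImportPoints is assumed to accept a handle, spec, and opts in the same way as csvImportRecs:

```axon
// Match each imported point to its equip by the equip's navName tag
spec: {dis: "string", equipName: "string", kind: "string", unit: "string"}
points: csvImportPoints(`io/points.csv`, spec, {matchRefsBy: "navName", commit})
```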
Entities that can be nested (such as sites, spaces, and equips) also support mapping of parent records of the same type using the *Name
pattern. However, for this to work properly, the parent records must already exist in the database. Making this work in practice requires multiple import passes.
Default column specification for csvImportWeatherStations:
| Tag | Type | Description |
|---|---|---|
| dis | string | Weather station display name |
| weatherStationTags | tags | Arbitrary set of tags to apply |
| geoAddr | string | Full address |
| geoStreet | string | Street address |
| geoCity | string | City |
| geoCounty | string | Subdivision of state/province |
| geoState | string | State or province |
| geoCountry | string | Country code |
| geoPostalCode | string | Postal code (ZIP code, in USA) |
| geoCoord | coord | Latitude/longitude in decimal format |
| geoElevation | number | Elevation above sea level |
| tz | string | String designation for weather station time zone |
Imported weather stations always receive the `weatherStation` tag, plus any user-specified template. If a `tz` column is not provided, imported weather stations also receive a `tz` tag from the `tz` import option (default = `now().tz`).
Default column specification for csvImportSites:
| Tag | Type | Description |
|---|---|---|
| dis | string | Site display name |
| siteTags | tags | Arbitrary set of tags to apply |
| area | number | Area of site |
| geoAddr | string | Full address |
| geoStreet | string | Street address |
| geoCity | string | City |
| geoCounty | string | Subdivision of state/province |
| geoState | string | State or province |
| geoCountry | string | Country code |
| geoPostalCode | string | Postal code (ZIP code, in USA) |
| geoCoord | coord | Latitude/longitude in decimal format |
| geoElevation | number | Elevation above sea level |
| primaryFunction | string | The primary function of the building |
| siteName | string | Parent site name; used to construct 'siteRef' via name-based matching |
| siteRef | ref | Site ref; overrides 'siteName' |
| tz | string | String designation for site time zone |
| weatherStationName | string | Weather station name; used to construct 'weatherStationRef' via name-based matching |
| weatherStationRef | ref | Weather station ref; overrides 'weatherStationName' |
| yearBuilt | number | Year in which the building was constructed |
Imported sites always receive the `site` tag, plus any user-specified template. If a `tz` column is not provided, imported sites also receive a `tz` tag from the `tz` import option (default = `now().tz`).
Default column specification for csvImportSpaces:
| Tag | Type | Description |
|---|---|---|
| dis | string | Space display name |
| navName | string | Space navigation name (alternative to 'dis') |
| spaceTags | tags | Arbitrary set of tags to apply |
| area | number | Floor area of space |
| floor | marker | Sets the 'floor' marker |
| floorNum | integer | Floor number per European convention |
| room | marker | Sets the 'room' marker |
| siteName | string | Associated site name; used to construct 'siteRef' via name-based matching |
| siteRef | ref | Site ref; overrides 'siteName' |
| spaceName | string | Parent space name; used to construct 'spaceRef' via name-based matching |
| spaceRef | ref | Space ref; overrides 'spaceName' |
| zone | marker | Sets the 'zone' marker |
Imported spaces always receive the `space` tag, plus any user-specified template. In addition, any imported space records that lack the `dis` and `disMacro` tags but include a `navName` tag receive a default display macro:

`disMacro:"$siteRef $navName"`

A custom `disMacro` may be specified using the `template` option.
Default column specification for csvImportEquips:
| Tag | Type | Description |
|---|---|---|
| dis | string | Equipment display name |
| navName | string | Equipment navigation name (alternative to 'dis') |
| equipTags | tags | Arbitrary set of tags to apply |
| equipName | string | Parent equip name; used to construct 'equipRef' via name-based matching |
| equipRef | ref | Equip ref; overrides 'equipName' |
| siteName | string | Associated site name; used to construct 'siteRef' via name-based matching |
| siteRef | ref | Site ref; overrides 'siteName' |
| spaceName | string | Associated space name; used to construct 'spaceRef' via name-based matching |
| spaceRef | ref | Space ref; overrides 'spaceName' |
Imported equipment always receives the `equip` tag, plus any user-specified template. In addition, any imported equip records that lack the `dis` and `disMacro` tags but include a `navName` tag receive a default display macro:

`disMacro:"$siteRef $navName"`

A custom `disMacro` may be specified using the `template` option.
Default column specification for csvImportPoints:
| Tag | Type | Description |
|---|---|---|
| dis | string | Point display name |
| navName | string | Point navigation name (alternative to 'dis') |
| pointTags | tags | Arbitrary set of tags to apply |
| equipName | string | Associated equip name; used to construct 'equipRef' via name-based matching |
| equipRef | ref | Equip ref; overrides 'equipName' |
| kind | string | Kind of point (e.g. "Number" or "Bool"); must match SkySpark kind |
| siteName | string | Associated site name; used to construct 'siteRef' via name-based matching |
| siteRef | ref | Site ref; overrides 'siteName' |
| spaceName | string | Associated space name; used to construct 'spaceRef' via name-based matching |
| spaceRef | ref | Space ref; overrides 'spaceName' |
| tz | string | String designation for point time zone |
| unit | string | Unit for numeric point (e.g. "kW") |
| weatherStationName | string | Weather station name; used to construct 'weatherStationRef' via name-based matching |
| weatherStationRef | ref | Weather station ref; overrides 'weatherStationName' |
Imported points always receive the `point` tag, plus any user-specified template. In addition, any imported point records that lack the `dis` and `disMacro` tags but include a `navName` tag receive a default display macro:

- `disMacro:"$equipRef $navName"`
- `disMacro:"$weatherStationRef $navName"`

A custom `disMacro` may be specified using the `template` option.
If `tz` is not provided, imported points are also assigned a time zone based on one of the following (in priority order):

- The `tz` import option (default = `now().tz`)

For minimum Haystack compliance, `pointTags` should include one of `sensor`, `cmd`, or `sp` (or these tags should be imported via other columns).
For best matching of refs, use the following import sequence:

1. csvImportWeatherStations
2. csvImportSites
3. csvImportSpaces
4. csvImportEquips
5. csvImportPoints

If any entities of the same type are nested (e.g. rooms within floors), split them into separate files and import the parents prior to the children.
csvImportHistory imports time series data from a CSV source and writes it to point histories.

csvImportHistory imports CSV data from a single `handle` (String or Uri), processes it, and writes it to one or more `points`. To receive CSV data, each target point record must include the `csvColumn` control tag (see below).

Import control and customization is provided by tags attached to each target point record and a set of function options passed via the `opts` argument.
The control tags for csvImportHistory are:

- `csvColumn`: (Required) A string or integer specifying the CSV data column that contains the target point's history
- `csvUnit`: String specifying the units of numeric CSV data
- `csvCallback`: Optional callback function to be applied to the CSV data; follows the same conventions as OnWrite callbacks
- `csvConvert`: Optional point value conversion as described in Point Conversion to be applied to the CSV data
- `csvRollupInterval`: Number that defines a rollup interval for the CSV data
- `csvRollupFunc`: String that defines a rollup function for the CSV data

Only the `csvColumn` tag is required. All other tags are optional, but provide a flexible framework for post-processing the imported CSV data prior to writing history to the target points.
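For example, a point might be tagged as follows. The record id, column name, and tag values are hypothetical:

```axon
// Tag a point so csvImportHistory reads its history from the "OAT" column,
// interprets values as °F, and rolls the data up to hourly averages
commit(diff(readById(@p:demo:r:2e5f1a00), {
  csvColumn: "OAT",
  csvUnit: "°F",
  csvRollupInterval: 1hr,
  csvRollupFunc: "avg"
}))
```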
Control options for csvImportHistory are passed as name-value pairs within the `opts` argument. The following options are supported:

- `checked`: Boolean; if `true` then throw an error on parsing failures; if `false` insert NA for CSV data that fails to parse (default = `true`)
- `delimiter`: passed through to ioReadCsv
- `noHeader`: passed through to ioReadCsv
- `warn`: Boolean; if `true` log warnings; if `false` suppress them (default = `true`)
- `overwrite`: Boolean; allows existing history to be overwritten, ignoring `hisEnd` (default = `false`)
- `preview`: enables preview mode and specifies the number of rows to preview for each point
- `span`: time span for which to write history; anything accepted by toSpan
- `tsColumn`: specifies the timestamp column(s) (default = `"ts"`; see parsing timestamps)
- `tsPattern`: pattern(s) for parsing CSV timestamps (see parsing timestamps)
- `tz`: time zone for CSV timestamps (default = `now().tz`)

The `checked` and `warn` options may also be passed as markers, with `x` equivalent to `x:true` and `-x` equivalent to `x:false`. If the `preview` option is provided as a marker, it defaults to `preview:10`.
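A sketch of a call follows; the handle, point filter, and argument order are assumptions:

```axon
// Preview the first 5 rows per point before committing to a real import
points: readAll(point and csvColumn)
csvImportHistory(`io/history.csv`, points,
  {tsColumn: "Timestamp", tz: "Denver", preview: 5})
```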
The key to properly specifying these tags and options to achieve a desired import workflow is understanding the order of operations that occur during data import:

1. CSV data is read from `handle`.
2. Timestamp and value columns are identified for each of the `points`:
   - The `tsColumn` option specifies the timestamp column(s).
   - The `csvColumn` tag specifies the value column.
3. Timestamps are parsed per the `tsPattern` option (see parsing timestamps).
4. Values are parsed per the point's `kind` tag (see parsing values).
5. Units are assigned per the `csvUnit` tag (overrides any embedded units). If `csvUnit` is missing and the parsed data has no embedded units, the point's `unit` tag (if any) will be used instead.
6. If the point has a `csvCallback` tag, the specified callback function is applied to the imported data (see callbacks).
7. If the point has a `csvConvert` tag, the specified point conversion is applied to the imported data using pointConvert.
8. If the point has a `csvRollupInterval` tag, the data are rolled up to the specified interval (see rollups).
9. If the `span` option is specified, data rows with timestamps outside of `span` are dropped.
10. Data rows with timestamps prior to each point's existing history end (`hisEnd` tag) are dropped (unless the `overwrite` option is set).

The `tsColumn`, `tsPattern`, and `tz` options for csvImportHistory provide flexible configuration of timestamp parsing.
`tsColumn` may be any of the following:

- A string matching a timestamp column name
- An integer giving a timestamp column index
- A list of strings and/or integers identifying multiple columns

Timestamps spread across multiple columns (such as date in one column and time in another column) are combined in the order specified. Column indices are zero-based, i.e. the first column is index 0.

`tsPattern` may be a single string or a list of strings, each providing a valid dateTime pattern per sys::Time.toLocale, for example:

`"YYYY-MM-DD'T'hh:mm:SS.FFFFFFFFFz zzzz"`

Note the following behaviors:

- If `tsPattern` is a list, patterns are tried in order until either one is successful or they have all been exhausted
- If parsed timestamps carry embedded time zone information, `tz` is silently ignored
- Multiple timestamp columns are concatenated with `" "` (space) prior to parsing; `tsPattern` must reflect the expected pattern of the concatenated columns

Values are parsed according to `kind`: Number, Bool, or Str. If necessary, numeric data are converted to Boolean, with any nonzero value representing `true`. Parsing supports the same keywords for true, false, null, NA, NaN, and infinity as parseAuto.

NA values are returned in the following circumstances:

- The value matches an NA keyword per parseAuto
- The value fails to parse and `checked` is false

For `kind:"Str"`, the string is returned as-is (NA, null, etc. keywords are not parsed). To detect and handle NA values (or any other special cases), use the `csvCallback` tag or an hisOnWrite function.
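The timestamp options described above can be sketched as follows; the column layout and pattern are hypothetical:

```axon
// Columns 0 and 1 hold e.g. "6/1/2023" and "2:15 PM"; they are joined with
// a space before parsing, so the pattern must cover the combined value
opts: {
  tsColumn: [0, 1],
  tsPattern: "M/D/YYYY h:mm AA",
  tz: "Denver"
}
```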
The `csvCallback` tag defines a callback function to be applied to the imported history data after initial import. The syntax follows the same conventions as OnWrite functions. The tag value must reference a valid top-level function. Partial application is supported. Lambda functions are not supported.

For security reasons, the callback function executes within an ephemeral task with the permissions of the synthetic task user. If the task extension is not loaded, attempting to use `csvCallback` will throw an error.
The `csvRollupInterval` tag enables automatic rollup of imported history before writing data. Rollup behavior is as follows:

- If the `csvRollupFunc` tag is present, hisRollup is called using the specified rollup function.
- If the `csvRollupFunc` tag is missing, hisRollupAuto is used instead.

`csvRollupFunc` must be the name of a valid top-level function. Like callbacks, history rollups execute within an ephemeral task for security reasons and require the task extension to be loaded.
By default, csvImportHistory() drops values prior to each point's `hisEnd` before writing data. This prevents overwriting existing data. To intentionally overwrite existing history, use the following options:

- Set the `span` option to the time span you wish to overwrite
- Set the `overwrite` option to `true`

If `span` is not provided, then the `overwrite` option is ignored, with a logged warning. When using overwrite, it is recommended to also set `warn` to `false` to avoid filling the log with out-of-order history warnings.
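A sketch of an intentional overwrite; the handle, point filter, and argument order are assumptions:

```axon
// Re-import June 2023, replacing any existing history in that span;
// "-warn" suppresses the resulting out-of-order history warnings
csvImportHistory(`io/history-2023-06.csv`, readAll(point and csvColumn),
  {span: 2023-06-01..2023-06-30, overwrite, -warn})
```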
The `preview` option enables preview mode. In preview mode, csvImportHistory returns a combined history grid previewing the data import but does not write the imported history to the database. The number of rows per point (data column) is limited to the numeric value of `preview`; see options. (The total number of rows may be greater than `preview` if not all rows of the original CSV file include values for every point.)
csvImportHistory does its best to recover from import errors: any point that encounters an import error is skipped and execution continues to the next point. Error logging is as follows:

- `checked = true`: Logged as errors, with an error thrown on function exit
- `checked = false` and `warn = true`: Logged as warnings, with a clean function exit
- `checked = false` and `warn = false`: Not logged; clean function exit

General errors are always logged and thrown, regardless of the `checked` and `warn` options.
csvExportHistory exports point history to a CSV file, optionally with some convenience processing. The function arguments are:

- `points`: Point(s) for which to export history (may be any value supported by toRecList)
- `span`: Timespan for which to export history (may be any value supported by toSpan)
- `handle`: Uri of output file
- `opts`: Dict of control options

Control options for csvExportHistory are passed as name-value pairs within the `opts` argument. The following options are supported:
- `clean`: Boolean; if `true` then numeric data will be range-cleaned prior to export (see range cleaning) (default = `false`)
- `defVal`: Number or NA; governs range cleaning behavior (see range cleaning)
- `delimiter`: passed through to ioWriteCsv
- `headerUnits`: Boolean; if `true` then units will be appended to data column headers (default = `true`)
- `interpolate`: Boolean; if `true` then the data will be interpolated (default = `false`)
- `interval`: Number; optional rollup interval (default = none)
- `noHeader`: passed through to ioWriteCsv
- `preview`: enables preview mode and specifies the number of rows to preview
- `removeUnits`: Boolean; if `true` then units will be removed from numerical data prior to export (default = `true`)
- `tsPattern`: Str; optional timestamp format pattern

The `clean`, `headerUnits`, `interpolate`, and `removeUnits` options may also be passed as markers, with `x` equivalent to `x:true` and `-x` equivalent to `x:false`. If the `preview` option is provided as a marker, it defaults to `preview:10`.
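A sketch of an export call; the point filter, span, and file path are hypothetical, and the argument order follows the list above:

```axon
// Export June 2023 history at an hourly rollup with range cleaning enabled
readAll(point and equipRef==@demo-ahu1)
  .csvExportHistory(2023-06, `io/export.csv`, {clean, interval: 1hr})
```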
The export workflow is as follows:

1. History is read for the specified `points` and `span`
2. Numeric data are range-cleaned (if `clean` is true)
3. Data are rolled up (if `interval` is provided)
4. Data are interpolated (if `interpolate` is true)
5. Units are removed from numeric data (if `removeUnits` is true)
6. Units are appended to data column headers (if `headerUnits` is true)
7. Timestamps are formatted (per `tsPattern`, if provided)
8. Data are written to `handle` using ioWriteCsv

If `preview` is set, then step 8 instead returns a preview of the output that would have been written.
The `preview` option enables preview mode. In preview mode, csvExportHistory returns a grid previewing the data export but does not write to file. The total number of rows is limited to the numeric value of `preview`; see options.
If the `clean` option is `true`, values outside the range `[minVal, maxVal]`, as defined by the point's minVal and maxVal tags, are removed from the data set. (Missing `minVal` and `maxVal` tags imply permissible minimum and maximum values of negative and positive infinity, respectively.)

If the `defVal` option is also provided, removed values are replaced with `defVal`; otherwise, they are replaced with Null. Optionally, `defVal` may be defined as a tag on each point; `defVal` point tags override the global option on a per-point basis.
Range cleaning is performed prior to rollup and/or interpolation.
If the `interval` option is provided, data are rolled up at the specified interval using hisRollupAuto. (For maximum compatibility with other software, Boolean points are instead rolled up using durTrue, then normalized by the interval to obtain a fractional value between 0 and 1.) Rollup occurs prior to interpolation.

If `interpolate` is `true`, the entire history grid is interpolated prior to export, following SkySpark's normal interpolation rules.
| | |
|---|---|
| Version | 1.0.1 |
| License | BSD-3-Clause-Clear |
| Build date | 23 Aug 2023 |
| File name | nrelCsvExt.pod |
| File size | 19.28 kB |
| MD5 | 544e6dbae0f2da5f952b737b73d2f64e |
| SHA1 | 33feaf6251760ed95c6d6b19c99d9b296eb31d51 |
| Published by | NREL |

Also available via the SkyArc Install Manager.