Kevin Chafloque Yesterday
Hi everyone, I’m currently a student doing research on the onboarding process for building analytics/monitoring, especially the part where you collect the context you need from the building team to make the data usable (point lists, equipment mapping, naming conventions, change history, etc.).
If you’ve onboarded buildings into Haystack/other analytics stacks: what are the top 2–3 things that most often cause delays or trust issues later (mis-tagging, missing context, point remaps, unit quirks, time sync, backfills, etc.)? And what artifacts/processes have worked best (tagging templates, checklists, commissioning docs, workflows)?
If you’re open to a quick 15-min chat, I’d really appreciate it.
My email: [email protected]
Rick Jennings Today 8:47am
Hi Kevin,
Thanks for reaching out! It sounds like you are working on some interesting research.
When onboarding a building, it is important to consider how to work with two kinds of data: entity data and timeseries data.
On the entity side, I have seen incorrect or inconsistent tagging prevent analytics code from being reused, which cost extra time and money on a project. The time and cost of tagging entities in the first place can also prevent projects from happening at all.
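To make that concrete, here is a minimal sketch in plain Python (the point records are hypothetical) of how a single inconsistently tagged point silently drops out of a tag-based query, which is exactly how reusable analytics code breaks:

```python
# Minimal sketch: Haystack-style entities as dicts of marker tags.
# Tag names follow Project Haystack conventions; the records are made up.

points = [
    {"id": "p1", "discharge": True, "air": True, "temp": True, "sensor": True, "point": True},
    {"id": "p2", "discharge": True, "air": True, "temp": True, "sensor": True, "point": True},
    # Same physical point type, but tagged "supply" instead of "discharge":
    {"id": "p3", "supply": True, "air": True, "temp": True, "sensor": True, "point": True},
]

def has_tags(entity, *tags):
    """Simple Haystack-style filter: entity must carry every marker tag."""
    return all(entity.get(t) for t in tags)

# Rule logic keyed on "discharge air temp sensor" silently misses p3.
matched = [p["id"] for p in points if has_tags(p, "discharge", "air", "temp", "sensor")]
print(matched)  # ['p1', 'p2'] -- p3 is never analyzed, and no error is raised
```

The query succeeds, so the mis-tagged point fails silently; that is what erodes trust in the analytics later.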
The Project Haystack community has been working on Xeto and improved semantics to help address these challenges and scale building analytics.
Here is an example showing how to work with Project Haystack entity data using Xeto specs and Python. Also, here is an overview of the webinar that presented this example, which includes a link to the recording.
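The linked example shows the real workflow; as a rough flavor of the underlying idea, here is a minimal plain-Python sketch of spec-driven validation (the spec and record are hypothetical, and this is not actual Xeto syntax or tooling):

```python
# Minimal sketch of the spec idea: a spec declares which marker tags an entity
# type requires, and entities are checked against it before analytics code runs.
# Plain Python for illustration only; not actual Xeto syntax.

SPECS = {
    "DischargeAirTempSensor": {"discharge", "air", "temp", "sensor", "point"},
}

def missing_tags(entity, spec_name):
    """Return the set of required marker tags the entity is missing."""
    present = {t for t, v in entity.items() if v is True}
    return SPECS[spec_name] - present

entity = {"id": "p3", "supply": True, "air": True, "temp": True,
          "sensor": True, "point": True}
missing = missing_tags(entity, "DischargeAirTempSensor")
if missing:
    print(f"{entity['id']} fails spec: missing {sorted(missing)}")
    # p3 fails spec: missing ['discharge'] -- caught at onboarding, not in production
```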
For timeseries data, it is important to account for gaps in the data; Project Haystack's NA value was introduced for this. Within the next few months I plan to present an example of how to work with timeseries data and Project Haystack's NA using dataframes in Python. In the meantime, if this topic interests you, I can show you some working concepts.
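As a preview of the kind of thing I mean, here is a minimal pandas sketch (the sensor data is hypothetical, and mapping Haystack's NA onto pandas' NA is my own choice of representation, not an official convention):

```python
# Minimal sketch: a Haystack-style timeseries with NA gaps in a pandas dataframe.
import pandas as pd

ts = pd.date_range("2024-01-01 00:00", periods=6, freq="15min")
# Two intervals where the device reported NA (e.g., sensor offline),
# represented here with pandas' nullable Float64 dtype:
vals = pd.array([72.1, 72.4, None, None, 73.0, 73.2], dtype="Float64")
df = pd.DataFrame({"dat": vals}, index=ts)

# NA-aware rollup: the mean skips NA by default, so gaps do not drag it down,
# and a coverage ratio makes the gaps visible instead of hiding them.
hourly = df["dat"].resample("1h").agg(["mean", "count", "size"])
hourly["coverage"] = hourly["count"] / hourly["size"]
print(hourly)  # coverage < 1.0 flags the hour with missing intervals
```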
Rick