1. Introduction

The Open Geospatial Consortium (OGC) is releasing this Call for Participation (CFP) to solicit proposals for OGC Testbed-18. The Testbed-18 initiative will explore six tasks: Building Energy Interoperability; Secure, Asynchronous Catalogs; Identifiers for Reproducible Science; Moving Features and Sensor Integration; 3D+ Data Standards and Streaming; and Machine Learning Training Data.


1.1. Background

OGC testbeds are annual research and development initiatives that explore geospatial technology from various angles. They take the OGC standards baseline into account while exploring selected aspects with a fresh pair of eyes. Testbeds integrate requirements and ideas from a group of sponsors, which allows leveraging symbiotic effects and makes the overall initiative more attractive to both participants and sponsoring organizations.

1.2. OGC Innovation Program Initiative

This initiative is being conducted under the OGC Innovation Program. This program provides a collaborative agile process for solving geospatial challenges. Organizations (sponsors and technology implementers) come together to solve problems, produce prototypes, develop demonstrations, provide best practices, and advance the future of standards. Since 1999, more than 110 initiatives have taken place.

1.3. Benefits of Participation

This initiative provides an outstanding opportunity to engage with the latest research on geospatial system design, concept development, and rapid prototyping. The initiative provides a business opportunity for stakeholders to mutually define, refine, and evolve service interfaces and protocols in the context of hands-on experience and feedback. The outcomes are expected to shape the future of geospatial software development and data publication. The Sponsors are supporting this vision with cost-sharing funds to partially offset the costs associated with development, engineering, and demonstration of these outcomes. This offers selected Participants a unique opportunity to recoup a portion of their initiative expenses.

1.4. Master Schedule

The following table details the major Initiative milestones and events. Dates are subject to change.

Table 1. Master schedule

Milestone | Date | Event
M01 | 07 February 2022 | Release of CFP.
M02 | 18 February 2022 | Questions for CFP Bidders Q&A Webinar due.
M03 | 24 February 2022 | Bidders Q&A Webinar, held 10:00-11:00 EST.
M04 | 31 March 2022 | CFP Proposal Submission Deadline (11:59 pm EST).
M05 | 25 April 2022 | All testbed Participation Agreements signed. OGC will start sending preliminary offers to conduct negotiations no later than 23 March.
M06 | 3-5 May 2022 | Kickoff Workshop held as a virtual meeting on May 3-5, May 24, and July 17.
M07 | 15 June 2022 | Initial Engineering Reports (IERs) due.
M08 | June 2022 (specific date TBD) | IER presentations at the Member Meeting.
M09 | 30 September 2022 | Technology Integration Experiment (TIE) component implementations completed & tested; preliminary Draft Engineering Reports (DERs) completed & ready for internal reviews.
M10 | 31 October 2022 | Ad hoc TIE demonstrations & Demo Assets posted to Portal; near-final DERs ready for review; WG review requested.
M11 | 18 November 2022 | Final DERs (incorporating internal and WG feedback) posted to the OGC Pending directory to meet the three-week rule before the Technical Committee (TC) electronic vote for publication.
M12 | 15 December 2022 | Last deadline for final DER presentation in the relevant WG for the publication electronic vote.
M13 | 16 December 2022 | Last deadline for the TC electronic vote on publishing the final DER.
M14 | 31 December 2022 | Participants' final summary reports due.
M15 | 17 & 18 January 2023 | Outreach presentations at an online demonstration event.

2. Technical Architecture

This section provides the technical architecture and identifies all requirements and corresponding work items. It references the OGC standards baseline, i.e., the complete set of member-approved Abstract Specifications, Standards (including Profiles and Extensions), and Community Practices where necessary.

Please note that some documents referenced below may not yet have been released to the public. These reports require a login to the OGC portal. If you don’t have a login, please contact OGC using the Additional Message textbox in the OGC Innovation Program Contact Form.

Testbed Threads

The Testbed is organized into a number of tasks. For organizational purposes, related tasks are handled in threads. Each thread combines a number of tasks that usually share architectural or thematic aspects. Threads allow us to keep related work items closely together.

Figure 1. Testbed-18 Threads and Tasks

The threads include the following tasks:

2.1. 3D+ Data Standards and Streaming

The exact positioning of sensors in 3D space and the corresponding 3D data streaming, analytics, and portrayal play an important role in many geospatial scenarios and applications. Remote sensing of the Earth’s surface, atmosphere, or stratosphere has become routine in many domains.


In classical remote sensing, a Coordinate Reference System (CRS) is a framework used to precisely measure locations on the surface of the Earth as coordinates. It is the application of the abstract mathematics of coordinate systems and analytic geometry to geographic space. A particular CRS specification comprises a choice of Earth ellipsoid, horizontal datum, map projection (except in the geographic coordinate system), origin point, and unit of measure. Thousands of coordinate systems have been specified for use around the world or in specific regions and for various purposes, necessitating transformations between different CRSs. CRSs are now a crucial basis for the sciences and technologies of Geoinformatics, including cartography, geographic information systems, surveying, remote sensing, and civil engineering. This has led to their standardization in international specifications such as the EPSG codes and ISO 19111:2007 Geographic information — Spatial referencing by coordinates, prepared by ISO/TC 211 and published by the OGC as Abstract Specification, Topic 2: Spatial referencing by coordinates. In addition, some software exists that supports the transformation from one CRS to another.
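To make the last point concrete, the following minimal sketch uses the open-source pyproj library, one example of such transformation software. The EPSG codes are ordinary, illustrative choices.

```python
# A minimal sketch, assuming the pyproj library is installed (pip install pyproj).
from pyproj import Transformer

# Transform WGS84 geographic coordinates (EPSG:4326) to UTM zone 33N (EPSG:32633).
# always_xy=True forces (lon, lat) axis order regardless of CRS axis conventions.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)
easting, northing = transformer.transform(15.0, 52.0)  # lon, lat in degrees
print(f"UTM 33N: {easting:.1f} m E, {northing:.1f} m N")
```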

Now, it is time to go beyond the geospatial domain and enable full location determination, orientation, and trajectory description of objects in orbit of celestial bodies or in free flight in our solar system. This Testbed-18 task shall evaluate existing standards with regard to their ability to represent a full suite of multidimensional CRSs and their associated geometries. These CRSs are referred to here as 3D+ CRSs, as they go beyond the classical view often applied in remote sensing. The task shall evaluate current standards with respect to the exact positioning of sensors at any location within the solar system and their corresponding data streams.

Eventually, a suite of multi-dimensional (3D+) standards must be created that expands upon existing standards to cover broader needs, including temporal aspects and special relativity as described by Minkowski spacetime. Minkowski spacetime combines three-dimensional Euclidean space and time into a four-dimensional manifold in which the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. 3D+ has many definitions and many applications. This task intends to build a framework that will allow existing and emerging technologies to exchange multi-dimensional information in an interoperable way. Therefore, standards and their applicability to space missions are essential.
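For readers unfamiliar with the concept, the frame-independent quantity at the heart of Minkowski spacetime is the spacetime interval. A minimal numeric sketch, assuming the common (-,+,+,+) sign convention:

```python
# Squared spacetime interval between two events, assuming the (-,+,+,+) signature.
# Negative values indicate timelike separation, positive values spacelike.
C = 299_792_458.0  # speed of light [m/s]

def interval_squared(dt, dx, dy, dz):
    return -(C * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

# Example: two events 1 microsecond and 100 m apart are timelike separated.
print(interval_squared(1e-6, 100.0, 0.0, 0.0))  # < 0
```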

2.1.1. Problem Statement and Research Questions

Currently, most OGC standards focus on data that is observed on ground or directly above planet Earth. Other standards, such as GeoSciML, look into the planet and provide a data model and transfer standards for geological data. In addition, recent citizen-science-driven projects have looked into the seas and oceans.

Extra-terrestrial space and the exact location of remote sensors have been less in focus. This Testbed task shall start with a status quo evaluation of current standards and then stepwise expand these or create new standards in order to:

  1. Describe objects in orbit of any celestial body or in free flight in our solar system with respect to their location, trajectory, and orientation;

  2. Enable standard-based mechanisms to transform a location within a reference frame to a location within another reference frame;

  3. Enable standard-based mechanisms to stream and integrate data from various sensors mounted on devices that are described in different reference frames and located anywhere in the solar system.

Earth-centered inertial (ECI) coordinate frames have their origins at the center of mass of Earth and are fixed with respect to the stars. The "I" in "ECI" stands for inertial (i.e., "not accelerating"), in contrast to the "Earth-centered, Earth-fixed" (ECEF) frames, which remain fixed with respect to Earth’s surface as it rotates and therefore rotate with respect to the stars.

For objects in space, the equations of motion that describe orbital motion are simpler in a non-rotating frame such as ECI. The ECI frame is also useful for specifying the direction toward celestial objects. To represent the positions and velocities of terrestrial objects, it is convenient to use ECEF coordinates or latitude, longitude, and altitude. In a nutshell:

  • ECI: inertial, not rotating with respect to the stars; useful to describe the motion of celestial bodies and spacecraft.

  • ECEF: not inertial; accelerating and rotating with respect to the stars; useful to describe the motion of objects on the Earth’s surface.

The extent to which an ECI frame is inertial is limited by the non-uniformity of the surrounding gravitational field. For example, the Moon’s gravitational influence on a high-Earth-orbiting satellite is significantly different from its influence on Earth, so observers in an ECI frame would have to account for this acceleration difference in their laws of motion. The closer the observed object is to the ECI origin, the less significant the effect of the gravitational disparity is.
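The following sketch illustrates the relationship between the two frames in code. It assumes the WGS84 ellipsoid and models the ECEF-to-ECI relationship as a pure rotation about the z-axis by the Earth rotation angle, ignoring precession, nutation, and polar motion, which production-grade transformations (see research topic 3 below) must account for.

```python
# Minimal sketch: geodetic -> ECEF, and a simplified ECEF -> ECI rotation.
# Assumes WGS84 and a single z-axis rotation; not a full-fidelity transformation.
import numpy as np

WGS84_A = 6378137.0                    # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic coordinates to an ECEF position vector [m]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * np.cos(lat) * np.cos(lon)
    y = (n + alt_m) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_eci(r_ecef, theta_rad):
    """Rotate an ECEF position into ECI by Earth rotation angle theta (simplified)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return rot @ r_ecef

r = geodetic_to_ecef(45.0, -75.0, 100.0)
print(ecef_to_eci(r, theta_rad=1.75))
```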

In this context, the following research topics should be explored:

  1. Evaluate existing standards with regard to their ability to represent a full suite of multi-dimensional CRSs and associated geometries. Identify shortfalls and propose steps to resolve them.

  2. Build on the work in #1, focusing on objects in orbit of a celestial object or in free flight in the solar system.

  3. Develop a framework to standardize the transformation of locations across 3D+ CRS. Reference frame transformations should include at least the following coordinate reference frames:

    1. Earth-Centered, Earth-Fixed (ECEF)

    2. Earth-Centered Inertial (ECI) of epoch (J2000 or similar)

    3. ECI Mean of Date

    4. ECI True Equator Mean Equinox

    5. ECI True of Date

    6. Latitude/Longitude to Right Ascension/Declination

    7. Local tangent plane to ECEF

  4. Evaluate support for 3D+ data in existing streaming and portrayal standards. Identify requirements for new or enhancements to existing standards.

The OGC Temporal DWG has begun development of a discussion paper on Dynamic Features. Dynamic Features are 3-D Moving Features that can change location, shape, and state as a function of time. This discussion paper will also explore temporal and spatial reference systems applicable to Features moving at relativistic velocities. Special relativity of Features is to be based on Minkowski spacetime. Participants in this work item are expected to support the OGC Temporal DWG in development of this Discussion Paper and to incorporate its findings in the Testbed-18 work program.

2.1.2. Aim

This task aims to identify an architecture framework and corresponding standards that will allow for the description of a comprehensive set of orbital and non-orbital space-based assets, objects and observations as well as terrestrial observations.

This work shall lay the foundation for modeling, representation, and serialization from space-based assets operating at any location in our solar system. This type of data is referred to here as 3D+ data. The task shall further evaluate the ability to stream 3D+ data to visualization devices (screen, Augmented Reality, Virtual Reality) for presentation. This presentation may require the ability to query for additional metadata supporting feature identification and/or targeting.

2.1.3. Previous Work

The OGC Abstract Specification Topic 2: Referencing by coordinates defines the conceptual schema for the description of referencing by coordinates. In addition, it describes the minimum data required to define coordinate reference systems.

3D data streaming community standards I3S and 3D Tiles are available on the OGC website.

The OGC Testbed-13: 3D Tiles and I3S Interoperability and Performance Engineering Report coming out of OGC Testbed-13 evaluated the 3D data streaming capabilities of different community standards. This work represents important preliminary work for the 3D Data Container and Tiles API Pilot, with results described in the 3D Data Container and Tiles API Pilot Summary Engineering Report.

Portrayal in general and the development of a portrayal framework were subject to several OGC Testbeds. The OGC Testbed-15: Open Portrayal Framework Engineering Report and the OGC Testbed-15: Portrayal Summary Engineering Report describe the Open Portrayal Framework, a set of emerging specifications that support interoperable portrayal of heterogeneous geospatial data. The Open Portrayal Framework facilitates the rendering of geospatial data in a uniform way, according to specific user requirements. The primary topics addressed in Testbed-15 covered style sharing and updates, client- and server-side rendering of both vector and raster data, and converting styles from one encoding to another, all following a single conceptual style model.

Portrayal was further explored in the OGC Portrayal Concept Development Study, with results summarized in the OGC Portrayal Concept Development Study Engineering Report. The study assessed the current state of feature portrayal. This was done through a high-level overview (Strengths, Weaknesses, Opportunities and Threats (SWOT) Analysis), a review of existing portrayal workflows and relevant technologies, standards, and research papers, and by conducting two case studies.

Testbed-13 consolidated the work done in previous OGC Testbeds with a focus on portrayal semantics. The OGC Testbed-13: Portrayal Engineering Report captures all requirements, solutions, models, and implementations of the Testbed-13 Portrayal Package. This effort leveraged the work on Portrayal Ontology development and the Semantic Portrayal Service conducted during Testbeds 10, 11, and 12. The Testbed-13 work identified and closed gaps in the latest version of the portrayal ontology defined in Testbed-12, implemented the Semantic Portrayal Service by adding rendering capabilities, and demonstrated the portrayal service to showcase the benefits of the proposed semantic-based approach.

2.1.4. Work Items & Deliverables

The following figure illustrates the high-level software architecture with all work items and deliverables of this task.

Figure 2. 3D+ High-Level Software Architecture, Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the development of the Engineering Report(s) with their contributions.

D023 3D+ Standards Framework Engineering Report – An Engineering Report which captures the work done to identify existing capabilities and propose future work required to develop a suite of 3D+ standards.

D024 3D+ Data Space Object Engineering Report – An Engineering Report which captures the specific work done to improve describing objects in orbit of a celestial object or in free flight in our solar system.

D025 Reference Frame Transformation Engineering Report – An Engineering Report which describes the standards-based mechanisms to transform location within a reference frame to a location in another reference frame.

D026 3D+ Data Streaming Engineering Report – An Engineering Report which describes how the testbed effort improves multi-dimensional data streaming to multiple device types.

2.2. Machine Learning Training Datasets

Artificial Intelligence (AI) and Machine Learning (ML) algorithms have great potential to advance the processing and analysis of Earth Observation (EO) data. Among the top priorities for efficient machine learning algorithms is the availability of high-quality training data. Training data is the initial dataset used to train machine learning algorithms; models create and refine their rules using this data. Training data is also known as a training dataset (TDS), learning set, or training set. A TDS is an essential component of every machine learning model and helps the model make accurate predictions or perform a desired task.


The goal of this Testbed-18 task is to develop the foundation for future standardization of TDS for Earth Observation applications. The task shall evaluate the status quo of training data formats, metadata models, and general questions of sharing and re-use. Several initiatives, such as ESA’s AI-Ready EO Training Datasets (AIREO), have developed suggestions that could be used for future standardization. Other initiatives focused on the development of training data repositories, such as the Radiant MLHub, an open-access geospatial training data repository where anyone can discover and download ML-ready training datasets.
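To illustrate the kind of information at stake, the following sketch lists metadata fields a TDS description might carry. The field names are assumptions made for discussion here; they do not reflect an existing AIREO, Radiant MLHub, or OGC schema.

```python
# An illustrative (non-normative) sketch of training dataset metadata.
# All field names and values are hypothetical examples.
training_dataset = {
    "id": "example-landcover-tds-001",          # hypothetical identifier
    "title": "Sentinel-2 land-cover training chips",
    "license": "CC-BY-4.0",
    "task": "semantic-segmentation",
    "classes": ["water", "forest", "urban", "cropland"],
    "spatial_extent": [5.9, 45.8, 10.5, 47.8],  # lon/lat bounding box
    "temporal_extent": ["2021-01-01", "2021-12-31"],
    "source_imagery": "Sentinel-2 L2A",
    "label_provenance": "manual annotation, two independent annotators",
    "quality": {"label_agreement": 0.93},       # e.g., inter-annotator agreement
}
```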

2.2.1. Problem Statement and Research Questions

Training datasets are crucial for ML and AI applications, but their scarcity and heterogeneity are becoming a significant bottleneck for the more widespread and systematic application of AI/ML to EO. The issues include:

  • General lack and inaccessibility of high-quality TDS;

  • Absence of standards, resulting in inconsistent and heterogeneous TDS (data structures, file formats, quality control, metadata, repositories, licenses, etc.);

  • Limited discoverability and interoperability of TDS;

  • Lack of best practices and guidelines for generating, structuring, describing, and curating TDS.

This task shall tackle primarily the following research questions:

  1. How to describe a training dataset to enable efficient re-use in ML/AI applications?

  2. What are the main characteristics of the training dataset itself, and what additional information needs to be provided to sufficiently understand the nature and usability of the dataset?

  3. What metadata is required, recommended, or optionally provided?

  4. What does human- and machine-readable documentation look like?

  5. How to catalog TDS?

  6. How to split source data and annotated training data?

  7. What does self-explanatory mean in the context of TDS?

  8. How to express the quality of a TDS?

  9. Is it possible to auto-generate quality indicators? If so, which?

  10. How to enable FAIR (findable, accessible, interoperable and re-usable) data principles to be at the heart of future TDS standardization?

2.2.2. Aim

The objective of this task is to document current approaches and possible alternatives to lay out a path for future standardization of training datasets for Earth Observation applications.

2.2.3. Previous Work

The OGC Testbed-16: Machine Learning Training Data Engineering Report documented the OGC Testbed-16 work that focused on understanding the potential of existing and emerging OGC standards for supporting ML applications in the context of wildland fire safety and response. In this context, integrating ML models into standards-based data infrastructures, the handling of ML training data, and the integrated visualization of ML data with other source data was explored. Emphasis was on the integration of data from the Canadian Geospatial Data Infrastructure (CGDI), the handling of externally provided training data, and the provisioning of results to end-users without specialized software.

2.2.4. Work Items & Deliverables

The following figure illustrates the work items and deliverables of this task.

Figure 3. Machine Learning Training Datasets Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the development of the Engineering Report(s) with their contributions.

D027 Machine Learning Training Data Engineering Report - This Engineering Report captures all results of the ML TDS task and can serve as a baseline for future standardization.

2.3. Secure, Asynchronous Catalogs

Data Centric Security (DCS) is an approach to apply security directly to the data, independently of security features provided by the network, servers, or applications. For Data Centric Security in the geospatial domain, proof of concept implementations were developed through work in OGC Testbed-15 and Testbed-16. Initially using NATO STANAG 4774 and 4778 XML based standards to label and protect geospatial feature data in Testbed-15, the work expanded into JSON based structures during Testbed-16. For Testbed-17, the primary goal of the DCS task was to apply Data Centric Security in the context of OGC APIs that deliver binary data representations such as images and GeoPackage. Testbed-18 will extend the developed solutions to OGC API-Records to allow encrypted delivery and access of catalog metadata between communication partners.


Current OGC APIs define synchronous RESTful web services that return the response on the same socket pair as the one used to receive the request. This interaction pattern works well if the requested response format is streaming-ready and can be delivered immediately. It fails, however, in typical asynchronous interaction scenarios where clients want to subscribe to updates for previously registered queries. Testbed-18 will evaluate the asynchronous communication discussion currently taking place in the OGC API-Common and OGC API-Processes SWGs. This task shall contribute by implementing an asynchronous catalog scenario in which publishers push new data to catalog instances, leading to new or updated catalog entries, and subscribers are informed about these updates.

With OGC CSW and the emergence of OGC API-Records, users have two alternatives for interacting with catalog services. Whereas OGC CSW has been used extensively for ISO 19115 metadata in the past, OGC API-Records is still in development. The draft OGC API-Records specification defines three main building blocks: the Record, Collections, and the Records API for interaction. The Record building block defines the core schema of a catalogue record. It includes a small number of properties that are common across all resource types and needs to be further profiled in order to support ISO 19115:2014 and ISO 19115:2003. Testbed-18 shall explore whether search processes supported in a classical OGC CSW/ISO 19115 environment can be realized with OGC API-Records.
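For orientation, a catalogue record is encoded as a GeoJSON Feature. The following sketch loosely follows the draft OGC API-Records core schema at the time of writing; property names may change as the draft evolves, and all values are illustrative.

```python
# A minimal, non-normative sketch of a catalogue record as a GeoJSON Feature.
record = {
    "id": "example-record-001",
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-75.0, 45.0], [-74.0, 45.0], [-74.0, 46.0],
                         [-75.0, 46.0], [-75.0, 45.0]]],
    },
    "properties": {
        "type": "dataset",
        "title": "Example elevation dataset",
        "description": "Illustrative record for discussion only.",
        "keywords": ["elevation", "example"],
        "updated": "2022-02-07T00:00:00Z",
    },
    "links": [
        {"rel": "self",
         "href": "https://example.org/collections/main/items/example-record-001"}
    ],
}
```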

2.3.1. Problem Statement & Research Questions

This task shall explore catalogs, asynchronous communication, and data centric security in a single scenario. Depending on the level of CRUD support at the time Testbed-18 kicks off, the goal is at least that subscribers can register their queries at a catalog and be informed about changes to the catalog. Ideally, publishers can send new or updated data to an OGC API-Records instance with encrypted content that fulfills DCS principles.

Catalog APIs and resource models are currently emerging in the context of RESTful web APIs. OGC API-Records and STAC are both under development and will soon be available for data discovery and catalog management. The two are not yet fully aligned, though STAC is making good progress towards full OGC API-Records support. OGC API-Records has defined three building blocks, but no support for classical metadata models such as ISO 19115. This task shall develop the necessary extensions for ISO 19115 support.

This scenario will address the following research questions:

  • To which extent does OGC API-Records already support classical discovery workflows comparable with OGC CSW/ISO 19115 set ups?

  • How to establish asynchronous communication with publishers providing new or updated data to catalogs that support OGC API-Records and subscribers interested in these updates?

  • What do interfaces and resource models look like if data centric security principles are applied to all data?

It is emphasized that these research questions shall be addressed in close collaboration with the OGC API-Records Standard Working Group (SWG). Participants in this task are expected to be aware of ongoing discussions within the OGC API-Records SWG. Testbed-18 and the SWG will agree at the Testbed-18 kickoff meeting on how to best organize the workload on OGC API-Records.

2.3.2. Aim

The task aims at developing ISO 19115 support for OGC API-Records. It shall explore DCS in the context of OGC API-Records and develop a candidate standard for a Key Management Service API. The work shall further explore asynchronous communication with OGC API-Records instances.

2.3.3. Previous Work

The OGC API-Records core specification is still in development at the time of the Testbed-18 Call for Participation release. The current timeline anticipates the release of the initial, tagged 1.0-draft version towards the end of Q1/2022. Following a series of feedback rounds, sprints, and implementation work, the final 1.0 version shall be available in Q4/2022, with a subsequent submission to ISO in 2023. The core specification as well as extensions are available in their most recent versions in this GitHub repository. The OGC Innovation Program conducted some research on catalogs in previous years, though most of that work predates the major shift towards web APIs and RESTful architectures. Therefore, this task needs to build on the latest discussions taking place in the OGC API-Records and related working groups.

2.3.4. Work Items & Deliverables

The following figure illustrates the high-level architecture with all work items and deliverables of this task.

Figure 4. Secure, Asynchronous Catalogs Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the development of the Engineering Report(s) with their contributions.

D112 OGC CSW - OGC CSW service instance with support for ISO 19115 (CSW-ebRIM profile is of particular interest)

D113 OGC API-Records - OGC API-Records (draft) service instance

D114 Catalog Client - The Catalog Client supports two interaction patterns. First, the client explores data from both catalog instance types (CSW and OGC API-Records) to compare functionality and ease of use. Second, the client serves as a subscriber to catalog data updates. The client needs to fulfill the following requirements (a sketch of the JWT handling follows this list):

  • Support OGC API-Records (draft) and CSW (CSW-ebRIM profile is of particular interest)

  • Support OpenID based authentication.

  • Support Key Management System (KMS) operations.

  • Support decoding, verifying, and decrypting content according to the JSON Object Signing and Encryption (JOSE) specifications, including JSON Web Token (JWT) content.

  • Support decoding and decrypting content delivered by catalog instances of this task.
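As an illustration of the JOSE/JWT requirement above, the following minimal sketch uses the PyJWT library to verify and decode a token. The key, audience, and issuer values are placeholders, not values prescribed by this task.

```python
# A minimal sketch, assuming the PyJWT library is installed (pip install PyJWT).
import jwt  # PyJWT

def verify_token(token: str, public_key: str) -> dict:
    """Verify the signature and standard claims, returning the decoded payload."""
    return jwt.decode(
        token,
        key=public_key,
        algorithms=["RS256"],
        audience="catalog-client",         # placeholder audience
        issuer="https://idp.example.org",  # placeholder OpenID provider
    )
```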

D115 Authorization and Identity Management System and Key Management Server - A software environment consisting of an authorization and identity management system with support for OpenID Connect. The Key Management Service shall support the Key Management Service API developed as D008. D115 is the only unfunded work item. OGC expects in-kind contributions to be made available based on services available during previous testbeds.

D007 Secure Asynchronous Catalog Engineering Report - This Engineering Report captures all results, experiences, and lessons learned from this task.

D008 Key Management Service API Engineering Report - This Engineering Report describes an OGC candidate standard for a Key Management Service API.

2.4. Moving Features and Sensor Integration

Interoperability is, as of today, the main challenge preventing the original vision of an Internet of Things (IoT) from being fully realized. Indeed, the lack of standardization and interoperability between sensor vendors, system integrators, and data integration/processing platforms makes it difficult for organizations dealing with large numbers of sensor systems to efficiently process data and control their assets. Although standard protocols enable communications between all kinds of devices and systems at low levels, large amounts of code are still required to integrate heterogeneous systems so that they can be accessed and operated with integration tools (e.g., to produce a common operating picture). Many domains, such as smart cities, smart manufacturing, disaster management, or military operations, would greatly benefit from standardization efforts that facilitate IoT interoperability and sensor integration.


This Testbed-18 task shall further advance interoperability between sensors - where sensor design constraints prevent the usage of standardized protocols for the sensors themselves - and between sensing systems. Testbed-18 shall build on the experiences and lessons learned from previous testbeds and integrate the preliminary work in the context of sensor integration and moving features. The goal of this task is to develop a framework for interoperable sensor (data) integration and to demonstrate its capabilities in the context of moving features. More precisely, the framework shall demonstrate the integration of multiple sources of detected moving objects into a common analytic environment. This shall be done in conjunction with the development of a set of quality metrics for Moving Features data. The corresponding Engineering Reports shall describe the architecture framework and corresponding standards, addressing the multi-sensor integration challenge in the context of moving features and moving feature data with corresponding quality metrics.

2.4.1. Problem Statement and Research Questions

This Testbed task addresses two main challenges: First, the harmonization of sensor integration approaches across the existing and emerging OGC and external standards. Second, the maturation of the Moving Features architecture and its integration with the harmonized OGC sensor architecture. Both challenges build on previous work done in Testbed-17.

Sensor systems are built using many different standards, formats, and protocols. This is a significant barrier to sensor integration. Yet there are good reasons why this is so. Sensors must be deployed where they will best measure a physical phenomenon. That imposes non-negotiable constraints on their size, weight, power consumption, and communications capabilities. These constraints limit the options available to the designer, who must select the best option that works within these technical constraints. Thus, successful sensor integration must embrace sensor diversity. What is required is not a one-size-fits-all solution but a framework of standards that enables the integration of a wide range of sensors. The OGC has addressed the sensor integration challenge as part of its Sensor Web Enablement (SWE) initiative. SWE includes both data models and APIs. The suite of SWE standards has been developed over 20 years and therefore reflects various state-of-the-art approaches, including different data serialization models, interface types, and communication patterns. The first objective of this task is to consolidate and harmonize SWE standards with modern OGC APIs, as well as with externally developed standards as necessary, while embracing diversity among sensors and sensing systems.
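As one concrete touchpoint between SWE and modern web APIs, the following hedged sketch queries observations from an OGC SensorThings API v1.1 endpoint using its OData-style query parameters. The service URL is a placeholder; any conformant endpoint would do.

```python
# A minimal sketch of querying an OGC SensorThings API v1.1 service.
import requests

BASE = "https://example.org/sta/v1.1"  # hypothetical SensorThings service

resp = requests.get(
    f"{BASE}/Observations",
    params={
        "$filter": "phenomenonTime ge 2022-01-01T00:00:00Z",  # OData-style filter
        "$orderby": "phenomenonTime desc",
        "$top": 10,
    },
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json().get("value", []):
    print(obs["phenomenonTime"], obs["result"])
```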

Testbed-17 continued the work started in Testbed-16 on moving features. There are several systems available to detect and report moving features. Most work as stove-piped systems that cannot work collaboratively to improve detection rates or moving object characteristics determination. Therefore, the second objective of this task is to mature the moving feature architecture developed in Testbed-17 and explore multi-source/multi-system moving object detection and attribution. The second objective shall build on the architecture framework and corresponding standards to integrate several sensors’ observations into a common analytic environment. To achieve these objectives, the following research topics shall be explored:

  1. Mature the architecture and draft standards for Moving Feature content developed through Testbed-17.

    1. Which standards play which role?

    2. Where do these standards fit together, and, where do they need to be refined?

    3. Do these standards feature a unified sensor and a unified observation model? If not, to what extent does one need to be defined?

  2. Mature the sensor integration work from Testbed-17 with an emphasis on semantic transformation. Integrate the sensor integration capabilities into the Moving Feature architecture.

    1. What role does the OGC Definition Server play?

    2. How to handle situations where resource definitions are not accessible via a standardized interface?

  3. Identify and integrate new sensor types into the Moving Feature architecture.

  4. Explore best practices for extending existing Feature holdings to support Moving Features content seamlessly.

    1. How to assign Moving Feature characteristics to an existing Feature Collection?

    2. How to establish links between a Feature and its mobility data?

  5. Explore the use of software analytics and AI (Artificial Intelligence) to improve and enhance Moving Feature data by taking advantage of multi-sensor input.

  6. Explore quality metrics for Moving Features and tracks and demonstrate the value added by software analytics through the associated quality metrics.

Note: Moving Features include movement in location as well as changes in shape and property values. The overall architecture is illustrated in the following figure.
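For illustration, the following sketch encodes a simple moving point in the spirit of the OGC Moving Features JSON (MF-JSON) encoding: positions given at a series of instants with linear interpolation. Property names follow the MF-JSON Prism encoding as we understand it; consult the standard for normative details.

```python
# A minimal, non-normative sketch of a moving feature in the spirit of MF-JSON.
moving_feature = {
    "type": "Feature",
    "properties": {"name": "vehicle-42"},  # hypothetical feature
    "temporalGeometry": {
        "type": "MovingPoint",
        "datetimes": ["2022-05-03T10:00:00Z",
                      "2022-05-03T10:00:10Z",
                      "2022-05-03T10:00:20Z"],
        "coordinates": [[-75.70, 45.42], [-75.69, 45.42], [-75.68, 45.43]],
        "interpolation": "Linear",
    },
}
```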

Figure 5. Moving Features & Sensor Integration Architecture

2.4.2. Aim

Sensor standards must embrace sensor diversity. The aim is not to develop a one-size-fits-all solution, but a harmonized framework of standards that enables sensor integration regardless of technical constraints. The goal of this task is to develop a framework for interoperable sensor (data) integration and to demonstrate its capabilities by integrating moving features from multiple sources into a common analytic environment.

2.4.3. Previous Work

The OGC Testbed-17 had two work items: Sensor Integration and Moving Features. The Sensor Integration task focused on demonstrating the feasibility of implementing concepts described in the Sensor Integration Framework (SIF) standard developed by the National System for Geospatial Intelligence (NSG) and United States MASINT System (USMS). The SIF provides a standard framework for the integration of sensor systems regardless of their technical constraints and deployment environment. Thus, SIF targets systems that could be deployed on enterprise networks as well as in Denied, Degraded, Intermittent, or Limited Bandwidth (DDIL) environments. A thorough assessment of the SIF standard documents and data models is provided in the OGC Testbed-17: Sensor Integration Framework Assessment Engineering Report. In Testbed-17 it became clear that OGC should continue to promote the SIF vision of integrated sensor systems, as it is entirely in line with the vision behind the OGC SWE standards. Therefore, it is recommended to further refine and develop the SIF at OGC by defining Best Practices based on existing OGC SWE standards, rather than by developing new data models. Testbed-18 shall build on these experiences and further enhance sensor integration frameworks by emphasizing semantic interoperability. This work will include an assessment and possible integration of the OGC Definition Server. To further complete the SWE-based SIF, Testbed-18 shall develop a fully harmonized conceptual model that encompasses not only sensor observations but also command and control aspects. In this context, work needs to be coordinated with the ongoing UxS Command and Control Interoperability Experiment.

The OGC Testbed-17 Moving Features task addressed the exchange of moving object detections between software components, shared processing of detections for correlation and analysis, and visualization of moving objects within common operational pictures. The OGC Moving Features Engineering Report documents the developed architecture for collaborative distributed object detection and analysis of multi-source motion imagery. Testbed-17 produced a powerful Application Programming Interface (API) for discovery, access, and exchange of moving features and their corresponding tracks and exercised this API in a near real-time scenario. The API has been handed over to the Moving Features Standard Working Group for further development.

An additional goal of Testbed-17 was to investigate how moving object information can be made accessible through HTML in a web browser using Web Video Map Tracks (WebVMT) as part of the ongoing Web Platform Incubator Community Group (WICG) DataCue activity at W3C. This aims to facilitate access to geotagged media online and leverage web technologies with seamless integration of timed metadata, including spatial data.

The complete Testbed-17 work on moving features is documented in the two Engineering Reports: OGC Testbed-17: OGC API - Moving Features Engineering Report and OGC Testbed-17: Moving Features Engineering Report. In addition, Testbed-16 results are available here.

2.4.4. Work Items & Deliverables

The following figure illustrates the high-level architecture with all work items and deliverables of this task.

Figure 6. Moving Features and Sensor Integration Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the development of the Engineering Report(s) with their contributions.

D140 Moving Feature Collection - In this deliverable, a collection of Moving Features will be deployed for two experiments: first, to link moving features to a collection of (non-moving) features shared with D141, producing a Best Practice for extending an existing Feature dataset with Moving Features data; second, to serve as an ingestion system that ingests moving feature detections into the Sensor Hub (D142). Ideally, different ingestion protocols can be explored. The participant contributes in particular to the Best Practice Engineering Report (see D020).

D141 Moving Feature Collection - similar to D140.

D142 Sensor Hub - A software system that can enable any sensor, actuator, process, forecast model, robot, or system to be discovered, accessed, and controlled through OGC/SWE standard services and APIs. The Sensor Hub receives moving feature detections from the ingestion systems (i.e., D140/141), integrates these into a persistent data store, and provides an OGC API-Moving Features interface to the client. The participant serves as the lead author of the sensor integration architecture.

D143 Client - A client application that uses AI technology to improve and enhance moving feature data taking advantage of multi-sensor input. The client connects to the Sensor Hub, retrieves data from both moving feature ingestion systems, and enhances track data and moving feature data by multi-sensor fusion. If possible, the client shall connect to additional synthesized data sources.

D020 Moving Features ER - The Engineering Report captures the proposed architecture, identifies the necessary standards, describes all developed components, and reports on the results of all Technology Integration Experiments (TIEs) conducted between participants. This Engineering Report consists of three parts: First, the Moving Features summary, addressing the initially outlined research questions 1-3. Second, a Dataset Extension part that proposes Best Practices for extending an existing Feature dataset to support Moving Features. Third, the Quality Metrics part that presents a set of quality metrics for Moving Feature data. For formal reasons, all three parts need to be delivered as individual Engineering Reports, but redundancies shall be avoided where possible.

2.5. Identifiers for Reproducible Science

Many publications address the issues around true science. What is true science? The common denominator of most of these publications is the reproducibility of studies. This leads directly to the following questions: what is reproducibility, and, more practically, what makes a study reproducible? OGC’s mission is to make location information Findable, Accessible, Interoperable, and Reusable (FAIR). When we apply this mission to scientific research, we need to ask how to extend the underlying technologies of the FAIR concept to reproducible-FAIR. This Testbed task will explore and develop best practices for reproducible-FAIR.


Most scientific studies in the Earth observation realm start with an idea that leads to data collection, or start with already existing data. This data, or more precisely, the subset of the data that has been used within a study, needs to be identified. Next, scientists push that data through a sequence of processing steps, often starting with data clean-up processes followed by data alignment processes such as re-gridding, re-projection, or resampling. Once data is ready for actual analysis, fusion processes combined with analytical processes produce results that often represent only the first step in a publication pipeline. Finally, results from scientific processes need to be translated into the language of policy and decision-makers, which often includes additional processing such as classification, generalization, or even simplification.

This task shall explore results from Whole Tale (https://wholetale.org), an NSF-funded initiative for reproducible research described under Previous Work below, and ideally work collaboratively with the Whole Tale team.

2.5.1. Problem Statement and Research Questions

Earth observation (EO) workflows typically use a combination of pre-processed data, fusion, and analytical functions. To achieve reproducible results, each step in a scientific workflow needs to be described in sufficient detail. Once defined, these descriptions need to be made available to other scientists. Describing the various stages is not a simple task. Data is available in multiple formats and served at different interfaces, including static files and APIs. Data might change due to changes in the sampling regime, due to corrections, modified distribution policies, or many other reasons. APIs may provide access to changing data sets without necessarily notifying the user.

This task shall develop best practices to describe all steps of a scientific workflow, including (a sketch of such a description follows the list):

  • Input data from various sources such as files, APIs, data cubes;

  • The workflow itself with the involved application(s) and corresponding parameterizations;

  • Output data.
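Taken together, a machine-readable description covering these three elements might look like the following illustrative sketch. All field names are assumptions for discussion here, not an existing standard.

```python
# An illustrative sketch of a reproducible-workflow description.
# All field names and values are hypothetical.
workflow_description = {
    "inputs": [{
        "doi": "10.5281/example",               # placeholder DOI for the input dataset
        "subset": {"bbox": [5.9, 45.8, 10.5, 47.8], "time": "2021-06"},
        "checksum": "sha256:<digest>",          # pin the exact bytes used
    }],
    "application": {
        "image": "example.org/eo-tools:1.4.2",  # container pinned by version/tag
        "parameters": {"cloud_mask": True, "resampling": "bilinear"},
    },
    "outputs": [{"name": "ndvi_composite", "format": "GeoTIFF"}],
}
```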

Several mechanisms exist to provide identifiers for resource types like data, applications, and workflows. Digital Object Identifiers (DOIs) are a widespread mechanism to identify resources on the Internet. These identifiers are often based on a syntax definition that allows a compact representation of what is otherwise a complex query. ISO 26324:2012, Information and documentation — Digital object identifier system, defines such a syntax for digital object identifier names, which are used for the identification of an object of any material form (digital or physical) or an abstraction (such as a textual work) where there is a functional need to distinguish it from other objects.
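In practice, a DOI name is dereferenced via the https://doi.org proxy, and HTTP content negotiation can return machine-readable metadata (a mechanism supported by Crossref and DataCite). A minimal sketch, with a placeholder DOI:

```python
# A minimal sketch of dereferencing a DOI with content negotiation.
import requests

doi = "10.5281/example"  # placeholder DOI name
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
if resp.ok:
    meta = resp.json()
    print(meta.get("title"), meta.get("publisher"))
```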

The International DOI Foundation (IDF) is the governance and management body for the federation of Registration Agencies providing Digital Object Identifier (DOI) services and registration, and is the registration authority for the ISO standard (ISO 26324) for the DOI system. The DOI system provides a technical and social infrastructure for the registration and use of persistent interoperable identifiers, called DOIs, for use on digital networks. The list of registration agencies (the IDF members offering the system to customers who wish to assign DOI names) includes Crossref and DataCite. Both are used intensively to make research objects easy to find, cite, link, assess, and reuse. Combined, the topics described above can be summarized in the following research questions. All research questions shall be answered in the context of Earth observation research.

  • How to identify data and workflows including workflow components and parameterization to ensure reproducible Earth observation science?

  • Which mechanisms and technologies are best suited to share and use identifiers that describe data and workflows previously used?

  • What are best practices for reproducible science based on FAIR?

  • How to best identify and ensure accessibility of specific data served at OGC APIs and OGC Web services?

  • What aspects should become part of OGC standardization processes?

  • What level of reproducibility is possible, where does reproducibility have its limits?

2.5.2. Aim

The task shall develop best practices to identify the essential ingredients of scientific Earth observation workflows and demonstrate these best practices with a selected set of Earth observation workflows. The task shall further show the role of standards and describe which set of standards form a solid, scalable foundation for reproducible Earth observation science.

2.5.3. Previous Work

Whole Tale (https://wholetale.org) is an NSF-funded Data Infrastructure Building Block (DIBBS) initiative to build a scalable, open-source, web-based, multi-user platform for reproducible research, enabling the creation, publication, and execution of tales: executable research objects that capture data, code, and the complete software environment used to produce research findings. This task shall explore Whole Tale results and ideally work collaboratively with the Whole Tale team.

The Earth Observation Applications Pilot looked closely at the topic of input, results, and parameterization descriptions. The Pilot built on previous OGC Innovation Program initiatives, with the OGC Testbed-15: Catalogue and Discovery Engineering Report of particular interest here. The report documents how to expose applications and related services through a Catalogue service.

2.5.4. Work Items & Deliverables

The following figure illustrates the high-level architecture with all work items and deliverables of this task.

Figure 7. Identifiers for Reproducible Science Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the Engineering Report(s) development with their contributions.

D165-169 Reproducible FAIR Workflows - Each workflow shall describe all essential resources with the necessary details to ensure reproducibility. Participants are invited to identify and use existing data and applications or to set up their own data services or applications. In the latter case, it shall be ensured that all components are available for at least six months after the end of Testbed-18 and that all pertinent data service/application details (configurations, settings, libraries, versions, etc.) are documented within or alongside the produced notebooks/workflows. In addition, participants are invited to make use of exploitation platforms, Jupyter Notebooks, or other scientific workflow environments. In any case, reproducibility shall be demonstrated.

D040 Reproducible FAIR Best Practices Engineering Report - This Best Practices Engineering Report enhances FAIR to reproducible-FAIR. The Best Practices shall respond to the challenges and research questions documented above and provide clear guidance for future use and enhancements.

D041 Reproducible FAIR Summary Engineering Report - This Engineering Report captures all results, experiences, and lessons learned from this task. The report shall describe the various components used in each workflow, elaborate on issues experienced, and recommend solutions for each reproducible workflow generated in this task.

2.6. Building Energy Spatial Data Interoperability

The Canadian Geospatial Data Infrastructure (CGDI) represents Canada’s national Spatial Data Infrastructure (SDI). Similar to traditional physical infrastructure that helps Canadians with their everyday lives (e.g., roads, utilities, telecommunications), CGDI is an infrastructure for geospatial (i.e., location) information. In short, CGDI helps Canadians find, access, use, and share geospatial information.

The development of RESTful Web APIs, enhancements in the representation of 3D building data, and the growing availability of data from the energy sector allow the capabilities of the CGDI to be extended to the Building Energy sector. This capability enables the CGDI to serve as a management and analysis tool for greenhouse gas (GHG) emission reduction and contributes to the development of a sustainable future in Canada and beyond.


This Testbed-18 task will explore existing data sets from both the energy and the geospatial sector and will develop a mapping and integration approach to combine the data into a single model. Data compliant with this model can then be used for direct analysis, simulation, and visualization without further integration and mapping efforts. The data will be served through standard-based Web services that are compliant with the latest set of OGC Web API standards. The goal is to design a Building Energy SDI that can become part of the CGDI and allow running Building Energy experiments and analyses natively. All data models and API functionality will be demonstrated in a rich scenario that includes several services and clients. Processing services will use data from the data services, produce value-added products, and serve these data sets again through data services with interfaces and data models similar to the original data services. This approach enables client applications to integrate various data without additional conversion costs.

2.6.1. Problem Statement and Research Questions

Testbed-18 will feature “Housing Retrofit Program Planning” and “Utility Hosting Capacity Analysis” as use cases to explore the use of OGC geospatial standards, in particular OGC APIs, to support an Energy SDI. This will be achieved through a) use of the current OGC APIs; b) extension of existing OGC APIs; or c) recommendation of new OGC APIs to meet building energy interoperability requirements. It is expected that outcomes of this task will support multiple uses within the CGDI, within NRCan, nationally, and internationally. Additional objectives include:

  • Understanding how an Energy SDI model can support decarbonization within Canada and internationally.

  • Exploring how the Energy SDI geospatial standards framework can support and leverage existing building and utility data sets.

  • Identifying strategies to improve NRCan’s ability to communicate the value of an Energy SDI to departmental executives and the general public.

  • Understanding how NRCan and GeoConnections can further improve Canada’s ability to leverage building energy data to help meet national and international priorities for the Government of Canada.

This task shall address the following two main challenges for establishing an Energy SDI:

  1. Data Models and Associated Schemas – Evaluating interoperability requirements of existing building energy datasets and developing draft data models and associated schemas to enable broad interoperability.

  2. Geospatial Standards – Demonstrating the potential of existing or new OGC standards to support building energy data interoperability and recommending future standards development activities to fully implement an Energy SDI.

Additionally, the following overarching research questions shall further help guide the work for each task:

  • Are there specific interoperability requirements for building energy data and applications that differ from those for other geospatial data types and systems?

  • How can the design of an Energy SDI geospatial standards framework support interaction with other standard-based frameworks (e.g., “user to the data” approaches for Earth observation data processing)?

  • How can an Energy SDI support the use of advanced technology tools, such as machine learning?

  • What opportunities exist to link/leverage other related interoperability efforts (e.g., outcomes from the OGC’s Building Energy Mapping and Analytics Concept Development Study and/or OGC’s Integrated Digital Built Environment Pilot)?

  • How will elements of an Energy SDI support multi-jurisdiction requirements to support interoperability between organizations (e.g., public and private utilities) and governments?

  • How can an Energy SDI support enclave computing processes for ensuring protection of privacy and commercial confidentiality?

The following sections provide additional context as well as detailed task descriptions for the two challenges:

Task 1: Data Models

This task shall explore the development of data models and associated schemas to enable interoperability for key building energy datasets. These efforts will leverage the characteristics of existing building energy data to determine data model requirements for interoperability, and to explore whether existing data models can be adapted for the Energy SDI vision. The application could include extracting information from building energy models for map attribution or, conversely, extracting available attributes from a map for a number of dwellings in the stock to feed into building energy simulations. This task shall also explore how existing geospatial datasets related to buildings (e.g., 3D building representations) can be enhanced using building energy data provided through the data model.

The key outcome from this task shall be the creation of at least one draft model for a category of building energy data. The draft model shall be sufficiently complete such that it can be implemented with minimal further modification required. This work shall build on and extend previous work in the domain, potentially including but not limited to the National Building Layer data model, and data models relating to EnerGuide for Houses and the Housing Technology Assessment Platform (HTAP). HTAP supports data input via JSON formatted 'options' files, which are described here. The options file defines the attributes of a HOT2000 house model that can be changed within HTAP, the options that they can be changed to, and the data values that will be changed within the target .h2k file. A partial list of attributes in EnerGuide and a partial mapping of HTAP attributes to the Energy ADE specification will be made available prior to the kick-off meeting. Other relevant data sources required to develop 3D rendered models such as LiDAR and earth observation imagery can also be considered.
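Since the actual options-file schema is documented externally, the following is a purely hypothetical sketch of such a structure, constructed only from the description above (attributes, the options they can take, and the values written to the target .h2k file); the real HTAP format may differ.

```python
# A purely hypothetical sketch of an HTAP-style 'options' structure.
# Attribute names, option names, and values are invented for illustration.
options = {
    "Opt-Windows": {
        "double-glazed-lowE": {"h2k-values": {"WindowUValue": 1.8}},
        "triple-glazed":      {"h2k-values": {"WindowUValue": 1.1}},
    },
    "Opt-ACH": {  # airtightness, air changes per hour
        "ACH_3.5": {"h2k-values": {"AirChangeRate": 3.5}},
        "ACH_1.5": {"h2k-values": {"AirChangeRate": 1.5}},
    },
}
```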

The task shall demonstrate the ability of the data model to support interoperability through reference implementations of the data model within at least two client applications. This demonstration shall include an example of geospatial analysis applied using data described by the data model.

The task shall demonstrate that the data model can enhance at least one existing geospatial building dataset, using building energy data (modeled or measured) described by the data model. Relevant existing building data to be used for this requirement will be determined in consultation with the sponsor and participants during the testbed execution.

Task 2: Geospatial Standards

Building on Task 1, Task 2 will explore how existing or new API standards can support building energy data interoperability. Building energy data encoded using the data model(s) will act as an input to this task, providing the information that will be exchanged with multiple users and technologies over the internet through standard-based APIs.

Task 2 shall evaluate the potential of existing geospatial standard-based APIs to support the interoperability requirements for building energy data. If existing API solutions are identified, the task shall evaluate the potential of the API approach through a two-part demonstration (a client-side sketch of part 2 follows the list below):

  1. making building energy data available through an API, and

  2. accessing building energy data from the API and integrating it with other forms of geospatial data.
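As a minimal sketch of part 2, the following Python snippet retrieves building energy features from a hypothetical OGC API-Features endpoint and indexes them for integration with other geospatial data; the base URL, collection id, and property names are assumptions for illustration only:

    # Sketch of demonstration part 2: access building energy data from the
    # API and prepare it for integration with other geospatial data.
    # Endpoint, collection id, and property names are hypothetical.
    import requests

    BASE = "https://example.org/energy-api"  # hypothetical service

    resp = requests.get(
        f"{BASE}/collections/building-energy/items",
        params={"bbox": "-75.8,45.3,-75.6,45.5", "limit": 100},
        headers={"Accept": "application/geo+json"},
    )
    resp.raise_for_status()
    features = resp.json()["features"]

    # Index annual energy use by building id for joining with, e.g., a
    # 3D building dataset that shares the same identifiers.
    energy_by_building = {
        f["id"]: f["properties"].get("annualEnergyUse_GJ") for f in features
    }
    print(f"Retrieved energy attributes for {len(energy_by_building)} buildings")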

If existing API solutions are deemed to be inappropriate for supporting building energy interoperability requirements, the task shall propose a new geospatial standard-based API framework for building energy data. This framework shall be sufficiently complete such that it can be incorporated into existing international standards development processes for future recognition as an official international geospatial standard.

Development of the API solution shall also consider how to support access to building energy data through cloud-based infrastructure. This task shall also explore cross-cloud interoperability of building energy data through the API in order to ensure building energy information can be shared between clouds, and that data access can be maintained if changes to a building energy data cloud-service are required (e.g., changing to a new cloud provider).

This cloud-oriented task shall also investigate implementing analysis and processing capabilities for building energy data through the API framework. Potential linkages with recent, similar efforts completed as part of the OGC’s Earth Observation Applications Pilot, OGC Testbed-16’s Data Access and Processing API for Geospatial Data, and OGC Testbed-17 Geospatial Data Cube API shall be explored and leveraged to the greatest extent possible to maximize API use and interoperability.

Outcomes from this task shall demonstrate the ability of the proposed API solution to enable cloud access to building energy information, support building energy data interoperability between clouds, and allow building energy cloud solutions to be modified without affecting interoperability. The work shall also demonstrate how the proposed API will support analysis and processing of building energy information in an interoperable way. These outcomes shall be incorporated into the implementation plan as outlined for the API development task to ensure this functionality is reflected in the final standard.

While cloud-based interoperability is a key consideration for this effort, the work shall also consider how the API can support interoperability with other forms of infrastructure (e.g., data lakes, workstation-based file systems, high performance computing systems, etc.). Demonstrating an ability to link with non-building energy data systems represents an important aspect of ensuring building energy data can be seamlessly used on different forms of infrastructure.

Finally, this task shall result in creation of a best practice document that summarizes how geospatial standard-based APIs can enable building energy data interoperability. Findings for the three project use cases will be incorporated into the best practice as case studies.

Major Steps

Completing the two tasks above involves the following major steps:

  1. Develop at least one data model to support standardization of building energy information at the data level. This approach shall consist of 1) evaluating the potential of existing standard-based data models to capture required information about building energy data to enable interoperability, and 2) developing a data model, using existing approaches where possible and developing new models where required.

  2. Demonstrate that the proposed data model supports interoperability through reference implementations of the data model within at least two applications.

  3. Demonstrate that the data model can support the enhancement of at least one existing geospatial building dataset, using building energy data (modeled or measured) and building attributes described by the data model.

  4. Develop an OGC API to support building energy data interoperability at a service level. This API shall use either a) the current OGC API structure; b) an extension to an existing OGC API; or c) propose a new OGC API to meet building energy data interoperability requirements.

  5. Demonstrate access to and analysis/processing of building energy information through the proposed API via a cloud environment. NRCan shall also be permitted to independently test this functionality of the API itself and/or with other Government of Canada departments if it so chooses.

  6. Demonstrate that the proposed API solution enables cross-cloud interoperability for building energy data in the context of the project’s use cases, and that interoperability can be maintained if the cloud environment experiences significant change (e.g., transitioning to a different cloud provider).

  7. Demonstrate that the proposed API solution is able to meet NRCan needs within the context of the project’s use cases. If all NRCan needs cannot be met through the solution, provide suggestions for future work that will enable NRCan requirements to be met.

  8. Demonstrate that the proposed API supports building energy data interoperability within multiple contexts (e.g., cloud environments, public and hybrid cloud systems, workstation-based file systems, etc.).

  9. Develop an implementation plan allowing the proposed API approach to be incorporated into the OGC standards program.

2.6.2. Aim

This task shall undertake initial research towards the development of an Energy SDI. This will include:

  • Leveraging existing building energy datasets to evaluate specific interoperability requirements.

  • Developing draft data models for key building energy datasets.

  • Demonstrating the potential of existing geospatial standards to support access and use of building energy data, and their interoperability with other geospatial information.

  • Determining where future geospatial standards development work is required to fully implement an Energy SDI.

2.6.3. Previous Work

The Building Energy Mapping and Analytics: Concept Development Study Report documents the research conducted in the context of mapping and analyzing the energy consumption of buildings, which is currently undertaken in Canada by local municipalities, energy utilities, and federal agencies independently, for various purposes, and across different scales. These groups derive energy usage using many different sources and methods, yet fundamentally the data are the same: an understanding of the building stock (the numbers, floor areas, and other characteristics of various building archetypes and how they impact energy usage). Despite this commonality, there is little to no coordination between these groups, resulting in differing methodologies, duplication of effort, lost energy savings, and lost opportunities for decarbonization, climate change mitigation, and climate resilience.

OGC’s 3D IoT Platform for Smart Cities Pilot advanced the use of open standards for integrating environmental, building, and IoT data in Smart Cities. The pilot investigated the capabilities to be supported by a 3D IoT Smart City Platform under CityGML, IndoorGML, and SensorThings API.

2.6.4. Work Items & Deliverables

The following figure illustrates the high-level architecture with all work items and deliverables of this task.

Figure 8. Building Energy Spatial Data Interoperability Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Two client applications will be developed to allow multiple TIEs with two building energy data services and various external geospatial data services. The Building Energy Processing Service is an interface for at least two applications that make use of the various data services. All results, lessons learned, and experiences will be documented with Best Practices for Building Energy SDIs in a single Engineering Report. Further detailed requirements are stated above. All participants are required to participate in all technical discussions and support the Engineering Report(s) development with their contributions.

D122 Building Energy Data Service - Web API instance serving Building Energy data according to the data model defined in D012, and the mapping defined through D013. This instance is implemented as a profile of a current, emerging, or newly defined OGC Web API. The Web API instance shall support the cloud portability requirements as described above.

D123 Building Energy Data Service - Similar to D122.

D124 Building Energy Client - Client application that accesses building energy data at web interfaces that implement current, emerging, or newly defined OGC Web APIs; external geospatial data services, and processing applications available at the Building Energy Processing Service web interface.

D125 Building Energy Client - Similar to D124.

D126 Building Energy Processing Service - Web API instance providing access to applications that make use of the components D122, D123, D127, and D128. The Web API instance shall support the cloud-portability requirements as defined above.

D127 External Geospatial Data Service - Web API instance providing access to geospatial data that is required for the applications offered at D126.

D128 External Geospatial Data Service - Similar to D127.

D012 Building Energy Data Interoperability Engineering Report and Best Practice - This Engineering Report captures all results and experiences from this task. It shall respond to all the requirements listed above. The Engineering Report shall contain a plain-language executive summary that clearly outlines the motivations, goals, and critical outcomes of this task, taking into account the mandates of the OGC and NRCan. The report shall be made available to the public. The Best Practice will recommend approaches to leveraging geospatial standard-based APIs to enable building energy data interoperability. Findings from project use cases can be included as case studies.

D013 Building Energy Data Model Engineering Report - This Engineering Report describes the mapping to/from the data model(s) and geospatial building datasets. In addition, the report shall include diagrams illustrating how data model(s) from the energy domain relate to at least one existing geospatial building dataset.

2.7. Advanced Filtering of SWIM Feature Data

OGC Testbeds implement a multi-year strategy to enhance the Federal Aviation Administration (FAA) System Wide Information Management (SWIM). Testbed-16 experimented with RESTful Web APIs and Linked Data to explore new ways of locating and retrieving SWIM data, and successfully produced OGC API-Features façades for existing SWIM services that enable consumers to use SWIM data more easily in their business applications. Testbed-17 explored fusion services for SWIM data served by OGC API-Features instances.

Figure 9. History of OGC experiments to enhance the System Wide Information Management (SWIM)

Testbed-18 shall explore filtering mechanisms for feature data served by OGC API-Features instances. The experiments shall include filtering of native and fused SWIM data and experiment with filtering services. These filtering services, possibly defined as profiles of OGC API-Processes, provide filtering capabilities not supported by the data services. Decoupling filtering from data provisioning has advantages for both the data provider and the data consumer. The data provider can serve data at a rather simple interface and does not need to operate complex software that supports all sorts of filtering. Consumers benefit from highly performant filtering services that provide exactly the subset of data they require, so they do not need to further filter data received from data providers with limited filtering capabilities.

The filtering service envisioned here shall be able to work with filtering rule documents that define the filtering rules in a machine-readable way. This allows dynamic configuration of filtering services at runtime. It is assumed that the filtering service has a-priori knowledge of all details of the data services, including API characteristics and the content schema(s) of the offered resources.
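As a purely hypothetical strawman (the actual serialization is a Testbed outcome, not a given), a filtering rule document might look as follows:

    # Strawman filtering rule document; every key below is hypothetical.
    # The real serialization format is to be defined in this Testbed.
    import json

    filtering_rules = {
        "dataService": "https://example.org/swim-features",  # hypothetical
        "collection": "flights",
        "rules": [
            {   # restrict which queryables the service filters on
                "name": "departure-filter",
                "queryable": "departureAerodrome",
                "operators": ["=", "IN"],
            },
            {   # a fixed predicate applied to every request
                "name": "active-only",
                "expression": "status = 'ACTIVE'",
            },
        ],
    }

    print(json.dumps(filtering_rules, indent=2))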

2.7.1. Problem Statement and Research Questions

Each OGC API-Features endpoint defines its own filtering capabilities. Filtering is standardized across different parts of OGC API-Features (see section Previous Work), with two parts still in draft status. Advanced filtering capabilities require sophisticated server software, and not all data providers will be able to operate such a powerful service endpoint. The following research questions shall be answered and solutions developed in the context of FAA SWIM:

  • How does filtering of SWIM data served by OGC API-Features endpoints work?

  • Is the metadata required by the various OGC API-Features parts sufficient to allow clients to fully understand the filtering capabilities of a service endpoint?

  • OGC API - Features - Part 3: Filtering and the Common Query Language (CQL) supports queryables that are not directly represented as resource properties in the content schema of the resource. Is it possible to identify best practices for their usage?

  • Clients may know the content schema of offered resources. How can this knowledge be used for advanced filtering beyond what is defined, in particular, in OGC API - Features - Part 3: Filtering and the Common Query Language (CQL)?

  • What does a filtering service look like that provides advanced filtering on top of rather simple OGC API-Features-based SWIM data endpoints?

  • How does such a service work in situations where a data publisher has restricted filtering on certain properties (for example, because the backend datastore has not been configured to allow high-performance queries on those properties)?

  • How can a client application support a customer that has knowledge of the content schema of an offered resource in the creation of filter statements? What are the key requirements for a developer GUI that allows visualization and management of these filtering tools?

  • Is it possible to easily create a new filtered dataset by creating machine-readable filtering rules based on the metadata required by the OGC API-Features standards?

  • How can these rules be provided to the Filtering Service at runtime?

The following two figures illustrate the intended interactions between the components described in the Work Items & Deliverables section of this document. The two figures illustrate the workflows for using the filtering service for data subsetting (Figure 10) from the perspective of a business client; and for configuring the filtering service at runtime (Figure 11) from the perspective of the filtering rules developer.

An OGC API-Features façade to a SWIM data service offers insufficient filtering capabilities to its customers. The Business User Client does not want to access large data sets and then perform filtering itself. Instead, the client wants to make use of a Filtering Service that handles the filtering of the data and provides the subset of the data that the client is interested in. When the filtering service receives a data request from the client, it connects to the data service to access the necessary data, filters out everything that is not requested by the client, and eventually delivers the result to the client.

Figure 10. Workflow from the perspective of a business user that needs filtered data
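A minimal sketch of this proxy-style workflow in Python, assuming a hypothetical upstream OGC API-Features endpoint and representing the client's filter as a simple predicate function:

    # Sketch of the Figure 10 workflow: the Filtering Service pages through
    # the (simple) upstream data service and applies the client's filter
    # itself. URLs and property names are illustrative assumptions.
    import requests

    def filtered_items(data_service, collection, predicate, page_limit=1000):
        """Yield only the upstream features that satisfy `predicate`."""
        url = f"{data_service}/collections/{collection}/items"
        params = {"limit": page_limit}
        while url:
            resp = requests.get(url, params=params)
            resp.raise_for_status()
            page = resp.json()
            for feature in page.get("features", []):
                if predicate(feature):  # filtering happens here, not upstream
                    yield feature
            # follow the "next" link, if any (params are baked into the link)
            url = next((link["href"] for link in page.get("links", [])
                        if link.get("rel") == "next"), None)
            params = None

    # Example: deliver only active flights ("status" is a hypothetical property)
    for feature in filtered_items("https://example.org/swim-features", "flights",
                                  lambda f: f["properties"].get("status") == "ACTIVE"):
        print(feature["id"])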

The second workflow is illustrated in Figure 11 below. This workflow demonstrates how a filtering service can be configured at runtime. It is assumed that the Developer Client is aware of the API characteristics of the data service as well as the content schema of the data served by the data service. Based on both, the client supports the user with a GUI in the definition of the filtering rules. The user can then register these rules with the filtering service, which is now configured to run the data-service-specific filtering. In this Testbed, we will assume a 1:1 relationship between Filtering Services and Data Services. Though Filtering Services can support more than one Data Service, Filtering Services shall not combine data from multiple Data Services.

Figure 11. Workflow from the perspective of a rules developer that needs to create rule sets for Filtering Service configuration at runtime

2.7.2. Aim

This task shall experiment with OGC API-Features filtering mechanisms and explore if best practices for advertising filtering capabilities are required beyond what is already defined in the various OGC API-Features parts. If so, these best practices shall be documented.

In addition, this task shall introduce a new service type, the "Filtering Service", to demonstrate advanced filtering in situations where the data endpoints support only rudimentary filtering. This service assumes that the content schema of a resource being queried is available for inspection. Clients of the filtering service shall make use of this knowledge and instruct the Filtering Service with schema-specific information.

The filtering service shall be configurable at runtime. This requires that the filtering rules for a specific data service are provided at runtime in a machine-readable manner. This task shall define how these filtering rules can be serialized and exchanged between rule developing clients and the filtering service.

2.7.3. Previous Work

OGC API - Features - Part 1: Core (and the draft OGC API - Common - Part 2: Geospatial Data standard) define two filtering parameters on the resource at path /collections/{collectionId}/items: bbox and datetime. OGC API - Features - Part 1: Core also adds support for simple equality predicates logically joined using the AND operator. These capabilities offer simple resource filtering for HTTP requests.
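For illustration, the snippet below issues such a request against a hypothetical endpoint; bbox and datetime are the standardized Part 1 parameters, while status is an assumed feature property used as a simple equality predicate:

    # Illustrative OGC API-Features Part 1 filtering: bbox and datetime are
    # the standardized parameters; "status" is a hypothetical feature
    # property used as a simple equality predicate (AND-combined).
    import requests

    resp = requests.get(
        "https://example.org/swim-features/collections/flights/items",  # hypothetical
        params={
            "bbox": "-77.2,38.7,-76.8,39.0",
            "datetime": "2022-05-03T00:00:00Z/2022-05-04T00:00:00Z",
            "status": "ACTIVE",
        },
    )
    print(resp.url)  # shows how the predicates are encoded as query parameters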

The Filter requirements class defined in OGC API - Features - Part 3: Filtering and the Common Query Language (CQL) defines additional query parameters that allow more complex filtering expressions to be specified when querying server resources.
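The sketch below shows the same request style with a CQL filter expression; the filter and filter-lang parameter names follow the draft Part 3 at the time of writing, and the endpoint and queryable names are assumptions:

    # Illustrative Part 3 filtering with a CQL text expression. Parameter
    # names follow the draft Part 3 at the time of writing; the endpoint
    # and queryables ("status", "wingspan") are hypothetical.
    import requests

    resp = requests.get(
        "https://example.org/swim-features/collections/flights/items",
        params={
            "filter": "status = 'ACTIVE' AND wingspan > 60",
            "filter-lang": "cql2-text",
        },
    )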

The Filtering Service does not necessarily need to be a new service type. It should be explored whether, for example, defining a profile of OGC API-Processes would be a possible solution.

2.7.4. Work Items & Deliverables

The following figure illustrates the software architecture with all work items and deliverables of this task.

Figure 12. Advanced Filtering of SWIM Feature Data Work Items and Deliverables

The following list identifies all deliverables that are part of this task. Detailed requirements are stated above. All participants are required to participate in all technical discussions and support the development of the Engineering Report(s) with their contributions.

D001 Features Filtering Summary Engineering Report - The Engineering Report captures the proposed architecture, identifies the necessary standards, describes all developed components, and reports on the results of all Technology Integration Experiments (TIEs) conducted between participants. It is the goal of this task to get a good understanding of the current filtering capabilities and limitations around OGC API-Features and of how filtering can be decoupled from the data services. The Engineering Report editor shall organize appropriate test scenarios with the D100/D101/D102/D103 participants. The report shall recommend future activities with respect to a generally enhanced SWIM architecture.

D002 Filtering Service and Rule Set Engineering Report - The Engineering Report shall document any best practices identified for features filtering and describe in detail how filtering can be decoupled from data services, how filtering rules can be provided to Filtering Services at runtime, and the API of the Filtering Service.

D100 OGC API-Features Façade to SWIM Data Service - SWIM data service based on OGC API-Features with support for different filtering capabilities (the service shall allow activating/deactivating filtering capabilities as defined in the various OGC API-Features parts). The service may serve synthetic SWIM data or act as a façade to a native SWIM service. Optionally, the published data is the result of some fusion process.

D101 OGC API-Features Façade to SWIM Data Service - SWIM data service similar to D100.

D102 Filtering Service - RESTful Web API instance that supports advanced filtering for services that do not offer these capabilities themselves (D100/D101). Participants shall experiment with this service type. Ideally, the service allows filter statements to be sent together with data requests. The service could even support filter statements that are based on a-priori knowledge of the content schema of the offered resources.

The filtering service shall support configuration at runtime by providing a transactional interface. The interface allows the registration of rule sets serialized in a format to be defined in this Testbed between the D102/D103/D002 participants.

The filtering service shall support filtering for both D100 and D101 individually. It does not need to support filter statements that require combining data from both data services.
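As a strawman for the transactional registration interface described above (the /rules path, payload, and response shape are hypothetical; the actual interface is to be defined between the participants), registering a rule set might look like:

    # Strawman registration of a rule set with the Filtering Service.
    # Path, payload, and response fields are hypothetical placeholders.
    import requests

    rule_set = {
        "collection": "flights",
        "rules": [{"name": "active-only", "expression": "status = 'ACTIVE'"}],
    }

    resp = requests.post(
        "https://example.org/filtering-service/rules",  # hypothetical
        json=rule_set,
    )
    resp.raise_for_status()
    print("Registered rule set:", resp.json().get("id"))  # "id" is assumed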

D103 Filtering Service - RESTful Web API instance similar to D102.

D104 Business User Client - The client shall execute Technical Interoperability Experiments (TIEs) with the filter services D102/D103 and the data services D100/D101. The testing of the services can be very basic, simply sending requests and evaluating the responses. Emphasis is on interacting with the filter services D102/D103. This client does not need to provide a graphical user interface.

D105 Developer Client - Client application that supports the customer in defining filter statements that can be expressed in a machine-readable way and exchanged with the filtering service. These filter statements should draw on insights into the content schema of the resources offered at the data service and the API characteristics of the data service. A-priori knowledge of all data service specifics can be assumed. The client shall provide a graphical user interface.

3. Deliverables Summary

The following tables summarize the full set of Initiative deliverables. Technical details can be found in section Technical Architecture.

Please also note that not all work items were supported by sponsor funding at the time of CFP publication. Negotiations with sponsors are ongoing, but there is no guarantee that every item will ultimately be funded.

Bidders are invited to submit proposals on all items of interest under the assumption that funding will eventually become available.

Table 2. CFP Deliverables - Grouped by Task (each task below is followed by its deliverable IDs and names)

3D+ Data Standards and Streaming

  • D023 3D+ Standards Framework Engineering Report

  • D024 3D+ Data Space Object Engineering Report

  • D025 Reference Frame Transformation Engineering Report

  • D026 3D+ Data Streaming Engineering Report

Machine Learning Training Datasets

  • D027 Machine Learning Training Data Engineering Report

Secure, Asynchronous Catalogs

  • D112 OGC CSW

  • D113 OGC API-Records

  • D114 Catalog Client

  • D115 Authorization and Identity Management System and Key Management Server

  • D007 Secure Asynchronous Catalog Engineering Report

  • D008 Key Management Service API Engineering Report

Moving Features and Sensor Integration

  • D140 Moving Feature Collection

  • D141 Moving Feature Collection

  • D142 Sensor Hub

  • D143 Client

  • D020 Moving Features Engineering Report

Identifiers for Reproducible Science

  • D165-169 Reproducible FAIR Workflows (5 instances)

  • D040 Reproducible FAIR Best Practices Engineering Report

  • D041 Reproducible FAIR Summary Engineering Report

Building Energy Spatial Data Interoperability

  • D122/123 Building Energy Data Service

  • D124/125 Building Energy Client

  • D126 Building Energy Processing Service

  • D127/128 External Geospatial Data Service

  • D012 Building Energy Data Interoperability Engineering Report and Best Practice

  • D013 Building Energy Data Model Engineering Report

4. Miscellaneous

Call for Participation (CFP): The CFP includes a description of deliverables against which bidders may submit proposals. Several deliverables are more technical in nature, such as documents and component implementations. Others are more administrative, such as monthly reports and meeting attendance. The arrangement of deliverables on the timeline is presented in the Master Schedule.

Each proposal in response to the CFP should include the bidder’s technical solution(s), its cost-sharing request(s) for funding, and its proposed in-kind contribution(s) to the initiative. These inputs should all be entered on a per-deliverable basis, and proposal evaluations will take place on the same basis.

Once the original CFP has been published, ongoing updates and answers to questions can be tracked by monitoring the CFP Corrigenda Table and the CFP Clarifications Table. The HTML version of the CFP will be updated automatically and stored at the same URL as the original version. The PDF version will have to be re-downloaded with each revision.

Bidders may submit questions using the Additional Message textbox in the OGC Innovation Program Contact Form. Question submitters will remain anonymous, and answers will be regularly compiled and published in the CFP clarifications.

A Bidders Q&A Webinar will be held on the date listed in the Master Schedule. The webinar is open to the public, but anyone wishing to attend must register using the provided link. Questions are due on the date listed in the Master Schedule.

Participant Selection and Agreements: Following the submission deadline, OGC will evaluate received proposals, review recommendations with Sponsors, and negotiate Participation Agreement (PA) contracts, including statements of work (SOWs). Participant selection will be complete once PA contracts have been signed with all Participants.

Kickoff: The Kickoff is a meeting where Participants, guided by the Initiative Architect, will refine the Initiative architecture and settle upon specific use cases and interface models to be used as a baseline for prototype component interoperability. Participants will be required to attend the Kickoff, including breakout sessions, and will be expected to use these breakouts to collaborate with other Participants and confirm intended Component Interface Designs.

Regular Telecons and Meetings: After the Kickoff, participants will meet frequently via weekly telecons and quarterly OGC Member Meetings.

Development of Deliverables: Development of Components, Engineering Reports, Change Requests, and other deliverables will commence during or immediately after Kickoff.

Under the Participation Agreement contracts, ALL Participants will be responsible for contributing content to the ERs, particularly regarding their component implementation experiences, findings, and future recommendations. But the ER Editor will be the primary author on the shared sections such as the Executive Summary.

More detailed deliverable descriptions appear under Types of Deliverables.

Final Summary Reports, Demonstration Event and Other Stakeholder Meetings: Participant Final Summary Reports will constitute the close of funded activity. Further development work might take place to prepare and refine assets to be shown at webinars, demonstration events, and other meetings.

Assurance of Service Availability: Participants selected to implement service components must maintain availability for a period of no less than six months after the Participant Final Summary Report milestone.

Appendix A: Testbed Organization and Execution

A.1. Initiative Policies and Procedures

This initiative will be conducted within the policy framework of OGC’s Bylaws and Intellectual Property Rights Policy ("IPR Policy"), as agreed to in the OGC Membership Agreement, and in accordance with the OGC Innovation Program Policies and Procedures and the OGC Principles of Conduct, the latter governing all related personal and public interactions.

Several key requirements are summarized below for ready reference:

  • Each selected Participant will agree to notify OGC staff if it is aware of any claims under any issued patents (or patent applications) which would likely impact an implementation of the specification or other work product which is the subject of the initiative. Participant need not be the inventor of such patent (or patent application) in order to provide notice, nor will Participant be held responsible for expressing a belief which turns out to be inaccurate. Specific requirements are described under the "Necessary Claims" clause of the IPR Policy.

  • Each selected Participant will agree to refrain from making any public representations that draft Engineering Report (ER) content has been endorsed by OGC before the ER has been approved in an OGC Technical Committee (TC) vote.

  • Each selected Participant will agree to provide more detailed requirements for its assigned deliverables, and to coordinate with other initiative Participants, at the Kickoff event.

A.2. Initiative Roles

The roles generally played in any OGC Innovation Program initiative include Sponsors, Bidders, Participants, Observers, and the Innovation Program Team ("IP Team"). Explanations of the roles are provided in Tips for New Bidders.

The IP Team for this Initiative will include an Initiative Director and an Initiative Architect. Unless otherwise stated, the Initiative Director will serve as the primary point of contact (POC) for the OGC.

The Initiative Architect will work with Participants and Sponsors to ensure that Initiative activities and deliverables are properly assigned and performed. They are responsible for scope and schedule control, and will provide timely escalation to the Initiative Director regarding any high-impact issues or risks that might arise during execution.

A.3. Types of Deliverables

All activities in this testbed will result in a Deliverable. These Deliverables generally take the form of Documents or Component Implementations.

A.3.1. Documents

Engineering Reports (ER) and Change Requests (CR) will be prepared in accordance with OGC published templates. Engineering Reports will be delivered by posting on the (members-only) OGC Pending directory when the document is complete and has achieved a satisfactory level of consensus among interested participants, contributors, and editors. Engineering Reports are the formal mechanism used to deliver results of the Innovation Program to Sponsors and to the OGC Standards Program for consideration by way of Standards Working Groups and Domain Working Groups.

Tip

A common ER Template will be used as the starting point for each document. Various template files will contain requirements such as the following (from the 1-summary.adoc file):

The Executive Summary shall contain a business value statement that should describe the value of this Engineering Report to improve interoperability, advance location-based technologies or realize innovations.

Ideas for meeting this particular requirement can be found in the CFP Background as well as in previous ER content such as the business case in the SELFIE Executive Summary.

Document content should follow this OGC Document Editorial Guidance (scroll down to view PDF file content). File names for documents posted to Pending should follow this pattern (replacing the document name and deliverable ID): OGC Testbed-18: Aviation Engineering Report (D001). For ERs, the words Engineering Report should be spelled out in full.

A.3.2. Component Implementations

Component Implementations include services, clients, datasets, and tools. A service component is typically delivered by deploying an endpoint via an accessible URL. A client component typically exercises a service interface to demonstrate interoperability. Implementations should be developed and deployed in all threads for integration testing in support of the technical architecture.

Important

Under the Participation Agreement contracts, ALL Participants will be responsible for contributing content to the ERs, particularly regarding their component implementation experiences, findings, and future recommendations. But the ER Editor will be the primary author on the shared sections such as the Executive Summary.

Component implementations are often used as part of outreach demonstrations near the end of the timeline. To support these demos, component implementations are required to include Demo Assets. For clients, the most common approach to meet this requirement is to create a video recording of a user interaction with the client. These video recordings may optionally be included in a new YouTube Playlist such as this one for Testbed-15.

Tip

Videos to be included in the new YouTube Playlist should follow these instructions:

  • Upload the video recording to the designated Portal directory (to be provided), and

  • Include the following metadata in the Description field of the upload dialog box:

    • A Title that starts with "OGC Testbed-18:", keeping in mind that there is a 100-character limit [if no title is provided, we’ll insert the file name],

    • Abstract: [1-2 sentence high-level description of the content],

    • Author(s): [organization and/or individuals], and

    • Keywords: [for example, OGC, Testbed-18, machine learning, analysis ready data, etc.].

Since server components often do not have end-user interfaces, participants may instead support outreach by delivering static UML diagrams, wiring diagrams, screenshots, etc. In many cases, the images created for an ER will be sufficient as long as they are suitable for showing in outreach activities such as Member Meetings and public presentations. A server implementer may still choose to create a video recording to feature their organization more prominently in the new YouTube playlist. Another reason to record a video might be to show interactions with a "developer user" (since these interactions might not appear in a client recording for an "end user").

Tip

Demo-asset deliverables are slightly different from TIE testing deliverables. The latter don’t necessarily need to be recorded (though they often appear in a recording if the TIE testing is demonstrated as part of one of the recorded weekly telecons).

A.4. Proposal Evaluation

Proposals are expected to be brief, broken down by deliverable and precisely addressing the work items of interest to the bidder. Details of the proposal submission process are provided under the General Proposal Submission Guidelines.

Proposals will be evaluated based on criteria in two areas: technical and management/cost.

A.4.1. Technical Evaluation Criteria

  • Concise description of each proposed solution and how it contributes to achievement of the particular deliverable requirements described in the Technical Architecture,

  • Overall quality and suitability of each proposed solution, and

  • Where applicable, whether the proposed solution is OGC-compliant.

A.4.2. Management/Cost Evaluation Criteria

  • Willingness to share information and work in a collaborative environment,

  • Contribution toward Sponsor goals of enhancing availability of standards-based offerings in the marketplace,

  • Feasibility of each proposed solution using proposed resources, and

  • Proposed in-kind contribution in relation to proposed cost-share funding request.

Note that all Participants are required to provide at least some level of in-kind contribution (costs for which no cost-share compensation has been requested). As a rough guideline, a proposal should include at least one dollar of in-kind contribution for every dollar of cost-share compensation requested. All else being equal, higher levels of in-kind contributions will be considered more favorably during evaluation. Participation may also take place by purely in-kind contributions (no cost-share request at all).

Once the proposals have been evaluated and cost-share funding decisions have been made, the IP Team will begin notifying Bidders of their selection to enter negotiations to become an initiative Participant. Each selected bidder will enter into a Participation Agreement (PA), which will include a Statement of Work (SOW) describing the assigned deliverables.

A.5. Reporting

Participants will be required to report the progress and status of their work; details will be provided during contract negotiation. Additional administrative details such as invoicing procedures will also be included in the contract.

A.5.1. Monthly Reporting

The IP Team will provide monthly progress reports to Sponsors. Ad hoc notifications may also occasionally be provided for urgent matters. To support this reporting, each testbed participant must submit (1) a Monthly Technical Report and (2) a Monthly Business Report by the first working day on or after the 3rd of each month. Templates and instructions for both of these report types will be provided.

The purpose of the Monthly Business Report is to provide initiative management with a quick indicator of project health from each participant’s perspective. The IP Team will review action item status on a weekly basis with assigned participants. Initiative participants must remain available for the duration of the timeline so these contacts can be made.

A.5.2. Participant Final Summary Reports

Each Participant should submit a Final Summary Report by the milestone indicated in the Master Schedule. These reports should include the following information:

  1. Briefly summarize Participant’s overall contribution to the testbed (for an executive audience),

  2. Describe, in detail, the work completed to fulfill the Participation Agreement Statement of Work (SOW) items (for a more technical audience), and

  3. Present recommendations on how we can better manage future OGC Innovation Program initiatives.

This report may be in the form of email text or a more formal attachment (at the Participant’s discretion).

Appendix B: Proposal Submission

B.1. General Proposal Submission Guidelines

This section presents general guidelines for submitting a CFP proposal. Detailed instructions for submitting a response proposal using the Bid Submission Form web page can be found in the Step-by-Step Instructions below.

Important

Please note that the content of the "Proposed Contribution" text box in the Bid Submission Form will be accessible to all Stakeholders and should contain no confidential information such as labor rates.

Similarly, no sensitive information should be included in the Attached Document of Explanation.

Proposals must be submitted before the deadline indicated in the Master Schedule.

Bidders responding to this CFP must be organizational OGC members familiar with the OGC mission, organization, and process.

Proposals from non-members or individual members will be considered provided that a completed application for organizational membership (or a letter of intent) is submitted prior to or with the proposal.

Tip

Non-members or individual members should make a note regarding their intent to join OGC on the Organizational Background page of the Bid Submission Form and include their actual Letter of Intent as part of an Attached Document of Explanation.

The following screenshot shows the Organizational Background page:

Figure 13. Sample Organizational Background Page

Information submitted in response to this CFP will be accessible to OGC and Sponsor staff members. This information will remain in the control of these stakeholders and will not be used for other purposes without prior written consent of the Bidder. Once a Bidder has agreed to become a Participant, they will be required to release proposal content (excluding financial information) to all initiative stakeholders. Sensitive information other than labor-hour and cost-share estimates should not be submitted.

Bidders will be selected for cost share funds on the basis of adherence to the CFP requirements and the overall proposal quality. The general testbed objective is to inform future OGC standards development with findings and recommendations surrounding potential new specifications. Each proposed deliverable should formulate a path for (1) producing executable interoperable prototype implementations meeting the stated CFP requirements and (2) documenting the associated findings and recommendations. Bidders not selected for cost share funds may still request to participate on a purely in-kind basis.

Bidders should avoid attempts to use the initiative as a platform for introducing new requirements not included in the Technical Architecture. Any additional in-kind scope should be offered outside the formal bidding process, where an independent determination can be made as to whether it should be included in initiative scope or not. Out-of-scope items could potentially be included in another OGC IP initiative.

Each selected Participant (even one not requesting any funding) will be required to enter into a Participation Agreement contract ("PA") with the OGC. The reason this requirement applies to purely in-kind Participants is that other Participants will likely be relying upon their delivery. Each PA will include a Statement of Work ("SOW") identifying specific Participant roles and responsibilities.

B.2. Questions and Clarifications

Once the original CFP has been published, ongoing updates and answers to questions can be tracked by monitoring the CFP Corrigenda Table and the CFP Clarifications Table.

Bidders may submit questions using the Additional Message textbox in the OGC Innovation Program Contact Form. Question submitters will remain anonymous, and answers will be regularly compiled and published in the CFP clarifications.

A Bidders Q&A Webinar will be held on the date listed in the Master Schedule. The webinar is open to the public, but anyone wishing to attend must register using the provided link. Questions are due on the date listed in the Master Schedule, 18 February 2022.

B.3. Proposal Submission Procedures

The process for a Bidder to complete a proposal is essentially embodied in the online Bid Submission Form. Once this site is fully prepared to receive submissions (soon after the CFP release), it will include a series of web forms, one for each deliverable of interest. A summary is provided here for the reader’s convenience.

For any individual who has not used this form in the past, a new account will need to be created first. The user will be taken to a home page indicating the "Status of Your Proposal." If any defects in the form are discovered, this page includes a link for notifying OGC. The user can return to this page at any time by clicking the OGC logo in the upper left corner.

Any submitted bids will be treated as earnest submissions, even those submitted well before the response deadline. Be certain that you intend to submit your proposal before you click the Submit button on the Review page.

Important

Because the Bid Submission Form is still relatively new, it might contain some areas that are still brittle or in need of repair. Please notify OGC of any discovered defects. Periodic updates will be provided as needed.

Please consider making local backup copies of all inputs in case any need to be re-entered.

B.3.1. High-Level Overview

Clicking on the Propose link will navigate to the Bid Submission Form. The first time through, the user should provide organizational information on the Organizational Background Page and click Update and Continue.

This will navigate to an "Add Deliverable" page that will resemble the following:

Figure 14. Sample "Add Deliverables" Page

The user should complete this form for each proposed deliverable.

Tip

For component implementations having multiple identical instances of the same deliverable, the bidder needs to propose just one instance. For simplicity, each bidder should submit against the lowest-numbered deliverable ID. OGC will assign a unique deliverable ID to each selected Participant later (during negotiations).

On the far right, the Review link navigates to a page summarizing all the deliverables the Bidder is proposing. This Review tab won't appear until the user has submitted at least one deliverable under the Propose tab.

Tip

Consider regularly creating printed output copies of this Review page at various points during proposal creation.

Once the Submit button is clicked, the user will receive an immediate confirmation on the website that their proposal has been received. The system will also send an email to the bidder and to OGC staff.

Tip

In general, up until the time that the user clicks this Submit button, the proposal may be edited as many times as the user wishes. However, this initial version of the form contains no "undo" capability, so please use caution in over-writing existing information.

The user is afforded an opportunity under Done Adding Deliverables at the bottom of this page to attach an optional Attached Document of Explanation.

Figure 15. Sample Dialog for an "Attached Document of Explanation"
Important

No sensitive information (such as labor rates) should be included in the Attached Document of Explanation.

If this attachment is provided, it is limited to one per proposal and must be less than 5 MB.

This document could conceivably contain any specialized information that wasn’t suitable for entry into a Proposed Contribution field under an individual deliverable. It should be noted, however, that this additional documentation will only be read on a best-effort basis. There is no guarantee it will be used during evaluation to make selection decisions; rather, it could optionally be examined if the evaluation team feels that it might help in understanding any specialized (and particularly promising) contributions.

B.3.2. Step-by-Step Instructions

The Propose link takes the user to the first page of the proposal entry form. This form contains fields to be completed once per proposal such as names and contact information.

It also contains an optional Organizational Background field where Bidders (particularly those with no experience participating in an OGC initiative) may provide a description of their organization. It also contains a click-through check box where each Bidder will be required (before entering any data for individual deliverables) to acknowledge its understanding and acceptance of the requirements described in this appendix.

Clicking the Update and Continue button then navigates to the form for submitting deliverable-by-deliverable bids. On this page, existing deliverable bids can be modified or deleted by clicking the appropriate icon next to the deliverable name. Any attempt to delete a proposed deliverable will require scrolling down to click a Confirm Deletion button.

To add a new deliverable, the user would scroll down to the Add Deliverable section and click the Deliverable drop-down list to select the particular item.

The user would then enter the required information for each of the following fields (for this deliverable only). Required fields are indicated by an asterisk ("*"):

  • Estimated Projected Labor Hours* for this deliverable,

  • Funding Request*: total U.S. dollar cost-share amount being requested for this deliverable (to cover burdened labor only),

  • Estimated In-kind Labor Hours* to be contributed for this deliverable, and

  • Estimated In-Kind Contribution: total U.S. dollar estimate of the in-kind amount to be contributed for this deliverable (including all cost categories).

Tip

There’s no separate text box to enter a global in-kind contribution. Instead, please provide an approximate estimate on a per-deliverable basis.

Cost-sharing funds may only be used for the purpose of offsetting burdened labor costs of development, engineering, documentation, and demonstration related to the Participant’s assigned deliverables. By contrast, the costs used to formulate the Bidder’s in-kind contribution may be much broader, including supporting labor, travel, software licenses, data, IT infrastructure, and so on.

Theoretically there is no limit on the size of the Proposed Contribution for each deliverable (beyond the raw capacity of the underlying hardware and software). But bidders are encouraged to incorporate content by reference where possible (rather than inline copying and pasting) to avoid overloading the amount of material to be read in each proposal. There is also a textbox on a separate page of the submission form for inclusion of Organizational Background information, so there is no need to repeat this information for each deliverable.

Important

A breakdown (by cost category) of the "Inkind Contribution" may be included in the Proposed Contribution text box for each deliverable.

However, please note that the content of this text box will be accessible to all Stakeholders and should contain no confidential information such as labor rates.

Similarly, no sensitive information should be included in the Attached Document of Explanation.

The Proposed Contribution field (which should include any proposed datasets) should also be used to provide a succinct description of what the Bidder intends to deliver for this work item to meet the requirements expressed in the Technical Architecture. This language could potentially include a brief elaboration on how the proposed deliverable will contribute to advancing the OGC standards baseline, or how implementations enabled by the specification embodied in this deliverable could add specific value to end-user experiences.

A Bidder proposing to deliver a Service Component Implementation can also use this field to identify what suitable datasets would be contributed (or what data should be acquired from another identified source) to support the proposed service.

Tip

In general, please try to limit the length of each Proposed Contribution to about one text page per deliverable.

Note that images cannot be pasted into the Proposed Contribution textbox. Bidders should instead provide a link to a publicly available image.

A single bid may propose deliverables arising from any number of threads or tasks. To ensure that the full set of sponsored deliverables is covered, OGC might negotiate with individual Bidders to drop and/or add selected deliverables from their proposals.

B.4. Tips for New Bidders

Bidders who are new to OGC initiatives are encouraged to review the following tips:

  • In general, the term "activity" is used as a verb describing work to be performed in an initiative, and the term "deliverable" is used as a noun describing artifacts to be developed and delivered for inspection and use.

  • The roles generally played in any OGC Innovation Program initiative are defined in the OGC Innovation Program Policies and Procedures, from which the following definitions are derived and extended:

    • Sponsors are OGC member organizations that contribute financial resources to steer Initiative requirements toward rapid development and delivery of proven candidate specifications to the OGC Standards Program. These requirements take the form of the deliverables described herein. Sponsor representatives help serve as "customers" during Initiative execution, helping ensure that requirements are being addressed and broader OGC interests are being served.

    • Bidders are organizations who submit proposals in response to this CFP. A Bidder selected to participate will become a Participant through the execution of a Participation Agreement contract with OGC. Most Bidders are expected to propose a combination of cost-sharing request and in-kind contribution (though solely in-kind contributions are also welcomed).

    • Participants are selected OGC member organizations that generate empirical information through the definition of interfaces, implementation of prototype components, and documentation of all related findings and recommendations in Engineering Reports, Change Requests and other artifacts. They might be receiving cost-share funding, but they can also make purely in-kind contributions. Participants assign business and technical representatives to represent their interests throughout Initiative execution.

    • Observers are individuals from OGC member organizations that have agreed to OGC intellectual property requirements in exchange for the privilege to access Initiative communications and intermediate work products. They may contribute recommendations and comments, but the IP Team has the authority to table any of these contributions if there’s a risk of interfering with any primary Initiative activities.

    • Supporters are OGC member organizations who make in-kind contributions aside from the technical deliverables. For example, a member could donate the use of their facility for the Kickoff event.

    • The Innovation Program Team (IP Team) is the management team that will oversee and coordinate the Initiative. This team comprises OGC staff, representatives from member organizations, and OGC consultants. The IP Team communicates with Participants and other stakeholders during Initiative execution, provides Initiative scope and schedule control, and assists stakeholders in understanding OGC policies and procedures.

    • The term Stakeholders is a generic label that encompasses all Initiative actors, including representatives of Sponsors, Participants, and Observers, as well as the IP Team.

    • Suppliers are organizations (not necessarily OGC members) who have offered to supply specialized resources such as cloud credits. OGC's role is to assist in identifying an initial alignment of interests and performing introductions of potential consumers to these suppliers. Subsequent discussions would then take place directly between the parties.

  • Proposals from non-members or individual members will be considered provided that a completed application for organizational membership (or a letter of intent) is submitted prior to or with the proposal.

    • Non-members or individual members should make a note regarding their intent to join OGC on the Organizational Background page of the Bid Submission Form and include their actual Letter of Intent as part of an Attached Document of Explanation.

  • Any individual wishing to gain access to the Initiative’s intermediate work products in the restricted area of the Portal (or attend private working meetings / telecons) must be a member-approved user of the OGC Portal system.

  • Individuals from any OGC member organization that does not become an initiative Sponsor or Participant may still (as a benefit of membership) observe activities by registering as an Observer.

  • Prior initiative participation is not a direct bid evaluation criterion. However, prior participation could accelerate and deepen a Bidder’s understanding of the information presented in the CFP.

  • All else being equal, preference will be given to proposals that include a larger proportion of in-kind contribution.

  • All else being equal, preference will be given to proposed components that are certified OGC-compliant.

  • All else being equal, a proposal addressing all of a deliverable’s requirements will be favored over one addressing only a subset. Each Bidder is at liberty to control its own proposal, of course. But if it does choose to propose only a subset for any particular deliverable, it might help if the Bidder prominently and unambiguously states precisely what subset of the deliverable requirements are being proposed.

  • The Sponsor(s) will be given an opportunity to review selection results and offer advice, but ultimately the Participation Agreement (PA) contracts will be formed bilaterally between OGC and each Participant organization. No multilateral contracts will be formed. Beyond this, there are no restrictions regarding how a Participant chooses to accomplish its deliverable obligations so long as these obligations are met in a timely manner (whether a 3rd-party subcontractor provides assistance is up to the Participant).

  • In general, only one organization will be selected to receive cost-share funding per deliverable, and that organization will become the Assigned Participant upon which other Participants will rely for delivery. Optional in-kind contributions may be made provided that they don’t disrupt delivery of the required contributions from Assigned Participants.

  • A Bidder may propose against any or all deliverables. In past initiatives, some Participants were assigned only a single deliverable, while others were selected to provide multiple deliverables.

  • In general, the Participant Agreements will not require delivery of any component source code to OGC.

    • What is delivered to OGC is the behavior of the component installed on the Participant’s machine, and the corresponding documentation of findings, recommendations, and technical artifacts contributed to Engineering Report(s).

    • In some instances, a Sponsor might expressly require a component to be developed under open-source licensing, in which case the source code would become publicly accessible outside the Initiative as a by-product of implementation.

  • Results of other recent OGC initiatives can be found in the OGC Public Engineering Report Repository.

Appendix C: Abbreviations

The following table lists all abbreviations used in this CFP.

Table 2. Abbreviations
Abbreviation   Meaning

AI        Artificial Intelligence
CFP       Call for Participation
CR        Change Request
DDIL      Denied, Degraded, Intermittent, or Limited Bandwidth
DER       Draft Engineering Report
DWG       Domain Working Group
ER        Engineering Report
GPKG      GeoPackage
IP        Innovation Program
NSG       National System for Geospatial Intelligence
OGC       Open Geospatial Consortium
ORM       OGC Reference Model
OWS       OGC Web Services
PA        Participation Agreement
POC       Point of Contact
Q&A       Questions and Answers
RM-ODP    Reference Model for Open Distributed Processing
SIF       Sensor Integration Framework
SOW       Statement of Work
SWG       Standards Working Group
TBD       To Be Determined
TC        OGC Technical Committee
TEM       Technical Evaluation Meeting
TIE       Technology Integration / Technical Interoperability Experiment
URL       Uniform Resource Locator
WFS       Web Feature Service
WG        Working Group (SWG or DWG)
WPS       Web Processing Service

Appendix D: Corrigenda & Clarifications

D.1. Corrigenda Table

The following table identifies all corrections that have been applied to this CFP compared to the original release. Minor editorial changes (spelling, grammar, etc.) are not included.

Table 3. Corrigenda Table
Section   Description                                                                         Date of Change

2.3.4     Statement added: "(CSW-ebRIM profile is of particular interest)"                    10 February 2022
2.7       Section added                                                                       10 February 2022
2.1       Section modified; Minkowski spacetime aspect and OGC Temporal DWG reference added   14 March 2022
1.4       Schedule modified

D.2. Clarifications Table

The following table identifies all clarifications that have been provided in response to questions received from organizations interested in this CFP.

Please use this convenience link to navigate to the end of the table.

Table 4. Clarifications Table
Question Clarification

-- Pre-Release --

Q: Is D113 expected to implement an encrypted response?

A: D113 should be able to send and receive responses through the catalog; the Data Centric Security functions are built as a layer on top of the catalog.

2022-02-24

Q: Is it acceptable for an organization to partner or group together and reply as one entity/one funding request?

A: The OGC encourages each organization to apply individually so that OGC can contract with each organization for the appropriate funding. Please note in your proposal if you intend to work with additional organizations so this can be taken into account when planning deliverable fulfillment. The Testbeds are all about collaboration, before, during, and after the initiative takes place.

Q: Is a potential participant expected to reply to and/or propose for all deliverables in the testbed?

A: No, the OGC encourages an organization to apply for whichever deliverable(s), identified as items D###, it finds appealing and within its scope. This can be one or more deliverables.

Q: Will an organization be selected for only one deliverable in the Testbed?

A: No, an organization can be selected for multiple deliverables.

D.3. End of Clarifications Table (convenience link)
