The following table identifies all corrections that have been applied to this CFP compared to the original release. Minor editorial changes (spelling, grammar, etc.) are not included.
Section | Description |
---|---|
 | [20 December] Added link to Part 2 ITT solicitation and revised the language in the Note to past tense. |
 | [20 December] Added Note reminding the reader that details of the ITT bidding requirements are provided earlier in the document. Added ITT deliverables to CFP Deliverables and Funding Status table. |
 | [20 December] Last paragraph removed. Previous paragraph modified to include ARML. |
 | [20 December] Descriptions of D136 and D137 corrected. Both referenced D108/D109 erroneously. Now references point to D134 and D135. |
 | [17 January] Added EOC Deliverables to Appendix B. |
 | [18 January] Revised Kickoff dates to 10-12 April. |
 | [4 February] Synchronized Final DER delivery date to coincide with TC 3-week rule. |
 | [21 March] Repaired ExploitationPlatform link. |
- Introduction
- 1. General Proposal Submission Guidelines
- 2. Proposal Evaluation Criteria
- 3. Master Schedule
- 4. Summary of Testbed 14 Deliverables
- Appendix A: Management Requirements
- A.1. Proposal Submission Procedures
- A.2. Conditions for Participation
- A.3. Proposal Evaluation and Invitations to Selected Bidders
- A.4. Kickoff Workshop
- A.5. Participant Communication and Reporting
- A.6. General Requirements for Proposing Deliverables
- A.7. Specific Requirements for Proposing Document Deliverables
- A.8. Specific Requirements for Proposing Component Implementation Deliverables
- A.9. Project Monitoring and Control
- A.10. Tips for New Bidders
- Appendix B: Technical Architecture
- B.1. Introduction
- B.2. Testbed Baseline
- B.3. Testbed Threads
- B.4. Tasks
- B.5. Machine Learning, Deep Learning & Artificial Intelligence
- B.6. Information Registries & Semantic Enablement
- B.7. Next Generation OGC Web Services, Federated Clouds, Security & Workflows
- B.8. Complex Feature Handling
- B.9. Swath Data and the Climate Forecast Convention
- B.10. Application Schema Modeling and Conversion
- B.11. LiDAR Point Cloud Data Handling
- B.12. CityGML and Augmented Reality
- B.13. Portrayal
- B.14. MapML
- B.15. Quality of Service & Experience
- B.16. Compliance and Interoperability Testing
- B.17. OGC Testbed-14 – ESA Sponsored Threads – Exploitation Platform
- B.18. Deliverables
- Appendix C: Bibliography
The following table lists all abbreviations used in this CFP.
Abbreviation | Term |
---|---|
3DPS | 3D Portrayal Service |
ABI | Activity Based Intelligence |
AOI | Area of Interest |
AMQP | Advanced Message Queuing Protocol |
AR | Augmented Reality |
AtomPub | Atom Publishing Protocol |
AVI | Aviation |
BBOX | Bounding Box |
BPMN | Business Process Model and Notation |
CDR | Content Discovery and Retrieval |
CIS | Coverage Implementation Schema |
CITE | Compliance Interoperability and Testing |
CF | Climate and Forecasting |
CFP | Call for Participation |
CMD | Command Center |
CSMW | Community Sensor Model Working Group |
CSW | Catalogue Service for the Web |
CTL | Compliance Testing Language |
DAP | Data Access Protocol |
DCAT | Data Catalog Vocabulary |
DDIL | Denied, Degraded, Intermittent, or Limited Bandwidth |
DGIWG | Defence Geospatial Information Working Group |
DISA | Defense Information Systems Agency |
DWG | Domain Working Group |
EO | Earth Observation |
EOWCS | Earth Observation Profile Web Coverage Service |
ER | Engineering Report |
EXI | Efficient XML Interchange format |
FGDC | Federal Geographic Data Committee |
FIXM | Flight Information Exchange Model |
FO | Field Operations |
GDAL | Geospatial Data Abstraction Library |
GEOINT | Geospatial Intelligence |
GeoXACML | Geospatial XACML |
GIBS | Global Imagery Browse Services |
GML | Geography Markup Language |
GUI | Graphical User Interface |
HDF | Hierarchical Data Format |
HTTP | Hypertext Transfer Protocol |
HTTPS | Hypertext Transfer Protocol Secure |
ISO | International Organization for Standardization |
JSON | JavaScript Object Notation |
JSON-LD | JSON Linked Data |
KML | Keyhole Markup Language |
LiDAR | Light Detection and Ranging |
MEP | Mission Exploitation Platform |
MTOM | Message Transmission Optimization Mechanism |
NASA | National Aeronautics and Space Administration |
NetCDF | Network Common Data Form |
NetCDF-CF | NetCDF Climate Forecasting |
NSG | National System for Geospatial Intelligence |
OAuth | Open Authorization |
OBP | Object Based Production |
OGC | Open Geospatial Consortium |
OPeNDAP | Open-source Project for a Network Data Access Protocol |
PKI | Public Key Infrastructure |
PMT | (ShapeChange) Profile Management Tool |
POI | Points of Interest |
PubSub | Publish/Subscribe |
RDF | Resource Description Framework |
SAML | Security Assertion Markup Language |
SOS | Sensor Observation Service |
SPARQL | SPARQL Protocol and RDF Query Language |
SSO | Single Sign-On |
SWAP | Size, Weight, and Power |
SWE | Sensor Web Enablement |
SWG | Standards Working Group |
T13 | Testbed-13 |
TEAM | Test, Evaluation, And Measurement Engine |
TEP | Thematic Exploitation Platform |
TSPI | Time-Space-Position-Information Standard |
TWMS | Tiled Web Map Service |
US | United States |
UML | Unified Modeling Language |
USGS | U.S. Geological Survey |
W3C | World Wide Web Consortium |
WCPS | Web Coverage Processing Service |
WCS | Web Coverage Service |
WFS | Web Feature Service |
WIS | Web Integration Service |
WKT | Well Known Text |
WMS | Web Map Service |
WMTS | Web Map Tile Service |
WPS | Web Processing Service |
WS | Web Service |
WSDL | Web Services Description Language |
XACML | eXtensible Access Control Markup Language |
XOP | XML-binary Optimized Packaging |
XXE | XML External Entity Injection |
Introduction
The Open Geospatial Consortium (OGC®) is releasing this Call for Participation ("CFP") to solicit proposals for the OGC Testbed-14 ("the Testbed") initiative.
The Testbed 14 solicitation is being issued in two parts due to specialized sponsor procurement requirements:
- Part 1 - this CFP document ("Part 1 CFP" or just "CFP")
- Part 2 - an Invitation To Tender pack ("Part 2 ITT" or just "ITT") under the European Space Agency (ESA) Electronic Mailing Invitation to Tender System (EMITS). The Part 2 ITT can also be accessed manually in EMITS by selecting Entities → CGI IT UK LTD → Open Invitations to Tender.
The Part 2 ITT is described separately from this document and has distinct response requirements. Any Bidder wishing to respond to both the Part 2 ITT (described externally) and the Part 1 CFP (described herein) should therefore deliver two separate proposals, one for each set of response requirements. A Bidder wishing to respond to one or the other (but not both) would submit only one proposal.
Important
|
The Part 2 ITT was released on 14 December 2017. To provide a complete description of Testbed-14 scope, this CFP documentation was extended through Corrigenda to combine all deliverables from both parts (Part 1 CFP and Part 2 ITT). All work items marked as belonging to Part 2 ITT have distinct response requirements that are provided as part of the ITT. Bidding on these ITT requirements and work items requires an independent proposal! ITT requirements and work items are copied into the Part 1 CFP solicitation documents only for completeness. The response requirements described here in the CFP body and in Appendix A Management Requirements apply to Part 1 CFP requirements and deliverables only. |
Under this Part 1 CFP, the OGC will provide cost-sharing funds on behalf of sponsoring organizations ("Sponsors") to partially offset expenses uniquely associated with the Testbed. This CFP requests proposals from organizations ("Bidders") wishing to participate in delivery and, in some cases, to receive cost-sharing funds. Any Bidder interested in Testbed participation should respond by submitting a proposal per the instructions provided herein.
OGC intends to involve as many technology developers and providers ("Participants", to be selected from among the Bidders) as possible to the extent that each Participant can contribute to and benefit from initiative outcomes. Not all proposals are required to request cost-sharing funds. While the majority are expected to include a combination of cost-sharing request and in-kind contribution, responses offering solely in-kind contributions (i.e., requesting no cost-sharing funds whatsoever) are also welcomed.
The only offers that should be included in formal proposals are those directly addressing express CFP requirements. Proposals covering additional requirements should be submitted separately for independent consideration.
The OGC Innovation Program provides global, hands-on, collaborative prototyping for rapid development and delivery of proven candidate specifications to the OGC Standards Program, where these candidates can then be considered for further action. In Innovation Program initiatives, Participants collaborate to examine specific geo-processing interoperability questions posed by the initiative’s Sponsors. These initiatives include Testbeds, experiments, pilots, and plugfests – all designed to foster the rapid development and adoption of open, consensus-based standards.
The OGC recently reached out to potential initiative sponsors to review the OGC technical baseline, discuss results of prior initiatives, and identify current Testbed requirements.
Important
|
Discussions are ongoing for potential sponsorship of additional requirements that are not yet firm enough for inclusion in this CFP. Should funding for these additional requirements be provided later, a follow-on CFP with a compressed response timeline might also be issued if the work can be included without interfering with the original Master Schedule or Appendix B Technical Architecture. |
Benefits of Participation
In general, Bidders should propose specifically against the list of deliverables described under the Summary of Testbed Deliverables section below. But Bidders may go beyond funded deliverables to propose in-kind contributions that will address unfunded requirements as well. Participants should note, however, that Sponsors have committed to fund only those deliverables identified as being funded.
This Testbed provides a business opportunity for stakeholders to mutually define, refine, and evolve service interfaces and protocols in the context of hands-on experience and feedback. The outcomes are expected to shape the future of geospatial software development and data publication. The Sponsors are supporting this vision with cost-sharing funds to partially offset the costs associated with development, engineering, and demonstration of these outcomes. This offers selected Participants a unique opportunity to recoup a portion of their Testbed expenses.
Testbed Policies and Procedures
This CFP incorporates the following additional documents:
This Testbed will be conducted in accordance with OGC Innovation Program Policies and Procedures.
OGC Principles of Conduct will govern all personal and public interactions in this initiative.
One Testbed objective is to support the OGC Standards Program in the development and publication of open standards. Each Participant will be required to allow OGC to copyright and publish documents based in whole or in part upon intellectual property contributed by the Participant during Testbed performance. Specific requirements are described under the "Copyrights" clauses of the OGC Intellectual Property Rights Policy.
Initiative Roles
The roles generally played in any OGC Innovation Program initiative are defined in the OGC Innovation Program Policies and Procedures, including Sponsors, Bidders, Participants, Observers, and the Innovation Program Team ("IP Team").
Additional explanations of the roles of Sponsors, Bidders, Participants, and Observers are provided in the Tips for New Bidders.
The IP Team for this Testbed will include an Initiative Director, an Initiative Architect, and multiple Thread Architects. Unless otherwise stated, the Initiative Director will serve as the primary point of contact (POC) for the OGC.
Thread Architects will work with Participants, Sponsors, and each other to ensure that Testbed activities and deliverables are properly assigned and performed. They are responsible for scope and schedule control, as well as for within-thread and cross-thread communications. They will also provide timely escalation to the Initiative Director regarding any severe issues or risks that arise.
1. General Proposal Submission Guidelines
This section presents general guidelines for submitting a CFP proposal. Detailed instructions for submitting a response proposal using the Bid Submission Form web page can be found in Appendix A Management Requirements.
Note
|
The Narrative Response Template Word document and the Financial Response Template Excel spreadsheet that were required for proposal submissions under prior testbeds are no longer being used in this Testbed. |
Proposals must be submitted before the appropriate response due date indicated in the Master Schedule.
Bidders responding to this CFP must be OGC members and must be familiar with the OGC mission, organization, and process. Proposals from non-members will be considered provided that a completed application for OGC membership (or a letter of intent to become a member) is submitted prior to (or with) the proposal.
Information submitted in response to this CFP will be accessible to OGC and Sponsor staff members. This information will remain in the control of these stakeholders and will not be used for other purposes without prior written consent of the Bidder. Once a Bidder has agreed to become a Testbed Participant, they will be required to release proposal content (excluding financial information) to all Testbed stakeholders. Commercial confidential information should not be submitted in any proposal and should generally not be disclosed during Testbed execution.
Bidders will be selected to receive cost sharing funds on the basis of adherence to the requirements stipulated in this CFP and the overall quality of their proposal. The general Testbed objective is for the work to inform future OGC standards development with findings and recommendations surrounding potential new specifications. Bidders are asked to formulate a path for producing executable interoperable prototype implementations that meet the stated CFP requirements, and for documenting the findings and recommendations arising from those implementations. Bidders not selected for cost sharing funds may still be able to participate in addressing the stated CFP requirements on a purely in-kind basis.
However, to help maintain a manageable process, Bidders are advised to avoid attempts to use the Testbed as a platform for introducing new requirements not included in Appendix B Technical Architecture. Any additional in-kind scope should be offered outside the formal bidding process, where an independent determination can be made as to whether it should be included in Testbed scope or not. Items deemed out-of-Testbed-scope might be more appropriate for inclusion in another OGC Innovation Program initiative.
Each selected Participant (including pure in-kind Participants) will be required to enter into a Participation Agreement contract ("PA") with the OGC. The reason this requirement applies to pure in-kind Participants is that other Participants will be relying upon their delivery to show component interoperability across organizational boundaries.
Each PA will include a statement of work ("SOW") identifying Participant roles and responsibilities. The purpose of the PAs is to encourage and enable Participants to work together to realize Testbed goals for the benefit of the broader OGC community.
1.1. Questions and Clarifications
Once the original CFP has been published, ongoing authoritative updates and answers to questions can be tracked by monitoring the CFP Clarifications page. Instructions for accessing this page are included under Proposal Submission Procedures.
Bidders may submit questions via timely submission of email(s) to the OGC Technology Desk (techdesk@opengeospatial.org). Question submitters will remain anonymous, and answers will be regularly compiled and published on the CFP clarifications page.
OGC may also choose to conduct a Bidder’s question-and-answer webinar to review the clarifications and invite follow-on questions.
2. Proposal Evaluation Criteria
Proposals will be evaluated according to criteria that can be divided into two areas: Management and Technical.
2.1. Management Criteria
- Bidder willingness and ability to comply with Appendix A Management Requirements,
- Feasibility of proposed solution utilizing proposed resources,
- Proposed in-kind contribution in relation to proposed cost-share funding request.
2.2. Technical Criteria
- Understanding of and compliance with requirements as stated in Appendix B Technical Architecture,
- Quality and suitability of proposed design, and
- Where applicable, OGC compliance of proposed solutions.
3. Master Schedule
The following table details the major Testbed milestones and events:
Milestone | Date | Event |
---|---|---|
 | 8 December 2017 | CFP Release (ITT released 14 Dec.) |
 | 5 January 2018 | Final Bidder Questions Due (CFP-only) |
 | 12 January 2018 | (CFP-only) Bidders Q&A Webinar (tentative) |
 | 21 January 2018 | CFP Proposal Submission Deadline (11:59pm U.S. Eastern time) |
 | 2 February 2018 | First Round of Bidder Notifications Started |
 | 16 February 2018 | Second Round of Bidder Notifications Started |
 | 31 March 2018 | All CFP Participation Agreements Signed |
 | 10-12 April 2018 | Kickoff Workshop Event (date updated on 18 Jan.) |
 | 31 May 2018 | Initial Engineering Reports (IERs) |
 | 30 June 2018 | Component Implementation Designs |
 | 31 July 2018 | IER-to-DER status check; TIE Connectivity Test; Early implementations of any component on which another component depends |
 | 31 August 2018 | TIE Readiness Review |
 | 30 September 2018 | TIE-Tested Component Implementations completed; Preliminary DERs complete & clean, ready for Testbed internal reviews |
 | 31 October 2018 | Ad hoc TIE demonstrations (as requested during the month) & Demo Assets posted to Portal; Near-Final DERs posted to Pending & WG review requested |
 | 21 November 2018 | Final DERs (incorporating WG feedback) posted to Pending to meet 3-week rule for vote at December TC Meeting (date updated on 4 Feb.) |
 | 30 November 2018 | Participant Final Summary Reports |
 | [Date TBD] December 2018 | Demonstration Event |
3.1. Sequence of Events, Phases, and Milestones
The following diagram provides a notional schedule of major Testbed events and milestones, and their approximate sequence of occurrence. The Testbed will use rolling-wave project management whereby more detailed scheduling will take place as each milestone draws near.
Participant Selection and Agreements:
Following the closing date for submission of proposals, OGC will evaluate received proposals, review recommendations with Sponsors, and negotiate Participation Agreement (PA) contracts, including statements of work (SOWs), with selected Bidders. Participant selection will be complete once PA contracts have been signed with all Participants.
Kickoff Workshop: A Kickoff Workshop ("Kickoff") is a face-to-face meeting where Participants, guided by thread architects, will refine the Testbed architecture and settle upon specific interface models to be used as a baseline for prototype component interoperability. Participants will be required to attend the Kickoff, including the breakout sessions of each thread for which they were selected. Participants will be expected to use these breakouts to collaborate with other Participants and confirm intended Component Interface Designs.
After the face-to-face Kickoff, most Testbed activities will be conducted remotely via web meetings and teleconferences until the Demonstration Event near the end of Testbed execution.
Development of Engineering Reports, Change Requests, and Other Document Deliverables: Development of Engineering Reports (ERs), Change Requests (CRs) and other document deliverables will commence during or immediately after Kickoff. Participants will deliver an Initial Engineering Report (IER) plus several iterations of a Draft Engineering Report (DER). Full process details can be found in the OGC ER Process.
Under the Participation Agreement (PA) contracts to be formed with selected Bidders, ALL Participants will be responsible for contributing content to the ERs. But the ER Editor role will assume the duty of being the primary ER author.
Important
|
As compared to the ER Editor role in Testbed-13, ER Editors in Testbed-14 must adopt a more proactive approach to consuming and documenting the knowledge being generated during component implementation. In other words, ER Editors will serve as primary authors in addition to their editor role. All participants are required to support the editors by making necessary documentation material available. |
Component Design, Development, and Preliminary Testing: Participants will continue documenting detailed and final Component Interface Designs in the Testbed wiki. This documentation will allow task Participants to confirm a mutual understanding of each other’s interfaces so that subsequent interoperability testing can take place. A preliminary Technology Integration Experiment ("TIE", sometimes also referred to as a "Technology Interoperability Experiment") Connectivity Test milestone will be used to ensure that Testbed service endpoints can be reached by Testbed clients.
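The TIE Connectivity Test itself is lightweight: it only confirms that each deployed service endpoint answers a basic request from a client. As a non-normative sketch (the endpoint URL, function names, and parameter defaults below are illustrative assumptions, not CFP requirements), a Participant could pre-check an endpoint with a simple GetCapabilities probe:

```python
from urllib.parse import urlencode
from urllib.request import urlopen


def capabilities_url(endpoint: str, service: str = "WMS", version: str = "1.3.0") -> str:
    """Build a GetCapabilities request URL for an OGC web service endpoint."""
    # Append with '&' if the endpoint already carries query parameters.
    sep = "&" if "?" in endpoint else "?"
    query = urlencode({"service": service, "request": "GetCapabilities", "version": version})
    return f"{endpoint}{sep}{query}"


def is_reachable(endpoint: str, service: str = "WMS", timeout: float = 10.0) -> bool:
    """Return True if the endpoint answers a GetCapabilities request with HTTP 200."""
    try:
        with urlopen(capabilities_url(endpoint, service), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, HTTP error, timeout
        return False


# Hypothetical endpoint, for illustration only:
# is_reachable("https://example.org/ows", "WMS")
```

Participants would substitute their own deployed endpoint and service type; the formal connectivity-test procedures in Appendix A Management Requirements remain authoritative.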
Important
|
Each proposed service component implementation deliverable should be accompanied by identification of one or more particular datasets that would be suitable for the proposed service. Details are provided under Data Requirements for Proposing Service Component Implementation Deliverables. |
TIE Readiness Review: A TIE Readiness Review will be conducted with a Thread Architect to confirm that each TIE Participant is prepared to start conducting TIE testing with counterpart Participants.
Component Interoperability Testing, and Acceptance: Participants should deliver completed and fully TIE-tested component implementations no later than the 30 September milestone (unless an earlier milestone is identified in Appendix B Technical Architecture). The primary acceptance criterion for a component implementation deliverable is the conduct and recording of the TIE test. This test can also prove useful in accelerating the development of Demonstration Event assets such as video recordings.
Draft Engineering Reports: Participants should also deliver complete and clean Draft Engineering Reports (DERs) by the 30 September milestone. A complete DER is one in which all major clauses have been populated with meaningful content. A clean DER is one in which all known editorial defects have been repaired. This milestone will impact ALL Testbed Participants, including component implementers, who will be responsible for making necessary documentation material available to the ER Editor for use in authoring the ER. Testbed Sponsors and Thread Architects will review these DERs in the weeks following delivery.
DER Rework and SWG/DWG Reviews: Participants will be required to perform rework based on the reviews from Sponsors and Thread Architects. The ER editor will then make a request to the selected OGC SWG or DWG to perform its review and to consider making a request that the DER be voted on by the OGC Technical Committee (TC) for publication. The OGC 3-week rule must be followed if the DER is to receive a vote at a TC Meeting. The DER must be posted to the members-only OGC Pending Documents directory to enable this TC vote. Participants will likely have to perform one final round of rework based on SWG/DWG feedback. The further along the DER is when submitted to the WG, the less rework will be required.
Final Summary Reports and Demonstration Event: Participants' Final Summary Reports will constitute the close of funded activity. Further development work might take place to prepare and refine assets to be shown at the Demonstration Event.
Assurance of Service Availability: Participants selected to implement service components must maintain availability for a period of no less than one year after the Final Delivery Milestone. OGC might be willing to entertain exceptions to this requirement on a case-by-case basis.
Detailed requirements for meeting all these milestones are provided in Appendix A Management Requirements.
4. Summary of Testbed 14 Deliverables
The following tables show the full set of Testbed 14 deliverables, including ID, deliverable name, task, and funding status.
A deliverable’s funding status can be funded ("F"), unfunded ("U"), or under negotiation ("Un-Neg"), depending on the current state of sponsor funding.
- For a deliverable with a funding status of "F", sponsor funding has already been confirmed.
- A deliverable with a funding status of "U" is within CFP scope, but has a lower priority and does not have any sponsor funding. It’s possible that a Sponsor might later agree to provide funding. But no Sponsor funding is currently available, and any bids for these deliverables would necessarily be purely in-kind.
- A deliverable with a funding status of "Un-Neg" is one for which a sponsor intends to provide funding, but a final commitment of this funding is still pending.
Note
|
A deliverable with a funding status of "ITT" can only be proposed under the Part 2 ITT solicitation. See the Main Body Introduction for details regarding how to bid on these deliverables. |
Please note that each deliverable indicated as "F" or "Un-Neg" would be funded at most once. No deliverable should be interpreted as offering multiple instances. For any deliverable still under negotiation ("Un-Neg"), if funding for that deliverable ends up not being committed, any bid for cost-sharing on that deliverable will be dismissed. The deliverable would change to an unfunded "U" status, and a Bidder could potentially choose to contribute it purely in-kind.
Important
|
Discussions are ongoing for potential sponsorship of additional requirements that are not yet firm enough for inclusion in this CFP. Should funding for these additional requirements be provided later, a follow-on CFP with a compressed response timeline might also be issued if the work can be included without interfering with the original Master Schedule or Appendix B Technical Architecture. |
Testbed deliverables are organized into threads, which are described in detail in Appendix B Technical Architecture.
All Participants are required to provide at least some level of in-kind contribution (i.e., activities requesting no cost-share compensation). As a rough guideline, a proposal should include at least one dollar of in-kind contribution for every dollar of cost-sharing compensation requested. All else being equal, higher levels of in-kind contributions will be considered more favorably during evaluation.
Some participation may be fully in-kind. However, to help maintain a manageable process, Bidders are advised to avoid attempts to use the Testbed as a platform for introducing new requirements not included in Appendix B Technical Architecture. Any additional in-kind scope should be offered outside of the formal bidding process, where an independent determination can be made as to whether it should be included in Testbed scope or not. Items deemed out-of-Testbed-scope might be more appropriate for inclusion in another OGC Innovation Program initiative.
Any item proposed as a fully in-kind contribution to meet a requirement already included in Appendix B Technical Architecture will likely be accepted if it meets all the other evaluation criteria.
In the tables below, document deliverables are numbered D001 and increasing, and component implementation deliverables are numbered D100 and increasing.
The thread abbreviations are as follows:
- CITE: Compliance
- EOC: Earth Observation & Clouds
- MoPoQ: Modeling, Portrayal, and Quality of Service
- NextGen: Next Generation Services
4.1. CFP Deliverables and Funding Status
Additional technical details can be found in Appendix B Technical Architecture.
ID | Document / Component | Thread | Funding Status |
---|---|---|---|
D003 | Secure Client Test ER | CITE | F |
D027 | Compliance ER | CITE | F |
D112 | Secure Client Tests and Implementations | CITE | F |
D152 | DGIWG CAT 2.0.2 Tests | CITE | F |
D153 | DGIWG CAT 2.0.2 Reference Implementation | CITE | F |
D154 | WFS3.0 Compliance Tests | CITE | F |
— | — | — | — |
D007 | Swath Coverage Engineering Report | EOC | F |
D008 | Application Package ER | EOC | ITT |
D009 | ADES & EMS Results and Best Practices ER | EOC | ITT |
D010 | Authorization, Authentication, & Billing ER | EOC | ITT |
D100 | Secured ADES Implementation I | EOC | ITT |
D101 | Secured ADES Implementation II | EOC | ITT |
D102 | Secured EMS Implementation I | EOC | ITT |
D103 | Secured EMS Implementation II | EOC | ITT |
D104 | TEP Client I | EOC | ITT |
D105 | TEP Client II | EOC | ITT |
D106 | Application Package and App Development I | EOC | ITT |
D107 | Application Package and App Development II | EOC | ITT |
D108 | Billing Service Implementation I | EOC | ITT |
D109 | Billing Service Implementation II | EOC | ITT |
D134 | WCS for Swath Coverage v1.1 & REST | EOC | F |
D135 | WCS for Swath Coverage v1.1 | EOC | F |
D136 | Client to WCS-EO 1.1 for Swath Coverage | EOC | F |
D137 | Client to WCS-EO 1.1 & REST for Swath Coverage | EOC | F |
— | — | — | — |
D001 | SWIM Information Registry ER | MoPoQ | F |
D002 | Semantically Enabled Aviation Data Models ER | MoPoQ | F |
D011 | WMS QoSE ER | MoPoQ | F |
D012 | MapML ER | MoPoQ | F |
D013 | PointCloud Data Handling ER | MoPoQ | F |
D022 | NAS ER | MoPoQ | F |
D029 | Symbology Engineering Report | MoPoQ | F |
D030 | Machine Learning ER | MoPoQ | F |
D115 | Multiple WMS with QoSE support | MoPoQ | F |
D116 | Client with WMS QoSE support | MoPoQ | F |
D117 | WMS QoSE test suite | MoPoQ | F |
D118 | MapML Browser Extension | MoPoQ | F |
D119 | MapML Cloud Proxy | MoPoQ | F |
D132 | Image Repository and Feature Store | MoPoQ | F |
D133 | Client to Knowledge Base | MoPoQ | F |
D141 | Machine Learning Validation Client | MoPoQ | F |
D144 | NAS Implementation Shape Change | MoPoQ | F |
D145 | NAS Ontologies and Samples | MoPoQ | F |
D160 | Portrayal Ontology | MoPoQ | F |
D161 | GeoPackage with Portrayal Support | MoPoQ | F |
D162 | WMS with Portrayal Support | MoPoQ | F |
D163 | WMTS with Portrayal Support | MoPoQ | F |
D164 | Machine Learning Knowledge Base | MoPoQ | F |
D165 | Machine Learning System | MoPoQ | F |
— | — | — | — |
D021 | Secure Resource Oriented Geospatial Data Handling ER | NextGen | F |
D023 | Federated Clouds Engineering Report | NextGen | F |
D024 | Security ER | NextGen | F |
D025 | WPS-T ER | NextGen | F |
D026 | Workflow ER | NextGen | F |
D028 | CityGML and AR ER | NextGen | F |
D040 | Complex Feature Handling ER | NextGen | F |
D041 | Complex Feature Handling Study Support | NextGen | F |
D113 | Next Generation Service Implementation 2 | NextGen | F |
D140 | Next Generation Service Implementation 1 | NextGen | F |
D142 | Next Generation Client Implementation 1 | NextGen | F |
D143 | Next Generation Client Implementation 2 | NextGen | F |
D146 | NGA Analysis Tool WPS | NextGen | F |
D147 | Security Mediation Service WPS | NextGen | F |
D148 | BPMN Service | NextGen | F |
D149 | WPS-T Implementation | NextGen | F |
D150 | CSW for WPS Processes & Analytic Metadata | NextGen | F |
D151 | OAuth2.0 Authorization Server | NextGen | F |
D155 | CityGML and AR Service I | NextGen | F |
D156 | CityGML and AR Service II | NextGen | F |
D157 | CityGML and AR Client I | NextGen | F |
D158 | CityGML and AR Client II | NextGen | F |
D159 | CityGML and AR Content | NextGen | F |
— | — | — | — |
< end of main body >
Appendix A: Management Requirements
This appendix presents detailed CFP management requirements for submitting a bid. It also covers procedures for participation during Testbed execution. The Appendix B Technical Architecture presents the detailed CFP technical requirements.
All potential Bidders, even experienced Testbed Participants, should read this appendix carefully from end to end (and notify OGC immediately if any defects are discovered). There are significant differences compared to prior testbed solicitations. Bidders who are new to OGC Testbeds are also encouraged to review the Tips for New Bidders.
The following sections describe the processes for a Bidder to submit a proposal and for a Participant (i.e., a selected Bidder) to perform against a Participation Agreement (PA) contract. The order of topics roughly parallels the sequence described in the Master Schedule. In general, the term "activity" is used as a verb describing work to be performed and the term "deliverable" is used as a noun describing artifacts to be memorialized and delivered for inspection and use.
A.1. Proposal Submission Procedures
The process for completing a Testbed 14 proposal is essentially embodied in the online Bid Submission Form. A summary is provided here for the reader’s convenience.
Once an online account has been created, the user will be taken to a home page indicating the "Status of Your Proposal." If any defects in the form are discovered, this page includes a link for notifying OGC. The user can return to this page at any time by clicking the OGC logo in the upper left corner.
Important: Because the Bid Submission Form is new for Testbed 14, it might contain some areas that are still brittle or in need of repair. Please notify OGC of any discovered defects. Periodic version updates will be provided as needed. Please consider making backup local copies of all inputs in case any of them need to be re-entered. Please also note that this form has already gone "live" as of the CFP release date. Any submitted bids will be treated as earnest submissions, even those submitted well before the response deadline. Be certain that you intend to submit your proposal before you click the Submit button on the Review page.
Clicking on the Clarifications link will navigate to the CFP clarifications page.
On the far right, the Review link navigates to a page summarizing all the deliverables the Bidder is proposing.
Tip: Consider regularly creating printed output copies of this Review page at various checkpoints during proposal creation in case an input error is made later.
Once the final version of the information for all the proposed deliverables has been entered, the Bidder can submit the completed proposal to OGC by clicking the Submit button at the bottom.
Tip: In general, up until the time that the user clicks this Submit button, the proposal may be edited as many times as the user wishes. However, this initial version of the form contains no "undo" capability, so please use caution when overwriting existing information.
The user is provided an opportunity under Attached Documentation at the bottom of this page to attach collateral documentation (one document per proposal). This document could conceivably contain any specialized information that wasn’t suitable for entry into a Proposed Contribution field under an individual deliverable. It should be noted, however, that this additional documentation will only be read on a best-effort basis. There is no guarantee it will be used during evaluation to make selection decisions; rather, it could optionally be examined if the evaluation team feels that it might help in understanding any specialized (and particularly promising) contributions.
The Propose link takes the user to the first page of the proposal entry form. This form contains fields to be completed once per proposal such as names and contact information.
It also contains an optional Organizational Background field where Bidders (particularly those with no experience participating in an OGC testbed) may provide a description of their organization. In addition, it contains a click-through check box where each Bidder will be required (before entering any data for individual deliverables) to acknowledge its understanding and acceptance of the requirements described in this appendix.
Clicking the Update and Continue button then navigates to the form for submitting deliverable-by-deliverable bids. On this page, existing deliverable bids can be modified or deleted by clicking the appropriate icon next to the deliverable name. Any attempt to delete a proposed deliverable will require scrolling down to click a Confirm Deletion button.
To add a new deliverable, the user would scroll down to the Add Deliverable section and click the Deliverable drop-down list to select the particular deliverable of interest.
The user would then enter the required information for each of the following fields (for this deliverable only). Required fields are indicated by an asterisk ("*"):
-
Estimated Projected Labor Hours* for this deliverable
-
Funding Request*: total U.S. dollar cost-share amount being requested for this deliverable (to cover burdened labor only)
-
Estimated In-kind Labor Hours* to be contributed for this deliverable
-
Estimated In-Kind Contribution: total U.S. dollar estimate of the in-kind amount to be contributed for this deliverable (including all cost categories)
Cost-sharing funds may only be used for the purpose of offsetting burdened labor costs of development, engineering, and demonstration of Testbed outcomes related to the Participant’s assigned deliverables. By contrast, the costs used to formulate the Bidder’s in-kind contribution may be much broader, including supporting labor, travel, software licenses, data, IT infrastructure, etc.
Finally, a Bidder proposing to deliver a Service Component Implementation should use the Proposed Contribution (Please include any proposed datasets) field to identify what suitable datasets would be contributed (or what data should be acquired from another identified source) to support the proposed service. Full details of this requirement can be found under Data Requirements.
The Proposed Contribution (Please include any proposed datasets) field could also be used to provide a succinct description of what the Bidder intends to deliver for this work item to meet the requirements expressed in Appendix B Technical Architecture. This language could potentially include a brief elaboration on how the proposed deliverable will contribute to advancing the OGC technical baseline, or how implementations enabled by the specification embodied in this deliverable could add specific value to end-user experiences.
However, to help maintain a manageable process, Bidders are advised to avoid attempts to use the Testbed as a platform for introducing new requirements not included in Appendix B Technical Architecture. Any additional in-kind scope should be offered outside the formal bidding process, where an independent determination can be made as to whether it should be included in Testbed scope or not. Items deemed out-of-Testbed-scope might be more appropriate for inclusion in another OGC Innovation Program initiative.
The statements of work (SOWs) in the Participation Agreement (PA) contracts will ultimately require performance during testbed execution against the deliverables as described in Appendix B Technical Architecture plus any subsequent Corrigenda.
A single bid may propose deliverables arising from any number of Testbed threads. To ensure that the full set of sponsored deliverables is covered, OGC might negotiate with individual Bidders to drop certain deliverables from, and/or add others to, their proposals.
A.2. Conditions for Participation
A selected Bidder will become a Testbed Participant by executing a Participation Agreement (PA) contract with OGC, which will require the following conditions:
-
Each Bidder should use the Bid Submission Form to make their proposal.
-
Proposals from non-OGC-members will be considered provided that a completed application for OGC membership (or a letter of intent to become a member) is submitted prior to (or with) the proposal.
-
No work facilities will be provided by OGC. Each Participant will be required to perform its PA obligations at its own provided facilities and to interact remotely with other Testbed stakeholders.
-
In general, a proposed component deliverable based on a product that has earned OGC Certification will be evaluated more favorably than one which has not.
-
Bidders proposing to build interoperable components should be prepared to test and demonstrate interoperability with components supplied by other Participants. In particular, server-endpoints must be accessible over the public Internet during Testbed TIEs.
-
Participants selected to implement component deliverables will be expected to participate in the full course of interface and component development, TIEs, and demonstration support activities throughout initiative execution. Participants selected as Editors will also be expected to participate in the full course of activities throughout the initiative, documenting implementation findings and recommendations and ensuring document delivery.
-
Participants should remain aware of the fact that the Testbed components will be developed across many organizations. To maintain interoperability, each Participant should diligently adhere to the latest technical specifications so that other Participants may rely on the anticipated interfaces during the TIEs.
-
All Selected Participants (both cost-share and pure in-kind) must send at least one technical representative per assigned thread to the Kickoff Workshop. Participants are also encouraged to send at least one technical representative to the Demonstration Event tentatively scheduled for December, 2018.
A.3. Proposal Evaluation and Invitations to Selected Bidders
Specific proposal evaluation criteria were presented earlier. Several process steps to be conducted by the IP Team are presented below to aid readers in understanding the overall process.
Soon after the proposal submission deadline, the IP Team and Sponsors will begin reviewing proposals, examining suitability of the proposed deliverables in light of the CFP requirements. During this analysis, the IP Team might need to contact Bidders to obtain clarifications and better understand what is being proposed.
At the Decision Technical Evaluation Meeting I (TEM I), the IP Team will present Sponsors with draft recommendations regarding which parts of which proposals should be offered cost-share funding. Sponsors will use this opportunity to suggest modifications.
The IP Team will notify each Bidder of their selection, including a description of the particular deliverables being offered, and OGC’s intent to enter into a Participation Agreement (PA) with them. Selected Bidders must be available for these contacts to be made to enable confirmation of continued interest. Bidder acceptance of the offer essentially locks in the scope that will be used to form the SOW to be attached to the forthcoming PA. A Bidder may reject the offer (for example, if the offer is to deliver only a small subset of its proposed deliverables).
A Decision Technical Evaluation Meeting II (TEM II) meeting will be conducted to review the outcomes of initial offers to selected Bidders, and any modifications that were needed due to Bidder rejections.
Following TEM II, the IP Team will develop the formal SOW and attach it to the full PA for each selected Bidder. Selected Bidders must be available to enable finalization of their PA contract.
Each PA will include a final identification of all assigned deliverables. The specific requirements that these deliverables must meet will be those documented in Appendix B Technical Architecture (plus any subsequent corrigenda).
Because Testbeds tend to deal with the lowest maturity level of OGC standards technology, they operate under the principle that “interoperable running code wins” (based loosely on an Internet Engineering Task Force founding belief).
In contrast with ordinary commercial contracts, which often focus on delivering a single-customer solution, the Participation Agreement contracts will encourage Participants to work in the broader interests of the OGC community. These interests are generally represented in the OGC Standards Working Groups (SWGs) and Domain Working Groups (DWGs). Testbed sponsorship enables a selected subset of these interests to be brought to bear on the production of interoperable running code and the reporting back of findings and recommendations as Change Requests (CRs) and document deliverables under the OGC Engineering Report Process.
In general, at least one Participant will be assigned for each client-server pair as an Engineering Report (ER) Editor, who will be responsible for authoring the ER, though there might be exceptions. See General Requirements for Proposing Deliverables for details. An ER Template will be provided to assist in the breakdown of content into multiple chapters describing specific findings and recommendations, a summary chapter, and supporting chapters (e.g., containing references and revision history).
Important: As compared to the ER Editor role in Testbed-13, ER Editors in Testbed-14 must adopt a more proactive approach to consuming and documenting the knowledge being generated during component implementation. In other words, ER Editors will serve as primary authors in addition to their editor role. All participants are required to support the editors by making necessary documentation material available.
OGC members who are not selected to make any deliverables may still observe all Testbed activities by visiting the (members-only) OGC Observer Agreements page and registering to become a Testbed-14 Observer. The Observer Agreement will require that Observers follow the same intellectual property protection rules as apply to other Testbed stakeholders (e.g., to avoid sharing other stakeholders' confidential information with individuals outside the Testbed).
Testbed stakeholders should also avoid making any representations that any of the intermediate technical decisions made internally during Testbed execution reflect any sort of endorsed position of the OGC. Formal Testbed recommendations will become "official" once the containing ER has received approval to be made public.
A.4. Kickoff Workshop
Testbed execution will commence with a Kickoff Workshop event ("Kickoff"). In general, the Kickoff ensures a common Participant understanding of the overall Testbed architecture and of all interfaces relevant to their assigned deliverables. Under the Testbed’s rapid pace, and guided by the Thread Architect, periods of development will be followed by synchronization among component developers, so that the likelihood of interoperation is maximized at each iteration. To maintain interoperability, each Participant should diligently adhere to the latest agreed specifications so that other Participants may rely on the anticipated interfaces in subsequent TIE testing.
Since Participants will be relying on one another to deliver interoperable components, all Participation Agreement (PA) contracts should be settled by the start of Kickoff. Any Participant that has not yet executed a PA by start of Kickoff will be required to attest to its commitment to a PA Statement of Work (SOW). The full PA must then be executed with OGC no later than Kickoff completion (two days after Kickoff start).
The Kickoff will include both plenary and thread-based breakout sessions. Thread breakouts will be used to ensure a mutual understanding of the detailed interfaces that will support component interoperability. The Kickoff will also present an opportunity to finalize regularly scheduled thread and subthread teleconferences ("telecons").
All selected Participants must send at least one technical representative per assigned thread to the Kickoff Workshop, and a representative should be present at the breakout sessions for every thread for which the Participant has been assigned a deliverable.
Important: In-person attendance at these breakout meetings has been found to be critical to later Testbed success. Remote attendance will not be an option.
These requirements will also apply to Participants not requesting any cost-share funding (i.e., purely in-kind contribution) since their deliverables are still being relied upon by other Participants.
A.5. Participant Communication and Reporting
A.5.1. Points of Contact
Each selected Participant should designate a primary point of contact ("Primary POC") who shall remain available throughout Testbed execution for communications regarding status. The Primary POC should identify at least one alternate point of contact to provide support as needed. The POCs should provide contact information, including e-mail addresses and phone numbers.
All proposals should include a statement attesting to the POCs’ understanding and acceptance of the duties described herein. This statement will be enabled by a simple check-box in the Bid Submission Form.
A.5.2. Kickoff Status Report
Selected Participants should provide a one-time Kickoff Status Report that includes a list of personnel assigned to support the initiative and assurance that the Participant understands the schedule for all of its deliverables. This report should be submitted in electronic form to a designated email address no later than the last day of the Kickoff event.
A.5.3. Monthly Progress Reports
Participant Business and Technical POCs should provide, respectively, monthly business and technical progress and status reports due on the 3rd of each month (or the first working day thereafter) and covering the previous month.
Monthly Technical Reports will be accessible to all Testbed Stakeholders and should contain no confidential information (e.g., labor rates). Monthly Business Reports may contain confidential information and should be shared only with identified OGC POCs.
Any Participant who has a reliable forecast of what will take place in the remaining days of any particular reported month may submit its report early and subsequently report any urgent, last-minute updates via a follow-on email.
Detailed monthly reporting requirements and procedures will be provided during PA contract negotiation. It is likely that the Portal-based Actions feature (used in Testbed-13) will continue to be used for Testbed-14 monthly technical reporting.
The IP Team Thread Architects will review action item status on a weekly basis with assigned Participants, who should be available for these contacts to be made. Additional details regarding project management activities during Testbed execution can be found at Project Monitoring and Control.
As a preview, monthly technical reports will likely require the following information for each deliverable:
-
Deliverable ID and Name
-
Health: G, Y, R (green, yellow, red)
-
%complete (0%-100%)
-
Work accomplished in reporting period (last month)
-
Work anticipated in reporting period+1 (current month)
-
Any known risks or issues (see definitions under Project Monitoring and Control)
-
Response to mitigate the risk or remedy the issue
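Purely as an illustration (the actual reporting format and all field names will be defined during PA contract negotiation), the fields above could be captured in a simple record like the following, with validation mirroring the G/Y/R and 0-100% constraints:

```python
from dataclasses import dataclass, field

@dataclass
class MonthlyTechnicalReportEntry:
    """One deliverable's entry in a monthly technical report (illustrative only)."""
    deliverable_id: str              # e.g. "D030"
    name: str                        # e.g. "Machine Learning ER"
    health: str                      # "G", "Y", or "R"
    percent_complete: int            # 0-100
    work_accomplished: str           # reporting period (last month)
    work_anticipated: str            # reporting period+1 (current month)
    risks_or_issues: list = field(default_factory=list)
    mitigation_or_remedy: str = ""

    def __post_init__(self):
        # Enforce the health codes and completion range described above.
        if self.health not in ("G", "Y", "R"):
            raise ValueError("health must be G, Y, or R")
        if not 0 <= self.percent_complete <= 100:
            raise ValueError("percent_complete must be between 0 and 100")

entry = MonthlyTechnicalReportEntry(
    deliverable_id="D030", name="Machine Learning ER",
    health="Y", percent_complete=60,
    work_accomplished="Drafted findings chapter",
    work_anticipated="Incorporate TIE results",
    risks_or_issues=["Awaiting dataset from partner"],
    mitigation_or_remedy="Fall back to an alternative public dataset")
```

This is a sketch of the information content only; actual reporting will likely be entered through the Portal-based Actions feature rather than structured files.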
A.5.4. Final Summary Report
Participants should provide a Final Summary Report near the end of Testbed execution. Detailed requirements and procedures will be provided during PA contract negotiation. These reports will likely require the following information:
-
Describe, in detail, the work completed to fulfill the PA SOW items,
-
Summarize Participant’s overall contribution to the project, and
-
Present recommendations for future OGC Innovation Program and Standards Program efforts.
A Participant who is up-to-date with all Monthly Technical Status Reports in the Portal will likely be permitted to satisfy the first "in detail" requirement by simply referencing the detail already contained in those reports. Timely Participants may also be permitted to submit their Final Summary Report in the body of an email (or, alternatively, as a document attachment).
A.5.5. Regular Teleconferences
At least one Participant Technical POC should be available for the regularly scheduled telecons of each thread in which the Participant is involved. Weekly thread telecons will be conducted. Records will be kept as meeting minutes posted to the Wiki and/or GoToMeeting recordings uploaded to the Portal.
These telecons are intended to accelerate understanding and action regarding relevant Testbed activities, particularly status of any risks or issues that might block or are blocking progress toward timely delivery.
Participants assigned as ER Editors may, with permission from the relevant Thread Architect, lead the substantive discussions during any ad hoc or regularly scheduled sub-thread telecons.
A.5.6. Correspondence and Collaboration
Participants should be able to utilize the following correspondence and collaboration tools:
-
Send and receive emails to/from thread-based email broadcast lists,
-
Contribute technical content using the OGC Member Wiki tool,
-
Participate in telecons using the GoToMeeting tool,
-
Edit ER source files in the Asciidoc format using the OGC ER Template,
-
Upload ER source files to the OGC Testbed-14 GitHub repository.
A.6. General Requirements for Proposing Deliverables
In general, Participants will be required to make deliverables meeting the requirements stated in CFP Appendix B Technical Architecture.
Under the Participation Agreement (PA) contracts to be formed with selected Bidders, all Participants will be responsible for contributing content to the ERs.
However, the ER Editor role will assume the duty of primary ER author. As compared to the ER Editor role in Testbed-13, ER Editors in Testbed-14 must adopt a more proactive approach to consuming and documenting the knowledge being generated during component implementation. In other words, ER Editors serve as primary authors in addition to their editor role. All other participants are required to support the editors by making necessary documentation material available.
In addition, component implementers (Participants selected to deliver component implementations) will be responsible for contributing completed technical artifacts for inclusion as ER annexes. These include UML diagrams, XML schemas and instances, and abstract test suites, all of which have their own specialized clauses in the ER Template. Component implementers will also be responsible for contributing demo assets such as video recordings of component operation.
ER Editors will be responsible for observing the component implementation work and documenting (in the assigned ER) all findings and recommendations (except the technical artifacts identified above). They will also be responsible for ensuring the document’s overall integrity, authoring summary material and ensuring that global sections (e.g., References, Terms, Revision History, Bibliography) have all been completed.
ER Editors will also be responsible for enabling all required ER reviews (and subsequent rework), including internal reviews by Thread Architects and Sponsors, as well as requests for reviews by an appropriate OGC Standard Working Group (SWG) or Domain Working Group (DWG).
ER Editors will also be responsible for creating (and referencing) any Change Requests (CRs) arising from the work described in the assigned ER. CRs can be created using the OGC CR system.
A.7. Specific Requirements for Proposing Document Deliverables
The primary acceptance criterion for a document deliverable (e.g., an ER) will be approval from a Thread Architect to post the document to an OGC (members-only) folder called the OGC Pending directory. Permission to post will not be granted until the full OGC Engineering Report Process has been followed, including the step to request a review from an OGC SWG or DWG. All appropriate tooling should also have been utilized (ER Template, Asciidoc, GitHub), and the document should have achieved a satisfactory level of consensus among all assigned Participants.
A.8. Specific Requirements for Proposing Component Implementation Deliverables
In general, prototype component implementation deliverables (including services, clients, datasets, and tools) will be provided by methods suitable to their type and stated requirements. A service (e.g., a WFS instance or even an intermediary component) will be delivered by deployment for use in the Testbed via an accessible URL. A client will be used to exercise a service to test and demonstrate interoperability. Components will be developed and deployed in all threads for integration and interoperability testing in support of agreed-upon thread scenario(s) and technical architecture. The Kickoff will provide the opportunity to establish many of the agreed-upon details. These technical artifacts could also be utilized for cross-thread scenarios in demonstration events.
More specifically, the first acceptance criterion for any component implementation deliverable will be a successful demonstration of capability in a Technology Integration Experiment ("TIE") test.
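As an informal illustration of TIE readiness (not a Testbed-mandated procedure), the sketch below issues a KVP GetCapabilities request against a service endpoint and checks for an XML response; the function name, default service type, and any URL passed to it are assumptions:

```python
from urllib.request import urlopen
from urllib.parse import urlencode

def check_capabilities(base_url: str, service: str = "WFS",
                       timeout: int = 10) -> bool:
    """Return True if the endpoint answers a GetCapabilities request with
    HTTP 200 and an XML-looking body. Illustrative pre-TIE smoke test only."""
    query = urlencode({"service": service, "request": "GetCapabilities"})
    try:
        with urlopen(f"{base_url}?{query}", timeout=timeout) as resp:
            body = resp.read(1024)
            return resp.status == 200 and body.lstrip().startswith(b"<")
    except OSError:
        # DNS failure, refused connection, timeout, etc.
        return False
```

A Participant might run such a smoke test before each TIE window to confirm that its server endpoint remains reachable over the public Internet, as required above.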
The next acceptance criterion will be assurance that the work-package ER Editor has access to sufficient documentation material to record all findings and recommendations in the ER and associated CRs.
Another criterion will be evidence that the implementer has directly authored full ER content for any relevant Abstract Test Suite (ER Template Annex A), XML Schema Document (ER Template Annex B), or UML Model (ER Template Annex C).
A Participant assigned to deliver a client component implementation should, in addition, record TIE test execution (as evidence of delivery) and create demo assets: brief (several-minute) video recordings of client operation reflecting the newly enabled capabilities. These assets will become candidates for incorporation into executive-level briefings to be shown at the Demonstration Event near the end of the Testbed.
A Participant assigned to deliver a service component implementation should, in addition, provide assurance that the service endpoint will remain available to OGC and Sponsors for a period of no less than one year after Testbed execution. OGC might be willing to entertain exceptions to this requirement on a case-by-case basis.
A Participant assigned to deliver a service component implementation should, in addition, identify one or more particular datasets that would be suitable for the proposed service. Details are provided under Data Requirements for Proposing Service Component Implementation Deliverables.
Some requirements might request delivery of a packaged prototype component (e.g., using a virtual machine or container). Consult the specific language in the Appendix B Technical Architecture for details regarding these requirements.
A.8.1. Data Requirements for Proposing Service Component Implementation Deliverables
A Bidder proposing to deliver a prototype service component implementation should provide detailed information regarding what data would be contributed (or what data should be acquired from another identified source) to support the proposed service. Some Testbed Sponsors might provide data, but it might also be necessary to complement these with additional datasets.
As an illustration, in the Testbed-12 initiative, the San Francisco Bay served as the primary area of interest (AOI) for service component implementation data. Full test dataset details are described in the document Testbed-12 Test Dataset Implementation with Documentation (16-136). One of the primary sources for Testbed-12 data was the Homeland Infrastructure Foundation-Level Data (HIFLD).
A similar pattern could be followed for a Testbed 14 service component implementation by proposing one or more HIFLD datasets for an assumed AOI covering the area of south Florida impacted by Hurricane Irma in September, 2017. Specific HIFLD data matching these criteria can be found at the HIFLD Hurricane Irma Response site.
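As a hedged sketch only: screening candidate feature records against an assumed AOI could look like the following, where the bounding-box coordinates (roughly covering south Florida) and the sample records are illustrative, not Sponsor-specified:

```python
# Illustrative AOI as (min_lon, min_lat, max_lon, max_lat); the values
# below approximate south Florida and are an assumption for this sketch.
SOUTH_FLORIDA_AOI = (-83.0, 24.5, -79.8, 27.5)

def in_aoi(lon: float, lat: float, aoi=SOUTH_FLORIDA_AOI) -> bool:
    """Return True if the point falls inside the AOI bounding box."""
    min_lon, min_lat, max_lon, max_lat = aoi
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

# Hypothetical feature records: (id, lon, lat)
features = [("hospital-1", -80.19, 25.76),   # Miami area, inside the AOI
            ("shelter-7", -97.74, 30.27)]    # well outside the AOI
selected = [fid for fid, lon, lat in features if in_aoi(lon, lat)]
```

An actual bid would instead cite specific HIFLD (or alternative) dataset names and let the serving component handle spatial filtering.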
Alternatively, a Bidder could identify a non-HIFLD source (and even an alternative AOI) if another option would be more suitable to meet the technical requirement stated in the Appendix B Technical Architecture.
For example, one of the Sponsors identified the NGA Data Depot from the GEOINT Solutions Marketplace as a potential alternate source of appropriate data.
As another example, Testbed-12 reused OpenStreetMap feature data (which had actually originated in Testbed 11).
A.9. Project Monitoring and Control
Testbed execution monitoring and control will follow an encapsulation principle. As long as a Participant continues [1] supporting activities on which other Testbed stakeholders are relying (e.g., telecons, TIEs, ER inputs) and [2] making timely delivery of their own deliverables, the question of what’s "going on" inside the Participant organization will likely not be raised.
Otherwise, any issues (or risks) that block (or threaten to block) timely delivery will enter a remediation status requiring the following responses:
-
Development, implementation, and reporting of a credible mitigation or recovery response to get back on track,
-
Recording in the Monthly Participant Technical Status Report a "Red Flag" Action Status indicating that the deliverable is in remediation (plus either "Red" or "Yellow" health in the Action Description),
-
Notification of the relevant Thread Architect(s), who will review the recovery plan and report status to the Initiative Director and Sponsors.
The following definitions will be used to communicate and track deliverable status:
-
For purposes of the Testbed, a risk can be defined as any uncertain future event that has a sufficiently high exposure to threaten Participants' ability to make on-time delivery of the promised deliverable scope.
-
An issue is a risk that has actually occurred (no longer any uncertainty surrounding whether it will occur or not).
-
A mitigation response is one that reduces a risk’s exposure (e.g., finding an alternative source for meeting a data dependency).
-
A recovery response is one that remedies the damage caused by the issue (e.g., getting the deliverable back on schedule or being excused from delivery of some particular requirement).
-
A deliverable whose timely delivery is threatened should have an overall status of being in remediation and should reflect a Red or Yellow health.
-
A Red deliverable health is appropriate when the response has not yet been created and agreed, or it has been created and agreed but is not believed to be sufficient to make timely delivery.
-
A Yellow deliverable health is appropriate when the response has been created and agreed, and is in the process of execution (but not yet fully completed).
-
Once the response has been fully completed, the deliverable health should return to Green and the deliverable would no longer be considered in remediation.
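The health definitions above reduce to a small decision rule, sketched here informally (the function and parameter names are assumptions, not Testbed terminology):

```python
def deliverable_health(response_agreed: bool,
                       response_sufficient: bool,
                       response_completed: bool) -> str:
    """Map remediation state to health, per the definitions above:
    R - no agreed response yet, or the agreed response is judged insufficient;
    Y - response agreed and believed sufficient, but still in execution;
    G - response fully completed (deliverable no longer in remediation)."""
    if response_completed:
        return "G"
    if response_agreed and response_sufficient:
        return "Y"
    return "R"
```

For example, a deliverable with an agreed and credible recovery plan still being executed would report Yellow, returning to Green only once the plan has fully completed.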
In general, risks and issues that are contained within a particular thread will be monitored and managed by the assigned Thread Architect, who will report ongoing status to other stakeholders. Any risks or issues that cross threads should be brought to the attention of all relevant Thread Architects and the Initiative Director.
A.10. Tips for New Bidders
Bidders who are new to OGC Testbeds are encouraged to review the following tips, extracted and updated from the Testbed-13 Clarifications document.
- The roles generally played in any OGC Innovation Program initiative are defined in the OGC Innovation Program Policies and Procedures, from which the following definitions are derived and extended.
  - Sponsors are OGC member organizations that contribute financial resources to steer Testbed requirements toward rapid development and delivery of proven candidate specifications to the OGC Standards Program. These requirements take the form of the deliverables described herein. Sponsor representatives help serve as "customers" during Testbed execution, helping ensure that requirements are being addressed and broader OGC interests are being served.
  - Bidders are organizations that submit proposals in response to this CFP. A Bidder selected to participate will become a Participant through the execution of a Participation Agreement contract with OGC. Most Bidders are expected to propose a combination of cost-sharing request and in-kind contribution (though solely in-kind contributions are also welcome).
  - Participants are OGC member organizations that generate empirical information through the definition of interfaces, implementation of prototype components, and documentation of all related findings and recommendations in Engineering Reports, Change Requests and other artifacts. They also make substantial in-kind contributions to an initiative. Participants assign business and technical representatives to represent their interests throughout Testbed execution.
  - Observers are individuals from OGC member organizations that have agreed to OGC intellectual property requirements in exchange for the privilege to access Testbed communications and intermediate work products. They may contribute recommendations and comments, but the IP Team has the authority to table any of these contributions if there is a risk of interfering with any primary Testbed activities.
  - The Innovation Program Team (IP Team) is the management team that will oversee and coordinate the initiative. This team is comprised of OGC staff, representatives from member organizations, and OGC consultants. The IP Team communicates with Participants and other stakeholders during Testbed execution, provides Testbed scope and schedule control, and assists stakeholders in understanding OGC policies and procedures.
  - The term Stakeholders is a generic label that encompasses all Testbed actors, including representatives of Sponsors, Participants, and Observers, as well as the IP Team. Testbed-wide email broadcasts will often be addressed to "Stakeholders".
  - Suppliers are organizations (not necessarily OGC members) that have offered to supply specialized resources such as capital or cloud credits. OGC's role is to assist in identifying an initial alignment of interests and performing introductions of potential consumers to these suppliers. Subsequent discussions would then take place directly between the parties.
- Non-OGC member organizations must become members in order to be selected as Participants.
  - Any individual wishing to gain access to Testbed intermediate work products in the restricted area of the Portal (or attend the private Testbed working meetings / telecons) must be a member-approved user of the OGC Portal system.
  - The reason for this restriction is that members are bound by the OGC bylaws, which offer protections to other members. Affording the same privileges to non-members could have a chilling effect on what are intended to be free and innovative Testbed discussions.
- Individuals from any OGC member organization that does not become a Testbed Sponsor or Participant may still (as a benefit of membership) quietly observe all Testbed activities by registering as a Testbed Observer.
- Prior Testbed participation is not a direct bid evaluation criterion.
  - However, prior participation could accelerate and deepen a Bidder's understanding of the information presented in the CFP.
- All else being equal, preference will be given to proposals that include a larger proportion of in-kind contribution.
  - All else being equal, preference will be given to proposed components that are certified OGC compliant.
  - All else being equal, a proposal addressing all of a deliverable's requirements will be favored over one addressing only a subset.
  - Each Bidder is at liberty to control its own proposal, of course. But if it does choose to propose only a subset for any particular deliverable, it might help if the Bidder prominently and unambiguously states precisely what subset of the deliverable requirements is being proposed.
- Sponsors will be given an opportunity to review selection results and offer advice, but ultimately the Participation Agreement (PA) contracts will be formed bilaterally between OGC and each Participant organization. No multilateral contracts will be formed.
  - Beyond this, there are no restrictions regarding how a Participant chooses to accomplish its deliverable obligations so long as those obligations are met in a timely manner (e.g., with or without contributions from third-party subcontractors).
- In general, only one organization will be selected to receive cost-share funding per deliverable, and that organization will become the Assigned Participant upon which other Participants will rely for delivery.
  - Optional in-kind contributions may be made provided that they do not disrupt delivery of the required, reliable contributions from Assigned Participants.
- A Bidder may propose against any or all Testbed threads.
  - Participants in past Testbeds have often been assigned to make only a single deliverable. At the other extreme, there is theoretically no upper limit on the number of deliverables a single organization could be assigned to make.
- The period of performance for PAs is expected to run through December.
- In general, the PAs will not require delivery of any component source code to OGC.
  - What is delivered instead is the behavior of the component in the TIEs, and the corresponding documentation of findings, recommendations, and technical artifacts in the related ER(s).
  - In some instances, a Sponsor might expressly require a component to be developed under open-source licensing, in which case the source code would become publicly accessible outside the Testbed as a by-product of implementation.
- Results of other recent OGC initiatives can be found in the OGC Public Engineering Report Repository.
- A Bidders Q&A Webinar will likely be conducted in January. The webinar will be open to the public, but prior registration will be required.
< end of appendix >
Appendix B: Technical Architecture
B.1. Introduction
This Annex B provides background information on the OGC baseline, describes the Testbed-14 architecture and thread-based organization, and identifies all requirements and corresponding work items. For general information on Testbed-14, including deadlines, funding requirements and opportunities, please refer to the Testbed-14 CFP Main Body.
Each thread aggregates a number of tasks. Each task specifies a number of requirements that need to be fulfilled by work items. The work items are funded by different sponsors.
An overview of all threads and assigned tasks is provided further below.
B.2. Testbed Baseline
B.2.1. Types of Deliverables
OGC Testbed-14 threads require several types of deliverables. The funding status of each deliverable ("funded" or "unfunded") is provided in the Testbed-14 CFP Main Body. Please make sure you understand your tasks and the provisioning requirements described in Annex A.
Documents
Engineering Reports (ERs) and Change Requests (CRs) will be prepared in accordance with OGC published templates. Engineering Reports will be delivered by posting on the OGC Portal Pending Documents list when complete and when the document has achieved a satisfactory level of consensus among interested participants, contributors and editors. Engineering Reports are the formal mechanism used to deliver results of the Innovation Program to sponsors and to the OGC Standards Program Standards Working Groups or Domain Working Groups for consideration.
Important: Participants delivering Engineering Reports must also deliver Change Requests that arise from the documented work.
Implementations
Services, Clients, Datasets and Tools will be provided by methods suitable to their type and stated requirements. For example, services and components (e.g., a WFS instance) are delivered by deployment of the service or component for use in the Testbed via an accessible URL. A Client software application or component may be used during the Testbed to exercise services and components to test and demonstrate interoperability; however, it is most often not delivered as a license for follow-on usage. Implementations of services, clients and data instances will be developed and deployed in all threads for integration and interoperability testing in support of the agreed-upon thread scenario(s) and technical architecture. The services, clients, and tools may be invoked for cross-thread scenarios in demonstration events.
B.2.2. OGC Reference Model
The OGC Reference Model (ORM) version 2.1 provides an architecture framework for the ongoing work of the OGC. Further, the ORM provides a framework for the OGC Standards Baseline. The OGC Standards Baseline consists of the member-approved Implementation/Abstract Specifications as well as a number of candidate specifications that are currently in progress.
The structure of the ORM is based on the Reference Model for Open Distributed Processing (RM-ODP), also identified as ISO/IEC 10746. This is a multi-dimensional approach well suited to describing complex information systems.
The ORM is a living document that is revised on a regular basis to continually and accurately reflect the ongoing work of the Consortium. We encourage respondents to this CFP to learn and understand the concepts that are presented in the ORM.
This Annex B refers to the RM-ODP approach and will provide information on some of the viewpoints, in particular the Enterprise Viewpoint, which is used here to provide the general characterization of work items in the context of the OGC Standards portfolio and standardization process, i.e. the enterprise perspective from an OGC insider.
The Information Viewpoint considers the information models and encodings that will make up the content of the services and exchanges to be extended or developed to support this Testbed. Here, we mainly refer to the OGC Standards Baseline, see section Standards Baseline.
The Computational Viewpoint is concerned with the functional decomposition of the system into a set of objects that interact at interfaces – enabling system distribution. It captures component and interface details without regard to distribution and describes an interaction framework including application objects, service support objects and infrastructure objects. The development of the computational viewpoint models is one of the first tasks of the Testbed, usually addressed at the kick-off meeting.
The Engineering Viewpoint is concerned with the infrastructure required to support system distribution. It focuses on the mechanisms and functions required to:
- support distributed interaction between objects in the system, and
- hide the complexities of those interactions.
It exposes the distributed nature of the system, describing the infrastructure, mechanisms and functions for object distribution, distribution transparency and constraints, bindings and interactions. The engineering viewpoint will be developed during the Testbed, usually in the form of TIEs, where Testbed participants define the communication infrastructure and assign elements from the computational viewpoint to physical machines used for demonstrating the Testbed results.
B.2.3. OGC Standards Baseline
The OGC Standards Baseline is the complete set of member-approved Abstract Specifications, Standards (including Profiles and Extensions), and Community Standards.
OGC standards are technical documents that detail interfaces or encodings. Software developers use these documents to build open interfaces and encodings into their products and services. These standards are the main "products" of the Open Geospatial Consortium and have been developed by the membership to address specific interoperability challenges. Ideally, when OGC standards are implemented in products or online services by two different software engineers working independently, the resulting components plug and play, that is, they work together without further debugging. OGC standards and supporting documents are available to the public at no cost. OGC Web Services (OWS) are OGC standards created for use in World Wide Web applications. For this Testbed, it is emphasized that all OGC members have access to the latest versions of all standards. Unless otherwise agreed with the Testbed architects, these shall be used in conjunction with, in particular, Engineering Reports resulting from previous Testbeds.
Any Schemas (xsd, xslt, etc.) that support an approved OGC standard can be found in the official OGC Schema Repository.
The OGC Testing Facility Web page provides online executable tests for some OGC standards. The facility helps organizations to better implement service interfaces, encodings and clients that adhere to OGC standards.
B.2.4. OGC Best Practices and Discussion Papers
OGC also maintains other documents relevant to Innovation Program initiatives, including Engineering Reports, Best Practice Documents, Discussion Papers, and White Papers.
B.2.5. Data
All participants are encouraged to provide data that can be used to implement the various scenarios that will be developed during the Testbed. A number of Testbed sponsors will provide data, but it might be necessary to complement these with additional data sets. Please provide detailed information if you plan to contribute data to this Testbed.
B.2.6. Services in the Cloud
Participants are encouraged to provide data or services hosted in the cloud. There is an overarching requirement to provide cloud-hosting capabilities to allow thread participants to move services and/or data to the cloud.
B.2.7. Dependencies between Threads
Dependencies between threads have been minimized. In some rare cases, one thread uses components from another thread. In these cases, the thread providing components used by another thread needs to provide the relevant components in time. No cross-thread dependencies exist where component developers from disjoint threads need to agree on a common interface. The interface is always defined within a single thread exclusively.
B.3. Testbed Threads
Testbed-14 is organized into a number of threads. Each thread combines a number of tasks that are further defined in the following chapters. The threads integrate both an architectural and a thematic view, which keeps related work items close together and removes dependencies across threads.
- Thread 1: Modeling, Portrayal, and Quality of Service (MoPoQ)
  - Information Registries & Semantic Enablement
  - Application Schema Modeling and Conversion
  - Portrayal
  - MapML
  - Quality of Service & Experience (QoSE)
  - Machine Learning, Deep Learning & Artificial Intelligence
  - LiDAR Point Cloud Data Handling
- Thread 2: Earth Observation & Clouds (EOC)
  - Swath Data and the Climate Forecast Convention
  - Exploitation Platform
- Thread 3: Next Generation Services (NextGen)
  - Next Generation OGC Web Services, Federated Clouds, Security & Workflows
  - Complex Feature Handling
  - CityGML and Augmented Reality
- Thread 4: Compliance (CITE)
  - Compliance and Interoperability Testing
B.4. Tasks
Each of the following sections provides a detailed description of a particular task.
Note: A few of the links below to Testbed-13 Engineering Reports will return "404 Not Found" errors, as some of these reports are still being prepared for publication.
B.5. Machine Learning, Deep Learning & Artificial Intelligence
One of the most interesting recent developments in Machine Learning (ML) has been in Deep Learning, a specialization of Artificial Neural Networks.
Deep Learning (DL) can be defined as a neural network approach that involves a large number of parameters and layers in one of four fundamental network architectures: unsupervised pre-trained networks, convolutional neural networks, recurrent neural networks and recursive neural networks. Automatic feature extraction is one of the areas where DL has shown great promise.
OGC Web services play a pivotal role in many geospatial workflows. Acting as facades for any type of geospatial data store and processing system, they can be used to support Machine Learning, Deep Learning & Artificial Intelligence systems. The goal of this task is to develop a holistic understanding and to derive best practices for integrating Machine Learning, Deep Learning, and Artificial Intelligence tools and principles into OGC Web service contexts. Even though an image analysis workflow is used here to describe interactions between components and interface requirements, it is emphasized that this task is not constrained to image analysis but should work with all types of data, including vector information.
Essentially, the following two questions shall be answered:
- How to best support Machine Learning and Artificial Intelligence using OGC Web Services?
- How to best publish inputs to and outputs from Machine Learning and Artificial Intelligence using OGC Web Services?
To some extent, these questions can build on previous work that integrated machine learning tools behind OGC Web service interfaces, e.g., by revising the OGC Web Image Classification Service Discussion Paper (OGC 05-017) to support image classification using Machine Learning and Artificial Intelligence.
Previous Work
OGC previously prototyped a Web Image Classification Service (WICS) interface (OGC 05-017) that defines these operations:
- GetClassification
- TrainClassifier
- DescribeClassifier
Using these operations, the WICS provides three types of classifications:
- Unsupervised classification: getCapabilities, describeClassifier, and getClassification.
- Supervised classification using server-archived trained classifiers: getCapabilities, describeClassifier, and getClassification.
- Supervised classification using classifiers trained on client-supplied training data.
The previous work was not done with a Deep Learning classifier and focused on gridded imagery data, but this may not affect the interface operations, which are independent of algorithm type. The WICS operations directly match the functions shown in Figure 3 of the CFP.
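As a rough illustration of how a client might invoke such an interface, the following Python sketch assembles a hypothetical GetClassification request in OGC KVP style. The parameter names, version number, and service URL are assumptions for illustration only; the authoritative operation signatures are those in OGC 05-017.

```python
from urllib.parse import urlencode

def build_get_classification_url(base_url, classifier_id, coverage_id, bbox,
                                 crs="EPSG:4326"):
    """Assemble a hypothetical WICS-style KVP request URL asking the service
    to classify a subset of a coverage with a server-archived classifier.
    Parameter names follow the general OGC KVP pattern, not the exact spec."""
    params = {
        "SERVICE": "WICS",
        "VERSION": "0.0.1",          # illustrative version number
        "REQUEST": "GetClassification",
        "CLASSIFIER": classifier_id,  # server-archived, trained classifier
        "COVERAGE": coverage_id,      # source imagery to classify
        "BBOX": ",".join(str(v) for v in bbox),
        "CRS": crs,
        "FORMAT": "image/tiff",
    }
    return base_url + "?" + urlencode(params)

url = build_get_classification_url(
    "https://example.org/wics", "landcover-v1", "scene-42",
    (-10.0, 35.0, 5.0, 45.0))
print(url)
```

A TrainClassifier request would follow the same pattern, with training data referenced by an additional parameter.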
Implementation of Components and Services
On the implementation side, this task shall support an image analysis and processing scenario. Imagine satellite imagery stored in an image archive that is available to human image analysts and Machine Learning tools. An image analyst identifies a set of feature types on the images and makes this data available to the data store. These annotated data are available to both consumers and Machine Learning tools, which use that data for training purposes. Depending on the type of available imagery data, Testbed-14 may differentiate two scenarios. In the first scenario, the image analyst uses prior- and post-event data to identify, for example, areas destroyed by a natural disaster such as a hurricane. In the second scenario, the image analyst identifies feature types without taking any change detection into account. The identified features could be compliant with Geography Markup Language (GML) application schemas such as the Multinational Geospatial Co-production Program (MGCP) Application Schema or the NSG Application Schema (NAS).
The Image Analyst queries image data (or is informed about new data by push notifications), identifies specific feature types, and provides the results of the analysis back to the system. These data include the identified feature types, the associated snippets (set of pixels in an image), as well as additional metadata as defined in Testbed-14.
A Machine Learning tool uses the Image Analysts' results as input training data to improve its learning algorithm. Rather than concentrating on the quality of the feature identification results, Testbed-14 shall explore the best workflows, service interaction patterns, and application schemata to express both analysis results and associated metadata such as trust levels and confidence, associated body of evidence, associated imagery, and optimized learning capabilities. The general workflow is illustrated in the figure below. The "Learning" box in the lower right corner illustrates the iterative nature of the workflow, which uses results from the "Machine Learning and Artificial Intelligence" box to further train the model (e.g., by using external processes or data).
All data shall be made available via OGC data access services (e.g., WFS, WCS, SOS) as identified in responses to this Call for Participation. Additional catalog services might become part of the overall scenarios if necessary. This Call for Participation does not specify details of the involved service technology (i.e., participants can identify service types), but provides suggestions for possible implementations in the figure below.
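The annotation hand-off described in the scenario above can be sketched as a simple record structure: the analyst's identified feature type, the pixel snippet it came from, and the metadata the Testbed is asked to model. All field names are illustrative assumptions, not a Testbed-14 schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class FeatureAnnotation:
    """Hypothetical analyst-produced annotation record (names assumed)."""
    feature_type: str        # e.g. an MGCP or NAS feature type code
    image_id: str            # link back to the source image in the repository
    snippet_bbox: tuple      # pixel window (xmin, ymin, xmax, ymax)
    analyst: str
    confidence: float        # analyst-assigned trust level, 0..1
    evidence: list = field(default_factory=list)  # associated body of evidence

def to_training_sample(annotation: FeatureAnnotation) -> dict:
    """Convert an annotation into a training sample for the ML tool."""
    sample = asdict(annotation)
    sample["label"] = sample.pop("feature_type")
    return sample

ann = FeatureAnnotation("AL015", "img-001", (120, 80, 260, 190), "analyst-7", 0.9)
print(to_training_sample(ann)["label"])  # AL015
```

In the Testbed such records would be exchanged through the data access services named above rather than in-process.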
Concept Study and Engineering Report
The Testbed-14 Machine Learning Engineering Report shall describe all experiences and lessons learned from the Machine Learning, Deep Learning & Artificial Intelligence implementations. In addition, it shall include the results of a concept study that provides a truly holistic approach to the integration of machine learning and OGC Web services. This holistic approach may require some experimentation and shall include the following items:
- Receive and process metadata about the applied processes, quality parameters, links to original images and/or snippets, etc. Gal's thesis on Uncertainty in Deep Learning provides solid background on uncertainty and quality aspects.
- Describe training data handling.
- Integrate machine learning tools behind OGC Web service interfaces.
- Provide performance information about the AI tool.
- Describe how to best support Machine Learning and Artificial Intelligence using OGC Web Services.
- Describe how to best publish outputs from Machine Learning and Artificial Intelligence using OGC Web Services.
- Reflect on Principles for Algorithmic Transparency and Accountability, as there is growing evidence that some algorithms and analytics can be opaque, making it impossible to determine when their outputs may be biased or erroneous.
- Make recommendations for new OGC standards or extensions to OGC standards to support AI/ML geospatial algorithms, e.g., Simple Features for Big Data as proposed at the OGC Big Data DWG, September 2017.
- Ideally, make recommendations about the use of tiles or grid structures for the analysis, including DGGS, that help integrate data from various sources.
In addition, the concept study shall analyze a geospatial feature extraction workflow that is slightly more complex than what can be realized in the implementation scenario described above. For simplicity, the process is outlined here in the form of Input-Process-Output stages. Issues that affect the Input stage include:
- Conversion of images from their original format into a format suitable for feature extraction and classification. This is an area where WPS could play a role.
- Filtering of images to remove those of very low quality (e.g., those mostly covered by clouds) that might skew the ML model. This is an area where a WCPS-enabled WPS could play a role.
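The filtering step above can be sketched in a few lines, assuming per-image cloud-fraction metadata is available. In the Testbed this logic would sit behind a WPS (or WCPS-enabled WPS); the metadata field names here are assumptions for illustration.

```python
def filter_low_quality(images, max_cloud_fraction=0.3):
    """Keep only images usable for feature extraction; images without a
    known cloud fraction are conservatively rejected."""
    return [img for img in images
            if img.get("cloud_fraction", 1.0) <= max_cloud_fraction]

catalog = [
    {"id": "scene-1", "cloud_fraction": 0.05},
    {"id": "scene-2", "cloud_fraction": 0.85},  # mostly cloud-covered: rejected
    {"id": "scene-3"},                          # unknown quality: rejected
]
usable = filter_low_quality(catalog)
print([img["id"] for img in usable])  # ['scene-1']
```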
Some of the issues that affect the Process stage of a geospatial feature extraction workflow include:
- Organizations that are innovating on the algorithms side do not always have access to the vast amounts of data required to develop a fully-fledged production-level ML model. One option is to apply the "move-the-algorithm-to-the-data" approach that was demonstrated in Testbed-12 and Testbed-13. This is an area where WPS could play a role. A possible outcome could be a WPS Profile for developing DL models for geospatial feature extraction. Of course, the "move-the-data-to-the-algorithm" approach could also be applied. The benefit would be faster innovation by enabling organizations to implement and try out their ML algorithms on infrastructure that can handle large, realistic datasets. Note the relevance to the NSG Data Analytic Architecture Service (NDAAS). The benefit to image providers would be increased use of their imagery products.
- A well-trained artificial neural network has a parameter vector representing the weights and biases of the model being applied. These weights and biases are re-adjusted to increase the significance of some bits of information and minimize the significance of others. This enables the model to learn which features are tied to which outcomes. In practice, however, other sources of information also play a role in the classification of features. The benefit of an integrative approach would be greater efficiency and effectiveness of knowledge sharing. The benefit to imagery and feature data providers would be increased use of their imagery exploitation services.
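The parameter-vector re-adjustment described above can be illustrated with a toy single-neuron example. This is purely illustrative (real DL models have millions of parameters and non-linear layers), but it shows how training increases the weight of an informative input while an uninformative input's weight stays unchanged.

```python
def sgd_step(weights, bias, x, target, lr=0.1):
    """One stochastic-gradient step for a linear neuron with squared error."""
    prediction = sum(w * xi for w, xi in zip(weights, x)) + bias
    error = prediction - target
    # Re-adjust each weight in proportion to its input's contribution.
    new_weights = [w - lr * error * xi for w, xi in zip(weights, x)]
    new_bias = bias - lr * error
    return new_weights, new_bias

w, b = [0.0, 0.0], 0.0
for _ in range(50):  # repeated exposure to one training example
    w, b = sgd_step(w, b, x=[1.0, 0.0], target=1.0)
print(round(w[0], 2), round(w[1], 2))  # prints: 0.5 0.0
```

The first weight (tied to the active input) converges toward a non-zero value, while the second (whose input is always zero) carries no significance and stays at zero.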
Some of the issues that affect the Output stage of a geospatial feature extraction workflow include:
- Provision of metadata in a format that adequately describes the provenance of the model and its configuration parameters. Note that configuration of neural networks is a key part of developing DL models. A possible outcome could be an ISO 19115 profile or extension (perhaps building on the Data Quality work of Testbed-12).
- Export of outputs in a form that allows the outputs to be re-ingested as inputs for further learning, in a closed feedback loop. A possible outcome could be profiles for WPS, WFS and OWS Context. Note that this concept is consistent with the NGA 2020 analysis technology plan (page 6), which states that "the role of human analysts will evolve over the next six years as computers assume an active (vice today's mainly passive) role in GEOINT exploitation: human analysts will spend more time interrogating and validating the machine's work, exploiting the gaps and applying broader contextual awareness to the algorithmic outputs". The concept is also related to Human-Agent Collectives. Another important aspect to consider is the use of controlled vocabularies to allow consistent usage of results from Machine Learning tools. Options here include, for example, the Multinational Geospatial Co-production Program (MGCP) or the NSG Core Vocabulary.
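The closed feedback loop in the Output stage can be sketched as a function that turns validated, high-confidence model detections back into training samples with provenance attached. The field names and the confidence threshold are illustrative assumptions, not a Testbed-14 specification.

```python
def reingest(model_outputs, min_confidence=0.8):
    """Turn analyst-validated, high-confidence detections into new training
    samples, carrying provenance so further learning remains traceable."""
    samples = []
    for det in model_outputs:
        if det["validated"] and det["confidence"] >= min_confidence:
            samples.append({
                "label": det["feature_type"],
                "image_id": det["image_id"],
                "provenance": {"source": "model+analyst",
                               "model": det["model_id"]},
            })
    return samples

outputs = [
    {"feature_type": "bridge", "image_id": "img-9", "model_id": "dl-v2",
     "confidence": 0.93, "validated": True},
    {"feature_type": "road", "image_id": "img-9", "model_id": "dl-v2",
     "confidence": 0.55, "validated": True},  # too uncertain to re-ingest
]
print(len(reingest(outputs)))  # 1
```

In a Testbed setting, the exported samples would be published via WFS (or packaged in OWS Context) so the ML tool can retrieve them as inputs for the next training cycle.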
B.5.1. Work Items Overview
The following figure illustrates the Machine Learning, Deep Learning & Artificial Intelligence work items.
B.5.2. Deliverables
To reiterate, the aim of this task is to develop a holistic understanding and derive best practices for integrating Machine Learning, Deep Learning, and Artificial Intelligence tools and practices into OGC Web services. This is to be achieved by focusing on two key questions: how to best support Machine Learning and Artificial Intelligence using OGC Web Services, and how to best publish outputs from Machine Learning and Artificial Intelligence using OGC Web Services. The suite of deliverables should interact with each other to demonstrate this.
The following list identifies all deliverables that are part of this task. Additional requirements may be stated above. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.
- D030 Machine Learning Engineering Report - covers the results from the implementation scenarios and the detailed concept study described above. Additionally, it should examine the implementations of Next Generation OGC Web Services and how these might need to be addressed.
- D132 Image Repository and Feature Store - makes all imagery and additional feature data available to the Knowledge Base and AI System. Should support push of images into the knowledge base.
- D133 Client to Knowledge Base - visualization and exploration client that allows interacting with the knowledge base and image repository. Allows exploration of all imagery, processing results, quality information, and machine learning validation results.
- D141 Machine Learning Validation Client - client to the Machine Learning System and Knowledge Base that will rate the correctness of the machine learning system output. Makes that data available in the knowledge base and as training input to the machine learning system.
- D164 Machine Learning Knowledge Base - knowledge base that contains all imagery, imagery snippets, analysis results, etc. Exposes data access and transaction interfaces.
- D165 Machine Learning System - the actual machine learning system; exposes a WPS or similar interface. The system will classify output using the specified NAS/profile to suit the use case.
B.6. Information Registries & Semantic Enablement
In the last decade, a technological framework known as SWIM (System Wide Information Management) has been developed by the FAA, EUROCONTROL, and more recently in the Asia-Pacific Region as a viable model for Air Traffic Management (ATM) applications. "SWIM is a Federal Aviation Administration (FAA) advanced technology program designed to facilitate greater sharing of Air Traffic Management (ATM) system information, such as airport operational status, weather information, flight data, status of special use airspace, and National Airspace System (NAS) restrictions. SWIM will support current and future NAS programs by providing a flexible and secure information management architecture for sharing NAS information. SWIM will use commercial off-the-shelf hardware and software to support a Service Oriented Architecture (SOA) that will facilitate the addition of new systems and data exchanges and increase common situational awareness." (Wikipedia)
SWIM addresses the communications and interoperability requirements of highly-distributed, loosely-coupled, and platform-independent components by consistently applying the principles of Service-Oriented Architecture (SOA). The major goal of SWIM is to provide information to members of the civil aviation community. This information includes (but is not limited to) aeronautical, meteorological, and flight information, and it is rendered in a formal language such as XML.
The work in Testbed-14 is primarily motivated by the following aspects:
- Lack of information discovery capabilities. Although SWIM supports service discovery by providing a searchable catalog (e.g., a service registry) of all SWIM-enabled services, information provided by the services can only be discovered in indirect and often inefficient ways. To establish whether a specific type of information is provided by SWIM, a user generally attempts to deduce what the "right" service is for the given type of information and then has to access and examine that service's metadata in order to determine whether the service does in fact provide the needed information. In other words, there is no information discovery; the information can only be discovered by extension of a service discovery.
- Deficiency of semantic representation of service meta-information. A description of the information provided by a service is usually limited to documents that focus on defining how the service input/output can be constructed and manipulated (e.g., XML schemas, WSDL files) and fail to capture enough semantics. Conversely, free-text human-consumable documents such as the FAA's Web Service Description Document (WSDD) support a sufficient amount of semantics but cannot be accessed and used by a software agent. Neither formal-language nor human-readable documents are suitable for automated information discovery, and both provide very limited support for semantic interoperability.
These aspects are the subject of the following simplified but realistic use case:
-
Assume a registry of services already exists.
-
Each service advertises its own features. These are presented as GML Application Schemas.
-
A client (user-driven) interrogates the registry for specific information, not always using the semantics of the features registered in the registry. This interrogation happens several times, since the user will typically have to inspect various services before finding the information needed. For example, when searching for an “alternative destination airport”, the user will use a service that provides some form of a flight plan.
-
Once the right service is located, the user needs to study several service artifacts, for example, the Web Service Description Document (WSDD), the XML schema, and WSDL to determine whether the information about “alternative destination airport” is used and how it can be retrieved. During this process, the user may need to consult subject matter experts to understand the data definitions contained in the XML schema.
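The gap between service discovery and information discovery described above can be illustrated with a minimal sketch. All service names and information items below are hypothetical examples, not actual SWIM services: a conventional registry is searchable only by service metadata, while an information registry additionally indexes the information items each service provides, so a query such as “alternative destination airport” resolves directly to the offering service.

```python
# Conceptual sketch of service discovery vs. information discovery.
# All service names and information items are hypothetical examples.

# A conventional service registry: searchable only by service metadata.
service_registry = {
    "FlightPlanService": "Provides flight plan documents",
    "WeatherService": "Provides meteorological products",
}

# An information registry additionally indexes the information items
# that each service actually provides.
information_registry = {
    "alternative destination airport": ["FlightPlanService"],
    "runway visual range": ["WeatherService"],
}

def discover(information_item):
    """Return the services that provide a given information item."""
    return information_registry.get(information_item, [])

print(discover("alternative destination airport"))  # ['FlightPlanService']
```

With such an index, the iterative service-by-service inspection described in the use case collapses into a single lookup.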
Testbed-14 shall address the following requirements:
-
Testbed-14 shall provide recommendations for development of an “information registry”, that is, a searchable catalog of information collectively provided by services in the context of a specific service inventory, such as FAA’s SWIM. The results shall be captured in the Information Registry Engineering Report (D001).
-
Testbed-14 shall provide recommendations for making existing data models used in aviation industry (e.g., AIXM, WXXM, and FIXM) “semantically enabled”. The data models shall be enabled to present their contents in formats suitable for adaptation by Semantic Web technologies, including considerations for role and applicability of ontologies and linked data approaches to complex information realms such as SWIM. The work shall provide a clear definition of next steps. The results shall be captured in the Semantically Enabled Aviation Data Models Engineering Report (D002).
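As a rough illustration of what “semantically enabled” could mean, the sketch below (standard library only) represents an aviation data model concept as linked-data triples using SKOS labels. The AIXM namespace URI shown is a hypothetical placeholder, not the official vocabulary; a real solution would derive such triples from the actual AIXM/WXXM/FIXM schemas.

```python
# Minimal sketch of exposing an aviation data model concept as linked
# data triples. The AIXM namespace below is a hypothetical placeholder.
AIXM = "https://example.org/aixm#"      # hypothetical namespace
SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    (AIXM + "AirportHeliport", SKOS + "prefLabel", "Airport / Heliport"),
    (AIXM + "AirportHeliport", SKOS + "definition",
     "A defined area used for arrival, departure and movement of aircraft."),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, literal) tuples as N-Triples lines."""
    return "\n".join(
        '<%s> <%s> "%s" .' % (s, p, o.replace('"', '\\"'))
        for s, p, o in triples
    )

print(to_ntriples(triples))
```

Once concepts are published this way, Semantic Web tooling (SPARQL, reasoners) can operate on them directly instead of requiring human interpretation of XML schemas.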
B.6.1. Work Items Overview
The following figure illustrates the Information Registries & Semantic Enablement work items.
B.6.2. Deliverables
The following list identifies all deliverables that are part of this task. Additional requirements are stated above. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.
-
D001 Information Registry Engineering Report - Engineering Report capturing all experiences and results of the information registry.
-
D002 Semantically Enabled Aviation Data Models Engineering Report - Engineering Report capturing all experiences and results of the semantic enabling of aviation industry data models. The report shall cover at least AIXM, WXXM, and FIXM.
B.7. Next Generation OGC Web Services, Federated Clouds, Security & Workflows
This task combines a number of OGC technologies. It addresses both new capabilities at the service level as well as complex interaction patterns between services. In addition to the components described in this chapter, it makes use of WPS instances developed as part of the Exploitation Platform task.
The task consists of four main topics: WFS3.0, Federated Clouds, Security, and Workflows. All four make use of components provided by the Exploitation Platform task. The following figure provides an overview.
-
WFS3.0: Next Generation OGC RESTful Web service interfaces: Implementation of secured WFS3.0 service instances. Goal is to experiment with the new WFS3.0 specification and to add security mechanisms based on OAuth2.0.
-
Federated Clouds: Integration of services that are hosted in different security environments, as is often the case when services exist in different clouds. Using secured WFS3.0 instances from above and WPS instances from the Exploitation Platform work item, Federated Clouds shall explore mediation mechanisms to handle the different secured environments.
-
Security: Goal is to better understand security mechanisms in modern Spatial Data Infrastructures. This part combines all security aspects in this task.
-
Workflows: Creation and execution of secured workflows within the geospatial domain. Focus is on Business Process Model and Notation (BPMN)-powered workflows that handle even complex security settings, including new, previously unregistered processes added to transactional WPS.
The following diagram provides an overview of all work items of this task. Elements from this diagram are used repeatedly further below to emphasize the various foci of the different topics.
B.7.1. Next Generation OGC Web Services, Federated Clouds, Security & Workflows Topics
Next Generation OGC RESTful Web service interfaces
OGC Web Services were developed to implement a Service-Oriented Architecture (SOA) pattern, and for this they work well. However, a new architecture pattern has seen wide adoption: Resource-Oriented Architectures have become popular, particularly in browser-facing environments. This architecture pattern focuses on the exchange of resources. Services, where they exist at all, perform their job in the background.
Moving OGC services onto a resource-oriented architecture requires more than just adding RESTful operations to the existing services. The whole concept of a service must be re-imagined. Participants in this task shall re-imagine OGC services in preparation for the production of a new generation of OGC Service standards.
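As a rough illustration of this shift, the operation-centric WFS interface can be recast as a set of addressable resources. The paths below follow the general shape of the draft WFS3.0 API but are simplified; the mapping table is an illustrative sketch, not a normative definition.

```python
# Sketch: recasting operation-centric WFS 2.0 requests as resources.
# Paths follow the general shape of the draft WFS3.0 API (simplified).
resource_mapping = {
    "GetCapabilities": ("GET", "/"),  # landing page + API description
    "DescribeFeatureType": ("GET", "/collections/{collectionId}"),
    "GetFeature": ("GET", "/collections/{collectionId}/items"),
    "GetFeatureById": ("GET", "/collections/{collectionId}/items/{featureId}"),
}

def to_request(operation, **params):
    """Build the resource-oriented equivalent of a classic WFS operation."""
    method, template = resource_mapping[operation]
    return method, template.format(**params)

print(to_request("GetFeature", collectionId="airports"))
# -> ('GET', '/collections/airports/items')
```

The key difference is that the client navigates a web of linked resources rather than composing operation requests against a single service endpoint.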
Participants in this task shall develop a prototype OGC standard for OGC Resource-Oriented Services. This standard shall be based on the WFS 2.0 capabilities, but is free to implement those capabilities in the most appropriate manner. Participants shall use the REST Users Guide and the REST Architecture Engineering Report produced by Testbed-12. If there is a reason to deviate from these documents (in particular the REST Users Guide), then the editor of the WFS3.0 ER (D021), in cooperation with the WFS3.0 implementers, shall revise the Testbed-12 documents.
In addition, editors and implementers shall survey existing Resource-Oriented APIs such as those found at http://www.pingable.org/the-top-15-web-apis-for-your-site.
The prototype standard shall be based on the most recent WFS3.0 definition that is available on GitHub, maintained by the OGC WFS/FES SWG. It shall be validated through two or more independent service implementations. All service implementations shall support HTTP over TLS (HTTPS) and HTTP Basic Authentication. In addition, all service implementations shall participate in the discussions about mutual authentication and delegated coarse-grained access control with OAuth2 to explore issues and propose approaches to address those issues. An OAuth2 authorization server is available for Testbed-14.
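The required HTTP Basic Authentication over TLS amounts to attaching a base64-encoded `user:password` credential to each request, as the following sketch shows. The endpoint URL and credentials are hypothetical; the request is built but not sent.

```python
# Sketch of how a client supplies HTTP Basic Authentication credentials
# to a WFS3.0 endpoint served over TLS. The endpoint URL and the
# credentials below are hypothetical; the request is built, not sent.
import base64
import urllib.request

url = "https://example.org/wfs3/collections"   # hypothetical service
user, password = "tb14user", "secret"

token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")
request = urllib.request.Request(url, headers={
    "Authorization": "Basic " + token,
    "Accept": "application/json",
})

print(request.get_header("Authorization"))
```

Because the credential is only obfuscated, not encrypted, Basic Authentication is acceptable only over TLS, which is why the CFP mandates HTTPS for all implementations.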
Testbed-14 does not require fine-grained policy-based access control, but appreciates any work done in this regard, in particular on the integration of (Geo-)XACML with OAuth2.
The following figure illustrates the work items included in the Next Generation OGC RESTful Web service interfaces topic.
All WFS3.0 service instances shall be accompanied by a simplified client (a cURL-based client is sufficient). In addition, the service instances will be accessed by a dedicated client application which provides a GUI for user interactions and visualization capabilities (D142).
All WFS3.0 results need to be captured in the WFS3.0 Engineering Report (D021); appropriate Change Requests shall be submitted. This includes a summary of and references to the more detailed security discussion that is primarily captured in the Security Engineering Report (D024).
The different WFS3.0 prototypes shown in the figure above implement different features:
-
At least one of these services shall include a façade, providing resource-oriented access to an existing WFS 2.0 service (D140).
-
At least one other service shall implement a number of Best Practices as published by the joint OGC/W3C Spatial Data on the Web Working Group (D113). The service instance shall be developed as Free and Open Source Software (FOSS) and should be based on one of the leading WFS Open Source implementations (e.g., GeoServer). Whereas Best Practices 1 and 2 are mandatory, the service should ideally support all of the following best practices:
-
Best Practice 1: Use globally unique persistent HTTP URIs for Spatial Things: every resource has its own persistent URL.
-
Best Practice 2: Make your spatial data indexable by search engines. The implementation should take care that, at a minimum, each resource has an HTML + embedded schema.org landing page. The Testbed-14 Engineering Report shall provide information on the role of sitemaps for indexability and provide general advice on how to improve indexing of service offerings (and offered spatial data). Furthermore, the prototype implementation should support the following best practices:
-
Best Practice 3: Link resources together to create the Web of data: depends on BP 1 and BP 2.
-
Best Practice 5: Provide geometries on the Web in a usable way: at least GML and GeoJSON.
-
Best Practice 7: Choose coordinate reference systems to suit your user’s applications: at least WGS84.
-
Best Practice 12: Expose spatial data through 'convenience APIs'. Though WFS 3.0 is not necessarily a “convenience API”, the Testbed-14 implementation shall support:
-
Use REST and document the API according to OpenAPI spec
-
Use content negotiation as described here: https://www.w3.org/TR/dwbp/#Conneg
-
Return data in chunks fit for use in Web applications and as useful sets of information.
-
Simplify geometries
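The content negotiation called for above can be sketched on the server side as a simple match of the client’s Accept header against the representations the service offers; the media types below are the ones named in the best practices, and the matching logic is deliberately simplified (quality factors are ignored).

```python
# Sketch of server-side content negotiation for a WFS3.0 items resource.
# Quality factors (;q=) are ignored for brevity; list order expresses
# the server's preference.
OFFERED = ["application/geo+json", "text/html", "application/gml+xml"]

def negotiate(accept_header, offered=OFFERED):
    """Pick the first acceptable media type, or None (HTTP 406)."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type == "*/*":
            return offered[0]
        if media_type in offered:
            return media_type
    return None

print(negotiate("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"))  # text/html
```

A browser therefore receives the indexable HTML landing page, while a GIS client asking for `application/geo+json` receives the machine-readable representation of the same resource.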
All OGC standards require a description of the conformance test approach and an Abstract Test Suite. The existing WFS conformance test approach is unlikely to be sufficient for the prototype services. Therefore, the Participants of this task shall support the Compliance task team and work in cooperation with the OGC CITE development team to document an appropriate conformance test approach for Resource Oriented services in the conformance section of their standard (D021).
While this task will concentrate on the development and implementation of the next generation of a Web Feature Service, the effort shall include an analysis of the implementation recommendations and their suitability for use in the next generation of Web Map Service and Web Coverage Service. The ability to form a core set of implementation requirements that will be supported by WFS, WCS and WMS is critical to the future success of these redesigned services (D021).
Federated Clouds
A big advantage of Cloud architecture is that it allows processing to go to where the data is, combined with the ability to easily scale on demand. This works fine as long as the data and the processing reside in the same Cloud: "The Cloud Computing paradigm advocates centralized control over resources in interconnected data centers under the administration of a single service provider. This approach offers economic benefits due to supply-side economies of scale, reduced variance of resource utilization by demand aggregation, as well as reduced IT management cost per user due to multi-tenancy architecture." (Kurze et al., 2011). However, industry is currently generating a proliferation of Clouds, ranging from global-scale commercial Clouds such as AWS or Azure, through private Clouds, down to personal Clouds that can run on a smartphone. The challenge is to combine the advantages of Cloud computing while lowering the adverse effects of vendor or software lock-in.
A possible solution is described as Cloud Federations or Federated Clouds. "Cloud federation comprises services from different providers aggregated in a single pool supporting three basic interoperability features: resource migration, resource redundancy and combination of complementary resources resp. services." (Kurze et al, 2011) The goal is to achieve the same platform transparency on the Cloud federation that has been achieved in single-vendor Cloud offerings. Testbed-14 focuses on the third aspect, the combination of resources and services.
Participants in this task shall build on the Testbed-13 Earth Observation Clouds thread to further advance Cloud federation. Testbed-13 results are documented in three Engineering Reports:
-
The OGC Testbed-13: Application Package ER defines a data model and serialization for Application Packages that can be used to efficiently deploy any type of containerized application onto a Cloud environment.
-
The actual deployment and execution of these Application Packages is defined in the OGC Testbed-13: Application Deployment and Execution Service ER
-
The OGC Testbed-13: Cloud ER describes the use of the OGC Web Processing Service (WPS) for cloud architectures. The report addresses the lack of interoperability and portability of cloud computing architectures, which causes difficulty in managing the efficient use of virtual infrastructure, such as in cloud migration, storage transference, quantifying resource metrics, and unified billing and invoicing.
In Testbed-14, the work shall focus on the following issues:
-
Multi-level secure environments (federation across security levels and compartments).
-
Mediate security controls across US and non-US owned clouds.
-
Maintain confidentiality and integrity of information as it crosses Cloud boundaries.
-
Integration of Clouds on DDIL networks.
For each issue, participants shall provide a detailed definition of the problem, propose a solution, and demonstrate that solution using at least two independently developed services.
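One building block for the confidentiality-and-integrity issue above is illustrated in the following sketch: a payload crossing a Cloud boundary carries an HMAC tag computed with a key shared between the two federated environments, so the receiving side can detect tampering. The key and payload are hypothetical; a real federation would combine this with encryption and proper key management.

```python
# Sketch: protecting the integrity of a payload as it crosses a Cloud
# boundary, using an HMAC key shared between the federated environments.
# The key and payload below are hypothetical examples.
import hashlib
import hmac

shared_key = b"federation-demo-key"      # hypothetical pre-shared key

def seal(payload: bytes, key: bytes):
    """Attach an integrity tag before the payload leaves the cloud."""
    return payload, hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str, key: bytes) -> bool:
    """Check the tag on the receiving side of the boundary."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal(b"feature collection", shared_key)
print(verify(payload, tag, shared_key))        # True
print(verify(b"tampered", tag, shared_key))    # False
```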
Participants should also be aware that the U.S. National Institute for Standards (NIST) and the IEEE have teamed together to develop the “Standard for Intercloud Interoperability and Federation” (SIIF) (see http://standards.ieee.org/news/2017/intercloud_interoperability_and_federation.html). The Charter for the SIIF is here http://sites.ieee.org/sagroups-2302/files/2017/07/P2302_PAR_Detail.pdf Participants in this thread shall monitor, and where possible participate in the NIST/IEEE effort representing the needs of the Geospatial Industry.
The federated cloud work embedded in this task makes use of components from the WFS3.0 and Security topics as well as services provided by the Exploitation Platform task. The following diagram illustrates all work items involved.
The dedicated client application (D143) interacts with a WPS service that acts as a workflow execution service (D149). The executed workflows include services that are hosted in two different security environments. The workflow execution service is supported by another WPS that acts as a Security Mediation Service (D147).
All Federated Clouds results need to be captured in the Federated Clouds Engineering Report (D023). This includes a summary of and references to the more detailed security discussion that is primarily captured in the Security Engineering Report (D024).
Security
Testbed-13 demonstrated the implementation of Identification and Authentication (I&A) and simple access controls by OGC Web Services. Results are documented in the OGC Testbed-13: Security ER. Testbed-14 shall continue these efforts with focus on two topics: I&A Mediation and Security for Next Generation OGC RESTful Web service interfaces. Given that JSON and JSON Schema will play an important role in this context, Testbed participants shall consider the NSA document Security Guidance for the Use of JavaScript Object Notation (JSON) and JSON Schema as an important source of information. The following diagram illustrates all work items involved.
The I&A Mediation Service (SecMed. Server (D147)) provides propagation of trusted identities across security domains with different Identification and Authentication protocols. Developers of the I&A Mediation Service shall demonstrate the ability to mediate, for example, between:
-
Basic authentication
-
OAuth2.0
-
HTTP with TLS
-
HTTP with TLS and mutual authentication
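The core of such mediation can be sketched as a header rewrite: a request authenticated with HTTP Basic in one security domain is forwarded with an OAuth2 Bearer token into the other. Token issuance is mocked below; a real mediation service would obtain the token from the OAuth2 authorization server after validating the identity.

```python
# Conceptual sketch of I&A mediation between security domains: a request
# authenticated with HTTP Basic is forwarded with an OAuth2 Bearer token.
# Token issuance is mocked; a real mediation service would contact the
# OAuth2 authorization server.
import base64

def mock_token_for(user):
    """Stand-in for a call to the OAuth2 authorization server."""
    return "token-for-" + user

def mediate(headers):
    """Replace a Basic Authorization header with a Bearer token."""
    scheme, _, credentials = headers["Authorization"].partition(" ")
    if scheme != "Basic":
        return headers          # nothing to mediate
    user = base64.b64decode(credentials).decode("ascii").split(":")[0]
    out = dict(headers)
    out["Authorization"] = "Bearer " + mock_token_for(user)
    return out

incoming = {"Authorization": "Basic " + base64.b64encode(b"alice:pw").decode()}
print(mediate(incoming)["Authorization"])   # Bearer token-for-alice
```

The sketch covers only credential translation; mediating TLS and mutual-authentication requirements additionally involves connection-level configuration that cannot be expressed as a header rewrite.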
For that purpose, the I&A Mediation Service shall interact with the WFS3.0 services and the Exploitation Platform WPS 2.0 instances. The WPS 2.0 instances are part of a Shibboleth environment, whereas the WFS3.0 services are part of an OAuth2/OpenID Connect environment. The following diagram illustrates the client/server interactions.
The service implementer may decide to set up any number of additional services and use these instead of the WPS instances D100 and D101. In addition, the service implementer shall provide a simple client (a cURL-based client is sufficient) that allows exploring the I&A Mediation aspects.
As described in the Workflows section further below, this architecture shall be extended so that security settings can be managed as part of BPMN-powered workflows.
In this case, the dedicated client (D143) uses a transactional WPS (D149) that allows registering new processes defined using BPMN. The WPS is supported by a BPMN Engine (D148). Further details are provided below.
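The interplay of a transactional WPS and previously unregistered processes can be sketched as follows. The class and process names are hypothetical, and the "tasks" are plain Python callables standing in for BPMN service tasks; a real implementation would delegate execution to the BPMN Engine (D148) rather than running tasks in-process.

```python
# Conceptual sketch of a transactional WPS: a process defined as an
# ordered list of BPMN-style service tasks can be registered at run time
# and then executed. Names are hypothetical; tasks are plain callables
# standing in for BPMN service tasks handled by a BPMN engine.
class TransactionalWPS:
    def __init__(self):
        self.processes = {}

    def deploy(self, process_id, tasks):
        """Register a previously unknown process (deploy operation)."""
        self.processes[process_id] = tasks

    def execute(self, process_id, data):
        """Run the registered process by applying its tasks in order."""
        for task in self.processes[process_id]:
            data = task(data)
        return data

wps = TransactionalWPS()
wps.deploy("demo-process", [str.upper, lambda s: s + "!"])
print(wps.execute("demo-process", "done"))   # DONE!
```

The security-relevant point is that deployment happens after the testbed architecture is in place, so access control decisions must cover processes that did not exist when the workflow environment was configured.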
The Security ER (D024) needs to capture all security related results of this task (including results from WFS3.0, Federated Clouds, and Workflows). As the various task topics explore various aspects of security in modern Spatial Data Infrastructures, the ER shall provide a consolidated view on that topic. The Security ER editor has a central role in coordinating and documenting the various aspects explored across this task.
Workflows
Testbed-14 shall explore five aspects as part of the Workflows topic. These include
The following diagram illustrates all work items involved. The Workflows topic uses security components from the WFS3.0 and Security topics.