Corrigenda

The following table identifies all corrections that have been applied to this CFP compared to the original release. Pure editorial changes are not listed.

Section | Description

4 | Master Schedule updated
5.2 | Work items added and changed
B3 | Figure 3 updated; new work packages "Enhanced Web Services" and "Point Cloud Streaming" added
B6 | Section B6 merged with section B21 to form new section B6, "Semantic Registry"
B12 | Requirements eased on AB105 and AB106
B16 | Profile information added to NG102 and NG103
B16 | NG113 added to figure 23 and deliverables section
B21 | Section B21 merged with section B6 to form new section B6, "Semantic Registry"
B22 | Profile information added to NG119 and NG120
B22 | NG011 added to deliverables
B23 | B23.1 and B23.6, "Event-driven automatic analytics workflow", deleted
B23 | B23.2 requirements dropped
B23 | Figure 31 updated to reflect new requirements
B23 | Clarification added to B23.4, "Building Workflows"
B23 | Clarification added to B23.5, "Cataloguing Workflows"
B23 | Fit-for-Purpose Workflow numbering changed in B23.7
B23 | "Automatic Analytics" removed from figure 34 and the "Workflows" requirements section
B23 | Figure 32 deleted
B23 | Figure 33 updated to reflect changes in figure 31
B25 | New section "Enhanced Web Services" added
B26 | New section "Point Cloud Streaming" added

Abbreviations

The following table lists all abbreviations used in this Call for Proposals.

ABI | Activity Based Intelligence
AMQP | Advanced Message Queuing Protocol
AOI | Area of Interest
AtomPub | Atom Publishing Protocol
AVI | Aviation
BBOX | Bounding Box
CDR | Content Discovery and Retrieval
CFP | Call for Proposals
CITE | Compliance Interoperability and Testing
CMD | Command Center
CSMW | Community Sensor Model Working Group
CSW | Catalogue Service for the Web
CTL | Compliance Testing Language
DAP | Data Access Protocol
DCAT | Data Catalog Vocabulary
DDIL | Denied, Degraded, Intermittent, or Limited Bandwidth
DGIWG | Defence Geospatial Information Working Group
DISA | Defense Information Systems Agency
DWG | Domain Working Group
EO | Earth Observation
EOWCS | Earth Observation Profile of the Web Coverage Service
ER | Engineering Report
EXI | Efficient XML Interchange Format
FGDC | Federal Geographic Data Committee
FIXM | Flight Information Exchange Model
FO | Field Operations
GDAL | Geospatial Data Abstraction Library
GEOINT | Geospatial Intelligence
GeoXACML | Geospatial XACML
GIBS | Global Imagery Browse Services
GML | Geography Markup Language
HDF | Hierarchical Data Format
HTTP | Hypertext Transfer Protocol
HTTPS | Hypertext Transfer Protocol Secure
ISO | International Organization for Standardization
JSON | JavaScript Object Notation
JSON-LD | JSON Linked Data
KML | Keyhole Markup Language
LiDAR | Light Detection and Ranging
MEP | Mission Exploitation Platform
MTOM | Message Transmission Optimization Mechanism
NASA | National Aeronautics and Space Administration
netCDF | Network Common Data Form
netCDF-CF | netCDF Climate and Forecast conventions
NSG | National System for Geospatial Intelligence
OAuth | Open Authorization
OBP | Object Based Production
OGC | Open Geospatial Consortium
OPeNDAP | Open-source Project for a Network Data Access Protocol
PKI | Public Key Infrastructure
POI | Points of Interest
PubSub | Publish/Subscribe
RDF | Resource Description Framework
SAML | Security Assertion Markup Language
SOS | Sensor Observation Service
SPARQL | SPARQL Protocol and RDF Query Language
SSO | Single Sign-On
SWAP | Size, Weight, and Power
SWE | Sensor Web Enablement
SWG | Standards Working Group
T13 | Testbed-13
TEAM | Test, Evaluation, And Measurement Engine
TEP | Thematic Exploitation Platform
TSPI | Time-Space-Position Information Standard
TWMS | Tiled Web Mapping Service
UML | Unified Modeling Language
US | United States
USGS | U.S. Geological Survey
W3C | World Wide Web Consortium
WCPS | Web Coverage Processing Service
WCS | Web Coverage Service
WFS | Web Feature Service
WIS | Web Integration Service
WKT | Well Known Text
WMS | Web Map Service
WMTS | Web Map Tile Service
WPS | Web Processing Service
WS | Web Service
WSDL | Web Services Description Language
XACML | eXtensible Access Control Markup Language
XOP | XML-binary Optimized Packaging
XXE | XML External Entity Injection

1. Introduction

The Open Geospatial Consortium (OGC®) is releasing this Call for Proposals ("CFP") to solicit proposals for the OGC Testbed 13 ("T13") initiative. The CFP is in two parts due to specialized sponsor procurement requirements:

  • Part 1 - this CFP document ("Part 1 CFP" or "CFP")

  • Part 2 - an Invitation To Tender pack ("Part 2 ITT") ref: AO320105 Thematic Exploitation Platform (TEP) Thread, released via the European Space Agency (ESA) Electronic Mailing Invitation to Tender System (EMITS).

Important
Part 2 ITT is described separately from this document and has distinct response requirements, which can be found at the link provided above. Any Bidder wishing to respond to both Part 2 ITT (described externally) and the Part 1 CFP (described herein) must therefore deliver two separate proposals, one conforming to each set of response requirements. A Bidder wishing to respond to one part only should submit a single proposal conforming to that part’s requirements.

Under Part 1 CFP, the OGC, on behalf of initiative-sponsoring organizations ("Sponsors"), will provide cost-sharing funds to partially offset expenses uniquely associated with T13. This solicitation therefore requests proposals from bidding organizations ("Bidders") wishing to receive cost-sharing funds. However, not all proposals are expected to request cost-sharing funds. OGC intends to involve as many technology developers and providers ("Participants", to be selected from among all Bidders) as possible, to the extent that each Participant can contribute to and benefit from initiative outcomes. Accordingly, this solicitation also seeks responses offering solely in-kind contributions (i.e., no cost-sharing funds whatsoever). The majority of responses are expected to combine a cost-sharing request with a proposal for an in-kind contribution.

Note

Once the CFP has been published, ongoing updates can be tracked by monitoring the Testbed 13 CFP web page.

1.1. Background

The OGC Interoperability Program ("IP") provides global, hands-on, collaborative prototyping for rapid development and delivery of proven candidate specifications to the OGC Standards Program, where these candidates can then be considered for further action. In IP initiatives, Participants collaborate to examine specific geo-processing interoperability questions posed by the initiative’s Sponsors. These initiatives include testbeds, experiments, pilots, and plugfests – all designed to foster the rapid development and adoption of open, consensus-based standards. Additional information can be found in the OGC IP policies and procedures documentation.

The OGC recently reached out to potential initiative sponsors to review the OGC technical baseline, discuss results of prior initiatives, and identify current testbed requirements. After analyzing these inputs, the OGC recommended that the content of the testbed be organized around two parts, Part 1 CFP and Part 2 ITT.

A complete list of all testbed deliverables (including both Part 1 CFP and Part 2 ITT) is provided in the Summary of Testbed Deliverables section below.

Tip

In addition to the funding opportunities provided by the Sponsors, Testbed 13 will provide a new opportunity for selected Participants to seek additional venture capital investment in technical areas associated with their assigned Testbed 13 work areas. For more information, see the Venture Capital Coordination Opportunity section below.

1.2. Participant Roles and Benefits

Participants may play any of several possible roles:

  • Developer of one or more software components implementing interfaces and protocols for one or more of the testbed services,

  • Developer of one or more tools to assist in the testing and demonstration of implemented software components,

  • Editor of one or more Engineering Reports or User Guides that document findings and recommendations, and/or

  • Provider of general-purpose resources such as labor hours and infrastructure assets (e.g., data, software, hardware, facilities).

In general, Bidders should propose specifically against the list of deliverables described under the Summary of Testbed Deliverables section below. But Bidders may go beyond funded deliverables to propose in-kind contributions that will address unfunded requirements as well. Participants should note, however, that Sponsors are committed to funding only those deliverables identified as being funded.

This testbed provides a business opportunity for stakeholders to mutually define, refine, and evolve services, interfaces and protocols in the context of hands-on experience and feedback. The outcomes are expected to shape the future of geospatial software development and data publication. The Sponsors are supporting this vision with cost-sharing funds to partially offset the costs associated with development, engineering, and demonstration of these outcomes. This offers selected Participants a unique opportunity to recoup a portion of their testbed expenses.

1.3. CFP Documents

This Part 1 CFP incorporates the following additional documents:

Any Bidder interested in participating in this testbed should respond by submitting a proposal per the instructions provided herein. Limited cost-sharing funds are available to partially offset costs incurred by Participants in support of this initiative.

1.4. Intellectual Property in the Testbed

One testbed objective is to support the OGC Standards Program in the development and publication of open standards. Each Participant will be required to allow OGC to copyright and publish documents based in whole or in part upon intellectual property contributed by the Participant during testbed performance. Specific requirements are described under the "Copyrights" clauses of the OGC IPR document identified above.

1.5. Principles of Conduct

The OGC Principles of Conduct document identified above will govern all personal and public interactions in this initiative.

2. Proposal Submission Instructions

Important
The instructions described in this section pertain specifically to proposals in response to the Part 1 CFP. Full instructions for responding to the Part 2 ITT solicitation can be found in the ITT pack referenced above. Part 2 ITT has specific requirements on proposal format and the submission process, and applies funding restrictions concerning ESA member states, associate member states, and states with cooperation agreements (Canada).

Bidders must be OGC members and must be familiar with the OGC mission, organization, and process. Proposals from non-members will be considered provided that a completed application for OGC membership (or a letter of intent to become a member) is submitted prior to or with the proposal.

Documentation submitted in response to this CFP will be distributed to OGC and Sponsor staff members. Submissions will remain in the control of these stakeholders and will not be used for other purposes without prior written consent of the Bidder. Please note that each Bidder will be requested to release the content of its proposal (excluding financial details) to all testbed stakeholders (including other Participants) once it has agreed to participate in the testbed initiative. Confidential information must not be submitted under this request and should not be disclosed at any time during testbed solicitation or execution.

Part 1 CFP Participants will be selected to receive cost sharing funds on the basis of adherence to the requirements stipulated in this CFP and the overall quality of their proposal. The general testbed objective is for the work to inform future OGC standards development with findings and recommendations surrounding potential new specifications. Bidders are asked to formulate a path for producing running, interoperable prototype solutions. Bidders not selected for cost sharing funds are encouraged to participate in the initiative on an in-kind basis.

Each selected Part 1 CFP Participant will be required to enter into a Participation Agreement ("PA") with the OGC. The PA will include a statement of work (SOW) identifying Participant roles and responsibilities. The purpose of the PA is to encourage and enable Participants to work together to realize testbed goals for the benefit of the broader OGC community.

2.1. How to Transmit Your Response Proposal

To submit a response proposal, complete the two Response Templates (narrative and financial) and email them as attachments to the OGC Technology Desk at techdesk@opengeospatial.org. Any of the following attachment output formats is acceptable:

  • Microsoft Office (.DOCX, .XLSX),

  • Open Document Format (.ODT, .ODS),

  • Portable Document Format (.PDF).

Part 1 CFP proposals must be received at OGC before the appropriate response due date indicated in the Master Schedule.

2.2. Proposal Format and Content

For a Bidder’s response to qualify for consideration, the response must provide all required information in accordance with Part 1 CFP instructions, including those contained in appendices and the two templates. Please note that the Financial Response Template contains one worksheet for a cost-sharing request and another for in-kind contributions. Bidders must use these templates in preparing their proposals.

Note that proposal reviewers will be instructed to avoid reading or evaluating any material in excess of stated page limits.

2.2.1. Technical Proposal

The Part 1 CFP Technical Proposal should be based on the Narrative Response Template and must include the following:

  • Completed Title Page

  • Table of Contents

  • Overview, not to exceed two pages (this section will not be considered in making the evaluation of the proposal)

  • Proposed contribution(s) in each thread or work package (this section will form the basis for the technical evaluation of the proposal)

  • Proposed work organized by technical activity type (this section will be considered in making the management evaluation of the proposal)

Additional detailed instructions for each template can be found in the template itself.

2.2.2. Cost Proposal

The Part 1 CFP Cost Proposal should be based on the two worksheet templates contained in the Financial Response Template and must include the following:

  • Completed Testbed Cost-Sharing Funds Request Form

  • Completed Testbed In-Kind Contribution Declaration Form

Additional detailed instructions are contained in the template itself.

2.3. Questions and Clarifications

Once the Part 1 CFP is issued, potential Bidders will be permitted to submit questions to support their proposal development and submission. Questions should be emailed by the Bidder-question due date (indicated in the Master Schedule) to the OGC Technology Desk (techdesk@opengeospatial.org). Question submitters will remain anonymous, and answers will be compiled and published in a regularly updated CFP clarifications document. OGC may also choose to conduct a Bidder’s question-and-answer webinar to review clarifications and invite follow-on questions.

2.4. Reimbursement Restrictions

Selected Participants will not be reimbursed for any of the following:

  • Costs incurred in procuring any hardware or software

  • Costs incurred in connection with preparing proposals in response to this CFP

  • Costs incurred for travel to or from the Kickoff or demonstration events

2.5. Venture Capital Coordination Opportunity

Organizations responding to this CFP are invited to express their interest in being considered by select venture capital investment firms (VCs) regarding elements of their testbed proposal. OGC has teamed with the venture capital firm Data Tribe. OGC’s role is to assist in the alignment of interests. After identification of common interests, OGC will introduce the VC and the Participant, and subsequent discussions should take place directly between these two parties.

If your organization is interested in coordinating with a VC, include a statement in your response to that effect, including an identification of the specific technology areas that should be considered by the VC. An outline of testbed technology areas appears below. Additional technical details can be found in Appendix B, including an overview of thread allocations for all work packages.

  • Cloud Computing Environment for Earth Observation Data

  • USGS Topo Combined Vector Product data to GeoPackage

  • Map Markup Language & Web-Map HTML

  • Climate Data Accessibility for Adaptation Planning

  • Vector Tiling

  • CDB

3. Proposal Evaluation Criteria

Proposals will be evaluated according to criteria that can be divided into two areas: Technical and Management.

3.1. Technical Criteria

  • Understanding of and compliance with requirements;

  • Quality and suitability of proposed design;

  • Where applicable, proposed solutions are OGC-compliant.

3.2. Management Criteria

  • Adequate, concise descriptions of all proposed activities, including how each activity contributes to achievement of particular requirements and deliverables. To the extent possible, it is recommended that Bidders utilize the language from the CFP itself to help trace these descriptions back to requirements and deliverables.

  • Costing and planning:

    • Proposed solutions are feasible (can be delivered using proposed resources),

    • Cost-share compensation request is reasonable for proposed effort;

    • In-kind contribution is of value to the initiative;

    • Manpower deployment and compliance with substantive tender and contract conditions;

  • Experience and capacity of the tenderer with OGC initiatives.

4. Master Schedule

The following table details the major events and milestones associated with the testbed and this CFP:

Table 1. Master schedule
Date | Milestone Event

3 February 2017 | Final Bidder Questions Due
7 February 2017 | Bidders Q&A Webinar
20 February 2017 | Part 1 CFP Proposal Submission Deadline
1 March 2017 | First Round of Bidder Notifications Started
8 March 2017 | Second Round of Bidder Notifications Started
31 March 2017 | All Part 1 CFP Participation Agreements Signed
4-6 April 2017 | Kickoff Workshop Event
30 June 2017 | Preliminary Design and Implementations Milestone
30 September 2017 | Delivery of Preliminary Clean, Full DERs and TIE-tested Component Implementations
31 October 2017 | DERs Posted to Pending and WG Review Requested
15 November 2017 | Demo Assets
30 November 2017 | Participant Final Summary Reports
[date in December 2017 TBD] | Demonstration Event

Sequence of Events, Phases, and Milestones

The following diagram provides a notional schedule of major testbed events, phases, and milestones and their approximate sequence of occurrence. The testbed will use rolling-wave project management whereby more detailed scheduling will take place as each milestone draws near.

Figure 1. Overview of events, phases, and milestones

Participant Selection and Agreements: Once the Part 1 CFP is issued, potential Bidders may submit questions and attend the Bidders Q&A webinar, following the process described under Questions and Clarifications above.

Following the closing date for submission of proposals, OGC will evaluate received proposals, negotiate with selected Bidders, and communicate testbed status to the OGC Technical and Planning Committees. Participant selection will be complete once PA contracts, including statements of work (SOWs), have been signed with all Participants.

Kickoff Workshop: A Kickoff Workshop ("Kickoff") is a face-to-face meeting where Participants, guided by thread architects, will refine the testbed architecture (including generic interfaces and protocols to be used as a baseline for software components) and the demonstration concept. Participants will be required to attend the Kickoff, including the activities of each thread for which they were selected.

Component Development, Test, and Refinement: After the Kickoff, Participants will develop components based on the interface designs for insertion into the testbed, and integrate selected components for support of TIEs and demonstration deliverables. These activities will be conducted remotely via web meetings and teleconferences.

Preliminary Design and Implementations Milestone: Development work leads up to the Preliminary Design and Implementations milestone, a critical milestone for completing draft deliverables (e.g., design documents or preliminary service implementations) based on collaboration among the thread teams formed at Kickoff. These drafts should confirm each Participant’s understanding of its requirements, the components to be delivered, and the remaining delivery schedule.

Final Delivery Milestone: Participants will be required to make final delivery of all items no later than the Final Delivery milestone, which will constitute the close of funded activity. Further development may take place to refine demonstration assets.

Final Activities and Demonstration Event: A testbed Demonstration will be conducted to highlight findings and recommendations to the TC, Sponsors, and the broader community of interest. This event could entail multiple demonstrations to highlight particular capabilities. Participants selected to deploy demonstration assets may do so after the Final Delivery Milestone, but they must provide a technical representative to participate in or support the development of the integrated demonstration.

Assurance of Service Availability: Participants selected to implement service components must maintain availability for a period of no less than one year after the Final Delivery Milestone. Some Sponsors may be willing to entertain exceptions to this requirement on a case-by-case basis.

Participant requirements for proposing activities to support these phases can be found in Appendix A.

5. Summary of Testbed Deliverables

The following tables show the full set of testbed deliverables, including ID, deliverable name, work package, and funding status.

A deliverable’s funding status can be funded ("F"), unfunded ("U"), or under negotiation ("Un-Neg"), depending on the current state of sponsor funding.

  • For a deliverable with a funding status of "F", sponsor funding has already been confirmed.

  • A deliverable with a funding status of "U" is within CFP scope, but has a lower priority and does not have any sponsor funding.

  • A deliverable with a funding status of "Un-Neg" is one for which a sponsor intends to provide funding, but a final commitment of this funding is still pending.

Please note that each deliverable indicated as "F" or "Un-Neg" would be funded at most once. No deliverable should be interpreted as offering multiple instances. For any deliverable still under negotiation ("Un-Neg"), if funding for that deliverable ends up not being committed, any bid for cost-sharing on that deliverable will be dismissed.

All deliverables have been assigned to work packages, which will be organized into larger threads before Kickoff. A preliminary set of threads has been provided in Appendix B.

All Participants are required to provide at least some level of in-kind contribution (i.e., activities requesting no cost-share compensation). As a rough guideline, a proposal should include at least one dollar of in-kind contribution for every dollar of cost-sharing compensation requested. All else being equal, higher levels of in-kind contributions will be considered more favorably during evaluation.

Some participation may be fully in-kind. Any item proposed as a fully in-kind contribution will likely be accepted if it meets all the other evaluation criteria (i.e., other than the Management Criterion "Cost-share compensation request").

Important
The following additional requirements apply to all web service implementation deliverables prefixed by "NG…​":

  • All web service implementation deliverables with prefix “NG…​” must, in addition to meeting that deliverable’s unique requirements, implement either a DGIWG or NSG Profile of that service if one exists.

  • All web service deliverables implementing either a DGIWG or NSG Profile must execute and pass any corresponding profile compliance test if one exists.

  • All web service implementation deliverables under the Workflows work package below must implement the requirements of a security architecture for service chaining.

  • Preference will be given to web service implementation deliverables that are proposed to be implemented in a cloud-based environment.

In the tables below, document deliverables are numbered from …​001 upward, and implementation deliverables from …​101 upward.

5.1. Part 2 ITT Thematic Exploitation Platform (TEP) Deliverables and Funding Status

Important
The Part 2 ITT Thematic Exploitation Platform (TEP) Deliverables are listed here to provide a complete overview of all T13 work items. Full instructions for responding to the Part 2 ITT solicitation can be found in the ITT pack referenced above.

Additional technical details can be found in Appendix B, including an overview of thread allocations for all work packages.

Table 2. Part 2 ITT Thematic Exploitation Platform (TEP) Deliverables and Funding Status
ID | Document / Component | Work Package | Thread | Funding Status

ES001 | EP Application Package ER | TEP | EOC | F
ES002 | Application deployment & execution service ER | TEP | EOC | F
ES101 | EP Application package implementation 1 | TEP | EOC | F
ES102 | EP Application package implementation 2 | TEP | EOC | F
ES103 | TEP client 1 | TEP | EOC | F
ES104 | TEP client 2 | TEP | EOC | F
ES105 | Application deployment service implementation 1 | TEP | EOC | F
ES106 | Application deployment service implementation 2 | TEP | EOC | F
ES107 | EP Application for Forestry TEP implementation | TEP | EOC | U

5.2. Part 1 CFP Deliverables and Funding Status

Additional technical details can be found in Appendix B, including an overview of thread allocations for all work packages.

Table 3. Part 1 CFP Deliverables and Funding Status
ID | Document / Component | Work Package | Thread | Funding Status

NR001 | Cloud ER | Cloud | EOC | F
NR101 | Cloud WPS 1 | Cloud | EOC | F
NR102 | Cloud WPS 2 | Cloud | EOC | F

UG002 | DCAT/SRIM ER | Semantic Registry | CCI | F
UG101 | DCAT/SRIM Server | Semantic Registry | CCI | F
NG124 | PubSub CSW | Semantic Registry | CCI | Un-Neg

NG006 | Point Cloud Streaming ER | Point Cloud Streaming | CCI | U
NG117 | Point Cloud Streaming Server | Point Cloud Streaming | CCI | U
NG118 | Point Cloud Streaming Client | Point Cloud Streaming | CCI | U

FA001 | Abstract Quality Model ER | Aviation QoS | CCI | F
FA002 | Data Quality Specification ER | Aviation QoS | CCI | F
FA003 | Quality Assessment Service ER | Aviation QoS | CCI | F

FA004 | Geospatial Taxonomies ER | Aviation Taxonomies | CCI | F

DG001 | Fit-for-Purpose ER | Fit for Purpose | FO | F
DG101 | CSW or WPS with fit-for-purpose support | Fit for Purpose | FO | F
AB103 | WFS Data service with fit-for-purpose support | Fit for Purpose | FO | F
AB104 | Client with fit-for-purpose support | Fit for Purpose | FO | F

UG001 | US Topo GeoPackage ER | GeoPackage | FO | F
UG102 | USGS Topo GeoPackage | GeoPackage | FO | F
AB102 | GeoPackage Client | GeoPackage | FO | F

NR002 | MapML ER | MapML | CCI | F
NR103 | MapML Server | MapML | CCI | F

AB001 | Concepts of Data and Standards for Mass Migration ER | Mass Migration | DSI | F
AB002 | Security ER | Mass Migration | DSI | F
PM001 | NIEM IEPD Engineering Report (ER) | Mass Migration | DSI | Un-Neg
AB101 | OAuth-enabled Web Service | Mass Migration | DSI | F
AB105 | Security-enabled Desktop Client (EOC Desktop Client) | Mass Migration | DSI | F
AB106 | Security-enabled Mobile Client (EOC Mobile Client) (GeoPackage & Web Service) | Mass Migration | DSI | F
PM101 | Messages and Schemas or CVISR (+ POS-IAN-VINFO-NOA) IEPDs | Mass Migration | DSI | Un-Neg
PM102 | AIS Vessel Info Data Service (WFS) | Mass Migration | DSI | Un-Neg
PM103 | SAML-enabled Web Feature Service with Transactions (WFS-T) | Mass Migration | DSI | Un-Neg
PM104 | NIEM-GML Integration Component (WPS) | Mass Migration | DSI | Un-Neg
PM105 | Security Component - SAML Authentication Service | Mass Migration | DSI | Un-Neg
PM106 | Security Component - Federated ID Management Service | Mass Migration | DSI | Un-Neg

GE101 | QGIS Security Client | Security | DSI | F

DS001 | Vector Tiles ER | Vector Tiling | S3D | F
OS101 | Vector Tiles implementation | Vector Tiling | S3D | F
OS102 | Vector Tiles client implementation | Vector Tiling | S3D | F
NG116 | WFS for Vector Tiling | Vector Tiling | S3D | Un-Neg
DS101 | Vector Map Tiling Service | Vector Tiling | S3D | F

NA001 | Climate Data Accessibility for Adaptation Planning ER | Modeling | DSI | F
NA101 | Agriculture Scientist Client | Modeling | DSI | F
NA102 | Non-Scientist or Analyst Client | Modeling | DSI | F
NA103 | Prediction WPS | Modeling | DSI | F
NA104 | WCS access to climate data | Modeling | DSI | F

NG001 | CDB ER | CDB | S3D | Un-Neg
NG101 | Feasibility Study | CDB | S3D | Un-Neg
NG102 | CDB WFS | CDB | S3D | Un-Neg
NG103 | CDB WCS | CDB | S3D | Un-Neg
NG104 | CDB WFS (3D) | CDB | S3D | Un-Neg
NG105 | CDB Client | CDB | S3D | Un-Neg

NG002 | 3D Tiles & i3s Interoperability & Performance ER | 3DTiles and i3s | S3D | Un-Neg
NG106 | CDB Implementation | 3DTiles and i3s | S3D | Un-Neg
NG107 | CityGML Datastore | 3DTiles and i3s | S3D | Un-Neg
NG108 | Streaming Engine-1 | 3DTiles and i3s | S3D | Un-Neg
NG109 | Streaming Engine-2 | 3DTiles and i3s | S3D | Un-Neg
NG110 | 3D Performance Client | 3DTiles and i3s | S3D | Un-Neg
NG111 | CDB Performance Client | 3DTiles and i3s | S3D | Un-Neg

NG003 | NAS Profiling ER | NAS Profiling | S3D | Un-Neg
NG112 | ShapeChange Enhancements | NAS Profiling | S3D | Un-Neg
NG113 | Data Models | NAS Profiling | S3D | Un-Neg

NG004 | Disconnected Network ER | DDIL | FO | Un-Neg
NG005 | SWAP ER | DDIL | FO | Un-Neg
NG114 | Compression Test Server | DDIL | FO | Un-Neg
NG115 | Compression Test Client | DDIL | FO | Un-Neg

NG008 | Portrayal ER | Portrayal | DSI | Un-Neg
NG122 | Portrayal Demonstration | Portrayal | DSI | Un-Neg

NG125 | Enhanced WMTS | WxS | DSI | U
NG126 | Enhanced WMS | WxS | DSI | U
NG127 | Tile-handling WPS-1 | WxS | DSI | U
NG128 | Tile-handling WPS-2 | WxS | DSI | U

NG007 | Asynchronous Services ER | Asynchronous Services | FO | Un-Neg
NG119 | Asynchronous WFS-1 | Asynchronous Services | FO | Un-Neg
NG120 | Asynchronous WFS-2 | Asynchronous Services | FO | Un-Neg
NG121 | GeoSynchronization Service | Asynchronous Services | FO | Un-Neg
NG011 | GeoSynchronization Service Best Practice ER | Asynchronous Services | FO | Un-Neg

NG009 | Workflow ER | Workflows | FO | Un-Neg
NG130 | Workflow WPS-1 | Workflows | FO | Un-Neg
NG131 | Workflow PubSub Server | Workflows | FO | Un-Neg
NG132 | Workflow Data Server-1 | Workflows | FO | Un-Neg
NG135 | Workflow Catalog Server | Workflows | FO | Un-Neg
NG136 | WPS Client | Workflows | FO | Un-Neg

NG010 | CITE ER | Compliance | COT | Un-Neg
NG137 | CITE NSG WFS Suite | Compliance | COT | Un-Neg
NG138 | CITE NSG WMTS Suite | Compliance | COT | Un-Neg
Appendix A: Management Requirements

A.1. Initiative Activities and Roles

A.1.1. Roles

The roles generally played in any OGC Interoperability Program initiative are defined in the OGC Interoperability Program (05-127r8). The following role definitions are derived from that document, with added detail to clarify how the roles will be played in this particular testbed initiative.

  • Sponsors are OGC member organizations that contribute financial resources in support of the testbed. They drive testbed requirements, technical scope & agenda, and demonstration form & content. Sponsor Representatives are assigned by the Sponsor to represent the Sponsor’s interests and position to OGC throughout the testbed duration.

  • Participants are OGC member organizations that contribute to the definition of interfaces, prototypical implementations, and other engineering support for testbed. Participants typically commit to making a substantial in-kind contribution to an initiative. Participants will be represented in the testbed by assigned business and technical representatives.

  • Observers are OGC member organizations that have agreed to the initiative’s intellectual property requirements. Observers do not have a vote in an initiative, but they are afforded the privilege of access to initiative email lists, web sites and periodic initiative-wide teleconferences. Observers may make recommendations and comments to the participants via any of these forums. The Initiative Manager has the authority to table any comments, recommendations or other discussions raised by observers at any point without prior warning. Failure of an observer to comply may result in suspension of privileges.

  • The IP Team is the engineering and management team that will oversee and coordinate the initiative. This team is comprised of OGC staff, representatives from member organizations, and OGC consultants. It facilitates architectural discussions, synopsizes technology threads, and supports the specification editorial process.

The IP Team for this testbed will include an Initiative Manager, an Initiative Architect, and multiple thread architects. Unless otherwise stated, the Initiative Manager will serve as the OGC primary point of contact ("OGC POC").

The thread architects will work with the IP Team, other thread Participants, and Sponsors to ensure that testbed work (activities and deliverables) is properly assigned and performed. The thread architects are responsible for work and schedule control, as well as for within-thread communication. They will also provide timely notice to the full IP Team on important issues or risks that could impact initiative success.

A.1.2. Activities

Testbed program management activity requirements on Bidders and Participants are presented below. These requirements govern what obligations Bidders must meet to properly propose and what obligations selected Participants must meet to properly perform during testbed execution. The order of topics roughly parallels the Master Schedule.

In general, these requirements are expressed as various technical activities that may be proposed in a bid. Additional activities may be considered during bid evaluation based on cost (i.e., in-kind vs. cost-share) and the extent to which the proposed activity meets testbed requirements and conforms to the testbed architecture. However, Bidders are advised to avoid attempts to use the testbed as a platform for introducing new requirements not included in the Summary of Testbed Deliverables.

In the material that follows, the term "activity" describes work to be performed and "deliverable" describes work to be memorialized and delivered for inspection and use. This appendix focuses primarily on activities, while the Summary of Testbed Deliverables focuses on deliverables.

In the requirements listed below, bold italic text indicates that the work described is mandatory. Just as a Bidder is not required to propose all deliverables in the Summary of Testbed Deliverables, a Bidder is not required to propose to perform all listed activities. For example, a Bidder that is already a member of the OGC should forego the activity of submitting a membership application with its proposal. Some activities are absolutely required, however, and a Bidder has no choice but to propose performing them. For example, every Bidder must use the supplied templates in its proposal.

A.2. Proposal Development Requirements

The following requirements apply to the proposal development process and activities.

  • Selected Participants must be OGC members. Any Bidder who is not already a member of the OGC must submit an application for membership with its proposal.

  • Bidders should identify any relationships between the proposed work and relevant OGC standards.

  • Bidders should identify any relationships between the proposed work and related international standards (including specific sections) being developed by ISO, OASIS, IEEE, IETF, IAI or other standards development organizations.

  • No work facilities will be provided by OGC. Selected Participants will perform all awarded work at their own facilities. Some work, particularly servers in Technology Integration Experiments ("TIEs", sometimes also referred to as Technical Interoperability Experiments), will require Participants to provide access via the public Internet.

  • Proposals may address selected portions of the testbed requirements and architecture as long as the solution ultimately fits into the overall testbed architecture.

  • A single proposal may address requirements arising from multiple threads. To ensure that all work items in the Summary of Testbed Deliverables are delivered, the OGC may negotiate with individual Bidders to drop, add, or modify some of the proposed work.

  • Bidders proposing to build interoperable components must be prepared to test and demonstrate interoperability with components supplied by other Participants.

  • Components proposed as in-kind contributions should be publicly or commercially available products or services or prototype/pre-release versions intended to be made available. Exceptions may include products/services which are internally used by government/sponsor agencies.

  • Participants selected to implement component deliverables must participate in the full course of interface and component development, TIEs, and demonstration support activities throughout the initiative. Participants selected to edit and/or author document deliverables that depend on these implemented components must also participate in the full course of activities throughout the initiative.

  • Bidders are welcome to suggest alternatives to the initial testbed architecture. However, it should be noted that proposals will be selected on the basis of how successfully the various components from all Participants interoperate. A radically divergent architecture that would require intensive rework on the part of a significant number of other Participants would have to be supported by rationale showing a substantial benefit-to-cost ratio. In such a case, advance coordination with other potential Participants to present a coherent, realistic, and reasonable approach acceptable to all involved Participants could improve the likelihood of acceptance.

  • In general, a proposed component deliverable that has earned OGC Certification will be evaluated more favorably than one which has not.

  • All Bidders must use the supplied templates in their proposals. All Selected Participants receiving cost-sharing funding must send at least one technical representative to the Kickoff Workshop. Participants providing only in-kind contributions may forego this requirement with prior permission. Participants are also encouraged to send at least one technical representative to the Demonstration event.

A.3. Proposal Evaluation Process

Proposal evaluation criteria are listed in the main body of this document. Several steps conducted solely by the IP Team are presented below to aid readers in understanding the overall process. The IP Team and Sponsors will begin reviewing proposals soon after the proposal submission deadline. During this analysis, the IP Team may need to contact Bidders to obtain clarifications and better understand what is being proposed.

A.3.1. IP Team Review of Proposals

Each review will commence by analyzing the proposed deliverables in the context of the Summary of Testbed Deliverables, examining viability in light of the requirements and assessing feasibility against the use cases. The review team will analyze (1) proposed specification refinement or development and (2) proposed testing methodologies.

The IP Team will take the opportunity to potentially modify the testbed architecture in light of new ideas found in the tentatively selected Proposals. Any candidate interface or protocol specification received from a Bidder will be added to the architecture and presented at the Kickoff.

The IP Team will also create a draft demonstration concept explaining how the tentatively selected software components will work together in a demonstration context. It will also identify any remaining gaps. The demonstration concept might include references to existing and emerging resources on OGC Network, including those expected to be under development in this testbed. Testbed execution will eventually culminate in one or more Demonstrations, which could be a combination of virtual and physical events (depending on Sponsor constraints and preferences).

A.3.2. Decision Technical Evaluation Meeting I

At the Decision Technical Evaluation Meeting I (TEM I), the IP Team will present Sponsors with the updated testbed architecture and demonstration concept, along with the proposed program management approach. The team will also present draft recommendations regarding which parts of which proposals should be offered cost-sharing funding (and at what level). Sponsors will decide whether and how draft recommendations in all these areas should be modified.

A.3.3. Initial Notification of Potential Participants

Immediately following TEM I, the IP Team will begin to notify Bidders of their selection to enter negotiations for potentially becoming Participants. Selected Bidders must be available for these contacts to be made to enable confirmation of continued interest.

A.3.4. Decision Technical Evaluation Meeting II

A Decision Technical Evaluation Meeting II (TEM II) will be conducted, at which the IP Team will present to Sponsors the revised artifacts and Participant recommendations. In addition to confirming the modifications decided in TEM I, Sponsors will have a final opportunity to review proposed Participant recommendations.

A.3.5. Second Notification of Potential Participants

Following TEM II, the IP Team will finalize the testbed architecture, demonstration concept, and program management approach. It will also develop the SOW and full Participation Agreement (PA) for each selected Bidder and notify each such organization of its selection to enter final negotiations for becoming an initiative Participant. Selected Bidders must be available for these contacts to be made to enable ongoing negotiation of each PA contract.

A.4. Kickoff Workshop Requirements

Testbed execution will commence with a Kickoff Workshop event ("Kickoff"). Refer to the Master Schedule for the target date(s). Each Participant must attend the Kickoff of any thread for which it was selected.

Prior to Kickoff, each Participant should have executed a Participation Agreement (PA) contract with OGC. Each PA will include a final description of all assigned deliverables (potentially including any mutually agreed modifications to the CFP requirements).

By the commencement of Kickoff, any Participant which has not yet executed a PA will be required to attest to its commitment to a preliminary PA Statement of Work (SOW). The PA must then be executed with OGC no later than Kickoff completion.

The Kickoff itself will address two interdependent and iterative development activities: (1) component interface and protocol definitions, and (2) demonstration scenario development. The scenarios used in the testbed will be derived from those presented in the CFP and other candidates provided by OGC and the sponsors.

Kickoff activities might include any or all of the following (note that there could be multiple iterations of interface definition and scenario development breakouts, and these may be interleaved):

  • Interface definition technical breakouts: Participants assigned to deliver components must have technical representatives in attendance to assist in the initial assessment and interaction of the interfaces. Participants assigned to work on interface definitions should consider in their analyses any use cases developed during demonstration scenario development.

  • Demonstration scenario technical breakouts: assigned Participants will begin demonstration scenario design and creation. The activity will include the development of use cases to record their decisions and to enable other Participants to explore the impact of Scenario design decisions on other parts of the testbed. Participants assigned to work on demonstration scenario development should consider in their analysis any use cases developed during interface definition activities. Participants in this activity must understand that various data sources will be proposed, and should receive consideration, as part of demonstration scenario design. The design must also account for the requirements and dependencies of the overall testbed system, including any client/tool designs, any server designs, and service interfaces.

  • Technical plenary sessions: these meetings will enable collaboration across breakout topics (e.g., Participants working on interface definitions can interact with those working on demonstration scenario development).

One of the Kickoff work products will be a development schedule that includes more detailed milestones for subsequent activities.

A.5. Communication and Reporting Requirements

A.5.1. Participant Points of Contact

Each selected Participant, regardless of any teaming arrangement, must designate a primary point of contact ("Primary POC") who shall remain available throughout testbed execution for communications regarding status. The POC must identify at least one alternative point of contact to support the Primary POC as needed. The POCs shall provide contact information including their e-mail addresses and phone numbers.

All proposals must include a statement attesting to the POCs’ understanding and acceptance of the duties described herein.

A.5.2. Kickoff Status Report

Selected Participants must provide a one-time Kickoff status report that includes a list of personnel assigned to support the initiative. This report must be submitted in electronic form to the testbed Initiative Manager no later than the last day of the Kickoff event.

A.5.3. Monthly Progress Reporting

Participant business/contract representatives are required (per a term in the Participation Agreement contract) to report the progress and status of the Participant’s work. Detailed requirements for this reporting will be provided during contract negotiation. Initiative accounting requirements (e.g., invoicing) will also be described in the contract.

The IP Team will provide monthly progress reports to Sponsors. Ad hoc notifications may also occasionally be provided for urgent matters.

To support this reporting, each Participant must submit (1) a Monthly Technical Report and (2) a Monthly Business Report by the 3rd of the following month (or the first working day thereafter).

Any Participant who has a reliable forecast of what will take place in the remaining days of any particular reported month may submit its report early and subsequently report any urgent, last-minute updates to the Initiative Manager via a follow-on email.

The purpose of these reports is to provide initiative management with high-quality, summary-level indicators of project technical and financial performance from the perspective of each Participant. Templates for both of these report types will be provided.

The IP Team may also provide an occasional status report to an OGC governance body such as the Technical Committee or Planning Committee. Participants may be invited to present preliminary findings in these reports.

The IP Team will review action item status on a weekly basis with assigned Participants, who must be available for these contacts to be made.

A.5.4. Regular and Ad Hoc Web Meetings and Teleconferences

At least one of the Participant's POCs must be available for both regularly scheduled and ad hoc teleconferences for each thread in which it is participating.

In particular, weekly (biweekly at IP Team discretion) thread telecons will be conducted and recorded in minutes posted on the portal. These meetings are intended to accelerate understanding and action regarding relevant testbed activities, particularly Participant work assignments and responses to requests for additional status by the IP Team.

In addition to the Participant POC, a knowledgeable Participant or Sponsor engineer who has been (or will be) working on an activity to be discussed on a telecon could also be a valuable attendee. Such individuals would have to either be a Participant or Sponsor employee, or must have signed a testbed Observer Agreement before they would be permitted to join the telecon.

A.5.5. Email Correspondence and Wiki Collaboration

At least one of the Participant's POCs must be available to participate in specification and prototype component development via the testbed email lists and wiki website.

A.5.6. Action Item Status Reporting

At least one of the Participant's POCs must be available to report the status of assigned Participant actions to the relevant thread architect.

A.5.7. Communication Tools

The following tools will be implemented for use during the testbed:

  • A testbed-wide email list reflector, primarily for non-technical communication and accessible via the email address testbed-13@lists.opengeospatial.org

  • A thread email reflector for each testbed thread, primarily for technical discussions. The reflectors are not intended for exchanging files; instead, files should be uploaded to the Portal, followed by a notification to others via the reflector

  • A public project web site

  • A wiki site for collaboration

  • Web meeting tools such as GoToMeeting, and teleconferences

  • The OGC web-based Portal with modules for calendaring, contact lists, file upload (with version control), timeline, action items, and meeting scheduling

A.6. Requirements for Proposing Technical Activities

Each work item in a labor funding request or in-kind labor contribution declaration (1) must identify the particular Deliverable from the Summary of Testbed Deliverables to which the work item applies and (2) must identify the particular Technical Activity Type for the proposed activity to perform the work item. The mandatory narrative and financial response templates will assist Bidders in meeting these requirements.

An extended outline of predefined Technical Activity Types is provided below. Each work item that a Bidder proposes or declares must either match (approximately) one of these types or provide an explanation and justification for why the proposed work item does not match anything from the list.

Adopting predefined activity types will help maintain consistency across Participants during testbed execution.

Under the testbed’s rapid pace, exposed issues can drive requirements for subsequent rounds of specification refinement, coding, and test. Guided by the thread architect, each cycle will proceed incrementally but rapidly, with focus on a bounded scope at each turn of the cycle. Periods of development will be followed by periods of synchronization between various component developers, enabling issue resolution before divergence can occur between the various components that must interoperate.

A.6.1. Specification Development Activity Types

This type of activity would define and develop models, schemas, encodings, and/or interfaces necessary to realize the testbed architecture. This type of activity may include coordination with the OGC Standards Program. Particular Specification Development Activity Types that may be specified in the Proposal include the following:

  • Model Development: representing a service, interface, operation, message, or encoding that is being developed for the initiative

  • Schema Development: specifying a representation of a model as an XML Schema that is being developed for the initiative

  • Encoding Development: specifying an encoding that is being developed for the initiative

  • Interface Development: specifying operations, encodings or messages that are being developed for the initiative

  • Standards Program Coordination: submitting Engineering Reports (ERs) developed in the testbed to the OGC Technical Committee for review and presenting reports to relevant OGC TC groups and working with members to resolve issues that the members may raise with regard to the ER

A.6.2. Component Development Activity Types

This type of activity would develop prototype interoperable software components based on draft candidate implementation specifications or adopted specifications necessary to realize the testbed architecture. Particular Component Development Activity Types that may be specified in the Proposal include the following:

  • Prototype Server Software Development: development of new server software or modification of existing server software to exercise the interfaces developed under Specification Development activities. Selected Participants must be able to demonstrate operation to Sponsors for review and input during the initiative and must make their findings available (to editors) for inclusion in any relevant ERs in the same work package. (Note that the development of prototype server software intended primarily for use in the OGC Compliance Program would fall under one of the Compliance Test Development Activity Types described below.)

  • Prototype Client Software Development: development of new client software or modification of existing client software to exercise the servers being developed. Participants who develop server software must also develop client software (or make arrangements with other Participants to utilize their client software) to exercise this server software during the course of the initiative. Use of another Participant’s client is subject to approval by the IP Team to ensure that the third-party client is appropriate for exercising the functionality of the relevant server.

  • Special Adaptations: adaptations of client or server software to exercise relevant mainstream IT technology and standards such as PKI and e-commerce technologies.

A.7. Testing and Integration Activity Types

This type of activity would integrate, document, and test functioning interoperable components that execute operational elements, assigned tasks, and information flows required to fulfill a set of testbed requirements. Particular Testing and Integration Activity Types that may be specified in the Proposal include the following:

  • Component Interface Tests: Participants selected to deploy any testbed components must conduct one or more formal TIEs that exercise each server and client component’s ability to properly implement the interfaces, operations, encodings, and messages developed during the testbed. Multiple TIEs and multiple iterations of a particular TIE will be conducted during the testbed.

  • Test Result Analysis: Participants required to participate in TIEs must report the outcomes and relevant software reporting messages to the IP Team and in Monthly Technical Reports.

  • Configuration Management: communication of the location (URLs) of the server and other components, provision of any updates about the location and operational status of the components, and provision of information about the interface implemented by the servers.
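As a hedged illustration of the kind of automated check a Component Interface Test might run, the Python sketch below parses a trimmed sample WFS GetCapabilities document and verifies that the operations a client depends on are actually advertised. The sample XML, operation names, and function names are illustrative assumptions for this sketch, not artifacts defined by this CFP.

```python
# Sketch of a minimal TIE-style interface check: does a (hypothetical)
# WFS capabilities document advertise the operations a client expects?
# The XML below is a trimmed, illustrative sample, not real server output.
import xml.etree.ElementTree as ET

SAMPLE_CAPABILITIES = """\
<wfs:WFS_Capabilities xmlns:wfs="http://www.opengis.net/wfs/2.0"
                      xmlns:ows="http://www.opengis.net/ows/1.1" version="2.0.0">
  <ows:OperationsMetadata>
    <ows:Operation name="GetCapabilities"/>
    <ows:Operation name="DescribeFeatureType"/>
    <ows:Operation name="GetFeature"/>
  </ows:OperationsMetadata>
</wfs:WFS_Capabilities>
"""

OWS = "{http://www.opengis.net/ows/1.1}"


def advertised_operations(capabilities_xml: str) -> set:
    """Return the operation names advertised in the OperationsMetadata block."""
    root = ET.fromstring(capabilities_xml)
    return {op.attrib["name"] for op in root.iter(OWS + "Operation")}


def tie_check(capabilities_xml: str, required: set) -> list:
    """Return the required operations missing from the capabilities document."""
    return sorted(required - advertised_operations(capabilities_xml))


missing = tie_check(SAMPLE_CAPABILITIES,
                    {"GetCapabilities", "DescribeFeatureType", "GetFeature"})
print("TIE check passed" if not missing else "Missing operations: %s" % missing)
```

In a real TIE the capabilities document would be fetched from the other Participant's public endpoint, and the outcome (pass/fail plus any missing operations) would feed the Test Result Analysis reporting described above.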

A.7.1. Solution Transfer Activity Types

This type of activity would prepare prototype interoperable components to enable them to be assembled at another site. Particular Solution Transfer Activity Types that may be specified in the Proposal include the following:

  • Software Installation: Participants implementing testbed components may provide a licensed copy of testbed-relevant software components or data for integration onto the OGC Network. This could be accomplished by making the software components available from an open site on their network OR by installing it (and ensuring stability) on a sponsor or other host machine on the OGC Network. If the latter option is taken, then the Participant must provide a technical representative to support installation of the software components.

    • Virtualization and Containerization: Participants implementing testbed components may optionally package and distribute these components using virtual machines (VMs) or containers. The purpose of this option is to experiment with these technologies and provide recommendations for their potential use in future initiatives.

A.7.2. Demonstration Activity Types

The testbed Demonstration will build upon the initiative characteristics developed during Kickoff demonstration scenario design and creation discussions. The goal is for Participants to build and implement prototypes that clearly demonstrate the capabilities of the components by exercising Sponsor scenarios. All Demonstration assets (e.g., video recordings) must be delivered to OGC, where they will be made available to Sponsors via the Internet, either for presentation purposes or for use in their internal labs.

Demonstration activities (instances of the Activity Types listed below) would define, develop, and deploy functioning interoperable components that execute operational elements, assigned tasks, and information flows required to fulfill a set of testbed requirements. In contrast to Testing and Integration activities, Demonstration activities are intended primarily to support demonstration of enabled end-user capabilities. Particular Demonstration Activity Types that may be specified in the Proposal include the following:

  • Demonstration Use Case Development: provision of a technical representative to develop or support the development of use cases that define and explain the utility of the interfaces and encodings developed during the testbed. These use cases will be used to provide a basis for Demonstration storyboards and for the Demonstration itself.

  • Demonstration Storyboard Development: provision of a technical representative to develop or support the development of the storyboards that will define the structure and content of the Demonstration.

  • Demonstration Preparation and Delivery: Participants selected to deploy any testbed components must provide a technical representative to develop or support the development of the Demonstration that will exercise the functionality of the interfaces developed during the testbed. A representative must also be available to support the Demonstration event itself. Participants must perform four sub-activities: design, build, and test the Participant’s demonstrated components, and then package these for public sharing. This activity could also include the identification of other relevant data providers and incorporation of their data sources.

  • Assurance of One Year of Availability: Participants selected to deploy any server testbed component must maintain this software and make the service endpoint available to OGC for a period of no less than one year after the completion of the first Demonstration. Some sponsors may be willing to entertain exceptions to this requirement on a case-by-case basis.

A.7.3. Documentation Activity Types

This type of activity would ensure development and maintenance of the pre-specification, pre-conformant interoperable OGC technologies (including draft and final Engineering Reports) and the system-level documentation (sample user documentation, etc.) necessary to execute the testbed. This type of activity may include coordination with the OGC Standards Program.

Important
The three requirements described in the first bullet below are substantially different from those in prior testbeds. These requirements should be examined carefully and included in any bid proposing document deliverables such as ERs.

Particular Documentation Activity Types that may be specified in the Proposal include the following:

  • Engineering Report Development: Participants selected to perform engineering report development must provide a technical representative to serve as editor of, or contributing author to the relevant Engineering Report (ER) (or subsection thereof). ER editors will be required to carry out three additional activities in this testbed:

    1. To consult the most relevant SWG/DWG regarding its current status and latest discussions on the ER subject matter

    2. To join the relevant email list to observe ongoing WG discussions, and

    3. To provide language in the initial ER describing how the testbed work aligns with WG discussions.

  • ERs must also include all relevant items from the following list as applicable:

    • Findings

    • Recommendations

    • Change Request(s)

    • Use Case(s)

    • Architectural Overview

    • Relevant UML Model(s)

    • XML Schema Document(s)

    • Abstract Test Suite(s)

  • Independent Change Request Development (not included as part of an ER): Participants selected to perform independent change request development (not included as part of an ER) must provide a technical representative to serve as editor of, reviewer of, or contributor to the relevant Change Request (CR) to an existing OGC standard. All developed CRs must be entered into the CR system at http://ogc.standardstracker.org/.

  • Independent Use Case Development (not included as part of an ER): Participants selected to deploy any (server or client) testbed components must provide a technical representative to develop use cases to show the functionality of their software components in the context of the testbed architecture.

  • Independent Architectural Overview Development (not included as part of an ER): Participants selected to deploy any (server or client) testbed components must provide a technical representative to develop an architectural overview of their software components as relevant to the testbed architecture.

  • System Configuration Development: Participants selected to deploy any testbed components to be installed at sponsor or other host sites connected to the OGC Network must provide a technical representative to develop a detailed document describing the combined environment of hardware and software components that compose their contribution to the testbed.

  • Installation Guide Development: Participants selected to deploy any testbed components to be installed at sponsor or other host sites connected to the OGC Network must provide a technical representative to develop an installation guide for their software components.

  • Training Material & User Guide: Participants selected to deploy any testbed components to be installed at sponsor or other host sites connected to the OGC Network must provide a technical representative to develop a User Guide and Training Materials pertaining to their software components developed or modified for the testbed.

A.8. Compliance Test Development Activity Types

This type of activity involves the development of draft compliance test guidelines (at a minimum) and test suites for engineering specifications detailed in Engineering Reports. This type of activity would likely include coordination with the OGC Compliance Program. Particular Compliance Test Development Activity Types that may be specified in the Proposal include the following:

  • Summarization of TIEs, Demo Results, and Data Issues: provision of a summary of information detailing progress pertaining to the implementation of the interface by including TIE results, lessons-learned from the demo, and particular data issues.

  • Full Compliance Test: provision of an outline of all of the necessary information to conduct a valid compliance test of the interface, including the sub-activities below.

    • Compliance Test Cases: provision of an outline of valid compliance tests for the software component, including identification of all required and optional server requests in the interface, the acceptable results for testing servers, the syntax checks to perform for testing client requests, an explanation of an acceptable verification of the results (machine, human, etc.), a list of expected/valid warnings or exceptions to interface behavior, and a matrix of test dependencies with an explanation of how to order tests appropriately for inherent tests and dependencies.

    • Compliance Test Data: identification of appropriate data sets for use in conducting a compliance test for an interface (server or client) or encoding.

    • Compliance Test Recommendations: documentation of recommendations to resolve issues with the current state of the interface or with the compliance tests. For candidate specifications, this documentation must, at a minimum, consist of test guidelines that would form the basis for development of more detailed and complete test scripts as the specification matures toward an approved specification. For mature candidate specifications, Participants must evolve existing test scripts or prepare new ones to form a complete set of tests that fully test an implementation of a specification for compliance with its requirements. This documentation must be embodied in an Engineering Report as well as in any GitHub repository that exists for a particular standard.

Appendix B: Technical Architecture

B.1. Introduction

This Annex B provides background information on the OGC baseline, describes the Testbed-13 architecture and thread-based organization, and identifies all requirements and corresponding work items. For general information on Testbed-13, including deadlines, funding requirements and opportunities, please refer to the Testbed-13 CFP Main Body.

Each thread aggregates a number of requirements, work items and corresponding deliverables, which are funded by different sponsors. The work items are organized in work packages that correspond to one or more related requirements. The work packages have then been assigned to six threads:

  • Thread 1: Dynamic Source Integration (DSI)

  • Thread 2: Earth Observation Clouds (EOC)

  • Thread 3: Cross Community Interoperability (CCI)

  • Thread 4: Field Operations (FO)

  • Thread 5: Streaming & 3D Data (S3D)

  • Thread 6: Compliance Testing (COT)

B.2. Testbed Baseline

B.2.1. Types of Deliverables

The OGC Testbed-13 threads require several types of deliverables. It is emphasized that the deliverable indications "funded" or "unfunded" in this Annex B are informative only. Please refer to the Testbed-13 CFP Main Body for binding definitions, and make sure your deliverables are made available after the final demonstration of the testbed according to the requirements defined in section Annex A: Solution Transfer Activity and Demonstration Activity.

Documents

Engineering Reports (ER) and Change Requests (CR) will be prepared in accordance with OGC published templates. Engineering Reports will be delivered by posting on the OGC Portal Pending Documents list when complete and the document has achieved a satisfactory level of consensus among interested participants, contributors and editors. Engineering Reports are the formal mechanism used to deliver results of the Innovation Program to sponsors and to the OGC Standards Program and OGC Standard Working Group or Domain Working Groups for consideration. It is emphasized that participants delivering engineering reports must also deliver Change Requests that arise from the documented work.

Implementations

Services, Clients, Datasets and Tools will be provided by methods suitable to their type and stated requirements. For example, services and components (e.g. a WFS instance) are delivered by deployment of the service or component for use in the testbed via an accessible URL. A Client software application or component may be used during the testbed to exercise services and components to test and demonstrate interoperability; however, it is most often not delivered as a license for follow-on usage. Implementations of services, clients and data instances will be developed and deployed in all threads for integration and interoperability testing in support of the agreed-upon thread scenario(s) and technical architecture. The services, clients, and tools may be invoked for cross-thread scenarios in demonstration events.

B.2.2. OGC Reference Model

The OGC Reference Model (ORM), version 2.1, provides an architecture framework for the ongoing work of the OGC. Further, the ORM provides a framework for the OGC Standards Baseline. The OGC Standards Baseline consists of the member-approved Implementation/Abstract Specifications as well as a number of candidate specifications that are currently in progress.

The structure of the ORM is based on the Reference Model for Open Distributed Processing (RM-ODP), also identified as ISO/IEC 10746. This is a multi-dimensional approach well suited to describing complex information systems.

The ORM is a living document that is revised on a regular basis to continually and accurately reflect the ongoing work of the Consortium. We encourage respondents to this CFP to learn and understand the concepts that are presented in the ORM.

This Annex B refers to the RM-ODP approach and will provide information on some of the viewpoints, in particular the Enterprise Viewpoint, which is used here to provide the general characterization of work items in the context of the OGC Standards portfolio and standardization process, i.e. the enterprise perspective from an OGC insider.

The Information Viewpoint considers the information models and encodings that will make up the content of the services and exchanges to be extended or developed to support this testbed. Here, we mainly refer to the OGC Standards Baseline, see section Standards Baseline.

The Computational Viewpoint is concerned with the functional decomposition of the system into a set of objects that interact at interfaces – enabling system distribution. It captures component and interface details without regard to distribution and describes an interaction framework including application objects, service support objects and infrastructure objects. The development of the computational viewpoint models is one of the first tasks of the testbed, usually addressed at the kick-off meeting.

rmodp
Figure 2. Reference Model for Open Distributed Computing

The Engineering Viewpoint is concerned with the infrastructure required to support system distribution. It focuses on the mechanisms and functions required to:

  1. support distributed interaction between objects in the system, and

  2. hide the complexities of those interactions.

It exposes the distributed nature of the system, describing the infrastructure, mechanisms and functions for object distribution, distribution transparency and constraints, bindings and interactions. The engineering viewpoint will be developed during the testbed, usually in the form of TIEs (Technology Integration Experiments), where testbed participants define the communication infrastructure and assign elements from the computational viewpoint to physical machines used for demonstrating the testbed results.

B.2.3. OGC Standards Baseline

The OGC Standards Baseline is the currently approved set of OGC standards and other approved supporting documents, such as the OGC abstract specifications and Best Practice Documents. OGC also maintains other documents relevant to the Interoperability Program including Engineering Reports, Discussion Papers, and White Papers.

OGC standards are technical documents that detail interfaces or encodings. Software developers use these documents to build open interfaces and encodings into their products and services. These standards are the main "products" of the Open Geospatial Consortium and have been developed by the membership to address specific interoperability challenges. Ideally, when OGC standards are implemented in products or online services by two different software engineers working independently, the resulting components plug and play, that is, they work together without further debugging. OGC standards and supporting documents are available to the public at no cost. OGC Web Services (OWS) are OGC standards created for use in World Wide Web applications. For this testbed, it is emphasized that all OGC members have access to the latest versions of all standards. If not otherwise agreed with the testbed architects, these shall be used in conjunction with - in particular - engineering reports resulting from previous testbeds.

Any documents and Schemas (xsd, xslt, etc.) that support an approved (that is, approved by the OGC membership) OGC standard can be found in the official OGC Schema Repository.

The OGC Testing Facility web page provides online executable tests for some OGC standards. The facility helps organizations better implement service interfaces, encodings and clients that adhere to OGC standards.

B.2.4. Data

All participants are encouraged to provide data that can be used to implement the various scenarios that will be developed during the testbed. A number of testbed sponsors will provide data, but it might be necessary to complement these with additional data sets. Please provide detailed information if you plan to contribute data to this testbed.

B.2.5. Services in the Cloud

Participants are encouraged to provide data or services hosted in the cloud. There is an overarching work item to provide cloud-hosting capabilities to allow thread participants to move services and/or data to the cloud.

B.3. Testbed Threads

Testbed-13 is organized in a number of threads. Each thread combines a number of work packages that are further defined in the following chapters. The threads integrate both an architectural and a thematic view, which allows related work items to be kept closely together and removes dependencies across threads. The following figure illustrates the allocation of work packages to threads.

ThreadsOverview
Figure 3. Overview of work package allocation to threads

B.4. Work Packages

Each of the following sections provides a detailed description of a particular work package.

Note

Please note that a few of the links below to Testbed-12 Engineering Reports will result in “404 Not Found” errors, as some of these reports are still being prepared for publication.

B.5. Cloud Computing Environment for Earth Observation Data

Nowadays, Earth Observation satellites generate terabytes to petabytes of raw data each year. Looking to the future, new high-resolution sensors, whether multi-, super-, or hyperspectral, will lead to even higher data quantities to be processed. This huge amount of data needs to be stored, preserved, processed to higher levels, and distributed to the user communities (Cossu et al.). At the same time, Earth Observation commercial data sales have increased by 550% in the last decade. The field is considered a key element in the European Research Road Map and a global market opportunity for the coming years. The forecast for this decade is $4 billion in commercial data sales by the end of 2019. This makes EO a major field of new business opportunities and work (entice; Becedas et al., 2015a; Becedas et al., 2015b; Ramos, 2016).

cloud CC0
Figure 4. Cloud: Upload, download, process, interaction

Cloud computing is rapidly growing in importance for many organizations, with ongoing take-up of a wide range of cloud services and the transition of both data and applications to cloud computing environments. The topics of interoperability and portability are significant considerations in relation to the use of cloud services. Cloud computing is having an enormous impact on how organizations manage their information technology resources. The abundance of easy-to-access computing resources enabled by cloud computing provides significant opportunities for organizations, but poses challenges for enterprises in a number of areas. The current cloud computing landscape consists of a diverse set of products and services that range from infrastructure services (IaaS) to specific software services (SaaS), development and delivery platforms (PaaS), and many more. The variety of cloud services has led to proprietary architectures and technologies being used by vendors, increasing the risk of vendor lock-in for customers. Incidents such as cloud service providers shutting down operations or the discovery of significant security vulnerabilities in applications have highlighted this risk [Cloud Council, 2014].

Testbed-13 shall help clarify the specific interoperability and portability concerns that arise in the large cloud ecosystem with its variety of offered cloud services. From the interoperability perspective, two aspects in particular are of primary interest in Testbed-13:

  1. Cloud API interoperability is a major issue at the moment, with all cloud providers offering a dedicated API to interact with their specific cloud. The APIs provide multiple interfaces to cover all types of interactions, such as the functional administration of cloud services, authentication and authorization, billing, or invoicing. Ideally, these APIs would be standardized, so interaction with different clouds would have minimal impact on the customer’s components. Changing cloud services across providers would be a smooth experience.

  2. Application portability is the ability to easily transfer an application or application components from one cloud service to a comparable cloud service and run the application in the target cloud service. The ease of moving the application or application components is the key here. The application may require recompiling or relinking for the target cloud service, but it should not be necessary to make significant changes to the application code.

Testbed-13 addresses key elements in cloud computing research, such as loosely-coupled PB-sized archives for rapid geospatial information product creation at any scale based on open standards. This work item on Cloud Computing Environment for Earth Observation Data shall be reviewed in conjunction with the Part 2 ITT issued through ESA EMITS. The goal is to develop an integrated solution that works for the ESA Exploitation Platforms as well as for the Canadian Forestry Service.

The topic of the Canadian Forestry Service application is extracting forest features or biophysical parameters from space-borne Synthetic Aperture Radar (SAR) and optical data products while using the synergistic combination of Earth Observation (EO) missions to estimate forest biomass in Canada.

Testbed-13 shall develop a cloud computing environment for Earth observation data (big data) that is integrated with OGC web services. The environment shall support hosting of data processing tools including all necessary deployment and management steps. The environment is illustrated in the following figure. The individual steps are further described below.

cloudEnvironment
Figure 5. Cloud Environment Overview

The cloud computing environment shall support the following aspects, indicated as steps in the figure above.

  1. Software toolbox deployment, configuration and maintenance;

  2. Receiving job requests through OGC WPS (the WPS optionally being part of the cloud). It is imperative to use the open-source RadarSat-2 Toolbox (RSTB) by Array Systems Computing and to add a WPS layer on top; Array’s open-source RSTB can be downloaded from http://step.esa.int/main/download/;

  3. Allocating resources dynamically based on demand and performing job splitting/scheduling/processing/tracking;

  4. Allocating required scratch storage for intermediate and final products;

  5. Supporting batch processing of multiple Radarsat-2 or other SAR/optical images (generic big/high-volume data processing);

  6. Being capable of integrating or exchanging data from different sources hosted in a cloud environment (and/or a traditional computing network);

  7. Gathering output elements into final products;

  8. Disseminating final products through OGC Web services such as WCS and WMS;

  9. Providing cloud usage statistics and user notification.
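
The request side of steps 2 and 8 can be sketched in client code. The following is a minimal, hedged illustration of building a WPS 2.0 Execute request with the Python standard library; the process identifier rstb:CompactPolSimulation and the input name sourceProduct are invented for illustration and are not defined by this CFP.

```python
# Illustrative sketch only: builds a WPS 2.0 Execute request document.
# The process id and input name are hypothetical, not taken from the CFP.
import xml.etree.ElementTree as ET

WPS = "http://www.opengis.net/wps/2.0"
OWS = "http://www.opengis.net/ows/2.0"
ET.register_namespace("wps", WPS)
ET.register_namespace("ows", OWS)

def build_execute_request(process_id: str, inputs: dict) -> str:
    """Serialize a minimal asynchronous WPS 2.0 Execute request."""
    root = ET.Element(f"{{{WPS}}}Execute", {
        "service": "WPS", "version": "2.0.0",
        "mode": "async",          # long-running batch job
        "response": "document",
    })
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    for name, value in inputs.items():
        inp = ET.SubElement(root, f"{{{WPS}}}Input", {"id": name})
        ET.SubElement(inp, f"{{{WPS}}}Data").text = value
    ET.SubElement(root, f"{{{WPS}}}Output", {
        "id": "result", "transmission": "reference"})
    return ET.tostring(root, encoding="unicode")

request_xml = build_execute_request(
    "rstb:CompactPolSimulation",   # hypothetical process identifier
    {"sourceProduct": "https://example.org/data/RS2_scene.zip"})
print(request_xml)
```

A client would POST such a document to the WPS endpoint; in asynchronous mode the service answers with a StatusInfo document containing a job identifier that can be polled until the products are ready for dissemination.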

The cloud environment documented above shall be further developed into an operational model that supports (pre-)processing of high volumes of Earth observation data. This (pre-)processing is envisioned to be handled in the Cloud, where processing power and storage can be elastic. In the Cloud, processors and storage can be increased or decreased as needed. Resources are only used when necessary, thus reducing the overhead costs of maintaining expensive servers or computing power.

Requirements for the operational Cloud model include:

  1. The ability to leverage large pools of computing resources (storage, networking and computing capacity/processors) from Public, Private and Hybrid Clouds and traditional dedicated servers (for example, using OpenStack). As ownership of data may be a concern to some agencies, the location of the storage of the datasets may require a Hybrid Cloud solution with the use of traditional dedicated servers.

  2. The ability to easily create or expand the number of instances/VMs when needed, without needing to reconfigure how WPS services are advertised.

  3. The ability to control access and authentication of users of the Web services and instances/VMs.

  4. The ability to log usage and jobs being performed.

  5. Must allow for the integration of WPS 2.0 Interface standards including constructs for service discovery, service capabilities, job control, execution and data transmission of inputs and outputs in a chain.

  6. Will provide a Web-enabled dashboard of the current usage and capacity of the computing resources of the Cloud infrastructure. Ideally, this dashboard can be integrated into the WPS dashboard that monitors execution requests, responses, etc.

  7. Must be able to publish and consume OGC services like WMS, WCS, and WFS.

  8. The operational Cloud model must be easily reproducible and documented.

  9. The operational cloud model should be general enough to support any type of EO data processing supported by Radarsat-2 Toolbox.

  10. Delivery of scripts that allow the re-installation of the cloud environment. In this context, licenses and data aspects need to be addressed during the testbed.

The last point addresses a general aspect of concern in previous testbeds: the repeatability of test experiments. To allow the cloud environment developed during the testbed to be tested and experienced, participants shall make use of tools such as AWS CloudFormation, Azure Resource Manager templates, OpenStack Heat, or, to remain more vendor-agnostic, Terraform. Here, OpenStack Heat is the preferred platform, as the sponsor requires the ability to re-install and run the orchestration on OpenStack Heat in their own private cloud environment. These tools allow the details of an infrastructure to be codified into a configuration file. The configuration files allow the infrastructure to be elastically created, modified and destroyed. Depending on the actual setup, additional tools for lower-level configuration management, such as Chef or Puppet, may be applied to manage bootstrapping and initializing resources.
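
Purely for illustration, such codification of infrastructure might take the shape of a minimal OpenStack Heat Orchestration Template (HOT) along these lines; all names, images and flavors below are placeholders, not sponsor requirements:

```yaml
# Hypothetical HOT sketch of a single processing node; image, flavor
# and bootstrap contents are placeholders for illustration only.
heat_template_version: 2016-10-14

description: Single RSTB/WPS processing node (illustrative only)

parameters:
  image:
    type: string
    default: ubuntu-16.04        # placeholder image name
  flavor:
    type: string
    default: m1.large            # placeholder flavor

resources:
  wps_node:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      user_data: |
        #!/bin/bash
        # bootstrap: install RSTB and the WPS wrapper
        # (lower-level configuration could be handed to Chef/Puppet)

outputs:
  wps_ip:
    description: Address where the WPS endpoint will be exposed
    value: { get_attr: [wps_node, first_address] }
```

A sponsor holding such a template (or its CloudFormation/Terraform equivalent) can re-create, scale, and destroy the environment on demand, which is exactly the repeatability property sought here.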

Applying cloud orchestration engine templates allows the participants to develop the cloud platform as described above, and sponsors with access to these scripts can re-hydrate the test environment whenever they want, e.g. one week or even two years later. It even allows the Cloud sponsors to execute test environments at any preferred scale in their own accounts.

OGC has addressed cloud technologies in several testbeds before. The work performed under this work item shall take previous work into account, such as the Testbed-10 Performance of OGC® Services in the Cloud: The WMS, WMTS, and WPS cases.

Requirements

cloudRequirements
Figure 6. Radar data image processing in the cloud requirements and work items

The following test case, covering both environment setup and operation, will need to be satisfied by the established environment.

  1. For cloud computing environment setup:

    1. Deploy and configure the RADARSAT-2 Toolbox (RSTB) software from Array Systems Computing (Array) on the cloud.

    2. The vendor will successfully demonstrate the WPS 2.0 functionality by using an Integrated Development Environment/ETL (Extract, Transform, Load) package (such as Pentaho/GeoKettle) to process RADAR data through the Cloud-enabled RSTB.

  2. For cloud computing environment operation:

    1. Receive job requests via OGC WPS;

    2. Allocate resources based on the number of input Radarsat-2 SQW images;

    3. Split the job, perform scheduling, and start tracking;

    4. Start batch processing;

    5. Fetch each Radarsat-2 SQW data set from a cloud source or a specified network as a zipped file; read in the Radarsat-2 SQW data in Single Look Complex (SLC) format from the fetched zip file; perform Compact Polarimetry (compact-pol) simulation from the SQW data for simulated RCM (Radarsat Constellation Mission) compact-pol products;

    6. Generate required compact-pol Stokes parameters;

    7. Perform compact-pol decomposition and generate decomposition parameters;

    8. Perform terrain correction on both decomposition and Stokes parameters;

    9. Output all geo-corrected compact-pol products to specified storage; gather output elements and organize final products; disseminate output products through OGC Web services such as WCS and WMS;

    10. Provide cloud usage statistics and send user notification.
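
Steps such as 3 (tracking) and 10 (notification) imply that clients monitor running jobs. As a hedged sketch, not a mandated design, parsing a WPS 2.0 StatusInfo response could look like the following; the sample job identifier and status value are invented for demonstration:

```python
# Minimal, illustrative parser for a WPS 2.0 StatusInfo document;
# the sample job ID and status below are invented for demonstration.
import xml.etree.ElementTree as ET

WPS_NS = "http://www.opengis.net/wps/2.0"

def job_status(status_info_xml: str) -> str:
    """Return the wps:Status value (e.g. Accepted, Running, Succeeded, Failed)."""
    root = ET.fromstring(status_info_xml)
    status = root.find(f"{{{WPS_NS}}}Status")
    if status is None or status.text is None:
        raise ValueError("no wps:Status element in response")
    return status.text.strip()

# Invented sample response; a real one comes from the WPS GetStatus operation.
sample = (
    '<wps:StatusInfo xmlns:wps="http://www.opengis.net/wps/2.0">'
    '<wps:JobID>job-42</wps:JobID>'
    '<wps:Status>Running</wps:Status>'
    '</wps:StatusInfo>'
)
print(job_status(sample))  # prints "Running"
```

A client would keep polling until the status leaves the Accepted/Running states, then retrieve the result references pointing at the products disseminated via WCS/WMS, and finally report usage statistics and notify the user.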

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NR001: Cloud ER - Engineering Report capturing all experiences made during the implementation of the Cloud Computing Environment for Earth Observation Data scenario. The Engineering Report shall define the WPS interface and communication protocol between clients and WPS instance that work as interfaces to the cloud computing environment. The Engineering Report requires close alignment with the work done as part of the Thematic Exploitation Platform work.

  • NR101: Cloud WPS 1 - WPS implementing the cloud computing environment as described above. Hosting the WPS on the cloud itself is preferred but not mandatory. Ideally, the entire setup is delivered so as to support re-installation and running of the orchestration on OpenStack Heat in the sponsor’s own private cloud environment. Alternative template approaches, such as a CloudFormation template to allow re-instantiation, are optional.

  • NR102: Cloud WPS 2 - WPS implementing the cloud computing environment as described above. Hosting the WPS on the cloud itself is preferred but not mandatory. Ideally, the entire setup is delivered so as to support re-installation and running of the orchestration on OpenStack Heat in the sponsor’s own private cloud environment. Alternative template approaches, such as a CloudFormation template to allow re-instantiation, are optional.

B.6. Semantic Registry

Testbed-12 introduced the Semantic Registry Information Model (SRIM), a superset of the W3C DCAT ontology, and the Semantic Registry Service. SRIM can accommodate registry items other than dcat:Dataset, such as Service descriptions, Schema and Schema Mapping descriptions, Portrayal Information (Styles, Portrayal Rules, Graphics and Portrayal Catalog), Layer, Map Context, etc. The Semantic Registry Service is used as an integration solution for federating and unifying information produced by different OGC Catalog Services, providing simplified access through a hypermedia-driven API using JSON-LD, Linked Data and HAL-JSON. During the testbed, the Semantic Registry was used to store information about geospatial datasets and services, schemas and portrayal information.
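
For illustration only (the identifiers and values below are invented, not taken from an actual registry), a SRIM registry item derived from dcat:Dataset might be exchanged as JSON-LD along these lines:

```json
{
  "@context": {
    "dcat": "http://www.w3.org/ns/dcat#",
    "dcterms": "http://purl.org/dc/terms/"
  },
  "@id": "http://example.org/registry/items/sample-dataset",
  "@type": "dcat:Dataset",
  "dcterms:title": "Sample hydrography dataset (illustrative entry)",
  "dcat:distribution": {
    "@type": "dcat:Distribution",
    "dcat:accessURL": "http://example.org/wfs?service=WFS&request=GetCapabilities"
  }
}
```

Because the item is plain JSON-LD, it can be consumed both as ordinary JSON by web clients and as RDF by Linked Data tooling, which is the dual access mode the Semantic Registry Service aims at.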

The Semantic Registry Information Model (SRIM) is defined as a superset of the W3C DCAT standard and encoded as an OWL ontology. However, OWL does not capture some of the semantic integrity constraints that are necessary to validate instance information encoded using the SRIM ontology profiles. This is not an isolated problem. The DCAT ontology, for example, defines a set of classes and properties and reuses a large number of external vocabularies such as Dublin Core, but does not provide any restrictions in the ontology. Users have to read profile documents such as DCAT-AP or GeoDCAT-AP to know which properties should be applied for a given class and how (mandatory, recommended or optional). For example, a Dataset may have only one title per language, or contact information should have either a person name, organization name or position name, and either an email address or telephone number. These kinds of restrictions cannot be captured with OWL, and until now implementing the constraints in code has required human interpretation.

To fill these gaps, the emerging W3C standard called the Shapes Constraint Language (SHACL) provides a powerful framework to define the "shape" of graph data and the ability to define complex integrity constraints using well-defined constraint constructs expressed in RDF and SPARQL/JavaScript. SHACL is not a replacement for RDFS/OWL, but a complementary technology that is not only very expressive but also highly extensible. While RDFS and OWL are used to define vocabulary terms (classes/properties) and their hierarchies (subclasses, subproperties), as well as the nature of the classes and properties (union, intersection, complement of classes, transitive, inverse, symmetric properties, etc.), SHACL is more appropriate for capturing property constraints (cardinality, valid values or shape values, and interdependencies between them) and capable of accommodating multiple profiles by providing different shapes for the same ontology. The SHACL vocabulary is not only defined in RDF itself, but the same macro mechanisms can be used by anyone to define new high-level language elements and publish them on the web. This means that SHACL will not only lead to the reuse of data schemas but also to domain-specific constraint languages. Furthermore, SHACL can be used in conjunction with a variety of languages besides SPARQL, including JavaScript. Complex validation constraints can be expressed in JavaScript so that they can be evaluated client-side. In addition, SHACL can be used to generate validation reports for quality control, potentially with suggestions to fix validation errors. Overall, SHACL is a future-proof schema language designed for the Web of Data. While SHACL is not yet a standard, there are already existing implementations (TopBraid, for example).
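
To illustrate, the "one title per language" restriction discussed above can be expressed as a SHACL shape roughly as follows; this is a non-normative sketch, and the shape IRI and choice of properties are assumptions:

```turtle
# Illustrative SHACL shape (not from this CFP): at most one dcterms:title
# per language tag on a dcat:Dataset, plus a mandatory contact point.
@prefix sh:      <http://www.w3.org/ns/shacl#> .
@prefix dcat:    <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/shapes#> .

ex:DatasetShape
    a sh:NodeShape ;
    sh:targetClass dcat:Dataset ;
    sh:property [
        sh:path dcterms:title ;
        sh:minCount 1 ;
        sh:uniqueLang true ;      # at most one title per language tag
    ] ;
    sh:property [
        sh:path dcat:contactPoint ;
        sh:minCount 1 ;           # a contact point is mandatory
    ] .
```

A SHACL processor evaluating instance data against such a shape emits a validation report listing each violated constraint, which is exactly the machine-checkable profile validation that OWL alone cannot provide.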

The overall goal of Testbed-13 is to investigate the applicability of SRIM to metadata sets provided by the sponsor and subsequently improve interoperability between ISO and DCAT metadata and the ISO/DCAT integration, and to analyze the usage of SHACL to constrain the content graphs.

Further, it should be evaluated to what extent the SRIM API is, or can be designed to be, aligned with the SKOS and OWL vocabularies, taking advantage of the fact that most modern vocabulary content is structured using SKOS and OWL classes and predicates. This API can then be used as the basis for various higher-level vocabulary applications (NLP applications, Concept Recommender, Semantic Enricher, etc.). These, in turn, can be used to enrich, for example, ISO 19115 metadata and other OGC services using controlled vocabularies in their metadata.

Testbeds-11 and 12 have explored the high-level description of ontologies and schemas to support semantic mediation, see the OGC 16-059: Semantic Portrayal, Registry, Mediation Services Engineering Report, which documents the findings of the activities related to the Semantic Portrayal, Registry and Mediation components implemented during the OGC Testbed-12.

Testbed-13 should explore in more detail the kind of metadata needed to enable search on controlled vocabularies by defining an ontology that addresses the following aspects:

  • Classification of vocabulary types

  • Relationships to other vocabularies (extensions, imports, specialization, metadata vocabularies, etc.)

  • Statistical information about vocabularies (number of concepts, concept schemes, classes, properties, instances, datatypes)

  • Schema encoding (OWL, RDF Schema, SKOS)

  • Expressiveness

  • Preferred prefix

  • Preferred Namespace URI

  • Governance metadata

  • Versioning information

Based on this ontology, a standard REST API can be designed to search and access vocabulary metadata and their terms, following REST API best practices (for example, hypermedia-driven design).
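
As a purely illustrative sketch of such a hypermedia-driven response (all paths, field names and values are invented), a vocabulary metadata resource might look like:

```json
{
  "prefLabel": "Land Cover Vocabulary",
  "schemaEncoding": "SKOS",
  "conceptCount": 215,
  "preferredPrefix": "lcv",
  "preferredNamespaceUri": "http://example.org/def/landcover#",
  "_links": {
    "self":     { "href": "/registry/vocabularies/landcover" },
    "concepts": { "href": "/registry/vocabularies/landcover/concepts" },
    "versions": { "href": "/registry/vocabularies/landcover/versions" }
  }
}
```

Link relations such as "concepts" and "versions" let a client navigate from the vocabulary record to its terms and version history without hard-coding URL structures, in keeping with the hypermedia-driven best practices referenced above.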

For Testbed-12, the Semantic Registry harvested information from a federation of CSW services, as the focus was to exercise the Semantic Registry Information Model (SRIM) and the REST API. For Testbed-13, the intent is to focus on improving the efficiency of the harvesting process by investigating the publish/subscribe protocol and versioning management of the register items in the Semantic Registry as they change over time.

Requirements

The following figure illustrates the work items and requirements in the context of DCAT/SRIM.

DCATRequirements
Figure 7. DCAT/SRIM requirements and work items

This work shall include:

  • SRIM applicability to USGS metadata - Testbed-13 shall investigate the applicability of SRIM to a number of sponsor-provided metadata sets. In order to do so, a SRIM Layer and Map Profile needs to be developed. Layers and Maps are very commonplace, but there is no standard way to describe their metadata. While Layer and Map are both derived from dcat:Dataset, they have their own specific metadata. Testbed-13 shall investigate a profile for Layer and Map that extends the Registry Item and relates to Datasets, Services and Portrayal Information developed for the Semantic Registry and Semantic Portrayal Service.

  • SRIM Best Practices - Testbed-13 shall produce a best practice document on producing metadata for datasets and services by mapping ISO 19115 to SRIM. This work shall include requests for changes to improve the current ISO 19115 standard to better align with current best practices for Linked Data publication, in particular the use of controlled vocabularies, linked-data-friendly identifiers, and better service descriptions that enable automated access to services.

  • SHACL - Testbed-13 shall investigate the application of SHACL shapes to define application profiles, form generation and data entry, data validation and quality control of linked data information.

  • Pub/Sub Support - Testbed-13 shall extend the Semantic Registry with publish/subscribe support for harvested catalogs or other instances of Semantic Registries.

  • Controlled Vocabularies - Testbed-13 shall explore in detail the kind of metadata needed to enable search on controlled vocabularies by defining an ontology that addresses all aspects listed above.

  • Registry API - Testbed-13 shall develop a REST API to search and access vocabulary metadata and their terms. The API shall support REST best practices as defined in the Testbed-12 REST User Guide (OGC 16-057) (e.g. hypermedia support).

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • UG002: DCAT/SRIM ER - Engineering Report that captures all requirements, solutions, and implementation experiences of the DCAT/SRIM work in Testbed-13.

  • UG101: DCAT/SRIM Server - REST API to SRIM data. The server shall support all requirements for DCAT/SRIM as stated above.

  • NG124: Pub/Sub CSW - Instance of a CSW 2.0.2 or 3.0 with support for Publish/Subscribe to allow the SRIM-instance to subscribe for updates.

B.7. Quality of Service in the Aviation Domain

In recent years, the concept of data quality has generated a notable interest among SWIM implementers, both organization-specific and global. In the context of SWIM — and SOA implementations in general — data quality pertains to two major use cases, service advertising and service validation.

  1. In service advertising, a service makes known to a potential consumer the quality of the data provided by the service. Based on this information, the consumer can determine whether or not the service meets its needs.

  2. In service validation, assurance is given that the quality of the data provided by a service is consistent with the quality that is explicitly defined in a service contract or any kind of agreement that may exist between a service provider and service consumer.

Both use cases share two common preconditions: (1) an unambiguous definition of the concept of data quality exists, and (2) a set of measurable parameters that allow specifying data quality is defined.

The successful completion of the artifacts specified in the requirements defined below may well lead to the development of a Data Quality Assessment Service (DQAS). While the requirements and funding for supporting development of a DQAS have not yet been established, the following considerations are included to help future implementers see the “whole picture”:

  • The DQAS will be an intermediary service whose purpose is determining the quality of data produced by information services.

  • The DQAS may receive the data to be assessed either through invocation of an information service or as a directly provided data infoset. Consequently, the DQAS may be used either at design time or at run time.

  • The DQAS will evaluate the quality of data based on a set of criteria, which can either be specified in an external document (e.g. organization-specific requirements for data quality) or be provided as part of the DQAS request. The inputted parameters (criteria) should be derived from the Data Quality abstract model and assessment specification defined in FA001 and FA002 respectively.

  • As an output, the DQAS should generate a data quality assessment report.

The DQAS work shall consider and extend a previously tested pattern for quality assessment, which separates the rules from the actual data and uses a Web Processing Service to apply the rules to data. In detail, this pattern includes:

  1. Data Service (holding the data to be checked)

  2. Rules Registry (holding the rules to be applied to the data)

  3. WPS (accesses the Data Service and the Rules Registry; applies the rules to the data; produces a report)
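The three-part pattern above can be sketched in a few lines of Python. The records, rule names, and report fields below are purely illustrative stand-ins for the Data Service, Rules Registry, and WPS roles; they are not part of any defined specification.

```python
# Minimal sketch of the rules-separated quality-assessment pattern.
# All data, rule names, and report fields are illustrative.

def data_service():
    """Stands in for the Data Service holding the data to be checked."""
    return [{"id": "wx-001", "visibility_m": 800, "updated_min_ago": 3},
            {"id": "wx-002", "visibility_m": None, "updated_min_ago": 90}]

def rules_registry():
    """Stands in for the Rules Registry; each rule is (name, predicate)."""
    return [("completeness", lambda r: r["visibility_m"] is not None),
            ("timeliness",   lambda r: r["updated_min_ago"] <= 15)]

def assess(records, rules):
    """Plays the WPS role: applies every rule to every record and
    produces a data quality assessment report."""
    return [{"id": r["id"],
             "failed": [name for name, ok in rules if not ok(r)]}
            for r in records]

report = assess(data_service(), rules_registry())
```

Keeping the rules outside the assessment code is the essential property of the pattern: new criteria can be registered without redeploying the processing service.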

Figure Data Quality Assessment activities depicts the activities that need to be completed to produce a prototype of a DQAS. It should be noted that the process of developing a DQAS as shown in the figure below will require further elaboration. It may include development of a test information service, a client, determining a type and format of data, and so on.

DQASactivities
Figure 8. Data Quality Assessment activities. For the activities and artifacts in green, the funding is established and requirements have been developed; for activities in red, requirements and funding are yet to be determined.

Requirements

The following figure illustrates the work items and requirements in the context of Quality of Service in the Aviation Domain.

QualityRequirements
Figure 9. Quality of Service in the Aviation Domain requirements and work items
  1. Testbed-13 shall develop an abstract or conceptual model for data quality in the context of SOA services in general and OGC-compliant services in particular. This model should articulate the concept of data quality as well as associated facets, concepts and relationships. It shall be based on widely used industry standards and models, such as the Service Description Conceptual Model (SDCM), ISO 19157, and QualityML. The results of this work shall be captured in an Engineering Report "Abstract Quality Model".

  2. Upon successful completion of the abstract model, Testbed-13 shall develop a Data Quality Assessment Specification. This specification should define a set of data quality parameters as well as the methods and units of measure employed for measuring these parameters. This specification should be information domain neutral, i.e., it should specify data quality characteristics and methods that can be applied to all aviation information domains: weather, flight, and aeronautical. The results of this work shall be captured in an Engineering Report "Data Quality Specification", which should also include:

    • An extension mechanism for the abstract model to be extended to address domain-specific requirements.

    • A mechanism for augmenting the SDCM with classes/concepts for describing a service’s data quality. This includes taxonomies that capture defined parameters, methods of measurement, and units of measure.

    • Discussions of the relationships between Quality of Service (QoS) parameters already defined in the SDCM and data quality parameters proposed in the specification.

  3. Upon completing the data quality specification, Testbed-13 shall develop a Data Quality Assessment Service Specification. The specification shall include design patterns, extension mechanisms, and service interface considerations. The specification should also support future extensions to allow using it with domain-specific data quality parameters. The results of this work shall be captured in an Engineering Report "Quality Assessment Service".

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • FA001: Abstract Quality Model ER - Engineering Report that will capture status quo, discussions, and results in the context of requirements defined above.

  • FA002: Data Quality Specification ER - Engineering Report that will capture status quo, discussions, and results in the context of requirements defined above.

  • FA003: Quality Assessment Service ER - Engineering Report that will capture status quo, discussions, and results in the context of requirements defined above.

B.8. Taxonomies in the Aviation Domain

One of the critical factors in the overall usability of services - and SWIM-enabled services in particular - is the ability of a service to be discovered. This ability is assured by providing a uniformly interpretable set of service metadata that can be accessed by a service consumer through a retrieval mechanism (e.g., a service registry). Such a set of metadata (commonly referred to as a service description) has been defined by FAA and EUROCONTROL and formalized in a Service Description Conceptual Model (SDCM). The SDCM is currently used in standard service description documents and service registries by both FAA and EUROCONTROL. As part of the effort to enhance service discovery, both organizations also use a number of categories that can be associated with all services and are generally referred to as taxonomies. The current set of taxonomies used by both EUROCONTROL and FAA categorizes (tags) services based on their availability status, interface model, data product, etc. However, despite the increasing role of OGC-compliant services in the SWIM environment, no taxonomies for categorizing services based on geographical coverage or other geospatial characteristics have been defined.

Requirements

The following figure illustrates the work items and requirements in the context of Taxonomies in the Aviation Domain.

TaxonomiesRequirements
Figure 10. Taxonomies in the Aviation Domain requirements and work items
  1. Develop a concept of geospatial taxonomies that will efficiently support classification of services based on their geospatial characteristics (e.g., geographical coverage). The concept should take into account all relevant geospatial characteristics, such as nation states, flight information regions, and airspace classifications.

  2. Provide considerations for modifications of the SDCM to support the use of geospatial taxonomies.

  3. Produce one or more taxonomies in formats suitable for use by software clients (e.g., XML, RDF).
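As a rough illustration of requirement 3, the following sketch serializes a tiny geospatial taxonomy to XML with the Python standard library. The element names are assumptions, not a defined SDCM or SWIM encoding, and the sample FIR identifiers are only placeholders.

```python
import xml.etree.ElementTree as ET

# Illustrative serialization of a geospatial taxonomy; the element
# names below are not a defined SDCM or SWIM encoding.
def build_taxonomy():
    root = ET.Element("Taxonomy", name="GeographicalCoverage")
    fir = ET.SubElement(root, "Category", name="FlightInformationRegion")
    for code in ("EDUU", "LFFF", "KZNY"):  # sample FIR identifiers
        ET.SubElement(fir, "Term").text = code
    return root

xml_text = ET.tostring(build_taxonomy(), encoding="unicode")
```

An RDF/SKOS serialization of the same structure would be the other obvious candidate, since SKOS is widely used for machine-readable controlled vocabularies.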

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • FA004: Geospatial Taxonomies ER - Engineering Report that will capture status quo, discussions, and results in the context of geospatial taxonomies in the aviation domain. It shall at least address all requirements described above.

B.9. Adding “Fit for Purpose” as a user parameter into OGC services

Summary: Testbed-13 shall investigate updates to OGC Web service interface specifications (WMTS, WCS, WFS, etc.) to allow a user to specify a “purpose” in query requests rather than explicitly having to provide specific filter attributes. This would be valuable for end users who do not know or understand all the associated metrics; even when they do, providers often do not use consistent metadata standards. This “purpose” abstracts the filter requirements away from the end user. The “purpose” would in essence be a reference to one of a set of predefined profiles which drive the selection criteria. Associated with this would be the need for a new OGC service operation to discover the available profiles. Within the testbed it shall be demonstrated how a user could discover available profiles and access data via these profiles within a client application. Ideally, Testbed-13 would show how these profiles can utilize metrics (like A3C quality metadata from Testbed-12) to allow clients to get more focused data responses using these profiles to better assess a true “Fit for Purpose” (FfP) response from the provider.

Background: End users likely understand that imagery, imagery analytics, or other forms of geospatial data may be used to address their specific problem space. Currently, it requires someone with geospatial expertise to know what geospatial products may be used, and how to discover and access this data. This is particularly difficult given the inconsistent use of metadata standards across providers. OGC services currently help with the discovery and access of data, but we are still limited by the assumption that the client and client systems must explicitly provide the filter criteria to the servers in the discovery process. In order to open up geospatial data to broader markets, we need to reduce the level of complexity. Unfortunately, the industry is moving in the opposite direction and has in fact become more complex for several reasons:

  • New types of sensors are being deployed (small satellites, CAVIS, LiDAR, etc.)

  • More protocols exist related to new sensors (e.g. point cloud specifications, etc.)

  • More single sensor and derived multi-sensor products are being defined and created (e.g. information products, etc.)

  • Multiple vendors exist with different product specs and models for cataloging data

The goal of this testbed is to simplify the process for non-geospatial experts by standardizing a way for provider services to create and present predefined profiles (defined by the service provider) which are tailored to address specific use cases. By using the profiles, the end user need not be aware of the actual details required by the backend system to perform the desired functions.

Migration
Figure 11. Mass migration routes map. source: edmaps.com

The following use case, focused on the Testbed-13 mass population migration scenario, shall help direct the Testbed-13 developments on "Fit for Purpose" parameters.

A user is trying to understand and monitor refugee migration across several countries following a civil war in a neighboring country. The user is trying to determine the following:

  1. Where is migration happening (to and from)?

  2. How fast are the refugee camps growing and what are the estimated populations?

  3. What are the road logistics around the refugee camps?

  4. Are there water sources available?

  5. Where are houses being abandoned?

The user is a non-geospatial user, but knows recent satellite imagery and other geospatial data should provide the information he needs. What the user needs is for the discovery and analysis tools to provide him access to the information based on his desired result. The ideal use case for the first question would look something like this:

  1. The user goes to a vendor’s discovery tool and opens the browser and locates the general area on a map.

  2. The tool provides a link to access all the predefined use-case-based discovery options (i.e. profiles) that the vendor supports. In this case, let’s assume the tool lists the following discovery profiles: Road Mapping, Agricultural Land Use, Change Detection, Urban Mapping, etc.

  3. Let’s assume the user wants to look first at road infrastructure near the current refugee camps so they select the “Road Mapping” profile.

  4. On the backend, the profile is translated into a set of “Fit for Purpose” attributes, in this case the ground sample distance, accuracy, the currency, and other factors the provider knows will result in a good road mapping product. These attribute filter options are sent to the catalog system.

  5. The results are presented to the user on the display (as browse samples), and the user is given the option to produce a mapping product for downloading.

  6. The user selects the option to produce and download the map product, at which point the client system passes the request (with the selected imagery references) to a WPS service with the same Road Mapping profile selected.

  7. The backend system translates the profile into a set of “Fit for Purpose” attributes again, this time representing processing attributes such as product type (1:5,000 scale mosaic, bands to use, requested GSD, bit depth of product, projection, …), and the WPS process creates the imagery and places it in a cloud-hosted location for the user.

  8. The user is presented the option to view the full resolution product within the client tool or download for offline processing.
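The server-side translation in steps 4 and 7 can be sketched as a simple lookup from profile name to filter attributes that are then encoded into a catalog query. The profile names, attribute keys, and values below are hypothetical; a real provider would derive them from its own metadata model.

```python
from urllib.parse import urlencode

# Hypothetical server-side profile definitions: each profile expands to
# the filter attributes the provider knows suit that use case.
PROFILES = {
    "Road Mapping":     {"gsd_max_m": 0.5, "accuracy_max_m": 5, "max_age_days": 365},
    "Change Detection": {"gsd_max_m": 1.0, "max_age_days": 30},
}

def expand_profile(name, bbox):
    """Translate a selected profile into concrete catalog query parameters,
    adding the user's area of interest as a bounding box."""
    params = dict(PROFILES[name])
    params["bbox"] = ",".join(str(v) for v in bbox)
    return urlencode(params)

query = expand_profile("Road Mapping", (13.0, 52.3, 13.8, 52.7))
```

The point of the indirection is that the user only ever supplies the profile name and an area of interest; everything else stays on the provider side.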

Conceptual Implementation Example

Below is another example with some sample data for further clarification of the profile concept. An EO imagery provider defines several profiles for WFS as illustrated in the figure below. Each attribute should be annotated with an optional description.

profileTable
Figure 12. Summary of defined profiles

The key point here is that the provider, who knows and understands the metadata, can more effectively identify the best filter criteria for a certain function, while the overridable fields still allow the response to be tailored.

Now imagine accessing the provider service through a client application. The user isn’t geospatially savvy, so he navigates the client system GUI to the point where he can access the predefined profiles available for querying. The client system accesses the provider’s GetProfiles operation and retrieves a list of profiles. At this point the response includes all the profiles tagged as public:

  • Agriculture Land Registry

  • Urban Land Registry

  • Road Detection

  • Old Imagery

The client system may categorize them by class or provide the full list without classes, as defined by the client system code. Alternatively, the user could supply a class filter “Land Use” in the GetProfiles query and get the following response:

  • Agriculture Land Registry

  • Urban Land Registry

The user is interested in the “Agriculture Land Registry” Profile and selects an option to retrieve additional details on that profile. The client system accesses the details via the GetProfileDetails operation and receives the following:

  • Source: “DigitalGlobe”, Override allowed: Yes

  • Format: “Gridded Raster”, Override allowed: No

  • Spatial Resolution: < 1 meter, Description: “Identifies the minimum size differentiator between objects visible in the imagery”, Override allowed: No

  • Horizontal Accuracy: < 5 meters, Description: “Identifies the accuracy between the perceived location of an object and its actual location on the earth”, Override allowed: No

  • Spectral: Color, Override allowed: Yes

  • Age: < 5 years old, Override allowed: No

  • Cloud Coverage: < 5%, Override allowed: Yes

  • 3D: No, Override allowed: No

Some filters are fixed, while others allow the user to override the default parameter settings. This may be useful if the user wants to further restrict a parameter such as cloud cover to a smaller percentage than foreseen by the default settings. There are various reasons why certain parameters cannot be overridden. They are still listed to allow the profile provider side to have a complete set of filter criteria, for documentation, or for provenance reasons.
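The override semantics described above can be sketched as a small merge function: user-supplied values are honored only for attributes flagged as overridable. The attribute names follow the example list; reporting rejected overrides back to the client is one possible design choice, not specified behavior.

```python
# Sketch of merging user overrides into a profile: only attributes
# flagged as overridable may be changed. Attribute names follow the
# GetProfileDetails example; the schema is illustrative.
profile = {
    "Cloud Coverage": {"value": "< 5%", "override": True},
    "Age":            {"value": "< 5 years", "override": False},
}

def apply_overrides(profile, overrides):
    effective, rejected = {}, []
    for attr, spec in profile.items():
        if attr in overrides and spec["override"]:
            effective[attr] = overrides[attr]
        else:
            effective[attr] = spec["value"]
            if attr in overrides:
                rejected.append(attr)  # user tried to override a fixed filter
    return effective, rejected

eff, rej = apply_overrides(profile, {"Cloud Coverage": "< 2%", "Age": "< 1 year"})
```

Surfacing the rejected list lets a client explain to the user why part of a request was ignored instead of failing silently.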

In our little example, the user decides to use the “Agriculture Land Registry” profile, selects an area of interest, and initiates the query. In this case the user doesn’t intend to override any default values. The client system issues a WFS query that includes the geometry and reference to the desired profile. The WFS system returns several raw or finished products for the user to review:

  • Feature 1: Image ID, attribute 1, attribute 2, …

  • Feature 2: Image ID, attribute 1, attribute 2, …

Since the response may include a large amount of data, even with the defined profile filtering, the client system can also use the defined profile as a way to prioritize and present the results. For instance, agriculture land use profiles may prioritize and list imagery which is the most consistent in age, whereas road detection would simply prioritize based on most current imagery first.
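The profile-dependent prioritization suggested here might look like the following sketch, where "most consistent in age" is interpreted, as one possible reading, as closeness to the median age of the result set. The profile names match the example; the strategies themselves are assumptions.

```python
# Illustrative client-side ordering: the same result set is prioritized
# differently depending on the profile that produced it.
results = [{"id": "img-1", "age_days": 10},
           {"id": "img-2", "age_days": 400},
           {"id": "img-3", "age_days": 30}]

def prioritize(results, profile):
    if profile == "Road Detection":             # most current imagery first
        return sorted(results, key=lambda r: r["age_days"])
    if profile == "Agriculture Land Registry":  # most consistent in age
        median = sorted(r["age_days"] for r in results)[len(results) // 2]
        return sorted(results, key=lambda r: abs(r["age_days"] - median))
    return results

ordered = prioritize(results, "Road Detection")
```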

Summary

Today, we are trusting end users to properly query and find data that supports their particular use case. This means customer satisfaction is based on the knowledge of the user, of both the problem space and the provider metadata. The definition and use of profiles would move this need for knowledge back to the service provider, and provide a much easier interface experience for many users. The use of profiles would also ensure that both the customer and the provider can be confident that the provided data will fit the purpose as defined by the profile – ultimately resulting in higher customer satisfaction.

Requirements

The following figure illustrates the work items and requirements in the context of Adding “Fit for Purpose” as a user parameter into OGC services.

FitForPurposeRequirements
Figure 13. Fit for Purpose parameter requirements and work items

The Testbed-13 deliverables shall meet the following requirements:

  1. A set of guidelines on defining profiles for OGC services, including:

    • What types of attributes would be included

    • How many attributes can be specified

    • How ranges and limits are imposed

    • How profiles work in conjunction with other optional filter criteria

  2. A list of which OGC services should support these profiles

  3. Proposed standards for how the profiles are implemented in the existing OGC services (how they are passed into the destination system in the OGC service queries from the client) and how the use of profiles works in conjunction with existing service functions. This may be presented as proposed updates to each of the services we feel would benefit from supporting profiles.

  4. Definition of a new operation to discover available profiles (analogous to GetCapabilities, e.g. a GetProfiles operation), and guidance/proposed standards about what should be in the response, such as:

    • Name of profile

    • Version of profile

    • Description of profile (high level summary of how the profile is expected to be used, presumably to be shown to a client if requested (hover text, etc.))

    • Profile Class (category of profile to add in discovery)

    • Optional metadata about profile (what attributes are set and with what values so that a more savvy user could assess the value of the profile for his use case) – could be implemented in Get Profiles or as a distinct Get Profile Details call.

  5. A sample implementation showing the use of profiles in at least the following OGC services:

    • WFS (for discovery)

    • WPS (for building a product based on a profile)

General Note: It is assumed that Testbed-13 would specify how to accept and use profiles, but not define what actual profiles should ultimately be supported, except for some specific examples for use within the demonstration itself. The data providers best know their data (both actual data and metadata) and are therefore best placed to decide what profiles they can support and how.
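As a sketch of how a client might encode the proposed discovery operation, the snippet below builds a KVP-style request modeled on GetCapabilities. No such operation is standardized yet, so the parameter names (including the optional class filter) are assumptions.

```python
from urllib.parse import urlencode

# Hypothetical KVP encoding of the proposed GetProfiles operation,
# modeled on GetCapabilities-style requests; parameter names are
# assumptions, not standardized.
def get_profiles_request(endpoint, profile_class=None):
    params = {"service": "WFS", "request": "GetProfiles"}
    if profile_class:
        params["class"] = profile_class  # optional class filter, e.g. "Land Use"
    return endpoint + "?" + urlencode(params)

url = get_profiles_request("https://example.org/wfs", "Land Use")
```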

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • DG101: CSW or WPS with fit-for-purpose support: CSW- or WPS-based Profile Discovery & Execution Service that provides aggregated profile information to the actor. The service needs to be complemented with a test client and shall support the requirements defined above.

  • DG001: Fit-for-Purpose ER: Engineering Report capturing all experiences made during implementation of the services involved in this work item, as well as the profile specification, including:

    • interaction patterns with services to support profiles

    • change requests for identified services

    • implementation overview documentation

    • workflow/interaction documentation

  • AB103: WFS Data service with fit-for-purpose support - Data service that shall implement the fit-for-purpose concept described for this work item.

  • AB104: Client with fit-for-purpose support - Client application that can interact with both the CSW/WPS as well as the W*S services.

B.10. USGS Topo Combined Vector Product data to GeoPackage

Testbed-12 experimented with USGS Topo Map Vector Data Products being stored in GeoPackages. The results are documented in OGC 16-037. The focus of the Testbed-12 research was on generation of GeoPackages by combining USGS Topo Map Vector Data Products and the Topo TNM Style Template.

The resulting GeoPackages contain both features and instructions for styling these features, as well as orthoimagery, shaded relief raster tile sets, national wetlands raster tile sets, and elevation data derived from USGS-provided 1/9 arc-second elevation imagery. OGC 16-037 explains the GeoPackage generation process, discusses problems and obstacles encountered decoding the source product and styles and converting these artifacts to a GeoPackage, and provides recommendations for improvements.

Requirements

Testbed-13 shall build on the Testbed-12 results and continue to investigate the OGC GeoPackage as a single alternative delivery format for the USGS Topo Combined Vector Product and the Topo TNM Style Template. The following figure illustrates the work items and requirements in the context of USGS Topo Combined Vector Product data to GeoPackage.

USGSTopoGeopackageRequirements
Figure 14. USGS Topo Combined Vector product data to GeoPackage requirements and work items

This work shall include:

  • Continue to extend the GeoPackage embedding to include labelling/annotation.

  • Continue the work to extend the content of the Topo Combined Vector Product formatted as the OGC GeoPackage to include imagery and hill shade data.

  • Develop a standardized approach for internal triggering mechanisms: if an attribute value is changed within a GeoPackage, ensure that the correct symbology link is automatically changed as well (or that all symbology is assigned at run-time by reading attribute values). If the new attribute value requires symbology that is currently not available in the GeoPackage but could be loaded at the next synchronization, sync instructions should be set to ensure the missing symbology is updated.

  • Verify whether the point, multi-point, line, and polygon contents in the GeoPackage can be tied directly to a pre-defined symbology set via the Symbology Encoding Implementation Specification.

  • Continue to extend the embedded symbology to the feature attributes so that when an attribute is edited, the symbology gets automatically updated.

  • Also continue work with vector tiles within the GeoPackage. Detailed requirements on vector tiles will be provided at the kick-off meeting.

  • The tools and/or code shall support downloading the GeoPackage (encoded with symbology) via the Esri POD Server.

  • Implementation of the USGS Topo GeoPackage supporting the requirements mentioned above. This includes enhanced tools and/or code used to convert the USGS Topo Combined Vector Product data to GeoPackage format and to encode GeoPackage data with US Topo Map symbology as defined in the Topo TNM Style Template.
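Because a GeoPackage is a SQLite database, the internal triggering mechanism requested above can be prototyped with a plain SQL trigger, as in this sketch. The table and column names are illustrative and do not reflect the actual GeoPackage symbology extension schema.

```python
import sqlite3

# GeoPackages are SQLite databases, so an internal triggering mechanism
# can be prototyped with a plain SQL trigger. Table and column names
# here are illustrative, not the GeoPackage symbology extension schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE roads (fid INTEGER PRIMARY KEY, road_class TEXT, symbol_ref TEXT);
CREATE TABLE symbol_rules (road_class TEXT PRIMARY KEY, symbol_ref TEXT);
INSERT INTO symbol_rules VALUES ('primary', 'sym_primary'), ('track', 'sym_track');

-- When the attribute changes, re-point the symbology link automatically.
CREATE TRIGGER roads_symbology AFTER UPDATE OF road_class ON roads
BEGIN
  UPDATE roads SET symbol_ref =
    (SELECT symbol_ref FROM symbol_rules WHERE road_class = NEW.road_class)
  WHERE fid = NEW.fid;
END;
""")
db.execute("INSERT INTO roads VALUES (1, 'primary', 'sym_primary')")
db.execute("UPDATE roads SET road_class = 'track' WHERE fid = 1")
ref = db.execute("SELECT symbol_ref FROM roads WHERE fid = 1").fetchone()[0]
```

A standardized approach would additionally have to define where such rule tables live inside the GeoPackage and how a missing symbol reference is flagged for the next synchronization.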

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • UG001: US Topo GeoPackage ER - The Engineering Report shall describe all testbed activities on the integration of USGS Topo Combined Vector Product data to GeoPackage, all experiences made during implementation, including recommendations to the sponsor, and provide any resulting standards change requests to the appropriate standards working groups. Engineering reports will also cover these specific items:

    • Problems / obstacles encountered working on the USGS specific GeoPackage and geospatial/non geospatial metadata integration requirements.

    • Documented process used in meeting the requirements including process for downloading the GeoPackage (encoded with symbology) via the Esri POD Server.

    • Recommendations for further work needed specific to these Testbed-13 requirements.

  • UG102: USGS Topo GeoPackage - Implementation of the USGS Topo GeoPackage supporting the requirements mentioned above. This includes enhanced tools and/or code used to convert the USGS Topo Combined Vector Product data to GeoPackage format and to encode GeoPackage data with US Topo Map symbology as defined in the Topo TNM Style Template. The tools and/or code shall support downloading the GeoPackage (encoded with symbology) via the Esri POD Server.

  • AB102: GeoPackage Client - Client application that supports all requirements stated above, including download from Esri POD Server. Ideally, this component can be used to support the Mass Migration Source Integration work package also.

B.11. Map Markup Language & Web-Map HTML Element

The Map Markup Language (MapML) format was created by Natural Resources Canada, and is managed in an open, collaborative process by the Maps for HTML Community Group.

Map Markup Language is a text format for encoding map information for the World Wide Web. The objective of MapML is to allow Web-based user agent software (browsers and others) to interact with web servers by relying only on publicly defined Web standards (URI, HTTP, MapML specification), and not on URL recipes/templates, with the goal of displaying modern interactive Web maps.

The MapML specification describes MapML as follows: MapML is needed because while Web browsers implement HTML and SVG, including the <map> element, those implementations do not meet the requirements of the broader Web mapping community. On the one hand, the semantics of maps are quite different from those of Scalable Vector Graphics, while on the other hand, the semantics of the HTML map element are incomplete or insufficient relative to modern Web maps and mapping in general. Robust web maps are implemented by a variety of non-standard technologies. Web maps do not work without script support, making their creation a job beyond the realm of beginners' skill sets. In order to improve collaboration and integration of the mapping and Web communities, it is desirable to enhance or augment the functionality of the <map> element in HTML to include the accessible user interface functions of modern web maps (e.g. panning, zooming, searching for, and zooming to, styling, identifying features’ properties, etc.), while maintaining a simple, declarative, accessible interface for HTML authors. At the same time, the HTML interface should provide low-level programmable mapping hooks in the style of the Web. While we’re at it, maybe we can improve the state of Web mapping by adopting and incorporating the core values of the Web, namely federation by virtue of hyperlinks and standardized media types.

MapML
Figure 15. MapML Example (source Maps4HTML)

MapML is a proposal to the Web and Mapping communities. The intention is to define a new hypertext format (MapML) which encodes map semantics directly, but which leverages existing standards where possible and desirable, such as Cascading Style Sheets, for example. MapML will provide an essential part of the contract between Web user agents and Web servers when map features are exchanged, in a manner based on the architectural style of the Web, in a similar way to how HTML provides (part of) the contract for documents.

A MapML tile servlet Maven Java project is available on GitHub. A number of OGC compliant Web services that could be used to serve maps as part of the Testbed-13 implementations are also available here.

Related to the MapML specification is the <web-map> HTML Element proposal. The standard HTML <map> element allows the HTML author to designate sub-areas of an image to be used as hyperlinks, creating an "image map". This functionality, while useful, is limited to the simplest of mapping applications. A more generally useful <map> element would enable (functions like) zooming and panning of the map in a traditional Web mapping way, while simultaneously being equally simple and declarative for an HTML author to include on their Web page. The HTML Element proposal specification proposes the syntax and semantics of such an element. Examples of how the web-map custom element works are provided by the Maps4HTML community. In addition, when an SVG document is included in an HTML document via the <img src=”….svg”> element, JavaScript embedded in the SVG document does not run; it is an inert picture, under the control of the HTML author, which is as it should be. But when it is the principal target of a browser request, and if it has internal links to JavaScript resources, those scripts are executed, just as they would be for an HTML document. Currently, MapML is only conceived as a somewhat non-static application executed in the context of a <map> element. But if a MapML document was the principal target of a request by a browser, could scripts linked by the MapML document work some magic? What would that magic look like? Would a MapML document need a DOM model that is analogous to the SVG DOM, which is effectively a JavaScript API? If this was the case, a web of maps might be useful even independent of the Web of HTML documents. This aspect is covered in requirement number 5 below and shall be investigated in Testbed-13.

The applications of Web maps are diverse, with a scope of use that appears similar in breadth to that of other Web media types, such as video or audio. In other words, there are a multitude of reasons for wanting to include a map on your Web page. The HTML <web-map> element proposed by this document should be provider-agnostic, and should only depend on Web standards, including Uniform Resource Identifiers, the Document Object Model, Cascading Style Sheets, and media types. To that end, the present specification is a sibling to another related specification proposal, that of Map Markup Language.

Requirements

The following figure illustrates the work items and requirements in the context of Map Markup Language.

MapMLRequirements
Figure 16. Map Markup Language requirements and work items

The following requirements shall be met by the MapML Engineering Report and implementation:

  1. Specify and implement a media type "text/mapml" which interoperably encapsulates the semantics of maps to support the stateless client-server requirements of Web browsers. This could be accomplished using specific configurations that wrap existing OGC service implementations (WMS, WFS, WMTS; operational services are available) as back-end services.

  2. Specify and implement an extension to the text/html media type which implements the core functions of Web maps as they are understood today. Such an extension could take different forms: 'native' browser code, a browser plugin, or a Custom Element.

  3. Join the Maps For HTML Community and provide support in the form of GitHub pull requests. Evaluate patent/licensing considerations that may arise in this context.

  4. Collect OGC community feedback in the form of GitHub pull requests to the appropriate github:Maps4HTML/repository-name. Topics to be addressed by such pull requests include, but are not limited to, new normative and non-normative sections or updates to existing sections about:

    1. Tiled Coordinate Reference System (TCRS) definitions

    2. Image georeferencing markup

    3. TCRS/projection negotiation

    4. Language negotiation

    5. Security considerations

    6. Hyperlinking within and between map services

    7. Accessibility considerations for map feature markup

    8. Microdata / microformat semantic markup recommendations

    9. Considerations for feature creation / update / deletion via PUT, POST, DELETE

    10. Caching discussion

    11. Extent processing discussion / algorithm

    12. Feature styling with Cascading Style Sheets

  5. Potentially develop a JavaScript API for map documents which are accessed as the primary resource in the manner of SVG documents, i.e. a MapMLDocument. This requirement addresses the execution of MapML in a world of maps independent of HTML.
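
Requirement 1 calls for wrapping existing OGC services behind a text/mapml response. The sketch below illustrates only the general shape of such a wrapper: the element names and the WMS endpoint are illustrative assumptions, not normative MapML markup, so the actual vocabulary must be taken from the Maps4HTML specification.

```python
import xml.etree.ElementTree as ET

def wrap_wms_as_mapml(title: str, getmap_url: str) -> str:
    """Wrap an existing WMS GetMap URL in a minimal MapML-style document.

    NOTE: the element names below are illustrative placeholders only;
    consult the Maps4HTML MapML specification for the normative vocabulary.
    """
    root = ET.Element("mapml")
    head = ET.SubElement(root, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(root, "body")
    # hypothetical element pointing at the back-end WMS rendering
    ET.SubElement(body, "image", {"src": getmap_url})
    return ET.tostring(root, encoding="unicode")

# hypothetical back-end service URL
doc = wrap_wms_as_mapml(
    "Demo layer",
    "https://example.com/wms?SERVICE=WMS&REQUEST=GetMap&LAYERS=demo")
```

A server fulfilling the requirement would return such a document with the Content-Type header set to text/mapml.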

The authors of MapML have endeavored to keep the definition of MapML, its schema, and the implementation in sync, so as to minimize cognitive dissonance for the reader. As a result, there are things that could be added to address simple use cases, and fixes or clarifications may be needed for existing features. Beyond these clarifications, Testbed-13 welcomes the imagination of the community to help realize useful applications of MapML!

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NR002: MapML ER: Engineering Report capturing all specifications or clarifications to the existing specifications in a way that they can be directly integrated into the MapML and <web-map> HTML Element specifications. The Engineering Report shall further capture all experiences gained during the implementation of NR103: MapML Server.

  • NR103: MapML Server: Server implementation supporting all requirements identified above. The server shall be demonstrated and shall be complemented with a number of example requests to illustrate its functionality in a Web browser.

B.12. Mass Migration Source Integration

A fundamental goal of this Mass Migration Source Integration work is to understand and document how information sharing and safeguarding interoperability tools and practices, including open geospatial standards, can enable cross-domain interoperability at an international level for structured communication exchange and border surveillance for law enforcement and humanitarian aid, including in a maritime context.

Testbed 13 will focus on addressing challenges related to the coordination of multi-regional / multi-national operations and messaging related to the displacement and mass movement of populations in response to conflict. The current exodus of people from the Middle East to multiple nations in Europe and other countries around the world will be used as a scenario for this discussion.

mapMigration
Figure 17. European Migrant Crises 2015 (source: wikimedia by Dörrbecker, CC BY-SA 2.0)

B.12.1. Mass Migration Scenarios

Several scenarios are candidates to be exercised utilizing a platform for information sharing to facilitate situational awareness in a Common Operational Picture. These include the inter-exchange of specific incident messaging related to vessel tracking, emergency response to border-related and maritime incidents involving refugees, tracking/monitoring the health and well-being of individuals and families after entry, and the repatriation of populations after resolution of conflicts. The candidate scenarios are:

  • Tracking of Vessels – Tracking and monitoring vessel movement, port arrival of authorized transport operations, as well as identification, tracking and interception of unauthorized vessels

  • Emergency Response at Sea – identification of emergency situations, planning and conduct of emergency / disaster response through response centers equipped with a geospatially-enabled Common Operational Picture

  • Well-being and Health Monitoring - assurance of shelter, food and medical services for displaced populations while in transit and when placed in housing

  • Return of refugees post conflict

  • Local Risk Analysis - Analyzing the local situation is key to understanding the risks, the vulnerabilities, the potential number of people that may leave their home country, and possible migration routes

  • Migration Routes Modeling and Analysis - Understanding possible migration routes involves many variables that need to be considered, such as the political situation and security in neighboring countries, the willingness of Europe to accept refugees, the capacity of camps, harbors, and ships, border control mechanisms, and the permeability of frontiers

  • Situation in places of refuge - Capacities of camps, willingness of national states to accept refugees, internal distribution mechanisms and others

B.12.2. Messaging Frameworks

National Information Exchange Model (NIEM)

NIEM is a US Government national standard that facilitates information sharing across organizational and jurisdictional boundaries at all levels of government. It is based upon a data model that provides agreed-upon terms, definitions and formats for various business concepts; rules to relate the concepts; and independence from specific storage systems. The following NIEM IEPD message formats associated with the Maritime Information Sharing Environment (MISE) are candidates for investigation, test and demonstration in Testbed 13. The emphasis will be to advance the understanding and use of the CVISR IEPD, which is associated with the other maritime information exchange IEPDs listed below.

  • Consolidated Vessel Information and Security Reporting (CVISR) - Exchange Model Description. Defined and standardized levels characterizing how much is known about a vessel (and associated people, cargo, and infrastructure) at a given time. CVISR is generally an assembled product consisting of essential elements from the other four IEPDs listed below.

  • Position (POS) - Exchange Model Description. A geospatial position, course, heading, speed, and status of a vessel at a given time. A series of position reports can be combined to produce track information.

  • Indicators and Notifications (IAN) - Exchange Model Description. Indicators are information used to inform or contribute to an analytical process. Notifications include warnings of a possible event and alerts about the execution of an event.

  • Notice of Arrival (NOA) - Exchange Model Description. 96-hour advance notices that all vessels inbound to US ports are required to submit, listing vessel, crew, passenger, and cargo information.

  • Vessel Information (VINFO) - Exchange Model Description. Static vessel characteristics information, such as vessel tombstone data.

B.12.3. Geospatial Capabilities and Information

Testbed 13 shall exercise capabilities and location information for Mass Migration Source Integration in the context of cooperating operations centers via a Common Operational Picture, and through point-to-point messaging between cooperating partners.

Potential Information Sources:

A variety of information sources are expected to be used to test and demonstrate Mass Migration Source integration. These types of information include:

  • Ship Transponder (AIS) information and analytics

  • Imagery from government and commercial providers

  • Cloud Services, Imagery and Big Data

  • Other related geospatial feature data (e.g. transportation)

  • Geospatial Web and Mobile Services

Ship Transponder (AIS) information and analytics

Automatic Identification System (AIS) is a maritime technical standard developed by the International Maritime Organization (IMO). AIS is a sophisticated radio technology which combines GPS, VHF and data processing technologies to enable the exchange of relevant information in a strictly defined format between different marine entities. This may be the simple exchange of position, course, speed and identity information between individual vessels or more sophisticated data exchanges between specialist shore and buoy located devices.
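
AIS reports are typically delivered as NMEA 0183 sentences (e.g. !AIVDM), each protected by a simple checksum: the XOR of every character between the leading '!' or '$' and the '*'. The sketch below generates and verifies that checksum; the sample sentence body is fabricated for illustration, and decoding the 6-bit payload into position, course, and speed fields (defined by ITU-R M.1371) is out of scope here.

```python
from functools import reduce

def nmea_checksum(body: str) -> str:
    """XOR all characters between the leading '!'/'$' and the '*', as two hex digits."""
    return f"{reduce(lambda acc, ch: acc ^ ord(ch), body, 0):02X}"

def add_checksum(body: str) -> str:
    """Frame a sentence body with the leading '!' and trailing checksum."""
    return f"!{body}*{nmea_checksum(body)}"

def verify(sentence: str) -> bool:
    """Recompute the checksum of a framed sentence and compare."""
    body, _, given = sentence[1:].partition("*")
    return nmea_checksum(body) == given.upper()

# fabricated AIVDM-style sentence body for illustration
sentence = add_checksum("AIVDM,1,1,,A,13u?etP,0")
```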

Imagery from government and commercial providers

Sources of aerial or satellite imagery, available from commercial and government organizations, shall be used; such imagery represents a valuable source to support a wide variety of uses including navigation, environment, land-use, and emergency response.

Cloud Services, Imagery and Big Data

Testbed 13 will make use of cloud-based geospatial archives, to be identified, that provide vast amounts of geospatial data along with tools to extract useful information from that data.

Innovations of loosely coupled petabyte-scale archives, such as satellite imagery, based on open standards can be used to provide rapid geospatial information product creation at any scale. These capabilities will be realized using standards and related technologies to achieve:

  • An interoperable system of systems in which users can select, view, interact with, and access the data they need transparently from multiple clouds in support of interdisciplinary geospatial analytics.

  • Portability of data across clouds, which requires information models, semantics, and encodings that are recognized independently of the specific cloud in which they are hosted.

Big data techniques may be used to provide mechanisms to identify useful, usable information in a timely fashion – actionable analytics. To support analytics in this manner, data must be suitably accessible – Analysis Ready Data.

Related geospatial feature data

Government and private industry sources shall be identified that provide a multitude of geospatial data to support effective and efficient operations in population centers and during mass migrations. Such geospatial data sources may include transportation networks, locations and capacities of medical facilities, food supplies and sources, and many others.

B.12.4. Geospatial Web and Mobile Services

Testbed 13 shall test and demonstrate use of a variety of services and capabilities to support Mass Migration source integration as listed below:

  • Web Map Service

  • Web Feature Service

  • Web Processing Service

  • Geospatial Visualization Clients

  • Mobile Clients

  • Limited / Disconnected environment capability

Web Map Service

The OpenGIS® Web Map Service (WMS) Implementation Specification enables the creation and display of registered and superimposed map-like views of information that come simultaneously from multiple remote and heterogeneous sources.

When client and server software implements WMS, any client can access maps from any server. Any client can combine maps (overlay them like clear acetate sheets) from one or more servers. Any client can query information from a map provided by any server.

In particular WMS defines:

  • How to request and provide a map as a picture or set of features (GetMap)

  • How to get and provide information about the content of a map such as the value of a feature at a location (GetFeatureInfo)

  • How to get and provide information about what types of maps a server can deliver (GetCapabilities)
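
The three operations above are commonly invoked as key-value-pair HTTP requests. The following sketch assembles a WMS 1.3.0 GetMap URL; the endpoint and layer name are placeholders, and note that for EPSG:4326 in WMS 1.3.0 the BBOX axis order is latitude first.

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layers, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png"):
    """Assemble a WMS 1.3.0 GetMap request as a KVP-encoded URL."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": ",".join(layers), "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(map(str, bbox)),  # lat-first for EPSG:4326 in 1.3.0
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return f"{base}?{urlencode(params)}"

# placeholder endpoint and layer; the bbox roughly covers the Aegean region
url = wms_getmap_url("https://example.com/wms", ["borders"],
                     (34.0, 19.0, 42.0, 29.0), 800, 600)
```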

Web Feature Service

The OpenGIS® Web Feature Service (WFS) Implementation Specification allows a client to retrieve geospatial data encoded in Geography Markup Language (GML) from multiple Web Feature Services. The specification defines interfaces for data access and manipulation operations on geographic features, using HTTP as the distributed computing platform. Via these interfaces, a Web user or service can combine, use and manage geodata — the feature information behind a map image — from different sources.
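
A WFS 2.0 GetFeature request can be formed the same way via key-value pairs. The sketch below uses a placeholder endpoint and feature type name, and adds an optional BBOX filter and a COUNT limit.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, typenames, bbox=None, count=None):
    """Assemble a WFS 2.0 GetFeature request as a KVP-encoded URL."""
    params = {"SERVICE": "WFS", "VERSION": "2.0.0",
              "REQUEST": "GetFeature", "TYPENAMES": ",".join(typenames)}
    if bbox is not None:
        params["BBOX"] = ",".join(map(str, bbox))  # optional spatial filter
    if count is not None:
        params["COUNT"] = count                    # limit the result size
    return f"{base}?{urlencode(params)}"

# placeholder endpoint and feature type name
url = wfs_getfeature_url("https://example.com/wfs", ["app:vessels"],
                         bbox=(34.0, 19.0, 42.0, 29.0), count=100)
```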

Web Processing Service

The OGC Web Processing Service (WPS) Interface Standard provides a standard interface that simplifies the task of making simple or complex computational processing services accessible via web services. Such services include well-known processes found in GIS software as well as specialized processes for spatio-temporal modeling and simulation. While the OGC WPS standard was designed with spatial processing in mind, it can also be used to readily insert non-spatial processing tasks into a web services environment. It supports both immediate processing for computational tasks that take little time and asynchronous processing for more complex and time-consuming tasks. Moreover, the WPS standard defines a general process model that is designed to provide an interoperable description of processing functions. It is intended to support process cataloguing and discovery in a distributed environment.
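
The asynchronous execution model described above amounts to an execute-then-poll loop. The sketch below captures that pattern with a stand-in status function; a real client would issue WPS status requests and parse the responses.

```python
import time

def poll_until_done(get_status, interval=0.0, max_polls=10):
    """Poll an asynchronous WPS job until it reaches a terminal status.

    `get_status` stands in for a WPS status request and is assumed to
    return one of 'Accepted', 'Running', 'Succeeded' or 'Failed'.
    """
    for _ in range(max_polls):
        status = get_status()
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("WPS job did not complete within max_polls")

# stand-in for a remote job that finishes on the third poll
states = iter(["Accepted", "Running", "Succeeded"])
result = poll_until_done(lambda: next(states))
```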

Geospatial Visualization Clients

Realizing the value of vast amounts of geospatial data for planning, analysis and decision-making requires processing and visualization capabilities, including 3D representations, that support city planners, emergency responders, and informed international cooperation during events that result in large and sudden population movements.

Mobile Clients

More than half of the world population lives in cities, yet mobile access to the Internet and communications exceeds fixed (land-line) access. Mass migrations often result in temporary shelters and population encampments, where fixed communications infrastructure is largely nonexistent precisely where it is needed. Mobile devices running mobile clients often provide the necessary, and sometimes only, networks available.

Limited / Disconnected environment capability

Mobile networks provide necessary capabilities for communications and information processing; however, they may be limited in capacity or insufficient to handle the amount of information flow needed to fully support local operations. To address this shortfall, technologies such as digital compression can be applied where bandwidth is a limiting factor. Where network connectivity is unavailable or intermittent, other techniques may be utilized, such as OGC GeoPackage, to deliver necessary data to remote or disconnected areas.

B.12.5. Standards, Models, and Frameworks

Testbed 13 shall use a variety of geospatial and related standards that provide a consistent and well-understood means to express and interchange information among cooperating partners. The following standards are expected to play a role in this Mass Migration Source integration. Additional details for these standards or categories of standards are described in the sections that follow.

  • Geography Markup Language (GML)

  • Lightweight Encodings (JSON / GeoJSON)

  • GeoPackage

  • Security Authentication and Authorization (SAML / OAuth)

  • Federated Identity Management

Geography Markup Language (GML)

The Geography Markup Language (GML) is an XML grammar for expressing geographical features. GML serves as a modeling language for geographic systems as well as an open interchange format for geographic transactions on the Internet. As with most XML based grammars, there are two parts to the grammar – the schema that describes the document and the instance document that contains the actual data. A GML document is described using a GML Schema. This allows users and developers to describe generic geographic data sets that contain points, lines and polygons. However, the developers of GML envision communities working to define community-specific application schemas [en.wikipedia.org/wiki/GML_Application_Schemas] that are specialized extensions of GML. Using application schemas, users can refer to roads, highways, and bridges instead of points, lines and polygons. If everyone in a community agrees to use the same schemas they can exchange data easily and be sure that a road is still a road when they view it. Clients and servers with interfaces that implement the OpenGIS® Web Feature Service Interface Standard read and write GML data. GML is also an ISO standard (ISO 19136:2007).
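
For illustration, a single position encoded as a GML 3.2 Point might be generated as follows; note that for the EPSG:4326 URN the axis order is latitude, longitude.

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml/3.2"

def gml_point(lat, lon, srs="urn:ogc:def:crs:EPSG::4326"):
    """Encode one position as a GML 3.2 Point (lat/lon order per the URN)."""
    ET.register_namespace("gml", GML_NS)
    point = ET.Element(f"{{{GML_NS}}}Point", {"srsName": srs})
    ET.SubElement(point, f"{{{GML_NS}}}pos").text = f"{lat} {lon}"
    return ET.tostring(point, encoding="unicode")

xml_point = gml_point(37.94, 23.65)  # approximate position of Piraeus
```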

Lightweight Encodings (JSON / GeoJSON)

JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.

GeoJSON is a format for encoding a variety of geographic data structures based on JSON. A GeoJSON object may represent a geometry, a feature, or a collection of features. GeoJSON supports the following geometry types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection. Features in GeoJSON contain a geometry object and additional properties, and a feature collection represents a list of features.
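
Because GeoJSON is plain JSON, features can be assembled with nothing beyond a JSON library. In the small sketch below the coordinates and attribute values are made up; RFC 7946 mandates [longitude, latitude] coordinate order.

```python
import json

def point_feature(lon, lat, **properties):
    """Build a GeoJSON Feature with a Point geometry ([lon, lat] order)."""
    return {"type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": properties}

# illustrative values only
collection = {"type": "FeatureCollection",
              "features": [point_feature(23.65, 37.94, name="Port of Piraeus")]}
text = json.dumps(collection)
```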

GeoPackage

A GeoPackage is a platform-independent SQLite database file that may contain vector geospatial features, tile matrix sets of imagery and raster maps at various scales, and extensions.

Since a GeoPackage is a database, it supports direct use, meaning that its data can be accessed and updated in a "native" storage format without intermediate format translations. GeoPackages are interoperable across all enterprise and personal computing environments, and are particularly useful on mobile devices like cell phones and tablets in communications environments with limited connectivity and bandwidth. This OGC® Encoding Standard defines the schema for a GeoPackage, including table definitions, integrity assertions, format limitations, and content constraints.
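
Because a GeoPackage is an SQLite database, its metadata tables can be read with any SQLite driver. The sketch below uses an in-memory database with a simplified subset of the gpkg_contents columns (a real GeoPackage defines further required columns and tables); the layer name and extent are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# simplified subset of the gpkg_contents table defined by the standard
db.execute("""CREATE TABLE gpkg_contents (
    table_name TEXT NOT NULL PRIMARY KEY,
    data_type  TEXT NOT NULL,
    identifier TEXT UNIQUE,
    min_x REAL, min_y REAL, max_x REAL, max_y REAL,
    srs_id INTEGER)""")
db.execute("INSERT INTO gpkg_contents VALUES (?,?,?,?,?,?,?,?)",
           ("camps", "features", "Refugee camps", 19.0, 34.0, 29.0, 42.0, 4326))
# enumerate the layers the package advertises
layers = list(db.execute("SELECT table_name, data_type FROM gpkg_contents"))
```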

Security Authentication and Authorization (SAML / OAuth)

Testbed 13 will advance the understanding for use of authentication and authorization security technologies with OGC web service standards interfaces and encodings in the context of real-world scenarios. The focus in Testbed 13 will be to implement, test and demonstrate the use of SAML, OAuth and Open ID Connect. The capabilities of these specifications have been advancing during recent years.

The SAML, OAuth and Open ID Connect work to be conducted in this Testbed aims to refresh and advance the understanding of these technologies, which are gaining favor as a means to complement the prior XACML/GeoXACML work.

Much has been learned during several prior OGC Testbeds that conducted significant work to implement, test and demonstrate security technologies focused on use of XACML/GeoXACML based architectures. For background, prior OGC Testbeds have produced the following Engineering Reports that captured results for implementation of security technologies that focused primarily on XACML/GeoXACML:

  • OGC 08-176R1 OWS-6 Secure Sensor Web ER

  • OGC 09-035 OWS-6 Security ER

  • OGC 12-118 OWS-9 Security ER

  • OGC 12-139 OWS-9 SSI Security Rules Service ER

  • OGC 15-022 Testbed 11 - Implementing Common Security Across the OGC Suite of Service Standards ER

  • OGC 15-051r3 Testbed 11 Geo4NIEM Architecture Design and Implementation Guidance and Fact Sheet

  • OGC 15-050r3 Testbed 11 Test and Demonstration Results for NIEM using IC Encoding Specifications ER

SAML, or Security Assertion Markup Language, is an OASIS open standard based on XML for exchanging authentication and authorization information between parties. SAML provides a standard mechanism for attribute exchange to establish trust relationships between identity providers and service providers. SAML comprises three main components:

  • Assertions – for authentication to prove identities; and for attributes used to generate specific information about a person

  • Protocols – provides the mechanism to request and receive assertions

  • Bindings – defines how SAML messages are exchanged in conjunction with a protocol
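
As an illustration of the protocol component, a minimal, unsigned SAML 2.0 AuthnRequest can be sketched as below. The issuer and endpoint URLs are placeholders; production requests are typically signed and transported via an HTTP-Redirect or POST binding.

```python
import datetime
import uuid
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def authn_request(issuer: str, acs_url: str) -> str:
    """Build a minimal, unsigned SAML 2.0 AuthnRequest document."""
    ET.register_namespace("samlp", SAMLP)
    ET.register_namespace("saml", SAML)
    now = datetime.datetime.now(datetime.timezone.utc)
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": f"_{uuid.uuid4().hex}",   # xsd:ID must not start with a digit
        "Version": "2.0",
        "IssueInstant": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "AssertionConsumerServiceURL": acs_url,
    })
    ET.SubElement(req, f"{{{SAML}}}Issuer").text = issuer
    return ET.tostring(req, encoding="unicode")

# placeholder service-provider values
request_xml = authn_request("https://sp.example.org", "https://sp.example.org/acs")
```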

The OAuth 2.0 authorization framework, in conjunction with OpenID Connect, enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner, by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf.
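
The first step of the authorization-code flow (RFC 6749, section 4.1.1) is redirecting the user to the authorization endpoint. The sketch below builds that URL; the endpoint, client identifier, and redirect URI are placeholders.

```python
import secrets
from urllib.parse import urlencode

def authorization_url(endpoint, client_id, redirect_uri, scope):
    """Build an OAuth 2.0 authorization-code request URL (RFC 6749, 4.1.1)."""
    state = secrets.token_urlsafe(16)  # CSRF token, re-checked on the redirect
    params = {"response_type": "code", "client_id": client_id,
              "redirect_uri": redirect_uri, "scope": scope, "state": state}
    return f"{endpoint}?{urlencode(params)}", state

# placeholder provider and client registration values
url, state = authorization_url("https://idp.example.org/authorize",
                               "testbed-client",
                               "https://client.example.org/cb",
                               "openid")
```

With OpenID Connect, including the openid scope additionally yields an ID Token when the returned code is exchanged at the token endpoint.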

Federated Identity Management

Federated Identity Management comprises the management techniques, technologies, standards and use-cases that enable the portability of identity information across otherwise autonomous security domains. The ultimate goal of identity federation is to enable users of one domain to securely and seamlessly access data or systems of another domain, without the need for completely redundant user administration. Identity federation comes in many flavors, including "user-controlled" or "user-centric" scenarios, as well as enterprise-controlled or business-to-business scenarios.

Federation is enabled through the use of open industry standards or openly published specifications, such that multiple parties can achieve interoperability for common use-cases. Typical use-cases involve things such as cross-domain, web-based single sign-on, cross-domain user account provisioning, cross-domain entitlement management and cross-domain user attribute exchange.

Managing user identities and privileges is critically important to ensure that information is shared with the right personnel with proper authorities. The authoritative federation of identities along with biometric technologies provides a mechanism to identify and exclude “bad actors” from the privileged information and provides a means to address the threat.

In the US, the Global Federated Identity and Privilege Management (GFIPM) program is funded jointly by the U.S. Department of Justice (DOJ) and the U.S. Department of Homeland Security (DHS), and is under the direction of the Global Justice Information Sharing Initiative. The conceptual foundation of the GFIPM is the idea of federated identity and privilege management (FIPM). FIPM is an extension of the more common concept of federated identity management, which provides the ability to separate management of user identities from management of systems and applications in which those identities are used. In a federation, identity providers (IDPs) manage user identities, while service providers (SPs) manage applications and other resources. It is well-understood that federated identity management provides valuable benefits for information sharing, including greater usability due to identity reuse, as well as improved privacy and security. The FIPM concept seeks to extend federated identity management by addressing the issue of authorization, or privilege management, within the systems and applications that exist in a federated environment. Each system or application in a federation typically has its own set of business requirements and access control policies, and FIPM provides a cost-effective framework that enables these systems to be made available to federated users while still respecting their native requirements.

Requirements

Testbed 13 shall focus on addressing challenges related to the coordination of multi-regional / multi-national operations and messaging related to the displacement and mass movement of populations in response to conflict. The current exodus of people from the Middle East to multiple nations in Europe and other countries around the world will be used as a scenario for this discussion.

MassMigrationRequirements
Figure 18. Mass Migration Source Integration requirements and work items

Testbed 13 shall demonstrate situational awareness utilizing Internet and web technologies in a shared information exchange platform to realize a Common Operational Picture (COP) for coordination of activities among cooperating nations, and will exercise security-enabled interoperable inter-exchange of messages.

Information exchange and interoperability of messages shall be tested that adhere to the US National Information Exchange Model (NIEM) to exercise secure information sharing among public safety and security organizations on a range of topics. Testbed 13 shall emphasize identification and recommended resolution of interoperability issues, and shall document a standards-based interoperable reference architecture that can be scaled.

A range of geospatial information sources and supporting standards-based technologies will be employed. Information sources will include imagery from government and commercial sources, geospatial features and other location referenced data relevant to the movement of populations and individuals.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables. Some of the work items are used in other work packages as well.

  • AB001: Concepts of Data and Standards for Mass Migration ER - This Engineering Report shall describe the results of implementation and integration of messaging and exchange in the context of the information sharing platform and common operating picture. This includes the results of the enhanced WMS/WMTS work described in section Web Service Enhancements.

  • AB002: Security ER - This Engineering Report shall describe the activities and results of development and integration of security components and encodings in the context of the mass migration scenarios. The ER shall document the security architecture of the secured web service resource components as well as the security-enabling components using SAML, OAuth and Open ID Connect.

  • PM001: NIEM IEPD Messages and Schemas Engineering Report (ER) - This Engineering Report shall capture the experience from integration, use and demonstration of NIEM messages selected and used in support of the Mass Migration scenarios in Testbed 13.

  • AB101: OAuth-enabled Web Service - A security enabled OGC Web Service that implements an authorization interface using OAuth 2.0 and Open ID Connect shall be deployed to provide source data for display and analysis in the EOC Client for information sharing and Common Operating Picture (COP).

  • AB105: Security-enabled Desktop client (EOC Desktop Client) - The Security-enabled Desktop Client, or EOC Desktop Client, shall serve as the primary (main) client to perform visualization and analysis for the shared information exchange platform, realizing a Common Operational Picture (COP) for coordination of activities among cooperating nations, and shall exercise security-enabled interoperable inter-exchange of messages. This client ideally supports the enhanced WMS and WMTS implementations coming out of the Web Service Enhancements work package.

  • AB106: Security-enabled Mobile client (EOC Mobile Client) (GeoPackage & Web Service) - A security-enabled mobile client using SAML shall be implemented that is capable of reading and writing GeoPackage files and interfacing with a selection of web services deployed in Testbed 13 that serve various data sources for use in mass migration scenarios and for information sharing and analysis in the EOC and with actors in the field. This client ideally supports the enhanced WMS and WMTS implementations coming out of the Web Service Enhancements work package.

  • PM101: Messages and Schemas or CVISR (+ POS-IAN-VINFO-NOA) IEPDs - Testbed 13 shall investigate the use and integration of the CVISR and related schemas listed below to support Mass Migration scenarios. The investigation will include analysis of IEPDs that includes message documentation and schemas and, if necessary, creation of basic test message instances for use in test and demonstration. These messages will be served as source messages for input and processing by the NIEM-GML Integration Component WPS for further processing, storage and analysis in other services, such as the SAML-enabled WFS-T.

    Some messages may contain sensitive or classified information that must be secured during processing. These messages will be processed in an architecture using services and related components such as the SAML-enabled WFS-T, the authentication service, or federated ID management.

  • PM102: AIS Vessel Info Data Service (WFS) - This Web Feature Service shall ingest, store and serve AIS Ship Transponder information for use, test and demonstration in the context of Mass Migration Source integration and scenarios. This service shall be used as an element of the operation center’s shared information exchange platform and Common Operational Picture (COP).

  • PM103: SAML-enabled Web Feature Service with Transactions (WFS-T) - This Web Feature Service with Transactions capability shall be used to ingest processed NIEM message content from the NIEM-GML Integration WPS for display and analysis in the operations center’s shared information exchange platform and Common Operational Picture (COP).

  • PM104: NIEM-GML Integration Component (WPS) - The NIEM-GML Integration Component shall be realized using Web Processing Service to read and process NIEM messages to prepare selected content in a format suitable to read and store in a Transactional Web Feature Service (WFS-T). Content to be selected from the NIEM message will be determined by the type of NIEM message.

  • PM105: Security Component - SAML Authentication Service - The Authentication Service shall serve as an Identity Provider service to support SAML-initiated requests.

  • PM106: Security Component - Federated ID Management Service - Federated Identity Management comprises the management techniques, technologies, standards and use-cases that enable the portability of identity information across otherwise autonomous security domains. The ultimate goal of identity federation is to enable users of one domain to securely and seamlessly access data or systems of another domain, without the need for completely redundant user administration.

    This deliverable shall serve as a component to federate and share identities from separate security domains, such as cross-border partners who need to share information for coordinated support and response for mass migration scenarios. Federated identities to be managed shall include identities managed by PM105 SAML Security Authentication Service and the Geonovum OAuth Authorization and Open ID Connect Identity provider service.

B.13. Climate Data Accessibility for Adaptation Planning

Testbed 13 supports the NASA Earth Science Data System (ESDS) Vision: "Make NASA’s free and open earth science data interactive, interoperable and accessible for research and societal benefit today and tomorrow". The Earth Science Data System (ESDS) Program oversees the lifecycle of earth science data with the principal goal of maximizing the scientific return from NASA’s missions and experiments for research and applied scientists, decision makers and society at large.

Testbed 13 results will be applicable to this ESDS Goal: Ensure access to data and services that are useful and useable by a wide community of users. To meet this goal, Testbed 13 will pursue several work elements; the first two, improved climate data accessibility and broadening climate adaptation essentials, are described in more detail further below.

To meet these goals, a pre-Testbed-13 concept development study, executed by OGC, will identify a number of data sets, portals, data centers, simulation models, and other Web services that can be used for the implementation of the demonstration scenarios around NASA ESDS data. OGC will issue a Request for Information (RFI) to its membership to solicit interest, experiences, and data and model availability. In this context, OGC will cooperate closely with ESIP and its partners. ESIP is an open, networked community that brings together science, data, and information technology practitioners with the mission to support the networking and data dissemination needs of ESIP members and the global Earth science data community by linking the functional sectors of observation, research, application, education, and use of Earth science. The OGC and ESIP have a Memorandum of Understanding (MoU) that promotes coordination between the two organizations on topics of common interest.

In addition, Testbed-13 participants are requested to help complete the set of available material. The overall goals are to gain experience with the integration process; to develop best practices and recommendations back to NASA on potential improvements that further simplify that process; and to understand potential interoperability issues arising from different formats, interfaces, protocols, and access policies. The Climate Data Accessibility for Adaptation Planning work package shall be implemented as part of the mass migration scenario.

Improved climate data accessibility for the scientist and non-scientist

The goal is to broaden the distribution, access mechanisms (e.g. API, WCS, other), and delivery formats (e.g. HDF, NetCDF, Shapefile, GeoTIFF) of climate reanalysis, climate model data, and climate-based observational data for ease of system-to-system ingest by scientists and non-climate scientists alike. The underlying requirement is that many users who need climate reanalysis and climate model data (and in some cases observational data) do not have effective system-to-system access to this data, do not understand HDF, NetCDF, GRIB, or other scientific file formats, do not want to learn to read these file formats, and are not climate scientists, yet still need climate data for their work. The following use-cases shall be considered:

  • Use Case 1: An agriculture researcher considering the effects of climate on crop production needs easy access to climate predictions/data for ingest into crop forecasting models, potentially requiring temporal and spatial subsetting of climate model variables relevant to rain and soil conditions. This requires access to the right datasets through an effective system-to-system API and GIS-based data formats (e.g. Shapefile, GeoTIFF), in addition to the option to ingest subset HDF and NetCDF formats.

  • Use Case 2: Use by a non-scientist or analyst to better understand climate impacts on populations by accessing climate model prediction data for ingest into local GIS-based systems for population and critical infrastructure overlay.
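Both use cases hinge on system-to-system subsetting and format negotiation. As a minimal sketch (the endpoint, coverage identifier, and axis labels are assumptions, not sponsor-provided values), a WCS 2.0 GetCoverage request with spatial and temporal subsetting and GeoTIFF output could be assembled as follows:

```python
from urllib.parse import urlencode

def getcoverage_url(endpoint, coverage_id, lat, lon, time, out_format):
    """Build a WCS 2.0 GetCoverage KVP request with spatial and temporal
    subsetting. The axis labels (Lat/Long/ansi) follow a common convention
    but depend on the coverage's actual CRS and server implementation."""
    params = [
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("request", "GetCoverage"),
        ("coverageId", coverage_id),
        ("subset", "Lat(%s,%s)" % lat),        # spatial subset: latitude
        ("subset", "Long(%s,%s)" % lon),       # spatial subset: longitude
        ("subset", 'ansi("%s","%s")' % time),  # temporal subset
        ("format", out_format),                # e.g. GeoTIFF for GIS ingest
    ]
    return endpoint + "?" + urlencode(params)
```

A crop-forecasting client would then request, for example, monthly precipitation over its area of interest as `image/tiff` rather than downloading and decoding whole NetCDF granules.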

Broadening Climate Adaptation Essentials

Similar to the Testbed-11 activity focused on understanding the impacts of sea level rise, Testbed-13 shall broaden the focus to understanding the impacts of climate change through inland drought and flooding, using climate prediction models (single model, ensemble, reanalysis, other) against global population centers. In support of the overall Testbed-13 theme of mass migration, predictions of drought that lead to mass migration can be a focus.

  • Predicting drought and migration. Even if climate-induced environmental stresses do not lead to conflict, they are likely to contribute to migrations that exacerbate social and political tensions, some of which could overwhelm host governments and populations. The National Intelligence Council describes the situation in the Implications for US National Security of Anticipated Climate Change report "NIC WP 2016-01" from September 2016 as follows: Long-term changes in climate will produce more extreme weather events and put greater stress on critical Earth systems like oceans, freshwater, and biodiversity. These in turn will almost certainly have significant effects, both direct and indirect, across social, economic, political, and security realms during the next 20 years. These effects will be all the more pronounced as people continue to concentrate in climate-vulnerable locations, such as coastal areas, water-stressed regions, and ever-growing cities. These effects are likely to pose significant national security challenges […​].

  • Connecting Models to Social Systems: Modeling the potential impact of drought on a population requires data on water levels, water use, laws governing water use, population location, growth and movement, and other data. Given the disparate models and methods used in the different domains of interest, greatly enhanced data interoperability capabilities will be essential for social system modeling. In this context, reference is made to the National Academies Press publication From Maps to Models: Augmenting the Nation’s Geospatial Intelligence Capabilities.

Requirements

The following figure illustrates the work items and requirements in the context of Climate Data Accessibility for Adaptation Planning.

modelingRequirements
Figure 19. Climate data accessibility for adaptation planning requirements and work items

The Testbed-13 deliverables shall meet the following requirements:

  1. Data integration: Demonstrate the integration and analysis of earth observation data made available at various portals (e.g. http://www.prepdata.org) and data serving systems in the context of use cases one and two described above.

  2. Model integration: Integrate simulation models into the demonstration that can be parameterized and executed on demand through standardized Web APIs such as OGC WPS.

  3. Process improvements: All demonstrations shall implement the context described above. The work shall capture all experiences made during the integration process and derive best practices and recommendations for improvements that simplify the overall integration process for scientists and non-scientists, taking into account data, data access and data formats, models and model parameterization and execution, portals, access policies, and other elements that influence the integration experience.
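Requirement 2 above, on-demand model execution via OGC WPS, can be illustrated with a short request-builder sketch. The process identifier and input names are hypothetical, and the element structure is simplified relative to the full WPS 2.0 schema (real literal inputs are wrapped in wps:Data/wps:LiteralValue elements):

```python
import xml.etree.ElementTree as ET

WPS = "http://www.opengis.net/wps/2.0"
OWS = "http://www.opengis.net/ows/2.0"

def build_execute(process_id, inputs):
    """Assemble a simplified WPS 2.0 Execute request body for synchronous,
    on-demand execution of a server-hosted simulation model."""
    root = ET.Element(f"{{{WPS}}}Execute",
                      {"service": "WPS", "version": "2.0.0",
                       "response": "document", "mode": "sync"})
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    for name, value in inputs.items():
        inp = ET.SubElement(root, f"{{{WPS}}}Input", {"id": name})
        ET.SubElement(inp, f"{{{WPS}}}Data").text = str(value)
    ET.SubElement(root, f"{{{WPS}}}Output",
                  {"id": "result", "transmission": "value"})
    return ET.tostring(root, encoding="unicode")
```

The point of the sketch is the parameterization surface: scientists and non-scientists differ only in which inputs a client exposes, not in the protocol itself.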

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables. Some of the work items are used in other work packages as well.

  • NA101: Agriculture Scientist Client – This client component will provide access to climate data and other data relevant for an agriculture scientist to predict the effects of drought on crop production. The client shall support the integration of on-demand models.

  • NA102: Non-Scientist or Analyst Client – This client component will provide access to climate data, agriculture predictions, and other data relevant for an analyst to assess the effects of drought on mass migration. The client shall support the integration of on-demand models.

  • NA103: Prediction WPS – This component accessible by Web Processing Service (WPS) will enable access and control of predictive models relevant to drought and agriculture prediction based on climate and other data. The service shall support different parameterization options to serve both scientists and non-scientists.

  • NA104: WCS access to climate data – This WCS server will provide access to NASA Climate data for use by the other components in Testbed-13. The climate data shall be made available in formats and resolutions appropriate for scientists and non-scientists.

  • NA001: Climate Data Accessibility for Adaptation Planning ER – This Engineering Report (ER) will describe all testbed activities on the NASA Climate Data Accessibility for Adaptation Planning requirements, all experiences made during implementation, including recommendations to the sponsor, and provide any resulting standards change requests to the appropriate standards working groups. It shall develop best practices for data and model integration and serve as a guidance document to work with NASA ESDS and externally provided data.

B.14. Security

B.14.1. QGIS Security Support

Modern security standards are important for many governments. Standards like X.509 (PKI) and SAML have their own local profiles and are often used by governments, usually in combination with SOAP-based message exchanges as well as for web single sign-on. OAuth has recently received a lot of attention and is seen as particularly useful in the context of RESTful APIs.

There is a gap between these security standards, OGC interface standards such as WMS, WMTS, or WFS, and modern client applications. Security policies based on modern security standards can easily be added to an existing WMS or WFS server, as demonstrated in Testbed-11 and Testbed-12. Some organizations, e.g. Dutch Kadaster, even operate such a service as part of their production environments. Still, users report unsatisfying experiences when interacting with secured services due to the lack of support in client applications such as QGIS, ArcMap, and others.

In part this is due to a lack of testing and documentation; in other situations, clients do not provide full software-side support to successfully interact with secured OGC Web service instances. The OGC OWS Common - Security SWG has identified some needs on the specification side, but it became evident that more implementation work and testing is required to successfully establish functional secure client-server communication.

Testbed-12 produced the OWS Common Security Extension (16-048r1) to overcome interoperability issues between a secured service instance and a client. This OWS Common Security Extension adds content to the OWS Common standard regarding the implementation of security controls in such a way as to preserve interoperability. Though the Testbed-12 results still need to be implemented by the OGC Standardization Program, the additions to the OWS Common specification will be in two areas. The first extension will provide more detail on the use of the HTTP protocol, particularly as it’s related to security controls. The second extension will address discovery and negotiation of security controls. This will provide an annotation model for the Capabilities document to enable a service provider to specify the security implemented at a service instance (endpoint). Assuming the approach described in this Engineering Report gets standardized by the OWS Common - Security SWG, it is expected that all future OGC Web Service standards address security and ensure compatibility with the approach.

Requirements

The following figure illustrates the work items and requirements in the context of Security.

SecurityRequirements
Figure 20. Security requirements and work items

Testbed-13 shall develop an open-source implementation of client software based on QGIS that communicates securely with OGC Web service instances supporting OAuth 2.0 authorization in combination with OpenID Connect authentication, or SAML 2.0 for both authentication and authorization. Ideally, both authentication/authorization approaches are supported. In order to develop a client application with security support, the sponsor, through Dutch Kadaster, will offer a WMS server instance with OAuth-based security and an OAuth Authorization Server to which all testbed participants will have access/accounts.

To support the development of powerful client applications, OGC and sponsors try to arrange for a number of secured Web services including WMS, WFS, and WMTS. If available, these services will implement the OWS Common Security Extension developed in Testbed-12. The availability of a SAML Identity Provider Service cannot be guaranteed at this stage.

Testbed-13 shall develop a plug-in/extension for QGIS that enables QGIS to interact successfully with the provided secured OGC Web service instances. For this purpose, all service endpoints will be ready at the beginning of the testbed, allowing the developers enough time for development and testing.
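The secured exchange such a plug-in must implement can be sketched in a few lines. This is a plain-Python sketch rather than QGIS/PyQGIS code, and the endpoint, layer name, and token value are placeholders; a real client would first obtain the access token from the Authorization Server, e.g. via the OpenID Connect authorization code flow:

```python
import urllib.request
from urllib.parse import urlencode

def secured_getmap(endpoint, access_token, layer, bbox):
    """Build a WMS 1.3.0 GetMap request that carries an OAuth 2.0 bearer
    token in the Authorization header, as defined in RFC 6750."""
    query = urlencode({
        "service": "WMS", "version": "1.3.0", "request": "GetMap",
        "layers": layer, "styles": "", "crs": "EPSG:4326",
        "bbox": ",".join(map(str, bbox)),
        "width": 512, "height": 512, "format": "image/png",
    })
    req = urllib.request.Request(endpoint + "?" + query)
    # RFC 6750: the access token travels in the Authorization header.
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```

The interesting part for the testbed is everything the sketch omits: token acquisition, refresh, and discovery of the security controls advertised in the Capabilities document.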

The following diagram illustrates the components that are made available in order to develop the security client.

SecurityComponents
Figure 21. Security components. Blue components are sponsored, grey components are provided by the sponsor.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • GE101: QGIS Security Client - Working QGIS client software that can successfully interact with the secured endpoints offered in the testbed, based on the draft specifications by the OWS Common Security SWG, using OAuth with OpenID Connect and SAML. Interaction with at least the WMS-Auth shall be demonstrated in the testbed. Ideally, the list of supported services includes WMS, WMTS, and WFS, all secured with either OAuth or SAML.

  • The following work items are used for the Security Work Package, but are primarily used and defined in section Mass Migration Source Integration - Deliverables:

    • AB002: Security ER

    • AB101: OAuth-enabled Web Service

    • PM103: SAML-enabled Web Feature Service with Transactions (WFS-T)

    • PM105: Security Component - SAML Authentication Service

B.15. Vector Tiling

The public but proprietary Vector Tile Specification from Mapbox is becoming more widely adopted. It continuously develops towards advanced capabilities for client-side styling and interrogation of data, combined with the benefits of caching and optimized encoding and transmission protocols. Whilst the specification explicitly states that vector tiles should not contain information about the bounds and projection of the data within the vector tile, the projection of reference is Web Mercator (EPSG:3857), with the Google tile scheme as the tile extent convention of reference. For many use cases these provide enough accuracy. However, for many of the scenarios within a mass migration theme (for example emergency response at sea), other projections and a standards-based approach to tile schemes may be required for interoperability and/or accuracy. Furthermore, the Mapbox Vector Tile Specification does not currently provide good support for moving features, which are likely to be of high interest in a mass migration scenario, where moving features (whether humans or resources) may require some form of continued communication and update into clients whilst still utilizing the benefits of caching and optimized encoding within the vector tiles themselves.
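For reference, the Web Mercator "Google" tile convention mentioned above maps a WGS84 coordinate to tile indices as follows (the standard XYZ scheme; a minimal sketch):

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """WGS84 lon/lat -> tile indices in the Web Mercator (EPSG:3857)
    "Google" XYZ scheme that the Mapbox Vector Tile ecosystem assumes by
    convention: x counts eastward from -180, y counts southward from
    roughly 85.05 N (the Mercator clipping latitude)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y
```

Supporting other projections, as required below, means replacing exactly this mapping (and the implied tile matrix) with a standards-based tile scheme such as those defined for WMTS.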

Vector map tiles are increasingly being considered as a method for providing geospatial data to users via optimized vector web services (such as MVT/PBF services, e.g. Mapzen) over both enterprise and low-bandwidth environments. However, current vector tiling implementations are focused on the tiling and styling of vector data; as such, specific concepts provided by current OGC Standards are not fully addressed. The goal is to develop a "best of breed" solution for vector tiling using key characteristics from existing services. These include:

  • WMTS – tiling service applied to a vector implementation (distributes as raster)

  • WMS – server styling of OGC Symbology Encoding (SE) and Style Layer Descriptor (SLD)

  • WFS – flexibility to integrate, disseminate features and attributes and supporting powerful filter mechanisms

As such there are several limitations associated with these implementations. These include:

  • Styling: The majority of key implementations currently utilize implementation-specific formats to store and disseminate the vector styling data. Whilst these formats may help optimize performance, this does not fit with an open architecture. In addition, the use of OGC best practices such as SLD to style the vector data may result in sub-optimal implementations: either the vector data is converted to raster data, which limits the usability of the final service, or performance is limited by relying on constant communication between the client and server.

  • Attribution: The majority of implementations currently allow limited dissemination of feature attributes. Current implementations often limit the choice of attributes to those used for styling the feature. However, users may wish to utilize additional feature attributes, including those that are not number or text based, e.g. an image, which they currently cannot do.

  • Compression and Generalization: Current implementations of vector map tiling have limited ability to generalize (simplify) vector data. Issues such as over-simplification, the creation of invalid vectors, and reversed winding orders have not been adequately solved. In addition, many implementations do not utilize compression techniques to optimize the service.
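One of the concrete defects listed above, reversed winding orders, can be detected with the surveyor's (shoelace) formula. The sketch below follows the Mapbox Vector Tile 2.x sign convention (exterior rings positive, interior rings/holes negative, evaluated in tile coordinates):

```python
def signed_area(ring):
    """Surveyor's (shoelace) formula for a closed ring given as a list of
    (x, y) vertices (first vertex not repeated at the end). Under the
    Mapbox Vector Tile 2.x convention, exterior rings must yield a
    positive value and interior rings (holes) a negative one."""
    s = 0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

def fix_winding(ring, exterior=True):
    """Reverse the ring if its orientation disagrees with its role."""
    if (signed_area(ring) > 0) != exterior:
        return ring[::-1]
    return ring
```

A tiling pipeline can run this check after clipping and simplification, which are the steps most likely to flip or degenerate a ring.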

Providing map tiles to users via optimized Web services or over low-bandwidth networks enables users to integrate features whilst benefiting from compression and generalization approaches. OGC Engineering Reports 16-021 and 16-055 explored compression techniques, low-bandwidth environments, and generalization techniques, and created implementation(s).

In OGC Testbed-12, two Engineering Reports, 16-067 and 16-068, examined vector tiling and documented an implementation. OGC Engineering Report 16-068 identified the different elements that vector tiling incorporates; these include:

  • Performance and Memory Use

  • Symbology (portrayal)

  • Attribute handling

  • Geometry handling

  • Generalization

  • Simplification

  • Compression

Developing a standardized approach through an implementation would allow the demonstration of best practice approaches to these different elements. Vector tiling is incorporated into various existing OGC Standards, OGC GeoPackage, OGC Web Services, and OGC CDB (OGC Engineering Report OGC 16-067 explores this further). The goal is the optimization of OGC geospatial Web services to deliver faster, lighter and more robust services.

Requirements

The following figure illustrates the work items and requirements in the context of Vector Tiling.

VectorTilingRequirements
Figure 22. Vector tiling requirements and work items

Testbed-13 shall implement the following requirements:

  • Testbed-13 shall conduct and document the results of a thorough feasibility study evaluating a standardized vector tiling model. This study shall evaluate vector tiling without the constraint of being tied to a single data container or service. It shall take a broader view of spatial indexing, in which a tiled, multi-leveled structure, e.g. an R-tree, is used to enhance rapid access to a vector feature dataset and/or data store. Testbed-13 shall undertake a dedicated spatial index study to investigate and recommend the way forward for an OGC vector tiling model.

  • Testbed-13 shall develop recommendations to aid standardization of the widely adopted Mapbox Vector Tile Specification with other commonly used projections, moving feature data, and interoperability with existing standards where appropriate. The Vector Tile implementation shall support vector tiles utilizing WGS84 (EPSG:4326), ETRS89 (EPSG:4258) and British National Grid (EPSG:27700) projection data. It shall include moving features utilizing standards-based tile schemas.

  • Vector map tiles are to be styled using a non-proprietary format that is open and not implementation specific. To ensure the service is optimized for analysis and low-bandwidth networks, a non-raster format should be utilized. Where possible this should utilize existing OGC standard(s) or best-practice approaches such as Symbology Encoding and Styled Layer Descriptor.

  • The ability to associate attribute(s) with vector features as appropriate for publishing as a Vector Map Tiling service. The users of the Vector Map Tiling Service shall be enabled to select which attributes are assigned to the vector feature.

  • Vector Map Tiling service incorporating the full range of Geometry Types and Tiling Strategies (OGC ER 16-068). Attention: This is not necessarily a new service type! Instead, existing services shall be investigated for extension with the necessary capabilities to support vector tiling, featuring the concepts mentioned above.

  • Server-based implementation that is optimized for low-bandwidth environments, which requires compression and generalization. The implementations shall support compression and generalization of vector map tiles. The work shall draw upon work undertaken in Testbed-12; for example, OGC ERs 16-021 and 16-055 focused on working implementation(s) and demonstration of generalization and compression for Web Feature Service (WFS).
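The generalization called for above is typically realized with line simplification. As a minimal, unoptimized sketch of the classic Ramer-Douglas-Peucker algorithm (tolerance in coordinate units; production implementations add the topology and winding-order safeguards discussed earlier):

```python
import math

def douglas_peucker(points, tol):
    """Ramer-Douglas-Peucker simplification: if the interior point farthest
    from the chord between the endpoints exceeds tol, keep it and recurse
    on both halves; otherwise collapse the run to its endpoints."""
    if len(points) < 3:
        return list(points)
    (ax, ay), (bx, by) = points[0], points[-1]
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy) or 1.0  # guard against coincident endpoints
    # Perpendicular distance of each interior point to the chord a-b.
    dists = [abs(dy * (px - ax) - dx * (py - ay)) / norm
             for px, py in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > tol:
        left = douglas_peucker(points[:i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return left[:-1] + right  # drop duplicated split point
    return [points[0], points[-1]]
```

Per-zoom-level tolerances then give the progressive generalization a tiling pipeline needs.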

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • OS101: Vector Tiles implementation - Implementations of Vector Tiles containing WGS84, ETRS89 and British National Grid projection data, standards based tile schemas and moving features. The tiles shall further support all geometry types and tiling strategies as discussed in OGC 16-068 and incorporate styling using SLD/SE.

  • OS102: Vector Tiles client implementation - Open source client integration and demonstration of Vector Tiles implementation available for 12 months for sponsor interaction (with no ongoing support). The requirements can be satisfied by providing a vector tiles library with create, read, update and delete as well as visualization support, by developing a plugin for an existing open source client, e.g. QGIS, or by providing a specialized open source client.

  • DS101: Vector Map Tiling Service - Vector Tiling Service Implementation that incorporates the following elements (which are documented in OGC ER’s 16-067 and 16-068). It is emphasized that this service is ideally delivered following the Virtualization and Containerization option described in Annex A: Solution Transfer Activity Types.

    • Server based implementation that is optimized for low bandwidth environments

    • Open Source approach preferred but not mandatory

    • Incorporates the best practice for OGC SE/SLDs, but ensures efficient use of bandwidth

    • The ability to associate attribute(s) with vector features as appropriate for publishing as a Vector Map Tiling service

    • Vector Map Tiling service incorporating the full range of Geometry Types and Tiling Strategies (OGC ER 16-068)

    • The service shall be complemented with a simple client that allows sending and visualizing service requests and responses

  • NG116: WFS for Vector Tiling - This WFS 2.5 shall support the recommendations of the Vector Tile Study described in the requirements section. The service shall be capable of providing tiled vector data to CDB and to GeoPackage either directly or through the use of a Web Processing Service. Implementing the NSG WFS Profile.

  • DS001: Vector Tiles ER - Engineering Report which provides recommendations that would contribute to the creation of a future OGC Vector Tiling specification. The ER shall capture all study results in response to the requirements stated above. In addition, it should also summarize the cost benefits of utilizing OGC or proprietary styling approaches for Vector Map Tiling. This should include the impact on performance and interoperability. In addition, the Engineering Report shall describe all implementations and capture experiences made in the context of Vector Tiles, including support for full range of geometry types, SLD/SE support, support for projections, tile attribution, and moving features.

B.16. CDB

OGC recently approved the Common Database (CDB) Standard. The CDB Standard was brought into the OGC at the request of the United States Special Operations Command (US SOCOM) and is widely used in the Modeling and Simulation community. The standard, as currently approved, utilizes the old FACC format for feature and attribute encoding. The CDB Standards Working Group (SWG) has agreed to bring the standard current with the NSG Application Schema (NAS) feature and attribute encoding as part of a revised version. NGA has provided the NAS to the SWG for consideration. The CDB SWG has also agreed to engage with the CityGML SWG to support 3D integration. The objective of this effort is to build on the work of the SWG within the Testbed through in-depth analysis, rapid development, and prototyping of data models and supporting services integrating NAS, CityGML, and CDB.

The work on CDB is structured in three phases and shall include a feasibility study, the implementation of data models and schemas based on the study results, and a set of services that implement the data models and schemas in the form of OGC WFS and WCS. It is emphasized that these phases build on each other; therefore, special timing requirements have to be met in order to allow all three phases to be implemented.

Requirements

The following figure illustrates the work items and requirements in the context of CDB.

CDBRequirements
Figure 23. CDB requirements and work items

This work shall include:

  • CDB Feasibility Study - Conduct and document the results of a thorough feasibility study of CDB interoperability and use beyond the current modeling and simulation community. The study shall evaluate support for and interoperability issues related to the use of multiple Feature Data Dictionaries and schemas in CDB. Specifically, the study shall evaluate the current CDB data model as compared to the NSG and DGIWG data models and investigate interoperability between the current CDB data contents and a NAS based content.

    CDB should also be able to expose its content using OGC services such as WFS. The study shall take an initial assessment of requirements for utilizing OGC WFS and WCS as data sources and delivery mechanisms for CDB content. The study shall determine best practice solutions for encoding and interoperability with OGC services and other OGC standard encodings such as CityGML. Where necessary, change requests shall be generated and submitted to the appropriate SWG.

    Finally, the study shall explore and document the interoperability implications within existing (modified) viewers supporting multiple FDDs with the existing OGC CDB Standard in a real-time simulation implementation.

  • CDB NAS-Profile - Testbed-13 shall utilize the results of the CDB Feasibility Study to develop a CDB profile with a NAS-based data model. This model shall support the feature, attribute, and geometry requirements for vector data as required by the modeling and simulation community. Data suitable to exercise the data model and schema shall be synthesized or will be provided.

  • CDB Services - Testbed-13 shall develop OGC services and prototypes which support the discovery, access, and retrieval of data content from CDB, supporting the CDB NAS-Profile.

  • CDB Viz and Manipulation - Testbed-13 shall develop client software for visualization and manipulation of static CDB data as a supplemental source data for the NGA analyst. This client shall access CDB data using the OGC services developed in this work package. The client shall support an existing CDB dataset that uses FACC encoding and supports the transformation to a different Feature Data Dictionary (NAS) according to the CDB-NAS Profile.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG001: CDB ER - Engineering Report capturing the CDB Feasibility Study, all experiences made during the profile development and the client and service implementation.

  • NG101: Feasibility Study - Feasibility Study on CDB interoperability as described above.

  • NG102: CDB WFS - WFS implementation supporting the new NAS-based CDB profile. Implementing NSG WFS 2.0 Profile.

  • NG103: CDB WCS - WCS implementation supporting the new NAS-based CDB profile. Implementing DGIWG WCS 2.1 Profile.

  • NG104: CDB WFS (3D) - WFS implementation supporting the new NAS-based CDB profile and CityGML/3D.

  • NG105: CDB Client - Client implementation supporting the requirements defined above. The client shall interact with CDB Web services, and shall support model transformation and visualization.

  • NG113: Data Models - Provision of data models supporting the tailored urban centric scenario. Further details provided in section NAS Profiling.

B.17. 3DTiles and i3s: Interoperability & Performance

The OGC has approved two new work items and the start of the review and approval process for independent Community Standards related to 3D capabilities with streaming: 3D Tiles and Indexed 3D Scene Layer (i3s). NGA seeks to test the interoperability between the two specifications, their formats, and their streaming capabilities.

The 3DTiles and i3s: Interoperability & Performance work package is organized in three phases. The objective of phase-1 is to conduct and document the results of a thorough feasibility study to investigate whether both specifications (3D Tiles and i3s) support interoperable data content, determine if the current software will support data based on the CDB/CityGML, and identify enhancements as needed.

The 3DTiles and i3s: Interoperability & Performance analysis in Testbed-13 shall provide a demonstration of all examined test scenarios. The demonstration shall include 3D data streaming capabilities supporting CDB and CityGML data streamed according to the 3D Tiles and i3s Community Standards. Ideally, this work takes the results from the CDB Feasibility Study as described in section CDB into consideration. As timing of this work is largely dependent upon the work of the CDB and NAS Profiling work described there, it is expected that preliminary work would utilize a non-specialized CDB/CityGML data set served through an i3s and a 3D Tiles server.

While the intent is to stream NAS based CDB and CityGML in their native formats, the results of the study may lead to recommendations for alternative solutions involving conversions into a common format. Testbed-13 is encouraged to evaluate all risks and potential payoffs of the innovative technology options that are investigated and recommend the option that best achieves the software interoperability objective of this technology pursuit.

The following picture illustrates the general setup of the performance and interoperability study.

PerformanceStudy
Figure 24. Performance Study overview

Phase-2 builds on the results of the first phase and develops 3D Tiles and i3s enhancements as necessary to solve all interoperability issues resulting from the phase-1 study.

Phase-3 implements the urban centric scenario and a demonstration of i3s and 3D Tiles visualizations.

Requirements

The following figure illustrates the work items and requirements in the context of 3DTiles and i3s Performance. It is emphasized that the figure does not show all realization arrows, to improve readability.

3DPerformanceRequirements
Figure 25. 3DTiles and i3s: Interoperability & Performance requirements and work items

This work shall include:

  • Interoperability & Performance Study - This thorough feasibility study shall investigate whether both specifications (i.e. 3D Tiles & i3s) support interoperable data content, determine if the currently available software (software providing 3D Tiles & i3s) will support data based on the CDB/CityGML, and identify enhancements as needed. The study shall be based on experiments with commercially available software. Each analysis shall identify a test scenario which identifies how the issue under investigation will be tested, measured, and reported.

    The study shall include the following:

    1. An analysis of the benefits and drawbacks of streaming each of these formats (i.e. CDB & CityGML).

    2. An assessment of the degree of interoperability between the two formats CDB and CityGML.

    3. Recommendations on alternatives which address any interoperability issues and can be supported by both streaming standards.

    4. Benchmark data comparing the performance of each streaming service against the performance of existing CDB rendering clients (including delivery of large 3D data sets).

    5. Evaluation of 3D Tiles & i3s delivery mechanisms compared to the 3D Portrayal Standard.

  • i3s and 3D Tiles enhancements: Enhance i3s and 3D Tiles as necessary to support the data models and interoperability challenges identified in the Interoperability & Performance Study.

  • 3D Streaming Scenario: Provide streaming capabilities and clients supporting an urban centric scenario. The demonstration client shall implement visualization capabilities supporting i3s and 3D Tiles and needs to include at least support for the OGC 3D Portrayal Server.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG002: 3D Tiles & i3s Interoperability & Performance ER - The ER shall capture all results from the 3D Tiles and i3s study described above. The ER shall further develop i3s and 3DTiles enhancements as necessary to support all issues resulting from the study. The ER shall capture all results from the service and client implementations and interoperability tests performed as part of this work package and compare the different data streaming engines.

  • NG106: CDB Implementation - CDB implementation, loaded with scenario data as described above. The data will either be provided or needs to be synthesized by the participant.

  • NG107: CityGML Datastore - CityGML datastore, loaded with scenario data as described above. The data will either be provided or needs to be synthesized by the participant.

  • NG108: Streaming Engine-1 - Server that can provide 3D Tiles and/or i3s data from the CDB and CityGML data stores described above. Ideally, this server implements the OGC 3D Portrayal Service specification.

  • NG109: Streaming Engine-2 - Server that can provide 3D Tiles and/or i3s data from the CDB and CityGML data stores described above. Ideally, this server implements the OGC 3D Portrayal Service specification.

  • NG110: 3D Performance Client - Client application supporting i3s and 3D Tiles data streams. The client shall support the performance tests to allow comparison to commercially available CDB clients. The client shall implement visualization capabilities supporting i3s and 3D Tiles and interaction with at least the OGC 3D Portrayal Server.

  • NG111: CDB Performance Client - Commercially available CDB client to be used for performance tests and scenarios as described above. The client shall support direct access to CDB instances.

B.18. NAS Profiling

NGA utilizes the open source ShapeChange software tool as an integral piece in NAS development. This tool is used to take NAS based UML models and create XML and RDF based schemas. Testbed-12 began development of capabilities for extracting profiles supporting specific mission functions from the full NAS content. Testbed-13 shall further refine this NAS Profiling by incorporating those previously defined capabilities for the purpose of developing an Urban Military Profile. This profile shall define the vector content requirements for the CDB and CityGML urban centric profiles from the NAS described in section CDB. This profile shall be capable of being used to define the data model requirements for an Urban Military CDB and simultaneously an Urban Military Application Domain Extension (ADE) for use in CityGML.

The participant may need to transform the NAS Platform Independent Model to Platform Specific Models (PSM). This will require a flattening of the NAS to support Geodatabase and CDB/CityGML Platform Specific Models. This "baked in" flattening for a PSM is necessary to assure interoperability of disparate developments/implementations. As an example, these are two of many effects of a "flattening transformation" (further details are given in the Testbed-12 ShapeChange Engineering Report (OGC 16-020)):

  1. The need to handle multi-valued attributes by transforming them into multiple single-valued attributes.

  2. The need to handle interval-valued attributes by transforming them into a triple of single-valued attributes of "specific natures".

There are numerous other transformations that need to be performed, most generally that of transforming a multi-geometry entity into a set of single-geometry entities, and that of “migrating” selected attributes of selected associated-entities to the applicable geometry-centric entities.
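The two flattening transformations listed above can be sketched as follows. This is an illustrative sketch only: the attribute names, the cardinality cap, and the choice of lower/upper/nominal components are assumptions, not ShapeChange's actual flattening rules.

```python
# Hypothetical sketch of two NAS "flattening" transformations.
# MAX_VALUES is an assumed cap when expanding multi-valued attributes.
MAX_VALUES = 2

def flatten_multivalued(name, values, max_values=MAX_VALUES):
    """Expand a multi-valued attribute into numbered single-valued ones."""
    return {f"{name}_{i + 1}": v for i, v in enumerate(values[:max_values])}

def flatten_interval(name, lower, upper):
    """Expand an interval-valued attribute into a triple of single-valued
    attributes: lower bound, upper bound, and a nominal (midpoint) value."""
    return {
        f"{name}_lower": lower,
        f"{name}_upper": upper,
        f"{name}_nominal": (lower + upper) / 2,
    }

# Illustrative feature with one interval-valued and one multi-valued attribute
feature = {"heightAGL": (10.0, 14.0), "function": ["residential", "retail"]}
flat = {}
flat.update(flatten_multivalued("function", feature["function"]))
flat.update(flatten_interval("heightAGL", *feature["heightAGL"]))
```

The flattened result contains only single-valued attributes, which is the property the Geodatabase and CDB/CityGML PSMs require.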

The NAS Profiling work is organized in three phases. During the first phase, Testbed-13 shall conduct and document the results of a thorough feasibility study to investigate whether a strict subset of the NAS is capable of supporting an urban centric profile or whether additional content is required. This study shall identify any gaps in the CityGML standard towards supporting the development of an Urban Military Profile. Further, the study shall make recommendations to overcome those shortfalls. The study shall outline the requirements to develop a NAS based Urban Military ADE for CityGML and an Urban Military Profile for a NAS based CDB.

During the second phase, Testbed-13 shall enhance ShapeChange to develop data models that meet the requirements for an Urban Military Profile. In addition, ShapeChange shall be enhanced to provide a user interface with the ability to generate profiles from the NAS. This user interface shall be able to present already produced profiles which can be edited to delete and add content from the NAS. The user interface shall also be able to generate profiles from scratch based on user input.

During the last phase, Testbed-13 shall provide models supporting the tailored urban centric scenario as described in the CDB work package.

Requirements

The following figure illustrates the work items and requirements in the context of NAS Profiling.

NASProfilingRequirements
Figure 26. NAS Profiling requirements and work items

This work shall include:

  • Urban Military Profile Study - Testbed-13 shall conduct a thorough feasibility study as described above.

  • ShapeChange Enhancements - Testbed-13 shall develop the ShapeChange enhancements as described above.

  • Data Models Provision - Testbed-13 shall provide models supporting the tailored urban centric scenario as described in the CDB work package.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG003: NAS Profiling ER - This Engineering report shall capture all results and implementation details of the NAS Profiling work package.

  • NG112: ShapeChange Enhancements - Enhancements to ShapeChange to meet the requirements defined above.

  • NG113: Data Models - Provision of data models supporting the tailored urban centric scenario as described in the CDB work package.

B.19. Denied, Degraded, Intermittent, or Limited Bandwidth (DDIL)

Not all systems reside on the World Wide Web. Many are hosted on Denied, Degraded, Intermittent, or Limited bandwidth (DDIL) communications environments. Testbed-13 will examine how OGC Services can be adapted to accommodate two properties which are common to many DDIL environments.

B.19.1. Case 1: Disconnected Networks

Web Services assume the presence of a global interconnected Internet. Case 1 will examine the options available for running OGC Web Services when the Internet is not available.

  1. There will be an IP-based network available.

  2. There will be a name resolution service available so that node names can be associated with IP addresses.

  3. There will be no connectivity between the local network and the Internet.

  4. The vast majority of URLs will be “broken” since they reference resources outside of the local network.

  5. Loss of Internet connectivity may not be by design (such as in a disaster scenario).

The contractor shall develop strategies for OGC Web Services to operate in these environments. These strategies will be exercised and validated using WFS, WMS, WCS, and CS-W. Validation will include transition from an Internet-connected to a disconnected state.
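One conceivable strategy, rewriting Internet-scoped URLs in service responses to locally resolvable hosts, can be sketched as follows. The mirror mapping and host names are hypothetical; a real deployment would derive them from the local name resolution service assumed in item 2 above.

```python
# Hedged sketch of a disconnected-network strategy: rewrite URLs that
# reference Internet hosts so they point at local mirrors, and flag
# URLs that cannot be served inside the local network as broken.
from urllib.parse import urlsplit, urlunsplit

# Hypothetical mapping from Internet host names to local mirror hosts
LOCAL_MIRRORS = {"services.example.com": "wfs.local"}

def localize_url(url):
    parts = urlsplit(url)
    host = LOCAL_MIRRORS.get(parts.hostname)
    if host is None:
        return None  # no local mirror: the URL is "broken" on this network
    return urlunsplit((parts.scheme, host, parts.path, parts.query, ""))

print(localize_url("https://services.example.com/wfs?request=GetCapabilities"))
```

A validation run for the Internet-to-disconnected transition would apply such a rewrite to capabilities documents and verify that all advertised endpoints remain reachable.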

B.19.2. Case 2: Size, Weight, and Power (SWAP)

Devices in DDIL environments often have strict limits on the physical size, weight, and power available to them. These constraints also restrict the available bandwidth. There is growing interest in binary encodings to accommodate these limitations.

Google Protocol Buffers and Apache Avro are two flexible binary mechanisms for serializing structured data. Both have been used in U.S. Intelligence Community and DoD systems to encode geospatial data. This task shall subject these two technologies to the same evaluation regime that Testbed-12 applied to GZIP and EXI. The resulting data will be used to populate the WFS Low Bandwidth Extension proposed in Testbed-12. All Testbed-12 results and comparisons with external studies are available in the OGC 16-055: Compression Techniques Engineering Report. Implementation details, including DGIWG profile support for WFS, are documented in OGC 16-021: Low Bandwidth and Generalization Engineering Report.

In addition to the Testbed-12 tests, which compared the compression rates of various data sets and the size of the resulting transferred data, Testbed-13 shall evaluate the following properties:

  1. Server Computational Cost: what is the cost in CPU time to encode a data set?

  2. Client Computational Cost: what is the cost in CPU time to decode a data set?

  3. Fault Tolerance 1: How well does the technology detect and recover from single and multi-bit errors?

  4. Fault Tolerance 2: How well does the technology detect and recover from lost packets?

Testbed-13 shall conduct and document the results of a thorough feasibility study to investigate the requirements for support to Low Bandwidth Techniques. This study shall document the analysis processes, collected data, conclusions, and recommendations for using Google Protocol Buffers and Avro.
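The computational-cost measurements (items 1 and 2 above) could be harnessed roughly as follows. This is a sketch under stated assumptions: JSON plus GZIP stands in for the codec, since the Protocol Buffers and Avro codecs (which require compiled schemas) would plug into the same encode/decode slots, and the feature payload is synthetic.

```python
# Sketch of an evaluation harness for Server/Client Computational Cost:
# time encode and decode separately using CPU time, not wall time.
import gzip
import json
import time

# Synthetic stand-in for a feature data set
features = [{"id": i, "geom": [i * 0.1, i * 0.2]} for i in range(10_000)]

def measure(encode, decode, payload):
    t0 = time.process_time()
    blob = encode(payload)          # server-side cost
    t1 = time.process_time()
    out = decode(blob)              # client-side cost
    t2 = time.process_time()
    assert out == payload           # round trip must be lossless
    return {"encode_s": t1 - t0, "decode_s": t2 - t1, "bytes": len(blob)}

# JSON+GZIP codec as a placeholder; a protobuf or Avro codec would
# expose the same encode/decode pair.
stats = measure(
    lambda p: gzip.compress(json.dumps(p).encode()),
    lambda b: json.loads(gzip.decompress(b).decode()),
    features,
)
```

The fault-tolerance properties (items 3 and 4) would be exercised separately, by corrupting or dropping portions of `blob` before decoding and observing detection and recovery behavior.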

Requirements

The following figure illustrates the work items and requirements in the context of Denied, Degraded, Intermittent, or Limited Bandwidth (DDIL).

DDILRequirements
Figure 27. Denied, Degraded, Intermittent, or Limited Bandwidth (DDIL) requirements and work items

This work shall include:

  • Disconnected Networks - Testbed-13 shall conduct and document the results of a thorough feasibility study to investigate the requirements for implementations on a disconnected network. This study shall describe the strategies, validation strategy, and validation results from this subtask. Where necessary, change requests shall be generated and submitted to the appropriate SWG.

  • Size, Weight, and Power - The objective of Testbed-13 is to conduct and document the results of a thorough feasibility study to investigate the requirements for support to Low Bandwidth Techniques. This study shall document the analysis processes, collected data, conclusions, and recommendations for using Google Protocol Buffers and Avro.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG004: Disconnected Network ER - This Engineering Report shall capture the results of the Disconnected Networks study and resulting Change Requests.

  • NG005: SWAP ER - This Engineering Report shall capture the results of the SWAP study and resulting Change Requests.

  • NG114: Compression Test Server - Server implementation supporting Google Protocol Buffers and Apache Avro in support of the SWAP study.

  • NG115: Compression Test Client - Client implementation supporting Google Protocol Buffers and Apache Avro in support of the SWAP study.

B.20. Portrayal

During Testbed-11, a portrayal ontology was introduced that focused on point-based symbology (icons for Emergency Management). Testbed-12 extended this work by providing a richer symbolizer and graphics ontology that can accommodate line- and area-based symbols along with graphic attributes applicable to these symbols. The Testbed-12 work is described in the OGC 16-059: Semantic Portrayal, Registry, Mediation Services Engineering Report, which documents the findings of the activities related to the Semantic Portrayal, Registry and Mediation components implemented during the OGC Testbed-12. This effort is a continuation of efforts initiated in the OGC Testbed-11.

For Testbed-13, the intent is to extend the ontology to accommodate more complex symbols, such as composite symbols and symbol templates, in order to describe more advanced symbology standards such as the family of MIL2525 symbols. Testbed-12 rendered a symbol legend based on the symbol definitions; however, more work is needed to develop a renderer that is driven by data in the portrayal ontology and produces SVG.

Requirements

The following figure illustrates the work items and requirements in the context of Portrayal.

PortrayalRequirements
Figure 28. Portrayal requirements and work items

This work shall include:

  • Expressiveness of Styles - Testbed-13 shall build on the results from Testbed-12, which laid out the foundation to express styles. Testbed-13 shall continue the developments to achieve at least the same expressiveness as SLDs.

  • Advanced Symbols - Testbed-13 shall extend the portrayal ontology to represent composite symbols and symbol templates.

  • Advanced Output - Testbed-13 shall investigate other renderer outputs, such as a JSON encoding of the portrayal information, so they can be handled on the client side in HTML5 Canvas or other rendering libraries such as D3.js. Other renderers may also be investigated, e.g., SLD production from the RDF descriptors, and how unsupported features from the portrayal ontology can be supported in graphic languages less expressive than SVG, such as KML.
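The JSON-encoding idea can be illustrated with a minimal sketch, assuming a hypothetical symbolizer structure (not the actual Testbed-12 portrayal ontology) rendered to an SVG fragment:

```python
# Hypothetical JSON symbolizer; the field names are illustrative
# assumptions, not the portrayal ontology's actual vocabulary.
symbolizer = {"type": "PointSymbolizer", "shape": "circle",
              "size": 6, "fill": "#cc0000", "stroke": "#000000"}

def to_svg(sym, cx=10, cy=10):
    """Render one symbolizer to an SVG fragment; a Canvas or D3.js
    renderer on the client side would consume the same JSON."""
    if sym["type"] == "PointSymbolizer" and sym["shape"] == "circle":
        return (f'<circle cx="{cx}" cy="{cy}" r="{sym["size"] / 2}" '
                f'fill="{sym["fill"]}" stroke="{sym["stroke"]}"/>')
    raise NotImplementedError(sym["type"])

svg = to_svg(symbolizer)
```

The same intermediate JSON could feed a less expressive target such as KML, with unsupported graphic attributes dropped or approximated.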

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG008: Portrayal ER - Engineering Report that captures all results of the portrayal work package.

  • NG122: Portrayal Demonstration - A client/server demonstration of the new enhanced portrayal features developed in Testbed-13.

B.21. Semantic Registry

This section has been merged with section Semantic Registry.

B.22. Asynchronous Services

Web Services are built around a request-response message exchange model. However, there are many times when the response cannot be delivered in a timely fashion. For example, RESTful operations are synchronous, meaning that once a request is issued, the client is expected to wait for a reply. This is fine for normal Web surfing, but it will not work for standing queries, where the client may have to wait hours, days, or even weeks for a response. Testbed-13 shall explore two potential use cases with the goal of extending asynchronous capabilities for OGC services. The first use case relates to DDIL environments where users are constrained by their network capabilities and require a full update of their mission specific data over a temporary local area network as they come in proximity of this LAN. The second use case is one where a user issues a standing query to a service requesting notification of new or updated data as it becomes available over their area of interest.

The work shall take results from Testbed-12 into consideration: The Implementing Asynchronous Services Response Engineering Report (OGC 16-023) summarizes and compares the results from asynchronous communication experiments executed in Testbed-12. Testbed-12 implemented the WPS façade approach against WFS and WCS service instances, added support for asynchronous communication to a WFS using additional request parameters, and added Publish/Subscribe support to catalogs. Further details on the implementation of the Publish/Subscribe Interface Standard using catalogs are provided in the PubSub / Catalog Engineering Report (OGC 16-137).

It is envisioned that in the future analytic environment, analysts will no longer have to search for content. Rather than a query-response model, analysts should be able to post their data requirements and then receive a notification whenever data is acquired which meets those requirements. The basic technology to do this is called publish-subscribe (pub-sub). However, pub-sub systems traditionally have been based on topics. Analysts will need a much more complex query language, similar to the OGC Filter language.

The OGC Publish Subscribe Core Standard provides an overarching model to extend OGC services with Publish/Subscribe capabilities. The Publish/Subscribe model is distinguished from the request/reply and client/server models by the asynchronous delivery of messages and the ability for a Subscriber to specify an ongoing or persistent expression of interest (Standing Query). This model is particularly applicable to the DDIL environment because there is very little coupling between the service provider and consumer. Once a request has been issued, the results can be retrieved at a very different place and time.

The Testbed-12 PubSub / Catalog ER (OGC 16-137) provided a model for extending OGC services to support PubSub. It also applies that model to extend CS-W. Testbed-13 shall build on that work by applying the PubSub extension model to the Web Feature Service.
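The notion of a standing query with a complex filter can be illustrated with a minimal sketch. The feature fields and the combined BBOX/temporal predicate below are illustrative stand-ins for a full OGC Filter expression:

```python
# Minimal sketch of a standing query: a persistent subscription whose
# filter combines a spatial (BBOX) and a temporal predicate, evaluated
# against each newly published feature.
from datetime import datetime

class StandingQuery:
    def __init__(self, bbox, after, notify):
        # bbox = (minx, miny, maxx, maxy); notify is the subscriber callback
        self.bbox, self.after, self.notify = bbox, after, notify

    def matches(self, feature):
        x, y = feature["point"]
        minx, miny, maxx, maxy = self.bbox
        return (minx <= x <= maxx and miny <= y <= maxy
                and feature["acquired"] > self.after)

    def publish(self, feature):
        """Called by the broker for each new feature; asynchronous delivery
        happens only when the persistent filter matches."""
        if self.matches(feature):
            self.notify(feature)

hits = []
q = StandingQuery((-10, 40, 10, 50), datetime(2017, 1, 1), hits.append)
q.publish({"point": (2.3, 48.8), "acquired": datetime(2017, 3, 5)})   # matches
q.publish({"point": (30.0, 48.8), "acquired": datetime(2017, 3, 5)})  # outside BBOX
```

Extending this to a WFS means expressing the predicate as an OGC Filter and registering it through the PubSub extension model rather than evaluating it in-process.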

Requirements

The following figure illustrates the work items and requirements in the context of Asynchronous Services.

AsynchronousServicesRequirements
Figure 29. Asynchronous Services requirements and work items

This work shall include:

  • Asynchronous Services Study - Testbed-13 shall evaluate and build on the Asynchronous Services work done in Testbed-12 to create an Engineering Report document describing how to extend OGC Services for Asynchronous operations supporting both use cases described above. Where necessary, change requests shall be generated and submitted to the appropriate SWG.

  • Asynchronous Services Validation - Testbed-13 shall validate the recommendations by async-enabling two WFS simulating the DDIL environment and an additional Geosynchronization Service implementing Standing Queries with Pub/Sub support.

  • Complex Filter - Testbed-13 shall validate that OGC Pub-Sub can support complex OGC Filter Queries. Support for temporal as well as spatial operators is essential.

  • Standing Queries - Testbed-13 shall evaluate and build on the Testbed-12 Pub/Sub work by extending the operations to Web Feature Service. Further this study shall include analysis of Pub/Sub support for Standing Queries which implement complex OGC Filter queries.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG007: Asynchronous Services ER - This Engineering Report captures all results of the Asynchronous Services work package, including the use cases, implementation best practices, and experiences and results from the implementation work. It covers all solutions and implementation scenarios of this work package.

  • NG119: Asynchronous WFS-1 - Asynchronous delivery enabled WFS simulating the DDIL environment and implementing the recommendations provided in the Asynchronous Services ER. This deliverable shall implement the NSG WFS 2.0 Profile.

  • NG120: Asynchronous WFS-2 - Asynchronous delivery enabled WFS simulating the DDIL environment and implementing the recommendations provided in the Asynchronous Services ER. This deliverable shall implement the DGIWG WFS 2.0 Profile.

  • NG121: GeoSynchronization Service - Implementation of a Geosynchronization Service implementing Standing Queries with Pub/Sub support.

  • NG011: GeoSynchronization Service Best Practice ER - The GeoSynchronization Service Best Practice ER shall document best practices in the use and application of GSS. Of particular interest is the GSS integration into asynchronous workflows; recommendations on when to use GSS over/in addition to other asynchronous workflow solutions; and how GSS can be integrated with data processing services supporting standing queries.

B.23. Workflows

B.23.1. Introduction

The intent of this work package is to develop a consistent, flexible, adaptable workflow that will run behind the scenes to perform the following tasks:

  • Gather data

  • Check ellipsoid/projection

  • Check data quality

  • Run conflation

  • Deliver data

Testbed-13 envisions two types of workflow or service chaining implementations. Planned workflows are constructed ahead of time; they require that someone analyzes what services are required, how they should be ordered, and how the information flows between the stages in the workflow. The second type, ad-hoc workflows, is not structured; for prior art, see Event-Driven Architectures and Autonomous Agents. The key to ad-hoc workflows is standing queries and orders (pub-sub).

Cascading Web Processing Service (WPS) execute requests are part of the WPS specification. Testbed-12 evaluated and developed several WPS 2.0 profiles, one for Conflation and a second for Data Quality. They served as proofs-of-concept and blueprints for future profiling efforts. Combining the conflation WPS and the data quality WPS is required for Testbed-13. This work is documented in the Testbed-12 WPS Conflation Service Profile Engineering Report (OGC 16-022).

NGA requires one “planned” workflow process which includes a Data Quality analysis of the two or more datasets with overlapping content to be merged in the conflation process. For example, all input data shall be analyzed for its positional accuracy by comparing dataset locations against known accurate positions. The results of this evaluation will then be used by the Conflation process to select those feature themes, e.g., transportation, buildings, hydrography, with the best positional accuracy for retention in the final conflated dataset. This entire process shall be as automated as possible through the use of Workflow Engine Rules. Complete automation of the tasks above is not always achievable, nor is it always preferable. For example, in the process flow above it is expected that the results of the data quality analysis will inform the conflation task as to the most appropriate data for specific results. It is highly likely that interim points should be built into the workflow where the analyst is prompted to perform a logic check prior to execution of further steps. As another example, there may be key information sources that must be present in order to effectively support the objectives of the scenario; if those sources are not gathered, further search of available sources may be necessary.
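The handoff from the data quality analysis to the conflation step described above can be sketched as follows. The dataset names and accuracy scores are illustrative; the point is that per-theme positional-accuracy results (lower error is better) drive the conflation step's source selection.

```python
# Sketch of the DQ-to-conflation handoff: illustrative per-theme
# positional-accuracy errors (e.g., CE90-style metres) produced by the
# data quality step for two overlapping datasets.
dq_results = {
    "transportation": {"dataset_a": 4.2, "dataset_b": 2.8},
    "buildings":      {"dataset_a": 1.9, "dataset_b": 3.5},
    "hydrography":    {"dataset_a": 5.0, "dataset_b": 4.9},
}

def select_sources(dq):
    """For each feature theme, retain the source with the best
    (lowest) positional error for the final conflated dataset."""
    return {theme: min(scores, key=scores.get) for theme, scores in dq.items()}

selection = select_sources(dq_results)
```

In the planned workflow, a result like `selection` is exactly the point where an analyst logic check could be inserted before the conflation step executes.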

The Testbed-13 testing effort will extend Testbed-12 work with additional implementation requirements. Testbed-13 shall study and develop proposed recommendations where an analytic environment can be assisted by workflows. A workflow approach should help with automating the tasks associated with this scenario as described under “Workflow Tasks” below. A service to manage and execute the workflow shall be implemented (NG131).

B.23.2. Workflow Engines

Previous OGC Testbeds have evaluated the integration of workflow engines into the OGC web service architecture. Largely, these efforts evaluated workflow engines based on the Business Process Execution Language (BPEL). There are now a number of workflow engines which provide potential solutions for NGA use cases.

This task shall include an initial study of the potential workflow engines with a recommendation for follow-up testing and implementation. The study should consider the ease of integration into the OGC architecture, the fact that implementation will be in a cloud-based environment, the cost (if any), and potential benefits and drawbacks. Possible workflow engine candidates include (OMG) BPMN / jBPM, Apache Taverna, Amazon Simple Workflow, (W3C) XProc, BPEL, and Windows Workflow. The study shall investigate the workflow setup addressed in Testbed-13 and illustrated below. To improve readability, the figure does not show the client, the catalog, or other interactions.

workflowOverview
Figure 30. Overview of the workflow work package with work item numbers in red

It is emphasized that this workflow can be extended and modified in many ways. Further discussions at the Testbed-13 kick-off meeting shall define the final level of complexity.

B.23.3. Workflow Security

One item that must be considered in the workflow process is “Securing the Service Chain”. Service chaining must be accomplished in such a way that it adequately protects all of the accessed resources through the entirety of the workflow. Without the ability to pass access control credentials from one service to the next, the service chain will break. Testbed-13 participants shall evaluate techniques to provide this level of protection and execution assuming:

  1. OGC Services and content encodings

  2. Fine grained access control implemented through an Attribute Based Access Control (ABAC) infrastructure

  3. Services and content are provided by more than one organization utilizing more than one security environment

  4. Services are not accredited to the same level of trust

  5. Some content may require a higher level of protection than some services are accredited to provide

  6. All actions must be logged and associated with the user who initiated the service chain

  7. Some of the content is sensitive (Personal Identifiable Information). Unauthorized release of this data must be avoided.

The participants should be familiar with the OGC GeoDRM body of work. There should be many re-usable concepts.

The Testbed-13 Workflow Security work shall consider the following three use cases. The mapping on the high level architecture is illustrated in the figure below.

workflowSecurity
Figure 31. The figure has been removed
Use Case 1: Dominating Privileges

Computers used in the Defense and Intelligence community are accredited to support a specified level of processing. Like a person, these non-person entities (NPE) are assigned clearances and access privileges. They may also be constrained by location (see OGC 15-050 for an example).

Consider the case where a user has more privileges than the computer system they are using. Access control based solely on the user's privileges could result in a security violation. Access control must be based on both the user's and the client's identity and privileges.

  1. User requests access to a service.

  2. The client authenticates to the service.

  3. The service authenticates back to the client. (a 2-way SSL session has been established)

  4. The user request is sent to the service.

  5. The service requests user authentication.

  6. The user authenticates to the service.

  7. The user then requests a resource from the service.

  8. The service retrieves security attributes for both the user and the client.

  9. The service validates that both the user and the client have sufficient privileges to handle the requested resource.

  10. The service returns the resource to the user.
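Step 9 of the sequence above, the check that both the user and the client have sufficient privileges, can be sketched as follows. The linear clearance ordering is an illustrative simplification of a real ABAC policy:

```python
# Sketch of the dominating-privileges check: access is granted only if
# BOTH the user's and the client machine's (NPE's) clearance dominate
# the resource classification. The linear ordering is illustrative.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(subject_level, resource_level):
    return LEVELS[subject_level] >= LEVELS[resource_level]

def authorize(user_level, client_level, resource_level):
    # the effective privilege is the weaker of the user and the client
    return (dominates(user_level, resource_level)
            and dominates(client_level, resource_level))

ok = authorize("TOP SECRET", "SECRET", "SECRET")
denied = authorize("TOP SECRET", "SECRET", "TOP SECRET")  # client NPE too low
```

Note the second call: the user alone would be authorized, but the client machine's accreditation caps the effective privilege, which is the point of Use Case 1.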

Use Case 2: Tunneling Proxies

Many systems are protected behind firewalls or similar bastion services. These services intercept all traffic and impose restrictions on what is forwarded to the end system. HTTPS, SSL, and TLS connections are point-to-point. Therefore, they terminate when they come to the proxy. Normally this is not a problem. The proxy forms a new secure connection with the end system. Then the traffic can flow over this pair of secure connections transparent to the user. This approach is not sufficient for use case 1. It’s not the client that authenticates to the service but the proxy. So the service has no way of determining if requested resources can be released to the client.

The solution to this problem is a robust technique for HTTPS identity delegation. Testbed-13 shall consider this problem when making recommendations.

Use Case 3: Identity Mediation

There are many different forms of Identification and Authentication. There will be cases where the I&A used by the client is not the same as that of the Service. For example, the U.S. Department of Defense (DoD) uses X.509 certificates and Attribute Based Access Controls (ABAC) to secure resources. Amazon Web Services (AWS) has its own security architecture which follows a Role Based Access Control (RBAC) model. AWS is one of the major cloud providers to the DoD. When a DoD user requests a resource from AWS, the request must be translated from an X.509 certificate to an AWS Security Credential. In addition, the ABAC security policies protecting the resource must be implemented using RBAC. This challenge must be considered as part of this study.

One approach to this problem is to deploy a Policy Enforcement Point (PEP). The PEP is a proxy which appears to the client as the end system. It is also responsible for implementing the ABAC controls. The effect is to divide the security policy into an RBAC and an ABAC component: RBAC is enforced by AWS, ABAC by the PEP. The PEP interacts with the AWS hosted services using the user's delegated and translated identity.
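The PEP's split of responsibilities can be sketched as follows. The policy attributes, the role mapping, and the backend interface are all hypothetical; a real PEP would evaluate a full ABAC policy and obtain genuine cloud credentials via identity federation.

```python
# Sketch of a PEP dividing the policy: ABAC is evaluated locally, then
# permitted requests are forwarded under a translated role-based
# identity that the cloud side (RBAC) understands.
ROLE_MAP = {"analyst": "aws-readonly", "admin": "aws-poweruser"}  # assumed

def pep_forward(request, forward):
    subject = request["subject"]
    # ABAC check: release only if the subject's attributes satisfy policy
    if request["resource_marking"] not in subject["releasable_to"]:
        return {"status": 403, "reason": "ABAC denial at PEP"}
    # identity mediation: X.509-derived subject -> cloud role credential
    translated = {"role": ROLE_MAP[subject["role"]]}
    return forward(request["resource"], translated)

# Stand-in for the RBAC-enforcing cloud backend
backend = lambda resource, cred: {"status": 200, "as": cred["role"]}

resp = pep_forward(
    {"subject": {"role": "analyst", "releasable_to": {"REL NATO", "US"}},
     "resource": "/imagery/123", "resource_marking": "US"},
    backend,
)
```

The backend never sees the original X.509 identity or the ABAC attributes; it enforces only the role, which is the division of labor described above.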

B.23.4. Building Workflows

The use of workflows and stored and standing queries, coupled with pub/sub, should increase the efficiency of the analyst's work. The process should also automate the discovery of data and lessen the chance that valuable information is overlooked. The development of workflows should be as automated and re-usable as possible. Testbed-13 requires evaluation and development of a mechanism to build workflows. Workflows should be parameterized in a way that facilitates reuse. All results and actions shall be documented in the NG009: Workflows ER.

B.23.5. Cataloguing Workflows

The development of workflows should be as automated and re-usable as possible. Testbed-13 requires evaluation and development of a mechanism to store workflows in a catalogue for reuse by other analysts executing the same basic requirements. Workflows should be parameterized in a way that facilitates reuse. All results and actions shall be documented in the NG009: Workflows ER. This work item is supported by a catalogue service implementation, NG135: Workflow Catalog Server.

B.23.6. Workflow: Fit for Purpose

This workflow shall directly support the Fit For Purpose work package and employ predefined profiles for imagery, imagery analytics, or other forms of geospatial data to discover and access information consistent with its intended use. These predefined profiles, developed as part of Testbed-13, are intended to provide the needed filter criteria for forms of GEOINT that are applicable to the user's specific problem space. The workflow consists of three main tasks: the first addressing imagery, the second data quality, and the third conflation. The basic concepts are illustrated in the figure below.

workflowFitForPurpose
Figure 32. Overview of the Fit for Purpose workflow. Clients need to support WPS profiles and the process needs to support manual interaction
Check ellipsoid/projection

Testbed-13 shall develop a service profile containing logic for determining source data ellipsoid and projection information once the data has been gathered. The service shall compare these against the predefined profiles identified in the Fit For Purpose work package. Where the ellipsoid and/or projection is not in conformance with those specified by the Fit for Purpose guidance, or is otherwise overruled by analyst intervention, the service shall execute the process steps to transform the data to the specified ellipsoid/projection. This part is supported by NG130: Workflow WPS-1. All results and actions shall be documented in the NG009: Workflows ER.

Check data quality

It is anticipated that complex workflows integrated with OGC services including WPS will be required to deal with multiple aspects of data quality for a given dataset. Complex workflow solutions which chain atomic tests for raster map data, vector feature data, gridded data, and imagery and produce data quality metadata are required. It is likely that different use cases will require different types of testing to produce relevant compliant metadata.

This work package shall develop a mapping of the elements of ISO 19157-2 to DQ WPS processes. Evaluations from the DQ WPS processes shall be aligned with the elements specified by ISO 19157 concepts and encoding. The six elements of ISO 19157 shall be supported with WPS DQ processes. These elements are completeness, logical consistency, positional accuracy, thematic accuracy, temporal quality, and usability. Mappings across the Citizen Observatory Web (COBWEB)-derived seven-pillar processes, the ISO 19157 DQ elements, and the NSG Metadata Framework (NMF) shall be developed and documented. New mapping processes shall be developed where required to achieve automatic mapping between the COBWEB pillar processes, the ISO 19157 DQ elements, and the NMF.
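A minimal sketch of such a mapping, keyed on the six ISO 19157 elements named above. The WPS process identifiers on the right-hand side are invented placeholders; the real identifiers would be defined by the DQ WPS profile developed in the testbed:

```python
# Illustrative mapping of the six ISO 19157 data quality elements to
# hypothetical DQ WPS process identifiers (the "dq:" names are assumptions).
ISO19157_TO_WPS = {
    "completeness":        "dq:CompletenessCheck",
    "logical_consistency": "dq:LogicalConsistencyCheck",
    "positional_accuracy": "dq:PositionalAccuracyCheck",
    "thematic_accuracy":   "dq:ThematicAccuracyCheck",
    "temporal_quality":    "dq:TemporalQualityCheck",
    "usability":           "dq:UsabilityCheck",
}

def processes_for(elements):
    """Resolve the WPS processes needed to evaluate a set of DQ elements."""
    return [ISO19157_TO_WPS[e] for e in elements]

print(processes_for(["completeness", "usability"]))
```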

Testbed-13 shall expand upon the progress made in Testbed-12 Data Quality analysis by workflow linking the output results into the input parameters for the conflation process. The Testbed-12 work on handling and assignment of quality parameters based on atomic WPS instances that allow quality assessment and metadata development is documented in OGC 16-041: WPS ISO Data Quality Service Profile ER. The report addresses the provisioning of quality parameters as defined in ISO 19157. OGC 16-050: Imagery Quality and Accuracy ER addresses the concept of data quality for images, taking into account aspects such as completeness, logical consistency, positional accuracy, temporal accuracy and thematic accuracy. The work is based on Digital Globe’s A3C framework (A3C standing for Accuracy, Currency, Completeness, and Consistency), ISO 19157 and QualityML vocabularies, and proposes encodings compatible with common metadata standards. This part is supported by NG130: Workflow WPS-1. All results and actions shall be documented in the NG009: Workflows ER.

Execute conflation

Given that multiple data sources may be available for use, one of which may be characterized by very good positional accuracy while another may have inferior positional accuracy but superior information content (features, attribution), the sources must be conflated to achieve the “best” representation of the information. The current conflation workflow is based on an automated conflation tool (Hootenanny). There is no user interaction to account for data quality during a conflation run, which can result in a sub-optimal final result. This process could be enhanced by allowing the user to intervene during a conflation run, consider the results of the data quality process, and re-run the conflation with modified parameters. Testbed-13 shall develop a method for user interaction in the Hootenanny conflation process at appropriate points of execution. Testbed-13 shall develop a method of integrating data quality results into the conflation decision process. This part is supported by NG130: Workflow WPS-1. All results and actions shall be documented in the NG009: Workflows ER.
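The review-and-rerun loop described above can be sketched as follows. This does not model Hootenanny's actual API; the run, the data-quality report, and the analyst callback are all stand-ins:

```python
# Minimal sketch of inserting a user-review checkpoint into an automated
# conflation run. All names and the DQ report contents are illustrative;
# Hootenanny's real interfaces are not modelled here.

def conflate(sources, params, review):
    """Run conflation, pause for analyst review of data-quality results,
    and re-run with modified parameters if the analyst asks for it."""
    while True:
        result = {"sources": sources, "params": dict(params)}  # stand-in for a run
        dq_report = {"snap_tolerance_m": params.get("snap_tolerance", 5)}
        decision = review(dq_report)          # analyst inspects DQ results
        if decision is None:                  # analyst accepts the result
            return result
        params.update(decision)               # re-run with modified parameters

# Analyst callback: tighten the snap tolerance once, then accept.
seen = []
def analyst(report):
    if not seen:
        seen.append(report)
        return {"snap_tolerance": 1}
    return None

out = conflate(["good_geometry.osm", "rich_attributes.osm"],
               {"snap_tolerance": 5}, analyst)
print(out["params"])
```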

Requirements

The following figure illustrates the work items and requirements in the context of Workflows.

WorkflowsRequirements
Figure 33. Workflows requirements and work items

This work shall include:

  • Workflow Management - Testbed-13 shall conduct and document the results of a thorough study on workflow setup, management, and execution. The study shall include all elements described in the Introduction and Workflow Engines sections above and shall focus on the following aspects:

    • pub-sub protocols,

    • query languages,

    • support logic which manages pre-conditions and posting of results,

    • security handling

  • Workflow Security - Testbed-13 shall conduct and document the results of a thorough feasibility study evaluating Secure Service Chaining. This study shall document the analysis performed, conclusions, and recommendations for future work. Testbed-13 shall implement the Security architecture to support the evaluation of executing a workflow process or service chain.

  • Building Workflow - Testbed-13 shall conduct and document the results of a thorough feasibility study evaluating building workflows as described in section Building Workflows above.

  • Cataloguing Workflow - Testbed-13 shall conduct and document the results of a thorough feasibility study evaluating cataloguing workflows as described in section Cataloguing Workflows above.

  • Fit for Purpose Workflow - Testbed-13 shall support the workflow development and analysis by implementing the Fit-For-Purpose Workflow.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG009: Workflow ER - This Engineering Report shall capture all results of the workflows work package. It shall address all requirements as stated above, including workflow management, security, building and cataloguing of workflows, WPS-derived analytic packages, and the workflow implementations.

  • NG130: Workflow WPS - This WPS shall support all workflow requirements as stated above. It shall support the security implementation, event-driven analytics, facade algorithms for imagery control and transformation, and algorithms for feature data quality control and conflation; it shall also support manual user interaction. Testbed-13 shall incorporate into the workflow process the concepts of Standing Queries with Pub/Sub notifications to deliver the content specified by workflows. The workflows shall provide analyst notification of completion and provide a link to the data. The service may have to be set up two or three times (i.e., as multiple instances).

  • NG131: Workflow PubSub Server - This server shall support the workflow management and execution as described above. The type of service can be defined as considered most appropriate by the participants. The service needs to support the workflows described above. Testbed-13 shall incorporate into the workflow process the concepts of Standing Queries with Pub/Sub notifications to deliver the content specified by workflows. The workflows shall provide analyst notification of completion and provide a link to the data.

  • NG132: Workflow Data Server-1 - A transactional data server (WFS-T or SOS-T) to be used to store workflow (intermediate) results. The service shall support the security architecture defined above.

  • NG135: Workflow Catalog Server - This catalog server shall support the workflow cataloguing work as described above.

  • NG136: WPS Client - The client application shall support the implementation of all workflows defined above. Clients must be able to understand WPS profile metadata and possibly interact with servers from different vendors that implement the same profile. Using this approach, the usefulness of the profiles can be tested.
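The Standing Query with Pub/Sub notification behaviour expected of NG130/NG131 can be sketched as a toy in-memory broker. The class, predicates, and links below are all invented for illustration; a real deployment would use a pub/sub protocol such as AMQP:

```python
# Toy sketch of a standing query with pub/sub notification: when newly
# published content matches a stored query, the analyst is notified with
# a link to the data. All names and example records are illustrative.

class StandingQueryBroker:
    def __init__(self):
        self.subscriptions = []   # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register a standing query and the notification callback."""
        self.subscriptions.append((predicate, callback))

    def publish(self, record):
        """Deliver a newly published record to every matching subscriber."""
        for predicate, callback in self.subscriptions:
            if predicate(record):
                callback(record["link"])

broker = StandingQueryBroker()
notified = []
# Standing query: any new dataset covering the analyst's AOI.
broker.subscribe(lambda r: r["aoi"] == "caribbean", notified.append)
broker.publish({"aoi": "caribbean", "link": "https://example.org/data/1"})
broker.publish({"aoi": "baltic",    "link": "https://example.org/data/2"})
print(notified)
```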

B.24. Compliance Testing

Testbed-13 shall develop CITE tests for NSG profiles. The work can build on previous work as documented on the CITE website.

Requirements

The following figure illustrates the work items and requirements in the context of Compliance Testing.

CITERequirements
Figure 34. Compliance Testing requirements and work items

This work shall include:

  • NSG WFS 2.0 Profile - Testbed-13 shall develop CITE tests for NSG WFS 2.0 Profile, including data, client, server, and test.

  • NSG WMTS 1.0 Profile - Testbed-13 shall develop CITE tests for NSG WMTS 1.0 Profile, including data, client, server, and test.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG010: CITE ER - This Engineering Report shall document all tests and Change Requests to existing implementations.

  • NG137: CITE NSG WFS Suite: Testbed-13 shall develop a test suite for NSG WFS 2.0 Profile, including data, client, server, and test.

  • NG138: CITE NSG WMTS Suite: Testbed-13 shall develop a test suite for NSG WMTS 1.0 Profile, including data, client, server, and test.

B.25. Web Service Enhancements

The Testbed-12 WMS/WMTS Enhanced Engineering Report (OGC 16-042) describes requirements, challenges, and solutions to improve multidimensional Earth Observation (EO) discovery, data access, and visualization through WMS, WMTS, and corresponding extensions.

In the context of WMS, the Engineering Report provides solutions to better support NetCDF-CF data visualization and exploration. This is done by enhancing the GetMap operation to enrich map rendering options and by improving the Capabilities document to handle long layer lists.

In the context of WMTS, enhancements have been developed to reduce semantic ambiguities when querying time-varying layers (i.e. layers with data that varies over time). This has been achieved by means of an enhanced time-query semantic and encoding model. Extensions to the Capabilities model now allow for more efficient data exploration even for long lists of layers, and improvements to the WMTS client-server communication now reduce empty tile requests. These improvements are based on three operations to support efficient inspection of all layers and the discovery of relationships among multidimensional layers. The operations include DescribeDomains for compact domain inspection, GetHistogram for domain distribution inspection, and GetFeature for detailed value inspection.

B.25.1. WMTS

Testbed-12 evaluated the requirements for WMTS to better access and visualize earth observation (EO) data, specifically support for time-varying layers. The following recommendations have been developed:

Advanced Time-Query

The following time-query semantics were proposed as extensions of WMTS to better support time-varying layer queries:

Table 4. WMTS time-query semantics/encoding extension
Proposed time-query semantics Description

at

Selects a server-defined semantic for a single time instant

asof

Selects all times from negative infinity up to and including the given time instant.

series

Selects one frame from a fixed time series identified by the first time instant of the frame.

interval

Selects all times in a given arbitrary inclusive time interval.

anim.at

Selects an animation of “at” image frames.

anim.asof

Selects an animation of “as-of” image frames.

anim.series

Selects an animation of fixed-time-series image frames.

anim.interval

Selects an animation of arbitrary-interval image frames with an arbitrary time resolution.

This work item shall be supported by WMTS NG125 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.
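To make the proposal concrete, a KVP GetTile request carrying one of the proposed time-query semantics might look as follows. The endpoint, layer name, and the exact TIME encoding ("at <instant>") are assumptions for illustration; the normative encoding is defined in OGC 16-042:

```python
from urllib.parse import urlencode

# Sketch of a WMTS GetTile KVP request using the proposed "at" time-query
# semantic. Endpoint, layer, and TIME encoding are illustrative assumptions.
params = {
    "SERVICE": "WMTS", "REQUEST": "GetTile", "VERSION": "1.0.0",
    "LAYER": "sea_surface_temp", "STYLE": "default",
    "FORMAT": "image/png", "TILEMATRIXSET": "EPSG:4326",
    "TILEMATRIX": "3", "TILEROW": "2", "TILECOL": "5",
    "TIME": "at 2017-06-01T00:00:00Z",   # single-instant selection
}
url = "https://example.org/wmts?" + urlencode(params)
print(url)
```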

Extended Capabilities

The proposed time-query semantics/encoding was recommended to be advertised through the Capabilities document with the following extensions:

  1. Extend Capabilities <Dimension> element for time-query declaration

  2. Extend Capabilities <ResourceURL> element for time-query declaration in the RESTFul API

This work item shall be supported by WMTS NG125 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.

Extended WMTS Interface

The following recommendations for extending WMTS interface support for efficient layer domain inspection/relationship discovery have been developed:

Table 5. Proposed WMTS interfaces for layer domain inspection/relationship discovery
Extended operation Description

DescribeDomains

The extended operation is used for compact domain inspection.

GetHistogram

The extended operation is used for inspecting domain distribution histogram.

GetFeature

The extended operation is used for detailed value combination inspection and description.

This work item shall be supported by WMTS NG125 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.
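As a sketch of what a client might do with a DescribeDomains response, the snippet below parses a hypothetical response fragment. The element names are illustrative only; the normative response encoding is defined in OGC 16-042:

```python
import xml.etree.ElementTree as ET

# Hypothetical DescribeDomains response fragment for a time-varying layer.
# Element and attribute names are assumptions, not the normative encoding.
response = """<Domains>
  <DimensionDomain>
    <Identifier>time</Identifier>
    <Domain>2017-01-01T00:00:00Z/2017-06-01T00:00:00Z/PT1H</Domain>
    <Size>3625</Size>
  </DimensionDomain>
</Domains>"""

root = ET.fromstring(response)
for dom in root.iter("DimensionDomain"):
    name = dom.findtext("Identifier")     # dimension being described
    extent = dom.findtext("Domain")       # compact start/end/resolution form
    size = int(dom.findtext("Size"))      # number of distinct values
    print(name, extent, size)
```

A client can use the compact domain description to avoid requesting tiles for time instants the server does not hold.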

Dynamic Visualization in a WMTS environment

Testbed-13 shall develop WMTS implementations supporting the following workflow:

  1. Dynamically track a water vessel on a client by displaying maritime map data served by a WMTS.

  2. Overlay the vessel track on the maritime map by employing the referenced standard (can be plugged into ODNI Mass Migration scenario).

  3. A vessel has a unique identification at a worldwide scale (IMO number or MMSI); the position of a vessel (ship position) is a point feature over time. The voyage feature of a vessel (ship voyage) is characterized by several attributes, such as speed over the ground, course over the ground, heading, and true heading.

  4. To build the trajectory of a vessel, two main abstract feature types are relevant to the Moving Features standard: ship position and ship voyage.

The workflow shall be supported by the WMTS NG125 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.
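The two abstract feature types named in the workflow (ship position and ship voyage) and the assembly of a trajectory can be sketched as below. The field names are illustrative and loosely follow the spirit of OGC Moving Features (14-083r2), not its normative XML encoding:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the two abstract feature types; attribute
# names are assumptions, not the Moving Features XML encoding.

@dataclass
class ShipPosition:
    mmsi: str          # worldwide-unique vessel identifier
    t: str             # ISO 8601 instant
    lon: float
    lat: float

@dataclass
class ShipVoyage:
    mmsi: str
    speed_over_ground: float
    course_over_ground: float
    heading: float

def trajectory(positions, mmsi):
    """Order one vessel's point features by time to form its trajectory."""
    return sorted((p for p in positions if p.mmsi == mmsi), key=lambda p: p.t)

fixes = [
    ShipPosition("366999712", "2017-05-01T10:00:00Z", -66.1, 18.4),
    ShipPosition("366999712", "2017-05-01T09:00:00Z", -66.3, 18.2),
]
track = trajectory(fixes, "366999712")
print([p.t for p in track])   # positions in temporal order
```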

Enhanced Tile Access

With the increased incidence of WMTS instances, the demand for an efficient access and exchange of tile collections is emerging. The OGC 16-049: Testbed-12 Multi-Tile Retrieval Engineering Report describes experiments with and extensions to WMTS and WPS to allow for efficient exchange of high number of tiles in container formats such as GeoPackage. OGC 16-049 suggests the implementation of the WMTS GetTiles operation and extensions to the WPS GetStatus and GetResult operations to provide partial outputs during process execution. The recommendations provided in OGC 16-049 shall be further explored in Testbed-13, in particular the GetTile operation.

This work item shall be supported by WMTS NG125 and WPSs NG127 and NG128. It is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.
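The bookkeeping behind a GetTiles-style multi-tile request can be sketched as follows: deriving the TILEROW/TILECOL index ranges covering a bounding box at one tile matrix of a simple WGS84 tile scheme. GetTiles itself is the extension suggested in OGC 16-049; this sketch only derives its inputs under an assumed tile scheme:

```python
import math

# Compute the (row, col) tile indices covering a geographic bounding box
# at one tile matrix level. Assumes a WGS84 scheme whose level-0 tiles are
# 180 degrees wide/high (an assumption for this sketch, not normative).

def tiles_for_bbox(min_lon, min_lat, max_lon, max_lat, level, tile_deg0=180.0):
    deg = tile_deg0 / (2 ** level)   # tile width/height in degrees at this level
    cols = range(int((min_lon + 180) // deg),
                 int(math.ceil((max_lon + 180) / deg)))
    rows = range(int((90 - max_lat) // deg),     # row 0 is the northernmost
                 int(math.ceil((90 - min_lat) / deg)))
    return [(r, c) for r in rows for c in cols]

# Tiles covering a small Caribbean AOI at level 4 (11.25-degree tiles).
print(tiles_for_bbox(-67.0, 17.5, -65.0, 19.0, level=4))
```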

B.25.2. WMS

The Testbed-12 WMS/WMTS Enhanced Engineering Report (OGC 16-042) describes several enhancements for WMS to extend the map rendering options on WMS GetMap operation and to extend the filtering options on WMS GetCapabilities operation.

Recommendations for extending map rendering options on WMS GetMap operation

The impact of weather and other environmental conditions must be considered in planning military operations and in providing a relevant common operational picture to deployed forces. ncWMS publishes multi-dimensional data as an OGC compliant Web Map Service (WMS) for visualization of very large multi-dimensional data files (such as those associated with environmental and weather data). In order to better support NetCDF-Climate and Forecast (CF) data visualization and discovery, the following extended ncWMS GetMap parameters were proposed in Testbed-12 and introduced to support additional map render options in WMS. Testbed-13 shall follow up on these recommendations with service implementations to demonstrate usability in production systems.

Table 6. Extended rendering options of WMS GetMap operation
Extended WMS parameter Description

COLORSCALERANGE

Of the form min,max; this is the scale range used for plotting the data (mapped to the COLORSCALERANGE_MIN and COLORSCALERANGE_MAX env vars)

NUMCOLORBANDS

The number of discrete colours to plot the data. Must be between 2 and 250 (mapped to the NUMCOLORBANDS env variable)

ABOVEMAXCOLOR

The colour to plot values which are above the maximum end of the scale range. Colours are of the form 0xRRGGBB or 0xAARRGGBB, and it also accepts “transparent” and “extend”

BELOWMINCOLOR

The colour to plot values which are below the minimum end of the scale range. Colours are of the form 0xRRGGBB or 0xAARRGGBB, and it also accepts “transparent” and “extend”

LOGSCALE

“true” or “false” - whether to plot data with a logarithmic scale

OPACITY

The percentage opacity of the final output image as a number between 0 and 100 (maps to OPACITY env var by translating it to a number between 0 and 1)

ANIMATION

“true” or “false” - whether to generate an animation. The ncWMS documentation states that TIME has to be of the form starttime/endtime, but OGC 16-042 states that TIME needs to be a list of discrete times instead. Animation requires using “image/gif” as the response format (the only format supporting animation)

Testbed-13 shall apply the criteria proposed above to earth observation data characterized by high temporal resolution and multiple dimensions, demonstrate and confirm usability for typical data, identify any problems associated with implementation, provide recommendations for overcoming them, and develop Change Requests (CRs) as needed to make required changes to the WMS specification.

This work item shall be supported by WMS NG126 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.
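A GetMap request carrying the extended ncWMS rendering parameters from the table above might look as follows. The endpoint and layer name are invented; the parameter names are those proposed in Testbed-12:

```python
from urllib.parse import urlencode

# Sketch of a WMS GetMap request with the extended ncWMS rendering options.
# Endpoint and layer are illustrative; parameter names follow the table above.
params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "air_temperature", "CRS": "CRS:84",
    "BBOX": "-180,-90,180,90", "WIDTH": "1024", "HEIGHT": "512",
    "FORMAT": "image/png", "TIME": "2017-06-01T00:00:00Z",
    # Extended ncWMS rendering options:
    "COLORSCALERANGE": "250,320",     # min,max of the plotted scale (kelvin here)
    "NUMCOLORBANDS": "100",           # discrete colours, 2..250
    "ABOVEMAXCOLOR": "extend",
    "BELOWMINCOLOR": "transparent",
    "LOGSCALE": "false",
    "OPACITY": "80",                  # percent
}
url = "https://example.org/ncwms?" + urlencode(params)
print(url)
```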

Recommendations for extending filtering options on WMS GetCapabilities operation
Table 7. Extended GetCapabilities parameter
Extended GetCapabilities parameter Description

DATASET

The parameter will be set with layer/dataset name. Once the parameter is provided in the request, only content regarding the layer/dataset will be returned.

Testbed-13 shall apply the criteria proposed above to earth observation data characterized by high temporal resolution and multiple dimensions, demonstrate and confirm usability for typical data, identify any problems associated with implementation, provide recommendations for overcoming them, and develop a Change Request (CR) to make required changes to the WMS specification.

This work item shall be supported by WMS NG126 and is optionally supported by the client applications AB105 and AB106. All results shall be documented in the Engineering Report AB001.

Requirements

The following figure illustrates the work items and requirements in the context of Web Service Enhancements.

WxSRequirements
Figure 35. Web Service Enhancements requirements and work items.

This work shall include:

  • Advanced WMTS - Testbed-13 shall implement a WMTS, apply the criteria proposed above to additional service and data sources and confirm usability. Develop a Change Request (CR) to make required changes to the WMTS specification. Document all results as part of the AB001: Concepts of Data and Standards for Mass Migration ER as described in section Mass Migration.

    The WMTS implementation shall support the Mass Migration Scenario. Considering recommendations provided in OGC document 14-083r2: OGC Moving Features Encoding Part I: XML Core, the WMTS implementation shall support the following scenario:

    • Dynamically track a water vessel on a client by displaying maritime map data served by a WMTS.

    • Overlay the vessel track on the maritime map by employing the referenced standard (can be plugged into ODNI Mass Migration scenario).

    • A vessel has a unique identification at a worldwide scale (IMO number or MMSI); the position of a vessel (ship position) is a point feature over time. The voyage feature of a vessel (ship voyage) is characterized by several attributes, such as speed over the ground, course over the ground, heading, and true heading.

    • To build the trajectory of a vessel, two main abstract feature types are relevant to the Moving Features standard: ship position and ship voyage.

  • Advanced WMS - Testbed-13 shall implement an enhanced WMS, apply the criteria proposed above to additional data sources and confirm usability. Develop a Change Request (CR) to make required changes to the WMS specification.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG125: Enhanced WMTS (unfunded) - Implementation of an enhanced WMTS with support for the requirements provided above. Implementation of the NSG Profile.

  • NG126: Enhanced WMS (unfunded) - Implementation of an enhanced WMS with support for the requirements provided above and described in Testbed-12 WMS/WMTS Enhanced Engineering Report (OGC 16-042) to allow for advanced rendering options and Capabilities. Implementation of the DGIWG Profile.

  • NG127: Tile handling WPS-1 (unfunded) - Implementation of an enhanced WPS as described in OGC 16-049: Testbed-12 Multi-Tile Retrieval Engineering Report to allow experimentation of advanced tile handling. This WPS needs to support acceptance of new input parameters for a running thread during thread execution. The WPS shall further support a "two WPS instances exchanging information" setup as described in OGC 16-049.

  • NG128: Tile handling WPS-2 (unfunded) - Implementation of an enhanced WPS as described in OGC 16-049: Testbed-12 Multi-Tile Retrieval Engineering Report to allow experimentation of advanced tile handling. This WPS needs to support acceptance of new input parameters for a running thread during thread execution. The WPS shall further support a "two WPS instances exchanging information" setup as described in OGC 16-049.

Notes:

  1. All results shall be documented as part of the AB001: Concepts of Data and Standards for Mass Migration ER as described in section Mass Migration.

  2. Clients supporting the advanced WMS and WMTS implementations are provided as part of the Mass Migration work package (AB105/106).

B.26. Point Cloud Streaming

Testbed-12 developed an initial version of a streaming LiDAR service. That service was limited to LAS formatted data and LAZ compression. Testbed-13 shall build on that work by introducing more complex point cloud data such as Geiger Mode collections encoded in SIPC format. In addition, Testbed-13 shall explore the suitability of this service for use in DDIL environments. In particular, it shall investigate:

  • The bandwidth cost as a user traverses from low to high resolution tiles.

  • The ability of the streaming protocol to detect and recover from single and multi-bit errors.

  • The ability of the streaming protocol to detect and recover from lost packets or tiles.

  • The ability of a client to do useful work if communications fail in the middle of a point cloud download.
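The error-detection-and-recovery behaviour listed above can be sketched with a toy framing scheme in which each streamed tile carries a CRC-32 checksum; the client verifies each frame and re-requests corrupted or missing tiles. The framing is an assumption for illustration and does not imply any particular streaming protocol:

```python
import zlib

# Toy sketch of detect-and-recover for streamed point-cloud tiles.
# Frame layout (id/crc/data) is an invented illustration.

def frame(tile_id, payload: bytes):
    """Wrap a tile payload with its id and CRC-32 checksum."""
    return {"id": tile_id, "crc": zlib.crc32(payload), "data": payload}

def receive(frames, expected_ids):
    """Return (accepted tile ids, tile ids to re-request)."""
    ok, bad = [], []
    seen = {f["id"] for f in frames}
    for f in frames:
        # Bit errors in transit make the recomputed CRC disagree.
        (ok if zlib.crc32(f["data"]) == f["crc"] else bad).append(f["id"])
    bad.extend(i for i in expected_ids if i not in seen)   # lost tiles
    return ok, bad

good = frame(1, b"points...")
corrupt = frame(2, b"points...")
corrupt["data"] = b"pointsX.."          # single-byte error in transit
print(receive([good, corrupt], expected_ids=[1, 2, 3]))
```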

Requirements

The following figure illustrates the work items and requirements in the context of Point Cloud Streaming.

PointCloudStreamingRequirements
Figure 36. Point Cloud Streaming requirements and work items

This work shall include:

  • Point Cloud Streaming - Testbed-13 shall conduct and document the results of a thorough feasibility study evaluating OGC service support for point cloud Geiger Mode collections encoded in SIPC format. This study shall describe the design and proposed points of standardization for the Streaming LiDAR service. Further, the study shall document the approach taken, data collected, analysis, and recommendations with regard to the bandwidth, error detection, and disconnected-operations issues described above. The study shall also document the evaluation of such a service constrained by a DDIL operating environment.

  • Point Cloud Streaming Implementation - Testbed-13 shall implement a service capable of streaming Geiger mode LiDAR data. Point Cloud support shall first consider the 3D Tiles and i3s implementations and recommendations for satisfying this implementation requirement. The server shall be matched by a client application capable of manipulating and visualizing the streaming Geiger mode LiDAR data. Ideally, this client even supports the 3D Tiles and i3s implementations and recommendations.

Deliverables

The following list identifies the assignment of work items to requirements. Thread assignment and funding status are defined in section Summary of Testbed Deliverables.

  • NG006: Point Cloud Streaming ER (unfunded) - Engineering Report capturing all results of the Point Cloud Streaming study and the service/client implementation.

  • NG117: Point Cloud Streaming Server (unfunded) - Implementation of a service capable of streaming Geiger mode LiDAR data, as described above.

  • NG118: Point Cloud Streaming Client (unfunded) - Implementation of a client capable of manipulating and visualizing the streaming Geiger mode LiDAR data. Ideally, this client even supports the 3D Tiles and i3s implementations and recommendations.

Appendix C: Bibliography

  • J. Becedas, R. Pérez and G. González, “Testing and validation of cloud infrastructures for Earth observation services with satellite constellations”, International Journal of Remote Sensing, vol. 36, no. 19-20, pp. 5289-5307, 2015.

  • [Becedas et al, 2015b] J. Becedas, R. Pérez, G. González, J. Álvarez, F. García, F. Maldonado, A. Sucari and J. García, “Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery”, The 36th International Symposium on Remote Sensing of Environment, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XL-7/W3, pp. 605-611, doi:10.5194/isprsarchives-XL-7-W3-605-2015, 2015.

  • [Ramos, 2016] J. J. Ramos and J. Becedas, “Deimos’ gs4EO over ENTICE: A cost-effective cloud-based solution to deploy and operate flexible big EO data systems with optimized performance”, Proceedings of the 2016 conference on Big Data from Space (BiDS’16), pp. 107-110, Santa Cruz de Tenerife, Spain, 2016.