PML Validation Service Requirements

  • POC: Stephan
  • Start date: Jan 2011
  • End date: still active

This is a requirements-gathering activity for the next-generation PML validation service. We need to define which conditions the validation service will test for, and which validation level (fatal, error, warning, info) each condition is assigned.

This is an activity of the SWaMP PML Validator Working Group.

Validation Levels

  1. Fatal - the validation service stops validation upon a fatal error and returns the validation results gathered up to that point (always including information on the fatal error).
  2. Error - the validation service logs the error information and continues validation past the error. Error information is returned unless the service consumer specified a validation level of 'fatal only'.
  3. Warning - the validation service logs the warning and continues validation. Warning information is returned unless the service consumer specified a validation level of 'error only' or greater.
  4. Info - the validation service logs the info and continues validation. Info-level information is returned unless the service consumer specified a validation level of 'warning only' or greater.
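
In code, these four levels and the consumer-side filtering might look like the following minimal sketch (the enum and method names are illustrative, not the actual service API):

    public enum ValidationLevel {
        INFO, WARNING, ERROR, FATAL;

        // True if a result at this level is returned to a consumer who asked
        // for 'minimum' and above; e.g. minimum == ERROR keeps ERROR and FATAL.
        public boolean isReportedAt(ValidationLevel minimum) {
            return this.compareTo(minimum) >= 0;
        }
    }

Only FATAL stops the run; the other three are logged and validation continues.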

Requirement Levels

The PML validator test/requirement table uses the terminology NOT CARE / SHOULD / MUST for the requirements described in a test. We have also added a Core Best Practices column to differentiate best practices from the formal spec (OWL ontology).

  • Failure of a MUST requirement on the Valid RDF or Valid OWL test results in an error-level response.
  • The formal PML Specification test suite has no other requirements (it is NOT CARE for every other requirement/test).
  • For Best Practices, either NOT CARE or SHOULD requirements are supported; failure of a SHOULD requirement results in an info-level response.
  • For the IW and Probe-It! suites, NOT CARE, SHOULD, and MUST requirements are supported. SHOULD corresponds to an info-level response and MUST corresponds to a warning-level response, as sketched below.
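
Taken together, these bullets amount to a lookup from (test suite, requirement level) to response level. A hedged sketch in Java, with illustrative names only:

    public final class ResponseLevels {
        public enum Requirement { NOT_CARE, SHOULD, MUST }
        public enum Suite { CORE_PML_SPEC, BEST_PRACTICES, INFERENCE_WEB, PROBE_IT }
        public enum Response { NONE, INFO, WARNING, ERROR }

        // Maps a failed requirement to the response level described above.
        public static Response onFailure(Suite suite, Requirement req) {
            if (req == Requirement.NOT_CARE) return Response.NONE;
            switch (suite) {
                case CORE_PML_SPEC:  return Response.ERROR;  // MUST on the RDF/OWL tests
                case BEST_PRACTICES: return Response.INFO;   // SHOULD only
                default:                                     // Inference Web and Probe-It!
                    return req == Requirement.MUST ? Response.WARNING : Response.INFO;
            }
        }
    }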

TODOs

  • Add another column in the table to distinguish among Core-PML, Probe-It!, IW Browser, etc. [DONE]
  • TODO: create a PML example repository to run through the validator
    • TODO: Ping to add her "non Information" example to the repository.
    • TODO: Cynthia to add Tim's "##" example to the repository.

  • TODO: ask Jitin what types of validation he has been doing.

Validation Tests

Each test below lists its requirement levels in the order Core PML Specification / Core Best Practices / Inference Web / Probe-It!, followed by a comment and the implementation status. Code sketches for several of the tests follow the table.

Test DataLoading: Data Loading
  Levels: MUST / MUST / MUST / MUST
  Comment: error (could not obtain data)
  Status: DONE

Test RDFSyntax: Valid RDF Syntax
  Levels: MUST / MUST / MUST / MUST
  Comment: error (does not meet RDF specification)
  Status: DONE

Test OWL Semantics: Consistent OWL Semantics
  Levels: MUST / MUST / MUST / MUST
  Comment: error (does not meet OWL specification)
  Status: DONE

Test 1: InferenceStep has one of the following: hasInferenceRule and hasInferenceEngine, or hasSourceUsage
  Levels: NOT CARE / SHOULD (ask Cynthia) / NOT CARE
  Comment: Stephan - Does an InferenceStep make 'logical' sense if it has neither a source usage nor an inferring agent?
  Status: DONE

Test 2: InferenceStep is associated with at least one NodeSet via the isConsequentOf predicate
  Levels: NOT CARE / SHOULD / NOT CARE / NOT CARE
  Comment: Stephan - Should checking for orphaned InferenceSteps be part of the core test suite?
  Status: DONE

Test 3: Information asserts one of the following: the hasRawString or hasURL predicate
  Levels: NOT CARE / SHOULD (ask Cynthia) / MUST
  Comment: Why can't you dereference the individual to obtain the contents? (Tim) This is apparently no longer an issue with IWBrowser; constraint for Probe-It! only? (Nick) Probe-It! checks if hasPrettyString was asserted; if not, it checks hasRawString; if not, it checks hasURL. We can change Probe-It! to first check whether it can dereference an info instance URI and, if it cannot, then check hasPrettyString, etc.
  Status: DONE

Test 4: pmlj:Query has hasAnswer
  Levels: NOT CARE / SHOULD / NOT CARE / SHOULD (MUST?)
  Comment: Probe-It! requires a PMLQuery to have this property; IWBrowser accepts a PMLQuery without answers.
  Status: DONE

Test 5: NodeSet has at least one isConsequentOf predicate
  Levels: NOT CARE / NOT CARE / NOT CARE / SHOULD (MUST?)
  Comment: Probe-It! requires a NodeSet to have at least one of this property; IWBrowser does not have this requirement.
  Status: DONE


Test 6: InferenceStep has the hasIndex predicate
  Levels: NOT CARE / NOT CARE / SHOULD / NOT CARE
  Comment: Used for ordering/sorting multiple InferenceSteps by the tools (WHICH TOOLS?). "hasIndex" is not mandatory for IWBrowser. What will IWBrowser do if there are multiple InferenceSteps and no hasIndex values? csc - IWBrowser will assume the order of the inference steps is not important and put the steps in a random order.
  Status: DONE

Test 7: Information instances referenced by hasConclusion use the hasFormat predicate
  Levels: NOT CARE / NOT CARE / SHOULD / SHOULD
  Comment: We can inform users that their NodeSet conclusions will default to being displayed as text in Probe-It! and IWBrowser; a missing hasFormat predicate will break visko. IWBrowser uses hasFormat if there is a renderer for that format, and otherwise defaults to text.
  Status: DONE

Test 10: The NodeSet's antecedent graph is a Directed Acyclic Graph
  Levels: NOT CARE / SHOULD (ask Cynthia) / MUST (ask Nick)
  Comment: Example of a cycle: A->B->C->D->A.
  Status: DONE

Test 11: Antecedents are modified after their conclusions
  Levels: NOT CARE / SHOULD (NOT CARE?) / NOT CARE / NOT CARE
  Comment: Stephan - I changed this to 'Info'. Perhaps make sure that a node's modification date is after its creation date.
  Status: DONE

Test 12: The pmlp:hasUsageDateTime on an inference step's SourceUsage is not younger than the same NodeSet's pmlp:hasCreationDateTime
  Levels: NOT CARE / SHOULD (NOT CARE?) / NOT CARE / NOT CARE
  Comment: Stephan
  Status: DONE

Test 13: NodeSet URIs must be resolvable
  Levels: NOT CARE / NOT CARE (?) / MUST / MUST
  Comment: The IW infrastructure (i.e., IW Browser) expects URIs to be resolvable; for example, the proof will not load if a NodeSet URI is not resolvable. Stephan - What about uploaded RDF/XML? Perhaps a flag to disable this test? Or do not run this test if RDF/XML is uploaded via a POST? This validation is no longer needed, at least for IWBrowser, which is now able to handle non-net-accessible URLs.
  Status: To be implemented


Test 14: (more general version of the no-double-hash problem)
  Levels: NOT CARE / ? / ? / ?
  Comment: Cynthia TODO. Stephan - What is the double-hash problem? Follow up with Cynthia.
  Status: To be implemented

Test 15: double "has" (not HASH) problem?
  Levels: ? / ? / ? / ?
  Comment: Stephan thinks this is different from the double ## problem.
  Status: To be implemented
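
To make the three core MUST tests concrete, here is a hedged sketch of how data loading, RDF syntax, and (approximately) OWL consistency could be checked with Apache Jena. This is not the validator's actual code (that lives in the svn); the document URI is a placeholder, and current Jena package names are assumed (2011-era code would have used the com.hp.hpl.jena packages):

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.reasoner.ValidityReport;
    import org.apache.jena.riot.RiotException;

    public final class CoreChecksSketch {
        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            try {
                m.read("http://example.org/proof.owl");  // placeholder document URI
            } catch (RiotException e) {
                System.out.println("error: does not meet RDF specification");
                return;
            } catch (Exception e) {
                System.out.println("error: could not obtain data");
                return;
            }
            // Consistency under RDFS entailment only; the real OWL-semantics
            // test would plug in a full OWL reasoner (e.g., Pellet).
            InfModel inf = ModelFactory.createRDFSModel(m);
            ValidityReport report = inf.validate();
            if (!report.isValid()) {
                System.out.println("error: does not meet OWL specification");
            }
        }
    }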

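Tests over the PML vocabulary follow the same pattern: find the instances of a class, then check which properties they assert. A sketch of Test 1, again hedged: the pmlj namespace URI is an assumption, and the real test class in the svn may be organized differently:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.RDF;
    import java.util.ArrayList;
    import java.util.List;

    public final class Test1Sketch {
        // Assumed PML-J namespace; substitute the one your documents use.
        static final String PMLJ = "http://inference-web.org/2.0/pml-justification.owl#";

        // Returns the InferenceSteps that have neither an inferring agent
        // (hasInferenceRule and hasInferenceEngine) nor a source usage.
        public static List<Resource> failingSteps(Model m) {
            Resource inferenceStep = m.createResource(PMLJ + "InferenceStep");
            Property hasRule   = m.createProperty(PMLJ + "hasInferenceRule");
            Property hasEngine = m.createProperty(PMLJ + "hasInferenceEngine");
            Property hasSource = m.createProperty(PMLJ + "hasSourceUsage");

            List<Resource> failures = new ArrayList<>();
            ResIterator it = m.listResourcesWithProperty(RDF.type, inferenceStep);
            while (it.hasNext()) {
                Resource step = it.next();
                boolean inferred   = step.hasProperty(hasRule) && step.hasProperty(hasEngine);
                boolean fromSource = step.hasProperty(hasSource);
                if (!inferred && !fromSource) failures.add(step);
            }
            return failures;
        }
    }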

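Test 10's DAG condition is a standard search for back edges. A self-contained sketch, assuming the antecedent edges have already been extracted from the PML graph into an adjacency map:

    import java.util.*;

    public final class AntecedentCycleCheck {
        // True if the antecedent graph (node URI -> antecedent node URIs)
        // contains a cycle, i.e., is not a DAG.
        public static boolean hasCycle(Map<String, List<String>> g) {
            Set<String> done = new HashSet<>();    // fully explored nodes
            Set<String> inPath = new HashSet<>();  // nodes on the current DFS path
            for (String node : g.keySet()) {
                if (dfs(node, g, done, inPath)) return true;
            }
            return false;
        }

        private static boolean dfs(String node, Map<String, List<String>> g,
                                   Set<String> done, Set<String> inPath) {
            if (inPath.contains(node)) return true;  // back edge: cycle found
            if (done.contains(node)) return false;
            inPath.add(node);
            for (String next : g.getOrDefault(node, List.of())) {
                if (dfs(next, g, done, inPath)) return true;
            }
            inPath.remove(node);
            done.add(node);
            return false;
        }

        public static void main(String[] args) {
            // The A->B->C->D->A example from the table is rejected.
            Map<String, List<String>> g = Map.of(
                "A", List.of("B"), "B", List.of("C"),
                "C", List.of("D"), "D", List.of("A"));
            System.out.println(hasCycle(g));  // true
        }
    }
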
Discussion

  • Many of these tests are conditions for well-formed PML that are not currently strictly required by the PML OWL Ontology. We could remove some tests/conditions from this list by adding restrictions to the PML OWL Ontology, but I am not suggesting we do that at this time; I am only suggesting that we put together a list of conditions for valid, or good, PML documents that the PML Validation service should test for. --Stephan
  • Everything on this page (validation levels, documented conditions, validation service operation, etc.) is open for discussion. --Stephan
  • How does this work relate to the RPI PML Validator and the UTEP PML Validator?
  • Stephan: valid OWL breaks the tool because of disjointness assumed in the tool but now explicit in the ontology (e.g., NodeSets and InferenceSteps as the same instance; e.g., a Document can't be Information according to the tool; the conclusion needs to have Information as its conclusion).
  • Jim switched from RESTlet to Wink.
  • Can a hasVariableMapping exist on an InferenceStep that has 1) no hasInferenceRule, or 2) a hasInferenceRule that points to something distinct from DeclarativeRule? (i.e., must a declarative rule be present if hasVariableMapping is present?) -Tim and Arun
  • How does this validator relate to http://tw.rpi.edu/wiki/On_Validating_PML_Documents ?
  • PLEASE include the URI of the test that is failing. This way, people looking at the tests have a pointer to a developing web resource that they can go to for help understanding the problem and getting suggestions on how to fix it. -Tim
  • PLEASE reorganize the source code to have each validator test embodied by a single, SHORT Java class file. This way, when a test fails, the wiki page for the test can point to the .java file in the svn so people can inspect it.
  • Allow the user to specify the checklist of tests they want to apply. (Deborah suggests this, from Chimaera.)
  • A tool being able to handle something is NOT a reason to remove a test.
  • A tool being able to handle something just means it will pass the test :-)
  • We need these tests to stick around for regression testing and to reflect our developments.

Validation Service Output

Reuse existing RDF vocabularies to describe test results
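
One existing vocabulary that fits is W3C EARL (Evaluation and Report Language, namespace http://www.w3.org/ns/earl#); whether the group settles on EARL is an open question, so treat this as one candidate. A sketch of emitting a single failed-test result with Jena; the subject and test URIs are placeholders:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.RDF;

    public final class EarlReportSketch {
        static final String EARL = "http://www.w3.org/ns/earl#";

        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            m.setNsPrefix("earl", EARL);

            // One earl:Assertion linking the validated document, the test,
            // and a failed earl:TestResult.
            m.createResource()
                .addProperty(RDF.type, m.createResource(EARL + "Assertion"))
                .addProperty(m.createProperty(EARL, "subject"),
                             m.createResource("http://example.org/proofs/proof1.owl"))
                .addProperty(m.createProperty(EARL, "test"),
                             m.createResource("http://example.org/tests/Test1"))
                .addProperty(m.createProperty(EARL, "result"),
                    m.createResource()
                        .addProperty(RDF.type, m.createResource(EARL + "TestResult"))
                        .addProperty(m.createProperty(EARL, "outcome"),
                                     m.createResource(EARL + "failed")));

            m.write(System.out, "TURTLE");
        }
    }

This also dovetails with Tim's request above to include the URI of the failing test in the output.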

Related Trac issues
