Architecture Repository/Patterns/Canonical data modeling

== Command-query responsibility segregation ==

This pattern allows content and knowledge to be understood by people, programs, and machines outside the traditional boundaries of MediaWiki and, as far as possible, allows consumers to request only what they need.

What is the structure of "knowledge" and how does it flow across the system? Building this data model requires defining boundaries around data objects and their interrelationships. A page, for example, is a collection of sections (and templates, which we did not tackle here). Sections are also part of collections about a topic (physics, for example). In our modeling, we:


 * Defined a predictable structure using industry-standard formats like schema.org (to support predictability and reusability)
 * Broke down preexisting structures (all the content on the Philadelphia page) into parts (a section on the History of Philadelphia) and established interrelationships between the parts (to support "only what they need") using hypermedia linking.
 * Enhanced the structure with contextual information by associating parts with Wikidata (to enable natural collections like US Cities) and indexing collections with Elasticsearch.
 * Enabled interaction with the structure via API calls. Multiple API calls can be wrapped into a single payload -- or not.
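The steps above can be sketched as a small data-model example. Everything here is illustrative: the URL shapes and function names are assumptions, not the project's actual API. The schema.org `Article` type and the Wikidata item Q1345 (Philadelphia) are real identifiers, but how they are combined below is a sketch.

```python
# Hypothetical sketch of the canonical structure described above: a page is a
# schema.org Article whose sections are addressable parts, linked via
# hypermedia (so consumers can fetch "only what they need"), and associated
# with a Wikidata item for contextual collections (e.g. US Cities).

def make_section(page, anchor, title):
    """Build one addressable part of a page, with a hypermedia self-link."""
    return {
        "@type": "Article",                    # schema.org type
        "name": title,
        "isPartOf": {"@id": f"/page/{page}"},  # link back to the whole
        "url": f"/page/{page}/section/{anchor}",
    }

def make_page(title, wikidata_id, sections):
    """Build the canonical page object: a predictable schema.org structure,
    with parts linked rather than inlined, plus a Wikidata association."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "@id": f"/page/{title}",
        "name": title,
        "sameAs": f"https://www.wikidata.org/wiki/{wikidata_id}",
        "hasPart": [make_section(title, a, t) for a, t in sections],
    }

page = make_page("Philadelphia", "Q1345",
                 [("history", "History of Philadelphia"),
                  ("geography", "Geography of Philadelphia")])

# A consumer that wants only the History section follows the part's link
# instead of downloading the whole page:
history_url = page["hasPart"][0]["url"]
print(history_url)  # /page/Philadelphia/section/history
```

Because every part carries its own URL, a client can make one API call per section it needs, or a server can wrap several parts into a single payload, as described above.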