Linked Data API Specification - Project Hosting on Google Code(About) The API is intended to be a
middleware layer that can be deployed in front of a SPARQL endpoint,
providing a RESTful data access layer for the RDF data contained in the
triple store. The middleware is configurable, and is intended to
support a range of different access patterns and output formats. "Out
of the box" the system delivers the standard range of RDF
serialisations, as well as simple JSON and XML serialisations for
descriptions of lists of resources. The API essentially maps
parameterized URLs onto underlying SPARQL queries, mediating content
negotiation so that results are delivered in a format suitable for the client.
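The URL-to-query mapping can be sketched in a few lines. This is not the API's actual configuration language; the endpoint path, query template, and parameter names below are all hypothetical, chosen only to illustrate the idea of binding request parameters into a SPARQL query.

```python
from string import Template
from urllib.parse import urlparse, parse_qs

# Hypothetical configuration: a URL path mapped to a parameterized
# SPARQL query template (vocabulary URIs are illustrative).
ENDPOINTS = {
    "/doc/school": Template(
        "SELECT ?school WHERE { "
        "?school a <http://example.org/def/School> ; "
        "<http://example.org/def/district> '$district' . "
        "} LIMIT $limit"
    ),
}

def url_to_query(url: str, default_limit: int = 10) -> str:
    """Translate a parameterized request URL into a SPARQL query string."""
    parts = urlparse(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    template = ENDPOINTS[parts.path]
    return template.substitute(
        district=params.get("district", ""),
        limit=params.get("_limit", default_limit),
    )

query = url_to_query("http://example.org/doc/school?district=Bath&_limit=5")
```

A real deployment would add validation, paging, and content negotiation on the results; the sketch shows only the core substitution step.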
Lost Boy: Bee Node Deconstructed(About) "ADC pattern" (ASK, DESCRIBE, CONSTRUCT): a way to probe a remote data set to see if it has information that is of interest and then extract information from that data set with increasing levels of precision and control.
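The three escalating probes can be written out concretely. The resource URI and the FOAF property below are illustrative, not taken from the original post; the point is only the progression from a cheap yes/no check to a precisely shaped extraction.

```python
# Sketch of the ADC pattern: probe a remote dataset with ASK, then
# DESCRIBE, then CONSTRUCT, each step more precise than the last.
RESOURCE = "http://example.org/people/alice"

# Step 1 (ASK): does the dataset hold any data about the resource at all?
ask = f"ASK {{ <{RESOURCE}> ?p ?o }}"

# Step 2 (DESCRIBE): retrieve whatever description the endpoint chooses
# to provide for the resource.
describe = f"DESCRIBE <{RESOURCE}>"

# Step 3 (CONSTRUCT): extract exactly the triples of interest, reshaped
# into a graph under the client's control.
construct = (
    "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"
    f"CONSTRUCT {{ <{RESOURCE}> foaf:name ?name }}\n"
    f"WHERE {{ <{RESOURCE}> foaf:name ?name }}"
)
```

Each query would be sent to the remote endpoint in turn, with the cheap ASK acting as a gate before the heavier DESCRIBE and CONSTRUCT requests.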
XMLArmyKnife - Experimenting with EmbeddedRDF and GRDDL Support(About) Embedded RDF is a method of embedding (a subset of) RDF within XHTML and HTML documents. A simple XSLT transformation can be used to extract the RDF from within the document.
A related and more generalised technology is GRDDL, which defines how to associate transformation algorithms (e.g. XSLT stylesheets) with XHTML profiles or microformats, so that there is a clear mapping from embedded metadata into RDF.
I've been experimenting with adding support for both of these technologies in the XMLArmyKnife SPARQL query service. This provides a means to directly query RDF embedded in XHTML documents.
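The GRDDL association mechanism for XHTML can be sketched with the standard library: a document declares the GRDDL profile on its head and points at its transformation with a link element. The stylesheet name in the sample document is hypothetical, and a real agent would go on to fetch and apply the XSLT rather than just list it.

```python
import xml.etree.ElementTree as ET

XHTML_NS = "{http://www.w3.org/1999/xhtml}"
GRDDL_PROFILE = "http://www.w3.org/2003/g/data-view"

def grddl_transformations(xhtml: str) -> list:
    """Return the transformation URLs a GRDDL-aware agent would apply
    to an XHTML document (a minimal sketch of the mechanism)."""
    root = ET.fromstring(xhtml)
    head = root.find(f"{XHTML_NS}head")
    if head is None or GRDDL_PROFILE not in head.get("profile", ""):
        return []
    return [
        link.get("href")
        for link in head.findall(f"{XHTML_NS}link")
        if "transformation" in link.get("rel", "").split()
    ]

doc = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>Example</title>
    <link rel="transformation" href="extract-erdf.xsl"/>
  </head>
  <body/>
</html>"""
```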
XTech 2006: SPARQLing Services(About) This paper will review the SPARQL specifications and their potential benefits to Web 2.0 applications. Focusing on the SPARQL protocol for RDF, the paper will provide implementation guidance for developers interested in adding SPARQL support to their applications.
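The client side of the SPARQL protocol is small enough to sketch: a query travels as the `query` parameter of an HTTP GET to the endpoint. The endpoint URL below is illustrative, and this only builds the request URL rather than issuing it.

```python
from urllib.parse import urlencode

def sparql_get_url(endpoint: str, query: str) -> str:
    """Build a SPARQL protocol GET request URL for a query."""
    return endpoint + "?" + urlencode({"query": query})

url = sparql_get_url(
    "http://example.org/sparql",
    "SELECT ?s WHERE { ?s ?p ?o } LIMIT 1",
)
# An Accept header on the request (e.g. application/sparql-results+xml)
# then selects the result serialisation via content negotiation.
```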
Slug: A Semantic Web Crawler(About) Slug is a web crawler (or Scutter) designed for harvesting semantic web content. Implemented in Java using the Jena API, Slug provides a configurable, modular framework that allows a great degree of flexibility in configuring the retrieval, processing and storage of harvested content.
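Slug itself is a Java/Jena framework, but its modular shape, pluggable retrieval, processing and storage components around a crawl loop, can be sketched generically. The function and parameter names here are hypothetical, not Slug's actual API.

```python
from typing import Callable, Iterable, List

def crawl(
    seeds: Iterable[str],
    retrieve: Callable[[str], str],             # fetch a URL's content
    extract_links: Callable[[str], List[str]],  # find further URLs to visit
    store: Callable[[str, str], None],          # persist harvested content
    max_pages: int = 10,
) -> int:
    """Breadth-first crawl assembled from pluggable components;
    returns the number of pages fetched."""
    seen, frontier, fetched = set(), list(seeds), 0
    while frontier and fetched < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        content = retrieve(url)
        store(url, content)
        frontier.extend(extract_links(content))
        fetched += 1
    return fetched
```

Swapping in different `retrieve`, `extract_links`, or `store` callables is what gives this shape its flexibility, e.g. an RDF-aware processor that parses fetched documents with a triple store as the storage component.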