This week I finally got around to working on a task that's been haunting me: OpenLayers as an AtomPub client. Chris Schmidt has done much of the work already, but I'm going to see it all the way through over the next few months.
A quick review of AtomPub and how it relates to geospatial applications is probably needed here. There are three types of resources in the Atom Publishing Protocol (setting aside workspaces for now): services, collections, and entries. The analogous components in a GIS architecture are services, featuretypes (or coverages), and features. An AtomPub service document is in some ways like an OWS capabilities document, but simpler, because AtomPub doesn't fool around with non-HTTP semantics and bakes more metadata into its single protocol. An Atom feed document is rather a lot like a WFS feature collection document, and an Atom entry is much like a GML feature, only more standardized. OpenLayers does WFS already, and needs little modification to be a good AtomPub client.
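To make the analogy concrete, here is a minimal sketch (not working code from the demo) of reading a collection feed into vector features. It assumes the new format class described below follows the usual OpenLayers.Format read interface; the layer variable is a stand-in for an existing vector layer.

// Sketch only: fetch an AtomPub collection feed with a plain GET and
// parse its entries into vector features, much as one would handle a
// WFS FeatureCollection.
var atomFormat = new OpenLayers.Format.Atom();
new OpenLayers.Ajax.Request("http://sgillies.net/kw/++rest++knowhere/demo", {
    method: "get",
    onSuccess: function(xhr) {
        // each atom:entry becomes an OpenLayers.Feature.Vector
        var features = atomFormat.read(xhr.responseText);
        layer.addFeatures(features);  // 'layer' is an assumed vector layer
    }
});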
Here's the little modification I've made: a new OpenLayers.Format.Atom lets me ignore the jumble of namespaces and elements that is RSS 1 and 2. This Atom.js is simpler than GeoRSS.js. The atom:content element is the prime avenue for extending Atom entries, and I've added support for that as well. Best practice for georeferencing Atom entries is to use a GML simple features geometry (see OGC 06-049r1) within a georss:where element, and a new OpenLayers.Format.GMLSF handles simple features GML within my new Atom format. The GMLSF format writes gml:pos and gml:posList only, no gml:coordinates, and uses "exterior" and "interior" as ring element tags; when reading, it is just as forgiving as OpenLayers.Format.GML. So now an OpenLayers app can write well-calibrated feature entries to be POSTed to an AtomPub-style collection like the one at http://sgillies.net/kw/++rest++knowhere/demo. (I'll explain the odd URL in a future post.)
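For the write side, here is another minimal sketch, again assuming the new format classes follow the standard OpenLayers.Format read/write interface; the attribute values and coordinates are made up for illustration.

// Sketch of serializing a feature as an Atom entry. Attributes and
// coordinates here are illustrative only.
var format = new OpenLayers.Format.Atom();
var feature = new OpenLayers.Feature.Vector(
    new OpenLayers.Geometry.Point(7.0, 49.9),
    {
        title: "Wehlener Sonnenuhr",
        summary: "A vineyard placemark",
        content: "Longer text for the atom:content element"
    }
);
// write() should produce an atom:entry with the geometry encoded as
// simple features GML (gml:pos, exterior/interior rings) inside a
// georss:where element, per OGC 06-049r1.
var entry = format.write(feature);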
That collection is live. I've copied over a few entries from my Hammock app like so:
sean@lenny:~$ curl -X GET http://sgillies.net/hammock/places/6.atom > 6.atom
sean@lenny:~$ curl -X POST -v \
-H "Content-Type: application/atom+xml" \
-H "Slug: Navarro Vineyards" \
--data @6.atom \
http://sgillies.net/kw/++rest++knowhere/demo
* About to connect() to sgillies.net port 80
* Trying ...... connected
* Connected to sgillies.net (...) port 80
> POST /kw/++rest++knowhere/demo HTTP/1.1
> User-Agent: curl/7.15.5 (i486-pc-linux-gnu) ...
> Host: sgillies.net
> Accept: */*
> Content-Type: application/atom+xml
> Slug: Wehlener Sonnenuhr
> Content-Length: 1301
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
< Date: Sun, 17 Feb 2008 20:46:21 GMT
< Server: Twisted/2.5.0 ...
< Content-Length: 73
< Accept-Ranges: bytes
< X-Content-Type-Warning: guessed from content
< Location: http://sgillies.net/kw/++rest++knowhere/demo/wehlener-sonnenuhr
< Content-Type: text/plain;charset=utf-8
* Connection #0 to host sgillies.net left intact
* Closing connection #0
Location: http://sgillies.net/kw/++rest++knowhere/demo/wehlener-sonnenuhr
(The ability to copy resources is built into HTTP.) If you browse to http://sgillies.net/kw/demo/map (and wait a few seconds for a pile of JavaScript to download) you'll see this placemark displayed among others as a vector layer in an OpenLayers map of the demo collection. Toward implementing AtomPub, I have a second vector layer that serves as a buffer for to-be-posted placemarks. Draw a new geometry in the map using the OpenLayers tools and you'll see it in green. Then set a title, summary, and content text, plus a placemark name slug, and click "post". If you've picked a slug that collides with an existing placemark name, you'll see an error message. Otherwise, you'll see the location of the newly created placemark.
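Here is a rough idea of how that buffer layer and drawing control might be wired up. This is a sketch under assumptions about the demo page, not code from the page template itself; the style and handler choices are illustrative.

// Hypothetical setup for the buffer layer: new geometries drawn with
// the OpenLayers tools land in 'buffer' (shown in green) rather than
// in the layer that renders the collection feed.
var buffer = new OpenLayers.Layer.Vector("Buffer", {
    style: {strokeColor: "green", fillColor: "green", fillOpacity: 0.4}
});
map.addLayer(buffer);

// A draw control targeting the buffer layer; the handler (point, path,
// or polygon) depends on which tool the user picks.
var draw = new OpenLayers.Control.DrawFeature(buffer, OpenLayers.Handler.Polygon);
map.addControl(draw);
draw.activate();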
Still to do are placemark editing and deletion, which Chris's AtomPub demo already handles. I think getting this right will require a new OpenLayers.Layer.AtomPub, one that behaves much like the existing WFS layer but interacts with the server using GET/PUT/DELETE/POST; a rough sketch of those edit and delete requests follows the function below. The protocol implementation, such as it is, currently lives in this page template. The only real novelty in it is the use of the HTTP Slug request header to suggest resource names.
function postPlacemark() {
    // The buffer layer holds the single not-yet-posted placemark.
    var feature = buffer.features[0];
    feature.attributes.title = $("pm-title").value;
    feature.attributes.description = $("pm-summary").value;
    feature.attributes["content"] = $("pm-content").value;

    // Serialize the feature as an Atom entry document.
    var atom = format.write(feature);
    var options = {
        method: "post",
        contentType: "application/atom+xml",
        postBody: atom,
        onSuccess: updatePage,
        onFailure: function(xhr) {
            update_status("Failed post (status code " + xhr.status + "). Check your URL.");
        }
    };

    // Suggest a resource name to the server via the Slug header.
    var slug = $("pm-slug").value;
    if (slug.length > 0) {
        options.requestHeaders = ["Slug", slug];
    }
    new OpenLayers.Ajax.Request(collection_URL, options);
}
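Since edit and delete aren't implemented yet, the following is only a hypothetical sketch of what they might look like with the same Ajax helper. The entryURL argument stands for the member URL returned in the Location header at creation time, and it's an assumption here that OpenLayers.Ajax.Request passes PUT and DELETE through to the server unmodified.

// Hypothetical only: not part of the page template yet.
function putPlacemark(entryURL, feature) {
    // Replace the member resource with an updated Atom entry.
    new OpenLayers.Ajax.Request(entryURL, {
        method: "put",
        contentType: "application/atom+xml",
        postBody: format.write(feature),
        onSuccess: updatePage,
        onFailure: function(xhr) {
            update_status("Failed put (status code " + xhr.status + ").");
        }
    });
}

function deletePlacemark(entryURL) {
    // Remove the member resource from the collection.
    new OpenLayers.Ajax.Request(entryURL, {
        method: "delete",
        onSuccess: updatePage,
        onFailure: function(xhr) {
            update_status("Failed delete (status code " + xhr.status + ").");
        }
    });
}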
My server has the final say, and currently just lowercases the slugs and replaces whitespace with a dash.
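In JavaScript terms (the actual normalization happens on the server, so this is just an equivalent sketch of the behavior described above), that amounts to:

// Equivalent of the server-side slug normalization described above.
function normalizeSlug(slug) {
    return slug.toLowerCase().replace(/\s+/g, "-");
}
// normalizeSlug("Wehlener Sonnenuhr") -> "wehlener-sonnenuhr"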
I submitted the new Atom and GMLSF formats to OpenLayers (#1366) and look forward to working with the developers to get them into an upcoming release.
Comments
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Paul Ramsey
What's the use case for digitized inscriptions? I don't comprehend.
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Yves Moisan
Same question here, but I do find it neat to use OpenLayers on images other than geospatial ones. Annotations seem to me like the low hanging fruit here. An example: being able to share e.g. an air photo amongst a group of people that could trace out where they think the terminal moraine is and discuss through annotations the relative merits of delineation A vs B would be one heck of an interesting tool for photointerpreters. I have to look seriously at OpenLayers.
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Sean
Collaborative interpretation, for sure. And last night I received an email from a researcher who is using traces like these to train character or inscription extraction software. The humanities are more quantitative than you'd guess.
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Tom Elliott
I've started a thread aimed at answering Paul's question at Current Epigraphy. There's already one helpful response ...
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: KoS
Great work. Neat use. I was thinking, from the standpoint of capturing the text: couldn't a laser scanner, running an edge detection on an image or something else similar, do almost the same thing? If you want vectors too, run a raster to vector conversion afterwards. KoS
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Tom Elliott
KoS: Certainly, and there are some folks working with automated techniques (see: M. Terras, Image to Interpretation: An Intelligent System to Aid Historians in Reading the Vindolanda Texts, Oxford, 2006, ISBN: 9780199204557 - publisher's blurb). Tools for manual work can still be valuable ... for example, for expert cleanup of automated vector creation (i.e., supervision and selection), or just for singling out arbitrary portions of an image for subsequent annotation.
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Stefano Costa
Really nice, for an archaeologist like me. This makes me think about 2 things: 1) it would be great to have an OGRGeometry method (and a Shapely one, too) like ExportToSVG(). This would allow easy vector image generation from geometries. 2) when using typical GIS tools like OL, OGR, for something that is geometric but not geographic, it's not always clear how to state that your XY(Z) coordinates are not in a specific CRS, nor lat/lon. See for example http://iosa.sharesource.org/ where we use OGR for the analysis of an ancient wall.
Re: Digitizing Ancient Inscriptions with OpenLayers
Author: Sean
Stefano, I know almost nothing about archaeological photogrammetry, so I'm very interested in learning from your projects.