With Google Earth, one can browse the web of KML documents and author new
KML documents. It's nearly the ideal client for a read-write "GeoWeb". The one
missing feature: publishing KML documents and placemarks to the Web. I've
written previously about this missing feature and now I'm going to explain
exactly how it should be done. Andrew, consider this a little free legwork for
your Agile Geography effort.
Google Earth (GE) already allows you to POST KML documents to a Keyhole
website. I am simply proposing more of the same using a simple and effective
RESTful protocol that has already been implemented on a large scale within
Google (see GData): the Atom Publishing Protocol (aka Atompub). Never mind
transactional OGC Web Feature Services -- WFS is a sorry compromise between
interests who really, in their hearts, want SOAP and more pragmatic interests
who want to at least be able to GET features with their web browser -- the
future of GIS web services is RESTful.
The Google Earth team already recognizes this, even if not overtly. Consider
the new handling of hypermedia links that is proposed for KML 2.2: links to
KML documents found in placemark descriptions will be
dereferenced by GE itself. In other words, GE will use hypermedia as the engine
of application state, one of the tenets of REST. A RESTful approach to
publishing data to the Web would complement the new geo-browsing feature
perfectly.
So, how would one use Atompub with KML and Google Earth? Interestingly (maybe
coincidence, maybe not), KML 2.2 is already paving the way by adding
atom:link. There are three main classes of links in Atompub: to a Collection
URI, to a Member URI, and to a Media Resource URI. Add such links to KML
elements and Atompub interoperability is in hand. More specifically, every KML
Document SHOULD specify an Atompub Collection URI:
<Document>
  <atom:link rel="self" href="http://gis.example.com/places/"/>
  ...
</Document>
The URI specified in the href attribute is the one to which a client (such as
the future KML-publishing version of Google Earth) would POST a new
placemark, and thereby insert new data into the collection that is providing
the current KML document. Likewise, every Placemark that one may edit
SHOULD specify a Member URI and a Media Resource URI:
<Placemark>
  <atom:link rel="edit" href="http://gis.example.com/places/1"/>
  <atom:link rel="edit-media" href="http://gis.example.com/places/1.kml"/>
  ...
</Placemark>
The URI in the "edit" link is the one to which metadata updates (attribution, etc) would be PUT by the client. The URI in the "edit-media" link is the one to which the client would PUT an updated KML representation.
That's it. It's mind-bogglingly simple. The catch for implementors, of course,
is authentication. Most sites/services won't (for good reason) accept
anonymously posted placemarks or allow drive-by editing and deletion of
existing data. This means that Google Earth would need to implement many of the
security features that we usually take for granted in our web browsers. Still,
authentication is no more a problem for this RESTful approach using Atompub
than it is for WFS-T.
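For what it's worth, ordinary HTTP authentication already gets a client most of the way there, and nothing about it is specific to KML or Atompub. A sketch, assuming a server that challenges the collection with Basic auth (the realm and credentials are invented; a real deployment would insist on TLS or a token scheme):
import urllib.request

# Attach credentials that will be replayed when the server answers a POST
# or PUT with 401 Unauthorized. Realm, user, and password are made up.
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(
    realm="places",
    uri="http://gis.example.com/places/",
    user="sean",
    passwd="secret",
)
opener = urllib.request.build_opener(auth_handler)
# response = opener.open(request)  # any request built as in the sketches above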
Atompub + KML == agile, read-write geography.
Update: I just remembered that Andrew Turner almost made the Atompub connection here. His proposal for linking to KML from Atom entries fits perfectly with what I've outlined above.
Comments
Re: Uninformed
Author: Chris Andrews
Sean, I don't disagree with you... but how do you explain particle physics before first explaining addition? I would encourage you to better describe the issue to help inform the general GIS audience. Thanks for the feedback. Chris
Re: Uninformed
Author: Sean
Luckily, grokking REST is a lot easier than understanding quantum field theory. Starting from zero, the first thing to explain to someone is: With REST, every piece of information has its own URL. Once they get this, move on to explaining the tenets of REST in engineering terms like Bray did (above). If you've still got an audience, get into the details of representation formats, resource modeling, and PUT.
Re: Uninformed
Author: Allan Doyle
Or maybe we're not looking for engineering terms here... First, the term "web" means that you are interacting with things using HTTP, a protocol most people are familiar with only through their browsers. Generally, HTTP in a browser uses "GET" to fetch HTML, but can also be used to do a lot more. Also, generally, "POST" is used to push information back to a server/service. There are other common but lesser-known operations, "PUT" and "DELETE".
REST is a style of interaction that constrains the way these four "verbs" are used during a web-based interchange. That means that not all possible uses of "GET" are considered to be proper uses. In particular, any use of "GET" should always result in pulling information from a server. Thus it's non-RESTful to use "GET" to push information back to the server that changes the state of the server. For that you should use POST, PUT, or DELETE.
When you use your browser to access something like http://finance.google.com/finance?q=AAPL you expect to get back the Google finance page for Apple. You could conceivably also set things up so that the NASDAQ could use http://finance.google.com/finance?q=AAPL&current=136.82 to update the stock price. Well, if they do that using "GET", that's not REST. REST says that if you want to update a resource (in this case, the price of Apple stock) you should use the HTTP PUT operation. (Note that REST would also favor the use of http://finance.google.com/finance/AAPL instead, but that's a more subtle point) And, let's say I get my company, RESTfulServicesInc, listed after my wildly successful IPO, then a RESTful way to initialize the new listing would be by using HTTP POST of the initial information to http://finance.google.com/finance/. And, sadly, when my company tanks, I use DELETE.
So REST talks about verbs (GET, POST, PUT, DELETE) and specifies how each of those should be used. It's like me telling you never to hammer nails with the handle of a screwdriver. Sure, you can do it, but it's not going to follow the generally accepted constraints of carpentry. REST also talks about the use of other parts of HTTP, such as how to use the "header" information that is transferred with each of the four verbs.
Why does all this matter? Adherents of REST say that by accepting the constraints REST imposes on the use of the HTTP verbs, you can build more robust web services and that those web services will, in fact, work properly with the existing mechanisms of the web, including things generally hidden to end-users like proxies and caches. Furthermore, they say that with non-REST approaches (SOAP, WS-*, or just a general mish-mash of HTTP), systems are more prone to failure over the long term.
Ultimately it boils down to some simple questions of style. Do you prefer to work with a more tightly coupled interaction between client and server that is influenced by the previous generations of client/server frameworks like DCE, CORBA, OLE/COM, etc. (which is really a distributed computing platform added on top of HTTP as a separate layer) or do you prefer to work with a style of interaction that uses HTTP as the actual layer of interaction? REST is the latter.
Re: Uninformed
Author: Sean
Beautiful, Allan. I like the carpentry analogy for REST constraints.