2007 (old posts, page 20)

Atompub and KML Demo

Update (2009-05-20): Hammock has been retired, replaced since December 2007 by Knowhere.

I have repurposed my Hammock application into a demonstration of the Atompub, KML, and Google Earth integration. Now, from an Atom perspective there is a service document and one collection at:



Subscribe to that second URL; it's an Atom feed. Any newsreader should be able to use it to monitor changes to the collection. Next, start Google Earth and in your temporary places container create a new network link to this URL:


The same (very simplistic) data model supports both the Atom feed and the KML document.

Ready to publish data to the Web, Atompub-style? All you need is a terminal, curl or another moderately sophisticated HTTP client, and a little make-believe. Imagine that Google Earth or another future geo-browser is doing all the POST and PUT dirty work for you.

In the temporary places container, create a new point placemark anywhere. Give it a name and a plain text description. Save it to your filesystem as hammock.kml. To create a new, publicly-shared place within the collection based on your new placemark you send the contents of that file in the body of a request to the collection URI. I showed that URI above, but the sure place to find it is in the atom:link element of the KML file. View the source on that file and verify that there's a link to http://sgillies.net/hammock/places/. Now send a request like:

POST /hammock/places/ HTTP/1.1
Host: sgillies.net
Content-type: application/vnd.google-earth.kml+xml


You can use curl to do it like this:

$ cat hammock.kml | curl -d @- --header "Content-type: application/vnd.google-earth.kml+xml" http://sgillies.net/hammock/places/

It's very important to specify the proper content type. If you've typed carefully, you will get a response with a Location header containing the URL of the newly created resource. If you refresh the Atom feed and network link, you'll see the new resource reflected in each. You've seen this before in other apps like FeatureServer. What's new is that the KML document itself carries a link to the collection URI, the factory for new placemark resources. That's Atompub in action.
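For reference, a successful creation in the Atompub model is a 201 response whose Location header names the new member. It would look something like this (the specific URL here is invented for illustration):

```http
HTTP/1.1 201 Created
Location: http://sgillies.net/hammock/places/4
```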

Furthermore, any placemark resource represented in http://sgillies.net/hammock/places.kml can be edited via its "edit" or "edit-media" (they are equivalent in this demo) atom:links using the HTTP PUT method. View the source to find the edit link for your newly created resource, and copy it (remember, one day Google Earth should do all this for you). Now, move the placemark in Google Earth or create a new one, and save it to the filesystem. Finally, PUT the saved placemark to the edit URI you copied using curl like:

$ cat edit.kml | curl -X PUT -d @- --header "Content-type: application/vnd.google-earth.kml+xml" <edit-uri>

In this case, I've edited the third place. Note the use of the PUT method. If you typed carefully, the changes you made will be reflected in your newsreader and in the Google Earth network link.
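A geo-browser could do the same work programmatically. Here's a minimal sketch in modern Python of building those two requests; the demo itself used curl, the helper names and the use of urllib.request are my own, and the requests are built but not sent:

```python
import urllib.request

# The KML media type the demo requires on every POST and PUT.
KML_TYPE = "application/vnd.google-earth.kml+xml"

def create_place(collection_uri, kml_bytes):
    # POST to the collection URI creates a new member resource.
    return urllib.request.Request(
        collection_uri, data=kml_bytes,
        headers={"Content-Type": KML_TYPE}, method="POST")

def update_place(edit_uri, kml_bytes):
    # PUT to the member's edit or edit-media URI replaces its state.
    return urllib.request.Request(
        edit_uri, data=kml_bytes,
        headers={"Content-Type": KML_TYPE}, method="PUT")
```

Sending either request with urllib.request.urlopen and, for the POST, reading the Location header of the 201 response would complete the round trip.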

It almost seems too simple to work: serve users KML that provides the very links by which they can add to, or modify, the web resources represented in the document. I'd be surprised if there is a tidier solution. I hope this simple demo can convince a few people to take a closer look at using Atompub for geospatial services.


Re: Atompub and KML Demo

Author: Christopher Schmidt

The KML doesn't use the Atom namespace at all, so far as I can tell. Is that intentional? Seems odd to me...

Re: Atompub and KML Demo

Author: Sean


Re: Atompub and KML Demo

Author: Christopher Schmidt

So, I'm confused on one thing: you say '... can be edited via its "edit" or "edit-media" (they are equivalent in this demo) atom:links ...' What is the difference? It seems like in general, the edit-media URLs allow you to edit the different representations of the same resource. For FeatureServer, it seems like this means that there could (theoretically) be edit-media URLs for each of the supported input services (Atom, KML, JSON, etc.). Is that right? Is there a reason these URLs have to be different, or is differentiation between 'Content-Type' on the PUT, as defined by the 'type' attribute on the <atom:link> sufficient?

Re: Atompub and KML Demo

Author: Christopher Schmidt

For the record, I've added a 'atom:link rel="self"' and rel='edit-media' to featureserver's KML output. I suppose I should add the same to the atom feeds? Is there any atom-pub supporting client I could test out?

Re: Atompub and KML Demo

Author: Sean

Christopher, I think that anything we made would be the first truly geospatial Atompub client. I can see myself contributing to one for OpenLayers. Pete Lacey's appfs is a really intriguing client that might support geo use cases. Seems like a lightweight WebDAV.

The canonical example for media resources and edit-media links is an image collection (ala Flickr). Posting a new image to the collection creates both an image (media) resource and a media link resource. That media link resource contains metadata for the image resource. The resources in the canonical example are completely normalized; there's no overlap between the media and metadata (media link). You update the image by a PUT to the collection entry's "edit-media" URI, and update the metadata by a PUT to the entry's "edit" URI.

What's the rationale for explicit "edit-media" links? Here's one case: completely separate backends for images and metadata/annotations. On the other hand, a KML placemark and Atom entry overlap significantly, and I've limited the scope of my demo to the point where the overlap is complete. You can PUT KML to the "edit-media" URI, or an Atom entry to the "edit" URI, but the effect is the same. That's just a detail of my demo's implementation, not a recommendation.

Andrew Turner's work on factoring KML into modules could tie into Atompub very well. An Atompub + KML client could PUT the metadata module's elements (as Atom) to an "edit" URI, and PUT everything else (as KML) to the "edit-media" URI.

The Shapely Alchemist

Following the example at byCycle.org I've figured out how to use Shapely geometries with SQLAlchemy and PostGIS. Here's the custom type:

from sqlalchemy import types
from shapely.wkb import loads

class Geometry(types.TypeEngine):

    def __init__(self, srid, geom_type, dims=2):
        super(Geometry, self).__init__()
        self.srid = srid
        self.geom_type = geom_type
        self.dims = dims

    def get_col_spec(self):
        return 'GEOMETRY'

    def convert_bind_param(self, value, engine):
        # Serialize a Shapely geometry to PostGIS form: EWKT with
        # hex-encoded WKB.
        if value is None:
            return None
        return "SRID=%s;%s" \
            % (self.srid, value.wkb.encode('hex'))

    def convert_result_value(self, value, engine):
        # Decode hex-encoded WKB from PostGIS into a Shapely geometry.
        if value is None:
            return None
        return loads(value.decode('hex'))


>>> from sqlalchemy import create_engine, MetaData, Table, Column
>>> db = create_engine("postgres://localhost/the_db")
>>> metadata = MetaData(db)
>>> places = Table("places", metadata,
...     Column("the_geom", Geometry(4326, "POINT"))
...     )
>>> result = places.select().execute()
>>> row = result.fetchone()
>>> row
(<shapely.geometry.point.Point object at 0xb771490c>,)
>>> row.keys()
['the_geom']
>>> row.the_geom
<shapely.geometry.point.Point object at 0xb771482c>
>>> row.the_geom.wkt
'POINT (-106.0000000000000000 40.0000000000000000)'
>>> row.the_geom.x
-106.0
>>> row.the_geom.y
40.0


Uninformed

The industry mainstream has now heard of REST, but not everyone gets it yet. Example: this Directions article.

When writing about REST, the most important things to communicate are the central tenets. Tim Bray explains them like this:

  • You have a lot of things in the system, identified by URIs.
  • The protocol (probably HTTP) only knows one MEP [message exchange pattern]: single-request, single-response, with the request directed at a URI.
  • An important subset of the requests, what HTTP calls GET, are read-only, safe, idempotent.
  • You expect to ship a lot of URIs around in the bodies of requests and responses, and use them in operations that feel like link following.

Andrews doesn't address these tenets and, instead, muddles around with "GET URLs" and "POST URLs" in an article that misinforms readers. Most of what's wrong with the article is summarized in the first sentence of the closing paragraph (emphasis mine):

Until then, I don't have much of an opinion about whether the industry should standardize on POST or GET request types. Both types of request use predictable, standardized formats that can be executed elegantly or poorly. The REST discussion reveals that the industry should consider that predictability is not the only goal of standards. Intuitiveness and accessibility may be equally important when establishing information exchange standards.

In reality, there's no such controversy. A RESTful GIS web service uses both POST and GET: the former to create new web resources, the latter to fetch representations of resources in a safe (no side-effects) way.
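In request form, the distinction looks like this for a hypothetical RESTful feature service (URIs invented for illustration):

```http
GET /features/42 HTTP/1.1
Host: gis.example.com

POST /features/ HTTP/1.1
Host: gis.example.com
Content-type: application/vnd.google-earth.kml+xml
```

The GET safely fetches a representation of one feature and can be cached; the POST creates a new feature resource under the collection URI.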


Re: Uninformed

Author: Chris Andrews

Sean, I don't disagree with you... but how do you explain particle physics before first explaining addition. I would encourage you to better describe the issue to help inform the general GIS audience. Thanks for the feedback. Chris

Re: Uninformed

Author: Sean

Luckily, grokking REST is a lot easier than understanding quantum field theory. Starting from zero, the first thing to explain to someone is: With REST, every piece of information has its own URL. Once they get this, move on to explaining the tenets of REST in engineering terms like Bray did (above). If you've still got an audience, get into the details of representation formats, resource modeling, and PUT.

Re: Uninformed

Author: Allan Doyle

Or maybe we're not looking for engineering terms here... First, the term "web" means that you are interacting with things using HTTP, a protocol most people are familiar with only through their browsers. Generally, HTTP in a browser uses "GET" to fetch HTML, but can also be used to do a lot more. Also, generally, "POST" is used to push information back to a server/service. There are other common but lesser-known operations, "PUT" and "DELETE".

REST is a style of interaction that constrains the way these four "verbs" are used during a web-based interchange. That means that not all possible uses of "GET" are considered to be proper uses. In particular, any use of "GET" should always result in pulling information from a server. Thus it's non-RESTful to use "GET" to push information back to the server that changes the state of the server. For that you should use POST, PUT, or DELETE.

When you use your browser to access something like http://finance.google.com/finance?q=AAPL you expect to get back the Google finance page for Apple. You could conceivably also set things up so that the NASDAQ could use http://finance.google.com/finance?q=AAPL&current=136.82 to update the stock price. Well, if they do that using "GET", that's not REST. REST says that if you want to update a resource (in this case, the price of Apple stock) you should use the HTTP PUT operation. (Note that REST would also favor the use of http://finance.google.com/finance/AAPL instead, but that's a more subtle point) And, let's say I get my company, RESTfulServicesInc, listed after my wildly successful IPO, then a RESTful way to initialize the new listing would be by using HTTP POST of the initial information to http://finance.google.com/finance/. And, sadly, when my company tanks, I use DELETE.

So REST talks about verbs (GET, POST, PUT, DELETE) and specifies how each of those should be used. It's like me telling you never to hammer nails with the handle of a screwdriver. Sure, you can do it, but it's not going to follow the generally accepted constraints of carpentry. REST also talks about the use of other parts of HTTP, such as how to use the "header" information that is transferred with each of the four verbs.

Why does all this matter? Adherents of REST say that by accepting the constraints REST imposes on the use of the HTTP verbs, you can build more robust web services and that those web services will, in fact, work properly with the existing mechanisms of the web, including things generally hidden to end-users like proxies and caches. Furthermore, they say that using non-REST (SOAP, WS-*, or just a general mish-mash of HTTP), systems are more prone to failure over the long term.

Ultimately it boils down to some simple questions of style. Do you prefer to work with a more tightly coupled interaction between client and server that is influenced by the previous generations of client/server frameworks like DCE, CORBA, OLE/COM, etc. (which is really a distributed computing platform added on top of HTTP as a separate layer), or do you prefer to work with a style of interaction that uses HTTP as the actual layer of interaction? REST is the latter.

Re: Uninformed

Author: Sean

Beautiful, Allan. I like the carpentry analogy for REST constraints.

Atompub, KML and Google Earth

With Google Earth, one can browse the web of KML documents and author new KML documents. It's nearly the ideal client for a read-write "GeoWeb". The one missing feature: publishing KML documents and placemarks to the Web. I've written previously about this missing feature and now I'm going to explain exactly how it should be done. Andrew, consider this a little free legwork for your Agile Geography effort.

Google Earth (GE) already allows you to POST KML documents to a Keyhole website. I am simply proposing more of the same using a simple and effective RESTful protocol that has already been implemented on a large scale within Google (see GData): the Atom Publishing Protocol (aka Atompub). Never mind transactional OGC Web Feature Services -- WFS is a sorry compromise between interests who really, in their hearts, want SOAP and more pragmatic interests who want to at least be able to GET features with their web browser -- the future of GIS web services is RESTful.

The Google Earth team already recognizes this, even if not overtly. Consider the new behavior toward description hypermedia links that is proposed for KML 2.2. Links to KML documents found in placemark descriptions will be dereferenced by GE itself. In other words, GE will use hypermedia as the engine of application state, one of the tenets of REST. A RESTful approach to publishing data to the Web would complement the new geo-browsing feature perfectly.

So, how would one use Atompub with KML and Google Earth? Interestingly (maybe coincidence, maybe not), KML 2.2 is already paving the way by adding atom:link. There are three main classes of links in Atompub: to a Collection URI, to a Member URI, and to a Media Resource URI. Add such links to KML elements and Atompub interoperability is in hand. More specifically, every KML Document SHOULD specify an Atompub Collection URI:

  <atom:link rel="self" href="http://gis.example.com/places/"/>

The URI specified in the href attribute is the one to which a client (such as the future KML-publishing version of Google Earth) would POST a new placemark, and thereby insert new data into the collection that is providing the current KML document. Likewise, every Placemark that one may edit SHOULD specify Member URI and Media Resource URIs:

  <atom:link rel="edit" href="http://gis.example.com/places/1"/>
  <atom:link rel="edit-media" href="http://gis.example.com/places/1.kml"/>

The URI in the "edit" link is the one to which metadata updates (attribution, etc) would be PUT by the client. The URI in the "edit-media" link is the one to which the client would PUT an updated KML representation.
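Putting the pieces together, a served KML document would carry those links like this (a sketch; the namespace declarations are the standard KML 2.2 and Atom ones, and the placemark details are invented):

```xml
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:atom="http://www.w3.org/2005/Atom">
  <Document>
    <!-- Collection URI: POST new placemarks here -->
    <atom:link rel="self" href="http://gis.example.com/places/"/>
    <Placemark>
      <name>An editable place</name>
      <!-- Member URI: PUT metadata updates here -->
      <atom:link rel="edit" href="http://gis.example.com/places/1"/>
      <!-- Media Resource URI: PUT updated KML here -->
      <atom:link rel="edit-media" href="http://gis.example.com/places/1.kml"/>
      <Point><coordinates>-105.1,40.6</coordinates></Point>
    </Placemark>
  </Document>
</kml>
```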

That's it. It's mind-bogglingly simple. The catch for implementors, of course, is authentication. Most sites/services won't (for good reason) accept anonymously posted placemarks or allow drive-by editing and deletion of existing data. This means that Google Earth would need to implement many of the security features that we usually take for granted in our web browsers. Still, authentication is no more a problem for this RESTful approach using Atompub than it is for WFS-T.

Atompub + KML == agile, read-write geography.

Update: I just remembered that Andrew Turner almost made the Atompub connection here. His proposal for linking to KML from Atom entries fits perfectly with what I've outlined above.

Amateurs: STFU

I'd love to see Jeff Thurston debate Larry Lessig on the merits of "The Cult of the Amateur".

The gall of people (like me) with no GIS certification -- no authority at all -- writing GIS software! And poor suckers are actually using it! This can only lead to human sacrifice, dogs and cats living together -- mass hysteria.

Update: as if to underscore how unreliable and amateurish we bloggers are, Jeff Thurston seems to have abandoned his GeoVisualization blog and started a new one. No redirect, no link to archives. Gone. His deleted post expressed great admiration for Andrew Keen's thoroughly dismantled and disparaged book, "The Cult of the Amateur".


Re: Amateurs: STFU

Author: Yves Moisan

"Thanks to somebody in the world, ... not any expert at all in formal terms, just an expert, just one of those real experts, that is to say, a human being thinking clearly and carefully about something of importance." Eben Moglen @ Holland Open Software Conference 2006 (http://en.wikisource.org/wiki/Keynote_about_GPL3_at_HOSC_2006) Experts aren't always where we think they are.

Re: Amateurs: STFU

Author: Bill Thorp

GIS is headed for a disaster of biblical proportions. Old Testament, real wrath-of-God type stuff.

Re: Amateurs: STFU

Author: Jason Birch

No, it's just going to fizzle out and become another desktop publishing application... :)

Re: Amateurs: STFU

Author: Fantom Planet

I just hope no one misses me. If so, feel free to paste my pic on a milk carton with the GeoVisualization blog.

Re: Amateurs: STFU

Author: Matt Perry

Only a small percentage of GIS professionals actually have GIS certification. What matters more than formal certification is experience and fundamental knowledge. I've run across far too many people who claim to "do GIS" and don't know what a datum or projection is, don't understand the difference between vector and raster data models, aren't aware of scale issues, cartographic conventions, classification methods, etc... I bring this up not to say that amateur mappers should have to pass some minimal knowledge test in order to create maps .. heck create as many maps as you'd like. I plot my favorite bike rides on google maps with very little regard for the "professional" quality of the data simply because I like to see where I've ridden. Hobby maps are fun! But when it comes to critical applications (eg GIS data that may need to be defended in a court of law, published in a research paper, etc), you'd better believe that I expect a minimal level of competency in the people involved in the software and data used in my analysis. I don't need to see formal credentials, just some reassurance that those involved in the process have the background knowledge that is crucial for creating high-quality geospatial data and apps.

Re: Amateurs: STFU

Author: Sean

Matt, I agree. More here.

Selectively Running Python Tests


What is a good technique to order tests such that the complicated, far reaching stuff is run after the basics?

No comments on Kurt's blog, so I'll have to try to answer here. That sounds like integration (not unit) testing to me and cries out for doctest, which lets you build up the complexity of your test suite as you go and guarantees that tests are run in the listed order. With unittest your best bets are to name tests so they cmp() predictably, or subclass TestLoader and implement your own test ordering (sorting) algorithm. The latter also gives you control over which tests get added to your suite.
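For the unittest route, here's a minimal sketch of the subclassing approach in modern Python (cmp() is gone in Python 3, so the loader sorts method names directly; the numbered naming convention is my own):

```python
import unittest

class OrderedLoader(unittest.TestLoader):
    """Load test methods in plain sorted-name order.

    Numbered prefixes (test_01_, test_02_, ...) then control
    execution order: basics first, far-reaching tests last.
    Overriding getTestCaseNames is also the place to filter
    which tests get added to the suite at all.
    """
    def getTestCaseNames(self, testCaseClass):
        names = super(OrderedLoader, self).getTestCaseNames(testCaseClass)
        return sorted(names)

class PlaceTests(unittest.TestCase):
    def test_01_basics(self):
        self.assertTrue(True)

    def test_02_far_reaching(self):
        self.assertTrue(True)
```

Running `unittest.main(testLoader=OrderedLoader())` would then execute test_01_basics before test_02_far_reaching.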

OGC WTF of the Day

I'm reading over the WFS 1.1.0 spec (OGC 04-094) and see in section 6.3.1:

HTTP supports two request methods: GET and POST. One or both of these methods may be defined for a particular web feature service and offered by a service instance. The use of the Online Resource URL differs in each case.

HTTP supports just two request methods? I could understand an oversight like this in a spec from 2000, when RFC 2616 was still just one year old, but WFS 1.1.0 was approved in May of 2005. Browser forms support GET and POST only, but that's not a limitation of HTTP/1.1, and the XMLHttpRequest implementations in almost all browsers do support the full range of HTTP verbs.

On the other hand: the WMS 1.0 spec (00-028), to the credit of its authors, doesn't make the same mistake and even references RFC 2616. Somewhere after WMS 1.0, the OGC lost the trail to REST.

KML Output for Mush

Genshi is great for generating KML once you discover the trick for dealing with CDATA. Add format=kml to a Mush request to get a KML document instead of the Atom feed default [example].