REST vs SOAP at ESRI DevSummit

Yesterday, I saw a lot of links to David Chappell's ESRI Developers Summit keynote. I tried and failed to stick with the video, but I did read through the slides. They're largely good, but there are some errors. Before his presentation becomes canon in the ESRI user community, I'd like folks to consider:

The watering down of REST constraints on slide 14 into optional principles that you apply, or not, "whenever possible", and the disregard for the hypertext constraint. In fact, there's no mention of the hypertext constraint at all in the slides. Does Chappell not understand it? Not believe in it? It's impossible to say from the slides; maybe he went into it more, live. At any rate, substitute "when you want certain derived properties" for "whenever possible" and you get closer to the essence of REST. If you want the property of cacheability, you add the uniform interface and stateless communication constraints. If you want the property of loose coupling, you add the hypertext constraint.

On slide 18, Chappell fails to mention an option that's familiar to all of us even if we're not entirely aware of it: code on demand. The client library provided by the service can be downloaded by clients at the time the service is accessed. On demand. You're probably already using OpenLayers (or something like it) in a way that's close to an on-demand library for negotiating with a service. Chappell's second option on that slide, while useful enough, has very little to do with REST.
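Concretely, code on demand can be as simple as a script element in a service's HTML representation pointing back at the service's own client library (this URI is hypothetical), downloaded at the moment the page is dereferenced:

<script type="text/javascript"
  src="http://example.com/gis/lib/client.js"></script>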

On slide 20, Chappell confuses REST with the half-baked "REST APIs" that don't use service descriptions or any sort of hypertext representations. If you stick to standardized representation formats, there's no more dependence on written documentation than in the SOAP case. Developers of SOAP tools had to read a pile of specs. Developers of RESTful HTTP tools will have to read a pile of specs too.

Finally, there are a lot of slides spent trying to explain how to choose between SOAP and REST. Ultimately, it comes down to this: if you want your system to have the properties that derive from REST constraints, you choose the REST style. If you want to integrate in the Web, you choose REST. If you want a system that tries to abstract away the network, letting your developers program with remote services as though they were local objects, you choose SOAP. Pick your architectural style. I don't think it has to be any more complicated than that.

Update (2009-04-09): Pete (below) has shamed me into watching the video all the way through. I'm glad I did. I knew this stuff already, but it's a very good presentation for the ESRI user/developer. Unfortunately, despite Pete's claims, my quibbles with Chappell's slides stand. He didn't cover the hypertext constraint (though he gave an excellent overview of the wins from uniform interface and universal identifiers). An adequate treatment would have consumed too much time, granted, but some mention would have been nice. The hypertext constraint has indeed been a matter of debate, but the wins are becoming clear. Without links in content, well-defined formats, and code on demand, developers are indeed dependent on service provider libraries, but this state of affairs isn't a fault of the REST style. Still, the obvious conclusion holds: use SOAP in your enterprise behind the firewall, and use the REST style outside on the Web. Good answers to questions, too, including comments about AtomPub and the issue of snowflake APIs.

Comments

Re: REST vs SOAP at ESRI DevSummit

Author: Pete

I like how you are trolling a presentation you didn't even bother to watch.

As you say yourself: "It's impossible to say from the slides".

If you had watched the presentation, you would know that he deals with most of your arguments.

Re: REST vs SOAP at ESRI DevSummit

Author: Sean

I found it hard to watch. Too much emoting. Not that I'm better (surely not), but it was not to my taste. How did he deal with these arguments? Or at what times? Maybe I could fast-forward to them.

Keytree 0.2.1

Keytree provides some utilities for manipulating KML using the ElementTree API. I've added factories for KML placemark and geometry elements, input being geojson or Shapely objects:

>>> from geojson import Feature
>>> f = Feature('1',
...             geometry={
...                 'type': 'Point',
...                 'coordinates': (-122.364383, 37.824663999999999)
...                 },
...             title='Feature 1',
...             summary='The first feature',
...             content='Blah, blah, blah.'
...             )
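The geojson Feature class is just a convenience here. A plain duck-typed object would work as well; a minimal sketch, with invented values:

>>> class Pin(object):
...     # Provides the Python geo-feature interface.
...     __geo_interface__ = {
...         'type': 'Feature',
...         'id': '2',
...         'geometry': {
...             'type': 'Point',
...             'coordinates': (-122.4, 37.8)},
...         'properties': {
...             'title': 'Feature 2',
...             'summary': 'Another feature',
...             'content': 'More blah.'}}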

Any object that provides the Python geo-feature interface will do. Next, you need a KML document as context for a new placemark:

>>> data = """
... <kml xmlns="http://www.opengis.net/kml/2.2">
...   <Document>
...   </Document>
... </kml>
... """
>>> from xml.etree import ElementTree
>>> kml = ElementTree.fromstring(data)

Make a placemark element using the element factory:

>>> import keytree
>>> elem = keytree.element(kml, f)
>>> import pprint
>>> pprint.pprint((elem.tag, elem.text, list(elem)))
('{http://www.opengis.net/kml/2.2}Placemark',
 None,
 [<Element {http://www.opengis.net/kml/2.2}name at ...>,
  <Element {http://www.opengis.net/kml/2.2}Snippet at ...>,
  <Element {http://www.opengis.net/kml/2.2}description at ...>,
  <Element {http://www.opengis.net/kml/2.2}Point at ...>])
>>> pprint.pprint(list((e.tag, e.text, list(e)) for e in elem))
[('{http://www.opengis.net/kml/2.2}name', 'Feature 1', []),
 ('{http://www.opengis.net/kml/2.2}Snippet', 'The first feature', []),
 ('{http://www.opengis.net/kml/2.2}description', 'Blah, blah, blah.', []),
 ('{http://www.opengis.net/kml/2.2}Point',
  None,
  [<Element {http://www.opengis.net/kml/2.2}coordinates at ...>])]

This element could be appended to the Document element, or you could use the subelement factory:

>>> elem = keytree.subelement(kml[0], f)

More at http://pypi.python.org/pypi/keytree/.

Now, I'm trying to decide if something similar would be useful for Atom with GeoRSS.

Sensible observation services, part 2

Another good question from the OGC REST discussion:

In essence, within REST, how do I ask for a collection of measurements between any time1 and time2? This is simple with SOA, but seems to require predefined granularization by the service in ROA or perhaps uses an adaptation of SOA-like specifications as Pat has suggested in the past.

SOA didn't say how to spatially or temporally slice data. SOA said "have services". Services with well-defined interfaces. It's up to communities to define those interfaces. It's the same for RESTful architectures. REST just says use universal identifiers for entities, use uniform methods, and use hypertext as the engine of application state. The entities and the hypertext formats are up to the community.

How would you ask a sensible observation service for measurements between two times? This device is blogging away, adding observations to the head of a feed every N microseconds. If you're interested in historical data – and everything becomes historical quickly if it's sampling very frequently – the service should allow you to slice temporally, returning feeds that begin and end at or near specified times. The Atom format spec doesn't specify how to do this. It was developed in the tradition of layering specifications and recommendations. Search is to be layered on top of Atom like feed paging (RFC 5005) is layered on top of Atom. One possible way to do this is with XHTML forms.

Consider a sensible observation service identified by the URI http://example.com/sosa. Dereferencing that gets you an Atom feed of N entries, ideally containing links to the next "page" and a link to a search form:

<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="search"
    type="application/xhtml+xml"
    href="http://example.com/sosa/search"
    />
  ...
</feed>

The "search" relation isn't standardized yet, but something like it will be. The exact URI of the search form isn't important at all, only that it can be found via a link. Its representation would be something like this:

<html xmlns="http://www.w3.org/1999/xhtml"
  xmlns:sos-search="http://example.com/namespaces/sos-search"
  >
  <body>
    <form id="sos-search:form"
      method="GET"
      action="http://example.com/sosa/search"
      >
      <input id="sos-search:start" type="text" name="start"/>
      <label for="sos-search:start">Minimum bounding timestamp, RFC 3339</label>
      <input id="sos-search:stop" type="text" name="stop"/>
      <label for="sos-search:stop">Maximum bounding timestamp, RFC 3339</label>
      <input type="submit" name="search"/>
    </form>
  </body>
</html>

The "start" and "stop" inputs are borrowed from Python's sequence slicing syntax. A community that standardized on form element ids could have a self-describing search interface, one that was also helpful to human users (at least) outside that community. Any "sos-search" aware client could recognize from this form, by parsing elements with "sos-search" ids, that URIs like http://example.com/sosa/search?start=2003-12-13T18:30:02Z&stop=2003-12-13T19:30:02Z identify Atom feeds of temporally sliced data.

I'm not sure XHTML forms are the best solution, but it's illustrative of one way to do RESTful search interfaces.

REST in reality

How to RESTfully change the state of power lines and poles? From a thread on the OGC's mass-market-geo list (archives available only to subscribers, for shame):

Actually, I am really talking about poles and wires. Specifically power poles and lines. My example is directly derived from real life. A local electric system has tons of old wood poles that it wants to replace with concrete poles. It is doing this by installing the concrete pole 1m behind the existing pole and then restringing the power lines.

So, the client has a feature type POLES (Oracle table) with all the poles and a feature type LINES (Oracle table) with all the power lines. Not the actual names, but you get the idea.

For each pole that is replaced, they update the existing pole record since the new pole will have the same pole id as the pole it replaces, but the new pole will be in a slightly different position.

Using the current WFS specification, I can make these changes in a single request. I simply POST the following to the URL of the server:

<wfs:Transaction>...

The question of how to do this in a single transaction is a big juicy red herring that has list subscribers rather distracted. The process of disconnecting and taking down wires takes some time, and presumably has to be done before you can take down the pole they connect to. Certainly the new pole can't be erected before the old one is torn down, and only after that happens can new lines be brought in to connect to the new pole. This is not an atomic operation. Bad weather, equipment failure, injury, or any mishap might interrupt the process and leave it in an unfinished state for some time. Treating it as atomic in your information system seems more hopeful than realistic, and perhaps even harmful. Once the lines and poles come down, you can't be giving anyone the false impression that they're still up.

Let's say you've modeled your power system RESTfully. You've got pole resources like http://example.com/power/poles/1. You've got line resources like http://example.com/power/lines/1 and http://example.com/power/lines/2. These resources aren't poles and lines by virtue of the strings in the URI, of course; that's just a convenient design. Lines 1 and 2 connect at pole 1. A utility crew keeps your information system up to date as they work, like this:
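(A sketch with invented JSON representations, using Python's requests library; the exact representations are up to the application. The point is that each request is an independent, visible transfer of state, not a step in a transaction.)

import requests

BASE = 'http://example.com/power'

# The crew de-energizes and takes down the lines first.
requests.put(BASE + '/lines/1', json={'status': 'down'})
requests.put(BASE + '/lines/2', json={'status': 'down'})

# The old wood pole comes down and the concrete pole goes up; pole 1
# keeps its id but gets a new position and material.
requests.put(BASE + '/poles/1',
             json={'material': 'concrete',
                   'position': [-105.0841, 40.5853]})

# New lines are strung to the new pole.
requests.put(BASE + '/lines/1', json={'status': 'up'})
requests.put(BASE + '/lines/2', json={'status': 'up'})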

This isn't finance. Sometimes non-transactional is more honest.

Comments

Re: REST in reality

Author: Gary Sherman

In reality the new poles will all be installed (1 m behind existing), then the lines will be moved all at once.

Re: REST in reality

Author: Sean

I was imagining the lines would be upgraded too. If not, even easier: PUT new state (new location, new material attributes, etc.) to http://example.com/power/poles/1 when the lines are hung on the new pole. Don't change the state of the line at all.

Re: REST in reality

Author: Jason Birch

Keeping the same id feels a bit contrived. Normally, you'd want to retain the old ID in a "removed" state for asset management reporting on upgrade history, service lifetime, etc., etc. So, in this case, I think you'd PUT the new state (retired) to the old pole, and POST a new pole.

Typically a line is connected to many poles, and a pole can have many lines strung on it. Assuming this relationship was modeled by GETing lists of hyperlinks from the following URIs (don't know if this is "correct"):

GET http://example.com/power/poles/1/lines

GET http://example.com/power/lines/1/poles

How would you move the lines from one pole to another?

DELETE http://example.com/power/poles/1/lines/1

POST http://example.com/power/poles/2/lines/1

That seems a bit odd to me, because you wouldn't

GET http://example.com/power/poles/2/lines/1

Separate "service"?

DELETE http://example.com/power/pole/1/lines {content: 1,2,5,61}

POST http://example.com/power/pole/2/lines {content: 1,2,5,61}

Re: REST in reality

Author: Cedric Moullet

Jason's use case is quite interesting. I was thinking of another one: usually, when a line modification occurs, a mailing is done in order to inform the customers of the outage (typical question: provide me the list of all customers affected by the deletion of this line).

What would be the correct URI for this kind of question?

Re: REST in reality

Author: Sean

I'm not sure what you mean by a URI for the question. If you designed a system such that the customers for each line (or segment of the power grid? I'm getting out of my depth here) were enumerated in a resource like http://example.com/power/lines/1/customers, you might have a resource like http://example.com/power/lines/1/customers/notice to which you could POST a message (delivered by email and snail mail).

OpenLayers constrained by hypertext

I go on at times about REST and its hypertext constraint. I'm sure some people wonder if I ever actually apply these principles or just like to argue about them. Much of my work involves Plone, which is a poor environment for doing things in a RESTful manner, but I try to find good patterns where I can. One of them involves OpenLayers, puts the REST hypertext constraint in a familiar context, and isn't something that I've seen in the OpenLayers examples or gallery.

The site I'm working on has resources about some places of the ancient world, such as the place known as Aphrodisias. The URI http://pleiades.stoa.org/places/638753 leads to an HTML representation. There are also Atom, JSON, and KML representations, and these are linked from the HTML page as shown below:

<link rel="alternate"
    href="http://pleiades.stoa.org/places/638753#this" />
<link rel="alternate" type="application/atom+xml"
    href="http://pleiades.stoa.org/places/638753/atom" />
<link rel="alternate" type="application/json"
    href="http://pleiades.stoa.org/places/638753/@@json?sm=1" />
<link rel="alternate"
    type="application/vnd.google-earth.kml+xml"
    href="http://pleiades.stoa.org/places/638753/kml" />

The HTML page has an OpenLayers map of ancient world features contained in this place. You'll see one, the feature digitized from the Barrington Atlas map sheet, unless you're an authenticated user that can view Tom Elliot's unpublished features. Previous incarnations of these map pages called on dynamically generated scripts that contained the data, but I'm preferring now to get data from the alternate application/json representation of the place resource. It's trivial, more loosely coupled, and more scalable (at least in my Plone setup) to separate code and data, parsing the JSON link href out of the document and using it in an OpenLayers "GML" layer:

function getJSON() {
    var documentNode = document;
    var linkNode = documentNode.evaluate(
                    '//link[@rel="alternate" and @type="application/json"]',
                    documentNode,
                    null,
                    XPathResult.FIRST_ORDERED_NODE_TYPE,
                    null
                    ).singleNodeValue;
    var jsonURI = linkNode.getAttribute("href");
    return jsonURI;
}

function initMap() {
    var jsonURI = getJSON();
    // The layer gets its data from the URI discovered in the page's
    // own links, not from a hard-coded address.
    vectors = new OpenLayers.Layer.GML(
                "Place",
                jsonURI,
                {format: OpenLayers.Format.GeoJSON}
                );
    map.addLayer(vectors);  // map is created elsewhere on the page
}

All data the OpenLayers code needs to render a map of the place is now discoverable through HTML links without having to go out of band.

Comments

Re: OpenLayers constrained by hypertext

Author: Vish

Hi Sean,

So, does that mean that you are pulling down the same data from the server to the browser twice? First in HTML and then JSON?

Thank You,

Vish

Re: OpenLayers constrained by hypertext

Author: Sean

The same data twice? No. There's a small overlap between the text/html and application/json representations, yes. A title string, an id, a URL. 100 bytes or so. There's no geometry or coordinates in the HTML page, and so the fractional overlap diminishes as the number of line and polygon features increases.

Re: OpenLayers constrained by hypertext

Author: Vish

Hi Sean,

I was just curious as to what your HTML representation looked like. Also, not to pick on things here, I am just thinking out loud...

1) Does your HTML representation not contain any attribute information about the feature, or are you omitting it along with the geometry?

2) What are your thoughts on omitting the geometry for the feature in its HTML representation? Should it be in there as SVG/VML? Is it alright to have only subsets of the feature info in the different formats? Or should they all have the same info?

3) If your HTML only contains the 100 bytes of info you mention... then why have an HTML representation at all? REST doesn't say anything about HTTP/HTML. Is it only for the web crawlers?

Thank You,

Vish

Re: OpenLayers constrained by hypertext

Author: Sean

Vish, I meant 100 bytes of overlap between the HTML and JSON representations. How much the representations overlap is totally up to the application (and me). Currently, the JSON representation is primarily in the service of the HTML pages. If users wanted richer JSON data, we'd consider providing it.

Spring is officially here

The warm weather and blossoms have been hinting at it, but now it's official.

http://farm7.staticflickr.com/6094/7016282337_05a6ed4bdd_z_d.jpg

This turkey vulture arrived at 920 West Mountain Avenue between 2 and 5 PM today. The first one also came on the 25th of March in 2008 and 2009. Before that I wasn't keeping close track, but I'm sure it was within a couple days of the 25th. I was in France in 2010 and then in London last year and missed the big event. Is the vernal equinox the signal for them to head north from at least 4 days away?

Good because it's good

Here's a quote from Tim Bray that's relevant to the WMTS discussion:

REST isn’t good because it’s REST, it’s good because it’s good.

I get the impression that the OGC is keen to have a REST standard only because it fears missing out on the booming interest in REST and wants to keep in control of geospatial protocols. It perceives REST as a ripe emerging market rather than a distinct architectural style. This is reflected in the WMTS "KVP, SOAP, or REST" offerings, which seem like "Bud Light, Bud Dry, or Bud Ice" to me. REST isn't good because it's trendy, it's good because it's good. Loose coupling. Scalability. Evolvability. Serendipitous reuse. A real alternative to RPC.

Commenting on OGC WMTS

I finally caught wind of the OGC's request for comments on the Candidate Web Map Tiling Service Standard via Geoff Zeiss with 11 days remaining to comment. Not for lack of OGC press releases, it's just that I don't follow that particular medium very closely. I have sent in a "public comment" advising the authors on how to better follow the REST style. To be honest, I'd rather the OGC stayed away from REST, but if it won't, I'll insist it's done properly and doesn't misinform mainstream GIS developers. I'll even try to help as much as the OGC's closed process will allow. Supposedly, the OGC's comment list is open to the public. I've registered, but it's coming up on 24 hours and I have received neither confirmation of receipt of my comment nor have I been subscribed to the list. Frankly, the public comment process isn't very smooth. Does the OGC actually expect comments? Will there be any others? I'm curious to find out. Meanwhile, I've started a discussion on geo-web-rest by posting my comment verbatim, and will write a bit more about the candidate standard here.

Update (2009-03-21): I'm subscribed to the list, finally, thanks to intervention by the manager. I'm not sure whether the normal subscription process is working or not.

Comments

Re: Commenting on OGC WMTS

Author: Allan Doyle

My comments on the geo-web-rest list are a bit of a shot in the dark. I'm not clicking "Accept" on the OGC click-through license. I've tried many times to get the OGC to stop using them, but it's a bit like tilting at windmills.

Implications of WMTS for S3 tiles

Check out this interesting post by Randy George concerning S3 map tiles for DeepEarth:

The project also includes an example showing how to set up a local tile set. The example uses 256×256 tiles but not in the OSGeo TMS directory structure. Here is an example using this DeepEarth local example TileProvider DeepEarth BlueMarble. You can see the tiles spin in from a local directory store on the server. The resolution is not all that great, but a full resolution BlueMarble isn’t that hard to get from BitTorrent. The alternative selection “Blue Marble Web” tiles are full 500m resolution hosted on Amazon S3 courtesy of ModestMaps.org. The Amazon S3 bucket is a flat structure, in other words the buckets don’t have an internal directory tree, which is why the tiles are not stored in a TMS directory tree.

The DeepEarth local TileProvider was easily adapted to suit the TMS directory so I could then directly pull in my El Paso tiles and show them with a DeepEarth interface. However, if I wished to take advantage of the high availability, low latency S3 storage, I would need to flatten the tile tree. In S3, subdirectory trees are hidden inside buckets as file names. In the S3 case the tile names include the zoom level as a prefix: http://s3.amazonaws.com/com.modestmaps.bluemarble/5-r8-c24.jpg. The 5 in the 5-r8-c24 nomenclature is the zoom level, while row is 8 and column 24 at that level. TMS would encode this in a tree like this: ../bluemarble/5/24/8.jpg. The zoom level = subdirectory, column = subdirectory name, and row = name of tile. The beauty of the TileProvider class in DeepEarth is that minor modifications can adapt a new TileProvider class to either of these encoding approaches.

Performance and reliability are a lot nicer on an Amazon S3 delivery, especially with heavy use. Once in S3, a map could also be promoted to a CloudFront edge cache without much difficulty. I imagine that would only make sense for the upper heavily used zoom levels, say 1-5 for BlueMarble. Once down in the level 6 part of the pyramid, the number of tiles starts escalating dramatically and repetitive hit rates are less likely.
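The mapping between the flat S3 names and a TMS tree is mechanical, as a quick sketch shows (assuming the zoom/column/row layout of the TMS path quoted above):

>>> def tms_path(s3_name):
...     # '5-r8-c24.jpg' -> '5/24/8.jpg' (zoom/column/row)
...     stem, ext = s3_name.rsplit('.', 1)
...     zoom, row, col = stem.split('-')
...     return '%s/%s/%s.%s' % (zoom, col[1:], row[1:], ext)
>>> tms_path('5-r8-c24.jpg')
'5/24/8.jpg'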

The OGC's proposed Web Map Tiling Service standard's strict tile resource URL specification (in effect, just another form of RPC) obstructs you from using S3 tiles directly. That's a problem.

Update (2009-03-20): It's pointed out in comments that I'm wrong about the obstruction, and that you can in fact simulate the proposed WMTS tile resource hierarchy in S3. I'm not wrong about the RPC-ness of the candidate spec, something that can yet be fixed.

Comments

Re: Implications of WMTS for S3 tiles

Author: Richard Marsden

I've also thought of using S3/Cloudfront for map tiles, and it has been asked about on the forums of my site.

It would be a relatively simple and cost effective way of creating a tile server that didn't involve the hassles of building & installing something like MapServer.

The proposed URL standard could be a problem, but would this simply cause a pragmatic variation of the standard to develop on its own?

Re: Implications of WMTS for S3 tiles

Author: Michal Migurski

I'm surprised they're not providing for arbitrary positions of the x/y/z bits. The S3 URL given as an example above looks the way it does because I didn't want to build a directory tree in the filesystem prior to uploading - there's no reason it can't look like the proposed standard if I understand it correctly, but the "(z)-r(y)-c(x).jpg" has been convenient: http://oakland-1877.s3.amazonaws.com/14-r6331-c2627.jpg, http://oakland-1967.s3.amazonaws.com/14-r6330-c2628.jpg, etc. It's worth remembering that the "/" is a perfectly valid character in S3 object IDs.

We've also used a more VEarth-like quadtree style: http://road.tiles.map.london2012.com/a/03/13/13/13/11/13/13/3.jpg

Re: Implications of WMTS for S3 tiles

Author: Sean

I didn't know about the slashes. That's interesting, but possibly misleading because buckets can't contain other buckets, and objects can't contain other objects. The object foo/bar/x isn't really contained in foo/bar.

The suggestion I've made to the WMTS authors would make it easier to link to your existing "(z)-r(y)-c(x).jpg" tiles.

Re: Implications of WMTS for S3 tiles

Author: Michal Migurski

Thanks for making that suggestion! As far as S3 is concerned, the semantics of the slashes is actually quite fluid. The fact that they give the appearance of containment is completely separate from the implementation behind the scenes. S3 provides ways to browse buckets with a client-provided delimiter, so you can pretend there's a full directory structure there. The more I work with S3, the more impressed I am with their design decisions.

Re: Implications of WMTS for S3 tiles

Author: Joan Maso

The use of a flat structure like

.../5-r8-c24.jpg

in a prerendered, very large and detailed layer is not a good idea. You could end up with thousands of files in the same directory. Common operating systems do not deal well with so many files in a single directory. Splitting scales and rows into different directories helps in these situations.

Re: Implications of WMTS for S3 tiles

Author: Michal Migurski

Joan: on S3, objects aren't files, necessarily - I don't actually know what they are, it doesn't seem to be a relevant detail. They aren't files in TileCache's ".../5/24/8.jpg" scheme, either. URL structure doesn't need to bear any resemblance to underlying implementation. In particular, I'm loving Sean's term "hierarchical URL fetishism" in the linked discussion: http://groups.google.com/group/geo-web-rest/browse_thread/thread/fa0633b1dd009c02?hl=en

Re: Implications of WMTS for S3 tiles

Author: Sean

Like Michal says: how tiles are stored is an implementation detail. It's completely orthogonal to REST concerns.

Sensible observation services

I cry like the man in the classic anti-littering PSA whenever I read about implementations of the OGC's sensor "web" specification, because environmental observation and real web architecture fit like hand and glove. What is an ASOS station or a data buoy if not a very dedicated and precise non-human blogger? Instead of routinely cranking out blurbs about Google Earth imagery updates, ArcGIS service patches, or RESTful architecture, it cranks out regular sets of scalar values: air or water temperature, dew point or salinity, wind speed or wave height.

ObsKML, too, recognizes the fitness of syndication for sensor observations, but KML doesn't really handle this kind of payload very well. Neither does RSS 2.0. Atom does have a general and robust content model that can deal with sensor observations as well as it does imagery or HTML: the observations go in an entry's content element, and all other entry elements serve as metadata.

...
 <entry>
   <id>urn:uuid:a261b59f-3692-4a00-a3bb-320a7448e0d8</id>
   <title>ASOS 89000 Obs</title>
   <updated>2009-03-14T16:00:00Z</updated>
   <georss:where>
     <gml:Point><gml:pos>-90.0 0.0</gml:pos></gml:Point>
   </georss:where>

   <!-- Human-readable summary -->
   <summary>Temp:-35.0, WindSpeed:5.0/10.0, WindDir:15.0, ...</summary>

   <content type="text">
     <!-- ASOS data here -->
     0187725020200111250051+40690-074170FM-15+0009KEWR...
   </content>
 </entry>

OpenSearch gives you a way to specify how a client slices a logical feed into bite-sized chunks. Goodbye, GetRecords(), GetCapabilities(), and GetObservations(). Hello, observations.
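Something like this hypothetical OpenSearch description document could advertise the slicing parameters, here borrowing the draft OpenSearch time extension's start and end parameters (the service URI is invented):

<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
  xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
  <ShortName>Observations</ShortName>
  <Description>Temporal slices of the observation feed</Description>
  <Url type="application/atom+xml"
    template="http://example.com/obs/search?start={time:start?}&amp;stop={time:end?}"/>
</OpenSearchDescription>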

Comments

Re: Sensible observation services

Author: Jeremy Cothran

Thanks for the mention of ObsKML. It might be better in broader practice/acceptance to separate the idea of a simple Obs xml schema from the particular location/time xml wrapper, whether it be KML, GeoRSS, etc.

The human/machine bias/divide that I continue to see is that any possible syndication format/wrapper/xml-based host (KML, GeoRSS, Atom, etc.) has at its core a human audience focus, with free-format text/html as the data payload (content tag in Atom), not providing support for simple/minimal xml schemas and leaving the possibility of simple machine-to-machine syndication and automated parsing unaddressed.

As I have not seen any evidence of public interest in simple xml content schemas (the public is a consumer of display/styled data, and a producer only of human-readable data in free-form text/html), and no simple xml schema interest/support from technically capable search engine companies, instrument manufacturers, etc., the OGC specs/standards for better or worse seem to be the only game in town.

Thanks

Jeremy

Re: Sensible observation services

Author: Sean

No, Atom supports all kinds of payload, even text/xml, not only HTML.

Content schema isn't really the point here. I'm talking about how you get content/data to clients and how you synch it. Atom is simply better for this than SOS.

Re: Sensible observation services

Author: Jeremy Cothran

Yes, I was incorrect in stating that the Atom content tag doesn't support xml schema; I think I looked at an old spec page, and I see obvious examples as listed at http://data.octo.dc.gov. I was probably also a little confused by your example, where the content is type "text" and some domain-specific text-only format.

I understand and agree with your point that Atom is the bee's knees for XML syndication (with an eye on browser performance and JavaScript/AJAX/JSON-related influence/interaction in the future).

Just adding, pointwise, that without any popular simpler content standards that might be understandable/implementable by non-OGC experts (the wider public, similar to the popularization of GML as KML), the content standard and its related syndication are left to the OGC's field and influence alone.

Jeremy

Re: Sensible observation services

Author: Sean

And I think that the OGC tends to get the expert formats right. It's the protocols for interacting with the services that suck for the web. Atom with SensorML or GML payloads could be a good solution.