2007 (old posts, page 8)

Irrelevant

I saw this Linus Torvalds quote (full interview here) in the OpenGeoData blog:

Me, I just don't care about proprietary software. It's not "evil" or "immoral," it just doesn't matter. I think that Open Source can do better, and I'm willing to put my money where my mouth is by working on Open Source, but it's not a crusade -- it's just a superior way of working together and generating code.

It's superior because it's a lot more fun and because it makes cooperation much easier (no silly NDA's or artificial barriers to innovation like in a proprietary setting), and I think Open Source is the right thing to do the same way I believe science is better than alchemy. Like science, Open Source allows people to build on a solid base of previous knowledge, without some silly hiding.

But I don't think you need to think that alchemy is "evil." It's just pointless because you can obviously never do as well in a closed environment as you can with open scientific methods.

Exactly right.

Comments

sticking up for alchemy

Author: Brian Timoney

Somewhere the ghost of Isaac Newton is extremely pissed off at Linus' dissing of alchemy.... BT

Re: Irrelevant

Author: Sean

The ghost of Isaac Newton is too busy experimenting with quantum gravity to read blogs.

Re: Irrelevant

Author: Andy

If it is a superior way to develop software, why is the Linux desktop light years behind Windows and Mac? For something to be truly superior it must be superior to what its competitors are currently offering. Linux isn't superior to Windows from the only perspective that really matters, the end user perspective, and it isn't superior to Mac OS X from an end user perspective either. So how is this a superior way to develop an OS? There are some superior Open Source alternatives out there, such as MapServer, Apache, Firefox, and PostgreSQL, but by and large proprietary software and systems lead in just about every metric: CAD desktop, GIS desktop, desktop documents, inter-application communication and automation, fonts, UIs, general usability, SCADA systems, GPS systems, topo mapping software, routing software... I could go on with a long list of what proprietary software does better than its Open Source counterparts.

I work on Open Source projects, and I support them financially as well, because I believe in the underlying concept that information should be freely shared to improve the society in which we live. I don't support them because I believe Open Source currently creates better software at the same pace as proprietary software companies can. Developers have to eat and support their families. Until Open Source can figure out how to coordinate large projects across hundreds of developers in a timely fashion, and pay them all good wages, Open Source software won't outpace the rate at which proprietary software puts out better solutions. The most successful Open Source projects have one person or a very small team of core developers working in close communication towards a common goal. When projects get larger than that, they fall behind their proprietary competitors or fall apart completely.

In the end, Open Source the way it is currently done is a form of Communism, and history has shown that Communism doesn't work on a large scale no matter how noble its ideals are. Communism works in small groups like tribes and kibbutzim, but it doesn't scale to a nation. History has borne this out many times over. Incentive-based systems such as capitalism work better on a large scale by far than Communism does. Open Source today seems to be the same way: it works very well in small core groups and can produce outstanding results, but when it tries to scale to encompass huge projects it falls apart.

I believe the way to change this is to revamp the way Open Source is run so that it is no longer run in a communist fashion. Companies that develop software could open up their source and take input on that source from the larger community, while still paying their developers and generating revenue as a company from sales of their software and from its maintenance. Then you would have an incentive-based system that still shared its knowledge freely. This is the way AT&T Bell Labs did many of its projects, and it worked very well at the time. PGP also works this way. I think it could work on a large scale, but I may never find out unless there is a radical shift in the way proprietary software companies work. It is sort of a catch-22: the key to making Open Source really work is in the hands of the companies that fear it the most. If we can get them to change their mindset, then the sky would be the limit, and Open Source in its new form would be a truly superior way to develop software.

In its current incarnation I don't believe Open Source is the best way to develop software on a large scale, but we can change this over time, and until we do it is still worth supporting, because knowledge should be free and we should work to make our society better for everyone, not just those who can afford it.

Re: Irrelevant

Author: Sean

There will eventually be excellent open source alternatives in every software category you listed. We're just getting started.

Re: Irrelevant

Author: Paul Ramsey

The Eclipse IDE is currently the best desktop integrated development environment (well, maybe Visual Studio is better, but regardless we are talking about a very close race). How can this be? No one is selling it. But there are lots of traditionally paid developers working for big companies working on it. Lots of different big companies too. The "communist" label is just a big red (ha ha) herring designed to rattle Americans who have not gotten over the propaganda surrounding their previous Official Enemy.

Open source seems to flourish once a significant part of the marketplace decides that a particular piece of functionality is no longer useful as a product differentiator. Server operating systems (Linux), IDE/application frameworks (Eclipse), scripting languages (Python/Perl/PHP/Ruby/etc), web servers (Apache). Desktop operating system interfaces have innovated enough in the last five years (thanks, Apple!) to keep marginally ahead of their open source followers, but if they slow down for too long, they too will feel pain as "good enough" and free alternatives catch up. Oracle is vacating the database market as fast as it can, and moving into areas where it can offer real value, like business intelligence and CRM -- they see the writing on that particular wall.

There will always be a place for proprietary software in the niches, but this is a very long game, and the onus is on the proprietary companies to continuously improve their products to stay ahead of the game -- to deliver real value for money (like Apple does with OS X). The days of locking down a customer base and charging monopoly rents ad infinitum are over.

Re: Irrelevant

Author: Andy

"The days of locking down a customer base and charging monopoly rents ad infinitum are over." This I agree with completely.

Re: Irrelevant

Author: Dave Smith

I wouldn't characterize either as superior. Each has its pros and cons. Certainly the collaborative aspect and low cost of open source make it fun and accessible, and provide a great deal of value and sustainability. However, end users have little control over QA problems and patches, little control over enhancements, and limited means of support; they either have to roll up their sleeves and fix or modify the product themselves, or wait for someone else to deal with it. That is fine if your project has the budget, capability, and resources for scratch-building things, but otherwise, for production end users, it causes some concern and risk. Some (but certainly not all) of that is alleviated with COTS products, but there you are stuck with proprietary code, formats, APIs, high cost, and a host of other issues. In the long run, though, these things tend to follow a cycle of commoditization: as a piece of technology becomes less unique and more ubiquitous, the proprietary pieces become irrelevant, because by then many Open Source pieces have evolved that are stable and low-risk enough to push the proprietary aside. At that point, the proprietary vendor needs to turn to modularization, cutting loose the commoditized pieces to turn its efforts to other pursuits. It's a dynamic and continually emergent process.

More ArcGIS and JSON

Again, found this from Jithen Singh while stalking keywords. Still no details about whether it's for geospatial features or other application data.

There is a fair bit of noise in the Technorati search feeds, but no more so than on Planet Geospatial. Maybe even less.

Update: now there are details from Matt Priour.

Comments

Re: More ArcGIS and JSON

Author: Brian Flood

AGS does not currently support REST and they are only *considering* adding it to 9.3. I for one hope they do, and I made sure to tell as many ESRI folks as I could about it: the more chatter they hear, the more they will consider it. I think Matt's (excellent) post above is essentially rolling his own JSON support, which is definitely possible right now. I would just love it if ESRI baked it in by default. IMO this would open AGS as a platform to many more frontends (openlayers, gmaps, ve, etc) and position it as a callable service instead of just the monolithic system they promote now (e.g. all ESRI, all the time).

cheers
brian

Re: More ArcGIS and JSON

Author: Matt Priour

According to Rex & Art at ESRI, the ArcGIS Server Web ADF should emit & consume JSON at 9.3. This is a separate issue from REST support. Brian is correct that my above referenced post is purely a DIY thing. This has nothing directly to do with ArcGIS Server or its JSON support. I am merely demonstrating how to take your own data, which is not inherently spatial but is geo-referenced in some way, and emit it as a serialized object which ties the geo-reference to the data. From there you have an object that you can use in a number of web-mapping platforms. I plan on demonstrating ArcWeb Explorer, Google Maps, Yahoo Maps, & Virtual Earth.

Re: More ArcGIS and JSON

Author: Brian Flood

hey matt

yea, I can see future conversations now: "Hey, does AGS support REST?" "Sure does, the ADF emits/consumes JSON." And some helpless developer reports that all is well in ESRI land. FUD reigns as an implementation detail of the ADF is mistaken for easy interoperability. AGS 9.3 *might* have REST endpoints; after talking to several developers there, I did not walk away thinking it's a done deal. I do hope they add it, and I certainly hope it's not part of the ADF.

cheers
brian

Re: More ArcGIS and JSON

Author: Sean

I'm skeptical about JSON interoperability. Rolling your own JSON for use in your own application, as Matt and I are doing, is fine. If you need interoperability and extensibility, use XML. But I reserve the right to change my mind on this if anything compelling comes out of the Geo-JSON working group ;) RESTful ArcGIS? Non-SOAP services, sure, but I can't imagine anything more than that.

Geo-Enterprise to Geo-Web

Ron Lake's The Architecture of the GeoWeb mishandles a subtle point of the Web's architecture. Web crawlers don't find and read files; they traverse the web of resources, URL to URL, and parse the documents that represent those resources. The contents of enterprise spatial databases are passed over not because they are not files, but because they have no URLs. They are not of the Web.

Any Rails, Django, or Zope (for example) application has resources that are stored in a database, and these are duly indexed because they have URLs. The geographic places of Pleiades, such as Antiphellos/Habesos, persist in an object database (ZODB) and have URLs. These places are connected to the Web by virtue of their links from other documents, and are spatially referenced by linking to KML or GeoRSS representations. This is a geo-web architecture that truly builds upon that of the Web.
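To make the architecture concrete, here is a minimal sketch; all URLs and field names below are invented for illustration, not actual Pleiades addresses. A place is a web resource with its own URL, linked to alternate spatial representations that a crawler can follow from document to document.

```javascript
// Hypothetical sketch of a place-as-resource (URLs invented for illustration).
// The place has a canonical URL, and links to alternate spatial
// representations (KML, GeoRSS) that tie it into the web of resources.
var place = {
  url: "http://example.com/places/antiphellos",
  title: "Antiphellos/Habesos",
  alternates: [
    { type: "application/vnd.google-earth.kml+xml",
      href: "http://example.com/places/antiphellos.kml" },
    { type: "application/rss+xml", // GeoRSS-enabled feed
      href: "http://example.com/places/antiphellos.rss" }
  ]
};

// A crawler traverses URL to URL: from a document that links to the place,
// to the place itself, to its spatial representations.
function representationUrls(resource) {
  return resource.alternates.map(function (alt) { return alt.href; });
}
```

The point of the sketch is only that every node, whether it lives in an object database or a relational one, is addressable; what makes it part of the geo-web is the URL and the links, not the storage.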

Go read Lake's post. It's a good overview of new web spatial indexes, and the current isolation of enterprise geo-data from the Web.

Trac Changeset Links from Chatzilla

Update: Chatzilla 0.9.78 breaks this script. Looks like I'll have to add some priority levels to the munger rules when I get some spare time.

Update: New version 0.2, here, allows per-channel Trac sites.

Following this tutorial, I just wrote a Chatzilla script that makes links to Trac changesets from IRC text matching /r\d{1,6}/. You enter:

r42

and you see:

<a href="http://.../changeset/42">r42</a>

It's a bit rough (no per-channel control) but useful enough.

Under your "Auto-load scripts" directory, create a sub-directory named "trac-rev". Save the following script as trac-rev/init.js and modify the changeset URL accordingly:

var CHANGESET_URL = "http://YOUR_PROJECT/changeset/%s";

function initPlugin(glob)
{
  plugin.id = "trac-rev";
  plugin.major = 0;
  plugin.minor = 1;
  plugin.version = plugin.major + "." + plugin.minor;
  plugin.description = "link to trac revisions";
  // Munge any "r" followed by 1-6 digits into a changeset link.
  client.munger.addRule(
    "trac-rev-link", /(?:\s|\W|^)(r\d{1,6})/i, mungerTracRev, 1
    );
  display(plugin.id + " v" + plugin.version + " enabled.");
}

function mungerTracRev(matchText, containerTag)
{
  // Extract the revision number and build an anchor to the Trac changeset.
  var number = matchText.match(/(\d+)/)[1];
  var anchor = document.createElementNS("http://www.w3.org/1999/xhtml",
                                        "html:a");
  anchor.setAttribute("href", CHANGESET_URL.replace("%s", number));
  anchor.setAttribute("class", "chatzilla-link");
  anchor.appendChild(document.createTextNode(matchText));
  containerTag.appendChild(anchor);
}

Comments

W*S and REST, Again

I'm experimenting with watching Planet Geospatial less, and stalking more keywords on Technorati. This led me to a blog I hadn't seen before, and a post asserting the RESTful-ness of OGC W*S, a notion that I thought we'd put to bed last year.

Statelessness, HTTP transport, and use of URLs are necessary but insufficient elements of RESTful architecture. Statelessness is only one aspect of "hypermedia as engine of application state". Using HTTP as a simple pipe, ignoring request and response headers, is not RESTful. Finally, the important thing about URLs is that they can be dereferenced to obtain a resource. This WFS query:

http://example.com/cgi-bin/wfs?
  typename=PointsOfInterest&
  maxfeatures=50&
  SERVICE=WFS&
  VERSION=1.0.0&
  REQUEST=GetFeature&
  SRS=EPSG%3A4326&
  BBOX=3.048788%2C36.755769%2C3.071012%2C36.773231

is not a resource URL. It's a remote procedure call.
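For contrast, here is a sketch of what resource-oriented URLs for the same data might look like. The path scheme is hypothetical, my own invention rather than any OGC standard: each feature gets its own dereferenceable URL, and a bounding-box subset of a collection is itself an addressable resource.

```javascript
// Hypothetical resource-oriented URLs (path scheme invented for illustration).
// Each feature is a resource; a bbox subset of a collection is one too.
function featureUrl(base, collection, id) {
  return base + "/" + collection + "/" + encodeURIComponent(id);
}

function bboxUrl(base, collection, bbox) {
  // The subset of the collection within a bounding box, itself addressable.
  return base + "/" + collection + "?bbox=" + bbox.join(",");
}

var base = "http://example.com/features";
featureUrl(base, "PointsOfInterest", 42);
// e.g. "http://example.com/features/PointsOfInterest/42"
bboxUrl(base, "PointsOfInterest", [3.048788, 36.755769, 3.071012, 36.773231]);
// e.g. "http://example.com/features/PointsOfInterest?bbox=3.048788,36.755769,3.071012,36.773231"
```

The difference is that these URLs name things, not procedures: a GET on them returns a representation of a feature or a set of features, and those representations can link onward to other resources.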

Comments

Re: W*S and REST, Again

Author: Allan

OK, I'll bite. To what extent would a collection of GML objects returned by a query be a resource, no matter how you request it? Or, put differently, how would you do the same thing in REST? I guess in database terms you could define a view as a query, and then the view would be the resource. In that case it would seem to me that http://example.com/cgi-bin/wfs?view=[query expression] would point you at a resource. Then if you substitute the actual WFS request for [query expression] you have REST. So REST depends on the plausibility of calling the thing you get back a resource. I think early on Ron Lake was pushing XQuery, XPath, and XLink as the means of doing what a WFS does. Would that be more RESTful? I'm neither a full-bore student of REST nor a database person, so the distinctions may be too subtle for me.

Re: W*S and REST, Again

Author: Paul Ramsey

It is not clear to me that the W*S baseline as currently defined can be mapped neatly into REST. The example that has the most resonance for me is WMS versus Google Maps. Both provide a way of getting maps on your screen. By pivoting the problem slightly, Google Maps is able to provide a completely RESTful solution to the problem (every tile is definitively a resource). Now, I could flip WFS into REST easily enough by defining each feature as a resource, but I am not sure how scalable that approach would be. At some point you need to query. And frankly, that aspect of RESTful handwaving still eludes me: how is a REST query operation any less of a schmozzle than any other web services query? And once your data domain gets large enough that the only practical way to work with it is via query mechanisms, is it still possible to see the RESTful aspects of the design behind all the querying? At that point does REST not degenerate into the much simpler "I push XML over HTTP, and try to respect the HTTP operations and headers while doing so"?

Re: W*S and REST, Again

Author: Sean

You misunderstand me. Queries are essential to a RESTful GIS. I'm only reminding readers that WxS is RPC, not REST.

Re: W*S and REST, Again

Author: John Caron

We are trying to define a RESTful alternative to W*S. I would really appreciate understanding why "WxS is RPC, not REST" and how to do the same thing RESTfully. My best guess would be something like http://example.com/cgi-bin/wfs/PointsOfInterest/EPSG%3A4326?BBOX=3.048788%2C36.755769%2C3.071012%2C36.773231&maxfeatures=50 but the role of query parameters vs. URI path is rather confusing.

Mr. Plow, Meet the Plow King

How annoying. At least ArcSDE on PostgreSQL will be interoperable with PostGIS. If OSGeo is right, that's the best deal we can get.

Simpsons reference: 9F07

Comments

Re: Mr. Plow, Meet the Plow King

Author: Andy

I don't think it is a big deal that they will be providing their own types. We will just have to add that to OGR and then we can convert back and forth between them and whatever else we need.