There will be a semantic web session at the 37th annual international conference on Computer Applications and Quantitative Methods in Archaeology, 22-26 March 2009 in Williamsburg, VA [CAA 2009]:
The Semantic Web: 2nd Generation Applications
Chairs: Leif Isaksen, University of Southampton, United Kingdom, and Tom Elliott, Institute for the Study of the Ancient World, New York University, USA
Semantic Web technologies are increasingly touted as a potential solution to the data integration and silo problems that are ever more prevalent in digital archaeology. On the other hand, there is still much work to be done establishing best practices and useful tools. Now that a number of projects have been undertaken by interdisciplinary partnerships with Computer Science departments, it is time to start drawing together the lessons learned from them in order to begin creating second-generation applications. These are likely to move away from (or at least complement) the monolithic and large-scale 'semanticization' projects more appropriate to the museums community. In their place we will need lightweight and adaptable methodologies better suited to the time- and cash-poor realities of contemporary archaeology.
See Leif's post to the Antiquist group for details.
One of my current projects, named Concordia, aims to bootstrap a graph of open, linked data for ancient world studies. Our decision to defer use of properties of the CIDOC Conceptual Reference Model (CRM) is explained in this memo.
In a nutshell: there's no existing web of CRM-linked data, and implementing the standard gives Concordia no near-term wins. Furthermore, mismatches between the CRM and currently published data mandate a level of effort and expense that cannot be borne at this time. Because the Web is an "open world", CRM details can be added in future, as needed.
Update (2008-12-13): I've received some good feedback concerning non-technical issues that keep museum data shut in and will try to write more about that next week.
The Shapely tests depend quite heavily on Numpy and are somewhat broken in the new Python 3 port, but the package itself seems to work:
sean@lenny:~/code/shapely-3k$ python3.0
Python 3.0 (r30:67503, Dec  4 2008, 12:17:44)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
It's strings vs bytes now in Python 3.0, which is better, but demands the change in Shapely idiom shown above.
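The distinction the new idiom turns on can be sketched with the standard library alone; the WKB bytes below are just an illustrative POINT (0 0), and Shapely's own API isn't needed for the demonstration:

```python
# Python 3 separates text (str) from binary data (bytes); functions that
# consume WKB now want the latter.
# Hex-encoded little-endian WKB for POINT (0 0), spelled out for illustration:
wkb_hex = "0101000000" + "00" * 16   # byte order + geometry type + two zero doubles
wkb = bytes.fromhex(wkb_hex)

assert isinstance(wkb_hex, str)   # text representation
assert isinstance(wkb, bytes)     # binary representation
assert len(wkb) == 21             # 1 + 4 + 8 + 8 bytes
```

In Python 2 both values would have been plain `str`; in Python 3 mixing them raises a `TypeError`, which is what forces the idiom change.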
OpenLayers is large by default, but can be slimmed down considerably: prune away the vector and raster formats you won't need and you get a much more reasonable download size. It's also not true that OpenLayers must pull image tiles from a WMS or TileCache instance. It's my understanding that the EveryBlock project has written a custom image layer that fetches tiles from a plain HTTP server or proxy, which is what Kapil is looking for. The big imagery projects on my radar are going to involve collaborative annotation, requiring some easy digitization and drawing tools and RESTful protocols for publishing captured features. OpenLayers provides these tools.
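A custom tile layer of that sort boils down to mapping map coordinates and zoom level to tile URLs. A rough Python sketch of the common spherical-mercator z/x/y addressing scheme follows; the URL layout and server name are hypothetical, not EveryBlock's actual code:

```python
import math

def tile_url(base, lon, lat, zoom):
    """Map a WGS84 point to a z/x/y tile URL (hypothetical layout).

    Uses the standard spherical-mercator tiling: 2**zoom tiles per axis,
    x counted from the antimeridian, y from the north.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return "%s/%d/%d/%d.png" % (base, zoom, x, y)

# At zoom 1 the origin falls in the tile just southeast of center:
print(tile_url("http://tiles.example.com", 0.0, 0.0, 1))
# http://tiles.example.com/1/1/1.png
```

A custom layer then just asks an ordinary HTTP server (or caching proxy) for those URLs, no WMS required.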
A few years ago, a colleague asked me how to lay out a Python project. He was asking about web projects specifically, coming from a PHP background with an established tradition of file and directory structure. I didn't have a recommendation at the time other than to follow the practices of other successful projects. As Python use increases in GIS, and shops move beyond one-off scripts to reusable packages of standard processing code designed to be distributed around their systems, more GIS developers will be asking the same question. My answer is the same, but now I recommend a particular tool for arriving at the answer: paster and Paste Script. In its author's words:
paster is a two-level command, where the second level (e.g., paster help, paster create, etc) is pluggable.
Commands are attached to Python Eggs, i.e., to the package you distribute and someone installs. The commands are identified using entry points.
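An entry point is just a named string mapping a command to an importable object. A sketch of how such a spec is parsed, using pkg_resources; the command and package names are hypothetical:

```python
from pkg_resources import EntryPoint

# An entry point spec as it would appear in a package's setup.py under
# the "paste.paster_command" group that Paste Script scans:
spec = "audit = myshoptools.commands:AuditCommand"
ep = EntryPoint.parse(spec)

# paster maps the name on the left to the importable object on the right
print(ep.name)         # audit
print(ep.module_name)  # myshoptools.commands
```

When the egg is installed, `paster audit` becomes available without paster itself knowing anything about your package in advance.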
Paste Script has many more features, but is worth getting for the project creation feature alone. Get it from the Python Package Index: http://pypi.python.org/pypi/PasteScript/1.7.2, or by using easy_install:
$ easy_install PasteScript
The paster program comes with one template for a basic Python package. When executed, it prompts you for the package name and metadata. A package of code that puts a nice wrapper around arcgisscripting could be created like this:
sean@lenny:/tmp$ paster create -t basic_package
Selected and implied templates:
  PasteScript#basic_package  A basic setuptools-enabled package

Enter project name: biarcgis
Variables:
  egg:      biarcgis
  package:  biarcgis
  project:  biarcgis
Enter version (Version (like 0.1)) ['']:
Enter description (One-line description of the package) ['']:
...
Creating template basic_package
Creating directory ./biarcgis
  Recursing into +package+
    Creating ./biarcgis/biarcgis/
    Copying __init__.py to ./biarcgis/biarcgis/__init__.py
  Copying setup.cfg to ./biarcgis/setup.cfg
  Copying setup.py_tmpl to ./biarcgis/setup.py
Running /usr/bin/python setup.py egg_info
biarcgis
|-- biarcgis
|   `-- __init__.py
|-- biarcgis.egg-info
|   |-- PKG-INFO
|   |-- SOURCES.txt
|   |-- dependency_links.txt
|   |-- entry_points.txt
|   |-- not-zip-safe
|   |-- paster_plugins.txt
|   `-- top_level.txt
|-- setup.cfg
`-- setup.py

2 directories, 10 files
Let's say your shop has a standard license and authorship policy, wants more package metadata, wants to make sure all packages have a standard test suite, or just prefers a different layout. You can write your own project templates to meet these needs. I like the Pylons wiki example.
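Registering a custom template uses the same entry point mechanism as paster commands. A sketch of the setup.py wiring a shop might use follows; the group name "paste.paster_create_template" is the one Paste Script scans for templates, while the package and class names here are made up, and the Template subclass itself would live in the shop's own module:

```python
# Hypothetical setup.py fragment registering a custom project template
# with `paster create`:
entry_points = {
    "paste.paster_create_template": [
        "shop_package = myshoptools.templates:ShopPackageTemplate",
    ],
}

# Passed to setuptools.setup(..., entry_points=entry_points), this makes
# `paster create -t shop_package` available once the package is installed.
print(sorted(entry_points))  # ['paste.paster_create_template']
```

The template class decides the directory layout, the extra metadata prompts, and which boilerplate files (ChangeLog, docs, test suite) every new project starts with.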
Update (2008-12-02): Kurt Schwehr wants to know where is the ChangeLog. The basic package template doesn't provide one, perhaps because the author doesn't want to presume what you want to name your log of changes. The basic Zope and Plone templates from ZopeSkel do provide a docs directory and a HISTORY.txt file for changes. The former even provides a test suite. ZopeSkel is an example of a paster template package that a shop (or community, like Zope's) might make to standardize its Python software.
The OGC's service architecture originated in CORBA/COM, evolved into pseudo-SOAP bindings tunneling GIS data and processes through HTTP POST, and now mandates actual SOAP bindings for new services. The OGC has never pushed anything but this particular style of architecture. To the extent that INSPIRE is guided by the OGC, how could it have chosen anything but SOAP?
... Resources derive from the solution domain, not part of the problem domain. Creating resources for concepts that the solution requires is how modeling works in REST terms; they don’t have to derive from any aspect of the problem in order to be justified.
Interestingly, the quote occurs in a thread about "query resources", a notion I've blogged about and one that remains difficult for GIS folks to swallow.
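The idea may be easier to swallow as code: a query itself becomes an addressable resource that clients create and then dereference. A toy in-memory sketch, with no HTTP and all names hypothetical:

```python
import hashlib

class QueryStore:
    """Toy model of REST 'query resources': each query gets its own URI."""

    def __init__(self):
        self._queries = {}

    def post_query(self, query):
        # A stable digest means identical queries map to the same resource,
        # so its results can be cached, linked, and bookmarked.
        qid = hashlib.sha1(query.encode("utf-8")).hexdigest()[:8]
        self._queries[qid] = query
        return "/queries/%s" % qid

    def get_query(self, uri):
        return self._queries[uri.rsplit("/", 1)[-1]]
```

A GET on the returned URI would then execute (or replay) the stored query. The point, per the quote, is that the query is a perfectly legitimate resource of the solution domain even though no feature or layer in the problem domain corresponds to it.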