Openness in data formats

[Photo: Me and Tantek]

Tantek wrote a thought-provoking entry about data formats and openness, and I can't help but agree with parts of it and disagree with others. So first, his entry:

  1. ASCII is dependable. Project Gutenberg insists on publishing their e-books as plain ASCII text, as Mark Pilgrim noted, and their reasons are solid.
  2. Compatible XHTML is now also dependable. In the 15+ years since its public introduction, I believe that HTML has established itself sufficiently prominently worldwide that I feel quite comfortable declaring that HTML will be accepted to be as reliable as ASCII in coming years. In particular, authoring what I like to call Compatible XHTML, that is, valid XHTML 1.0 Strict that conforms to Appendix C, is IMHO the way to author HTML that will have longevity as good as ASCII. Note that files in most file systems have no sense of “MIME-type”, thus the winged-mythological-creatures-on-the-head-of-a-pin style arguments about text/html vs. application/xhtml+xml that are often used to discredit either HTML or XHTML (or both) are irrelevant for the most common case of keeping archives of files in file systems.
  3. Plain old XML (POX) formats in the long run are no better than proprietary binary formats. XML, both in technology and as a “technical culture”, is too biased towards Tower of Babel outcomes. I've spoken on this many times, but in short, the culture surrounding XML, especially the unquestioned faith in namespaces and the misplaced assumed requirement thereof, leads to (has already led to) Tower of Babel-style interoperability failures. As this is a cultural bias (whether intentional or not) built into the very foundations of XML, I don't think it can be saved. There may be a few XML formats that survive and converge sufficiently to be dependable (maybe RSS, maybe Atom), but for now XHTML is IMHO the only long-term reliable XML format, and that has more to do with it being based on HTML than it being XML.
  4. Formats that are smaller (e.g. define fewer terms) tend to be more reliable.
  5. Formats that are simpler (e.g. define fewer restrictions/rules for publishers) tend to be more reliable.
  6. Formats that are more compatible with existing reliable formats tend to be more reliable, e.g. HTML worked well with existing systems that supported “plain text” (AKA ASCII).
  7. Formats that are easier to use, i.e. publish, and more immediately useful, rapidly become widely adopted, and thus become reliable as a breadth of software and services catches up with a breadth of published data in those formats.

The microformats principles were based on these observations. Now this doesn't mean I think microformats will replace existing reliable formats. Not at all. For example, I feel quite confident storing files in the following formats:

  • ASCII / “plain text” / .txt / (UTF-8 only if necessary)
  • mbox
  • (X)HTML
  • JPEG
  • PNG
  • WAV
  • MP3
  • MPEG
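
Before I get to my own take, here is roughly what the Compatible XHTML of point 2 looks like in practice. Appendix C boils down to a handful of habits: skip the XML declaration, put a space before the slash in empty elements, use minimized forms like <br /> only for elements that really are empty, and set both lang and xml:lang. A minimal sketch (the title and text are just filler):

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
      <head>
        <!-- declaring the charset here lets the XML declaration be omitted -->
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>A compatible document</title>
      </head>
      <body>
        <p>Empty elements get a space before the slash,<br />
           so old HTML parsers and XML parsers both cope.</p>
      </body>
    </html>

A file like that opens in a bare HTML browser and in any XML parser alike, which is the whole point.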

So my take on Tantek's thoughts.

“Plain old XML (POX) formats in the long run are no better than proprietary binary formats.” See, I take issue with this. I understand what Tantek is getting at, but I would say plain XML without a schema isn't leaning towards the Tower of Babel (his namespace worry is sketched below). And as Tantek already mentioned, RSS and Atom are pretty close to the non-Tower-of-Babel direction. I would also add FOAF and OPML to the list. I would love for SVG to be included as well, but alas it's not.

“Formats that are smaller (e.g. define fewer terms) tend to be more reliable.” Good point, which is why things should be broken down the way XHTML and SVG were with Modularization.
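
To show the failure Tantek is worried about, here is a sketch (the vocabularies and namespace URIs are made up for illustration). Two publishers mark up the same fact, and to a generic XML consumer the two documents have nothing in common:

    <!-- publisher A's vocabulary -->
    <contact xmlns="http://example.com/2005/contacts">
      <name>Tantek Çelik</name>
    </contact>

    <!-- publisher B's vocabulary: same data, zero overlap -->
    <person xmlns="http://example.org/ns/people#">
      <fullName>Tantek Çelik</fullName>
    </person>

When that happens, every consumer has to special-case every vocabulary, which is the babel. My point is that schema-less plain XML doesn't force this; RSS and Atom show you can settle on one shared vocabulary and converge.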

My list of formats is slightly different too.

  • XHTML (Unicode)
  • XML (Unicode)
  • JPEG
  • PNG
  • MP3 audio
  • MPEG-4 video
  • WAVE
  • SVG
