Take a Picture, It’ll Last Longer…

Last week, early web folklorist, OG net artist, and friend of Rhizome Olia Lialina wrote a post that dug at Art.sy for how severely their image processing system had mangled an image of her piece My Boyfriend Came Back From The War. Despite (for the lulz) comparing the problem to the recent botched restoration of a 19th-century fresco, Olia is correct: the image of her work, as processed by Art.sy's system, looks pretty bad. This is just one manifestation of an underlying problem I have been pondering lately: how can documentation of works that are screen-based, and inherently low-resolution, exist within systems designed specifically for high-resolution documentation of works that exist in the physical world?

For a while now, Rhizome has been sharing records and images of works from the ArtBase with a handful of carefully chosen fine art image databases. It's nice to see lesser-known computer-based works alongside more established artists and media, and we like the idea of exposing our collection, and the history of art engaged with technology, to a broader audience. Every time we begin one of these projects, we are faced with the same conundrum: image specifications. Image collections such as ArtStor, Art.sy, and Google Art Project all serve high-resolution images of paintings, prints, photographs, and objects. The user experience of these platforms is engineered to best represent documentation of an object that exists in the physical world. However, nearly all artworks in the ArtBase are screen-based – be they software, websites, video, or animated GIFs. This means that these works are inherently low-resolution. With computer- or screen-based works, there is often no finer grain of visual detail than native screen resolution. In documenting these works, we are not faced with the bottomless pursuit of capturing (or exceeding) human perception, as with the documentation of physical works of art; the pixel is the lowest level of detail. Furthermore, when endeavoring to capture images of authentic renderings (i.e. period-specific web browser and operating system), the dimensions of the image are (or at least, in some cases, should be) limited to the native resolution of displays of the time when the work was created.

Detail of My Boyfriend Came Back From The War

For example, the image of My Boyfriend Came Back From The War we shared with Art.sy (seen here) is a 746 x 436 px lossless PNG screenshot of the website, as rendered by Netscape Navigator 3.0 (1996) running in Mac OS 9.0 (1999), emulated by SheepShaver. Although the image was cropped to remove the operating system's graphical user interface and the outer frame of the web browser, it still possesses inherent historic accuracy and artifactual, evidential quality. The dimensions of the image could have been slightly smaller or slightly larger, but they were defined by what was a comfortable browser window size within the emulation, which was set to a resolution (800 x 600) appropriate to typical hardware of the time. Additionally, the images embedded in Olia's HTML have variable, percentage-based widths, and adjust to the size of the browser window. This reinforces the importance of the size of the rendering, as modern browsers scale images with a blurry interpolation algorithm, unlike the browsers of the work's era. The delicate and sensitive nature of screen-capture images is significant: any scaling or heavy-handed compression can easily destroy the subtle artifactual qualities that the image was carefully designed to capture. With screen graphics, especially text and images from the early web, a difference of a few pixels can completely alter the feeling of a work.

Detail of My Boyfriend Came Back From The War, as processed by Art.sy

It is unsurprising that Art.sy's system mangled the image so severely, as it is a system designed for down-scaling incredibly high-resolution images, not upscaling low-res ones. Here are a few thoughts on how the system could potentially handle intentionally low-res images of born-digital materials:

1) Do nothing: do not scale the images, and use lossy compression with care.
2) Improve the image processing methodology to be adaptive to images that are intentionally low-res. I am guessing that when high-resolution images are uploaded to the Art.sy CMS, a set of progressively smaller images is derived that can be fed to the image-zooming viewer. A reverse/mirror image of this process could be developed, where instead of scaling down, the images are scaled up using nearest-neighbor interpolation at each level (see the sketch after this list). In theory the original image size would be the smallest, and zooming in the image viewer would appear to provide a strict enlargement of the original pixels.
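Here's a minimal sketch of what that reversed pyramid might look like, using ImageMagick's -sample operator (nearest-neighbor point sampling). The file names and the choice of three levels are my own assumptions, not anything Art.sy has described:

# Derive progressively *larger* renditions of a low-res original.
# -sample replicates pixels rather than interpolating, so each level
# is a strict enlargement of the original pixels.
for scale in 200 400 800; do
    convert original.png -sample "${scale}%" "level_${scale}.png"
done

A 746 x 436 px screenshot would yield 1492 x 872, 2984 x 1744, and 5968 x 3488 px renditions; the viewer would treat the original as the smallest level of the pyramid.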

Speaking realistically, Art.sy is a unique entity among the image repositories we are talking about. They have an in-house team of talented and curious engineers constantly working on improving the platform, which of course is still very new. They are thinking about how they can attack this problem as I type. I seriously doubt that larger, older platforms with fewer resources, or a different engineering culture, would be able to invest in developing new image processing solutions for what is a very small subset of their content. In light of this, it behooves archivists and conservators of computer-based works to consider how we can use documentation strategies that gel with these existing systems. Furthermore, although screenshots are the reigning paradigm in the documentation of computer-based works, do they really do the work justice in these contexts? If not – why should platforms invest in accommodating them? A strategy used by SFMOMA when contributing documentation of Miranda July's web-based Learning to Love You More to Google Art Project was to tile many screenshots to compose one high-res image.

While on the one hand this strategy solves the problem of resolution, the result just doesn't feel right. It amplifies what I feel to be the problem with screenshot-based documentation: it denies the work any broader context. While lossless screenshots of computer-based works are immensely valuable for preservation purposes, this approach completely neglects the physical aspect of the works. Software is not experienced in a disembodied graphical space – we interact with it through machines. If one of the major driving forces behind sharing with these image repositories is education, it seems logical to employ a documentation strategy that is simple and effective in visually communicating the context of these works, not simply one that meets the image specifications. We are beginning to employ a documentation strategy at Rhizome that touches all of these bases. It's quite simple, really: take a picture.

Rafaël Rozendaal’s falling falling .com

The two photos above (the latter taken with my iPhone) are not offered as examples of quality documentation – I just happened to have them on hand. They are, however, exemplary of how instantly readable a still image of a web-based work of art is when it depicts the work from the perspective of the viewer, not the computer. Such documentation does not replace the role of lossless screenshots of authentic renderings, but in the context we are speaking of – image repositories that are designed for handling high-resolution content, and which have a diverse audience – photographs are arguably far more evocative of the work, more educational in terms of historic context and technology, and inherently more durable in the face of image processing and compression. Of course there are significant setup costs involved in producing this type of documentation: camera, lighting, and period-specific hardware. In some cases there are software shortcuts that can be taken if hardware isn't your thing. For example, document the work displayed on a CRT of the proper vintage, but rather than going to the trouble of setting up a vintage Mac or PC, connect the display to a modern computer running a fullscreen emulation. This approach also requires less maintenance – a library of virtual machines is far more stable than a collection of vintage computers.

It will take some time as we go about collecting the hardware, purchasing a camera and lighting, and developing a workflow (computer displays, especially CRTs, are tricky to photograph), but Rhizome should be able to start producing documentation under this new rubric (high-resolution, photographic, historically accurate hardware [not just software]) in the very near future. Until then, perhaps we'll see something from Art.sy that does a better job of handling sensitive, pixel-perfect historic screenshots.

Media Archeology: The VODER

Voder demonstration at the 1939 World's Fair

I wrote a piece for Rhizome about an object that is currently on display at the New Museum for the Ghosts in the Machine exhibition: Homer Dudley's VODER. It's a really fantastic piece of history that arguably ushered in the modern era of speech synthesis and influenced culture in some very significant ways. Here's the full article, and here, for your enjoyment, is a six-minute demonstration of the VODER.

Storify Is Bad For Preservation

tl;dr: Storify is not a Twitter archiving tool, but it easily could be.

After the great conversation at #ArtsTech on 6/13, I collected tweets from the evening [see them here] using Storify. It was the first time I'd ever used it. My takeaway echoes that of most people who have used Storify: fantastic.

However, there is one major gap that Storify isn't addressing – one that would be trivial for them to fill, but would have a major impact on the landscape of personal digital preservation tools. To summarize the issue: Storify is a black-box service. When it inevitably ceases to exist, so too will all of the stories and narratives that people have documented with it.

First things first: if you've never used or seen Storify, it is a free service that lets you search for and arrange tweets into a linear narrative. It's good for documenting small-scale things like a conversation, and large-scale things like conference hashtags. It has been well documented that Twitter's search index is very shallow, chronologically speaking, hence the need for such tools. There is hardly a shortage of Twitter archiving tools – from IFTTT recipes, to ThinkUp, to various homebrew solutions, there are options aplenty.

Where these all fall short (and where Storify excels) is in facilitating hand-selection, and in producing a decent, human-readable look and feel in the style of a Twitter conversation. Storify makes it easy to hand-pick tweets, or to start broad with an entire hashtag and edit down from there. The end result maintains the look of a content stream, including avatars and "prettified" timestamps (i.e. "3 days ago"). You can retweet or reply to tweets directly from a finished Storify, which facilitates continued conversation rather than rendering a static archive.

The great thing about all of the other Twitter archiving tools I mentioned is that they provide you with a local copy of the data. When you use these tools, you are essentially creating a backup. When the makers of those tools close up shop, you will still have your archive of tweets in a relatively platform-agnostic format. Storify does not let you locally save and archive any of the content you create with it. They do provide an "export" feature, which embeds your Storify on a site powered by WordPress, Drupal, Tumblr, or a few other platforms. While at first glance this looks great, it is entirely misleading.

Take a look at what Storify actually posts to your site: every last bit of it (from JS, to images, to CSS) is hotlinked. Meaning: when Storify goes down, so will the content you've "exported." To boot, they use infinite-scroll JavaScript, so web archiving with a web crawler is pretty much out of the question. Of course there are simple ways to mitigate this: print a PDF of the page, do a "save as webpage," etc. That seems beside the point, though. The point is that Storify has built what is essentially the most "human" tool for archiving and presenting interactions on Twitter. If they were to provide a true "export" feature that allowed users to locally back up their Storify content, they would be positioned as one of the most comprehensive personal digital preservation tools for Twitter.
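As a stopgap, something like the following wget invocation approximates that "save as webpage" mitigation for a single exported page, pulling down the hotlinked assets and rewriting their URLs for local viewing (the URL here is hypothetical, and the infinite scroll means anything loaded by JavaScript after the initial page load still won't be captured):

# Grab one page plus its hotlinked js/images/css (-p), rewrite links
# for local viewing (-k), and span hosts (-H), since the assets live
# on Storify's servers rather than yours.
wget -p -k -H -e robots=off "http://example.com/my-exported-story.html"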

ArtsTech: Digital Conservation

Keyboard Archeology

I came across an interesting question on Twitter a few days ago that sent me spiraling into a brief bout of research. Via Matthew Kirschenbaum, Matt Schneider posed the question of when the greater-than (>) and less-than (<) symbols first appeared on keyboards. I managed to come up with two contenders.

Above is the 1955 Olympia SM3 De Luxe (with science and math keys). It seems that typewriters beat computers to the <pun>punch</pun>, as some early typewriter keyboard layouts included mathematical symbols at a time when computers were programmed in assembly languages coded/punched on keyboards whose layouts included no scientific or mathematical symbols (or were programmed on colossal Univac keyboards). Looking at early-to-mid-20th-century IBM card and key punches, it can be observed that while keypunches with an alphanumeric "repertoire" emerged in 1933 with the IBM Type 032 Printing Punch, that keyboard layout was strictly alphanumeric.

Above is the keyboard layout of the IBM 026 (1949). The earliest example I managed to dig up of a "computer" keyboard with the greater-than (>) and less-than (<) symbols was the IBM 029 Card Punch, from 1964.

In the category of "close, but no cigar, but nonetheless interesting," we have the Smith-Corona Classic 12 (Greek), which included ten keys of Greek symbols (but none useful for equations).

wget cheat-sheet

Hello Internet, I made you something. There seems to be a lack of a basic wget cheat-sheet out there. Today I got tired of referring back to the usual sources, which tend to include every possible flag, most of which I never use. Here's a .pdf you can print and hang at your desk.

-e robots=off

-m
--mirror

-r
--recursive

-p
--page-requisites

-k
--convert-links

-l depth
--level=depth
(default is 5)

-o logfile
--output-file=logfile

-i file
--input-file=file

--random-wait

-nd
--no-directories

-nH
--no-host-directories

-E
--html-extension
(appends .html)

-U agent-string
--user-agent=agent-string

-A acclist
--accept acclist
(comma-separated extensions)

-R rejlist
--reject rejlist
(comma-separated extensions)

-D domain-list
--domains=domain-list
(domains to follow)

--exclude-domains domain-list

-H
--span-hosts

-L
--relative
(follow only relative links)

-np
--no-parent

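For reference, here's how several of these flags might combine in practice – a full mirror of a hypothetical site, suitable for local browsing:

# Mirror the site (-m), grab page requisites (-p), convert links for
# local viewing (-k), append .html where needed (-E), stay below the
# starting directory (-np), ignore robots.txt, and pause randomly
# between requests to be polite.
wget -m -p -k -E -np -e robots=off --random-wait http://example.com/
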
Interview on the LOC’s Digital Preservation Blog

Trevor Owens of NDIIPP and the Library of Congress recently interviewed me for The Signal about Rhizome and the ArtBase. Here's a bit where he asks what exactly my title (digital conservator) means:

>>  full interview here

Trevor: I don’t think there are many people out there with the title of digital conservator. Could you tell us a bit about how you define this role? To what extent do you think this role is similar and different to analog art conservation? Similarly, to what extent is this work similar or different to roles like digital archivist or digital curator?

Ben: I drew the distinction with my title for two reasons: 1) I am at the service of an institution that lives within a museum, and 2) the digital objects I am cataloging and preserving access to are not “records” by the archival definition. They are artifacts – and as such require a different kind of care.

I am responsible for the stewardship of intellectual entities that are often inseparable from their digital carriers, due to the artist’s exploitation of the inherent characteristics of the material. It calls for a high degree of regard for the creator’s intent, and a thorough understanding of the subtleties of the materials. A digital archivist tasked with preserving the records of an office probably isn’t going to wonder if the use of Comic Sans in the accountant’s email signature has artifactual significance.

Of course the lines are much blurrier than that, and there are plenty of examples of people with the title "digital archivist" or "digital curator" doing significant work on preserving the subtle artifactual qualities of digital materials (not to mention the incredible people who are contributing to significant projects in their spare time). This is a new phenomenon, though, where individuals with the title "archivist" or "curator" devote a level of care to documents that, with paper materials, would be the work of a document conservator.

While I would hesitate to compare the two, I think that the conservation of digital artifacts and the conservation of objects, documents, and the like hold, at their essence, many similarities. Both require empathy for the artist, expertise with the medium, and an understanding of the proper environment. Sometimes I go to the Greek and Roman galleries at the Met and daydream about what net art from the 90s will look like hundreds of years from now.

An Incomplete Introduction to Digital Preservation

Here are slides from a presentation I gave last night, providing an introduction to some basic digital preservation concepts. I focused on the Trustworthy Repositories Audit & Certification criteria, Archivematica as a manifestation of the OAIS model, some historic examples, and recent projects in web-based emulation of obsolete systems. Nothing new here for practitioners, but an OK intro for the curious.
PDF warning » download here