Adventure!

I tend to get a lot of questions about my weekend bike adventures. Cyclists and non-cyclists alike ask about my routes, what I eat on the ride, what I pack, how long the ride takes, if I bike in the rain, etc. I’m going to start doing a better job of documenting all of this, starting with this post about a little trip I took up to the Catskills.


First: the bike. I’ve been riding a Specialized Tarmac for two years for both recreational and competitive road riding. This summer, though, I’ve become increasingly obsessed with adventure cycling and bikepacking. I love long endurance rides, I really, really love the fun and challenge of riding less-traveled dirt and gravel roads, and I’ve been wanting to get out camping more — so bikepacking really is the perfect combination. While there is a trend in ultra-endurance racing to fit out race bikes with frame and saddle bags, I wanted something that I knew I could thoroughly abuse, without worrying about carbon or being afraid to lock it up or leave it unattended at my campsite. I found myself desiring something real… something steel.

Specialized’s AWOL fit the bill perfectly, and I had been drooling over the Comp pretty much all summer, so recently I went for it and had my favorite local bike shop build one up for me. This thing is the ultimate zombie-apocalypse-ready tank: stable as hell when loaded up with full camping gear, even when bombing a descent at 50 mph, ripping up a trail over rocks, tree roots, and rail ties, or dropping the hammer on the road. Simply put – it’s a f***ing sick bike. It likes to go fast, and it likes to go everywhere.


I set the bike up for covert adventure deployments with a Tubus low rider front rack, as that’s where the AWOL likes its weight. So far I’ve been using classic Ortlieb back rollers (but on front), but am kinda considering going with a rackless setup. I also threw three King Cage titanium bottle cages on, as well as a randonneur bag up front. That’s really all the cargo space I need – even once I start doing trips longer than five days – I firmly believe in packing light and packing smart.


I was invited recently to spend a few days in the Catskill Mountains with a bunch of far out futurists, tech nerds, and generally awesome people. When I learned I would be camping for three days in the mountains, 150+ miles from home, naturally I decided to leverage this as an opportunity for a bike adventure. I got my rig all packed up on a Tuesday night. Here’s what I brought:


Two spare tubes, one spare tire, five CO2 cartridges, small frame pump, multitool, Garmin, small lightweight bike lock, four tubes of electrolytes, four Shot Blok sleeves, four energy gels, titanium French press, coffee, lightweight mug, flask full of Kings County bourbon (I may pack light, but I’m not a monster), small backpacking stove and gas, headlamp, small bike lights, auxiliary battery for charging USB devices, 100% waterproof phone case with extra battery life, tent, sleeping bag, two shirts, one pair of shorts, a spare cycling kit, first aid kit, camp shoes, two USB wall chargers. That’s it! It all adds up to roughly 35 lbs.

Wednesday morning I commuted in to work on the AWOL with all my gear, and at the end of the day I just hopped on the bike, escaped the city, and embarked on the short first leg of the trip. Destination: a campsite in Harriman State Park, roughly 45 miles from the office.

The name of the game was to try and get to camp before sundown, but I just missed the mark, and rode the final 6 miles in the dark. Turn on the headlamp.

To polish off the ride, there was a nice little 1.7-mile category 3 climb at the end. This was a good warm-up for the suffering that awaited me in the Catskill Mountains the next day.

I set up camp, scarfed down my dinner (I grabbed a burger and fries at The Market on the way out – not the smartest nutrition for the night before a 100-mile ride; I should have planned better on this point), texted the fam, took a couple swigs from my flask, and was off to bed. The tent, in case you are wondering, is a Tarptent Moment DW.


Made coffee in the morning and then it was off to the mountains.

Starting a ride in the wilderness, rather than fighting through city traffic, is pretty magical. This was the view that I started with.

It was about 8 miles through Harriman, then a cut across Route 87 over to Sterling Forest State Park, at which point I passed this derelict building.

Up through Blooming Grove, up through Walden (where I stopped for lunch and had some KILLER pizza), and on to Wallkill. Just on the outskirts of Wallkill, 40 miles in, the route sent me onto this portion of the Wallkill Valley Rail Trail, which, much to my delight, was a pretty rough-around-the-edges trail of dirt, gravel, and tennis-ball-size stones.

The AWOL ate this all up and wanted more. The only issue with this trail is that the brush gets pretty overgrown in sections, and I had to stop three times to remove branches from my drivetrain. More than once on the road I’ve had a surprise stick in the derailleur, causing it to rip from the hanger, but not today. Lucky. This definitely left me ruminating on the virtues of a belt drive paired with an internal hub.

Sadly, this portion of the trail only lasted about a mile. The trailhead on the other end was pretty ridiculous – it narrows to singletrack, and you’re sent down an extremely steep embankment and into a ditch. Upon exiting the trail I started to see signs along the lines of “State Correctional Facility Land – No Stopping”. Well then. The route directed me down a road that was clearly marked *DO NOT ENTER*, which, had it not been a prison, I would have considered disregarding. Being the law-abiding cyclist I am, I rerouted up to State Highway 208, which I would have wound up on eventually anyway. I noticed my Garmin’s battery was below 50% at this point, so I decided to try out the backup battery. It worked like a charm, and made me really glad I had recently added the rando bag to my setup.


Moments later, the sky opened up, and I was soaked by what would be the first of at least five thunderstorms that day. It was then 20 miles of flat terrain to hammer through to Rosendale, where I would meet up again with the Wallkill Valley Rail Trail – this section even more epic than the last.


By now it was steadily pouring rain, and I had a grin on my face as I ripped down the dirt and gravel trail found on the other end of the footbridge. I found myself surrounded by tall trees, splashing through mud puddles, with a steep embankment on my left and Joppenbergh Mountain on my right. At one point I felt a blast of freezing cold air, and looking to the right realized that it was coming from a huge cave in the side of the mountain – cold air and fog were billowing out of it. Unfortunately I couldn’t get a picture – the waterproof case for my phone allows it to be used in the rain, but once there is a bit of water on the screen, the phone struggles to register swipes, rendering it difficult to unlock and activate the camera. In any case, pedal on. Don’t want to wait around for the bears.

Four miles later, in Marbletown, after being sent down a gravel private drive, I found my way to Fording Place Rd. The irony of this name was not lost on me as I found myself riding through a six-inch-deep, 30-foot-long puddle that spanned the entire width of this dirt road. That puddle, however, was just the warm-up. After turning a corner on the trail I found myself facing the Esopus Creek. So what do you do when you’re faced with fording a 1.5-foot-deep creek? You thank yourself for having 100% waterproof panniers, and keep on trucking.

My panniers were about 2/3 submerged, but I’m happy to say they held their own and kept my gear totally dry. On the other side of the stream, there is a big corn farm with a nice flat gravel road. On through Lomontville and Pacama, and before you know it, I found myself at the Ashokan Reservoir – rolling hills surrounded by seriously beautiful forest, and occasionally a cute little cabin or a decrepit barn.


The forest eventually gives way to this view from a small bridge.


Shortly thereafter, the road leads to the Ashokan Reservoir itself. Not a bad view here either.


From here it was essentially one grueling 20-mile false flat all the way to the tiny little town of Fleischmanns, which was the last civilization I would see until reaching my destination. What came after that 20-mile slog through the false flat, you ask?

Pure. Mountain. Death.

Over the course of the final 10 miles, there were four back-to-back brutal climbs as I entered the real mountains. I love climbing, and I enjoy the suffering, but man, when you are hauling 32 lbs of gear, it’s a whole new level of pain. By my calculations, those last 10 miles had around 2,800 feet of climbing. 100% suffering. Here’s the upshot though – one of the climbs had a perfectly straight, incredibly steep descent with a clear line of sight. I hit it full gas and broke my own speed PR – 49.45 mph.

All of this climbing was rewarded by the sudden arrival at my destination – where I was greeted not only by the smiling faces of friends, but also by the most incredible mountaintop modernist home I’ve ever set foot in.


And so ensued a weekend of discussions, presentations, debates, and fireside chats on the future of technology and its role in society. The weather up in the mountains was incredible – giant thunderstorms would come and go at a moment’s notice.


… and when Sunday came, I was back on the road. I took the same route most of the way back, but eventually cut over east of the Hudson River so that I could take the train from Beacon. I would have preferred to have ridden the whole way and split it again over the course of two days, but I had to get back to the office on Monday. Here are some photos from the much sunnier (and hotter) return trip.


Until next time – get out there and ride!

It Takes a Village to Save a Hard Drive

Doron in utter disbelief

In the final days of the XFR STN exhibition at the New Museum, we encountered what was hands-down the most challenging born-digital recovery to have occurred during the run of the exhibition. On August 30th, artist Phil Sanders arrived at the New Museum with an amalgam of floppy disks and two external hard disk drives. XFR STN technician Kristin MacDonough went into production mode recovering the floppy disks. In just a few hours Kristin was able to recover 146 of Phil’s floppy disks. Prolific!

Kristin MacDonough rescuing floppy disks

Kristin MacDonough, Phil Sanders, and family

While Kristin tended to the sea of floppy disks, I investigated the hard drive situation. The first external disk drive was a peripheral used with an Amiga. The enclosure’s external interface was nothing we could use with our variety of adapters and forensic bridges, so I opened it up to take a look at the internal interface.

Phil Sanders' Amiga external hard disk drive

Phil Sanders' Amiga external hard disk drive

Luckily the drive inside was just a standard 3.5″ SCSI hard disk. Using a Tableau SCSI bridge (or “write-blocker”) and FTK Imager we made a raw (dd) disk image of the drive. Having worked previously with Amiga hard disk images, I knew this wasn’t the end of the story. This raw image of the entire disk would only really be useful if it was a system disk, and if Phil wanted to emulate his old Amiga system. If all that Phil really wanted was the files on the disk, the image would be useless to him for a few reasons: 1) the Amiga Fast File System (AFFS) is not supported in FTK Imager, so we would be unable to browse the file system or dump the files for him there, as was the workflow for most disk images at XFR STN; 2) AFFS is fortunately supported in Linux, but the partitioning scheme used on Amiga disks is not, meaning the raw image we’d produced of Phil’s disk could not be mounted as-is. Michael Kohn made a brilliant tool that provides a solution to this – not only can you view the partitions on the disk image and browse around the file system, you can also use his tool to dump a raw image of just one partition. This “dumped” raw image can then be mounted natively in Linux, allowing you to get the files and do whatever it is you please. We used this process to provide Phil with the full raw image of the disk, the raw image devoid of partitioning scheme, and a dump of all of the files. Start to finish this does not take much time at all… if you’ve used the tools before, maybe one hour tops.
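To make that last step a bit more concrete, here is a minimal sketch of the “dump one partition” idea in Python. This is not Michael Kohn’s tool, just a generic illustration; the offset and length values are placeholders that would, in practice, come from the Amiga partition table (the Rigid Disk Block) on the disk image.

# Hypothetical illustration: carve a single partition out of a raw Amiga
# disk image by byte range. The offset/length values are placeholders,
# not real values from any particular disk.
PARTITION_OFFSET = 512 * 1760        # assumed start of the partition, in bytes
PARTITION_LENGTH = 20 * 1024 * 1024  # assumed size of the partition, in bytes
CHUNK = 1024 * 1024                  # copy a megabyte at a time

with open("amiga-disk.dd", "rb") as whole_disk, open("partition0.img", "wb") as part:
    whole_disk.seek(PARTITION_OFFSET)
    remaining = PARTITION_LENGTH
    while remaining > 0:
        chunk = whole_disk.read(min(CHUNK, remaining))
        if not chunk:
            break  # reached the end of the source image
        part.write(chunk)
        remaining -= len(chunk)

# The resulting partition0.img, now free of the Amiga partitioning scheme,
# is the sort of image that can be mounted natively on Linux with the affs driver.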

Phil Sanders' "Sider" external hard disk drive

The next external hard disk drive, originally used with an Apple //e, was a wholly different scenario. The external interface appeared at first glance to be SCSI, but after counting the pins it became apparent that we were dealing with something else. I posted pictures of the drive to the Digital Curation list, and Mark Matienzo was able to find the manual for the drive, confirming that the connection was in fact SASI, an interface that was a precursor to SCSI.

The Sider's external SASI interface

I opened up the enclosure, hoping that the internal interface was something that we could easily work with, only to find that not only was the internal interface equally obscure, but that the disk was a whopping 5.25″ form factor, as opposed to the standard 3.5″ encountered in most personal computer hard disk drives.

Pictured right: The Sider's hard disk. Left: 3.5" hard disk for scale

Phil’s hard drive is pictured above on the right. On the left is a 3.5″ SCSI hard disk for scale. As we had no simple way of interfacing with the Sider, either through its external or internal interface, we knew that, given our limited time and expertise, the easiest way to work with the drive would be with the original computer it was used with. Luckily, Phil had held on to his Apple //e. Knowing that our lab setup at XFR STN could easily recover 5.25″ floppy disks, my plan was essentially to migrate files from the 10MB hard drive to 5.25″ floppies, which we could then recover and extract the files from. The following week, on September 6th, Phil brought in his Apple //e and all of its peripherals. By some crazy stroke of fate, Apple ][ expert and friend Jason Scott happened to stop by to visit XFR STN just as we were setting up Phil’s computer. This was a lifesaver, as my knowledge of Apple DOS leaves a bit to be desired. Phil informed us that the Sider contained the bulk of his artistic output from the ’80s, including much material that he produced while artist-in-residence at NYU. He also informed us that the Sider was in fact the boot disk for the //e. All of the hardware was plugged in and waiting. It was with great anticipation that we powered on Phil’s Apple //e and listened to a 10 lb, 10 MB hard disk attempt to spin up for the first time in over two decades.

Jason Scott powering on The Sider, Doron and Phil looking on

Initially, nothing happened. The //e informed us that there was an I/O error. The drive sounded like it was spinning, but it sounded horrible – like a mechanical device that had not been properly exercised in two decades. Slowly though, it began to spin faster. After some coaxing, powering the //e on and off a few times, and perhaps a few prayers to the god of dead media, we heard some head activity coming from the drive, and miraculously, the CRT monitor offered up a menu.

Booting the Apple //e

Phil’s memory had been accurate. Not only was the Sider the //e’s boot disk, it had multiple operating systems available, and a boot menu that allowed us to choose which OS to use. We booted into DOS and proceeded to take a look at what was on the disk.

Jason Scott, hard drive whisperer.

The disk was mainly cooperative, and we only had to reboot a few times due to I/O errors. There was one area of the disk that appeared to be corrupt or unreadable. We found a lot of content on the healthy areas of the disk, but what we saw were mostly system files and software. No art, but we copied the files we found to 5.25″ floppy disk anyway, just to be on the safe side. Phil then informed us that he hardly used Apple DOS, and primarily worked in ProDOS. We restarted the machine and booted into ProDOS. We then found the mother lode. What had appeared to be a corrupt part of the disk was in fact the area of the disk that only ProDOS could read – and it contained massive amounts of artwork made by Phil during the ’80s.

We had to act fast – there was no telling how long the Sider would spin before finally buying the farm. While in DOS we had simply copied files in batches to floppy disk, we found that in ProDOS we could use the Copy II Plus program to actually produce a backup of the entire Sider hard disk. Jason initiated the process.

L to R: Jason Scott, Doron Ben-Avraham, Walter Forsberg, Phil Sanders

We only happened to have seven 5.25″ floppy disks on hand, yet after indexing the entire hard disk, Copy II Plus told us that we would need 24 floppies in all. This was not initially any cause for alarm, as we realized that we could recover the floppies immediately as Copy II Plus produced them, and that once a floppy was imaged, it could be re-used. As Matthew Kirschenbaum so eloquently put it at the next day’s symposium, we were operating a veritable “bucket brigade” between the Apple //e and our floppy recovery station, bits sloshing over the side as we rescued Phil’s artwork from certain oblivion.

Copy II Plus backup on 5.25" Floppy disk

The post-it note pictured above was an indicator that this particular floppy was a backup of Slot 7, Volume 1, disk 1 of 24. Slot 7 was the Sider, and on the Sider there were in fact two volumes. After seven disks, we attempted to re-use the first one we created, only to come to the terrible discovery that Copy II Plus refuses to overwrite what it detects as a backup disk. We ran back to our KryoFlux, as I recalled that one can use the device not only for recovery, but for writing back to disk. Unfortunately, DOS 3.3 is not yet one of the supported writing formats. We wrote an Amiga-format disk image back to disk, hoping that the Apple //e would see that it was not in DOS 3.3 format and attempt to reformat the floppy before backing up to it. Instead, it simply refused to acknowledge the disk rather than offer to reformat it. It was then that New Museum director of IT Doron Ben-Avraham posed the idea of erasing the floppy disks with a magnet. I was completely skeptical, figuring that if the //e refused to reformat an Amiga disk, why would it react differently to a disk whose geometry had been obliterated? Doron managed to find a tiny magnet in the office.

Pure ingenuity

Amazingly… it worked! Doron assumed floppy erasing duties, and our bucket brigade was back in action, writing to floppy with the //e, recovering, and then erasing and reusing the disk. We managed to back up the first volume of the disk. The day had come to an end, and we needed to call it quits, but there was a whole other volume remaining to be backed up. This was on a Friday, and the soonest we would be able to pick up where we left off would be the following Sunday. Not wanting to risk spinning down the Sider hard disk drive, we left the whole system set up and running for two days in the resource center.

Restoration in progress

On Sunday, Phil and I took a close look at the contents of the two volumes, and found that Volume 2 was simply a direct mirror of Volume 1. Our backup was complete! We took this as an opportunity to run and document Phil’s work. Walter Forsberg had the brilliant idea to do direct video capture from the //e, so we moved to one of the video preservation stations, and proceeded to do just that. I asked Phil questions about the fidelity of the image quality we were seeing.

Phil Sanders demoing his work during live capture

It is rather incredible to consider that none of this recovery process would likely have been available or accessible to Phil without a resource like XFR STN. Nearly a decade of Phil’s born-digital artwork now lives on the Internet Archive in the form of floppy disk images, hard drive dumps, and an hour of 10-bit uncompressed direct video capture. It sets the stage for further work in restoring an operational emulation of Phil’s //e. This really drives home what was the core and fundamental principle of the XFR STN project. This is a level of care and preservation commonly only available to artists that have already been written into the canon. It is not simply rhetoric or an overstatement to say that this project did indeed turn the capitalist meritocracy of institutional preservation on its head. It is incredibly rewarding to know that we set the stage for allowing some lesser-known artists to have the opportunity to be discovered decades from now. More important and more rewarding than seeing all of these fundamental ideals in action, though, was getting the opportunity to witness Phil and his wife see his work for the first time in decades, and to share it with their daughter for the first time ever. Thanks Phil.

Phil Sanders and family

Authenticity is Relative


For those interested in video game preservation, I highly recommend giving the following article a careful read: “In Search of Scanlines: The Best CRT Monitor for Retro Gaming.” Considering the wave of acquisitions at MoMA, my colleagues and I have been spending a whole lot of time thinking about how display hardware shapes the visual experience of a game, and in each case, what should be considered the ideal rendering by which to judge any sort of emulation. Needless to say, I’ve been chatting a bit with Nick Montfort.

I found the article interesting not for its discussion of CRTs, but because Fudoh’s approach is a bit different from most I have encountered. He doesn’t care about CRTs out of concern for historically accurate hardware, and thus period-accurate image quality. Rather, his desire is to achieve the best possible image. He is obsessed with the signal, to the extent that he will modify a console that originally output composite video so that it offers RGB. The quality he is achieving is one that game designers and players would never have seen when designing and playing these games. This is the antithesis of the CRT emulation camp, whose concern is accurate reproduction of an image quality that bears fidelity to consumer-grade CRTs of a given game’s period.

Fudoh’s work is impressive to be sure, but is he barking up the wrong tree? On the other hand, does CRT emulation preserve the wrong thing? Is there a hybrid approach that combines these two apparently opposing schools of thought? What do you think?

How to back up your Tumblr

By now you’ve likely heard that Yahoo! intends to acquire Tumblr. While the acquisition is not (at time of writing) confirmed, and even if it goes down it does not mean death for Tumblr, you are likely wondering how to take your stuff and run. Today. Here’s how.

Bad news: there is no official Tumblr backup tool. Some people have cooked up well-intentioned tools, but none of them preserve the look and feel of your Tumblr. For those who have spent countless hours perfecting their own theme, this will simply not do. If you want a backup of your Tumblr that looks right, and that you can easily upload to your own server, I’d recommend using HTTrack.

Head over to the official HTTrack site and download the distribution you need. If you’re on a Mac and use Homebrew, you can just install it that way instead. The Windows build provides a GUI, but the others do not. The command-line interface is cross-platform, however. If it’s installed correctly, you can copy and paste the line below, replacing the URL with the URL of your Tumblr, and let ’er rip.

httrack -w -n -c8 -N0 -s0 -q -v -I0 -p3 http://example.tumblr.com

This can take quite a while depending on the size of your Tumblr. If you use infinite scroll, this should work regardless, so long as you’ve maintained the “next” and “previous” pagination hyperlink markup in your template. If you haven’t (this would certainly be an edge case, but I’ve seen it with some artists’ themes), I’m sorry, but your site just isn’t crawlable. When all is said and done you’ll be left with flat HTML files, CSS, JS, images, videos, audio, etc., with all hyperlinks to crawled content modified to relative paths – meaning it is a backup you can toss on any server. If you’re curious about the options I’ve used in the line above, here are their full descriptions from the documentation. Enjoy.

w *mirror web sites (--mirror)
n get non-html files 'near' an html file (ex: an image located outside) (--near)
cN number of multiple connections (*c8) (--sockets[=N])
NN structure type (0 *original structure, 1+: see below) (--structure[=N])
  or user defined structure (-N "%h%p/%n%q.%t")
q no questions - quiet mode (--quiet)
%v display on screen filenames downloaded (in realtime) (--display)
I *make an index (I0 don't make) (--index)
pN priority mode: (* p3) (--priority[=N])
  0 just scan, don't save anything (for checking links)
  1 save only html files
  2 save only non html files
 *3 save all files
  7 get html files before, then treat other files

An overdue update

Dear internet: over the course of the past week, a few people have mentioned that they have heard I was leaving Rhizome. This is not the case. There have, however, been some wonderful changes in my professional life that I have not quite shared publicly. I’d like to update you, dear reader, on the particulars of these changes – lest misinformation befall you.

As of two weeks ago, I am officially splitting my time between Rhizome and the conservation department of the Museum of Modern Art. I have joined the fantastic team at MoMA to lead the development of the Digital Repository for Museum Collections – a suite of tools and services that will together form an infrastructure for the effective preservation and conservation management of born-digital materials in the museum’s permanent collection. It is an incredibly exciting project, and I am glad to have the opportunity to help shape its future, and to work with the brilliant team at MoMA.

This is a half-time appointment – I have not left Rhizome. I’m fortunate enough to have colleagues that are open to a little institutional polyamory. I am grateful for this, as things have never been more exciting in Rhizome’s conservation department. We have been hard at work on restoring The Thing BBS – one of the earliest online communities of contemporary artists, and I am pleased to say that a small portion of what we’ve dug up will be on display as part of the New Museum’s next exhibition, “1993: Experimental Jet Set, Trash and No Star.” The exhibition is already partially on view, but opens fully next Thursday, Feb 14th. In conjunction with the exhibition, we are hosting an event in March titled “The Internet Before the Web: Preserving Early Networked Cultures.” I will be in conversation with Wolfgang Staehle (artist and founder of The Thing BBS), and none other than Jason Scott. Needless to say, you might want to reserve your tickets asap.

So – that’s it. Lots of new things… more to come.

Analyzing Browser History

I recently participated in First Five – a Tumblr where guests list the first five websites they visit daily (my five here). Like recent contributor Luke Robert Mason, I find the concept somewhat foreign. As a poster child for consumption via aggregation, apps, and streams, I do not pull up my bookmarks in the morning as though unfolding the daily newspaper. Rather than compensating for this by providing (as many contributors seem to) my favorite five, I decided to provide a strict, data-driven answer to the question: of a sample of the first five URLs I type into my browser every morning, which are the most common? Although my content consumption is divided heavily between apps on my mobile device and desktop and browser-based apps on my laptop, I chose, for time and feasibility’s sake, to focus on my browser history. I hypothesized that the data would show a few major content sources, mostly browser-based channels (such as Twitter and Prismatic), followed by a long tail of heterogeneous content they directed me to.

I use Chrome, so the clear route was to analyze the SQLite database where Chrome stores its history. On a Mac, this is located at ~/Library/Application Support/Google/Chrome/Default/History. Had I prior experience analyzing SQLite with Python, I could have written something that worked with this file directly. This not being the case, I exported a CSV of results from the following query:

 

SELECT datetime(((visits.visit_time/1000000)-11644473600), "unixepoch"), urls.url
FROM urls, visits WHERE urls.id = visits.url;

 

I cleared my history near the end of May, so this yielded about three months (a pithy 7.9MB) of data in the following format:

 

"2012-09-01 20:03:15","http://example.com/therest/ofthe/url"

 

I wrote a few lines of Python that look at each row of the CSV and add each day’s first five unique hostnames to a dictionary. At the end, each hostname is counted, and the results are printed to stdout.
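For illustration, a minimal sketch of that script might look like the following. It is not the exact code behind these results; the filename “history.csv” is assumed, and the 6am-to-11am window is an assumption about what counts as “morning.”

import csv
from collections import Counter
from urllib.parse import urlparse

# A rough sketch of the counting described above: for each day, collect up to
# five unique hostnames from morning visits, then tally how often each
# hostname appears across all days.
first_five = {}  # date -> list of up to five unique hostnames

with open("history.csv", newline="") as f:
    for row in csv.reader(f):
        timestamp, url = row[0], row[1]
        date, time = timestamp.split(" ")
        if not ("06:00:00" <= time < "11:00:00"):
            continue  # only consider morning visits
        hosts = first_five.setdefault(date, [])
        host = urlparse(url).hostname
        if host and host not in hosts and len(hosts) < 5:
            hosts.append(host)

# Count each hostname's appearances in a day's first five and print to stdout.
counts = Counter(h for hosts in first_five.values() for h in hosts)
for host, count in counts.most_common():
    print(count, host)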
 
The resultant data needed to be tidied up a bit – there were analogous hostnames such as drive.google.com and docs.google.com, which could be consolidated. As well, I use Twitter via a desktop client, not Twitter.com. T.co, the hostname of Twitter’s URL shortener, scored very highly, but rather than trace these back to their original URLs, I opted to count them as Twitter.com visits. Interestingly, Netflix ranked highly, with a 15% share. I don’t watch Netflix in the morning; rather, this registered because Netflix is often the last website in my browser at night. In the morning, when waking my computer, the open tab refreshes, thus gaining a post-6am, pre-11am entry in the database. I chose to remove this from the results.

browser data

The data seems to at least somewhat reflect my hypothesis: Twitter, email (mainly listservs and News.me), and Prismatic all being aggregators, followed by a long tail of diverse content sources. A clear next step would be to analyze the from_visit field of visits in the long tail, to see if indeed the referring visits trace back to the top aggregators. All in all, the exercise does seem to illustrate stream-based browsing habits, and the idea that more and more content is fluid – less and less tied to specific websites as vessels.

Take a Picture, It’ll Last Longer…

Last week, early web folklorist, OG net artist, and friend of Rhizome Olia Lialina wrote a post that dug at Art.sy for how severely their image processing system had mangled an image of her piece My Boyfriend Came Back From The War. Despite (for the lulz) comparing the problem to the recent destruction of a 19th-century fresco, Olia is correct: the image of her work, as processed by Art.sy’s system, does look pretty bad. This is just one manifestation of an underlying problem I have been pondering lately: how can documentation of works that are screen-based, and inherently low-resolution, exist within systems that are designed specifically for high-resolution documentation of works that exist in the physical world?

For a while now Rhizome has been sharing records and images of works from the ArtBase with a handful of carefully chosen fine art image databases. It’s a nice thing to see lesser-known computer-based works alongside more established artists and media, and we like the idea of exposing our collection and the history of art engaged with technology to a broader audience. Every time we begin one of these projects we are faced with the same conundrum: image specifications. Image collections such as ArtStor, Art.sy, and Google Art Project all serve high-resolution images of paintings, prints, photographs, and objects. The user experience of these platforms is engineered to best represent documentation of an object that exists in the physical world. However, nearly all artworks in the ArtBase are screen-based – be they software, websites, video, or animated GIFs. This means that these works are inherently low-resolution. With computer- or screen-based works, there is often no finer grain of visual detail than native screen resolution. In documenting these works, we are not faced with the bottomless pursuit of capturing (or exceeding) human perception, as with the documentation of physical works of art; the pixel is the lowest level of detail. Furthermore, when endeavoring to capture images of authentic renderings (i.e. period-specific web browser and operating system), the dimensions of the image are (or at least, in some cases should be) limited to the native resolution of displays of the time when the work was created.

 

Detail of My Boyfriend Came Back From The War

 

For example, the image of My Boyfriend Came Back From The War we shared with Art.sy (seen here) is a 746 x 436 px lossless PNG screenshot of the website, as rendered by Netscape Navigator 3.0 (1996) running in Mac OS 9.0 (1999) emulated by SheepShaver. Although the image was cropped to remove the operating system’s graphical user interface and the outer frame of the web browser, it still possesses inherent historic accuracy and artifactual and evidential quality. The dimensions of the image could have been slightly smaller or slightly larger, but they were defined by what was a comfortable browser window size within the emulation, which was sized to a resolution (800 x 600) appropriate to typical hardware of the time. As well, the images embedded in Olia’s HTML have variable percentage-based widths, and adjust to the size of the browser window. This reinforces the importance of the size of the rendering, as modern browsers use a blurry interpolation algorithm, as opposed to the browsers of the time of the work’s creation. The delicate and sensitive nature of screen capture images is significant. Any scaling or heavy-handed compression can easily destroy the subtle artifactual qualities that the image was carefully designed to capture. With screen graphics, especially text and images from the early web, the difference of a few pixels can completely alter the feeling of a work.

 

Detail of My Boyfriend Came Back From The War, as processed by Art.sy

 

It is unsurprising that Art.sy’s system messed with the image so severely, as it is a system designed for down-scaling incredibly high-resolution images, not upscaling low-res images. Here are a few thoughts on how the system could potentially handle intentionally low-res images of born-digital materials:

1) Do nothing: do not scale the images, use lossy compression with care.
2) Improve the image processing methodology to be adaptive to images that are intentionally low-res. I am guessing that when high-resolution images are uploaded to the Art.sy CMS, they derive a set of progressively smaller images that can be fed to the image-zooming viewer. A reverse/mirror of this process could be developed, where instead of scaling down, the images are scaled up using nearest-neighbor interpolation at each level. In theory the original image size would be the smallest, and zooming in the image viewer would appear to provide a strict enlargement of the original pixels (a rough sketch of this idea follows below).
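To illustrate that second option, here is a minimal sketch using Pillow. It is only a guess at how such a derivative pipeline might work; the filename and the number of zoom levels are assumptions.

from PIL import Image

# Hypothetical "reverse pyramid": derive progressively larger copies of a
# low-res screenshot using nearest-neighbor interpolation, so that zooming
# shows a strict enlargement of the original pixels rather than a blur.
original = Image.open("my-boyfriend-screenshot-746x436.png")  # assumed filename

for level in range(1, 4):  # produce 2x, 4x, and 8x enlargements
    factor = 2 ** level
    enlarged = original.resize(
        (original.width * factor, original.height * factor),
        resample=Image.NEAREST,  # no smoothing: each source pixel becomes a crisp block
    )
    enlarged.save("zoom-level-%d.png" % level)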

Speaking realistically, Art.sy is a unique entity among the image repositories we are talking about. They have an in-house team of talented and curious engineers constantly working on improving the platform, which of course is still very new. They are thinking about how they can attack this problem as I type. I seriously doubt that larger, older platforms with fewer resources, or a different engineering culture, would be able to invest in developing new image processing solutions for what is a very small subset of their content. In light of this, it behooves archivists and conservators of computer-based works to consider how we can use documentation strategies that gel with these existing systems. Furthermore, although screenshots are the reigning paradigm in the documentation of computer-based works, do they really do the work justice in these contexts? If not – why should platforms invest in accommodating them? A strategy used by SFMOMA when contributing documentation of Miranda July’s web-based Learning to Love You More to Google Art Project was to tile many screenshots to compose one high-res image.

While on the one hand this strategy solves the problem of resolution, the result just doesn’t feel right. It amplifies what I feel to be the problem with screenshot-based documentation: it denies the work any broader context. While lossless screenshots of computer-based works are immensely valuable for preservation purposes, this approach completely neglects the physical aspect of the works. Software is not experienced in a disembodied graphical space – we interact with it through machines. If one of the major driving forces behind sharing with these image repositories is education, it seems logical to employ a documentation strategy that is simple and effective in visually communicating the context of these works, not simply a strategy that meets the image specifications. We are beginning to employ a documentation strategy at Rhizome that will touch all of these bases. It’s quite simple really: take a picture.

 

Rafaël Rozendaal’s falling falling .com

 

The above two photos (the latter taken with my iPhone) are not suggested to be an example of quality documentation – I just happened to have these on hand. They are, however, exemplary of how instantly readable a still image of a web-based work of art is when it depicts the work from the perspective of the viewer, not the computer. Such documentation does not replace the role of lossless screenshots of authentic renderings, but in the context we are speaking of – image repositories that are designed for handling high-resolution content, and which have a diverse audience – they are arguably far more evocative of the work, more educational in terms of historic context and technology, and inherently more durable in terms of image processing and compression. Of course there are significant setup costs involved in producing this type of documentation: camera, lighting, and period-specific hardware. In some cases there are software shortcuts that can be taken if hardware isn’t your thing. For example, document the work displayed on a CRT display of the proper vintage, but rather than going to the trouble of setting up a vintage Mac or PC, connect it to a modern computer running a fullscreen emulation. This approach also requires less maintenance – a library of virtual machines is far more stable than a collection of vintage computers.

It will take some time as we go about collecting the hardware, purchasing a camera and lighting, and developing a workflow (computer displays, especially CRTs, are a tricky thing to photograph), but Rhizome should be able to start producing documentation under this new rubric (high resolution, photographic, historically accurate hardware [not just software]) in the very near future. Until then, perhaps we’ll see something from Art.sy that does a better job of handling sensitive, pixel-perfect historic screenshots.

 

Media Archeology: The VODER

Voder demonstration at the 1939 World's Fair

I wrote a piece for Rhizome about an object that is currently on display at the New Museum for the Ghosts in the Machine exhibition: Homer Dudley’s VODER. It’s a really fantastic piece of history that arguably ushered in the modern era of speech synthesis, and influenced culture in some very significant ways. Here’s the full article, and here for your enjoyment is a six minute demonstration of the VODER.

Storify Is Bad For Preservation

tl;dr: Storify is not a Twitter archiving tool, but it easily could be.

After the great conversation at #ArtsTech on 6/13, I collected tweets from the evening [see them here] using Storify. It was the first time I’d ever used it. My takeaway echoes that of most people who have used Storify: fantastic.

However: there is one major gap that Storify isn’t addressing, one that would be trivial for them to close, but that would have a major impact on the landscape of personal digital preservation tools. To summarize the issue: Storify is a black-box service. When they inevitably cease to exist, so too will all of the stories and narratives that people have documented.

First things first. If you’ve never used or seen Storify, it is a free service that lets you search for tweets and arrange them into a linear narrative. It’s good for documenting small-scale things like a conversation, and large-scale things like conference hashtags. It has been well documented that Twitter’s search index is very shallow, chronologically speaking, hence the need for such tools. There is hardly a shortage of Twitter archiving tools. From IFTTT recipes to ThinkUp to various homebrew solutions, there are options aplenty.

Where these all fall short (and where Storify excels) is in facilitating hand-selection, and in producing a decent look and feel that is human-readable and in the style of a Twitter conversation. Storify makes it easy to hand-pick tweets, or start broad with an entire hashtag and edit down from there. The end result maintains the look of a content stream, including avatars and a “prettified” timestamp (i.e. “3 days ago”). You can retweet or reply to tweets directly from a finished Storify, which facilitates continued conversation rather than rendering a static archive.

The great thing about all of the other Twitter archiving tools I mentioned is that they provide you with a local copy of the data. When you use these tools, you are essentially creating a backup. When the makers of those tools close up shop, you will still have your archive of tweets in a relatively platform-agnostic format. Storify does not let you locally save and archive any of the content you create with it. They do provide an “export” feature, which embeds your Storify on a site powered by WordPress, Drupal, Tumblr (and a few other platforms). While at first glance this looks great, it is entirely misleading.

Taking a look at what Storify actually posts to your site, every last bit of it (from JS to images to CSS) is hotlinked. Meaning: when Storify goes down, so will the content you’ve “exported.” To boot, they use infinite-scroll JavaScript, so web archiving with a web crawler is pretty much out of the question. Of course there are simple ways to mitigate this: print a PDF of the page, do a “save as webpage”, etc. This seems beside the point, though. The point is that Storify has built what is essentially the most “human” tool for archiving and presenting interactions on Twitter. If they were to provide a true “export” feature that allowed users to locally back up their Storify content, they would be positioned as one of the most comprehensive personal digital preservation tools for Twitter.
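Until that happens, here is a rough sketch of one way to localize a page that Storify has “exported” to your own site: fetch the page, download the hotlinked scripts, stylesheets, and images, and rewrite the references to point at local copies. The tooling (requests and BeautifulSoup) and the URL and paths are assumptions, and content that only loads via the infinite-scroll JavaScript will still be missed, as noted above.

import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

PAGE_URL = "http://example.com/my-storify-post/"  # assumed: the page with the embed
OUT_DIR = "storify-backup"
os.makedirs(os.path.join(OUT_DIR, "assets"), exist_ok=True)

html = requests.get(PAGE_URL).text
soup = BeautifulSoup(html, "html.parser")

# Grab hotlinked scripts, stylesheets, and images, save them locally, and
# point the page at the local copies. Filename collisions are not handled;
# this is a sketch, not a finished archiving tool.
for tag, attr in (("script", "src"), ("link", "href"), ("img", "src")):
    for node in soup.find_all(tag):
        remote = node.get(attr)
        if not remote:
            continue
        absolute = urljoin(PAGE_URL, remote)
        filename = os.path.basename(urlparse(absolute).path) or "asset"
        local_path = os.path.join("assets", filename)
        try:
            data = requests.get(absolute, timeout=30).content
        except requests.RequestException:
            continue  # skip anything that fails to download
        with open(os.path.join(OUT_DIR, local_path), "wb") as f:
            f.write(data)
        node[attr] = local_path  # rewrite the reference to the local copy

with open(os.path.join(OUT_DIR, "index.html"), "w", encoding="utf-8") as f:
    f.write(str(soup))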