Colin Brooks

App stores and museums

A couple of months ago I had the pleasure, or burden, of releasing either the first apps ever, or the first apps in the last decade, for the Whitney Museum. Working with Steven Fragale, we revived Sapponckanikan, an app he and Alan Michelson released and first showed at the Whitney in 2019 as part of Wolf Nation.

The lobby of the Whitney Museum with augmented reality tobacco plants growing from the floor

Sapponckanikan is an augmented reality (AR) work that overlays the native strain of tobacco the Lenape originally planted on the island that would become Manhattan onto the viewer’s camera feed in the Museum lobby. The app was built with Unity, and the plants were modeled after references from Alan’s sister’s garden.

By 2024 the version of the app from 2019, built with Unity and submitted to the Apple and Android app stores from Steven’s own accounts, was no longer publicly available. Privately, locally, the app would not build for current phone operating systems (OSes) at all, and Steven had to rewrite the app multiple times: at least once for all the changes to ARKit, ARCore, and Vuforia (a common SDK for AR experiences), and at least once more for the transition from Intel to Apple Silicon breaking shadow rendering.

That an app released for a show that ended in 2020 was already off the app stores and unbuildable by early 2024 was a bit shocking, even to someone (me) who already thought native apps decay incredibly quickly. I was expecting some amount of platform drift, like having to re-sign some accounts or app files and maybe tweak a few settings for deprecated bits of iOS and Android, not having to effectively re-code the app twice and even then still run into new roadblocks.

One of the things that still feels strangest to me about this entire process was that even when we got the app working on both iOS and Android, both app stores denied our initial submissions for release, for reasons that lie somewhere on the spectrum from perplexing to infuriating. For Google the issues were relatively straightforward, requiring extra built-in safety language warning children that they should have an adult present and pay attention to their surroundings. That seems slightly overbearing to me, but fine, sure, though I’ll point out that those sorts of changes required creating new assets in Unity and laying out new elements, which is not nothing.

On the other hand, Apple just was not having it.

Initially Apple blocked submission of the app on the grounds of “Minimum Functionality” (Guideline 4.2.1). Most of the 10 back-and-forth messages I had with my app reviewer were over this, though helpfully they later added Guideline 2.1, Information Needed, asking for a video demonstrating the app on actual hardware, before rescinding that additional requirement during appeal.

If this were something like a todo list or budgeting app I could understand the need to make sure there’s enough basic functionality in the app to make it reasonable for it to be an app in the first place (though my bird app was also caught by this), but at the risk of stating the obvious, this one is art, and I can confidently say I don’t know what the “minimum functionality” should be for an artwork. I attempted to plead this case through 10 messages without any luck, and honestly figured this just wasn’t going to be releasable on iOS as an app, and would have to wait for whatever far-off day mobile Safari gains meaningful WebXR support.

A possum with its mouth open with the caption, Your art does not meet Guideline 4.2.1 Minimum Functionality, followed by [Screaming beings].

Thanks to Reddit I had run across the suggestion that sometimes it’s worth just renaming the project and resubmitting to get a new app reviewer, a strategy that makes me slightly nervous even to type out, but one that in our case worked. After we resubmitted as a new project, Apple promptly approved Sapponckanikan, and it entered the world anew.

It’s worth noting that this process took a few weeks, every day of which brought us closer to the opening of the show the app was supposed to be in and available for the public to download, and increased my own anxiety about the project. I’m sympathetic to how much garbage must be submitted to the app stores every day for Apple and Google to wade through, but at the same time it’s pretty disheartening that artists’ experimentation with new technologies like AR is so limited by what’s able to get approved in the app stores. And that doesn’t touch on the other barriers: having developer accounts (not least our original Apple account getting banned for reasons I still don’t understand), paying developer fees, validating who you are (I got to see the Whitney’s water bill for the first time, serving as official documentation for our organization), learning how to archive and sign the apps, posting privacy policies online, etc., etc. There’s a lot in a public release of a project even if only a few hundred people might ultimately use it. Next to all this is the web, a platform that, for all its faults, basically lets you post anything you want and, if you don’t need to touch it again, will in many cases let it run forever.

Sapponckanikan was a good learning experience for me, and I’m sure for Steven and Alan, and I imagine for others throughout the museum, as we figured out the process of having a more institutional presence on the app stores. At the same time, and more surprising than not, it reinforced my own belief that the web is a really great place for artists in a way that the more locked-down app stores are not, even if they can technically offer bleeding-edge features that the web only comes to later. For $5 you can have a VPS, or for mostly no dollars GitHub Pages or Cloudflare Pages or any number of other free static hosts, and if you lean on well-supported native web technologies like HTML, CSS, and JS, or even PHP, or Flash if this were 10 years ago (thanks Ruffle), then you’ll almost without question outlast any native app. Everything decays eventually, but the time horizon on web apps that don’t do anything too outlandish or depend on content management systems or big external dependencies looks a lot farther off than anything I’ve written in Xcode.

Check out Sapponckanikan on iOS or Android, and hopefully eventually the web.

[Read this on Medium]

Sapponckanikan

I worked with Steven Fragale to revive this 2019 augmented reality (AR) work for re-release in 2024 on both the Apple and Android app stores, in addition to an exhibition-specific build for display on iPad.

Daily Bird: a bird a day

I’ve always liked the Wikipedia widgets for seeing featured articles, so I made an app to do that for birds. Daily Bird pulls data from public sources and displays various birds of the day, filterable by endemic NZ birds or only pigeons.

xhairymutantx

For the 2024 Whitney Biennial I worked with Holly Herndon + Mat Dryhurst to build an app that takes user generated prompts and creates images with SDXL and a custom LoRA.

More classic net art

After [too long] all 57 artport “Gate Pages” are properly archived and mostly working again. This series ran from 2001 to 2006, and it’s an interesting time capsule of an experimental era, filled with Perl, Flash, and Java applets.

Keyword tagging artworks with GPT4 and Google Vision

I have no data to back this up (a great way to start a blog about data), but I think when people visit a museum’s online collection there are two kinds of things they’re likely to type in a search box if you give them the chance: 1) something specific and relevant to that collection, or 2) an animal they like. For somewhere like the Whitney the former is largely solvable with existing metadata, but the latter presents a real problem. How do you tag a constantly growing collection of tens of thousands of objects with terms that may also shift and change over time? The answer might ideally be “carefully” and “with actual people”, but this is Museums and everyone is already busy.

For a few years we’ve been using Google’s Vision API to automatically keyword tag artworks in our collection, after a brief flirtation with AWS’ Rekognition. This has always been an internal staff-only feature, as it’s never felt particularly accurate and has a huge amount of noise with bad tags. But in limited use it can be really helpful—if there’s a big storm in NYC and we want to post something like an image of lightning, the Google Vision API has been fairly good at that kind of labeling, making up for the fact that not every artwork with lightning in it would necessarily have it in the title. It’s also been alright if you want like, cats (though it’s oddly less good at birds or whales). But considering we’re a museum with a huge amount of contemporary art, with people and their bodies and complex social contexts…it’s just been too much to feel comfortable making these tags public without some kind of serious review, and the tags spit out by Google Vision are often quite basic and unhelpful anyways.

ENTER GPT4

Of course the story of AI over the last year has been dominated by OpenAI and GPT4, and with the public launch of GPT4 with Vision the opportunity to test it at image tagging became available. This is something I’ve been quite excited about, because as best as I can tell Google Vision has stayed pretty much unchanged in the years we’ve used it, and my hope has been that GPT4 with Vision might offer a generational leap forward.

The methodology

To compare GPT4 with Vision and Google Vision, I coded the output from running both over 50 random works in the Whitney’s collection that have public images. For Google Vision I used the Google Vision API’s label detection functionality, and for GPT4 I gave it the following prompt:

Create a comma separated list of keywords for the image

From there I coded the resulting keywords into 3 categories: “good”, “okay”, and “bad”. “Good” means the keyword is accurate and useful, “okay” means it’s technically true but not very useful, and “bad” means it’s straight up wrong.
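For a concrete sense of the mechanics, here is a minimal sketch of what the two calls might look like. This is an illustration rather than our actual pipeline: it assumes Node’s built-in fetch, the public REST endpoints, API keys in hypothetical environment variables, and the gpt-4-vision-preview model name from around the time of writing.

// Sketch only: fetch Google Vision labels and GPT4 with Vision keywords for one
// publicly accessible image URL, so the two lists can be compared side by side.
// API keys are assumed to live in (hypothetically named) environment variables.

async function googleVisionLabels(imageUrl: string): Promise<string[]> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${process.env.GOOGLE_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [
          {
            image: { source: { imageUri: imageUrl } },
            features: [{ type: "LABEL_DETECTION" }], // defaults to 10 labels
          },
        ],
      }),
    }
  );
  const data = await res.json();
  return (data.responses?.[0]?.labelAnnotations ?? []).map(
    (label: { description: string }) => label.description
  );
}

async function gpt4VisionKeywords(imageUrl: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview", // the vision model name at the time
      max_tokens: 300,
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Create a comma separated list of keywords for the image" },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  const text: string = data.choices?.[0]?.message?.content ?? "";
  // Split the comma separated completion back into individual keywords.
  return text.split(",").map((keyword) => keyword.trim()).filter(Boolean);
}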

There are a lot of caveats and considerations here, including:

  • What’s good/okay/bad is still quite subjective.
  • Because it’s a random 50 works, it doesn’t cover all the kinds of artworks in the collection (i.e. we have way more prints and drawings than we do installations or films).
  • The GPT4 completions were not limited to a specific number of keywords, while Google almost always gave back exactly 10 (the default).
  • GPT4 is less deterministic than the Google model, so output can vary more widely each time it’s run.
  • I’m ignoring Google’s confidence scoring on labels.
  • I’m sure someone else can write a better GPT4 prompt.

But something is better than nothing.

The results

Screenshot of a Google Sheets spreadsheet.
This spreadsheet took longer to make than I expected.

After a few hours coding all of the keywords for 50 artwork images (on top of more anecdotal investigation), it’s clear that GPT4 is returning much better results than Google Vision. On a basic level the results are:

OpenAI GPT4 with Vision keywords
Good: 576
Okay: 86
Bad: 69
Total: 730

Google Vision label detection keywords
Good: 177
Okay: 131
Bad: 189
Total: 496

Overall GPT4 returned keywords that were 79% “good” compared to Google Vision’s 36%. And Google dwarfed GPT4 in terms of bad keywords, with fully 38% being “bad” compared to only 9.5% of GPT4’s. But this doesn’t even tell the full story.
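Those percentages fall directly out of the counts above; as a quick sanity check of the arithmetic (a throwaway sketch, not part of the tagging pipeline):

// Category share = category count / total keywords returned by that model.
const share = (count: number, total: number) =>
  ((count / total) * 100).toFixed(1) + "%";

share(576, 730); // GPT4 "good":   "78.9%", rounded to 79%
share(69, 730);  // GPT4 "bad":    "9.5%"
share(177, 496); // Google "good": "35.7%", rounded to 36%
share(189, 496); // Google "bad":  "38.1%", rounded to 38%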

Quality AND quantity

Photograph of a purplish red Sea Fan.
Lesley Schiff, Sea Fan, 1980. Photocopy, sheet: 10 1/2 × 8 1/2 in. (26.7 × 21.6 cm). Whitney Museum of American Art, New York; gift of Judith Goldman 2004.3.8

The quality of GPT4’s keywords more often than not puts Google Vision’s to shame. For the above work, Lesley Schiff’s Sea Fan, here are the two sets of tags:

GPT4

sea fan, coral, marine life, red, intricate, natural pattern, underwater organism, fan-shaped, marine biology, texture, ocean, delicate structure

Google Vision

Gesture, Art, Font, Pattern, Rectangle, Visual arts, Drawing, Illustration, Printmaking, Painting

…or how about for the below work, Edward Hopper’s Merry Xmas:

Ink drawing on paper of animals looking at a fence that says Merry Xmas.
Edward Hopper, Merry Xmas, 1895–1899. Pen and ink and graphite pencil on paper, sheet: 5 5/16 × 6 7/16 in. (13.5 × 16.4 cm). Whitney Museum of American Art, New York; Josephine N. Hopper Bequest 70.1559.83. © Heirs of Josephine N. Hopper/Licensed by Artists Rights Society (ARS), New York

GPT4

sketch, Christmas, greeting card, Merry Xmas, cats, piano keys, musical theme, black and white, ink drawing, whimsical, holiday message, to Pop, from Marion

Google Vision

Organism, Font, Art, Pattern, Drawing, Illustration, Visual arts, Rectangle, Line art, Artwork

GPT4 is able to be specific in its keywords in a way that Google is not. Tags like “sea fan” and “delicate structure” are up against “art” and “pattern” in the case of Sea Fan. In Merry Xmas, it’s tags like “Christmas” and “ink drawing” versus “Organism” and “drawing”. Individually none of these provides a slam-dunk winner (especially given the errors GPT4 made with Merry Xmas), but over a sequence of 50 images this trend of greater specificity broadly holds true.

A large number of both the “good” and “okay” Google Vision keywords are things like “art” or “rectangle”, and anytime there’s a person it often just names different parts of their body that, while technically visible, are not the focus. GPT4, by contrast, tends to only use those keywords when they’re particularly prominent. GPT4 will label a Guerrilla Girls poster with “feminism” and “gender inequality” while Google Vision will only say “poster” and “handwriting”. GPT4 also shows a greater understanding of context. Consider this work by Rodney Graham:

Aged-looking photograph of a tree in a field, upside down.
Rodney Graham, Oak, Middle Aston, 1990. Chromogenic print, sheet (sight): 90 5/16 × 71 3/8 in. (229.4 × 181.3 cm) Image (sight): 90 5/16 × 71 3/8 in. (229.4 × 181.3 cm). Whitney Museum of American Art, New York; gift from the Emily Fisher Landau Collection 2011.152. © Rodney Graham

GPT4

inverted, tree, snow, framed, photograph, winter, landscape, surreal, nature, upside-down

Google Vision

Rectangle, Branch, Plant, Art, Twig, Tints and shades, Tree, Font, Painting, Visual arts

Visually by far the most defining aspects of this photograph are that it is a) of a tree and b) upside down. Google gets the “tree” just like GPT4, but it does not give anything indicating the inversion. Google also adds maybe technically true tags like “twig” and “tints and shades”, but how useful are those?

GPT4 not only does better on the details, but when it misses you can often see how it got confused. It’s far more understandable to interpret what appears to be regular grass as potentially snow-covered than it is to categorize this whole image as a painting, like Google does.

Conclusions

Without claiming that this is in any way an exhaustive comparison, it is still abundantly clear that for many of the kinds of works in the Whitney’s collection, GPT4 with Vision is much better at keyword tagging than Google Vision. Anecdotally and through this (limited) analysis, the two are far enough apart that there really isn’t any question about which is more useful for us. If anything, this raises the much thornier question of whether it’s good enough to incorporate into the public interface of the collection.

Full disclosure: it’s at this point that I was planning to use Henry Taylor’s THE TIMES THAY AINT A CHANGING, FAST ENOUGH! as an example of where GPT4 could make the kind of mistake that would throw some ice water on that consideration.

Stylized painting of Philando Castile in a car as he's shot by police.
Henry Taylor, THE TIMES THAY AINT A CHANGING, FAST ENOUGH!, 2017. Acrylic on canvas, overall: 72 × 96 in. (182.9 × 243.8 cm). Whitney Museum of American Art, New York; purchase, with funds from Jonathan Sobel & Marcia Dunn 2017.192. © Henry Taylor

When I first started experimenting with GPT4 with Vision, GPT4 would tend to describe this image as being abstract, inside a car, with a bear in the backseat. What the painting actually shows is the murder of Philando Castile in his car by a police officer in 2016. While it isn’t necessarily surprising that an algorithm might mistake Taylor’s stylized abstraction of a man for something else, given the documented history of racism by AI/ML algorithms, this isn’t the kind of mistake to be shrugged off. But when I re-ran this image through our tagging pipeline the current keywords were:

GPT4

abstract, painting, colorful, modern art, bold lines, geometric shapes, blue background, yellow, green, white, black, brush strokes, contemporary

Google Vision

World, Organism, Paint, Art, Creative arts, Font, Rectangle, Painting, Tints and shades, Pattern

No bear.

I’m not entirely sure what to make of this. The lack of determinism with GPT4 may mean that, run enough times, I’d get back the problematic keywords once again (though at the time of writing, having run it half a dozen times, I’m not). Or it’s possible the model has changed enough since I first started working with it that this wouldn’t happen anymore. I have no way to know.

The actual conclusion

GPT4 with Vision is a big enough improvement over Google Vision label detection that it was an easy decision to swap our internal usage over to it. Whether it’s good enough to use publicly, and whether it can be contextualized well enough or the prompt massaged enough to blunt the concerns about bad keywords, I’m still unsure. An imperfect tool is a lot easier to explain to staff who use our online collection regularly than it is to a person who might come to our site once from a link someone else shared with them, and a lot less likely to cause harm.

There’s clearly a lot of promise here, and I hope we can find ways to utilize it.

[Read this on Medium]

Rachel Rossin: The Maw Of

The Maw Of is an immense, incredible project with a ton of moving pieces. And some of those pieces took a fair bit of work to adapt to the museum’s various systems, including an IRL installation for Refigured.

Rebuilding digital signage at the Whitney

Hark a webpage.

It’s webpages. The museum’s new digital signage system is webpages.

Some background

Since opening in 2015, the Whitney’s 99 Gansevoort building has been served by digital signs on every floor, plus a pair outside the lobby on the terrace. Each of these 18ish displays of various shapes and sizes has been driven by some fairly complicated software, outputting some manner of template that mostly shows the various exhibitions and events happening around the museum. For the screens outside of the building this is mostly promotional content, while the interior screens act as tools for wayfinding and determining what’s available to a visitor and on what floor. It’s these interior screens that are most critical to people visiting the Whitney, and they also cause the most technical problems.

A very orange elevator screen on the left, and some very off-color and overlapping text on the right.

As the hardware began to age out and fail, the opportunity was there to re-work how these kinds of displays were driven, and build a more modern and supportable software solution. For us that meant turning to the web.

The old way: Flash + XML

The interior screens were basically Flash templates fed by a big XML file that we output regularly from the CMS that powers whitney.org. That XML file was ingested by a separate stack of software, representing multiple 3rd party solutions, and that ingestion is where many of the problems we’ve faced over the last few years came from. Fundamentally, the data model behind what we input on whitney.org to build out our exhibitions and events online did not match that of the system we were pushing to for signage, and in that gap there have been a lot of bugs. Neither data model was necessarily wrong, but the fact that they didn’t match was an issue, and it became more and more of one as we gradually altered the content we were posting.

Similarly, our digital design practices also drifted over the years, while the signage templates stayed static. Part of that was technical limitations, but part of it was also just the fact that nobody wanted to touch anything when the stakes are multi-day screen outages.

The new way: HTML + React

It feels a little goofy to say this, but since 2015 the web has only continued its growth as an actual platform for applications, whether that’s the popularity of solutions like Electron for app delivery, or React and Vue for building interactive UIs, or just the fact that CSS has gotten to the point where complex layouts are no longer a pain thanks to Flexbox and Grid. The web is really, really good for building out the kind of solution you need for robust digital signage, and it has the advantage of being something the Museum is already skilled up to do. It can also be iterated on alongside our primary website, whitney.org, in a way that would never be possible with a more bespoke software solution.

For the new signage software we built out some new endpoints for our CMS’s API, and new single-page React apps for each kind of sign. We are using websockets for some instantaneous controls over the signs, but for the most part they just poll for changes every 30 seconds. Content is pulled (via the API) largely from the same exhibition and event records we enter into the CMS for display on whitney.org, so there’s no double entry into separate systems.
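As a rough illustration of the polling side, a sign app can get by with something small like the hook below. The endpoint name, response shape, and hook itself are hypothetical stand-ins rather than our actual code, but the 30-second poll-and-swap pattern is the important part.

import { useEffect, useState } from "react";

// Hypothetical shape of what a sign needs in order to render.
type SignContent = { exhibitions: unknown[]; events: unknown[] };

// Poll a (hypothetical) CMS API endpoint for this sign's content every 30 seconds,
// keeping the last good response if a request fails.
function useSignContent(signId: string, intervalMs = 30_000): SignContent | null {
  const [content, setContent] = useState<SignContent | null>(null);

  useEffect(() => {
    let cancelled = false;

    async function poll() {
      try {
        const res = await fetch(`/api/signage/${signId}`);
        if (!cancelled && res.ok) setContent(await res.json());
      } catch {
        // Ignore transient failures; the sign keeps showing what it already has.
      }
    }

    poll(); // fetch immediately on mount
    const timer = setInterval(poll, intervalMs);
    return () => {
      cancelled = true;
      clearInterval(timer);
    };
  }, [signId, intervalMs]);

  return content;
}

A websocket channel sits alongside this for the handful of controls that need to feel instantaneous.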

The biggest obstacle to this approach wasn’t building the web apps (this was somewhat tricky, but we didn’t need to use websockets or have robust animations), or getting buy-in (this was an easy sell, given how messy the alternatives would be). What turned out to be the trickiest piece of the puzzle was figuring out what hardware would be most appropriate to run these webpages.

Enter BrightSigns

Essentially, what we needed for hardware was something that would:

  1. Be able to display a webpage with a relatively modern browser
  2. Be relatively easy to set up
  3. Be very stable
  4. Be performant enough for smooth animations at high resolution
  5. Not require constant updates, or hounding about updates, or messages from the OS, or any other kind of interruptions when we least expect it

It’s possible to configure PCs, or Macs, or Linux boxes to do everything we needed, but the overhead for that kind of solution would be extremely challenging for our relatively small staff. We’re not in a position to build out robust tooling for managing a fleet of full-on computers, and to troubleshoot whatever issues we’d surely run into with OS updates or with running a chromeless Chrome. So we needed something more off the shelf, designed to run a webpage fullscreen for long periods of time with minimal upkeep. And thankfully for us, BrightSigns are able to do just that.

Hark BrightSigns.

BrightSigns are basically just small Linux computers running an OS built for digital signage. If you work at a museum that shows video installations, you might already have a whole bunch of them. They’re relatively inexpensive and easy to source, and they form the hardware side of our new web-based approach. They’re not perfect, and BrightAuthor:connected is…not exactly dream software, but it’s good enough.

The results

The world’s largest cardboard box and the display that came out of it. There are also displays all over the building.

At the time of writing, we’ve replaced the majority of the old displays with new hardware, and we’ve swapped out all of the content inside the building to run on the new website/BrightSign stack. While there have been some hiccups as we’ve adjusted, there’s no question this was the right approach for the future. We are in a far better position now to own the experience on these displays, and to adapt them as the museum continues to grow and change.

And because they’re webpages, they also play Sunrise/Sunset now 😊.

[Read this on Medium]