Description
Explore a sampling of more than 300,000 images from Yale University Library’s Visual Resources Collection (VRC), a legacy slide collection used primarily for teaching in art history and architecture studies from the 1940s to the early 2000s, via the image similarity visualization tool PixPlot. Each image was processed with an Inception convolutional neural network trained on ImageNet 2012 and projected into a two-dimensional manifold with the UMAP algorithm, such that similar images cluster together. Through PixPlot, researchers can engage with and interpret the VRC images and metadata in new ways and at new scales, while also gaining a broader perspective on pedagogical practices in Yale’s History of Art Department over the last 60 years.
PixPlot facilitates the dynamic exploration of tens of thousands of images. Inspired by Benoît Seguin et al.’s paper at the DH 2016 conference in Kraków, PixPlot uses the penultimate layer of a convolutional neural network pre-trained for image classification to derive a robust featurization space of 2,048 dimensions.
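The pipeline described above can be sketched in a few lines. In the real tool, each image is run through Inception (pre-trained on ImageNet 2012), the penultimate layer’s 2,048-dimensional activation is kept as the image’s feature vector, and UMAP projects those vectors to 2-D. The sketch below is a minimal, self-contained illustration of the shapes involved: it simulates the feature matrix with random data and substitutes a plain PCA projection (NumPy only) for UMAP, since running Inception and umap-learn here would require heavyweight dependencies. None of the variable names come from PixPlot’s code.

```python
import numpy as np

# Simulated stand-in for PixPlot's featurization step: one 2,048-d
# feature vector per image (in practice, Inception's penultimate layer).
rng = np.random.default_rng(0)
n_images, n_features = 500, 2048
features = rng.normal(size=(n_images, n_features))

# Dimensionality reduction to 2-D. PixPlot uses UMAP; as a dependency-free
# substitute we project onto the top-2 principal components via SVD.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T  # shape (n_images, 2): one x/y point per image

print(embedding.shape)  # (500, 2)
```

With the umap-learn package installed, the PCA step would instead read `embedding = umap.UMAP(n_components=2).fit_transform(features)`, which is what produces the clusters of visually similar images seen in the PixPlot interface.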