🏞👀 Decontextualized images clusters (with human vision), annotated

Visualize thematic clusters within a small set of decontextualized images.

This approach relies on human vision to detect thematic clusters among images. Removing backgrounds also helps the viewer focus on the "main" elements in the pictures. After the backgrounds are removed, the images are clustered by hand according to their content or other criteria. The process gives more control over how thematic clusters are defined, but it is very time consuming, so it works only with small sets of images.

🗄️ Examples

🧱 Inputs from TCAT

📃 Steps

  1. Install the Tab Save Chrome extension
  2. Copy and paste the list of URLs into Tab Save
  3. Download the images from the URL list with Tab Save
  4. Go to
  5. Remove the background of each image with
  6. Import the images without backgrounds in
  7. Cluster the images according to their content (or other criteria)
  8. Annotate the clusters (add labels, demarcate clusters with lines and circles)
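If Tab Save is unavailable, the batch download in steps 1–3 can also be scripted. The sketch below is one possible approach using only the Python standard library; the function names and the `images` output folder are illustrative assumptions, not part of the recipe.

```python
import os
import urllib.parse
import urllib.request


def filename_from_url(url, index):
    """Derive a local filename from an image URL; fall back to an indexed name."""
    path = urllib.parse.urlparse(url).path
    name = os.path.basename(path)
    return name if name else f"image_{index}.jpg"


def download_images(urls, out_dir="images"):
    """Download each URL in the list into out_dir, skipping unreachable ones."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for i, url in enumerate(urls):
        target = os.path.join(out_dir, filename_from_url(url, i))
        try:
            urllib.request.urlretrieve(url, target)
            saved.append(target)
        except OSError:
            # Skip failed downloads rather than aborting the whole batch
            pass
    return saved
```

The URL list itself would come from the TCAT export, one URL per line, read with something like `urls = open("urls.txt").read().split()`.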

🐙 Inspiration, acknowledgments and contributors

This and other visual methods recipes were originally formulated by Gabriele Colombo drawing on his doctoral work exploring the design of composite images. They were documented and refined for a module on Digital Methods for Internet Studies: Concepts, Devices and Data convened by Liliana Bounegru and Jonathan Gray at the Department of Digital Humanities, King’s College London, leading to a set of collaborative group projects with their students and the European Forest Institute. The approaches behind these recipes draw on several years of experimentation with images in the context of research and teaching at the Visual Methodologies Collective (Amsterdam University of Applied Sciences), the Digital Methods Initiative (University of Amsterdam), DensityDesign Lab (Politecnico di Milano), the médialab (Sciences Po, Paris) and beyond. You can read more about these approaches in Colombo, 2019 and Niederer & Colombo, 2019. Further readings can be found in the visual methods Zotero bibliography.