When AI can generate unlimited Holocaust imagery, protecting authenticity requires establishing methodological constraints rather than avoiding these technologies altogether. Three principles - transparent marking of generated content, scholarly verification, and spatial grounding - could transform AI from a threat to authenticity into a tool for responsible visualisation. These constraints rest on the premise that some visual representation, however imperfect, may serve memory better than leaving crucial narratives invisible to contemporary audiences - provided that imperfection is acknowledged and minimised through rigorous frameworks.