Using ControlNet to Create Battle Maps



Generative image AI is great for creating art, but it struggles with something like a battle map, which needs consistent room sizes, doors, and other features that conform to a rigid grid.

Using ControlNet, we can try to constrain the output of Stable Diffusion to the specific geometry of a battle map.

Stable Diffusion models trained on the 'look/feel' of battle maps:

  • D&D battlemaps - 1024 v1.0 | Stable Diffusion Checkpoint | Civitai
      • [!] 2.1 model

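As a minimal sketch of how one of these checkpoints could be used with the `diffusers` library (the filename `dnd_battlemaps.safetensors` is a placeholder for whatever was downloaded from Civitai), loading it on its own gives the battle-map style but no control over the layout:

```python
# Minimal sketch: load a Civitai battle-map checkpoint into diffusers.
# "dnd_battlemaps.safetensors" is a hypothetical local filename and the prompt
# is illustrative. On its own this gives the right style but an arbitrary layout.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "dnd_battlemaps.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe("top-down fantasy dungeon battle map, stone floor, grid").images[0]
image.save("uncontrolled_map.png")
```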

Raw Map

  • Dungeon Scrawl can generate quick maps which we can feed to ControlNet.

(image: raw map exported from Dungeon Scrawl)

Taking the raw map from Dungeon Scrawl, we can preprocess it with the Canny edge detector (the preprocessor for the Canny ControlNet model) to produce a map of distinct edges.
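A minimal version of that preprocessing step with OpenCV might look like the following, assuming the raw map is saved as `raw_map.png` (the filename and thresholds are placeholders):

```python
# Sketch of the Canny preprocessing step. "raw_map.png" is an assumed filename
# for the Dungeon Scrawl export; tune the thresholds to taste.
import cv2
import numpy as np
from PIL import Image

raw = cv2.imread("raw_map.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(raw, 100, 200)              # low / high hysteresis thresholds
edges_rgb = np.stack([edges] * 3, axis=-1)    # ControlNet expects a 3-channel image
Image.fromarray(edges_rgb).save("canny_map.png")
```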

(image: Canny edge map of the raw map)

Feeding the Canny edge map to ControlNet as the conditioning image, we can put all of these pieces together and constrain the Stable Diffusion model to produce the output below.

(image: generated battle map following the original layout)

Nice! It's obviously a bit blurry, but that's fixable. What we've shown is that we can provide composable inputs that add style to a map without changing the underlying content.
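For reference, the end-to-end generation step looks roughly like the sketch below. It pairs the Canny ControlNet with the battle-map checkpoint from earlier; the filenames, prompt, and seed are all assumptions, and the ControlNet's Stable Diffusion version has to match the checkpoint's (the SD 1.5 Canny ControlNet shown here would need to be swapped for a 2.1-compatible one when using the 2.1 checkpoint above).

```python
# Sketch of the end-to-end generation: battle-map checkpoint + Canny ControlNet
# + the edge map produced above. Filenames, prompt, and seed are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_single_file(
    "dnd_battlemaps.safetensors",   # hypothetical Civitai checkpoint path
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_image = load_image("canny_map.png")
image = pipe(
    prompt="top-down fantasy dungeon battle map, stone floor, torch-lit, high detail",
    negative_prompt="blurry, text, watermark",
    image=canny_image,
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("battle_map.png")
```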


From here, we probably want to explore Upscaling Stable Diffusion Outputs to add more detail and resolution to the generated maps.
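One possible route (not necessarily what that note settles on) is the Stable Diffusion x4 upscaler pipeline in `diffusers`; the filenames and prompt below are placeholders:

```python
# Sketch: upscaling a generated map with the SD x4 upscaler. This is one option
# among several (ESRGAN-style upscalers, img2img "highres fix", etc.).
# Note: upscaling large inputs is memory-hungry; tiling or resizing may be needed.
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = load_image("battle_map.png")
upscaled = upscaler(
    prompt="top-down fantasy dungeon battle map, stone floor, high detail",
    image=low_res,
).images[0]
upscaled.save("battle_map_4x.png")
```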

We also might want to explore Regional Prompting With Stable Diffusion to designate specific areas as a bedroom, tavern interior, forest, etc.