Using ControlNet to Create Battle Maps
Image Generative AI is great for creating art, but it struggles with something like a battle map, which needs consistent room sizes, doors, and other features that conform to a rigid grid.
Using ControlNet, we can try to constrain the output of Stable Diffusion to the specific geometry of a battle map.
Stable Diffusion models trained on the look and feel of battle maps:
- models
- D&D battlemaps - 1024 v1.0 | Stable Diffusion Checkpoint | Civitai
- [!] 2.1 model, not compatible (yet) with ControlNet models
- DnD Map Generator - v3 | Stable Diffusion Checkpoint | Civitai
- LoRAs
Raw Map
- Dungeon Scrawl can generate quick maps which we can feed to ControlNet.
Taking the raw map from Dungeon Scrawl, we can preprocess it with a Canny edge detector to produce the distinct edges that the ControlNet canny model expects.
Feeding the Canny output into ControlNet alongside a styling prompt, we can constrain the Stable Diffusion model to the map's geometry and create this output.
Nice! Obviously it's a bit blurry, but that's fixable. What we have shown is that we can provide composable inputs that add style to a map without changing the underlying content.
From here, we probably want to explore Upscaling Stable Diffusion Outputs to add more detail and resolution to the generated maps.
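As a baseline before reaching for an AI upscaler, a plain Lanczos resize (sketched here with Pillow) at least gets the resolution up; the linked note covers the more interesting detail-adding approaches:

```python
from PIL import Image

def upscale(map_image: Image.Image, factor: int = 2) -> Image.Image:
    """Simple Lanczos upscale; adds resolution but not new detail."""
    w, h = map_image.size
    return map_image.resize((w * factor, h * factor), Image.LANCZOS)

# generated_map = Image.open("battle_map.png")  # placeholder filename
# big_map = upscale(generated_map, factor=2)
```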
We also might want to explore Regional Prompting With Stable Diffusion to specify specific areas as a bedroom, tavern interior, forest, etc.
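Regional prompting tools generally take a per-region mask alongside each prompt. Since our rooms are grid-aligned rectangles, the masks are easy to build; this sketch (room coordinates are made up for illustration) shows the shape of that input:

```python
import numpy as np

def region_mask(shape: tuple, rect: tuple) -> np.ndarray:
    """Binary mask for one room; rect = (x0, y0, x1, y1) in pixels."""
    mask = np.zeros(shape, dtype=np.uint8)
    x0, y0, x1, y1 = rect
    mask[y0:y1, x0:x1] = 1
    return mask

# One mask per prompt region (coordinates are hypothetical):
regions = {
    "bedroom": region_mask((512, 512), (0, 0, 256, 256)),
    "tavern interior": region_mask((512, 512), (256, 0, 512, 512)),
}
```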