How I AI

Mastering Midjourney: How to create consistent, beautiful brand imagery without complex prompts | Jamey Gannon

49 min episode · 2 min read

AI-Generated Summary

Key Takeaways

  • Style References Over Mood Boards: When Midjourney mood boards produce inconsistent results, switching to individual SREFs (style references) pulled directly from Pinterest images delivers sharper stylistic consistency. Midjourney tends to "average out" diverse mood boards, while targeted SREFs give the model cleaner, more specific visual direction — particularly for editorial or high-contrast aesthetics.
  • Publication and Artist Names as Prompt Shortcuts: Referencing specific publications like *Days* or *Vogue* in prompts communicates contrast levels, lighting style, and subject treatment without lengthy descriptions. A single editorial name carries the equivalent of dozens of descriptive words, making prompts shorter while producing more stylistically precise outputs — especially useful when generating hundreds of images daily.
  • Camera Model as Style Code: Pasting specific camera models (Sony RX100, DSLR, mirrorless, film) into prompts functions as a one-word style shortcut for controlling realism, grain, and era. Maintaining a saved list of camera names eliminates the need to remember technical specs like aperture settings while still achieving distinct visual treatments across large image sets.
  • Nano Banana as Spoken Photoshop: After generating base images in Midjourney, Nano Banana handles targeted edits — replacing objects, fixing hands, swapping logos — using plain language instructions. Specifying exact constraints like "keep position and size identical" and "only the left side of the keyboard is visible" prevents unintended changes and reduces iteration cycles on high-resolution final assets.
  • Deliverable as Prompt Package, Not Just Images: Instead of delivering a finished photo set and waiting for repeat business, the workflow packages the exact Midjourney setup — personalization codes, SREF codes, reference images, and final prompts — into a Figma document clients can reuse independently. This shifts the creative director's value to upfront brand definition rather than ongoing production dependency.
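The reusable-prompt-package idea above can be sketched as a small helper that assembles a Midjourney-style prompt from the same components Gannon packages for clients: a base scene, a publication shortcut, a camera model, an SREF code, and a personalization flag. This is a hypothetical illustration, not code from the episode; the function name, scene text, and code values are placeholders, though `--sref` and `--p` are real Midjourney parameters.

```python
# Hypothetical sketch of the prompt-assembly pattern described above:
# a base scene plus reusable style shortcuts (publication name, camera
# model, SREF code, personalization flag). All example values are
# placeholders, not codes from the episode.

def build_prompt(scene, publication=None, camera=None, sref=None, personalize=False):
    """Assemble a Midjourney-style prompt from reusable style components."""
    parts = [scene]
    if publication:
        # A single editorial name stands in for contrast/lighting description
        parts.append(f"editorial style of {publication}")
    if camera:
        # Camera model acts as a one-word realism/grain/era shortcut
        parts.append(f"shot on {camera}")
    prompt = ", ".join(parts)
    if sref:
        prompt += f" --sref {sref}"   # style reference code
    if personalize:
        prompt += " --p"              # personalization code
    return prompt

print(build_prompt(
    "founder portrait in a sunlit studio",
    publication="Vogue",
    camera="Sony RX100",
    sref="1234567890",
    personalize=True,
))
```

Stored in a shared document alongside the reference images, a list of calls like this is essentially the Figma deliverable described above: the client swaps in a new scene while every style component stays fixed.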

What It Covers

AI creative director Jamey Gannon demonstrates a repeatable Midjourney workflow for generating consistent brand imagery using style references, personalization codes, and mood boards — replacing traditional agency retainer models with a client-empowering system that delivers reusable prompt packages and reference codes.


Notable Moment

Gannon described spending fifteen minutes trying to get Midjourney to generate a hand with extra fingers for an AI-themed article — after years of doing the opposite and removing unwanted extra fingers. The reversal highlighted how deeply model behavior has shifted in just a few years of development.
