Hey friends, and a special welcome to new subscribers!
This post is a great example of
a) how the AI space is evolving fast
b) how rules are rewritten in the blink of an eye
c) how most people won’t respond fast, so there’s opportunity for those who do
Welcome to the age of banana marketing…
Maybe you’ve seen bananas flooding your timeline lately?
Google dropped a new AI image model called Gemini 2.5 Flash Image. Nicknamed "Nano Banana" because... sure, why not.
This one hits different though.
We've had image generators for a while now. Midjourney, all that. These tools can be great for generating experimental creatives. But if you just want to add a logo to an image? Not so much.
Want the same character across multiple images? You're basically buying lottery tickets until something looks consistent.
Not really fit for the bread-and-butter creative work that a traditional agency would do, for instance.
Nano Banana actually gets this. You can show it an image and say "make his shirt blue" or "move this over here." It just... does it. Flawlessly.
Or, you can go further. Like this example:

(The input was all the images below, and the output was the photo up top. Actually insane)
I'm already playing around with it for client work. Same logo treatment across different contexts. Consistent product shots. Brand assets that actually look like they belong together.
Some of this is time-saving (don’t have to fire up Photoshop).
Other things are net-new.
Here’s an example: if you run a small business selling some kind of physical product, you may be doing a few photoshoots a year for brand assets. Probably paying thousands of dollars for them. Which is probably similar to what your competitors are doing.
There’s no reason you can’t go and experiment with Nano Banana. Right now. Generate 10 ad variations – 100. While your competition still runs their ads with the same stale assets from last year’s photoshoot.
Everyone inside the AI bubble is going mad over this new image model. Everyone outside of it hasn’t noticed.
Tactic: 5 ways to get started with Google’s new image model
Getting started with Nano Banana (the new Google image model) is easy:
Open Gemini, make sure the model selector is set to “2.5 Flash”, then activate the image tool. When your prompt input looks like this, you’re ready to go:

Here are five quick tactics to get started:
Remove an object you don’t want in the scene
Upload an image and a logo and add the logo to the image
Repurpose a social asset into multiple formats (e.g. turn an Instagram photo into a YouTube thumbnail). When doing this in-chat it sometimes doesn’t work so well, but via the API it does. So here’s a great opportunity to build a tool (we already built one internally for this) – see the sketch after this list.
Generate multiple scenes with the same model. It’s great at character consistency!
Remix your way to a product shot, then use the image as the starting frame in Veo 3, the video model, to create an animated scene.
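If the API route in tactic 3 sounds interesting, here’s a minimal sketch of what such a tool could look like, using Google’s google-genai Python SDK. The model id, file names, and prompt are my assumptions, so double-check the current docs before running it:

```python
# pip install google-genai pillow
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # or set GEMINI_API_KEY in your environment

source = Image.open("instagram_post.png")  # hypothetical file: the asset you want to repurpose

response = client.models.generate_content(
    # Assumed model id for Nano Banana (a preview name at the time of writing) – check the docs.
    model="gemini-2.5-flash-image-preview",
    contents=[
        source,
        "Turn this Instagram photo into a 16:9 YouTube thumbnail. "
        "Keep the product and logo exactly as they are and extend the background to fill the frame.",
    ],
)

# Responses can mix text and image parts; save whatever images come back.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("youtube_thumbnail.png")
    elif part.text:
        print(part.text)
```

Wrap that in a small script or internal tool and you have the repurposing workflow from tactic 3.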
To get you started, here’s a quick example of me ideating for a project I’m working on:
(PS: send me an email if you want to be on the list for this when it drops :)

The most important tip: do small iterations. Make one edit at a time, then repeat instead of throwing a bunch of tasks at it in the same prompt.
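If you end up scripting this via the API, the same rule applies: one instruction per call, feeding the result back in as the input for the next edit. A minimal sketch, with the same assumed SDK and model id as above and made-up file names and edit instructions:

```python
# pip install google-genai pillow
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")
MODEL = "gemini-2.5-flash-image-preview"  # assumed model id; check the current docs

# Hypothetical edit chain: one small instruction per call.
edits = [
    "Remove the coffee cup from the table.",
    "Change the shirt to navy blue.",
    "Move the product slightly to the left and keep everything else unchanged.",
]

current = Image.open("product_shot.png")  # your starting image
for step, instruction in enumerate(edits, start=1):
    response = client.models.generate_content(model=MODEL, contents=[current, instruction])
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            current = Image.open(BytesIO(part.inline_data.data))
            current.save(f"edit_step_{step}.png")  # keep every intermediate so you can roll back
```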
That’s it for today. Go play with Nano Banana. Then experiment systematically. Then become an expert at it.
99% of you reading this won’t do that. But the 1% that do can probably build a business on top of this skill.
If you have ideas you want me to write about, hit reply!
Until next week,
Martin
PS: Go connect with me on LinkedIn and say hi! It’s always fun to chat with new readers.
PS2: Want to kickstart your business’s AI transformation? Fill out this interest form and my consultancy will be in touch asap for a free discovery call.