AI has been a topic of interest to me for years. For many of us, our first interactions with silicon intelligence came in the form of video games that required us to spot patterns and exploit them. But the opportunity to automate intelligence is far more interesting. Music was the first place I put it to work, with an Alesis drum machine that let me compose entire songs or simply keep time while I practiced and improvised. Years later, GarageBand took this a step further with intelligent beat-emphasis detection for kits and playing styles.
Over the years I had automated pieces of content production and marketing, but nothing resembled the AI explosion of 2022, even though I had been following developments and testing each compelling breakthrough. Finally, text and image generation had leveled up to the point where it could produce usable drafts and elements to work into projects, and the models had been distilled into software that could run on a MacBook with an M1 Apple Silicon chipset.
Of course, I started writing about the developments and turned them into a series of releases in different formats as explained in my content development workbook.
In December of 2022, I released the first piece, “Human Spirit: Ghost in the Machine”. It was just an opinion piece, but it got me thinking: what if I used Stable Diffusion to create images for the article and then used those for a video version? I’m not a graphic artist, and stock video gets heavily reused, to the point that when we see overused clips our brains turn off, the opposite of the desired effect.
This led me to dig up a few older pieces that had been sitting unfinished and complete them for publication. Then I found a writing cohort that pushed writers to publish a one-thousand-word article every week, so I joined. That momentum carried me onward: in total I created ten pieces over the course of six months. That would be underwhelming if it were exclusively editorial, but I turned each article into a fully voiced video with accompanying imagery created by generative diffusion models, helping ground the points in visual narrative where it made sense.
This is one of those instances of an idea sneaking up on you. I never intended to produce all these pieces, but it was fun to experiment with the technology and let my mind wander. I packaged everything up for distribution on the social networks I use. Not all of them provide metrics, but I’d estimate the total impressions across all of them at roughly 250k, without any paid advertising. That’s not bad given that my personal following is only about 5k across all networks. One standout was YouTube, where I posted the accompanying videos: my channel had only about 250 subscribers when I started, but the series garnered roughly 9k views. These are small victories that collectively resulted in new connections and ideas.
You can do this for your company and projects, repeatedly and with greater, measurable success, given planning and focused execution. Let me tell you a little about the process I went through.
Executing the Vision
Primarily, the content focused on a piece of software or a freshly released AI model and how I could use it to explore creativity. At the time, I was preparing my first fiction release as an audiobook. I would jot down my ideas, then fill in the gaps with details about the people involved in a project, gather quotes, or capture screenshots of interesting things as I used the platforms.
Serving a dual purpose, I would proofread the editorial aloud while recording, then use the reading as a voiceover. It helped me find any lingering errors or odd phrases in the text, and there are always a few. It’s funny how often the things we write turn out to be difficult to perform as a reading. Practice makes perfect.
Because I was using Substack as a blogging platform and newsletter distribution channel, the layout of the articles included pictures, embedded videos, and even tweets. All of these formed the basis of the pictorial interpretation of the sentiments I had written.
Next, I ordered the images sequentially in video editing software such as iMovie or DaVinci Resolve. In some cases, these were purely artistic representations of a subtext I was trying to convey. There’s nothing wrong with approaching this sort of work like making a short film, to break out of the stiffness typically associated with business content.
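If you prefer to script this step rather than work in an editor, the same image-sequencing idea can be sketched in a few lines of Python that prepare an ffmpeg slideshow. This is only an illustration, not the workflow the article used: it assumes ffmpeg is installed, and the image and audio filenames (`scene01.png`, `voiceover.wav`, and so on) are placeholders.

```python
# Sketch: turn an ordered list of stills plus a voiceover track into a
# video by generating an ffmpeg "concat" demuxer file and the matching
# command line. Filenames here are hypothetical placeholders.

def build_concat_file(images, seconds_per_image=6.0):
    """Return the text of an ffmpeg concat demuxer file."""
    lines = ["ffconcat version 1.0"]
    for path in images:
        lines.append(f"file '{path}'")
        lines.append(f"duration {seconds_per_image}")
    # The concat demuxer needs the last file repeated so the final
    # image's duration is honored.
    lines.append(f"file '{images[-1]}'")
    return "\n".join(lines) + "\n"

def build_ffmpeg_command(concat_path, voiceover, output):
    """Command that pairs the image sequence with a narration track."""
    return [
        "ffmpeg", "-f", "concat", "-safe", "0", "-i", concat_path,
        "-i", voiceover,
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-shortest", output,
    ]

if __name__ == "__main__":
    stills = ["scene01.png", "scene02.png", "scene03.png"]
    with open("slides.txt", "w") as f:
        f.write(build_concat_file(stills))
    # Pass the resulting list to subprocess.run() to render the video.
    print(" ".join(build_ffmpeg_command("slides.txt", "voiceover.wav",
                                        "episode.mp4")))
```

A dedicated editor gives you far more control over pacing and transitions, but a script like this is handy for batch-producing drafts across a series.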
Of course, I have a collection of music I have created over the years that can be used for these types of pieces, but you can find royalty-free music on the web or use an AI music generation model to create tracks to use commercially.
My point in featuring this content theme as a case study is to highlight how an unexpected trend can lead to insightful media that builds a brand’s presence organically.
Even if there doesn’t seem to be an immediate connection to your business or service, you can focus on the topic through the lens of your industry. Consider ways to participate in popular conversations and add valuable perspectives.