In addition to traditional video work, I create and augment video workflows using AI-powered tools like Midjourney, Stable Diffusion, Wan 2.5, Runway Studio, Kling-AI, Veo/Sora, and more.
Reach out if you’re interested in learning more about AI capabilities. I’d love to chat.
These tools are not a gimmick and can be leveraged for inspiration, iteration and final production!
ADIDAS AURA-X | controlled chaos
—
I wanted to see how far I could push AI while keeping a cohesive visual language, design, and style, with the goal of a finished proof-of-concept piece for @adidas
It took a bunch of tools (and a lot of credits), plus 5+ days of human time, to bring it all together.
AI can’t replace human creativity, but it can help augment it.
Tools:
@apple MacBook Pro
@stablediffusion
@midjourney
@klingai_official
@adobe Premiere Pro (editing/sound design)
@adobe After Effects (compositing)
@photoshop
@blackmagicnewsofficial DaVinci Resolve (color grading)
@topazlabs Gigapixel
@topazlabs Video AI (upresing)
@googlegemini
@rockstarenergy
POSTCARDS FROM PARADISE: Bahia vespertina 🌊
—
Hopefully the first of a series, this was an experiment in creating a cohesive aesthetic and vibe using entirely AI-generated imagery (and a whole lotta hours). AI has been notoriously hard to art direct, so I challenged myself to make something that felt ‘of-a-piece’. I created a custom look/prompting profile within Midjourney & Stable Diffusion, then iterated hundreds of images with directed prompts, from which I made selects and built a storyboard edit for flow. Then I added the magic 🪄 (movement) using @luma_ai’s newest #ray2 Image to Video Gen with custom prompts (and many more iterations). I put it all together, editing and doing custom sound design in @adobe Premiere Pro. Grateful to be part of Luma’s creator program and to have access to test these new tools!
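For anyone curious what a “custom look profile” means outside Midjourney’s UI, here’s a rough, hypothetical sketch of the same idea in Stable Diffusion via the diffusers library: one shared style block and negative prompt applied to every shot prompt, with seeds baked into filenames so good frames stay reproducible. The model ID, style text, and shot list below are placeholder assumptions, not my actual profile.

```python
# Hypothetical sketch: batch-generating "on-profile" stills for a storyboard.
# Model ID, style text, and shot prompts are placeholders, not the real profile.
import torch
from diffusers import StableDiffusionPipeline

LOOK_PROFILE = "golden-hour coastal light, grainy 35mm film, muted teal and amber palette"
NEGATIVE = "text, watermark, oversaturated, harsh flash"

SHOTS = [
    "empty beach chairs facing the tide",
    "palm shadows across a pastel hotel facade",
    "a lone swimmer at dusk, wide shot",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for i, shot in enumerate(SHOTS):
    for seed in range(4):  # a few variations per shot to pull selects from
        gen = torch.Generator("cuda").manual_seed(seed)
        image = pipe(
            prompt=f"{shot}, {LOOK_PROFILE}",
            negative_prompt=NEGATIVE,
            guidance_scale=7.0,
            num_inference_steps=30,
            generator=gen,
        ).images[0]
        image.save(f"shot{i:02d}_seed{seed}.png")  # filename doubles as the seed log
```

The specific model doesn’t matter much; locking the style language and the seeds per shot is what makes hundreds of iterations feel ‘of-a-piece’.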
🎥 📷 🪄
YUMMM. You can almost taste the pixels!
Created with Gen-1, MJ, and Stable Diffusion. I utilized depth maps created in Stable Diffusion for the 2.5D effect.
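If you want a scriptable route to the same kind of depth map (mine came out of Stable Diffusion), a monocular depth-estimation model does the job. This minimal sketch uses a DPT model through the Hugging Face transformers pipeline, which is my substitution rather than the original workflow, and the file names are placeholders; the grayscale output drops straight into After Effects as a displacement or camera-blur map for the 2.5D move.

```python
# Minimal sketch: monocular depth map from a generated still.
# Model choice and file paths are placeholders.
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

frame = Image.open("postcard_still.png")
result = depth_estimator(frame)

# result["depth"] is a grayscale PIL image you can use as a displacement map.
result["depth"].save("postcard_still_depth.png")
```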
FASHION FORWARD >>
playing with NeRFs (no, not that kind…)
A Neural Radiance Field (NeRF) is a machine-learning method that represents a 3D scene with a fully connected deep neural network: the network maps a 3D location and a viewing direction to the volume density and view-dependent emitted radiance at that point. Volume rendering then turns this 3D representation into 2D images, producing highly detailed, photorealistic renders from any viewpoint. The camera movement here was created with custom keyframes in a fully 3D environment.
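For the technically curious, here’s a toy PyTorch sketch of just that core mapping (3D point + view direction in, density + view-dependent color out). It’s a simplified illustration, not a full NeRF: the real method samples points along camera rays, volume-renders them into pixels, and trains against real photos, all of which is omitted here.

```python
# Toy sketch of the core NeRF mapping: (3D point, view direction) -> (density, RGB).
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    # Encode each coordinate as [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1,
    # so the MLP can represent high-frequency detail.
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                      # (..., dims, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., dims * 2 * num_freqs)

class TinyNeRF(nn.Module):
    def __init__(self, pos_freqs=10, dir_freqs=4, hidden=256):
        super().__init__()
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        pos_dim, dir_dim = 3 * 2 * pos_freqs, 3 * 2 * dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)       # volume density (sigma)
        self.color_head = nn.Sequential(               # view-dependent RGB
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.density_head(h))
        rgb = self.color_head(
            torch.cat([h, positional_encoding(view_dir, self.dir_freqs)], dim=-1)
        )
        return sigma, rgb

# One batch of sample points along camera rays:
points = torch.rand(1024, 3)                               # 3D sample locations
dirs = nn.functional.normalize(torch.rand(1024, 3), dim=-1)  # unit view directions
sigma, rgb = TinyNeRF()(points, dirs)
print(sigma.shape, rgb.shape)   # torch.Size([1024, 1]) torch.Size([1024, 3])
```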
Imagine the possibilities: PRODUCT, ARCHITECTURE, VIDEO GAMES, ENVIRONMENTAL DESIGN, FUN REELS FOR BRANDS…