Disclaimer: This is an internal pipeline experiment, built with AI-assisted Python scripting. It’s a work-in-progress for testing my own productions — not a finished or production-ready tool.


08-15-2025
While updating my Blender add-ons in preparation for a new practice scene, I saw a big update for Shot Manager and decided to experiment with a pipeline that included its new features along with Blender 4.4.
I’ve been a big fan of this add-on for a while, and I highly recommend trying it out (and purchasing it if it fits your workflow). My goals for this project were to sharpen my cinematic design skills and to broaden my tech-art collaboration skillset, specifically by building tools that smooth out my cinematic workflows.

The initial challenge
As I explored Shot Manager’s latest updates, I realized I wanted to export specific layers from Blender and bring them into After Effects for compositing.
These layers included:
- Characters
- Ships
- Grease pencil VFX (smoke, trails)
- Background smoke and back trails
Having these on separate layers would let me add targeted filters and effects in AE.
But before I could get there, I needed to set some naming conventions and organization rules.
This was my starting point:



The growing problem
As the project grew, so did the folders. My process at the time was slow:
1. Go into each folder manually.
2. Select a PNG sequence.
3. Check and adjust the import settings.
It also got more complicated when switching machines: I discovered my default AE ingest framerate was different on each computer. Files were importing at the wrong speed, forcing me to go back and fix them one by one.

Planning for scalability
This made me realize:
- Framerate management needed to be baked into the pipeline.
- I wanted the system to handle multiple scenes or even an ongoing series.
- I needed a process that would catch errors early instead of fixing them later.
I started documenting my pain points, mistakes, and ideas for standardizing the process.

A naming epiphany
While figuring out how to script the first batches of shots, I started playing with terminology to describe these collections of files. I kept circling back to the word lode:
Lode: a vein of metal ore in the earth.
The analogy felt right; these batches were like valuable veins of data to be mined and processed. I landed on Shot Lode as the name.
Naming may seem minor, but having a clear, logical, and evocative naming scheme early on helps avoid confusion when expanding a toolset.

Next steps
With ChatGPT, I began mocking up early versions of a Shot Lode Loader pipeline tool. The scope was clear:
- Locate exported PNG sequences from Blender’s Shot Manager output.
- Automatically structure them for After Effects ingestion.
- Provide room for user adjustments before the final export.
This was the start of my journey toward a unified media file manager that can handle both the creative and technical demands of cinematic design.
The images below show progress comparisons from the start to the current build.
Future updates coming soon.
Shot Lode Builder.
Shot Lode Loader/Inspector.




One of the key features I added to the Shot Lode Loader is the ability to define and preserve layer order for After Effects. This ensures that when the ingest script builds new comps, the layers are stacked exactly as intended: Foreground (FG) at the top, Midground (MG) in the middle, and Background (BG) at the bottom.
In practice, each numbered shot (e.g., 001, 002, 003) has its own folder and a corresponding `_prime` comp. The ingest script now also generates a new `prime_main` comp, which assembles the FG, MG, and BG sub-comps in the correct order. This setup mirrors the structure of the `_prime` comp exported directly from Blender, but breaks it into modular layers with alphas. Those layers can then be stacked and adjusted individually in After Effects, allowing for targeted effects and per-layer refinements.
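The ordering rule itself can be expressed as an explicit ranking that the ingest script sorts by before stacking layers in the generated comp. This is only the ordering logic; the actual comp creation inside After Effects would happen through its own scripting layer. The numeric ranks are arbitrary, and unknown layer names falling to the bottom is my own assumption.

```python
# Top of stack -> bottom, matching the FG/MG/BG convention described above.
LAYER_ORDER = {"FG": 0, "MG": 1, "BG": 2}

def stack_layers(layers: list[str]) -> list[str]:
    """Return layer names sorted top-to-bottom for the assembled main comp."""
    # Unrecognized layers sort after the known ones rather than raising.
    return sorted(layers, key=lambda name: LAYER_ORDER.get(name, len(LAYER_ORDER)))
```

Keeping the ranking in one table means the stacking rule lives in a single place, so changing the convention later doesn't require touching the ingest logic.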



Disclaimer: This post describes the creation of an internal tool I am developing for my own cinematic pipeline experiments. It is both a practical exercise in systematic problem-solving and a way of exploring how large language models (LLMs) like ChatGPT can assist with Python scripting for technical art workflows. This tool is actively being tested only on my own productions, and I make no claims about its completeness or reliability. The work shared here is meant as an exploration of what is possible for non-expert coders when using AI to simulate a tech-art dialogue, rather than as a finished or production-ready solution.
UPDATE: 08-28-25
I decided to make a video blog companion piece, partly because I'd like to really drill down into this topic for my own skill development, and partly to share this way of approaching the craft in case others find it helpful in their own skill-acquisition journey.
I originally thought this would only take about a week, because I was going to just animate the images above and add in the dev OBS screen captures I've been collecting along the way.
That was not the case; the project's scope kept expanding.
I was able to get a handle on it and wrap it up in just under three weeks. Along the way I continued to plan out how to work this sort of video into the larger pipeline context.
I'll attempt to discuss that process in future blog posts as well; I ended up with a pretty good system in place for using Premiere, After Effects, Photoshop, Midjourney, ElevenLabs, ChatGPT 5, Grok, and Suno.