Project Cactus

Like so many, I purchased a Home Depot cactus. This after my super weirdo big sprangly multi-ball cactus from the hillside bulk-foods-adjacent garden center deflated into goo. What else was I to do? So I planted this thing as directed in a tasteful desert scene on the dining room table. Didn't take too long till I got to wondering "Is this thing growing? Is it even alive? Plastic? Did the HD sell me a plastic cactus and I'm too stupid to realize?" Well, the only way to find out with certainty is to take a daily picture and then stitch 'em into a movie, sans nature documentary, right? AND because software, gotta do it with Python.

First problem I hit after taking some pics and downloading them twice: the camera re-uses file names, and that isn't nice. Imagine my horror while sipping a beer: DSC_253 copied over. Oh dear! Crisis averted with a PowerShell script that Bing Copilot wrote a little wrong but Claude.ai fixed up. It renames the files based on the last modified date, which in this case is when the picture was taken. Problem solved!
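For the curious, here's roughly what that renamer does, redone as a Python sketch instead of the actual PowerShell (the folder path and the .jpg filter are placeholder assumptions, not the real script):

    import os
    from datetime import datetime

    folder = r"C:\cactus\pics"   # wherever the downloads land (placeholder)

    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(".jpg"):
            continue
        old_path = os.path.join(folder, name)
        # Last modified time, which in this case is when the picture was taken.
        stamp = datetime.fromtimestamp(os.path.getmtime(old_path))
        new_path = os.path.join(folder, stamp.strftime("%Y-%m-%d_%H-%M-%S") + ".jpg")
        if not os.path.exists(new_path):   # never clobber; that was the whole problem
            os.rename(old_path, new_path)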

Next, how to paste these all together into a movie? To do this, I snuck some mostly working code off YouTube and got my new jam Claude.ai to do some fix-ups after a few go-arounds. I mean, come on, nobodai's perfect.
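The end result boils down to something like this: a minimal sketch using OpenCV's VideoWriter, where the folder path, codec, and output name are my placeholder assumptions rather than the exact code:

    import cv2
    import glob

    # Assumes the renamed pictures now sort chronologically by filename.
    files = sorted(glob.glob(r"C:\cactus\pics\*.jpg"))

    first = cv2.imread(files[0])
    height, width = first.shape[:2]

    # 10 frames per second, one picture per day.
    writer = cv2.VideoWriter("cactus.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             10, (width, height))

    for f in files:
        frame = cv2.imread(f)
        # VideoWriter silently drops frames of the wrong size, so force a match.
        frame = cv2.resize(frame, (width, height))
        writer.write(frame)

    writer.release()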

Now I have my 10 FPS movie and boy does that cactus got moves! Dance'n the Cactus Shuffle! Now I knew it would be a-jigglin' on account that I made no attempt whatsoever to register the frames or hard-mount the camera like the pros. So my next move will be image stabilization. That and a T-shirt. The whole reason I'm doing this is for a T-shirt.

So far, all I can say is, wow! You can really see that little guy growing away. Gonna be tall as a monster in a few years.

Stabilize...this may have been a bad idea

Finally! Python day...House Python...3D print a shield with a bunch of pythons on it...Anyway, got a chance to make an image stabilization attempt. Attempt, you say? Well, here it is...

Um. Actually not that bad until some idiot changed the background color. Seems to have propelled the cactus into hyperspace. Think it's now orbiting Proxima Centauri. I am oft obliged to wax nostalgic for the days of steam; to use not a pen that writes upside down when a pencil will do. I have a wood shop after all, and a whiteboard design for an overly-complicated adjustable table-top camera hard-mount. Mr. V came by and, upon observing, asked "What's that?" I told him to go away. WHAT is THAT? Art! The old black wall-mount rotary phone, obsolete, gathering dust on the prop master's shelf. Useless until needed, then essential!

Or I might do round 2. Might even plant a popsicle stick, abhorrent and ruinous to the desert scene as it would be, but just the ticket to recovering that little cactus from yon distant star? Perhaps.

Let's Get Serious BEFORE Giving Up

I chatted up some more AIs to no significantly better avail, which I'll get into later. Then, over a plate of steak tartare and caviar, mid-puff of cigar, straight from the idea jar: the Lucas-Kanade Optical Flow Algorithm! Of course! As explained here, LKOFA assumes the following (a bare-bones OpenCV call is sketched right after the list):

  1. The displacement of any object is not large.
  2. The intensity of the object doesn't change.
  3. Neighboring pixels containing the object move in the same direction.
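I didn't end up riding this horse very far (more on that in a minute), but here's the promised sketch, with placeholder filenames and parameter values I never actually tuned:

    import cv2

    prev = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

    # Pick some strong corners to track in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=10)

    # Lucas-Kanade: where did those points end up in the current frame?
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

    # Keep only the points that were actually found in both frames.
    good_old = pts[status.flatten() == 1]
    good_new = new_pts[status.flatten() == 1]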
And I thought, that's basically kinda what I have, maybe. And it uses OpenCV and it's right there to try. So I figured I'd start by resetting the reference image in the current stabilization script. You see, the AI had the stabilization script use the first image as the reference for all the others, comparing the reference to each frame to find matching points for perspective warping. Now that might work on the fake plastic flower adorning so many a Home Depot cactus, but this cactus is growing, not just moving. I added some code to output the pictures pre-warp with the matching markers:
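(A hedged sketch of what I understand the script plus my addition to be doing: feature matching between the reference and a frame for the markers, then a homography warp. The ORB detector, the filenames, and the match count are my guesses, not the script's actual choices.)

    import cv2
    import numpy as np

    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    ref = cv2.imread("frame_000.jpg")   # the reference (originally: the first picture)
    img = cv2.imread("frame_042.jpg")   # some later picture

    ref_kp, ref_des = orb.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
    kp, des = orb.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)

    matches = sorted(matcher.match(ref_des, des), key=lambda m: m.distance)[:50]

    # The pre-warp "matching markers" picture: reference and frame side by side.
    markers = cv2.drawMatches(ref, ref_kp, img, kp, matches, None)
    cv2.imwrite("markers_042.jpg", markers)

    # Perspective-warp the frame onto the reference using the matched points.
    src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    stabilized = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
    cv2.imwrite("stabilized_042.jpg", stabilized)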

Look at those markers twinkle-jinkle around. I don't see how this could work...and it doesn't. But what if I instead re-reference every 10 frames? Turns out 10 frames was best.
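The change itself is just a loop with a reference reset, something like this (stabilize_to() is a stand-in for the matching-and-warping step sketched above, and the paths are placeholders):

    import cv2
    import glob

    def stabilize_to(ref, img):
        # Stand-in for the feature-matching + warpPerspective step sketched earlier.
        return img

    files = sorted(glob.glob(r"C:\cactus\pics\*.jpg"))
    REF_EVERY = 10                      # re-reference interval; 10 turned out to be best

    ref = None
    for i, path in enumerate(files):
        img = cv2.imread(path)
        if i % REF_EVERY == 0:
            ref = img                   # fresh reference every 10 frames
        cv2.imwrite(f"stab_{i:04d}.jpg", stabilize_to(ref, img))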

Feh. Other than the fact that the cactus is turning into a rabbit at gunpoint, it's just OK. At least we're not traveling on the hyperspace track, or whatever they call it in the Planet Dirt AI-generated/read audiobook I'm currently falling asleep to. If only there were another algorithm I could try...

Some days later...

OK, so not so sure of LKOFA. I mean, what, do I need a law degree or something? Dude's doing derivatives and matrix action, and that code on GitHub was just, like, drawing markers on the frame, which, been there, done that, and I've got a better idea that avoids warping the image. The old Crop and Move (C&M). Or is it Move and Crop (M&C)...think it's M&C. Anyway, do some color-match masking, find the lowest cactus pixel on the left and right, or just the lowest, we'll find out. Then shift all the images so that the locus of points (just wanted to throw that in so I sound smart) aligns, i.e. put all the lowest pixels in the same place. That should kinda fix the base of the cactus. Oh sure, it might flap around a little, but gonna try it anyway.
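In sketch form, M&C looks something like this. The HSV range for the background, the anchor point, and the filename are placeholder assumptions; the real values depend on the backdrop and the camera:

    import cv2
    import numpy as np

    img = cv2.imread("frame_042.jpg")

    # Color-match masking: everything that is NOT background becomes white.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # placeholder range
    mask = cv2.bitwise_not(background)

    # Find the lowest (largest y) white pixel, nominally the base of the cactus.
    ys, xs = np.nonzero(mask)
    base_y, base_x = ys.max(), xs[ys.argmax()]

    # Shift the image so the base lands at a fixed spot shared by every frame.
    target_x, target_y = 400, 600       # made-up anchor point
    M = np.float32([[1, 0, target_x - base_x],
                    [0, 1, target_y - base_y]])
    shifted = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]),
                             borderValue=(255, 255, 255))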

And of course, some moron used a lime-green background that ruined all masking until the 9-second mark. But then...tantalizing potential? There are some masking artifacts that disallow use of a straight "find the lowest white pixel" algo (that's slang for algorithm). It might, however, work if constrained to a bounding box. But not really; there are some frames with rando white pixels close to our friend. What if bounding box AND ignore isolated white pixels?
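That combination would look roughly like this: a small morphological opening to erase the lone specks, then a hand-drawn bounding box. The box coordinates, kernel size, and filename here are made up:

    import cv2
    import numpy as np

    # mask: the white-cactus-on-black mask from the color-match step.
    mask = cv2.imread("mask_042.png", cv2.IMREAD_GRAYSCALE)

    # Ignore isolated white pixels: opening erases specks smaller than the kernel.
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Constrain the search to a bounding box around where the cactus actually lives.
    x0, y0, x1, y1 = 300, 200, 700, 650   # eyeballed box, placeholder numbers
    roi = np.zeros_like(cleaned)
    roi[y0:y1, x0:x1] = cleaned[y0:y1, x0:x1]

    # Now "find the lowest white pixel" only sees real cactus.
    ys, xs = np.nonzero(roi)
    base_y, base_x = ys.max(), xs[ys.argmax()]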

OR, had another idea. Face recognition!...but on a cactus. Can I face-recognize the base of the cactus? Does OpenCV do that? You bet it does, so I cropped out this little bit-o-cact.

And used the OpenCV matchTemplate() method, which takes the full cactus image and the little cropped piece as the "template". You can run it with different algos (again, that's industry slang for algorithms). Most of the algos worked pretty well, so I used TM_CCOEFF_NORMED, why not.
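The whole thing is only a few lines. This is a hedged sketch with placeholder filenames rather than the exact script; with TM_CCOEFF_NORMED the best match is the maximum of the result map:

    import cv2

    img = cv2.imread("frame_042.jpg")
    template = cv2.imread("bit_o_cact.jpg")   # the little cropped piece of cactus base
    th, tw = template.shape[:2]

    # Slide the template over the image; each result pixel scores the match there.
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)

    # For TM_CCOEFF_NORMED the best match is the maximum (the bright spot).
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    top_left = max_loc

    # Draw the green rectangle around the found base.
    cv2.rectangle(img, top_left, (top_left[0] + tw, top_left[1] + th), (0, 255, 0), 2)
    cv2.imwrite("found_042.jpg", img)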

See that bright spot? That's promising. Thought I might need to use a mask - you can mask out a portion of your image to constrain the search - but this is working on all the pictures except maybe the first one, so foop it (more industry slang).

Eureka Moment

Well I think the video speaks for itself...

The rest is just the mechanics of shifting the images so the top-left corners of all the green template rectangles land in the same place. I did this using the power of math, placing each image on a larger white background, shifted as needed. Then, again harnessing the power of math, I cropped out the white background, leaving the final video.
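Those mechanics, more or less; top_left comes from matchTemplate for each frame, and the anchor point, margin, and filenames are placeholder numbers:

    import cv2
    import numpy as np

    img = cv2.imread("frame_042.jpg")
    h, w = img.shape[:2]

    top_left = (412, 530)    # where this frame's template match landed (placeholder)
    anchor = (500, 600)      # common target spot for every frame (placeholder)
    margin = 200             # big enough to absorb the largest shift

    # Paste the frame onto a larger white background, offset so the matches line up.
    canvas = np.full((h + 2 * margin, w + 2 * margin, 3), 255, dtype=np.uint8)
    dx = anchor[0] - top_left[0]
    dy = anchor[1] - top_left[1]
    canvas[margin + dy : margin + dy + h, margin + dx : margin + dx + w] = img

    # Crop the same window out of every canvas so the final frames share a size.
    aligned = canvas[margin : margin + h, margin : margin + w]
    cv2.imwrite("aligned_042.jpg", aligned)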

The only thing left to do is continue taking pictures, standing back a little further to avoid cutting off the top of the cactus in the cropping process (plus it's growing). So what did I learn? While the HD does sell cacti with fake glued-on plastic flowers, the actual cactus...not plastic.