My pipeline went camera -> dvrescue -> ffmpeg -> clip chunking -> Gemini for auto-tagging family members and the locations where things were shot.
We now have all our family's footage hosted on a NAS, with Jellyfin serving it over Tailscale to my parents' MacBooks. I found the clip chunking in particular made the footage a lot more watchable than just importing the two-hour tapes whole, although YMMV.
The biggest issue I ran into was that while the audio and video were properly synced up in the original .dv file (due to it being an interleaved format), when I re-encoded the videos, the audio and video would drift out of sync as the video went on.
I was able to fix the sync issues by using dvgrab to split the original .dv file into a bunch of 3-minute chunks. I then wrote a script to extract the audio track from each chunk, pad the end of each with milliseconds of silence so it exactly matches the length of its video track, concatenate the padded audio tracks, encode the combined track, and mux the fixed audio with the encoded video. This worked really well; the silence padding is imperceptible, but the audio and video are still in sync - even after 2 hours.
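The core of the padding step can be sketched roughly like this (not the exact script; the chunk filenames are placeholders, and I'm assuming ffprobe/ffmpeg on PATH and ffmpeg's apad filter with its whole_dur option):

```python
import subprocess

def video_duration(path):
    """Duration in seconds of the video stream, via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True).stdout
    return float(out.strip())

def pad_audio_cmd(src, dst, video_dur):
    """ffmpeg command that extracts the audio track and pads it with
    trailing silence until it is exactly as long as the video track.
    The padded .wav files can then be concatenated, encoded, and muxed
    back against the encoded video."""
    return ["ffmpeg", "-y", "-i", src, "-vn",
            "-af", f"apad=whole_dur={video_dur:.6f}",  # pad to total length
            dst]
```

Per chunk: `pad_audio_cmd("chunk001.dv", "chunk001.wav", video_duration("chunk001.dv"))`, then run it with subprocess.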
A final point worth making: doing anything with .dv files in ffmpeg (even -c:v copy) destroys the SMPTE timecodes embedded in the original file, which makes it much harder to split by scene.
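One possible workaround (an untested sketch, not something from my pipeline): read the starting timecode with ffprobe before touching the file, then stamp it back onto the output with ffmpeg's -timecode option. Whether DV timecode actually surfaces under stream_tags may depend on your ffmpeg build:

```python
import subprocess

def read_timecode(path):
    """First SMPTE timecode of the video stream, e.g. '00:01:23:10'."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream_tags=timecode",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True).stdout
    return out.strip()

def transcode_cmd(src, dst, timecode):
    """ffmpeg command that re-encodes but writes the saved timecode
    back into the output container's metadata."""
    return ["ffmpeg", "-y", "-i", src, "-timecode", timecode,
            "-c:v", "libx264", "-c:a", "aac", dst]
```

This only preserves the starting timecode of each file, not per-frame codes, so it works best on the already-split chunks.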
The Video8 tapes have already been digitized via a Digital8 camcorder, but apparently you can get even better quality out of old analog tapes with the vhs-decode project. Let's see if I ever get around to that, but at least it bypasses FireWire entirely: https://github.com/oyvindln/vhs-decode https://www.reddit.com/r/vhsdecode/
Wish vhsdecode was easier to use in practice! Such a cool idea but a bit too inconvenient to hack your own hardware like this...
Play the footage on a TV in a dark room. Place a 4K camera on a tripod and record the TV, with audio fed into the camera's audio port.
Worked perfectly.
With all respect, reading this part made me feel uneasy.
My pipeline was camera -> WinDV -> DVdate (to extract exact datetimes into srt subtitles) -> Handbrake (to convert to mp4).
In the olden days, when I got paid to shoot real video on a VX2000 and edit it for people, I captured using a PCI FireWire card and dvgrab in Slackware, rewrapped with probably mencoder, shading towards ffmpeg as it became more popular (and more developed!), dual-booted into Windows 2000 to cut in Premiere 5.0, then went back into Linux to transcode back to DV if I wanted to write it out to DV tape.
These days I shoot on a PD150 or DSR500 (and quite often some HDV cameras), capture via a PCIe Firewire card and dvgrab in Ubuntu, rewrap with ffmpeg, and edit in Resolve, without the dual-booting step.
If you use dvgrab it will split the capture up into separate clips on shot boundaries, based on the record pause/unpause markers on the tape. I have not found a way to extract the good/no-good shot marks from the stream, but unless you're shooting on a broadcast camera you don't have those anyway. Timecode is preserved, though!
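The capture invocation looks something like the following (flag spellings from memory for dvgrab 3.x; verify against `dvgrab --help` on your system before trusting them):

```python
def capture_cmd(prefix="tape-"):
    """dvgrab command: autosplit on recording breaks, timestamped
    filenames, raw .dv output (keeps the interleaved A/V), no size limit.
    The prefix 'tape-' is just an example filename base."""
    return ["dvgrab",
            "--autosplit",      # new file at each record pause/unpause
            "--timestamp",      # recording datetime goes into the filename
            "--format", "raw",  # plain .dv stream
            "--size", "0",      # don't also split by file size
            prefix]
```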
When you load it all up in Resolve, one of the options in the Cut page is "Source Tape View" which runs all your clips together by timecode, and lets you view them as though they were a continuous tape of your rushes, which is how we used to do basic assemble editing in the olden days of clunky tape decks and edit controllers with big rows of red 7-segment displays.
Edit your old home videos. You can do that now, and they'll be far more watchable.
I also wanted to do that, but then I realised I would need to invest more time and possibly some hardware, so one day I simply had enough, went to a commercial shop, and had them turn all the old stuff into digital. The cost wasn't that huge either, and considering the time I saved by not doing it myself, I'm OK with the investment. Hopefully the future has digital everywhere. Storage should get cheaper too, ideally.
I'll have a shot at this, I think.