
Analog composite video wrinkled my brain

I recently bought this "car camera" as a cheesy way to get a decent-sized display at a cheaper price. Most searches for IoT/maker displays end up at puny touch-enabled screens, which are 3-10x the price of this entire unit. I don't care for touch, so I was happy to just "salvage" the display from the combo & treat the camera as a bonus.

However, giving it a try, I found it's a rather nice unit that works beautifully with minimal configuration as a closed-circuit camera system, and it's pretty cheap for that purpose. I'm actually considering getting a few of them and setting up a full monitoring system.

I was just daydreaming about it in peace & quiet when dad saw it & passed a comment: "Oh, is it a CCTV? Does it record?"

*Thanks dad, thanks a lot!* I thought... but "yeah, you can just use a DVR (and 'repurpose' the monitor muhaha)" is what I said.

But it immediately hit me... can I build something simple that takes the video information, processes it for future retrieval & saves it on an SD card? Preferably with an MCU like a Raspberry Pi Pico or an ESP32?

Why not? I thought. It's a dumb RCA cable carrying some low-def video... what's so fancy that I can't bruteforce it with a >200MHz sampling rate? How complicated can it be that I can't handle it with multiple ADC channels anyway? Like a FOOL!

So, I looked up the composite video signaling and timing information.

Aside from the fact that it's fucking old tech that 1940s researchers took straight to the grave before it could be properly uploaded to the modern Internet, whatever information is publicly available is quite incoherent & fragmented across multiple sources.

But eventually I understood how things work.

The amount of information that can be packed with phase, amplitude & level shifting (along with timing management) over a single goddamn signal trace is simply insane!

It gets unusable at higher data rates as signal integrity becomes an issue, but at <50MB/s it's quite alright.
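To put numbers on the "brute-force it with a fast ADC" idea from earlier: NTSC scanline timing is well documented (the exact porch durations below are the commonly cited approximations, not gospel), and a quick back-of-the-envelope sketch shows why sampling rate is the whole game:

```python
# Back-of-the-envelope NTSC scanline math (approximate standard timings).
LINE_US = 63.556       # one NTSC scanline in microseconds (~15.734 kHz)
HSYNC_US = 4.7         # horizontal sync pulse
BACK_PORCH_US = 4.7    # back porch (carries the colorburst)
FRONT_PORCH_US = 1.5   # front porch
ACTIVE_US = LINE_US - HSYNC_US - BACK_PORCH_US - FRONT_PORCH_US  # ~52.7us of picture

def samples_per_line(adc_hz: float) -> int:
    """How many ADC samples land inside one scanline at a given sample rate."""
    return round(adc_hz * LINE_US * 1e-6)

# A Pico's on-chip ADC tops out around 500 kS/s -- nowhere near enough:
print(samples_per_line(500e3))   # ~32 samples per whole line
# A 200 MHz sampler, on the other hand, is ludicrously comfortable:
print(samples_per_line(200e6))   # ~12711 samples per line
```

So a bare Pico ADC gets ~32 samples across an entire line (sync pulses included), which is why the brute-force fantasy needs external sampling hardware.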
BTW, the inherent strength of multi-bit representation with level shifting can't be denied, and it's starting to show up in high-speed digital data communications... like GDDR6X or PCIe 6.0 using PAM4.
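The PAM4 trick is exactly the "more levels per symbol" idea: four voltage levels carry 2 bits per symbol, doubling throughput at the same symbol rate. A toy encoder/decoder (the level values here are purely illustrative, not any real standard's):

```python
# Toy PAM4 codec: 4 amplitude levels carry 2 bits per symbol.
LEVELS = [0.0, 1.0, 2.0, 3.0]  # represent bit pairs 00, 01, 10, 11

def pam4_encode(data: bytes) -> list[float]:
    symbols = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # MSB-first, 2 bits at a time
            symbols.append(LEVELS[(byte >> shift) & 0b11])
    return symbols

def pam4_decode(symbols: list[float]) -> bytes:
    out = bytearray()
    for i in range(0, len(symbols), 4):       # 4 symbols reassemble 1 byte
        byte = 0
        for s in symbols[i:i + 4]:
            byte = (byte << 2) | LEVELS.index(round(s))
        out.append(byte)
    return bytes(out)

assert pam4_decode(pam4_encode(b"composite")) == b"composite"
```

Same symbol clock, twice the bits - the cost is that the receiver now has to discriminate 4 levels instead of 2, which is exactly the signal-integrity trade mentioned above.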
So anyway... after much consideration, I've boiled it down to several approaches & rated them on 2 key factors: wack & ease - let's go...

Approach 1: wack 10, ease 10

  1. Use ADC to read the levels for an entire scanline as an array
  2. Fill in a special character when the value doesn't change from the previous pixel
  3. Ignore phase/hue (ain't nobody got time for measuring phase shifts)
  4. Send as character bytestream to SD card storage over SPI
Result: black and white only (fine for most purposes); the timing & fill characters need a custom decoder for playback, but playback can emulate an actual camera source. Practically lossless (except for color, which could be added back with phase-shift calculations & extra per-pixel information).
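The "fill character" trick above is basically a crude run-length encoding over the ADC samples. A sketch of how it could work, assuming 8-bit samples and reserving 0xFF as the "same as before" marker (a made-up convention, not any real format):

```python
# Crude "fill character" compression over one scanline of 8-bit samples.
# 0xFF is reserved as the "no change from previous sample" marker.
SAME = 0xFF

def encode_line(samples: list[int], tolerance: int = 2) -> bytes:
    """Replace near-identical consecutive samples with the SAME marker."""
    out = bytearray()
    prev = None
    for s in samples:
        s = min(s, 0xFE)                  # keep 0xFF free for the marker
        if prev is not None and abs(s - prev) <= tolerance:
            out.append(SAME)
        else:
            out.append(s)
            prev = s
    return bytes(out)

def decode_line(data: bytes) -> list[int]:
    out, prev = [], 0
    for b in data:
        prev = prev if b == SAME else b
        out.append(prev)
    return out

line = [10, 11, 10, 200, 201, 50, 50]
assert decode_line(encode_line(line)) == [10, 10, 10, 200, 200, 50, 50]
```

Note the `tolerance` makes it slightly lossy on flat areas - hence "practically" lossless - but output stays the same length as input in the worst case, so it never blows up the storage budget.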

Approach 2: wack 5, ease 1

  1. Use or emulate something like a MAX9526 to convert the analog composite stream into a digital RGB or YCbCr framebuffer, capturing both interlaced fields
  2. Run a simple deinterlace filter
  3. Convert the deinterlaced frame into a bitmap
  4. Consider using a hardware codec to compress the image to acceptable size
  5. Alternatively, use a hardware codec to create a video directly from the buffers
  6. Save the media over to the SD card
  7. Go to next frame(s) and repeat.
Result: unless compressed, it'll produce massive uncompressed frames (~1MiB each at 720x480, roughly 30MiB/s of video). It requires a minimum of 2, practically 3, dedicated chips to get the job done; and using a generic MCU or FPGA would mean a massive firmware/application that's months of work before it even shows signs of life. Costs no less than an off-the-shelf 16-channel DVR, just in component cost. #GGWP #KTHXBAI
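Step 2's deinterlace can be sketched as the simplest possible "weave": interleave the two fields' scanlines into one progressive frame. (A real filter would also have to deal with the fields being captured 1/60s apart, which weave happily ignores.)

```python
# Simplest deinterlace ("weave"): zip the odd and even fields' lines
# back together into one progressive frame.
def weave(odd_field: list[list[int]], even_field: list[list[int]]) -> list[list[int]]:
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

odd = [[1, 1], [3, 3]]    # scanlines 0 and 2
even = [[2, 2], [4, 4]]   # scanlines 1 and 3
assert weave(odd, even) == [[1, 1], [2, 2], [3, 3], [4, 4]]

# And the storage math behind the "massive files" complaint:
w, h, bytes_per_px = 720, 480, 3          # NTSC-ish frame, 24-bit RGB
frame_bytes = w * h * bytes_per_px        # ~1 MiB per frame
print(frame_bytes, frame_bytes * 30)      # ~30 MiB/s at 30 fps
```

That per-second figure is why a hardware codec goes from "nice to have" to mandatory the moment you want more than a few seconds on an SD card.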

Approach 3: wack 3, ease 7

  1. Use a composite capture card with USB interface
  2. Read USB data signal with a Pico & broadcast it through an nRF24
  3. Capture it with another nRF24, and pass it to another Pi Pico over SPI
  4. Connect that Pico to a computer, posing as a capture card through its TinyUSB stack
Result: Wireless Capture Card!
No promises, but approach 3 has just the right wack & ease factors for me to actually consider making this. With a USB hub & an OBS Studio setup, it should be fairly easy to have an array of different camera feeds coming in - no matter if they're being recorded or live streamed.
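One wrinkle the radio hop forces: the nRF24L01's payload maxes out at 32 bytes, so the capture stream has to be chunked and reassembled on the other side. A toy framing scheme (entirely made up for illustration: 2-byte sequence number + 1-byte length + up to 29 data bytes per packet):

```python
# Toy framing for pushing a byte stream over 32-byte nRF24 payloads.
# Header (made-up convention): 2-byte big-endian sequence + 1-byte length.
PAYLOAD = 32
HEADER = 3
CHUNK = PAYLOAD - HEADER  # 29 data bytes per packet

def packetize(stream: bytes) -> list[bytes]:
    packets = []
    for seq, off in enumerate(range(0, len(stream), CHUNK)):
        chunk = stream[off:off + CHUNK]
        packets.append(bytes([seq >> 8, seq & 0xFF, len(chunk)]) + chunk)
    return packets

def reassemble(packets: list[bytes]) -> bytes:
    # Sort by sequence number in case packets arrive out of order.
    ordered = sorted(packets, key=lambda p: (p[0] << 8) | p[1])
    return b"".join(p[3:3 + p[2]] for p in ordered)

data = bytes(range(100))
pkts = packetize(data)
assert all(len(p) <= PAYLOAD for p in pkts)
assert reassemble(pkts[::-1]) == data
```

Worth noting that the nRF24's 2Mbps air rate is a hard ceiling, so the feed would need serious compression on the Pico side before it ever hits the radio - part of why the wack score isn't zero.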