Camera Compression Complications Continuation

Cameras are going to be a huge part of our robot (mainly in terms of importance, though it just so happens that they are also some of the largest components on the robot, save for the UDOO). So making sure they work efficiently, reliably and at good quality is of utmost importance.

We ran into an issue in a previous blog article where our cheap Logitech cameras wouldn’t work on a single USB port, due to USB 2.0 bandwidth limitations. The solution, we thought, would be to use cameras which support MJPEG compression.

Before ordering our new cameras, we made sure that they supported this format. With only three USB ports on the UDOO (one already taken up by the servos), it was absolutely essential that we could run multiple cameras on the same port.

Imagine our disappointment when we got the same error that we had with the Logitechs. You know that feeling when you break something important and you get that awful sense of dread? It was much like that. Did we just spend ~$400 on cameras that weren’t going to work?

But here at S.A.R.T. we do not give up easily.

So Kyle and I started with the basics. It appeared that perhaps MJPEG was not being used, even though it was supported. Motion has a palette option to select the desired format, so we set it to the relevant value to force MJPEG. It didn’t work.

Well, it did work, but we still had the same issue. Motion, during startup, tells us which palette it is using, and it was definitely using MJPEG for all the cameras. So what was the problem? Why didn’t it work like it should, like the oCams?

A wise man (me) once said, “when in doubt, consult the documentation”. So we decided to have a deep look and try to understand as much about Motion, UVC and v4l2 as we could. Kyle and I ended up chasing down half a dozen false leads, trying to find any possible cause for the issue. At one stage we thought it might be v4l (Video4Linux): during Motion’s startup, the first camera would report that it was using v4l2, while subsequent cameras would report v4l1. A clear downgrade, and also a dead end of confusion.

We took the framerate down to 1 fps and the resolution down to 240p, and still we couldn’t run two cameras. A 1 fps stream is most definitely not saturating the bus.

Finally, in the FAQ for Motion (of all places) was a section on USB camera bandwidth. It advises that you can probably only have one camera per bus. Stuff we already knew. It also has a section on increasing bandwidth by disabling USB sound modules, which I don’t believe would have helped, because we knew the actual data stream was nowhere near saturating the bus.

It was the section below that which caught my eye: Ubuntu UVC quirks. All it says is pretty much just “Bandwidth can be increased with the application of camera-specific quirks.”, with a link to the UVC driver FAQ.

I followed the link and right at the bottom of the page:

I get a “No space left on device” (-28) error when trying to stream from more than one camera simultaneously.

Aha! That’s our issue alright. I was quite disappointed to find that it started off by listing all the things we had already tried: reducing resolution, forcing a compressed format, putting cameras on different ports, and so on.

But right near the bottom, I found what I was looking for:

If none of those options are possible or effective, read on.

How mysterious. Apparently devices themselves are responsible for telling UVC how much bandwidth they will need. Now, say, hypothetically, you were a camera manufacturer and you decided to shave a bit of time off your development schedule by, I don’t know, maybe requesting the entire bandwidth of the port.

Thankfully, there is a quirk for that! Linux has these things called kernel modules, which are roughly analogous to drivers on Windows. There are modules for things like sound, input and display, but also some non-driver things like process schedulers, and, you know, all the good things. There is a kernel module called uvcvideo which handles USB webcams.

Quirks are basically little settings for extreme cases. I’ve only ever had to use module quirks once before, years ago trying to get arcade joysticks working with a Raspberry Pi.

The site says there is a quirk called FIX_BANDWIDTH, which ignores the bandwidth reported by the camera. It states it should be completely safe to enable; the absolute worst that could happen is corrupted images. There is one small issue: it only works with uncompressed formats. This made sense, since an uncompressed stream is always the same size, while a compressed stream changes size depending on what the camera is seeing. We’ll burn that bridge when we get to it.

I did a search for how to enable this quirk, and found this handy post on StackOverflow.

Remove the module with sudo rmmod uvcvideo and reload it with the relevant quirk with sudo modprobe uvcvideo quirks=640. This value also enables the RESTRICT_FRAME_RATE quirk, which I believe makes the driver use only the first frame rate interval it receives, so it doesn’t end up with inconsistent framerates. I don’t think it is needed in our situation, but it shouldn’t hurt to have.
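For reference, the quirks parameter is a bitmask, and 640 is just the two quirk bits combined:

```shell
# The quirks module parameter is a bitmask:
#   FIX_BANDWIDTH       = 0x080 (128)
#   RESTRICT_FRAME_RATE = 0x200 (512)
QUIRKS=$((0x80 | 0x200))
echo "$QUIRKS"   # prints 640

# Then, as root:
#   rmmod uvcvideo
#   modprobe uvcvideo quirks=640
```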

I set the Motion palette option back to YUYV, an uncompressed format. And to my surprise, it worked! We had multiple cameras running off the same USB port! I had two running at the same time, but increasing the framerate and resolution caused issues. It was something at least, but we really needed to get compression working so we could have all three cameras at a decent resolution and framerate.

The StackOverflow post has another answer, which links to this post. It details how to enable the FIX_BANDWIDTH quirk for compressed formats. We need to download the Linux kernel source, edit the UVC driver’s source code, compile it into a new module and load that new module with the relevant quirks, replacing the old one.

The post gets the kernel source from Linus Torvalds’ own repo, which is cool, but we’d rather use the Ubuntu kernel, as we are using Ubuntu Server 16.04. Ubuntu provides instructions for getting the kernel source here.

First of all, you’ll need to enable the source repositories for apt. Edit /etc/apt/sources.list and uncomment the following line:
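On Ubuntu 16.04 (xenial) the line in question is the deb-src entry for the main archive; the exact mirror URL may differ on your system:

```
# /etc/apt/sources.list: uncomment the deb-src line for your release, e.g.
deb-src http://archive.ubuntu.com/ubuntu/ xenial main restricted
```

Run sudo apt-get update afterwards so apt picks up the source repository.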

Now we’re ready to download the kernel.
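Assuming you want the source for the kernel you are currently running, the download is one command (no root needed, as long as the deb-src repository is enabled):

```shell
# Fetch the source package for the running kernel
apt-get source linux-image-$(uname -r)

# The UVC driver lives in the media drivers tree
cd linux-*/drivers/media/usb/uvc
```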

Now we need to modify two files.

The first is the Makefile: we need to specify that it will be creating a new module with a new name. We’re going to call this module aauvcvideo, just so we know it is different from the unmodified one.

Edit Makefile and modify it so it looks like this. It’s slightly different to the StackOverflow post.
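A minimal sketch of what the Makefile could look like, as an out-of-tree module build that reuses the driver’s objects under the new name. The object list here is what we’d expect in a 16.04-era driver; check the original Makefile in your tree for the exact list:

```makefile
obj-m += aauvcvideo.o
aauvcvideo-objs := uvc_driver.o uvc_queue.o uvc_v4l2.o uvc_video.o \
                   uvc_ctrl.o uvc_status.o uvc_isight.o uvc_debugfs.o \
                   uvc_entity.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```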

Now we need to edit the code itself and allow it to enable the quirk for compressed formats.

Edit uvc_video.c and find the function named uvc_fixup_video_ctrl() and at the end of the function add:
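As a sketch of the idea: uvc_fixup_video_ctrl() already contains a FIX_BANDWIDTH block that estimates the payload size itself, but it is guarded by a check that the format is uncompressed. Appending something like the following mirrors that existing block for compressed formats (stream, ctrl, format and frame are the function’s existing variables in the 16.04-era driver; verify against your source):

```c
/* Apply the FIX_BANDWIDTH estimate to compressed formats too.
 * Mirrors the driver's existing uncompressed-only block above,
 * minus the check that excludes UVC_FMT_FLAG_COMPRESSED. */
if ((format->flags & UVC_FMT_FLAG_COMPRESSED) &&
    stream->dev->quirks & UVC_QUIRK_FIX_BANDWIDTH &&
    stream->intf->num_altsetting > 1) {
	u32 interval;
	u32 bandwidth;

	interval = (ctrl->dwFrameInterval > 100000)
		 ? ctrl->dwFrameInterval
		 : frame->dwFrameInterval[0];

	/* Worst-case estimate based on frame size and rate; a
	 * compressed stream should need no more than this. */
	bandwidth = frame->wWidth * frame->wHeight / 8 * 3;
	bandwidth *= 10000000 / interval + 1;
	bandwidth /= 1000;
	if (stream->dev->udev->speed == USB_SPEED_HIGH)
		bandwidth /= 8;
	bandwidth += 12;
	bandwidth = max_t(u32, bandwidth, 1024);

	ctrl->dwMaxPayloadTransferSize = bandwidth;
}
```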

Now we can build the new module!
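With the Makefile rewritten for an out-of-tree build, compiling is a single make from the driver directory, assuming the matching kernel headers package (linux-headers-$(uname -r)) is installed:

```shell
# From drivers/media/usb/uvc
make
```

This should leave an aauvcvideo.ko in the current directory.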

Remove the old module and reload the new one with our favourite quirks:
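A sketch of the commands, using insmod rather than modprobe since the freshly built aauvcvideo.ko sits outside the usual /lib/modules tree:

```shell
sudo rmmod uvcvideo
sudo insmod ./aauvcvideo.ko quirks=640
```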

That’s all there is to it. Loading the new module on startup would be straightforward: add the original module to the modprobe blacklist and load the modified one instead. I loaded up Motion with MJPEG compression on, and it worked! We had three cameras running at a good framerate and resolution at the same time.
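A sketch of what that startup setup could look like; the file names under /etc/modprobe.d are our own choice, nothing fixed:

```shell
# Stop the stock driver from loading at boot
echo "blacklist uvcvideo" | sudo tee /etc/modprobe.d/blacklist-uvcvideo.conf

# Install the new module somewhere modprobe can find it
sudo mkdir -p /lib/modules/$(uname -r)/extra
sudo cp aauvcvideo.ko /lib/modules/$(uname -r)/extra/
sudo depmod -a

# Load it at boot with the quirks we want
echo "options aauvcvideo quirks=640" | sudo tee /etc/modprobe.d/aauvcvideo.conf
echo "aauvcvideo" | sudo tee -a /etc/modules
```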

Four cameras, compressed, at the same time, on the same USB port! Majestic.

A success!
