
MOX and Open-Source Video


Earlier this week Brendan Bolles launched an Indiegogo campaign to support a six-month development effort that would result in a new open-source video format called MOX. I had the opportunity to interview Brendan about the project, and the full transcript of the interview follows.

MOX is intended to be a moving image format that belongs to no company, so that the code is completely transparent, as with still image formats like JPEG, PNG or EXR. Unlike MXF, a subset of which MOX is likely to use as its container, it is not intended to be open-ended but instead limited to what will work universally on any platform (which in this day and age means Mac, Windows and Linux).

There are many potential users of MOX; if successful it could be as universal as the still image formats mentioned above, and it targets a similarly broad audience. The developer comes from the world of visual effects and understands what makes a format like EXR useful in that world, but he is also familiar with the problems of a prosumer format like QuickTime and how it fails to properly serve professionals, despite its ease of use and ubiquity. The idea is to get rid of problems such as files that won’t write or play on a particular platform, or require a missing codec, or have technical issues such as inconsistent gamma that can’t be solved by anyone except the company that owns the format and doesn’t openly share details (hello, Apple).

MOX is not a new container or codec format that risks becoming obsolete; its main risk is lack of adoption or interest. It came about because the developer proposing it realized that he has access to a significant amount of code relating to container and codec formats that basically just needs to be put together in a usable, elegant form.

Once MOX has been created, it belongs to no one, and it doesn’t rely on Brendan to update it, or on any particular company to remain in existence. It is designed to be accessed and updated by anyone, a model used extensively from GitHub to Wikipedia to OpenEXR.

The campaign is already over 50% of the way toward its modest goal; extra resources help compensate Brendan for adding more to the initial code, as detailed in the interview. The full transcript follows, and addresses many of the pointed questions associated with this effort.

 

Mark Christiansen: To begin, I just want you to tell me, how do you envision what you’re making being used?

Brendan Bolles: It’s really for two different audiences. Depending on who you’re talking to, you may describe it in two different ways. One would be an open source ProRes, so that would be for your video editor type people. The other people might be in visual effects, so you might say it’s like OpenEXR in a movie format. For the ProRes side, you say that the advantage of this is that you can use it in the same way you use ProRes and move video from one program to another or from one person to another. The main advantage is that because it’s open, it would be completely cross-platform. So you can go to Windows or Linux, and in Linux, to my knowledge, you don’t really have a good intermediate format.

The other problem is that people sending video back and forth often, if they’re using QuickTime, will see these gamma issues and other problems, but what are you going to do? Call Apple technical support and tell them your QuickTime movie has problems? Because of its closed nature, you just have so little control over what’s happening. You don’t have other tools that can independently examine your movie…it’s all through the Apple system.

So, for the ProRes people, that’s the main thing. We’re open because we want to be cross-platform, not just because we love open. If QuickTime was cross-platform and had APIs that could run on Linux and Windows and give you access to everything, then at least for that side we wouldn’t need to do this.

MC: You would still have the issue of the trouble you have looking under the hood when things go strange with QuickTime, like you were referencing with gamma issues, where something changes at Apple’s end and they don’t publish anywhere what changed.

Brendan: It’s true. If it was closed-source you would still be at their mercy. That’s the advantage of open. You can get to the bottom of a problem.

It’s kind of like if you had a JPG, or a PNG. PNG is a good example because there’s a gamma tag in it. In the past, some people have opened a PNG in a web browser and thought it looked weird, but because it’s an open format you can open it in another program and get a handle on whether the file itself is bad or whether the program is reading it wrong. But because QuickTime always goes through their run-time engine, different programs are getting it from the same place, so you really can’t do that. The openness allows you to go in there and see exactly what’s happening as pixels are going from one place to another, especially if you have a programmer at your disposal. You can actually get to the bottom of a problem.

 

MC: Let’s talk for a second even more basically about the difference between a container format like QuickTime and codecs, which run along with that container format. Maybe you can just describe how those two are handled by what you’re proposing.

Brendan: I don’t know what the official terminology is, if there is such a thing. I think a format is a container plus codecs. Oftentimes you use them interchangeably. Like with QuickTime. When you say “QuickTime”, you often mean the QuickTime container and the codecs that QuickTime supports.


MC: Let me restate, since it sounds like you don’t agree with the framework of the question. I’m trying to elucidate what this is for someone who’s only peripherally familiar with intermediate codecs or all the problems that come up. Or even people who are happy with QuickTime.

So what you’re building is going to be in an existing file format. And it’s going to standardize and open the standards of a number of different codecs. So let’s talk about MXF as the alternative to QuickTime and why you chose it and what you see as the positives and any potential drawbacks of going with a .mxf file format.

Brendan: First of all, I’ll just say that I don’t think people really care what the container actually is. Nobody who uses QuickTime probably cares about the actual structure of how that format works. They just want it to work properly and hold video and certain metadata. So, the main thing is I didn’t want to create my own container, because the whole basis of this project being able to succeed with just one person working on it goes back to the fact that I noticed all this technology was already out there. There was already a container format that people had worked on for years, that had been standardized and used in productions, so we’re pretty confident it works. With all the codecs, many of them, such as FLAC and Opus, are already being used all over the world. For all the still image formats, like PNG, obviously we know they work. And then you have the Dirac codec from the BBC that they’ve been using themselves. They put that out there as open source, but it hasn’t really found a home, because it hasn’t been adopted by the people who control the movie formats. It’s out there, and Apple could put it in QuickTime, but they just haven’t decided to.

So, it was all of this technology already existing that made me realize that this was a project I could do now. If I had to create any of those pieces from scratch…now you’re talking about a huge scope of a project.

MC: So essentially you’re assembling a bunch of pieces that are themselves already open source, but don’t all work together currently?

Brendan: Exactly.

For example, take Dirac from BBC and MXF. Apparently, at least at the BBC, they’re already doing that. So the natural question is, “why don’t you just say you’re making MXF with Dirac?” The thing is that MXF as just a complete container format is unrestricted in terms of what could possibly be in there. You get an MXF from certain cameras and it’s going to have MPEG-2, and a different camera might have MPEG-2 but then use different metadata and be packaged differently.

So the big idea for MOX is to not have it be open-ended. We want to use that container that works, but we actually want to limit what you can put into it. And that’s the key to it actually working anywhere. By knowing these are all the codecs that can possibly be in a MOX file, you can have confidence that a MOX you get from anywhere will read in any actual MOX reader. With MXF, you just can’t say that. In fact, if someone gave you an MXF with Dirac, I don’t know if there’s any commercial editing software that could read it. Something could probably read it, but it’s really a roll of the dice. Even though Dirac in MXF is a completely legal combination.

So that’s why we can basically say, MOX is an MXF container, plus these certain codecs for audio and video, and further metadata that we’ll also standardize. MXF is also completely open in terms of metadata, which is not very helpful when you’re trying to exchange files. On both sides, you need people to have a standard for how they’re actually going to communicate it. MXF is sort of like we’re using telephone copper wire, but then you need to know exactly what language you’re speaking to actually communicate. And again, that goes back to using these pieces, because I wouldn’t want to be making the entire phone system infrastructure. But to get people to agree that we’re going to be speaking English and talking at a certain time is doable. In a sense, that hasn’t been happening yet, so it’s what we plan to do.

 

MC: From a user point of view, if I have MOX, what I have is what we would call an intermediate format. I can use it to openly exchange files and within that I can set my own codec according to my taste. If it looks more like EXR where it has jillions of channels and very little in the way of compression, that’s one way it could go. Or if it’s fairly elegantly but aggressively compressed for archiving purposes of, say, a lot of material that started out compressed, that’s an option as well. So I can go small or I can go very complete. I have those kinds of options provided that I’m choosing a codec that is open source and therefore is part of MOX. Is that correct?

Brendan: Right. And for all of the codecs in MOX, that is the #1 rule. They have to be open source and patent-free. That’s another key thing to point out. It’s kind of very technical, but a lot of people will ask me, “why didn’t you just use H.264?”

For one, I’m not sure that it’s actually well suited for this because that’s really a delivery codec, unless you have hardware that can handle it in real time. But the other answer is that H.264, even though there’s open source software that can read and write it, is patent-encumbered. If somebody just wanted to make a little utility that uses it, they’d have to pay licensing fees, and it gets much more complicated.

The BBC released Dirac as patent-free. They own patents on it, but then they said they were making it freely available to everyone. And that’s a key reason why this format is possible.

But I want MOX to be very similar to QuickTime. So you get a dialog that lists the codecs, and you can choose one based on your particular needs and the different capabilities, and then you just write a MOX, and on the reading end all those codecs will be the same. Any codec you choose will be readable by any other program. It should be pretty simple.

Actually, the way I’m envisioning the interface for that is kind of the opposite of QuickTime. In QuickTime, you pick the codec and then you go and see “millions of colors,” “millions of colors + alpha,” and all those various options. I was thinking, and this is purely an interface thing and has nothing to do with the file that will be written, that it would be good to do it the opposite way, where you would just say something like, “I’m going to write 16-bit images with an alpha with lossless encoding.” And then your codec list would just be a subset of what could actually do that, rather than scrolling through codecs to find the thing that you wanted.

So, for example, if you said you wanted floating-point, then it gives you OpenEXR as your only option.        

And writing the file is the same. The writer knows it’s writing that codec, and obviously you have to write it within the constraints of the codec.
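To make that idea concrete, here is a minimal sketch of what capability-driven codec selection could look like. The CodecInfo struct, its fields, and the example capability values are invented purely for illustration; they are not the actual MOX API, which does not exist yet.

```cpp
// Hypothetical sketch only: these names and the codec capabilities listed
// here are placeholders, not the real MOX interface.
#include <iostream>
#include <string>
#include <vector>

struct CodecInfo {
    std::string name;
    int  maxBitDepth;      // highest integer bit depth supported
    bool supportsAlpha;
    bool supportsLossless;
    bool supportsFloat;
};

// Return only the codecs that can satisfy what the user asked for.
std::vector<CodecInfo> matchingCodecs(const std::vector<CodecInfo> &all,
                                      int bitDepth, bool alpha,
                                      bool lossless, bool wantFloat)
{
    std::vector<CodecInfo> result;
    for (const CodecInfo &c : all) {
        if (c.maxBitDepth >= bitDepth &&
            (!alpha || c.supportsAlpha) &&
            (!lossless || c.supportsLossless) &&
            (!wantFloat || c.supportsFloat))
            result.push_back(c);
    }
    return result;
}

int main() {
    // The capability values below are rough placeholders, not authoritative specs.
    std::vector<CodecInfo> codecs = {
        {"Dirac",   10, true, true, false},
        {"PNG",     16, true, true, false},
        {"OpenEXR", 16, true, true, true },
    };

    // "16-bit, alpha, lossless" lists PNG and OpenEXR; asking for
    // floating point narrows the list to OpenEXR only.
    for (const CodecInfo &c : matchingCodecs(codecs, 16, true, true, false))
        std::cout << c.name << "\n";
    for (const CodecInfo &c : matchingCodecs(codecs, 16, true, true, true))
        std::cout << "float: " << c.name << "\n";
}
```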

 

MC: I’m going to quote Adam Wilt from the MOX discussion on CML here, and the gist of his question was, “how is this really different from what MXF started out trying to be and then failed at being when it got too tied up with too many variations and other barriers to it being a truly open interchange?”

Brendan: I kind of use the cliché that “less is more.” We’re using the MXF container, not because we want to use all the features in it. And in fact, we’ll very specifically say, “you are limited to these features.” This might evolve as we are creating the format and we get feedback from contributors and all the people who are testing it out.

For example, MXF lets you have your media in a completely separate location from the actual MXF file. My intention right now is not to support that. Obviously, it can open up some abilities, but it also means you might have a file that you give to someone that won’t work without that media.  

MC: I would not be sad to see that structure go away.

Brendan: Right. On the other hand, people do use QuickTime reference movies, so sometimes people do like that. That’s where the feedback is going to come in. Again, my interest in MXF is not to use all of it, and in fact, I don’t think it’s even feasible to use all of it. It’s mainly because I believe it can do what we want to do, which is just store audio and video codecs in a container format. Because it’s well tested, that’s the reason to use it. Again though, I don’t think anyone cares. If we hit some weird roadblock and for some reason MXF doesn’t work for our purposes, I don’t think anyone would really care if we booted it. I think they really just want a movie format that works. But I have no reason to believe it won’t work.

MC: So when you have the file format that you’ve created, it’s not an MXF file, it’s MOX. And then under the hood, it provides whatever your API allows in terms of all the variations and options of what you do or don’t allow. It sounds like you’re trying to create something that is transparent first, but also easy to use and less breakable than some of the other alternatives. It sounds like that’s the goal.

Brendan: Yeah, definitely. With MXF, they really thought of a lot of different things for a lot of situations, like streaming, and I’m hoping to just get that for free. Again, if I had to come up with my own container format, that suddenly doubles the size of this project.    

The reason why I even got that idea is I made a WebM plug-in for Premiere. So I got to know that format. And WebM is similar to this. It’s just a generalized container, but limited to a specific set of codecs. In that case, it’s the Matroska container, using VP8 and VP9 for video and Opus and Vorbis for audio. So, it’s really the same thing.

People were writing Matroska with the DivX codec and all these sorts of things, and if you got an MKV file, unless you knew exactly how it was written, you couldn’t have a whole lot of confidence that you’d be able to read it. With WebM it’s the same technology, but because it’s restricted you know that if you get a WebM file, you can read it in anything that says it can read WebM, because it can only be this thing.

And again, I don’t think people really care that it’s Matroska or whatever. They just picked Matroska because it was open source, and was patent-free, and it had streaming as one of its design goals.  

MC: In the case of this, what people care about is that they’re using a format which won’t later on break or disappear. Obviously the biggest concern with any new format is whether it has the institutional backing to stick around.

 

So, let’s switch the conversation to your campaign and what it’s designed to achieve. And then we can talk about what real success would look like for this campaign and then beyond it.

Tell me about what you’re looking for with the campaign. Who are you looking to have support it, and what would be the ideal outcome besides reaching your target goal…which, frankly, is pretty modest. By your own estimate, you cut your salary in half relative to what you thought it would take to pay a top-level programmer like yourself.

Brendan: It’s a weird situation to be in when you’re talking about hiring yourself. Especially for something that’s going to be for the good of the community, so that’s why I’m shortchanging myself on my full rate.

I did write something in response to Adam’s comment, as it made me think about this. Besides being able to fund the thing, I think the reason this is so important to crowdsource is to have this little army, but hopefully a big army, of people who are interested. We’re doing the kind of Indiegogo that’s like Kickstarter, where it’s all or nothing. That’s the beauty of it. Let’s say we had this idea and couldn’t get anybody interested. Then we’d know, and we wouldn’t have to do anything. It would show us this idea really doesn’t have traction, and I can be glad we didn’t spend six months working on it only to release it and have nobody care. If we get enough people on board, not only will we know that people are interested, but then we have people to help push it. We know they’re kind of on the team.

The other thing is that they kind of share the risk. They’re contributing money directly as well as time and interest. So, if I were to work for however long to make a file format, and it just flops, that would be six months of my life completely wasted for nothing. But here, we’re distributing the risk as well. It just makes it more possible to do this without having some big company hiring people to do it. Unfortunately, I think this is the reason this hasn’t been done before, because if you’re a company and you make a video technology, usually you want to hold onto that and not make it open. That’s the entire reason why we have problems with these video technologies, is that they’re not open.   

 

MC: I can think of at least one very large video software company that lacks their own intermediate format. So it seems to me that if I were in your shoes, one of my main goals would be to get a company like that to adopt this and to fully support it. I see that your stretch goals include explicit support for Adobe software. Is that the reasoning behind them, or can you say a little bit more about what would be the big win in terms of going beyond your initial target, and also who might respond and contribute to get behind this?

Brendan: So, you’re asking why am I starting with the Adobe stuff and then moving to NUKE…the real answer is that I actually don’t know if there’s any other choice, because Final Cut doesn’t have a plug-in architecture for doing this. Avid doesn’t have one either. So, I don’t think I really could start anywhere else. But then also, there’s the simple fact that it’s the stuff I’ve been working on in the past. My expertise is with the Adobe APIs. After Effects plug-ins and Premiere plug-ins. I’ve written a few NUKE plug-ins, but that would be a little more new to me, but they also have an SDK that’s publicly available. For whatever reason, Apple does not. They actually used to.

I’ve been researching this. You used to be able to make a QuickTime component. In fact, WebM has a QuickTime component, but those only work in QuickTime 7. When they upgraded everything to QuickTime X they dropped support for that. Now, somewhere there is support for that because RED has a plug-in for Final Cut X that will let you read RED files. To make that, they must have gotten access to some sort of secret APIs that Apple doesn’t make public. 

MC: So the gist of what you’re saying is that you’re dealing with a couple of companies that are very closed and already have their own format, they haven’t opened that format up, and that’s part of the reason you’re creating this. And you’re first working with the companies that already have the framework to basically let you do the work to at least get your software working with them. And from there, you envision an open-source project taking off. At that point, it wouldn’t just be that the whole world now relies on Brendan Bolles to keep this going. It exists and it’s completely open, so you did your initial work based on what was contributed to your campaign. Maybe you keep working on it, maybe not, but it doesn’t cease to exist based on the success or failure of your own company.

Brendan: Yes, absolutely. And also, that is a key part of open source. It’s kind of like Wikipedia. On Wikipedia, you write an article but then you might lose interest, but because anyone can edit that article it doesn’t rely on you to keep it going. Same thing with open source. It doesn’t rely on the people who originally created it to continue it on. It doesn’t belong to any one person.

Also, the reason for writing those Adobe plug-ins is I think there are people who just want to move video from Premiere on their computer to Premiere on someone else’s computer. If you just had a Premiere plug-in, I think that would be useful. Then there are many more people who want to be able to render a movie from After Effects on this computer and then move it to the editor over on another computer. Actually, After Effects can render out of Media Encoder, so even just for that first plug-in, After Effects would be able to render MOX files. So again, that workflow alone is useful. People will be able to use that.

And then add NUKE, that’s another one. There are definitely people that move things back and forth from NUKE to After Effects and NUKE to Premiere. Just having those would make it useful, which would be okay. More importantly, if people can use it in their work then they can be testing it and we can know that it does work.

And then from there, once it’s shown to be working, and if it’s easy to adopt, then I would hope we could get some more people on board. Like the DaVinci stuff would be great. They have no plug-in API for that, so they would have to do it themselves. By having an easy C++ library like OpenEXR has, hopefully it lowers the barrier to adding support so that people just add it because, why not? They’d think, why wouldn’t we do this? For the companies that don’t have APIs, a lot of it is going to be some user lobbying to make that happen.

  

MC: Two-part question – one, is there a standards committee that would ultimately play a role in further legitimizing this format? And the second question is, what other institutional player would be most helpful to you? That’s to say, once they’re on board, or you get a certain level of backing, you’re confident you’ve created something that now has legs and it’s safe for everyone to go in the water.

Brendan: The standards thing is an interesting question. Actually, I will probably want feedback on that. I’ve said, though, that OpenEXR is kind of the model that I follow. For one, because it ended up being tremendously successful in terms of adoption, but it’s interesting because it doesn’t have a formal standards committee.

It was created by ILM really without anyone else’s input. They were just using it. And then they just sent it out into the world. Although they do collaborate, it’s not like it’s an SMPTE standard or anything. I definitely don’t want to be the MOX monarch, but I’ve also seen how those SMPTE standards groups take forever to do something. Again, I’m hoping to get the first plug-ins of this thing out in six months. I don’t think any SMPTE standard goes that quickly. Just talking about what they’re going to do takes much longer than six months.

To some degree that’s part of the whole thing about this. It’s sharing the risk. People are having confidence in me to get this thing done based on the work I’ve done, but then there is some risk because people have to say they’re going to give me some money, give me some time and then in six months we’ll see. If it was a company getting started that would be costing millions of dollars, then you’d be much more cautious about letting someone just try to do that. Hopefully because we are so much more modest people will be more likely to say, “let’s see what happens”.

I would like there to be some sort of inner circle…some sort of MOX board. I guess I kind of am the MOX monarch so I’d be in charge of putting that together.   

MC: It’s been my experience that with a crowd-funding campaign you do find yourself in conversations with people who now know about your project. So who would be your dream teammates? Who would you love to have show up to the table?

And as a quick follow up, on ProVideo Coalition you’re going to be talking to a lot of individual professionals. Plenty of our audience work at larger companies, but there are a lot of professional freelancers. So what are you looking for from that level of backers?

Brendan: I definitely want to have input from all of those various types of people. You’d want to have people who are purely artists, like a video editor. Then you’d also want to have people that were working on DaVinci, that were purely on the technical side and people from the software side. I would want to have both industry and user side represented there.

In terms of what those professional freelancers are going to get out of this, when I first started thinking about this I went to Kickstarter and searched for “open source” because I wanted to see other examples of this. And there are shockingly few that are only software. There are a ton of things that are open-source hardware, probably because those are so much easier. You say, “give me money, we make this thing and you get this physical thing.” So you’re buying a thing, basically.

But for this, because the thing that we’re making isn’t a physical thing and it’s going to be free, I think it’s a much more pure form of crowd-funding. You’re putting your money in because you say, “I want this thing to exist.” People can say their $20 doesn’t make a difference, but if nobody puts $20 in then it’s not going to happen at all and if we all do it then it will happen. So I don’t have an easy sell in terms of what people are going to “get”.

There are some limited perks though, and I said I’ll give credits on the documentation, so if it takes off people can look back and show they were involved when it was just the twinkle in someone’s eye. And I’ve actually already had a few people contribute $1,000 so they can have their logo or their name displayed on the website. But that’s really all I can offer in terms of perks.

The thing about it that’s interesting to me though is that’s really the original idea behind things like Kickstarter. It wasn’t supposed to be for product people, because there was already a pipeline for getting products developed and made, where you usually got a loan and investors and that sort of thing. Kickstarter was originally for someone like a sculptor or artist, and gave them the ability to tell their fans that they wanted to make some large, expensive piece which they didn’t have the money to front. So the idea was that these fans would give money just to see something exist that they wanted to see. The biggest reward you would see would be a private party or something like that when it opened.

 

MC: You’re just saying, do you believe in this, and if you do, remember that I’m putting the time and sweat into making it. Therefore, you vote for me doing that by backing me to pay my rent and eat while I do that.

Brendan: Yes, exactly.

Obviously I hope this gets funded because I’m personally someone who really wants to see this project happen. Taking less pay is my contribution.

MC: Let’s talk about that for a second because we already established that you undervalued yourself by about 50% right from the get-go. So let’s suppose this really takes off. You’re adding stretch goals, and it sounds like you have some more in mind, and I think that’s a great idea. Essentially, is your plan to just keep adding onto your own time and investment in it if you go beyond your goal?

Brendan: Yeah, that’s the idea. There are a few more possible plug-ins that aren’t on that list, and the more plug-ins we can work on from the get-go, the more use cases we have and the more likely we are to run into any problems. For example, you can actually write plug-ins for the RV player. So, if we were able to add that onto the list, that thing probably has different sorts of requirements from something like After Effects or NUKE, where you’re just getting a frame. By having more of those things all bubbling up together, the overall process of getting everything final will slow down, but we’ll be running into problems earlier and we’ll be able to get more users testing more stuff.

So yeah, that’s basically what the stretch goals are. It’s me saying if we add this much more time, then I can make another thing. And then that will help us get more of a critical mass where you can say not only can you move it from Premiere to Premiere and After Effects to NUKE, but now you can play it in RV…  

MC: For sure. It’s funny, you’re taking me back to that ’90s kind of feeling, when you would just see a new format or codec appear in a pulldown with new software. In a way, it’s been a long time since that’s been a normal thing. It’s kind of been the tried-and-true stuff for a while. It’s kind of true that it’s a name-recognition thing, partly. Okay, maybe there are various MacGyver ways to install it in many more places than are initially supported, but on the other hand if it’s just there in the pulldown, you’re going to go “hey, what’s this?” Which is what we always used to do.

Brendan: Right. And one other thing that I think is interesting is that there are these various video compression technologies out there like Dirac, which I was surprised to see because BBC probably spent millions to make that, and it’s free and nobody is really doing anything with it. But then there are these other image compression technologies where they just really exist as a kind of a little piece of code in like a sample command line executable. So there’s oftentimes a lot of stuff out there that maybe could be really useful, but because they can’t get Apple or Avid to include it, we never see it because that’s the only way that people can take advantage of it. So I think it would be interesting to see that.

For example, OpenEXR just recently added another codec. It’s actually a high-quality lossy codec, kind of like ProRes but for floating point, which is great timing because we’re totally going to love using that. Being open is going to allow us to adopt new technologies if they become available and if they add something. We’re not going to add a codec just because it exists. But if you have a new codec that’s higher quality or faster or just adds something, then new things can be added. The key is that you just never drop support for the old stuff. And open source makes that possible.

With OpenEXR, it’s fine that they add new codecs. You want them to if it improves the format. The important thing is that they can never drop an old one. But because you have all of the code right there, even if the people who run EXR decided they were going to drop it, they couldn’t force anyone else to drop it because they don’t control it anymore.

 

MC: Are there ever interim updates required to just keep a given codec running? So basically you’re saying once it’s in there it’s in there?

Brendan: Yeah, I mean, the most that could possibly happen is, say, you get a new version of Xcode and some little snippet doesn’t compile anymore. But then you just fix it and you’re fine.

And that’s why open source is so important, because maybe someone else is using a different compiler than me, and with closed source you just get a library, and if it’s not working, or there’s a bug here, or it’s not linking, or any of these other technical things, you’re stuck. With open source they can fix it, and then they can send the changes back up there so everybody gets those changes. And that’s the important thing about open source.

And I’ve contributed to OpenEXR, for example. I had that exact problem where, to build Adobe plug-ins, I use an older version of the compiler, but when they have new stuff coming I download it and try it out, and I find problems and send the changes back up. Otherwise, the alternative is that there’s one person who’s supposed to have access to every single platform and build their library for every platform.

MC: So it sounds like OpenEXR is in a pretty major way a great model for what you’re trying to do. Are there any flaws with EXR that you want to change, or is it the kind of success you’d hold up as exactly what you’d love MOX to achieve?

Brendan: I am a huge fan of OpenEXR and I really don’t have anything to mention about it as a flaw. For one thing, I’ve never seen anything better from the technical/programmer side. I’ve never seen a better written library than that. It’s so well written and so clean. It just does everything right. In fact, I’ve learned a lot of my better C++ techniques really just from looking at that library. 

But then the fact that they use multiple codecs and the fact that the format has changed over time is amazing. For example, when it first came out it was scanline only, but certain people wanted to add tiles and MipMap support for things like textures, and for bucket rendering. So they added that. But they made it so that everyone who had written a reader that used just the simple scanline interface could keep using the library, which would do everything for you under the hood. So you just had to get the new library that knew about tiles, recompile, and everything just worked.

And that’s the one thing I’ve learned from writing these other movie formats as a programmer. When I was writing the WebM format, I had to learn everything. I had to learn how to write the container, I had to learn how to program the codec. With EXR though, there are all these codecs and you don’t have to know anything about them as a programmer. You’re just sending it pixels, and the codec can go crazy with them. Same thing with reading. You don’t have to know anything about how that codec works.
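As a point of reference, reading a file through OpenEXR’s simple RGBA interface looks roughly like this (a minimal sketch adapted from the standard OpenEXR example, with error handling omitted). The same few calls work whether the file is scanline- or tile-based, and whatever compression codec it was written with:

```cpp
#include <ImfRgbaFile.h>
#include <ImfArray.h>

// Read an EXR into an RGBA buffer without knowing anything about its
// compression codec or whether it is scanline- or tile-based.
void readExr(const char *fileName,
             Imf::Array2D<Imf::Rgba> &pixels, int &width, int &height)
{
    Imf::RgbaInputFile file(fileName);
    Imath::Box2i dw = file.dataWindow();
    width  = dw.max.x - dw.min.x + 1;
    height = dw.max.y - dw.min.y + 1;

    pixels.resizeErase(height, width);
    file.setFrameBuffer(&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
    file.readPixels(dw.min.y, dw.max.y);   // the library decodes whatever is in the file
}
```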

It’s kind of funny to me how in movies it’s often not like that. For example, when you’re writing a movie you’re encoding video and audio, but often you put some video frames in, you put some audio in, and the encoder doesn’t instantly spit out an encoded video frame and audio frame. That’s the way encoders work. They often have to get a series of frames before they’re ready to produce something.

So it’s a real pain in the neck to have to think about that when you’re writing the movie. You have to think about how you have to buffer it, when all you want to do is just say here’s video, here’s audio, you take care of it.
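That push-frames-in, pull-packets-out shape is roughly what a movie writer has to juggle. Here is a hypothetical sketch of the pattern; the class, its method names, and the eight-frame lookahead are invented for illustration and do not correspond to any real encoder API:

```cpp
#include <cstdint>
#include <vector>

struct Frame  { std::vector<uint8_t> pixels; };
struct Packet { std::vector<uint8_t> data; };

// Toy encoder: it holds frames until it has a full group, then emits one
// packet for the group. Real codecs buffer for motion estimation, B-frames, etc.
class ToyEncoder {
public:
    void pushFrame(const Frame &f) {
        pending_.push_back(f);
        if (pending_.size() >= kGroupSize) emitGroup(kGroupSize);
    }
    bool pullPacket(Packet &out) {
        if (ready_.empty()) return false;    // nothing may be ready yet
        out = ready_.front();
        ready_.erase(ready_.begin());
        return true;
    }
    void flush() {                            // drain whatever is still buffered
        if (!pending_.empty()) emitGroup(pending_.size());
    }
private:
    static const size_t kGroupSize = 8;       // pretend lookahead requirement
    void emitGroup(size_t n) {
        Packet p;
        p.data.resize(n);                     // dummy payload standing in for encoded bits
        ready_.push_back(p);
        pending_.erase(pending_.begin(), pending_.begin() + n);
    }
    std::vector<Frame>  pending_;
    std::vector<Packet> ready_;
};

int main() {
    ToyEncoder enc;
    Packet pkt;
    for (int i = 0; i < 30; ++i) {            // the writer's loop: push, then drain
        enc.pushFrame(Frame{});
        while (enc.pullPacket(pkt)) { /* hand pkt to the container muxer */ }
    }
    enc.flush();                              // the last few frames only come out here
    while (enc.pullPacket(pkt)) { /* write the remaining packets */ }
}
```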

MC: Right, multipass, etc…

Brendan: Yes. So from doing it with EXR, it was just very clear to me that EXR was so simple and movie formats should be like that too. And then you can write little modules for new codecs. You shouldn’t have to actually make everybody learn the new codec where you have to get it from here, put it in there…they shouldn’t have to know anything.

 

MC: So we touched on Adam Wilt’s concerns, but others are wary of adopting a codec that we can’t be sure will be supported for many, many years to come. We’ve already seen other supposedly reliable codecs disappear. And some think it would be far better for Adobe to bite the bullet and issue something, or perhaps to make this whole effort an official Adobe project.

Brendan: I would actually disagree that it would be better to make it an official big company project. And I think OpenEXR is a good example. That was made by ILM, which is a company, but they’re a company of users. They’re the users of the format, and they’re not trying to profit off the format. I’m a big fan of Adobe, but then you’re asking the question in terms of who do you trust more, Apple or Adobe? But when it’s open source you don’t have to trust anybody. That code exists, and it can never not exist.

This is the development model Git uses, and why I'll be using GitHub, and why Linus Torvalds, the creator of Linux, made Git. Because it doesn’t require you to trust anybody. He’s also said that he never has to back anything up, because everything he does is spread all around the world. So even though everyone thinks of him as the king of Linux, technically he has no more authority than anybody else. It's just that people trust his thinking.

So if I make this thing and then I start being untrustworthy, it wouldn’t really matter because people would already have the code and there would be nothing I could do to it. And then if I came out and told everyone to forget this new codec, it wouldn’t matter. I could tell everyone that the plug-ins that I uploaded don’t have that codec anymore, but everyone could just say they weren’t using those and could use the ones they built, which still have what they want in them.

MC: Right, but the other thing you might want to address is the fact that the example used is about a codec. So in other words, there may even be confusion that what you’re creating is in fact a codec.

Brendan: That’s a good point. People say that, and I am not creating any codecs. If I was creating a codec, I would have to raise hundreds of thousands of dollars.

I mentioned this in a tweet. When Google was making WebM, they didn’t have an open-source, patent-free video codec to use, so they bought a company for 124 million dollars. So apparently that’s what it costs to make a video codec. And we’re not going to do that.

So, not only would I refuse to use some sort of closed source video codec that could possibly break in the future, but I’m unable to do that. That’s the nature of open source. It cannot be taken away from you, because it’s not under anyone’s control.

And hey, when Microcosm came out years ago, people were seeing this sixteen-bit lossless codec in QuickTime and thinking how great it was. Even if Microcosm had been open source it would still have problems, because QuickTime has eliminated the ability to make codecs. So it was subject to this closed-source thing that it was plugging into. Of course, it wasn’t open source. If it had been open source we’d be able to at least continue supporting it as a codec. If the operating system had changed or if the company lost interest, we’d be able to keep that going. But because Apple controls the container, we couldn’t even do that. But with MOX, everything is open. The codecs and the container. The software that makes all of those things. There’s simply no way that it can be taken away from you.

And I think that’s really important, because like I said in the MOX video, imagine if you had a digital camera, but instead of JPG it had its own file format. With RAW you kind of have that, but at least Adobe has figured out how to read all of those. But even that’s a little scary because Adobe could decide not to read them anymore. Anyway, imagine if you had a camera that didn’t write JPG or RAW that Adobe could read. It just wrote its own crazy format and you could only read it using their software. That would be kind of terrifying. The idea that you had taken all these photos and then years from now…imagine if there was some crazy technology where you couldn’t even convert it to something else. That’s terrifying to me. The idea that your work is trapped in their format and something could happen that would shut it off. Especially when you might look at it fifty years from now. I mean, who knows what could happen. That company could just be gone, and it’s not going to run on that hardware or software in fifty years. Everything is going to be different.

With open source though, it will always be able to be carried forward. And by being open like a JPG, you just know that you can read it forever. And even if Photoshop stops doing JPG, any other program could pick it up. So that’s what terrifies me: having your work trapped in someone else’s file format. And it’s just not necessary. It used to be necessary, but now that all these pieces have been released as open source it’s no longer necessary. We just have to put the pieces together. So much of this work has been done by so many people over the years and we’re just pulling it together in a nice little package.

 

MC: So you’ve demonstrated really well why you’re a guy who’s comfortable taking that step and you’re doing us all a favor of taking it this far, and your campaign is doing well. Anything else you wanted to throw out?

Brendan: For one, anyone who has these concerns can post them on the Indiegogo page. You can tweet me @moxfiles. I’m totally down to discuss these things. And I’m not saying these concerns are unfounded. And I also can’t say that everything will work out as we hope. But I think it’s in a place where the risk is low enough and the rewards will be high enough that it’s worth doing it and finding out.

And again, I really like the fact that OpenEXR was kind of made in production. I feel like a lot of these standards are things you sit around and talk about. I’ve been in these meetings where it’s people just talking and talking about what would work or what wouldn’t work. I much prefer to just try it. Let’s get a thing working.

MC: It does seem to me that one difference between what you’re doing and OpenEXR is that ILM kind of threw down and said they were going to use it, no matter what. Even if nobody else could see why it was so cool, they were going to use it and they were going to be around for at least a few more years. So it seems to me that your ideal backer is someone who can commit at a similar level.

I mean, having talked to you about this, it really does sound like this will have legs and once it's in use anywhere it will have the means to continue to be in use everywhere. But honestly, the one thing is that there just has to be enough interest that there is more than one guy who is going to keep rewriting this and there’s somebody who has said, “Yeah, you know what? We’re going to use this, so we’re invested.”

Brendan: Absolutely. I'm hoping that the initial release gets some traction. If you look at OpenEXR, now it’s been contributed to by people from Weta, people from DreamWorks. So that’s definitely where I’d like to be.

Really, I’d like it to be so that at some point I didn’t even have to be involved because it would just be its own thing and I just helped create it. That’s really where it should be.

And like you said about ILM, they came out and said even if nobody else was going to use it they were going to. But how many employees does ILM have? It’s quite a few, but if we have several hundred backers and they were all using it, that would be roughly the same size. I think you could potentially say, “we have this many people using it,” but that’s not where we want it to end.

MC: Well, there is this institutional backing thing, and maybe it’s just psychological, but in your case, let’s just say a bunch of the VFX studios at the high end decide to say, “you know what? We’ll move away from still image formats and using those so much, because this actually solves many of the concerns that we’ve had with archiving.”

Brendan: Yeah. By the way, I should say that I’m not hoping to displace still image formats. I think still image formats are totally great for certain things, and if MOX had been invented first, you’d ask, well, now with a render farm how do I render each frame until…you’d have to create the still-image formats to do what you need to do on a render farm.

So to me, it’s not that I want to replace still image formats, but there are some times, like in OpenEXR, where you want to store this floating-point imagery but you also want to have it with sound.

MC: Yeah, and it’s easy to think that over time, there’s more risk of just that sound file and that visual file becoming so divorced from one another that they never find each other again. Simply because right there, you’ve thrown down and said those are two separate files. We’ve all had it happen. You can’t remarry them because they weren’t named the same way, or one of them goes corrupt, or whatever. I mean…

Brendan: Sometimes, people just want to make movies.

MC: It is moving images after all. So in a way, the still sequence plus audio file has always been a workaround. It’s just in that industry everybody is so used to it that nobody thinks twice about it.

Brendan: It’s true. And also, the big studios have full-time people running around to make sure that the right audio file gets loaded with the right frame sequence and gets played at the right frame rate and gets the right color space stuff. So if you have a full-time person, you can have some hope of babysitting that system. I think the people that like this the most are the smaller shops. Especially in advertising. They’re the people I hear from the most that want to make movies, and they’re kind of more averse to image sequences. I think it is because they’re moving things around themselves, and they don’t have a big infrastructure.

MC: Not to mention that as you get more into graphics, there’s less patience for a format that you can’t just click to play.

Brendan: Right.
