Addons for Empathy 2: Proxies and a bonus!

Hi Folks!
We’ll try to do a bi-weekly installment of Addons for Empathy (until we run out of addons). This one is a two-parter: the main installment is about working with proxies in Blender, and the second is about a bold new experiment in Rig UI.

Proxy Workflow and Transparent Proxies Addons:

Get the files from my gitorious

The video is about two addons, both making proxy editing in the sequencer more friendly to our project. A quick explanation:

Blender’s Video Sequence Editor, or VSE for short, has a feature called proxies. This basically allows an in-place replacement of strips with 25%, 50%, 75% or 100% versions in a fast format (.jpg or motion JPEG). This is especially useful when:

  1. Editing large-format files that are too slow to play back in realtime – either because of resolution (2K or 4K) or file type (.EXR!!!)
  2. Editing over the network, especially files of the previous types
  3. Working with complex and multiple effects that could benefit from being cached

So proxies in Blender work a bit like a combination of proxies and caches. I prefer to treat them as the former, since that skips having to recalculate every single time you change some timing – instead they only need to be recalculated when the sources change.
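
To make that concrete, here is roughly how those proxy settings look from the Python side – a minimal sketch against the bpy API, where the strip name is made up and the rebuild operator needs to be run from a sequencer context:

    import bpy

    # Grab the scene's sequence editor (assumes an edit already exists)
    seq_ed = bpy.context.scene.sequence_editor

    # The strip name here is made up -- substitute one from your own edit
    strip = seq_ed.sequences_all["shot_010.mov"]

    # Turn proxies on and ask for a 50% version
    strip.use_proxy = True
    strip.proxy.build_50 = True
    strip.proxy.quality = 90  # JPEG quality of the generated proxy frames

    # Build the proxy files (same as the Rebuild Proxy button)
    bpy.ops.sequencer.rebuild_proxy()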

However, working with proxies in Blender can be painful by default, and this is where Proxy Workflow Addon comes in:

  1. Editing proxy settings must be done strip by strip: Proxy Workflow lets you set them for all selected strips at once (roughly as in the sketch after this list)
  2. Default location is in the same folder as the originals, which is bad in the case of network shares; Proxy Workflow automatically sets them to a local directory “TProxy” that contains all the proxies for the edit, and can be moved around like a scratch disk
  3. Sometimes Blender tries looking for the original files even when it is using proxies. If you are trying to use proxies to avoid using the network/internet, this becomes a problem. Proxy workflow allows ‘Offlining’ strips, and then ‘Onlining’ them again when you can reconnect to the network
  4. Blender doesn’t know when the source files are ‘stale’ and need to be re-proxied – for instance if you re-render. Proxy Workflow timestamps the proxies as it makes them, allowing you to select a bunch of strips and re-proxify only the changed ones.
  5. Proxy Workflow is designed to work with movie and image strips only for now, as I’m interested in true proxies, not caching effects.
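
Points 1 and 2 are easy to approximate in a few lines of bpy. The following is a rough sketch of the idea, not the addon’s actual code; it assumes a “//TProxy” directory next to the .blend, as described above:

    import bpy

    def proxy_selected_strips(proxy_dir="//TProxy"):
        """Enable 50% proxies on every selected movie/image strip,
        pointing them all at a single local proxy directory."""
        for strip in bpy.context.selected_sequences:
            if strip.type not in {'MOVIE', 'IMAGE'}:
                continue  # true sources only, no effect caching
            strip.use_proxy = True
            strip.proxy.use_proxy_custom_directory = True
            strip.proxy.directory = proxy_dir
            strip.proxy.build_50 = True
        # Build everything in one go (run from the sequencer)
        bpy.ops.sequencer.rebuild_proxy()

    proxy_selected_strips()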

A separate addon called ‘Transparent Proxies’ does what it says on the tin (and no more): it allows making proxies of image sequences that preserve the alpha channel for alpha-over effects. It does this by cheating: it uses ImageMagick on the command line to make a .tga proxy, and just renames it to .jpg to satisfy Blender. You need to install ImageMagick first for it to work.
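
For the curious, the cheat looks something like this – a simplified sketch, not the addon itself; it assumes ImageMagick’s convert is on your PATH, and the function name and paths are made up:

    import os
    import subprocess

    def transparent_proxy(src_image, proxy_jpg, scale_percent=50):
        """Scale an image with ImageMagick, keeping its alpha (as .tga),
        then rename the result to .jpg so Blender accepts it as a proxy."""
        tmp_tga = proxy_jpg + ".tga"
        subprocess.check_call([
            "convert", src_image,
            "-resize", "%d%%" % scale_percent,
            tmp_tga,
        ])
        # Blender only looks for .jpg proxy files, so lie about the extension
        os.rename(tmp_tga, proxy_jpg)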

Bonus: Rig UI Experiment:

Code is at gitorious
This brings us to the bonus round – the Rig Selection UI. I’m continuing my round of experimentation with BGL and modal addons, to make the kind of ‘typical’ rig UI where animators can select or act on a rig by clicking on an image. This UI uses an SVG file to define the hotspots, and a PNG to actually draw the image. It already works, though I’m still going to refine it and add more options / easier rig customizability. The end goal is to be able to do Rig UIs without writing code, simply by drawing them in Inkscape and pressing a few buttons in Blender. Stay tuned!!!
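
To give a taste of the approach, the hotspot half could be as simple as reading rectangles out of the SVG and matching their ids to bone names. This is purely illustrative; the element ids and the naming convention are assumptions, not the addon’s actual format:

    import xml.etree.ElementTree as ET

    SVG_NS = "{http://www.w3.org/2000/svg}"

    def load_hotspots(svg_path):
        """Read every <rect> in the SVG and treat its id as a bone name.
        Returns {bone_name: (x, y, width, height)} in SVG coordinates."""
        hotspots = {}
        for rect in ET.parse(svg_path).getroot().iter(SVG_NS + "rect"):
            bone_name = rect.get("id")
            hotspots[bone_name] = tuple(
                float(rect.get(attr)) for attr in ("x", "y", "width", "height"))
        return hotspots

    # A modal operator would then draw the PNG with BGL and, on clicks,
    # select the pose bone whose rectangle contains the mouse cursor.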

 


Addons For Empathy – Floating Sliders

Hello all, long time no post!
As we’re getting closer and closer to releasing our files, I’m noticing that we have a huge (and I mean huge) trove of Python code that is largely undocumented. Some of it is pretty specific to this project, and other bits are useful in general. Even the specific stuff could be adapted, so it’s worth going over.

To address this, we’ve thought of doing an ‘Addons for Empathy’ video series, quickly explaining what some of the addons do, in addition to more traditional docs. The first one I’ll cover this way is the Floating Sliders addon: in short, it pops up small, keyframable OpenGL sliders for any floating-point pose-bone properties. The code is on gitorious, and the video below gives a simple explanation of what it does and how to use it.
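
For those who prefer reading code, here is the gist of what ‘keyframable floating-point pose-bone properties’ means in bpy terms – a tiny sketch where the property name is made up and the actual OpenGL slider drawing is omitted:

    import bpy

    pbone = bpy.context.active_pose_bone

    # Any float custom property on the bone is a candidate for a slider
    for key, value in pbone.items():
        if isinstance(value, float):
            print(key, value)

    # Setting and keyframing one of them ("lip_sync" is a made-up name)
    pbone["lip_sync"] = 0.75
    pbone.keyframe_insert(data_path='["lip_sync"]')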

As always, the video is licensed CC-BY, while the addon itself is GPL.
You can also download this video as a high-resolution .webm or .mp4 file, or watch it on YouTube.

The screencast itself was edited in Pitivi, with Inkscape titles. Video was captured via the Gnome screencast feature, and audio with Audacity.

Big thanks to Campbell Barton for help getting the min/max of custom properties and for explaining some of the finer points of keymaps, and to Dalai Felinto for showing a possible hack to make a popup menu (I ended up using a slightly different way).


Adventures in Blender’s Video Sequence Editor

[Screenshot: the sequencer]

Blender’s Video Sequence Editor (or VSE for short) is a small non-linear video editor cozily tucked into Blender, with the purpose of quickly editing Blender renders. It is ideal for working with rendered output (makes sense) and I’ve used it on many an animation project with confidence. Tube is being edited with the VSE, as a 12-minute ‘live’ edit that gets updated with new versions of each shot and render. I’ve been trying out the Python API to streamline the process even further. So… what are the advantages of the Video Sequence Editor? Other than being Free Software and already right there in Blender, it turns out there are quite a few:

  1. Familiar interface for Blender users: it follows the same conventions for selecting, scrubbing, moving, etc. This makes it very easy to use even for beginning-to-intermediate users.
  2. Tracks are super nice: there are a lot of them, and they are *not* restricted: you can put audio, effects, transitions, videos or images on any track. Way to go Blender for not copying the skeuomorphic conventions that make so many video editors a usability nightmare.
  3. Since Blender splits selection and action, scrubbing vs. selection is never a problem: you scrub with one mouse button and select with the other, so you never end up scrubbing on a tiny target or selecting when you meant to scrub. I’ve never had this ease of use in any other editor.
  4. Simple UI, not super cluttered with options.
  5. Covers most of the basics of what you would need from a video editor: cutting, transitions, simple grading, transformations, sound, some effects, alpha over, blending modes, etc.
  6. Has surprisingly advanced features buried in there too: speed control, multicam editing, proxies for offline editing, histogram and waveform views, and ‘meta sequences’, which are basically groups of anything (movies, images, transitions, etc.) bundled together in one editable strip on the timeline.
  7. As in the rest of Blender, everything is keyframable.
  8. You can add 3D scenes as clips (Blender calls them strips), making Blender into a ‘live’ title / effects generator for the editor. They can be previewed in OpenGL, and render out according to the scene settings.
  9. It treats image sequences as first-class citizens, a must!!!
  10. Python scriptable!!!! A big feature IMO (it uses the same API as the rest of Blender – see the sketch just after this list).
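
Point 10 deserves a tiny demonstration: building an edit from Python. A minimal sketch, where the file path and channel numbers are made up:

    import bpy

    scene = bpy.context.scene
    if scene.sequence_editor is None:
        scene.sequence_editor_create()

    seqs = scene.sequence_editor.sequences

    # Drop a render on channel 1 and its sound on channel 2
    movie = seqs.new_movie("shot_010", "//render/shot_010.mov",
                           channel=1, frame_start=1)
    sound = seqs.new_sound("shot_010_snd", "//render/shot_010.mov",
                           channel=2, frame_start=1)

    # Cross fades, scene strips, etc. are available via new_effect() and new_scene()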

[Screenshot: proxy settings]

Disadvantages are also present; I should mention a few:

  1. The UI is Blender-centric! So if you are not a Blender user, it does not resemble $FAVORITEVIDEOEDITOR at all. Also, you have to expose it in the UI (it’s only a dropdown away, but most people don’t even realize it is there).
  2. No ‘bin’ of clips and no thumbnail previews on the video files, though waveform previewing is supported.
  3. Lacks some UI niceties for really fast editing, though that can be fixed with Python operators, and it is also getting improvements over time.
  4. Could be faster: we lost frame prefetching in the 2.5 transition; however, it is not much slower than some other editors I’ve used.
  5. Not a huge amount of codec support: since Blender is primarily not a video editor, supporting a bajillion codecs is not really a goal. I believe this differs slightly across platforms.
  6. Bad codec support unfortunately means not only that some codecs don’t work, but also that some of the codecs that do work do so imperfectly.
  7. Needs more import/export features (EDL is supported, but AFAIK only one way).
  8. Some features could use a bit of polish. This is hampered by the fact that it is old code, a bit messy, and not many developers like to work on it.

Needless to say, this is all ‘at the time of writing’. Things may improve, or the whole thing may get thrown into the canal 😉

So what have I been up to with Blender’s video editor? Quite a bit! Some of it may end up not-so-useful in the end, but experimentation could yield some refinements. The really good thing about using Python is that I can ‘rip up’ complex things and rearrange / redo them, so the experiments don’t result in a huge waste. Let’s have a peek.


News + MediaGoblin is Rad

We are getting really excited to show all the incredible animation and amazing render tests coming off the farm. And even though we don’t want to let *too* much slip before time, I know Bassam is planning an update with some teaser images and production notes pretty soon.

Today I’m happy to share news of MediaGoblin, a libre software “publishing system” for images, video, audio and more that friends of Bassam’s and mine are building. It’s a single replacement for Flickr, YouTube, SoundCloud, and similar that anyone can run (like WordPress), but federated to keep files under user control. It’s very extensible; with support for 3D models just added, it now suggests an alternative to Thingiverse as well. But I’m especially excited about MediaGoblin because it will establish the core functionality we can use to implement a lot of the cool ideas we’ve had during Tube production for a collaborative platform, one that also fills the huge need for a solid asset-management pipeline, a kind of super-Helga with some interesting properties. We’ve been talking to a bunch of developers about putting together a free software project after Tube, and there has been a lot of interest; my thought is that studios could pool resources instead of each rolling their own and occasionally making a dead-end free software release.

A few weeks ago at the Blender Conference, we were talking with the renderfarm.fi developers about how, together with their distributed rendering, these fairly near-future pipeline/collaboration possibilities make it seem like a lot of big pieces are falling into place. MediaGoblin is worthy in its primary goals, but of especial interest for providing much of the functionality we’d need, plus perks like federation that we’ve dreamed about. The project lead is Chris Webber, until recently a developer at Creative Commons, and also a Blender user who did the animation in the excellent Patent Absurdity doc; he’s coding alongside Will Kahn-Greene, formerly of the Participatory Culture Foundation. And as part of the Tube Open Movie, Chris helped build pipeline scripts as well as our Reference Desk tool, one of the programs inspiring the new asset branch in Blender.

[Photo: Bassam at the 2012 Blender Conference]

Today MediaGoblin has a nice write-up at Libre Graphics World, concluding:

If you are concerned about having full control over images, videos and audio records that you put online, you have just a few days left to support development of MediaGoblin — an awesome free software project that decentralizes media storage.

If you are a VFX or animation studio, or even a 3D printing company, you have even more reasons to support the project. With initial support for 3D models (STL and OBJ) MediaGoblin has a great chance to grow into a scalable digital asset management solution that is free to use and modify.

Finally, if you are a developer who’s good at Python, MediaGoblin could do with your contributions.

** Donations are tax-deductible in the US and also support the Free Software Foundation, which hosts the campaign.

And thanks for anything you can do to help this awesome project by passing the word!


Cycles + Internal Test

Made a test from Bassam’s tutorial on using Cycles and Internal together. The result is not terrible, but not perfect. I used another model because Cycles crashes during render (too many objects) and doesn’t work with multiple UV layers like Internal does. Cycles is good to use, but it has a lot of limits so far. :( It’s not suitable for animation yet – too slow and it crashes often. Maybe for statics and interiors. It’s difficult to work with lights – it seems to only be possible to change the intensity. Internal also has problems with SSS: for instance, no shadows in the shadow pass. Anyway, we won’t use Cycles for characters yet, maybe for environments. Now to make some more tests! I’m looking forward to further developments in Cycles.

Note from Bassam: If this is Dimetrii’s ‘not perfect’ result, I think his perfect one would give me a heart attack :) brilliant as usual.
