This year at LibrePlanet I gave a small talk about Python in Blender. My primary goal was to talk about how Blender, by exposing its Python API directly in the interface, there for users to discover, gives a new meaning to ‘free software’: not just in licensing or community (though those things are important) but also in the design of the program itself. How many programs do you know that have an ‘edit source’ button for each interface element?
One of the elements of the talk was a tutorial on how to use Blender’s Python API to add a new primitive type to the Add Mesh menu. This is made very easy by the template scripts included in Blender, which let you create your own with just a few edits, and by the fact that you can grab data from Blender and pipe it into your script directly.
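To give a flavour of what such a script looks like, here is a minimal sketch of the geometry half of an Add Mesh operator (the pyramid shape and the names are my own invention, not taken from the template): the mesh itself is just a list of vertices and a list of faces, which inside the operator you would hand to a new mesh via `mesh.from_pydata(verts, [], faces)`. The `bpy` registration boilerplate is omitted here, since the template gives you that for free.

```python
def pyramid_mesh(size=1.0):
    """Build vertex and face lists for a square pyramid primitive.

    In Blender you would pass these to mesh.from_pydata(verts, [], faces)
    inside an Add Mesh operator; only the pure geometry is sketched here.
    """
    s = size / 2.0
    verts = [
        (-s, -s, 0.0), (s, -s, 0.0), (s, s, 0.0), (-s, s, 0.0),  # base
        (0.0, 0.0, size),                                        # apex
    ]
    faces = [
        (3, 2, 1, 0),                                # base quad
        (0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4),  # four sides
    ]
    return verts, faces
```
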
I’m not sure if the video from the conference will be available from the FSF; they are slowly putting talks up on the FSF MediaGoblin site – I’ll post something if it appears. In the meantime, I recorded a Python tutorial covering just this part:
Going back and recording the tutorial actually made it better than what I presented at the conference. I used the Addon template, instead of the Operator template, which makes it even easier to share the code. Interestingly, during the presentation, I found a bug in the operator template, which I fixed after it was over. The fix is now in Blender 2.77, a nice way in which a presentation about a free software project leads directly to a contribution 😉
I was rifling through some old shelves and came across an ancient notebook, filled with storyboards! Some of them were for stuff I never ended up making (though perhaps I will revisit them someday in a different form), but some of those boards were for an early animation I did in Blender, “Chicken Chair”, which was itself just an animation test for an impossibly more ambitious project. In any case, it was a fun project, and relatively meticulous for me at the time. You can see the resulting animation here – note the antique four-by-three aspect ratio! a classic!
This was the first or second ‘serious’ project I made in Blender, and it was a somewhat insane undertaking. I used the NLA (it existed!) to do the walk cycle on a path (this feature doesn’t actually exist anymore in Blender), but couldn’t find a way to blend the animations between the cycle and other bits of the movie. I ended up using multiple rigs, with animated constraints to switch between them. I’m pretty sure the entire animation (all of the shots) is in one file.
The motion blur in the video is, I believe, the old multisampled motion blur – rendering several subframes and blurring between them (very nice results but slow to render). All the postproduction, such as it is, was done in Gimp and Blender’s sequence editor – Gimp for painting the glow in the eyes ‘streaking’ over fast frames and for the lightning effects, Blender for everything else.
I actually don’t remember making storyboards for the project, but I must have, because here they are:
What I do remember is that I made most of the film in Rao’s coffee in Amherst, and that it took me almost a month to UV unwrap and texture the chair. The second hardest bit was animating the damn wire (using a super long IK chain that was impossible to control).
Well, that’s it! Feel free to share any of your own early animation projects in the comments 🙂
We’ll try to do a bi-weekly installment of Addons for Empathy (until we run out of addons). This one is a two-parter: the main installment is about working with proxies in Blender; the second is about a bold new experiment in Rig UI.
The video is about two addons, both making proxy editing in the sequencer more friendly to our project. A quick explanation:
Blender’s Video Sequence Editor, or VSE for short, has a feature called proxies. This basically allows an in-place replacement of strips with 25%, 50%, 75% or 100% versions, in a fast format (.jpg or motion JPEG). This is especially useful when:
Editing large-format files that are too slow to be realtime – whether in resolution (2K or 4K) or in type (.EXR!!!)
Editing over the network, especially files of the previous types
Working with complex and multiple effects that could benefit from being cached
So proxies in Blender work a bit like a combination of proxies and caches. I prefer them as the former, since it skips having to recalculate every single time you change some timing – instead they only need to be recalculated when the sources change.
However, working with proxies in Blender can be painful by default, and this is where Proxy Workflow Addon comes in:
Editing Proxy settings must be done strip by strip: Proxy Workflow lets you set them for all selected strips at once
The default location is the same folder as the originals, which is bad in the case of network shares; Proxy Workflow automatically sets them to a local directory, “TProxy”, that contains all the proxies for the edit and can be moved around like a scratch disk
Sometimes Blender tries looking for the original files even when it is using proxies. If you are trying to use proxies to avoid using the network/internet, this becomes a problem. Proxy workflow allows ‘Offlining’ strips, and then ‘Onlining’ them again when you can reconnect to the network
Blender doesn’t know when the source files are ‘stale’ and need to be re-proxied – for instance, if you re-render. Proxy Workflow timestamps proxies as it makes them, allowing you to select a bunch of strips and re-proxy only the changed ones.
Proxy workflow is designed to work with movies and image strips only for now, as I’m interested in true proxies, not caching effects.
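The staleness check above can be sketched in a few lines (this is the idea, not necessarily the addon’s exact code): compare the modification times of the source and its proxy, so only strips whose sources changed since the last proxy build get redone.

```python
import os

def proxy_is_stale(source_path, proxy_path):
    """Return True if the proxy is missing or older than its source.

    A sketch of the timestamp idea: a proxy only needs rebuilding when
    its source file has been modified more recently than the proxy.
    """
    if not os.path.exists(proxy_path):
        return True
    return os.path.getmtime(source_path) > os.path.getmtime(proxy_path)
```
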
A separate addon is called ‘Transparent Proxies’ and does what it says on the tin (and no more): it allows making proxies of image sequences that preserve the alpha channel for alpha-over effects. It does this by cheating: it uses ImageMagick on the command line to make a .tga proxy, and just renames it to .jpg to satisfy Blender. You need to install ImageMagick first for it to work.
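The cheat is simple enough to sketch (with made-up names – not the addon’s actual code): build an ImageMagick command where the `tga:` prefix forces the TGA encoder, which keeps the alpha channel, even though the output file is named .jpg so that Blender’s proxy system will pick it up.

```python
import os

def transparent_proxy_command(source, proxy_dir, percent=50):
    """Build the ImageMagick command line for one frame (hypothetical
    names, a sketch of the trick only).

    The 'tga:' prefix makes ImageMagick write real TGA data (alpha
    intact) into a file that is named .jpg to satisfy Blender.
    """
    base = os.path.splitext(os.path.basename(source))[0]
    out = os.path.join(proxy_dir, base + ".jpg")
    cmd = ["convert", source, "-resize", "{}%".format(percent), "tga:" + out]
    return cmd, out
```

You would then run each command with something like `subprocess.check_call(cmd)`, which is why ImageMagick’s `convert` needs to be installed and on the PATH.
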
Bonus: Rig UI Experiment:
Code is at gitorious
This brings us to the bonus round: the Rig Selection UI. I’m continuing my round of experimentation with BGL and modal addons, to make the kind of ‘typical’ rig UI where animators can select or act on a rig by clicking on an image. This UI uses an SVG file to define the hotspots, and a PNG to actually draw the image. It already works, though I’m still going to refine it and add more options / easier rig customizability. The end goal is to be able to do rig UIs without writing code, simply by drawing them in Inkscape and pressing a few buttons in Blender. Stay tuned!!!
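The SVG-hotspot idea is easy to sketch in pure Python (illustrative only – the real addon’s format and names may differ): parse the `<rect>` elements out of the SVG, treat each rect’s `id` as the name of the thing it selects, and hit-test mouse clicks against those rectangles.

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def load_hotspots(svg_text):
    """Parse <rect> elements out of an SVG as named hotspots.

    Each rect's id is used as the name of the bone/control it selects;
    the value is the rectangle as (x0, y0, x1, y1).
    """
    root = ET.fromstring(svg_text)
    spots = {}
    for rect in root.iter(SVG_NS + "rect"):
        x, y = float(rect.get("x")), float(rect.get("y"))
        w, h = float(rect.get("width")), float(rect.get("height"))
        spots[rect.get("id")] = (x, y, x + w, y + h)
    return spots

def hit(spots, px, py):
    """Return the id of the first hotspot containing the point, or None."""
    for name, (x0, y0, x1, y1) in spots.items():
        if x0 <= px <= x1 and y0 <= py <= y1:
            return name
    return None
```

In the addon itself, the modal operator would feed mouse coordinates (offset into the drawn PNG) into a hit-test like this and select the matching bone.
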
Hello all, long time no post!
As we’re getting closer and closer to releasing our files, I’m noticing that we have a huge (and I mean huge) trove of Python code that is largely undocumented. Some of it is pretty specific to this project, and other bits are useful in general. Even the specific stuff could be adapted, so it’s worth going over.
To address this we’ve thought of doing an ‘Addons for Empathy’ video series, quickly explaining what some of the addons do, in addition to more traditional docs. The first I’ll cover in this way is the Floating Sliders addon: in short, it pops up small, keyframable OpenGL sliders for any floating-point pose-bone property. The code is on gitorious, and following is a simple video explanation of what it does and how to use it:
As always, the video is licensed CC-BY, while the addon itself is GPL.
You can also download this video as a high resolution .webm or .mp4 file, or watch it on youtube
The screencast itself was edited in Pitivi, with Inkscape titles. Video was captured via the Gnome screencast feature, and audio with Audacity
Big thanks to Campbell Barton for help getting min/max of custom properties, and explaining some of the finer points of keymaps, and to Dalai Felinto for showing a possible hack to make a popup menu (I ended up using a slightly different way)
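For the curious, the heart of any slider like this is tiny (a sketch with made-up names, not the addon’s code): map the mouse position along the drawn slider into the property’s min/max range – the same min/max Campbell helped me get from the custom properties – and clamp it.

```python
def slider_value(mouse_x, slider_x, slider_width, prop_min, prop_max):
    """Map a mouse x position on an on-screen slider to a property value.

    The position along the slider becomes a 0..1 factor, clamped, then
    scaled into the property's [prop_min, prop_max] range. The OpenGL
    drawing and keymap handling around this are omitted.
    """
    t = (mouse_x - slider_x) / float(slider_width)
    t = max(0.0, min(1.0, t))
    return prop_min + t * (prop_max - prop_min)
```
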
Blender’s Video Sequence Editor (or VSE for short) is a small non-linear video editor cozily tucked in to Blender, with the purpose of quickly editing Blender renders. It is ideal for working with rendered output (makes sense) and I’ve used it on many an animation project with confidence. Tube is being edited with the VSE, as a 12 minute ‘live’ edit that gets updated with new versions of each shot and render. I’ve been trying out the Python API to streamline the process even further. So… what are the advantages of the Video Sequence Editor? Other than being Free Software, and right there in Blender, it turns out there are quite a few:
familiar interface for Blender users: it follows the same interface conventions for selecting, scrubbing, moving, etc., making it very easy to use for even beginning to intermediate users.
tracks are super nice: there are a lot of them, and they are *not* restricted: you can put audio, effects, transitions, videos or images on any track. Way to go, Blender, for not copying the skeuomorphic conventions that make so many video editors a nightmare in usability.
since Blender splits selection and action, scrubbing vs. selection is never a problem: you scrub with one mouse button and select with the other, so there is never the problem of having to scrub in a tiny target, or of selecting when you want to scrub. I’ve never had this ease of use in any other editor.
simple ui, not super cluttered with options
covers most of the basics of what you would need from a video editor: cutting, transitions, simple grading, transformations, sound, some effects, alpha over, blending modes, etc.
has surprisingly advanced features buried in there too: speed control, multicam editing, proxies for offline editing, histograms and waveform views, and ‘meta sequences’, which are basically groups of anything (movies, images, transitions, etc.) bundled together in one editable strip on the timeline.
as in the rest of Blender, everything is keyframable.
you can add 3D scenes as clips (Blender calls them strips), making Blender into a ‘live’ title / effects generator for the editor. They can be previewed in OpenGL, and render out according to the scene settings.
it treats image sequences as first class citizens, a must!!!
Python scriptable!!!! big feature IMO. (uses the same API as the rest of Blender)
Disadvantages are also present, I should mention a few:
the UI is Blender-centric! So if you are not a Blender user, it does not resemble $FAVORITEVIDEOEDITOR at all. Also, you have to expose it in the UI (it’s only a dropdown away, but most people don’t even realize it is there)
no ‘bin’ of clips, no thumbnail previews on the video files, though waveform previewing is supported.
lacks some UI niceties for really fast editing, though that can be fixed with python operators, and also is getting improvements over time.
could be faster: we lost frame prefetching in the 2.5 transition; however, it is not much slower than some other editors I’ve used.
not a huge amount of codec support: Since Blender is primarily not a video editor, supporting a bajillion codecs is not really a goal. I believe this differs slightly cross platform.
bad codec support unfortunately means not only that some codecs don’t work, but that some of the codecs work imperfectly.
needs more import/export features (EDL is supported, but AFAIK only one way)
some features could use a bit of polish. This is hampered by the fact that this is old code, a bit messy, and not many developers like to work with it.
Needless to say, this is all ‘at the time of writing’. Things may improve, or the whole thing may get thrown into the canal 😉
So what have I been up to with Blender’s video editor? Quite a bit! Some of it may end up not-so-useful in the end, but experimentation could yield some refinements. The really good thing about using Python is that I can ‘rip up’ complex things and rearrange / redo them, so the experiments don’t result in a huge waste. Let’s have a peek.
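As a teaser, here’s the flavour of helper these edit scripts tend to be built from (illustrative, not the actual project code): given strip extents of the kind you’d pull from `bpy.context.scene.sequence_editor.sequences`, find the lowest free channel for a new strip so nothing overlaps.

```python
def free_channel(strips, frame_start, frame_end):
    """Find the lowest channel with no strip overlapping [frame_start, frame_end).

    strips: list of (channel, start, end) tuples -- the kind of data a
    script would gather from the sequencer before placing a new strip.
    """
    channel = 1
    while any(c == channel and s < frame_end and frame_start < e
              for c, s, e in strips):
        channel += 1
    return channel
```
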