Blender’s Video Sequence Editor (or VSE for short) is a small non-linear video editor cozily tucked into Blender, with the purpose of quickly editing Blender renders. It is ideal for working with rendered output (makes sense), and I’ve used it on many an animation project with confidence. Tube is being edited in the VSE, as a 12-minute ‘live’ edit that gets updated with new versions of each shot and render. I’ve been trying out the Python API to streamline the process even further. So… what are the advantages of the Video Sequence Editor, other than being Free Software? It turns out there are quite a few:

  1. familiar interface for Blender users: it follows the same interface conventions for selecting, scrubbing, moving, etc., which makes it very easy to use for even beginning to intermediate users.
  2. tracks are super nice: there are a lot of them, and they are *not* restricted: you can put audio, effects, transitions, videos or images on any track. Way to go, Blender, for not copying the skeuomorphic conventions that make so many video editors a usability nightmare.
  3. since Blender splits selection and action, scrubbing vs. selection is never a problem: you scrub with one mouse button and select with the other, so you never have to scrub in a tiny target, or select when you want to scrub. I’ve never had this ease of use in any other editor.
  4. simple ui, not super cluttered with options
  5. covers most of the basics of what you would need from a video editor: cutting, transitions, simple grading, transformations, sound, some effects, alpha over, blending modes, etc.
  6. has surprisingly advanced features buried in there too: speed control, multicam editing, proxies for offline editing, histogram and waveform views, and ‘meta sequences’, which are basically groups of anything (movies, images, transitions, etc.) bundled together in one editable strip on the timeline.
  7. as in the rest of Blender, everything is keyframable.
  8. you can add 3D scenes as clips (Blender calls them strips), turning Blender into a ‘live’ title/effects generator for the editor. Scene strips can be previewed in OpenGL, and render out according to the scene settings.
  9. it treats image sequences as first class citizens, a must!!!
  10. Python scriptable!!!! big feature IMO. (uses the same api as the rest of Blender)


Disadvantages are also present, I should mention a few:

  1. the UI is Blender-centric! So if you are not a Blender user, it does not resemble $FAVORITEVIDEOEDITOR at all. Also, you have to expose it in the UI yourself (it’s only a dropdown away, but most people don’t even realize it is there).
  2. no ‘bin’ of clips, no thumbnail previews on the video files, though waveform previewing is supported.
  3. lacks some UI niceties for really fast editing, though that can be fixed with python operators, and also is getting improvements over time.
  4. could be faster: we lost frame prefetching in the 2.5 transition; still, it is not much slower than some other editors I’ve used.
  5. not a huge amount of codec support: since Blender is primarily not a video editor, supporting a bajillion codecs is not really a goal. I believe this differs slightly across platforms.
  6. limited codec support unfortunately means not only that some codecs don’t work, but also that some of the codecs that do work, work imperfectly.
  7. needs more import/export features (EDL is supported, but AFAIK only one way)
  8. some features could use a bit of polish. This is hampered by the fact that this is old code, a bit messy, and not many developers like to work with it.

Needless to say, this is all ‘at the time of writing’. Things may improve, or the whole thing may get thrown into the canal ;)

So what have I been up to with Blender’s video editor? Quite a bit! Some of it may end up not-so-useful in the end, but experimentation could yield some refinements. The really good thing about using Python is that I can ‘rip up’ complex things and rearrange/redo them, so the experiments don’t result in a huge waste. Let’s have a peek.
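As a taste of what that scripting looks like, here is a minimal sketch (the shot paths and durations are invented, and the bpy portion only runs inside Blender – the little helper works anywhere):

```python
# Sketch of driving the VSE from Python. The bpy import is guarded so the
# pure-Python helper below can also be used (and tested) outside Blender.
try:
    import bpy
except ImportError:
    bpy = None

def shot_starts(durations, start=1):
    """Lay shots end to end and return the start frame of each one."""
    starts, frame = [], start
    for d in durations:
        starts.append(frame)
        frame += d
    return starts

if bpy is not None:
    scene = bpy.context.scene
    scene.sequence_editor_create()          # make sure a sequencer exists
    strips = scene.sequence_editor.sequences
    shots = ["//renders/shot01.mov", "//renders/shot02.mov"]  # hypothetical
    for i, (path, frame) in enumerate(zip(shots, shot_starts([120, 96]))):
        strips.new_movie("shot%02d" % (i + 1), path,
                         channel=1, frame_start=frame)
```

The nice part is exactly what the post describes: because it’s a script, re-laying the whole edit after a change is just a re-run, not an afternoon of dragging strips.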


We are getting really excited to show all the incredible animation and amazing render tests coming off the farm. And even though we don’t want to let *too* much slip before time, I know Bassam is planning an update with some teaser images and production notes pretty soon.

Today I’m happy to share news of MediaGoblin, a libre software “publishing system” for images, video, audio and more that friends of Bassam’s and mine are building. It’s a single replacement for Flickr, YouTube, SoundCloud, and similar services that anyone can run (like WordPress), but federated to keep files under user control. It’s very extensible: support for 3D models was just added, suggesting an alternative to Thingiverse. But I’m especially excited about MediaGoblin because it will establish the core functionality we can use to implement a lot of cool ideas we’ve had during Tube production for a collaborative platform – one that also fills the huge need for a solid asset management pipeline, a kind of super-Helga with some interesting properties. We’ve been talking to a bunch of developers about putting together a free software project after Tube, and there’s been a lot of interest; I have a thought that we could get studios to pool resources instead of each rolling their own and occasionally making a dead-end free software release.

A few weeks ago at the Blender Conference, we were talking with the developers about how, together with their distributed rendering and these fairly near-future pipeline/collab possibilities, it seems like a lot of big pieces are falling into place. MediaGoblin is worthy in its primary goals, but of especial interest for providing much of the functionality we’d need, plus perks like federation that we’ve dreamed about. The project lead is Chris Webber, until recently a developer at Creative Commons, coding with Will Kahn-Greene, formerly of the Participatory Culture Foundation; Chris is also a Blender user who did the animation in the excellent Patent Absurdity doc. And as part of the Tube Open Movie, Chris helped build pipeline scripts as well as our Reference Desk tool, one of the programs inspiring the new asset branch in Blender.


Today MediaGoblin has a nice write-up at Libre Graphics World, concluding:

If you are concerned about having full control over images, videos and audio records that you put online, you have just a few days left to support development of MediaGoblin — an awesome free software project that decentralizes media storage.

If you are a VFX or animation studio, or even a 3D printing company, you have even more reasons to support the project. With initial support for 3D models (STL and OBJ) MediaGoblin has a great chance to grow into a scalable digital asset management solution that is free to use and modify.

Finally, if you are a developer who’s good at Python, MediaGoblin could do with your contributions.

** Donations are tax-deductible in the US and also support the Free Software Foundation, which hosts the campaign.

And thanks for anything you can do to help this awesome project by passing the word!



I made a test from Bassam’s tutorial on using Cycles and Internal together. The result is not terrible, but not perfect. I used another model because Cycles crashes during render (too many objects) and doesn’t work with multiple UV layers like Internal does. Cycles is good to use, but it has a lot of limits so far. :( It’s not suitable for animation yet – too slow, and it crashes often. Maybe for statics and interiors. Lights are difficult to work with – it seems only intensity can be changed. Internal also has problems with SSS: for instance, no shadows in the Shadow pass. Anyway, we won’t use Cycles for characters yet, maybe for environments. Now to make some more tests! I’m looking forward to further developments in Cycles.

Note from Bassam: If this is Dimetrii’s ‘not perfect’ result, I think his perfect one would give me a heart attack :) brilliant as usual.


I recently received a digital copy of Blender 2.5 Character Animation Cookbook from Packt Publishing. This book is written by Virgilio Vasconcelos, a Blender animator and rigger who is currently animating shot ‘a1s38’ on this project :)
The target audience of this book, I feel, is strong beginners or intermediate-level artists/learners who are new to rigging, animation, or to Blender itself. Advanced users could benefit from it more sporadically (“ooh, I didn’t realize you could do that!”) or as a reference, and students who are absolute beginners may get lost in some terms, or not yet know why you would want to do certain things.
Virgilio’s past experience both as a professional animator and as an animation professor is evident in this book. He writes in a clear, concise fashion, and has a knack of excluding super-complex detail while still taking things to a production level in a surprisingly simple seeming step by step way.
The first part of the book focuses on character rigging, and I really appreciate that he starts from the basics – setting good bone orientations, shapes, etc. – rather than leaving these things as an unexplained step for later on. The rigging lessons build on each other, so after some basic lessons they quickly ramp up to a level where students must really be diligent and pay attention to learn. By the end of the section students should be confident rigging cartoony biped characters, and have enough experience to start experimenting with ‘invention’: creating new setups for new situations, or their own personalized versions of common ones. I really love that Virgilio shows some of the very strong production techniques in Blender, such as using sculpting to create corrective shapes.

In the second part of the book, the focus is all on animation, starting with a simple ball exercise and rapidly ramping up into character animation. The first chapter is mainly technical (like the rigging section) in its setup: he starts with workflow, then things like IK/FK switching, etc. The book introduces workflow and technique first, so the focus at the start is learning animation in Blender, not learning animation in general. This chapter is basically an introduction to Blender for animators, and I think Maya or even 2D animators picking up Blender will spend most of their time here.

After the technically-heavy Blender intro, the rest of the animation chapters return to the basics a bit, with lessons in timing, spacing, anticipation, squash and stretch, etc. – all those basic animation principles we know and love. The book is good at using Blender features to let animators work efficiently, such as using Blender’s path-drawing features to adjust their arcs, or the OpenGL preview to better judge their timing. As in the rigging sections, the downloads for the book contain blend files that make it easy for students to get right in with each chapter, working on the exercises with no fuss.

The book ends with an appendix with some useful tips on planning, organisation, and terms.


Some criticisms: even in a good book such as this, I can find some things to crit ;), but they are mainly small things. In the rigging section, Virgilio fails to warn his audience about the (current) fragility of one setup when talking about the corrective shapes (an otherwise excellent segment). Luckily, a current Summer of Code project fixes this problem, so it’s likely that any such warning will be unneeded in the next release of Blender! Another tiny nitpick is that Virgilio uses the term ‘spacing’ in two different ways, the first time unconventionally (referring to actual physical locations), the second more in the usual animators’ sense; I feel he could have picked a better word for the first usage. Finally, in the rigging section, I think a tiny introduction to Python for creating interfaces would have been quite good, and would give riggers an alternative to the object/bone based sliders in the 3D view.

Conclusion: these really are tiny nitpicks. This book is good – in fact, I’d say it’s the best animation and rigging reference for Blender yet, and even a solid general reference for riggers and animators in other 3D applications (since most techniques will be similar in different programs). While I read through linearly from beginning to end, the book also has ‘See also’ segments at the end of each section that allow students focusing on a particular track to follow a different path of learning through the book, which I thought was a good idea. I would put this on my ‘recommend’ list: as a book for intermediate/strong-beginner users, as a Blender reference for riggers/animators coming from other software, or as a textbook for teachers.




I am currently working on the character Gilgamesh. I had to make a completely new model and sculpt the face. The UV unwrapping was difficult. But! It’s time to paint textures! Many tests with Projection Painting and layers (diffuse, bump, specular…).

We built Cycles from source using Brecht’s excellent instructions, with help from DingTo on the blendercoders IRC. After turning off desktop effects and the second monitor, we were able to use the GPU acceleration option (using CUDA) for really fast rendering. Just for fun, we brought in the Gilgamesh model and sculpt and set up some simple renders using Cycles, mainly relying on emitting cubes as light sources. Working in this renderer is very different from Blender Internal – you set up physically accurate materials using a node tree to combine shaders, quite unlike the diffuse-specular-etc. model that Internal uses. The render never stops; rather, it keeps getting better for as long as you have the patience to wait, but it’s really fast if you use simple shaders and GPU rendering.
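For the curious, here is a rough sketch of setting up one of those emitting-cube materials with nodes (the names and numbers are purely illustrative, the strength helper is just a made-up rule of thumb, and the bpy portion only runs inside Blender):

```python
# Guarded bpy import: the helper below works (and is testable) anywhere.
try:
    import bpy
except ImportError:
    bpy = None

def emission_strength(power, cube_size):
    """Made-up rule of thumb: spread a total 'power' over the cube's
    surface area, so bigger emitters get a lower per-area strength."""
    area = 6 * cube_size * cube_size   # surface area of a cube
    return power / area

if bpy is not None:
    bpy.context.scene.render.engine = 'CYCLES'
    mat = bpy.data.materials.new("emitter_cube")   # hypothetical name
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.clear()
    emit = nodes.new('ShaderNodeEmission')
    emit.inputs['Strength'].default_value = emission_strength(100.0, 2.0)
    out = nodes.new('ShaderNodeOutputMaterial')
    mat.node_tree.links.new(emit.outputs['Emission'], out.inputs['Surface'])
```

Assign that material to a cube and it lights the scene, which is exactly how the simple test renders above were lit.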
For the final render of Tube we will most likely use Blender Internal, as Cycles is unlikely to become production ready before the project is over, but this definitely feels like ‘the way of the future’.
Small tip: to use CUDA, you need a recent version of the NVIDIA driver (270 or greater), which is not in the Ubuntu 10.10 repo. You have to use a PPA or get it directly from NVIDIA.


I’m a huge fan of Blenderaid, a great way to manage your Blender projects. You run a small server that is capable of crunching through your project, finding all objects, dependencies, etc.; you then point your browser at it and get a graphical overview. You can look at individual files, see the names of objects/materials/etc., rename them, view dependencies, fix broken links, and now check and update SVN status, all from the comfort of your browser window. I’m using the Python 3 version, which for me necessitated installing PySVN from source, since the Ubuntu modules are Python 2 only. Other than that, I had a smooth install; I’m looking forward to continuing to use this version, and to further goodies in the future.

Some cool things you can do with it:

  • Find errors in your project globally – without having to check each file one by one in Blender – and fix them (this could benefit from batch tools so you can fix multiple at a time)
  • Create ‘bundles’ of files, e.g. to send to an off-site animator who doesn’t have SVN access, by quickly seeing all the dependencies of a given scene file. This can be done by hand right now, but I’m pretty sure it could be scripted fairly easily.
  • Make sure your files are up to date, track problems with SVN visually
  • Rename models/assets, find out where they get used, etc.
  • Probably a lot more :)

Blenderaid could change the way we work with SVN for projects – instead of checking out several gigabytes of production data, each artist need only check out exactly what they need, saving time, local disk space, and bandwidth. We could also use it to keep versions of assets, and (optionally) switch some scenes to newer versions while others continue working with the old.

I’m hoping to have time after tube to experiment with blenderaid in conjunction with helga, or alone, and to have server-side installation as well as the local one. This could be the key for large-scale projects in blender, big thanks to Jeroen and Monique for writing it, and I look forward to seeing how it evolves.

Quick note from Jeroen: the python2 version saves time by removing the need for additional compiling, and should work without any problem. (I was under the mistaken notion that Blenderaid’s python version had to match Blender’s).


I’ve often wanted lines for the ‘rule of thirds’ in the Blender camera as a composition aid – I’ve got countless blend files with little no-face meshes parented to cameras (which have to be moved or scaled whenever I change the camera view angle). Granted, this problem could be solved with a driver (one that might not update – driving on camera angle is not dependable yet), but I got tired of ad-hoc solutions.

I don’t use the Title Safe option much, or at all, so with the help of a trusty text editor (gedit in my case) I hacked a couple of files, and now I have ‘Thirds’ instead of Title Safe for the camera. The internal property is still the same – it just displays differently – so no messing with RNA happened.

If you want the same functionality and are comfortable building blender/applying patches, you can get it here . Usual disclaimers about baby eating and such apply.

Free/Open Source software is nice, isn’t it?


As always, the conference was awesome – an intense three days of talking, listening, meeting, blending, eating the traditional conference sandwiches, drinking coffee, beer and mojitos, not-enough-sleeping, more blending, etc.
After a sleepless but uneventful flight to Amsterdam, I walked into the Blender Institute the day before the conference, only to have Andy press-gang Pablo and me into making the Suzanne Festival and award interstitial animations with him. We had a (very sleepy) blast working till the wee hours, and more the next morning, and I got to go up in the projection booth once again and play the festival off my laptop, thanks to the power of totem/gstreamer and Python (for making the playlist). I apologize for the one or two glitches – a couple of the videos needed to be re-encoded for smooth playback, but we somehow missed that in the studio.

Jeroen Bakker showed me his awesome OpenCL nodes in the compositor on his laptop, running 20 (!) times faster than the CPU equivalent. When this stuff hits, it’s going to spark a mini-revolution for Blender. I’m no longer a sceptic about GPU computing, I guess :)
Wolfgang Draxinger did a fantastic job making the stereoscopic version of Elephants Dream. Great choices, hard work and technical precision – I’m blown away both by the result, which rivals the best stereo work from major studios, and by the amount of work he put into it. He’s planning Big Buck Bunny next, but in the meantime, some snaps of us removing the (unfortunately crumpled) screen after the show:

I met with Josh, Henri, Francesco, Jason, Jonathan, Jean-Sébastien, Heather, and recruited Dolf, Tal, and perhaps Luciano, Andy and Pablo for our project. We had a meeting on the second day of the conference, which gave me a chance to finally pitch the story and current animatic to the team in person, talk about where we are in the project, and assign some short-term tasks. We also had a presentation on Sunday, mainly about technical issues: rigging, though I did not demo rigamarule – it turns out auto-registration of operators had somewhat broken the UI while I wasn’t looking (it’s fixed in current Tube SVN). Josh showed off his work on procedural animation, and Henri demoed building scene layouts from library models using our LODing system and the landmark-snapping system created by Pablo Lizardo.

As Fateh has blogged, Tube member Jarred De Beer won the Suzanne Animation award, congrats dude!

The presentation had an unexpected benefit; it introduced the project to new contributors- Thanks Tal :)

Sadly I missed some people- Malefico has too many conferences on his plate to make it to Blender conference this year, and I was too swamped to meet up with Stani, Python coder and artist extraordinaire.

Finally, I had the honor of working for a bit on Andy and Eva’s awesome stop-motion animation project, Omega, which has some CG elements. I spent a large part of Monday (the day after the conference) rigging an amazingly designed and detailed character that Andy built for the movie.

Big thanks to Ton, Anja, Anna, Nathan and everyone who made the conference possible and enjoyable.


In between bouts of python coding I’ve been working on the model for the train and carriages. The model is asymmetric and modular so in theory loads of variations could be created by combining bits from the two models shown below.  The two ends of each train are different, as are the pantographs and all of the side panels.  The design is based on a mix of old NYC subway cars and Soviet-era Eastern European engines.  I built the model on top of an early design for the undercarriage which had been modeled by Jean-Sébastien Guillemette and Jarred de Beer.

The model for the train is currently very high-poly, as it will probably be used in a few close-up shots (these renders are all geometry – no texture normal/bump maps!). There’s a bit of work still left to do (naming 3000 objects and parenting them in a nicely ordered hierarchy, for one!), plus making a low-poly version, not to mention further tweaking of the design and rigging the moving parts! To make the low-poly version, lots of the objects can simply be moved to a hidden layer (or excluded from the ‘HI’ group – we use group instances to bring models into scene files), but many of the meshes need to be split into smaller parts so that those parts can be excluded from the low-poly renders. For example, we need to move the rivets out of the body panel meshes, as they will be so small in many of the long shots that they won’t be noticeable – even at 2K!

For continuity’s sake, we need to make sure the rivets are in the same place in the high- and low-poly versions. To save the texture painters some time later down the pipeline, I’ve written a script which goes through every object in the model, separates the details from the main lower-poly mesh component (I can define these using vertex groups, etc.), bakes the details’ AO ‘shadow’ onto the lower-poly part of the mesh, and saves the texture in a maps file. The script also intelligently names all of these textures in case links get broken as the SVN gets reorganized over time. For example, if the script finds an object called ‘side_door’, it will split the object into ‘LOW0001side_door’ (for low poly) and ‘DET0001side_door’ (for details), then bake out the AO to an image called ‘IMG0001side_door’, all while making sure that the meshes stay linked to save memory (rendering without any textures is already taking over 2GB of memory).

Unlike the normal P-key behaviour, the script makes sure that separating meshes affects all the linked duplicates, not just one of them. The numerical prefix helps pair a specific detail mesh with its corresponding lower-poly mesh. For example, if there are 5 duplicates of ‘side_door’ (‘side_door.001’, ‘side_door.002’…), the first will be renamed ‘LOW0001side_door’ with its details saved in ‘DET0001side_door’, the second will be renamed ‘LOW0002side_door’ with its details in ‘DET0002side_door’, and so on, so future ‘tubers’ working on the model won’t have to spend hours searching the outliner to find the right object! The number is at the start rather than the end of the name to stop Blender messing with it, and to help sorting in an alphabetized list!
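The core of the naming convention is mechanical enough to sketch in a few lines – this is an illustration of the scheme described above, not the actual production script:

```python
import re

def split_names(duplicate_name, index):
    """For a linked duplicate like 'side_door.002', return the three names
    the convention assigns: the low-poly mesh, the detail mesh, and the
    baked AO image. The zero-padded numeric prefix keeps Blender from
    renumbering the objects and keeps alphabetized lists sorted."""
    base = re.sub(r"\.\d+$", "", duplicate_name)  # strip Blender's .00x suffix
    tag = "%04d" % index
    return ("LOW%s%s" % (tag, base),
            "DET%s%s" % (tag, base),
            "IMG%s%s" % (tag, base))
```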

Fingers crossed there won’t be any surprising bugs in the script, as baking out 1000 unique meshes is likely to take some time and we haven’t written a resume function for the script yet!


Just a small utility I’ve been using to make my life easier: a little add-on that lets you assign names to layers in an armature and hide/unhide them in the 3D View Properties area. Download it here, unpack, then install via the Add-ons area in User Preferences. It works on current SVN (and, hopefully, the soon-to-come beta).

There’s also a ‘hidden’ operator you can use in the 3D view (search for ‘change bone name layer’) that allows you to switch selected bones to a named layer. It’s a bit rough, but the panel at least beats having to hunt and peck one of those 32 nondescript little buttons :)
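Under the hood there is not much magic: in the 2.5 API a bone’s layer membership is a 32-element boolean list, so switching selected bones to a named layer boils down to something like this sketch (the layer names are invented, and the bpy portion only runs inside Blender):

```python
# Guarded bpy import so the pure helper is usable outside Blender too.
try:
    import bpy
except ImportError:
    bpy = None

def layer_mask(index):
    """The 32-element boolean list that puts a bone on exactly one layer."""
    return [i == index for i in range(32)]

NAMED_LAYERS = {"body": 0, "fk_arms": 1, "ik_arms": 2, "face": 8}  # made up

def move_selected_to(layer_name):
    """Assign all selected pose bones to the named layer (Blender only)."""
    mask = layer_mask(NAMED_LAYERS[layer_name])
    for pbone in bpy.context.selected_pose_bones:
        pbone.bone.layers = mask

if bpy is not None:
    move_selected_to("face")
```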

This isn’t really part of the Gilgamesh rig, just a utility I’ve been using while rigging. The final UI (you can see a peek of it just under the layers in the screenie) has hardcoded layer names, and only those that are relevant to animation.
