Sun and Sky mini Tutorial


Hi folks; a mini update in the form of a tutorial!

Two of our timelapse shots have the outdoor sky above; one just for the lighting and reflections, and the other – which is now scattered across half a dozen blendfiles – actually has some honest-to-goodness sky and clouds visible. There’s a bunch of interesting stuff going on in these shots, but for now, I’ll focus on the humble World background.

The World background in Blender is like an infinite sphere around your scene; you can put an HDR image on it to light your scene (we use HDRs frequently for our interior shots, generating them from equirectangular renders of our sets), or you can use Blender's Sky Texture node, which simulates a cloudless sky with a sun positioned in it:

Sun and world with no rotations (directly overhead)

So that's the raw setup we'll start from. It's noon over the equator (I guess) and the sun is directly overhead. You'll see that in this default state both the Sky Texture and the Sun lamp (as seen in the chrome ball reflection) line up perfectly. It's also one of the most boring lighting setups. Download the .blend file here.

Understanding how the Sky Texture node works (by hand):

We can rotate the sun lamp directly to change the angle of the sun. Unfortunately, this doesn’t automatically change the world background. We need to fiddle with the Sky Texture Sun Position widget to match the glow around the sun and the sky simulation with the actual lamp position.

The value you are tweaking when you play with the nice circular widget is a 3-value vector (x, y, z) for the Sun position in the sky. If you imagine a super huge spherical dome with its radius normalized to 1, then that vector is (0,0,1) when the sun is directly overhead. As the sun angle changes, that vector is rotated by the same angle to match the position of the sun, intuitively:

 Rotate the default (0,0,1) vector by the angle of the sun

Sadly, that’s easier said than done, as the widget doesn’t really help us orient north/south/east/west very easily with the viewport, and is not a very accurate input method.

Understanding how the sun lamp works:

The sun lamp is represented by a lamp and a line in the 3D view. Light from the sun is completely parallel with the same angle as the lamp, but the position of the lamp in the scene is totally ignored – only the rotation matters, and it represents the angle of the light. A rotation of 0,0,0 means the sun is beating down directly overhead (so the lamp points down by default):

The rotation of the lamp is the angle of the sun’s rays – The location of the lamp doesn’t matter.


Putting it together (Python):

Putting the above into code, we have the following snippet, assuming we're using Euler rotations for the sun (for simplicity):

 

import bpy
from mathutils import Vector

vec = Vector((0, 0, 1))  # construct the default vector
vec.rotate(bpy.data.objects['Sun'].rotation_euler)  # rotate it using the sun lamp rotation
bpy.context.scene.world.node_tree.nodes['Sky Texture'].sun_direction = vec  # the node's Python property is 'sun_direction'

This can be used in (at least) two ways:

  1. As an update handler (a minimal sketch follows this list): We can have this running on every frame change, on file load, and before rendering. This is a pretty immediate way to get our code working in the file, but it does have a few downsides: it requires Python scripts to run in the file, which might not work if you have that setting disabled, and it only updates on a frame change – so if you rotate the sun, you'll need to change frames to see the result, a bit clunky. We can fix some of this…
  2. By using the snippet as a driver: We can also put the formula into a driver namespace and drive the value directly, having it return the x, y and z values. Then we need to press 'D' while hovering over the Sky Texture Sun Position value (or right click and Add Driver), and then edit the X, Y and Z values to get the x, y and z rotations of the sun and feed them as variables into a scripted expression. This will now update 'live', even during rendering. But it still requires that you have Python scripts autorunning, and there's a slight chance that scripted-expression Python drivers will cause crashes with multithreading. In addition, we now have the inconvenience of both writing Python and doing a fair amount of clicking and setup for this to work.
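For reference, here is a minimal sketch of the handler approach from point 1, registered on frame changes. The object name 'Sun' and node name 'Sky Texture' match the example file above; adjust them for your own scene.

import bpy
from mathutils import Vector

def sync_sky_to_sun(scene):
    # assumes a sun lamp named 'Sun' and a world node named 'Sky Texture'
    sun = bpy.data.objects.get('Sun')
    world = scene.world
    if sun is None or world is None or world.node_tree is None:
        return
    sky = world.node_tree.nodes.get('Sky Texture')
    if sky is None:
        return
    vec = Vector((0, 0, 1))         # default straight-up vector
    vec.rotate(sun.rotation_euler)  # rotate it by the lamp rotation
    sky.sun_direction = vec         # update the Sky Texture node

bpy.app.handlers.frame_change_post.append(sync_sky_to_sun)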

Putting it together (No Python!):

There are some problems with our Python solutions: they might not work in some people's Blender setups, they can crash or fail to update, and they require some coding knowledge.
So we need another way. Fortunately, we can set up our formula in the 3D view with empties instead!
First we need the rotation of the sun. Since I don't want to deal with its position, I'm going to create an empty at 0,0,0 and add a copy rotation constraint from the sun to the empty. Now the rotation of the Sun is captured perfectly by my empty; let's call it sun_rotation.
Now we need our default 0,0,1 vector. To do this:

  1. Disable the copy rotation constraint temporarily so the sun_rotation empty is not rotated any more (make sure you clear its rotation)
  2. Create a new Empty called sun_vector, and place it at location 0,0,1. This is now our 0,0,1 Vector.
  3. Make the sun_vector a child of sun_rotation.
  4. Re-enable the copy rotation constraint on the sun_rotation empty.

Now create a driver on the Sun Position of the Sky Texture node as you did before, by right clicking or pressing 'D', then:
For each of the X, Y and Z components in the driver panel, make sun_vector the target object and use its X, Y and Z location respectively. Set the driver type to min, max, average or sum (it doesn't matter, since we only have one variable).
Now we are done! Our scene will update automatically on sun rotation, no need to wait for a frame change, and it does not depend on the user or renderfarm enabling Python scripts. Example blend here.

Extra Credit: Visible Sun, Night Sky

For Tube shots we wanted a bit more: a visible sun disk in the world (that doesn't light the scene) and a nighttime mode with stars and horizon fog (maybe a moon in the future), so we did a bit more and rigged it with an armature. Feel free to examine the file.


The Making of User Lib


Last October Libby Reinish from the Free Software Foundation commissioned Urchin to make a short film in celebration of the FSF’s 30th anniversary, to support their annual funding campaign. We were thrilled at the chance to work on messaging we care about, that’s so close to our own mission. The short deadline and limited budget made for some interesting challenges. And of course we would produce this film (like everything else we do) with Free Software, especially Blender. You can use the embed code from the FSF, or Urchin’s as follows:
<iframe width="640" height="390" src="http://video.urchn.org/usrlib/" frameborder="0" allowfullscreen></iframe>

Script and Design

We started pre-production with only a target audience matrix related to technical knowledge and potential philosophical alignment. The FSF gave us a free hand to propose a script, so before starting to write, we took some time to consider and Fateh did a lot of interesting research. There were many more points we’d like to have included, but we had to be brutal in deciding what could be in the scope of this small project. We came away inspired to find ways of continuing the much needed work of free software messaging. (And actually, Fateh already has a very cool plan in development.)

The theme: User Lib!

One of the most important tenets of the FSF stood out: that this is predominantly a movement about people’s rights; it’s not just a programming methodology that favors openness and collaboration (though this is an awesome side-effect) or about writing better code — it is about making sure that users are in control of their computing, and not the other way around. A play on /usr/lib/ as ‘User Liberation’ that came out of our banter at Libre Planet provided the theme. (Side note: we are aware that /usr does not traditionally mean user, but punning logic prevails.)

Visual Pre-production

Once the bare bones of the script came into focus, I started working on little tests in Krita and Blender, with the mantra to make cartoony motion and flat shade all the things (no lights). Even light effects are done with materials at differing values and saturations.


I also started thinking about our main character. It’s tricky to design a character without race or gender when pressed for time, but that was our goal. We named it ‘Mo’ (short for anything from Maureen to Mohammad) and called it “they” (neutral pronouns).
Early Mo concept / Close up / Mo hammering

The “Go” decision

Once Fateh got approval from the FSF we switched into production with her great script to work from. (software: Piratepad, Gedit, Textplay and Trelby)

Production

Boards

boards

With very limited production time and a carefully prepared script it didn't make sense to do elaborate boards and an animatic: I relied on small thumbnails for the short, one per shot, and a very simple animatic with no animation. I used Krita to draw the boards, then cropped and exported into Blender for the animatic. Not much to say here except that I really like Krita; if it gains Python scripting it'll be perfect for me. When time permits in future preproductions, I might use the new grease pencil tools to animate on top of Krita layers in Blender.

Scene Organization

terminal_svn / Production_files_nautilus

With my decision to do cartoony animation came a bit of a problem. Blender typically allows you to link (or, for Maya speakers, reference) characters into scenes. However, this comes at a bit of a cost: every deformer and many decisions have to be rigged in the library file. You can't just slap an ad-hoc lattice onto your character, make a shapekey specific to a shot, or delete or replace a mesh on the fly while animating: all those things must be rigged in, and, given my short production time, that just didn't feel practical. As a result, most shots have Mo local (though the rig is perfectly capable of being linked). The short production and relatively few shots made me feel fairly OK about this (it would be murder on a Wires For Empathy level of complexity though).

Shrinkwrap, Dependency Graph and Workarounds

fleximo

So, Mo is basically a hierarchy of Blobs and Limbs. You get the body-blob, with arms and legs and head sprouting off it, followed by (in the case of the arms) hand blobs, with little sausage fingers growing from them. There’s the minimum number of fingers, and every limb is supposed to move freely on the blob – an arm can be placed anywhere during the animation, so can a leg, and so can the finger on each hand. I compromised with the head and made it the only fixed element on the surface with the traditional chest-neck-head hierarchy.

IKworkaround_shrinkwrap

Initially I wanted to shrinkwrap the arms to the surface, but this got me into Blender dependency graph limitations: you can't break in and out of the armature dependency graph without creating an object-level cycle (sorry if this is Greek). Luckily, Sergey is fixing this, but for now I used single-bone IK chains to constrain the arms and legs to a sphere, and then added a movable bone so the animator could manually keep the arm on the surface. The mesh itself is shrinkwrapped, and the setup is far more forgiving than you'd think. Those cool yellow X-Men logos on the body and the hands are actually what allows the limbs to travel around on their blobs…

Hair

hairmo

I wanted Mo's hair to be a recognizable shape: a sphere/circle with 3 swirls coming out of it. My initial approach was to model this in 3D, but the resulting silhouette was always awkward depending on the camera angle. Attempts to fix this with rigging and shapekeys were time consuming and unsatisfying, so I ended up using the flat shading to my advantage, ditching the 3D hair and splitting the hair into a 3D 'junction' with the skin and eyes and a flat shape (with shapekeys for animation) that I could just place relative to the camera to get the desired shape. It worked brilliantly, and I only used 3D hair in one shot, where there were too many Mos to adjust manually. In this shot, however, Mo's hair is the same back to front – this Mo had no face.

creepymo

Broken Rigging


I know what you're thinking, but that's not what I'm talking about. As I did my first test of Mo hammering I found that, as I moved the shoulders for the extreme cartoony poses, this would totally break the arcs of the arms and hands (and hammer). I could compensate by either moving the (hidden) shoulder on an arc, or counter-animating the arms and hands… but yuck. A simple arc motion gets into lengthy tweakage. Luckily, I've been introduced to the concept of broken rigs by the folks at Anzovin Studios, some of the best 2.5D animators and riggers around. The idea is to get away from 3D character hierarchies and FK/IK complexities, and just make every joint a sibling in world space. Animating a character feels more like 2D, freely sculpting its pose, and the lack of inheritance means you can finesse every arc on every joint without having to compensate for what the hierarchy is doing. With little time to create a full broken rig I did the next best thing: I added a broken mode to Mo's arms in addition to the FK and IK controls. Then I animated the hammer strike again. It took a fraction of the time and was uber smooth. I became an instant convert. As I got into the 'sometimes hierarchies are nice for posing' issue, I created a quick script that allowed copying poses between broken and FK controls, so I could pose in FK and then keyframe in broken mode. Mo's arms are keyframed with broken IK in most shots; there may be only one exception. (Incidentally, the above shot illustrates that outside of camera view, most of my animations look terrible.)
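As an illustration of that pose-copy idea (not the actual production script), a sketch like the following transfers the evaluated FK pose onto free-floating 'broken' controls and keys them. The bone names are hypothetical, and it uses the 2.7x API.

import bpy

# hypothetical bone names - the real Mo rig uses its own naming
FK_TO_BROKEN = {
    'upper_arm_fk.L': 'upper_arm_broken.L',
    'forearm_fk.L': 'forearm_broken.L',
    'hand_fk.L': 'hand_broken.L',
}

rig = bpy.context.object  # the armature, in Pose Mode
for fk_name, broken_name in FK_TO_BROKEN.items():
    fk = rig.pose.bones[fk_name]
    broken = rig.pose.bones[broken_name]
    broken.matrix = fk.matrix.copy()   # copy the final object-space matrix
    bpy.context.scene.update()         # re-evaluate before the next bone
    broken.keyframe_insert('location')
    broken.keyframe_insert('rotation_quaternion')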

Orbiting Particles

This is a story about letting go. I wanted the first shot to start with particle systems orbiting Mo ‘surrounded by software’ like an atom or a solar system. Here the age of Blender’s particle systems started to show. The best way I found to create the orbit was using hair guides, but they have a fatal limitation: the particle can only traverse the guide once in its lifetime. So unless you want super slow orbits (I didn’t) you get particles popping in and out of existence at the end of the guide. I struggled to fix this by coiling the guides so they were more than one loop, but this introduced a new type of popping where particles would flip 180 degrees and that was even more jarring. In the end, I just ‘lived with it’. If you examine the first shot closely, you can see this happen. Also some of the particles go through Mo’s head 😉

Text Effects

Well, there’s lots of text in this short. And I wanted to have it typing, randomly animating, with a blinking cursor (sometimes) at the end. Luckily, much earlier, I had written a small addon for Jakub Steiner to do a ‘typewriter text’ effect. I much later built it into a bigger system of text effects called… well, TextFX. This allows you to stack different text segments into a single Blender Text Object, and then stack multiple Animations on each Segment (like typing, blinking, changing material or font, etc.).
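The addon itself is in our repositories; as a taste of the underlying trick, a hand-rolled typewriter effect can be as simple as a frame-change handler that slices the string. The object name 'Terminal', the 'reveal' custom property and the sample text below are all placeholders, not TextFX's actual internals.

import bpy

FULL_TEXT = "$ make freedom"  # placeholder text

def typewriter(scene):
    obj = scene.objects.get('Terminal')                        # a Text object
    if obj is None:
        return
    n = int(len(FULL_TEXT) * obj.get('reveal', 1.0))           # 'reveal' animated 0..1
    cursor = '_' if (scene.frame_current // 10) % 2 else ' '   # blinking cursor
    obj.data.body = FULL_TEXT[:n] + cursor

bpy.app.handlers.frame_change_post.append(typewriter)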

I could then also use Blender's own animation tools, like putting text on paths and animating spacing or size, for a variety of typography animation. For instance, I used a lattice deform to wrap text around a hammer (the lattice is one-dimensional, uses linear deform, and wraps multiple times around the hammer). I got into some small issues with Blender's text though:

The fill (tessellation) algorithm for text is very fast but also very ugly: lots of long triangles that don't deform well. I used remesh to give me a quad grid, but I had to solidify the text first, as remesh only works on volumes. Then I hit a bug, either in the font or in Blender: at certain sizes, the tessellator fails to give a closed surface, breaking the solidify/remesh. I ended up using scaling instead of font size to avoid this problem.
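One way to reproduce that workaround by hand (on a copy of the text object, since converting loses the editable text) is sketched below; the thickness and octree depth values are just guesses to tune per shot.

import bpy

obj = bpy.context.object                        # the active Text object (or a copy of it)
bpy.ops.object.convert(target='MESH')           # text -> mesh, with the ugly long triangles
solid = obj.modifiers.new('Solidify', 'SOLIDIFY')
solid.thickness = 0.02                          # remesh needs a closed volume
remesh = obj.modifiers.new('Remesh', 'REMESH')
remesh.mode = 'SHARP'
remesh.octree_depth = 7                         # higher = finer quad grid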

Pixel Hammer

mosaicfilter

This was a fun one that I wish I had just a little bit more time to refine: I wanted all the Mos to hammer out colored circles, which would change color with each strike. As we zoom out, the individual circles become large pixels of the next shot: the "Software Highway" that the Mos themselves are creating through collaboration and sharing work. So first I animated one Mo hammering (actually, I stole the animation from a previous shot and made it cyclic), then exported a bunch of mesh caches. All the Mos you see animated are using the same animation and the mesh cache modifier, with curves to offset them a bit. As we zoom out, the rest of the field is covered with non-animated simple Mo meshes that are too far away to see clearly. The pattern of the hammering is synced during the zoom out with frames from the next shot render, using a simple Python script called 'pixel_mosaic', which is unfortunately slow to run (it takes longer than the render of the shot!). The shot is not bad, but with a bit more time I would have refined it further. One of the issues with working at this scope is organization, and I would have gone fairly insane without a bunch of Python scripts for arraying and duplication, renaming, and timing to the animation:
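The real pixel_mosaic script ships with the production files; the core idea of sampling a frame of the next shot and colouring each circle from the pixel under it looks roughly like this. The image name, the 'circles' group, and the mapping from object position to image space are all assumptions for the sketch.

import bpy

img = bpy.data.images['next_shot_frame.png']   # a rendered frame of the next shot
w, h = img.size
px = list(img.pixels)                          # copy once; per-element access is very slow

def sample(u, v):
    x, y = int(u * (w - 1)), int(v * (h - 1))
    i = (y * w + x) * 4                        # flat RGBA layout
    return px[i:i + 3]

for obj in bpy.data.groups['circles'].objects:
    u = 0.5 + obj.location.x / 20.0            # map object position into 0..1 image space
    v = 0.5 + obj.location.y / 20.0            # (the 20.0 extent is made up)
    # assumes each circle has its own (BI) material
    obj.active_material.diffuse_color = sample(u, v)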

scripts

The Road

This one is fairly straightforward. I used Arrays and Curves to build the road and constrain the car motion. There is a certain design issue with curve deform: the bounds/fit options are on the curve rather than the modifier. This means you need to duplicate the curve for the road in order to deform the cars, rather than use one curve for both. In practice, the hook modifiers I used to rig the road needed to be applied anyway or I got odd velocity changes on the cars.

latticeandscale

One fun thing is having a lattice around the camera to simulate lens distortion, and scaling the cars as they pass the camera for a neat distorting motion-blur-like effect. At night the cars are just streaks of light, but in the daytime the taillights have shapekeys that allow a real 3D motion blur effect.

nodegroups

I used Blender Internal in this (and most) shots, but I ran into a small issue: I wanted shadows from the sun on a shadeless material. As a result, most of the surfaces use nodes, with various nodegroups that blend colors with a shadow-only material. To simulate lighting changes, a node group called 'day_night' outputs a single value that mixes each material's day/night/dusk and shadow colors. Animating the one nodegroup (and the angle of the sun, and the world colors) allows a global change in lighting over the entire scene.
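In practice that means the whole scene's lighting is keyed from one socket, roughly like the sketch below. The 'day_night' group name is from the post; the 'Value' node name inside it and the 0 = day / 1 = night convention are assumptions.

import bpy

group = bpy.data.node_groups['day_night']   # the shared node group
value = group.nodes['Value'].outputs[0]     # its single output value (assumed node name)
value.default_value = 0.0                   # 0 = day ... 1 = night (assumed convention)
value.keyframe_insert('default_value')      # key it to relight the whole scene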

Stock Tickers

Here I used my TextFX script, but I found that it was too cumbersome, so I added a new animation mode to the addon: increment or 'add', which adds a number to the original string (assumed to be a float, or you'll get a Python error). That way I could animate the stock prices with one segment and a single curve – if you want to be fancy you could grab real data from a stock exchange for the prices! For the little up and down arrows I used Blender's "object font", as well as for the grid behind the numbers.

objectfont

Voice Recording- remember to turn the fridge back on!

We are not primarily a sound studio, but we have a nice Studio Projects Mic (thanks for the recommendation Jan), a USB sound interface (a mic port pro), and lots of bookshelves to provide sound dampening. And with Audacity running on my laptop, we switched off the fridge (ominous foreboding). Sound recording went almost without a hitch. Had we the time, we’d have experimented more with performance. But the biggest disaster was when two days later Urchin’s artist-in-residence tried to eat some beans.

Editing and Sound

audacity

I really wanted a chance to evaluate Pitivi or another editing program in a production environment. But I was so down to the wire that I had to use Blender’s proven VSE for the editing. Without time or budget to arrange for a sound designer (I literally had no window of locked timing to send somebody) I was forced to do ‘sound design’ myself in Blender. Our friend Jim (James P. McQuoid) kindly offered to make some guitar/bass music so we got together online and on the phone, and he sent me some recordings he made in his home studio with analog effects and Ardour for the recording late on the penultimate day, when I was oh so close to locking picture. On my end, I couldn’t get Ardour running! So I edited the sound in Blender as best as I could. Blender’s not really ideal for sound design: you can’t move sounds on subframes, there is no level meter, no sound plugins, and even the levels you hear while editing might not be exactly what you output, especially if they are animated levels. With no time to spare for foley, I downloaded a tonne of sounds from freesound.org, an amazing resource. Most are CC 0, some are CC BY. Here’s a list of credits:

   airtaxi   benboncan   cactus2003   crashoverride61088   dave-des
   davidbain   ecfike   elliotlp   flint10
   gchase   hunter4708   irishcinema   jamesabdulrahman
   jasonlon   lavik89   lloydevans09   ludvique
   martypinso   misscellany   monotraum   muses212
   northern-monkey   pandotrix-emark   primeval-polypod   simosco
   snapper4298   soundsexciting   swiftoid   wjoojoo

Rendering

With flat shaded materials most shots don't have lights – Blender is actually very close to being able to render most of these shots in OpenGL alone. However, Blender Internal is quite fast, especially with this setup. In the few cases where I wanted the appearance of lighting, I'd use separate materials with different brightness on the same object. The road shots were an exception with their ray-traced shadows, but even those relied on animated colors instead of lighting.

worldsun

For the shot with the planes I used Cycles, but only so I could process the world sun/sky via nodes to produce a cartoony sun. Internal doesn't give nodes to the world material as far as I know.

Flat Shading, Soft Gradients and Banding

posterize

A small note about using subtle gradients and washes: they are highly susceptible to banding when rendered into 8-bit formats. This was an issue in multiple ways. Blender has a 'Dither' option in post processing that shuffles the pixels around and prevents banding in the 8-bit PNG outputs. Inkscape does not; I ended up recreating .svg images within Blender to avoid the banding – making Inkscape more of a mockup tool (for this project) than a production pipeline tool. The above picture uses the posterize filter in Krita to illustrate the effect of banding on the right side of the image.

Finally, compression to a video format further reduces the chroma information in the image and increases the banding even more. Here I resorted to overlaying a subtle grain (you can find 35 mm grain videos online) at 30% just before compressing; the grain itself can also be a nice detail. Those files are not free to reshare; it would be nice to have either fully CC-licensed grain files for use with open movies, or a grain node in Blender / grain effect in Pitivi for procedural grain.

Compression

I have a working knowledge of what containers and codecs are out there, as well as their relative free/non-free status, but I am not an expert by any means, especially at the end of an 80-hour week / 28-hour single 'day' marathon. So I opted to use Transmageddon to produce .webm files for the web. Initially I used Blender's .ogv output for a test, but then used the lossless h264 mode to make a master.

A scheduled power outage forced me onto my laptop, where Transmageddon/GStreamer in Fedora 21 suddenly lost the ability to play h264 streams, due to a bug in GStreamer, PackageKit, or both. So I re-rendered my master in huffyuv (nicely avoiding h264 altogether), and went back to Transmageddon… which just didn't work – no errors, it would just sit there, consuming 0% CPU. Finally I fired up Pitivi, dropped the huffyuv track into the timeline, and output multiple resolution .webm files from there.

Production Files

BLAM! An open movie is born. We’re sharing all the production files — feel free to use them under a CC BY SA license, while all scripts with the project are under the GNU GPL (2 or later). You can use these to produce language localizations, look at how shots were made, or make your own new work. Download HERE.
If you want to make an (audio) localization of the movie, we’ve made a sfx/music track available for you to use with new dialog as a short-cut. You can download it HERE.

In Summation

Overall this project was a positive experience, with only a few things I’d change. There are some small glitches/ non-ideal details in some shots that I could fix, nothing huge (a dissolve here, timing change here, background change there), and some overall pacing I could tighten (the road shots, stock tickers and internet shot could be a bit shorter/ feel faster). Given some extra time, I’d have a professional sound engineer do the mixing and mastering, experiment with different deliveries of the V.O., and probably drop in a bit more music.

Bugs I should be filing:

  1. Tessellator for fonts fails for certain font/size combinations in Blender
  2. h264 not working in GStreamer in Fedora 21: This is tricky as it has to do with multiple pieces of software: GStreamer, PackageKit and RPMFusion (since h264 is non-free/patent-encumbered and not shipped by default in Fedora)
  3. Transmageddon failing weirdly in Fedora 21

Some list of wants in no particular order:

  1. Better Dependency and File-linking capability in Blender: probably coming by the end of 2015! This would allow a more fluid and cartoony Mo rig, and the ability to both link the character in and do custom deformers right in your animation files.
  2. Better Particle system, tessellation options for text in Blender: This isn’t really needed due to either workarounds or compromises for this project, but it would be useful for either speeding up workflow or enabling more elaborate effects.
  3. Out-of-the-box working 'pro' sound in Fedora: I'd love to see Ardour3 actually work out of the box. I think sound on Linux is 'fixed' for consumer uses thanks to PulseAudio, but the whole JACK / realtime kernel integration feels over-engineered and fiddly. Even Audacity, which supposedly works with Pulse, required me to run it with pasuspender or it would crash.
  4. What seems like an oxymoron: 'pro' compression for dummies: a compressor/transcoder that is bulletproof, with constraints or presets for different devices, and perhaps a quality preview for output – and the ability to save a batch script for multiple output targets and resolutions. What we have right now is either consumer-oriented and limited (Transmageddon, Handbrake), or you get to fiddle with low-level codec knobs and break the output… not fun.
  5. Support for .png and other image sequences in Pitivi for work with transparencies, masks, etc.
  6. Funnily enough, Firefox's new h264 support crashes the browser on my computer. For now, I've avoided this by placing the .webm sources for video first, but this might break Safari…

That’s all for now, I hope you found this post informative!

Addons for Empathy 2: Proxies and a bonus!


Hi Folks!
We'll try to do a bi-weekly installment of Addons for Empathy (until we run out of addons). This one is a two-parter: our main installment is about working with proxies in Blender; the second is about a bold new experiment in rig UI.

Proxy Workflow and Transparent Proxies Addons:

Get the files from my gitorious

The video is about two addons, both making proxy editing in the sequencer more friendly to our project. A quick explanation:

Blender's Video Sequence Editor (VSE for short) has a feature called proxies. This basically allows an in-place replacement of strips by 25%, 50%, 75% or 100% versions in a fast format (.jpg or motion JPEG). This is especially useful when:

  1. Editing large-format files that are too slow to be realtime – either in resolution (2K or 4K) or in type (.EXR!!!)
  2. Editing over the network, especially files of the previous types
  3. Working with complex and multiple effects that could benefit from being cached

So proxies in Blender work a bit like a combination of proxies and caches. I prefer to treat them as the former, since that skips having to recalculate every single time you change some timing – instead they only need to be recalculated when the sources change.

However, working with proxies in Blender can be painful by default, and this is where Proxy Workflow Addon comes in:

  1. Editing proxy settings must be done strip by strip: Proxy Workflow lets you set them for all selected strips at once (see the sketch after this list)
  2. Default location is in the same folder as the originals, which is bad in the case of network shares; Proxy Workflow automatically sets them to a local directory “TProxy” that contains all the proxies for the edit, and can be moved around like a scratch disk
  3. Sometimes Blender tries looking for the original files even when it is using proxies. If you are trying to use proxies to avoid using the network/internet, this becomes a problem. Proxy workflow allows ‘Offlining’ strips, and then ‘Onlining’ them again when you can reconnect to the network
  4. Blender doesn’t know when the source files are ‘stale’ and need to be re-proxied – for instance if you rerender. Proxy workflow timestamps as it makes proxies, allowing you to select a bunch of strips and re-proxify only the changed ones.
  5. Proxy workflow is designed to work with movies and image strips only for now, as I’m interested in true proxies, not caching effects.
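For the curious, setting proxy options on every selected strip (point 1 above) boils down to a loop like the following. The 25% size and the //TProxy directory mirror the defaults described above, but this is only a sketch, not the addon's code.

import bpy

for strip in bpy.context.selected_sequences:
    if strip.type in {'MOVIE', 'IMAGE'}:
        strip.use_proxy = True
        strip.proxy.build_25 = True                   # 25% proxy size
        strip.proxy.use_proxy_custom_directory = True
        strip.proxy.directory = "//TProxy"            # local scratch directory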

A separate addon called 'Transparent Proxies' does what it says on the tin (and no more): it allows making proxies of image sequences that preserve the alpha channel for alpha-over effects. It does this by cheating: it uses ImageMagick on the command line to make a .tga proxy, and just renames it to .jpg to satisfy Blender (a sketch follows below). You need to install ImageMagick first for it to work.
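A sketch of that trick (not the addon's actual code, and with hypothetical paths and naming) might look like this:

import os
import subprocess

def transparent_proxy(src_path, proxy_dir, percent=25):
    os.makedirs(proxy_dir, exist_ok=True)
    name = os.path.splitext(os.path.basename(src_path))[0]
    tga = os.path.join(proxy_dir, name + '.tga')
    # ImageMagick keeps the alpha channel in the TGA...
    subprocess.check_call(['convert', src_path, '-resize', '%d%%' % percent, tga])
    jpg = os.path.join(proxy_dir, name + '.jpg')
    os.rename(tga, jpg)   # ...and the .jpg name keeps Blender happy
    return jpg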

Bonus: Rig UI Experiment:

Code is at gitorious
This brings us to the bonus round: the Rig Selection UI. I'm continuing my round of experimentation with BGL and modal addons, to make the kind of 'typical' rig UI where animators can select or act on a rig by clicking on an image. This UI uses an SVG file to define the hotspots, and a PNG to actually draw the image. It already works, though I'm still going to refine it and add more options / easier rig customizability. The end goal is to be able to do rig UIs without writing code, simply by drawing them in Inkscape and pressing a few buttons in Blender. Stay tuned!!!

 

Addons For Empathy – Floating Sliders


Hello all, long time no post!
As we’re getting closer and closer to releasing our files, I’m noticing that we have a huge (and I mean huge) trove of Python code that is largely undocumented. Some of it is pretty specific to this project, And other bits are useful in general. Even the specific stuff could be adapted, so it’s worth going over.

To address this we've thought of doing an 'Addons for Empathy' video series, quickly explaining what some of the addons do, in addition to more traditional docs. The first I'll do in this way is the Floating Sliders addon: in short, it pops up small, keyframable OpenGL sliders for any floating-point pose-bone properties. The code is on gitorious, and following is a simple video explanation of what it does and how to use it:

As always, the video is licensed CC-BY, while the addon itself is GPL.
You can also download this video as a high resolution .webm or .mp4 file, or watch it on YouTube.

The screencast itself was edited in Pitivi, with Inkscape titles. Video was captured via the Gnome screencast feature, and audio with Audacity

Big thanks to Campbell Barton for help getting min/max of custom properties, and explaining some of the finer points of keymaps, and to Dalai Felinto for showing a possible hack to make a popup menu (I ended up using a slightly different way)

Managing Animation Pipelines


Introduction:

Bart from Blendernation is collecting information for ‘Redcurrant’ – a project about doing pipeline and asset management for Gooseberry. He asked me about the approach we’ve taken in the production of Tube/Wires for Empathy. I have also had a similar way-overdue request from Dylan of the OpenDam project, so I thought to make it a public post since it might be useful information to other people.

Breaking up the Pipeline into Chunks:

We often use the word 'pipeline' pretty loosely in the context of what we do to make an animated film. It is a fairly linear way of describing a problem set that has non-linearities and interdependencies. I'm going to use it here in a very 'common knowledge' sense, but chop it up into fairly standard bits that are easier to define. In practice, everything has to fit together, and the same piece of software might manage more than one thing:

  • Assets and Task Management – Top level but detailed description of the project as a bunch of assets and tasks.
  • Production Pipeline – I'll place file management/versioning/sharing here, but this is what people are usually thinking of when they think of pipeline – though I'm half thinking of breaking it out into its own category.
  • Feedback Pipeline – Tools for reviewing, commenting and finalling (and un-finalling) assets and tasks

Some more detailed descriptions follow:

Asset and Task Management:

This is something of a top-level thing: if you combine this with resource management (people, computers, renderfarms) you could get some top-level Gantt business going, and really plan your project. At this level everything we deal with is going to be an abstraction or database entry for real stuff.

Assets and tasks are a handy way to think of what a project is; loosely, you need elements, i.e. assets – often breaking down into things like shots, models, etc. – and tasks, things to be done on any asset to make it complete. Assets are nouns, tasks are verbs, so a shot (an asset) needs layout, animation, shading, rendering and compositing to be completed, with each step potentially needing review by the director or pipeline lead.

Assets are interdependent: a shot asset may need several models – themselves assets with their own set of tasks – to be completed. One might further claim that models, shaders and rigs are their own sub-assets, or simply consider them as tasks. The nature of reuse plays a role in deciding here.

Tasks are interdependent: For instance, animation cannot begin without rigs! Shading cannot begin without modeling! Further, tasks on one asset are dependent on tasks in another: you cannot animate shot NN without rigging character model X…

An Asset/ Task manager in the abstract can be considered to be a combination of an asset database and a dependency graph. That graph can be designed as task-based, or it can include asset-to-asset dependencies. I’m a bit more in favor of the former, as it allows for more flexibility but seems to be still ‘doable’ in the abstract. It might be necessary to simplify this when dealing with actual files.
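As a toy illustration of that task-based graph (hypothetical data, not any real tool's schema): each task belongs to an asset, and its dependencies can point at tasks on other assets.

# tasks as graph nodes, with dependencies that may cross assets
tasks = {
    'modelX:modeling':  {'asset': 'modelX', 'needs': []},
    'modelX:rigging':   {'asset': 'modelX', 'needs': ['modelX:modeling']},
    'shotNN:layout':    {'asset': 'shotNN', 'needs': []},
    'shotNN:animation': {'asset': 'shotNN', 'needs': ['modelX:rigging', 'shotNN:layout']},
}

def ready(task, done):
    """A task can start once all the tasks it depends on are final."""
    return all(dep in done for dep in tasks[task]['needs'])

print(ready('shotNN:animation', {'modelX:modeling', 'modelX:rigging'}))  # False: layout missing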

Production Pipeline:

Assets and Tasks are abstractions. What happens above is independent of any actual work; you could do ‘fantasy filmmaking’ where you build a film entirely without any real production. Of course the final result will not be a film but merely a plan for one!

Making a film is a big task; it is more feasible to work on smaller chunks, and then be able to put those together. Our loose definition of the pipeline is whatever enables us to figure out what and how to work on those chunks, and to ensure they fit together to make a film. The feedback cycle is mostly about aesthetic fit; production is about doing the actual work.

So we need to get software and resources to people, allow them to work on them, and pass that work through a pipeline. These resources (typically files, though you could argue more abstractly, data) are the ‘physical’ embodiment of the assets. The production files / data have their own set of dependencies, that may/may not mimic the dependency graph of the global project – one key to successful production is to balance organization with ad-hoc solutions- too much organization might limit artists, but too many hacks may be impossible to manage in the end.

Production pipeline can be considered to be an amalgam of

  • Production Software; i.e. what you use to make stuff, which spills into file formats and capabilities, can also include automation / project specific software that depends on project practices.
  • Standards and Best practices: naming conventions, project file organization, dos and don’ts, limits on poly count, image size, etc.
  • Pipeline software: any software or scripts that automate the pipeline or the project on a meta level, or enforce the best practices – or better, allow only those best practices to be used (i.e. instead of using append/link, use the project-approved asset manager, which only links in predictable ways)

From an artist’s perspective, this implies the following needs:

  1. An artist has to be able to get any initial data/files they need to start. For example: an animator needs a shot with layout, a texture artist needs models and perhaps photo texture library access, etc.
  2. SVN is not a solution for this on a big project. Git is even *worse* – at least if you use them simplistically. Project size can be expected to grow very large: gigabytes for a Blender short, terabytes for a typical short, petabytes for a feature.
  3. An artist needs the software to work on the files.
  4. This should be as idiot-proof as possible, so we need to help maintain naming conventions, 'what goes where', and how the files link to each other.
  5. Finally, artists need to fit work back into the project.

Feedback Pipeline:

Project Directors and Concept Crew generate verbal/text descriptions, concept art, storyboards and animation reference.

  1. Artists use this material to start work
  2. Frequent (daily or weekly) feedback sessions
  3. Artists can post work (somewhere) to get feedback in form of comments, draw-overs, grease pencil lines on animation.
  4. Artists refine until task is complete (agreed by artist and supervisor or director to be final)
  5. Sometimes you need to un-final things. The system has to be flexible enough to allow this.

Existing Tools

These warrant looking at either for inspiration or use – I'm only listing Free Software ones, but I'll mention Shotgun and Alienbrain as two commercial ones to look at.

  1. Helga: Helga was created as a student project at Hampshire College/Bit Films, where we are visiting artists. It is largely stagnant, though in heavy use at Hampshire. It handles mostly the feedback pipeline (fairly well) and the asset/task list in a so-so way. It also has some nifty features like the ability to start a render right from Helga, and to update shot folder SVN. Helga is basically a bunch of TCL scripts, a database and a web front end. It is very hackable, provided you speak TCL. It is reportedly rather difficult to install, and no public repo is currently available.
  2. Attract: Attract resembles Helga, but is a PHP web application. It handles mostly the same parts of the production Helga does. It was developed for Tears of Steel by Francesco Siddi, also a Tube artist, and he has added some useful additional features at our request 🙂
  3. Tactic: Recently open sourced, something I have been really wanting to evaluate but so far haven’t had the time.
  4. OpenDAM: An open source asset management project with what seems to be a very dedicated crew. I just found out about this recently!
  5. OPAM: Created initially at SparkDE, this has been used on productions; could be likened to Helga and Attract in scope and function.
  6. Damn: I just found out about this one, apparently written in Python and has a Blender Logo on the front page! Promising!
  7. Subversion: Despite this not being 'cool' anymore, it is an *extremely* robust and functional VCS, and though not designed for animation projects, it does the best job off the shelf that I have seen of managing the project repository.
  8. Git: Git is a distributed version control tool. The idea is that every 'checkout' is actually a repository, and users can branch and merge, and then merge or share their branches. It is very technical and, as is, not too friendly for animation projects, were it not for:
  9. Sparkleshare: The cute name might fool you, but this is a damn good effort at putting Dropbox-like functionality on top of Git. I wouldn't use this for the entire project, but for sharing individual 'bits' with artists it is invaluable. Think of Sparkleshare/Git projects as mini staging projects an artist can work in, before pushing the work back into the main repository.
  10. Renderfarm software: quite a few to talk about, commercial renderfarms, Brender, etc. One really nice thing is to integrate the renderfarm into the rest of your software, for instance, Hampshire uses Tractor (a commercial one from Pixar) but hooks it into Helga: this makes managing rendered frames a breeze, highly recommended to have a setup like this. I’ll try to expound later. But using Attract as a front end for Brender could be a killer combo too. Another very interesting thing would be to provide an API (from Helga or Attract) for commercial renderfarm integration- this could be *awesome* for both users and for the companies’ businesses.
  11. Production specific libraries: There are some open source tools out there for simulation and crowd animation that could be quite interesting to integrate into a production pipeline. There are a lot more that I’m probably forgetting. Doing research here can help a lot, for instance, had I known about Smartbody I probably would not have developed our own crowd script, but integrated that instead (Really look at Smartbody and possibly Sybren‘s work for crowd stuff – ours should be nice enough but limited to simple walking, not crowd behavior)
  12. F/LOSS graphics software of course: Blender, Krita, MyPaint, Gimp, Inkscape, Ardour, Audacity, Synfig, Pitivi, Kdenlive… etc. There is a huge amount of libre graphics software out there, much of it at or approaching 'pro' quality. Many also have really approachable development teams, very open to fixing bugs and helping with workflow or features. In a sense, they are like a combo of software and extra TDs (technical directors) for your project. Using them is a no-brainer, but getting in touch with the developers is easy and can be great for both the production and the software.
  13. IRC: a nice low-key place to be able to stay in touch, and it also happens that almost every free software project has an IRC channel 🙂
  14. Python deserves its own entry (though any scripting language could be used), since it is quite popular in this industry, is supported by many applications, and is very friendly to use.
  15. From the comments (Thanks Nelson!): stalker, oyProjectManager and Rules, which are project managers that escaped my attention, and ownCloud, which might have uses for animation projects. stalker and oyProjectManager are both Python-based / support applications with a Python API. Rules looks very interesting and feature-ful too, especially for scrubbing/previewing stuff.

Tube Production Specifics

To achieve the pipeline specified above, Tube uses Helga, Git, Sparkleshare, SVN and custom Python tools developed to meet the various needs of a complex distributed production. In addition, we are evaluating Attract on the side thanks to Francesco Siddi. We look forward to exploring more of the applications out there, but we are limited by production constraints to use what is already working, even if it is not always ideal, and is sometimes too rigid and fragile.

Tube Asset/Project Management:

We are happy to have been invited to field-test Helga's web-based asset management, though it is not yet ideal: assets are shots and models only, entering new assets is cumbersome, and deleting (or just cutting from the show) is also a bit tricky. Task trees are rigidly defined (you cannot define custom tasks). Entering all the data on the web interface is time consuming and often replicates data already known in other places (violating the 'define once' principle). It lacks a function, outside of email notifications, to help the director or project leads organize items for review. A database expert could get around all this… but the reason to use this stuff is *not* to force everybody to be an SQL wizard :). So we end up entering data multiple times:

Helga:

So Helga covers the basics: you have shots, models, and users (people) and a project hierarchy (Helga supports multiple simultaneous projects). Each Asset (Shots and Models) can have a Task tree- a branching hierarchy of tasks on the asset. Further, each task can be assigned to a user, and the asset itself has a comment-preview page that I’ll go over in the feedback part.

Tasks can be assigned and reassigned to users with drag and drop, and given the status 'not started', 'in progress' or 'final'.

Task types are hard coded: you cannot add a new kind of task, you have to pick from a (quite big) list. This is similar to assets being hard coded as shots and models. It sometimes leads to 'fudging', i.e. putting a script under models, or to having tasks and assets not represented on Helga. Also, there is not yet a full Python API (though it is in development). It would be great to integrate e.g. Helga and Blender; the specifics of that would merit an entire post of its own.

Attract:

Attract is really similar to Helga in practical use – however the internals differ a lot. Both are ‘database-y’ but Attract is a PHP web application, while Helga is TCL scripts running on a server in addition to the web application. There are some plans to rewrite Helga in Python, which would change things further. Since neither application is mature yet (and maybe even in the case that they are), hackability is of primary concern. Helga is handy here since Chris Perry and myself can add extra functionality when needed, and Attract could be cool if Francesco is on the team as a pipeline TD since he is the person most familiar with it.

Personally I would highly recommend a Python application: most TDs know Python (it is the most ‘standard’ language used by CG productions) and it is more prone to being readable, hackable code.

Both Attract and Helga offer statistics pages to give overviews of the project. In their current incarnations, I think these need to evolve further to be really useful. Another really handy thing would be export/import functionality, to allow generating stats from the data in a spreadsheet (without tedious data re-entry).

Blender/Geppetto:

Geppetto is the first complex addon I wrote for Blender 2.5, and it reflects my inexperience at the time. It kept a text-only breakdown file, listing the shots in the project, in sync with the edit – in other words, it added extractable metadata to each shot in the edit. In addition it kept track of any resources (storyboard images and animatic .blends) used to construct a shot, and could be used like a 'make' command to rebuild if someone e.g. painted on an .xcf file. We haven't used it since the storyboarding phase. Ideally an asset manager should have a Blender addon / Python API to integrate into things such as Geppetto and, as we shall see, Reference Desk. So just to be clear what Geppetto does:

  1. Maintains a list of shots in the film: each shot is associated with a number/name, a description, and a metastrip in the edit
  2. Saves the shot metadata into a shot breakdown file – ideally this would actually sync to Helga/Attract
  3. For each metastrip, there is an image subfolder for the shot in SVN, and various blend/.xcf files that generate/use the images
  4. SVN integration for the above to force an update/refresh
  5. Gimp integration to re-output the images from a gimp file

Final note: Geppetto is pretty hackish code, was written for the 2.5 API (it most likely still works, but might need some small API fixes) and was pretty hard-coded to work specifically for Tube. A more general tool/addon would be great, something I plan to do in the future. But this type of effort is ultimately sterile if it does not integrate into the bigger picture. So stay tuned for Geppetto 2 😉

Spreadsheets:

Various Google Docs and .ods spreadsheets can be handy. In the future, Attract or Helga should be able to export .csv files for this purpose. We have had 5 or 6 of these active at different times of the project, and it takes a day or two to get the data synced (sometimes much less) because we don't have an automated way to do it.

Reference Desk:

Reference Desk is our asset manager inside Blender. It doesn't track shots but linkable assets, and it is composed of two parts: a Blender addon and a .json library file (you can have many of the latter). For each asset the library file holds information about the following (a hypothetical entry is sketched after this list):

  • groups to link and instance
  • groups to link and not instance
  • python scripts to link and execute
  • python scripts to link but not execute
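Loosely, an entry and the linking it drives could look like the sketch below. This is a hypothetical layout, not Reference Desk's actual schema, and it uses the 2.7x API (groups, dupli groups); the file path and names are made up.

import bpy

asset = {
    "name": "bicycle",                            # hypothetical example entry
    "file": "//props/bicycle/bicycle.blend",
    "groups_instance": ["bicycle"],
    "groups_link_only": ["bicycle_wires"],
    "scripts_run": ["bicycle_rig_ui.py"],
    "scripts_link_only": [],
}

filepath = bpy.path.abspath(asset["file"])
wanted = asset["groups_instance"] + asset["groups_link_only"]
with bpy.data.libraries.load(filepath, link=True) as (src, dst):
    dst.groups = [g for g in src.groups if g in wanted]   # link the listed groups
for name in asset["groups_instance"]:                      # instance the ones that need it
    empty = bpy.data.objects.new(name, None)
    empty.dupli_type = 'GROUP'
    empty.dupli_group = bpy.data.groups[name]
    bpy.context.scene.objects.link(empty)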

In practice Reference Desk should also have hooks to non-Blender stuff, including Helga/Attract-alikes. It could be extended further:

  • support directly other datablocks than texts and groups
  • support appending as well as linking
  • record in a third file a project wide dependency graph (including noting SVN versions of appended files)
  • integration with a web-based/top-level asset manager, so that Reference Desk becomes a component in a larger system. Maybe then, instead of using the .json format file, we just add Reference Desk data into the one and only project database – or maybe we use flat .json files *as* the database? This is for the really smart people to figure out, not me 🙂

 

Tube Production Pipeline:

(I've separated file management into its own subcategory.)

Reference Desk (again):

In addition to being part of the asset side, Reference Desk is mostly about production, specifically:

  • It stores file locations of assets in .json form so users don’t have to know it. An asset can be spread among multiple files.
  • It automates asset insertion into files using post-install scripts, ensuring correct layer placement, instancing proxies, instancing groups in specific locations (some sets are built from many sub-assets as one thing, even hundreds) and linking in functionality scripts (like rig UIs)
  • By automation it ensures best practices. Things that use Reference Desk are consistent across the entire project.

In practice Reference Desk is used by artists/TDs to add assets into the asset list, and by layout artists, who now have one-click asset importing capability, making building layouts much easier and less error-prone – rather than document where an asset lives and how to link it, just hook it up into Reference Desk and you have both documentation and automation done.

Blendflakes:

Blendflakes checks blend files – specifically asset files – for ‘Errors’. These errors are not typical blender errors, instead, they enforce best practices such as naming objects, putting them in groups, documenting what the groups are for, which objects/groups are on which layer, etc. It tries to ensure that when an artist opens a file they don’t have to question what an empty on layer 7 is doing and if it should be removed.

I wrote Blendflakes after having wasted a lot of time debugging really huge and complicated blend files that had broken or needed to be extended, confronted with default names, objects on haphazard layers, and minimal docs. It's one thing to have a naming convention, but another to have a tool to actually check and enforce it.
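A toy example of the kind of check involved (not Blendflakes itself, and using the 2.7x users_group API):

import bpy

problems = []
for obj in bpy.data.objects:
    if obj.name.split('.')[0] in {'Cube', 'Empty', 'Sphere', 'Plane', 'Lamp'}:
        problems.append("default name: " + obj.name)       # naming convention check
    if not obj.users_group:
        problems.append("not in any group: " + obj.name)   # grouping convention check
print("\n".join(problems) if problems else "all clean")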

Layernames:

Before Layernames we had a standard layer convention – a bit rigid, especially for lighters, who might need to do tricks to get the renderlayers they need. So we needed a way to use non-standard layers but still know where stuff lives; our layer manager is not the only one written for Blender, but it is pretty damn good 😉

Layernames is our layer manager. It works for armature and scene layers:

  1. Associate each layer with a human readable name
  2. Give the layer a 'type' and other useful info
  3. 3D view interface and Properties editor Panel
  4. Scene layers are integrated with Render Layers
  5. ‘Render State’ is information about which layers are on, and which scene is active. This is integrated into the Helga/renderfarm scripts to avoid accidentally rendering black frames/ frames without the character, etc.

Zippo:

Lighting was taking too long: when you went into rendered view it took minutes to load all the images and create shaders, and then the progressive rendering was very slow. We discussed several ways to make this faster (while not compromising final renders) and came up with Zippo. As I started implementing some of our ideas, I stopped, as just the first one (scene simplify combined with smaller textures) made things much more interactive. Without further ado:

Zippo is a simple lighting tool and a small command-line Python script.

  1. The command line script creates quarter resolution copies of all the images in the maps/ folder.
  2. The addon allows switching (temporary) image paths to this low rez folder (trivial in this case since all our images are in one folder)
  3. The addon also reproduces the scene simplify option
  4. The addon could be extended to use ‘proxy’ node paths in the cycles materials

Zippo makes interactive lighting with Cycles practical again: with these settings, lighters can get feedback in seconds rather than minutes, even on relatively standard hardware. This allows us to experiment, and to be more artistic.
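The path-switching half of the idea amounts to little more than the loop below; this is a sketch under the assumption that all textures live in a maps/ folder with a pre-built maps_low/ copy, as described above, not Zippo's actual code.

import bpy

def switch_maps(low=True):
    src, dst = ('/maps/', '/maps_low/') if low else ('/maps_low/', '/maps/')
    for img in bpy.data.images:
        if src in img.filepath:
            img.filepath = img.filepath.replace(src, dst)
            img.reload()   # pick up the quarter-resolution copy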

Wiki Documentation:

Tube's wiki has a wealth of information and guidelines on how to use the scripts, how to create and link files to each other, how to lay out and light, etc. The initial pipeline design was targeted almost fanatically towards animation, and needed massive rearchitecting when we came to lighting. It is really only now that we are making lighting as practical as animation, but we have a bit more to go before that finally clicks. One of our issues was switching render engines (we started on BI but ended on Cycles), and we still actually use BI for specific things.

SFX and Production Scripts:

In addition to pipeline oriented scripts, we create tools to automate or enable production. These are actually far more numerous than the pipeline scripts, but we are pretty bad at documenting all of them. Some of the ones that have docs for your perusal: Mushroomer, Auto Walker, Particle Baker, Multi Verts, etc. Our rigs in themselves involve a lot of scripting, for instance Rigamarule (something similar to Rigify) and rig scripts. One of our finalizing goals is to document each and every production and pipeline script – I’m hoping our wiki will be complete before the project release.

Tube File Management (really part of production pipeline):

Subversion:

OK, this is not too bad: you get version control, it handles binary diffs really well, and repo size is quite small. In practice, however, there are issues:

  • Partial checkouts are tricky: you can check out a directory by itself, but not ‘a file here/a file there’ depending on dependency.
  • Checkout size can grow a lot: Subversion, in an effort to save network trips, saves a lot of temporary data with no way to purge it. As you work with a repo, you waste more than the checkout size in these temporary files… quite inconvenient.
  • Subversion is actually too technical for some artists: not every talented (and I mean talented!!) artist is capable of using it; you put a barrier in front of contributions.
  • The network bandwidth needed to check out the entire project (or even a few folders) in order to work with one file is just too expensive.
  • Not all files are practical to store in Subversion: for instance, renders are too expensive to store (as is any generated stuff like caches). We should probably version metadata instead (i.e. which Subversion revision produced the render).

One big thing to consider as you lay out your tree: keep your image maps from polluting the entire tree. Doing this allows:

  • Quicker checkouts and updates for those who don’t need maps
  • Quick scripts can do resolution tricks for closeups with simple path changes (maps/ maps_hi/ maps_low/) etc.

Another concern is rendered frames and sim results. Currently we do not commit either into Subversion (they are too big and prone to churn). Helga organizes renders in a tree as follows:

Renders->Act Name/Sequence Name->Shot/Scene Name->Resolutions->Renderlayers->Renderlayer->Actual image files

In addition we have for caches the following structure:

Non-SVN->Act Name/Asset subfolder (mimics SVN tree)->Shot/Scene Name->Cache Folder->Cache Files

Note that Blender has some caches that need to be in a subfolder of the same folder the blend file is in.

Possible improvements:

  • Commit metadata for the rendered frames or caches including originating file and SVN revision of said file, and a timestamp
  • Script regenerating stale bakes/ check for bakes before render.

Git:

OK, so what about Git? As far as I can see, Git and other DVCSs solve some Subversion problems. Unfortunately, they solve absolutely no problem that we encounter in an animation project, while introducing worse ones. For instance, now a 'checkout' is also a 'repo', requiring either near-prescient subdivision of the project into multiple modules (hint: you do not want this headache) or that everybody check out the entire project, making it even harder to work with. It is even more technically demanding than Subversion, and binary/large-file support is rumored to be bad/memory hungry, a problem Subversion does not have.

Sparkleshare + Git + Sparkleshareit.py + Subversion + Helga:

  • Helga manages Rendered frames as we saw.
  • We store on our server further non-svn stuff (caches).
  • Subversion stores our main central repo: some artists (mainly local ones and a few of the remotes) work off this. We religiously keep textures in a maps/ subfolder to ease the pain on animators who do not need to load images.
  • sparkleshareit.py is a very simple bpy script that runs inside a .blend file. It finds all dependencies of the file (in terms of library .blends and images) and gives options to copy one or both to a brand new directory tree (a sketch follows this list). It then commits this to Git…
  • Git on the server is used to store many ad-hoc ‘mini projects’ for animators
  • Sparkleshare is a user friendly front end to Git that enables animators to work not-too-technically.
  • Rig updates and shot updates can be copied manually (usually by me) to and from Git and Subversion.
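A sketch of the dependency-collection half of sparkleshareit.py (a simplified illustration of the description above, not the script itself):

import os
import shutil
import bpy

def collect_dependencies():
    deps = set()
    for lib in bpy.data.libraries:          # linked library .blends
        deps.add(bpy.path.abspath(lib.filepath))
    for img in bpy.data.images:             # image textures on disk
        if img.source == 'FILE':
            deps.add(bpy.path.abspath(img.filepath))
    return deps

def copy_to_tree(deps, dest_root):
    blend_dir = bpy.path.abspath('//')
    for path in deps:
        rel = os.path.relpath(path, blend_dir)   # keep the relative layout
        target = os.path.join(dest_root, rel)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(path, target)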

Tube Feedback Pipeline

Helga

So Helga, in addition to top-level asset management, has pages for shots and models, in the form of list views or thumbnail views. Each one, when clicked, opens its own page that allows the following actions:

  1. Seeing a preview of the shot/asset
  2. A view of the current status in the task tree of that asset
  3. Assign people to a 'watch list' to receive email updates on any activity in the asset, such as:
  4. Getting updates of the artist work on the asset (can include image, movie, or file previews)
  5. Someone else in the project giving feedback on that work (can also include image, movie, or file attachments)
  6. In the case of a shot: starting a render, or collating rendered frames into a movie
  7. Viewing the rendered frames or movie
  8. Specify render settings (size, frames, which file from svn to render, etc.)

Hangouts/Weeklies/Skype/email/etc

We meet twice a week: a shading weekly and an animation weekly. 'In person' reviews are often more efficient, and getting more eyes on the work is also useful.

Future plans

One Application to rule them all, and in the interweb, bind them:

From an artist's perspective we should have a single webapp (or desktop app with internet capabilities), one place where:

  • You can see your assignment/s
  • You can follow your comments/todos
  • You can click in one place to get your files (maybe even automatically open them in the right application?)
  • You have either automatic uploads or you get to click (again, one button) to send your work.
  • Plugins for Blender, Gimp, Krita/ any applicable creation tools where needed.

This can be refined further but simplicity is key.

From a producer/supervisor perspective you should have:

  • Easy and fast data entry for new shots/assets etc.
  • Link the data everywhere, all production scripts should talk to that same asset database
  • Good tracking/setup of dependencies/etc
  • Good overview of project
  • Easy .csv export for spreadsheeting

Federation/ Mediagoblin Integration:

Consider that the feedback part of the project already looks like a typical media sharing site (Mediagoblin, YouTube, Flickr): ya posts yer media and ya gets yer comments. Some sites already have some implementation of ratings etc. to 'gamify' the process a bit, and support remixing etc. If we could manage to integrate a management app with Mediagoblin (which is Python based, and has a plugin infrastructure) we would get the following amazing scenario:

  • Distribution Platform is the same as the Making Platform
  • Optional Federation
  • A lot more that is out of scope for the current topic, and requires its own post. Fateh has some very interesting ideas about this, from a big picture perspective.

 

 
