After an astonishingly successful 3D re-release at the cinema, The Lion King finally comes to Blu-ray in the UK today.

Not only can you now watch the film at home in gloriously high definition, but you can also do so in 3D. A few weeks ago we published our interview with Don Hahn, the producer of the film. We also spoke with Robert Neuman, Disney’s stereoscopic supervisor, who was responsible not only for the conversions of The Lion King and Beauty and the Beast, but also worked on Bolt and Tangled.

At the time of the interview, the conversion of The Little Mermaid hadn’t been announced (it may not even have been decided – we spoke to him back in early September), but our conversation should give you some idea of why it will take the company two years to complete it, while they can do two Pixar films in the same amount of time.

I’ve just spoken with Don about the process you used to create this. You had Don, the two directors, yourself and your team. Presumably it started with the depth script, would you be able to talk about that?

Yeah. It started off with a couple of spotting sessions. We’d watch the movie with sound, and then without sound, and basically have the filmmakers highlight to me what the big moments were for them, what they really wanted to hit hard, and also have me sound off some of my ideas about what I wanted to do in terms of the depth, and bounce them off of them. That was invaluable. There’s a bit of trepidation when you’re approaching a project, especially something that’s achieved the status of being a classic. It’s different from a current production, which is why it was great to have the original filmmakers on board in that way; I could go to them and say, ‘Is this in line with what your vision of the film was?’ With a production of this nature, that was great.

You mention it’s different from doing a standard 3D film; I know you worked on Tangled as well, so using that, perhaps, as the baseline, what is the difference in the process?

Well, it begins the same way, with the depth script being the first thing we do. [Then] we go through the movie and, like I said, informed by the filmmakers, we look at it and say, ‘On a scale of one to ten, what is the emotional content of any particular shot, of any particular point in time?’ That then informs [a process where] I go through and do the mark-ups – roughly 1,200 shots in the film – and make annotations. It’s basically sticking a signpost in the ground at a marker, and saying, ‘OK, I want this to be a certain depth into the screen, and this point to be a certain depth out of the screen.’ That was then issued to the artists, so up to that point it was all the same. Where it diverges is how you implement it.

First of all, the interesting thing about this is that, since we’re dealing with everything on a level-by-level basis, we can do something that would be – let’s say there were a physical reality behind this, which there isn’t, because there’s a lot of artistic licence taken; this is something from an artist’s imagination. But if this were a real shot of a real savannah, certain things would be dictated by optics when we shot it – how much depth would be in certain parts of it. Now it becomes more of an artistic choice as to how much depth we give a particular part of it.

With something like Tangled we have the same flexibility, because we have the ability to use multiple stereo rigs and assign different inter-ocular distances to different parts of the shot, and dial in the amount – so we can create a certain amount of volume on Rapunzel, a certain amount on the background, and put it all together. We’re quite used to doing that; it’s actually the same process.

Obviously you’re somewhat constrained by the animation, for instance, you may need to keep the point of convergence* in front of everything if you have foliage or similar elements on the outskirts of the screen.

Well, we have a technique that we use called ‘floating windows’ that allows us to deal with things on the frame line. If we need to bring the convergence to a certain point – which we need to do for visual continuity across shots – and yet the composition of the shot, the staging of the shot, is such that there is going to be some foliage on the edge of frame, what we do is float this fake proscenium out, place it in front of it, and the shot works.

So in effect, it’s a fake side of the screen.

Right. We create an artificial masking. Instead of the actual frame line, we have a frame line where we’re controlling the offset between the left and right eye to give it depth, so we can actually bring that in front of something that’s coming out of the screen, and put it behind this virtual screen.
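This floating-window idea can be sketched in a few lines of image code. Everything below is a hypothetical illustration of my own, not Disney’s tooling: masking the frame edge at different widths in each eye gives the mask edge its own disparity, so the ‘window’ itself appears to sit in front of the screen plane.

```python
import numpy as np

def floating_window(left, right, offset=6):
    """Mask the left frame edge at different widths in each eye.
    A wider black band in the left eye puts the visible edge further
    right in that eye, giving the edge negative parallax - the window
    appears to float in front of the screen plane."""
    left, right = left.copy(), right.copy()
    left[:, :offset] = 0.0        # wider mask in the left eye...
    right[:, :offset // 2] = 0.0  # ...narrower in the right eye
    return left, right

# two identical 'eye' frames, then float the window edge forward
frame = np.ones((48, 64))
l, r = floating_window(frame, frame, offset=6)
```

In a real conversion the mask widths would be animated per shot, but even this static version shows how the artificial frame line can be placed in front of out-of-screen elements.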

You’re working on something where there’s a real risk of you ending up with a decoupage picture rather than a film that was actually 3D. Was that something that you worried about?

No, because the whole undertaking was predicated on an early idea I’d had about how I would like to do it. When I was first made stereoscopic supervisor at Disney Animation, part of the roadmap that I laid out was this idea of doing 3D adaptations of hand-drawn films in which they have actual volume. To me it was all predicated on the idea of doing it that way to begin with. I had some ideas for the tool set we could build, and the idea was to have our software developers realise those.

We use pixel displacement. We take the image, plus a depth map which gets sculpted by the artists we have working on this. We create volume in the character by sculpting out this depth: the things furthest forward, like Scar’s finger, are the whitest, and as you get to the furthest point, it gets darker. The trick is then: how do you create this map? We have gradient algorithms – that’s the most basic level. It’s a thing that takes the shape of an artwork level – so it takes the lion’s shape – and makes a rounded map that’s darker on the edges and lighter in the centre, based on the shape. That’s the equivalent of taking a stitched Mylar lion balloon and putting air into it. So what you’re getting is something very non-specific, but volumetric. It looks kind of puffy. But then we want to do something more specific.
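A toy version of the two ideas in that answer – ‘balloon’ inflation of a silhouette into a puffy depth map, then pixel displacement driven by that map – might look like the following. This is a sketch only, assuming NumPy and SciPy; the function names and the distance-transform shortcut are mine, not Disney’s tools.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate(mask):
    """'Balloon' inflation: each pixel's distance from the silhouette
    edge, normalised to 0..1 - lighter in the centre, darker at the
    edges, like air pumped into a stitched Mylar balloon."""
    d = distance_transform_edt(mask)
    return d / d.max() if d.max() > 0 else d

def displace(image, depth, max_shift=4):
    """Pixel displacement: shift each pixel horizontally in proportion
    to its depth value to synthesise one eye's view."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            s = int(round(depth[y, x] * max_shift))
            out[y, min(w - 1, max(0, x + s))] = image[y, x]
    return out

# a circular 'character' silhouette standing in for an artwork level
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
depth = inflate(mask)
level = mask.astype(float)
left = displace(level, depth, max_shift=+4)   # one eye shifted right...
right = displace(level, depth, max_shift=-4)  # ...the other shifted left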

Then we have a gradient primitive tool set that we created to add structure. So we have a gradient cube, and we can place that cube in space to add structure. Faces have planes on the cheeks, planes on the front of the face, so that adds structure. [We also have] joint tools to articulate the limbs gradient-wise, then ellipsoids to add a nose or any other feature. We came up with a depth painting tool where you add a couple of little greyscale brushstrokes – basically little hints. So the closest thing would be the tip of the nose: you’d put a white dot there, and a little dark grey dot back [near the eye]. Every little hint you add to [the depth painting] makes it look more and more like a sculpture of Mufasa. So it was quite a powerful tool.
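The gradient primitives he describes can be imagined as simple analytic shapes composited onto the depth map. Here is a hypothetical ellipsoid primitive of my own devising – not Disney’s tool – pulling a ‘nose’ forward out of a flat face level, with a final depth-painting hint dot on top:

```python
import numpy as np

def ellipsoid_primitive(h, w, cy, cx, ry, rx, peak=1.0):
    """A gradient primitive: brightest (closest) at its centre, falling
    away to zero at its edges - the kind of shape dropped in to pull a
    nose forward out of an otherwise flat face."""
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2
    return peak * np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))

# a flat, mid-grey face level in the depth map
face = np.full((64, 64), 0.4)
# composite the nose with max(): it only ever pushes forward
nose = ellipsoid_primitive(64, 64, cy=40, cx=32, ry=6, rx=5, peak=0.9)
face = np.maximum(face, nose)
# a depth-painting 'hint': one white dot at the very tip of the nose
face[40, 32] = 1.0
```

The max() composite is the key design choice here: primitives and hints can only add relief on top of the base gradient, so each brushstroke refines the sculpture without flattening what came before.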

So it’s a piece of software that you guys have developed that reduces the workload on the artist.

Right. Having to individually paint stuff frame by frame is what you want to avoid at all costs, because you’d never get the picture done if you had to do it like that.

Presumably there is still an element of hand-tweaking on everything?

Definitely.

So, essentially, what you’ve developed is a process whereby, as long as it was digitally inked and painted, you can take any Disney animation, ‘3D-ify’ it badly, then tweak it into 3D that’s great.

That’s right. And anything before then would be the same process, plus the initial image segmentation phase, where you’d have to do the rotoscoping to get your individual artwork levels, and then once that’s done you can put it through the same pipeline.

And that was developed by you and your team?

Yes.

So, I’ve got to ask, are you an artist or a computer programmer?

Well, since the early 1990s I’ve been working as an artist and filmmaker, but before that I was an electrical engineer, so I have a technical background that allows me to use both hemispheres, left and right. That was helpful because a lot of this tool set wasn’t there, which I suppose would have been more daunting if I didn’t have a technical background. So I was able to help design the tools.

Presumably this technique is shared with Pixar now?

Yeah. Their conversion work on Toy Story 1 and 2 was based on their digital archives, which are 3D in nature – that is, archive files which are already 3D geometry, essentially. So they didn’t need to do it, but they’re sharing some of the other tools we developed. Our CG stereo rig was imported over there when they started on Up!

So this process has been in development since before Up!?

Well, this is different. We have a different camera base.

It was just the CG stereo rig you mentioned.

Which is something that was developed for our CG stuff. That’s more or less what it was used on – Bolt and Tangled. That was technology that we shared with Pixar. This is there for them, but it’s not clear what they’d use it for at this point. Actually, we have used this on some of our CG stuff. For one-off shots where you have a background that isn’t used in other scenes, we often do a matte painting – a painted background – so there’s no actual geometry, and for 3D purposes we have used this to convert those.

Actually, that gets on to something that is interesting with The Lion King. There were a couple of moments, particularly on the desert floor, you seem to have allowed the background to remain flat.

That’s because of scale. It’s a valuable lesson we learned, particularly on a show like this, where you have this majesty and grandeur to the backgrounds – there’s very much a David Lean sense to the cinematography. In a case like that, with the tools we have and the artists working on them, I had to pull them back at some point, because if you go back to my original mark-ups, I was very careful to use a very limited amount of parallax across a background. The first thing that happens when you start to add depth is this miniaturisation, this Lilliputian effect, where something looks like it would fit on top of a cake rather than being a huge, majestic background.

I suppose the other problem you have is that, because it’s a flat, hand-drawn animated background, you tend not to have focus in terms of depth of field.

The painters are really great in terms of building that aerial diffusion into stuff. You’d be surprised, there’s a lot of that inherent in the artwork, but we’re very careful to not put depth in just because we can, and that’s what you’re seeing there. We want to feel the scale of the desert, we don’t want the desert to feel like it would fit on a postage stamp.

There is one other thing that didn’t seem to work as well as the rest of the film – the shaking, when the ground is moving. It’s moving faster than my eyes are able to keep track of, so it became blurry.

You’re talking about the scene that had camera shake, in the cave with the hyenas?

Yeah.

That’s a tough shot. That goes back to us not wanting to change the original. We could have taken that out or softened it, but I didn’t want to. Here’s the interesting thing about 3D: you wind up with the same thing that’s in the original, only more so. Things tend to be amplified, so anything that has a strobing effect is going to be more pronounced in 3D. We fought that to a certain extent, where it wasn’t changing the intent of the shot, so there are certain cases where we added motion blur – which wasn’t able to be done when the film was made originally – to soften some of the strobing. Strobing is inherent in any sort of hand-drawn animation, or stop-motion animation for that matter; those types of things have an inherent ‘strobey’ quality, which is only going to be enhanced when you put it into 3D.
We did fight that, and it was something I was aware of in that scene with the cave, but I thought it was quite fun, in a way, and I didn’t want to lose that original flavour.

If, in future, there is something like a fast tracking shot that doesn’t work too well in 3D in a film that you’re converting – I’m thinking perhaps the magic carpet scene in Aladdin is going to be something you have an issue with – are you going to make it work in 3D?

Yeah, to the extent that it doesn’t change the character of the shot. That’s kind of our mandate. Like I said, we did do some motion blur and optical flow techniques on this, where it wouldn’t change the effect of the shot. But in that case, where there was this explicitly over-the-top camera shake put into the original film, anything we did to change it would be changing the intent of the shot, and that’s what we wanted to stay away from. But to the point where it didn’t change the character of the shot, I would do whatever would make for the better 3D.

The Lion King is out on 3D Blu-ray, Blu-ray and DVD today.

*The point where the left eye and the right eye converge on an object. Anything set behind this point recedes into the screen, anything in front of it comes out from the screen.