There are rare moments when I'm at the cinema and I'm so inspired by what I see, I try to think of ways I can incorporate such ideas in my Machinima.
In Blade 2 we saw the introduction of the L Cam. CGI shots of digital stunt men were seamlessly merged with live action shots, providing more fluid action scenes.
In one scene, Blade gets punched in a live action shot, sending him hurtling into the air. The action slows down and he comes so close to the camera (he's now the CGI Blade) that we can see the sunshades on his head wobble a little. He smacks into the wall, and the live action Blade lands on the ground.
Traditionally this is done by cutting the CGI and live action shots together, but the L Cam technique allowed it to be done in just one shot! Apparently the L stands for "liberated", and as far as Machinima goes we've almost ALWAYS had a liberated camera. The problem for me is that my mind wasn't quite this liberated, and for good reason. When I first tried my hand at Machinima I really went to town with the disembodied camera idea. Almost every shot in my first film was a dolly; the camera was weaving through people's legs and pipes, hovering in the sky. I was out of control! I had to learn to rein that camera in, and in doing so, perhaps some of the freedoms afforded by a virtual camera were forgotten. Until I saw Blade 2. Bouncers, had I finished it, would have had some great action sequences thanks in part to this film (I might still finish it!!).
Despite what people may think from my early films, I've always been a bit of a facial animation enthusiast. Back in the Quake 2 days the technical process for facial animation made it so difficult to get a good performance that by the time I came up with the idea used to animate the faces in Beast (an idea which was, and still is, unique to my knowledge) I was just happy I could have lips moving at all. The facial animation in Beast made the characters in Bouncers look like stroke victims, but it still wasn't as good as it could have been. My first gripe is that the characters in Beast don't blink in the whole film. This wasn't impossible in CrazyTalk 4.5; it was just difficult to implement while keeping other facial expressions going. My second gripe is that their eyeballs didn't move much. Other than on one occasion, they always faced forward. This is where the cinema inspiration slips in again.
When The Polar Express hit the box office, one seemingly persistent criticism of the CGI was that the characters' eyes seemed dead, giving them a very eerie feel. In Beowulf they combated this by using electrooculography to capture the movement of the actors' eyes exactly as they moved them, and the result was a much improved virtual performance. Now, I have no access to this technique, but it made me think about what I could do to improve on Beast's method, and luckily CrazyTalk 5 accommodated. One thing that makes eyes seem more alive is jitter. The eyeballs never rest perfectly still, a fact that makes controlling a computer via eye movement a challenge for interface designers. Again, 4.5 could have done this, but not without difficulty. Thanks to the live puppeteering in CT5 I'll be able to make the characters blink, roll their eyes around, AND attempt to simulate a small level of retinal jitter, all in one pass.
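To make the jitter idea concrete: here's a minimal sketch (my own illustration, not anything CrazyTalk actually exposes) of how one might generate subtle per-frame eye offsets as a bounded random walk. The function name, the degree units, and all the parameter values are assumptions chosen for the example.

```python
import random

def eye_jitter_track(frames, amplitude=0.3, step=0.08, seed=42):
    """Generate per-frame (x, y) gaze offsets in degrees.

    A bounded random walk: the eye drifts a tiny amount each frame
    but never strays more than `amplitude` degrees from centre, so
    the gaze reads as "alive" without the character looking shifty.
    """
    rng = random.Random(seed)  # fixed seed keeps takes repeatable
    x = y = 0.0
    track = []
    for _ in range(frames):
        x = max(-amplitude, min(amplitude, x + rng.uniform(-step, step)))
        y = max(-amplitude, min(amplitude, y + rng.uniform(-step, step)))
        track.append((round(x, 3), round(y, 3)))
    return track

# One second of subtle eye movement at 25 fps
offsets = eye_jitter_track(25)
```

The key design point is the clamping: unbounded random drift would eventually send the eyes wandering off, while tiny clamped steps stay in the "barely perceptible" range that live puppeteering would otherwise have to fake by hand.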
With my animation muscles nicely flexed, the next thing that's really given me a brain itch is sound. As old fans of Binary Picture Show will know, I struggled with sound quality for quite a while. Now that I understand it a bit better, things have improved and I can move on to spending every other waking moment thinking about the actual sound effects. This is even more important in Digital Memory because of the main character, who, my faithful blog readers might remember, is a robot. "Should a robot really make some kind of noise every time it moves, or would that just be annoying?", I often ask myself. Well, Pixar's latest gem, WALL-E, tells me yes, yes they do make noise with every movement. However, I get the troubling feeling that if this isn't done very well it would indeed descend into an assault on the ears, as annoying as someone persistently zipping and unzipping their trousers in your face. It's not just the sound work that was inspiring, though. I found this film even more visually appealing than Finding Nemo. As the two main characters don't exactly have English as their first and commonly spoken language, their actions (or animations) did the bulk of the talking, and it was done so well, especially since they weren't humanoid in design. Just as facial animation helps a character appear more lifelike, the sound effects given to WALL-E's every roll forward, lift of an arm, or twitch of his eyebrows added to his presence.
If I can get anywhere near a similar result in Digital Memory I'll be a very happy man. It's not impossible. Phil Rice and Ricky Grove have kindly offered to help (and we all know how good they are), but the amount of sound work seems so staggering I doubt I could let them at it in good conscience. In Beast, most of the sound effects were already in place when it went to Phil. Ricky did some clean-up (there were some clipping problems in the dialogue files, which I now know occurs during the video capture process in MotionBuilder) and Phil added a few sounds, reverb effects, etc., to give it a more engrossing atmosphere. Hopefully I can do something similar for Digital Memory so that it never becomes a chore for them. It's a daunting thought, since the sound in this is going to be so much more complex than in Beast. As always, I cross my fingers for a good outcome.
Totally off topic: I saw a film today, Twaddlers, made in Antics. The viewer comments on YouTube reminded me why I don't like YouTube, and partly why I left Machinima.com. Infantile comments aside, it was fun, but it really annoyed me because of its similarity to an idea I had at university and was really looking forward to producing some day. Twaddlers could have been made a little better, with some polish here and there, but the random humor is very funny; I loved it. Give it a look if you can. From the comments, some people get it and some just don't.
So anyone who read yesterday's post (maybe about 10 people, of whom I imagine two returned) may have been left wondering:
"DAZ3D models in Machinima? Too polygon rich, this fool's finally lost it."
"You fool, you think that's original but it's already been done! FOOOL!!"
Well, both schools of thought are correct. Take the very popular DAZ 3D model Victoria 3, for example. She's somewhere in the region of 75,000 polygons if memory serves, with the reduced resolution version somewhere between 32,000 and 45,000 polygons, and that's without clothes and hair. Way too high for just one Machinima character, no matter how hot she's supposed to be.
Then again, it can't be that bad, because my main man Tom Jantol regularly uses DAZ 3D models in his films, and he seems to get along fine. Well, yes he does use them, and my goodness they look great! Oh, those beautiful curves and not a straight line in sight! But as many people will know, more polygons in the scene require more power. This may be one contributing factor to why Tom doesn't have many of these characters on screen at once. What's more, if you look closely, the characters aren't clothed. They're naked as the day they were born, with a stone/marble sort of texture on them.
Then it gets even worse if you want to easily implement facial animation. With Mimic you can get your lip syncing done more easily, but getting your characters to actually emote still isn't as easy as using the CT/MB technique.
The reason all this is so important is that both Tom and I are MotionBuilder users. We are part of a very small crowd that uses the tool to actually capture the end result. For me it's the only real-time environment that gives me many of the freedoms I had back when I used Quake 2. Ever since leaving game engines behind (and even before that, really) it's been a problem finding where the next model for each film is going to come from. If you use a game, all that stuff comes pre-packaged. Break it open and you're good to go, but when you leave that behind it becomes more important to provide for yourself. DAZ and Poser have huge amounts of content available relatively cheaply, so if you wanted you could even sell the resulting film. But how would I get around the problems I mentioned earlier? I want more people in my films, and I want to use the same technique for facial animation as I used in BEAST.
Well, with the help of Tom, I've been theorising loads on a possible solution (sometimes I think that's all I do). It involves reducing the number of polygons in the models down to a point where they are much more manageable but still retain their quality. Anyone with some experience in this will know that it's a messy job. Usually when you do it the models get real ugly real damn fast and things become unrecognisable. My research led me to understand that it can indeed be done less destructively. I can't explain the technicals, but DAMN it makes one hell of a difference!
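For the curious, here's the general idea behind polygon reduction, sketched in miniature. This is NOT the method or tool actually used here (the real decimation tools in packages like Blender use smarter quadric-error metrics to decide what to collapse); it's a deliberately crude "collapse the shortest edge until you hit the target face count" toy, run on an octahedron so it fits on screen.

```python
def decimate(verts, faces, target_faces):
    """Crude mesh decimation by shortest-edge collapse.

    verts: list of (x, y, z) tuples; faces: list of (i, j, k) index
    triples. Each pass merges the two endpoints of the shortest edge
    into their midpoint and drops any triangles that degenerate.
    """
    while len(faces) > target_faces:
        # Collect all unique edges from the current faces
        edges = set()
        for a, b, c in faces:
            for p, q in ((a, b), (b, c), (c, a)):
                edges.add((min(p, q), max(p, q)))
        # Pick the shortest edge (squared length is enough for ranking)
        def sq_len(e):
            p, q = verts[e[0]], verts[e[1]]
            return sum((p[i] - q[i]) ** 2 for i in range(3))
        u, v = min(edges, key=sq_len)
        # Collapse: move u to the edge midpoint, remap v onto u
        verts[u] = tuple((verts[u][i] + verts[v][i]) / 2 for i in range(3))
        new_faces = []
        for f in faces:
            g = tuple(u if idx == v else idx for idx in f)
            if len(set(g)) == 3:   # drop triangles that degenerated
                new_faces.append(g)
        faces = new_faces          # (unused verts are left in place)
    return verts, faces

# Toy mesh: an octahedron (6 vertices, 8 triangles), halved to 4 faces
verts = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
faces = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4),
         (1, 0, 5), (2, 1, 5), (3, 2, 5), (0, 3, 5)]
verts, faces = decimate(verts, faces, target_faces=4)
```

The "less destructive" methods mentioned above differ mainly in the choice of which edge to collapse and where to put the merged vertex: instead of naive midpoints, they minimise a measure of how far the new surface deviates from the original, which is why detail like noses and eyelids can survive a 10,000-to-3,000 reduction.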
With it, I have been able to reduce a 10,000 polygon head to 3,000 polygons and keep most of the juicy goodness. Now that's still a high count for a face, but hey, it could be worse. Then I have to simplify the eyes and mouth areas so that they will accept the CrazyTalk technique better (I really should give it a name). I found out that the iClone G2 characters are around 10,000-14,000 polygons each, so I've set that as my quota here. The next challenge is to do the whole body, but because of the detail on heads and the time we spend looking at them, heads are much harder, so I believe the difficult part is mostly done.
So here for you today is the head of Victoria 2, at around 3,000 polys. Just so she'd look a lot less like an alien, I gave her hair from The Sims 2 (2,000 polys), from the great site xmsims.com. I deleted an ear and some of the scalp, so in total it came to just around 5,000 polygons. There are still many improvements that can be made through UV manipulation and texture baking, but for a test vid I think the result has been great!
So the hope is that by reducing the polycounts and tinkering here and there I can populate a whole film using this technique, and that's what I hope to do for our next big one. But that's all for today! On Monday we'll look even closer at the idea of creating these abominations. I've only touched very lightly on the idea of mixing resources from different games in one engine. Obviously this can be taken much further, so stick around and we'll learn more. I didn't even get round to talking about iClone 3! For now, have a fun weekend!
It's been extremely hectic here at The Show over the last few months but finally things have cooled down and I can get back to updating this blog and working on our next big film.
Some of the work we've done recently you will know about, whereas other projects have been kept fairly quiet. Shortly after finishing Roommate Wanted I started another commissioned project for Antics Technologies. Very much like RW, the aim was to make a film that showed some of the strengths of the tool and how accessible it can be. For anyone who has used Antics (there's a free version now, so you really have no excuse if not), it has some great benefits, such as simple set construction and the great way the characters can interact with objects and scenery. Every time I use it I end up thinking it's very much like The Sims 2 without all the annoying things you have to do to get the characters to behave.
One thing that was very difficult to get around, though, was the basic lip sync and lack of facial animation, and of course using one of my favorite Reallusion products to fix that wasn't really an option in this case. Regardless, I think it turned out quite nicely. It's actually been out for a few weeks now, but because I've been so deep in another commission and recently moved house, I could only announce it now. It's called Anonymous Coward and you can catch it in the Antics Cinema (where you will also notice a film by CJ Ambrosia). The guys at Antics seemed quite pleased with it, so hopefully you will enjoy it too.
The third project was a big one. Unlike the previous two, which I was easily able to do alone, this project had a much bigger budget so it really needed the team, and as always, Dreaded Kane emerged from the bat cave and rolled up his sleeves (for anyone who doesn't know, Kane is a long-standing member of the Justice Lea... er, Binary Picture Show). The film was called Peter's Story, and it was unlike anything I ever imagined us doing. This was a 6 minute information video and, as the title suggests, a narrative film; I worked very closely with Professor Paul Foley of De Montfort University (going to last year's UK Machinima festival was very worth it).
It was great to do (first 'useful' thing we've done) and everyone loves money, but now that's over I can get back to writing films with lots of swearing, angst, and possibly some nudity until the next such project comes along. For ages I've been meaning to fix up our website, so that's a big priority too.
I'm resuming work on the project I started shortly after BEAST. It's a sci-fi film in which I hope to use DAZ 3D character models. Yes, they're way too high in polycount, but tomorrow I hope to shed some light on it all (it should be very interesting), along with the part the recently released CrazyTalk 5 will play in the film. What's more, I was given a sneak peek at iClone 3 and it's got me very excited! But enough for today. Check back later for more happenings at Binary Picture Show and my thoughts on IC3!
Sep 20, 2007 at 07:23 AM | 0 Comments
Well, Beast is finally out, and from the response it received at its premiere all its aims were met. For those who haven't yet seen it, here's the YouTube upload.
The time spent working on the story was worth it, and it has indeed turned out to be an emotional film. As such, the facial animation played a key role, and having watched the film without it, I can say it makes a big difference. Of course, the time spent trying to get it done in time for the Europe Machinima fest wasn't worth it, as it didn't get nominated, but I'm hoping this flick is of a level that will see a Binary Picture Show film doing alright at other festivals. Thanks a lot to the guys at Machiniplex.com for organising the release event; you can see a high quality stream of it over there, or at Stage6. Stay tuned, because I should soon be posting some notes on the film's production for those interested in how the creation process went.
Aug 25, 2007 at 01:53 PM | 2 Comments
The last test video (the 'Meet the Heavy' spoof) went well. I definitely intend to use this method on the new 'Bouncers' series, but before I commit to it entirely we'll be making a short film that relies heavily on the technique, to see just how far we can push it and whether it's really feasible for a runtime above 1 minute. So this test project is called 'Beast', and it's heavy on dialogue. One problem Machinima has been plagued by almost since inception is the lack of emotional expression available. Facial animation was always difficult to implement, and on the whole, emotional Machinima has had to rely solely on audio. Great actors and a few choice tunes were really all you could use, and you don't need to be a veteran to know that great acting is rare.
Thankfully, now there are Half-Life 2 and UT2K4. However, many of the popular engines still have no lip syncing tools. The Sims 2 is a great example. The film dialogue has to be laid over characters who are actually moving their lips to something else (i.e. lines from the game). Because of this I've always thought the technique relied too much on luck, or accidents. Facial expressions are doable using a few tricks, but it's not really possible to get a range of emotions as fluid as in an engine with a dedicated tool. Another great example is Second Life. Highly popular for Machinima, but unlike its counterpart, There.com, it doesn't come with lip sync abilities. And this is where it gets interesting. It's becoming popular, not just in Second Life but also in other lip-sync-lacking engines, to use CrazyTalk. This way you could potentially lip sync any engine, although some video editing is often required, and it can be extensive.
In Machinima's progress, not only are we seeing better graphics as the engines improve, but also a greater ability to connect with the audience. It's from this 'fight for emotion' that 'Beast' will be born. With any luck the facial animation will do what the voice acting cannot, as we are one of the many groups who don't have easy access to great actors. 'Beast' is designed in such a way that the facial animation is not a nice extra, but rather an absolute necessity. Simply having lips move is not enough anymore, and not having them move at all.... So hopefully in a week, we'll have some interesting results. We've been working on it for almost 3 weeks now so it's very close.