There are rare moments when I'm at the cinema and I'm so inspired by what I see that I try to think of ways I can incorporate such ideas into my Machinima.
In Blade 2 we saw the introduction of the L Cam. CGI shots of digital stunt men were seamlessly merged with live action shots, providing more fluid action scenes.
It's a live action shot: Blade gets punched, sending him hurtling into the air. The action slows down and he comes so close to the camera (he's now the CGI Blade) that we can see the sunshades on his head wobble a little. He smacks into the wall, and the live action Blade lands on the ground.
Traditionally this is done by cutting the CGI and live action shots together, but the L Cam technique allowed it to be done in just one shot! Apparently the L stands for "liberated", and as far as Machinima goes we've almost ALWAYS had a liberated camera. The problem for me is that my mind wasn't quite this liberated, and for good reason. When I first tried my hand at Machinima I really went to town with the disembodied camera idea. Almost every shot in my first film was a dolly; the camera was weaving through people's legs and pipes, hovering in the sky, I was out of control! I had to learn to rein that camera in, and in doing so, perhaps some of the freedoms afforded by a virtual camera were forgotten. Until I saw Blade 2. Bouncers, had I finished it, would have had some great action sequences thanks in part to this film (I might still finish it!!).
Despite what people may think from my early films, I've always been a bit of a facial animation enthusiast. Back in the Quake 2 days the technical process for facial animation made it so difficult to get a good performance that by the time I came up with the idea used to animate the faces in Beast (an idea which was, and still is, unique to my knowledge) I was just happy I could have lips moving at all. The facial animation in Beast made the characters in Bouncers look like stroke victims, but it still wasn't as good as it could have been. My first gripe is that the characters in Beast don't blink in the whole film. This wasn't impossible in CrazyTalk 4.5; it was just difficult to implement while keeping other facial expressions going. My second gripe is that their eyeballs didn't move much. Other than on one occasion, they always faced forward. This is where the cinema inspiration slips in again.
When The Polar Express hit the box office, one seemingly persistent criticism of the CGI was that the characters' eyes seemed dead, giving them a very eerie feel. In Beowulf they combated this by using electrooculography to capture the movement of the actors' eyes exactly as they moved them, and the result was a much improved virtual performance. Now, I have no access to this technique, but it made me think about what I could do to improve on Beast's method, and luckily CrazyTalk 5 accommodated. One thing that makes eyes seem more alive is jitter. The eyeballs never rest perfectly still, a fact that makes controlling a computer via eye movement a challenge for interface designers. Again, 4.5 could have done this, but not without difficulty. Thanks to the live puppeteering in CT5 I'll be able to make the characters blink, roll their eyes around, AND attempt to simulate a small level of retinal jitter - all in one pass.
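To make the jitter idea concrete: one simple way to fake it (this is just my own sketch of the concept, not how CrazyTalk implements anything) is a tiny, bounded random walk layered on top of the deliberate gaze direction. All the names and numbers below are made up for illustration:

```python
import random

def eye_jitter_track(frames, step=0.15, limit=1.0, seed=42):
    """Generate per-frame eyeball offsets (in degrees) as a small,
    bounded random walk -- a crude stand-in for retinal jitter."""
    rng = random.Random(seed)  # seeded so the take is repeatable
    x = y = 0.0
    track = []
    for _ in range(frames):
        # nudge each axis a little, but never drift past the limit
        x = max(-limit, min(limit, x + rng.uniform(-step, step)))
        y = max(-limit, min(limit, y + rng.uniform(-step, step)))
        track.append((x, y))
    return track

# Layer the jitter on top of a deliberate gaze direction:
gaze = (10.0, -2.0)  # looking right and slightly down
frames = [(gaze[0] + jx, gaze[1] + jy) for jx, jy in eye_jitter_track(100)]
```

The point is that the eyes keep their intended target while never sitting perfectly still, which is exactly what kills the "dead eye" look.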
With my animation muscles nicely flexed, the next thing that's really given me a brain itch is sound. As old fans of Binary Picture Show will know, I struggled with sound quality for quite a while. Now that I understand it a bit better, things have improved, and I can move on to spending every other waking moment thinking about the actual sound effects. This is even more important in Digital Memory because of the main character, who, as my faithful blog readers might remember, is a robot. "Should a robot really make some kind of noise every time it moves, or would that just be annoying?", I often ask myself. Well, Pixar's latest gem, WALL-E, tells me yes, yes they do make noise with every movement. However, I get the troubling feeling that if this isn't done very well it would indeed descend into an assault on the ears, annoying in the same way someone persistently zipping and unzipping their trousers in your face would be annoying. It's not just the sound work that was inspiring, though. I found this film even more visually appealing than Finding Nemo. As the two main characters don't exactly have English as their first and commonly spoken language, their actions (or animations) did the bulk of the talking, and it was done so well, especially since they weren't humanoid in their design. Just as facial animation helps a character appear more life-like, the sound effects given to WALL-E's every roll forward, lift of an arm, or twitch of his eyebrows added to his presence.
If I can get anywhere near a similar result in Digital Memory I'll be a very happy man. It's not impossible. Phil Rice and Ricky Grove have kindly offered to help (and we all know how good they are), but the amount of sound work seems so staggering that I doubt I could let them at it in good conscience. In Beast, most of the sound effects were already in place when it went to Phil. Ricky did some clean-up (there were some clipping problems in the dialogue files, which I now know occur during the video capture process in Motionbuilder) and Phil added a few sounds, reverb effects, etc., to give it a more engrossing atmosphere. Hopefully I can do something similar for Digital Memory so that it doesn't become a chore at any point while they're helping. It's a daunting thought, since the sound in this is going to be so much more complex than in Beast. As always, I cross my fingers for a good outcome.
Totally off topic: I saw a film today, Twaddlers, made in Antics. The viewer comments on YouTube reminded me why I don't like YouTube, and partly why I left Machinima.com. Infantile comments aside, it was fun, but it really annoyed me because of its similarity to an idea I had at university and was really looking forward to producing some day. Twaddlers could have been made a little better, some polish here and there, but the random humor is very funny; I loved it. Give it a look if you can. From the comments, some people get it and some just don't.
So anyone who read yesterday's post (maybe about 10 people, then I imagine two of you returned) may have left wondering:
"DAZ3D models in Machinima? Too polygon rich, this fool's finally lost it."
"You fool, you think that's original but it's already been done! FOOOL!!"
Well, both schools of thought are correct. Take the very popular DAZ 3D model Victoria 3, for example. She's somewhere in the region of 75,000 polygons if memory serves, with the reduced resolution version somewhere between 32,000 and 45,000 polygons, and that's without clothes and hair. Way too high for just one Machinima character, no matter how hot she's supposed to be.
Then again, it can't be that bad, because my main man Tom Jantol regularly uses DAZ 3D models in his films, and he seems to get along fine. Well, yes, he does use them, and my goodness they look great! Oh, those beautiful curves and not a straight line in sight! But as many people will know, more polygons in the scene require more power. This could be one contributing factor to why Tom doesn't have many of these characters on screen at once. What's more, if you look closely, the characters aren't clothed. They're naked as the day they were born, with a stone/marble sort of texture on them.
Then it can get even worse if you want to easily implement facial animation. With Mimic you can get your lip syncing done more easily, but getting your characters to actually emote still isn't as easy as using the CT/MB technique.
The reason all this is so important is that both Tom and I are Motionbuilder users. We are part of a very small crowd that uses the tool to actually capture the end result. For me it's the only real-time environment that gives many of the freedoms I had back when I used Quake 2. Ever since leaving game engines behind (and even before that, really) it's been a problem finding where the next model for each film is going to come from. If you use a game, all that stuff comes pre-packaged. Break it open and you're good to go, but when you leave that behind it becomes more important to provide for yourself. DAZ and Poser have huge amounts of content available relatively cheaply, so if you wanted you could even sell the resulting film, but how would I get around the problems I mentioned earlier? I want more people in my films, and I want to use the same technique for facial animation as I used in BEAST.
Well, with Tom's help, I've been theorising loads on a possible solution (sometimes I think that's all I do). It involves reducing the number of polygons in the models down to a point where they are much more manageable but still retain their quality. Anyone with some experience in this will know that it's a messy job. Usually when you do it, the models get real ugly real damn fast and things become unrecognisable. My research led me to understand that it can indeed be done less destructively. I can't explain the technicals, but DAMN, it makes one hell of a difference!
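For the curious, here's a toy sketch of the simplest member of the mesh-simplification family, vertex clustering: snap nearby vertices into grid cells, average each cell, and drop any triangle that collapses. To be clear, this is NOT the method I used (production tools use much smarter quadric-error techniques that preserve shape far better); it's just to show why naive reduction gets ugly fast:

```python
from collections import defaultdict

def cluster_decimate(vertices, triangles, cell=1.0):
    """Simplify a triangle mesh by vertex clustering: snap vertices
    into grid cells, average each cell, and drop collapsed triangles."""
    cells = {}  # cell key -> new vertex index
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    remap = []  # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cells:
            cells[key] = len(cells)
        s = sums[key]
        s[0] += x; s[1] += y; s[2] += z; s[3] += 1
        remap.append(cells[key])
    new_verts = [None] * len(cells)
    for key, idx in cells.items():
        sx, sy, sz, n = sums[key]
        new_verts[idx] = (sx / n, sy / n, sz / n)  # cell centroid
    new_tris = []
    for a, b, c in triangles:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if len({ra, rb, rc}) == 3:  # triangle survived the merge
            new_tris.append((ra, rb, rc))
    return new_verts, new_tris

# Two near-identical vertices merge into one; both triangles survive.
verts = [(0.0, 0, 0), (0.1, 0, 0), (1.5, 0, 0), (0, 1.5, 0)]
tris = [(0, 2, 3), (1, 2, 3)]
new_v, new_t = cluster_decimate(verts, tris, cell=1.0)  # 3 verts left
```

Everything within a grid cell gets flattened to one point, which is exactly the "real ugly real damn fast" problem; the less destructive approaches instead pick which edges to collapse based on how much error each collapse introduces.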
With it, I have been able to reduce a 10,000 polygon head to 3,000 polygons and keep most of the juicy goodness. Now, that's still a high count for a face, but hey, it could be worse. Then I have to simplify the eye and mouth areas so that they will accept the CrazyTalk technique better (I really should give it a name). I found out that the iClone G2 characters are around 10-14,000 polygons each, so I've set that as my quota here. The next challenge is to do the whole body, but because of the detail on heads and the time we spend looking at them, they are much harder, so I believe the difficult part is mostly done.
So here for you today is the head of Victoria 2, at around 3,000 polys. Just so she'd look a lot less like an alien, I gave her hair from The Sims 2 (2,000 polys), from the great site xmsims.com. I deleted an ear and some of the scalp, so in total it came to just around 5,000 polygons. There are still many improvements that can be made, through UV manipulation and texture baking, but for a test vid I think the result has been great!
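The budget arithmetic here is simple enough to sketch. The head and hair counts are the ones above; the body figure is a placeholder I've invented purely to show how the quota works out against the 10-14,000 poly iClone G2 range:

```python
BUDGET = 14_000  # upper end of the iClone G2 character range

def character_cost(parts):
    """Total polygon count for a character assembled from parts."""
    return sum(parts.values())

# Head and hair figures are from the post; the body count is a
# placeholder for illustration only.
victoria = {
    "decimated head": 3_000,
    "Sims 2 hair": 2_000,
    "body (placeholder)": 8_000,
}
total = character_cost(victoria)
print(total, total <= BUDGET)  # 13000 True
```

Nothing fancy, but it's the check I end up doing on paper every time a new piece of content joins the character.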
So the hope is that by reducing the polycounts and tinkering here and there, I can populate a whole film using this technique, and that's what I hope to do for our next big one. But that's all for today! On Monday we'll look even closer at the idea of creating these abominations. I've only touched very lightly on the idea of mixing resources from different games in one engine. Obviously this can be taken much further, so stick around and we'll learn more. Plus, I didn't even get round to talking about iClone 3. For now, have a fun weekend!
It's been extremely hectic here at The Show over the last few months but finally things have cooled down and I can get back to updating this blog and working on our next big film.
Some of the work we've done recently you will know about, whereas other projects have been kept fairly quiet. Shortly after finishing Roommate Wanted I started another commissioned project for Antics Technologies. Very much like RW, the aim was to make a film that showed some of the strengths of the tool and how accessible it can be. For anyone who has used Antics (there's a free version now, so you really have no excuse if not), it has some great benefits, such as simple set construction and the great way the characters can interact with objects and scenery. Every time I use it I end up thinking it's very much like The Sims 2 without all the annoying things you have to do to get the characters to behave.
One thing that was very difficult to get around, though, was the basic lip sync and lack of facial animation, and of course using one of my favorite Reallusion products to fix that was not really an option in this case. Regardless, I think it turned out quite nicely. It's actually been out for a few weeks now, but because I've been so deep in another commission and recently moved house, I could only announce it now. It's called Anonymous Coward and you can catch it in the Antics Cinema (where you will also notice a film by CJ Ambrosia). The guys at Antics seemed quite pleased with it, so hopefully you will enjoy it too.
The third project was a big one. Unlike the previous two, which I was easily able to do alone, this project had a much bigger budget so it really needed the team, and as always, Dreaded Kane emerged from the bat cave and rolled up his sleeves (for anyone who doesn't know, Kane is a long-standing member of the Justice Lea - er... Binary Picture Show). The film was called Peter's Story, and it was unlike anything I ever imagined us doing. It's a 6 minute information video and, as the title suggests, a narrative film, and I worked very closely with Professor Paul Foley of De Montfort University (going to last year's UK Machinima festival was very worth it).
It was great to do (first 'useful' thing we've done) and everyone loves money, but now that's over I can get back to writing films with lots of swearing, angst, and possibly some nudity until the next such project comes along. For ages I've been meaning to fix up our website, so that's a big priority too.
I'm resuming work on the project I started shortly after BEAST. It's a sci-fi film in which I hope to use DAZ 3D character models. Yes, they're way too high in polycount, but tomorrow I hope to shed some light on it all (it should be very interesting), along with the part the recently released CrazyTalk 5 will play in the film. What's more, I was given a sneak peek at iClone 3 and it's got me very excited! But enough for today. Check back later for more happenings at Binary Picture Show and my thoughts on IC3!
Jan 15, 2008, 07:17 PM | 0 Comments
Since doing BEAST I've been so busy I haven't even had time to blog about the new things I've learned, or the new plans that I have for the next big Binary Picture Show piece. I haven't had the chance to talk with many of my close online friends, and the research I was doing for our next big film has been on pause for about a month now (when I finally unveil it, it'll be so cool though!).
But it's not all bad news! One of the things I have been able to do recently is a little comedy short for Reallusion, using iClone 2.5. It's a fun little piece fit for the family (no need to cover your kids' eyes while a man bashes the crap out of another man who's tied up in a chair this time) and it's called "Roommate Wanted". Although it's not yet at its peak, I've always liked iClone, and I think it has the potential to contribute very nicely to Machinima. For various reasons most of the things made in it seem to be music videos, so I was quite happy to do a film. Hope you all enjoy it. You can find the YouTube link below, and a Stage 6 version should follow shortly.
For any of you iCloners, I've also made available the living room set that I made for the film. You can import it into iClone as a prop. It has transparent windows, and you can edit the texture.
Jun 13, 2007, 07:29 PM | 5 Comments
It's been ages since I posted any progress on Bouncers, so this one is massive. I finally got round to trying an idea I had for lip syncing in the new Bouncers series, and here's a demonstration. Those of you who have seen the "Meet the Heavy" video for Team Fortress 2 should get an extra laugh from this.
I never imagined I'd have that kind of control. The facial animation is done in Reallusion's CrazyTalk, then brought into Motionbuilder. I did a few animations to make sure he wasn't standing still, and perfecto. This is a great improvement over the old method I used back in Quake 2. And to think I had the idea while waking up one morning.
For anyone wondering why the blog is so sparse: due to an unfortunate event, all the old posts are gone. This is technically a new blog. I was able to salvage the HTML file, so my old article "Machinima's Missing Child" has been re-posted. I may bring back other semi-important ones later.