Author: VFX Blog

  • What is UV Mapping?

UV mapping is a critical task in 3D modeling. Its purpose is to unfold the surface of a 3D model onto a 2D plane so that a texture image can be applied to it. It is quite a tedious job for anyone, let alone a beginner. Whether you love it or hate it, you still cannot avoid it: it is essential to understanding 3D modeling.

There are many problems that a novice can run into while working on UV mapping. However, with the right textures and a solid grasp of the concepts behind UV mapping, these problems are not difficult to correct.

In this article, we will cover everything that a beginner needs to know about textures and UV mapping.

    What is UV?

In UV mapping, everything depends on the terms ‘U’ and ‘V’. These letters name the two axes of the 2D texture plane, much like X and Y. Since ‘X’, ‘Y’, and ‘Z’ are already used for the three axes of the 3D model itself, ‘U’ and ‘V’ were chosen to denote the horizontal and vertical axes of the 2D texture space.
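To make the idea concrete, here is a minimal sketch (the vertex and UV data are hypothetical, not taken from any particular tool): every 3D vertex is paired with a 2D (u, v) coordinate in the 0–1 range that says where it lands on the texture image.

```python
# Minimal sketch: each 3D vertex (x, y, z) is assigned a 2D (u, v)
# texture coordinate in the 0..1 range. All data here is illustrative.

# Four corners of a unit quad in 3D space
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]

# Matching UV coordinates: this quad maps onto the whole texture
uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def uv_to_pixel(u, v, width, height):
    """Convert a (u, v) pair to a pixel position on a texture image."""
    return (int(u * (width - 1)), int(v * (height - 1)))

for (x, y, z), (u, v) in zip(vertices, uvs):
    px, py = uv_to_pixel(u, v, 512, 512)
    print(f"vertex ({x}, {y}, {z}) -> texel ({px}, {py})")
```

The only job of the UV map is this lookup: given a point on the 3D surface, which pixel of the 2D image colors it.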

    Using the seams as a guide, the 3D model is laid out on a 2D surface. Pattern-making in sewing can be compared to this process. A custom image can be created based on the pattern formed after the mapping process is complete.

The image is then applied back onto the 3D model. Models with a high level of color and detail can be created using this method. There are other ways to color models, but they are limited in scope.

    UV Unwrapping

Flattening a 3D model onto a 2D plane has a common side effect: seams. To create a 2D UV map from a 3D mesh, the surface has to be cut open somewhere, and those cuts are the seams.

In UV unwrapping, the goal is to keep the seams as few and as hidden as possible while introducing only minor distortion in the map. Distortion appears wherever the size or shape of the model’s polygons has to change in order to lie flat. Too much distortion will degrade the detail that the texture can carry.

Seams and distortion trade off against each other: fewer seams mean more stretching, while more seams reduce distortion but are easier to spot. Some rules to follow to keep the seams unnoticeable are:

    • Run seams along hard edges, where they are usually less noticeable.
    • Make them blend in with the rest of the design.
    • People are less likely to notice if you tuck them under or behind your model’s centerpiece.

    Overlapping UVs

Overlapping UVs are another issue that needs to be addressed in UV mapping. They occur when two or more polygons are placed on top of one another in the UV map. As a result, those parts of your model will receive identical texture information, because they occupy the same UV space.

To fix overlapping UVs, you generally need to separate them by hand or use tools such as the Unfold and Layout features in your application.
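A rough way to see how a tool might even detect overlaps: compare the 2D bounding boxes of UV faces. Real applications do exact polygon-intersection tests; this sketch, with hypothetical face data, shows only the cheap first pass.

```python
# Coarse sketch of spotting overlapping UV faces via their 2D
# bounding boxes. A real unwrapping tool does exact polygon tests;
# this is only the inexpensive first filter. Data is hypothetical.

def uv_bbox(face_uvs):
    """Axis-aligned bounding box of one face's UV coordinates."""
    us = [u for u, _ in face_uvs]
    vs = [v for _, v in face_uvs]
    return (min(us), min(vs), max(us), max(vs))

def bboxes_overlap(a, b):
    """True if two (min_u, min_v, max_u, max_v) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

face_a = [(0.0, 0.0), (0.4, 0.0), (0.2, 0.4)]   # one UV triangle
face_b = [(0.3, 0.1), (0.7, 0.1), (0.5, 0.5)]   # overlaps face_a
face_c = [(0.6, 0.6), (0.9, 0.6), (0.8, 0.9)]   # clear of face_a

print(bboxes_overlap(uv_bbox(face_a), uv_bbox(face_b)))  # True
print(bboxes_overlap(uv_bbox(face_a), uv_bbox(face_c)))  # False
```

Faces flagged by a test like this are the ones you would pull apart by hand or re-layout automatically.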

In most cases, smaller projects, such as mobile applications, tend to suffer most from overlapping UV problems, which can cause the program to lag and even display duplicate texture data.

    UV Channels

A UV channel maps a Static Mesh’s vertices to coordinates in 2D space. Since a single object can have multiple UV maps, a mesh can carry several channels, and video games typically use one or two UV channels per mesh. Excessive use can hurt performance or even cause the game to crash.
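One way to picture multiple UV channels on a mesh (the layout and the lightmap convention below follow common game-engine practice, but the structure and data are purely illustrative):

```python
# Sketch of a mesh carrying multiple UV channels. In many game
# engines, channel 0 holds the regular texture mapping and a second
# channel holds a lightmap layout; the data here is hypothetical.

mesh = {
    "vertices": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    "uv_channels": [
        # channel 0: texture UVs (may reuse or overlap UV space)
        [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
        # channel 1: lightmap UVs (must be unique, non-overlapping)
        [(0.1, 0.1), (0.45, 0.1), (0.45, 0.45), (0.1, 0.45)],
    ],
}

def get_uv(mesh, channel, vertex_index):
    """Look up one vertex's UV pair in the requested channel."""
    return mesh["uv_channels"][channel][vertex_index]

print(get_uv(mesh, 0, 2))  # (1.0, 1.0)
print(get_uv(mesh, 1, 2))  # (0.45, 0.45)
```

Each extra channel means another full set of 2D coordinates per vertex, which is why piling on channels costs memory and bandwidth.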

    Texture with Varied Effects

    In most cases, a raw mesh will be a 3D monochrome model with no effects, animation, or textures in it. Unfortunately, you can’t simply add these effects into your model. Things aren’t that simple. To produce quality objects, you need to use specialized textures to enhance the realism of your work.

To add an attractive texture to the model, you can use several kinds of texture maps. Some of the more common ones are:

    Diffuse Map

A diffuse map gives your model its base color, with simple shading baked into the image that reacts to the light. The effect it adds is subtle, almost invisible, but it makes your work look much more realistic.

    Albedo Map

An albedo map serves the same function as a diffuse map and gives your model the same sort of base color. However, it contains no baked-in lighting or shadow information: the colors are completely flat.

    Specular Map

The specular map comes in two variations: the grayscale level map and the color map. Both control how much light each part of the surface reflects, letting you give your model glossy, classy-looking highlights alongside its diffuse lighting.

    Bump Maps

These come in handy because they simulate relief. This texture shortcut lets you show relief on a surface without adding polygons to the model, helping a curved surface look curved instead of flat. Each pixel stores a grayscale value, on a scale of 0 to 100%, that determines the height of that point on the surface.
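A bump map is really just a grid of height values. This sketch (with a hypothetical 3×3 "image") shows the idea, including the finite-difference slope a shader would use to fake relief when lighting a pixel:

```python
# Sketch: a bump (height) map is a grayscale image whose pixel
# values encode surface height: 0.0 = lowest, 1.0 = highest.
# The 3x3 "image" below is hypothetical.

bump_map = [
    [0.0, 0.2, 0.0],
    [0.2, 1.0, 0.2],   # a small bump peaking in the centre
    [0.0, 0.2, 0.0],
]

def height_at(bump, x, y):
    """Read the stored height at one pixel."""
    return bump[y][x]

def slope_x(bump, x, y):
    """Finite difference across x: the slope the shader lights by.
    No extra polygons are involved -- only this per-pixel value."""
    return height_at(bump, x + 1, y) - height_at(bump, x - 1, y)

print(height_at(bump_map, 1, 1))  # 1.0 (top of the bump)
print(slope_x(bump_map, 1, 1))    # 0.0 (symmetric around the peak)
```

The geometry never changes; only the shading responds to these stored heights, which is exactly why bump maps are so cheap.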

    Normal Maps

Normal maps are used to control surface shading in real time. The map is stored as an RGB image: the red, green, and blue channels encode the X, Y, and Z components of the surface normal. By combining the three channels, the surface can appear to bend in three dimensions rather than just two.

    Conclusion

The journey to learning UV mapping is long and winding. But as long as you stay resolute and keep a clear view of your goal, you will make it in no time.

After reading through this article, hopefully you have a better understanding of UV mapping. Practice more, and you will surely get past the hurdles that held you back before.

  • What is a Storyboard Artist?

A storyboard artist takes a script (or just a concept) and turns the words into a visual story. The role plays a significant part in many vital industries and media. Even as a career choice, it comes off as very exotic and exciting, and it is also quite similar to being a comic artist. Storyboard artists are often called story artists or visualizers.

They are an essential figure in animation, filmmaking, and many other fields. A storyboard artist’s work translates a director’s idea, often only verbal, into a visual interpretation.

    It’s a critical position because how the storyboard comes out will control what the film will become in the end. Other members related to the production of the media will take references from the storyboard and make progressions in their project.

Storyboard artists will often be compared to comic artists due to the similarities in their work. Both tasks involve art and a good amount of creativity. However, what sets storyboard artists apart from comic artists is that storyboard artists do not work from their own ideas but from the director’s.

    How Do Storyboard Artists Work?

Storyboard artists often work from home and send in their final product via email or other messaging applications. They might work as a permanent employee under a company or as a freelancer.

Often storyboard artists are supplied with a script from the scriptwriter. If they aren’t, the director presents his ideas both verbally and in writing. The artist draws up all the important panels and fills them with the scenarios, like a comic strip, that will take place throughout the production. Often the artist will include gags or ideas of their own to present their vision.

    In the past, artists drew their storylines using pencil and markers. Nowadays, they do all the necessary illustration work on a tablet and computer.

Depending on the production quality, the storyboard is later cleaned up, proofread, and rechecked. This ensures that the drawings are well detailed and easy to follow. The artist may also work with photographers and writers for follow-up assistance.

The artist may also fill in background details and use built-in or custom clip art to further elaborate scenes that the producers might find hard to visualize.

    The duty of a storyboard artist generally includes:

    • Creating images using paper or computer programs
    • Researching projects
    • Developing a story by working closely with other creative staff and animators
    • Editing, adding, and eliminating scenes as the final product develops
    • Making changes to the storyboards based on feedback
    • Meeting and discussing projects with directors, clients, etc.

    Who Do Storyboard Artists Work with?

    Being a storyboard artist is a beautiful job with lots of opportunities and a convenient location since most artists work remotely. It is a productive career for anyone who seeks to work as an artist, director, or writer. Storyboard artists have different goals in different industries.

    Advertising

    When it comes to advertising, usually freelance storyboard artists are hired for a single project. The agency sends them the data or short visual interpretation, which the artist must compile and transform into a fully-fledged storyboard. Agency storyboards are usually kept to a minimum number of shots covering about one or two key-frames.

    Animation

In animation, the projects are provided as a series of screenplays or storylines. The artists working on these projects are mostly permanent hires and, in most cases, work in a team of storyboard artists. The team does the entire work of creating and polishing the storyboard before submitting it to the animation committee for final review.

    Live-action Film

    When producing films, the storyboard artist is hired at the beginning of the project and for that project only. The screenwriter or director provides the storyline, and the artist breaks down the script into shots that can be filmed.

    What Must a Storyboard Artist Be Good at?

The storyboard artist must be good at what he is asked to create. The necessary skills he must have to be good at his craft are:

    • Have excellent drawing skills and produce artwork with various styles.
    • Be creative and think of how he could improve the project
    • Know layout, composition, sequential drawing, and editing, as well as a strong understanding of framing.
    • Have a strong knowledge of narration and storytelling and put the director’s words into an exciting story.
    • Have a passion for the animation industry.
    • Take the initiative to ask questions to the employer/s if the situation arises.
    • Have good management of time.

    Training to Be a Storyboard Artist

Becoming a storyboard artist is in no way easy. There are no degrees or formal courses of study that teach people how to become storyboard artists. However, with enough talent and artistic skill, storyboard artists can find a successful career in this industry.

Even if they do not get enough opportunities as a storyboard artist, they can still find work as game design or graphic design artists.

    Nowadays, most people who go into film school pursue a career as a storyboard artist. Such schools help them develop various skills, including:

    • Artistic talent
    • Communication skills
    • Computer skills
    • Drawing
    • Filmmaking, including knowledge of lighting, editing, sound, and basic acting.
    • Public speaking skills

    Also, if anyone intends to find a job as a storyboard artist, building a portfolio is a must. A winning portfolio includes some of the works you might have completed. It is practically a sample of your work style, which the employers will assess you on.

When you do get the work, being easygoing and offering ideas is a plus. You’ll also need to stay respectful under pressure. Most of all, get the work done by the deadline, or you might lose the job.

    Conclusion

Being a storyboard artist is a prolific and exciting career. Most of all, it comes with various perks. Instead of sitting at a desk in an office, you often get to work from home. Relax, and make sure to follow the timeline to get your things done right.

    Even if you are under pressure, do not crumble. Try hard, and you might find the right place for yourself.

  • High vs. Low Poly Modeling – What’s the Difference?

High and low poly modeling are closely related and can produce the same kinds of images. However, the simple difference in the number of polygons used in a model separates them at the core.

    Manufacturers and marketing companies use 3D models and mapping to represent their products before going on to the next stage. To produce these 3D models, the developers use a high level of blending, CGI resources, and VR/AR interactions.

These processes all depend on the number of polygons you will be using in your project. While some scenes use a high number of polygons, others use far fewer.

    High and low poly modeling, as the name suggests, are processes used during blending to process high and low polygon models in a scenario. Both approaches are essential in CGI blending. This article deals with both pros and cons as well the differences between the two.

    Low Poly Modeling

As the term suggests, this technique creates models with simpler, lower-count polygon meshes. It is common in real-time applications and heavily used in games built on 3D computer graphics.

    Applications of Low-poly Modeling

    Low poly modeling is used in projects that make use of 3D computer graphics. Such applications include:

    • Game Engine
    • Subdivision Modeling
    • Rigging and animations
    • Low-poly topologies

    Pros

    • A lot easier to load, render, and edit on a machine with lower specs
    • Lightweight and require less storage space
    • They make use of less complicated polygon meshes

    Cons

    • The finished product cannot have a high level of detail
    • Lower maneuverability since they make use of less complicated geometry
    • The surface quality will not be as good, since far more geometry and GPU power would be needed for more detailed visual rendering

    High Poly Modeling

High poly modeling is the same as low poly modeling at its base. However, high poly modeling uses a far higher number of polygons, with more vertices.

    The end product will be far greater than when it is done with low poly modeling, as the models will be smoother and will appear more photorealistic.

    Applications of High Poly Modeling

The high poly modeling method is excellent for creating detailed models. While using this method for simple models is inefficient, more complicated models can be created with ease. Some of the scenarios where high poly modeling is applied are:

    • Digital sculpting or arts
    • Simulation programs
    • 3D architecture models
    • High VFX movie creations
    • Mapping

    Pros

    • Far more detailed models that hold up under close zoom-ins
    • Increased rendering capabilities
    • Models have a high percentage of realism and higher resolution
    • It can be used to blend process scenarios that include motion

    Cons

    • Require ample memory space to store and a device with a strong GPU
    • It is complicated and hard to use
    • Due to a high number of polygons being processed, crashes might occur, or the program might lag in speed

    Differences between High and Low Poly Modeling

Low poly modeling should be used to create models that require more real-time interaction, while models that need finer detail and greater controllability should be made with high poly modeling.

    Every 3D model is made from various 2D polygons. The polygons with more vertices are processed with high poly modeling, while ones with lower vertices count are created with low poly modeling.
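To get a feel for how fast vertex and polygon counts separate the two approaches, consider subdivision: one common smoothing scheme (Loop subdivision, for example) splits every triangle into four, so each level of smoothing quadruples the face count. The base count below is a hypothetical figure for illustration.

```python
# Sketch: why poly counts explode. Under a scheme that splits each
# triangle into four (as Loop subdivision does), every level of
# smoothing multiplies the face count by 4.

def faces_after_subdivision(base_faces, levels):
    """Face count after the given number of 4-way subdivision levels."""
    return base_faces * 4 ** levels

base = 500   # hypothetical face count for a low-poly prop
for level in range(4):
    print(level, faces_after_subdivision(base, level))
# three levels already turn 500 faces into 32,000
```

This is the gap the two workflows live on either side of: a game asset stays near the base count, while a sculpt or film asset may sit several subdivision levels up.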

    In a 3D model, both types of modeling might be used in a scenario, depending on the places that require them. However, there are differences between them. Using low poly modeling to create more realistic details (like a crease on a shirt) would be time-consuming and irrational. There are many factors that these two methods differ in.

    Complexity of Geometry

CG artists need a lot of time to build and check the geometry of a model. The high poly method is used to model fine details such as chipping, visible seams, and holes in the surface.

Low poly models do not offer much control over geometry. However, that simplicity is exactly what makes them attractive for creating the geometry of logos or app designs.

    Texture Quality

With high poly models, it is easy to import textures from external sources and even improve them further or make slight changes, such as to color. The textures can also be created from scratch.

Textures for low poly models cannot be too complicated. Due to such limitations, CG artists build their textures from previous data and keep the pixel counts to a minimum.

    Processing Time

    High poly models take longer to process due to their nature of dealing with data with more complexity. CG artists often use render farms to accelerate the process. Of course, whether the process is accelerated or not, the end product will still be highly photorealistic.

Low poly models, by contrast, are optimized and created very quickly. So creators use them in VR/AR development, where the engines need to process the data instantaneously.

    Static or Motion

    High poly modeling renders produce motion with great details. So these models are featured in animation and zoom effects. The models can also be controlled as wished without any restrictions, as long as they are programmed.

Low poly models, meanwhile, render efficiently. They have excellent interactive features, making them a great fit for games and VR/AR systems. However, they cannot offer much maneuverability or any real zoom effect.

    Conclusion

The type of modeling depends entirely on you and what you are going to create. With high poly modeling, you get greater freedom to develop models without restriction, though only veterans can cope with its complexity.

Low poly models are pretty popular among both professionals and amateurs, so do not look down on them. If you intend to render a simple visual, low poly modeling might be a good choice for you. In the end, the method you choose should match your preferences, and that is what I most ask you to consider.

  • What is a Visual Development Artist?

The term “visual development” can be used in many different settings. As a result, most people may be perplexed about what visual development stands for. Put plainly, visual development is the foundation for all forms of visual art, including graphic design, concept art, animation, illustration, and video gaming.

A visual development artist’s work allows them to exercise a degree of professional creative autonomy that is uncommon. That is because their work is adaptable to a wide range of media types.

An animated film’s mood, tone, and color palette are created by visual development artists. Their work may encompass everything from scenery to characters to costumes to props. The work is challenging, but it also gives you a lot of leeway in creating an imaginative universe from scratch.

    Who is a Visual Development Artist? 

    A visual development artist is a multimedia artist in charge of building a realistic and cinematic environment for motion pictures, television, and video games, among other kinds of creative entertainment.

    They set the setting’s tone, atmosphere, and color palette, ensuring that these components are consistent with the project’s plot and genre. They bring the director’s ideas to life and elicit an emotional response to the show’s premise and central message.

Their roles include animator, art director, lighting and color key artist, costume designer, visual development lead, environment design artist, and more.

    What Does a Visual Development Artist Do?

    Depending on their specific function, visual development artists are responsible for several tasks. For example, when working with animation, they give ideas for how the animated world should appear and feel depending on the plot, action, and characters.

Even small details can help establish the tone for the story, the characters, and the overall storytelling. Many films rely on shifts in backdrop color or lighting to propel the plot forward: when the action heats up, chilly blues or greens, for example, change to a wild orange or red.

    Certain time-tested retellings of great stories would be very different without their costume, lighting, and set design decisions. For example, visual development artists were widely used to arrange and color each scene in Disney’s modern-day rendition of Beauty and the Beast and Aladdin. The films were a box office success thanks to their efforts.

    What are the Skills of a Visual Development Artist?

    There are specific skills you need to have to become a visual development artist. However, most visual development artists employ illustration, animation, painting, sketching, and graphic design talents to bring creative ideas to life. In addition, video game and programming experience is also advantageous in this field.

    Remember, you will also have to do a lot of 2D or 3D work, and you will have to master applications like Photoshop and Maya to create drafts and fine-tune designs.

    How to Become a Visual Development Artist 

Visual development is not an easy skill to master. You will need to be persistent and patient. To master the craft, you will also need proper guidance and recognition.

    Get a Degree

It is critical for a visual development artist to have a thorough grasp of graphic design, animation, and illustration. A proper foundation in these concepts will help you do your job more efficiently. Study graphic design, fine art, communication, or animation to earn a bachelor’s or master’s degree in the visual arts. These degrees can give you a solid foundation in this industry.

    Discover Different Majors

As we have already seen, there are many different sectors in the field of visual development. You have to decide what kind of visual development artist you want to be. Consider which of your design abilities you like the best.

    Allow your studies, research, and experiences to guide you in determining which area of visual development most interests you while you seek a degree. As an example, you could choose to specialize in environment design or creative direction.

    Get an Internship Experience

    Next, you need to consider interning for an excellent agency to obtain significant hands-on experience and to further your visual development artist abilities. Internships in animation studios or independent production companies, for example, may be available.

    As you enter the profession of visual development, you’ll be able to add this experience to your resume. An internship might also assist you in networking with other creative professionals in your sector.

    Build a Solid Portfolio

It is essential to have a portfolio when you enter the market. So you need to make a portfolio that appropriately demonstrates your technical expertise in this field. Remember, to present yourself as a qualified applicant for the job you are applying to, include your most spectacular work, including professional projects and internship work.

Also, include the portions that are most relevant to the position you are seeking. If you are applying to an animated TV program, showcase your experience on comparable work.

    Build Your Network

    Finally, you need to look for opportunities to network with professionals in your field. It’s necessary for career advancement in any area. Attending relevant events or panels is a good idea in the field of visual development.

You may also network online by connecting with other professionals who share your interests in the field. It is important to remember that some of the contacts you make may lead to future work opportunities.

    Final Thoughts

A career in the field of visual development can be highly competitive. To be a successful visual development artist, you need to have exceptional experience and an impressive portfolio. While you are receiving your academic guidance to acquire the skills, you must also gather first-hand experience. These practical experiences will strengthen your knowledge and creativity.

  • Monkey Man: Rick Baker Fleshes Out ‘Planet of the Apes’

    Tim Burton’s adaptation of Pierre Boulle’s novel “Monkey Planet” may have been inspired by the 1968 classic “Planet of the Apes,” but Burton’s film is a whole new barrel of monkeys — courtesy of Oscar-winning makeup artist and ape specialist Rick Baker (“King Kong,” “Greystoke,” “Gorillas in the Mist”).

Baker first became involved with the project in 1995, when it was to be directed by Oliver Stone. At that time, he debated whether to build mechanized heads, which would have looked absolutely real, or to follow the lead of the original film’s Oscar-winning makeup artist, John Chambers, who had designed prosthetic makeup for each character. “I wanted the look to be realistic,” Baker says, “but that’s not ‘Planet of the Apes.’ Part of the charm of the original movies was that they had such actor-driven performances. Maintaining that [ethic] meant that we had to take a makeup approach.”

    Baker was determined to address certain limitations in Chambers’ original designs, particularly the fact that the teeth were glued into the prosthetic mouths, making it impossible for the apes’ lips to move independently over their choppers. Baker’s solution was to create as large a set of false teeth as possible, distorting the actor’s mouth into a rudimentary muzzle that projected out to be nearly even with the tip of his nose. Baker then applied a very thin prosthetic ape face over the actor’s altered features. “The idea was to try to get as much of a muzzle out of their faces as I could before we applied any makeup,” Baker recalls. “When I tested that on myself six years ago, I made the biggest pair of teeth I could possibly plant in my mouth so I would have as little rubber on me as possible. I have a good-sized nose, so I needed to make a huge set of teeth and really push my lips out just to get it to the tip of my nose. The problem was that my lips kind of flapped down to the floor when I took the teeth out! After I did the first test on myself, I thought I should do a test on someone with a better face — a smaller nose and a longer upper lip. That test was much more successful!”

    By the time Baker was hired to work on Burton’s project six years later, he knew exactly what facial characteristics would best suit the prosthetic makeup. “I told Tim in the very beginning that casting would be really important, because the physiognomy of the actor’s face would greatly affect how well the makeup worked and how convincing it looked,” he explains. “But then they cast Tim Roth as Thade, a chimp who’s the villain of the piece, and he has a bigger nose than I do! I said, ‘This is about as bad a face as we can possibly get for an ape, but he’s a good actor, so we’ll make it work.’ Tim probably had bigger dentures and more foam on his face than anybody else, but I think he turned out to be one of the more interesting apes in the movie. The fact that he has a bigger nose makes his character look different right away.”

    Once the actors were selected, Baker’s crew made dental castings of their mouths and created positive casts over which they sculpted upper and lower ape dentures, which they then molded and cast. When the dentures were fitted over the actor’s own teeth, they pushed their lips out into the muzzle-like orientation Baker desired. Next, a lifecast was made of each actor wearing the dentures, over which Baker and his crew sculpted the actual ape makeups. Baker then made prosthetic pieces by injecting foam latex into molds — essentially the same material and technique Chambers pioneered for the original film. “My approach was very similar to his work in many respects,” Baker acknowledges. “There’s a major face piece for most of the apes that includes the brow and the upper muzzle, and there’s a lower chin. Because the teeth are independent from the muzzle, the noses don’t protrude as much as the original film’s makeups did, but I think the performance is really what’s important. The idea was ‘less is more.’”

  • Blur Studio – What a Thrill

The ride film genre is nothing new to Venice, Calif.-based Blur Studio. The 3D animation, visual effects, and motion graphics design company already has several 3D computer-animated motion simulator rides to its credit, including Meteor Attack for the Tobu Zoo Park in Tokyo and the traveling attraction Star Trek World Tour. Blur’s most recently completed project, Stan Lee’s 7th Portal 3D Simulation Experience for Paramount Theme Parks worldwide, takes ride films further than they’ve ever gone before — literally.

    Based on Stan Lee Media’s 7th Portal Internet superhero series, the 7th Portal ride not only draws audiences into a whole new dimension, but is one of the most technically advanced ride films ever produced. Co-directed by Blur Studio’s Aaron Powell and Yas Takata, and creative directed by Tim Miller, the 4-minute piece, which made its debut this spring, builds on the characters and story featured in the Internet series. Audience members are cast in the role of beta testers of a new video game when one of the game’s villains bursts through the screen and drags them into a parallel dimension known as Darkmoor. There, they join the six 7th Portal superheroes in battling Mongorr, a tyrant who has subjugated six of the Universe’s seven dimensions and is bent on conquering Earth. The adventure includes a battle with a giant, green, rock-like beast, an edge-of-the-seat rocket ride through the heart of Darkmoor, and a final, life-or-virtual-death showdown with the towering Mongorr himself. 3D effects are used throughout the piece to give the audience a sense of interacting directly with the larger-than-life characters on the screen.

“What makes this ride film really unique,” explains Powell, “is that there’s a lot of character interaction. With about 90 percent of the ride films that are out there, the audience rides through space or flashes through time. There are often some explosions, so they get bounced around a bit. But there really aren’t many characters cast into the ride. Here, we have six good guys and six villains who all have their own super powers and about 4 minutes to show what everyone could do. In that time, there’s just so much interaction between the characters.”

    Powell points out that the look of the piece itself also stands out from other computer-animated ride films. “Artistically speaking, we approached this project very differently,” he explains. “Ride films tend to either have a dark, Bladerunner-esque look, or a look that aims for photoreal. Our film has a rich, illustrative look with saturated colors and sharp edges. It’s a colorful, vibrant atmosphere we created totally unique from everything else out there. And, what is particularly striking is the level of detail — when you’re in the arena you feel as though you could reach out and high-five someone sitting in the stands.”

    To create the dramatic, comic-book-come-to-life look and feel of the ride, it took Blur seven months and more than a dozen studio artists, producers, and technicians working with leading edge technology and software. The team used Discreet’s 3ds max 3D modeling, animation, and rendering software; Adobe Photoshop; Digimation’s Bones Pro for scanning and deformation; Boxx Technology’s rack-mounted render boxes, and Vicon’s 12-camera M series motion capture system for facial motion capture.

“[Paramount] came to us and said, ‘We want you to direct this, write it, and deliver everything,’” explains Powell. “First, we listened to them, to what all the elements were that they wanted to see, and we put all of it in. Then, we began cutting it down, trimming it back, and creating quick, thumbnail storyboards for the entire piece. We then scanned each storyboard and took them into Premiere to create a storyboard animatic [a previz rendering of the whole project to nail down timing and motion]. Basically, you hold each frame for a certain duration and then you cut to the next frame or dissolve to the next frame so you can get your timing. You throw down some rough sound effects and basically present a 4-minute ride film like a picture show. That gives you a feeling of how the progression of the whole ride is going to go. We used character studio for all our character animation, so all of our characters would be bipeds. Then, that’s where we did something different than we’ve ever done before. Since everything is so keyed on timing and motion, and I wanted to make sure that the characters could move from here to there in real-time, I actually went through and motion-captured the entire animatic. I just wanted to make sure we could achieve what we wanted to in the given amount of time. In fact, at the very end of the project, we actually ended up using some of the original animatic motion capture because we got the timing down so perfectly.

    “One of the hardest challenges for me on this project was, I wanted to come up with a look that was totally different. I wanted to go for more of a graphic novel look. Not necessarily cartoon looking, but not quite 3D either. Kind of a hybrid in between with really nice sharp edges, like illustration, but something that was also obviously 3D. I wanted the details to be in the geometry and not in the texture maps.”

    According to Powell, the team’s experience with Vicon’s motion capture system was essential to the success of the final product. The system was not only used to speed up the production process but also to make the characters appear uncannily lifelike. “It really gave us tremendous creative flexibility in creating, re-creating, or modifying character motion.” Powell says that it was for this project that the company decided to purchase its own motion capture system. “We had some money left over from another project for motion capture. And, I thought, let’s just get our own rig. For me, it was a blessing. This is the second motion capture studio I built. I worked for eight years at Westwood Studios in Las Vegas, and pioneered their motion capture studio. Eventually, we went with a Vicon system.

    “Then, once I came here and we decided to buy one, I thought, there’s no question that it’s going to be Vicon — especially with the new M cameras, which are 1000 x 1000 digital. You get four times the resolution that you would with the old cameras.” In order to thoroughly research the project prior to working on it, the entire team jumped onto a red eye and flew to Orlando, FL. There, they visited three theme parks in one day, and two more the second day. “We wanted to see any attraction that was featuring 3D animation,” says Powell. “We saw ‘Honey I Shrunk the Audience,’ ‘Bugs Life,’ ‘Spiderman,’ and ‘Terminator 2.’ What we noticed was, when the audience is close to standing still, the 3D effect really works well because they have time to resolve the 3D. When they’re going really fast, it all kind of blurs, and the 3D effects start to fall apart. That observation proved to be very instrumental in how we planned our 3D gags. We slowed the camera down to a point where the audience could actually see the 3D gag.”

    Besides becoming one of the latest additions to the Paramount Theme Parks, “7th Portal” will also be distributed internationally by Iwerks Entertainment. Therefore, the design team at Blur had to take into account the differences between the projection systems used by Paramount and Iwerks. “The film had to work in an 8/70mm version in both 2D and 3D, and in a 5/70mm version in 2D and 3D,” explains Powell. “In other words, we had to deliver four versions of the film and the effects needed to work on all platforms.” In addition, Blur needed to double its in-house rendering capacity to handle the number crunching challenge posed by 7th Portal.

    Because it was a 3D film, Blur needed to produce not one, but two, 4-minute films (for the left and right projectors); more than 14,000 frames of animation. As each scene involved multiple layers of computer animation, the completed piece comprised more than a terabyte (a thousand gigabytes) of data. It required a staggering 18 days to record the final imagery to film. Blur ended up buying a number of rack-mounted render boxes from BOXX Technologies.
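    The figures above can be sanity-checked with some quick arithmetic. The 30 fps frame rate and the even per-frame split below are assumptions; the article gives only the totals:

    ```python
    # Back-of-the-envelope check of the stereo ride film's numbers.
    # 30 fps is an assumption (the article states only the totals:
    # "more than 14,000 frames" and "more than a terabyte").

    fps = 30
    minutes = 4
    eyes = 2                      # left and right projector films

    frames = fps * 60 * minutes * eyes
    print(frames)                 # 14,400 frames, i.e. "more than 14,000"

    total_bytes = 1 * 1000**4     # one terabyte = a thousand gigabytes
    per_frame_mb = total_bytes / frames / 1e6
    print(round(per_frame_mb, 1)) # roughly 69 MB of layered imagery per frame
    ```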

    Blur also supervised the production of the music and sound effects, and a pre-show video that will be used with the attraction. Overall, it’s an ambitious project that features fully rendered, fully fleshed out characters that seem to interact with each other, as well as the audience.

    Currently, Blur Studio is working on another 3D computer-animated ride film. This time around, the studio is producing “Batman” for Premier Parks, based on the DC Comics and Warner Bros. movie series. “We completely built Gotham City more than it has been built in any of the films,” says Powell. “There are more than 200 buildings. The audience starts in the bat cave, flies out over the river, and flies through Gotham. Again, we’re using motion capture. And, this time, we’re shooting for near photoreal.” The attraction will also feature popular Batman villains including Catwoman, The Joker, and Mr. Freeze. Powell is the visual effects director on the project. The ride, which is entirely computer animated, will debut in 2002 at Warner Bros. Movie World in Australia and at the new Warner Bros. Movie World set to open next year in Madrid.

  • “Enterprise” Reaches New Heights

    It’s never easy to follow up on a legend, especially when you’ve had a hand in creating it. But that’s exactly the task Paramount recently gave to Dan Curry when he signed on as a visual effects producer on the new weekly TV series, “Enterprise.”

    A fan of cult classics like “Things To Come,” “Forbidden Planet” and “War of the Worlds” — and a 1979 MFA graduate of Humboldt State University — Curry cut his visual effects teeth on Universal’s TV shows “Buck Rogers” and “Battlestar Galactica.” After working on 118 feature films, he was eventually hired as visual effects supervisor on “Star Trek: The Next Generation.”

    The success of Curry’s work there — as the artist who helped influence the recasting of Klingons from “space Nazis” into the Samurai-style race portrayed by Worf, and who invented the crescent-shaped “bat’leth” in the process — has kept him in the Paramount fold. As a result, Dan Curry has produced visual effects for “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and many of the “Star Trek” movies.

    Yet, despite having spent decades working on the Enterprise — literally! — Curry says he feels like he’s working on his “first professional job out of school. In fact, everybody’s jazzed about working on ‘Enterprise.’”

    This isn’t just hype. A look at any of the Enterprise episodes proves his point. In fact, the visual effects team of Curry, Ronald B. Moore, and Mitch Suskin has come up with a new look for the space warhorse that is both fresh and retro. This is critical, given that Enterprise occupies an uneasy spot in the continuity: set after our own time, but before the “Star Trek” timeline of the original series that fans know so well.

    Walking this tightrope is a challenge. On the one hand, “we owe the audience visual effects of a quality and calibre that they’re used to,” explains Curry. On the other hand, Paramount’s team must keep the “dating” of these effects right, so that they don’t seem out of place in the well-established Trek universe.

    Designing The NX-01 Enterprise: Past Meets Future
    Without a doubt, it’s the Enterprise itself that posed the biggest challenge for the art department headed by Herman Zimmerman and Doug Drexler, and Curry & Co. in visual effects.

    The solution? “The Enterprise was aesthetically evolved backwards from the ‘later ships,’ just like the Chrysler PT Cruiser,” says Curry. “The PT Cruiser is basically a 1930s car, but it’s got all the modern technology of today, and carries a sense of modern design with it.”

    Inside, the Enterprise is based on Herman Zimmerman’s visit to a nuclear submarine. That’s why “it feels like a place where people actually work and live, with handles to grab onto when the going gets rough,” he notes. The ‘first’ Enterprise is also more functional than its successors. “We wanted something that fit what a pilot would want, because he’s the one who actually flies the ship.”

    Outside, the Enterprise is “brushed metal with a hint of copper to it,” says Curry. Because the design is packaged in a thin, saucer-shaped craft reminiscent of the future Enterprises, viewers can easily deduce the chain of evolution here. Even so, the latest ship stands out as a unique design statement.

    So Where’s The Model?
    There’s something else that’s very different about this Enterprise ship — quite simply, it doesn’t actually exist. Admittedly, Dan Curry does have a foam core mockup made by designer Doug Drexler which Curry flies around the office — Dan claims he only does this to map out the ship’s motion for storyboarding.

    “We’re doing the show all CGI this time,” Curry explains. In other words, there are no detailed physical models of the NX-01, fated to hang in Planet Hollywood decades from now. (The Enterprise 1701-A can be found at the Las Vegas location, if you’re curious.)

    Instead, the ship itself was digitally designed and mapped by Paramount’s art department. From there, the design was turned over to Curry, Moore, and Suskin. With the help of Foundation Imaging’s Rob Bonchune and Pierre Drolet, plus CGI software like NewTek’s LightWave 3D and Alias|Wavefront Maya (run on Alpha workstations), they turned the Enterprise NX-01 into a living, moving reality. One that has yet to exist in true time and space.

    Why is Enterprise exclusively relying on CGI when previous Star Trek productions have stuck with models? “We decided that we wanted to raise the reality quotient of the show, and have the freedom of design and motion afforded by CGI,” answers Curry.

    However, there are two other reasons why Enterprise has wholeheartedly embraced CGI. One is the need to realize scripts on screen within budget, and on time. That was relatively easy in the days of Captain Picard, but as “Deep Space Nine” progressed, “the writers kept coming up with bigger and bigger shows,” Curry says. “There were programs where we had to show fleets of hundreds and hundreds of ships. This was physically impossible to photograph; besides, the improvements in digital animation meant that the quality concerns that had kept us with models were no longer a problem.”

    The second reason is flexibility. “With models, there’s only so far you can rotate them before you see the mount,” explains Curry. “There’s also limits in how far you can move east/west in the shot, and how far they can travel across the track. With CGI, all these limits are gone: we have the complete freedom we need to do true filmmaking.”

    It’s this freedom that Curry likes best about CGI. “It’s a lot like composing music,” he says. “It lets you create a feeling; a flow, a rhythm, and majesty to the visual sequences.”

    Launching A Love Affair With Fans
    To prove his point, Curry talks about the most-anticipated part of the Enterprise pilot: the launch of the new (or old) Enterprise from orbital drydock.

    In a word, it was magnificent — visually slow and graceful, like a great ocean liner leaving a pier.

    According to Curry, it was supposed to be. “This is the first time you get to see the Enterprise in total,” he explains. “My desire was that, by the time the ship comes fully into view, the audience has fallen in love with it.”

    What really enhanced the launch sequence — and made it believable — were the details: the hoses pulling away from the Enterprise, Space Shuttle-style, as it moved away from the station; the two spacemen working on the orbital platform, looking down as the massive Enterprise passed beneath them.

    The hoses were meant to link Enterprise to the NASA launches of our own time, which viewers feel a sense of connection to. The spacemen outside were meant to offer a human scale of comparison, much as they did in the opening credits of Deep Space Nine. The end result, as anyone who has seen these and other Enterprise sequences will attest, is a profoundly believable piece of visual effects.

    What’s Next For The NX-01
    So far, “Enterprise” has truly gone where no TV franchise has gone before. That is, Paramount’s been able to dust off a concept that’s grown a bit long in the tooth, and give it a whole new lease on life. Without a doubt, much of the credit has to go to the artistry of the visual effects generated by Curry, Moore, and Suskin.

    It also has to be attributed to their decision to go fully CGI. In today’s entertainment industry, where the public demands better effects and more of them, CGI has become a must. “However, it should be noted that technology is no substitute for artistry,” cautions Curry. “The requirements of storytelling, and the artistic vision of the filmmakers, still reigns supreme.”

    So what happens next? “We’ll have to wait and see what the writers come up with,” laughs Curry. “No doubt they’ll find something even grander to challenge us with, even with all our technology.”

  • Post Production Equipment: Yours or Theirs?

    Renting cameras, lighting, support gear, and other equipment is standard operating procedure in the motion-picture industry. With the rise of digital cinema, however, some equipment has become affordable enough for purchase, particularly in the postproduction arena. Increasingly, production companies and producers face the question of whether to rent or purchase production and/or postproduction equipment. Every situation requires careful analysis of the pros and cons of either scenario. There are numerous benefits to renting, and quite a few for purchasing, so how does one go about arriving at a final, educated decision?

    The important issues to consider are the specifics of the project or business you are taking part in. You have to know how long the project will last, and how long the equipment will be utilized. A solid budget or financial plan will also be a major deciding factor. Finally, you have to anticipate what you will be doing a year from now, and whether you will still need the equipment at that point.

    Your overall timeframe is an important element in analyzing your needs. If you anticipate that your upcoming project will be of short duration, then renting is the option for you. If, however, you expect a continuous flow of new business and projects, you might want to consider purchasing.

    As we all know, technology develops faster than we can keep up with, and before too long, you realize that all that new software and hardware you have is outdated, replaced by newer, faster, and more efficient versions. We all like to wrinkle our noses at the manufacturers, but in reality we should be thanking them for constantly improving on their products. Let’s face it: Whatever they can do to make our lives easier is more than welcome. The plain fact is that technology — especially digital technology — is advancing at an ever-quickening pace. That’s good news when it comes to improving creative tools, but it can be tough when it comes to amortizing upgrades.

    Obviously, the cost of upgrading equipment every time there’s a new software development is prohibitive. The compatibility issues inherent in ever-advancing computer gear can also prove maddening — and expensive. Prudent producers and fiscally responsible companies inevitably cry “Foul!” And once they’re done venting their disgust at manufacturers’ reps, they inevitably pick up the phone and call the local rental house.

    Renting’s major advantage is that it enables a producer or production company to stay constantly up to date on the latest technological developments. And as updates occur, the rental equipment is reconfigured to the new technology, ensuring that users always have state-of-the-art equipment at their fingertips. Essentially, you are paying a premium for the most advanced, top-of-the-line equipment.

    “We have very experienced technicians and long-standing relationships with the manufacturers,” comments Bill Weisman, manager of the rental department at Moviola, in Hollywood. “This ensures that the equipment or system being rented will be reliable and hassle-free. And if for some reason a client runs into a snag, we will be right on top of getting them a replacement, or repairing the problem in the most efficient manner possible.”

    Renting also places a consistent, predictable demand on your cash flow. You can budget rental fees far in advance, as there is only slight fluctuation in market costs from year to year, and an accurate estimate is relatively simple. Perhaps the most attractive feature of renting is that it eliminates the need to come up with the funds for the major capital investments that purchasing requires.

    When you own your own equipment, you are solely responsible for maintenance and repairs. These costs can be excessive, especially when you are dealing with high-tech digital items. As a renter, you typically have access to service and tech support 24 hours a day, seven days a week. Most of the time, the service contract is included in the rental agreement. In some cases, contracts can be purchased for an additional sum, much like securing an insurance policy or extended warranty.

    Post-production workflow is often the most overlooked element of creative production. Most budgets set aside a certain portion labeled as the “contingency.” This is to cover the costs of unforeseen circumstances beyond the control of the management that would be detrimental to the timeframe and the project itself. One way to assume more control over the workflow is to ensure the reliability and performance of the equipment you use. When you purchase a piece of equipment, you have to keep it up and running at top performance for the duration of your project. One can never predict when a piece of software will lock up or a piece of hardware will malfunction. When presented with a work stoppage as a result of equipment failure, most would prefer to make a phone call and have the equipment replaced without missing a beat. Those pressed by deadlines and pressure from investors and executives are the best candidates for renting. A reputable rental house will generally repair or replace the faulty item immediately. After all, customer satisfaction is their business. In the rental arena, good relationships, customer service, and client satisfaction lead to consistent and repeat business.

    If service and support fit into your business plan or budget, then purchasing the equipment may be the right choice. If you are considering purchasing, but don’t have the revenue to warrant a support department, you might consider a third party. Most companies that sell digital equipment also offer the option of a service contract, which for a fee will ensure that your equipment performs smoothly for the duration. And don’t forget the manufacturers’ warranties: though they vary in length, they guarantee your equipment in the event of a malfunction or defect not caused by you.

    Although renting may look like the best option in almost any circumstance, there’s still a lot to be said for purchasing. There are many types of companies that benefit greatly from buying and maintaining their own equipment. Most advertising agencies have in-house production, editorial, and graphics departments handling the various aspects of a particular project. To maintain a smooth workflow, they configure all of their hardware and software to work in sync across many computers. By purchasing, they have total control over the equipment’s configuration and operation. The same goes for media departments in major corporations, where the output is critical to the company’s overall communication and operations. These large companies can afford the service costs that come with owning. Smaller companies that cannot afford to do all of their media work in-house will farm business out to post houses, graphics, and media-design firms. It’s these post houses and design firms that benefit the most from owning their equipment. Their revenue is based upon billable periods of time. When they purchase their equipment, they calculate the amortization, and charge accordingly. Once the equipment cost has been recouped through billable usage, the revenue is pure profit, less upgrades and maintenance.

    “We are equipped to both rent and sell equipment and systems,” explains Randy Paskal, Moviola’s Managing Director. “Either way, our goal is to help the client determine their needs, and see that they receive top-of-the-line service and support for whichever route they choose.”

    There are financial benefits to both renting and buying. When you rent a piece of equipment that relates to your business, you can simply write off the expense. A rental is a clean transaction. This is ideal for those producers and companies that do not want any assets at the end of the day. On the other hand, if you purchase, you can depreciate the equipment and write off that depreciation each year. With the latter, you can amortize over three to four years, and integrate upgrades into your amortization schedule. This will allow you to maintain your margin of profit.
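    The buy-side accounting described above can be sketched as a straight-line depreciation schedule. The purchase price and term below are hypothetical examples, not figures from the article:

    ```python
    # Straight-line depreciation sketch for purchased gear.
    # The $60,000 price and 4-year term are hypothetical examples.

    price = 60_000
    years = 4

    annual_writeoff = price / years
    schedule = [(year, annual_writeoff, price - annual_writeoff * year)
                for year in range(1, years + 1)]

    for year, writeoff, book_value in schedule:
        print(f"year {year}: write off ${writeoff:,.0f}, book value ${book_value:,.0f}")
    ```

    An upgrade bought mid-term would simply be added to the remaining book value and spread over the years left in the schedule.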

    With rentals, at the end of the day, you have completed your project, delivered your product, and are left with no loose ends to tie up or equipment to pay for. When you own, you have completed your project, and can turn around and wait for the next one with the equipment in hand.

    The demographic of renters versus buyers has changed in the creative industries. Major studios that have historically owned all of their equipment are now turning to renting to eliminate the service and upgrade costs from their bottom line. Smaller companies, which in the past could only afford to rent, are now purchasing and maintaining their own equipment. As the demand for digital content increases, so will the demand for companies to pump it out.

  • Glassworks Creates ‘Fingers’ for BBCi

    London-based 3D animation and digital effects company Glassworks created a series of IDs in a campaign called “Fingers” for the BBC via agency Duckworth Finn Grubb Waters and director Alex Winter of The Brave Film Company. The IDs make use of live-action heads composited onto live-action hands to promote the BBC’s interactive content on all media, including the Internet and interactive television, under its new name, BBCi. The :20, :40 and :60 IDs are now airing on the BBC.

    The 60-second ID begins with the image of a man and a woman sitting next to each other — the image is cropped to reveal only the couple’s heads and shoulders. As the camera pulls back, however, the couple is revealed to be a set of human hands with heads where the wrist would normally join. The bored-looking couple is sitting on a sofa on either side of a remote control. The woman’s finger hovers over the buttons of the TV remote control. The spot’s virtual camera goes on to show other hands in other environments, all looking bored and fed up. Some of the hand people are in an office, bouncing on their keyboards, while others are in a bar, leaning against their drinks. One of them kicks a peanut.

    The music picks up as the hand woman presses a red button on the remote control — bright lights shine on the hand people and they look up in amazement at the new interactive world they have discovered. More hands join them — some on the sofa, others using a computer mouse to look at the Internet. A hand on the bar jumps up and down in celebration of a football goal being scored. The ID finishes with the image of another couple walking slowly together, joining hands as they turn to look at a giant screen that fills the background.

    Glassworks carried out all of the special effects that made the spots possible, compositing the heads and hands together to create a convincing world for the hand people. The process began with tests using DV camcorders before the shoot took place. The shoot itself lasted a week and was followed by more testing to achieve the seamless integration demonstrated in the spot.

    “After the weeklong shoot, the project spent two weeks in our inferno* suite,” said Glassworks inferno* artist Crawford Reilly. “The footage was shot without motion control because motion control would have restricted the director too much — we performed the camera moves within inferno* on all the shots. The shots with a lot of movement were the most difficult, but I think we managed to achieve very convincing results.”

  • Making of “Walking with Beasts”

    After the success of “Walking with Dinosaurs,” the BBC decided to bring the magic back to the small screen with a sequel focusing on early mammals, titled “Walking with Beasts.” Technological advances in computer animation made it possible for this series, which aired in November in the United Kingdom, to be even more naturalistic than its predecessor.

    Beasts’ creators at the BBC turned again to Crawley Creatures for the animatronics and Framestore for the series’ animation. The project ended up being one of Framestore’s most extensive animation and visual effects projects to date. Mike Milne, director of animation, doubled the size of his team, which was charged with creating ranges of movement and texture even more extensive than those designed for ‘Dinosaurs.’

    Principal filming took place last year on several continents and in some instances beneath the sea. Jez Gibson Harris led the model team, which grew from seven to 18 during production. Additional support was also called in from several freelance specialists. In the end, over 40 models were created, ranging from full-sized mammoths to small shrew-like creatures. Making the models took a year and a half, with some of the creatures requiring twice the usual time due to the advent of more complicated animatronics. “There were more movements than before,” said Harris. “Lips, ears, and eye movements, which were all more sophisticated.”

    The under-skull and body-forms were created in the Crawley Creatures’ ‘mechy’ department, where the team made radio-controlled mechanisms to move eyebrows, whiskers, noses, and mouths. All of these movements were combined to create snarls, snorts, and blinks in completed models. The expressions were created through a mix of manual and radio control. “It’s very tiring,” said Harris. “Creatures like the mammoth had five or six puppeteers, and special backpacks were designed to take the strain off the operators’ backs and hands.” Larger engineering work went into producing Steady Arm rigs — which are similar to the Steadicam rigs used by camera operators — to support smaller heads during puppeteering. The rigs helped carry the weight of the heads and facilitate larger movements, while puppeteers controlled smaller movements in close-up shots.

    The team also designed a wheeled dolly featuring a counterbalanced arm with universal movement in order to make the bigger creatures move more easily. “It could be assembled in 10 minutes, and could be moved around on quad bike wheels,” said Harris. The team used the dolly to help with operating the heads of larger animals like the mammoth and woolly rhino. There were several underwater shots, all of which the team filmed in one day. The most extravagant required that a mammoth fall through a sheet of ice. For this, Crawley Creatures devised a system involving wires and animatronics whereby the mammoth looked as if it were struggling as it dropped.

    Once the filming with all of the animatronics was completed, the footage was passed over to Framestore for 18 solid months of work. ‘Beasts’ was such a big task that it would not have been practical for Framestore to start from scratch. The team used a pipeline similar to the one created for “Walking with Dinosaurs” for animation and rendering, employing 271 separate programs to make the production process run smoothly. Thirty artists working on Silicon Graphics and NT workstations spent a total of 11,490 processing hours creating all of the CG that went into the program. Framestore’s render farm, which consisted of 35 dual-processor NT render machines, ran for 24 hours a day for close to a year.

    The basic approach for each digital creature was the same. First, 2D computer drawings of the creature were created from a selection of angles — top, left side, right side, bottom, etc. — which were put together to create a 3D version. The artists then did three layers of texture mapping — first color, then bump mapping, then shine. This created realistic-looking animals with lifelike skin. After they placed the animals in a shot, the artists added shadow as a final layer.
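    The layering described above (color, then bump, then shine, with shadow added last) can be sketched as per-pixel operations. This is a simplified illustration with made-up values, not Framestore’s actual shading code:

    ```python
    # Sketch of the three-layer texture pass plus shadow, applied per pixel.
    # Values are hypothetical; real shaders work on full maps, not one pixel.

    def shade(color, bump_light, shine, shadow):
        """color: base RGB in 0-1; bump_light: lighting factor derived from
        the bump map; shine: specular highlight to add; shadow: 0 (dark)
        through 1 (fully lit)."""
        lit = [c * bump_light for c in color]           # bump-modulated color
        specular = [min(1.0, c + shine) for c in lit]   # shine layer on top
        return [c * shadow for c in specular]           # shadow as final layer

    pixel = shade(color=[0.6, 0.4, 0.3], bump_light=0.8, shine=0.1, shadow=0.9)
    print([round(c, 3) for c in pixel])
    ```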

    Perhaps the most time-consuming aspect of the effort was creating the basic skeletons of the many creatures in ‘Beasts’ — several animals had never before been reconstructed, so this was a step up from “Walking with Dinosaurs.” Many of the creatures have evolved into modern animals such as pigs, cats, shrews and elephants. The sheer variety of elements on a mammal’s face — eyelids, eyebrows, whiskers, jowls, twitching noses and ears — made bringing them to life a challenge.

    “All of those elements have to be animated,” said Mike Milne, head of computer animation at Framestore. “The number of animation controls the animators have to work with is vast compared with dinosaurs.” The series also contains humanoid, upright creatures, which were difficult to animate because of the obvious comparisons that viewers could make and because the team didn’t know how early hominids moved. “This was more adventurous for us,” said Max Tyrie, animation supervisor at Framestore. “There were more complex shots, more creatures, more hurdles to overcome. It was all very enjoyable.”

    Framestore didn’t stop at creating the animals. The team investigated new angles and filming methods to push the animation even further. “We worked with different camera styles from hand-held tracking to wide angle, fish eye lenses,” said Tyrie. They also took various approaches to film speed, using tricks such as time ramping, ultra slow motion and time-slice photography. The team used time-lapse photography in a scene that showed a herd of mammoths grazing while the clouds flew by overhead. “We took our inspiration from ‘Dinosaurs’,” said Tyrie, “and then took into consideration what trends have appeared since then. With this work we are attempting to break the mold.”
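    Time ramping, one of the retiming tricks Tyrie mentions, remaps each output frame onto a fractional source frame through a varying playback speed. A minimal sketch, assuming a hypothetical ramp from quarter speed back to full speed:

    ```python
    # Sketch of time ramping: each output frame samples the source clip at a
    # position driven by a varying playback speed. The speed curve here is a
    # hypothetical linear ramp from quarter speed up to full speed.

    def time_ramp(n_out, speed_at):
        """Map output frame index -> fractional source frame by accumulating
        the playback speed across the shot."""
        src, mapping = 0.0, []
        for i in range(n_out):
            mapping.append(src)
            src += speed_at(i / max(1, n_out - 1))  # advance by current speed
        return mapping

    # Ramp linearly from 0.25x (ultra slow motion) to 1.0x over the shot.
    speed = lambda t: 0.25 + 0.75 * t
    mapping = time_ramp(10, speed)
    print([round(f, 2) for f in mapping])
    ```

    Fractional source positions imply frame blending or optical-flow interpolation in a real pipeline; the mapping itself is the ramp.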

    Of all the hurdles that Framestore had to jump, realistic fur was probably the largest. “It’s quite tricky,” said Martin Macrae, digital texture artist. “There’s no easy way to do it.” The team tried out a selection of software packages and considered creating a custom program to solve the problem. Finally, they chose a combination of off-the-shelf packages to create the fur pipeline, consisting of Softimage 3D, Maya, Mental Ray, and Photoshop. Strands or tufts of hair were hand-sculpted, with one hair in a hundred being individually created. The artists would then enter constraints into the program about how long the hair should be, how dense, and so on. The program did the mathematical calculations to create the designated hair. This approach worked best for long hair, while short hair could more often be painted on.
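    The guide-hair scheme Macrae describes (hand-sculpt one hair in a hundred, then let the software fill in the rest from constraints) can be sketched as interpolation between neighboring guides. A 1-D strip of fur with hypothetical guide values keeps the idea visible; real fur tools interpolate across a surface:

    ```python
    # Sketch of guide-hair interpolation: hand-placed guide hairs define hair
    # length at a few positions, and each in-between hair is interpolated
    # from the two nearest guides. Positions and lengths are hypothetical.

    guides = [(0.0, 2.0), (0.5, 5.0), (1.0, 3.0)]  # (position, hair length)

    def hair_length(u, guides):
        """Linearly interpolate hair length at position u from guide hairs."""
        for (u0, l0), (u1, l1) in zip(guides, guides[1:]):
            if u0 <= u <= u1:
                t = (u - u0) / (u1 - u0)
                return l0 + t * (l1 - l0)
        raise ValueError("u outside guide range")

    # Generate 100 interpolated hairs along the strip from three guides.
    strip = [hair_length(i / 99, guides) for i in range(100)]
    print(round(hair_length(0.25, guides), 2))  # halfway between first guides
    ```

    The same constraint-driven fill-in extends naturally to density and direction, which is roughly what the off-the-shelf packages computed for the long-haired creatures.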

    The hard work paid off — the project and the technology behind it have reproduced a long-vanished world, an ice-bound wilderness in which mammals flourished.