Microfloaties is a FREE* tool for Cinema 4D that lets you quickly add floating dust particles to your scene. It’s a rig fashioned from some of the mograph tools (cloners and effectors) and some XPresso, with user data controls for the amount, size and speed of the dust particles, as well as the scale of the area they occupy in your scene file. Microfloaties requires Cinema 4D R12 Studio. You will need to have the mograph module and dynamics for Microfloaties to work properly. Special thanks to C4D-ers Dominic Faraway and Brett Morris for their feedback and assistance.
You can grab it here: MICROFLOATIES This zip file contains the Microfloaties C4D project file as well as a library file for use with your Content Browser in Cinema. If you wish to keep Microfloaties handy in your Content Browser, just drag the microfloaties.lib4d file into your Browser folder found within Maxon>Preferences>Library folders. Here is a tutorial to help get you started. Enjoy!
(*By FREE I mean feel FREE to make a donation if you have found Microfloaties to be useful to your work. Thank You )
To all who have been asking about adding functionality to allow the change of materials and cloner objects… Yes, I do intend to look into adding these functions in a future version of Microfloaties. For now, it is what it is.
Playing with an old file and tried rendering it using the Physical Sky in C4D. Added some post work in After Effects, mainly Red Giant Looks, Optical Flares and a touch of grain. Would love to render this out as a short animation clip, but unfortunately I am getting very long render times. This render was 90 minutes. Perhaps I’ll try baking the AO, and possibly the GI.
I have recently purchased V-Ray for C4D, so next I will try rendering it with the V-Ray engine and see what kind of times I get.
I know this is just an experiment, but thanks for looking!
Was messing around in Cinema 4D with no particular goal in mind. Decided to give the new aerodynamics feature a go which, like so many things in c4d, was a piece of cake to set up. Added a physical sky, a touch of sub surface scattering to the paper whirligigs and used a tracer object on them to mimic dynamic trailing strings. Rendered with the Physical renderer, and post work done in AE with a touch of Looks, OF and RSMB.
Wasted render due to forgetting to turn off hair dynamics, so thought I’d play with it!
French Cinema 4D artist and developer Cesar Vonc has released a cool little Python plugin which generates volumetric “swiss cheese” based on C4D’s noise shaders (as well as any 3D shader). Medical animators especially will find this plugin to be perfect for quick creation of a trabecular bone matrix for your osteoporosis animation. But more than that, this plugin is just plain fun.
The plugin can be found here:
For those whose knowledge of the French language hasn’t progressed beyond the ham and cheese croissant you ordered at Starbucks, the US English strings can be found here in the Maxon forums on CGTalk.
Just replace the French strings in the plugin folder with them.
Oh…and if you like it and plan on using it in production in the future, please be sure to donate via the PayPal link on his page.
Some more experiments. Very un-sciency.
Just a test render for a head lice project I’ve started. Yuck.
Just some shots from various work over the past year or so
Finished a spot for a new show on History called Invention USA. Had a short window of time to come up with the concept and go to finish, which I do admit consists mainly of a bunch of things that Cinema 4D does well (and fast), namely stuff falling on a floor and bouncing around. A little lazy perhaps on the concept side, but it was one of those situations where I just needed to come up with something fast, in the midst of doing 12 other things, and see how it flies. Client liked the concept so I went with it.
I gotta say though, when I'm in a tight bind, C4D always comes through for me. Setting up the dynamics required minimal effort. It just works. And the rendering was fast as well. I started off meaning to use GI and also tested the scene with the new R13 physical renderer, but because time was an issue (needed to go straight to finish in a very short amount of time) I went with the standard renderer. The still above was rendered with the new physical renderer, however the final shot (not shown here) didn't really look much different in the end and rendered in a fraction of the time. Hope to eventually post the spot. Some style frames also included below.
Just a test to get up to speed with the physical renderer in Cinema 4D 13. Wondering if it is viable for real animation work. The kind with deadlines.
I am currently getting ~8 min per frame rendering at SD widescreen. Only rendered a short sequence (130 frames) so I looped it a few times.
SSS was turned OFF as it was taking ages for the irradiance cache to build (this was with optimized SSS settings).
I have 1 bounce for self illumination, AO on, motion blur and DOF on.
At the moment it looks like I still have some more optimization options to explore. Render times feel too heavy at the moment for long-form medical animation (usually between 4 to 10 minutes of HD animation) but maybe Ok for short shots for spots and such. We’ll see.
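As a sanity check on those numbers, here's a rough back-of-the-envelope estimate (the 30 fps value and the function name are my own, purely illustrative; nothing here comes from an actual render log):

```python
def render_hours(duration_min, fps, min_per_frame):
    """Total render time in hours for an animation of the given length."""
    frames = duration_min * 60 * fps
    return frames * min_per_frame / 60.0

# The 130-frame test at ~8 min/frame:
print(round(130 * 8 / 60.0, 1), "hours")        # ~17.3 hours

# A 4-minute HD piece at 30 fps and 8 min/frame on a single machine:
print(round(render_hours(4, 30, 8)), "hours")   # 960 hours
```

Even split across a modest render farm, the long-form numbers demand serious optimization; the short-spot math is far friendlier.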
Continuing my experimentation. Removed sub-poly displacement and put SSS back in. Reduced settings significantly (see below).
Got much faster render times (~2min/frame). Still needs refining. Lighting is all HDRI now so could use a bit more directional light. Turned on reflections as well instead of using an HDRI environment map for fake reflections. Also added a second layer of phospholipids (now it’s a proper bilayer) and threw a couple of proteins into the dynamics group.
What started out as a simple test to explain camera-driven shader effects in Cinema 4D to a friend in need, turned into, well I don’t know, THIS I guess. Staring at that Optical Flare in the hazy distance makes me ponder the meaning of existence. Then I just click on a cat video and I’m all better.
But I digress. I’m using the gradient shader in “3D spherical mode” set to camera space applied to cloned spheres. As objects come closer to the camera, they glow. Objects in the distance are dark. A nice way to fake fog in your scenes (and a lot of other things, depending on which channels you use it in).
The radius value (below) determines the length of the gradient effect (left to right, left being the camera which is at the center of the “virtual” sphere). I copied this shader setting into my diffuse channel to darken the objects in the distance.
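Under the hood this is just a radial distance falloff from the camera. Here's a minimal sketch of the math in plain Python (not the C4D API; the function name is mine), where the radius plays the same role as the shader setting above:

```python
import math

def camera_falloff(obj_pos, cam_pos, radius):
    """Map an object's distance from the camera to a 0..1 gradient position.
    0 = at the camera (the left end of the gradient), 1 = at or beyond the radius."""
    dist = math.dist(obj_pos, cam_pos)
    return min(dist / radius, 1.0)

# With a 500-unit radius, luminance fades (and diffuse darkens) as objects recede:
for d in (0, 250, 500, 1000):
    t = camera_falloff((d, 0.0, 0.0), (0.0, 0.0, 0.0), 500.0)
    print(d, round(1.0 - t, 2))   # 1.0, 0.5, 0.0, 0.0
```

Copying the same falloff into the diffuse channel, as mentioned above, is just reusing this 0..1 value to darken instead of brighten.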
I’ve done some post work on the clip above which kind of makes the camera falloff luminance harder to spot, but hopefully it makes sense. I got a little carried away.
I came upon this technique maybe a decade ago, as described by CG/VFX artist Richard Morris (aka Jackals Forge of Gallerie Abominate fame) on his website documenting his CG production process for the science documentary BodyStory 2 for the BBC. Make sure you check out his work. Although his website is over a decade old, the work still looks great and the information can still be very useful to those in the biomedical visualization field. The relevant information can be found in the “Dynamic Shading” section of the article.
Jackal’s Forge BodyStory 2 Tech. Click here for website/article
Just got Cinema 4D R13 and thought I’d see if I could figure out some of the new rendering features, mainly the new sub-surface scattering system as well as the physical renderer and camera. Struggling a bit, but challenges make this stuff more fun.
I was inspired by Maxon’s marketing image for R13 of the grapes by Marijn Raeven, which I’m sure many of you have already seen. Didn’t want to waste time modeling anything, so I did the next best (i.e. easiest) thing, and cloned a bunch of dynamic spheres and applied SSS. 13 minutes later I had this image.
Over the next few months I’ll see if I can rein in the times and quality to see if it’s feasible to use physical rendering in animation production, but in the meantime I think I will be sticking with R12.
Here is the first image I created. It’s a model of a trichomonas (yucky parasite). I chose it as I thought it would be a good object to test the SSS on due to its general blobbiness. It’s a starting point. Still a ways to go…
A few years ago I got to work on an episodic TV project called Factory Floor. You may have already seen my other posts about it earlier on in my blog. Yeah, I know, I’m milking it. It was not a particularly successful series (it only lasted one season) but looking back it was really a great experience overall, and I learned a lot about what it’s like to be involved in a TV series production. There were ups and downs, funny stories, sleepless nights, stress, and ultimately, a feeling of relieved satisfaction bordering on triumph when we delivered the last approved graphic call after a roughly nine-month period of working. I still marvel at how I and my then business partner R. Scott Purcell, our producer Bob Larkin, (and the occasional 1 or 2 extra freelancers) were able to turn around 100 graphics calls while jumping through all the usual hoops of a production schedule.
The image above is the beauty pass render for a piece of machinery called a pencil extruder. It squeezes out bits of lead that will be inserted into the pencil. Below is what the final call looked like after compositing (and that’s a CAULK GUN, by the way. Get your head out of the gutter. ;)
One of the things I really liked about this project was that it required me to QUICKLY build and rig dozens of mechanical objects, mostly factory machinery, but sometimes the odd item like cheese curds, toilet paper or potato chips. I’d get a script, and usually a rough cut of a given episode, and from the low res quicktime footage, as well as google picture reference, I’d need to knock out various models in a short amount of time. I’d usually spend no more than 5 or 6 hours a night researching, building a model, then get started on roughing out the animation timed to the corresponding piece of the rough cut. Each night I’d shoot to have a shot ready to go. After a few weeks, (and then months), the process was like second nature. Research, build, animate.
The style of the graphics was to resemble a sort of moving 3D blueprint. We developed a workflow right from the beginning in which we had a master Cinema 4D file which included all the shaders and lighting we would need in every scene.
The rendering and compositing workflow would go like this:
1. BEAUTY PASS. We would apply realistic shaders and textures to our models with a decent lighting setup and render this pass usually with ambient occlusion if rendering time allowed. We had a set of metallic shaders we would reuse again and again for the various pieces of machinery.
2. X-RAY PASS. We also developed a set of x-ray shaders using a variety of colored fresnel (angle of incidence) gradients loaded into the alpha channel of the shader. We would take our master project of the given scene and then apply these shaders and render that pass.
(Below: the X-ray pass for the lead extruder call, a fire extinguisher, and a taser X-ray right below that.)
3. LINE ART PASS. We heavily utilized the Cinema 4D module Sketch and Toon to generate a bluish line-art pass on white. Although the S&T render method can give a big hit to your render times, we found that setting S&T quality to low and turning off AA sped up rendering times considerably and still looked great. Sometimes we would create custom overshoot and perspective lines and render these in yet another line art pass.
4. BUFFERS. Each animation would require us to hold and highlight a specific part of the object or machinery. When this occurred we would need to glow each part. To help with this we rendered object buffers for each part of the model. If a piece of the model was obscured in some way (for instance, a battery or valve inside the main object), we would render those pieces out separately in their own scene file.
Scott developed a template in After Effects from which we would create each and every shot. It included a blueprint overlay, background textures, labels and leader lines, as well as a layered workflow to style the renders into that blueprint look, which would need to mix with the textured background. Scott used the Linear Light blending mode, which yielded the perfect look. We would also use feathered masks to accentuate certain areas and bring out more of the beauty pass, line pass or X-ray pass as needed.
1. We would typically drop the beauty pass in first then apply the layer effects to mix it with the background texture.
2. Then we would bring in the x-ray pass, then mask out areas as needed, and fade it back considerably.
3. We would drop the line pass in and mix it a bit, set to multiply.
Below is an example of a typical AE Comp.
In the end the final look was nice, and completely in line with what was appropriate for the show, as well as being exactly what the producers wanted. I’ve recently gone back through the files, remembering how much time went into building them all. I thought I’d share some of the pre-processed beauty passes and showcase them a bit, as much of the detail gets lost in those original blueprint renders. These are by no means examples of rendering excellence. No global illumination or linear workflow, just some simple down-and-dirty 3-point lighting and metallic shaders with a touch of AO.
I may continue to add more images to this post, so check back. I have tons of them.
Thanks for looking!
Just a C4D doodle, playing with text and the mograph extrude object. Applied a shader effector to the extrude object to alter visibility and scale of extrusion with the noise shader. Nothing groundbreaking here but fun to explore.
I think the GI-lit text-on-reflective-cyc look might be a bit overdone these days; it’s so easy to do.
Five or six years ago I was contracted to design and build a character for an up-and-coming game company. It was really my first character gig, but I was excited to dive in and give it a shot. I’m not sure if the game ever saw the light of day (I’m not much of a gamer, actually), but it was a fun project to work on and I learned a lot in the process.
Never finished it completely–the boots were meant to be snowboard boots (It’s Danté, the Snowboarding Demon from Hell!), textures were never refined and the sub surface scattering’s a bit blown out. There are some anatomical problems too. Blah blah blah…
But thought I’d share anyway–just to show that it’s not always microbes and slime for me.
Working on a tutorial that will show you how to create this tiled cell image below with nothing more than a rubber band and a paperclip. Actually, a sphere, plane, and a disk primitive. Plus a little help from some Cinema 4D generators…
Took it a little further….
Working on a new tool /rig for Cinema 4D users for the quick creation of animatable floating dust particle environments. Should be ready soon!
Doing some R&D look-and-feel tests for an upcoming project. Playing with sub-poly displacement and the native sub surface scattering shader in Cinema 4D. Long render, but it looks kind of cool. AO is baked in, so I will have to see what else I can do to rein in the render times.
I recently had the opportunity to work with Dominic Faraway, a talented and quite prolific cg animator/art director residing in London. Check out the life-size holograms he produced for a recent Black Eyed Peas concert.
Above: a B-Lymphocyte undergoes apoptosis (programmed cell death) in the bone marrow.
Below, the next two shots approach a macrophage as it engulfs immune complexes which have accumulated on a tissue surface. The scale was exaggerated in order to get the concept across; the antibodies would be much smaller than what is shown.
All CG was created in Cinema 4D, with heavy use of Thinking Particles and the mograph module for many of the effects throughout. To allow for smooth renderfarm renders via the NET Render module, all particles were baked using a separate cloner targeting each Thinking Particles group. I then took the particle geometry from TP and used it as a child of the cloners. The result is an identical duplicate of the Thinking Particles setup, which can easily be baked using the mograph cache tag.
For post work, I used the plugin ZBlur 2 to generate good depth passes and then used Frischluft Lenscare in After Effects for the depth of field. Trapcode Particular was also used for some additional particle effects and smoke. Video Copilot’s Optical Flares was used rather extensively (perhaps abusively!) in order to accurately replicate the lens flares you’d get from the teeny tiny floodlights deep within the interstitial spaces of the human body. Viva la lens flare.
Just a few test shots done in Cinema 4D. Thought they looked interesting.
The top one is procedurally animated using R12 dynamics (which I love). Unfortunately I’m still wrestling with some GI issues, so the animation hasn’t come out so great. When I get a chance I’ll try again. I really need to do it sooner rather than later, as I’m itching to be the first Cinema 4D animator to render dynamic GI spheres.
The other 2 look pretty but I will have to figure out how to animate them in such a way that the renders would finish within this century.
Messing around with the glass and GI settings in Cinema 4D R12. Don’t get a lot of opportunities to explore these 2 facets of C4D but thought I’d share the result. Inspired a bit by the Maxon R12 logo. Trying to get rid of that white sphere reflected on the floor to the right. Would like it to reflect in the glass but not on the floor. Tried excluding it via the compositing tag just on the floor. I’ll get it…
Looking for some micro-environment inspiration and found a nice scanning electron microscopy image of Stomach Lining. Still a work in progress.
The displacement really needs to be enhanced, as the surface doesn’t look like it’s composed of individual mucosal cells; it looks more like a displaced surface. I need to add the thin webbed membrane stretched over the surface as well, and also a better sense of the folds and “caves” prevalent on the stomach surface.
Cinema 4D render:
Some more fiddling. One mesh. Five deformers. (Not sub-poly displacement; the displacement is achieved with displacer deformers.)
Still NOTHING like the reference image, but it illustrates how, by using multiple copies of the same displacer deformer with the same noise loaded in and set to Vertex Normal, you can get a quite interesting-looking surface. (Updated a bit since the original post.)
The bottom two displacer deformers use the same noise and settings.
The first of the two displaces the points along their normals based on the noise shader. The second takes the new normals (pushed out and altered by the first deformer) and pushes them further with the same noise. This way you create the illusion of separate spherical objects while it’s really all one mesh.
Below is the mesh with just one displacer deformer activated, used to create an undulating base mesh to which the bottom two deformers are then applied.
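The stacking idea can be sketched outside of C4D. Each pass pushes a point along a normal by a noise value sampled at the point's current position, so the second pass reads the already-displaced positions and exaggerates the bumps the first one created. (The noise function below is a cheap stand-in for C4D's noise shader, and for brevity the normal stays fixed instead of being re-evaluated between passes.)

```python
import math

def noise(p):
    # Cheap smooth scalar field standing in for a C4D noise shader.
    x, y, z = p
    return math.sin(x * 1.7) * math.cos(y * 2.3) + 0.5 * math.sin(z * 3.1)

def displace(point, normal, strength):
    """One 'displacer deformer' pass: move the point along the normal
    by the noise value sampled at the point's position."""
    n = noise(point)
    return tuple(c + nc * n * strength for c, nc in zip(point, normal))

# Two passes with the same noise and settings, like the two stacked displacers:
p0 = (1.0, 0.5, 0.25)
up = (0.0, 1.0, 0.0)
p1 = displace(p0, up, 10.0)   # first deformer roughs out the surface
p2 = displace(p1, up, 10.0)   # second samples the new positions and pushes further
```

Because the second pass samples noise at the displaced position rather than the original one, the result differs from simply doubling the strength of a single deformer, which is what gives the look of separate blobs on one mesh.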
Exploring some type ideas for a project in C4D. I stumbled upon a technique using Thinking Particles, the Mograph Tracer and the HAIR shader (applied to the tracer). The DOF blur was created with ZBlur (BIOMEKK Plugins), which allows for in-camera DOF (as well as depth maps) that doesn’t override other post effects (for instance: HAIR) like the built-in C4D DOF does. Hair WILL render with the built-in DOF; however, the depth map ignores the HAIR and blurs it indiscriminately.
The blurred hair creates a subtle smoke like effect.
EDIT: Animation rendered:
And below with the C4D built-in DOF effect. Notice how the blur is not applied to the hair where it’s in front of the text object.
It’s nice to have text tools built right into Cinema 4D, especially when considering that in some 3D apps you would need to create your text elsewhere and then import it. Even so, there is definitely room for improvement of these tools.
One little tip I’d like to share here might be helpful to those of you wishing to create text that uses sub/superscripts. Unfortunately there are few controls beyond font style, height and vertical/horizontal spacing of type when using the MoText object, and only one style can be applied at a time, so there are limitations.
If you wish to create sub/superscripts natively in c4d, you might be able to get away with this faux-script solution I discovered:
• Add a MoText object to your scene (Mograph menu>MoText in R12).
• Make sure your text is aligned LEFT for this example.
• In the attributes manager, hit return where you wish your subscript to begin.
• Add enough character spaces prior to your subscript characters so the text is pushed to the right far enough not to have any characters on the line above it.
• For SUBSCRIPTS, enter a negative value of whatever the text height is in the “vertical spacing” box (may need to tweak the value by eye).
• For SUPERSCRIPTS, enter double the value of the text height into the vertical spacing box.
For true super/subscripts you would want the font size to be ~75% of the normal text, but unfortunately, as mentioned above, you cannot have multiple font sizes/styles applied to the same block of text. Maybe we can get some more precise tools (kerning, multiple styles) in a future upgrade of Cinema 4D.
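The spacing rules above boil down to a couple of multiples of the text height. A trivial helper, just to make the recipe explicit (the function name is mine; the values come straight from the trick described above):

```python
def faux_script_spacing(text_height, kind):
    """Vertical spacing value for the MoText faux sub/superscript trick:
    'sub' -> negative text height (pulls the line down into subscript range),
    'sup' -> double the text height (pushes it up into superscript range)."""
    if kind == "sub":
        return -text_height
    if kind == "sup":
        return 2 * text_height
    raise ValueError("kind must be 'sub' or 'sup'")

# For 100-unit-tall text (tweak by eye, as noted above):
print(faux_script_spacing(100, "sub"))  # -100
print(faux_script_spacing(100, "sup"))  # 200
```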
In part 1 of this tutorial I will show you one technique for modeling “spongy bone matrix”, the cavernous mesh-like bone tissue found deep inside human bone.
Even if you aren’t a medical animator, or have no interest at all in science visualization, I think you may learn a few things about metaballs and the mograph module. The mograph tools in cinema 4D are not only great animation tools, but useful modeling aids as well.
Part 2 will cover texturing, lighting and rendering.
UPDATED version: November 21, 2010
• Trimmed some fat and boosted the audio.
Back in November of ’09, R. Scott Purcell, Bob Larkin and I were asked to work on title design for the History Channel show “Swamp People,” which premiered just recently. Although the client wound up going with another studio in the end, I thought I’d post some images anyhow.
Scott did some kickass comps, filming elements in his bathtub and incorporating the results with footage from the show. Below are some screen grabs from his submissions.
You can check out more of Scott’s work here: Betatron
I set off in a different direction, exploring some 3D approaches in Cinema 4D. I will not deny that the shot below may have been influenced by having just recently watched that old 70s Sid and Marty Krofft show: Land of the Lost. Rarrgh!
Got to play with the C4D plugin Ivy Grower for this one. See my post about Ivy Grower here.
And this one below was an idea we had to turn the show title into a night lantern of sorts, surrounded by a swarm of fireflies and plenty of mosquitoes, I’m sure.
Here’s my first recorded tutorial for Cinema 4D which covers a quick way to simplify very dense splines. Now and then the point count on an editable spline will get away from you, maybe due to having used a plugin to generate the path or from having subdivided it 8 times too many.
Here’s a method you can use to generate a lighter spline which utilizes the Mograph module. As I said, it’s my first, so be kind. Hopefully the info is clear and you will find it helpful.
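For comparison (this is not the Mograph method from the video), the classic general-purpose algorithm for thinning out a dense path is Ramer-Douglas-Peucker, which drops any point that deviates from the simplified path by less than a tolerance. A 2D sketch:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification: keep only the points that
    deviate from the start-end chord by more than epsilon, recursing on
    the segment that bends the most."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    max_d, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # Perpendicular distance from the chord (or plain distance if degenerate).
        d = abs(dx * (y1 - py) - (x1 - px) * dy) / norm if norm else math.hypot(px - x1, py - y1)
        if d > max_d:
            max_d, idx = d, i
    if max_d > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A noisy, over-dense line collapses to its two endpoints:
dense = [(x, 0.01 * (x % 2)) for x in range(100)]
print(len(rdp(dense, 0.5)))  # 2
```

The larger the epsilon, the lighter (and coarser) the resulting spline, much like the spacing control in the Mograph approach.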
(I’d recommend clicking the fullscreen toggle for better visual clarity)
You can also check out the Remotion plugin DiTools which includes a spline simplifier tool (among MANY other useful tools!)
DiTools Link. Click here for info on the plugin
The plugin OneSpline by Kuroyume generates a single spline based on all of its spline children.
I used this plugin to initially create the paths used in the tutorial.
OneSpline Link. Click here for info on the plugin
I’m working on an animation project right now which is once again science oriented, but this time instead of focusing on a world usually seen only through a scanning electron microscope, I’m going the other direction in terms of scale:
BIG SPACE THINGS!
I hope to finish this thing SOON, but in the meantime here are some progress stills.
The planetary texture maps were wrangled from various places. NASA’s Blue Marble website had some nice maps, but I also found this great site: JHT’s Planetary Pixel Emporium, which had some super-high-res Earth images you could download for $8 USD. The free maps that are also there are very nice (they include many planets and moons in the solar system) and could also be found elsewhere on the net if you Google, but for me, the hi-res maps were worth the 8 dollars.
I did some extensive shader work in Cinema 4D to get the looks I was after, especially for the Earth maps.
The sun and star shaders were all procedural using the many noise shaders available in Cinema and mixing them in various ways.
I must say, doing these space oriented projects is a lot of fun!
A Friday night Cinema 4D experiment. Playing with pyrocluster, a volumetric shader that creates some interesting volumetric cottonballs among other things. I use it a lot for smoky environments.
Applied a little (OK, a LOT) of Video Copilot’s TWITCH plugin in post, as well as ReelSmart Motion Blur and a touch of Frischluft Out of Focus.
I’d say it was inspired by undersea footage of the gulf oil leak, but I’d be lying.
Loops three times. Don’t stare too long.
Just a fun little test with MoDynamics. The ball bounce is keyframed by hand, and the fat trails are sweep NURBS applied to the mograph tracer object. It’s just the scattering balls at the end that utilize dynamics. Lots of flicker. Gotta see why that’s happening…
Here’s a clip from a 10-minute animation about HDL cholesterol and its effects on the body. The client had asked us to choose several shots from the show and repurpose them into a shorter piece: an “eye candy” montage that would be displayed on a super-wide wraparound screen mounted above a cafe booth at a trade show. The dimensions were 1800 x 400 pixels. Click the full screen toggle for a better view.
I used Cinema 4D and the mograph module to set up the blood flow, and then used the aging but trusty pyrocluster shaders applied to a particle emitter inside the vessel to add the sense of smoky plasma whooshing through. I’m not happy with how the pyrocluster smoke came out (a bit patchy), but I hope to remedy it soon. Really needs sound!