Old QTDDTOT thread is dead.
Is using noise reduction filters generally frowned upon for stills or is it acceptable for some cases?
So, I'm doing a Batch Render in Maya 2016 of an animation I made. Thing is, it keeps rendering the same image from a few frames back, yet the frame count keeps rising and the rendered images each get their own number. This never happened to me before, and it was working fine until around frame 300. What do?
I found a couple of solutions, ranging from creating a geometry cache to changing the textures, but none worked. I ended up rendering two frames at a time until I got the whole scene, ugh.
If you have the base mesh you can just duplicate it, edit it, and scale it to fit your needs, or you can extrude the faces for the clothes.
Fuse and Marvelous Designer are great for apparel.
As a shit modeler, how can I make enough PayPal money to order novelty pens off eBay?
I'm not looking to buy a house here, only novelty pens.
This depends heavily on your renderer and settings, e.g. whether you are using path tracing, bidirectional path tracing, branched path tracing, or Metropolis, and whether you are using multiple importance sampling or not.
Most path tracers have issues with very strong indirect lighting (e.g. a wall that is brightly lit by a directional light source and then provides the bulk of the lighting for your scene), as well as light that passes through small holes or long, winding tunnels. Metropolis is the other way around and struggles most with direct light sources in view (but there are easy ways to mitigate that).
Most path tracers also give you helpers to accelerate rendering, like light portals (e.g. in Cycles).
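For anyone curious what the multiple importance sampling mentioned above actually does: it blends estimators from two sampling strategies, most commonly with the balance heuristic. A tiny illustrative sketch in plain Python (the pdf values are made up):

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS balance heuristic: the weight given to a sample drawn from
    strategy A when strategy B could also have generated it."""
    return pdf_a / (pdf_a + pdf_b)

# A direction that light sampling hits often (high pdf_a) but BSDF
# sampling rarely (low pdf_b) gets most of its weight from the light
# sampler, which is what keeps the combined estimator low-variance.
w = balance_heuristic(0.8, 0.2)  # -> 0.8
```

The two weights for any sample always sum to 1, so combining the strategies never adds energy, it only redistributes variance.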
I never said anything about making 100k.
This is all I have available to sell or give away for free.
If the model has no floor underneath it, light won't bounce back onto the model, which can create more noise.
If you are using some kind of reflective material, that can create more noise as well.
Anyone know any easy character creation software? I learned a lot about rigging and animating in high school, but I'm not sure I want to learn the long way of creating characters from scratch just for funsies.
I'm new here, so allow me to introduce myself.
I'm currently 24 and have made a few small projects, practicing mostly 2D animation. I'm looking to make the jump into 3D animation. I'm very comfortable with Blender and dope sheets, but as far as my own models are concerned, I'm not so great.
So here are my questions, now that you know a little bit about me:
1: Where should I start? Is there a go-to book or documentation on getting started in 3D animation?
2: What would be the best program for 3D animation? I've done some in Blender with someone else's model and it came out a little wonky looking. I'll attach a preview of it.
Scour YouTube for animation tutorials, then move on to pirated or paid learning sites to learn more.
I would also suggest learning about how a human body moves when performing actions in addition to learning how to animate.
Just downloaded an iphone 6 obj that I have to destroy with a bullet hit by the end of the month for a short film.
The mesh is fine for still images but royally fucked for dynamics stuff. Bullet crashes Maya and nCloth is really slow. What options do I have? Will Houdini take it?
Noob here, trying to wrap my head around basic human topology. I'm trying to model a low poly female. Which of the three body types is right? The hexagons under the body types are a top view of the leg and waist. A is supposed to have faces on its side, while B and C have edges. B and C are identical save for the boob orientation.
I haven't decided how to do the back yet.
Pretty much my thoughts. What concerns me is that the character might come out looking flat from the side, or that I'll need to add extra loops while making the mesh and have to go back and remake most of it.
Anyone here use Luxrender for blender?
When trying to export smoke, all it shows is the bounding box and no smoke. All the old tutorials are from years ago and outdated. Does anyone know how to export smoke with the new UI workaround? I'm still using the classic API.
How would you go about replicating this hair?
The geometry is easy, but what are some key things that you'd highlight in the process?
A nice diffuse and normal, and anisotropy in the shader? That's all I got honestly.
Not sure how I'd go about painting or creating textures like that
I'm working on making a uv texture for a low poly model and it's taking fucking ages.
Am I doing this wrong or am I just bad at it?
What are some general tips for making low resolution uv maps?
You generate UV maps in your 3D software.
However, you can export the UV layout and paint over it in GIMP or Photoshop.
There are several programs suitable for 3D painting, including Photoshop.
Actually, you can texture paint inside of Blender too.
>However, you can export the UV layout and paint over it in GIMP or Photoshop
That's what I've done.
What I want to know is the general method people use to go from a solid color to something like this.
And whether or not the UV map in my previous picture has problems that would slow the process down.
Just avoid overlapping faces and weirdly stretched shapes.
Texture painting is done on the model, meaning you paint on the model itself and the colors are written into the UV islands of an empty texture.
here is a good tutorial
Been having trouble with the boobs on this model for ages. The model on the left is the one I loosely based them on, and I'm having trouble getting the shape down. Are there any secrets to modeling boobs?
I usually model the shape I want as it would be without gravity, so I get nice even loops and the perfect shape, then attach them to the chest and use lattice modifiers plus manual moving to gravity-correct them.
The nConstraints menu, mostly component to component.
There's this book series, "How to Cheat in Maya," that is pretty good for animators, especially if you're more of an animator than a computer guy. Other than that I don't know a single central resource, since I was initially taught 3D animation in school. But 11 Second Club is good; watch all of the crits. Victor Navone's tips. Carlos Baena. Lots of googling. I animate in Maya, as do most people.
Looks to me like it's mainly in the shader since hair is somewhat unique, and that's from a cinematic. I don't know if you could get a result of that quality with just texture painting... but you could probably get something decent at least. Don't take my word for it
I've been offered a job as a render wrangler at Pixar. I wrangled renders almost a decade ago before moving into network architecture, and it sucked. Would it be worth the résumé points, or would "render wrangler" jump out more strongly than "Pixar"?
You need to make an SSS material. Use a white diffuse with bluish SSS. Find a snow bump/normal map, and use a slightly sharper noise for the reflection glossiness. And you need some VFX for the "sparkly" effect.
If you want ultra-realistic renders, you need to use a fur plugin.
Snow isn't jello, it doesn't transmit light.
I practically live at the North Pole, and let me assure you, unless you live in Beijing, snow definitely transmits light.
I suppose it's not impossible that a blind person would overlook the SSS in pic related.
So I am trying to take control of my life a little bit and get rolling at a community college.
What should I declare my major as if I want to go into 3DCG as a career? My community college doesn't have a specific 3D program, so I'll have to transfer somewhere, but what should I declare as until then? Art/Fine Art?
Depends on your end goal. Most colleges will have architecture degrees, which can be used for archviz as long as you make sure to develop your 3d skills. Or you could go for a degree in animation. Or product design. 3d can be used in so many different industries, so you're best off going with a degree that is in the direction of your desired career
> An igloo, (Inuit language: iglu, Inuktitut syllabics ᐃᒡᓗ [iɣˈlu] (plural: igluit ᐃᒡᓗᐃᑦ [iɣluˈit])), also known as a snow house or snow hut, is a type of shelter built of snow, typically built when the snow can be easily compacted.
So, about the student versions of Autodesk software:
Since they're for 3 years, is that 3 years from the starting day, or does the remainder of this year count as a whole year? Because then it would really be 2 years and 2 months.
I just started using Cinema 4D today and I'm enjoying it.
However, whenever I go to add primitives, they're always placed on their side.
e.g. in this scene I'm working in, I'll add a cylinder and it will be lying on its side.
But if I start new and add a cylinder, it's "standing up", like a can for example.
What did I mess up, and how do I change it back to behave like a new scene? Sorry for the noob question and broken English, but I'm enjoying it so far.
Pardon my stupid question about Blender:
Is it possible to pin a mesh onto another mesh, like pinning a painting on a wall? I know it's possible to simply merge them with Ctrl+J, but I want the pinned mesh to stay separate, because I want to make it interchangeable with the Replace Mesh function.
Hi, I created this wrist and reduced some verts for connecting it to the arm. Is there a way to average the spacing between the verts without having to extend the mesh and use a smooth surface?
I tried "Average Vertices" but I'm not sure what it's doing; it just seems to shrink the edge loop inwards.
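If the goal is evenly spaced verts along the loop rather than the shrinking described above, the underlying idea is resampling the loop by arc length. A standalone sketch of that logic in plain Python, not Maya API code (`respace_loop` is a made-up name):

```python
import math

def respace_loop(points):
    """Redistribute 2D points evenly along a closed loop by arc length,
    keeping the loop's piecewise-linear shape."""
    n = len(points)
    segs = [math.dist(points[i], points[(i + 1) % n]) for i in range(n)]
    total = sum(segs)
    out, target = [], 0.0
    i, walked = 0, 0.0
    for _ in range(n):
        # walk forward to the segment containing the target arc length
        while walked + segs[i] < target:
            walked += segs[i]
            i += 1
        t = (target - walked) / segs[i]          # position within this segment
        a, b = points[i], points[(i + 1) % n]
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
        target += total / n                      # even spacing along the loop
    return out
```

A loop that is already even comes back unchanged; an uneven one gets its verts slid along the existing edges until the gaps match.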
>I ended up filling the face to get an intersecting vertex to use as a center point.
>I then created a circular plane and set the vertex count equal to the wrist's.
>Placed the circle at the center point and matched the plane.
>Then I matched up the verts on the wrist with the circular plane and deleted the additional faces.
Yo, anatomy question.
Does anyone happen to know an accurate limit on the rotation of the glenohumeral ligaments, the "shoulder joint"?
Examining myself, it seems to be somewhere around 120 degrees of rotation.
In the long run it speeds up rendering animations and lets you achieve a clean result much quicker. A good biased renderer like Mental Ray can do a dynamically updating FG or GI photon map that simply adds or removes points from the map as things change in the scene, instead of recalculating the whole thing each frame. It also has some more advanced techniques like importons and irradiance particles...
There's also redshift now, which is a GPU accelerated biased renderer.
This may be a dumb question, but when you animate a character walking or something, do you move their main character control along the path they will take and then move their hip and feet controls in sync with how fast the global control is moving, or do you leave the main control in place and move the character by their hip and feet controls?
I do the latter because it makes more sense to me, but I don't know if it even matters or not
My range is slightly more than 180. I work out my shoulders semi-regularly to combat office fatigue and stay in minimal shape. I used to target the rotator cuffs (the muscles involved in what you describe) pretty seriously a few years back when I was weightlifting; they are very small, weak muscles and are easy to injure if you don't look after them.
I mean legal protection.
As in: let's say I do a parody of a work, and not only do I use the same characters but actually use the ripped meshes of the original work. Would that be legal?
When 3D modeling anything in general, is there any reason to avoid (or not spend time avoiding) intersection of multiple pieces?
For example, say I make a window with bars, but in my model the bars are not physically connected to the frame, but are simply pushed through the frame to give the illusion of a connection.
Does intersecting the bars cause any sort of problems down the line, or are there any benefits to physically connecting the bars?
I've asked a few teachers and highly knowledgeable students, but nobody can give me a logical reason for one way or the other, apart from texturing methods.
I personally have my jimmies rustled by floating objects through each other and try to find ways of connecting them.
IMO it's more like future-proofing your work, because you're 10x less likely to have problems if you do connect them: textures line up nicer, modifiers like chamfers will chamfer where they're connected, smoothing things out, etc.
I haven't found a solid there-and-then reason to, though, so do so when you please.
How do I get normal maps in Unity to display 100% correctly?
I bake a normal map in Blender; it looks correct in the Blender viewport and in Unreal 4, but it's not totally correct in Unity. I mean, it's workable, but it's kind of annoying. It seems like Unity doesn't exactly use tangent space normal maps, but some slight variation. If you have worked with Unity you probably know what I am talking about.
Check if one of your channels is flipped. But yes, there are several ways to calculate a final normal from a tangent space map. Unity's unpacking in its built-in shaders gives pretty boring results. You'll want to learn to write your own shaders if you want Unity to look as good as Unreal.
Here's an example of a shader with a deeper-looking normal unpack I wrote for Unity 4 last year.
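For what it's worth, the unpack itself is just remapping the stored [0,1] channels back to [-1,1], and one common trick for a "deeper" look is scaling the XY components before renormalizing. A plain-Python illustration of the math (the `depth` factor and function name are my own, not Unity's API):

```python
import math

def unpack_normal(r, g, b, depth=1.0):
    """Map stored channels [0,1] to a tangent-space vector in [-1,1];
    depth > 1 exaggerates XY before renormalizing, deepening the relief."""
    x = (r * 2.0 - 1.0) * depth
    y = (g * 2.0 - 1.0) * depth
    z = b * 2.0 - 1.0
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# The flat-normal texel (0.5, 0.5, 1.0) unpacks straight to +Z:
print(unpack_normal(0.5, 0.5, 1.0))  # -> (0.0, 0.0, 1.0)
```

Because the result is renormalized, the exaggerated normal still lights correctly; it is just tilted further from the surface than the texture originally encoded.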
Hey, I wanted to know if there is any way of modeling hair like in pic related in Blender. Pls respond.
Box modelling? Also yes.
I'm not too sure about Blender, but if it were me, I'd start with a cube, roughly shape it by subdividing and moving some points around, then extruding faces and collapsing edges to form the many tufts of hair coming out, with some joining back in or such.
I could demonstrate it or show a topo image but only if you're interested in trying the method I'd use
Thanks for the reply. I basically just extruded edges and made posts of plane 'hairs'. It looks nice, but I'm going to try this to see if it looks better. I had another problem though: some bad polys I can't recalculate or flip direction on.
What's the best way to learn Maya in a somewhat structured way without taking a class/signing up for some kind of subscription? (Video series rather than individual ones and/or books, for example.) I've checked the sticky, but it's almost like a scavenger hunt when it comes to getting started. Thanks in advance.
I've been using C4D for 2D animations (I suck at the AE graph editor and I don't have the Creative Suite yet). I don't have problems with the external compositing tag, but it only works with rectangles. Is there a way to export non-square animated planes from Cinema 4D to After Effects? Pic related.
I also tried object buffers, luma and alpha track mattes, and color keying, but they don't get the keyframes for movement that the external compositing tag exports.
Any help would be much appreciated.
Through YouTube vids. I would find something about the Modeling Toolkit in 2016 first and foremost. Something to explain the basics of the UI and how to find what tools you have available to you. Something to learn about the Hypershade and the way nodes work in Maya.
The UV editor and its tools.
To get you started:
Youtube / watch?v=EK2DikWCnk8
Something is wrong with Blender: when I mirror something, the other half appears rotated, half inside and half outside. I managed to fix it once by messing around, but I don't remember how.
What's the advantage of Source Film Maker over complete 3D software packages like Maya or Blender when it comes to animation?
I guess it must ease the process somehow at the cost of less freedom, but how?
What exactly do you mean by low poly? Is low poly something made to fit a smartphone game, for example, or do you mean anything not done in ZBrush? Can something with a Subsurf modifier be low poly?
There's no precise definition.
It is used to refer to the base mesh regardless of tris and target platform. A character in a fighting game for a modern platform might have 30k tris, a character for a strategy game for mobile might have a few hundred.
It is also used to refer to an aesthetic.
When building, say, a character, what's the best way to do it?
I've seen one guy start off with a ball-like shape, then shape it into a torso and hips, then just kind of keep going with some sort of... extender-like thing.
Then I saw Mike's way: https://www.youtube.com/watch?v=QqXI4sc2BT8
Which just goes so fast I can't tell what the fuck, but the results are good.
I'm new to c4d.
My models turn up black when rendered.
They didn't do that 20 minutes ago and I really don't get what's happening.
Any ideas?
>Luxrender 1.5.1 comes out
>Try to set it up in Blender
Can anyone explain what this is and how to fix it?
Anyone know why this triangle is showing up in Substance Painter? It isn't part of my geometry.
Is there a way in Blender to prevent the automatic weight painting from overwriting certain groups? I'm merely experimenting with an armature setup, but I hate having to redo certain areas every time I make a small change to the bone structure.
I've used Source Film Maker and Maya both extensively.
Maya, in my opinion, is a much more robust piece of software. I remember SFM being fairly unstable and unpredictable at times. Also the limitations of the Source engine seem pretty primitive compared to the rendering engines you can use in Maya.
Just my two cents.
In 3ds Max 2016, when I rotate my view and click anywhere in the active viewport, the camera flies off into space and I can't see my model anymore. When I refocus my view, the camera will do it again randomly when I click on stuff. It's making it impossible to do anything, and I can't find a solution anywhere on the internet. I've tried changing the graphics driver, exporting and importing my file as FBX and OBJ, unplugging the Wacom, and changing viewport settings, and nothing works.
I have an i7 4790, an R9 390, and Windows 7 64-bit. I could try reinstalling Max, but I doubt that will work. Should I just switch to Maya? This is making it impossible to work.
Forgot the rest, as you can see I'm a filthy self-taught rank-amateur.
I'm trying to model Muffet from Undertale, and I realized too late that adjusting the weights for that shirt overhang over her two lower pairs of arms will be a bitch to do. I was thinking maybe I'd just simulate the overhang, the ribbons, and the hair as cloth, make the rest colliders, and rig the rest. Is that possible?
Does it go like billions of units in every direction?
Does the same thing happen if you are working on a completely different project? There may be some verts or issues with one of your OBJs that is messing with the way the camera works. It may be scaling its movement to how large it thinks your scene is.
This is not an issue with your drivers or hardware; it's a software issue that can happen with partially corrupted OBJs and the like.
I know exactly what you are talking about. It happened to me about a year ago; I'm not sure how I fixed it, though.
< What is this called?
When people make textures for low poly models like this, are they self-illuminated to hide the geometry from other light sources catching it the wrong way?
Also, I guess generally, what are some techniques one can use to hide low poly geometry with texture work?
Usually they just don't have shadows cast on themselves, as that would reveal the heavily angular, sharp features of the low poly model, so it's more about rendering style than texturing.
So yes, usually self-illuminated, and as a result the general look and environment usually reflect this as well: a unified "there are no shadows in this art direction," except maybe in certain situations, and even then they wouldn't be cast shadows.
Most would be a blurry circle under characters or objects acting as a fake shadow, and that's it. Maybe a more accurately shaped shadow under a tree? Stuff like that.
I tried adding more edges near the cup handle, like in the pic, but in my case, when I apply smoothing groups or even TurboSmooth, I don't get a smooth surface. Why, and is that even possible in 3ds Max 2015?
I'm trying to connect a handle to a cup, which needs more edge support.
Pic related: found this on the net, and it seems this model has no smoothing problems around the handle.
I thought that maybe I did something wrong when connecting the cup and handle, so then I just tried inserting a single polygon into another polygon on the side of the cup. Got these visible deformations (red circles) even when everything is on a single smoothing group.
How could I go about creating a shading style in a game like 3ds Max's consistent colors shading?
Pic is an example of what I'm talking about.
Is there a simple, preferably free application to make a 3D terrain?
I would like the end result in .obj or .raw. Pic related.
That's just a 100% ambience shader that has direct shadows on it. Basically, surface angle has no effect on shading, and you only have a single directional light source for those hard shadows, plus an ambient light to keep the shadows from being 100% black.
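That model is simple enough to sketch; here is a hypothetical per-pixel version in plain Python (all names made up, not any engine's API):

```python
def toon_shade(albedo, in_shadow, ambient=0.4):
    """Flat 'consistent color' shading: surface angle is ignored, so a
    pixel is either fully lit or dimmed to the ambient level."""
    k = ambient if in_shadow else 1.0
    return tuple(c * k for c in albedo)

print(toon_shade((0.8, 0.2, 0.2), in_shadow=False))  # lit: albedo unchanged
print(toon_shade((0.8, 0.2, 0.2), in_shadow=True))   # shadow: dimmed, never black
```

Note there is no N dot L term at all; that is what keeps every face of a surface the same flat color regardless of its angle to the light.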
Tinkering with vertices helps a bit near the handle, but no matter what I do with the inserted polygon on the right, it still doesn't give that smooth surface.
Or is it just not possible to have that kind of inserted polygon and a completely smooth surface?
The mesh needs more interpolation. "Poles" don't bend and smooth well on curvy surfaces.
The inset pole near the handle will create pinching on your sub-d mesh. You can't kill poles, you can only hide them.
What if I'm modeling for games and need to keep it low poly? What would be the best way to add a part with more edges (the handle) to a part with fewer edges (the cup)? I tried making them separate objects, but it looks kind of weird; you can clearly see they are not the same piece.
Judging from a Youtube video of the game, I'd say both.
The "shadow" is just a circular blur (probably a flat plane with alpha) under the characters, and the texture on the character models has "occlusion" shadows painted on, such as the shadows of the arms on the sides of her dress: her default pose has her arms by her sides, and the shadow doesn't move when she lifts her hand or whatnot.
In many games today, character animations are able to adapt the character's footing and animate accordingly.
I've made games by simply creating 3D animations, calling those animations from states, and blending the animation sequences to find dynamic in-betweens. What is involved in making a character adapt their footing when they start hitting stairs?
Yeah, depending on the game we create a collision mesh, which is lower poly, to do collision testing.
But still, think of GTA or Just Cause: the characters' legs are doing calculations beyond some simple animation calls.
I'm just starting out in Blender and trying to make some extremely basic primitive shapes. Holy shit, is the user interface always this bad? The guy I normally pay to do my 3D work is out on vacation.
Noob here. Let's say I want to model pic related using this image as reference. No straight front/side/rear views and no ortho. The topology is easy enough for me to figure out. But how do I get the dimensions right? Am I supposed to eyeball it or something? What do I do if the object I'm trying to model is something complex and my only references are things like irl photos of things with more complex and organic shapes?
>But how do I get the dimensions right?
You don't. You'll keep reiterating it and updating it with every new piece of info you get. Eventually you'll have to make your own blueprints if you want it to be "accurate," whatever that means.
About 12-14 years ago I was really in to 3D modeling. I enjoyed high poly work, I did stuff like cars, weapons, scenery. I used Maya and I'm still familiar with it from using it off and on over the past decade. I kind of want to start 3D work again but do things like vfx and particle work, is it worth it to learn Blender? It almost feels like starting from scratch with how foreign that program looks. I was also looking at Houdini recently due to a video series on scripting Houdini I was going through. I know Python and I know all three of those programs can utilize Python scripts.
You eyeball it and just line it up to get it roughly the same size. Never try to adhere perfectly to a 2D drawing unless it's some real-life mechanical object where you have detailed schematics. You should always be making judgments about the forms based on how they look in 3D, doing what looks best, not restricting yourself to the exact ratios of the drawing.
Measure it, then eyeball it. It's better to use CAD for blocky shit, especially mechs, because you can redo things on the fly; the feature system is as non-destructive as possible. Also, you can export every part as a separate mesh, so you can easily rig it. Keep in mind CAD exports shitty tri meshes, so if you plan to do additional work in 3D it won't hold up, but you won't need to in most cases.
>inb4 this looks nothing like it
I'm not an artist
You should look into Soup for Maya if you're interested in VFX. Maya and Houdini are definitely the top programs used in VFX these days. They both have their pros/cons, but are solid in their own respects.
I'm trying to destroy an object mid-air in Houdini. It must act as if thrown sideways by someone, so a flight arc and rotation are necessary. I can't control it whatsoever in Houdini (Maya was too slow to even try), and I don't believe using fields to drive the object will work. Any tips? Willing to use Maya also.
In ZBrush's UV Master there's an "attract from ambient occlusion" button that just slaps some control painting into the dips and crevices of your mesh for UV purposes. What I wanna know is whether there's such a button for regular polypainting. I don't want to go through the tedium of setting up surface materials in Maya or using Projection Master in ZBrush if I can just get a quick and dirty AO as a base to begin painting. I am aware of the automatic masking for brushes that lets you paint only dips/bumps, but is there like a paint bucket version?
You either have a locked control scheme, use dynamic foot placement tools, or you make all your steps everywhere uniform in size and just kind of hope it lines up perfectly every time.
The first one is easy to do and always looks good, but might be annoying to the player. It's where you press "use" or touch the stairs and a complete stair-walking animation plays that the player cannot control until it is over.
The popular premade engines these days all have foot IK utilities that are easy to implement but require your rig to be built with the IKs in mind. You'd have to see what your engine requires in order to make it work. If you're using a more bare-bones engine, you'd have to program your own system.
The last option isn't as lame as it sounds. You make your stair-walking animations and make sure all stairs everywhere have the same depth and height in their steps. The animation should look good under casual scrutiny and as long as the player is doing normal stuff. They might catch a foot going through the floor if they juke the camera and try doing weird maneuvers, but that only happens when people are intentionally trying to mess up the game.
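To make the uniform-steps idea concrete, here is a toy sketch of the bookkeeping involved; the function names and the 0.3/0.17 step dimensions are invented for illustration:

```python
def step_index(x, step_depth, eps=1e-6):
    """Which step a horizontal position x is on, for uniform stairs
    starting at x = 0. eps guards against float error at step edges."""
    return int((x + eps) // step_depth)

def foot_height_on_stairs(x, step_depth, step_height):
    """Height of the stair surface under a foot at position x."""
    return step_index(x, step_depth) * step_height

# With the stride locked to one step depth, every footfall lands
# exactly one step higher, so a canned walk cycle stays aligned.
stride, depth, height = 0.3, 0.3, 0.17
heights = [foot_height_on_stairs(i * stride, depth, height) for i in range(4)]
```

The moment the stride and the step depth stop matching (non-standard stairs, player strafing), the footfalls drift off the step edges, which is exactly why the approach demands uniform stairs everywhere.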
Are there any good/recommended online courses for 3D modeling?
I was offered an online course in anything I want and don't know what else to get.
I'm not very artistic though, and I can't draw for shit, so I wonder if it's even a good idea.
Enroll in the foundational courses online from Gnomon; they're the best teachers you can get for 3D, active industry professionals.
Anatomy, Digital Sculpting, and Introduction to 3D with Maya are the ones you'll want to enroll in for the most part.
I model (in maya) for animation and NOT for games. Am I allowed to animate and render with models that are in smooth preview mode, or do I actually need to subdivide them until the polygonal edges are too small to see?
Also, which render engine should I be using? I've been lead to believe different engines are good for different things, so how do I know which is right for what I'm doing?
There is no one best way. You will have to model so many humans before you get good at it that where you start isn't really a factor that will determine the outcome in any way.
But typically you wanna focus where your weaknesses are.
Mental Ray allows you to render your smooth preview if you hit 3 before you render, so from my experience I rarely needed to smooth unless I had to use some edges to control the shape and optimize lines.
I have experience in visualization, mainly with the BIM program Revit, and I know a little about materials. I also have some experience using SketchUp for small-scale stuff with V-Ray.
I mainly do architecture designs, but now I want to move on to more organic stuff (humans, game assets, etc.). I usually use Autodesk programs, but I'm split on whether to learn Blender or 3ds Max. Blender is free, but most reviews say the UI is too bothersome to learn.
I want to learn more about easy modeling, easy UV unwrapping/texturing, and small animations at least. And specifically easy topology work and vertices.
This is the render I made for a working project.
Pls, someone: what is it called when you place basic geometry on a tracked video clip? I have done it but don't really know what it is called.
Came across this little problem suddenly. Could this be a driver problem, or is it from Blender or Luxrender?
This may sound like a stupid question. How do you extrude a plane out of a solid object in 3ds max (2015)?
Many games do this for hair and whatnot, so it clearly can't be impossible.
Shift-dragging the edge (as you would on a hold) seems like the obvious answer, but it does nothing. Extruding the edges just makes a ridge pop out with extra faces, and it won't let me weld or collapse those verts.
Extruding an edge sort of makes a "pyramid" out of it. So instead of just pulling out one plane, it splits the original selected edge into two.
Setting the width to 0 makes the desired look, but it still keeps the unneeded extra verts.
Can I ask why you are trying to do this? Because it goes against proper modeling conventions. You're allowing an edge to support three different faces, which breaks edge flow and means your modeling tools can no longer interpret the mesh correctly, not to mention it creates a surface junction that cannot smooth or shade properly; it's an impossible shape, a surface with no thickness attached to a surface that has it.
If you need to add a single-sided extrusion to a mesh, just create a polygon and have it penetrate the mesh; don't try to pull it out of the mesh. Geometry does not need to be physically connected to be part of the same model. This is how hair is done for most games: it's just polygon planes floating around the head, not actually attached.
Sorry for my ignorance, I'm still new at the low poly part of it. What I'm trying to do is make a low poly feather look sort of like how they made the neck ruffles and wings on this WoW gryphon.
So in order to keep that look, I just need two identically positioned planes (top/bottom) that aren't attached to the "body" of the mesh?
Then fuse the unneeded verts?
Just collapse the edge then?
If I were you, I'd set the width to 0, then select the two verts that are at the same location and merge them.
I'm not sure about Max, so I'm just saying what I'd do in Maya if extrude resulted in those bevels.
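In case it helps, "select the verts at the same location and merge them" is just a weld-by-distance pass; here is the bare logic as standalone Python, not Max or Maya API code:

```python
def weld_by_distance(verts, threshold=1e-4):
    """Collapse vertices closer than `threshold` into one. Returns the
    deduplicated vert list and a map from old index to new index."""
    welded, remap = [], {}
    for i, v in enumerate(verts):
        for j, w in enumerate(welded):
            if all(abs(a - b) <= threshold for a, b in zip(v, w)):
                remap[i] = j        # close enough: reuse the existing vert
                break
        else:
            remap[i] = len(welded)  # genuinely new vert
            welded.append(v)
    return welded, remap

verts = [(0, 0, 0), (1, 0, 0), (1, 0, 0.00001)]
welded, remap = weld_by_distance(verts)  # the last two verts collapse into one
```

The remap table is what a real tool would use to rewrite face indices after the weld; the threshold plays the same role as the distance setting in Max's or Maya's merge-vertices dialogs.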
>when the camera moves too far
How? The camera isn't even far away from the sphere.
3ds Max question here:
Do you guys know a way to add a material modifier to a selection of objects and assign a number to each of them sequentially?
Like, if I have 10 objects, give them all a material modifier and set the ID numbers from 1 to 10.
I'm doing it manually and it's a pain in the ass.
Here is an off-topic render of a boat I'm doing.
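The sequential numbering itself is one loop once you have the selection; here is the pattern in plain Python over a stand-in object list (the `Obj` class and `material_id` attribute are made up; in Max you would run the same loop over the actual selection via MaxScript or pymxs):

```python
class Obj:
    """Stand-in for a scene object; material_id mimics the modifier's ID."""
    def __init__(self, name):
        self.name = name
        self.material_id = None

selection = [Obj(f"boat_part_{i}") for i in range(10)]

# enumerate from 1 so ten selected objects get IDs 1 through 10
for material_id, obj in enumerate(selection, start=1):
    obj.material_id = material_id

ids = [o.material_id for o in selection]  # [1, 2, ..., 10]
```

The point is just that the ID comes from the object's position in the selection order, so the whole manual chore reduces to a single counted loop.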
>The bump mapping is only at .1
Well, how big is that sphere?
If you are assuming 1 Blender unit = 1 meter, then that's a 10 cm bump map.
Does the problem get better when you remove the bumpmap?
>Does the problem get better when you remove the bumpmap?
However on a side note when I added a plane under the sphere to allow some bounced light the shading looked normal again. Could it also be from insufficient light data? What about lamp sizes?
>Could it also be from insufficient light data? What about lamp sizes?
What kind of lamp are you using?
With a point or spot light there's not much you can do; with most other lights you can try increasing the size to make the shadows softer.
E.g. you can just scale an area light, or increase the "relative size" parameters of a sunlight, or increase the "theta angle" of a distant light.
This might help.
But if removing the bump mapping fixes the problem, try to decrease the bump amount as well.
Or, if you need such a strong bump effect, consider to use more subdivisions + displacement.
Or try a combination of all three of these.
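The reason light size matters so much here: the penumbra (soft edge) of a shadow scales with the light's size via similar triangles. A rough sketch of that relationship (the function and names are mine, not Blender's):

```python
def penumbra_width(light_size, dist_light_to_blocker, dist_blocker_to_receiver):
    """Approximate soft-shadow penumbra width from similar triangles:
    a bigger light, or a receiver farther behind the blocker, means a softer edge."""
    return light_size * dist_blocker_to_receiver / dist_light_to_blocker

# A 0.1-unit lamp two units away gives an almost hard-edged shadow...
small = penumbra_width(0.1, 2.0, 0.5)   # 0.025 units
# ...while a 1-unit area light gives a visibly soft one.
large = penumbra_width(1.0, 2.0, 0.5)   # 0.25 units
```

Which is why a 0.1-size lamp on a 1 m sphere looks like a razor-sharp point light.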
I used an area lamp, but what I didn't notice before was the size. It was like .1 or something, casting onto a 1 meter sphere. I still feel stupid for not seeing that. Anyway, it works fine now, even with the bump mapping.
Thanks for the advice though.
For organic forms, you should always maintain quads unless the triangle is going to be hidden away in some unseen place, like inside the nostril. Quads allow your modeling tools to keep working properly, and allow your mesh to subdivide/smooth and deform properly as well.
Webbing is hard to do with poly modeling, you really should just sculpt it. Hand webbing is something you'd sculpt blend-shapes for so you can shift to that blend-shape as the fingers open.
Then I'm really sorry anon, I'm a Maya user and I have no idea what Max is like anymore, since I always assumed it would be fundamentally similar to Maya
Try to get a Max user's attention
So why is this spotlight not showing while rendering? Shading is set to GLSL.
I'm running an i5 3570k @ 3.4ghz w/ 8gig ram
I use Maya 2015 and mental ray to render
What sort of benefit (if any) will I see if I switch to a £250 i7 and 16gig ram? 20% less render time? Something like that?
16gb is a bare minimum regardless of your CPU/GPU. If you have more than one program open (Maya/photoshop/Zbrush/Mudbox) you're going to hit 8gb in no time. Then the RAM gets paged and your computer slows down to a crawl.
Hey, Anon. What would you say to this setup for a selfmade rendering cluster. Software would be Blender/Cycles using Blenders own network render capabilities. Everything running on Xubuntu, or maybe Ubuntu Server.
Don't pay more money for an overclocked GPU, especially 970. Find the cheapest 4GB 970, and simply use MSI Afterburner to overclock it yourself. Save a lot of money and get a higher OC.
Pentium G4500? The fuck are you thinking? You only ever plan to use the node for pure GPU rendering? I guess that's ok then, but you're still kind of really limiting yourself. The CPU still needs to do a decent amount of compute preparing things. And why would you buy DDR4 RAM if you're not going to be using CPU rendering?
GPU rendering does not use your system RAM, it only uses the VRAM. So that RAM is going to waste, especially paying for DDR4.
Would anybody here know how to convert this model from 3DWarehouse into a .tah file that can be used with 3D Customg Girl?
I used to know a guy who could convert them but I can't reach him anymore. Any help would be appreciated. Thank you.
How do I start legs?
I've been using the Extrude tool so far, but how do I divide the legs out?
Alright, can someone explain to me how do I make hair/skirt physics in blender?
I'm making a character that I will later import to unity, but I have no idea how to make the hair react to gravity/movement.
How do I create, say, hair physics like in MMD models?
I'm asking because in every damn place I ask and lurk, they just say "use cloth simulation"
Now this is one super NOT-practical way to make physics that run in a game.
You do stuff like that through scripting. Custom 'Verlet integration' based particles are very popular for that type of feature.
This Gamasutra paper by Thomas Jakobsen on the physics from the original Hitman game is the starting point just about everyone will have based their first solutions on.
That is, nothing comes from Blender except the bones+skin or the cloth/hair mesh; all the physics calculations are handled by a script inside Unity that you write yourself, have your programmer write for you, purchase from someone else, etc.
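To make the 'Verlet integration' idea concrete, here's a minimal sketch of the core step the Jakobsen-style approach builds on: positions advance from the current and previous position (velocity is implicit), then constraints are enforced by directly moving points. In a real game you'd write this in your engine's language (e.g. C# for Unity); the names here are mine:

```python
def verlet_step(pos, prev, accel, dt):
    """Advance one point: x_new = 2*x - x_prev + a*dt^2 (velocity is implicit
    in the difference between current and previous position)."""
    new = [2 * p - q + a * dt * dt for p, q, a in zip(pos, prev, accel)]
    return new, pos  # the old position becomes the new "previous"

def constrain_to_root(root, tip, rest):
    """Project the free point back onto a sphere of radius `rest` around a
    pinned root, enforcing a fixed segment length (e.g. one hair segment)."""
    delta = [t - r for r, t in zip(root, tip)]
    dist = sum(d * d for d in delta) ** 0.5 or 1e-9
    return [r + d * rest / dist for r, d in zip(root, delta)]

# One hair segment: root pinned at the scalp, tip starts held out horizontally
# and swings down under gravity while keeping its length, like a pendulum.
root = [0.0, 0.0]
tip, tip_prev = [1.0, 0.0], [1.0, 0.0]
gravity = [0.0, -9.8]
for _ in range(30):
    tip, tip_prev = verlet_step(tip, tip_prev, gravity, 1 / 30)
    tip = constrain_to_root(root, tip, 1.0)
```

Chain several of these segments per strand, run the constraint pass a few times per frame, and you get stable, cheap hair/skirt motion with no cloth simulator involved.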
Making some bathroom tiles using displacement maps. My displacement map is, as far as I can physically see, completely straight, black and white. I don't see any reason for any sort of variation on this level.
Why am I getting those artefacts where it should just be completely straight? pic related. many thanks
Does a 3D sculpt of the Helios bust exist already?
I seriously want to 3D print one to put on my desk so I am a e s t h e t i c
To make a anime character is it best to just leave the eyes out then have a series of faces overlay the head?
Been looking up some tutorials and stuff and everyone has such different methods on how to model
This one guy started with a ball and just grabbed groups and edited to make it look like a head, like what I have done here
Then theres this other guy who does EVERY line by hand, probably will take a ton of experience later
And another guy who just uses the Extrude tool the entire time to get most of the shape
What are the best way to learn the basics?
Would the eyes be inside the head in the normal tradition, with eyeballs?
Or would they just not have eye holes, with an animated image layer where the eyes would be?
With that style, basically you're gonna have an "eyeball" in the head
However, depending on the proportions of the eye, the "eyeball" will probably be a flattened sphere or just a curved plane, with the texture projected onto it
I'm not too sure how Xrd does it, but the project I'm working on right now uses the method I just mentioned
You COULD keep the eye and face as one mesh, but I feel that with a separate eye and face the projection works much more easily: the "eyesocket" of the face blocks off the hidden part of the eye, instead of you having to somehow mask the projection to only the eye area of a single mesh.
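The "projection" being described is essentially planar mapping: each vertex's position along two axes gets remapped into 0-1 UV space, so a flat eye texture lands on the curved eye plane, and sliding the projection window (e.g. via the eye controller) scrolls the texture. A toy sketch of that mapping (the axis choice and ranges are made up for illustration):

```python
def planar_uv(point, u_min, u_max, v_min, v_max):
    """Map a 3D point's X/Y coordinates into 0-1 UV space (front projection).
    Moving the window bounds effectively animates the projected texture."""
    x, y, _z = point  # depth (Z) is ignored by a planar projection
    u = (x - u_min) / (u_max - u_min)
    v = (y - v_min) / (v_max - v_min)
    return u, v

# A vertex centered in the projection window samples the middle of the texture,
# regardless of how far forward the curved eye surface bulges.
print(planar_uv((0.0, 0.0, 0.3), -1.0, 1.0, -1.0, 1.0))  # (0.5, 0.5)
```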
I see, so the eyeball is still there but it never needs to move, but this way eyelids can still work the same?
Got any pictures? Im new to all this kinda stuff and find everything fascinating
Here, it's the test I did early on of the project when I was fighting for non-spherical eyes
Very simple, it's just a texture projected following the controller you see moving
It shouldn't be a problem, you just need to parent it to the "head" joint (what is usually after the neck joint) so that it would follow the head when it moves around
Here's a very easy tutorial to illustrate it all (I'd personally mute it since it's just Touhou music):
And for if you're using one mesh and animating the texture:
I'm trying to get a particle system (Blender) using shapes to randomly embed themselves into a plane (webm related), but it's not working as well as I had hoped.
It should be similar to dropping random objects into sand, some bury deeper than others.
What it should be doing is stopping the shape at random intervals, so as to appear like the shapes are buried at different depths, but they all seem to be stopping at the same point.
For the most part, I'm relying on the damping section of the collision physics on the ground plane to stop the particles.
I've tried adjusting the permeability setting, but it seems to stop them, and then let them go through at random times.
I've also tried adjusting the physics properties of the particles themselves, notably the integration setting of the physics.
Here's my settings for both the particle emitter, and the ground plane, if it helps.
Actually, I forgot to mention.
My problem is I want them to actually be embedded a bit more shallow.
With what it's at in the webm being the maximum.
As for the previous solution, they end up stopping, but after a bit, fall right through to the 0% plane.
Use an invisible plane to stop them early, and to prevent stopped particles from falling through later have it so that any particles it stops are killed, setting the particles to still render if dead.
I've kind of done a combination of these two.
I have a displaced plane emitting upwards, and a plane clipping into it that dampens it.
The ones above the plane stay above, and the ones below get embedded at a shallower depth.
Thanks everyone. I just couldnt really think of a good solution because I was fixated on having it done through the parameters alone.
There's a waaaaay more efficient way of doing this.
Have the plane emit particles with a very high normal or Z velocity, and then give the plane a texture that will affect particle velocity.
Because particles move away from their point of origin even in the frame they are emitted, if you have them all emit in frame 1 and do the render in frame 1 the particles will still be displaced by their velocity, thus the velocity texture will have randomised the particle positions.
Don't forget to set the velocity of the particle very high so that it's displaced significantly in the first frame of its existence (the texture influencing velocity just reduces velocity btw, white is what you set the velocity to and black is zero)
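In effect, the depth each particle ends up at is just the emit velocity scaled by the texture value over that first frame, so a grayscale noise texture becomes per-particle burial depth. A sketch of the arithmetic (the numbers are stand-ins, not Blender defaults):

```python
def first_frame_depth(base_velocity, texture_value, dt=1 / 24):
    """Displacement in the emission frame: the texture value (0 = black,
    1 = white) scales the emit velocity, so brighter texels displace farther."""
    return base_velocity * texture_value * dt

# With a high base velocity, noise texels spread particles across a visible range.
depths = [first_frame_depth(48.0, t) for t in (0.1, 0.5, 1.0)]
print(depths)  # [0.2, 1.0, 2.0] units from the emitter plane
```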
This actually works pretty well, and it's a bit easier to fine tune.
Of course, it all depends on how you write your shader. Outside of us labeling it as such, there is nothing special about a 'normal map' that makes it a 'normal map':
it's just that inside the shader there is a function that decodes the 0-1 range of each RGB value of the texture sample into an XYZ direction.
Write or modify that function to calculate the value differently and you can derive your normals from maps or any other source however you see fit.
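The decode function mentioned above is the standard remap from 0..1 color to a -1..1 direction. A minimal sketch:

```python
def decode_normal(r, g, b):
    """Decode an RGB sample (each channel 0..1) into a tangent-space
    direction: n = rgb * 2 - 1, then normalize."""
    n = [2.0 * c - 1.0 for c in (r, g, b)]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

# The familiar "flat" normal-map blue (0.5, 0.5, 1.0) decodes to straight up.
print(decode_normal(0.5, 0.5, 1.0))  # [0.0, 0.0, 1.0]
```

Swap that remap for any other function of the texture (or of anything else) and the rest of the shader neither knows nor cares where the normals came from.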
What is it that you're trying to do?
I saw some beautiful examples of how, with color ramps (in Blender), the light that reflects on a mesh can be modified and amplified to create a more aesthetic effect. I wonder if that effect could be extended to the illumination coming from the normal map of the textures.
Perhaps this concept is what you've stumbled upon: http://wiki.polycount.com/wiki/BDRF_map
Such look up ramps should already interface with the normals of your surface, since that's what they're dependent upon to work.
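The ramp-lookup idea boils down to indexing a color gradient by a shading term like N·L instead of using the raw dot product as brightness; since the normal feeds that dot product, normal-mapped normals drive the ramp automatically. A toy sketch of the lookup (discrete bands stand in for a gradient):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ramp_shade(normal, light_dir, ramp):
    """Index a color ramp by the clamped N.L term instead of using it
    directly; this is how toon/BRDF-ramp looks reshape the light response."""
    t = max(0.0, min(1.0, dot(normal, light_dir)))
    return ramp[min(int(t * len(ramp)), len(ramp) - 1)]

ramp = ["shadow", "mid", "highlight"]
print(ramp_shade([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], ramp))  # highlight
```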
Would a curvature map recognize the corner between the cylinder and square?
alright, so this has been plaguing me since starting Maya: why do all the mia_x glass materials illuminate the shit out of everything behind them when they stack? What do I need to look into to make it look more realistic? (please ignore the frostiness etc, still WIP)
How do I begin to use render layers or passes?
I'm kind of confused as to what the proper term is. As I understand it, passes are just the diffuse/glossy/emit sort of thing, and layers are the different elements of the scene?
I'm trying to wrap my head around blender's compositor to work on my scene. For example, having my emit objects render on their own so I can add glow, or give the background atmospheric perspective.
I picked up the shader nodes pretty easily, but the compositor is pretty foreign to me.
Is there a decent tutorial to get me started on how to use it properly?
For the most part, I've just been rendering masks and shit and putting it together in photoshop, as well as glow and DoF, but it always seems to be a bit less accurate than doing it with the actual data.
I started messing with Blender in order to help my team in 4CC.
I think I got the basics of creating simple stuff down, but how do I transfer stuff from Blender to PES?
For example, I want to make it so the ball looks like a bucket, so players will be kicking a bucket around through the match.
So I made a bucket model in Blender.
How do I substitute? What files? Will I have to extract it from other files?
Will changing a ball model in such way result in change of physics?
I tried extruding an edge along a curve; the faces are there, but I can only see them when I have the model selected. When deselected, the faces don't show up. Anyone have any idea why?
So I was doing some model and shit
I think I got the hang of it doing the top, all still pretty basic
But then the legs came around and its like I forgot what legs looked like
I went to draw it on paper and it came out fine but I just cant get these fucking legs to work 3D
So I scrapped it and did it again from Scratch
this time I actually made some legs
Was it just because I had too many lines on it?
Is simpler better for early stuff like this?
What is this little patch of geometry doing way under the map? What is it for?
I want to do some lite CAD work around designing custom 3d printed human-machine interfaces.
I'm currently evaluating blender and PTC Creo Elements/Direct Modelling Express (what a mouthful).
Any other software suites I should look at?
>TFW I watched that right after I posted, it's been a big help. It makes more sense now. Do I have to separate everything every time I want to do something like this? Or could I just separate out just what I need?
Yes, you'll always have to separate out every pass, since if you separate out a specific render pass to edit, you'll need to properly recombine it later with the other passes (according to the combine equation) to get the proper final image.
Idk, maybe you could try to subtract out (in PS) just a single pass from the Combined pass, edit the single pass, and re-add it to see what kind of results you get? It might work ok even though it violates the order of operations in the combine equation.
Another similar method for this is to bake render data to UV textures on only objects you want to touch-up (like caustics on a floor plane, which you can then de-noise in isolation on that baked texture).
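For the recombination being discussed: in Cycles the lighting passes combine additively (roughly, Combined = diffuse + glossy + transmission + emission + ..., each lighting pass already being color x (direct + indirect)). A simplified sketch of the subtract-edit-re-add idea from the post above, with scalars standing in for whole images:

```python
def recombine(diffuse, glossy, emission):
    """Simplified Cycles-style combine: the lighting passes just add."""
    return diffuse + glossy + emission

combined = recombine(diffuse=0.4, glossy=0.2, emission=0.1)  # 0.7

# To edit one pass in isolation: subtract it from Combined, adjust, re-add.
without_emission = combined - 0.1        # 0.6
boosted = without_emission + 0.1 * 2.0   # emission doubled -> 0.8
```

Since this stage is purely additive it survives being done in Photoshop, but anything multiplicative (the color x light split within a pass) has to be recombined in the right order.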
I'm texturing a cellphone for a short film and I need to separate the screen image from the reflections so I can adjust them during comp. I probably already started this wrong, as my approach was to make the screen image more reflective. Any tips? Maya and Renderman
How would I go about creating textures like these?
This is a texture from a game called LSD Dream Emulator. I want to be able to create textures similar to this. I'm guessing they used some software or compressed images to get this effect. Or maybe someone just handpainted 50 of these.
Does anyone have any good guides for creating low-poly meshes such as vehicles, humans, buildings, etc.?
Just like the most efficient way to go about modelling this kind of stuff with anywhere from under 500 up to 1000 tris
Stuff like this
Months ago, I think in the thread before this one, I posted pic related. It's just a cylinder with a hole in the middle that Blender can't unwrap properly. An anon here suggested a solution that I can't remember or find in my files, something about Blender checking and fixing the object prior to UV mapping. Does anyone know this solution?
Alternatively, what should I do before marking seams to ensure the unwrapped islands will come out right?
Hey /3/, probably really fucking dumb question here.
So I have the 3D-scanned-.stl-model of a piece of wasps-nest.
I want to edit the 3D-model in C4D, so that the space between the 'floor'-planes (for a lack of better wording) is see-through or tunnel-like (see reference-photo of the object).
Whenever I'm trying to model that though, I get weird artifacts and don't know what I'm doing wrong (see close-up screenshot for reference). The tools I'm using in C4D are just the Pull and Wax tools in negative mode, where they just take stuff away from the model instead of adding it.
English is not my native language, so I don't even know what I'm supposed to Google to get around this problem. I feel like there has to be an easy AF solution that I'm totally missing; there's no way you CAN'T do that in 3D sculpting on a PC.
Hope someone here is able to help, cheers /3/