Why can't AI generate 3DCG models yet?

  1. 11 months ago
    Anonymous

    Not useable ones
    What works is creating a bunch of perspectives of the same prompt with an image diffusion model, turning that into a distance field and then into a textured mesh.
    The quality that produces is abysmal. Only usable for background assets.
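    The multi-view → distance-field → mesh pipeline above hinges on extracting a surface from a volumetric field. A minimal sketch of that idea, using a hand-written sphere SDF in place of the fused multi-view depth estimates (the grid size and SDF are illustrative only, not from any real pipeline):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    # Signed distance: negative inside the surface, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a regular grid, the way multi-view depth
# estimates get fused into a volume before meshing.
n = 64
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3), np.zeros(3), 0.5).reshape(n, n, n)

# Cells where the sign flips contain the surface; a real mesher
# (marching cubes) would emit triangles at exactly these crossings.
crossings = np.count_nonzero(np.sign(sdf[:-1]) != np.sign(sdf[1:]))
print(crossings > 0)  # True: the zero level set passes through the grid
```

    A real implementation would run marching cubes over this volume; the abysmal quality the post mentions comes from the noisy, inconsistent depth estimates being fused, not from this extraction step.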

  2. 11 months ago
    Anonymous

    Would be pretty neat but it would probably be hard to do layered things like clothes correctly. A basic model from a front/side chart should probably be doable but getting the training dataset might be an issue.

    Instead what I think AI could be good at is animation:
    >animating 3D models based on 2D footage
    seems like it would be pretty straightforward to train.
    And something that someone is probably already doing, and if not they should be:
    >generating in-between frames for 2D animation
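    As a baseline for the in-betweening idea: the naive version is a plain crossfade between two key frames. Real interpolators (optical-flow based, or learned models) warp pixels instead of blending them, which is the hard part; this sketch only shows the trivial baseline they have to beat:

```python
import numpy as np

def linear_inbetween(frame_a, frame_b, t):
    # Naive in-between: crossfade at blend factor t in [0, 1].
    # Learned interpolators warp pixels along motion instead of blending,
    # which avoids the ghosting this produces on moving content.
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

a = np.zeros((4, 4), dtype=np.float32)        # key frame 1
b = np.full((4, 4), 100.0, dtype=np.float32)  # key frame 2
mid = linear_inbetween(a, b, 0.5)
print(mid[0, 0])  # 50.0
```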

    • 11 months ago
      Anonymous

      >A basic model from a front/side chart should probably be doable but getting the training dataset might be an issue.
      I speculated before that there might be a relatively easy way to assemble a dataset.
      Sure you can't scrape the web for images but every game comes with several gigabytes of 3D assets.
      It will probably take some time till the currently pending lawsuits are decided, but if the AI companies win, it should provide enough of a precedent for buying essentially the entire steam library once, ripping all the meshes, textures and rigs, and using that as a dataset for training.

      • 11 months ago
        Anonymous

        I mean, yeah, you could probably do that to generate 3D models from text input.
        I think being able to generate 3D models from images would be much cooler, but that seems like it would be harder.

        • 11 months ago
          Anonymous

          The problem is that for a transformer to work you need a dataset of the kind of data you want to generate.
          You can take either approach; both text-to-3d and image-to-3d would work.
          Image-to-3d is arguably even easier, because you can just render the 3d asset and use the resulting images for training.
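          The "render the asset to get image-to-3d training pairs" idea can be sketched as a toy orthographic depth render; everything here (the resolution, the random point cloud standing in for a game mesh) is illustrative:

```python
import numpy as np

def orthographic_depth(vertices, res=32):
    # Crude orthographic depth render of a point set along +z.
    # Pairs like (depth image, source mesh) are "free" training data
    # when you already own the 3D asset - the point the post is making.
    depth = np.full((res, res), np.inf)
    # Map x, y in [-1, 1] to pixel coordinates.
    px = np.clip(((vertices[:, 0] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    py = np.clip(((vertices[:, 1] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    for x, y, z in zip(px, py, vertices[:, 2]):
        depth[y, x] = min(depth[y, x], z)  # keep the nearest surface
    return depth

# Random point cloud standing in for a ripped game asset.
rng = np.random.default_rng(0)
verts = rng.uniform(-1, 1, size=(500, 3))
img = orthographic_depth(verts)
print(np.isfinite(img).any())  # True: some pixels got hit
```

          A real data pipeline would render shaded RGB from many viewpoints with a proper rasterizer, but the principle is the same: the images come for free once you have the mesh.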

  3. 11 months ago
    Anonymous

    because the task is several orders of magnitude more difficult than generating 2d images.

    it's the same reason why you can't just walk into a 3d scanner and have a film/game ready avatar in an hour: between that scan and the final result you have to:
    >clean up the scan
    >replace the eyes, hair and mouth
    >retopologise the mesh or wrap a basemesh head around the scan and replace the body
    >resculpt lost details
    >uv the mesh
    >retexture the vast majority of the mesh
    >make props for the mesh
    >if going to a game optimise the mesh by baking maps and throwing away parts of geo under the clothes
    >rig the mesh and weight paint
    >create a groom, optimise the groom

    literally none of these problems can be automated right now and each of those steps is several hours to several dozens of hours of work depending on the quality you're looking for

    and then you have the issue where current 2d image generation quality isn't near the standard you need to be at to do similar things in 3d. you can't fudge details in 3d: your hair can't merge with clothing and your eyes can't bleed into your eyelids

    • 11 months ago
      Anonymous

      also, this is after walking into a multimillion dollar scanning rig. add another dozen or two hours if you're trying to do it at home.

      • 11 months ago
        Anonymous

        >also, this is after walking into a multimillion dollar scanning rig. add another dozen or two hours if you're trying to do it at home.
        kek, it's worse.
        The pop2 3d scanner was one of my impulse kickstarter buys. It's fricking useless because the point clouds never align properly.
        The resulting mesh is so bad it's better to just use reference images.

        • 11 months ago
          Anonymous

          lel, if it spits out raw pointclouds it might be worth just grabbing a few and meshing and aligning them manually in houdini.
          still only good for reference really.
          easier to just use a camera and something like zephyr at home if you've got the time and patience.
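          The "align them manually" step houdini would be doing has a well-known closed-form core: the Kabsch algorithm. A self-contained sketch, assuming known point correspondences and no noise (real scan pairs have neither, which is exactly why ICP has to iterate and why the pop2's clouds never line up):

```python
import numpy as np

def kabsch_align(src, dst):
    # Best-fit rotation + translation mapping src onto dst (Kabsch/SVD).
    # This is the inner step that ICP repeats when aligning raw scans.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))
# Simulate a second scan of the same object: rotated 90 deg about z, shifted.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan2 = cloud @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch_align(cloud, scan2)
aligned = cloud @ R.T + t
print(np.allclose(aligned, scan2, atol=1e-8))  # True: exact recovery here
```

          With real scanner output you'd first have to estimate which points correspond at all, and that estimation failing is what makes the cheap scanners useless.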

    • 11 months ago
      Anonymous

      I feel like teaching the AI to clean up 3D scans into functional models would actually be the easy part, again, if you have access to training data.

      • 11 months ago
        Anonymous

        it's a non-trivial task, but people are working on it.
        meta / disney / nvidia - they have access to the scanning facilities and the data/the money to create the data. the scan process itself also isn't fast and the data is very noisy, but nerf progress is decent.

        solving any of those problems to a decent degree would change game dev and vfx overnight. they're all expensive and time consuming.

        we're 5-10 years off any of this imo but i'll happily eat my hat if someone solves it sooner.

        • 11 months ago
          Anonymous

          I doubt we're still 10 years away. I'd personally estimate maybe 5 years tops, exactly because so many huge companies that already (theoretically) have lots of the needed data are working on it. I wouldn't be surprised if in 2-3 years Disney announces their first film using "AI made" 3D models built with proprietary in-house tech.

    • 11 months ago
      Anonymous

      This, and even for environment props it's still too complicated. Hell, even unwrapping UVs and baking normal maps isn't fully automated yet despite all the attempts. On paper it is, but in reality you have to clean up before or after the automated process.
      When AI can do game ready 3D, it'll be ready to do code as well and then the entirety of bot is fricked.
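      For a sense of why baking looks automated "on paper": the core math really is trivial. Deriving a tangent-space normal map from a height map, for instance, is just finite differences; the cleanup around it (seams, cage fitting, skewed projections) is the part nothing automates. A sketch, where the RGB packing and the `strength` parameter follow common convention rather than any particular baker's API:

```python
import numpy as np

def bake_normal_map(height, strength=1.0):
    # Tangent-space normals from a height map via finite differences.
    gy, gx = np.gradient(height.astype(np.float64))
    n = np.stack([-gx * strength,
                  -gy * strength,
                  np.ones_like(height, dtype=np.float64)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Pack [-1, 1] into the usual 0-255 RGB encoding.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = np.zeros((8, 8))
nmap = bake_normal_map(flat)
print(nmap[0, 0].tolist())  # flat surface -> straight-up normal [127, 127, 255]
```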

  4. 11 months ago
    Anonymous

    it can't even do perspective properly. it needs to be trained on 3d data, I don't see any other way.

  5. 11 months ago
    Anonymous

    For anyone interested in automating asset generation for 3D worlds, here's a little prototype I made. I didn't go any further and probably never will; I just tested an idea to see how far automation can currently go.
    Big objects = hand made models with fully generated tiled textures (even for furniture), just the color map, no other maps.
    Smaller objects = 2D images on a plane, fully generated in SD, but I still had to do the alpha manually.
    Characters = parappa style cutouts on a flat rigged mesh, both sides fully generated (yes, you can consistently generate two sides of one character if you get the prompt just right), alpha done manually.
    Very fast pipeline: one man can poop out an entire open world RPG like this, and those characters take literal minutes to make if you reuse the rig. Obviously they look rough as hell, but you can really push the scale.
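    The parappa-style cutout trick boils down to one textured quad per character, rotated about the vertical axis to face the camera each frame. A minimal sketch of the quad construction (the dimensions and function name are illustrative):

```python
import numpy as np

def billboard_quad(center, camera_pos, width=1.0, height=2.0):
    # Corners of a textured quad rotated about the y axis to face the
    # camera - one quad per character, yaw-only so it stays upright.
    to_cam = camera_pos - center
    yaw = np.arctan2(to_cam[0], to_cam[2])
    right = np.array([np.cos(yaw), 0.0, -np.sin(yaw)]) * (width / 2)
    up = np.array([0.0, height / 2, 0.0])
    return np.array([center - right - up, center + right - up,
                     center + right + up, center - right + up])

quad = billboard_quad(np.zeros(3), camera_pos=np.array([0.0, 0.0, 5.0]))
print(quad.shape)  # (4, 3): four corners in 3D, ready for a UV-mapped texture
```

    Yaw-only rotation (rather than a full look-at) is the usual choice for character cutouts, so they don't tilt backwards when the camera looks down at them.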

    • 11 months ago
      Anonymous

      Another one, even the street lamps are just 2D cutouts

    • 11 months ago
      Anonymous

      https://i.imgur.com/YGkFxxS.jpg

      >Another one, even the street lamps are just 2D cutouts

      Things like dream texture for blender is already a huge win.

      >lel, if it spits out raw pointclouds it might be worth just grabbing a few and meshing and aligning them manually in houdini.
      >still only good for reference really.
      >easier to just use a camera and something like zephyr at home if you've got the time and patience.

      I haven't figured out a way to get the raw point cloud data, at least not frame by frame.
      The hardware is just a depth camera and an rgb camera, but they did something to the firmware so you can't just access it as if it were a regular usb camera.
      I'm in the hopes that somebody will eventually reverse engineer it and fix it, but not me.
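      For reference, the math the locked-down firmware is sitting on is just pinhole back-projection: a depth image plus the camera intrinsics gives you the point cloud directly. A sketch with made-up intrinsics:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # Back-project a depth image into camera-space 3D points using the
    # pinhole model - all a depth camera fundamentally has to expose.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.full((4, 4), 2.0)  # fake 4x4 frame, everything 2 m away
cloud = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

      The intrinsics here are invented for the example; a reverse-engineered driver would need the real calibration values stored on the device.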

  6. 11 months ago
    Anonymous

    Cuz yall n1bbas ayn kno math, all these AI niQQaz just fkin with "big computer" "more data" sm.h famalam

  7. 11 months ago
    Anonymous

    AI is incapable of generating anything.
    It is just a photocopier that copies things together.
    Not only that but people are losing their jobs because of it.

  8. 11 months ago
    Anonymous

    There isn't a massive repository of 3D data to train it on like there is with images

    • 11 months ago
      Anonymous

      Technically, big AAA gaming companies should have giant repositories of models they've made over the years. Should, but probably don't, because they're moronic and don't keep their old data

  9. 11 months ago
    Anonymous

    ai pedo posters itt

  10. 11 months ago
    Anonymous

    When will this AI fad end so that I no longer have to hear morons asking it to do things it clearly isn't designed to do?

  11. 11 months ago
    Anonymous

    it can, or nvidia can
