
Generate Cool 3D Meshes Using Machine Learning
Image generation with deep learning models is all over Twitter these days. Whether it’s DALL-E, Midjourney, Stable Diffusion, or Craiyon, generative art has become a phenomenon that has even resulted in a segment on Last Week Tonight.
While these models only generate 2D images, there is a relatively simple way to convert those images into 3D models.
In this short tutorial, we’ll discuss how we can leverage machine learning (again) to generate cool 3D meshes from 2D images of generative art. The resulting mesh can then be viewed from different angles and used as content in your games, 3D videos, and more. Don’t worry if you don’t know how to code – it’s not needed here!

OK, let’s jump right into it.
The image above labeled “Source Image” shows an automatically generated image produced by Stable Diffusion using the prompt “A rendering of Pete Buttigieg”.
The algorithm gave it an artistic, cartoony spin. You can try it yourself at https://beta.dreamstudio.ai/.
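The tutorial itself only needs the DreamStudio web UI, but if you do like to code, a rough sketch of generating a similar source image with the Hugging Face diffusers library could look like the snippet below. The checkpoint name, output file name, and GPU assumption are mine, not part of the original workflow.

```python
# Hypothetical sketch: generating the source image programmatically with
# Stable Diffusion via Hugging Face's diffusers, instead of DreamStudio.
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; any Stable Diffusion checkpoint should work similarly.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA-capable GPU

image = pipe("A rendering of Pete Buttigieg").images[0]
image.save("source_image.png")  # placeholder output file name
```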
The images on the right show a 3D mesh that was generated automatically from the source image using modern machine learning techniques, which we’ll explain in a second.
Once we have this mesh, we can visualize it using any 3D modeling software, apply shading to it, and maybe a nice texture. Note that we can reuse the original source image as a texture.
However, I haven’t yet explained how we get from the source image to the mesh. Well, the missing element here is a process called “monocular depth estimation” (MDE).
To create a 3D mesh, we need to be able to assign each pixel a depth value that tells us how far it is from the camera.
Think of it as the distance from the camera to each point in the image. MDE algorithms do just that. Personally, I’m using Intel’s MiDaS for dense depth prediction. Below we see an example of the approximate depth of each pixel in an image. We call this the depth channel:

The color of each pixel in the depth channel (middle) tells us the distance to the camera – white pixels are closer to the camera, black pixels are further away. The third image above is a mesh generated from the information available in both the source image and the depth channel (middle). Roughly speaking, we take each pixel and move it backward or forward depending on the depth value. The result is a position in 3D space for each pixel.
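If you are curious how such a depth channel can be computed, here is a minimal sketch using Intel’s MiDaS through torch.hub. The input file name is a placeholder, and none of this is required to follow the rest of the tutorial.

```python
# Minimal sketch: monocular depth estimation with Intel's MiDaS (torch.hub).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")  # small, fast variant
midas.eval()

midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform  # preprocessing for MiDaS_small

img = cv2.imread("source_image.png")          # placeholder file name
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    # Resize the prediction back to the input resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()                  # per-pixel relative depth
```

And here is a toy sketch of the pixel-displacement idea itself: every pixel keeps its x/y position and is pushed along the view axis by its depth value. This only illustrates the concept; it is not what the website introduced below does internally, and the depth_intensity parameter is a made-up knob.

```python
import numpy as np

def depth_to_points(depth, depth_intensity=1.0):
    """depth: HxW array where larger values mean closer to the camera."""
    h, w = depth.shape
    # Normalize depth to [0, 1] so the displacement scale is predictable.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    zs = depth_intensity * d          # push closer pixels further out
    return np.stack([xs, -ys, zs], axis=-1).reshape(-1, 3)  # N x 3 vertices
```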
The good news is that there is a lovely website that can do all of the above for us with just one click. Just open https://picto3d.com/ and upload the picture you want to convert to a mesh.
In our case, we’ll take a generated photo of a person who looks like Uncle Walt and turn it into a mesh.

The site allows you to modify the intensity of the depth, in other words, how deep the scene is. There are many other parameters you can play with. Most importantly, the website allows you to store the resulting 3D model in a variety of file formats. I would recommend using .stl as it is widely used. We’re finished, aren’t we?
Well, unfortunately not! While it’s not visible in the picture above, the heavy lifting is done by the texture and the mesh isn’t that great yet. However, we can do something about it. For that, let’s clean it up in 3D modeling software. Below I’ll use Blender, a free software tool available for Windows, Mac, and Linux. You can download it here: https://www.blender.org/
Opening Blender and importing the downloaded *.stl file looks something like this.

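As an aside for scripting fans, the same import can be done from Blender’s Python console. The exact operator depends on your Blender version (older releases ship the STL add-on, Blender 4.x has a built-in importer), and the file path is a placeholder.

```python
# Sketch: importing the downloaded STL from Blender's Python console.
import bpy

# Blender 2.8x-3.x (STL add-on):
bpy.ops.import_mesh.stl(filepath="/path/to/model.stl")   # placeholder path
# Blender 4.x alternative:
# bpy.ops.wm.stl_import(filepath="/path/to/model.stl")

obj = bpy.context.selected_objects[0]  # the freshly imported mesh
```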
Pretty good, but not good enough. If you click on the object and then right-click on it, a menu will expand. The very first entry in the menu is a command called Shade Smooth, which we will apply here. The result looks a little better than before:

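The same Shade Smooth command is also available through Blender’s Python API, in case you prefer to script these cleanup steps. A sketch, assuming the imported mesh is the active object:

```python
# Sketch: apply smooth shading to the active object via bpy.
import bpy

obj = bpy.context.active_object   # assumes the mesh is selected/active
bpy.ops.object.shade_smooth()
```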
So far, we haven’t really changed the quality of the model; we just switched from flat shading to Gouraud shading (for all the geeks out there). Now it’s time to really do something about the roughness of the model. For that, we will apply a modifier. You can find the modifiers menu by clicking on the button highlighted by an arrow in the picture above. Just click on the button and then “Add Modifier”.
We will use the Laplacian Smooth modifier. After the modifier is selected, set Repeat to 2 and the Lambda Factor to 4.6. The Repeat value defines how many times smoothing will be applied, and the lambda value roughly specifies the degree of smoothing; higher values mean more smoothing. The result looks like this:

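For completeness, the same modifier settings can be applied via bpy as well (a sketch, again assuming the mesh is the active object):

```python
# Sketch: add a Laplacian Smooth modifier with the values used above.
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="LaplacianSmooth", type='LAPLACIANSMOOTH')
mod.iterations = 2        # "Repeat" in the UI
mod.lambda_factor = 4.6   # "Lambda Factor" in the UI
```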
Very neat, isn’t it? OK, so now we need some texture.
For that, go to the Material Properties menu whose button is highlighted by an up arrow.
Clicking “New” creates a new material. After the new material is created, we need to set its Base Color to the original source image of Uncle Walt. The gif below shows how to do this. Basically, we click on the yellow dot next to the Base Color field, select Image Texture, and load the source image from a file.

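The same material setup can be scripted too. The sketch below creates a new material and wires an Image Texture node into the Principled BSDF’s Base Color input; the image path is a placeholder.

```python
# Sketch: create a material whose Base Color comes from the source image.
import bpy

obj = bpy.context.active_object
mat = bpy.data.materials.new(name="SourceImage")
mat.use_nodes = True

nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes.get("Principled BSDF")

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/source_image.png")  # placeholder
links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

obj.data.materials.append(mat)  # assign the material to the mesh
```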
Now the only thing left to do is to make sure that the texture is in the right place on the model.
For that, we can go to the UV Editing workspace. In the left part of the UV editor, we select the source image.
In the right part, we select the menu entry “UV > Project From View”. The following gif visualizes this.

The orange dots on top of our texture are the mesh’s UV coordinates. We need to make sure they overlap well with the image.
This can be done by pressing the keyboard button “s” for scaling.
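If you want to script this step instead, the operator below is the rough equivalent. Note that Project From View depends on the current 3D Viewport, so it is normally run interactively (from a script it needs a viewport context override), and scale_to_bounds roughly replaces the manual scaling with “s”.

```python
# Sketch: unwrap by projecting from the current view, then fit to bounds.
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.project_from_view(scale_to_bounds=True)  # needs a 3D View context
bpy.ops.object.mode_set(mode='OBJECT')
```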
Now we can go back to the Layout menu and, voilà, a 3D model.

Once the model is in Blender and cleaned up, we can add lights, change the perspective, add other objects, etc.
This can yield some really impressive results. For example, see the picture below.


That’s all. I hope you liked the tutorial and its insights. Looking forward to seeing some cool models!