I don't know about you, but the idea of turning an animation into a texture just blows my mind. The ability to convert a very complex animation with dynamics or fluids into a texture is very exciting to me and opens up a lot of possibilities.
So where to start? My first attempt was to go through MEL. What I came up with was using the number of vertices as the width and the number of frames as the height. I created a plane with those dimensions, then got the position of each vertex at every frame, converted it to an RGB value, and assigned that as a vertex color to the corresponding vertex face. I added a camera and edited the render settings so that the resolution gate fit the plane exactly (width and height). So now, in theory, if I rendered out my image I should have had the texture I needed. But I couldn't get the render or bake to give me an exactly pixel-perfect result, and since pixel-perfect color is the most important thing here, I needed to find another way.
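For context, the data gathering itself is the easy part. A minimal sketch of that step could look something like this (the mesh name and frame range are placeholders, and this is a reconstruction of the idea rather than the exact script):

import maya.cmds as cmds

def gather_positions(mesh, start_frame, end_frame):
    # One row per frame, one column per vertex: this becomes the texture.
    num_verts = cmds.polyEvaluate(mesh, vertex=True)
    rows = []
    for frame in range(start_frame, end_frame + 1):
        cmds.currentTime(frame)
        row = [cmds.pointPosition('{0}.vtx[{1}]'.format(mesh, i), world=True)
               for i in range(num_verts)]
        rows.append(row)
    return rows

positions = gather_positions('pSphere1', 1, 24)  # placeholder mesh and range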
Enter Python and the OpenMaya.MImage class. This lets you create an array of values (width * height * depth) that you can fill and then write out as a texture, which is exactly what I needed.
import maya.OpenMaya as OpenMaya

# m_width (vertex count), m_height (frame count), m_depth (4 for RGBA),
# m_color, m_path and m_fileName are assumed to be set up earlier.
m_image = OpenMaya.MImage()
m_image.create(m_width, m_height, m_depth)
m_pixels = m_image.pixels()
m_arrayLen = m_width * m_height * m_depth
m_util = OpenMaya.MScriptUtil()
for i in range(0, m_arrayLen, m_depth):
    m_util.setUcharArray(m_pixels, i + 0, m_color[0])  # R
    m_util.setUcharArray(m_pixels, i + 1, m_color[1])  # G
    m_util.setUcharArray(m_pixels, i + 2, m_color[2])  # B
    m_util.setUcharArray(m_pixels, i + 3, 0)           # A (offset 3, not 4)
m_image.setPixels(m_pixels, m_width, m_height)
m_image.writeToFile('{}/{}.png'.format(m_path, m_fileName), 'png')
Although I hadn't done anything in Python before, I felt confident enough that I could fairly quickly adapt and recreate my script. Before I could use this, my script needed to know which vertex corresponds to which column. To do that I create a new UV set in which all vertices are laid out in order from left to right, each centered on its pixel. I also use that UV set in my shader, since the vertex order in Unity and Maya didn't match.
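A rough sketch of that UV layout step, using the API 1.0 MFnMesh (the mesh and UV set names are placeholders, and the face-vertex assignment at the end is only outlined):

import maya.OpenMaya as OpenMaya

sel = OpenMaya.MSelectionList()
sel.add('pSphere1')  # placeholder mesh name
dag_path = OpenMaya.MDagPath()
sel.getDagPath(0, dag_path)
fn_mesh = OpenMaya.MFnMesh(dag_path)

uv_set = fn_mesh.createUVSetWithName('vertexAnimUV')
num_verts = fn_mesh.numVertices()
u_array = OpenMaya.MFloatArray()
v_array = OpenMaya.MFloatArray()
for i in range(num_verts):
    # Vertex i samples the centre of column i so filtering can't bleed.
    u_array.append((i + 0.5) / num_verts)
    v_array.append(0.5)
fn_mesh.setUVs(u_array, v_array, uv_set)
# Each face corner then needs assignUV(faceId, faceVertexIndex, uvId, uv_set)
# so that every face-vertex points at its own vertex's column.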
Another thing is that I need to convert the position floats into a 0-1 range for the RGB values. I do this by looping through all the vertex positions (XYZ) and determining the highest absolute value. I later need this value to get the correct scale back in the shader, so I store it in the export name of the file.

I've also been looking at ways to get the scale factor without the user having to enter it manually. What I did was add one extra row of pixels at the end, where I store the value within the first two pixels: the first pixel's RGBA values hold the integer part and the second pixel's the fractional part. It works, but I don't think it's the way to go. Since my shader supports files with multiple animations in them, for which you would need all the start and end frames, it would be an option to use a meta file containing the scale and the start and end frames. A separate C# script could then read out these values and apply them to the shader.
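The remap itself is simple. One plausible sketch (my own naming, assuming the positions list from the gathering step above) remaps [-scale, scale] to [0, 1] so negative coordinates survive the trip to unsigned color channels:

def normalize_positions(positions):
    # Highest absolute component across every vertex and frame.
    scale = max(abs(c) for row in positions for point in row for c in point)
    # Remap [-scale, scale] to [0, 1]; the shader undoes this
    # with (value * 2 - 1) * scale.
    remapped = [[tuple(c / scale * 0.5 + 0.5 for c in point) for point in row]
                for row in positions]
    return remapped, scale

The scale then travels in the file name, e.g. something like 'walk_scale_12.345.png'.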
I would still like to look into a 16-bit image, since that would mean I could store float values within the image and get more precision. Sadly, Maya kept crashing when I tried to create a float array, so I left it out.