In this project, we replicate and extend ViCMA (Visual Control of Multibody Animations), a visual control method that exploits object motion and visibility to manipulate shading during an animation sequence. For replication, we used Blender to construct simulations and built on our cloth simulation code to implement the ViCMA shading control. In doing so, we modified the rendering pipeline to support polyhedral structures, generalized spatial hashing, and per-object texture mapping. As extensions, we added support for gradient color transitions (as opposed to single-frame flips) and modified the ViCMA heuristic to include color locality (beyond just the position and velocity localities). Our full implementation is available on GitHub: https://github.com/jonnypei/illusion-sim.
We constructed our simulations in Blender (e.g. the `pachinkotestwithtriangles` scene) and exported their data with a `bpy` script. The script generates two important resources: a custom `scene.json` and a set of `anim_files` for each object.
`scene.json`

The `scene.json` contains initialization parameters for both static and dynamic objects in the scene, recording their starting location, color settings, and render parameters for non-spherical meshes. Below is an example entry in `scene.json`:

```json
"Sphere.001": {
  "start_color": [0.75, 0.26, 1, 0.9],
  "end_color": [0.3, 1, 0, 0.9],
  "origin": [-8.361442565917969, 26.15984535217285, 29.98017120361328],
  "radius": 1
}
```
`anim_files`

`anim_files` is a folder containing files labeled `scene_name/object_name.txt`, each of which records the position data of a dynamic object at every frame of the simulation. In each file, every three entries correspond to one (x, y, z) position.
```python
v = item.data.vertices[vertex_ind].co  # vertex position in local coordinates
mat = item.matrix_world                # object's local-to-world transform
v = mat @ v                            # vertex position in world coordinates
```
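The `mat @ v` step above is just a 4x4 world matrix applied to the vertex in homogeneous coordinates. A dependency-free Python sketch of the same operation (illustrative only, not from the project code):

```python
def to_world(mat, v):
    """Apply a 4x4 row-major world matrix to a local 3D point (homogeneous coords)."""
    x, y, z = v
    vh = (x, y, z, 1.0)  # promote to homogeneous coordinates
    return tuple(sum(mat[r][c] * vh[c] for c in range(4)) for r in range(3))

# Identity rotation plus a translation of (1, 2, 3).
world = [[1, 0, 0, 1],
         [0, 1, 0, 2],
         [0, 0, 1, 3],
         [0, 0, 0, 1]]
assert to_world(world, (0.5, 0.0, 0.0)) == (1.5, 2.0, 3.0)
```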
`AnimationObject` structs

To load `scene.json` and `anim_files` into the simulator, we first created a new set of `AnimationObject` structures that could load in and process the extra data. Two structs, `Spheres` and `Polygons`, were implemented to handle this task. Below are some notable object parameters and functions that were implemented for each of these classes in the simulator.
```cpp
int curr_frame;
int start_transition_frame;
vector<Vector3D> positions;
vector<Vector3D> velocities;
nanogui::Color start_color;
nanogui::Color curr_color;
nanogui::Color end_color;
shape objShape;
string name;

void render(GLShader &shader);
void simulate();
void reset();
```

The process of simulating `AnimationObject`s was simple, consisting of updating the `curr_frame` parameter. The `render()` function then uses `curr_frame` to update the position of the rendered object based on `positions`.
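The simulate/render split can be sketched as follows (a hypothetical Python mirror of the C++ struct, shown only to illustrate the frame-stepping logic):

```python
class AnimationObject:
    def __init__(self, positions, start_color, end_color):
        self.curr_frame = 0
        self.positions = positions      # one (x, y, z) per frame, from an anim_file
        self.start_color = start_color
        self.curr_color = start_color
        self.end_color = end_color

    def simulate(self):
        # Advance one frame, clamping at the last recorded position.
        self.curr_frame = min(self.curr_frame + 1, len(self.positions) - 1)

    def render(self):
        # Rendering just looks up the precomputed position for this frame.
        return self.positions[self.curr_frame]

    def reset(self):
        self.curr_frame = 0

obj = AnimationObject([(0, 0, 0), (0, 0, 1), (0, 0, 2)], (1, 0, 0, 1), (0, 1, 0, 1))
obj.simulate()
print(obj.render())  # -> (0, 0, 1)
```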
Loading the `.json` files and `anim_files` was handled by modifications to `loadObjectsFromFile`. Basic key-switch operations were added to include the new polygons, and we also implemented the `readFileAndGetPosVec` function to convert each `anim_file` into a `vector<Vector3D>` of positions.
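The conversion `readFileAndGetPosVec` performs can be sketched in Python (an analogue of the C++ helper, assuming the three-floats-per-position format described earlier):

```python
def read_positions(text):
    """Parse an anim_file's whitespace-separated floats into (x, y, z) tuples."""
    values = [float(tok) for tok in text.split()]
    assert len(values) % 3 == 0, "anim_file length must be a multiple of 3"
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]

# Two frames of a single object's trajectory.
sample = "0.0 1.0 2.0\n0.5 1.5 2.5"
print(read_positions(sample))  # -> [(0.0, 1.0, 2.0), (0.5, 1.5, 2.5)]
```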
```cpp
void AnimationObject::renderPoly(GLShader &shader) {
  Matrix4f model;
  model << 1, 0, 0, 0,
           0, 1, 0, 0,
           0, 0, 1, 0,
           0, 0, 0, 1;
  shader.setUniform("u_model", model);

  // Compute the centroid so vertices can be scaled about the object's center.
  Vector3D centroid = Vector3D(0, 0, 0);
  vector<Vector3D> scaled_vertices;
  for (auto &vertex : vertices) {
    centroid += vertex;
  }
  centroid /= vertices.size();
  for (auto &vertex : vertices) {
    // Scale factor is currently 1 (i.e. no scaling), kept for tuning.
    scaled_vertices.push_back((vertex - centroid) * 1 + centroid);
  }

  // Render each face as a triangle fan.
  for (auto &face : faces) {
    MatrixXf positions(3, face.size());
    MatrixXf normals(3, face.size());
    for (int i = 0; i < face.size(); i++) {
      positions.col(i) << scaled_vertices[face[i]].x,
                          scaled_vertices[face[i]].y,
                          scaled_vertices[face[i]].z;
      normals.col(i) << vertex_normals[face[i]].x,
                        vertex_normals[face[i]].y,
                        -vertex_normals[face[i]].z;
    }
    if (shader.uniform("u_color", false) != -1) {
      shader.setUniform("u_color", this->curr_color);
    }
    shader.uploadAttrib("in_position", positions);
    if (shader.attrib("in_normal", false) != -1) {
      shader.uploadAttrib("in_normal", normals);
    }
    shader.drawArray(GL_TRIANGLE_FAN, 0, face.size());
  }
}
```
The rest of the integration was done in `clothSimulator.cpp`, mainly by writing logic in `ClothSimulator::drawContents()`; the `.json` files are loaded in `main.cpp`. Then, during rendering, we simply perform an appearance change for a given object at its transition frame.
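The single-frame flip, and the gradient extension mentioned in the introduction, both reduce to choosing `curr_color` per frame. A hedged Python sketch (function and parameter names are illustrative, not the project's API):

```python
def color_at_frame(frame, start_transition_frame, transition_length,
                   start_color, end_color):
    """Single-frame flip when transition_length == 1; linear gradient otherwise."""
    if frame < start_transition_frame:
        return start_color
    # Fraction of the transition completed, clamped to 1.
    t = min((frame - start_transition_frame + 1) / transition_length, 1.0)
    return tuple(s + t * (e - s) for s, e in zip(start_color, end_color))

# Flip: before frame 10 the object is red; from frame 10 onward it is green.
assert color_at_frame(9, 10, 1, (1, 0, 0, 1), (0, 1, 0, 1)) == (1, 0, 0, 1)
assert color_at_frame(10, 10, 1, (1, 0, 0, 1), (0, 1, 0, 1)) == (0.0, 1.0, 0.0, 1.0)
```

With `transition_length > 1` the same function yields the gradient transition: the color is linearly interpolated over that many frames instead of flipping at once.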