16-825 Assignment 1: Rendering Basics with PyTorch3D¶

Sayan Mondal¶

Goals: In this assignment, you will learn the basics of rendering with PyTorch3D, explore 3D representations, and practice constructing simple geometry.

You may also find it helpful to follow the PyTorch3D tutorials.

Number of late days used¶

Late days

1. Practicing with Cameras¶

1.1. 360-degree Renders (5 points)¶

On your webpage, you should include a gif that shows the cow mesh from many continuously changing viewpoints.

Cow render 360
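A minimal sketch of the orbit behind the 360° gif, assuming the camera circles the cow about the vertical axis at a fixed elevation (in the actual render these rotations can come from `pytorch3d.renderer.look_at_view_transform(dist, elev, azim)`):

```python
import numpy as np

def orbit_rotations(num_views=12):
    # One world-to-view rotation per azimuth angle around the y-axis;
    # rendering the mesh once per rotation yields the gif's frames.
    Rs = []
    for azim in np.linspace(0, 2 * np.pi, num_views, endpoint=False):
        c, s = np.cos(azim), np.sin(azim)
        Rs.append(np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]))
    return Rs

Rs = orbit_rotations()
# Every orbit rotation is orthonormal (R @ R.T == I):
assert all(np.allclose(R @ R.T, np.eye(3)) for R in Rs)
```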

1.2 Re-creating the Dolly Zoom (10 points)¶

On your webpage, include a gif with your dolly zoom effect.

Cow Dolly
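The dolly zoom keeps the subject's apparent size constant while the field of view changes, which pins the camera distance to d = w / (2 tan(fov/2)); a minimal sketch of that distance schedule, assuming a hypothetical subject width of 2 units:

```python
import math

def dolly_distance(fov_deg, subject_width=2.0):
    # To keep a subject of width `subject_width` filling the same fraction
    # of the frame, the camera distance must satisfy
    #   distance = width / (2 * tan(fov / 2)).
    return subject_width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# As the FOV narrows, the camera must dolly out:
for fov in (120, 60, 30):
    print(round(dolly_distance(fov), 3))  # 0.577, 1.732, 3.732
```

Stepping `fov` through a range while setting the distance this way produces the characteristic background warp.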

2. Practicing with Meshes¶

2.1 Constructing a Tetrahedron (5 points)¶

On your webpage, show a 360-degree gif animation of your tetrahedron. Also, list how many vertices and (triangle) faces your mesh should have.

Tetrahedron 360

There should be 4 vertices and 4 (triangle) faces.

import math
import torch

vertices = torch.tensor([[math.sqrt(3), -1.5, -1], [0, -1.2, 2], [-math.sqrt(3), -1, -1], [0, 2.5, 0]])
faces = torch.tensor([[1, 0, 3], [3, 2, 1], [0, 2, 3], [0, 1, 2]])
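A quick sanity check on those counts, using Euler's formula V − E + F = 2 for a closed triangle mesh:

```python
faces = [[1, 0, 3], [3, 2, 1], [0, 2, 3], [0, 1, 2]]

# Each undirected edge is shared by two faces; collect the unique set.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
V = len({v for f in faces for v in f})
E = len(edges)
F = len(faces)
print(V, E, F, V - E + F)  # 4 6 4 2
```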

2.2 Constructing a Cube (5 points)¶

On your webpage, show a 360-degree gif animation of your cube. Also, list how many vertices and (triangle) faces your mesh should have.

Cube 360

There should be 8 vertices and 12 (triangle) faces.

vertices = torch.tensor([[1., -1., 1.], [1., -1., -1.], [-1., -1., -1.], [-1., -1., 1.],
                         [1., 1., 1.], [1., 1., -1.], [-1., 1., -1.], [-1., 1., 1.]])
faces = torch.tensor([[1, 0, 2], [0, 3, 2],   # bottom (y = -1)
                      [2, 3, 6], [7, 6, 3],   # -x side
                      [1, 4, 0], [1, 5, 4],   # +x side
                      [1, 2, 6], [1, 6, 5],   # -z side
                      [0, 4, 7], [0, 7, 3],   # +z side
                      [4, 5, 6], [4, 6, 7]])  # top (y = +1)

3. Re-texturing a mesh (10 points)¶

In your submission, describe your choice of color1 and color2, and include a gif of the rendered mesh.

Cow render retextured

Choice of color1 and color2:

color1 = torch.tensor([0,0,1]) # blue
color2 = torch.tensor([1,0,0]) # red
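The blend itself interpolates between color1 and color2 along the z-extent of the mesh; a minimal sketch, with a hypothetical per-vertex `z` array standing in for the cow's vertex coordinates:

```python
import numpy as np

color1 = np.array([0.0, 0.0, 1.0])  # blue
color2 = np.array([1.0, 0.0, 0.0])  # red

z = np.array([-0.5, 0.0, 0.5, 1.0])            # hypothetical per-vertex z values
alpha = (z - z.min()) / (z.max() - z.min())    # 0 at z_min, 1 at z_max
colors = alpha[:, None] * color2 + (1 - alpha)[:, None] * color1

print(colors[0], colors[-1])  # pure blue at z_min, pure red at z_max
```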

4. Camera Transformations (10 points)¶

In your report, describe in words what R_relative and T_relative should be doing and include the rendering produced by your choice of R_relative and T_relative.

R_relative is the rotation that expresses the initial camera coordinate system in the new camera coordinate system.

T_relative is the translation of the initial camera coordinate system's origin, expressed in the new camera coordinate system.

R_relative=[[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T_relative=[0, 0, 0]

Cow transform1

R_relative=[[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T_relative =[0,0,1.5]

Cow transform2

R_relative=[[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T_relative=[0.55, -0.4, 0]

Cow transform3

rotate_angle = -90. / 180. * math.pi
R_relative=[[math.cos(rotate_angle), 0, math.sin(rotate_angle)], [0, 1, 0], [-math.sin(rotate_angle), 0, math.cos(rotate_angle)]]
T_relative=[3., 0, 3]

Cow transform4
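As a sanity check on this last transform, composing the relative pose with starting extrinsics (assumed here to be R0 = I and T0 = [0, 0, 3], i.e. the camera 3 units in front of the cow) shows the camera orbits 90° while keeping its distance:

```python
import math
import numpy as np

rotate_angle = -90. / 180. * math.pi
R_relative = np.array([[math.cos(rotate_angle), 0, math.sin(rotate_angle)],
                       [0, 1, 0],
                       [-math.sin(rotate_angle), 0, math.cos(rotate_angle)]])
T_relative = np.array([3., 0., 3.])

# Assumed starting extrinsics: identity rotation, camera 3 units from the cow.
T0 = np.array([0., 0., 3.])
T_new = R_relative @ T0 + T_relative
print(np.round(T_new, 6))  # [0. 0. 3.]: same distance, new viewing direction
```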

5. Rendering Generic 3D Representations¶

5.1 Rendering Point Clouds from RGB-D Images (10 points)¶

In your submission, include a gif of each of these point clouds side-by-side.

The point cloud corresponding to the first image:

Plant1

The point cloud corresponding to the second image:

Plant2

The point cloud formed by the union of the first 2 point clouds:

Plant12
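Each point cloud comes from back-projecting pixels with their measured depths through the camera intrinsics; a minimal generic pinhole sketch (the K values below are hypothetical, not the dataset's actual intrinsics, and the starter code's PyTorch3D cameras wrap the same math):

```python
import numpy as np

def unproject(u, v, depth, K):
    # Back-project pixel (u, v) with depth into camera space:
    #   X_cam = depth * K^{-1} @ [u, v, 1]
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

K = np.array([[500.0, 0.0, 320.0],   # hypothetical intrinsics: fx = fy = 500,
              [0.0, 500.0, 240.0],   # principal point at (320, 240)
              [0.0, 0.0, 1.0]])

p = unproject(320.0, 240.0, 2.0, K)
print(p)  # ≈ [0, 0, 2]: the principal point unprojects along the optical axis
```

Doing this for every valid depth pixel, then mapping the points to world coordinates with each camera's extrinsics, lets the two clouds be unioned in a common frame.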

5.2 Parametric Functions (10 points)¶

In your writeup, include a 360-degree gif of your torus point cloud, and make sure the hole is visible. You may choose to texture your point cloud however you wish.

Torus 360
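The torus samples come from its parametric form x = (R + r cos θ) cos φ, y = (R + r cos θ) sin φ, z = r sin θ; a sketch with hypothetical radii:

```python
import numpy as np

R, r = 1.0, 0.3  # major and minor radii (hypothetical values)
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 100),
                         np.linspace(0, 2 * np.pi, 100))
x = (R + r * np.cos(theta)) * np.cos(phi)
y = (R + r * np.cos(theta)) * np.sin(phi)
z = r * np.sin(theta)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Every sample lies exactly distance r from the torus's center circle:
d = np.sqrt((np.sqrt(points[:, 0]**2 + points[:, 1]**2) - R)**2 + points[:, 2]**2)
assert np.allclose(d, r)
```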

5.3 Implicit Surfaces (15 points)¶

In your writeup, include a 360-degree gif of your torus mesh, and make sure the hole is visible. In addition, discuss some of the tradeoffs between rendering as a mesh vs a point cloud. Things to consider might include rendering speed, rendering quality, ease of use, memory usage, etc.

Torus implicit 360

The point cloud is generated by sampling the parametric function, which is easy to implement. Storing n sampled points takes O(n) memory, and the number of samples grows with the number of parameters (the torus has 2, so a grid of n values per parameter yields O(n^2) points). Rendering quality depends on point density: unlike a mesh, which supports texture mapping and smooth shading over continuous surfaces, a point cloud always renders as a discrete, sparse set of points.

When rendering an implicit surface (for example, a signed-distance function), we first evaluate the function on a voxel grid, which takes O(n^3) space, then run marching cubes to extract the surface vertices and build a mesh. This process takes longer, but the visualization is better thanks to the mesh representation. However, the extracted vertices do not lie exactly on the zero-level set of the implicit function; the finer the voxelization, the better the result.
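As a concrete instance of this pipeline, the torus can be written as the zero level set f(x, y, z) = sqrt((sqrt(x² + y²) − R)² + z²) − r and sampled on a voxel grid (the radii and grid bounds below are hypothetical):

```python
import numpy as np

def torus_sdf(x, y, z, R=1.0, r=0.5):
    # Signed distance to a torus around the z-axis:
    # zero on the surface, negative inside, positive outside.
    return np.sqrt((np.sqrt(x**2 + y**2) - R)**2 + z**2) - r

# Evaluate on a voxel grid -- the O(n^3) volume that marching cubes consumes.
n = 32
g = np.linspace(-2.0, 2.0, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
vol = torus_sdf(X, Y, Z)

print(vol.shape)                  # (32, 32, 32)
print(torus_sdf(1.5, 0.0, 0.0))  # 0.0: this point lies on the surface
```

A marching-cubes implementation such as `skimage.measure.marching_cubes(vol, level=0)` then turns this volume into the vertex/face arrays of the mesh.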

6. Do Something Fun (10 points)¶

Include a creative use of the tools in this assignment on your webpage!

An aeroplane rendered as a mesh and given a new colored texture:

Airplane Naruto

(Extra Credit) 7. Sampling Points on Meshes (10 points)¶

You need to provide the following results to receive credit:

  1. Render each pointcloud and the original cow mesh side-by-side, and include the gif in your writeup
  2. Render each pointcloud and the original joint_mesh side-by-side, and include the gif in your writeup
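PyTorch3D provides `pytorch3d.ops.sample_points_from_meshes` for this; the same idea, area-weighted face selection followed by uniform barycentric coordinates, can be sketched in NumPy (the unit-square usage below is purely illustrative):

```python
import numpy as np

def sample_points(verts, faces, n, seed=0):
    rng = np.random.default_rng(seed)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # Pick faces with probability proportional to their area...
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # ...then a uniform point inside each chosen triangle (square-root trick).
    u, v = rng.random(n), rng.random(n)
    su = np.sqrt(u)
    w0, w1, w2 = 1.0 - su, su * (1.0 - v), su * v
    return (w0[:, None] * v0[idx] + w1[:, None] * v1[idx] + w2[:, None] * v2[idx])

# Hypothetical usage: a unit square in the z = 0 plane made of two triangles.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_points(verts, faces, 1000)
```

Increasing `n` from 10 to 10000 is exactly what makes the point clouds below converge toward the look of the original mesh.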

Original cow mesh:

Original cow

cow pointclouds:

10 points:

Points 10

100 points:

Points 100

1000 points:

Points 1000

10000 points:

Points 10000

Original joint_mesh:

Original joint_mesh

joint_mesh pointclouds:

10 points:

Joint Points 10

100 points:

Joint Points 100

1000 points:

Joint Points 1000

10000 points:

Joint Points 10000
