ashawkey committed
Commit a64a526 · Parent: 1efdf4d

update readme
Files changed (2):
  1. nerf/renderer.py (+6, -0)
  2. readme.md (+2, -2)
nerf/renderer.py CHANGED
@@ -320,6 +320,12 @@ class NeRFRenderer(nn.Module):
         nears.unsqueeze_(-1)
         fars.unsqueeze_(-1)
 
+        # random sample light_d if not provided
+        if light_d is None:
+            # gaussian noise around the ray origin, so the light always face the view dir (avoid dark face)
+            light_d = - (rays_o[0] + torch.randn(3, device=device, dtype=torch.float))
+            light_d = safe_normalize(light_d)
+
         #print(f'nears = {nears.min().item()} ~ {nears.max().item()}, fars = {fars.min().item()} ~ {fars.max().item()}')
 
         z_vals = torch.linspace(0.0, 1.0, num_steps, device=device).unsqueeze(0) # [1, T]
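The added snippet samples a random light direction when none is given: Gaussian noise is added to the first ray origin, the sum is negated so the light roughly faces the camera, and the result is normalized. A minimal NumPy re-sketch of that logic (assuming `safe_normalize` is an epsilon-guarded L2 normalize, as in the repo's helpers — both function names here are illustrative stand-ins):

```python
import numpy as np

def safe_normalize(v, eps=1e-20):
    # eps-guarded L2 normalization (mirrors the repo's safe_normalize helper)
    return v / np.sqrt(np.maximum(np.sum(v * v), eps))

def sample_light_d(rays_o_first, rng):
    # Gaussian noise around the (negated) ray origin, so the sampled light
    # roughly faces the camera and avoids fully dark faces.
    light_d = -(rays_o_first + rng.standard_normal(3))
    return safe_normalize(light_d)

rng = np.random.default_rng(0)
ray_origin = np.array([0.0, 0.0, 4.0])  # camera 4 units along +z, looking at origin
d = sample_light_d(ray_origin, rng)
print(d, np.linalg.norm(d))  # unit-length direction, biased toward -z
```

Negating only after adding the noise keeps the perturbation small relative to the camera distance, so the light direction stays close to the view direction.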
readme.md CHANGED
@@ -104,13 +104,13 @@ latents.backward(gradient=grad, retain_graph=True)
 * Other regularizations are in `./nerf/utils.py > Trainer > train_step`.
 * The generation seems quite sensitive to regularizations on weights_sum (alphas for each ray). The original opacity loss tends to make NeRF disappear (zero density everywhere), so we use an entropy loss to replace it for now (encourages alpha to be either 0 or 1).
 * NeRF Rendering core function: `./nerf/renderer.py > NeRFRenderer > run_cuda`.
+* the occupancy grid based training acceleration (instant-ngp like, enabled by `--cuda_ray`) may harm the generation progress, since once a grid cell is marked as empty, rays won't pass it later...
 * Shading & normal evaluation: `./nerf/network*.py > NeRFNetwork > forward`. Current implementation harms training and is disabled.
-* use `--albedo_iters 1000` to enable random shading mode after 1000 steps from albedo, lambertian ,and textureless
+* use `--albedo_iters 1000` to enable random shading mode after 1000 steps from albedo, lambertian, and textureless.
 * light direction: current implementation use a plane light source, instead of a point light source...
 * View-dependent prompting: `./nerf/provider.py > get_view_direction`.
 * ues `--angle_overhead, --angle_front` to set the border. How to better divide front/back/side regions?
 * Network backbone (`./nerf/network*.py`) can be chosen by the `--backbone` option, but `tcnn` and `vanilla` are not well tested.
-* the occupancy grid based training acceleration (instant-ngp like) may harm the generation progress, since once a grid cell is marked as empty, rays won't pass it later.
 * Spatial density bias (gaussian density blob): `./nerf/network*.py > NeRFNetwork > gaussian`.
 
 # Acknowledgement
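The readme notes that the opacity loss is replaced by an entropy loss on weights_sum, pushing per-ray alphas toward 0 or 1. A hedged NumPy sketch of that idea — binary entropy over alphas, minimized when alphas are confident (the exact form in `./nerf/utils.py` may differ; `alpha_entropy_loss` is an illustrative name, not the repo's):

```python
import numpy as np

def alpha_entropy_loss(alphas, eps=1e-6):
    # Binary entropy (in bits) of per-ray alphas; minimizing it pushes each
    # alpha toward 0 or 1 instead of forcing all alphas toward 0 like a
    # plain opacity loss would.
    a = np.clip(alphas, eps, 1.0 - eps)
    return np.mean(-(a * np.log2(a) + (1.0 - a) * np.log2(1.0 - a)))

print(alpha_entropy_loss(np.array([0.01, 0.99])))  # small: already confident
print(alpha_entropy_loss(np.array([0.5, 0.5])))    # 1.0: maximal penalty
```

This matches the readme's motivation: unlike the original opacity loss, minimizing entropy does not prefer zero density everywhere, only decisiveness per ray.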