Commit
Merge pull request #6 from Linyou/real_scene
Support real scenes
Linyou authored Feb 9, 2023
2 parents 090c310 + d04f776 commit 9434fb0
Showing 3 changed files with 307 additions and 188 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -2,8 +2,10 @@

[![License](https://img.shields.io/badge/license-Apache-green.svg)](LICENSE)

-##### Update 2022-10-27: Support all platforms, Windows and Linux (CUDA, Vulkan), MacOS (Vulkan)
-##### Update 2022-10-23: Support depth of field (DoF)
+##### Update 2023-02-09: Support real scenes! Try with `python taichi_ngp.py --gui --scene garden`
+
+<!-- ##### Update 2022-10-27: Support all platforms, Windows and Linux (CUDA, Vulkan), MacOS (Vulkan)
+##### Update 2022-10-23: Support depth of field (DoF) -->

This is an [Instant-NGP](https://github.com/NVlabs/instant-ngp) renderer implemented with [Taichi](https://github.com/taichi-dev/taichi), written entirely in Python. **No CUDA!** This repository implements only the rendering part of NGP, but it is simpler and contains far less code than the original (Instant-NGP and [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn)).

50 changes: 50 additions & 0 deletions converter.py
@@ -0,0 +1,50 @@
# Convert a trained PyTorch NGP checkpoint (a .ckpt whose 'state_dict' holds
# camera intrinsics, poses, ray directions, the density bitfield, and the
# network parameters) into a single NumPy .npy dictionary for the Taichi renderer.
import torch
import numpy as np
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--src', type=str)                          # path to the PyTorch checkpoint (.ckpt)
    parser.add_argument('--dst', type=str, default='./model.npy')   # output file for the Taichi renderer
    args = parser.parse_args()

    # The checkpoint stores everything of interest under 'state_dict'.
    state_dict = torch.load(args.src, map_location='cpu')['state_dict']

    # padding = torch.zeros(13, 16)
    # rgb_out = state_dict['model.rgb_net.output_layer.weight']
    # print(rgb_out.shape)
    # rgb_out = torch.cat([rgb_out, padding], dim=0)

    # Scalar hyperparameters, copied one by one in the loop below.
    model_keys = {
        'per_level_scale', 'n_neurons',
        'sigma_n_input', 'sigma_n_output',
        'rgb_depth', 'rgb_n_input', 'rgb_n_output',
        'cascade', 'box_scale',
    }

    # Tensors converted to NumPy arrays: intrinsics, poses, ray directions,
    # the occupancy bitfield, and the encoder/MLP parameters.
    new_dict = {
        # 'camera_angle_x': meta['camera_angle_x'],
        'K': state_dict['K'].numpy(),
        'poses': state_dict['poses'].numpy(),
        'directions': state_dict['directions'].numpy(),
        'model.density_bitfield': state_dict['model.density_bitfield'].numpy(),
        'model.hash_encoder.params': state_dict['model.hash_encoder.params'].numpy(),
        # 'model.xyz_encoder.params':
        #     torch.cat(
        #         [state_dict['model.xyz_encoder.hidden_layers.0.weight'].reshape(-1),
        #          state_dict['model.xyz_encoder.output_layer.weight'].reshape(-1)]
        #     ).numpy(),
        # 'model.rgb_net.params':
        #     torch.cat(
        #         [state_dict['model.rgb_net.hidden_layers.0.weight'].reshape(-1),
        #          rgb_out.reshape(-1)]
        #     ).numpy(),
        'model.xyz_encoder.params': state_dict['model.xyz_encoder.params'].numpy(),
        # 'model.xyz_sigmas.params': state_dict['model.xyz_sigmas.params'].numpy(),
        'model.rgb_net.params': state_dict['model.rgb_net.params'].numpy(),
    }
    for key in model_keys:
        new_dict[f'model.{key}'] = state_dict[f'model.{key}'].item()
    np.save(args.dst, new_dict)
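
For reference, a minimal sketch of how the converted file could be produced and inspected; the checkpoint name `garden.ckpt` and the inspection snippet are placeholders for illustration, not part of this commit:

# Hypothetical invocation (the checkpoint path is an assumption):
#   python converter.py --src garden.ckpt --dst model.npy

import numpy as np

# np.save stored a Python dict, so it comes back as a 0-d object array;
# allow_pickle=True plus .item() recovers the dict.
model = np.load('model.npy', allow_pickle=True).item()
print(sorted(model.keys()))           # 'K', 'poses', 'model.rgb_net.params', ...
print(model['model.cascade'], model['model.density_bitfield'].shape)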