Spaces: Running on Zero
JianyuanWang committed
Commit • f6a127b
1 Parent(s): 559ce8f

add clear button

- app.py +8 -6
- viz_utils/__pycache__/viz_fn.cpython-310.pyc +0 -0
app.py CHANGED

@@ -231,9 +231,8 @@ with gr.Blocks() as demo:
             <li>upload the images (.jpg, .png, etc.), or </li>
             <li>upload a video (.mp4, .mov, etc.) </li>
         </ul>
-        <p>The reconstruction should take <strong> up to 1 minute </strong>. </p>
-        <p>SfM methods are designed for <strong> rigid/static reconstruction </strong>.
-        <p>If both images and videos are uploaded, the demo will only reconstruct the uploaded images. By default, we extract one image frame per second from the input video. To prevent crashes on the Hugging Face space, we currently limit reconstruction to the first 20 image frames. </p>
+        <p>The reconstruction should take <strong> up to 1 minute </strong>. If both images and videos are uploaded, the demo will only reconstruct the uploaded images. By default, we extract one image frame per second from the input video. To prevent crashes on the Hugging Face space, we currently limit reconstruction to the first 20 image frames. </p>
+        <p>SfM methods are designed for <strong> rigid/static reconstruction </strong>. When dealing with dynamic/moving inputs, these methods may still work by focusing on the rigid parts of the scene. However, to ensure high-quality results, it is better to minimize the presence of moving objects in the input data. </p>
         <p>If you meet any problem, feel free to create an issue in our <a href="https://github.com/facebookresearch/vggsfm" target="_blank">GitHub Repo</a> ⭐</p>
         <p>(Please note that running reconstruction on Hugging Face space is slower than on a local machine.) </p>
     </div>

@@ -252,10 +251,13 @@
         reconstruction_output = gr.Model3D(label="Reconstruction", height=520)
         log_output = gr.Textbox(label="Log")

-
-
+        with gr.Row():
+            clear_btn = gr.ClearButton([input_video, input_images, num_query_images, num_query_points, reconstruction_output, log_output], scale=1)
+            submit_btn = gr.Button("Reconstruct", scale=3)
+
+
         examples = [
-
+            [cake_video, cake_images, 3, 4096],
             [british_museum_video, british_museum_images, 2, 4096],
         ]
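For readers unfamiliar with gr.ClearButton, here is a minimal, self-contained sketch of the pattern added above: a ClearButton that resets a list of components, placed in a row next to the submit button. This is not the actual app.py; the component types, slider ranges, and the reconstruct stub are placeholder assumptions.

# Minimal sketch (not the real app.py): gr.ClearButton resets every listed
# component to its default value when clicked; no extra event handler is needed.
import gradio as gr

def reconstruct(video, images, num_query_images, num_query_points):
    # Placeholder for the real reconstruction pipeline.
    return None, f"Would reconstruct with {num_query_images} query images and {num_query_points} query points."

with gr.Blocks() as demo:
    input_video = gr.Video(label="Input Video")
    input_images = gr.File(label="Input Images", file_count="multiple")
    num_query_images = gr.Slider(1, 10, value=2, step=1, label="Number of Query Images")
    num_query_points = gr.Slider(512, 8192, value=4096, step=512, label="Number of Query Points")
    reconstruction_output = gr.Model3D(label="Reconstruction", height=520)
    log_output = gr.Textbox(label="Log")

    with gr.Row():
        # ClearButton registers the reset behavior for every component in this list.
        clear_btn = gr.ClearButton(
            [input_video, input_images, num_query_images, num_query_points,
             reconstruction_output, log_output],
            scale=1,
        )
        submit_btn = gr.Button("Reconstruct", scale=3)

    submit_btn.click(
        reconstruct,
        inputs=[input_video, input_images, num_query_images, num_query_points],
        outputs=[reconstruction_output, log_output],
    )

if __name__ == "__main__":
    demo.launch()

Passing the component list to gr.ClearButton at construction time is what wires up the reset, so the clear button works without an explicit .click handler of its own.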
viz_utils/__pycache__/viz_fn.cpython-310.pyc CHANGED

Binary files a/viz_utils/__pycache__/viz_fn.cpython-310.pyc and b/viz_utils/__pycache__/viz_fn.cpython-310.pyc differ
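The demo text in the app.py hunk above states that one frame per second is sampled from an uploaded video and that only the first 20 frames are reconstructed. A rough sketch of such a sampling policy with OpenCV might look like the following; extract_frames is a hypothetical helper, not code from the VGGSfM repository, and the real app may implement the limit differently.

# Hedged sketch of a "one frame per second, at most 20 frames" policy.
# extract_frames is hypothetical, not a function from the VGGSfM codebase.
import cv2

def extract_frames(video_path, fps_sample=1.0, max_frames=20):
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0       # fall back if FPS is unknown
    step = max(int(round(video_fps / fps_sample)), 1)   # keep roughly one frame per second
    frames, index = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:           # end of video
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames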