03 — Training Test¶
Example file:
examples/03_training_test.py
Ever stared at a wall of decreasing loss numbers in your terminal for ten minutes, feeling confident, only to discover the model's output is a solid grey rectangle? Yeah, us too.
Reading loss values off a scrolling console is about as reliable as reading tea leaves. This chapter puts GT and prediction side by side on screen so you can see whether the network is actually learning.
What we're building¶
A tiny MLP (2 → 64 → 64 → 3) fitting a 256×256 PyTorch logo in real time. The window has three panels:
| Area | Content |
|---|---|
| Left | GT panel — the target image (what you're fitting) |
| Right | Prediction panel — live network output, updated every frame |
| Bottom | Info panel — FPS, loss, iteration, progress bar, and a slider |
Everything on screen, nothing buried in the terminal.
New friends¶
Chapters 01 and 02 were all static — bind() + run(), done.
This time we make things move:
| New thing | What it does | How to use |
|---|---|---|
| @view.on_frame | A function that runs once per frame — put your training step here | @view.on_frame |
| @panel.on_frame | A function that runs inside a specific panel — put interactive controls here | @info_panel.on_frame |
| create_tensor | Allocates a CUDA tensor that shares memory with the display, so updates appear on screen instantly | vultorch.create_tensor(H, W, ...) |
| vultorch.imread | Load an image file into a CUDA tensor (no PIL needed) | vultorch.imread(path) |
| side="bottom" | Place a panel at the bottom of the window | view.panel("Info", side="bottom") |
What's a widget?
In UI terminology, a widget is any interactive element — a button, a slider, a text label, a progress bar. Things you can see and (sometimes) click on. In Vultorch, you create widgets by calling methods like panel.text("hello"), panel.slider("x", 0, 1), etc. inside a @panel.on_frame callback. No HTML, no CSS, no Qt — just Python method calls.
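The immediate-mode idea (a widget exists only as a method call made during a frame) can be mimicked with a toy stand-in; the `FakePanel` class below is hypothetical, not part of Vultorch:

```python
class FakePanel:
    """Toy immediate-mode panel (hypothetical, NOT Vultorch):
    each method call 'draws' one widget for the current frame."""

    def __init__(self):
        self.drawn = []

    def text(self, s):
        self.drawn.append(("text", s))

    def slider(self, label, lo, hi, default):
        self.drawn.append(("slider", label))
        return default  # a real UI would return the knob's current position


panel = FakePanel()

def on_frame():
    # Widgets exist only as calls made during the frame,
    # stacked top-to-bottom in call order.
    panel.text("hello")
    x = panel.slider("x", 0, 1, default=0.5)
    panel.text(f"x = {x}")

on_frame()  # draw one frame
print([kind for kind, _ in panel.drawn])  # ['text', 'slider', 'text']
```

Rerunning `on_frame` every frame is the whole trick: the UI is rebuilt from scratch each time, so "updating" a label just means passing a different string next frame.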
Write PyTorch code inside the view callback; put widgets inside the panel callback. Vultorch handles the tensor-to-screen dance every frame.
Full code¶
from pathlib import Path

import torch
import torch.nn as nn
import torch.nn.functional as F

import vultorch


class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


if not torch.cuda.is_available():
    raise RuntimeError("This example needs CUDA")
device = "cuda"

img_path = Path(__file__).resolve().parents[1] / "docs" / "images" / "pytorch_logo.png"
gt = vultorch.imread(img_path, channels=3, size=(256, 256), device=device)
H, W = gt.shape[0], gt.shape[1]

ys = torch.linspace(-1.0, 1.0, H, device=device)
xs = torch.linspace(-1.0, 1.0, W, device=device)
yy, xx = torch.meshgrid(ys, xs, indexing="ij")
coords = torch.stack([xx, yy], dim=-1).reshape(-1, 2)
target = gt.reshape(-1, 3)

model = TinyMLP().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)

# -- View + panels (high-level declarative API) -------------------------
view = vultorch.View("03 - Training Test", 1280, 760)
info_panel = view.panel("Info", side="bottom", width=0.28)
gt_panel = view.panel("GT", side="left", width=0.5)
pred_panel = view.panel("Prediction")

gt_panel.canvas("gt").bind(gt)

# 4 channels — zero-copy GPU display path
pred_rgba = vultorch.create_tensor(H, W, channels=4, device=device,
                                   name="pred", window=view.window)
pred_rgba[:, :, 3] = 1.0
pred_panel.canvas("pred").bind(pred_rgba)

state = {
    "iter": 0,
    "loss": 1.0,
    "ema": 1.0,
    "steps_per_frame": 6,
}


@view.on_frame
def train():
    for _ in range(state["steps_per_frame"]):
        optimizer.zero_grad(set_to_none=True)
        out = model(coords)
        loss = F.mse_loss(out, target)
        loss.backward()
        optimizer.step()
        state["iter"] += 1
    state["loss"] = loss.item()
    state["ema"] = state["ema"] * 0.98 + state["loss"] * 0.02
    with torch.no_grad():
        pred = model(coords).reshape(H, W, 3).clamp_(0, 1)
        pred_rgba[:, :, :3] = pred


@info_panel.on_frame
def draw_info():
    info_panel.text(f"FPS: {view.fps:.1f}")
    info_panel.text(f"Iteration: {state['iter']}")
    info_panel.text(f"Loss (MSE): {state['loss']:.6f}")
    info_panel.text(f"EMA Loss: {state['ema']:.6f}")
    state["steps_per_frame"] = info_panel.slider_int(
        "Steps / Frame", 1, 32, default=6
    )
    progress = min(1.0, state["iter"] / 3000.0)
    info_panel.progress(progress,
                        overlay=f"Training progress {progress * 100:.1f}%")
    info_panel.text_wrapped(
        "Left is GT, right is prediction. "
        "Increase 'Steps / Frame' for faster fitting."
    )


view.run()
That's it. Run it and watch the grey blob on the right morph into the PyTorch logo in a few seconds.
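Worth underlining: the training step inside the callback is ordinary PyTorch with no Vultorch dependency. A headless CPU sketch of the same fit, assuming only torch and using a random 8×8 "image" in place of the logo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Random 8x8 RGB target standing in for the logo, flattened to (H*W, 3).
H, W = 8, 8
target = torch.rand(H * W, 3)

# Same coordinate recipe as the example, on CPU.
ys = torch.linspace(-1.0, 1.0, H)
xs = torch.linspace(-1.0, 1.0, W)
yy, xx = torch.meshgrid(ys, xs, indexing="ij")
coords = torch.stack([xx, yy], dim=-1).reshape(-1, 2)

# Same architecture as TinyMLP: 2 -> 64 -> 64 -> 3.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=2e-3)

first = None
for step in range(200):
    opt.zero_grad(set_to_none=True)
    loss = F.mse_loss(model(coords), target)
    loss.backward()
    opt.step()
    if first is None:
        first = loss.item()

print(loss.item() < first)  # True: the loss went down
```

Vultorch's only contribution is the display path; if this loop converges headless, the windowed version converges too.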
What just happened?¶
- Data — vultorch.imread loads the image straight into a float32 CUDA tensor (no PIL, no numpy). Pixel coordinates get meshgrid'd into (H*W, 2), normalized to [-1, 1].
- Model — a two-hidden-layer MLP (64 wide). Takes (x, y), outputs (r, g, b). Small enough to run inside a per-frame callback without tanking your framerate.
- Layout — side="bottom", width=0.28 puts the Info panel at the bottom and gives it 28% of the window height. (Yes, width= controls height when the panel is at the bottom — it's the size along the split direction.) side="left", width=0.5 puts GT on the left half of the remaining space. Prediction fills whatever is left.
- Two callbacks:
    - @view.on_frame — runs once per frame before panels are drawn. This is where you put your training loop, data mutation, model updates — any computation.
    - @info_panel.on_frame — runs inside the Info panel's drawing context. Every panel.text(), panel.slider_int(), panel.progress() call you make here creates a widget (text label, slider, progress bar) inside that specific panel. You don't need to worry about positioning — widgets just stack top-to-bottom, like print() statements.
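The coordinate construction from the Data step can be checked in isolation on CPU; this sketch assumes only torch, with a small 4×8 grid so the values are easy to eyeball:

```python
import torch

# Build normalized (x, y) coordinates for a 4x8 image — same recipe
# as the example, minus CUDA. linspace hits both endpoints exactly.
H, W = 4, 8
ys = torch.linspace(-1.0, 1.0, H)
xs = torch.linspace(-1.0, 1.0, W)
yy, xx = torch.meshgrid(ys, xs, indexing="ij")
coords = torch.stack([xx, yy], dim=-1).reshape(-1, 2)

print(coords.shape)         # torch.Size([32, 2]) — one (x, y) row per pixel
print(coords[0].tolist())   # top-left pixel     -> [-1.0, -1.0]
print(coords[-1].tolist())  # bottom-right pixel -> [1.0, 1.0]
```

indexing="ij" keeps row-major (y, x) pixel order, so row k of coords lines up with row k of gt.reshape(-1, 3) after flattening.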
Key takeaways¶
- @view.on_frame — a plain Python function that runs once per displayed frame (~60 times/second). Put any PyTorch code in here. At the end of each frame, Vultorch uploads every bound tensor to the screen automatically.
- create_tensor — looks and feels like torch.zeros, but the underlying memory is Vulkan/CUDA shared. When you write into it, the changes appear on screen the next frame with zero copy — no .cpu(), no upload(), nothing.
- Layout shorthand — side="left"/"right"/"bottom"/"top" splits the window, and width= controls how big that split is (as a 0–1 ratio). That's it. No coordinates, no grids.
- Panel widgets — @panel.on_frame runs inside a panel. Call panel.text(), panel.slider_int(), panel.progress() — each call creates one interactive element, stacked top-to-bottom like lines of print() output.
- No terminal spam — all live stats live in the Info panel. Your console stays clean for warnings and tracebacks.
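The EMA loss shown in the Info panel is a plain exponential moving average; a standalone sketch of the update rule, with decay 0.98 as in the example:

```python
def ema_update(ema, value, decay=0.98):
    """Exponential moving average: keep 98% of the running estimate,
    blend in 2% of the new measurement. Smooths out noisy loss readouts."""
    return ema * decay + value * (1.0 - decay)

ema = 1.0  # same starting value as state["ema"] in the example
for loss in [0.5, 0.4, 0.3]:
    ema = ema_update(ema, loss)
print(round(ema, 4))  # 0.9646 — the average trails the raw losses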
Tip
Crank Steps / Frame up to 32 for blazing-fast convergence. But don't get too greedy — go too high and your framerate will drop, because each frame spends more time training.
Note
create_tensor is called once at init, not every frame. After that you just write into the tensor each frame — practically free.