Training loop (high-level):
    model = DualModel(resnet18(), num_classes=10)
    opt = torch.optim.Adam(model.parameters())
    criterion_cons = nn.MSELoss()

    for epoch in range(epochs):
        for (img_lab, y), (img_unlab, _) in zip(labeled_loader, unlabeled_loader):
            # supervised: cross-entropy on the labeled batch, for both heads
            logitsA, logitsB = model(img_lab)
            loss_sup = F.cross_entropy(logitsA, y) + F.cross_entropy(logitsB, y)

            # consistency: MSE between the two heads' softmax outputs on the unlabeled batch
            predA, predB = model(img_unlab)
            loss_cons = criterion_cons(F.softmax(predA, dim=1), F.softmax(predB, dim=1))

            loss = loss_sup + loss_cons
            opt.zero_grad()
            loss.backward()
            opt.step()
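The loop above leaves `DualModel` undefined. A minimal runnable sketch is below; the two-head structure (one shared backbone, two independent linear classifiers) and the tiny stand-in backbone are assumptions, chosen so the example runs without torchvision. One combined step on fake labeled and unlabeled batches stands in for the full loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModel(nn.Module):
    """Assumed structure: shared backbone feeding two independent classifier heads."""
    def __init__(self, backbone, feat_dim, num_classes=10):
        super().__init__()
        self.backbone = backbone
        self.headA = nn.Linear(feat_dim, num_classes)
        self.headB = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)              # shared features
        return self.headA(f), self.headB(f)

# Tiny stand-in backbone (a real setup would use e.g. resnet18 with its fc removed).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
model = DualModel(backbone, feat_dim=8, num_classes=10)
opt = torch.optim.Adam(model.parameters())
criterion_cons = nn.MSELoss()

# Fake labeled and unlabeled batches of 4 RGB 32x32 images.
img_lab = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
img_unlab = torch.randn(4, 3, 32, 32)

# Supervised loss: both heads trained on the labels.
logitsA, logitsB = model(img_lab)
loss_sup = F.cross_entropy(logitsA, y) + F.cross_entropy(logitsB, y)

# Consistency loss: the heads should agree (in probability space) on unlabeled data.
predA, predB = model(img_unlab)
loss_cons = criterion_cons(F.softmax(predA, dim=1), F.softmax(predB, dim=1))

loss = loss_sup + loss_cons
opt.zero_grad()
loss.backward()
opt.step()
```

Because the consistency term only needs the model's own predictions, it can be computed on unlabeled images, which is what lets the unlabeled loader contribute to training.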