The seeds didn't work fully #189

@Arimkyie

I have tried several 3DGS SLAM systems. Although I set the random seeds in all of them, the final results differ from run to run. These are my settings:

    import os
    import random

    import numpy as np
    import torch

    seed = 0
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)  # note: only affects hashing if set before the interpreter starts
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required for deterministic cuBLAS
    np.random.seed(seed)
    random.seed(seed)
    torch.use_deterministic_algorithms(True, warn_only=True)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.enabled = False
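
For reference, a minimal sketch of how one might surface the op that breaks determinism, using only PyTorch's documented determinism API:

    import warnings

    import torch

    # Strict mode: PyTorch raises a RuntimeError naming the first op that
    # has no deterministic implementation, instead of silently warning.
    torch.use_deterministic_algorithms(True)

    # Alternative: keep warn_only=True but escalate the warning to an error
    # so the traceback points at the call site of the offending op.
    # torch.use_deterministic_algorithms(True, warn_only=True)
    # warnings.filterwarnings("error", category=UserWarning)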

To rule out multi-threading effects, I modified the code to run completely single-threaded for testing. I found that during initialize_map, after several iterations, the loss begins to differ slightly between runs. Below are the losses from the first few iterations of my two test runs; to make the lower decimal places visible, I scaled the loss by a constant factor before printing:

Run 1:
tensor(3950788.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3782442.2500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3728197.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3675866.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3627247.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3580127.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3534494.2500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3490134.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3446650.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3403643.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3360857.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3318130., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3275350., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3232415., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3189297.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3146000.2500, device='cuda:0', grad_fn=<MulBackward0>)

Run 2:
tensor(3950788.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3782442.2500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3728197.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3675866.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3627247.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3580127., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3534494.2500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3490134.5000, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3446651., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3403643.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3360858., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3318130.2500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3275350.7500, device='cuda:0', grad_fn=<MulBackward0>)
tensor(3232415., device='cuda:0', grad_fn=<MulBackward0>)
tensor(3189298., device='cuda:0', grad_fn=<MulBackward0>)
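
To pinpoint the first diverging iteration exactly, one option is to log the raw float32 bit pattern of each loss and diff the two logs; a minimal sketch (the logging location and `log_path` are hypothetical):

    import struct

    import torch

    def loss_bits(loss: torch.Tensor) -> str:
        """Hex of the exact float32 bit pattern, so last-ulp drift is visible."""
        return struct.pack("<f", loss.detach().float().cpu().item()).hex()

    # Inside the mapping loop (hypothetical names):
    # with open(log_path, "a") as f:
    #     f.write(f"{it} {loss_bits(loss)}\n")
    # Afterwards, `diff run1.log run2.log` shows the first iteration whose bits differ.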

This small divergence then carries forward: the tracking losses and convergence results of the subsequent frames gradually drift apart between the two runs.
Is this phenomenon normal, or is there something wrong on my side?
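
For context, a tiny self-contained demonstration (not from the SLAM code) that float32 accumulation order alone can change the last digits of a sum:

    import torch

    torch.manual_seed(0)
    x = torch.randn(1_000_000)

    # Same numbers, different accumulation order: the two float32 sums
    # typically agree only to around 7 significant digits.
    print(f"{x.sum().item():.10f}")
    print(f"{x.flip(0).sum().item():.10f}")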
