Making sense of the log files and the console outputs  #1

@nikunjsanghai

Description

Hi,
I have read the paper and am trying to recreate the results in Fig. 2 and Fig. 3, which cover four different motions: random (Fig. 2), then circular, forward, and lateral (Fig. 3). Please correct me if I am wrong. The issue I am having is that when I run main_expectation.py, the objective scores for the different cameras differ from those mentioned in the paper. As far as I can tell, the current main file sweeps k (the number of camera positions) from 2 to 6 and prints an objective score for each. The median RMSE values I see on the console look almost the same as the paper's, but the objective scores I got are as follows:
Best Score till now: 1.1915899979021663e-12
Next best Camera is:
R: [
6.12323e-17, 0, 1;
0, 1, 0;
-1, 0, 6.12323e-17
]
t: 0.15 0 0


Best Score till now: 2.061258456434215
Next best Camera is:
R: [
-0.5, 0, 0.866025;
0, 1, 0;
-0.866025, 0, -0.5
]
t: -0.15 0 -0.15


Best Score till now: 5.720280523043846
Next best Camera is:
R: [
-1.83697e-16, 0, -1;
0, 1, 0;
1, 0, -1.83697e-16
]
t: 0.15 0 -0.15


Best Score till now: 6.001388289750302
Next best Camera is:
R: [
-0.866025, 0, -0.5;
0, 1, 0;
0.5, 0, -0.866025
]
t: -0.15 0 0.15


Best Score till now: 6.037328144844131
Next best Camera is:
R: [
0.866025, 0, -0.5;
0, 1, 0;
0.5, 0, 0.866025
]
t: -0.15 0 0


Best Score till now: 6.049854050133662
Next best Camera is:
R: [
-1, 0, 1.22465e-16;
0, 1, 0;
-1.22465e-16, 0, -1
]
t: 0.15 0 0.15


Selected candidates are :
[[ 5]
[23]
[33]
[37]
[43]
[67]]
The score for traj greedy: 6.049854050
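For what it's worth, the printed rotation matrices all look like pure rotations about the y-axis at 30-degree increments (the tiny entries like 6.12323e-17 are just floating-point cos(pi/2)). A quick numpy check of the first printed camera (my own snippet, not code from the repo):

```python
import numpy as np

def Ry(theta):
    """Right-handed rotation about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# R printed for the first selected camera above.
printed = np.array([[6.12323e-17, 0, 1],
                    [0, 1, 0],
                    [-1, 0, 6.12323e-17]])

# 90 degrees about y reproduces it exactly (6.12323e-17 == cos(pi/2) in float64).
assert np.allclose(Ry(np.pi / 2), printed)
print("matches Ry(90 deg)")
```

The other printed matrices match Ry at 120, 270, 210, -30, and 180 degrees, so the candidates seem to be yaw-only placements on a grid.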

I am trying to recreate the simulation results first, with the end goal of verifying them on new data and getting similar results. Any help would be appreciated. I cannot make sense of the objective scores here, or of why I am getting a run from k=2 to k=6 cameras, when the main file states:
if __name__ == '__main__':
    ''' construct the 3D world and the trajectory'''
    ''' Sample all the camera configurations. In sim I have ~300 configs '''
    ''' The goal is to pick the best N among these placements.'''
    ''' Run greedy first, get an initial baseline.'''
    ''' Use greedy solution as initial value'''

    parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
                                     description='runs experiments for different benchmark \
                                                 algorithms for optimal camera placement\n\n')

    parser.add_argument('-n', '--num_runs', help='number of runs in the experiment', default=10)
    parser.add_argument('-s', '--select_k', help='number of cameras to select', default=2)
    parser.add_argument('-t', '--traj_type', help='Type of trajectory 1:circle, 2:side, 3:forward, 4:random', default=4)
    parser.add_argument('-o', '--output_dir', help='Output dir for output bag file', default='.')
    parser.add_argument('-c', '--config_file',
                        help='Yaml file which specifies the topic names, frequency of selection and time range',
                        default='config/config.yaml')
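One thing I noticed while reading the parser: none of those add_argument calls pass type=int, so the defaults stay Python ints but any value supplied on the command line arrives as a string. A minimal standalone reproduction of that argparse behavior (not the repo's actual parser, just the one flag):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-s', '--select_k', help='number of cameras to select', default=2)

# No flags: the default is used as-is, so select_k is the int 2.
print(repr(parser.parse_args([]).select_k))           # 2
# Flag given: without type=int, argparse leaves the value as a string.
print(repr(parser.parse_args(['-s', '6']).select_k))  # '6'
```

So code downstream that compares select_k numerically could behave differently depending on whether the flag was passed.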

So shouldn't I get only 2 camera placements, since the default is 2 and I have not provided any arguments? When I look at the log files, there appear to be 30 of them: 10 runs each of equal, standard, and random. My understanding is that these are the 3 benchmark algorithms the paper uses for comparison, to show how greedy and Frank-Wolfe differ from them.
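To frame my confusion about the console output: my understanding of a greedy selection loop (purely a sketch of the general technique, with made-up names and a toy objective, not the repo's code) is that it adds one camera per iteration and reports the best score so far, which would explain seeing one "Best Score till now" line per selected camera rather than one line per value of k:

```python
# Illustrative greedy subset selection with a toy additive objective.
def greedy_select(candidates, k, score):
    """Pick k items, each round adding the one that most improves score(chosen)."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: score(chosen + [c]))
        chosen.append(best)
        print(f"Best Score till now: {score(chosen)}")
    return chosen

cams = [1, 5, 3, 2]
picked = greedy_select(cams, 3, lambda s: sum(s))
print(picked)  # [5, 3, 2]
```

If the repo's greedy works like this, the six "Best Score till now" blocks in my log would correspond to six greedy additions, not to six separate experiments.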
