|
36 | 36 | "location, which is why one usually requires a minimum coherence for any\n", |
37 | 37 | "processed sample.\n", |
38 | 38 | "\n", |
39 | | - "With those assumptions and choices, I search for the maximum coherence\n", |
40 | | - "in a 10 m window after the coherence first exceeds 0.85 and name this\n", |
41 | | - "sample the POCA. In another window from 5 to 50 m after the POCA, I\n", |
42 | | - "search for a rise in coherence. This is where the swath processing\n", |
43 | | - "starts.\n", |
| 39 | + "With those assumptions and choices, I search for the maximum of the\n", |
| 40 | + "smoothed coherence in a 10 m window after the coherence first exceeds\n", |
| 41 | + "the coherence threshold and name this sample the POCA. In another window\n", |
| 42 | + "from 5 to 50 m after the POCA, I search for a rise in coherence. This is\n", |
| 43 | + "where the swath processing starts.\n", |
44 | 44 | "\n", |
45 | 45 | "In the following, \"edge cases\" are recognized by an early or late swath\n", |
46 | 46 | "start. For the _k_ earliest and latest of those, the coherence waveform\n", |
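The POCA / swath-start search described above can be sketched roughly as follows. This is an illustrative reconstruction, not cryoswath's actual implementation: the helper name `find_poca_and_swath_start`, the range-sample spacing, and the 0.85 threshold default are assumptions, and the smoothing of the coherence mentioned in the text is omitted for brevity.

```python
import numpy as np

def find_poca_and_swath_start(coherence, spacing_m=0.47, threshold=0.85):
    """Return (poca_idx, swath_start_idx); entries are None if not found.

    Sketch only: `spacing_m` (range-sample spacing in metres) and the
    threshold default are assumed, and the coherence is used unsmoothed.
    """
    exceeding = np.flatnonzero(coherence > threshold)
    if exceeding.size == 0:
        return None, None  # coherence never reaches the threshold
    first = exceeding[0]
    # POCA: sample of maximum coherence in a 10 m window after the
    # threshold is first exceeded
    window = int(round(10 / spacing_m))
    poca = first + int(np.argmax(coherence[first:first + window]))
    # swath start: first rise in coherence between 5 and 50 m after the POCA
    lo = poca + int(round(5 / spacing_m))
    hi = poca + int(round(50 / spacing_m))
    rises = np.flatnonzero(np.diff(coherence[lo:hi]) > 0)
    if rises.size == 0:
        return poca, None  # no rise found within the window
    return poca, lo + rises[0] + 1  # first sample of the rise
```

With `spacing_m=1.0` the windows become 10 and 5..50 samples, which makes the behaviour easy to check on a synthetic waveform.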
|
53 | 53 | "metadata": {}, |
54 | 54 | "outputs": [], |
55 | 55 | "source": [ |
56 | | - "k_smallest = 5\n", |
57 | | - "k_biggest = 5\n", |
58 | | - "k_random = 5" |
| 56 | + "k_zero = 3\n", |
| 57 | + "k_smallest = 3\n", |
| 58 | + "k_biggest = 3\n", |
| 59 | + "k_random = 3" |
59 | 60 | ] |
60 | 61 | }, |
61 | 62 | { |
|
95 | 96 | "import numpy as np" |
96 | 97 | ] |
97 | 98 | }, |
| 99 | + { |
| 100 | + "cell_type": "code", |
| 101 | + "execution_count": null, |
| 102 | + "metadata": {}, |
| 103 | + "outputs": [], |
| 104 | + "source": [ |
| 105 | + "import random" |
| 106 | + ] |
| 107 | + }, |
98 | 108 | { |
99 | 109 | "cell_type": "code", |
100 | 110 | "execution_count": null, |
|
110 | 120 | "metadata": {}, |
111 | 121 | "outputs": [], |
112 | 122 | "source": [ |
113 | | - "for idx in np.argpartition(l1b_data.swath_start.values, kth=k_smallest)[:k_smallest]:\n", |
114 | | - " cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
| 123 | + "for idx in random.sample(np.argwhere(l1b_data.swath_start.values==0).flatten().tolist(), k=k_zero):\n", |
| 124 | + "    cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
115 | 125 | ] |
116 | 126 | }, |
117 | 127 | { |
|
120 | 130 | "metadata": {}, |
121 | 131 | "outputs": [], |
122 | 132 | "source": [ |
123 | | - "for idx in np.argpartition(l1b_data.swath_start.values, kth=-k_biggest)[-k_biggest:]:\n", |
124 | | - " cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
| 133 | + "sortkeys = np.argsort(l1b_data.swath_start.values)[(l1b_data.swath_start.values == 0).sum():]" |
125 | 134 | ] |
126 | 135 | }, |
127 | 136 | { |
|
130 | 139 | "metadata": {}, |
131 | 140 | "outputs": [], |
132 | 141 | "source": [ |
133 | | - "import random" |
| 142 | + "for idx in sortkeys[:k_smallest]:\n", |
| 143 | + "    cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
| 144 | + ] |
| 145 | + }, |
| 146 | + { |
| 147 | + "cell_type": "code", |
| 148 | + "execution_count": null, |
| 149 | + "metadata": {}, |
| 150 | + "outputs": [], |
| 151 | + "source": [ |
| 152 | + "for idx in sortkeys[-k_biggest:]:\n", |
| 153 | + " cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
134 | 154 | ] |
135 | 155 | }, |
136 | 156 | { |
|
139 | 159 | "metadata": {}, |
140 | 160 | "outputs": [], |
141 | 161 | "source": [ |
142 | | - "for idx in random.sample(range(len(l1b_data.time_20_ku)), k=k_random):\n", |
| 162 | + "for idx in random.sample(sortkeys.tolist(), k=k_random):\n", |
143 | 163 | " cryoswath.test_plots.waveform.coherence(l1b_data.isel(time_20_ku=[idx]))" |
144 | 164 | ] |
145 | 165 | } |
|