Commit aa866a7

doc: mention location history being encrypted on device

1 parent dd1993c commit aa866a7

3 files changed: +31 -17 lines

README.md

Lines changed: 29 additions & 15 deletions
@@ -26,7 +26,6 @@ Since google slowly removes your old data over time, I would recommend periodica

 - Chrome
 - Google Play Store
-- Timeline
 - Keep
 - My Activity
 - Select JSON as format
@@ -44,7 +43,7 @@ This currently parses:

 - Chrome History - `Chrome/BrowserHistory.json`
 - Google Play Installs - `Google Play Store/Installs.json`
 - Keep (Notes) - `Keep/*.json`
-- Location History:
+- Location History (though this may no longer be available through new takeouts, as [it's stored on device](https://9to5google.com/2023/12/12/google-location-history-timeline-device/)):
   - Semantic Location History - `Location History/Semantic Location History/*`
   - Location History - `Location History/Location History.json`, `Location History/Records.json`
 - Youtube:
@@ -113,22 +112,26 @@ Out[2]: 236654

 `$ google_takeout_parser --quiet merge ./Takeout-Old ./Takeout-New --action summary --no-cache`

 ```python
-Counter({'Activity': 366292,
-         'Location': 147581,
-         'YoutubeComment': 131,
-         'PlayStoreAppInstall': 122,
-         'LikedYoutubeVideo': 100,
-         'ChromeHistory': 4})
+Counter(
+    {
+        "Activity": 366292,
+        "Location": 147581,
+        "YoutubeComment": 131,
+        "PlayStoreAppInstall": 122,
+        "LikedYoutubeVideo": 100,
+        "ChromeHistory": 4,
+    }
+)
 ```

 Can also dump the info to JSON; e.g. to filter YouTube-related stuff from your Activity using [jq](https://jqlang.github.io/jq/):

 ```bash
 google_takeout_parser --quiet parse -a json -f Activity --no-cache ./Takeout-New |
-# select stuff like Youtube, m.youtube.com, youtube.com using jq
-jq '.[] | select(.header | ascii_downcase | test("youtube"))' |
-# grab the titleUrl, ignoring nulls
-jq 'select(.titleUrl) | .titleUrl' -r
+  # select stuff like Youtube, m.youtube.com, youtube.com using jq
+  jq '.[] | select(.header | ascii_downcase | test("youtube"))' |
+  # grab the titleUrl, ignoring nulls
+  jq 'select(.titleUrl) | .titleUrl' -r
 ```

 Also contains a small utility command to help move/extract the google takeout:
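For context on the hunk above: the `--action summary` output is essentially a `Counter` keyed by the class name of each merged event. A minimal stdlib-only sketch of that idea, using made-up dicts in place of the library's parsed models:

```python
from collections import Counter

# Hypothetical stand-ins for parsed takeout events; the real objects
# are instances of the models in google_takeout_parser/models.py.
events = [
    {"type": "Activity", "header": "Chrome"},
    {"type": "Activity", "header": "YouTube"},
    {"type": "Location", "lat": 39.7},
    {"type": "ChromeHistory", "title": "example"},
]

# Tally events by type, like the summary Counter shown in the diff.
summary = Counter(e["type"] for e in events)
print(summary)
```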
133136

134137
Also contains a small utility command to help move/extract the google takeout:
@@ -163,6 +166,7 @@ To parse one takeout:

 ```python
 from google_takeout_parser.path_dispatch import TakeoutParser
+
 tp = TakeoutParser("/full/path/to/Takeout-1599315526")
 # to check if files are all handled
 tp.dispatch_map()
@@ -178,15 +182,21 @@ To cache and merge takeouts (maintains a single dependency on the paths you pass

 ```python
 from google_takeout_parser.merge import cached_merge_takeouts
-results = list(cached_merge_takeouts(["/full/path/to/Takeout-1599315526", "/full/path/to/Takeout-1634971143"]))
+
+results = list(
+    cached_merge_takeouts(
+        ["/full/path/to/Takeout-1599315526", "/full/path/to/Takeout-1634971143"]
+    )
+)
 ```

 If you don't want to cache the results but want to merge results from multiple takeouts, can do something custom by directly using the `merge_events` function:

 ```python
 from google_takeout_parser.merge import merge_events, TakeoutParser
+
 itrs = []  # list of iterators of google events
-for path in ['path/to/Takeout-1599315526', 'path/to/Takeout-1616796262']:
+for path in ["path/to/Takeout-1599315526", "path/to/Takeout-1616796262"]:
     # ignore errors, error_policy can be 'yield', 'raise' or 'drop'
     tk = TakeoutParser(path, error_policy="drop")
     itrs.append(tk.parse(cache=False))
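The `merge_events` call in the hunk above combines iterators from several takeouts and drops duplicates. A hedged sketch of that behavior, using plain tuples instead of the library's models and assuming de-duplication on a timestamp-style key (the `key` property in `models.py` is an epoch timestamp):

```python
from itertools import chain
from typing import Iterable, Iterator, Tuple

# Toy event: (epoch_seconds, payload). Stand-in for the real models,
# whose `key` property returns int(self.dt.timestamp()).
Event = Tuple[int, str]


def merge_events_sketch(*iterables: Iterable[Event]) -> Iterator[Event]:
    """Yield each event once, keyed on its timestamp-based key."""
    seen = set()
    for ev in chain(*iterables):
        if ev[0] not in seen:
            seen.add(ev[0])
            yield ev


# Overlapping takeouts: the 200-second event appears in both.
old = [(100, "search: python"), (200, "visit: example.com")]
new = [(200, "visit: example.com"), (300, "search: jq")]
merged = list(merge_events_sketch(old, new))
```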
@@ -198,6 +208,7 @@ The events this returns is a combination of all types in the [`models.py`](googl

 ```python
 from google_takeout_parser.models import Location
 from google_takeout_parser.path_dispatch import TakeoutParser
+
 # filter_type can be a list to filter multiple types
 locations = list(TakeoutParser("path/to/Takeout").parse(filter_type=Location))
 len(locations)
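Conceptually, the `filter_type=Location` argument above amounts to an `isinstance` check over the parsed events. A self-contained sketch with hypothetical stand-in models (the real ones live in `google_takeout_parser.models`):

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class Location:  # stand-in for google_takeout_parser.models.Location
    lat: float
    lng: float


@dataclass
class Activity:  # stand-in for the Activity model
    header: str


events: List[Union[Location, Activity]] = [
    Location(39.7, -104.9),
    Activity("Chrome"),
    Location(40.0, -105.2),
]

# What filter_type=Location boils down to: keep only that model type.
locations = [e for e in events if isinstance(e, Location)]
```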
@@ -231,7 +242,10 @@ This exposes some functions to help parse those, into text, markdown, or just ex

 ```python
 from google_takeout_parser.path_dispatch import TakeoutParser
 from google_takeout_parser.models import CSVYoutubeComment
-from google_takeout_parser.parse_csv import extract_comment_links, reconstruct_comment_content
+from google_takeout_parser.parse_csv import (
+    extract_comment_links,
+    reconstruct_comment_content,
+)


 path = "./Takeout-1599315526"
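To illustrate the kind of work `extract_comment_links` does in the hunk above, here is an illustrative stand-in that pulls URLs out of an already-reconstructed comment body; the real function parses the structured JSON embedded in Takeout's CSV export, so treat this as a sketch only:

```python
import re
from typing import List


def extract_links_sketch(text: str) -> List[str]:
    """Hypothetical analog of extract_comment_links: find URLs in a comment."""
    return re.findall(r"https?://\S+", text)


links = extract_links_sketch("nice video! see https://example.com/ref")
```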

google_takeout_parser/models.py

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ def key(self) -> int:
     return int(self.dt.timestamp())


-# considered re-using model above, but might be confusing
+# considered reusing model above, but might be confusing
 # and its useful to know if a message was from a livestream
 # or a VOD
 @dataclass
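The `key` method visible in this hunk's context is what the merge logic can de-duplicate on. A minimal self-contained model with the same shape (the class name here is a stand-in, not the library's):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Event:  # minimal stand-in for a model exposing the key shown above
    dt: datetime

    @property
    def key(self) -> int:
        # same shape as models.py: the epoch timestamp as an int
        return int(self.dt.timestamp())


e = Event(datetime(2020, 1, 1, tzinfo=timezone.utc))
```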

tests/test_json.py

Lines changed: 1 addition & 1 deletion
@@ -508,7 +508,7 @@ def test_keep(tmp_path_f: Path) -> None:

 def test_keep_2021(tmp_path_f: Path) -> None:
     """
-    Check that pre-April 2022 (or earler) Keep entries which didn't have createdTimestampUsec are parsed gracefully
+    Check that pre-April 2022 (or earlier) Keep entries which didn't have createdTimestampUsec are parsed gracefully
     """
     fp = tmp_path_f / "file"
     fp.write_text(
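The graceful handling this test exercises comes down to tolerating a missing `createdTimestampUsec` field. A sketch of that fallback idea, with a hypothetical helper and field names that may differ from the real parser's:

```python
def parse_created_usec(note: dict) -> int:
    """Hypothetical fallback: prefer createdTimestampUsec, else another field."""
    usec = note.get("createdTimestampUsec")
    if usec is None:
        # illustrative fallback field; the real Keep parser's logic may differ
        usec = note.get("userEditedTimestampUsec", 0)
    return int(usec)


# Old-style Keep entries without createdTimestampUsec still parse to a value.
ts = parse_created_usec({})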
