Powerwall History Import #644
Replies: 2 comments
-
Hi @SeanNazareth - to start, I think you might enjoy a history of the project, which could also answer some of your confusion: #645 We have some work to do to make it easier to use and understand all these tools and random bits. Fundamentally, local access provides higher fidelity (5s sample rate vs 5m), richer data (e.g. fan, temps, strings) and more accurate results. But there are issues getting that local access, as you have seen. ;)
-
Hi @SeanNazareth - I can clarify your questions, or at the least, show you a way you can utilise the tesla-history docker container to avoid having to install python dependencies on your system (and without it running in daemon mode, which you don't want or need).

First off, Jason has written a great history summary of the project - it's both informative and entertaining! The project has evolved a lot over the past few years, and it has certainly felt like we are the Rebels fighting against the evil Empire at times. 😄

In the context of your questions though, it's important you also understand the evolution of the tesla-history script specifically - which will explain why there's a docker container and "daemon" mode in the first place, and what its purpose was....

The script was originally developed to pull past data from the Tesla Cloud into the dashboard, and to fill in missing data or gaps due to loss of communication with the gateway. Refer: #12

Back then, Powerwall Dashboard had only one mode - Local Access. This local-only mode required a Powerwall + Gateway however, which meant it couldn't be used by Tesla Solar-Only users. As the script was getting data from the cloud instead, we then adapted it for Tesla Solar-Only users. This is why there is a "daemon" mode, with a supporting docker container. It would continually poll the Tesla Cloud to retrieve solar data. Refer: #183

The "daemon" mode operates differently to normal usage of the script, in that it does not look for gaps in existing data, and will always write the energy data to InfluxDB regardless (i.e. the same as using the --force option of the script). It was an alternative for Tesla owners that didn't have a Powerwall (i.e. Solar-Only) or local Gateway connectivity.

Since then, Powerwall Dashboard and pypowerwall have evolved, and from these learnings now have built-in Cloud & FleetAPI modes. This has made the tesla-history "daemon" mode and docker container somewhat deprecated. However, there are still some in the community with specific setups that are utilising the docker container with daemon mode, so it's not completely defunct.

And, as a bonus, since the container exists already.... Yes - you can use it as a means to avoid needing to install python dependencies! Since you will not want it to run in "daemon" mode though, there are a few simple changes you need to make to the example docker compose file to achieve this.

So to answer your question 1, here are the steps you can follow to get it working.

Create a file named powerwall.extend.yml. NOTE: Don't modify the stack's default compose file.

# Change to PWD directory
cd ~/Powerwall-Dashboard/
# Create docker compose extend file
vi powerwall.extend.yml

Add the below to the docker compose extend file. This is a modified version of the sample provided in the tools/tesla-history folder, with 3 additional lines added (per the comments).
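The compose fragment from the original reply is not reproduced above, so the following is only an illustrative sketch of what such an extend file could look like. The image name, volume mount and the override lines that keep the container idle (rather than starting the daemon) are my assumptions, not the exact contents of the sample in tools/tesla-history - compare against that sample before using it.

version: "3.5"
services:
    tesla-history:
        # Image name is an assumption - use whatever the sample compose file specifies
        image: jasonacox/tesla-history:latest
        container_name: tesla-history
        hostname: tesla-history
        restart: unless-stopped
        volumes:
            # Assumed bind mount so the config and auth token persist on the host
            - type: bind
              source: ./tools/tesla-history
              target: /var/lib/tesla-history
        # Override lines (assumption): keep the container idle instead of starting
        # the daemon, so it only does work when you "docker exec" a command into it
        entrypoint: /bin/sh
        command: ["-c", "while :; do sleep 3600; done"]

Whatever the exact contents, the intent of the extra lines is as described above: the container stays up, but never runs in daemon mode.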
Next, refresh the stack.

# Refresh stack - pulls tesla-history container and starts it
./compose-dash.sh up -d
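Optionally, you can first confirm the container actually came up - this is just standard Docker, nothing specific to this project:

# Verify the tesla-history container is running
docker ps --filter name=tesla-history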
Now you will be able to run the tesla-history tool as below. Python dependencies will not need to be installed on your host.

# Run tesla-history --login to initiate account setup & creation of auth token first
docker exec -it tesla-history python3 tesla-history.py --login

After setup & auth token creation, any time you want to run the tesla-history script, just remember to prefix the regular commands with docker exec -it tesla-history.

# e.g. Run script to retrieve data for yesterday
docker exec -it tesla-history python3 tesla-history.py --yesterday

This will utilise the running container, therefore avoiding the need to install python dependencies.

To answer your question 2. Please refer to:

To answer your question 3. As far as I'm aware, there is no difference in mode2 (Tesla Cloud) and mode3 (FleetAPI) metrics or granularity. Both would provide the same data. The difference is simply that the FleetAPI is the official Tesla supported API.

3a. Make sure you are using TEDAPI to connect to your gateway.
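One last tip on the --test concern from your question 1: the same docker exec prefix works for a dry run too. A sketch, assuming --test can be combined with the date options shown above - check tesla-history.py --help inside the container to confirm the exact flags:

# e.g. Dry run for yesterday (test mode - should not write to InfluxDB)
docker exec -it tesla-history python3 tesla-history.py --test --yesterday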
-
Hi,
I was trying to add some historical data using the tesla-history.py tool. While I did figure out most of how to use it, I noticed a few things that I have questions on.
1. The instructions in the README show you how to run the tool interactively. This is useful to generate the authentication info and run some tests. However, this also means that you need to install the python "dependencies". I noticed that there was a dockerfile and a docker image, but it wasn't obvious how to run it. I did see the YAML docker-compose file, but I wasn't able to just use that YAML file as-is. Even if I could "launch" the docker image, it would be daemon mode (and not --test mode). While I'm not a super-expert on the docker-compose configuration, it looks like we need to combine this fragment with the docker-compose YAML file that I use to launch the original "stack"?
2. For testing, I used a different computer where I launched the "stack" with mode2 (cloud mode). After a few hours passed, I interactively launched the history tool and backfilled data. This did seem to work, but....
a. It didn't have any string data or system vitals.
b. While the data was pretty similar to that shown in the tesla app, it didn't exactly match. I was surprised since both the app and now the dashboard are using the same 5m granularity data, right?
3. Can you comment on the difference between using Tesla Cloud (mode2) and Tesla FleetAPI (mode3)? I've only set up mode2, but I'm not sure if mode3 has more metrics or a different granularity.
A few quick notes:
a. When I try the URL http://:8675/fans/pw, no data is returned, and I don't see any temperature/fan data in the dashboard of the 4.7.2 version. Not sure why. I also tried this on my other docker instance which is using MODE4, with a similar result (fans/pw endpoint).
b. I'm not sure what the "daemon" mode does. It implies that it would find any "missing" data, but it wasn't clear if this is what I really want. Would the preferred intent be to run the docker container with the history.py running in daemon mode to automatically detect and fix any missing data?