Replies: 9 comments 1 reply
-
Thank you for starting the discussion on this @jawache! I think the granularity of the calculation will mostly depend on the state of the data that is made available, right? Especially in places where we don't have organizations like WattTime offering data at higher granularity, organizations may be limited to using coarser data. Off the top of my head, for meaningful compute jobs, maybe ~30-minute buckets would be even better (provided that kind of data is potentially available, @Henry-WattTime)? But a question then for you @jawache: how should we adapt the SCI so that it can be flexible in that calculation?
-
Vaughn: Australia has 5-minute pricing increments. EVs are going to be plugged in in the evening; this will cause demand to soar and change the grid dynamics (e.g. people finish work at 5pm, so this kind of spike is possible).

Will: Calculations could likely be programmatic (e.g. the trapezoidal rule for integration: https://numpy.org/doc/stable/reference/generated/numpy.trapz.html).

Vaughn: Yearly averages would be very different, e.g. solar intensity varies with seasonal and diurnal changes.

Will: The variations would be quite severe. There would be different tiers and granularities. Stating the sample rates as part of the assumptions would be useful. Compare real-time curves down to 5 minutes against annual calculations to see what we are missing. We might need someone with more formal experience in carbon measurement methodologies; @buchananwp can help bring someone in from Microsoft for this (@seanmcilroy29, can you create an action item for this?).

Sarah: The bucket size will depend on the impact of the software. E.g. for a batch job running for 3 hours, hourly buckets might be enough, but for real-time applications, even smaller buckets would be more actionable.
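To make Will's suggestion concrete, here is a minimal sketch of estimating energy from power samples with the trapezoidal rule (this is what `numpy.trapz` computes; a pure-Python version is shown here to keep it self-contained). The power readings and the 5-minute sample interval are hypothetical values, not from the discussion.

```python
def trapezoid(ys, dx):
    """Trapezoidal integration of evenly spaced samples ys with spacing dx."""
    return sum((a + b) / 2 * dx for a, b in zip(ys, ys[1:]))

# Hypothetical power readings (kW) sampled every 5 minutes.
power_kw = [5.0, 7.0, 6.0, 4.0]
dt_hours = 5 / 60  # 5-minute sample interval, expressed in hours

energy_kwh = trapezoid(power_kw, dt_hours)
print(round(energy_kwh, 3))  # 1.458
```

The same result falls out of `numpy.trapz(power_kw, dx=dt_hours)`; the point is only that, given fine-grained samples, the integration step is programmatic rather than manual.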
-
I missed this discussion, but based on the comments, maybe we should say that you should use the finest granularity possible with the information you have? If you are only able to get monthly energy consumption and annual emissions factors, then you should use those to assess your software and make decisions about architecture and region. If you are able to get hourly or 5-minute data, you would then also be able to optimize timing. It sounds like it may be use-case specific?
-
I like the formulation you make here, @Henry-WattTime, in that the SCI can prioritize the use of more fine-grained data. But would that then fall under a recommendation? Is there a way we can make this a structural quality of the SCI?
-
I think this might be a chicken-and-egg question. If your application is not carbon aware, then it doesn't matter what granularity of CI you use, since your app doesn't change at all as that CI number changes, so it can be a yearly average. If your application is carbon aware, then you will already have fine-grained (<= 1 hr) carbon intensity data, since without that your application simply cannot be carbon aware.

Perhaps the point I am making is that we standardise on the granularities to make everyone's lives a lot easier: use a yearly average if your application is not carbon aware, or an hourly average if it is.

I really like using hourly buckets; it aligns the SCI with the 24/7 hourly matching goals that organisations are now starting to subscribe to. If we snap to hourly buckets, the SCI will naturally be the standard people use when their organisation has set a 24/7 hourly matching target.
-
I think there might be a case where you can make meaningful "carbon aware" decisions using annual emissions factors. For example, you could have no timing flexibility in your service, but you could still decide what hardware (efficiency) or location (also carbon aware?) to run on. Both of those affect the total emissions of your software and can be optimized. Also, I assume you mean annual marginal or hourly/granular marginal?
-
This is making me think of a formula that might be effective (stealing from @buchananwp in PR #28):

C = O + E

Where:

- C = total carbon
- O = operational emissions
- E = embodied emissions
- e = energy measurement
- i = emissions intensity

Taking the things we have discussed thus far: to change O (operational emissions) you can either make your hardware more efficient, reducing e (energy measurement), or change the timing or location of your energy consumption, which changes i (emissions intensity). To change E (embodied emissions) you can use less hardware, requiring less overall investment in infrastructure. Overall, per Asim's point, all of these actions (and we think all potential actions are captured) help reduce C (total carbon). I think I'm just catching up/summarizing where we stand.
-
That's a great point about yearly averages still being useful for region shifting; I hadn't considered that, and it makes a lot of sense. And yes, I am talking about marginal averages.
-
@jawache, can I conclude this discussion, given that we've accounted for this in the spec through #76?
-
The operational carbon emissions for a piece of software are:
energy consumption * carbon intensity
Carbon awareness is a pillar of green software: if an application does more when more renewables are available and less when fewer are, then it has reduced its real-world carbon emissions. The calculation above needs to be sensitive to that kind of change in an application. The problem with most carbon emissions metrics these days is that they are not sensitive to carbon awareness, so it's hard for product teams to justify investing in making their application carbon aware.
The number in the above equation that links to carbon awareness is
carbon intensity
but I don't think we can treat it as one number. I think we need to treat the above as a bucketed calculation, a weighted average.

So let's say my application runs for two hours. In the first hour I use 5 kWh and in the second hour I use 45 kWh, so the total energy consumption is 50 kWh.
The carbon intensity in the first hour is 2000 gCO2/kWh and the carbon intensity in the second hour is 400 gCO2/kWh. What is the carbon intensity number I should use in the above equation?
It's not the simple average
(2000 + 400) / 2
since that doesn't make sense. It has to be the weighted average based on the amount of energy consumed in each of those hours:

( (5 * 2000) + (45 * 400) ) / 50 = 560 gCO2/kWh
If we use an hourly weighted average, then making your application more carbon aware, doing more when more renewables are available and carbon intensity is lower, will bring the operational carbon emissions number down.
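The worked example above can be sketched as a small function; the bucket values (5 kWh at 2000 gCO2/kWh, 45 kWh at 400 gCO2/kWh) are taken directly from the example, while the function name is just illustrative.

```python
def weighted_intensity(buckets):
    """Energy-weighted average carbon intensity.

    buckets: list of (energy_kwh, intensity_gco2_per_kwh) tuples, one per hour.
    """
    total_energy = sum(e for e, _ in buckets)
    total_emissions = sum(e * i for e, i in buckets)  # gCO2 emitted per bucket
    return total_emissions / total_energy

# Hour 1: 5 kWh at 2000 gCO2/kWh; hour 2: 45 kWh at 400 gCO2/kWh.
print(weighted_intensity([(5, 2000), (45, 400)]))  # 560.0
```

Note how shifting load into the low-intensity hour drives the result down, which is exactly the sensitivity to carbon awareness the calculation needs.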
Questions: