Time is one of the trickiest things to handle in computer systems that keep historical data. As evidence of this, 2 of the 5 Simple IoT ADRs deal with time, and another ADR about time is in progress.
According to timeanddate.com, the time changes at 2 AM in the US. Grafana shows this on March 12, 2023:
If we are totalizing daily flow, what should we do on days when we add or subtract an hour? (A local-time “day” is only 23 hours on the spring-forward date and 25 hours on the fall-back date.) It probably does not matter in most cases, but in some it may.
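A quick Go sketch makes the short day visible (the zone and date here are just example values matching the screenshot above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A "day" measured in local wall-clock time is not always 24 hours.
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}
	start := time.Date(2023, 3, 12, 0, 0, 0, 0, loc) // midnight, spring-forward day
	end := start.AddDate(0, 0, 1)                    // midnight the next day
	fmt.Println(end.Sub(start))                      // 23h0m0s
}
```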
How do we handle this when time is used in rule schedules? Do we store time as UTC or local time + time zone?
When someone in a different time zone is looking at data, do we display the data’s local time or the user’s local time?
The simplest approach is to do everything in UTC, but this makes for a poor user experience when users have to translate every timestamp they read. Additionally, if time is used in schedule rules, the rules need to be adjusted every time local time shifts, as the sketch below illustrates.
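As a sketch of that tradeoff (assuming Go, with a made-up 07:00 daily rule and the America/New_York zone), a schedule stored as local wall-clock time plus a time zone keeps firing at the same local hour, while the underlying UTC instant shifts by an hour across the DST boundary:

```go
package main

import (
	"fmt"
	"time"
)

// nextRun returns the next 07:00 wall-clock occurrence in loc.
// Storing the rule as "07:00 in America/New_York" (rather than a
// fixed UTC instant) keeps it at 7 AM local across DST changes.
func nextRun(after time.Time, loc *time.Location) time.Time {
	t := after.In(loc)
	run := time.Date(t.Year(), t.Month(), t.Day(), 7, 0, 0, 0, loc)
	if !run.After(t) {
		run = run.AddDate(0, 0, 1)
	}
	return run
}

func main() {
	loc, _ := time.LoadLocation("America/New_York")
	// The runs straddling the March 12, 2023 DST change: the UTC
	// instant moves from 12:00 to 11:00, the local time stays 7 AM.
	fmt.Println(nextRun(time.Date(2023, 3, 10, 12, 0, 0, 0, time.UTC), loc).UTC())
	fmt.Println(nextRun(time.Date(2023, 3, 11, 12, 0, 0, 0, time.UTC), loc).UTC())
}
```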
My opinion is that all systems and data storage (databases, logs, etc.) should always use only UTC. When presenting the data to humans in fancy ways like GUIs or web interfaces, it’s the job of the GUI or web interface to translate the UTC timestamps into whatever local time the human wants. For situations like daylight saving time changes, the GUI or web interface needs to clearly show, with some kind of marker, why a time happens twice or not at all on a given day.
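Here’s a minimal Go sketch of that split (the stored instants and the America/New_York zone are just example values): timestamps live in UTC, the presentation layer converts, and the zone abbreviation serves as the marker that explains the skipped hour:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Storage side: everything is UTC. These two instants straddle
	// the US spring-forward on March 12, 2023.
	stored := []time.Time{
		time.Date(2023, 3, 12, 6, 30, 0, 0, time.UTC),
		time.Date(2023, 3, 12, 7, 30, 0, 0, time.UTC),
	}

	// Presentation side: convert to the viewer's zone only for display.
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}
	for _, t := range stored {
		// Printing the zone abbreviation (EST vs EDT) is one way to
		// mark why local times skip or repeat an hour on transition days.
		fmt.Printf("%s -> %s\n",
			t.Format(time.RFC3339),
			t.In(loc).Format("2006-01-02 15:04 MST"))
	}
	// Output:
	// 2023-03-12T06:30:00Z -> 2023-03-12 01:30 EST
	// 2023-03-12T07:30:00Z -> 2023-03-12 03:30 EDT  (02:30 never happened)
}
```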
I do have a personal problem with the name “Universal Coordinated Time,” as it’s not “universal,” only global. Referencing time on the Moon or Mars to how fast the Earth is spinning seems like a silly way to keep time in those locations. One second should stay one second regardless of where you are, but even that isn’t true, as it depends on how fast you’re traveling. Time is a funny thing.
This prompted me to read up on UTC a bit, and I discovered it stands for “Coordinated Universal Time,” not “Universal Coordinated Time.” That is perhaps a little better, but it still seems “Global” would have been better than “Universal.”
You could use smeared time, like the hyperscalers do.
This is a clock where each second is just a tiny bit longer. This smears out the leap second.
They need this because their systems are eventually consistent, and a one-second jump is really bad in that world.
The problem with smearing is that it doesn’t comply with pretty much any of the actual standards around time. It also means that one second is not one second, which matters for use cases where measuring time is actually important or where synchronizing to other clocks is important. Even though most use cases where measuring actual duration matters shouldn’t be using the time-of-day clock, it often gets used anyway. And if you want to sync to something like GPS time, you need extra code to calculate how much smear to apply, beyond just knowing the GPS time offset.
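To make that “extra code” point concrete, here’s a rough Go sketch of a 24-hour linear smear (similar in spirit to the scheme Google has described publicly; the window placement and sign convention are assumptions for illustration). A client that wants true UTC has to compute where it is in the smear window, on top of any fixed offset like GPS–UTC:

```go
package main

import (
	"fmt"
	"time"
)

// smearOffset returns how far a linearly smeared clock lags true UTC
// at time t, for a positive leap second spread over a 24-hour window
// centered on the leap. The window and sign convention here are
// illustrative assumptions, not any particular vendor's exact scheme.
func smearOffset(t, leap time.Time) time.Duration {
	const window = 24 * time.Hour
	start := leap.Add(-window / 2)
	end := leap.Add(window / 2)

	switch {
	case t.Before(start):
		return 0 // smear has not started yet
	case t.After(end):
		return time.Second // the full leap second has been absorbed
	default:
		// Fraction of the window elapsed, scaled to one second.
		frac := float64(t.Sub(start)) / float64(window)
		return time.Duration(frac * float64(time.Second))
	}
}

func main() {
	// The real positive leap second at the end of 2016-12-31.
	leap := time.Date(2017, 1, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(smearOffset(leap.Add(6*time.Hour), leap)) // 750ms
}
```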
There’s no right answer, just many different choices.