Time Synchronization and Daylight Saving Time (DST) in IoT Applications

Time is one of the trickiest things to handle in computer systems with historical data. As evidence of this, two of the five Simple IoT ADRs deal with time, and another time-related ADR is in progress.

According to timeanddate.com, clocks in the US change at 2 AM. Grafana shows the spring transition on March 12, 2023:

Notice the time jumps from 1:50 AM to 3:00 AM, skipping the 2 AM hour entirely.

On Nov 6, 2022, the clocks went back. Notice in the Grafana chart below that the 1 AM hour appears twice.
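These transitions are easy to reproduce in code. Below is a minimal Go sketch (the America/New_York zone is an assumed example; any US zone behaves the same) that steps through UTC across the two transitions above and prints the local wall clock, showing the skipped 2 AM hour and the repeated 1 AM hour:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}

	// Spring forward, March 12, 2023: step through UTC in 30-minute
	// increments and print the local wall clock. 2 AM never appears.
	t := time.Date(2023, time.March, 12, 6, 30, 0, 0, time.UTC) // 1:30 AM EST
	for i := 0; i < 3; i++ {
		fmt.Println(t.In(loc).Format("2006-01-02 15:04 MST"))
		t = t.Add(30 * time.Minute)
	}
	// 2023-03-12 01:30 EST
	// 2023-03-12 03:00 EDT
	// 2023-03-12 03:30 EDT

	// Fall back, Nov 6, 2022: the 1 AM hour occurs twice, once as EDT
	// and once as EST.
	t = time.Date(2022, time.November, 6, 5, 30, 0, 0, time.UTC) // 1:30 AM EDT
	for i := 0; i < 3; i++ {
		fmt.Println(t.In(loc).Format("2006-01-02 15:04 MST"))
		t = t.Add(30 * time.Minute)
	}
	// 2022-11-06 01:30 EDT
	// 2022-11-06 01:00 EST
	// 2022-11-06 01:30 EST
}
```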

A number of questions come up around this issue:

  • If we are totalizing daily flow, what should we do on days when we add or subtract an hour? It probably does not matter in most cases, but in some it may (see the sketch after this list).
  • How do we handle this when time is used in rule schedules? Do we store time as UTC or local time + time zone?
  • When someone in a different time zone is looking at data, do we display the data’s local time or the user’s local time?
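On the daily-totalizing question, the core issue is that a local calendar day is not always 24 hours long. A rough Go sketch (zone chosen arbitrarily) that computes local day boundaries and their lengths:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}

	// Local midnights on three example days.
	days := []time.Time{
		time.Date(2023, time.March, 12, 0, 0, 0, 0, loc),    // spring forward: 23-hour day
		time.Date(2022, time.November, 6, 0, 0, 0, 0, loc),   // fall back: 25-hour day
		time.Date(2023, time.March, 13, 0, 0, 0, 0, loc),     // ordinary 24-hour day
	}

	for _, start := range days {
		end := start.AddDate(0, 0, 1) // next local midnight
		fmt.Printf("%s has %v of data\n", start.Format("2006-01-02"), end.Sub(start))
	}
	// 2023-03-12 has 23h0m0s of data
	// 2022-11-06 has 25h0m0s of data
	// 2023-03-13 has 24h0m0s of data
}
```

So a daily total computed between local midnights will silently cover an hour more or less of data twice a year; whether that matters depends on the application.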

The simplest approach is to do everything in UTC, but this is a poor user experience if users have to translate every timestamp they read. Additionally, if time is used in schedule rules, the schedules need to be adjusted every time local time shifts.
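One common compromise is to store and compare everything in UTC but define schedule rules as local wall-clock times, so they track DST automatically. A hedged sketch of that idea in Go (the zone and the 8 AM–5 PM hours are made-up example values):

```go
package main

import (
	"fmt"
	"time"
)

// activeAt reports whether a daily schedule, defined as local wall-clock
// hours in the given zone, is active at the UTC instant t.
func activeAt(t time.Time, loc *time.Location, startHour, endHour int) bool {
	local := t.In(loc)
	return local.Hour() >= startHour && local.Hour() < endHour
}

func main() {
	loc, err := time.LoadLocation("America/Chicago")
	if err != nil {
		panic(err)
	}

	// 8 AM–5 PM local schedule, checked at two UTC instants on either
	// side of the March 2023 transition.
	before := time.Date(2023, time.March, 11, 14, 30, 0, 0, time.UTC) // 8:30 AM CST
	after := time.Date(2023, time.March, 13, 13, 30, 0, 0, time.UTC)  // 8:30 AM CDT

	fmt.Println(activeAt(before, loc, 8, 17)) // true
	fmt.Println(activeAt(after, loc, 8, 17))  // true, no manual adjustment needed
}
```

The trade-off is that the rule now fires at different UTC times in summer and winter, which is usually what users expect from a "run at 8 AM" schedule.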

The Meteor framework attempts to synchronize time in the client frontend with the server time:

This library is a crude approximation of NTP, at the moment. It’s empirically shown to be accurate to under 100 ms on the meteor.com servers.
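The underlying technique is a single NTP-style round trip: record when the client sent the request and when it got the reply, ask the server what time it handled the request, and assume the network delay is symmetric. This is not Meteor's actual code, just a rough Go illustration of the offset estimate:

```go
package main

import (
	"fmt"
	"time"
)

// estimateOffset estimates how far the client clock is behind the server
// from a single request/response pair, assuming the network delay is the
// same in both directions.
func estimateOffset(clientSend, serverTime, clientReceive time.Time) time.Duration {
	roundTrip := clientReceive.Sub(clientSend)
	// Assume the server timestamp was taken halfway through the round trip.
	return serverTime.Sub(clientSend.Add(roundTrip / 2))
}

func main() {
	// Simulated measurement: client clock is ~250 ms behind the server,
	// with a 40 ms round trip.
	clientSend := time.Date(2023, time.March, 12, 12, 0, 0, 0, time.UTC)
	serverTime := clientSend.Add(250*time.Millisecond + 20*time.Millisecond)
	clientReceive := clientSend.Add(40 * time.Millisecond)

	fmt.Println(estimateOffset(clientSend, serverTime, clientReceive)) // 250ms
}
```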

Another example of problems caused by DST:

https://backstage.forgerock.com/docs/idcloud/latest/idm-schedules/schedules-dst.html

This is a good discussion of this topic:

Comment:

How do you store that info? My best solution is to use UTC + one “master time zone”. The users in the “master time zone” win (stay fixed).

Things can get pretty tricky, but in general I have found that UTC solves more problems than it introduces.

I’m coming to the same conclusion.

My opinion is that all systems and data storage (databases, logs, etc.) should always use only UTC. Then, when presenting the data to humans in fancy ways like GUIs or web interfaces, it’s the job of the GUI or web interface to translate the UTC timestamps into whatever local time the human wants. For situations like daylight saving time changes, the GUI or web interface needs to clearly show, with some kind of marker, why a time happens twice or not at all on a given day.
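As a concrete illustration of that split, here is a small Go sketch (zone assumed) where UTC timestamps are what gets stored, and the presentation layer appends the zone abbreviation and offset so the repeated 1:30 AM on a fall-back day is still unambiguous:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	loc, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}

	// Timestamps as they would sit in the database: plain UTC.
	stored := []time.Time{
		time.Date(2022, time.November, 6, 5, 30, 0, 0, time.UTC),
		time.Date(2022, time.November, 6, 6, 30, 0, 0, time.UTC),
	}

	for _, t := range stored {
		fmt.Printf("stored %s  ->  shown %s\n",
			t.Format(time.RFC3339),
			t.In(loc).Format("2006-01-02 15:04 -0700 MST"))
	}
	// Both rows display as 1:30 on Nov 6, but the EDT/EST (-0400/-0500)
	// marker distinguishes the first 1:30 from the second.
}
```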

I do have a personal problem with the name “Universal Coordinated Time,” as it’s not “universal,” only global. Referencing time on the moon or Mars to how fast the Earth is spinning seems like a silly way to keep time in those locations. One second should stay one second regardless of where you are, but even that isn’t true, as it depends on how fast you’re traveling. Time is a funny thing :slight_smile:


This prompted me to read up on UTC a bit, and I discovered it stands for “Coordinated Universal Time,” not “Universal Coordinated Time.” That is perhaps a little better, but it still seems “Global” would have been better than “Universal.”

Why is UTC not abbreviated CUT?


Because of the French :wink:

You could use smeared time, like the hyperscalers use.

This is a clock where a second is just a tiny bit longer. This smears out the leap second.
They need this because their systems are eventually consistent too, and a leap jump is really bad.

There are golang libs that do all this for you.
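For reference, here is a toy Go sketch of a linear leap-second smear, similar in spirit to the 24-hour smear Google has described (the window and leap date are assumed example values; real implementations live in the NTP/clock layer, not in application code):

```go
package main

import (
	"fmt"
	"time"
)

// smearApplied returns how much of a one-second positive leap second a
// linearly smearing clock has absorbed by instant t, assuming the leap is
// spread evenly over the 24 hours centered on it. Illustrative only.
func smearApplied(t, leap time.Time) time.Duration {
	windowStart := leap.Add(-12 * time.Hour)
	windowEnd := leap.Add(12 * time.Hour)

	switch {
	case !t.After(windowStart):
		return 0
	case !t.Before(windowEnd):
		return time.Second
	default:
		frac := float64(t.Sub(windowStart)) / float64(windowEnd.Sub(windowStart))
		return time.Duration(frac * float64(time.Second))
	}
}

func main() {
	// The leap second inserted at the end of 2016, as an example.
	leap := time.Date(2017, time.January, 1, 0, 0, 0, 0, time.UTC)

	for _, h := range []int{-12, -6, 0, 6, 12} {
		t := leap.Add(time.Duration(h) * time.Hour)
		fmt.Printf("%+3dh relative to the leap: %v of the leap second absorbed\n",
			h, smearApplied(t, leap))
	}
}
```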


The problem with smearing is that it doesn’t comply with pretty much any of the actual standards around time. It also means that one second is not one second, which matters for use cases where measuring time accurately is important, or where synchronizing to other clocks is important. Even though most use cases where measuring actual duration matters shouldn’t be using the time-of-day clock, it often gets used anyway. And if you want to sync to something like GPS time, you need extra code to calculate how much smear to apply, beyond just knowing the GPS time offset.

There’s no right answer, just many different choices.
