I just finished reading "A Deepness in the Sky", a 1999 SF novel by Vernor Vinge. It's a great book, with an unexpected reference to seconds since the epoch.
>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
jvanderbot 24 days ago [-]
That is one of my favorite books of all time. The use of subtle software references is really great.
I recommend the Bobiverse series for anyone who wants more "computer science in space", or Permutation City for anyone who wants more "exploration of humans + simulations and computers".
nnf 24 days ago [-]
I’ll second the Bobiverse series, one of my favorites. Its descriptions of new technologies are at just the right level and depth, I think, and it’s subtly hilarious.
MobileVet 24 days ago [-]
Just starting the third book, really fun series. Highly recommend for anyone interested in computing and science fiction.
noja 24 days ago [-]
The audio books are narrated brilliantly too. Strange fact: the Bobiverse has no dedicated Wikipedia page.
MobileVet 24 days ago [-]
Ray Porter, the narrator, is quite the talent. He does a brilliant job with ‘Project: Hail Mary’ as well which is the second book from the author of ‘The Martian.’ It has quite a bit more science and humor than The Martian and is one of my favorites.
ascorbic 23 days ago [-]
Thanks for the recommendation. Looks like they're on Kindle Unlimited so I'll definitely give them a try
move-on-by 25 days ago [-]
Without fail, if I read about timekeeping, I learn something new. I had always thought of Unix time as the simplest way to track time (as long as you consider rollovers). I knew of leap seconds, but somehow didn’t think they applied here. Clearly I hadn’t thought about it enough. Good post.
I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.
foobar1962 25 days ago [-]
Saying that something happened x-number of seconds (or minutes, hours, days or weeks) ago (or in the future) is simple: it’s giving that point in time a calendar date that’s tricky.
miki123211 24 days ago [-]
> Saying that something happened x-number of [...] days or weeks) ago (or in the future) is simple
It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?
You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on Jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know whether the EU abolishes DST between now and 2030 or not.
To reiterate the point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6 PM CET on Jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
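A minimal sketch of that ambiguity in Python, using whatever zoneinfo rules are installed today (the date and zone are just illustrative, picked to straddle the US spring-forward on 2025-03-09):

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    ny = ZoneInfo("America/New_York")
    # 3 PM local time, two days before the spring-forward transition.
    start = datetime(2025, 3, 7, 15, 0, tzinfo=ny)

    # Calendar arithmetic: "2 days later" keeps the wall clock at 3 PM.
    calendar_later = start + timedelta(days=2)

    # Physical arithmetic: exactly 48 * 3600 seconds later.
    physical_later = (start.astimezone(ZoneInfo("UTC")) + timedelta(hours=48)).astimezone(ny)

    print(calendar_later)  # 2025-03-09 15:00:00-04:00
    print(physical_later)  # 2025-03-09 16:00:00-04:00, one wall-clock hour later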
Izkata 24 days ago [-]
You ignored the last part of their comment. All your examples are things they did say are hard.
Also, natural events are the other way around: we can know they're X in the future, but not the exact calendar date/time.
PaulDavisThe1st 24 days ago [-]
No. The problems begin because GP included the idea of saying "N <calendar units> in the future".
If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"
But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.
AnthonyMouse 24 days ago [-]
The real problem here is that people keep trying to screw up the simple thing.
If you want to know the timestamp of "two days from now" then you need to know all kinds of things like what time zone you're talking about and if there are any leap seconds etc. That would tell you if "two days from now" is in 172800 seconds or 172801 seconds or 169201 or 176400 etc.
But the seconds-counting thing should be doing absolutely nothing other than counting seconds and doing otherwise is crazy. The conversion from that into calendar dates and so on is for a separate library which is aware of all these contextual things that allow it to do the conversion. What we do not need and should not have is for the seconds counting thing to contain two identical timestamps that refer to two independent points in time. It should just count seconds.
growse 24 days ago [-]
Agree, but people often miss that there are two different use cases here, with different requirements.
"2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
The intent of the thing demanding a future event matters. So you can have the right software abstractions all you like and people will still use the wrong thing.
The problem is that programmers are human, and humans don't reason in monotonic counters :)
PaulDavisThe1st 24 days ago [-]
One might also recall the late Gregory Bateson's reiteration that "number and quantity are not the same thing - you can have 5 oranges but you can never have 5 gallons of water" [0]
Seconds are numbers; calendrical units are quantities.
[0] Bateson was, in some ways, anticipating the divide between the digital and analog worlds.
AnthonyMouse 23 days ago [-]
> "2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
Which is why you need some means to specify which one you want from the library that converts from the monotonic counter to calendar dates.
Anyone who tries to address the distinction by molesting the monotonic counter is doing it wrong.
sarusso 24 days ago [-]
I recently built a small Python library to try to get time management right [1]. Exactly because of the first part of your comment, I concluded that the only way to apply a time delta in "calendar" units is to provide the starting point. It was fun developing variable-length time spans :) I did not, however, address leap seconds.
You are very right that future calendar arithmetic is undefined. I guess the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events (as if the Earth were to slow its rotation). Otherwise we should just stop using calendar arithmetic, but in many fields that is simply infeasible...
> I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events
No, the only way is to store the user's intent, and recalculate based on that intent when needed.
When the user schedules a meeting for 2PM while being in Glasgow, the meeting should stay at 2PM Glasgow time, even in a hypothetical world where Scotland achieves independence from the UK and they get different ideas whether to do daylight saving or not.
The problem is determining what the user's intent actually is; if they set a reminder for 5PM while in NY, do they want it to be 5PM NY time in whatever timezone they're currently in (because their favorite football team plays at 5PM every week), or do they want it to be at 5PM in their current timezone (because they need to take their medicine at 5PM, whatever that currently means)?
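A sketch of the "store the intent" idea: keep the wall-clock time plus the IANA zone the user picked, and only resolve it to an instant when needed (the Meeting class and its fields are illustrative, not any particular calendar API):

    from dataclasses import dataclass
    from datetime import datetime
    from zoneinfo import ZoneInfo

    @dataclass
    class Meeting:
        wall_time: str  # what the user actually asked for, e.g. "2030-01-01 14:00"
        zone: str       # e.g. "Europe/London" for Glasgow

        def instant_utc(self) -> datetime:
            # Resolved against whatever tz rules are installed *today*; if the rules
            # change before the meeting, rerunning this yields the new answer.
            local = datetime.strptime(self.wall_time, "%Y-%m-%d %H:%M")
            return local.replace(tzinfo=ZoneInfo(self.zone)).astimezone(ZoneInfo("UTC"))

    m = Meeting("2030-01-01 14:00", "Europe/London")
    print(m.instant_utc())  # 2030-01-01 14:00:00+00:00 under today's rules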
ForOldHack 15 days ago [-]
And if the number of seconds changes, such as in a batch job on a supercomputer, you should adjust the time of the computer first, and then adjust the billing for the job after it completes. I asked IBM if their quantum cloud could count the time in either direction... At first they were confused, but then they got the joke.
Dylan16807 24 days ago [-]
I would argue that 2 days and 1 hour is not a "number of seconds (or minutes, hours, days or weeks)"
If you say something will happen in three days, that's a big time window.
foobar1962 23 days ago [-]
I understand what you’re saying, but minutes, hours, days and weeks are fixed time periods that can be reduced to a number of seconds. Months and years are not, which is why I did not include those in my earlier post.
Calculating the calendar date for an event that’s 365 days in the future needs to consider whether leap-time corrections need to be made during the period. We do that already for days with our standard calendar.
Edited.
Dylan16807 23 days ago [-]
If someone says an event is in X days, they almost never mean a multiple of 86k seconds.
Really, I don't think you can reduce any of these to a specific number of seconds. If someone says an event is in 14 hours, the meaning is a lot closer to (14 ± ½) * 3600 than it is to exactly 14 * 3600.
GolDDranks 25 days ago [-]
But because of the UNIX timestamp "re-synchronization" to the current calendar dates, you can't use UNIX timestamps to do those "delta seconds" calculations if you care about the _actual_ number of seconds since something happened.
wodenokoto 25 days ago [-]
Simple as long as your precision is at milliseconds and you don’t account for space travel.
We can measure the difference in the speed of time between a valley and a mountain (“just” take an atomic clock up a mountain and wait for a bit, then bring it back to your lab, where the other atomic clock is now out of sync).
mytailorisrich 24 days ago [-]
I have come to the conclusion that TAI is the simplest and that anything else should only be used by conversion from TAI when needed (e.g. representation or interoperability).
cbarrick 24 days ago [-]
> There’s an ongoing effort to end leap seconds, hopefully by 2035.
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
paulddraper 24 days ago [-]
I agree that deviating from MST costs more than it benefits.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 differ by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
newpavlov 24 days ago [-]
In an ideal world you would be right: computer systems should have been using TAI for time tracking and converting it to UTC/local time using TZ databases.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender) and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I will not be surprised if some time in the future we get the old UTC back, but under a different name.
phicoh 24 days ago [-]
There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody cares about that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
LegionMammal978 24 days ago [-]
> There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
The only thing we can be certain of is that the Summer Solstice occurs when the midsummer sun shines through a trilithon at Stonehenge and strikes a certain point. From there we can work outwards.
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't, and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub-microsecond precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
ForOldHack 24 days ago [-]
Sorry to say, Stonehenge, or the plate on which it stands, is moving... to the east, and the wobble of the earth is changing.
gerdesj 21 days ago [-]
Isn't the Eurasian plate moving widdershins?
The Wiltshire Downs and Salisbury Plain are mostly chalk/limestone. That is a porous rock which will expand and contract on water ingress/egress and be affected by atmospheric humidity. I've no real idea, but I suspect that Stonehenge will rise and fall vertically(ish) on seasonal and other longer rhythms.
ForOldHack 15 days ago [-]
Widdershins: "in a direction contrary to the sun's course.." Interesting. The Eurasian plate is moving east, which is widdershins to the longitude, but not the latitude.
( Note: I am a Scot, and added widdershins to my dictionary. )
ForOldHack 24 days ago [-]
The hack is literally trivial. Check once a month to see if UTC != ET. If not then create a file called Leap_Second once a month, check if this file exists, and if so, then delete it, and add 1 to the value in a file called Leap_Seconds, and make a backup called 'LSSE' Leap seconds since Epoch.
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
colanderman 24 days ago [-]
Note also that the modern "UTC epoch" is January 1, 1972. Before this date, UTC used a different second than TAI: [1]
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
A funny consequence of this is that there are people alive today that do not know (and never will know) their exact age in seconds[1].
This is true even if we assume the time on the birth certificate was a time precise down to the second. It is because what was considered the length of a second during part of their life varied significantly compared to what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom
benlivengood 24 days ago [-]
[2] in a particular gravity well.
DrBazza 25 days ago [-]
There's a certain exchange out there that I wrote some code for recently, that runs on top of VAX, or rather OpenVMS, and that has an epoch of November 17, 1858, the first time I've seen a mention of a non-unix epoch in my career. Fortunately, it is abstracted to be the unix epoch in the code I was using.
pavlov 25 days ago [-]
Apparently the 1858 epoch comes from an astronomy standard calendar called the Julian Day, where day zero was in 4713 BC:
To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
hanche 23 days ago [-]
It’s called the modified Julian day (MJD). And the offset is 2,400,000.5 days.
In the Julian day way of counting, each day ended at noon, so that all astronomical observations done in one night would be the same Julian day, at least in Europe. MJD moved the epoch back to midnight.
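For anyone who wants to play with it, the relationship to Unix time is just a shift and a divide, since the Unix epoch (1970-01-01 00:00 UTC) is MJD 40587 (a rough sketch that, like Unix time itself, ignores leap seconds):

    def unix_to_mjd(unix_seconds: float) -> float:
        MJD_OF_UNIX_EPOCH = 40587  # 1970-01-01 00:00 UTC
        return MJD_OF_UNIX_EPOCH + unix_seconds / 86400.0

    print(unix_to_mjd(0))           # 40587.0
    print(unix_to_mjd(10 * 86400))  # 40597.0, i.e. 1970-01-11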
PostgreSQL internally uses a 2000-01-01 epoch for storing timestamps.
schneehertz 25 days ago [-]
This means that some time points cannot be represented by POSIX timestamps, and some POSIX timestamps do not correspond to any real time
GolDDranks 25 days ago [-]
What are POSIX timestamps that don't correspond to any real time? Or do you mean in the future if there is a negative leap second?
schneehertz 24 days ago [-]
Yes, negative leap seconds are possible in the future if leap second adjustments are not abandoned
growse 25 days ago [-]
This has always been true. Pre-1970 is not defined in Unix time.
deepsun 24 days ago [-]
Related question that leads too deep: "What was before the Big Bang?"
usrnm 25 days ago [-]
Why? time_t is signed
_kst_ 24 days ago [-]
Neither C nor POSIX requires time_t to be signed.
The Open Group Base Specifications Issue 7, 2018 edition says that "time_t shall be an integer type". Issue 8, 2024 edition says "time_t shall be an integer type with a width of at least 64 bits".
C merely says that time_t is a "real type capable of representing times". A "real type", as C defines the term, can be either integer or floating-point. It doesn't specify how time_t represents times; for example, a conforming implementation could represent 2024-12-27 02:17:31 UTC as 0x20241227021731.
It's been suggested that time_t should be unsigned so a 32-bit integer can represent times after 2038 (at the cost of not being able to represent times before 1970). Fortunately this did not catch on, and with the current POSIX requiring 64 bits, it wouldn't make much sense.
But the relevant standards don't forbid an unsigned time_t.
_kst_ 24 days ago [-]
Apparently both Pelles C for Windows and VAX/VMS use a 32-bit unsigned time_t.
growse 25 days ago [-]
From IEEE 1003.1 (and TFA):
> If year < 1970 or the value is negative, the relationship is undefined.
layer8 24 days ago [-]
In addition to being formally undefined (see sibling comment), APIs sometimes use negative time_t values to indicate error conditions and the like.
8n4vidtmkvmk 24 days ago [-]
Probably because the Gregorian calendar didn't always exist. How do you map an int to a calendar that doesn't exist?
Well, at least there isn't any POSIX timestamp that corresponds to more than one real time point. So it's better than the one representation people use for everything.
brianpan 24 days ago [-]
Not yet.
paulddraper 24 days ago [-]
No.
That'd be like saying some points in time don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
When storing dates in a database I always store them in Unix Epoch time and I don't record the timezone information on the date field (it is stored separately if there was a requirement to know the timezone).
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
lmm 25 days ago [-]
> Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
beng-nl 25 days ago [-]
I wish there were a TAI timezone: just unmodified, unleaped, untimezoned seconds, forever, in both directions. I was surprised it doesn’t exist.
maxnoe 25 days ago [-]
TAI is not a time zone. Time zones are a concept of civil timekeeping, which is tied to the UTC time scale.
TAI is a separate time scale and it is used to define UTC.
There is now CLOCK_TAI in Linux [1], tai_clock [2] in c++ and of course several high level libraries in many languages (e.g. astropy.time in Python [3])
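On Linux you can poke at CLOCK_TAI from Python as well; a small sketch, with the caveat that unless something like chrony or ntpd has told the kernel the current TAI-UTC offset, CLOCK_TAI just mirrors CLOCK_REALTIME:

    import time  # CLOCK_TAI needs Python 3.9+ and Linux

    realtime = time.clock_gettime(time.CLOCK_REALTIME)
    tai = time.clock_gettime(time.CLOCK_TAI)
    print(f"TAI - UTC according to this kernel: {tai - realtime:.0f} s")  # 37 if configured, often 0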
There are three things you want in a time scale:
* Monotonically Increasing
* Ticking with a fixed frequency, i.e. an integer multiple of the SI second
* Aligned with the solar day
Unfortunately, as always, you can only choose 2 out of the 3.
TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that defines the SI second, forever increasing.
Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.
UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 1 second of UT1. The last part is under discussion to be changed to a much longer time, practically eliminating future leap seconds.
The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.
The remaining issues are:
* On many systems, it's simple to get TAI
* Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. of no leap seconds in UTC
* There is no standardized way to handle the leap seconds in the Unix time stamp, so on days around the introduction of leap second, the relationship between the Unix timestamp and the actual UTC or TAI time is not clear, several versions exist and that results in uncertainty up to two seconds.
* There might be a negative leap second one day, and nothing is ready for it
> you more often need the civil time than the monotonic time
I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.
beng-nl 25 days ago [-]
Thank you ; it’s kind of you to write such a thoughtful, thorough reply.
In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “timezone string configuration in Linux that will then give TAI time, ie stop adjusting it with timezones, daylight savings, or leap seconds”.
I hadn’t heard of the concept of timescale.
Personally i think item (3) is worthless for computer (as opposed to human facing) timekeeping.
Your explanation is very educational, thank you.
That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.
I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..
I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.
imuli 24 days ago [-]
I did this for my systems a while ago. You can grab <https://imu.li/TAI.zone>, compile it with the tzdata tools, and stick it in /etc/zoneinfo. It is unfortunately unable to keep time during a leap second.
In theory, if you keep your clock set to TAI instead of UTC, you can use the /etc/zoneinfo/right timezones for civic time and make a (simpler) TAI zone file. I learned of that after I'd created the above though, and I can imagine all sorts of problems with getting the NTP daemon to do the right thing, and my use case was more TZ=TAI date, as you mentioned.
There's a contentious discussion on the time zone mailing list about adding a TAI entry. It really didn't help that DJB was the one wanting to add it and approached the issue with his customary attitude. There's a lot of interesting stuff in there though - like allegedly there's a legal requirement in Germany for their time zone to be fixed to the rotation of the earth (and so they might abandon UTC if it gives up leap seconds).
maxnoe 24 days ago [-]
Sorry, there is a "not" missing there.
A remaining issue is that it is not easy to get proper TAI on most systems.
dfc 24 days ago [-]
Why do you think a time scale has to be aligned with the solar day? Are you an astronomer, or do you come from an astronomy-adjacent background?
wrs 24 days ago [-]
Of all the definitions and hidden assumptions about time we’re talking about, possibly the oldest one is that the sun is highest at noon.
yencabulator 24 days ago [-]
That's already false except along one line within every timezone (and that's assuming the timezone is properly set and not a convenient political or historical fiction). Let's say your timezone is perfectly positioned, and "true" in the middle. Along its east and west boundaries, local noon is 30 minutes off. During daylight saving time, it's off by about an hour everywhere.
Local noon just doesn't matter that much. It especially doesn't matter to the second.
wrs 21 days ago [-]
True, exactly “noon” hasn’t had a solar definition for a while. But whatever time the sun is highest for you, I imagine you still expect that to happen at the same time every day.
maxnoe 24 days ago [-]
The first clock precise enough to even measure the irregularity of the Earth's rotation was only built in 1934.
Before that, the Earth's rotation was simply the best clock available.
christina97 25 days ago [-]
No, almost always no. Most software is written to paper over leap seconds: it really only happens at the clock synchronization level (chrony, for example, implements leap second smearing).
All your cocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo when you retrieve. It would be a mess.
growse 25 days ago [-]
Smearing is alluring as a concept right up until you try and implement it in the real world.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
ratorx 25 days ago [-]
Using time to sync between computers is one of the classic distributed systems problems. It is explicitly recommended against. The amount of errors in the regular time stack mean that you can’t really rely on time being accurate, regardless of leap seconds.
Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.
NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.
If you want 2 different computers to have the same time, you either have to solve it at a higher layer up by introducing an ordering to events (or equivalent) or use something like atomic clocks.
growse 25 days ago [-]
Fair, it's often one of those hidden, implicit design assumptions.
Google explicitly built Spanner (?) around the idea that you can get distributed consistency and availability iff you control All The Clocks.
Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.
gpderetta 25 days ago [-]
In practice, with GPS clocks and PTP you can get very good precision, in the microseconds.
withinboredom 24 days ago [-]
Throw in chrony and you can get nanoseconds.
sadeshmukh 25 days ago [-]
That's quite the typo
halper 24 days ago [-]
Close to the poles, I'd say the assumption that the cocks be synchronised with UTC is flawed. Had we had cocks, I am afraid they'd be oversleeping at this time of year.
semiquaver 25 days ago [-]
> and I don't record the timezone information on the date field
Very few databases actually make it possible to preserve timezone in a timestamp column. Typically the db either has no concept of time zone for stored timestamps (e.g. SQL server) or has “time zone aware” timestamp column types where the input is converted to UTC and the original zone discarded (MySQL, Postgres)
Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.
I read it, but I only see an explanation of what it does, not why. It could have stored the original timezone.
bvrmn 24 days ago [-]
What's "original timezone"? Most libraries implement timezone aware dates as an offset from UTC internally. What tzinfo uses oracle? Is it updated? Is it similar to tzinfo used in your service?
It's highly complicated topic and it's amazing PostgreSQL decided to use instant time for 'datetime with timezone' type instead of Oracle mess.
NoInkling 24 days ago [-]
> Most libraries implement timezone aware dates as an offset from UTC internally.
For what it's worth, the libraries that are generally considered "good" (e.g. java.time, Nodatime, Temporal) all offer a "zoned datetime" type which stores an IANA identifier (and maybe an offset, but it's only meant for disambiguation w.r.t. transitions). Postgres already ships tzinfo and works with those identifiers, it just expects you to manage them more manually (e.g. in a separate column or composite type). Also let's not pretend that "timestamp with time zone" isn't a huge misnomer that causes confusion when it refers to a simple instant.
I agree the naming is kinda awful. But you need a geographic timezone only in rare cases, and handling it in a separate column is not that hard. Instant time is the right thing for almost all the cases beginners want to use `datetime with timezone` for.
Scarblac 24 days ago [-]
The discussion was about storing a timestamp as UTC, plus the timezone the time was in originally as a second field.
Postgres has timezone-aware datetime fields that translate incoming times to UTC, and outgoing times to a configured timezone. So it doesn't store what timezone the time was in originally.
The claim was that the docs explain why not, but they don't.
hx8 25 days ago [-]
Maybe, it really depends on what your systems are storing. Most systems really won't care if you are one second off every few years. For some calculations being a second off is a big deal. I think you should tread carefully when adopting any format that isn't the most popular and have valid reasons for deviating from the norm. The simple act of being different can be expensive.
wodenokoto 25 days ago [-]
Use your database's native date-time field.
SoftTalker 24 days ago [-]
Seconded. Don't mess around with raw timestamps. If you're using a database, use its date-time data type and functions. They will be much more likely to handle numerous edge cases you've never even thought about.
jonnycomputer 25 days ago [-]
I think this article ruined my Christmas. Is nothing sacred? Seconds should be seconds since the epoch. Why should I care if it drifts off the solar day? Let seconds-since-epoch to date-representation converters be responsible for making the correction. What am I missing?
christina97 25 days ago [-]
The way it is is really how we all want it. 86400 seconds = 1 day. And we operate under the assumption that midnight UTC is always a multiple of 86400.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
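A tiny way to see that assumption in action: because Unix time pretends every day has exactly 86400 seconds, taking it modulo 86400 gives the time since the last UTC midnight (a sketch, smearing aside):

    import time

    secs = int(time.time())
    since_midnight = secs % 86400
    print(f"{since_midnight // 3600:02d}:{since_midnight % 3600 // 60:02d} UTC right now")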
lmm 25 days ago [-]
> We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
withinboredom 24 days ago [-]
You can set your OS to any timezone you want to. If you want it to be 29 seconds off, go for it. The tz database is open source.
odyssey7 24 days ago [-]
Nobody is an island… the hard part is interfacing with other systems, not hacking your own server.
withinboredom 24 days ago [-]
Seems to work fine for most of the planet?
demurgos 24 days ago [-]
I don't want it this way: it mixes a data model concern (timestamps) with a UI concern (calendars). As others have said, it would be much better if we used TAI and handled leap seconds at the same level as timezones.
turminal 25 days ago [-]
But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.
dmoy 25 days ago [-]
I will never forgive Egypt for breaking my shit with a 3 day notice (what was it like 10 years ago?).
Thankfully for me it was just a bunch of non-production-facing stuff.
kragen 24 days ago [-]
Was this Morsy's government or Sisi's? If it's Morsy's government you're holding a grudge against, I have some good news for you. (Presumably you're not holding that grudge against random taxi drivers and housewives in Alexandria.)
dmoy 22 days ago [-]
I don't know if the level of bureaucracy where that decision was made is really impacted by the leadership changing. Egypt continues to make super short notice timezone changes as recently as last year. (Just at least not 3 days notice this most recent time around)
quasarj 16 days ago [-]
It's definitely not the right way, in this case.
_kst_ 24 days ago [-]
Leap seconds should be replaced by large rockets mounted on the equator. Adjust the planet, not the clock.
myflash13 24 days ago [-]
It'd be not-so-funny if there was a miscalculation and the Earth was slowed down or sped up too much. There's a story about the end of times and the Antichrist (Dajjal) in the Muslim traditions where this sort of thing actually happens. It is said that the "first day of the Antichrist will be like a year, the second day like a month, and third like a week", which many take literally, i.e. a cosmic event which actually slows down the Earth's rotation, eventually reversing course such that the sun rises from the West (the final sign of the end of humanity).
quotemstr 25 days ago [-]
So what if leap seconds make the epoch 29 seconds longer-ago than date +%s would suggest? It matters a lot less than the fact that we all agree on some number N to represent the current time. That we have -29 fictional seconds doesn't affect the real world in any way. What are you going to do, run missile targeting routines on targets 30 years ago? I mean, I'm as much for abolish leap seconds as anyone, but I don't think it's useful --- even if it's pedantically correct --- to highlight the time discrepancy.
wat10000 24 days ago [-]
One could imagine a scenario where you’re looking at the duration of some brief event by looking at the start and end times. If that interval happens to span a leap second, then the duration could be significantly different depending on how your timestamps handled it.
Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds different from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs some substantial effort to account for.
chrchr 25 days ago [-]
It matters for some things. Without those fictional leap seconds, the sun would be 29 seconds out of position at local noon, for instance.
umanwizard 25 days ago [-]
That does not matter at all to anyone.
growse 25 days ago [-]
Did you ask everyone?
It most certainly matters to a lot of people. It sounds like you've never met those people.
zokier 25 days ago [-]
For practically everyone, local civil time is off from local solar time by more than 30 seconds, because very few people live at the exact longitude that corresponds to their time zone. And then you have DST, which throws the local time even further off.
This is ignoring the fact that, due to the equation of time, solar noon naturally shifts around by tens of minutes over the course of the year.
To drive the point, for example local mean solar time at Buckingham palace is already more than 30 seconds off from Greenwich time.
numpad0 25 days ago [-]
The point is, since astronomical "time" isn't exactly a constant multiple of caesium-standard seconds, and it even fluctuates due to astrophysical phenomena, applications that concern astro-kineti-geometrical reality have to use the tarnished timescale to match the motion of the planet we're on, rather than following a monotonic counter pointed at a glass vial.
It is up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. The overwhelming majority choose UTC.
TAI is still nice for many high precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.
growse 25 days ago [-]
Sure, but that doesn't mean that we invented and practise leap seconds for the sheer fun of it.
There's very good reasons that are important behind why we try and keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.
zokier 24 days ago [-]
UTC, and leap seconds, originate from the (military) navies of the world, with the intent of supporting celestial navigation. It is already dubious how useful leap seconds were for that use, and their use as a civil timescale is much more dubious.
growse 24 days ago [-]
We have leap seconds to save us from having leap minutes, or leap hours.
Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.
philwelch 24 days ago [-]
It’s going to be multiple centuries until the cumulative leap seconds add up to 30 minutes, and by that point, a majority of the human population is likely to be living off the earth anyway.
recursivecaveat 24 days ago [-]
You don't need leap minutes. Nobody cares if the sun is off by minutes; it already is anyway thanks to timezones. You don't even need leap hours. If in seven thousand years no one has done a one-time correction, you can just move the timezones over one space, like computers do all the time for political reasons.
umanwizard 25 days ago [-]
Okay, I’ll bite. Who does this matter to, and why?
philwelch 24 days ago [-]
Also, some of the most populous time zones in the world, such as the European and Chinese time zones, are multiple hours across.
porridgeraisin 25 days ago [-]
Yeah. "Exact time" people are a bit like "entropy" people in cryptography. Constantly arguing about the perfect random number when nobody cares.
vendiddy 25 days ago [-]
Maybe a naive question but why wasn't the timestamp designed as seconds since the epoch with zero adjustments?
Everything would be derived from that.
I suppose it would make some math more complex but overall it feels simpler.
fragmede 25 days ago [-]
Arguably it's worse if 00:33 on 2024.12.26 has to get run through another function to get the true value of 2024.12.25 T 23:59.
The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, which caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't.
And if you've not seen it, there's the falsehoods programmers believe about time article.
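For the curious, the shape of such a smear is simple. This is just a sketch of a 24-hour linear smear of the kind described (Google's published scheme runs from noon UTC before the leap second to noon UTC after; the function and names here are illustrative):

    def smear_offset(seconds_into_window: float, window: float = 86400.0) -> float:
        """Fraction of the extra leap second already absorbed, from 0.0 to 1.0."""
        return max(0.0, min(1.0, seconds_into_window / window))

    # Halfway through the window the smeared clock has absorbed half the leap second:
    print(smear_offset(43200.0))  # 0.5
    print(smear_offset(86400.0))  # 1.0, the full extra second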
quasarj 16 days ago [-]
But that's not the case... the random-looking very-large value X has to go through a conversion function to get the true value of 2024.12.25 T 23:59, just like every other value. Surely nobody is just dividing the Unix time by 86400 like people keep suggesting? What kind of a hack would think they could do date math themselves?
growse 25 days ago [-]
With hindsight, we'd do lots of things differently :)
I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.
wat10000 24 days ago [-]
UNIX systems at the time probably didn’t care about accuracy to the second being maintained over rare leap second adjustments.
Random example: the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II experiment. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.
If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.
sevensor 25 days ago [-]
What I don’t understand is why we would ever assume two clocks in two different places could be compared in a non approximate way. Your clock, your observations of the world state, are always situated in a local context. In the best of all possible cases, the reasons why your clock and time reports from other clocks differ are well understood.
vpaulus 25 days ago [-]
I believe it has some advantages that while you are waiting at the train station your clock shows exactly the same time as the train conductor’s several miles away from you.
sevensor 25 days ago [-]
Surely not! We could be a whole minute off and I’d still be standing on the platform when the train arrived.
dibujaron 24 days ago [-]
in the US or parts of Europe you could wait there for 10m past the scheduled time and barely notice. In Japan if the train clock disagreed with the station clock by 30s, causing the train to arrive 30s late, they'd have to write all of the passengers excuse notes for why they were late to work.
wat10000 24 days ago [-]
GPS depends on widely separated (several times the diameter of Earth) clocks agreeing with each other down to the nanosecond.
withinboredom 24 days ago [-]
and moving at such high speeds that relativity factors into the equations.
wat10000 24 days ago [-]
Speeds and altitude both! I believe time dilation from gravity is more significant but both are big enough to need compensation.
ses1984 25 days ago [-]
I think something like the small angle approximation applies. There are plenty of applications where you can assume clocks are basically in the same frame of reference because relativistic effects are orders of magnitude smaller than your uncertainty.
christina97 25 days ago [-]
The approximation error is so small that you can often ignore it. Hence the concept of exact time.
Eg in most computing contexts, you can synchronize clocks close enough to ignore a few nanos difference.
Asraelite 25 days ago [-]
How? Unless you have an atomic clock nearby, they will very quickly drift apart by many nanoseconds again. It's also impossible to synchronize to that level of precision across a network.
AlotOfReading 25 days ago [-]
It's not only possible, you can demonstrate it on your phone. Check the GPS error on your device in a clear area. 1 ft of spatial error is roughly 1ns timing error on the signal (assuming other error sources are zero). Alternatively, you can just look at the published clock errors: http://navigationservices.agi.com/GNSSWeb/PAFPSFViewer.aspx
All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.
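That 1 ft per 1 ns rule of thumb is just the speed of light; a quick check of the numbers (nothing here is specific to any particular receiver):

    C = 299_792_458.0          # speed of light in vacuum, m/s
    metres_per_ns = C * 1e-9   # distance light covers in one nanosecond
    feet_per_ns = metres_per_ns / 0.3048
    print(f"{metres_per_ns:.3f} m, {feet_per_ns:.2f} ft per nanosecond")  # 0.300 m, 0.98 ft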
Asraelite 25 days ago [-]
That's true, but it's not really the situation I'm thinking of. Your phone is comparing the differences between the timestamps of multiple incoming GNSS signals at a given instant, not using them to set its local clock for future reference.
A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.
wat10000 24 days ago [-]
Basic hardware gets you a precise GNSS time once per second. Your local clock won’t drift that much in that time, and you can track and compensate for the drift. If you’re in a position to get the signal and have the hardware, then you can have very accurate clocks in your system.
tucnak 24 days ago [-]
Until somebody starts spoofing GPS like they do in Ukraine, and you look embarrassing.
AlotOfReading 23 days ago [-]
So use Galileo's OSNMA instead. That'll work until they spend $100 on a jammer.
tucnak 23 days ago [-]
I hate to break it to you, but all modern electronic warfare equipment has been targeting all GNSS for many years now. There's a reason why "GPS-denied", which is really referring to any form of satellite navigation, is a multi-billion dollar industry.
AlotOfReading 23 days ago [-]
GNSS does in fact work fairly well even in conflict zones, but I'm not sure why you're trying to make this point. Everything is vulnerable to some level of warfare and civilian infrastructure is almost never designed with the expectation that it'll operate in a conflict zone. Are you suggesting we individually airgap all the networking equipment with local atomic clocks behind 10m of specially reinforced concrete instead?
Even the most secure civilian facilities (data centers) are fine sticking 2-3 receivers on the roof and calling it good.
AlotOfReading 25 days ago [-]
That's a common way of doing high precision time sync, yes. It's slightly out of phone budget/form factor, but that's what a GPSDO does.
The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.
prerok 25 days ago [-]
The Precision Time Protocol is intended to solve this problem:
White Rabbit achieves sub-nanosecond time synchronization over a network.
Asraelite 25 days ago [-]
Oh wow, that's impressive. Is that over a standard internet connection? Do they need special hardware?
mgaunard 25 days ago [-]
It does require a special switch yes.
pavel_lishin 25 days ago [-]
camera cuts across to Newton, seething on his side of the desk, his knuckles white as the table visibly starts to crack under his grip
srijan4 24 days ago [-]
Somewhat related: I really like Erlang's docs about handling time. They have common scenarios laid out and document which APIs to use for them. Like: retrieve system time, measure elapsed time, determine order of events, etc.
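The same split exists in most languages. A sketch in Python terms (not the Erlang API): wall-clock time answers "what time is it", a monotonic clock answers "how long did this take".

    import time

    wall_start = time.time()  # Unix time; can step if the system clock is corrected
    t0 = time.monotonic()     # never steps backwards, good for measuring durations

    total = sum(range(1_000_000))  # some work to time

    elapsed = time.monotonic() - t0
    print(f"elapsed {elapsed:.6f} s, started at Unix time {wall_start:.0f}")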
Is there a synchronized and monotonically increasing measure of time to be found?
kevindamm 24 days ago [-]
Not really. GPS time comes close (at least, it avoids leap seconds and DST) but you still have technical issues like clock drift.
SerCe 25 days ago [-]
Working with time is full of pitfalls, especially around clock monotonicity and clock synchronisation. I wrote an article about some of those pitfalls some time ago [1]. Then, you add time zones to it, and you get a real minefield.
You are a developer who works with time and you named your file, "16-05-2019-the-matter-of-time"? :)
eru 25 days ago [-]
What's wrong with that?
lysium 25 days ago [-]
That’s not a standard format. The ISO format is yyyy-mm-dd. Also, it sorts nicely by time if you sort alphabetically.
eru 24 days ago [-]
Yes, I know. But for your personal file names, you can pick whatever you feel like.
BrandoElFollito 25 days ago [-]
They wrote it on the 16th of May, or the 5th of Bdrfln, we will never know.
eru 25 days ago [-]
Perhaps it's just named for that date, and not written then?
In any case, dates only have to make sense in the context they are used.
Eg we don't know from just the string of numbers whether it's Gregorian, Julian, or Buddhist or Japanese etc calendar.
gsich 25 days ago [-]
Assuming Gregorian is a sane choice.
eru 24 days ago [-]
Not any worse than most other commonly used calendars, and it's got the benefit of network effects: many people use it, and virtually everyone will be at least somewhat familiar with it.
The timestamps given in the article seem completely wrong? Also, where would 29 even come from?
The offset between UTC and TAI is 37 seconds.
possiblywrong 25 days ago [-]
You are correct. The first example time in the article, "2024-12-25 at 18:54:53 UTC", corresponds to POSIX timestamp 1735152893, not 1735152686. And there have been 27 leap seconds since the 1970 epoch, not 29.
Retr0id 25 days ago [-]
I'm also not sure where 29 came from, but the expected offset here is 27 - there have been 27 UTC leap seconds since the unix epoch.
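For anyone keeping score, the arithmetic tying the 27 and the 37 together (and leaving the article's 29 unexplained) is just the pre-1972 accumulation plus the leap seconds inserted since:

    initial_offset_1972 = 10  # TAI - UTC when the current leap-second system began in 1972
    leap_seconds_added = 27   # positive leap seconds inserted from 1972 through 2016
    print(initial_offset_1972 + leap_seconds_added)  # 37, today's TAI - UTC offset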
maxbond 25 days ago [-]
> ((tm_year - 69) / 4) * 86400
Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
(For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are rounded off. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
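A sketch of how that corner-cutting plays out, evaluating the full 1988-style expression next to a version with the century-rule terms later added to the spec (tm_year is years since 1900, tm_yday is 0-based, integer division throughout):

    def posix_1988(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
        return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
                + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400)

    def posix_corrected(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
        return (posix_1988(tm_sec, tm_min, tm_hour, tm_yday, tm_year)
                - ((tm_year - 1) // 100) * 86400 + ((tm_year + 299) // 400) * 86400)

    # 2101-01-01 00:00:00 UTC is the first date where the missing century rule shows up:
    old, new = posix_1988(0, 0, 0, 0, 201), posix_corrected(0, 0, 0, 0, 201)
    print(old - new)  # 86400: the 1988 formula credits 2100 with a leap day it doesn't have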
jwilk 25 days ago [-]
> the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
The article cites the original edition of POSIX from 1988.
The bug in question was fixed in the 2001 edition:
> I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
Not just Unix time, converting future local time to UTC and storing that is also fraught with risk, as there's no guarantee that the conversion you apply today will be the same as the one that needs to be applied in the future.
Often (for future dates), the right thing to do is to store the thing you were provided (e.g. a local timestamp + the asserted local timezone) and then convert when you need to.
(Past dates have fewer problems converting to UTC, because we don't tend to retroactively change the meaning of timezones).
tw1984 25 days ago [-]
There is literally no easy and safe way to actually handle leap seconds. What happens when they need to remove one second? Even for the easier case of an inserted leap second, you can smear it, but what happens if there are multiple systems each smearing it at different rates? I'd strongly argue that you pretty much have to reboot all your time-critical and mission-critical systems during the leap second to be safe.
The issue is so widespread and complicated that they decided to stop introducing extra leap seconds so people can come up with something better in the coming decades - probably way later than the arrival of AGI.
prmph 24 days ago [-]
The more I learn about the computation of time, the more unbelievably complex getting it right seems. I thought I was pretty sophisticated in my view of time handling, but just in the last couple of months there have been a series of posts on HN that have opened my eyes even more to how leaky this abstraction of computer time is.
Pretty soon we'll have to defer to deep experts and fundamental libraries to do anything at all with time in our applications, a la security and cryptography.
sarusso 24 days ago [-]
I remember hearing at a conference about 10 years ago that Google does not make use of leap seconds. Instead, they spread them across regular seconds (they modified their NTP servers). I quickly searched online and found the original article [1].
I've been trying to find discussion of this topic on Hacker News between October 1582 and September 1752, but to no avail.
'cal 9 1752' is .. funny. I guess instead of doing this annoying a-periodic leap second business, they accumulated a bunch of leap seconds owed, and skipped 11 days at one go. Sysadmins at the time were of divided opinion on the matter.
ck2 24 days ago [-]
I would not be on a plane, or maybe even an elevator, in mid-January 2038.
If it can do this to Cloudflare, imagine everything left on legacy signed 32-bit integers.
A lot of people seem to miss the point of the article.
Suppose you had a clock that counted seconds (in the way we understand seconds, moving forward one unit per second). If you looked at it in a few days at midnight UTC on NYE (according to any clock), it would not be a multiple of 86400 (number of seconds per day). It would be off by some 29 seconds due to leap seconds. In that way, Unix time is not seconds since the epoch.
umanwizard 25 days ago [-]
You have it backwards. If you look at it at midnight UTC (on any day, not just NYE) it WOULD be an exact multiple of 86400. (Try it and see.)
Because of leap seconds, this is wrong. Midnight UTC tonight is in fact NOT a multiple of 86,400 real, physical seconds since midnight UTC on 1970-01-01.
jacobgkau 25 days ago [-]
He didn't have it backwards, he was saying the same thing as you. He said, "suppose you had a clock that counted seconds," then described how it would work (it would be a non-multiple) if that was the case, which it isn't. You ignored that his description of the behavior was part of a hypothetical and not meant to describe how it actually behaves.
umanwizard 25 days ago [-]
You’re absolutely right — not sure how I misinterpreted that so badly.
christina97 24 days ago [-]
Thanks but I’m a “she” :)
zaran 25 days ago [-]
I wonder if the increasing number of computers in orbit will mean even more strange relativistic timekeeping stuff will become a concern for normal developers - will we have to add leap seconds to individual machines?
Back of the envelope says ~100 years in low earth orbit will cause a difference of 1 second
gavinsyancey 25 days ago [-]
Most of those probably don't/won't have clocks that are accurate enough to measure 1 second every hundred years; typical quartz oscillators drift about one second every few weeks.
Rastonbury 25 days ago [-]
For GPS at least it is accounted for: about 38 microseconds per day. They have atomic clocks accurate to something like 0.4 milliseconds over 100 years. The frequencies they run at are offset relative to clocks on Earth, and they are constantly synchronised.
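Rough arithmetic on those figures (back-of-the-envelope only, not a general-relativity calculation):

    # GPS's ~38 microseconds/day of net relativistic offset, accumulated over a century:
    print(38e-6 * 365.25 * 100)         # ~1.39 seconds per 100 years at GPS altitude
    # Daily rate implied by "about 1 second per 100 years" in low orbit:
    print(1 / (100 * 365.25) * 1e6)     # ~27.4 microseconds per day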
willvarfar 24 days ago [-]
The advantage of equal-length days is that you know now what some future date represents; whereas if you count leap seconds too, you might get a different date computing it now compared to future code that knows about any leap seconds added between now and then.
ZeroCool2u 24 days ago [-]
More often than I care to admit, I yearn for another of Aphers programming interview short stories. Some of my favorite prose and incredibly in depth programming.
silisili 25 days ago [-]
> People, myself included, like to say that POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00.
> This is not true. Or rather, it isn’t true in the sense most people think.
I find that assertion odd, because it works exactly as I did assume. Though, to be fair, I'm not thinking in the scientific notion that the author may.
If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick. That scientists inject a second here or there wouldn't interfere with such logic.
All of that said, the leap second is going away anyways, so hopefully whatever replaces it is less troublesome.
lmm 25 days ago [-]
> If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick.
It would, but Unix timestamps don't. It works exactly not how you assume.
silisili 25 days ago [-]
Explain?
The article is claiming POSIX ignores injected leap seconds.
lmm 25 days ago [-]
The article is needlessly unclear, but the specification given in the second blockquote is the one that is actually applied, and a simpler way of explaining it is: POSIX time() returns 86400 * [the number of UTC midnights since 1970-01-01T00:00:00] + [the number of seconds since the last UTC midnight].
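A sketch of that formulation in Python (posix_like is a hypothetical helper, not a real API); it also shows how a leap second such as 2016-12-31T23:59:60 collides with the following midnight:

    import datetime as dt

    def posix_like(y, mo, d, h, mi, s):
        days = (dt.date(y, mo, d) - dt.date(1970, 1, 1)).days   # UTC midnights since the epoch
        return days * 86400 + h * 3600 + mi * 60 + s

    print(posix_like(2016, 12, 31, 23, 59, 60))   # 1483228800 (the leap second)
    print(posix_like(2017, 1, 1, 0, 0, 0))        # 1483228800 -- same value twice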
ec109685 25 days ago [-]
POSIX doesn’t ignore leap seconds. Occasionally systems repeat a second, so time doesn’t drift beyond a second from when leap seconds were invented: https://en.wikipedia.org/wiki/Leap_second
silisili 25 days ago [-]
After reading this article no less than 3 times, and the comments in this thread, I'm beyond lost.
So maybe the author was right. Because different people are claiming different things.
In that example, Unix time goes from 915148799 -> 915148800 -> 915148800 -> 915148801. Note how the timestamp gets repeated during leap second.
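A quick check of the repeated value: 915148800 is exactly 1999-01-01T00:00:00Z, so under these rules the 1998-12-31T23:59:60 leap second and the following midnight share it.

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(915148800, timezone.utc))   # 1999-01-01 00:00:00+00:00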
apgwoz 25 days ago [-]
The leap second in Unix time is supposed to wait a second and pretend it never happened. I can see why a longer second could be trouble, but also… if you knew it was coming you could make every nanosecond last 2 and lessen the impact as time would always be monotonic?
Typically you don't need to worry about leap seconds on servers because AWS or GCP will help you handle it.
You just need to read the docs to understand their behavior. Some will smooth it out for you, some will jump for you. It would be a problem if you have 3rd party integrations and you rely on their timestamp.
nubinetwork 25 days ago [-]
Isn't this the point of the tz files shipped on every linux system? If the crappy online converters only do the basic math formula, of course it's going to be off a little...
odyssey7 24 days ago [-]
What we’re seeing is again the scientists trying to constrain a humanist system into a scientifically precise framework. It doesn’t really tend to work out. I’m reminded of the time that a bunch of astronomers decided to redefine what a planet is, and yet the cultural notion of Pluto remains strong.
Science and culture will rarely move hand-in-glove, so the rule of separation of concerns, to decouple human experience from scientific measurement, applies.
computator 25 days ago [-]
> POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00. … I think there should be a concise explanation of the problem.
I don’t think that the definition that software engineers believe is wrong or misleading at all. It really is the number of seconds that have passed since Unix’s “beginning of time”.
But to address the problem the article brings up, here’s my attempt at a concise definition:
POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00, and does not include leap seconds that have been added periodically since the 1970s.
jodrellblank 25 days ago [-]
Atomic clocks measure time passing.
Seconds are a fraction of a day, which is Earth rotating: count 86400 seconds and then roll over to the next day. But Earth's rotation speed changes, so how much "time passing" is in 86400 seconds varies a little. Clocks based on Earth rotating get out of sync with atomic clocks.
Leap seconds go into day-rotation clocks so their date matches the atomic clock measure of how much time has passed - they are time which has actually passed and ordinary time has not accounted for; so it's inconsistent for you to say "Unix time really is the number of seconds that have passed" and "does not include leap seconds", because those leap seconds are time that has passed.
juped 25 days ago [-]
It really is the number of seconds that have passed since Unix's "beginning of time", minus twenty-nine. Some UTC days have 86401 seconds, Unix assumes they had 86400.
It's wrong and misleading in precisely the way you (and other commenters here) were wrong and misled, so it seems like that's a fair characterization.
umanwizard 25 days ago [-]
You’re wrong and have the situation exactly backwards.
If a day has 86,401 or 86,399 seconds due to leap seconds, POSIX time still advances by exactly 86,400.
If you had a perfectly accurate stopwatch running since 1970-01-01 the number it shows now would be different from POSIX time.
quasarj 25 days ago [-]
Wait, why would it be different?
growse 25 days ago [-]
Unix time is not monotonic. It sometimes goes backwards.
zokier 25 days ago [-]
Strictly speaking Unix time is monotonic, because it counts an integer number of seconds and it does not go backwards, it only repeats during leap seconds.
LegionMammal978 24 days ago [-]
POSIX does define "the amount of time (in seconds and nanoseconds) since the Epoch", for the output of clock_gettime() with CLOCK_REALTIME [0]. That "amount of time" must be stopped or smeared or go backward in some way when it reaches a leap second. This isn't the 80s, we have functions that interact with Unix time at sub-second precision.
[0] https://pubs.opengroup.org/onlinepubs/9799919799/functions/c...
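This is also why durations are better measured with CLOCK_MONOTONIC: CLOCK_REALTIME can be stepped or smeared around leap seconds and NTP corrections. A small sketch, assuming a Linux system (the constants come from Python's standard time module):

    import time

    t0_real = time.clock_gettime(time.CLOCK_REALTIME)
    t0_mono = time.clock_gettime(time.CLOCK_MONOTONIC)
    time.sleep(1.0)
    print(time.clock_gettime(time.CLOCK_REALTIME) - t0_real)   # can be off if the clock was stepped
    print(time.clock_gettime(time.CLOCK_MONOTONIC) - t0_mono)  # always ~1.0, never goes backwards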
This feels like semantics. If a counter repeats a value, it's effectively gone backwards and by definition is not monotonic.
A delta between two monotonic values should always be non-negative. This is not true for Unix time.
wat10000 24 days ago [-]
“Monotonic” means non-decreasing (or non-increasing if you’re going the other way). Values are allowed to repeat. The term you’re looking for is “strictly increasing.”
growse 24 days ago [-]
I guess this hinges on whether you think Unix time is an integer or a float. If you think it's just an integer, then yes, you can't get a negative delta.
If, however, you think it's a float, then you can.
umanwizard 25 days ago [-]
Because a day, that is the time between midnight UTC and midnight UTC, is not always exactly 86400 seconds, due to leap seconds. But Unix time always increases by exactly 86400.
Calamityjanitor 25 days ago [-]
I think you're describing the exact confusion that developers have. Unix time doesn't include leap seconds, but they are real seconds that happened. Consider a system that counts days since 1970, but ignores leap years so doesn't count Feb 29. Those 29ths were actual days, just recorded strangely in the calendar. A system that ignores them is going to give you an inaccurate number of days since 1970.
timewizard 25 days ago [-]
> but they are real seconds that happened
They are not. They are inserted because two time scales, one which is based on the rotation of the earth, and the other on atomic clocks, have slowly drifted to a point that a virtual second is inserted or removed to bring them back into agreement. To the extent they exist, by the time they are accounted for, they've already slowly occurred fractionally over several months or years.
> A system that ignores them is going to give you an inaccurate number of days since 1970.
It depends on your frame of reference. If you're looking at an atomic clock it's inaccurate, if you're looking at the movement of the earth with respect to the sun and the stars, it's perfectly accurate.
It's easier to me if you separate these into "measured time" and "display time." Measured time is necessary for doing science. Display time is necessary for flying a plane. We can do whatever we want with "display time," including adding and subtracting an entire hour twice a year, as long as everyone agrees to follow the same formula.
quasarj 25 days ago [-]
Are you sure they actually happened? as you say, at least one of us is confused. My understanding is that the added leap seconds never happened, they are just inserted to make the dates line up nicely. Perhaps this depends on the definition of second?
wat10000 24 days ago [-]
Leap seconds are exactly analogous to leap days. One additional unit is added to the calendar, shifting everything down. For leap days we add a day 29 when normally we wrap after 28. For leap seconds we add second 60 when normally we wrap after 59.
Imagine a timestamp defined as days since January 1, 1970, except that it ignores leap years and says all years have 365 days. Leap days are handled by giving February 29 the same day number as February 28.
If you do basic arithmetic with these timestamps to answer the question, “how many days has it been since Nixon resigned?” then you will get the wrong number. You’ll calculate N, but the sun has in fact risen N+13 times since that day.
Same thing with leap seconds. If you calculate the number of seconds since Nixon resigned by subtracting POSIX timestamps, you’ll come up short. The actual time since that event is 20-some seconds more than the value you calculate.
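The same arithmetic, written out (the "no-leap-day" day number is a made-up illustration, not a real format):

    import calendar
    import datetime as dt

    def no_leap_daynum(d):
        """Days since 1970-01-01, pretending every year has 365 days;
        Feb 29 shares a number with Feb 28 (like a repeated leap second)."""
        doy = d.timetuple().tm_yday
        if calendar.isleap(d.year) and doy >= 60:   # on or after Feb 29
            doy -= 1
        return (d.year - 1970) * 365 + (doy - 1)

    nixon = dt.date(1974, 8, 9)
    today = dt.date(2025, 6, 1)
    real = (today - nixon).days
    fake = no_leap_daynum(today) - no_leap_daynum(nixon)
    print(real - fake)   # 13 -- the leap days the simplified count never saw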
quasarj 16 days ago [-]
I think you make an interesting point here, but then your example is exactly backwards.
If you have a timestamp defined as days since January 1, 1970: If you do basic arithmetic to answer the question "How many days has it been since Nixon resigned" you will _always get the right number_. There are no leap days, they are just normal days.
The problem only comes in when you try to convert between this date type and other types. Our "days since the epoch" date type is fully internally consistent. As long as you know the correct value for "the day Nixon resigned" and "now", it's just a subtraction.
Calamityjanitor 25 days ago [-]
I'm honestly just diving into this now after reading the article, and not a total expert. Wikipedia has a table of a leap second happening across TAI (an atomic clock scale that purely counts seconds), UTC, and Unix timestamps according to POSIX: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
It works out to be that unix time spits out the same integer for 2 seconds.
quasarj 25 days ago [-]
"spits out" as in, when you try to convert to it - isn't that precisely because that second second never happened, so it MUST output a repeat?
jacobgkau 25 days ago [-]
I thought you were wrong because if a timestamp is being repeated, that means two real seconds (that actually happened) got the same timestamp.
However, after looking hard at the tables in that Wikipedia article comparing TAI, UTC, and Unix time, I think you might actually be correct-- TAI is the atomic time (that counts "real seconds that actually happened"), and it gets out of sync with "observed solar time." The leap seconds are added into UTC, but ultimately ignored in Unix time.* ~~So Unix time is actually more accurate to "real time" as measured atomically than solar UTC is.~~
The only point of debate is that most people consider UTC to be "real time," but that's physically not the case in terms of "seconds that actually happened." It's only the case in terms of "the second that high noon hits." (For anyone wondering, we can't simply fix this by redefining a second to be an actual 24/60/60 division of a day because the Earth's rotation is apparently irregular and generally slowing down over time, which is why UTC has to use leap seconds in order to maintain our social construct of "noon == sun at the highest point" while our atomic clocks are able to measure time that's actually passed.)
*Edit: Or maybe my initial intuition was right. The table does show that one Unix timestamp ends up representing two TAI (real) timestamps. UTC inserts an extra second, while Unix time repeats a second, to handle the same phenomenon. The table is bolded weirdly (and I'm assuming it's correct while it may not be); and beyond that, I'm not sure if this confusion is actually the topic of conversation in the article, or if it's just too late in the night to be pondering this.
Also natural events are the other way around, we can know they're X in the future but not the exact calendar date/time.
If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"
But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.
If you want to know the timestamp of "two days from now" then you need to know all kinds of things like what time zone you're talking about and if there are any leap seconds etc. That would tell you if "two days from now" is in 172800 seconds or 172801 seconds or 169201 or 176400 etc.
But the seconds-counting thing should be doing absolutely nothing other than counting seconds and doing otherwise is crazy. The conversion from that into calendar dates and so on is for a separate library which is aware of all these contextual things that allow it to do the conversion. What we do not need and should not have is for the seconds counting thing to contain two identical timestamps that refer to two independent points in time. It should just count seconds.
"2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
The intent of the thing demanding a future event matters. So you can have the right software abstractions all you like and people will still use the wrong thing.
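A concrete illustration of those two readings, using Python's zoneinfo and the 2025 US spring-forward as an arbitrary example:

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/New_York")
    start = datetime(2025, 3, 8, 15, 0, tzinfo=tz)            # the day before DST starts

    # Reading 1: "same wall clock, two sunsets later" -- only 47 real hours elapse.
    same_wall = datetime(2025, 3, 10, 15, 0, tzinfo=tz)
    print((same_wall - start).total_seconds())                # 169200.0, not 172800

    # Reading 2: "after 2 * 86400 ticks" -- lands at 16:00 on the wall clock.
    ticks = (start.astimezone(timezone.utc) + timedelta(seconds=2 * 86400)).astimezone(tz)
    print(ticks)                                              # 2025-03-10 16:00:00-04:00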
The problem is that programmers are human, and humans don't reason in monotonic counters :)
Seconds are numbers; calendrical units are quantities.
[0] Bateson was, in some ways, anticipating the divide between the digital and analog worlds.
Which is why you need some means to specify which one you want from the library that converts from the monotonic counter to calendar dates.
Anyone who tries to address the distinction by molesting the monotonic counter is doing it wrong.
You are very right that future calendar arithmetic is undefined. I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events (as if earth would slow its rotation). Otherwise, we should just stop using calendar arithmetic, but in many fields this is just unfeasible...
[1] https://github.com/sarusso/Propertime
No, the only way is to store the user's intent, and recalculate based on that intent when needed.
When the user schedules a meeting for 2PM while being in Glasgow, the meeting should stay at 2PM Glasgow time, even in a hypothetical world where Scotland achieves independence from the UK and they get different ideas whether to do daylight saving or not.
The problem is determining what the user's intent actually is; if they set a reminder for 5PM while in NY, do they want it to be 5PM NY time in whatever timezone they're currently in (because their favorite football team plays at 5PM every week), or do they want it to be at 5PM in their current timezone (because they need to take their medicine at 5PM, whatever that currently means)?
If you say something will happen in three days, that's a big time window.
Calculating the calendar date for an event that’s 365 days in the future needs to consider whether leap-time corrections need to be made during the period. We do that already for days with our standard calendar.
Edited.
Really, I don't think you can reduce any of these to a specific number of seconds. If someone says an event is in 14 hours, the meaning is a lot closer to (14 ± ½) × 3600 than it is to exactly 14 × 3600.
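One concrete case of the leap-correction point a couple of comments up: adding "365 days" to a date does not always land where "one year later" does.

    from datetime import date, timedelta
    print(date(2027, 3, 1) + timedelta(days=365))   # 2028-02-29, because 2028 has a Feb 29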
We can measure the difference in speed of time in a valley and a mountain (“just” take an atomic clock up a mountain and wait for a bit, bring it back to your lab where the other atomic clock is now out of sync)
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 differ by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender) and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I will not be surprised if some time in the future we will get the old UTC, but under a different name.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody cares about that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
[0] https://webtai.bipm.org/ftp/pub/tai/annual-reports/bipm-annu...
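In code terms the relationship is just "TAI = UTC plus a published offset", looked up in a table like the one the BIPM/IERS publish. A toy sketch with a hard-coded, partial table (the offsets shown are the real ones for those dates):

    from datetime import datetime, timedelta, timezone

    TAI_MINUS_UTC = [   # (valid from this UTC instant, offset in seconds); the full table starts in 1972
        (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
        (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),   # still current; no leap second since
    ]

    def utc_to_tai(utc_dt):
        # Only valid for instants covered by the (partial) table above.
        offset = max(off for since, off in TAI_MINUS_UTC if utc_dt >= since)
        return utc_dt + timedelta(seconds=offset)

    print(utc_to_tai(datetime(2025, 6, 1, tzinfo=timezone.utc)))   # 2025-06-01 00:00:37+00:00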
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub micro second precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
The Wiltshire Downs and Salisbury Plain is mostly chalk/limestone. That is a porous rock which will expand and contract on water ingress/egress and be affected by atmospheric humidity. I've no real idea but I suspect that Stonehenge will rise and fall vertically(ish) on a seasonal and other longer rhythms.
( Note: I am a Scot, and added widdershins to my dictionary. )
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
[1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...
[2] https://en.wikipedia.org/wiki/Unix_time#UTC_basis
This is true even if we assume the time on the birth certificate was precise down to the second. It is because what was considered the length of a second during part of their life varied significantly compared to what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom
https://www.slac.stanford.edu/~rkj/crazytime.txt
To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
https://en.wikipedia.org/wiki/Julian_day
https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
The macOS/Swift Foundation API NSDate.timeIntervalSinceReferenceDate uses an epoch of January 1, 2001.
edit: Looks like Wikipedia has a handy list https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...
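A couple of the epochs above are easy to sanity-check from Python:

    from datetime import date, datetime, timezone

    # Modified Julian Date: day zero is 1858-11-17 (JD minus 2,400,000.5).
    print((date(2025, 6, 1) - date(1858, 11, 17)).days)           # 60827 = MJD of 2025-06-01

    # Apple's reference date: 2001-01-01 00:00:00 UTC expressed as Unix time.
    print(datetime(2001, 1, 1, tzinfo=timezone.utc).timestamp())  # 978307200.0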
The Open Group Base Specifications Issue 7, 2018 edition says that "time_t shall be an integer type". Issue 8, 2024 edition says "time_t shall be an integer type with a width of at least 64 bits".
C merely says that time_t is a "real type capable of representing times". A "real type", as C defines the term, can be either integer or floating-point. It doesn't specify how time_t represents times; for example, a conforming implementation could represent 2024-12-27 02:17:31 UTC as 0x20241227021731.
It's been suggested that time_t should be unsigned so a 32-bit integer can represent times after 2038 (at the cost of not being able to represent times before 1970). Fortunately this did not catch on, and with the current POSIX requiring 64 bits, it wouldn't make much sense.
But the relevant standards don't forbid an unsigned time_t.
> If year < 1970 or the value is negative, the relationship is undefined.
That'd be like saying some points in time don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
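For scale, a 24-hour linear smear of one leap second stretches each second by 1/86400, which is where that rough "0.001%" figure comes from:

    print(1 / 86400)      # fractional stretch per smeared second, ~1.16e-05
    print(100 / 86400)    # ~0.00116 percent, i.e. roughly "0.001% longer"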
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
TAI is a separate time scale and it is used to define UTC.
There is now CLOCK_TAI in Linux [1], tai_clock [2] in c++ and of course several high level libraries in many languages (e.g. astropy.time in Python [3])
There are three things you want in a time scale:
* Monotonically increasing
* Ticking with a fixed frequency, i.e. an integer multiple of the SI second
* Aligned with the solar day
Unfortunately, as always, you can only choose 2 out of the 3.
TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that is the definition of the SI second, forever increasing.
Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.
UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 1 second of UT1. The last part is under discussion to be changed to a much longer time, practically eliminating future leap seconds.
The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.
The remaining issues are:
* On many systems, it's not simple to get proper TAI
* Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. of no leap seconds in UTC
* There is no standardized way to handle the leap seconds in the Unix time stamp, so on days around the introduction of a leap second, the relationship between the Unix timestamp and the actual UTC or TAI time is not clear; several versions exist and that results in uncertainty of up to two seconds
* There might be a negative leap second one day, and nothing is ready for it
[1] https://www.man7.org/linux/man-pages/man7/vdso.7.html [2] https://en.cppreference.com/w/cpp/chrono/tai_clock [3] https://docs.astropy.org/en/stable/time/index.html
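For the astropy route mentioned in [3] (requires the third-party astropy package; a sketch, not a full treatment):

    from astropy.time import Time

    t = Time("2025-06-01T00:00:00", scale="utc")
    print(t.tai.iso)    # 2025-06-01 00:00:37.000 -- TAI is 37 s ahead of UTC here
    print(t.unix)       # POSIX-style seconds since 1970, leap seconds not counted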
I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.
In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “timezone string configuration in Linux that will then give TAI time, ie stop adjusting it with timezones, daylight savings, or leap seconds”.
I hadn’t heard of the concept of timescale.
Personally i think item (3) is worthless for computer (as opposed to human facing) timekeeping.
Your explanation is very educational, thank you.
That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.
I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..
I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.
In theory, if you keep your clock set to TAI instead of UTC, you can use the /etc/zoneinfo/right timezones for civic time and make a (simpler) TAI zone file. I learned of that after I'd created the above though, and I can imagine all sorts of problems with getting the NTP daemon to do the right thing, and my use case was more TZ=TAI date, as you mentioned.
There's a contentious discussion on the time zone mailing list about adding a TAI entry. It really didn't help that DJB was the one wanting to add it and approached the issue with his customary attitude. There's a lot of interesting stuff in there though - like allegedly there's a legal requirement in Germany for their time zone to be fixed to the rotation of the earth (and so they might abandon UTC if it gives up leap seconds).
A remaining issue is that it is not easy to get proper TAI on most systems.
Local noon just doesn't matter that much. It especially doesn't matter to the second.
Before, it was simply the best clock available.
All your clocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo when you retrieve. It would be a mess.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.
NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.
If you want 2 different computers to have the same time, you either have to solve it at a higher layer up by introducing an ordering to events (or equivalent) or use something like atomic clocks.
Google explicitly built spanner (?) around the idea that you can get distributed consistency and availability iff you control All The Clocks.
Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.
Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.
It's a highly complicated topic, and it's amazing PostgreSQL decided to use instant time for its 'datetime with timezone' type instead of the Oracle mess.
For what it's worth, the libraries that are generally considered "good" (e.g. java.time, Nodatime, Temporal) all offer a "zoned datetime" type which stores an IANA identifier (and maybe an offset, but it's only meant for disambiguation w.r.t. transitions). Postgres already ships tzinfo and works with those identifiers, it just expects you to manage them more manually (e.g. in a separate column or composite type). Also let's not pretend that "timestamp with time zone" isn't a huge misnomer that causes confusion when it refers to a simple instant.
I suspect you might be part of the contingent that considers such a combined type a fundamentally bad idea, however: https://errorprone.info/docs/time#zoned_datetime
Postgres has timezone aware datetime fields that translate incoming times to UTC, and outgoing to a configured timezone. So it doesn't store what timezone the time was in originally.
The claim was that the docs explain why not, but they don't.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
Thankfully for me it was just a bunch of non-production-facing stuff.
Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds different from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs some substantial effort to account for.
It most certainly matters to a lot of people. It sounds like you've never met those people.
This is ignoring the fact that, due to the equation of time, solar noon naturally shifts by tens of minutes over the course of the year.
To drive the point, for example local mean solar time at Buckingham palace is already more than 30 seconds off from Greenwich time.
It is up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. The overwhelming majority choose UTC time.
TAI is still nice for many high precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.
There are very good and important reasons why we try to keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.
Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.
Everything would be derived from that.
I suppose it would make some math more complex but overall it feels simpler.
The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, and that caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't one.
And if you've not seen it, there's the falsehoods programmers believe about time article.
I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.
Random example, the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II project. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.
If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.
Eg in most computing contexts, you can synchronize clocks close enough to ignore a few nanos difference.
All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.
A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.
Even the most secure civilian facilities (data centers) are fine sticking 2-3 receivers on the roof and calling it good.
The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.
https://en.m.wikipedia.org/wiki/Precision_Time_Protocol
It does require hardware support, though.
https://www.erlang.org/doc/apps/erts/time_correction.html#ho...
[1]: https://serce.me/posts/16-05-2019-the-matter-of-time
In any case, dates only have to make sense in the context they are used.
Eg we don't know from just the string of numbers whether it's Gregorian, Julian, or Buddhist or Japanese etc calendar.
But seriously, https://xkcd.com/1179/
https://news.ycombinator.com/item?id=42516811
The offset between UTC and TAI is 37 seconds.
Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
(For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are rounded off. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
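For reference, here is the "Seconds Since the Epoch" expression from the 2001 edition mentioned upthread, transcribed into Python with the century terms included; it agrees with calendar.timegm even across the non-leap year 2100. The helper name is made up, and the transcription assumes years from 1970 onward.

    import calendar
    from datetime import datetime, timezone

    def posix_seconds(st):
        y = st.tm_year - 1900     # C's tm_year counts years since 1900
        yday = st.tm_yday - 1     # C's tm_yday is zero-based, Python's is one-based
        return (st.tm_sec + st.tm_min * 60 + st.tm_hour * 3600 + yday * 86400
                + (y - 70) * 31536000 + ((y - 69) // 4) * 86400
                - ((y - 1) // 100) * 86400 + ((y + 299) // 400) * 86400)

    d = datetime(2125, 3, 1, tzinfo=timezone.utc)    # past 2100, which is not a leap year
    print(posix_seconds(d.utctimetuple()))           # 4896460800
    print(calendar.timegm(d.utctimetuple()))         # same value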