What is Unix time?
Unix time is the integer count of seconds elapsed since the Unix epoch — 00:00:00 UTC on 1 January 1970.[1] The encoding is normative in the POSIX standard, which both pins the epoch instant and prescribes the formula by which a calendar date and time of day map to a single number, with each day represented as exactly 86,400 seconds.[2]
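The formula is short enough to transcribe. The sketch below restates the standard's expression as a C function (the function name and 64-bit return type are choices made here; tm_year counts years since 1900 and tm_yday days since 1 January, as in struct tm):

```c
#include <stdint.h>

/* The POSIX "Seconds Since the Epoch" expression: every day contributes
 * exactly 86,400 seconds, and the last three terms encode the Gregorian
 * leap-year rule. */
int64_t posix_seconds(int tm_sec, int tm_min, int tm_hour,
                      int tm_yday, int64_t tm_year)
{
    return tm_sec + tm_min * 60 + tm_hour * 3600
         + (int64_t)tm_yday * 86400
         + (tm_year - 70) * 31536000
         + ((tm_year - 69) / 4) * 86400
         - ((tm_year - 1) / 100) * 86400
         + ((tm_year + 299) / 400) * 86400;
}
```

Plugging in 01:46:40 UTC on 9 September 2001 (tm_sec 40, tm_min 46, tm_hour 1, tm_yday 251, tm_year 101) yields exactly 1000000000, the billennium mentioned below.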
The same count goes by several names. POSIX itself calls it Seconds Since the Epoch; in everyday programming it is variously called Unix time, POSIX time, epoch time, or simply a Unix timestamp.[8] The number 0 is the epoch itself. The number 1000000000, which Unix-culture observers nicknamed the "Unix billennium" when it elapsed at 01:46:40 UTC on 9 September 2001, represents one billion seconds later. Negative values are valid and represent instants before 1970.
A Unix timestamp does not carry a time zone. It is an absolute count, identical for an observer in Tokyo and an observer in New York at the same physical instant. The conversion to a human-readable wall clock is a separate calculation that consults a time-zone database for the local rules of the moment.
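A short POSIX C program makes the point concrete, assuming a system with the IANA tz database installed; the two zone names are arbitrary examples:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Render one absolute instant under a given zone's rules.  The zone
 * names are IANA tz database identifiers; the timestamp itself never
 * changes, only its wall-clock rendering. */
static void show(time_t t, const char *zone)
{
    struct tm tm;
    char buf[64];

    setenv("TZ", zone, 1);      /* select the zone's rules          */
    tzset();
    localtime_r(&t, &tm);       /* tz-database lookup happens here  */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &tm);
    printf("%-17s %s\n", zone, buf);
}

int main(void)
{
    time_t billennium = 1000000000;        /* 2001-09-09 01:46:40 UTC */
    show(billennium, "Asia/Tokyo");        /* 10:46:40 JST, 9 Sept    */
    show(billennium, "America/New_York");  /* 21:46:40 EDT, 8 Sept    */
    return 0;
}
```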
How is Unix time different from UTC?
Unix time labels instants in UTC, but it does not tick the way UTC does. UTC includes occasional one-second adjustments — leap seconds — that keep civil time aligned with the Earth's irregular rotation, and a UTC clock therefore briefly reads 23:59:60 whenever a leap second is inserted.[9] Unix time has no encoding for that instant. Each Unix-time day has exactly 86,400 seconds, and the POSIX standard explicitly leaves "the relationship between the actual date and time in Coordinated Universal Time … and the system's current value for seconds since the Epoch" unspecified.[2]
The practical consequence is small but real. Most of the time, a Unix timestamp and a UTC wall clock agree exactly. In the seconds bracketing each leap-second insertion, the two diverge by up to one second. Over the long run, the cumulative gap is bounded by the count of leap seconds inserted since 1972: 27 to date, on top of the 10-second offset with which UTC was initialised, for a total of 37 seconds between UTC (and hence Unix time) and the underlying continuous atomic-time scale. A fuller treatment of UTC and the leap-second mechanism lives in the article on UTC.
Internet timestamp formats follow UTC, not Unix time. The standard for date-and-time strings on the internet allows a literal 60 in the seconds field "at the end of months in which a leap second occurs",[9] which Unix time cannot represent. A Unix timestamp is what programs do arithmetic with; an ISO 8601 string is what humans and protocols pass between systems.
Where did Unix time come from?
Unix time descends directly from the time-keeping primitives of early Unix at Bell Laboratories in the early 1970s. The first surviving manual page for the time system call, dated November 1971, defines time as returning "the time since 00:00:00, Jan. 1, 1971, measured in sixtieths of a second".[10] The 60 Hz tick rate was inherited from the AC line frequency, not from any astronomical or metrological consideration.
A 32-bit counter ticking at 60 Hz lasts only about 2.5 years, and the same V1 manual page noted as a bug that "2**32 sixtieths of a second is only about 2.5 years."[10] By the third edition of the manual (March 1972) the epoch had been advanced to 1 January 1972 — the team was effectively re-zeroing the counter every year, with the bug-note now reading "this guarantees a crisis every 2.26 years."[11]
By the sixth edition (August 1973), the encoding had been overhauled. The same call now returned "the time since 00:00:00 GMT, Jan. 1, 1970, measured in seconds."[3] Two design choices in that single sentence have proved durable: switching the unit from sixtieths of a second to whole seconds bought roughly sixty times the headroom, and back-dating the epoch to before the system existed — and never moving it again — made historical timestamps stable across releases. The 1970-01-01 epoch and the seconds-resolution counter have remained the encoding ever since, picked up unchanged by POSIX in the late 1980s.[1][12]
Why does Unix time ignore leap seconds?
The decision is documented as deliberate in the POSIX rationale. The standard committee chose to "ignore (not apply)" leap seconds in seconds-since-the-Epoch in order "to provide an easy and compatible method of computing time differences",[4] and treated the result as an arithmetic primitive rather than a physical-time count.
The trade-off is straightforward. Subtracting two Unix timestamps to get a duration is a single integer operation; the answer is correct so long as no leap second falls inside the interval, and approximately correct — off by no more than the count of intervening leap seconds — when one does. Subtracting two UTC wall-clock times across a leap-second boundary requires a leap-second table look-up to get the exact answer, and most applications are happy to trade a few seconds of inaccuracy across decades for the simpler arithmetic.
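A sketch of that arithmetic, bracketing the leap second inserted at the end of 2016 (the two timestamp constants are the standard Unix-time values for those instants):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Two instants bracketing the leap second inserted at the end
     * of 2016 (23:59:60 UTC on 31 December). */
    time_t before = 1483228740;  /* 2016-12-31 23:59:00 UTC */
    time_t after  = 1483228860;  /* 2017-01-01 00:01:00 UTC */

    /* One subtraction gives 120 seconds of Unix time.  The true
     * physical duration was 121 SI seconds; Unix time cannot count
     * the inserted 23:59:60. */
    printf("%.0f seconds\n", difftime(after, before));
    return 0;
}
```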
The price is that Unix time is not, strictly, a physical-time count. The second counter "always contains the number of non-leap seconds since the start of 1970, as if the insertion or deletion of leap seconds in UTC never had happened",[13] which is fine for civil-time labelling but inadequate for any system that needs strictly monotonic behaviour across leap-second instants. Code that requires that property typically works in International Atomic Time (TAI) or in a vendor-specific monotonic clock instead.
What is the Year 2038 problem?
The Year 2038 problem is the integer overflow that occurs when a Unix timestamp stored in a signed 32-bit integer reaches its maximum value of 2,147,483,647 — at exactly 03:14:07 UTC on 19 January 2038.[5] The next second wraps the integer to its minimum value of −2,147,483,648, which corresponds to 20:45:52 UTC on 13 December 1901. Software that stores time in a 32-bit time_t and is still running at that instant will appear to jump 136 years into the past.
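Both boundary instants can be confirmed with a few lines of C, assuming a platform whose time_t is wider than 32 bits and whose gmtime accepts pre-1970 values:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t last = (time_t)INT32_MAX;  /*  2147483647 */
    time_t wrap = (time_t)INT32_MIN;  /* -2147483648, the post-wrap value */
    char buf[32];

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&last));
    printf("last 32-bit second: %s UTC\n", buf);  /* 2038-01-19 03:14:07 */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&wrap));
    printf("after the wrap:     %s UTC\n", buf);  /* 1901-12-13 20:45:52 */
    return 0;
}
```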
The fix is to widen the integer. The Linux kernel's internal time-keeping became 64-bit-clean in version 5.1, released in May 2019.[14] On the user-space side, the GNU C library introduced the _TIME_BITS=64 compile-time switch in version 2.34 (August 2021); a 32-bit Linux application that defines the macro at compile time gets a 64-bit time_t transparently mapped to the kernel's 64-bit system calls. On 64-bit platforms time_t was already 64-bit; the migration concerns 32-bit ABIs only.
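A 32-bit build can verify that the switch took effect with a compile-time check along these lines; note that glibc accepts _TIME_BITS=64 only together with _FILE_OFFSET_BITS=64:

```c
/* check.c — confirm a 32-bit build got a 64-bit time_t (glibc 2.34+).
 * Example compile:  cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c */
#include <assert.h>
#include <time.h>

static_assert(sizeof(time_t) == 8, "time_t is not Year-2038-safe");

int main(void) { return 0; }
```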
A 64-bit signed time_t covers about 292 billion years on either side of the epoch — well past any horizon a computer system needs to plan for. The harder cases are not the C library or the kernel but the long tail of artefacts that have hard-coded a 32-bit seconds field: legacy filesystems and on-wire formats (NFSv3, ext3 inode timestamps), embedded toolchains that have not been rebuilt, and third-party libraries that ship a 32-bit time_t in their public ABI.[15] The Year 2038 problem is therefore best understood as a long-running migration that began in the mid-2010s and is mostly complete in mainstream Linux user-space, rather than as a single overflow event.
What are the millisecond and nanosecond variants?
The base Unix-time encoding is whole seconds, but every modern programming environment ships a higher-resolution counterpart that uses the same epoch and the same leap-second-blind semantics. The variants differ only in the unit of count.
JavaScript counts in milliseconds. The ECMAScript Date object encodes a "time value" as an integer number of milliseconds since the Unix epoch, with each day treated as exactly 86,400,000 ms.[6] The supported range is ±8,640,000,000,000,000 ms — 100,000,000 days from the epoch in either direction, or roughly ±273,790 years. Because that range is much larger than what Unix time's signed 32-bit overflow imposes, JavaScript has no Year 2038 problem.
POSIX itself defines a higher-resolution clock interface. clock_gettime(CLOCK_REALTIME, &ts) returns the current Unix time in a struct timespec — a seconds field plus a nanoseconds field, with the nanoseconds field in the range 0 to 999,999,999.[7] The same call with CLOCK_MONOTONIC returns a time from "an arbitrary origin" that "cannot be set"; it is the standard primitive for measuring durations across leap seconds without exposing the discontinuity to the caller.[7]
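A minimal sketch of both calls (error handling omitted):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec wall, mono;

    /* Unix time at nanosecond resolution: whole seconds since the
     * epoch plus a 0..999,999,999 ns remainder. */
    clock_gettime(CLOCK_REALTIME, &wall);
    printf("unix time: %lld.%09ld\n",
           (long long)wall.tv_sec, (long)wall.tv_nsec);

    /* Arbitrary, unsettable origin: only differences between two
     * readings are meaningful. */
    clock_gettime(CLOCK_MONOTONIC, &mono);
    printf("monotonic: %lld.%09ld\n",
           (long long)mono.tv_sec, (long)mono.tv_nsec);
    return 0;
}
```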
Other common variants follow the same pattern. Java's System.currentTimeMillis returns milliseconds; Python's time.time returns floating-point seconds; Go's time.Time.UnixNano returns nanoseconds; Rust's SystemTime is a seconds-and-nanoseconds pair. All inherit POSIX's leap-second-blind semantics.
How do operating systems handle leap seconds in Unix time?
Because Unix time has no encoding for the leap second, every operating system that synchronises against UTC has to do something with the second when it arrives. Three strategies are in use.
The simplest is to step the clock. At the leap-second instant, the system's time_t is held for one second, so that 23:59:60 UTC and the following 00:00:00 UTC share the same Unix-time value. Programs reading the clock during the leap see a one-second backwards jump; ordering is no longer monotonic. This is the historical default behaviour of NTP-disciplined Unix systems.[16]
A more sophisticated approach is to smear the leap second across a longer interval by running the clock slightly slow. Google has used a smear since 2008, currently a 24-hour linear smear from noon UTC to noon UTC, with each smeared second running about 11.6 microseconds longer than an SI second — a frequency offset of roughly 11.6 parts per million.[17] During the smear interval Google's clocks disagree with non-smearing UTC sources by up to half a second, but no client ever sees a non-monotonic clock or a 23:59:60 timestamp. The smear "applies to all Google services, including all our APIs."[17]
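The arithmetic behind those figures is a linear ramp. The helper below is a hypothetical illustration, not Google's implementation: over an 86,400-second window the clock absorbs exactly one extra second, so the mid-window offset is the half second quoted above:

```c
#include <stdio.h>

/* Hypothetical linear-smear helper, not Google's implementation.
 * Over the 86,400 s window starting at smear_start, the reported
 * clock runs slow by 1/86400 (about 11.6 ppm), absorbing exactly
 * one extra second by the end of the window. */
double smear_offset(double t, double smear_start)
{
    double elapsed = t - smear_start;
    if (elapsed <= 0.0)     return 0.0;  /* smear not yet begun   */
    if (elapsed >= 86400.0) return 1.0;  /* full second absorbed  */
    return elapsed / 86400.0;            /* linear ramp           */
}

int main(void)
{
    double start = 1483185600.0;  /* noon UTC, 2016-12-31 */
    /* At the leap-second instant (midnight, mid-window) the smeared
     * clock trails a non-smearing UTC source by half a second. */
    printf("mid-window offset: %.3f s\n",
           smear_offset(start + 43200.0, start));
    return 0;
}
```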
A third route is to expose the underlying continuous atomic time scale. POSIX systems with a CLOCK_TAI clock surface the count of SI seconds elapsed since the same 1970 epoch, with no leap-second adjustments — equivalent to Unix time plus the current TAI–UTC offset. Code that needs strictly monotonic SI-second behaviour reads CLOCK_TAI directly rather than CLOCK_REALTIME, sidestepping the leap-second question entirely.[16][18]
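A sketch of the comparison, noting that CLOCK_TAI is a Linux extension and that the kernel's TAI offset must have been set (typically by an NTP daemon) for the result to be meaningful:

```c
#define _GNU_SOURCE   /* for CLOCK_TAI on older glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec utc, tai;

    clock_gettime(CLOCK_REALTIME, &utc);  /* leap-second-blind Unix time */
    clock_gettime(CLOCK_TAI, &tai);       /* continuous SI-second count  */

    /* On a correctly configured system this prints the current TAI-UTC
     * offset (37 s as of the most recent leap second); on kernels whose
     * TAI offset was never set it prints 0. */
    printf("TAI - UTC = %lld s\n",
           (long long)(tai.tv_sec - utc.tv_sec));
    return 0;
}
```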
Frequently asked questions
Is Unix time the same in every time zone?
Yes. A Unix timestamp is an absolute UTC-based count. The same physical instant produces the same number whether captured in Tokyo, New York, or Reykjavík; only the human-readable rendering of that instant differs by zone.[1]
Can a Unix timestamp be negative?
Yes. Negative values represent instants before the 1970 epoch. The signed 32-bit minimum, −2,147,483,648, corresponds to 20:45:52 UTC on 13 December 1901; a signed 64-bit minimum reaches well into the deep past.
Why is the Unix epoch in 1970, rather than the year Unix was first written?
The original early-Unix time call counted from 1 January of whatever year was current at release time, and the team had to advance the epoch periodically because a 32-bit counter ticking at 60 Hz only covered about 2.5 years. Switching to seconds resolution and back-dating the epoch to before the system existed bought enough headroom that the epoch never had to move again.[10][11][3]
Is JavaScript's Date.now() the same as Unix time on a Linux server?
It identifies the same instants but counts in different units. Date.now() returns milliseconds since the Unix epoch; the standard Unix time() call returns whole seconds since the same epoch. Divide by 1,000 (or use Math.floor(Date.now() / 1000)) to get whole-second Unix time.[6]
Footnotes
- 1. IEEE Std 1003.1-2024 (Open Group Base Specifications, Issue 8) — Base Definitions §3.125 Epoch, The Open Group / IEEE (2024) — accessed 2026-05-05.
- 2. IEEE Std 1003.1-2024 (Open Group Base Specifications, Issue 8) — Base Definitions §4.19 Seconds Since the Epoch, The Open Group / IEEE (2024) — accessed 2026-05-05.
- 3. UNIX Programmer's Manual, Edition 6 (V6) — time(II), 1973-08-05, Bell Telephone Laboratories, hosted by The Unix Heritage Society (1973) — accessed 2026-05-05.
- 4. IEEE Std 1003.1-2017 (Open Group Base Specifications, Issue 7) — Rationale §A.4.16 Seconds Since the Epoch, The Open Group / IEEE (2017) — accessed 2026-05-05.
- 5. The (initial) glibc year-2038 plan, J. Corbet, LWN.net (2015) — accessed 2026-05-05.
- 6. ECMA-262 (15th edition, June 2024) — §21.4.1 Time Values and Time Range, Ecma International (2024) — accessed 2026-05-05.
- 7. IEEE Std 1003.1-2024 — System Interfaces, clock_gettime, The Open Group / IEEE (2024) — accessed 2026-05-05.
- 8. Unix time (Q14654), Wikidata — accessed 2026-05-05.
- 9. RFC 3339: Date and Time on the Internet: Timestamps, Internet Engineering Task Force (2002) — accessed 2026-05-05.
- 10. UNIX Programmer's Manual, Edition 1 (V1) — time(II), 1971-11-03, Bell Telephone Laboratories, hosted by The Unix Heritage Society (1971) — accessed 2026-05-05.
- 11. UNIX Programmer's Manual, Edition 3 (V3) — time(II), 1972-03-15, Bell Telephone Laboratories, hosted by The Unix Heritage Society (1972) — accessed 2026-05-05.
- 12. The Evolution of the Unix Time-sharing System, D. M. Ritchie, in Lecture Notes in Computer Science vol. 79 (Springer-Verlag, 1980); reprinted in AT&T Bell Laboratories Technical Journal 63(8 pt 2) (1984) — accessed 2026-05-05.
- 13. Modernized <time.h> API for ISO C, M. Kuhn, Computer Laboratory, University of Cambridge (2004) — accessed 2026-05-05.
- 14. Approaching the kernel year-2038 end game, J. Corbet, LWN.net (2019) — accessed 2026-05-05.
- 15. System call conversion for year 2038, J. Corbet, LWN.net (2015) — accessed 2026-05-05.
- 16. POSIX clocks for Linux, M. Kuhn, Computer Laboratory, University of Cambridge (1998) — accessed 2026-05-05.
- 17. Leap Smear, Google Public NTP / Google Developers — accessed 2026-05-05.
- 18. RFC 5905: Network Time Protocol Version 4: Protocol and Algorithms Specification, Internet Engineering Task Force (2010) — accessed 2026-05-05.