Year 2038 problem

3 min read

Y2K Two-digit year

Do you remember Y2K? In summary: to save limited computer memory, programmers used only two digits to record the year. For example, 1998 was stored as 98, and the system assumed a 19 prefix. The highest value that could be recorded was 99, so when the year became 2000 the digits reset to 00 and the computer assumed it was 1900. This type of issue is called an integer overflow: when the maximum possible value is reached and incremented, it wraps around to the minimum. The Y2K bug was largely avoided because enormous effort went into patching systems beforehand.

Y2K38 Unix Timestamp

The Year 2038 problem stems from one of the many ways dates are stored in computer systems. The problematic date format is called a Unix timestamp, Unix epoch time, or often just a timestamp, and many programming languages and systems have adopted it to represent time. This is also why the bug is nicknamed the Epochalypse.

A Unix timestamp, in essence, counts the seconds since midnight on 1 January 1970 (UTC/GMT). The seconds are stored in a signed 32-bit integer, which means the first bit records whether the value is positive or negative and the remaining 31 bits store the magnitude. The range of a signed 32-bit integer is -2^31 to (2^31)-1, which is -2,147,483,648 to 2,147,483,647. Being signed means you can record dates as far back as 20:45:52 UTC on 13 December 1901.
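To make those figures concrete, here is a small sketch (Python is used purely for illustration) that computes the signed 32-bit range and converts its extremes into dates:

```python
from datetime import datetime, timezone

# Range of a signed 32-bit integer: -2^31 .. 2^31 - 1
INT32_MIN = -2**31       # -2,147,483,648
INT32_MAX = 2**31 - 1    #  2,147,483,647

# Interpret those extremes as seconds since the Unix epoch
earliest = datetime.fromtimestamp(INT32_MIN, tz=timezone.utc)
latest = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)

print(earliest)  # 1901-12-13 20:45:52+00:00
print(latest)    # 2038-01-19 03:14:07+00:00
```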

Overflow

With a signed 32-bit integer, the maximum number of seconds that can be stored is 2,147,483,647, which corresponds to 03:14:07 UTC on 19 January 2038. One second later, the value wraps around to -2,147,483,648, i.e. 2,147,483,648 seconds before midnight on 1 January 1970, which is 20:45:52 UTC on 13 December 1901.

So the Y2K38 bug is that at 03:14:08 UTC on 19 January 2038, affected systems will assume it is 20:45:52 UTC on 13 December 1901.
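The wraparound can be simulated in Python (in C, signed overflow is technically undefined behaviour, but on two's-complement hardware it typically wraps exactly like this):

```python
from datetime import datetime, timezone

def to_int32(n):
    # Wrap an integer into the signed 32-bit range (two's-complement behaviour)
    return (n + 2**31) % 2**32 - 2**31

last_good = 2**31 - 1              # 03:14:07 UTC on 19 January 2038
wrapped = to_int32(last_good + 1)  # one second later

print(wrapped)                                           # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```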

Anything that relies on 32-bit Unix timestamps will be affected, including 64-bit computer systems, if they run software or hardware that depends on those timestamps.

Example of systems affected:

  • Embedded systems that store dates as Unix timestamps

  • 32-bit software or operating systems

  • Databases that store timestamps as signed 32-bit integers

  • Database functions that use 32-bit integer representations of time, such as UNIX_TIMESTAMP()

  • Code that compares dates, times, or intervals between two times

  • Code that performs calculations based on times or events in the future or past.
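As a hypothetical illustration of the last two points, consider code that computes a date 25 years in the future using 32-bit timestamps (the loan scenario and the `to_int32` helper are invented for this sketch):

```python
def to_int32(n):
    # Wrap an integer into the signed 32-bit range (two's-complement behaviour)
    return (n + 2**31) % 2**32 - 2**31

start = 1_577_836_800           # 2020-01-01 00:00:00 UTC
term = 25 * 365 * 24 * 60 * 60  # a 25-year loan term, in seconds
end = to_int32(start + term)    # overflows the 32-bit range

print(end)          # -1928730496: the "end" now falls before the epoch
print(end > start)  # False: the loan appears to end before it starts
```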

Ways to fix the bug:

  • Convert the timestamps and functions to use 64-bit integers (and ensure the functions handle 64-bit integers correctly). This moves the problem roughly 292 billion years into the future.

  • Convert the code to use unsigned 32-bit integers where you don’t need to store dates before 1970. The extra bit pushes the overflow to 7 February 2106.

  • Convert the timestamps to use structures designed specifically to handle dates and times, such as DateTime objects.
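A quick check of the two extended ranges mentioned above (again, Python is used only for illustration):

```python
from datetime import datetime, timezone

# Unsigned 32-bit: 0 .. 2^32 - 1 seconds after the epoch
print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00

# Signed 64-bit: how many years do 2^63 - 1 seconds cover?
years = (2**63 - 1) / (365.25 * 24 * 3600)
print(f"about {years:.2e} years")  # ~2.92e11, i.e. about 292 billion years
```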