Making Sense Of "Senseless" JavaScript Features
Juan Diego Rodríguez
Published 2023-12-28, updated 2024-06-12
Why does JavaScript have so many eccentricities? Like, why does `0.2 + 0.1` equal `0.30000000000000004`? Or why does `"" == false` evaluate to `true`?

There are a lot of mind-boggling decisions in JavaScript that seem pointless; some are misunderstood, while others are outright design missteps. Regardless, it's worth knowing *what* these strange things are and *why* they are in the language. I'll share what I believe are some of the quirkiest things about JavaScript and make sense of them.

## `0.1 + 0.2` And The Floating Point Format

Many of us have mocked JavaScript by writing `0.1 + 0.2` in the console and watching it resoundingly fail to produce `0.3`, returning a funny-looking `0.30000000000000004` instead.

What many developers might not know is that the weird result is not really JavaScript's fault! JavaScript is merely adhering to the IEEE Standard for Floating-Point Arithmetic (IEEE 754) that nearly every other computer and programming language uses to represent numbers.

But what exactly is floating-point arithmetic?
Computers have to represent numbers of all sizes, from the distances between planets down to the distances between atoms. On paper, it's easy to write a massive number or a minuscule quantity without worrying about the space it takes. Computers don't have that luxury, since they have to store all kinds of numbers in binary, in a small amount of memory.
Take an 8-bit integer, for example. In binary, it can hold integers ranging from `0` to `255`.

*Figure: 8-bit integers showing 0 and 255.*
The keyword here is *integers*. It can't represent any decimals between them. To fix this, we could add an imaginary decimal point somewhere along our 8 bits, so the bits before the point represent the integer part and the rest represent the decimal part. Since the point is always in the same imaginary spot, it's called a *fixed-point* decimal. But it comes at a great cost since the range is reduced from `0`–`255` to exactly `0`–`15.9375`.

*Figure: Decimals with a fixed point.*
Having greater precision means sacrificing range, and vice versa. We also have to take into consideration that computers need to serve many users with different requirements. An engineer building a bridge doesn't worry too much if a measurement is off by a hundredth of a centimeter. But that same hundredth of a centimeter can cost much more for someone making a microchip. The precision that's needed is different, and the consequences of a mistake can vary.
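A fixed-point scheme like the one above is easy to sketch. Here is a minimal illustration, assuming a hypothetical 8-bit layout with 4 fractional bits: every value is simply an integer count of sixteenths, which is exactly why the maximum drops to `15.9375`.

```javascript
// A hypothetical 8-bit fixed-point format with 4 fractional bits:
// the stored byte counts sixteenths (1/16) of a unit.
const FRACTION_BITS = 4;
const SCALE = 2 ** FRACTION_BITS; // 16

// Encode a real number into the 8-bit fixed-point representation.
function toFixedPoint(value) {
  const raw = Math.round(value * SCALE);
  if (raw < 0 || raw > 255) throw new RangeError("out of range");
  return raw;
}

// Decode a stored byte back into a regular number.
function fromFixedPoint(raw) {
  return raw / SCALE;
}

console.log(fromFixedPoint(255));                // 15.9375, the new maximum
console.log(fromFixedPoint(toFixedPoint(3.14))); // 3.125, the nearest sixteenth
```

Note how `3.14` gets snapped to the nearest representable sixteenth; that quantization is the precision we trade away for a fixed layout.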
Another consideration is how much memory each number occupies; dedicating something like a megabyte to a single number isn't feasible.
The *floating-point* format was born from this need to represent both large and small quantities with precision and efficiency. It does so in three parts:

1. A **sign bit** that represents whether the number is positive or negative (`0` for positive, `1` for negative).
2. A **significand**, or **mantissa**, that contains the number's digits.
3. An **exponent** that specifies where the decimal (or binary) point is placed relative to the beginning of the mantissa, similar to how scientific notation works. Consequently, the point can move to any position, hence the *floating* point.
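To make those three parts concrete, here is a sketch that pulls the sign, exponent, and mantissa fields out of a JavaScript number, which is a 64-bit double (1 sign bit, 11 exponent bits, 52 stored mantissa bits), using a `DataView`. The `decompose` helper is a name made up for this example:

```javascript
// Split a JavaScript number (an IEEE 754 double) into its three fields.
function decompose(n) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, n); // big-endian by default
  const hi = view.getUint32(0); // upper 32 bits: sign, exponent, mantissa start
  const lo = view.getUint32(4); // lower 32 bits: rest of the mantissa
  return {
    sign: hi >>> 31,               // 1 bit
    exponent: (hi >>> 20) & 0x7ff, // 11 bits, biased by 1023
    mantissa: (BigInt(hi & 0xfffff) << 32n) | BigInt(lo), // 52 bits
  };
}

console.log(decompose(-2));  // { sign: 1, exponent: 1024, mantissa: 0n }
console.log(decompose(0.5)); // { sign: 0, exponent: 1022, mantissa: 0n }
```

For `-2`, the biased exponent is `1023 + 1 = 1024` and the mantissa is empty because the implicit leading `1` already carries the whole value.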
*Figure: Decimals with a floating point.*
An 8-bit floating-point format can represent numbers between `0.0078` and `480` (and their negatives), but notice that it can't represent *all* of the numbers in that range. That's impossible, since 8 bits can hold only 256 distinct values. Inevitably, many numbers cannot be accurately represented; there are *gaps* along the range. Computers, of course, work with more bits to increase accuracy and range, commonly 32 or 64, but it's still impossible to represent all numbers exactly, a small price to pay for the range we gain and the memory we save.

The exact dynamics are far more complex, but for now, we only have to understand that while this format lets us express numbers across a huge range, it loses precision (the gaps between representable values get bigger) as the numbers grow. For example, JavaScript numbers are stored in a double-precision floating-point format, i.e., each number occupies 64 bits in memory, with 52 explicit mantissa bits plus one implicit leading bit, giving 53 bits of precision. That means JavaScript can only safely represent integers between −(2^53 − 1) and 2^53 − 1 without losing precision. Beyond that, the arithmetic stops making sense. That's why we have the `Number.MAX_SAFE_INTEGER` static data property to represent the maximum safe integer in JavaScript, which is 2^53 − 1, or `9007199254740991`.

But `0.3` is obviously below the `MAX_SAFE_INTEGER` threshold, so why can't we get it when adding `0.1` and `0.2`? The floating-point format struggles with some fractional numbers.
This isn't a flaw unique to the floating-point format; every positional number system has fractions it cannot represent with finitely many digits.

To see this, let's try to represent one-third (1⁄3) in base-10:

```
0.3
0.33
0.3333333 [...]
```

No matter how many digits we write, the result will never be *exactly* one-third. In the same way, we cannot accurately represent some fractional numbers in base-2, i.e., binary. Take, for example, `0.2`. We can write it with no problem in base-10, but if we try to write it in binary, we get a recurring `1001` tail that repeats infinitely:

```
0.001 1001 1001 1001 1001 1001 10 [...]
```

We obviously can't store an infinitely long number, so at some point the mantissa has to be truncated, making it impossible not to lose precision in the process. If we convert `0.2` from double-precision floating-point back to base-10, we see the actual value saved in memory:

```
0.200000000000000011102230246251565404236316680908203125
```

It isn't `0.2`! We cannot represent an awful lot of fractional values, and not only in JavaScript but in almost all computers. So why does running `0.2 + 0.2` correctly compute `0.4`? In that case, the imprecision is so small that it gets rounded away by JavaScript (at the 16th decimal), but sometimes the imprecision is enough to escape the rounding mechanism, as is the case with `0.2 + 0.1`. We can see what's happening under the hood if we sum the actual values of `0.1` and `0.2`.

This is the actual value saved when writing `0.1`:

```
0.1000000000000000055511151231257827021181583404541015625
```

If we manually sum the actual values of `0.1` and `0.2`, we will see the culprit:

```
0.3000000000000000444089209850062616169452667236328125
```

That value is rounded to `0.30000000000000004`.
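The repeating binary tail can be reproduced with a small sketch. Using exact integer arithmetic on a fraction (here `0.2` written as 1⁄5) sidesteps the very rounding we're trying to illustrate; `fractionBits` is a helper name made up for this example:

```javascript
// Emit the first `count` binary digits of the fraction num/den,
// using exact integer arithmetic (no floating point involved).
function fractionBits(num, den, count) {
  let bits = "";
  for (let i = 0; i < count; i++) {
    num *= 2; // shift the fraction one binary place to the left
    if (num >= den) {
      bits += "1";
      num -= den;
    } else {
      bits += "0";
    }
  }
  return bits;
}

console.log(fractionBits(1, 5, 24)); // 0.2 in binary: "001100110011..." forever
```

Grouped differently, those digits are the `0.001 1001 1001 [...]` pattern shown above; the cycle never terminates, so any fixed-width mantissa has to cut it short.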
You can check the real values saved at float.exposed.

Floating-point has its known flaws, but its positives outweigh them, and it's the standard around the world. In that sense, it's actually a relief that all modern systems give us the same `0.30000000000000004` result across architectures. It might not be the result you expect, but it's a result you can predict.
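In practice, this predictability means you should rarely compare floating-point results with `===`. One common workaround (a convention, not an official JavaScript API) is to treat two numbers as equal when they differ by less than a small tolerance such as `Number.EPSILON`; the `nearlyEqual` helper below is a name made up for this sketch:

```javascript
// 0.1 + 0.2 misses 0.3 by a tiny amount...
console.log(0.1 + 0.2 === 0.3);       // false
console.log((0.1 + 0.2).toFixed(20)); // shows the imprecise stored sum

// ...so compare with a tolerance instead of strict equality.
// Number.EPSILON is the gap between 1 and the next representable double.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true

// The integer side of the same story: exactness ends at MAX_SAFE_INTEGER.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true: both round to 2^53
```

A fixed `Number.EPSILON` tolerance is only appropriate for values near 1; for larger magnitudes, the tolerance should scale with the operands.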