What is the expansion of pi? Computers have given a very long answer

You probably remember from math lessons that pi can be expanded as 3.14. Two decimal places, however, is not a particularly impressive result compared to what Google's computers have achieved.

In 2019, Google Cloud pushed this number to 31.4 trillion decimal places. The record did not stand for long: just two years later, scientists at the University of Applied Sciences of the Grisons in Switzerland raised the figure to 62.8 trillion decimal places. This year, however, brought yet another remarkable achievement.


Google, or more precisely its computers, once again played the leading role. With their help, the value of pi was determined to 100 trillion decimal places. Google Cloud is thus involved in breaking records once more, which is all the more impressive considering that the figure has roughly tripled over three years (100 / 31.4 ≈ 3.2).

The expansion of pi is now known to 100 trillion decimal places

This achievement is a testament to how much the Google Cloud infrastructure accelerates year after year. As we can read in a statement published by Google, the core technology that made this possible is Compute Engine, Google Cloud's secure and configurable computing service, together with its recent additions and improvements: the Compute Engine N2 machine family, 100 Gbps egress bandwidth, Google Virtual NIC, and balanced Persistent Disks.

The program that calculated 100 trillion digits of pi is y-cruncher v0.7.8, and the algorithm it relies on is the Chudnovsky algorithm. The computation began on October 14, 2021 and finished on March 21, 2022; in total it took 157 days, 23 hours, 31 minutes, and 7.651 seconds.
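For illustration, the Chudnovsky series converges extraordinarily fast: each term contributes roughly 14 more correct decimal digits. y-cruncher evaluates it at scale with heavily optimized binary splitting, so the Python sketch below is only a minimal illustration of the underlying series (the function name and loop structure are ours, not y-cruncher's), summing it directly with the standard-library decimal module:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    """Sum the Chudnovsky series:

    1/pi = 12 * sum_{k>=0} (-1)^k (6k)! (13591409 + 545140134k)
                          / ((3k)! (k!)^3 * 640320^(3k + 3/2))

    Each term adds roughly 14 correct decimal digits.
    """
    getcontext().prec = digits + 10       # extra guard digits for rounding
    m, l, x, k = 1, 13591409, 1, 6        # integer pieces of the k-th term
    s = Decimal(l)                        # running sum, starts with the k = 0 term
    for i in range(1, digits // 14 + 2):  # ~14 digits per term
        m = m * (k ** 3 - 16 * k) // i ** 3   # exact integer recurrence for (6i)!/((3i)!(i!)^3)
        l += 545140134
        x *= -262537412640768000              # x = (-640320^3)^i
        s += Decimal(m * l) / x
        k += 12
    pi = 426880 * Decimal(10005).sqrt() / s   # since 640320^(3/2) / 12 = 426880 * sqrt(10005)
    getcontext().prec = digits
    return +pi                            # round away the guard digits

print(chudnovsky_pi(50))  # -> 3.14159... (50 significant digits)
```

Even this naive loop reaches 50 digits in a handful of terms; record attempts instead split the series recursively so that the enormous integer multiplications can be done with fast FFT-based arithmetic.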


As you can imagine, an operation on this scale requires enormous computing power and resources. Google Cloud estimated the temporary storage needed for the calculation at 554 TB. Notably, the company built a cluster consisting of one computational node and 32 storage nodes, which together served 64 iSCSI block storage targets.
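For a rough sense of scale, those figures split as follows (the even distribution across nodes and targets is our assumption for illustration; Google's statement does not break the 554 TB down):

```python
# Back-of-the-envelope split of the storage figures quoted above.
# Assumption (ours): storage is spread evenly across nodes and targets.
TOTAL_STORAGE_TB = 554   # temporary storage estimated by Google Cloud
STORAGE_NODES = 32       # storage nodes in the cluster
ISCSI_TARGETS = 64       # iSCSI block storage targets in total

print(f"targets per node: {ISCSI_TARGETS // STORAGE_NODES}")        # 2
print(f"TB per target:    {TOTAL_STORAGE_TB / ISCSI_TARGETS:.2f}")  # 8.66
print(f"TB per node:      {TOTAL_STORAGE_TB / STORAGE_NODES:.2f}")  # 17.31
```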
