
Twitter removes storage bottlenecks, speeds up Hadoop analytics by 50%

Presented by Intel


Think it’s hard keeping up with your Twitter feed? Imagine keeping track of all of Twitter. “Every tweet is comprised of over 100 data points,” says Matt Singer, a senior staff hardware engineer responsible for server architecture at Twitter. Data from every retweet, “unfollow,” link click, and other actions feeds analytics and deep learning systems serving operational, advertising, and other vital functions. It’s a nonstop stream exceeding 1.5 trillion events per day.

How does an organization handle such hyper-scale demands? Twitter relies upon one of the world’s biggest deployments of Hadoop clusters. The open source Big Data analytics software helps the company generate business insights that allow it to manage and grow its vast global network. To enhance its reputation as the premier streaming information service, Twitter turned to Intel to help amplify its growth by maximizing performance and slashing rising storage costs.

The two companies collaborated on finding new ways to improve Hadoop performance and remove bottlenecks. The approach the partners pioneered, which selectively caches and stores temporary files on fast solid-state drives (SSDs) while increasing processor density 6X, brought Twitter 50% faster Hadoop run times, a 75% reduction in hard disk drive (HDD) storage, 30% lower total cost of ownership (TCO), and a new reference architecture for server clusters that supports speedy, stable growth. Here’s how they did it, and what’s next.

Slow drives = slow analytics

“Hard drives can keep up with fast modern processors when they’re doing one thing at a time,” says Singer. “But bottlenecks occur when you give an HDD multiple things to do at the same time.” That’s exactly what was happening in Twitter’s critical Hadoop workloads on commodity 7,200 rpm drives.

The problem is that over the years hard drives have gotten bigger but not faster.

“I/O operations per second (IOPS) have not increased proportionally,” explains David Tuhy, vice president and general manager of the Intel Data Center Optane Storage Division. “Every time they double the size of the hard drive, there is a corresponding cut in the IOPS per gigabyte by up to half. So the world’s storage is getting slower every generation.”
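To make that trend concrete, here’s a quick back-of-the-envelope calculation. It’s only a sketch: the ~100 random IOPS figure for a 7,200 rpm drive and the capacity points are illustrative assumptions, not Twitter or Intel measurements.

```python
# Illustrative only: a 7,200 rpm HDD delivers on the order of ~100 random IOPS,
# and that figure stays roughly flat as platter density (capacity) grows.
RANDOM_IOPS_PER_HDD = 100  # assumed constant across drive generations

for capacity_tb in (2, 4, 8, 16):
    iops_per_gb = RANDOM_IOPS_PER_HDD / (capacity_tb * 1000)
    print(f"{capacity_tb:>2} TB drive: {iops_per_gb:.3f} IOPS per GB")

# Each doubling of capacity roughly halves IOPS per GB, which is
# exactly the squeeze Tuhy describes.
```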

More clusters, more problems

These performance problems multiply greatly in hyperscale operations like Twitter’s: a typical Hadoop cluster at the company (the total number of clusters is confidential) has up to 10,000 nodes and 100 PB of logical storage, spread across 100,000 HDDs.

Moreover, slow IOPS per GB of storage limits an organization’s architectural and hardware choices. Twitter had been continually adding more costly servers and storage to its Hadoop clusters, but had hit a wall. At this stage, simply adding more hard drives would actually make the bottlenecks worse.
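A rough per-node budget illustrates why. The sketch below assumes a 12-HDD Hadoop node, about 100 random IOPS per 7,200 rpm drive, and a few dozen concurrent YARN containers; the numbers are illustrative, not Twitter’s internal figures.

```python
# Hypothetical node: 12 HDDs at ~100 random IOPS each, shared by many containers.
hdds_per_node = 12
iops_per_hdd = 100             # assumed figure for a 7,200 rpm drive
concurrent_containers = 48     # assumed concurrent YARN tasks on the node

node_iops = hdds_per_node * iops_per_hdd
print(f"Node random-I/O budget: {node_iops} IOPS")
print(f"Per container:          {node_iops / concurrent_containers:.0f} IOPS")
# With only ~25 random IOPS per task, small random reads and writes queue up
# behind one another; adding HDDs adds capacity but little concurrency headroom.
```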

So in fall 2017, Twitter and Intel engineers set out to investigate: How can Hadoop I/O performance be increased without driving up costs?

Solution: Cache, not cash

Twitter initially assumed slowdowns in Hadoop were due to the sheer volume of data. But after discussions with Intel engineers, they decided to explore intelligent caching in their Hadoop disk subsystem using Intel Cache Acceleration Software (Intel CAS), along with a much larger cache capacity.

It was an innovative move. “Nobody thought about using a caching model to try to accelerate Hadoop,” explains Juan Fernandez, an Intel technical solution specialist. “It isn’t a cache-friendly workload. But we knew that, if we could target an SSD cache just to the bottleneck in the Hadoop algorithm, this was critically important for the process.”

Intel built a nine-node test cluster at one of its data centers in Arizona, Twitter created a sizeable 120-node experimental cluster in its data center using Intel’s latest solid-state drives, and joint experiments began. The shared engineering toolbox included TeraSort for synthetic benchmarks, GridMix to replay representative production workloads, and the Intel VTune Amplifier Platform Profiler for system profiling. Teams worked in sync, collaborating regularly by videoconference.

The breakthrough

In spring 2018, a breakthrough idea emerged: Instead of caching all the actual Hadoop data, why not cache a smaller subset of critical-path working data (a.k.a. metadata)? “The hard drives were doing okay with big sequential data, but we were having problems with small, random bits of data,” Singer explains. “But by trying to cache everything, data that was needed immediately got bumped out of the cache. We needed to find a better way to target the caching.”

So engineers began looking at new ways of analyzing, decomposing, and partitioning Hadoop metadata. Their goal was to find the smallest, hottest data flows and direct them to the SSD. Ultimately, they chose data from YARN, the Hadoop component that dynamically manages cluster resources and schedules tasks.
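Conceptually, the change amounts to routing YARN’s small, hot, short-lived files (shuffle output, spill files, container scratch space) to the SSD while leaving bulk, sequential HDFS block data on the HDDs. The Python sketch below illustrates that routing decision; the paths and classification rules are hypothetical, and Twitter’s actual deployment achieved the effect through Intel CAS-based caching rather than explicit path routing.

```python
# Illustrative sketch of "selective routing": hot, small, short-lived flows go
# to the SSD; bulk sequential data stays on the HDDs. Paths are hypothetical.
SSD_DIR = "/mnt/ssd0/yarn-local"
HDD_DIRS = ["/data/disk%d/hdfs" % i for i in range(3)]

def placement_for(flow: str) -> str:
    """Return the directory a given data flow should land on."""
    hot_flows = {"shuffle_output", "spill_file", "container_scratch"}  # YARN temp data
    if flow in hot_flows:
        return SSD_DIR                               # critical-path working data
    return HDD_DIRS[hash(flow) % len(HDD_DIRS)]      # everything else stays on HDD

for f in ("shuffle_output", "spill_file", "hdfs_block_read"):
    print(f"{f:18s} -> {placement_for(f)}")
```

In stock Hadoop, a comparable effect can be approximated by pointing YARN’s local directories (e.g., yarn.nodemanager.local-dirs) at SSD-backed mounts while keeping the DataNode data directories on HDDs; a caching approach has the advantage of adapting automatically as the hot set changes.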

Engineers excitedly reconfigured the test clusters and ran the tests again with the intelligent data cache. “There was that kind of moment of disbelief,” Singer recalls. Every runtime benchmark improved sharply, and, more surprisingly, hard drive accesses dropped.

Reducing hard drives by 75%

After moving the temporary data from YARN processes to the fast, high-density Intel SSD, it became clear fewer HDDs were needed. But how few?

The standard Hadoop server in the Twitter data center had 12 drives. Engineers pulled out three, then ran benchmark tests again. No drop in performance. So they pulled another three. Again, solid performance. Then another three. “Now we’re down to three drives and getting the same performance,” Singer says. “We knew we had moved the right workload to the SSD. This was when our eyes really got big.”

But the Twitter-Intel teams still needed to prove it was the intelligent caching making the difference. So they disabled it, put back all 12 HDDs, and re-ran the tests. Sure enough, when engineers pulled out the first three drives, the benchmarks took longer. “We got down to three hard drives, and the benchmark took three-and-a-half times as long. That really gave us the confidence to move forward.”
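The experiment itself follows a simple pattern: hold everything else constant, vary the number of HDDs, and time the benchmark with the cache enabled and disabled. Below is a minimal harness sketch; the `run_benchmark` callable standing in for a TeraSort or GridMix launch is hypothetical.

```python
import time
from typing import Callable

def sweep_drive_counts(run_benchmark: Callable[[int, bool], None],
                       drive_counts=(12, 9, 6, 3)) -> dict:
    """Time the workload for each HDD count, with SSD caching on and off."""
    results = {}
    for cache_enabled in (True, False):
        for n in drive_counts:
            start = time.monotonic()
            run_benchmark(n, cache_enabled)   # e.g., kick off a TeraSort/GridMix run
            results[(n, cache_enabled)] = time.monotonic() - start
    return results

# Usage with a stand-in workload; a real run would reconfigure the cluster to
# expose only `n` HDDs before launching the benchmark.
timings = sweep_drive_counts(lambda n, cached: time.sleep(0.01))
for (n, cached), secs in sorted(timings.items()):
    print(f"{n:>2} HDDs, cache={'on' if cached else 'off'}: {secs:.2f}s")
```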

New architecture for Hadoop servers

In spring 2019, Twitter began testing new server designs based on the results and computations from the previous year’s explorations. Configuration and validation testing verified that moving from four-core to 24-core processors (Intel Xeon Gold 6262V) in Hadoop clusters delivered 50% faster run times – a huge gain that opens up new analytic possibilities when paired with the fast new SSDs.

Today, Twitter continues producing and introducing the new-architecture Hadoop clusters into its production data centers. The company took advantage of the opportunity to increase the typical new configuration from 12 terabytes of total storage to 48 terabytes. These larger-capacity disks are an important part of the increased cost efficiency of the new clusters, thanks to their lower cost per TB.

The increased storage capacity extends the speed performance improvements of the SSD, Singer explains. “Now, we can increase the retention time we can have on data. A cluster can perform analysis on a larger time scale or capture interesting new data points, for example.”

Fewer traffic jams

For Twitter, boosting Hadoop performance and cost efficiency with caching, fast Intel SSDs, and more compute has yielded dramatic benefits. Besides reducing or removing hard drives, the company can now store more data on existing systems more efficiently, with space and storage cycles to spare.

Through experimentation and collaboration, the company has cut the TCO of its Hadoop clusters by nearly one-third. Much of the savings comes from eliminating three of every four hard drives, along with the related costs for rack space, energy, and maintenance. “If we have one-fourth of the hard drives, we’ll have approximately one-fourth of the number of drive failures. That’s a lot less interruption, maintenance, and risk of losing data,” Singer says.
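The failure-rate arithmetic follows directly from the drive count. Using the fleet size quoted earlier (on the order of 100,000 HDDs in a large cluster) and an assumed annualized failure rate of roughly 2% (an illustrative, industry-typical figure, not Twitter’s data), the reduction looks like this:

```python
# Illustrative arithmetic: fewer drives means proportionally fewer failures.
hdds_before = 100_000           # approximate HDD count in a large cluster (from the article)
hdds_after = hdds_before // 4   # three of every four drives eliminated
annual_failure_rate = 0.02      # assumed ~2% AFR; illustrative only

for label, drives in (("before", hdds_before), ("after", hdds_after)):
    print(f"{label}: {drives:,} drives -> ~{drives * annual_failure_rate:,.0f} failures/year")
# Roughly one-quarter the drive replacements, service visits, and rebuild windows.
```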

In essence, the new Hadoop configurations prevent traffic jams. They help ensure that different workloads and groups are not delayed or bumping into each other when accessing shared clusters and data. Avoiding such bottlenecks and bringing processing closer to huge volumes of “hot” data is increasingly crucial in today’s business environment. Says Tuhy: “High-quality data analysis, business intelligence, and fast transactions are the new way that companies make money on the Internet.”

Continuing the advances

The Twitter team will continue working with Intel in several key areas, including finding the optimal balance of HDDs, processor threads, and SSDs. The team also continues to innovate on Intel’s open source Cache Acceleration Software project. While SSDs will not replace commodity hard drives in many applications, when strategically deployed with extra CPU horsepower and used for selective caching, they are compelling, delivering both higher performance and lower TCO for critical workloads.

Intel is bringing the learnings, best practices, and new caching software and analysis tools from the Twitter collaboration to cloud service providers, financial institutions, and others. “Twitter is the perfect example,” says Tuhy, “but you don’t have to be Twitter-size to benefit. The IOPS-per-GB issues for HDDs are the same.”

Best practice: Be selective

Re-architecting and rethinking caching helped Twitter find new efficiencies in its Hadoop clusters. For organizations considering the new approach, Twitter’s Singer has several pieces of advice. He underscores the importance of testing, of understanding your workloads and data flows in detail, and of experimenting with different configurations. And, he warns, don’t confuse a busy hard drive with an efficiently used one.

Singer concludes: “The big takeaway is selective routing of flows and data to an SSD. You don’t need to cache everything. Identifying selectively is likely to give you dramatic performance and operating benefits. That’s not the typical way that a cache is used. But it’s a producer/consumer model that has worked really well.”

Go deeper: Check out the animation video and white paper, and for more complete information about performance and benchmark results, visit www.intel.com/benchmarks.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.
