
How assuming fraudsters are lazy can help prevent cyberattacks



This article was contributed by Gergo Varga, author of the Fraud Prevention Guide for Dummies and senior content manager and product evangelist at SEON.

In 2022, online fraud is set to be a huge industry. In the U.K. alone, over $187 billion is lost to fraud every year. Globally, fraud cost $5.38 trillion in 2021, per Crowe and University of Portsmouth research, while cybercrime overall is projected to rise to $10.5 trillion by 2025, per Cybersecurity Ventures.

Further, it is estimated that average global losses to fraud increased by 88% in the 12 years from 2008 to 2020. Even worse, this was calculated just before the start of the pandemic, which experts agree has exacerbated the situation further.

Within this landscape, there are different strategies that fraud prevention and management vendors and analysts take to mitigate threats.

But what does this have to do with fraudsters’ laziness? Let’s see.

Betting against fraudsters: The hypothesis

In the anti-fraud industry, you can observe your typical game of cat-and-mouse against fraudsters and scammers, each side doing their best to keep ahead of new trends and technological capabilities.

Both sides will become early adopters of new technology and tools to help them achieve their goals. In general terms, many fraud analysts tend to be reactive, responding to threats as they arise. The more successful strategies, though, remain proactive.

What if, however, we were to make a bet, so to speak, and invest in the assumption that fraudsters are lazy: too lazy to hide well enough to avoid discovery, provided you know where to look?

Criminals have the basics covered

Fraud analysts use a series of tools to identify high-risk users and accounts. These include in-depth device fingerprinting, which automatically queries each user’s hardware, software, and configuration to identify suspicious patterns. One simple example of this is seeing the same device configuration log into dozens of different accounts within a short time.
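To make that example concrete, here is a minimal sketch of such a velocity check in Python. It assumes a stream of login events that each carry a device fingerprint hash, an account ID, and a timestamp; the field names, window, and threshold are hypothetical illustrations, not any vendor’s production logic.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical login events: (device_fingerprint_hash, account_id, timestamp).
# The window and account threshold are illustrative values only.
def flag_shared_devices(events, window=timedelta(hours=24), max_accounts=5):
    """Flag fingerprints that log into many distinct accounts within a short window."""
    by_device = defaultdict(list)
    for fingerprint, account_id, ts in events:
        by_device[fingerprint].append((ts, account_id))

    flagged = set()
    for fingerprint, logins in by_device.items():
        logins.sort()  # chronological order
        for i, (start_ts, _) in enumerate(logins):
            # Count distinct accounts seen within `window` of this login
            accounts = {acc for ts, acc in logins[i:] if ts - start_ts <= window}
            if len(accounts) > max_accounts:
                flagged.add(fingerprint)
                break
    return flagged
```

In practice a fingerprint combines many more hardware and software signals than a single hash, but the counting logic is the same.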

Another type of technology that helps assess the intentions of each user to catch bad actors is IP analysis. For instance, an IP analysis module will consider whether the person is using a private IP address, public IP address, mobile or data center IP, assigning to each of these a value that contributes to their risk score. Moreover, any proxies, VPNs, or Tor/onion nodes identified will increase this score, which means the system sees the user as higher-risk.
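As a rough illustration of how such an IP module could weight those factors, the sketch below assigns higher scores to riskier connection types and anonymization tools. The connection types, field names, and point values are made up for demonstration and do not reflect SEON’s actual scoring.

```python
# Illustrative only: field names and weights are invented for this example.
def ip_risk_score(ip_info: dict) -> int:
    score = 0
    type_weights = {"residential": 0, "mobile": 1, "public": 2, "datacenter": 10}
    score += type_weights.get(ip_info.get("connection_type"), 5)  # unknown type: moderate risk
    if ip_info.get("is_proxy"):
        score += 10   # anonymizing proxies raise the score
    if ip_info.get("is_vpn"):
        score += 10   # commercial VPN endpoints raise it further
    if ip_info.get("is_tor_exit_node"):
        score += 15   # Tor/onion exit nodes are treated as highest risk here
    return score

print(ip_risk_score({"connection_type": "datacenter", "is_vpn": True}))  # 20
```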

While this is not appreciated by those who are extremely cautious about their privacy, the information involved is neither private nor personally identifiable; it is more of a technical breakdown of the user’s current circumstances. Moreover, it’s a tradeoff that allows for safe transactions online, which would be impossible to trust without some level of scrutiny.

The above are examples of technology that’s adopted industry-wide in fraud prevention, though the effectiveness of each vendor’s solution depends on their respective modules and algorithms. 

However, criminals are well aware of these and have devised several techniques and applications to fool such detection algorithms — admittedly, with varying levels of success. 

There is always more to be done to better protect against scams and fraud, though.

Two competing problems: Fraud and churn 

One way to come up with solutions is to ask, “What are real, legitimate shoppers like? How can we figure out whether people online are real, rather than fake, stolen, or synthetic identities, without asking them directly?” Note here that not asking directly is important because avoiding friction and churn is paramount for businesses.

This is because an estimated $18 billion in sales is lost to cart abandonment every year. There are several reasons why someone might abandon their online cart, but 11% of cases happen because shoppers were asked for too much information. Online shoppers seek convenience and are also privacy-aware. Being asked for unnecessary information is seen as inconvenient, and, to be frank, users hate having to provide selfies and identification documents, for example; many perceive this as insulting and risky to their privacy.

It is thus important for merchants to have a frictionless line of defense that does not disrupt the shopper journey. 

So, to this end, we can use information already provided by almost all shoppers in every transaction: an email address — coupled, where appropriate, with a phone number.

If we can use these simple elements to glean information about these people, we will then be able to identify and single out the more suspicious users and request additional proof of identity and/or details only from them, thus allowing the rest of the customers to continue shopping uninterrupted. 

Fraudsters are smart, but also lazy 

So, what we do is combine publicly available information for a given email address and/or phone number to build its digital footprint. Is it associated with a real-life user or not?

Such a tool is based on the assumption that fraudsters are lazy. Although our internal data shows that 98% of bad actors will create a new free email address that matches the stolen or synthetic identity they’ve assumed, our results also prove they will not spend the time to create a complete online profile — i.e. set up convincing social media accounts and other platforms for that address.

This is, of course, unlike real people, who are bound to use — or at least have signed up for — some online services and social media. There were over 4.55 billion social media users on Earth in October of 2021, with 1 billion on TikTok, 2.3 billion on YouTube and 2 billion on WhatsApp. 

What’s more, with email/password leaks reaching up to 8.4 billion entries at a time, most email address owners are likely to have been affected by at least one. As a side note, do keep in mind that this does not mean these people’s accounts were taken over: it’s rare for passwords to leak together with emails, some passwords will have changed since, others are protected by multi-factor authentication, and so on.

Cost-effectiveness and hidden information 

To be completely fair, the fact that fraudsters will not take the time to create a comprehensive, fully convincing online presence for their assumed identities is not necessarily down to laziness. 

It is just not a good return-on-investment for cybercriminals. It only takes a few minutes (even less using automated tools) to sign up for a free email account that matches a stolen credit card’s name. But it would take significantly more time to also create social media profiles for each, especially since such platforms require some sort of verification themselves, and usually involve some checks to prevent the creation of throwaway accounts. Add to that the fact that the vast majority of fake profiles/attempts at fraudulent activity will not work out for criminals, and it is evident they should be seeking to do the bare minimum to get by, in most cases.

So, the data enrichment module will use email addresses and phone numbers to find the digital footprint and create the profile of each user. In simple terms, this digital footprinting means it will look at data points such as the following (a brief sketch of the lookup appears after the list):

  • Is this email associated with any social media profiles (e.g., Facebook, Twitter, LinkedIn)?
  • If it is, are their public details (e.g., gender, location, industry) consistent?
  • Has this address been found in any known data breaches? When is the earliest?
  • Who owns the domain, and when was it registered?
  • Is this email associated with web platforms (e.g., TripAdvisor, GitHub, etc.)?
  • Is it registered on VoIP/messaging apps such as Viber, WhatsApp, Telegram, etc.?
These findings are collated into one comprehensive risk profile, which can set in motion certain know-your-customer (KYC) protocols, such as additional documentation and authentication, block the transaction, or even send the digital profile to a team of human data analysts to assess on a case-by-case basis.
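As a rough illustration of that routing, the snippet below maps a combined risk score to the outcomes just described. The thresholds are invented for the example and would, in reality, be tuned to each business’s risk tolerance.

```python
# Illustrative routing of a combined risk score; thresholds are arbitrary examples.
def route_transaction(risk_score: float) -> str:
    if risk_score < 10:
        return "approve"          # frictionless: no extra checks for the customer
    if risk_score < 30:
        return "request_kyc"      # ask for additional documentation/authentication
    if risk_score < 60:
        return "manual_review"    # hand off to human fraud analysts
    return "block"                # decline the transaction outright

print(route_transaction(5))    # approve
print(route_transaction(45))   # manual_review
```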

Lazy fraudsters vs data enrichment: The results

As a result of this process, we can catch fraudsters in the act without bothering legitimate users with any additional demands and checks. 

This functionality is available as standalone API calls for manual research, or can sit at the core of our end-to-end fraud prevention platform, enriching data and helping to categorize users according to the level of risk they pose. This information is combined with the aforementioned analysis of their device, IP address, behavior, velocity data and more, all coming together to inform our choice to approve or reject a user’s actions or transactions. 

To see whether this approach works — and just how well — we recently gathered the data from our clients’ use of SEON’s anti-fraud platform in late 2021. We then analyzed it, in our effort to better understand recent trends and fraudster behavior. Just how lazy are fraudsters these days?

Internal results from January to September of 2021 clearly show that the more social media and other online platform profiles are associated with an email address, the more likely it is to be genuine.

Also, those who have been found in at least one known data breach are less likely to be suspicious and/or declined. This isn’t so surprising to anyone aware of how prevalent breaches are: 81% of companies have experienced a cyberattack in the past year, while 51% of IT experts don’t feel confident they could mitigate one.

Let’s look more closely at two sectors central to the digital economy. In ecommerce, the users who are automatically approved have a more extensive online presence: 5.68 social media and online platform profiles on average. They are also likely to have been found in slightly over 2.4 data breaches (!) each. Remember that the approvals do not rely only on these data points but on a wide range of attributes, which is part of why the results are so consistent.

By comparison, the average number of social profiles associated with declined users is 2.8, while their address has been found in less than one (0.68) data breach on average. As for those passed to experts for manual review, they are halfway between these, at 3.37 profiles and 1.28 breaches.

Another sector to look at is the online lending arm of the fintech industry. Here, too, it is vital to safeguard against fraud: approving loans to people who will never pay them back can be catastrophic for startups and, if it happens extensively, can literally cost them their entire business.

The lending landscape as described by our findings is similar: those legitimate applicants who are approved have an average of 5.45 social media/online platform profiles, and almost half have been a victim of a data breach. However, declined consumers have only 1.7 social media profiles on average.

As for how many times these email addresses have been found in a data breach, the average is 1.02 for applicants whose loans were approved, but just 0.1 for the ones who were rejected.

It seems that fraudsters will not take the time to create more than a couple of social media or online platform profiles, if any, in their effort to impersonate the owner of a stolen credit card, or a synthetic identity they created. The solution will thus pick that up and flag them accordingly. 

With most comprehensive anti-fraud platforms, merchants and other types of organizations are able to create their own rulesets that match their history, sector and risk tolerance. The process is not unlike creating custom rules in other types of applications. 

In terms of these custom fraud prevention rules set by the business, some of the most common triggers include IP addresses found on at least one spam blacklist, more than one user logging in from the same IP on the same day, and identical cookie hashes across accounts with similar behavior.
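To show what such triggers might look like in practice, here is a hedged sketch that expresses them as simple predicate rules over a transaction record. The field names and rule format are invented for illustration and do not reflect any particular platform’s rule engine.

```python
# Hypothetical rule format: each rule is a name plus a predicate over a transaction
# record. The field names are invented for illustration only.
CUSTOM_RULES = [
    ("ip_on_spam_blacklist",
     lambda tx: tx.get("ip_blacklist_hits", 0) >= 1),
    ("multiple_users_same_ip_same_day",
     lambda tx: tx.get("users_seen_on_ip_today", 0) > 1),
    ("cookie_hash_shared_with_similar_accounts",
     lambda tx: tx.get("accounts_sharing_cookie_hash", 0) > 0),
]

def triggered_rules(tx: dict) -> list:
    """Return the names of every custom rule this transaction trips."""
    return [name for name, predicate in CUSTOM_RULES if predicate(tx)]

example_tx = {"ip_blacklist_hits": 1, "users_seen_on_ip_today": 3}
print(triggered_rules(example_tx))
# ['ip_on_spam_blacklist', 'multiple_users_same_ip_same_day']
```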

Key takeaways

These results demonstrate that it is helpful to assume fraudsters are “lazy” — too lazy to create legitimate and complete digital/online footprints for their fraudulent email addresses. 

In fact, the main reason some of these fake personas showed what little social activity they did is that some free email providers automatically create accounts on linked platforms when you sign up, and this was reflected in the findings.

There’s no question, then, that in the fight against fraud, these two metrics (the number of online profiles and the number of known data breaches associated with an email address) are excellent tools to help organizations stay safe and prevent bad actors from taking advantage of them and their legitimate users.

As for whether fraudsters are genuinely lazy or just understand the principle of cost-effectiveness, it’s still up for debate.

Gergo Varga is the author of the Fraud Prevention Guide for Dummies – SEON Special Edition. He currently works as the senior content manager and product evangelist at SEON.

