How much do fraudsters invest to take down your company?

Over our past few blogs, we’ve explored how device intelligence and behavioral analytics stop fraud in its tracks. But what type of fraud are we talking about, exactly?

When we discuss common fraud tactics with our customers, there’s often uncertainty around what fraud actually looks and acts like. While folks generally know that bad actors employ a wide variety of techniques, many are still working toward a full picture of the spectrum of fraud tactics and a clear definition of each approach. That’s understandable, of course: online fraudsters continue to evolve rapidly and in many directions at once.

One important principle to remember is that a cyberattack is an investment of time, money and energy for a fraudster. Just like you might compare two mutual funds before deciding where to invest your retirement plan, fraudsters weigh the pros and cons of different attack approaches. Are you dealing with a bad actor saving up for a rainy day, who will be stingy with their resources? Or is your fraudster a risk taker ready to gamble for a big, immediate payoff? Perhaps your cybersecurity threat is more focused on the long term, interested in slowly growing their nest egg over time. While these bad actors share the same goal of infiltrating your system, each selects a strategy with a level of investment fit for their unique needs and preferences.

To help build your knowledge of today’s top cybersecurity threats at all investment levels, here’s a breakdown of the types of fraud we see most often today — and what you can do about them.

Bad actors’ investment portfolio of fraud strategies

The list below is by no means comprehensive; we’ve narrowed it down to the fraud schemes most likely to come up when you discuss cybersecurity strategy with your team.

You should factor each of these threats into your overall digital strategy, whether that strategy covers desktop, mobile or both. These tactics live on a spectrum measured by fraudsters’ level of investment, from low to high. That investment varies based on the resources and time bad actors dedicate to an attack, the scalability of their approach and their likelihood of success.

Low investment, low return: Automated account takeovers

With automated account takeovers, bad actors deploy bots and basic automation tools to commit fraud. In an unsophisticated credential stuffing attack, for example, a fraudster deploys a simple automated script that submits thousands of stolen user credentials in seconds, attempting logins without ever loading the login page.

While fraudsters can take certain steps to mask their bots and attacks online, this approach relies heavily on non-human elements. So, when attacks aren’t sophisticated, meaning they don’t emulate real human behavior, they’re likely to be stopped by even the most basic bot detection tools, which are adept at spotting telltale signs of automation. Red flags include multiple login attempts with different credentials from a single IP address, or credentials entered faster than humanly possible.
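To make those red flags concrete, here’s a minimal sketch of what a rule-based check might look like. The thresholds and the `LoginAttempt` structure are illustrative assumptions, not a reference implementation of any particular bot detection product.

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative thresholds -- real systems tune these empirically.
MAX_USERNAMES_PER_IP = 20   # distinct accounts tried from one IP per window
MIN_HUMAN_MS_PER_CHAR = 50  # typing faster than this is suspect

@dataclass
class LoginAttempt:
    ip: str
    username: str
    ms_per_char: float  # average time taken to enter each character

usernames_seen_by_ip: dict[str, set[str]] = defaultdict(set)

def is_suspicious(attempt: LoginAttempt) -> bool:
    """Flag the two telltale signs of automation described above."""
    usernames_seen_by_ip[attempt.ip].add(attempt.username)
    many_accounts_one_ip = len(usernames_seen_by_ip[attempt.ip]) > MAX_USERNAMES_PER_IP
    inhumanly_fast_entry = attempt.ms_per_char < MIN_HUMAN_MS_PER_CHAR
    return many_accounts_one_ip or inhumanly_fast_entry
```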

Humans can get involved in account takeovers, too. However, what human-driven attacks gain in oversight and sophistication they lose in scalability. For example, if a fraudster purchases a list of 10,000 user credentials online, it’s unrealistic for that single cybercriminal to manually try out each and every login.

Medium investment, medium return: Human farms

Human farms solve the scalability issue of human-driven account takeovers. Now, fraudsters can farm out that list of 10,000 user credentials to many people working at a low hourly cost. Bringing on human talent increases scale and often leads to a higher success rate because these workers are not flagged as automation by bot-detection tools.

There are many other instances where fraudsters can call upon support from human farms. For example, once an automated script inputs a correct credential, the targeted company’s security system may detect bot-like behavior and present a typical point of friction like a CAPTCHA. The script immediately routes the CAPTCHA to a human farm worker, who easily solves the challenge and bypasses the security measure. Login achieved, personal information stolen.
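One hypothetical signal for spotting this hand-off pattern: the automated script fills the form at machine speed, then the session stalls while the CAPTCHA makes a round trip to a human solver. A minimal sketch, with purely illustrative thresholds:

```python
def captcha_handoff_suspected(form_fill_ms: float, captcha_solve_ms: float) -> bool:
    """Flag sessions that mix machine-speed form entry with an
    outlier CAPTCHA delay. Thresholds here are illustrative only;
    a real system would learn them from per-site traffic."""
    machine_fast_form = form_fill_ms < 200             # full form in under 0.2s
    outlier_captcha_delay = captcha_solve_ms > 30_000  # 30s+ round trip
    return machine_fast_form and outlier_captcha_delay
```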

Medium investment, medium return: Malware

Typically, malware is a malicious file or piece of code built by fraudsters that infects devices through everyday online activity: we can pick up malware by interacting with a suspicious website or using our devices on public networks.

Malware comes in many shapes and forms, including ransomware and spyware. For most companies, however, credential-stealing malware is of particular concern. Once it infects a user’s device, this malicious software gives fraudsters access to the victim’s usernames and passwords and, in some instances, other key pieces of personal information. That said, bad actors deploying credential-stealing malware still typically rely on their own devices and geolocations when leveraging the stolen information, which makes them easier to spot.

Some fraudsters have managed to close even this loophole by taking over the user’s device itself. This type of attack involves a version of malware called a remote access trojan (RAT). After a user’s device has been infected with a RAT, fraudsters are able to remotely log into a user’s known device and perform actions there. When credentials are saved to that device, it’s difficult to differentiate between good and bad actors because fraudsters now have access to the same device, geolocation and credentials as validated customers.
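To see why device checks alone fall short here, consider this simplified sketch: a naive trust check based only on device ID and geolocation would wave a RAT session straight through, because the session genuinely originates from the victim’s own device. All names and data below are hypothetical.

```python
# Hypothetical enrollment data for a single user.
KNOWN_DEVICES = {("alice", "device-8f2c")}
USUAL_GEO = {"alice": "US-WA"}

def naive_is_trusted(user: str, device_id: str, geo: str) -> bool:
    """Trust check using only device and location signals."""
    return (user, device_id) in KNOWN_DEVICES and USUAL_GEO.get(user) == geo

# A RAT-controlled session passes every check, since the fraudster
# is operating the victim's real device from its real location:
print(naive_is_trusted("alice", "device-8f2c", "US-WA"))  # True
```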

High investment, high return: Social engineering with user interaction

This tactic exploits a user’s own participation and permissions to complete fraudulent interactions. For instance, consider what can happen after a fraudster enters your username and password on your bank’s website. With the right cybersecurity plan in place, your bank should flag the interaction based on signals like a new device, an unfamiliar geolocation, an anomalous login time or unusual physical interactions. These red flags may ultimately trigger intervention in the form of an over-the-phone verification code.
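As a rough illustration of how such signals might combine into a step-up decision, here’s a minimal risk-scoring sketch; the signal names, equal weights and threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    known_device: bool
    usual_geolocation: bool
    usual_hours: bool
    human_like_input: bool  # e.g., natural mouse movement and typing cadence

def risk_score(s: SessionSignals) -> int:
    """One point per red flag; real systems weight signals unevenly."""
    return sum([
        not s.known_device,
        not s.usual_geolocation,
        not s.usual_hours,
        not s.human_like_input,
    ])

def requires_step_up(s: SessionSignals, threshold: int = 2) -> bool:
    # Above the threshold, trigger out-of-band verification,
    # such as the over-the-phone code mentioned above.
    return risk_score(s) >= threshold
```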

If the fraudster also knows your phone number, they can contact you disguised as a representative from your bank, convincing you to hand over what amounts to the keys to your own front door. The bad actor can then easily transfer money to their own bank account. While your bank’s cybersecurity strategy should again flag this behavior as abnormal, the goal is to catch these anomalies sooner. Fraudsters can raise the stakes by impersonating major institutions such as the IRS, taking advantage of users’ fears and unfamiliarity with digital spaces. This process is hard to scale and demands a greater investment of time, but it can also yield higher returns.

In these instances, fraudsters are able to coach users into bypassing the very cybersecurity strategies designed to protect them.

As you learn more about common types of fraud and align with your team on how these tactics threaten your business, it’s often helpful to enlist third-party vendors trained in spotting anomalies across user behavior.

Partnership with cybersecurity technology providers offers access to the latest in device intelligence and behavioral analytics without impacting your users’ experience. With cutting-edge cybersecurity technology, you can flag behavioral anomalies and retain decision-making power over which high-risk interactions and transactions require further intervention on your part. When acting on red flags, whether they point to automation, human-driven fraud or suspicious activity from trusted users, you can introduce friction only where it’s warranted, leaving good users to operate freely and with minimal fuss.

Technology that evaluates both behavioral and device insights can detect the subtle differences among the threats discussed here, as well as threats not included on our list. For example, while device intelligence strategies may successfully prevent certain types of account takeovers and malware efforts, behavioral tools are more adept at intervening when other threats occur. When dealing with a RAT, for instance, telling a good actor from a bad one may come down to signals as nuanced as how a customer typically moves their mouse or how quickly they input information.
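To sketch what that nuance might look like in practice, here’s a toy baseline comparison on keystroke timing. A production behavioral model would use far richer features (mouse curvature, touch pressure, navigation rhythm), so treat this purely as an illustration.

```python
import statistics

def cadence_anomaly(baseline_intervals_ms: list[float],
                    session_intervals_ms: list[float],
                    z_threshold: float = 3.0) -> bool:
    """Compare this session's mean keystroke interval to the user's
    historical baseline and flag large deviations. The z-score
    threshold is an illustrative choice, not a tuned value."""
    mu = statistics.mean(baseline_intervals_ms)
    sigma = statistics.stdev(baseline_intervals_ms)  # needs 2+ samples
    if sigma == 0:
        return False  # baseline too uniform to judge against
    z = abs(statistics.mean(session_intervals_ms) - mu) / sigma
    return z > z_threshold
```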