5 Reasons to Focus on Malware Delivery Mechanisms

Authored by Chris Olson, CEO and Co-Founder, The Media Trust.

Originally published by Security Magazine


Defending against today’s pervasive web-based malware is not as straightforward as it used to be. According to Symantec’s Monthly Threat Report, the number of web attacks almost doubled in April of this year alone, up from 584,000 per day to 1,038,000 per day. Bad actors – seasoned cybercriminals, hacktivists, insiders, script kiddies and more – target premium, frequently whitelisted websites with varied motives: financial gain, espionage and sabotage, to name a few. These web-based attacks are more targeted, more complex and harder to detect, and when an employee visits an infected website, the damage to an enterprise network can be debilitating. Traditional security defenses like blacklists, whitelists, generic threat intelligence, antivirus tools, web filters and firewalls fail to offer comprehensive protection. An alternative security approach is necessary, especially when working with malware data.

Managing malware data needs a paradigm shift

Currently, information security (InfoSec) and IT teams are trained to focus on the context of web-based malware: What might the payload be? Is it replicating or morphing? Where is the payload analysis? Who is targeting the website, and why? These are valid questions, but they should only be asked after action is taken to block the malware – not as a precondition for taking action.

Using existing analysis tactics to assess the ever-increasing volume of malware information is a Sisyphean task in the digital environment. The time it takes to agree that something is malicious is directly proportional to your network’s exposure to web-based malware.

It’s time for InfoSec and IT teams to take a new, proactive approach to shielding customers and Internet real estate from web-based malware. It starts with adopting this simpler definition of malware: “Any code, program or application that behaves abnormally or that has an unwarranted presence on a device, network or digital asset.”

In essence, any code or behavior not germane to the intended execution of a web-based asset is considered malware. While this definition covers the obvious overt offenders, it also includes seemingly non-malicious items such as toolbars, redirects and bot drops. Adopting a simple yet broad definition enables you to focus on shielding your enterprise network from a wide range of active and potential malware attacks.
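Under this definition, enforcement can be a simple default-deny check: any code source not on your allowlist of expected, germane domains is flagged for blocking first and analysis later. A minimal sketch in Python – the domain names here are hypothetical:

```python
# Block-first allowlist check: anything not germane to the site's intended
# execution is flagged as potential malware. Domains are hypothetical.

EXPECTED_DOMAINS = {
    "example.com",           # first-party
    "cdn.example.com",       # our CDN
    "analytics-vendor.net",  # authorized analytics provider
}

def flag_unexpected(observed_domains):
    """Return every observed domain that is not on the allowlist.

    Under the broad definition above, anything unexpected is treated
    as potential malware: block it first, analyze it later.
    """
    return sorted(set(observed_domains) - EXPECTED_DOMAINS)

suspects = flag_unexpected([
    "example.com",
    "cdn.example.com",
    "toolbar-dropper.biz",   # unwarranted presence -> flag it
])
print(suspects)  # ['toolbar-dropper.biz']
```

The point of the sketch is the inversion: the question is not “what does this code do?” but “does this code belong here at all?”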

Understanding the digital environment is critical to breaking the analysis paralysis cycle and replacing it with a “block and tackle” approach. To do so, IT professionals need to focus on what matters: identifying the delivery mechanism in order to stop malware from penetrating the enterprise network. Here are five reasons why you should focus on the delivery mechanism:

Reason 1: Temporal malware is still dangerous

Web-based malware, or malware delivered via the consumer internet (websites a typical person visits in the course of daily activities, such as news, weather, travel, social and ecommerce sites), is fleeting and temporal. Research from The Media Trust reveals that in many scenarios web-based malware is active for as little as a few hours, leaving little time for deep-dive analysis before blocking offending domains. Spending that time on analysis makes you a target for compromise: if the malware doesn’t infect your organization at the outset, it will most likely morph into another malicious domain or code and retarget the website with something more debilitating, such as ransomware or keystroke logging.

Reason 2: Non-overt malware will turn on you eventually

Malware does not necessarily need to be complex or overtly malicious from the start or upon initial detection. Annoying or seemingly innocuous behavior such as out-of-browser redirects, excessive cookie use, non-human clicks/actions or toolbar drops qualifies as malware. While these behaviors may initially appear benign, they will frequently reveal their true intention upon a closer look at both Indicators of Compromise (IOCs) and Patterns of Attack (POAs).

It happens quite often: reports suggest that every year researchers track more than 500 malware evasion tactics used to bypass detection. For instance, a recent attack on several small and medium-tier ecommerce websites showed malicious domains executing over varying time intervals and, in at least one instance, moving from website to website across geographies to avoid detection. In other instances, malware is specifically coded to look benign and execute only when certain conditions are met, e.g., geography, device, user profile or a combination of conditions. This delayed execution, which can take weeks or months, is an effective technique for evading most scanners. An auto-refreshing ad in the browser or an alert to update software could be a red flag.

Reason 3: What’s in a name? 

While names are understandably necessary to tag malware, there is a tendency to fixate on labels rather than block the malware itself. For professionals on the front lines of stopping web-based malware from infecting the enterprise network, focusing on the name can increase dwell time and do more harm than good. Focusing instead on the compromised domains gives teams better insight and lets them block the malware before it penetrates their networks.

Reason 4: Past malware doesn’t predict future attacks

Just because malware is validated with a name or belongs to a recognized family does not mean that information will reliably defend against future attacks. The polymorphic nature of web-based malware allows it to propagate via different domains in various shapes and forms – embedding malicious code on a web page through a particular CMS platform, executing an out-of-browser redirect, or presenting a fake system update alert. Not only is the delivery channel constantly changing, but the actual intent and payload may change as well. Relying on past research is not a foolproof defense against ever-changing malware propagating in the digital ecosystem, a complex and mostly opaque environment.

Reason 5: Death by analysis

Extensive analysis of web-based malware before blocking it can have severe repercussions – a corrupted endpoint or a larger network breach. Once web-based malware reaches an endpoint, it is already past the security perimeter, which means remediation efforts are necessary. According to reports, the average cost for an enterprise to clean up a web-based attack is an estimated $96,000 or more. Think of how many resources – people, time, money – could be saved if malware were blocked immediately upon detection.

By focusing on the delivery mechanism, security professionals can take a proactive stance to harden website defenses against web-based malware and significantly reduce time to action when securing endpoints and the enterprise network. Anything short of real-time response provides the perfect window of opportunity for an attack to succeed.

The Great Data Leakage Whodunit

Safeguarding valuable, first-party data isn’t as easy as you think

If your job is even remotely connected to the digital advertising ecosystem, you are probably aware that data leakage has plagued publishers for many years. But you are most likely still in the dark about the scope and gravity of the issue. Simply put, data leakage is the unauthorized transfer of information from one entity to another. In the digital ad ecosystem, this data loss traditionally occurred when a brand or marketing agency collected publishers’ audience data and reused it without authorization. Today, the scenario is much more complicated due to the sheer number of players across the digital advertising landscape, causing data loss to steadily permeate the entire digital ad industry and leading to a “whodunit” pandemonium.

Surveying the Scene

On average, The Media Trust detects at least 10 parties contributing to the execution or delivery of a single digital ad – a conservative figure, considering the number is frequently as high as 30, and in some cases more than 100, depending on the size of the campaign, type of ad, and so forth. The other contributing parties are typically DSPs, SSPs, ad exchanges, trading desks, CDNs and other middlemen that actively participate in the delivery of the ad as it moves from advertiser to publisher. Just imagine the cacophony of “not me!” that breaks out when unauthorized data collection is detected. To make matters worse, few understand how data leakage impacts their business and, ultimately, the consumer. As a result, an unwieldy game of whodunit is afoot.

Sniffing out the culprit(s)

To unravel this data leakage mystery, let’s get down to brass tacks and build a basic story around just four actors: Bill the Luxury Traveler (Consumer), Brooke the Brand Marketer (Brand), Blair the Audience Researcher (Agency), and Ben the Ad Operations Director (Publisher).


Bill the Luxury Traveler

Case File: As a typical consumer, Bill researched a vacation package for his favorite Aspen resort on a popular travel website. He found a great bargain but wasn’t ready to make the final booking. As he spent the next few days weighing his decision, he noticed ads for completely different resorts on almost every website he visited. How did “they” know he wanted to travel?

Prime Suspects: Bill blames his favorite resort and the leading travel website for not protecting or, even worse, selling his personal data.

Brooke the Brand Marketer

Case File: Brooke is the marketer for a popular Aspen luxury resort. She invested a sizeable percentage of her marketing budget on an agency that specialized in audience research and paid a premium to advertise on a website frequented by consumers like Bill. To her dismay, she realized that this exact target audience is being served ads for competitive resorts on several other websites. How did her competitors know to target the same audience?

Prime Suspects: Brooke suspects her ad agency of leaking her valuable audience information to the ad ecosystem and also fears the leading travel website does not adequately safeguard audience data. What she does not suspect is her own brand website, which could itself be a sieve funneling audience data into the hands of competitors and bad actors alike.

Blair the Audience Researcher

Case File: With a decade of experience serving hospitality clients, Blair’s agency specializes in market research to understand the target audience and recommend digital placements for advertising campaigns. However, one of Blair’s prestigious clients questioned her about the potential use of the brand’s proprietary audience data by competitors. How does she prove the client-specific value of her research and justify the premium spend?

Prime Suspects: Blair is concerned about the backlash from her clients and the impact on the agency’s reputation. She now has to discuss the issue with her trading desk partner to understand what happened, but she is unaware that she is about to go down a rabbit hole that could lead right back to her client or the client’s brand website as the main culprit.

Ben the Ad Operations Director

Case File: Ben is the Director of Ad Operations for a premium travel website. As a digital publisher, the sanctity of his visitor/audience data directly translates to revenue. In this scenario, he suffered when his valuable audience data floated around the digital ecosystem without proper compensation. Almost every upstream partner had access to his audience data and could collect it without permission. The leaked data devalued ad pricing, reduced market share and customer trust, and raised data privacy concerns. How does he detect data leakage and catch the offending party?

Prime Suspects: Everyone. Publishers like Ben are tired of this whodunit scenario and the resulting finger-pointing. While ad exchanges and networks receive a bulk of the blame for data collection, he is aware that many agencies, brand marketers and their brand websites play a role in this caper, too.

And at the end of the day, consumers – people like Bill whose personal data is stolen – are the ultimate victims of this mysterious game.

Guilty until proven innocent

While the whole data leakage mystery is complex, it can be cracked. The first step is accepting that the entire display industry is riddled with mistrust and every participant is guilty until proven innocent. Several publishers, responsible DSPs, trading desks, exchanges, marketing agencies and brands have already taken it upon themselves to solve this endless whodunit. To bolster their innocence, these participants need to carefully review:

  1. Data Collection: Get smart about the tools used for assuring clean ads and content. Your solution provider should check for ad security, quality and performance, and help with data protection. Reducing excessive data collection is the first step in addressing data leakage.
  2. Data Access: With the General Data Protection Regulation (GDPR), the EU-US Privacy Shield, and other timely regulations, the onus is on every player in the digital ad ecosystem to understand what data their upstream and downstream partners can access and collect via ads. Instead of today’s blame game, the industry should gradually see accountability for non-compliant behavior.
  3. Governance: Every entity across the ad ecosystem should adopt and enforce stricter terms and conditions around data collection and data use. This is especially crucial for publishers and brands – the two endpoints of the digital ad landscape.

Ultimately, every participant in the digital advertising ecosystem first needs to monitor and govern their own website in an attempt to close loopholes that facilitate data leakage before pointing fingers at others.

The Blind Spot in Enterprise Security

Website security is overlooked in most IT governance frameworks. 


Managing a website isn’t as easy as you think. Sure, you test your code and periodically scan web applications, but this only addresses your first-party owned code. What about third-party code?

Considering more than 78% of the code executing on enterprise websites comes from third parties, IT/website operations departments cannot truly control what renders in a visitor’s browser. This inability to identify and authorize vendor activity exposes the enterprise to a host of issues affecting security, data privacy and overall website performance. And your website isn’t immune.

Masked vulnerability: What you don’t know can hurt you

The fact that the majority of the code executing on an enterprise website is not seen, let alone managed, does not absolve the enterprise from blame should something go wrong—and it does.

Much publicized stories about website compromises and digital defacement point to the embarrassing reality that websites are not easy to secure. But that’s not all.

Owners of digital properties—websites and mobile apps—are beholden to a series of regulations covering consumer privacy, deceptive advertising, and data protection. The U.S. Federal Trade Commission has dramatically stepped up enforcement against deceptive advertising and promotional practices in the digital environment over the past few years and recently signaled interest in litigating against enterprises found to be violating the Children’s Online Privacy Protection Act (COPPA).

Data privacy regulations don’t apply only to minors accessing the website. The recent overturning of the EU-US Safe Harbor agreement and the resulting EU-US Privacy Shield framework call attention to the need to understand what data is collected, shared and stored via enterprise digital operations.

Don’t forget that these third parties directly affect website performance. Problematic code or behavior—too many page requests, large page download size, general latency, etc.—renders a poor experience for the visitor. Potential customers will walk if your website pages take more than two seconds to load, and third parties are usually the culprits.

The problem is that the prevalence of third-party code masks what’s really happening on a public-facing website. This blindness exposes the enterprise to unnecessary risk of regulatory violations, brand damage and loss of revenue.

Seeing through the camouflage

This is a serious issue that many enterprises realize a little too late. Third-party vendors provide the interactive, engaging functionality people expect when they visit a website—content recommendation engines, customer identification platforms, social media widgets and video platforms, to name a few. They are also the source of numerous back-end services used to optimize the viewing experience—content delivery networks, marketing management platforms, and data analytics.

Clearly, third parties are critical to the digital experience. However, no single individual or department in an organization is responsible for everything that occurs on the site—marketing provides the content and design, IT/web operations makes sure it works, sales/ecommerce drives the traffic, etc. This lack of holistic oversight makes it impossible to hold anyone or any group accountable when things go wrong in ways that jeopardize the enterprise.

Case in point: can you clearly answer the following questions?

  • How many third-party vendors are executing on your website?
  • How did they get on the site, i.e., were they called by another vendor?
  • Can you identify all activity performed by each vendor?
  • What department authorized and takes ownership of these vendors and their activity?
  • How do you ensure vendor activity complies with your organization’s policies as well as the growing body of government regulations?
  • What is the impact of individual vendor activity on website performance?
  • What recourse do you have for vendors that fail to meet contractually agreed service level agreements (SLAs)?
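The first question, at least, can be approximated with a static inventory of the hosts a page calls. The sketch below uses only Python’s standard library and hypothetical domain names to list third-party hosts referenced in a page’s HTML; a real audit would also need to render the page, since vendors frequently call in additional vendors at runtime.

```python
# Static inventory of third-party hosts referenced in a page's HTML.
# Domain names are hypothetical; a rendered-page audit would find more.

from html.parser import HTMLParser
from urllib.parse import urlparse

FIRST_PARTY = "example.com"  # hypothetical first-party domain

class VendorScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        # Collect hosts from tags that pull in executable or tracked content.
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).hostname
            if host and not host.endswith(FIRST_PARTY):
                self.domains.add(host)

page = """
<html><body>
<script src="https://cdn.example.com/app.js"></script>
<script src="https://tags.ad-vendor.net/pixel.js"></script>
<img src="https://tracker.analytics-co.io/1x1.gif">
</body></html>
"""

scanner = VendorScanner()
scanner.feed(page)
print(sorted(scanner.domains))
# ['tags.ad-vendor.net', 'tracker.analytics-co.io']
```

Even this crude count gives the governance discussion a starting point: each host on the list should map to a department that authorized it.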

Questions like these highlight the fact that successfully managing an enterprise website requires a strong command of the collective and individual technologies, processes and vendors used to render the online presence, while simultaneously keeping the IT infrastructure secure and in compliance with company-generated and government-mandated policies regarding data privacy.

Adopting a Website Governance strategy will help you satisfy these requirements.

Take back control

What happens on your website is your responsibility. Don’t you think you should take control and know what’s going on? It’s time you took a proactive approach to security. The Media Trust can shine a light on your entire website operation and alert you to security incidents, privacy violations and performance issues.

 

Encryption – Your website isn’t as secure as you think

HTTPS code does not mean a site is encrypted

Encryption is complicated

Today is D-Day for ecommerce and IT professionals – basically, anyone with a revenue-generating digital property. June 30 marks the day that Google’s ad networks move to HTTPS, following previous statements indicating HTTPS compliance is a critical factor in search engine rankings.

From Google’s announcement to the White House directive mandating HTTPS-compliant federal websites by December 2016, encryption has become the topic du jour. And rumors abound that browsers are getting into the encryption game by flashing alerts when a site loses encryption. Why all the fanfare?

Encryption adds authenticity to website content, privacy for visitor search and browsing history, and security for commercial transactions. HTTPS guarantees the integrity of the connection between two systems—webserver and browser—by encrypting everything, eliminating inconsistent decision-making between server and browser about which content is sensitive. It does not, however, ensure a hacker-proof website, nor does it guarantee data security.

Over the past year, businesses worked to convert their website code to HTTPS. With Google’s recent announcement, ad-supported sites can sit back and relax knowing their sites are secure, right? Wrong.

To have a truly encrypted site, you must ensure ALL connections to your website communicate over HTTPS, including all third-party code executing on your site, not just advertising. Sites using providers such as content delivery networks, data management platforms, hosting services, analytics tools, product reviews, and video platforms need to ensure those connections—and any connections to fourth or fifth parties—are made via HTTPS. Just one break in any call chain leaves your site unencrypted. Considering 57% of ecommerce customers would abandon a purchase session when alerted to an insecure page, the ongoing push toward encrypted sites should not be ignored.
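As a first pass, a static mixed-content scan of the served HTML catches the most obvious breaks. A minimal sketch in Python, with hypothetical URLs: it flags any src or href loaded over plain http://, though it cannot see resources that third parties pull in dynamically at render time.

```python
# Minimal mixed-content check: one plain-http resource anywhere in the
# call chain breaks the page's encryption. URLs below are hypothetical.

import re

def find_insecure_resources(html):
    """Return src/href values loaded over plain HTTP (not HTTPS)."""
    return re.findall(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']', html)

page = '''
<link href="https://cdn.example.com/site.css" rel="stylesheet">
<script src="http://legacy-vendor.example.net/widget.js"></script>
'''
print(find_insecure_resources(page))
# ['http://legacy-vendor.example.net/widget.js']
```

An empty result from a static scan is necessary but not sufficient; dynamically injected fourth- and fifth-party calls still have to be checked from the browser’s side.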

What’s a website operator to do? By its very nature, third-party code resides outside your infrastructure and is not detected during traditional web code scanning, vulnerability assessment, or penetration testing. To ensure your site—and all the vendors serving it—maintains encryption, you must scan it from the user’s point of view to see how the third parties behave. Only then can you detect whether encryption has been lost along the call chain.