Chasing the Revenue Dragon

While chasing the smoky revenue dragon, publishers miss a different monster: data leakage.

In October, The Guardian’s Chief Revenue Officer revealed[1] that numerous ad tech providers in the ad supply chain were extracting up to 70% of advertisers’ money without quantifying the value to the brand. Yes, this revenue loss is eye-opening, but it’s not the only activity affecting your bottom line. Protecting your data assets is critical for maintaining and maximizing revenue, and the inability to control digital audience data within the supply chain is a catalyst for revenue loss. The looming General Data Protection Regulation (GDPR), which takes effect in May 2018, makes the case for data protection that much stronger.

Data: a publisher’s lifeblood

Every digital publisher intrinsically knows that one of their most valuable assets is their audience data – it drives a publisher’s stickiness with lucrative advertisers, their inventory value, and ultimately their brand image.

Data leakage is the unauthorised transfer of information from one entity to another. In the digital ad ecosystem, data loss traditionally occurred when a brand or marketing agency collected publishers’ audience data and reused it without authorisation. Today, this scenario is much more convoluted due to the volume of players in the digital advertising landscape, causing data loss to steadily permeate the entire digital ad industry.

Publishers lose when they can’t control their valuable consumer data:

1. Depleted market share: With your audience data in their hands, advertisers and ad tech providers can go to other publications and target the exact same audiences, thereby devaluing your brand.

2. Reduced ad pricing: When advertisers or ad tech providers can purchase your audience elsewhere at a fraction of the cost, demand for your ads drops, devaluing your ad prices.

3. Exposure to regulatory penalties: Collection and use of consumer data is a publisher’s prerogative, but protecting this data is a weighty responsibility. Inability to safeguard data gathered from your website leaves a publisher vulnerable to running afoul of government regulations. Saying the penalties under GDPR are severe is an understatement: noncompliance can cost up to 4% of your total global turnover or €20 million, whichever is greater.

4. Reputation loss: Ultimately, data loss and any news of noncompliance could negatively affect consumer trust and brand reputation.

The hands behind data loss

On average, The Media Trust detects at least 10 parties contributing to the execution or delivery of a single digital ad. This is a conservative figure: the number is frequently as high as 30, and at times more than 100, depending on the size of the campaign, the type of ad, and so forth. The contributing parties are typically DSPs, SSPs, Ad Exchanges, Trading Desks, DMPs, CDNs and other middlemen who actively participate in the delivery of the ad as it traverses from advertiser to publisher. Any upstream player, including the advertiser or original buyer, has access to a publisher’s proprietary audience data if not monitored for compliance.

The advertising ecosystem isn’t the only offender. The bulk of third-party vendor code that executes on the publisher’s website goes unmonitored, exposing the publisher to excessive and unauthorised data collection. In these cases, a publisher’s own website acts as a sieve leaking audience data into the digital ecosystem.

Ending the chase

Revenue lost to data leakage isn’t an unsolvable conundrum; it can be addressed by applying the following:

  1. Data Collection: Get smart about the tools used for assuring clean ads and content. Your solution provider for ad quality should check for ad security, quality, performance and help with data protection. Reducing excessive data collection is the first step in addressing data leakage.
  2. Data Access: With GDPR, EU-US Privacy Shield, and many more such timely regulations and programs, the onus is on the publisher to understand what data activity their upstream partners engage in via advertising. Instead of today’s rampant mistrust, the supply chain must move to accountability for non-compliant behavior.
  3. Governance: Publishers absolutely need to start adopting and enforcing stricter terms and conditions around data collection and data use.

Ultimately, every publisher needs to monitor and govern third-party partners on their website to close loopholes that facilitate data leakage before pointing fingers at others.

The Great Data Leakage Whodunit

Safeguarding valuable, first-party data isn’t as easy as you think

If your job is even remotely connected to the digital advertising ecosystem, you are probably aware that data leakage has plagued publishers for many years. But you are most likely still in the dark about the scope and gravity of the issue. Simply put, data leakage is the unauthorized transfer of information from one entity to another. In the digital ad ecosystem, this data loss traditionally occurred when a brand or marketing agency collected publishers’ audience data and reused it without authorization. Today, this scenario is much more complicated due to the sheer number of players across the digital advertising landscape, which causes data loss to steadily permeate the entire digital ad industry, leading to a “whodunit” pandemonium.

Surveying the Scene

On average, The Media Trust detects at least 10 parties contributing to the execution or delivery of a single digital ad. This is a conservative figure: the number is frequently as high as 30, and in some cases more than 100, depending on the size of the campaign, the type of ad, and so forth. These contributing parties are typically DSPs, SSPs, Ad Exchanges, Trading Desks, CDNs and other middlemen that actively participate in the delivery of the ad as it moves from advertiser to publisher. Just imagine the cacophony of “not me!” that breaks out when unauthorized data collection is detected. To make matters worse, few understand how data leakage impacts their business and, ultimately, the consumer. As a result, an unwieldy game of whodunit is afoot.

Sniffing out the culprit(s)

To unravel this data leakage mystery, let’s get down to brass tacks and build a basic story around just four actors: Bill the Luxury Traveler (Consumer), Brooke the Brand Marketer (Brand), Blair the Audience Researcher (Agency), and Ben the Ad Operations Director (Publisher).


Bill the Luxury Traveler

Case File: As a typical consumer, Bill researched vacation packages for his favorite Aspen resort on a popular travel website. He found a great bargain but wasn’t ready to make the final booking. As he spent the next few days mulling his decision, he noticed ads for completely different resorts on almost every website he visited. How did “they” know he wanted to travel?

Prime Suspects: Bill blames his favorite resort and the leading travel website for not protecting or, even worse, selling his personal data.

Brooke the Brand Marketer

Case File: Brooke is the marketer for a popular Aspen luxury resort. She invested a sizeable percentage of her marketing budget on an agency that specialized in audience research and paid a premium to advertise on a website frequented by consumers like Bill. To her dismay, she realized that this exact target audience is being served ads for competitive resorts on several other websites. How did her competitors know to target the same audience?

Prime Suspects: Brooke suspects her ad agency of leaking her valuable audience information to the ad ecosystem and also fears the leading travel website does not adequately safeguard audience data. What Brooke does not suspect is her own brand website, which could itself be a sieve that filters audience data into the hands of competitors and bad actors alike.

Blair the Audience Researcher

Case File: With a decade of experience serving hospitality clients, Blair’s agency specializes in market research to understand the target audience and recommend digital placements for advertising campaigns. However, one of Blair’s prestigious clients questioned her about the potential use of the brand’s proprietary audience data by competitors. How does she prove the client-specific value of her research and justify the premium spend?

Prime Suspects: Blair is concerned about the backlash from her clients and the impact on the agency’s reputation. She now has to discuss the issue with her trading desk partner to understand what happened, but she is unaware that she is about to go down a rabbit hole that could lead right back to her client or the client’s brand website as the main culprit.

Ben the Director of Ad Operations

Case File: Ben is the Director of Ad Operations for a premium travel website. As a digital publisher, the sanctity of his visitor/audience data directly translates to revenue. In this scenario, he suffered when his valuable audience data floated around the digital ecosystem without proper compensation. Almost every upstream partner had access to his audience data and could collect it without permission. When his data leaked, it devalued ad pricing, reduced market share and customer trust, and raised data privacy concerns. How does he detect data leakage and catch the offending party?

Prime Suspects: Everyone. Publishers like Ben are tired of this whodunit scenario and the resulting finger-pointing. While ad exchanges and networks receive a bulk of the blame for data collection, he is aware that many agencies, brand marketers and their brand websites play a role in this caper, too.

And at the end of the day, consumers like Bill, whose personal data is stolen, are ultimately the victims of this mysterious game.

Guilty until proven innocent

While the whole data leakage mystery is complex, it can be cracked. The first step is accepting that the entire display industry is riddled with mistrust and every participant is guilty until proven innocent. Several publishers, responsible DSPs, trading desks, exchanges, marketing agencies and brands have already taken it upon themselves to solve this endless whodunit. To bolster their innocence, these participants need to carefully review:

  1. Data Collection: Get smart about the tools used for assuring clean ads and content. Your solution provider should check for ad security, quality, performance and help with data protection. Reducing excessive data collection is the first step in addressing data leakage.
  2. Data Access: With the General Data Protection Regulation (GDPR), EU-US Privacy Shield, and many more such timely regulations, the onus is on every player in the digital ad ecosystem to understand what data their upstream and downstream partners can access and collect via ads. Instead of today’s blame game, the industry should slowly see accountability for non-compliant behavior.
  3. Governance: Every entity across the ad ecosystem should adopt and enforce stricter terms and conditions around data collection and data use. This is especially crucial for publishers and brands – the two endpoints of the digital ad landscape.

Ultimately, every participant in the digital advertising ecosystem first needs to monitor and govern their own website in an attempt to close loopholes that facilitate data leakage before pointing fingers at others.

Malvertising: Is this the beginning of the end?

TAG Malware Scanning Guidelines

Decoding TAG malware scanning guidelines for tactical use 

Note: View webinar at https://www.themediatrust.com/videos.php 

The advertising industry’s crackdown on malvertising has begun. TAG’s recently-released malware scanning guidelines clearly state that every player in the digital advertising ecosystem has a role in deterring, detecting and removing malware.

However, these guidelines need to be translated into action plans. As with many cross-industry initiatives, the TAG guidelines serve several different groups across the digital ecosystem while also introducing security concepts to advertising/marketing professionals. The use of words such as interdict, cloaking, checksum, and eval() may baffle many ad ops professionals, just as defining “creative” as a payload may baffle security teams.

The good news is that The Media Trust’s existing malware clients are already 100% compliant with the guidelines. Other ad ops teams at agencies, ad tech providers, and publishers, will need to translate the best practices into tactical actions in order to bring their operations into compliance.

What is clear: Scanning is in your future

Every entity that touches or contributes code to the serving of an ad plays a role in malware deterrence – this much is clear. Agencies, ad tech providers and publishers alike are, therefore, expected to proactively and repeatedly review their ads for malware.

Specifically, the guidelines state that:

  1.    Ads and their associated landing pages must be scanned for malware
  2.    Scanning should be performed before an ad is viewed by the end consumer
  3.    If initial scanning detects malware, then the ad must be rescanned until malware-free
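Read as a workflow, these three requirements amount to a scan-before-serve loop. The sketch below is illustrative only; `scan_for_malware` is a hypothetical stand-in for whatever scanning service an ad ops team actually uses, and here it simply consults a set of known-bad creative IDs.

```python
def scan_for_malware(creative_id, known_bad):
    """Pretend scanner: returns True when malware is detected."""
    return creative_id in known_bad

def clear_for_serving(creative_id, known_bad, remediate, max_rescans=5):
    """Scan before the ad reaches a consumer; rescan until malware-free."""
    for _ in range(max_rescans):
        if not scan_for_malware(creative_id, known_bad):
            return True                        # malware-free: OK to serve
        remediate(creative_id, known_bad)      # e.g. strip the bad payload
    return False                               # still dirty: do not serve

# Usage: creative "cr-42" is flagged once, remediated, then cleared.
bad = {"cr-42"}
cleared = clear_for_serving("cr-42", bad, lambda cid, kb: kb.discard(cid))
```

The key point the guidelines make is the loop itself: a flagged ad is not served once flagged, and scanning repeats until the creative comes back clean.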

Read between the lines: Reap what you sow

The complexities of the digital ecosystem make it almost impossible to explicitly state what each player in the advertising ecosystem should do. Typically, the amount of scanning required is directly proportional to the risk of serving a malware-infected ad or directing to a malware-infected landing page. While there are some directional tips, the guidelines also present a few abstract recommendations:

  • Scanning frequency

Ad formats, demand types, consumer reach and access to the ad as it traverses from advertiser to publisher all affect the recommended scanning frequency.

For instance, a publisher running a campaign of hosted, static ads targeting a small number of impressions does not have as robust a scanning requirement as a publisher running rich media campaigns served programmatically. An ad contaminated by malware needs to be rescanned more frequently than one that doesn’t set off alarm bells during the initial scan, and an ad that changes mid-flight (modifying targeting, increasing the number of impressions, introducing rich media) requires additional scanning.

  • Proof of scanning

Claiming an ad is scanned is not sufficient. As a best practice, all parties should document proof of scanning, and this proof should contain the creative ID, tag specifications, dates of initial and subsequent scans, and scanning results. In addition, each party in the advertising value chain should establish a point of contact for reporting malware and communicate it to their upstream and downstream partners.

  • Know your partner

A critical factor that informs rescanning cadence is the provider’s confidence in their upstream partner(s). A long-standing relationship with a reputable, responsive partner implies a reduced likelihood of malicious activity, as opposed to a newly formed partnership with a one-man shop based in a foreign country. The provider should also track and document whether their partners adhere to the scanning guidelines.
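As a rough illustration of the proof-of-scanning best practice, a scan record might carry the fields the guidelines call out: creative ID, tag specifications, scan dates and results. The structure and field names below are assumptions for the sketch, not a TAG-defined schema.

```python
# Minimal proof-of-scanning record; field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScanRecord:
    creative_id: str
    tag_spec: str                                 # e.g. "300x250 HTML5"
    results: list = field(default_factory=list)   # (timestamp, verdict) pairs

    def log_scan(self, verdict):
        """Append one scan outcome with a UTC timestamp."""
        self.results.append((datetime.now(timezone.utc), verdict))

    @property
    def initial_scan(self):
        """The initial scan, or None if the ad was never scanned."""
        return self.results[0] if self.results else None

record = ScanRecord("cr-42", "300x250 HTML5")
record.log_scan("clean")   # initial scan
record.log_scan("clean")   # subsequent rescan
```

A record like this, shared alongside the malware point of contact, gives upstream and downstream partners something concrete to audit rather than a bare claim of “it was scanned.”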

Look ahead: This is just the beginning

The guidelines clearly set the stage for optimizing ad quality and its resulting effect on the user experience, with an emphasis on security. A 100% malware-free advertising experience can’t be guaranteed, but everyone agrees it can be greatly improved. Future steps will undoubtedly address data privacy, ad behavior and more.

While these guidelines provide the impetus to tackle malvertising, it’s a safe bet that industry leaders will push to make them a standard, a la the TAG Certified Against Fraud and Certified Against Piracy programs. And in order to standardize, a certification and evaluation or audit process will be needed.

Stay tuned.

Learn more
The Media Trust hosted three informative webinars presenting specific direction to publishers, ad tech providers and agency/buyers. To view, visit https://www.themediatrust.com/videos.php

You know nothing, CISO

Shadow IT can stab you in the back


Disclaimer: This blog post contains strong references to Game of Thrones. Memes courtesy of ImgFlip. 

You, CISO, are a brave warrior who fights unknown threats from all corners of the digital world. You, CISO, try with all your might to manage an increasingly complex digital ecosystem of malware, exploit kits, Trojans, unwanted toolbars, annoying redirects and more. You, CISO, wrangle a shortage of security professionals and an overload of security solutions. You, CISO, have lost sleep over protecting your enterprise network and endpoints. You, CISO, are aware of the lurking threat of shadow IT. But you, CISO, know nothing until you understand that your own corporate website is one of the biggest contributors to shadow IT.

Beware of your Corporate Website

Did you know that you are likely monitoring only around 20-25% of the code executing on your website? The remaining 75-80% is provided by third parties who operate outside the IT infrastructure. You may think a web application firewall (WAF) and the various other web app security tools, like Dynamic Application Security Testing (DAST), Static Application Security Testing (SAST), and Runtime Application Self-Protection (RASP), adequately protect your website. News flash: these tools only monitor owned and operated code. In fact, they can’t even properly see third-party code, as it’s triggered by user profiles. There is a dearth of security solutions that can emulate a true end-user experience to detect threats.

Think about it: if there are so many traditional website security solutions available, why do websites still get compromised? Third-party code presents a multitude of opportunities for malware to enter your website and attack your website visitors, customers and employees alike, with the ultimate goal of compromising endpoints and the enterprise network.


Avoid the Shame!

Practical CISOs will keep these hard facts in mind:

1.  There is no true king

You could argue that marketing is the rightful king to the Iron Throne of your corporate website since it is responsible for the UX, messaging, branding and so forth. But the enterprise website requires so much more. Every department has a stake: IT, legal, ad ops (if you have an advertising-supported website), security and finance, to name a few. Each department’s differing objectives may lead to adoption of unsanctioned programs, plugins and widgets to meet their needs. As a result, the website’s third-party code operates outside the purview of IT and security. Further complicating matters, there is no one department or person to be accountable when the website is compromised. This makes it hard for security teams to detect a compromise via third-party code and easier for malware to evade traditional security tools. In the absence of ownership, the CISO is blamed.

2.  Malware is getting more evil

Bad actors continue to hone their malware delivery techniques. They use malicious code to fingerprint or steal information to develop a device profile which can be used to evade detection by security research systems and networks. Furthermore, web-based malware can also remain benign in a sandbox environment or be dormant until triggered to become overt at a later date.

3. You’re afraid of everyone’s website…but your own

You know the perils of the internet and have adopted various strategies to protect your network from the evils of the world wide web. From blacklisting and whitelisting to firewall monitoring and ad blocking, these defenses help guard against intrusion. But what about your own website?

As previously stated, everyday web-enablement programs such as a video platform or content recommendation engine operate outside the IT infrastructure. The more dynamic and function-rich your website is, the more you are at risk of a breach through third-party vendor code. Below is a far-from-exhaustive list of apps and programs contributing third-party code:

  • RSS Feed
  • News Feed
  • Third-Party Partner Widgets
  • Third-Party Content Management System (CMS) Integrations
  • Third-Party Digital Asset Management (DAM) Integrations
  • Third-Party E-Commerce Platforms
  • Image Submission Sites
  • Ad Tags
  • Video Hosting Platform
  • Crowd Sharing Functionality
  • File Sharing Functionality
  • Customer Authentication Platforms
  • Third-Party Software Development Kits (SDKs)
  • Social Media Connectors
  • Marketing Software
  • Visitor Tracking Software

Stick ‘em with the pointy end

Yes, we know, what lies beyond the realm of your security team’s watchful eye is truly scary. But now that you know that your website’s third-party vendor code is a major contributor of shadow IT, you can more effectively address website security within your overall IT governance framework.

 

To mock or not to mock?

Avoiding fraudulent advertising campaign verification is critical for publishers


That is the question frequently asked by media publishers trying to meet advertiser demands related to digital campaign success. The industry’s intense focus on viewability and transparency issues associated with ad fraud hijacks the limelight from another vital area of interest for advertisers: Are campaigns actually running as contracted?

What the advertiser wants, the advertiser gets

To justify the millions (and millions!) of dollars spent promoting products, advertisers rightfully demand proof that their campaigns execute as promised.

From expected ad rendering on the page to accurate targeting by geography and behavior profiles, advertisers want to know that the right ad has been served in the right way in the right location on the right page to the right demographic. In fact, when considering the average spend of a large-scale national campaign flight, many advertisers will assert they deserve to know their campaign is performing as promised.

Authenticated ad inventory yields benefits

The advertising ecosystem is a dynamic environment processing millions of ads covering billions in spend at any one time. Considering that 5% of display and mobile ads are served incorrectly at launch and countless more break during flight, publishers need to actively monitor and protect their ad-generated revenue channels.[i]

Authenticated ad inventory helps publishers secure ad revenue by avoiding pre-planned delivery overages that compensate for anticipated discrepancies. It also reduces the frequency of misfiring campaigns, minimizing instances of “make good” campaigns.

Ad verification is more than good looks

Reputable publishers recognize the value of their high-quality inventory and demonstrate it by providing proof of ad delivery according to established terms. This is a complicated prospect in an age of large-scale campaigns incorporating ads of varying formats (i.e., HTML5, pre/mid/post-roll video, native, etc.) through multiple platforms (i.e., display, tablet, smartphone, gaming consoles, etc.) across increasingly granular targeting segments.

A Photoshopped “mock-up” or full-page capture of the ad on a screen is a start, but it isn’t enough. Presenting a “mock-up” of how an ad should look could be considered fraudulent as it’s not a true representation of how an ad performs across all formats, devices and geographies. In fact, several industries (Tier 2 automotive, pharmaceutical, etc.) and countries (especially those in Latin America) regulate advertising-based billing processes and require third-party verified screenshots upon invoice presentation.

Beyond the visual of “how” an ad looks on a device, publishers must prove that each ad is delivered as contracted with the advertiser. Continuous monitoring of campaigns at launch and throughout flight will quickly detect errors associated with targeting, creative and device-specific issues that impede optimal campaign execution.

Authentication of possibly hundreds of ad combinations—by size, format, device and geography—is used by publishers to substantiate inventory value and by advertisers to audit and measure campaign ROI.
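A quick way to see why manual verification doesn’t scale: even a modest matrix of sizes, formats, devices and geographies multiplies into a hundred-plus combinations, each needing its own live capture. The specific values below are illustrative, not from any real campaign.

```python
# Illustrative campaign matrix: the cross product is what a
# verification process actually has to cover.
from itertools import product

sizes = ["300x250", "728x90", "320x50"]
formats = ["HTML5", "native", "pre-roll video"]
devices = ["desktop", "tablet", "smartphone"]
geos = ["US", "UK", "DE", "BR"]

combos = list(product(sizes, formats, devices, geos))
# 3 sizes * 3 formats * 3 devices * 4 geos = 108 verification targets
```

Add user profiles, browsers and paywall states and the count climbs further, which is why the text argues for automation.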

Consider this

To verify accurate ad placement, execution and targeting, a publisher must consider these five factors:

1.    Legitimacy: Screenshots of ads in a live environment truthfully demonstrate that an ad is delivered to the right target. A “mock-up” or “test page” may display how an ad appears on a site, but in reality it provides a false sense of security about how the ad is actually executing. It also assumes the ad will render the same across all devices, operating systems, formats and geographies.

2.    Accuracy: Mock-ups can’t prove ad placement as many ad units only occur behind paywalls or require an IP address in order to serve the correct messaging to the individual user.

3.    Automation: Imagine scaling the manual process of verifying ads across the overwhelming number of devices, browsers, user profiles, formats, sizes and geo-locations. Without automation, the task is almost impossible. Leverage technology to streamline the process.

4.    Costs: Carefully consider the total cost of ownership when deciding between an in-house or outsourced process. While in-house resources are easier to control, it is difficult to secure funding and keep the staff engaged. On the flip side, outsourcing requires integration, training, probable coordination with targeting vendors, and continuous oversight, which could ultimately be more costly than anticipated, not to mention the complications of managing a remote team if a non-local entity is selected.

5.    Quality Assurance: Reliance on mock-up designs to certify campaign execution will not catch errors that occur at launch or throughout the campaign flight.

Ad verification is a complex, yet critical endeavor for publishers looking to highlight inventory value. Don’t mock it.

 

[i] The Media Trust analysis of millions of ad campaigns verified over the course of 10 years.

Is Your Threat Intelligence Certified Organic?


7 questions to ask before choosing a web-based threat intelligence feed.

It should come as no surprise that CISOs are under ever-increasing pressure, with many facing the prospect of losing their jobs if they cannot strengthen the enterprise security posture before breaches occur. And occur they will. Consider these figures: recent studies report that web-based attacks are among the most common digital attacks experienced by the average enterprise, costing $96,000 and requiring 27 days to resolve a single incident. Furthermore, there is a clear positive correlation between the size of the organization and the cost of a cyber attack, and between the number of days taken to resolve an attack and its cost: the larger the organization, or the longer the remediation, the higher the cost.

Enter, Threat Intelligence

CISOs increasingly embrace threat intelligence as a means to enhance their digital security posture. In the past three years, organizations have significantly raised their spending on threat intelligence, allocating almost 10% of their IT security budgets to it, and this number is expected to grow rapidly through 2018. The budget appears to be well spent, as organizations report enhanced detection of cyber attacks, catching an average of 35 attacks that previously eluded traditional defenses.

Not all threat intel feeds are created equal

Sure, threat intelligence feeds are increasingly accepted and adopted as an essential element of enterprise security strategy. In fact, 80 percent of breached companies wish they had invested in threat intelligence. But even as the use of third-party threat intelligence feeds increases, IT/security teams are realizing that not all feeds are created equal.

To begin with, there are several types of threat intelligence feeds: feeds covering web-based threats, feeds covering email threats, and feeds that scan the dark web, among others. While not discounting the value of the other types, CISOs need to understand why web-based threat intelligence is first among equals. Web-based malware targets the enterprise network and endpoints through employees’ day-to-day internet use, access that is critical to their office functions. A truly valuable threat intelligence feed will help CISOs achieve their end goal of keeping the organization safe and blocking confirmed bad actors.


Checklist for Choosing the Right Threat Intelligence

Ask these seven questions to determine if the web-based threat intelligence feed(s) you choose are “certified organic” enough to provide tangible goodness and value to the health of your enterprise security posture:

1.    Is the data original source?

Our previous post, Your Threat Intelligence Isn’t Working, discussed the pitfalls of using compiled third-party sources in a threat intel feed—more data isn’t necessarily good data! The time-consuming process of managing duplicates and false positives cripples the performance of most information security teams to the point that many alerts are ignored. Protect cherished resources—budget and time—by choosing an original source threat intelligence feed.

2.    How is the data collected?

While original source threat intelligence minimizes false positives and duplicates, how the data is collected maximizes the tangible value of the feed. Web-based malware is typically delivered through mainstream, heavily-trafficked websites, either via ads or via third-party code such as data management platforms, content management systems, customer identification engines, video players and more. Hence, the threat intelligence feed needs to source its data by replicating typical website visitors. This means continuously (24/7/365) scanning the digital ecosystem across multiple geographies, browsers, devices, operating systems and consumer behaviors, using real user profiles. Unless the engine that gathers the threat intelligence behaves like real internet users (who are the targets of web-based malware), the quality of the “internet threat” data is questionable at best.
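The “replicate typical website visitors” idea can be sketched as scans issued under rotating user profiles rather than from a single data-center vantage point. The profiles and field names below are illustrative assumptions, not any vendor’s actual schema.

```python
# Sketch: continuously re-scan targets under rotating, realistic
# user profiles. Profile contents are made up for illustration.
import itertools

PROFILES = [
    {"geo": "US", "device": "iPhone", "browser": "Safari", "persona": "shopper"},
    {"geo": "DE", "device": "Windows PC", "browser": "Chrome", "persona": "news reader"},
    {"geo": "BR", "device": "Android", "browser": "Chrome", "persona": "traveler"},
]

def continuous_scan(targets, rounds, scan):
    """Each pass scans every target under a different profile."""
    profile_cycle = itertools.cycle(PROFILES)
    for _ in range(rounds):
        for target in targets:
            scan(target, next(profile_cycle))

seen = []
continuous_scan(["site-a", "site-b"], rounds=3,
                scan=lambda t, p: seen.append((t, p["geo"])))
```

The rotation matters because geo- and device-targeted malware only fires for the visitors it was aimed at; a single fixed vantage point simply never sees it.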

3.     Is the threat intelligence dynamic?

Threat intelligence should be a living (frequently updated), constantly active data source. The data in the feed needs to adapt to reflect the rapidly transforming malware landscape. The engine behind the feed should both track and detect malware in real time while also accounting for changing patterns of attack. Even the algorithms driving the machine learning need to be dynamic and continuously reviewed.

4.     Does it prevent AND detect threats?

As the adage goes, an ounce of prevention is worth a pound of cure, and this holds true in the cyber security space. However, reliance on prevention isn’t practical or realistic. Prevention boils down to deployed policies, products, and processes which help curtail the odds of an attack based on known and confirmed threats. What about unknown or yet to be confirmed threats?

Threat hunting is becoming a crucial element of the security posture. It refers to detection capabilities stemming from a combination of machine-generated intel and human analysis to actively mine for suspicious threat vectors. Does your threat intelligence account for both indicators of compromise (IOCs) and patterns of attack (POAs)? The goal of threat hunting is to reduce the dwell time of threats and the intensity of potential damage. The threat intelligence feed should allow you to act on threat patterns before they become overt.

5.     How is the data verified?

Just as the automation or machine learning behind the threat intelligence feed should simulate a real user during data collection, human intervention is important for data verification. Without the element of human analysis, data accuracy should be questioned; otherwise, you run the risk of increased false positives.

6.     Is the information actionable?

Malware is malware, and by definition it is “bad”. You do not need an extensive payload analysis of threat data. You do, however, need information about the offending hosts and domains, so that compromised content can be blocked, either manually or via a Threat Intelligence Platform (TIP). The granularity of the data can also save CISOs from the politics of whitelisting and blacklisting websites. As a bonus, real-time intelligence will enable you to unblock content when it is no longer compromised.
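
"Actionable" here reduces to a simple reconciliation: block what the feed newly flags, and unblock what it no longer lists as compromised. A minimal sketch, assuming both the enforcement blocklist and the feed snapshot are sets of hostnames:

```python
def apply_feed(blocklist, feed_snapshot):
    """Reconcile the enforcement blocklist with the latest feed snapshot:
    block newly flagged hosts, and unblock hosts the feed no longer
    lists as compromised (the real-time unblocking case above)."""
    to_block = feed_snapshot - blocklist       # new threats to enforce
    to_unblock = blocklist - feed_snapshot     # remediated, safe to restore
    new_blocklist = (blocklist | to_block) - to_unblock
    return new_blocklist, to_block, to_unblock
```

The `to_unblock` set is what host-level granularity buys you: a remediated publisher page comes back automatically, instead of staying on a blanket site-wide blacklist.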

7.     Does it offer network-level protection?

While CISOs still debate over an optimal endpoint security solution, web-based threats attack at the enterprise network level. Frankly, stopping malware at the endpoint is too late! The threat intelligence you choose must offer network-level protection and deter web-based threats from propagating to endpoints in the first place.
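
One common form of network-level enforcement is a DNS-layer gate: flagged domains are simply never resolved, so the payload never reaches an endpoint. The sketch below assumes a feed-driven domain blocklist; the domain names are hypothetical.

```python
# Hypothetical feed-driven blocklist applied at the network edge.
BLOCKED_DOMAINS = {"malvertising.example.com"}

def resolve(domain, upstream):
    """Network-level gate: refuse to resolve feed-flagged domains (and
    their subdomains) so the payload never reaches an endpoint."""
    if domain in BLOCKED_DOMAINS or any(
        domain.endswith("." + d) for d in BLOCKED_DOMAINS
    ):
        return None  # sinkholed: no answer returned to the client
    return upstream(domain)  # clean domains resolve normally
```

Blocking one domain here protects every endpoint behind the resolver at once, which is the argument for stopping web-based threats before the endpoint rather than at it.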

Your Threat Intelligence Isn’t Working

False positives undermine your security investments. 

The rapid adoption of threat intelligence data by enterprises signals an increased emphasis on preventing targeted malware attacks. While few question the strategy fueling this boom, the quality of that intelligence is debatable. Recent news of organizations suffering brand damage due to false positives in their “compiled” threat feeds puts the quality of numerous threat intelligence feeds under scrutiny.

In simple terms, a compiled threat intelligence feed aggregates data from various open sources and may also include observed data from the security vendor. The pitfalls of these multiple dependencies are many, the most debilitating of which is the quality of this so-called “intelligence.” In most cases, a compiled threat intelligence feed is a minefield of false positives, false negatives and unverified data.

To make your digital threat intelligence work for you, consider these factors:

Go for original source

Compiled isn’t conclusive

Many vendors use euphemisms like “comprehensive” or “crowdsourced” threat intelligence to characterize the value of their data. These euphemisms typically describe data compiled from multiple sources. Very few (most likely none) reveal that this aggregated data hasn’t been thoroughly vetted for accuracy – a process that requires significant man-hours given the volume of data within the feed. In fact, the time needed to properly assess the data would delay an enterprise’s receipt of and action on the intelligence. Needless to say, this time lag is all it takes for cyber criminals to do serious damage.
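
The aggregation pitfall can be seen in a few lines. The sketch below naively merges several open-source feeds, each mapping an indicator to a verdict; conflicting verdicts are recorded but never reconciled, which is exactly the unvetted "compiled" problem described above. Feed contents are invented for illustration.

```python
def merge_feeds(feeds):
    """Naive aggregation of multiple open-source feeds. Duplicates
    collapse silently and conflicting verdicts are flagged but not
    resolved -- no human review ever adjudicates them."""
    merged, conflicts = {}, set()
    for feed in feeds:
        for indicator, verdict in feed.items():
            if indicator in merged and merged[indicator] != verdict:
                conflicts.add(indicator)  # unreconciled disagreement
            merged[indicator] = verdict  # last writer wins, unverified
    return merged, conflicts
```

Whichever feed happens to be processed last wins the verdict, so the "comprehensive" result can silently mark a malicious host benign, or vice versa.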

Avoid Costly Cleanups
False positives can be damning

The inherent inaccuracies in a compiled threat intelligence feed can lead to false positives and duplicate threat alerts. Industry surveys report that around 81% of malware alerts are false positives, with security teams wasting an average of 395 hours a week chasing false negatives and/or false positives.

A critical by-product of false positives is alert fatigue, which induces enterprise security professionals not to react in a timely manner – fatal behavior when an actual breach or violation does occur. In this “boy who cried wolf” scenario, the enterprise is vulnerable from two perspectives. Failure to react to a “positive” alert could expose the entity to malware. On the flip side, reacting to a “false positive” expends countless resources. Either way, the consequences could damage careers, cripple the security posture, and tarnish the enterprise’s image.
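
The resource drain is easy to estimate on the back of an envelope. The sketch below uses the 81% false-positive rate cited above; the alert volume and the 30 minutes of triage per alert are illustrative assumptions, not figures from the article.

```python
def triage_hours_wasted(alerts_per_week, fp_rate=0.81, minutes_per_alert=30):
    """Back-of-envelope analyst hours lost to false positives per week.
    fp_rate uses the 81% figure cited above; minutes_per_alert is an
    illustrative assumption about triage effort."""
    false_alerts = alerts_per_week * fp_rate
    return false_alerts * minutes_per_alert / 60  # convert minutes to hours
```

At 1,000 alerts a week under these assumptions, roughly 405 analyst hours go to chasing noise – the same order of magnitude as the 395 hours a week reported above.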

Focus on patterns, not just appearances
Both IOCs and POAs are important

Another aspect of deciphering the value of threat intelligence is what actually goes on behind the scenes. Most threat intelligence feeds rely on indicators of compromise (IOCs) to determine whether a malware alert is valid or should be marked with “high confidence” in its accuracy. However, what is harder to determine is the actual behavioral pattern of a threat or the method of malware delivery, which is what patterns of attack (POAs) depict. By understanding POAs, high-quality threat intelligence can also detect new threat vectors, allowing enterprises to block suspicious malware before it becomes overt.

The key distinguishing characteristic between IOCs and POAs is that IOCs contain superfluous, easy-to-alter data points that are not specific to the bad actor, whereas POA data points are difficult to mask. To put it in simpler terms, think of a bank robbery. Information describing the appearance of the robber, such as shirt or hair color, could easily be changed so the robber evades detection and is free to commit additional heists. However, more innate information, such as the robber’s gait or voice, would make the individual easier to detect and block from committing the same crime again. These inherent factors, or POAs, are difficult and expensive to alter. Therefore, threat intelligence data should factor in both IOCs and POAs in order to provide a more conclusive picture of a threat and minimize false positives.
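
The bank-robbery analogy maps directly onto matching logic. In the sketch below, the split between cheap-to-rotate IOC fields and costly-to-change POA traits is an illustrative assumption, as are the field names.

```python
# Hypothetical split: IOC fields an attacker rotates cheaply vs. POA
# traits that are expensive to change (the robber's "gait" in the analogy).
MUTABLE_IOC_FIELDS = {"ip", "domain", "file_hash"}
INNATE_POA_TRAITS = {"redirect_depth", "payload_staging", "cloaking_behavior"}

def naive_match(known, observed):
    """IOC-only match: breaks as soon as the attacker swaps a domain,
    IP, or file hash."""
    return all(known.get(k) == observed.get(k) for k in MUTABLE_IOC_FIELDS)

def robust_match(known, observed):
    """POA-based match: requires agreement on innate behavioral traits,
    so rotating an IP or hash neither breaks nor fakes a match."""
    return all(known.get(k) == observed.get(k) for k in INNATE_POA_TRAITS)
```

When the actor rotates only the surface indicators, `naive_match` loses the threat while `robust_match` still recognizes it – which is the practical argument for feeds that carry POA data alongside IOCs.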

Security Buyer Beware

Yes, factors such as real-time data, the number of data points on threat vectors, easy access, and seamless integration with a TIP/SIEM are important in determining the overall quality of a threat data feed. However, inaccurate data and false positives are fundamental flaws in many threat intelligence solutions on the market. By using an original source digital threat intelligence feed vendor, you maximize intel accuracy and minimize the margin for false positives. Choose wisely.