Chasing the Revenue Dragon

While chasing the smoky revenue dragon, publishers miss a different monster: Data Leakage.

In October, The Guardian’s Chief Revenue Officer revealed[1] that numerous ad tech providers in the ad supply chain were extracting up to 70% of advertisers’ money without quantifying the value delivered to the brand. Yes, this revenue loss is eye-opening, but it isn’t the only activity affecting your bottom line. Protecting your data assets is critical for maintaining and maximizing revenue, and the inability to control digital audience data within the supply chain is a catalyst for revenue loss. The looming General Data Protection Regulation (GDPR), which takes effect in May 2018, makes the case for data protection that much stronger.

Data: a Publisher’s lifeblood

Every digital publisher intrinsically knows that one of their most valuable assets is their audience data – it drives a publisher’s stickiness with lucrative advertisers, their inventory value, and ultimately their brand image.

Data leakage is the unauthorised transfer of information from one entity to another. In the digital ad ecosystem, data loss traditionally occurred when a brand or marketing agency collected publishers’ audience data and reused it without authorisation. Today, this scenario is much more convoluted due to the volume of players in the digital advertising landscape, causing data loss to steadily permeate the entire digital ad industry.

Publishers lose when they can’t control their valuable consumer data:

1. Depleted market share: With your audience data in their hands, advertisers and ad tech providers can always go to other publications and target the exact same audiences, thereby devaluing your brand.

2. Reduced ad pricing: When advertisers or ad tech providers can purchase your audience elsewhere at a fraction of the cost, demand for your ads decreases and your ad prices fall.

3. Exposure to regulatory penalties: Collection and use of consumer data is a publisher’s prerogative, but protecting this data is a weighty responsibility. Failure to safeguard data gathered from your website leaves a publisher vulnerable to running afoul of government regulations. Saying the penalties under GDPR are severe is an understatement: noncompliance can cost up to 4% of your total global turnover or €20 million, whichever is greater.

4. Reputation loss: Ultimately, data loss and any news of noncompliance could negatively affect consumer trust and brand reputation.

The hands behind data loss

On average, The Media Trust detects at least 10 parties contributing to the execution or delivery of a single digital ad, and this is a conservative figure considering that frequently this number is as high as 30, and at times more than 100, depending on the size of the campaign, type of ad, and so forth. The contributing parties are typically DSPs, SSPs, Ad Exchanges, Trading Desks, DMPs, CDNs and other middlemen who actively participate in the delivery of the ad as it traverses from advertiser to publisher. Any upstream player, including the advertiser or original buyer, has access to a publisher’s proprietary audience data if not monitored for compliance.

The advertising ecosystem isn’t the only offender. The bulk of third-party vendor code that executes on the publisher’s website goes unmonitored, exposing the publisher to excessive and unauthorised data collection. In these cases, a publisher’s own website acts as a sieve leaking audience data into the digital ecosystem.

Ending the chase

Resolving revenue lost from data leakage isn’t an unsolvable conundrum, but one that can be addressed by applying the following:

  1. Data Collection: Get smart about the tools used for assuring clean ads and content. Your solution provider for ad quality should check for ad security, quality, performance and help with data protection. Reducing excessive data collection is the first step in addressing data leakage.
  2. Data Access: With GDPR, EU-US Privacy Shield, and many more such timely regulations and programs, the onus is on the publisher to understand what data activity their upstream partners engage in via advertising. Instead of today’s rampant mistrust, the supply chain must move to accountability for non-compliant behavior.
  3. Governance: Publishers absolutely need to start adopting and enforcing stricter terms and conditions around data collection and data use.

Ultimately, every publisher needs to monitor and govern third-party partners on their website to close loopholes that facilitate data leakage before pointing fingers at others.

You know nothing, CISO

Shadow IT can stab you in the back

CISO work overload

Disclaimer: This blog post contains strong references to Game of Thrones. Memes courtesy of ImgFlip. 

You, CISO, are a brave warrior who fights unknown threats from all corners of the digital world. You, CISO, try with all your might to manage an increasingly complex digital ecosystem of malware, exploit kits, Trojans, unwanted toolbars, annoying redirects and more. You, CISO, wrangle a shortage of security professionals and an overload of security solutions. You, CISO, have lost sleep over protecting your enterprise network and endpoints. You, CISO, are aware of the lurking threat of shadow IT. But you, CISO, know nothing until you understand that your own corporate website is one of the biggest contributors to shadow IT.

Beware of your Corporate Website

Did you know that you are likely monitoring only around 20–25% of the code executing on your website? The remaining 75–80% is provided by third parties who operate outside the IT infrastructure. You may think a web application firewall (WAF) and the various other web app security tools, such as Dynamic Application Security Testing (DAST), Static Application Security Testing (SAST) and Runtime Application Self-Protection (RASP), adequately protect your website. News flash: these tools only monitor owned and operated code. In fact, they can’t even properly see third-party code, much of which is triggered only for specific user profiles. There is a dearth of security solutions that can emulate a true end-user experience to detect these threats.

Think about it: if there are so many traditional website security solutions available, why do websites still get compromised? Third-party code presents a multitude of opportunities for malware to enter your website and attack your website visitors, customers and employees alike, with the end goal of compromising endpoints and the enterprise network.

Shadow IT in the corporate website

Avoid the Shame!

Practical CISOs will keep these hard facts in mind:

1.  There is no true king

You could argue that marketing is the rightful king to the Iron Throne of your corporate website, since it is responsible for the UX, messaging, branding and so forth. But the enterprise website requires much more. Every department has a stake: IT, legal, ad ops (if you have an advertising-supported website), security and finance, to name a few. Each department’s differing objectives may lead to the adoption of unsanctioned programs, plugins and widgets to meet their needs. As a result, the website’s third-party code operates outside the purview of IT and security. Further complicating matters, no single department or person is accountable when the website is compromised. This makes it harder for security teams to detect a compromise via third-party code and easier for malware to evade traditional security tools. In the absence of ownership, the CISO is blamed.

2.  Malware is getting more evil

Bad actors continue to hone their malware delivery techniques. They use malicious code to fingerprint devices and steal information, building device profiles that help them evade detection by security research systems and networks. Furthermore, web-based malware can remain benign in a sandbox environment or lie dormant until triggered to become overt at a later date.

3. You’re afraid of everyone’s website…but your own

You know the perils of the internet and have adopted various strategies to protect your network from the evils of the world wide web. From blacklisting and whitelisting to firewall monitoring and ad blocking, these defenses help guard against intrusion. But what about your own website?

As previously stated, everyday web-enablement programs such as a video platform or content recommendation engine operate outside the IT infrastructure. The more dynamic and function-rich your website is, the more you are at risk of a breach via third-party vendor code. Below is a far from exhaustive list of apps and programs contributing third-party code:

  • RSS Feed
  • News Feed
  • Third-Party Partner Widgets
  • Third-Party Content Management System (CMS) Integrations
  • Third-Party Digital Asset Management (DAM) Integrations
  • Third-Party E-Commerce Platforms
  • Image Submission Sites
  • Ad Tags
  • Video Hosting Platform
  • Crowd Sharing Functionality
  • File Sharing Functionality
  • Customer Authentication Platforms
  • Third-Party Software Development Kits (SDKs)
  • Social Media Connectors
  • Marketing Software
  • Visitor Tracking Software

Stick ‘em with the pointy end

Yes, we know, what lies beyond the realm of your security team’s watchful eye is truly scary. But now that you know that your website’s third-party vendor code is a major contributor of shadow IT, you can more effectively address website security within your overall IT governance framework.

 

Is Your Threat Intelligence Certified Organic?


7 questions to ask before choosing a web-based threat intelligence feed.

It should come as no surprise that CISOs are under ever-increasing pressure, with many facing the prospect of losing their jobs if they cannot strengthen the enterprise security posture before breaches occur. And occur they will. Consider these figures: recent studies report that web-based attacks are among the most common digital attacks experienced by the average enterprise, costing $96,000 and requiring 27 days to resolve a single incident. Furthermore, the cost of a cyber attack correlates positively with both the size of the organization and the number of days taken to resolve the attack: the larger the organization, or the longer the remediation, the higher the cost.

Enter, Threat Intelligence

CISOs increasingly embrace threat intelligence as a means to enhance their digital security posture. In the past three years, organizations have significantly raised their spending on threat intelligence, allocating almost 10% of their IT security budget to it, and this number is expected to grow rapidly through 2018. This budget allocation appears to be well spent: organizations report enhanced detection of cyber attacks, catching an average of 35 attacks that previously eluded traditional defenses.

Not all threat intel feeds are created equal

Sure, threat intelligence feeds are increasingly accepted and adopted as an essential element of the enterprise security strategy. In fact, 80 percent of breached companies wish they had invested in threat intelligence. But even as the use of third-party threat intelligence feeds increases, IT/security teams are realizing that not all feeds are created equal.

To begin with, there are several types of threat intelligence feeds: some cover web-based threats, some cover email threats, and others scan the dark web, among other sources. While not discounting the value of the various types of feeds, CISOs need to understand why web-based threat intelligence is first among equals. Web-based malware targets the enterprise network and endpoints through employees’ day-to-day internet use, internet access that is critical to their office functions. A truly valuable threat intelligence feed will help CISOs achieve their end goal of keeping their organization safe and blocking confirmed bad actors.


Checklist for Choosing the Right Threat Intelligence

Ask these seven questions to determine if the web-based threat intelligence feed(s) you choose are “certified organic” enough to provide tangible goodness and value to the health of your enterprise security posture:

1.    Is the data original source?

Our previous post, Your Threat Intelligence Isn’t Working, discussed the pitfalls of using compiled third-party sources in a threat intel feed—more data isn’t necessarily good data! The time-consuming process of managing duplicates and false positives cripples the performance of most information security teams to the point that many alerts are ignored. Protect cherished resources—budget and time—by choosing an original source threat intelligence feed.

2.    How is the data collected?

While original source threat intelligence minimizes false positives and duplicates, how the data is collected maximizes the tangible value of the feed. Web-based malware is typically delivered through mainstream, heavily-trafficked websites, either via ads or via third-party code such as data management platforms, content management systems, customer identification engines, video players and more. Hence, the threat intelligence feed needs to source its data by replicating typical website visitors. This means continuously (24/7/365) scanning the digital ecosystem across multiple geographies, browsers, devices, operating systems and consumer behaviors, using REAL user profiles. Unless the engine that gathers the threat intelligence behaves like real internet users (who are the targets of web-based malware), the quality of the “internet threat” data is questionable at best.
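To make the idea concrete, here is a minimal sketch of profile rotation in Python. It is our own illustration, not The Media Trust’s scanning engine: the profile fields, user-agent strings and target URL are assumptions, and a production scanner would drive fully rendered browsers from real geographic endpoints rather than send a single HTTP request per profile.

```python
# Minimal sketch: rotate hypothetical "user profiles" while scanning a page.
# Real detection engines emulate full browsers on real devices; this only
# illustrates varying geography/device/browser context per request.
from dataclasses import dataclass
from urllib.request import Request, urlopen

@dataclass
class ScanProfile:
    geo: str              # exit-node region, e.g. "DE"
    device: str           # "desktop" or "mobile"
    user_agent: str       # browser identity presented to the page
    accept_language: str

PROFILES = [
    ScanProfile("US", "desktop", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "en-US"),
    ScanProfile("DE", "mobile",  "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)", "de-DE"),
    ScanProfile("JP", "desktop", "Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0)", "ja-JP"),
]

def scan(url: str) -> None:
    """Fetch the page once per profile and record what each 'visitor' is served."""
    for p in PROFILES:
        req = Request(url, headers={
            "User-Agent": p.user_agent,
            "Accept-Language": p.accept_language,
        })
        with urlopen(req, timeout=10) as resp:
            body = resp.read()
        # A real scanner would render the page, follow ad calls and diff the
        # third-party domains seen per profile; here we only log response size.
        print(f"{p.geo}/{p.device}: {len(body)} bytes")

if __name__ == "__main__":
    scan("https://example.com")   # placeholder target
```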

3.     Is the threat intelligence dynamic?

Threat intelligence should be a living (frequently updated), constantly active data source. The data in the threat intelligence feed needs to adapt to reflect the rapidly transforming malware landscape. The engine behind the feed should both track and detect malware in real time, while also accounting for changing patterns of attack. Even the algorithms driving the machine learning need to be dynamic and continuously reviewed.
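As a rough illustration of what a “living” feed implies for the consumer, the sketch below polls a hypothetical JSON endpoint and diffs each refresh so newly flagged domains are acted on and remediated ones are released. The URL, schema and refresh interval are invented for the example, not a real vendor API.

```python
# Sketch: poll a hypothetical threat feed and diff indicators between refreshes,
# so newly flagged domains are blocked quickly and remediated ones are unblocked.
import json
import time
from urllib.request import urlopen

FEED_URL = "https://feed.example.com/indicators.json"  # hypothetical endpoint
REFRESH_SECONDS = 300

def fetch_indicators() -> set[str]:
    with urlopen(FEED_URL, timeout=10) as resp:
        payload = json.load(resp)
    # Assumed schema: {"indicators": [{"domain": "bad.example", ...}, ...]}
    return {item["domain"] for item in payload.get("indicators", [])}

def watch() -> None:
    known: set[str] = set()
    while True:
        current = fetch_indicators()
        added, removed = current - known, known - current
        if added:
            print("block:", sorted(added))
        if removed:
            print("unblock:", sorted(removed))
        known = current
        time.sleep(REFRESH_SECONDS)

if __name__ == "__main__":
    watch()
```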

4.     Does it prevent AND detect threats?

As the adage goes, an ounce of prevention is worth a pound of cure, and this holds true in the cyber security space. However, reliance on prevention isn’t practical or realistic. Prevention boils down to deployed policies, products, and processes which help curtail the odds of an attack based on known and confirmed threats. What about unknown or yet to be confirmed threats?

Threat hunting is becoming a crucial element of the security posture. It refers to detection capabilities stemming from a combination of machine-generated intel and human analysis to actively mine for suspicious threat vectors. Does your threat intelligence account for both indicators of compromise (IOC) and patterns of attack (POA)? The goal of threat hunting is to reduce the dwell time of threats and the intensity of potential damage. The threat intelligence feed should allow you to act on threat patterns before they become overt.

5.     How is the data verified?

Just as the automation or machine learning behind the threat intelligence feed should simulate a real user for data collection, human intervention is important for data verification. Without the element of human analysis, data accuracy should be questioned, and you run the risk of an increase in false positives.

6.     Is the information actionable?

Malware is malware, and by definition it is “bad”. You do not need an extensive payload analysis of threat data. You do, however, need information about the offending hosts and domains, so that compromised content can be blocked, either manually or via a Threat Intelligence Platform (TIP). The granularity of the data can also save CISOs from the politics of whitelisting and blacklisting websites. As a bonus, real-time intelligence will enable you to unblock content when it is no longer compromised.
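As an illustration of what “actionable” can look like in practice, here is a small Python sketch (our own, under assumed file names and formats) that turns a plain list of confirmed-bad hosts into a hosts-file sinkhole and a proxy domain ACL. Re-running it as the feed updates also handles unblocking remediated hosts.

```python
# Sketch: turn a list of confirmed-bad hosts into artifacts a proxy or DNS
# resolver can consume. The input format and file paths are hypothetical.
from pathlib import Path

def load_bad_hosts(path: str) -> list[str]:
    """One host per line; '#' starts a comment."""
    hosts = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            hosts.append(line.lower())
    return sorted(set(hosts))

def write_hosts_blocklist(hosts: list[str], out_path: str) -> None:
    """Classic hosts-file sinkhole: resolve each bad host to 0.0.0.0."""
    Path(out_path).write_text("\n".join(f"0.0.0.0 {h}" for h in hosts) + "\n")

def write_proxy_acl(hosts: list[str], out_path: str) -> None:
    """Domain ACL for a forward proxy (leading dot also covers subdomains)."""
    Path(out_path).write_text("\n".join(f".{h}" for h in hosts) + "\n")

if __name__ == "__main__":
    bad = load_bad_hosts("confirmed_bad_hosts.txt")   # produced from the feed
    write_hosts_blocklist(bad, "blocklist.hosts")
    write_proxy_acl(bad, "blocklist.proxy.acl")
```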

7.     Does it offer network-level protection?

While CISOs still debate the optimal endpoint security solution, web-based threats attack the enterprise network. Frankly, stopping malware at the endpoint is too late! The threat intelligence you choose must offer network-level protection and deter web-based threats from propagating to endpoints in the first place.

Your Threat Intelligence Isn’t Working

False positives undermine your security investments. 


The rapid adoption of threat intelligence data by enterprises signals an increased emphasis on preventing targeted malware attacks. While few question the strategy fueling this boom, the quality of this intelligence is debatable. Recent news of organizations suffering brand damage due to false positives in their “compiled” threat feeds puts the quality of numerous threat intelligence feeds under scrutiny.

In simple terms, a compiled threat intelligence feed aggregates data from various open sources and may also include observed data from the security vendor. The pitfalls of these multiple dependencies are many, the most debilitating of which is the quality of this so-called “intelligence.” In most cases, a compiled threat intelligence feed is a minefield of false positives, false negatives and unverified data.

To make your digital threat intelligence work for you, consider these factors:

Go for original source

Compiled isn’t conclusive

Many vendors use euphemisms like “comprehensive” or “crowdsourced” threat intelligence to characterize the value of their data. These euphemisms typically describe data compiled from multiple sources. Very few (most likely none) reveal that this aggregated data hasn’t been thoroughly vetted for accuracy, a process that requires significant staff hours given the volume of data within the feed. In fact, the time needed to properly assess the data would delay an enterprise’s receipt of and action on the intelligence. Needless to say, this time lag is all it takes for cyber criminals to do serious damage.

Avoid Costly Cleanups
False positives can be damning

The inherent inaccuracies in a compiled threat intelligence feed lead to false positives and duplicate threat alerts. It is well established that malware alerts generate around 81% false positives, and that organizations waste an average of 395 hours a week chasing false negatives and/or false positives.

A critical by-product of false positives is alert fatigue, which induces enterprise security professionals not to react in a timely manner – fatal behavior when an actual breach or violation does occur. In this “boy who cried wolf” scenario, the enterprise is vulnerable from two perspectives. Failure to react to a genuine alert could expose the entity to malware. On the flip side, reacting to a false positive expends countless resources. Either way, the consequences can damage careers, cripple the security posture, and tarnish the enterprise’s image.

Focus on patterns, not just appearances
Both IOCs and POAs are important

Another aspect of deciphering the value of threat intelligence is what actually goes on behind the scenes. Most threat intelligence feeds factor in indicators of compromise (IOCs) to establish that a malware alert is valid or to mark it with “high confidence” in its accuracy. However, what is harder to determine is the actual behavioral pattern of a threat or the method of malware delivery, which is what patterns of attack (POAs) depict. By understanding POAs, high-quality threat intelligence can also detect new threat vectors, allowing enterprises to block suspicious malware before it becomes overt.

The key distinction between IOCs and POAs is that IOCs contain superfluous, easy-to-alter data points that are not specific to the bad actor, whereas POA data points are difficult to mask. To put it in simpler terms, think of a bank robbery. Information describing the appearance of the robber, such as a shirt or hair color, can easily be changed so the robber evades detection and is free to commit additional heists. However, more innate information, such as the robber’s gait or voice, makes the individual easier to detect and block from committing the same crime again. These inherent factors, the POAs, are difficult and expensive to alter. Therefore, threat intelligence data should factor in both IOCs and POAs in order to provide a more conclusive picture of a threat and minimize false positives.
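To make the distinction concrete, here is a simplified scoring sketch in Python. It is purely illustrative (the indicator values, trait names and weights are invented, not any vendor’s algorithm): an alert that matches known behavioral traits scores higher than one that only matches easily rotated surface indicators.

```python
# Simplified illustration of weighing brittle IOCs against behavioral POAs.
# The indicator lists, trait names and weights are invented for the example.
KNOWN_IOCS = {"brtmedia.net", "203.0.113.7"}          # easy for attackers to rotate
KNOWN_POA_TRAITS = {                                   # harder and costlier to change
    "injects_script_post_load",
    "spoofs_bidder_response",
    "fires_1x1_tracking_iframe",
    "triggers_fake_update_prompt",
}

def score_alert(observed_iocs: set[str], observed_traits: set[str]) -> float:
    ioc_hits = len(observed_iocs & KNOWN_IOCS)
    poa_hits = len(observed_traits & KNOWN_POA_TRAITS)
    # Weight behavior more heavily than surface indicators.
    return ioc_hits * 1.0 + poa_hits * 3.0

alert = {
    "iocs": {"newly-registered-domain.example"},       # no IOC overlap at all
    "traits": {"spoofs_bidder_response", "triggers_fake_update_prompt"},
}
print(score_alert(alert["iocs"], alert["traits"]))     # 6.0 -> behaviorally suspicious
```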

Security Buyer Beware

Yes, factors such as real-time data, number of data points on threat vectors, easy access, and seamless integration with TIP/SIEM are important in determining the overall quality of a threat data feed. However, inaccurate data and false positives are fundamental flaws in many market solutions for threat intelligence. By using an original source digital threat intelligence feed vendor, you maximize the level of intel accuracy and minimize the margin for false positives to occur. Choose wisely.

The Blind Spot in Enterprise Security

Website security is overlooked in most IT governance frameworks. 


Managing a website isn’t as easy as you think. Sure, you test your code and periodically scan web applications but this only addresses your first-party owned code. What about third-party code?

Considering that more than 78% of the code executing on enterprise websites comes from third parties, IT/website operations departments cannot truly control what renders on a visitor’s browser. This inability to identify and authorize vendor activity exposes the enterprise to a host of issues affecting security, data privacy and overall website performance. And your website isn’t immune.
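As a starting point, you can at least enumerate the script sources a page declares. The Python sketch below (our own illustration, with a placeholder URL) splits script tags into first-party and third-party by hostname; it deliberately ignores inline scripts, iframes and anything injected at runtime, which is precisely the code that needs continuous, rendered monitoring.

```python
# Sketch: inventory <script src=...> tags on a page and split them into
# first-party vs third-party by hostname. Crude by design: it misses inline
# scripts, iframes and anything injected later in the browser.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptCollector(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def audit(page_url: str) -> None:
    site_host = urlparse(page_url).hostname or ""
    with urlopen(page_url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = ScriptCollector()
    parser.feed(html)
    third_party = [s for s in parser.sources
                   if urlparse(s).hostname and urlparse(s).hostname != site_host]
    print(f"{len(parser.sources)} script tags with src, "
          f"{len(third_party)} served from third-party hosts:")
    for src in third_party:
        print("  ", urlparse(src).hostname)

if __name__ == "__main__":
    audit("https://example.com")   # placeholder; point at your own property
```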

Masked vulnerability: What you don’t know can hurt you

The fact that the majority of the code executing on an enterprise website is not seen, let alone managed, does not absolve the enterprise from blame should something go wrong—and it does.

Much publicized stories about website compromises and digital defacement point to the embarrassing reality that websites are not easy to secure. But that’s not all.

Digital property owners—websites and mobile apps—are beholden to a series of regulations covering consumer privacy, deceptive advertising and data protection. The U.S. Federal Trade Commission has dramatically stepped up enforcement of deceptive advertising and promotional practices in the digital environment over the past few years and recently signaled interest in litigating against enterprises found to be violating the Children’s Online Privacy Protection Act (COPPA).

Data privacy regulations don’t only apply to minors accessing the website. The recent overturning of the EU-US Safe Harbor agreement and the resulting EU-US Privacy Shield framework call attention to the need to understand what data is collected, shared and stored via enterprise digital operations.

Don’t forget that these third parties directly affect website performance. Problematic code or behavior—too many page requests, large page download size, general latency, etc.—renders a poor experience for the visitor. Potential customers will walk if your website pages take more than two seconds to load, and third parties are usually the culprits.

The problem is that the prevalence of third-party code masks what’s really happening on a public-facing website. This blindness exposes the enterprise to unnecessary risk of regulatory violations, brand damage and loss of revenue.

Seeing through the camouflage

This is a serious issue that many enterprises come to realize a little too late. Third-party vendors provide the interactive and engaging functionality people expect when they visit a website—content recommendation engines, customer identification platforms, social media widgets and video platforms, to name a few. They are also the source of numerous back-end services used to optimize the viewing experience—content delivery networks, marketing management platforms and data analytics.

Clearly, third parties are critical to the digital experience. However, no single individual or department in an organization is responsible for everything that occurs on the site—marketing provides the content and design, IT/web operations makes sure it works, sales/ecommerce drives the traffic, etc. This lack of holistic oversight makes it impossible to hold anyone or any group accountable when things go wrong that can jeopardize the enterprise.

Case in point: can you clearly answer the following questions?

  • How many third-party vendors are executing on your website?
  • How did they get on the site, i.e., were they called by another vendor?
  • Can you identify all activity performed by each vendor?
  • What department authorized and takes ownership of these vendors and their activity?
  • How do you ensure vendor activity complies with your organization’s policies as well as the growing body of government regulations?
  • What is the impact of individual vendor activity on website performance?
  • What recourse do you have for vendors that fail to meet contractually-agreed service level agreements (SLA)?

Questions like these highlight the fact that successfully managing an enterprise website requires a strong command of the collective and individual technologies, processes and vendors used to render the online presence, while simultaneously keeping the IT infrastructure secure and in compliance with company-generated and government-mandated policies regarding data privacy.

Adopting a Website Governance strategy will help you satisfy these requirements.

Take back control

What happens on your website is your responsibility. Don’t you think you should take control and know what’s going on? It’s time you took a proactive approach to security. The Media Trust can shine a light on your entire website operation and alert you to security incidents, privacy violations and performance issues.

 

Did malvertising kill the video star?


Large-scale video malware attack propagates across thousands of sites

Malware purveyors continue to evolve their craft, creatively using video to launch a large-scale malvertising attack late last week. Video has been an uncommon vector for malware, though its use is on the rise. What’s different is the massive reach of this particular attack and its ability to infect all browsers and devices. Much as The Buggles decried video changing the consumption of music, this intelligent malware attack used video to orchestrate mayhem affecting 3,000 websites—many on the Alexa 100. Is this the future?

Charting the infection

The Media Trust team detected a surge in the appearance of the ad-based attack late Thursday night and immediately alerted our client base to the anomalous behavior of the malware-serving domain (brtmedia.net). As the attack unfurled, the team tracked its creative approach to obfuscation. (See the process flow below.)

First, the domain leveraged the advertising ecosystem to drop a .swf file imitating a video player onto thousands of websites. The file identified the website domain—to purposefully avoid detection by many large industry players—and then injected malicious JavaScript into the website’s page. Imitating a bidding script, the “bidder.brtmedia” JavaScript determined the video tag placement size (e.g., 300×250) and called a legitimate VAST file. As the video played, the browser was injected with a 1×1 tracking iframe, which triggered a “fake update” or “Tripbox” popup that deceptively notified the user to update an installed program. (In one observed example, the user was instructed to update their Apple Safari browser.) Unsuspecting users who clicked on the fake update unwittingly downloaded unwanted malware to their device.

The compromise continued unabated for hours, with The Media Trust alerting clients to attempts to infect their websites. This issue was resolved when brtmedia finally ceased delivery, but only after tainting the digital experience for thousands of consumers.


Process flow for video-borne malware infection
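For illustration only, the following Python sketch applies two of the telltale signs from the flow above (a script served from the flagged brtmedia.net domain and a 1×1 tracking iframe) to a static HTML snapshot. It is not The Media Trust’s detection logic; a real monitor inspects the live, rendered DOM under many user profiles.

```python
# Simplified heuristic inspired by the attack chain described above: flag
# scripts loaded from the known-bad host and 1x1 iframes of the kind used to
# trigger the fake-update popup. Works on a static snapshot for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

BAD_HOSTS = {"brtmedia.net"}   # malware-serving domain observed in this attack

class SuspicionScanner(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "script" and a.get("src"):
            host = urlparse(a["src"]).hostname or ""
            if any(host == b or host.endswith("." + b) for b in BAD_HOSTS):
                self.findings.append(f"script from flagged host: {host}")
        if tag == "iframe" and a.get("width") == "1" and a.get("height") == "1":
            self.findings.append(f"1x1 tracking iframe: {a.get('src', '(no src)')}")

snapshot = """
<html><body>
<script src="https://bidder.brtmedia.net/bid.js"></script>
<iframe width="1" height="1" src="https://tracker.example/px"></iframe>
</body></html>
"""
scanner = SuspicionScanner()
scanner.feed(snapshot)
print("\n".join(scanner.findings) or "nothing flagged")
```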

The devil in the artistic details

The use of video as a malware vector is increasing. As demonstrated above, video and other rich media provide avenues for compromising the digital ecosystem, impacting both ads and websites.

The clever design and inclusion of multiple obfuscation attempts allowed this attack to propagate across some of the largest, most heavily-trafficked sites. As The Media Trust clients realized, the best defense against this kind of attack is through continuous monitoring of all ad tags and websites, including mobile and video advertising.

The Skinny on L.E.A.N. Ads


Breaking down newly-announced advertising industry principles.

In October 2015, the Interactive Advertising Bureau (IAB) announced L.E.A.N. Ads (LEAN), an initiative to overhaul and update standard advertising principles. In response to the steady rise in ad blocking, the furor over Flash, the surge in HTML5 creative and the corresponding battery drain on mobile devices, the IAB proposed these principles to guide the development of the next phase of advertising technical standards. The principles aim to address consumer concerns regarding the effect advertisements have on site performance, security and data privacy.

What exactly is LEAN? That’s what The Media Trust clients want to know.

Defining LEAN

In a nutshell, LEAN aims to tighten the guidelines associated with the delivery of advertising content across desktop, mobile and tablet devices. As clients have discovered, The Media Trust’s Media Scanner service already supports the proposed LEAN elements, and more.

L – Light: Limit the ad file size.

This is easier said than done. The actual size of an ad’s creative design can be weighty, and the larger it is the longer it takes to load on a browser. For example, a 10MB design file loading on a 10k page destroys the user experience, especially if viewed on a mobile device.

But the creative file size is not the only contributor to an ad’s disruption of the user experience. Once the initial creative is inserted into an ad tag, it moves through the advertising ecosystem accumulating additional components not critical to the actual rendering of the ad. For the most part, well-intentioned parties append tags to evaluate and optimize the ad’s overall performance and provide a more positive customer experience so that, in the future, the user is served a relevant ad when and how they want to see it.

With the wider adoption of HTML5, site performance will become a bigger challenge as additional scripts—i.e., more verbose HTML, CSS and JavaScript—run, resulting in a more resource-intensive process. Combined, an ad’s design and its technical tag components significantly affect a page’s ability to load efficiently and meet the user’s expectations.

Managing the total ad file size is critical to the user experience—if the ad takes too long to load, the entire experience is at risk, negatively impacting both the advertiser and the publisher. Hundreds of publishers and advertisers already use Media Scanner to set and alert on client-determined policies spanning total creative file size, total download size, number of calls/connections and CPU utilization, among others.
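A minimal sketch of what such a policy check might look like, assuming invented metric names and thresholds rather than Media Scanner’s actual schema:

```python
# Sketch of a client-defined ad quality policy check. The metric names and
# thresholds are illustrative examples only.
from dataclasses import dataclass

@dataclass
class AdPolicy:
    max_creative_kb: int = 200      # initial creative file size
    max_download_kb: int = 1024     # total bytes pulled by the tag
    max_calls: int = 15             # network requests triggered by the ad
    max_cpu_ms: int = 500           # CPU time consumed while rendering

def violations(measured: dict, policy: AdPolicy) -> list[str]:
    checks = [
        ("creative_kb", policy.max_creative_kb, "creative file size"),
        ("download_kb", policy.max_download_kb, "total download size"),
        ("calls", policy.max_calls, "number of calls"),
        ("cpu_ms", policy.max_cpu_ms, "CPU utilization"),
    ]
    return [f"{label}: {measured[key]} exceeds {limit}"
            for key, limit, label in checks if measured.get(key, 0) > limit]

ad = {"creative_kb": 180, "download_kb": 2400, "calls": 42, "cpu_ms": 310}
for v in violations(ad, AdPolicy()):
    print("ALERT:", v)
```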

E – Encrypted: Ensure ad complies with HTTPS standards.

Site security initiatives took the world by storm earlier this summer when Google ad networks moved to HTTPS and the White House directed federal sites to become HTTPS compliant. As outlined in a previous post, to have a truly encrypted site, EACH and EVERY connection made must communicate through HTTPS, including all third-party code, not just advertising. This means other site vendors—content delivery networks, data management platforms, hosting services, analytic tools, product reviews, video platforms, etc.—need to ensure all of their connections are made via HTTPS. Just one break in any call chain will cause the entire site to be unencrypted.
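One simple way to audit the call chain is to export a HAR capture of a page load from browser developer tools and flag any request that was not made over HTTPS. The sketch below does exactly that; the file name is ours, but HAR is a standard browser export format.

```python
# Sketch: read a HAR capture of a page load and list every request that was
# not made over HTTPS. One insecure call is enough to break the encrypted
# experience, so the output should ideally be empty.
import json

def insecure_calls(har_path: str) -> list[str]:
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    urls = [entry["request"]["url"] for entry in har["log"]["entries"]]
    return [u for u in urls if u.lower().startswith("http://")]

if __name__ == "__main__":
    offenders = insecure_calls("pageload.har")   # exported from browser dev tools
    if offenders:
        print(f"{len(offenders)} non-HTTPS calls break the chain:")
        for u in offenders:
            print("  ", u)
    else:
        print("all calls encrypted")
```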

However, encryption is just one element of providing a secure consumer experience. Publishers and ad tech partners need to be continuously on the lookout for compromised ads exposing site visitors to malware. The only way to find these is to continuously scan sites and ads for malware, vulnerable ads and any encryption failures in the call chain.

A – Ad Choice Supported: Comply with industry data collection standards.

Launched in 2011, AdChoices is an industry self-regulation program outlining how advertisers and publishers collect consumer data used for re-targeting, and giving consumers control over the process by allowing them to opt out of data collection activity. While created with good intentions, the program is not well understood by most consumers, with the net effect that many who object to data collection never actually opt out.

Determining an ad’s compliance with AdChoices is relatively straightforward. The tricky part is ensuring compliance with the myriad of state and federal regulations covering healthcare and children. In these instances, compliance isn’t a consumer choice, it is the law.

Data privacy is a serious concern among the general public who want to know the “who,” “what” and “how” of data collection—who is collecting, what is collected and how is it going to be used. Publishers want to know the answers to these basic questions and use Media Scanner to identify, analyze and report on all vendors executing on their digital properties with particular attention paid to the players involved in serving an ad. What publishers frequently discover is that their vendors—and external parties called to help the vendor render a service—perform actions that are not germane to the contracted relationship, such as dropping customer-tracking cookies. Besides giving up valuable customer data, publishers know that these unauthorized actions are contrary to many privacy policies posted on their sites and use Media Scanner to track this violating behavior.
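To illustrate one way of spotting unauthorized cookie activity, the sketch below reads a HAR capture of a page load and flags cookies set by domains outside an authorized vendor list. The allowlist and file name are examples, and this is a simplification of what a continuous scanning service does across many pages and user profiles.

```python
# Sketch: flag cookies set during a page load by domains outside an authorized
# vendor list. Uses the same kind of HAR capture as the HTTPS check above;
# the allowlist entries are placeholders.
import json
from urllib.parse import urlparse

AUTHORIZED = {"example.com", "cdn.example.com", "analytics-vendor.example"}

def unauthorized_cookie_setters(har_path: str) -> set[str]:
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    offenders = set()
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        sets_cookie = any(h["name"].lower() == "set-cookie"
                          for h in entry["response"]["headers"])
        if sets_cookie and host not in AUTHORIZED:
            offenders.add(host)
    return offenders

if __name__ == "__main__":
    for host in sorted(unauthorized_cookie_setters("pageload.har")):
        print("unauthorized cookie set by:", host)
```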

N – Non-invasive: Don’t irritate the site visitor

This vague statement can be broken down into two categories that affect the consumer experience: the technical performance and the visual quality of an ad. Technical aspects of an ad, such as download size and CPU utilization, are represented in the “L” of LEAN described earlier. Visual ad quality refers to how an ad looks and behaves for the user. There’s nothing quite as startling as visiting a page and being greeted by ads that automatically blare audio or play video. And almost everyone is annoyed by ads that shake, blink, expand, push content around or take over the page.

Reputable publishers have policies regarding the presence of these irritating ads on their sites. They use Media Scanner to enforce the policies by alerting on any ad in violation. In addition, publisher clients set policies regarding appropriate content of ads for their audience. While many clients ban adult, alcohol and gambling, some categorize ads by company, industry and brand to ensure the ads don’t conflict with the content. For example, an airline would not want their ads appearing on pages featuring a plane crash; nor would an automotive company appreciate their ads appearing on pages chronicling a safety recall for their vehicle brand.

Why Now?

The mounting backlash from consumers regarding slow site performance, malware exposure and data collection activities generated from digital advertisements must be addressed. Publishers that truly understand the value of a positive customer experience already closely protect it and avoid serving resource-draining, unsecure and intrusive ads. They use The Media Trust to preview ads (and third-party code) before being served and to continuously monitor and detect any policy-breaking activity.

In the end, the best way to protect the consumer experience is for advertisers and publishers to work together, adopt LEAN and enforce compliance with the proposed technical standards.