Safari’s updated privacy rules
Apple has been constantly updating its ITP (Intelligent Tracking Prevention) feature to curb the methods adtech companies use to circumvent its tracking restrictions. So far, many, including Google and Facebook, have found ways around ITP (think link decoration).
But last week, Safari threw yet another punch – the “WebKit Tracking Prevention Policy”. As the name implies, it’s a new tracking prevention policy for browsers using the WebKit engine – i.e., Safari.
The policy is similar to Mozilla’s, and it addresses all the tracking methods on the open web, including the ones we need for measurement and attribution.
“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert). These goals apply to all types of tracking listed above, as well as tracking techniques currently unknown to us.”
So, how is ITP going to function?
Simple. Safari will block any kind of user tracking method or technology. When it can’t, it will try to
– limit the capability of the technique. A few examples: reducing the time window, restricting the availability of user data points, etc.
– get “informed consent” from the users if Safari can’t prevent the technique without undue user harm.
Remember the ITP 2.0 update that required users to provide consent for Facebook.com to track them while browsing a site with a Facebook like/comment button? The same will happen for literally any vendor trying to identify users across the web.
The most interesting section of the policy is about circumvention. Safari made it clear:
“If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”
That is, those who try to bypass ITP could be flagged specifically – without any notice from Safari.
As you might have guessed, Safari will from now on prevent any tracking technique – whether it is harmful to the user or not – and publishers who rely on personalized advertising are included. The need to build first-party data and cookieless solutions is becoming apparent. Prioritize users and their experience over short-term gains.
Identity-based user consent
As cookies face threats from browsers, how can a publisher obtain and store user consent and pass it along the supply chain? Most importantly, how can you develop a consistent consent mechanism?
We aren’t the only ones with these questions. In fact, we might even have an answer.
Meet authenticated consent.
Instead of relying on cookie-based pop-ups to ask for the right* consent, publishers that ask for user registration can collect consent and associate it with a single user – with the help of a user ID.
*The right consent here refers to the message and the process publishers have to follow to pursue consent. For instance, a user from the EU will be shown a different message than a user from California (GDPR vs. CCPA).
The idea is to store a user’s consent and sync it across devices and platforms so that the publisher can deliver the consented experience – without asking for data and advertising preferences every time. If you have a log-in system in place to store user preferences, you might as well save the consent preferences too – at least, that’s the logic behind the idea.
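To make the idea concrete, here is a minimal sketch of an authenticated-consent store, keyed by user ID instead of a per-device cookie. All class and field names here are hypothetical – a real implementation would sit behind a CMP and persist records server-side, not in memory.

```python
# Hypothetical sketch: consent keyed by user ID, not by device cookie.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    framework: str  # e.g. "GDPR" or "CCPA", chosen from the user's region
    purposes: dict = field(default_factory=dict)  # purpose -> True/False

class ConsentStore:
    """Stores one consent record per user ID, shared across devices."""
    def __init__(self):
        self._records = {}

    def framework_for(self, region: str) -> str:
        # EU users see a GDPR prompt; Californians see a CCPA notice.
        return "GDPR" if region == "EU" else "CCPA" if region == "CA" else "NONE"

    def save(self, user_id: str, region: str, purposes: dict) -> None:
        self._records[user_id] = ConsentRecord(self.framework_for(region), purposes)

    def lookup(self, user_id: str):
        # Any device the user logs in from gets the same stored record,
        # so they are not re-prompted in every browser or app.
        return self._records.get(user_id)

store = ConsentStore()
store.save("reader-42", "EU", {"personalized_ads": True})
record = store.lookup("reader-42")  # same record on any logged-in device
```

The point of the sketch: once consent hangs off a stable user ID, the log-in becomes the sync mechanism, and the CMP can keep distributing that consent downstream as usual.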
Note that authenticated consent wouldn’t change the way consent is distributed. Sourcepoint offers an authenticated consent tool, and it can be integrated with the company’s consent management platform. The CMP will work as usual and pass the user’s consent to the vendors.
One of the benefits is an increase in consented impressions and, thus, personalized ads. While it sounds like an easy way to solve the issue, the tool doesn’t support any other CMPs, and its scalability is still in question. Most publishers don’t have a log-in or any similar authentication mechanism in place to take advantage of this technique. We’ll keep you informed of updates.
Unified ID solution shows promising results
We’ve been discussing The Trade Desk’s Unified ID solution for a while now. In case you don’t know, we wrote about it last year in our roundup:
“As with every ID-based solution, Unified ID aims to eliminate the cookie syncing process and drive the industry towards a strong log in based solution. The solution is open to all the vendors and publishers as per the product page and it even listed some promising numbers from OpenX, Lotame, etc.”
This May, Prebid, the open-source header bidding framework, included Unified ID as one of its identity modules. PubMatic, a sell-side vendor, integrated Unified ID at the beginning of 2019, and the results are in.
– Match rate is close to 100% when cookies are included.
– Doubled the probability of a bid from a buy-side partner.
– Bids and CPMs from smaller DSPs that have also synced with Unified ID have increased (one of the direct benefits of an ID solution).
“We’ve been involved with those because we think it’s a problem worth solving as an industry. The objective has always been to go from many IDs to some IDs, not just to one.”
– Tim Sims, The Trade Desk’s SVP of inventory partnerships.
We don’t have to emphasize the unsteady state of cookies (especially tracking cookies) today. As a publisher, you don’t have to adopt an ID unless you set up and implement header bidding yourself (think Prebid). As the industry puts it, we need a couple of ID solutions, and considering its head start, Unified ID can very well be one of them.
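As a rough illustration of why a shared ID lifts match rates (the first bullet above), compare pairwise cookie syncing, which loses users at every hop, with a single deterministic ID lookup. The rates below are hypothetical, not figures from the PubMatic report.

```python
# Illustrative only: hypothetical per-hop rates, not reported numbers.
def pairwise_match_rate(per_hop_rate: float, hops: int) -> float:
    """Cookie syncing compounds losses across every exchange-to-DSP hop."""
    return per_hop_rate ** hops

# Two sync hops at 70% each vs. one shared-ID lookup at 95%.
cookie_sync = pairwise_match_rate(0.70, hops=2)  # ≈ 0.49
shared_id = 0.95                                 # one lookup, no re-syncing

print(f"cookie syncing: {cookie_sync:.0%}, shared ID: {shared_id:.0%}")
```

Because every party reads the same identifier, there is no per-partner sync step to fail, which is why smaller DSPs on the shared ID see more bids.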
Analyze, Iterate, and Win
We admire publishers who don’t wait for buyers to set the norms and instead push the sell side toward quality. Digiday published how The Times of London capitalized on page-level analysis and first-party data to increase its subscriptions and retention.
We will focus more on the method here.
The publisher wanted to categorize articles based on several attributes, including content tone, headline type, article format, and geography. So, it used eight freelancers and tech to tag 1,000 articles per section from the previous 17 months.
Every article had 16 different pieces of metadata (tags), which were plotted against engagement metrics such as page views, time on page, comments, saves, shares, and conversions (when a user becomes a registered user or a subscriber).
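The tag-versus-engagement analysis described above can be sketched in a few lines: sum each metric per tag, then rank tags. The field names and numbers below are made up for illustration, not the Times’ data.

```python
# Hypothetical data: aggregate engagement metrics by article tag.
from collections import defaultdict

articles = [
    {"tags": ["feature", "analysis"], "page_views": 1200, "conversions": 8},
    {"tags": ["news"],                "page_views": 300,  "conversions": 1},
    {"tags": ["feature"],             "page_views": 900,  "conversions": 5},
]

# Sum each engagement metric per tag.
by_tag = defaultdict(lambda: {"page_views": 0, "conversions": 0})
for article in articles:
    for tag in article["tags"]:
        by_tag[tag]["page_views"] += article["page_views"]
        by_tag[tag]["conversions"] += article["conversions"]

# Rank tags by conversions to see which attributes drive subscriptions.
ranked = sorted(by_tag.items(), key=lambda kv: kv[1]["conversions"], reverse=True)
```

With 16 tags per article and multiple metrics, the same aggregation surfaces which content attributes correlate with conversions, which is what fed the publication-wide strategy.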
This information was used to determine the entire publication’s content strategy. And, of course, it worked. Besides helping the Times cut down on underperforming content, it improved the content published on the site. From dwell time to subscribers to comments, everything increased significantly.
“There are broad-brush strokes — like ‘features do well’ — that don’t do anything for the newsroom. This was our attempt to get actual data and findings for people who are doing the job and the wider company.”
– Taneth Evans, head of audience at the publisher.
This is yet another example of a publisher spending time to understand its audience and then making the right decisions. In fact, the Times reduced its regular news output (which isn’t something you see from similar media companies) to deliver what works for its readers. All in all, it’s a shift from quantity toward quality. If you haven’t yet, it’s time to focus on quality.
Moments that matter
Omnicom Jumps On LinkedIn Audience API For Platform Data It Can Actually Use – AdExchanger.
As programmatic advertisers get smarter, agencies shake up the trading desk model – Digiday.
Black Hat: GDPR privacy law exploited to reveal personal data – BBC News.