
How much can Behavioral Targeting Help Online Advertising



  1. How much can Behavioral Targeting Help Online Advertising Harini Sridharan, Stephen Duraski

  2. Introduction/Motivations • Behavioral Targeting (BT) is a technique that uses a user’s web-browsing behavior to determine which ads to display to that user. • BT is used by online advertisers to increase the effectiveness of their campaigns. • As of the writing of this paper, how much BT can truly help online advertising in search engines remains underexplored in academia.

  3. Introduction/Goals • First, we aim to empirically answer whether Behavioral Targeting truly has the ability to help online advertising. • Second, we aim to answer how much BT can help online advertising, using commonly used evaluation metrics. • Finally, we aim to answer which BT strategy works better than others for ads delivery.

  4. Introduction/Methods • We use seven days of ads click-through log data from a commercial search engine, dated June 1st to 7th, 2008. • The dataset includes users’ web page clicks and ad clicks. We do not include any demographic or geographic data, to steer clear of privacy concerns.

  5. Behavioral Targeting • “BT uses information collected on an individual's web-browsing behavior, such as the pages they have visited or the searches they have made, to select which advertisements to display to that individual. Practitioners believe this helps them deliver their online advertisements to the users who are most likely to be influenced by them.” [15] • We measured the effectiveness of BT using the click-through rate (CTR).

  6. Behavioral Targeting/Questions • Does BT truly have the ability to help online advertising? To answer this question, we validate the basic assumption of BT, i.e. whether the users who clicked the same ad always have similar browsing and search behaviors and the users who clicked different ads have relatively different Web behaviors.

  7. Behavioral Targeting/Questions • How much can BT help online advertising, as measured by commonly used evaluation metrics? To answer this question, we use the difference between ads’ CTR before and after applying BT strategies as the measurement, i.e. the degree of CTR improvement is considered a measurement of how much BT can help online advertising. The statistical t-test is utilized to verify the significance of our experimental results.

  8. Behavioral Targeting/Questions • Which BT strategy works better than others for ads delivery? We consider two types of BT strategies: (1) representing user behaviors by users’ clicked pages and (2) representing user behaviors by users’ search queries. In addition, how long a history of user behavior the log data covers is also considered for user representation.

  9. Behavioral Targeting/Examination of Page Views • We adopt the classical Term Frequency Inverse Document Frequency (TFIDF) indexing [8] for mathematical user representation, considering each user as a document and each URL as a term. • The user corpus is represented by a real-valued matrix U ∈ R^(g×l), where g is the total number of users and l is the total number of URLs visited. • A user is a row of U, i.e. a real-valued vector u_i = (w_i1, w_i2, …, w_il), where w_ij is the TFIDF weight of URL j in user i’s page views.
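The TFIDF user representation above can be sketched as follows. This is a minimal Python sketch; the function name and input layout are illustrative, not from the paper:

```python
import math

def tfidf_user_vectors(page_views):
    """Build TFIDF vectors treating each user as a 'document' and each
    visited URL as a 'term'. `page_views` maps a user id to the list of
    URLs the user visited (with repeats for repeat visits)."""
    g = len(page_views)                      # total number of users
    # Document frequency: in how many users' histories each URL appears.
    df = {}
    for urls in page_views.values():
        for url in set(urls):
            df[url] = df.get(url, 0) + 1
    vectors = {}
    for user, urls in page_views.items():
        tf = {}
        for url in urls:
            tf[url] = tf.get(url, 0) + 1
        # Classic TF x IDF weight for each URL in this user's row of U.
        vectors[user] = {
            url: (count / len(urls)) * math.log(g / df[url])
            for url, count in tf.items()
        }
    return vectors
```

URLs visited by every user get IDF 0 and thus carry no weight, which matches the intuition that universally popular pages say little about an individual user's interests.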

  10. Behavioral Targeting/Examination of Queries • We also build the user behavioral profile by simply considering all terms that appear in a user’s search queries as his previous behaviors. • With this method we can represent each user in the Bag of Words (BOW) model. • We use Porter stemming [3] to stem terms and then remove stop words and terms that appear only once in a user’s query text. • 470,712 terms are removed and 294,208 are retained. • We use the same TFIDF indexing to index the users by query terms.
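A minimal sketch of the query-term profile described above. The `stem` hook stands in for Porter stemming (which in practice would come from a stemmer library); the identity default keeps this sketch dependency-free, and the function name is illustrative:

```python
def build_query_profile(queries, stop_words, stem=lambda t: t):
    """Bag-of-words profile from one user's search queries.
    Terms are lowercased, stemmed, stop-word filtered, and terms that
    appear only once in the user's query text are dropped."""
    counts = {}
    for query in queries:
        for term in query.lower().split():
            term = stem(term)
            if term not in stop_words:
                counts[term] = counts.get(term, 0) + 1
    # Keep only terms seen more than once in this user's queries.
    return {t: c for t, c in counts.items() if c > 1}
```

The resulting term-count dictionaries can then be fed through the same TFIDF indexing as the page-view representation.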

  11. Behavioral Targeting/Long-term vs. Short-term • Many commercial BT systems use long-term user behavior, while many others use short-term behavior. • There is no existing evidence to show which strategy is better. • As a preliminary survey, we consider one day’s user behavior as the short-term profile and seven days’ user behavior as the long-term profile.

  12. Behavioral Targeting • We will validate and compare four different BT strategies in this paper. They are: • LP: using Long term user behavior all through the seven days and representing the user behavior by Pageviews • LQ: using Long term user behavior all through the seven days and representing the user behavior by Query terms • SP: using Short term user behavior (1 day) and representing user behavior by Pageviews • SQ: using Short term user behavior (1 day) and representing user behavior by Query terms

  13. Dataset • Seven days of CTR data from June 1st to 7th, 2008. • We use user IDs associated with cookies stored on the users’ machines to identify individual users. • No other user information, such as demographic or geographic data, is used. • To filter out robots, any user with more than 100 clicks per day is removed. • The data covers 6,426,633 unique users and 335,170 unique ads within 7 days. • Ads with fewer than 30 clicks over the 7 days are removed, leaving 17,901 ads; the results are averaged over these ads.
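The robot filter described above amounts to a per-day click threshold; a minimal sketch (the function name and data layout are illustrative, not from the paper):

```python
def filter_robots(daily_clicks, max_clicks_per_day=100):
    """Drop any user whose click count exceeds the threshold on some day.
    `daily_clicks` maps a user id to a list of per-day click counts."""
    return {user: days for user, days in daily_clicks.items()
            if max(days) <= max_clicks_per_day}
```

A user clicking, say, 250 times in one day is almost certainly a crawler or click bot, so removing such users keeps the CTR statistics from being distorted.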

  14. Dataset

  15. Experimental Configuration/Symbols and Experiment Setup • Let A = {a_1, a_2, …, a_n} be the set of n ads in our dataset. • For each ad a_i, suppose q_1, q_2, …, q_m are all the queries under which a_i was displayed or clicked. • The set of users who have clicked or been shown a_i is denoted U(a_i). • To show whether user u has clicked a_i we use the binary indicator δ(u, a_i) ∈ {0, 1}.

  16. Experimental Configuration/Symbols and Experiment Setup • In this work, we used two common clustering algorithms, k-means [10] and CLUTO [7], for user segmentation. • Suppose the users are segmented into K segments s_1, …, s_K according to their behaviors. We use the function C(a_i, k) to represent the distribution of U(a_i) under a given segmentation result, where C(a_i, k) stands for all users in U(a_i) who are grouped into the k-th user segment s_k. • Thus each user can be represented by the segment he or she is assigned to.
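The k-means user segmentation step can be sketched in a few lines. This is a generic textbook k-means on dense behavior vectors, not the paper's implementation, and the names are illustrative:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means for user segmentation: each point is one user's
    behavior vector; returns a segment index per user."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each user to the nearest segment center (squared distance).
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2
                                  for p, q in zip(pt, centers[c])))
            for pt in points
        ]
        # Recompute each center as the mean of its assigned users.
        for c in range(k):
            members = [pt for pt, lbl in zip(points, labels) if lbl == c]
            if members:
                centers[c] = [sum(dim) / len(members)
                              for dim in zip(*members)]
    return labels
```

With the segmentation in hand, C(a_i, k) is simply the subset of a_i's users whose label equals k.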

  17. Experimental Configuration/Symbols and Experiment Setup • We first represent the users by their behaviors using the different types of BT strategies. • After that, we group the users according to their behaviors using the commonly used clustering algorithms. • Finally, we evaluate how much BT can help online advertising by delivering ads to the resulting user segments.

  18. Experiment Configuration • Within- and Between-Ads User Similarity • The basic assumption of BT is that users with similar browsing behavior have similar interests and are therefore inclined to click the same ads. • If this is true, the similarity between users who clicked the same ad must be larger than the similarity between users who clicked different ads. • Click-through rate • Once we have validated the basic assumption that similar users click similar ads, we have to show that BT can help online advertising. • Ad performance is generally measured by either click-through rate or revenue; since advertiser revenue is difficult for us to obtain, we use CTR. • If user segments exist where the CTR substantially improves over the same ad shown without user segmentation, then BT is valuable.
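The CTR-based evaluation above can be sketched as comparing an ad's best per-segment CTR against its global (unsegmented) CTR. This is a hypothetical helper, not the paper's code:

```python
def ctr_improvement(clicks, displays):
    """Relative CTR improvement of an ad's best user segment over its
    global CTR. `clicks[s]` and `displays[s]` are the ad's click and
    display counts inside segment s."""
    global_ctr = sum(clicks.values()) / sum(displays.values())
    best_ctr = max(clicks[s] / displays[s] for s in clicks)
    return (best_ctr - global_ctr) / global_ctr
```

If one segment's clickers are much denser than average, this ratio is large, which is exactly the signal that delivering the ad only to that segment would pay off.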

  19. Experiment Configuration/F-measure • The improvement of CTR after user segmentation can only validate the precision of BT strategies in finding potentially interested users. • We can also calculate precision and recall. Users who clicked a_i are positive instances; users who were shown a_i but did not click are negative instances. Precision and recall are defined as Precision = TP / (TP + FP) and Recall = TP / (TP + FN). • The larger the precision, the more accurately we can segment the clickers of a_i. The larger the recall, the better the coverage we achieve in collecting all clickers of a_i. • Integrating precision and recall gives the F-measure: F = 2 · Precision · Recall / (Precision + Recall). The higher the F-measure, the better the performance we have achieved with BT.
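The precision, recall, and F-measure definitions above can be sketched as follows (an illustrative helper: positives are the ad's clickers, and the candidate segment is the set of users BT would target):

```python
def f_measure(segment_users, clickers):
    """Precision, recall and F-measure for one candidate user segment.
    `segment_users` is the set of targeted users; `clickers` is the set
    of users who actually clicked the ad."""
    tp = len(segment_users & clickers)          # true positives
    precision = tp / len(segment_users)
    recall = tp / len(clickers)
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f
```

A segment containing only clickers has perfect precision but may miss most clickers (low recall); the F-measure penalizes both failure modes.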

  20. Experiment Configuration/Ads Click Entropy • Intuitively, if the clickers of an ad dominate some user segments and seldom appear in others, we can easily deliver our targeted ads to them by selecting the segments they dominate. • However, if the clickers of a_i are uniformly distributed over all user segments, then to deliver the targeted ad to more interested users we must simultaneously deliver it to more users who are not interested in it. • We measure this with the click entropy Entropy(a_i) = −Σ_k p_k log p_k, where p_k is the fraction of a_i’s clickers who fall into the k-th segment. The larger the entropy, the more uniformly the users who clicked ad a_i are distributed among all the user segments; the smaller the entropy, the better the results we can achieve.
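The click entropy described above is the Shannon entropy of an ad's clicks over the user segments; a minimal sketch:

```python
import math

def click_entropy(clicks_per_segment):
    """Entropy of an ad's clickers over user segments. Low entropy means
    the clickers concentrate in a few segments (good for targeting);
    high entropy means they are spread uniformly (hard to target)."""
    total = sum(clicks_per_segment)
    probs = [c / total for c in clicks_per_segment if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

An ad whose clickers all fall in one segment has entropy 0, while clickers split evenly over 2^m segments give entropy m.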

  21. BT Results • The within-ads user similarity Sw and between-ads user similarity Sb were used to validate whether the users who clicked the same ad have similar behaviors and the users who clicked different ads have relatively different behaviors. • Let avg(Sw) and avg(Sb) be the average within-ads and between-ads user similarities over the dataset collected; the average ratio avg(Sw)/avg(Sb) was calculated, and detailed results for each user representation strategy were tabulated as follows.
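The within- and between-ads similarities can be computed as average pairwise cosine similarity over the TFIDF user vectors, applied within one ad's clickers for Sw and across two ads' clicker groups for Sb. Cosine similarity is a common choice for TFIDF vectors and is assumed here as an illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def avg_pairwise_sim(users):
    """Average cosine similarity over all unordered pairs of user
    vectors -- the building block for both Sw and Sb."""
    pairs = [(u, v) for i, u in enumerate(users) for v in users[i + 1:]]
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)
```

If avg(Sw) computed this way exceeds avg(Sb), the basic assumption of BT holds on the data.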

  22. BT Results • The fact that average Sw is greater than average Sb implies that users who clicked the same ad are more similar than users who clicked different ads. • To validate whether the difference between Sw and Sb is statistically significant, a paired t-test was applied; the resulting p-values were all less than 0.05. This implies that, statistically, the within-ads user similarity is always larger than the between-ads similarity.

  23. BT Results • Using the clustering algorithms to group users into 20, 40, 80, and 160 clusters (irrespective of the clustering algorithm used), the CTR improvement degree of each ad under user segmentation was calculated. • The ads’ CTR improved by as much as 670%. • Short-term user behavior is more effective than long-term user behavior for targeted advertising because users have multiple interests that change rapidly. • Search queries can work a little better than page clicks for BT: the queries have a strong correlation to the ads displayed, while the page clicks have no strong correlation to them in the dataset we analyzed.

  24. Limitations • In this work, we provide a systematic study of the click-through log of a commercial search engine, so that one can validate and compare different strategies of behavioral targeting. • This is also the first systematic study of BT on real-world ads click-through data in academia.

  25. Conclusions • Demographic and geographic data were not considered during the study due to privacy concerns. • Ranking of user segments for a given ad has not been explored. • In general, user behavior modelling for BT is underexplored, so we did not have many resources to start with. • Studying algorithms for large-scale, incrementally growing user data with rapidly changing behavior was not possible, as BT requires large-scale user data that arrives incrementally.

  26. Conclusions • The following conclusions can be drawn from our experiments: • 1. Users who clicked the same ad are more similar than users who clicked different ads. • 2. The CTR of ads can be improved by as much as 670% on average over all the ads we collected, using fundamental clustering algorithms. • 3. Tracking short-term user behavior can perform better than tracking long-term user behavior for the user representation strategies.

  27. Interesting Similar Results Modeling the Impact of Short- and Long-Term Behavior on Search Personalization Paul N. Bennett, Ryen W. White, Wei Chu, Susan T. Dumais, Peter Bailey, Fedor Borisyuk, and Xiaoyuan Cui • This is a paper from 2012 that studies personalizing search-engine results based on user behavior. While its focus was not ads, it also concluded that short-term behavior is more effective than long-term behavior once a session is ongoing.

  28. Related Work • Learning to rank audience for behavioral targeting in display ads - Jian Tang, Ning Liu, Jun Yan, Yelong Shen, B. Gao, S. Yan, Ming Zhang • Online targeting, behavioral advertising and privacy - Avi Goldfarb, Catherine E. Tucker • Transfer learning for behavioral targeting - Tianqi Chen, Jun Yan, Guirong Xue, Zheng Chen • Analyzing content-level properties of the web adversphere - Yong Wang, Daniel Burgener, Aleksandar Kuzmanovic, Gabriel Maciá-Fernández • ISP-enabled behavioral ad targeting without deep packet inspection - Gabriel Maciá-Fernández, Yong Wang, Rafael Rodríguez-Gómez, Aleksandar Kuzmanovic • Linking visual concept detection with viewer demographics - Adrian Ulges, Markus Koch, Damian Borth

  29. Future Work • Using more advanced user segmentation algorithms could lead to better results. • Better user representation strategies, such as search sessions, the content of user-clicked pages, and user browsing trails, could also be explored.
