Introduction
A key part of the analysis workflow in brand monitoring services is often the maintenance of a 'watchlist' of sites. This requirement arises most frequently in services comprising domain monitoring, which detect newly registered domain names containing a brand term of interest, but whose associated websites may not yet feature significant or infringing content.
In these cases, enforcement action may not immediately be possible or appropriate, but there might be a concern that higher-threat content will appear in the future. There is therefore often a need to monitor the domains for changes to their content and provide an alert when a significant change is identified. At that point, a decision can be made regarding appropriate follow-up action. Requirements for 'revisitor' functionality along these lines can also arise in other brand-protection contexts, such as when enforcement action has already been taken against an infringing target (such as a website or marketplace listing), and the targeted page is then tracked to verify compliance with the takedown action.
A number of automated tools track content in this way, but a highly effective version requires two key components: the ability to analyse an appropriate set of characteristics of the websites in question, and options to set the sensitivity appropriately. It is not generally desirable, for example, for an alert to be generated every time any change to website content is identified, since many websites incorporate dynamic features which differ every time the webpage is called. Conversely, a change which is only small, or of a particular type (e.g. the appearance of an explicit brand reference), can sometimes be significant.
In this article, I briefly explore the development and use of a Python-based revisitor script to inspect and subsequently review a set of domain names of potential interest (using data from a domain monitoring service for a retail brand, as a case study). A simple, easily deployed script of this nature can be advantageous: it is quick and efficient to roll out, and fully customisable regarding the specific website characteristics analysed and the sensitivity thresholds used. Tools of this type can be highly useful for watchlists featuring many hundreds or thousands of URLs, and can, of course, also be expanded to cover other website features and more complex types of site analysis.
Script specifics
The workflow is built on the basis of a 'site visitor' script, which inspects each of the domains in the watchlist and extracts the following features (which are 'dumped' to a readable database file; a minimal sketch of this inspection step is shown after the list):
- HTTP status[1] - a numerical code corresponding to the type of response received when the domain name is queried; a code of '200' indicates a live website response (i.e. potentially an active webpage)
- Page title[2] (as defined in the HTML source code of the page)
- Full webpage content[3] (all text, plus formatting features and other content such as embedded scripts - i.e. the full HTML content)
- Presence / absence of each of a set of pre-defined keywords[4] - applicable keywords for analysis might typically include brand terms or other relevance keywords (e.g. for a retail brand, terms indicating that e-commerce content is present ('buy', 'shop', 'cart', etc.))
- Final URL[5] - i.e. the destination URL (e.g. after following any site redirect)
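As an illustration, the inspection step might look something like the following sketch. The field names, the example keyword list, and the use of a JSON file as the 'readable database' are assumptions of this sketch, rather than fixed requirements of the approach.

```python
import json
import re
import urllib.error
import urllib.request

from bs4 import BeautifulSoup

# Example relevance keywords for a retail brand (illustrative only)
KEYWORDS = ["buy", "shop", "cart"]

def inspect_site(url):
    """Fetch a URL and extract the features used for later comparison."""
    try:
        response = urllib.request.urlopen(url, timeout=30)
        status = response.status
    except urllib.error.HTTPError as err:
        response, status = err, err.code  # error responses (4xx/5xx) still carry a body
    except urllib.error.URLError:
        # No response at all (e.g. DNS failure or connection time-out)
        return {"status": None, "title": "", "content": "",
                "keywords": {k: False for k in KEYWORDS}, "final_url": url}
    html = response.read().decode("utf-8", errors="replace")
    soup = BeautifulSoup(html, "html.parser")
    return {
        "status": status,                                # HTTP status code
        "title": soup.title.text if soup.title else "",  # page title from the HTML
        "content": html,                                 # full HTML content
        "keywords": {k: bool(re.search(k, html, re.IGNORECASE)) for k in KEYWORDS},
        "final_url": getattr(response, "url", url),      # destination after any redirect
    }

if __name__ == "__main__":
    watchlist = ["http://example.com"]  # placeholder watchlist
    snapshot = {url: inspect_site(url) for url in watchlist}
    with open("snapshot.json", "w", encoding="utf-8") as f:
        json.dump(snapshot, f, indent=2)
```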
The basic function of the revisitor is then to inspect the same list of sites at subsequent times, as required (or on a regular basis, if configured to run accordingly), extract the same features, and compare these with the corresponding features recorded for each site in the previous round of analysis (as read from the database file). In an initial simple implementation of the script, the following are deemed to be significant changes (i.e. denoting that the site is now worthy of further (manual) inspection and consideration for follow-up action; a sketch of this comparison logic follows the list):
- A change to an HTTP status of 200 (i.e. the appearance of a live website response)[6]
- Any change to the page title
- Any(*) change to the webpage content
- Any instance of the appearance of a keyword of interest (where not previously present)
- Any change to the final URL (e.g. the appearance or disappearance of a re-direct)
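These rules might be expressed along the following lines - a sketch operating on the feature dictionaries produced by the inspection sketch above; the function and field names are again illustrative assumptions.

```python
def significant_changes(old, new):
    """Return human-readable reasons why a site now warrants manual review."""
    reasons = []
    if new["status"] == 200 and old["status"] != 200:
        reasons.append("site now returns a live (HTTP 200) response")
    if new["title"] != old["title"]:
        reasons.append("page title has changed")
    if new["content"] != old["content"]:
        reasons.append("webpage (HTML) content has changed")
    for keyword, present in new["keywords"].items():
        if present and not old["keywords"].get(keyword):
            reasons.append(f"keyword of interest has appeared: {keyword!r}")
    if new["final_url"] != old["final_url"]:
        reasons.append("final (destination) URL has changed")
    return reasons

# Example usage against two stored snapshots (dicts keyed by URL):
# for url in watchlist:
#     for reason in significant_changes(previous[url], latest[url]):
#         print(f"ALERT {url}: {reason}")
```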
Of course, none of these changes guarantees that the website is now definitively of concern or infringing, but together they generate a 'shortlist' of sites requiring manual review for a definitive determination of appropriate next steps (much more efficiently than having to review the whole watchlist manually on a regular basis).
Considering content-change thresholds
As discussed above, one of the trickiest features is the determination of an appropriate 'threshold' for alerting to changes in webpage content. The simplest configuration is to trigger a notification for any change(*), but in some cases this option may prove too 'sensitive' and generate too many candidate sites for convenient further manual review (depending on the size of the watchlist and the interval between successive inspections).
As a further exploration, it is instructive to investigate a numerical basis for quantifying degrees of webpage change, and what these differing degrees 'look like' in practice. There are a number of potential algorithms for quantifying the degree of difference between two passages of text (as discussed, for example, in previous work on mark comparison[7]); however, the simple script discussed in this article employs the Python library module difflib.SequenceMatcher[8], applied to the full HTML of the page (split on spaces into individual 'words'), to calculate a difference score. This simple score is based on the ratio of the number of 'similar matches' (i.e. words in common) between the two versions of the page in question to the total number of elements (words). Furthermore, the script is also configured to provide a more granular view of the exact nature of the change, comprising a summary of which elements (i.e. words in the HTML) have been removed from the (HTML of the) page between the two successive inspections, and which have been added (Figure 1).
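A minimal sketch of this comparison step might look as follows: the difference score is derived here from SequenceMatcher.ratio(), and get_opcodes() provides the summary of removed and added words (the function name and the illustrative threshold in the usage comment are assumptions of this sketch).

```python
import difflib

def content_diff(old_html, new_html):
    """Score the change between two HTML versions and summarise the word-level edits."""
    old_words = old_html.split()
    new_words = new_html.split()
    matcher = difflib.SequenceMatcher(None, old_words, new_words)
    score = 1.0 - matcher.ratio()  # 0.0 = identical, 1.0 = entirely different
    removed, added = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("delete", "replace"):
            removed.extend(old_words[i1:i2])  # words present only in the old version
        if tag in ("insert", "replace"):
            added.extend(new_words[j1:j2])    # words present only in the new version
    return score, removed, added

# Example: flag the site only if the change exceeds a configurable threshold
# score, removed, added = content_diff(old["content"], new["content"])
# if score > 0.05:  # threshold value is illustrative; see the discussion below
#     print(f"{score:.1%} of the page changed; removed {removed[:10]}, added {added[:10]}")
```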
Figure 1: Examples / illustrations of identified content changes for specific individual webpages between successive inspections:
- a) a change to a single dynamically generated string (in this case, JavaScript elements)
- b) a change from showing an error message to featuring distinct (JavaScript) content
Discussion and Conclusions
The examples in Figure 1 provide some initial illustration that the nature of an identified change is potentially much more important in any determination of significance than (for example) a numerical quantification of the extent of the change (as a proportion of the website as a whole). The first example ('a') - a change to a dynamically generated string - is potentially something which might be seen on every occasion the site is inspected, and might not correspond to any material change to the page (the visible site content may be entirely unaffected, for example). Conversely, the second example ('b'), representing a change from a simple error message (which, in this case, comprised essentially the entire content of the website) to the appearance of some sort of live, script-generated content (potentially wholly different website content), might be much more significant.
However, these differences may not be apparent from an inspection of just the numerical 'size' of the change on the page (i.e. the 'difference score'); a variation in a piece of scripted content (such as in Figure 1a) might, for example, pertain to just a small element on a much larger page, or could constitute the dominant component of the webpage as a whole. In a sample dataset, single changes similar to that shown in Figure 1a were found to account for anywhere from less than 5% to more than 50% of the whole content of the website in question.
For these reasons, there is always some danger in specifying a fixed threshold below which changes to the page are disregarded. In some senses, it is safer to conduct a more detailed inspection of all pages which show any change in content between successive revisits, so as to avoid missing significant cases; however, depending on the number of sites under review, this may not be feasible. Accordingly, in future developments or more sophisticated versions of the script, it may be appropriate to refine the scoring algorithm to reflect the nature and/or content of any change, as sketched below.
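As one illustration of such a refinement - a sketch of a possible future direction, not a feature of the current script - the comparison could be restricted to the visible text of the page, with embedded scripts and styling stripped out, so that purely script-level churn of the kind shown in Figure 1a scores lower than changes to the content a visitor actually sees.

```python
from bs4 import BeautifulSoup

def visible_text(html):
    """Extract only the visible page text, dropping scripts and styling."""
    soup = BeautifulSoup(html, "html.parser")
    for element in soup(["script", "style"]):
        element.decompose()  # remove non-visible, frequently-churning content
    return soup.get_text(separator=" ")

# The difference score from the earlier sketch could then be computed on
# visible_text(old_html) and visible_text(new_html) instead of the raw HTML.
```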
Regardless of the specifics, however, the approach discussed in this article is generally able to build efficiency into the review process for sites of possible future concern, potentially filtering large numbers of sites down to much smaller 'shortlists' of candidates identified for deeper inspection and analysis on any given occasion.
References
[1] Using Python library module: urllib.request.urlopen([URL]).status
[2] Using Python library module: bs4 → BeautifulSoup([URL],'html.parser').title.text
[3] Using Python library module: urllib.request.urlopen([URL]).read()
[4] Using Regex matching (Python library module: re.search) as applied to the full webpage (HTML) content
[5] Using Python library module: urllib.request.urlopen([URL]).url
[6] However, care must also be taken to distinguish a 'real' change in site status from an 'apparent' change, which can arise in instances where (for example) the connection speed to the site is slow, and a connectivity time-out may be mistaken for a real case of site inactivity.
[8] https://www.geeksforgeeks.org/python/compare-sequences-in-python-using-dfflib-module/
This article was first published on 23 October 2025 at:

