Central to many intellectual property disputes is an assessment of the degree of similarity of two marks. This follows from the framework set out in (for example) the UK Trade Marks Act 1994[1], whereby even a non-identical mark may be considered non-registrable or infringing if it creates a likelihood of confusion (Sections 5(2)(b) and 10(2)(b)) with an earlier mark.
A key point to note is that decisions regarding legal similarity have traditionally been regarded as fundamentally subjective, resting on a range of relevant tests which include consideration of the perception of the relevant consumer, and recognition that similarity exists on a spectrum (from low to high).
However, there are some areas where objective quantitative formulations can be constructed. A more objective framework could have a number of advantages, including the potential to quantitatively measure the difference between marks. It would be necessary to explicitly incorporate the relevant metrics into comparison tests, but this would offer the potential to define thresholds up to which IP protection could apply, and could provide the basis for new case law to be applied to future analogous disputes, offering the potential for greater legal consistency and predictability of decisions.
An objective quantitative approach is not likely to be applicable to all types of marks, or to all characteristics of (or categories of comparison between) marks of any particular type, but there are certain areas where algorithms for calculating a numerical degree of similarity can be formulated. Full details of such potential frameworks have been set out in previously published work on word and colour marks (Barnett, 2025a)[2] and on sound marks (Barnett, 2025b)[3]; the key elements are outlined below.
- Word marks - Of the types of mark for which some sort of quantitative approach might be possible, word marks are perhaps one of the more complex. It is extremely difficult to anticipate any objective algorithmic framework able to assess the conceptual similarity (i.e. similarity in meaning - 'lexical' or 'semantic' similarity) between marks, particularly when additional complications such as differences between languages, and potential variations in meaning associated with spelling variations or homophones, are taken into account. However, visual similarity (i.e. similarity in spelling) and aural similarity (pronunciation) can be addressed to some degree.
In the previously proposed framework, visual similarity between (a pair of) word marks is quantified using algorithms based on the concepts of two specific metrics:
(i) Levenshtein distance, which relates to the number of character changes required to transform one string into the other; the smaller the number of changes required, the more similar the strings are; and
(ii) Jaro-Winkler similarity, which quantifies the number of 'matching' characters between the strings; it incorporates normalisations relating to the string lengths, and a weighting factor to take greater account of characters nearer to the start of the strings (where, arguably, a consumer might be more likely to notice differences).
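By way of illustration, both metrics can be sketched in a few lines of Python. This is a minimal, self-contained implementation for illustration only (the function names and the normalisation of Levenshtein distance into a 0-1 score are choices made here, not details taken from the published framework; a production system would more likely use an established string-metrics library):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions required to transform one string into the other."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalise the edit distance to a 0..1 similarity score."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def jaro(a: str, b: str) -> float:
    """Jaro similarity: based on matching characters within a window,
    with a penalty for transpositions among the matches."""
    if a == b:
        return 1.0
    if not a or not b:
        return 0.0
    window = max(len(a), len(b)) // 2 - 1
    a_flags = [False] * len(a)
    b_flags = [False] * len(b)
    matches = 0
    for i, ca in enumerate(a):
        for j in range(max(0, i - window), min(i + window + 1, len(b))):
            if not b_flags[j] and b[j] == ca:
                a_flags[i] = b_flags[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # count transpositions among matched characters
    t, k = 0, 0
    for i, flag in enumerate(a_flags):
        if flag:
            while not b_flags[k]:
                k += 1
            if a[i] != b[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len(a) + matches / len(b)
            + (matches - t) / matches) / 3

def jaro_winkler(a: str, b: str, p: float = 0.1) -> float:
    """Jaro similarity boosted by a common prefix of up to four
    characters - the extra weight for the start of the strings."""
    j = jaro(a, b)
    prefix = 0
    for ca, cb in zip(a, b):
        if ca != cb or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

levenshtein("kitten", "sitting")        # → 3
jaro_winkler("MARTHA", "MARHTA")        # → 0.961...
```

Note how the prefix bonus in `jaro_winkler` keeps the MARTHA/MARHTA score high despite the internal transposition, reflecting the assumption that consumers attend most to the beginnings of words.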
Aural (or phonetic) similarity can be assessed by generating phonetic representations of the strings, using (in the proposed framework) an automated algorithm able to convert strings into their International Phonetic Alphabet (IPA) encodings, and then comparing these phonetic representations against each other to calculate their similarity (again, using an algorithm based on an implementation of Levenshtein distance).
Overall similarity is then calculated (in the simplest implementation) as the mean of the visual and aural similarity measurements. Modifications can also be applied to this approach, such as weighting the contributions of the individual components differently, or adding algorithmic elements to reflect other relevant characteristics: splitting the word into key segments ('tokens') rather than comparing the marks character by character; considering the distinctiveness of the marks (or their component elements); or analysing the parts of the strings which differ from each other (i.e. the 'remainders' when the common elements are removed) - and, potentially, the relationship between these sub-elements and any associated goods and services classes.
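The combination step can be made concrete with a short Python sketch. The IPA transcriptions below are hard-coded for illustration - in the proposed framework they would be produced automatically by a grapheme-to-phoneme tool - and the equal-weight mean can trivially be replaced with a weighted one:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance; works equally well on ordinary and IPA strings."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalised 0..1 similarity score."""
    return 1.0 - levenshtein(a, b) / max(len(a), len(b)) if (a or b) else 1.0

# hypothetical marks: visually different but aurally identical homophones
visual = similarity("byte", "bite")   # 0.75 - one character substituted
aural = similarity("baɪt", "baɪt")    # 1.0 - they share one IPA form

overall = (visual + aural) / 2        # simple mean: 0.875

# illustrative weighted variant (weights chosen arbitrarily here)
w_visual, w_aural = 0.6, 0.4
weighted = w_visual * visual + w_aural * aural   # 0.85
```

The homophone pair shows why the aural component matters: a purely visual comparison would understate how confusable the two marks sound.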
- Colour marks - Colours are (arguably) somewhat simpler, as they can be exactly specified (e.g. by expressing them in RGB format – i.e. quantifying the red, green and blue components (as represented on a digital display), usually represented as a three-component vector (e.g. [255,255,255] for white) or in hexadecimal (#FFFFFF, equivalently for white)). On this basis, any colour can be represented as a point in a 3D 'space', defined with the red component varying along one axis, the green component along the second, and the blue component on the third. Accordingly, the difference between any two colours can relatively simply be calculated as the geometric 'distance' between the colours in 'RGB space'. This distance can equivalently be expressed as a difference (or similarity) score, by considering it as a proportion of the maximum possible distance between two colours in the space (i.e. the distance between black ([0,0,0]) and white ([255,255,255])).
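This calculation is short enough to state directly. The sketch below (plain Python; the colour values and function name are simply illustrative) computes the Euclidean distance in RGB space and normalises it by the black-to-white diagonal:

```python
import math

BLACK, WHITE = (0, 0, 0), (255, 255, 255)
# maximum possible separation: the space diagonal, sqrt(3 * 255**2) ≈ 441.7
MAX_DIST = math.dist(BLACK, WHITE)

def colour_similarity(c1, c2):
    """1.0 for identical colours, 0.0 for black versus white."""
    return 1.0 - math.dist(c1, c2) / MAX_DIST

colour_similarity((0, 0, 0), (255, 255, 255))   # → 0.0
colour_similarity((255, 0, 0), (200, 30, 30))   # two similar reds, near 1.0
```

One design caveat worth noting: equal distances in RGB space do not correspond to equal differences as perceived by the human eye, so a refinement might perform the same calculation in a perceptually more uniform colour space.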
- Sound marks - Many characteristics of sound marks are potentially too complex to be amenable to comparison using a simple algorithmic approach, but it is possible to make progress with the development of convenient frameworks in the cases of simple melodic lines expressible as sheet-music snippets. The proposed framework uses a numerical encoding to reflect the (relative) pitches and lengths of the notes, so as to represent the musical line as a string of characters. Two melodies can therefore be compared with each other using algorithms analogous to those used for word marks. This approach might be applicable to trademark disputes, or to the assessment of potential copyright infringements. It might also be extendable to reflect other musical characteristics such as chord sequences, or to consider the extent of the section under consideration as a proportion of the whole piece, and could be modified to take account of the commonness (amongst the 'corpus' of pre-existing content) of particular musical elements. However, other characteristics, such as instrumentation, are likely to be more difficult to address. Going forward, it might also be possible to construct algorithmic approaches to assess the similarity between sound marks represented as digital (e.g. MP3) files.
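A minimal sketch of such an encoding follows; the interval/duration token format here is purely illustrative rather than the published framework's exact scheme, but it shows the essential idea - encoding relative pitch makes the comparison transposition-invariant, and the resulting token sequences can be compared with the same Levenshtein-style machinery used for word marks:

```python
def edit_distance(a, b):
    """Levenshtein distance over arbitrary sequences (here, note tokens)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def encode(notes):
    """Represent a melody as interval/duration tokens: relative pitch
    steps (rather than absolute pitches) plus note lengths."""
    return [f"{p2 - p1:+d}/{d2}"
            for (p1, _), (p2, d2) in zip(notes, notes[1:])]

def melody_similarity(n1, n2):
    """Normalised 0..1 similarity of two melodies."""
    t1, t2 = encode(n1), encode(n2)
    return 1.0 - edit_distance(t1, t2) / max(len(t1), len(t2))

# 'Twinkle Twinkle' opening as (MIDI pitch, duration) pairs,
# and the same phrase transposed up a whole tone
tune = [(60, 1), (60, 1), (67, 1), (67, 1), (69, 1), (69, 1), (67, 2)]
transposed = [(p + 2, d) for p, d in tune]

melody_similarity(tune, transposed)   # → 1.0: intervals are unchanged
```

Changing even one note of the transposed copy would lower the score in proportion to the number of altered transitions, mirroring the character-by-character logic applied to word marks.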
These types of objective quantitative approaches are not likely to be (easily and repeatably) possible for certain other types or characteristics of marks, such as logos or associated imagery, though some progress might be achievable through the use of (say) image analysis or AI-based tools.
Overall, however, it is important to note that such algorithms should only be considered as tools to be utilised in the overall similarity assessment process, which will inevitably always incorporate significant subjectivity, involving consideration of a range of additional factors. These might typically include (for word marks specifically): conceptual similarity (i.e. meaning) and the distinctiveness of the marks; and (for marks generally) fonts or visual presentation, the associated goods and services, strength and degree of brand renown, documented evidence of actual confusion, the degree of attention paid by relevant consumers, and the nature of the overall market, all of which contribute to the estimation of the possibility of trademark confusion. Also relevant are the issues of how marks are perceived and recalled by consumers, which itself is dependent on a range of (largely unquantifiable) factors, such as levels of attention paid, the context in which the marks are encountered, physical differences between consumers, cultural associations, and so on.
The ideas presented in this overview are not intended to replace, in their entirety, the current nuanced and multi-faceted approach to infringement employed by courts and trademark offices.
However, a more objective framework offers the potential to quantitatively measure the difference between marks (rather than simply relying on the traditional approach of assessing similarity to (say) a 'low', 'medium' or 'high' degree), to define thresholds up to which IP protection could apply, and to build a body of case law to serve as the basis for future legal decisions within a more consistent framework.
References
This article was first published on 26 May 2025 at:
https://ipkitten.blogspot.com/2025/05/objectively-measuring-similarity-of.html