It is often asked whether the measurements of two different observers (sometimes more than two) or of two different techniques yield similar results. This is referred to as agreement, concordance, or reproducibility between measurements. Such an analysis examines pairs of measurements, either categorical or numerical, with each pair performed on one individual (or one pathology slide, or one X-ray). Cohen's kappa can also be used when the same rater evaluates the same patients at two points in time (say, 2 weeks apart) or, in the example above, re-evaluates the same response sheets after 2 weeks. Its limitations are: (i) it does not take the magnitude of the differences into account, so it is unsuitable for ordinal data; (ii) it cannot be used when there are more than two raters; and (iii) it does not distinguish between agreement on positive and on negative findings, which can be important in clinical situations (e.g., wrongly diagnosing a disease and wrongly ruling it out can have different consequences).

Consider an example consisting of n observations (e.g., objects of unknown volume). Two tests (e.g., two different methods of measuring volume) are performed on each sample, giving 2n data points in total. Each of the n samples is then represented on the plot by assigning the mean of its two measurements to the x-axis and the difference between the two values to the y-axis. Bland–Altman plots are also used to investigate a possible relationship between the measurement differences and the true value (i.e., proportional bias). The presence of proportional bias indicates that the methods do not agree uniformly across the range of measurement (i.e., the limits of agreement depend on the actual measurement). To assess this relationship formally, the difference between the methods should be regressed on the mean of the two methods. If a relationship between the differences and the true value is identified (i.e., the regression line has a significant slope), regression-based 95% limits of agreement should be reported. [4]

Readers are referred to the documents cited below, which cover measures of agreement in more detail. Two methods are available to assess agreement between measurements of a continuous variable across observers, instruments, time points, and so on. One of them, the intraclass correlation coefficient (ICC), provides a single measure of the magnitude of agreement; the other, the Bland–Altman plot, additionally provides a quantitative estimate of how closely the values of the two measurements lie. The kappa statistic can take values from −1 to 1 and is, somewhat arbitrarily, interpreted as follows: 0 = agreement equivalent to chance; 0.10–0.20 = slight agreement; 0.21–0.40 = fair agreement; 0.41–0.60 = moderate agreement; 0.61–0.80 = substantial agreement; 0.81–0.99 = near-perfect agreement; and 1.00 = perfect agreement.
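The Bland–Altman computation described above can be sketched in a few lines of Python. This is a minimal illustration only: the function name and the sample volume data are invented for the example, and the slope of the difference-on-mean regression is reported as a simple screen for proportional bias rather than a formal significance test.

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman summary for paired measurements a and b.

    Returns the mean difference (bias), the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences), and the least-squares
    slope of the differences regressed on the pairwise means, which
    screens for proportional bias.
    """
    diffs = [x - y for x, y in zip(a, b)]
    means = [(x + y) / 2 for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    # slope = cov(means, diffs) / var(means)
    m_bar = mean(means)
    cov = sum((m - m_bar) * (d - bias) for m, d in zip(means, diffs))
    var = sum((m - m_bar) ** 2 for m in means)
    slope = cov / var
    return bias, loa, slope

# Hypothetical volumes of six objects measured by two methods
method_a = [10.1, 20.3, 29.8, 40.2, 50.5, 59.9]
method_b = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
bias, loa, slope = bland_altman(method_a, method_b)
```

A slope close to zero suggests the differences do not grow with the magnitude of the measurement; a clearly nonzero slope is the cue to report regression-based limits of agreement instead of the constant ones.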

Negative values suggest that the observed agreement is worse than would be expected by chance. An alternative interpretation is that kappa values below 0.60 indicate a considerable degree of disagreement. For ordinal data, where there are more than two categories, it is useful to know whether the ratings of the various raters differ slightly or by a large amount. For example, microbiologists may rate bacterial growth on culture plates as: none, occasional, moderate, or confluent. Here, a given plate rated "occasional" by one rater and "moderate" by the other would represent a lower degree of discordance than ratings of "none" and "confluent."
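A weighted kappa captures exactly this graded notion of disagreement by penalizing a pair of ratings in proportion to how far apart they fall on the ordinal scale. The following sketch, with a linear weighting scheme, illustrates the idea; the function name, the weighting choice, and the example ratings are assumptions made for the illustration, not taken from the text above.

```python
from collections import Counter

def cohens_kappa(r1, r2, categories, weighted=True):
    """Cohen's kappa for two raters over an ordered category list.

    With weighted=True, disagreement weights grow linearly with the
    distance between categories (|i - j| / (k - 1)), so near-misses
    such as "occasional" vs "moderate" count as partial agreement.
    With weighted=False, every disagreement counts fully (unweighted kappa).
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    obs = Counter((idx[a], idx[b]) for a, b in zip(r1, r2))  # joint counts
    p1 = Counter(idx[a] for a in r1)                         # rater-1 marginals
    p2 = Counter(idx[b] for b in r2)                         # rater-2 marginals

    def w(i, j):  # disagreement weight: 0 on the diagonal
        return abs(i - j) / (k - 1) if weighted else (0.0 if i == j else 1.0)

    d_obs = sum(w(i, j) * obs[(i, j)] / n for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * p1[i] * p2[j] / n**2 for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

scale = ["none", "occasional", "moderate", "confluent"]
rater1 = ["none", "occasional", "moderate", "moderate", "confluent", "none"]
rater2 = ["none", "moderate", "moderate", "occasional", "confluent", "occasional"]
kw = cohens_kappa(rater1, rater2, scale, weighted=True)
ku = cohens_kappa(rater1, rater2, scale, weighted=False)
```

Because every disagreement in this toy data set is only one step apart on the scale, the weighted kappa comes out higher than the unweighted one, reflecting that the raters disagree mildly rather than grossly.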