In our simulations the correct (i.e.,
increasing rather than decreasing) trend was typically identified, but the magnitude of the trend was inaccurate. For example, it is plausible from our results that paired RDS surveys would over- or under-estimate any change by 25%. This has important implications for studies using RDS to measure the impact of interventions on HIV or HCV prevalence and incidence in PWID populations (Degenhardt et al., 2010; Martin et al., 2011; Martin et al., 2013; Solomon et al., 2013). For the purposes of estimating a change, researchers should consider using raw RDS values as well as adjusted ones. The issues we report will potentially affect studies which follow the same RDS-recruited individuals
over time (Rudolph et al., 2011), rather than using a repeat survey; the overall number of individuals accessed will be smaller (than if multiple samples were taken), and any problems with recruitment in the initial RDS survey will persist throughout (such as difficulty reaching equilibrium). As estimates should still be adjusted using reported degrees, inaccuracies in the degrees will cause inaccurate estimates of the trends over time (this mechanism is illustrated in the sketch at the end of this section). Problems will occur both if the same reported degrees are reused and if new reported degrees are obtained, because the potential for error in the reported degrees is high in either case. Though we consider only increasing prevalence, the same problems will apply to populations with decreasing prevalence
and to surveys taken at different time intervals. Testing all realistic permutations is not feasible, but based on our results we expect that inaccurate degrees will introduce bias into RDS surveys and reduce confidence in the estimates, causing uncertainty in the calculated trends from paired samples. We note that our methods of adding inaccuracy to degrees are fairly conservative; realistic biasing behaviour is likely to be heterogeneous across a population and may depend on factors such as gender, age, behaviour, degree or disease status (Bell et al., 2007; Brewer, 2000; Marsden, 1990; Rudolph et al., 2013). For example, men usually report far more sexual partners than women, which is inconsistent with the number of sexual partnerships that could actually have occurred (Brown and Sinclair, 1999; Liljeros et al., 2001; Smith, 1992). Similar problems may occur among PWID recalling injecting partnerships. In addition, PWID in different countries or regions where different laws and restrictions apply may bias their answers differently. These more systematic inaccuracies will likely cause a larger error in estimates, enhanced by correlations between those factors and infection. Testing the accuracy of reported degrees would be very challenging in the “hidden populations” in which RDS is used.
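As a minimal illustration of the mechanism (not drawn from our simulation study), the sketch below approximates RDS recruitment as sampling with probability proportional to degree and applies the standard RDS-II (Volz-Heckathorn) inverse-degree estimator using self-reported degrees carrying multiplicative log-normal error. The population model, the error model and all parameter values (population size, sample size, prevalences, noise levels) are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_population(n, prevalence):
        # Hypothetical population: heavy-tailed degrees, with infection made
        # more likely at higher degree (degree-infection correlation).
        degree = rng.pareto(2.0, n).astype(int) + 2
        p_inf = np.clip(prevalence * degree / degree.mean(), 0.0, 1.0)
        infected = rng.random(n) < p_inf
        return degree, infected

    def rds_survey(degree, infected, sample_size, noise_sd):
        # Approximate RDS recruitment as degree-proportional sampling with replacement.
        p = degree / degree.sum()
        idx = rng.choice(len(degree), size=sample_size, replace=True, p=p)
        # Self-reported degrees: true degree with multiplicative log-normal error.
        reported = np.maximum(1.0, degree[idx] * rng.lognormal(0.0, noise_sd, sample_size))
        # RDS-II (Volz-Heckathorn) estimate: inverse-reported-degree weighted prevalence.
        w = 1.0 / reported
        return np.sum(w * infected[idx]) / np.sum(w)

    for noise_sd in (0.0, 0.5, 1.0):
        deg1, inf1 = make_population(50_000, prevalence=0.20)
        deg2, inf2 = make_population(50_000, prevalence=0.30)
        est_change = (rds_survey(deg2, inf2, 1000, noise_sd)
                      - rds_survey(deg1, inf1, 1000, noise_sd))
        true_change = inf2.mean() - inf1.mean()
        print(f"degree noise sd={noise_sd}: estimated change {est_change:.3f}, "
              f"true change {true_change:.3f}")

Even this simple error model, in which reporting error is independent of infection status, perturbs the estimated change between the two surveys; the more systematic, correlated reporting errors discussed above would be expected to distort the trend further.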