UPDATE: Ct values can be used to estimate epidemic dynamics! Ct values are expected to shift depending on whether the epidemic is growing or declining, and we harness this to estimate the epidemic trajectory. Lots of cool new analyses and methods! 1/12

Highlights:
- Cts from symptom-based surveillance also change over time, though the effect is weaker than under random surveillance
- Methods to infer incidence using single cross-sections of Cts
- Unbiased by changing testing coverage
- Gaussian process (wiggly line) model for incidence tracking using Ct values
2/12
This work is *JOINTLY led* with @LeekShaffer and PI’d by @michaelmina_lab. Thank you also to the ever insightful @mlipsitch and to coauthors @SanjatKanjilal @gabriel_stacey and @nialljlennon. 3/12
Premise: times since infection depend on the epidemic trajectory, and distributions of randomly sampled viral loads proxy times since infection. With calibration, Ct values can therefore estimate the growth rate. We focus on qPCR for SARS-CoV-2, but the principle applies to any outbreak. 4/12
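To show the flavor of this premise, here is a toy simulation (my own sketch, not the paper's calibration method, with entirely hypothetical viral-kinetics parameters): under exponential growth, randomly sampled infections skew recent, which shifts the cross-sectional Ct distribution downward relative to a declining epidemic.

```python
import numpy as np

rng = np.random.default_rng(1)

def ct_given_tsi(t):
    # Toy viral-kinetics curve (hypothetical parameters): Ct falls from
    # the detection limit (40) to a peak of 20 at day 5 post infection,
    # then drifts back up toward 40 as the infection clears.
    peak_day, peak_ct, lod_ct = 5.0, 20.0, 40.0
    down = lod_ct - (lod_ct - peak_ct) * t / peak_day
    up = peak_ct + (lod_ct - peak_ct) * (t - peak_day) / 25.0
    return np.where(t < peak_day, down, np.minimum(up, lod_ct))

def cross_section_cts(growth_rate, n=50_000, horizon=30):
    # Under exponential growth at rate r, a randomly sampled infected
    # person's time since infection (TSI) is weighted toward recent days,
    # so the TSI distribution is proportional to exp(-r * tsi).
    tsi = np.arange(horizon)
    w = np.exp(-growth_rate * tsi)
    draws = rng.choice(tsi, size=n, p=w / w.sum()).astype(float)
    return ct_given_tsi(draws)

growing_median = float(np.median(cross_section_cts(+0.1)))
declining_median = float(np.median(cross_section_cts(-0.1)))
print(f"median Ct, growing epidemic:   {growing_median:.1f}")
print(f"median Ct, declining epidemic: {declining_median:.1f}")
```

Even with these made-up kinetics, the growing epidemic's median Ct comes out clearly lower than the declining epidemic's, which is the signal the paper exploits.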
Result 1: viral loads are shifted higher (Cts lower) during epidemic growth and lower (Cts higher) during decline when individuals are sampled *based on the onset of symptoms*. We simulated linelist data under symptom-based surveillance and looked at TSI and Cts over time. 5/12
This is crucial when considering virulence in emerging SARS-CoV-2 variants. Lower Cts over time do not *necessarily* mean newly dominant variants have higher virulence. If incidence of a new variant is increasing, then we expect to see more recent infections and lower Cts. 6/12
**However, the effect is smaller than under random surveillance, so I would not rule out the possibility of increased virulence.** But important to consider. Thank you to @charliewhittak for chatting through this! 7/12
Result 2: we reconstructed the epidemic curve using single cross-sectional samples from well-observed nursing homes, finding that a single cross section using the full Ct distribution provided similar insights to point prevalence measured across three sample times. 8/12
Result 3: we compared Ct-based to case-count based methods when testing is changing. Rt estimates are biased when testing is increasing or decreasing (not a problem with the method, just the data!). Our method uses the Ct distribution so does not care about test numbers. 9/12
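The testing-coverage bias is easy to demonstrate with a toy example (my own sketch, not the paper's analysis): hold true incidence flat, ramp up testing, and a naive log-linear growth-rate estimate from observed case counts comes out spuriously positive.

```python
import numpy as np

days = np.arange(30)
true_incidence = np.full(days.size, 1000.0)   # flat epidemic: growth rate 0
coverage = np.linspace(0.1, 0.2, days.size)   # testing coverage doubles
observed = true_incidence * coverage          # case counts rise anyway

# Naive exponential growth rates from log-linear fits.
naive_r = np.polyfit(days, np.log(observed), 1)[0]
true_r = np.polyfit(days, np.log(true_incidence), 1)[0]
print(f"true growth rate:  {true_r:.4f}")
print(f"naive growth rate: {naive_r:.4f}")
```

A Ct-based estimator sidesteps this because the *shape* of the Ct distribution among positives does not change when you simply test more people.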
Result 4: we use multiple cross-sectional samples to reconstruct incidence without making assumptions about the trajectory shape (a Gaussian “wiggly” process model). We can track the incidence curve in MA using routinely collected hospital tests. 10/12
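To give the flavor of the "wiggly line" idea, here is a generic numpy-only Gaussian-process regression sketch with a squared-exponential kernel and made-up data (not the model, kernel choice, or parameters from the paper): the GP recovers a smooth trajectory from noisy observations without assuming a parametric shape.

```python
import numpy as np

def sq_exp_kernel(x1, x2, length=10.0, var=1.0):
    # Squared-exponential covariance between two sets of time points.
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
days = np.arange(0.0, 100.0, 5.0)              # observation times
true_log_inc = np.sin(days / 15.0)             # hypothetical trajectory
obs = true_log_inc + 0.1 * rng.standard_normal(days.size)

grid = np.arange(0.0, 100.0, 1.0)              # prediction grid
K = sq_exp_kernel(days, days) + 0.1**2 * np.eye(days.size)
Ks = sq_exp_kernel(grid, days)
post_mean = Ks @ np.linalg.solve(K, obs)       # GP posterior mean

rmse = np.sqrt(np.mean((post_mean - np.sin(grid / 15.0)) ** 2))
print(f"posterior-mean RMSE vs truth: {rmse:.3f}")
```

The posterior mean tracks the underlying curve closely despite the noise, which is the property that lets successive Ct cross sections pin down an incidence curve of arbitrary shape.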
… and here is a gif that reminds me of a nematode worm. Every week we add on a new cross section of Cts and accurately track true incidence (in simulation, red line). 11/12
Conclusion: we are generating loads of (semi-)quantitative data in the form of Cts. We can harness these to get unbiased estimates of the epidemic trajectory. Hopefully these ideas will help public health surveillance efforts and the interpretation of data in the light of new variants. 12/12


"NO LONGER BEST IN THE WORLD"
UNDP's new Human Development Report includes a new (separate) index: the Planetary pressures-adjusted HDI (PHDI). The news in Norway is that its position drops from #1 to #16 because of this, while Ireland rises from #2 to #1.
Why?

https://t.co/aVraIEzRfh


Check out Norway's 'Domestic Material Consumption'. Its fossil fuels are no different from Ireland's; what's different is this huge 'non-metallic minerals' category.
(Note also the jump in 1998, suggesting data problems.)
https://t.co/5QvzONbqmN


In Norway's case, it looks like the apparent-consumption equation (production + imports - exports) for non-metallic minerals is dominated by production: extraction of material in Norway.
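The accounting identity mentioned above can be made concrete in a couple of lines (the tonnages below are made up for illustration, not Norway's actual statistics):

```python
def apparent_consumption(production: float, imports: float, exports: float) -> float:
    """Apparent domestic material consumption, in the same units as the inputs."""
    return production + imports - exports

# Illustrative made-up figures: when domestic extraction dwarfs trade flows,
# it dominates the total, as claimed for Norway's non-metallic minerals.
dmc = apparent_consumption(production=80.0, imports=5.0, exports=3.0)
print(dmc)
```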


And here we see that this production of non-metallic minerals is sand, gravel and crushed rock for construction. So it's about Norway's geology.
https://t.co/y6rqWmFVWc


Norway drops 15 places on the PHDI list not because of its CO₂ emissions (fairly high, at 41st in the world per capita), but because of its geology: it shifts a lot of rock whenever it builds anything.
https://t.co/hXlo8qgkD0
Looks like a classic case of PCR cross-contamination.
They had two fabricated samples (SRX9714436 and SRX9714921) on the same PCR run, alongside Lung07. They did not perform metagenomic sequencing on the “feces”, and they did not get a positive oral or anal swab from anywhere in their sampling. Feces come from the anus, so if the feces were positive the anal swabs must also have been positive. Clearly the signal got there after the nucleic acids had been extracted, and came from very low-level degraded RNA that was mutagenized by the Taq.
See https://t.co/yKXCgiT29w for SRX9714921 and SRX9714436.
Human + mouse in the positive SRA; human in both of them. Seeing human and mouse in identical proportions across three different sequencing runs (PRJNA573298, A22, SRX9714436) is a pretty strong indication that the originals were already contaminated with human and mouse from the very beginning, and that this contamination stems from dishonesty in the sample-handling process, which prescribed spiking samples into ACE2-HEK293T/A549, Vero E6, and human lung xenograft mice.

The “lineages” they claimed to have found aren’t mutational lineages at all: all the mutations seen on these sequences are unique to each specific sequence, and are the result of RNA degradation and of Taq polymerase errors accumulated during the nested PCR process.


Recently, the @CNIL issued a decision regarding the GDPR compliance of an unknown French adtech company named "Vectaury". It may seem like small fry, but the decision has potential wide-ranging impacts for Google, the IAB framework, and today's adtech. It's thread time! 👇

It's all in French, but if you're up for it you can read:
• Their blog post (lacks the most interesting details):
https://t.co/PHkDcOT1hy
• Their high-level legal decision: https://t.co/hwpiEvjodt
• The full notification: https://t.co/QQB7rfynha

I've read it so you needn't!

Vectaury was collecting geolocation data in order to create profiles (e.g. people who often go to this or that type of shop) to power ad targeting. They operate through embedded SDKs and ad bidding, making them invisible to users.

The @CNIL notes that profiling based on geolocation presents particular risks, since it reveals people's movements and habits. Because it is risky, the processing requires consent; consent will be the heart of their assessment.

Interesting point: they justify the decision in part because of how many people COULD be targeted in this way (rather than how many have — though they note that too). Because it's on a phone, and many have phones, it is considered large-scale processing no matter what.