Just gonna leave this here. When we released the Alternative Influence Network report (@beccalew is the author), many were critical of the humble recommendation that social media companies should review accounts as they gained popularity.

Becca wrote, “In a media environment consisting of networked influencers, YouTube must respond with policies that account for influence and amplification, as well as social networks.” This recommendation was simple and clear, and I told every company about its implications.
I wonder if we would be in this situation today if some of the more prominent disinformation voices had supported this recommendation, instead of saying that deplatforming threatened free speech.

Too busy trying to spot a bot maybe? Too worried about declining data stockpiles?
It’s abhorrent to have been arguing for simple policy fixes for years and to only see support for them when hell touches down for the white middle class. BIPOC and women have been organizing for decades to get policies enforced for community safety online.
Instead of learning from their work and policy recommendations and doing everything we can as researchers to help get these shared concerns on the table, I see white men rebranding as “disinformation,” “extremism,” and “conspiracy” experts.

It’s bumming me out.
Most repeat the same lines: that Q believers are deluded and can be saved. To everyone saying that deplatforming just makes it harder for you to find extremists, I wish you could hear yourselves.

The problem is the design of social media as a content delivery system.
The same values of openness and scale that built these companies’ wealth reinforced the growth of white supremacist and conspiracist ideologies. It took a decade for that model to give us Trump.

The only way to talk society off the ledge is to work on smaller scales.
We need to build our communication system differently. I highly recommend following @ColorOfChange @BrandingBrandi @changeterms @culturejedi @mediajustice @stevenrenderos @womenindisinfo @ReFrameMentor @jonathan_c_ong @hypervisible @fightfortheftr @gabriellelim @lotus_ruan
The list continues @RMAjayi @dalitdiva @EqualityLabs @marylgray @nandoodles @sjjphd @JacquieSMason @BridgetMarie @LionsWrite @eramanujam @EvanFeeney @WideAsleepNima
And of course, stay with the trouble caused by insufficient infrastructure w/ @safiyanoble @ubiquity75 @sivavaid @EmmaLBriant @stacyewood @drbrittparis @IrenePasquetto @sarahbmyers @sobieraj @TarletonG @YochaiBenkler @JonasKaiser @nancybaym @zephoria @wphillips49 @meredithdclark
And more from those who care about technologies disarming doublespeak: @dude_crooks @drbethcoleman @LizCarolan @lizlosh @ruha9 @alondra @LatoyaPeterson @sassycrass @mutalenkonde @Combsthepoet @LeonYin @JuliaTicona1 @JuliaAngwin @EthanZ @alicetiara @YESHICAN @Data4BlackLives
And the anthropologists & sociologists who care about the people embedded in the systems @BiellaColeman @LimnMagazine @AaronPanofsky @gleemie @KeeangaYamahtta @tressiemcphd @alexhanna @ztsamudzi @xuhulk
And then there was one, @amelia_acker, who has kept all the receipts on presidential tweets since before it was cool. I cannot wait to see her work on the archives and their enemies.
The bottom line is we don’t need to give it a fancy name like “circuit breaker” or “break glass” because it’s the simplest, most logical policy going forward: do not reward hate, violence, and incitement with money and clout. Instead, amplification needs curation. #10kLibrarians
People seem to really like lists, so I’ll keep going. For different ways of thinking about design and history of tech: @lnakamur @schock @histoftech @PopTechWorks @merbroussard @kmtamurphy @shannonmattern @cjack @cmcilwain @aschrock @DocDre @whkchun
https://t.co/uwzKlwUX7J


Recently, the @CNIL issued a decision regarding the GDPR compliance of an unknown French adtech company named "Vectaury". It may seem like small fry, but the decision has potential wide-ranging impacts for Google, the IAB framework, and today's adtech. It's thread time! 👇

It's all in French, but if you're up for it you can read:
• Their blog post (lacks the most interesting details):
https://t.co/PHkDcOT1hy
• Their high-level legal decision: https://t.co/hwpiEvjodt
• The full notification: https://t.co/QQB7rfynha

I've read it so you needn't!

Vectaury was collecting geolocation data in order to create profiles (e.g. people who often go to this or that type of shop) to power ad targeting. They operate through embedded SDKs and ad bidding, which makes them invisible to users.
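
To make that invisibility concrete, here is a minimal sketch (my own illustration, not Vectaury's code) of how an in-app SDK can attach location to an OpenRTB-style bid request. The field names follow the public OpenRTB object model; the values, types, and everything else are invented:

```typescript
// Hypothetical illustration: an OpenRTB-style bid request in which an
// in-app SDK attaches precise geolocation. Field names (device.ifa,
// device.geo.lat/lon/type) follow the public OpenRTB 2.x object model;
// the values are made up.
interface Geo {
  lat: number;   // latitude
  lon: number;   // longitude
  type: number;  // 1 = GPS/location services in OpenRTB
}

interface BidRequest {
  id: string;
  device: {
    ifa: string; // advertising identifier, stable across apps
    geo: Geo;
  };
}

const request: BidRequest = {
  id: "example-bid-1",
  device: {
    ifa: "38400000-8cf0-11bd-b23e-10b96e40000d",
    geo: { lat: 48.8566, lon: 2.3522, type: 1 }, // user's current position
  },
};

// Every ad opportunity broadcasts a payload like this to bidders; over time
// the (ifa, geo) pairs are enough to reconstruct movements and habits
// without the user ever seeing the company's name.
```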

The @CNIL notes that profiling based on geolocation presents particular risks, since it reveals people's movements and habits. Because it is risky, the processing requires consent — this will be the heart of their assessment.

Interesting point: they justify the decision in part because of how many people COULD be targeted in this way (rather than how many have — though they note that too). Because the processing happens on phones, and nearly everyone has a phone, it is considered large-scale processing no matter what.
The YouTube algorithm that I helped build in 2011 still recommends the flat earth theory by the *hundreds of millions*. This investigation by @RawStory shows some of the real-life consequences of this badly designed AI.


This spring at SxSW, @SusanWojcicki promised "Wikipedia snippets" on debated videos. But they didn't put them on flat earth videos, and instead @YouTube is promoting merchandise such as "NASA lies - Never Trust a Snake". 2/


A few examples of flat earth videos that were promoted by YouTube #today:
https://t.co/TumQiX2tlj 3/

https://t.co/uAORIJ5BYX 4/

https://t.co/yOGZ0pLfHG 5/
A brief analysis and comparison of the CSS for Twitter's PWA vs Twitter's legacy desktop website. The difference is dramatic and I'll touch on some reasons why.

Legacy site *downloads* ~630 KB CSS per theme and writing direction.

6,769 rules
9,252 selectors
16.7k declarations
3,370 unique declarations
44 media queries
36 unique colors
50 unique background colors
46 unique font sizes
39 unique z-indices

https://t.co/qyl4Bt1i5x


PWA *incrementally generates* ~30 KB CSS that handles all themes and writing directions.

735 rules
740 selectors
757 declarations
730 unique declarations
0 media queries
11 unique colors
32 unique background colors
15 unique font sizes
7 unique z-indices

https://t.co/w7oNG5KUkJ


The legacy site's CSS is what happens when hundreds of people directly write CSS over many years. Specificity wars, redundancy, a house of cards that can't be fixed. The result is extremely inefficient and error-prone styling that punishes users and developers.

The PWA's CSS is generated on-demand by a JS framework that manages styles and outputs "atomic CSS". The framework can enforce strict constraints and perform optimisations, which is why the CSS is so much smaller and safer. Style conflicts and unbounded CSS growth are avoided.
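
A minimal sketch of the atomic CSS idea (not Twitter's actual framework, just the core mechanism): every unique declaration becomes one tiny class, emitted once and reused everywhere:

```typescript
// Minimal sketch of atomic CSS generation: one class per unique declaration,
// deduplicated on demand. Not Twitter's framework — just the core idea.
const cache = new Map<string, string>(); // declaration -> class name
const rules: string[] = [];

function atomic(declaration: string): string {
  let cls = cache.get(declaration);
  if (cls === undefined) {
    cls = `r-${cache.size.toString(36)}`;  // short, generated name
    cache.set(declaration, cls);
    rules.push(`.${cls}{${declaration}}`); // emit the rule exactly once
  }
  return cls;
}

// Two "components" sharing declarations reuse the same classes:
const button = ["color:#fff", "background:#1da1f2", "font-size:15px"].map(atomic);
const link   = ["color:#fff", "font-size:15px"].map(atomic);

console.log(button.join(" ")); // "r-0 r-1 r-2"
console.log(link.join(" "));   // "r-0 r-2" — nothing new emitted
console.log(rules.join("\n")); // the entire generated stylesheet so far
```

Because the output is flat, single-purpose classes, specificity stays constant and the stylesheet grows with the number of unique declarations rather than the number of components.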
There has been a lot of discussion about negative emissions technologies (NETs) lately. While we need to be skeptical of assumed planetary-scale engineering and wary of moral hazard, we also need much greater RD&D funding to keep our options open. A quick thread: 1/10

Energy system models love NETs, particularly for very rapid mitigation scenarios like 1.5C (where the alternative is zero global emissions by 2040)! More problematically, they also like tons of NETs in 2C scenarios where NETs are less essential.
https://t.co/M3ACyD4cv7 2/10


In model world the math is simple: very rapid mitigation is expensive today, particularly once you get outside the power sector, and beyond a point technological advancement may make NETs deployed later cheaper than near-term mitigation. 3/10
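
A back-of-the-envelope illustration of that model logic (the numbers and the 5% discount rate are made up for illustration, not taken from any specific model):

```typescript
// Illustrative only: why cost-minimising models defer to NETs.
// Numbers are invented; real integrated assessment models are far richer.
const discountRate = 0.05;   // assumed model discount rate
const abateNowCost = 150;    // $/tCO2 abated today (assumed)
const netsCostIn2060 = 100;  // $/tCO2 removed in 2060 (assumed)
const years = 40;

const presentValueOfNets = netsCostIn2060 / Math.pow(1 + discountRate, years);
console.log(presentValueOfNets.toFixed(1)); // ≈ 14.2 $/tCO2 in present-value terms

// 14 < 150, so the optimiser happily books removals in 2060 instead of
// cutting emissions now — which is exactly the moral hazard in the next tweet.
```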

This is, of course, problematic if the aim is to ensure that particular targets (such as well-below 2C) are met; betting that a "backstop" technology that does not exist today at any meaningful scale will save the day is a hell of a moral hazard. 4/10

Many models go completely overboard with CCS, seeing a future resurgence of coal and a large share of global primary energy paired with carbon capture. For example, here is what the MESSAGE SSP2-1.9 scenario shows: 5/10


TradingView scanner process -

1. Open TradingView in your browser and select the stock screener in the bottom-left corner.

2. Sort by percentage gain change (you can see today's highest gainers).


3. Start with the 6% to 20% gainers and look at each chart on the daily timeframe. (For F&O selection you can choose 1% to 4%.)

4. Manually select the stocks that are about to give an all-time-high breakout (BO) or a 52-week-high breakout, or have already given one (see the sketch after this list for one way to automate this filter).

5. You can also select stocks that are about to give a range breakout or have already given one.

6. If, on the 15-minute chart 📊, a stock is sustaining near the breakout zone or after the breakout, add it to your watchlist.

7. The next day, if any stock shows momentum, you can take a trade in it with proper risk management (RM).
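
A hedged sketch of the steps 3–4 filter, if you wanted to automate it (the Bar shape, the thresholds, and the toy data are my assumptions, not TradingView's API):

```typescript
// Hypothetical sketch of the manual filter in steps 3-4: keep 6-20% daily
// gainers trading near their 52-week high. Data shape and values are toy.
interface Bar { close: number; high: number; }

function isCandidate(bars: Bar[]): boolean {
  if (bars.length < 2) return false;
  const last = bars[bars.length - 1];
  const prev = bars[bars.length - 2];

  const gainPct = ((last.close - prev.close) / prev.close) * 100;
  const yearHigh = Math.max(...bars.slice(-252).map(b => b.high)); // ~52 weeks of sessions

  const strongGainer = gainPct >= 6 && gainPct <= 20;  // step 3's band
  const nearBreakout = last.close >= 0.97 * yearHigh;  // within ~3% of the 52-week high
  return strongGainer && nearBreakout;
}

// Toy usage with two invented symbols and two days of data each:
const history = new Map<string, Bar[]>([
  ["STOCK_A", [{ close: 100, high: 101 }, { close: 108, high: 109 }]],
  ["STOCK_B", [{ close: 100, high: 140 }, { close: 103, high: 104 }]],
]);

const watchlist = Array.from(history.entries())
  .filter(([, bars]) => isCandidate(bars))
  .map(([symbol]) => symbol);

console.log(watchlist); // ["STOCK_A"]: +8% and at its (toy) 52-week high
```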

This looks very easy and simple, but

you will be amazed to see its results if you follow proper risk management.

I 4x'd my capital by trading only momentum stocks.

I will keep sharing such learning threads 🧵 for you 🙏💞🙏

Keep learning / keep sharing 🙏
@AdityaTodmal