The policy applies even if the violation occurred outside a group.


Facebook is taking new steps to crack down on group members who break its rules, even when they have done so in other parts of the app.

Under the new policy, Facebook will downrank content posted in groups by users who have broken its rules even if they have done so elsewhere on the company’s platform. The new rule will apply to any group member who has had a post removed for violating one of Facebook’s Community Standards in the previous 90 days. Those who have had multiple posts removed will have “more severe” demotions.

“This measure will help reduce the ability of members who break our rules from reaching others in their communities, and builds on the existing restrictions placed upon members who violate Community Standards,” Facebook wrote in a statement. The company notes that it already has policies that restrict people who repeatedly break rules within groups.

This post is a collaboration with Dr. Augustine Fou, a seasoned digital marketer who helps marketers audit their campaigns for ad fraud and provides alternative performance optimization solutions, and Jodi Masters-Gonzales, Research Director at Beacon Trust Network and a doctoral student in Pepperdine University’s Global Leadership and Change program, where her research sits at the intersection of data privacy and ethics, public policy, and the digital economy.

The ad industry has gone through a massive transformation since the advent of digital. This multi-billion dollar industry started out as a way for businesses to bring market visibility to products and services more effectively, while evolving features that would allow advertisers to garner valuable insights about their customers and prospects. Fast-forward 20 years, and the promise of better ad performance and delivery of the right customers has also created and enabled a rampant environment of massive data sharing, more invasive personal targeting, and higher incidences of consumer manipulation than ever before. It has evolved over time, underneath the noses of business and industry, with benefits realized by a relative few. How did we get here? More importantly, can we curb the path of a burgeoning industry to truly protect people’s data rights?

There was a time when advertising inventory was finite. Long before digital, buying impressions was primarily done through offline publications, television and radio. Premium slots commanded higher CPM (cost per thousand) rates to obtain the most coveted consumer attention. The big advertisers with the deepest pockets largely benefitted from this space by commanding the largest reach.

Many people reject scientific expertise and prefer ideology to facts. Lee McIntyre argues that anyone can and should fight back against science deniers.
Watch the Q&A: https://youtu.be/2jTiXCLzMv4
Lee’s book “How to Talk to a Science Denier” is out now: https://geni.us/leemcintyre.

“Climate change is a hoax—and so is coronavirus.” “Vaccines are bad for you.” Many people may believe such statements, but how can scientists and informed citizens convince these ‘science deniers’ that their beliefs are mistaken?

Join Lee McIntyre as he draws on his own experience, including a visit to a Flat Earth convention as well as academic research, to explain the common themes of science denialism.

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and an Instructor in Ethics at Harvard Extension School. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan (Ann Arbor). He has taught philosophy at Colgate University (where he won the Fraternity and Sorority Faculty Award for Excellence in Teaching Philosophy), Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston.

This talk was recorded on 24 August 2021.

“The use of organophosphate esters in everything from TVs to car seats has proliferated under the false assumption that they’re safe,” said Heather Patisaul, lead author and neuroendocrinologist at North Carolina State University. “Unfortunately, these chemicals appear to be just as harmful as the chemicals they’re intended to replace but act by a different mechanism.”


Summary: Exposure to even low levels of common chemicals called organophosphate esters can harm IQ, memory, learning, and brain development overall in young children.

Source: Green Science Policy Institute

Chemicals increasingly used as flame retardants and plasticizers pose a larger risk to children’s brain development than previously thought, according to a commentary published today in Environmental Health Perspectives.

The research team reviewed dozens of human, animal, and cell-based studies and concluded that exposure to even low levels of the chemicals—called organophosphate esters—may harm IQ, attention, and memory in children in ways not yet looked at by regulators.

Advanced Nuclear Power Advocacy For Humanity — Eric G. Meyer, Founder & Director, Generation Atomic


Eric G. Meyer is the Founder and Director of Generation Atomic (https://generationatomic.org/), a nuclear advocacy non-profit he founded after hearing about the promise of advanced nuclear reactors and deciding to devote his life to saving and expanding the use of atomic energy.

Eric worked as an organizer on several political, union, and issue campaigns while in graduate school for applied public policy, taking time off to attend the climate talks in Paris and sing opera about atomic energy.

Eric began his full-time nuclear work in May of 2016 with Environmental Progress, organizing marches, rallies, and trainings in California, New York, and Illinois, before leaving to found Generation Atomic in late 2016.

In only a short period of time, Generation Atomic has made significant progress in the world of nuclear advocacy. Over the last year they’ve held several advocacy trainings at conferences, participated in the March for Science, talked to tens of thousands of voters, and carried the banner for nuclear energy at the climate talks in Morocco, Germany, and Poland.

Without a new legal framework, they could destabilize societal norms.


Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could become combined with chemical, biological, radiological and nuclear weapons themselves.

Accounting and consulting firm PwC told Reuters on Thursday it will allow all its 40,000 U.S. client services employees to work virtually and live anywhere they want in perpetuity, making it one of the biggest employers to embrace permanent remote work.

The policy is a departure from the accounting industry’s rigid attitudes, known for encouraging people to put in late nights at the office. Other major accounting firms, such as Deloitte and KPMG, have also been giving employees more choice to work remotely in the face of the COVID-19 pandemic.

A dress worn this week by Democratic Congresswoman Alexandria Ocasio-Cortez (D-NY), which bore the message “tax the rich,” set off a wave of debate over how best to address wealth inequality, as Congress weighs a $3.5 trillion spending bill that includes tax hikes on corporations and high-earning individuals.

The debate coincides with the ongoing pandemic, during which billionaires, many of whom are tech company founders, have added $1.8 trillion in wealth while consumers have come to depend increasingly on services like e-commerce and teleconferencing, according to a report released last month by the Institute for Policy Studies.

In a new interview, artificial intelligence expert Kai-Fu Lee — who worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT) — attributed the rise of wealth inequality in part to the tech boom of recent decades, predicting that the trend will worsen in coming years with the continued emergence of AI.