Big Brother News Watch

May 18, 2023

Fertility App Fined $200,000 for Leaking Customers’ Health Data + More

Fertility App Fined $200,000 for Leaking Customers’ Health Data

CNN Business reported:

The company behind a popular fertility app has agreed to pay $200,000 in federal and state fines after authorities alleged that it had shared users’ personal health information for years without their consent, including to Google and to two companies based in China.

The app, known as Premom, will also be banned from sharing personal health information for advertising purposes and must ensure that the data it shared without users’ consent is deleted from third-party systems, according to the Federal Trade Commission, along with the attorneys general of Connecticut, the District of Columbia and Oregon.

The sharing of personal data allegedly affected Premom’s hundreds of thousands of users from at least 2018 until 2020, and violated a federal regulation known as the Health Breach Notification Rule, according to an FTC complaint against Easy Healthcare, Premom’s parent company.

Montana Is First State to Ban TikTok Over National Security Concerns

Ars Technica reported:

Montana became the first state to ban TikTok yesterday. In a press release, the state’s Republican governor, Greg Gianforte, said the move was a necessary step to keep Montanans safe from Chinese Communist Party surveillance. The ban will take effect on January 1, 2024.

Even before Gianforte signed Montana Senate Bill 419 into law, critics argued that banning TikTok in the state would likely be both technically and legally unfeasible. Technically, because Montana doesn’t control all Internet access in the state, the ban may be difficult to enforce. And legally, it must hold up to First Amendment scrutiny, because, critics argue, Montanans have the right to access information and express themselves using whatever communications tool they prefer.

There are also possible complications with the ban because it prevents “mobile application stores from offering TikTok within the state.” Under the law, app stores like Google Play or the Apple App Store could be fined up to $10,000 a day for allowing TikTok downloads in the state. To many critics, that looks like Montana illegally trying to regulate interstate commerce. And a trade group that Apple and Google help fund recently said that preventing access to TikTok in a single state would be impossible, The New York Times reported.

Supreme Court Hands Twitter, Google Wins in Internet Liability Cases

The Hill reported:

The Supreme Court on Thursday punted the issue of determining when internet companies are protected under a controversial liability shield, instead resolving the case on other grounds. The justices were considering two lawsuits in which families of terrorist attack victims said Google and Twitter should be held liable for aiding and abetting ISIS, leading to their relatives’ deaths.

Google asserted that Section 230 of the Communications Decency Act, enacted in 1996 to prevent internet companies from being held liable for content posted by third parties, protected the company from all of the claims.

But rather than wading into the weighty Section 230 dispute — which internet companies say allows them to serve users and offers protection from a deluge of litigation — the court on Thursday found that neither company was liable on the underlying claims, and so neither needed the shield’s protections.

Section 230 protects internet companies, of all sizes, from being held legally responsible for content posted by third parties. The protection has faced criticism from both sides of the aisle, with Democrats largely arguing it allows tech companies to host hate speech and misinformation without consequences and some Republicans alleging it allows tech companies to make content moderation decisions with an anti-conservative bias.

AI Is Getting Better at Reading Our Minds

Mashable reported:

AI is getting way better at deciphering our thoughts, for better or worse. Scientists at the University of Texas published a study in Nature Neuroscience describing how they used functional magnetic resonance imaging (fMRI) and GPT-1, an AI system that preceded ChatGPT, to create a non-invasive mind decoder that can detect brain activity and capture the essence of what someone is thinking.

To train the AI, researchers placed three people in fMRI scanners and played entertaining podcasts for them to listen to, including The New York Times’ Modern Love and The Moth Radio Hour. The scientists used transcripts of the podcasts to track brain activity and figure out which parts of the brain were activated by different words.

The decoder, however, is not fully developed yet. The AI only works if it is trained on brain-activity data from the person it is used on, which limits how widely it can be deployed. There is also the barrier of the fMRI scanners themselves, which are large and expensive. Plus, scientists found that the decoder can get confused if people decide to ‘lie’ to it by choosing to think about something other than what is required.

These obstacles may be a positive, as the potential to create a machine that can decode people’s thoughts raises serious privacy concerns; there is currently no way to limit the tech’s use to medicine, and it is easy to imagine the decoder being used for surveillance or interrogation. So, before AI mind-reading develops further, scientists and policymakers need to seriously consider the ethical implications and enforce laws that protect mental privacy, to ensure this kind of tech is used only to benefit humanity.

How Addictive Tech Hacks Your Brain

Gizmodo reported:

Addictive cravings have become an everyday part of our relationship with technology. At the same time, comparing these urges to drug addictions can seem like hyperbole. Addictive drugs, after all, are chemical substances that need to physically enter your body to get you hooked — but you can’t exactly inject an iPhone. So how similar could they be?

More than you might think. From a neuroscientific perspective, it’s not unreasonable to draw parallels between addictive tech and cocaine. That’s not to say that compulsively checking your phone is as harmful as a substance use disorder, but the underlying neural circuitry is essentially the same, and in both situations, the catalyst is — you guessed it — dopamine.

We’ve established that the cycle of cocaine addiction is kicked off by chemically hotwiring our reward system. But your phone and computer (hopefully) aren’t putting any substances into your body. They can’t chemically hotwire anything.

Instead, they hook us by targeting our natural triggers for dopamine release. In contrast to hotwiring a car, this is like setting up misleading road signs, tricking a driver into unwittingly turning in the direction you indicate. Research into how this works on a biological level is still in its infancy, but there are a number of mechanisms that seem plausible.

In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels

The New York Times reported:

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted them.

Essentially, Meta was giving its A.I. technology away as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they needed to quickly build chatbots of their own.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Colorado Senator Proposes Special Regulator for Big Tech and AI

The Hill reported:

Colorado Sen. Michael Bennet (D) will introduce legislation Thursday to establish a new regulator for the tech industry and the development of artificial intelligence (AI), which experts predict will have wide-ranging impacts on society.

Bennet introduced his Digital Platform Commission Act a year ago, but he has updated it to make its coverage of AI even more explicit.

The updated bill would require the commission to establish an age-appropriate design code and age verification standards for AI.

It would establish a Federal Digital Platform Commission to regulate digital platforms consistent with the public interest: encouraging the creation of new online services, providing consumer benefits, preventing harmful concentrations of private power, and protecting consumers from deceptive, unfair or abusive practices.

AI Pioneer Yoshua Bengio: Governments Must Move Fast to ‘Protect the Public’

Financial Times reported:

Advanced artificial intelligence systems such as OpenAI’s GPT could destabilize democracy unless governments take quick action and “protect the public”, an AI pioneer has warned.

Yoshua Bengio, who won the Turing Award alongside Geoffrey Hinton and Yann LeCun in 2018, said the recent rush by Big Tech to launch AI products had become “unhealthy,” adding he saw a “danger to political systems, to democracy, to the very nature of truth.”

Bengio is the latest in a growing faction of AI experts ringing alarm bells about the rapid rollout of powerful large language models. His colleague and friend Hinton resigned from Google this month to speak more freely about the risks AI poses to humanity.

In an interview with the Financial Times, Bengio pointed to society’s increasingly indiscriminate access to large language models as a serious concern, noting the lack of scrutiny currently being applied to the technology.

Aviation Advocacy Group Files Class Action Lawsuit Against Feds Over Mandatory COVID Shots

The Epoch Times reported:

Free to Fly Canada, an advocacy group for pilots and aviation employees, has filed a class action lawsuit against the federal government over mandatory workplace vaccination policies.

The organization said in a May 17 news release that it has chosen representative plaintiffs Greg Hill, Brent Warren, and Tanya Lewis, and will argue that the rights of thousands of Canadian aviation employees were violated by Transport Canada’s regulation, Interim Order Respecting Certain Requirements for Civil Aviation Due to COVID-19, No. 43. The legal action names as defendants the federal government and the minister of transport.

The class action is open to unvaccinated employees who were affected by the Transport Canada Order, whether they were suspended, put on unpaid leave, fired, or coerced into early retirement, said Free to Fly. Interested aviation employees can sign up on the Free to Fly website.

The group said this is the first time a case of this type has been brought in Canadian courts. The class action now has to be certified by a court to proceed, which is standard procedure.

May 17, 2023

An AI Chatbot May Be Your Next Therapist. Will It Actually Help Your Mental Health? + More

An AI Chatbot May Be Your Next Therapist. Will It Actually Help Your Mental Health?

KFF Health News reported:

In the past few years, 10,000 to 20,000 apps have stampeded into the mental health space, offering to “disrupt” traditional therapy. With the frenzy around AI innovations like ChatGPT, chatbots that claim to provide mental healthcare are on the horizon.

The numbers explain why: Pandemic stresses led to millions more Americans seeking treatment. At the same time, there has long been a shortage of mental health professionals in the United States; more than half of all counties lack psychiatrists. Given the Affordable Care Act’s mandate that insurers offer parity between mental and physical health coverage, there is a gaping chasm between demand and supply.

Unfortunately, in the mental health space, evidence of effectiveness is lacking. Few of the many apps on the market have independent outcomes research showing they help; most haven’t been scrutinized at all by the FDA. Though marketed to treat conditions such as anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies, many warn users (in small print) that they are “not intended to be medical, behavioral health or other healthcare service” or “not an FDA cleared product.”

There are good reasons to be cautious in the face of this marketing juggernaut.

YouTube’s Recommendations Send Violent and Graphic Gun Videos to 9-Year-Olds, Study Finds

Associated Press reported:

When researchers at a nonprofit that studies social media wanted to understand the connection between YouTube videos and gun violence, they set up accounts on the platform that mimicked the behavior of typical boys living in the U.S.

They simulated two nine-year-olds who both liked video games. The accounts were identical, except that one clicked on the videos recommended by YouTube, and the other ignored the platform’s suggestions.

The account that clicked on YouTube’s suggestions was soon flooded with graphic videos about school shootings, tactical gun training videos and how-to instructions on making firearms fully automatic. One video featured an elementary school-age girl wielding a handgun; another showed a shooter using a .50 caliber gun to fire on a dummy head filled with lifelike blood and brains. Many of the videos violate YouTube’s own policies against violent or gory content.

Along with TikTok, the video-sharing platform is one of the most popular sites for children and teens. Both sites have been criticized in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm. Critics of social media have also pointed to the links between social media, radicalization and real-world violence.

OpenAI CEO Sam Altman Raises $100 Million for Worldcoin Crypto Project, Which Uses ‘Orb’ to Scan Your Eye: Report

FOXBusiness reported:

OpenAI CEO Sam Altman has reportedly raised nearly $100 million for his next big project, a cryptocurrency called Worldcoin that will verify users’ unique identities by scanning their eyes.

After revolutionizing artificial intelligence with ChatGPT, Altman has set his sights on creating an “inclusive” global cryptocurrency that will be available to anyone who verifies their “unique personhood” with the “Orb,” an imaging device that takes a picture of an iris pattern.

The company says its crypto token will be “globally and freely distributed” to users who sign up for a wallet — a sort of universal basic income with crypto. Worldcoin hopes to incentivize people to adopt its currency by giving away free coins; wide adoption, the company claims, will in turn make the coins more valuable and useful.

Users who submit to biometric iris scans are assigned a “World ID” that enables them to receive 25 free Worldcoin tokens at launch — provided they are located in a place where the Worldcoin token is available.

Musk: There’s a Chance AI ‘Goes Wrong and Destroys Humanity’

The Hill reported:

Tesla CEO Elon Musk is warning that it’s possible emerging artificial intelligence (AI) technology “goes wrong and destroys humanity.”

“There’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity,” Musk told CNBC anchor David Faber.

“Hopefully, that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong.”

Musk called the tech a “double-edged sword” and stressed it’s hard to predict what happens next with the new tools.

Excelsior Pass Costs Ballooned to $64 Million and Keep Rising

Times Union reported:

They called it the Excelsior Pass. The first-in-the-nation app would provide a “secure and streamlined” way for people to get into live events and restaurants without digging out their vaccine cards. It would be built by IBM, and it would cost $2.5 million.

The state decided early on to outsource the work on the app. While that aspect of the project didn’t change, the stated cost to taxpayers definitely did, and quickly: In June 2021, the New York Times noted that the pass would actually cost $17 million; a follow-up report two months later indicated that price tag had grown to as much as $27 million.

More than two years after Gov. Andrew M. Cuomo’s initial announcement, the payments to private companies for the app have multiplied well beyond that figure, even as the waning of the pandemic means the Excelsior Pass is rarely if ever used — which raises the related question of how many booster shots are needed to be “up to date” according to the app.

The current cost is $64 million, a previously unreported sum that includes funds paid to IBM as well as to two consultants on the project, Boston Consulting Group and Deloitte, according to records obtained by the Times Union.

The state continues to pay IBM $200,000 a month for data storage services related to the Excelsior Pass. In addition, in March the state spent $2.2 million for “application development” of the Excelsior Pass.

This Lawmaker Stands Out for His AI Expertise. Can He Help Congress?

The Washington Post reported:

Rep. Jay Obernolte has said it many times: The biggest risk posed by artificial intelligence is not “an army of evil robots with red laser eyes rising to take over the world.”

It’s the “less obvious” and more mundane issues, such as data privacy, antitrust and AI’s potential to influence human behavior. All these take precedence over the hypothetical notion of AI ending humanity, the California Republican says.

Obernolte would know: He’s one of a handful of lawmakers with a computer science degree — including graduate research on AI in some of its earliest stages. With the rise of generative AI applications like ChatGPT — what some observers have dubbed a “big bang” moment — Obernolte has emerged as a leading expert in Congress on how the technology works and what lawmakers should worry about.

More broadly, he is blunt about the paucity of tech-savvy lawmakers. “We need more computer science professionals in Congress given the complexity of some of the technological issues we grapple with,” he told The Post.

Police to Use Live Facial Recognition in Cardiff During Beyoncé Concert

The Guardian reported:

Police will use live facial recognition technology in Cardiff during the Beyoncé concert on Wednesday, despite concerns about racial bias and human rights.

A spokesperson for the force said the technology would be used in the city center, not at the concert itself. In the past, police use of live facial recognition (LFR) in England and Wales had been limited to special operations such as football matches or the coronation, when there was a crackdown on protesters.

Daragh Murray, a senior lecturer in law at Queen Mary University of London, said the normalization of invasive surveillance capability at events such as a concert was concerning and was taking place without any real public debate.

“I think things like live facial recognition are the first step, but I think they’re opening the doors to the use of permanent facial recognition across city-wide surveillance camera networks,” he said.

May 16, 2023

Senator Rick Scott Introducing Legislation to Require Parental Consent for Kids’ AI Use + More

Rick Scott Introducing Legislation to Require Parental Consent for Kids’ AI Use

The Hill reported:

Sen. Rick Scott (R-Fla.) introduced legislation Tuesday that would require children to get parental consent to use artificial intelligence (AI) technology.

The AI Shield for Kids (ASK) Act would prevent children from accessing AI features on social media sites without the consent of a parent or guardian.

Scott’s bill would also require the Federal Communications Commission (FCC) to issue rules barring social media platforms from charging a fee or mandating a paid subscription before allowing either parents or children to remove AI features from products minors use.

The proposed legislation comes as Scott and other lawmakers are scheduled to speak on the issue in a Homeland Security and Governmental Affairs Committee hearing. OpenAI CEO Sam Altman, the head of the artificial intelligence company that makes the popular ChatGPT tool, will testify before Congress at the meeting.

Campaigners Welcome Kate Winslet’s Plea About Online Safety and Children

The Guardian reported:

Online safety campaigners have welcomed Kate Winslet’s call on “people in power” to criminalize harmful digital content during her Bafta acceptance speech, as the U.K. parliament debates legislation to rein in social media platforms.

The actor won the television award on Sunday for her portrayal of a mother whose teenage daughter suffers from mental health problems as a result of viewing damaging online content.

Accepting the award for best leading actress for the Channel 4 drama I Am Ruth, in which she acted alongside her real-life daughter, Mia Threapleton, Winslet said: “We want our children back.”

She added: “For young people who have become addicted to social media and its darker sides, this does not need to be your life. To people in power, and to people who can make change, please, criminalize harmful content.”

ChatGPT Chief Says Artificial Intelligence Should Be Regulated by a U.S. or Global Agency

Associated Press reported:

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention “will be critical to mitigating the risks of increasingly powerful” AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

Pressed on his own worst fear about AI, Altman mostly avoided specifics. But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Public Trusts Childhood Vaccines but Support for School Mandates Wanes

Axios reported:

Americans are much more confident in routine childhood vaccines than COVID-19 shots, but support for vaccine requirements in schools has slipped from pre-pandemic levels, according to a new Pew Research Center study.

Why it matters: Responses from the study of more than 10,000 adults suggest that vaccine hesitancy around COVID hasn’t fueled significantly wider anti-vax sentiment. But the share who say parents should be able to decide not to vaccinate their children now stands at 28%, up 12 points from four years ago.

What they found: 88% of Americans believe the benefits of childhood vaccines for measles, mumps and rubella outweigh the risks, compared to 62% who have the same views about COVID-19 vaccines.

Concerns about potential dangers are more pronounced among mothers than fathers: about half of mothers with a child under age 18 rate the risk of side effects from MMR vaccines as medium or high, a full 15 percentage points higher than the share of fathers who say the same.

House Unanimously Passes Bill Limiting Federal Government Access to Phone, Email Data

The Epoch Times reported:

The U.S. House of Representatives on May 15 passed a bill that would place restrictions on the federal government’s access to personal cell phone and email data. The bill passed the lower chamber in a unanimous 412–0 vote, including 212 Republicans and 200 Democrats. The legislation was sponsored by Reps. Scott Fitzgerald (R-Wis.) and Jerry Nadler (D-N.Y.).

Under current law, prosecutors can request email and cell phone records from those not suspected of any criminal behavior. Often, these requests to service providers are accompanied by Non-Disclosure Orders (NDOs).

Nadler said in a press release that “abuse of secrecy orders is not limited to Congress — schools, local governments, Fortune 500 companies and countless others have had their data swept up by investigators looking to sidestep the basic protections afforded to Americans in criminal investigations.”

“The federal government has abused its authority to access the personal data of individuals under investigation,” Fitzgerald said in a press release.

WHO Warns Against Bias, Misinformation in Using AI in Healthcare

Reuters reported:

The World Health Organization called for caution on Tuesday in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement the data used to train AI may be biased and generate misleading or inaccurate information and the models can be misused to generate disinformation.

Eyes on the Poor: Cameras, Facial Recognition Watch Over Public Housing

The Washington Post reported:

In public housing facilities across America, local officials are installing a new generation of powerful and pervasive surveillance systems, imposing an outsize level of scrutiny on some of the nation’s poorest citizens.

Housing agencies have been purchasing the tools — some equipped with facial recognition and other artificial intelligence capabilities — with no guidance or limits on their use, though the risks are poorly understood and little evidence exists that they make communities safer.

As cameras have gotten smarter, their use in public housing is becoming a flashpoint in the national debate over facial recognition. States including Alabama, Colorado and Virginia have passed laws limiting the use of facial recognition by law enforcement, recognizing that these tools have been shown to produce false matches — particularly when scanning women and people of color.

Israel Repeals Quarantine Mandate for Those Sick With COVID

Haaretz reported:

Starting Tuesday, Israel’s Health Ministry will not extend its mandatory quarantine policy for those sick with the coronavirus, removing all remaining restrictions issued at the start of the pandemic.

​The ministry’s announcement follows the World Health Organization’s (WHO) decision last week to repeal the global state of emergency for the COVID-19 pandemic.

According to the WHO’s announcement, although this decision is indicative of the progress the world has made on this issue, “the coronavirus is here to stay,” and will continue to be considered a pandemic similar to AIDS.

May 15, 2023

TSA Testing Facial Recognition Technology at 16 Airports Across U.S. to Enhance Airport Security, Travel + More

TSA Testing Facial Recognition Technology at 16 Airports Across U.S. to Enhance Airport Security, Travel

FOXBusiness reported:

The Transportation Security Administration (TSA) is testing the use of facial recognition technology at select airports across the country to enhance security and speed up procedures.

The pilot program is currently in place at some TSA checkpoints at 16 airports in Baltimore, Washington, D.C., Atlanta, Boston, Dallas, Denver, Detroit, Las Vegas, Los Angeles, Miami, Orlando, Phoenix, Salt Lake City, San Jose, and Gulfport-Biloxi and Jackson in Mississippi.

TSA says the pilot program is voluntary and accurate, but critics have raised concerns about bias in facial recognition technology and possible repercussions for passengers who want to opt out.

In a February letter to TSA, five senators — four Democrats and an Independent who is part of the Democratic caucus — demanded the agency stop the program, saying, “Increasing biometric surveillance of Americans by the government represents a risk to civil liberties and privacy rights.”

Online Age Verification Is Coming, and Privacy Is on the Chopping Block

The Verge reported:

A spate of child safety rules might make going online in a few years very different, and not just for kids. Throughout 2022 and 2023, numerous states and countries have been exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

In the U.S. and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple U.S. federal bills are on the table, and so are laws in other countries, like the U.K.’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re over 18. Another, highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

‘Biden Bucks Stop Here’: DeSantis Touts Florida’s Digital Currency Ban, Warns of ‘Cashless Society’

FOXBusiness reported:

The team for Florida Gov. Ron DeSantis on Saturday released a new video first obtained by FOX Business that follows the governor signing legislation to ban the use of a central bank digital currency, or CBDC, in the Sunshine State.

One day before the video’s release, DeSantis signed legislation prohibiting a federally adopted CBDC from being recognized as money within Florida’s Uniform Commercial Code. The new law also implements protections against a central global currency by prohibiting any CBDC issued by a foreign reserve or sanctioned central bank, and calls on other states to join in passing similar legislation.

The Federal Reserve defines CBDC as “a digital form of central bank money widely available to the general public,” meaning the central bank — the Federal Reserve in the case of the U.S. — would be liable for the money rather than a commercial bank.

Proponents argue a CBDC would offer greater convenience and efficiency while ensuring safety and liquidity. According to DeSantis and other critics, however, such a monetary system would give the federal government unprecedented power over consumers and businesses.

Ex-ByteDance Exec Claims CCP ‘Maintained’ Access to U.S. Data

Axios reported:

The Chinese Communist Party “maintained supreme access” to data belonging to TikTok’s parent company ByteDance, including data stored in the U.S., a former top executive claimed in a lawsuit Friday. Why it matters: The allegations come as federal officials weigh the fate of the social media giant in the U.S. amid growing concerns over national security and data privacy.

Driving the news: In a wrongful dismissal suit filed in San Francisco Superior Court, Yintao Yu said ByteDance “has served as a useful propaganda tool for the Chinese Communist Party.”

Yu, whose claim says he served as head of engineering for ByteDance’s U.S. offices from August 2017 to November 2018, alleged that inside the Beijing-based company, the CCP “had a special office or unit, which was sometimes referred to as the ‘Committee’.”

The “Committee” didn’t work for ByteDance but “played a significant role,” in part by “guid[ing] how the company advanced core Communist values,” the lawsuit claims. The CCP could also access U.S. user data via a “backdoor channel in the code,” the suit states.

Instagram and Facebook Are Using Fears of a TikTok Ban to Poach Influencers

Gizmodo reported:

Meta and other social media firms chasing TikTok’s short-form video success are fanning the flames of ban rhetoric while simultaneously vying to boost the attractiveness of their own TikTok clones with new monetization features. The goal? Steal away as many fearful TikTokers as possible.

Brewing fears of an impending TikTok ban could send some creators packing their digital bags and heading back to Instagram and Facebook, whether they like it or not. Digital marketing experts and content creators speaking with Gizmodo said a scenario where TikTok is made inaccessible in the U.S. would result in diminished competition in the social media landscape broadly and a far less favorable environment for creators.

It could also be a godsend for Meta and other companies racing to keep pace with TikTok’s meteoric ascent. Whether or not a national ban ever actually materializes is almost irrelevant; the fear alone is enough to fuel their recruitment drive.

The U.K.’s Secretive Web Surveillance Program Is Ramping Up

Wired reported:

The U.K. government is quietly expanding and developing a controversial surveillance technology that could be capable of logging and storing the web histories of millions of people.

Official reports and spending documents show that in the past year, U.K. police have deemed the testing of a system that can collect people’s “internet connection records” a success, and have started work to potentially introduce the system nationally. If implemented, it could hand law enforcement a powerful surveillance tool.

Critics say the system is highly intrusive, and that officials have a history of not properly protecting people’s data. Much of the technology and its operation is shrouded in secrecy, with bodies refusing to answer questions about the systems.

Surveillance Contractor Monitored Vaccine Skeptics, Report Says

Reclaim the Net reported:

Flashpoint, a surveillance contractor for the FBI, infiltrated chatrooms of airline industry groups that opposed vaccine mandates, according to a report by investigative journalist Lee Fang.

In the past, Flashpoint infiltrated Islamic terror groups. But it has since focused on vaccine skeptic groups and other domestic political groups.

Fang analyzed a webinar presentation that Flashpoint held for clients last year. In the presentation, Flashpoint analyst Vlad Cuiujuclu demonstrated the company’s methods of identifying and infiltrating Telegram chat groups.

Describing the presentation, Fang wrote: “‘In this case, we’re searching for a closed channel of U.S. Freedom Flyers,’ said Cuiujuclu. ‘It’s basically a group that opposed vaccination and masks.’”

Fired Teachers Who Refused COVID Vaccine to Get Full Reinstatement and Back Pay

The Epoch Times reported:

Three Rhode Island teachers who were fired for refusing the COVID-19 vaccine have been offered their jobs back with full back pay after reaching a settlement with the school district.

Teachers Stephanie Hines, Brittany DiOrio, and Kerri Thurber were terminated from their positions in Barrington Public Schools after they requested religious exemptions when the district mandated that employees get the vaccine.

Last week, their attorney, Greg Piccirilli, and the school district said they had reached a settlement allowing the teachers to return to their jobs. Each is also entitled to $33,333 in damages along with her back pay: under the agreement, DiOrio will get $150,000, Thurber $128,000, and Hines $65,000.

Meanwhile, Barrington Public Schools told the Providence Journal that it reached the settlement because the litigation would likely put a drag on the school’s resources and funding. It attempted to distance itself from its own vaccine mandate by claiming that it was dealing with the spread of COVID-19, although there is a growing body of evidence that shows the vaccines do not prevent the spread of the virus.

Nurse Fired for Refusing COVID Vaccine Reflects as National Emergency Ends: ‘It’s Frustrating to Look Back’

FOX News reported:

A New York nurse who lost her job because she refused to take the COVID-19 vaccine for medical reasons says it is “frustrating” looking back at her firing as the Biden administration marks the end of the health emergency.

Jenna Viani-Pascale says she decided to forgo the coronavirus vaccine because she had suffered a stroke at 36 years old. She was dismissed from her job after she was unable to get a medical exemption for her condition or to take her case to court, as lawyers were hesitant to take a stand. Now, with President Biden having signed a bill Monday to end the national emergency response to COVID-19, the nurse is re-living her ordeal.

Viani-Pascale expressed frustration that medical professionals were not given an opportunity to make their own decision about taking the vaccine.

“I haven’t heard from my old job,” she said. “I haven’t been offered a position back. They’re still requiring it at the hospitals in New York. So, it’s just a very frustrating thing for somebody who really loved her job.”