🕵️ Prying eyes
AI may enable mass spying. Plus: Pornhub blocks users in two states, and an au revoir to passwords.
It’s widely known that our data is collected and used to track us, both by advertisers and by the companies we rely on to keep in touch with friends and family, post to or browse social media, and manage our tasks and calendars.
But our data can also be used for surveillance by governments or bad actors who want to understand our movements and activity, according to a Slate article by Bruce Schneier, a security technologist and fellow at Harvard’s Berkman Klein Center for Internet & Society.
“Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it,” writes Schneier.
Schneier notes that while technology can aid data tracking and spying efforts, the bottleneck has always been the human labor needed to sift through and analyze the acquired data. “…someone still has to sort through all the conversations,” he writes, adding that:
A.I. is about to change that. Summarization is something a modern generative A.I. system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.
When used for surveillance, AI will sort through data far more efficiently than any human could, marking a turning point in online surveillance and spying.
Schneier also reckons that AI will usher in an age of mass spying 👀, in which data is accessible and searchable at any time for nearly every person taking part in society.
Similarly, mass spying will change the nature of spying. All the data will be saved. It will all be searchable, and understandable, in bulk. Tell me who has talked about a particular topic in the past month, and how discussions about that topic have evolved. Person A did something; check if someone told them to do it. Find everyone who is plotting a crime, or spreading a rumor, or planning to attend a political protest.
This may sound like the plot of a futuristic movie, but AI will sharpen the ability to track and monitor the activities of millions of people.
🙈 Pornhub’s parent company Aylo has blocked visitors in Montana and North Carolina due to ID verification laws that went into effect on January 1.
Rather than try to make its users jump through hoops to view its content, Pornhub’s parent company has blocked viewers in Montana and North Carolina altogether, as it has in other states with similar legislation. Anyone in those states visiting an Aylo site, which includes Pornhub, Redtube, Brazzers, YouPorn, and more, is now met with a video and text message from the network delivered by performer Cherie DeVille, explaining that the site is blocked from view in their state.
No matter how you feel about porn, ID verification laws come with some privacy worries. The Electronic Frontier Foundation (EFF) notes that ID verification “can invade our privacy and aggravate existing social inequalities.”
🙅🔒 Goodbye, passwords! Who needs them? There’s a better alternative to saving or remembering hundreds of passwords and keeping them secure. Passkeys allow for sign-ins via biometrics, such as FaceID or fingerprint verification. Richard Lawler writes that passkeys may gain momentum in 2024, sending passwords to their death.
Passwordless logins with passkeys, tied to biometrics or other security options like hardware keys, seem ready to secure our logins. Google is already prompting users to add passkeys to their online security…, along with Apple, while password managers, including 1Password, are inching toward supporting a zero-password lifestyle.
After decades of forgetting passwords, resetting passwords, and having our passwords stolen, passkeys offer a reprieve from the drain and insecurity of passwords.
💰 Lastly, scammers are using AI to fake kidnappings. A post from Semafor explains how AI is being used to make it appear that an individual has been kidnapped, allowing scammers to demand a ransom for their “release.”
The proliferation of artificial intelligence is making it easier for scammers to create fake kidnapping scenarios. Sixth Tone reported that telecom scammers in China have adopted AI technology to deepfake a victim’s face and voice, which they then use to orchestrate extortion schemes. Last year, the U.S. Federal Trade Commission issued a consumer alert warning people to be vigilant against AI-powered phone scams. A mother from Arizona testified in the U.S. Senate in June that she received a call from someone who had cloned her daughter’s voice. She warned lawmakers that artificial intelligence “corrodes our confidence in what is real and what is not.”
We knew AI could help scammers, but few predicted the tech would be used to stage realistic fake kidnappings. It has to be a traumatic experience for those perceived to be kidnapped and for their families.