
Warrant Protections against Police Searches of Our Data


The cell phones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven't caught up to that reality. That might change soon.

This week, the Supreme Court will hear a case with profound implications for your security and privacy in the coming years. The Fourth Amendment's prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world. The Supreme Court can either update current law to reflect the world, or it can further solidify an unnecessary and dangerous police power.

The case centers on cell phone location data and whether the police need a warrant to get it, or if they can use a simple subpoena, which is easier to obtain. Current Fourth Amendment doctrine holds that you lose all privacy protections over any data you willingly share with a third party. Your cellular provider, under this interpretation, is a third party with whom you've willingly shared your movements, 24 hours a day, going back months -- even though you don't really have any choice about whether to share with them. So police can request records of where you've been from cell carriers without any judicial oversight. The case before the court, Carpenter v. United States, could change that.

Traditionally, information that was most precious to us was physically close to us. It was on our bodies, in our homes and offices, in our cars. Because of that, the courts gave that information extra protections. Information that we stored far away from us, or gave to other people, afforded fewer protections. Police searches have been governed by the "third-party doctrine," which explicitly says that information we share with others is not considered private.

The Internet has turned that thinking upside-down. Our cell phones know who we talk to and, if we're talking via text or e-mail, what we say. They track our location constantly, so they know where we live and work. Because they're the first and last thing we check every day, they know when we go to sleep and when we wake up. Because everyone has one, they know whom we sleep with. And because of how those phones work, all that information is naturally shared with third parties.

More generally, all our data is literally stored on computers belonging to other people. It's our e-mail, text messages, photos, Google docs, and more -- all in the cloud. We store it there not because it's unimportant, but precisely because it is important. And as the Internet of Things computerizes the rest of our lives, even more data will be collected by other people: data from our health trackers and medical devices, data from our home sensors and appliances, data from Internet-connected "listeners" like Alexa, Siri, and your voice-activated television.

All this data will be collected and saved by third parties, sometimes for years. The result is a detailed dossier of your activities more complete than any private investigator -- or police officer -- could possibly collect by following you around.

The issue here is not whether the police should be allowed to use that data to help solve crimes. Of course they should. The issue is whether that information should be protected by the warrant process that requires the police to have probable cause to investigate you and get approval by a court.

Warrants are a security mechanism. They prevent the police from abusing their authority to investigate someone they have no reason to suspect of a crime. They prevent the police from going on "fishing expeditions." They protect our rights and liberties, even as we willingly give up our privacy to the legitimate needs of law enforcement.

The third-party doctrine never made a lot of sense. Just because I share an intimate secret with my spouse, friend, or doctor doesn't mean that I no longer consider it private. It makes even less sense in today's hyper-connected world. It's long past time the Supreme Court recognized that a months-long history of my movements is private, and that my e-mails and other personal data deserve the same protections, whether they're on my laptop or on Google's servers.

This essay previously appeared in the Washington Post.

Details on the case. Two opinion pieces.

I signed on to two amicus briefs on the case.


The Motherboard Guide to Not Getting Hacked


Editor's note: This is Motherboard's comprehensive guide to digital security, which will be regularly updated and replaces some of our old guides. It is also available as a printable PDF. It was last updated on November 14, 2017.

One of the questions we are asked most often at Motherboard is “how can I prevent myself from getting hacked?”

Because living in modern society necessitates putting an uncomfortably large amount of trust in third parties, the answer is often “not a whole lot.” Take, for example, the massive Equifax hack that affected roughly half of the American population: Few people voluntarily signed up for the service, and yet their information was stolen anyway.

Hackers steal hundreds of millions of passwords in one swoop and occasionally cause large-scale blackouts. The future is probably not going to get better, with real-life disasters caused by internet-connected knick-knacks, smart home robots that could kill you, flying hacker laptops, and the dangers of hackers getting your genetic data. Meanwhile, an ever-growing and increasingly passive surveillance apparatus that has trickled down to state and local police is an ever-present threat to our digital privacy.

That doesn’t mean it’s hopeless out there. There are lots of things you can do to make it much more difficult for hackers or would-be surveillers to access your devices and accounts, and the aim of this guide is to give you clear, easy-to-follow steps to improve your digital security. There are, broadly speaking, two types of hacks: Those that are unpreventable by users, and those you can generally prevent. We want to help you mitigate the damage of the first and prevent the second from happening.

You, as an individual user, can’t do anything to prevent your email provider, or the company that holds your financial details, from getting hacked. But you can avoid phishing attacks that will let a hacker get into your individual email account, and you can also prevent a password obtained in a larger hack from being reused on another, separate account you have.

This guide isn’t comprehensive and it’s not personalized; there is no such thing as “perfect security” and there are no one-size-fits-all solutions. Instead, we hope this will be a jumping-off point for people looking to batten down the hatches on their digital lives.

That’s why we’ve tried to keep this guide as accessible as possible, but if you run into any lingo you don’t know, there’s a glossary at the end of this guide to help out.

This guide is the work of many people on Motherboard staff, both past and present, and has been vetted by several of our sources, to whom we owe a great debt. Large sections of it were written by Lorenzo Franceschi-Bicchierai, Joseph Cox, Sarah Jeong, and Jason Koebler, but the tips within it have grown out of years of writing and research on digital security by dozens of reporters and infosec professionals. Consider it a forever-ongoing work in progress that will receive at least one big annual refresh, as well as smaller updates when major new vulnerabilities are exposed. Special thanks to Matt Mitchell of Crypto Harlem and Eva Galperin of the Electronic Frontier Foundation for reviewing parts of this guide.

Anyways, enough. This is the Motherboard Guide to Not Getting Hacked.

Image: Koji Yamamoto & Seth Laupus

THREAT MODELING

Everything in this guide starts with “threat modeling,” which is hacker lingo for assessing how likely it is you are going to get hacked or surveilled. When thinking about how to protect your digital communications, it is imperative that you first think about what you’re protecting and who you’re protecting it from. “Depends on your threat model” is a thing infosec pros say when asked questions about whether, say, Signal is the best messaging app or Tor is the most secure browser. The answer to any question about the “best” security is, essentially: “it depends.”

No one security plan is identical to any other. Which protections you adopt depends entirely on who may try to get into your accounts or read your messages. The bad news is that there are no silver bullets (sorry!), but the good news is that most people have threat models in which they probably don’t have to live like a paranoid recluse to be reasonably safe online.

So before doing anything else, you should consider your threat model. Basically, what are you trying to protect, and who are you trying to protect it from?

The Electronic Frontier Foundation recommends asking yourself these five questions when threat modeling:

  • What do you want to protect?
  • Who do you want to protect it from?
  • How likely is it that you will need to protect it?
  • How bad are the consequences if you fail?
  • How much trouble are you willing to go through in order to try to prevent those?

Is your threat an ex who might want to go through your Facebook account? Then making sure they don't know your password is a good place to start. (Don't share critical passwords with people, no matter who they are; if we're talking Netflix, make sure you never reuse that password elsewhere.) Are you trying to keep opportunistic doxers from pulling together your personal information—such as your birthday—which in turn can be used to find other details? Well, keeping an eye on what sort of stuff you publish on social media would be a good idea. And two-factor authentication (more on that below) would go a long way toward thwarting more serious criminals. If you are an activist, a journalist, or otherwise have reason to fear that government, state, or law enforcement actors want to hack or surveil you, the steps you must take to protect yourself are significantly different than if you’re trying to keep plans for a surprise party secret from your best friend.

Overestimating your threat can be a problem too: if you start using obscure custom operating systems, virtual machines, or anything else technical when it's really not necessary (or you don't know how to use it), you’re probably wasting your time and might be putting yourself at risk. At best, even the most simple tasks might take a while longer; in a worst-case scenario, you might be lulling yourself into a false sense of security with services and hardware that you don’t need, while overlooking what actually matters to you and the actual threats you might be facing.

In certain places, this guide will offer specific steps to take if you have a threat model that includes sophisticated actors. But, in general, it’s designed for people who want to know the basics of how to strengthen their digital security. If your threat model includes NSA hackers or other state-sponsored groups like Fancy Bear, we recommend that you speak to a trained professional about your specific situation.

KEEP YOUR APPS UP TO DATE

Probably the most important and basic thing you can do to protect yourself is to update the software you use to its newest version. That means using an updated version of whatever operating system you're using, and updating all your apps and software. It also means updating the firmware on your router, connected devices, and any other gadgets you use that can connect to the internet.

Bear in mind that, on your computer, you don't necessarily have to use the latest iteration of an operating system. In some cases, even slightly older versions of operating systems get security updates. (Unfortunately, this is no longer the case with Windows XP—stop using it!) What's most important is that your OS is still receiving security updates, and that you're applying them.

So if you come away with just one lesson from this guide, let it be this: update, update, update; patch, patch, patch.

Many common cyberattacks take advantage of flaws in outdated software such as old web browsers, PDF readers, or spreadsheet and word-processing tools. By keeping everything up to date, you have a way lower chance of becoming a victim of malware, because responsible manufacturers and software developers quickly patch their products after new hacks are seen in the wild.

Hacking is often a path of least resistance: you go after the easy, soft targets first. For example, the hackers behind the destructive ransomware outbreak known as WannaCry hit victims who had not applied a security update that had been available for weeks. In other words, the hackers knew they would get in because the victims had not changed the locks on their doors even though the keys had already been made available to everyone.

PASSWORDS

We all have too many passwords to remember, which is why some people just reuse the same ones over and over. Reusing passwords is bad because if, for example, a hacker gets control of your Netflix or Spotify password, they can then use it to get into your ridesharing or bank account to drain your credit card. Even though our brains aren't actually that bad at remembering passwords, it's almost impossible to remember dozens of unique, strong passwords.

The good news is that the solution to these problems is already out there: password managers. These are apps or browser extensions that keep track of passwords for you, automatically help you create good passwords, and simplify your online life. If you use a manager, all you have to remember is one password, the one that unlocks the vault of your other passwords.

That one password better be good though. Forget about capital letters, symbols, and numbers. The easiest way to make a secure master password is to make a passphrase: several random but pronounceable—and thus easier to memorize—words. For example: floodlit siesta kirk barrel amputee dice (don’t use this one though, we just burned it.)
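To make the idea concrete, here is a minimal sketch of how a tool can generate such a passphrase in Python. The short word list is a stand-in we made up for illustration; a real tool would draw from a much larger list, such as the EFF's 7,776-word diceware list.

```python
import secrets

# Stand-in word list for illustration only; real diceware lists have
# thousands of entries, which is where a passphrase's strength comes from.
WORDLIST = [
    "floodlit", "siesta", "kirk", "barrel", "amputee", "dice",
    "canyon", "velvet", "orbit", "plume", "tundra", "waffle",
]

def make_passphrase(n_words=6):
    """Pick words using a cryptographically secure random source."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

print(make_passphrase())
```

The `secrets` module matters here: Python's ordinary `random` module is predictable and unsuitable for anything password-related. Each word drawn from a 7,776-word list adds about 12.9 bits of entropy, so six words give roughly 77 bits, which is plenty for a master password.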

Once you have that you can use unique passwords made of a lot of characters for everything else, as long as you create them with a password manager and never reuse them. The master password is better as a passphrase because it's easier to memorize, and the other passwords don't need to be memorized because the manager will remember them.

Intuitively, you might think it's unwise to store your passwords on your computer or with a third-party password manager. What if a hacker gets in? Surely it's better to keep them all in my head? Well, not really: the risk of a crook reusing a shared password that has been stolen from somewhere else is far greater than the risk of a sophisticated hacker independently targeting your database of passwords. For example, if you used the same password across different websites, and that password was stolen in the massive Yahoo! hacks (which affected 3 billion accounts), it could easily be reused on your Gmail, Uber, Facebook, and other accounts. Some password managers store your passwords encrypted in the cloud, so even if the company gets hacked, your passwords will be safe. For example, the password manager LastPass has been hacked at least twice, but no actual passwords were stolen because the company stored them securely. LastPass remains a recommended password manager despite those incidents. Again, it's all about understanding your own threat model.

So, please, use one of the many password managers out there, such as 1Password, LastPass, or KeePass. There's no reason not to do it. It will make you—and the rest of us!—safer, and it'll even make your life easier.

And if your employer asks you to change passwords periodically in the name of security, please tell them that's a terrible idea. If you use a password manager, two-factor authentication (see below), and have unique strong passwords for every account there's no need to change them all the time—unless there’s a breach on the backend or your password is stolen somehow.

TWO-FACTOR AUTHENTICATION

Having unique, strong passwords is a great first step, but even those can be stolen. So for your most important accounts (think your email, your Facebook, Twitter accounts, your banking or financial accounts) you should add an extra layer of protection known as two-factor (or two-step or 2FA) authentication. A lot of services these days offer two-factor, so it doesn’t hurt to turn it on in as many places as you can. See all the services that offer 2FA at twofactorauth.org.

By enabling two-factor you'll need something more than just your password to log into those accounts. Usually, it's a numerical code sent to your cellphone via text messages, or it can be a code created by a specialized app (which is great if your cellphone doesn't have coverage at the time you're logging in), or a small, physical token like a USB key (sometimes called a YubiKey, named after the most popular brand).

There's been a lot of discussion in the last year about whether text messages can be considered a safe “second factor.” Activist Deray McKesson's phone number was hijacked, meaning hackers could then have the extra security codes protecting accounts sent straight to them. And the National Institute of Standards and Technology (NIST), the US government agency that develops technical standards, including for security, recently discouraged the use of SMS-based 2FA.

The attack on Deray was made possible by “social engineering.” In this case, a customer service rep was tricked by a criminal into making Deray vulnerable. The attack involved getting his phone company to issue a new SIM card to the attackers, allowing them to take over his phone number. That means when they used his first factor (the password) to login to his account, the second factor code was sent directly to them. This is an increasingly common hack.

It's hard to defend against an attack like that, and it’s a sad truth that there is no form of perfect security. But there are steps you can take to make these attacks harder, and we detail them below, in the mobile security section.

SMS-based two-factor can be gamed, and it’s also possible to leverage vulnerabilities in the telecommunications infrastructure that carries our conversations, or to use what’s known as an IMSI-catcher, otherwise known as a Stingray, to sweep up your cellphone communications, including your verification texts. We don’t write this to scare you; it’s just worth noting that while all forms of two-factor authentication are better than nothing, you should use an authentication app or, better yet, a physical key if at all possible.

You should, if the website allows it, use another 2FA option that isn't SMS-based, such as an authentication app on your smartphone (for example, Google Authenticator, DUO Mobile, or Authy), or a physical token. If that option is available to you, it's a great idea to use it.
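Those authenticator apps aren't magic: they implement TOTP (RFC 6238), deriving a short-lived code from a shared secret and the current time, which is why they keep working with no cell coverage. A minimal sketch using only Python's standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    if timestamp is None:
        timestamp = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(timestamp) // step                 # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The secret is the base32 string hidden inside the QR code you scan when enrolling. Because the code depends only on that secret and the clock, nothing travels over SMS for a SIM hijacker to intercept.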

Don't use Flash: Flash is historically one of the most insecure pieces of software that's ever been on your computer. Hackers love Flash because it's had more holes than Swiss cheese. The good news is that much of the web has moved away from Flash, so you no longer need it to enjoy a fully featured, rich browsing experience. So consider purging it from your computer, or at least change your browser settings so you have to click to run Flash each time.

Do use antivirus: Yes, you've heard this before. But it's still (generally) true. Antiviruses are actually, and ironically, full of security holes, but if you're not a person who's at risk of getting targeted by nation-state hackers or pretty advanced criminals, having antivirus is still a good idea. Still, antivirus software is far from a panacea, and in 2017 you need more than that to be secure. Also, be aware that antivirus software, by definition, is incredibly invasive: it needs to reach deep into your computer to be able to scan and stop malware. This reach can be abused. For example, the US government accuses Kaspersky Lab, maker of some of the best-known antivirus software in the world, of having passed sensitive documents from one of its customers to the Russian government.

Do use some simple security plugins: Sometimes, all a hacker needs to pwn you is to get you to the right website—one laden with malware. That's why it's worth using some simple, install-and-forget-about-it plugins such as adblockers, which protect you from malware embedded in advertising presented by the shadier sites you may wander across on the web, and sometimes even legitimate sites. (We'd naturally prefer if you whitelisted Motherboard since web ads help keep our lights on.)

Another useful plugin is HTTPS Everywhere, which forces your connection to be encrypted (when the site supports it). This won't save you if the website you're going to has malware on it, but in some cases, it helps prevent hackers from redirecting you to fake versions of that site (if there's an encrypted one available), and will generally protect against attackers trying to tamper with your connection to the legitimate one.

Do use a VPN: Virtual Private Networks are a secure channel between your computer and the internet. If you use a VPN, you first connect to the VPN, and then to the whole internet, adding a layer of security and privacy. If you're using the internet in a public space, be it a Starbucks, an airport, or even an Airbnb apartment, you are sharing it with people you don't know. And if some hacker is on the same network, they can mess with your connection and potentially your computer. It’s worth doing some research on VPNs before getting one, because some are much better than others (most of the free ones don’t do a great job of protecting your privacy). We recommend Freedome, Private Internet Access, or, if you’re a technical user, Algo.

Do disable macros: Hackers can use Microsoft Office macros inside documents to spread malware to your computer. It's an old trick, but it's back in vogue to spread ransomware. Disable them!

Do back up files: We're not breaking any news here, but if you're worried about hackers destroying or locking your files (such as with ransomware), then you need to back them up. Ideally, back up to an external hard drive while you're disconnected from the network, so that even if you get ransomware, the backup won't get infected.

Don't overexpose yourself for no reason: People love to share pretty much everything about their lives on social media. But please, we beg you, don't tweet a picture of your credit card or flight’s boarding pass, for example. More generally, it's a good mindset to realize that a post on social media is often a post to anyone on the internet who can be bothered to check your profile, even if it's guessing your home address through your running routes on a site like Strava, a social network for runners and cyclists.

Personal information such as your home address or high school (and the school’s mascot, which is a Google away) can then be used to find more information via social engineering schemes. The more personal information an attacker has, the more likely they are to gain access to one of your accounts. With that in mind, maybe consider increasing the privacy settings on some of your accounts too.

Don't open attachments without precautions: For decades, cybercriminals have hidden malware inside attachments such as Word docs or PDFs. Antiviruses sometimes stop those threats, but it's better to just use common sense: don't open attachments (or click on links) from people you don't know, or that you weren't expecting. And if you really want to do that, use precautions, like opening the attachments within Chrome (without downloading the files). Even better, save the file to Google Drive, and then open it within Drive, which is even safer because then the file is being opened by Google and not your computer.

We now live in a world where smartphones have become our primary computing devices. Not only do we use cellphones more than desktop computers, but we keep them with us pretty much all the time. It goes without saying, then, that hackers are targeting mobile phones more and more every day.

The good news is there are some basic steps and some precautions you can take to minimize the risks, and we’re going to tell you what they are.

MOBILE THREAT MODELING

Most people use passcodes, passwords, or patterns to “lock” their phones. If you don’t do this, you absolutely should! (Patterns are far easier to guess or “shoulder surf” than PINs or passcodes, however, according to a recent study.)

One of the biggest mobile threats is someone who has physical access to your phone and can unlock it. This means your security is only as good as your passcode: If at all possible, avoid giving out your code or password, and avoid using easily guessed passcodes such as your birthday or address. Even simple passcodes and passwords are great to stop pickpockets or street thieves, but not so great if what you’re worried about is an abusive partner who knows your PIN, for example.

With that in mind, here are a few basic things you can do to prevent other common threats to your cellphone.

GET AN iPHONE

Pretty much everyone in the world of cybersecurity—except perhaps the engineers working on Android—believes that iPhones are the most secure cellphones you can get. There are a few reasons why, but the main one is that iOS, Apple’s mobile operating system, is extremely locked down. Apps go through extensive checks before getting on the App Store, and there are extensive security measures in place, such as the fact that only code approved and digitally signed by Apple can run (a measure known as code-signing), and the fact that apps are limited from reaching into other apps (sandboxing). These features make it really hard for hackers to attack the most sensitive parts of the operating system. Because Apple controls the iOS infrastructure, iPhones get immediate, regular security updates and patches from Apple; critical security updates for many Android devices can take weeks or months to be pushed to users. Even the iPhone 5s, which was launched in 2013, is still supported.

So if you are paranoid, the iPhone is the most secure cellphone out of the box. But unless you have a really good reason to, do NOT jailbreak it. While the jailbreaking movement and the hackers behind it have contributed to making the iPhone more secure, jailbreaking an iPhone at this point doesn’t really provide you any feature that’s worth the increased risk. In the past, hackers have only been able to target jailbroken iPhones at scale.

Nothing is unhackable though. We know some governments are armed with million-dollar hacking tools to hack iPhones, and perhaps some sophisticated criminals might have those too. Still, get an iPhone, install the updates, and don’t jailbreak it and you’ll probably be fine.

BUT I LOVE ANDROID! FINE...

Android has become the most popular operating system in the world thanks to its decentralized, open-source nature and the fact that many handsets are available at prices much lower than iPhones. In some ways, this open-source nature was Android’s original sin: Google traded control, and thus security, for market share. As a result, critical security updates depend on carriers and device manufacturers, who have historically been lackadaisical about pushing them out.

The good news is that in the last two years this has improved a lot. Google has been pushing partners to give users monthly updates, and Google’s own flagship devices have almost the same kind of regular support that Apple provides to iPhones, as well as some of the same security features.

So your best bet is to stick to Pixels or Nexus phones, whose security doesn’t depend on anyone but Google. If you really don’t want a Google phone, these cellphones have a good track record of pushing security updates, according to Google itself.

Whatever Android phone you own, be careful what apps you install. Hackers have traditionally been very successful at sneaking malicious apps onto the Play Store, so think twice before installing a little-known app, or double-check that the app you’re installing really is the one you want. Earlier this fall, a fake version of WhatsApp was installed by more than a million Android users. Also, stick to the Play Store and avoid downloading and installing apps from third-party stores, which may very well be malicious. On most Android phones, installing third-party apps is not enabled by default; leave it that way.

To protect the data on your Android phone, make sure full disk encryption is enabled. Open your Settings app, go to “Security” and click on “Encrypt Phone” if it’s not enabled already. (If this doesn’t work on your device, Google for instructions on your specific handset).

Finally, while not mandatory, it might be a good idea to install a mobile antivirus such as Lookout or Zips. While these can be effective against criminals’ malware, they probably won’t stop government hackers.

LOCK UP THAT SIM CARD

Recently we revealed that hackers had been exploiting a nasty bug on a T-Mobile website to pull the personal data of customers in an attempt to gather data they could then use to impersonate the victims and socially engineer T-Mobile support technicians into issuing new SIM cards. These kinds of attacks, known as “SIM swapping” or “SIM hijacking,” allow hackers to take over your cellphone number, and in turn anything that’s connected to it. SIM hijacking is what makes two-factor authentication via SMS so dangerous.

Your phone number is likely the gateway to multiple other, perhaps more sensitive, parts of your digital life: your email, your bank account, your iCloud backups.

As a consumer, you can’t control the bugs that your carrier leaves open for hackers. But you can make it a bit harder for hackers to impersonate you with gullible tech support employees. The solution is easy, although not that many people know about it: a secondary password or passcode that you need to provide when you call your cellphone provider. Most US carriers now offer this option.

Call your provider and ask them to set this up for you. Motherboard confirmed that Sprint, T-Mobile, Verizon and U.S. Cellular all give customers this option. Verizon and U.S. Cellular have made this mandatory, according to their spokespeople. Of course, make sure you remember this phone password, or better yet, write it down in your password manager.

Image: Koji Yamamoto & Seth Laupus

In the wake of September 11th, the United States built out a massive surveillance apparatus, undermined constitutional protections, and limited possible recourse to the legal system.

Given the extraordinary capabilities of state surveillance in the US—as well as the capabilities of governments around the world—you might be feeling a little paranoid! It’s not just the NSA—the FBI and even local cops have more tools at their disposal to snoop on people than ever before. And there is a terrifying breadth of passive and unexpected surveillance to worry about: Your social media accounts can be subpoenaed, your emails or calls can be scooped up in bulk collection efforts, and your cell phone metadata can be captured by Stingrays and IMSI catchers meant to target someone else.

Remember, anti-surveillance is not the cure, it’s just one thing you can do to protect yourself and others. You probably aren’t the most at-risk person, but that doesn’t mean you shouldn’t practice better security. Surveillance is a complicated thing: You can practice the best security in the world, but if you’re sending messages to someone who doesn’t, you can still be spied on through their device or through their communications with other people (if they discuss the information you told them, for instance).

That’s why it’s important that we normalize good security practices: If you don’t have that much to be afraid of, it’s all the more important for you to pick up some of these tools, because doing that will normalize the actions of your friends who are, say, undocumented immigrants, or engaged in activism. Trump’s CIA Director thinks that using encryption “may itself be a red flag.” If you have “nothing to hide,” your use of encryption can actually help people at risk by obfuscating that red flag. By following this guide, you are making someone else safer. Think of it as herd immunity. The more people practice good security, the safer everyone else is.

The security tips provided earlier in this guide still apply: If you can protect yourself from getting hacked, you will have a better shot at preventing yourself from being surveilled (when it comes to surveilling iPhones, for instance, governments often have few options besides hacking the devices). But tech tools don’t solve all problems. Governments have a weapon in their hands that criminal hackers do not: the power of the law. Many of the tips in this section of the guide will help you not only against legal requests and government hacking, but also against anyone else who may be trying to spy on you.

You don’t have to turn yourself into a security expert. Just start thinking about your risks, and don’t be intimidated by the technology. Security is an ongoing process of learning. Both the threats and the tools developed to address them are constantly changing, which is one of the reasons why privacy and security advice can often seem fickle and contradictory. But the tips below are a good starting point.

THREAT MODELING (privacy and surveillance edition)

Keep in mind that different tools address different problems. Without threat modeling, it’s easy to feel overwhelmed by how many tools are out there. Threat modeling for surveillance is similar to threat modeling for hacking, but there are of course some nuances that vary in every situation.

It’s easy for some people to say “use Signal, use Tor,” and be done with it, but that doesn’t work for everyone. For example, a friend used to message people about her abusive ex-partner using the built-in Words With Friends messenger, because she knew that he read her text messages and Gchats. Words With Friends does not have a particularly secure messaging system, but in this case it was a better option than Signal or Hangouts because he didn’t think to read her messages on the game.

When it comes to state actors, it might be helpful to think of surveillance in two different forms: surveillance of metadata (who you are, who you’re talking to, when you’re talking) and surveillance of content (what you are saying). As with all things, when you dig a little deeper, it’s not as simple as that. But if you’re thinking about this for the first time, it’s a good start.

Surveillance law is complicated, but long story short, both the law and current technological infrastructure make it easier to grab metadata than content. Metadata isn’t necessarily less important or revealing than content. Say Planned Parenthood called you. Then you call your partner. Then you call your insurance. Then you call the abortion clinic. That information is going to be on your phone bill, and your telephone provider can easily give it up to the government. Your cell provider might not be recording those calls—the content is still private. But at that point, the content doesn’t matter—it would be easy for someone with the metadata alone to have a reasonable idea of what your calls were about.
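That inference can be sketched in a few lines of code. Every time, number, and label below is made up for illustration — this is just a phone-bill-shaped list, with no call content in it at all:

```python
from datetime import datetime

# Hypothetical call records -- invented times, numbers, and directory labels.
# Metadata only: who was called and when. Not a word of any conversation.
calls = [
    (datetime(2017, 3, 6, 9, 14), "555-0101", "Planned Parenthood"),
    (datetime(2017, 3, 6, 9, 40), "555-0199", "partner"),
    (datetime(2017, 3, 6, 10, 5), "555-0150", "insurance company"),
    (datetime(2017, 3, 6, 10, 31), "555-0177", "abortion clinic"),
]

# Anyone holding this bill can reconstruct the morning's story
# without ever hearing the calls themselves.
for when, number, label in calls:
    print(f"{when:%H:%M}  call to {label} ({number})")
```

The point of the toy: the sequence and timing alone tell the story, which is why "it's just metadata" is cold comfort.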

Start thinking about what is open and exposed, and what you can protect. Sometimes, you have to accept that there’s very little you can do about a particular channel of communication. If circumstances are dire, you’re going to just have to work around it.

SIGNAL

Signal is an encrypted messaging service for smartphones and desktop computers. It is, for many—but not all—people, a good option for avoiding surveillance. Because the government has the capability to intercept electronic messages while they’re being transmitted, you want to use end-to-end encryption for as many of your communications as possible.

Using Signal is easy. You can find it and install it from your phone’s app store. (In the iOS App Store and the Google Play Store, it’s called “Signal Private Messenger,” and it’s made by Open Whisper Systems.)

If you have the other person’s phone number in your contacts list, you can see them in Signal, and message them or call them. As long as the other person also has Signal, the messages automatically encrypt—all the work is invisible.

It even has a desktop app, so you can use it the way that iOS/Mac OS people use iMessage on both their phones and computers. Go to the Signal.org website and download the app for your preferred operating system. Just follow the instructions—trust us, they’re easy.

Signal also lets you set a timer for messages to automatically expire, thus deleting them from all devices. You can set the timer for all kinds of lengths, including very short ones. This is a great feature for journalists who are concerned about protecting their sources or their conversations with editors.

These are great features, and they’re part of the reason why we recommend Signal over many other end-to-end messaging apps. iMessage and WhatsApp also use end-to-end encryption, but they both have drawbacks.

We do not recommend WhatsApp, because WhatsApp is owned by Facebook and has been sharing user information with its parent company. While this is only metadata, it is ultimately a rollback of a privacy promise made when Facebook acquired WhatsApp. We think this says something negative about the company’s overall trustworthiness going forward.

It is a very good thing that Apple encrypts iMessages end-to-end. But iMessage also backs up messages to iCloud by default, which is why you can message from all your Apple devices. This is a great and fun feature, but if you’re concerned about government surveillance, remember that Apple complies with lawful government demands for data in your iCloud: “iMessage and SMS messages are backed up on iCloud for your convenience,” Apple’s privacy page states. You can turn this feature off, but in theory Apple could be forced to access the iMessages you’ve sent people who still have the feature enabled.

Signal keeps very little information. We know this, because Open Whisper Systems was subpoenaed by the government last year, and was forced to hand over information. But the information it had—by design—was pretty minimal. Signal retains phone number, account creation date, and the time of the user’s last connection to Signal servers. Yes, that’s still something, but as you can see, it’s not very much.

There are worse products to use than iMessage and WhatsApp. For example, you absolutely should avoid using Telegram for sensitive communications. And Google can read your GChats unless you take additional steps to encrypt them end-to-end. There are several other products on the market that are decent alternatives (for example, Wire), but like WhatsApp and iMessage, they’re created and maintained by for-profit companies, and we don’t know how they’re planning to monetize in the future. Signal is an open source, nonprofit project. That has its own drawbacks (for example, Signal is not as slick as iMessage, nor does it have the luxury of having a large security team behind it), so maybe donate money when you download it?

One thing that’s worth mentioning about Signal is that it requires you to associate the device with a phone number. This means the people you message will have your phone number, unless you jump through hoops to use Signal with a dummy number. There are many reasons you might want to message people without giving them your phone number, and this is one of Signal’s potential drawbacks. If this is a concern for you, consider another option.

Another thing to remember is that just because a communication is end-to-end encrypted doesn’t mean it’s invisible to the government. It just means the contents are encrypted between endpoints. You can see the message, your recipient can see the message. If it’s intercepted in transit, it’s completely garbled, and the content of your message is protected from spying eyes.

But if an “endpoint” is compromised—in other words, if your own phone is hacked or physically seized by the government, or your texting partner is screencapping your conversation—it’s game over.

Encryption doesn’t make it impossible for the government to snoop, it just makes it way more challenging. The point is that introducing friction into the equation does provide privacy.
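A toy sketch can make "garbled in transit, readable at the endpoints" concrete. This uses a trivial XOR one-time pad — nothing like the Signal Protocol's real cryptography, and every value below is invented — purely to illustrate the idea:

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each byte with the pad -- a one-time pad, the simplest possible cipher."""
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"meet me at the rally at noon"
# A shared secret known only to the two endpoints. (Signal negotiates keys
# very differently; this toy just shows why interception alone fails.)
pad = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, pad)    # what an eavesdropper on the wire sees
recovered = xor_bytes(ciphertext, pad)  # what the recipient, holding the pad, reads

assert recovered == message
```

The eavesdropper who lacks the pad holds only noise; the endpoints, who hold it, read everything — which is exactly why a compromised endpoint defeats the whole scheme.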

SOCIAL MEDIA

If you post publicly on social media, know that local police (and likely federal agencies as well) keep tabs on activists online. For example, Facebook, Instagram, and Twitter have all fed data to social media monitoring products that police departments used to track Black Lives Matter activists.

Even if you keep your privacy settings on lockdown, social media companies are subject to subpoenas, court orders, and data requests for your information. And oftentimes, they’ll fork over the information without ever notifying the user that it’s happening. For the purposes of social media, assume that everything you post is public. This doesn’t mean you should stop using social media, it just means you have to be mindful of how you use it.

If you’re an activist, consider using a pseudonym for your activism. If you post online at all, take others’ safety and privacy into consideration as well.

Who are you tagging in your posts? Are you adding location information? Who are you taking a picture of, and why? Be particularly careful with photos or posts about protests, rallies, or meetings. Facial recognition technology is fairly sophisticated now, so even if you leave people untagged, an algorithm could theoretically scan for and identify activists in a photograph of a rally. You can already see this at work in Facebook’s tag suggestions.

When you take a picture of someone at a protest, make sure that they consent, and that they know the implications of having a photo of themselves out there.

DEVICE CAMERAS AND MICROPHONES

Do you live around any cameras? If you use internet-connected security cameras inside your home, or have a webcam running, don’t leave these things unsecured. Make sure that you’ve changed any passwords from the default that they shipped with, and cover them when you’re not using them.

If you have a laptop or a smartphone, use a sticker to cover the front-facing camera. You don’t have to stop Facetiming and taking selfies, you just want to cover things up so no one’s looking at you when you don’t want them to. The Electronic Frontier Foundation sells removable laptop cover stickers (five for $5) that won’t leave a residue on your camera, so you can take it on and off whenever you need it. Consider buying several and giving them to friends who might be shorter on cash.

Finally, there is absolutely no way to make sure your microphone is not recording. If you’re concerned about being wiretapped, consider turning off your phone and putting it in the microwave (temporarily, with the microwave off), or leaving your phone in the other room. Turning your phone off alone does not necessarily protect you! And consider leaving all your devices outside of the bedroom when you have sex with your partner.

In 2012, Khadija Ismayilova, an Azeri journalist, was blackmailed with a surreptitiously filmed sex tape. The blackmailer told Ismayilova to stop publishing articles critical of the government, or else have her tape released. (Ismayilova went public, and the tape was posted on the internet.) In 2015, the Azerbaijan government sentenced her to seven and a half years in prison on tax evasion charges. She is currently out on probation.

Governments at home and abroad have used sex to blackmail dissenters. Be aware of that, and protect your privacy.

LOCK SCREEN

Put a password/passcode on your phone and your computer. Don’t rely on your thumbprint alone. The police are more likely to be able to legally compel you to use your fingerprint to open up your phone. You may have a stronger constitutional right not to speak your password.

USE OTR FOR CHATTING (if you have to)

It’s best to use Signal for desktop when chatting with people. But here’s another option that’s particularly useful for journalists.

Close your Gmail window and use OTR (Off The Record) instead to chat. Keep in mind that you can only use OTR if the other person is also using OTR. Mac users can install Adium; PC (and Linux) users will have to install Pidgin and the OTR plugin.

You can use your Gmail account as your chat ID. So what’s going on is that you’re engaging in Gchat, but with a layer of encryption on top. Open up a chat window and click the lock icon to begin encryption. And make sure you tweak your settings so that you’re not retaining chat logs during encrypted conversations.

Again, end-to-end only goes so far. If the other person is logging your conversations, it might not matter that you went this far. If you’re concerned, ask your friend to stop logging.

THE TOR BROWSER

Tor—which takes its name from an acronym for “The Onion Router”—scrambles your internet traffic by routing it through several layers of computers. This way, when you access a website, it can’t tell where you’re connecting from. The easiest way to use Tor is just to install the Tor Browser. It’s just like Firefox or Chrome or Internet Explorer, just a lot slower because of the privacy it provides.

Using Tor for everything will give you a big privacy boost, but it’s a bit unwieldy. Don’t, for instance, try to stream Netflix over Tor.

Evaluate your needs and figure out how much Tor you need in your life. Always remember that your IP address (which can give away where you are, and therefore, who you might be) is laid bare if you aren’t using Tor.

There are four reasons why you might want to use Tor.

  • You’re trying to keep your identity hidden.
  • You use a lot of public WiFi.
  • You’re trying to get around government censorship.
  • You are protecting the other people who use Tor.

If you’re an activist who is trying to hide their identity, you need Tor to mask your IP address. This is a limited use case scenario. For example, it’s self-defeating for me to open up Tor, log into my public Twitter account, and tweet, “What up, everyone, I’m tweeting from the Vice Media offices in New York City.” I am giving away all the information that Tor is masking for me—because when it comes down to it, in that use case scenario, I was never planning on keeping it private.

If you connect to a lot of public Wi-Fi (think Starbucks, a hotel, or the airport), though, you should use Tor. It provides benefits similar to a VPN’s, but without many of a VPN’s drawbacks (see the next section for a discussion of that).

If the United States begins to censor parts of the web, as many other governments do, Tor might be able to help you get around that. Tor certainly helps people connecting to the internet from other countries that practice internet censorship.

Finally, the thing about Tor is that the more people use it, the less trackable everyone else is. When a lot of random, unaffiliated people from all over the world use it, it becomes stronger and stronger. If you take the time to use Tor every day, you are helping people who really do need it.

A couple caveats here: Tor is not bulletproof. The government has been known to hack groups of users on Tor, just like it’s been known to hack VPN users en masse. Tor, by itself, does not make you any less likely to get hacked. Tor is for privacy, not security. And Tor is designed to make it hard to log your traffic, not impossible, so there’s always a risk that you aren’t being hidden.

The computers that make up the Tor network—the ones that your traffic bounces through—are run by volunteers, institutions, and organizations all over the world, some of whom face legal risks for doing so. They are not supposed to log the traffic that goes through them, but because it’s a volunteer network, some might. The risk is mitigated by the fact that each node only sees a snapshot of the traffic running through it, and nobody has access to both the user’s IP and their unencrypted traffic. A bad actor would have to run a very large number of Tor nodes to start logging meaningful traffic—which would be difficult—and the Tor project monitors for behavior that suggests anybody might be doing that.
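That "each node only sees a snapshot" property comes from layered encryption, and the shape of it can be sketched with a toy cipher. This uses trivial XOR "encryption" and made-up keys — real Tor negotiates proper per-circuit ciphers — purely to show why no single relay sees both who you are and what you sent:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy "encryption" for illustration only -- not Tor's actual cryptography.
    return bytes(a ^ b for a, b in zip(data, key))

request = b"GET http://example.com/"
# One invented key per relay: entry, middle, exit.
keys = [secrets.token_bytes(len(request)) for _ in range(3)]

# The client wraps the request in three layers, with the exit relay's
# layer innermost.
onion = request
for key in reversed(keys):
    onion = xor_bytes(onion, key)

# Each relay peels exactly one layer as the cell passes through. The entry
# relay knows your address but sees only ciphertext; the exit relay sees
# the request but never learns your address.
for key in keys:
    onion = xor_bytes(onion, key)

assert onion == request
```

A bad actor would need to control enough of the path to stitch those snapshots together, which is the attack the Tor Project monitors for.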

Ultimately, for the purposes of state surveillance, Tor is better than a VPN, and a VPN is better than nothing.

It’s not clear whether Tor will continue to exist into the future. Tor is run partly through grants from the government. (Like many cutting-edge technologies, Tor was originally developed by the US military.) It’s possible Tor will lose most of its funding in the very near term. Consider donating to the Tor Project.

VIRTUAL PRIVATE NETWORKS

When it comes to state surveillance, VPNs won’t help much. A VPN will obscure your IP address, but VPN companies can be subpoenaed for user information that may ultimately identify you. For example, many VPN companies keep logs of which IP addresses log on when and which sites are accessed—which can end up pinpointing you, especially if you used your credit card to pay for the subscription.

Some VPN companies claim not to log user information. You need to evaluate how much you trust these companies, and make that decision for yourself. If what you’re concerned about is government surveillance, our recommendation is that you stick with Tor.

PGP (probably isn’t worth the trouble)

The only reliable way to encrypt your email is PGP—also known as Pretty Good Privacy. However, PGP is incredibly obnoxious to use. Even PGP’s creator Phil Zimmermann has stopped using it, since he can’t use it on his phone. The problem isn’t just that you have to figure out PGP, everyone you talk to also has to figure it out. Telling someone to download Signal is a lot easier than walking them through public/private key encryption. This is where your threat model comes in handy, to help figure out if PGP is actually worth it to you.

If you absolutely must use encrypted email, this guide to PGP might be helpful. It’s tricky, so you might want to go to a crypto party and have an activist or technologist help you set it up.

PRIVATE EMAIL SERVERS (don't do it)

If 2016 did anything, it convinced everyone not to run their own private email server.

It’s true that Google and other companies have to comply with court orders for your information, including your emails. But on the other hand, Google knows how to run email servers way better than you do. Email servers are hard! Just ask Hillary Clinton.

If you are encrypting email, Google can only hand over the metadata (who’s sending to whom and subject headers). Since encrypting email is a huge pain, try to keep all your sensitive stuff away from email, and in end-to-end encrypted channels instead. Don’t abandon your third-party email account, just be aware that the government can get at what’s inside.

ENCRYPT YOUR HARD DRIVE

Good news: this isn’t as hard as it used to be!

Full-disk encryption means that once your device is locked (when it’s off, or when it’s on but showing a lock screen), the contents of your hard drive can’t be accessed without your password/key.

A lot of smartphones come with full disk encryption built in. If you own an iPhone with a recently updated operating system (like, in the last three years, really), just slap a passcode on that sucker and you’re golden.

If you own an Android phone, it might already be encrypted by default (Google Pixel is). But chances are, it’s not. There isn’t an up-to-date guide on turning on encryption on all Android devices, so you’re going to have to poke around yourself, or ask a friend. And if you own a Windows phone, god help you, because we can’t.

As for computers, things are, again, much easier than they used to be. Use your operating system’s built-in full-disk encryption option. For MacBooks running Lion or newer, just turn on FileVault.

Windows, on the other hand, is a lot more complicated. First off, some users have encryption by default. Some more users can turn it on, but it’s kind of a pain. And if you’re using Microsoft’s Bitlocker, you’re going to have to fiddle with some additional settings to make it more secure. Apple doesn’t retain the capability of unlocking your devices. Famously, if the government goes to Apple, Apple can’t just decrypt your phone for the feds, not without coming up with a hack that will affect every iPhone in the world. But Microsoft isn’t doing quite the same thing—in some cases they use what’s known as “key escrow,” meaning they can decrypt your machine—so you have to take additional steps (outlined in this article) to get that same level of protection.

You may need to resort to using VeraCrypt. A lot of older guides will say to use TrueCrypt, regardless of operating system. This is now outdated advice. VeraCrypt used to be TrueCrypt, and the story of why it isn’t anymore is a convoluted crypto soap opera with plot holes the size of Mars, and it is frankly outside the scope of this guide. Long story short, there’s nothing wrong with VeraCrypt as far as the experts can tell, but if you have the option, use the full-disk encryption your operating system already provides.

If you use Linux, your distro probably supports encryption out of the box. Follow the instructions while installing.

CREDIT CARDS

Know that credit card companies never stand up to the government. If you pay for anything using your credit card, know that the government can get that information pretty easily. And remember that once your identity touches something, there’s a chain that the government can follow all the way back.

For example, if you get a prepaid Visa gift card using your personal credit card, and pay a VPN company with that, the government can just go backwards through the chain and find your personal credit card, and then you. If you pay a VPN company with Bitcoin, but you bought the Bitcoin through a Bitcoin exchange using your personal credit card, that’s traceable as well.
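That backwards walk is just following a chain of records, one subpoena per hop. A tiny sketch — every name and payment in it is hypothetical — makes the structure obvious:

```python
# A hypothetical payment chain -- every entry here is invented for
# illustration. Each purchase records what instrument paid for it.
paid_with = {
    "VPN subscription": "prepaid Visa gift card",
    "prepaid Visa gift card": "personal credit card",
    "personal credit card": "you",
}

def trace(purchase: str) -> list:
    """Walk the chain of payments back to the originating identity,
    the way a series of records requests would."""
    chain = [purchase]
    while chain[-1] in paid_with:
        chain.append(paid_with[chain[-1]])
    return chain

print(" -> ".join(trace("VPN subscription")))
```

Adding hops (gift cards, Bitcoin bought on an exchange) only lengthens the chain; it doesn’t break it, because every hop leaves a record someone can be compelled to produce.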

This applies to anything else you use money for, like buying domains or cheap, pay-as-you-go phones, known as burners. Practically speaking, there’s not a lot you can do about this. It’s one of the reasons why we recommend Tor instead of a VPN service.

It’s also one of the reasons why it’s so hard to get a burner phone that’s really a burner. (How are you going to pay for continuing phone service without linking your name to it?) There is no easy answer here. We’re not going to pretend to be able to give good advice in this instance. If you find yourself in a situation where your life depends on staying anonymous, you’re going to need a lot more help than any internet guide.

One more thing: For now, organizations like the ACLU and NAACP have a constitutional right to resist giving up the names of donors. But your credit card or PayPal might betray you anyway. This doesn’t mean you shouldn’t donate to organizations that resist oppression and fight for civil rights and civil liberties. Rather, it makes it all the more important that you do. The more ordinary people do so, the more that individual donors are protected from scrutiny and suspicion.

SPECIAL NOTES FOR JOURNALISTS

Want to protect your sources? Your notes, your Slack chats, your Gchats, your Google Drive, your Dropbox, your recorded interviews, your transcripts, and your texts can all end up in court. Depending on what kind of court case it is, it might not matter that it’s encrypted.

Don’t wait until a lawsuit is imminent to delete all your stuff. That might be illegal, and you might be risking going to jail. Every situation is different: your notes might be necessary to get you out of trouble. So if you’re the type to hoard notes, know the risk, talk to a lawyer, and act responsibly.

THE FUTURE (?)

Which brings us to our next point: we don’t know what the future holds. This guide was written with the current technical and legal capabilities of the United States government in mind. But that might all change in the future. Strong encryption might become illegal. The United States might begin to practice internet censorship the way that China and other countries do. The government might institute a National ID policy for getting online, making it near-impossible to post anonymously.

These things are harder to enforce and implement, so they’re not likely to happen quickly.

It’s also not infeasible that the government could pressure app stores to take down Signal and other end-to-end encryption applications. This guide might only be good for so long. That’s all the more reason to become proactive against surveillance now, and to keep adapting to changing circumstances.

LOG OFF

Many public places have cameras, some spots are wired with microphones. And there’s always the possibility that you are being individually targeted for surveillance. But ultimately, it’s a lot harder to surveil someone in person than to collect the electronic communications of many people at the same time.

Take a break from the wired world and meet people in person. If you stay out of earshot, you won’t be overheard, and your words will melt into the air, unsurveilled and unrecorded.

And besides, if you’re reading this guide, chances are that you really need a hug right now.

So meet up with your friends, verify your Signal keys, and give each other a big hug. Because you’re probably both scared, and you need each other more than you need any of this technology.

GO OUT THERE AND BE SAFE

That is all for now. Again, this is just meant to be a basic guide for average computer users. So if you're a human rights activist working in a dangerous country or a war zone, or an organization building IT infrastructure on the fly, this is certainly not enough, and you'll need more precautions.

But these are common sense essential tips that everyone should know about.

Of course, some readers will leap at the chance to point out everything that may be missing from this guide, and we'd like to hear your feedback. Security is a constantly changing world, and what's good advice today might not be good advice tomorrow. Our goal is to keep this guide updated somewhat regularly, so please do reach out if you think we've gotten something wrong or are missing something.

And remember, always be vigilant!




New 'Coalition For Responsible Sharing' About To Send Millions Of Take-Down Notices To Stop Researchers Sharing Their Own Papers


A couple of weeks ago, we wrote about a proposal from the International Association of Scientific Technical and Medical Publishers (STM) to introduce upload filtering on the ResearchGate site in order to stop authors from sharing their own papers without "permission". In its letter to ResearchGate, STM's proposal concluded with a thinly-veiled threat to call in the lawyers if the site refused to implement the upload filters. In the absence of ResearchGate's acquiescence, a newly-formed "Coalition for Responsible Sharing", whose members include the American Chemical Society (ACS), Brill, Elsevier, Wiley and Wolters Kluwer, has issued a statement confirming the move:

Following unsuccessful attempts to jointly find ways for scholarly collaboration network ResearchGate to run its service in a copyright-compliant way, a coalition of information analytics businesses, publishers and societies is now left with no other choice but to take formal steps to remedy the illicit hosting of millions of subscription articles on the ResearchGate site.

Those formal steps include sending "millions of takedown notices for unauthorized content on its site now and in the future." Two Coalition publishers, ACS and Elsevier, have also filed a lawsuit in a German regional court, asking for “clarity and judgement” on the legality of ResearchGate's activities. Justifying these actions, the Coalition's statement says: "ResearchGate acquires volumes of articles each month in violation of agreements between journals and authors" -- and that, in a nutshell, is the problem.

The articles posted on ResearchGate are generally uploaded by the authors; they want them there so that their peers can read them. They also welcome the seamless access to other articles written by their fellow researchers. In other words, academic authors are perfectly happy with ResearchGate and how it uses the papers that they write, because it helps them work better as researchers. A recent post on The Scholarly Kitchen blog noted:

Researchers particularly appreciate ResearchGate because they can easily follow who cites their articles, and they can follow references to find other articles they may find of interest. Researchers do not stop to think about copyright concerns and in fact, the platform encourages them, frequently, to upload their published papers.

The problem lies in the unfair and one-sided contracts academic authors sign with publishers, which often do not allow them to share their own published papers freely. The issues with ResearchGate would disappear if researchers stopped agreeing to these completely unnecessary restrictions -- and if publishers stopped demanding them.

The Coalition for Responsible Sharing's statement makes another significant comment about ResearchGate: that it acquires all these articles "without making any contribution to the production or publication of the intellectual work it hosts." But much the same could be said about publishers, which take papers written by publicly-funded academics for free, chosen by academics for free, and reviewed by academics for free, and then add some editorial polish at the end. Despite their minimal contributions, publishers -- and publishers alone -- enjoy the profits that result. Those extremely high margins are strong evidence that ResearchGate and similar scholarly collaboration networks pose no real threat to publishers' businesses. The growing popularity and importance of unedited preprints confirms that what publishers add is dispensable. That makes the Coalition for Responsible Sharing's criticism of ResearchGate and its business model deeply hypocritical.

It is also foolish. By sending millions of take-down notices to ResearchGate -- and thus making it harder for researchers to share their own papers on a site they currently find useful -- the Coalition for Responsible Sharing will inevitably push people to use other alternatives, notably Sci-Hub. Unlike ResearchGate, which largely offers articles uploaded by their own authors, Sci-Hub generally sources its papers without the permission of the academics. So, once more, the clumsy actions of publishers desperate to assert control at all costs make it more likely that unauthorized copies will be downloaded and shared, not less. How responsible is that?

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+




Researcher Who Stopped WannaCry Ransomware Detained in US After Def Con


On Wednesday, US authorities detained a researcher who goes by the handle MalwareTech, best known for stopping the spread of the WannaCry ransomware.

In May, WannaCry infected hospitals in the UK, a Spanish telecommunications company, and other targets in Russia, Turkey, Germany, Vietnam, and more. Marcus Hutchins, a researcher from cybersecurity firm Kryptos Logic, inadvertently stopped WannaCry in its tracks by registering a specific website domain included in the malware's code.

At the time of writing it is not clear what charges, if any, Hutchins may face.

Motherboard verified that a detainee called Marcus Hutchins, 23, was being held at the Henderson Detention Center in Nevada early on Thursday. A few hours later, Hutchins was moved to another facility, according to a close personal friend.

The friend told Motherboard they "tried to visit him as soon as the detention centre opened but he had already been transferred out." Motherboard granted the source anonymity due to privacy concerns.

"I've spoken to the US Marshals again and they say they have no record of Marcus being in the system. At this point we've been trying to get in contact with Marcus for 18 hours and nobody knows where he's been taken," the person added. "We still don't know why Marcus has been arrested and now we have no idea where in the US he's been taken to and we're extremely concerned for his welfare."

A US Marshals spokesperson told Motherboard in an email, "my colleague in Las Vegas says this was an FBI arrest. Mr. Hutchins is not in U.S. Marshals custody."

The FBI acknowledged a request for comment but did not provide one in time for publication.

Shortly before his arrest, Hutchins was in Las Vegas during Black Hat and DEF CON, two annual hacking conferences.

"We are aware a UK national has been arrested but it's a matter for the authorities in the US," a spokesperson for the UK's National Crime Agency told Motherboard in an email.

This story is developing...




May And Macron's Ridiculous Adventure In Censoring The Internet

1 Share

For some observers, struggling UK Prime Minister Theresa May and triumphant French President Emmanuel Macron may seem to sit at somewhat opposite ends of the current political climate. But... apparently they agree on one really, really bad idea: that it's time to massively censor the internet and to blame tech companies if they don't censor enough. We've been explaining for many years why this is a bad idea, but apparently we need to do so again. First, the plan:

The prime minister and Emmanuel Macron will launch a joint campaign on Tuesday to tackle online radicalisation, a personal priority of the prime minister from her time as home secretary and a comfortable agenda for the pair to agree upon before Brexit negotiations begin next week.

In particular, the two say they intend to create a new legal liability for tech companies if they fail to remove inflammatory content, which could include penalties such as fines.

It's no surprise that May is pushing for this. She's been pushing to regulate the internet for quite some time, and it's a core part of her platform (which is a bit "weak and wobbly" as they say these days). But, Macron... well, he's been held up repeatedly as a "friend" to the tech industry, so this has to be seen as a bit of a surprise in the internet world. Of course, there were hints that he might not really be all that well versed in the way technology works when he appeared to support backdoors to encryption. This latest move just confirms an unfortunate ignorance about the technology/internet landscape.

Creating a new legal liability for companies that fail to remove inflammatory content is going to be a massive disaster in many, many ways. It will damage the internet economy in Europe. It will create massive harms to free speech. And, it won't do what they seem to think it will do: it won't stop terrorists from posting propaganda online.

First, a regime that fines companies for failing to remove "inflammatory content" will lead companies to censor broadly, out of fear that any borderline content they leave up may open them up to massive liability. This is exactly how the Great Firewall of China works. The Chinese government doesn't just say "censor bad stuff"; it tells ISPs that they'll get fined if they allow bad stuff through. And thus, the ISPs over-censor to avoid leaving online anything that might put them at risk. And, when it comes to free speech, doing things "the way the Chinese do things" tends not to be the best idea.

Second, related to that, once they open up this can of worms, they may not be happy with how it turns out. It's great to say that you don't think "inflammatory content" should be allowed online, but who gets to define "inflammatory" makes a pretty big difference. As we've noted, you always want to design regulations as if the people you trust the least are in power. This is not to say that May or Macron themselves would do this, but would you put it past some politicians in power to argue that online content from political opponents is too "inflammatory" and thus must be removed? What about if the press reveals corruption? That could be considered "inflammatory" as well.

Third, one person's "inflammatory content" is another's "useful evidence." We see this all the time in other censorship cases. I've written before about how YouTube was pressured to take down inflammatory "terrorist videos" in the past, and ended up taking down the account of a human rights group documenting atrocities in Syria. It's easy to say "take down terrorist content!" but it's not always easy to recognize what's terrorist propaganda versus what's people documenting the horrors that the terrorists are committing.

Fourth, time and time again, we've seen the intelligence community come out and argue against this kind of censorship, noting that terrorists posting inflammatory content online is a really useful way to figure out what they're up to. Demanding that platforms take down these useful sources of open source intelligence will actually harm the intelligence community's ability to monitor and stop plans of attack.

Fifth, this move will almost certainly be used by autocratic and dictatorial regimes to justify their own widespread crackdowns on free speech. And, sure, they might do that already, but surrendering the moral high ground can be deeply problematic in diplomatic situations. How can UK or French diplomats push for more freedom of expression in, say, China or Iran, if they're actively putting this in place back home? Sure, you can say the two are different, but officials from those countries will argue it's the exact same thing: you're censoring the internet to "protect" people from "dangerous content." Well, they'll argue, that's the same thing we do -- it's just that we have different threats to protect against.

Sixth, this will inevitably be bad for innovation and the economy in both countries. Time and time again, we've seen that leaving internet platforms free from liability for the actions of their users is what has helped those companies develop, provide useful services, employ lots of people and generally help create new economic opportunities. With this plan, sure, Google and Facebook can likely figure out some way to censor some content -- and can probably stand the risk of some liability. But pretty much every other smaller platform? Good luck. If I were running a platform company in either country, I'd be looking to move elsewhere, because the cost of complying and the risk of failing to take down content would simply be too much.

Seventh, and finally, it won't work. The "problem" is not that this content exists. The problem is that lots of people out there are susceptible to such content and are interested and/or swayed by it. That's a much more fundamental problem, and censoring such content doesn't do much good. Instead, it tends to only rally up those who were already susceptible to it. They see that the powers-that-be -- who they already don't trust -- find this content "too dangerous" and that draws them in even closer to it. And of course that content will find many other places to live online.

Censoring "bad" content always seems like an easy solution if you haven't actually thought through the issues. It's not a surprise that May hasn't -- but we had hopes that perhaps Macron wouldn't be swayed by the same weak arguments.




Getting Started with Headless Chrome

1 Share

TL;DR

Headless Chrome is shipping in Chrome 59. It's a way to run the Chrome browser in a headless environment. Essentially, running Chrome without chrome! It brings all modern web platform features provided by Chromium and the Blink rendering engine to the command line.

Why is that useful?

A headless browser is a great tool for automated testing and server environments where you don't need a visible UI shell. For example, you may want to run some tests against a real web page, create a PDF of it, or just inspect how the browser renders a URL.

Caution: Headless mode is available on Mac and Linux in Chrome 59. Windows support is coming soon! To check what version of Chrome you have, open chrome://version.

Starting Headless (CLI)

The easiest way to get started with headless mode is to open the Chrome binary from the command line. If you've got Chrome 59+ installed, start Chrome with the --headless flag:

chrome \
  --headless \                   # Runs Chrome in headless mode.
  --disable-gpu \                # Temporarily needed for now.
  --remote-debugging-port=9222 \
  https://www.chromestatus.com   # URL to open. Defaults to about:blank.

Note: Right now, you'll also want to include the --disable-gpu flag. That will eventually go away.

chrome should point to your installation of Chrome. The exact location will vary from platform to platform. Since I'm on Mac, I created convenient aliases for each version of Chrome that I have installed.

If you're on the stable channel of Chrome and cannot get the Beta, I recommend using chrome-canary:

alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
alias chrome-canary="/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary"
alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"

Download Chrome Canary here.

Command line features

In some cases, you may not need to programmatically script Headless Chrome. There are some useful command line flags to perform common tasks.

Printing the DOM

The --dump-dom flag prints document.body.innerHTML to stdout:

chrome --headless --disable-gpu --dump-dom https://www.chromestatus.com/

Create a PDF

The --print-to-pdf flag creates a PDF of the page:

chrome --headless --disable-gpu --print-to-pdf https://www.chromestatus.com/

Taking screenshots

To capture a screenshot of a page, use the --screenshot flag:

chrome --headless --disable-gpu --screenshot https://www.chromestatus.com/

# Size of a standard letterhead.
chrome --headless --disable-gpu --screenshot --window-size=1280,1696 https://www.chromestatus.com/

# Nexus 5x
chrome --headless --disable-gpu --screenshot --window-size=412,732 https://www.chromestatus.com/

Running with --screenshot will produce a file named screenshot.png in the current working directory. If you're looking for full-page screenshots, things are a tad more involved. There's a great blog post from David Schnurr that has you covered: check out Using headless Chrome as an automated screenshot tool.

Debugging Chrome without a browser UI?

When you run Chrome with --remote-debugging-port=9222, it starts an instance with the DevTools Protocol enabled. The protocol is used to communicate with Chrome and drive the headless browser instance. It's also what tools like Sublime, VS Code, and Node use for remote debugging an application. #synergy

Since you don't have browser UI to see the page, navigate to http://localhost:9222 in another browser to check that everything is working. You'll see a list of inspectable pages where you can click through and see what Headless is rendering:

DevTools Remote
DevTools remote debugging UI

From here, you can use the familiar DevTools features to inspect, debug, and tweak the page as you normally would. If you're using Headless programmatically, this page is also a powerful debugging tool for seeing all the raw DevTools protocol commands going across the wire, communicating with the browser.
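The list of inspectable pages you see at http://localhost:9222 is also exposed as JSON via the http://localhost:9222/json endpoint, where each target entry carries a webSocketDebuggerUrl you can connect to directly. A small helper (hypothetical, not from the article) for picking out the first page target might look like this:

```javascript
// Each entry returned by http://localhost:9222/json describes one target
// (page, service worker, etc.). Pick the DevTools WebSocket URL of the
// first target of type "page".
function firstPageWebSocketUrl(targets) {
  const page = targets.find(t => t.type === 'page');
  if (!page) {
    throw new Error('No page targets found');
  }
  return page.webSocketDebuggerUrl;
}

// Example shape of what the endpoint returns:
const sampleTargets = [
  {type: 'page', url: 'about:blank',
   webSocketDebuggerUrl: 'ws://localhost:9222/devtools/page/ABC123'}
];
console.log(firstPageWebSocketUrl(sampleTargets));
```

In practice, chrome-remote-interface (covered below) does this target discovery for you; the helper is just to show what's going across the wire.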

Using programmatically (Node)

Launching Chrome

In the previous section, we started Chrome manually using --headless --remote-debugging-port=9222. However, to fully automate tests, you'll probably want to spawn Chrome from your application.

One way is to use child_process:

const exec = require('child_process').exec;

function launchHeadlessChrome(url, callback) {
  // Assuming macOS.
  const CHROME = '/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome';
  exec(`${CHROME} --headless --disable-gpu --remote-debugging-port=9222 ${url}`, callback);
}

launchHeadlessChrome('https://www.chromestatus.com', (err, stdout, stderr) => {
  ...
});

But things get tricky if you want a portable solution that works across multiple platforms. Just look at that hard-coded path to Chrome :(
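One hand-rolled workaround is to pick a default binary based on process.platform. This is a sketch only: the paths below are common install locations, not guaranteed on every machine.

```javascript
// Sketch: map the current platform to a typical Chrome install location.
// These paths are common defaults, not guaranteed to exist everywhere.
function defaultChromePath(platform = process.platform) {
  switch (platform) {
    case 'darwin':
      return '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome';
    case 'linux':
      return '/usr/bin/google-chrome';
    case 'win32':
      return 'C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe';
    default:
      throw new Error('No default Chrome path for platform: ' + platform);
  }
}

console.log(defaultChromePath());
```

This is exactly the kind of discovery that Lighthouse's ChromeLauncher, covered next, handles for you more robustly.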

Using Lighthouse's ChromeLauncher

Lighthouse is a marvelous tool for testing the quality of your web apps. One thing people don't realize is that it ships with some really nice helper modules for working with Chrome. One of those modules is ChromeLauncher. ChromeLauncher will find where Chrome is installed, set up a debug instance, launch the browser, and kill it when your program is done. Best part is that it works cross-platform thanks to Node!

Note: The Lighthouse team is exploring a standalone package for ChromeLauncher with an improved API. Let us know if you have feedback.

By default, ChromeLauncher will try to launch Chrome Canary (if it's installed), but you can change that to manually select which Chrome to use. To use it, first install Lighthouse from npm:

yarn add lighthouse

Example - using ChromeLauncher to launch Headless

const {ChromeLauncher} = require('lighthouse/lighthouse-cli/chrome-launcher');

/**
 * Launches a debugging instance of Chrome on port 9222.
 * @param {boolean=} headless True (default) to launch Chrome in headless mode.
 *     Set to false to launch Chrome normally.
 * @return {Promise<ChromeLauncher>}
 */
function launchChrome(headless = true) {
  const launcher = new ChromeLauncher({
    port: 9222,
    autoSelectChrome: true, // False to manually select which Chrome install.
    additionalFlags: [
      '--window-size=412,732',
      '--disable-gpu',
      headless ? '--headless' : ''
    ]
  });

  return launcher.run().then(() => launcher)
    .catch(err => {
      return launcher.kill().then(() => { // Kill Chrome if there's an error.
        throw err;
      }, console.error);
    });
}

launchChrome(true).then(launcher => {
  ...
});

Running this script doesn't do much, but you should see an instance of Chrome fire up in the task manager that loaded about:blank. Remember, there won't be any browser UI. We're headless.

To control the browser, we need the DevTools protocol!

Retrieving information about the page

chrome-remote-interface is a great Node package that provides usable APIs for the DevTools Protocol. You can use it to orchestrate Headless Chrome, navigate to pages, and fetch information about those pages.

Warning: The DevTools protocol can do a ton of interesting stuff, but it can be a bit daunting at first. I recommend spending a bit of time browsing the DevTools Protocol Viewer, first. Then, move on to the chrome-remote-interface API docs to see how it wraps the raw protocol.

Let's install the library:

yarn add chrome-remote-interface

Examples

Example - print the user agent

const chrome = require('chrome-remote-interface');

launchChrome().then(launcher => {
  chrome.Version().then(version => console.log(version['User-Agent']));
});

Results in something like: HeadlessChrome/60.0.3082.0

Example - check if the site has a web app manifest

const chrome = require('chrome-remote-interface');

function onPageLoad(Page) {
  return Page.getAppManifest().then(response => {
    if (!response.url) {
      console.log('Site has no app manifest');
      return;
    }
    console.log('Manifest: ' + response.url);
    console.log(response.data);
  });
}

launchChrome().then(launcher => {

  chrome(protocol => {
    // Extract the parts of the DevTools protocol we need for the task.
    // See API docs: https://chromedevtools.github.io/devtools-protocol/
    const {Page} = protocol;

    // First, enable the Page domain we're going to use.
    Page.enable().then(() => {
      Page.navigate({url: 'https://www.chromestatus.com/'});

      // Wait for window.onload before doing stuff.
      Page.loadEventFired(() => {
        onPageLoad(Page).then(() => {
          protocol.close();
          launcher.kill(); // Kill Chrome.
        });
      });
    });

  }).on('error', err => {
    throw Error('Cannot connect to Chrome:' + err);
  });

});

Example - extract the <title> of the page using DOM APIs.

const chrome = require('chrome-remote-interface');

function onPageLoad(Runtime) {
  const js = "document.querySelector('title').textContent";

  // Evaluate the JS expression in the page.
  return Runtime.evaluate({expression: js}).then(result => {
    console.log('Title of page: ' + result.result.value);
  });
}

launchChrome().then(launcher => {

  chrome(protocol => {
    // Extract the parts of the DevTools protocol we need for the task.
    // See API docs: https://chromedevtools.github.io/devtools-protocol/
    const {Page, Runtime} = protocol;

    // First, need to enable the domains we're going to use.
    Promise.all([
      Page.enable(),
      Runtime.enable()
    ]).then(() => {
      Page.navigate({url: 'https://www.chromestatus.com/'});

      // Wait for window.onload before doing stuff.
      Page.loadEventFired(() => {
        onPageLoad(Runtime).then(() => {
          protocol.close();
          launcher.kill(); // Kill Chrome.
        });
      });

    });

  }).on('error', err => {
    throw Error('Cannot connect to Chrome:' + err);
  });

});
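Note that Runtime.evaluate doesn't hand back the bare value: it returns a wrapper object ({result: {type, value}}, plus exceptionDetails if the expression threw in the page). A small unwrap helper (hypothetical, not part of the protocol) keeps call sites tidy:

```javascript
// Unwrap a Runtime.evaluate response: return the plain value, or throw
// if the evaluated expression raised an exception in the page.
function unwrapEvaluate(response) {
  if (response.exceptionDetails) {
    throw new Error('Evaluation failed: ' + response.exceptionDetails.text);
  }
  return response.result.value;
}

// Shape of a successful Runtime.evaluate response:
const ok = {result: {type: 'string', value: 'Chrome Platform Status'}};
console.log(unwrapEvaluate(ok)); // → "Chrome Platform Status"
```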

Further resources

Here are some useful resources to get you started:

Docs

Tools

Demos

  • "The Headless Web" - Paul Kinlan's great blog post on using Headless with api.ai.

FAQ

Do I need the --disable-gpu flag?

Yes, for now. The --disable-gpu flag is a temporary requirement to work around a few bugs. You won't need this flag in future versions of Chrome. See https://crbug.com/546953#c152 and https://crbug.com/695212 for more information.

So I still need Xvfb?

No. Headless Chrome doesn't use a window so a display server like Xvfb is no longer needed. You can happily run your automated tests without it.

What is Xvfb? Xvfb is an in-memory display server for Unix-like systems that enables you to run graphical applications (like Chrome) without an attached physical display. Many people use Xvfb to run earlier versions of Chrome to do "headless" testing.

How do I create a Docker container that runs Headless Chrome?

Check out lighthouse-ci. It has an example Dockerfile that uses Ubuntu as a base image, and installs + runs Lighthouse in an App Engine Flexible container.

Can I use this with Selenium / WebDriver / ChromeDriver?

Right now, Selenium opens a full instance of Chrome. In other words, it's an automated solution but not completely headless. However, Selenium could use --headless in the future.

If you want to live on the bleeding edge, I recommend Running Selenium with Headless Chrome to set things up yourself.

Note: you may encounter bugs using ChromeDriver. At the time of writing, the latest release (2.29) only supports Chrome 58. Headless Chrome requires Chrome 59 or later.

How is this related to PhantomJS?

Headless Chrome is similar to tools like PhantomJS. Both can be used for automated testing in a headless environment. The main difference between the two is that Phantom uses an older version of WebKit as its rendering engine while Headless Chrome uses the latest version of Blink.

At the moment, Phantom also provides a higher level API than the DevTools Protocol.

Where do I report bugs?

For bugs against Headless Chrome, file them on crbug.com.

For bugs in the DevTools protocol, file them at github.com/ChromeDevTools/devtools-protocol.

