CSEP 564, Lecture 10

Designing Systems

Usability and Security

Browser HTTPS Indicators
  • Goal: identify secure connection
  • Previous solution: display lock icon
    • certificate states are too complex for general users
  • Problem: users don't notice the absence of an icon
  • Current solution: use https by default and call out http with a warning icon.

Chrome and Firefox have come to different conclusions about what these warnings should look like.

Useful sites:

  • https://badssl.com (hosts intentionally misconfigured TLS endpoints; see the sketch after this list)
  • http://neverssl.com
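
The certificate error states mentioned above are easy to poke at by hand. Below is a minimal sketch (Python standard library only; the badssl.com subdomains used are the ones the site itself advertises) showing how a few intentionally broken TLS configurations surface as verification failures, i.e. the complexity a browser indicator has to summarize.

    import socket
    import ssl

    # badssl.com hosts subdomains that exercise different certificate failure modes.
    HOSTS = ["expired.badssl.com", "self-signed.badssl.com", "wrong.host.badssl.com"]

    def check(host: str, port: int = 443) -> str:
        """Attempt a TLS handshake with default verification and report the outcome."""
        ctx = ssl.create_default_context()  # verifies the chain and the hostname
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return f"OK ({tls.version()})"
        except ssl.SSLCertVerificationError as e:
            return f"certificate error: {e.reason}"
        except (ssl.SSLError, OSError) as e:
            return f"connection failed: {e}"

    if __name__ == "__main__":
        for host in HOSTS:
            print(f"{host:28s} -> {check(host)}")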

Phishing
  • Problem: Users need to look at the URL bar to verify they're on the correct site.
  • Old solutions:
    • Passive warnings: show suspicious indicator in URL bar
    • Active warnings: force the user to proceed past a warning page
    • Site authentication images (old approach; users ignore their absence, so not useful)
  • Modern solutions:
    • Google Safe Browsing (see the sketch after this list):
      1. Browser sends 32-bit prefix of hash(url)
      2. API says: good or bad
      • Safari and Firefox both use this service
    • Phishing warnings are full-page, bright red, and far more alarming than the HTTP warnings
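
A minimal sketch of the prefix-lookup idea, in Python. This simplifies the real Safe Browsing protocol considerably (the real client canonicalizes URLs, keeps a local prefix database, and asks the server for full hashes to compare against); the blocklist and URLs below are made up for illustration.

    import hashlib

    def url_hash_prefix(url: str, prefix_bytes: int = 4) -> bytes:
        """Return the 32-bit (4-byte) prefix of SHA-256(url).

        The real protocol hashes several canonicalized host/path
        combinations of the URL; that step is skipped here.
        """
        return hashlib.sha256(url.encode("utf-8")).digest()[:prefix_bytes]

    # Stand-in for the service's blocklist: prefixes of known-bad URLs.
    BAD_PREFIXES = {url_hash_prefix("http://evil.example/login")}

    def looks_bad(url: str) -> bool:
        """Client-side check: is this URL's prefix on the blocklist?

        A prefix match only means *possibly* bad (32 bits collide easily);
        the real client then fetches the full hashes to confirm.
        """
        return url_hash_prefix(url) in BAD_PREFIXES

    if __name__ == "__main__":
        print(looks_bad("http://evil.example/login"))  # True: on the toy blocklist
        print(looks_bad("https://example.com/"))       # almost certainly False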

Password Managers
  • Early solutions (the core derivation is sketched after this list):
    • PwdHash: per-site password = hash(password, domain)
      • triggered by prefixing the password with @@
    • Password Multiplier: per-site password = hash(master password, username, domain)
      • triggered by alt-clicking on the password field
    • Massive usability problems with each of these
    • Conclusion: usability problems lead to security problems
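
Roughly, both tools derive a site-specific password from secrets the user already has. The sketch below uses HMAC-SHA256 and base64 rather than either tool's original primitives and encodings, so its outputs won't match the real extensions; it only shows the shape of the derivation and why binding to the domain blunts phishing.

    import base64
    import hashlib
    import hmac

    def pwdhash_style(password: str, domain: str, length: int = 12) -> str:
        """PwdHash idea: the site sees hash(password, domain), never the raw password."""
        mac = hmac.new(domain.encode(), password.encode(), hashlib.sha256)
        return base64.b64encode(mac.digest()).decode()[:length]

    def multiplier_style(master_password: str, username: str, domain: str,
                         length: int = 12) -> str:
        """Password Multiplier idea: also mix in the username.

        The real tool uses heavy iterated hashing to slow offline guessing;
        a single HMAC stands in for that here.
        """
        mac = hmac.new(master_password.encode(),
                       f"{username}@{domain}".encode(), hashlib.sha256)
        return base64.b64encode(mac.digest()).decode()[:length]

    if __name__ == "__main__":
        # Same secret, different domains -> unrelated-looking passwords,
        # so a phishing site at the wrong domain receives a useless value.
        print(pwdhash_style("correct horse battery", "bank.example"))
        print(pwdhash_style("correct horse battery", "phish.example"))
        print(multiplier_style("correct horse battery", "alice", "bank.example"))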

Root causes
  1. Computers are complex and users lack intuition
  2. Users are in charge of managing their own devices (unlike, e.g., a car)
  3. We're bad at gauging risk
  4. Social issues: it's hard to get friends to use encrypted messaging.

Improvements
  1. Help users build accurate mental models
  2. Make security invisible
  3. Make security the least-resistance path

Security and Privacy for Emerging Technologies

  • Hacking Cars Paper
    • Spurred actual vendors to take action
    • Created new standards and US bills on automotive cybersecurity
    • New subfields in automotive/airplane security, etc.
  • AR output security
    • Interesting domain
    • E.g., while the user is driving, a malicious script displays a spider in their view to startle them
  • Misinformation / Disinformation ("dis-" meaning intentional)
    • Still need more research
    • Initial guesses turned out to be incorrect
      • Filter bubbles causing polarization
        • more likely, polarization came from extreme content on one side being surfaced to the other side
      • YouTube radicalizing people
        • radicalization was likely happening offline, with increasingly radical video consumption correlating with it rather than causing it

Exceptional Access

Brief aside

  • DES S-boxes
    • NSA offered different S-boxes. No one knew why.
      • Turns out they knew about differential cryptanalysis (not yet public) and hardened the S-boxes against it. Nice.
  • Dual_EC_DRBG
    • NSA (most likely) backdoored the default curve points and burned their credibility. Cool.
  • Some technologies have dual-use (civilian and military applications)
    • governments don't like dual-use technologies, since they might be used in an uprising
    • cryptography is a dual-use technology (civilians needed it once electronic communications became widely available)
    • the Crypto Wars (and the export classification of >40-bit encryption as munitions) didn't end until 2000!

Q1

  • How could you make a legitimate backdoor, assuming you do trust the legal systems to issue warrants appropriately?
    • Savage et al. from UCSD have a paper designing a system for this.

Q2

  • Should you? What are the pitfalls?
    • Trusting one legal system isn't enough; you need to trust all of them. Apple, as a phone manufacturer, would have to respect warrants issued by the US as well as by China, Russia, etc., so you would essentially be handing a database of backdoor keys to authoritarian governments.

Q3

  • Does it matter? We don't build secure systems anyway.
    • It would appear that, for now, it does not. See Exhibit A: San Bernardino. After Apple refused to comply with the public request for a backdoor, the FBI paid for an exploit written by an Australian company and had the phone unlocked within days.