Client-side code

These days, a fair number of computer programs are downloaded from the internet and run on the client side. There are rich web pages and rich internet applications. There's JavaScript, Adobe Flash/Flex and Microsoft's contender, Silverlight. It's fair to say that these environments are coming of age in 2008, with several very fast JavaScript runtimes in beta, Silverlight 2.0 released, and Adobe Flex 3 and AIR responding to the challenge.

There are several parties to such an execution of code. Firstly, the web page and its contained or linked code are hosted on a web server. The code is not run here on the server; it is simply transferred to the client like any other file, such as a page or an image. The client is just you with a web browser on your own computer. The code runs here, but it is not native code - it runs within some software platform that allows it to do so. The platform is the browser (for JavaScript), or the browser plugin or desktop runtime (for Flex and Silverlight) that the client has previously downloaded from the vendor who created it.

Sometimes we don't think of them as programs, but they are. When you watch video from YouTube, you are running a little Flash program that connects back to the YouTube server and streams video from it. We'll get back to that connection later.

Security

Programs that run automatically inside a web page have constraints placed upon them that conventional desktop applications do not. Anyone can put Flash in their web page and have it run on computers all over the world, so the code must still be safe to run on your PC even if the web page was created by maladjusted teenage hackers and/or the Russian Mafia. The vendor must create the platform so that it is safe on the client even when the server is hostile.

In general, these platforms make the code run inside a "sandbox", which provides severely gated access to the underlying PC's resources. The client-side code can display graphics on the screen, but only within its own window's confines; otherwise it could create fake dialogs and trick the user into entering passwords or credit card details. It can accept user input, but cannot monitor all keystrokes. It can store settings and data in carefully isolated parts of the file system, but it cannot list, read or write the other files on your computer. And it can connect back to the server from whence it came for more data, but it cannot make connections to other servers. That would allow it to use the client computer as part of a distributed denial of service attack or a cross-site scripting exploit.

The party of the fourth part

But what if you do want to access the potential fourth party to this set-up, the "other servers"? There are lots of cases where this could be useful. For instance, the Twhirl Twitter client is downloaded from http://www.twhirl.org/ but works by connecting to Twitter and other websites.

The first solution was to do it in two hops: the client connects to the server that it came from, which connects onwards to the other server, gets a response and forwards it to the client. The problems with this are that it is slower and more complicated, and that the more clients are running, the more work the originating server has to do, so it will not scale up.
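
To make the two-hop approach concrete, here is a minimal sketch of such a relay in Python; the upstream address api.example.com is a placeholder, and a real relay would also forward headers, handle errors and restrict which targets may be reached. Note that every client request costs the originating server an extra outbound request, which is exactly why this scales badly.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://api.example.com"  # placeholder for the "other server"

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hop 2: the originating server fetches from the other server...
        with urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
        # ...and then forwards the response back to the client (hop 1).
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), RelayHandler).serve_forever()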

The second solution is to carefully relax the restrictions, and allow servers to opt in to allowing Flash and Silverlight clients to connect to them. The server has a client access policy that specifies whether clients can connect. Adobe pioneered this approach: before allowing a client to connect to www.mysite.com, the Flash player looks for a file called www.mysite.com/crossdomain.xml. Here's a sample that allows access from all comers:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy 
  SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="*"/>
  <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
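
Access need not be all-or-nothing: the same format can name the specific domains that may connect. For example, a policy admitting only clients served from example.com and its www subdomain (placeholder names, of course) would look like this:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy 
  SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="example.com"/>
  <allow-access-from domain="www.example.com"/>
</cross-domain-policy>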

Microsoft's Silverlight shamelessly adopts the same policy - and the same file. Silverlight will first look for www.mysite.com/clientaccesspolicy.xml, which is Microsoft's way of doing the same thing, but failing that it will look for www.mysite.com/crossdomain.xml.

Here's a sample clientaccesspolicy.xml file:

<?xml version="1.0"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

It's quite similar, only with different syntax.

Questions

Is a client access policy a good idea?

I have not yet made up my mind whether the concept of client access policies is, on the whole, a good thing. It does not guard against all problems, but it probably does plug one particular hole, at the cost of a bit of inconvenience. It's quite restrictive, because you have to opt in.

However, unless you have influence over Adobe and Microsoft and a better idea in mind, we're stuck with it. So it is very much worthwhile for your site to be aware of the concept of a client access policy, and either have one, or deliberately not have one.

So I have to put code on my server to let your client work?

It's not code, it's a configuration file. It configures who is allowed to go where. You probably already have a file called robots.txt that fills a similar role. The difference is that robots.txt allows you to opt out of web crawlers, whereas a client access policy is more restrictive - you have to opt in.
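
For comparison, a robots.txt that opts part of a site out of crawling looks like this (the path is a placeholder):

User-agent: *
Disallow: /private/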

One file satisfies all clients. You can have two files if you want to treat Silverlight differently from Adobe clients. Any similar future platforms will probably also respect crossdomain.xml, simply because it's already in place.

Other than that: yes, yes you do. These clients can't work without it, by design.

Sites have this?

Yes, they do. Look at http://www.twitter.com/crossdomain.xml, http://maps.google.com/crossdomain.xml or http://api.flickr.com/crossdomain.xml.
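
Checking for yourself is just an ordinary HTTP request for the policy file. Here's a quick sketch in Python, using one of the URLs above:

# Fetch and print a site's Flash cross-domain policy, if it has one.
from urllib.request import urlopen
from urllib.error import HTTPError

try:
    with urlopen("http://api.flickr.com/crossdomain.xml") as response:
        print(response.read().decode("utf-8"))
except HTTPError as error:
    print("No policy file here:", error.code)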

How is opt-in enforced? How do you get client code to respect this?

Opt-in is enforced by the platform upon which the client-side code runs. It's possible that client-side code will try to subvert or work around that runtime, but now we're in the realm of patchable implementation bugs, not fundamental design flaws.

When client code tries to connect to a site that doesn't have a client access policy, it just gets a security error in response. Yes, I've tried this in Silverlight.

Why is the onus on the other server to supply this file?

If the client access policy were served with the Flash or Silverlight content, then malicious content could be accompanied by a malicious access policy. This way, you get to control the gates to your own site. But if I manage to hack into a site and inject Flash code that then "phones home" to my server, I can then set up my server's client access policy to take the call. It's not perfect, but it does plug some holes.

Does Everything2 have a client access policy?

No. But Everything2 is the kind of site that should - there are web and desktop clients that connect to E2's HTML pages and XML tickers, and I don't think that Flash and Silverlight clients should be excluded from that party. The issue has been raised. Watch this space.


References:
Microsoft Developer Network, "Making a Service Available Across Domain Boundaries", http://msdn.microsoft.com/en-us/library/cc197955(VS.95).aspx
Lucas Adamski, "Cross-domain policy file usage recommendations for Flash Player", http://www.adobe.com/devnet/flashplayer/articles/cross_domain_policy.html
