Categories
Programming Security web-design

Open Redirects

In this post I’ll discuss an issue I tackled a short while ago – open redirects. But first, the story of how I got to it. Feel free to skip ahead to the technical discussion.

Background

Our analytics for plnnr.com – our trip-planning website – weren’t working as well as we wanted. We’re using Google Analytics, and it’s hard to generate the specific report we want; when we did get it, it seemed to show inaccurate numbers. To partially alleviate the issue, I was required to add tracking pixels for Facebook & AdWords, so we can better track conversions.
For us, an “internal” conversion is when a user clicks on a link to a booking url (for a hotel, or any other “bookable” attraction).
After reviewing the options, I decided that the best course of action would be to create an intermediate page on which the tracking pixels would appear. Such a page would receive as a parameter the url to redirect to, and would contain the appropriate tracking pixels.

Description of problem

Let’s say we build the url for the page like so:

/redirect/?url=X

This page will load the appropriate tracking pixels, and then redirect to the given url (the X).
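
To make this concrete, here is a minimal, naive sketch of such a page. Flask and the pixel url are stand-ins for whatever stack and trackers are actually used; note that it forwards to whatever url it is given, with none of the protections discussed below.

    # A naive sketch of the intermediate page (Flask and the pixel url are
    # placeholders for whatever stack and trackers are actually used).
    # It serves the tracking pixel, then forwards to whatever url it was
    # given - with no validation whatsoever.
    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    PAGE = """<html>
      <head><meta http-equiv="refresh" content="1; url={{ url }}"></head>
      <body>
        <img src="https://tracking.example.com/pixel.gif" width="1" height="1">
      </body>
    </html>"""

    @app.route("/redirect/")
    def tracked_redirect():
        url = request.args.get("url", "/")
        return render_template_string(PAGE, url=url)
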
The problems are:
1. We are potentially exposing ourselves to cross-site scripting (XSS) if we don’t filter the redirect url correctly. A malicious website could create links to our page that will run scripts in our context.

2. A malicious webmaster could steal search engine authority. Let’s say he has two domains: a.com and b.com, of which he cares about b.com. He creates links on a.com to:

ourdomain.com/redirect/?url=b.com

A search engine crawls a.com, sees the links, follows our redirect, and ends up passing ourdomain.com’s authority on to b.com. Not nice.

3. A malicious website could create links to ourdomain.com that redirect to some malware site, thereby harming ourdomain.com’s reputation, or making phishing links that appear to come from ourdomain.com more convincing.

Possible solutions

Before we handle the open-redirect issue, it’s important to block cross-site scripting attacks. Since such an attack would work by injecting code into the url string, this is doable by correctly filtering the urls, and by using existing solutions for XSS.
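
As a minimal sketch of that filtering (not a complete XSS defense on its own), one can insist that the target parses as a plain http(s) url before it is ever echoed into the page:

    # Sketch: accept only well-formed http/https targets, so values like
    # "javascript:..." or raw markup never reach the generated page.
    # Output escaping is still needed on top of this check.
    from urllib.parse import urlparse

    def is_acceptable_url(url):
        parsed = urlparse(url)
        return parsed.scheme in ("http", "https") and bool(parsed.netloc)
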

As for the open redirect:

1. Non-solution: cookies. We can pass the url we want in a cookie. Since cookies may only be set by our domain, other websites would not be able to set the redirect url. This doesn’t work well if you want more than one redirect link, or with multiple pages open, etc.

2. Checking the referrer (“referer”) and allowing redirects only when it comes from our domain. This will break for all users whose browser hides referrer information, for example those using ZoneAlarm. Google suggests a permissive variant: if referrer information is available, block the redirect when it is external; that way we stay permissive towards clients that hide it.

3. Whitelisting redirect urls. This solution actually comes in two flavors – one is keeping a list of all possible urls, and checking urls against it; the other is keeping a list of allowed url parts, for example domains. While keeping track of all allowed urls may be impractical, keeping track of allowed domains is quite doable. The downside is that you have to update that list each time you want to allow another domain.

4. Signing urls. Let the server keep a secret, and generate a (sha1) hash of “url + secret” for each url. The redirect page requires the hash, and if it doesn’t match the expected value, it refuses to redirect to that url (see the sketch after this list). This solution is quite elegant, but it means that the client code (the javascript) can’t generate redirect urls. In my case this incurs a design change, a bandwidth cost, and a general complication of the design.

5. Robots.txt. Use the robots.txt file to prevent search engines from indexing the redirect page, thereby mitigating at least risk number 2.

6. Generating a token for the entire session, much like CSRF protection. The session token is added to all links, and is later checked by the redirect page (on the server side). This is especially easy to implement if you already have an existing anti-csrf mechanism in place.

7. A combination of the above.
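
To make option 4 concrete, here is a sketch of the signing scheme. The secret and the parameter name are placeholders; an HMAC would be a slightly more robust choice than a bare sha1 of “url + secret”.

    # Sketch of option 4: sign "url + secret" with sha1 on the server, and
    # refuse to redirect unless the client presents a matching signature.
    # SECRET is a placeholder; in practice use HMAC and a securely stored key.
    import hashlib
    import hmac

    SECRET = "replace-with-a-real-secret"

    def sign_url(url):
        return hashlib.sha1((url + SECRET).encode("utf-8")).hexdigest()

    def is_valid_redirect(url, signature):
        # compare_digest avoids leaking information through timing
        return hmac.compare_digest(sign_url(url), signature)

    # The link would then look like: /redirect/?url=X&sig=<sign_url(X)>
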

Discussion and my thoughts

It seems to me that blocking real users is unacceptable. Therefore, filtering solely on referrer information is unacceptable if it means blocking users who send no referrer information.
At first I started to implement the url-signing mechanism, but then I saw the cost associated with it and reassessed the risks. Given that cross-site scripting is solved, the biggest risk is stolen search-engine authority. Right now I don’t consider the last risk (harming our reputation) important enough, but this will become more acute in the future.

Handling this in a robots.txt file is very easy, and that was the solution I chose. I will probably add more defense mechanisms in the future. When I do, it seems that permissive referrer filtering and the existing anti-csrf code will be the easiest to implement. A whitelist of domains might also be acceptable in the future.
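
The robots.txt entry itself is tiny; assuming the page lives under /redirect/ as above, it amounts to something like:

    # robots.txt at the site root - keep well-behaved crawlers away from
    # the redirect page (path assumes the /redirect/ url used above)
    User-agent: *
    Disallow: /redirect/
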

If you think I missed a possible risk, or a possible solution, or you have differing opinions regarding my assessments, I’ll be happy to hear about it.

My thanks go to Rafel, who discussed this issue with me.

Further reading

* http://www.owasp.org/index.php/Open_redirect
* http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=171297
* Open Redirects and Phishing Vectors

Categories
Security

Privacy mode not so private

I like my privacy. I also prefer to keep my information secure. I might be a bit more paranoid than the rest, but not extremely so. A short while ago, I discovered something disturbing about Firefox. It seems to be a ‘secret everybody knows’, yet Firefox doesn’t say anything about it.

What is it? When you use Firefox’s ‘Clear Private Data’ feature (under Options -> Privacy), a lot of information is still kept, even when you check all the checkboxes. This is true even for the new ‘private browsing’ mode, which supposedly lets you browse without keeping any record.

How is the information kept? Using Local Shared Objects (LSOs), which are basically cookies used by Flash objects. Who uses these cookies? Almost everyone. The result? If you have trusted Firefox so far to keep your browsing history private, take a look at Flash’s #SharedObjects directory (on Linux it lives under ~/.macromedia/Flash_Player/, on Windows under %APPDATA%\Macromedia\Flash Player\), and tell me what you see.

How to mitigate? The simplest option is just to delete the files you see in those locations. Better yet, install BetterPrivacy. Of course, you can also install some kind of Flash blocker, or any other tool that makes sure you don’t keep those LSOs.
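
If you prefer to script the manual cleanup, a few lines of Python will do it. The path below is the usual Linux location mentioned above; adjust it for your OS, and note that the per-site folders are left behind (see the next paragraph).

    # Sketch: remove Flash Local Shared Objects by hand.
    # The path is the usual Linux location; Windows and OS X keep them
    # elsewhere. The per-site folders remain, and they still record which
    # sites you visited. Use at your own risk.
    import os

    LSO_DIR = os.path.expanduser("~/.macromedia/Flash_Player/#SharedObjects")

    for dirpath, dirnames, filenames in os.walk(LSO_DIR):
        for name in filenames:
            if name.endswith(".sol"):
                path = os.path.join(dirpath, name)
                print("deleting", path)
                os.remove(path)
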

If you do end up using BetterPrivacy, be sure to check the “On cookie deletion also delete empty cookie folders” checkbox. If you don’t, the cookies themselves are no longer kept, but the empty folders still record which sites you visited.

Categories
Security

Threat analysis, security by obscurity and WordPress

Rusty Lock – image by Mykl Roventine

I’ve been running WordPress for a long time now, and luckily, so far it hasn’t been hacked.
Of course, this doesn’t prove anything, as I didn’t count hacking attempts. Nor does it show it’s unhackable – on the contrary, I believe my WordPress installation is hackable by a determined attacker.

However, there’s a subtle issue at play regarding the ‘determined attacker’. There are several kinds of attackers today, and the two most notable are the ‘targeted attacker’ and the ‘mass attacker’. The targeted attacker aims to attack your resources specifically, probably because of his interest in them. The mass attacker, on the other hand, is interested in any resource like yours.

From this premise it follows that the two attackers will likely use different methods of operation. The mass attacker is looking to increase his ROI. He will use mass tools with the widest coverage, and if an attack doesn’t work on a specific target, never mind, it will work on others. For him, work is worthwhile only if it lets him attack a substantial number of new targets.
In contrast, the targeted attacker’s goal is to break into your resources. For her, the fact that a given attack would yield hundreds of other targets is irrelevant, unless it helps attack you. She might start with off-the-shelf mass tools, but when these don’t work, she will study her target until she finds a vulnerability, and then use it.

Now the question you should ask yourself – who are you defending against? When defending against a mass attacker, making yourself unnoticed and uncommon might be worthwhile. A little security by obscurity will most likely be enough to thwart most of the attacks.
Against targeted attacks you need a more solid defense, utilizing all the tricks in your bag, while still being aware that it probably won’t be enough. You should also seek to minimize the damage in case of a successful attack.

Today, most WordPress blogs are under mass attack. WordPress blogs are searched for, exploited and then 0wned automatically, with the goal of getting the widest coverage.
For some time now I’ve been using a small trick that helps defend against mass attacks. The trick is simple – I added a small .htaccess file password-protecting the admin directory of my WordPress installation. Of course, in all probability the password can be brute-forced, or bypassed entirely, by a very determined attacker, but against a mass attacker it is very effective.
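
For reference, the protection boils down to an .htaccess file along these lines. The realm name and the path are placeholders; the .htpasswd file is created separately with the htpasswd utility.

    # .htaccess placed in the admin directory (e.g. wp-admin/)
    # AuthUserFile path and realm name are placeholders
    AuthType Basic
    AuthName "Restricted area"
    AuthUserFile /path/to/.htpasswd
    Require valid-user
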

I’ve seen suggestions to rename your files and directories – this will probably also work. Still, it should be noted that these kinds of methods only add obfuscation, and therefore only protect against mass attacks. Personally, I don’t consider the renaming method worthwhile – it complicates your installation and upgrade process, it requires much more work to get right, and at best it adds security similar to the .htaccess file, most likely less.

To conclude – do your threat analysis, and use the defense methods with the best ROI relative to that analysis. And as one such method, do consider using an .htaccess file to prevent access to your admin directory.

Categories
Programming Philosophy Security web-design

Breaking Rapidshare's Annoying Captcha the Easy Way

Like many others, I got stuck in front of Rapidshare’s captcha. After more than five attempts at reading different letters with kittens and other critters hidden behind them, I was thinking of giving up, especially because each time I failed I had to wait another half a minute. However, in one instance I went *back* via my browser and tried solving the same captcha again. It turns out this works, and I got the file.

I know I could probably have solved it in a smarter fashion, but it wasn’t worth the effort.

My lesson:

When someone writes crappy software, their software is probably crappy in more than one way.

This is not the first time I’ve seen this happen.

Categories
Security

Short Story: First Hit, Last Hit

I decided to try something a little bit different, and publish a short story I wrote. I’ll be glad to read any comments you might have on the subject, or the story itself. I might upload some more stories to the blog, but I’ll try to keep them technology-related.

Here’s the link to the story.

I actually wrote this one about a year ago, when I was discussing with Gadi his lecture on the bionic man. I thought the subject had a lot of room to play with, and a lot of security implications to consider. Some of our ideas became a reality when researchers managed to hack into a pacemaker.

Well, I hope this short story never comes true.

Categories
Security web-design

Troubles with Wild Themes

Some time ago, I wrote that I was planning on using a new theme for this blog. To do this, I first looked for possible candidates on themes.wordpress.net, and then started to adapt the one I liked. However, while working on the theme, I noticed hidden links in its code.

These links were hidden by using “font-size: 1px”. Hidden links like these are there to increase the search-engine placement of the theme’s creator and his affiliates. In this case the creators were ‘wildconcepts.net’. You can check their stats on Technorati, and see that they have about 250 blogs linking to them, mostly via regular credits.

Upon further examination I found two more themes by these guys, with the same hidden SEO links.
Afterwards, I checked some 20 blogs that use their themes; about half still had the SEO (Search Engine Optimization) links in them. Other linked urls were kianah.com (a blog) and ads-ph.com (an ad exchange service).
I reported this to themes.wordpress.net, and it seems I can no longer find the themes via their search engine. However, the themes are still available for download.
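
For what it’s worth, a crude scan like the following (plain pattern matching, nothing clever) would have flagged these links. Expect false positives from legitimate CSS, and it is no substitute for actually reading the code.

    # Crude sketch: flag theme files containing common link-hiding tricks,
    # like the 1px links above. Legitimate CSS will trigger false positives;
    # this only points at lines worth a closer look.
    import os
    import re

    SUSPICIOUS = re.compile(
        r"font-size:\s*1px|display:\s*none|visibility:\s*hidden",
        re.IGNORECASE)

    def scan_theme(theme_dir):
        for dirpath, _, filenames in os.walk(theme_dir):
            for name in filenames:
                if not name.endswith((".php", ".css", ".html")):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if SUSPICIOUS.search(line):
                            print("%s:%d: %s" % (path, lineno, line.strip()))

    # scan_theme("wp-content/themes/some-theme")
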

The SEO links, while an annoyance and an underhanded thing to do, are not the main issue here. The big problem I had with this theme was that if someone has no compunction about planting SEO links, they might plant a backdoor as well. I’m not saying these guys did it, but I know that if I were a bad guy looking to make money – I’d do it.
This is a very easy way to infect servers. Just prepare a few good-looking themes for WordPress, phpBB or any other standard web application, sit back, and watch the botnets grow. You don’t only get to infect the server; you can also try to infect any client that connects to an infected server. Instead of researching a new vulnerability, just use social engineering, like you do with end users surfing the web.

You can publish your themes, and evidently they won’t go through too much scrutiny. With an appropriate Google Alert in place, you can even be notified whenever someone new installs your theme.

This is a trust issue – and it seems that you shouldn’t trust WordPress’ theme DB. While this may seem obvious to you, it wasn’t to me at first, and I bet it isn’t obvious to many others starting their small blogs and looking for a good theme.

Categories
computer science Design Programming Programming Philosophy Security

Browser visibility-security and invisibility-insecurity

Formal languages have a knack for giving some output, and then later doing something completely different. For example, take the “Halting Problem” – but that is probably too theoretical to be of any relevance… so read on for something a bit more practical. We are going to go down the rabbit hole, to the ‘in-between’ space…

My interest was first piqued when I encountered the following annoyance – some websites would use transparent layers to prevent you from:

  1. Marking and copying text.
  2. Right-clicking on anything, including:
    1. images, to save them,
    2. just the website, to view its source –
  3. and so on and so forth…

Now I bet most intelligent readers know how to get past these minor hurdles – but just having to take those extra steps is usually deterrent enough to stop the next lazy guy from doing anything. So I was thinking, why not write a browser – or just a Firefox plugin – that will let us view just the top, visible layer of any website?

This should be easy enough to do, but if it bothered enough sites (which it probably won’t), and they fought back, there would be a pretty standard escalation war. However, since the issue is not that major, I suspect it wouldn’t matter much.

Now comes the more interesting part. Unlike preventing someone from copying text, HTML (plus any ‘sub-languages’ it may use) can be used to display one thing while reading as a different thing altogether. The most common example is spam – displaying image spam instead of text. When that was countered by spam filters, animated gif files were used. There you have it – your escalation war, par excellence. This property of HTML was also used by honeypots to filter comment spam, as described on SecuriTeam. In that SecuriTeam blog post by Aviram, the beginning of another escalation war is described. There are many more examples of this property of HTML.

All of these examples stem from HTML’s basic ability to specify what to display while seeming to display something completely different. There are actually two parsers at work here – one is the ‘filter’, whose goal is to filter out some ‘bad’ HTML; the other is a bit more complicated – it is the person reading the browser’s output (it may be considered the ‘browser + person’ parser). These two parsers operate on completely different levels of HTML. Now, I would like to point out that having two parsers reading the same language is a common insecurity pattern. HTML has a huge space between what is expressible and what is visible. In that space – danger lies.

As another, simpler example, consider phishing sites. These are common enough nowadays. How does your browser decide whether the site you are looking at is actually a phishing site? Among other things – by reading the code behind the site. However, this code can point to something completely different than what is being displayed. In this ‘invisible’ space, any misleading code can live. That way, the spammer may look like a legitimate site to the filter, but like your run-of-the-mill phishing site to the human viewer. This misleading code in the ‘invisible space’ may be used for good – like a honeypot against some comment-spammer – or it may be used for other purposes, by the spammer himself.

Now comes the interesting part: the “what to do” part. For now let me just describe it theoretically, and worry about its practicality later. I suggest using a ‘visibility browser’. This browser will use some popular browser (Internet Explorer, Firefox, Safari, Opera, etc.) as its lower level. This lower-level browser will render the website to some buffer, instead of to the screen. Now, our ‘visibility browser’ will OCR all of the visible rendered data, and restructure it as valid HTML. This ‘purified’ HTML may now be used to filter out any ‘bad’ sites – by whichever criterion you would like to use for ‘bad’.
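
As a very rough sketch of the pipeline, assuming Selenium drives the lower-level browser and Tesseract does the OCR (both are stand-ins, and only the visible text survives – none of the layout, images or links):

    # Rough sketch of the 'visibility browser' pipeline: render with a real
    # browser, OCR the pixels, and rebuild minimal HTML from what was visible.
    # Selenium and pytesseract are stand-ins; any renderer/OCR pair would do.
    # Only the visible text survives - layout, images and links are lost.
    import html

    from selenium import webdriver
    import pytesseract
    from PIL import Image

    def purified_html(url):
        driver = webdriver.Firefox()
        try:
            driver.get(url)
            driver.save_screenshot("rendered.png")   # the 'buffer'
        finally:
            driver.quit()

        visible_text = pytesseract.image_to_string(Image.open("rendered.png"))
        # Re-emit the visible content as plain, escaped HTML for the filter.
        return "<html><body><pre>%s</pre></body></html>" % html.escape(visible_text)
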

I know, I know, this is not practical, it is computationally intensive, and so on. However, it does present a method to close down that nagging ‘space’, this place between readability and visibility, where bad code lies. I also know that the ‘visibility browser’ itself may be targeted, and probably quite easily. Those attacks will have to rely on implementation faults of the software, or some other flaw, as yet un-thought-of. We all know there will always be bugs. But it seems to me that the ‘visibility browser’ does close, or at least cover for a time, one nagging design flaw.

Categories
Math Origami Protocols Security

"Where is Waldo?", or "Security by Origami"

The Problem

A friend of mine gave me a riddle this morning regarding “Where’s Waldo?”. The riddle is as follows:

You and a friend play “Where’s Waldo?”. You solve the puzzle before your friend, and you want to prove to your friend you solved the puzzle, without giving him any hints. How do you do this?

This is of course very reminiscent of zero-knowledge proofs. A good zero-knowledge proof will allow your friend to be convinced before he has found Waldo himself, and even if he never finds him at all. A somewhat less interesting solution is one that allows your friend to be convinced only if and when he finds Waldo himself and can then verify your proof.

There are obviously many “intuitive” solutions, and I will not write them here… I will describe the second solution I thought of, which I consider to be the more interesting one. However, this solution doesn’t solve the original problem – it only allows your friend to verify your solution once he has solved the puzzle himself.

The Solution

Take a sheet of paper with the same dimensions as the picture, and mark a spot on it in the position where Waldo would have been on that sheet. Fold that sheet of paper into some kind of origami animal, and give it to your friend. Once he solves the puzzle, he can open the folding and see for himself that the point was marked correctly.

This is obviously not a very good solution. It is just a glorified note – you write down your solution on a note, and ask your friend not to peek until he solves it himself. So I came up with an improvement:

Agree beforehand with your friend on some (large) origami folding (such as a beetle). He shouldn’t know the instructions for folding it. Take a sheet of paper, and mark Waldo’s position on it (with a dot). Hold the paper above the folded model, and mark on the model (with another dot) the projection of the original dot onto the folding. Now unfold the origami – you have a crease pattern. Give the crease pattern to your friend. When he solves the puzzle, refold the origami, and show him that the projection of the dot on the folded model coincides with the dot marking Waldo’s position. As an added bonus – your friend just learned how to fold another origami beast!

Of course, this solution isn’t watertight. It also relies on crease patterns being hard to solve. It is mostly security by obscurity – but this time, ‘security by obscurity’ becomes ‘security by origami’. I just found it fascinating that origami may be treated as a function that is ‘hard to reverse engineer’ (even if it is not really so) – much like a hash function. Origami does behave a little bit like a hash…

Tell me your original solutions to the problem.